diff --git a/.gitattributes b/.gitattributes index 7230f36f4..fba3b1303 100644 --- a/.gitattributes +++ b/.gitattributes @@ -1,2 +1,4 @@ - -common/test-fixtures/root/* eol=lf \ No newline at end of file +* text=auto +*.go text eol=lf +*.sh text eol=lf +common/test-fixtures/root/* eol=lf diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md new file mode 100644 index 000000000..e0c51bc78 --- /dev/null +++ b/.github/CONTRIBUTING.md @@ -0,0 +1,217 @@ +# Contributing to Packer + +**First:** if you're unsure or afraid of _anything_, just ask or submit the +issue or pull request anyways. You won't be yelled at for giving your best +effort. The worst that can happen is that you'll be politely asked to change +something. We appreciate any sort of contributions, and don't want a wall of +rules to get in the way of that. + +However, for those individuals who want a bit more guidance on the best way to +contribute to the project, read on. This document will cover what we're looking +for. By addressing all the points we're looking for, it raises the chances we +can quickly merge or address your contributions. + +## Issues + +### Reporting an Issue + +* Make sure you test against the latest released version. It is possible we + already fixed the bug you're experiencing. + +* Run the command with debug output with the environment variable `PACKER_LOG`. + For example: `PACKER_LOG=1 packer build template.json`. Take the _entire_ + output and create a [gist](https://gist.github.com) for linking to in your + issue. Packer should strip sensitive keys from the output, but take a look + through just in case. + +* Provide a reproducible test case. If a contributor can't reproduce an issue, + then it dramatically lowers the chances it'll get fixed. And in some cases, + the issue will eventually be closed. + +* Respond promptly to any questions made by the Packer team to your issue. Stale + issues will be closed. + +### Issue Lifecycle + +1. The issue is reported. + +2. The issue is verified and categorized by a Packer collaborator. + Categorization is done via tags. For example, bugs are marked as "bugs" and + easy fixes are marked as "easy". + +3. Unless it is critical, the issue is left for a period of time (sometimes many + weeks), giving outside contributors a chance to address the issue. + +4. The issue is addressed in a pull request or commit. The issue will be + referenced in the commit message so that the code that fixes it is clearly + linked. + +5. The issue is closed. + +## Setting up Go to work on Packer + +If you have never worked with Go before, you will have to complete the following +steps in order to be able to compile and test Packer. These instructions target +POSIX-like environments (Mac OS X, Linux, Cygwin, etc.) so you may need to +adjust them for Windows or other shells. + +1. [Download](https://golang.org/dl) and install Go. The instructions below are + for go 1.7. Earlier versions of Go are no longer supported. + +2. Set and export the `GOPATH` environment variable and update your `PATH`. For + example, you can add the following to your `.bash_profile` (or comparable + shell startup scripts): + +``` +export GOPATH=$HOME/go +export PATH=$PATH:$GOPATH/bin +``` + +3. Download the Packer source (and its dependencies) by running + `go get github.com/hashicorp/packer`. This will download the Packer source to + `$GOPATH/src/github.com/hashicorp/packer`. + +4. 
When working on Packer, first `cd $GOPATH/src/github.com/hashicorp/packer`
+   so you can run `make` and easily access other files. Run `make help` to get
+   information about make targets.
+
+5. Make your changes to the Packer source. You can run `make` in
+   `$GOPATH/src/github.com/hashicorp/packer` to run tests and build the Packer
+   binary. Any compilation errors will be shown when the binaries are
+   rebuilding. If you don't have `make` you can simply run
+   `go build -o bin/packer .` from the project root.
+
+6. After building Packer successfully, use
+   `$GOPATH/src/github.com/hashicorp/packer/bin/packer` to build a machine and
+   verify your changes work. For instance:
+   `$GOPATH/src/github.com/hashicorp/packer/bin/packer build template.json`.
+
+7. If everything works well and the tests pass, run `go fmt` on your code before
+   submitting a pull-request.
+
+### Opening a Pull Request
+
+Thank you for contributing! When you are ready to open a pull-request, you will
+need to [fork
+Packer](https://github.com/hashicorp/packer#fork-destination-box), push your
+changes to your fork, and then open a pull-request.
+
+For example, my github username is `cbednarski`, so I would do the following:
+
+```
+git checkout -b f-my-feature
+# Develop a patch.
+git push https://github.com/cbednarski/packer f-my-feature
+```
+
+From there, open your fork in your browser to open a new pull-request.
+
+**Note:** Go infers package names from their file paths. This means `go build`
+will break if you `git clone` your fork instead of using `go get` on the main
+Packer project.
+
+### Pull Request Lifecycle
+
+1. You are welcome to submit your pull request for commentary or review before
+   it is fully completed. Please prefix the title of your pull request with
+   "[WIP]" to indicate this. It's also a good idea to include specific questions
+   or items you'd like feedback on.
+
+1. Once you believe your pull request is ready to be merged, you can remove any
+   "[WIP]" prefix from the title and a core team member will review.
+
+1. One of Packer's core team members will look over your contribution and
+   provide comments letting you know if there is anything left to do. We do our
+   best to provide feedback in a timely manner, but it may take some time for
+   us to respond.
+
+1. Once all outstanding comments and checklist items have been addressed, your
+   contribution will be merged! Merged PRs will be included in the next
+   Packer release. The core team takes care of updating the CHANGELOG as they
+   merge.
+
+1. In rare cases, we might decide that a PR should be closed. We'll make sure to
+   provide clear reasoning when this happens.
+
+### Tips for Working on Packer
+
+#### Working on forks
+
+The easiest way to work on a fork is to set it as a remote of the Packer
+project. After following the steps in "Setting up Go to work on Packer":
+
+1. Navigate to `$GOPATH/src/github.com/hashicorp/packer`
+2. Add the remote by running
+   `git remote add `. For example:
+   `git remote add mwhooker https://github.com/mwhooker/packer.git`.
+3. Checkout a feature branch: `git checkout -b new-feature`
+4. Make changes
+5. (Optional) Push your changes to the fork:
+   `git push -u new-feature`
+
+This way you can push to your fork to create a PR, but the code on disk still
+lives in the spot where the go cli tools are expecting to find it.
+
+#### Govendor
+
+If you are submitting a change that requires new or updated dependencies, please
+include them in `vendor/vendor.json` and in the `vendor/` folder.
This helps
+everything get tested properly in CI.
+
+Note that you will need to use [govendor](https://github.com/kardianos/govendor)
+to do this. This step is recommended but not required; if you don't use govendor
+please indicate in your PR which dependencies have changed and to what versions.
+
+Use `govendor fetch ` to add dependencies to the project. See
+[govendor quick start](https://github.com/kardianos/govendor#quick-start-also-see-the-faq)
+for examples.
+
+Please only apply the minimal vendor changes to get your PR to work. Packer does
+not attempt to track the latest version for each dependency.
+
+#### Running Unit Tests
+
+You can run tests for individual packages using commands like this:
+
+```
+make test TEST=./builder/amazon/...
+```
+
+#### Running Acceptance Tests
+
+Packer has [acceptance tests](https://en.wikipedia.org/wiki/Acceptance_testing)
+for various builders. These typically require an API key (AWS, GCE), or
+additional software to be installed on your computer (VirtualBox, VMware).
+
+If you're working on a new builder or builder feature and want to verify it is
+functioning (and also hasn't broken anything else), we recommend running the
+acceptance tests.
+
+**Warning:** The acceptance tests create/destroy/modify _real resources_, which
+may incur costs for real money. In the presence of a bug, it is possible that
+resources may be left behind, which can cost money even though you were not
+using them. We recommend running tests in an account used only for that purpose
+so it is easy to see if there are any dangling resources, and so production
+resources are not accidentally destroyed or overwritten during testing.
+
+To run the acceptance tests, invoke `make testacc`:
+
+```
+make testacc TEST=./builder/amazon/ebs
+...
+```
+
+The `TEST` variable lets you narrow the scope of the acceptance tests to a
+specific package / folder. The `TESTARGS` variable is recommended to filter down
+to a specific resource to test, since testing all of them at once can sometimes
+take a very long time.
+
+To run only a specific test, use the `-run` argument:
+
+```
+make testacc TEST=./builder/amazon/ebs TESTARGS="-run TestBuilderAcc_forceDeleteSnapshot"
+```
+
+Acceptance tests typically require other environment variables to be set for
+things such as API tokens and keys. Each test should error and tell you which
+credentials are missing, so those are not documented here.
diff --git a/.gitignore b/.gitignore
index 6ab3cdfb9..0f310de91 100644
--- a/.gitignore
+++ b/.gitignore
@@ -23,4 +23,5 @@ packer-test*.log
 .idea/
 *.iml
 Thumbs.db
-/packer.exe
\ No newline at end of file
+/packer.exe
+.project
diff --git a/.travis.yml b/.travis.yml
index 5e753eb01..7bf4fb6db 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -6,9 +6,9 @@ sudo: false
 language: go
 go:
-  - 1.7.4
-  - 1.8.3
-  - 1.9
+  - 1.9.x
+  - 1.10.x
+  - master
 install:
   - make deps
@@ -21,4 +21,6 @@ branches:
   - master
 matrix:
+  allow_failures:
+    - go: master
   fast_finish: true
diff --git a/CHANGELOG.md b/CHANGELOG.md
index dfdab519f..e87c2e5da 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,35 +1,460 @@
-## (UNRELEASED)
+## 1.2.6 (Unreleased)
-### IMRPOVEMENTS:
+### IMPROVEMENTS:
-* post-processor/docker-push: Add `aws_profile` option to control the aws profile for ECR. [GH-5470]
-* builder/docker: Add `aws_profile` option to control the aws profile for ECR. [GH-5470]
-* post-processor/vsphere: Properly capture `ovftool` output. [GH-5499]
-* builder/hyper-v: Also disable automatic checkpoints for gen 2 VMs.
[GH-5517]
-* builder/hyper-v: Add `disk_additional_size` option to allow for up to 64 additional disks. [GH-5491]
-* builder/amazon: correctly deregister AMIs when `force_deregister` is set. [GH-5525]
-* builder/digitalocean: Add `ipv6` option to enable on droplet. [GH-5534]
-* builder/triton: Add `source_machine_image_filter` option to select an image ID based on a variety of parameters. [GH-5538]
-* communicator/ssh: Add socks 5 proxy support. [GH-5439]
-* builder/lxc: Add new `publish_properties` field to set image properties. [GH-5475]
-* builder/virtualbox-ovf: Retry while removing VM to solve for transient errors. [GH-5512]
-* builder/google: Interpolate network and subnetwork values, rather than relying on an API call that packer may not have permission for. [GH-5343]
-* builder/lxc: Add three new configuration option categories to LXC builder: create_options, start_options, and attach_options. [GH-5530]
-* core: Rewrite vagrantfile code to make cross-platform development easier. [5539]
-* builder/triton: Update triton-go sdk. [GH-5531]
-* builder/google: Add clean_image_name template engine. [GH-5463]
+* command/validate: Warn users if config needs fixing. [GH-6423]
+
+### BUG FIXES:
+
+## 1.2.5 (July 16, 2018)
+
+### BUG FIXES:
+* builder/alicloud: Fix issue where internet_max_bandwidth_out template option
+  was not being passed to builder. [GH-6416]
+* builder/alicloud: Fix an issue with VPC cleanup. [GH-6418]
+* builder/amazon-chroot: Fix communicator bug that broke chroot builds.
+  [GH-6363]
+* builder/amazon: Replace packer's waiters with those from the AWS sdk, solving
+  several timeout bugs. [GH-6332]
+* builder/azure: update azure-sdk-for-go, fixing 32-bit build errors. [GH-6479]
+* builder/azure: update the max length of managed_image_resource_group to match
+  new increased length of 90 characters. [GH-6477]
+* builder/hyper-v: Fix secure boot template feature so that it properly passes
+  the template for MicrosoftUEFICertificateAuthority. [GH-6415]
+* builder/hyperv: Fix bug in HyperV IP lookups that was causing breakages in
+  FreeBSD/OpenBSD builds. [GH-6416]
+* builder/qemu: Fix error race condition in qemu builder that caused convert to
+  fail on ubuntu 18.x [GH-6437]
+* builder/qemu: vnc_bind_address was not being passed to qemu. [GH-6467]
+* builder/virtualbox: Allow iso_url to be a symlink. [GH-6370]
+* builder/vmware: Don't fail on DHCP lease files that cannot be read, fixing
+  bug where builder failed on NAT networks that don't serve DHCP. [GH-6415]
+* builder/vmware: Fix bug where we couldn't discover IP if vm_name differed
+  from the vmx displayName. [GH-6448]
+* builder/vmware: Fix validation to prevent hang when remote_password is not
+  sent but vmware is building on esxi. [GH-6424]
+* builder/vmware: Correctly default the vm export format to ovf; this is what
+  the docs claimed we already did, but we didn't. [GH-4538]
+* communicator/winrm: Revert an attempt to determine whether remote upload
+  destinations were files or directories, as this broke uploads on systems
+  without Powershell installed. [GH-6481]
+* core: Fix bug in parsing of iso checksum files that arose when setting
+  iso_url to a relative filepath. [GH-6488]
+* core: Fix Packer crash caused by improper error handling in the downloader.
+  [GH-6381]
+* fix: Fix bug where fixer for ssh_private_ip failed when boolean values
+  are passed as strings. [GH-6458]
+* provisioner/powershell: Make upload of powershell variables retryable, in
+  case of system restarts.
[GH-6388] + +### IMPROVEMENTS: +* builder/amazon: Add the ap-northeast-3 region. [GH-6385] +* builder/amazon: Spot requests may now have tags applied using the `spot_tags` + option [GH-5452] +* builder/cloudstack: Add support for Projectid and new config option + prevent_firewall_changes. [GH-6487] +* builder/openstack: Add support for token authorization and cloud.yaml. + [GH-6362] +* builder/oracle-oci: Add new "instance_name" template option. [GH-6408] +* builder/scaleway: Add new "bootscript" parameter, allowing the user to not + use the default local bootscript [GH-6439] +* builder/vmware: Add support for linked clones to vmware-vmx. [GH-6394] +* debug: The -debug flag will now cause Packer to pause between provisioner + scripts in addition to Packer steps. [GH-4663] +* post-processor/googlecompute-import: Added new googlecompute-import post- + processor [GH-6451] +* provisioner/ansible: Add new "playbook_files" option to execute multiple + playbooks within one provisioner call. [GH-5086] + +## 1.2.4 (May 29, 2018) + +### BUG FIXES: + +* builder/amazon: Can now force the chroot builder to mount an entire block + device instead of a partition [GH-6194] +* builder/azure: windows-sql-cloud is now in the default list of projects to + check for provided images. [GH-6210] +* builder/chroot: A new template option, `nvme_device_path` has been added to + provide a workaround for users who need the amazon-chroot builder to mount + a NVMe volume on their instances. [GH-6295] +* builder/hyper-v: Fix command for mounting multiple disks [GH-6267] +* builder/hyperv: Enable IP retrieval for Server 2008 R2 hosts. [GH-6219] +* builder/hyperv: Fix bug in MAC address specification on Hyper-V. [GH-6187] +* builder/parallels-pvm: Add missing disk compaction step. [GH-6202] +* builder/vmware-esxi: Remove floppy files from the remote server on cleanup. + [GH-6206] +* communicator/winrm: Updated dependencies to fix a race condition [GH-6261] +* core: When using `-on-error=[abort|ask]`, output the error to the user. + [GH-6252] +* provisioner/puppet: Extra-Arguments are no longer prematurely + interpolated.[GH-6215] +* provisioner/shell: Remove file stat that was causing problems uploading files + [GH-6239] + +### IMPROVEMENTS: + +* builder/amazon: Amazon builders other than `chroot` now support T2 unlimited + instances [GH-6265] +* builder/azure: Allow device login for US government cloud. [GH-6105] +* builder/azure: Devicelogin Support for Windows [GH-6285] +* builder/azure: Enable simultaneous builds within one resource group. + [GH-6231] +* builder/azure: Faster deletion of Azure Resource Groups. [GH-6269] +* builder/azure: Updated Azure SDK to v15.0.0 [GH-6224] +* builder/hyper-v: Hyper-V builds now connect to vnc display by default when + building [GH-6243] +* builder/hyper-v: New `use_fixed_vhd_format` allows vm export in an Azure- + compatible format [GH-6101] +* builder/hyperv: New config option for specifying what secure boot template to + use, allowing secure boot of linux vms. [GH-5883] +* builder/qemu: Add support for hvf accelerator. [GH-6193] +* builder/scaleway: Fix SSH communicator connection issue. [GH-6238] +* core: Add opt-in Packer top-level command autocomplete [GH-5454] +* post-processor/shell-local: New options have been added to create feature + parity with the shell-local provisioner. This feature now works on Windows + hosts. [GH-5956] +* provisioner/chef: New config option allows user to skip cleanup of chef + client staging directory. 
[GH-4300]
+* provisioner/shell-local: Can now access automatically-generated WinRM
+  password as variable [GH-6251]
+* provisioner/shell-local: New options have been added to create feature parity
+  with the shell-local post-processor. This feature now works on Windows
+  hosts. [GH-5956]
+* builder/virtualbox: Use HTTPS to download guest editions, now that it's
+  available. [GH-6406]
+
+## 1.2.3 (April 25, 2018)
+
+### BUG FIXES:
+
+* builder/azure: Azure CLI may now be logged into several accounts. [GH-6087]
+* builder/ebssurrogate: Snapshot all launch devices. [GH-6056]
+* builder/hyper-v: Fix CopyExportedVirtualMachine script so it works with
+  links. [GH-6082]
+* builder/oracle-classic: Fix panics when cleaning up resources that haven't
+  been created. [GH-6095]
+* builder/parallels: Allow user to cancel build while the OS is starting up.
+  [GH-6166]
+* builder/qemu: Avoid warning when using raw format. [GH-6080]
+* builder/scaleway: Fix compilation issues on solaris/amd64. [GH-6069]
+* builder/virtualbox: Fix broken scancodes in boot_command. [GH-6067]
+* builder/vmware-iso: Fail in validation if user gives wrong remote_type value.
+  [GH-4563]
+* builder/vmware: Fixed a case-sensitivity issue when determining the network
+  type during the cloning step in the vmware-vmx builder. [GH-6057]
+* builder/vmware: Fixes the DHCP lease and configuration pathfinders for VMware
+  Player. [GH-6096]
+* builder/vmware: Multi-disk VMs can be properly handled by the compacting
+  stage. [GH-6074]
+* common/bootcommand: Fix numerous bugs in the boot command code, and make
+  supported features consistent across builders. [GH-6129]
+* communicator/ssh: Stop trying to discover whether destination is a directory
+  from uploader. [GH-6124]
+* post-processor/vagrant: Large VMDKs should no longer show a 0-byte size on OS
+  X. [GH-6084]
+* post-processor/vsphere: Fix encoding of spaces in passwords for upload.
+  [GH-6110]
+* provisioner/ansible: Pass the inventory_directory configuration option to
+  ansible -i when it is set. [GH-6065]
+* provisioner/powershell: fix bug with SSH communicator + cygwin. [GH-6160]
+* provisioner/powershell: The {{.WinRMPassword}} template variable now works
+  with parallel builders. [GH-6144]
+
+### IMPROVEMENTS:
+
+* builder/alicloud: Update aliyungo common package. [GH-6157]
+* builder/amazon: Expose more source ami data as template variables. [GH-6088]
+* builder/amazon: Setting `force_delete` will only delete AMIs owned by the
+  user. This should prevent failures where we try to delete an AMI with a
+  matching name, but owned by someone else. [GH-6111]
+* builder/azure: Users of Powershell provisioner may access the randomly-
+  generated winrm password using the template variable {{.WinRMPassword}}.
+  [GH-6113]
+* builder/google: Users of Powershell provisioner may access the randomly-
+  generated winrm password using the template variable {{.WinRMPassword}}.
+  [GH-6141]
+* builder/hyper-v: User can now configure hyper-v disk block size. [GH-5941]
+* builder/openstack: Add configuration option for `instance_name`. [GH-6041]
+* builder/oracle-classic: Better validation of destination image name.
+  [GH-6089]
+* builder/oracle-oci: New config options for user data and user data file.
+  [GH-6079]
+* builder/oracle-oci: use the official OCI sdk instead of handcrafted client.
+  [GH-6142]
+* builder/triton: Add support to Skip TLS Verification of Triton Certificate.
+  [GH-6039]
+* provisioner/ansible: Ansible users may provide a custom inventory file.
+ [GH-6107] +* provisioner/file: New `generated` tag allows users to upload files created + during Packer run. [GH-3891] + +## 1.2.2 (March 26, 2018) + +### BUG FIXES: + +* builder/amazon: Fix AWS credential defaulting [GH-6019] +* builder/LXC: make sleep timeout easily configurable [GH-6038] +* builder/virtualbox: Correctly send multi-byte scancodes when typing boot + command. [GH-5987] +* builder/virtualbox: Special boot-commands no longer overwrite previous + commands [GH-6002] +* builder/vmware: Default to disabling XHCI bus for USB on the vmware-iso + builder. [GH-5975] +* builder/vmware: Handle multiple devices per VMware network type [GH-5985] +* communicator/ssh: Handle errors uploading files more gracefully [GH-6033] +* provisioner/powershell: Fix environment variable file escaping. [GH-5973] + + +### IMPROVEMENTS: + +* builder/amazon: Added new region `cn-northwest-1`. [GH-5960] +* builder/amazon: Users may now access the amazon-generated administrator + password [GH-5998] +* builder/azure: Add support concurrent deployments in the same resource group. + [GH-6005] +* builder/azure: Add support for building with additional disks. [GH-5944] +* builder/azure: Add support for marketplace plan information. [GH-5970] +* builder/azure: Make all command output human readable. [GH-5967] +* builder/azure: Respect `-force` for managed image deletion. [GH-6003] +* builder/google: Add option to specify a service account, or to run without + one. [GH-5991] [GH-5928] +* builder/oracle-oci: Add new "use_private_ip" option. [GH-5893] +* post-processor/vagrant: Add LXC support. [GH-5980] +* provisioner/salt-masterless: Added Windows support. [GH-5702] +* provisioner/salt: Add windows support to salt provisioner [GH-6012] [GH-6012] + + +## 1.2.1 (February 23, 2018) + +### BUG FIXES: + +* builder/amazon: Fix authorization using assume role. [GH-5914] +* builder/hyper-v: Fix command collisions with VMWare PowerCLI. [GH-5861] +* builder/vmware-iso: Fix panic when building on esx5 remotes. [GH-5931] +* builder/vmware: Fix issue detecting host IP. [GH-5898] [GH-5900] +* provisioner/ansible-local: Fix conflicting escaping schemes for vars provided + via `--extra-vars`. [GH-5888] + +### IMPROVEMENTS: + +* builder/oracle-classic: Add `snapshot_timeout` option to control how long we + wait for the snapshot to be created. [GH-5932] +* builder/oracle-classic: Add support for WinRM connections. [GH-5929] + + +## 1.2.0 (February 9, 2018) + +### BACKWARDS INCOMPATIBILITIES: + +* 3rd party plugins: We have moved internal dependencies, meaning your 3rd + party plugins will no longer compile (however existing builds will still + work fine); the work to fix them is minimal and documented in GH-5810. + [GH-5810] +* builder/amazon: The `ssh_private_ip` option has been removed. Instead, please + use `"ssh_interface": "private"`. A fixer has been written for this, which + can be invoked with `packer fix`. [GH-5876] +* builder/openstack: Extension support has been removed. To use OpenStack + builder with the OpenStack Newton (Oct 2016) or earlier, we recommend you + use Packer v1.1.2 or earlier version. +* core: Affects Windows guests: User variables containing Powershell special + characters no longer need to be escaped.[GH-5376] +* provisioner/file: We've made destination semantics more consistent across the + various communicators. In general, if the destination is a directory, files + will be uploaded into the directory instead of failing. This mirrors the + behavior of `rsync`. 
There's a chance some users might be depending on the + previous buggy behavior, so it's worth ensuring your configuration is + correct. [GH-5426] +* provisioner/powershell: Regression from v1.1.1 forcing extra escaping of + environment variables in the non-elevated provisioner has been fixed. + [GH-5515] [GH-5872] + +### IMPROVEMENTS: + +* **New builder:** `ncloud` for building server images using the NAVER Cloud + Platform. [GH-5791] +* **New builder:** `oci-classic` for building new custom images for use with + Oracle Cloud Infrastructure Classic Compute. [GH-5819] +* **New builder:** `scaleway` - The Scaleway Packer builder is able to create + new images for use with Scaleway BareMetal and Virtual cloud server. + [GH-4770] +* builder/amazon: Add `kms_key_id` option to block device mappings. [GH-5774] +* builder/amazon: Add `skip_metadata_api_check` option to skip consulting the + amazon metadata service. [GH-5764] +* builder/amazon: Add Paris region (eu-west-3) [GH-5718] +* builder/amazon: Give better error messages if we have trouble during + authentication. [GH-5764] +* builder/amazon: Remove Session Token (STS) from being shown in the log. + [GH-5665] +* builder/amazon: Replace `InstanceStatusOK` check with `InstanceReady`. This + reduces build times universally while still working for all instance types. + [GH-5678] +* builder/amazon: Report which authentication provider we're using. [GH-5764] +* builder/amazon: Timeout early if metadata service can't be reached. [GH-5764] +* builder/amazon: Warn during prepare if we didn't get both an access key and a + secret key when we were expecting one. [GH-5762] +* builder/azure: Add validation for incorrect VHD URLs [GH-5695] +* builder/docker: Remove credentials from being shown in the log. [GH-5666] +* builder/google: Support specifying licenses for images. [GH-5842] +* builder/hyper-v: Allow MAC address specification. [GH-5709] +* builder/hyper-v: New option to use differential disks and Inline disk + creation to improve build time and reduce disk usage [GH-5631] +* builder/qemu: Add Intel HAXM support to QEMU builder [GH-5738] +* builder/triton: Triton RBAC is now supported. [GH-5741] +* builder/triton: Updated triton-go dependencies, allowing better error + handling. [GH-5795] +* builder/vmware-iso: Add support for cdrom and disk adapter types. [GH-3417] +* builder/vmware-iso: Add support for setting network type and network adapter + type. [GH-3417] +* builder/vmware-iso: Add support for usb/serial/parallel ports. [GH-3417] +* builder/vmware-iso: Add support for virtual soundcards. [GH-3417] +* builder/vmware-iso: More reliably retrieve the guest networking + configuration. [GH-3417] +* builder/vmware: Add support for "super" key in `boot_command`. [GH-5681] +* communicator/ssh: Add session-level keep-alives [GH-5830] +* communicator/ssh: Detect dead connections. [GH-4709] +* core: Gracefully clean up resources on SIGTERM. [GH-5318] +* core: Improved error logging in floppy file handling. [GH-5802] +* core: Improved support for downloading and validating a uri containing a + Windows UNC path or a relative file:// scheme. [GH-2906] +* post-processor/amazon-import: Allow user to specify role name in amazon- + import [GH-5817] +* post-processor/docker: Remove credentials from being shown in the log. + [GH-5666] +* post-processor/google-export: Synchronize credential semantics with the + Google builder. 
[GH-4148] +* post-processor/vagrant: Add vagrant post-processor support for Google + [GH-5732] +* post-processor/vsphere-template: Now accepts artifacts from the vSphere post- + processor. [GH-5380] +* provisioner/amazon: Use Amazon SDK's InstanceRunning waiter instead of + InstanceStatusOK waiter [GH-5773] +* provisioner/ansible: Improve user retrieval. [GH-5758] +* provisioner/chef: Add support for 'trusted_certs_dir' chef-client + configuration option [GH-5790] +* provisioner/chef: Added Policyfile support to chef-client provisioner. + [GH-5831] + +### BUG FIXES: + +* builder/alicloud-ecs: Attach keypair before starting instance in alicloud + builder [GH-5739] +* builder/amazon: Fix tagging support when building in us-gov/china. [GH-5841] +* builder/amazon: NewSession now inherits MaxRetries and other settings. + [GH-5719] +* builder/virtualbox: Fix interpolation ordering so that edge cases around + guest_additions_url are handled correctly [GH-5757] +* builder/virtualbox: Fix regression affecting users running Packer on a + Windows host that kept Packer from finding Virtualbox guest additions if + Packer ran on a different drive from the one where the guest additions were + stored. [GH-5761] +* builder/vmware: Fix case where artifacts might not be cleaned up correctly. + [GH-5835] +* builder/vmware: Fixed file handle leak that may have caused race conditions + in vmware builder [GH-5767] +* communicator/ssh: Add deadline to SSH connection to prevent Packer hangs + after script provisioner reboots vm [GH-4684] +* communicator/winrm: Fix issue copying empty directories. [GH-5763] +* provisioner/ansible-local: Fix support for `--extra-vars` in + `extra_arguments`. [GH-5703] +* provisioner/ansible-remote: Fixes an error where Packer's private key can be + overridden by inherited `ansible_ssh_private_key` options. [GH-5869] +* provisioner/ansible: The "default extra variables" feature added in Packer + v1.0.1 caused the ansible-local provisioner to fail when an --extra-vars + argument was specified in the extra_arguments configuration option; this + has been fixed. [GH-5335] +* provisioner/powershell: Regression from v1.1.1 forcing extra escaping of + environment variables in the non-elevated provisioner has been fixed. + [GH-5515] [GH-5872] + + +## 1.1.3 (December 8, 2017) + +### IMPROVEMENTS: + +* builder/alicloud-ecs: Add security token support and set TLS handshake + timeout through environment variable. [GH-5641] +* builder/amazon: Add a new parameter `ssh_interface`. Valid values include + `public_ip`, `private_ip`, `public_dns` or `private_dns`. [GH-5630] +* builder/azure: Add sanity checks for resource group names [GH-5599] +* builder/azure: Allow users to specify an existing resource group to use, + instead of creating a new one for every run. [GH-5548] +* builder/hyper-v: Add support for differencing disk. [GH-5458] +* builder/vmware-iso: Improve logging of network errors. [GH-5456] +* core: Add new `packer_version` template engine. [GH-5619] +* core: Improve logic checking for downloaded ISOs in case where user has + provided more than one URL in `iso_urls` [GH-5632] +* provisioner/ansible-local: Add ability to clean staging directory. [GH-5618] + +### BUG FIXES: + +* builder/amazon: Allow `region` to appear in `ami_regions`. [GH-5660] +* builder/amazon: `C5` instance types now build more reliably. [GH-5678] +* builder/amazon: Correctly set AWS region if given in template along with a + profile. 
[GH-5676] +* builder/amazon: Prevent `sriov_support` and `ena_support` from being used + with spot instances, which would cause a build failure. [GH-5679] +* builder/hyper-v: Fix interpolation context for user variables in + `boot_command` [GH-5547] +* builder/qemu: Set default disk size to 40960 MB to prevent boot failures. + [GH-5588] +* builder/vmware: Correctly detect Windows boot on vmware workstation. + [GH-5672] +* core: Fix windows path regression when downloading ISOs. [GH-5591] +* provisioner/chef: Fix chef installs on Windows. [GH-5649] + +## 1.1.2 (November 15, 2017) + +### IMPROVEMENTS: + +* builder/amazon: Correctly deregister AMIs when `force_deregister` is set. + [GH-5525] +* builder/digitalocean: Add `ipv6` option to enable on droplet. [GH-5534] +* builder/docker: Add `aws_profile` option to control the aws profile for ECR. + [GH-5470] +* builder/google: Add `clean_image_name` template engine. [GH-5463] +* builder/google: Allow selecting container optimized images. [GH-5576] +* builder/google: Interpolate network and subnetwork values, rather than + relying on an API call that packer may not have permission for. [GH-5343] +* builder/hyper-v: Add `disk_additional_size` option to allow for up to 64 + additional disks. [GH-5491] +* builder/hyper-v: Also disable automatic checkpoints for gen 2 VMs. [GH-5517] +* builder/lxc: Add new `publish_properties` field to set image properties. + [GH-5475] +* builder/lxc: Add three new configuration option categories to LXC builder: + `create_options`, `start_options`, and `attach_options`. [GH-5530] +* builder/triton: Add `source_machine_image_filter` option to select an image + ID based on a variety of parameters. [GH-5538] +* builder/virtualbox-ovf: Error during prepare if source path doesn't exist. + [GH-5573] +* builder/virtualbox-ovf: Retry while removing VM to solve for transient + errors. [GH-5512] +* communicator/ssh: Add socks 5 proxy support. [GH-5439] +* core/iso_config: Support relative paths in checksum file. [GH-5578] +* core: Rewrite vagrantfile code to make cross-platform development easier. + [GH-5539] +* post-processor/docker-push: Add `aws_profile` option to control the aws + profile for ECR. [GH-5470] +* post-processor/vsphere: Properly capture `ovftool` output. [GH-5499] ### BUG FIXES: -* builder/docker: Remove `login_email`, which no longer exists in the docker client. [GH-5511] -* builder/triton: Fix a bug where partially created images can be reported as complete. [GH-5566] -* builder/amazon: region is set from profile, if profile is set, rather than being overridden by metadata. [GH-5562] -* provisioner/windows-restart: Wait for restart no longer endlessly loops if user specifies a custom restart check command. [GH-5563] -* post-processor/vsphere: Use the vm disk path information to re-create the vmx datastore path. [GH-5567] * builder/amazon: Add a delay option to security group waiter. [GH-5536] -* builder/amazon: Fix regressions relating to spot instances and EBS volumes. [GH-5495] -* builder/hyperv: Fix admin check that was causing powershell failures. [GH-5510] -* builder/oracle: Defaulting of OCI builder region will first check the packer template and the OCI config file. [GH-5407] +* builder/amazon: Fix regressions relating to spot instances and EBS volumes. + [GH-5495] +* builder/amazon: Set region from profile, if profile is set, rather than being + overridden by metadata. [GH-5562] +* builder/docker: Remove `login_email`, which no longer exists in the docker + client. 
[GH-5511] +* builder/hyperv: Fix admin check that was causing powershell failures. + [GH-5510] +* builder/oracle: Defaulting of OCI builder region will first check the packer + template and the OCI config file. [GH-5407] +* builder/triton: Fix a bug where partially created images can be reported as + complete. [GH-5566] +* post-processor/vsphere: Use the vm disk path information to re-create the vmx + datastore path. [GH-5567] +* provisioner/windows-restart: Wait for restart no longer endlessly loops if + user specifies a custom restart check command. [GH-5563] ## 1.1.1 (October 13, 2017) @@ -212,7 +637,7 @@ * builder/cloudstack: Properly report back errors. [GH-5103] [GH-5123] * builder/docker: Fix windows filepath in docker-toolbox call [GH-4887] * builder/docker: Fix windows filepath in docker-toolbox call. [GH-4887] -* builder/hyperv: Use SID to verify membersip in Admin group, fixing for non- +* builder/hyperv: Use SID to verify membership in Admin group, fixing for non- english users. [GH-5022] * builder/hyperv: Verify membership in the group Hyper-V Administrators by SID not name. [GH-5022] @@ -238,7 +663,7 @@ * core: Remove logging that shouldn't be there when running commands. [GH-5042] * provisioner/shell: Fix bug where scripts were being run under `sh`. [GH-5043] -### IMRPOVEMENTS: +### IMPROVEMENTS: * provisioner/windows-restart: make it clear that timeouts come from the provisioner, not winrm. [GH-5040] @@ -459,7 +884,7 @@ * builder/amazon: Crashes when new EBS vols are used. [GH-4308] * builder/amazon: Fix crash in amazon-instance. [GH-4372] * builder/amazon: fix run volume tagging [GH-4420] -* builder/amazon: fix when using non-existant security\_group\_id. [GH-4425] +* builder/amazon: fix when using non-existent security\_group\_id. [GH-4425] * builder/amazon: Properly error if we don't have the ec2:DescribeSecurityGroups permission. [GH-4304] * builder/amazon: Properly wait for security group to exist. [GH-4369] @@ -1122,7 +1547,7 @@ * builder/parallels: Support Parallels Desktop 11. [GH-2199] * builder/openstack: Add `rackconnect_wait` for Rackspace customers to wait for RackConnect data to appear -* buidler/openstack: Add `ssh_interface` option for rackconnect for users that +* builder/openstack: Add `ssh_interface` option for rackconnect for users that have prohibitive firewalls * builder/openstack: Flavor names can be used as well as refs * builder/openstack: Add `availability_zone` [GH-2016] @@ -1153,7 +1578,7 @@ * core: Fix potential panic for post-processor plugin exits. [GH-2098] * core: `PACKER_CONFIG` may point to a non-existent file. [GH-2226] * builder/amazon: Allow spaces in AMI names when using `clean_ami_name` [GH-2182] -* builder/amazon: Remove deprecated ec2-upload-bundle paramger. [GH-1931] +* builder/amazon: Remove deprecated ec2-upload-bundle parameter. [GH-1931] * builder/amazon: Use IAM Profile to upload bundle if provided. [GH-1985] * builder/amazon: Use correct exit code after SSH authentication failed. 
[GH-2004] * builder/amazon: Retry finding created instance for eventual @@ -1236,7 +1661,7 @@ * builder/googlecompute: Support for ubuntu-os-cloud project * builder/googlecompute: Support for OAuth2 to avoid client secrets file -* builder/googlecompute: GCE image from persistant disk instead of tarball +* builder/googlecompute: GCE image from persistent disk instead of tarball * builder/qemu: Checksum type "none" can be used * provisioner/chef: Generate a node name if none available * provisioner/chef: Added ssl_verify_mode configuration @@ -1370,7 +1795,7 @@ * builder/docker: Can now specify login credentials to pull images. * builder/docker: Support mounting additional volumes. [GH-1430] * builder/parallels/all: Path to tools ISO is calculated automatically. [GH-1455] -* builder/parallels-pvm: `reassign_mac` option to choose wehther or not +* builder/parallels-pvm: `reassign_mac` option to choose whether or not to generate a new MAC address. [GH-1461] * builder/qemu: Can specify "none" acceleration type. [GH-1395] * builder/qemu: Can specify "tcg" acceleration type. [GH-1395] @@ -1399,7 +1824,7 @@ manager certs. [GH-1137] * builder/amazon/all: `delete_on_termination` set to false will work. * builder/amazon/all: Fix race condition on setting tags. [GH-1367] -* builder/amazon/all: More desctriptive error messages if Amazon only +* builder/amazon/all: More descriptive error messages if Amazon only sends an error code. [GH-1189] * builder/docker: Error if `DOCKER_HOST` is set. * builder/docker: Remove the container during cleanup. [GH-1206] @@ -1813,7 +2238,7 @@ * builder/digitalocean: scrub API keys from config debug output. [GH-516] * builder/virtualbox: error if VirtualBox version cant be detected. [GH-488] * builder/virtualbox: detect if vboxdrv isn't properly setup. [GH-488] -* builder/virtualbox: sleep a bit before export to ensure the sesssion +* builder/virtualbox: sleep a bit before export to ensure the session is unlocked. [GH-512] * builder/virtualbox: create SATA drives properly on VirtualBox 4.3. [GH-547] * builder/virtualbox: support user templates in SSH key path. [GH-539] @@ -1970,7 +2395,7 @@ * builder/virtualbox,vmware: Support SHA512 as a checksum type. [GH-356] * builder/vmware: The root hard drive type can now be specified with "disk_type_id" for advanced users. [GH-328] -* provisioner/salt-masterless: Ability to specfy a minion config. [GH-264] +* provisioner/salt-masterless: Ability to specify a minion config. [GH-264] * provisioner/salt-masterless: Ability to upload pillars. [GH-353] ### IMPROVEMENTS: @@ -2029,7 +2454,7 @@ * core: All HTTP downloads across Packer now support the standard proxy environmental variables (`HTTP_PROXY`, `NO_PROXY`, etc.) [GH-252] * builder/amazon: API requests will use HTTP proxy if specified by - enviromental variables. + environmental variables. * builder/digitalocean: API requests will use HTTP proxy if specified by environmental variables. @@ -2075,11 +2500,11 @@ * builder/amazon-instance: send IAM instance profile data. [GH-294] * builder/digitalocean: API request parameters are properly URL encoded. [GH-281] -* builder/virtualbox: dowload progress won't be shown until download +* builder/virtualbox: download progress won't be shown until download actually starts. [GH-288] * builder/virtualbox: floppy files names of 13 characters are now properly written to the FAT12 filesystem. 
[GH-285] -* builder/vmware: dowload progress won't be shown until download +* builder/vmware: download progress won't be shown until download actually starts. [GH-288] * builder/vmware: interrupt works while typing commands over VNC. * builder/virtualbox: floppy files names of 13 characters are now properly @@ -2201,7 +2626,7 @@ ### BUG FIXES: * builder/amazon/all: Gracefully handle when AMI appears to not exist - while AWS state is propogating. [GH-207] + while AWS state is propagating. [GH-207] * builder/virtualbox: Trim carriage returns for Windows to properly detect VM state on Windows. [GH-218] * core: build names no longer cause invalid config errors. [GH-197] @@ -2226,7 +2651,7 @@ * Amazon EBS builder can now optionally use a pre-made security group instead of randomly generating one. * DigitalOcean API key and client IDs can now be passed in as - environmental variables. See the documentatin for more details. + environmental variables. See the documentation for more details. * VirtualBox and VMware can now have `floppy_files` specified to attach floppy disks when booting. This allows for unattended Windows installs. * `packer build` has a new `-force` flag that forces the removal of @@ -2283,7 +2708,7 @@ * core: Non-200 response codes on downloads now show proper errors. [GH-141] * amazon-ebs: SSH handshake is retried. [GH-130] -* vagrant: The `BuildName` template propery works properly in +* vagrant: The `BuildName` template property works properly in the output path. * vagrant: Properly configure the provider-specific post-processors so things like `vagrantfile_template` work. [GH-129] diff --git a/CODEOWNERS b/CODEOWNERS index c037a38d4..216cce578 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -13,6 +13,8 @@ /builder/oracle/ @prydie @owainlewis /builder/profitbricks/ @jasmingacic /builder/triton/ @jen20 @sean- +/builder/ncloud/ @YuSungDuk +/builder/scaleway/ @dimtion @edouardb # provisioners diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md deleted file mode 100644 index acababe3f..000000000 --- a/CONTRIBUTING.md +++ /dev/null @@ -1,158 +0,0 @@ -# Contributing to Packer - -**First:** if you're unsure or afraid of _anything_, just ask -or submit the issue or pull request anyways. You won't be yelled at for -giving your best effort. The worst that can happen is that you'll be -politely asked to change something. We appreciate any sort of contributions, -and don't want a wall of rules to get in the way of that. - -However, for those individuals who want a bit more guidance on the -best way to contribute to the project, read on. This document will cover -what we're looking for. By addressing all the points we're looking for, -it raises the chances we can quickly merge or address your contributions. - -## Issues - -### Reporting an Issue - -* Make sure you test against the latest released version. It is possible - we already fixed the bug you're experiencing. - -* Run the command with debug ouput with the environment variable - `PACKER_LOG`. For example: `PACKER_LOG=1 packer build template.json`. Take - the *entire* output and create a [gist](https://gist.github.com) for linking - to in your issue. Packer should strip sensitive keys from the output, - but take a look through just in case. - -* Provide a reproducible test case. If a contributor can't reproduce an - issue, then it dramatically lowers the chances it'll get fixed. And in - some cases, the issue will eventually be closed. - -* Respond promptly to any questions made by the Packer team to your issue. 
- Stale issues will be closed. - -### Issue Lifecycle - -1. The issue is reported. - -2. The issue is verified and categorized by a Packer collaborator. - Categorization is done via tags. For example, bugs are marked as "bugs" - and easy fixes are marked as "easy". - -3. Unless it is critical, the issue is left for a period of time (sometimes - many weeks), giving outside contributors a chance to address the issue. - -4. The issue is addressed in a pull request or commit. The issue will be - referenced in the commit message so that the code that fixes it is clearly - linked. - -5. The issue is closed. - -## Setting up Go to work on Packer - -If you have never worked with Go before, you will have to complete the -following steps in order to be able to compile and test Packer. These instructions target POSIX-like environments (Mac OS X, Linux, Cygwin, etc.) so you may need to adjust them for Windows or other shells. - -1. [Download](https://golang.org/dl) and install Go. The instructions below - are for go 1.7. Earlier versions of Go are no longer supported. - -2. Set and export the `GOPATH` environment variable and update your `PATH`. For - example, you can add to your `.bash_profile`. - - ``` - export GOPATH=$HOME/go - export PATH=$PATH:$GOPATH/bin - ``` - -3. Download the Packer source (and its dependencies) by running `go get - github.com/hashicorp/packer`. This will download the Packer source to - `$GOPATH/src/github.com/hashicorp/packer`. - -4. When working on packer `cd $GOPATH/src/github.com/hashicorp/packer` so you - can run `make` and easily access other files. Run `make help` to get - information about make targets. - -5. Make your changes to the Packer source. You can run `make` in - `$GOPATH/src/github.com/hashicorp/packer` to run tests and build the packer - binary. Any compilation errors will be shown when the binaries are - rebuilding. If you don't have `make` you can simply run `go build -o bin/packer .` from the project root. - -6. After running building packer successfully, use - `$GOPATH/src/github.com/hashicorp/packer/bin/packer` to build a machine and - verify your changes work. For instance: `$GOPATH/src/github.com/hashicorp/packer/bin/packer build template.json`. - -7. If everything works well and the tests pass, run `go fmt` on your code - before submitting a pull-request. - -### Opening an Pull Request - -When you are ready to open a pull-request, you will need to [fork packer](https://github.com/hashicorp/packer#fork-destination-box), push your changes to your fork, and then open a pull-request. - -For example, my github username is `cbednarski` so I would do the following: - - git checkout -b f-my-feature - // develop a patch - git push https://github.com/cbednarski/packer f-my-feature - -From there, open your fork in your browser to open a new pull-request. - -**Note** Go infers package names from their filepaths. This means `go build` will break if you `git clone` your fork instead of using `go get` on the main packer project. - -### Tips for Working on Packer - -#### Working on forks - -The easiest way to work on a fork is to set it as a remote of the packer project. After following the steps in "Setting up Go to work on Packer": - -1. Navigate to $GOPATH/src/github.com/hashicorp/packer -2. Add the remote `git remote add `. For example `git remote add mwhooker https://github.com/mwhooker/packer.git`. -3. Checkout a feature branch: `git checkout -b new-feature` -4. Make changes -5. 
(Optional) Push your changes to the fork: `git push -u new-feature` - -This way you can push to your fork to create a PR, but the code on disk still lives in the spot where the go cli tools are expecting to find it. - -#### Govendor - -If you are submitting a change that requires new or updated dependencies, please include them in `vendor/vendor.json` and in the `vendor/` folder. This helps everything get tested properly in CI. - -Note that you will need to use [govendor](https://github.com/kardianos/govendor) to do this. This step is recommended but not required; if you don't use govendor please indicate in your PR which dependencies have changed and to what versions. - -Use `govendor fetch ` to add dependencies to the project. See -[govendor quick -start](https://github.com/kardianos/govendor#quick-start-also-see-the-faq) for -examples. - -Please only apply the minimal vendor changes to get your PR to work. Packer does not attempt to track the latest version for each dependency. - -#### Running Unit Tests - -You can run tests for individual packages using commands like this: - - $ make test TEST=./builder/amazon/... - -#### Running Acceptance Tests - -Packer has [acceptance tests](https://en.wikipedia.org/wiki/Acceptance_testing) -for various builders. These typically require an API key (AWS, GCE), or -additional software to be installed on your computer (VirtualBox, VMware). - -If you're working on a new builder or builder feature and want verify it is functioning (and also hasn't broken anything else), we recommend running the -acceptance tests. - -**Warning:** The acceptance tests create/destroy/modify *real resources*, which -may incur costs for real money. In the presence of a bug, it is possible that resources may be left behind, which can cost money even though you were not using them. We recommend running tests in an account used only for that purpose so it is easy to see if there are any dangling resources, and so production resources are not accidentally destroyed or overwritten during testing. - -To run the acceptance tests, invoke `make testacc`: - - $ make testacc TEST=./builder/amazon/ebs - ... - -The `TEST` variable lets you narrow the scope of the acceptance tests to a -specific package / folder. The `TESTARGS` variable is recommended to filter -down to a specific resource to test, since testing all of them at once can -sometimes take a very long time. - -Acceptance tests typically require other environment variables to be set for -things such as API tokens and keys. Each test should error and tell you which -credentials are missing, so those are not documented here. diff --git a/Makefile b/Makefile index 3a68f0ba5..cb5247d13 100644 --- a/Makefile +++ b/Makefile @@ -4,11 +4,13 @@ VET?=$(shell ls -d */ | grep -v vendor | grep -v website) GITSHA:=$(shell git rev-parse HEAD) # Get the current local branch name from git (if we can, this may be blank) GITBRANCH:=$(shell git symbolic-ref --short HEAD 2>/dev/null) -GOFMT_FILES?=$$(find . -not -path "./vendor/*" -name "*.go") GOOS=$(shell go env GOOS) GOARCH=$(shell go env GOARCH) GOPATH=$(shell go env GOPATH) +# gofmt +UNFORMATTED_FILES=$(shell find . 
-not -path "./vendor/*" -name "*.go" | xargs gofmt -s -l) + # Get the git commit GIT_DIRTY=$(shell test -n "`git status --porcelain`" && echo "+CHANGES" || true) GIT_COMMIT=$(shell git rev-parse --short HEAD) @@ -41,8 +43,11 @@ package: @sh -c "$(CURDIR)/scripts/dist.sh $(VERSION)" deps: + @go get golang.org/x/tools/cmd/goimports @go get golang.org/x/tools/cmd/stringer + @go get -u github.com/mna/pigeon @go get github.com/kardianos/govendor + @go get golang.org/x/tools/cmd/goimports @govendor sync dev: deps ## Build and install a development build @@ -57,10 +62,18 @@ dev: deps ## Build and install a development build @cp $(GOPATH)/bin/packer pkg/$(GOOS)_$(GOARCH) fmt: ## Format Go code - @gofmt -w -s $(GOFMT_FILES) + @gofmt -w -s main.go $(UNFORMATTED_FILES) fmt-check: ## Check go code formatting - $(CURDIR)/scripts/gofmtcheck.sh $(GOFMT_FILES) + @echo "==> Checking that code complies with gofmt requirements..." + @if [ ! -z "$(UNFORMATTED_FILES)" ]; then \ + echo "gofmt needs to be run on the following files:"; \ + echo "$(UNFORMATTED_FILES)" | xargs -n1; \ + echo "You can use the command: \`make fmt\` to reformat code."; \ + exit 1; \ + else \ + echo "Check passed."; \ + fi fmt-docs: @find ./website/source/docs -name "*.md" -exec pandoc --wrap auto --columns 79 --atx-headers -s -f "markdown_github+yaml_metadata_block" -t "markdown_github+yaml_metadata_block" {} -o {} \; @@ -73,6 +86,8 @@ fmt-examples: # source files. generate: deps ## Generate dynamically generated code go generate . + gofmt -w common/bootcommand/boot_command.go + goimports -w common/bootcommand/boot_command.go gofmt -w command/plugin.go test: deps fmt-check ## Run unit tests @@ -91,7 +106,7 @@ testrace: deps ## Test for race conditions @go test -race $(TEST) $(TESTARGS) -timeout=2m updatedeps: - @echo "INFO: Packer deps are managed by govendor. See CONTRIBUTING.md" + @echo "INFO: Packer deps are managed by govendor. See .github/CONTRIBUTING.md" help: @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' diff --git a/README.md b/README.md index 5249f515e..a3ec402a9 100644 --- a/README.md +++ b/README.md @@ -9,10 +9,10 @@ [travis]: https://travis-ci.org/hashicorp/packer [appveyor-badge]: https://ci.appveyor.com/api/projects/status/miavlgnp989e5obc/branch/master?svg=true [appveyor]: https://ci.appveyor.com/project/hashicorp/packer -[godoc-badge]: https://godoc.org/github.com/mitchellh/packer?status.svg -[godoc]: https://godoc.org/github.com/mitchellh/packer -[report-badge]: https://goreportcard.com/badge/github.com/mitchellh/packer -[report]: https://goreportcard.com/report/github.com/mitchellh/packer +[godoc-badge]: https://godoc.org/github.com/hashicorp/packer?status.svg +[godoc]: https://godoc.org/github.com/hashicorp/packer +[report-badge]: https://goreportcard.com/badge/github.com/hashicorp/packer +[report]: https://goreportcard.com/report/github.com/hashicorp/packer * Website: https://www.packer.io * IRC: `#packer-tool` on Freenode @@ -23,24 +23,8 @@ from a single source configuration. Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer -comes out of the box with support for the following platforms: - -* Amazon EC2 (AMI). Both EBS-backed and instance-store AMIs -* Azure -* CloudStack -* DigitalOcean -* Docker -* Google Compute Engine -* Hyper-V -* 1&1 -* OpenStack -* Oracle Cloud Infrastructure -* Parallels -* ProfitBricks -* QEMU. 
Both KVM and Xen images. -* Triton (Joyent Public Cloud) -* VMware -* VirtualBox +comes out of the box with support for many platforms, the full list of which can +be found at https://www.packer.io/docs/builders/index.html. Support for other platforms can be added via plugins. @@ -48,10 +32,6 @@ The images that Packer creates can easily be turned into [Vagrant](http://www.vagrantup.com) boxes. ## Quick Start -Download and install packages and dependencies -``` -go get github.com/hashicorp/packer -``` **Note:** There is a great [introduction and getting started guide](https://www.packer.io/intro) @@ -59,8 +39,10 @@ for those with a bit more patience. Otherwise, the quick start below will get you up and running quickly, at the sacrifice of not explaining some key points. -First, [download a pre-built Packer binary](https://www.packer.io/downloads.html) -for your operating system or [compile Packer yourself](CONTRIBUTING.md#setting-up-go-to-work-on-packer). +First, [download a pre-built Packer +binary](https://www.packer.io/downloads.html) for your operating system or +[compile Packer +yourself](https://github.com/hashicorp/packer/blob/master/.github/CONTRIBUTING.md#setting-up-go-to-work-on-packer). After Packer is installed, create your first template, which tells Packer what platforms to build images for and how you want to build them. In our @@ -108,4 +90,7 @@ https://www.packer.io/docs ## Developing Packer -See [CONTRIBUTING.md](https://github.com/hashicorp/packer/blob/master/CONTRIBUTING.md) for best practices and instructions on setting up your development environment to work on Packer. +See +[CONTRIBUTING.md](https://github.com/hashicorp/packer/blob/master/.github/CONTRIBUTING.md) +for best practices and instructions on setting up your development environment +to work on Packer. 
diff --git a/Vagrantfile b/Vagrantfile index 30b0437ba..c998e31c0 100644 --- a/Vagrantfile +++ b/Vagrantfile @@ -5,6 +5,10 @@ LINUX_BASE_BOX = "bento/ubuntu-16.04" FREEBSD_BASE_BOX = "jen20/FreeBSD-12.0-CURRENT" Vagrant.configure(2) do |config| + if Vagrant.has_plugin?("vagrant-cachier") + config.cache.scope = :box + end + # Compilation and development boxes config.vm.define "linux", autostart: true, primary: true do |vmCfg| vmCfg.vm.box = LINUX_BASE_BOX @@ -69,11 +73,6 @@ def configureProviders(vmCfg, cpus: "2", memory: "2048") end end - vmCfg.vm.provider "virtualbox" do |v| - v.memory = memory - v.cpus = cpus - end - return vmCfg end diff --git a/azure-merge.sh b/azure-merge.sh deleted file mode 100755 index 07c79b8b3..000000000 --- a/azure-merge.sh +++ /dev/null @@ -1,18 +0,0 @@ -PACKER=$GOPATH/src/github.com/mitchellh/packer -AZURE=/tmp/packer-azure - -ls $AZURE >/dev/null || git clone https://github.com/Azure/packer-azure /tmp/packer-azure -PWD=`pwd` -cd $AZURE && git pull -cd $PWD - -# copy things -cp -r $AZURE/packer/builder/azure $PACKER/builder/ -cp -r $AZURE/packer/communicator/* $PACKER/communicator/ -cp -r $AZURE/packer/provisioner/azureVmCustomScriptExtension $PACKER/provisioner/ - -# remove legacy API client -rm -rf $PACKER/builder/azure/smapi - -# fix imports -find $PACKER/builder/azure/ -type f | grep ".go" | xargs sed -e 's/Azure\/packer-azure\/packer\/builder\/azure/mitchellh\/packer\/builder\/azure/g' -i '' diff --git a/builder/alicloud/ecs/access_config.go b/builder/alicloud/ecs/access_config.go index bcc7ba76f..b49bd0bbf 100644 --- a/builder/alicloud/ecs/access_config.go +++ b/builder/alicloud/ecs/access_config.go @@ -15,6 +15,7 @@ type AlicloudAccessConfig struct { AlicloudSecretKey string `mapstructure:"secret_key"` AlicloudRegion string `mapstructure:"region"` AlicloudSkipValidation bool `mapstructure:"skip_region_validation"` + SecurityToken string `mapstructure:"security_token"` } // Client for AlicloudClient @@ -22,7 +23,12 @@ func (c *AlicloudAccessConfig) Client() (*ecs.Client, error) { if err := c.loadAndValidate(); err != nil { return nil, err } - client := ecs.NewECSClient(c.AlicloudAccessKey, c.AlicloudSecretKey, common.Region(c.AlicloudRegion)) + if c.SecurityToken == "" { + c.SecurityToken = os.Getenv("SECURITY_TOKEN") + } + client := ecs.NewECSClientWithSecurityToken(c.AlicloudAccessKey, c.AlicloudSecretKey, + c.SecurityToken, common.Region(c.AlicloudRegion)) + client.SetBusinessInfo("Packer") if _, err := client.DescribeRegions(); err != nil { return nil, err diff --git a/builder/alicloud/ecs/builder.go b/builder/alicloud/ecs/builder.go index 8650dc2d1..99d759dc2 100644 --- a/builder/alicloud/ecs/builder.go +++ b/builder/alicloud/ecs/builder.go @@ -6,12 +6,13 @@ import ( "log" "fmt" + "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) // The unique ID for this builder @@ -88,12 +89,12 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe steps = []multistep.Step{ &stepPreValidate{ AlicloudDestImageName: b.config.AlicloudImageName, - ForceDelete: b.config.AlicloudImageForceDetele, + ForceDelete: b.config.AlicloudImageForceDelete, }, &stepCheckAlicloudSourceImage{ SourceECSImageId: b.config.AlicloudSourceImage, }, - &StepConfigAlicloudKeyPair{ + &stepConfigAlicloudKeyPair{ 
Debug: b.config.PackerDebug, KeyPairName: b.config.SSHKeyPairName, PrivateKeyFile: b.config.Comm.SSHPrivateKey, @@ -131,14 +132,15 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe RegionId: b.config.AlicloudRegion, InternetChargeType: b.config.InternetChargeType, InternetMaxBandwidthOut: b.config.InternetMaxBandwidthOut, - InstnaceName: b.config.InstanceName, + InstanceName: b.config.InstanceName, ZoneId: b.config.ZoneId, }) if b.chooseNetworkType() == VpcNet { - steps = append(steps, &setpConfigAlicloudEIP{ + steps = append(steps, &stepConfigAlicloudEIP{ AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, RegionId: b.config.AlicloudRegion, InternetChargeType: b.config.InternetChargeType, + InternetMaxBandwidthOut: b.config.InternetMaxBandwidthOut, }) } else { steps = append(steps, &stepConfigAlicloudPublicIP{ @@ -146,9 +148,9 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }) } steps = append(steps, + &stepAttachKeyPair{}, &stepRunAlicloudInstance{}, &stepMountAlicloudDisk{}, - &stepAttachKeyPar{}, &communicator.StepConnect{ Config: &b.config.RunConfig.Comm, Host: SSHHost( @@ -164,17 +166,17 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe ForceStop: b.config.ForceStopInstance, }, &stepDeleteAlicloudImageSnapshots{ - AlicloudImageForceDeteleSnapshots: b.config.AlicloudImageForceDeteleSnapshots, - AlicloudImageForceDetele: b.config.AlicloudImageForceDetele, + AlicloudImageForceDeleteSnapshots: b.config.AlicloudImageForceDeleteSnapshots, + AlicloudImageForceDelete: b.config.AlicloudImageForceDelete, AlicloudImageName: b.config.AlicloudImageName, }, &stepCreateAlicloudImage{}, - &setpRegionCopyAlicloudImage{ + &stepRegionCopyAlicloudImage{ AlicloudImageDestinationRegions: b.config.AlicloudImageDestinationRegions, AlicloudImageDestinationNames: b.config.AlicloudImageDestinationNames, RegionId: b.config.AlicloudRegion, }, - &setpShareAlicloudImage{ + &stepShareAlicloudImage{ AlicloudImageShareAccounts: b.config.AlicloudImageShareAccounts, AlicloudImageUNShareAccounts: b.config.AlicloudImageUNShareAccounts, RegionId: b.config.AlicloudRegion, diff --git a/builder/alicloud/ecs/image_config.go b/builder/alicloud/ecs/image_config.go index c7fed2af6..d747009ef 100644 --- a/builder/alicloud/ecs/image_config.go +++ b/builder/alicloud/ecs/image_config.go @@ -3,10 +3,11 @@ package ecs import ( "fmt" - "github.com/denverdino/aliyungo/common" - "github.com/hashicorp/packer/template/interpolate" "regexp" "strings" + + "github.com/denverdino/aliyungo/common" + "github.com/hashicorp/packer/template/interpolate" ) type AlicloudDiskDevice struct { @@ -31,8 +32,8 @@ type AlicloudImageConfig struct { AlicloudImageUNShareAccounts []string `mapstructure:"image_unshare_account"` AlicloudImageDestinationRegions []string `mapstructure:"image_copy_regions"` AlicloudImageDestinationNames []string `mapstructure:"image_copy_names"` - AlicloudImageForceDetele bool `mapstructure:"image_force_delete"` - AlicloudImageForceDeteleSnapshots bool `mapstructure:"image_force_delete_snapshots"` + AlicloudImageForceDelete bool `mapstructure:"image_force_delete"` + AlicloudImageForceDeleteSnapshots bool `mapstructure:"image_force_delete_snapshots"` AlicloudImageForceDeleteInstances bool `mapstructure:"image_force_delete_instances"` AlicloudImageSkipRegionValidation bool `mapstructure:"skip_region_validation"` AlicloudDiskDevices `mapstructure:",squash"` diff --git a/builder/alicloud/ecs/packer_helper.go 
b/builder/alicloud/ecs/packer_helper.go index 2486b81cf..15f257381 100644 --- a/builder/alicloud/ecs/packer_helper.go +++ b/builder/alicloud/ecs/packer_helper.go @@ -3,8 +3,8 @@ package ecs import ( "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func message(state multistep.StateBag, module string) { diff --git a/builder/alicloud/ecs/run_config.go b/builder/alicloud/ecs/run_config.go index b6098ce5c..2c52bb37b 100644 --- a/builder/alicloud/ecs/run_config.go +++ b/builder/alicloud/ecs/run_config.go @@ -57,7 +57,7 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { } if c.InstanceType == "" { - errs = append(errs, errors.New("An aliclod_instance_type must be specified")) + errs = append(errs, errors.New("An alicloud_instance_type must be specified")) } if c.UserData != "" && c.UserDataFile != "" { diff --git a/builder/alicloud/ecs/run_config_test.go b/builder/alicloud/ecs/run_config_test.go index d62f60b38..fe499e21f 100644 --- a/builder/alicloud/ecs/run_config_test.go +++ b/builder/alicloud/ecs/run_config_test.go @@ -2,6 +2,7 @@ package ecs import ( "io/ioutil" + "os" "testing" "github.com/hashicorp/packer/helper/communicator" @@ -68,6 +69,7 @@ func TestRunConfigPrepare_UserData(t *testing.T) { if err != nil { t.Fatalf("err: %s", err) } + defer os.Remove(tf.Name()) defer tf.Close() c.UserData = "foo" @@ -92,6 +94,7 @@ func TestRunConfigPrepare_UserDataFile(t *testing.T) { if err != nil { t.Fatalf("err: %s", err) } + defer os.Remove(tf.Name()) defer tf.Close() c.UserDataFile = tf.Name() diff --git a/builder/alicloud/ecs/ssh_helper.go b/builder/alicloud/ecs/ssh_helper.go index e08b3afff..aac733ece 100644 --- a/builder/alicloud/ecs/ssh_helper.go +++ b/builder/alicloud/ecs/ssh_helper.go @@ -7,7 +7,7 @@ import ( "time" packerssh "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) diff --git a/builder/alicloud/ecs/step_attach_keypair.go b/builder/alicloud/ecs/step_attach_keypair.go index a6cc01381..9a565d144 100644 --- a/builder/alicloud/ecs/step_attach_keypair.go +++ b/builder/alicloud/ecs/step_attach_keypair.go @@ -1,19 +1,21 @@ package ecs import ( + "context" "fmt" + "time" + "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "time" ) -type stepAttachKeyPar struct { +type stepAttachKeyPair struct { } -func (s *stepAttachKeyPar) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepAttachKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { keyPairName := state.Get("keyPair").(string) if keyPairName == "" { return multistep.ActionContinue @@ -48,7 +50,7 @@ func (s *stepAttachKeyPar) Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionContinue } -func (s *stepAttachKeyPar) Cleanup(state multistep.StateBag) { +func (s *stepAttachKeyPair) Cleanup(state multistep.StateBag) { keyPairName := state.Get("keyPair").(string) if keyPairName == "" { return diff --git a/builder/alicloud/ecs/step_check_source_image.go b/builder/alicloud/ecs/step_check_source_image.go index 4990f99d5..6090d8868 100644 --- a/builder/alicloud/ecs/step_check_source_image.go +++ b/builder/alicloud/ecs/step_check_source_image.go @@ -1,19 +1,20 @@ package ecs import ( + "context" "fmt" 
"github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepCheckAlicloudSourceImage struct { SourceECSImageId string } -func (s *stepCheckAlicloudSourceImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCheckAlicloudSourceImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) config := state.Get("config").(Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/alicloud/ecs/step_config_eip.go b/builder/alicloud/ecs/step_config_eip.go index b0845ba2a..f747068f2 100644 --- a/builder/alicloud/ecs/step_config_eip.go +++ b/builder/alicloud/ecs/step_config_eip.go @@ -1,28 +1,31 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) -type setpConfigAlicloudEIP struct { +type stepConfigAlicloudEIP struct { AssociatePublicIpAddress bool RegionId string InternetChargeType string + InternetMaxBandwidthOut int allocatedId string } -func (s *setpConfigAlicloudEIP) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepConfigAlicloudEIP) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) instance := state.Get("instance").(*ecs.InstanceAttributesType) ui.Say("Allocating eip") ipaddress, allocateId, err := client.AllocateEipAddress(&ecs.AllocateEipAddressArgs{ RegionId: common.Region(s.RegionId), InternetChargeType: common.InternetChargeType(s.InternetChargeType), + Bandwidth: s.InternetMaxBandwidthOut, }) if err != nil { state.Put("error", err) @@ -54,7 +57,7 @@ func (s *setpConfigAlicloudEIP) Run(state multistep.StateBag) multistep.StepActi return multistep.ActionContinue } -func (s *setpConfigAlicloudEIP) Cleanup(state multistep.StateBag) { +func (s *stepConfigAlicloudEIP) Cleanup(state multistep.StateBag) { if len(s.allocatedId) == 0 { return } diff --git a/builder/alicloud/ecs/step_config_key_pair.go b/builder/alicloud/ecs/step_config_key_pair.go index 22ccd28b6..5a1d4c91e 100644 --- a/builder/alicloud/ecs/step_config_key_pair.go +++ b/builder/alicloud/ecs/step_config_key_pair.go @@ -1,6 +1,7 @@ package ecs import ( + "context" "fmt" "io/ioutil" "os" @@ -8,11 +9,11 @@ import ( "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) -type StepConfigAlicloudKeyPair struct { +type stepConfigAlicloudKeyPair struct { Debug bool SSHAgentAuth bool DebugKeyPath string @@ -24,7 +25,7 @@ type StepConfigAlicloudKeyPair struct { keyName string } -func (s *StepConfigAlicloudKeyPair) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepConfigAlicloudKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.PrivateKeyFile != "" { @@ -107,7 +108,7 @@ func (s *StepConfigAlicloudKeyPair) Run(state multistep.StateBag) multistep.Step return multistep.ActionContinue } -func (s *StepConfigAlicloudKeyPair) Cleanup(state multistep.StateBag) { +func (s *stepConfigAlicloudKeyPair) Cleanup(state multistep.StateBag) { // If no key name is set, then we never created it, so just 
return // If we used an SSH private key file, do not go about deleting // keypairs diff --git a/builder/alicloud/ecs/step_config_public_ip.go b/builder/alicloud/ecs/step_config_public_ip.go index 65b7f3fe2..594f85603 100644 --- a/builder/alicloud/ecs/step_config_public_ip.go +++ b/builder/alicloud/ecs/step_config_public_ip.go @@ -1,19 +1,20 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepConfigAlicloudPublicIP struct { - publicIPAdress string - RegionId string + publicIPAddress string + RegionId string } -func (s *stepConfigAlicloudPublicIP) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepConfigAlicloudPublicIP) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) instance := state.Get("instance").(*ecs.InstanceAttributesType) @@ -24,7 +25,7 @@ func (s *stepConfigAlicloudPublicIP) Run(state multistep.StateBag) multistep.Ste ui.Say(fmt.Sprintf("Error allocating public ip: %s", err)) return multistep.ActionHalt } - s.publicIPAdress = ipaddress + s.publicIPAddress = ipaddress ui.Say(fmt.Sprintf("Allocated public ip address %s.", ipaddress)) state.Put("ipaddress", ipaddress) return multistep.ActionContinue diff --git a/builder/alicloud/ecs/step_config_security_group.go b/builder/alicloud/ecs/step_config_security_group.go index dc8c083b2..951f6e6fe 100644 --- a/builder/alicloud/ecs/step_config_security_group.go +++ b/builder/alicloud/ecs/step_config_security_group.go @@ -1,14 +1,15 @@ package ecs import ( + "context" "errors" "fmt" "time" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepConfigAlicloudSecurityGroup struct { @@ -20,7 +21,7 @@ type stepConfigAlicloudSecurityGroup struct { isCreate bool } -func (s *stepConfigAlicloudSecurityGroup) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepConfigAlicloudSecurityGroup) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) networkType := state.Get("networktype").(InstanceNetWork) diff --git a/builder/alicloud/ecs/step_config_vpc.go b/builder/alicloud/ecs/step_config_vpc.go index c4bdd59a3..8ce5663ed 100644 --- a/builder/alicloud/ecs/step_config_vpc.go +++ b/builder/alicloud/ecs/step_config_vpc.go @@ -1,14 +1,15 @@ package ecs import ( + "context" "errors" "fmt" "time" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepConfigAlicloudVPC struct { @@ -18,7 +19,7 @@ type stepConfigAlicloudVPC struct { isCreate bool } -func (s *stepConfigAlicloudVPC) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepConfigAlicloudVPC) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) @@ -84,7 +85,8 @@ func (s *stepConfigAlicloudVPC) Cleanup(state multistep.StateBag) { e, _ := err.(*common.Error) if (e.Code == "DependencyViolation.Instance" || e.Code == "DependencyViolation.RouteEntry" || e.Code == 
"DependencyViolation.VSwitch" || - e.Code == "DependencyViolation.SecurityGroup") && time.Now().Before(timeoutPoint) { + e.Code == "DependencyViolation.SecurityGroup" || + e.Code == "Forbbiden") && time.Now().Before(timeoutPoint) { time.Sleep(1 * time.Second) continue } diff --git a/builder/alicloud/ecs/step_config_vswitch.go b/builder/alicloud/ecs/step_config_vswitch.go index dfa858d0b..4bbb19773 100644 --- a/builder/alicloud/ecs/step_config_vswitch.go +++ b/builder/alicloud/ecs/step_config_vswitch.go @@ -1,14 +1,15 @@ package ecs import ( + "context" "errors" "fmt" "time" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepConfigAlicloudVSwitch struct { @@ -19,7 +20,7 @@ type stepConfigAlicloudVSwitch struct { VSwitchName string } -func (s *stepConfigAlicloudVSwitch) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepConfigAlicloudVSwitch) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) vpcId := state.Get("vpcid").(string) @@ -136,7 +137,7 @@ func (s *stepConfigAlicloudVSwitch) Cleanup(state multistep.StateBag) { e, _ := err.(*common.Error) if (e.Code == "IncorrectVSwitchStatus" || e.Code == "DependencyViolation" || e.Code == "DependencyViolation.HaVip" || - e.Code == "IncorretRouteEntryStatus") && time.Now().Before(timeoutPoint) { + e.Code == "IncorrectRouteEntryStatus") && time.Now().Before(timeoutPoint) { time.Sleep(1 * time.Second) continue } diff --git a/builder/alicloud/ecs/step_create_image.go b/builder/alicloud/ecs/step_create_image.go index 5060c78ca..ae5840c09 100644 --- a/builder/alicloud/ecs/step_create_image.go +++ b/builder/alicloud/ecs/step_create_image.go @@ -1,19 +1,20 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepCreateAlicloudImage struct { image *ecs.ImageType } -func (s *stepCreateAlicloudImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateAlicloudImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) diff --git a/builder/alicloud/ecs/step_create_instance.go b/builder/alicloud/ecs/step_create_instance.go index c4f33f19c..ff41a3df6 100644 --- a/builder/alicloud/ecs/step_create_instance.go +++ b/builder/alicloud/ecs/step_create_instance.go @@ -1,14 +1,15 @@ package ecs import ( + "context" "fmt" "io/ioutil" "log" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepCreateAlicloudInstance struct { @@ -20,12 +21,12 @@ type stepCreateAlicloudInstance struct { RegionId string InternetChargeType string InternetMaxBandwidthOut int - InstnaceName string + InstanceName string ZoneId string instance *ecs.InstanceAttributesType } -func (s *stepCreateAlicloudInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateAlicloudInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) config := 
state.Get("config").(Config) ui := state.Get("ui").(packer.Ui) @@ -62,7 +63,7 @@ func (s *stepCreateAlicloudInstance) Run(state multistep.StateBag) multistep.Ste IoOptimized: ioOptimized, VSwitchId: vswitchId, SecurityGroupId: securityGroupId, - InstanceName: s.InstnaceName, + InstanceName: s.InstanceName, Password: password, ZoneId: s.ZoneId, DataDisk: diskDeviceToDiskType(config.AlicloudImageConfig.ECSImagesDiskMappings), @@ -88,7 +89,7 @@ func (s *stepCreateAlicloudInstance) Run(state multistep.StateBag) multistep.Ste InternetMaxBandwidthOut: s.InternetMaxBandwidthOut, IoOptimized: ioOptimized, SecurityGroupId: securityGroupId, - InstanceName: s.InstnaceName, + InstanceName: s.InstanceName, Password: password, ZoneId: s.ZoneId, DataDisk: diskDeviceToDiskType(config.AlicloudImageConfig.ECSImagesDiskMappings), diff --git a/builder/alicloud/ecs/step_delete_images_snapshots.go b/builder/alicloud/ecs/step_delete_images_snapshots.go index 24104fef0..9e294c45f 100644 --- a/builder/alicloud/ecs/step_delete_images_snapshots.go +++ b/builder/alicloud/ecs/step_delete_images_snapshots.go @@ -1,28 +1,29 @@ package ecs import ( + "context" "fmt" "log" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepDeleteAlicloudImageSnapshots struct { - AlicloudImageForceDetele bool - AlicloudImageForceDeteleSnapshots bool + AlicloudImageForceDelete bool + AlicloudImageForceDeleteSnapshots bool AlicloudImageName string } -func (s *stepDeleteAlicloudImageSnapshots) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepDeleteAlicloudImageSnapshots) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) config := state.Get("config").(Config) ui.Say("Deleting image snapshots.") // Check for force delete - if s.AlicloudImageForceDetele { + if s.AlicloudImageForceDelete { images, _, err := client.DescribeImages(&ecs.DescribeImagesArgs{ RegionId: common.Region(config.AlicloudRegion), ImageName: s.AlicloudImageName, @@ -42,7 +43,7 @@ func (s *stepDeleteAlicloudImageSnapshots) Run(state multistep.StateBag) multist ui.Error(err.Error()) return multistep.ActionHalt } - if s.AlicloudImageForceDeteleSnapshots { + if s.AlicloudImageForceDeleteSnapshots { for _, diskDevice := range image.DiskDeviceMappings.DiskDeviceMapping { if err := client.DeleteSnapshot(diskDevice.SnapshotId); err != nil { err := fmt.Errorf("Deleting ECS snapshot failed: %s", err) diff --git a/builder/alicloud/ecs/step_mount_disk.go b/builder/alicloud/ecs/step_mount_disk.go index 222b27f08..642605bee 100644 --- a/builder/alicloud/ecs/step_mount_disk.go +++ b/builder/alicloud/ecs/step_mount_disk.go @@ -1,17 +1,18 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepMountAlicloudDisk struct { } -func (s *stepMountAlicloudDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepMountAlicloudDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) config := state.Get("config").(Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/alicloud/ecs/step_pre_validate.go b/builder/alicloud/ecs/step_pre_validate.go index 527d84fbf..6bc3b1060 100644 --- 
a/builder/alicloud/ecs/step_pre_validate.go +++ b/builder/alicloud/ecs/step_pre_validate.go @@ -1,12 +1,13 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepPreValidate struct { @@ -14,7 +15,7 @@ type stepPreValidate struct { ForceDelete bool } -func (s *stepPreValidate) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepPreValidate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.ForceDelete { ui.Say("Force delete flag found, skipping prevalidating image name.") diff --git a/builder/alicloud/ecs/step_region_copy_image.go b/builder/alicloud/ecs/step_region_copy_image.go index f92878712..c0de5eeb9 100644 --- a/builder/alicloud/ecs/step_region_copy_image.go +++ b/builder/alicloud/ecs/step_region_copy_image.go @@ -1,21 +1,22 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) -type setpRegionCopyAlicloudImage struct { +type stepRegionCopyAlicloudImage struct { AlicloudImageDestinationRegions []string AlicloudImageDestinationNames []string RegionId string } -func (s *setpRegionCopyAlicloudImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepRegionCopyAlicloudImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { if len(s.AlicloudImageDestinationRegions) == 0 { return multistep.ActionContinue } @@ -51,7 +52,7 @@ func (s *setpRegionCopyAlicloudImage) Run(state multistep.StateBag) multistep.St return multistep.ActionContinue } -func (s *setpRegionCopyAlicloudImage) Cleanup(state multistep.StateBag) { +func (s *stepRegionCopyAlicloudImage) Cleanup(state multistep.StateBag) { _, cancelled := state.GetOk(multistep.StateCancelled) _, halted := state.GetOk(multistep.StateHalted) if cancelled || halted { diff --git a/builder/alicloud/ecs/step_run_instance.go b/builder/alicloud/ecs/step_run_instance.go index a92637648..57a799183 100644 --- a/builder/alicloud/ecs/step_run_instance.go +++ b/builder/alicloud/ecs/step_run_instance.go @@ -1,17 +1,18 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepRunAlicloudInstance struct { } -func (s *stepRunAlicloudInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepRunAlicloudInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) instance := state.Get("instance").(*ecs.InstanceAttributesType) @@ -42,8 +43,8 @@ func (s *stepRunAlicloudInstance) Cleanup(state multistep.StateBag) { ui := state.Get("ui").(packer.Ui) client := state.Get("client").(*ecs.Client) instance := state.Get("instance").(*ecs.InstanceAttributesType) - instanceAttrubite, _ := client.DescribeInstanceAttribute(instance.InstanceId) - if instanceAttrubite.Status == ecs.Starting || instanceAttrubite.Status == ecs.Running { + instanceAttribute, _ := client.DescribeInstanceAttribute(instance.InstanceId) + if instanceAttribute.Status == ecs.Starting || instanceAttribute.Status == ecs.Running { if err := 
client.StopInstance(instance.InstanceId, true); err != nil { ui.Say(fmt.Sprintf("Error stopping instance %s, it may still be around %s", instance.InstanceId, err)) return diff --git a/builder/alicloud/ecs/step_share_image.go b/builder/alicloud/ecs/step_share_image.go index 50e5a640f..6e9ae6ed0 100644 --- a/builder/alicloud/ecs/step_share_image.go +++ b/builder/alicloud/ecs/step_share_image.go @@ -1,21 +1,22 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/common" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) -type setpShareAlicloudImage struct { +type stepShareAlicloudImage struct { AlicloudImageShareAccounts []string AlicloudImageUNShareAccounts []string RegionId string } -func (s *setpShareAlicloudImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepShareAlicloudImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) ui := state.Get("ui").(packer.Ui) alicloudImages := state.Get("alicloudimages").(map[string]string) @@ -36,7 +37,7 @@ func (s *setpShareAlicloudImage) Run(state multistep.StateBag) multistep.StepAct return multistep.ActionContinue } -func (s *setpShareAlicloudImage) Cleanup(state multistep.StateBag) { +func (s *stepShareAlicloudImage) Cleanup(state multistep.StateBag) { _, cancelled := state.GetOk(multistep.StateCancelled) _, halted := state.GetOk(multistep.StateHalted) if cancelled || halted { diff --git a/builder/alicloud/ecs/step_stop_instance.go b/builder/alicloud/ecs/step_stop_instance.go index b12a74c63..d57a52c12 100644 --- a/builder/alicloud/ecs/step_stop_instance.go +++ b/builder/alicloud/ecs/step_stop_instance.go @@ -1,18 +1,19 @@ package ecs import ( + "context" "fmt" "github.com/denverdino/aliyungo/ecs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepStopAlicloudInstance struct { ForceStop bool } -func (s *stepStopAlicloudInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepStopAlicloudInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*ecs.Client) instance := state.Get("instance").(*ecs.InstanceAttributesType) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/builder.go b/builder/amazon/chroot/builder.go index b6b5088a2..052429e2e 100644 --- a/builder/amazon/chroot/builder.go +++ b/builder/amazon/chroot/builder.go @@ -13,9 +13,9 @@ import ( awscommon "github.com/hashicorp/packer/builder/amazon/common" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) // The unique ID for this builder @@ -33,9 +33,10 @@ type Config struct { CommandWrapper string `mapstructure:"command_wrapper"` CopyFiles []string `mapstructure:"copy_files"` DevicePath string `mapstructure:"device_path"` + NVMEDevicePath string `mapstructure:"nvme_device_path"` FromScratch bool `mapstructure:"from_scratch"` MountOptions []string `mapstructure:"mount_options"` - MountPartition int `mapstructure:"mount_partition"` + MountPartition string `mapstructure:"mount_partition"` MountPath string `mapstructure:"mount_path"` PostMountCommands []string `mapstructure:"post_mount_commands"` 
PreMountCommands []string `mapstructure:"pre_mount_commands"` @@ -43,6 +44,7 @@ type Config struct { RootVolumeSize int64 `mapstructure:"root_volume_size"` SourceAmi string `mapstructure:"source_ami"` SourceAmiFilter awscommon.AmiFilterOptions `mapstructure:"source_ami_filter"` + RootVolumeTags awscommon.TagMap `mapstructure:"root_volume_tags"` ctx interpolate.Context } @@ -66,6 +68,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { "ami_description", "snapshot_tags", "tags", + "root_volume_tags", "command_wrapper", "post_mount_commands", "pre_mount_commands", @@ -112,8 +115,8 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { b.config.MountPath = "/mnt/packer-amazon-chroot-volumes/{{.Device}}" } - if b.config.MountPartition == 0 { - b.config.MountPartition = 1 + if b.config.MountPartition == "" { + b.config.MountPartition = "1" } // Accumulate any errors or warnings @@ -173,7 +176,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { return warns, errs } - log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey)) + log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey, b.config.Token)) return warns, nil } @@ -198,6 +201,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe state := new(multistep.BasicStateBag) state.Put("config", &b.config) state.Put("ec2", ec2conn) + state.Put("awsSession", session) state.Put("hook", hook) state.Put("ui", ui) state.Put("wrappedCommand", CommandWrapper(wrappedCommand)) @@ -228,6 +232,8 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe &StepPrepareDevice{}, &StepCreateVolume{ RootVolumeSize: b.config.RootVolumeSize, + RootVolumeTags: b.config.RootVolumeTags, + Ctx: b.config.ctx, }, &StepAttachVolume{}, &StepEarlyUnflock{}, @@ -305,7 +311,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe artifact := &awscommon.Artifact{ Amis: state.Get("amis").(map[string]string), BuilderIdValue: BuilderId, - Conn: ec2conn, + Session: session, } return artifact, nil diff --git a/builder/amazon/chroot/builder_test.go b/builder/amazon/chroot/builder_test.go index c13658286..c0e52a2f6 100644 --- a/builder/amazon/chroot/builder_test.go +++ b/builder/amazon/chroot/builder_test.go @@ -71,11 +71,16 @@ func TestBuilderPrepare_ChrootMounts(t *testing.T) { if err != nil { t.Errorf("err: %s", err) } +} + +func TestBuilderPrepare_ChrootMountsBadDefaults(t *testing.T) { + b := &Builder{} + config := testConfig() config["chroot_mounts"] = [][]string{ {"bad"}, } - warnings, err = b.Prepare(config) + warnings, err := b.Prepare(config) if len(warnings) > 0 { t.Fatalf("bad: %#v", warnings) } @@ -135,9 +140,14 @@ func TestBuilderPrepare_CopyFiles(t *testing.T) { if len(b.config.CopyFiles) != 1 && b.config.CopyFiles[0] != "/etc/resolv.conf" { t.Errorf("Was expecting default value for copy_files.") } +} + +func TestBuilderPrepare_CopyFilesNoDefault(t *testing.T) { + b := &Builder{} + config := testConfig() config["copy_files"] = []string{} - warnings, err = b.Prepare(config) + warnings, err := b.Prepare(config) if len(warnings) > 0 { t.Fatalf("bad: %#v", warnings) } diff --git a/builder/amazon/chroot/cleanup.go b/builder/amazon/chroot/cleanup.go index 0be3bef3f..0befac174 100644 --- a/builder/amazon/chroot/cleanup.go +++ b/builder/amazon/chroot/cleanup.go @@ -1,7 +1,7 @@ package chroot import ( - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // 
Cleanup is an interface that some steps implement for early cleanup. diff --git a/builder/amazon/chroot/communicator_test.go b/builder/amazon/chroot/communicator_test.go index 7017b9e02..43995b79a 100644 --- a/builder/amazon/chroot/communicator_test.go +++ b/builder/amazon/chroot/communicator_test.go @@ -1,8 +1,9 @@ package chroot import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestCommunicator_ImplementsCommunicator(t *testing.T) { diff --git a/builder/amazon/chroot/device_test.go b/builder/amazon/chroot/device_test.go new file mode 100644 index 000000000..a47fc7dff --- /dev/null +++ b/builder/amazon/chroot/device_test.go @@ -0,0 +1,10 @@ +package chroot + +import "testing" + +func TestDevicePrefixMatch(t *testing.T) { + /* + if devicePrefixMatch("nvme0n1") != "" { + } + */ +} diff --git a/builder/amazon/chroot/run_local_commands.go b/builder/amazon/chroot/run_local_commands.go index 024a208f8..fc1c01e2b 100644 --- a/builder/amazon/chroot/run_local_commands.go +++ b/builder/amazon/chroot/run_local_commands.go @@ -3,8 +3,8 @@ package chroot import ( "fmt" + sl "github.com/hashicorp/packer/common/shell-local" "github.com/hashicorp/packer/packer" - "github.com/hashicorp/packer/post-processor/shell-local" "github.com/hashicorp/packer/template/interpolate" ) @@ -21,7 +21,9 @@ func RunLocalCommands(commands []string, wrappedCommand CommandWrapper, ctx inte } ui.Say(fmt.Sprintf("Executing command: %s", command)) - comm := &shell_local.Communicator{} + comm := &sl.Communicator{ + ExecuteCommand: []string{"sh", "-c", command}, + } cmd := &packer.RemoteCmd{Command: command} if err := cmd.StartWithUi(comm, ui); err != nil { return fmt.Errorf("Error executing command: %s", err) diff --git a/builder/amazon/chroot/step_attach_volume.go b/builder/amazon/chroot/step_attach_volume.go index 6b83f440b..7044e0218 100644 --- a/builder/amazon/chroot/step_attach_volume.go +++ b/builder/amazon/chroot/step_attach_volume.go @@ -1,15 +1,15 @@ package chroot import ( - "errors" + "context" "fmt" "strings" - "time" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepAttachVolume attaches the previously created volume to an @@ -23,7 +23,7 @@ type StepAttachVolume struct { volumeId string } -func (s *StepAttachVolume) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepAttachVolume) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) device := state.Get("device").(string) instance := state.Get("instance").(*ec2.Instance) @@ -51,35 +51,7 @@ func (s *StepAttachVolume) Run(state multistep.StateBag) multistep.StepAction { s.volumeId = volumeId // Wait for the volume to become attached - stateChange := awscommon.StateChangeConf{ - Pending: []string{"attaching"}, - StepState: state, - Target: "attached", - Refresh: func() (interface{}, string, error) { - attempts := 0 - for attempts < 30 { - resp, err := ec2conn.DescribeVolumes(&ec2.DescribeVolumesInput{VolumeIds: []*string{&volumeId}}) - if err != nil { - return nil, "", err - } - if len(resp.Volumes[0].Attachments) > 0 { - a := resp.Volumes[0].Attachments[0] - return a, *a.State, nil - } - // When Attachment on volume is not present sleep for 2s and retry - attempts += 1 - ui.Say(fmt.Sprintf( - "Volume %s show no attachments. 
Attempt %d/30. Sleeping for 2s and will retry.", - volumeId, attempts)) - time.Sleep(2 * time.Second) - } - - // Attachment on volume is not present after all attempts - return nil, "", errors.New("No attachments on volume.") - }, - } - - _, err = awscommon.WaitForState(&stateChange) + err = awscommon.WaitUntilVolumeAttached(ctx, ec2conn, s.volumeId) if err != nil { err := fmt.Errorf("Error waiting for volume: %s", err) state.Put("error", err) @@ -115,26 +87,7 @@ func (s *StepAttachVolume) CleanupFunc(state multistep.StateBag) error { s.attached = false // Wait for the volume to detach - stateChange := awscommon.StateChangeConf{ - Pending: []string{"attaching", "attached", "detaching"}, - StepState: state, - Target: "detached", - Refresh: func() (interface{}, string, error) { - resp, err := ec2conn.DescribeVolumes(&ec2.DescribeVolumesInput{VolumeIds: []*string{&s.volumeId}}) - if err != nil { - return nil, "", err - } - - v := resp.Volumes[0] - if len(v.Attachments) > 0 { - return v, *v.Attachments[0].State, nil - } else { - return v, "detached", nil - } - }, - } - - _, err = awscommon.WaitForState(&stateChange) + err = awscommon.WaitUntilVolumeDetached(aws.BackgroundContext(), ec2conn, s.volumeId) if err != nil { return fmt.Errorf("Error waiting for volume: %s", err) } diff --git a/builder/amazon/chroot/step_check_root_device.go b/builder/amazon/chroot/step_check_root_device.go index fa0097aa4..5ff66d055 100644 --- a/builder/amazon/chroot/step_check_root_device.go +++ b/builder/amazon/chroot/step_check_root_device.go @@ -1,17 +1,18 @@ package chroot import ( + "context" "fmt" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCheckRootDevice makes sure the root device on the AMI is EBS-backed. type StepCheckRootDevice struct{} -func (s *StepCheckRootDevice) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCheckRootDevice) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { image := state.Get("source_image").(*ec2.Image) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/step_chroot_provision.go b/builder/amazon/chroot/step_chroot_provision.go index 941e071b1..be8667077 100644 --- a/builder/amazon/chroot/step_chroot_provision.go +++ b/builder/amazon/chroot/step_chroot_provision.go @@ -1,17 +1,18 @@ package chroot import ( + "context" "log" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepChrootProvision provisions the instance within a chroot. type StepChrootProvision struct { } -func (s *StepChrootProvision) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepChrootProvision) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { hook := state.Get("hook").(packer.Hook) mountPath := state.Get("mount_path").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/step_copy_files.go b/builder/amazon/chroot/step_copy_files.go index 534ec268a..a973a5d81 100644 --- a/builder/amazon/chroot/step_copy_files.go +++ b/builder/amazon/chroot/step_copy_files.go @@ -2,11 +2,13 @@ package chroot import ( "bytes" + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "path/filepath" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepCopyFiles copies some files from the host into the chroot environment. 
@@ -18,7 +20,7 @@ type StepCopyFiles struct { files []string } -func (s *StepCopyFiles) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCopyFiles) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) mountPath := state.Get("mount_path").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/step_create_volume.go b/builder/amazon/chroot/step_create_volume.go index c1ecbf1a9..4adf3074f 100644 --- a/builder/amazon/chroot/step_create_volume.go +++ b/builder/amazon/chroot/step_create_volume.go @@ -1,14 +1,16 @@ package chroot import ( + "context" "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/template/interpolate" ) // StepCreateVolume creates a new volume from the snapshot of the root @@ -19,14 +21,36 @@ import ( type StepCreateVolume struct { volumeId string RootVolumeSize int64 + RootVolumeTags awscommon.TagMap + Ctx interpolate.Context } -func (s *StepCreateVolume) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateVolume) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ec2conn := state.Get("ec2").(*ec2.EC2) instance := state.Get("instance").(*ec2.Instance) ui := state.Get("ui").(packer.Ui) + volTags, err := s.RootVolumeTags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) + if err != nil { + err := fmt.Errorf("Error tagging volumes: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + // Collect tags for tagging on resource creation + var tagSpecs []*ec2.TagSpecification + + if len(volTags) > 0 { + runVolTags := &ec2.TagSpecification{ + ResourceType: aws.String("volume"), + Tags: volTags, + } + + tagSpecs = append(tagSpecs, runVolTags) + } + var createVolume *ec2.CreateVolumeInput if config.FromScratch { createVolume = &ec2.CreateVolumeInput{ @@ -68,6 +92,10 @@ func (s *StepCreateVolume) Run(state multistep.StateBag) multistep.StepAction { } } + if len(tagSpecs) > 0 { + createVolume.SetTagSpecifications(tagSpecs) + volTags.Report(ui) + } log.Printf("Create args: %+v", createVolume) createVolumeResp, err := ec2conn.CreateVolume(createVolume) @@ -83,22 +111,7 @@ func (s *StepCreateVolume) Run(state multistep.StateBag) multistep.StepAction { log.Printf("Volume ID: %s", s.volumeId) // Wait for the volume to become ready - stateChange := awscommon.StateChangeConf{ - Pending: []string{"creating"}, - StepState: state, - Target: "available", - Refresh: func() (interface{}, string, error) { - resp, err := ec2conn.DescribeVolumes(&ec2.DescribeVolumesInput{VolumeIds: []*string{&s.volumeId}}) - if err != nil { - return nil, "", err - } - - v := resp.Volumes[0] - return v, *v.State, nil - }, - } - - _, err = awscommon.WaitForState(&stateChange) + err = awscommon.WaitUntilVolumeAvailable(ctx, ec2conn, s.volumeId) if err != nil { err := fmt.Errorf("Error waiting for volume: %s", err) state.Put("error", err) diff --git a/builder/amazon/chroot/step_early_cleanup.go b/builder/amazon/chroot/step_early_cleanup.go index edcef49fc..34e7817df 100644 --- a/builder/amazon/chroot/step_early_cleanup.go +++ b/builder/amazon/chroot/step_early_cleanup.go @@ -1,17 +1,19 @@ package chroot import ( + "context" "fmt" - 
"github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepEarlyCleanup performs some of the cleanup steps early in order to // prepare for snapshotting and creating an AMI. type StepEarlyCleanup struct{} -func (s *StepEarlyCleanup) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepEarlyCleanup) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) cleanupKeys := []string{ "copy_files_cleanup", diff --git a/builder/amazon/chroot/step_early_unflock.go b/builder/amazon/chroot/step_early_unflock.go index 4b24b8581..225e91fb9 100644 --- a/builder/amazon/chroot/step_early_unflock.go +++ b/builder/amazon/chroot/step_early_unflock.go @@ -1,16 +1,18 @@ package chroot import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepEarlyUnflock unlocks the flock. type StepEarlyUnflock struct{} -func (s *StepEarlyUnflock) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepEarlyUnflock) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { cleanup := state.Get("flock_cleanup").(Cleanup) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/step_flock.go b/builder/amazon/chroot/step_flock.go index 648ec2acc..d75adc7f0 100644 --- a/builder/amazon/chroot/step_flock.go +++ b/builder/amazon/chroot/step_flock.go @@ -1,12 +1,14 @@ package chroot import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "os" "path/filepath" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepFlock provisions the instance within a chroot. @@ -17,7 +19,7 @@ type StepFlock struct { fh *os.File } -func (s *StepFlock) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepFlock) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) lockfile := "/var/lock/packer-chroot/lock" diff --git a/builder/amazon/chroot/step_instance_info.go b/builder/amazon/chroot/step_instance_info.go index 00fd10dde..558e28a8d 100644 --- a/builder/amazon/chroot/step_instance_info.go +++ b/builder/amazon/chroot/step_instance_info.go @@ -1,28 +1,29 @@ package chroot import ( + "context" "fmt" "log" "github.com/aws/aws-sdk-go/aws/ec2metadata" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepInstanceInfo verifies that this builder is running on an EC2 instance. 
type StepInstanceInfo struct{} -func (s *StepInstanceInfo) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepInstanceInfo) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) + session := state.Get("awsSession").(*session.Session) ui := state.Get("ui").(packer.Ui) // Get our own instance ID ui.Say("Gathering information about this EC2 instance...") - sess := session.New() - ec2meta := ec2metadata.New(sess) + ec2meta := ec2metadata.New(session) identity, err := ec2meta.GetInstanceIdentityDocument() if err != nil { err := fmt.Errorf( diff --git a/builder/amazon/chroot/step_mount_device.go b/builder/amazon/chroot/step_mount_device.go index 8f740bd6e..38ec62164 100644 --- a/builder/amazon/chroot/step_mount_device.go +++ b/builder/amazon/chroot/step_mount_device.go @@ -2,6 +2,7 @@ package chroot import ( "bytes" + "context" "fmt" "log" "os" @@ -9,9 +10,9 @@ import ( "strings" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type mountPathData struct { @@ -25,15 +26,19 @@ type mountPathData struct { // mount_device_cleanup CleanupFunc - To perform early cleanup type StepMountDevice struct { MountOptions []string - MountPartition int + MountPartition string mountPath string } -func (s *StepMountDevice) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepMountDevice) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) device := state.Get("device").(string) + if config.NVMEDevicePath != "" { + // customizable device path for mounting NVME block devices on c5 and m5 HVM + device = config.NVMEDevicePath + } wrappedCommand := state.Get("wrappedCommand").(CommandWrapper) var virtualizationType string @@ -46,6 +51,7 @@ func (s *StepMountDevice) Run(state multistep.StateBag) multistep.StepAction { } ctx := config.ctx + ctx.Data = &mountPathData{Device: filepath.Base(device)} mountPath, err := interpolate.Render(config.MountPath, &ctx) @@ -74,8 +80,9 @@ func (s *StepMountDevice) Run(state multistep.StateBag) multistep.StepAction { } deviceMount := device - if virtualizationType == "hvm" { - deviceMount = fmt.Sprintf("%s%d", device, s.MountPartition) + + if virtualizationType == "hvm" && s.MountPartition != "0" { + deviceMount = fmt.Sprintf("%s%s", device, s.MountPartition) } state.Put("deviceMount", deviceMount) @@ -96,7 +103,7 @@ func (s *StepMountDevice) Run(state multistep.StateBag) multistep.StepAction { ui.Error(err.Error()) return multistep.ActionHalt } - + log.Printf("[DEBUG] (step mount) mount command is %s", mountCommand) cmd := ShellCommand(mountCommand) cmd.Stderr = stderr if err := cmd.Run(); err != nil { diff --git a/builder/amazon/chroot/step_mount_extra.go b/builder/amazon/chroot/step_mount_extra.go index 3bd8951c7..ffd8ac027 100644 --- a/builder/amazon/chroot/step_mount_extra.go +++ b/builder/amazon/chroot/step_mount_extra.go @@ -2,12 +2,14 @@ package chroot import ( "bytes" + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "os" "os/exec" "syscall" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepMountExtra mounts the attached device. 
@@ -18,7 +20,7 @@ type StepMountExtra struct { mounts []string } -func (s *StepMountExtra) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepMountExtra) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) mountPath := state.Get("mount_path").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/step_post_mount_commands.go b/builder/amazon/chroot/step_post_mount_commands.go index de927f9aa..a00e8e1bf 100644 --- a/builder/amazon/chroot/step_post_mount_commands.go +++ b/builder/amazon/chroot/step_post_mount_commands.go @@ -1,8 +1,10 @@ package chroot import ( + "context" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type postMountCommandsData struct { @@ -16,7 +18,7 @@ type StepPostMountCommands struct { Commands []string } -func (s *StepPostMountCommands) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPostMountCommands) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) device := state.Get("device").(string) mountPath := state.Get("mount_path").(string) diff --git a/builder/amazon/chroot/step_pre_mount_commands.go b/builder/amazon/chroot/step_pre_mount_commands.go index 63aa6d15a..ce3c26e02 100644 --- a/builder/amazon/chroot/step_pre_mount_commands.go +++ b/builder/amazon/chroot/step_pre_mount_commands.go @@ -1,8 +1,10 @@ package chroot import ( + "context" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type preMountCommandsData struct { @@ -14,7 +16,7 @@ type StepPreMountCommands struct { Commands []string } -func (s *StepPreMountCommands) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPreMountCommands) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) device := state.Get("device").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/step_prepare_device.go b/builder/amazon/chroot/step_prepare_device.go index 6cbefb2cc..0939d33cd 100644 --- a/builder/amazon/chroot/step_prepare_device.go +++ b/builder/amazon/chroot/step_prepare_device.go @@ -1,19 +1,20 @@ package chroot import ( + "context" "fmt" "log" "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepPrepareDevice finds an available device and sets it. type StepPrepareDevice struct { } -func (s *StepPrepareDevice) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPrepareDevice) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/chroot/step_register_ami.go b/builder/amazon/chroot/step_register_ami.go index a19266f57..4f3ad54c4 100644 --- a/builder/amazon/chroot/step_register_ami.go +++ b/builder/amazon/chroot/step_register_ami.go @@ -1,13 +1,14 @@ package chroot import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepRegisterAMI creates the AMI. 
@@ -17,7 +18,7 @@ type StepRegisterAMI struct { EnableAMISriovNetSupport bool } -func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRegisterAMI) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ec2conn := state.Get("ec2").(*ec2.EC2) snapshotId := state.Get("snapshot_id").(string) @@ -101,16 +102,8 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction { amis[*ec2conn.Config.Region] = *registerResp.ImageId state.Put("amis", amis) - // Wait for the image to become ready - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending"}, - Target: "available", - Refresh: awscommon.AMIStateRefreshFunc(ec2conn, *registerResp.ImageId), - StepState: state, - } - ui.Say("Waiting for AMI to become ready...") - if _, err := awscommon.WaitForState(&stateChange); err != nil { + if err := awscommon.WaitUntilAMIAvailable(ctx, ec2conn, *registerResp.ImageId); err != nil { err := fmt.Errorf("Error waiting for AMI: %s", err) state.Put("error", err) ui.Error(err.Error()) diff --git a/builder/amazon/chroot/step_snapshot.go b/builder/amazon/chroot/step_snapshot.go index 5cd646662..4c66b4fd1 100644 --- a/builder/amazon/chroot/step_snapshot.go +++ b/builder/amazon/chroot/step_snapshot.go @@ -1,14 +1,14 @@ package chroot import ( - "errors" + "context" "fmt" "time" "github.com/aws/aws-sdk-go/service/ec2" awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepSnapshot creates a snapshot of the created volume. @@ -19,7 +19,7 @@ type StepSnapshot struct { snapshotId string } -func (s *StepSnapshot) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepSnapshot) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) ui := state.Get("ui").(packer.Ui) volumeId := state.Get("volume_id").(string) @@ -43,26 +43,7 @@ func (s *StepSnapshot) Run(state multistep.StateBag) multistep.StepAction { ui.Message(fmt.Sprintf("Snapshot ID: %s", s.snapshotId)) // Wait for the snapshot to be ready - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending"}, - StepState: state, - Target: "completed", - Refresh: func() (interface{}, string, error) { - resp, err := ec2conn.DescribeSnapshots(&ec2.DescribeSnapshotsInput{SnapshotIds: []*string{&s.snapshotId}}) - if err != nil { - return nil, "", err - } - - if len(resp.Snapshots) == 0 { - return nil, "", errors.New("No snapshots found.") - } - - s := resp.Snapshots[0] - return s, *s.State, nil - }, - } - - _, err = awscommon.WaitForState(&stateChange) + err = awscommon.WaitUntilSnapshotDone(ctx, ec2conn, s.snapshotId) if err != nil { err := fmt.Errorf("Error waiting for snapshot: %s", err) state.Put("error", err) diff --git a/builder/amazon/common/access_config.go b/builder/amazon/common/access_config.go index 73e50b862..862a64d21 100644 --- a/builder/amazon/common/access_config.go +++ b/builder/amazon/common/access_config.go @@ -3,10 +3,11 @@ package common import ( "fmt" "log" - "os" + "strings" "time" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/ec2metadata" "github.com/aws/aws-sdk-go/aws/session" @@ -16,15 +17,16 @@ import ( // AccessConfig is for common configuration related to AWS access type AccessConfig struct { - AccessKey 
string `mapstructure:"access_key"` - CustomEndpointEc2 string `mapstructure:"custom_endpoint_ec2"` - MFACode string `mapstructure:"mfa_code"` - ProfileName string `mapstructure:"profile"` - RawRegion string `mapstructure:"region"` - SecretKey string `mapstructure:"secret_key"` - SkipValidation bool `mapstructure:"skip_region_validation"` - Token string `mapstructure:"token"` - session *session.Session + AccessKey string `mapstructure:"access_key"` + CustomEndpointEc2 string `mapstructure:"custom_endpoint_ec2"` + MFACode string `mapstructure:"mfa_code"` + ProfileName string `mapstructure:"profile"` + RawRegion string `mapstructure:"region"` + SecretKey string `mapstructure:"secret_key"` + SkipValidation bool `mapstructure:"skip_region_validation"` + SkipMetadataApiCheck bool `mapstructure:"skip_metadata_api_check"` + Token string `mapstructure:"token"` + session *session.Session } // Config returns a valid aws.Config object for access to AWS services, or @@ -34,13 +36,14 @@ func (c *AccessConfig) Session() (*session.Session, error) { return c.session, nil } - config := aws.NewConfig().WithMaxRetries(11).WithCredentialsChainVerboseErrors(true) + config := aws.NewConfig().WithCredentialsChainVerboseErrors(true) - if c.ProfileName != "" { - if err := os.Setenv("AWS_PROFILE", c.ProfileName); err != nil { - return nil, fmt.Errorf("Set env error: %s", err) - } - } else if c.RawRegion != "" { + staticCreds := credentials.NewStaticCredentials(c.AccessKey, c.SecretKey, c.Token) + if _, err := staticCreds.Get(); err != credentials.ErrStaticCredentialsEmpty { + config.WithCredentials(staticCreds) + } + + if c.RawRegion != "" { config = config.WithRegion(c.RawRegion) } else if region := c.metadataRegion(); region != "" { config = config.WithRegion(region) @@ -50,25 +53,15 @@ func (c *AccessConfig) Session() (*session.Session, error) { config = config.WithEndpoint(c.CustomEndpointEc2) } - if c.AccessKey != "" { - creds := credentials.NewChainCredentials( - []credentials.Provider{ - &credentials.StaticProvider{ - Value: credentials.Value{ - AccessKeyID: c.AccessKey, - SecretAccessKey: c.SecretKey, - SessionToken: c.Token, - }, - }, - }) - config = config.WithCredentials(creds) - } - opts := session.Options{ SharedConfigState: session.SharedConfigEnable, Config: *config, } + if c.ProfileName != "" { + opts.Profile = c.ProfileName + } + if c.MFACode != "" { opts.AssumeRoleTokenProvider = func() (string, error) { return c.MFACode, nil @@ -82,11 +75,37 @@ func (c *AccessConfig) Session() (*session.Session, error) { } else { log.Printf("Found region %s", *sess.Config.Region) c.session = sess - } + cp, err := c.session.Config.Credentials.Get() + if err != nil { + if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "NoCredentialProviders" { + return nil, fmt.Errorf("No valid credential sources found for AWS Builder. 
" + + "Please see https://www.packer.io/docs/builders/amazon.html#specifying-amazon-credentials " + + "for more information on providing credentials for the AWS Builder.") + } else { + return nil, fmt.Errorf("Error loading credentials for AWS Provider: %s", err) + } + } + log.Printf("[INFO] AWS Auth provider used: %q", cp.ProviderName) + } return c.session, nil } +func (c *AccessConfig) SessionRegion() string { + if c.session == nil { + panic("access config session should be set.") + } + return aws.StringValue(c.session.Config.Region) +} + +func (c *AccessConfig) IsGovCloud() bool { + return strings.HasPrefix(c.SessionRegion(), "us-gov-") +} + +func (c *AccessConfig) IsChinaCloud() bool { + return strings.HasPrefix(c.SessionRegion(), "cn-") +} + // metadataRegion returns the region from the metadata service func (c *AccessConfig) metadataRegion() string { @@ -108,6 +127,17 @@ func (c *AccessConfig) metadataRegion() string { func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error { var errs []error + + if c.SkipMetadataApiCheck { + log.Println("(WARN) skip_metadata_api_check ignored.") + } + // Either both access and secret key must be set or neither of them should + // be. + if (len(c.AccessKey) > 0) != (len(c.SecretKey) > 0) { + errs = append(errs, + fmt.Errorf("`access_key` and `secret_key` must both be either set or not set.")) + } + if c.RawRegion != "" && !c.SkipValidation { if valid := ValidateRegion(c.RawRegion); !valid { errs = append(errs, fmt.Errorf("Unknown region: %s", c.RawRegion)) diff --git a/builder/amazon/common/access_config_test.go b/builder/amazon/common/access_config_test.go index 20d4851ba..9c543816a 100644 --- a/builder/amazon/common/access_config_test.go +++ b/builder/amazon/common/access_config_test.go @@ -2,6 +2,9 @@ package common import ( "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/session" ) func testAccessConfig() *AccessConfig { @@ -38,3 +41,20 @@ func TestAccessConfigPrepare_Region(t *testing.T) { c.SkipValidation = false } + +func TestAccessConfigPrepare_RegionRestricted(t *testing.T) { + c := testAccessConfig() + + // Create a Session with a custom region + c.session = session.Must(session.NewSession(&aws.Config{ + Region: aws.String("us-gov-west-1"), + })) + + if err := c.Prepare(nil); err != nil { + t.Fatalf("shouldn't have err: %s", err) + } + + if !c.IsGovCloud() { + t.Fatal("We should be in gov region.") + } +} diff --git a/builder/amazon/common/ami_config.go b/builder/amazon/common/ami_config.go index aa0792e3a..a3b2d2b4e 100644 --- a/builder/amazon/common/ami_config.go +++ b/builder/amazon/common/ami_config.go @@ -2,6 +2,7 @@ package common import ( "fmt" + "log" "github.com/hashicorp/packer/template/interpolate" ) @@ -16,7 +17,7 @@ type AMIConfig struct { AMIProductCodes []string `mapstructure:"ami_product_codes"` AMIRegions []string `mapstructure:"ami_regions"` AMISkipRegionValidation bool `mapstructure:"skip_region_validation"` - AMITags map[string]string `mapstructure:"tags"` + AMITags TagMap `mapstructure:"tags"` AMIENASupport bool `mapstructure:"ena_support"` AMISriovNetSupport bool `mapstructure:"sriov_support"` AMIForceDeregister bool `mapstructure:"force_deregister"` @@ -24,7 +25,7 @@ type AMIConfig struct { AMIEncryptBootVolume bool `mapstructure:"encrypt_boot"` AMIKmsKeyId string `mapstructure:"kms_key_id"` AMIRegionKMSKeyIDs map[string]string `mapstructure:"region_kms_key_ids"` - SnapshotTags map[string]string `mapstructure:"snapshot_tags"` + SnapshotTags TagMap `mapstructure:"snapshot_tags"` 
SnapshotUsers []string `mapstructure:"snapshot_users"` SnapshotGroups []string `mapstructure:"snapshot_groups"` } @@ -41,22 +42,20 @@ func stringInSlice(s []string, searchstr string) bool { func (c *AMIConfig) Prepare(accessConfig *AccessConfig, ctx *interpolate.Context) []error { var errs []error - if accessConfig != nil { - session, err := accessConfig.Session() - if err != nil { - errs = append(errs, err) - } else { - region := *session.Config.Region - if stringInSlice(c.AMIRegions, region) { - errs = append(errs, fmt.Errorf("Cannot copy AMI to AWS session region '%s', please remove it from `ami_regions`.", region)) - } - } - } - if c.AMIName == "" { errs = append(errs, fmt.Errorf("ami_name must be specified")) } + // Make sure that if we have region_kms_key_ids defined, + // the regions in region_kms_key_ids are also in ami_regions + if len(c.AMIRegionKMSKeyIDs) > 0 { + for kmsKeyRegion := range c.AMIRegionKMSKeyIDs { + if !stringInSlice(c.AMIRegions, kmsKeyRegion) { + errs = append(errs, fmt.Errorf("Region %s is in region_kms_key_ids but not in ami_regions", kmsKeyRegion)) + } + } + } + if len(c.AMIRegions) > 0 { regionSet := make(map[string]struct{}) regions := make([]string, 0, len(c.AMIRegions)) @@ -84,21 +83,17 @@ func (c *AMIConfig) Prepare(accessConfig *AccessConfig, ctx *interpolate.Context errs = append(errs, fmt.Errorf("Region %s is in ami_regions but not in region_kms_key_ids", region)) } } - + if (accessConfig != nil) && (region == accessConfig.RawRegion) { + // make sure we don't try to copy to the region we originally + // create the AMI in. + log.Printf("Cannot copy AMI to AWS session region '%s', deleting it from `ami_regions`.", region) + continue + } regions = append(regions, region) } c.AMIRegions = regions } - // Make sure that if we have region_kms_key_ids defined, - // the regions in region_kms_key_ids are also in ami_regions - if len(c.AMIRegionKMSKeyIDs) > 0 { - for kmsKeyRegion := range c.AMIRegionKMSKeyIDs { - if !stringInSlice(c.AMIRegions, kmsKeyRegion) { - errs = append(errs, fmt.Errorf("Region %s is in region_kms_key_ids but not in ami_regions", kmsKeyRegion)) - } - } - } if len(c.AMIUsers) > 0 && c.AMIEncryptBootVolume { errs = append(errs, fmt.Errorf("Cannot share AMI with encrypted boot volume")) diff --git a/builder/amazon/common/ami_config_test.go b/builder/amazon/common/ami_config_test.go index 5f130130c..120c88bfc 100644 --- a/builder/amazon/common/ami_config_test.go +++ b/builder/amazon/common/ami_config_test.go @@ -11,6 +11,12 @@ func testAMIConfig() *AMIConfig { } } +func getFakeAccessConfig(region string) *AccessConfig { + return &AccessConfig{ + RawRegion: region, + } +} + func TestAMIConfigPrepare_name(t *testing.T) { c := testAMIConfig() if err := c.Prepare(nil, nil); err != nil { @@ -118,6 +124,15 @@ func TestAMIConfigPrepare_regions(t *testing.T) { if err := c.Prepare(nil, nil); err == nil { t.Fatal("should have error b/c theres a region in in ami_regions that isn't in the key map") } + + // allow rawregion to exist in ami_regions list. 
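Annotation: `AMIConfig.Prepare` above now enforces two region rules: every region named in `region_kms_key_ids` must also appear in `ami_regions`, and the region the build itself runs in is silently dropped from `ami_regions` rather than rejected. A hypothetical condensation of those checks, standard library only; the helper name is not part of the patch.

```
package common

import (
	"fmt"
	"log"
)

// validateAndFilterRegions condenses the checks added to AMIConfig.Prepare.
func validateAndFilterRegions(amiRegions []string, regionKMSKeyIDs map[string]string, sessionRegion string) ([]string, []error) {
	var errs []error

	inAMIRegions := make(map[string]bool, len(amiRegions))
	for _, r := range amiRegions {
		inAMIRegions[r] = true
	}

	// Every region that has a KMS key mapped must also be a copy target.
	for kmsRegion := range regionKMSKeyIDs {
		if !inAMIRegions[kmsRegion] {
			errs = append(errs, fmt.Errorf("Region %s is in region_kms_key_ids but not in ami_regions", kmsRegion))
		}
	}

	// Never try to copy the AMI into the region it is being built in;
	// drop it from the list instead of erroring, as the new Prepare does.
	filtered := make([]string, 0, len(amiRegions))
	for _, r := range amiRegions {
		if r == sessionRegion {
			log.Printf("Cannot copy AMI to AWS session region '%s', deleting it from `ami_regions`.", r)
			continue
		}
		filtered = append(filtered, r)
	}
	return filtered, errs
}
```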
+ accessConf := getFakeAccessConfig("us-east-1") + c.AMIRegions = []string{"us-east-1", "us-west-1", "us-east-2"} + c.AMIRegionKMSKeyIDs = nil + if err := c.Prepare(accessConf, nil); err != nil { + t.Fatal("should allow user to have the raw region in ami_regions") + } + } func TestAMIConfigPrepare_Share_EncryptedBoot(t *testing.T) { diff --git a/builder/amazon/common/artifact.go b/builder/amazon/common/artifact.go index dc50f7721..90d7dec10 100644 --- a/builder/amazon/common/artifact.go +++ b/builder/amazon/common/artifact.go @@ -21,7 +21,7 @@ type Artifact struct { BuilderIdValue string // EC2 connection for performing API stuff. - Conn *ec2.EC2 + Session *session.Session } func (a *Artifact) BuilderId() string { @@ -69,15 +69,9 @@ func (a *Artifact) Destroy() error { for region, imageId := range a.Amis { log.Printf("Deregistering image ID (%s) from region (%s)", imageId, region) - regionConfig := &aws.Config{ - Credentials: a.Conn.Config.Credentials, - Region: aws.String(region), - } - session, err := session.NewSession(regionConfig) - if err != nil { - return err - } - regionConn := ec2.New(session) + regionConn := ec2.New(a.Session, &aws.Config{ + Region: aws.String(region), + }) // Get image metadata imageResp, err := regionConn.DescribeImages(&ec2.DescribeImagesInput{ diff --git a/builder/amazon/common/block_device.go b/builder/amazon/common/block_device.go index 57f6a4307..c88e1cbb5 100644 --- a/builder/amazon/common/block_device.go +++ b/builder/amazon/common/block_device.go @@ -1,6 +1,7 @@ package common import ( + "fmt" "strings" "github.com/aws/aws-sdk-go/aws" @@ -19,6 +20,7 @@ type BlockDevice struct { VirtualName string `mapstructure:"virtual_name"` VolumeType string `mapstructure:"volume_type"` VolumeSize int64 `mapstructure:"volume_size"` + KmsKeyId string `mapstructure:"kms_key_id"` } type BlockDevices struct { @@ -73,6 +75,10 @@ func buildBlockDevices(b []BlockDevice) []*ec2.BlockDeviceMapping { ebsBlockDevice.Encrypted = aws.Bool(blockDevice.Encrypted) } + if blockDevice.KmsKeyId != "" { + ebsBlockDevice.KmsKeyId = aws.String(blockDevice.KmsKeyId) + } + mapping.Ebs = ebsBlockDevice } @@ -81,10 +87,29 @@ func buildBlockDevices(b []BlockDevice) []*ec2.BlockDeviceMapping { return blockDevices } -func (b *BlockDevices) Prepare(ctx *interpolate.Context) []error { +func (b *BlockDevice) Prepare(ctx *interpolate.Context) error { + // Warn that encrypted must be true when setting kms_key_id + if b.KmsKeyId != "" && b.Encrypted == false { + return fmt.Errorf("The device %v, must also have `encrypted: "+ + "true` when setting a kms_key_id.", b.DeviceName) + } return nil } +func (b *BlockDevices) Prepare(ctx *interpolate.Context) (errs []error) { + for _, d := range b.AMIMappings { + if err := d.Prepare(ctx); err != nil { + errs = append(errs, fmt.Errorf("AMIMapping: %s", err.Error())) + } + } + for _, d := range b.LaunchMappings { + if err := d.Prepare(ctx); err != nil { + errs = append(errs, fmt.Errorf("LaunchMapping: %s", err.Error())) + } + } + return errs +} + func (b *AMIBlockDevices) BuildAMIDevices() []*ec2.BlockDeviceMapping { return buildBlockDevices(b.AMIMappings) } diff --git a/builder/amazon/common/block_device_test.go b/builder/amazon/common/block_device_test.go index 5fc25f243..130bbef0b 100644 --- a/builder/amazon/common/block_device_test.go +++ b/builder/amazon/common/block_device_test.go @@ -84,6 +84,27 @@ func TestBlockDevice(t *testing.T) { }, }, }, + { + Config: &BlockDevice{ + DeviceName: "/dev/sdb", + VolumeType: "gp2", + VolumeSize: 8, + DeleteOnTermination: 
true, + Encrypted: true, + KmsKeyId: "2Fa48a521f-3aff-4b34-a159-376ac5d37812", + }, + + Result: &ec2.BlockDeviceMapping{ + DeviceName: aws.String("/dev/sdb"), + Ebs: &ec2.EbsBlockDevice{ + VolumeType: aws.String("gp2"), + VolumeSize: aws.Int64(8), + DeleteOnTermination: aws.Bool(true), + Encrypted: aws.Bool(true), + KmsKeyId: aws.String("2Fa48a521f-3aff-4b34-a159-376ac5d37812"), + }, + }, + }, { Config: &BlockDevice{ DeviceName: "/dev/sdb", diff --git a/builder/amazon/common/interpolate_build_info.go b/builder/amazon/common/interpolate_build_info.go index a279a2f72..b141bc9d4 100644 --- a/builder/amazon/common/interpolate_build_info.go +++ b/builder/amazon/common/interpolate_build_info.go @@ -1,6 +1,36 @@ package common +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" +) + type BuildInfoTemplate struct { - SourceAMI string - BuildRegion string + BuildRegion string + SourceAMI string + SourceAMIName string + SourceAMITags map[string]string +} + +func extractBuildInfo(region string, state multistep.StateBag) *BuildInfoTemplate { + rawSourceAMI, hasSourceAMI := state.GetOk("source_image") + if !hasSourceAMI { + return &BuildInfoTemplate{ + BuildRegion: region, + } + } + + sourceAMI := rawSourceAMI.(*ec2.Image) + sourceAMITags := make(map[string]string, len(sourceAMI.Tags)) + for _, tag := range sourceAMI.Tags { + sourceAMITags[aws.StringValue(tag.Key)] = aws.StringValue(tag.Value) + } + + return &BuildInfoTemplate{ + BuildRegion: region, + SourceAMI: aws.StringValue(sourceAMI.ImageId), + SourceAMIName: aws.StringValue(sourceAMI.Name), + SourceAMITags: sourceAMITags, + } } diff --git a/builder/amazon/common/interpolate_build_info_test.go b/builder/amazon/common/interpolate_build_info_test.go new file mode 100644 index 000000000..7391fcd5d --- /dev/null +++ b/builder/amazon/common/interpolate_build_info_test.go @@ -0,0 +1,63 @@ +package common + +import ( + "reflect" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" +) + +func testImage() *ec2.Image { + return &ec2.Image{ + ImageId: aws.String("ami-abcd1234"), + Name: aws.String("ami_test_name"), + Tags: []*ec2.Tag{ + { + Key: aws.String("key-1"), + Value: aws.String("value-1"), + }, + { + Key: aws.String("key-2"), + Value: aws.String("value-2"), + }, + }, + } +} + +func testState() multistep.StateBag { + state := new(multistep.BasicStateBag) + return state +} + +func TestInterpolateBuildInfo_extractBuildInfo_noSourceImage(t *testing.T) { + state := testState() + buildInfo := extractBuildInfo("foo", state) + + expected := BuildInfoTemplate{ + BuildRegion: "foo", + } + if !reflect.DeepEqual(*buildInfo, expected) { + t.Fatalf("Unexpected BuildInfoTemplate: expected %#v got %#v\n", expected, *buildInfo) + } +} + +func TestInterpolateBuildInfo_extractBuildInfo_withSourceImage(t *testing.T) { + state := testState() + state.Put("source_image", testImage()) + buildInfo := extractBuildInfo("foo", state) + + expected := BuildInfoTemplate{ + BuildRegion: "foo", + SourceAMI: "ami-abcd1234", + SourceAMIName: "ami_test_name", + SourceAMITags: map[string]string{ + "key-1": "value-1", + "key-2": "value-2", + }, + } + if !reflect.DeepEqual(*buildInfo, expected) { + t.Fatalf("Unexpected BuildInfoTemplate: expected %#v got %#v\n", expected, *buildInfo) + } +} diff --git a/builder/amazon/common/regions.go b/builder/amazon/common/regions.go index 5cdf2caeb..ad71a027f 100644 --- 
a/builder/amazon/common/regions.go +++ b/builder/amazon/common/regions.go @@ -5,6 +5,7 @@ func listEC2Regions() []string { return []string{ "ap-northeast-1", "ap-northeast-2", + "ap-northeast-3", "ap-south-1", "ap-southeast-1", "ap-southeast-2", @@ -12,6 +13,7 @@ func listEC2Regions() []string { "eu-central-1", "eu-west-1", "eu-west-2", + "eu-west-3", "sa-east-1", "us-east-1", "us-east-2", @@ -20,6 +22,7 @@ func listEC2Regions() []string { // not part of autogenerated list "us-gov-west-1", "cn-north-1", + "cn-northwest-1", } } diff --git a/builder/amazon/common/run_config.go b/builder/amazon/common/run_config.go index 73970e59e..9467d42bd 100644 --- a/builder/amazon/common/run_config.go +++ b/builder/amazon/common/run_config.go @@ -1,11 +1,11 @@ package common import ( - "errors" "fmt" "net" "os" "regexp" + "strings" "time" "github.com/hashicorp/packer/common/uuid" @@ -30,30 +30,32 @@ func (d *AmiFilterOptions) Empty() bool { type RunConfig struct { AssociatePublicIpAddress bool `mapstructure:"associate_public_ip_address"` AvailabilityZone string `mapstructure:"availability_zone"` + DisableStopInstance bool `mapstructure:"disable_stop_instance"` EbsOptimized bool `mapstructure:"ebs_optimized"` + EnableT2Unlimited bool `mapstructure:"enable_t2_unlimited"` IamInstanceProfile string `mapstructure:"iam_instance_profile"` + InstanceInitiatedShutdownBehavior string `mapstructure:"shutdown_behavior"` InstanceType string `mapstructure:"instance_type"` RunTags map[string]string `mapstructure:"run_tags"` + SecurityGroupId string `mapstructure:"security_group_id"` + SecurityGroupIds []string `mapstructure:"security_group_ids"` SourceAmi string `mapstructure:"source_ami"` SourceAmiFilter AmiFilterOptions `mapstructure:"source_ami_filter"` SpotPrice string `mapstructure:"spot_price"` SpotPriceAutoProduct string `mapstructure:"spot_price_auto_product"` - DisableStopInstance bool `mapstructure:"disable_stop_instance"` - SecurityGroupId string `mapstructure:"security_group_id"` - SecurityGroupIds []string `mapstructure:"security_group_ids"` - TemporarySGSourceCidr string `mapstructure:"temporary_security_group_source_cidr"` + SpotTags map[string]string `mapstructure:"spot_tags"` SubnetId string `mapstructure:"subnet_id"` TemporaryKeyPairName string `mapstructure:"temporary_key_pair_name"` + TemporarySGSourceCidr string `mapstructure:"temporary_security_group_source_cidr"` UserData string `mapstructure:"user_data"` UserDataFile string `mapstructure:"user_data_file"` - WindowsPasswordTimeout time.Duration `mapstructure:"windows_password_timeout"` VpcId string `mapstructure:"vpc_id"` - InstanceInitiatedShutdownBehavior string `mapstructure:"shutdown_behavior"` + WindowsPasswordTimeout time.Duration `mapstructure:"windows_password_timeout"` // Communicator settings Comm communicator.Config `mapstructure:",squash"` SSHKeyPairName string `mapstructure:"ssh_keypair_name"` - SSHPrivateIp bool `mapstructure:"ssh_private_ip"` + SSHInterface string `mapstructure:"ssh_interface"` } func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { @@ -77,29 +79,53 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { // Validation errs := c.Comm.Prepare(ctx) + + // Validating ssh_interface + if c.SSHInterface != "public_ip" && + c.SSHInterface != "private_ip" && + c.SSHInterface != "public_dns" && + c.SSHInterface != "private_dns" && + c.SSHInterface != "" { + errs = append(errs, fmt.Errorf("Unknown interface type: %s", c.SSHInterface)) + } + if c.SSHKeyPairName != "" { if c.Comm.Type == "winrm" && 
c.Comm.WinRMPassword == "" && c.Comm.SSHPrivateKey == "" { - errs = append(errs, errors.New("A private_key_file must be provided to retrieve the winrm password when using ssh_keypair_name.")) + errs = append(errs, fmt.Errorf("ssh_private_key_file must be provided to retrieve the winrm password when using ssh_keypair_name.")) } else if c.Comm.SSHPrivateKey == "" && !c.Comm.SSHAgentAuth { - errs = append(errs, errors.New("A private_key_file must be provided or ssh_agent_auth enabled when ssh_keypair_name is specified.")) + errs = append(errs, fmt.Errorf("ssh_private_key_file must be provided or ssh_agent_auth enabled when ssh_keypair_name is specified.")) } } if c.SourceAmi == "" && c.SourceAmiFilter.Empty() { - errs = append(errs, errors.New("A source_ami or source_ami_filter must be specified")) + errs = append(errs, fmt.Errorf("A source_ami or source_ami_filter must be specified")) } if c.InstanceType == "" { - errs = append(errs, errors.New("An instance_type must be specified")) + errs = append(errs, fmt.Errorf("An instance_type must be specified")) } if c.SpotPrice == "auto" { if c.SpotPriceAutoProduct == "" { - errs = append(errs, errors.New( + errs = append(errs, fmt.Errorf( "spot_price_auto_product must be specified when spot_price is auto")) } } + if c.SpotPriceAutoProduct != "" { + if c.SpotPrice != "auto" { + errs = append(errs, fmt.Errorf( + "spot_price should be set to auto when spot_price_auto_product is specified")) + } + } + + if c.SpotTags != nil { + if c.SpotPrice == "" || c.SpotPrice == "0" { + errs = append(errs, fmt.Errorf( + "spot_tags should not be set when not requesting a spot instance")) + } + } + if c.UserData != "" && c.UserDataFile != "" { errs = append(errs, fmt.Errorf("Only one of user_data or user_data_file can be specified.")) } else if c.UserDataFile != "" { @@ -131,5 +157,21 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { errs = append(errs, fmt.Errorf("shutdown_behavior only accepts 'stop' or 'terminate' values.")) } + if c.EnableT2Unlimited { + if c.SpotPrice != "" { + errs = append(errs, fmt.Errorf("Error: T2 Unlimited cannot be used in conjuction with Spot Instances")) + } + firstDotIndex := strings.Index(c.InstanceType, ".") + if firstDotIndex == -1 { + errs = append(errs, fmt.Errorf("Error determining main Instance Type from: %s", c.InstanceType)) + } else if c.InstanceType[0:firstDotIndex] != "t2" { + errs = append(errs, fmt.Errorf("Error: T2 Unlimited enabled with a non-T2 Instance Type: %s", c.InstanceType)) + } + } + return errs } + +func (c *RunConfig) IsSpotInstance() bool { + return c.SpotPrice != "" && c.SpotPrice != "0" +} diff --git a/builder/amazon/common/run_config_test.go b/builder/amazon/common/run_config_test.go index 7cf42ded1..fde9ef760 100644 --- a/builder/amazon/common/run_config_test.go +++ b/builder/amazon/common/run_config_test.go @@ -48,7 +48,7 @@ func TestRunConfigPrepare_InstanceType(t *testing.T) { c := testConfig() c.InstanceType = "" if err := c.Prepare(nil); len(err) != 1 { - t.Fatalf("err: %s", err) + t.Fatalf("Should error if an instance_type is not specified") } } @@ -56,14 +56,14 @@ func TestRunConfigPrepare_SourceAmi(t *testing.T) { c := testConfig() c.SourceAmi = "" if err := c.Prepare(nil); len(err) != 1 { - t.Fatalf("err: %s", err) + t.Fatalf("Should error if a source_ami (or source_ami_filter) is not specified") } } func TestRunConfigPrepare_SourceAmiFilterBlank(t *testing.T) { c := testConfigFilter() if err := c.Prepare(nil); len(err) != 1 { - t.Fatalf("err: %s", err) + t.Fatalf("Should error 
if source_ami_filter is empty or not specified (and source_ami is not specified)") } } @@ -79,17 +79,58 @@ func TestRunConfigPrepare_SourceAmiFilterGood(t *testing.T) { } } +func TestRunConfigPrepare_EnableT2UnlimitedGood(t *testing.T) { + c := testConfig() + // Must have a T2 instance type if T2 Unlimited is enabled + c.InstanceType = "t2.micro" + c.EnableT2Unlimited = true + err := c.Prepare(nil) + if len(err) > 0 { + t.Fatalf("err: %s", err) + } +} + +func TestRunConfigPrepare_EnableT2UnlimitedBadInstanceType(t *testing.T) { + c := testConfig() + // T2 Unlimited cannot be used with instance types other than T2 + c.InstanceType = "m5.large" + c.EnableT2Unlimited = true + err := c.Prepare(nil) + if len(err) != 1 { + t.Fatalf("Should error if T2 Unlimited is enabled with non-T2 instance_type") + } +} + +func TestRunConfigPrepare_EnableT2UnlimitedBadWithSpotInstanceRequest(t *testing.T) { + c := testConfig() + // T2 Unlimited cannot be used with Spot Instances + c.InstanceType = "t2.micro" + c.EnableT2Unlimited = true + c.SpotPrice = "auto" + c.SpotPriceAutoProduct = "Linux/UNIX" + err := c.Prepare(nil) + if len(err) != 1 { + t.Fatalf("Should error if T2 Unlimited has been used in conjuntion with a Spot Price request") + } +} + func TestRunConfigPrepare_SpotAuto(t *testing.T) { c := testConfig() c.SpotPrice = "auto" if err := c.Prepare(nil); len(err) != 1 { - t.Fatalf("err: %s", err) + t.Fatalf("Should error if spot_price_auto_product is not set and spot_price is set to auto") } + // Good - SpotPrice and SpotPriceAutoProduct are correctly set c.SpotPriceAutoProduct = "foo" if err := c.Prepare(nil); len(err) != 0 { t.Fatalf("err: %s", err) } + + c.SpotPrice = "" + if err := c.Prepare(nil); len(err) != 1 { + t.Fatalf("Should error if spot_price is not set to auto and spot_price_auto_product is set") + } } func TestRunConfigPrepare_SSHPort(t *testing.T) { @@ -119,12 +160,13 @@ func TestRunConfigPrepare_UserData(t *testing.T) { if err != nil { t.Fatalf("err: %s", err) } + defer os.Remove(tf.Name()) defer tf.Close() c.UserData = "foo" c.UserDataFile = tf.Name() if err := c.Prepare(nil); len(err) != 1 { - t.Fatalf("err: %s", err) + t.Fatalf("Should error if user_data string and user_data_file have both been specified") } } @@ -136,13 +178,14 @@ func TestRunConfigPrepare_UserDataFile(t *testing.T) { c.UserDataFile = "idontexistidontthink" if err := c.Prepare(nil); len(err) != 1 { - t.Fatalf("err: %s", err) + t.Fatalf("Should error if the file specified by user_data_file does not exist") } tf, err := ioutil.TempFile("", "packer") if err != nil { t.Fatalf("err: %s", err) } + defer os.Remove(tf.Name()) defer tf.Close() c.UserDataFile = tf.Name() diff --git a/builder/amazon/common/ssh.go b/builder/amazon/common/ssh.go index c451bbf96..086bc12ee 100644 --- a/builder/amazon/common/ssh.go +++ b/builder/amazon/common/ssh.go @@ -9,7 +9,7 @@ import ( "github.com/aws/aws-sdk-go/service/ec2" packerssh "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) @@ -25,21 +25,40 @@ var ( // SSHHost returns a function that can be given to the SSH communicator // for determining the SSH address based on the instance DNS name. 
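Annotation: the `RunConfig.Prepare` changes above add coupled validations for spot pricing and the new `enable_t2_unlimited` flag. The T2 Unlimited rules are the least obvious ones, so here is a small sketch of just that piece, lifted almost verbatim from the hunk; the standalone function name is illustrative.

```
package common

import (
	"fmt"
	"strings"
)

// validateT2UnlimitedSketch condenses the enable_t2_unlimited rules added to
// RunConfig.Prepare: no spot requests, and the instance family must be t2.
func validateT2UnlimitedSketch(instanceType, spotPrice string) []error {
	var errs []error
	if spotPrice != "" {
		errs = append(errs, fmt.Errorf("Error: T2 Unlimited cannot be used in conjunction with Spot Instances"))
	}
	dot := strings.Index(instanceType, ".")
	if dot == -1 {
		errs = append(errs, fmt.Errorf("Error determining main Instance Type from: %s", instanceType))
	} else if instanceType[:dot] != "t2" {
		errs = append(errs, fmt.Errorf("Error: T2 Unlimited enabled with a non-T2 Instance Type: %s", instanceType))
	}
	return errs
}
```

For example, `instanceType = "m5.large"` fails the family check, and `spotPrice = "auto"` fails the spot check, matching the new tests below.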
-func SSHHost(e ec2Describer, private bool) func(multistep.StateBag) (string, error) { +func SSHHost(e ec2Describer, sshInterface string) func(multistep.StateBag) (string, error) { return func(state multistep.StateBag) (string, error) { const tries = 2 // <= with current structure to check result of describing `tries` times for j := 0; j <= tries; j++ { var host string i := state.Get("instance").(*ec2.Instance) - if i.VpcId != nil && *i.VpcId != "" { - if i.PublicIpAddress != nil && *i.PublicIpAddress != "" && !private { + if sshInterface != "" { + switch sshInterface { + case "public_ip": + if i.PublicIpAddress != nil { + host = *i.PublicIpAddress + } + case "private_ip": + if i.PrivateIpAddress != nil { + host = *i.PrivateIpAddress + } + case "public_dns": + if i.PublicDnsName != nil { + host = *i.PublicDnsName + } + case "private_dns": + if i.PrivateDnsName != nil { + host = *i.PrivateDnsName + } + default: + panic(fmt.Sprintf("Unknown interface type: %s", sshInterface)) + } + } else if i.VpcId != nil && *i.VpcId != "" { + if i.PublicIpAddress != nil && *i.PublicIpAddress != "" { host = *i.PublicIpAddress } else if i.PrivateIpAddress != nil && *i.PrivateIpAddress != "" { host = *i.PrivateIpAddress } - } else if private && i.PrivateIpAddress != nil && *i.PrivateIpAddress != "" { - host = *i.PrivateIpAddress } else if i.PublicDnsName != nil && *i.PublicDnsName != "" { host = *i.PublicDnsName } @@ -63,7 +82,7 @@ func SSHHost(e ec2Describer, private bool) func(multistep.StateBag) (string, err time.Sleep(sshHostSleepDuration) } - return "", errors.New("couldn't determine IP address for instance") + return "", errors.New("couldn't determine address for instance") } } diff --git a/builder/amazon/common/ssh_test.go b/builder/amazon/common/ssh_test.go index 9f1d4a53e..02823d1d9 100644 --- a/builder/amazon/common/ssh_test.go +++ b/builder/amazon/common/ssh_test.go @@ -5,13 +5,14 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) const ( - privateIP = "10.0.0.1" - publicIP = "192.168.1.1" - publicDNS = "public.dns.test" + privateIP = "10.0.0.1" + publicIP = "192.168.1.1" + privateDNS = "private.dns.test" + publicDNS = "public.dns.test" ) func TestSSHHost(t *testing.T) { @@ -20,44 +21,54 @@ func TestSSHHost(t *testing.T) { sshHostSleepDuration = 0 var cases = []struct { - allowTries int - vpcId string - private bool + allowTries int + vpcId string + sshInterface string ok bool wantHost string }{ - {1, "", false, true, publicDNS}, - {1, "", true, true, privateIP}, - {1, "vpc-id", false, true, publicIP}, - {1, "vpc-id", true, true, privateIP}, - {2, "", false, true, publicDNS}, - {2, "", true, true, privateIP}, - {2, "vpc-id", false, true, publicIP}, - {2, "vpc-id", true, true, privateIP}, - {3, "", false, false, ""}, - {3, "", true, false, ""}, - {3, "vpc-id", false, false, ""}, - {3, "vpc-id", true, false, ""}, + {1, "", "", true, publicDNS}, + {1, "", "private_ip", true, privateIP}, + {1, "vpc-id", "", true, publicIP}, + {1, "vpc-id", "private_ip", true, privateIP}, + {1, "vpc-id", "private_dns", true, privateDNS}, + {1, "vpc-id", "public_dns", true, publicDNS}, + {1, "vpc-id", "public_ip", true, publicIP}, + {2, "", "", true, publicDNS}, + {2, "", "private_ip", true, privateIP}, + {2, "vpc-id", "", true, publicIP}, + {2, "vpc-id", "private_ip", true, privateIP}, + {2, "vpc-id", "private_dns", true, privateDNS}, + {2, "vpc-id", "public_dns", true, publicDNS}, + {2, "vpc-id", 
"public_ip", true, publicIP}, + {3, "", "", false, ""}, + {3, "", "private_ip", false, ""}, + {3, "vpc-id", "", false, ""}, + {3, "vpc-id", "private_ip", false, ""}, + {3, "vpc-id", "private_dns", false, ""}, + {3, "vpc-id", "public_dns", false, ""}, + {3, "vpc-id", "public_ip", false, ""}, } for _, c := range cases { - testSSHHost(t, c.allowTries, c.vpcId, c.private, c.ok, c.wantHost) + testSSHHost(t, c.allowTries, c.vpcId, c.sshInterface, c.ok, c.wantHost) } } -func testSSHHost(t *testing.T, allowTries int, vpcId string, private, ok bool, wantHost string) { - t.Logf("allowTries=%d vpcId=%s private=%t ok=%t wantHost=%q", allowTries, vpcId, private, ok, wantHost) +func testSSHHost(t *testing.T, allowTries int, vpcId string, sshInterface string, ok bool, wantHost string) { + t.Logf("allowTries=%d vpcId=%s sshInterface=%s ok=%t wantHost=%q", allowTries, vpcId, sshInterface, ok, wantHost) e := &fakeEC2Describer{ allowTries: allowTries, vpcId: vpcId, privateIP: privateIP, publicIP: publicIP, + privateDNS: privateDNS, publicDNS: publicDNS, } - f := SSHHost(e, private) + f := SSHHost(e, sshInterface) st := &multistep.BasicStateBag{} st.Put("instance", &ec2.Instance{ InstanceId: aws.String("instance-id"), @@ -85,8 +96,8 @@ type fakeEC2Describer struct { allowTries int tries int - vpcId string - privateIP, publicIP, publicDNS string + vpcId string + privateIP, publicIP, privateDNS, publicDNS string } func (d *fakeEC2Describer) DescribeInstances(in *ec2.DescribeInstancesInput) (*ec2.DescribeInstancesOutput, error) { @@ -104,6 +115,7 @@ func (d *fakeEC2Describer) DescribeInstances(in *ec2.DescribeInstancesInput) (*e instance.PublicIpAddress = aws.String(d.publicIP) instance.PrivateIpAddress = aws.String(d.privateIP) instance.PublicDnsName = aws.String(d.publicDNS) + instance.PrivateDnsName = aws.String(d.privateDNS) } out := &ec2.DescribeInstancesOutput{ diff --git a/builder/amazon/common/state.go b/builder/amazon/common/state.go index 42beb0e00..b76352a5e 100644 --- a/builder/amazon/common/state.go +++ b/builder/amazon/common/state.go @@ -1,18 +1,15 @@ package common import ( - "errors" - "fmt" "log" - "net" "os" "strconv" - "strings" "time" - "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" "github.com/aws/aws-sdk-go/service/ec2" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // StateRefreshFunc is a function type used for StateChangeConf that is @@ -35,228 +32,304 @@ type StateChangeConf struct { Target string } -// AMIStateRefreshFunc returns a StateRefreshFunc that is used to watch -// an AMI for state changes. -func AMIStateRefreshFunc(conn *ec2.EC2, imageId string) StateRefreshFunc { - return func() (interface{}, string, error) { - resp, err := conn.DescribeImages(&ec2.DescribeImagesInput{ - ImageIds: []*string{&imageId}, - }) - if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidAMIID.NotFound" { - // Set this to nil as if we didn't find anything. - resp = nil - } else if isTransientNetworkError(err) { - // Transient network error, treat it as if we didn't find anything - resp = nil - } else { - log.Printf("Error on AMIStateRefresh: %s", err) - return nil, "", err - } - } +// Following are wrapper functions that use Packer's environment-variables to +// determing retry logic, then call the AWS SDK's built-in waiters. - if resp == nil || len(resp.Images) == 0 { - // Sometimes AWS has consistency issues and doesn't see the - // AMI. Return an empty state. 
- return nil, "", nil - } - - i := resp.Images[0] - return i, *i.State, nil +func WaitUntilAMIAvailable(ctx aws.Context, conn *ec2.EC2, imageId string) error { + imageInput := ec2.DescribeImagesInput{ + ImageIds: []*string{&imageId}, } + + err := conn.WaitUntilImageAvailableWithContext( + ctx, + &imageInput, + getWaiterOptions()...) + return err } -// InstanceStateRefreshFunc returns a StateRefreshFunc that is used to watch -// an EC2 instance. -func InstanceStateRefreshFunc(conn *ec2.EC2, instanceId string) StateRefreshFunc { - return func() (interface{}, string, error) { - resp, err := conn.DescribeInstances(&ec2.DescribeInstancesInput{ - InstanceIds: []*string{&instanceId}, - }) - if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidInstanceID.NotFound" { - // Set this to nil as if we didn't find anything. - resp = nil - } else if isTransientNetworkError(err) { - // Transient network error, treat it as if we didn't find anything - resp = nil - } else { - log.Printf("Error on InstanceStateRefresh: %s", err) - return nil, "", err - } - } +func WaitUntilInstanceTerminated(ctx aws.Context, conn *ec2.EC2, instanceId string) error { - if resp == nil || len(resp.Reservations) == 0 || len(resp.Reservations[0].Instances) == 0 { - // Sometimes AWS just has consistency issues and doesn't see - // our instance yet. Return an empty state. - return nil, "", nil - } - - i := resp.Reservations[0].Instances[0] - return i, *i.State.Name, nil + instanceInput := ec2.DescribeInstancesInput{ + InstanceIds: []*string{&instanceId}, } + + err := conn.WaitUntilInstanceTerminatedWithContext( + ctx, + &instanceInput, + getWaiterOptions()...) + return err } -// SpotRequestStateRefreshFunc returns a StateRefreshFunc that is used to watch -// a spot request for state changes. -func SpotRequestStateRefreshFunc(conn *ec2.EC2, spotRequestId string) StateRefreshFunc { - return func() (interface{}, string, error) { - resp, err := conn.DescribeSpotInstanceRequests(&ec2.DescribeSpotInstanceRequestsInput{ - SpotInstanceRequestIds: []*string{&spotRequestId}, - }) - - if err != nil { - if ec2err, ok := err.(awserr.Error); ok && ec2err.Code() == "InvalidSpotInstanceRequestID.NotFound" { - // Set this to nil as if we didn't find anything. - resp = nil - } else if isTransientNetworkError(err) { - // Transient network error, treat it as if we didn't find anything - resp = nil - } else { - log.Printf("Error on SpotRequestStateRefresh: %s", err) - return nil, "", err - } - } - - if resp == nil || len(resp.SpotInstanceRequests) == 0 { - // Sometimes AWS has consistency issues and doesn't see the - // SpotRequest. Return an empty state. - return nil, "", nil - } - - i := resp.SpotInstanceRequests[0] - return i, *i.State, nil +// This function works for both requesting and cancelling spot instances. +func WaitUntilSpotRequestFulfilled(ctx aws.Context, conn *ec2.EC2, spotRequestId string) error { + spotRequestInput := ec2.DescribeSpotInstanceRequestsInput{ + SpotInstanceRequestIds: []*string{&spotRequestId}, } + + err := conn.WaitUntilSpotInstanceRequestFulfilledWithContext( + ctx, + &spotRequestInput, + getWaiterOptions()...) 
+ return err } -func ImportImageRefreshFunc(conn *ec2.EC2, importTaskId string) StateRefreshFunc { - return func() (interface{}, string, error) { - resp, err := conn.DescribeImportImageTasks(&ec2.DescribeImportImageTasksInput{ - ImportTaskIds: []*string{ - &importTaskId, +func WaitUntilVolumeAvailable(ctx aws.Context, conn *ec2.EC2, volumeId string) error { + volumeInput := ec2.DescribeVolumesInput{ + VolumeIds: []*string{&volumeId}, + } + + err := conn.WaitUntilVolumeAvailableWithContext( + ctx, + &volumeInput, + getWaiterOptions()...) + return err +} + +func WaitUntilSnapshotDone(ctx aws.Context, conn *ec2.EC2, snapshotID string) error { + snapInput := ec2.DescribeSnapshotsInput{ + SnapshotIds: []*string{&snapshotID}, + } + + err := conn.WaitUntilSnapshotCompletedWithContext( + ctx, + &snapInput, + getWaiterOptions()...) + return err +} + +// Wrappers for our custom AWS waiters + +func WaitUntilVolumeAttached(ctx aws.Context, conn *ec2.EC2, volumeId string) error { + volumeInput := ec2.DescribeVolumesInput{ + VolumeIds: []*string{&volumeId}, + } + + err := WaitForVolumeToBeAttached(conn, + ctx, + &volumeInput, + getWaiterOptions()...) + return err +} + +func WaitUntilVolumeDetached(ctx aws.Context, conn *ec2.EC2, volumeId string) error { + volumeInput := ec2.DescribeVolumesInput{ + VolumeIds: []*string{&volumeId}, + } + + err := WaitForVolumeToBeDetached(conn, + ctx, + &volumeInput, + getWaiterOptions()...) + return err +} + +func WaitUntilImageImported(ctx aws.Context, conn *ec2.EC2, taskID string) error { + importInput := ec2.DescribeImportImageTasksInput{ + ImportTaskIds: []*string{&taskID}, + } + + err := WaitForImageToBeImported(conn, + ctx, + &importInput, + getWaiterOptions()...) + return err +} + +// Custom waiters using AWS's request.Waiter + +func WaitForVolumeToBeAttached(c *ec2.EC2, ctx aws.Context, input *ec2.DescribeVolumesInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "DescribeVolumes", + MaxAttempts: 40, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, + Argument: "Volumes[].Attachments[].State", + Expected: "attached", }, }, - ) - if err != nil { - if ec2err, ok := err.(awserr.Error); ok && strings.HasPrefix(ec2err.Code(), "InvalidConversionTaskId") { - resp = nil - } else if isTransientNetworkError(err) { - resp = nil - } else { - log.Printf("Error on ImportImageRefresh: %s", err) - return nil, "", err + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *ec2.DescribeVolumesInput + if input != nil { + tmp := *input + inCpy = &tmp } - } - - if resp == nil || len(resp.ImportImageTasks) == 0 { - return nil, "", nil - } - - i := resp.ImportImageTasks[0] - return i, *i.Status, nil + req, _ := c.DescribeVolumesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, } + return w.WaitWithContext(ctx) } -// WaitForState watches an object and waits for it to achieve a certain -// state. 
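Annotation: the new `WaitUntil*` wrappers above delegate to the AWS SDK's built-in `WaitUntil...WithContext` waiters, with retry cadence supplied by `getWaiterOptions()`. A short usage sketch of one wrapper from inside the same package, assuming a shared session; the AMI ID is a placeholder.

```
package common

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// waitExample shows how a step now waits for an AMI instead of using the
// removed StateChangeConf/WaitForState polling loop.
func waitExample(ctx context.Context, sess *session.Session) error {
	conn := ec2.New(sess)

	// Retry cadence comes from getWaiterOptions(), i.e. from
	// AWS_POLL_DELAY_SECONDS / AWS_MAX_ATTEMPTS when the user sets them.
	if err := WaitUntilAMIAvailable(ctx, conn, "ami-12345678"); err != nil {
		return err
	}
	log.Println("AMI is available")
	return nil
}
```

Because the waiters take the step's context, cancelling a build now cancels the in-flight wait rather than sleeping out the remaining polls.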
-func WaitForState(conf *StateChangeConf) (i interface{}, err error) { - log.Printf("Waiting for state to become: %s", conf.Target) - - sleepSeconds := SleepSeconds() - maxTicks := TimeoutSeconds()/sleepSeconds + 1 - notfoundTick := 0 - - for { - var currentState string - i, currentState, err = conf.Refresh() - if err != nil { - return - } - - if i == nil { - // If we didn't find the resource, check if we have been - // not finding it for awhile, and if so, report an error. - notfoundTick += 1 - if notfoundTick > maxTicks { - return nil, errors.New("couldn't find resource") +func WaitForVolumeToBeDetached(c *ec2.EC2, ctx aws.Context, input *ec2.DescribeVolumesInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "DescribeVolumes", + MaxAttempts: 40, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, + Argument: "length(Volumes[].Attachments[]) == `0`", + Expected: true, + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *ec2.DescribeVolumesInput + if input != nil { + tmp := *input + inCpy = &tmp } - } else { - // Reset the counter for when a resource isn't found - notfoundTick = 0 - - if currentState == conf.Target { - return - } - - if conf.StepState != nil { - if _, ok := conf.StepState.GetOk(multistep.StateCancelled); ok { - return nil, errors.New("interrupted") - } - } - - found := false - for _, allowed := range conf.Pending { - if currentState == allowed { - found = true - break - } - } - - if !found { - err := fmt.Errorf("unexpected state '%s', wanted target '%s'", currentState, conf.Target) - return nil, err - } - } - - time.Sleep(time.Duration(sleepSeconds) * time.Second) + req, _ := c.DescribeVolumesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, } + return w.WaitWithContext(ctx) } -func isTransientNetworkError(err error) bool { - if nerr, ok := err.(net.Error); ok && nerr.Temporary() { - return true +func WaitForImageToBeImported(c *ec2.EC2, ctx aws.Context, input *ec2.DescribeImportImageTasksInput, opts ...request.WaiterOption) error { + w := request.Waiter{ + Name: "DescribeImages", + MaxAttempts: 300, + Delay: request.ConstantWaiterDelay(5 * time.Second), + Acceptors: []request.WaiterAcceptor{ + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, + Argument: "ImportImageTasks[].Status", + Expected: "completed", + }, + }, + Logger: c.Config.Logger, + NewRequest: func(opts []request.Option) (*request.Request, error) { + var inCpy *ec2.DescribeImportImageTasksInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeImportImageTasksRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, } - - return false + return w.WaitWithContext(ctx) } -// Returns 300 seconds (5 minutes) by default -// Some AWS operations, like copying an AMI to a distant region, take a very long time -// Allow user to override with AWS_TIMEOUT_SECONDS environment variable -func TimeoutSeconds() (seconds int) { - seconds = 300 +// This helper function uses the environment variables AWS_TIMEOUT_SECONDS and +// AWS_POLL_DELAY_SECONDS to generate waiter options that can be passed into any +// request.Waiter function. These options will control how many times the waiter +// will retry the request, as well as how long to wait between the retries. 
- override := os.Getenv("AWS_TIMEOUT_SECONDS") +// DEFAULTING BEHAVIOR: +// if AWS_POLL_DELAY_SECONDS is set but the others are not, Packer will set this +// poll delay and use the waiter-specific default + +// if AWS_TIMEOUT_SECONDS is set but AWS_MAX_ATTEMPTS is not, Packer will use +// AWS_TIMEOUT_SECONDS and _either_ AWS_POLL_DELAY_SECONDS _or_ 2 if the user has not set AWS_POLL_DELAY_SECONDS, to determine a max number of attempts to make. + +// if AWS_TIMEOUT_SECONDS, _and_ AWS_MAX_ATTEMPTS are both set, +// AWS_TIMEOUT_SECONDS will be ignored. + +// if AWS_MAX_ATTEMPTS is set but AWS_POLL_DELAY_SECONDS is not, then we will +// use waiter-specific defaults. + +type envInfo struct { + envKey string + Val int + overridden bool +} + +type overridableWaitVars struct { + awsPollDelaySeconds envInfo + awsMaxAttempts envInfo + awsTimeoutSeconds envInfo +} + +func getWaiterOptions() []request.WaiterOption { + envOverrides := getEnvOverrides() + waitOpts := applyEnvOverrides(envOverrides) + return waitOpts +} + +func getOverride(varInfo envInfo) envInfo { + override := os.Getenv(varInfo.envKey) if override != "" { n, err := strconv.Atoi(override) if err != nil { - log.Printf("Invalid timeout seconds '%s', using default", override) + log.Printf("Invalid %s '%s', using default", varInfo.envKey, override) } else { - seconds = n + varInfo.overridden = true + varInfo.Val = n } } - log.Printf("Allowing %ds to complete (change with AWS_TIMEOUT_SECONDS)", seconds) - return seconds + return varInfo } - -// Returns 2 seconds by default -// AWS async operations sometimes takes long times, if there are multiple parallel builds, -// polling at 2 second frequency will exceed the request limit. Allow 2 seconds to be -// overwritten with AWS_POLL_DELAY_SECONDS -func SleepSeconds() (seconds int) { - seconds = 2 - - override := os.Getenv("AWS_POLL_DELAY_SECONDS") - if override != "" { - n, err := strconv.Atoi(override) - if err != nil { - log.Printf("Invalid sleep seconds '%s', using default", override) - } else { - seconds = n - } +func getEnvOverrides() overridableWaitVars { + // Load env vars from environment, and use them to override defaults + envValues := overridableWaitVars{ + envInfo{"AWS_POLL_DELAY_SECONDS", 2, false}, + envInfo{"AWS_MAX_ATTEMPTS", 0, false}, + envInfo{"AWS_TIMEOUT_SECONDS", 300, false}, } - log.Printf("Using %ds as polling delay (change with AWS_POLL_DELAY_SECONDS)", seconds) - return seconds + envValues.awsMaxAttempts = getOverride(envValues.awsMaxAttempts) + envValues.awsPollDelaySeconds = getOverride(envValues.awsPollDelaySeconds) + envValues.awsTimeoutSeconds = getOverride(envValues.awsTimeoutSeconds) + + return envValues +} + +func applyEnvOverrides(envOverrides overridableWaitVars) []request.WaiterOption { + waitOpts := make([]request.WaiterOption, 0) + // If user has set poll delay seconds, overwrite it. If user has NOT, + // default to a poll delay of 2 seconds + if envOverrides.awsPollDelaySeconds.overridden { + delaySeconds := request.ConstantWaiterDelay(time.Duration(envOverrides.awsPollDelaySeconds.Val) * time.Second) + waitOpts = append(waitOpts, request.WithWaiterDelay(delaySeconds)) + } + + // If user has set max attempts, overwrite it. If user hasn't set max + // attempts, default to whatever the waiter has set as a default. 
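Annotation: the defaulting rules spelled out in the comments above boil down to: `AWS_POLL_DELAY_SECONDS` and `AWS_MAX_ATTEMPTS` are honored directly, and `AWS_TIMEOUT_SECONDS` is only used (divided by the poll delay) when `AWS_MAX_ATTEMPTS` is absent. A small usage sketch, assuming the `getWaiterOptions` helper defined in this file; the function name is illustrative.

```
package common

import (
	"fmt"
	"os"
)

// waiterOverridesExample shows how a user-facing override reaches the
// waiters: plain environment variables read when the options are built.
func waiterOverridesExample() {
	// Poll every 5 seconds and give up after 60 attempts (~5 minutes).
	os.Setenv("AWS_POLL_DELAY_SECONDS", "5")
	os.Setenv("AWS_MAX_ATTEMPTS", "60")

	// Both variables are set, so this yields a delay option and a
	// max-attempts option to pass into the SDK waiters.
	opts := getWaiterOptions()
	fmt.Printf("generated %d waiter option(s)\n", len(opts))
}
```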
+ if envOverrides.awsMaxAttempts.overridden { + waitOpts = append(waitOpts, request.WithWaiterMaxAttempts(envOverrides.awsMaxAttempts.Val)) + } + + if envOverrides.awsMaxAttempts.overridden && envOverrides.awsTimeoutSeconds.overridden { + log.Printf("WARNING: AWS_MAX_ATTEMPTS and AWS_TIMEOUT_SECONDS are" + + " both set. Packer will be using AWS_MAX_ATTEMPTS and discarding " + + "AWS_TIMEOUT_SECONDS. If you have not set AWS_POLL_DELAY_SECONDS, " + + "Packer will default to a 2 second poll delay.") + } else if envOverrides.awsTimeoutSeconds.overridden { + log.Printf("DEPRECATION WARNING: env var AWS_TIMEOUT_SECONDS is " + + "deprecated in favor of AWS_MAX_ATTEMPTS. If you have not " + + "explicitly set AWS_POLL_DELAY_SECONDS, we are defaulting to a " + + "poll delay of 2 seconds, regardless of the AWS waiter's default.") + maxAttempts := envOverrides.awsTimeoutSeconds.Val / envOverrides.awsPollDelaySeconds.Val + // override the delay so we can get the timeout right + if !envOverrides.awsPollDelaySeconds.overridden { + delaySeconds := request.ConstantWaiterDelay(time.Duration(envOverrides.awsPollDelaySeconds.Val) * time.Second) + waitOpts = append(waitOpts, request.WithWaiterDelay(delaySeconds)) + } + waitOpts = append(waitOpts, request.WithWaiterMaxAttempts(maxAttempts)) + } + if len(waitOpts) == 0 { + log.Printf("No AWS timeout and polling overrides have been set. " + + "Packer will default to waiter-specific delays and timeouts. If you would " + + "like to customize the length of time between retries and max " + + "number of retries you may do so by setting the environment " + + "variables AWS_POLL_DELAY_SECONDS and AWS_MAX_ATTEMPTS to your " + + "desired values.") + } + + return waitOpts } diff --git a/builder/amazon/common/state_test.go b/builder/amazon/common/state_test.go new file mode 100644 index 000000000..2ef7ae5aa --- /dev/null +++ b/builder/amazon/common/state_test.go @@ -0,0 +1,66 @@ +package common + +import ( + "reflect" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws/request" +) + +func testGetWaiterOptions(t *testing.T) { + // no vars are set + envValues := overridableWaitVars{ + envInfo{"AWS_POLL_DELAY_SECONDS", 2, false}, + envInfo{"AWS_MAX_ATTEMPTS", 0, false}, + envInfo{"AWS_TIMEOUT_SECONDS", 300, false}, + } + options := applyEnvOverrides(envValues) + if len(options) > 0 { + t.Fatalf("Did not expect any waiter options to be generated; actual: %#v", options) + } + + // all vars are set + envValues = overridableWaitVars{ + envInfo{"AWS_POLL_DELAY_SECONDS", 1, true}, + envInfo{"AWS_MAX_ATTEMPTS", 800, true}, + envInfo{"AWS_TIMEOUT_SECONDS", 20, true}, + } + options = applyEnvOverrides(envValues) + expected := []request.WaiterOption{ + request.WithWaiterDelay(request.ConstantWaiterDelay(time.Duration(1) * time.Second)), + request.WithWaiterMaxAttempts(800), + } + if !reflect.DeepEqual(options, expected) { + t.Fatalf("expected != actual!! Expected: %#v; Actual: %#v.", expected, options) + } + + // poll delay is not set + envValues = overridableWaitVars{ + envInfo{"AWS_POLL_DELAY_SECONDS", 2, false}, + envInfo{"AWS_MAX_ATTEMPTS", 800, true}, + envInfo{"AWS_TIMEOUT_SECONDS", 300, false}, + } + options = applyEnvOverrides(envValues) + expected = []request.WaiterOption{ + request.WithWaiterMaxAttempts(800), + } + if !reflect.DeepEqual(options, expected) { + t.Fatalf("expected != actual!! 
Expected: %#v; Actual: %#v.", expected, options) + } + + // poll delay is not set but timeout seconds is + envValues = overridableWaitVars{ + envInfo{"AWS_POLL_DELAY_SECONDS", 2, false}, + envInfo{"AWS_MAX_ATTEMPTS", 0, false}, + envInfo{"AWS_TIMEOUT_SECONDS", 20, true}, + } + options = applyEnvOverrides(envValues) + expected = []request.WaiterOption{ + request.WithWaiterDelay(request.ConstantWaiterDelay(time.Duration(2) * time.Second)), + request.WithWaiterMaxAttempts(10), + } + if !reflect.DeepEqual(options, expected) { + t.Fatalf("expected != actual!! Expected: %#v; Actual: %#v.", expected, options) + } +} diff --git a/builder/amazon/common/step_ami_region_copy.go b/builder/amazon/common/step_ami_region_copy.go index f98dcbb92..d14fa472e 100644 --- a/builder/amazon/common/step_ami_region_copy.go +++ b/builder/amazon/common/step_ami_region_copy.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "sync" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepAMIRegionCopy struct { @@ -19,7 +20,7 @@ type StepAMIRegionCopy struct { Name string } -func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepAMIRegionCopy) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) ui := state.Get("ui").(packer.Ui) amis := state.Get("amis").(map[string]string) @@ -52,7 +53,7 @@ func (s *StepAMIRegionCopy) Run(state multistep.StateBag) multistep.StepAction { go func(region string) { defer wg.Done() - id, snapshotIds, err := amiRegionCopy(state, s.AccessConfig, s.Name, ami, region, *ec2conn.Config.Region, regKeyID) + id, snapshotIds, err := amiRegionCopy(ctx, state, s.AccessConfig, s.Name, ami, region, *ec2conn.Config.Region, regKeyID) lock.Lock() defer lock.Unlock() amis[region] = id @@ -84,7 +85,7 @@ func (s *StepAMIRegionCopy) Cleanup(state multistep.StateBag) { // amiRegionCopy does a copy for the given AMI to the target region and // returns the resulting ID and snapshot IDs, or error. 
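Annotation: `StepAMIRegionCopy.Run` above fans out one goroutine per target region and collects results under a mutex before waiting on the group. A reduced sketch of that concurrency pattern, with a stand-in `copyFn` in place of `amiRegionCopy`; names are illustrative.

```
package common

import (
	"context"
	"sync"
)

// fanOutCopySketch mirrors the goroutine-per-region pattern used by
// StepAMIRegionCopy.Run.
func fanOutCopySketch(ctx context.Context, regions []string,
	copyFn func(ctx context.Context, region string) (string, error)) (map[string]string, []error) {

	var (
		wg   sync.WaitGroup
		lock sync.Mutex
		amis = make(map[string]string, len(regions))
		errs []error
	)

	for _, region := range regions {
		wg.Add(1)
		go func(region string) {
			defer wg.Done()
			id, err := copyFn(ctx, region)

			lock.Lock()
			defer lock.Unlock()
			if err != nil {
				errs = append(errs, err)
				return
			}
			amis[region] = id
		}(region)
	}

	wg.Wait()
	return amis, errs
}
```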
-func amiRegionCopy(state multistep.StateBag, config *AccessConfig, name string, imageId string, +func amiRegionCopy(ctx context.Context, state multistep.StateBag, config *AccessConfig, name string, imageId string, target string, source string, keyID string) (string, []string, error) { snapshotIds := []string{} isEncrypted := false @@ -115,14 +116,8 @@ func amiRegionCopy(state multistep.StateBag, config *AccessConfig, name string, imageId, target, err) } - stateChange := StateChangeConf{ - Pending: []string{"pending"}, - Target: "available", - Refresh: AMIStateRefreshFunc(regionconn, *resp.ImageId), - StepState: state, - } - - if _, err := WaitForState(&stateChange); err != nil { + // Wait for the image to become ready + if err := WaitUntilAMIAvailable(ctx, regionconn, *resp.ImageId); err != nil { return "", snapshotIds, fmt.Errorf("Error waiting for AMI (%s) in region (%s): %s", *resp.ImageId, target, err) } diff --git a/builder/amazon/common/step_create_tags.go b/builder/amazon/common/step_create_tags.go index 791227260..95a736a6b 100644 --- a/builder/amazon/common/step_create_tags.go +++ b/builder/amazon/common/step_create_tags.go @@ -1,6 +1,7 @@ package common import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws" @@ -8,30 +9,24 @@ import ( "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/ec2" retry "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type StepCreateTags struct { - Tags map[string]string - SnapshotTags map[string]string + Tags TagMap + SnapshotTags TagMap Ctx interpolate.Context } -func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateTags) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) + session := state.Get("awsSession").(*session.Session) ui := state.Get("ui").(packer.Ui) amis := state.Get("amis").(map[string]string) - var sourceAMI string - if rawSourceAMI, hasSourceAMI := state.GetOk("source_image"); hasSourceAMI { - sourceAMI = *rawSourceAMI.(*ec2.Image).ImageId - } else { - sourceAMI = "" - } - - if len(s.Tags) == 0 && len(s.SnapshotTags) == 0 { + if !s.Tags.IsSet() && !s.SnapshotTags.IsSet() { return multistep.ActionContinue } @@ -39,23 +34,13 @@ func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction { for region, ami := range amis { ui.Say(fmt.Sprintf("Adding tags to AMI (%s)...", ami)) - // Declare list of resources to tag - awsConfig := aws.Config{ - Credentials: ec2conn.Config.Credentials, - Region: aws.String(region), - } - session, err := session.NewSession(&awsConfig) - if err != nil { - err := fmt.Errorf("Error creating AWS session: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - regionconn := ec2.New(session) + regionConn := ec2.New(session, &aws.Config{ + Region: aws.String(region), + }) // Retrieve image list for given AMI resourceIds := []*string{&ami} - imageResp, err := regionconn.DescribeImages(&ec2.DescribeImagesInput{ + imageResp, err := regionConn.DescribeImages(&ec2.DescribeImagesInput{ ImageIds: resourceIds, }) @@ -87,27 +72,27 @@ func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction { // Convert tags to ec2.Tag format ui.Say("Creating AMI tags") - amiTags, err := ConvertToEC2Tags(s.Tags, *ec2conn.Config.Region, sourceAMI, s.Ctx) + amiTags, err := 
s.Tags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) if err != nil { state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - ReportTags(ui, amiTags) + amiTags.Report(ui) ui.Say("Creating snapshot tags") - snapshotTags, err := ConvertToEC2Tags(s.SnapshotTags, *ec2conn.Config.Region, sourceAMI, s.Ctx) + snapshotTags, err := s.SnapshotTags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) if err != nil { state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - ReportTags(ui, snapshotTags) + snapshotTags.Report(ui) // Retry creating tags for about 2.5 minutes err = retry.Retry(0.2, 30, 11, func(_ uint) (bool, error) { // Tag images and snapshots - _, err := regionconn.CreateTags(&ec2.CreateTagsInput{ + _, err := regionConn.CreateTags(&ec2.CreateTagsInput{ Resources: resourceIds, Tags: amiTags, }) @@ -120,7 +105,7 @@ func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction { // Override tags on snapshots if len(snapshotTags) > 0 { - _, err = regionconn.CreateTags(&ec2.CreateTagsInput{ + _, err = regionConn.CreateTags(&ec2.CreateTagsInput{ Resources: snapshotIds, Tags: snapshotTags, }) @@ -150,36 +135,3 @@ func (s *StepCreateTags) Run(state multistep.StateBag) multistep.StepAction { func (s *StepCreateTags) Cleanup(state multistep.StateBag) { // No cleanup... } - -func ReportTags(ui packer.Ui, tags []*ec2.Tag) { - for _, tag := range tags { - ui.Message(fmt.Sprintf("Adding tag: \"%s\": \"%s\"", - aws.StringValue(tag.Key), aws.StringValue(tag.Value))) - } -} - -func ConvertToEC2Tags(tags map[string]string, region, sourceAmiId string, ctx interpolate.Context) ([]*ec2.Tag, error) { - var ec2Tags []*ec2.Tag - for key, value := range tags { - - ctx.Data = &BuildInfoTemplate{ - SourceAMI: sourceAmiId, - BuildRegion: region, - } - interpolatedKey, err := interpolate.Render(key, &ctx) - if err != nil { - return ec2Tags, fmt.Errorf("Error processing tag: %s:%s - %s", key, value, err) - } - interpolatedValue, err := interpolate.Render(value, &ctx) - if err != nil { - return ec2Tags, fmt.Errorf("Error processing tag: %s:%s - %s", key, value, err) - } - - ec2Tags = append(ec2Tags, &ec2.Tag{ - Key: aws.String(interpolatedKey), - Value: aws.String(interpolatedValue), - }) - } - - return ec2Tags, nil -} diff --git a/builder/amazon/common/step_deregister_ami.go b/builder/amazon/common/step_deregister_ami.go index 188a40808..8a82d7c7e 100644 --- a/builder/amazon/common/step_deregister_ami.go +++ b/builder/amazon/common/step_deregister_ami.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepDeregisterAMI struct { @@ -17,7 +18,7 @@ type StepDeregisterAMI struct { Regions []string } -func (s *StepDeregisterAMI) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDeregisterAMI) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { // Check for force deregister if !s.ForceDeregister { return multistep.ActionContinue @@ -40,9 +41,10 @@ func (s *StepDeregisterAMI) Run(state multistep.StateBag) multistep.StepAction { )) resp, err := regionconn.DescribeImages(&ec2.DescribeImagesInput{ + Owners: aws.StringSlice([]string{"self"}), Filters: []*ec2.Filter{{ Name: aws.String("name"), - Values: []*string{aws.String(s.AMIName)}, + Values: aws.StringSlice([]string{s.AMIName}), }}}) if err != nil { diff --git 
a/builder/amazon/common/step_encrypted_ami.go b/builder/amazon/common/step_encrypted_ami.go index 16c4ce7a8..0ad8bd5d3 100644 --- a/builder/amazon/common/step_encrypted_ami.go +++ b/builder/amazon/common/step_encrypted_ami.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepCreateEncryptedAMICopy struct { @@ -18,7 +19,7 @@ type StepCreateEncryptedAMICopy struct { AMIMappings []BlockDevice } -func (s *StepCreateEncryptedAMICopy) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateEncryptedAMICopy) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) ui := state.Get("ui").(packer.Ui) kmsKeyId := s.KeyID @@ -64,15 +65,8 @@ func (s *StepCreateEncryptedAMICopy) Run(state multistep.StateBag) multistep.Ste } // Wait for the copy to become ready - stateChange := StateChangeConf{ - Pending: []string{"pending"}, - Target: "available", - Refresh: AMIStateRefreshFunc(ec2conn, *copyResp.ImageId), - StepState: state, - } - ui.Say("Waiting for AMI copy to become ready...") - if _, err := WaitForState(&stateChange); err != nil { + if err := WaitUntilAMIAvailable(ctx, ec2conn, *copyResp.ImageId); err != nil { err := fmt.Errorf("Error waiting for AMI Copy: %s", err) state.Put("error", err) ui.Error(err.Error()) diff --git a/builder/amazon/common/step_get_password.go b/builder/amazon/common/step_get_password.go index b457658b9..8b0d0c74c 100644 --- a/builder/amazon/common/step_get_password.go +++ b/builder/amazon/common/step_get_password.go @@ -1,6 +1,7 @@ package common import ( + "context" "crypto/rsa" "crypto/x509" "encoding/base64" @@ -11,20 +12,22 @@ import ( "time" "github.com/aws/aws-sdk-go/service/ec2" + commonhelper "github.com/hashicorp/packer/helper/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepGetPassword reads the password from a Windows server and sets it // on the WinRM config. 
type StepGetPassword struct { - Debug bool - Comm *communicator.Config - Timeout time.Duration + Debug bool + Comm *communicator.Config + Timeout time.Duration + BuildName string } -func (s *StepGetPassword) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepGetPassword) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) // Skip if we're not using winrm @@ -91,11 +94,15 @@ WaitLoop: ui.Message(fmt.Sprintf( "Password (since debug is enabled): %s", s.Comm.WinRMPassword)) } + // store so that we can access this later during provisioning + commonhelper.SetSharedState("winrm_password", s.Comm.WinRMPassword, s.BuildName) return multistep.ActionContinue } -func (s *StepGetPassword) Cleanup(multistep.StateBag) {} +func (s *StepGetPassword) Cleanup(multistep.StateBag) { + commonhelper.RemoveSharedStateFile("winrm_password", s.BuildName) +} func (s *StepGetPassword) waitForPassword(state multistep.StateBag, cancel <-chan struct{}) (string, error) { ec2conn := state.Get("ec2").(*ec2.EC2) diff --git a/builder/amazon/common/step_key_pair.go b/builder/amazon/common/step_key_pair.go index 68ce5af2d..927101fcf 100644 --- a/builder/amazon/common/step_key_pair.go +++ b/builder/amazon/common/step_key_pair.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "io/ioutil" "os" "runtime" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepKeyPair struct { @@ -22,7 +23,7 @@ type StepKeyPair struct { doCleanup bool } -func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.PrivateKeyFile != "" { diff --git a/builder/amazon/common/step_modify_ami_attributes.go b/builder/amazon/common/step_modify_ami_attributes.go index 3e86e5391..234e1762a 100644 --- a/builder/amazon/common/step_modify_ami_attributes.go +++ b/builder/amazon/common/step_modify_ami_attributes.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type StepModifyAMIAttributes struct { @@ -21,17 +22,11 @@ type StepModifyAMIAttributes struct { Ctx interpolate.Context } -func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepModifyAMIAttributes) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) + session := state.Get("awsSession").(*session.Session) ui := state.Get("ui").(packer.Ui) amis := state.Get("amis").(map[string]string) - - var sourceAMI string - if rawSourceAMI, hasSourceAMI := state.GetOk("source_image"); hasSourceAMI { - sourceAMI = *rawSourceAMI.(*ec2.Image).ImageId - } else { - sourceAMI = "" - } snapshots := state.Get("snapshots").(map[string][]string) // Determine if there is any work to do. 
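
The hunks around this point (and the earlier `step_create_tags.go` change) all share one refactor: the step pulls the shared `awsSession` out of the state bag and derives each regional client with `ec2.New(session, &aws.Config{Region: ...})`, instead of assembling a brand-new session inside every region loop. A minimal, self-contained sketch of that pattern follows; the region names, the `DescribeImages` call, and the standalone `main` wrapper are illustrative only and assume credentials are available through the SDK's usual environment or shared-config chain.

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// One shared session owns the credential chain; each regional client
	// only overrides the region, mirroring how the steps above reuse the
	// "awsSession" entry from the state bag.
	sess := session.Must(session.NewSession())

	for _, region := range []string{"us-east-1", "eu-west-1"} {
		regionConn := ec2.New(sess, &aws.Config{Region: aws.String(region)})

		// Illustrative call only: list images owned by this account.
		resp, err := regionConn.DescribeImages(&ec2.DescribeImagesInput{
			Owners: aws.StringSlice([]string{"self"}),
		})
		if err != nil {
			fmt.Printf("%s: %s\n", region, err)
			continue
		}
		fmt.Printf("%s: %d images\n", region, len(resp.Images))
	}
}
```

Reusing one session also removes a per-region error path from the middle of the tagging and attribute-modification loops, since the removed `session.NewSession` call could fail on every iteration.
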
@@ -48,10 +43,7 @@ func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAc } var err error - s.Ctx.Data = &BuildInfoTemplate{ - SourceAMI: sourceAMI, - BuildRegion: *ec2conn.Config.Region, - } + s.Ctx.Data = extractBuildInfo(*ec2conn.Config.Region, state) s.Description, err = interpolate.Render(s.Description, &s.Ctx) if err != nil { err = fmt.Errorf("Error interpolating AMI description: %s", err) @@ -152,22 +144,13 @@ func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAc // Modifying image attributes for region, ami := range amis { ui.Say(fmt.Sprintf("Modifying attributes on AMI (%s)...", ami)) - awsConfig := aws.Config{ - Credentials: ec2conn.Config.Credentials, - Region: aws.String(region), - } - session, err := session.NewSession(&awsConfig) - if err != nil { - err := fmt.Errorf("Error creating AWS session: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - regionconn := ec2.New(session) + regionConn := ec2.New(session, &aws.Config{ + Region: aws.String(region), + }) for name, input := range options { ui.Message(fmt.Sprintf("Modifying: %s", name)) input.ImageId = &ami - _, err := regionconn.ModifyImageAttribute(input) + _, err := regionConn.ModifyImageAttribute(input) if err != nil { err := fmt.Errorf("Error modify AMI attributes: %s", err) state.Put("error", err) @@ -181,16 +164,13 @@ func (s *StepModifyAMIAttributes) Run(state multistep.StateBag) multistep.StepAc for region, region_snapshots := range snapshots { for _, snapshot := range region_snapshots { ui.Say(fmt.Sprintf("Modifying attributes on snapshot (%s)...", snapshot)) - awsConfig := aws.Config{ - Credentials: ec2conn.Config.Credentials, - Region: aws.String(region), - } - session := session.New(&awsConfig) - regionconn := ec2.New(session) + regionConn := ec2.New(session, &aws.Config{ + Region: aws.String(region), + }) for name, input := range snapshotOptions { ui.Message(fmt.Sprintf("Modifying: %s", name)) input.SnapshotId = &snapshot - _, err := regionconn.ModifySnapshotAttribute(input) + _, err := regionConn.ModifySnapshotAttribute(input) if err != nil { err := fmt.Errorf("Error modify snapshot attributes: %s", err) state.Put("error", err) diff --git a/builder/amazon/common/step_modify_ebs_instance.go b/builder/amazon/common/step_modify_ebs_instance.go index 12c2367aa..c2c454e06 100644 --- a/builder/amazon/common/step_modify_ebs_instance.go +++ b/builder/amazon/common/step_modify_ebs_instance.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepModifyEBSBackedInstance struct { @@ -14,7 +15,7 @@ type StepModifyEBSBackedInstance struct { EnableAMISriovNetSupport bool } -func (s *StepModifyEBSBackedInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepModifyEBSBackedInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) instance := state.Get("instance").(*ec2.Instance) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/common/step_pre_validate.go b/builder/amazon/common/step_pre_validate.go index 4c57cce0b..73b4022b7 100644 --- a/builder/amazon/common/step_pre_validate.go +++ b/builder/amazon/common/step_pre_validate.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws" 
"github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepPreValidate provides an opportunity to pre-validate any configuration for @@ -17,7 +18,7 @@ type StepPreValidate struct { ForceDeregister bool } -func (s *StepPreValidate) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPreValidate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.ForceDeregister { ui.Say("Force Deregister flag found, skipping prevalidating AMI Name") diff --git a/builder/amazon/common/step_run_source_instance.go b/builder/amazon/common/step_run_source_instance.go index a3dca8057..7c31fa1f8 100644 --- a/builder/amazon/common/step_run_source_instance.go +++ b/builder/amazon/common/step_run_source_instance.go @@ -1,41 +1,46 @@ package common import ( + "context" "encoding/base64" "fmt" "io/ioutil" "log" "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" + retry "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type StepRunSourceInstance struct { AssociatePublicIpAddress bool AvailabilityZone string BlockDevices BlockDevices + Ctx interpolate.Context Debug bool EbsOptimized bool + EnableT2Unlimited bool ExpectedRootDevice string IamInstanceProfile string InstanceInitiatedShutdownBehavior string InstanceType string + IsRestricted bool SourceAMI string SubnetId string - Tags map[string]string - VolumeTags map[string]string + Tags TagMap UserData string UserDataFile string - Ctx interpolate.Context + VolumeTags TagMap instanceId string } -func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRunSourceInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) var keyName string if name, ok := state.GetOk("keyPair"); ok { @@ -84,16 +89,15 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi s.Tags["Name"] = "Packer Builder" } - ec2Tags, err := ConvertToEC2Tags(s.Tags, *ec2conn.Config.Region, s.SourceAMI, s.Ctx) + ec2Tags, err := s.Tags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) if err != nil { err := fmt.Errorf("Error tagging source instance: %s", err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - ReportTags(ui, ec2Tags) - volTags, err := ConvertToEC2Tags(s.VolumeTags, *ec2conn.Config.Region, s.SourceAMI, s.Ctx) + volTags, err := s.VolumeTags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) if err != nil { err := fmt.Errorf("Error tagging volumes: %s", err) state.Put("error", err) @@ -113,6 +117,12 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi EbsOptimized: &s.EbsOptimized, } + if s.EnableT2Unlimited { + creditOption := "unlimited" + runOpts.CreditSpecification = &ec2.CreditSpecificationRequest{CpuCredits: &creditOption} + } + + // Collect tags for tagging on resource creation var tagSpecs []*ec2.TagSpecification if len(ec2Tags) > 0 { @@ -133,8 +143,11 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi tagSpecs = append(tagSpecs, runVolTags) } - if len(tagSpecs) > 0 { + // If our region supports it, set tag specifications + if len(tagSpecs) > 0 && !s.IsRestricted { 
runOpts.SetTagSpecifications(tagSpecs) + ec2Tags.Report(ui) + volTags.Report(ui) } if keyName != "" { @@ -174,21 +187,26 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi ui.Message(fmt.Sprintf("Instance ID: %s", instanceId)) ui.Say(fmt.Sprintf("Waiting for instance (%v) to become ready...", instanceId)) - stateChange := StateChangeConf{ - Pending: []string{"pending"}, - Target: "running", - Refresh: InstanceStateRefreshFunc(ec2conn, instanceId), - StepState: state, + + describeInstance := &ec2.DescribeInstancesInput{ + InstanceIds: []*string{aws.String(instanceId)}, } - latestInstance, err := WaitForState(&stateChange) - if err != nil { + if err := ec2conn.WaitUntilInstanceRunningWithContext(ctx, describeInstance); err != nil { err := fmt.Errorf("Error waiting for instance (%s) to become ready: %s", instanceId, err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - instance := latestInstance.(*ec2.Instance) + r, err := ec2conn.DescribeInstances(describeInstance) + + if err != nil || len(r.Reservations) == 0 || len(r.Reservations[0].Instances) == 0 { + err := fmt.Errorf("Error finding source instance.") + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + instance := r.Reservations[0].Instances[0] if s.Debug { if instance.PublicDnsName != nil && *instance.PublicDnsName != "" { @@ -206,6 +224,70 @@ func (s *StepRunSourceInstance) Run(state multistep.StateBag) multistep.StepActi state.Put("instance", instance) + // If we're in a region that doesn't support tagging on instance creation, + // do that now. + + if s.IsRestricted { + ec2Tags.Report(ui) + // Retry creating tags for about 2.5 minutes + err = retry.Retry(0.2, 30, 11, func(_ uint) (bool, error) { + _, err := ec2conn.CreateTags(&ec2.CreateTagsInput{ + Tags: ec2Tags, + Resources: []*string{instance.InstanceId}, + }) + if err == nil { + return true, nil + } + if awsErr, ok := err.(awserr.Error); ok { + if awsErr.Code() == "InvalidInstanceID.NotFound" { + return false, nil + } + } + return true, err + }) + + if err != nil { + err := fmt.Errorf("Error tagging source instance: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + // Now tag volumes + + volumeIds := make([]*string, 0) + for _, v := range instance.BlockDeviceMappings { + if ebs := v.Ebs; ebs != nil { + volumeIds = append(volumeIds, ebs.VolumeId) + } + } + + if len(volumeIds) > 0 && s.VolumeTags.IsSet() { + ui.Say("Adding tags to source EBS Volumes") + + volumeTags, err := s.VolumeTags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) + if err != nil { + err := fmt.Errorf("Error tagging source EBS Volumes on %s: %s", *instance.InstanceId, err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + volumeTags.Report(ui) + + _, err = ec2conn.CreateTags(&ec2.CreateTagsInput{ + Resources: volumeIds, + Tags: volumeTags, + }) + + if err != nil { + err := fmt.Errorf("Error tagging source EBS Volumes on %s: %s", *instance.InstanceId, err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + } + } + return multistep.ActionContinue } @@ -221,14 +303,8 @@ func (s *StepRunSourceInstance) Cleanup(state multistep.StateBag) { ui.Error(fmt.Sprintf("Error terminating instance, may still be around: %s", err)) return } - stateChange := StateChangeConf{ - Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"}, - Refresh: InstanceStateRefreshFunc(ec2conn, s.instanceId), - 
Target: "terminated", - } - _, err := WaitForState(&stateChange) - if err != nil { + if err := WaitUntilInstanceTerminated(aws.BackgroundContext(), ec2conn, s.instanceId); err != nil { ui.Error(err.Error()) } } diff --git a/builder/amazon/common/step_run_spot_instance.go b/builder/amazon/common/step_run_spot_instance.go index b34d09bcb..09d456496 100644 --- a/builder/amazon/common/step_run_spot_instance.go +++ b/builder/amazon/common/step_run_spot_instance.go @@ -1,6 +1,7 @@ package common import ( + "context" "encoding/base64" "fmt" "io/ioutil" @@ -13,9 +14,9 @@ import ( "github.com/aws/aws-sdk-go/service/ec2" retry "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type StepRunSpotInstance struct { @@ -31,9 +32,10 @@ type StepRunSpotInstance struct { SourceAMI string SpotPrice string SpotPriceProduct string + SpotTags TagMap SubnetId string - Tags map[string]string - VolumeTags map[string]string + Tags TagMap + VolumeTags TagMap UserData string UserDataFile string Ctx interpolate.Context @@ -42,7 +44,7 @@ type StepRunSpotInstance struct { spotRequest *ec2.SpotInstanceRequest } -func (s *StepRunSpotInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRunSpotInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) var keyName string if name, ok := state.GetOk("keyPair"); ok { @@ -142,14 +144,14 @@ func (s *StepRunSpotInstance) Run(state multistep.StateBag) multistep.StepAction s.Tags["Name"] = "Packer Builder" } - ec2Tags, err := ConvertToEC2Tags(s.Tags, *ec2conn.Config.Region, s.SourceAMI, s.Ctx) + ec2Tags, err := s.Tags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) if err != nil { err := fmt.Errorf("Error tagging source instance: %s", err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - ReportTags(ui, ec2Tags) + ec2Tags.Report(ui) ui.Message(fmt.Sprintf( "Requesting spot instance '%s' for: %s", @@ -201,13 +203,7 @@ func (s *StepRunSpotInstance) Run(state multistep.StateBag) multistep.StepAction spotRequestId := s.spotRequest.SpotInstanceRequestId ui.Message(fmt.Sprintf("Waiting for spot request (%s) to become active...", *spotRequestId)) - stateChange := StateChangeConf{ - Pending: []string{"open"}, - Target: "active", - Refresh: SpotRequestStateRefreshFunc(ec2conn, *spotRequestId), - StepState: state, - } - _, err = WaitForState(&stateChange) + err = WaitUntilSpotRequestFulfilled(ctx, ec2conn, *spotRequestId) if err != nil { err := fmt.Errorf("Error waiting for spot request (%s) to become ready: %s", *spotRequestId, err) state.Put("error", err) @@ -226,26 +222,58 @@ func (s *StepRunSpotInstance) Run(state multistep.StateBag) multistep.StepAction } instanceId = *spotResp.SpotInstanceRequests[0].InstanceId + // Tag spot instance request + spotTags, err := s.SpotTags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) + if err != nil { + err := fmt.Errorf("Error tagging spot request: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + spotTags.Report(ui) + + if len(spotTags) > 0 && s.SpotTags.IsSet() { + // Retry creating tags for about 2.5 minutes + err = retry.Retry(0.2, 30, 11, func(_ uint) (bool, error) { + _, err := ec2conn.CreateTags(&ec2.CreateTagsInput{ + Tags: spotTags, + Resources: []*string{spotRequestId}, + }) + return true, err + }) + if err != nil { + err := 
fmt.Errorf("Error tagging spot request: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + } + // Set the instance ID so that the cleanup works properly s.instanceId = instanceId ui.Message(fmt.Sprintf("Instance ID: %s", instanceId)) ui.Say(fmt.Sprintf("Waiting for instance (%v) to become ready...", instanceId)) - stateChangeSpot := StateChangeConf{ - Pending: []string{"pending"}, - Target: "running", - Refresh: InstanceStateRefreshFunc(ec2conn, instanceId), - StepState: state, + describeInstance := &ec2.DescribeInstancesInput{ + InstanceIds: []*string{aws.String(instanceId)}, } - latestInstance, err := WaitForState(&stateChangeSpot) - if err != nil { + if err := ec2conn.WaitUntilInstanceRunningWithContext(ctx, describeInstance); err != nil { err := fmt.Errorf("Error waiting for instance (%s) to become ready: %s", instanceId, err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - instance := latestInstance.(*ec2.Instance) + r, err := ec2conn.DescribeInstances(&ec2.DescribeInstancesInput{ + InstanceIds: []*string{aws.String(instanceId)}, + }) + if err != nil || len(r.Reservations) == 0 || len(r.Reservations[0].Instances) == 0 { + err := fmt.Errorf("Error finding source instance.") + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + instance := r.Reservations[0].Instances[0] // Retry creating tags for about 2.5 minutes err = retry.Retry(0.2, 30, 11, func(_ uint) (bool, error) { @@ -278,21 +306,21 @@ func (s *StepRunSpotInstance) Run(state multistep.StateBag) multistep.StepAction } } - if len(volumeIds) > 0 && len(s.VolumeTags) > 0 { + if len(volumeIds) > 0 && s.VolumeTags.IsSet() { ui.Say("Adding tags to source EBS Volumes") - tags, err := ConvertToEC2Tags(s.VolumeTags, *ec2conn.Config.Region, s.SourceAMI, s.Ctx) + + volumeTags, err := s.VolumeTags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) if err != nil { err := fmt.Errorf("Error tagging source EBS Volumes on %s: %s", *instance.InstanceId, err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - - ReportTags(ui, tags) + volumeTags.Report(ui) _, err = ec2conn.CreateTags(&ec2.CreateTagsInput{ Resources: volumeIds, - Tags: tags, + Tags: volumeTags, }) if err != nil { @@ -338,13 +366,8 @@ func (s *StepRunSpotInstance) Cleanup(state multistep.StateBag) { ui.Error(fmt.Sprintf("Error cancelling the spot request, may still be around: %s", err)) return } - stateChange := StateChangeConf{ - Pending: []string{"active", "open"}, - Refresh: SpotRequestStateRefreshFunc(ec2conn, *s.spotRequest.SpotInstanceRequestId), - Target: "cancelled", - } - _, err := WaitForState(&stateChange) + err := WaitUntilSpotRequestFulfilled(aws.BackgroundContext(), ec2conn, *s.spotRequest.SpotInstanceRequestId) if err != nil { ui.Error(err.Error()) } @@ -358,14 +381,8 @@ func (s *StepRunSpotInstance) Cleanup(state multistep.StateBag) { ui.Error(fmt.Sprintf("Error terminating instance, may still be around: %s", err)) return } - stateChange := StateChangeConf{ - Pending: []string{"pending", "running", "shutting-down", "stopped", "stopping"}, - Refresh: InstanceStateRefreshFunc(ec2conn, s.instanceId), - Target: "terminated", - } - _, err := WaitForState(&stateChange) - if err != nil { + if err := WaitUntilInstanceTerminated(aws.BackgroundContext(), ec2conn, s.instanceId); err != nil { ui.Error(err.Error()) } } diff --git a/builder/amazon/common/step_security_group.go b/builder/amazon/common/step_security_group.go index 8027903f5..031842cd3 
100644 --- a/builder/amazon/common/step_security_group.go +++ b/builder/amazon/common/step_security_group.go @@ -1,6 +1,7 @@ package common import ( + "context" "fmt" "log" "time" @@ -10,8 +11,8 @@ import ( "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/packer/common/uuid" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepSecurityGroup struct { @@ -23,7 +24,7 @@ type StepSecurityGroup struct { createdGroupId string } -func (s *StepSecurityGroup) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepSecurityGroup) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/common/step_source_ami_info.go b/builder/amazon/common/step_source_ami_info.go index c7e9f733e..5e57c2fbf 100644 --- a/builder/amazon/common/step_source_ami_info.go +++ b/builder/amazon/common/step_source_ami_info.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "log" "sort" "time" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepSourceAMIInfo extracts critical information from the source AMI @@ -52,7 +53,7 @@ func mostRecentAmi(images []*ec2.Image) *ec2.Image { return sortedImages[len(sortedImages)-1] } -func (s *StepSourceAMIInfo) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepSourceAMIInfo) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) ui := state.Get("ui").(packer.Ui) diff --git a/builder/amazon/common/step_stop_ebs_instance.go b/builder/amazon/common/step_stop_ebs_instance.go index 852626811..aee29b432 100644 --- a/builder/amazon/common/step_stop_ebs_instance.go +++ b/builder/amazon/common/step_stop_ebs_instance.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepStopEBSBackedInstance struct { @@ -15,7 +16,7 @@ type StepStopEBSBackedInstance struct { DisableStopInstance bool } -func (s *StepStopEBSBackedInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepStopEBSBackedInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) instance := state.Get("instance").(*ec2.Instance) ui := state.Get("ui").(packer.Ui) @@ -75,15 +76,14 @@ func (s *StepStopEBSBackedInstance) Run(state multistep.StateBag) multistep.Step ui.Say("Automatic instance stop disabled. Please stop instance manually.") } - // Wait for the instance to actual stop + // Wait for the instance to actually stop ui.Say("Waiting for the instance to stop...") - stateChange := StateChangeConf{ - Pending: []string{"running", "pending", "stopping"}, - Target: "stopped", - Refresh: InstanceStateRefreshFunc(ec2conn, *instance.InstanceId), - StepState: state, - } - _, err = WaitForState(&stateChange) + err = ec2conn.WaitUntilInstanceStoppedWithContext(ctx, + &ec2.DescribeInstancesInput{ + InstanceIds: []*string{instance.InstanceId}, + }, + getWaiterOptions()...) 
+ if err != nil { err := fmt.Errorf("Error waiting for instance to stop: %s", err) state.Put("error", err) diff --git a/builder/amazon/common/tags.go b/builder/amazon/common/tags.go new file mode 100644 index 000000000..918269f91 --- /dev/null +++ b/builder/amazon/common/tags.go @@ -0,0 +1,46 @@ +package common + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/template/interpolate" +) + +type TagMap map[string]string +type EC2Tags []*ec2.Tag + +func (t EC2Tags) Report(ui packer.Ui) { + for _, tag := range t { + ui.Message(fmt.Sprintf("Adding tag: \"%s\": \"%s\"", + aws.StringValue(tag.Key), aws.StringValue(tag.Value))) + } +} + +func (t TagMap) IsSet() bool { + return len(t) > 0 +} + +func (t TagMap) EC2Tags(ctx interpolate.Context, region string, state multistep.StateBag) (EC2Tags, error) { + var ec2Tags []*ec2.Tag + ctx.Data = extractBuildInfo(region, state) + + for key, value := range t { + interpolatedKey, err := interpolate.Render(key, &ctx) + if err != nil { + return nil, fmt.Errorf("Error processing tag: %s:%s - %s", key, value, err) + } + interpolatedValue, err := interpolate.Render(value, &ctx) + if err != nil { + return nil, fmt.Errorf("Error processing tag: %s:%s - %s", key, value, err) + } + ec2Tags = append(ec2Tags, &ec2.Tag{ + Key: aws.String(interpolatedKey), + Value: aws.String(interpolatedValue), + }) + } + return ec2Tags, nil +} diff --git a/builder/amazon/ebs/builder.go b/builder/amazon/ebs/builder.go index 1cc346472..bd10ad36a 100644 --- a/builder/amazon/ebs/builder.go +++ b/builder/amazon/ebs/builder.go @@ -14,9 +14,9 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) // The unique ID for this builder @@ -28,7 +28,7 @@ type Config struct { awscommon.AMIConfig `mapstructure:",squash"` awscommon.BlockDevices `mapstructure:",squash"` awscommon.RunConfig `mapstructure:",squash"` - VolumeRunTags map[string]string `mapstructure:"run_volume_tags"` + VolumeRunTags awscommon.TagMap `mapstructure:"run_volume_tags"` ctx interpolate.Context } @@ -48,6 +48,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { "ami_description", "run_tags", "run_volume_tags", + "spot_tags", "snapshot_tags", "tags", }, @@ -69,11 +70,18 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { errs = packer.MultiErrorAppend(errs, b.config.BlockDevices.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.RunConfig.Prepare(&b.config.ctx)...) + if b.config.IsSpotInstance() && (b.config.AMIENASupport || b.config.AMISriovNetSupport) { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Spot instances do not support modification, which is required "+ + "when either `ena_support` or `sriov_support` are set. 
Please ensure "+ + "you use an AMI that already has either SR-IOV or ENA enabled.")) + } + if errs != nil && len(errs.Errors) > 0 { return nil, errs } - log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey)) + log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey, b.config.Token)) return nil, nil } @@ -106,51 +114,54 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe state := new(multistep.BasicStateBag) state.Put("config", b.config) state.Put("ec2", ec2conn) + state.Put("awsSession", session) state.Put("hook", hook) state.Put("ui", ui) var instanceStep multistep.Step - isSpotInstance := b.config.SpotPrice != "" && b.config.SpotPrice != "0" - if isSpotInstance { + if b.config.IsSpotInstance() { instanceStep = &awscommon.StepRunSpotInstance{ - Debug: b.config.PackerDebug, - ExpectedRootDevice: "ebs", - SpotPrice: b.config.SpotPrice, - SpotPriceProduct: b.config.SpotPriceAutoProduct, - InstanceType: b.config.InstanceType, - UserData: b.config.UserData, - UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, - AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, - AvailabilityZone: b.config.AvailabilityZone, - BlockDevices: b.config.BlockDevices, - Tags: b.config.RunTags, - VolumeTags: b.config.VolumeRunTags, - Ctx: b.config.ctx, + AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, + AvailabilityZone: b.config.AvailabilityZone, + BlockDevices: b.config.BlockDevices, + Ctx: b.config.ctx, + Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + ExpectedRootDevice: "ebs", + IamInstanceProfile: b.config.IamInstanceProfile, InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior, + InstanceType: b.config.InstanceType, + SourceAMI: b.config.SourceAmi, + SpotPrice: b.config.SpotPrice, + SpotPriceProduct: b.config.SpotPriceAutoProduct, + SpotTags: b.config.SpotTags, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + UserData: b.config.UserData, + UserDataFile: b.config.UserDataFile, + VolumeTags: b.config.VolumeRunTags, } } else { instanceStep = &awscommon.StepRunSourceInstance{ - Debug: b.config.PackerDebug, - ExpectedRootDevice: "ebs", - InstanceType: b.config.InstanceType, - UserData: b.config.UserData, - UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, - AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, - AvailabilityZone: b.config.AvailabilityZone, - BlockDevices: b.config.BlockDevices, - Tags: b.config.RunTags, - VolumeTags: b.config.VolumeRunTags, - Ctx: b.config.ctx, + AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, + AvailabilityZone: b.config.AvailabilityZone, + BlockDevices: b.config.BlockDevices, + Ctx: b.config.ctx, + Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + EnableT2Unlimited: b.config.EnableT2Unlimited, + ExpectedRootDevice: "ebs", + IamInstanceProfile: b.config.IamInstanceProfile, InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior, + InstanceType: b.config.InstanceType, + IsRestricted: b.config.IsChinaCloud() || b.config.IsGovCloud(), + SourceAMI: b.config.SourceAmi, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + UserData: b.config.UserData, + UserDataFile: 
b.config.UserDataFile, + VolumeTags: b.config.VolumeRunTags, } } @@ -185,15 +196,16 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, instanceStep, &awscommon.StepGetPassword{ - Debug: b.config.PackerDebug, - Comm: &b.config.RunConfig.Comm, - Timeout: b.config.WindowsPasswordTimeout, + Debug: b.config.PackerDebug, + Comm: &b.config.RunConfig.Comm, + Timeout: b.config.WindowsPasswordTimeout, + BuildName: b.config.PackerBuildName, }, &communicator.StepConnect{ Config: &b.config.RunConfig.Comm, Host: awscommon.SSHHost( ec2conn, - b.config.SSHPrivateIp), + b.config.SSHInterface), SSHConfig: awscommon.SSHConfig( b.config.RunConfig.Comm.SSHAgentAuth, b.config.RunConfig.Comm.SSHUsername, @@ -201,7 +213,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &common.StepProvision{}, &awscommon.StepStopEBSBackedInstance{ - Skip: isSpotInstance, + Skip: b.config.IsSpotInstance(), DisableStopInstance: b.config.DisableStopInstance, }, &awscommon.StepModifyEBSBackedInstance{ @@ -263,7 +275,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe artifact := &awscommon.Artifact{ Amis: state.Get("amis").(map[string]string), BuilderIdValue: BuilderId, - Conn: ec2conn, + Session: session, } return artifact, nil diff --git a/builder/amazon/ebs/builder_acc_test.go b/builder/amazon/ebs/builder_acc_test.go index 9ea9c87f9..b42472a39 100644 --- a/builder/amazon/ebs/builder_acc_test.go +++ b/builder/amazon/ebs/builder_acc_test.go @@ -1,8 +1,11 @@ +/* +Deregister the test image with +aws ec2 deregister-image --image-id $(aws ec2 describe-images --output text --filters "Name=name,Values=packer-test-packer-test-dereg" --query 'Images[*].{ID:ImageId}') +*/ package ebs import ( "fmt" - "os" "testing" "github.com/aws/aws-sdk-go/aws" @@ -58,14 +61,14 @@ func TestBuilderAcc_forceDeleteSnapshot(t *testing.T) { // Get image data by AMI name ec2conn, _ := testEC2Conn() - imageResp, _ := ec2conn.DescribeImages( - &ec2.DescribeImagesInput{Filters: []*ec2.Filter{ - { - Name: aws.String("name"), - Values: []*string{aws.String(amiName)}, - }, - }}, - ) + describeInput := &ec2.DescribeImagesInput{Filters: []*ec2.Filter{ + { + Name: aws.String("name"), + Values: []*string{aws.String(amiName)}, + }, + }} + ec2conn.WaitUntilImageExists(describeInput) + imageResp, _ := ec2conn.DescribeImages(describeInput) image := imageResp.Images[0] // Get snapshot ids for image @@ -243,13 +246,6 @@ func checkBootEncrypted() builderT.TestCheckFunc { } func testAccPreCheck(t *testing.T) { - if v := os.Getenv("AWS_ACCESS_KEY_ID"); v == "" { - t.Fatal("AWS_ACCESS_KEY_ID must be set for acceptance tests") - } - - if v := os.Getenv("AWS_SECRET_ACCESS_KEY"); v == "" { - t.Fatal("AWS_SECRET_ACCESS_KEY must be set for acceptance tests") - } } func testEC2Conn() (*ec2.EC2, error) { @@ -298,7 +294,7 @@ const testBuilderAccForceDeregister = ` "source_ami": "ami-76b2a71e", "ssh_username": "ubuntu", "force_deregister": "%s", - "ami_name": "packer-test-%s" + "ami_name": "%s" }] } ` @@ -313,7 +309,7 @@ const testBuilderAccForceDeleteSnapshot = ` "ssh_username": "ubuntu", "force_deregister": "%s", "force_delete_snapshot": "%s", - "ami_name": "packer-test-%s" + "ami_name": "%s" }] } ` diff --git a/builder/amazon/ebs/step_cleanup_volumes.go b/builder/amazon/ebs/step_cleanup_volumes.go index d056f5457..3fd0de64f 100644 --- a/builder/amazon/ebs/step_cleanup_volumes.go +++ b/builder/amazon/ebs/step_cleanup_volumes.go @@ -1,13 +1,14 @@ package ebs import ( + "context" 
"fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // stepCleanupVolumes cleans up any orphaned volumes that were not designated to @@ -17,7 +18,7 @@ type stepCleanupVolumes struct { BlockDevices common.BlockDevices } -func (s *stepCleanupVolumes) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCleanupVolumes) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { // stepCleanupVolumes is for Cleanup only return multistep.ActionContinue } diff --git a/builder/amazon/ebs/step_create_ami.go b/builder/amazon/ebs/step_create_ami.go index 9cb4bcee2..1d081db70 100644 --- a/builder/amazon/ebs/step_create_ami.go +++ b/builder/amazon/ebs/step_create_ami.go @@ -1,19 +1,21 @@ package ebs import ( + "context" "fmt" + "log" "github.com/aws/aws-sdk-go/service/ec2" awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepCreateAMI struct { image *ec2.Image } -func (s *stepCreateAMI) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateAMI) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) ec2conn := state.Get("ec2").(*ec2.EC2) instance := state.Get("instance").(*ec2.Instance) @@ -42,16 +44,18 @@ func (s *stepCreateAMI) Run(state multistep.StateBag) multistep.StepAction { state.Put("amis", amis) // Wait for the image to become ready - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending"}, - Target: "available", - Refresh: awscommon.AMIStateRefreshFunc(ec2conn, *createResp.ImageId), - StepState: state, - } - ui.Say("Waiting for AMI to become ready...") - if _, err := awscommon.WaitForState(&stateChange); err != nil { - err := fmt.Errorf("Error waiting for AMI: %s", err) + if err := awscommon.WaitUntilAMIAvailable(ctx, ec2conn, *createResp.ImageId); err != nil { + log.Printf("Error waiting for AMI: %s", err) + imagesResp, err := ec2conn.DescribeImages(&ec2.DescribeImagesInput{ImageIds: []*string{createResp.ImageId}}) + if err != nil { + log.Printf("Unable to determine reason waiting for AMI failed: %s", err) + err = fmt.Errorf("Unknown error waiting for AMI.") + } else { + stateReason := imagesResp.Images[0].StateReason + err = fmt.Errorf("Error waiting for AMI. 
Reason: %s", stateReason) + } + state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt diff --git a/builder/amazon/ebssurrogate/builder.go b/builder/amazon/ebssurrogate/builder.go index 16e8367da..9642c1a83 100644 --- a/builder/amazon/ebssurrogate/builder.go +++ b/builder/amazon/ebssurrogate/builder.go @@ -12,9 +12,9 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const BuilderId = "mitchellh.amazon.ebssurrogate" @@ -26,8 +26,8 @@ type Config struct { awscommon.BlockDevices `mapstructure:",squash"` awscommon.AMIConfig `mapstructure:",squash"` - RootDevice RootBlockDevice `mapstructure:"ami_root_device"` - VolumeRunTags map[string]string `mapstructure:"run_volume_tags"` + RootDevice RootBlockDevice `mapstructure:"ami_root_device"` + VolumeRunTags awscommon.TagMap `mapstructure:"run_volume_tags"` ctx interpolate.Context } @@ -48,6 +48,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { "run_tags", "run_volume_tags", "snapshot_tags", + "spot_tags", "tags", }, }, @@ -84,11 +85,18 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { errs = packer.MultiErrorAppend(errs, fmt.Errorf("no volume with name '%s' is found", b.config.RootDevice.SourceDeviceName)) } + if b.config.IsSpotInstance() && (b.config.AMIENASupport || b.config.AMISriovNetSupport) { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Spot instances do not support modification, which is required "+ + "when either `ena_support` or `sriov_support` are set. Please ensure "+ + "you use an AMI that already has either SR-IOV or ENA enabled.")) + } + if errs != nil && len(errs.Errors) > 0 { return nil, errs } - log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey)) + log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey, b.config.Token)) return nil, nil } @@ -120,54 +128,60 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe state := new(multistep.BasicStateBag) state.Put("config", &b.config) state.Put("ec2", ec2conn) + state.Put("awsSession", session) state.Put("hook", hook) state.Put("ui", ui) var instanceStep multistep.Step - isSpotInstance := b.config.SpotPrice != "" && b.config.SpotPrice != "0" - if isSpotInstance { + if b.config.IsSpotInstance() { instanceStep = &awscommon.StepRunSpotInstance{ - Debug: b.config.PackerDebug, - ExpectedRootDevice: "ebs", - SpotPrice: b.config.SpotPrice, - SpotPriceProduct: b.config.SpotPriceAutoProduct, - InstanceType: b.config.InstanceType, - UserData: b.config.UserData, - UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, - AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, - AvailabilityZone: b.config.AvailabilityZone, - BlockDevices: b.config.BlockDevices, - Tags: b.config.RunTags, - VolumeTags: b.config.VolumeRunTags, - Ctx: b.config.ctx, + AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, + AvailabilityZone: b.config.AvailabilityZone, + BlockDevices: b.config.BlockDevices, + Ctx: b.config.ctx, + Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + ExpectedRootDevice: "ebs", + IamInstanceProfile: 
b.config.IamInstanceProfile, InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior, + InstanceType: b.config.InstanceType, + SourceAMI: b.config.SourceAmi, + SpotPrice: b.config.SpotPrice, + SpotPriceProduct: b.config.SpotPriceAutoProduct, + SpotTags: b.config.SpotTags, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + UserData: b.config.UserData, + UserDataFile: b.config.UserDataFile, + VolumeTags: b.config.VolumeRunTags, } } else { instanceStep = &awscommon.StepRunSourceInstance{ - Debug: b.config.PackerDebug, - ExpectedRootDevice: "ebs", - InstanceType: b.config.InstanceType, - UserData: b.config.UserData, - UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, - AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, - AvailabilityZone: b.config.AvailabilityZone, - BlockDevices: b.config.BlockDevices, - Tags: b.config.RunTags, - VolumeTags: b.config.VolumeRunTags, - Ctx: b.config.ctx, + AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, + AvailabilityZone: b.config.AvailabilityZone, + BlockDevices: b.config.BlockDevices, + Ctx: b.config.ctx, + Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + EnableT2Unlimited: b.config.EnableT2Unlimited, + ExpectedRootDevice: "ebs", + IamInstanceProfile: b.config.IamInstanceProfile, InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior, + InstanceType: b.config.InstanceType, + IsRestricted: b.config.IsChinaCloud() || b.config.IsGovCloud(), + SourceAMI: b.config.SourceAmi, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + UserData: b.config.UserData, + UserDataFile: b.config.UserDataFile, + VolumeTags: b.config.VolumeRunTags, } } + amiDevices := b.config.BuildAMIDevices() + launchDevices := b.config.BuildLaunchDevices() + // Build the steps steps := []multistep.Step{ &awscommon.StepPreValidate{ @@ -196,15 +210,16 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, instanceStep, &awscommon.StepGetPassword{ - Debug: b.config.PackerDebug, - Comm: &b.config.RunConfig.Comm, - Timeout: b.config.WindowsPasswordTimeout, + Debug: b.config.PackerDebug, + Comm: &b.config.RunConfig.Comm, + Timeout: b.config.WindowsPasswordTimeout, + BuildName: b.config.PackerBuildName, }, &communicator.StepConnect{ Config: &b.config.RunConfig.Comm, Host: awscommon.SSHHost( ec2conn, - b.config.SSHPrivateIp), + b.config.SSHInterface), SSHConfig: awscommon.SSHConfig( b.config.RunConfig.Comm.SSHAgentAuth, b.config.RunConfig.Comm.SSHUsername, @@ -212,15 +227,15 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &common.StepProvision{}, &awscommon.StepStopEBSBackedInstance{ - Skip: isSpotInstance, + Skip: b.config.IsSpotInstance(), DisableStopInstance: b.config.DisableStopInstance, }, &awscommon.StepModifyEBSBackedInstance{ EnableAMISriovNetSupport: b.config.AMISriovNetSupport, EnableAMIENASupport: b.config.AMIENASupport, }, - &StepSnapshotNewRootVolume{ - NewRootMountPoint: b.config.RootDevice.SourceDeviceName, + &StepSnapshotVolumes{ + LaunchDevices: launchDevices, }, &awscommon.StepDeregisterAMI{ AccessConfig: &b.config.AccessConfig, @@ -231,7 +246,8 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &StepRegisterAMI{ RootDevice: b.config.RootDevice, - BlockDevices: b.config.BlockDevices.BuildAMIDevices(), + AMIDevices: amiDevices, + 
LaunchDevices: launchDevices, EnableAMISriovNetSupport: b.config.AMISriovNetSupport, EnableAMIENASupport: b.config.AMIENASupport, }, @@ -277,7 +293,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe artifact := &awscommon.Artifact{ Amis: amis.(map[string]string), BuilderIdValue: BuilderId, - Conn: ec2conn, + Session: session, } return artifact, nil diff --git a/builder/amazon/ebssurrogate/root_block_device.go b/builder/amazon/ebssurrogate/root_block_device.go index 2c4849732..09c13ca00 100644 --- a/builder/amazon/ebssurrogate/root_block_device.go +++ b/builder/amazon/ebssurrogate/root_block_device.go @@ -2,8 +2,7 @@ package ebssurrogate import ( "errors" - "github.com/aws/aws-sdk-go/aws" - "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/template/interpolate" ) @@ -45,21 +44,3 @@ func (c *RootBlockDevice) Prepare(ctx *interpolate.Context) []error { return nil } - -func (d *RootBlockDevice) createBlockDeviceMapping(snapshotId string) *ec2.BlockDeviceMapping { - rootBlockDevice := &ec2.EbsBlockDevice{ - SnapshotId: aws.String(snapshotId), - VolumeType: aws.String(d.VolumeType), - VolumeSize: aws.Int64(d.VolumeSize), - DeleteOnTermination: aws.Bool(d.DeleteOnTermination), - } - - if d.IOPS != 0 { - rootBlockDevice.Iops = aws.Int64(d.IOPS) - } - - return &ec2.BlockDeviceMapping{ - DeviceName: aws.String(d.DeviceName), - Ebs: rootBlockDevice, - } -} diff --git a/builder/amazon/ebssurrogate/step_register_ami.go b/builder/amazon/ebssurrogate/step_register_ami.go index f0d145f35..63e0f5581 100644 --- a/builder/amazon/ebssurrogate/step_register_ami.go +++ b/builder/amazon/ebssurrogate/step_register_ami.go @@ -1,40 +1,42 @@ package ebssurrogate import ( + "context" "fmt" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepRegisterAMI creates the AMI. 
type StepRegisterAMI struct { RootDevice RootBlockDevice - BlockDevices []*ec2.BlockDeviceMapping + AMIDevices []*ec2.BlockDeviceMapping + LaunchDevices []*ec2.BlockDeviceMapping EnableAMIENASupport bool EnableAMISriovNetSupport bool image *ec2.Image } -func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRegisterAMI) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ec2conn := state.Get("ec2").(*ec2.EC2) - snapshotId := state.Get("snapshot_id").(string) + snapshotIds := state.Get("snapshot_ids").(map[string]string) ui := state.Get("ui").(packer.Ui) ui.Say("Registering the AMI...") - blockDevicesExcludingRoot := DeduplicateRootVolume(s.BlockDevices, s.RootDevice, snapshotId) + blockDevices := s.combineDevices(snapshotIds) registerOpts := &ec2.RegisterImageInput{ Name: &config.AMIName, Architecture: aws.String(ec2.ArchitectureValuesX8664), RootDeviceName: aws.String(s.RootDevice.DeviceName), VirtualizationType: aws.String(config.AMIVirtType), - BlockDeviceMappings: blockDevicesExcludingRoot, + BlockDeviceMappings: blockDevices, } if s.EnableAMISriovNetSupport { @@ -61,15 +63,8 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction { state.Put("amis", amis) // Wait for the image to become ready - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending"}, - Target: "available", - Refresh: awscommon.AMIStateRefreshFunc(ec2conn, *registerResp.ImageId), - StepState: state, - } - ui.Say("Waiting for AMI to become ready...") - if _, err := awscommon.WaitForState(&stateChange); err != nil { + if err := awscommon.WaitUntilAMIAvailable(ctx, ec2conn, *registerResp.ImageId); err != nil { err := fmt.Errorf("Error waiting for AMI: %s", err) state.Put("error", err) ui.Error(err.Error()) @@ -119,17 +114,34 @@ func (s *StepRegisterAMI) Cleanup(state multistep.StateBag) { } } -func DeduplicateRootVolume(BlockDevices []*ec2.BlockDeviceMapping, RootDevice RootBlockDevice, snapshotId string) []*ec2.BlockDeviceMapping { - // Defensive coding to make sure we only add the root volume once - blockDevicesExcludingRoot := make([]*ec2.BlockDeviceMapping, 0, len(BlockDevices)) - for _, blockDevice := range BlockDevices { - if *blockDevice.DeviceName == RootDevice.SourceDeviceName { - continue - } +func (s *StepRegisterAMI) combineDevices(snapshotIds map[string]string) []*ec2.BlockDeviceMapping { + devices := map[string]*ec2.BlockDeviceMapping{} - blockDevicesExcludingRoot = append(blockDevicesExcludingRoot, blockDevice) + for _, device := range s.AMIDevices { + devices[*device.DeviceName] = device } - blockDevicesExcludingRoot = append(blockDevicesExcludingRoot, RootDevice.createBlockDeviceMapping(snapshotId)) - return blockDevicesExcludingRoot + // Devices in launch_block_device_mappings override any with + // the same name in ami_block_device_mappings, except for the + // one designated as the root device in ami_root_device + for _, device := range s.LaunchDevices { + snapshotId, ok := snapshotIds[*device.DeviceName] + if ok { + device.Ebs.SnapshotId = aws.String(snapshotId) + // Block devices with snapshot inherit + // encryption settings from the snapshot + device.Ebs.Encrypted = nil + device.Ebs.KmsKeyId = nil + } + if *device.DeviceName == s.RootDevice.SourceDeviceName { + device.DeviceName = aws.String(s.RootDevice.DeviceName) + } + devices[*device.DeviceName] = device + } + + blockDevices := []*ec2.BlockDeviceMapping{} + for _, device := range devices { + blockDevices = 
append(blockDevices, device) + } + return blockDevices } diff --git a/builder/amazon/ebssurrogate/step_register_ami_test.go b/builder/amazon/ebssurrogate/step_register_ami_test.go index b2abd7827..d2ef67685 100644 --- a/builder/amazon/ebssurrogate/step_register_ami_test.go +++ b/builder/amazon/ebssurrogate/step_register_ami_test.go @@ -1,37 +1,247 @@ package ebssurrogate import ( + "reflect" + "sort" "testing" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" ) -func GetStringPointer() *string { - tmp := "/dev/name" - return &tmp -} +const sourceDeviceName = "/dev/xvdf" +const rootDeviceName = "/dev/xvda" -func GetTestDevice() *ec2.BlockDeviceMapping { - TestDev := ec2.BlockDeviceMapping{ - DeviceName: GetStringPointer(), - } - return &TestDev -} - -func TestStepRegisterAmi_DeduplicateRootVolume(t *testing.T) { - TestRootDevice := RootBlockDevice{} - TestRootDevice.SourceDeviceName = "/dev/name" - - blockDevices := []*ec2.BlockDeviceMapping{} - blockDevicesExcludingRoot := DeduplicateRootVolume(blockDevices, TestRootDevice, "12342351") - if len(blockDevicesExcludingRoot) != 1 { - t.Fatalf("Unexpected length of block devices list") - } - - TestBlockDevice := GetTestDevice() - blockDevices = append(blockDevices, TestBlockDevice) - blockDevicesExcludingRoot = DeduplicateRootVolume(blockDevices, TestRootDevice, "12342351") - if len(blockDevicesExcludingRoot) != 1 { - t.Fatalf("Unexpected length of block devices list") +func newStepRegisterAMI(amiDevices, launchDevices []*ec2.BlockDeviceMapping) *StepRegisterAMI { + return &StepRegisterAMI{ + RootDevice: RootBlockDevice{ + SourceDeviceName: sourceDeviceName, + DeviceName: rootDeviceName, + DeleteOnTermination: true, + VolumeType: "ebs", + VolumeSize: 10, + }, + AMIDevices: amiDevices, + LaunchDevices: launchDevices, + } +} + +func sorted(devices []*ec2.BlockDeviceMapping) []*ec2.BlockDeviceMapping { + sort.SliceStable(devices, func(i, j int) bool { + return *devices[i].DeviceName < *devices[j].DeviceName + }) + return devices +} + +func TestStepRegisterAmi_combineDevices(t *testing.T) { + cases := []struct { + snapshotIds map[string]string + amiDevices []*ec2.BlockDeviceMapping + launchDevices []*ec2.BlockDeviceMapping + allDevices []*ec2.BlockDeviceMapping + }{ + { + snapshotIds: map[string]string{}, + amiDevices: []*ec2.BlockDeviceMapping{}, + launchDevices: []*ec2.BlockDeviceMapping{}, + allDevices: []*ec2.BlockDeviceMapping{}, + }, + { + snapshotIds: map[string]string{}, + amiDevices: []*ec2.BlockDeviceMapping{}, + launchDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String(sourceDeviceName), + }, + }, + allDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String(rootDeviceName), + }, + }, + }, + { + // Minimal single device + snapshotIds: map[string]string{ + sourceDeviceName: "snap-0123456789abcdef1", + }, + amiDevices: []*ec2.BlockDeviceMapping{}, + launchDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String(sourceDeviceName), + }, + }, + allDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef1"), + }, + DeviceName: aws.String(rootDeviceName), + }, + }, + }, + { + // Single launch device with AMI device + snapshotIds: map[string]string{ + sourceDeviceName: "snap-0123456789abcdef1", + }, + amiDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + launchDevices: 
[]*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String(sourceDeviceName), + }, + }, + allDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef1"), + }, + DeviceName: aws.String(rootDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + }, + { + // Multiple launch devices + snapshotIds: map[string]string{ + sourceDeviceName: "snap-0123456789abcdef1", + "/dev/xvdg": "snap-0123456789abcdef2", + }, + amiDevices: []*ec2.BlockDeviceMapping{}, + launchDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String(sourceDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{}, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + allDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef1"), + }, + DeviceName: aws.String(rootDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef2"), + }, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + }, + { + // Multiple launch devices with encryption + snapshotIds: map[string]string{ + sourceDeviceName: "snap-0123456789abcdef1", + "/dev/xvdg": "snap-0123456789abcdef2", + }, + amiDevices: []*ec2.BlockDeviceMapping{}, + launchDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + Encrypted: aws.Bool(true), + }, + DeviceName: aws.String(sourceDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{ + Encrypted: aws.Bool(true), + }, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + allDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef1"), + // Encrypted: true stripped from snapshotted devices + }, + DeviceName: aws.String(rootDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef2"), + }, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + }, + { + // Multiple launch devices and AMI devices with encryption + snapshotIds: map[string]string{ + sourceDeviceName: "snap-0123456789abcdef1", + "/dev/xvdg": "snap-0123456789abcdef2", + }, + amiDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + Encrypted: aws.Bool(true), + KmsKeyId: aws.String("keyId"), + }, + // Source device name can be used in AMI devices + // since launch device of same name gets renamed + // to root device name + DeviceName: aws.String(sourceDeviceName), + }, + }, + launchDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + Encrypted: aws.Bool(true), + }, + DeviceName: aws.String(sourceDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{ + Encrypted: aws.Bool(true), + }, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + allDevices: []*ec2.BlockDeviceMapping{ + { + Ebs: &ec2.EbsBlockDevice{ + Encrypted: aws.Bool(true), + KmsKeyId: aws.String("keyId"), + }, + DeviceName: aws.String(sourceDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef1"), + }, + DeviceName: aws.String(rootDeviceName), + }, + { + Ebs: &ec2.EbsBlockDevice{ + SnapshotId: aws.String("snap-0123456789abcdef2"), + }, + DeviceName: aws.String("/dev/xvdg"), + }, + }, + }, + } + for _, tc := range cases { + stepRegisterAmi := newStepRegisterAMI(tc.amiDevices, tc.launchDevices) + allDevices := stepRegisterAmi.combineDevices(tc.snapshotIds) + if !reflect.DeepEqual(sorted(allDevices), sorted(tc.allDevices)) { + t.Fatalf("Unexpected output from combineDevices") + } } } diff --git 
a/builder/amazon/ebssurrogate/step_snapshot_new_root.go b/builder/amazon/ebssurrogate/step_snapshot_new_root.go deleted file mode 100644 index e00a75cdb..000000000 --- a/builder/amazon/ebssurrogate/step_snapshot_new_root.go +++ /dev/null @@ -1,102 +0,0 @@ -package ebssurrogate - -import ( - "errors" - "fmt" - "time" - - "github.com/aws/aws-sdk-go/service/ec2" - awscommon "github.com/hashicorp/packer/builder/amazon/common" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" -) - -// StepSnapshotNewRootVolume creates a snapshot of the created volume. -// -// Produces: -// snapshot_id string - ID of the created snapshot -type StepSnapshotNewRootVolume struct { - NewRootMountPoint string - snapshotId string -} - -func (s *StepSnapshotNewRootVolume) Run(state multistep.StateBag) multistep.StepAction { - ec2conn := state.Get("ec2").(*ec2.EC2) - ui := state.Get("ui").(packer.Ui) - instance := state.Get("instance").(*ec2.Instance) - - var newRootVolume string - for _, volume := range instance.BlockDeviceMappings { - if *volume.DeviceName == s.NewRootMountPoint { - newRootVolume = *volume.Ebs.VolumeId - } - } - - ui.Say(fmt.Sprintf("Creating snapshot of EBS Volume %s...", newRootVolume)) - description := fmt.Sprintf("Packer: %s", time.Now().String()) - - createSnapResp, err := ec2conn.CreateSnapshot(&ec2.CreateSnapshotInput{ - VolumeId: &newRootVolume, - Description: &description, - }) - if err != nil { - err := fmt.Errorf("Error creating snapshot: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - - // Set the snapshot ID so we can delete it later - s.snapshotId = *createSnapResp.SnapshotId - ui.Message(fmt.Sprintf("Snapshot ID: %s", s.snapshotId)) - - // Wait for the snapshot to be ready - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending"}, - StepState: state, - Target: "completed", - Refresh: func() (interface{}, string, error) { - resp, err := ec2conn.DescribeSnapshots(&ec2.DescribeSnapshotsInput{SnapshotIds: []*string{&s.snapshotId}}) - if err != nil { - return nil, "", err - } - - if len(resp.Snapshots) == 0 { - return nil, "", errors.New("No snapshots found.") - } - - s := resp.Snapshots[0] - return s, *s.State, nil - }, - } - - _, err = awscommon.WaitForState(&stateChange) - if err != nil { - err := fmt.Errorf("Error waiting for snapshot: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - - state.Put("snapshot_id", s.snapshotId) - return multistep.ActionContinue -} - -func (s *StepSnapshotNewRootVolume) Cleanup(state multistep.StateBag) { - if s.snapshotId == "" { - return - } - - _, cancelled := state.GetOk(multistep.StateCancelled) - _, halted := state.GetOk(multistep.StateHalted) - - if cancelled || halted { - ec2conn := state.Get("ec2").(*ec2.EC2) - ui := state.Get("ui").(packer.Ui) - ui.Say("Removing snapshot since we cancelled or halted...") - _, err := ec2conn.DeleteSnapshot(&ec2.DeleteSnapshotInput{SnapshotId: &s.snapshotId}) - if err != nil { - ui.Error(fmt.Sprintf("Error: %s", err)) - } - } -} diff --git a/builder/amazon/ebssurrogate/step_snapshot_volumes.go b/builder/amazon/ebssurrogate/step_snapshot_volumes.go new file mode 100644 index 000000000..f238815e1 --- /dev/null +++ b/builder/amazon/ebssurrogate/step_snapshot_volumes.go @@ -0,0 +1,107 @@ +package ebssurrogate + +import ( + "context" + "fmt" + "sync" + "time" + + "github.com/aws/aws-sdk-go/service/ec2" + multierror "github.com/hashicorp/go-multierror" + awscommon 
"github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +// StepSnapshotVolumes creates snapshots of the created volumes. +// +// Produces: +// snapshot_ids map[string]string - IDs of the created snapshots +type StepSnapshotVolumes struct { + LaunchDevices []*ec2.BlockDeviceMapping + snapshotIds map[string]string +} + +func (s *StepSnapshotVolumes) snapshotVolume(ctx context.Context, deviceName string, state multistep.StateBag) error { + ec2conn := state.Get("ec2").(*ec2.EC2) + ui := state.Get("ui").(packer.Ui) + instance := state.Get("instance").(*ec2.Instance) + + var volumeId string + for _, volume := range instance.BlockDeviceMappings { + if *volume.DeviceName == deviceName { + volumeId = *volume.Ebs.VolumeId + } + } + if volumeId == "" { + return fmt.Errorf("Volume ID for device %s not found", deviceName) + } + + ui.Say(fmt.Sprintf("Creating snapshot of EBS Volume %s...", volumeId)) + description := fmt.Sprintf("Packer: %s", time.Now().String()) + + createSnapResp, err := ec2conn.CreateSnapshot(&ec2.CreateSnapshotInput{ + VolumeId: &volumeId, + Description: &description, + }) + if err != nil { + return err + } + + // Set the snapshot ID so we can delete it later + s.snapshotIds[deviceName] = *createSnapResp.SnapshotId + + // Wait for snapshot to be created + err = awscommon.WaitUntilSnapshotDone(ctx, ec2conn, *createSnapResp.SnapshotId) + return err +} + +func (s *StepSnapshotVolumes) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { + ui := state.Get("ui").(packer.Ui) + + s.snapshotIds = map[string]string{} + + var wg sync.WaitGroup + var errs *multierror.Error + for _, device := range s.LaunchDevices { + wg.Add(1) + go func(device *ec2.BlockDeviceMapping) { + defer wg.Done() + if err := s.snapshotVolume(ctx, *device.DeviceName, state); err != nil { + errs = multierror.Append(errs, err) + } + }(device) + } + + wg.Wait() + + if errs != nil { + state.Put("error", errs) + ui.Error(errs.Error()) + return multistep.ActionHalt + } + + state.Put("snapshot_ids", s.snapshotIds) + return multistep.ActionContinue +} + +func (s *StepSnapshotVolumes) Cleanup(state multistep.StateBag) { + if len(s.snapshotIds) == 0 { + return + } + + _, cancelled := state.GetOk(multistep.StateCancelled) + _, halted := state.GetOk(multistep.StateHalted) + + if cancelled || halted { + ec2conn := state.Get("ec2").(*ec2.EC2) + ui := state.Get("ui").(packer.Ui) + ui.Say("Removing snapshots since we cancelled or halted...") + for _, snapshotId := range s.snapshotIds { + _, err := ec2conn.DeleteSnapshot(&ec2.DeleteSnapshotInput{SnapshotId: &snapshotId}) + if err != nil { + ui.Error(fmt.Sprintf("Error: %s", err)) + } + } + } +} diff --git a/builder/amazon/ebsvolume/block_device.go b/builder/amazon/ebsvolume/block_device.go index 5a23821b3..0d2b04613 100644 --- a/builder/amazon/ebsvolume/block_device.go +++ b/builder/amazon/ebsvolume/block_device.go @@ -7,7 +7,7 @@ import ( type BlockDevice struct { awscommon.BlockDevice `mapstructure:"-,squash"` - Tags map[string]string `mapstructure:"tags"` + Tags awscommon.TagMap `mapstructure:"tags"` } func commonBlockDevices(mappings []BlockDevice, ctx *interpolate.Context) (awscommon.BlockDevices, error) { diff --git a/builder/amazon/ebsvolume/builder.go b/builder/amazon/ebsvolume/builder.go index ea3f74b61..222febc56 100644 --- a/builder/amazon/ebsvolume/builder.go +++ b/builder/amazon/ebsvolume/builder.go @@ -11,9 +11,9 @@ import ( "github.com/hashicorp/packer/common" 
"github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const BuilderId = "mitchellh.amazon.ebsvolume" @@ -44,6 +44,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { InterpolateFilter: &interpolate.RenderFilter{ Exclude: []string{ "run_tags", + "spot_tags", "ebs_volumes", }, }, @@ -56,17 +57,31 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { var errs *packer.MultiError errs = packer.MultiErrorAppend(errs, b.config.AccessConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.RunConfig.Prepare(&b.config.ctx)...) + errs = packer.MultiErrorAppend(errs, b.config.launchBlockDevices.Prepare(&b.config.ctx)...) + + for _, d := range b.config.VolumeMappings { + if err := d.Prepare(&b.config.ctx); err != nil { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("AMIMapping: %s", err.Error())) + } + } b.config.launchBlockDevices, err = commonBlockDevices(b.config.VolumeMappings, &b.config.ctx) if err != nil { errs = packer.MultiErrorAppend(errs, err) } + if b.config.IsSpotInstance() && (b.config.AMIENASupport || b.config.AMISriovNetSupport) { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Spot instances do not support modification, which is required "+ + "when either `ena_support` or `sriov_support` are set. Please ensure "+ + "you use an AMI that already has either SR-IOV or ENA enabled.")) + } + if errs != nil && len(errs.Errors) > 0 { return nil, errs } - log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey)) + log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey, b.config.Token)) return nil, nil } @@ -103,45 +118,46 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe var instanceStep multistep.Step - isSpotInstance := b.config.SpotPrice != "" && b.config.SpotPrice != "0" - - if isSpotInstance { + if b.config.IsSpotInstance() { instanceStep = &awscommon.StepRunSpotInstance{ - Debug: b.config.PackerDebug, - ExpectedRootDevice: "ebs", - SpotPrice: b.config.SpotPrice, - SpotPriceProduct: b.config.SpotPriceAutoProduct, - InstanceType: b.config.InstanceType, - UserData: b.config.UserData, - UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, - AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, - AvailabilityZone: b.config.AvailabilityZone, - BlockDevices: b.config.launchBlockDevices, - Tags: b.config.RunTags, - Ctx: b.config.ctx, + AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, + AvailabilityZone: b.config.AvailabilityZone, + BlockDevices: b.config.launchBlockDevices, + Ctx: b.config.ctx, + Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + ExpectedRootDevice: "ebs", + IamInstanceProfile: b.config.IamInstanceProfile, InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior, + InstanceType: b.config.InstanceType, + SourceAMI: b.config.SourceAmi, + SpotPrice: b.config.SpotPrice, + SpotPriceProduct: b.config.SpotPriceAutoProduct, + SpotTags: b.config.SpotTags, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + UserData: b.config.UserData, + UserDataFile: b.config.UserDataFile, } } else { instanceStep = &awscommon.StepRunSourceInstance{ 
- Debug: b.config.PackerDebug, - ExpectedRootDevice: "ebs", - InstanceType: b.config.InstanceType, - UserData: b.config.UserData, - UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, - AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, - AvailabilityZone: b.config.AvailabilityZone, - BlockDevices: b.config.launchBlockDevices, - Tags: b.config.RunTags, - Ctx: b.config.ctx, + AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, + AvailabilityZone: b.config.AvailabilityZone, + BlockDevices: b.config.launchBlockDevices, + Ctx: b.config.ctx, + Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + EnableT2Unlimited: b.config.EnableT2Unlimited, + ExpectedRootDevice: "ebs", + IamInstanceProfile: b.config.IamInstanceProfile, InstanceInitiatedShutdownBehavior: b.config.InstanceInitiatedShutdownBehavior, + InstanceType: b.config.InstanceType, + IsRestricted: b.config.IsChinaCloud() || b.config.IsGovCloud(), + SourceAMI: b.config.SourceAmi, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + UserData: b.config.UserData, + UserDataFile: b.config.UserDataFile, } } @@ -173,15 +189,16 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Ctx: b.config.ctx, }, &awscommon.StepGetPassword{ - Debug: b.config.PackerDebug, - Comm: &b.config.RunConfig.Comm, - Timeout: b.config.WindowsPasswordTimeout, + Debug: b.config.PackerDebug, + Comm: &b.config.RunConfig.Comm, + Timeout: b.config.WindowsPasswordTimeout, + BuildName: b.config.PackerBuildName, }, &communicator.StepConnect{ Config: &b.config.RunConfig.Comm, Host: awscommon.SSHHost( ec2conn, - b.config.SSHPrivateIp), + b.config.SSHInterface), SSHConfig: awscommon.SSHConfig( b.config.RunConfig.Comm.SSHAgentAuth, b.config.RunConfig.Comm.SSHUsername, @@ -189,7 +206,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &common.StepProvision{}, &awscommon.StepStopEBSBackedInstance{ - Skip: isSpotInstance, + Skip: b.config.IsSpotInstance(), DisableStopInstance: b.config.DisableStopInstance, }, &awscommon.StepModifyEBSBackedInstance{ diff --git a/builder/amazon/ebsvolume/step_tag_ebs_volumes.go b/builder/amazon/ebsvolume/step_tag_ebs_volumes.go index 992459560..62a84146c 100644 --- a/builder/amazon/ebsvolume/step_tag_ebs_volumes.go +++ b/builder/amazon/ebsvolume/step_tag_ebs_volumes.go @@ -1,13 +1,13 @@ package ebsvolume import ( + "context" "fmt" "github.com/aws/aws-sdk-go/service/ec2" - awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type stepTagEBSVolumes struct { @@ -15,10 +15,9 @@ type stepTagEBSVolumes struct { Ctx interpolate.Context } -func (s *stepTagEBSVolumes) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepTagEBSVolumes) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ec2conn := state.Get("ec2").(*ec2.EC2) instance := state.Get("instance").(*ec2.Instance) - sourceAMI := state.Get("source_image").(*ec2.Image) ui := state.Get("ui").(packer.Ui) volumes := make(EbsVolumes) @@ -43,14 +42,14 @@ func (s *stepTagEBSVolumes) Run(state multistep.StateBag) multistep.StepAction { continue } - tags, err := awscommon.ConvertToEC2Tags(mapping.Tags, *ec2conn.Config.Region, *sourceAMI.ImageId, s.Ctx) + 
tags, err := mapping.Tags.EC2Tags(s.Ctx, *ec2conn.Config.Region, state) if err != nil { err := fmt.Errorf("Error tagging device %s with %s", mapping.DeviceName, err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - awscommon.ReportTags(ui, tags) + tags.Report(ui) for _, v := range instance.BlockDeviceMappings { if *v.DeviceName == mapping.DeviceName { diff --git a/builder/amazon/instance/builder.go b/builder/amazon/instance/builder.go index 7cee44c09..fb59fff95 100644 --- a/builder/amazon/instance/builder.go +++ b/builder/amazon/instance/builder.go @@ -14,9 +14,9 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) // The unique ID for this builder @@ -69,6 +69,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { "run_volume_tags", "snapshot_tags", "tags", + "spot_tags", }, }, }, configs...) @@ -155,11 +156,18 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { errs, fmt.Errorf("x509_key_path points to bad file: %s", err)) } + if b.config.IsSpotInstance() && (b.config.AMIENASupport || b.config.AMISriovNetSupport) { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Spot instances do not support modification, which is required "+ + "when either `ena_support` or `sriov_support` are set. Please ensure "+ + "you use an AMI that already has either SR-IOV or ENA enabled.")) + } + if errs != nil && len(errs.Errors) > 0 { return nil, errs } - log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey)) + log.Println(common.ScrubConfig(b.config, b.config.AccessKey, b.config.SecretKey, b.config.Token)) return nil, nil } @@ -191,44 +199,48 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe state := new(multistep.BasicStateBag) state.Put("config", &b.config) state.Put("ec2", ec2conn) + state.Put("awsSession", session) state.Put("hook", hook) state.Put("ui", ui) var instanceStep multistep.Step - if b.config.SpotPrice == "" || b.config.SpotPrice == "0" { - instanceStep = &awscommon.StepRunSourceInstance{ - Debug: b.config.PackerDebug, - InstanceType: b.config.InstanceType, - UserData: b.config.UserData, - UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, + if b.config.IsSpotInstance() { + instanceStep = &awscommon.StepRunSpotInstance{ AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, AvailabilityZone: b.config.AvailabilityZone, BlockDevices: b.config.BlockDevices, - Tags: b.config.RunTags, Ctx: b.config.ctx, - } - } else { - instanceStep = &awscommon.StepRunSpotInstance{ Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + IamInstanceProfile: b.config.IamInstanceProfile, + InstanceType: b.config.InstanceType, + SourceAMI: b.config.SourceAmi, SpotPrice: b.config.SpotPrice, SpotPriceProduct: b.config.SpotPriceAutoProduct, - InstanceType: b.config.InstanceType, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + SpotTags: b.config.SpotTags, UserData: b.config.UserData, UserDataFile: b.config.UserDataFile, - SourceAMI: b.config.SourceAmi, - IamInstanceProfile: b.config.IamInstanceProfile, - SubnetId: b.config.SubnetId, + } + } else { + instanceStep = 
&awscommon.StepRunSourceInstance{ AssociatePublicIpAddress: b.config.AssociatePublicIpAddress, - EbsOptimized: b.config.EbsOptimized, AvailabilityZone: b.config.AvailabilityZone, BlockDevices: b.config.BlockDevices, - Tags: b.config.RunTags, Ctx: b.config.ctx, + Debug: b.config.PackerDebug, + EbsOptimized: b.config.EbsOptimized, + EnableT2Unlimited: b.config.EnableT2Unlimited, + IamInstanceProfile: b.config.IamInstanceProfile, + InstanceType: b.config.InstanceType, + IsRestricted: b.config.IsChinaCloud() || b.config.IsGovCloud(), + SourceAMI: b.config.SourceAmi, + SubnetId: b.config.SubnetId, + Tags: b.config.RunTags, + UserData: b.config.UserData, + UserDataFile: b.config.UserDataFile, } } @@ -260,15 +272,16 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, instanceStep, &awscommon.StepGetPassword{ - Debug: b.config.PackerDebug, - Comm: &b.config.RunConfig.Comm, - Timeout: b.config.WindowsPasswordTimeout, + Debug: b.config.PackerDebug, + Comm: &b.config.RunConfig.Comm, + Timeout: b.config.WindowsPasswordTimeout, + BuildName: b.config.PackerBuildName, }, &communicator.StepConnect{ Config: &b.config.RunConfig.Comm, Host: awscommon.SSHHost( ec2conn, - b.config.SSHPrivateIp), + b.config.SSHInterface), SSHConfig: awscommon.SSHConfig( b.config.RunConfig.Comm.SSHAgentAuth, b.config.RunConfig.Comm.SSHUsername, @@ -334,7 +347,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe artifact := &awscommon.Artifact{ Amis: state.Get("amis").(map[string]string), BuilderIdValue: BuilderId, - Conn: ec2conn, + Session: session, } return artifact, nil diff --git a/builder/amazon/instance/builder_test.go b/builder/amazon/instance/builder_test.go index 56ba3911c..1ad77d82b 100644 --- a/builder/amazon/instance/builder_test.go +++ b/builder/amazon/instance/builder_test.go @@ -1,19 +1,20 @@ package instance import ( - "github.com/hashicorp/packer/packer" "io/ioutil" "os" "testing" + + "github.com/hashicorp/packer/packer" ) -func testConfig() map[string]interface{} { +func testConfig() (config map[string]interface{}, tf *os.File) { tf, err := ioutil.TempFile("", "packer") if err != nil { panic(err) } - return map[string]interface{}{ + config = map[string]interface{}{ "account_id": "foo", "ami_name": "foo", "instance_type": "m1.small", @@ -25,6 +26,8 @@ func testConfig() map[string]interface{} { "x509_key_path": tf.Name(), "x509_upload_path": "/foo", } + + return config, tf } func TestBuilder_ImplementsBuilder(t *testing.T) { @@ -37,7 +40,9 @@ func TestBuilder_ImplementsBuilder(t *testing.T) { func TestBuilderPrepare_AccountId(t *testing.T) { b := &Builder{} - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() config["account_id"] = "" warnings, err := b.Prepare(config) @@ -73,7 +78,9 @@ func TestBuilderPrepare_AccountId(t *testing.T) { func TestBuilderPrepare_AMIName(t *testing.T) { var b Builder - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() // Test good config["ami_name"] = "foo" @@ -110,7 +117,9 @@ func TestBuilderPrepare_AMIName(t *testing.T) { func TestBuilderPrepare_BundleDestination(t *testing.T) { b := &Builder{} - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() config["bundle_destination"] = "" warnings, err := b.Prepare(config) @@ -128,7 +137,9 @@ func TestBuilderPrepare_BundleDestination(t *testing.T) { func 
TestBuilderPrepare_BundlePrefix(t *testing.T) { b := &Builder{} - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() warnings, err := b.Prepare(config) if len(warnings) > 0 { @@ -145,7 +156,9 @@ func TestBuilderPrepare_BundlePrefix(t *testing.T) { func TestBuilderPrepare_InvalidKey(t *testing.T) { var b Builder - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() // Add a random key config["i_should_not_be_valid"] = true @@ -160,7 +173,9 @@ func TestBuilderPrepare_InvalidKey(t *testing.T) { func TestBuilderPrepare_S3Bucket(t *testing.T) { b := &Builder{} - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() config["s3_bucket"] = "" warnings, err := b.Prepare(config) @@ -183,7 +198,9 @@ func TestBuilderPrepare_S3Bucket(t *testing.T) { func TestBuilderPrepare_X509CertPath(t *testing.T) { b := &Builder{} - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() config["x509_cert_path"] = "" warnings, err := b.Prepare(config) @@ -208,6 +225,7 @@ func TestBuilderPrepare_X509CertPath(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["x509_cert_path"] = tf.Name() warnings, err = b.Prepare(config) @@ -221,7 +239,9 @@ func TestBuilderPrepare_X509CertPath(t *testing.T) { func TestBuilderPrepare_X509KeyPath(t *testing.T) { b := &Builder{} - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() config["x509_key_path"] = "" warnings, err := b.Prepare(config) @@ -246,6 +266,7 @@ func TestBuilderPrepare_X509KeyPath(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["x509_key_path"] = tf.Name() warnings, err = b.Prepare(config) @@ -259,7 +280,9 @@ func TestBuilderPrepare_X509KeyPath(t *testing.T) { func TestBuilderPrepare_X509UploadPath(t *testing.T) { b := &Builder{} - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() config["x509_upload_path"] = "" warnings, err := b.Prepare(config) diff --git a/builder/amazon/instance/step_bundle_volume.go b/builder/amazon/instance/step_bundle_volume.go index a8e27d635..1c8b006b8 100644 --- a/builder/amazon/instance/step_bundle_volume.go +++ b/builder/amazon/instance/step_bundle_volume.go @@ -1,12 +1,13 @@ package instance import ( + "context" "fmt" "github.com/aws/aws-sdk-go/service/ec2" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type bundleCmdData struct { @@ -23,7 +24,7 @@ type StepBundleVolume struct { Debug bool } -func (s *StepBundleVolume) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepBundleVolume) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) config := state.Get("config").(*Config) instance := state.Get("instance").(*ec2.Instance) diff --git a/builder/amazon/instance/step_register_ami.go b/builder/amazon/instance/step_register_ami.go index d363bdfdd..20df1f0d4 100644 --- a/builder/amazon/instance/step_register_ami.go +++ b/builder/amazon/instance/step_register_ami.go @@ -1,13 +1,14 @@ package instance import ( + "context" "fmt" 
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" awscommon "github.com/hashicorp/packer/builder/amazon/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepRegisterAMI struct { @@ -15,7 +16,7 @@ type StepRegisterAMI struct { EnableAMISriovNetSupport bool } -func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRegisterAMI) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ec2conn := state.Get("ec2").(*ec2.EC2) manifestPath := state.Get("remote_manifest_path").(string) @@ -57,15 +58,8 @@ func (s *StepRegisterAMI) Run(state multistep.StateBag) multistep.StepAction { state.Put("amis", amis) // Wait for the image to become ready - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending"}, - Target: "available", - Refresh: awscommon.AMIStateRefreshFunc(ec2conn, *registerResp.ImageId), - StepState: state, - } - ui.Say("Waiting for AMI to become ready...") - if _, err := awscommon.WaitForState(&stateChange); err != nil { + if err := awscommon.WaitUntilAMIAvailable(ctx, ec2conn, *registerResp.ImageId); err != nil { err := fmt.Errorf("Error waiting for AMI: %s", err) state.Put("error", err) ui.Error(err.Error()) diff --git a/builder/amazon/instance/step_upload_bundle.go b/builder/amazon/instance/step_upload_bundle.go index a38a77c93..1e7711d42 100644 --- a/builder/amazon/instance/step_upload_bundle.go +++ b/builder/amazon/instance/step_upload_bundle.go @@ -1,11 +1,12 @@ package instance import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type uploadCmdData struct { @@ -22,7 +23,7 @@ type StepUploadBundle struct { Debug bool } -func (s *StepUploadBundle) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUploadBundle) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) config := state.Get("config").(*Config) manifestName := state.Get("manifest_name").(string) diff --git a/builder/amazon/instance/step_upload_x509_cert.go b/builder/amazon/instance/step_upload_x509_cert.go index 1de1aa311..14ea955e5 100644 --- a/builder/amazon/instance/step_upload_x509_cert.go +++ b/builder/amazon/instance/step_upload_x509_cert.go @@ -1,15 +1,17 @@ package instance import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "os" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepUploadX509Cert struct{} -func (s *StepUploadX509Cert) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUploadX509Cert) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/azure/arm/TestVirtualMachineDeployment05.approved.txt b/builder/azure/arm/TestVirtualMachineDeployment05.approved.txt index 906820f27..31532ddbe 100644 --- a/builder/azure/arm/TestVirtualMachineDeployment05.approved.txt +++ b/builder/azure/arm/TestVirtualMachineDeployment05.approved.txt @@ -90,7 +90,7 @@ "image": { "uri": "https://localhost/custom.vhd" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux", "vhd": { 
"uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" @@ -105,8 +105,6 @@ "addressPrefix": "10.0.0.0/16", "apiVersion": "2015-06-15", "location": "[resourceGroup().location]", - "nicName": "packerNic", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", diff --git a/builder/azure/arm/artifact.go b/builder/azure/arm/artifact.go index e8a72091f..8e42fa43d 100644 --- a/builder/azure/arm/artifact.go +++ b/builder/azure/arm/artifact.go @@ -12,6 +12,11 @@ const ( BuilderId = "Azure.ResourceManagement.VMImage" ) +type AdditionalDiskArtifact struct { + AdditionalDiskUri string + AdditionalDiskUriReadOnlySas string +} + type Artifact struct { // VHD StorageAccountLocation string @@ -24,6 +29,9 @@ type Artifact struct { ManagedImageResourceGroupName string ManagedImageName string ManagedImageLocation string + + // Additional Disks + AdditionalDisks *[]AdditionalDiskArtifact } func NewManagedImageArtifact(resourceGroup, name, location string) (*Artifact, error) { @@ -53,12 +61,28 @@ func NewArtifact(template *CaptureTemplate, getSasUrl func(name string) string) return nil, err } + var additional_disks *[]AdditionalDiskArtifact + if template.Resources[0].Properties.StorageProfile.DataDisks != nil { + data_disks := make([]AdditionalDiskArtifact, len(template.Resources[0].Properties.StorageProfile.DataDisks)) + for i, additionaldisk := range template.Resources[0].Properties.StorageProfile.DataDisks { + additionalVhdUri, err := url.Parse(additionaldisk.Image.Uri) + if err != nil { + return nil, err + } + data_disks[i].AdditionalDiskUri = additionalVhdUri.String() + data_disks[i].AdditionalDiskUriReadOnlySas = getSasUrl(getStorageUrlPath(additionalVhdUri)) + } + additional_disks = &data_disks + } + return &Artifact{ OSDiskUri: vhdUri.String(), OSDiskUriReadOnlySas: getSasUrl(getStorageUrlPath(vhdUri)), TemplateUri: templateUri.String(), TemplateUriReadOnlySas: getSasUrl(getStorageUrlPath(templateUri)), + AdditionalDisks: additional_disks, + StorageAccountLocation: template.Resources[0].Location, }, nil } @@ -89,7 +113,7 @@ func storageUriToTemplateUri(su *url.URL) (*url.URL, error) { return url.Parse(strings.Replace(su.String(), filename, templateFilename, 1)) } -func (a *Artifact) isMangedImage() bool { +func (a *Artifact) isManagedImage() bool { return a.ManagedImageResourceGroupName != "" } @@ -118,7 +142,7 @@ func (a *Artifact) String() string { var buf bytes.Buffer buf.WriteString(fmt.Sprintf("%s:\n\n", a.BuilderId())) - if a.isMangedImage() { + if a.isManagedImage() { buf.WriteString(fmt.Sprintf("ManagedImageResourceGroupName: %s\n", a.ManagedImageResourceGroupName)) buf.WriteString(fmt.Sprintf("ManagedImageName: %s\n", a.ManagedImageName)) buf.WriteString(fmt.Sprintf("ManagedImageLocation: %s\n", a.ManagedImageLocation)) @@ -128,6 +152,12 @@ func (a *Artifact) String() string { buf.WriteString(fmt.Sprintf("OSDiskUriReadOnlySas: %s\n", a.OSDiskUriReadOnlySas)) buf.WriteString(fmt.Sprintf("TemplateUri: %s\n", a.TemplateUri)) buf.WriteString(fmt.Sprintf("TemplateUriReadOnlySas: %s\n", a.TemplateUriReadOnlySas)) + if a.AdditionalDisks != nil { + for i, additionaldisk := range *a.AdditionalDisks { + buf.WriteString(fmt.Sprintf("AdditionalDiskUri (datadisk-%d): %s\n", i+1, additionaldisk.AdditionalDiskUri)) + buf.WriteString(fmt.Sprintf("AdditionalDiskUriReadOnlySas 
(datadisk-%d): %s\n", i+1, additionaldisk.AdditionalDiskUriReadOnlySas)) + } + } } return buf.String() diff --git a/builder/azure/arm/artifact_test.go b/builder/azure/arm/artifact_test.go index 1b9a472af..a86a6353e 100644 --- a/builder/azure/arm/artifact_test.go +++ b/builder/azure/arm/artifact_test.go @@ -82,6 +82,60 @@ func TestArtifactString(t *testing.T) { } } +func TestAdditionalDiskArtifactString(t *testing.T) { + template := CaptureTemplate{ + Resources: []CaptureResources{ + { + Properties: CaptureProperties{ + StorageProfile: CaptureStorageProfile{ + OSDisk: CaptureDisk{ + Image: CaptureUri{ + Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd", + }, + }, + DataDisks: []CaptureDisk{ + { + Image: CaptureUri{ + Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd", + }, + }, + }, + }, + }, + Location: "southcentralus", + }, + }, + } + + artifact, err := NewArtifact(&template, getFakeSasUrl) + if err != nil { + t.Fatalf("err=%s", err) + } + + testSubject := artifact.String() + if !strings.Contains(testSubject, "OSDiskUri: https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd") { + t.Errorf("Expected String() output to contain OSDiskUri") + } + if !strings.Contains(testSubject, "OSDiskUriReadOnlySas: SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd") { + t.Errorf("Expected String() output to contain OSDiskUriReadOnlySas") + } + if !strings.Contains(testSubject, "TemplateUri: https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json") { + t.Errorf("Expected String() output to contain TemplateUri") + } + if !strings.Contains(testSubject, "TemplateUriReadOnlySas: SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json") { + t.Errorf("Expected String() output to contain TemplateUriReadOnlySas") + } + if !strings.Contains(testSubject, "StorageAccountLocation: southcentralus") { + t.Errorf("Expected String() output to contain StorageAccountLocation") + } + if !strings.Contains(testSubject, "AdditionalDiskUri (datadisk-1): https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd") { + t.Errorf("Expected String() output to contain AdditionalDiskUri") + } + if !strings.Contains(testSubject, "AdditionalDiskUriReadOnlySas (datadisk-1): SAS-Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd") { + t.Errorf("Expected String() output to contain AdditionalDiskUriReadOnlySas") + } +} + func TestArtifactProperties(t *testing.T) { template := CaptureTemplate{ Resources: []CaptureResources{ @@ -122,7 +176,66 @@ func TestArtifactProperties(t *testing.T) { } } -func TestArtifactOverHypenatedCaptureUri(t *testing.T) { +func TestAdditionalDiskArtifactProperties(t *testing.T) { + template := CaptureTemplate{ + Resources: []CaptureResources{ + { + Properties: CaptureProperties{ + StorageProfile: CaptureStorageProfile{ + OSDisk: CaptureDisk{ + Image: CaptureUri{ + Uri: "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd", + }, + }, + DataDisks: []CaptureDisk{ + { + Image: CaptureUri{ + Uri: 
"https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd", + }, + }, + }, + }, + }, + Location: "southcentralus", + }, + }, + } + + testSubject, err := NewArtifact(&template, getFakeSasUrl) + if err != nil { + t.Fatalf("err=%s", err) + } + + if testSubject.OSDiskUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd" { + t.Errorf("Expected template to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", testSubject.OSDiskUri) + } + if testSubject.OSDiskUriReadOnlySas != "SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd" { + t.Errorf("Expected template to be 'SAS-Images/images/packer-osDisk.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", testSubject.OSDiskUriReadOnlySas) + } + if testSubject.TemplateUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json" { + t.Errorf("Expected template to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json', but got %s", testSubject.TemplateUri) + } + if testSubject.TemplateUriReadOnlySas != "SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json" { + t.Errorf("Expected template to be 'SAS-Images/images/packer-vmTemplate.4085bb15-3644-4641-b9cd-f575918640b4.json', but got %s", testSubject.TemplateUriReadOnlySas) + } + if testSubject.StorageAccountLocation != "southcentralus" { + t.Errorf("Expected StorageAccountLocation to be 'southcentral', but got %s", testSubject.StorageAccountLocation) + } + if testSubject.AdditionalDisks == nil { + t.Errorf("Expected AdditionalDisks to be not nil") + } + if len(*testSubject.AdditionalDisks) != 1 { + t.Errorf("Expected AdditionalDisks to have one additional disk, but got %d", len(*testSubject.AdditionalDisks)) + } + if (*testSubject.AdditionalDisks)[0].AdditionalDiskUri != "https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd" { + t.Errorf("Expected additional disk uri to be 'https://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", (*testSubject.AdditionalDisks)[0].AdditionalDiskUri) + } + if (*testSubject.AdditionalDisks)[0].AdditionalDiskUriReadOnlySas != "SAS-Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd" { + t.Errorf("Expected additional disk sas to be 'SAS-Images/images/packer-datadisk-1.4085bb15-3644-4641-b9cd-f575918640b4.vhd', but got %s", (*testSubject.AdditionalDisks)[0].AdditionalDiskUriReadOnlySas) + } +} + +func TestArtifactOverHyphenatedCaptureUri(t *testing.T) { template := CaptureTemplate{ Resources: []CaptureResources{ { diff --git a/builder/azure/arm/authenticate_test.go b/builder/azure/arm/authenticate_test.go index 2e290bb3d..63fd1c3aa 100644 --- a/builder/azure/arm/authenticate_test.go +++ b/builder/azure/arm/authenticate_test.go @@ -1,13 +1,14 @@ package arm import ( - "github.com/Azure/go-autorest/autorest/azure" "testing" + + "github.com/Azure/go-autorest/autorest/azure" ) // Behavior is the most important thing to assert for ServicePrincipalToken, but // that cannot be done in a unit test because it involves 
network access. Instead, -// I assert the expected intertness of this class. +// I assert the expected inertness of this class. func TestNewAuthenticate(t *testing.T) { testSubject := NewAuthenticate(azure.PublicCloud, "clientID", "clientString", "tenantID") spn, err := testSubject.getServicePrincipalToken() @@ -15,25 +16,25 @@ func TestNewAuthenticate(t *testing.T) { t.Fatalf(err.Error()) } - if spn.Token.AccessToken != "" { - t.Errorf("spn.Token.AccessToken: expected=\"\", actual=%s", spn.Token.AccessToken) + if spn.Token().AccessToken != "" { + t.Errorf("spn.Token().AccessToken: expected=\"\", actual=%s", spn.Token().AccessToken) } - if spn.Token.RefreshToken != "" { - t.Errorf("spn.Token.RefreshToken: expected=\"\", actual=%s", spn.Token.RefreshToken) + if spn.Token().RefreshToken != "" { + t.Errorf("spn.Token().RefreshToken: expected=\"\", actual=%s", spn.Token().RefreshToken) } - if spn.Token.ExpiresIn != "" { - t.Errorf("spn.Token.ExpiresIn: expected=\"\", actual=%s", spn.Token.ExpiresIn) + if spn.Token().ExpiresIn != "" { + t.Errorf("spn.Token().ExpiresIn: expected=\"\", actual=%s", spn.Token().ExpiresIn) } - if spn.Token.ExpiresOn != "" { - t.Errorf("spn.Token.ExpiresOn: expected=\"\", actual=%s", spn.Token.ExpiresOn) + if spn.Token().ExpiresOn != "" { + t.Errorf("spn.Token().ExpiresOn: expected=\"\", actual=%s", spn.Token().ExpiresOn) } - if spn.Token.NotBefore != "" { - t.Errorf("spn.Token.NotBefore: expected=\"\", actual=%s", spn.Token.NotBefore) + if spn.Token().NotBefore != "" { + t.Errorf("spn.Token().NotBefore: expected=\"\", actual=%s", spn.Token().NotBefore) } - if spn.Token.Resource != "" { - t.Errorf("spn.Token.Resource: expected=\"\", actual=%s", spn.Token.Resource) + if spn.Token().Resource != "" { + t.Errorf("spn.Token().Resource: expected=\"\", actual=%s", spn.Token().Resource) } - if spn.Token.Type != "" { - t.Errorf("spn.Token.Type: expected=\"\", actual=%s", spn.Token.Type) + if spn.Token().Type != "" { + t.Errorf("spn.Token().Type: expected=\"\", actual=%s", spn.Token().Type) } } diff --git a/builder/azure/arm/azure_client.go b/builder/azure/arm/azure_client.go index 3fd58bc34..f8477d235 100644 --- a/builder/azure/arm/azure_client.go +++ b/builder/azure/arm/azure_client.go @@ -1,6 +1,7 @@ package arm import ( + "context" "encoding/json" "fmt" "math" @@ -9,29 +10,26 @@ import ( "os" "strconv" - "github.com/Azure/azure-sdk-for-go/arm/compute" - "github.com/Azure/azure-sdk-for-go/arm/network" - "github.com/Azure/azure-sdk-for-go/arm/resources/resources" - armStorage "github.com/Azure/azure-sdk-for-go/arm/storage" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" + "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network" + "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources" + armStorage "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage" "github.com/Azure/azure-sdk-for-go/storage" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/adal" "github.com/Azure/go-autorest/autorest/azure" "github.com/hashicorp/packer/builder/azure/common" - "github.com/hashicorp/packer/version" + "github.com/hashicorp/packer/helper/useragent" ) const ( EnvPackerLogAzureMaxLen = "PACKER_LOG_AZURE_MAXLEN" ) -var ( - packerUserAgent = fmt.Sprintf(";packer/%s", version.FormattedVersion()) -) - type AzureClient struct { storage.BlobStorageClient resources.DeploymentsClient + resources.DeploymentOperationsClient resources.GroupsClient network.PublicIPAddressesClient 
network.InterfacesClient @@ -41,10 +39,12 @@ type AzureClient struct { compute.VirtualMachinesClient common.VaultClient armStorage.AccountsClient + compute.DisksClient InspectorMaxLength int Template *CaptureTemplate LastError azureErrorResponse + VaultClientDelete common.VaultClient } func getCaptureResponse(body string) *CaptureTemplate { @@ -133,55 +133,67 @@ func NewAzureClient(subscriptionID, resourceGroupName, storageAccountName string azureClient.DeploymentsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.DeploymentsClient.RequestInspector = withInspection(maxlen) azureClient.DeploymentsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.DeploymentsClient.UserAgent += packerUserAgent + azureClient.DeploymentsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.DeploymentsClient.UserAgent) + + azureClient.DeploymentOperationsClient = resources.NewDeploymentOperationsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) + azureClient.DeploymentOperationsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) + azureClient.DeploymentOperationsClient.RequestInspector = withInspection(maxlen) + azureClient.DeploymentOperationsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) + azureClient.DeploymentOperationsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.DeploymentOperationsClient.UserAgent) + + azureClient.DisksClient = compute.NewDisksClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) + azureClient.DisksClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) + azureClient.DisksClient.RequestInspector = withInspection(maxlen) + azureClient.DisksClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) + azureClient.DisksClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.DisksClient.UserAgent) azureClient.GroupsClient = resources.NewGroupsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.GroupsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.GroupsClient.RequestInspector = withInspection(maxlen) azureClient.GroupsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.GroupsClient.UserAgent += packerUserAgent + azureClient.GroupsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.GroupsClient.UserAgent) azureClient.ImagesClient = compute.NewImagesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.ImagesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.ImagesClient.RequestInspector = withInspection(maxlen) azureClient.ImagesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.ImagesClient.UserAgent += packerUserAgent + azureClient.ImagesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.ImagesClient.UserAgent) azureClient.InterfacesClient = network.NewInterfacesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.InterfacesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.InterfacesClient.RequestInspector = withInspection(maxlen) azureClient.InterfacesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - 
azureClient.InterfacesClient.UserAgent += packerUserAgent + azureClient.InterfacesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.InterfacesClient.UserAgent) azureClient.SubnetsClient = network.NewSubnetsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.SubnetsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.SubnetsClient.RequestInspector = withInspection(maxlen) azureClient.SubnetsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.SubnetsClient.UserAgent += packerUserAgent + azureClient.SubnetsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.SubnetsClient.UserAgent) azureClient.VirtualNetworksClient = network.NewVirtualNetworksClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.VirtualNetworksClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.VirtualNetworksClient.RequestInspector = withInspection(maxlen) azureClient.VirtualNetworksClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.VirtualNetworksClient.UserAgent += packerUserAgent + azureClient.VirtualNetworksClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.VirtualNetworksClient.UserAgent) azureClient.PublicIPAddressesClient = network.NewPublicIPAddressesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.PublicIPAddressesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.PublicIPAddressesClient.RequestInspector = withInspection(maxlen) azureClient.PublicIPAddressesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.PublicIPAddressesClient.UserAgent += packerUserAgent + azureClient.PublicIPAddressesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.PublicIPAddressesClient.UserAgent) azureClient.VirtualMachinesClient = compute.NewVirtualMachinesClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.VirtualMachinesClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.VirtualMachinesClient.RequestInspector = withInspection(maxlen) azureClient.VirtualMachinesClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), templateCapture(azureClient), errorCapture(azureClient)) - azureClient.VirtualMachinesClient.UserAgent += packerUserAgent + azureClient.VirtualMachinesClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.VirtualMachinesClient.UserAgent) azureClient.AccountsClient = armStorage.NewAccountsClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) azureClient.AccountsClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) azureClient.AccountsClient.RequestInspector = withInspection(maxlen) azureClient.AccountsClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.AccountsClient.UserAgent += packerUserAgent + azureClient.AccountsClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.AccountsClient.UserAgent) keyVaultURL, err := url.Parse(cloud.KeyVaultEndpoint) if err != nil { @@ -192,11 +204,25 @@ func NewAzureClient(subscriptionID, resourceGroupName, storageAccountName string azureClient.VaultClient.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalTokenVault) azureClient.VaultClient.RequestInspector = 
withInspection(maxlen) azureClient.VaultClient.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) - azureClient.VaultClient.UserAgent += packerUserAgent + azureClient.VaultClient.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.VaultClient.UserAgent) + + // TODO(boumenot) - SDK still does not have a full KeyVault client. + // There are two ways that KeyVault has to be accessed, and each one has their own SPN. An authenticated SPN + // is tied to the URL, and the URL associated with getting the secret is different than the URL + // associated with deleting the KeyVault. As a result, I need to have *two* different clients to + // access KeyVault. I did not want to split it into two separate files, so I am starting with this. + // + // I do not like this implementation. It is getting long in the tooth, and should be re-examined now + // that we have a "working" solution. + azureClient.VaultClientDelete = common.NewVaultClientWithBaseURI(cloud.ResourceManagerEndpoint, subscriptionID) + azureClient.VaultClientDelete.Authorizer = autorest.NewBearerAuthorizer(servicePrincipalToken) + azureClient.VaultClientDelete.RequestInspector = withInspection(maxlen) + azureClient.VaultClientDelete.ResponseInspector = byConcatDecorators(byInspecting(maxlen), errorCapture(azureClient)) + azureClient.VaultClientDelete.UserAgent = fmt.Sprintf("%s %s", useragent.String(), azureClient.VaultClientDelete.UserAgent) // If this is a managed disk build, this should be ignored. if resourceGroupName != "" && storageAccountName != "" { - accountKeys, err := azureClient.AccountsClient.ListKeys(resourceGroupName, storageAccountName) + accountKeys, err := azureClient.AccountsClient.ListKeys(context.TODO(), resourceGroupName, storageAccountName) if err != nil { return nil, err } diff --git a/builder/azure/arm/azure_error_response_test.go b/builder/azure/arm/azure_error_response_test.go index f077fa6fe..2e4f7f1ac 100644 --- a/builder/azure/arm/azure_error_response_test.go +++ b/builder/azure/arm/azure_error_response_test.go @@ -1,26 +1,27 @@ package arm import ( - "github.com/approvals/go-approval-tests" - "github.com/hashicorp/packer/common/json" "strings" "testing" + + "github.com/approvals/go-approval-tests" + "github.com/hashicorp/packer/common/json" ) const AzureErrorSimple = `{"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Compute/images/PackerUbuntuImage' under resource group 'packer-test00' was not found."}}` const AzureErrorNested = `{"status":"Failed","error":{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"BadRequest","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidRequestFormat\",\r\n \"message\": \"Cannot parse the request.\",\r\n \"details\": [\r\n {\r\n \"code\": \"InvalidJson\",\r\n \"message\": \"Error converting value \\\"playground\\\" to type 'Microsoft.WindowsAzure.Networking.Nrp.Frontend.Contract.Csm.Public.IpAllocationMethod'. 
Path 'properties.publicIPAllocationMethod', line 1, position 130.\"\r\n }\r\n ]\r\n }\r\n}"}]}}` func TestAzureErrorSimpleShouldUnmarshal(t *testing.T) { - var azureErrorReponse azureErrorResponse - err := json.Unmarshal([]byte(AzureErrorSimple), &azureErrorReponse) + var azureErrorResponse azureErrorResponse + err := json.Unmarshal([]byte(AzureErrorSimple), &azureErrorResponse) if err != nil { t.Fatal(err) } - if azureErrorReponse.ErrorDetails.Code != "ResourceNotFound" { + if azureErrorResponse.ErrorDetails.Code != "ResourceNotFound" { t.Errorf("Error.Code") } - if azureErrorReponse.ErrorDetails.Message != "The Resource 'Microsoft.Compute/images/PackerUbuntuImage' under resource group 'packer-test00' was not found." { + if azureErrorResponse.ErrorDetails.Message != "The Resource 'Microsoft.Compute/images/PackerUbuntuImage' under resource group 'packer-test00' was not found." { t.Errorf("Error.Message") } } @@ -50,26 +51,26 @@ func TestAzureErrorEmptyShouldFormat(t *testing.T) { } func TestAzureErrorSimpleShouldFormat(t *testing.T) { - var azureErrorReponse azureErrorResponse - err := json.Unmarshal([]byte(AzureErrorSimple), &azureErrorReponse) + var azureErrorResponse azureErrorResponse + err := json.Unmarshal([]byte(AzureErrorSimple), &azureErrorResponse) if err != nil { t.Fatal(err) } - err = approvaltests.VerifyString(t, azureErrorReponse.Error()) + err = approvaltests.VerifyString(t, azureErrorResponse.Error()) if err != nil { t.Fatal(err) } } func TestAzureErrorNestedShouldFormat(t *testing.T) { - var azureErrorReponse azureErrorResponse - err := json.Unmarshal([]byte(AzureErrorNested), &azureErrorReponse) + var azureErrorResponse azureErrorResponse + err := json.Unmarshal([]byte(AzureErrorNested), &azureErrorResponse) if err != nil { t.Fatal(err) } - err = approvaltests.VerifyString(t, azureErrorReponse.Error()) + err = approvaltests.VerifyString(t, azureErrorResponse.Error()) if err != nil { t.Fatal(err) } diff --git a/builder/azure/arm/builder.go b/builder/azure/arm/builder.go index e6993cfb5..23d3d2181 100644 --- a/builder/azure/arm/builder.go +++ b/builder/azure/arm/builder.go @@ -1,6 +1,7 @@ package arm import ( + "context" "errors" "fmt" "log" @@ -9,31 +10,29 @@ import ( "strings" "time" + armstorage "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage" + "github.com/Azure/azure-sdk-for-go/storage" + "github.com/Azure/go-autorest/autorest/adal" + "github.com/dgrijalva/jwt-go" packerAzureCommon "github.com/hashicorp/packer/builder/azure/common" - "github.com/hashicorp/packer/builder/azure/common/constants" "github.com/hashicorp/packer/builder/azure/common/lin" - - "github.com/Azure/azure-sdk-for-go/arm/storage" - "github.com/Azure/go-autorest/autorest/adal" packerCommon "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type Builder struct { - config *Config - stateBag multistep.StateBag - runner multistep.Runner + config *Config + stateBag multistep.StateBag + runner multistep.Runner + ctxCancel context.CancelFunc } const ( - DefaultNicName = "packerNic" - DefaultPublicIPAddressName = "packerPublicIP" - DefaultSasBlobContainer = "system/Microsoft.Compute" - DefaultSasBlobPermission = "r" - DefaultSecretName = "packerKeyVaultSecret" + DefaultSasBlobContainer = "system/Microsoft.Compute" + DefaultSecretName = "packerKeyVaultSecret" ) func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { 
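The hunks that follow thread a cancellable context through Run and stash its CancelFunc in the Builder.ctxCancel field added above, so that Cancel can abort in-flight Azure SDK calls. As a minimal, self-contained sketch of that pattern only (every name below is illustrative and not part of this change):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// builder mirrors only the shape of the change: Run derives a cancellable
// context and stores its CancelFunc so Cancel, possibly invoked from
// another goroutine, can abort context-aware work that is still running.
type builder struct {
	ctxCancel context.CancelFunc
}

func (b *builder) Run() error {
	ctx, cancel := context.WithCancel(context.Background())
	b.ctxCancel = cancel
	defer cancel() // release the context when Run returns on its own

	// Stand-in for a long-running, context-aware SDK call.
	select {
	case <-time.After(5 * time.Second):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func (b *builder) Cancel() {
	if b.ctxCancel != nil {
		b.ctxCancel()
	}
}

func main() {
	b := &builder{}
	go func() {
		time.Sleep(100 * time.Millisecond)
		b.Cancel()
	}()
	fmt.Println(b.Run()) // prints "context canceled"
}
```

Calling Cancel from another goroutine makes any context-aware call inside Run return early with context.Canceled, which is the effect the ctxCancel wiring below aims to provide.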
@@ -53,8 +52,13 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) { + ui.Say("Running builder ...") + ctx, cancel := context.WithCancel(context.Background()) + b.ctxCancel = cancel + defer cancel() + if err := newConfigRetriever().FillParameters(b.config); err != nil { return nil, err } @@ -87,9 +91,18 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe if err := resolver.Resolve(b.config); err != nil { return nil, err } + if b.config.ObjectID == "" { + b.config.ObjectID = getObjectIdFromToken(ui, spnCloud) + } else { + ui.Message("You have provided Object_ID which is no longer needed, azure packer builder determines this dynamically from the authentication token") + } + + if b.config.ObjectID == "" && b.config.OSType != constants.Target_Linux { + return nil, fmt.Errorf("could not determine the ObjectID for the user, which is required for Windows builds") + } if b.config.isManagedImage() { - group, err := azureClient.GroupsClient.Get(b.config.ManagedImageResourceGroupName) + group, err := azureClient.GroupsClient.Get(ctx, b.config.ManagedImageResourceGroupName) if err != nil { return nil, fmt.Errorf("Cannot locate the managed image resource group %s.", b.config.ManagedImageResourceGroupName) } @@ -97,14 +110,37 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe b.config.manageImageLocation = *group.Location // If a managed image already exists it cannot be overwritten. - _, err = azureClient.ImagesClient.Get(b.config.ManagedImageResourceGroupName, b.config.ManagedImageName, "") + _, err = azureClient.ImagesClient.Get(ctx, b.config.ManagedImageResourceGroupName, b.config.ManagedImageName, "") if err == nil { - return nil, fmt.Errorf("A managed image named %s already exists in the resource group %s.", b.config.ManagedImageName, b.config.ManagedImageResourceGroupName) + if b.config.PackerForce { + ui.Say(fmt.Sprintf("the managed image named %s already exists, but deleting it due to -force flag", b.config.ManagedImageName)) + f, err := azureClient.ImagesClient.Delete(ctx, b.config.ManagedImageResourceGroupName, b.config.ManagedImageName) + if err == nil { + err = f.WaitForCompletion(ctx, azureClient.ImagesClient.Client) + } + if err != nil { + return nil, fmt.Errorf("failed to delete the managed image named %s : %s", b.config.ManagedImageName, azureClient.LastError.Error()) + } + } else { + return nil, fmt.Errorf("the managed image named %s already exists in the resource group %s, use the -force option to automatically delete it.", b.config.ManagedImageName, b.config.ManagedImageResourceGroupName) + } } + } else { + // User is not using Managed Images to build, warning message here that this path is being deprecated + ui.Error("Warning: You are using Azure Packer Builder to create VHDs which is being deprecated, consider using Managed Images. 
Learn more http://aka.ms/packermanagedimage") + } + + if b.config.BuildResourceGroupName != "" { + group, err := azureClient.GroupsClient.Get(ctx, b.config.BuildResourceGroupName) + if err != nil { + return nil, fmt.Errorf("Cannot locate the existing build resource group %s.", b.config.BuildResourceGroupName) + } + + b.config.Location = *group.Location } if b.config.StorageAccount != "" { - account, err := b.getBlobAccount(azureClient, b.config.ResourceGroupName, b.config.StorageAccount) + account, err := b.getBlobAccount(ctx, azureClient, b.config.ResourceGroupName, b.config.StorageAccount) if err != nil { return nil, err } @@ -127,11 +163,13 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe b.setImageParameters(b.stateBag) var steps []multistep.Step + deploymentName := b.stateBag.Get(constants.ArmDeploymentName).(string) + if b.config.OSType == constants.Target_Linux { steps = []multistep.Step{ NewStepCreateResourceGroup(azureClient, ui), NewStepValidateTemplate(azureClient, ui, b.config, GetVirtualMachineDeployment), - NewStepDeployTemplate(azureClient, ui, b.config, GetVirtualMachineDeployment), + NewStepDeployTemplate(azureClient, ui, b.config, deploymentName, GetVirtualMachineDeployment), NewStepGetIPAddress(azureClient, ui, endpointConnectType), &communicator.StepConnectSSH{ Config: &b.config.Comm, @@ -140,21 +178,28 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &packerCommon.StepProvision{}, NewStepGetOSDisk(azureClient, ui), + NewStepGetAdditionalDisks(azureClient, ui), NewStepPowerOffCompute(azureClient, ui), NewStepCaptureImage(azureClient, ui), NewStepDeleteResourceGroup(azureClient, ui), NewStepDeleteOSDisk(azureClient, ui), + NewStepDeleteAdditionalDisks(azureClient, ui), } } else if b.config.OSType == constants.Target_Windows { + keyVaultDeploymentName := b.stateBag.Get(constants.ArmKeyVaultDeploymentName).(string) steps = []multistep.Step{ NewStepCreateResourceGroup(azureClient, ui), NewStepValidateTemplate(azureClient, ui, b.config, GetKeyVaultDeployment), - NewStepDeployTemplate(azureClient, ui, b.config, GetKeyVaultDeployment), + NewStepDeployTemplate(azureClient, ui, b.config, keyVaultDeploymentName, GetKeyVaultDeployment), NewStepGetCertificate(azureClient, ui), NewStepSetCertificate(b.config, ui), NewStepValidateTemplate(azureClient, ui, b.config, GetVirtualMachineDeployment), - NewStepDeployTemplate(azureClient, ui, b.config, GetVirtualMachineDeployment), + NewStepDeployTemplate(azureClient, ui, b.config, deploymentName, GetVirtualMachineDeployment), NewStepGetIPAddress(azureClient, ui, endpointConnectType), + &StepSaveWinRMPassword{ + Password: b.config.tmpAdminPassword, + BuildName: b.config.PackerBuildName, + }, &communicator.StepConnectWinRM{ Config: &b.config.Comm, Host: func(stateBag multistep.StateBag) (string, error) { @@ -169,10 +214,12 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &packerCommon.StepProvision{}, NewStepGetOSDisk(azureClient, ui), + NewStepGetAdditionalDisks(azureClient, ui), NewStepPowerOffCompute(azureClient, ui), NewStepCaptureImage(azureClient, ui), NewStepDeleteResourceGroup(azureClient, ui), NewStepDeleteOSDisk(azureClient, ui), + NewStepDeleteAdditionalDisks(azureClient, ui), } } else { return nil, fmt.Errorf("Builder does not support the os_type '%s'", b.config.OSType) @@ -213,9 +260,11 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe return NewArtifact(
template.(*CaptureTemplate), func(name string) string { - month := time.Now().AddDate(0, 1, 0).UTC() blob := azureClient.BlobStorageClient.GetContainerReference(DefaultSasBlobContainer).GetBlobReference(name) - sasUrl, _ := blob.GetSASURI(month, DefaultSasBlobPermission) + options := storage.BlobSASOptions{} + options.BlobServiceSASPermissions.Read = true + options.Expiry = time.Now().AddDate(0, 1, 0).UTC() // one month + sasUrl, _ := blob.GetSASURI(options) return sasUrl }) } @@ -253,6 +302,10 @@ func (b *Builder) isPrivateNetworkCommunication() bool { } func (b *Builder) Cancel() { + if b.ctxCancel != nil { + log.Printf("Cancelling Azure builder...") + b.ctxCancel() + } if b.runner != nil { log.Println("Cancelling the step runner...") b.runner.Cancel() @@ -267,8 +320,8 @@ func canonicalizeLocation(location string) string { return strings.Replace(location, " ", "", -1) } -func (b *Builder) getBlobAccount(client *AzureClient, resourceGroupName string, storageAccountName string) (*storage.Account, error) { - account, err := client.AccountsClient.GetProperties(resourceGroupName, storageAccountName) +func (b *Builder) getBlobAccount(ctx context.Context, client *AzureClient, resourceGroupName string, storageAccountName string) (*armstorage.Account, error) { + account, err := client.AccountsClient.GetProperties(ctx, resourceGroupName, storageAccountName) if err != nil { return nil, err } @@ -280,23 +333,36 @@ func (b *Builder) configureStateBag(stateBag multistep.StateBag) { stateBag.Put(constants.AuthorizedKey, b.config.sshAuthorizedKey) stateBag.Put(constants.PrivateKey, b.config.sshPrivateKey) - stateBag.Put(constants.ArmTags, &b.config.AzureTags) + stateBag.Put(constants.ArmTags, b.config.AzureTags) stateBag.Put(constants.ArmComputeName, b.config.tmpComputeName) stateBag.Put(constants.ArmDeploymentName, b.config.tmpDeploymentName) + if b.config.OSType == constants.Target_Windows { + stateBag.Put(constants.ArmKeyVaultDeploymentName, fmt.Sprintf("kv%s", b.config.tmpDeploymentName)) + } stateBag.Put(constants.ArmKeyVaultName, b.config.tmpKeyVaultName) - stateBag.Put(constants.ArmLocation, b.config.Location) - stateBag.Put(constants.ArmNicName, DefaultNicName) - stateBag.Put(constants.ArmPublicIPAddressName, DefaultPublicIPAddressName) - stateBag.Put(constants.ArmResourceGroupName, b.config.tmpResourceGroupName) + stateBag.Put(constants.ArmNicName, b.config.tmpNicName) + stateBag.Put(constants.ArmPublicIPAddressName, b.config.tmpPublicIPAddressName) + if b.config.TempResourceGroupName != "" && b.config.BuildResourceGroupName != "" { + stateBag.Put(constants.ArmDoubleResourceGroupNameSet, true) + } + if b.config.tmpResourceGroupName != "" { + stateBag.Put(constants.ArmResourceGroupName, b.config.tmpResourceGroupName) + stateBag.Put(constants.ArmIsExistingResourceGroup, false) + } else { + stateBag.Put(constants.ArmResourceGroupName, b.config.BuildResourceGroupName) + stateBag.Put(constants.ArmIsExistingResourceGroup, true) + } stateBag.Put(constants.ArmStorageAccountName, b.config.StorageAccount) stateBag.Put(constants.ArmIsManagedImage, b.config.isManagedImage()) stateBag.Put(constants.ArmManagedImageResourceGroupName, b.config.ManagedImageResourceGroupName) stateBag.Put(constants.ArmManagedImageName, b.config.ManagedImageName) + stateBag.Put(constants.ArmAsyncResourceGroupDelete, b.config.AsyncResourceGroupDelete) } // Parameters that are only known at runtime after querying Azure. 
func (b *Builder) setRuntimeParameters(stateBag multistep.StateBag) { + stateBag.Put(constants.ArmLocation, b.config.Location) stateBag.Put(constants.ArmManagedImageLocation, b.config.manageImageLocation) } @@ -315,10 +381,17 @@ func (b *Builder) getServicePrincipalTokens(say func(string)) (*adal.ServicePrin var err error if b.config.useDeviceLogin { - servicePrincipalToken, err = packerAzureCommon.Authenticate(*b.config.cloudEnvironment, b.config.TenantID, say) + say("Getting auth token for Service management endpoint") + servicePrincipalToken, err = packerAzureCommon.Authenticate(*b.config.cloudEnvironment, b.config.TenantID, say, b.config.cloudEnvironment.ServiceManagementEndpoint) if err != nil { return nil, nil, err } + say("Getting token for Vault resource") + servicePrincipalTokenVault, err = packerAzureCommon.Authenticate(*b.config.cloudEnvironment, b.config.TenantID, say, strings.TrimRight(b.config.cloudEnvironment.KeyVaultEndpoint, "/")) + if err != nil { + return nil, nil, err + } + } else { auth := NewAuthenticate(*b.config.cloudEnvironment, b.config.ClientID, b.config.ClientSecret, b.config.TenantID) @@ -329,11 +402,39 @@ func (b *Builder) getServicePrincipalTokens(say func(string)) (*adal.ServicePrin servicePrincipalTokenVault, err = auth.getServicePrincipalTokenWithResource( strings.TrimRight(b.config.cloudEnvironment.KeyVaultEndpoint, "/")) - if err != nil { return nil, nil, err } + + } + + err = servicePrincipalToken.EnsureFresh() + + if err != nil { + return nil, nil, err + } + + err = servicePrincipalTokenVault.EnsureFresh() + + if err != nil { + return nil, nil, err } return servicePrincipalToken, servicePrincipalTokenVault, nil } + +func getObjectIdFromToken(ui packer.Ui, token *adal.ServicePrincipalToken) string { + claims := jwt.MapClaims{} + var p jwt.Parser + + var err error + + _, _, err = p.ParseUnverified(token.OAuthToken(), claims) + + if err != nil { + ui.Error(fmt.Sprintf("Failed to parse the token. Error: %s", err.Error())) + return "" + } + return claims["oid"].(string) + +} diff --git a/builder/azure/arm/builder_acc_test.go b/builder/azure/arm/builder_acc_test.go new file mode 100644 index 000000000..8f0c605f9 --- /dev/null +++ b/builder/azure/arm/builder_acc_test.go @@ -0,0 +1,322 @@ +package arm + +// these tests require the following variables to be set, +// although some tests will only use a subset: +// +// * ARM_CLIENT_ID +// * ARM_CLIENT_SECRET +// * ARM_SUBSCRIPTION_ID +// * ARM_OBJECT_ID +// * ARM_STORAGE_ACCOUNT +// +// The subscription in question should have a resource group +// called "packer-acceptance-test" in the "South Central US" region. The +// storage account referred to in the above variable should +// be inside this resource group and in "South Central US" as well. 
+// +// In addition, the PACKER_ACC variable should also be set to +// a non-empty value to enable Packer acceptance tests and the +// options "-v -timeout 90m" should be provided to the test +// command, e.g.: +// go test -v -timeout 90m -run TestBuilderAcc_.* + +import ( + "testing" + + "fmt" + builderT "github.com/hashicorp/packer/helper/builder/testing" + "os" +) + +const DeviceLoginAcceptanceTest = "DEVICELOGIN_TEST" + +func TestBuilderAcc_ManagedDisk_Windows(t *testing.T) { + builderT.Test(t, builderT.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Builder: &Builder{}, + Template: testBuilderAccManagedDiskWindows, + }) +} + +func TestBuilderAcc_ManagedDisk_Windows_Build_Resource_Group(t *testing.T) { + builderT.Test(t, builderT.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Builder: &Builder{}, + Template: testBuilderAccManagedDiskWindowsBuildResourceGroup, + }) +} + +func TestBuilderAcc_ManagedDisk_Windows_DeviceLogin(t *testing.T) { + if os.Getenv(DeviceLoginAcceptanceTest) == "" { + t.Skip(fmt.Sprintf( + "Device Login acceptance tests are skipped unless env '%s' is set, as they require a manual step during execution", + DeviceLoginAcceptanceTest)) + return + } + builderT.Test(t, builderT.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Builder: &Builder{}, + Template: testBuilderAccManagedDiskWindowsDeviceLogin, + }) +} + +func TestBuilderAcc_ManagedDisk_Linux(t *testing.T) { + builderT.Test(t, builderT.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Builder: &Builder{}, + Template: testBuilderAccManagedDiskLinux, + }) +} + +func TestBuilderAcc_ManagedDisk_Linux_DeviceLogin(t *testing.T) { + if os.Getenv(DeviceLoginAcceptanceTest) == "" { + t.Skip(fmt.Sprintf( + "Device Login acceptance tests are skipped unless env '%s' is set, as they require a manual step during execution", + DeviceLoginAcceptanceTest)) + return + } + builderT.Test(t, builderT.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Builder: &Builder{}, + Template: testBuilderAccManagedDiskLinuxDeviceLogin, + }) +} + +func TestBuilderAcc_Blob_Windows(t *testing.T) { + builderT.Test(t, builderT.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Builder: &Builder{}, + Template: testBuilderAccBlobWindows, + }) +} + +func TestBuilderAcc_Blob_Linux(t *testing.T) { + builderT.Test(t, builderT.TestCase{ + PreCheck: func() { testAccPreCheck(t) }, + Builder: &Builder{}, + Template: testBuilderAccBlobLinux, + }) +} + +func testAccPreCheck(*testing.T) {} + +const testBuilderAccManagedDiskWindows = ` +{ + "variables": { + "client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}", + "client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}", + "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}" + }, + "builders": [{ + "type": "test", + + "client_id": "{{user ` + "`client_id`" + `}}", + "client_secret": "{{user ` + "`client_secret`" + `}}", + "subscription_id": "{{user ` + "`subscription_id`" + `}}", + + "managed_image_resource_group_name": "packer-acceptance-test", + "managed_image_name": "testBuilderAccManagedDiskWindows-{{timestamp}}", + + "os_type": "Windows", + "image_publisher": "MicrosoftWindowsServer", + "image_offer": "WindowsServer", + "image_sku": "2012-R2-Datacenter", + + "communicator": "winrm", + "winrm_use_ssl": "true", + "winrm_insecure": "true", + "winrm_timeout": "3m", + "winrm_username": "packer", + "async_resourcegroup_delete": "true", + + "location": "South Central US", + "vm_size": "Standard_DS2_v2" + }] +} +` +const testBuilderAccManagedDiskWindowsBuildResourceGroup = ` +{ + "variables": { 
"client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}", + "client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}", + "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}" + }, + "builders": [{ + "type": "test", + + "client_id": "{{user ` + "`client_id`" + `}}", + "client_secret": "{{user ` + "`client_secret`" + `}}", + "subscription_id": "{{user ` + "`subscription_id`" + `}}", + + "build_resource_group_name" : "packer-acceptance-test", + "managed_image_resource_group_name": "packer-acceptance-test", + "managed_image_name": "testBuilderAccManagedDiskWindows-{{timestamp}}", + + "os_type": "Windows", + "image_publisher": "MicrosoftWindowsServer", + "image_offer": "WindowsServer", + "image_sku": "2012-R2-Datacenter", + + "communicator": "winrm", + "winrm_use_ssl": "true", + "winrm_insecure": "true", + "winrm_timeout": "3m", + "winrm_username": "packer", + "async_resourcegroup_delete": "true", + + "vm_size": "Standard_DS2_v2" + }] +} +` + +const testBuilderAccManagedDiskWindowsDeviceLogin = ` +{ + "variables": { + "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}" + }, + "builders": [{ + "type": "test", + + "subscription_id": "{{user ` + "`subscription_id`" + `}}", + + "managed_image_resource_group_name": "packer-acceptance-test", + "managed_image_name": "testBuilderAccManagedDiskWindowsDeviceLogin-{{timestamp}}", + + "os_type": "Windows", + "image_publisher": "MicrosoftWindowsServer", + "image_offer": "WindowsServer", + "image_sku": "2012-R2-Datacenter", + + "communicator": "winrm", + "winrm_use_ssl": "true", + "winrm_insecure": "true", + "winrm_timeout": "3m", + "winrm_username": "packer", + + "location": "South Central US", + "vm_size": "Standard_DS2_v2" + }] +} +` + +const testBuilderAccManagedDiskLinux = ` +{ + "variables": { + "client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}", + "client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}", + "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}" + }, + "builders": [{ + "type": "test", + + "client_id": "{{user ` + "`client_id`" + `}}", + "client_secret": "{{user ` + "`client_secret`" + `}}", + "subscription_id": "{{user ` + "`subscription_id`" + `}}", + + "managed_image_resource_group_name": "packer-acceptance-test", + "managed_image_name": "testBuilderAccManagedDiskLinux-{{timestamp}}", + + "os_type": "Linux", + "image_publisher": "Canonical", + "image_offer": "UbuntuServer", + "image_sku": "16.04-LTS", + + "location": "South Central US", + "vm_size": "Standard_DS2_v2" + }] +} +` +const testBuilderAccManagedDiskLinuxDeviceLogin = ` +{ + "variables": { + "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}" + }, + "builders": [{ + "type": "test", + + "subscription_id": "{{user ` + "`subscription_id`" + `}}", + + "managed_image_resource_group_name": "packer-acceptance-test", + "managed_image_name": "testBuilderAccManagedDiskLinuxDeviceLogin-{{timestamp}}", + + "os_type": "Linux", + "image_publisher": "Canonical", + "image_offer": "UbuntuServer", + "image_sku": "16.04-LTS", + "async_resourcegroup_delete": "true", + + "location": "South Central US", + "vm_size": "Standard_DS2_v2" + }] +} +` + +const testBuilderAccBlobWindows = ` +{ + "variables": { + "client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}", + "client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}", + "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}", + "object_id": "{{env ` + "`ARM_OBJECT_ID`" + `}}", + "storage_account": "{{env ` + "`ARM_STORAGE_ACCOUNT`" + `}}" + }, + "builders": [{ + "type": "test", + + "client_id": "{{user ` + "`client_id`" + `}}", 
+ "client_secret": "{{user ` + "`client_secret`" + `}}", + "subscription_id": "{{user ` + "`subscription_id`" + `}}", + "object_id": "{{user ` + "`object_id`" + `}}", + + "storage_account": "{{user ` + "`storage_account`" + `}}", + "resource_group_name": "packer-acceptance-test", + "capture_container_name": "test", + "capture_name_prefix": "testBuilderAccBlobWin", + + "os_type": "Windows", + "image_publisher": "MicrosoftWindowsServer", + "image_offer": "WindowsServer", + "image_sku": "2012-R2-Datacenter", + + "communicator": "winrm", + "winrm_use_ssl": "true", + "winrm_insecure": "true", + "winrm_timeout": "3m", + "winrm_username": "packer", + + "location": "South Central US", + "vm_size": "Standard_DS2_v2" + }] +} +` + +const testBuilderAccBlobLinux = ` +{ + "variables": { + "client_id": "{{env ` + "`ARM_CLIENT_ID`" + `}}", + "client_secret": "{{env ` + "`ARM_CLIENT_SECRET`" + `}}", + "subscription_id": "{{env ` + "`ARM_SUBSCRIPTION_ID`" + `}}", + "storage_account": "{{env ` + "`ARM_STORAGE_ACCOUNT`" + `}}" + }, + "builders": [{ + "type": "test", + + "client_id": "{{user ` + "`client_id`" + `}}", + "client_secret": "{{user ` + "`client_secret`" + `}}", + "subscription_id": "{{user ` + "`subscription_id`" + `}}", + + "storage_account": "{{user ` + "`storage_account`" + `}}", + "resource_group_name": "packer-acceptance-test", + "capture_container_name": "test", + "capture_name_prefix": "testBuilderAccBlobLinux", + + "os_type": "Linux", + "image_publisher": "Canonical", + "image_offer": "UbuntuServer", + "image_sku": "16.04-LTS", + + "location": "South Central US", + "vm_size": "Standard_DS2_v2" + }] +} +` diff --git a/builder/azure/arm/builder_test.go b/builder/azure/arm/builder_test.go index a44d363d0..99855333d 100644 --- a/builder/azure/arm/builder_test.go +++ b/builder/azure/arm/builder_test.go @@ -20,12 +20,12 @@ func TestStateBagShouldBePopulatedExpectedValues(t *testing.T) { constants.ArmTags, constants.ArmComputeName, constants.ArmDeploymentName, - constants.ArmLocation, constants.ArmNicName, constants.ArmResourceGroupName, constants.ArmStorageAccountName, constants.ArmVirtualMachineCaptureParameters, constants.ArmPublicIPAddressName, + constants.ArmAsyncResourceGroupDelete, } for _, v := range expectedStateBagKeys { diff --git a/builder/azure/arm/capture_template.go b/builder/azure/arm/capture_template.go index 0332a02f4..2ba2c48b8 100644 --- a/builder/azure/arm/capture_template.go +++ b/builder/azure/arm/capture_template.go @@ -23,7 +23,8 @@ type CaptureDisk struct { } type CaptureStorageProfile struct { - OSDisk CaptureDisk `json:"osDisk"` + OSDisk CaptureDisk `json:"osDisk"` + DataDisks []CaptureDisk `json:"dataDisks"` } type CaptureOSProfile struct { diff --git a/builder/azure/arm/capture_template_test.go b/builder/azure/arm/capture_template_test.go index 8c7a71405..10be51ee8 100644 --- a/builder/azure/arm/capture_template_test.go +++ b/builder/azure/arm/capture_template_test.go @@ -51,7 +51,33 @@ var captureTemplate01 = `{ "uri": "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/osDisk.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" }, "caching": "ReadWrite" - } + }, + "dataDisks": [ + { + "lun": 0, + "name": "packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd", + "createOption": "Empty", + "image": { + "uri": "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" + }, + "vhd": { + "uri": 
"http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-0.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" + }, + "caching": "ReadWrite" + }, + { + "lun": 1, + "name": "packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd", + "createOption": "Empty", + "image": { + "uri": "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" + }, + "vhd": { + "uri": "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-1.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" + }, + "caching": "ReadWrite" + } + ] }, "osProfile": { "computerName": "[parameters('vmName')]", @@ -169,6 +195,42 @@ func TestCaptureParseJson(t *testing.T) { t.Errorf("Resources[0].Properties.StorageProfile.OSDisk.Caching's value was unexpected: %s", osDisk.Caching) } + // == Resources/Properties/StorageProfile/DataDisks ================ + dataDisks := testSubject.Resources[0].Properties.StorageProfile.DataDisks + if len(dataDisks) != 2 { + t.Errorf("Resources[0].Properties.StorageProfile.DataDisks, 2 disks expected but was: %d", len(dataDisks)) + } + if dataDisks[0].Name != "packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Name's value was unexpected: %s", dataDisks[0].Name) + } + if dataDisks[0].CreateOption != "Empty" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].CreateOption's value was unexpected: %s", dataDisks[0].CreateOption) + } + if dataDisks[0].Image.Uri != "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-0.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Image.Uri's value was unexpected: %s", dataDisks[0].Image.Uri) + } + if dataDisks[0].Vhd.Uri != "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-0.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Vhd.Uri's value was unexpected: %s", dataDisks[0].Vhd.Uri) + } + if dataDisks[0].Caching != "ReadWrite" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[0].Caching's value was unexpected: %s", dataDisks[0].Caching) + } + if dataDisks[1].Name != "packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Name's value was unexpected: %s", dataDisks[1].Name) + } + if dataDisks[1].CreateOption != "Empty" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].CreateOption's value was unexpected: %s", dataDisks[1].CreateOption) + } + if dataDisks[1].Image.Uri != "http://storage.blob.core.windows.net/system/Microsoft.Compute/Images/images/packer-datadisk-1.32118633-6dc9-449f-83b6-a7d2983bec14.vhd" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Image.Uri's value was unexpected: %s", dataDisks[1].Image.Uri) + } + if dataDisks[1].Vhd.Uri != "http://storage.blob.core.windows.net/vmcontainerce1a1b75-f480-47cb-8e6e-55142e4a5f68/datadisk-1.ce1a1b75-f480-47cb-8e6e-55142e4a5f68.vhd" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Vhd.Uri's value was unexpected: %s", dataDisks[1].Vhd.Uri) + } + if dataDisks[1].Caching != "ReadWrite" { + t.Errorf("Resources[0].Properties.StorageProfile.dataDisks[1].Caching's value was unexpected: %s", dataDisks[1].Caching) + } + // == Resources/Properties/OSProfile 
============================ osProfile := testSubject.Resources[0].Properties.OSProfile if osProfile.AdminPassword != "[parameters('adminPassword')]" { diff --git a/builder/azure/arm/config.go b/builder/azure/arm/config.go index 3fc4f13b9..98e24dd81 100644 --- a/builder/azure/arm/config.go +++ b/builder/azure/arm/config.go @@ -14,7 +14,7 @@ import ( "strings" "time" - "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/to" "github.com/masterzen/winrm" @@ -22,6 +22,7 @@ import ( "github.com/hashicorp/packer/builder/azure/common/constants" "github.com/hashicorp/packer/builder/azure/pkcs12" "github.com/hashicorp/packer/common" + commonhelper "github.com/hashicorp/packer/helper/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" @@ -38,11 +39,32 @@ const ( DefaultVMSize = "Standard_A1" ) +const ( + // https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions#naming-rules-and-restrictions + // Regular expressions in Go are not expressive enough, such that the regular expression returned by Azure + // can be used (no backtracking). + // + // -> ^[^_\W][\w-._]{0,79}(?1 + + if len(subnetList) == 0 { return "", fmt.Errorf("Cannot find a subnet in the resource group %q associated with the virtual network called %q", resourceGroupName, name) } - if len(*subnets.Value) > 1 { - return "", fmt.Errorf("Found multiple subnets in the resource group %q associated with the virtual network called %q, please use virtual_network_subnet_name and virtual_network_resource_group_name to disambiguate", resourceGroupName, name) + if len(subnetList) > 1 { + return "", fmt.Errorf("Found multiple subnets in the resource group %q associated with the virtual network called %q, please use virtual_network_subnet_name and virtual_network_resource_group_name to disambiguate", resourceGroupName, name) } - subnet := (*subnets.Value)[0] + subnet := subnetList[0] return *subnet.Name, nil } diff --git a/builder/azure/arm/step.go b/builder/azure/arm/step.go index 707a1a07e..7fc7396b6 100644 --- a/builder/azure/arm/step.go +++ b/builder/azure/arm/step.go @@ -1,20 +1,10 @@ package arm import ( - "github.com/hashicorp/packer/builder/azure/common" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) -func processInterruptibleResult( - result common.InterruptibleTaskResult, sayError func(error), state multistep.StateBag) multistep.StepAction { - if result.IsCancelled { - return multistep.ActionHalt - } - - return processStepResult(result.Err, sayError, state) -} - func processStepResult( err error, sayError func(error), state multistep.StateBag) multistep.StepAction { diff --git a/builder/azure/arm/step_capture_image.go b/builder/azure/arm/step_capture_image.go index 10d29069e..ab675116e 100644 --- a/builder/azure/arm/step_capture_image.go +++ b/builder/azure/arm/step_capture_image.go @@ -1,20 +1,20 @@ package arm import ( + "context" "fmt" - "github.com/Azure/azure-sdk-for-go/arm/compute" - "github.com/hashicorp/packer/builder/azure/common" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - 
"github.com/mitchellh/multistep" ) type StepCaptureImage struct { client *AzureClient generalizeVM func(resourceGroupName, computeName string) error - captureVhd func(resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters, cancelCh <-chan struct{}) error - captureManagedImage func(resourceGroupName string, computeName string, parameters *compute.Image, cancelCh <-chan struct{}) error + captureVhd func(ctx context.Context, resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters) error + captureManagedImage func(ctx context.Context, resourceGroupName string, computeName string, parameters *compute.Image) error get func(client *AzureClient) *CaptureTemplate say func(message string) error func(e error) @@ -42,32 +42,30 @@ func NewStepCaptureImage(client *AzureClient, ui packer.Ui) *StepCaptureImage { } func (s *StepCaptureImage) generalize(resourceGroupName string, computeName string) error { - _, err := s.client.Generalize(resourceGroupName, computeName) + _, err := s.client.Generalize(context.TODO(), resourceGroupName, computeName) if err != nil { s.say(s.client.LastError.Error()) } return err } -func (s *StepCaptureImage) captureImageFromVM(resourceGroupName string, imageName string, image *compute.Image, cancelCh <-chan struct{}) error { - _, errChan := s.client.ImagesClient.CreateOrUpdate(resourceGroupName, imageName, *image, cancelCh) - err := <-errChan +func (s *StepCaptureImage) captureImageFromVM(ctx context.Context, resourceGroupName string, imageName string, image *compute.Image) error { + f, err := s.client.ImagesClient.CreateOrUpdate(ctx, resourceGroupName, imageName, *image) if err != nil { s.say(s.client.LastError.Error()) } - return <-errChan + return f.WaitForCompletion(ctx, s.client.ImagesClient.Client) } -func (s *StepCaptureImage) captureImage(resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters, cancelCh <-chan struct{}) error { - _, errChan := s.client.Capture(resourceGroupName, computeName, *parameters, cancelCh) - err := <-errChan +func (s *StepCaptureImage) captureImage(ctx context.Context, resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters) error { + f, err := s.client.VirtualMachinesClient.Capture(ctx, resourceGroupName, computeName, *parameters) if err != nil { s.say(s.client.LastError.Error()) } - return <-errChan + return f.WaitForCompletion(ctx, s.client.VirtualMachinesClient.Client) } -func (s *StepCaptureImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCaptureImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Capturing image ...") var computeName = state.Get(constants.ArmComputeName).(string) @@ -85,38 +83,37 @@ func (s *StepCaptureImage) Run(state multistep.StateBag) multistep.StepAction { s.say(fmt.Sprintf(" -> Compute Name : '%s'", computeName)) s.say(fmt.Sprintf(" -> Compute Location : '%s'", location)) - result := common.StartInterruptibleTask( - func() bool { - return common.IsStateCancelled(state) - }, - func(cancelCh <-chan struct{}) error { - err := s.generalizeVM(resourceGroupName, computeName) - if err != nil { - return err - } + err := s.generalizeVM(resourceGroupName, computeName) - if isManagedImage { - s.say(fmt.Sprintf(" -> Image ResourceGroupName : '%s'", targetManagedImageResourceGroupName)) - s.say(fmt.Sprintf(" -> Image Name : '%s'", targetManagedImageName)) - s.say(fmt.Sprintf(" -> Image Location : '%s'", 
targetManagedImageLocation)) - return s.captureManagedImage(targetManagedImageResourceGroupName, targetManagedImageName, imageParameters, cancelCh) - } else { - return s.captureVhd(resourceGroupName, computeName, vmCaptureParameters, cancelCh) - } - }) + if err == nil { + if isManagedImage { + s.say(fmt.Sprintf(" -> Image ResourceGroupName : '%s'", targetManagedImageResourceGroupName)) + s.say(fmt.Sprintf(" -> Image Name : '%s'", targetManagedImageName)) + s.say(fmt.Sprintf(" -> Image Location : '%s'", targetManagedImageLocation)) + err = s.captureManagedImage(ctx, targetManagedImageResourceGroupName, targetManagedImageName, imageParameters) + } else { + err = s.captureVhd(ctx, resourceGroupName, computeName, vmCaptureParameters) + } + } + if err != nil { + state.Put(constants.Error, err) + s.error(err) + + return multistep.ActionHalt + } // HACK(chrboum): I do not like this. The capture method should be returning this value - // instead having to pass in another lambda. I'm in this pickle because I am using - // common.StartInterruptibleTask which is not parametric, and only returns a type of error. - // I could change it to interface{}, but I do not like that solution either. + // instead having to pass in another lambda. // // Having to resort to capturing the template via an inspector is hack, and once I can // resolve that I can cleanup this code too. See the comments in azure_client.go for more // details. + // [paulmey]: autorest.Future now has access to the last http.Response, but I'm not sure if + // the body is still accessible. template := s.get(s.client) state.Put(constants.ArmCaptureTemplate, template) - return processInterruptibleResult(result, s.error, state) + return multistep.ActionContinue } func (*StepCaptureImage) Cleanup(multistep.StateBag) { diff --git a/builder/azure/arm/step_capture_image_test.go b/builder/azure/arm/step_capture_image_test.go index 4527627a3..10b6a0010 100644 --- a/builder/azure/arm/step_capture_image_test.go +++ b/builder/azure/arm/step_capture_image_test.go @@ -1,17 +1,18 @@ package arm import ( + "context" "fmt" "testing" - "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCaptureImageShouldFailIfCaptureFails(t *testing.T) { var testSubject = &StepCaptureImage{ - captureVhd: func(string, string, *compute.VirtualMachineCaptureParameters, <-chan struct{}) error { + captureVhd: func(context.Context, string, string, *compute.VirtualMachineCaptureParameters) error { return fmt.Errorf("!! 
Unit Test FAIL !!") }, generalizeVM: func(string, string) error { @@ -26,7 +27,7 @@ func TestStepCaptureImageShouldFailIfCaptureFails(t *testing.T) { stateBag := createTestStateBagStepCaptureImage() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -38,7 +39,7 @@ func TestStepCaptureImageShouldFailIfCaptureFails(t *testing.T) { func TestStepCaptureImageShouldPassIfCapturePasses(t *testing.T) { var testSubject = &StepCaptureImage{ - captureVhd: func(string, string, *compute.VirtualMachineCaptureParameters, <-chan struct{}) error { return nil }, + captureVhd: func(context.Context, string, string, *compute.VirtualMachineCaptureParameters) error { return nil }, generalizeVM: func(string, string) error { return nil }, @@ -51,7 +52,7 @@ func TestStepCaptureImageShouldPassIfCapturePasses(t *testing.T) { stateBag := createTestStateBagStepCaptureImage() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -73,7 +74,7 @@ func TestStepCaptureImageShouldTakeStepArgumentsFromStateBag(t *testing.T) { } var testSubject = &StepCaptureImage{ - captureVhd: func(resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters, cancelCh <-chan struct{}) error { + captureVhd: func(_ context.Context, resourceGroupName string, computeName string, parameters *compute.VirtualMachineCaptureParameters) error { actualResourceGroupName = resourceGroupName actualComputeName = computeName actualVirtualMachineCaptureParameters = parameters @@ -91,7 +92,7 @@ func TestStepCaptureImageShouldTakeStepArgumentsFromStateBag(t *testing.T) { } stateBag := createTestStateBagStepCaptureImage() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) diff --git a/builder/azure/arm/step_create_resource_group.go b/builder/azure/arm/step_create_resource_group.go index 079ad5279..f835933db 100644 --- a/builder/azure/arm/step_create_resource_group.go +++ b/builder/azure/arm/step_create_resource_group.go @@ -1,19 +1,22 @@ package arm import ( + "context" + "errors" "fmt" - "github.com/Azure/azure-sdk-for-go/arm/resources/resources" + "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepCreateResourceGroup struct { client *AzureClient - create func(resourceGroupName string, location string, tags *map[string]*string) error + create func(ctx context.Context, resourceGroupName string, location string, tags map[string]*string) error say func(message string) error func(e error) + exists func(ctx context.Context, resourceGroupName string) (bool, error) } func NewStepCreateResourceGroup(client *AzureClient, ui packer.Ui) *StepCreateResourceGroup { @@ -24,11 +27,12 @@ func NewStepCreateResourceGroup(client *AzureClient, ui packer.Ui) *StepCreateRe } step.create = step.createResourceGroup + step.exists = step.doesResourceGroupExist return step } -func (s *StepCreateResourceGroup) 
createResourceGroup(resourceGroupName string, location string, tags *map[string]*string) error { - _, err := s.client.GroupsClient.CreateOrUpdate(resourceGroupName, resources.Group{ +func (s *StepCreateResourceGroup) createResourceGroup(ctx context.Context, resourceGroupName string, location string, tags map[string]*string) error { + _, err := s.client.GroupsClient.CreateOrUpdate(ctx, resourceGroupName, resources.Group{ Location: &location, Tags: tags, }) @@ -39,22 +43,58 @@ func (s *StepCreateResourceGroup) createResourceGroup(resourceGroupName string, return err } -func (s *StepCreateResourceGroup) Run(state multistep.StateBag) multistep.StepAction { - s.say("Creating resource group ...") +func (s *StepCreateResourceGroup) doesResourceGroupExist(ctx context.Context, resourceGroupName string) (bool, error) { + exists, err := s.client.GroupsClient.CheckExistence(ctx, resourceGroupName) + if err != nil { + s.say(s.client.LastError.Error()) + } + + return exists.Response.StatusCode != 404, err +} + +func (s *StepCreateResourceGroup) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { + var doubleResource, ok = state.GetOk(constants.ArmDoubleResourceGroupNameSet) + if ok && doubleResource.(bool) { + err := errors.New("You have filled in both temp_resource_group_name and build_resource_group_name. Please choose one.") + return processStepResult(err, s.error, state) + } var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) var location = state.Get(constants.ArmLocation).(string) - var tags = state.Get(constants.ArmTags).(*map[string]*string) + var tags = state.Get(constants.ArmTags).(map[string]*string) - s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) - s.say(fmt.Sprintf(" -> Location : '%s'", location)) - s.say(fmt.Sprintf(" -> Tags :")) - for k, v := range *tags { - s.say(fmt.Sprintf(" ->> %s : %s", k, *v)) + exists, err := s.exists(ctx, resourceGroupName) + if err != nil { + return processStepResult(err, s.error, state) + } + configThinksExists := state.Get(constants.ArmIsExistingResourceGroup).(bool) + if exists != configThinksExists { + if configThinksExists { + err = errors.New("The resource group you want to use does not exist yet. Please use temp_resource_group_name to create a temporary resource group.") + } else { + err = errors.New("A resource group with that name already exists. Please use build_resource_group_name to use an existing resource group.") + } + return processStepResult(err, s.error, state) } - err := s.create(resourceGroupName, location, tags) - if err == nil { + // If the resource group exists, we may not have permissions to update it so we don't. 
+ if !exists { + s.say("Creating resource group ...") + + s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) + s.say(fmt.Sprintf(" -> Location : '%s'", location)) + s.say(fmt.Sprintf(" -> Tags :")) + for k, v := range tags { + s.say(fmt.Sprintf(" ->> %s : %s", k, *v)) + } + err = s.create(ctx, resourceGroupName, location, tags) + if err == nil { + state.Put(constants.ArmIsResourceGroupCreated, true) + } + } else { + s.say("Using existing resource group ...") + s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) + s.say(fmt.Sprintf(" -> Location : '%s'", location)) state.Put(constants.ArmIsResourceGroupCreated, true) } @@ -68,17 +108,30 @@ func (s *StepCreateResourceGroup) Cleanup(state multistep.StateBag) { } ui := state.Get("ui").(packer.Ui) - ui.Say("\nCleanup requested, deleting resource group ...") + if state.Get(constants.ArmIsExistingResourceGroup).(bool) { + ui.Say("\nThe resource group was not created by Packer, not deleting ...") + return + } else { + ui.Say("\nCleanup requested, deleting resource group ...") - var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) - _, errChan := s.client.GroupsClient.Delete(resourceGroupName, nil) - err := <-errChan - - if err != nil { - ui.Error(fmt.Sprintf("Error deleting resource group. Please delete it manually.\n\n"+ - "Name: %s\n"+ - "Error: %s", resourceGroupName, err)) + var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) + ctx := context.TODO() + f, err := s.client.GroupsClient.Delete(ctx, resourceGroupName) + if err == nil { + if state.Get(constants.ArmAsyncResourceGroupDelete).(bool) { + s.say(fmt.Sprintf("\n Not waiting for Resource Group delete as requested by user. Resource Group Name is %s", resourceGroupName)) + } else { + err = f.WaitForCompletion(ctx, s.client.GroupsClient.Client) + } + } + if err != nil { + ui.Error(fmt.Sprintf("Error deleting resource group. 
Please delete it manually.\n\n"+ + "Name: %s\n"+ + "Error: %s", resourceGroupName, err)) + return + } + if !state.Get(constants.ArmAsyncResourceGroupDelete).(bool) { + ui.Say("Resource group has been deleted.") + } } - - ui.Say("Resource group has been deleted.") } diff --git a/builder/azure/arm/step_create_resource_group_test.go b/builder/azure/arm/step_create_resource_group_test.go index 4e6b7592f..aa8fe2c2f 100644 --- a/builder/azure/arm/step_create_resource_group_test.go +++ b/builder/azure/arm/step_create_resource_group_test.go @@ -1,23 +1,75 @@ package arm import ( + "context" + "errors" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) +func TestStepCreateResourceGroupShouldFailIfBothGroupNames(t *testing.T) { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put(constants.ArmDoubleResourceGroupNameSet, true) + + value := "Unit Test: Tags" + tags := map[string]*string{ + "tag01": &value, + } + + stateBag.Put(constants.ArmTags, tags) + var testSubject = &StepCreateResourceGroup{ + create: func(context.Context, string, string, map[string]*string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + exists: func(context.Context, string) (bool, error) { return false, nil }, + } + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} + func TestStepCreateResourceGroupShouldFailIfCreateFails(t *testing.T) { var testSubject = &StepCreateResourceGroup{ - create: func(string, string, *map[string]*string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, + create: func(context.Context, string, string, map[string]*string) error { + return fmt.Errorf("!! 
Unit Test FAIL !!") + }, say: func(message string) {}, error: func(e error) {}, + exists: func(context.Context, string) (bool, error) { return false, nil }, } stateBag := createTestStateBagStepCreateResourceGroup() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} + +func TestStepCreateResourceGroupShouldFailIfExistsFails(t *testing.T) { + var testSubject = &StepCreateResourceGroup{ + create: func(context.Context, string, string, map[string]*string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + exists: func(context.Context, string) (bool, error) { return false, errors.New("FAIL") }, + } + + stateBag := createTestStateBagStepCreateResourceGroup() + + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -29,14 +81,15 @@ func TestStepCreateResourceGroupShouldFailIfCreateFails(t *testing.T) { func TestStepCreateResourceGroupShouldPassIfCreatePasses(t *testing.T) { var testSubject = &StepCreateResourceGroup{ - create: func(string, string, *map[string]*string) error { return nil }, + create: func(context.Context, string, string, map[string]*string) error { return nil }, say: func(message string) {}, error: func(e error) {}, + exists: func(context.Context, string) (bool, error) { return false, nil }, } stateBag := createTestStateBagStepCreateResourceGroup() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -49,21 +102,22 @@ func TestStepCreateResourceGroupShouldPassIfCreatePasses(t *testing.T) { func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T) { var actualResourceGroupName string var actualLocation string - var actualTags *map[string]*string + var actualTags map[string]*string var testSubject = &StepCreateResourceGroup{ - create: func(resourceGroupName string, location string, tags *map[string]*string) error { + create: func(_ context.Context, resourceGroupName string, location string, tags map[string]*string) error { actualResourceGroupName = resourceGroupName actualLocation = location actualTags = tags return nil }, - say: func(message string) {}, - error: func(e error) {}, + say: func(message string) {}, + error: func(e error) {}, + exists: func(context.Context, string) (bool, error) { return false, nil }, } stateBag := createTestStateBagStepCreateResourceGroup() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) @@ -71,7 +125,7 @@ func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T var expectedResourceGroupName = stateBag.Get(constants.ArmResourceGroupName).(string) var expectedLocation = stateBag.Get(constants.ArmLocation).(string) - var expectedTags = stateBag.Get(constants.ArmTags).(*map[string]*string) + var expectedTags = stateBag.Get(constants.ArmTags).(map[string]*string) if 
actualResourceGroupName != expectedResourceGroupName { t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.") @@ -81,7 +135,7 @@ func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.") } - if len(*expectedTags) != len(*actualTags) && *(*expectedTags)["tag01"] != *(*actualTags)["tag01"] { + if len(expectedTags) != len(actualTags) && expectedTags["tag01"] != actualTags["tag01"] { t.Fatal("Expected the step to source 'constants.ArmTags' from the state bag, but it did not.") } @@ -91,18 +145,78 @@ func TestStepCreateResourceGroupShouldTakeStepArgumentsFromStateBag(t *testing.T } } +func TestStepCreateResourceGroupMarkShouldFailIfTryingExistingButDoesntExist(t *testing.T) { + var testSubject = &StepCreateResourceGroup{ + create: func(context.Context, string, string, map[string]*string) error { + return fmt.Errorf("!! Unit Test FAIL !!") + }, + say: func(message string) {}, + error: func(e error) {}, + exists: func(context.Context, string) (bool, error) { return false, nil }, + } + + stateBag := createTestExistingStateBagStepCreateResourceGroup() + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} + +func TestStepCreateResourceGroupMarkShouldFailIfTryingTempButExist(t *testing.T) { + var testSubject = &StepCreateResourceGroup{ + create: func(context.Context, string, string, map[string]*string) error { + return fmt.Errorf("!! 
Unit Test FAIL !!") + }, + say: func(message string) {}, + error: func(e error) {}, + exists: func(context.Context, string) (bool, error) { return true, nil }, + } + + stateBag := createTestStateBagStepCreateResourceGroup() + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} + func createTestStateBagStepCreateResourceGroup() multistep.StateBag { stateBag := new(multistep.BasicStateBag) stateBag.Put(constants.ArmLocation, "Unit Test: Location") stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName") + stateBag.Put(constants.ArmIsExistingResourceGroup, false) value := "Unit Test: Tags" tags := map[string]*string{ "tag01": &value, } - stateBag.Put(constants.ArmTags, &tags) - + stateBag.Put(constants.ArmTags, tags) + return stateBag +} + +func createTestExistingStateBagStepCreateResourceGroup() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put(constants.ArmLocation, "Unit Test: Location") + stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName") + stateBag.Put(constants.ArmIsExistingResourceGroup, true) + + value := "Unit Test: Tags" + tags := map[string]*string{ + "tag01": &value, + } + + stateBag.Put(constants.ArmTags, tags) return stateBag } diff --git a/builder/azure/arm/step_delete_additional_disks.go b/builder/azure/arm/step_delete_additional_disks.go new file mode 100644 index 000000000..95b2edb4a --- /dev/null +++ b/builder/azure/arm/step_delete_additional_disks.go @@ -0,0 +1,108 @@ +package arm + +import ( + "context" + "errors" + "fmt" + "net/url" + "strings" + + "github.com/hashicorp/packer/builder/azure/common/constants" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepDeleteAdditionalDisk struct { + client *AzureClient + delete func(string, string) error + deleteManaged func(context.Context, string, string) error + say func(message string) + error func(e error) +} + +func NewStepDeleteAdditionalDisks(client *AzureClient, ui packer.Ui) *StepDeleteAdditionalDisk { + var step = &StepDeleteAdditionalDisk{ + client: client, + say: func(message string) { ui.Say(message) }, + error: func(e error) { ui.Error(e.Error()) }, + } + + step.delete = step.deleteBlob + step.deleteManaged = step.deleteManagedDisk + return step +} + +func (s *StepDeleteAdditionalDisk) deleteBlob(storageContainerName string, blobName string) error { + blob := s.client.BlobStorageClient.GetContainerReference(storageContainerName).GetBlobReference(blobName) + err := blob.Delete(nil) + + if err != nil { + s.say(s.client.LastError.Error()) + } + return err +} + +func (s *StepDeleteAdditionalDisk) deleteManagedDisk(ctx context.Context, resourceGroupName string, diskName string) error { + xs := strings.Split(diskName, "/") + diskName = xs[len(xs)-1] + f, err := s.client.DisksClient.Delete(ctx, resourceGroupName, diskName) + if err == nil { + err = f.WaitForCompletion(ctx, s.client.DisksClient.Client) + } + return err +} + +func (s *StepDeleteAdditionalDisk) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { + s.say("Deleting the temporary Additional disk ...") + + var dataDisks = state.Get(constants.ArmAdditionalDiskVhds).([]string) + var isManagedDisk = 
state.Get(constants.ArmIsManagedImage).(bool) + var isExistingResourceGroup = state.Get(constants.ArmIsExistingResourceGroup).(bool) + var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) + + if dataDisks == nil { + s.say(fmt.Sprintf(" -> No Additional Disks specified")) + return multistep.ActionContinue + } + + if isManagedDisk && !isExistingResourceGroup { + s.say(fmt.Sprintf(" -> Additional Disk : skipping, managed disk was used...")) + return multistep.ActionContinue + } + + for i, additionaldisk := range dataDisks { + s.say(fmt.Sprintf(" -> Additional Disk %d: '%s'", i+1, additionaldisk)) + var err error + if isManagedDisk { + err = s.deleteManaged(ctx, resourceGroupName, additionaldisk) + if err != nil { + s.say("Failed to delete the managed Additional Disk!") + return processStepResult(err, s.error, state) + } + } else { + u, err := url.Parse(additionaldisk) + if err != nil { + s.say("Failed to parse the Additional Disk's VHD URI!") + return processStepResult(err, s.error, state) + } + + xs := strings.Split(u.Path, "/") + if len(xs) < 3 { + err = errors.New("Failed to parse Additional Disk's VHD URI!") + } else { + var storageAccountName = xs[1] + var blobName = strings.Join(xs[2:], "/") + + err = s.delete(storageAccountName, blobName) + } + if err != nil { + return processStepResult(err, s.error, state) + } + } + } + return multistep.ActionContinue +} + +func (*StepDeleteAdditionalDisk) Cleanup(multistep.StateBag) { +} diff --git a/builder/azure/arm/step_delete_additional_disks_test.go b/builder/azure/arm/step_delete_additional_disks_test.go new file mode 100644 index 000000000..3d249fd0d --- /dev/null +++ b/builder/azure/arm/step_delete_additional_disks_test.go @@ -0,0 +1,227 @@ +package arm + +import ( + "context" + "errors" + "fmt" + "testing" + + "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepDeleteAdditionalDiskShouldFailIfGetFails(t *testing.T) { + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(string, string) error { return fmt.Errorf("!! 
Unit Test FAIL !!") }, + deleteManaged: func(context.Context, string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/images/pkrvm_os.vhd"}) + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} + +func TestStepDeleteAdditionalDiskShouldPassIfGetPasses(t *testing.T) { + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/images/pkrvm_os.vhd"}) + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == true { + t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error) + } +} + +func TestStepDeleteAdditionalDiskShouldTakeStepArgumentsFromStateBag(t *testing.T) { + var actualStorageContainerName string + var actualBlobName string + + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(storageContainerName string, blobName string) error { + actualStorageContainerName = storageContainerName + actualBlobName = blobName + return nil + }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/images/pkrvm_os.vhd"}) + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if actualStorageContainerName != "images" { + t.Fatalf("Expected the storage container name to be 'images', but found '%s'.", actualStorageContainerName) + } + + if actualBlobName != "pkrvm_os.vhd" { + t.Fatalf("Expected the blob name to be 'pkrvm_os.vhd', but found '%s'.", actualBlobName) + } +} + +func TestStepDeleteAdditionalDiskShouldHandleComplexStorageContainerNames(t *testing.T) { + var actualStorageContainerName string + var actualBlobName string + + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(storageContainerName string, blobName string) error { + actualStorageContainerName = storageContainerName + actualBlobName = blobName + return nil + }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://storage.blob.core.windows.net/abc/def/pkrvm_os.vhd"}) + testSubject.Run(context.Background(), stateBag) + + if actualStorageContainerName != "abc" { + t.Fatalf("Expected the storage container name to be 'abc', but found '%s'.", actualStorageContainerName) + } + + if actualBlobName != "def/pkrvm_os.vhd" { + t.Fatalf("Expected the blob name to be 'def/pkrvm_os.vhd', but found '%s'.", actualBlobName) + } +} + +func TestStepDeleteAdditionalDiskShouldFailIfVHDNameCannotBeURLParsed(t *testing.T) { + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, 
+ error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return nil }, + } + + // Invalid URL per https://golang.org/src/net/url/url_test.go + stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"http://[fe80::1%en0]/"}) + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%v'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to not stateBag['%s'], but it was.", constants.Error) + } +} +func TestStepDeleteAdditionalDiskShouldFailIfVHDNameIsTooShort(t *testing.T) { + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return nil }, + } + + stateBag := DeleteTestStateBagStepDeleteAdditionalDisk([]string{"storage.blob.core.windows.net/abc"}) + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to not stateBag['%s'], but it was.", constants.Error) + } +} + +func TestStepDeleteAdditionalDiskShouldPassIfManagedDiskInTempResourceGroup(t *testing.T) { + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := new(multistep.BasicStateBag) + stateBag.Put(constants.ArmAdditionalDiskVhds, []string{"subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk"}) + stateBag.Put(constants.ArmIsManagedImage, true) + stateBag.Put(constants.ArmIsExistingResourceGroup, false) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == true { + t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error) + } +} + +func TestStepDeleteAdditionalDiskShouldFailIfManagedDiskInExistingResourceGroupFailsToDelete(t *testing.T) { + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return errors.New("UNIT TEST FAIL!") }, + } + + stateBag := new(multistep.BasicStateBag) + stateBag.Put(constants.ArmAdditionalDiskVhds, []string{"subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk"}) + stateBag.Put(constants.ArmIsManagedImage, true) + stateBag.Put(constants.ArmIsExistingResourceGroup, true) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to not stateBag['%s'], but it was.", constants.Error) + } +} + +func TestStepDeleteAdditionalDiskShouldFailIfManagedDiskInExistingResourceGroupIsDeleted(t 
*testing.T) { + var testSubject = &StepDeleteAdditionalDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return nil }, + } + + stateBag := new(multistep.BasicStateBag) + stateBag.Put(constants.ArmAdditionalDiskVhds, []string{"subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk"}) + stateBag.Put(constants.ArmIsManagedImage, true) + stateBag.Put(constants.ArmIsExistingResourceGroup, true) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == true { + t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error) + } +} + +func DeleteTestStateBagStepDeleteAdditionalDisk(osDiskVhds []string) multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + stateBag.Put(constants.ArmAdditionalDiskVhds, osDiskVhds) + stateBag.Put(constants.ArmIsManagedImage, false) + stateBag.Put(constants.ArmIsExistingResourceGroup, false) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") + + return stateBag +} diff --git a/builder/azure/arm/step_delete_os_disk.go b/builder/azure/arm/step_delete_os_disk.go index 1dd28ec85..ad525591b 100644 --- a/builder/azure/arm/step_delete_os_disk.go +++ b/builder/azure/arm/step_delete_os_disk.go @@ -1,21 +1,24 @@ package arm import ( + "context" + "errors" "fmt" "net/url" "strings" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepDeleteOSDisk struct { - client *AzureClient - delete func(string, string) error - say func(message string) - error func(e error) + client *AzureClient + delete func(string, string) error + deleteManaged func(context.Context, string, string) error + say func(message string) + error func(e error) } func NewStepDeleteOSDisk(client *AzureClient, ui packer.Ui) *StepDeleteOSDisk { @@ -26,6 +29,7 @@ func NewStepDeleteOSDisk(client *AzureClient, ui packer.Ui) *StepDeleteOSDisk { } step.delete = step.deleteBlob + step.deleteManaged = step.deleteManagedDisk return step } @@ -39,31 +43,55 @@ func (s *StepDeleteOSDisk) deleteBlob(storageContainerName string, blobName stri return err } -func (s *StepDeleteOSDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDeleteOSDisk) deleteManagedDisk(ctx context.Context, resourceGroupName string, imageName string) error { + xs := strings.Split(imageName, "/") + diskName := xs[len(xs)-1] + f, err := s.client.DisksClient.Delete(ctx, resourceGroupName, diskName) + if err == nil { + err = f.WaitForCompletion(ctx, s.client.DisksClient.Client) + } + return err +} + +func (s *StepDeleteOSDisk) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Deleting the temporary OS disk ...") var osDisk = state.Get(constants.ArmOSDiskVhd).(string) var isManagedDisk = state.Get(constants.ArmIsManagedImage).(bool) + var isExistingResourceGroup = state.Get(constants.ArmIsExistingResourceGroup).(bool) + var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) - if isManagedDisk { + if isManagedDisk && !isExistingResourceGroup { s.say(fmt.Sprintf(" -> OS Disk : skipping, 
managed disk was used...")) return multistep.ActionContinue } s.say(fmt.Sprintf(" -> OS Disk : '%s'", osDisk)) + var err error + if isManagedDisk { + err = s.deleteManaged(ctx, resourceGroupName, osDisk) + if err != nil { + s.say("Failed to delete the managed OS Disk!") + return processStepResult(err, s.error, state) + } + return multistep.ActionContinue + } u, err := url.Parse(osDisk) if err != nil { s.say("Failed to parse the OS Disk's VHD URI!") - return multistep.ActionHalt + return processStepResult(err, s.error, state) } xs := strings.Split(u.Path, "/") + if len(xs) < 3 { + err = errors.New("Failed to parse OS Disk's VHD URI!") + } else { + var storageAccountName = xs[1] + var blobName = strings.Join(xs[2:], "/") - var storageAccountName = xs[1] - var blobName = strings.Join(xs[2:], "/") - - err = s.delete(storageAccountName, blobName) + err = s.delete(storageAccountName, blobName) + } return processStepResult(err, s.error, state) } diff --git a/builder/azure/arm/step_delete_os_disk_test.go b/builder/azure/arm/step_delete_os_disk_test.go index 2724ee611..e8b0d09ca 100644 --- a/builder/azure/arm/step_delete_os_disk_test.go +++ b/builder/azure/arm/step_delete_os_disk_test.go @@ -1,23 +1,26 @@ package arm import ( + "context" + "errors" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepDeleteOSDiskShouldFailIfGetFails(t *testing.T) { var testSubject = &StepDeleteOSDisk{ - delete: func(string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, - say: func(message string) {}, - error: func(e error) {}, + delete: func(string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, + deleteManaged: func(context.Context, string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, } stateBag := DeleteTestStateBagStepDeleteOSDisk("http://storage.blob.core.windows.net/images/pkrvm_os.vhd") - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -36,7 +39,7 @@ func TestStepDeleteOSDiskShouldPassIfGetPasses(t *testing.T) { stateBag := DeleteTestStateBagStepDeleteOSDisk("http://storage.blob.core.windows.net/images/pkrvm_os.vhd") - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -61,7 +64,7 @@ func TestStepDeleteOSDiskShouldTakeStepArgumentsFromStateBag(t *testing.T) { } stateBag := DeleteTestStateBagStepDeleteOSDisk("http://storage.blob.core.windows.net/images/pkrvm_os.vhd") - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) @@ -91,7 +94,7 @@ func TestStepDeleteOSDiskShouldHandleComplexStorageContainerNames(t *testing.T) } stateBag := DeleteTestStateBagStepDeleteOSDisk("http://storage.blob.core.windows.net/abc/def/pkrvm_os.vhd") - testSubject.Run(stateBag) + testSubject.Run(context.Background(), stateBag) if actualStorageContainerName != "abc" { t.Fatalf("Expected the storage container name to be 'abc/def', but found '%s'.", actualStorageContainerName) @@ -102,10 +105,123 @@ func 
TestStepDeleteOSDiskShouldHandleComplexStorageContainerNames(t *testing.T) } } +func TestStepDeleteOSDiskShouldFailIfVHDNameCannotBeURLParsed(t *testing.T) { + var testSubject = &StepDeleteOSDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return nil }, + } + + // Invalid URL per https://golang.org/src/net/url/url_test.go + stateBag := DeleteTestStateBagStepDeleteOSDisk("http://[fe80::1%en0]/") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%v'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} +func TestStepDeleteOSDiskShouldFailIfVHDNameIsTooShort(t *testing.T) { + var testSubject = &StepDeleteOSDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return nil }, + } + + stateBag := DeleteTestStateBagStepDeleteOSDisk("storage.blob.core.windows.net/abc") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} + +func TestStepDeleteOSDiskShouldPassIfManagedDiskInTempResourceGroup(t *testing.T) { + var testSubject = &StepDeleteOSDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := new(multistep.BasicStateBag) + stateBag.Put(constants.ArmOSDiskVhd, "subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk") + stateBag.Put(constants.ArmIsManagedImage, true) + stateBag.Put(constants.ArmIsExistingResourceGroup, false) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == true { + t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error) + } +} + +func TestStepDeleteOSDiskShouldFailIfManagedDiskInExistingResourceGroupFailsToDelete(t *testing.T) { + var testSubject = &StepDeleteOSDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return errors.New("UNIT TEST FAIL!") }, + } + + stateBag := new(multistep.BasicStateBag) + stateBag.Put(constants.ArmOSDiskVhd, "subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk") + stateBag.Put(constants.ArmIsManagedImage, true) + stateBag.Put(constants.ArmIsExistingResourceGroup, true) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step 
to set stateBag['%s'], but it was not.", constants.Error) + } +} + +func TestStepDeleteOSDiskShouldPassIfManagedDiskInExistingResourceGroupIsDeleted(t *testing.T) { + var testSubject = &StepDeleteOSDisk{ + delete: func(string, string) error { return nil }, + say: func(message string) {}, + error: func(e error) {}, + deleteManaged: func(context.Context, string, string) error { return nil }, + } + + stateBag := new(multistep.BasicStateBag) + stateBag.Put(constants.ArmOSDiskVhd, "subscriptions/123-456-789/resourceGroups/existingresourcegroup/providers/Microsoft.Compute/disks/osdisk") + stateBag.Put(constants.ArmIsManagedImage, true) + stateBag.Put(constants.ArmIsExistingResourceGroup, true) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == true { + t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error) + } +} + func DeleteTestStateBagStepDeleteOSDisk(osDiskVhd string) multistep.StateBag { stateBag := new(multistep.BasicStateBag) stateBag.Put(constants.ArmOSDiskVhd, osDiskVhd) stateBag.Put(constants.ArmIsManagedImage, false) + stateBag.Put(constants.ArmIsExistingResourceGroup, false) + stateBag.Put(constants.ArmResourceGroupName, "testgroup") return stateBag } diff --git a/builder/azure/arm/step_delete_resource_group.go b/builder/azure/arm/step_delete_resource_group.go index 2606b9fc8..c5b92fcf1 100644 --- a/builder/azure/arm/step_delete_resource_group.go +++ b/builder/azure/arm/step_delete_resource_group.go @@ -1,17 +1,21 @@ package arm import ( + "context" "fmt" - "github.com/hashicorp/packer/builder/azure/common" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" +) + +const ( + maxResourcesToDelete = 50 ) type StepDeleteResourceGroup struct { client *AzureClient - delete func(resourceGroupName string, cancelCh <-chan struct{}) error + delete func(ctx context.Context, state multistep.StateBag, resourceGroupName string) error say func(message string) error func(e error) } @@ -27,30 +31,107 @@ func NewStepDeleteResourceGroup(client *AzureClient, ui packer.Ui) *StepDeleteRe return step } -func (s *StepDeleteResourceGroup) deleteResourceGroup(resourceGroupName string, cancelCh <-chan struct{}) error { - _, errChan := s.client.GroupsClient.Delete(resourceGroupName, cancelCh) +func (s *StepDeleteResourceGroup) deleteResourceGroup(ctx context.Context, state multistep.StateBag, resourceGroupName string) error { + var err error + if state.Get(constants.ArmIsExistingResourceGroup).(bool) { + s.say("\nThe resource group was not created by Packer, only deleting individual resources ...") + var deploymentName = state.Get(constants.ArmDeploymentName).(string) + err = s.deleteDeploymentResources(ctx, deploymentName, resourceGroupName) + if err != nil { + return err + } - err := <-errChan - if err != nil { - s.say(s.client.LastError.Error()) + if keyVaultDeploymentName, ok := state.GetOk(constants.ArmKeyVaultDeploymentName); ok { + err = s.deleteDeploymentResources(ctx, keyVaultDeploymentName.(string), resourceGroupName) + if err != nil { + return err + } + } + + return nil + } else { + s.say("\nThe resource group was created by Packer, deleting ...") + f, err := s.client.GroupsClient.Delete(ctx, 
resourceGroupName) + if err == nil { + if state.Get(constants.ArmAsyncResourceGroupDelete).(bool) { + // No need to wait for the delete to complete if the request is Accepted + s.say(fmt.Sprintf("\nResource Group is being deleted, not waiting for deletion due to config. Resource Group Name '%s'", resourceGroupName)) + } else { + f.WaitForCompletion(ctx, s.client.GroupsClient.Client) + } + + } + + if err != nil { + s.say(s.client.LastError.Error()) + } + return err } - return err } -func (s *StepDeleteResourceGroup) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDeleteResourceGroup) deleteDeploymentResources(ctx context.Context, deploymentName, resourceGroupName string) error { + maxResources := int32(maxResourcesToDelete) + + deploymentOperations, err := s.client.DeploymentOperationsClient.ListComplete(ctx, resourceGroupName, deploymentName, &maxResources) + if err != nil { + s.reportIfError(err, resourceGroupName) + return err + } + + for deploymentOperations.NotDone() { + deploymentOperation := deploymentOperations.Value() + // Sometimes an empty operation is added to the list by Azure + if deploymentOperation.Properties.TargetResource == nil { + deploymentOperations.Next() + continue + } + + resourceName := *deploymentOperation.Properties.TargetResource.ResourceName + resourceType := *deploymentOperation.Properties.TargetResource.ResourceType + + s.say(fmt.Sprintf(" -> %s : '%s'", + resourceType, + resourceName)) + + err := deleteResource(ctx, s.client, + resourceType, + resourceName, + resourceGroupName) + s.reportIfError(err, resourceName) + if err = deploymentOperations.Next(); err != nil { + return err + } + } + + return nil +} + +func (s *StepDeleteResourceGroup) reportIfError(err error, resourceName string) { + if err != nil { + s.say(fmt.Sprintf("Error deleting resource. Please delete manually.\n\n"+ + "Name: %s\n"+ + "Error: %s", resourceName, err.Error())) + s.error(err) + } +} + +func (s *StepDeleteResourceGroup) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Deleting resource group ...") var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) - result := common.StartInterruptibleTask( - func() bool { return common.IsStateCancelled(state) }, - func(cancelCh <-chan struct{}) error { return s.delete(resourceGroupName, cancelCh) }) + err := s.delete(ctx, state, resourceGroupName) + if err != nil { + state.Put(constants.Error, err) + s.error(err) + + return multistep.ActionHalt + } - stepAction := processInterruptibleResult(result, s.error, state) state.Put(constants.ArmIsResourceGroupCreated, false) - return stepAction + return multistep.ActionContinue } func (*StepDeleteResourceGroup) Cleanup(multistep.StateBag) { diff --git a/builder/azure/arm/step_delete_resource_group_test.go b/builder/azure/arm/step_delete_resource_group_test.go index 25ab14e4d..e67fa15d5 100644 --- a/builder/azure/arm/step_delete_resource_group_test.go +++ b/builder/azure/arm/step_delete_resource_group_test.go @@ -1,23 +1,24 @@ package arm import ( + "context" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepDeleteResourceGroupShouldFailIfDeleteFails(t *testing.T) { var testSubject = &StepDeleteResourceGroup{ - delete: func(string, <-chan struct{}) error { return fmt.Errorf("!! 
Unit Test FAIL !!") }, + delete: func(context.Context, multistep.StateBag, string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, say: func(message string) {}, error: func(e error) {}, } stateBag := DeleteTestStateBagStepDeleteResourceGroup() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -29,14 +30,14 @@ func TestStepDeleteResourceGroupShouldFailIfDeleteFails(t *testing.T) { func TestStepDeleteResourceGroupShouldPassIfDeletePasses(t *testing.T) { var testSubject = &StepDeleteResourceGroup{ - delete: func(string, <-chan struct{}) error { return nil }, + delete: func(context.Context, multistep.StateBag, string) error { return nil }, say: func(message string) {}, error: func(e error) {}, } stateBag := DeleteTestStateBagStepDeleteResourceGroup() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -48,7 +49,7 @@ func TestStepDeleteResourceGroupShouldPassIfDeletePasses(t *testing.T) { func TestStepDeleteResourceGroupShouldDeleteStateBagArmResourceGroupCreated(t *testing.T) { var testSubject = &StepDeleteResourceGroup{ - delete: func(resourceGroupName string, cancelCh <-chan struct{}) error { + delete: func(context.Context, multistep.StateBag, string) error { return nil }, say: func(message string) {}, @@ -56,7 +57,7 @@ func TestStepDeleteResourceGroupShouldDeleteStateBagArmResourceGroupCreated(t *t } stateBag := DeleteTestStateBagStepDeleteResourceGroup() - testSubject.Run(stateBag) + testSubject.Run(context.Background(), stateBag) value, ok := stateBag.GetOk(constants.ArmIsResourceGroupCreated) if !ok { diff --git a/builder/azure/arm/step_deploy_template.go b/builder/azure/arm/step_deploy_template.go index 8d73402b7..83590afd7 100644 --- a/builder/azure/arm/step_deploy_template.go +++ b/builder/azure/arm/step_deploy_template.go @@ -1,69 +1,221 @@ package arm import ( + "context" + "errors" "fmt" + "net/url" + "strings" - "github.com/hashicorp/packer/builder/azure/common" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepDeployTemplate struct { - client *AzureClient - deploy func(resourceGroupName string, deploymentName string, cancelCh <-chan struct{}) error - say func(message string) - error func(e error) - config *Config - factory templateFactoryFunc + client *AzureClient + deploy func(ctx context.Context, resourceGroupName string, deploymentName string) error + delete func(ctx context.Context, client *AzureClient, resourceType string, resourceName string, resourceGroupName string) error + disk func(ctx context.Context, resourceGroupName string, computeName string) (string, string, error) + deleteDisk func(ctx context.Context, imageType string, imageName string, resourceGroupName string) error + say func(message string) + error func(e error) + config *Config + factory templateFactoryFunc + name string } -func NewStepDeployTemplate(client *AzureClient, ui packer.Ui, config *Config, factory templateFactoryFunc) *StepDeployTemplate { +func NewStepDeployTemplate(client *AzureClient, ui packer.Ui, config *Config, deploymentName string, factory templateFactoryFunc) *StepDeployTemplate { var step = 
&StepDeployTemplate{ client: client, say: func(message string) { ui.Say(message) }, error: func(e error) { ui.Error(e.Error()) }, config: config, factory: factory, + name: deploymentName, } step.deploy = step.deployTemplate + step.delete = deleteResource + step.disk = step.getImageDetails + step.deleteDisk = step.deleteImage return step } -func (s *StepDeployTemplate) deployTemplate(resourceGroupName string, deploymentName string, cancelCh <-chan struct{}) error { +func (s *StepDeployTemplate) deployTemplate(ctx context.Context, resourceGroupName string, deploymentName string) error { deployment, err := s.factory(s.config) if err != nil { return err } - _, errChan := s.client.DeploymentsClient.CreateOrUpdate(resourceGroupName, deploymentName, *deployment, cancelCh) - - err = <-errChan + f, err := s.client.DeploymentsClient.CreateOrUpdate(ctx, resourceGroupName, deploymentName, *deployment) + if err == nil { + err = f.WaitForCompletion(ctx, s.client.DeploymentsClient.Client) + } if err != nil { s.say(s.client.LastError.Error()) } return err } -func (s *StepDeployTemplate) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDeployTemplate) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Deploying deployment template ...") var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) - var deploymentName = state.Get(constants.ArmDeploymentName).(string) s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) - s.say(fmt.Sprintf(" -> DeploymentName : '%s'", deploymentName)) + s.say(fmt.Sprintf(" -> DeploymentName : '%s'", s.name)) - result := common.StartInterruptibleTask( - func() bool { return common.IsStateCancelled(state) }, - func(cancelCh <-chan struct{}) error { - return s.deploy(resourceGroupName, deploymentName, cancelCh) - }, - ) - - return processInterruptibleResult(result, s.error, state) + return processStepResult( + s.deploy(ctx, resourceGroupName, s.name), + s.error, state) } -func (*StepDeployTemplate) Cleanup(multistep.StateBag) { +func (s *StepDeployTemplate) getImageDetails(ctx context.Context, resourceGroupName string, computeName string) (string, string, error) { + //We can't depend on constants.ArmOSDiskVhd being set + var imageName string + var imageType string + vm, err := s.client.VirtualMachinesClient.Get(ctx, resourceGroupName, computeName, "") + if err != nil { + return imageName, imageType, err + } else { + if vm.StorageProfile.OsDisk.Vhd != nil { + imageType = "image" + imageName = *vm.StorageProfile.OsDisk.Vhd.URI + } else { + imageType = "Microsoft.Compute/disks" + imageName = *vm.StorageProfile.OsDisk.ManagedDisk.ID + } + } + return imageType, imageName, nil +} + +//TODO(paulmey): move to helpers file +func deleteResource(ctx context.Context, client *AzureClient, resourceType string, resourceName string, resourceGroupName string) error { + switch resourceType { + case "Microsoft.Compute/virtualMachines": + f, err := client.VirtualMachinesClient.Delete(ctx, resourceGroupName, resourceName) + if err == nil { + err = f.WaitForCompletion(ctx, client.VirtualMachinesClient.Client) + } + return err + case "Microsoft.KeyVault/vaults": + // TODO(paulmey): not sure why VaultClient doesn't do cancellation + _, err := client.VaultClientDelete.Delete(resourceGroupName, resourceName) + return err + case "Microsoft.Network/networkInterfaces": + f, err := client.InterfacesClient.Delete(ctx, resourceGroupName, resourceName) + if err == nil { + err = f.WaitForCompletion(ctx, 
client.InterfacesClient.Client) + } + return err + case "Microsoft.Network/virtualNetworks": + f, err := client.VirtualNetworksClient.Delete(ctx, resourceGroupName, resourceName) + if err == nil { + err = f.WaitForCompletion(ctx, client.VirtualNetworksClient.Client) + } + return err + case "Microsoft.Network/publicIPAddresses": + f, err := client.PublicIPAddressesClient.Delete(ctx, resourceGroupName, resourceName) + if err == nil { + err = f.WaitForCompletion(ctx, client.PublicIPAddressesClient.Client) + } + return err + } + return nil +} + +func (s *StepDeployTemplate) deleteImage(ctx context.Context, imageType string, imageName string, resourceGroupName string) error { + // Managed disk + if imageType == "Microsoft.Compute/disks" { + xs := strings.Split(imageName, "/") + diskName := xs[len(xs)-1] + f, err := s.client.DisksClient.Delete(ctx, resourceGroupName, diskName) + if err == nil { + err = f.WaitForCompletion(ctx, s.client.DisksClient.Client) + } + return err + } + // VHD image + u, err := url.Parse(imageName) + if err != nil { + return err + } + xs := strings.Split(u.Path, "/") + if len(xs) < 3 { + return errors.New("Unable to parse path of image " + imageName) + } + var storageAccountName = xs[1] + var blobName = strings.Join(xs[2:], "/") + + blob := s.client.BlobStorageClient.GetContainerReference(storageAccountName).GetBlobReference(blobName) + err = blob.Delete(nil) + return err +} + +func (s *StepDeployTemplate) Cleanup(state multistep.StateBag) { + //Only clean up if this was an existing resource group and the resource group + //is marked as created + var existingResourceGroup = state.Get(constants.ArmIsExistingResourceGroup).(bool) + var resourceGroupCreated = state.Get(constants.ArmIsResourceGroupCreated).(bool) + if !existingResourceGroup || !resourceGroupCreated { + return + } + ui := state.Get("ui").(packer.Ui) + ui.Say("\nThe resource group was not created by Packer, deleting individual resources ...") + + var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) + var computeName = state.Get(constants.ArmComputeName).(string) + var deploymentName = s.name + imageType, imageName, err := s.disk(context.TODO(), resourceGroupName, computeName) + if err != nil { + ui.Error("Could not retrieve OS Image details") + } + + ui.Say(" -> Deployment: " + deploymentName) + if deploymentName != "" { + maxResources := int32(50) + deploymentOperations, err := s.client.DeploymentOperationsClient.ListComplete(context.TODO(), resourceGroupName, deploymentName, &maxResources) + if err != nil { + ui.Error(fmt.Sprintf("Error deleting resources. Please delete them manually.\n\n"+ + "Name: %s\n"+ + "Error: %s", resourceGroupName, err)) + } + for deploymentOperations.NotDone() { + deploymentOperation := deploymentOperations.Value() + // Sometimes an empty operation is added to the list by Azure + if deploymentOperation.Properties.TargetResource == nil { + deploymentOperations.Next() + continue + } + ui.Say(fmt.Sprintf(" -> %s : '%s'", + *deploymentOperation.Properties.TargetResource.ResourceType, + *deploymentOperation.Properties.TargetResource.ResourceName)) + err = s.delete(context.TODO(), s.client, + *deploymentOperation.Properties.TargetResource.ResourceType, + *deploymentOperation.Properties.TargetResource.ResourceName, + resourceGroupName) + if err != nil { + ui.Error(fmt.Sprintf("Error deleting resource. 
Please delete manually.\n\n"+ + "Name: %s\n"+ + "Error: %s", *deploymentOperation.Properties.TargetResource.ResourceName, err)) + } + if err = deploymentOperations.Next(); err != nil { + ui.Error(fmt.Sprintf("Error deleting resources. Please delete them manually.\n\n"+ + "Name: %s\n"+ + "Error: %s", resourceGroupName, err)) + break + } + } + + // The disk is not defined as an operation in the template so has to be + // deleted separately + ui.Say(fmt.Sprintf(" -> %s : '%s'", imageType, imageName)) + err = s.deleteDisk(context.TODO(), imageType, imageName, resourceGroupName) + if err != nil { + ui.Error(fmt.Sprintf("Error deleting resource. Please delete manually.\n\n"+ + "Name: %s\n"+ + "Error: %s", imageName, err)) + } + } } diff --git a/builder/azure/arm/step_deploy_template_test.go b/builder/azure/arm/step_deploy_template_test.go index 3a1d0c0bc..d9dce1d60 100644 --- a/builder/azure/arm/step_deploy_template_test.go +++ b/builder/azure/arm/step_deploy_template_test.go @@ -1,16 +1,17 @@ package arm import ( + "context" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepDeployTemplateShouldFailIfDeployFails(t *testing.T) { var testSubject = &StepDeployTemplate{ - deploy: func(string, string, <-chan struct{}) error { + deploy: func(context.Context, string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, say: func(message string) {}, @@ -19,7 +20,7 @@ func TestStepDeployTemplateShouldFailIfDeployFails(t *testing.T) { stateBag := createTestStateBagStepDeployTemplate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -31,14 +32,14 @@ func TestStepDeployTemplateShouldFailIfDeployFails(t *testing.T) { func TestStepDeployTemplateShouldPassIfDeployPasses(t *testing.T) { var testSubject = &StepDeployTemplate{ - deploy: func(string, string, <-chan struct{}) error { return nil }, + deploy: func(context.Context, string, string) error { return nil }, say: func(message string) {}, error: func(e error) {}, } stateBag := createTestStateBagStepDeployTemplate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -53,7 +54,7 @@ func TestStepDeployTemplateShouldTakeStepArgumentsFromStateBag(t *testing.T) { var actualDeploymentName string var testSubject = &StepDeployTemplate{ - deploy: func(resourceGroupName string, deploymentName string, cancelCh <-chan struct{}) error { + deploy: func(_ context.Context, resourceGroupName string, deploymentName string) error { actualResourceGroupName = resourceGroupName actualDeploymentName = deploymentName @@ -61,19 +62,19 @@ func TestStepDeployTemplateShouldTakeStepArgumentsFromStateBag(t *testing.T) { }, say: func(message string) {}, error: func(e error) {}, + name: "--deployment-name--", } stateBag := createTestStateBagStepValidateTemplate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } - var expectedDeploymentName = stateBag.Get(constants.ArmDeploymentName).(string) var expectedResourceGroupName = 
stateBag.Get(constants.ArmResourceGroupName).(string) - if actualDeploymentName != expectedDeploymentName { + if actualDeploymentName != "--deployment-name--" { t.Fatal("Expected StepValidateTemplate to source 'constants.ArmDeploymentName' from the state bag, but it did not.") } @@ -82,6 +83,31 @@ func TestStepDeployTemplateShouldTakeStepArgumentsFromStateBag(t *testing.T) { } } +func TestStepDeployTemplateDeleteImageShouldFailWhenImageUrlCannotBeParsed(t *testing.T) { + var testSubject = &StepDeployTemplate{ + say: func(message string) {}, + error: func(e error) {}, + name: "--deployment-name--", + } + // Invalid URL per https://golang.org/src/net/url/url_test.go + err := testSubject.deleteImage(context.TODO(), "image", "http://[fe80::1%en0]/", "Unit Test: ResourceGroupName") + if err == nil { + t.Fatal("Expected a failure because of the failed image name") + } +} + +func TestStepDeployTemplateDeleteImageShouldFailWithInvalidImage(t *testing.T) { + var testSubject = &StepDeployTemplate{ + say: func(message string) {}, + error: func(e error) {}, + name: "--deployment-name--", + } + err := testSubject.deleteImage(context.TODO(), "image", "storage.blob.core.windows.net/abc", "Unit Test: ResourceGroupName") + if err == nil { + t.Fatal("Expected a failure because of the failed image name") + } +} + func createTestStateBagStepDeployTemplate() multistep.StateBag { stateBag := new(multistep.BasicStateBag) diff --git a/builder/azure/arm/step_get_additional_disks.go b/builder/azure/arm/step_get_additional_disks.go new file mode 100644 index 000000000..23d16f425 --- /dev/null +++ b/builder/azure/arm/step_get_additional_disks.go @@ -0,0 +1,78 @@ +package arm + +import ( + "context" + "fmt" + + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" + + "github.com/hashicorp/packer/builder/azure/common/constants" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepGetDataDisk struct { + client *AzureClient + query func(ctx context.Context, resourceGroupName string, computeName string) (compute.VirtualMachine, error) + say func(message string) + error func(e error) +} + +func NewStepGetAdditionalDisks(client *AzureClient, ui packer.Ui) *StepGetDataDisk { + var step = &StepGetDataDisk{ + client: client, + say: func(message string) { ui.Say(message) }, + error: func(e error) { ui.Error(e.Error()) }, + } + + step.query = step.queryCompute + return step +} + +func (s *StepGetDataDisk) queryCompute(ctx context.Context, resourceGroupName string, computeName string) (compute.VirtualMachine, error) { + vm, err := s.client.VirtualMachinesClient.Get(ctx, resourceGroupName, computeName, "") + if err != nil { + s.say(s.client.LastError.Error()) + } + return vm, err +} + +func (s *StepGetDataDisk) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { + s.say("Querying the machine's additional disks properties ...") + + var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) + var computeName = state.Get(constants.ArmComputeName).(string) + + s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) + s.say(fmt.Sprintf(" -> ComputeName : '%s'", computeName)) + + vm, err := s.query(ctx, resourceGroupName, computeName) + if err != nil { + state.Put(constants.Error, err) + s.error(err) + + return multistep.ActionHalt + } + + if vm.StorageProfile.DataDisks != nil { + var vhdUri string + additional_disks := make([]string, len(*vm.StorageProfile.DataDisks)) + for i, additionaldisk := range 
*vm.StorageProfile.DataDisks { + if additionaldisk.Vhd != nil { + vhdUri = *additionaldisk.Vhd.URI + s.say(fmt.Sprintf(" -> Additional Disk %d : '%s'", i+1, vhdUri)) + } else { + vhdUri = *additionaldisk.ManagedDisk.ID + s.say(fmt.Sprintf(" -> Managed Additional Disk %d : '%s'", i+1, vhdUri)) + } + additional_disks[i] = vhdUri + } + state.Put(constants.ArmAdditionalDiskVhds, additional_disks) + } + + return multistep.ActionContinue +} + +func (*StepGetDataDisk) Cleanup(multistep.StateBag) { +} diff --git a/builder/azure/arm/step_get_additional_disks_test.go b/builder/azure/arm/step_get_additional_disks_test.go new file mode 100644 index 000000000..f9685bef4 --- /dev/null +++ b/builder/azure/arm/step_get_additional_disks_test.go @@ -0,0 +1,131 @@ +package arm + +import ( + "context" + "fmt" + "testing" + + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" + + "github.com/hashicorp/packer/builder/azure/common/constants" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepGetAdditionalDiskShouldFailIfGetFails(t *testing.T) { + var testSubject = &StepGetDataDisk{ + query: func(context.Context, string, string) (compute.VirtualMachine, error) { + return createVirtualMachineWithDataDisksFromUri("test.vhd"), fmt.Errorf("!! Unit Test FAIL !!") + }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := createTestStateBagStepGetAdditionalDisks() + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == false { + t.Fatalf("Expected the step to set stateBag['%s'], but it was not.", constants.Error) + } +} + +func TestStepGetAdditionalDiskShouldPassIfGetPasses(t *testing.T) { + var testSubject = &StepGetDataDisk{ + query: func(context.Context, string, string) (compute.VirtualMachine, error) { + return createVirtualMachineWithDataDisksFromUri("test.vhd"), nil + }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := createTestStateBagStepGetAdditionalDisks() + + var result = testSubject.Run(context.Background(), stateBag) + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk(constants.Error); ok == true { + t.Fatalf("Expected the step to not set stateBag['%s'], but it was.", constants.Error) + } +} + +func TestStepGetAdditionalDiskShouldTakeValidateArgumentsFromStateBag(t *testing.T) { + var actualResourceGroupName string + var actualComputeName string + + var testSubject = &StepGetDataDisk{ + query: func(ctx context.Context, resourceGroupName string, computeName string) (compute.VirtualMachine, error) { + actualResourceGroupName = resourceGroupName + actualComputeName = computeName + + return createVirtualMachineWithDataDisksFromUri("test.vhd"), nil + }, + say: func(message string) {}, + error: func(e error) {}, + } + + stateBag := createTestStateBagStepGetAdditionalDisks() + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + var expectedComputeName = stateBag.Get(constants.ArmComputeName).(string) + var expectedResourceGroupName = stateBag.Get(constants.ArmResourceGroupName).(string) + + if actualComputeName != expectedComputeName { + t.Fatal("Expected the step to source 
'constants.ArmComputeName' from the state bag, but it did not.") + } + + if actualResourceGroupName != expectedResourceGroupName { + t.Fatal("Expected the step to source 'constants.ArmResourceGroupName' from the state bag, but it did not.") + } + + expectedAdditionalDiskVhds, ok := stateBag.GetOk(constants.ArmAdditionalDiskVhds) + if !ok { + t.Fatalf("Expected the state bag to have a value for '%s', but it did not.", constants.ArmAdditionalDiskVhds) + } + + expectedAdditionalDiskVhd := expectedAdditionalDiskVhds.([]string) + if expectedAdditionalDiskVhd[0] != "test.vhd" { + t.Fatalf("Expected the value of stateBag[%s] to be 'test.vhd', but got '%s'.", constants.ArmAdditionalDiskVhds, expectedAdditionalDiskVhd[0]) + } +} + +func createTestStateBagStepGetAdditionalDisks() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put(constants.ArmComputeName, "Unit Test: ComputeName") + stateBag.Put(constants.ArmResourceGroupName, "Unit Test: ResourceGroupName") + + return stateBag +} + +func createVirtualMachineWithDataDisksFromUri(vhdUri string) compute.VirtualMachine { + vm := compute.VirtualMachine{ + VirtualMachineProperties: &compute.VirtualMachineProperties{ + StorageProfile: &compute.StorageProfile{ + OsDisk: &compute.OSDisk{ + Vhd: &compute.VirtualHardDisk{ + URI: &vhdUri, + }, + }, + DataDisks: &[]compute.DataDisk{ + { + Vhd: &compute.VirtualHardDisk{ + URI: &vhdUri, + }, + }, + }, + }, + }, + } + + return vm +} diff --git a/builder/azure/arm/step_get_certificate.go b/builder/azure/arm/step_get_certificate.go index b9126da36..784826771 100644 --- a/builder/azure/arm/step_get_certificate.go +++ b/builder/azure/arm/step_get_certificate.go @@ -1,12 +1,13 @@ package arm import ( + "context" "fmt" "time" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepGetCertificate struct { @@ -39,7 +40,7 @@ func (s *StepGetCertificate) getCertificateUrl(keyVaultName string, secretName s return *secret.ID, err } -func (s *StepGetCertificate) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepGetCertificate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { s.say("Getting the certificate's URL ...") var keyVaultName = state.Get(constants.ArmKeyVaultName).(string) diff --git a/builder/azure/arm/step_get_certificate_test.go b/builder/azure/arm/step_get_certificate_test.go index 553806c5e..5ce807afe 100644 --- a/builder/azure/arm/step_get_certificate_test.go +++ b/builder/azure/arm/step_get_certificate_test.go @@ -1,11 +1,12 @@ package arm import ( + "context" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepGetCertificateShouldFailIfGetFails(t *testing.T) { @@ -18,7 +19,7 @@ func TestStepGetCertificateShouldFailIfGetFails(t *testing.T) { stateBag := createTestStateBagStepGetCertificate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -38,7 +39,7 @@ func TestStepGetCertificateShouldPassIfGetPasses(t *testing.T) { stateBag := createTestStateBagStepGetCertificate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { 
t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -65,7 +66,7 @@ func TestStepGetCertificateShouldTakeStepArgumentsFromStateBag(t *testing.T) { } stateBag := createTestStateBagStepGetCertificate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) diff --git a/builder/azure/arm/step_get_ip_address.go b/builder/azure/arm/step_get_ip_address.go index 2c8165b0b..ef838b339 100644 --- a/builder/azure/arm/step_get_ip_address.go +++ b/builder/azure/arm/step_get_ip_address.go @@ -1,11 +1,12 @@ package arm import ( + "context" "fmt" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type EndpointType int @@ -27,7 +28,7 @@ var ( type StepGetIPAddress struct { client *AzureClient endpoint EndpointType - get func(resourceGroupName string, ipAddressName string, interfaceName string) (string, error) + get func(ctx context.Context, resourceGroupName string, ipAddressName string, interfaceName string) (string, error) say func(message string) error func(e error) } @@ -52,8 +53,8 @@ func NewStepGetIPAddress(client *AzureClient, ui packer.Ui, endpoint EndpointTyp return step } -func (s *StepGetIPAddress) getPrivateIP(resourceGroupName string, ipAddressName string, interfaceName string) (string, error) { - resp, err := s.client.InterfacesClient.Get(resourceGroupName, interfaceName, "") +func (s *StepGetIPAddress) getPrivateIP(ctx context.Context, resourceGroupName string, ipAddressName string, interfaceName string) (string, error) { + resp, err := s.client.InterfacesClient.Get(ctx, resourceGroupName, interfaceName, "") if err != nil { s.say(s.client.LastError.Error()) return "", err @@ -62,8 +63,8 @@ func (s *StepGetIPAddress) getPrivateIP(resourceGroupName string, ipAddressName return *(*resp.IPConfigurations)[0].PrivateIPAddress, nil } -func (s *StepGetIPAddress) getPublicIP(resourceGroupName string, ipAddressName string, interfaceName string) (string, error) { - resp, err := s.client.PublicIPAddressesClient.Get(resourceGroupName, ipAddressName, "") +func (s *StepGetIPAddress) getPublicIP(ctx context.Context, resourceGroupName string, ipAddressName string, interfaceName string) (string, error) { + resp, err := s.client.PublicIPAddressesClient.Get(ctx, resourceGroupName, ipAddressName, "") if err != nil { return "", err } @@ -71,12 +72,12 @@ func (s *StepGetIPAddress) getPublicIP(resourceGroupName string, ipAddressName s return *resp.IPAddress, nil } -func (s *StepGetIPAddress) getPublicIPInPrivateNetwork(resourceGroupName string, ipAddressName string, interfaceName string) (string, error) { - s.getPrivateIP(resourceGroupName, ipAddressName, interfaceName) - return s.getPublicIP(resourceGroupName, ipAddressName, interfaceName) +func (s *StepGetIPAddress) getPublicIPInPrivateNetwork(ctx context.Context, resourceGroupName string, ipAddressName string, interfaceName string) (string, error) { + s.getPrivateIP(ctx, resourceGroupName, ipAddressName, interfaceName) + return s.getPublicIP(ctx, resourceGroupName, ipAddressName, interfaceName) } -func (s *StepGetIPAddress) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepGetIPAddress) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Getting the VM's IP address ...") var 
resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) @@ -88,7 +89,7 @@ func (s *StepGetIPAddress) Run(state multistep.StateBag) multistep.StepAction { s.say(fmt.Sprintf(" -> NicName : '%s'", nicName)) s.say(fmt.Sprintf(" -> Network Connection : '%s'", EndpointCommunicationText[s.endpoint])) - address, err := s.get(resourceGroupName, ipAddressName, nicName) + address, err := s.get(ctx, resourceGroupName, ipAddressName, nicName) if err != nil { state.Put(constants.Error, err) s.error(err) diff --git a/builder/azure/arm/step_get_ip_address_test.go b/builder/azure/arm/step_get_ip_address_test.go index 1403e9a51..38b71fa5f 100644 --- a/builder/azure/arm/step_get_ip_address_test.go +++ b/builder/azure/arm/step_get_ip_address_test.go @@ -1,11 +1,12 @@ package arm import ( + "context" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepGetIPAddressShouldFailIfGetFails(t *testing.T) { @@ -13,7 +14,9 @@ func TestStepGetIPAddressShouldFailIfGetFails(t *testing.T) { for _, endpoint := range endpoints { var testSubject = &StepGetIPAddress{ - get: func(string, string, string) (string, error) { return "", fmt.Errorf("!! Unit Test FAIL !!") }, + get: func(context.Context, string, string, string) (string, error) { + return "", fmt.Errorf("!! Unit Test FAIL !!") + }, endpoint: endpoint, say: func(message string) {}, error: func(e error) {}, @@ -21,7 +24,7 @@ func TestStepGetIPAddressShouldFailIfGetFails(t *testing.T) { stateBag := createTestStateBagStepGetIPAddress() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -37,7 +40,7 @@ func TestStepGetIPAddressShouldPassIfGetPasses(t *testing.T) { for _, endpoint := range endpoints { var testSubject = &StepGetIPAddress{ - get: func(string, string, string) (string, error) { return "", nil }, + get: func(context.Context, string, string, string) (string, error) { return "", nil }, endpoint: endpoint, say: func(message string) {}, error: func(e error) {}, @@ -45,7 +48,7 @@ func TestStepGetIPAddressShouldPassIfGetPasses(t *testing.T) { stateBag := createTestStateBagStepGetIPAddress() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -64,7 +67,7 @@ func TestStepGetIPAddressShouldTakeStepArgumentsFromStateBag(t *testing.T) { for _, endpoint := range endpoints { var testSubject = &StepGetIPAddress{ - get: func(resourceGroupName string, ipAddressName string, nicName string) (string, error) { + get: func(ctx context.Context, resourceGroupName string, ipAddressName string, nicName string) (string, error) { actualResourceGroupName = resourceGroupName actualIPAddressName = ipAddressName actualNicName = nicName @@ -77,7 +80,7 @@ func TestStepGetIPAddressShouldTakeStepArgumentsFromStateBag(t *testing.T) { } stateBag := createTestStateBagStepGetIPAddress() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) diff --git a/builder/azure/arm/step_get_os_disk.go b/builder/azure/arm/step_get_os_disk.go index 511ab5c51..62337fa2a 
100644 --- a/builder/azure/arm/step_get_os_disk.go +++ b/builder/azure/arm/step_get_os_disk.go @@ -1,19 +1,20 @@ package arm import ( + "context" "fmt" - "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepGetOSDisk struct { client *AzureClient - query func(resourceGroupName string, computeName string) (compute.VirtualMachine, error) + query func(ctx context.Context, resourceGroupName string, computeName string) (compute.VirtualMachine, error) say func(message string) error func(e error) } @@ -29,15 +30,15 @@ func NewStepGetOSDisk(client *AzureClient, ui packer.Ui) *StepGetOSDisk { return step } -func (s *StepGetOSDisk) queryCompute(resourceGroupName string, computeName string) (compute.VirtualMachine, error) { - vm, err := s.client.VirtualMachinesClient.Get(resourceGroupName, computeName, "") +func (s *StepGetOSDisk) queryCompute(ctx context.Context, resourceGroupName string, computeName string) (compute.VirtualMachine, error) { + vm, err := s.client.VirtualMachinesClient.Get(ctx, resourceGroupName, computeName, "") if err != nil { s.say(s.client.LastError.Error()) } return vm, err } -func (s *StepGetOSDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepGetOSDisk) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Querying the machine's properties ...") var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) @@ -46,7 +47,7 @@ func (s *StepGetOSDisk) Run(state multistep.StateBag) multistep.StepAction { s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) s.say(fmt.Sprintf(" -> ComputeName : '%s'", computeName)) - vm, err := s.query(resourceGroupName, computeName) + vm, err := s.query(ctx, resourceGroupName, computeName) if err != nil { state.Put(constants.Error, err) s.error(err) diff --git a/builder/azure/arm/step_get_os_disk_test.go b/builder/azure/arm/step_get_os_disk_test.go index 2b3a4d5d6..f9e633319 100644 --- a/builder/azure/arm/step_get_os_disk_test.go +++ b/builder/azure/arm/step_get_os_disk_test.go @@ -1,19 +1,20 @@ package arm import ( + "context" "fmt" "testing" - "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepGetOSDiskShouldFailIfGetFails(t *testing.T) { var testSubject = &StepGetOSDisk{ - query: func(string, string) (compute.VirtualMachine, error) { + query: func(context.Context, string, string) (compute.VirtualMachine, error) { return createVirtualMachineFromUri("test.vhd"), fmt.Errorf("!! 
Unit Test FAIL !!") }, say: func(message string) {}, @@ -22,7 +23,7 @@ func TestStepGetOSDiskShouldFailIfGetFails(t *testing.T) { stateBag := createTestStateBagStepGetOSDisk() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -34,7 +35,7 @@ func TestStepGetOSDiskShouldFailIfGetFails(t *testing.T) { func TestStepGetOSDiskShouldPassIfGetPasses(t *testing.T) { var testSubject = &StepGetOSDisk{ - query: func(string, string) (compute.VirtualMachine, error) { + query: func(context.Context, string, string) (compute.VirtualMachine, error) { return createVirtualMachineFromUri("test.vhd"), nil }, say: func(message string) {}, @@ -43,7 +44,7 @@ func TestStepGetOSDiskShouldPassIfGetPasses(t *testing.T) { stateBag := createTestStateBagStepGetOSDisk() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -58,7 +59,7 @@ func TestStepGetOSDiskShouldTakeValidateArgumentsFromStateBag(t *testing.T) { var actualComputeName string var testSubject = &StepGetOSDisk{ - query: func(resourceGroupName string, computeName string) (compute.VirtualMachine, error) { + query: func(ctx context.Context, resourceGroupName string, computeName string) (compute.VirtualMachine, error) { actualResourceGroupName = resourceGroupName actualComputeName = computeName @@ -69,7 +70,7 @@ func TestStepGetOSDiskShouldTakeValidateArgumentsFromStateBag(t *testing.T) { } stateBag := createTestStateBagStepGetOSDisk() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) diff --git a/builder/azure/arm/step_power_off_compute.go b/builder/azure/arm/step_power_off_compute.go index 931050e04..da0ede214 100644 --- a/builder/azure/arm/step_power_off_compute.go +++ b/builder/azure/arm/step_power_off_compute.go @@ -1,17 +1,17 @@ package arm import ( + "context" "fmt" - "github.com/hashicorp/packer/builder/azure/common" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepPowerOffCompute struct { client *AzureClient - powerOff func(resourceGroupName string, computeName string, cancelCh <-chan struct{}) error + powerOff func(ctx context.Context, resourceGroupName string, computeName string) error say func(message string) error func(e error) } @@ -27,17 +27,18 @@ func NewStepPowerOffCompute(client *AzureClient, ui packer.Ui) *StepPowerOffComp return step } -func (s *StepPowerOffCompute) powerOffCompute(resourceGroupName string, computeName string, cancelCh <-chan struct{}) error { - _, errChan := s.client.PowerOff(resourceGroupName, computeName, cancelCh) - - err := <-errChan +func (s *StepPowerOffCompute) powerOffCompute(ctx context.Context, resourceGroupName string, computeName string) error { + f, err := s.client.VirtualMachinesClient.PowerOff(ctx, resourceGroupName, computeName) + if err == nil { + err = f.WaitForCompletion(ctx, s.client.VirtualMachinesClient.Client) + } if err != nil { s.say(s.client.LastError.Error()) } return err } -func (s *StepPowerOffCompute) Run(state multistep.StateBag) 
multistep.StepAction { +func (s *StepPowerOffCompute) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Powering off machine ...") var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) @@ -46,11 +47,9 @@ func (s *StepPowerOffCompute) Run(state multistep.StateBag) multistep.StepAction s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) s.say(fmt.Sprintf(" -> ComputeName : '%s'", computeName)) - result := common.StartInterruptibleTask( - func() bool { return common.IsStateCancelled(state) }, - func(cancelCh <-chan struct{}) error { return s.powerOff(resourceGroupName, computeName, cancelCh) }) + err := s.powerOff(ctx, resourceGroupName, computeName) - return processInterruptibleResult(result, s.error, state) + return processStepResult(err, s.error, state) } func (*StepPowerOffCompute) Cleanup(multistep.StateBag) { diff --git a/builder/azure/arm/step_power_off_compute_test.go b/builder/azure/arm/step_power_off_compute_test.go index 2dcdac7f6..8239f2122 100644 --- a/builder/azure/arm/step_power_off_compute_test.go +++ b/builder/azure/arm/step_power_off_compute_test.go @@ -1,23 +1,24 @@ package arm import ( + "context" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepPowerOffComputeShouldFailIfPowerOffFails(t *testing.T) { var testSubject = &StepPowerOffCompute{ - powerOff: func(string, string, <-chan struct{}) error { return fmt.Errorf("!! Unit Test FAIL !!") }, + powerOff: func(context.Context, string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, say: func(message string) {}, error: func(e error) {}, } stateBag := createTestStateBagStepPowerOffCompute() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -29,14 +30,14 @@ func TestStepPowerOffComputeShouldFailIfPowerOffFails(t *testing.T) { func TestStepPowerOffComputeShouldPassIfPowerOffPasses(t *testing.T) { var testSubject = &StepPowerOffCompute{ - powerOff: func(string, string, <-chan struct{}) error { return nil }, + powerOff: func(context.Context, string, string) error { return nil }, say: func(message string) {}, error: func(e error) {}, } stateBag := createTestStateBagStepPowerOffCompute() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -51,7 +52,7 @@ func TestStepPowerOffComputeShouldTakeStepArgumentsFromStateBag(t *testing.T) { var actualComputeName string var testSubject = &StepPowerOffCompute{ - powerOff: func(resourceGroupName string, computeName string, cancelCh <-chan struct{}) error { + powerOff: func(ctx context.Context, resourceGroupName string, computeName string) error { actualResourceGroupName = resourceGroupName actualComputeName = computeName @@ -62,7 +63,7 @@ func TestStepPowerOffComputeShouldTakeStepArgumentsFromStateBag(t *testing.T) { } stateBag := createTestStateBagStepPowerOffCompute() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) diff --git a/builder/azure/arm/step_save_winrm_password.go 
b/builder/azure/arm/step_save_winrm_password.go new file mode 100644 index 000000000..700a4b048 --- /dev/null +++ b/builder/azure/arm/step_save_winrm_password.go @@ -0,0 +1,23 @@ +package arm + +import ( + "context" + + commonhelper "github.com/hashicorp/packer/helper/common" + "github.com/hashicorp/packer/helper/multistep" +) + +type StepSaveWinRMPassword struct { + Password string + BuildName string +} + +func (s *StepSaveWinRMPassword) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + // store so that we can access this later during provisioning + commonhelper.SetSharedState("winrm_password", s.Password, s.BuildName) + return multistep.ActionContinue +} + +func (s *StepSaveWinRMPassword) Cleanup(multistep.StateBag) { + commonhelper.RemoveSharedStateFile("winrm_password", s.BuildName) +} diff --git a/builder/azure/arm/step_set_certificate.go b/builder/azure/arm/step_set_certificate.go index 5e865b882..ef8832074 100644 --- a/builder/azure/arm/step_set_certificate.go +++ b/builder/azure/arm/step_set_certificate.go @@ -1,9 +1,11 @@ package arm import ( + "context" + "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepSetCertificate struct { @@ -22,7 +24,7 @@ func NewStepSetCertificate(config *Config, ui packer.Ui) *StepSetCertificate { return step } -func (s *StepSetCertificate) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepSetCertificate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { s.say("Setting the certificate's URL ...") var winRMCertificateUrl = state.Get(constants.ArmCertificateUrl).(string) diff --git a/builder/azure/arm/step_set_certificate_test.go b/builder/azure/arm/step_set_certificate_test.go index 1ad9b7dbe..387d620a9 100644 --- a/builder/azure/arm/step_set_certificate_test.go +++ b/builder/azure/arm/step_set_certificate_test.go @@ -1,10 +1,11 @@ package arm import ( + "context" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepSetCertificateShouldPassIfGetPasses(t *testing.T) { @@ -16,7 +17,7 @@ func TestStepSetCertificateShouldPassIfGetPasses(t *testing.T) { stateBag := createTestStateBagStepSetCertificate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -35,7 +36,7 @@ func TestStepSetCertificateShouldTakeStepArgumentsFromStateBag(t *testing.T) { } stateBag := createTestStateBagStepSetCertificate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) diff --git a/builder/azure/arm/step_test.go b/builder/azure/arm/step_test.go index 2b10c8781..1f75b1936 100644 --- a/builder/azure/arm/step_test.go +++ b/builder/azure/arm/step_test.go @@ -2,10 +2,10 @@ package arm import ( "fmt" - "github.com/hashicorp/packer/builder/azure/common" - "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" ) func TestProcessStepResultShouldContinueForNonErrors(t 
*testing.T) { @@ -38,59 +38,3 @@ func TestProcessStepResultShouldHaltOnError(t *testing.T) { t.Errorf("Expected ActionHalt(%d), but got=%d", multistep.ActionHalt, code) } } - -func TestProcessStepResultShouldContinueOnSuccessfulTask(t *testing.T) { - stateBag := new(multistep.BasicStateBag) - result := common.InterruptibleTaskResult{ - IsCancelled: false, - Err: nil, - } - - code := processInterruptibleResult(result, func(error) { t.Fatal("Should not be called!") }, stateBag) - if _, ok := stateBag.GetOk(constants.Error); ok { - t.Errorf("Error was nil, but was still in the state bag.") - } - - if code != multistep.ActionContinue { - t.Errorf("Expected ActionContinue(%d), but got=%d", multistep.ActionContinue, code) - } -} - -func TestProcessStepResultShouldHaltWhenTaskIsCancelled(t *testing.T) { - stateBag := new(multistep.BasicStateBag) - result := common.InterruptibleTaskResult{ - IsCancelled: true, - Err: nil, - } - - code := processInterruptibleResult(result, func(error) { t.Fatal("Should not be called!") }, stateBag) - if _, ok := stateBag.GetOk(constants.Error); ok { - t.Errorf("Error was nil, but was still in the state bag.") - } - - if code != multistep.ActionHalt { - t.Errorf("Expected ActionHalt(%d), but got=%d", multistep.ActionHalt, code) - } -} - -func TestProcessStepResultShouldHaltOnTaskError(t *testing.T) { - stateBag := new(multistep.BasicStateBag) - isSaidError := false - result := common.InterruptibleTaskResult{ - IsCancelled: false, - Err: fmt.Errorf("boom"), - } - - code := processInterruptibleResult(result, func(error) { isSaidError = true }, stateBag) - if _, ok := stateBag.GetOk(constants.Error); !ok { - t.Errorf("Error was *not* nil, but was not in the state bag.") - } - - if !isSaidError { - t.Errorf("Expected error to be said, but it was not.") - } - - if code != multistep.ActionHalt { - t.Errorf("Expected ActionHalt(%d), but got=%d", multistep.ActionHalt, code) - } -} diff --git a/builder/azure/arm/step_validate_template.go b/builder/azure/arm/step_validate_template.go index d2b98638b..30ec3728b 100644 --- a/builder/azure/arm/step_validate_template.go +++ b/builder/azure/arm/step_validate_template.go @@ -1,16 +1,17 @@ package arm import ( + "context" "fmt" "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepValidateTemplate struct { client *AzureClient - validate func(resourceGroupName string, deploymentName string) error + validate func(ctx context.Context, resourceGroupName string, deploymentName string) error say func(message string) error func(e error) config *Config @@ -30,20 +31,20 @@ func NewStepValidateTemplate(client *AzureClient, ui packer.Ui, config *Config, return step } -func (s *StepValidateTemplate) validateTemplate(resourceGroupName string, deploymentName string) error { +func (s *StepValidateTemplate) validateTemplate(ctx context.Context, resourceGroupName string, deploymentName string) error { deployment, err := s.factory(s.config) if err != nil { return err } - _, err = s.client.Validate(resourceGroupName, deploymentName, *deployment) + _, err = s.client.DeploymentsClient.Validate(ctx, resourceGroupName, deploymentName, *deployment) if err != nil { s.say(s.client.LastError.Error()) } return err } -func (s *StepValidateTemplate) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepValidateTemplate) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { s.say("Validating 
deployment template ...") var resourceGroupName = state.Get(constants.ArmResourceGroupName).(string) @@ -52,7 +53,7 @@ func (s *StepValidateTemplate) Run(state multistep.StateBag) multistep.StepActio s.say(fmt.Sprintf(" -> ResourceGroupName : '%s'", resourceGroupName)) s.say(fmt.Sprintf(" -> DeploymentName : '%s'", deploymentName)) - err := s.validate(resourceGroupName, deploymentName) + err := s.validate(ctx, resourceGroupName, deploymentName) return processStepResult(err, s.error, state) } diff --git a/builder/azure/arm/step_validate_template_test.go b/builder/azure/arm/step_validate_template_test.go index a56030787..22251f489 100644 --- a/builder/azure/arm/step_validate_template_test.go +++ b/builder/azure/arm/step_validate_template_test.go @@ -1,23 +1,24 @@ package arm import ( + "context" "fmt" "testing" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepValidateTemplateShouldFailIfValidateFails(t *testing.T) { var testSubject = &StepValidateTemplate{ - validate: func(string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, + validate: func(context.Context, string, string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, say: func(message string) {}, error: func(e error) {}, } stateBag := createTestStateBagStepValidateTemplate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionHalt { t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) } @@ -29,14 +30,14 @@ func TestStepValidateTemplateShouldFailIfValidateFails(t *testing.T) { func TestStepValidateTemplateShouldPassIfValidatePasses(t *testing.T) { var testSubject = &StepValidateTemplate{ - validate: func(string, string) error { return nil }, + validate: func(context.Context, string, string) error { return nil }, say: func(message string) {}, error: func(e error) {}, } stateBag := createTestStateBagStepValidateTemplate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) } @@ -51,7 +52,7 @@ func TestStepValidateTemplateShouldTakeStepArgumentsFromStateBag(t *testing.T) { var actualDeploymentName string var testSubject = &StepValidateTemplate{ - validate: func(resourceGroupName string, deploymentName string) error { + validate: func(ctx context.Context, resourceGroupName string, deploymentName string) error { actualResourceGroupName = resourceGroupName actualDeploymentName = deploymentName @@ -62,7 +63,7 @@ func TestStepValidateTemplateShouldTakeStepArgumentsFromStateBag(t *testing.T) { } stateBag := createTestStateBagStepValidateTemplate() - var result = testSubject.Run(stateBag) + var result = testSubject.Run(context.Background(), stateBag) if result != multistep.ActionContinue { t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) diff --git a/builder/azure/arm/template_factory.go b/builder/azure/arm/template_factory.go index 16fe7bf93..a1a0dd943 100644 --- a/builder/azure/arm/template_factory.go +++ b/builder/azure/arm/template_factory.go @@ -3,10 +3,11 @@ package arm import ( "encoding/json" - "github.com/Azure/azure-sdk-for-go/arm/compute" - "github.com/Azure/azure-sdk-for-go/arm/resources/resources" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" + 
"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources" "fmt" + "github.com/hashicorp/packer/builder/azure/common/constants" "github.com/hashicorp/packer/builder/azure/common/template" ) @@ -33,13 +34,20 @@ func GetVirtualMachineDeployment(config *Config) (*resources.Deployment, error) AdminUsername: &template.TemplateParameter{Value: config.UserName}, AdminPassword: &template.TemplateParameter{Value: config.Password}, DnsNameForPublicIP: &template.TemplateParameter{Value: config.tmpComputeName}, + NicName: &template.TemplateParameter{Value: config.tmpNicName}, OSDiskName: &template.TemplateParameter{Value: config.tmpOSDiskName}, + PublicIPAddressName: &template.TemplateParameter{Value: config.tmpPublicIPAddressName}, + SubnetName: &template.TemplateParameter{Value: config.tmpSubnetName}, StorageAccountBlobEndpoint: &template.TemplateParameter{Value: config.storageAccountBlobEndpoint}, - VMSize: &template.TemplateParameter{Value: config.VMSize}, - VMName: &template.TemplateParameter{Value: config.tmpComputeName}, + VirtualNetworkName: &template.TemplateParameter{Value: config.tmpVirtualNetworkName}, + VMSize: &template.TemplateParameter{Value: config.VMSize}, + VMName: &template.TemplateParameter{Value: config.tmpComputeName}, } - builder, _ := template.NewTemplateBuilder(template.BasicTemplate) + builder, err := template.NewTemplateBuilder(template.BasicTemplate) + if err != nil { + return nil, err + } osType := compute.Linux switch config.OSType { @@ -72,12 +80,20 @@ func GetVirtualMachineDeployment(config *Config) (*resources.Deployment, error) builder.SetOSDiskSizeGB(config.OSDiskSizeGB) } + if len(config.AdditionalDiskSize) > 0 { + builder.SetAdditionalDisks(config.AdditionalDiskSize, config.CustomManagedImageName != "" || (config.ManagedImageName != "" && config.ImagePublisher != "")) + } + if config.customData != "" { builder.SetCustomData(config.customData) } + if config.PlanInfo.PlanName != "" { + builder.SetPlanInfo(config.PlanInfo.PlanName, config.PlanInfo.PlanProduct, config.PlanInfo.PlanPublisher, config.PlanInfo.PlanPromotionCode) + } + if config.VirtualNetworkName != "" && DefaultPrivateVirtualNetworkWithPublicIp != config.PrivateVirtualNetworkWithPublicIp { - builder.SetPrivateVirtualNetworWithPublicIp( + builder.SetPrivateVirtualNetworkWithPublicIp( config.VirtualNetworkResourceGroupName, config.VirtualNetworkName, config.VirtualNetworkSubnetName) diff --git a/builder/azure/arm/template_factory_test.TestPlanInfo01.approved.json b/builder/azure/arm/template_factory_test.TestPlanInfo01.approved.json new file mode 100644 index 000000000..392dc0ec9 --- /dev/null +++ b/builder/azure/arm/template_factory_test.TestPlanInfo01.approved.json @@ -0,0 +1,204 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json", + "contentVersion": "1.0.0.0", + "parameters": { + "adminPassword": { + "type": "string" + }, + "adminUsername": { + "type": "string" + }, + "dnsNameForPublicIP": { + "type": "string" + }, + "nicName": { + "type": "string" + }, + "osDiskName": { + "type": "string" + }, + "publicIPAddressName": { + "type": "string" + }, + "storageAccountBlobEndpoint": { + "type": "string" + }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, + "vmName": { + "type": "string" + }, + "vmSize": { + "type": "string" + } + }, + "resources": [ + { + "apiVersion": "[variables('publicIPAddressApiVersion')]", + "location": "[variables('location')]", + "name": 
"[parameters('publicIPAddressName')]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('dnsNameForPublicIP')]" + }, + "publicIPAllocationMethod": "[variables('publicIPAddressType')]" + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "", + "PlanPublisher": "planPublisher00" + }, + "type": "Microsoft.Network/publicIPAddresses" + }, + { + "apiVersion": "[variables('virtualNetworksApiVersion')]", + "location": "[variables('location')]", + "name": "[variables('virtualNetworkName')]", + "properties": { + "addressSpace": { + "addressPrefixes": [ + "[variables('addressPrefix')]" + ] + }, + "subnets": [ + { + "name": "[variables('subnetName')]", + "properties": { + "addressPrefix": "[variables('subnetAddressPrefix')]" + } + } + ] + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "", + "PlanPublisher": "planPublisher00" + }, + "type": "Microsoft.Network/virtualNetworks" + }, + { + "apiVersion": "[variables('networkInterfacesApiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", + "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('nicName')]", + "properties": { + "ipConfigurations": [ + { + "name": "ipconfig", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" + }, + "subnet": { + "id": "[variables('subnetRef')]" + } + } + } + ] + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "", + "PlanPublisher": "planPublisher00" + }, + "type": "Microsoft.Network/networkInterfaces" + }, + { + "apiVersion": "[variables('apiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('vmName')]", + "plan": { + "name": "planName00", + "product": "planProduct00", + "publisher": "planPublisher00" + }, + "properties": { + "diagnosticsProfile": { + "bootDiagnostics": { + "enabled": false + } + }, + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" + } + ] + }, + "osProfile": { + "adminPassword": "[parameters('adminPassword')]", + "adminUsername": "[parameters('adminUsername')]", + "computerName": "[parameters('vmName')]", + "linuxConfiguration": { + "ssh": { + "publicKeys": [ + { + "keyData": "", + "path": "[variables('sshKeyPath')]" + } + ] + } + } + }, + "storageProfile": { + "imageReference": { + "offer": "ignored00", + "publisher": "ignored00", + "sku": "ignored00", + "version": "latest" + }, + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "name": "[parameters('osDiskName')]", + "vhd": { + "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" + } + } + } + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "", + "PlanPublisher": "planPublisher00" + }, + "type": "Microsoft.Compute/virtualMachines" + } + ], + "variables": { + "addressPrefix": "10.0.0.0/16", + "apiVersion": "2017-03-30", + "location": "[resourceGroup().location]", + 
"managedDiskApiVersion": "2017-03-30", + "networkInterfacesApiVersion": "2017-04-01", + "publicIPAddressApiVersion": "2017-04-01", + "publicIPAddressType": "Dynamic", + "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", + "subnetAddressPrefix": "10.0.0.0/24", + "subnetName": "[parameters('subnetName')]", + "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworksApiVersion": "2017-04-01", + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } +} \ No newline at end of file diff --git a/builder/azure/arm/template_factory_test.TestPlanInfo02.approved.json b/builder/azure/arm/template_factory_test.TestPlanInfo02.approved.json new file mode 100644 index 000000000..b29616711 --- /dev/null +++ b/builder/azure/arm/template_factory_test.TestPlanInfo02.approved.json @@ -0,0 +1,209 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json", + "contentVersion": "1.0.0.0", + "parameters": { + "adminPassword": { + "type": "string" + }, + "adminUsername": { + "type": "string" + }, + "dnsNameForPublicIP": { + "type": "string" + }, + "nicName": { + "type": "string" + }, + "osDiskName": { + "type": "string" + }, + "publicIPAddressName": { + "type": "string" + }, + "storageAccountBlobEndpoint": { + "type": "string" + }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, + "vmName": { + "type": "string" + }, + "vmSize": { + "type": "string" + } + }, + "resources": [ + { + "apiVersion": "[variables('publicIPAddressApiVersion')]", + "location": "[variables('location')]", + "name": "[parameters('publicIPAddressName')]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('dnsNameForPublicIP')]" + }, + "publicIPAllocationMethod": "[variables('publicIPAddressType')]" + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "planPromotionCode00", + "PlanPublisher": "planPublisher00", + "dept": "engineering" + }, + "type": "Microsoft.Network/publicIPAddresses" + }, + { + "apiVersion": "[variables('virtualNetworksApiVersion')]", + "location": "[variables('location')]", + "name": "[variables('virtualNetworkName')]", + "properties": { + "addressSpace": { + "addressPrefixes": [ + "[variables('addressPrefix')]" + ] + }, + "subnets": [ + { + "name": "[variables('subnetName')]", + "properties": { + "addressPrefix": "[variables('subnetAddressPrefix')]" + } + } + ] + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "planPromotionCode00", + "PlanPublisher": "planPublisher00", + "dept": "engineering" + }, + "type": "Microsoft.Network/virtualNetworks" + }, + { + "apiVersion": "[variables('networkInterfacesApiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", + "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('nicName')]", + "properties": { + "ipConfigurations": [ + { + "name": "ipconfig", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIPAddresses', 
parameters('publicIPAddressName'))]" + }, + "subnet": { + "id": "[variables('subnetRef')]" + } + } + } + ] + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "planPromotionCode00", + "PlanPublisher": "planPublisher00", + "dept": "engineering" + }, + "type": "Microsoft.Network/networkInterfaces" + }, + { + "apiVersion": "[variables('apiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('vmName')]", + "plan": { + "name": "planName00", + "product": "planProduct00", + "promotionCode": "planPromotionCode00", + "publisher": "planPublisher00" + }, + "properties": { + "diagnosticsProfile": { + "bootDiagnostics": { + "enabled": false + } + }, + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" + } + ] + }, + "osProfile": { + "adminPassword": "[parameters('adminPassword')]", + "adminUsername": "[parameters('adminUsername')]", + "computerName": "[parameters('vmName')]", + "linuxConfiguration": { + "ssh": { + "publicKeys": [ + { + "keyData": "", + "path": "[variables('sshKeyPath')]" + } + ] + } + } + }, + "storageProfile": { + "imageReference": { + "offer": "ignored00", + "publisher": "ignored00", + "sku": "ignored00", + "version": "latest" + }, + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "name": "[parameters('osDiskName')]", + "vhd": { + "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" + } + } + } + }, + "tags": { + "PlanInfo": "planName00", + "PlanProduct": "planProduct00", + "PlanPromotionCode": "planPromotionCode00", + "PlanPublisher": "planPublisher00", + "dept": "engineering" + }, + "type": "Microsoft.Compute/virtualMachines" + } + ], + "variables": { + "addressPrefix": "10.0.0.0/16", + "apiVersion": "2017-03-30", + "location": "[resourceGroup().location]", + "managedDiskApiVersion": "2017-03-30", + "networkInterfacesApiVersion": "2017-04-01", + "publicIPAddressApiVersion": "2017-04-01", + "publicIPAddressType": "Dynamic", + "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", + "subnetAddressPrefix": "10.0.0.0/24", + "subnetName": "[parameters('subnetName')]", + "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworksApiVersion": "2017-04-01", + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } +} \ No newline at end of file diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment03.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment03.approved.json index a220d9822..edb21ca39 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment03.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment03.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, 
"storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -132,7 +144,7 @@ "osDisk": { "caching": "ReadWrite", "createOption": "FromImage", - "name": "osdisk", + "name": "[parameters('osDiskName')]", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" } @@ -148,15 +160,13 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", + "virtualNetworkName": "[parameters('virtualNetworkName')]", "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", "vmStorageAccountContainerName": "images", diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment04.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment04.approved.json index 0be4bbef4..ae52ff483 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment04.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment04.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + 
"publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -129,7 +141,7 @@ "image": { "uri": "https://localhost/custom.vhd" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" @@ -146,18 +158,16 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", - "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", - "vmStorageAccountContainerName": "images", - "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" - } + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } } \ No newline at end of file diff --git 
a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment05.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment05.approved.json index 29c4750c5..5c78f287a 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment05.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment05.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -29,7 +41,7 @@ "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -48,7 +60,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -64,7 +76,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -90,7 +102,7 @@ "image": { "uri": "https://localhost/custom.vhd" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" @@ -107,9 +119,7 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment06.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment06.approved.json index fb5e79b9f..04ce31e33 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment06.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment06.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -71,11 +83,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + 
"[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -83,7 +95,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -102,7 +114,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -118,7 +130,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -144,7 +156,7 @@ "image": { "uri": "https://localhost/custom.vhd" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" @@ -166,15 +178,13 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", + "virtualNetworkName": "[parameters('virtualNetworkName')]", "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", "vmStorageAccountContainerName": "images", diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment07.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment07.approved.json index 6f722c765..a3b278f1e 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment07.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment07.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', 
variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -130,7 +142,7 @@ "image": { "uri": "https://localhost/custom.vhd" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" @@ -147,15 +159,13 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", + "virtualNetworkName": "[parameters('virtualNetworkName')]", "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", "vmStorageAccountContainerName": "images", diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment08.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment08.approved.json index c567b774c..dfdaaae74 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment08.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment08.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - 
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -128,11 +140,11 @@ }, "osDisk": { "caching": "ReadWrite", - "createOption": "fromImage", + "createOption": "FromImage", "managedDisk": { "storageAccountType": "Standard_LRS" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux" } } @@ -146,15 +158,13 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", + "virtualNetworkName": "[parameters('virtualNetworkName')]", "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", "vmStorageAccountContainerName": "images", diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment09.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment09.approved.json index fbbb7384e..197015da3 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment09.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment09.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - 
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -131,11 +143,11 @@ }, "osDisk": { "caching": "ReadWrite", - "createOption": "fromImage", + "createOption": "FromImage", "managedDisk": { "storageAccountType": "Standard_LRS" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux" } } @@ -149,15 +161,13 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", + "virtualNetworkName": "[parameters('virtualNetworkName')]", "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", "vmStorageAccountContainerName": "images", diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment10.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment10.approved.json index 0eb130138..0c7903a3d 100644 --- a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment10.approved.json +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment10.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -40,10 +52,10 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - 
"[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]" + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -51,7 +63,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -65,7 +77,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -81,7 +93,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -109,11 +121,11 @@ }, "osDisk": { "caching": "ReadWrite", - "createOption": "fromImage", + "createOption": "FromImage", "managedDisk": { "storageAccountType": "Standard_LRS" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux" } } @@ -127,9 +139,7 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", diff --git a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment11.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment11.approved.json new file mode 100644 index 000000000..bf94c8581 --- /dev/null +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment11.approved.json @@ -0,0 +1,187 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json", + "contentVersion": "1.0.0.0", + "parameters": { + "adminPassword": { + "type": "string" + }, + "adminUsername": { + "type": "string" + }, + "dnsNameForPublicIP": { + "type": "string" + }, + "nicName": { + "type": "string" + }, + "osDiskName": { + "type": "string" + }, + "publicIPAddressName": { + "type": "string" + }, + "storageAccountBlobEndpoint": { + "type": "string" + }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, + "vmName": { + "type": "string" + }, + "vmSize": { + "type": "string" + } + }, + "resources": [ + { + "apiVersion": "[variables('publicIPAddressApiVersion')]", + "location": "[variables('location')]", + "name": "[parameters('publicIPAddressName')]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('dnsNameForPublicIP')]" + }, + "publicIPAllocationMethod": "[variables('publicIPAddressType')]" + }, + "type": "Microsoft.Network/publicIPAddresses" + }, + { + "apiVersion": "[variables('virtualNetworksApiVersion')]", + "location": "[variables('location')]", + "name": "[variables('virtualNetworkName')]", + "properties": { + "addressSpace": { + "addressPrefixes": [ + "[variables('addressPrefix')]" + ] + }, + 
"subnets": [ + { + "name": "[variables('subnetName')]", + "properties": { + "addressPrefix": "[variables('subnetAddressPrefix')]" + } + } + ] + }, + "type": "Microsoft.Network/virtualNetworks" + }, + { + "apiVersion": "[variables('networkInterfacesApiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", + "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('nicName')]", + "properties": { + "ipConfigurations": [ + { + "name": "ipconfig", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" + }, + "subnet": { + "id": "[variables('subnetRef')]" + } + } + } + ] + }, + "type": "Microsoft.Network/networkInterfaces" + }, + { + "apiVersion": "[variables('apiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('vmName')]", + "properties": { + "diagnosticsProfile": { + "bootDiagnostics": { + "enabled": false + } + }, + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" + } + ] + }, + "osProfile": { + "adminPassword": "[parameters('adminPassword')]", + "adminUsername": "[parameters('adminUsername')]", + "computerName": "[parameters('vmName')]", + "linuxConfiguration": { + "ssh": { + "publicKeys": [ + { + "keyData": "", + "path": "[variables('sshKeyPath')]" + } + ] + } + } + }, + "storageProfile": { + "dataDisks": [ + { + "caching": "ReadWrite", + "createOption": "Empty", + "diskSizeGB": 32, + "lun": 0, + "name": "datadisk-1", + "vhd": { + "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/datadisk-', '1','.vhd')]" + } + } + ], + "imageReference": { + "offer": "--image-offer--", + "publisher": "--image-publisher--", + "sku": "--image-sku--", + "version": "--version--" + }, + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "name": "[parameters('osDiskName')]", + "vhd": { + "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" + } + } + } + }, + "type": "Microsoft.Compute/virtualMachines" + } + ], + "variables": { + "addressPrefix": "10.0.0.0/16", + "apiVersion": "2017-03-30", + "location": "[resourceGroup().location]", + "managedDiskApiVersion": "2017-03-30", + "networkInterfacesApiVersion": "2017-04-01", + "publicIPAddressApiVersion": "2017-04-01", + "publicIPAddressType": "Dynamic", + "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", + "subnetAddressPrefix": "10.0.0.0/24", + "subnetName": "[parameters('subnetName')]", + "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworksApiVersion": "2017-04-01", + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } +} \ No newline at end of file diff --git 
a/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment12.approved.json b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment12.approved.json new file mode 100644 index 000000000..dc1ec1509 --- /dev/null +++ b/builder/azure/arm/template_factory_test.TestVirtualMachineDeployment12.approved.json @@ -0,0 +1,188 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json", + "contentVersion": "1.0.0.0", + "parameters": { + "adminPassword": { + "type": "string" + }, + "adminUsername": { + "type": "string" + }, + "dnsNameForPublicIP": { + "type": "string" + }, + "nicName": { + "type": "string" + }, + "osDiskName": { + "type": "string" + }, + "publicIPAddressName": { + "type": "string" + }, + "storageAccountBlobEndpoint": { + "type": "string" + }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, + "vmName": { + "type": "string" + }, + "vmSize": { + "type": "string" + } + }, + "resources": [ + { + "apiVersion": "[variables('publicIPAddressApiVersion')]", + "location": "[variables('location')]", + "name": "[parameters('publicIPAddressName')]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('dnsNameForPublicIP')]" + }, + "publicIPAllocationMethod": "[variables('publicIPAddressType')]" + }, + "type": "Microsoft.Network/publicIPAddresses" + }, + { + "apiVersion": "[variables('virtualNetworksApiVersion')]", + "location": "[variables('location')]", + "name": "[variables('virtualNetworkName')]", + "properties": { + "addressSpace": { + "addressPrefixes": [ + "[variables('addressPrefix')]" + ] + }, + "subnets": [ + { + "name": "[variables('subnetName')]", + "properties": { + "addressPrefix": "[variables('subnetAddressPrefix')]" + } + } + ] + }, + "type": "Microsoft.Network/virtualNetworks" + }, + { + "apiVersion": "[variables('networkInterfacesApiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", + "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('nicName')]", + "properties": { + "ipConfigurations": [ + { + "name": "ipconfig", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" + }, + "subnet": { + "id": "[variables('subnetRef')]" + } + } + } + ] + }, + "type": "Microsoft.Network/networkInterfaces" + }, + { + "apiVersion": "[variables('apiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('vmName')]", + "properties": { + "diagnosticsProfile": { + "bootDiagnostics": { + "enabled": false + } + }, + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" + } + ] + }, + "osProfile": { + "adminPassword": "[parameters('adminPassword')]", + "adminUsername": "[parameters('adminUsername')]", + "computerName": "[parameters('vmName')]", + "linuxConfiguration": { + "ssh": { + "publicKeys": [ + { + "keyData": "", + "path": "[variables('sshKeyPath')]" + } + ] + } + } + }, + "storageProfile": { + "dataDisks": [ + { + "caching": "ReadWrite", + "createOption": "Empty", + "diskSizeGB": 32, + "lun": 0, + "managedDisk": { + 
"storageAccountType": "Standard_LRS" + }, + "name": "datadisk-1" + } + ], + "imageReference": { + "offer": "--image-offer--", + "publisher": "--image-publisher--", + "sku": "--image-sku--", + "version": "--version--" + }, + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Standard_LRS" + }, + "name": "[parameters('osDiskName')]", + "osType": "Linux" + } + } + }, + "type": "Microsoft.Compute/virtualMachines" + } + ], + "variables": { + "addressPrefix": "10.0.0.0/16", + "apiVersion": "2017-03-30", + "location": "[resourceGroup().location]", + "managedDiskApiVersion": "2017-03-30", + "networkInterfacesApiVersion": "2017-04-01", + "publicIPAddressApiVersion": "2017-04-01", + "publicIPAddressType": "Dynamic", + "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", + "subnetAddressPrefix": "10.0.0.0/24", + "subnetName": "[parameters('subnetName')]", + "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworksApiVersion": "2017-04-01", + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } +} \ No newline at end of file diff --git a/builder/azure/arm/template_factory_test.go b/builder/azure/arm/template_factory_test.go index b571871ef..fcf431336 100644 --- a/builder/azure/arm/template_factory_test.go +++ b/builder/azure/arm/template_factory_test.go @@ -5,7 +5,7 @@ import ( "encoding/json" "testing" - "github.com/Azure/azure-sdk-for-go/arm/resources/resources" + "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources" "github.com/approvals/go-approval-tests" "github.com/hashicorp/packer/builder/azure/common/constants" "github.com/hashicorp/packer/builder/azure/common/template" @@ -355,6 +355,76 @@ func TestVirtualMachineDeployment10(t *testing.T) { } } +// Ensure the VM template is correct when building with additional unmanaged disks +func TestVirtualMachineDeployment11(t *testing.T) { + config := map[string]interface{}{ + "location": "ignore", + "subscription_id": "ignore", + "os_type": constants.Target_Linux, + "communicator": "none", + "image_publisher": "--image-publisher--", + "image_offer": "--image-offer--", + "image_sku": "--image-sku--", + "image_version": "--version--", + + "disk_additional_size": []uint{32}, + + "resource_group_name": "packergroup", + "storage_account": "packerartifacts", + "capture_name_prefix": "packer", + "capture_container_name": "packerimages", + } + + c, _, err := newConfig(config, getPackerConfiguration()) + if err != nil { + t.Fatal(err) + } + + deployment, err := GetVirtualMachineDeployment(c) + if err != nil { + t.Fatal(err) + } + + err = approvaltests.VerifyJSONStruct(t, deployment.Properties.Template) + if err != nil { + t.Fatal(err) + } +} + +// Ensure the VM template is correct when building with additional managed disks +func TestVirtualMachineDeployment12(t *testing.T) { + config := map[string]interface{}{ + "location": "ignore", + "subscription_id": "ignore", + "os_type": constants.Target_Linux, + "communicator": "none", + "image_publisher": "--image-publisher--", + "image_offer": "--image-offer--", + "image_sku": "--image-sku--", + "image_version": "--version--", + + "disk_additional_size": []uint{32}, + + "managed_image_name": 
"ManagedImageName", + "managed_image_resource_group_name": "ManagedImageResourceGroupName", + } + + c, _, err := newConfig(config, getPackerConfiguration()) + if err != nil { + t.Fatal(err) + } + + deployment, err := GetVirtualMachineDeployment(c) + if err != nil { + t.Fatal(err) + } + + err = approvaltests.VerifyJSONStruct(t, deployment.Properties.Template) + if err != nil { + t.Fatal(err) + } +} + // Ensure the link values are not set, and the concrete values are set. func TestKeyVaultDeployment00(t *testing.T) { c, _, _ := newConfig(getArmBuilderConfiguration(), getPackerConfiguration()) @@ -453,3 +523,49 @@ func TestKeyVaultDeployment03(t *testing.T) { t.Fatal(err) } } + +func TestPlanInfo01(t *testing.T) { + planInfo := map[string]interface{}{ + "plan_info": map[string]string{ + "plan_name": "planName00", + "plan_product": "planProduct00", + "plan_publisher": "planPublisher00", + }, + } + + c, _, _ := newConfig(planInfo, getArmBuilderConfiguration(), getPackerConfiguration()) + deployment, err := GetVirtualMachineDeployment(c) + if err != nil { + t.Fatal(err) + } + + err = approvaltests.VerifyJSONStruct(t, deployment.Properties.Template) + if err != nil { + t.Fatal(err) + } +} + +func TestPlanInfo02(t *testing.T) { + planInfo := map[string]interface{}{ + "azure_tags": map[string]string{ + "dept": "engineering", + }, + "plan_info": map[string]string{ + "plan_name": "planName00", + "plan_product": "planProduct00", + "plan_publisher": "planPublisher00", + "plan_promotion_code": "planPromotionCode00", + }, + } + + c, _, _ := newConfig(planInfo, getArmBuilderConfiguration(), getPackerConfiguration()) + deployment, err := GetVirtualMachineDeployment(c) + if err != nil { + t.Fatal(err) + } + + err = approvaltests.VerifyJSONStruct(t, deployment.Properties.Template) + if err != nil { + t.Fatal(err) + } +} diff --git a/builder/azure/arm/tempname.go b/builder/azure/arm/tempname.go index 734cd74aa..501dfda65 100644 --- a/builder/azure/arm/tempname.go +++ b/builder/azure/arm/tempname.go @@ -2,13 +2,19 @@ package arm import ( "fmt" + "strings" "github.com/hashicorp/packer/builder/azure/common" ) const ( - TempNameAlphabet = "0123456789bcdfghjklmnpqrstvwxyz" - TempPasswordAlphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" + TempNameAlphabet = "0123456789bcdfghjklmnpqrstvwxyz" + + numbers = "0123456789" + lowerCase = "abcdefghijklmnopqrstuvwxyz" + upperCase = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + + TempPasswordAlphabet = numbers + lowerCase + upperCase ) type TempName struct { @@ -19,6 +25,10 @@ type TempName struct { KeyVaultName string ResourceGroupName string OSDiskName string + NicName string + SubnetName string + PublicIPAddressName string + VirtualNetworkName string } func NewTempName() *TempName { @@ -29,10 +39,43 @@ func NewTempName() *TempName { tempName.DeploymentName = fmt.Sprintf("pkrdp%s", suffix) tempName.KeyVaultName = fmt.Sprintf("pkrkv%s", suffix) tempName.OSDiskName = fmt.Sprintf("pkros%s", suffix) + tempName.NicName = fmt.Sprintf("pkrni%s", suffix) + tempName.PublicIPAddressName = fmt.Sprintf("pkrip%s", suffix) + tempName.SubnetName = fmt.Sprintf("pkrsn%s", suffix) + tempName.VirtualNetworkName = fmt.Sprintf("pkrvn%s", suffix) tempName.ResourceGroupName = fmt.Sprintf("packer-Resource-Group-%s", suffix) - tempName.AdminPassword = common.RandomString(TempPasswordAlphabet, 32) + tempName.AdminPassword = generatePassword() tempName.CertificatePassword = common.RandomString(TempPasswordAlphabet, 32) return tempName } + +// generate a password that is acceptable to 
Azure.
+// Three of the four items must be met.
+// 1. Contains an uppercase character
+// 2. Contains a lowercase character
+// 3. Contains a numeric digit
+// 4. Contains a special character
+func generatePassword() string {
+	var s string
+	for i := 0; i < 100; i++ {
+		// assign to the outer s so the fallback return below still has a value
+		s = common.RandomString(TempPasswordAlphabet, 32)
+		if !strings.ContainsAny(s, numbers) {
+			continue
+		}
+
+		if !strings.ContainsAny(s, lowerCase) {
+			continue
+		}
+
+		if !strings.ContainsAny(s, upperCase) {
+			continue
+		}
+
+		return s
+	}
+
+	// if an acceptable password cannot be generated in 100 tries, give up
+	return s
+}
diff --git a/builder/azure/arm/tempname_test.go b/builder/azure/arm/tempname_test.go
index e9a49b429..120df3ea1 100644
--- a/builder/azure/arm/tempname_test.go
+++ b/builder/azure/arm/tempname_test.go
@@ -20,15 +20,49 @@ func TestTempNameShouldCreatePrefixedRandomNames(t *testing.T) {
 		t.Errorf("Expected OSDiskName to begin with 'pkros', but got '%s'!", tempName.OSDiskName)
 	}
 
+	if strings.Index(tempName.NicName, "pkrni") != 0 {
+		t.Errorf("Expected NicName to begin with 'pkrni', but got '%s'!", tempName.NicName)
+	}
+
+	if strings.Index(tempName.PublicIPAddressName, "pkrip") != 0 {
+		t.Errorf("Expected PublicIPAddressName to begin with 'pkrip', but got '%s'!", tempName.PublicIPAddressName)
+	}
+
 	if strings.Index(tempName.ResourceGroupName, "packer-Resource-Group-") != 0 {
 		t.Errorf("Expected ResourceGroupName to begin with 'packer-Resource-Group-', but got '%s'!", tempName.ResourceGroupName)
 	}
+
+	if strings.Index(tempName.SubnetName, "pkrsn") != 0 {
+		t.Errorf("Expected SubnetName to begin with 'pkrsn', but got '%s'!", tempName.SubnetName)
+	}
+
+	if strings.Index(tempName.VirtualNetworkName, "pkrvn") != 0 {
+		t.Errorf("Expected VirtualNetworkName to begin with 'pkrvn', but got '%s'!", tempName.VirtualNetworkName)
+	}
+}
+
+func TestTempAdminPassword(t *testing.T) {
+	tempName := NewTempName()
+
+	if !strings.ContainsAny(tempName.AdminPassword, numbers) {
+		t.Errorf("Expected AdminPassword to contain at least one of '%s'!", numbers)
+	}
+	if !strings.ContainsAny(tempName.AdminPassword, lowerCase) {
+		t.Errorf("Expected AdminPassword to contain at least one of '%s'!", lowerCase)
+	}
+	if !strings.ContainsAny(tempName.AdminPassword, upperCase) {
+		t.Errorf("Expected AdminPassword to contain at least one of '%s'!", upperCase)
+	}
 }
 
 func TestTempNameShouldHaveSameSuffix(t *testing.T) {
 	tempName := NewTempName()
 	suffix := tempName.ComputeName[5:]
 
+	if strings.HasSuffix(tempName.ComputeName, suffix) != true {
+		t.Errorf("Expected ComputeName to end with '%s', but the value is '%s'!", suffix, tempName.ComputeName)
+	}
+
 	if strings.HasSuffix(tempName.DeploymentName, suffix) != true {
 		t.Errorf("Expected DeploymentName to end with '%s', but the value is '%s'!", suffix, tempName.DeploymentName)
 	}
@@ -37,8 +71,23 @@ func TestTempNameShouldHaveSameSuffix(t *testing.T) {
 		t.Errorf("Expected OSDiskName to end with '%s', but the value is '%s'!", suffix, tempName.OSDiskName)
 	}
 
+	if strings.HasSuffix(tempName.NicName, suffix) != true {
+		t.Errorf("Expected NicName to end with '%s', but the value is '%s'!", suffix, tempName.NicName)
+	}
+
+	if strings.HasSuffix(tempName.PublicIPAddressName, suffix) != true {
+		t.Errorf("Expected PublicIPAddressName to end with '%s', but the value is '%s'!", suffix, tempName.PublicIPAddressName)
+	}
+
 	if strings.HasSuffix(tempName.ResourceGroupName, suffix) != true {
 		t.Errorf("Expected ResourceGroupName to end with '%s', but the value is '%s'!", suffix,
tempName.ResourceGroupName) } + if strings.HasSuffix(tempName.SubnetName, suffix) != true { + t.Errorf("Expected SubnetName to end with '%s', but the value is '%s'!", suffix, tempName.SubnetName) + } + + if strings.HasSuffix(tempName.VirtualNetworkName, suffix) != true { + t.Errorf("Expected VirtualNetworkName to end with '%s', but the value is '%s'!", suffix, tempName.VirtualNetworkName) + } } diff --git a/builder/azure/common/constants/stateBag.go b/builder/azure/common/constants/stateBag.go index a6eef5bd4..e1fe8a66b 100644 --- a/builder/azure/common/constants/stateBag.go +++ b/builder/azure/common/constants/stateBag.go @@ -15,20 +15,25 @@ const ( ArmComputeName string = "arm.ComputeName" ArmImageParameters string = "arm.ImageParameters" ArmCertificateUrl string = "arm.CertificateUrl" + ArmKeyVaultDeploymentName string = "arm.KeyVaultDeploymentName" ArmDeploymentName string = "arm.DeploymentName" ArmNicName string = "arm.NicName" ArmKeyVaultName string = "arm.KeyVaultName" ArmLocation string = "arm.Location" ArmOSDiskVhd string = "arm.OSDiskVhd" + ArmAdditionalDiskVhds string = "arm.AdditionalDiskVhds" ArmPublicIPAddressName string = "arm.PublicIPAddressName" ArmResourceGroupName string = "arm.ResourceGroupName" ArmIsResourceGroupCreated string = "arm.IsResourceGroupCreated" + ArmDoubleResourceGroupNameSet string = "arm.DoubleResourceGroupNameSet" ArmStorageAccountName string = "arm.StorageAccountName" ArmTags string = "arm.Tags" ArmVirtualMachineCaptureParameters string = "arm.VirtualMachineCaptureParameters" + ArmIsExistingResourceGroup string = "arm.IsExistingResourceGroup" ArmIsManagedImage string = "arm.IsManagedImage" ArmManagedImageResourceGroupName string = "arm.ManagedImageResourceGroupName" ArmManagedImageLocation string = "arm.ManagedImageLocation" ArmManagedImageName string = "arm.ManagedImageName" + ArmAsyncResourceGroupDelete string = "arm.AsyncResourceGroupDelete" ) diff --git a/builder/azure/common/devicelogin.go b/builder/azure/common/devicelogin.go index 904bd157f..8d053f802 100644 --- a/builder/azure/common/devicelogin.go +++ b/builder/azure/common/devicelogin.go @@ -1,28 +1,29 @@ package common import ( + "context" "fmt" "net/http" "os" "path/filepath" "regexp" + "strings" - "github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions" + "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/adal" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/to" - "github.com/hashicorp/packer/version" + "github.com/hashicorp/packer/helper/useragent" "github.com/mitchellh/go-homedir" ) var ( // AD app id for packer-azure driver. clientIDs = map[string]string{ - azure.PublicCloud.Name: "04cc58ec-51ab-4833-ac0d-ce3a7912414b", + azure.PublicCloud.Name: "04cc58ec-51ab-4833-ac0d-ce3a7912414b", + azure.USGovernmentCloud.Name: "a1479822-da77-46a7-abd0-6edacc8a8fac", } - - userAgent = fmt.Sprintf("packer/%s", version.FormattedVersion()) ) // NOTE(ahmetalpbalkan): Azure Active Directory implements OAuth 2.0 Device Flow @@ -40,8 +41,10 @@ var ( // Authenticate fetches a token from the local file cache or initiates a consent // flow and waits for token to be obtained. 
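A quick note on the state-bag keys added to `builder/azure/common/constants/stateBag.go` above, before continuing with `devicelogin.go`: the builder's steps communicate through a shared `multistep.StateBag`, and these constants are simply the well-known string keys under which one step stores a value for a later step to read. A minimal sketch of that pattern, assuming the `constants` and `helper/multistep` packages from this tree (the stored values here are made up for illustration):

```
package main

import (
	"fmt"

	"github.com/hashicorp/packer/builder/azure/common/constants"
	"github.com/hashicorp/packer/helper/multistep"
)

func main() {
	// An early step stores values under the well-known keys...
	state := new(multistep.BasicStateBag)
	state.Put(constants.ArmNicName, "pkrni0123456789")      // illustrative value
	state.Put(constants.ArmAsyncResourceGroupDelete, false) // illustrative value

	// ...and a later step reads them back with a type assertion.
	nicName := state.Get(constants.ArmNicName).(string)
	fmt.Println(nicName)

	// GetOk is the safer form when a key may not have been set yet.
	if vhds, ok := state.GetOk(constants.ArmAdditionalDiskVhds); ok {
		fmt.Println(vhds)
	}
}
```

Keeping the keys in a single constants file avoids typo-prone string literals scattered across the steps.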
-func Authenticate(env azure.Environment, tenantID string, say func(string)) (*adal.ServicePrincipalToken, error) { +func Authenticate(env azure.Environment, tenantID string, say func(string), scope string) (*adal.ServicePrincipalToken, error) { clientID, ok := clientIDs[env.Name] + var resourceid string + if !ok { return nil, fmt.Errorf("packer-azure application not set up for Azure environment %q", env.Name) } @@ -53,9 +56,14 @@ func Authenticate(env azure.Environment, tenantID string, say func(string)) (*ad // for AzurePublicCloud (https://management.core.windows.net/), this old // Service Management scope covers both ASM and ARM. - apiScope := env.ServiceManagementEndpoint - tokenPath := tokenCachePath(tenantID) + if strings.Contains(scope, "vault") { + resourceid = "vault" + } else { + resourceid = "mgmt" + } + + tokenPath := tokenCachePath(tenantID + resourceid) saveToken := mkTokenCallback(tokenPath) saveTokenCallback := func(t adal.Token) error { say("Azure token expired. Saving the refreshed token...") @@ -63,46 +71,23 @@ func Authenticate(env azure.Environment, tenantID string, say func(string)) (*ad } // Lookup the token cache file for an existing token. - spt, err := tokenFromFile(say, *oauthCfg, tokenPath, clientID, apiScope, saveTokenCallback) + spt, err := tokenFromFile(say, *oauthCfg, tokenPath, clientID, scope, saveTokenCallback) if err != nil { return nil, err } if spt != nil { say(fmt.Sprintf("Auth token found in file: %s", tokenPath)) - - // NOTE(ahmetalpbalkan): The token file we found may contain an - // expired access_token. In that case, the first call to Azure SDK will - // attempt to refresh the token using refresh_token, which might have - // expired[1], in that case we will get an error and we shall remove the - // token file and initiate token flow again so that the user would not - // need removing the token cache file manually. - // - // [1]: expiration date of refresh_token is not returned in AAD /token - // response, we just know it is 14 days. Therefore user’s token - // will go stale every 14 days and we will delete the token file, - // re-initiate the device flow. - say("Validating the token.") - if err = validateToken(env, spt); err != nil { - say(fmt.Sprintf("Error: %v", err)) - say("Stored Azure credentials expired. Please reauthenticate.") - say(fmt.Sprintf("Deleting %s", tokenPath)) - if err := os.RemoveAll(tokenPath); err != nil { - return nil, fmt.Errorf("Error deleting stale token file: %v", err) - } - } else { - say("Token works.") - return spt, nil - } + return spt, nil } // Start an OAuth 2.0 device flow say(fmt.Sprintf("Initiating device flow: %s", tokenPath)) - spt, err = tokenFromDeviceFlow(say, *oauthCfg, clientID, apiScope) + spt, err = tokenFromDeviceFlow(say, *oauthCfg, clientID, scope) if err != nil { return nil, err } say("Obtained service principal token.") - if err := saveToken(spt.Token); err != nil { + if err := saveToken(spt.Token()); err != nil { say("Error occurred saving token to cache file.") return nil, err } @@ -138,7 +123,7 @@ func tokenFromFile(say func(string), oauthCfg adal.OAuthConfig, tokenPath, clien // endpoint is polled until user gives consent, denies or the flow times out. // Returned token must be saved. 
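The reworked `Authenticate` above now receives the target `scope` and keys the on-disk token cache by tenant ID plus a short resource tag (`mgmt` or `vault`), so a Key Vault token and an ARM management token no longer overwrite each other in the same cache file. A standalone sketch of that keying decision, using only the standard library (the `cacheKeySuffix` helper is invented here for illustration and is not part of the Packer code):

```
package main

import (
	"fmt"
	"strings"
)

// cacheKeySuffix mirrors the branch in Authenticate: any scope that mentions
// "vault" is cached under a "vault" tag, everything else under "mgmt".
func cacheKeySuffix(scope string) string {
	if strings.Contains(scope, "vault") {
		return "vault"
	}
	return "mgmt"
}

func main() {
	tenantID := "00000000-0000-0000-0000-000000000000"
	for _, scope := range []string{
		"https://management.azure.com/",
		"https://vault.azure.net",
	} {
		fmt.Printf("scope %q -> cache key %q\n", scope, tenantID+cacheKeySuffix(scope))
	}
}
```

With the cache split this way, the old `validateToken` round-trip goes away and the code relies on the SDK's automatic token refresh instead.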
func tokenFromDeviceFlow(say func(string), oauthCfg adal.OAuthConfig, clientID, resource string) (*adal.ServicePrincipalToken, error) { - cl := autorest.NewClientWithUserAgent(userAgent) + cl := autorest.NewClientWithUserAgent(useragent.String()) deviceCode, err := adal.InitiateDeviceAuth(&cl, oauthCfg, clientID, resource) if err != nil { return nil, fmt.Errorf("Failed to start device auth: %v", err) @@ -183,31 +168,17 @@ func mkTokenCallback(path string) adal.TokenRefreshCallback { } } -// validateToken makes a call to Azure SDK with given token, essentially making -// sure if the access_token valid, if not it uses SDK’s functionality to -// automatically refresh the token using refresh_token (which might have -// expired). This check is essentially to make sure refresh_token is good. -func validateToken(env azure.Environment, token *adal.ServicePrincipalToken) error { - c := subscriptionsClient(env.ResourceManagerEndpoint) - c.Authorizer = autorest.NewBearerAuthorizer(token) - _, err := c.List() - if err != nil { - return fmt.Errorf("Token validity check failed: %v", err) - } - return nil -} - // FindTenantID figures out the AAD tenant ID of the subscription by making an // unauthenticated request to the Get Subscription Details endpoint and parses // the value from WWW-Authenticate header. func FindTenantID(env azure.Environment, subscriptionID string) (string, error) { const hdrKey = "WWW-Authenticate" - c := subscriptionsClient(env.ResourceManagerEndpoint) + c := subscriptions.NewClientWithBaseURI(env.ResourceManagerEndpoint) // we expect this request to fail (err != nil), but we are only interested // in headers, so surface the error if the Response is not present (i.e. // network error etc) - subs, err := c.Get(subscriptionID) + subs, err := c.Get(context.TODO(), subscriptionID) if subs.Response.Response == nil { return "", fmt.Errorf("Request failed: %v", err) } @@ -230,8 +201,3 @@ func FindTenantID(env azure.Environment, subscriptionID string) (string, error) } return m[1], nil } - -func subscriptionsClient(baseURI string) subscriptions.GroupClient { - client := subscriptions.NewGroupClientWithBaseURI(baseURI) - return client -} diff --git a/builder/azure/common/dump_config.go b/builder/azure/common/dump_config.go index 8be2c9aa3..73e619e76 100644 --- a/builder/azure/common/dump_config.go +++ b/builder/azure/common/dump_config.go @@ -2,9 +2,10 @@ package common import ( "fmt" - "github.com/mitchellh/reflectwalk" "reflect" "strings" + + "github.com/mitchellh/reflectwalk" ) type walker struct { diff --git a/builder/azure/common/interruptible_task.go b/builder/azure/common/interruptible_task.go deleted file mode 100644 index 94edc63c8..000000000 --- a/builder/azure/common/interruptible_task.go +++ /dev/null @@ -1,54 +0,0 @@ -package common - -import ( - "time" -) - -type InterruptibleTaskResult struct { - Err error - IsCancelled bool -} - -type InterruptibleTask struct { - IsCancelled func() bool - Task func(cancelCh <-chan struct{}) error -} - -func NewInterruptibleTask(isCancelled func() bool, task func(cancelCh <-chan struct{}) error) *InterruptibleTask { - return &InterruptibleTask{ - IsCancelled: isCancelled, - Task: task, - } -} - -func StartInterruptibleTask(isCancelled func() bool, task func(cancelCh <-chan struct{}) error) InterruptibleTaskResult { - t := NewInterruptibleTask(isCancelled, task) - return t.Run() -} - -func (s *InterruptibleTask) Run() InterruptibleTaskResult { - completeCh := make(chan error) - - cancelCh := make(chan struct{}) - defer close(cancelCh) - - go 
func() { - err := s.Task(cancelCh) - completeCh <- err - - // senders close, receivers check for close - close(completeCh) - }() - - for { - if s.IsCancelled() { - return InterruptibleTaskResult{Err: nil, IsCancelled: true} - } - - select { - case err := <-completeCh: - return InterruptibleTaskResult{Err: err, IsCancelled: false} - case <-time.After(100 * time.Millisecond): - } - } -} diff --git a/builder/azure/common/interruptible_task_test.go b/builder/azure/common/interruptible_task_test.go deleted file mode 100644 index d2090d87b..000000000 --- a/builder/azure/common/interruptible_task_test.go +++ /dev/null @@ -1,102 +0,0 @@ -package common - -import ( - "fmt" - "testing" - "time" -) - -func TestInterruptibleTaskShouldImmediatelyEndOnCancel(t *testing.T) { - testSubject := NewInterruptibleTask( - func() bool { return true }, - func(<-chan struct{}) error { - for { - time.Sleep(time.Second * 30) - } - }) - - result := testSubject.Run() - if result.IsCancelled != true { - t.Fatal("Expected the task to be cancelled, but it was not.") - } -} - -func TestInterruptibleTaskShouldRunTaskUntilCompletion(t *testing.T) { - var count int - - testSubject := &InterruptibleTask{ - IsCancelled: func() bool { - return false - }, - Task: func(<-chan struct{}) error { - for i := 0; i < 10; i++ { - count += 1 - } - - return nil - }, - } - - result := testSubject.Run() - if result.IsCancelled != false { - t.Errorf("Expected the task to *not* be cancelled, but it was.") - } - - if count != 10 { - t.Errorf("Expected the task to wait for completion, but it did not.") - } - - if result.Err != nil { - t.Errorf("Expected the task to return a nil error, but got=%s", result.Err) - } -} - -func TestInterruptibleTaskShouldImmediatelyStopOnTaskError(t *testing.T) { - testSubject := &InterruptibleTask{ - IsCancelled: func() bool { - return false - }, - Task: func(cancelCh <-chan struct{}) error { - return fmt.Errorf("boom") - }, - } - - result := testSubject.Run() - if result.IsCancelled != false { - t.Errorf("Expected the task to *not* be cancelled, but it was.") - } - - if result.Err == nil { - t.Errorf("Expected the task to return an error, but it did not.") - } -} - -func TestInterruptibleTaskShouldProvideLiveChannel(t *testing.T) { - testSubject := &InterruptibleTask{ - IsCancelled: func() bool { - return false - }, - Task: func(cancelCh <-chan struct{}) error { - isOpen := false - - select { - case _, ok := <-cancelCh: - isOpen = !ok - if !isOpen { - t.Errorf("Expected the channel to open, but it was closed.") - } - default: - isOpen = true - break - } - - if !isOpen { - t.Errorf("Check for openness failed.") - } - - return nil - }, - } - - testSubject.Run() -} diff --git a/builder/azure/common/lin/ssh.go b/builder/azure/common/lin/ssh.go index 1600c2f4f..0987a04ea 100644 --- a/builder/azure/common/lin/ssh.go +++ b/builder/azure/common/lin/ssh.go @@ -4,7 +4,7 @@ import ( "fmt" "github.com/hashicorp/packer/builder/azure/common/constants" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" ) diff --git a/builder/azure/common/lin/step_create_cert.go b/builder/azure/common/lin/step_create_cert.go index 6f6a685d3..da2cb5bf5 100644 --- a/builder/azure/common/lin/step_create_cert.go +++ b/builder/azure/common/lin/step_create_cert.go @@ -1,6 +1,7 @@ package lin import ( + "context" "crypto/rand" "crypto/rsa" "crypto/sha1" @@ -14,15 +15,15 @@ import ( "github.com/hashicorp/packer/builder/azure/common/constants" + "github.com/hashicorp/packer/helper/multistep" 
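The `InterruptibleTask` helper (and its tests) removed above existed to poll an `IsCancelled` callback every 100ms while the task ran in a goroutine. With the move to `github.com/hashicorp/packer/helper/multistep`, each step's `Run` receives a `context.Context` (see the `StepCreateCert` and `StepGeneralizeOS` hunks in this diff), so cancellation can be observed directly rather than polled for. A minimal sketch of the equivalent pattern using only the standard library (`runTask` and the fake workload are illustrative, not code from this tree):

```
package main

import (
	"context"
	"fmt"
	"time"
)

// runTask starts the work in a goroutine and returns either the task's error
// or ctx.Err() if the context is cancelled first; no polling loop required.
func runTask(ctx context.Context, task func(context.Context) error) error {
	done := make(chan error, 1)
	go func() { done <- task(ctx) }()

	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	err := runTask(ctx, func(ctx context.Context) error {
		select {
		case <-time.After(time.Second): // simulated long-running work
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	})
	fmt.Println(err) // prints "context deadline exceeded"
}
```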
"github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepCreateCert struct { TmpServiceName string } -func (s *StepCreateCert) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateCert) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) ui.Say("Creating temporary certificate...") diff --git a/builder/azure/common/lin/step_generalize_os.go b/builder/azure/common/lin/step_generalize_os.go index c4dacc47f..cc62f4eec 100644 --- a/builder/azure/common/lin/step_generalize_os.go +++ b/builder/azure/common/lin/step_generalize_os.go @@ -2,17 +2,19 @@ package lin import ( "bytes" + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepGeneralizeOS struct { Command string } -func (s *StepGeneralizeOS) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepGeneralizeOS) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) comm := state.Get("communicator").(packer.Communicator) diff --git a/builder/azure/common/state_bag.go b/builder/azure/common/state_bag.go index a402518a1..57889c03f 100644 --- a/builder/azure/common/state_bag.go +++ b/builder/azure/common/state_bag.go @@ -1,6 +1,6 @@ package common -import "github.com/mitchellh/multistep" +import "github.com/hashicorp/packer/helper/multistep" func IsStateCancelled(stateBag multistep.StateBag) bool { _, ok := stateBag.GetOk(multistep.StateCancelled) diff --git a/builder/azure/common/template/TestBuildLinux02.approved.txt b/builder/azure/common/template/TestBuildLinux02.approved.txt index c2533dd2d..97aad4de1 100644 --- a/builder/azure/common/template/TestBuildLinux02.approved.txt +++ b/builder/azure/common/template/TestBuildLinux02.approved.txt @@ -103,7 +103,7 @@ "storageProfile": { "osDisk": { "osType": "Linux", - "name": "osdisk", + "name": "[parameters('osDiskName')]", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" }, diff --git a/builder/azure/common/template/template.go b/builder/azure/common/template/template.go index cdea487f0..6c4429c34 100644 --- a/builder/azure/common/template/template.go +++ b/builder/azure/common/template/template.go @@ -1,8 +1,8 @@ package template import ( - "github.com/Azure/azure-sdk-for-go/arm/compute" - "github.com/Azure/azure-sdk-for-go/arm/network" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" + "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network" ) ///////////////////////////////////////////////// @@ -30,11 +30,19 @@ type Resource struct { Type *string `json:"type"` Location *string `json:"location,omitempty"` DependsOn *[]string `json:"dependsOn,omitempty"` + Plan *Plan `json:"plan,omitempty"` Properties *Properties `json:"properties,omitempty"` Tags *map[string]*string `json:"tags,omitempty"` Resources *[]Resource `json:"resources,omitempty"` } +type Plan struct { + Name *string `json:"name"` + Product *string `json:"product"` + Publisher *string `json:"publisher"` + PromotionCode *string `json:"promotionCode,omitempty"` +} + type OSDiskUnion struct { OsType compute.OperatingSystemTypes `json:"osType,omitempty"` OsState compute.OperatingSystemStateTypes `json:"osState,omitempty"` @@ -48,10 +56,23 @@ type OSDiskUnion struct { ManagedDisk 
*compute.ManagedDiskParameters `json:"managedDisk,omitempty"` } +type DataDiskUnion struct { + Lun *int `json:"lun,omitempty"` + BlobURI *string `json:"blobUri,omitempty"` + Name *string `json:"name,omitempty"` + Vhd *compute.VirtualHardDisk `json:"vhd,omitempty"` + Image *compute.VirtualHardDisk `json:"image,omitempty"` + Caching compute.CachingTypes `json:"caching,omitempty"` + CreateOption compute.DiskCreateOptionTypes `json:"createOption,omitempty"` + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + ManagedDisk *compute.ManagedDiskParameters `json:"managedDisk,omitempty"` +} + // Union of the StorageProfile and ImageStorageProfile types. type StorageProfileUnion struct { ImageReference *compute.ImageReference `json:"imageReference,omitempty"` OsDisk *OSDiskUnion `json:"osDisk,omitempty"` + DataDisks *[]DataDiskUnion `json:"dataDisks,omitempty"` } ///////////////////////////////////////////////// diff --git a/builder/azure/common/template/template_builder.go b/builder/azure/common/template/template_builder.go index 7311817c2..4a8e6d444 100644 --- a/builder/azure/common/template/template_builder.go +++ b/builder/azure/common/template/template_builder.go @@ -5,7 +5,7 @@ import ( "fmt" "strings" - "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" "github.com/Azure/go-autorest/autorest/to" ) @@ -111,9 +111,8 @@ func (s *TemplateBuilder) SetManagedDiskUrl(managedImageId string, storageAccoun profile.ImageReference = &compute.ImageReference{ ID: &managedImageId, } - profile.OsDisk.Name = to.StringPtr("osdisk") profile.OsDisk.OsType = s.osType - profile.OsDisk.CreateOption = compute.FromImage + profile.OsDisk.CreateOption = compute.DiskCreateOptionTypesFromImage profile.OsDisk.Vhd = nil profile.OsDisk.ManagedDisk = &compute.ManagedDiskParameters{ StorageAccountType: storageAccountType, @@ -136,9 +135,8 @@ func (s *TemplateBuilder) SetManagedMarketplaceImage(location, publisher, offer, Version: &version, //ID: &imageID, } - profile.OsDisk.Name = to.StringPtr("osdisk") profile.OsDisk.OsType = s.osType - profile.OsDisk.CreateOption = compute.FromImage + profile.OsDisk.CreateOption = compute.DiskCreateOptionTypesFromImage profile.OsDisk.Vhd = nil profile.OsDisk.ManagedDisk = &compute.ManagedDiskParameters{ StorageAccountType: storageAccountType, @@ -179,6 +177,26 @@ func (s *TemplateBuilder) SetImageUrl(imageUrl string, osType compute.OperatingS return nil } +func (s *TemplateBuilder) SetPlanInfo(name, product, publisher, promotionCode string) error { + var promotionCodeVal *string = nil + if promotionCode != "" { + promotionCodeVal = to.StringPtr(promotionCode) + } + + for i, x := range *s.template.Resources { + if strings.EqualFold(*x.Type, resourceVirtualMachine) { + (*s.template.Resources)[i].Plan = &Plan{ + Name: to.StringPtr(name), + Product: to.StringPtr(product), + Publisher: to.StringPtr(publisher), + PromotionCode: promotionCodeVal, + } + } + } + + return nil +} + func (s *TemplateBuilder) SetOSDiskSizeGB(diskSizeGB int32) error { resource, err := s.getResourceByType(resourceVirtualMachine) if err != nil { @@ -191,6 +209,35 @@ func (s *TemplateBuilder) SetOSDiskSizeGB(diskSizeGB int32) error { return nil } +func (s *TemplateBuilder) SetAdditionalDisks(diskSizeGB []int32, isManaged bool) error { + resource, err := s.getResourceByType(resourceVirtualMachine) + if err != nil { + return err + } + + profile := resource.Properties.StorageProfile + dataDisks := make([]DataDiskUnion, len(diskSizeGB)) + + for i, 
additionalsize := range diskSizeGB { + dataDisks[i].DiskSizeGB = to.Int32Ptr(additionalsize) + dataDisks[i].Lun = to.IntPtr(i) + dataDisks[i].Name = to.StringPtr(fmt.Sprintf("datadisk-%d", i+1)) + dataDisks[i].CreateOption = "Empty" + dataDisks[i].Caching = "ReadWrite" + if isManaged { + dataDisks[i].Vhd = nil + dataDisks[i].ManagedDisk = profile.OsDisk.ManagedDisk + } else { + dataDisks[i].Vhd = &compute.VirtualHardDisk{ + URI: to.StringPtr(fmt.Sprintf("[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/datadisk-', '%d','.vhd')]", i+1)), + } + dataDisks[i].ManagedDisk = nil + } + } + profile.DataDisks = &dataDisks + return nil +} + func (s *TemplateBuilder) SetCustomData(customData string) error { resource, err := s.getResourceByType(resourceVirtualMachine) if err != nil { @@ -225,7 +272,7 @@ func (s *TemplateBuilder) SetVirtualNetwork(virtualNetworkResourceGroup, virtual return nil } -func (s *TemplateBuilder) SetPrivateVirtualNetworWithPublicIp(virtualNetworkResourceGroup, virtualNetworkName, subnetName string) error { +func (s *TemplateBuilder) SetPrivateVirtualNetworkWithPublicIp(virtualNetworkResourceGroup, virtualNetworkName, subnetName string) error { s.setVariable("virtualNetworkResourceGroup", virtualNetworkResourceGroup) s.setVariable("virtualNetworkName", virtualNetworkName) s.setVariable("subnetName", subnetName) @@ -273,6 +320,17 @@ func (s *TemplateBuilder) getResourceByType(t string) (*Resource, error) { return nil, fmt.Errorf("template: could not find a resource of type %s", t) } +func (s *TemplateBuilder) getResourceByType2(t string) (**Resource, error) { + for _, x := range *s.template.Resources { + if strings.EqualFold(*x.Type, t) { + p := &x + return &p, nil + } + } + + return nil, fmt.Errorf("template: could not find a resource of type %s", t) +} + func (s *TemplateBuilder) setVariable(name string, value string) { (*s.template.Variables)[name] = value } @@ -401,12 +459,24 @@ const BasicTemplate = `{ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, + "subnetName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "virtualNetworkName": { + "type": "string" + }, "vmSize": { "type": "string" }, @@ -422,14 +492,12 @@ const BasicTemplate = `{ "publicIPAddressApiVersion": "2017-04-01", "virtualNetworksApiVersion": "2017-04-01", "location": "[resourceGroup().location]", - "nicName": "packerNic", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetAddressPrefix": "10.0.0.0/24", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", + "virtualNetworkName": "[parameters('virtualNetworkName')]", "virtualNetworkResourceGroup": "[resourceGroup().name]", "vmStorageAccountContainerName": "images", "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" @@ -438,7 +506,7 @@ const BasicTemplate = `{ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "type": "Microsoft.Network/publicIPAddresses", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "location": "[variables('location')]", "properties": { 
"publicIPAllocationMethod": "[variables('publicIPAddressType')]", @@ -471,10 +539,10 @@ const BasicTemplate = `{ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "type": "Microsoft.Network/networkInterfaces", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "location": "[variables('location')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "properties": { @@ -484,7 +552,7 @@ const BasicTemplate = `{ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -500,7 +568,7 @@ const BasicTemplate = `{ "name": "[parameters('vmName')]", "location": "[variables('location')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "properties": { "hardwareProfile": { @@ -513,7 +581,7 @@ const BasicTemplate = `{ }, "storageProfile": { "osDisk": { - "name": "osdisk", + "name": "[parameters('osDiskName')]", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" }, @@ -524,7 +592,7 @@ const BasicTemplate = `{ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, diff --git a/builder/azure/common/template/template_builder_test.TestBuildLinux00.approved.json b/builder/azure/common/template/template_builder_test.TestBuildLinux00.approved.json index b94fa0628..85986d344 100644 --- a/builder/azure/common/template/template_builder_test.TestBuildLinux00.approved.json +++ b/builder/azure/common/template/template_builder_test.TestBuildLinux00.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": 
"[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -132,7 +144,7 @@ "osDisk": { "caching": "ReadWrite", "createOption": "FromImage", - "name": "osdisk", + "name": "[parameters('osDiskName')]", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" } @@ -148,18 +160,16 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", - "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", - "vmStorageAccountContainerName": "images", - "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" - } + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } } \ No newline at end of file diff --git a/builder/azure/common/template/template_builder_test.TestBuildLinux01.approved.json b/builder/azure/common/template/template_builder_test.TestBuildLinux01.approved.json index 8eac0647d..d98871cb8 100644 --- a/builder/azure/common/template/template_builder_test.TestBuildLinux01.approved.json +++ b/builder/azure/common/template/template_builder_test.TestBuildLinux01.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', 
variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -129,7 +141,7 @@ "image": { "uri": "http://azure/custom.vhd" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" @@ -146,18 +158,16 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", - "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", - "vmStorageAccountContainerName": "images", - "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" - } + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } } \ No newline at end of file diff --git a/builder/azure/common/template/template_builder_test.TestBuildLinux02.approved.json b/builder/azure/common/template/template_builder_test.TestBuildLinux02.approved.json index fffb0d7bb..d7dd46a45 100644 --- a/builder/azure/common/template/template_builder_test.TestBuildLinux02.approved.json +++ b/builder/azure/common/template/template_builder_test.TestBuildLinux02.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ 
-29,7 +41,7 @@ "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -48,7 +60,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -64,7 +76,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -91,7 +103,7 @@ "image": { "uri": "http://azure/custom.vhd" }, - "name": "osdisk", + "name": "[parameters('osDiskName')]", "osType": "Linux", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" @@ -108,9 +120,7 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", diff --git a/builder/azure/common/template/template_builder_test.TestBuildWindows00.approved.json b/builder/azure/common/template/template_builder_test.TestBuildWindows00.approved.json index 1c3138deb..e1d511f40 100644 --- a/builder/azure/common/template/template_builder_test.TestBuildWindows00.approved.json +++ b/builder/azure/common/template/template_builder_test.TestBuildWindows00.approved.json @@ -11,12 +11,24 @@ "dnsNameForPublicIP": { "type": "string" }, + "nicName": { + "type": "string" + }, "osDiskName": { "type": "string" }, + "publicIPAddressName": { + "type": "string" + }, "storageAccountBlobEndpoint": { "type": "string" }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, "vmName": { "type": "string" }, @@ -28,7 +40,7 @@ { "apiVersion": "[variables('publicIPAddressApiVersion')]", "location": "[variables('location')]", - "name": "[variables('publicIPAddressName')]", + "name": "[parameters('publicIPAddressName')]", "properties": { "dnsSettings": { "domainNameLabel": "[parameters('dnsNameForPublicIP')]" @@ -61,11 +73,11 @@ { "apiVersion": "[variables('networkInterfacesApiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]", + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" ], "location": "[variables('location')]", - "name": "[variables('nicName')]", + "name": "[parameters('nicName')]", "properties": { "ipConfigurations": [ { @@ -73,7 +85,7 @@ "properties": { "privateIPAllocationMethod": "Dynamic", "publicIPAddress": { - "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))]" + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" }, "subnet": { "id": "[variables('subnetRef')]" @@ -87,7 +99,7 @@ { "apiVersion": "[variables('apiVersion')]", "dependsOn": [ - "[concat('Microsoft.Network/networkInterfaces/', 
variables('nicName'))]" + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" ], "location": "[variables('location')]", "name": "[parameters('vmName')]", @@ -103,7 +115,7 @@ "networkProfile": { "networkInterfaces": [ { - "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" } ] }, @@ -146,7 +158,7 @@ "osDisk": { "caching": "ReadWrite", "createOption": "FromImage", - "name": "osdisk", + "name": "[parameters('osDiskName')]", "vhd": { "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" } @@ -162,18 +174,16 @@ "location": "[resourceGroup().location]", "managedDiskApiVersion": "2017-03-30", "networkInterfacesApiVersion": "2017-04-01", - "nicName": "packerNic", "publicIPAddressApiVersion": "2017-04-01", - "publicIPAddressName": "packerPublicIP", "publicIPAddressType": "Dynamic", "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", "subnetAddressPrefix": "10.0.0.0/24", - "subnetName": "packerSubnet", + "subnetName": "[parameters('subnetName')]", "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", - "virtualNetworkName": "packerNetwork", - "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", "virtualNetworksApiVersion": "2017-04-01", - "vmStorageAccountContainerName": "images", - "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" - } + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } } \ No newline at end of file diff --git a/builder/azure/common/template/template_builder_test.TestBuildWindows01.approved.json b/builder/azure/common/template/template_builder_test.TestBuildWindows01.approved.json new file mode 100644 index 000000000..3add40537 --- /dev/null +++ b/builder/azure/common/template/template_builder_test.TestBuildWindows01.approved.json @@ -0,0 +1,212 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json", + "contentVersion": "1.0.0.0", + "parameters": { + "adminPassword": { + "type": "string" + }, + "adminUsername": { + "type": "string" + }, + "dnsNameForPublicIP": { + "type": "string" + }, + "nicName": { + "type": "string" + }, + "osDiskName": { + "type": "string" + }, + "publicIPAddressName": { + "type": "string" + }, + "storageAccountBlobEndpoint": { + "type": "string" + }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, + "vmName": { + "type": "string" + }, + "vmSize": { + "type": "string" + } + }, + "resources": [ + { + "apiVersion": "[variables('publicIPAddressApiVersion')]", + "location": "[variables('location')]", + "name": "[parameters('publicIPAddressName')]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('dnsNameForPublicIP')]" + }, + "publicIPAllocationMethod": "[variables('publicIPAddressType')]" + }, + "type": "Microsoft.Network/publicIPAddresses" + }, + { + "apiVersion": "[variables('virtualNetworksApiVersion')]", + "location": "[variables('location')]", + "name": "[variables('virtualNetworkName')]", + "properties": { + 
"addressSpace": { + "addressPrefixes": [ + "[variables('addressPrefix')]" + ] + }, + "subnets": [ + { + "name": "[variables('subnetName')]", + "properties": { + "addressPrefix": "[variables('subnetAddressPrefix')]" + } + } + ] + }, + "type": "Microsoft.Network/virtualNetworks" + }, + { + "apiVersion": "[variables('networkInterfacesApiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", + "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('nicName')]", + "properties": { + "ipConfigurations": [ + { + "name": "ipconfig", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" + }, + "subnet": { + "id": "[variables('subnetRef')]" + } + } + } + ] + }, + "type": "Microsoft.Network/networkInterfaces" + }, + { + "apiVersion": "[variables('apiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('vmName')]", + "properties": { + "diagnosticsProfile": { + "bootDiagnostics": { + "enabled": false + } + }, + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" + } + ] + }, + "osProfile": { + "adminPassword": "[parameters('adminPassword')]", + "adminUsername": "[parameters('adminUsername')]", + "computerName": "[parameters('vmName')]", + "secrets": [ + { + "sourceVault": { + "id": "[resourceId(resourceGroup().name, 'Microsoft.KeyVault/vaults', '--test-key-vault-name')]" + }, + "vaultCertificates": [ + { + "certificateStore": "My", + "certificateUrl": "--test-winrm-certificate-url--" + } + ] + } + ], + "windowsConfiguration": { + "provisionVMAgent": true, + "winRM": { + "listeners": [ + { + "certificateUrl": "--test-winrm-certificate-url--", + "protocol": "https" + } + ] + } + } + }, + "storageProfile": { + "dataDisks": [ + { + "caching": "ReadWrite", + "createOption": "Empty", + "diskSizeGB": 32, + "lun": 0, + "managedDisk": { + "storageAccountType": "Premium_LRS" + }, + "name": "datadisk-1" + }, + { + "caching": "ReadWrite", + "createOption": "Empty", + "diskSizeGB": 64, + "lun": 1, + "managedDisk": { + "storageAccountType": "Premium_LRS" + }, + "name": "datadisk-2" + } + ], + "imageReference": { + "offer": "2012-R2-Datacenter", + "publisher": "WindowsServer", + "sku": "latest", + "version": "2015-1" + }, + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "managedDisk": { + "storageAccountType": "Premium_LRS" + }, + "name": "[parameters('osDiskName')]", + "osType": "Windows" + } + } + }, + "type": "Microsoft.Compute/virtualMachines" + } + ], + "variables": { + "addressPrefix": "10.0.0.0/16", + "apiVersion": "2017-03-30", + "location": "[resourceGroup().location]", + "managedDiskApiVersion": "2017-03-30", + "networkInterfacesApiVersion": "2017-04-01", + "publicIPAddressApiVersion": "2017-04-01", + "publicIPAddressType": "Dynamic", + "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", + "subnetAddressPrefix": "10.0.0.0/24", + "subnetName": "[parameters('subnetName')]", + "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", + "virtualNetworkName": 
"[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworksApiVersion": "2017-04-01", + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } +} \ No newline at end of file diff --git a/builder/azure/common/template/template_builder_test.TestBuildWindows02.approved.json b/builder/azure/common/template/template_builder_test.TestBuildWindows02.approved.json new file mode 100644 index 000000000..7c44163c3 --- /dev/null +++ b/builder/azure/common/template/template_builder_test.TestBuildWindows02.approved.json @@ -0,0 +1,205 @@ +{ + "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json", + "contentVersion": "1.0.0.0", + "parameters": { + "adminPassword": { + "type": "string" + }, + "adminUsername": { + "type": "string" + }, + "dnsNameForPublicIP": { + "type": "string" + }, + "nicName": { + "type": "string" + }, + "osDiskName": { + "type": "string" + }, + "publicIPAddressName": { + "type": "string" + }, + "storageAccountBlobEndpoint": { + "type": "string" + }, + "subnetName": { + "type": "string" + }, + "virtualNetworkName": { + "type": "string" + }, + "vmName": { + "type": "string" + }, + "vmSize": { + "type": "string" + } + }, + "resources": [ + { + "apiVersion": "[variables('publicIPAddressApiVersion')]", + "location": "[variables('location')]", + "name": "[parameters('publicIPAddressName')]", + "properties": { + "dnsSettings": { + "domainNameLabel": "[parameters('dnsNameForPublicIP')]" + }, + "publicIPAllocationMethod": "[variables('publicIPAddressType')]" + }, + "type": "Microsoft.Network/publicIPAddresses" + }, + { + "apiVersion": "[variables('virtualNetworksApiVersion')]", + "location": "[variables('location')]", + "name": "[variables('virtualNetworkName')]", + "properties": { + "addressSpace": { + "addressPrefixes": [ + "[variables('addressPrefix')]" + ] + }, + "subnets": [ + { + "name": "[variables('subnetName')]", + "properties": { + "addressPrefix": "[variables('subnetAddressPrefix')]" + } + } + ] + }, + "type": "Microsoft.Network/virtualNetworks" + }, + { + "apiVersion": "[variables('networkInterfacesApiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPAddressName'))]", + "[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('nicName')]", + "properties": { + "ipConfigurations": [ + { + "name": "ipconfig", + "properties": { + "privateIPAllocationMethod": "Dynamic", + "publicIPAddress": { + "id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddressName'))]" + }, + "subnet": { + "id": "[variables('subnetRef')]" + } + } + } + ] + }, + "type": "Microsoft.Network/networkInterfaces" + }, + { + "apiVersion": "[variables('apiVersion')]", + "dependsOn": [ + "[concat('Microsoft.Network/networkInterfaces/', parameters('nicName'))]" + ], + "location": "[variables('location')]", + "name": "[parameters('vmName')]", + "properties": { + "diagnosticsProfile": { + "bootDiagnostics": { + "enabled": false + } + }, + "hardwareProfile": { + "vmSize": "[parameters('vmSize')]" + }, + "networkProfile": { + "networkInterfaces": [ + { + "id": "[resourceId('Microsoft.Network/networkInterfaces', parameters('nicName'))]" + } + ] + }, + "osProfile": { + "adminPassword": "[parameters('adminPassword')]", + 
"adminUsername": "[parameters('adminUsername')]", + "computerName": "[parameters('vmName')]", + "secrets": [ + { + "sourceVault": { + "id": "[resourceId(resourceGroup().name, 'Microsoft.KeyVault/vaults', '--test-key-vault-name')]" + }, + "vaultCertificates": [ + { + "certificateStore": "My", + "certificateUrl": "--test-winrm-certificate-url--" + } + ] + } + ], + "windowsConfiguration": { + "provisionVMAgent": true, + "winRM": { + "listeners": [ + { + "certificateUrl": "--test-winrm-certificate-url--", + "protocol": "https" + } + ] + } + } + }, + "storageProfile": { + "dataDisks": [ + { + "caching": "ReadWrite", + "createOption": "Empty", + "diskSizeGB": 32, + "lun": 0, + "name": "datadisk-1", + "vhd": { + "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/datadisk-', '1','.vhd')]" + } + }, + { + "caching": "ReadWrite", + "createOption": "Empty", + "diskSizeGB": 64, + "lun": 1, + "name": "datadisk-2", + "vhd": { + "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/datadisk-', '2','.vhd')]" + } + } + ], + "osDisk": { + "caching": "ReadWrite", + "createOption": "FromImage", + "name": "[parameters('osDiskName')]", + "vhd": { + "uri": "[concat(parameters('storageAccountBlobEndpoint'),variables('vmStorageAccountContainerName'),'/', parameters('osDiskName'),'.vhd')]" + } + } + } + }, + "type": "Microsoft.Compute/virtualMachines" + } + ], + "variables": { + "addressPrefix": "10.0.0.0/16", + "apiVersion": "2017-03-30", + "location": "[resourceGroup().location]", + "managedDiskApiVersion": "2017-03-30", + "networkInterfacesApiVersion": "2017-04-01", + "publicIPAddressApiVersion": "2017-04-01", + "publicIPAddressType": "Dynamic", + "sshKeyPath": "[concat('/home/',parameters('adminUsername'),'/.ssh/authorized_keys')]", + "subnetAddressPrefix": "10.0.0.0/24", + "subnetName": "[parameters('subnetName')]", + "subnetRef": "[concat(variables('vnetID'),'/subnets/',variables('subnetName'))]", + "virtualNetworkName": "[parameters('virtualNetworkName')]", + "virtualNetworkResourceGroup": "[resourceGroup().name]", + "virtualNetworksApiVersion": "2017-04-01", + "vmStorageAccountContainerName": "images", + "vnetID": "[resourceId(variables('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]" + } +} \ No newline at end of file diff --git a/builder/azure/common/template/template_builder_test.go b/builder/azure/common/template/template_builder_test.go index ec93a61a7..bc7baa175 100644 --- a/builder/azure/common/template/template_builder_test.go +++ b/builder/azure/common/template/template_builder_test.go @@ -3,7 +3,7 @@ package template import ( "testing" - "github.com/Azure/azure-sdk-for-go/arm/compute" + "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute" "github.com/approvals/go-approval-tests" ) @@ -120,3 +120,64 @@ func TestBuildWindows00(t *testing.T) { t.Fatal(err) } } + +// Windows build with additional disk for an managed build +func TestBuildWindows01(t *testing.T) { + testSubject, err := NewTemplateBuilder(BasicTemplate) + if err != nil { + t.Fatal(err) + } + + err = testSubject.BuildWindows("--test-key-vault-name", "--test-winrm-certificate-url--") + if err != nil { + t.Fatal(err) + } + + err = testSubject.SetManagedMarketplaceImage("MicrosoftWindowsServer", "WindowsServer", "2012-R2-Datacenter", "latest", "2015-1", "1", "Premium_LRS") + if err != nil { + t.Fatal(err) + } + + err = testSubject.SetAdditionalDisks([]int32{32, 64}, 
true) + if err != nil { + t.Fatal(err) + } + + doc, err := testSubject.ToJSON() + if err != nil { + t.Fatal(err) + } + + err = approvaltests.VerifyJSONBytes(t, []byte(*doc)) + if err != nil { + t.Fatal(err) + } +} + +// Windows build with additional disk for an unmanaged build +func TestBuildWindows02(t *testing.T) { + testSubject, err := NewTemplateBuilder(BasicTemplate) + if err != nil { + t.Fatal(err) + } + + err = testSubject.BuildWindows("--test-key-vault-name", "--test-winrm-certificate-url--") + if err != nil { + t.Fatal(err) + } + + err = testSubject.SetAdditionalDisks([]int32{32, 64}, false) + if err != nil { + t.Fatal(err) + } + + doc, err := testSubject.ToJSON() + if err != nil { + t.Fatal(err) + } + + err = approvaltests.VerifyJSONBytes(t, []byte(*doc)) + if err != nil { + t.Fatal(err) + } +} diff --git a/builder/azure/common/template/template_parameters.go b/builder/azure/common/template/template_parameters.go index 424a30c89..65bc55e75 100644 --- a/builder/azure/common/template/template_parameters.go +++ b/builder/azure/common/template/template_parameters.go @@ -24,9 +24,13 @@ type TemplateParameters struct { KeyVaultName *TemplateParameter `json:"keyVaultName,omitempty"` KeyVaultSecretValue *TemplateParameter `json:"keyVaultSecretValue,omitempty"` ObjectId *TemplateParameter `json:"objectId,omitempty"` + NicName *TemplateParameter `json:"nicName,omitempty"` OSDiskName *TemplateParameter `json:"osDiskName,omitempty"` + PublicIPAddressName *TemplateParameter `json:"publicIPAddressName,omitempty"` StorageAccountBlobEndpoint *TemplateParameter `json:"storageAccountBlobEndpoint,omitempty"` + SubnetName *TemplateParameter `json:"subnetName,omitempty"` TenantId *TemplateParameter `json:"tenantId,omitempty"` + VirtualNetworkName *TemplateParameter `json:"virtualNetworkName,omitempty"` VMSize *TemplateParameter `json:"vmSize,omitempty"` VMName *TemplateParameter `json:"vmName,omitempty"` } diff --git a/builder/azure/common/vault.go b/builder/azure/common/vault.go index db5c6db0c..ab54ee493 100644 --- a/builder/azure/common/vault.go +++ b/builder/azure/common/vault.go @@ -9,15 +9,18 @@ import ( "net/url" "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" ) const ( - AzureVaultApiVersion = "2015-06-01" + AzureVaultApiVersion = "2016-10-01" ) type VaultClient struct { autorest.Client keyVaultEndpoint url.URL + SubscriptionID string + baseURI string } func NewVaultClient(keyVaultEndpoint url.URL) VaultClient { @@ -26,6 +29,13 @@ func NewVaultClient(keyVaultEndpoint url.URL) VaultClient { } } +func NewVaultClientWithBaseURI(baseURI, subscriptionID string) VaultClient { + return VaultClient{ + baseURI: baseURI, + SubscriptionID: subscriptionID, + } +} + type Secret struct { ID *string `json:"id,omitempty"` Value string `json:"value"` @@ -76,6 +86,72 @@ func (client *VaultClient) GetSecret(vaultName, secretName string) (*Secret, err return &secret, nil } +// Delete deletes the specified Azure key vault. +// +// resourceGroupName is the name of the Resource Group to which the vault belongs. 
vaultName is the name of the vault +// to delete +func (client *VaultClient) Delete(resourceGroupName string, vaultName string) (result autorest.Response, err error) { + req, err := client.DeletePreparer(resourceGroupName, vaultName) + if err != nil { + err = autorest.NewErrorWithError(err, "keyvault.VaultsClient", "Delete", nil, "Failure preparing request") + return + } + + resp, err := client.DeleteSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "keyvault.VaultsClient", "Delete", resp, "Failure sending request") + return + } + + result, err = client.DeleteResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "keyvault.VaultsClient", "Delete", resp, "Failure responding to request") + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client *VaultClient) DeletePreparer(resourceGroupName string, vaultName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "SubscriptionID": autorest.Encode("path", client.SubscriptionID), + "vaultName": autorest.Encode("path", vaultName), + } + + queryParameters := map[string]interface{}{ + "api-version": AzureVaultApiVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.baseURI), + autorest.WithPathParameters("/subscriptions/{SubscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{vaultName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare(&http.Request{}) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client *VaultClient) DeleteSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, + req, + azure.DoPollForAsynchronous(client.PollingDelay)) +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client *VaultClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByClosing()) + result.Response = resp + return +} + func (client *VaultClient) getVaultUrl(vaultName string) string { return fmt.Sprintf("%s://%s.%s/", client.keyVaultEndpoint.Scheme, vaultName, client.keyVaultEndpoint.Host) } diff --git a/builder/azure/pkcs12/pkcs12.go b/builder/azure/pkcs12/pkcs12.go index 8279cb572..c5ce90f0d 100644 --- a/builder/azure/pkcs12/pkcs12.go +++ b/builder/azure/pkcs12/pkcs12.go @@ -92,7 +92,7 @@ const ( ) // unmarshal calls asn1.Unmarshal, but also returns an error if there is any -// trailing data after unmarshaling. +// trailing data after unmarshalling. 
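
The Delete/DeletePreparer/DeleteSender/DeleteResponder helpers added to VaultClient above follow the usual autorest prepare-send-respond pattern. A minimal sketch of how a caller might drive them; the base URI, subscription ID, and resource names are placeholders, and authorizer setup is omitted:

```
package main

import (
	"log"

	"github.com/hashicorp/packer/builder/azure/common"
)

func main() {
	// Hypothetical values; the real builder wires these in from its config and
	// attaches a bearer-token authorizer to the embedded autorest.Client.
	client := common.NewVaultClientWithBaseURI(
		"https://management.azure.com/",
		"00000000-0000-0000-0000-000000000000",
	)

	// Delete issues a DELETE against
	// /subscriptions/{id}/resourceGroups/{rg}/providers/Microsoft.KeyVault/vaults/{vault}
	// using the 2016-10-01 API version declared above.
	if _, err := client.Delete("packer-resource-group", "packer-key-vault"); err != nil {
		log.Fatalf("deleting key vault: %s", err)
	}
}
```
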
func unmarshal(in []byte, out interface{}) error { trailing, err := asn1.Unmarshal(in, out) if err != nil { @@ -398,7 +398,7 @@ func Encode(derBytes []byte, privateKey interface{}, password string) (pfxBytes return nil, err } - // Marhsal []contentInfo so we can re-constitute the byte stream that will + // Marshal []contentInfo so we can re-constitute the byte stream that will // be suitable for computing the MAC bytes, err := asn1.Marshal(contentInfos) if err != nil { diff --git a/builder/azure/pkcs12/safebags.go b/builder/azure/pkcs12/safebags.go index bbd1526b8..836236da3 100644 --- a/builder/azure/pkcs12/safebags.go +++ b/builder/azure/pkcs12/safebags.go @@ -83,7 +83,7 @@ func decodePkcs8ShroudedKeyBag(asn1Data, password []byte) (privateKey interface{ ret := new(asn1.RawValue) if err = unmarshal(pkData, ret); err != nil { - return nil, errors.New("pkcs12: error unmarshaling decrypted private key: " + err.Error()) + return nil, errors.New("pkcs12: error unmarshalling decrypted private key: " + err.Error()) } if privateKey, err = x509.ParsePKCS8PrivateKey(pkData); err != nil { diff --git a/builder/cloudstack/builder.go b/builder/cloudstack/builder.go index 27af37d17..6726b29f9 100644 --- a/builder/cloudstack/builder.go +++ b/builder/cloudstack/builder.go @@ -5,8 +5,8 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) diff --git a/builder/cloudstack/config.go b/builder/cloudstack/config.go index 7a617c6cd..3d15d27a2 100644 --- a/builder/cloudstack/config.go +++ b/builder/cloudstack/config.go @@ -27,26 +27,27 @@ type Config struct { HTTPGetOnly bool `mapstructure:"http_get_only"` SSLNoVerify bool `mapstructure:"ssl_no_verify"` - CIDRList []string `mapstructure:"cidr_list"` - CreateSecurityGroup bool `mapstructure:"create_security_group"` - DiskOffering string `mapstructure:"disk_offering"` - DiskSize int64 `mapstructure:"disk_size"` - Expunge bool `mapstructure:"expunge"` - Hypervisor string `mapstructure:"hypervisor"` - InstanceName string `mapstructure:"instance_name"` - Keypair string `mapstructure:"keypair"` - Network string `mapstructure:"network"` - Project string `mapstructure:"project"` - PublicIPAddress string `mapstructure:"public_ip_address"` - SecurityGroups []string `mapstructure:"security_groups"` - ServiceOffering string `mapstructure:"service_offering"` - SourceISO string `mapstructure:"source_iso"` - SourceTemplate string `mapstructure:"source_template"` - TemporaryKeypairName string `mapstructure:"temporary_keypair_name"` - UseLocalIPAddress bool `mapstructure:"use_local_ip_address"` - UserData string `mapstructure:"user_data"` - UserDataFile string `mapstructure:"user_data_file"` - Zone string `mapstructure:"zone"` + CIDRList []string `mapstructure:"cidr_list"` + CreateSecurityGroup bool `mapstructure:"create_security_group"` + DiskOffering string `mapstructure:"disk_offering"` + DiskSize int64 `mapstructure:"disk_size"` + Expunge bool `mapstructure:"expunge"` + Hypervisor string `mapstructure:"hypervisor"` + InstanceName string `mapstructure:"instance_name"` + Keypair string `mapstructure:"keypair"` + Network string `mapstructure:"network"` + Project string `mapstructure:"project"` + PublicIPAddress string `mapstructure:"public_ip_address"` + SecurityGroups []string `mapstructure:"security_groups"` + ServiceOffering string `mapstructure:"service_offering"` + 
PreventFirewallChanges bool `mapstructure:"prevent_firewall_changes"` + SourceISO string `mapstructure:"source_iso"` + SourceTemplate string `mapstructure:"source_template"` + TemporaryKeypairName string `mapstructure:"temporary_keypair_name"` + UseLocalIPAddress bool `mapstructure:"use_local_ip_address"` + UserData string `mapstructure:"user_data"` + UserDataFile string `mapstructure:"user_data_file"` + Zone string `mapstructure:"zone"` TemplateName string `mapstructure:"template_name"` TemplateDisplayText string `mapstructure:"template_display_text"` diff --git a/builder/cloudstack/ssh.go b/builder/cloudstack/ssh.go index 7a4c2fcba..e1c2ef846 100644 --- a/builder/cloudstack/ssh.go +++ b/builder/cloudstack/ssh.go @@ -6,7 +6,7 @@ import ( "os" packerssh "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) diff --git a/builder/cloudstack/step_configure_networking.go b/builder/cloudstack/step_configure_networking.go index d9a6a365c..577b4b1f2 100644 --- a/builder/cloudstack/step_configure_networking.go +++ b/builder/cloudstack/step_configure_networking.go @@ -1,13 +1,14 @@ package cloudstack import ( + "context" "fmt" "math/rand" "strings" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) @@ -16,7 +17,7 @@ type stepSetupNetworking struct { publicPort int } -func (s *stepSetupNetworking) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepSetupNetworking) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*cloudstack.CloudStackClient) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) @@ -116,6 +117,11 @@ func (s *stepSetupNetworking) Run(state multistep.StateBag) multistep.StepAction // Store the port forward ID. state.Put("port_forward_id", forward.Id) + if config.PreventFirewallChanges { + ui.Message("Networking has been setup (without firewall changes)!") + return multistep.ActionContinue + } + if network.Vpcid != "" { ui.Message("Creating network ACL rule...") @@ -186,7 +192,7 @@ func (s *stepSetupNetworking) Cleanup(state multistep.StateBag) { // Create a new parameter struct. p := client.Firewall.NewDeleteFirewallRuleParams(fwRuleID) - ui.Message("Deleting firewal rule...") + ui.Message("Deleting firewall rule...") if _, err := client.Firewall.DeleteFirewallRule(p); err != nil { // This is a very poor way to be told the ID does no longer exist :( if !strings.Contains(err.Error(), fmt.Sprintf( diff --git a/builder/cloudstack/step_create_instance.go b/builder/cloudstack/step_create_instance.go index 3e37fad14..81f0c0699 100644 --- a/builder/cloudstack/step_create_instance.go +++ b/builder/cloudstack/step_create_instance.go @@ -1,6 +1,7 @@ package cloudstack import ( + "context" "encoding/base64" "errors" "fmt" @@ -8,9 +9,9 @@ import ( "strings" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) @@ -27,7 +28,7 @@ type stepCreateInstance struct { } // Run executes the Packer build step that creates a CloudStack instance. 
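
The new `prevent_firewall_changes` option short-circuits stepSetupNetworking right after the port forward is created, so existing firewall and network ACL rules are left untouched. A hedged sketch of a builder configuration exercising it, in the map form the config tests use; every key other than `prevent_firewall_changes` is a placeholder and the minimal required set may differ:

```
package main

import "fmt"

func main() {
	// Placeholder CloudStack builder settings; prevent_firewall_changes is the
	// point of the example.
	raw := map[string]interface{}{
		"api_url":          "https://cloudstack.example.com/client/api",
		"api_key":          "example-api-key",
		"secret_key":       "example-secret-key",
		"network":          "example-network",
		"service_offering": "small",
		"source_template":  "example-base-template",
		"ssh_username":     "root",
		"template_os":      "Other Linux (64-bit)",
		"zone":             "example-zone",
		// Leave firewall/ACL rules alone; Packer still creates the port
		// forward it needs to reach the build instance.
		"prevent_firewall_changes": true,
	}
	fmt.Println(raw["prevent_firewall_changes"])
}
```
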
-func (s *stepCreateInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*cloudstack.CloudStackClient) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) @@ -46,7 +47,9 @@ func (s *stepCreateInstance) Run(state multistep.StateBag) multistep.StepAction p.SetDisplayname("Created by Packer") if keypair, ok := state.GetOk("keypair"); ok { - p.SetKeypair(keypair.(string)) + kp := keypair.(string) + ui.Message(fmt.Sprintf("Using keypair: %s", kp)) + p.SetKeypair(kp) } if securitygroups, ok := state.GetOk("security_groups"); ok { @@ -119,6 +122,7 @@ func (s *stepCreateInstance) Run(state multistep.StateBag) multistep.StepAction } ui.Message("Instance has been created!") + ui.Message(fmt.Sprintf("Instance ID: %s", instance.Id)) // In debug-mode, we output the password if s.Debug { @@ -225,7 +229,7 @@ func (s *stepCreateInstance) generateUserData(userData string, httpGETOnly bool) if len(ud) > maxUD { return "", fmt.Errorf( "The supplied user_data contains %d bytes after encoding, "+ - "this exeeds the limit of %d bytes", len(ud), maxUD) + "this exceeds the limit of %d bytes", len(ud), maxUD) } return ud, nil diff --git a/builder/cloudstack/step_create_security_group.go b/builder/cloudstack/step_create_security_group.go index 1bf23100b..c42eec5d5 100644 --- a/builder/cloudstack/step_create_security_group.go +++ b/builder/cloudstack/step_create_security_group.go @@ -1,11 +1,12 @@ package cloudstack import ( + "context" "fmt" "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) @@ -13,7 +14,7 @@ type stepCreateSecurityGroup struct { tempSG string } -func (s *stepCreateSecurityGroup) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateSecurityGroup) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*cloudstack.CloudStackClient) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/cloudstack/step_create_template.go b/builder/cloudstack/step_create_template.go index 18d6c1fb0..706d570f6 100644 --- a/builder/cloudstack/step_create_template.go +++ b/builder/cloudstack/step_create_template.go @@ -1,17 +1,18 @@ package cloudstack import ( + "context" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) type stepCreateTemplate struct{} -func (s *stepCreateTemplate) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateTemplate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*cloudstack.CloudStackClient) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) @@ -50,7 +51,7 @@ func (s *stepCreateTemplate) Run(state multistep.StateBag) multistep.StepAction } ui.Message("Retrieving the ROOT volume ID...") - volumeID, err := getRootVolumeID(client, instanceID) + volumeID, err := getRootVolumeID(client, instanceID, config.Project) if err != nil { state.Put("error", err) ui.Error(err.Error()) @@ -88,13 +89,16 @@ func (s *stepCreateTemplate) Cleanup(state multistep.StateBag) { // Nothing to cleanup for this step. 
} -func getRootVolumeID(client *cloudstack.CloudStackClient, instanceID string) (string, error) { +func getRootVolumeID(client *cloudstack.CloudStackClient, instanceID, projectID string) (string, error) { // Retrieve the virtual machine object. p := client.Volume.NewListVolumesParams() // Set the type and virtual machine ID p.SetType("ROOT") p.SetVirtualmachineid(instanceID) + if projectID != "" { + p.SetProjectid(projectID) + } volumes, err := client.Volume.ListVolumes(p) if err != nil { diff --git a/builder/cloudstack/step_keypair.go b/builder/cloudstack/step_keypair.go index 675994fc1..89e7e4d1f 100644 --- a/builder/cloudstack/step_keypair.go +++ b/builder/cloudstack/step_keypair.go @@ -1,13 +1,14 @@ package cloudstack import ( + "context" "fmt" "io/ioutil" "os" "runtime" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) @@ -20,7 +21,7 @@ type stepKeypair struct { TemporaryKeyPairName string } -func (s *stepKeypair) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepKeypair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.PrivateKeyFile != "" { @@ -59,6 +60,12 @@ func (s *stepKeypair) Run(state multistep.StateBag) multistep.StepAction { ui.Say(fmt.Sprintf("Creating temporary keypair: %s ...", s.TemporaryKeyPairName)) p := client.SSH.NewCreateSSHKeyPairParams(s.TemporaryKeyPairName) + + cfg := state.Get("config").(*Config) + if cfg.Project != "" { + p.SetProjectid(cfg.Project) + } + keypair, err := client.SSH.CreateSSHKeyPair(p) if err != nil { err := fmt.Errorf("Error creating temporary keypair: %s", err) @@ -119,12 +126,16 @@ func (s *stepKeypair) Cleanup(state multistep.StateBag) { ui := state.Get("ui").(packer.Ui) client := state.Get("client").(*cloudstack.CloudStackClient) + cfg := state.Get("config").(*Config) + + p := client.SSH.NewDeleteSSHKeyPairParams(s.TemporaryKeyPairName) + if cfg.Project != "" { + p.SetProjectid(cfg.Project) + } ui.Say(fmt.Sprintf("Deleting temporary keypair: %s ...", s.TemporaryKeyPairName)) - _, err := client.SSH.DeleteSSHKeyPair(client.SSH.NewDeleteSSHKeyPairParams( - s.TemporaryKeyPairName, - )) + _, err := client.SSH.DeleteSSHKeyPair(p) if err != nil { ui.Error(err.Error()) ui.Error(fmt.Sprintf( diff --git a/builder/cloudstack/step_prepare_config.go b/builder/cloudstack/step_prepare_config.go index 4d6353c6a..644547018 100644 --- a/builder/cloudstack/step_prepare_config.go +++ b/builder/cloudstack/step_prepare_config.go @@ -1,18 +1,19 @@ package cloudstack import ( + "context" "fmt" "io/ioutil" "regexp" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) type stepPrepareConfig struct{} -func (s *stepPrepareConfig) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepPrepareConfig) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*cloudstack.CloudStackClient) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/cloudstack/step_shutdown_instance.go b/builder/cloudstack/step_shutdown_instance.go index b090a5836..a7f07e8b2 100644 --- a/builder/cloudstack/step_shutdown_instance.go +++ b/builder/cloudstack/step_shutdown_instance.go @@ -1,16 +1,17 @@ package cloudstack import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" 
"github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/xanzy/go-cloudstack/cloudstack" ) type stepShutdownInstance struct{} -func (s *stepShutdownInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepShutdownInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*cloudstack.CloudStackClient) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/digitalocean/builder.go b/builder/digitalocean/builder.go index 16d11ba29..d5e674d63 100644 --- a/builder/digitalocean/builder.go +++ b/builder/digitalocean/builder.go @@ -12,8 +12,8 @@ import ( "github.com/digitalocean/godo" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "golang.org/x/oauth2" ) diff --git a/builder/digitalocean/ssh.go b/builder/digitalocean/ssh.go index ebccb938a..2b5c4ef8d 100644 --- a/builder/digitalocean/ssh.go +++ b/builder/digitalocean/ssh.go @@ -5,7 +5,7 @@ import ( "golang.org/x/crypto/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func commHost(state multistep.StateBag) (string, error) { diff --git a/builder/digitalocean/step_create_droplet.go b/builder/digitalocean/step_create_droplet.go index ed2c7390d..d9c8976bf 100644 --- a/builder/digitalocean/step_create_droplet.go +++ b/builder/digitalocean/step_create_droplet.go @@ -7,15 +7,15 @@ import ( "io/ioutil" "github.com/digitalocean/godo" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepCreateDroplet struct { dropletId int } -func (s *stepCreateDroplet) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateDroplet) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*godo.Client) ui := state.Get("ui").(packer.Ui) c := state.Get("config").(Config) diff --git a/builder/digitalocean/step_create_ssh_key.go b/builder/digitalocean/step_create_ssh_key.go index d255ffde0..c03809ba2 100644 --- a/builder/digitalocean/step_create_ssh_key.go +++ b/builder/digitalocean/step_create_ssh_key.go @@ -13,8 +13,8 @@ import ( "github.com/digitalocean/godo" "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "golang.org/x/crypto/ssh" ) @@ -25,7 +25,7 @@ type stepCreateSSHKey struct { keyId int } -func (s *stepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateSSHKey) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*godo.Client) ui := state.Get("ui").(packer.Ui) diff --git a/builder/digitalocean/step_droplet_info.go b/builder/digitalocean/step_droplet_info.go index 4d24bb899..5c4439749 100644 --- a/builder/digitalocean/step_droplet_info.go +++ b/builder/digitalocean/step_droplet_info.go @@ -5,13 +5,13 @@ import ( "fmt" "github.com/digitalocean/godo" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepDropletInfo struct{} -func (s *stepDropletInfo) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepDropletInfo) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := 
state.Get("client").(*godo.Client) ui := state.Get("ui").(packer.Ui) c := state.Get("config").(Config) diff --git a/builder/digitalocean/step_power_off.go b/builder/digitalocean/step_power_off.go index aeded228a..3b7e8bdd0 100644 --- a/builder/digitalocean/step_power_off.go +++ b/builder/digitalocean/step_power_off.go @@ -6,13 +6,13 @@ import ( "log" "github.com/digitalocean/godo" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepPowerOff struct{} -func (s *stepPowerOff) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepPowerOff) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*godo.Client) c := state.Get("config").(Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/digitalocean/step_shutdown.go b/builder/digitalocean/step_shutdown.go index dfa47dc4a..98d6c0c75 100644 --- a/builder/digitalocean/step_shutdown.go +++ b/builder/digitalocean/step_shutdown.go @@ -7,13 +7,13 @@ import ( "time" "github.com/digitalocean/godo" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepShutdown struct{} -func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepShutdown) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*godo.Client) c := state.Get("config").(Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/digitalocean/step_snapshot.go b/builder/digitalocean/step_snapshot.go index eb91fdead..0ccec359d 100644 --- a/builder/digitalocean/step_snapshot.go +++ b/builder/digitalocean/step_snapshot.go @@ -8,13 +8,13 @@ import ( "time" "github.com/digitalocean/godo" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepSnapshot struct{} -func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepSnapshot) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*godo.Client) ui := state.Get("ui").(packer.Ui) c := state.Get("config").(Config) @@ -111,6 +111,7 @@ func (s *stepSnapshot) Run(state multistep.StateBag) multistep.StepAction { ui.Error(err.Error()) return multistep.ActionHalt } + snapshotRegions = append(snapshotRegions, c.Region) log.Printf("Snapshot image ID: %d", imageId) state.Put("snapshot_image_id", imageId) diff --git a/builder/digitalocean/wait.go b/builder/digitalocean/wait.go index 2e4bd280c..684955853 100644 --- a/builder/digitalocean/wait.go +++ b/builder/digitalocean/wait.go @@ -198,12 +198,12 @@ func waitForImageState( } }() - log.Printf("Waiting for up to %d seconds for image transter to become %s", timeout/time.Second, desiredState) + log.Printf("Waiting for up to %d seconds for image transfer to become %s", timeout/time.Second, desiredState) select { case err := <-result: return err case <-time.After(timeout): - err := fmt.Errorf("Timeout while waiting to for image transter to become '%s'", desiredState) + err := fmt.Errorf("Timeout while waiting to for image transfer to become '%s'", desiredState) return err } } diff --git a/builder/docker/artifact_export_test.go b/builder/docker/artifact_export_test.go index 784535f1b..83d522090 100644 --- a/builder/docker/artifact_export_test.go +++ b/builder/docker/artifact_export_test.go @@ -1,8 +1,9 @@ package docker import ( - 
"github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestExportArtifact_impl(t *testing.T) { diff --git a/builder/docker/artifact_import_test.go b/builder/docker/artifact_import_test.go index 0a002344c..20aac11ee 100644 --- a/builder/docker/artifact_import_test.go +++ b/builder/docker/artifact_import_test.go @@ -2,8 +2,9 @@ package docker import ( "errors" - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestImportArtifact_impl(t *testing.T) { diff --git a/builder/docker/builder.go b/builder/docker/builder.go index c59456e80..ba58f61bb 100644 --- a/builder/docker/builder.go +++ b/builder/docker/builder.go @@ -5,8 +5,8 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) const ( diff --git a/builder/docker/builder_test.go b/builder/docker/builder_test.go index e196b0b30..546d30dd4 100644 --- a/builder/docker/builder_test.go +++ b/builder/docker/builder_test.go @@ -1,8 +1,9 @@ package docker import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestBuilder_implBuilder(t *testing.T) { diff --git a/builder/docker/comm.go b/builder/docker/comm.go index 8af3d57ca..bda53e0a1 100644 --- a/builder/docker/comm.go +++ b/builder/docker/comm.go @@ -6,7 +6,7 @@ import ( "github.com/hashicorp/packer/communicator/ssh" "github.com/hashicorp/packer/helper/communicator" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" ) diff --git a/builder/docker/communicator.go b/builder/docker/communicator.go index d15143172..e898ecd1c 100644 --- a/builder/docker/communicator.go +++ b/builder/docker/communicator.go @@ -105,7 +105,6 @@ func (c *Communicator) uploadReader(dst string, src io.Reader) error { // uploadFile uses docker cp to copy the file from the host to the container func (c *Communicator) uploadFile(dst string, src io.Reader, fi *os.FileInfo) error { - // command format: docker cp /path/to/infile containerid:/path/to/outfile log.Printf("Copying to %s on container %s.", dst, c.ContainerID) diff --git a/builder/docker/communicator_test.go b/builder/docker/communicator_test.go index bdfaef996..14bcdf496 100644 --- a/builder/docker/communicator_test.go +++ b/builder/docker/communicator_test.go @@ -67,11 +67,10 @@ func TestUploadDownload(t *testing.T) { hooks := map[string][]packer.Hook{} hooks[packer.HookProvision] = []packer.Hook{ &packer.ProvisionHook{ - Provisioners: []packer.Provisioner{ - upload, - download, + Provisioners: []*packer.HookedProvisioner{ + {upload, nil, ""}, + {download, nil, ""}, }, - ProvisionerTypes: []string{"", ""}, }, } hook := &packer.DispatchHook{Mapping: hooks} @@ -157,12 +156,11 @@ func TestLargeDownload(t *testing.T) { hooks := map[string][]packer.Hook{} hooks[packer.HookProvision] = []packer.Hook{ &packer.ProvisionHook{ - Provisioners: []packer.Provisioner{ - shell, - downloadCupcake, - downloadBigcake, + Provisioners: []*packer.HookedProvisioner{ + {shell, nil, ""}, + {downloadCupcake, nil, ""}, + {downloadBigcake, nil, ""}, }, - ProvisionerTypes: []string{"", "", ""}, }, } hook := &packer.DispatchHook{Mapping: hooks} @@ -203,8 +201,8 @@ func TestLargeDownload(t *testing.T) { // or ui output to do this because we use /dev/urandom to create the file. 
// if sha256.Sum256(inputFile) != sha256.Sum256(outputFile) { - // t.Fatalf("Input and output files do not match\n"+ - // "Input:\n%s\nOutput:\n%s\n", inputFile, outputFile) + // t.Fatalf("Input and output files do not match\n"+ + // "Input:\n%s\nOutput:\n%s\n", inputFile, outputFile) // } } @@ -267,13 +265,12 @@ func TestFixUploadOwner(t *testing.T) { hooks := map[string][]packer.Hook{} hooks[packer.HookProvision] = []packer.Hook{ &packer.ProvisionHook{ - Provisioners: []packer.Provisioner{ - fileProvisioner, - dirProvisioner, - shellProvisioner, - verifyProvisioner, + Provisioners: []*packer.HookedProvisioner{ + {fileProvisioner, nil, ""}, + {dirProvisioner, nil, ""}, + {shellProvisioner, nil, ""}, + {verifyProvisioner, nil, ""}, }, - ProvisionerTypes: []string{"", "", "", ""}, }, } hook := &packer.DispatchHook{Mapping: hooks} diff --git a/builder/docker/driver_docker.go b/builder/docker/driver_docker.go index c3db3dda6..50a2b9920 100644 --- a/builder/docker/driver_docker.go +++ b/builder/docker/driver_docker.go @@ -150,24 +150,56 @@ func (d *DockerDriver) IPAddress(id string) (string, error) { func (d *DockerDriver) Login(repo, user, pass string) error { d.l.Lock() - args := []string{"login"} - if user != "" { - args = append(args, "-u", user) - } - if pass != "" { - args = append(args, "-p", pass) - } - if repo != "" { - args = append(args, repo) - } - - cmd := exec.Command("docker", args...) - err := runAndStream(cmd, d.Ui) + version_running, err := d.Version() if err != nil { d.l.Unlock() + return err } - return err + // Version 17.07.0 of Docker adds support for the new + // `--password-stdin` option which can be used to offer + // password via the standard input, rather than passing + // the password and/or token using a command line switch. 
+ constraint, err := version.NewConstraint(">= 17.07.0") + if err != nil { + d.l.Unlock() + return err + } + + cmd := exec.Command("docker") + cmd.Args = append(cmd.Args, "login") + + if user != "" { + cmd.Args = append(cmd.Args, "-u", user) + } + + if pass != "" { + if constraint.Check(version_running) { + cmd.Args = append(cmd.Args, "--password-stdin") + + stdin, err := cmd.StdinPipe() + if err != nil { + d.l.Unlock() + return err + } + io.WriteString(stdin, pass) + stdin.Close() + } else { + cmd.Args = append(cmd.Args, "-p", pass) + } + } + + if repo != "" { + cmd.Args = append(cmd.Args, repo) + } + + err = runAndStream(cmd, d.Ui) + if err != nil { + d.l.Unlock() + return err + } + + return nil } func (d *DockerDriver) Logout(repo string) error { @@ -292,7 +324,7 @@ func (d *DockerDriver) TagImage(id string, repo string, force bool) error { return err } - version_deprecated, err := version.NewVersion(string("1.12.0")) + version_deprecated, err := version.NewVersion("1.12.0") if err != nil { // should never reach this line return err diff --git a/builder/docker/exec.go b/builder/docker/exec.go index 201055b74..386c5aa79 100644 --- a/builder/docker/exec.go +++ b/builder/docker/exec.go @@ -2,8 +2,6 @@ package docker import ( "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/iochan" "io" "log" "os/exec" @@ -11,6 +9,9 @@ import ( "strings" "sync" "syscall" + + "github.com/hashicorp/packer/packer" + "github.com/mitchellh/iochan" ) func runAndStream(cmd *exec.Cmd, ui packer.Ui) error { @@ -19,7 +20,18 @@ func runAndStream(cmd *exec.Cmd, ui packer.Ui) error { defer stdout_w.Close() defer stderr_w.Close() - log.Printf("Executing: %s %v", cmd.Path, cmd.Args[1:]) + args := make([]string, len(cmd.Args)-1) + copy(args, cmd.Args[1:]) + + // Scrub password from the log output. + for i, v := range args { + if v == "-p" || v == "--password" { + args[i+1] = "" + break + } + } + + log.Printf("Executing: %s %v", cmd.Path, args) cmd.Stdout = stdout_w cmd.Stderr = stderr_w if err := cmd.Start(); err != nil { diff --git a/builder/docker/step_commit.go b/builder/docker/step_commit.go index 94205bc2f..d248ee64c 100644 --- a/builder/docker/step_commit.go +++ b/builder/docker/step_commit.go @@ -1,9 +1,11 @@ package docker import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCommit commits the container to a image. 
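
The login change above gates the new flag on the Docker client version reported by the driver. A small standalone sketch of the same go-version check; the running version string below is just an example value:

```
package main

import (
	"fmt"

	"github.com/hashicorp/go-version"
)

func main() {
	// Docker >= 17.07.0 understands --password-stdin; older clients only
	// accept the password via -p on the command line.
	constraint, err := version.NewConstraint(">= 17.07.0")
	if err != nil {
		panic(err)
	}
	running, err := version.NewVersion("17.09.0") // example value
	if err != nil {
		panic(err)
	}

	if constraint.Check(running) {
		fmt.Println("docker login --password-stdin  (password piped on stdin)")
	} else {
		fmt.Println("docker login -p <password>      (legacy flag)")
	}
}
```
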
@@ -11,7 +13,7 @@ type StepCommit struct { imageId string } -func (s *StepCommit) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCommit) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) containerId := state.Get("container_id").(string) config := state.Get("config").(*Config) diff --git a/builder/docker/step_commit_test.go b/builder/docker/step_commit_test.go index 29e583f81..16548b069 100644 --- a/builder/docker/step_commit_test.go +++ b/builder/docker/step_commit_test.go @@ -1,9 +1,11 @@ package docker import ( + "context" "errors" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func testStepCommitState(t *testing.T) multistep.StateBag { @@ -25,7 +27,7 @@ func TestStepCommit(t *testing.T) { driver.CommitImageId = "bar" // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -55,7 +57,7 @@ func TestStepCommit_error(t *testing.T) { driver.CommitErr = errors.New("foo") // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/docker/step_connect_docker.go b/builder/docker/step_connect_docker.go index 45cf2d25d..ef0222a91 100644 --- a/builder/docker/step_connect_docker.go +++ b/builder/docker/step_connect_docker.go @@ -1,15 +1,17 @@ package docker import ( + "context" "fmt" - "github.com/mitchellh/multistep" "os/exec" "strings" + + "github.com/hashicorp/packer/helper/multistep" ) type StepConnectDocker struct{} -func (s *StepConnectDocker) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConnectDocker) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) containerId := state.Get("container_id").(string) driver := state.Get("driver").(Driver) diff --git a/builder/docker/step_export.go b/builder/docker/step_export.go index 428055a48..138e0f00c 100644 --- a/builder/docker/step_export.go +++ b/builder/docker/step_export.go @@ -1,18 +1,19 @@ package docker import ( + "context" "fmt" "os" "path/filepath" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepExport exports the container to a flat tar file. 
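
These step conversions all follow the same shape: the import moves from github.com/mitchellh/multistep to github.com/hashicorp/packer/helper/multistep and Run gains a context.Context. A minimal skeleton of a step under the new API; StepNoop is illustrative, not a real Packer step:

```
package docker

import (
	"context"

	"github.com/hashicorp/packer/helper/multistep"
	"github.com/hashicorp/packer/packer"
)

// StepNoop demonstrates the context-aware Run signature used by the
// converted steps above.
type StepNoop struct{}

func (s *StepNoop) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
	ui := state.Get("ui").(packer.Ui)

	// Honor cancellation before doing any work.
	select {
	case <-ctx.Done():
		ui.Error("step cancelled")
		return multistep.ActionHalt
	default:
	}

	ui.Message("nothing to do")
	return multistep.ActionContinue
}

func (s *StepNoop) Cleanup(state multistep.StateBag) {}
```
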
type StepExport struct{} -func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepExport) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) diff --git a/builder/docker/step_export_test.go b/builder/docker/step_export_test.go index d07d547c2..aa3fd124b 100644 --- a/builder/docker/step_export_test.go +++ b/builder/docker/step_export_test.go @@ -2,11 +2,13 @@ package docker import ( "bytes" + "context" "errors" - "github.com/mitchellh/multistep" "io/ioutil" "os" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func testStepExportState(t *testing.T) multistep.StateBag { @@ -38,7 +40,7 @@ func TestStepExport(t *testing.T) { driver.ExportReader = bytes.NewReader([]byte("data!")) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -83,7 +85,7 @@ func TestStepExport_error(t *testing.T) { driver.ExportError = errors.New("foo") // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/docker/step_pull.go b/builder/docker/step_pull.go index 6b1ac8935..fd011680e 100644 --- a/builder/docker/step_pull.go +++ b/builder/docker/step_pull.go @@ -1,16 +1,17 @@ package docker import ( + "context" "fmt" "log" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepPull struct{} -func (s *StepPull) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPull) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/docker/step_pull_test.go b/builder/docker/step_pull_test.go index 7ba5705d5..b01b42107 100644 --- a/builder/docker/step_pull_test.go +++ b/builder/docker/step_pull_test.go @@ -1,9 +1,11 @@ package docker import ( + "context" "errors" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepPull_impl(t *testing.T) { @@ -19,7 +21,7 @@ func TestStepPull(t *testing.T) { driver := state.Get("driver").(*MockDriver) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -41,7 +43,7 @@ func TestStepPull_error(t *testing.T) { driver.PullError = errors.New("foo") // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -62,7 +64,7 @@ func TestStepPull_login(t *testing.T) { config.Login = true // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -91,7 +93,7 @@ func TestStepPull_noPull(t *testing.T) { driver := state.Get("driver").(*MockDriver) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action 
!= multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } diff --git a/builder/docker/step_run.go b/builder/docker/step_run.go index 5a56d8d7d..1d2ba6862 100644 --- a/builder/docker/step_run.go +++ b/builder/docker/step_run.go @@ -1,16 +1,18 @@ package docker import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepRun struct { containerId string } -func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRun) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) tempDir := state.Get("temp_dir").(string) diff --git a/builder/docker/step_run_test.go b/builder/docker/step_run_test.go index 9ce556b12..bc6639319 100644 --- a/builder/docker/step_run_test.go +++ b/builder/docker/step_run_test.go @@ -1,9 +1,11 @@ package docker import ( + "context" "errors" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func testStepRunState(t *testing.T) multistep.StateBag { @@ -26,7 +28,7 @@ func TestStepRun(t *testing.T) { driver.StartID = "foo" // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -73,7 +75,7 @@ func TestStepRun_error(t *testing.T) { driver.StartError = errors.New("foo") // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/docker/step_temp_dir.go b/builder/docker/step_temp_dir.go index 7e84b908f..a5b1d6ade 100644 --- a/builder/docker/step_temp_dir.go +++ b/builder/docker/step_temp_dir.go @@ -1,11 +1,13 @@ package docker import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "io/ioutil" "os" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepTempDir creates a temporary directory that we use in order to @@ -14,7 +16,7 @@ type StepTempDir struct { tempDir string } -func (s *StepTempDir) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepTempDir) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) ui.Say("Creating a temporary directory for sharing data...") diff --git a/builder/docker/step_temp_dir_test.go b/builder/docker/step_temp_dir_test.go index cc12932f3..e36d4fcff 100644 --- a/builder/docker/step_temp_dir_test.go +++ b/builder/docker/step_temp_dir_test.go @@ -1,12 +1,13 @@ package docker import ( + "context" "os" "path/filepath" "testing" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func TestStepTempDir_impl(t *testing.T) { @@ -24,7 +25,7 @@ func testStepTempDir_impl(t *testing.T) string { } // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } diff --git a/builder/docker/step_test.go b/builder/docker/step_test.go index 4278fb50d..0c7d56649 100644 --- a/builder/docker/step_test.go +++ b/builder/docker/step_test.go @@ -2,9 +2,10 @@ package docker import ( "bytes" - 
"github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/file/builder.go b/builder/file/builder.go index 1ac87331c..407bfcee5 100644 --- a/builder/file/builder.go +++ b/builder/file/builder.go @@ -2,7 +2,7 @@ package file /* The File builder creates an artifact from a file. Because it does not require -any virutalization or network resources, it's very fast and useful for testing. +any virtualization or network resources, it's very fast and useful for testing. */ import ( @@ -11,8 +11,8 @@ import ( "io/ioutil" "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) const BuilderId = "packer.file" diff --git a/builder/googlecompute/artifact_test.go b/builder/googlecompute/artifact_test.go index 9d028c1a3..bb803056c 100644 --- a/builder/googlecompute/artifact_test.go +++ b/builder/googlecompute/artifact_test.go @@ -1,8 +1,9 @@ package googlecompute import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestArtifact_impl(t *testing.T) { diff --git a/builder/googlecompute/builder.go b/builder/googlecompute/builder.go index 589108348..c60117075 100644 --- a/builder/googlecompute/builder.go +++ b/builder/googlecompute/builder.go @@ -8,8 +8,8 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // The unique ID for this builder. diff --git a/builder/googlecompute/config.go b/builder/googlecompute/config.go index 8ed3897d6..11e8ac47f 100644 --- a/builder/googlecompute/config.go +++ b/builder/googlecompute/config.go @@ -26,36 +26,39 @@ type Config struct { AccountFile string `mapstructure:"account_file"` ProjectId string `mapstructure:"project_id"` - AcceleratorType string `mapstructure:"accelerator_type"` - AcceleratorCount int64 `mapstructure:"accelerator_count"` - Address string `mapstructure:"address"` - DiskName string `mapstructure:"disk_name"` - DiskSizeGb int64 `mapstructure:"disk_size"` - DiskType string `mapstructure:"disk_type"` - ImageName string `mapstructure:"image_name"` - ImageDescription string `mapstructure:"image_description"` - ImageFamily string `mapstructure:"image_family"` - ImageLabels map[string]string `mapstructure:"image_labels"` - InstanceName string `mapstructure:"instance_name"` - Labels map[string]string `mapstructure:"labels"` - MachineType string `mapstructure:"machine_type"` - Metadata map[string]string `mapstructure:"metadata"` - Network string `mapstructure:"network"` - NetworkProjectId string `mapstructure:"network_project_id"` - OmitExternalIP bool `mapstructure:"omit_external_ip"` - OnHostMaintenance string `mapstructure:"on_host_maintenance"` - Preemptible bool `mapstructure:"preemptible"` - RawStateTimeout string `mapstructure:"state_timeout"` - Region string `mapstructure:"region"` - Scopes []string `mapstructure:"scopes"` - SourceImage string `mapstructure:"source_image"` - SourceImageFamily string `mapstructure:"source_image_family"` - SourceImageProjectId string `mapstructure:"source_image_project_id"` - StartupScriptFile string `mapstructure:"startup_script_file"` - Subnetwork string `mapstructure:"subnetwork"` - Tags []string `mapstructure:"tags"` - UseInternalIP bool 
`mapstructure:"use_internal_ip"` - Zone string `mapstructure:"zone"` + AcceleratorType string `mapstructure:"accelerator_type"` + AcceleratorCount int64 `mapstructure:"accelerator_count"` + Address string `mapstructure:"address"` + DisableDefaultServiceAccount bool `mapstructure:"disable_default_service_account"` + DiskName string `mapstructure:"disk_name"` + DiskSizeGb int64 `mapstructure:"disk_size"` + DiskType string `mapstructure:"disk_type"` + ImageName string `mapstructure:"image_name"` + ImageDescription string `mapstructure:"image_description"` + ImageFamily string `mapstructure:"image_family"` + ImageLabels map[string]string `mapstructure:"image_labels"` + ImageLicenses []string `mapstructure:"image_licenses"` + InstanceName string `mapstructure:"instance_name"` + Labels map[string]string `mapstructure:"labels"` + MachineType string `mapstructure:"machine_type"` + Metadata map[string]string `mapstructure:"metadata"` + Network string `mapstructure:"network"` + NetworkProjectId string `mapstructure:"network_project_id"` + OmitExternalIP bool `mapstructure:"omit_external_ip"` + OnHostMaintenance string `mapstructure:"on_host_maintenance"` + Preemptible bool `mapstructure:"preemptible"` + RawStateTimeout string `mapstructure:"state_timeout"` + Region string `mapstructure:"region"` + Scopes []string `mapstructure:"scopes"` + ServiceAccountEmail string `mapstructure:"service_account_email"` + SourceImage string `mapstructure:"source_image"` + SourceImageFamily string `mapstructure:"source_image_family"` + SourceImageProjectId string `mapstructure:"source_image_project_id"` + StartupScriptFile string `mapstructure:"startup_script_file"` + Subnetwork string `mapstructure:"subnetwork"` + Tags []string `mapstructure:"tags"` + UseInternalIP bool `mapstructure:"use_internal_ip"` + Zone string `mapstructure:"zone"` Account AccountFile stateTimeout time.Duration @@ -222,6 +225,11 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) { errs = packer.MultiErrorAppend(fmt.Errorf("'on_host_maintenance' must be set to 'TERMINATE' when 'accelerator_count' is more than 0")) } + // If DisableDefaultServiceAccount is provided, don't allow a value for ServiceAccountEmail + if c.DisableDefaultServiceAccount && c.ServiceAccountEmail != "" { + errs = packer.MultiErrorAppend(fmt.Errorf("you may not specify a 'service_account_email' when 'disable_default_service_account' is true")) + } + // Check for any errors. 
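
The new `disable_default_service_account` and `service_account_email` settings are mutually exclusive, as the validation added above enforces. A hedged sketch of a googlecompute configuration in the map form the tests use; values are placeholders and the minimal required keys may differ:

```
package main

import "fmt"

func main() {
	// Placeholder googlecompute builder settings; the service-account key is
	// the point of the example.
	raw := map[string]interface{}{
		"project_id":   "my-project",
		"source_image": "example-source-image",
		"ssh_username": "packer",
		"zone":         "us-central1-a",
		// Run the build instance with no service account at all. Leave
		// service_account_email unset when this is true, or NewConfig
		// returns an error.
		"disable_default_service_account": true,
	}
	fmt.Println(raw["disable_default_service_account"])
}
```
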
if errs != nil && len(errs.Errors) > 0 { return nil, nil, errs diff --git a/builder/googlecompute/config_test.go b/builder/googlecompute/config_test.go index 551692b29..2da3faa59 100644 --- a/builder/googlecompute/config_test.go +++ b/builder/googlecompute/config_test.go @@ -3,6 +3,7 @@ package googlecompute import ( "fmt" "io/ioutil" + "os" "strings" "testing" ) @@ -170,10 +171,37 @@ func TestConfigPrepare(t *testing.T) { []string{"https://www.googleapis.com/auth/cloud-platform"}, false, }, + + { + "disable_default_service_account", + "", + false, + }, + { + "disable_default_service_account", + nil, + false, + }, + { + "disable_default_service_account", + false, + false, + }, + { + "disable_default_service_account", + true, + false, + }, + { + "disable_default_service_account", + "NOT A BOOL", + true, + }, } for _, tc := range cases { - raw := testConfig(t) + raw, tempfile := testConfig(t) + defer os.Remove(tempfile) if tc.Value == nil { delete(raw, tc.Key) @@ -225,7 +253,58 @@ func TestConfigPrepareAccelerator(t *testing.T) { } for _, tc := range cases { - raw := testConfig(t) + raw, tempfile := testConfig(t) + defer os.Remove(tempfile) + + errStr := "" + for k := range tc.Keys { + + // Create the string for error reporting + // convert value to string if it can be converted + errStr += fmt.Sprintf("%s:%v, ", tc.Keys[k], tc.Values[k]) + if tc.Values[k] == nil { + delete(raw, tc.Keys[k]) + } else { + raw[tc.Keys[k]] = tc.Values[k] + } + } + + _, warns, errs := NewConfig(raw) + + if tc.Err { + testConfigErr(t, warns, errs, strings.TrimRight(errStr, ", ")) + } else { + testConfigOk(t, warns, errs) + } + } +} + +func TestConfigPrepareServiceAccount(t *testing.T) { + cases := []struct { + Keys []string + Values []interface{} + Err bool + }{ + { + []string{"disable_default_service_account", "service_account_email"}, + []interface{}{true, "service@account.email.com"}, + true, + }, + { + []string{"disable_default_service_account", "service_account_email"}, + []interface{}{false, "service@account.email.com"}, + false, + }, + { + []string{"disable_default_service_account", "service_account_email"}, + []interface{}{true, ""}, + false, + }, + } + + for _, tc := range cases { + raw, tempfile := testConfig(t) + defer os.Remove(tempfile) errStr := "" for k := range tc.Keys { @@ -267,7 +346,8 @@ func TestConfigDefaults(t *testing.T) { } for _, tc := range cases { - raw := testConfig(t) + raw, tempfile := testConfig(t) + defer os.Remove(tempfile) c, warns, errs := NewConfig(raw) testConfigOk(t, warns, errs) @@ -280,7 +360,10 @@ func TestConfigDefaults(t *testing.T) { } func TestImageName(t *testing.T) { - c, _, _ := NewConfig(testConfig(t)) + raw, tempfile := testConfig(t) + defer os.Remove(tempfile) + + c, _, _ := NewConfig(raw) if !strings.HasPrefix(c.ImageName, "packer-") { t.Fatalf("ImageName should have 'packer-' prefix, found %s", c.ImageName) } @@ -290,7 +373,10 @@ func TestImageName(t *testing.T) { } func TestRegion(t *testing.T) { - c, _, _ := NewConfig(testConfig(t)) + raw, tempfile := testConfig(t) + defer os.Remove(tempfile) + + c, _, _ := NewConfig(raw) if c.Region != "us-east1" { t.Fatalf("Region should be 'us-east1' given Zone of 'us-east1-a', but is %s", c.Region) } @@ -298,9 +384,11 @@ func TestRegion(t *testing.T) { // Helper stuff below -func testConfig(t *testing.T) map[string]interface{} { - return map[string]interface{}{ - "account_file": testAccountFile(t), +func testConfig(t *testing.T) (config map[string]interface{}, tempAccountFile string) { + tempAccountFile = 
testAccountFile(t) + + config = map[string]interface{}{ + "account_file": tempAccountFile, "project_id": "hashicorp", "source_image": "foo", "ssh_username": "root", @@ -309,12 +397,20 @@ func testConfig(t *testing.T) map[string]interface{} { "label-1": "value-1", "label-2": "value-2", }, + "image_licenses": []string{ + "test-license", + }, "zone": "us-east1-a", } + + return config, tempAccountFile } func testConfigStruct(t *testing.T) *Config { - c, warns, errs := NewConfig(testConfig(t)) + raw, tempfile := testConfig(t) + defer os.Remove(tempfile) + + c, warns, errs := NewConfig(raw) if len(warns) > 0 { t.Fatalf("bad: %#v", len(warns)) } diff --git a/builder/googlecompute/driver.go b/builder/googlecompute/driver.go index bd302ddc0..54881dec8 100644 --- a/builder/googlecompute/driver.go +++ b/builder/googlecompute/driver.go @@ -11,7 +11,7 @@ import ( type Driver interface { // CreateImage creates an image from the given disk in Google Compute // Engine. - CreateImage(name, description, family, zone, disk string, image_labels map[string]string) (<-chan *Image, <-chan error) + CreateImage(name, description, family, zone, disk string, image_labels map[string]string, image_licenses []string) (<-chan *Image, <-chan error) // DeleteImage deletes the image with the given name. DeleteImage(name string) <-chan error @@ -58,30 +58,32 @@ type Driver interface { } type InstanceConfig struct { - AcceleratorType string - AcceleratorCount int64 - Address string - Description string - DiskSizeGb int64 - DiskType string - Image *Image - Labels map[string]string - MachineType string - Metadata map[string]string - Name string - Network string - NetworkProjectId string - OmitExternalIP bool - OnHostMaintenance string - Preemptible bool - Region string - Scopes []string - Subnetwork string - Tags []string - Zone string + AcceleratorType string + AcceleratorCount int64 + Address string + Description string + DisableDefaultServiceAccount bool + DiskSizeGb int64 + DiskType string + Image *Image + Labels map[string]string + MachineType string + Metadata map[string]string + Name string + Network string + NetworkProjectId string + OmitExternalIP bool + OnHostMaintenance string + Preemptible bool + Region string + ServiceAccountEmail string + Scopes []string + Subnetwork string + Tags []string + Zone string } -// WindowsPasswordConfig is the data structue that GCE needs to encrypt the created +// WindowsPasswordConfig is the data structure that GCE needs to encrypt the created // windows password. 
type WindowsPasswordConfig struct { key *rsa.PrivateKey diff --git a/builder/googlecompute/driver_gce.go b/builder/googlecompute/driver_gce.go index 69af625e9..971d0f95f 100644 --- a/builder/googlecompute/driver_gce.go +++ b/builder/googlecompute/driver_gce.go @@ -10,15 +10,14 @@ import ( "fmt" "log" "net/http" - "runtime" "strings" "time" - "google.golang.org/api/compute/v1" + compute "google.golang.org/api/compute/v1" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/useragent" "github.com/hashicorp/packer/packer" - "github.com/hashicorp/packer/version" "golang.org/x/oauth2" "golang.org/x/oauth2/google" @@ -81,15 +80,13 @@ func NewDriverGCE(ui packer.Ui, p string, a *AccountFile) (Driver, error) { log.Printf("[INFO] Instantiating GCE client...") service, err := compute.New(client) - // Set UserAgent - versionString := version.FormattedVersion() - service.UserAgent = fmt.Sprintf( - "(%s %s) Packer/%s", runtime.GOOS, runtime.GOARCH, versionString) - if err != nil { return nil, err } + // Set UserAgent + service.UserAgent = useragent.String() + return &driverGCE{ projectId: p, service: service, @@ -97,12 +94,13 @@ func NewDriverGCE(ui packer.Ui, p string, a *AccountFile) (Driver, error) { }, nil } -func (d *driverGCE) CreateImage(name, description, family, zone, disk string, image_labels map[string]string) (<-chan *Image, <-chan error) { +func (d *driverGCE) CreateImage(name, description, family, zone, disk string, image_labels map[string]string, image_licenses []string) (<-chan *Image, <-chan error) { gce_image := &compute.Image{ Description: description, Name: name, Family: family, Labels: image_labels, + Licenses: image_licenses, SourceDisk: fmt.Sprintf("%s%s/zones/%s/disks/%s", d.service.BasePath, d.projectId, zone, disk), SourceType: "RAW", } @@ -170,7 +168,7 @@ func (d *driverGCE) DeleteDisk(zone, name string) (<-chan error, error) { } func (d *driverGCE) GetImage(name string, fromFamily bool) (*Image, error) { - projects := []string{d.projectId, "centos-cloud", "coreos-cloud", "debian-cloud", "google-containers", "opensuse-cloud", "rhel-cloud", "suse-cloud", "ubuntu-os-cloud", "windows-cloud", "gce-nvme"} + projects := []string{d.projectId, "centos-cloud", "coreos-cloud", "cos-cloud", "debian-cloud", "google-containers", "opensuse-cloud", "rhel-cloud", "suse-cloud", "ubuntu-os-cloud", "windows-cloud", "gce-nvme", "windows-sql-cloud", "rhel-sap-cloud"} var errs error for _, project := range projects { image, err := d.GetImageFromProject(project, name, fromFamily) @@ -342,6 +340,20 @@ func (d *driverGCE) RunInstance(c *InstanceConfig) (<-chan error, error) { guestAccelerators = append(guestAccelerators, ac) } + // Configure the instance's service account. If the user has set + // disable_default_service_account, then the default service account + // will not be used. If they also do not set service_account_email, then + // the instance will be created with no service account or scopes. 
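The comment that closes the hunk above spells out the service-account selection rules the following code implements. A small standalone sketch (not part of this diff) of those rules, using only the `compute.ServiceAccount` fields already referenced in this file; the email and scope values are placeholders:

```
package main

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

// pickServiceAccount restates the rules from the comment above as a helper:
// keep the default account unless it is disabled, and prefer an explicit
// email whenever one is configured.
func pickServiceAccount(disableDefault bool, email string, scopes []string) *compute.ServiceAccount {
	sa := &compute.ServiceAccount{}
	if !disableDefault {
		sa.Email = "default"
		sa.Scopes = scopes
	}
	if email != "" {
		sa.Email = email
		sa.Scopes = scopes
	}
	return sa
}

func main() {
	scopes := []string{"https://www.googleapis.com/auth/cloud-platform"}
	fmt.Println(pickServiceAccount(false, "", scopes).Email)                   // "default"
	fmt.Println(pickServiceAccount(true, "builder@example.com", scopes).Email) // custom email
	fmt.Println(pickServiceAccount(true, "", scopes).Email == "")              // true: no email or scopes set,
	// which, per the comment above, is intended to create the instance with no service account.
}
```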
+ serviceAccount := &compute.ServiceAccount{} + if !c.DisableDefaultServiceAccount { + serviceAccount.Email = "default" + serviceAccount.Scopes = c.Scopes + } + if c.ServiceAccountEmail != "" { + serviceAccount.Email = c.ServiceAccountEmail + serviceAccount.Scopes = c.Scopes + } + // Create the instance information instance := compute.Instance{ Description: c.Description, @@ -378,10 +390,7 @@ func (d *driverGCE) RunInstance(c *InstanceConfig) (<-chan error, error) { Preemptible: c.Preemptible, }, ServiceAccounts: []*compute.ServiceAccount{ - { - Email: "default", - Scopes: c.Scopes, - }, + serviceAccount, }, Tags: &compute.Tags{ Items: c.Tags, diff --git a/builder/googlecompute/driver_mock.go b/builder/googlecompute/driver_mock.go index 895775da5..720f9d60b 100644 --- a/builder/googlecompute/driver_mock.go +++ b/builder/googlecompute/driver_mock.go @@ -9,9 +9,9 @@ type DriverMock struct { CreateImageDesc string CreateImageFamily string CreateImageLabels map[string]string + CreateImageLicenses []string CreateImageZone string CreateImageDisk string - CreateImageResultLicenses []string CreateImageResultProjectId string CreateImageResultSelfLink string CreateImageResultSizeGb int64 @@ -82,11 +82,12 @@ type DriverMock struct { WaitForInstanceErrCh <-chan error } -func (d *DriverMock) CreateImage(name, description, family, zone, disk string, image_labels map[string]string) (<-chan *Image, <-chan error) { +func (d *DriverMock) CreateImage(name, description, family, zone, disk string, image_labels map[string]string, image_licenses []string) (<-chan *Image, <-chan error) { d.CreateImageName = name d.CreateImageDesc = description d.CreateImageFamily = family d.CreateImageLabels = image_labels + d.CreateImageLicenses = image_licenses d.CreateImageZone = zone d.CreateImageDisk = disk if d.CreateImageResultProjectId == "" { @@ -106,7 +107,7 @@ func (d *DriverMock) CreateImage(name, description, family, zone, disk string, i ch := make(chan *Image, 1) ch <- &Image{ Labels: d.CreateImageLabels, - Licenses: d.CreateImageResultLicenses, + Licenses: d.CreateImageLicenses, Name: name, ProjectId: d.CreateImageResultProjectId, SelfLink: d.CreateImageResultSelfLink, diff --git a/builder/googlecompute/private_key.go b/builder/googlecompute/private_key.go index 6bf22a309..cf81cba06 100644 --- a/builder/googlecompute/private_key.go +++ b/builder/googlecompute/private_key.go @@ -19,7 +19,7 @@ func processPrivateKeyFile(privateKeyFile, passphrase string) ([]byte, error) { PEMBlock, _ := pem.Decode(rawPrivateKeyBytes) if PEMBlock == nil { return nil, fmt.Errorf( - "%s does not contain a vaild private key", privateKeyFile) + "%s does not contain a valid private key", privateKeyFile) } if x509.IsEncryptedPEMBlock(PEMBlock) { diff --git a/builder/googlecompute/ssh.go b/builder/googlecompute/ssh.go index 5c224f0c6..bf704b13e 100644 --- a/builder/googlecompute/ssh.go +++ b/builder/googlecompute/ssh.go @@ -3,7 +3,7 @@ package googlecompute import ( "fmt" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" ) diff --git a/builder/googlecompute/step_check_existing_image.go b/builder/googlecompute/step_check_existing_image.go index d5f2f5e73..ac11600e1 100644 --- a/builder/googlecompute/step_check_existing_image.go +++ b/builder/googlecompute/step_check_existing_image.go @@ -1,10 +1,11 @@ package googlecompute import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // 
StepCheckExistingImage represents a Packer build step that checks if the @@ -12,7 +13,7 @@ import ( type StepCheckExistingImage int // Run executes the Packer build step that checks if the image already exists. -func (s *StepCheckExistingImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCheckExistingImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { c := state.Get("config").(*Config) d := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/googlecompute/step_check_existing_image_test.go b/builder/googlecompute/step_check_existing_image_test.go index a7b336407..d4b9a9541 100644 --- a/builder/googlecompute/step_check_existing_image_test.go +++ b/builder/googlecompute/step_check_existing_image_test.go @@ -1,8 +1,10 @@ package googlecompute import ( - "github.com/mitchellh/multistep" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCheckExistingImage_impl(t *testing.T) { @@ -21,7 +23,7 @@ func TestStepCheckExistingImage(t *testing.T) { driver.ImageExistsResult = true // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/googlecompute/step_create_image.go b/builder/googlecompute/step_create_image.go index bcb840e78..c2bde16c8 100644 --- a/builder/googlecompute/step_create_image.go +++ b/builder/googlecompute/step_create_image.go @@ -1,12 +1,13 @@ package googlecompute import ( + "context" "errors" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCreateImage represents a Packer build step that creates GCE machine @@ -17,7 +18,7 @@ type StepCreateImage int // // The image is created from the persistent disk used by the instance. The // instance must be deleted and the disk retained before doing this step. -func (s *StepCreateImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) @@ -39,7 +40,7 @@ func (s *StepCreateImage) Run(state multistep.StateBag) multistep.StepAction { imageCh, errCh := driver.CreateImage( config.ImageName, config.ImageDescription, config.ImageFamily, config.Zone, - config.DiskName, config.ImageLabels) + config.DiskName, config.ImageLabels, config.ImageLicenses) var err error select { case err = <-errCh: diff --git a/builder/googlecompute/step_create_image_test.go b/builder/googlecompute/step_create_image_test.go index 639b25255..65ccca313 100644 --- a/builder/googlecompute/step_create_image_test.go +++ b/builder/googlecompute/step_create_image_test.go @@ -1,10 +1,11 @@ package googlecompute import ( + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "github.com/stretchr/testify/assert" ) @@ -21,12 +22,11 @@ func TestStepCreateImage(t *testing.T) { d := state.Get("driver").(*DriverMock) // These are the values of the image the driver will return. 
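With `image_licenses` now threaded through `Driver.CreateImage` and into `compute.Image.Licenses`, the mock records the licenses it is handed rather than a canned result. A short sketch (not part of this diff) of the widened call against `DriverMock`, with placeholder names and licenses:

```
package main

import (
	"fmt"

	"github.com/hashicorp/packer/builder/googlecompute"
)

func main() {
	d := &googlecompute.DriverMock{}

	// CreateImage now takes the image licenses alongside the labels.
	imageCh, _ := d.CreateImage(
		"packer-example", "built by Packer", "my-family", "us-east1-a",
		"packer-example-disk",
		map[string]string{"label-1": "value-1"},
		[]string{"test-license"},
	)

	img := <-imageCh
	fmt.Println(img.Name, img.Licenses) // the mock echoes back the licenses it was given
}
```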
- d.CreateImageResultLicenses = []string{"test-license"} d.CreateImageResultProjectId = "test-project" d.CreateImageResultSizeGb = 100 // run the step - action := step.Run(state) + action := step.Run(context.Background(), state) assert.Equal(t, action, multistep.ActionContinue, "Step did not pass.") uncastImage, ok := state.GetOk("image") @@ -35,7 +35,6 @@ func TestStepCreateImage(t *testing.T) { assert.True(t, ok, "Image in state is not an Image.") // Verify created Image results. - assert.Equal(t, image.Licenses, d.CreateImageResultLicenses, "Created image licenses don't match the licenses returned by the driver.") assert.Equal(t, image.Name, c.ImageName, "Created image does not match config name.") assert.Equal(t, image.ProjectId, d.CreateImageResultProjectId, "Created image project does not match driver project.") assert.Equal(t, image.SizeGb, d.CreateImageResultSizeGb, "Created image size does not match the size returned by the driver.") @@ -47,6 +46,7 @@ func TestStepCreateImage(t *testing.T) { assert.Equal(t, d.CreateImageZone, c.Zone, "Incorrect image zone passed to driver.") assert.Equal(t, d.CreateImageDisk, c.DiskName, "Incorrect disk passed to driver.") assert.Equal(t, d.CreateImageLabels, c.ImageLabels, "Incorrect image_labels passed to driver.") + assert.Equal(t, d.CreateImageLicenses, c.ImageLicenses, "Incorrect image_licenses passed to driver.") } func TestStepCreateImage_errorOnChannel(t *testing.T) { @@ -61,7 +61,7 @@ func TestStepCreateImage_errorOnChannel(t *testing.T) { driver.CreateImageErrCh = errCh // run the step - action := step.Run(state) + action := step.Run(context.Background(), state) assert.Equal(t, action, multistep.ActionHalt, "Step should not have passed.") _, ok := state.GetOk("error") assert.True(t, ok, "State should have an error.") diff --git a/builder/googlecompute/step_create_instance.go b/builder/googlecompute/step_create_instance.go index a4509354e..06e5b8eb1 100644 --- a/builder/googlecompute/step_create_instance.go +++ b/builder/googlecompute/step_create_instance.go @@ -1,13 +1,14 @@ package googlecompute import ( + "context" "errors" "fmt" "io/ioutil" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCreateInstance represents a Packer build step that creates GCE instances. @@ -72,7 +73,7 @@ func getImage(c *Config, d Driver) (*Image, error) { } // Run executes the Packer build step that creates a GCE instance. 
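Every step in this diff moves from `github.com/mitchellh/multistep` to the context-aware `helper/multistep` fork, so `Run` gains a `context.Context` parameter and tests call `step.Run(context.Background(), state)`. A toy step (not part of this diff) showing the new shape, assuming only `multistep.BasicStateBag` and the step signature used throughout:

```
package main

import (
	"context"
	"fmt"

	"github.com/hashicorp/packer/helper/multistep"
)

type StepExample struct{}

// Run now receives a context.Context so callers can cancel long-running steps.
func (s *StepExample) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
	state.Put("greeting", "hello from the step")
	return multistep.ActionContinue
}

func (s *StepExample) Cleanup(multistep.StateBag) {}

func main() {
	state := new(multistep.BasicStateBag)
	step := &StepExample{}
	if action := step.Run(context.Background(), state); action != multistep.ActionContinue {
		fmt.Println("bad action:", action)
		return
	}
	fmt.Println(state.Get("greeting"))
}
```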
-func (s *StepCreateInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { c := state.Get("config").(*Config) d := state.Get("driver").(Driver) sshPublicKey := state.Get("ssh_public_key").(string) @@ -99,27 +100,29 @@ func (s *StepCreateInstance) Run(state multistep.StateBag) multistep.StepAction var metadata map[string]string metadata, err = c.createInstanceMetadata(sourceImage, sshPublicKey) errCh, err = d.RunInstance(&InstanceConfig{ - AcceleratorType: c.AcceleratorType, - AcceleratorCount: c.AcceleratorCount, - Address: c.Address, - Description: "New instance created by Packer", - DiskSizeGb: c.DiskSizeGb, - DiskType: c.DiskType, - Image: sourceImage, - Labels: c.Labels, - MachineType: c.MachineType, - Metadata: metadata, - Name: name, - Network: c.Network, - NetworkProjectId: c.NetworkProjectId, - OmitExternalIP: c.OmitExternalIP, - OnHostMaintenance: c.OnHostMaintenance, - Preemptible: c.Preemptible, - Region: c.Region, - Scopes: c.Scopes, - Subnetwork: c.Subnetwork, - Tags: c.Tags, - Zone: c.Zone, + AcceleratorType: c.AcceleratorType, + AcceleratorCount: c.AcceleratorCount, + Address: c.Address, + Description: "New instance created by Packer", + DisableDefaultServiceAccount: c.DisableDefaultServiceAccount, + DiskSizeGb: c.DiskSizeGb, + DiskType: c.DiskType, + Image: sourceImage, + Labels: c.Labels, + MachineType: c.MachineType, + Metadata: metadata, + Name: name, + Network: c.Network, + NetworkProjectId: c.NetworkProjectId, + OmitExternalIP: c.OmitExternalIP, + OnHostMaintenance: c.OnHostMaintenance, + Preemptible: c.Preemptible, + Region: c.Region, + ServiceAccountEmail: c.ServiceAccountEmail, + Scopes: c.Scopes, + Subnetwork: c.Subnetwork, + Tags: c.Tags, + Zone: c.Zone, }) if err == nil { diff --git a/builder/googlecompute/step_create_instance_test.go b/builder/googlecompute/step_create_instance_test.go index a9a91d692..e56159138 100644 --- a/builder/googlecompute/step_create_instance_test.go +++ b/builder/googlecompute/step_create_instance_test.go @@ -1,12 +1,13 @@ package googlecompute import ( + "context" "errors" "strings" "testing" "time" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "github.com/stretchr/testify/assert" ) @@ -26,7 +27,7 @@ func TestStepCreateInstance(t *testing.T) { d.GetImageResult = StubImage("test-image", "test-project", []string{}, 100) // run the step - assert.Equal(t, step.Run(state), multistep.ActionContinue, "Step should have passed and continued.") + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionContinue, "Step should have passed and continued.") // Verify state nameRaw, ok := state.GetOk("instance_name") @@ -67,7 +68,7 @@ func TestStepCreateInstance_fromFamily(t *testing.T) { d.GetImageResult = StubImage("test-image", "test-project", []string{}, 100) // run the step - assert.Equal(t, step.Run(state), multistep.ActionContinue, "Step should have passed and continued.") + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionContinue, "Step should have passed and continued.") // cleanup step.Cleanup(state) @@ -93,7 +94,7 @@ func TestStepCreateInstance_windowsNeedsPassword(t *testing.T) { d.GetImageResult = StubImage("test-image", "test-project", []string{"windows"}, 100) c.Comm.Type = "winrm" // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { 
t.Fatalf("bad action: %#v", action) } @@ -142,7 +143,7 @@ func TestStepCreateInstance_windowsPasswordSet(t *testing.T) { config.Comm.WinRMPassword = "password" // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -188,7 +189,7 @@ func TestStepCreateInstance_error(t *testing.T) { d.GetImageResult = StubImage("test-image", "test-project", []string{}, 100) // run the step - assert.Equal(t, step.Run(state), multistep.ActionHalt, "Step should have failed and halted.") + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionHalt, "Step should have failed and halted.") // Verify state _, ok := state.GetOk("error") @@ -212,7 +213,7 @@ func TestStepCreateInstance_errorOnChannel(t *testing.T) { d.GetImageResult = StubImage("test-image", "test-project", []string{}, 100) // run the step - assert.Equal(t, step.Run(state), multistep.ActionHalt, "Step should have failed and halted.") + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionHalt, "Step should have failed and halted.") // Verify state _, ok := state.GetOk("error") @@ -238,7 +239,7 @@ func TestStepCreateInstance_errorTimeout(t *testing.T) { d.GetImageResult = StubImage("test-image", "test-project", []string{}, 100) // run the step - assert.Equal(t, step.Run(state), multistep.ActionHalt, "Step should have failed and halted.") + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionHalt, "Step should have failed and halted.") // Verify state _, ok := state.GetOk("error") @@ -247,6 +248,54 @@ func TestStepCreateInstance_errorTimeout(t *testing.T) { assert.False(t, ok, "State should not have an instance name.") } +func TestStepCreateInstance_noServiceAccount(t *testing.T) { + state := testState(t) + step := new(StepCreateInstance) + defer step.Cleanup(state) + + state.Put("ssh_public_key", "key") + + c := state.Get("config").(*Config) + c.DisableDefaultServiceAccount = true + c.ServiceAccountEmail = "" + d := state.Get("driver").(*DriverMock) + d.GetImageResult = StubImage("test-image", "test-project", []string{}, 100) + + // run the step + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionContinue, "Step should have passed and continued.") + + // cleanup + step.Cleanup(state) + + // Check args passed to the driver. + assert.Equal(t, d.RunInstanceConfig.DisableDefaultServiceAccount, c.DisableDefaultServiceAccount, "Incorrect value for DisableDefaultServiceAccount passed to driver.") + assert.Equal(t, d.RunInstanceConfig.ServiceAccountEmail, c.ServiceAccountEmail, "Incorrect value for ServiceAccountEmail passed to driver.") +} + +func TestStepCreateInstance_customServiceAccount(t *testing.T) { + state := testState(t) + step := new(StepCreateInstance) + defer step.Cleanup(state) + + state.Put("ssh_public_key", "key") + + c := state.Get("config").(*Config) + c.DisableDefaultServiceAccount = true + c.ServiceAccountEmail = "custom-service-account" + d := state.Get("driver").(*DriverMock) + d.GetImageResult = StubImage("test-image", "test-project", []string{}, 100) + + // run the step + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionContinue, "Step should have passed and continued.") + + // cleanup + step.Cleanup(state) + + // Check args passed to the driver. 
+ assert.Equal(t, d.RunInstanceConfig.DisableDefaultServiceAccount, c.DisableDefaultServiceAccount, "Incorrect value for DisableDefaultServiceAccount passed to driver.") + assert.Equal(t, d.RunInstanceConfig.ServiceAccountEmail, c.ServiceAccountEmail, "Incorrect value for ServiceAccountEmail passed to driver.") +} + func TestCreateInstanceMetadata(t *testing.T) { state := testState(t) c := state.Get("config").(*Config) diff --git a/builder/googlecompute/step_create_ssh_key.go b/builder/googlecompute/step_create_ssh_key.go index fd6d7f55a..7d5a24555 100644 --- a/builder/googlecompute/step_create_ssh_key.go +++ b/builder/googlecompute/step_create_ssh_key.go @@ -1,6 +1,7 @@ package googlecompute import ( + "context" "crypto/rand" "crypto/rsa" "crypto/x509" @@ -9,8 +10,8 @@ import ( "io/ioutil" "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "golang.org/x/crypto/ssh" ) @@ -24,7 +25,7 @@ type StepCreateSSHKey struct { // Run executes the Packer build step that generates SSH key pairs. // The key pairs are added to the multistep state as "ssh_private_key" and // "ssh_public_key". -func (s *StepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateSSHKey) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.PrivateKeyFile != "" { diff --git a/builder/googlecompute/step_create_ssh_key_test.go b/builder/googlecompute/step_create_ssh_key_test.go index 94504ac7f..5e5150738 100644 --- a/builder/googlecompute/step_create_ssh_key_test.go +++ b/builder/googlecompute/step_create_ssh_key_test.go @@ -1,7 +1,9 @@ package googlecompute import ( - "github.com/mitchellh/multistep" + "context" + + "github.com/hashicorp/packer/helper/multistep" "io/ioutil" "os" @@ -19,7 +21,7 @@ func TestStepCreateSSHKey_privateKey(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -35,7 +37,7 @@ func TestStepCreateSSHKey(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -53,6 +55,7 @@ func TestStepCreateSSHKey_debug(t *testing.T) { if err != nil { t.Fatalf("err: %s", err) } + defer os.Remove(tf.Name()) tf.Close() state := testState(t) @@ -63,7 +66,7 @@ func TestStepCreateSSHKey_debug(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } diff --git a/builder/googlecompute/step_create_windows_password.go b/builder/googlecompute/step_create_windows_password.go index 312e54512..38d52d869 100644 --- a/builder/googlecompute/step_create_windows_password.go +++ b/builder/googlecompute/step_create_windows_password.go @@ -1,6 +1,7 @@ package googlecompute import ( + "context" "crypto/rand" "crypto/rsa" "crypto/x509" @@ -12,8 +13,9 @@ import ( "os" "time" + commonhelper "github.com/hashicorp/packer/helper/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCreateWindowsPassword 
represents a Packer build step that sets the windows password on a Windows GCE instance. @@ -23,7 +25,7 @@ type StepCreateWindowsPassword struct { } // Run executes the Packer build step that sets the windows password on a Windows GCE instance. -func (s *StepCreateWindowsPassword) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateWindowsPassword) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) d := state.Get("driver").(Driver) c := state.Get("config").(*Config) @@ -111,6 +113,7 @@ func (s *StepCreateWindowsPassword) Run(state multistep.StateBag) multistep.Step } state.Put("winrm_password", data.password) + commonhelper.SetSharedState("winrm_password", data.password, c.PackerConfig.PackerBuildName) return multistep.ActionContinue } diff --git a/builder/googlecompute/step_create_windows_password_test.go b/builder/googlecompute/step_create_windows_password_test.go index 58fc8ee83..6abb6243d 100644 --- a/builder/googlecompute/step_create_windows_password_test.go +++ b/builder/googlecompute/step_create_windows_password_test.go @@ -1,11 +1,12 @@ package googlecompute import ( + "context" "errors" "io/ioutil" "os" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "testing" ) @@ -21,7 +22,7 @@ func TestStepCreateOrResetWindowsPassword(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -44,7 +45,7 @@ func TestStepCreateOrResetWindowsPassword_passwordSet(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -63,7 +64,7 @@ func TestStepCreateOrResetWindowsPassword_dontNeedPassword(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -74,6 +75,7 @@ func TestStepCreateOrResetWindowsPassword_debug(t *testing.T) { if err != nil { t.Fatalf("err: %s", err) } + defer os.Remove(tf.Name()) tf.Close() state := testState(t) @@ -89,7 +91,7 @@ func TestStepCreateOrResetWindowsPassword_debug(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -116,7 +118,7 @@ func TestStepCreateOrResetWindowsPassword_error(t *testing.T) { driver.CreateOrResetWindowsPasswordErr = errors.New("error") // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -148,7 +150,7 @@ func TestStepCreateOrResetWindowsPassword_errorOnChannel(t *testing.T) { driver.CreateOrResetWindowsPasswordErrCh = errCh // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/googlecompute/step_instance_info.go 
b/builder/googlecompute/step_instance_info.go index d5ece22e7..d67984b75 100644 --- a/builder/googlecompute/step_instance_info.go +++ b/builder/googlecompute/step_instance_info.go @@ -1,12 +1,13 @@ package googlecompute import ( + "context" "errors" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // stepInstanceInfo represents a Packer build step that gathers GCE instance info. @@ -16,7 +17,7 @@ type StepInstanceInfo struct { // Run executes the Packer build step that gathers GCE instance info. // This adds "instance_ip" to the multistep state. -func (s *StepInstanceInfo) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepInstanceInfo) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/googlecompute/step_instance_info_test.go b/builder/googlecompute/step_instance_info_test.go index 5b6c01d0a..86f718369 100644 --- a/builder/googlecompute/step_instance_info_test.go +++ b/builder/googlecompute/step_instance_info_test.go @@ -1,10 +1,12 @@ package googlecompute import ( + "context" "errors" - "github.com/mitchellh/multistep" "testing" "time" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepInstanceInfo_impl(t *testing.T) { @@ -23,7 +25,7 @@ func TestStepInstanceInfo(t *testing.T) { driver.GetNatIPResult = "1.2.3.4" // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -63,7 +65,7 @@ func TestStepInstanceInfo_InternalIP(t *testing.T) { driver.GetInternalIPResult = "5.6.7.8" // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -100,7 +102,7 @@ func TestStepInstanceInfo_getNatIPError(t *testing.T) { driver.GetNatIPErr = errors.New("error") // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -127,7 +129,7 @@ func TestStepInstanceInfo_waitError(t *testing.T) { driver.WaitForInstanceErrCh = errCh // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -160,7 +162,7 @@ func TestStepInstanceInfo_errorTimeout(t *testing.T) { driver.WaitForInstanceErrCh = errCh // run the step - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/googlecompute/step_teardown_instance.go b/builder/googlecompute/step_teardown_instance.go index 58585d78f..27f9fee35 100644 --- a/builder/googlecompute/step_teardown_instance.go +++ b/builder/googlecompute/step_teardown_instance.go @@ -1,12 +1,13 @@ package googlecompute import ( + "context" "errors" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepTeardownInstance represents a Packer build step that tears down GCE @@ -16,7 +17,7 @@ type 
StepTeardownInstance struct { } // Run executes the Packer build step that tears down a GCE instance. -func (s *StepTeardownInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepTeardownInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/googlecompute/step_teardown_instance_test.go b/builder/googlecompute/step_teardown_instance_test.go index bd32ea26d..129c4e596 100644 --- a/builder/googlecompute/step_teardown_instance_test.go +++ b/builder/googlecompute/step_teardown_instance_test.go @@ -1,8 +1,10 @@ package googlecompute import ( - "github.com/mitchellh/multistep" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepTeardownInstance_impl(t *testing.T) { @@ -18,7 +20,7 @@ func TestStepTeardownInstance(t *testing.T) { driver := state.Get("driver").(*DriverMock) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } diff --git a/builder/googlecompute/step_test.go b/builder/googlecompute/step_test.go index 3e9b843f4..a50bde49f 100644 --- a/builder/googlecompute/step_test.go +++ b/builder/googlecompute/step_test.go @@ -4,8 +4,8 @@ import ( "bytes" "testing" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/googlecompute/step_wait_startup_script.go b/builder/googlecompute/step_wait_startup_script.go index f2e1e523f..efc2a78f7 100644 --- a/builder/googlecompute/step_wait_startup_script.go +++ b/builder/googlecompute/step_wait_startup_script.go @@ -1,19 +1,20 @@ package googlecompute import ( + "context" "errors" "fmt" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepWaitStartupScript int // Run reads the instance metadata and looks for the log entry // indicating the startup script finished. -func (s *StepWaitStartupScript) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepWaitStartupScript) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/googlecompute/step_wait_startup_script_test.go b/builder/googlecompute/step_wait_startup_script_test.go index 93cf108b1..a970dc48d 100644 --- a/builder/googlecompute/step_wait_startup_script_test.go +++ b/builder/googlecompute/step_wait_startup_script_test.go @@ -1,9 +1,11 @@ package googlecompute import ( - "github.com/mitchellh/multistep" - "github.com/stretchr/testify/assert" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/stretchr/testify/assert" ) func TestStepWaitStartupScript(t *testing.T) { @@ -22,7 +24,7 @@ func TestStepWaitStartupScript(t *testing.T) { d.GetInstanceMetadataResult = StartupScriptStatusDone // Run the step. - assert.Equal(t, step.Run(state), multistep.ActionContinue, "Step should have passed and continued.") + assert.Equal(t, step.Run(context.Background(), state), multistep.ActionContinue, "Step should have passed and continued.") // Check that GetInstanceMetadata was called properly. 
assert.Equal(t, d.GetInstanceMetadataZone, testZone, "Incorrect zone passed to GetInstanceMetadata.") diff --git a/builder/googlecompute/winrm.go b/builder/googlecompute/winrm.go index ec617cba5..767787401 100644 --- a/builder/googlecompute/winrm.go +++ b/builder/googlecompute/winrm.go @@ -2,7 +2,7 @@ package googlecompute import ( "github.com/hashicorp/packer/helper/communicator" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // winrmConfig returns the WinRM configuration. diff --git a/builder/hyperv/common/driver.go b/builder/hyperv/common/driver.go index 3e44b0ece..535652923 100644 --- a/builder/hyperv/common/driver.go +++ b/builder/hyperv/common/driver.go @@ -1,5 +1,9 @@ package common +import ( + "context" +) + // A driver is able to talk to HyperV and perform certain // operations with it. Some of the operations on here may seem overly // specific, but they were built specifically in mind to handle features @@ -52,6 +56,8 @@ type Driver interface { //Set the vlan to use for machine SetVirtualMachineVlanId(string, string) error + SetVmNetworkAdapterMacAddress(string, string) error + UntagVirtualMachineNetworkAdapterVlan(string, string) error CreateExternalVirtualSwitch(string, string) error @@ -64,9 +70,9 @@ type Driver interface { DeleteVirtualSwitch(string) error - CreateVirtualMachine(string, string, string, string, int64, int64, string, uint) error + CreateVirtualMachine(string, string, string, string, int64, int64, int64, string, uint, bool, bool) error - AddVirtualMachineHardDrive(string, string, string, int64, string) error + AddVirtualMachineHardDrive(string, string, string, int64, int64, string) error CloneVirtualMachine(string, string, string, bool, string, string, string, int64, string) error @@ -80,7 +86,7 @@ type Driver interface { SetVirtualMachineDynamicMemory(string, bool) error - SetVirtualMachineSecureBoot(string, bool) error + SetVirtualMachineSecureBoot(string, bool, string) error SetVirtualMachineVirtualizationExtensions(string, bool) error @@ -107,4 +113,10 @@ type Driver interface { MountFloppyDrive(string, string) error UnmountFloppyDrive(string) error + + // Connect connects to a VM specified by the name given. + Connect(string) (context.CancelFunc, error) + + // Disconnect disconnects to a VM specified by the context cancel function. 
+ Disconnect(context.CancelFunc) } diff --git a/builder/hyperv/common/driver_mock.go b/builder/hyperv/common/driver_mock.go new file mode 100644 index 000000000..3fcb46323 --- /dev/null +++ b/builder/hyperv/common/driver_mock.go @@ -0,0 +1,578 @@ +package common + +import ( + "context" +) + +type DriverMock struct { + IsRunning_Called bool + IsRunning_VmName string + IsRunning_Return bool + IsRunning_Err error + + IsOff_Called bool + IsOff_VmName string + IsOff_Return bool + IsOff_Err error + + Uptime_Called bool + Uptime_VmName string + Uptime_Return uint64 + Uptime_Err error + + Start_Called bool + Start_VmName string + Start_Err error + + Stop_Called bool + Stop_VmName string + Stop_Err error + + Verify_Called bool + Verify_Err error + + Mac_Called bool + Mac_VmName string + Mac_Return string + Mac_Err error + + IpAddress_Called bool + IpAddress_Mac string + IpAddress_Return string + IpAddress_Err error + + GetHostName_Called bool + GetHostName_Ip string + GetHostName_Return string + GetHostName_Err error + + GetVirtualMachineGeneration_Called bool + GetVirtualMachineGeneration_VmName string + GetVirtualMachineGeneration_Return uint + GetVirtualMachineGeneration_Err error + + GetHostAdapterIpAddressForSwitch_Called bool + GetHostAdapterIpAddressForSwitch_SwitchName string + GetHostAdapterIpAddressForSwitch_Return string + GetHostAdapterIpAddressForSwitch_Err error + + TypeScanCodes_Called bool + TypeScanCodes_VmName string + TypeScanCodes_ScanCodes string + TypeScanCodes_Err error + + GetVirtualMachineNetworkAdapterAddress_Called bool + GetVirtualMachineNetworkAdapterAddress_VmName string + GetVirtualMachineNetworkAdapterAddress_Return string + GetVirtualMachineNetworkAdapterAddress_Err error + + SetNetworkAdapterVlanId_Called bool + SetNetworkAdapterVlanId_SwitchName string + SetNetworkAdapterVlanId_VlanId string + SetNetworkAdapterVlanId_Err error + + SetVmNetworkAdapterMacAddress_Called bool + SetVmNetworkAdapterMacAddress_VmName string + SetVmNetworkAdapterMacAddress_Mac string + SetVmNetworkAdapterMacAddress_Err error + + SetVirtualMachineVlanId_Called bool + SetVirtualMachineVlanId_VmName string + SetVirtualMachineVlanId_VlanId string + SetVirtualMachineVlanId_Err error + + UntagVirtualMachineNetworkAdapterVlan_Called bool + UntagVirtualMachineNetworkAdapterVlan_VmName string + UntagVirtualMachineNetworkAdapterVlan_SwitchName string + UntagVirtualMachineNetworkAdapterVlan_Err error + + CreateExternalVirtualSwitch_Called bool + CreateExternalVirtualSwitch_VmName string + CreateExternalVirtualSwitch_SwitchName string + CreateExternalVirtualSwitch_Err error + + GetVirtualMachineSwitchName_Called bool + GetVirtualMachineSwitchName_VmName string + GetVirtualMachineSwitchName_Return string + GetVirtualMachineSwitchName_Err error + + ConnectVirtualMachineNetworkAdapterToSwitch_Called bool + ConnectVirtualMachineNetworkAdapterToSwitch_VmName string + ConnectVirtualMachineNetworkAdapterToSwitch_SwitchName string + ConnectVirtualMachineNetworkAdapterToSwitch_Err error + + DeleteVirtualSwitch_Called bool + DeleteVirtualSwitch_SwitchName string + DeleteVirtualSwitch_Err error + + CreateVirtualSwitch_Called bool + CreateVirtualSwitch_SwitchName string + CreateVirtualSwitch_SwitchType string + CreateVirtualSwitch_Return bool + CreateVirtualSwitch_Err error + + AddVirtualMachineHardDrive_Called bool + AddVirtualMachineHardDrive_VmName string + AddVirtualMachineHardDrive_VhdFile string + AddVirtualMachineHardDrive_VhdName string + AddVirtualMachineHardDrive_VhdSizeBytes int64 + 
AddVirtualMachineHardDrive_VhdBlockSize int64 + AddVirtualMachineHardDrive_ControllerType string + AddVirtualMachineHardDrive_Err error + + CreateVirtualMachine_Called bool + CreateVirtualMachine_VmName string + CreateVirtualMachine_Path string + CreateVirtualMachine_HarddrivePath string + CreateVirtualMachine_VhdPath string + CreateVirtualMachine_Ram int64 + CreateVirtualMachine_DiskSize int64 + CreateVirtualMachine_DiskBlockSize int64 + CreateVirtualMachine_SwitchName string + CreateVirtualMachine_Generation uint + CreateVirtualMachine_DifferentialDisk bool + CreateVirtualMachine_FixedVHD bool + CreateVirtualMachine_Err error + + CloneVirtualMachine_Called bool + CloneVirtualMachine_CloneFromVmxcPath string + CloneVirtualMachine_CloneFromVmName string + CloneVirtualMachine_CloneFromSnapshotName string + CloneVirtualMachine_CloneAllSnapshots bool + CloneVirtualMachine_VmName string + CloneVirtualMachine_Path string + CloneVirtualMachine_HarddrivePath string + CloneVirtualMachine_Ram int64 + CloneVirtualMachine_SwitchName string + CloneVirtualMachine_Err error + + DeleteVirtualMachine_Called bool + DeleteVirtualMachine_VmName string + DeleteVirtualMachine_Err error + + SetVirtualMachineCpuCount_Called bool + SetVirtualMachineCpuCount_VmName string + SetVirtualMachineCpuCount_Cpu uint + SetVirtualMachineCpuCount_Err error + + SetVirtualMachineMacSpoofing_Called bool + SetVirtualMachineMacSpoofing_VmName string + SetVirtualMachineMacSpoofing_Enable bool + SetVirtualMachineMacSpoofing_Err error + + SetVirtualMachineDynamicMemory_Called bool + SetVirtualMachineDynamicMemory_VmName string + SetVirtualMachineDynamicMemory_Enable bool + SetVirtualMachineDynamicMemory_Err error + + SetVirtualMachineSecureBoot_Called bool + SetVirtualMachineSecureBoot_VmName string + SetVirtualMachineSecureBoot_TemplateName string + SetVirtualMachineSecureBoot_Enable bool + SetVirtualMachineSecureBoot_Err error + + SetVirtualMachineVirtualizationExtensions_Called bool + SetVirtualMachineVirtualizationExtensions_VmName string + SetVirtualMachineVirtualizationExtensions_Enable bool + SetVirtualMachineVirtualizationExtensions_Err error + + EnableVirtualMachineIntegrationService_Called bool + EnableVirtualMachineIntegrationService_VmName string + EnableVirtualMachineIntegrationService_IntegrationServiceName string + EnableVirtualMachineIntegrationService_Err error + + ExportVirtualMachine_Called bool + ExportVirtualMachine_VmName string + ExportVirtualMachine_Path string + ExportVirtualMachine_Err error + + CompactDisks_Called bool + CompactDisks_ExpPath string + CompactDisks_VhdDir string + CompactDisks_Err error + + CopyExportedVirtualMachine_Called bool + CopyExportedVirtualMachine_ExpPath string + CopyExportedVirtualMachine_OutputPath string + CopyExportedVirtualMachine_VhdDir string + CopyExportedVirtualMachine_VmDir string + CopyExportedVirtualMachine_Err error + + RestartVirtualMachine_Called bool + RestartVirtualMachine_VmName string + RestartVirtualMachine_Err error + + CreateDvdDrive_Called bool + CreateDvdDrive_VmName string + CreateDvdDrive_IsoPath string + CreateDvdDrive_Generation uint + CreateDvdDrive_ControllerNumber uint + CreateDvdDrive_ControllerLocation uint + CreateDvdDrive_Err error + + MountDvdDrive_Called bool + MountDvdDrive_VmName string + MountDvdDrive_Path string + MountDvdDrive_ControllerNumber uint + MountDvdDrive_ControllerLocation uint + MountDvdDrive_Err error + + SetBootDvdDrive_Called bool + SetBootDvdDrive_VmName string + SetBootDvdDrive_ControllerNumber uint + 
SetBootDvdDrive_ControllerLocation uint + SetBootDvdDrive_Generation uint + SetBootDvdDrive_Err error + + UnmountDvdDrive_Called bool + UnmountDvdDrive_VmName string + UnmountDvdDrive_ControllerNumber uint + UnmountDvdDrive_ControllerLocation uint + UnmountDvdDrive_Err error + + DeleteDvdDrive_Called bool + DeleteDvdDrive_VmName string + DeleteDvdDrive_ControllerNumber uint + DeleteDvdDrive_ControllerLocation uint + DeleteDvdDrive_Err error + + MountFloppyDrive_Called bool + MountFloppyDrive_VmName string + MountFloppyDrive_Path string + MountFloppyDrive_Err error + + UnmountFloppyDrive_Called bool + UnmountFloppyDrive_VmName string + UnmountFloppyDrive_Err error + + Connect_Called bool + Connect_VmName string + Connect_Cancel context.CancelFunc + Connect_Err error + + Disconnect_Called bool + Disconnect_Cancel context.CancelFunc +} + +func (d *DriverMock) IsRunning(vmName string) (bool, error) { + d.IsRunning_Called = true + d.IsRunning_VmName = vmName + return d.IsRunning_Return, d.IsRunning_Err +} + +func (d *DriverMock) IsOff(vmName string) (bool, error) { + d.IsOff_Called = true + d.IsOff_VmName = vmName + return d.IsOff_Return, d.IsOff_Err +} + +func (d *DriverMock) Uptime(vmName string) (uint64, error) { + d.Uptime_Called = true + d.Uptime_VmName = vmName + return d.Uptime_Return, d.Uptime_Err +} + +func (d *DriverMock) Start(vmName string) error { + d.Start_Called = true + d.Start_VmName = vmName + return d.Start_Err +} + +func (d *DriverMock) Stop(vmName string) error { + d.Stop_Called = true + d.Stop_VmName = vmName + return d.Stop_Err +} + +func (d *DriverMock) Verify() error { + d.Verify_Called = true + return d.Verify_Err +} + +func (d *DriverMock) Mac(vmName string) (string, error) { + d.Mac_Called = true + d.Mac_VmName = vmName + return d.Mac_Return, d.Mac_Err +} + +func (d *DriverMock) IpAddress(mac string) (string, error) { + d.IpAddress_Called = true + d.IpAddress_Mac = mac + return d.IpAddress_Return, d.IpAddress_Err +} + +func (d *DriverMock) GetHostName(ip string) (string, error) { + d.GetHostName_Called = true + d.GetHostName_Ip = ip + return d.GetHostName_Return, d.GetHostName_Err +} + +func (d *DriverMock) GetVirtualMachineGeneration(vmName string) (uint, error) { + d.GetVirtualMachineGeneration_Called = true + d.GetVirtualMachineGeneration_VmName = vmName + return d.GetVirtualMachineGeneration_Return, d.GetVirtualMachineGeneration_Err +} + +func (d *DriverMock) GetHostAdapterIpAddressForSwitch(switchName string) (string, error) { + d.GetHostAdapterIpAddressForSwitch_Called = true + d.GetHostAdapterIpAddressForSwitch_SwitchName = switchName + return d.GetHostAdapterIpAddressForSwitch_Return, d.GetHostAdapterIpAddressForSwitch_Err +} + +func (d *DriverMock) TypeScanCodes(vmName string, scanCodes string) error { + d.TypeScanCodes_Called = true + d.TypeScanCodes_VmName = vmName + d.TypeScanCodes_ScanCodes = scanCodes + return d.TypeScanCodes_Err +} + +func (d *DriverMock) GetVirtualMachineNetworkAdapterAddress(vmName string) (string, error) { + d.GetVirtualMachineNetworkAdapterAddress_Called = true + d.GetVirtualMachineNetworkAdapterAddress_VmName = vmName + return d.GetVirtualMachineNetworkAdapterAddress_Return, d.GetVirtualMachineNetworkAdapterAddress_Err +} + +func (d *DriverMock) SetNetworkAdapterVlanId(switchName string, vlanId string) error { + d.SetNetworkAdapterVlanId_Called = true + d.SetNetworkAdapterVlanId_SwitchName = switchName + d.SetNetworkAdapterVlanId_VlanId = vlanId + return d.SetNetworkAdapterVlanId_Err +} + +func (d *DriverMock) 
SetVmNetworkAdapterMacAddress(vmName string, mac string) error { + d.SetVmNetworkAdapterMacAddress_Called = true + d.SetVmNetworkAdapterMacAddress_VmName = vmName + d.SetVmNetworkAdapterMacAddress_Mac = mac + return d.SetVmNetworkAdapterMacAddress_Err +} + +func (d *DriverMock) SetVirtualMachineVlanId(vmName string, vlanId string) error { + d.SetVirtualMachineVlanId_Called = true + d.SetVirtualMachineVlanId_VmName = vmName + d.SetVirtualMachineVlanId_VlanId = vlanId + return d.SetVirtualMachineVlanId_Err +} + +func (d *DriverMock) UntagVirtualMachineNetworkAdapterVlan(vmName string, switchName string) error { + d.UntagVirtualMachineNetworkAdapterVlan_Called = true + d.UntagVirtualMachineNetworkAdapterVlan_VmName = vmName + d.UntagVirtualMachineNetworkAdapterVlan_SwitchName = switchName + return d.UntagVirtualMachineNetworkAdapterVlan_Err +} + +func (d *DriverMock) CreateExternalVirtualSwitch(vmName string, switchName string) error { + d.CreateExternalVirtualSwitch_Called = true + d.CreateExternalVirtualSwitch_VmName = vmName + d.CreateExternalVirtualSwitch_SwitchName = switchName + return d.CreateExternalVirtualSwitch_Err +} + +func (d *DriverMock) GetVirtualMachineSwitchName(vmName string) (string, error) { + d.GetVirtualMachineSwitchName_Called = true + d.GetVirtualMachineSwitchName_VmName = vmName + return d.GetVirtualMachineSwitchName_Return, d.GetVirtualMachineSwitchName_Err +} + +func (d *DriverMock) ConnectVirtualMachineNetworkAdapterToSwitch(vmName string, switchName string) error { + d.ConnectVirtualMachineNetworkAdapterToSwitch_Called = true + d.ConnectVirtualMachineNetworkAdapterToSwitch_VmName = vmName + d.ConnectVirtualMachineNetworkAdapterToSwitch_SwitchName = switchName + return d.ConnectVirtualMachineNetworkAdapterToSwitch_Err +} + +func (d *DriverMock) DeleteVirtualSwitch(switchName string) error { + d.DeleteVirtualSwitch_Called = true + d.DeleteVirtualSwitch_SwitchName = switchName + return d.DeleteVirtualSwitch_Err +} + +func (d *DriverMock) CreateVirtualSwitch(switchName string, switchType string) (bool, error) { + d.CreateVirtualSwitch_Called = true + d.CreateVirtualSwitch_SwitchName = switchName + d.CreateVirtualSwitch_SwitchType = switchType + return d.CreateVirtualSwitch_Return, d.CreateVirtualSwitch_Err +} + +func (d *DriverMock) AddVirtualMachineHardDrive(vmName string, vhdFile string, vhdName string, vhdSizeBytes int64, vhdDiskBlockSize int64, controllerType string) error { + d.AddVirtualMachineHardDrive_Called = true + d.AddVirtualMachineHardDrive_VmName = vmName + d.AddVirtualMachineHardDrive_VhdFile = vhdFile + d.AddVirtualMachineHardDrive_VhdName = vhdName + d.AddVirtualMachineHardDrive_VhdSizeBytes = vhdSizeBytes + d.AddVirtualMachineHardDrive_VhdBlockSize = vhdDiskBlockSize + d.AddVirtualMachineHardDrive_ControllerType = controllerType + return d.AddVirtualMachineHardDrive_Err +} + +func (d *DriverMock) CreateVirtualMachine(vmName string, path string, harddrivePath string, vhdPath string, ram int64, diskSize int64, diskBlockSize int64, switchName string, generation uint, diffDisks bool, fixedVHD bool) error { + d.CreateVirtualMachine_Called = true + d.CreateVirtualMachine_VmName = vmName + d.CreateVirtualMachine_Path = path + d.CreateVirtualMachine_HarddrivePath = harddrivePath + d.CreateVirtualMachine_VhdPath = vhdPath + d.CreateVirtualMachine_Ram = ram + d.CreateVirtualMachine_DiskSize = diskSize + d.CreateVirtualMachine_DiskBlockSize = diskBlockSize + d.CreateVirtualMachine_SwitchName = switchName + d.CreateVirtualMachine_Generation = generation + d.CreateVirtualMachine_FixedVHD = fixedVHD + 
d.CreateVirtualMachine_DifferentialDisk = diffDisks + return d.CreateVirtualMachine_Err +} + +func (d *DriverMock) CloneVirtualMachine(cloneFromVmxcPath string, cloneFromVmName string, cloneFromSnapshotName string, cloneAllSnapshots bool, vmName string, path string, harddrivePath string, ram int64, switchName string) error { + d.CloneVirtualMachine_Called = true + d.CloneVirtualMachine_CloneFromVmxcPath = cloneFromVmxcPath + d.CloneVirtualMachine_CloneFromVmName = cloneFromVmName + d.CloneVirtualMachine_CloneFromSnapshotName = cloneFromSnapshotName + d.CloneVirtualMachine_CloneAllSnapshots = cloneAllSnapshots + d.CloneVirtualMachine_VmName = vmName + d.CloneVirtualMachine_Path = path + d.CloneVirtualMachine_HarddrivePath = harddrivePath + d.CloneVirtualMachine_Ram = ram + d.CloneVirtualMachine_SwitchName = switchName + return d.CloneVirtualMachine_Err +} + +func (d *DriverMock) DeleteVirtualMachine(vmName string) error { + d.DeleteVirtualMachine_Called = true + d.DeleteVirtualMachine_VmName = vmName + return d.DeleteVirtualMachine_Err +} + +func (d *DriverMock) SetVirtualMachineCpuCount(vmName string, cpu uint) error { + d.SetVirtualMachineCpuCount_Called = true + d.SetVirtualMachineCpuCount_VmName = vmName + d.SetVirtualMachineCpuCount_Cpu = cpu + return d.SetVirtualMachineCpuCount_Err +} + +func (d *DriverMock) SetVirtualMachineMacSpoofing(vmName string, enable bool) error { + d.SetVirtualMachineMacSpoofing_Called = true + d.SetVirtualMachineMacSpoofing_VmName = vmName + d.SetVirtualMachineMacSpoofing_Enable = enable + return d.SetVirtualMachineMacSpoofing_Err +} + +func (d *DriverMock) SetVirtualMachineDynamicMemory(vmName string, enable bool) error { + d.SetVirtualMachineDynamicMemory_Called = true + d.SetVirtualMachineDynamicMemory_VmName = vmName + d.SetVirtualMachineDynamicMemory_Enable = enable + return d.SetVirtualMachineDynamicMemory_Err +} + +func (d *DriverMock) SetVirtualMachineSecureBoot(vmName string, enable bool, templateName string) error { + d.SetVirtualMachineSecureBoot_Called = true + d.SetVirtualMachineSecureBoot_VmName = vmName + d.SetVirtualMachineSecureBoot_Enable = enable + d.SetVirtualMachineSecureBoot_TemplateName = templateName + return d.SetVirtualMachineSecureBoot_Err +} + +func (d *DriverMock) SetVirtualMachineVirtualizationExtensions(vmName string, enable bool) error { + d.SetVirtualMachineVirtualizationExtensions_Called = true + d.SetVirtualMachineVirtualizationExtensions_VmName = vmName + d.SetVirtualMachineVirtualizationExtensions_Enable = enable + return d.SetVirtualMachineVirtualizationExtensions_Err +} + +func (d *DriverMock) EnableVirtualMachineIntegrationService(vmName string, integrationServiceName string) error { + d.EnableVirtualMachineIntegrationService_Called = true + d.EnableVirtualMachineIntegrationService_VmName = vmName + d.EnableVirtualMachineIntegrationService_IntegrationServiceName = integrationServiceName + return d.EnableVirtualMachineIntegrationService_Err +} + +func (d *DriverMock) ExportVirtualMachine(vmName string, path string) error { + d.ExportVirtualMachine_Called = true + d.ExportVirtualMachine_VmName = vmName + d.ExportVirtualMachine_Path = path + return d.ExportVirtualMachine_Err +} + +func (d *DriverMock) CompactDisks(expPath string, vhdDir string) error { + d.CompactDisks_Called = true + d.CompactDisks_ExpPath = expPath + d.CompactDisks_VhdDir = vhdDir + return d.CompactDisks_Err +} + +func (d *DriverMock) CopyExportedVirtualMachine(expPath string, outputPath string, vhdDir string, vmDir string) error { + 
d.CopyExportedVirtualMachine_Called = true + d.CopyExportedVirtualMachine_ExpPath = expPath + d.CopyExportedVirtualMachine_OutputPath = outputPath + d.CopyExportedVirtualMachine_VhdDir = vhdDir + d.CopyExportedVirtualMachine_VmDir = vmDir + return d.CopyExportedVirtualMachine_Err +} + +func (d *DriverMock) RestartVirtualMachine(vmName string) error { + d.RestartVirtualMachine_Called = true + d.RestartVirtualMachine_VmName = vmName + return d.RestartVirtualMachine_Err +} + +func (d *DriverMock) CreateDvdDrive(vmName string, isoPath string, generation uint) (uint, uint, error) { + d.CreateDvdDrive_Called = true + d.CreateDvdDrive_VmName = vmName + d.CreateDvdDrive_IsoPath = isoPath + d.CreateDvdDrive_Generation = generation + return d.CreateDvdDrive_ControllerNumber, d.CreateDvdDrive_ControllerLocation, d.CreateDvdDrive_Err +} + +func (d *DriverMock) MountDvdDrive(vmName string, path string, controllerNumber uint, controllerLocation uint) error { + d.MountDvdDrive_Called = true + d.MountDvdDrive_VmName = vmName + d.MountDvdDrive_Path = path + d.MountDvdDrive_ControllerNumber = controllerNumber + d.MountDvdDrive_ControllerLocation = controllerLocation + return d.MountDvdDrive_Err +} + +func (d *DriverMock) SetBootDvdDrive(vmName string, controllerNumber uint, controllerLocation uint, generation uint) error { + d.SetBootDvdDrive_Called = true + d.SetBootDvdDrive_VmName = vmName + d.SetBootDvdDrive_ControllerNumber = controllerNumber + d.SetBootDvdDrive_ControllerLocation = controllerLocation + d.SetBootDvdDrive_Generation = generation + return d.SetBootDvdDrive_Err +} + +func (d *DriverMock) UnmountDvdDrive(vmName string, controllerNumber uint, controllerLocation uint) error { + d.UnmountDvdDrive_Called = true + d.UnmountDvdDrive_VmName = vmName + d.UnmountDvdDrive_ControllerNumber = controllerNumber + d.UnmountDvdDrive_ControllerLocation = controllerLocation + return d.UnmountDvdDrive_Err +} + +func (d *DriverMock) DeleteDvdDrive(vmName string, controllerNumber uint, controllerLocation uint) error { + d.DeleteDvdDrive_Called = true + d.DeleteDvdDrive_VmName = vmName + d.DeleteDvdDrive_ControllerNumber = controllerNumber + d.DeleteDvdDrive_ControllerLocation = controllerLocation + return d.DeleteDvdDrive_Err +} + +func (d *DriverMock) MountFloppyDrive(vmName string, path string) error { + d.MountFloppyDrive_Called = true + d.MountFloppyDrive_VmName = vmName + d.MountFloppyDrive_Path = path + return d.MountFloppyDrive_Err +} + +func (d *DriverMock) UnmountFloppyDrive(vmName string) error { + d.UnmountFloppyDrive_Called = true + d.UnmountFloppyDrive_VmName = vmName + return d.UnmountFloppyDrive_Err +} + +func (d *DriverMock) Connect(vmName string) (context.CancelFunc, error) { + d.Connect_Called = true + d.Connect_VmName = vmName + return d.Connect_Cancel, d.Connect_Err +} + +func (d *DriverMock) Disconnect(cancel context.CancelFunc) { + d.Disconnect_Called = true + d.Disconnect_Cancel = cancel +} diff --git a/builder/hyperv/common/driver_ps_4.go b/builder/hyperv/common/driver_ps_4.go index 82b4c3f42..cadaa2ba5 100644 --- a/builder/hyperv/common/driver_ps_4.go +++ b/builder/hyperv/common/driver_ps_4.go @@ -1,6 +1,7 @@ package common import ( + "context" "fmt" "log" "runtime" @@ -146,6 +147,10 @@ func (d *HypervPS4Driver) SetVirtualMachineVlanId(vmName string, vlanId string) return hyperv.SetVirtualMachineVlanId(vmName, vlanId) } +func (d *HypervPS4Driver) SetVmNetworkAdapterMacAddress(vmName string, mac string) error { + return hyperv.SetVmNetworkAdapterMacAddress(vmName, mac) +} + func (d 
*HypervPS4Driver) UntagVirtualMachineNetworkAdapterVlan(vmName string, switchName string) error { return hyperv.UntagVirtualMachineNetworkAdapterVlan(vmName, switchName) } @@ -170,12 +175,12 @@ func (d *HypervPS4Driver) CreateVirtualSwitch(switchName string, switchType stri return hyperv.CreateVirtualSwitch(switchName, switchType) } -func (d *HypervPS4Driver) AddVirtualMachineHardDrive(vmName string, vhdFile string, vhdName string, vhdSizeBytes int64, controllerType string) error { - return hyperv.AddVirtualMachineHardDiskDrive(vmName, vhdFile, vhdName, vhdSizeBytes, controllerType) +func (d *HypervPS4Driver) AddVirtualMachineHardDrive(vmName string, vhdFile string, vhdName string, vhdSizeBytes int64, diskBlockSize int64, controllerType string) error { + return hyperv.AddVirtualMachineHardDiskDrive(vmName, vhdFile, vhdName, vhdSizeBytes, diskBlockSize, controllerType) } -func (d *HypervPS4Driver) CreateVirtualMachine(vmName string, path string, harddrivePath string, vhdPath string, ram int64, diskSize int64, switchName string, generation uint) error { - return hyperv.CreateVirtualMachine(vmName, path, harddrivePath, vhdPath, ram, diskSize, switchName, generation) +func (d *HypervPS4Driver) CreateVirtualMachine(vmName string, path string, harddrivePath string, vhdPath string, ram int64, diskSize int64, diskBlockSize int64, switchName string, generation uint, diffDisks bool, fixedVHD bool) error { + return hyperv.CreateVirtualMachine(vmName, path, harddrivePath, vhdPath, ram, diskSize, diskBlockSize, switchName, generation, diffDisks, fixedVHD) } func (d *HypervPS4Driver) CloneVirtualMachine(cloneFromVmxcPath string, cloneFromVmName string, cloneFromSnapshotName string, cloneAllSnapshots bool, vmName string, path string, harddrivePath string, ram int64, switchName string) error { @@ -198,8 +203,8 @@ func (d *HypervPS4Driver) SetVirtualMachineDynamicMemory(vmName string, enable b return hyperv.SetVirtualMachineDynamicMemory(vmName, enable) } -func (d *HypervPS4Driver) SetVirtualMachineSecureBoot(vmName string, enable bool) error { - return hyperv.SetVirtualMachineSecureBoot(vmName, enable) +func (d *HypervPS4Driver) SetVirtualMachineSecureBoot(vmName string, enable bool, templateName string) error { + return hyperv.SetVirtualMachineSecureBoot(vmName, enable, templateName) } func (d *HypervPS4Driver) SetVirtualMachineVirtualizationExtensions(vmName string, enable bool) error { @@ -343,3 +348,14 @@ func (d *HypervPS4Driver) verifyHypervPermissions() error { return nil } + +// Connect connects to a VM specified by the name given. +func (d *HypervPS4Driver) Connect(vmName string) (context.CancelFunc, error) { + return hyperv.ConnectVirtualMachine(vmName) +} + +// Disconnect disconnects from a VM by calling the context cancel function returned +// from Connect.
+func (d *HypervPS4Driver) Disconnect(cancel context.CancelFunc) { + hyperv.DisconnectVirtualMachine(cancel) +} diff --git a/builder/hyperv/common/run_config.go b/builder/hyperv/common/run_config.go deleted file mode 100644 index a189a0355..000000000 --- a/builder/hyperv/common/run_config.go +++ /dev/null @@ -1,32 +0,0 @@ -package common - -import ( - "fmt" - "github.com/hashicorp/packer/template/interpolate" - "time" -) - -type RunConfig struct { - RawBootWait string `mapstructure:"boot_wait"` - - BootWait time.Duration `` -} - -func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { - if c.RawBootWait == "" { - c.RawBootWait = "10s" - } - - var errs []error - var err error - - if c.RawBootWait != "" { - c.BootWait, err = time.ParseDuration(c.RawBootWait) - if err != nil { - errs = append( - errs, fmt.Errorf("Failed parsing boot_wait: %s", err)) - } - } - - return errs -} diff --git a/builder/hyperv/common/run_config_test.go b/builder/hyperv/common/run_config_test.go deleted file mode 100644 index 8068fe625..000000000 --- a/builder/hyperv/common/run_config_test.go +++ /dev/null @@ -1,37 +0,0 @@ -package common - -import ( - "testing" -) - -func TestRunConfigPrepare_BootWait(t *testing.T) { - var c *RunConfig - var errs []error - - // Test a default boot_wait - c = new(RunConfig) - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("should not have error: %s", errs) - } - - if c.RawBootWait != "10s" { - t.Fatalf("bad value: %s", c.RawBootWait) - } - - // Test with a bad boot_wait - c = new(RunConfig) - c.RawBootWait = "this is not good" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) == 0 { - t.Fatalf("bad: %#v", errs) - } - - // Test with a good one - c = new(RunConfig) - c.RawBootWait = "5s" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("should not have error: %s", errs) - } -} diff --git a/builder/hyperv/common/ssh.go b/builder/hyperv/common/ssh.go index 182d154ff..f8a059bb0 100644 --- a/builder/hyperv/common/ssh.go +++ b/builder/hyperv/common/ssh.go @@ -3,7 +3,7 @@ package common import ( commonssh "github.com/hashicorp/packer/common/ssh" "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" ) diff --git a/builder/hyperv/common/step_clone_vm.go b/builder/hyperv/common/step_clone_vm.go index 4a5b55566..ac28f9cea 100644 --- a/builder/hyperv/common/step_clone_vm.go +++ b/builder/hyperv/common/step_clone_vm.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "log" "path/filepath" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step clones an existing virtual machine. 
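// A minimal sketch of how the Connect/Disconnect driver methods added just
// above are meant to be paired: Connect hands back the context.CancelFunc
// that Disconnect later invokes. The helper name withGui is hypothetical and
// not part of this change.
func withGui(driver Driver, vmName string, build func() error) error {
	cancel, err := driver.Connect(vmName) // best effort; StepRun only logs a failure here
	if err == nil {
		defer driver.Disconnect(cancel) // stops the viewer once the build work returns
	}
	return build()
}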
@@ -26,10 +27,12 @@ type StepCloneVM struct { EnableMacSpoofing bool EnableDynamicMemory bool EnableSecureBoot bool + SecureBootTemplate string EnableVirtualizationExtensions bool + MacAddress string } -func (s *StepCloneVM) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCloneVM) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) ui.Say("Cloning virtual machine...") @@ -97,7 +100,8 @@ func (s *StepCloneVM) Run(state multistep.StateBag) multistep.StepAction { } if generation == 2 { - err = driver.SetVirtualMachineSecureBoot(s.VMName, s.EnableSecureBoot) + + err = driver.SetVirtualMachineSecureBoot(s.VMName, s.EnableSecureBoot, s.SecureBootTemplate) if err != nil { err := fmt.Errorf("Error setting secure boot: %s", err) state.Put("error", err) @@ -117,6 +121,16 @@ func (s *StepCloneVM) Run(state multistep.StateBag) multistep.StepAction { } } + if s.MacAddress != "" { + err = driver.SetVmNetworkAdapterMacAddress(s.VMName, s.MacAddress) + if err != nil { + err := fmt.Errorf("Error setting MAC address: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + } + // Set the final name in the state bag so others can use it state.Put("vmName", s.VMName) diff --git a/builder/hyperv/common/step_configure_ip.go b/builder/hyperv/common/step_configure_ip.go index deac917f8..41d945a0d 100644 --- a/builder/hyperv/common/step_configure_ip.go +++ b/builder/hyperv/common/step_configure_ip.go @@ -1,19 +1,20 @@ package common import ( + "context" "fmt" "log" "strings" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepConfigureIp struct { } -func (s *StepConfigureIp) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConfigureIp) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_configure_vlan.go b/builder/hyperv/common/step_configure_vlan.go index 5d713ac99..a167b9ed1 100644 --- a/builder/hyperv/common/step_configure_vlan.go +++ b/builder/hyperv/common/step_configure_vlan.go @@ -1,10 +1,11 @@ package common import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepConfigureVlan struct { @@ -12,7 +13,7 @@ type StepConfigureVlan struct { SwitchVlanId string } -func (s *StepConfigureVlan) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConfigureVlan) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_create_external_switch.go b/builder/hyperv/common/step_create_external_switch.go index 8443202b3..52626afce 100644 --- a/builder/hyperv/common/step_create_external_switch.go +++ b/builder/hyperv/common/step_create_external_switch.go @@ -1,11 +1,12 @@ package common import ( + "context" "fmt" "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step creates switch for VM. 
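// A minimal sketch of the step shape the hunks above migrate to, assuming
// only what this diff shows: Run gains a context.Context argument and
// multistep now comes from github.com/hashicorp/packer/helper/multistep.
// StepExample is a hypothetical name, not a step in this package.
type StepExample struct{}

func (s *StepExample) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
	// Long-running steps can now honour cancellation through ctx (see
	// StepTypeBootCommand later in this diff) instead of polling the state bag.
	select {
	case <-ctx.Done():
		return multistep.ActionHalt
	default:
		return multistep.ActionContinue
	}
}

func (s *StepExample) Cleanup(state multistep.StateBag) {}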
@@ -17,12 +18,12 @@ type StepCreateExternalSwitch struct { oldSwitchName string } -func (s *StepCreateExternalSwitch) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateExternalSwitch) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) - errorMsg := "Error createing external switch: %s" + errorMsg := "Error creating external switch: %s" var err error ui.Say("Creating external switch...") diff --git a/builder/hyperv/common/step_create_switch.go b/builder/hyperv/common/step_create_switch.go index 7e5ea2751..4b5ef48e2 100644 --- a/builder/hyperv/common/step_create_switch.go +++ b/builder/hyperv/common/step_create_switch.go @@ -1,9 +1,11 @@ package common import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) const ( @@ -31,7 +33,7 @@ type StepCreateSwitch struct { createdSwitch bool } -func (s *StepCreateSwitch) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateSwitch) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_create_tempdir.go b/builder/hyperv/common/step_create_tempdir.go index c2ac9f719..b445a4702 100644 --- a/builder/hyperv/common/step_create_tempdir.go +++ b/builder/hyperv/common/step_create_tempdir.go @@ -1,16 +1,17 @@ package common import ( + "context" "fmt" "io/ioutil" "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepCreateTempDir struct { - // The user-supplied root directores into which we create subdirectories. + // The user-supplied root directories into which we create subdirectories. TempPath string VhdTempPath string // The subdirectories with the randomly generated name. @@ -18,7 +19,7 @@ type StepCreateTempDir struct { vhdDirPath string } -func (s *StepCreateTempDir) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateTempDir) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) ui.Say("Creating temporary directory...") diff --git a/builder/hyperv/common/step_create_vm.go b/builder/hyperv/common/step_create_vm.go index a1d7f0d05..611b637b9 100644 --- a/builder/hyperv/common/step_create_vm.go +++ b/builder/hyperv/common/step_create_vm.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "log" "path/filepath" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step creates the actual virtual machine. 
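// The create-VM step below takes its sizes in megabytes and converts them to
// bytes before calling the driver. A minimal sketch of that arithmetic, with
// mbToBytes as a hypothetical helper name:
func mbToBytes(sizeMB uint) int64 {
	return int64(sizeMB * 1024 * 1024)
}

// For example, the default ram_size of 1024 MB becomes 1073741824 bytes, and
// a disk_block_size of 32 MB becomes 33554432 bytes.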
@@ -20,16 +21,23 @@ type StepCreateVM struct { HarddrivePath string RamSize uint DiskSize uint + DiskBlockSize uint Generation uint Cpu uint EnableMacSpoofing bool EnableDynamicMemory bool EnableSecureBoot bool + SecureBootTemplate string EnableVirtualizationExtensions bool AdditionalDiskSize []uint + DifferencingDisk bool + MacAddress string + SkipExport bool + OutputDir string + FixedVHD bool } -func (s *StepCreateVM) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateVM) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) ui.Say("Creating virtual machine...") @@ -50,11 +58,18 @@ func (s *StepCreateVM) Run(state multistep.StateBag) multistep.StepAction { } vhdPath := state.Get("packerVhdTempDir").(string) + + // inline vhd path if export is skipped + if s.SkipExport { + vhdPath = filepath.Join(s.OutputDir, "Virtual Hard Disks") + } + // convert the MB to bytes ramSize := int64(s.RamSize * 1024 * 1024) diskSize := int64(s.DiskSize * 1024 * 1024) + diskBlockSize := int64(s.DiskBlockSize * 1024 * 1024) - err := driver.CreateVirtualMachine(s.VMName, path, harddrivePath, vhdPath, ramSize, diskSize, s.SwitchName, s.Generation) + err := driver.CreateVirtualMachine(s.VMName, path, harddrivePath, vhdPath, ramSize, diskSize, diskBlockSize, s.SwitchName, s.Generation, s.DifferencingDisk, s.FixedVHD) if err != nil { err := fmt.Errorf("Error creating virtual machine: %s", err) state.Put("error", err) @@ -89,7 +104,7 @@ func (s *StepCreateVM) Run(state multistep.StateBag) multistep.StepAction { } if s.Generation == 2 { - err = driver.SetVirtualMachineSecureBoot(s.VMName, s.EnableSecureBoot) + err = driver.SetVirtualMachineSecureBoot(s.VMName, s.EnableSecureBoot, s.SecureBootTemplate) if err != nil { err := fmt.Errorf("Error setting secure boot: %s", err) state.Put("error", err) @@ -113,7 +128,7 @@ func (s *StepCreateVM) Run(state multistep.StateBag) multistep.StepAction { for index, size := range s.AdditionalDiskSize { diskSize := int64(size * 1024 * 1024) diskFile := fmt.Sprintf("%s-%d.vhdx", s.VMName, index) - err = driver.AddVirtualMachineHardDrive(s.VMName, vhdPath, diskFile, diskSize, "SCSI") + err = driver.AddVirtualMachineHardDrive(s.VMName, vhdPath, diskFile, diskSize, diskBlockSize, "SCSI") if err != nil { err := fmt.Errorf("Error creating and attaching additional disk drive: %s", err) state.Put("error", err) @@ -123,6 +138,16 @@ func (s *StepCreateVM) Run(state multistep.StateBag) multistep.StepAction { } } + if s.MacAddress != "" { + err = driver.SetVmNetworkAdapterMacAddress(s.VMName, s.MacAddress) + if err != nil { + err := fmt.Errorf("Error setting MAC address: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + } + // Set the final name in the state bag so others can use it state.Put("vmName", s.VMName) @@ -142,4 +167,6 @@ func (s *StepCreateVM) Cleanup(state multistep.StateBag) { if err != nil { ui.Error(fmt.Sprintf("Error deleting virtual machine: %s", err)) } + + // TODO: Clean up created VHDX } diff --git a/builder/hyperv/common/step_disable_vlan.go b/builder/hyperv/common/step_disable_vlan.go index 883ff76c4..0d4882ec1 100644 --- a/builder/hyperv/common/step_disable_vlan.go +++ b/builder/hyperv/common/step_disable_vlan.go @@ -1,15 +1,17 @@ package common import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepDisableVlan 
struct { } -func (s *StepDisableVlan) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDisableVlan) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_enable_integration_service.go b/builder/hyperv/common/step_enable_integration_service.go index 0889bed22..e3f8fd70a 100644 --- a/builder/hyperv/common/step_enable_integration_service.go +++ b/builder/hyperv/common/step_enable_integration_service.go @@ -1,16 +1,18 @@ package common import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepEnableIntegrationService struct { name string } -func (s *StepEnableIntegrationService) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepEnableIntegrationService) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) ui.Say("Enabling Integration Service...") diff --git a/builder/hyperv/common/step_export_vm.go b/builder/hyperv/common/step_export_vm.go index 8c32052a3..8161da31d 100644 --- a/builder/hyperv/common/step_export_vm.go +++ b/builder/hyperv/common/step_export_vm.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "io/ioutil" "path/filepath" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) const ( @@ -17,18 +18,19 @@ const ( type StepExportVm struct { OutputDir string SkipCompaction bool + SkipExport bool } -func (s *StepExportVm) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepExportVm) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) - var err error var errorMsg string vmName := state.Get("vmName").(string) tmpPath := state.Get("packerTempDir").(string) outputPath := s.OutputDir + expPath := s.OutputDir // create temp path to export vm errorMsg = "Error creating temp export path: %s" @@ -39,21 +41,21 @@ func (s *StepExportVm) Run(state multistep.StateBag) multistep.StepAction { ui.Error(err.Error()) return multistep.ActionHalt } + if !s.SkipExport { + ui.Say("Exporting vm...") - ui.Say("Exporting vm...") - - err = driver.ExportVirtualMachine(vmName, vmExportPath) - if err != nil { - errorMsg = "Error exporting vm: %s" - err := fmt.Errorf(errorMsg, err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt + err = driver.ExportVirtualMachine(vmName, vmExportPath) + if err != nil { + errorMsg = "Error exporting vm: %s" + err := fmt.Errorf(errorMsg, err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + // copy to output dir + expPath = filepath.Join(vmExportPath, vmName) } - // copy to output dir - expPath := filepath.Join(vmExportPath, vmName) - if s.SkipCompaction { ui.Say("Skipping disk compaction...") } else { @@ -68,16 +70,17 @@ func (s *StepExportVm) Run(state multistep.StateBag) multistep.StepAction { } } - ui.Say("Copying to output dir...") - err = driver.CopyExportedVirtualMachine(expPath, outputPath, vhdDir, vmDir) - if err != nil { - errorMsg = "Error exporting vm: %s" - err := fmt.Errorf(errorMsg, err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt + if !s.SkipExport { + ui.Say("Copying to output dir...") + err = 
driver.CopyExportedVirtualMachine(expPath, outputPath, vhdDir, vmDir) + if err != nil { + errorMsg = "Error exporting vm: %s" + err := fmt.Errorf(errorMsg, err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } } - return multistep.ActionContinue } diff --git a/builder/hyperv/common/step_mount_dvddrive.go b/builder/hyperv/common/step_mount_dvddrive.go index 1535e86b4..945c6a86d 100644 --- a/builder/hyperv/common/step_mount_dvddrive.go +++ b/builder/hyperv/common/step_mount_dvddrive.go @@ -1,19 +1,21 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "path/filepath" "strings" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepMountDvdDrive struct { Generation uint } -func (s *StepMountDvdDrive) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepMountDvdDrive) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_mount_floppydrive.go b/builder/hyperv/common/step_mount_floppydrive.go index c787b1fe7..1ae03d1db 100644 --- a/builder/hyperv/common/step_mount_floppydrive.go +++ b/builder/hyperv/common/step_mount_floppydrive.go @@ -1,14 +1,16 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "io" "io/ioutil" "log" "os" "path/filepath" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) const ( @@ -20,7 +22,7 @@ type StepMountFloppydrive struct { floppyPath string } -func (s *StepMountFloppydrive) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepMountFloppydrive) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { if s.Generation > 1 { return multistep.ActionContinue } diff --git a/builder/hyperv/common/step_mount_guest_additions.go b/builder/hyperv/common/step_mount_guest_additions.go index 954075d08..71b6c3eec 100644 --- a/builder/hyperv/common/step_mount_guest_additions.go +++ b/builder/hyperv/common/step_mount_guest_additions.go @@ -1,10 +1,12 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepMountGuestAdditions struct { @@ -13,7 +15,7 @@ type StepMountGuestAdditions struct { Generation uint } -func (s *StepMountGuestAdditions) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepMountGuestAdditions) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.GuestAdditionsMode != "attach" { diff --git a/builder/hyperv/common/step_mount_secondary_dvd_images.go b/builder/hyperv/common/step_mount_secondary_dvd_images.go index 4e7663d40..c23e39fac 100644 --- a/builder/hyperv/common/step_mount_secondary_dvd_images.go +++ b/builder/hyperv/common/step_mount_secondary_dvd_images.go @@ -1,10 +1,12 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepMountSecondaryDvdImages struct { @@ -18,7 +20,7 @@ type DvdControllerProperties struct { Existing bool } -func (s *StepMountSecondaryDvdImages) Run(state multistep.StateBag) multistep.StepAction { +func (s 
*StepMountSecondaryDvdImages) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) ui.Say("Mounting secondary DVD images...") diff --git a/builder/hyperv/common/step_output_dir.go b/builder/hyperv/common/step_output_dir.go index 0054d1994..fc39d08b2 100644 --- a/builder/hyperv/common/step_output_dir.go +++ b/builder/hyperv/common/step_output_dir.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "log" "os" "path/filepath" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepOutputDir sets up the output directory by creating it if it does @@ -21,7 +22,7 @@ type StepOutputDir struct { cleanup bool } -func (s *StepOutputDir) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepOutputDir) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if _, err := os.Stat(s.Path); err == nil { diff --git a/builder/hyperv/common/step_polling_installation.go b/builder/hyperv/common/step_polling_installation.go index 8586940b1..133418ee7 100644 --- a/builder/hyperv/common/step_polling_installation.go +++ b/builder/hyperv/common/step_polling_installation.go @@ -2,22 +2,23 @@ package common import ( "bytes" + "context" "fmt" "log" "os/exec" "strings" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) const port string = "13000" -type StepPollingInstalation struct { +type StepPollingInstallation struct { } -func (s *StepPollingInstalation) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPollingInstallation) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) errorMsg := "Error polling VM: %s" @@ -74,6 +75,6 @@ func (s *StepPollingInstalation) Run(state multistep.StateBag) multistep.StepAct return multistep.ActionContinue } -func (s *StepPollingInstalation) Cleanup(state multistep.StateBag) { +func (s *StepPollingInstallation) Cleanup(state multistep.StateBag) { } diff --git a/builder/hyperv/common/step_reboot_vm.go b/builder/hyperv/common/step_reboot_vm.go index 9446781a2..8e2bec1b8 100644 --- a/builder/hyperv/common/step_reboot_vm.go +++ b/builder/hyperv/common/step_reboot_vm.go @@ -1,16 +1,18 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepRebootVm struct { } -func (s *StepRebootVm) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRebootVm) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_run.go b/builder/hyperv/common/step_run.go index 12d621e1c..bb415ffa1 100644 --- a/builder/hyperv/common/step_run.go +++ b/builder/hyperv/common/step_run.go @@ -1,19 +1,21 @@ package common import ( + "context" "fmt" + "log" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "time" ) type StepRun struct { - BootWait time.Duration - - vmName string + GuiCancelFunc context.CancelFunc + Headless bool + vmName string } -func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRun) Run(_ 
context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) @@ -30,22 +32,13 @@ func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { s.vmName = vmName - if int64(s.BootWait) > 0 { - ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait)) - wait := time.After(s.BootWait) - WAITLOOP: - for { - select { - case <-wait: - break WAITLOOP - case <-time.After(1 * time.Second): - if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } - } + if !s.Headless { + ui.Say("Attempting to connect with vmconnect...") + s.GuiCancelFunc, err = driver.Connect(vmName) + if err != nil { + log.Printf(fmt.Sprintf("Non-fatal error starting vmconnect: %s. continuing...", err)) } } - return multistep.ActionContinue } @@ -57,6 +50,11 @@ func (s *StepRun) Cleanup(state multistep.StateBag) { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) + if !s.Headless && s.GuiCancelFunc != nil { + ui.Say("Disconnecting from vmconnect...") + s.GuiCancelFunc() + } + if running, _ := driver.IsRunning(s.vmName); running { if err := driver.Stop(s.vmName); err != nil { ui.Error(fmt.Sprintf("Error shutting down VM: %s", err)) diff --git a/builder/hyperv/common/step_shutdown.go b/builder/hyperv/common/step_shutdown.go index 9b341b395..b37716f60 100644 --- a/builder/hyperv/common/step_shutdown.go +++ b/builder/hyperv/common/step_shutdown.go @@ -2,13 +2,14 @@ package common import ( "bytes" + "context" "errors" "fmt" "log" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step shuts down the machine. It first attempts to do so gracefully, @@ -28,7 +29,7 @@ type StepShutdown struct { Timeout time.Duration } -func (s *StepShutdown) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepShutdown) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) driver := state.Get("driver").(Driver) diff --git a/builder/hyperv/common/step_sleep.go b/builder/hyperv/common/step_sleep.go index 23f7e344f..e65bcf0b1 100644 --- a/builder/hyperv/common/step_sleep.go +++ b/builder/hyperv/common/step_sleep.go @@ -1,10 +1,12 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepSleep struct { @@ -12,7 +14,7 @@ type StepSleep struct { ActionName string } -func (s *StepSleep) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepSleep) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if len(s.ActionName) > 0 { diff --git a/builder/hyperv/common/step_type_boot_command.go b/builder/hyperv/common/step_type_boot_command.go index dfeca6196..de96d4293 100644 --- a/builder/hyperv/common/step_type_boot_command.go +++ b/builder/hyperv/common/step_type_boot_command.go @@ -1,16 +1,16 @@ package common import ( + "context" "fmt" - "log" "strings" - "unicode" - "unicode/utf8" + "time" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type bootCommandTemplateData struct { 
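// A condensed sketch of the boot-command flow the following hunk introduces,
// using only calls visible in this diff; typeBootCommand is a hypothetical
// wrapper, and command stands for the already-interpolated boot_command string.
func typeBootCommand(ctx context.Context, driver Driver, vmName, command string) error {
	sendCodes := func(codes []string) error {
		// Hyper-V still receives the scancodes as one space-separated string.
		return driver.TypeScanCodes(vmName, strings.Join(codes, " "))
	}
	keyboard := bootcommand.NewPCXTDriver(sendCodes, -1) // arguments as in the hunk below
	seq, err := bootcommand.GenerateExpressionSequence(command)
	if err != nil {
		return err
	}
	return seq.Do(ctx, keyboard) // types the sequence, honouring ctx cancellation
}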
@@ -21,17 +21,29 @@ type bootCommandTemplateData struct { // This step "types" the boot command into the VM via the Hyper-V virtual keyboard type StepTypeBootCommand struct { - BootCommand []string + BootCommand string + BootWait time.Duration SwitchName string Ctx interpolate.Context } -func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepTypeBootCommand) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { httpPort := state.Get("http_port").(uint) ui := state.Get("ui").(packer.Ui) driver := state.Get("driver").(Driver) vmName := state.Get("vmName").(string) + // Wait the for the vm to boot. + if int64(s.BootWait) > 0 { + ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait.String())) + select { + case <-time.After(s.BootWait): + break + case <-ctx.Done(): + return multistep.ActionHalt + } + } + hostIp, err := driver.GetHostAdapterIpAddressForSwitch(s.SwitchName) if err != nil { @@ -50,26 +62,32 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction vmName, } + sendCodes := func(codes []string) error { + scanCodesToSendString := strings.Join(codes, " ") + return driver.TypeScanCodes(vmName, scanCodesToSendString) + } + d := bootcommand.NewPCXTDriver(sendCodes, -1) + ui.Say("Typing the boot command...") - scanCodesToSend := []string{} + command, err := interpolate.Render(s.BootCommand, &s.Ctx) - for _, command := range s.BootCommand { - command, err := interpolate.Render(command, &s.Ctx) - - if err != nil { - err := fmt.Errorf("Error preparing boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - - scanCodesToSend = append(scanCodesToSend, scancodes(command)...) + if err != nil { + err := fmt.Errorf("Error preparing boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt } - scanCodesToSendString := strings.Join(scanCodesToSend, " ") + seq, err := bootcommand.GenerateExpressionSequence(command) + if err != nil { + err := fmt.Errorf("Error generating boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - if err := driver.TypeScanCodes(vmName, scanCodesToSendString); err != nil { - err := fmt.Errorf("Error sending boot command: %s", err) + if err := seq.Do(ctx, d); err != nil { + err := fmt.Errorf("Error running boot command: %s", err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt @@ -79,202 +97,3 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction } func (*StepTypeBootCommand) Cleanup(multistep.StateBag) {} - -func scancodes(message string) []string { - // Scancodes reference: http://www.win.tue.nl/~aeb/linux/kbd/scancodes-1.html - // - // Scancodes represent raw keyboard output and are fed to the VM by using - // powershell to use Msvm_Keyboard - // - // Scancodes are recorded here in pairs. The first entry represents - // the key press and the second entry represents the key release and is - // derived from the first by the addition of 0x80. 
- special := make(map[string][]string) - special[""] = []string{"0e", "8e"} - special[""] = []string{"53", "d3"} - special[""] = []string{"1c", "9c"} - special[""] = []string{"01", "81"} - special[""] = []string{"3b", "bb"} - special[""] = []string{"3c", "bc"} - special[""] = []string{"3d", "bd"} - special[""] = []string{"3e", "be"} - special[""] = []string{"3f", "bf"} - special[""] = []string{"40", "c0"} - special[""] = []string{"41", "c1"} - special[""] = []string{"42", "c2"} - special[""] = []string{"43", "c3"} - special[""] = []string{"44", "c4"} - special[""] = []string{"1c", "9c"} - special[""] = []string{"0f", "8f"} - special[""] = []string{"48", "c8"} - special[""] = []string{"50", "d0"} - special[""] = []string{"4b", "cb"} - special[""] = []string{"4d", "cd"} - special[""] = []string{"39", "b9"} - special[""] = []string{"52", "d2"} - special[""] = []string{"47", "c7"} - special[""] = []string{"4f", "cf"} - special[""] = []string{"49", "c9"} - special[""] = []string{"51", "d1"} - special[""] = []string{"38", "b8"} - special[""] = []string{"1d", "9d"} - special[""] = []string{"2a", "aa"} - special[""] = []string{"e038", "e0b8"} - special[""] = []string{"e01d", "e09d"} - special[""] = []string{"36", "b6"} - - shiftedChars := "~!@#$%^&*()_+{}|:\"<>?" - - scancodeIndex := make(map[string]uint) - scancodeIndex["1234567890-="] = 0x02 - scancodeIndex["!@#$%^&*()_+"] = 0x02 - scancodeIndex["qwertyuiop[]"] = 0x10 - scancodeIndex["QWERTYUIOP{}"] = 0x10 - scancodeIndex["asdfghjkl;'`"] = 0x1e - scancodeIndex[`ASDFGHJKL:"~`] = 0x1e - scancodeIndex[`\zxcvbnm,./`] = 0x2b - scancodeIndex["|ZXCVBNM<>?"] = 0x2b - scancodeIndex[" "] = 0x39 - - scancodeMap := make(map[rune]uint) - for chars, start := range scancodeIndex { - var i uint = 0 - for len(chars) > 0 { - r, size := utf8.DecodeRuneInString(chars) - chars = chars[size:] - scancodeMap[r] = start + i - i += 1 - } - } - - result := make([]string, 0, len(message)*2) - for len(message) > 0 { - var scancode []string - - if strings.HasPrefix(message, "") { - scancode = []string{"38"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: 38") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"1d"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: 1d") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"2a"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: 2a") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"b8"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: b8") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"9d"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: 9d") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"aa"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: aa") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"e038"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: e038") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"e01d"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: e01d") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"36"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: 36") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"e0b8"} - message = message[len(""):] - 
log.Printf("Special code '' found, replacing with: e0b8") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"e09d"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: e09d") - } - - if strings.HasPrefix(message, "") { - scancode = []string{"b6"} - message = message[len(""):] - log.Printf("Special code '' found, replacing with: b6") - } - - if strings.HasPrefix(message, "") { - log.Printf("Special code found, will sleep 1 second at this point.") - scancode = []string{"wait"} - message = message[len(""):] - } - - if strings.HasPrefix(message, "") { - log.Printf("Special code found, will sleep 5 seconds at this point.") - scancode = []string{"wait5"} - message = message[len(""):] - } - - if strings.HasPrefix(message, "") { - log.Printf("Special code found, will sleep 10 seconds at this point.") - scancode = []string{"wait10"} - message = message[len(""):] - } - - if scancode == nil { - for specialCode, specialValue := range special { - if strings.HasPrefix(message, specialCode) { - log.Printf("Special code '%s' found, replacing with: %s", specialCode, specialValue) - scancode = specialValue - message = message[len(specialCode):] - break - } - } - } - - if scancode == nil { - r, size := utf8.DecodeRuneInString(message) - message = message[size:] - scancodeInt := scancodeMap[r] - keyShift := unicode.IsUpper(r) || strings.ContainsRune(shiftedChars, r) - - scancode = make([]string, 0, 4) - if keyShift { - scancode = append(scancode, "2a") - } - - scancode = append(scancode, fmt.Sprintf("%02x", scancodeInt)) - - if keyShift { - scancode = append(scancode, "aa") - } - - scancode = append(scancode, fmt.Sprintf("%02x", scancodeInt+0x80)) - log.Printf("Sending char '%c', code '%v', shift %v", r, scancode, keyShift) - } - - result = append(result, scancode...) 
- } - - return result -} diff --git a/builder/hyperv/common/step_unmount_dvddrive.go b/builder/hyperv/common/step_unmount_dvddrive.go index 7ba748f2f..010e8b067 100644 --- a/builder/hyperv/common/step_unmount_dvddrive.go +++ b/builder/hyperv/common/step_unmount_dvddrive.go @@ -1,15 +1,17 @@ package common import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepUnmountDvdDrive struct { } -func (s *StepUnmountDvdDrive) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUnmountDvdDrive) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) vmName := state.Get("vmName").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_unmount_floppydrive.go b/builder/hyperv/common/step_unmount_floppydrive.go index 2d271584d..34f09aaa0 100644 --- a/builder/hyperv/common/step_unmount_floppydrive.go +++ b/builder/hyperv/common/step_unmount_floppydrive.go @@ -1,16 +1,18 @@ package common import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepUnmountFloppyDrive struct { Generation uint } -func (s *StepUnmountFloppyDrive) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUnmountFloppyDrive) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_unmount_guest_additions.go b/builder/hyperv/common/step_unmount_guest_additions.go index b47e969d1..06600ddaa 100644 --- a/builder/hyperv/common/step_unmount_guest_additions.go +++ b/builder/hyperv/common/step_unmount_guest_additions.go @@ -1,15 +1,17 @@ package common import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepUnmountGuestAdditions struct { } -func (s *StepUnmountGuestAdditions) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUnmountGuestAdditions) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) vmName := state.Get("vmName").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/hyperv/common/step_unmount_secondary_dvd_images.go b/builder/hyperv/common/step_unmount_secondary_dvd_images.go index e59ad6014..0f74d2789 100644 --- a/builder/hyperv/common/step_unmount_secondary_dvd_images.go +++ b/builder/hyperv/common/step_unmount_secondary_dvd_images.go @@ -1,15 +1,17 @@ package common import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepUnmountSecondaryDvdImages struct { } -func (s *StepUnmountSecondaryDvdImages) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUnmountSecondaryDvdImages) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/hyperv/common/step_wait_for_install_to_complete.go b/builder/hyperv/common/step_wait_for_install_to_complete.go index 51736ddb5..0a40730b3 100644 --- a/builder/hyperv/common/step_wait_for_install_to_complete.go +++ b/builder/hyperv/common/step_wait_for_install_to_complete.go @@ -1,10 +1,12 @@ package common 
import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) const ( @@ -14,7 +16,7 @@ const ( type StepWaitForPowerOff struct { } -func (s *StepWaitForPowerOff) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepWaitForPowerOff) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) @@ -48,7 +50,7 @@ type StepWaitForInstallToComplete struct { ActionName string } -func (s *StepWaitForInstallToComplete) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepWaitForInstallToComplete) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/hyperv/iso/builder.go b/builder/hyperv/iso/builder.go index b3c1e1f61..d50e5e26a 100644 --- a/builder/hyperv/iso/builder.go +++ b/builder/hyperv/iso/builder.go @@ -10,19 +10,25 @@ import ( hypervcommon "github.com/hashicorp/packer/builder/hyperv/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" powershell "github.com/hashicorp/packer/common/powershell" "github.com/hashicorp/packer/common/powershell/hyperv" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const ( DefaultDiskSize = 40 * 1024 // ~40GB MinDiskSize = 256 // 256MB MaxDiskSize = 64 * 1024 * 1024 // 64TB + MaxVHDSize = 2040 * 1024 // 2040GB + + DefaultDiskBlockSize = 32 // 32MB + MinDiskBlockSize = 1 // 1MB + MaxDiskBlockSize = 256 // 256MB DefaultRamSize = 1 * 1024 // 1GB MinRamSize = 32 // 32MB @@ -47,17 +53,23 @@ type Config struct { common.HTTPConfig `mapstructure:",squash"` common.ISOConfig `mapstructure:",squash"` common.FloppyConfig `mapstructure:",squash"` + bootcommand.BootConfig `mapstructure:",squash"` hypervcommon.OutputConfig `mapstructure:",squash"` hypervcommon.SSHConfig `mapstructure:",squash"` - hypervcommon.RunConfig `mapstructure:",squash"` hypervcommon.ShutdownConfig `mapstructure:",squash"` // The size, in megabytes, of the hard disk to create for the VM. // By default, this is 130048 (about 127 GB). DiskSize uint `mapstructure:"disk_size"` + + // The size, in megabytes, of the block size used to create the hard disk. + // By default, this is 32768 (about 32 MB) + DiskBlockSize uint `mapstructure:"disk_block_size"` + // The size, in megabytes, of the computer memory in the VM. // By default, this is 1024 (about 1 GB). RamSize uint `mapstructure:"ram_size"` + // SecondaryDvdImages []string `mapstructure:"secondary_iso_images"` @@ -71,17 +83,18 @@ type Config struct { // By default this is "packer-BUILDNAME", where "BUILDNAME" is the name of the build. 
VMName string `mapstructure:"vm_name"` - BootCommand []string `mapstructure:"boot_command"` - SwitchName string `mapstructure:"switch_name"` - SwitchVlanId string `mapstructure:"switch_vlan_id"` - VlanId string `mapstructure:"vlan_id"` - Cpu uint `mapstructure:"cpu"` - Generation uint `mapstructure:"generation"` - EnableMacSpoofing bool `mapstructure:"enable_mac_spoofing"` - EnableDynamicMemory bool `mapstructure:"enable_dynamic_memory"` - EnableSecureBoot bool `mapstructure:"enable_secure_boot"` - EnableVirtualizationExtensions bool `mapstructure:"enable_virtualization_extensions"` - TempPath string `mapstructure:"temp_path"` + SwitchName string `mapstructure:"switch_name"` + SwitchVlanId string `mapstructure:"switch_vlan_id"` + MacAddress string `mapstructure:"mac_address"` + VlanId string `mapstructure:"vlan_id"` + Cpu uint `mapstructure:"cpu"` + Generation uint `mapstructure:"generation"` + EnableMacSpoofing bool `mapstructure:"enable_mac_spoofing"` + EnableDynamicMemory bool `mapstructure:"enable_dynamic_memory"` + EnableSecureBoot bool `mapstructure:"enable_secure_boot"` + SecureBootTemplate string `mapstructure:"secure_boot_template"` + EnableVirtualizationExtensions bool `mapstructure:"enable_virtualization_extensions"` + TempPath string `mapstructure:"temp_path"` // A separate path can be used for storing the VM's disk image. The purpose is to enable // reading and writing to take place on different physical disks (read from VHD temp path @@ -94,6 +107,16 @@ type Config struct { SkipCompaction bool `mapstructure:"skip_compaction"` + SkipExport bool `mapstructure:"skip_export"` + + // Use differencing disk + DifferencingDisk bool `mapstructure:"differencing_disk"` + + // Create the VM with a Fixed VHD format disk instead of Dynamic VHDX + FixedVHD bool `mapstructure:"use_fixed_vhd_format"` + + Headless bool `mapstructure:"headless"` + ctx interpolate.Context } @@ -120,9 +143,9 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { warnings = append(warnings, isoWarnings...) errs = packer.MultiErrorAppend(errs, isoErrs...) + errs = packer.MultiErrorAppend(errs, b.config.BootConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.FloppyConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.HTTPConfig.Prepare(&b.config.ctx)...) - errs = packer.MultiErrorAppend(errs, b.config.RunConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.OutputConfig.Prepare(&b.config.ctx, &b.config.PackerConfig)...) errs = packer.MultiErrorAppend(errs, b.config.SSHConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.ShutdownConfig.Prepare(&b.config.ctx)...) 
@@ -135,6 +158,11 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } } + err = b.checkDiskBlockSize() + if err != nil { + errs = packer.MultiErrorAppend(errs, err) + } + err = b.checkRamSize() if err != nil { errs = packer.MultiErrorAppend(errs, err) @@ -166,7 +194,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } if len(b.config.AdditionalDiskSize) > 64 { - err = errors.New("VM's currently support a maximun of 64 additional SCSI attached disks.") + err = errors.New("VM's currently support a maximum of 64 additional SCSI attached disks.") errs = packer.MultiErrorAppend(errs, err) } @@ -249,6 +277,21 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } } + if b.config.Generation > 1 && b.config.FixedVHD { + err = errors.New("Fixed VHD disks are only supported on Generation 1 virtual machines.") + errs = packer.MultiErrorAppend(errs, err) + } + + if !b.config.SkipCompaction && b.config.FixedVHD { + err = errors.New("Fixed VHD disks do not support compaction.") + errs = packer.MultiErrorAppend(errs, err) + } + + if b.config.DifferencingDisk && b.config.FixedVHD { + err = errors.New("Fixed VHD disks are not supported with differencing disks.") + errs = packer.MultiErrorAppend(errs, err) + } + // Warnings if b.config.ShutdownCommand == "" { @@ -346,13 +389,20 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe SwitchName: b.config.SwitchName, RamSize: b.config.RamSize, DiskSize: b.config.DiskSize, + DiskBlockSize: b.config.DiskBlockSize, Generation: b.config.Generation, Cpu: b.config.Cpu, EnableMacSpoofing: b.config.EnableMacSpoofing, EnableDynamicMemory: b.config.EnableDynamicMemory, EnableSecureBoot: b.config.EnableSecureBoot, + SecureBootTemplate: b.config.SecureBootTemplate, EnableVirtualizationExtensions: b.config.EnableVirtualizationExtensions, AdditionalDiskSize: b.config.AdditionalDiskSize, + DifferencingDisk: b.config.DifferencingDisk, + SkipExport: b.config.SkipExport, + OutputDir: b.config.OutputDir, + MacAddress: b.config.MacAddress, + FixedVHD: b.config.FixedVHD, }, &hypervcommon.StepEnableIntegrationService{}, @@ -380,11 +430,12 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &hypervcommon.StepRun{ - BootWait: b.config.BootWait, + Headless: b.config.Headless, }, &hypervcommon.StepTypeBootCommand{ - BootCommand: b.config.BootCommand, + BootCommand: b.config.FlatBootCommand(), + BootWait: b.config.BootWait, SwitchName: b.config.SwitchName, Ctx: b.config.ctx, }, @@ -418,6 +469,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe &hypervcommon.StepExportVm{ OutputDir: b.config.OutputDir, SkipCompaction: b.config.SkipCompaction, + SkipExport: b.config.SkipExport, }, // the clean up actions for each step will be executed reverse order @@ -475,8 +527,26 @@ func (b *Builder) checkDiskSize() error { if b.config.DiskSize < MinDiskSize { return fmt.Errorf("disk_size: Virtual machine requires disk space >= %v GB, but defined: %v", MinDiskSize, b.config.DiskSize/1024) - } else if b.config.DiskSize > MaxDiskSize { + } else if b.config.DiskSize > MaxDiskSize && !b.config.FixedVHD { return fmt.Errorf("disk_size: Virtual machine requires disk space <= %v GB, but defined: %v", MaxDiskSize, b.config.DiskSize/1024) + } else if b.config.DiskSize > MaxVHDSize && b.config.FixedVHD { + return fmt.Errorf("disk_size: Virtual machine requires disk space <= %v GB, but defined: %v", MaxVHDSize/1024, b.config.DiskSize/1024) + } + + return nil +} 
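// Worked numbers for the bounds enforced above, using the constants declared
// at the top of this file: MinDiskSize = 256 (MB); MaxDiskSize = 64*1024*1024
// = 67108864 MB (64 TB) for the default dynamic VHDX; MaxVHDSize = 2040*1024
// = 2088960 MB (2040 GB) when use_fixed_vhd_format is set. For example,
// disk_size = 3145728 (3 TB) is accepted with the dynamic format but rejected
// for a fixed VHD, since 3145728 > 2088960.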
+ +func (b *Builder) checkDiskBlockSize() error { + if b.config.DiskBlockSize == 0 { + b.config.DiskBlockSize = DefaultDiskBlockSize + } + + log.Println(fmt.Sprintf("%s: %v", "DiskBlockSize", b.config.DiskBlockSize)) + + if b.config.DiskBlockSize < MinDiskBlockSize { + return fmt.Errorf("disk_block_size: Virtual machine requires disk block size >= %v MB, but defined: %v", MinDiskBlockSize, b.config.DiskBlockSize) + } else if b.config.DiskBlockSize > MaxDiskBlockSize { + return fmt.Errorf("disk_block_size: Virtual machine requires disk block size <= %v MB, but defined: %v", MaxDiskBlockSize, b.config.DiskBlockSize) } return nil diff --git a/builder/hyperv/iso/builder_test.go b/builder/hyperv/iso/builder_test.go index 73ae5591b..4a63f3c67 100644 --- a/builder/hyperv/iso/builder_test.go +++ b/builder/hyperv/iso/builder_test.go @@ -1,11 +1,16 @@ package iso import ( + "context" "fmt" "reflect" "strconv" "testing" + "os" + + hypervcommon "github.com/hashicorp/packer/builder/hyperv/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" ) @@ -18,6 +23,7 @@ func testConfig() map[string]interface{} { "ssh_username": "foo", "ram_size": 64, "disk_size": 256, + "disk_block_size": 1, "guest_additions_mode": "none", "disk_additional_size": "50000,40000,30000", packer.BuildNameConfigKey: "foo", @@ -81,6 +87,111 @@ func TestBuilderPrepare_DiskSize(t *testing.T) { } } +func TestBuilderPrepare_DiskBlockSize(t *testing.T) { + var b Builder + config := testConfig() + expected_default_block_size := uint(32) + expected_min_block_size := uint(0) + expected_max_block_size := uint(256) + + // Test default with empty disk_block_size + delete(config, "disk_block_size") + warns, err := b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("bad err: %s", err) + } + if b.config.DiskBlockSize != expected_default_block_size { + t.Fatalf("bad default block size with empty config: %d. Expected %d", b.config.DiskBlockSize, expected_default_block_size) + } + + test_sizes := []uint{0, 1, 32, 256, 512, 1 * 1024, 32 * 1024} + for _, test_size := range test_sizes { + config["disk_block_size"] = test_size + b = Builder{} + warns, err = b.Prepare(config) + if test_size > expected_max_block_size || test_size < expected_min_block_size { + if len(warns) > 0 { + t.Fatalf("bad, should have no warns: %#v", warns) + } + if err == nil { + t.Fatalf("bad, should have error but didn't. disk_block_size=%d outside expected valid range [%d,%d]", test_size, expected_min_block_size, expected_max_block_size) + } + } else { + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("bad, should not have error: %s", err) + } + if test_size == 0 { + if b.config.DiskBlockSize != expected_default_block_size { + t.Fatalf("bad default block size with 0 value config: %d. Expected: %d", b.config.DiskBlockSize, expected_default_block_size) + } + } else { + if b.config.DiskBlockSize != test_size { + t.Fatalf("bad block size with 0 value config: %d. 
Expected: %d", b.config.DiskBlockSize, expected_default_block_size) + } + } + } + } +} + +func TestBuilderPrepare_FixedVHDFormat(t *testing.T) { + var b Builder + config := testConfig() + config["use_fixed_vhd_format"] = true + config["generation"] = 1 + config["skip_compaction"] = true + config["differencing_disk"] = false + + //use_fixed_vhd_format should work with generation = 1, skip_compaction = true, and differencing_disk = false + warns, err := b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("bad err: %s", err) + } + + //use_fixed_vhd_format should not work with differencing_disk = true + config["differencing_disk"] = true + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } + config["differencing_disk"] = false + + //use_fixed_vhd_format should not work with skip_compaction = false + config["skip_compaction"] = false + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } + config["skip_compaction"] = true + + //use_fixed_vhd_format should not work with generation = 2 + config["generation"] = 2 + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } +} + func TestBuilderPrepare_FloppyFiles(t *testing.T) { var b Builder config := testConfig() @@ -472,3 +583,45 @@ func TestBuilderPrepare_CommConfig(t *testing.T) { } } + +func TestUserVariablesInBootCommand(t *testing.T) { + var b Builder + config := testConfig() + + config[packer.UserVariablesConfigKey] = map[string]string{"test-variable": "test"} + config["boot_command"] = []string{"blah {{user `test-variable`}} blah"} + + warns, err := b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + ui := packer.TestUi(t) + cache := &packer.FileCache{CacheDir: os.TempDir()} + hook := &packer.MockHook{} + driver := &hypervcommon.DriverMock{} + + // Set up the state. 
+ state := new(multistep.BasicStateBag) + state.Put("cache", cache) + state.Put("config", &b.config) + state.Put("driver", driver) + state.Put("hook", hook) + state.Put("http_port", uint(0)) + state.Put("ui", ui) + state.Put("vmName", "packer-foo") + + step := &hypervcommon.StepTypeBootCommand{ + BootCommand: b.config.FlatBootCommand(), + SwitchName: b.config.SwitchName, + Ctx: b.config.ctx, + } + + ret := step.Run(context.Background(), state) + if ret != multistep.ActionContinue { + t.Fatalf("should not have error: %#v", ret) + } +} diff --git a/builder/hyperv/vmcx/builder.go b/builder/hyperv/vmcx/builder.go index a9583327a..987c41a12 100644 --- a/builder/hyperv/vmcx/builder.go +++ b/builder/hyperv/vmcx/builder.go @@ -9,13 +9,14 @@ import ( hypervcommon "github.com/hashicorp/packer/builder/hyperv/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" powershell "github.com/hashicorp/packer/common/powershell" "github.com/hashicorp/packer/common/powershell/hyperv" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const ( @@ -42,9 +43,9 @@ type Config struct { common.HTTPConfig `mapstructure:",squash"` common.ISOConfig `mapstructure:",squash"` common.FloppyConfig `mapstructure:",squash"` + bootcommand.BootConfig `mapstructure:",squash"` hypervcommon.OutputConfig `mapstructure:",squash"` hypervcommon.SSHConfig `mapstructure:",squash"` - hypervcommon.RunConfig `mapstructure:",squash"` hypervcommon.ShutdownConfig `mapstructure:",squash"` // The size, in megabytes, of the computer memory in the VM. @@ -76,28 +77,37 @@ type Config struct { // By default this is "packer-BUILDNAME", where "BUILDNAME" is the name of the build. VMName string `mapstructure:"vm_name"` - BootCommand []string `mapstructure:"boot_command"` - SwitchName string `mapstructure:"switch_name"` - SwitchVlanId string `mapstructure:"switch_vlan_id"` - VlanId string `mapstructure:"vlan_id"` - Cpu uint `mapstructure:"cpu"` + // Use differencing disk + DifferencingDisk bool `mapstructure:"differencing_disk"` + + SwitchName string `mapstructure:"switch_name"` + SwitchVlanId string `mapstructure:"switch_vlan_id"` + MacAddress string `mapstructure:"mac_address"` + VlanId string `mapstructure:"vlan_id"` + Cpu uint `mapstructure:"cpu"` Generation uint - EnableMacSpoofing bool `mapstructure:"enable_mac_spoofing"` - EnableDynamicMemory bool `mapstructure:"enable_dynamic_memory"` - EnableSecureBoot bool `mapstructure:"enable_secure_boot"` - EnableVirtualizationExtensions bool `mapstructure:"enable_virtualization_extensions"` + EnableMacSpoofing bool `mapstructure:"enable_mac_spoofing"` + EnableDynamicMemory bool `mapstructure:"enable_dynamic_memory"` + EnableSecureBoot bool `mapstructure:"enable_secure_boot"` + SecureBootTemplate string `mapstructure:"secure_boot_template"` + EnableVirtualizationExtensions bool `mapstructure:"enable_virtualization_extensions"` Communicator string `mapstructure:"communicator"` SkipCompaction bool `mapstructure:"skip_compaction"` + SkipExport bool `mapstructure:"skip_export"` + + Headless bool `mapstructure:"headless"` + ctx interpolate.Context } // Prepare processes the build configuration parameters. 
func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { err := config.Decode(&b.config, &config.DecodeOpts{ - Interpolate: true, + Interpolate: true, + InterpolateContext: &b.config.ctx, InterpolateFilter: &interpolate.RenderFilter{ Exclude: []string{ "boot_command", @@ -118,9 +128,9 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { errs = packer.MultiErrorAppend(errs, isoErrs...) } + errs = packer.MultiErrorAppend(errs, b.config.BootConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.FloppyConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.HTTPConfig.Prepare(&b.config.ctx)...) - errs = packer.MultiErrorAppend(errs, b.config.RunConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.OutputConfig.Prepare(&b.config.ctx, &b.config.PackerConfig)...) errs = packer.MultiErrorAppend(errs, b.config.SSHConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.ShutdownConfig.Prepare(&b.config.ctx)...) @@ -398,7 +408,9 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe EnableMacSpoofing: b.config.EnableMacSpoofing, EnableDynamicMemory: b.config.EnableDynamicMemory, EnableSecureBoot: b.config.EnableSecureBoot, + SecureBootTemplate: b.config.SecureBootTemplate, EnableVirtualizationExtensions: b.config.EnableVirtualizationExtensions, + MacAddress: b.config.MacAddress, }, &hypervcommon.StepEnableIntegrationService{}, @@ -427,11 +439,12 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe }, &hypervcommon.StepRun{ - BootWait: b.config.BootWait, + Headless: b.config.Headless, }, &hypervcommon.StepTypeBootCommand{ - BootCommand: b.config.BootCommand, + BootCommand: b.config.FlatBootCommand(), + BootWait: b.config.BootWait, SwitchName: b.config.SwitchName, Ctx: b.config.ctx, }, @@ -465,6 +478,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe &hypervcommon.StepExportVm{ OutputDir: b.config.OutputDir, SkipCompaction: b.config.SkipCompaction, + SkipExport: b.config.SkipExport, }, // the clean up actions for each step will be executed reverse order diff --git a/builder/hyperv/vmcx/builder_test.go b/builder/hyperv/vmcx/builder_test.go index 209094bf3..9a4a54c33 100644 --- a/builder/hyperv/vmcx/builder_test.go +++ b/builder/hyperv/vmcx/builder_test.go @@ -1,13 +1,17 @@ package vmcx import ( + "context" + "fmt" "reflect" "testing" - "fmt" - "github.com/hashicorp/packer/packer" "io/ioutil" "os" + + hypervcommon "github.com/hashicorp/packer/builder/hyperv/common" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { @@ -486,3 +490,53 @@ func TestBuilderPrepare_CommConfig(t *testing.T) { } } } + +func TestUserVariablesInBootCommand(t *testing.T) { + var b Builder + config := testConfig() + + //Create vmxc folder + td, err := ioutil.TempDir("", "packer") + if err != nil { + t.Fatalf("err: %s", err) + } + defer os.RemoveAll(td) + config["clone_from_vmxc_path"] = td + + config[packer.UserVariablesConfigKey] = map[string]string{"test-variable": "test"} + config["boot_command"] = []string{"blah {{user `test-variable`}} blah"} + + warns, err := b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + ui := packer.TestUi(t) + cache := &packer.FileCache{CacheDir: os.TempDir()} + hook := &packer.MockHook{} + driver := 
&hypervcommon.DriverMock{} + + // Set up the state. + state := new(multistep.BasicStateBag) + state.Put("cache", cache) + state.Put("config", &b.config) + state.Put("driver", driver) + state.Put("hook", hook) + state.Put("http_port", uint(0)) + state.Put("ui", ui) + state.Put("vmName", "packer-foo") + + step := &hypervcommon.StepTypeBootCommand{ + BootCommand: b.config.FlatBootCommand(), + SwitchName: b.config.SwitchName, + Ctx: b.config.ctx, + } + + ret := step.Run(context.Background(), state) + if ret != multistep.ActionContinue { + t.Fatalf("should not have error: %#v", ret) + } +} diff --git a/builder/lxc/builder.go b/builder/lxc/builder.go index 31770b740..8c239cd9b 100644 --- a/builder/lxc/builder.go +++ b/builder/lxc/builder.go @@ -1,13 +1,14 @@ package lxc import ( - "github.com/hashicorp/packer/common" - "github.com/hashicorp/packer/packer" - "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" "log" "os" "path/filepath" + + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/template/interpolate" ) // The unique ID for this builder diff --git a/builder/lxc/command.go b/builder/lxc/command.go index af81cff83..4afa56844 100644 --- a/builder/lxc/command.go +++ b/builder/lxc/command.go @@ -1,7 +1,11 @@ package lxc import ( + "bytes" + "fmt" + "log" "os/exec" + "strings" ) // CommandWrapper is a type that given a command, will possibly modify that @@ -13,3 +17,25 @@ type CommandWrapper func(string) (string, error) func ShellCommand(command string) *exec.Cmd { return exec.Command("/bin/sh", "-c", command) } + +func RunCommand(args ...string) error { + var stdout, stderr bytes.Buffer + + log.Printf("Executing args: %#v", args) + cmd := exec.Command(args[0], args[1:]...) + cmd.Stdout = &stdout + cmd.Stderr = &stderr + err := cmd.Run() + + stdoutString := strings.TrimSpace(stdout.String()) + stderrString := strings.TrimSpace(stderr.String()) + + if _, ok := err.(*exec.ExitError); ok { + err = fmt.Errorf("Command error: %s", stderrString) + } + + log.Printf("stdout: %s", stdoutString) + log.Printf("stderr: %s", stderrString) + + return err +} diff --git a/builder/lxc/communicator.go b/builder/lxc/communicator.go index 6e41ace60..3a7505d1f 100644 --- a/builder/lxc/communicator.go +++ b/builder/lxc/communicator.go @@ -2,7 +2,6 @@ package lxc import ( "fmt" - "github.com/hashicorp/packer/packer" "io" "io/ioutil" "log" @@ -11,6 +10,8 @@ import ( "path/filepath" "strings" "syscall" + + "github.com/hashicorp/packer/packer" ) type LxcAttachCommunicator struct { @@ -72,6 +73,24 @@ func (c *LxcAttachCommunicator) Upload(dst string, r io.Reader, fi *os.FileInfo) return err } + if fi != nil { + tfDir := filepath.Dir(tf.Name()) + // rename tempfile to match original file name. This makes sure that if file is being + // moved into a directory, the filename is preserved instead of a temp name. 
+ adjustedTempName := filepath.Join(tfDir, (*fi).Name()) + mvCmd, err := c.CmdWrapper(fmt.Sprintf("sudo mv %s %s", tf.Name(), adjustedTempName)) + if err != nil { + return err + } + defer os.Remove(adjustedTempName) + ShellCommand(mvCmd).Run() + // change cpCmd to use new file name as source + cpCmd, err = c.CmdWrapper(fmt.Sprintf("sudo cp %s %s", adjustedTempName, dst)) + if err != nil { + return err + } + } + log.Printf("Running copy command: %s", dst) return ShellCommand(cpCmd).Run() diff --git a/builder/lxc/communicator_test.go b/builder/lxc/communicator_test.go index 854ba6680..4a81a36a7 100644 --- a/builder/lxc/communicator_test.go +++ b/builder/lxc/communicator_test.go @@ -1,8 +1,9 @@ package lxc import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestCommunicator_ImplementsCommunicator(t *testing.T) { diff --git a/builder/lxc/config.go b/builder/lxc/config.go index 5d49dbfd6..b0736a63f 100644 --- a/builder/lxc/config.go +++ b/builder/lxc/config.go @@ -2,13 +2,14 @@ package lxc import ( "fmt" + "os" + "time" + "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" "github.com/mitchellh/mapstructure" - "os" - "time" ) type Config struct { diff --git a/builder/lxc/step_export.go b/builder/lxc/step_export.go index 59e4b79e9..ddfa85048 100644 --- a/builder/lxc/step_export.go +++ b/builder/lxc/step_export.go @@ -1,21 +1,19 @@ package lxc import ( - "bytes" + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "io" - "log" "os" - "os/exec" "path/filepath" - "strings" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type stepExport struct{} -func (s *stepExport) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepExport) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) @@ -45,7 +43,7 @@ func (s *stepExport) Run(state multistep.StateBag) multistep.StepAction { _, err = io.Copy(configFile, originalConfigFile) - commands := make([][]string, 4) + commands := make([][]string, 3) commands[0] = []string{ "lxc-stop", "--name", name, } @@ -55,13 +53,10 @@ func (s *stepExport) Run(state multistep.StateBag) multistep.StepAction { commands[2] = []string{ "chmod", "+x", configFilePath, } - commands[3] = []string{ - "sh", "-c", "chown $USER:`id -gn` " + filepath.Join(config.OutputDir, "*"), - } ui.Say("Exporting container...") for _, command := range commands { - err := s.SudoCommand(command...) + err := RunCommand(command...) if err != nil { err := fmt.Errorf("Error exporting container: %s", err) state.Put("error", err) @@ -74,25 +69,3 @@ func (s *stepExport) Run(state multistep.StateBag) multistep.StepAction { } func (s *stepExport) Cleanup(state multistep.StateBag) {} - -func (s *stepExport) SudoCommand(args ...string) error { - var stdout, stderr bytes.Buffer - - log.Printf("Executing sudo command: %#v", args) - cmd := exec.Command("sudo", args...) 
- cmd.Stdout = &stdout - cmd.Stderr = &stderr - err := cmd.Run() - - stdoutString := strings.TrimSpace(stdout.String()) - stderrString := strings.TrimSpace(stderr.String()) - - if _, ok := err.(*exec.ExitError); ok { - err = fmt.Errorf("Sudo command error: %s", stderrString) - } - - log.Printf("stdout: %s", stdoutString) - log.Printf("stderr: %s", stderrString) - - return err -} diff --git a/builder/lxc/step_lxc_create.go b/builder/lxc/step_lxc_create.go index d1dbdf2d3..0c89ce190 100644 --- a/builder/lxc/step_lxc_create.go +++ b/builder/lxc/step_lxc_create.go @@ -1,19 +1,17 @@ package lxc import ( - "bytes" + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "log" - "os/exec" "path/filepath" - "strings" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type stepLxcCreate struct{} -func (s *stepLxcCreate) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepLxcCreate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) @@ -28,7 +26,9 @@ func (s *stepLxcCreate) Run(state multistep.StateBag) multistep.StepAction { } commands := make([][]string, 3) - commands[0] = append(config.EnvVars, "lxc-create") + commands[0] = append(commands[0], "env") + commands[0] = append(commands[0], config.EnvVars...) + commands[0] = append(commands[0], "lxc-create") commands[0] = append(commands[0], config.CreateOptions...) commands[0] = append(commands[0], []string{"-n", name, "-t", config.Name, "--"}...) commands[0] = append(commands[0], config.Parameters...) @@ -40,8 +40,7 @@ func (s *stepLxcCreate) Run(state multistep.StateBag) multistep.StepAction { ui.Say("Creating container...") for _, command := range commands { - log.Printf("Executing sudo command: %#v", command) - err := s.SudoCommand(command...) + err := RunCommand(command...) if err != nil { err := fmt.Errorf("Error creating container: %s", err) state.Put("error", err) @@ -64,29 +63,7 @@ func (s *stepLxcCreate) Cleanup(state multistep.StateBag) { } ui.Say("Unregistering and deleting virtual machine...") - if err := s.SudoCommand(command...); err != nil { + if err := RunCommand(command...); err != nil { ui.Error(fmt.Sprintf("Error deleting virtual machine: %s", err)) } } - -func (s *stepLxcCreate) SudoCommand(args ...string) error { - var stdout, stderr bytes.Buffer - - log.Printf("Executing sudo command: %#v", args) - cmd := exec.Command("sudo", args...) 
- cmd.Stdout = &stdout - cmd.Stderr = &stderr - err := cmd.Run() - - stdoutString := strings.TrimSpace(stdout.String()) - stderrString := strings.TrimSpace(stderr.String()) - - if _, ok := err.(*exec.ExitError); ok { - err = fmt.Errorf("Sudo command error: %s", stderrString) - } - - log.Printf("stdout: %s", stdoutString) - log.Printf("stderr: %s", stderrString) - - return err -} diff --git a/builder/lxc/step_prepare_output_dir.go b/builder/lxc/step_prepare_output_dir.go index 07c6f08b7..25479fd8a 100644 --- a/builder/lxc/step_prepare_output_dir.go +++ b/builder/lxc/step_prepare_output_dir.go @@ -1,16 +1,18 @@ package lxc import ( - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "context" "log" "os" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type stepPrepareOutputDir struct{} -func (stepPrepareOutputDir) Run(state multistep.StateBag) multistep.StepAction { +func (stepPrepareOutputDir) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/lxc/step_provision.go b/builder/lxc/step_provision.go index 0cf2a8bdb..ac33eb301 100644 --- a/builder/lxc/step_provision.go +++ b/builder/lxc/step_provision.go @@ -1,15 +1,17 @@ package lxc import ( - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "context" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepProvision provisions the instance within a chroot. type StepProvision struct{} -func (s *StepProvision) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepProvision) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { hook := state.Get("hook").(packer.Hook) config := state.Get("config").(*Config) mountPath := state.Get("mount_path").(string) diff --git a/builder/lxc/step_wait_init.go b/builder/lxc/step_wait_init.go index 1ddda52b6..4e055a690 100644 --- a/builder/lxc/step_wait_init.go +++ b/builder/lxc/step_wait_init.go @@ -1,20 +1,22 @@ package lxc import ( + "context" "errors" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "strings" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type StepWaitInit struct { WaitTimeout time.Duration } -func (s *StepWaitInit) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepWaitInit) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) var err error diff --git a/builder/lxd/builder.go b/builder/lxd/builder.go index d59cf5bcf..290b8a631 100644 --- a/builder/lxd/builder.go +++ b/builder/lxd/builder.go @@ -1,11 +1,12 @@ package lxd import ( + "log" + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" - "log" ) // The unique ID for this builder diff --git a/builder/lxd/communicator.go b/builder/lxd/communicator.go index 8eaa47a5f..d016b0990 100644 --- a/builder/lxd/communicator.go +++ b/builder/lxd/communicator.go @@ -2,13 +2,14 @@ package lxd import ( "fmt" - "github.com/hashicorp/packer/packer" "io" "log" "os" "os/exec" "path/filepath" "syscall" + + "github.com/hashicorp/packer/packer" ) type Communicator struct { @@ -54,7 +55,24 @@ func (c *Communicator) Start(cmd *packer.RemoteCmd) error { } func (c 
*Communicator) Upload(dst string, r io.Reader, fi *os.FileInfo) error { - cpCmd, err := c.CmdWrapper(fmt.Sprintf("lxc file push - %s", filepath.Join(c.ContainerName, dst))) + fileDestination := filepath.Join(c.ContainerName, dst) + // find out if the place we are pushing to is a directory + testDirectoryCommand := fmt.Sprintf(`test -d "%s"`, dst) + cmd := &packer.RemoteCmd{Command: testDirectoryCommand} + err := c.Start(cmd) + + if err != nil { + log.Printf("Unable to check whether remote path is a dir: %s", err) + return err + } + cmd.Wait() + + if cmd.ExitStatus == 0 { + log.Printf("path is a directory; copying file into directory.") + fileDestination = filepath.Join(c.ContainerName, dst, (*fi).Name()) + } + + cpCmd, err := c.CmdWrapper(fmt.Sprintf("lxc file push - %s", fileDestination)) if err != nil { return err } @@ -72,7 +90,7 @@ func (c *Communicator) UploadDir(dst string, src string, exclude []string) error // Don't use 'z' flag as compressing may take longer and the transfer is likely local. // If this isn't the case, it is possible for the user to compress in another step then transfer. - // It wouldn't be possibe to disable compression, without exposing this option. + // It wouldn't be possible to disable compression, without exposing this option. tar, err := c.CmdWrapper(fmt.Sprintf("tar -cf - -C %s .", src)) if err != nil { return err diff --git a/builder/lxd/communicator_test.go b/builder/lxd/communicator_test.go index 4a70160b7..972a5dc18 100644 --- a/builder/lxd/communicator_test.go +++ b/builder/lxd/communicator_test.go @@ -1,8 +1,9 @@ package lxd import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestCommunicator_ImplementsCommunicator(t *testing.T) { diff --git a/builder/lxd/config.go b/builder/lxd/config.go index 6a4745510..bcd3aea80 100644 --- a/builder/lxd/config.go +++ b/builder/lxd/config.go @@ -2,12 +2,12 @@ package lxd import ( "fmt" + "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" "github.com/mitchellh/mapstructure" - "time" ) type Config struct { @@ -16,8 +16,10 @@ type Config struct { ContainerName string `mapstructure:"container_name"` CommandWrapper string `mapstructure:"command_wrapper"` Image string `mapstructure:"image"` + Profile string `mapstructure:"profile"` + InitSleep string `mapstructure:"init_sleep"` PublishProperties map[string]string `mapstructure:"publish_properties"` - InitTimeout time.Duration + LaunchConfig map[string]string `mapstructure:"launch_config"` ctx interpolate.Context } @@ -53,6 +55,16 @@ func NewConfig(raws ...interface{}) (*Config, error) { errs = packer.MultiErrorAppend(errs, fmt.Errorf("`image` is a required parameter for LXD. Please specify an image by alias or fingerprint. e.g. `ubuntu-daily:x`")) } + if c.Profile == "" { + c.Profile = "default" + } + + // Sadly we have to wait a few seconds for /tmp to be intialized and networking + // to finish starting. There isn't a great cross platform to check when things are ready. 
+ if c.InitSleep == "" { + c.InitSleep = "3" + } + if errs != nil && len(errs.Errors) > 0 { return nil, errs } diff --git a/builder/lxd/step_lxd_launch.go b/builder/lxd/step_lxd_launch.go index 1ec573b18..55b7e4954 100644 --- a/builder/lxd/step_lxd_launch.go +++ b/builder/lxd/step_lxd_launch.go @@ -1,37 +1,55 @@ package lxd import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "log" + "strconv" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type stepLxdLaunch struct{} -func (s *stepLxdLaunch) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepLxdLaunch) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) name := config.ContainerName image := config.Image + profile := fmt.Sprintf("--profile=%s", config.Profile) - args := []string{ - "launch", "--ephemeral=false", image, name, + launch_args := []string{ + "launch", "--ephemeral=false", profile, image, name, + } + + for k, v := range config.LaunchConfig { + launch_args = append(launch_args, fmt.Sprintf("--config %s=%s", k, v)) } ui.Say("Creating container...") - _, err := LXDCommand(args...) + _, err := LXDCommand(launch_args...) if err != nil { err := fmt.Errorf("Error creating container: %s", err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - // TODO: Should we check `lxc info ` for "Running"? - // We have to do this so /tmp doens't get cleared and lose our provisioner scripts. - time.Sleep(1 * time.Second) + sleep_seconds, err := strconv.Atoi(config.InitSleep) + if err != nil { + err := fmt.Errorf("Error parsing InitSleep into int: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + // TODO: Should we check `lxc info ` for "Running"? + // We have to do this so /tmp doesn't get cleared and lose our provisioner scripts. 
+ + time.Sleep(time.Duration(sleep_seconds) * time.Second) + log.Printf("Sleeping for %d seconds...", sleep_seconds) return multistep.ActionContinue } @@ -39,12 +57,12 @@ func (s *stepLxdLaunch) Cleanup(state multistep.StateBag) { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) - args := []string{ + cleanup_args := []string{ "delete", "--force", config.ContainerName, } ui.Say("Unregistering and deleting deleting container...") - if _, err := LXDCommand(args...); err != nil { + if _, err := LXDCommand(cleanup_args...); err != nil { ui.Error(fmt.Sprintf("Error deleting container: %s", err)) } } diff --git a/builder/lxd/step_provision.go b/builder/lxd/step_provision.go index c46b900e7..509006d77 100644 --- a/builder/lxd/step_provision.go +++ b/builder/lxd/step_provision.go @@ -1,15 +1,17 @@ package lxd import ( - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "context" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepProvision provisions the container type StepProvision struct{} -func (s *StepProvision) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepProvision) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { hook := state.Get("hook").(packer.Hook) config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/lxd/step_publish.go b/builder/lxd/step_publish.go index 31e2d638b..4dc21880f 100644 --- a/builder/lxd/step_publish.go +++ b/builder/lxd/step_publish.go @@ -1,15 +1,17 @@ package lxd import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "regexp" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type stepPublish struct{} -func (s *stepPublish) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepPublish) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/ncloud/artifact.go b/builder/ncloud/artifact.go new file mode 100644 index 000000000..52624e437 --- /dev/null +++ b/builder/ncloud/artifact.go @@ -0,0 +1,46 @@ +package ncloud + +import ( + "bytes" + "fmt" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" +) + +const BuilderID = "ncloud.server.image" + +type Artifact struct { + ServerImage *ncloud.ServerImage +} + +func (*Artifact) BuilderId() string { + return BuilderID +} + +func (a *Artifact) Files() []string { + /* no file */ + return nil +} + +func (a *Artifact) Id() string { + return a.ServerImage.MemberServerImageNo +} + +func (a *Artifact) String() string { + var buf bytes.Buffer + + // TODO : Logging artifact information + buf.WriteString(fmt.Sprintf("%s:\n\n", a.BuilderId())) + buf.WriteString(fmt.Sprintf("Member Server Image Name: %s\n", a.ServerImage.MemberServerImageName)) + buf.WriteString(fmt.Sprintf("Member Server Image No: %s\n", a.ServerImage.MemberServerImageNo)) + + return buf.String() +} + +func (a *Artifact) State(name string) interface{} { + return a.ServerImage.MemberServerImageStatus +} + +func (a *Artifact) Destroy() error { + return nil +} diff --git a/builder/ncloud/builder.go b/builder/ncloud/builder.go new file mode 100644 index 000000000..d5d546024 --- /dev/null +++ b/builder/ncloud/builder.go @@ -0,0 +1,111 @@ +package ncloud + +import ( + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/common" + 
"github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +// Builder assume this implements packer.Builder +type Builder struct { + config *Config + stateBag multistep.StateBag + runner multistep.Runner +} + +func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { + c, warnings, errs := NewConfig(raws...) + if errs != nil { + return warnings, errs + } + b.config = c + + b.stateBag = new(multistep.BasicStateBag) + + return warnings, nil +} + +func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) { + ui.Message("Creating Naver Cloud Platform Connection ...") + conn := ncloud.NewConnection(b.config.AccessKey, b.config.SecretKey) + + b.stateBag.Put("hook", hook) + b.stateBag.Put("ui", ui) + + var steps []multistep.Step + + steps = []multistep.Step{} + + if b.config.Comm.Type == "ssh" { + steps = []multistep.Step{ + NewStepValidateTemplate(conn, ui, b.config), + NewStepCreateLoginKey(conn, ui), + NewStepCreateServerInstance(conn, ui, b.config), + NewStepCreateBlockStorageInstance(conn, ui, b.config), + NewStepGetRootPassword(conn, ui), + NewStepCreatePublicIPInstance(conn, ui, b.config), + &communicator.StepConnectSSH{ + Config: &b.config.Comm, + Host: func(stateBag multistep.StateBag) (string, error) { + return stateBag.Get("PublicIP").(string), nil + }, + SSHConfig: SSHConfig(b.config.Comm.SSHUsername), + }, + &common.StepProvision{}, + NewStepStopServerInstance(conn, ui), + NewStepCreateServerImage(conn, ui, b.config), + NewStepDeleteBlockStorageInstance(conn, ui, b.config), + NewStepTerminateServerInstance(conn, ui), + } + } else if b.config.Comm.Type == "winrm" { + steps = []multistep.Step{ + NewStepValidateTemplate(conn, ui, b.config), + NewStepCreateLoginKey(conn, ui), + NewStepCreateServerInstance(conn, ui, b.config), + NewStepCreateBlockStorageInstance(conn, ui, b.config), + NewStepGetRootPassword(conn, ui), + NewStepCreatePublicIPInstance(conn, ui, b.config), + &communicator.StepConnectWinRM{ + Config: &b.config.Comm, + Host: func(stateBag multistep.StateBag) (string, error) { + return stateBag.Get("PublicIP").(string), nil + }, + WinRMConfig: func(state multistep.StateBag) (*communicator.WinRMConfig, error) { + return &communicator.WinRMConfig{ + Username: b.config.Comm.WinRMUser, + Password: state.Get("Password").(string), + }, nil + }, + }, + &common.StepProvision{}, + NewStepStopServerInstance(conn, ui), + NewStepCreateServerImage(conn, ui, b.config), + NewStepDeleteBlockStorageInstance(conn, ui, b.config), + NewStepTerminateServerInstance(conn, ui), + } + } + + // Run! 
+ b.runner = common.NewRunnerWithPauseFn(steps, b.config.PackerConfig, ui, b.stateBag) + b.runner.Run(b.stateBag) + + // If there was an error, return that + if rawErr, ok := b.stateBag.GetOk("Error"); ok { + return nil, rawErr.(error) + } + + // Build the artifact and return it + artifact := &Artifact{} + + if serverImage, ok := b.stateBag.GetOk("memberServerImage"); ok { + artifact.ServerImage = serverImage.(*ncloud.ServerImage) + } + + return artifact, nil +} + +func (b *Builder) Cancel() { + b.runner.Cancel() +} diff --git a/builder/ncloud/config.go b/builder/ncloud/config.go new file mode 100644 index 000000000..c8c1374bb --- /dev/null +++ b/builder/ncloud/config.go @@ -0,0 +1,117 @@ +package ncloud + +import ( + "errors" + "fmt" + "os" + + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/template/interpolate" +) + +// Config is structure to use packer builder plugin for Naver Cloud Platform +type Config struct { + common.PackerConfig `mapstructure:",squash"` + + AccessKey string `mapstructure:"access_key"` + SecretKey string `mapstructure:"secret_key"` + ServerImageProductCode string `mapstructure:"server_image_product_code"` + ServerProductCode string `mapstructure:"server_product_code"` + MemberServerImageNo string `mapstructure:"member_server_image_no"` + ServerImageName string `mapstructure:"server_image_name"` + ServerImageDescription string `mapstructure:"server_image_description"` + UserData string `mapstructure:"user_data"` + UserDataFile string `mapstructure:"user_data_file"` + BlockStorageSize int `mapstructure:"block_storage_size"` + Region string `mapstructure:"region"` + AccessControlGroupConfigurationNo string `mapstructure:"access_control_group_configuration_no"` + + Comm communicator.Config `mapstructure:",squash"` + ctx *interpolate.Context +} + +// NewConfig checks parameters +func NewConfig(raws ...interface{}) (*Config, []string, error) { + c := new(Config) + warnings := []string{} + + err := config.Decode(c, &config.DecodeOpts{ + Interpolate: true, + InterpolateFilter: &interpolate.RenderFilter{ + Exclude: []string{}, + }, + }, raws...) + if err != nil { + return nil, warnings, err + } + + var errs *packer.MultiError + if es := c.Comm.Prepare(nil); len(es) > 0 { + errs = packer.MultiErrorAppend(errs, es...) 
+	}
+
+	if c.AccessKey == "" {
+		errs = packer.MultiErrorAppend(errs, errors.New("access_key is required"))
+	}
+
+	if c.SecretKey == "" {
+		errs = packer.MultiErrorAppend(errs, errors.New("secret_key is required"))
+	}
+
+	if c.MemberServerImageNo == "" && c.ServerImageProductCode == "" {
+		errs = packer.MultiErrorAppend(errs, errors.New("server_image_product_code or member_server_image_no is required"))
+	}
+
+	if c.MemberServerImageNo != "" && c.ServerImageProductCode != "" {
+		errs = packer.MultiErrorAppend(errs, errors.New("Only one of server_image_product_code and member_server_image_no can be set"))
+	}
+
+	if c.ServerImageProductCode != "" && len(c.ServerImageProductCode) > 20 {
+		errs = packer.MultiErrorAppend(errs, errors.New("If server_image_product_code field is set, length of server_image_product_code should be max 20"))
+	}
+
+	if c.ServerProductCode != "" && len(c.ServerProductCode) > 20 {
+		errs = packer.MultiErrorAppend(errs, errors.New("If server_product_code field is set, length of server_product_code should be max 20"))
+	}
+
+	if c.ServerImageName != "" && (len(c.ServerImageName) < 3 || len(c.ServerImageName) > 30) {
+		errs = packer.MultiErrorAppend(errs, errors.New("If server_image_name field is set, length of server_image_name should be min 3 and max 30"))
+	}
+
+	if c.ServerImageDescription != "" && len(c.ServerImageDescription) > 1000 {
+		errs = packer.MultiErrorAppend(errs, errors.New("If server_image_description field is set, length of server_image_description should be max 1000"))
+	}
+
+	if c.BlockStorageSize != 0 {
+		if c.BlockStorageSize < 10 || c.BlockStorageSize > 2000 {
+			errs = packer.MultiErrorAppend(errs, errors.New("The size of BlockStorageSize is at least 10 GB and up to 2000 GB"))
+		} else if int(c.BlockStorageSize/10)*10 != c.BlockStorageSize {
+			return nil, nil, errors.New("BlockStorageSize must be a multiple of 10 GB")
+		}
+	}
+
+	if c.UserData != "" && c.UserDataFile != "" {
+		errs = packer.MultiErrorAppend(errs, errors.New("Only one of user_data or user_data_file can be specified."))
+	} else if c.UserDataFile != "" {
+		if _, err := os.Stat(c.UserDataFile); err != nil {
+			errs = packer.MultiErrorAppend(errs, fmt.Errorf("user_data_file not found: %s", c.UserDataFile))
+		}
+	}
+
+	if c.UserData != "" && len(c.UserData) > 21847 {
+		errs = packer.MultiErrorAppend(errs, errors.New("If user_data field is set, length of UserData should be max 21847"))
+	}
+
+	if c.Comm.Type == "winrm" && c.AccessControlGroupConfigurationNo == "" {
+		errs = packer.MultiErrorAppend(errs, errors.New("If Communicator is winrm, access_control_group_configuration_no is required"))
+	}
+
+	if errs != nil && len(errs.Errors) > 0 {
+		return nil, warnings, errs
+	}
+
+	return c, warnings, nil
+}
diff --git a/builder/ncloud/config_test.go b/builder/ncloud/config_test.go
new file mode 100644
index 000000000..fb7886dbc
--- /dev/null
+++ b/builder/ncloud/config_test.go
@@ -0,0 +1,146 @@
+package ncloud
+
+import (
+	"strings"
+	"testing"
+)
+
+func testConfig() map[string]interface{} {
+	return map[string]interface{}{
+		"access_key":                "access_key",
+		"secret_key":                "secret_key",
+		"server_image_product_code": "SPSW0WINNT000016",
+		"server_product_code":       "SPSVRSSD00000011",
+		"server_image_name":         "packer-test {{timestamp}}",
+		"server_image_description":  "server description",
+		"block_storage_size":        100,
+		"user_data":                 "#!/bin/sh\nyum install -y httpd\ntouch /var/www/html/index.html\nchkconfig --level 2345 httpd on",
+		"region":                    "Korea",
+		"access_control_group_configuration_no": "33",
+ "communicator": "ssh", + "ssh_username": "root", + } +} + +func testConfigForMemberServerImage() map[string]interface{} { + return map[string]interface{}{ + "access_key": "access_key", + "secret_key": "secret_key", + "server_product_code": "SPSVRSSD00000011", + "member_server_image_no": "2440", + "server_image_name": "packer-test {{timestamp}}", + "server_image_description": "server description", + "block_storage_size": 100, + "user_data": "#!/bin/sh\nyum install -y httpd\ntouch /var/www/html/index.html\nchkconfig --level 2345 httpd on", + "region": "Korea", + "access_control_group_configuration_no": "33", + "communicator": "ssh", + "ssh_username": "root", + } +} + +func TestConfigWithServerImageProductCode(t *testing.T) { + raw := testConfig() + + c, _, _ := NewConfig(raw) + + if c.AccessKey != "access_key" { + t.Errorf("Expected 'access_key' to be set to '%s', but got '%s'.", raw["access_key"], c.AccessKey) + } + + if c.SecretKey != "secret_key" { + t.Errorf("Expected 'secret_key' to be set to '%s', but got '%s'.", raw["secret_key"], c.SecretKey) + } + + if c.ServerImageProductCode != "SPSW0WINNT000016" { + t.Errorf("Expected 'server_image_product_code' to be set to '%s', but got '%s'.", raw["server_image_product_code"], c.ServerImageProductCode) + } + + if c.ServerProductCode != "SPSVRSSD00000011" { + t.Errorf("Expected 'server_product_code' to be set to '%s', but got '%s'.", raw["server_product_code"], c.ServerProductCode) + } + + if c.BlockStorageSize != 100 { + t.Errorf("Expected 'block_storage_size' to be set to '%d', but got '%d'.", raw["block_storage_size"], c.BlockStorageSize) + } + + if c.ServerImageDescription != "server description" { + t.Errorf("Expected 'server_image_description_key' to be set to '%s', but got '%s'.", raw["server_image_description"], c.ServerImageDescription) + } + + if c.Region != "Korea" { + t.Errorf("Expected 'region' to be set to '%s', but got '%s'.", raw["server_image_description"], c.Region) + } +} + +func TestConfigWithMemberServerImageCode(t *testing.T) { + raw := testConfigForMemberServerImage() + + c, _, _ := NewConfig(raw) + + if c.AccessKey != "access_key" { + t.Errorf("Expected 'access_key' to be set to '%s', but got '%s'.", raw["access_key"], c.AccessKey) + } + + if c.SecretKey != "secret_key" { + t.Errorf("Expected 'secret_key' to be set to '%s', but got '%s'.", raw["secret_key"], c.SecretKey) + } + + if c.MemberServerImageNo != "2440" { + t.Errorf("Expected 'member_server_image_no' to be set to '%s', but got '%s'.", raw["member_server_image_no"], c.MemberServerImageNo) + } + + if c.ServerProductCode != "SPSVRSSD00000011" { + t.Errorf("Expected 'server_product_code' to be set to '%s', but got '%s'.", raw["server_product_code"], c.ServerProductCode) + } + + if c.BlockStorageSize != 100 { + t.Errorf("Expected 'block_storage_size' to be set to '%d', but got '%d'.", raw["block_storage_size"], c.BlockStorageSize) + } + + if c.ServerImageDescription != "server description" { + t.Errorf("Expected 'server_image_description_key' to be set to '%s', but got '%s'.", raw["server_image_description"], c.ServerImageDescription) + } + + if c.Region != "Korea" { + t.Errorf("Expected 'region' to be set to '%s', but got '%s'.", raw["server_image_description"], c.Region) + } +} + +func TestEmptyConfig(t *testing.T) { + raw := new(map[string]interface{}) + + _, _, err := NewConfig(raw) + + if err == nil { + t.Error("Expected Config to require 'access_key', 'secret_key' and some mandatory fields, but it did not") + } + + if !strings.Contains(err.Error(), 
"access_key is required") { + t.Error("Expected Config to require 'access_key', but it did not") + } + + if !strings.Contains(err.Error(), "secret_key is required") { + t.Error("Expected Config to require 'secret_key', but it did not") + } + + if !strings.Contains(err.Error(), "server_image_product_code or member_server_image_no is required") { + t.Error("Expected Config to require 'server_image_product_code' or 'member_server_image_no', but it did not") + } +} + +func TestExistsBothServerImageProductCodeAndMemberServerImageNoConfig(t *testing.T) { + raw := map[string]interface{}{ + "access_key": "access_key", + "secret_key": "secret_key", + "server_image_product_code": "SPSW0WINNT000016", + "server_product_code": "SPSVRSSD00000011", + "member_server_image_no": "2440", + } + + _, _, err := NewConfig(raw) + + if !strings.Contains(err.Error(), "Only one of server_image_product_code and member_server_image_no can be set") { + t.Error("Expected Config to require Only one of 'server_image_product_code' and 'member_server_image_no' can be set, but it did not") + } +} diff --git a/builder/ncloud/ssh.go b/builder/ncloud/ssh.go new file mode 100644 index 000000000..08764aee6 --- /dev/null +++ b/builder/ncloud/ssh.go @@ -0,0 +1,25 @@ +package ncloud + +import ( + packerssh "github.com/hashicorp/packer/communicator/ssh" + "github.com/hashicorp/packer/helper/multistep" + "golang.org/x/crypto/ssh" +) + +// SSHConfig returns a function that can be used for the SSH communicator +// config for connecting to the specified host via SSH +func SSHConfig(username string) func(multistep.StateBag) (*ssh.ClientConfig, error) { + return func(state multistep.StateBag) (*ssh.ClientConfig, error) { + password := state.Get("Password").(string) + + return &ssh.ClientConfig{ + User: username, + Auth: []ssh.AuthMethod{ + ssh.Password(password), + ssh.KeyboardInteractive( + packerssh.PasswordKeyboardInteractive(password)), + }, + HostKeyCallback: ssh.InsecureIgnoreHostKey(), + }, nil + } +} diff --git a/builder/ncloud/step.go b/builder/ncloud/step.go new file mode 100644 index 000000000..8b312fae7 --- /dev/null +++ b/builder/ncloud/step.go @@ -0,0 +1,16 @@ +package ncloud + +import ( + "github.com/hashicorp/packer/helper/multistep" +) + +func processStepResult(err error, sayError func(error), state multistep.StateBag) multistep.StepAction { + if err != nil { + state.Put("Error", err) + sayError(err) + + return multistep.ActionHalt + } + + return multistep.ActionContinue +} diff --git a/builder/ncloud/step_create_block_storage_instance.go b/builder/ncloud/step_create_block_storage_instance.go new file mode 100644 index 000000000..b135a15e7 --- /dev/null +++ b/builder/ncloud/step_create_block_storage_instance.go @@ -0,0 +1,102 @@ +package ncloud + +import ( + "context" + "errors" + "fmt" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +// StepCreateBlockStorageInstance struct is for making extra block storage +type StepCreateBlockStorageInstance struct { + Conn *ncloud.Conn + CreateBlockStorageInstance func(serverInstanceNo string) (string, error) + Say func(message string) + Error func(e error) + Config *Config +} + +// NewStepCreateBlockStorageInstance make StepCreateBlockStorage struct to make extra block storage +func NewStepCreateBlockStorageInstance(conn *ncloud.Conn, ui packer.Ui, config *Config) *StepCreateBlockStorageInstance { + var step = &StepCreateBlockStorageInstance{ + Conn: conn, + Say: 
func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + Config: config, + } + + step.CreateBlockStorageInstance = step.createBlockStorageInstance + + return step +} + +func (s *StepCreateBlockStorageInstance) createBlockStorageInstance(serverInstanceNo string) (string, error) { + + reqParams := new(ncloud.RequestBlockStorageInstance) + reqParams.BlockStorageSize = s.Config.BlockStorageSize + reqParams.ServerInstanceNo = serverInstanceNo + + blockStorageInstanceList, err := s.Conn.CreateBlockStorageInstance(reqParams) + if err != nil { + return "", err + } + + log.Println("Block Storage Instance information : ", blockStorageInstanceList.BlockStorageInstance[0]) + + if err := waiterBlockStorageInstanceStatus(s.Conn, blockStorageInstanceList.BlockStorageInstance[0].BlockStorageInstanceNo, "ATTAC", 10*time.Minute); err != nil { + return "", errors.New("TIMEOUT : Block Storage instance status is not attached") + } + + return blockStorageInstanceList.BlockStorageInstance[0].BlockStorageInstanceNo, nil +} + +func (s *StepCreateBlockStorageInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + if s.Config.BlockStorageSize == 0 { + return processStepResult(nil, s.Error, state) + } + + s.Say("Create extra block storage instance") + + serverInstanceNo := state.Get("InstanceNo").(string) + + blockStorageInstanceNo, err := s.CreateBlockStorageInstance(serverInstanceNo) + if err == nil { + state.Put("BlockStorageInstanceNo", blockStorageInstanceNo) + } + + return processStepResult(err, s.Error, state) +} + +func (s *StepCreateBlockStorageInstance) Cleanup(state multistep.StateBag) { + _, cancelled := state.GetOk(multistep.StateCancelled) + _, halted := state.GetOk(multistep.StateHalted) + + if !cancelled && !halted { + return + } + + if s.Config.BlockStorageSize == 0 { + return + } + + if blockStorageInstanceNo, ok := state.GetOk("BlockStorageInstanceNo"); ok { + s.Say("Clean up Block Storage Instance") + no := blockStorageInstanceNo.(string) + blockStorageInstanceList, err := s.Conn.DeleteBlockStorageInstances([]string{no}) + if err != nil { + return + } + + s.Say(fmt.Sprintf("Block Storage Instance is deleted. Block Storage InstanceNo is %s", no)) + log.Println("Block Storage Instance information : ", blockStorageInstanceList.BlockStorageInstance[0]) + + if err := waiterBlockStorageInstanceStatus(s.Conn, no, "DETAC", time.Minute); err != nil { + s.Say("TIMEOUT : Block Storage instance status is not deattached") + } + } +} diff --git a/builder/ncloud/step_create_block_storage_instance_test.go b/builder/ncloud/step_create_block_storage_instance_test.go new file mode 100644 index 000000000..6f4a9e4e2 --- /dev/null +++ b/builder/ncloud/step_create_block_storage_instance_test.go @@ -0,0 +1,64 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepCreateBlockStorageInstanceShouldFailIfOperationCreateBlockStorageInstanceFails(t *testing.T) { + + var testSubject = &StepCreateBlockStorageInstance{ + CreateBlockStorageInstance: func(serverInstanceNo string) (string, error) { return "", fmt.Errorf("!! 
Unit Test FAIL !!") }, + Say: func(message string) {}, + Error: func(e error) {}, + Config: new(Config), + } + + testSubject.Config.BlockStorageSize = 10 + + stateBag := createTestStateBagStepCreateBlockStorageInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepCreateBlockStorageInstanceShouldPassIfOperationCreateBlockStorageInstancePasses(t *testing.T) { + var testSubject = &StepCreateBlockStorageInstance{ + CreateBlockStorageInstance: func(serverInstanceNo string) (string, error) { return "a", nil }, + Say: func(message string) {}, + Error: func(e error) {}, + Config: new(Config), + } + + testSubject.Config.BlockStorageSize = 10 + + stateBag := createTestStateBagStepCreateBlockStorageInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepCreateBlockStorageInstance() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("InstanceNo", "a") + + return stateBag +} diff --git a/builder/ncloud/step_create_login_key.go b/builder/ncloud/step_create_login_key.go new file mode 100644 index 000000000..05ef97739 --- /dev/null +++ b/builder/ncloud/step_create_login_key.go @@ -0,0 +1,65 @@ +package ncloud + +import ( + "context" + "fmt" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type LoginKey struct { + KeyName string + PrivateKey string +} + +type StepCreateLoginKey struct { + Conn *ncloud.Conn + CreateLoginKey func() (*LoginKey, error) + Say func(message string) + Error func(e error) +} + +func NewStepCreateLoginKey(conn *ncloud.Conn, ui packer.Ui) *StepCreateLoginKey { + var step = &StepCreateLoginKey{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + } + + step.CreateLoginKey = step.createLoginKey + + return step +} + +func (s *StepCreateLoginKey) createLoginKey() (*LoginKey, error) { + KeyName := fmt.Sprintf("packer-%d", time.Now().Unix()) + + privateKey, err := s.Conn.CreateLoginKey(KeyName) + if err != nil { + return nil, err + } + + return &LoginKey{KeyName, privateKey.PrivateKey}, nil +} + +func (s *StepCreateLoginKey) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Create Login Key") + + loginKey, err := s.CreateLoginKey() + if err == nil { + state.Put("LoginKey", loginKey) + s.Say(fmt.Sprintf("Login Key[%s] is created", loginKey.KeyName)) + } + + return processStepResult(err, s.Error, state) +} + +func (s *StepCreateLoginKey) Cleanup(state multistep.StateBag) { + if loginKey, ok := state.GetOk("LoginKey"); ok { + s.Say("Clean up login key") + s.Conn.DeleteLoginKey(loginKey.(*LoginKey).KeyName) + } +} diff --git a/builder/ncloud/step_create_login_key_test.go b/builder/ncloud/step_create_login_key_test.go new file mode 100644 index 000000000..3583f47a5 --- /dev/null +++ b/builder/ncloud/step_create_login_key_test.go @@ -0,0 +1,55 @@ +package ncloud + 
+import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepCreateLoginKeyShouldFailIfOperationCreateLoginKeyFails(t *testing.T) { + var testSubject = &StepCreateLoginKey{ + CreateLoginKey: func() (*LoginKey, error) { return nil, fmt.Errorf("!! Unit Test FAIL !!") }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepCreateLoginKey() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepCreateLoginKeyShouldPassIfOperationCreateLoginKeyPasses(t *testing.T) { + var testSubject = &StepCreateLoginKey{ + CreateLoginKey: func() (*LoginKey, error) { return &LoginKey{"a", "b"}, nil }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepCreateLoginKey() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepCreateLoginKey() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + return stateBag +} diff --git a/builder/ncloud/step_create_public_ip_instance.go b/builder/ncloud/step_create_public_ip_instance.go new file mode 100644 index 000000000..d4cd72b6d --- /dev/null +++ b/builder/ncloud/step_create_public_ip_instance.go @@ -0,0 +1,160 @@ +package ncloud + +import ( + "context" + "fmt" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepCreatePublicIPInstance struct { + Conn *ncloud.Conn + CreatePublicIPInstance func(serverInstanceNo string) (*ncloud.PublicIPInstance, error) + WaiterAssociatePublicIPToServerInstance func(serverInstanceNo string, publicIP string) error + Say func(message string) + Error func(e error) + Config *Config +} + +func NewStepCreatePublicIPInstance(conn *ncloud.Conn, ui packer.Ui, config *Config) *StepCreatePublicIPInstance { + var step = &StepCreatePublicIPInstance{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + Config: config, + } + + step.CreatePublicIPInstance = step.createPublicIPInstance + step.WaiterAssociatePublicIPToServerInstance = step.waiterAssociatePublicIPToServerInstance + + return step +} + +func (s *StepCreatePublicIPInstance) waiterAssociatePublicIPToServerInstance(serverInstanceNo string, publicIP string) error { + reqParams := new(ncloud.RequestGetServerInstanceList) + reqParams.ServerInstanceNoList = []string{serverInstanceNo} + + c1 := make(chan error, 1) + + go func() { + for { + serverInstanceList, err := s.Conn.GetServerInstanceList(reqParams) + + if err != nil { + c1 <- err + return + } + + if publicIP == serverInstanceList.ServerInstanceList[0].PublicIP { + c1 <- nil + return + } + + s.Say("Wait to associate public ip serverInstance") + time.Sleep(time.Second * 3) + } + }() + + select { + case res := <-c1: + return res + case <-time.After(time.Second * 60): + return fmt.Errorf("TIMEOUT : association public ip[%s] to 
server instance[%s] Failed", publicIP, serverInstanceNo) + } +} + +func (s *StepCreatePublicIPInstance) createPublicIPInstance(serverInstanceNo string) (*ncloud.PublicIPInstance, error) { + reqParams := new(ncloud.RequestCreatePublicIPInstance) + reqParams.ServerInstanceNo = serverInstanceNo + + publicIPInstanceList, err := s.Conn.CreatePublicIPInstance(reqParams) + if err != nil { + return nil, err + } + + publicIPInstance := publicIPInstanceList.PublicIPInstanceList[0] + publicIP := publicIPInstance.PublicIP + s.Say(fmt.Sprintf("Public IP Instance [%s:%s] is created", publicIPInstance.PublicIPInstanceNo, publicIP)) + + err = s.waiterAssociatePublicIPToServerInstance(serverInstanceNo, publicIP) + + return &publicIPInstance, nil +} + +func (s *StepCreatePublicIPInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Create Public IP Instance") + + serverInstanceNo := state.Get("InstanceNo").(string) + + publicIPInstance, err := s.CreatePublicIPInstance(serverInstanceNo) + if err == nil { + state.Put("PublicIP", publicIPInstance.PublicIP) + state.Put("PublicIPInstance", publicIPInstance) + } + + return processStepResult(err, s.Error, state) +} + +func (s *StepCreatePublicIPInstance) Cleanup(state multistep.StateBag) { + publicIPInstance, ok := state.GetOk("PublicIPInstance") + if !ok { + return + } + + s.Say("Clean up Public IP Instance") + publicIPInstanceNo := publicIPInstance.(*ncloud.PublicIPInstance).PublicIPInstanceNo + s.waitPublicIPInstanceStatus(publicIPInstanceNo, "USED") + + log.Println("Disassociate Public IP Instance ", publicIPInstanceNo) + s.Conn.DisassociatePublicIP(publicIPInstanceNo) + + s.waitPublicIPInstanceStatus(publicIPInstanceNo, "CREAT") + + reqParams := new(ncloud.RequestDeletePublicIPInstances) + reqParams.PublicIPInstanceNoList = []string{publicIPInstanceNo} + + log.Println("Delete Public IP Instance ", publicIPInstanceNo) + s.Conn.DeletePublicIPInstances(reqParams) +} + +func (s *StepCreatePublicIPInstance) waitPublicIPInstanceStatus(publicIPInstanceNo string, status string) { + c1 := make(chan error, 1) + + go func() { + reqParams := new(ncloud.RequestPublicIPInstanceList) + reqParams.PublicIPInstanceNoList = []string{publicIPInstanceNo} + + for { + resp, err := s.Conn.GetPublicIPInstanceList(reqParams) + if err != nil { + log.Printf(err.Error()) + c1 <- err + return + } + + if resp.TotalRows == 0 { + c1 <- nil + return + } + + instance := resp.PublicIPInstanceList[0] + if instance.PublicIPInstanceStatus.Code == status && instance.PublicIPInstanceOperation.Code == "NULL" { + c1 <- nil + return + } + + time.Sleep(time.Second * 2) + } + }() + + select { + case <-c1: + return + case <-time.After(time.Second * 60): + return + } +} diff --git a/builder/ncloud/step_create_public_ip_instance_test.go b/builder/ncloud/step_create_public_ip_instance_test.go new file mode 100644 index 000000000..e28a19d48 --- /dev/null +++ b/builder/ncloud/step_create_public_ip_instance_test.go @@ -0,0 +1,68 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepCreatePublicIPInstanceShouldFailIfOperationCreatePublicIPInstanceFails(t *testing.T) { + var testSubject = &StepCreatePublicIPInstance{ + CreatePublicIPInstance: func(serverInstanceNo string) (*ncloud.PublicIPInstance, error) { + return nil, fmt.Errorf("!! 
Unit Test FAIL !!") + }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepCreateServerImage() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepCreatePublicIPInstanceShouldPassIfOperationCreatePublicIPInstancePasses(t *testing.T) { + c := new(Config) + c.Comm.Prepare(nil) + c.Comm.Type = "ssh" + + var testSubject = &StepCreatePublicIPInstance{ + CreatePublicIPInstance: func(serverInstanceNo string) (*ncloud.PublicIPInstance, error) { + return &ncloud.PublicIPInstance{PublicIPInstanceNo: "a", PublicIP: "b"}, nil + }, + Say: func(message string) {}, + Error: func(e error) {}, + Config: c, + } + + stateBag := createTestStateBagStepCreatePublicIPInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepCreatePublicIPInstance() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("InstanceNo", "a") + + return stateBag +} diff --git a/builder/ncloud/step_create_server_image.go b/builder/ncloud/step_create_server_image.go new file mode 100644 index 000000000..447485641 --- /dev/null +++ b/builder/ncloud/step_create_server_image.go @@ -0,0 +1,78 @@ +package ncloud + +import ( + "context" + "errors" + "fmt" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepCreateServerImage struct { + Conn *ncloud.Conn + CreateServerImage func(serverInstanceNo string) (*ncloud.ServerImage, error) + Say func(message string) + Error func(e error) + Config *Config +} + +func NewStepCreateServerImage(conn *ncloud.Conn, ui packer.Ui, config *Config) *StepCreateServerImage { + var step = &StepCreateServerImage{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + Config: config, + } + + step.CreateServerImage = step.createServerImage + + return step +} + +func (s *StepCreateServerImage) createServerImage(serverInstanceNo string) (*ncloud.ServerImage, error) { + // Can't create server image when status of server instance is stopping (not stopped) + if err := waiterServerInstanceStatus(s.Conn, serverInstanceNo, "NSTOP", 1*time.Minute); err != nil { + return nil, err + } + + reqParams := new(ncloud.RequestCreateServerImage) + reqParams.MemberServerImageName = s.Config.ServerImageName + reqParams.MemberServerImageDescription = s.Config.ServerImageDescription + reqParams.ServerInstanceNo = serverInstanceNo + + memberServerImageList, err := s.Conn.CreateMemberServerImage(reqParams) + if err != nil { + return nil, err + } + + serverImage := memberServerImageList.MemberServerImageList[0] + + s.Say(fmt.Sprintf("Server Image[%s:%s] is creating...", serverImage.MemberServerImageName, serverImage.MemberServerImageNo)) + + if err := waiterMemberServerImageStatus(s.Conn, serverImage.MemberServerImageNo, "CREAT", 6*time.Hour); err != nil { + return nil, errors.New("TIMEOUT : Server Image 
is not created") + } + + s.Say(fmt.Sprintf("Server Image[%s:%s] is created", serverImage.MemberServerImageName, serverImage.MemberServerImageNo)) + + return &serverImage, nil +} + +func (s *StepCreateServerImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Create Server Image") + + serverInstanceNo := state.Get("InstanceNo").(string) + + serverImage, err := s.CreateServerImage(serverInstanceNo) + if err == nil { + state.Put("memberServerImage", serverImage) + } + + return processStepResult(err, s.Error, state) +} + +func (*StepCreateServerImage) Cleanup(multistep.StateBag) { +} diff --git a/builder/ncloud/step_create_server_image_test.go b/builder/ncloud/step_create_server_image_test.go new file mode 100644 index 000000000..49915ced2 --- /dev/null +++ b/builder/ncloud/step_create_server_image_test.go @@ -0,0 +1,60 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepCreateServerImageShouldFailIfOperationCreateServerImageFails(t *testing.T) { + var testSubject = &StepCreateServerImage{ + CreateServerImage: func(serverInstanceNo string) (*ncloud.ServerImage, error) { + return nil, fmt.Errorf("!! Unit Test FAIL !!") + }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepCreateServerImage() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} +func TestStepCreateServerImageShouldPassIfOperationCreateServerImagePasses(t *testing.T) { + var testSubject = &StepCreateServerImage{ + CreateServerImage: func(serverInstanceNo string) (*ncloud.ServerImage, error) { return nil, nil }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepCreateServerImage() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepCreateServerImage() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("InstanceNo", "a") + + return stateBag +} diff --git a/builder/ncloud/step_create_server_instance.go b/builder/ncloud/step_create_server_instance.go new file mode 100644 index 000000000..a8a72586d --- /dev/null +++ b/builder/ncloud/step_create_server_instance.go @@ -0,0 +1,145 @@ +package ncloud + +import ( + "context" + "errors" + "fmt" + "io/ioutil" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepCreateServerInstance struct { + Conn *ncloud.Conn + CreateServerInstance func(loginKeyName string, zoneNo string, feeSystemTypeCode string) (string, error) + CheckServerInstanceStatusIsRunning func(serverInstanceNo string) error + Say func(message string) + Error func(e error) + Config *Config + serverInstanceNo string +} + +func NewStepCreateServerInstance(conn *ncloud.Conn, ui packer.Ui, config *Config) *StepCreateServerInstance { + var 
step = &StepCreateServerInstance{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + Config: config, + } + + step.CreateServerInstance = step.createServerInstance + + return step +} + +func (s *StepCreateServerInstance) createServerInstance(loginKeyName string, zoneNo string, feeSystemTypeCode string) (string, error) { + reqParams := new(ncloud.RequestCreateServerInstance) + reqParams.ServerProductCode = s.Config.ServerProductCode + reqParams.MemberServerImageNo = s.Config.MemberServerImageNo + if s.Config.MemberServerImageNo == "" { + reqParams.ServerImageProductCode = s.Config.ServerImageProductCode + } + reqParams.LoginKeyName = loginKeyName + reqParams.ZoneNo = zoneNo + reqParams.FeeSystemTypeCode = feeSystemTypeCode + + if s.Config.UserData != "" { + reqParams.UserData = s.Config.UserData + } + + if s.Config.UserDataFile != "" { + contents, err := ioutil.ReadFile(s.Config.UserDataFile) + if err != nil { + return "", fmt.Errorf("Problem reading user data file: %s", err) + } + + reqParams.UserData = string(contents) + } + + if s.Config.AccessControlGroupConfigurationNo != "" { + reqParams.AccessControlGroupConfigurationNoList = []string{s.Config.AccessControlGroupConfigurationNo} + } + + serverInstanceList, err := s.Conn.CreateServerInstances(reqParams) + if err != nil { + return "", err + } + + s.serverInstanceNo = serverInstanceList.ServerInstanceList[0].ServerInstanceNo + s.Say(fmt.Sprintf("Server Instance is creating. Server InstanceNo is %s", s.serverInstanceNo)) + log.Println("Server Instance information : ", serverInstanceList.ServerInstanceList[0]) + + if err := waiterServerInstanceStatus(s.Conn, s.serverInstanceNo, "RUN", 30*time.Minute); err != nil { + return "", errors.New("TIMEOUT : server instance status is not running") + } + + s.Say(fmt.Sprintf("Server Instance is created. 
Server InstanceNo is %s", s.serverInstanceNo)) + + return s.serverInstanceNo, nil +} + +func (s *StepCreateServerInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Create Server Instance") + + var loginKey = state.Get("LoginKey").(*LoginKey) + var zoneNo = state.Get("ZoneNo").(string) + + feeSystemTypeCode := "MTRAT" + if _, ok := state.GetOk("FeeSystemTypeCode"); ok { + feeSystemTypeCode = state.Get("FeeSystemTypeCode").(string) + } + + serverInstanceNo, err := s.CreateServerInstance(loginKey.KeyName, zoneNo, feeSystemTypeCode) + if err == nil { + state.Put("InstanceNo", serverInstanceNo) + } + + return processStepResult(err, s.Error, state) +} + +func (s *StepCreateServerInstance) Cleanup(state multistep.StateBag) { + _, cancelled := state.GetOk(multistep.StateCancelled) + _, halted := state.GetOk(multistep.StateHalted) + + if !cancelled && !halted { + return + } + + if s.serverInstanceNo == "" { + return + } + + reqParams := new(ncloud.RequestGetServerInstanceList) + reqParams.ServerInstanceNoList = []string{s.serverInstanceNo} + + serverInstanceList, err := s.Conn.GetServerInstanceList(reqParams) + if err != nil || serverInstanceList.TotalRows == 0 { + return + } + + s.Say("Clean up Server Instance") + + serverInstance := serverInstanceList.ServerInstanceList[0] + // stop server instance + if serverInstance.ServerInstanceStatus.Code != "NSTOP" && serverInstance.ServerInstanceStatus.Code != "TERMT" { + reqParams := new(ncloud.RequestStopServerInstances) + reqParams.ServerInstanceNoList = []string{s.serverInstanceNo} + + log.Println("Stop Server Instance") + s.Conn.StopServerInstances(reqParams) + waiterServerInstanceStatus(s.Conn, s.serverInstanceNo, "NSTOP", time.Minute) + } + + // terminate server instance + if serverInstance.ServerInstanceStatus.Code != "TERMT" { + reqParams := new(ncloud.RequestTerminateServerInstances) + reqParams.ServerInstanceNoList = []string{s.serverInstanceNo} + + log.Println("Terminate Server Instance") + s.Conn.TerminateServerInstances(reqParams) + } +} diff --git a/builder/ncloud/step_create_server_instance_test.go b/builder/ncloud/step_create_server_instance_test.go new file mode 100644 index 000000000..676c3fa3f --- /dev/null +++ b/builder/ncloud/step_create_server_instance_test.go @@ -0,0 +1,60 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepCreateServerInstanceShouldFailIfOperationCreateFails(t *testing.T) { + var testSubject = &StepCreateServerInstance{ + CreateServerInstance: func(loginKeyName string, zoneNo string, feeSystemTypeCode string) (string, error) { + return "", fmt.Errorf("!! 
Unit Test FAIL !!") + }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepCreateServerInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepCreateServerInstanceShouldPassIfOperationCreatePasses(t *testing.T) { + var testSubject = &StepCreateServerInstance{ + CreateServerInstance: func(loginKeyName string, zoneNo string, feeSystemTypeCode string) (string, error) { return "", nil }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepCreateServerInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepCreateServerInstance() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("LoginKey", &LoginKey{"a", "b"}) + stateBag.Put("ZoneNo", "1") + + return stateBag +} diff --git a/builder/ncloud/step_delete_block_storage_instance.go b/builder/ncloud/step_delete_block_storage_instance.go new file mode 100644 index 000000000..d43edf1c9 --- /dev/null +++ b/builder/ncloud/step_delete_block_storage_instance.go @@ -0,0 +1,96 @@ +package ncloud + +import ( + "context" + "errors" + "fmt" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepDeleteBlockStorageInstance struct { + Conn *ncloud.Conn + DeleteBlockStorageInstance func(blockStorageInstanceNo string) error + Say func(message string) + Error func(e error) + Config *Config +} + +func NewStepDeleteBlockStorageInstance(conn *ncloud.Conn, ui packer.Ui, config *Config) *StepDeleteBlockStorageInstance { + var step = &StepDeleteBlockStorageInstance{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + Config: config, + } + + step.DeleteBlockStorageInstance = step.deleteBlockStorageInstance + + return step +} + +func (s *StepDeleteBlockStorageInstance) getBlockInstanceList(serverInstanceNo string) []string { + reqParams := new(ncloud.RequestBlockStorageInstanceList) + reqParams.ServerInstanceNo = serverInstanceNo + + blockStorageInstanceList, err := s.Conn.GetBlockStorageInstance(reqParams) + if err != nil { + return nil + } + + if blockStorageInstanceList.TotalRows == 1 { + return nil + } + + var instanceList []string + + for _, blockStorageInstance := range blockStorageInstanceList.BlockStorageInstance { + log.Println(blockStorageInstance) + if blockStorageInstance.BlockStorageType.Code != "BASIC" { + instanceList = append(instanceList, blockStorageInstance.BlockStorageInstanceNo) + } + } + + return instanceList +} + +func (s *StepDeleteBlockStorageInstance) deleteBlockStorageInstance(serverInstanceNo string) error { + blockStorageInstanceList := s.getBlockInstanceList(serverInstanceNo) + if blockStorageInstanceList == nil || len(blockStorageInstanceList) == 0 { + return nil + } + + _, err := 
s.Conn.DeleteBlockStorageInstances(blockStorageInstanceList) + if err != nil { + return err + } + + s.Say(fmt.Sprintf("Block Storage Instance is deleted. Block Storage InstanceNo is %s", blockStorageInstanceList)) + + if err := waiterDetachedBlockStorageInstance(s.Conn, serverInstanceNo, time.Minute); err != nil { + return errors.New("TIMEOUT : Block Storage instance status is not deattached") + } + + return nil +} + +func (s *StepDeleteBlockStorageInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + if s.Config.BlockStorageSize == 0 { + return processStepResult(nil, s.Error, state) + } + + s.Say("Delete Block Storage Instance") + + var serverInstanceNo = state.Get("InstanceNo").(string) + + err := s.DeleteBlockStorageInstance(serverInstanceNo) + + return processStepResult(err, s.Error, state) +} + +func (*StepDeleteBlockStorageInstance) Cleanup(multistep.StateBag) { +} diff --git a/builder/ncloud/step_delete_block_storage_instance_test.go b/builder/ncloud/step_delete_block_storage_instance_test.go new file mode 100644 index 000000000..024ed332e --- /dev/null +++ b/builder/ncloud/step_delete_block_storage_instance_test.go @@ -0,0 +1,59 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepDeleteBlockStorageInstanceShouldFailIfOperationDeleteBlockStorageInstanceFails(t *testing.T) { + var testSubject = &StepDeleteBlockStorageInstance{ + DeleteBlockStorageInstance: func(blockStorageInstanceNo string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, + Say: func(message string) {}, + Error: func(e error) {}, + Config: &Config{BlockStorageSize: 10}, + } + + stateBag := createTestStateBagStepDeleteBlockStorageInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepDeleteBlockStorageInstanceShouldPassIfOperationDeleteBlockStorageInstancePasses(t *testing.T) { + var testSubject = &StepDeleteBlockStorageInstance{ + DeleteBlockStorageInstance: func(blockStorageInstanceNo string) error { return nil }, + Say: func(message string) {}, + Error: func(e error) {}, + Config: &Config{BlockStorageSize: 10}, + } + + stateBag := createTestStateBagStepDeleteBlockStorageInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepDeleteBlockStorageInstance() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("InstanceNo", "1") + + return stateBag +} diff --git a/builder/ncloud/step_get_rootpassword.go b/builder/ncloud/step_get_rootpassword.go new file mode 100644 index 000000000..051eb386b --- /dev/null +++ b/builder/ncloud/step_get_rootpassword.go @@ -0,0 +1,60 @@ +package ncloud + +import ( + "context" + "fmt" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepGetRootPassword struct { + Conn *ncloud.Conn + GetRootPassword func(serverInstanceNo string, privateKey 
string) (string, error) + Say func(message string) + Error func(e error) +} + +func NewStepGetRootPassword(conn *ncloud.Conn, ui packer.Ui) *StepGetRootPassword { + var step = &StepGetRootPassword{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + } + + step.GetRootPassword = step.getRootPassword + + return step +} + +func (s *StepGetRootPassword) getRootPassword(serverInstanceNo string, privateKey string) (string, error) { + reqParams := new(ncloud.RequestGetRootPassword) + reqParams.ServerInstanceNo = serverInstanceNo + reqParams.PrivateKey = privateKey + + rootPassword, err := s.Conn.GetRootPassword(reqParams) + if err != nil { + return "", err + } + + s.Say(fmt.Sprintf("Root password is %s", rootPassword.RootPassword)) + + return rootPassword.RootPassword, nil +} + +func (s *StepGetRootPassword) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Get Root Password") + + serverInstanceNo := state.Get("InstanceNo").(string) + loginKey := state.Get("LoginKey").(*LoginKey) + + rootPassword, err := s.GetRootPassword(serverInstanceNo, loginKey.PrivateKey) + + state.Put("Password", rootPassword) + + return processStepResult(err, s.Error, state) +} + +func (*StepGetRootPassword) Cleanup(multistep.StateBag) { +} diff --git a/builder/ncloud/step_get_rootpassword_test.go b/builder/ncloud/step_get_rootpassword_test.go new file mode 100644 index 000000000..0c93229bd --- /dev/null +++ b/builder/ncloud/step_get_rootpassword_test.go @@ -0,0 +1,58 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepGetRootPasswordShouldFailIfOperationGetRootPasswordFails(t *testing.T) { + var testSubject = &StepGetRootPassword{ + GetRootPassword: func(string, string) (string, error) { return "", fmt.Errorf("!! 
Unit Test FAIL !!") }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := DeleteTestStateBagStepGetRootPassword() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepGetRootPasswordShouldPassIfOperationGetRootPasswordPasses(t *testing.T) { + var testSubject = &StepGetRootPassword{ + GetRootPassword: func(string, string) (string, error) { return "a", nil }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := DeleteTestStateBagStepGetRootPassword() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func DeleteTestStateBagStepGetRootPassword() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("LoginKey", &LoginKey{"a", "b"}) + stateBag.Put("InstanceNo", "a") + + return stateBag +} diff --git a/builder/ncloud/step_stop_server_instance.go b/builder/ncloud/step_stop_server_instance.go new file mode 100644 index 000000000..293c8e55d --- /dev/null +++ b/builder/ncloud/step_stop_server_instance.go @@ -0,0 +1,65 @@ +package ncloud + +import ( + "context" + "fmt" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepStopServerInstance struct { + Conn *ncloud.Conn + StopServerInstance func(serverInstanceNo string) error + Say func(message string) + Error func(e error) +} + +func NewStepStopServerInstance(conn *ncloud.Conn, ui packer.Ui) *StepStopServerInstance { + var step = &StepStopServerInstance{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + } + + step.StopServerInstance = step.stopServerInstance + + return step +} + +func (s *StepStopServerInstance) stopServerInstance(serverInstanceNo string) error { + reqParams := new(ncloud.RequestStopServerInstances) + reqParams.ServerInstanceNoList = []string{serverInstanceNo} + + serverInstanceList, err := s.Conn.StopServerInstances(reqParams) + if err != nil { + return err + } + + s.Say(fmt.Sprintf("Server Instance is stopping. Server InstanceNo is %s", serverInstanceList.ServerInstanceList[0].ServerInstanceNo)) + log.Println("Server Instance information : ", serverInstanceList.ServerInstanceList[0]) + + if err := waiterServerInstanceStatus(s.Conn, serverInstanceNo, "NSTOP", 5*time.Minute); err != nil { + return err + } + + s.Say(fmt.Sprintf("Server Instance stopped. 
Server InstanceNo is %s", serverInstanceList.ServerInstanceList[0].ServerInstanceNo)) + + return nil +} + +func (s *StepStopServerInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Stop Server Instance") + + var serverInstanceNo = state.Get("InstanceNo").(string) + + err := s.StopServerInstance(serverInstanceNo) + + return processStepResult(err, s.Error, state) +} + +func (*StepStopServerInstance) Cleanup(multistep.StateBag) { +} diff --git a/builder/ncloud/step_stop_server_instance_test.go b/builder/ncloud/step_stop_server_instance_test.go new file mode 100644 index 000000000..4b5afe605 --- /dev/null +++ b/builder/ncloud/step_stop_server_instance_test.go @@ -0,0 +1,56 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepStopServerInstanceShouldFailIfOperationStopFails(t *testing.T) { + var testSubject = &StepStopServerInstance{ + StopServerInstance: func(serverInstanceNo string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepStopServerInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepStopServerInstanceShouldPassIfOperationStopPasses(t *testing.T) { + var testSubject = &StepStopServerInstance{ + StopServerInstance: func(serverInstanceNo string) error { return nil }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepStopServerInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepStopServerInstance() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("InstanceNo", "a") + return stateBag +} diff --git a/builder/ncloud/step_terminate_server_instance.go b/builder/ncloud/step_terminate_server_instance.go new file mode 100644 index 000000000..f6c8d281a --- /dev/null +++ b/builder/ncloud/step_terminate_server_instance.go @@ -0,0 +1,81 @@ +package ncloud + +import ( + "context" + "errors" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type StepTerminateServerInstance struct { + Conn *ncloud.Conn + TerminateServerInstance func(serverInstanceNo string) error + Say func(message string) + Error func(e error) +} + +func NewStepTerminateServerInstance(conn *ncloud.Conn, ui packer.Ui) *StepTerminateServerInstance { + var step = &StepTerminateServerInstance{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + } + + step.TerminateServerInstance = step.terminateServerInstance + + return step +} + +func (s *StepTerminateServerInstance) terminateServerInstance(serverInstanceNo string) error { + reqParams := new(ncloud.RequestTerminateServerInstances) + reqParams.ServerInstanceNoList = []string{serverInstanceNo} + + _, err := 
s.Conn.TerminateServerInstances(reqParams) + if err != nil { + return err + } + + c1 := make(chan error, 1) + + go func() { + reqParams := new(ncloud.RequestGetServerInstanceList) + reqParams.ServerInstanceNoList = []string{serverInstanceNo} + + for { + + serverInstanceList, err := s.Conn.GetServerInstanceList(reqParams) + if err != nil { + c1 <- err + return + } else if serverInstanceList.TotalRows == 0 { + c1 <- nil + return + } + + time.Sleep(time.Second * 3) + } + }() + + select { + case res := <-c1: + return res + case <-time.After(time.Second * 60): + return errors.New("TIMEOUT : Can't terminate server instance") + } +} + +func (s *StepTerminateServerInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Terminate Server Instance") + + var serverInstanceNo = state.Get("InstanceNo").(string) + + err := s.TerminateServerInstance(serverInstanceNo) + + return processStepResult(err, s.Error, state) +} + +func (*StepTerminateServerInstance) Cleanup(multistep.StateBag) { +} diff --git a/builder/ncloud/step_terminate_server_instance_test.go b/builder/ncloud/step_terminate_server_instance_test.go new file mode 100644 index 000000000..54a00748a --- /dev/null +++ b/builder/ncloud/step_terminate_server_instance_test.go @@ -0,0 +1,56 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepTerminateServerInstanceShouldFailIfOperationTerminationFails(t *testing.T) { + var testSubject = &StepTerminateServerInstance{ + TerminateServerInstance: func(serverInstanceNo string) error { return fmt.Errorf("!! Unit Test FAIL !!") }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepTerminateServerInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepTerminateServerInstanceShouldPassIfOperationTerminationPasses(t *testing.T) { + var testSubject = &StepTerminateServerInstance{ + TerminateServerInstance: func(serverInstanceNo string) error { return nil }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepTerminateServerInstance() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepTerminateServerInstance() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + stateBag.Put("InstanceNo", "a") + return stateBag +} diff --git a/builder/ncloud/step_validate_template.go b/builder/ncloud/step_validate_template.go new file mode 100644 index 000000000..873cd4723 --- /dev/null +++ b/builder/ncloud/step_validate_template.go @@ -0,0 +1,269 @@ +package ncloud + +import ( + "bytes" + "context" + "errors" + "fmt" + "strings" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/olekukonko/tablewriter" +) + +//StepValidateTemplate : struct for Validation a template +type StepValidateTemplate struct { + 
Conn *ncloud.Conn + Validate func() error + Say func(message string) + Error func(e error) + Config *Config + zoneNo string + regionNo string + FeeSystemTypeCode string +} + +// NewStepValidateTemplate : function for Validation a template +func NewStepValidateTemplate(conn *ncloud.Conn, ui packer.Ui, config *Config) *StepValidateTemplate { + var step = &StepValidateTemplate{ + Conn: conn, + Say: func(message string) { ui.Say(message) }, + Error: func(e error) { ui.Error(e.Error()) }, + Config: config, + } + + step.Validate = step.validateTemplate + + return step +} + +// getZoneNo : get zoneNo +func (s *StepValidateTemplate) getZoneNo() error { + if s.Config.Region == "" { + return nil + } + + regionList, err := s.Conn.GetRegionList() + if err != nil { + return err + } + + var regionNo string + for _, region := range regionList.RegionList { + if strings.EqualFold(region.RegionName, s.Config.Region) { + regionNo = region.RegionNo + } + } + + if regionNo == "" { + return fmt.Errorf("region %s is invalid", s.Config.Region) + } + + s.regionNo = regionNo + + // Get ZoneNo + ZoneList, err := s.Conn.GetZoneList(regionNo) + if err != nil { + return err + } + + if len(ZoneList.Zone) > 0 { + s.zoneNo = ZoneList.Zone[0].ZoneNo + } + + return nil +} + +func (s *StepValidateTemplate) validateMemberServerImage() error { + var serverImageName = s.Config.ServerImageName + + reqParams := new(ncloud.RequestServerImageList) + reqParams.RegionNo = s.regionNo + + memberServerImageList, err := s.Conn.GetMemberServerImageList(reqParams) + if err != nil { + return err + } + + var isExistMemberServerImageNo = false + for _, image := range memberServerImageList.MemberServerImageList { + // Check duplicate server_image_name + if image.MemberServerImageName == serverImageName { + return fmt.Errorf("server_image_name %s is exists", serverImageName) + } + + if image.MemberServerImageNo == s.Config.MemberServerImageNo { + isExistMemberServerImageNo = true + if s.Config.ServerProductCode == "" { + s.Config.ServerProductCode = image.OriginalServerProductCode + s.Say("server_product_code for member server image '" + image.OriginalServerProductCode + "' is configured automatically") + } + s.Config.ServerImageProductCode = image.OriginalServerImageProductCode + } + } + + if s.Config.MemberServerImageNo != "" && !isExistMemberServerImageNo { + return fmt.Errorf("member_server_image_no %s does not exist", s.Config.MemberServerImageNo) + } + + return nil +} + +func (s *StepValidateTemplate) validateServerImageProduct() error { + var serverImageProductCode = s.Config.ServerImageProductCode + if serverImageProductCode == "" { + return nil + } + + reqParams := new(ncloud.RequestGetServerImageProductList) + reqParams.RegionNo = s.regionNo + + serverImageProductList, err := s.Conn.GetServerImageProductList(reqParams) + if err != nil { + return err + } + + var isExistServerImage = false + var buf bytes.Buffer + var productName string + table := tablewriter.NewWriter(&buf) + table.SetHeader([]string{"Name", "Code"}) + + for _, product := range serverImageProductList.Product { + // Check exist server image product code + if product.ProductCode == serverImageProductCode { + isExistServerImage = true + productName = product.ProductName + break + } + + table.Append([]string{product.ProductName, product.ProductCode}) + } + + if !isExistServerImage { + reqParams.BlockStorageSize = 100 + + serverImageProductList, err := s.Conn.GetServerImageProductList(reqParams) + if err != nil { + return err + } + + for _, product := range 
serverImageProductList.Product { + // Check exist server image product code + if product.ProductCode == serverImageProductCode { + isExistServerImage = true + productName = product.ProductName + break + } + + table.Append([]string{product.ProductName, product.ProductCode}) + } + } + + if !isExistServerImage { + table.Render() + s.Say(buf.String()) + + return fmt.Errorf("server_image_product_code %s does not exist", serverImageProductCode) + } + + if strings.Contains(productName, "mssql") { + s.FeeSystemTypeCode = "FXSUM" + } + + return nil +} + +func (s *StepValidateTemplate) validateServerProductCode() error { + var serverImageProductCode = s.Config.ServerImageProductCode + var productCode = s.Config.ServerProductCode + + reqParams := new(ncloud.RequestGetServerProductList) + reqParams.ServerImageProductCode = serverImageProductCode + reqParams.RegionNo = s.regionNo + + productList, err := s.Conn.GetServerProductList(reqParams) + if err != nil { + return err + } + + var isExistProductCode = false + for _, product := range productList.Product { + // Check exist server image product code + if product.ProductCode == productCode { + isExistProductCode = true + if strings.Contains(product.ProductName, "mssql") { + s.FeeSystemTypeCode = "FXSUM" + } + + if product.ProductType.Code == "VDS" { + return errors.New("You cannot create my server image for VDS servers") + } + + break + } else if productCode == "" && product.ProductType.Code == "STAND" { + isExistProductCode = true + s.Config.ServerProductCode = product.ProductCode + s.Say("server_product_code '" + product.ProductCode + "' is configured automatically") + break + } + } + + if !isExistProductCode { + var buf bytes.Buffer + table := tablewriter.NewWriter(&buf) + table.SetHeader([]string{"Name", "Code"}) + for _, product := range productList.Product { + table.Append([]string{product.ProductName, product.ProductCode}) + } + table.Render() + + s.Say(buf.String()) + + return fmt.Errorf("server_product_code %s does not exist", productCode) + } + + return nil +} + +// Check ImageName / Product Code / Server Image Product Code / Server Product Code... 
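+// validateTemplate : run each validation in order. The region/zone lookup comes
+// first because the image and product lookups below are region-scoped.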
+func (s *StepValidateTemplate) validateTemplate() error { + // Get RegionNo, ZoneNo + if err := s.getZoneNo(); err != nil { + return err + } + + // Validate member_server_image_no and member_server_image_no + if err := s.validateMemberServerImage(); err != nil { + return err + } + + // Validate server_image_product_code + if err := s.validateServerImageProduct(); err != nil { + return err + } + + // Validate server_product_code + return s.validateServerProductCode() +} + +// Run : main function for validation a template +func (s *StepValidateTemplate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + s.Say("Validating deployment template ...") + + err := s.Validate() + + state.Put("ZoneNo", s.zoneNo) + + if s.FeeSystemTypeCode != "" { + state.Put("FeeSystemTypeCode", s.FeeSystemTypeCode) + } + + return processStepResult(err, s.Error, state) +} + +// Cleanup : cleanup on error +func (s *StepValidateTemplate) Cleanup(multistep.StateBag) { +} diff --git a/builder/ncloud/step_validate_template_test.go b/builder/ncloud/step_validate_template_test.go new file mode 100644 index 000000000..7e9212c7f --- /dev/null +++ b/builder/ncloud/step_validate_template_test.go @@ -0,0 +1,55 @@ +package ncloud + +import ( + "context" + "fmt" + "testing" + + "github.com/hashicorp/packer/helper/multistep" +) + +func TestStepValidateTemplateShouldFailIfValidateFails(t *testing.T) { + var testSubject = &StepValidateTemplate{ + Validate: func() error { return fmt.Errorf("!! Unit Test FAIL !!") }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepValidateTemplate() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionHalt { + t.Fatalf("Expected the step to return 'ActionHalt', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == false { + t.Fatal("Expected the step to set stateBag['Error'], but it was not.") + } +} + +func TestStepValidateTemplateShouldPassIfValidatePasses(t *testing.T) { + var testSubject = &StepValidateTemplate{ + Validate: func() error { return nil }, + Say: func(message string) {}, + Error: func(e error) {}, + } + + stateBag := createTestStateBagStepValidateTemplate() + + var result = testSubject.Run(context.Background(), stateBag) + + if result != multistep.ActionContinue { + t.Fatalf("Expected the step to return 'ActionContinue', but got '%d'.", result) + } + + if _, ok := stateBag.GetOk("Error"); ok == true { + t.Fatalf("Expected the step to not set stateBag['Error'], but it was.") + } +} + +func createTestStateBagStepValidateTemplate() multistep.StateBag { + stateBag := new(multistep.BasicStateBag) + + return stateBag +} diff --git a/builder/ncloud/waiter_block_storage_instance.go b/builder/ncloud/waiter_block_storage_instance.go new file mode 100644 index 000000000..9cfa5562a --- /dev/null +++ b/builder/ncloud/waiter_block_storage_instance.go @@ -0,0 +1,80 @@ +package ncloud + +import ( + "fmt" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" +) + +func waiterBlockStorageInstanceStatus(conn *ncloud.Conn, blockStorageInstanceNo string, status string, timeout time.Duration) error { + reqParams := new(ncloud.RequestBlockStorageInstanceList) + reqParams.BlockStorageInstanceNoList = []string{blockStorageInstanceNo} + + c1 := make(chan error, 1) + + go func() { + for { + blockStorageInstanceList, err := conn.GetBlockStorageInstance(reqParams) + if err != nil { + c1 <- err + return + } + + if status == "DETAC" && 
len(blockStorageInstanceList.BlockStorageInstance) == 0 { + c1 <- nil + return + } + + code := blockStorageInstanceList.BlockStorageInstance[0].BlockStorageInstanceStatus.Code + operationCode := blockStorageInstanceList.BlockStorageInstance[0].BlockStorageInstanceOperation.Code + + if code == status && operationCode == "NULL" { + c1 <- nil + return + } + + log.Println(blockStorageInstanceList.BlockStorageInstance[0]) + time.Sleep(time.Second * 5) + } + }() + + select { + case res := <-c1: + return res + case <-time.After(timeout): + return fmt.Errorf("TIMEOUT : block storage instance status is not changed into status %s", status) + } +} + +func waiterDetachedBlockStorageInstance(conn *ncloud.Conn, serverInstanceNo string, timeout time.Duration) error { + reqParams := new(ncloud.RequestBlockStorageInstanceList) + reqParams.ServerInstanceNo = serverInstanceNo + + c1 := make(chan error, 1) + + go func() { + for { + blockStorageInstanceList, err := conn.GetBlockStorageInstance(reqParams) + if err != nil { + c1 <- err + return + } + + if blockStorageInstanceList.TotalRows == 1 { + c1 <- nil + return + } + + time.Sleep(time.Second * 5) + } + }() + + select { + case res := <-c1: + return res + case <-time.After(timeout): + return fmt.Errorf("TIMEOUT : attached block storage instance is not detached") + } +} diff --git a/builder/ncloud/waiter_server_image_status.go b/builder/ncloud/waiter_server_image_status.go new file mode 100644 index 000000000..5ba640874 --- /dev/null +++ b/builder/ncloud/waiter_server_image_status.go @@ -0,0 +1,43 @@ +package ncloud + +import ( + "fmt" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" +) + +func waiterMemberServerImageStatus(conn *ncloud.Conn, memberServerImageNo string, status string, timeout time.Duration) error { + reqParams := new(ncloud.RequestServerImageList) + reqParams.MemberServerImageNoList = []string{memberServerImageNo} + + c1 := make(chan error, 1) + + go func() { + for { + memberServerImageList, err := conn.GetMemberServerImageList(reqParams) + if err != nil { + c1 <- err + return + } + + code := memberServerImageList.MemberServerImageList[0].MemberServerImageStatus.Code + if code == status { + c1 <- nil + return + } + + log.Printf("Status of member server image [%s] is %s\n", memberServerImageNo, code) + log.Println(memberServerImageList.MemberServerImageList[0]) + time.Sleep(time.Second * 5) + } + }() + + select { + case res := <-c1: + return res + case <-time.After(timeout): + return fmt.Errorf("TIMEOUT : member server image status is not changed into status %s", status) + } +} diff --git a/builder/ncloud/waiter_server_instance_status.go b/builder/ncloud/waiter_server_instance_status.go new file mode 100644 index 000000000..176cde701 --- /dev/null +++ b/builder/ncloud/waiter_server_instance_status.go @@ -0,0 +1,43 @@ +package ncloud + +import ( + "fmt" + "log" + "time" + + ncloud "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk" +) + +func waiterServerInstanceStatus(conn *ncloud.Conn, serverInstanceNo string, status string, timeout time.Duration) error { + reqParams := new(ncloud.RequestGetServerInstanceList) + reqParams.ServerInstanceNoList = []string{serverInstanceNo} + + c1 := make(chan error, 1) + + go func() { + for { + serverInstanceList, err := conn.GetServerInstanceList(reqParams) + if err != nil { + c1 <- err + return + } + + code := serverInstanceList.ServerInstanceList[0].ServerInstanceStatus.Code + if code == status { + c1 <- nil + return + } + + log.Printf("Status of serverInstanceNo [%s] is %s\n", 
serverInstanceNo, code) + log.Println(serverInstanceList.ServerInstanceList[0]) + time.Sleep(time.Second * 5) + } + }() + + select { + case res := <-c1: + return res + case <-time.After(timeout): + return fmt.Errorf("TIMEOUT : server instance status is not changed into status %s", status) + } +} diff --git a/builder/null/artifact_export_test.go b/builder/null/artifact_export_test.go index aa0540851..917578eff 100644 --- a/builder/null/artifact_export_test.go +++ b/builder/null/artifact_export_test.go @@ -1,8 +1,9 @@ package null import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestNullArtifact(t *testing.T) { diff --git a/builder/null/builder.go b/builder/null/builder.go index 3e587ebd0..f4d4764d6 100644 --- a/builder/null/builder.go +++ b/builder/null/builder.go @@ -5,8 +5,8 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) const BuilderId = "fnoeding.null" diff --git a/builder/null/builder_test.go b/builder/null/builder_test.go index 2f77ccb39..89cf03a24 100644 --- a/builder/null/builder_test.go +++ b/builder/null/builder_test.go @@ -1,8 +1,9 @@ package null import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestBuilder_implBuilder(t *testing.T) { diff --git a/builder/null/ssh.go b/builder/null/ssh.go index b8dea896e..8e18bb6ac 100644 --- a/builder/null/ssh.go +++ b/builder/null/ssh.go @@ -7,7 +7,7 @@ import ( "os" "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) diff --git a/builder/oneandone/builder.go b/builder/oneandone/builder.go index ad470c152..4b84b3938 100644 --- a/builder/oneandone/builder.go +++ b/builder/oneandone/builder.go @@ -3,11 +3,12 @@ package oneandone import ( "errors" "fmt" + "log" + "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "log" ) const BuilderId = "packer.oneandone" diff --git a/builder/oneandone/builder_test.go b/builder/oneandone/builder_test.go index ae90f72b2..daa8536e3 100644 --- a/builder/oneandone/builder_test.go +++ b/builder/oneandone/builder_test.go @@ -2,8 +2,9 @@ package oneandone import ( "fmt" - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { diff --git a/builder/oneandone/config.go b/builder/oneandone/config.go index e71ef2c9f..ff85bdc6d 100644 --- a/builder/oneandone/config.go +++ b/builder/oneandone/config.go @@ -2,6 +2,9 @@ package oneandone import ( "errors" + "os" + "strings" + "github.com/1and1/oneandone-cloudserver-sdk-go" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" @@ -9,8 +12,6 @@ import ( "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" "github.com/mitchellh/mapstructure" - "os" - "strings" ) type Config struct { diff --git a/builder/oneandone/ssh.go b/builder/oneandone/ssh.go index 33fe0cc01..9fdb128a9 100644 --- a/builder/oneandone/ssh.go +++ b/builder/oneandone/ssh.go @@ -2,8 +2,9 @@ package oneandone import ( "fmt" + "github.com/hashicorp/packer/communicator/ssh" - 
"github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" ) diff --git a/builder/oneandone/step_create_server.go b/builder/oneandone/step_create_server.go index 2ad268ff8..cffd8c440 100644 --- a/builder/oneandone/step_create_server.go +++ b/builder/oneandone/step_create_server.go @@ -1,17 +1,19 @@ package oneandone import ( + "context" "fmt" - "github.com/1and1/oneandone-cloudserver-sdk-go" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "strings" "time" + + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type stepCreateServer struct{} -func (s *stepCreateServer) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateServer) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) c := state.Get("config").(*Config) diff --git a/builder/oneandone/step_create_sshkey.go b/builder/oneandone/step_create_sshkey.go index 0ad089c23..a27561c1f 100644 --- a/builder/oneandone/step_create_sshkey.go +++ b/builder/oneandone/step_create_sshkey.go @@ -1,13 +1,15 @@ package oneandone import ( + "context" "crypto/x509" "encoding/pem" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "golang.org/x/crypto/ssh" "io/ioutil" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "golang.org/x/crypto/ssh" ) type StepCreateSSHKey struct { @@ -15,7 +17,7 @@ type StepCreateSSHKey struct { DebugKeyPath string } -func (s *StepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateSSHKey) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) c := state.Get("config").(*Config) diff --git a/builder/oneandone/step_take_snapshot.go b/builder/oneandone/step_take_snapshot.go index b19790c7d..65a3f7156 100644 --- a/builder/oneandone/step_take_snapshot.go +++ b/builder/oneandone/step_take_snapshot.go @@ -1,14 +1,16 @@ package oneandone import ( + "context" + "github.com/1and1/oneandone-cloudserver-sdk-go" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepTakeSnapshot struct{} -func (s *stepTakeSnapshot) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepTakeSnapshot) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) c := state.Get("config").(*Config) diff --git a/builder/openstack/access_config.go b/builder/openstack/access_config.go index 5d9c2b39b..a5ab91b60 100644 --- a/builder/openstack/access_config.go +++ b/builder/openstack/access_config.go @@ -2,14 +2,14 @@ package openstack import ( "crypto/tls" - "fmt" - "os" - "crypto/x509" + "fmt" "io/ioutil" + "os" "github.com/gophercloud/gophercloud" "github.com/gophercloud/gophercloud/openstack" + "github.com/gophercloud/utils/openstack/clientconfig" "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/packer/template/interpolate" ) @@ -30,6 +30,8 @@ type AccessConfig struct { CACertFile string `mapstructure:"cacert"` ClientCertFile string `mapstructure:"cert"` ClientKeyFile string `mapstructure:"key"` + Token string `mapstructure:"token"` + Cloud string `mapstructure:"cloud"` osClient *gophercloud.ProviderClient } @@ -42,10 +44,6 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error { return []error{fmt.Errorf("Invalid 
endpoint type provided")} } - if c.Region == "" { - c.Region = os.Getenv("OS_REGION_NAME") - } - // Legacy RackSpace stuff. We're keeping this around to keep things BC. if c.Password == "" { c.Password = os.Getenv("SDK_PASSWORD") @@ -59,6 +57,15 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error { if c.Username == "" { c.Username = os.Getenv("SDK_USERNAME") } + // End RackSpace + + if c.Cloud == "" { + c.Cloud = os.Getenv("OS_CLOUD") + } + if c.Region == "" { + c.Region = os.Getenv("OS_REGION_NAME") + } + if c.CACertFile == "" { c.CACertFile = os.Getenv("OS_CACERT") } @@ -69,8 +76,39 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error { c.ClientKeyFile = os.Getenv("OS_KEY") } - // Get as much as possible from the end - ao, _ := openstack.AuthOptionsFromEnv() + clientOpts := new(clientconfig.ClientOpts) + + // If a cloud entry was given, base AuthOptions on a clouds.yaml file. + if c.Cloud != "" { + clientOpts.Cloud = c.Cloud + + cloud, err := clientconfig.GetCloudFromYAML(clientOpts) + if err != nil { + return []error{err} + } + + if c.Region == "" && cloud.RegionName != "" { + c.Region = cloud.RegionName + } + } else { + authInfo := &clientconfig.AuthInfo{ + AuthURL: c.IdentityEndpoint, + DomainID: c.DomainID, + DomainName: c.DomainName, + Password: c.Password, + ProjectID: c.TenantID, + ProjectName: c.TenantName, + Token: c.Token, + Username: c.Username, + UserID: c.UserID, + } + clientOpts.AuthInfo = authInfo + } + + ao, err := clientconfig.AuthOptions(clientOpts) + if err != nil { + return []error{err} + } // Make sure we reauth as needed ao.AllowReauth = true @@ -87,6 +125,7 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error { {&c.TenantName, &ao.TenantName}, {&c.DomainID, &ao.DomainID}, {&c.DomainName, &ao.DomainName}, + {&c.Token, &ao.TokenID}, } for _, s := range overrides { if *s.From != "" { @@ -132,7 +171,7 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error { client.HTTPClient.Transport = transport // Auth - err = openstack.Authenticate(client, ao) + err = openstack.Authenticate(client, *ao) if err != nil { return []error{err} } diff --git a/builder/openstack/artifact_test.go b/builder/openstack/artifact_test.go index 20f5fe3b5..5cf2b5834 100644 --- a/builder/openstack/artifact_test.go +++ b/builder/openstack/artifact_test.go @@ -1,8 +1,9 @@ package openstack import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestArtifact_Impl(t *testing.T) { diff --git a/builder/openstack/builder.go b/builder/openstack/builder.go old mode 100644 new mode 100755 index 46d0e4e67..a718c3d59 --- a/builder/openstack/builder.go +++ b/builder/openstack/builder.go @@ -10,9 +10,9 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) // The unique ID for this builder @@ -52,6 +52,11 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { return nil, errs } + // By default, instance name is same as image name + if b.config.InstanceName == "" { + b.config.InstanceName = b.config.ImageName + } + log.Println(common.ScrubConfig(b.config, b.config.Password)) return nil, nil } @@ -70,7 +75,6 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe // Build the steps steps := []multistep.Step{ 
- &StepLoadExtensions{}, &StepLoadFlavor{ Flavor: b.config.Flavor, }, @@ -83,7 +87,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe SSHAgentAuth: b.config.RunConfig.Comm.SSHAgentAuth, }, &StepRunSourceServer{ - Name: b.config.ImageName, + Name: b.config.InstanceName, SourceImage: b.config.SourceImage, SourceImageName: b.config.SourceImageName, SecurityGroups: b.config.SecurityGroups, diff --git a/builder/openstack/builder_test.go b/builder/openstack/builder_test.go index 14fb8f2f9..42d78ee33 100644 --- a/builder/openstack/builder_test.go +++ b/builder/openstack/builder_test.go @@ -1,8 +1,9 @@ package openstack import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { diff --git a/builder/openstack/run_config.go b/builder/openstack/run_config.go index 55da56271..6b4e0932e 100644 --- a/builder/openstack/run_config.go +++ b/builder/openstack/run_config.go @@ -30,6 +30,7 @@ type RunConfig struct { Networks []string `mapstructure:"networks"` UserData string `mapstructure:"user_data"` UserDataFile string `mapstructure:"user_data_file"` + InstanceName string `mapstructure:"instance_name"` InstanceMetadata map[string]string `mapstructure:"instance_metadata"` ConfigDrive bool `mapstructure:"config_drive"` diff --git a/builder/openstack/server.go b/builder/openstack/server.go index 2c2dd8f2a..f509452d6 100644 --- a/builder/openstack/server.go +++ b/builder/openstack/server.go @@ -8,7 +8,7 @@ import ( "github.com/gophercloud/gophercloud" "github.com/gophercloud/gophercloud/openstack/compute/v2/servers" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // StateRefreshFunc is a function type used for StateChangeConf that is diff --git a/builder/openstack/ssh.go b/builder/openstack/ssh.go index 9957c6163..1dd6672c4 100644 --- a/builder/openstack/ssh.go +++ b/builder/openstack/ssh.go @@ -12,7 +12,7 @@ import ( "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips" "github.com/gophercloud/gophercloud/openstack/compute/v2/servers" packerssh "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) diff --git a/builder/openstack/step_add_image_members.go b/builder/openstack/step_add_image_members.go index 642ed3d46..e23e54588 100644 --- a/builder/openstack/step_add_image_members.go +++ b/builder/openstack/step_add_image_members.go @@ -1,16 +1,17 @@ package openstack import ( + "context" "fmt" "github.com/gophercloud/gophercloud/openstack/imageservice/v2/members" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepAddImageMembers struct{} -func (s *stepAddImageMembers) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepAddImageMembers) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { imageId := state.Get("image").(string) ui := state.Get("ui").(packer.Ui) config := state.Get("config").(Config) diff --git a/builder/openstack/step_allocate_ip.go b/builder/openstack/step_allocate_ip.go index 521e42634..fb53ce434 100644 --- a/builder/openstack/step_allocate_ip.go +++ b/builder/openstack/step_allocate_ip.go @@ -1,13 +1,14 @@ package openstack import ( + "context" "fmt" "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips" 
"github.com/gophercloud/gophercloud/openstack/compute/v2/servers" "github.com/gophercloud/gophercloud/pagination" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepAllocateIp struct { @@ -16,7 +17,7 @@ type StepAllocateIp struct { ReuseIps bool } -func (s *StepAllocateIp) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepAllocateIp) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) config := state.Get("config").(Config) server := state.Get("server").(*servers.Server) diff --git a/builder/openstack/step_create_image.go b/builder/openstack/step_create_image.go index 126bd4971..de8a6f4f6 100644 --- a/builder/openstack/step_create_image.go +++ b/builder/openstack/step_create_image.go @@ -1,6 +1,7 @@ package openstack import ( + "context" "fmt" "log" "time" @@ -8,13 +9,13 @@ import ( "github.com/gophercloud/gophercloud" "github.com/gophercloud/gophercloud/openstack/compute/v2/images" "github.com/gophercloud/gophercloud/openstack/compute/v2/servers" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepCreateImage struct{} -func (s *stepCreateImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) server := state.Get("server").(*servers.Server) ui := state.Get("ui").(packer.Ui) diff --git a/builder/openstack/step_get_password.go b/builder/openstack/step_get_password.go index 10800d5be..4b962db4b 100644 --- a/builder/openstack/step_get_password.go +++ b/builder/openstack/step_get_password.go @@ -1,6 +1,7 @@ package openstack import ( + "context" "crypto/rsa" "fmt" "log" @@ -8,8 +9,8 @@ import ( "github.com/gophercloud/gophercloud/openstack/compute/v2/servers" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "golang.org/x/crypto/ssh" ) @@ -20,7 +21,7 @@ type StepGetPassword struct { Comm *communicator.Config } -func (s *StepGetPassword) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepGetPassword) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/openstack/step_key_pair.go b/builder/openstack/step_key_pair.go index 2d1500a5c..c4eba5804 100644 --- a/builder/openstack/step_key_pair.go +++ b/builder/openstack/step_key_pair.go @@ -1,6 +1,7 @@ package openstack import ( + "context" "fmt" "io/ioutil" "log" @@ -9,8 +10,8 @@ import ( "runtime" "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "golang.org/x/crypto/ssh" ) @@ -25,7 +26,7 @@ type StepKeyPair struct { doCleanup bool } -func (s *StepKeyPair) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.PrivateKeyFile != "" { diff --git a/builder/openstack/step_key_pair_test.go b/builder/openstack/step_key_pair_test.go index 0ce45d6a7..891ad3cdc 100644 --- a/builder/openstack/step_key_pair_test.go +++ b/builder/openstack/step_key_pair_test.go @@ -81,7 +81,7 
@@ func TestBerToDer(t *testing.T) { Writer: msg, } - // Test - a DER encoded key commes back unchanged. + // Test - a DER encoded key comes back unchanged. newKey := berToDer(der_encoded_key, ui) if newKey != der_encoded_key { t.Errorf("Trying to convert a DER encoded key should return the same key.") diff --git a/builder/openstack/step_load_extensions.go b/builder/openstack/step_load_extensions.go deleted file mode 100644 index 84ab0c8d2..000000000 --- a/builder/openstack/step_load_extensions.go +++ /dev/null @@ -1,58 +0,0 @@ -package openstack - -import ( - "fmt" - "log" - - "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions" - "github.com/gophercloud/gophercloud/pagination" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" -) - -// StepLoadExtensions gets the FlavorRef from a Flavor. It first assumes -// that the Flavor is a ref and verifies it. Otherwise, it tries to find -// the flavor by name. -type StepLoadExtensions struct{} - -func (s *StepLoadExtensions) Run(state multistep.StateBag) multistep.StepAction { - config := state.Get("config").(Config) - ui := state.Get("ui").(packer.Ui) - - // We need the v2 compute client - client, err := config.computeV2Client() - if err != nil { - err = fmt.Errorf("Error initializing compute client: %s", err) - state.Put("error", err) - return multistep.ActionHalt - } - - ui.Say("Discovering enabled extensions...") - result := make(map[string]struct{}, 15) - pager := extensions.List(client) - err = pager.EachPage(func(p pagination.Page) (bool, error) { - // Extract the extensions from this page - exts, err := extensions.ExtractExtensions(p) - if err != nil { - return false, err - } - - for _, ext := range exts { - log.Printf("[DEBUG] Discovered extension: %s", ext.Alias) - result[ext.Alias] = struct{}{} - } - - return true, nil - }) - if err != nil { - err = fmt.Errorf("Error loading extensions: %s", err) - state.Put("error", err) - return multistep.ActionHalt - } - - state.Put("extensions", result) - return multistep.ActionContinue -} - -func (s *StepLoadExtensions) Cleanup(state multistep.StateBag) { -} diff --git a/builder/openstack/step_load_flavor.go b/builder/openstack/step_load_flavor.go index 816e0d8f4..8c331aa6a 100644 --- a/builder/openstack/step_load_flavor.go +++ b/builder/openstack/step_load_flavor.go @@ -1,12 +1,13 @@ package openstack import ( + "context" "fmt" "log" "github.com/gophercloud/gophercloud/openstack/compute/v2/flavors" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepLoadFlavor gets the FlavorRef from a Flavor. 
It first assumes @@ -16,7 +17,7 @@ type StepLoadFlavor struct { Flavor string } -func (s *StepLoadFlavor) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepLoadFlavor) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/openstack/step_run_source_server.go b/builder/openstack/step_run_source_server.go index 427f507dc..bb09f85b7 100644 --- a/builder/openstack/step_run_source_server.go +++ b/builder/openstack/step_run_source_server.go @@ -1,14 +1,15 @@ package openstack import ( + "context" "fmt" "io/ioutil" "log" "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs" "github.com/gophercloud/gophercloud/openstack/compute/v2/servers" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepRunSourceServer struct { @@ -25,7 +26,7 @@ type StepRunSourceServer struct { server *servers.Server } -func (s *StepRunSourceServer) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRunSourceServer) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) flavor := state.Get("flavor_id").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/openstack/step_stop_server.go b/builder/openstack/step_stop_server.go old mode 100644 new mode 100755 index e0f240fbd..4a0f867b0 --- a/builder/openstack/step_stop_server.go +++ b/builder/openstack/step_stop_server.go @@ -1,28 +1,22 @@ package openstack import ( + "context" "fmt" "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop" "github.com/gophercloud/gophercloud/openstack/compute/v2/servers" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepStopServer struct{} -func (s *StepStopServer) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepStopServer) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) config := state.Get("config").(Config) - extensions := state.Get("extensions").(map[string]struct{}) server := state.Get("server").(*servers.Server) - // Verify we have the extension - if _, ok := extensions["os-server-start-stop"]; !ok { - ui.Say("OpenStack cluster doesn't support stop, skipping...") - return multistep.ActionContinue - } - // We need the v2 compute client client, err := config.computeV2Client() if err != nil { diff --git a/builder/openstack/step_update_image_visibility.go b/builder/openstack/step_update_image_visibility.go index fb83134bd..8d9a3bc7c 100644 --- a/builder/openstack/step_update_image_visibility.go +++ b/builder/openstack/step_update_image_visibility.go @@ -1,16 +1,17 @@ package openstack import ( + "context" "fmt" imageservice "github.com/gophercloud/gophercloud/openstack/imageservice/v2/images" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepUpdateImageVisibility struct{} -func (s *stepUpdateImageVisibility) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepUpdateImageVisibility) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { imageId := state.Get("image").(string) ui := state.Get("ui").(packer.Ui) config := state.Get("config").(Config) diff --git a/builder/openstack/step_wait_for_rackconnect.go 
b/builder/openstack/step_wait_for_rackconnect.go index d791e9db0..6cbe2ed0e 100644 --- a/builder/openstack/step_wait_for_rackconnect.go +++ b/builder/openstack/step_wait_for_rackconnect.go @@ -1,19 +1,20 @@ package openstack import ( + "context" "fmt" "time" "github.com/gophercloud/gophercloud/openstack/compute/v2/servers" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepWaitForRackConnect struct { Wait bool } -func (s *StepWaitForRackConnect) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepWaitForRackConnect) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { if !s.Wait { return multistep.ActionContinue } diff --git a/builder/oracle/classic/artifact.go b/builder/oracle/classic/artifact.go new file mode 100644 index 000000000..82987be1a --- /dev/null +++ b/builder/oracle/classic/artifact.go @@ -0,0 +1,48 @@ +package classic + +import ( + "fmt" + + "github.com/hashicorp/go-oracle-terraform/compute" +) + +// Artifact is an artifact implementation that contains Image List +// and Machine Image info. +type Artifact struct { + MachineImageName string + MachineImageFile string + ImageListVersion int + driver *compute.ComputeClient +} + +// BuilderId uniquely identifies the builder. +func (a *Artifact) BuilderId() string { + return BuilderId +} + +// Files lists the files associated with an artifact. We don't have any files +// as the custom image is stored server side. +func (a *Artifact) Files() []string { + return nil +} + +func (a *Artifact) Id() string { + return a.MachineImageName +} + +func (a *Artifact) String() string { + return fmt.Sprintf("An image list entry was created: \n"+ + "Name: %s\n"+ + "File: %s\n"+ + "Version: %d", + a.MachineImageName, a.MachineImageFile, a.ImageListVersion) +} + +func (a *Artifact) State(name string) interface{} { + return nil +} + +// Destroy deletes the custom image associated with the artifact. +func (a *Artifact) Destroy() error { + return nil +} diff --git a/builder/oracle/classic/builder.go b/builder/oracle/classic/builder.go new file mode 100644 index 000000000..1e90439be --- /dev/null +++ b/builder/oracle/classic/builder.go @@ -0,0 +1,117 @@ +package classic + +import ( + "fmt" + "log" + "os" + + "github.com/hashicorp/go-cleanhttp" + "github.com/hashicorp/go-oracle-terraform/compute" + "github.com/hashicorp/go-oracle-terraform/opc" + ocommon "github.com/hashicorp/packer/builder/oracle/common" + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +// BuilderId uniquely identifies the builder +const BuilderId = "packer.oracle.classic" + +// Builder is a builder implementation that creates Oracle OCI custom images. +type Builder struct { + config *Config + runner multistep.Runner +} + +func (b *Builder) Prepare(rawConfig ...interface{}) ([]string, error) { + config, err := NewConfig(rawConfig...) 
+ if err != nil { + return nil, err + } + b.config = config + + return nil, nil +} + +func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) { + loggingEnabled := os.Getenv("PACKER_OCI_CLASSIC_LOGGING") != "" + httpClient := cleanhttp.DefaultClient() + config := &opc.Config{ + Username: opc.String(b.config.Username), + Password: opc.String(b.config.Password), + IdentityDomain: opc.String(b.config.IdentityDomain), + APIEndpoint: b.config.apiEndpointURL, + LogLevel: opc.LogDebug, + Logger: &Logger{loggingEnabled}, + // Logger: # Leave blank to use the default logger, or provide your own + HTTPClient: httpClient, + } + // Create the Compute Client + client, err := compute.NewComputeClient(config) + if err != nil { + return nil, fmt.Errorf("Error creating OPC Compute Client: %s", err) + } + + // Populate the state bag + state := new(multistep.BasicStateBag) + state.Put("config", b.config) + state.Put("hook", hook) + state.Put("ui", ui) + state.Put("client", client) + + // Build the steps + steps := []multistep.Step{ + &ocommon.StepKeyPair{ + Debug: b.config.PackerDebug, + DebugKeyPath: fmt.Sprintf("oci_classic_%s.pem", b.config.PackerBuildName), + PrivateKeyFile: b.config.Comm.SSHPrivateKey, + }, + &stepCreateIPReservation{}, + &stepAddKeysToAPI{}, + &stepSecurity{}, + &stepCreateInstance{}, + &communicator.StepConnect{ + Config: &b.config.Comm, + Host: ocommon.CommHost, + SSHConfig: ocommon.SSHConfig( + b.config.Comm.SSHUsername, + b.config.Comm.SSHPassword), + }, + &common.StepProvision{}, + &stepSnapshot{}, + &stepListImages{}, + } + + // Run the steps + b.runner = common.NewRunner(steps, b.config.PackerConfig, ui) + b.runner.Run(state) + + // If there was an error, return that + if rawErr, ok := state.GetOk("error"); ok { + return nil, rawErr.(error) + } + + // If there is no snapshot, then just return + if _, ok := state.GetOk("snapshot"); !ok { + return nil, nil + } + + // Build the artifact and return it + artifact := &Artifact{ + ImageListVersion: state.Get("image_list_version").(int), + MachineImageName: state.Get("machine_image_name").(string), + MachineImageFile: state.Get("machine_image_file").(string), + driver: client, + } + + return artifact, nil +} + +// Cancel terminates a running build. 
+func (b *Builder) Cancel() { + if b.runner != nil { + log.Println("Cancelling the step runner...") + b.runner.Cancel() + } +} diff --git a/builder/oracle/classic/config.go b/builder/oracle/classic/config.go new file mode 100644 index 000000000..da3a76460 --- /dev/null +++ b/builder/oracle/classic/config.go @@ -0,0 +1,155 @@ +package classic + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "net/url" + "os" + "regexp" + "time" + + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/template/interpolate" +) + +type Config struct { + common.PackerConfig `mapstructure:",squash"` + Comm communicator.Config `mapstructure:",squash"` + attribs map[string]interface{} + + // Access config overrides + Username string `mapstructure:"username"` + Password string `mapstructure:"password"` + IdentityDomain string `mapstructure:"identity_domain"` + APIEndpoint string `mapstructure:"api_endpoint"` + apiEndpointURL *url.URL + + // Image + ImageName string `mapstructure:"image_name"` + Shape string `mapstructure:"shape"` + SourceImageList string `mapstructure:"source_image_list"` + SnapshotTimeout time.Duration `mapstructure:"snapshot_timeout"` + DestImageList string `mapstructure:"dest_image_list"` + // Attributes and Attributes file are both optional and mutually exclusive. + Attributes string `mapstructure:"attributes"` + AttributesFile string `mapstructure:"attributes_file"` + // Optional; if you don't enter anything, the image list description + // will read "Packer-built image list" + DestImageListDescription string `mapstructure:"image_description"` + // Optional. Describes what computers are allowed to reach your instance + // via SSH. This whitelist must contain the computer you're running Packer + // from. It defaults to public-internet, meaning that you can SSH into your + // instance from anywhere as long as you have the right keys + SSHSourceList string `mapstructure:"ssh_source_list"` + + ctx interpolate.Context +} + +func NewConfig(raws ...interface{}) (*Config, error) { + c := &Config{} + + // Decode from template + err := config.Decode(c, &config.DecodeOpts{ + Interpolate: true, + InterpolateContext: &c.ctx, + }, raws...) 
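`NewConfig` above feeds the raw template values through Packer's `helper/config` decoder with interpolation enabled. A stand-alone sketch of the same pattern on a toy struct (the struct and its fields are illustrative only):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/packer/helper/config"
	"github.com/hashicorp/packer/template/interpolate"
)

// ToyConfig is a hypothetical config decoded the same way as the builder's Config.
type ToyConfig struct {
	ImageName string `mapstructure:"image_name"`
	Shape     string `mapstructure:"shape"`

	ctx interpolate.Context
}

func main() {
	raw := map[string]interface{}{
		"image_name": "example-{{timestamp}}",
		"shape":      "oc3",
	}

	c := &ToyConfig{}
	// Decode the raw template values into the struct, interpolating
	// template functions such as {{timestamp}} along the way.
	if err := config.Decode(c, &config.DecodeOpts{
		Interpolate:        true,
		InterpolateContext: &c.ctx,
	}, raw); err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.ImageName, c.Shape)
}
```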
+ if err != nil { + return nil, fmt.Errorf("Failed to mapstructure Config: %+v", err) + } + + c.apiEndpointURL, err = url.Parse(c.APIEndpoint) + if err != nil { + return nil, fmt.Errorf("Error parsing API Endpoint: %s", err) + } + // Set default source list + if c.SSHSourceList == "" { + c.SSHSourceList = "seciplist:/oracle/public/public-internet" + } + // Use default oracle username with sudo privileges + if c.Comm.SSHUsername == "" { + c.Comm.SSHUsername = "opc" + } + + if c.SnapshotTimeout == 0 { + c.SnapshotTimeout = 20 * time.Minute + } + + // Validate that all required fields are present + var errs *packer.MultiError + required := map[string]string{ + "username": c.Username, + "password": c.Password, + "api_endpoint": c.APIEndpoint, + "identity_domain": c.IdentityDomain, + "source_image_list": c.SourceImageList, + "dest_image_list": c.DestImageList, + "shape": c.Shape, + } + for k, v := range required { + if v == "" { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("You must specify a %s.", k)) + } + } + + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods + reValidObject := regexp.MustCompile("^[a-zA-Z0-9-._/]+$") + var objectValidation = []struct { + name string + value string + }{ + {"dest_image_list", c.DestImageList}, + {"image_name", c.ImageName}, + } + for _, ov := range objectValidation { + if !reValidObject.MatchString(ov.value) { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("%s can contain only alphanumeric characters, hyphens, underscores, and periods.", ov.name)) + } + } + + if c.Attributes != "" && c.AttributesFile != "" { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("Only one of attributes or attributes_file can be specified.")) + } else if c.AttributesFile != "" { + if _, err := os.Stat(c.AttributesFile); err != nil { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("attributes_file not found: %s", c.AttributesFile)) + } + } + + if es := c.Comm.Prepare(&c.ctx); len(es) > 0 { + errs = packer.MultiErrorAppend(errs, es...)
+ } + + if errs != nil && len(errs.Errors) > 0 { + return nil, errs + } + + // unpack attributes from json into config + var data map[string]interface{} + + if c.Attributes != "" { + err := json.Unmarshal([]byte(c.Attributes), &data) + if err != nil { + err = fmt.Errorf("Problem parsing json from attributes: %s", err) + packer.MultiErrorAppend(errs, err) + } + c.attribs = data + } else if c.AttributesFile != "" { + fidata, err := ioutil.ReadFile(c.AttributesFile) + if err != nil { + err = fmt.Errorf("Problem reading attributes_file: %s", err) + packer.MultiErrorAppend(errs, err) + } + err = json.Unmarshal(fidata, &data) + c.attribs = data + if err != nil { + err = fmt.Errorf("Problem parsing json from attributes_file: %s", err) + packer.MultiErrorAppend(errs, err) + } + c.attribs = data + } + + return c, nil +} diff --git a/builder/oracle/classic/config_test.go b/builder/oracle/classic/config_test.go new file mode 100644 index 000000000..ff6bf2a00 --- /dev/null +++ b/builder/oracle/classic/config_test.go @@ -0,0 +1,87 @@ +package classic + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +func testConfig() map[string]interface{} { + return map[string]interface{}{ + "identity_domain": "abc12345", + "username": "test@hashicorp.com", + "password": "testpassword123", + "api_endpoint": "https://api-test.compute.test.oraclecloud.com/", + "dest_image_list": "/Config-thing/myuser/myimage", + "source_image_list": "/oracle/public/whatever", + "shape": "oc3", + "image_name": "TestImageName", + "ssh_username": "opc", + } +} + +func TestConfigAutoFillsSourceList(t *testing.T) { + tc := testConfig() + conf, err := NewConfig(tc) + if err != nil { + t.Fatalf("Should not have error: %s", err.Error()) + } + if conf.SSHSourceList != "seciplist:/oracle/public/public-internet" { + t.Fatalf("conf.SSHSourceList should have been "+ + "\"seciplist:/oracle/public/public-internet\" but is \"%s\"", + conf.SSHSourceList) + } +} + +func TestConfigValidationCatchesMissing(t *testing.T) { + required := []string{ + "username", + "password", + "api_endpoint", + "identity_domain", + "dest_image_list", + "source_image_list", + "shape", + } + for _, key := range required { + tc := testConfig() + delete(tc, key) + _, err := NewConfig(tc) + if err == nil { + t.Fatalf("Test should have failed when config lacked %s!", key) + } + } +} + +func TestValidationsIgnoresOptional(t *testing.T) { + tc := testConfig() + delete(tc, "ssh_username") + _, err := NewConfig(tc) + if err != nil { + t.Fatalf("Shouldn't care if ssh_username is missing: err: %#v", err.Error()) + } +} + +func TestConfigValidatesObjects(t *testing.T) { + var objectTests = []struct { + object string + valid bool + }{ + {"foo-BAR.0_9", true}, + {"%", false}, + {"Matt...?", false}, + {"/Config-thing/myuser/myimage", true}, + } + for _, s := range []string{"dest_image_list", "image_name"} { + for _, tt := range objectTests { + tc := testConfig() + tc[s] = tt.object + _, err := NewConfig(tc) + if tt.valid { + assert.NoError(t, err, tt.object) + } else { + assert.Error(t, err, tt.object) + } + } + } +} diff --git a/builder/oracle/classic/log.go b/builder/oracle/classic/log.go new file mode 100644 index 000000000..7389dc057 --- /dev/null +++ b/builder/oracle/classic/log.go @@ -0,0 +1,14 @@ +package classic + +import "log" + +type Logger struct { + Enabled bool +} + +func (l *Logger) Log(input ...interface{}) { + if !l.Enabled { + return + } + log.Println(input...) 
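The `Logger` type above is what the builder hands to the go-oracle-terraform client when `PACKER_OCI_CLASSIC_LOGGING` is set. A rough sketch of wiring a compute client together the same way (the endpoint and credentials here are placeholders, not values from this patch):

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/hashicorp/go-cleanhttp"
	"github.com/hashicorp/go-oracle-terraform/compute"
	"github.com/hashicorp/go-oracle-terraform/opc"
)

// debugLogger satisfies the same logging contract as the Logger type above.
type debugLogger struct{ enabled bool }

func (l *debugLogger) Log(input ...interface{}) {
	if !l.enabled {
		return
	}
	log.Println(input...)
}

func main() {
	// Placeholder endpoint and credentials; real values come from the template.
	endpoint, err := url.Parse("https://api-z999.compute.us0.oraclecloud.com/")
	if err != nil {
		log.Fatal(err)
	}

	cfg := &opc.Config{
		Username:       opc.String("user@example.com"),
		Password:       opc.String("hunter2"),
		IdentityDomain: opc.String("mydomain"),
		APIEndpoint:    endpoint,
		LogLevel:       opc.LogDebug,
		Logger:         &debugLogger{enabled: true},
		HTTPClient:     cleanhttp.DefaultClient(),
	}

	client, err := compute.NewComputeClient(cfg)
	if err != nil {
		log.Fatalf("Error creating OPC Compute Client: %s", err)
	}
	fmt.Println("compute client ready:", client != nil)
}
```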
+} diff --git a/builder/oracle/classic/step_add_keys.go b/builder/oracle/classic/step_add_keys.go new file mode 100644 index 000000000..b09310a1f --- /dev/null +++ b/builder/oracle/classic/step_add_keys.go @@ -0,0 +1,72 @@ +package classic + +import ( + "context" + "fmt" + "strings" + + "github.com/hashicorp/go-oracle-terraform/compute" + "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type stepAddKeysToAPI struct{} + +func (s *stepAddKeysToAPI) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + // get variables from state + ui := state.Get("ui").(packer.Ui) + config := state.Get("config").(*Config) + client := state.Get("client").(*compute.ComputeClient) + + if config.Comm.Type != "ssh" { + ui.Say("Not using SSH communicator; skip generating SSH keys...") + return multistep.ActionContinue + } + + // grab packer-generated key from statebag context. + sshPublicKey := strings.TrimSpace(state.Get("publicKey").(string)) + + // form API call to add key to compute cloud + sshKeyName := fmt.Sprintf("/Compute-%s/%s/packer_generated_key_%s", + config.IdentityDomain, config.Username, uuid.TimeOrderedUUID()) + + ui.Say(fmt.Sprintf("Creating temporary key: %s", sshKeyName)) + + sshKeysClient := client.SSHKeys() + sshKeysInput := compute.CreateSSHKeyInput{ + Name: sshKeyName, + Key: sshPublicKey, + Enabled: true, + } + + // Load the packer-generated SSH key into the Oracle Compute cloud. + keyInfo, err := sshKeysClient.CreateSSHKey(&sshKeysInput) + if err != nil { + err = fmt.Errorf("Problem adding Public SSH key through Oracle's API: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + state.Put("key_name", keyInfo.Name) + return multistep.ActionContinue +} + +func (s *stepAddKeysToAPI) Cleanup(state multistep.StateBag) { + // Delete the keys we created during this run + keyName, ok := state.GetOk("key_name") + if !ok { + // No keys were generated; none need to be cleaned up. 
+ return + } + ui := state.Get("ui").(packer.Ui) + ui.Say("Deleting SSH keys...") + deleteInput := compute.DeleteSSHKeyInput{Name: keyName.(string)} + client := state.Get("client").(*compute.ComputeClient) + deleteClient := client.SSHKeys() + err := deleteClient.DeleteSSHKey(&deleteInput) + if err != nil { + ui.Error(fmt.Sprintf("Error deleting SSH keys: %s", err.Error())) + } + return +} diff --git a/builder/oracle/classic/step_create_instance.go b/builder/oracle/classic/step_create_instance.go new file mode 100644 index 000000000..d52500e09 --- /dev/null +++ b/builder/oracle/classic/step_create_instance.go @@ -0,0 +1,87 @@ +package classic + +import ( + "context" + "fmt" + + "github.com/hashicorp/go-oracle-terraform/compute" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type stepCreateInstance struct{} + +func (s *stepCreateInstance) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + // get variables from state + ui := state.Get("ui").(packer.Ui) + ui.Say("Creating Instance...") + + config := state.Get("config").(*Config) + client := state.Get("client").(*compute.ComputeClient) + ipAddName := state.Get("ipres_name").(string) + secListName := state.Get("security_list").(string) + + netInfo := compute.NetworkingInfo{ + Nat: []string{ipAddName}, + SecLists: []string{secListName}, + } + + // get instances client + instanceClient := client.Instances() + + // Instances Input + input := &compute.CreateInstanceInput{ + Name: config.ImageName, + Shape: config.Shape, + ImageList: config.SourceImageList, + Networking: map[string]compute.NetworkingInfo{"eth0": netInfo}, + Attributes: config.attribs, + } + if config.Comm.Type == "ssh" { + keyName := state.Get("key_name").(string) + input.SSHKeys = []string{keyName} + } + + instanceInfo, err := instanceClient.CreateInstance(input) + if err != nil { + err = fmt.Errorf("Problem creating instance: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + + state.Put("instance_info", instanceInfo) + state.Put("instance_id", instanceInfo.ID) + ui.Message(fmt.Sprintf("Created instance: %s.", instanceInfo.ID)) + return multistep.ActionContinue +} + +func (s *stepCreateInstance) Cleanup(state multistep.StateBag) { + instanceID, ok := state.GetOk("instance_id") + if !ok { + return + } + + // terminate instance + ui := state.Get("ui").(packer.Ui) + client := state.Get("client").(*compute.ComputeClient) + config := state.Get("config").(*Config) + + ui.Say("Terminating source instance...") + + instanceClient := client.Instances() + input := &compute.DeleteInstanceInput{ + Name: config.ImageName, + ID: instanceID.(string), + } + + err := instanceClient.DeleteInstance(input) + if err != nil { + err = fmt.Errorf("Problem destroying instance: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return + } + // TODO wait for instance state to change to deleted? 
+ ui.Say("Terminated instance.") +} diff --git a/builder/oracle/classic/step_create_ip_reservation.go b/builder/oracle/classic/step_create_ip_reservation.go new file mode 100644 index 000000000..5d400e746 --- /dev/null +++ b/builder/oracle/classic/step_create_ip_reservation.go @@ -0,0 +1,61 @@ +package classic + +import ( + "context" + "fmt" + + "github.com/hashicorp/go-oracle-terraform/compute" + "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type stepCreateIPReservation struct{} + +func (s *stepCreateIPReservation) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + ui := state.Get("ui").(packer.Ui) + + config := state.Get("config").(*Config) + client := state.Get("client").(*compute.ComputeClient) + iprClient := client.IPReservations() + // TODO: add optional Name and Tags + + ipresName := fmt.Sprintf("ipres_%s_%s", config.ImageName, uuid.TimeOrderedUUID()) + ui.Message(fmt.Sprintf("Creating temporary IP reservation: %s", ipresName)) + + IPInput := &compute.CreateIPReservationInput{ + ParentPool: compute.PublicReservationPool, + Permanent: true, + Name: ipresName, + } + ipRes, err := iprClient.CreateIPReservation(IPInput) + + if err != nil { + err := fmt.Errorf("Error creating IP Reservation: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + state.Put("instance_ip", ipRes.IP) + state.Put("ipres_name", ipresName) + return multistep.ActionContinue +} + +func (s *stepCreateIPReservation) Cleanup(state multistep.StateBag) { + ipResName, ok := state.GetOk("ipres_name") + if !ok { + return + } + + ui := state.Get("ui").(packer.Ui) + ui.Say("Cleaning up IP reservations...") + client := state.Get("client").(*compute.ComputeClient) + + input := compute.DeleteIPReservationInput{Name: ipResName.(string)} + ipClient := client.IPReservations() + err := ipClient.DeleteIPReservation(&input) + if err != nil { + fmt.Printf("error deleting IP reservation: %s", err.Error()) + } + +} diff --git a/builder/oracle/classic/step_list_images.go b/builder/oracle/classic/step_list_images.go new file mode 100644 index 000000000..a99612c14 --- /dev/null +++ b/builder/oracle/classic/step_list_images.go @@ -0,0 +1,104 @@ +package classic + +import ( + "context" + "fmt" + + "github.com/hashicorp/go-oracle-terraform/compute" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type stepListImages struct{} + +func (s *stepListImages) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + // get variables from state + ui := state.Get("ui").(packer.Ui) + config := state.Get("config").(*Config) + client := state.Get("client").(*compute.ComputeClient) + ui.Say("Adding image to image list...") + + imageListClient := client.ImageList() + getInput := compute.GetImageListInput{ + Name: config.DestImageList, + } + imList, err := imageListClient.GetImageList(&getInput) + if err != nil { + // If the list didn't exist, create it. 
+ ui.Say(fmt.Sprintf(err.Error())) + ui.Say(fmt.Sprintf("Destination image list %s does not exist; Creating it...", + config.DestImageList)) + + ilInput := compute.CreateImageListInput{ + Name: config.DestImageList, + Description: "Packer-built image list", + } + + imList, err = imageListClient.CreateImageList(&ilInput) + if err != nil { + err = fmt.Errorf("Problem creating image list: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + ui.Message(fmt.Sprintf("Image list %s created!", imList.URI)) + } + + // Now create and image list entry for the image into that list. + snap := state.Get("snapshot").(*compute.Snapshot) + version := len(imList.Entries) + 1 + entriesClient := client.ImageListEntries() + entriesInput := compute.CreateImageListEntryInput{ + Name: config.DestImageList, + MachineImages: []string{fmt.Sprintf("/Compute-%s/%s/%s", + config.IdentityDomain, config.Username, snap.MachineImage)}, + Version: version, + } + entryInfo, err := entriesClient.CreateImageListEntry(&entriesInput) + if err != nil { + err = fmt.Errorf("Problem creating an image list entry: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + state.Put("image_list_entry", entryInfo) + ui.Message(fmt.Sprintf("created image list entry %s", entryInfo.Name)) + + machineImagesClient := client.MachineImages() + getImagesInput := compute.GetMachineImageInput{ + Name: config.ImageName, + } + + // Update image list default to use latest version + updateInput := compute.UpdateImageListInput{ + Default: version, + Description: config.DestImageListDescription, + Name: config.DestImageList, + } + _, err = imageListClient.UpdateImageList(&updateInput) + if err != nil { + err = fmt.Errorf("Problem updating default image list version: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + + // Grab info about the machine image to return with the artifact + imInfo, err := machineImagesClient.GetMachineImage(&getImagesInput) + if err != nil { + err = fmt.Errorf("Problem getting machine image info: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + state.Put("machine_image_file", imInfo.File) + state.Put("machine_image_name", imInfo.Name) + state.Put("image_list_version", version) + + return multistep.ActionContinue +} + +func (s *stepListImages) Cleanup(state multistep.StateBag) { + // Nothing to do + return +} diff --git a/builder/oracle/classic/step_security.go b/builder/oracle/classic/step_security.go new file mode 100644 index 000000000..a5b04418b --- /dev/null +++ b/builder/oracle/classic/step_security.go @@ -0,0 +1,152 @@ +package classic + +import ( + "context" + "fmt" + "strings" + + "github.com/hashicorp/go-oracle-terraform/compute" + "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type stepSecurity struct{} + +func (s *stepSecurity) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + ui := state.Get("ui").(packer.Ui) + config := state.Get("config").(*Config) + + commType := "" + if config.Comm.Type == "ssh" { + commType = "SSH" + } else if config.Comm.Type == "winrm" { + commType = "WINRM" + } + + ui.Say(fmt.Sprintf("Configuring security lists and rules to enable %s access...", commType)) + + client := state.Get("client").(*compute.ComputeClient) + runUUID := uuid.TimeOrderedUUID() + + namePrefix := fmt.Sprintf("/Compute-%s/%s/", 
config.IdentityDomain, config.Username) + secListName := fmt.Sprintf("Packer_%s_Allow_%s_%s", commType, config.ImageName, runUUID) + secListClient := client.SecurityLists() + secListInput := compute.CreateSecurityListInput{ + Description: fmt.Sprintf("Packer-generated security list to give packer %s access", commType), + Name: namePrefix + secListName, + } + _, err := secListClient.CreateSecurityList(&secListInput) + if err != nil { + if !strings.Contains(err.Error(), "already exists") { + err = fmt.Errorf("Error creating security List to"+ + " allow Packer to connect to Oracle instance via %s: %s", commType, err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + } + // DOCS NOTE: user must have Compute_Operations role + // Create security rule that allows Packer to connect via SSH or winRM + var application string + if commType == "SSH" { + application = "/oracle/public/ssh" + } else if commType == "WINRM" { + // Check to see whether a winRM security application is already defined + applicationClient := client.SecurityApplications() + application = fmt.Sprintf("packer_winRM_%s", runUUID) + applicationInput := compute.CreateSecurityApplicationInput{ + Description: "Allows Packer to connect to instance via winRM", + DPort: "5985-5986", + Name: application, + Protocol: "TCP", + } + _, err = applicationClient.CreateSecurityApplication(&applicationInput) + if err != nil { + err = fmt.Errorf("Error creating security application to"+ + " allow Packer to connect to Oracle instance via %s: %s", commType, err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + state.Put("winrm_application", application) + } + secRulesClient := client.SecRules() + secRuleName := fmt.Sprintf("Packer-allow-%s-Rule_%s_%s", commType, + config.ImageName, runUUID) + secRulesInput := compute.CreateSecRuleInput{ + Action: "PERMIT", + Application: application, + Description: "Packer-generated security rule to allow ssh/winrm", + DestinationList: "seclist:" + namePrefix + secListName, + Name: namePrefix + secRuleName, + SourceList: config.SSHSourceList, + } + + _, err = secRulesClient.CreateSecRule(&secRulesInput) + if err != nil { + err = fmt.Errorf("Error creating security rule to"+ + " allow Packer to connect to Oracle instance: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + state.Put("security_rule_name", secRuleName) + state.Put("security_list", secListName) + return multistep.ActionContinue +} + +func (s *stepSecurity) Cleanup(state multistep.StateBag) { + secRuleName, ok := state.GetOk("security_rule_name") + if !ok { + return + } + secListName, ok := state.GetOk("security_list") + if !ok { + return + } + + client := state.Get("client").(*compute.ComputeClient) + ui := state.Get("ui").(packer.Ui) + config := state.Get("config").(*Config) + + ui.Say("Deleting temporary rules and lists...") + + namePrefix := fmt.Sprintf("/Compute-%s/%s/", config.IdentityDomain, config.Username) + // delete security rules that Packer generated + secRulesClient := client.SecRules() + ruleInput := compute.DeleteSecRuleInput{Name: namePrefix + secRuleName.(string)} + err := secRulesClient.DeleteSecRule(&ruleInput) + if err != nil { + ui.Say(fmt.Sprintf("Error deleting the packer-generated security rule %s; "+ + "please delete manually. 
(error: %s)", secRuleName.(string), err.Error())) + } + + // delete security list that Packer generated + secListClient := client.SecurityLists() + input := compute.DeleteSecurityListInput{Name: namePrefix + secListName.(string)} + err = secListClient.DeleteSecurityList(&input) + if err != nil { + ui.Say(fmt.Sprintf("Error deleting the packer-generated security list %s; "+ + "please delete manually. (error : %s)", secListName.(string), err.Error())) + } + + // Some extra cleanup if we used the winRM communicator + if config.Comm.Type == "winrm" { + // Delete the packer-generated application + application, ok := state.GetOk("winrm_application") + if !ok { + return + } + applicationClient := client.SecurityApplications() + deleteApplicationInput := compute.DeleteSecurityApplicationInput{ + Name: namePrefix + application.(string), + } + err = applicationClient.DeleteSecurityApplication(&deleteApplicationInput) + if err != nil { + ui.Say(fmt.Sprintf("Error deleting the packer-generated winrm security application %s; "+ + "please delete manually. (error : %s)", application.(string), err.Error())) + } + } + +} diff --git a/builder/oracle/classic/step_snapshot.go b/builder/oracle/classic/step_snapshot.go new file mode 100644 index 000000000..64058966d --- /dev/null +++ b/builder/oracle/classic/step_snapshot.go @@ -0,0 +1,71 @@ +package classic + +import ( + "context" + "fmt" + + "github.com/hashicorp/go-oracle-terraform/compute" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type stepSnapshot struct { + cleanupSnap bool +} + +func (s *stepSnapshot) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + // get variables from state + ui := state.Get("ui").(packer.Ui) + ui.Say("Creating Snapshot...") + config := state.Get("config").(*Config) + client := state.Get("client").(*compute.ComputeClient) + instanceID := state.Get("instance_id").(string) + + // get instances client + snapshotClient := client.Snapshots() + + // Instances Input + snapshotInput := &compute.CreateSnapshotInput{ + Instance: fmt.Sprintf("%s/%s", config.ImageName, instanceID), + MachineImage: config.ImageName, + Timeout: config.SnapshotTimeout, + } + + snap, err := snapshotClient.CreateSnapshot(snapshotInput) + if err != nil { + err = fmt.Errorf("Problem creating snapshot: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + state.Put("snapshot", snap) + ui.Message(fmt.Sprintf("Created snapshot: %s.", snap.Name)) + return multistep.ActionContinue +} + +func (s *stepSnapshot) Cleanup(state multistep.StateBag) { + // Delete the snapshot + var snap *compute.Snapshot + if snapshot, ok := state.GetOk("snapshot"); ok { + snap = snapshot.(*compute.Snapshot) + } else { + return + } + + ui := state.Get("ui").(packer.Ui) + ui.Say("Deleting Snapshot...") + client := state.Get("client").(*compute.ComputeClient) + snapClient := client.Snapshots() + snapInput := compute.DeleteSnapshotInput{ + Snapshot: snap.Name, + MachineImage: snap.MachineImage, + } + + err := snapClient.DeleteSnapshotResourceOnly(&snapInput) + if err != nil { + err = fmt.Errorf("Problem deleting snapshot: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + } + return +} diff --git a/builder/oracle/oci/ssh.go b/builder/oracle/common/ssh.go similarity index 90% rename from builder/oracle/oci/ssh.go rename to builder/oracle/common/ssh.go index a9d62f4a4..8e448e592 100644 --- a/builder/oracle/oci/ssh.go +++ b/builder/oracle/common/ssh.go @@ -1,14 +1,14 @@ 
-package oci +package common import ( "fmt" packerssh "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" ) -func commHost(state multistep.StateBag) (string, error) { +func CommHost(state multistep.StateBag) (string, error) { ipAddress := state.Get("instance_ip").(string) return ipAddress, nil } diff --git a/builder/oracle/oci/step_ssh_key_pair.go b/builder/oracle/common/step_ssh_key_pair.go similarity index 91% rename from builder/oracle/oci/step_ssh_key_pair.go rename to builder/oracle/common/step_ssh_key_pair.go index cc9d0358f..c545a09e2 100644 --- a/builder/oracle/oci/step_ssh_key_pair.go +++ b/builder/oracle/common/step_ssh_key_pair.go @@ -1,6 +1,7 @@ -package oci +package common import ( + "context" "crypto/rand" "crypto/rsa" "crypto/x509" @@ -10,18 +11,18 @@ import ( "os" "runtime" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "golang.org/x/crypto/ssh" ) -type stepKeyPair struct { +type StepKeyPair struct { Debug bool DebugKeyPath string PrivateKeyFile string } -func (s *stepKeyPair) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepKeyPair) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if s.PrivateKeyFile != "" { @@ -111,6 +112,6 @@ func (s *stepKeyPair) Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionContinue } -func (s *stepKeyPair) Cleanup(state multistep.StateBag) { +func (s *StepKeyPair) Cleanup(state multistep.StateBag) { // Nothing to do } diff --git a/builder/oracle/oci/artifact.go b/builder/oracle/oci/artifact.go index 6684468fc..703cd062d 100644 --- a/builder/oracle/oci/artifact.go +++ b/builder/oracle/oci/artifact.go @@ -1,13 +1,15 @@ package oci import ( + "context" "fmt" - client "github.com/hashicorp/packer/builder/oracle/oci/client" + + "github.com/oracle/oci-go-sdk/core" ) // Artifact is an artifact implementation that contains a built Custom Image. type Artifact struct { - Image client.Image + Image core.Image Region string driver Driver } @@ -25,21 +27,27 @@ func (a *Artifact) Files() []string { // Id returns the OCID of the associated Image. func (a *Artifact) Id() string { - return a.Image.ID + return *a.Image.Id } func (a *Artifact) String() string { + var displayName string + if a.Image.DisplayName != nil { + displayName = *a.Image.DisplayName + } + return fmt.Sprintf( "An image was created: '%v' (OCID: %v) in region '%v'", - a.Image.DisplayName, a.Image.ID, a.Region, + displayName, *a.Image.Id, a.Region, ) } +// State ... func (a *Artifact) State(name string) interface{} { return nil } // Destroy deletes the custom image associated with the artifact. 
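`StepKeyPair`, now shared from `builder/oracle/common`, creates a throwaway SSH key pair when no private key file is given. A simplified sketch of that generation using the standard library and `golang.org/x/crypto/ssh` (not the exact code being moved here):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate a 2048-bit RSA key that only lives for the duration of the build.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// PEM-encode the private key so the SSH communicator can use it.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})

	// Produce the authorized_keys form of the public key for upload to the cloud.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	authorized := ssh.MarshalAuthorizedKey(pub)

	fmt.Printf("private key: %d bytes, public key: %s", len(privPEM), authorized)
}
```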
func (a *Artifact) Destroy() error { - return a.driver.DeleteImage(a.Image.ID) + return a.driver.DeleteImage(context.TODO(), *a.Image.Id) } diff --git a/builder/oracle/oci/builder.go b/builder/oracle/oci/builder.go index 5eedaa0b1..1554f6013 100644 --- a/builder/oracle/oci/builder.go +++ b/builder/oracle/oci/builder.go @@ -6,11 +6,12 @@ import ( "fmt" "log" - client "github.com/hashicorp/packer/builder/oracle/oci/client" + ocommon "github.com/hashicorp/packer/builder/oracle/common" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "github.com/oracle/oci-go-sdk/core" ) // BuilderId uniquely identifies the builder @@ -50,17 +51,22 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe // Build the steps steps := []multistep.Step{ - &stepKeyPair{ + &ocommon.StepKeyPair{ Debug: b.config.PackerDebug, DebugKeyPath: fmt.Sprintf("oci_%s.pem", b.config.PackerBuildName), PrivateKeyFile: b.config.Comm.SSHPrivateKey, }, &stepCreateInstance{}, &stepInstanceInfo{}, + &stepGetDefaultCredentials{ + Debug: b.config.PackerDebug, + Comm: &b.config.Comm, + BuildName: b.config.PackerBuildName, + }, &communicator.StepConnect{ Config: &b.config.Comm, - Host: commHost, - SSHConfig: SSHConfig( + Host: ocommon.CommHost, + SSHConfig: ocommon.SSHConfig( b.config.Comm.SSHUsername, b.config.Comm.SSHPassword), }, @@ -77,10 +83,15 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe return nil, rawErr.(error) } + region, err := b.config.ConfigProvider.Region() + if err != nil { + return nil, err + } + // Build the artifact and return it artifact := &Artifact{ - Image: state.Get("image").(client.Image), - Region: b.config.AccessCfg.Region, + Image: state.Get("image").(core.Image), + Region: region, driver: driver, } diff --git a/builder/oracle/oci/client/base_client.go b/builder/oracle/oci/client/base_client.go deleted file mode 100644 index 7270687e7..000000000 --- a/builder/oracle/oci/client/base_client.go +++ /dev/null @@ -1,215 +0,0 @@ -package oci - -import ( - "bytes" - "encoding/json" - "github.com/google/go-querystring/query" - "net/http" - "net/url" -) - -const ( - contentType = "Content-Type" - jsonContentType = "application/json" -) - -// baseClient provides a basic (AND INTENTIONALLY INCOMPLETE) JSON REST client -// that abstracts away some of the repetitive code required in the OCI Client. -type baseClient struct { - httpClient *http.Client - method string - url string - queryStruct interface{} - header http.Header - body interface{} -} - -// newBaseClient constructs a default baseClient. -func newBaseClient() *baseClient { - return &baseClient{ - httpClient: http.DefaultClient, - method: "GET", - header: make(http.Header), - } -} - -// New creates a copy of an existing baseClient. -func (c *baseClient) New() *baseClient { - // Copy headers - header := make(http.Header) - for k, v := range c.header { - header[k] = v - } - - return &baseClient{ - httpClient: c.httpClient, - method: c.method, - url: c.url, - header: header, - } -} - -// Client sets the http Client used to perform requests. -func (c *baseClient) Client(httpClient *http.Client) *baseClient { - if httpClient == nil { - c.httpClient = http.DefaultClient - } else { - c.httpClient = httpClient - } - return c -} - -// Base sets the base client url. 
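The hand-written OCI REST client deleted below is superseded by the official `github.com/oracle/oci-go-sdk`; the builder now derives the artifact region from the SDK's configuration provider rather than from its own config-file parser. A minimal sketch, assuming the SDK's default provider (which reads `~/.oci/config`) rather than whatever provider the builder actually constructs:

```go
package main

import (
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
)

func main() {
	// DefaultConfigProvider reads the standard OCI CLI configuration file.
	provider := common.DefaultConfigProvider()

	region, err := provider.Region()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("building in region", region)
}
```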
-func (c *baseClient) Base(path string) *baseClient { - c.url = path - return c -} - -// Path extends the client url. -func (c *baseClient) Path(path string) *baseClient { - baseURL, baseErr := url.Parse(c.url) - pathURL, pathErr := url.Parse(path) - // Bail on parsing error leaving the client's url unmodified - if baseErr != nil || pathErr != nil { - return c - } - - c.url = baseURL.ResolveReference(pathURL).String() - return c -} - -// QueryStruct sets the struct from which the request querystring is built. -func (c *baseClient) QueryStruct(params interface{}) *baseClient { - c.queryStruct = params - return c -} - -// SetBody wraps a given struct for serialisation and sets the client body. -func (c *baseClient) SetBody(params interface{}) *baseClient { - c.body = params - return c -} - -// Header - -// AddHeader adds a HTTP header to the client. Existing keys will be extended. -func (c *baseClient) AddHeader(key, value string) *baseClient { - c.header.Add(key, value) - return c -} - -// SetHeader sets a HTTP header on the client. Existing keys will be -// overwritten. -func (c *baseClient) SetHeader(key, value string) *baseClient { - c.header.Add(key, value) - return c -} - -// HTTP methods (subset) - -// Get sets the client's HTTP method to GET. -func (c *baseClient) Get(path string) *baseClient { - c.method = "GET" - return c.Path(path) -} - -// Post sets the client's HTTP method to POST. -func (c *baseClient) Post(path string) *baseClient { - c.method = "POST" - return c.Path(path) -} - -// Delete sets the client's HTTP method to DELETE. -func (c *baseClient) Delete(path string) *baseClient { - c.method = "DELETE" - return c.Path(path) -} - -// Do executes a HTTP request and returns the response encoded as either error -// or success values. -func (c *baseClient) Do(req *http.Request, successV, failureV interface{}) (*http.Response, error) { - resp, err := c.httpClient.Do(req) - if err != nil { - return nil, err - } - defer resp.Body.Close() - - if resp.StatusCode >= 200 && resp.StatusCode < 300 { - if successV != nil { - err = json.NewDecoder(resp.Body).Decode(successV) - } - } else { - if failureV != nil { - err = json.NewDecoder(resp.Body).Decode(failureV) - } - } - - return resp, err -} - -// Request builds a http.Request from the baseClient instance. -func (c *baseClient) Request() (*http.Request, error) { - reqURL, err := url.Parse(c.url) - if err != nil { - return nil, err - } - - if c.queryStruct != nil { - err = addQueryStruct(reqURL, c.queryStruct) - if err != nil { - return nil, err - } - } - - body := &bytes.Buffer{} - if c.body != nil { - if err := json.NewEncoder(body).Encode(c.body); err != nil { - return nil, err - } - } - - req, err := http.NewRequest(c.method, reqURL.String(), body) - if err != nil { - return nil, err - } - - // Add headers to request - for k, vs := range c.header { - for _, v := range vs { - req.Header.Add(k, v) - } - } - - return req, nil -} - -// Recieve creates a http request from the client and executes it returning the -// response. -func (c *baseClient) Receive(successV, failureV interface{}) (*http.Response, error) { - req, err := c.Request() - if err != nil { - return nil, err - } - return c.Do(req, successV, failureV) -} - -// addQueryStruct converts a struct to a querystring and merges any values -// provided in the URL itself. 
-func addQueryStruct(reqURL *url.URL, queryStruct interface{}) error { - urlValues, err := url.ParseQuery(reqURL.RawQuery) - if err != nil { - return err - } - queryValues, err := query.Values(queryStruct) - if err != nil { - return err - } - - for k, vs := range queryValues { - for _, v := range vs { - urlValues.Add(k, v) - } - } - reqURL.RawQuery = urlValues.Encode() - return nil -} diff --git a/builder/oracle/oci/client/client.go b/builder/oracle/oci/client/client.go deleted file mode 100644 index 8e6c1762d..000000000 --- a/builder/oracle/oci/client/client.go +++ /dev/null @@ -1,31 +0,0 @@ -package oci - -import ( - "net/http" -) - -const ( - apiVersion = "20160918" - userAgent = "go-oci/" + apiVersion - baseURLPattern = "https://%s.%s.oraclecloud.com/%s/" -) - -// Client is the main interface through which consumers interact with the OCI -// API. -type Client struct { - UserAgent string - Compute *ComputeClient - Config *Config -} - -// NewClient creates a new Client for communicating with the OCI API. -func NewClient(config *Config) (*Client, error) { - transport := NewTransport(http.DefaultTransport, config) - base := newBaseClient().Client(&http.Client{Transport: transport}) - - return &Client{ - UserAgent: userAgent, - Compute: NewComputeClient(base.New().Base(config.getBaseURL("iaas"))), - Config: config, - }, nil -} diff --git a/builder/oracle/oci/client/client_test.go b/builder/oracle/oci/client/client_test.go deleted file mode 100644 index f2ebd8f87..000000000 --- a/builder/oracle/oci/client/client_test.go +++ /dev/null @@ -1,49 +0,0 @@ -package oci - -import ( - "net/http" - "net/http/httptest" - "net/url" - "os" - - "github.com/go-ini/ini" -) - -var ( - mux *http.ServeMux - client *Client - server *httptest.Server - keyFile *os.File -) - -// setup sets up a test HTTP server along with a oci.Client that is -// configured to talk to that test server. Tests should register handlers on -// mux which provide mock responses for the API method being tested. -func setup() { - mux = http.NewServeMux() - server = httptest.NewServer(mux) - parsedURL, _ := url.Parse(server.URL) - - config := &Config{} - config.baseURL = parsedURL.String() - - var cfg *ini.File - var err error - cfg, keyFile, err = BaseTestConfig() - - config, err = loadConfigSection(cfg, "DEFAULT", config) - if err != nil { - panic(err) - } - - client, err = NewClient(config) - if err != nil { - panic("Failed to instantiate test client") - } -} - -// teardown closes the test HTTP server -func teardown() { - server.Close() - os.Remove(keyFile.Name()) -} diff --git a/builder/oracle/oci/client/compute.go b/builder/oracle/oci/client/compute.go deleted file mode 100644 index 183a3794f..000000000 --- a/builder/oracle/oci/client/compute.go +++ /dev/null @@ -1,21 +0,0 @@ -package oci - -// ComputeClient is a client for the OCI Compute API. -type ComputeClient struct { - BaseURL string - Instances *InstanceService - Images *ImageService - VNICAttachments *VNICAttachmentService - VNICs *VNICService -} - -// NewComputeClient creates a new client for communicating with the OCI -// Compute API. 
-func NewComputeClient(s *baseClient) *ComputeClient { - return &ComputeClient{ - Instances: NewInstanceService(s), - Images: NewImageService(s), - VNICAttachments: NewVNICAttachmentService(s), - VNICs: NewVNICService(s), - } -} diff --git a/builder/oracle/oci/client/config.go b/builder/oracle/oci/client/config.go deleted file mode 100644 index 4df6b6a01..000000000 --- a/builder/oracle/oci/client/config.go +++ /dev/null @@ -1,240 +0,0 @@ -package oci - -import ( - "crypto/rand" - "crypto/rsa" - "crypto/x509" - "encoding/pem" - "errors" - "fmt" - "io/ioutil" - "os" - - "github.com/go-ini/ini" - "github.com/mitchellh/go-homedir" -) - -// Config API authentication and target configuration -type Config struct { - // User OCID e.g. ocid1.user.oc1..aaaaaaaadcshyehbkvxl7arse3lv7z5oknexjgfhnhwidtugsxhlm4247 - User string `ini:"user"` - - // User's Tenancy OCID e.g. ocid1.tenancy.oc1..aaaaaaaagtgvshv6opxzjyzkupkt64ymd32n6kbomadanpcg43d - Tenancy string `ini:"tenancy"` - - // Bare metal region identifier (e.g. us-phoenix-1) - Region string `ini:"region"` - - // Hex key fingerprint (e.g. b5:a0:62:57:28:0d:fd:c9:59:16:eb:d4:51:9f:70:e4) - Fingerprint string `ini:"fingerprint"` - - // Path to OCI config file (e.g. ~/.oci/config) - KeyFile string `ini:"key_file"` - - // Passphrase used for the key, if it is encrypted. - PassPhrase string `ini:"pass_phrase"` - - // Private key (loaded via LoadPrivateKey or ParsePrivateKey) - Key *rsa.PrivateKey - - // Used to override base API URL. - baseURL string -} - -// getBaseURL returns either the specified base URL or builds the appropriate -// URL based on service, region, and API version. -func (c *Config) getBaseURL(service string) string { - if c.baseURL != "" { - return c.baseURL - } - return fmt.Sprintf(baseURLPattern, service, c.Region, apiVersion) -} - -// LoadConfigsFromFile loads all oracle oci configurations from a file -// (generally ~/.oci/config). -func LoadConfigsFromFile(path string) (map[string]*Config, error) { - if _, err := os.Stat(path); err != nil { - return nil, fmt.Errorf("Oracle OCI config file is missing: %s", path) - } - - cfgFile, err := ini.Load(path) - if err != nil { - err := fmt.Errorf("Failed to parse config file %s: %s", path, err.Error()) - return nil, err - } - - configs := make(map[string]*Config) - - // Load DEFAULT section to populate defaults for all other configs - config, err := loadConfigSection(cfgFile, "DEFAULT", nil) - if err != nil { - return nil, err - } - configs["DEFAULT"] = config - - // Load other sections. - for _, sectionName := range cfgFile.SectionStrings() { - if sectionName == "DEFAULT" { - continue - } - - // Map to Config struct with defaults from DEFAULT section. - config, err := loadConfigSection(cfgFile, sectionName, configs["DEFAULT"]) - if err != nil { - return nil, err - } - configs[sectionName] = config - } - - return configs, nil -} - -// Loads an individual Config object from a ini.Section in the Oracle OCI config -// file. -func loadConfigSection(f *ini.File, sectionName string, config *Config) (*Config, error) { - if config == nil { - config = &Config{} - } - - section, err := f.GetSection(sectionName) - if err != nil { - return nil, fmt.Errorf("Config file does not contain a %s section", sectionName) - } - - if err := section.MapTo(config); err != nil { - return nil, err - } - - config.Key, err = LoadPrivateKey(config) - if err != nil { - return nil, err - } - - return config, err -} - -// LoadPrivateKey loads private key from disk and parses it. 
-func LoadPrivateKey(config *Config) (*rsa.PrivateKey, error) { - // Expand '~' to $HOME - path, err := homedir.Expand(config.KeyFile) - if err != nil { - return nil, err - } - - // Read and parse API signing key - keyContent, err := ioutil.ReadFile(path) - if err != nil { - return nil, err - } - key, err := ParsePrivateKey(keyContent, []byte(config.PassPhrase)) - - return key, err -} - -// ParsePrivateKey parses a PEM encoded array of bytes into an rsa.PrivateKey. -// Attempts to decrypt the PEM encoded array of bytes with the given password -// if the PEM encoded byte array is encrypted. -func ParsePrivateKey(content, password []byte) (*rsa.PrivateKey, error) { - keyBlock, _ := pem.Decode(content) - - if keyBlock == nil { - return nil, errors.New("could not decode PEM private key") - } - - var der []byte - var err error - if x509.IsEncryptedPEMBlock(keyBlock) { - if len(password) < 1 { - return nil, errors.New("encrypted private key but no pass phrase provided") - } - der, err = x509.DecryptPEMBlock(keyBlock, password) - if err != nil { - return nil, err - } - } else { - der = keyBlock.Bytes - } - - if key, err := x509.ParsePKCS1PrivateKey(der); err == nil { - return key, nil - } - - key, err := x509.ParsePKCS8PrivateKey(der) - if err == nil { - switch key := key.(type) { - case *rsa.PrivateKey: - return key, nil - default: - return nil, errors.New("Private key is not an RSA private key") - } - } - return nil, fmt.Errorf("Failed to parse private key :%s", err) -} - -// BaseTestConfig creates the base (DEFAULT) config including a temporary key -// file. -// NOTE: Caller is responsible for removing temporary key file. -func BaseTestConfig() (*ini.File, *os.File, error) { - keyFile, err := generateRSAKeyFile() - if err != nil { - return nil, keyFile, err - } - // Build ini - cfg := ini.Empty() - section, _ := cfg.NewSection("DEFAULT") - section.NewKey("region", "us-ashburn-1") - section.NewKey("tenancy", "ocid1.tenancy.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") - section.NewKey("user", "ocid1.user.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") - section.NewKey("fingerprint", "3c:b6:44:d7:49:1a:ac:bf:de:7d:76:22:a7:f5:df:55") - section.NewKey("key_file", keyFile.Name()) - - return cfg, keyFile, nil -} - -// WriteTestConfig writes a ini.File to a temporary file for use in unit tests. -// NOTE: Caller is responsible for removing temporary file. -func WriteTestConfig(cfg *ini.File) (*os.File, error) { - confFile, err := ioutil.TempFile("", "config_file") - if err != nil { - return nil, err - } - - _, err = cfg.WriteTo(confFile) - if err != nil { - os.Remove(confFile.Name()) - return nil, err - } - - return confFile, nil -} - -// generateRSAKeyFile generates an RSA key file for use in unit tests. -// NOTE: The caller is responsible for deleting the temporary file. 
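// A standalone round trip (illustrative only) of the handling ParsePrivateKey
// above implements for passphrase-protected keys, built from the same stdlib
// helpers: generate a key, encrypt its PKCS#1 PEM block, then decode, decrypt,
// and re-parse it. 2048 bits is used here; the 2014-bit size passed by
// generateRSAKeyFile below appears to be a typo for 2048.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"reflect"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Encrypt the DER-encoded key with AES-256 and a passphrase, the same shape
	// the encrypted-key tests in config_test.go construct.
	block, err := x509.EncryptPEMBlock(rand.Reader, "RSA PRIVATE KEY",
		x509.MarshalPKCS1PrivateKey(priv), []byte("packer"), x509.PEMCipherAES256)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(block)

	// Reverse it the way ParsePrivateKey does: decode, detect the encryption,
	// decrypt with the passphrase, then parse the PKCS#1 DER bytes.
	decoded, _ := pem.Decode(pemBytes)
	if !x509.IsEncryptedPEMBlock(decoded) {
		panic("expected an encrypted PEM block")
	}
	der, err := x509.DecryptPEMBlock(decoded, []byte("packer"))
	if err != nil {
		panic(err)
	}
	key, err := x509.ParsePKCS1PrivateKey(der)
	if err != nil {
		panic(err)
	}

	fmt.Println(reflect.DeepEqual(key.PublicKey, priv.PublicKey)) // true
}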
-func generateRSAKeyFile() (*os.File, error) { - // Create temporary file for the key - f, err := ioutil.TempFile("", "key") - if err != nil { - return nil, err - } - - // Generate key - priv, err := rsa.GenerateKey(rand.Reader, 2014) - if err != nil { - return nil, err - } - - // ASN.1 DER encoded form - privDer := x509.MarshalPKCS1PrivateKey(priv) - privBlk := pem.Block{ - Type: "RSA PRIVATE KEY", - Headers: nil, - Bytes: privDer, - } - - // Write the key out - if _, err := f.Write(pem.EncodeToMemory(&privBlk)); err != nil { - return nil, err - } - - return f, nil -} diff --git a/builder/oracle/oci/client/config_test.go b/builder/oracle/oci/client/config_test.go deleted file mode 100644 index 154fe1ec2..000000000 --- a/builder/oracle/oci/client/config_test.go +++ /dev/null @@ -1,283 +0,0 @@ -package oci - -import ( - "crypto/rand" - "crypto/rsa" - "crypto/x509" - "encoding/asn1" - "encoding/pem" - "os" - "reflect" - "strings" - "testing" -) - -func TestNewConfigMissingFile(t *testing.T) { - // WHEN - _, err := LoadConfigsFromFile("some/invalid/path") - - // THEN - - if err == nil { - t.Error("Expected missing file error") - } -} - -func TestNewConfigDefaultOnly(t *testing.T) { - // GIVEN - - // Get DEFAULT config - cfg, keyFile, err := BaseTestConfig() - defer os.Remove(keyFile.Name()) - - // Write test config to file - f, err := WriteTestConfig(cfg) - if err != nil { - t.Fatal(err) - } - defer os.Remove(f.Name()) // clean up - - // WHEN - - // Load configs - cfgs, err := LoadConfigsFromFile(f.Name()) - if err != nil { - t.Fatal(err) - } - - // THEN - - if _, ok := cfgs["DEFAULT"]; !ok { - t.Fatal("Expected DEFAULT config to exist in map") - } -} - -func TestNewConfigDefaultsPopulated(t *testing.T) { - // GIVEN - - // Get DEFAULT config - cfg, keyFile, err := BaseTestConfig() - defer os.Remove(keyFile.Name()) - - admin := cfg.Section("ADMIN") - admin.NewKey("user", "ocid1.user.oc1..bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb") - admin.NewKey("fingerprint", "11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11") - - // Write test config to file - f, err := WriteTestConfig(cfg) - if err != nil { - t.Fatal(err) - } - defer os.Remove(f.Name()) // clean up - - // WHEN - - cfgs, err := LoadConfigsFromFile(f.Name()) - adminConfig, ok := cfgs["ADMIN"] - - // THEN - - if !ok { - t.Fatal("Expected ADMIN config to exist in map") - } - - if adminConfig.Region != "us-ashburn-1" { - t.Errorf("Expected 'us-ashburn-1', got '%s'", adminConfig.Region) - } -} - -func TestNewConfigDefaultsOverridden(t *testing.T) { - // GIVEN - - // Get DEFAULT config - cfg, keyFile, err := BaseTestConfig() - defer os.Remove(keyFile.Name()) - - admin := cfg.Section("ADMIN") - admin.NewKey("user", "ocid1.user.oc1..bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb") - admin.NewKey("fingerprint", "11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11") - - // Write test config to file - f, err := WriteTestConfig(cfg) - if err != nil { - t.Fatal(err) - } - defer os.Remove(f.Name()) // clean up - - // WHEN - - cfgs, err := LoadConfigsFromFile(f.Name()) - adminConfig, ok := cfgs["ADMIN"] - - // THEN - - if !ok { - t.Fatal("Expected ADMIN config to exist in map") - } - - if adminConfig.Fingerprint != "11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11" { - t.Errorf("Expected fingerprint '11:11:11:11:11:11:11:11:11:11:11:11:11:11:11:11', got '%s'", - adminConfig.Fingerprint) - } -} - -func TestParseEncryptedPrivateKeyValidPassword(t *testing.T) { - // Generate private key - priv, err := rsa.GenerateKey(rand.Reader, 
2014) - if err != nil { - t.Fatalf("Unexpected generating RSA key: %+v", err) - } - publicKey := priv.PublicKey - - // ASN.1 DER encoded form - privDer := x509.MarshalPKCS1PrivateKey(priv) - - blockType := "RSA PRIVATE KEY" - password := []byte("password") - cipherType := x509.PEMCipherAES256 - - // Encrypt priv with password - encryptedPEMBlock, err := x509.EncryptPEMBlock( - rand.Reader, - blockType, - privDer, - password, - cipherType) - if err != nil { - t.Fatalf("Unexpected error encryting PEM block: %+v", err) - } - - // Parse private key - key, err := ParsePrivateKey(pem.EncodeToMemory(encryptedPEMBlock), password) - if err != nil { - t.Fatalf("unexpected error: %+v", err) - } - - // Check we get the same key back - if !reflect.DeepEqual(publicKey, key.PublicKey) { - t.Errorf("expected public key of encrypted and decrypted key to match") - } -} - -func TestParseEncryptedPrivateKeyPKCS8(t *testing.T) { - // Generate private key - priv, err := rsa.GenerateKey(rand.Reader, 2014) - if err != nil { - t.Fatalf("Unexpected generating RSA key: %+v", err) - } - publicKey := priv.PublicKey - - // Implements x509.MarshalPKCS8PrivateKey which is not included in the - // standard library. - pkey := struct { - Version int - PrivateKeyAlgorithm []asn1.ObjectIdentifier - PrivateKey []byte - }{ - Version: 0, - PrivateKeyAlgorithm: []asn1.ObjectIdentifier{{1, 2, 840, 113549, 1, 1, 1}}, - PrivateKey: x509.MarshalPKCS1PrivateKey(priv), - } - privDer, err := asn1.Marshal(pkey) - if err != nil { - t.Fatalf("Unexpected marshaling RSA key: %+v", err) - } - - blockType := "RSA PRIVATE KEY" - password := []byte("password") - cipherType := x509.PEMCipherAES256 - - // Encrypt priv with password - encryptedPEMBlock, err := x509.EncryptPEMBlock( - rand.Reader, - blockType, - privDer, - password, - cipherType) - if err != nil { - t.Fatalf("Unexpected error encryting PEM block: %+v", err) - } - - // Parse private key - key, err := ParsePrivateKey(pem.EncodeToMemory(encryptedPEMBlock), password) - if err != nil { - t.Fatalf("unexpected error: %+v", err) - } - - // Check we get the same key back - if !reflect.DeepEqual(publicKey, key.PublicKey) { - t.Errorf("expected public key of encrypted and decrypted key to match") - } -} - -func TestParseEncryptedPrivateKeyInvalidPassword(t *testing.T) { - // Generate private key - priv, err := rsa.GenerateKey(rand.Reader, 2014) - if err != nil { - t.Fatalf("Unexpected generating RSA key: %+v", err) - } - - // ASN.1 DER encoded form - privDer := x509.MarshalPKCS1PrivateKey(priv) - - blockType := "RSA PRIVATE KEY" - password := []byte("password") - cipherType := x509.PEMCipherAES256 - - // Encrypt priv with password - encryptedPEMBlock, err := x509.EncryptPEMBlock( - rand.Reader, - blockType, - privDer, - password, - cipherType) - if err != nil { - t.Fatalf("Unexpected error encrypting PEM block: %+v", err) - } - - // Parse private key (with wrong password) - _, err = ParsePrivateKey(pem.EncodeToMemory(encryptedPEMBlock), []byte("foo")) - if err == nil { - t.Fatalf("Expected error, got nil") - } - - if !strings.Contains(err.Error(), "decryption password incorrect") { - t.Errorf("Expected error to contain 'decryption password incorrect', got %+v", err) - } -} - -func TestParseEncryptedPrivateKeyInvalidNoPassword(t *testing.T) { - // Generate private key - priv, err := rsa.GenerateKey(rand.Reader, 2014) - if err != nil { - t.Fatalf("Unexpected generating RSA key: %+v", err) - } - - // ASN.1 DER encoded form - privDer := x509.MarshalPKCS1PrivateKey(priv) - - blockType := "RSA 
PRIVATE KEY" - password := []byte("password") - cipherType := x509.PEMCipherAES256 - - // Encrypt priv with password - encryptedPEMBlock, err := x509.EncryptPEMBlock( - rand.Reader, - blockType, - privDer, - password, - cipherType) - if err != nil { - t.Fatalf("Unexpected error encrypting PEM block: %+v", err) - } - - // Parse private key (with wrong password) - _, err = ParsePrivateKey(pem.EncodeToMemory(encryptedPEMBlock), []byte{}) - if err == nil { - t.Fatalf("Expected error, got nil") - } - - if !strings.Contains(err.Error(), "no pass phrase provided") { - t.Errorf("Expected error to contain 'no pass phrase provided', got %+v", err) - } -} diff --git a/builder/oracle/oci/client/error.go b/builder/oracle/oci/client/error.go deleted file mode 100644 index a97112f81..000000000 --- a/builder/oracle/oci/client/error.go +++ /dev/null @@ -1,27 +0,0 @@ -package oci - -import "fmt" - -// APIError encapsulates an error returned from the API -type APIError struct { - Code string `json:"code"` - Message string `json:"message"` -} - -func (e APIError) Error() string { - return fmt.Sprintf("OCI: [%s] '%s'", e.Code, e.Message) -} - -// firstError is a helper function to work out which error to return from calls -// to the API. -func firstError(err error, apiError *APIError) error { - if err != nil { - return err - } - - if apiError != nil && len(apiError.Code) > 0 { - return apiError - } - - return nil -} diff --git a/builder/oracle/oci/client/image.go b/builder/oracle/oci/client/image.go deleted file mode 100644 index 802b4650c..000000000 --- a/builder/oracle/oci/client/image.go +++ /dev/null @@ -1,122 +0,0 @@ -package oci - -import ( - "time" -) - -// ImageService enables communicating with the OCI compute API's instance -// related endpoints. -type ImageService struct { - client *baseClient -} - -// NewImageService creates a new ImageService for communicating with the -// OCI compute API's instance related endpoints. -func NewImageService(s *baseClient) *ImageService { - return &ImageService{ - client: s.New().Path("images/"), - } -} - -// Image details a OCI boot disk image. -type Image struct { - // The OCID of the image originally used to launch the instance. - BaseImageID string `json:"baseImageId,omitempty"` - - // The OCID of the compartment containing the instance you want to use - // as the basis for the image. - CompartmentID string `json:"compartmentId"` - - // Whether instances launched with this image can be used to create new - // images. - CreateImageAllowed bool `json:"createImageAllowed"` - - // A user-friendly name for the image. It does not have to be unique, - // and it's changeable. You cannot use an Oracle-provided image name - // as a custom image name. - DisplayName string `json:"displayName,omitempty"` - - // The OCID of the image. - ID string `json:"id"` - - // Current state of the image. Allowed values are: - // - PROVISIONING - // - AVAILABLE - // - DISABLED - // - DELETED - LifecycleState string `json:"lifecycleState"` - - // The image's operating system (e.g. Oracle Linux). - OperatingSystem string `json:"operatingSystem"` - - // The image's operating system version (e.g. 7.2). - OperatingSystemVersion string `json:"operatingSystemVersion"` - - // The date and time the image was created. - TimeCreated time.Time `json:"timeCreated"` -} - -// GetImageParams are the paramaters available when communicating with the -// GetImage API endpoint. 
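// Illustrative fragment (the function name is invented) of the call pattern the
// services below follow: decode the success body into the resource, decode the
// failure body into an APIError, and let firstError above choose what the caller
// sees: a transport error first, then a populated APIError, then nil.
func exampleGetImage(client *baseClient, id string) (Image, error) {
	image := Image{}
	e := &APIError{}

	// Receive fills image from a 2xx response body and e otherwise; its own
	// error return covers transport-level failures only.
	_, err := client.New().Get(id).Receive(&image, e)
	return image, firstError(err, e)
}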
-type GetImageParams struct { - ID string `url:"imageId"` -} - -// Get returns a single Image -func (s *ImageService) Get(params *GetImageParams) (Image, error) { - image := Image{} - e := &APIError{} - - _, err := s.client.New().Get(params.ID).Receive(&image, e) - err = firstError(err, e) - - return image, err -} - -// CreateImageParams are the parameters available when communicating with -// the CreateImage API endpoint. -type CreateImageParams struct { - CompartmentID string `json:"compartmentId"` - DisplayName string `json:"displayName,omitempty"` - InstanceID string `json:"instanceId"` -} - -// Create creates a new custom image based on a running compute instance. It -// does *not* wait for the imaging process to finish. -func (s *ImageService) Create(params *CreateImageParams) (Image, error) { - image := Image{} - e := &APIError{} - - _, err := s.client.New().Post("").SetBody(params).Receive(&image, &e) - err = firstError(err, e) - - return image, err -} - -// GetResourceState GETs the LifecycleState of the given image id. -func (s *ImageService) GetResourceState(id string) (string, error) { - image, err := s.Get(&GetImageParams{ID: id}) - if err != nil { - return "", err - } - return image.LifecycleState, nil - -} - -// DeleteImageParams are the parameters available when communicating with -// the DeleteImage API endpoint. -type DeleteImageParams struct { - ID string `url:"imageId"` -} - -// Delete deletes an existing custom image. -// NOTE: Deleting an image results in the API endpoint returning 404 on -// subsequent calls. As such deletion can't be waited on with a Waiter. -func (s *ImageService) Delete(params *DeleteImageParams) error { - e := &APIError{} - - _, err := s.client.New().Delete(params.ID).SetBody(params).Receive(nil, e) - err = firstError(err, e) - - return err -} diff --git a/builder/oracle/oci/client/image_test.go b/builder/oracle/oci/client/image_test.go deleted file mode 100644 index ef55cc552..000000000 --- a/builder/oracle/oci/client/image_test.go +++ /dev/null @@ -1,115 +0,0 @@ -package oci - -import ( - "fmt" - "net/http" - "reflect" - "testing" -) - -func TestGetImage(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.image.oc1.phx.a" - path := fmt.Sprintf("/images/%s", id) - mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) { - fmt.Fprintf(w, `{"id":"%s"}`, id) - }) - - image, err := client.Compute.Images.Get(&GetImageParams{ID: id}) - if err != nil { - t.Errorf("Client.Compute.Images.Get() returned error: %v", err) - } - - want := Image{ID: id} - - if !reflect.DeepEqual(image, want) { - t.Errorf("Client.Compute.Images.Get() returned %+v, want %+v", image, want) - } -} - -func TestCreateImage(t *testing.T) { - setup() - defer teardown() - - mux.HandleFunc("/images/", func(w http.ResponseWriter, r *http.Request) { - fmt.Fprint(w, `{"displayName": "go-oci test"}`) - }) - - params := &CreateImageParams{ - CompartmentID: "ocid1.compartment.oc1..a", - DisplayName: "go-oci test image", - InstanceID: "ocid1.image.oc1.phx.a", - } - - image, err := client.Compute.Images.Create(params) - if err != nil { - t.Errorf("Client.Compute.Images.Create() returned error: %v", err) - } - - want := Image{DisplayName: "go-oci test"} - - if !reflect.DeepEqual(image, want) { - t.Errorf("Client.Compute.Images.Create() returned %+v, want %+v", image, want) - } -} - -func TestImageGetResourceState(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.image.oc1.phx.a" - path := fmt.Sprintf("/images/%s", id) - mux.HandleFunc(path, func(w 
http.ResponseWriter, r *http.Request) { - fmt.Fprint(w, `{"LifecycleState": "AVAILABLE"}`) - }) - - state, err := client.Compute.Images.GetResourceState(id) - if err != nil { - t.Errorf("Client.Compute.Images.GetResourceState() returned error: %v", err) - } - - want := "AVAILABLE" - if state != want { - t.Errorf("Client.Compute.Images.GetResourceState() returned %+v, want %+v", state, want) - } -} - -func TestImageGetResourceStateInvalidID(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.image.oc1.phx.a" - path := fmt.Sprintf("/images/%s", id) - mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(http.StatusNotFound) - fmt.Fprint(w, `{"code": "NotAuthorizedOrNotFound"}`) - }) - - state, err := client.Compute.Images.GetResourceState(id) - if err == nil { - t.Errorf("Client.Compute.Images.GetResourceState() expected error, got %v", state) - } - - want := &APIError{Code: "NotAuthorizedOrNotFound"} - if !reflect.DeepEqual(err, want) { - t.Errorf("Client.Compute.Images.GetResourceState() errored with %+v, want %+v", err, want) - } -} - -func TestDeleteInstance(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.image.oc1.phx.a" - path := fmt.Sprintf("/images/%s", id) - mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(http.StatusNoContent) - }) - - err := client.Compute.Images.Delete(&DeleteImageParams{ID: id}) - if err != nil { - t.Errorf("Client.Compute.Images.Delete() returned error: %v", err) - } -} diff --git a/builder/oracle/oci/client/instance.go b/builder/oracle/oci/client/instance.go deleted file mode 100644 index 30cdbd50b..000000000 --- a/builder/oracle/oci/client/instance.go +++ /dev/null @@ -1,129 +0,0 @@ -package oci - -import ( - "time" -) - -// InstanceService enables communicating with the OCI compute API's instance -// related endpoints. -type InstanceService struct { - client *baseClient -} - -// NewInstanceService creates a new InstanceService for communicating with the -// OCI compute API's instance related endpoints. -func NewInstanceService(s *baseClient) *InstanceService { - return &InstanceService{ - client: s.New().Path("instances/"), - } -} - -// Instance details a OCI compute instance. -type Instance struct { - // The Availability Domain the instance is running in. - AvailabilityDomain string `json:"availabilityDomain"` - - // The OCID of the compartment that contains the instance. - CompartmentID string `json:"compartmentId"` - - // A user-friendly name. Does not have to be unique, and it's changeable. - DisplayName string `json:"displayName,omitempty"` - - // The OCID of the instance. - ID string `json:"id"` - - // The image used to boot the instance. - ImageID string `json:"imageId,omitempty"` - - // The current state of the instance. Allowed values: - // - PROVISIONING - // - RUNNING - // - STARTING - // - STOPPING - // - STOPPED - // - CREATING_IMAGE - // - TERMINATING - // - TERMINATED - LifecycleState string `json:"lifecycleState"` - - // Custom metadata that you provide. - Metadata map[string]string `json:"metadata,omitempty"` - - // The region that contains the Availability Domain the instance is running in. - Region string `json:"region"` - - // The shape of the instance. The shape determines the number of CPUs - // and the amount of memory allocated to the instance. - Shape string `json:"shape"` - - // The date and time the instance was created. 
- TimeCreated time.Time `json:"timeCreated"` -} - -// GetInstanceParams are the paramaters available when communicating with the -// GetInstance API endpoint. -type GetInstanceParams struct { - ID string `url:"instanceId,omitempty"` -} - -// Get returns a single Instance -func (s *InstanceService) Get(params *GetInstanceParams) (Instance, error) { - instance := Instance{} - e := &APIError{} - - _, err := s.client.New().Get(params.ID).Receive(&instance, e) - err = firstError(err, e) - - return instance, err -} - -// LaunchInstanceParams are the parameters available when communicating with -// the LunchInstance API endpoint. -type LaunchInstanceParams struct { - AvailabilityDomain string `json:"availabilityDomain,omitempty"` - CompartmentID string `json:"compartmentId,omitempty"` - DisplayName string `json:"displayName,omitempty"` - ImageID string `json:"imageId,omitempty"` - Metadata map[string]string `json:"metadata,omitempty"` - OPCiPXEScript string `json:"opcIpxeScript,omitempty"` - Shape string `json:"shape,omitempty"` - SubnetID string `json:"subnetId,omitempty"` -} - -// Launch creates a new OCI compute instance. It does *not* wait for the -// instance to boot. -func (s *InstanceService) Launch(params *LaunchInstanceParams) (Instance, error) { - instance := &Instance{} - e := &APIError{} - - _, err := s.client.New().Post("").SetBody(params).Receive(instance, e) - err = firstError(err, e) - - return *instance, err -} - -// TerminateInstanceParams are the parameters available when communicating with -// the TerminateInstance API endpoint. -type TerminateInstanceParams struct { - ID string `url:"instanceId,omitempty"` -} - -// Terminate terminates a running OCI compute instance. -// instance to boot. -func (s *InstanceService) Terminate(params *TerminateInstanceParams) error { - e := &APIError{} - - _, err := s.client.New().Delete(params.ID).SetBody(params).Receive(nil, e) - err = firstError(err, e) - - return err -} - -// GetResourceState GETs the LifecycleState of the given instance id. 
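// Hedged sketch (invented helper, not part of the removed package) of the flow
// these endpoints support end to end: launch an instance, poll it with the
// Waiter defined in waiters.go below until it reaches RUNNING, then terminate
// it. The OCIDs and shape are placeholder values copied from the tests.
func exampleLaunchAndTerminate(client *Client) error {
	instance, err := client.Compute.Instances.Launch(&LaunchInstanceParams{
		AvailabilityDomain: "aaaa:PHX-AD-1",
		CompartmentID:      "ocid1.compartment.oc1..a",
		ImageID:            "ocid1.image.oc1.phx.a",
		Shape:              "VM.Standard1.1",
		SubnetID:           "ocid1.subnet.oc1.phx.a",
	})
	if err != nil {
		return err
	}

	// Block until the instance has left its boot states and is RUNNING.
	w := NewWaiter()
	if err := w.WaitForResourceToReachState(client.Compute.Instances, instance.ID,
		[]string{"PROVISIONING", "STARTING"}, "RUNNING"); err != nil {
		return err
	}

	// (provisioning and imaging steps would run here)

	return client.Compute.Instances.Terminate(&TerminateInstanceParams{ID: instance.ID})
}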
-func (s *InstanceService) GetResourceState(id string) (string, error) { - instance, err := s.Get(&GetInstanceParams{ID: id}) - if err != nil { - return "", err - } - return instance.LifecycleState, nil -} diff --git a/builder/oracle/oci/client/instance_test.go b/builder/oracle/oci/client/instance_test.go deleted file mode 100644 index e45b707af..000000000 --- a/builder/oracle/oci/client/instance_test.go +++ /dev/null @@ -1,96 +0,0 @@ -package oci - -import ( - "fmt" - "net/http" - "reflect" - "testing" -) - -func TestGetInstance(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.instance.oc1.phx.a" - path := fmt.Sprintf("/instances/%s", id) - mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) { - fmt.Fprintf(w, `{"id":"%s"}`, id) - }) - - instance, err := client.Compute.Instances.Get(&GetInstanceParams{ID: id}) - if err != nil { - t.Errorf("Client.Compute.Instances.Get() returned error: %v", err) - } - - want := Instance{ID: id} - - if !reflect.DeepEqual(instance, want) { - t.Errorf("Client.Compute.Instances.Get() returned %+v, want %+v", instance, want) - } -} - -func TestLaunchInstance(t *testing.T) { - setup() - defer teardown() - - mux.HandleFunc("/instances/", func(w http.ResponseWriter, r *http.Request) { - fmt.Fprint(w, `{"displayName": "go-oci test"}`) - }) - - params := &LaunchInstanceParams{ - AvailabilityDomain: "aaaa:PHX-AD-1", - CompartmentID: "ocid1.compartment.oc1..a", - DisplayName: "go-oci test", - ImageID: "ocid1.image.oc1.phx.a", - Shape: "VM.Standard1.1", - SubnetID: "ocid1.subnet.oc1.phx.a", - } - - instance, err := client.Compute.Instances.Launch(params) - if err != nil { - t.Errorf("Client.Compute.Instances.Launch() returned error: %v", err) - } - - want := Instance{DisplayName: "go-oci test"} - - if !reflect.DeepEqual(instance, want) { - t.Errorf("Client.Compute.Instances.Launch() returned %+v, want %+v", instance, want) - } -} - -func TestTerminateInstance(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.instance.oc1.phx.a" - path := fmt.Sprintf("/instances/%s", id) - mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) { - w.WriteHeader(http.StatusNoContent) - }) - - err := client.Compute.Instances.Terminate(&TerminateInstanceParams{ID: id}) - if err != nil { - t.Errorf("Client.Compute.Instances.Terminate() returned error: %v", err) - } -} - -func TestInstanceGetResourceState(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.instance.oc1.phx.a" - path := fmt.Sprintf("/instances/%s", id) - mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) { - fmt.Fprint(w, `{"LifecycleState": "RUNNING"}`) - }) - - state, err := client.Compute.Instances.GetResourceState(id) - if err != nil { - t.Errorf("Client.Compute.Instances.GetResourceState() returned error: %v", err) - } - - want := "RUNNING" - if state != want { - t.Errorf("Client.Compute.Instances.GetResourceState() returned %+v, want %+v", state, want) - } -} diff --git a/builder/oracle/oci/client/transport.go b/builder/oracle/oci/client/transport.go deleted file mode 100644 index 79da20509..000000000 --- a/builder/oracle/oci/client/transport.go +++ /dev/null @@ -1,116 +0,0 @@ -package oci - -import ( - "bytes" - "crypto" - "crypto/rand" - "crypto/rsa" - "crypto/sha256" - "encoding/base64" - "fmt" - "io" - "net/http" - "strconv" - "strings" - "time" -) - -type nopCloser struct { - io.Reader -} - -func (nopCloser) Close() error { - return nil -} - -// Transport adds OCI signature authentication to each outgoing request. 
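// For orientation (mirrors NewClient in client.go above): this Transport is not
// used on its own; it wraps another RoundTripper and is installed into a plain
// *http.Client, so every request sent through that client picks up the signed
// Authorization header produced by RoundTrip below.
func exampleSigningClient(config *Config) *http.Client {
	return &http.Client{
		Transport: NewTransport(http.DefaultTransport, config),
	}
}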
-type Transport struct { - transport http.RoundTripper - config *Config -} - -// NewTransport creates a new Transport to add OCI signature authentication -// to each outgoing request. -func NewTransport(transport http.RoundTripper, config *Config) *Transport { - return &Transport{ - transport: transport, - config: config, - } -} - -func (t *Transport) RoundTrip(req *http.Request) (*http.Response, error) { - var buf *bytes.Buffer - - if req.Body != nil { - buf = new(bytes.Buffer) - buf.ReadFrom(req.Body) - req.Body = nopCloser{buf} - } - if req.Header.Get("date") == "" { - req.Header.Set("date", time.Now().UTC().Format(http.TimeFormat)) - } - if req.Header.Get("content-type") == "" { - req.Header.Set("content-type", "application/json") - } - if req.Header.Get("accept") == "" { - req.Header.Set("accept", "application/json") - } - if req.Header.Get("host") == "" { - req.Header.Set("host", req.URL.Host) - } - - var signheaders []string - if (req.Method == "PUT" || req.Method == "POST") && buf != nil { - signheaders = []string{"(request-target)", "host", "date", - "content-length", "content-type", "x-content-sha256"} - - if req.Header.Get("content-length") == "" { - req.Header.Set("content-length", strconv.Itoa(buf.Len())) - } - - hasher := sha256.New() - hasher.Write(buf.Bytes()) - hash := hasher.Sum(nil) - req.Header.Set("x-content-sha256", base64.StdEncoding.EncodeToString(hash)) - } else { - signheaders = []string{"date", "host", "(request-target)"} - } - - var signbuffer bytes.Buffer - for idx, header := range signheaders { - signbuffer.WriteString(header) - signbuffer.WriteString(": ") - - if header == "(request-target)" { - signbuffer.WriteString(strings.ToLower(req.Method)) - signbuffer.WriteString(" ") - signbuffer.WriteString(req.URL.RequestURI()) - } else { - signbuffer.WriteString(req.Header.Get(header)) - } - - if idx < len(signheaders)-1 { - signbuffer.WriteString("\n") - } - } - - h := sha256.New() - h.Write(signbuffer.Bytes()) - digest := h.Sum(nil) - signature, err := rsa.SignPKCS1v15(rand.Reader, t.config.Key, crypto.SHA256, digest) - if err != nil { - return nil, err - } - - authHeader := fmt.Sprintf("Signature headers=\"%s\","+ - "keyId=\"%s/%s/%s\","+ - "algorithm=\"rsa-sha256\","+ - "signature=\"%s\","+ - "version=\"1\"", - strings.Join(signheaders, " "), - t.config.Tenancy, t.config.User, t.config.Fingerprint, - base64.StdEncoding.EncodeToString(signature)) - req.Header.Add("Authorization", authHeader) - - return t.transport.RoundTrip(req) -} diff --git a/builder/oracle/oci/client/transport_test.go b/builder/oracle/oci/client/transport_test.go deleted file mode 100644 index 2299f3a0e..000000000 --- a/builder/oracle/oci/client/transport_test.go +++ /dev/null @@ -1,156 +0,0 @@ -package oci - -import ( - "net/http" - "strings" - "testing" -) - -const testKey = `-----BEGIN RSA PRIVATE KEY----- -MIIEpQIBAAKCAQEAyLnyzmYj0dZuDo2nclIdEyLZrFZLtw5xFldWpCUl5W3SxKDL -iIgwxpSO75Yf++Rzqc5j6S5bpIrdca6AwVXCNmjjxMPO7vLLm4l4IUOZMv5FqKaC -I2plFz9uBkzGscnYnMbsDA430082E07lYpNv1xy8JwpbrIsqIMh4XCKci/Od5sLR -kEicmOpQK42FGRTQjjmQoWtv+9XED+vYTRL0AxQEC/6i/E7yssFXZ+fpHSKWeKTQ -K/1Fc4pZ1zNzJcDXGuweISx/QMLz78TAPH5OBq/EQzHKSpKvfnWFRyBHne8pwuN8 -8wzbbD+/7OFjz28jNSERVJvfYe3X1k69IWMNGwIDAQABAoIBAQCZhcdU38AzxSrG -DMfuYymDslsEObiNWQlbig9lWlhCwx26cDVbxrZvm747NvpdgVyJmqbF+UP0dJVs -Voh51qrFTLIwk4bZMXBTFPCBmJ865knG9RuCFOUew8/WF7C82GHJf0eY7OL7xpDY -cbZ2D8gxofOydHSrYoElM88CwSI00xPCbBKEMrBO94oXC8yfp2bmV6bNhVXwFDEM -qda7M6jVAsBrTOzxUF5VdUUU/MLsu2cCk/ap1zer2Bml9Afk1bMeGJ3XDQgol0pS 
-CLxaGczpSNVMF9+pjA5sFHR5rmcl0b/kC9HsgOJGhLOimtS94O64dSdWifgsjf6+ -fhT2SMiRAoGBAOUDwkdzNqQfvS+qrP2ULpB4vt7MZ70rDGmyMDLZ1VWgcm2cDIad -b7MkFG6GCa48mKkFXK9mOPyq8ELoTjZo2p+relEqf49BpaNxr+cp11pX7g0AkzCa -a8LwdOOUW/poqYl2xLuw9Rz6ky6ybzatMvCwpQCwnbAdABIVxz4oQKHpAoGBAOBg -3uYC/ynGdF9gJTfdS5XPYoLcKKRRspBZrvvDHaWyBnanm5KYwDAZPzLQMqcpyPeo -5xgwMmtNlc6lKKyGkhSLNCV+eO3yAx1h/cq7ityvMS7u6o5sq+/bvtEnbUPYbEtk -AhVD7/w5Yyzzi4beiQxDKe0q1mvUAH56aGqJivBjAoGBALmUMTPbDhUzTwg4Y1Rd -ZtpVrj43H31wS+++oEYktTZc/T0LLi9Llr9w5kmlvmR94CtfF/ted6FwF5/wRajb -kQXAXC83pAR/au0mbCeDhWpFRLculxfUmqxuVBozF9G0TGYDY2rA+++OsgQuPebt -tRDL4/nKJQ4Ygf0lvr4EulM5AoGBALoIdyabu3WmfhwJujH8P8wA+ztmUCgVOIio -YwWIe49C8Er2om1ESqxWcmit6CFi6qY0Gw6Z/2OqGxgPJY8NsBZqaBziJF+cdWqq -MWMiZXqdopi4LC9T+KZROn9tQhGrYfaL/5IkFti3t/uwHbH/1f8dvKhQCSGzz4kN -8n7KdTDjAoGAKh8XdIOQlThFK108VT2yp4BGZXEOvWrR19DLbuUzHVrEX+Bd+uFo -Ruk9iKEH7PSnlRnIvWc1y9qN7fUi5OR3LvQNGlXHyUl6CtmH3/b064bBKudC+XTn -VBelcIoGpH7Dn9I6pKUFauJz1TSbQCIjYGNqrjyzLtG+lH/gy5q4xs8= ------END RSA PRIVATE KEY-----` - -type testTarget struct { - CapturedReq *http.Request - Response *http.Response -} - -func (target *testTarget) RoundTrip(req *http.Request) (*http.Response, error) { - target.CapturedReq = req - return target.Response, nil -} - -func testReq(method string, url string, body string, headers map[string]string) *http.Request { - req, err := http.NewRequest(method, url, strings.NewReader(body)) - if err != nil { - panic(err.Error()) - } - - for k, v := range headers { - req.Header.Add(k, v) - } - return req -} - -var KnownGoodCases = []struct { - name string - inputRequest func() *http.Request - expectedHeaders map[string]string -}{ - { - name: "Simple GET", - inputRequest: func() *http.Request { - return testReq("GET", "https://example.com/", "", map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT"}) - }, - expectedHeaders: map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT", - "content-type": "application/json", - "host": "example.com", - "accept": "application/json", - "Authorization": "Signature headers=\"date host (request-target)\",keyId=\"tenant/testuser/3c:b6:44:d7:49:1a:ac:bf:de:7d:76:22:a7:f5:df:55\",algorithm=\"rsa-sha256\",signature=\"UMw/FImQYZ5JBpfYcR9YN72lhupGl5yS+522NS9glLodU9f4oKRqaocpGdSUSRhhSDKxIx01rV547/HemJ6QqEPaJJuDQPXsGthokWMU2DBGyaMAqhLClgCJiRQMwpg4rdL2tETzkM3wy6UN+I52RYoNSdsnat2ZArCkfl8dIl9ydcwD8/+BqB8d2wyaAIS4iagdPKLAC/Mu9OzyUPOXQhYGYsoEdOowOUkHOlob65PFrlHmKJDdjEF3MDcygEpApItf4iUEloP5bjixAbZEVpj3HLQ5uaPx9m+RsLzYMeO0adE0wOv2YNmwZrExGhXh1BpTU33m5amHeUBxSaG+2A==\",version=\"1\"", - }, - }, - { - name: "Simple PUT request", - inputRequest: func() *http.Request { - return testReq("PUT", "https://example.com/", "Some Content", map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT"}) - }, - expectedHeaders: map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT", - "content-type": "application/json", - "content-length": "12", - "x-content-sha256": "lQ8fsURxamLtHxnwTYqd3MNYadJ4ZB/U9yQBKzu/fXA=", - "accept": "application/json", - "Authorization": "Signature headers=\"(request-target) host date content-length content-type 
x-content-sha256\",keyId=\"tenant/testuser/3c:b6:44:d7:49:1a:ac:bf:de:7d:76:22:a7:f5:df:55\",algorithm=\"rsa-sha256\",signature=\"FHyPt4PE2HGH+iftzcViB76pCJ2R9+DdTCo1Ss4mH4KHQJdyQtPsCpe6Dc19zie6cRr6dsenk21yYnncic8OwZhII8DULj2//qLFGmgFi84s7LJqMQ/COiP7O9KtCN+U8MMt4PV7ZDsiGFn3/8EUJ1wxYscxSIB19S1NpuEL062JgGfkqxTkTPd7V3Xh1NlmUUtQrAMR3l56k1iV0zXY9Uw0CjWYjueMP0JUmkO7zycYAVBrx7Q8wkmejlyD7yFrAnObyEsMm9cIL9IcruWFHeCHFxRLslw7AoLxibAm2Dc9EROuvCK2UkUp8AFkE+QyYDMrrSm1NLBMWdnYqdickA==\",version=\"1\"", - }, - }, - { - name: "Simple POST request", - inputRequest: func() *http.Request { - return testReq("POST", "https://example.com/", "Some Content", map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT"}) - }, - expectedHeaders: map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT", - "content-type": "application/json", - "content-length": "12", - "x-content-sha256": "lQ8fsURxamLtHxnwTYqd3MNYadJ4ZB/U9yQBKzu/fXA=", - "accept": "application/json", - "Authorization": "Signature headers=\"(request-target) host date content-length content-type x-content-sha256\",keyId=\"tenant/testuser/3c:b6:44:d7:49:1a:ac:bf:de:7d:76:22:a7:f5:df:55\",algorithm=\"rsa-sha256\",signature=\"WzGIoySkjqydwabMTxjVs05UBu0hThAEBzVs7HbYO45o2XpaoqGiNX67mNzs1PeYrGHpJp8+Ysoz66PChWV/1trxuTU92dQ/FgwvcwBRy5dQvdLkjWCZihNunSk4gt9652w6zZg/ybLon0CFbLRnlanDJDX9BgR3ttuTxf30t5qr2A4fnjFF4VjaU/CzE13cNfaWftjSd+xNcla2sbArF3R0+CEEb0xZEPzTyjjjkyvXdaPZwEprVn8IDmdJvLmRP4EniAPxE1EZIhd712M5ondQkR4/WckM44/hlKDeXGFb4y+QnU02i4IWgOWs3dh2tuzS1hp1zfq7qgPbZ4hp0A==\",version=\"1\"", - }, - }, - - { - name: "Simple DELETE", - inputRequest: func() *http.Request { - return testReq("DELETE", "https://example.com/", "Some Content", map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT"}) - }, - expectedHeaders: map[string]string{ - "date": "Mon, 26 Sep 2016 11:04:22 GMT", - "content-type": "application/json", - "accept": "application/json", - "Authorization": "Signature headers=\"date host (request-target)\",keyId=\"tenant/testuser/3c:b6:44:d7:49:1a:ac:bf:de:7d:76:22:a7:f5:df:55\",algorithm=\"rsa-sha256\",signature=\"Kj4YSpONZG1cibLbNgxIp4VoS5+80fsB2Fh2Ue28+QyXq4wwrJpMP+8jEupz1yTk1SNPYuxsk7lNOgtI6G1Hq0YJJVum74j46sUwRWe+f08tMJ3c9J+rrzLfpIrakQ8PaudLhHU0eK5kuTZme1dCwRWXvZq3r5IqkGot/OGMabKpBygRv9t0i5ry+bTslSjMqafTWLosY9hgIiGrXD+meB5tpyn+gPVYc//Hc/C7uNNgLJIMk5DKVa4U0YnoY3ojafZTXZQQNGRn2NDMcZUX3f3nJlUIfiZRiOCTkbPwx/fWb4MZtYaEsY5OPficbJRvfOBxSG1wjX+8rgO7ijhMAA==\",version=\"1\"", - }, - }, -} - -func TestKnownGoodRequests(t *testing.T) { - pKey, err := ParsePrivateKey([]byte(testKey), []byte{}) - if err != nil { - t.Fatalf("Failed to parse test key: %s", err.Error()) - } - - config := &Config{ - Key: pKey, - User: "testuser", - Tenancy: "tenant", - Fingerprint: "3c:b6:44:d7:49:1a:ac:bf:de:7d:76:22:a7:f5:df:55", - } - - expectedResponse := &http.Response{} - for _, tt := range KnownGoodCases { - targetBackend := &testTarget{Response: expectedResponse} - target := NewTransport(targetBackend, config) - - _, err = target.RoundTrip(tt.inputRequest()) - if err != nil { - t.Fatalf("%s: Failed to handle request %s", tt.name, err.Error()) - } - - sentReq := targetBackend.CapturedReq - - for header, val := range tt.expectedHeaders { - if sentReq.Header.Get(header) != val { - t.Fatalf("%s: Header mismatch in responnse,\n\t expecting \"%s\"\n\t got \"%s\"", tt.name, val, sentReq.Header.Get(header)) - } - } - } - -} diff --git a/builder/oracle/oci/client/vnic.go b/builder/oracle/oci/client/vnic.go deleted file mode 100644 index 31b37ec3e..000000000 --- 
a/builder/oracle/oci/client/vnic.go +++ /dev/null @@ -1,47 +0,0 @@ -package oci - -import ( - "time" -) - -// VNICService enables communicating with the OCI compute API's VNICs -// endpoint. -type VNICService struct { - client *baseClient -} - -// NewVNICService creates a new VNICService for communicating with the -// OCI compute API's instance related endpoints. -func NewVNICService(s *baseClient) *VNICService { - return &VNICService{client: s.New().Path("vnics/")} -} - -// VNIC - a virtual network interface card. -type VNIC struct { - AvailabilityDomain string `json:"availabilityDomain"` - CompartmentID string `json:"compartmentId"` - DisplayName string `json:"displayName,omitempty"` - ID string `json:"id"` - LifecycleState string `json:"lifecycleState"` - PrivateIP string `json:"privateIp"` - PublicIP string `json:"publicIp"` - SubnetID string `json:"subnetId"` - TimeCreated time.Time `json:"timeCreated"` -} - -// GetVNICParams are the paramaters available when communicating with the -// ListVNICs API endpoint. -type GetVNICParams struct { - ID string `url:"vnicId"` -} - -// Get returns an individual VNIC. -func (s *VNICService) Get(params *GetVNICParams) (VNIC, error) { - VNIC := &VNIC{} - e := &APIError{} - - _, err := s.client.New().Get(params.ID).Receive(VNIC, e) - err = firstError(err, e) - - return *VNIC, err -} diff --git a/builder/oracle/oci/client/vnic_attachment.go b/builder/oracle/oci/client/vnic_attachment.go deleted file mode 100644 index 3458f7da1..000000000 --- a/builder/oracle/oci/client/vnic_attachment.go +++ /dev/null @@ -1,52 +0,0 @@ -package oci - -import ( - "time" -) - -// VNICAttachmentService enables communicating with the OCI compute API's VNIC -// attachment endpoint. -type VNICAttachmentService struct { - client *baseClient -} - -// NewVNICAttachmentService creates a new VNICAttachmentService for communicating with the -// OCI compute API's instance related endpoints. -func NewVNICAttachmentService(s *baseClient) *VNICAttachmentService { - return &VNICAttachmentService{ - client: s.New().Path("vnicAttachments/"), - } -} - -// VNICAttachment details the attachment of a VNIC to a OCI instance. -type VNICAttachment struct { - AvailabilityDomain string `json:"availabilityDomain"` - CompartmentID string `json:"compartmentId"` - DisplayName string `json:"displayName,omitempty"` - ID string `json:"id"` - InstanceID string `json:"instanceId"` - LifecycleState string `json:"lifecycleState"` - SubnetID string `json:"subnetId"` - TimeCreated time.Time `json:"timeCreated"` - VNICID string `json:"vnicId"` -} - -// ListVnicAttachmentsParams are the paramaters available when communicating -// with the ListVnicAttachments API endpoint. -type ListVnicAttachmentsParams struct { - AvailabilityDomain string `url:"availabilityDomain,omitempty"` - CompartmentID string `url:"compartmentId"` - InstanceID string `url:"instanceId,omitempty"` - VNICID string `url:"vnicId,omitempty"` -} - -// List returns an array of VNICAttachments. 
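// Hypothetical helper (name invented) showing how the two VNIC services combine,
// e.g. when a builder needs an address to connect to: list the instance's VNIC
// attachments, then fetch the first attached VNIC and read its PublicIP or
// PrivateIP field.
func exampleInstanceIP(client *Client, instanceID string) (string, error) {
	attachments, err := client.Compute.VNICAttachments.List(
		&ListVnicAttachmentsParams{InstanceID: instanceID})
	if err != nil {
		return "", err
	}
	if len(attachments) == 0 {
		return "", fmt.Errorf("instance %s has no VNIC attachments", instanceID)
	}

	vnic, err := client.Compute.VNICs.Get(&GetVNICParams{ID: attachments[0].VNICID})
	if err != nil {
		return "", err
	}
	return vnic.PublicIP, nil
}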
-func (s *VNICAttachmentService) List(params *ListVnicAttachmentsParams) ([]VNICAttachment, error) { - vnicAttachments := new([]VNICAttachment) - e := new(APIError) - - _, err := s.client.New().Get("").QueryStruct(params).Receive(vnicAttachments, e) - err = firstError(err, e) - - return *vnicAttachments, err -} diff --git a/builder/oracle/oci/client/vnic_attachment_test.go b/builder/oracle/oci/client/vnic_attachment_test.go deleted file mode 100644 index 704423fb0..000000000 --- a/builder/oracle/oci/client/vnic_attachment_test.go +++ /dev/null @@ -1,31 +0,0 @@ -package oci - -import ( - "fmt" - "net/http" - "reflect" - "testing" -) - -func TestListVNICAttachments(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.image.oc1.phx.a" - mux.HandleFunc("/vnicAttachments/", func(w http.ResponseWriter, r *http.Request) { - fmt.Fprintf(w, `[{"id":"%s"}]`, id) - }) - - params := &ListVnicAttachmentsParams{InstanceID: id} - - vnicAttachment, err := client.Compute.VNICAttachments.List(params) - if err != nil { - t.Errorf("Client.Compute.VNICAttachments.List() returned error: %v", err) - } - - want := []VNICAttachment{{ID: id}} - - if !reflect.DeepEqual(vnicAttachment, want) { - t.Errorf("Client.Compute.VNICAttachments.List() returned %+v, want %+v", vnicAttachment, want) - } -} diff --git a/builder/oracle/oci/client/vnic_test.go b/builder/oracle/oci/client/vnic_test.go deleted file mode 100644 index c208db5c4..000000000 --- a/builder/oracle/oci/client/vnic_test.go +++ /dev/null @@ -1,29 +0,0 @@ -package oci - -import ( - "fmt" - "net/http" - "reflect" - "testing" -) - -func TestGetVNIC(t *testing.T) { - setup() - defer teardown() - - id := "ocid1.vnic.oc1.phx.a" - path := fmt.Sprintf("/vnics/%s", id) - mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) { - fmt.Fprintf(w, `{"id": "%s"}`, id) - }) - - vnic, err := client.Compute.VNICs.Get(&GetVNICParams{ID: id}) - if err != nil { - t.Errorf("Client.Compute.VNICs.Get() returned error: %v", err) - } - - want := &VNIC{ID: id} - if reflect.DeepEqual(vnic, want) { - t.Errorf("Client.Compute.VNICs.Get() returned %+v, want %+v", vnic, want) - } -} diff --git a/builder/oracle/oci/client/waiters.go b/builder/oracle/oci/client/waiters.go deleted file mode 100644 index bb9ca1c19..000000000 --- a/builder/oracle/oci/client/waiters.go +++ /dev/null @@ -1,59 +0,0 @@ -package oci - -import ( - "fmt" - "time" -) - -const ( - defaultWaitDurationMS = 5000 - defaultMaxRetries = 0 -) - -type Waiter struct { - WaitDurationMS int - MaxRetries int -} - -type WaitableService interface { - GetResourceState(id string) (string, error) -} - -func stringSliceContains(slice []string, value string) bool { - for _, elem := range slice { - if elem == value { - return true - } - } - return false -} - -// NewWaiter creates a waiter with default wait duration and unlimited retry -// operations. -func NewWaiter() *Waiter { - return &Waiter{WaitDurationMS: defaultWaitDurationMS, MaxRetries: defaultMaxRetries} -} - -// WaitForResourceToReachState polls a resource that implements WaitableService -// repeatedly until it reaches a known state or fails if it reaches an -// unexpected state. The duration of the interval and number of polls is -// determined by the Waiter configuration. 
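// Behavioural sketch of the waiter below, using a throwaway WaitableService
// (both names here are invented): a state listed in waitStates triggers another
// poll after WaitDurationMS milliseconds, the terminal state ends the wait
// successfully, any other state is an error, and MaxRetries == 0 polls
// indefinitely.
type fakeResource struct {
	states []string
	calls  int
}

func (f *fakeResource) GetResourceState(id string) (string, error) {
	state := f.states[f.calls]
	if f.calls < len(f.states)-1 {
		f.calls++
	}
	return state, nil
}

func exampleWait() error {
	w := &Waiter{WaitDurationMS: 10, MaxRetries: 5}
	svc := &fakeResource{states: []string{"PROVISIONING", "PROVISIONING", "AVAILABLE"}}

	// Returns nil once AVAILABLE is observed; errors if an unexpected state
	// appears or the five retries run out first.
	return w.WaitForResourceToReachState(svc, "ocid1.image.oc1.phx.a",
		[]string{"PROVISIONING"}, "AVAILABLE")
}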
-func (w *Waiter) WaitForResourceToReachState(svc WaitableService, id string, waitStates []string, terminalState string) error { - for i := 0; w.MaxRetries == 0 || i < w.MaxRetries; i++ { - state, err := svc.GetResourceState(id) - if err != nil { - return err - } - - if stringSliceContains(waitStates, state) { - time.Sleep(time.Duration(w.WaitDurationMS) * time.Millisecond) - continue - } else if state == terminalState { - return nil - } - - return fmt.Errorf("Unexpected resource state %s, expecting a waiting state %s or terminal state %s ", state, waitStates, terminalState) - } - - return fmt.Errorf("Maximum number of retries (%d) exceeded; resource did not reach state %s", w.MaxRetries, terminalState) -} diff --git a/builder/oracle/oci/client/waiters_test.go b/builder/oracle/oci/client/waiters_test.go deleted file mode 100644 index ce52b7922..000000000 --- a/builder/oracle/oci/client/waiters_test.go +++ /dev/null @@ -1,80 +0,0 @@ -package oci - -import ( - "errors" - "fmt" - "testing" -) - -const ( - ValidID = "ID" -) - -type testWaitSvc struct { - states []string - idx int - err error -} - -func (tw *testWaitSvc) GetResourceState(id string) (string, error) { - if id != ValidID { - return "", fmt.Errorf("Invalid id %s", id) - } - if tw.err != nil { - return "", tw.err - } - - if tw.idx >= len(tw.states) { - panic("Invalid test state") - } - state := tw.states[tw.idx] - tw.idx++ - return state, nil -} - -func TestReturnsWhenWaitStateIsReachedImmediately(t *testing.T) { - ws := &testWaitSvc{states: []string{"OK"}} - w := NewWaiter() - err := w.WaitForResourceToReachState(ws, ValidID, []string{}, "OK") - if err != nil { - t.Errorf("Failed to reach expected state, got %s", err) - } -} - -func TestReturnsWhenResourceWaitsInValidWaitingState(t *testing.T) { - w := &Waiter{WaitDurationMS: 1, MaxRetries: defaultMaxRetries} - ws := &testWaitSvc{states: []string{"WAITING", "OK"}} - err := w.WaitForResourceToReachState(ws, ValidID, []string{"WAITING"}, "OK") - if err != nil { - t.Errorf("Failed to reach expected state, got %s", err) - } -} - -func TestPropagatesErrorFromGetter(t *testing.T) { - w := NewWaiter() - ws := &testWaitSvc{states: []string{}, err: errors.New("ERROR")} - err := w.WaitForResourceToReachState(ws, ValidID, []string{"WAITING"}, "OK") - if err != ws.err { - t.Errorf("Expected error from getter got %s", err) - } -} - -func TestReportsInvalidTransitionStateAsError(t *testing.T) { - w := NewWaiter() - tw := &testWaitSvc{states: []string{"UNKNOWN_STATE"}, err: errors.New("ERROR")} - err := w.WaitForResourceToReachState(tw, ValidID, []string{"WAITING"}, "OK") - if err == nil { - t.Fatal("Expected error from getter") - } -} - -func TestErrorsWhenMaxWaitTriesExceeded(t *testing.T) { - w := Waiter{WaitDurationMS: 1, MaxRetries: 1} - - ws := &testWaitSvc{states: []string{"WAITING", "OK"}} - - err := w.WaitForResourceToReachState(ws, ValidID, []string{"WAITING"}, "OK") - if err == nil { - t.Fatal("Expecting error but wait terminated") - } -} diff --git a/builder/oracle/oci/config.go b/builder/oracle/oci/config.go index 36f8bb8e9..eda312ee2 100644 --- a/builder/oracle/oci/config.go +++ b/builder/oracle/oci/config.go @@ -1,17 +1,20 @@ package oci import ( + "encoding/base64" "errors" "fmt" + "io/ioutil" + "log" "os" "path/filepath" - client "github.com/hashicorp/packer/builder/oracle/oci/client" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" 
"github.com/hashicorp/packer/template/interpolate" + ocicommon "github.com/oracle/oci-go-sdk/common" "github.com/mitchellh/go-homedir" ) @@ -20,18 +23,19 @@ type Config struct { common.PackerConfig `mapstructure:",squash"` Comm communicator.Config `mapstructure:",squash"` - AccessCfg *client.Config + ConfigProvider ocicommon.ConfigurationProvider AccessCfgFile string `mapstructure:"access_cfg_file"` AccessCfgFileAccount string `mapstructure:"access_cfg_file_account"` // Access config overrides - UserID string `mapstructure:"user_ocid"` - TenancyID string `mapstructure:"tenancy_ocid"` - Region string `mapstructure:"region"` - Fingerprint string `mapstructure:"fingerprint"` - KeyFile string `mapstructure:"key_file"` - PassPhrase string `mapstructure:"pass_phrase"` + UserID string `mapstructure:"user_ocid"` + TenancyID string `mapstructure:"tenancy_ocid"` + Region string `mapstructure:"region"` + Fingerprint string `mapstructure:"fingerprint"` + KeyFile string `mapstructure:"key_file"` + PassPhrase string `mapstructure:"pass_phrase"` + UsePrivateIP bool `mapstructure:"use_private_ip"` AvailabilityDomain string `mapstructure:"availability_domain"` CompartmentID string `mapstructure:"compartment_ocid"` @@ -41,6 +45,20 @@ type Config struct { Shape string `mapstructure:"shape"` ImageName string `mapstructure:"image_name"` + // Instance + InstanceName string `mapstructure:"instance_name"` + + // Metadata optionally contains custom metadata key/value pairs provided in the + // configuration. While this can be used to set metadata["user_data"] the explicit + // "user_data" and "user_data_file" values will have precedence. + // An instance's metadata can be obtained from at http://169.254.169.254 on the + // launched instance. + Metadata map[string]string `mapstructure:"metadata"` + + // UserData and UserDataFile file are both optional and mutually exclusive. + UserData string `mapstructure:"user_data"` + UserDataFile string `mapstructure:"user_data_file"` + // Networking SubnetID string `mapstructure:"subnet_ocid"` @@ -60,68 +78,54 @@ func NewConfig(raws ...interface{}) (*Config, error) { } // Determine where the SDK config is located - var accessCfgFile string - if c.AccessCfgFile != "" { - accessCfgFile = c.AccessCfgFile - } else { - accessCfgFile, err = getDefaultOCISettingsPath() + if c.AccessCfgFile == "" { + c.AccessCfgFile, err = getDefaultOCISettingsPath() if err != nil { - accessCfgFile = "" // Access cfg might be in template + log.Println("Default OCI settings file not found") } } - accessCfg := &client.Config{} - - if accessCfgFile != "" { - loadedAccessCfgs, err := client.LoadConfigsFromFile(accessCfgFile) - if err != nil { - return nil, fmt.Errorf("Invalid config file %s: %s", accessCfgFile, err) - } - cfgAccount := "DEFAULT" - if c.AccessCfgFileAccount != "" { - cfgAccount = c.AccessCfgFileAccount - } - - var ok bool - accessCfg, ok = loadedAccessCfgs[cfgAccount] - if !ok { - return nil, fmt.Errorf("No account section '%s' found in config file %s", cfgAccount, accessCfgFile) - } - } - - // Override SDK client config with any non-empty template properties - - if c.UserID != "" { - accessCfg.User = c.UserID - } - - if c.TenancyID != "" { - accessCfg.Tenancy = c.TenancyID - } - - if c.Region != "" { - accessCfg.Region = c.Region - } - - // Default if the template nor the API config contains a region. 
- if accessCfg.Region == "" { - accessCfg.Region = "us-phoenix-1" - } - - if c.Fingerprint != "" { - accessCfg.Fingerprint = c.Fingerprint - } - - if c.PassPhrase != "" { - accessCfg.PassPhrase = c.PassPhrase + if c.AccessCfgFileAccount == "" { + c.AccessCfgFileAccount = "DEFAULT" } + var keyContent []byte if c.KeyFile != "" { - accessCfg.KeyFile = c.KeyFile - accessCfg.Key, err = client.LoadPrivateKey(accessCfg) + path, err := homedir.Expand(c.KeyFile) if err != nil { - return nil, fmt.Errorf("Failed to load private key %s : %s", accessCfg.KeyFile, err) + return nil, err } + + // Read API signing key + keyContent, err = ioutil.ReadFile(path) + if err != nil { + return nil, err + } + } + + fileProvider, _ := ocicommon.ConfigurationProviderFromFileWithProfile(c.AccessCfgFile, c.AccessCfgFileAccount, c.PassPhrase) + if c.Region == "" { + var region string + if fileProvider != nil { + region, _ = fileProvider.Region() + } + if region == "" { + c.Region = "us-phoenix-1" + } + } + + providers := []ocicommon.ConfigurationProvider{ + NewRawConfigurationProvider(c.TenancyID, c.UserID, c.Region, c.Fingerprint, string(keyContent), &c.PassPhrase), + } + + if fileProvider != nil { + providers = append(providers, fileProvider) + } + + // Load API access configuration from SDK + configProvider, err := ocicommon.ComposingConfigurationProvider(providers) + if err != nil { + return nil, err } var errs *packer.MultiError @@ -129,44 +133,36 @@ func NewConfig(raws ...interface{}) (*Config, error) { errs = packer.MultiErrorAppend(errs, es...) } - // Required AccessCfg configuration options - - if accessCfg.User == "" { + if userOCID, _ := configProvider.UserOCID(); userOCID == "" { errs = packer.MultiErrorAppend( errs, errors.New("'user_ocid' must be specified")) } - if accessCfg.Tenancy == "" { + tenancyOCID, _ := configProvider.TenancyOCID() + if tenancyOCID == "" { errs = packer.MultiErrorAppend( errs, errors.New("'tenancy_ocid' must be specified")) } - if accessCfg.Region == "" { - errs = packer.MultiErrorAppend( - errs, errors.New("'region' must be specified")) - } - - if accessCfg.Fingerprint == "" { + if fingerprint, _ := configProvider.KeyFingerprint(); fingerprint == "" { errs = packer.MultiErrorAppend( errs, errors.New("'fingerprint' must be specified")) } - if accessCfg.Key == nil { + if _, err := configProvider.PrivateRSAKey(); err != nil { errs = packer.MultiErrorAppend( errs, errors.New("'key_file' must be specified")) } - c.AccessCfg = accessCfg - - // Required non AccessCfg configuration options + c.ConfigProvider = configProvider if c.AvailabilityDomain == "" { errs = packer.MultiErrorAppend( errs, errors.New("'availability_domain' must be specified")) } - if c.CompartmentID == "" { - c.CompartmentID = accessCfg.Tenancy + if c.CompartmentID == "" && tenancyOCID != "" { + c.CompartmentID = tenancyOCID } if c.Shape == "" { @@ -194,6 +190,30 @@ func NewConfig(raws ...interface{}) (*Config, error) { } } + // Optional UserData config + if c.UserData != "" && c.UserDataFile != "" { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("Only one of user_data or user_data_file can be specified.")) + } else if c.UserDataFile != "" { + if _, err := os.Stat(c.UserDataFile); err != nil { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("user_data_file not found: %s", c.UserDataFile)) + } + } + // read UserDataFile into string. 
+ if c.UserDataFile != "" { + fiData, err := ioutil.ReadFile(c.UserDataFile) + if err != nil { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("Problem reading user_data_file: %s", err)) + } + c.UserData = string(fiData) + } + // Test if UserData is encoded already, and if not, encode it + if c.UserData != "" { + if _, err := base64.StdEncoding.DecodeString(c.UserData); err != nil { + log.Printf("[DEBUG] base64 encoding user data...") + c.UserData = base64.StdEncoding.EncodeToString([]byte(c.UserData)) + } + } + if errs != nil && len(errs.Errors) > 0 { return nil, errs } diff --git a/builder/oracle/oci/config_provider.go b/builder/oracle/oci/config_provider.go new file mode 100644 index 000000000..25d467a4c --- /dev/null +++ b/builder/oracle/oci/config_provider.go @@ -0,0 +1,76 @@ +package oci + +import ( + "crypto/rsa" + "errors" + "fmt" + + "github.com/oracle/oci-go-sdk/common" +) + +// rawConfigurationProvider allows a user to simply construct a configuration +// provider from raw values. It errors on access when those values are empty. +type rawConfigurationProvider struct { + tenancy string + user string + region string + fingerprint string + privateKey string + privateKeyPassphrase *string +} + +// NewRawConfigurationProvider will create a rawConfigurationProvider. +func NewRawConfigurationProvider(tenancy, user, region, fingerprint, privateKey string, privateKeyPassphrase *string) common.ConfigurationProvider { + return rawConfigurationProvider{tenancy, user, region, fingerprint, privateKey, privateKeyPassphrase} +} + +func (p rawConfigurationProvider) PrivateRSAKey() (key *rsa.PrivateKey, err error) { + return common.PrivateKeyFromBytes([]byte(p.privateKey), p.privateKeyPassphrase) +} + +func (p rawConfigurationProvider) KeyID() (keyID string, err error) { + tenancy, err := p.TenancyOCID() + if err != nil { + return + } + + user, err := p.UserOCID() + if err != nil { + return + } + + fingerprint, err := p.KeyFingerprint() + if err != nil { + return + } + + return fmt.Sprintf("%s/%s/%s", tenancy, user, fingerprint), nil +} + +func (p rawConfigurationProvider) TenancyOCID() (string, error) { + if p.tenancy == "" { + return "", errors.New("no tenancy provided") + } + return p.tenancy, nil +} + +func (p rawConfigurationProvider) UserOCID() (string, error) { + if p.user == "" { + return "", errors.New("no user provided") + } + return p.user, nil +} + +func (p rawConfigurationProvider) KeyFingerprint() (string, error) { + if p.fingerprint == "" { + return "", errors.New("no fingerprint provided") + } + return p.fingerprint, nil +} + +func (p rawConfigurationProvider) Region() (string, error) { + if p.region == "" { + return "", errors.New("no region provided") + } + return p.region, nil +} diff --git a/builder/oracle/oci/config_test.go b/builder/oracle/oci/config_test.go index 9a6bd6486..51cbb386d 100644 --- a/builder/oracle/oci/config_test.go +++ b/builder/oracle/oci/config_test.go @@ -1,13 +1,16 @@ package oci import ( + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/pem" "io/ioutil" "os" - "reflect" "strings" "testing" - client "github.com/hashicorp/packer/builder/oracle/oci/client" + "github.com/go-ini/ini" ) func testConfig(accessConfFile *os.File) map[string]interface{} { @@ -24,26 +27,24 @@ func testConfig(accessConfFile *os.File) map[string]interface{} { "subnet_ocid": "ocd1...", // Comm - "ssh_username": "opc", + "ssh_username": "opc", + "use_private_ip": false, + "metadata": map[string]string{ + "key": "value", + }, } } -func getField(c *client.Config, field string) 
string { - r := reflect.ValueOf(c) - f := reflect.Indirect(r).FieldByName(field) - return string(f.String()) -} - func TestConfig(t *testing.T) { // Shared set-up and defered deletion - cfg, keyFile, err := client.BaseTestConfig() + cfg, keyFile, err := baseTestConfigWithTmpKeyFile() if err != nil { t.Fatal(err) } defer os.Remove(keyFile.Name()) - cfgFile, err := client.WriteTestConfig(cfg) + cfgFile, err := writeTestConfig(cfg) if err != nil { t.Fatal(err) } @@ -54,7 +55,7 @@ func TestConfig(t *testing.T) { tmpHome, err := ioutil.TempDir("", "packer_config_test") if err != nil { - t.Fatalf("err: %+v", err) + t.Fatalf("Unexpected error when creating temporary directory: %+v", err) } defer os.Remove(tmpHome) @@ -63,15 +64,13 @@ func TestConfig(t *testing.T) { defer os.Setenv("HOME", home) // Config tests - t.Run("BaseConfig", func(t *testing.T) { raw := testConfig(cfgFile) _, errs := NewConfig(raw) if errs != nil { - t.Fatalf("err: %+v", errs) + t.Fatalf("Unexpected error in configuration %+v", errs) } - }) t.Run("NoAccessConfig", func(t *testing.T) { @@ -80,14 +79,14 @@ func TestConfig(t *testing.T) { _, errs := NewConfig(raw) - s := errs.Error() expectedErrors := []string{ - "'user_ocid'", "'tenancy_ocid'", "'fingerprint'", - "'key_file'", + "'user_ocid'", "'tenancy_ocid'", "'fingerprint'", "'key_file'", } + + s := errs.Error() for _, expected := range expectedErrors { if !strings.Contains(s, expected) { - t.Errorf("Expected %s to contain '%s'", s, expected) + t.Errorf("Expected %q to contain '%s'", s, expected) } } }) @@ -112,12 +111,17 @@ func TestConfig(t *testing.T) { raw := testConfig(cfgFile) c, errs := NewConfig(raw) if errs != nil { - t.Fatalf("err: %+v", errs) + t.Fatalf("Unexpected error in configuration %+v", errs) } - expected := "ocid1.tenancy.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" - if c.AccessCfg.Tenancy != expected { - t.Errorf("Expected tenancy: %s, got %s.", expected, c.AccessCfg.Tenancy) + tenancy, err := c.ConfigProvider.TenancyOCID() + if err != nil { + t.Fatalf("Unexpected error getting tenancy ocid: %v", err) + } + + expected := "ocid1.tenancy.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" + if tenancy != expected { + t.Errorf("Expected tenancy: %s, got %s.", expected, tenancy) } }) @@ -126,12 +130,17 @@ func TestConfig(t *testing.T) { raw := testConfig(cfgFile) c, errs := NewConfig(raw) if errs != nil { - t.Fatalf("err: %+v", errs) + t.Fatalf("Unexpected error in configuration %+v", errs) + } + + region, err := c.ConfigProvider.Region() + if err != nil { + t.Fatalf("Unexpected error getting region: %v", err) } expected := "us-ashburn-1" - if c.AccessCfg.Region != expected { - t.Errorf("Expected region: %s, got %s.", expected, c.AccessCfg.Region) + if region != expected { + t.Errorf("Expected region: %s, got %s.", expected, region) } }) @@ -158,7 +167,7 @@ func TestConfig(t *testing.T) { c, errs := NewConfig(raw) if errs != nil { - t.Errorf("Unexpected error(s): %s", errs) + t.Fatalf("Unexpected error in configuration %+v", errs) } if !strings.Contains(c.ImageName, "packer-") { @@ -166,30 +175,138 @@ func TestConfig(t *testing.T) { } }) - // Test that AccessCfgFile properties are overridden by their - // corosponding template keys. 
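// The override tests that follow exercise the provider chain assembled by the
// new NewConfig in config.go above. A sketch (invented helper, placeholder path
// and OCIDs) of the behaviour they depend on: the raw provider built from
// template values is consulted first, and any field it cannot answer (its
// getters error on empty values, see config_provider.go) falls through to the
// profile loaded from the SDK config file.
func exampleComposedProvider() (ocicommon.ConfigurationProvider, error) {
	passphrase := ""

	// The template supplied only a user OCID and a region.
	raw := NewRawConfigurationProvider(
		"", "ocid1.user.oc1..aaaa", "us-ashburn-1", "", "", &passphrase)

	// Everything else comes from an on-disk profile.
	fileProvider, err := ocicommon.ConfigurationProviderFromFileWithProfile(
		"/home/user/.oci/config", "DEFAULT", passphrase)
	if err != nil {
		return nil, err
	}

	// For each getter the composing provider returns the first answer that does
	// not error, so template values win and the file fills in the gaps.
	return ocicommon.ComposingConfigurationProvider(
		[]ocicommon.ConfigurationProvider{raw, fileProvider})
}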
- accessOverrides := map[string]string{ - "user_ocid": "User", - "tenancy_ocid": "Tenancy", - "region": "Region", - "fingerprint": "Fingerprint", - } - for k, v := range accessOverrides { - t.Run("AccessCfg."+v+"Overridden", func(t *testing.T) { - expected := "override" + t.Run("user_ocid_overridden", func(t *testing.T) { + expected := "override" + raw := testConfig(cfgFile) + raw["user_ocid"] = expected - raw := testConfig(cfgFile) - raw[k] = expected + c, errs := NewConfig(raw) + if errs != nil { + t.Fatalf("Unexpected error in configuration %+v", errs) + } - c, errs := NewConfig(raw) - if errs != nil { - t.Fatalf("err: %+v", errs) - } + user, _ := c.ConfigProvider.UserOCID() + if user != expected { + t.Errorf("Expected ConfigProvider.UserOCID: %s, got %s", expected, user) + } + }) - accessVal := getField(c.AccessCfg, v) - if accessVal != expected { - t.Errorf("Expected AccessCfg.%s: %s, got %s", v, expected, accessVal) - } - }) - } + t.Run("tenancy_ocid_overidden", func(t *testing.T) { + expected := "override" + raw := testConfig(cfgFile) + raw["tenancy_ocid"] = expected + + c, errs := NewConfig(raw) + if errs != nil { + t.Fatalf("Unexpected error in configuration %+v", errs) + } + + tenancy, _ := c.ConfigProvider.TenancyOCID() + if tenancy != expected { + t.Errorf("Expected ConfigProvider.TenancyOCID: %s, got %s", expected, tenancy) + } + }) + + t.Run("region_overidden", func(t *testing.T) { + expected := "override" + raw := testConfig(cfgFile) + raw["region"] = expected + + c, errs := NewConfig(raw) + if errs != nil { + t.Fatalf("Unexpected error in configuration %+v", errs) + } + + region, _ := c.ConfigProvider.Region() + if region != expected { + t.Errorf("Expected ConfigProvider.Region: %s, got %s", expected, region) + } + }) + + t.Run("fingerprint_overidden", func(t *testing.T) { + expected := "override" + raw := testConfig(cfgFile) + raw["fingerprint"] = expected + + c, errs := NewConfig(raw) + if errs != nil { + t.Fatalf("Unexpected error in configuration: %+v", errs) + } + + fingerprint, _ := c.ConfigProvider.KeyFingerprint() + if fingerprint != expected { + t.Errorf("Expected ConfigProvider.KeyFingerprint: %s, got %s", expected, fingerprint) + } + }) +} + +// BaseTestConfig creates the base (DEFAULT) config including a temporary key +// file. +// NOTE: Caller is responsible for removing temporary key file. +func baseTestConfigWithTmpKeyFile() (*ini.File, *os.File, error) { + keyFile, err := generateRSAKeyFile() + if err != nil { + return nil, keyFile, err + } + // Build ini + cfg := ini.Empty() + section, _ := cfg.NewSection("DEFAULT") + section.NewKey("region", "us-ashburn-1") + section.NewKey("tenancy", "ocid1.tenancy.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") + section.NewKey("user", "ocid1.user.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") + section.NewKey("fingerprint", "70:04:5z:b3:19:ab:90:75:a4:1f:50:d4:c7:c3:33:20") + section.NewKey("key_file", keyFile.Name()) + + return cfg, keyFile, nil +} + +// WriteTestConfig writes a ini.File to a temporary file for use in unit tests. +// NOTE: Caller is responsible for removing temporary file. 
+func writeTestConfig(cfg *ini.File) (*os.File, error) { + confFile, err := ioutil.TempFile("", "config_file") + if err != nil { + return nil, err + } + + if _, err := confFile.Write([]byte("[DEFAULT]\n")); err != nil { + os.Remove(confFile.Name()) + return nil, err + } + + if _, err := cfg.WriteTo(confFile); err != nil { + os.Remove(confFile.Name()) + return nil, err + } + return confFile, nil +} + +// generateRSAKeyFile generates an RSA key file for use in unit tests. +// NOTE: The caller is responsible for deleting the temporary file. +func generateRSAKeyFile() (*os.File, error) { + // Create temporary file for the key + f, err := ioutil.TempFile("", "key") + if err != nil { + return nil, err + } + + // Generate a 2048-bit key + priv, err := rsa.GenerateKey(rand.Reader, 2048) + if err != nil { + return nil, err + } + + // ASN.1 DER encoded form + privDer := x509.MarshalPKCS1PrivateKey(priv) + privBlk := pem.Block{ + Type: "RSA PRIVATE KEY", + Headers: nil, + Bytes: privDer, + } + + // Write the key out + if _, err := f.Write(pem.EncodeToMemory(&privBlk)); err != nil { + return nil, err + } + + return f, nil } diff --git a/builder/oracle/oci/driver.go b/builder/oracle/oci/driver.go index 51f6c364a..4e6013bbd 100644 --- a/builder/oracle/oci/driver.go +++ b/builder/oracle/oci/driver.go @@ -1,16 +1,18 @@ package oci import ( - client "github.com/hashicorp/packer/builder/oracle/oci/client" + "context" + + "github.com/oracle/oci-go-sdk/core" ) // Driver interfaces between the builder steps and the OCI SDK. type Driver interface { - CreateInstance(publicKey string) (string, error) - CreateImage(id string) (client.Image, error) - DeleteImage(id string) error - GetInstanceIP(id string) (string, error) - TerminateInstance(id string) error - WaitForImageCreation(id string) error - WaitForInstanceState(id string, waitStates []string, terminalState string) error + CreateInstance(ctx context.Context, publicKey string) (string, error) + CreateImage(ctx context.Context, id string) (core.Image, error) + DeleteImage(ctx context.Context, id string) error + GetInstanceIP(ctx context.Context, id string) (string, error) + TerminateInstance(ctx context.Context, id string) error + WaitForImageCreation(ctx context.Context, id string) error + WaitForInstanceState(ctx context.Context, id string, waitStates []string, terminalState string) error } diff --git a/builder/oracle/oci/driver_mock.go b/builder/oracle/oci/driver_mock.go index 6173760fc..df478998c 100644 --- a/builder/oracle/oci/driver_mock.go +++ b/builder/oracle/oci/driver_mock.go @@ -1,7 +1,9 @@ package oci import ( - client "github.com/hashicorp/packer/builder/oracle/oci/client" + "context" + + "github.com/oracle/oci-go-sdk/core" ) // driverMock implements the Driver interface and communicates with Oracle @@ -24,10 +26,12 @@ type driverMock struct { WaitForImageCreationErr error WaitForInstanceStateErr error + + cfg *Config } // CreateInstance creates a new compute instance. -func (d *driverMock) CreateInstance(publicKey string) (string, error) { +func (d *driverMock) CreateInstance(ctx context.Context, publicKey string) (string, error) { if d.CreateInstanceErr != nil { return "", d.CreateInstanceErr } @@ -38,16 +42,16 @@ func (d *driverMock) CreateInstance(publicKey string) (string, error) { } // CreateImage creates a new custom image.
-func (d *driverMock) CreateImage(id string) (client.Image, error) { +func (d *driverMock) CreateImage(ctx context.Context, id string) (core.Image, error) { if d.CreateImageErr != nil { - return client.Image{}, d.CreateImageErr + return core.Image{}, d.CreateImageErr } d.CreateImageID = id - return client.Image{ID: id}, nil + return core.Image{Id: &id}, nil } // DeleteImage mocks deleting a custom image. -func (d *driverMock) DeleteImage(id string) error { +func (d *driverMock) DeleteImage(ctx context.Context, id string) error { if d.DeleteImageErr != nil { return d.DeleteImageErr } @@ -57,16 +61,19 @@ func (d *driverMock) DeleteImage(id string) error { return nil } -// GetInstanceIP returns the public IP corresponding to the given instance id. -func (d *driverMock) GetInstanceIP(id string) (string, error) { +// GetInstanceIP returns the public or private IP corresponding to the given instance id. +func (d *driverMock) GetInstanceIP(ctx context.Context, id string) (string, error) { if d.GetInstanceIPErr != nil { return "", d.GetInstanceIPErr } + if d.cfg.UsePrivateIP { + return "private_ip", nil + } return "ip", nil } // TerminateInstance terminates a compute instance. -func (d *driverMock) TerminateInstance(id string) error { +func (d *driverMock) TerminateInstance(ctx context.Context, id string) error { if d.TerminateInstanceErr != nil { return d.TerminateInstanceErr } @@ -78,12 +85,12 @@ func (d *driverMock) TerminateInstance(id string) error { // WaitForImageCreation waits for a provisioning custom image to reach the // "AVAILABLE" state. -func (d *driverMock) WaitForImageCreation(id string) error { +func (d *driverMock) WaitForImageCreation(ctx context.Context, id string) error { return d.WaitForImageCreationErr } // WaitForInstanceState waits for an instance to reach the a given terminal // state. -func (d *driverMock) WaitForInstanceState(id string, waitStates []string, terminalState string) error { +func (d *driverMock) WaitForInstanceState(ctx context.Context, id string, waitStates []string, terminalState string) error { return d.WaitForInstanceStateErr } diff --git a/builder/oracle/oci/driver_oci.go b/builder/oracle/oci/driver_oci.go index 1d6f943c0..138bb6474 100644 --- a/builder/oracle/oci/driver_oci.go +++ b/builder/oracle/oci/driver_oci.go @@ -1,117 +1,215 @@ package oci import ( + "context" "errors" "fmt" + "time" - client "github.com/hashicorp/packer/builder/oracle/oci/client" + core "github.com/oracle/oci-go-sdk/core" ) // driverOCI implements the Driver interface and communicates with Oracle // OCI. type driverOCI struct { - client *client.Client - cfg *Config + computeClient core.ComputeClient + vcnClient core.VirtualNetworkClient + cfg *Config + context context.Context } -// NewDriverOCI Creates a new driverOCI with a connected client. +// NewDriverOCI Creates a new driverOCI with a connected compute client and a connected vcn client. func NewDriverOCI(cfg *Config) (Driver, error) { - client, err := client.NewClient(cfg.AccessCfg) + coreClient, err := core.NewComputeClientWithConfigurationProvider(cfg.ConfigProvider) if err != nil { return nil, err } - return &driverOCI{client: client, cfg: cfg}, nil + + vcnClient, err := core.NewVirtualNetworkClientWithConfigurationProvider(cfg.ConfigProvider) + if err != nil { + return nil, err + } + + return &driverOCI{ + computeClient: coreClient, + vcnClient: vcnClient, + cfg: cfg, + }, nil } // CreateInstance creates a new compute instance. 
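+// The metadata passed to LaunchInstance always carries the generated SSH public key under +// ssh_authorized_keys; entries from the builder's metadata setting are merged in on top of it, +// and a non-empty user_data value is added under the user_data key. A non-empty instance_name +// is used as the instance display name.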
-func (d *driverOCI) CreateInstance(publicKey string) (string, error) { - params := &client.LaunchInstanceParams{ - AvailabilityDomain: d.cfg.AvailabilityDomain, - CompartmentID: d.cfg.CompartmentID, - ImageID: d.cfg.BaseImageID, - Shape: d.cfg.Shape, - SubnetID: d.cfg.SubnetID, - Metadata: map[string]string{ - "ssh_authorized_keys": publicKey, - }, +func (d *driverOCI) CreateInstance(ctx context.Context, publicKey string) (string, error) { + metadata := map[string]string{ + "ssh_authorized_keys": publicKey, } - instance, err := d.client.Compute.Instances.Launch(params) + if d.cfg.Metadata != nil { + for key, value := range d.cfg.Metadata { + metadata[key] = value + } + } + if d.cfg.UserData != "" { + metadata["user_data"] = d.cfg.UserData + } + + instanceDetails := core.LaunchInstanceDetails{ + AvailabilityDomain: &d.cfg.AvailabilityDomain, + CompartmentId: &d.cfg.CompartmentID, + ImageId: &d.cfg.BaseImageID, + Shape: &d.cfg.Shape, + SubnetId: &d.cfg.SubnetID, + Metadata: metadata, + } + + // When empty, the default display name is used. + if d.cfg.InstanceName != "" { + instanceDetails.DisplayName = &d.cfg.InstanceName + } + + instance, err := d.computeClient.LaunchInstance(context.TODO(), core.LaunchInstanceRequest{LaunchInstanceDetails: instanceDetails}) + if err != nil { return "", err } - return instance.ID, nil + return *instance.Id, nil } // CreateImage creates a new custom image. -func (d *driverOCI) CreateImage(id string) (client.Image, error) { - params := &client.CreateImageParams{ - CompartmentID: d.cfg.CompartmentID, - InstanceID: id, - DisplayName: d.cfg.ImageName, - } - image, err := d.client.Compute.Images.Create(params) +func (d *driverOCI) CreateImage(ctx context.Context, id string) (core.Image, error) { + res, err := d.computeClient.CreateImage(ctx, core.CreateImageRequest{CreateImageDetails: core.CreateImageDetails{ + CompartmentId: &d.cfg.CompartmentID, + InstanceId: &id, + DisplayName: &d.cfg.ImageName, + }}) + if err != nil { - return client.Image{}, err + return core.Image{}, err } - return image, nil + return res.Image, nil } // DeleteImage deletes a custom image. -func (d *driverOCI) DeleteImage(id string) error { - return d.client.Compute.Images.Delete(&client.DeleteImageParams{ID: id}) +func (d *driverOCI) DeleteImage(ctx context.Context, id string) error { + _, err := d.computeClient.DeleteImage(ctx, core.DeleteImageRequest{ImageId: &id}) + return err } -// GetInstanceIP returns the public IP corresponding to the given instance id. -func (d *driverOCI) GetInstanceIP(id string) (string, error) { - // get nvic and cross ref to find pub ip address - vnics, err := d.client.Compute.VNICAttachments.List( - &client.ListVnicAttachmentsParams{ - InstanceID: id, - CompartmentID: d.cfg.CompartmentID, - }, - ) +// GetInstanceIP returns the public or private IP corresponding to the given instance id. 
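+// When use_private_ip is set, the private address of the instance's first VNIC attachment is +// returned; otherwise its public address is returned, and a VNIC without a public IP is treated +// as an error.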
+func (d *driverOCI) GetInstanceIP(ctx context.Context, id string) (string, error) { + vnics, err := d.computeClient.ListVnicAttachments(ctx, core.ListVnicAttachmentsRequest{ + InstanceId: &id, + CompartmentId: &d.cfg.CompartmentID, + }) if err != nil { return "", err } - if len(vnics) < 1 { + if len(vnics.Items) == 0 { return "", errors.New("instance has zero VNICs") } - vnic, err := d.client.Compute.VNICs.Get(&client.GetVNICParams{ID: vnics[0].VNICID}) + vnic, err := d.vcnClient.GetVnic(ctx, core.GetVnicRequest{VnicId: vnics.Items[0].VnicId}) if err != nil { return "", fmt.Errorf("Error getting VNIC details: %s", err) } - return vnic.PublicIP, nil + if d.cfg.UsePrivateIP { + return *vnic.PrivateIp, nil + } + + if vnic.PublicIp == nil { + return "", fmt.Errorf("Error getting VNIC public IP for: %s", id) + } + + return *vnic.PublicIp, nil +} + +func (d *driverOCI) GetInstanceInitialCredentials(ctx context.Context, id string) (string, string, error) { + credentials, err := d.computeClient.GetWindowsInstanceInitialCredentials(ctx, core.GetWindowsInstanceInitialCredentialsRequest{ + InstanceId: &id, + }) + if err != nil { + return "", "", err + } + + return *credentials.InstanceCredentials.Username, *credentials.InstanceCredentials.Password, err } // TerminateInstance terminates a compute instance. -func (d *driverOCI) TerminateInstance(id string) error { - params := &client.TerminateInstanceParams{ID: id} - return d.client.Compute.Instances.Terminate(params) +func (d *driverOCI) TerminateInstance(ctx context.Context, id string) error { + _, err := d.computeClient.TerminateInstance(ctx, core.TerminateInstanceRequest{ + InstanceId: &id, + }) + return err } // WaitForImageCreation waits for a provisioning custom image to reach the // "AVAILABLE" state. -func (d *driverOCI) WaitForImageCreation(id string) error { - return client.NewWaiter().WaitForResourceToReachState( - d.client.Compute.Images, +func (d *driverOCI) WaitForImageCreation(ctx context.Context, id string) error { + return waitForResourceToReachState( + func(string) (string, error) { + image, err := d.computeClient.GetImage(ctx, core.GetImageRequest{ImageId: &id}) + if err != nil { + return "", err + } + return string(image.LifecycleState), nil + }, id, []string{"PROVISIONING"}, "AVAILABLE", + 0, //Unlimited Retries + 5*time.Second, //5 second wait between retries ) } // WaitForInstanceState waits for an instance to reach a given terminal // state. -func (d *driverOCI) WaitForInstanceState(id string, waitStates []string, terminalState string) error { - return client.NewWaiter().WaitForResourceToReachState( - d.client.Compute.Instances, +func (d *driverOCI) WaitForInstanceState(ctx context.Context, id string, waitStates []string, terminalState string) error { + return waitForResourceToReachState( + func(string) (string, error) { + instance, err := d.computeClient.GetInstance(ctx, core.GetInstanceRequest{InstanceId: &id}) + if err != nil { + return "", err + } + return string(instance.LifecycleState), nil + }, id, waitStates, terminalState, + 0, //Unlimited Retries + 5*time.Second, //5 second wait between retries ) } + +// waitForResourceToReachState polls a resource's lifecycle state through the +// supplied get function and waits until the desired terminal state is reached +// or the maximum number of retries has been exhausted.
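+// Each poll that still returns one of the wait states is followed by a waitDuration pause, and +// a maxRetries of 0 keeps polling indefinitely. An illustrative call (mirroring WaitForImageCreation +// above, with getImageState standing in for its inline closure): +// waitForResourceToReachState(getImageState, id, []string{"PROVISIONING"}, "AVAILABLE", 0, 5*time.Second)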
+func waitForResourceToReachState(getResourceState func(string) (string, error), id string, waitStates []string, terminalState string, maxRetries int, waitDuration time.Duration) error { + for i := 0; maxRetries == 0 || i < maxRetries; i++ { + state, err := getResourceState(id) + if err != nil { + return err + } + + if stringSliceContains(waitStates, state) { + time.Sleep(waitDuration) + continue + } else if state == terminalState { + return nil + } + return fmt.Errorf("Unexpected resource state %q, expecting a waiting state %s or terminal state %q ", state, waitStates, terminalState) + } + return fmt.Errorf("Maximum number of retries (%d) exceeded; resource did not reach state %q", maxRetries, terminalState) +} + +// stringSliceContains loops through a slice of strings returning a boolean +// based on whether a given value is contained in the slice. +func stringSliceContains(slice []string, value string) bool { + for _, elem := range slice { + if elem == value { + return true + } + } + return false +} diff --git a/builder/oracle/oci/step_create_instance.go b/builder/oracle/oci/step_create_instance.go index 8650f7314..7ca52bf32 100644 --- a/builder/oracle/oci/step_create_instance.go +++ b/builder/oracle/oci/step_create_instance.go @@ -1,15 +1,16 @@ package oci import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepCreateInstance struct{} -func (s *stepCreateInstance) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateInstance) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { var ( driver = state.Get("driver").(Driver) ui = state.Get("ui").(packer.Ui) @@ -18,7 +19,7 @@ func (s *stepCreateInstance) Run(state multistep.StateBag) multistep.StepAction ui.Say("Creating instance...") - instanceID, err := driver.CreateInstance(publicKey) + instanceID, err := driver.CreateInstance(ctx, publicKey) if err != nil { err = fmt.Errorf("Problem creating instance: %s", err) ui.Error(err.Error()) @@ -32,7 +33,7 @@ func (s *stepCreateInstance) Run(state multistep.StateBag) multistep.StepAction ui.Say("Waiting for instance to enter 'RUNNING' state...") - if err = driver.WaitForInstanceState(instanceID, []string{"STARTING", "PROVISIONING"}, "RUNNING"); err != nil { + if err = driver.WaitForInstanceState(ctx, instanceID, []string{"STARTING", "PROVISIONING"}, "RUNNING"); err != nil { err = fmt.Errorf("Error waiting for instance to start: %s", err) ui.Error(err.Error()) state.Put("error", err) @@ -56,14 +57,14 @@ func (s *stepCreateInstance) Cleanup(state multistep.StateBag) { ui.Say(fmt.Sprintf("Terminating instance (%s)...", id)) - if err := driver.TerminateInstance(id); err != nil { + if err := driver.TerminateInstance(context.TODO(), id); err != nil { err = fmt.Errorf("Error terminating instance. Please terminate manually: %s", err) ui.Error(err.Error()) state.Put("error", err) return } - err := driver.WaitForInstanceState(id, []string{"TERMINATING"}, "TERMINATED") + err := driver.WaitForInstanceState(context.TODO(), id, []string{"TERMINATING"}, "TERMINATED") if err != nil { err = fmt.Errorf("Error terminating instance. 
Please terminate manually: %s", err) ui.Error(err.Error()) diff --git a/builder/oracle/oci/step_create_instance_test.go b/builder/oracle/oci/step_create_instance_test.go index 558a17cb6..6ab2ceb67 100644 --- a/builder/oracle/oci/step_create_instance_test.go +++ b/builder/oracle/oci/step_create_instance_test.go @@ -1,10 +1,11 @@ package oci import ( + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCreateInstance(t *testing.T) { @@ -16,7 +17,7 @@ func TestStepCreateInstance(t *testing.T) { driver := state.Get("driver").(*driverMock) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -44,7 +45,7 @@ func TestStepCreateInstance_CreateInstanceErr(t *testing.T) { driver := state.Get("driver").(*driverMock) driver.CreateInstanceErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -73,7 +74,7 @@ func TestStepCreateInstance_WaitForInstanceStateErr(t *testing.T) { driver := state.Get("driver").(*driverMock) driver.WaitForInstanceStateErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -91,7 +92,7 @@ func TestStepCreateInstance_TerminateInstanceErr(t *testing.T) { driver := state.Get("driver").(*driverMock) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -117,7 +118,7 @@ func TestStepCreateInstanceCleanup_WaitForInstanceStateErr(t *testing.T) { driver := state.Get("driver").(*driverMock) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } diff --git a/builder/oracle/oci/step_get_default_credentials.go b/builder/oracle/oci/step_get_default_credentials.go new file mode 100644 index 000000000..b7f6f71e9 --- /dev/null +++ b/builder/oracle/oci/step_get_default_credentials.go @@ -0,0 +1,62 @@ +package oci + +import ( + "context" + "fmt" + "log" + + commonhelper "github.com/hashicorp/packer/helper/common" + "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" +) + +type stepGetDefaultCredentials struct { + Debug bool + Comm *communicator.Config + BuildName string +} + +func (s *stepGetDefaultCredentials) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { + var ( + driver = state.Get("driver").(*driverOCI) + ui = state.Get("ui").(packer.Ui) + id = state.Get("instance_id").(string) + ) + + // Skip if we're not using winrm + if s.Comm.Type != "winrm" { + log.Printf("[INFO] Not using winrm communicator, skipping get password...") + return multistep.ActionContinue + } + + // If we already have a password, skip it + if s.Comm.WinRMPassword != "" { + ui.Say("Skipping waiting for password since WinRM password set...") + return multistep.ActionContinue + } + + username, password, err := driver.GetInstanceInitialCredentials(ctx, id) + if err != nil { + err = 
fmt.Errorf("Error getting instance's credentials: %s", err) + ui.Error(err.Error()) + state.Put("error", err) + return multistep.ActionHalt + } + s.Comm.WinRMPassword = password + s.Comm.WinRMUser = username + + if s.Debug { + ui.Message(fmt.Sprintf( + "[DEBUG] (OCI default credentials): Credentials (since debug is enabled): %s", password)) + } + + // store so that we can access this later during provisioning + commonhelper.SetSharedState("winrm_password", s.Comm.WinRMPassword, s.BuildName) + + return multistep.ActionContinue +} + +func (s *stepGetDefaultCredentials) Cleanup(state multistep.StateBag) { + // no cleanup +} diff --git a/builder/oracle/oci/step_image.go b/builder/oracle/oci/step_image.go index 07c9ddb4b..b3d37da0a 100644 --- a/builder/oracle/oci/step_image.go +++ b/builder/oracle/oci/step_image.go @@ -1,15 +1,16 @@ package oci import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepImage struct{} -func (s *stepImage) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepImage) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { var ( driver = state.Get("driver").(Driver) ui = state.Get("ui").(packer.Ui) @@ -18,7 +19,7 @@ func (s *stepImage) Run(state multistep.StateBag) multistep.StepAction { ui.Say("Creating image from instance...") - image, err := driver.CreateImage(instanceID) + image, err := driver.CreateImage(ctx, instanceID) if err != nil { err = fmt.Errorf("Error creating image from instance: %s", err) ui.Error(err.Error()) @@ -26,7 +27,7 @@ func (s *stepImage) Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionHalt } - err = driver.WaitForImageCreation(image.ID) + err = driver.WaitForImageCreation(ctx, *image.Id) if err != nil { err = fmt.Errorf("Error waiting for image creation to finish: %s", err) ui.Error(err.Error()) diff --git a/builder/oracle/oci/step_image_test.go b/builder/oracle/oci/step_image_test.go index 72399278c..955d557a9 100644 --- a/builder/oracle/oci/step_image_test.go +++ b/builder/oracle/oci/step_image_test.go @@ -1,10 +1,11 @@ package oci import ( + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepImage(t *testing.T) { @@ -14,7 +15,7 @@ func TestStepImage(t *testing.T) { step := new(stepImage) defer step.Cleanup(state) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -33,7 +34,7 @@ func TestStepImage_CreateImageErr(t *testing.T) { driver := state.Get("driver").(*driverMock) driver.CreateImageErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -56,7 +57,7 @@ func TestStepImage_WaitForImageCreationErr(t *testing.T) { driver := state.Get("driver").(*driverMock) driver.WaitForImageCreationErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/oracle/oci/step_instance_info.go b/builder/oracle/oci/step_instance_info.go index 310d8699c..63b9120c1 100644 --- a/builder/oracle/oci/step_instance_info.go +++ 
b/builder/oracle/oci/step_instance_info.go @@ -1,24 +1,25 @@ package oci import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepInstanceInfo struct{} -func (s *stepInstanceInfo) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepInstanceInfo) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { var ( driver = state.Get("driver").(Driver) ui = state.Get("ui").(packer.Ui) id = state.Get("instance_id").(string) ) - ip, err := driver.GetInstanceIP(id) + ip, err := driver.GetInstanceIP(ctx, id) if err != nil { - err = fmt.Errorf("Error getting instance's public IP: %s", err) + err = fmt.Errorf("Error getting instance's IP: %s", err) ui.Error(err.Error()) state.Put("error", err) return multistep.ActionHalt @@ -26,7 +27,7 @@ func (s *stepInstanceInfo) Run(state multistep.StateBag) multistep.StepAction { state.Put("instance_ip", ip) - ui.Say(fmt.Sprintf("Instance has public IP: %s.", ip)) + ui.Say(fmt.Sprintf("Instance has IP: %s.", ip)) return multistep.ActionContinue } diff --git a/builder/oracle/oci/step_instance_info_test.go b/builder/oracle/oci/step_instance_info_test.go index fdae8a0ef..d4e18dd44 100644 --- a/builder/oracle/oci/step_instance_info_test.go +++ b/builder/oracle/oci/step_instance_info_test.go @@ -1,10 +1,13 @@ package oci import ( + "bytes" + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func TestInstanceInfo(t *testing.T) { @@ -14,7 +17,7 @@ func TestInstanceInfo(t *testing.T) { step := new(stepInstanceInfo) defer step.Cleanup(state) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -28,6 +31,36 @@ func TestInstanceInfo(t *testing.T) { } } +func TestInstanceInfoPrivateIP(t *testing.T) { + baseTestConfig := baseTestConfig() + baseTestConfig.UsePrivateIP = true + state := new(multistep.BasicStateBag) + state.Put("config", baseTestConfig) + state.Put("driver", &driverMock{cfg: baseTestConfig}) + state.Put("hook", &packer.MockHook{}) + state.Put("ui", &packer.BasicUi{ + Reader: new(bytes.Buffer), + Writer: new(bytes.Buffer), + }) + state.Put("instance_id", "ocid1...") + + step := new(stepInstanceInfo) + defer step.Cleanup(state) + + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { + t.Fatalf("bad action: %#v", action) + } + + instanceIPRaw, ok := state.GetOk("instance_ip") + if !ok { + t.Fatalf("should have instance_ip") + } + + if instanceIPRaw.(string) != "private_ip" { + t.Fatalf("should've got ip ('%s' != 'private_ip')", instanceIPRaw.(string)) + } +} + func TestInstanceInfo_GetInstanceIPErr(t *testing.T) { state := testState() state.Put("instance_id", "ocid1...") @@ -38,7 +71,7 @@ func TestInstanceInfo_GetInstanceIPErr(t *testing.T) { driver := state.Get("driver").(*driverMock) driver.GetInstanceIPErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/oracle/oci/step_test.go b/builder/oracle/oci/step_test.go index d4436e304..55afba523 100644 --- a/builder/oracle/oci/step_test.go +++ b/builder/oracle/oci/step_test.go @@ -4,39 +4,39 @@ import ( "bytes" "os" + 
"github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - - client "github.com/hashicorp/packer/builder/oracle/oci/client" ) // TODO(apryde): It would be good not to have to write a key file to disk to // load the config. func baseTestConfig() *Config { - _, keyFile, err := client.BaseTestConfig() + _, keyFile, err := baseTestConfigWithTmpKeyFile() if err != nil { panic(err) } cfg, err := NewConfig(map[string]interface{}{ - "availability_domain": "aaaa:PHX-AD-3", + "availability_domain": "aaaa:US-ASHBURN-AD-1", // Image - "base_image_ocid": "ocd1...", + "base_image_ocid": "ocid1.image.oc1.iad.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "shape": "VM.Standard1.1", "image_name": "HelloWorld", + "region": "us-ashburn-1", // Networking - "subnet_ocid": "ocd1...", + "subnet_ocid": "ocid1.subnet.oc1.iad.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", // AccessConfig - "user_ocid": "ocid1...", - "tenancy_ocid": "ocid1...", - "fingerprint": "00:00...", + "user_ocid": "ocid1.user.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", + "tenancy_ocid": "ocid1.tenancy.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", + "fingerprint": "70:04:5z:b3:19:ab:90:75:a4:1f:50:d4:c7:c3:33:20", "key_file": keyFile.Name(), // Comm - "ssh_username": "opc", + "ssh_username": "opc", + "use_private_ip": false, }) // Once we have a config object they key file isn't re-read so we can @@ -50,9 +50,10 @@ func baseTestConfig() *Config { } func testState() multistep.StateBag { + baseTestConfig := baseTestConfig() state := new(multistep.BasicStateBag) - state.Put("config", baseTestConfig()) - state.Put("driver", &driverMock{}) + state.Put("config", baseTestConfig) + state.Put("driver", &driverMock{cfg: baseTestConfig}) state.Put("hook", &packer.MockHook{}) state.Put("ui", &packer.BasicUi{ Reader: new(bytes.Buffer), diff --git a/builder/parallels/common/driver.go b/builder/parallels/common/driver.go index b485d5b67..4fde15a6c 100644 --- a/builder/parallels/common/driver.go +++ b/builder/parallels/common/driver.go @@ -51,7 +51,7 @@ type Driver interface { // Send scancodes to the vm using the prltype python script. 
SendKeyScanCodes(string, ...string) error - // Apply default сonfiguration settings to the virtual machine + // Apply default configuration settings to the virtual machine SetDefaultConfiguration(string) error // Finds the MAC address of the NIC nic0 @@ -131,6 +131,7 @@ func NewDriver() (Driver, error) { latestDriver := 11 version, _ := drivers[strconv.Itoa(latestDriver)].Version() majVer, _ := strconv.Atoi(strings.SplitN(version, ".", 2)[0]) + log.Printf("Parallels version: %s", version) if majVer > latestDriver { log.Printf("Your version of Parallels Desktop for Mac is %s, Packer will use driver for version %d.", version, latestDriver) return drivers[strconv.Itoa(latestDriver)], nil diff --git a/builder/parallels/common/driver_9.go b/builder/parallels/common/driver_9.go index b6757d891..5634fb123 100644 --- a/builder/parallels/common/driver_9.go +++ b/builder/parallels/common/driver_9.go @@ -43,7 +43,7 @@ func (d *Parallels9Driver) Import(name, srcPath, dstDir string, reassignMAC bool srcMAC := "auto" if !reassignMAC { - srcMAC, err = getFirtsMACAddress(srcPath) + srcMAC, err = getFirstMACAddress(srcPath) if err != nil { return err } @@ -70,7 +70,7 @@ func getVMID(path string) (string, error) { return getConfigValueFromXpath(path, "/ParallelsVirtualMachine/Identification/VmUuid") } -func getFirtsMACAddress(path string) (string, error) { +func getFirstMACAddress(path string) (string, error) { return getConfigValueFromXpath(path, "/ParallelsVirtualMachine/Hardware/NetworkAdapter[@id='0']/MAC") } @@ -119,7 +119,7 @@ func getAppPath(bundleID string) (string, error) { return pathOutput, nil } -// CompactDisk performs the compation of the specified virtual disk image. +// CompactDisk performs the compaction of the specified virtual disk image. func (d *Parallels9Driver) CompactDisk(diskPath string) error { prlDiskToolPath, err := exec.LookPath("prl_disk_tool") if err != nil { @@ -222,7 +222,7 @@ func (d *Parallels9Driver) IsRunning(name string) (bool, error) { // Stop forcibly stops the VM. func (d *Parallels9Driver) Stop(name string) error { - if err := d.Prlctl("stop", name); err != nil { + if err := d.Prlctl("stop", name, "--kill"); err != nil { return err } @@ -275,7 +275,6 @@ func (d *Parallels9Driver) Version() (string, error) { } version := matches[1] - log.Printf("Parallels Desktop version: %s", version) return version, nil } diff --git a/builder/parallels/common/driver_9_test.go b/builder/parallels/common/driver_9_test.go index 32d7c6662..bc9d3eb42 100644 --- a/builder/parallels/common/driver_9_test.go +++ b/builder/parallels/common/driver_9_test.go @@ -16,6 +16,7 @@ func TestIPAddress(t *testing.T) { t.Fatalf("err: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() d := Parallels9Driver{ dhcpLeaseFile: tf.Name(), @@ -62,9 +63,9 @@ func TestIPAddress(t *testing.T) { func TestXMLParseConfig(t *testing.T) { td, err := ioutil.TempDir("", "configpvs") if err != nil { - t.Fatalf("Error creating temp file: %s", err) + t.Fatalf("Error creating temp dir: %s", err) } - defer os.Remove(td) + defer os.RemoveAll(td) config := []byte(` diff --git a/builder/parallels/common/run_config.go b/builder/parallels/common/run_config.go deleted file mode 100644 index 53fb8757b..000000000 --- a/builder/parallels/common/run_config.go +++ /dev/null @@ -1,30 +0,0 @@ -package common - -import ( - "fmt" - "time" - - "github.com/hashicorp/packer/template/interpolate" -) - -// RunConfig contains the configuration for VM run. 
-type RunConfig struct { - RawBootWait string `mapstructure:"boot_wait"` - - BootWait time.Duration `` -} - -// Prepare sets the configuration for VM run. -func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { - if c.RawBootWait == "" { - c.RawBootWait = "10s" - } - - var err error - c.BootWait, err = time.ParseDuration(c.RawBootWait) - if err != nil { - return []error{fmt.Errorf("Failed parsing boot_wait: %s", err)} - } - - return nil -} diff --git a/builder/parallels/common/run_config_test.go b/builder/parallels/common/run_config_test.go deleted file mode 100644 index 8068fe625..000000000 --- a/builder/parallels/common/run_config_test.go +++ /dev/null @@ -1,37 +0,0 @@ -package common - -import ( - "testing" -) - -func TestRunConfigPrepare_BootWait(t *testing.T) { - var c *RunConfig - var errs []error - - // Test a default boot_wait - c = new(RunConfig) - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("should not have error: %s", errs) - } - - if c.RawBootWait != "10s" { - t.Fatalf("bad value: %s", c.RawBootWait) - } - - // Test with a bad boot_wait - c = new(RunConfig) - c.RawBootWait = "this is not good" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) == 0 { - t.Fatalf("bad: %#v", errs) - } - - // Test with a good one - c = new(RunConfig) - c.RawBootWait = "5s" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("should not have error: %s", errs) - } -} diff --git a/builder/parallels/common/ssh.go b/builder/parallels/common/ssh.go index 8c2c8706f..6673698b5 100644 --- a/builder/parallels/common/ssh.go +++ b/builder/parallels/common/ssh.go @@ -3,7 +3,7 @@ package common import ( commonssh "github.com/hashicorp/packer/common/ssh" packerssh "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" ) diff --git a/builder/parallels/common/step_attach_floppy.go b/builder/parallels/common/step_attach_floppy.go index 8b970fef2..fa9566011 100644 --- a/builder/parallels/common/step_attach_floppy.go +++ b/builder/parallels/common/step_attach_floppy.go @@ -1,11 +1,12 @@ package common import ( + "context" "fmt" "log" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepAttachFloppy is a step that attaches a floppy to the virtual machine. @@ -22,7 +23,7 @@ type StepAttachFloppy struct { // Run adds a virtual FDD device to the VM and attaches the image. // If the image is not specified, then this step will be skipped. 
-func (s *StepAttachFloppy) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepAttachFloppy) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { // Determine if we even have a floppy disk to attach var floppyPath string if floppyPathRaw, ok := state.GetOk("floppy_path"); ok { diff --git a/builder/parallels/common/step_attach_floppy_test.go b/builder/parallels/common/step_attach_floppy_test.go index f854fd23c..2d951cf05 100644 --- a/builder/parallels/common/step_attach_floppy_test.go +++ b/builder/parallels/common/step_attach_floppy_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "io/ioutil" "os" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepAttachFloppy_impl(t *testing.T) { @@ -30,7 +31,7 @@ func TestStepAttachFloppy(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -72,7 +73,7 @@ func TestStepAttachFloppy_noFloppy(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/parallels/common/step_attach_parallels_tools.go b/builder/parallels/common/step_attach_parallels_tools.go index 87d602df7..1c1ea31b4 100644 --- a/builder/parallels/common/step_attach_parallels_tools.go +++ b/builder/parallels/common/step_attach_parallels_tools.go @@ -1,11 +1,12 @@ package common import ( + "context" "fmt" "log" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepAttachParallelsTools is a step that attaches Parallels Tools ISO image @@ -25,7 +26,7 @@ type StepAttachParallelsTools struct { // Run adds a virtual CD-ROM device to the VM and attaches Parallels Tools ISO image. // If ISO image is not specified, then this step will be skipped. -func (s *StepAttachParallelsTools) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepAttachParallelsTools) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/parallels/common/step_compact_disk.go b/builder/parallels/common/step_compact_disk.go index 2e88e7bd5..85688de51 100644 --- a/builder/parallels/common/step_compact_disk.go +++ b/builder/parallels/common/step_compact_disk.go @@ -1,10 +1,11 @@ package common import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCompactDisk is a step that removes all empty blocks from expanding @@ -22,7 +23,7 @@ type StepCompactDisk struct { } // Run runs the compaction of the virtual disk attached to the VM. 
-func (s *StepCompactDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCompactDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) vmName := state.Get("vmName").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/parallels/common/step_compact_disk_test.go b/builder/parallels/common/step_compact_disk_test.go index ace932a2d..e32ef217b 100644 --- a/builder/parallels/common/step_compact_disk_test.go +++ b/builder/parallels/common/step_compact_disk_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "io/ioutil" "os" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCompactDisk_impl(t *testing.T) { @@ -31,7 +32,7 @@ func TestStepCompactDisk(t *testing.T) { driver.DiskPathResult = tf.Name() // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -59,7 +60,7 @@ func TestStepCompactDisk_skip(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/parallels/common/step_output_dir.go b/builder/parallels/common/step_output_dir.go index 3accd2cbb..f6f092c4a 100644 --- a/builder/parallels/common/step_output_dir.go +++ b/builder/parallels/common/step_output_dir.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "log" "os" "path/filepath" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepOutputDir sets up the output directory by creating it if it does @@ -21,7 +22,7 @@ type StepOutputDir struct { } // Run sets up the output directory. 
-func (s *StepOutputDir) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepOutputDir) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if _, err := os.Stat(s.Path); err == nil && s.Force { diff --git a/builder/parallels/common/step_output_dir_test.go b/builder/parallels/common/step_output_dir_test.go index 04f5a7339..53c1d885c 100644 --- a/builder/parallels/common/step_output_dir_test.go +++ b/builder/parallels/common/step_output_dir_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "io/ioutil" "os" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func testStepOutputDir(t *testing.T) *StepOutputDir { @@ -27,9 +28,11 @@ func TestStepOutputDir_impl(t *testing.T) { func TestStepOutputDir(t *testing.T) { state := testState(t) step := testStepOutputDir(t) + // Delete the test output directory when done + defer os.RemoveAll(step.Path) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -51,7 +54,7 @@ func TestStepOutputDir_cancelled(t *testing.T) { step := testStepOutputDir(t) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -76,7 +79,7 @@ func TestStepOutputDir_halted(t *testing.T) { step := testStepOutputDir(t) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/parallels/common/step_prepare_parallels_tools.go b/builder/parallels/common/step_prepare_parallels_tools.go index 99e8722dc..5da892782 100644 --- a/builder/parallels/common/step_prepare_parallels_tools.go +++ b/builder/parallels/common/step_prepare_parallels_tools.go @@ -1,10 +1,11 @@ package common import ( + "context" "fmt" "os" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // StepPrepareParallelsTools is a step that prepares parameters related @@ -21,7 +22,7 @@ type StepPrepareParallelsTools struct { } // Run sets the value of "parallels_tools_path". 
-func (s *StepPrepareParallelsTools) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPrepareParallelsTools) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) if s.ParallelsToolsMode == ParallelsToolsModeDisable { diff --git a/builder/parallels/common/step_prepare_parallels_tools_test.go b/builder/parallels/common/step_prepare_parallels_tools_test.go index 0647149e5..4f46fc642 100644 --- a/builder/parallels/common/step_prepare_parallels_tools_test.go +++ b/builder/parallels/common/step_prepare_parallels_tools_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "io/ioutil" "os" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepPrepareParallelsTools_impl(t *testing.T) { @@ -32,7 +33,7 @@ func TestStepPrepareParallelsTools(t *testing.T) { driver.ToolsISOPathResult = tf.Name() // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -67,7 +68,7 @@ func TestStepPrepareParallelsTools_disabled(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -93,7 +94,7 @@ func TestStepPrepareParallelsTools_nonExist(t *testing.T) { driver.ToolsISOPathResult = "foo" // Test the run - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); !ok { diff --git a/builder/parallels/common/step_prlctl.go b/builder/parallels/common/step_prlctl.go index a38d8b7fe..9ae6ec7e9 100644 --- a/builder/parallels/common/step_prlctl.go +++ b/builder/parallels/common/step_prlctl.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type commandTemplate struct { @@ -28,7 +29,7 @@ type StepPrlctl struct { } // Run executes `prlctl` commands. -func (s *StepPrlctl) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepPrlctl) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/parallels/common/step_run.go b/builder/parallels/common/step_run.go index 8f86e089b..d59051a06 100644 --- a/builder/parallels/common/step_run.go +++ b/builder/parallels/common/step_run.go @@ -1,11 +1,11 @@ package common import ( + "context" "fmt" - "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepRun is a step that starts the virtual machine. @@ -17,13 +17,11 @@ import ( // // Produces: type StepRun struct { - BootWait time.Duration - vmName string } // Run starts the VM. 
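+// Note: the boot_wait pause that used to happen in this step is now handled by +// StepTypeBootCommand, which waits for the configured duration before typing the boot command.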
-func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRun) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) @@ -39,22 +37,6 @@ func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { s.vmName = vmName - if int64(s.BootWait) > 0 { - ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait)) - wait := time.After(s.BootWait) - WAITLOOP: - for { - select { - case <-wait: - break WAITLOOP - case <-time.After(1 * time.Second): - if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } - } - } - } - return multistep.ActionContinue } @@ -68,7 +50,7 @@ func (s *StepRun) Cleanup(state multistep.StateBag) { ui := state.Get("ui").(packer.Ui) if running, _ := driver.IsRunning(s.vmName); running { - if err := driver.Prlctl("stop", s.vmName); err != nil { + if err := driver.Stop(s.vmName); err != nil { ui.Error(fmt.Sprintf("Error stopping VM: %s", err)) } } diff --git a/builder/parallels/common/step_shutdown.go b/builder/parallels/common/step_shutdown.go index f8ec90009..7f9d3b663 100644 --- a/builder/parallels/common/step_shutdown.go +++ b/builder/parallels/common/step_shutdown.go @@ -2,13 +2,14 @@ package common import ( "bytes" + "context" "errors" "fmt" "log" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepShutdown is a step that shuts down the machine. It first attempts to do @@ -28,7 +29,7 @@ type StepShutdown struct { } // Run shuts down the VM. -func (s *StepShutdown) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepShutdown) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/parallels/common/step_shutdown_test.go b/builder/parallels/common/step_shutdown_test.go index de3baabda..0d137431f 100644 --- a/builder/parallels/common/step_shutdown_test.go +++ b/builder/parallels/common/step_shutdown_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "testing" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func TestStepShutdown_impl(t *testing.T) { @@ -23,7 +24,7 @@ func TestStepShutdown_noShutdownCommand(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -60,7 +61,7 @@ func TestStepShutdown_shutdownCommand(t *testing.T) { }() // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -97,7 +98,7 @@ func TestStepShutdown_shutdownTimeout(t *testing.T) { }() // Test the run - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); !ok { diff --git a/builder/parallels/common/step_test.go 
b/builder/parallels/common/step_test.go index cdaab7922..0964784f7 100644 --- a/builder/parallels/common/step_test.go +++ b/builder/parallels/common/step_test.go @@ -4,8 +4,8 @@ import ( "bytes" "testing" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/parallels/common/step_type_boot_command.go b/builder/parallels/common/step_type_boot_command.go index c3076b82c..ed21c049d 100644 --- a/builder/parallels/common/step_type_boot_command.go +++ b/builder/parallels/common/step_type_boot_command.go @@ -1,17 +1,15 @@ package common import ( + "context" "fmt" - "log" - "strings" "time" - "unicode" - "unicode/utf8" packer_common "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type bootCommandTemplateData struct { @@ -22,29 +20,32 @@ type bootCommandTemplateData struct { // StepTypeBootCommand is a step that "types" the boot command into the VM via // the prltype script, built on the Parallels Virtualization SDK - Python API. -// -// Uses: -// driver Driver -// http_port int -// ui packer.Ui -// vmName string -// -// Produces: -// type StepTypeBootCommand struct { - BootCommand []string + BootCommand string + BootWait time.Duration HostInterfaces []string VMName string Ctx interpolate.Context } // Run types the boot command by sending key scancodes into the VM. -func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepTypeBootCommand) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { debug := state.Get("debug").(bool) httpPort := state.Get("http_port").(uint) ui := state.Get("ui").(packer.Ui) driver := state.Get("driver").(Driver) + // Wait the for the vm to boot. + if int64(s.BootWait) > 0 { + ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait.String())) + select { + case <-time.After(s.BootWait): + break + case <-ctx.Done(): + return multistep.ActionHalt + } + } + var pauseFn multistep.DebugPauseFn if debug { pauseFn = state.Get("pauseFn").(multistep.DebugPauseFn) @@ -75,73 +76,37 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction s.VMName, } + sendCodes := func(codes []string) error { + return driver.SendKeyScanCodes(s.VMName, codes...) 
+ } + d := bootcommand.NewPCXTDriver(sendCodes, -1) + ui.Say("Typing the boot command...") - for i, command := range s.BootCommand { - command, err := interpolate.Render(command, &s.Ctx) - if err != nil { - err = fmt.Errorf("Error preparing boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } + command, err := interpolate.Render(s.BootCommand, &s.Ctx) + if err != nil { + err = fmt.Errorf("Error preparing boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - codes := []string{} - for _, code := range scancodes(command) { - if code == "wait" { - if err := driver.SendKeyScanCodes(s.VMName, codes...); err != nil { - err = fmt.Errorf("Error sending boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - codes = []string{} - time.Sleep(1 * time.Second) - continue - } + seq, err := bootcommand.GenerateExpressionSequence(command) + if err != nil { + err := fmt.Errorf("Error generating boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - if code == "wait5" { - if err := driver.SendKeyScanCodes(s.VMName, codes...); err != nil { - err = fmt.Errorf("Error sending boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - codes = []string{} - time.Sleep(5 * time.Second) - continue - } + if err := seq.Do(ctx, d); err != nil { + err := fmt.Errorf("Error running boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - if code == "wait10" { - if err := driver.SendKeyScanCodes(s.VMName, codes...); err != nil { - err = fmt.Errorf("Error sending boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - codes = []string{} - time.Sleep(10 * time.Second) - continue - } - - // Since typing is sometimes so slow, we check for an interrupt - // in between each character. - if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } - codes = append(codes, code) - } - - if pauseFn != nil { - pauseFn(multistep.DebugLocationAfterRun, fmt.Sprintf("boot_command[%d]: %s", i, command), state) - } - - log.Printf("Sending scancodes: %#v", codes) - if err := driver.SendKeyScanCodes(s.VMName, codes...); err != nil { - err = fmt.Errorf("Error sending boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } + if pauseFn != nil { + pauseFn(multistep.DebugLocationAfterRun, fmt.Sprintf("boot_command: %s", command), state) } return multistep.ActionContinue @@ -149,202 +114,3 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction // Cleanup does nothing. func (*StepTypeBootCommand) Cleanup(multistep.StateBag) {} - -func scancodes(message string) []string { - // Scancodes reference: http://www.win.tue.nl/~aeb/linux/kbd/scancodes-1.html - // - // Scancodes represent raw keyboard output and are fed to the VM by the - // Parallels Virtualization SDK - C API, PrlDevKeyboard_SendKeyEvent - // - // Scancodes are recorded here in pairs. The first entry represents - // the key press and the second entry represents the key release and is - // derived from the first by the addition of 0x80. 
- special := make(map[string][]string) - special["<bs>"] = []string{"0e", "8e"} - special["<del>"] = []string{"53", "d3"} - special["<enter>"] = []string{"1c", "9c"} - special["<esc>"] = []string{"01", "81"} - special["<f1>"] = []string{"3b", "bb"} - special["<f2>"] = []string{"3c", "bc"} - special["<f3>"] = []string{"3d", "bd"} - special["<f4>"] = []string{"3e", "be"} - special["<f5>"] = []string{"3f", "bf"} - special["<f6>"] = []string{"40", "c0"} - special["<f7>"] = []string{"41", "c1"} - special["<f8>"] = []string{"42", "c2"} - special["<f9>"] = []string{"43", "c3"} - special["<f10>"] = []string{"44", "c4"} - special["<return>"] = []string{"1c", "9c"} - special["<tab>"] = []string{"0f", "8f"} - - special["<up>"] = []string{"48", "c8"} - special["<down>"] = []string{"50", "d0"} - special["<left>"] = []string{"4b", "cb"} - special["<right>"] = []string{"4d", "cd"} - special["<spacebar>"] = []string{"39", "b9"} - special["<insert>"] = []string{"52", "d2"} - special["<home>"] = []string{"47", "c7"} - special["<end>"] = []string{"4f", "cf"} - special["<pageUp>"] = []string{"49", "c9"} - special["<pageDown>"] = []string{"51", "d1"} - - special["<leftAlt>"] = []string{"38", "b8"} - special["<leftCtrl>"] = []string{"1d", "9d"} - special["<leftShift>"] = []string{"2a", "aa"} - special["<rightAlt>"] = []string{"e038", "e0b8"} - special["<rightCtrl>"] = []string{"e01d", "e09d"} - special["<rightShift>"] = []string{"36", "b6"} - - shiftedChars := "!@#$%^&*()_+{}:\"~|<>?" - - scancodeIndex := make(map[string]uint) - scancodeIndex["1234567890-="] = 0x02 - scancodeIndex["!@#$%^&*()_+"] = 0x02 - scancodeIndex["qwertyuiop[]"] = 0x10 - scancodeIndex["QWERTYUIOP{}"] = 0x10 - scancodeIndex["asdfghjkl;'`"] = 0x1e - scancodeIndex[`ASDFGHJKL:"~`] = 0x1e - scancodeIndex["\\zxcvbnm,./"] = 0x2b - scancodeIndex["|ZXCVBNM<>?"] = 0x2b - scancodeIndex[" "] = 0x39 - - scancodeMap := make(map[rune]uint) - for chars, start := range scancodeIndex { - var i uint - for len(chars) > 0 { - r, size := utf8.DecodeRuneInString(chars) - chars = chars[size:] - scancodeMap[r] = start + i - i++ - } - } - - result := make([]string, 0, len(message)*2) - for len(message) > 0 { - var scancode []string - - if strings.HasPrefix(message, "<leftAltOn>") { - scancode = []string{"38"} - message = message[len("<leftAltOn>"):] - log.Printf("Special code '<leftAltOn>' found, replacing with: 38") - } - - if strings.HasPrefix(message, "<leftCtrlOn>") { - scancode = []string{"1d"} - message = message[len("<leftCtrlOn>"):] - log.Printf("Special code '<leftCtrlOn>' found, replacing with: 1d") - } - - if strings.HasPrefix(message, "<leftShiftOn>") { - scancode = []string{"2a"} - message = message[len("<leftShiftOn>"):] - log.Printf("Special code '<leftShiftOn>' found, replacing with: 2a") - } - - if strings.HasPrefix(message, "<leftAltOff>") { - scancode = []string{"b8"} - message = message[len("<leftAltOff>"):] - log.Printf("Special code '<leftAltOff>' found, replacing with: b8") - } - - if strings.HasPrefix(message, "<leftCtrlOff>") { - scancode = []string{"9d"} - message = message[len("<leftCtrlOff>"):] - log.Printf("Special code '<leftCtrlOff>' found, replacing with: 9d") - } - - if strings.HasPrefix(message, "<leftShiftOff>") { - scancode = []string{"aa"} - message = message[len("<leftShiftOff>"):] - log.Printf("Special code '<leftShiftOff>' found, replacing with: aa") - } - - if strings.HasPrefix(message, "<rightAltOn>") { - scancode = []string{"e038"} - message = message[len("<rightAltOn>"):] - log.Printf("Special code '<rightAltOn>' found, replacing with: e038") - } - - if strings.HasPrefix(message, "<rightCtrlOn>") { - scancode = []string{"e01d"} - message = message[len("<rightCtrlOn>"):] - log.Printf("Special code '<rightCtrlOn>' found, replacing with: e01d") - } - - if strings.HasPrefix(message, "<rightShiftOn>") { - scancode = []string{"36"} - message = message[len("<rightShiftOn>"):] - log.Printf("Special code '<rightShiftOn>' found, replacing with: 36") - } - - if strings.HasPrefix(message, "<rightAltOff>") { - scancode = []string{"e0b8"} - message = message[len("<rightAltOff>"):] -
log.Printf("Special code '<rightAltOff>' found, replacing with: e0b8") - } - - if strings.HasPrefix(message, "<rightCtrlOff>") { - scancode = []string{"e09d"} - message = message[len("<rightCtrlOff>"):] - log.Printf("Special code '<rightCtrlOff>' found, replacing with: e09d") - } - - if strings.HasPrefix(message, "<rightShiftOff>") { - scancode = []string{"b6"} - message = message[len("<rightShiftOff>"):] - log.Printf("Special code '<rightShiftOff>' found, replacing with: b6") - } - - if strings.HasPrefix(message, "<wait>") { - log.Printf("Special code <wait> found, will sleep 1 second at this point.") - scancode = []string{"wait"} - message = message[len("<wait>"):] - } - - if strings.HasPrefix(message, "<wait5>") { - log.Printf("Special code <wait5> found, will sleep 5 seconds at this point.") - scancode = []string{"wait5"} - message = message[len("<wait5>"):] - } - - if strings.HasPrefix(message, "<wait10>") { - log.Printf("Special code <wait10> found, will sleep 10 seconds at this point.") - scancode = []string{"wait10"} - message = message[len("<wait10>"):] - } - - if scancode == nil { - for specialCode, specialValue := range special { - if strings.HasPrefix(message, specialCode) { - log.Printf("Special code '%s' found, replacing with: %s", specialCode, specialValue) - scancode = specialValue - message = message[len(specialCode):] - break - } - } - } - - if scancode == nil { - r, size := utf8.DecodeRuneInString(message) - message = message[size:] - scancodeInt := scancodeMap[r] - keyShift := unicode.IsUpper(r) || strings.ContainsRune(shiftedChars, r) - - scancode = make([]string, 0, 4) - if keyShift { - scancode = append(scancode, "2a") - } - - scancode = append(scancode, fmt.Sprintf("%02x", scancodeInt)) - scancode = append(scancode, fmt.Sprintf("%02x", scancodeInt+0x80)) - - if keyShift { - scancode = append(scancode, "aa") - } - } - - result = append(result, scancode...) - } - - return result -} diff --git a/builder/parallels/common/step_type_boot_command_test.go b/builder/parallels/common/step_type_boot_command_test.go deleted file mode 100644 index ea33b9369..000000000 --- a/builder/parallels/common/step_type_boot_command_test.go +++ /dev/null @@ -1,77 +0,0 @@ -package common - -import ( - "strings" - "testing" - - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" -) - -func TestStepTypeBootCommand(t *testing.T) { - state := testState(t) - - var bootcommand = []string{ - "1234567890-=<enter><wait>", - "!@#$%^&*()_+<enter>", - "qwertyuiop[]<enter>", - "QWERTYUIOP{}<enter>", - "asdfghjkl;'`<enter>", - `ASDFGHJKL:"~<enter>`, - "\\zxcvbnm,./<enter>", - "|ZXCVBNM<>?<enter>", - " <enter>", - } - - step := StepTypeBootCommand{ - BootCommand: bootcommand, - HostInterfaces: []string{}, - VMName: "myVM", - Ctx: *testConfigTemplate(t), - } - - comm := new(packer.MockCommunicator) - state.Put("communicator", comm) - - driver := state.Get("driver").(*DriverMock) - driver.VersionResult = "foo" - state.Put("http_port", uint(0)) - - // Test the run - if action := step.Run(state); action != multistep.ActionContinue { - t.Fatalf("bad action: %#v", action) - } - if _, ok := state.GetOk("error"); ok { - t.Fatal("should NOT have error") - } - - // Verify - var expected = [][]string{ - {"02", "82", "03", "83", "04", "84", "05", "85", "06", "86", "07", "87", "08", "88", "09", "89", "0a", "8a", "0b", "8b", "0c", "8c", "0d", "8d", "1c", "9c"}, - {}, - {"2a", "02", "82", "aa", "2a", "03", "83", "aa", "2a", "04", "84", "aa", "2a", "05", "85", "aa", "2a", "06", "86", "aa", "2a", "07", "87", "aa", "2a", "08", "88", "aa", "2a", "09", "89", "aa", "2a", "0a", "8a", "aa", "2a", "0b", "8b", "aa", "2a", "0c", "8c", "aa", "2a", "0d", "8d", "aa", "1c", "9c"}, - {"10", "90", "11", "91", "12", "92", "13", "93", "14", "94",
"15", "95", "16", "96", "17", "97", "18", "98", "19", "99", "1a", "9a", "1b", "9b", "1c", "9c"}, - {"2a", "10", "90", "aa", "2a", "11", "91", "aa", "2a", "12", "92", "aa", "2a", "13", "93", "aa", "2a", "14", "94", "aa", "2a", "15", "95", "aa", "2a", "16", "96", "aa", "2a", "17", "97", "aa", "2a", "18", "98", "aa", "2a", "19", "99", "aa", "2a", "1a", "9a", "aa", "2a", "1b", "9b", "aa", "1c", "9c"}, - {"1e", "9e", "1f", "9f", "20", "a0", "21", "a1", "22", "a2", "23", "a3", "24", "a4", "25", "a5", "26", "a6", "27", "a7", "28", "a8", "29", "a9", "1c", "9c"}, - {"2a", "1e", "9e", "aa", "2a", "1f", "9f", "aa", "2a", "20", "a0", "aa", "2a", "21", "a1", "aa", "2a", "22", "a2", "aa", "2a", "23", "a3", "aa", "2a", "24", "a4", "aa", "2a", "25", "a5", "aa", "2a", "26", "a6", "aa", "2a", "27", "a7", "aa", "2a", "28", "a8", "aa", "2a", "29", "a9", "aa", "1c", "9c"}, - {"2b", "ab", "2c", "ac", "2d", "ad", "2e", "ae", "2f", "af", "30", "b0", "31", "b1", "32", "b2", "33", "b3", "34", "b4", "35", "b5", "1c", "9c"}, - {"2a", "2b", "ab", "aa", "2a", "2c", "ac", "aa", "2a", "2d", "ad", "aa", "2a", "2e", "ae", "aa", "2a", "2f", "af", "aa", "2a", "30", "b0", "aa", "2a", "31", "b1", "aa", "2a", "32", "b2", "aa", "2a", "33", "b3", "aa", "2a", "34", "b4", "aa", "2a", "35", "b5", "aa", "1c", "9c"}, - {"39", "b9", "1c", "9c"}, - } - fail := false - - for i := range driver.SendKeyScanCodesCalls { - t.Logf("prltype %s\n", strings.Join(driver.SendKeyScanCodesCalls[i], " ")) - } - - for i := range expected { - for j := range expected[i] { - if driver.SendKeyScanCodesCalls[i][j] != expected[i][j] { - fail = true - } - } - } - if fail { - t.Fatalf("Sent bad scancodes: %#v\n Expected: %#v", driver.SendKeyScanCodesCalls, expected) - } -} diff --git a/builder/parallels/common/step_upload_parallels_tools.go b/builder/parallels/common/step_upload_parallels_tools.go index 5769f1be5..0da0b26e0 100644 --- a/builder/parallels/common/step_upload_parallels_tools.go +++ b/builder/parallels/common/step_upload_parallels_tools.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "log" "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) // This step uploads the Parallels Tools ISO to the virtual machine. @@ -37,7 +38,7 @@ type StepUploadParallelsTools struct { } // Run uploads the Parallels Tools ISO to the VM. 
-func (s *StepUploadParallelsTools) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUploadParallelsTools) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) ui := state.Get("ui").(packer.Ui) diff --git a/builder/parallels/common/step_upload_parallels_tools_test.go b/builder/parallels/common/step_upload_parallels_tools_test.go index 263571ff6..aeff516d3 100644 --- a/builder/parallels/common/step_upload_parallels_tools_test.go +++ b/builder/parallels/common/step_upload_parallels_tools_test.go @@ -1,10 +1,11 @@ package common import ( + "context" "testing" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func TestStepUploadParallelsTools_impl(t *testing.T) { @@ -23,7 +24,7 @@ func TestStepUploadParallelsTools(t *testing.T) { state.Put("communicator", comm) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -48,7 +49,7 @@ func TestStepUploadParallelsTools_interpolate(t *testing.T) { state.Put("communicator", comm) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -73,7 +74,7 @@ func TestStepUploadParallelsTools_attach(t *testing.T) { state.Put("communicator", comm) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/parallels/common/step_upload_version.go b/builder/parallels/common/step_upload_version.go index 6824b035e..1a1bfcf0c 100644 --- a/builder/parallels/common/step_upload_version.go +++ b/builder/parallels/common/step_upload_version.go @@ -2,11 +2,12 @@ package common import ( "bytes" + "context" "fmt" "log" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepUploadVersion is a step that uploads a file containing the version of @@ -21,7 +22,7 @@ type StepUploadVersion struct { } // Run uploads a file containing the version of Parallels Desktop. 
-func (s *StepUploadVersion) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUploadVersion) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/parallels/common/step_upload_version_test.go b/builder/parallels/common/step_upload_version_test.go index 8aaf9cc90..998d35f0e 100644 --- a/builder/parallels/common/step_upload_version_test.go +++ b/builder/parallels/common/step_upload_version_test.go @@ -1,10 +1,11 @@ package common import ( + "context" "testing" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func TestStepUploadVersion_impl(t *testing.T) { @@ -23,7 +24,7 @@ func TestStepUploadVersion(t *testing.T) { driver.VersionResult = "foo" // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -48,7 +49,7 @@ func TestStepUploadVersion_noPath(t *testing.T) { state.Put("communicator", comm) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/parallels/iso/builder.go b/builder/parallels/iso/builder.go index 14ae2991d..9ba2002f7 100644 --- a/builder/parallels/iso/builder.go +++ b/builder/parallels/iso/builder.go @@ -7,11 +7,12 @@ import ( parallelscommon "github.com/hashicorp/packer/builder/parallels/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const BuilderId = "rickard-von-essen.parallels" @@ -26,16 +27,15 @@ type Config struct { common.HTTPConfig `mapstructure:",squash"` common.ISOConfig `mapstructure:",squash"` common.FloppyConfig `mapstructure:",squash"` + bootcommand.BootConfig `mapstructure:",squash"` parallelscommon.OutputConfig `mapstructure:",squash"` parallelscommon.PrlctlConfig `mapstructure:",squash"` parallelscommon.PrlctlPostConfig `mapstructure:",squash"` parallelscommon.PrlctlVersionConfig `mapstructure:",squash"` - parallelscommon.RunConfig `mapstructure:",squash"` parallelscommon.ShutdownConfig `mapstructure:",squash"` parallelscommon.SSHConfig `mapstructure:",squash"` parallelscommon.ToolsConfig `mapstructure:",squash"` - BootCommand []string `mapstructure:"boot_command"` DiskSize uint `mapstructure:"disk_size"` DiskType string `mapstructure:"disk_type"` GuestOSType string `mapstructure:"guest_os_type"` @@ -76,13 +76,13 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { errs = packer.MultiErrorAppend(errs, b.config.FloppyConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend( errs, b.config.OutputConfig.Prepare(&b.config.ctx, &b.config.PackerConfig)...) - errs = packer.MultiErrorAppend(errs, b.config.RunConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.PrlctlConfig.Prepare(&b.config.ctx)...) 
errs = packer.MultiErrorAppend(errs, b.config.PrlctlPostConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.PrlctlVersionConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.ShutdownConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.SSHConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.ToolsConfig.Prepare(&b.config.ctx)...) + errs = packer.MultiErrorAppend(errs, b.config.BootConfig.Prepare(&b.config.ctx)...) if b.config.DiskSize == 0 { b.config.DiskSize = 40000 @@ -185,11 +185,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Commands: b.config.Prlctl, Ctx: b.config.ctx, }, - ¶llelscommon.StepRun{ - BootWait: b.config.BootWait, - }, + ¶llelscommon.StepRun{}, ¶llelscommon.StepTypeBootCommand{ - BootCommand: b.config.BootCommand, + BootWait: b.config.BootWait, + BootCommand: b.config.FlatBootCommand(), HostInterfaces: b.config.HostInterfaces, VMName: b.config.VMName, Ctx: b.config.ctx, diff --git a/builder/parallels/iso/builder_test.go b/builder/parallels/iso/builder_test.go index b4b9eb917..88a28144e 100644 --- a/builder/parallels/iso/builder_test.go +++ b/builder/parallels/iso/builder_test.go @@ -86,7 +86,7 @@ func TestBuilderPrepare_FloppyFiles(t *testing.T) { func TestBuilderPrepare_InvalidFloppies(t *testing.T) { var b Builder config := testConfig() - config["floppy_files"] = []string{"nonexistant.bat", "nonexistant.ps1"} + config["floppy_files"] = []string{"nonexistent.bat", "nonexistent.ps1"} b = Builder{} _, errs := b.Prepare(config) if errs == nil { diff --git a/builder/parallels/iso/step_attach_iso.go b/builder/parallels/iso/step_attach_iso.go index 4dead4705..a4cf634a8 100644 --- a/builder/parallels/iso/step_attach_iso.go +++ b/builder/parallels/iso/step_attach_iso.go @@ -1,12 +1,13 @@ package iso import ( + "context" "fmt" "log" parallelscommon "github.com/hashicorp/packer/builder/parallels/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step attaches the ISO to the virtual machine. @@ -21,7 +22,7 @@ import ( // attachedIso bool type stepAttachISO struct{} -func (s *stepAttachISO) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepAttachISO) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(parallelscommon.Driver) isoPath := state.Get("iso_path").(string) ui := state.Get("ui").(packer.Ui) diff --git a/builder/parallels/iso/step_create_disk.go b/builder/parallels/iso/step_create_disk.go index 0333472be..42640f5a8 100644 --- a/builder/parallels/iso/step_create_disk.go +++ b/builder/parallels/iso/step_create_disk.go @@ -1,19 +1,20 @@ package iso import ( + "context" "fmt" "strconv" parallelscommon "github.com/hashicorp/packer/builder/parallels/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step creates the virtual disk that will be used as the // hard drive for the virtual machine. 
type stepCreateDisk struct{} -func (s *stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(parallelscommon.Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/parallels/iso/step_create_vm.go b/builder/parallels/iso/step_create_vm.go index b085e4485..8a4e22517 100644 --- a/builder/parallels/iso/step_create_vm.go +++ b/builder/parallels/iso/step_create_vm.go @@ -1,11 +1,12 @@ package iso import ( + "context" "fmt" parallelscommon "github.com/hashicorp/packer/builder/parallels/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step creates the actual virtual machine. @@ -16,7 +17,7 @@ type stepCreateVM struct { vmName string } -func (s *stepCreateVM) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateVM) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(parallelscommon.Driver) diff --git a/builder/parallels/iso/step_set_boot_order.go b/builder/parallels/iso/step_set_boot_order.go index a32249173..cfcda6080 100644 --- a/builder/parallels/iso/step_set_boot_order.go +++ b/builder/parallels/iso/step_set_boot_order.go @@ -1,11 +1,12 @@ package iso import ( + "context" "fmt" parallelscommon "github.com/hashicorp/packer/builder/parallels/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step sets the device boot order for the virtual machine. @@ -18,7 +19,7 @@ import ( // Produces: type stepSetBootOrder struct{} -func (s *stepSetBootOrder) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepSetBootOrder) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(parallelscommon.Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/parallels/pvm/builder.go b/builder/parallels/pvm/builder.go index 5db72bbaa..e36916092 100644 --- a/builder/parallels/pvm/builder.go +++ b/builder/parallels/pvm/builder.go @@ -8,8 +8,8 @@ import ( parallelscommon "github.com/hashicorp/packer/builder/parallels/common" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // Builder implements packer.Builder and builds the actual Parallels @@ -74,11 +74,10 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Commands: b.config.Prlctl, Ctx: b.config.ctx, }, - ¶llelscommon.StepRun{ - BootWait: b.config.BootWait, - }, + ¶llelscommon.StepRun{}, ¶llelscommon.StepTypeBootCommand{ - BootCommand: b.config.BootCommand, + BootCommand: b.config.FlatBootCommand(), + BootWait: b.config.BootWait, HostInterfaces: []string{}, VMName: b.config.VMName, Ctx: b.config.ctx, @@ -106,6 +105,9 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Commands: b.config.PrlctlPost, Ctx: b.config.ctx, }, + ¶llelscommon.StepCompactDisk{ + Skip: b.config.SkipCompaction, + }, } // Run the steps. 
diff --git a/builder/parallels/pvm/config.go b/builder/parallels/pvm/config.go index 7641e0c9d..bcca5e724 100644 --- a/builder/parallels/pvm/config.go +++ b/builder/parallels/pvm/config.go @@ -6,6 +6,7 @@ import ( parallelscommon "github.com/hashicorp/packer/builder/parallels/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" @@ -19,15 +20,15 @@ type Config struct { parallelscommon.PrlctlConfig `mapstructure:",squash"` parallelscommon.PrlctlPostConfig `mapstructure:",squash"` parallelscommon.PrlctlVersionConfig `mapstructure:",squash"` - parallelscommon.RunConfig `mapstructure:",squash"` parallelscommon.SSHConfig `mapstructure:",squash"` parallelscommon.ShutdownConfig `mapstructure:",squash"` + bootcommand.BootConfig `mapstructure:",squash"` parallelscommon.ToolsConfig `mapstructure:",squash"` - BootCommand []string `mapstructure:"boot_command"` - SourcePath string `mapstructure:"source_path"` - VMName string `mapstructure:"vm_name"` - ReassignMAC bool `mapstructure:"reassign_mac"` + SourcePath string `mapstructure:"source_path"` + SkipCompaction bool `mapstructure:"skip_compaction"` + VMName string `mapstructure:"vm_name"` + ReassignMAC bool `mapstructure:"reassign_mac"` ctx interpolate.Context } @@ -61,7 +62,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) { errs = packer.MultiErrorAppend(errs, c.PrlctlConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.PrlctlPostConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.PrlctlVersionConfig.Prepare(&c.ctx)...) - errs = packer.MultiErrorAppend(errs, c.RunConfig.Prepare(&c.ctx)...) + errs = packer.MultiErrorAppend(errs, c.BootConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.ShutdownConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.SSHConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.ToolsConfig.Prepare(&c.ctx)...) diff --git a/builder/parallels/pvm/config_test.go b/builder/parallels/pvm/config_test.go index c6c36a0b4..8c869c9fd 100644 --- a/builder/parallels/pvm/config_test.go +++ b/builder/parallels/pvm/config_test.go @@ -84,7 +84,7 @@ func TestNewConfig_FloppyFiles(t *testing.T) { func TestNewConfig_InvalidFloppies(t *testing.T) { c := testConfig(t) - c["floppy_files"] = []string{"nonexistant.bat", "nonexistant.ps1"} + c["floppy_files"] = []string{"nonexistent.bat", "nonexistent.ps1"} _, _, errs := NewConfig(c) if errs == nil { t.Fatalf("Nonexistent floppies should trigger multierror") diff --git a/builder/parallels/pvm/step_import.go b/builder/parallels/pvm/step_import.go index 91b8878af..0907c6b44 100644 --- a/builder/parallels/pvm/step_import.go +++ b/builder/parallels/pvm/step_import.go @@ -1,11 +1,12 @@ package pvm import ( + "context" "fmt" parallelscommon "github.com/hashicorp/packer/builder/parallels/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step imports an PVM VM into Parallels. 
@@ -15,7 +16,7 @@ type StepImport struct { vmName string } -func (s *StepImport) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepImport) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(parallelscommon.Driver) ui := state.Get("ui").(packer.Ui) config := state.Get("config").(*Config) diff --git a/builder/parallels/pvm/step_test.go b/builder/parallels/pvm/step_test.go index 827850a35..9d891c637 100644 --- a/builder/parallels/pvm/step_test.go +++ b/builder/parallels/pvm/step_test.go @@ -5,8 +5,8 @@ import ( "testing" parallelscommon "github.com/hashicorp/packer/builder/parallels/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/profitbricks/builder.go b/builder/profitbricks/builder.go index 750b843c0..6a1428075 100644 --- a/builder/profitbricks/builder.go +++ b/builder/profitbricks/builder.go @@ -2,11 +2,12 @@ package profitbricks import ( "fmt" + "log" + "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "log" ) const BuilderId = "packer.profitbricks" diff --git a/builder/profitbricks/builder_test.go b/builder/profitbricks/builder_test.go index 70b73226f..98252b410 100644 --- a/builder/profitbricks/builder_test.go +++ b/builder/profitbricks/builder_test.go @@ -2,8 +2,9 @@ package profitbricks import ( "fmt" - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { diff --git a/builder/profitbricks/config.go b/builder/profitbricks/config.go index bb4e8d37f..b38b8583c 100644 --- a/builder/profitbricks/config.go +++ b/builder/profitbricks/config.go @@ -2,13 +2,14 @@ package profitbricks import ( "errors" + "os" + "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" "github.com/mitchellh/mapstructure" - "os" ) type Config struct { diff --git a/builder/profitbricks/ssh.go b/builder/profitbricks/ssh.go index a40959cc7..d56696330 100644 --- a/builder/profitbricks/ssh.go +++ b/builder/profitbricks/ssh.go @@ -2,8 +2,9 @@ package profitbricks import ( "fmt" + "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" ) diff --git a/builder/profitbricks/step_create_server.go b/builder/profitbricks/step_create_server.go index a4de3cb19..00742762a 100644 --- a/builder/profitbricks/step_create_server.go +++ b/builder/profitbricks/step_create_server.go @@ -1,6 +1,7 @@ package profitbricks import ( + "context" "encoding/json" "errors" "fmt" @@ -8,14 +9,14 @@ import ( "strings" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/profitbricks/profitbricks-sdk-go" ) type stepCreateServer struct{} -func (s *stepCreateServer) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateServer) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) c := state.Get("config").(*Config) diff --git a/builder/profitbricks/step_create_ssh_key.go 
b/builder/profitbricks/step_create_ssh_key.go index 0afccb494..be0b973a8 100644 --- a/builder/profitbricks/step_create_ssh_key.go +++ b/builder/profitbricks/step_create_ssh_key.go @@ -1,13 +1,15 @@ package profitbricks import ( + "context" "crypto/x509" "encoding/pem" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "golang.org/x/crypto/ssh" "io/ioutil" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "golang.org/x/crypto/ssh" ) type StepCreateSSHKey struct { @@ -15,7 +17,7 @@ type StepCreateSSHKey struct { DebugKeyPath string } -func (s *StepCreateSSHKey) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateSSHKey) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) c := state.Get("config").(*Config) diff --git a/builder/profitbricks/step_take_snapshot.go b/builder/profitbricks/step_take_snapshot.go index 822ea8699..a3bfe2d7d 100644 --- a/builder/profitbricks/step_take_snapshot.go +++ b/builder/profitbricks/step_take_snapshot.go @@ -1,17 +1,18 @@ package profitbricks import ( + "context" "encoding/json" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/profitbricks/profitbricks-sdk-go" ) type stepTakeSnapshot struct{} -func (s *stepTakeSnapshot) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepTakeSnapshot) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) c := state.Get("config").(*Config) diff --git a/builder/qemu/builder.go b/builder/qemu/builder.go index 2b6b206d9..fa2f1f9d9 100644 --- a/builder/qemu/builder.go +++ b/builder/qemu/builder.go @@ -11,11 +11,12 @@ import ( "time" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const BuilderId = "transcend.qemu" @@ -25,6 +26,8 @@ var accels = map[string]struct{}{ "kvm": {}, "tcg": {}, "xen": {}, + "hax": {}, + "hvf": {}, } var netDevice = map[string]bool{ @@ -78,15 +81,15 @@ type Builder struct { } type Config struct { - common.PackerConfig `mapstructure:",squash"` - common.HTTPConfig `mapstructure:",squash"` - common.ISOConfig `mapstructure:",squash"` - Comm communicator.Config `mapstructure:",squash"` - common.FloppyConfig `mapstructure:",squash"` + common.PackerConfig `mapstructure:",squash"` + common.HTTPConfig `mapstructure:",squash"` + common.ISOConfig `mapstructure:",squash"` + bootcommand.VNCConfig `mapstructure:",squash"` + Comm communicator.Config `mapstructure:",squash"` + common.FloppyConfig `mapstructure:",squash"` ISOSkipCache bool `mapstructure:"iso_skip_cache"` Accelerator string `mapstructure:"accelerator"` - BootCommand []string `mapstructure:"boot_command"` DiskInterface string `mapstructure:"disk_interface"` DiskSize uint `mapstructure:"disk_size"` DiskCache string `mapstructure:"disk_cache"` @@ -96,6 +99,7 @@ type Config struct { Format string `mapstructure:"format"` Headless bool `mapstructure:"headless"` DiskImage bool `mapstructure:"disk_image"` + UseBackingFile bool `mapstructure:"use_backing_file"` MachineType string `mapstructure:"machine_type"` NetDevice string `mapstructure:"net_device"` OutputDir 
string `mapstructure:"output_directory"` @@ -117,10 +121,8 @@ type Config struct { // TODO(mitchellh): deprecate RunOnce bool `mapstructure:"run_once"` - RawBootWait string `mapstructure:"boot_wait"` RawShutdownTimeout string `mapstructure:"shutdown_timeout"` - bootWait time.Duration `` shutdownTimeout time.Duration `` ctx interpolate.Context } @@ -144,7 +146,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { warnings := make([]string, 0) if b.config.DiskSize == 0 { - b.config.DiskSize = 40000 + b.config.DiskSize = 40960 } if b.config.DiskCache == "" { @@ -187,10 +189,6 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { b.config.QemuBinary = "qemu-system-x86_64" } - if b.config.RawBootWait == "" { - b.config.RawBootWait = "10s" - } - if b.config.SSHHostPortMin == 0 { b.config.SSHHostPortMin = 2222 } @@ -220,6 +218,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } errs = packer.MultiErrorAppend(errs, b.config.FloppyConfig.Prepare(&b.config.ctx)...) + errs = packer.MultiErrorAppend(errs, b.config.VNCConfig.Prepare(&b.config.ctx)...) if b.config.NetDevice == "" { b.config.NetDevice = "virtio-net" @@ -257,9 +256,14 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { b.config.DiskCompression = false } + if b.config.UseBackingFile && !(b.config.DiskImage && b.config.Format == "qcow2") { + errs = packer.MultiErrorAppend( + errs, errors.New("use_backing_file can only be enabled for QCOW2 images and when disk_image is true")) + } + if _, ok := accels[b.config.Accelerator]; !ok { errs = packer.MultiErrorAppend( - errs, errors.New("invalid accelerator, only 'kvm', 'tcg', 'xen', or 'none' are allowed")) + errs, errors.New("invalid accelerator, only 'kvm', 'tcg', 'xen', 'hax', 'hvf', or 'none' are allowed")) } if _, ok := netDevice[b.config.NetDevice]; !ok { @@ -279,7 +283,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { if _, ok := diskDiscard[b.config.DiskDiscard]; !ok { errs = packer.MultiErrorAppend( - errs, errors.New("unrecognized disk cache type")) + errs, errors.New("unrecognized disk discard type")) } if !b.config.PackerForce { @@ -290,12 +294,6 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } } - b.config.bootWait, err = time.ParseDuration(b.config.RawBootWait) - if err != nil { - errs = packer.MultiErrorAppend( - errs, fmt.Errorf("Failed parsing boot_wait: %s", err)) - } - if b.config.RawShutdownTimeout == "" { b.config.RawShutdownTimeout = "5m" } @@ -387,7 +385,6 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe steps = append(steps, new(stepConfigureVNC), steprun, - &stepBootWait{}, &stepTypeBootCommand{}, ) diff --git a/builder/qemu/builder_test.go b/builder/qemu/builder_test.go index 24c5dc47e..d0d403666 100644 --- a/builder/qemu/builder_test.go +++ b/builder/qemu/builder_test.go @@ -94,46 +94,6 @@ func TestBuilderPrepare_Defaults(t *testing.T) { } } -func TestBuilderPrepare_BootWait(t *testing.T) { - var b Builder - config := testConfig() - - // Test a default boot_wait - delete(config, "boot_wait") - warns, err := b.Prepare(config) - if len(warns) > 0 { - t.Fatalf("bad: %#v", warns) - } - if err != nil { - t.Fatalf("err: %s", err) - } - - if b.config.RawBootWait != "10s" { - t.Fatalf("bad value: %s", b.config.RawBootWait) - } - - // Test with a bad boot_wait - config["boot_wait"] = "this is not good" - warns, err = b.Prepare(config) - if len(warns) > 0 { - t.Fatalf("bad: %#v", warns) - } - if err == nil { - t.Fatal("should 
have error") - } - - // Test with a good one - config["boot_wait"] = "5s" - b = Builder{} - warns, err = b.Prepare(config) - if len(warns) > 0 { - t.Fatalf("bad: %#v", warns) - } - if err != nil { - t.Fatalf("should not have error: %s", err) - } -} - func TestBuilderPrepare_VNCBindAddress(t *testing.T) { var b Builder config := testConfig() @@ -208,7 +168,7 @@ func TestBuilderPrepare_DiskSize(t *testing.T) { t.Fatalf("bad err: %s", err) } - if b.config.DiskSize != 40000 { + if b.config.DiskSize != 40960 { t.Fatalf("bad size: %d", b.config.DiskSize) } @@ -264,6 +224,49 @@ func TestBuilderPrepare_Format(t *testing.T) { } } +func TestBuilderPrepare_UseBackingFile(t *testing.T) { + var b Builder + config := testConfig() + + config["use_backing_file"] = true + + // Bad: iso_url is not a disk_image + config["disk_image"] = false + config["format"] = "qcow2" + b = Builder{} + warns, err := b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } + + // Bad: format is not 'qcow2' + config["disk_image"] = true + config["format"] = "raw" + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } + + // Good: iso_url is a disk image and format is 'qcow2' + config["disk_image"] = true + config["format"] = "qcow2" + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } +} + func TestBuilderPrepare_FloppyFiles(t *testing.T) { var b Builder config := testConfig() diff --git a/builder/qemu/driver.go b/builder/qemu/driver.go index 632490114..eb1992ba6 100644 --- a/builder/qemu/driver.go +++ b/builder/qemu/driver.go @@ -14,7 +14,7 @@ import ( "time" "unicode" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) type DriverCancelCallback func(state multistep.StateBag) bool diff --git a/builder/qemu/ssh.go b/builder/qemu/ssh.go index 1881613fd..9c565eb32 100644 --- a/builder/qemu/ssh.go +++ b/builder/qemu/ssh.go @@ -3,7 +3,7 @@ package qemu import ( commonssh "github.com/hashicorp/packer/common/ssh" "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" ) diff --git a/builder/qemu/step_boot_wait.go b/builder/qemu/step_boot_wait.go deleted file mode 100644 index 3e4e48320..000000000 --- a/builder/qemu/step_boot_wait.go +++ /dev/null @@ -1,25 +0,0 @@ -package qemu - -import ( - "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" - "time" -) - -// stepBootWait waits the configured time period. 
-type stepBootWait struct{} - -func (s *stepBootWait) Run(state multistep.StateBag) multistep.StepAction { - config := state.Get("config").(*Config) - ui := state.Get("ui").(packer.Ui) - - if int64(config.bootWait) > 0 { - ui.Say(fmt.Sprintf("Waiting %s for boot...", config.bootWait)) - time.Sleep(config.bootWait) - } - - return multistep.ActionContinue -} - -func (s *stepBootWait) Cleanup(state multistep.StateBag) {} diff --git a/builder/qemu/step_configure_vnc.go b/builder/qemu/step_configure_vnc.go index 2cbfd6820..1f1cd088d 100644 --- a/builder/qemu/step_configure_vnc.go +++ b/builder/qemu/step_configure_vnc.go @@ -1,13 +1,14 @@ package qemu import ( + "context" "fmt" "log" "math/rand" "net" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step configures the VM to enable the VNC server. @@ -20,7 +21,7 @@ import ( // vnc_port uint - The port that VNC is configured to listen on. type stepConfigureVNC struct{} -func (stepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction { +func (stepConfigureVNC) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/qemu/step_convert_disk.go b/builder/qemu/step_convert_disk.go index db7fe52c3..299c36e65 100644 --- a/builder/qemu/step_convert_disk.go +++ b/builder/qemu/step_convert_disk.go @@ -1,11 +1,14 @@ package qemu import ( + "context" "fmt" "path/filepath" + "strings" + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "os" ) @@ -14,7 +17,7 @@ import ( // hard drive for the virtual machine. type stepConvertDisk struct{} -func (s *stepConvertDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepConvertDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) diskName := state.Get("disk_filename").(string) @@ -45,11 +48,32 @@ func (s *stepConvertDisk) Run(state multistep.StateBag) multistep.StepAction { ) ui.Say("Converting hard drive...") - if err := driver.QemuImg(command...); err != nil { - err := fmt.Errorf("Error converting hard drive: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt + // Retry the conversion a few times in case it takes the qemu process a + // moment to release the lock + err := common.Retry(1, 10, 10, func(_ uint) (bool, error) { + if err := driver.QemuImg(command...); err != nil { + if strings.Contains(err.Error(), `Failed to get shared "write" lock`) { + ui.Say("Error getting file lock for conversion; retrying...") + return false, nil + } + err = fmt.Errorf("Error converting hard drive: %s", err) + return true, err + } + return true, nil + }) + + if err != nil { + if err == common.RetryExhaustedError { + err = fmt.Errorf("Exhausted retries for getting file lock: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } else { + err := fmt.Errorf("Error converting hard drive: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } } if err := os.Rename(targetPath, sourcePath); err != nil { diff --git a/builder/qemu/step_copy_disk.go b/builder/qemu/step_copy_disk.go index 633d8d6b8..3f1c33ffb 100644 --- a/builder/qemu/step_copy_disk.go +++ b/builder/qemu/step_copy_disk.go @@ -1,18 +1,19 
@@ package qemu import ( + "context" "fmt" "path/filepath" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step copies the virtual disk that will be used as the // hard drive for the virtual machine. type stepCopyDisk struct{} -func (s *stepCopyDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCopyDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) isoPath := state.Get("iso_path").(string) @@ -27,7 +28,7 @@ func (s *stepCopyDisk) Run(state multistep.StateBag) multistep.StepAction { path, } - if config.DiskImage == false { + if !config.DiskImage || config.UseBackingFile { return multistep.ActionContinue } diff --git a/builder/qemu/step_create_disk.go b/builder/qemu/step_create_disk.go index 5d844e2a0..0a18a3c18 100644 --- a/builder/qemu/step_create_disk.go +++ b/builder/qemu/step_create_disk.go @@ -1,18 +1,19 @@ package qemu import ( + "context" "fmt" "path/filepath" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step creates the virtual disk that will be used as the // hard drive for the virtual machine. type stepCreateDisk struct{} -func (s *stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) @@ -22,11 +23,19 @@ func (s *stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction { command := []string{ "create", "-f", config.Format, - path, - fmt.Sprintf("%vM", config.DiskSize), } - if config.DiskImage == true { + if config.UseBackingFile { + isoPath := state.Get("iso_path").(string) + command = append(command, "-b", isoPath) + } + + command = append(command, + path, + fmt.Sprintf("%vM", config.DiskSize), + ) + + if config.DiskImage && !config.UseBackingFile { return multistep.ActionContinue } diff --git a/builder/qemu/step_forward_ssh.go b/builder/qemu/step_forward_ssh.go index 7737fcbd8..6e9efaca1 100644 --- a/builder/qemu/step_forward_ssh.go +++ b/builder/qemu/step_forward_ssh.go @@ -1,13 +1,14 @@ package qemu import ( + "context" "fmt" "log" "math/rand" "net" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step adds a NAT port forwarding definition so that SSH is available @@ -18,7 +19,7 @@ import ( // Produces: type stepForwardSSH struct{} -func (s *stepForwardSSH) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepForwardSSH) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/qemu/step_prepare_output_dir.go b/builder/qemu/step_prepare_output_dir.go index 7023ac9b8..8eeebb4e1 100644 --- a/builder/qemu/step_prepare_output_dir.go +++ b/builder/qemu/step_prepare_output_dir.go @@ -1,16 +1,18 @@ package qemu import ( - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "context" "log" "os" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) type stepPrepareOutputDir struct{} -func (stepPrepareOutputDir) Run(state multistep.StateBag) multistep.StepAction { +func (stepPrepareOutputDir) Run(_ 
context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) diff --git a/builder/qemu/step_resize_disk.go b/builder/qemu/step_resize_disk.go index fbbd8d572..f4f44ec34 100644 --- a/builder/qemu/step_resize_disk.go +++ b/builder/qemu/step_resize_disk.go @@ -1,18 +1,19 @@ package qemu import ( + "context" "fmt" "path/filepath" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step resizes the virtual disk that will be used as the // hard drive for the virtual machine. type stepResizeDisk struct{} -func (s *stepResizeDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepResizeDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) @@ -20,6 +21,7 @@ func (s *stepResizeDisk) Run(state multistep.StateBag) multistep.StepAction { command := []string{ "resize", + "-f", config.Format, path, fmt.Sprintf("%vM", config.DiskSize), } diff --git a/builder/qemu/step_run.go b/builder/qemu/step_run.go index dc9dd2faa..c524d5514 100644 --- a/builder/qemu/step_run.go +++ b/builder/qemu/step_run.go @@ -1,15 +1,16 @@ package qemu import ( + "context" "fmt" "log" "path/filepath" "strconv" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) // stepRun runs the virtual machine @@ -27,7 +28,7 @@ type qemuArgsTemplateData struct { SSHHostPort uint } -func (s *stepRun) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepRun) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/qemu/step_set_iso.go b/builder/qemu/step_set_iso.go index f32a2687b..dba34777f 100644 --- a/builder/qemu/step_set_iso.go +++ b/builder/qemu/step_set_iso.go @@ -1,11 +1,12 @@ package qemu import ( + "context" "fmt" "net/http" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step set iso_patch to available url @@ -14,7 +15,7 @@ type stepSetISO struct { Url []string } -func (s *stepSetISO) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepSetISO) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) iso_path := "" diff --git a/builder/qemu/step_shutdown.go b/builder/qemu/step_shutdown.go index 88fa42c34..9c5f89f6c 100644 --- a/builder/qemu/step_shutdown.go +++ b/builder/qemu/step_shutdown.go @@ -1,13 +1,14 @@ package qemu import ( + "context" "errors" "fmt" "log" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step shuts down the machine. 
It first attempts to do so gracefully, @@ -23,7 +24,7 @@ import ( // type stepShutdown struct{} -func (s *stepShutdown) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepShutdown) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/qemu/step_type_boot_command.go b/builder/qemu/step_type_boot_command.go index 526a792e3..79bcd1338 100644 --- a/builder/qemu/step_type_boot_command.go +++ b/builder/qemu/step_type_boot_command.go @@ -1,21 +1,18 @@ package qemu import ( + "context" "fmt" "log" "net" - "regexp" - "strings" "time" - "unicode" - "unicode/utf8" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" "github.com/mitchellh/go-vnc" - "github.com/mitchellh/multistep" - "os" ) const KeyLeftShift uint32 = 0xFFE1 @@ -38,12 +35,29 @@ type bootCommandTemplateData struct { // type stepTypeBootCommand struct{} -func (s *stepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepTypeBootCommand) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) debug := state.Get("debug").(bool) httpPort := state.Get("http_port").(uint) ui := state.Get("ui").(packer.Ui) vncPort := state.Get("vnc_port").(uint) + vncIP := state.Get("vnc_ip").(string) + + if config.VNCConfig.DisableVNC { + log.Println("Skipping boot command step...") + return multistep.ActionContinue + } + + // Wait for the VM to boot. + if int64(config.BootWait) > 0 { + ui.Say(fmt.Sprintf("Waiting %s for boot...", config.BootWait.String())) + select { + case <-time.After(config.BootWait): + break + case <-ctx.Done(): + return multistep.ActionHalt + } + } var pauseFn multistep.DebugPauseFn if debug { @@ -52,7 +66,7 @@ func (s *stepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction // Connect to VNC ui.Say("Connecting to VM via VNC") - nc, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", vncPort)) + nc, err := net.Dial("tcp", fmt.Sprintf("%s:%d", vncIP, vncPort)) if err != nil { err := fmt.Errorf("Error connecting to VNC: %s", err) state.Put("error", err) @@ -74,276 +88,44 @@ func (s *stepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction hostIP := "10.0.2.2" common.SetHTTPIP(hostIP) - ctx := config.ctx - ctx.Data = &bootCommandTemplateData{ + configCtx := config.ctx + configCtx.Data = &bootCommandTemplateData{ hostIP, httpPort, config.VMName, } + d := bootcommand.NewVNCDriver(c) + ui.Say("Typing the boot command over VNC...") - for i, command := range config.BootCommand { - command, err := interpolate.Render(command, &ctx) - if err != nil { - err := fmt.Errorf("Error preparing boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } + command, err := interpolate.Render(config.VNCConfig.FlatBootCommand(), &configCtx) + if err != nil { + err := fmt.Errorf("Error preparing boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - // Check for interrupts between typing things so we can cancel - // since this isn't the fastest thing.
- if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } + seq, err := bootcommand.GenerateExpressionSequence(command) + if err != nil { + err := fmt.Errorf("Error generating boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - if pauseFn != nil { - pauseFn(multistep.DebugLocationAfterRun, fmt.Sprintf("boot_command[%d]: %s", i, command), state) - } + if err := seq.Do(ctx, d); err != nil { + err := fmt.Errorf("Error running boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - vncSendString(c, command) + if pauseFn != nil { + pauseFn(multistep.DebugLocationAfterRun, fmt.Sprintf("boot_command: %s", command), state) } return multistep.ActionContinue } func (*stepTypeBootCommand) Cleanup(multistep.StateBag) {} - -func vncSendString(c *vnc.ClientConn, original string) { - // Scancodes reference: https://github.com/qemu/qemu/blob/master/ui/vnc_keysym.h - special := make(map[string]uint32) - special["<bs>"] = 0xFF08 - special["<del>"] = 0xFFFF - special["<enter>"] = 0xFF0D - special["<esc>"] = 0xFF1B - special["<f1>"] = 0xFFBE - special["<f2>"] = 0xFFBF - special["<f3>"] = 0xFFC0 - special["<f4>"] = 0xFFC1 - special["<f5>"] = 0xFFC2 - special["<f6>"] = 0xFFC3 - special["<f7>"] = 0xFFC4 - special["<f8>"] = 0xFFC5 - special["<f9>"] = 0xFFC6 - special["<f10>"] = 0xFFC7 - special["<f11>"] = 0xFFC8 - special["<f12>"] = 0xFFC9 - special["<return>"] = 0xFF0D - special["<tab>"] = 0xFF09 - special["<up>"] = 0xFF52 - special["<down>"] = 0xFF54 - special["<left>"] = 0xFF51 - special["<right>"] = 0xFF53 - special["<spacebar>"] = 0x020 - special["<insert>"] = 0xFF63 - special["<home>"] = 0xFF50 - special["<end>"] = 0xFF57 - special["<pageUp>"] = 0xFF55 - special["<pageDown>"] = 0xFF56 - special["<leftAlt>"] = 0xFFE9 - special["<leftCtrl>"] = 0xFFE3 - special["<leftShift>"] = 0xFFE1 - special["<rightAlt>"] = 0xFFEA - special["<rightCtrl>"] = 0xFFE4 - special["<rightShift>"] = 0xFFE2 - - shiftedChars := "~!@#$%^&*()_+{}|:\"<>?" - waitRe := regexp.MustCompile(`^<wait([0-9hms]+)>`) - - // We delay (default 100ms) between each key event to allow for CPU or - // network latency. See PackerKeyEnv for tuning. - keyInterval := common.PackerKeyDefault - if delay, err := time.ParseDuration(os.Getenv(common.PackerKeyEnv)); err == nil { - keyInterval = delay - } - - // TODO(mitchellh): Ripe for optimizations of some point, perhaps.
- for len(original) > 0 { - var keyCode uint32 - keyShift := false - - if strings.HasPrefix(original, "<leftAltOn>") { - keyCode = special["<leftAlt>"] - original = original[len("<leftAltOn>"):] - log.Printf("Special code '<leftAltOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<leftCtrlOn>") { - keyCode = special["<leftCtrl>"] - original = original[len("<leftCtrlOn>"):] - log.Printf("Special code '<leftCtrlOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<leftShiftOn>") { - keyCode = special["<leftShift>"] - original = original[len("<leftShiftOn>"):] - log.Printf("Special code '<leftShiftOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<leftAltOff>") { - keyCode = special["<leftAlt>"] - original = original[len("<leftAltOff>"):] - log.Printf("Special code '<leftAltOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<leftCtrlOff>") { - keyCode = special["<leftCtrl>"] - original = original[len("<leftCtrlOff>"):] - log.Printf("Special code '<leftCtrlOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<leftShiftOff>") { - keyCode = special["<leftShift>"] - original = original[len("<leftShiftOff>"):] - log.Printf("Special code '<leftShiftOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<rightAltOn>") { - keyCode = special["<rightAlt>"] - original = original[len("<rightAltOn>"):] - log.Printf("Special code '<rightAltOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<rightCtrlOn>") { - keyCode = special["<rightCtrl>"] - original = original[len("<rightCtrlOn>"):] - log.Printf("Special code '<rightCtrlOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<rightShiftOn>") { - keyCode = special["<rightShift>"] - original = original[len("<rightShiftOn>"):] - log.Printf("Special code '<rightShiftOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<rightAltOff>") { - keyCode = special["<rightAlt>"] - original = original[len("<rightAltOff>"):] - log.Printf("Special code '<rightAltOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<rightCtrlOff>") { - keyCode = special["<rightCtrl>"] - original = original[len("<rightCtrlOff>"):] - log.Printf("Special code '<rightCtrlOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<rightShiftOff>") { - keyCode = special["<rightShift>"] - original = original[len("<rightShiftOff>"):] - log.Printf("Special code '<rightShiftOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - continue - } - - if strings.HasPrefix(original, "<wait>") { - log.Printf("Special code '<wait>' found, sleeping one second") - time.Sleep(1 * time.Second) - original = original[len("<wait>"):] - continue - } - - if strings.HasPrefix(original, "<wait5>") { - log.Printf("Special code '<wait5>' found, sleeping 5 seconds") - time.Sleep(5 * time.Second) - original = original[len("<wait5>"):] - continue - } - - if strings.HasPrefix(original, "<wait10>") { - log.Printf("Special code '<wait10>' found, sleeping 10 seconds") - time.Sleep(10 * time.Second) - original = original[len("<wait10>"):] - continue - } - - waitMatch := waitRe.FindStringSubmatch(original) - if len(waitMatch) > 1 { - log.Printf("Special code %s found, sleeping",
waitMatch[0]) - if dt, err := time.ParseDuration(waitMatch[1]); err == nil { - time.Sleep(dt) - original = original[len(waitMatch[0]):] - continue - } - } - - for specialCode, specialValue := range special { - if strings.HasPrefix(original, specialCode) { - log.Printf("Special code '%s' found, replacing with: %d", specialCode, specialValue) - keyCode = specialValue - original = original[len(specialCode):] - break - } - } - - if keyCode == 0 { - r, size := utf8.DecodeRuneInString(original) - original = original[size:] - keyCode = uint32(r) - keyShift = unicode.IsUpper(r) || strings.ContainsRune(shiftedChars, r) - - log.Printf("Sending char '%c', code %d, shift %v", r, keyCode, keyShift) - } - - if keyShift { - c.KeyEvent(KeyLeftShift, true) - } - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - if keyShift { - c.KeyEvent(KeyLeftShift, false) - } - time.Sleep(keyInterval) - } -} diff --git a/builder/scaleway/artifact.go b/builder/scaleway/artifact.go new file mode 100644 index 000000000..b5aea88da --- /dev/null +++ b/builder/scaleway/artifact.go @@ -0,0 +1,62 @@ +package scaleway + +import ( + "fmt" + "log" + + "github.com/scaleway/scaleway-cli/pkg/api" +) + +type Artifact struct { + // The name of the image + imageName string + + // The ID of the image + imageID string + + // The name of the snapshot + snapshotName string + + // The ID of the snapshot + snapshotID string + + // The name of the region + regionName string + + // The client for making API calls + client *api.ScalewayAPI +} + +func (*Artifact) BuilderId() string { + return BuilderId +} + +func (*Artifact) Files() []string { + // No files with Scaleway + return nil +} + +func (a *Artifact) Id() string { + return fmt.Sprintf("%s:%s", a.regionName, a.imageID) +} + +func (a *Artifact) String() string { + return fmt.Sprintf("An image was created: '%v' (ID: %v) in region '%v' based on snapshot '%v' (ID: %v)", + a.imageName, a.imageID, a.regionName, a.snapshotName, a.snapshotID) +} + +func (a *Artifact) State(name string) interface{} { + return nil +} + +func (a *Artifact) Destroy() error { + log.Printf("Destroying image: %s (%s)", a.imageID, a.imageName) + if err := a.client.DeleteImage(a.imageID); err != nil { + return err + } + log.Printf("Destroying snapshot: %s (%s)", a.snapshotID, a.snapshotName) + if err := a.client.DeleteSnapshot(a.snapshotID); err != nil { + return err + } + return nil +} diff --git a/builder/scaleway/artifact_test.go b/builder/scaleway/artifact_test.go new file mode 100644 index 000000000..8f158ef5f --- /dev/null +++ b/builder/scaleway/artifact_test.go @@ -0,0 +1,33 @@ +package scaleway + +import ( + "testing" + + "github.com/hashicorp/packer/packer" +) + +func TestArtifact_Impl(t *testing.T) { + var raw interface{} + raw = &Artifact{} + if _, ok := raw.(packer.Artifact); !ok { + t.Fatalf("Artifact should be artifact") + } +} + +func TestArtifactId(t *testing.T) { + a := &Artifact{"packer-foobar-image", "cc586e45-5156-4f71-b223-cf406b10dd1d", "packer-foobar-snapshot", "cc586e45-5156-4f71-b223-cf406b10dd1c", "ams1", nil} + expected := "ams1:cc586e45-5156-4f71-b223-cf406b10dd1d" + + if a.Id() != expected { + t.Fatalf("artifact ID should match: %v", expected) + } +} + +func TestArtifactString(t *testing.T) { + a := &Artifact{"packer-foobar-image", "cc586e45-5156-4f71-b223-cf406b10dd1d", "packer-foobar-snapshot", "cc586e45-5156-4f71-b223-cf406b10dd1c", "ams1", nil} + expected := "An image was created: 'packer-foobar-image' (ID: 
cc586e45-5156-4f71-b223-cf406b10dd1d) in region 'ams1' based on snapshot 'packer-foobar-snapshot' (ID: cc586e45-5156-4f71-b223-cf406b10dd1c)" + + if a.String() != expected { + t.Fatalf("artifact string should match: %v", expected) + } +} diff --git a/builder/scaleway/builder.go b/builder/scaleway/builder.go new file mode 100644 index 000000000..457002c74 --- /dev/null +++ b/builder/scaleway/builder.go @@ -0,0 +1,106 @@ +// The scaleway package contains a packer.Builder implementation +// that builds Scaleway images (snapshots). + +package scaleway + +import ( + "errors" + "fmt" + "log" + + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/scaleway/scaleway-cli/pkg/api" +) + +// The unique id for the builder +const BuilderId = "hashicorp.scaleway" + +type Builder struct { + config Config + runner multistep.Runner +} + +func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { + c, warnings, errs := NewConfig(raws...) + if errs != nil { + return warnings, errs + } + b.config = *c + + return nil, nil +} + +func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) { + client, err := api.NewScalewayAPI(b.config.Organization, b.config.Token, b.config.UserAgent, b.config.Region) + + if err != nil { + ui.Error(err.Error()) + return nil, err + } + + state := new(multistep.BasicStateBag) + state.Put("config", b.config) + state.Put("client", client) + state.Put("hook", hook) + state.Put("ui", ui) + + steps := []multistep.Step{ + &stepCreateSSHKey{ + Debug: b.config.PackerDebug, + DebugKeyPath: fmt.Sprintf("scw_%s.pem", b.config.PackerBuildName), + PrivateKeyFile: b.config.Comm.SSHPrivateKey, + }, + new(stepCreateServer), + new(stepServerInfo), + &communicator.StepConnect{ + Config: &b.config.Comm, + Host: commHost, + SSHConfig: sshConfig, + }, + new(common.StepProvision), + new(stepShutdown), + new(stepSnapshot), + new(stepImage), + } + + b.runner = common.NewRunnerWithPauseFn(steps, b.config.PackerConfig, ui, state) + b.runner.Run(state) + + if rawErr, ok := state.GetOk("error"); ok { + return nil, rawErr.(error) + } + + // If we were interrupted or cancelled, then just exit. 
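Before the cancellation checks that follow, a note on configuration: the keys this builder expects mirror testConfig() in builder_test.go and the validation in NewConfig further down. As an illustrative aside, not part of this diff, here is a minimal Prepare call with placeholder values; registering the builder under the template type name "scaleway" is an assumption.

```go
// Illustrative only -- not part of this diff. Exercises the new builder's
// Prepare with the minimal key set that NewConfig validates.
package main

import (
	"log"

	"github.com/hashicorp/packer/builder/scaleway"
)

func main() {
	config := map[string]interface{}{
		"api_access_key":  "YOUR_ORGANIZATION_ID", // placeholder
		"api_token":       "YOUR_SECRET_TOKEN",    // placeholder
		"region":          "ams1",
		"commercial_type": "START1-S",
		"image":           "image-uuid", // UUID of the base image to boot
		"ssh_username":    "root",
	}

	var b scaleway.Builder
	if _, err := b.Prepare(config); err != nil {
		log.Fatalf("invalid configuration: %s", err)
	}
}
```

NewConfig also falls back to the SCALEWAY_API_ACCESS_KEY and SCALEWAY_API_TOKEN environment variables when api_access_key or api_token are omitted, and defaults snapshot_name, image_name and server_name when they are not set.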
+ if _, ok := state.GetOk(multistep.StateCancelled); ok { + return nil, errors.New("Build was cancelled.") + } + + if _, ok := state.GetOk(multistep.StateHalted); ok { + return nil, errors.New("Build was halted.") + } + + if _, ok := state.GetOk("snapshot_name"); !ok { + return nil, errors.New("Cannot find snapshot_name in state.") + } + + artifact := &Artifact{ + imageName: state.Get("image_name").(string), + imageID: state.Get("image_id").(string), + snapshotName: state.Get("snapshot_name").(string), + snapshotID: state.Get("snapshot_id").(string), + regionName: state.Get("region").(string), + client: client, + } + + return artifact, nil +} + +func (b *Builder) Cancel() { + if b.runner != nil { + log.Println("Cancelling the step runner...") + b.runner.Cancel() + } +} diff --git a/builder/scaleway/builder_test.go b/builder/scaleway/builder_test.go new file mode 100644 index 000000000..1d42ed7e9 --- /dev/null +++ b/builder/scaleway/builder_test.go @@ -0,0 +1,237 @@ +package scaleway + +import ( + "strconv" + "testing" + + "github.com/hashicorp/packer/packer" +) + +func testConfig() map[string]interface{} { + return map[string]interface{}{ + "api_access_key": "foo", + "api_token": "bar", + "region": "ams1", + "commercial_type": "START1-S", + "ssh_username": "root", + "image": "image-uuid", + } +} + +func TestBuilder_ImplementsBuilder(t *testing.T) { + var raw interface{} + raw = &Builder{} + if _, ok := raw.(packer.Builder); !ok { + t.Fatalf("Builder should be a builder") + } +} + +func TestBuilder_Prepare_BadType(t *testing.T) { + b := &Builder{} + c := map[string]interface{}{ + "api_token": []string{}, + } + + warnings, err := b.Prepare(c) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err == nil { + t.Fatalf("prepare should fail") + } +} + +func TestBuilderPrepare_InvalidKey(t *testing.T) { + var b Builder + config := testConfig() + + config["i_should_not_be_valid"] = true + warnings, err := b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err == nil { + t.Fatal("should have error") + } +} + +func TestBuilderPrepare_Region(t *testing.T) { + var b Builder + config := testConfig() + + delete(config, "region") + warnings, err := b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err == nil { + t.Fatalf("should error") + } + + expected := "ams1" + + config["region"] = expected + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + if b.config.Region != expected { + t.Errorf("found %s, expected %s", b.config.Region, expected) + } +} + +func TestBuilderPrepare_CommercialType(t *testing.T) { + var b Builder + config := testConfig() + + delete(config, "commercial_type") + warnings, err := b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err == nil { + t.Fatalf("should error") + } + + expected := "START1-S" + + config["commercial_type"] = expected + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + if b.config.CommercialType != expected { + t.Errorf("found %s, expected %s", b.config.CommercialType, expected) + } +} + +func TestBuilderPrepare_Image(t *testing.T) { + var b Builder + config := testConfig() + + delete(config, "image") + warnings, err := b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", 
warnings) + } + if err == nil { + t.Fatal("should error") + } + + expected := "cc586e45-5156-4f71-b223-cf406b10dd1c" + + config["image"] = expected + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + if b.config.Image != expected { + t.Errorf("found %s, expected %s", b.config.Image, expected) + } +} + +func TestBuilderPrepare_SnapshotName(t *testing.T) { + var b Builder + config := testConfig() + + warnings, err := b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + if b.config.SnapshotName == "" { + t.Errorf("invalid: %s", b.config.SnapshotName) + } + + config["snapshot_name"] = "foobarbaz" + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + config["snapshot_name"] = "{{timestamp}}" + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + _, err = strconv.ParseInt(b.config.SnapshotName, 0, 0) + if err != nil { + t.Fatalf("failed to parse int in template: %s", err) + } + +} + +func TestBuilderPrepare_ServerName(t *testing.T) { + var b Builder + config := testConfig() + + warnings, err := b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + if b.config.ServerName == "" { + t.Errorf("invalid: %s", b.config.ServerName) + } + + config["server_name"] = "foobar" + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + config["server_name"] = "foobar-{{timestamp}}" + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + config["server_name"] = "foobar-{{" + b = Builder{} + warnings, err = b.Prepare(config) + if len(warnings) > 0 { + t.Fatalf("bad: %#v", warnings) + } + if err == nil { + t.Fatal("should have error") + } + +} diff --git a/builder/scaleway/config.go b/builder/scaleway/config.go new file mode 100644 index 000000000..84b5e989e --- /dev/null +++ b/builder/scaleway/config.go @@ -0,0 +1,124 @@ +package scaleway + +import ( + "errors" + "fmt" + "os" + + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/useragent" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/template/interpolate" + "github.com/mitchellh/mapstructure" +) + +type Config struct { + common.PackerConfig `mapstructure:",squash"` + Comm communicator.Config `mapstructure:",squash"` + + Token string `mapstructure:"api_token"` + Organization string `mapstructure:"api_access_key"` + + Region string `mapstructure:"region"` + Image string `mapstructure:"image"` + CommercialType string `mapstructure:"commercial_type"` + + SnapshotName string `mapstructure:"snapshot_name"` + ImageName string `mapstructure:"image_name"` + ServerName string `mapstructure:"server_name"` + Bootscript string `mapstructure:"bootscript"` + + 
UserAgent string + ctx interpolate.Context +} + +func NewConfig(raws ...interface{}) (*Config, []string, error) { + c := new(Config) + + var md mapstructure.Metadata + err := config.Decode(c, &config.DecodeOpts{ + Metadata: &md, + Interpolate: true, + InterpolateContext: &c.ctx, + InterpolateFilter: &interpolate.RenderFilter{ + Exclude: []string{ + "run_command", + }, + }, + }, raws...) + if err != nil { + return nil, nil, err + } + + c.UserAgent = useragent.String() + + if c.Organization == "" { + c.Organization = os.Getenv("SCALEWAY_API_ACCESS_KEY") + } + + if c.Token == "" { + c.Token = os.Getenv("SCALEWAY_API_TOKEN") + } + + if c.SnapshotName == "" { + def, err := interpolate.Render("snapshot-packer-{{timestamp}}", nil) + if err != nil { + panic(err) + } + + c.SnapshotName = def + } + + if c.ImageName == "" { + def, err := interpolate.Render("image-packer-{{timestamp}}", nil) + if err != nil { + panic(err) + } + + c.ImageName = def + } + + if c.ServerName == "" { + // Default to packer-[time-ordered-uuid] + c.ServerName = fmt.Sprintf("packer-%s", uuid.TimeOrderedUUID()) + } + + var errs *packer.MultiError + if es := c.Comm.Prepare(&c.ctx); len(es) > 0 { + errs = packer.MultiErrorAppend(errs, es...) + } + if c.Organization == "" { + errs = packer.MultiErrorAppend( + errs, errors.New("Scaleway Organization ID must be specified")) + } + + if c.Token == "" { + errs = packer.MultiErrorAppend( + errs, errors.New("Scaleway Token must be specified")) + } + + if c.Region == "" { + errs = packer.MultiErrorAppend( + errs, errors.New("region is required")) + } + + if c.CommercialType == "" { + errs = packer.MultiErrorAppend( + errs, errors.New("commercial type is required")) + } + + if c.Image == "" { + errs = packer.MultiErrorAppend( + errs, errors.New("image is required")) + } + + if errs != nil && len(errs.Errors) > 0 { + return nil, nil, errs + } + + common.ScrubConfig(c, c.Token) + return c, nil, nil +} diff --git a/builder/scaleway/ssh.go b/builder/scaleway/ssh.go new file mode 100644 index 000000000..e4677849f --- /dev/null +++ b/builder/scaleway/ssh.go @@ -0,0 +1,61 @@ +package scaleway + +import ( + "fmt" + "net" + "os" + + packerssh "github.com/hashicorp/packer/communicator/ssh" + "github.com/hashicorp/packer/helper/multistep" + "golang.org/x/crypto/ssh" + "golang.org/x/crypto/ssh/agent" +) + +func commHost(state multistep.StateBag) (string, error) { + ipAddress := state.Get("server_ip").(string) + return ipAddress, nil +} + +func sshConfig(state multistep.StateBag) (*ssh.ClientConfig, error) { + config := state.Get("config").(Config) + + var auth []ssh.AuthMethod + + if config.Comm.SSHAgentAuth { + authSock := os.Getenv("SSH_AUTH_SOCK") + if authSock == "" { + return nil, fmt.Errorf("SSH_AUTH_SOCK is not set") + } + + sshAgent, err := net.Dial("unix", authSock) + if err != nil { + return nil, fmt.Errorf("Cannot connect to SSH Agent socket %q: %s", authSock, err) + } + auth = []ssh.AuthMethod{ + ssh.PublicKeysCallback(agent.NewClient(sshAgent).Signers), + } + } + + if config.Comm.SSHPassword != "" { + auth = append(auth, + ssh.Password(config.Comm.SSHPassword), + ssh.KeyboardInteractive( + packerssh.PasswordKeyboardInteractive(config.Comm.SSHPassword)), + ) + } + + // Use key based auth if there is a private key in the state bag + if privateKey, ok := state.GetOk("private_key"); ok { + signer, err := ssh.ParsePrivateKey([]byte(privateKey.(string))) + if err != nil { + return nil, fmt.Errorf("Error setting up SSH config: %s", err) + } + auth = append(auth, ssh.PublicKeys(signer)) + } + + 
return &ssh.ClientConfig{ + User: config.Comm.SSHUsername, + Auth: auth, + HostKeyCallback: ssh.InsecureIgnoreHostKey(), + }, nil +} diff --git a/builder/scaleway/step_create_image.go b/builder/scaleway/step_create_image.go new file mode 100644 index 000000000..16345e74d --- /dev/null +++ b/builder/scaleway/step_create_image.go @@ -0,0 +1,54 @@ +package scaleway + +import ( + "context" + "fmt" + "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/scaleway/scaleway-cli/pkg/api" +) + +type stepImage struct{} + +func (s *stepImage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + client := state.Get("client").(*api.ScalewayAPI) + ui := state.Get("ui").(packer.Ui) + c := state.Get("config").(Config) + snapshotID := state.Get("snapshot_id").(string) + bootscriptID := "" + + ui.Say(fmt.Sprintf("Creating image: %v", c.ImageName)) + + image, err := client.GetImage(c.Image) + if err != nil { + err := fmt.Errorf("Error getting initial image info: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + if image.DefaultBootscript != nil { + bootscriptID = image.DefaultBootscript.Identifier + } + + imageID, err := client.PostImage(snapshotID, c.ImageName, bootscriptID, image.Arch) + if err != nil { + err := fmt.Errorf("Error creating image: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + log.Printf("Image ID: %s", imageID) + state.Put("image_id", imageID) + state.Put("image_name", c.ImageName) + state.Put("region", c.Region) + + return multistep.ActionContinue +} + +func (s *stepImage) Cleanup(state multistep.StateBag) { + // no cleanup +} diff --git a/builder/scaleway/step_create_server.go b/builder/scaleway/step_create_server.go new file mode 100644 index 000000000..63350548d --- /dev/null +++ b/builder/scaleway/step_create_server.go @@ -0,0 +1,84 @@ +package scaleway + +import ( + "context" + "fmt" + "strings" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/scaleway/scaleway-cli/pkg/api" +) + +type stepCreateServer struct { + serverID string +} + +func (s *stepCreateServer) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + client := state.Get("client").(*api.ScalewayAPI) + ui := state.Get("ui").(packer.Ui) + c := state.Get("config").(Config) + sshPubKey := state.Get("ssh_pubkey").(string) + tags := []string{} + var bootscript *string + + ui.Say("Creating server...") + + if c.Bootscript != "" { + bootscript = &c.Bootscript + } + + if sshPubKey != "" { + tags = []string{fmt.Sprintf("AUTHORIZED_KEY=%s", strings.TrimSpace(sshPubKey))} + } + + server, err := client.PostServer(api.ScalewayServerDefinition{ + Name: c.ServerName, + Image: &c.Image, + Organization: c.Organization, + CommercialType: c.CommercialType, + Tags: tags, + Bootscript: bootscript, + }) + + if err != nil { + err := fmt.Errorf("Error creating server: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + err = client.PostServerAction(server, "poweron") + + if err != nil { + err := fmt.Errorf("Error starting server: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + s.serverID = server + + state.Put("server_id", server) + + return multistep.ActionContinue +} + +func (s *stepCreateServer) Cleanup(state multistep.StateBag) { + if s.serverID == "" { + return + } + + client := 
state.Get("client").(*api.ScalewayAPI) + ui := state.Get("ui").(packer.Ui) + + ui.Say("Destroying server...") + + err := client.DeleteServerForce(s.serverID) + + if err != nil { + ui.Error(fmt.Sprintf( + "Error destroying server. Please destroy it manually: %s", err)) + } + +} diff --git a/builder/scaleway/step_create_ssh_key.go b/builder/scaleway/step_create_ssh_key.go new file mode 100644 index 000000000..4272d9ff4 --- /dev/null +++ b/builder/scaleway/step_create_ssh_key.go @@ -0,0 +1,106 @@ +package scaleway + +import ( + "context" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "fmt" + "io/ioutil" + "log" + "os" + "runtime" + "strings" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "golang.org/x/crypto/ssh" +) + +type stepCreateSSHKey struct { + Debug bool + DebugKeyPath string + PrivateKeyFile string +} + +func (s *stepCreateSSHKey) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + ui := state.Get("ui").(packer.Ui) + + if s.PrivateKeyFile != "" { + ui.Say("Using existing SSH private key") + privateKeyBytes, err := ioutil.ReadFile(s.PrivateKeyFile) + if err != nil { + state.Put("error", fmt.Errorf( + "Error loading configured private key file: %s", err)) + return multistep.ActionHalt + } + + state.Put("private_key", string(privateKeyBytes)) + state.Put("ssh_pubkey", "") + + return multistep.ActionContinue + } + + ui.Say("Creating temporary ssh key for server...") + + priv, err := rsa.GenerateKey(rand.Reader, 2014) + if err != nil { + err := fmt.Errorf("Error creating temporary SSH key: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + // ASN.1 DER encoded form + priv_der := x509.MarshalPKCS1PrivateKey(priv) + priv_blk := pem.Block{ + Type: "RSA PRIVATE KEY", + Headers: nil, + Bytes: priv_der, + } + + // Set the private key in the statebag for later + state.Put("private_key", string(pem.EncodeToMemory(&priv_blk))) + + pub, _ := ssh.NewPublicKey(&priv.PublicKey) + pub_sshformat := string(ssh.MarshalAuthorizedKey(pub)) + pub_sshformat = strings.Replace(pub_sshformat, " ", "_", -1) + + log.Printf("temporary ssh key created") + + // Remember some state for the future + state.Put("ssh_pubkey", string(pub_sshformat)) + + // If we're in debug mode, output the private key to the working directory. + if s.Debug { + ui.Message(fmt.Sprintf("Saving key for debug purposes: %s", s.DebugKeyPath)) + f, err := os.Create(s.DebugKeyPath) + if err != nil { + state.Put("error", fmt.Errorf("Error saving debug key: %s", err)) + return multistep.ActionHalt + } + defer f.Close() + + // Write the key out + if _, err := f.Write(pem.EncodeToMemory(&priv_blk)); err != nil { + state.Put("error", fmt.Errorf("Error saving debug key: %s", err)) + return multistep.ActionHalt + } + + // Chmod it so that it is SSH ready + if runtime.GOOS != "windows" { + if err := f.Chmod(0600); err != nil { + state.Put("error", fmt.Errorf("Error setting permissions of debug key: %s", err)) + return multistep.ActionHalt + } + } + } + + return multistep.ActionContinue +} + +func (s *stepCreateSSHKey) Cleanup(state multistep.StateBag) { + // SSH key is passed via tag. Nothing to do here. 
+ return +} diff --git a/builder/scaleway/step_server_info.go b/builder/scaleway/step_server_info.go new file mode 100644 index 000000000..7ab14658f --- /dev/null +++ b/builder/scaleway/step_server_info.go @@ -0,0 +1,45 @@ +package scaleway + +import ( + "context" + "fmt" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/scaleway/scaleway-cli/pkg/api" +) + +type stepServerInfo struct{} + +func (s *stepServerInfo) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + client := state.Get("client").(*api.ScalewayAPI) + ui := state.Get("ui").(packer.Ui) + serverID := state.Get("server_id").(string) + + ui.Say("Waiting for server to become active...") + + _, err := api.WaitForServerState(client, serverID, "running") + if err != nil { + err := fmt.Errorf("Error waiting for server to become booted: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + server, err := client.GetServer(serverID) + if err != nil { + err := fmt.Errorf("Error retrieving server: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + state.Put("server_ip", server.PublicAddress.IP) + state.Put("root_volume_id", server.Volumes["0"].Identifier) + + return multistep.ActionContinue +} + +func (s *stepServerInfo) Cleanup(state multistep.StateBag) { + // no cleanup +} diff --git a/builder/scaleway/step_shutdown.go b/builder/scaleway/step_shutdown.go new file mode 100644 index 000000000..a8a029259 --- /dev/null +++ b/builder/scaleway/step_shutdown.go @@ -0,0 +1,44 @@ +package scaleway + +import ( + "context" + "fmt" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/scaleway/scaleway-cli/pkg/api" +) + +type stepShutdown struct{} + +func (s *stepShutdown) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + client := state.Get("client").(*api.ScalewayAPI) + ui := state.Get("ui").(packer.Ui) + serverID := state.Get("server_id").(string) + + ui.Say("Shutting down server...") + + err := client.PostServerAction(serverID, "poweroff") + + if err != nil { + err := fmt.Errorf("Error stopping server: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + _, err = api.WaitForServerState(client, serverID, "stopped") + + if err != nil { + err := fmt.Errorf("Error shutting down server: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + return multistep.ActionContinue +} + +func (s *stepShutdown) Cleanup(state multistep.StateBag) { + // no cleanup +} diff --git a/builder/scaleway/step_snapshot.go b/builder/scaleway/step_snapshot.go new file mode 100644 index 000000000..fd0dcb593 --- /dev/null +++ b/builder/scaleway/step_snapshot.go @@ -0,0 +1,40 @@ +package scaleway + +import ( + "context" + "fmt" + "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/scaleway/scaleway-cli/pkg/api" +) + +type stepSnapshot struct{} + +func (s *stepSnapshot) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { + client := state.Get("client").(*api.ScalewayAPI) + ui := state.Get("ui").(packer.Ui) + c := state.Get("config").(Config) + volumeID := state.Get("root_volume_id").(string) + + ui.Say(fmt.Sprintf("Creating snapshot: %v", c.SnapshotName)) + snapshot, err := client.PostSnapshot(volumeID, c.SnapshotName) + if err != nil { + err := fmt.Errorf("Error creating 
snapshot: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + log.Printf("Snapshot ID: %s", snapshot) + state.Put("snapshot_id", snapshot) + state.Put("snapshot_name", c.SnapshotName) + state.Put("region", c.Region) + + return multistep.ActionContinue +} + +func (s *stepSnapshot) Cleanup(state multistep.StateBag) { + // no cleanup +} diff --git a/builder/triton/access_config.go b/builder/triton/access_config.go index 2df0ed150..3fe380e73 100644 --- a/builder/triton/access_config.go +++ b/builder/triton/access_config.go @@ -17,10 +17,12 @@ import ( // AccessConfig is for common configuration related to Triton access type AccessConfig struct { - Endpoint string `mapstructure:"triton_url"` - Account string `mapstructure:"triton_account"` - KeyID string `mapstructure:"triton_key_id"` - KeyMaterial string `mapstructure:"triton_key_material"` + Endpoint string `mapstructure:"triton_url"` + Account string `mapstructure:"triton_account"` + Username string `mapstructure:"triton_user"` + KeyID string `mapstructure:"triton_key_id"` + KeyMaterial string `mapstructure:"triton_key_material"` + InsecureSkipTLSVerify bool `mapstructure:"insecure_skip_tls_verify"` signer authentication.Signer } @@ -65,7 +67,12 @@ func (c *AccessConfig) Prepare(ctx *interpolate.Context) []error { } func (c *AccessConfig) createSSHAgentSigner() (authentication.Signer, error) { - signer, err := authentication.NewSSHAgentSigner(c.KeyID, c.Account) + input := authentication.SSHAgentSignerInput{ + KeyID: c.KeyID, + AccountName: c.Account, + Username: c.Username, + } + signer, err := authentication.NewSSHAgentSigner(input) if err != nil { return nil, fmt.Errorf("Error creating Triton request signer: %s", err) } @@ -94,8 +101,14 @@ func (c *AccessConfig) createPrivateKeySigner() (authentication.Signer, error) { } } - // Create signer - signer, err := authentication.NewPrivateKeySigner(c.KeyID, privateKeyMaterial, c.Account) + input := authentication.PrivateKeySignerInput{ + KeyID: c.KeyID, + AccountName: c.Account, + Username: c.Username, + PrivateKeyMaterial: privateKeyMaterial, + } + + signer, err := authentication.NewPrivateKeySigner(input) if err != nil { return nil, fmt.Errorf("Error creating Triton request signer: %s", err) } @@ -114,16 +127,19 @@ func (c *AccessConfig) CreateTritonClient() (*Client, error) { config := &tgo.ClientConfig{ AccountName: c.Account, TritonURL: c.Endpoint, + Username: c.Username, Signers: []authentication.Signer{c.signer}, } return &Client{ - config: config, + config: config, + insecureSkipTLSVerify: c.InsecureSkipTLSVerify, }, nil } type Client struct { - config *tgo.ClientConfig + config *tgo.ClientConfig + insecureSkipTLSVerify bool } func (c *Client) Compute() (*compute.ComputeClient, error) { @@ -132,6 +148,10 @@ func (c *Client) Compute() (*compute.ComputeClient, error) { return nil, errwrap.Wrapf("Error Creating Triton Compute Client: {{err}}", err) } + if c.insecureSkipTLSVerify { + computeClient.Client.InsecureSkipTLSVerify() + } + return computeClient, nil } @@ -141,6 +161,10 @@ func (c *Client) Network() (*network.NetworkClient, error) { return nil, errwrap.Wrapf("Error Creating Triton Network Client: {{err}}", err) } + if c.insecureSkipTLSVerify { + networkClient.Client.InsecureSkipTLSVerify() + } + return networkClient, nil } diff --git a/builder/triton/builder.go b/builder/triton/builder.go index 3e572e0a4..f2937a702 100644 --- a/builder/triton/builder.go +++ b/builder/triton/builder.go @@ -7,8 +7,8 @@ import ( 
"github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) const ( diff --git a/builder/triton/driver_triton.go b/builder/triton/driver_triton.go index 5e6acffef..58d943216 100644 --- a/builder/triton/driver_triton.go +++ b/builder/triton/driver_triton.go @@ -8,8 +8,8 @@ import ( "time" "github.com/hashicorp/packer/packer" - "github.com/joyent/triton-go/client" "github.com/joyent/triton-go/compute" + terrors "github.com/joyent/triton-go/errors" ) type driverTriton struct { @@ -177,7 +177,7 @@ func (d *driverTriton) WaitForMachineDeletion(machineId string, timeout time.Dur // Return true only when we receive a 410 (Gone) response. A 404 // indicates that the machine is being deleted whereas a 410 indicates // that this process has completed. - if triErr, ok := err.(*client.TritonError); ok && triErr.StatusCode == http.StatusGone { + if terrors.IsSpecificStatusCode(err, http.StatusGone) { return true, nil } } diff --git a/builder/triton/ssh.go b/builder/triton/ssh.go index 14fd26e31..179b94215 100644 --- a/builder/triton/ssh.go +++ b/builder/triton/ssh.go @@ -9,7 +9,7 @@ import ( "os" packerssh "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) diff --git a/builder/triton/step_create_image_from_machine.go b/builder/triton/step_create_image_from_machine.go index 76c46a768..425a6690b 100644 --- a/builder/triton/step_create_image_from_machine.go +++ b/builder/triton/step_create_image_from_machine.go @@ -1,11 +1,12 @@ package triton import ( + "context" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCreateImageFromMachine creates an image with the specified attributes @@ -13,7 +14,7 @@ import ( // The machine must be in the "stopped" state prior to this step being run. 
type StepCreateImageFromMachine struct{} -func (s *StepCreateImageFromMachine) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateImageFromMachine) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/triton/step_create_image_from_machine_test.go b/builder/triton/step_create_image_from_machine_test.go index 36ee43f82..34eb95c8c 100644 --- a/builder/triton/step_create_image_from_machine_test.go +++ b/builder/triton/step_create_image_from_machine_test.go @@ -1,10 +1,11 @@ package triton import ( + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCreateImageFromMachine(t *testing.T) { @@ -14,7 +15,7 @@ func TestStepCreateImageFromMachine(t *testing.T) { state.Put("machine", "test-machine-id") - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -36,7 +37,7 @@ func TestStepCreateImageFromMachine_CreateImageFromMachineError(t *testing.T) { driver.CreateImageFromMachineErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -59,7 +60,7 @@ func TestStepCreateImageFromMachine_WaitForImageCreationError(t *testing.T) { driver.WaitForImageCreationErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/triton/step_create_source_machine.go b/builder/triton/step_create_source_machine.go index ae51fc60e..c3097fae2 100644 --- a/builder/triton/step_create_source_machine.go +++ b/builder/triton/step_create_source_machine.go @@ -1,18 +1,19 @@ package triton import ( + "context" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCreateSourceMachine creates an machine with the specified attributes // and waits for it to become available for provisioners. 
type StepCreateSourceMachine struct{} -func (s *StepCreateSourceMachine) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateSourceMachine) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(Config) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/triton/step_create_source_machine_test.go b/builder/triton/step_create_source_machine_test.go index 56f5d5940..78d97ad5e 100644 --- a/builder/triton/step_create_source_machine_test.go +++ b/builder/triton/step_create_source_machine_test.go @@ -1,10 +1,11 @@ package triton import ( + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCreateSourceMachine(t *testing.T) { @@ -14,7 +15,7 @@ func TestStepCreateSourceMachine(t *testing.T) { driver := state.Get("driver").(*DriverMock) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -39,7 +40,7 @@ func TestStepCreateSourceMachine_CreateMachineError(t *testing.T) { driver.CreateMachineErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -61,7 +62,7 @@ func TestStepCreateSourceMachine_WaitForMachineStateError(t *testing.T) { driver.WaitForMachineStateErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -81,7 +82,7 @@ func TestStepCreateSourceMachine_StopMachineError(t *testing.T) { driver := state.Get("driver").(*DriverMock) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -109,7 +110,7 @@ func TestStepCreateSourceMachine_WaitForMachineStoppedError(t *testing.T) { driver := state.Get("driver").(*DriverMock) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -137,7 +138,7 @@ func TestStepCreateSourceMachine_DeleteMachineError(t *testing.T) { driver := state.Get("driver").(*DriverMock) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } diff --git a/builder/triton/step_delete_machine.go b/builder/triton/step_delete_machine.go index d766f9833..8f9e1bc1f 100644 --- a/builder/triton/step_delete_machine.go +++ b/builder/triton/step_delete_machine.go @@ -1,17 +1,18 @@ package triton import ( + "context" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepDeleteMachine deletes the machine with the ID specified in state["machine"] type StepDeleteMachine struct{} -func (s *StepDeleteMachine) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDeleteMachine) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff 
--git a/builder/triton/step_delete_machine_test.go b/builder/triton/step_delete_machine_test.go index bca82b89a..421024d58 100644 --- a/builder/triton/step_delete_machine_test.go +++ b/builder/triton/step_delete_machine_test.go @@ -1,10 +1,11 @@ package triton import ( + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepDeleteMachine(t *testing.T) { @@ -17,7 +18,7 @@ func TestStepDeleteMachine(t *testing.T) { machineId := "test-machine-id" state.Put("machine", machineId) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -40,7 +41,7 @@ func TestStepDeleteMachine_DeleteMachineError(t *testing.T) { driver.DeleteMachineErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -65,7 +66,7 @@ func TestStepDeleteMachine_WaitForMachineDeletionError(t *testing.T) { driver.WaitForMachineDeletionErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/triton/step_stop_machine.go b/builder/triton/step_stop_machine.go index 00eb75958..73fa3f4d6 100644 --- a/builder/triton/step_stop_machine.go +++ b/builder/triton/step_stop_machine.go @@ -1,18 +1,19 @@ package triton import ( + "context" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepStopMachine stops the machine with the given Machine ID, and waits // for it to reach the stopped state. 
type StepStopMachine struct{} -func (s *StepStopMachine) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepStopMachine) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/triton/step_stop_machine_test.go b/builder/triton/step_stop_machine_test.go index 4609343e0..8f0f39d2d 100644 --- a/builder/triton/step_stop_machine_test.go +++ b/builder/triton/step_stop_machine_test.go @@ -1,10 +1,11 @@ package triton import ( + "context" "errors" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepStopMachine(t *testing.T) { @@ -17,7 +18,7 @@ func TestStepStopMachine(t *testing.T) { machineId := "test-machine-id" state.Put("machine", machineId) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } @@ -40,7 +41,7 @@ func TestStepStopMachine_StopMachineError(t *testing.T) { driver.StopMachineErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } @@ -61,7 +62,7 @@ func TestStepStopMachine_WaitForMachineStoppedError(t *testing.T) { driver.WaitForMachineStateErr = errors.New("error") - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } diff --git a/builder/triton/step_test.go b/builder/triton/step_test.go index 61580edfe..bccc398c7 100644 --- a/builder/triton/step_test.go +++ b/builder/triton/step_test.go @@ -2,9 +2,10 @@ package triton import ( "bytes" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/triton/step_wait_for_stop_to_not_fail.go b/builder/triton/step_wait_for_stop_to_not_fail.go index cd4df4704..de0a7cc84 100644 --- a/builder/triton/step_wait_for_stop_to_not_fail.go +++ b/builder/triton/step_wait_for_stop_to_not_fail.go @@ -1,10 +1,11 @@ package triton import ( + "context" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepWaitForStopNotToFail waits for 10 seconds before returning with continue @@ -12,7 +13,7 @@ import ( // they are started never actually stop. 
type StepWaitForStopNotToFail struct{} -func (s *StepWaitForStopNotToFail) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepWaitForStopNotToFail) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) ui.Say("Waiting 10 seconds to avoid potential SDC bug...") time.Sleep(10 * time.Second) diff --git a/builder/virtualbox/common/export_config.go b/builder/virtualbox/common/export_config.go index 08dfb267c..b55941943 100644 --- a/builder/virtualbox/common/export_config.go +++ b/builder/virtualbox/common/export_config.go @@ -7,7 +7,7 @@ import ( ) type ExportConfig struct { - Format string `mapstruture:"format"` + Format string `mapstructure:"format"` } func (c *ExportConfig) Prepare(ctx *interpolate.Context) []error { diff --git a/builder/virtualbox/common/export_config_test.go b/builder/virtualbox/common/export_config_test.go index 612bfc01d..6848a9a66 100644 --- a/builder/virtualbox/common/export_config_test.go +++ b/builder/virtualbox/common/export_config_test.go @@ -10,7 +10,7 @@ func TestExportConfigPrepare_BootWait(t *testing.T) { // Bad c = new(ExportConfig) - c.Format = "illega" + c.Format = "illegal" errs = c.Prepare(testConfigTemplate(t)) if len(errs) == 0 { t.Fatalf("bad: %#v", errs) diff --git a/builder/virtualbox/common/output_config_test.go b/builder/virtualbox/common/output_config_test.go index 08cff37d1..3a87dc18f 100644 --- a/builder/virtualbox/common/output_config_test.go +++ b/builder/virtualbox/common/output_config_test.go @@ -1,10 +1,11 @@ package common import ( - "github.com/hashicorp/packer/common" "io/ioutil" "os" "testing" + + "github.com/hashicorp/packer/common" ) func TestOutputConfigPrepare(t *testing.T) { diff --git a/builder/virtualbox/common/run_config.go b/builder/virtualbox/common/run_config.go index 6d3f58686..cf94c306b 100644 --- a/builder/virtualbox/common/run_config.go +++ b/builder/virtualbox/common/run_config.go @@ -2,27 +2,19 @@ package common import ( "fmt" - "time" "github.com/hashicorp/packer/template/interpolate" ) type RunConfig struct { - Headless bool `mapstructure:"headless"` - RawBootWait string `mapstructure:"boot_wait"` + Headless bool `mapstructure:"headless"` VRDPBindAddress string `mapstructure:"vrdp_bind_address"` VRDPPortMin uint `mapstructure:"vrdp_port_min"` VRDPPortMax uint `mapstructure:"vrdp_port_max"` - - BootWait time.Duration `` } -func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { - if c.RawBootWait == "" { - c.RawBootWait = "10s" - } - +func (c *RunConfig) Prepare(ctx *interpolate.Context) (errs []error) { if c.VRDPBindAddress == "" { c.VRDPBindAddress = "127.0.0.1" } @@ -35,17 +27,10 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { c.VRDPPortMax = 6000 } - var errs []error - var err error - c.BootWait, err = time.ParseDuration(c.RawBootWait) - if err != nil { - errs = append(errs, fmt.Errorf("Failed parsing boot_wait: %s", err)) - } - if c.VRDPPortMin > c.VRDPPortMax { errs = append( errs, fmt.Errorf("vrdp_port_min must be less than vrdp_port_max")) } - return errs + return } diff --git a/builder/virtualbox/common/run_config_test.go b/builder/virtualbox/common/run_config_test.go index 87b9b1b69..1caa7e3c8 100644 --- a/builder/virtualbox/common/run_config_test.go +++ b/builder/virtualbox/common/run_config_test.go @@ -4,38 +4,6 @@ import ( "testing" ) -func TestRunConfigPrepare_BootWait(t *testing.T) { - var c *RunConfig - var errs []error - - // Test a default boot_wait - c = new(RunConfig) - errs = 
c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("should not have error: %s", errs) - } - - if c.RawBootWait != "10s" { - t.Fatalf("bad value: %s", c.RawBootWait) - } - - // Test with a bad boot_wait - c = new(RunConfig) - c.RawBootWait = "this is not good" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) == 0 { - t.Fatalf("bad: %#v", errs) - } - - // Test with a good one - c = new(RunConfig) - c.RawBootWait = "5s" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("should not have error: %s", errs) - } -} - func TestRunConfigPrepare_VRDPBindAddress(t *testing.T) { var c *RunConfig var errs []error diff --git a/builder/virtualbox/common/ssh.go b/builder/virtualbox/common/ssh.go index b09c0cc47..45a99fb29 100644 --- a/builder/virtualbox/common/ssh.go +++ b/builder/virtualbox/common/ssh.go @@ -3,7 +3,7 @@ package common import ( commonssh "github.com/hashicorp/packer/common/ssh" "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" ) diff --git a/builder/virtualbox/common/step_attach_floppy.go b/builder/virtualbox/common/step_attach_floppy.go index 238619551..6c73aa4b4 100644 --- a/builder/virtualbox/common/step_attach_floppy.go +++ b/builder/virtualbox/common/step_attach_floppy.go @@ -1,14 +1,16 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "io" "io/ioutil" "log" "os" "path/filepath" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step attaches the ISO to the virtual machine. @@ -23,7 +25,7 @@ type StepAttachFloppy struct { floppyPath string } -func (s *StepAttachFloppy) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepAttachFloppy) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { // Determine if we even have a floppy disk to attach var floppyPath string if floppyPathRaw, ok := state.GetOk("floppy_path"); ok { diff --git a/builder/virtualbox/common/step_attach_floppy_test.go b/builder/virtualbox/common/step_attach_floppy_test.go index 110ddfe74..32ad4a962 100644 --- a/builder/virtualbox/common/step_attach_floppy_test.go +++ b/builder/virtualbox/common/step_attach_floppy_test.go @@ -1,10 +1,12 @@ package common import ( - "github.com/mitchellh/multistep" + "context" "io/ioutil" "os" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepAttachFloppy_impl(t *testing.T) { @@ -29,7 +31,7 @@ func TestStepAttachFloppy(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -60,7 +62,7 @@ func TestStepAttachFloppy_noFloppy(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/virtualbox/common/step_attach_guest_additions.go b/builder/virtualbox/common/step_attach_guest_additions.go index de51237a5..d5698c372 100644 --- a/builder/virtualbox/common/step_attach_guest_additions.go +++ 
b/builder/virtualbox/common/step_attach_guest_additions.go @@ -1,10 +1,12 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step attaches the VirtualBox guest additions as a inserted CD onto @@ -23,7 +25,7 @@ type StepAttachGuestAdditions struct { GuestAdditionsMode string } -func (s *StepAttachGuestAdditions) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepAttachGuestAdditions) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/virtualbox/common/step_configure_vrdp.go b/builder/virtualbox/common/step_configure_vrdp.go index 9a966c508..6cc960ad3 100644 --- a/builder/virtualbox/common/step_configure_vrdp.go +++ b/builder/virtualbox/common/step_configure_vrdp.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "log" "math/rand" "net" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step configures the VM to enable the VRDP server @@ -26,7 +27,7 @@ type StepConfigureVRDP struct { VRDPPortMax uint } -func (s *StepConfigureVRDP) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConfigureVRDP) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/virtualbox/common/step_download_guest_additions.go b/builder/virtualbox/common/step_download_guest_additions.go index 8039d0fa4..da413e994 100644 --- a/builder/virtualbox/common/step_download_guest_additions.go +++ b/builder/virtualbox/common/step_download_guest_additions.go @@ -2,6 +2,7 @@ package common import ( "bytes" + "context" "fmt" "io" "io/ioutil" @@ -10,9 +11,9 @@ import ( "strings" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) var additionsVersionMap = map[string]string{ @@ -36,7 +37,7 @@ type StepDownloadGuestAdditions struct { Ctx interpolate.Context } -func (s *StepDownloadGuestAdditions) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDownloadGuestAdditions) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { var action multistep.StepAction driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) @@ -66,21 +67,26 @@ func (s *StepDownloadGuestAdditions) Run(state multistep.StateBag) multistep.Ste checksumType := "sha256" - // Use the provided source (URL or file path) or generate it + // Grab the guest_additions_url as specified by the user. url := s.GuestAdditionsURL - if url != "" { - s.Ctx.Data = &guestAdditionsUrlTemplate{ - Version: version, - } - url, err = interpolate.Render(url, &s.Ctx) - if err != nil { - err := fmt.Errorf("Error preparing guest additions url: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - } else { + // Initialize the template context so we can interpolate some variables.. 
+ s.Ctx.Data = &guestAdditionsUrlTemplate{ + Version: version, + } + + // Interpolate any user-variables specified within the guest_additions_url + url, err = interpolate.Render(s.GuestAdditionsURL, &s.Ctx) + if err != nil { + err := fmt.Errorf("Error preparing guest additions url: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + // If this resulted in an empty url, then ask the driver about it. + if url == "" { + log.Printf("guest_additions_url is blank; querying driver for iso.") url, err = driver.Iso() if err == nil { @@ -88,11 +94,13 @@ func (s *StepDownloadGuestAdditions) Run(state multistep.StateBag) multistep.Ste } else { ui.Error(err.Error()) url = fmt.Sprintf( - "http://download.virtualbox.org/virtualbox/%s/%s", + "https://download.virtualbox.org/virtualbox/%s/%s", version, additionsName) } } + + // The driver couldn't even figure it out, so fail hard. if url == "" { err := fmt.Errorf("Couldn't detect guest additions URL.\n" + "Please specify `guest_additions_url` manually.") @@ -101,18 +109,20 @@ func (s *StepDownloadGuestAdditions) Run(state multistep.StateBag) multistep.Ste return multistep.ActionHalt } + // Figure out a default checksum here if checksumType != "none" { if s.GuestAdditionsSHA256 != "" { checksum = s.GuestAdditionsSHA256 } else { - checksum, action = s.downloadAdditionsSHA256(state, version, additionsName) + checksum, action = s.downloadAdditionsSHA256(ctx, state, version, additionsName) if action != multistep.ActionContinue { return action } } } - url, err = common.DownloadableURL(url) + // Convert the file/url to an actual URL for step_download to process. + url, err = common.ValidatedURL(url) if err != nil { err := fmt.Errorf("Error preparing guest additions url: %s", err) state.Put("error", err) @@ -122,6 +132,7 @@ func (s *StepDownloadGuestAdditions) Run(state multistep.StateBag) multistep.Ste log.Printf("Guest additions URL: %s", url) + // We're good, so let's go ahead and download this thing.. downStep := &common.StepDownload{ Checksum: checksum, ChecksumType: checksumType, @@ -130,16 +141,16 @@ func (s *StepDownloadGuestAdditions) Run(state multistep.StateBag) multistep.Ste Url: []string{url}, } - return downStep.Run(state) + return downStep.Run(ctx, state) } func (s *StepDownloadGuestAdditions) Cleanup(state multistep.StateBag) {} -func (s *StepDownloadGuestAdditions) downloadAdditionsSHA256(state multistep.StateBag, additionsVersion string, additionsName string) (string, multistep.StepAction) { +func (s *StepDownloadGuestAdditions) downloadAdditionsSHA256(ctx context.Context, state multistep.StateBag, additionsVersion string, additionsName string) (string, multistep.StepAction) { // First things first, we get the list of checksums for the files available // for this version. 
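The reworked logic above always interpolates guest_additions_url with a Version template variable, and only falls back to the driver's bundled ISO and then to download.virtualbox.org when the rendered URL is empty. An illustrative sketch of that interpolation, not part of this diff; the URL pattern and version are example values:

```go
// Illustrative only -- not part of this diff. Mirrors what the step does with
// a user-supplied guest_additions_url; the anonymous struct stands in for
// guestAdditionsUrlTemplate, which exposes a single Version field.
package main

import (
	"fmt"

	"github.com/hashicorp/packer/template/interpolate"
)

func main() {
	ctx := interpolate.Context{Data: &struct{ Version string }{Version: "5.2.8"}}
	url, err := interpolate.Render(
		"https://download.virtualbox.org/virtualbox/{{ .Version }}/VBoxGuestAdditions_{{ .Version }}.iso",
		&ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println(url) // .../virtualbox/5.2.8/VBoxGuestAdditions_5.2.8.iso
}
```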
checksumsUrl := fmt.Sprintf( - "http://download.virtualbox.org/virtualbox/%s/SHA256SUMS", + "https://download.virtualbox.org/virtualbox/%s/SHA256SUMS", additionsVersion) checksumsFile, err := ioutil.TempFile("", "packer") @@ -159,7 +170,7 @@ func (s *StepDownloadGuestAdditions) downloadAdditionsSHA256(state multistep.Sta Url: []string{checksumsUrl}, } - action := downStep.Run(state) + action := downStep.Run(ctx, state) if action == multistep.ActionHalt { return "", action } diff --git a/builder/virtualbox/common/step_export.go b/builder/virtualbox/common/step_export.go index adc04e40f..03b02e8e4 100644 --- a/builder/virtualbox/common/step_export.go +++ b/builder/virtualbox/common/step_export.go @@ -1,13 +1,15 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "path/filepath" "strings" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step cleans up forwarded ports and exports the VM to an OVF. @@ -24,7 +26,7 @@ type StepExport struct { SkipExport bool } -func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepExport) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/virtualbox/common/step_export_test.go b/builder/virtualbox/common/step_export_test.go index 0a939d331..6d37b3150 100644 --- a/builder/virtualbox/common/step_export_test.go +++ b/builder/virtualbox/common/step_export_test.go @@ -1,8 +1,10 @@ package common import ( - "github.com/mitchellh/multistep" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepExport_impl(t *testing.T) { @@ -18,7 +20,7 @@ func TestStepExport(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/virtualbox/common/step_forward_ssh.go b/builder/virtualbox/common/step_forward_ssh.go index 391573ced..b4e3b2f25 100644 --- a/builder/virtualbox/common/step_forward_ssh.go +++ b/builder/virtualbox/common/step_forward_ssh.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "log" "math/rand" "net" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step adds a NAT port forwarding definition so that SSH is available @@ -27,7 +28,7 @@ type StepForwardSSH struct { SkipNatMapping bool } -func (s *StepForwardSSH) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepForwardSSH) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/virtualbox/common/step_output_dir.go b/builder/virtualbox/common/step_output_dir.go index 0054d1994..fc39d08b2 100644 --- a/builder/virtualbox/common/step_output_dir.go +++ b/builder/virtualbox/common/step_output_dir.go @@ -1,14 +1,15 @@ package common import ( + "context" "fmt" "log" "os" "path/filepath" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - 
"github.com/mitchellh/multistep" ) // StepOutputDir sets up the output directory by creating it if it does @@ -21,7 +22,7 @@ type StepOutputDir struct { cleanup bool } -func (s *StepOutputDir) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepOutputDir) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) if _, err := os.Stat(s.Path); err == nil { diff --git a/builder/virtualbox/common/step_output_dir_test.go b/builder/virtualbox/common/step_output_dir_test.go index 77d1f855f..766154ca9 100644 --- a/builder/virtualbox/common/step_output_dir_test.go +++ b/builder/virtualbox/common/step_output_dir_test.go @@ -1,10 +1,12 @@ package common import ( - "github.com/mitchellh/multistep" + "context" "io/ioutil" "os" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func testStepOutputDir(t *testing.T) *StepOutputDir { @@ -26,9 +28,10 @@ func TestStepOutputDir_impl(t *testing.T) { func TestStepOutputDir(t *testing.T) { state := testState(t) step := testStepOutputDir(t) + defer os.RemoveAll(step.Path) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -53,9 +56,10 @@ func TestStepOutputDir_exists(t *testing.T) { if err := os.MkdirAll(step.Path, 0755); err != nil { t.Fatalf("bad: %s", err) } + defer os.RemoveAll(step.Path) // Test the run - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); !ok { @@ -74,7 +78,7 @@ func TestStepOutputDir_cancelled(t *testing.T) { step := testStepOutputDir(t) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -99,7 +103,7 @@ func TestStepOutputDir_halted(t *testing.T) { step := testStepOutputDir(t) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/virtualbox/common/step_remove_devices.go b/builder/virtualbox/common/step_remove_devices.go index 7860d53e9..835fd5af8 100644 --- a/builder/virtualbox/common/step_remove_devices.go +++ b/builder/virtualbox/common/step_remove_devices.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "log" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step removes any devices (floppy disks, ISOs, etc.) 
from the @@ -20,7 +21,7 @@ import ( // Produces: type StepRemoveDevices struct{} -func (s *StepRemoveDevices) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRemoveDevices) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/virtualbox/common/step_remove_devices_test.go b/builder/virtualbox/common/step_remove_devices_test.go index 6e6e28f79..0a548bbd2 100644 --- a/builder/virtualbox/common/step_remove_devices_test.go +++ b/builder/virtualbox/common/step_remove_devices_test.go @@ -1,8 +1,10 @@ package common import ( - "github.com/mitchellh/multistep" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepRemoveDevices_impl(t *testing.T) { @@ -18,7 +20,7 @@ func TestStepRemoveDevices(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -41,7 +43,7 @@ func TestStepRemoveDevices_attachedIso(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -68,7 +70,7 @@ func TestStepRemoveDevices_attachedIsoOnSata(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -94,7 +96,7 @@ func TestStepRemoveDevices_floppyPath(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/virtualbox/common/step_run.go b/builder/virtualbox/common/step_run.go index 3019c57d6..689b58602 100644 --- a/builder/virtualbox/common/step_run.go +++ b/builder/virtualbox/common/step_run.go @@ -1,11 +1,11 @@ package common import ( + "context" "fmt" - "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step starts the virtual machine. 
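The recurring change across these files is the switch from `github.com/mitchellh/multistep` to `helper/multistep`, whose `Run` takes a `context.Context`, so cancellation no longer has to be polled via `multistep.StateCancelled`. A minimal sketch of a step written against the new interface; the step name and its field are hypothetical:

```go
package example

import (
	"context"
	"time"

	"github.com/hashicorp/packer/helper/multistep"
)

// StepPauseExample is a hypothetical step that waits for a fixed duration
// but halts immediately when the build is cancelled.
type StepPauseExample struct {
	Duration time.Duration
}

func (s *StepPauseExample) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
	select {
	case <-time.After(s.Duration):
		return multistep.ActionContinue
	case <-ctx.Done():
		// The runner cancelled the context; no StateCancelled polling needed.
		return multistep.ActionHalt
	}
}

func (s *StepPauseExample) Cleanup(state multistep.StateBag) {}
```

This is the same pattern the relocated `BootWait` handling in `StepTypeBootCommand` uses further down.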
@@ -17,13 +17,12 @@ import ( // // Produces: type StepRun struct { - BootWait time.Duration Headless bool vmName string } -func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRun) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) @@ -59,22 +58,6 @@ func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { s.vmName = vmName - if int64(s.BootWait) > 0 { - ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait)) - wait := time.After(s.BootWait) - WAITLOOP: - for { - select { - case <-wait: - break WAITLOOP - case <-time.After(1 * time.Second): - if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } - } - } - } - return multistep.ActionContinue } diff --git a/builder/virtualbox/common/step_shutdown.go b/builder/virtualbox/common/step_shutdown.go index 82bcb05ab..af2539cac 100644 --- a/builder/virtualbox/common/step_shutdown.go +++ b/builder/virtualbox/common/step_shutdown.go @@ -1,12 +1,14 @@ package common import ( + "context" "errors" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step shuts down the machine. It first attempts to do so gracefully, @@ -26,7 +28,7 @@ type StepShutdown struct { Delay time.Duration } -func (s *StepShutdown) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepShutdown) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/virtualbox/common/step_shutdown_test.go b/builder/virtualbox/common/step_shutdown_test.go index 769ed64f0..44bd1d927 100644 --- a/builder/virtualbox/common/step_shutdown_test.go +++ b/builder/virtualbox/common/step_shutdown_test.go @@ -1,10 +1,12 @@ package common import ( - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "context" "testing" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func TestStepShutdown_impl(t *testing.T) { @@ -22,7 +24,7 @@ func TestStepShutdown_noShutdownCommand(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -59,7 +61,7 @@ func TestStepShutdown_shutdownCommand(t *testing.T) { }() // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -96,7 +98,7 @@ func TestStepShutdown_shutdownTimeout(t *testing.T) { }() // Test the run - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); !ok { @@ -128,7 +130,7 @@ func TestStepShutdown_shutdownDelay(t *testing.T) { // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), 
state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } testDuration := time.Since(start).Seconds() @@ -153,7 +155,7 @@ func TestStepShutdown_shutdownDelay(t *testing.T) { }() // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } testDuration = time.Since(start).Seconds() diff --git a/builder/virtualbox/common/step_suppress_messages.go b/builder/virtualbox/common/step_suppress_messages.go index 863606526..369911bdb 100644 --- a/builder/virtualbox/common/step_suppress_messages.go +++ b/builder/virtualbox/common/step_suppress_messages.go @@ -1,17 +1,19 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step sets some variables in VirtualBox so that annoying // pop-up messages don't exist. type StepSuppressMessages struct{} -func (StepSuppressMessages) Run(state multistep.StateBag) multistep.StepAction { +func (StepSuppressMessages) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/virtualbox/common/step_suppress_messages_test.go b/builder/virtualbox/common/step_suppress_messages_test.go index 44612ecbc..c37af7826 100644 --- a/builder/virtualbox/common/step_suppress_messages_test.go +++ b/builder/virtualbox/common/step_suppress_messages_test.go @@ -1,9 +1,11 @@ package common import ( + "context" "errors" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepSuppressMessages_impl(t *testing.T) { @@ -17,7 +19,7 @@ func TestStepSuppressMessages(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -37,7 +39,7 @@ func TestStepSuppressMessages_error(t *testing.T) { driver.SuppressMessagesErr = errors.New("foo") // Test the run - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); !ok { diff --git a/builder/virtualbox/common/step_test.go b/builder/virtualbox/common/step_test.go index c6424b692..c8a12abb6 100644 --- a/builder/virtualbox/common/step_test.go +++ b/builder/virtualbox/common/step_test.go @@ -2,9 +2,10 @@ package common import ( "bytes" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/virtualbox/common/step_type_boot_command.go b/builder/virtualbox/common/step_type_boot_command.go index 3220ca8b8..0effc3628 100644 --- a/builder/virtualbox/common/step_type_boot_command.go +++ b/builder/virtualbox/common/step_type_boot_command.go @@ -1,17 +1,15 @@ package common import ( + "context" "fmt" - "log" - "strings" "time" - "unicode" - "unicode/utf8" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" + 
"github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const KeyLeftShift uint32 = 0xFFE1 @@ -22,29 +20,31 @@ type bootCommandTemplateData struct { Name string } -// This step "types" the boot command into the VM over VNC. -// -// Uses: -// driver Driver -// http_port int -// ui packer.Ui -// vmName string -// -// Produces: -// type StepTypeBootCommand struct { - BootCommand []string + BootCommand string + BootWait time.Duration VMName string Ctx interpolate.Context } -func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepTypeBootCommand) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { debug := state.Get("debug").(bool) driver := state.Get("driver").(Driver) httpPort := state.Get("http_port").(uint) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) + // Wait the for the vm to boot. + if int64(s.BootWait) > 0 { + ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait.String())) + select { + case <-time.After(s.BootWait): + break + case <-ctx.Done(): + return multistep.ActionHalt + } + } + var pauseFn multistep.DebugPauseFn if debug { pauseFn = state.Get("pauseFn").(multistep.DebugPauseFn) @@ -58,252 +58,43 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction s.VMName, } + sendCodes := func(codes []string) error { + args := []string{"controlvm", vmName, "keyboardputscancode"} + args = append(args, codes...) + + return driver.VBoxManage(args...) + } + d := bootcommand.NewPCXTDriver(sendCodes, 25) + ui.Say("Typing the boot command...") - for i, command := range s.BootCommand { - command, err := interpolate.Render(command, &s.Ctx) - if err != nil { - err := fmt.Errorf("Error preparing boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } + command, err := interpolate.Render(s.BootCommand, &s.Ctx) + if err != nil { + err := fmt.Errorf("Error preparing boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - for _, code := range scancodes(command) { - if code == "wait" { - time.Sleep(1 * time.Second) - continue - } + seq, err := bootcommand.GenerateExpressionSequence(command) + if err != nil { + err := fmt.Errorf("Error generating boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - if code == "wait5" { - time.Sleep(5 * time.Second) - continue - } - - if code == "wait10" { - time.Sleep(10 * time.Second) - continue - } - - // Since typing is sometimes so slow, we check for an interrupt - // in between each character. 
- if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } - - if err := driver.VBoxManage("controlvm", vmName, "keyboardputscancode", code); err != nil { - err := fmt.Errorf("Error sending boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - } - - if pauseFn != nil { - pauseFn(multistep.DebugLocationAfterRun, fmt.Sprintf("boot_command[%d]: %s", i, command), state) - } + if err := seq.Do(ctx, d); err != nil { + err := fmt.Errorf("Error running boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + if pauseFn != nil { + pauseFn(multistep.DebugLocationAfterRun, fmt.Sprintf("boot_command: %s", command), state) } return multistep.ActionContinue } func (*StepTypeBootCommand) Cleanup(multistep.StateBag) {} - -func scancodes(message string) []string { - // Scancodes reference: http://www.win.tue.nl/~aeb/linux/kbd/scancodes-1.html - // - // Scancodes represent raw keyboard output and are fed to the VM by the - // VBoxManage controlvm keyboardputscancode program. - // - // Scancodes are recorded here in pairs. The first entry represents - // the key press and the second entry represents the key release and is - // derived from the first by the addition of 0x80. - special := make(map[string][]string) - special["<bs>"] = []string{"0e", "8e"} - special["<del>"] = []string{"53", "d3"} - special["<enter>"] = []string{"1c", "9c"} - special["<esc>"] = []string{"01", "81"} - special["<f1>"] = []string{"3b", "bb"} - special["<f2>"] = []string{"3c", "bc"} - special["<f3>"] = []string{"3d", "bd"} - special["<f4>"] = []string{"3e", "be"} - special["<f5>"] = []string{"3f", "bf"} - special["<f6>"] = []string{"40", "c0"} - special["<f7>"] = []string{"41", "c1"} - special["<f8>"] = []string{"42", "c2"} - special["<f9>"] = []string{"43", "c3"} - special["<f10>"] = []string{"44", "c4"} - special["<return>"] = []string{"1c", "9c"} - special["<tab>"] = []string{"0f", "8f"} - special["<up>"] = []string{"48", "c8"} - special["<down>"] = []string{"50", "d0"} - special["<left>"] = []string{"4b", "cb"} - special["<right>"] = []string{"4d", "cd"} - special["<spacebar>"] = []string{"39", "b9"} - special["<insert>"] = []string{"52", "d2"} - special["<home>"] = []string{"47", "c7"} - special["<end>"] = []string{"4f", "cf"} - special["<pageUp>"] = []string{"49", "c9"} - special["<pageDown>"] = []string{"51", "d1"} - special["<leftAlt>"] = []string{"38", "b8"} - special["<leftCtrl>"] = []string{"1d", "9d"} - special["<leftShift>"] = []string{"2a", "aa"} - special["<rightAlt>"] = []string{"e038", "e0b8"} - special["<rightCtrl>"] = []string{"e01d", "e09d"} - special["<rightShift>"] = []string{"36", "b6"} - - shiftedChars := "~!@#$%^&*()_+{}|:\"<>?"
- - scancodeIndex := make(map[string]uint) - scancodeIndex["1234567890-="] = 0x02 - scancodeIndex["!@#$%^&*()_+"] = 0x02 - scancodeIndex["qwertyuiop[]"] = 0x10 - scancodeIndex["QWERTYUIOP{}"] = 0x10 - scancodeIndex["asdfghjkl;'`"] = 0x1e - scancodeIndex[`ASDFGHJKL:"~`] = 0x1e - scancodeIndex[`\zxcvbnm,./`] = 0x2b - scancodeIndex["|ZXCVBNM<>?"] = 0x2b - scancodeIndex[" "] = 0x39 - - scancodeMap := make(map[rune]uint) - for chars, start := range scancodeIndex { - var i uint = 0 - for len(chars) > 0 { - r, size := utf8.DecodeRuneInString(chars) - chars = chars[size:] - scancodeMap[r] = start + i - i += 1 - } - } - - result := make([]string, 0, len(message)*2) - for len(message) > 0 { - var scancode []string - - if strings.HasPrefix(message, "<leftAltOn>") { - scancode = []string{"38"} - message = message[len("<leftAltOn>"):] - log.Printf("Special code '<leftAltOn>' found, replacing with: 38") - } - - if strings.HasPrefix(message, "<leftCtrlOn>") { - scancode = []string{"1d"} - message = message[len("<leftCtrlOn>"):] - log.Printf("Special code '<leftCtrlOn>' found, replacing with: 1d") - } - - if strings.HasPrefix(message, "<leftShiftOn>") { - scancode = []string{"2a"} - message = message[len("<leftShiftOn>"):] - log.Printf("Special code '<leftShiftOn>' found, replacing with: 2a") - } - - if strings.HasPrefix(message, "<leftAltOff>") { - scancode = []string{"b8"} - message = message[len("<leftAltOff>"):] - log.Printf("Special code '<leftAltOff>' found, replacing with: b8") - } - - if strings.HasPrefix(message, "<leftCtrlOff>") { - scancode = []string{"9d"} - message = message[len("<leftCtrlOff>"):] - log.Printf("Special code '<leftCtrlOff>' found, replacing with: 9d") - } - - if strings.HasPrefix(message, "<leftShiftOff>") { - scancode = []string{"aa"} - message = message[len("<leftShiftOff>"):] - log.Printf("Special code '<leftShiftOff>' found, replacing with: aa") - } - - if strings.HasPrefix(message, "<rightAltOn>") { - scancode = []string{"e038"} - message = message[len("<rightAltOn>"):] - log.Printf("Special code '<rightAltOn>' found, replacing with: e038") - } - - if strings.HasPrefix(message, "<rightCtrlOn>") { - scancode = []string{"e01d"} - message = message[len("<rightCtrlOn>"):] - log.Printf("Special code '<rightCtrlOn>' found, replacing with: e01d") - } - - if strings.HasPrefix(message, "<rightShiftOn>") { - scancode = []string{"36"} - message = message[len("<rightShiftOn>"):] - log.Printf("Special code '<rightShiftOn>' found, replacing with: 36") - } - - if strings.HasPrefix(message, "<rightAltOff>") { - scancode = []string{"e0b8"} - message = message[len("<rightAltOff>"):] - log.Printf("Special code '<rightAltOff>' found, replacing with: e0b8") - } - - if strings.HasPrefix(message, "<rightCtrlOff>") { - scancode = []string{"e09d"} - message = message[len("<rightCtrlOff>"):] - log.Printf("Special code '<rightCtrlOff>' found, replacing with: e09d") - } - - if strings.HasPrefix(message, "<rightShiftOff>") { - scancode = []string{"b6"} - message = message[len("<rightShiftOff>"):] - log.Printf("Special code '<rightShiftOff>' found, replacing with: b6") - } - - if strings.HasPrefix(message, "<wait>") { - log.Printf("Special code <wait> found, will sleep 1 second at this point.") - scancode = []string{"wait"} - message = message[len("<wait>"):] - } - - if strings.HasPrefix(message, "<wait5>") { - log.Printf("Special code <wait5> found, will sleep 5 seconds at this point.") - scancode = []string{"wait5"} - message = message[len("<wait5>"):] - } - - if strings.HasPrefix(message, "<wait10>") { - log.Printf("Special code <wait10> found, will sleep 10 seconds at this point.") - scancode = []string{"wait10"} - message = message[len("<wait10>"):] - } - - if scancode == nil { - for specialCode, specialValue := range special { - if strings.HasPrefix(message, specialCode) { - log.Printf("Special code '%s' found, replacing with: %s", specialCode, specialValue) - scancode = specialValue - message = message[len(specialCode):] - break - } - } - } - - if scancode == nil { - r, size := utf8.DecodeRuneInString(message) -
message = message[size:] - scancodeInt := scancodeMap[r] - keyShift := unicode.IsUpper(r) || strings.ContainsRune(shiftedChars, r) - - scancode = make([]string, 0, 4) - if keyShift { - scancode = append(scancode, "2a") - } - - scancode = append(scancode, fmt.Sprintf("%02x", scancodeInt)) - - if keyShift { - scancode = append(scancode, "aa") - } - - scancode = append(scancode, fmt.Sprintf("%02x", scancodeInt+0x80)) - log.Printf("Sending char '%c', code '%v', shift %v", r, scancode, keyShift) - } - - result = append(result, scancode...) - } - - return result -} diff --git a/builder/virtualbox/common/step_upload_guest_additions.go b/builder/virtualbox/common/step_upload_guest_additions.go index a98676555..c77ef7d22 100644 --- a/builder/virtualbox/common/step_upload_guest_additions.go +++ b/builder/virtualbox/common/step_upload_guest_additions.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "log" "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type guestAdditionsPathTemplate struct { @@ -21,7 +22,7 @@ type StepUploadGuestAdditions struct { Ctx interpolate.Context } -func (s *StepUploadGuestAdditions) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUploadGuestAdditions) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/virtualbox/common/step_upload_version.go b/builder/virtualbox/common/step_upload_version.go index ab1ca6c7f..deaf8824d 100644 --- a/builder/virtualbox/common/step_upload_version.go +++ b/builder/virtualbox/common/step_upload_version.go @@ -2,10 +2,12 @@ package common import ( "bytes" + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step uploads a file containing the VirtualBox version, which @@ -14,7 +16,7 @@ type StepUploadVersion struct { Path string } -func (s *StepUploadVersion) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepUploadVersion) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/virtualbox/common/step_upload_version_test.go b/builder/virtualbox/common/step_upload_version_test.go index 1414a2278..998d35f0e 100644 --- a/builder/virtualbox/common/step_upload_version_test.go +++ b/builder/virtualbox/common/step_upload_version_test.go @@ -1,9 +1,11 @@ package common import ( - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func TestStepUploadVersion_impl(t *testing.T) { @@ -22,7 +24,7 @@ func TestStepUploadVersion(t *testing.T) { driver.VersionResult = "foo" // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -47,7 +49,7 @@ func TestStepUploadVersion_noPath(t *testing.T) { state.Put("communicator", comm) // Test the run - if action := step.Run(state); action != 
multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/virtualbox/common/step_vboxmanage.go b/builder/virtualbox/common/step_vboxmanage.go index 7544dbbce..948cae884 100644 --- a/builder/virtualbox/common/step_vboxmanage.go +++ b/builder/virtualbox/common/step_vboxmanage.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type commandTemplate struct { @@ -27,7 +28,7 @@ type StepVBoxManage struct { Ctx interpolate.Context } -func (s *StepVBoxManage) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepVBoxManage) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmName := state.Get("vmName").(string) diff --git a/builder/virtualbox/iso/builder.go b/builder/virtualbox/iso/builder.go index 82cf36409..2ea19e70f 100644 --- a/builder/virtualbox/iso/builder.go +++ b/builder/virtualbox/iso/builder.go @@ -8,11 +8,12 @@ import ( vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const BuilderId = "mitchellh.virtualbox" @@ -27,6 +28,7 @@ type Config struct { common.HTTPConfig `mapstructure:",squash"` common.ISOConfig `mapstructure:",squash"` common.FloppyConfig `mapstructure:",squash"` + bootcommand.BootConfig `mapstructure:",squash"` vboxcommon.ExportConfig `mapstructure:",squash"` vboxcommon.ExportOpts `mapstructure:",squash"` vboxcommon.OutputConfig `mapstructure:",squash"` @@ -37,21 +39,20 @@ type Config struct { vboxcommon.VBoxManagePostConfig `mapstructure:",squash"` vboxcommon.VBoxVersionConfig `mapstructure:",squash"` - BootCommand []string `mapstructure:"boot_command"` - DiskSize uint `mapstructure:"disk_size"` - GuestAdditionsMode string `mapstructure:"guest_additions_mode"` - GuestAdditionsPath string `mapstructure:"guest_additions_path"` - GuestAdditionsSHA256 string `mapstructure:"guest_additions_sha256"` - GuestAdditionsURL string `mapstructure:"guest_additions_url"` - GuestOSType string `mapstructure:"guest_os_type"` - HardDriveDiscard bool `mapstructure:"hard_drive_discard"` - HardDriveInterface string `mapstructure:"hard_drive_interface"` - SATAPortCount int `mapstructure:"sata_port_count"` - HardDriveNonrotational bool `mapstructure:"hard_drive_nonrotational"` - ISOInterface string `mapstructure:"iso_interface"` - KeepRegistered bool `mapstructure:"keep_registered"` - SkipExport bool `mapstructure:"skip_export"` - VMName string `mapstructure:"vm_name"` + DiskSize uint `mapstructure:"disk_size"` + GuestAdditionsMode string `mapstructure:"guest_additions_mode"` + GuestAdditionsPath string `mapstructure:"guest_additions_path"` + GuestAdditionsSHA256 string `mapstructure:"guest_additions_sha256"` + GuestAdditionsURL string `mapstructure:"guest_additions_url"` + GuestOSType string `mapstructure:"guest_os_type"` + HardDriveDiscard bool 
`mapstructure:"hard_drive_discard"` + HardDriveInterface string `mapstructure:"hard_drive_interface"` + SATAPortCount int `mapstructure:"sata_port_count"` + HardDriveNonrotational bool `mapstructure:"hard_drive_nonrotational"` + ISOInterface string `mapstructure:"iso_interface"` + KeepRegistered bool `mapstructure:"keep_registered"` + SkipExport bool `mapstructure:"skip_export"` + VMName string `mapstructure:"vm_name"` ctx interpolate.Context } @@ -94,6 +95,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { errs = packer.MultiErrorAppend(errs, b.config.VBoxManageConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.VBoxManagePostConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.VBoxVersionConfig.Prepare(&b.config.ctx)...) + errs = packer.MultiErrorAppend(errs, b.config.BootConfig.Prepare(&b.config.ctx)...) if b.config.DiskSize == 0 { b.config.DiskSize = 40000 @@ -240,11 +242,11 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Ctx: b.config.ctx, }, &vboxcommon.StepRun{ - BootWait: b.config.BootWait, Headless: b.config.Headless, }, &vboxcommon.StepTypeBootCommand{ - BootCommand: b.config.BootCommand, + BootWait: b.config.BootWait, + BootCommand: b.config.FlatBootCommand(), VMName: b.config.VMName, Ctx: b.config.ctx, }, diff --git a/builder/virtualbox/iso/step_attach_iso.go b/builder/virtualbox/iso/step_attach_iso.go index a997ee823..624696482 100644 --- a/builder/virtualbox/iso/step_attach_iso.go +++ b/builder/virtualbox/iso/step_attach_iso.go @@ -1,10 +1,13 @@ package iso import ( + "context" "fmt" + "path/filepath" + vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step attaches the ISO to the virtual machine. @@ -16,7 +19,7 @@ type stepAttachISO struct { diskPath string } -func (s *stepAttachISO) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepAttachISO) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(vboxcommon.Driver) isoPath := state.Get("iso_path").(string) @@ -32,6 +35,16 @@ func (s *stepAttachISO) Run(state multistep.StateBag) multistep.StepAction { device = "0" } + // If it's a symlink, resolve it to it's target. + resolvedIsoPath, err := filepath.EvalSymlinks(isoPath) + if err != nil { + err := fmt.Errorf("Error resolving symlink for ISO: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + isoPath = resolvedIsoPath + // Attach the disk to the controller command := []string{ "storageattach", vmName, diff --git a/builder/virtualbox/iso/step_create_disk.go b/builder/virtualbox/iso/step_create_disk.go index 6051768af..7c5b03e4f 100644 --- a/builder/virtualbox/iso/step_create_disk.go +++ b/builder/virtualbox/iso/step_create_disk.go @@ -1,11 +1,12 @@ package iso import ( + "context" "fmt" vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "path/filepath" "strconv" @@ -16,7 +17,7 @@ import ( // hard drive for the virtual machine. 
type stepCreateDisk struct{} -func (s *stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(vboxcommon.Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/virtualbox/iso/step_create_vm.go b/builder/virtualbox/iso/step_create_vm.go index ad1f72eaf..72a556737 100644 --- a/builder/virtualbox/iso/step_create_vm.go +++ b/builder/virtualbox/iso/step_create_vm.go @@ -1,11 +1,12 @@ package iso import ( + "context" "fmt" vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step creates the actual virtual machine. @@ -16,7 +17,7 @@ type stepCreateVM struct { vmName string } -func (s *stepCreateVM) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateVM) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(vboxcommon.Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/virtualbox/ovf/builder.go b/builder/virtualbox/ovf/builder.go index 475bbebcf..09fd416a3 100644 --- a/builder/virtualbox/ovf/builder.go +++ b/builder/virtualbox/ovf/builder.go @@ -8,8 +8,8 @@ import ( vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // Builder implements packer.Builder and builds the actual VirtualBox @@ -103,11 +103,11 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Ctx: b.config.ctx, }, &vboxcommon.StepRun{ - BootWait: b.config.BootWait, Headless: b.config.Headless, }, &vboxcommon.StepTypeBootCommand{ - BootCommand: b.config.BootCommand, + BootWait: b.config.BootWait, + BootCommand: b.config.FlatBootCommand(), VMName: b.config.VMName, Ctx: b.config.ctx, }, diff --git a/builder/virtualbox/ovf/config.go b/builder/virtualbox/ovf/config.go index 77b7ffa0e..bd60a4c4c 100644 --- a/builder/virtualbox/ovf/config.go +++ b/builder/virtualbox/ovf/config.go @@ -6,6 +6,7 @@ import ( vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" @@ -16,6 +17,7 @@ type Config struct { common.PackerConfig `mapstructure:",squash"` common.HTTPConfig `mapstructure:",squash"` common.FloppyConfig `mapstructure:",squash"` + bootcommand.BootConfig `mapstructure:",squash"` vboxcommon.ExportConfig `mapstructure:",squash"` vboxcommon.ExportOpts `mapstructure:",squash"` vboxcommon.OutputConfig `mapstructure:",squash"` @@ -26,7 +28,6 @@ type Config struct { vboxcommon.VBoxManagePostConfig `mapstructure:",squash"` vboxcommon.VBoxVersionConfig `mapstructure:",squash"` - BootCommand []string `mapstructure:"boot_command"` Checksum string `mapstructure:"checksum"` ChecksumType string `mapstructure:"checksum_type"` GuestAdditionsMode string `mapstructure:"guest_additions_mode"` @@ -90,6 +91,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) { errs = packer.MultiErrorAppend(errs, 
c.VBoxManageConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.VBoxManagePostConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.VBoxVersionConfig.Prepare(&c.ctx)...) + errs = packer.MultiErrorAppend(errs, c.BootConfig.Prepare(&c.ctx)...) c.ChecksumType = strings.ToLower(c.ChecksumType) c.Checksum = strings.ToLower(c.Checksum) @@ -97,10 +99,16 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) { if c.SourcePath == "" { errs = packer.MultiErrorAppend(errs, fmt.Errorf("source_path is required")) } else { - c.SourcePath, err = common.DownloadableURL(c.SourcePath) + c.SourcePath, err = common.ValidatedURL(c.SourcePath) if err != nil { errs = packer.MultiErrorAppend(errs, fmt.Errorf("source_path is invalid: %s", err)) } + fileOK := common.FileExistsLocally(c.SourcePath) + if !fileOK { + packer.MultiErrorAppend(errs, + fmt.Errorf("Source file '%s' needs to exist at time of config validation!", c.SourcePath)) + } + } validMode := false diff --git a/builder/virtualbox/ovf/config_test.go b/builder/virtualbox/ovf/config_test.go index 980c4d4ef..e36c44fcf 100644 --- a/builder/virtualbox/ovf/config_test.go +++ b/builder/virtualbox/ovf/config_test.go @@ -65,15 +65,15 @@ func TestNewConfig_sourcePath(t *testing.T) { t.Fatalf("should error with empty `source_path`") } - // Okay, because it gets caught during download + // Want this to fail on validation c = testConfig(t) c["source_path"] = "/i/dont/exist" _, warns, err = NewConfig(c) if len(warns) > 0 { t.Fatalf("bad: %#v", warns) } - if err != nil { - t.Fatalf("bad: %s", err) + if err == nil { + t.Fatalf("Nonexistent file should throw a validation error!") } // Bad @@ -84,7 +84,7 @@ func TestNewConfig_sourcePath(t *testing.T) { t.Fatalf("bad: %#v", warns) } if err == nil { - t.Fatal("should error") + t.Fatalf("should error") } // Good diff --git a/builder/virtualbox/ovf/step_import.go b/builder/virtualbox/ovf/step_import.go index 33e71fc24..5e066d83a 100644 --- a/builder/virtualbox/ovf/step_import.go +++ b/builder/virtualbox/ovf/step_import.go @@ -1,11 +1,12 @@ package ovf import ( + "context" "fmt" vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step imports an OVF VM into VirtualBox. 
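The `ovf` config change above validates `source_path` with `common.ValidatedURL` and `common.FileExistsLocally`, so a missing file is rejected at config time, which is what the updated test now expects. A hedged sketch of that validation pattern; the helper function is illustrative and not part of the diff:

```go
package example

import (
	"fmt"

	"github.com/hashicorp/packer/common"
	"github.com/hashicorp/packer/packer"
)

// validateSourcePath normalizes source_path and requires it to exist locally,
// accumulating any problems on the builder's MultiError.
func validateSourcePath(sourcePath string, errs *packer.MultiError) (string, *packer.MultiError) {
	validated, err := common.ValidatedURL(sourcePath)
	if err != nil {
		errs = packer.MultiErrorAppend(errs,
			fmt.Errorf("source_path is invalid: %s", err))
		return sourcePath, errs
	}

	if !common.FileExistsLocally(validated) {
		// Keep the value returned by MultiErrorAppend; discarding it would
		// silently drop the validation error.
		errs = packer.MultiErrorAppend(errs,
			fmt.Errorf("Source file '%s' needs to exist at time of config validation!", validated))
	}

	return validated, errs
}
```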
@@ -16,7 +17,7 @@ type StepImport struct { vmName string } -func (s *StepImport) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepImport) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(vboxcommon.Driver) ui := state.Get("ui").(packer.Ui) vmPath := state.Get("vm_path").(string) diff --git a/builder/virtualbox/ovf/step_import_test.go b/builder/virtualbox/ovf/step_import_test.go index fd204c042..ffaecc0ba 100644 --- a/builder/virtualbox/ovf/step_import_test.go +++ b/builder/virtualbox/ovf/step_import_test.go @@ -1,9 +1,11 @@ package ovf import ( - vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" - "github.com/mitchellh/multistep" + "context" "testing" + + vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepImport_impl(t *testing.T) { @@ -22,7 +24,7 @@ func TestStepImport(t *testing.T) { driver := state.Get("driver").(*vboxcommon.DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/virtualbox/ovf/step_test.go b/builder/virtualbox/ovf/step_test.go index cbd18dbd5..4f9a0f9d8 100644 --- a/builder/virtualbox/ovf/step_test.go +++ b/builder/virtualbox/ovf/step_test.go @@ -2,10 +2,11 @@ package ovf import ( "bytes" - vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "testing" + + vboxcommon "github.com/hashicorp/packer/builder/virtualbox/common" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/vmware/common/driver.go b/builder/vmware/common/driver.go index 4e4f38e02..ec34e70b5 100644 --- a/builder/vmware/common/driver.go +++ b/builder/vmware/common/driver.go @@ -2,15 +2,20 @@ package common import ( "bytes" + "errors" "fmt" + "io/ioutil" "log" + "net" + "os" "os/exec" "regexp" "runtime" "strconv" "strings" + "time" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // A driver is able to talk to VMware, control virtual machines, etc. @@ -18,21 +23,17 @@ type Driver interface { // Clone clones the VMX and the disk to the destination path. The // destination is a path to the VMX file. The disk will be copied // to that same directory. - Clone(dst string, src string) error + Clone(dst string, src string, cloneType bool) error // CompactDisk compacts a virtual disk. CompactDisk(string) error // CreateDisk creates a virtual disk with the given size. - CreateDisk(string, string, string) error + CreateDisk(string, string, string, string) error // Checks if the VMX file at the given path is running. IsRunning(string) (bool, error) - // CommHost returns the host address for the VM that is being - // managed by this driver. - CommHost(multistep.StateBag) (string, error) - // Start starts a VM specified by the path to the VMX given. Start(string, bool) error @@ -49,14 +50,33 @@ type Driver interface { // Attach the VMware tools ISO ToolsInstall() error - // Get the path to the DHCP leases file for the given device. - DhcpLeasesPath(string) string - // Verify checks to make sure that this driver should function // properly. 
This should check that all the files it will use // appear to exist and so on. If everything is okay, this doesn't - // return an error. Otherwise, this returns an error. + // return an error. Otherwise, this returns an error. Each vmware + // driver should assign the VmwareMachine callback functions for locating + // paths within this function. Verify() error + + /// This is to establish a connection to the guest + CommHost(multistep.StateBag) (string, error) + + /// These methods are generally implemented by the VmwareDriver + /// structure within this file. A driver implementation can + /// reimplement these, though, if it wants. + GetVmwareDriver() VmwareDriver + + // Get the guest hw address for the vm + GuestAddress(multistep.StateBag) (string, error) + + // Get the guest ip address for the vm + GuestIP(multistep.StateBag) (string, error) + + // Get the host hw address for the vm + HostAddress(multistep.StateBag) (string, error) + + // Get the host ip address for the vm + HostIP(multistep.StateBag) (string, error) } // NewDriver returns a new driver implementation for this operating @@ -106,6 +126,9 @@ func NewDriver(dconfig *DriverConfig, config *SSHConfig) (Driver, error) { errs := "" for _, driver := range drivers { err := driver.Verify() + log.Printf("Testing vmware driver %T. Success: %t", + driver, err == nil) + if err == nil { return driver, nil } @@ -189,3 +212,346 @@ func compareVersions(versionFound string, versionWanted string, product string) return nil } + +/// helper functions that read configuration information from a file +// read the network<->device configuration out of the specified path +func ReadNetmapConfig(path string) (NetworkMap, error) { + fd, err := os.Open(path) + if err != nil { + return nil, err + } + defer fd.Close() + return ReadNetworkMap(fd) +} + +// read the dhcp configuration out of the specified path +func ReadDhcpConfig(path string) (DhcpConfiguration, error) { + fd, err := os.Open(path) + if err != nil { + return nil, err + } + defer fd.Close() + return ReadDhcpConfiguration(fd) +} + +// read the VMX configuration from the specified path +func readVMXConfig(path string) (map[string]string, error) { + f, err := os.Open(path) + if err != nil { + return map[string]string{}, err + } + defer f.Close() + + vmxBytes, err := ioutil.ReadAll(f) + if err != nil { + return map[string]string{}, err + } + return ParseVMX(string(vmxBytes)), nil +} + +// read the connection type out of a vmx configuration +func readCustomDeviceName(vmxData map[string]string) (string, error) { + + connectionType, ok := vmxData["ethernet0.connectiontype"] + if !ok || connectionType != "custom" { + return "", fmt.Errorf("Unable to determine the device name for the connection type : %s", connectionType) + } + + device, ok := vmxData["ethernet0.vnet"] + if !ok || device == "" { + return "", fmt.Errorf("Unable to determine the device name for the connection type \"%s\" : %s", connectionType, device) + } + return device, nil +} + +// This VmwareDriver is a base class that contains default methods +// that a Driver can use or implement themselves. +type VmwareDriver struct { + /// These methods define paths that are utilized by the driver + /// A driver must overload these in order to point to the correct + /// files so that the address detection (ip and ethernet) machinery + /// works. 
+ DhcpLeasesPath func(string) string + DhcpConfPath func(string) string + VmnetnatConfPath func(string) string + + /// This method returns an object with the NetworkNameMapper interface + /// that maps network to device and vice-versa. + NetworkMapper func() (NetworkNameMapper, error) +} + +func (d *VmwareDriver) GuestAddress(state multistep.StateBag) (string, error) { + vmxPath := state.Get("vmx_path").(string) + + log.Println("Lookup up IP information...") + vmxData, err := readVMXConfig(vmxPath) + if err != nil { + return "", err + } + + var ok bool + macAddress := "" + if macAddress, ok = vmxData["ethernet0.address"]; !ok || macAddress == "" { + if macAddress, ok = vmxData["ethernet0.generatedaddress"]; !ok || macAddress == "" { + return "", errors.New("couldn't find MAC address in VMX") + } + } + log.Printf("GuestAddress found MAC address in VMX: %s", macAddress) + + res, err := net.ParseMAC(macAddress) + if err != nil { + return "", err + } + + return res.String(), nil +} + +func (d *VmwareDriver) GuestIP(state multistep.StateBag) (string, error) { + + // grab network mapper + netmap, err := d.NetworkMapper() + if err != nil { + return "", err + } + + // convert the stashed network to a device + network := state.Get("vmnetwork").(string) + devices, err := netmap.NameIntoDevices(network) + + // log them to see what was detected + for _, device := range devices { + log.Printf("GuestIP discovered device matching %s: %s", network, device) + } + + // we were unable to find the device, maybe it's a custom one... + // so, check to see if it's in the .vmx configuration + if err != nil || network == "custom" { + vmxPath := state.Get("vmx_path").(string) + vmxData, err := readVMXConfig(vmxPath) + if err != nil { + return "", err + } + + var device string + device, err = readCustomDeviceName(vmxData) + devices = append(devices, device) + if err != nil { + return "", err + } + log.Printf("GuestIP discovered custom device matching %s: %s", network, device) + } + + // figure out our MAC address for looking up the guest address + MACAddress, err := d.GuestAddress(state) + if err != nil { + return "", err + } + + for _, device := range devices { + // figure out the correct dhcp leases + dhcpLeasesPath := d.DhcpLeasesPath(device) + log.Printf("Trying DHCP leases path: %s", dhcpLeasesPath) + if dhcpLeasesPath == "" { + return "", fmt.Errorf("no DHCP leases path found for device %s", device) + } + + // open up the lease and read its contents + fh, err := os.Open(dhcpLeasesPath) + if err != nil { + log.Printf("Error while reading DHCP lease path file %s: %s", dhcpLeasesPath, err.Error()) + continue + } + defer fh.Close() + + dhcpBytes, err := ioutil.ReadAll(fh) + if err != nil { + return "", err + } + + // start grepping through the file looking for fields that we care about + var lastIp string + var lastLeaseEnd time.Time + + var curIp string + var curLeaseEnd time.Time + + ipLineRe := regexp.MustCompile(`^lease (.+?) 
{$`) + endTimeLineRe := regexp.MustCompile(`^\s*ends \d (.+?);$`) + macLineRe := regexp.MustCompile(`^\s*hardware ethernet (.+?);$`) + + for _, line := range strings.Split(string(dhcpBytes), "\n") { + // Need to trim off CR character when running in windows + line = strings.TrimRight(line, "\r") + + matches := ipLineRe.FindStringSubmatch(line) + if matches != nil { + lastIp = matches[1] + continue + } + + matches = endTimeLineRe.FindStringSubmatch(line) + if matches != nil { + lastLeaseEnd, _ = time.Parse("2006/01/02 15:04:05", matches[1]) + continue + } + + // If the mac address matches and this lease ends farther in the + // future than the last match we might have, then choose it. + matches = macLineRe.FindStringSubmatch(line) + if matches != nil && strings.EqualFold(matches[1], MACAddress) && curLeaseEnd.Before(lastLeaseEnd) { + curIp = lastIp + curLeaseEnd = lastLeaseEnd + } + } + if curIp != "" { + return curIp, nil + } + } + + return "", fmt.Errorf("None of the found device(s) %v has a DHCP lease for MAC %s", devices, MACAddress) +} + +func (d *VmwareDriver) HostAddress(state multistep.StateBag) (string, error) { + + // grab mapper for converting network<->device + netmap, err := d.NetworkMapper() + if err != nil { + return "", err + } + + // convert network to name + network := state.Get("vmnetwork").(string) + devices, err := netmap.NameIntoDevices(network) + + // log them to see what was detected + for _, device := range devices { + log.Printf("HostAddress discovered device matching %s: %s", network, device) + } + + // we were unable to find the device, maybe it's a custom one... + // so, check to see if it's in the .vmx configuration + if err != nil || network == "custom" { + vmxPath := state.Get("vmx_path").(string) + vmxData, err := readVMXConfig(vmxPath) + if err != nil { + return "", err + } + + var device string + device, err = readCustomDeviceName(vmxData) + devices = append(devices, device) + if err != nil { + return "", err + } + log.Printf("HostAddress discovered custom device matching %s: %s", network, device) + } + + var lastError error + for _, device := range devices { + // parse dhcpd configuration + pathDhcpConfig := d.DhcpConfPath(device) + if _, err := os.Stat(pathDhcpConfig); err != nil { + return "", fmt.Errorf("Could not find vmnetdhcp conf file: %s", pathDhcpConfig) + } + + config, err := ReadDhcpConfig(pathDhcpConfig) + if err != nil { + lastError = err + continue + } + + // find the entry configured in the dhcpd + interfaceConfig, err := config.HostByName(device) + if err != nil { + lastError = err + continue + } + + // finally grab the hardware address + address, err := interfaceConfig.Hardware() + if err == nil { + return address.String(), nil + } + + // we didn't find it, so search through our interfaces for the device name + interfaceList, err := net.Interfaces() + if err == nil { + return "", err + } + + names := make([]string, 0) + for _, intf := range interfaceList { + if strings.HasSuffix(strings.ToLower(intf.Name), device) { + return intf.HardwareAddr.String(), nil + } + names = append(names, intf.Name) + } + } + return "", fmt.Errorf("Unable to find host address from devices %v, last error: %s", devices, lastError) +} + +func (d *VmwareDriver) HostIP(state multistep.StateBag) (string, error) { + + // grab mapper for converting network<->device + netmap, err := d.NetworkMapper() + if err != nil { + return "", err + } + + // convert network to name + network := state.Get("vmnetwork").(string) + devices, err := netmap.NameIntoDevices(network) + + // 
log them to see what was detected + for _, device := range devices { + log.Printf("HostIP discovered device matching %s: %s", network, device) + } + + // we were unable to find the device, maybe it's a custom one... + // so, check to see if it's in the .vmx configuration + if err != nil || network == "custom" { + vmxPath := state.Get("vmx_path").(string) + vmxData, err := readVMXConfig(vmxPath) + if err != nil { + return "", err + } + + var device string + device, err = readCustomDeviceName(vmxData) + devices = append(devices, device) + if err != nil { + return "", err + } + log.Printf("HostIP discovered custom device matching %s: %s", network, device) + } + + var lastError error + for _, device := range devices { + // parse dhcpd configuration + pathDhcpConfig := d.DhcpConfPath(device) + if _, err := os.Stat(pathDhcpConfig); err != nil { + return "", fmt.Errorf("Could not find vmnetdhcp conf file: %s", pathDhcpConfig) + } + config, err := ReadDhcpConfig(pathDhcpConfig) + if err != nil { + lastError = err + continue + } + + // find the entry configured in the dhcpd + interfaceConfig, err := config.HostByName(device) + if err != nil { + lastError = err + continue + } + + address, err := interfaceConfig.IP4() + if err != nil { + lastError = err + continue + } + + return address.String(), nil + } + return "", fmt.Errorf("Unable to find host IP from devices %v, last error: %s", devices, lastError) +} diff --git a/builder/vmware/common/driver_fusion5.go b/builder/vmware/common/driver_fusion5.go index a10f11902..2599604b9 100644 --- a/builder/vmware/common/driver_fusion5.go +++ b/builder/vmware/common/driver_fusion5.go @@ -4,16 +4,19 @@ import ( "errors" "fmt" "io/ioutil" + "log" "os" "os/exec" "path/filepath" "strings" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // Fusion5Driver is a driver that can run VMware Fusion 5. type Fusion5Driver struct { + VmwareDriver + // This is the path to the "VMware Fusion.app" AppPath string @@ -21,7 +24,7 @@ type Fusion5Driver struct { SSHConfig *SSHConfig } -func (d *Fusion5Driver) Clone(dst, src string) error { +func (d *Fusion5Driver) Clone(dst, src string, linked bool) error { return errors.New("Cloning is not supported with Fusion 5. 
Please use Fusion 6+.") } @@ -39,8 +42,8 @@ func (d *Fusion5Driver) CompactDisk(diskPath string) error { return nil } -func (d *Fusion5Driver) CreateDisk(output string, size string, type_id string) error { - cmd := exec.Command(d.vdiskManagerPath(), "-c", "-s", size, "-a", "lsilogic", "-t", type_id, output) +func (d *Fusion5Driver) CreateDisk(output string, size string, adapter_type string, type_id string) error { + cmd := exec.Command(d.vdiskManagerPath(), "-c", "-s", size, "-a", adapter_type, "-t", type_id, output) if _, _, err := runAndLog(cmd); err != nil { return err } @@ -139,6 +142,33 @@ func (d *Fusion5Driver) Verify() error { return err } + libpath := filepath.Join("/", "Library", "Preferences", "VMware Fusion") + + d.VmwareDriver.DhcpLeasesPath = func(device string) string { + return "/var/db/vmware/vmnet-dhcpd-" + device + ".leases" + } + d.VmwareDriver.DhcpConfPath = func(device string) string { + return filepath.Join(libpath, device, "dhcpd.conf") + } + d.VmwareDriver.VmnetnatConfPath = func(device string) string { + return filepath.Join(libpath, device, "nat.conf") + } + d.VmwareDriver.NetworkMapper = func() (NetworkNameMapper, error) { + pathNetworking := filepath.Join(libpath, "networking") + if _, err := os.Stat(pathNetworking); err != nil { + return nil, fmt.Errorf("Could not find networking conf file: %s", pathNetworking) + } + log.Printf("Located networkmapper configuration file using Fusion5: %s", pathNetworking) + + fd, err := os.Open(pathNetworking) + if err != nil { + return nil, err + } + defer fd.Close() + + return ReadNetworkingConfig(fd) + } + return nil } @@ -158,10 +188,6 @@ func (d *Fusion5Driver) ToolsInstall() error { return nil } -func (d *Fusion5Driver) DhcpLeasesPath(device string) string { - return "/var/db/vmware/vmnet-dhcpd-" + device + ".leases" -} - const fusionSuppressPlist = ` @@ -170,3 +196,7 @@ const fusionSuppressPlist = ` ` + +func (d *Fusion5Driver) GetVmwareDriver() VmwareDriver { + return d.VmwareDriver +} diff --git a/builder/vmware/common/driver_fusion6.go b/builder/vmware/common/driver_fusion6.go index af42c7ef9..ff5e9d73e 100644 --- a/builder/vmware/common/driver_fusion6.go +++ b/builder/vmware/common/driver_fusion6.go @@ -18,11 +18,19 @@ type Fusion6Driver struct { Fusion5Driver } -func (d *Fusion6Driver) Clone(dst, src string) error { +func (d *Fusion6Driver) Clone(dst, src string, linked bool) error { + + var cloneType string + if linked { + cloneType = "linked" + } else { + cloneType = "full" + } + cmd := exec.Command(d.vmrunPath(), "-T", "fusion", "clone", src, dst, - "full") + cloneType) if _, _, err := runAndLog(cmd); err != nil { if strings.Contains(err.Error(), "parameters was invalid") { return fmt.Errorf( @@ -66,5 +74,37 @@ func (d *Fusion6Driver) Verify() error { } log.Printf("Detected VMware version: %s", matches[1]) + libpath := filepath.Join("/", "Library", "Preferences", "VMware Fusion") + + d.VmwareDriver.DhcpLeasesPath = func(device string) string { + return "/var/db/vmware/vmnet-dhcpd-" + device + ".leases" + } + d.VmwareDriver.DhcpConfPath = func(device string) string { + return filepath.Join(libpath, device, "dhcpd.conf") + } + + d.VmwareDriver.VmnetnatConfPath = func(device string) string { + return filepath.Join(libpath, device, "nat.conf") + } + d.VmwareDriver.NetworkMapper = func() (NetworkNameMapper, error) { + pathNetworking := filepath.Join(libpath, "networking") + if _, err := os.Stat(pathNetworking); err != nil { + return nil, fmt.Errorf("Could not find networking conf file: %s", pathNetworking) + } + 
log.Printf("Located networkmapper configuration file using Fusion6: %s", pathNetworking) + + fd, err := os.Open(pathNetworking) + if err != nil { + return nil, err + } + defer fd.Close() + + return ReadNetworkingConfig(fd) + } + return compareVersions(matches[1], VMWARE_FUSION_VERSION, "Fusion Professional") } + +func (d *Fusion6Driver) GetVmwareDriver() VmwareDriver { + return d.Fusion5Driver.VmwareDriver +} diff --git a/builder/vmware/common/driver_mock.go b/builder/vmware/common/driver_mock.go index fcd80a51b..019b688fc 100644 --- a/builder/vmware/common/driver_mock.go +++ b/builder/vmware/common/driver_mock.go @@ -1,9 +1,10 @@ package common import ( + "net" "sync" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) type DriverMock struct { @@ -12,17 +13,19 @@ type DriverMock struct { CloneCalled bool CloneDst string CloneSrc string + Linked bool CloneErr error CompactDiskCalled bool CompactDiskPath string CompactDiskErr error - CreateDiskCalled bool - CreateDiskOutput string - CreateDiskSize string - CreateDiskTypeId string - CreateDiskErr error + CreateDiskCalled bool + CreateDiskOutput string + CreateDiskSize string + CreateDiskAdapterType string + CreateDiskTypeId string + CreateDiskErr error IsRunningCalled bool IsRunningPath string @@ -34,6 +37,26 @@ type DriverMock struct { CommHostResult string CommHostErr error + HostAddressCalled bool + HostAddressState multistep.StateBag + HostAddressResult string + HostAddressErr error + + HostIPCalled bool + HostIPState multistep.StateBag + HostIPResult string + HostIPErr error + + GuestAddressCalled bool + GuestAddressState multistep.StateBag + GuestAddressResult string + GuestAddressErr error + + GuestIPCalled bool + GuestIPState multistep.StateBag + GuestIPResult string + GuestIPErr error + StartCalled bool StartPath string StartHeadless bool @@ -58,14 +81,38 @@ type DriverMock struct { DhcpLeasesPathDevice string DhcpLeasesPathResult string + DhcpConfPathCalled bool + DhcpConfPathResult string + + VmnetnatConfPathCalled bool + VmnetnatConfPathResult string + + NetmapConfPathCalled bool + NetmapConfPathResult string + VerifyCalled bool VerifyErr error } -func (d *DriverMock) Clone(dst string, src string) error { +type NetworkMapperMock struct { + NameIntoDeviceCalled int + DeviceIntoNameCalled int +} + +func (m NetworkMapperMock) NameIntoDevices(name string) ([]string, error) { + m.NameIntoDeviceCalled += 1 + return make([]string, 0), nil +} +func (m NetworkMapperMock) DeviceIntoName(device string) (string, error) { + m.DeviceIntoNameCalled += 1 + return "", nil +} + +func (d *DriverMock) Clone(dst string, src string, linked bool) error { d.CloneCalled = true d.CloneDst = dst d.CloneSrc = src + d.Linked = linked return d.CloneErr } @@ -75,10 +122,11 @@ func (d *DriverMock) CompactDisk(path string) error { return d.CompactDiskErr } -func (d *DriverMock) CreateDisk(output string, size string, typeId string) error { +func (d *DriverMock) CreateDisk(output string, size string, adapterType string, typeId string) error { d.CreateDiskCalled = true d.CreateDiskOutput = output d.CreateDiskSize = size + d.CreateDiskAdapterType = adapterType d.CreateDiskTypeId = typeId return d.CreateDiskErr } @@ -98,6 +146,58 @@ func (d *DriverMock) CommHost(state multistep.StateBag) (string, error) { return d.CommHostResult, d.CommHostErr } +func MockInterface() net.Interface { + interfaces, err := net.Interfaces() + + // Build a dummy interface due to being unable to enumerate interfaces + if err != nil || len(interfaces) == 0 
{ + return net.Interface{ + Index: 0, + MTU: -1, + Name: "dummy", + HardwareAddr: net.HardwareAddr{0, 0, 0, 0, 0, 0}, + Flags: net.FlagLoopback, + } + } + + // Find the first loopback interface + for _, intf := range interfaces { + if intf.Flags&net.FlagLoopback == net.FlagLoopback { + return intf + } + } + + // Fall-back to just the first one + return interfaces[0] +} + +func (d *DriverMock) HostAddress(state multistep.StateBag) (string, error) { + intf := MockInterface() + d.HostAddressResult = intf.HardwareAddr.String() + d.HostAddressCalled = true + d.HostAddressState = state + return d.HostAddressResult, d.HostAddressErr +} + +func (d *DriverMock) HostIP(state multistep.StateBag) (string, error) { + d.HostIPResult = "127.0.0.1" + d.HostIPCalled = true + d.HostIPState = state + return d.HostIPResult, d.HostIPErr +} + +func (d *DriverMock) GuestAddress(state multistep.StateBag) (string, error) { + d.GuestAddressCalled = true + d.GuestAddressState = state + return d.GuestAddressResult, d.GuestAddressErr +} + +func (d *DriverMock) GuestIP(state multistep.StateBag) (string, error) { + d.GuestIPCalled = true + d.GuestIPState = state + return d.GuestIPResult, d.GuestIPErr +} + func (d *DriverMock) Start(path string, headless bool) error { d.StartCalled = true d.StartPath = path @@ -134,7 +234,39 @@ func (d *DriverMock) DhcpLeasesPath(device string) string { return d.DhcpLeasesPathResult } +func (d *DriverMock) DhcpConfPath(device string) string { + d.DhcpConfPathCalled = true + return d.DhcpConfPathResult +} + +func (d *DriverMock) VmnetnatConfPath(device string) string { + d.VmnetnatConfPathCalled = true + return d.VmnetnatConfPathResult +} + +func (d *DriverMock) NetmapConfPath() string { + d.NetmapConfPathCalled = true + return d.NetmapConfPathResult +} + func (d *DriverMock) Verify() error { d.VerifyCalled = true return d.VerifyErr } + +func (d *DriverMock) GetVmwareDriver() VmwareDriver { + var state VmwareDriver + state.DhcpLeasesPath = func(string) string { + return "/path/to/dhcp.leases" + } + state.DhcpConfPath = func(string) string { + return "/path/to/dhcp.conf" + } + state.VmnetnatConfPath = func(string) string { + return "/path/to/vmnetnat.conf" + } + state.NetworkMapper = func() (NetworkNameMapper, error) { + return NetworkMapperMock{}, nil + } + return state +} diff --git a/builder/vmware/common/driver_parser.go b/builder/vmware/common/driver_parser.go new file mode 100644 index 000000000..fc54b8221 --- /dev/null +++ b/builder/vmware/common/driver_parser.go @@ -0,0 +1,1973 @@ +package common + +import ( + "fmt" + "io" + "log" + "math" + "net" + "os" + "reflect" + "sort" + "strconv" + "strings" +) + +type sentinelSignaller chan struct{} + +/** low-level parsing */ +// strip the comments and extraneous newlines from a byte channel +func uncomment(eof sentinelSignaller, in <-chan byte) (chan byte, sentinelSignaller) { + out := make(chan byte) + eoc := make(sentinelSignaller) + + go func(in <-chan byte, out chan byte) { + var endofline bool + for stillReading := true; stillReading; { + select { + case <-eof: + stillReading = false + case ch := <-in: + switch ch { + case '#': + endofline = true + case '\n': + if endofline { + endofline = false + } + } + if !endofline { + out <- ch + } + } + } + close(eoc) + }(in, out) + return out, eoc +} + +// convert a byte channel into a channel of pseudo-tokens +func tokenizeDhcpConfig(eof sentinelSignaller, in chan byte) (chan string, sentinelSignaller) { + var ch byte + var state string + var quote bool + + eot := make(sentinelSignaller) + + 
out := make(chan string) + go func(out chan string) { + for stillReading := true; stillReading; { + select { + case <-eof: + stillReading = false + + case ch = <-in: + if quote { + if ch == '"' { + out <- state + string(ch) + state, quote = "", false + continue + } + state += string(ch) + continue + } + + switch ch { + case '"': + quote = true + state += string(ch) + continue + + case '\r': + fallthrough + case '\n': + fallthrough + case '\t': + fallthrough + case ' ': + if len(state) == 0 { + continue + } + out <- state + state = "" + + case '{': + fallthrough + case '}': + fallthrough + case ';': + if len(state) > 0 { + out <- state + } + out <- string(ch) + state = "" + + default: + state += string(ch) + } + } + } + if len(state) > 0 { + out <- state + } + close(eot) + }(out) + return out, eot +} + +/** mid-level parsing */ +type tkParameter struct { + name string + operand []string +} + +func (e *tkParameter) String() string { + var values []string + for _, val := range e.operand { + values = append(values, val) + } + return fmt.Sprintf("%s [%s]", e.name, strings.Join(values, ",")) +} + +type tkGroup struct { + parent *tkGroup + id tkParameter + + groups []*tkGroup + params []tkParameter +} + +func (e *tkGroup) String() string { + var id []string + + id = append(id, e.id.name) + for _, val := range e.id.operand { + id = append(id, val) + } + + var config []string + for _, val := range e.params { + config = append(config, val.String()) + } + return fmt.Sprintf("%s {\n%s\n}", strings.Join(id, " "), strings.Join(config, "\n")) +} + +// convert a channel of pseudo-tokens into an tkParameter struct +func parseTokenParameter(in chan string) tkParameter { + var result tkParameter + for { + token := <-in + if result.name == "" { + result.name = token + continue + } + switch token { + case "{": + fallthrough + case "}": + fallthrough + case ";": + goto leave + default: + result.operand = append(result.operand, token) + } + } +leave: + return result +} + +// convert a channel of pseudo-tokens into an tkGroup tree */ +func parseDhcpConfig(eof sentinelSignaller, in chan string) (tkGroup, error) { + var tokens []string + var result tkGroup + + toParameter := func(tokens []string) tkParameter { + out := make(chan string) + go func(out chan string) { + for _, v := range tokens { + out <- v + } + out <- ";" + }(out) + return parseTokenParameter(out) + } + + for stillReading, currentgroup := true, &result; stillReading; { + select { + case <-eof: + stillReading = false + + case tk := <-in: + switch tk { + case "{": + grp := &tkGroup{parent: currentgroup} + grp.id = toParameter(tokens) + currentgroup.groups = append(currentgroup.groups, grp) + currentgroup = grp + case "}": + if currentgroup.parent == nil { + return tkGroup{}, fmt.Errorf("Unable to close the global declaration") + } + if len(tokens) > 0 { + return tkGroup{}, fmt.Errorf("List of tokens was left unterminated : %v", tokens) + } + currentgroup = currentgroup.parent + case ";": + arg := toParameter(tokens) + currentgroup.params = append(currentgroup.params, arg) + default: + tokens = append(tokens, tk) + continue + } + tokens = []string{} + } + } + return result, nil +} + +func tokenizeNetworkMapConfig(eof sentinelSignaller, in chan byte) (chan string, sentinelSignaller) { + var ch byte + var state string + var quote bool + var lastnewline bool + + eot := make(sentinelSignaller) + + out := make(chan string) + go func(out chan string) { + for stillReading := true; stillReading; { + select { + case <-eof: + stillReading = false + + case ch = 
<-in: + if quote { + if ch == '"' { + out <- state + string(ch) + state, quote = "", false + continue + } + state += string(ch) + continue + } + + switch ch { + case '"': + quote = true + state += string(ch) + continue + + case '\r': + fallthrough + case '\t': + fallthrough + case ' ': + if len(state) == 0 { + continue + } + out <- state + state = "" + + case '\n': + if lastnewline { + continue + } + if len(state) > 0 { + out <- state + } + out <- string(ch) + state = "" + lastnewline = true + continue + + case '.': + fallthrough + case '=': + if len(state) > 0 { + out <- state + } + out <- string(ch) + state = "" + + default: + state += string(ch) + } + lastnewline = false + } + } + if len(state) > 0 { + out <- state + } + close(eot) + }(out) + return out, eot +} + +func parseNetworkMapConfig(eof sentinelSignaller, in chan string) (NetworkMap, error) { + var unsorted map[string]map[string]string + var state []string + + addResult := func(network string, attribute string, value string) error { + _, ok := unsorted[network] + if !ok { + unsorted[network] = make(map[string]string) + } + + val, err := strconv.Unquote(value) + if err != nil { + return err + } + + current := unsorted[network] + current[attribute] = val + return nil + } + + stillReading := true + for unsorted = make(map[string]map[string]string); stillReading; { + select { + case <-eof: + if len(state) == 3 { + err := addResult(state[0], state[1], state[2]) + if err != nil { + return nil, err + } + } + stillReading = false + case tk := <-in: + switch tk { + case ".": + if len(state) != 1 { + return nil, fmt.Errorf("Missing network index") + } + case "=": + if len(state) != 2 { + return nil, fmt.Errorf("Assignment to empty attribute") + } + case "\n": + if len(state) == 0 { + continue + } + if len(state) != 3 { + return nil, fmt.Errorf("Invalid attribute assignment : %v", state) + } + err := addResult(state[0], state[1], state[2]) + if err != nil { + return nil, err + } + state = make([]string, 0) + default: + state = append(state, tk) + } + } + } + result := make([]map[string]string, 0) + var keys []string + for k := range unsorted { + keys = append(keys, k) + } + sort.Strings(keys) + for _, k := range keys { + result = append(result, unsorted[k]) + } + return result, nil +} + +/** higher-level parsing */ +/// parameters +type pParameter interface { + repr() string +} + +type pParameterInclude struct { + filename string +} + +func (e pParameterInclude) repr() string { return fmt.Sprintf("include-file:filename=%s", e.filename) } + +type pParameterOption struct { + name string + value string +} + +func (e pParameterOption) repr() string { return fmt.Sprintf("option:%s=%s", e.name, e.value) } + +// allow some-kind-of-something +type pParameterGrant struct { + verb string // allow,deny,ignore + attribute string +} + +func (e pParameterGrant) repr() string { return fmt.Sprintf("grant:%s,%s", e.verb, e.attribute) } + +type pParameterAddress4 []string + +func (e pParameterAddress4) repr() string { + return fmt.Sprintf("fixed-address4:%s", strings.Join(e, ",")) +} + +type pParameterAddress6 []string + +func (e pParameterAddress6) repr() string { + return fmt.Sprintf("fixed-address6:%s", strings.Join(e, ",")) +} + +// hardware address 00:00:00:00:00:00 +type pParameterHardware struct { + class string + address []byte +} + +func (e pParameterHardware) repr() string { + res := make([]string, 0) + for _, v := range e.address { + res = append(res, fmt.Sprintf("%02x", v)) + } + return fmt.Sprintf("hardware-address:%s[%s]", e.class, 
strings.Join(res, ":")) +} + +type pParameterBoolean struct { + parameter string + truancy bool +} + +func (e pParameterBoolean) repr() string { return fmt.Sprintf("boolean:%s=%v", e.parameter, e.truancy) } + +type pParameterClientMatch struct { + name string + data string +} + +func (e pParameterClientMatch) repr() string { return fmt.Sprintf("match-client:%s=%s", e.name, e.data) } + +// range 127.0.0.1 127.0.0.255 +type pParameterRange4 struct { + min net.IP + max net.IP +} + +func (e pParameterRange4) repr() string { + return fmt.Sprintf("range4:%s-%s", e.min.String(), e.max.String()) +} + +type pParameterRange6 struct { + min net.IP + max net.IP +} + +func (e pParameterRange6) repr() string { + return fmt.Sprintf("range6:%s-%s", e.min.String(), e.max.String()) +} + +type pParameterPrefix6 struct { + min net.IP + max net.IP + bits int +} + +func (e pParameterPrefix6) repr() string { + return fmt.Sprintf("prefix6:/%d:%s-%s", e.bits, e.min.String(), e.max.String()) +} + +// some-kind-of-parameter 1024 +type pParameterOther struct { + parameter string + value string +} + +func (e pParameterOther) repr() string { return fmt.Sprintf("parameter:%s=%s", e.parameter, e.value) } + +type pParameterExpression struct { + parameter string + expression string +} + +func (e pParameterExpression) repr() string { + return fmt.Sprintf("parameter-expression:%s=\"%s\"", e.parameter, e.expression) +} + +type pDeclarationIdentifier interface { + repr() string +} + +type pDeclaration struct { + id pDeclarationIdentifier + parent *pDeclaration + parameters []pParameter + declarations []pDeclaration +} + +func (e *pDeclaration) short() string { + return e.id.repr() +} + +func (e *pDeclaration) repr() string { + res := e.short() + + var parameters []string + for _, v := range e.parameters { + parameters = append(parameters, v.repr()) + } + + var groups []string + for _, v := range e.declarations { + groups = append(groups, fmt.Sprintf("-> %s", v.short())) + } + + if e.parent != nil { + res = fmt.Sprintf("%s parent:%s", res, e.parent.short()) + } + return fmt.Sprintf("%s\n%s\n%s\n", res, strings.Join(parameters, "\n"), strings.Join(groups, "\n")) +} + +type pDeclarationGlobal struct{} + +func (e pDeclarationGlobal) repr() string { return fmt.Sprintf("{global}") } + +type pDeclarationShared struct{ name string } + +func (e pDeclarationShared) repr() string { return fmt.Sprintf("{shared-network %s}", e.name) } + +type pDeclarationSubnet4 struct{ net.IPNet } + +func (e pDeclarationSubnet4) repr() string { return fmt.Sprintf("{subnet4 %s}", e.String()) } + +type pDeclarationSubnet6 struct{ net.IPNet } + +func (e pDeclarationSubnet6) repr() string { return fmt.Sprintf("{subnet6 %s}", e.String()) } + +type pDeclarationHost struct{ name string } + +func (e pDeclarationHost) repr() string { return fmt.Sprintf("{host name:%s}", e.name) } + +type pDeclarationPool struct{} + +func (e pDeclarationPool) repr() string { return fmt.Sprintf("{pool}") } + +type pDeclarationGroup struct{} + +func (e pDeclarationGroup) repr() string { return fmt.Sprintf("{group}") } + +type pDeclarationClass struct{ name string } + +func (e pDeclarationClass) repr() string { return fmt.Sprintf("{class}") } + +/** parsers */ +func parseParameter(val tkParameter) (pParameter, error) { + switch val.name { + case "include": + if len(val.operand) != 2 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterInclude : %v", val.operand) + } + name := val.operand[0] + return pParameterInclude{filename: name}, nil + + case "option": + if 
len(val.operand) != 2 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterOption : %v", val.operand) + } + name, value := val.operand[0], val.operand[1] + return pParameterOption{name: name, value: value}, nil + + case "allow": + fallthrough + case "deny": + fallthrough + case "ignore": + if len(val.operand) < 1 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterGrant : %v", val.operand) + } + attribute := strings.Join(val.operand, " ") + return pParameterGrant{verb: strings.ToLower(val.name), attribute: attribute}, nil + + case "range": + if len(val.operand) < 1 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterRange4 : %v", val.operand) + } + idxAddress := map[bool]int{true: 1, false: 0}[strings.ToLower(val.operand[0]) == "bootp"] + if len(val.operand) > 2+idxAddress { + return nil, fmt.Errorf("Invalid number of parameters for pParameterRange : %v", val.operand) + } + if idxAddress+1 > len(val.operand) { + res := net.ParseIP(val.operand[idxAddress]) + return pParameterRange4{min: res, max: res}, nil + } + addr1 := net.ParseIP(val.operand[idxAddress]) + addr2 := net.ParseIP(val.operand[idxAddress+1]) + return pParameterRange4{min: addr1, max: addr2}, nil + + case "range6": + if len(val.operand) == 1 { + address := val.operand[0] + if strings.Contains(address, "/") { + cidr := strings.SplitN(address, "/", 2) + if len(cidr) != 2 { + return nil, fmt.Errorf("Unknown ipv6 format : %v", address) + } + address := net.ParseIP(cidr[0]) + bits, err := strconv.Atoi(cidr[1]) + if err != nil { + return nil, err + } + mask := net.CIDRMask(bits, net.IPv6len*8) + + // figure out the network address + network := address.Mask(mask) + + // make a broadcast address + broadcast := network + networkSize, totalSize := mask.Size() + hostSize := totalSize - networkSize + for i := networkSize / 8; i < totalSize/8; i++ { + broadcast[i] = byte(0xff) + } + octetIndex := network[networkSize/8] + bitsLeft := (uint32)(hostSize % 8) + broadcast[octetIndex] = network[octetIndex] | ((1 << bitsLeft) - 1) + + // FIXME: check that the broadcast address was made correctly + return pParameterRange6{min: network, max: broadcast}, nil + } + res := net.ParseIP(address) + return pParameterRange6{min: res, max: res}, nil + } + if len(val.operand) == 2 { + addr := net.ParseIP(val.operand[0]) + if strings.ToLower(val.operand[1]) == "temporary" { + return pParameterRange6{min: addr, max: addr}, nil + } + other := net.ParseIP(val.operand[1]) + return pParameterRange6{min: addr, max: other}, nil + } + return nil, fmt.Errorf("Invalid number of parameters for pParameterRange6 : %v", val.operand) + + case "prefix6": + if len(val.operand) != 3 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterRange6 : %v", val.operand) + } + bits, err := strconv.Atoi(val.operand[2]) + if err != nil { + return nil, fmt.Errorf("Invalid bits for pParameterPrefix6 : %v", val.operand[2]) + } + minaddr := net.ParseIP(val.operand[0]) + maxaddr := net.ParseIP(val.operand[1]) + return pParameterPrefix6{min: minaddr, max: maxaddr, bits: bits}, nil + + case "hardware": + if len(val.operand) != 2 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterHardware : %v", val.operand) + } + class := val.operand[0] + octets := strings.Split(val.operand[1], ":") + address := make([]byte, 0) + for _, v := range octets { + b, err := strconv.ParseInt(v, 16, 0) + if err != nil { + return nil, err + } + address = append(address, byte(b)) + } + return pParameterHardware{class: class, 
address: address}, nil + + case "fixed-address": + ip4addrs := make(pParameterAddress4, len(val.operand)) + copy(ip4addrs, val.operand) + return ip4addrs, nil + + case "fixed-address6": + ip6addrs := make(pParameterAddress6, len(val.operand)) + copy(ip6addrs, val.operand) + return ip6addrs, nil + + case "host-identifier": + if len(val.operand) != 3 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterClientMatch : %v", val.operand) + } + if val.operand[0] != "option" { + return nil, fmt.Errorf("Invalid match parameter : %v", val.operand[0]) + } + optionName := val.operand[1] + optionData := val.operand[2] + return pParameterClientMatch{name: optionName, data: optionData}, nil + + default: + length := len(val.operand) + if length < 1 { + return pParameterBoolean{parameter: val.name, truancy: true}, nil + } else if length > 1 { + if val.operand[0] == "=" { + return pParameterExpression{parameter: val.name, expression: strings.Join(val.operand[1:], "")}, nil + } + } + if length != 1 { + return nil, fmt.Errorf("Invalid number of parameters for pParameterOther : %v", val.operand) + } + if strings.ToLower(val.name) == "not" { + return pParameterBoolean{parameter: val.operand[0], truancy: false}, nil + } + return pParameterOther{parameter: val.name, value: val.operand[0]}, nil + } +} + +func parseTokenGroup(val tkGroup) (*pDeclaration, error) { + params := val.id.operand + switch val.id.name { + case "group": + return &pDeclaration{id: pDeclarationGroup{}}, nil + + case "pool": + return &pDeclaration{id: pDeclarationPool{}}, nil + + case "host": + if len(params) == 1 { + return &pDeclaration{id: pDeclarationHost{name: params[0]}}, nil + } + + case "subnet": + if len(params) == 3 && strings.ToLower(params[1]) == "netmask" { + addr := make([]byte, 4) + for i, v := range strings.SplitN(params[2], ".", 4) { + res, err := strconv.ParseInt(v, 10, 0) + if err != nil { + return nil, err + } + addr[i] = byte(res) + } + oc1, oc2, oc3, oc4 := addr[0], addr[1], addr[2], addr[3] + if subnet, mask := net.ParseIP(params[0]), net.IPv4Mask(oc1, oc2, oc3, oc4); subnet != nil && mask != nil { + return &pDeclaration{id: pDeclarationSubnet4{net.IPNet{IP: subnet, Mask: mask}}}, nil + } + } + case "subnet6": + if len(params) == 1 { + ip6 := strings.SplitN(params[0], "/", 2) + if len(ip6) == 2 && strings.Contains(ip6[0], ":") { + address := net.ParseIP(ip6[0]) + prefix, err := strconv.Atoi(ip6[1]) + if err != nil { + return nil, err + } + return &pDeclaration{id: pDeclarationSubnet6{net.IPNet{IP: address, Mask: net.CIDRMask(prefix, net.IPv6len*8)}}}, nil + } + } + case "shared-network": + if len(params) == 1 { + return &pDeclaration{id: pDeclarationShared{name: params[0]}}, nil + } + case "": + return &pDeclaration{id: pDeclarationGlobal{}}, nil + } + return nil, fmt.Errorf("Invalid pDeclaration : %v : %v", val.id.name, params) +} + +func flattenDhcpConfig(root tkGroup) (*pDeclaration, error) { + var result *pDeclaration + result, err := parseTokenGroup(root) + if err != nil { + return nil, err + } + + for _, p := range root.params { + param, err := parseParameter(p) + if err != nil { + return nil, err + } + result.parameters = append(result.parameters, param) + } + for _, p := range root.groups { + group, err := flattenDhcpConfig(*p) + if err != nil { + return nil, err + } + group.parent = result + result.declarations = append(result.declarations, *group) + } + return result, nil +} + +/** reduce the tree into the things that we care about */ +type grant uint + +const ( + ALLOW grant = iota + IGNORE 
grant = iota + DENY grant = iota +) + +type configDeclaration struct { + id []pDeclarationIdentifier + composites []pDeclaration + + address []pParameter + + options map[string]string + grants map[string]grant + attributes map[string]bool + parameters map[string]string + expressions map[string]string + + hostid []pParameterClientMatch +} + +func createDeclaration(node pDeclaration) configDeclaration { + var hierarchy []pDeclaration + + for n := &node; n != nil; n = n.parent { + hierarchy = append(hierarchy, *n) + } + + var result configDeclaration + result.address = make([]pParameter, 0) + + result.options = make(map[string]string) + result.grants = make(map[string]grant) + result.attributes = make(map[string]bool) + result.parameters = make(map[string]string) + result.expressions = make(map[string]string) + + result.hostid = make([]pParameterClientMatch, 0) + + // walk from globals to pDeclaration collecting all parameters + for i := len(hierarchy) - 1; i >= 0; i-- { + result.composites = append(result.composites, hierarchy[(len(hierarchy)-1)-i]) + result.id = append(result.id, hierarchy[(len(hierarchy)-1)-i].id) + + // update configDeclaration parameters + for _, p := range hierarchy[i].parameters { + switch p.(type) { + case pParameterOption: + result.options[p.(pParameterOption).name] = p.(pParameterOption).value + case pParameterGrant: + Grant := map[string]grant{"ignore": IGNORE, "allow": ALLOW, "deny": DENY} + result.grants[p.(pParameterGrant).attribute] = Grant[p.(pParameterGrant).verb] + case pParameterBoolean: + result.attributes[p.(pParameterBoolean).parameter] = p.(pParameterBoolean).truancy + case pParameterClientMatch: + result.hostid = append(result.hostid, p.(pParameterClientMatch)) + case pParameterExpression: + result.expressions[p.(pParameterExpression).parameter] = p.(pParameterExpression).expression + case pParameterOther: + result.parameters[p.(pParameterOther).parameter] = p.(pParameterOther).value + default: + result.address = append(result.address, p) + } + } + } + return result +} + +func (e *configDeclaration) repr() string { + var result []string + + var res []string + + res = make([]string, 0) + for _, v := range e.id { + res = append(res, v.repr()) + } + result = append(result, strings.Join(res, ",")) + + if len(e.address) > 0 { + res = make([]string, 0) + for _, v := range e.address { + res = append(res, v.repr()) + } + result = append(result, fmt.Sprintf("address : %v", strings.Join(res, ","))) + } + + if len(e.options) > 0 { + result = append(result, fmt.Sprintf("options : %v", e.options)) + } + if len(e.grants) > 0 { + result = append(result, fmt.Sprintf("grants : %v", e.grants)) + } + if len(e.attributes) > 0 { + result = append(result, fmt.Sprintf("attributes : %v", e.attributes)) + } + if len(e.parameters) > 0 { + result = append(result, fmt.Sprintf("parameters : %v", e.parameters)) + } + if len(e.expressions) > 0 { + result = append(result, fmt.Sprintf("parameter-expressions : %v", e.expressions)) + } + + if len(e.hostid) > 0 { + res = make([]string, 0) + for _, v := range e.hostid { + res = append(res, v.repr()) + } + result = append(result, fmt.Sprintf("hostid : %v", strings.Join(res, " "))) + } + return strings.Join(result, "\n") + "\n" +} + +func (e *configDeclaration) IP4() (net.IP, error) { + var result []string + for _, entry := range e.address { + switch entry.(type) { + case pParameterAddress4: + for _, s := range entry.(pParameterAddress4) { + result = append(result, s) + } + } + } + if len(result) > 1 { + return nil, fmt.Errorf("More than 
one address4 returned : %v", result) + } else if len(result) == 0 { + return nil, fmt.Errorf("No IP4 addresses found") + } + + if res := net.ParseIP(result[0]); res != nil { + return res, nil + } + res, err := net.ResolveIPAddr("ip4", result[0]) + if err != nil { + return nil, err + } + return res.IP, nil +} +func (e *configDeclaration) IP6() (net.IP, error) { + var result []string + for _, entry := range e.address { + switch entry.(type) { + case pParameterAddress6: + for _, s := range entry.(pParameterAddress6) { + result = append(result, s) + } + } + } + if len(result) > 1 { + return nil, fmt.Errorf("More than one address6 returned : %v", result) + } else if len(result) == 0 { + return nil, fmt.Errorf("No IP6 addresses found") + } + + if res := net.ParseIP(result[0]); res != nil { + return res, nil + } + res, err := net.ResolveIPAddr("ip6", result[0]) + if err != nil { + return nil, err + } + return res.IP, nil +} +func (e *configDeclaration) Hardware() (net.HardwareAddr, error) { + var result []pParameterHardware + for _, addr := range e.address { + switch addr.(type) { + case pParameterHardware: + result = append(result, addr.(pParameterHardware)) + } + } + if len(result) > 0 { + return nil, fmt.Errorf("More than one hardware address returned : %v", result) + } + res := make(net.HardwareAddr, 0) + for _, by := range result[0].address { + res = append(res, by) + } + return res, nil +} + +/*** Dhcp Configuration */ +type DhcpConfiguration []configDeclaration + +func ReadDhcpConfiguration(fd *os.File) (DhcpConfiguration, error) { + fromfile, eof := consumeFile(fd) + uncommented, eoc := uncomment(eof, fromfile) + tokenized, eot := tokenizeDhcpConfig(eoc, uncommented) + parsetree, err := parseDhcpConfig(eot, tokenized) + if err != nil { + return nil, err + } + + global, err := flattenDhcpConfig(parsetree) + if err != nil { + return nil, err + } + + var walkDeclarations func(root pDeclaration, out chan *configDeclaration) + walkDeclarations = func(root pDeclaration, out chan *configDeclaration) { + res := createDeclaration(root) + out <- &res + for _, p := range root.declarations { + walkDeclarations(p, out) + } + } + + each := make(chan *configDeclaration) + go func(out chan *configDeclaration) { + walkDeclarations(*global, out) + out <- nil + }(each) + + var result DhcpConfiguration + for decl := <-each; decl != nil; decl = <-each { + result = append(result, *decl) + } + return result, nil +} + +func (e *DhcpConfiguration) Global() configDeclaration { + result := (*e)[0] + if len(result.id) != 1 { + panic(fmt.Errorf("Something that can't happen happened")) + } + return result +} + +func (e *DhcpConfiguration) SubnetByAddress(address net.IP) (configDeclaration, error) { + var result []configDeclaration + for _, entry := range *e { + switch entry.id[0].(type) { + case pDeclarationSubnet4: + id := entry.id[0].(pDeclarationSubnet4) + if id.Contains(address) { + result = append(result, entry) + } + case pDeclarationSubnet6: + id := entry.id[0].(pDeclarationSubnet6) + if id.Contains(address) { + result = append(result, entry) + } + } + } + if len(result) == 0 { + return configDeclaration{}, fmt.Errorf("No network declarations containing %s found", address.String()) + } + if len(result) > 1 { + return configDeclaration{}, fmt.Errorf("More than 1 network declaration found : %v", result) + } + return result[0], nil +} + +func (e *DhcpConfiguration) HostByName(host string) (configDeclaration, error) { + var result []configDeclaration + for _, entry := range *e { + switch entry.id[0].(type) { + case 
pDeclarationHost: + id := entry.id[0].(pDeclarationHost) + if strings.ToLower(id.name) == strings.ToLower(host) { + result = append(result, entry) + } + } + } + if len(result) == 0 { + return configDeclaration{}, fmt.Errorf("No host declarations containing %s found", host) + } + if len(result) > 1 { + return configDeclaration{}, fmt.Errorf("More than 1 host declaration found : %v", result) + } + return result[0], nil +} + +/*** Network Map */ +type NetworkMap []map[string]string + +type NetworkNameMapper interface { + NameIntoDevices(string) ([]string, error) + DeviceIntoName(string) (string, error) +} + +func ReadNetworkMap(fd *os.File) (NetworkMap, error) { + + fromfile, eof := consumeFile(fd) + uncommented, eoc := uncomment(eof, fromfile) + tokenized, eot := tokenizeNetworkMapConfig(eoc, uncommented) + + result, err := parseNetworkMapConfig(eot, tokenized) + if err != nil { + return nil, err + } + return result, nil +} + +func (e NetworkMap) NameIntoDevices(name string) ([]string, error) { + var devices []string + for _, val := range e { + if strings.ToLower(val["name"]) == strings.ToLower(name) { + devices = append(devices, val["device"]) + } + } + if len(devices) > 0 { + return devices, nil + } + return make([]string, 0), fmt.Errorf("Network name not found : %v", name) +} +func (e NetworkMap) DeviceIntoName(device string) (string, error) { + for _, val := range e { + if strings.ToLower(val["device"]) == strings.ToLower(device) { + return val["name"], nil + } + } + return "", fmt.Errorf("Device name not found : %v", device) +} +func (e *NetworkMap) repr() string { + var result []string + for idx, val := range *e { + result = append(result, fmt.Sprintf("network%d.name = \"%s\"", idx, val["name"])) + result = append(result, fmt.Sprintf("network%d.device = \"%s\"", idx, val["device"])) + } + return strings.Join(result, "\n") +} + +/*** parser for VMware Fusion's networking file */ +func tokenizeNetworkingConfig(eof sentinelSignaller, in chan byte) (chan string, sentinelSignaller) { + var ch byte + var state string + var repeat_newline bool + + eot := make(sentinelSignaller) + + out := make(chan string) + go func(out chan string) { + for reading := true; reading; { + select { + case <-eof: + reading = false + + case ch = <-in: + switch ch { + case '\r': + fallthrough + case '\t': + fallthrough + case ' ': + if len(state) == 0 { + continue + } + out <- state + state = "" + case '\n': + if repeat_newline { + continue + } + if len(state) > 0 { + out <- state + } + out <- string(ch) + state = "" + repeat_newline = true + continue + default: + state += string(ch) + } + repeat_newline = false + } + } + if len(state) > 0 { + out <- state + } + close(eot) + }(out) + return out, eot +} + +func splitNetworkingConfig(eof sentinelSignaller, in chan string) (chan []string, sentinelSignaller) { + var out chan []string + + eos := make(sentinelSignaller) + + out = make(chan []string) + go func(out chan []string) { + row := make([]string, 0) + for reading := true; reading; { + select { + case <-eof: + reading = false + + case tk := <-in: + switch tk { + case "\n": + if len(row) > 0 { + out <- row + } + row = make([]string, 0) + default: + row = append(row, tk) + } + } + } + if len(row) > 0 { + out <- row + } + close(eos) + }(out) + return out, eos +} + +/// All token types in networking file. 
+// VERSION token +type networkingVERSION struct { + value string +} + +func networkingReadVersion(row []string) (*networkingVERSION, error) { + if len(row) != 1 { + return nil, fmt.Errorf("Unexpected format for VERSION entry : %v", row) + } + res := &networkingVERSION{value: row[0]} + if !res.Valid() { + return nil, fmt.Errorf("Unexpected format for VERSION entry : %v", row) + } + return res, nil +} + +func (s networkingVERSION) Repr() string { + if !s.Valid() { + return fmt.Sprintf("VERSION{INVALID=\"%v\"}", s.value) + } + return fmt.Sprintf("VERSION{%f}", s.Number()) +} + +func (s networkingVERSION) Valid() bool { + tokens := strings.SplitN(s.value, "=", 2) + if len(tokens) != 2 || tokens[0] != "VERSION" { + return false + } + + tokens = strings.Split(tokens[1], ",") + if len(tokens) != 2 { + return false + } + + for _, t := range tokens { + _, err := strconv.ParseUint(t, 10, 64) + if err != nil { + return false + } + } + return true +} + +func (s networkingVERSION) Number() float64 { + var result float64 + tokens := strings.SplitN(s.value, "=", 2) + tokens = strings.Split(tokens[1], ",") + + integer, err := strconv.ParseUint(tokens[0], 10, 64) + if err != nil { + integer = 0 + } + result = float64(integer) + + mantissa, err := strconv.ParseUint(tokens[1], 10, 64) + if err != nil { + return result + } + denomination := math.Pow(10.0, float64(len(tokens[1]))) + return result + (float64(mantissa) / denomination) +} + +// VNET_X token +type networkingVNET struct { + value string +} + +func (s networkingVNET) Valid() bool { + if strings.ToUpper(s.value) != s.value { + return false + } + tokens := strings.SplitN(s.value, "_", 3) + if len(tokens) != 3 || tokens[0] != "VNET" { + return false + } + _, err := strconv.ParseUint(tokens[1], 10, 64) + if err != nil { + return false + } + return true +} + +func (s networkingVNET) Number() int { + tokens := strings.SplitN(s.value, "_", 3) + res, err := strconv.Atoi(tokens[1]) + if err != nil { + return ^int(0) + } + return res +} + +func (s networkingVNET) Option() string { + tokens := strings.SplitN(s.value, "_", 3) + if len(tokens) == 3 { + return tokens[2] + } + return "" +} + +func (s networkingVNET) Repr() string { + if !s.Valid() { + tokens := strings.SplitN(s.value, "_", 3) + return fmt.Sprintf("VNET{INVALID=%v}", tokens) + } + return fmt.Sprintf("VNET{%d} %s", s.Number(), s.Option()) +} + +// Interface name +type networkingInterface struct { + name string +} + +func (s networkingInterface) Interface() (*net.Interface, error) { + return net.InterfaceByName(s.name) +} + +// networking command entry types +type networkingCommandEntry_answer struct { + vnet networkingVNET + value string +} +type networkingCommandEntry_remove_answer struct { + vnet networkingVNET +} +type networkingCommandEntry_add_nat_portfwd struct { + vnet int + protocol string + port int + target_host net.IP + target_port int +} +type networkingCommandEntry_remove_nat_portfwd struct { + vnet int + protocol string + port int +} +type networkingCommandEntry_add_dhcp_mac_to_ip struct { + vnet int + mac net.HardwareAddr + ip net.IP +} +type networkingCommandEntry_remove_dhcp_mac_to_ip struct { + vnet int + mac net.HardwareAddr +} +type networkingCommandEntry_add_bridge_mapping struct { + intf networkingInterface + vnet int +} +type networkingCommandEntry_remove_bridge_mapping struct { + intf networkingInterface +} +type networkingCommandEntry_add_nat_prefix struct { + vnet int + prefix int +} +type networkingCommandEntry_remove_nat_prefix struct { + vnet int + prefix int +} + +type 
networkingCommandEntry struct { + entry interface{} + answer *networkingCommandEntry_answer + remove_answer *networkingCommandEntry_remove_answer + add_nat_portfwd *networkingCommandEntry_add_nat_portfwd + remove_nat_portfwd *networkingCommandEntry_remove_nat_portfwd + add_dhcp_mac_to_ip *networkingCommandEntry_add_dhcp_mac_to_ip + remove_dhcp_mac_to_ip *networkingCommandEntry_remove_dhcp_mac_to_ip + add_bridge_mapping *networkingCommandEntry_add_bridge_mapping + remove_bridge_mapping *networkingCommandEntry_remove_bridge_mapping + add_nat_prefix *networkingCommandEntry_add_nat_prefix + remove_nat_prefix *networkingCommandEntry_remove_nat_prefix +} + +func (e networkingCommandEntry) Name() string { + switch e.entry.(type) { + case networkingCommandEntry_answer: + return "answer" + case networkingCommandEntry_remove_answer: + return "remove_answer" + case networkingCommandEntry_add_nat_portfwd: + return "add_nat_portfwd" + case networkingCommandEntry_remove_nat_portfwd: + return "remove_nat_portfwd" + case networkingCommandEntry_add_dhcp_mac_to_ip: + return "add_dhcp_mac_to_ip" + case networkingCommandEntry_remove_dhcp_mac_to_ip: + return "remove_dhcp_mac_to_ip" + case networkingCommandEntry_add_bridge_mapping: + return "add_bridge_mapping" + case networkingCommandEntry_remove_bridge_mapping: + return "remove_bridge_mapping" + case networkingCommandEntry_add_nat_prefix: + return "add_nat_prefix" + case networkingCommandEntry_remove_nat_prefix: + return "remove_nat_prefix" + } + return "" +} + +func (e networkingCommandEntry) Entry() reflect.Value { + this := reflect.ValueOf(e) + switch e.entry.(type) { + case networkingCommandEntry_answer: + return reflect.Indirect(this.FieldByName("answer")) + case networkingCommandEntry_remove_answer: + return reflect.Indirect(this.FieldByName("remove_answer")) + case networkingCommandEntry_add_nat_portfwd: + return reflect.Indirect(this.FieldByName("add_nat_portfwd")) + case networkingCommandEntry_remove_nat_portfwd: + return reflect.Indirect(this.FieldByName("remove_nat_portfwd")) + case networkingCommandEntry_add_dhcp_mac_to_ip: + return reflect.Indirect(this.FieldByName("add_dhcp_mac_to_ip")) + case networkingCommandEntry_remove_dhcp_mac_to_ip: + return reflect.Indirect(this.FieldByName("remove_dhcp_mac_to_ip")) + case networkingCommandEntry_add_bridge_mapping: + return reflect.Indirect(this.FieldByName("add_bridge_mapping")) + case networkingCommandEntry_remove_bridge_mapping: + return reflect.Indirect(this.FieldByName("remove_bridge_mapping")) + case networkingCommandEntry_add_nat_prefix: + return reflect.Indirect(this.FieldByName("add_nat_prefix")) + case networkingCommandEntry_remove_nat_prefix: + return reflect.Indirect(this.FieldByName("remove_nat_prefix")) + } + return reflect.Value{} +} + +func (e networkingCommandEntry) Repr() string { + var result map[string]interface{} + result = make(map[string]interface{}) + + entryN, entry := e.Name(), e.Entry() + entryT := entry.Type() + for i := 0; i < entry.NumField(); i++ { + fld, fldT := entry.Field(i), entryT.Field(i) + result[fldT.Name] = fld + } + return fmt.Sprintf("%s -> %v", entryN, result) +} + +// networking command entry parsers +func parseNetworkingCommand_answer(row []string) (*networkingCommandEntry, error) { + if len(row) != 2 { + return nil, fmt.Errorf("Expected %d arguments. 
Received only %d.", 2, len(row)) + } + vnet := networkingVNET{value: row[0]} + if !vnet.Valid() { + return nil, fmt.Errorf("Invalid format for VNET.") + } + value := row[1] + + result := networkingCommandEntry_answer{vnet: vnet, value: value} + return &networkingCommandEntry{entry: result, answer: &result}, nil +} +func parseNetworkingCommand_remove_answer(row []string) (*networkingCommandEntry, error) { + if len(row) != 1 { + return nil, fmt.Errorf("Expected %d argument. Received %d.", 1, len(row)) + } + vnet := networkingVNET{value: row[0]} + if !vnet.Valid() { + return nil, fmt.Errorf("Invalid format for VNET.") + } + + result := networkingCommandEntry_remove_answer{vnet: vnet} + return &networkingCommandEntry{entry: result, remove_answer: &result}, nil +} +func parseNetworkingCommand_add_nat_portfwd(row []string) (*networkingCommandEntry, error) { + if len(row) != 5 { + return nil, fmt.Errorf("Expected %d arguments. Received only %d.", 5, len(row)) + } + + vnet, err := strconv.Atoi(row[0]) + if err != nil { + return nil, fmt.Errorf("Unable to parse first argument as an integer. : %v", row[0]) + } + + protocol := strings.ToLower(row[1]) + if !(protocol == "tcp" || protocol == "udp") { + return nil, fmt.Errorf("Expected \"tcp\" or \"udp\" for second argument. : %v", row[1]) + } + + sport, err := strconv.Atoi(row[2]) + if err != nil { + return nil, fmt.Errorf("Unable to parse third argument as an integer. : %v", row[2]) + } + + dest := net.ParseIP(row[3]) + if dest == nil { + return nil, fmt.Errorf("Unable to parse fourth argument as an IPv4 address. : %v", row[2]) + } + + dport, err := strconv.Atoi(row[4]) + if err != nil { + return nil, fmt.Errorf("Unable to parse fifth argument as an integer. : %v", row[4]) + } + + result := networkingCommandEntry_add_nat_portfwd{vnet: vnet - 1, protocol: protocol, port: sport, target_host: dest, target_port: dport} + return &networkingCommandEntry{entry: result, add_nat_portfwd: &result}, nil +} +func parseNetworkingCommand_remove_nat_portfwd(row []string) (*networkingCommandEntry, error) { + if len(row) != 3 { + return nil, fmt.Errorf("Expected %d arguments. Received only %d.", 3, len(row)) + } + + vnet, err := strconv.Atoi(row[0]) + if err != nil { + return nil, fmt.Errorf("Unable to parse first argument as an integer. : %v", row[0]) + } + + protocol := strings.ToLower(row[1]) + if !(protocol == "tcp" || protocol == "udp") { + return nil, fmt.Errorf("Expected \"tcp\" or \"udp\" for second argument. : %v", row[1]) + } + + sport, err := strconv.Atoi(row[2]) + if err != nil { + return nil, fmt.Errorf("Unable to parse third argument as an integer. : %v", row[2]) + } + + result := networkingCommandEntry_remove_nat_portfwd{vnet: vnet - 1, protocol: protocol, port: sport} + return &networkingCommandEntry{entry: result, remove_nat_portfwd: &result}, nil +} +func parseNetworkingCommand_add_dhcp_mac_to_ip(row []string) (*networkingCommandEntry, error) { + if len(row) != 3 { + return nil, fmt.Errorf("Expected %d arguments. Received only %d.", 3, len(row)) + } + + vnet, err := strconv.Atoi(row[0]) + if err != nil { + return nil, fmt.Errorf("Unable to parse first argument as an integer. : %v", row[0]) + } + + mac, err := net.ParseMAC(row[1]) + if err != nil { + return nil, fmt.Errorf("Unable to parse second argument as hwaddr. : %v", row[1]) + } + + ip := net.ParseIP(row[2]) + if ip != nil { + return nil, fmt.Errorf("Unable to parse third argument as ipv4. 
: %v", row[2]) + } + + result := networkingCommandEntry_add_dhcp_mac_to_ip{vnet: vnet - 1, mac: mac, ip: ip} + return &networkingCommandEntry{entry: result, add_dhcp_mac_to_ip: &result}, nil +} +func parseNetworkingCommand_remove_dhcp_mac_to_ip(row []string) (*networkingCommandEntry, error) { + if len(row) != 2 { + return nil, fmt.Errorf("Expected %d arguments. Received only %d.", 2, len(row)) + } + + vnet, err := strconv.Atoi(row[0]) + if err != nil { + return nil, fmt.Errorf("Unable to parse first argument as an integer. : %v", row[0]) + } + + mac, err := net.ParseMAC(row[1]) + if err != nil { + return nil, fmt.Errorf("Unable to parse second argument as hwaddr. : %v", row[1]) + } + + result := networkingCommandEntry_remove_dhcp_mac_to_ip{vnet: vnet - 1, mac: mac} + return &networkingCommandEntry{entry: result, remove_dhcp_mac_to_ip: &result}, nil +} +func parseNetworkingCommand_add_bridge_mapping(row []string) (*networkingCommandEntry, error) { + if len(row) != 2 { + return nil, fmt.Errorf("Expected %d arguments. Received only %d.", 2, len(row)) + } + intf := networkingInterface{name: row[0]} + + vnet, err := strconv.Atoi(row[1]) + if err != nil { + return nil, fmt.Errorf("Unable to parse second argument as an integer. : %v", row[2]) + } + + result := networkingCommandEntry_add_bridge_mapping{intf: intf, vnet: vnet - 1} + return &networkingCommandEntry{entry: result, add_bridge_mapping: &result}, nil +} +func parseNetworkingCommand_remove_bridge_mapping(row []string) (*networkingCommandEntry, error) { + if len(row) != 1 { + return nil, fmt.Errorf("Expected %d argument. Received %d.", 1, len(row)) + } + intf := networkingInterface{name: row[0]} + /* + number, err := strconv.Atoi(row[0]) + if err != nil { + return nil, fmt.Errorf("Unable to parse first argument as an integer. : %v", row[0]) + } + */ + result := networkingCommandEntry_remove_bridge_mapping{intf: intf} + return &networkingCommandEntry{entry: result, remove_bridge_mapping: &result}, nil +} +func parseNetworkingCommand_add_nat_prefix(row []string) (*networkingCommandEntry, error) { + if len(row) != 2 { + return nil, fmt.Errorf("Expected %d arguments. Received only %d.", 2, len(row)) + } + + vnet, err := strconv.Atoi(row[0]) + if err != nil { + return nil, fmt.Errorf("Unable to parse first argument as an integer. : %v", row[0]) + } + + if !strings.HasPrefix(row[1], "/") { + return nil, fmt.Errorf("Expected second argument to begin with \"/\". : %v", row[1]) + } + + prefix, err := strconv.Atoi(row[1][1:]) + if err != nil { + return nil, fmt.Errorf("Unable to parse prefix out of second argument. : %v", row[1]) + } + + result := networkingCommandEntry_add_nat_prefix{vnet: vnet - 1, prefix: prefix} + return &networkingCommandEntry{entry: result, add_nat_prefix: &result}, nil +} +func parseNetworkingCommand_remove_nat_prefix(row []string) (*networkingCommandEntry, error) { + if len(row) != 2 { + return nil, fmt.Errorf("Expected %d arguments. Received only %d.", 2, len(row)) + } + + vnet, err := strconv.Atoi(row[0]) + if err != nil { + return nil, fmt.Errorf("Unable to parse first argument as an integer. : %v", row[0]) + } + + if !strings.HasPrefix(row[1], "/") { + return nil, fmt.Errorf("Expected second argument to begin with \"/\". : %v", row[1]) + } + prefix, err := strconv.Atoi(row[1][1:]) + if err != nil { + return nil, fmt.Errorf("Unable to parse prefix out of second argument. 
: %v", row[1]) + } + + result := networkingCommandEntry_remove_nat_prefix{vnet: vnet - 1, prefix: prefix} + return &networkingCommandEntry{entry: result, remove_nat_prefix: &result}, nil +} + +type networkingCommandParser struct { + command string + callback func([]string) (*networkingCommandEntry, error) +} + +var NetworkingCommandParsers = []networkingCommandParser{ + /* DictRecordParseFunct */ {command: "answer", callback: parseNetworkingCommand_answer}, + /* DictRecordParseFunct */ {command: "remove_answer", callback: parseNetworkingCommand_remove_answer}, + /* NatFwdRecordParseFunct */ {command: "add_nat_portfwd", callback: parseNetworkingCommand_add_nat_portfwd}, + /* NatFwdRecordParseFunct */ {command: "remove_nat_portfwd", callback: parseNetworkingCommand_remove_nat_portfwd}, + /* DhcpMacRecordParseFunct */ {command: "add_dhcp_mac_to_ip", callback: parseNetworkingCommand_add_dhcp_mac_to_ip}, + /* DhcpMacRecordParseFunct */ {command: "remove_dhcp_mac_to_ip", callback: parseNetworkingCommand_remove_dhcp_mac_to_ip}, + /* BridgeMappingRecordParseFunct */ {command: "add_bridge_mapping", callback: parseNetworkingCommand_add_bridge_mapping}, + /* BridgeMappingRecordParseFunct */ {command: "remove_bridge_mapping", callback: parseNetworkingCommand_remove_bridge_mapping}, + /* NatPrefixRecordParseFunct */ {command: "add_nat_prefix", callback: parseNetworkingCommand_add_nat_prefix}, + /* NatPrefixRecordParseFunct */ {command: "remove_nat_prefix", callback: parseNetworkingCommand_remove_nat_prefix}, +} + +func NetworkingParserByCommand(command string) *func([]string) (*networkingCommandEntry, error) { + for _, p := range NetworkingCommandParsers { + if p.command == command { + return &p.callback + } + } + return nil +} + +func parseNetworkingConfig(eof sentinelSignaller, rows chan []string) (chan networkingCommandEntry, sentinelSignaller) { + var out chan networkingCommandEntry + + eop := make(sentinelSignaller) + + out = make(chan networkingCommandEntry) + go func(in chan []string, out chan networkingCommandEntry) { + for reading := true; reading; { + select { + case <-eof: + reading = false + case row := <-in: + if len(row) >= 1 { + parser := NetworkingParserByCommand(row[0]) + if parser == nil { + log.Printf("Invalid command : %v", row) + continue + } + callback := *parser + entry, err := callback(row[1:]) + if err != nil { + log.Printf("Unable to parse command : %v %v", err, row) + continue + } + out <- *entry + } + } + } + close(eop) + }(rows, out) + return out, eop +} + +type NetworkingConfig struct { + answer map[int]map[string]string + nat_portfwd map[int]map[string]string + dhcp_mac_to_ip map[int]map[string]net.IP + //bridge_mapping map[net.Interface]uint64 // XXX: we don't need the actual interface for anything but informing the user. 
+ bridge_mapping map[string]int + nat_prefix map[int][]int +} + +func (c NetworkingConfig) repr() string { + return fmt.Sprintf("answer -> %v\nnat_portfwd -> %v\ndhcp_mac_to_ip -> %v\nbridge_mapping -> %v\nnat_prefix -> %v", c.answer, c.nat_portfwd, c.dhcp_mac_to_ip, c.bridge_mapping, c.nat_prefix) +} + +func flattenNetworkingConfig(eof sentinelSignaller, in chan networkingCommandEntry) NetworkingConfig { + var result NetworkingConfig + var vmnet int + + result.answer = make(map[int]map[string]string) + result.nat_portfwd = make(map[int]map[string]string) + result.dhcp_mac_to_ip = make(map[int]map[string]net.IP) + result.bridge_mapping = make(map[string]int) + result.nat_prefix = make(map[int][]int) + + for reading := true; reading; { + select { + case <-eof: + reading = false + case e := <-in: + switch e.entry.(type) { + case networkingCommandEntry_answer: + vnet := e.answer.vnet + answers, exists := result.answer[vnet.Number()] + if !exists { + answers = make(map[string]string) + result.answer[vnet.Number()] = answers + } + answers[vnet.Option()] = e.answer.value + case networkingCommandEntry_remove_answer: + vnet := e.remove_answer.vnet + answers, exists := result.answer[vnet.Number()] + if exists { + delete(answers, vnet.Option()) + } else { + log.Printf("Unable to remove answer %s as specified by `remove_answer`.\n", vnet.Repr()) + } + case networkingCommandEntry_add_nat_portfwd: + vmnet = e.add_nat_portfwd.vnet + protoport := fmt.Sprintf("%s/%d", e.add_nat_portfwd.protocol, e.add_nat_portfwd.port) + target := fmt.Sprintf("%s:%d", e.add_nat_portfwd.target_host, e.add_nat_portfwd.target_port) + portfwds, exists := result.nat_portfwd[vmnet] + if !exists { + portfwds = make(map[string]string) + result.nat_portfwd[vmnet] = portfwds + } + portfwds[protoport] = target + case networkingCommandEntry_remove_nat_portfwd: + vmnet = e.remove_nat_portfwd.vnet + protoport := fmt.Sprintf("%s/%d", e.remove_nat_portfwd.protocol, e.remove_nat_portfwd.port) + portfwds, exists := result.nat_portfwd[vmnet] + if exists { + delete(portfwds, protoport) + } else { + log.Printf("Unable to remove nat port-forward %s from interface %s%d as requested by `remove_nat_portfwd`.\n", protoport, NetworkingInterfacePrefix, vmnet) + } + case networkingCommandEntry_add_dhcp_mac_to_ip: + vmnet = e.add_dhcp_mac_to_ip.vnet + dhcpmacs, exists := result.dhcp_mac_to_ip[vmnet] + if !exists { + dhcpmacs = make(map[string]net.IP) + result.dhcp_mac_to_ip[vmnet] = dhcpmacs + } + dhcpmacs[e.add_dhcp_mac_to_ip.mac.String()] = e.add_dhcp_mac_to_ip.ip + case networkingCommandEntry_remove_dhcp_mac_to_ip: + vmnet = e.remove_dhcp_mac_to_ip.vnet + dhcpmacs, exists := result.dhcp_mac_to_ip[vmnet] + if exists { + delete(dhcpmacs, e.remove_dhcp_mac_to_ip.mac.String()) + } else { + log.Printf("Unable to remove dhcp_mac_to_ip entry %v from interface %s%d as specified by `remove_dhcp_mac_to_ip`.\n", e.remove_dhcp_mac_to_ip, NetworkingInterfacePrefix, vmnet) + } + case networkingCommandEntry_add_bridge_mapping: + intf := e.add_bridge_mapping.intf + if _, err := intf.Interface(); err != nil { + log.Printf("Interface \"%s\" as specified by `add_bridge_mapping` was not found on the current platform. This is a non-critical error. 
Ignoring.", intf.name) + } + result.bridge_mapping[intf.name] = e.add_bridge_mapping.vnet + case networkingCommandEntry_remove_bridge_mapping: + intf := e.remove_bridge_mapping.intf + if _, err := intf.Interface(); err != nil { + log.Printf("Interface \"%s\" as specified by `remove_bridge_mapping` was not found on the current platform. This is a non-critical error. Ignoring.", intf.name) + } + delete(result.bridge_mapping, intf.name) + case networkingCommandEntry_add_nat_prefix: + vmnet = e.add_nat_prefix.vnet + _, exists := result.nat_prefix[vmnet] + if exists { + result.nat_prefix[vmnet] = append(result.nat_prefix[vmnet], e.add_nat_prefix.prefix) + } else { + result.nat_prefix[vmnet] = []int{e.add_nat_prefix.prefix} + } + case networkingCommandEntry_remove_nat_prefix: + vmnet = e.remove_nat_prefix.vnet + prefixes, exists := result.nat_prefix[vmnet] + if exists { + for index := 0; index < len(prefixes); index++ { + if prefixes[index] == e.remove_nat_prefix.prefix { + result.nat_prefix[vmnet] = append(prefixes[:index], prefixes[index+1:]...) + break + } + } + } else { + log.Printf("Unable to remove nat prefix /%d from interface %s%d as specified by `remove_nat_prefix`.\n", e.remove_nat_prefix.prefix, NetworkingInterfacePrefix, vmnet) + } + } + } + } + return result +} + +// Constructor for networking file +func ReadNetworkingConfig(fd *os.File) (NetworkingConfig, error) { + // start piecing together different parts of the file + fromfile, eof := consumeFile(fd) + tokenized, eot := tokenizeNetworkingConfig(eof, fromfile) + rows, eos := splitNetworkingConfig(eot, tokenized) + entries, eop := parseNetworkingConfig(eos, rows) + + // parse the version + parsed_version, err := networkingReadVersion(<-rows) + if err != nil { + return NetworkingConfig{}, err + } + + // verify that it's 1.0 since that's all we support. + version := parsed_version.Number() + if version != 1.0 { + return NetworkingConfig{}, fmt.Errorf("Expected version %f of networking file. Received version %f.", 1.0, version) + } + + // convert to a configuration + result := flattenNetworkingConfig(eop, entries) + return result, nil +} + +// netmapper interface +type NetworkingType int + +const ( + NetworkingType_HOSTONLY = iota + 1 + NetworkingType_NAT + NetworkingType_BRIDGED +) + +func networkingConfig_InterfaceTypes(config NetworkingConfig) map[int]NetworkingType { + var result map[int]NetworkingType + result = make(map[int]NetworkingType) + + // defaults + result[0] = NetworkingType_BRIDGED + result[1] = NetworkingType_HOSTONLY + result[8] = NetworkingType_NAT + + // walk through config collecting bridged interfaces + for _, vmnet := range config.bridge_mapping { + result[vmnet] = NetworkingType_BRIDGED + } + + // walk through answers finding out which ones are nat versus hostonly + for vmnet, table := range config.answer { + // everything should be defined as a virtual adapter... + if table["VIRTUAL_ADAPTER"] == "yes" { + + // validate that the VNET entry contains everything we expect it to + _, subnetQ := table["HOSTONLY_SUBNET"] + _, netmaskQ := table["HOSTONLY_NETMASK"] + if !(subnetQ && netmaskQ) { + log.Printf("Interface %s%d is missing some expected keys (HOSTONLY_SUBNET, HOSTONLY_NETMASK). This is non-critical. Ignoring..", NetworkingInterfacePrefix, vmnet) + } + + // distinguish between nat or hostonly + if table["NAT"] == "yes" { + result[vmnet] = NetworkingType_NAT + } else { + result[vmnet] = NetworkingType_HOSTONLY + } + + // if it's not a virtual_adapter, then it must be an alias (really a bridge). 
+ } else { + result[vmnet] = NetworkingType_BRIDGED + } + } + return result +} + +func networkingConfig_NamesToVmnet(config NetworkingConfig) map[NetworkingType][]int { + types := networkingConfig_InterfaceTypes(config) + + // now sort the keys + var keys []int + for vmnet := range types { + keys = append(keys, vmnet) + } + sort.Ints(keys) + + // build result dictionary + var result map[NetworkingType][]int + result = make(map[NetworkingType][]int) + for i := 0; i < len(keys); i++ { + t := types[keys[i]] + result[t] = append(result[t], keys[i]) + } + return result +} + +const NetworkingInterfacePrefix = "vmnet" + +func (e NetworkingConfig) NameIntoDevices(name string) ([]string, error) { + netmapper := networkingConfig_NamesToVmnet(e) + name = strings.ToLower(name) + + var vmnets []string + var networkingType NetworkingType + if name == "hostonly" && len(netmapper[NetworkingType_HOSTONLY]) > 0 { + networkingType = NetworkingType_HOSTONLY + } else if name == "nat" && len(netmapper[NetworkingType_NAT]) > 0 { + networkingType = NetworkingType_NAT + } else if name == "bridged" && len(netmapper[NetworkingType_BRIDGED]) > 0 { + networkingType = NetworkingType_BRIDGED + } else { + return make([]string, 0), fmt.Errorf("Network name not found: %v", name) + } + + for i := 0; i < len(netmapper[networkingType]); i++ { + vmnets = append(vmnets, fmt.Sprintf("%s%d", NetworkingInterfacePrefix, netmapper[networkingType][i])) + } + return vmnets, nil +} + +func (e NetworkingConfig) DeviceIntoName(device string) (string, error) { + types := networkingConfig_InterfaceTypes(e) + + lowerdevice := strings.ToLower(device) + if !strings.HasPrefix(lowerdevice, NetworkingInterfacePrefix) { + return device, nil + } + vmnet, err := strconv.Atoi(lowerdevice[len(NetworkingInterfacePrefix):]) + if err != nil { + return "", err + } + network := types[vmnet] + switch network { + case NetworkingType_HOSTONLY: + return "hostonly", nil + case NetworkingType_NAT: + return "nat", nil + case NetworkingType_BRIDGED: + return "bridged", nil + } + return "", fmt.Errorf("Unable to determine network type for device %s%d.", NetworkingInterfacePrefix, vmnet) +} + +/** generic async file reader */ +func consumeFile(fd *os.File) (chan byte, sentinelSignaller) { + fromfile := make(chan byte) + eof := make(sentinelSignaller) + go func() { + b := make([]byte, 1) + for { + _, err := fd.Read(b) + if err == io.EOF { + break + } + fromfile <- b[0] + } + close(eof) + }() + return fromfile, eof +} diff --git a/builder/vmware/common/driver_player5.go b/builder/vmware/common/driver_player5.go index 1552e92ea..ccb1de2c2 100644 --- a/builder/vmware/common/driver_player5.go +++ b/builder/vmware/common/driver_player5.go @@ -9,11 +9,13 @@ import ( "path/filepath" "strings" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // Player5Driver is a driver that can run VMware Player 5 on Linux. type Player5Driver struct { + VmwareDriver + AppPath string VdiskManagerPath string QemuImgPath string @@ -23,7 +25,7 @@ type Player5Driver struct { SSHConfig *SSHConfig } -func (d *Player5Driver) Clone(dst, src string) error { +func (d *Player5Driver) Clone(dst, src string, linked bool) error { return errors.New("Cloning is not supported with VMWare Player version 5. 
Please use VMWare Player version 6, or greater.") } @@ -62,12 +64,12 @@ func (d *Player5Driver) qemuCompactDisk(diskPath string) error { return nil } -func (d *Player5Driver) CreateDisk(output string, size string, type_id string) error { +func (d *Player5Driver) CreateDisk(output string, size string, adapter_type string, type_id string) error { var cmd *exec.Cmd if d.QemuImgPath != "" { cmd = exec.Command(d.QemuImgPath, "create", "-f", "vmdk", "-o", "compat6", output, size) } else { - cmd = exec.Command(d.VdiskManagerPath, "-c", "-s", size, "-a", "lsilogic", "-t", type_id, output) + cmd = exec.Command(d.VdiskManagerPath, "-c", "-s", size, "-a", adapter_type, "-t", type_id, output) } if _, _, err := runAndLog(cmd); err != nil { return err @@ -181,6 +183,28 @@ func (d *Player5Driver) Verify() error { "One of these is required to configure disks for VMware Player.") } + // Assigning the path callbacks to VmwareDriver + d.VmwareDriver.DhcpLeasesPath = func(device string) string { + return playerDhcpLeasesPath(device) + } + + d.VmwareDriver.DhcpConfPath = func(device string) string { + return playerVmDhcpConfPath(device) + } + + d.VmwareDriver.VmnetnatConfPath = func(device string) string { + return playerVmnetnatConfPath(device) + } + + d.VmwareDriver.NetworkMapper = func() (NetworkNameMapper, error) { + pathNetmap := playerNetmapConfPath() + if _, err := os.Stat(pathNetmap); err != nil { + return nil, fmt.Errorf("Could not find netmap conf file: %s", pathNetmap) + } + log.Printf("Located networkmapper configuration file using Player: %s", pathNetmap) + + return ReadNetmapConfig(pathNetmap) + } return nil } @@ -192,10 +216,6 @@ func (d *Player5Driver) ToolsInstall() error { return nil } -func (d *Player5Driver) DhcpLeasesPath(device string) string { - return playerDhcpLeasesPath(device) -} - -func (d *Player5Driver) VmnetnatConfPath() string { - return playerVmnetnatConfPath() +func (d *Player5Driver) GetVmwareDriver() VmwareDriver { + return d.VmwareDriver } diff --git a/builder/vmware/common/driver_player5_windows.go b/builder/vmware/common/driver_player5_windows.go index e16d275db..790a4c46c 100644 --- a/builder/vmware/common/driver_player5_windows.go +++ b/builder/vmware/common/driver_player5_windows.go @@ -57,14 +57,29 @@ func playerDhcpLeasesPath(device string) string { } else if _, err := os.Stat(path); err == nil { return path } - return findFile("vmnetdhcp.leases", playerDataFilePaths()) } -func playerVmnetnatConfPath() string { +func playerVmDhcpConfPath(device string) string { + // the device isn't actually used on windows hosts + path, err := playerDhcpConfigPathRegistry() + if err != nil { + log.Printf("Error finding configuration in registry: %s", err) + } else if _, err := os.Stat(path); err == nil { + return path + } + return findFile("vmnetdhcp.conf", playerDataFilePaths()) +} + +func playerVmnetnatConfPath(device string) string { + // the device isn't actually used on windows hosts return findFile("vmnetnat.conf", playerDataFilePaths()) } +func playerNetmapConfPath() string { + return findFile("netmap.conf", playerDataFilePaths()) +} + // This reads the VMware installation path from the Windows registry. 
func playerVMwareRoot() (s string, err error) { key := `SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\vmplayer.exe` @@ -87,7 +102,18 @@ func playerDhcpLeasesPathRegistry() (s string, err error) { log.Printf(`Unable to read registry key %s\%s`, key, subkey) return } + return normalizePath(s), nil +} +// This reads the VMware DHCP configuration path from the Windows registry. +func playerDhcpConfigPathRegistry() (s string, err error) { + key := "SYSTEM\\CurrentControlSet\\services\\VMnetDHCP\\Parameters" + subkey := "ConfFile" + s, err = readRegString(syscall.HKEY_LOCAL_MACHINE, key, subkey) + if err != nil { + log.Printf(`Unable to read registry key %s\%s`, key, subkey) + return + } return normalizePath(s), nil } diff --git a/builder/vmware/common/driver_player6.go b/builder/vmware/common/driver_player6.go index d1cc7be88..d121f02bc 100644 --- a/builder/vmware/common/driver_player6.go +++ b/builder/vmware/common/driver_player6.go @@ -13,13 +13,20 @@ type Player6Driver struct { Player5Driver } -func (d *Player6Driver) Clone(dst, src string) error { +func (d *Player6Driver) Clone(dst, src string, linked bool) error { // TODO(rasa) check if running player+, not just player + var cloneType string + if linked { + cloneType = "linked" + } else { + cloneType = "full" + } + cmd := exec.Command(d.Player5Driver.VmrunPath, "-T", "ws", "clone", src, dst, - "full") + cloneType) if _, _, err := runAndLog(cmd); err != nil { return err @@ -35,3 +42,7 @@ func (d *Player6Driver) Verify() error { return playerVerifyVersion(VMWARE_PLAYER_VERSION) } + +func (d *Player6Driver) GetVmwareDriver() VmwareDriver { + return d.Player5Driver.VmwareDriver +} diff --git a/builder/vmware/common/driver_player_unix.go b/builder/vmware/common/driver_player_unix.go index 0c2cc8635..a963c60fa 100644 --- a/builder/vmware/common/driver_player_unix.go +++ b/builder/vmware/common/driver_player_unix.go @@ -7,7 +7,9 @@ import ( "bytes" "fmt" "log" + "os" "os/exec" + "path/filepath" "regexp" "runtime" ) @@ -28,18 +30,85 @@ func playerFindVmrun() (string, error) { return exec.LookPath("vmrun") } -func playerDhcpLeasesPath(device string) string { - return "/etc/vmware/" + device + "/dhcpd/dhcpd.leases" -} - func playerToolsIsoPath(flavor string) string { return "/usr/lib/vmware/isoimages/" + flavor + ".iso" } -func playerVmnetnatConfPath() string { +// return the base path to vmware's config on the host +func playerVMwareRoot() (s string, err error) { + return "/etc/vmware", nil +} + +func playerDhcpLeasesPath(device string) string { + base, err := playerVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + + // Build the base path to VMware configuration for specified device: `/etc/vmware/${device}` + devicebase := filepath.Join(base, device) + + // Walk through a list of paths searching for the correct permutation... + // ...as it appears that in >= WS14 and < WS14, the leases file may be labelled differently. 
+ + // Docs say we should expect: dhcpd/dhcpd.leases + paths := []string{"dhcpd/dhcpd.leases", "dhcpd/dhcp.leases", "dhcp/dhcpd.leases", "dhcp/dhcp.leases"} + for _, p := range paths { + fp := filepath.Join(devicebase, p) + if _, err := os.Stat(fp); !os.IsNotExist(err) { + return fp + } + } + + log.Printf("Error finding VMWare DHCP Server Leases (dhcpd.leases) under device path: %s", devicebase) return "" } +func playerVmDhcpConfPath(device string) string { + base, err := playerVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + + // Build the base path to VMware configuration for specified device: `/etc/vmware/${device}` + devicebase := filepath.Join(base, device) + + // Walk through a list of paths searching for the correct permutation... + // ...as it appears that in >= WS14 and < WS14, the dhcp config may be labelled differently. + + // Docs say we should expect: dhcp/dhcp.conf + paths := []string{"dhcp/dhcp.conf", "dhcp/dhcpd.conf", "dhcpd/dhcp.conf", "dhcpd/dhcpd.conf"} + for _, p := range paths { + fp := filepath.Join(devicebase, p) + if _, err := os.Stat(fp); !os.IsNotExist(err) { + return fp + } + } + + log.Printf("Error finding VMWare DHCP Server Configuration (dhcp.conf) under device path: %s", devicebase) + return "" +} + +func playerVmnetnatConfPath(device string) string { + base, err := playerVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + return filepath.Join(base, device, "nat/nat.conf") +} + +func playerNetmapConfPath() string { + base, err := playerVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + return filepath.Join(base, "netmap.conf") +} + func playerVerifyVersion(version string) error { if runtime.GOOS != "linux" { return fmt.Errorf("The VMWare Player version %s driver is only supported on Linux, and Windows, at the moment. 
Your OS: %s", version, runtime.GOOS) diff --git a/builder/vmware/common/driver_workstation10.go b/builder/vmware/common/driver_workstation10.go index 7e6ac8b8f..070df50c2 100644 --- a/builder/vmware/common/driver_workstation10.go +++ b/builder/vmware/common/driver_workstation10.go @@ -13,11 +13,19 @@ type Workstation10Driver struct { Workstation9Driver } -func (d *Workstation10Driver) Clone(dst, src string) error { +func (d *Workstation10Driver) Clone(dst, src string, linked bool) error { + + var cloneType string + if linked { + cloneType = "linked" + } else { + cloneType = "full" + } + cmd := exec.Command(d.Workstation9Driver.VmrunPath, "-T", "ws", "clone", src, dst, - "full") + cloneType) if _, _, err := runAndLog(cmd); err != nil { return err @@ -33,3 +41,7 @@ func (d *Workstation10Driver) Verify() error { return workstationVerifyVersion(VMWARE_WS_VERSION) } + +func (d *Workstation10Driver) GetVmwareDriver() VmwareDriver { + return d.Workstation9Driver.VmwareDriver +} diff --git a/builder/vmware/common/driver_workstation9.go b/builder/vmware/common/driver_workstation9.go index debcefcbc..50bf77f34 100644 --- a/builder/vmware/common/driver_workstation9.go +++ b/builder/vmware/common/driver_workstation9.go @@ -9,11 +9,13 @@ import ( "path/filepath" "strings" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) // Workstation9Driver is a driver that can run VMware Workstation 9 type Workstation9Driver struct { + VmwareDriver + AppPath string VdiskManagerPath string VmrunPath string @@ -22,7 +24,7 @@ type Workstation9Driver struct { SSHConfig *SSHConfig } -func (d *Workstation9Driver) Clone(dst, src string) error { +func (d *Workstation9Driver) Clone(dst, src string, linked bool) error { return errors.New("Cloning is not supported with VMware WS version 9. 
Please use VMware WS version 10, or greater.") } @@ -40,8 +42,8 @@ func (d *Workstation9Driver) CompactDisk(diskPath string) error { return nil } -func (d *Workstation9Driver) CreateDisk(output string, size string, type_id string) error { - cmd := exec.Command(d.VdiskManagerPath, "-c", "-s", size, "-a", "lsilogic", "-t", type_id, output) +func (d *Workstation9Driver) CreateDisk(output string, size string, adapter_type string, type_id string) error { + cmd := exec.Command(d.VdiskManagerPath, "-c", "-s", size, "-a", adapter_type, "-t", type_id, output) if _, _, err := runAndLog(cmd); err != nil { return err } @@ -142,6 +144,28 @@ func (d *Workstation9Driver) Verify() error { return err } + // Assigning the path callbacks to VmwareDriver + d.VmwareDriver.DhcpLeasesPath = func(device string) string { + return workstationDhcpLeasesPath(device) + } + + d.VmwareDriver.DhcpConfPath = func(device string) string { + return workstationDhcpConfPath(device) + } + + d.VmwareDriver.VmnetnatConfPath = func(device string) string { + return workstationVmnetnatConfPath(device) + } + + d.VmwareDriver.NetworkMapper = func() (NetworkNameMapper, error) { + pathNetmap := workstationNetmapConfPath() + if _, err := os.Stat(pathNetmap); err != nil { + return nil, fmt.Errorf("Could not find netmap conf file: %s", pathNetmap) + } + log.Printf("Located networkmapper configuration file using Workstation: %s", pathNetmap) + + return ReadNetmapConfig(pathNetmap) + } return nil } @@ -153,10 +177,6 @@ func (d *Workstation9Driver) ToolsInstall() error { return nil } -func (d *Workstation9Driver) DhcpLeasesPath(device string) string { - return workstationDhcpLeasesPath(device) -} - -func (d *Workstation9Driver) VmnetnatConfPath() string { - return workstationVmnetnatConfPath() +func (d *Workstation9Driver) GetVmwareDriver() VmwareDriver { + return d.VmwareDriver } diff --git a/builder/vmware/common/driver_workstation9_windows.go b/builder/vmware/common/driver_workstation9_windows.go index 17095badc..b8d6d2b61 100644 --- a/builder/vmware/common/driver_workstation9_windows.go +++ b/builder/vmware/common/driver_workstation9_windows.go @@ -59,10 +59,20 @@ func workstationDhcpLeasesPath(device string) string { return findFile("vmnetdhcp.leases", workstationDataFilePaths()) } -func workstationVmnetnatConfPath() string { +func workstationDhcpConfPath(device string) string { + // device isn't used on a windows host + return findFile("vmnetdhcp.conf", workstationDataFilePaths()) +} + +func workstationVmnetnatConfPath(device string) string { + // device isn't used on a windows host return findFile("vmnetnat.conf", workstationDataFilePaths()) } +func workstationNetmapConfPath() string { + return findFile("netmap.conf", workstationDataFilePaths()) +} + // See http://blog.natefinch.com/2012/11/go-win-stuff.html // // This is used by workstationVMwareRoot in order to read some registry data. 
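
For reference, the DHCP-file lookups added to the Linux Player driver above (and to the Workstation driver in the next file) all follow the same pattern: join `/etc/vmware` with the device name and probe a fixed list of candidate sub-paths, returning the first one that exists, since the leases and configuration files are named differently across Workstation releases. The sketch below is a minimal standalone illustration of that pattern; the `firstExisting` helper and the `main` wrapper are illustrative only and are not part of this patch, while the candidate path lists are taken directly from the changes above.

```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// firstExisting returns the first candidate path under base that exists,
// or "" if none do. This mirrors the probing loop used by
// playerDhcpLeasesPath / workstationDhcpLeasesPath in this patch.
func firstExisting(base string, candidates []string) string {
	for _, c := range candidates {
		fp := filepath.Join(base, c)
		if _, err := os.Stat(fp); !os.IsNotExist(err) {
			return fp
		}
	}
	return ""
}

func main() {
	// Base path to VMware configuration for a given device: /etc/vmware/${device}
	devicebase := filepath.Join("/etc/vmware", "vmnet8")

	// The leases file may be labelled differently depending on the
	// Workstation version, so probe every known permutation.
	leases := firstExisting(devicebase, []string{
		"dhcpd/dhcpd.leases", "dhcpd/dhcp.leases",
		"dhcp/dhcpd.leases", "dhcp/dhcp.leases",
	})
	fmt.Println("dhcp leases:", leases)
}
```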
diff --git a/builder/vmware/common/driver_workstation_unix.go b/builder/vmware/common/driver_workstation_unix.go index c4217d9be..5bbbc7d1d 100644 --- a/builder/vmware/common/driver_workstation_unix.go +++ b/builder/vmware/common/driver_workstation_unix.go @@ -8,6 +8,7 @@ import ( "errors" "fmt" "log" + "os" "os/exec" "path/filepath" "regexp" @@ -39,18 +40,85 @@ func workstationFindVmrun() (string, error) { return exec.LookPath("vmrun") } +// return the base path to vmware's config on the host +func workstationVMwareRoot() (s string, err error) { + return "/etc/vmware", nil +} + func workstationDhcpLeasesPath(device string) string { - return "/etc/vmware/" + device + "/dhcpd/dhcpd.leases" + base, err := workstationVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + + // Build the base path to VMware configuration for specified device: `/etc/vmware/${device}` + devicebase := filepath.Join(base, device) + + // Walk through a list of paths searching for the correct permutation... + // ...as it appears that in >= WS14 and < WS14, the leases file may be labelled differently. + + // Docs say we should expect: dhcpd/dhcpd.leases + paths := []string{"dhcpd/dhcpd.leases", "dhcpd/dhcp.leases", "dhcp/dhcpd.leases", "dhcp/dhcp.leases"} + for _, p := range paths { + fp := filepath.Join(devicebase, p) + if _, err := os.Stat(fp); !os.IsNotExist(err) { + return fp + } + } + + log.Printf("Error finding VMWare DHCP Server Leases (dhcpd.leases) under device path: %s", devicebase) + return "" +} + +func workstationDhcpConfPath(device string) string { + base, err := workstationVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + + // Build the base path to VMware configuration for specified device: `/etc/vmware/${device}` + devicebase := filepath.Join(base, device) + + // Walk through a list of paths searching for the correct permutation... + // ...as it appears that in >= WS14 and < WS14, the dhcp config may be labelled differently. + + // Docs say we should expect: dhcp/dhcp.conf + paths := []string{"dhcp/dhcp.conf", "dhcp/dhcpd.conf", "dhcpd/dhcp.conf", "dhcpd/dhcpd.conf"} + for _, p := range paths { + fp := filepath.Join(devicebase, p) + if _, err := os.Stat(fp); !os.IsNotExist(err) { + return fp + } + } + + log.Printf("Error finding VMWare DHCP Server Configuration (dhcp.conf) under device path: %s", devicebase) + return "" +} + +func workstationVmnetnatConfPath(device string) string { + base, err := workstationVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + return filepath.Join(base, device, "nat/nat.conf") +} + +func workstationNetmapConfPath() string { + base, err := workstationVMwareRoot() + if err != nil { + log.Printf("Error finding VMware root: %s", err) + return "" + } + return filepath.Join(base, "netmap.conf") } func workstationToolsIsoPath(flavor string) string { return "/usr/lib/vmware/isoimages/" + flavor + ".iso" } -func workstationVmnetnatConfPath() string { - return "" -} - func workstationVerifyVersion(version string) error { if runtime.GOOS != "linux" { return fmt.Errorf("The VMware WS version %s driver is only supported on Linux, and Windows, at the moment. 
Your OS: %s", version, runtime.GOOS) @@ -66,14 +134,17 @@ func workstationVerifyVersion(version string) error { if err := cmd.Run(); err != nil { return err } + return workstationTestVersion(version, stderr.String()) +} +func workstationTestVersion(wanted, versionOutput string) error { versionRe := regexp.MustCompile(`(?i)VMware Workstation (\d+)\.`) - matches := versionRe.FindStringSubmatch(stderr.String()) + matches := versionRe.FindStringSubmatch(versionOutput) if matches == nil { return fmt.Errorf( - "Could not find VMware WS version in output: %s", stderr.String()) + "Could not find VMware WS version in output: %s", wanted) } log.Printf("Detected VMware WS version: %s", matches[1]) - return compareVersions(matches[1], version, "Workstation") + return compareVersions(matches[1], wanted, "Workstation") } diff --git a/builder/vmware/common/driver_workstation_unix_test.go b/builder/vmware/common/driver_workstation_unix_test.go new file mode 100644 index 000000000..e0e3e6ecf --- /dev/null +++ b/builder/vmware/common/driver_workstation_unix_test.go @@ -0,0 +1,15 @@ +// +build !windows + +package common + +import ( + "testing" +) + +func TestWorkstationVersion_ws14(t *testing.T) { + input := `VMware Workstation Information: +VMware Workstation 14.1.1 build-7528167 Release` + if err := workstationTestVersion("10", input); err != nil { + t.Fatal(err) + } +} diff --git a/builder/vmware/common/guest_ip.go b/builder/vmware/common/guest_ip.go deleted file mode 100644 index 25ca3b795..000000000 --- a/builder/vmware/common/guest_ip.go +++ /dev/null @@ -1,90 +0,0 @@ -package common - -import ( - "errors" - "fmt" - "io/ioutil" - "log" - "os" - "regexp" - "strings" - "time" -) - -// Interface to help find the IP address of a running virtual machine. -type GuestIPFinder interface { - GuestIP() (string, error) -} - -// DHCPLeaseGuestLookup looks up the IP address of a guest using DHCP -// lease information from the VMware network devices. -type DHCPLeaseGuestLookup struct { - // Driver that is being used (to find leases path) - Driver Driver - - // Device that the guest is connected to. - Device string - - // MAC address of the guest. - MACAddress string -} - -func (f *DHCPLeaseGuestLookup) GuestIP() (string, error) { - dhcpLeasesPath := f.Driver.DhcpLeasesPath(f.Device) - log.Printf("DHCP leases path: %s", dhcpLeasesPath) - if dhcpLeasesPath == "" { - return "", errors.New("no DHCP leases path found.") - } - - fh, err := os.Open(dhcpLeasesPath) - if err != nil { - return "", err - } - defer fh.Close() - - dhcpBytes, err := ioutil.ReadAll(fh) - if err != nil { - return "", err - } - - var lastIp string - var lastLeaseEnd time.Time - - var curIp string - var curLeaseEnd time.Time - - ipLineRe := regexp.MustCompile(`^lease (.+?) {$`) - endTimeLineRe := regexp.MustCompile(`^\s*ends \d (.+?);$`) - macLineRe := regexp.MustCompile(`^\s*hardware ethernet (.+?);$`) - - for _, line := range strings.Split(string(dhcpBytes), "\n") { - // Need to trim off CR character when running in windows - line = strings.TrimRight(line, "\r") - - matches := ipLineRe.FindStringSubmatch(line) - if matches != nil { - lastIp = matches[1] - continue - } - - matches = endTimeLineRe.FindStringSubmatch(line) - if matches != nil { - lastLeaseEnd, _ = time.Parse("2006/01/02 15:04:05", matches[1]) - continue - } - - // If the mac address matches and this lease ends farther in the - // future than the last match we might have, then choose it. 
- matches = macLineRe.FindStringSubmatch(line) - if matches != nil && strings.EqualFold(matches[1], f.MACAddress) && curLeaseEnd.Before(lastLeaseEnd) { - curIp = lastIp - curLeaseEnd = lastLeaseEnd - } - } - - if curIp == "" { - return "", fmt.Errorf("IP not found for MAC %s in DHCP leases at %s", f.MACAddress, dhcpLeasesPath) - } - - return curIp, nil -} diff --git a/builder/vmware/common/guest_ip_test.go b/builder/vmware/common/guest_ip_test.go deleted file mode 100644 index fdd6b4c9c..000000000 --- a/builder/vmware/common/guest_ip_test.go +++ /dev/null @@ -1,82 +0,0 @@ -package common - -import ( - "io/ioutil" - "os" - "testing" -) - -func TestDHCPLeaseGuestLookup_impl(t *testing.T) { - var _ GuestIPFinder = new(DHCPLeaseGuestLookup) -} - -func TestDHCPLeaseGuestLookup(t *testing.T) { - tf, err := ioutil.TempFile("", "packer") - if err != nil { - t.Fatalf("err: %s", err) - } - if _, err := tf.Write([]byte(testLeaseContents)); err != nil { - t.Fatalf("err: %s", err) - } - tf.Close() - defer os.Remove(tf.Name()) - - driver := new(DriverMock) - driver.DhcpLeasesPathResult = tf.Name() - - finder := &DHCPLeaseGuestLookup{ - Driver: driver, - Device: "vmnet8", - MACAddress: "00:0c:29:59:91:02", - } - - ip, err := finder.GuestIP() - if err != nil { - t.Fatalf("err: %s", err) - } - - if !driver.DhcpLeasesPathCalled { - t.Fatal("should ask for DHCP leases path") - } - if driver.DhcpLeasesPathDevice != "vmnet8" { - t.Fatal("should be vmnet8") - } - - if ip != "192.168.126.130" { - t.Fatalf("bad: %#v", ip) - } -} - -const testLeaseContents = ` -# All times in this file are in UTC (GMT), not your local timezone. This is -# not a bug, so please don't ask about it. There is no portable way to -# store leases in the local timezone, so please don't request this as a -# feature. If this is inconvenient or confusing to you, we sincerely -# apologize. Seriously, though - don't ask. -# The format of this file is documented in the dhcpd.leases(5) manual page. - -lease 192.168.126.129 { - starts 0 2013/09/15 23:58:51; - ends 1 2013/09/16 00:28:51; - hardware ethernet 00:0c:29:59:91:02; - client-hostname "precise64"; -} -lease 192.168.126.130 { - starts 2 2013/09/17 21:39:07; - ends 2 2013/09/17 22:09:07; - hardware ethernet 00:0c:29:59:91:02; - client-hostname "precise64"; -} -lease 192.168.126.128 { - starts 0 2013/09/15 20:09:59; - ends 0 2013/09/15 20:21:58; - hardware ethernet 00:0c:29:59:91:02; - client-hostname "precise64"; -} -lease 192.168.126.127 { - starts 0 2013/09/15 20:09:59; - ends 0 2013/09/15 20:21:58; - hardware ethernet 01:0c:29:59:91:02; - client-hostname "precise64"; - -` diff --git a/builder/vmware/common/host_ip.go b/builder/vmware/common/host_ip.go deleted file mode 100644 index f920043e7..000000000 --- a/builder/vmware/common/host_ip.go +++ /dev/null @@ -1,7 +0,0 @@ -package common - -// Interface to help find the host IP that is available from within -// the VMware virtual machines. 
-type HostIPFinder interface { - HostIP() (string, error) -} diff --git a/builder/vmware/common/host_ip_ifconfig_test.go b/builder/vmware/common/host_ip_ifconfig_test.go deleted file mode 100644 index a69104e69..000000000 --- a/builder/vmware/common/host_ip_ifconfig_test.go +++ /dev/null @@ -1,11 +0,0 @@ -package common - -import "testing" - -func TestIfconfigIPFinder_Impl(t *testing.T) { - var raw interface{} - raw = &IfconfigIPFinder{} - if _, ok := raw.(HostIPFinder); !ok { - t.Fatalf("IfconfigIPFinder is not a host IP finder") - } -} diff --git a/builder/vmware/common/host_ip_vmnetnatconf.go b/builder/vmware/common/host_ip_vmnetnatconf.go deleted file mode 100644 index e07227a5e..000000000 --- a/builder/vmware/common/host_ip_vmnetnatconf.go +++ /dev/null @@ -1,65 +0,0 @@ -package common - -import ( - "bufio" - "errors" - "fmt" - "io" - "os" - "regexp" - "strings" -) - -// VMnetNatConfIPFinder finds the IP address of the host machine by -// retrieving the IP from the vmnetnat.conf. This isn't a full proof -// technique but so far it has not failed. -type VMnetNatConfIPFinder struct{} - -func (*VMnetNatConfIPFinder) HostIP() (string, error) { - driver := &Workstation9Driver{} - - vmnetnat := driver.VmnetnatConfPath() - if vmnetnat == "" { - return "", errors.New("Could not find NAT vmnet conf file") - } - - if _, err := os.Stat(vmnetnat); err != nil { - return "", fmt.Errorf("Could not find NAT vmnet conf file: %s", vmnetnat) - } - - f, err := os.Open(vmnetnat) - if err != nil { - return "", err - } - defer f.Close() - - ipRe := regexp.MustCompile(`^\s*ip\s*=\s*(.+?)\s*$`) - - r := bufio.NewReader(f) - for { - line, err := r.ReadString('\n') - if line != "" { - matches := ipRe.FindStringSubmatch(line) - if matches != nil { - ip := matches[1] - dotIndex := strings.LastIndex(ip, ".") - if dotIndex == -1 { - continue - } - - ip = ip[0:dotIndex] + ".1" - return ip, nil - } - } - - if err == io.EOF { - break - } - - if err != nil { - return "", err - } - } - - return "", errors.New("host IP not found in " + vmnetnat) -} diff --git a/builder/vmware/common/host_ip_vmnetnatconf_test.go b/builder/vmware/common/host_ip_vmnetnatconf_test.go deleted file mode 100644 index 9f3114a92..000000000 --- a/builder/vmware/common/host_ip_vmnetnatconf_test.go +++ /dev/null @@ -1,11 +0,0 @@ -package common - -import "testing" - -func TestVMnetNatConfIPFinder_Impl(t *testing.T) { - var raw interface{} - raw = &VMnetNatConfIPFinder{} - if _, ok := raw.(HostIPFinder); !ok { - t.Fatalf("VMnetNatConfIPFinder is not a host IP finder") - } -} diff --git a/builder/vmware/common/output_dir_local_test.go b/builder/vmware/common/output_dir_local_test.go index c3197117e..3026c6c66 100644 --- a/builder/vmware/common/output_dir_local_test.go +++ b/builder/vmware/common/output_dir_local_test.go @@ -4,6 +4,6 @@ import ( "testing" ) -func TestLocalOuputDir_impl(t *testing.T) { +func TestLocalOutputDir_impl(t *testing.T) { var _ OutputDir = new(LocalOutputDir) } diff --git a/builder/vmware/common/run_config.go b/builder/vmware/common/run_config.go index 9463d8753..c4bfc3413 100644 --- a/builder/vmware/common/run_config.go +++ b/builder/vmware/common/run_config.go @@ -2,30 +2,20 @@ package common import ( "fmt" - "time" "github.com/hashicorp/packer/template/interpolate" ) type RunConfig struct { - Headless bool `mapstructure:"headless"` - RawBootWait string `mapstructure:"boot_wait"` - DisableVNC bool `mapstructure:"disable_vnc"` - BootCommand []string `mapstructure:"boot_command"` + Headless bool `mapstructure:"headless"` 
VNCBindAddress string `mapstructure:"vnc_bind_address"` VNCPortMin uint `mapstructure:"vnc_port_min"` VNCPortMax uint `mapstructure:"vnc_port_max"` VNCDisablePassword bool `mapstructure:"vnc_disable_password"` - - BootWait time.Duration `` } -func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { - if c.RawBootWait == "" { - c.RawBootWait = "10s" - } - +func (c *RunConfig) Prepare(ctx *interpolate.Context) (errs []error) { if c.VNCPortMin == 0 { c.VNCPortMin = 5900 } @@ -38,25 +28,9 @@ func (c *RunConfig) Prepare(ctx *interpolate.Context) []error { c.VNCBindAddress = "127.0.0.1" } - var errs []error - var err error - if len(c.BootCommand) > 0 && c.DisableVNC { - errs = append(errs, - fmt.Errorf("A boot command cannot be used when vnc is disabled.")) - } - - if c.RawBootWait != "" { - c.BootWait, err = time.ParseDuration(c.RawBootWait) - if err != nil { - errs = append( - errs, fmt.Errorf("Failed parsing boot_wait: %s", err)) - } - } - if c.VNCPortMin > c.VNCPortMax { - errs = append( - errs, fmt.Errorf("vnc_port_min must be less than vnc_port_max")) + errs = append(errs, fmt.Errorf("vnc_port_min must be less than vnc_port_max")) } - return errs + return } diff --git a/builder/vmware/common/run_config_test.go b/builder/vmware/common/run_config_test.go deleted file mode 100644 index d94e254f7..000000000 --- a/builder/vmware/common/run_config_test.go +++ /dev/null @@ -1,36 +0,0 @@ -package common - -import ( - "testing" -) - -func TestRunConfigPrepare(t *testing.T) { - var c *RunConfig - - // Test a default boot_wait - c = new(RunConfig) - c.RawBootWait = "" - errs := c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("bad: %#v", errs) - } - if c.RawBootWait != "10s" { - t.Fatalf("bad value: %s", c.RawBootWait) - } - - // Test with a bad boot_wait - c = new(RunConfig) - c.RawBootWait = "this is not good" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) == 0 { - t.Fatal("should error") - } - - // Test with a good one - c = new(RunConfig) - c.RawBootWait = "5s" - errs = c.Prepare(testConfigTemplate(t)) - if len(errs) > 0 { - t.Fatalf("bad: %#v", errs) - } -} diff --git a/builder/vmware/common/ssh.go b/builder/vmware/common/ssh.go index 0d94599f8..e03f9b6bf 100644 --- a/builder/vmware/common/ssh.go +++ b/builder/vmware/common/ssh.go @@ -3,54 +3,23 @@ package common import ( "errors" "fmt" - "io/ioutil" "log" - "os" commonssh "github.com/hashicorp/packer/common/ssh" "github.com/hashicorp/packer/communicator/ssh" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" gossh "golang.org/x/crypto/ssh" ) func CommHost(config *SSHConfig) func(multistep.StateBag) (string, error) { return func(state multistep.StateBag) (string, error) { driver := state.Get("driver").(Driver) - vmxPath := state.Get("vmx_path").(string) if config.Comm.SSHHost != "" { return config.Comm.SSHHost, nil } - log.Println("Lookup up IP information...") - f, err := os.Open(vmxPath) - if err != nil { - return "", err - } - defer f.Close() - - vmxBytes, err := ioutil.ReadAll(f) - if err != nil { - return "", err - } - - vmxData := ParseVMX(string(vmxBytes)) - - var ok bool - macAddress := "" - if macAddress, ok = vmxData["ethernet0.address"]; !ok || macAddress == "" { - if macAddress, ok = vmxData["ethernet0.generatedaddress"]; !ok || macAddress == "" { - return "", errors.New("couldn't find MAC address in VMX") - } - } - - ipLookup := &DHCPLeaseGuestLookup{ - Driver: driver, - Device: "vmnet8", - MACAddress: macAddress, - } - - ipAddress, err := ipLookup.GuestIP() + 
ipAddress, err := driver.GuestIP(state) if err != nil { log.Printf("IP lookup failed: %s", err) return "", fmt.Errorf("IP lookup failed: %s", err) diff --git a/builder/vmware/common/step_clean_files.go b/builder/vmware/common/step_clean_files.go index 6dc1a730c..08d14cf63 100644 --- a/builder/vmware/common/step_clean_files.go +++ b/builder/vmware/common/step_clean_files.go @@ -1,11 +1,13 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "os" "path/filepath" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // These are the extensions of files that are important for the function @@ -23,7 +25,7 @@ var KeepFileExtensions = []string{".nvram", ".vmdk", ".vmsd", ".vmx", ".vmxf"} // type StepCleanFiles struct{} -func (StepCleanFiles) Run(state multistep.StateBag) multistep.StepAction { +func (StepCleanFiles) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { dir := state.Get("dir").(OutputDir) ui := state.Get("ui").(packer.Ui) diff --git a/builder/vmware/common/step_clean_vmx.go b/builder/vmware/common/step_clean_vmx.go index 853233e89..f112227f8 100644 --- a/builder/vmware/common/step_clean_vmx.go +++ b/builder/vmware/common/step_clean_vmx.go @@ -1,13 +1,14 @@ package common import ( + "context" "fmt" "log" "regexp" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step cleans up the VMX by removing or changing this prior to @@ -24,7 +25,7 @@ type StepCleanVMX struct { VNCEnabled bool } -func (s StepCleanVMX) Run(state multistep.StateBag) multistep.StepAction { +func (s StepCleanVMX) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) vmxPath := state.Get("vmx_path").(string) diff --git a/builder/vmware/common/step_clean_vmx_test.go b/builder/vmware/common/step_clean_vmx_test.go index aad39e8b0..0f4df3df2 100644 --- a/builder/vmware/common/step_clean_vmx_test.go +++ b/builder/vmware/common/step_clean_vmx_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "io/ioutil" "os" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCleanVMX_impl(t *testing.T) { @@ -21,7 +22,7 @@ func TestStepCleanVMX(t *testing.T) { state.Put("vmx_path", vmxPath) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -42,7 +43,7 @@ func TestStepCleanVMX_floppyPath(t *testing.T) { state.Put("vmx_path", vmxPath) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -91,7 +92,7 @@ func TestStepCleanVMX_isoPath(t *testing.T) { state.Put("vmx_path", vmxPath) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -143,7 +144,7 @@ func TestStepCleanVMX_ethernet(t *testing.T) { state.Put("vmx_path", vmxPath) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if 
action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/vmware/common/step_compact_disk.go b/builder/vmware/common/step_compact_disk.go index d3eabec6f..1fdef293c 100644 --- a/builder/vmware/common/step_compact_disk.go +++ b/builder/vmware/common/step_compact_disk.go @@ -1,10 +1,12 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step compacts the virtual disk for the VM unless the "skip_compaction" @@ -12,7 +14,7 @@ import ( // // Uses: // driver Driver -// full_disk_path string +// disk_full_paths ([]string) - The full paths to all created disks // ui packer.Ui // // Produces: @@ -21,31 +23,22 @@ type StepCompactDisk struct { Skip bool } -func (s StepCompactDisk) Run(state multistep.StateBag) multistep.StepAction { +func (s StepCompactDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) - full_disk_path := state.Get("full_disk_path").(string) + diskFullPaths := state.Get("disk_full_paths").([]string) if s.Skip { log.Println("Skipping disk compaction step...") return multistep.ActionContinue } - ui.Say("Compacting the disk image") - if err := driver.CompactDisk(full_disk_path); err != nil { - state.Put("error", fmt.Errorf("Error compacting disk: %s", err)) - return multistep.ActionHalt - } - - if state.Get("additional_disk_paths") != nil { - if moreDisks := state.Get("additional_disk_paths").([]string); len(moreDisks) > 0 { - for i, path := range moreDisks { - ui.Say(fmt.Sprintf("Compacting additional disk image %d", i+1)) - if err := driver.CompactDisk(path); err != nil { - state.Put("error", fmt.Errorf("Error compacting additional disk %d: %s", i+1, err)) - return multistep.ActionHalt - } - } + ui.Say("Compacting all attached virtual disks...") + for i, diskFullPath := range diskFullPaths { + ui.Message(fmt.Sprintf("Compacting virtual disk %d", i+1)) + if err := driver.CompactDisk(diskFullPath); err != nil { + state.Put("error", fmt.Errorf("Error compacting disk: %s", err)) + return multistep.ActionHalt } } diff --git a/builder/vmware/common/step_compact_disk_test.go b/builder/vmware/common/step_compact_disk_test.go index e59a99a90..df0fc5fca 100644 --- a/builder/vmware/common/step_compact_disk_test.go +++ b/builder/vmware/common/step_compact_disk_test.go @@ -1,9 +1,10 @@ package common import ( + "context" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepCompactDisk_impl(t *testing.T) { @@ -14,12 +15,13 @@ func TestStepCompactDisk(t *testing.T) { state := testState(t) step := new(StepCompactDisk) - state.Put("full_disk_path", "foo") + diskFullPaths := []string{"foo"} + state.Put("disk_full_paths", diskFullPaths) driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -40,12 +42,13 @@ func TestStepCompactDisk_skip(t *testing.T) { step := new(StepCompactDisk) step.Skip = true - state.Put("full_disk_path", "foo") + diskFullPaths := []string{"foo"} + state.Put("disk_full_paths", diskFullPaths) 
driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/vmware/common/step_configure_vmx.go b/builder/vmware/common/step_configure_vmx.go index 854922a2c..c3f63b2b2 100644 --- a/builder/vmware/common/step_configure_vmx.go +++ b/builder/vmware/common/step_configure_vmx.go @@ -1,14 +1,14 @@ package common import ( + "context" "fmt" - "io/ioutil" "log" "regexp" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step configures a VMX by setting some default settings as well @@ -16,16 +16,19 @@ import ( // // Uses: // vmx_path string +// +// Produces: +// display_name string - Value of the displayName key set in the VMX file type StepConfigureVMX struct { CustomData map[string]string SkipFloppy bool } -func (s *StepConfigureVMX) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConfigureVMX) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) vmxPath := state.Get("vmx_path").(string) - vmxContents, err := ioutil.ReadFile(vmxPath) + vmxData, err := ReadVMX(vmxPath) if err != nil { err := fmt.Errorf("Error reading VMX file: %s", err) state.Put("error", err) @@ -33,8 +36,6 @@ func (s *StepConfigureVMX) Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionHalt } - vmxData := ParseVMX(string(vmxContents)) - // Set this so that no dialogs ever appear from Packer. vmxData["msg.autoanswer"] = "true" @@ -75,6 +76,19 @@ func (s *StepConfigureVMX) Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionHalt } + // If the build is taking place on a remote ESX server, the displayName + // will be needed for discovery of the VM's IP address and for export + // of the VM. The displayName key should always be set in the VMX file, + // so error if we don't find it + if displayName, ok := vmxData["displayname"]; !ok { // Packer converts key names to lowercase! 
+ err := fmt.Errorf("Error: Could not get value of displayName from VMX data") + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } else { + state.Put("display_name", displayName) + } + return multistep.ActionContinue } diff --git a/builder/vmware/common/step_configure_vmx_test.go b/builder/vmware/common/step_configure_vmx_test.go index 293269e3f..d06ae51b9 100644 --- a/builder/vmware/common/step_configure_vmx_test.go +++ b/builder/vmware/common/step_configure_vmx_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "io/ioutil" "os" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func testVMXFile(t *testing.T) string { @@ -13,6 +14,9 @@ func testVMXFile(t *testing.T) string { if err != nil { t.Fatalf("err: %s", err) } + + // displayName must always be set + err = WriteVMX(tf.Name(), map[string]string{"displayName": "PackerBuild"}) tf.Close() return tf.Name() @@ -34,7 +38,7 @@ func TestStepConfigureVMX(t *testing.T) { state.Put("vmx_path", vmxPath) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -87,7 +91,7 @@ func TestStepConfigureVMX_floppyPath(t *testing.T) { state.Put("vmx_path", vmxPath) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -131,12 +135,29 @@ func TestStepConfigureVMX_generatedAddresses(t *testing.T) { vmxPath := testVMXFile(t) defer os.Remove(vmxPath) - err := WriteVMX(vmxPath, map[string]string{ - "foo": "bar", - "ethernet0.generatedAddress": "foo", - "ethernet1.generatedAddress": "foo", - "ethernet1.generatedAddressOffset": "foo", - }) + additionalTestVmxData := []struct { + Key string + Value string + }{ + {"foo", "bar"}, + {"ethernet0.generatedaddress", "foo"}, + {"ethernet1.generatedaddress", "foo"}, + {"ethernet1.generatedaddressoffset", "foo"}, + } + + // Get any existing VMX data from the VMX file + vmxData, err := ReadVMX(vmxPath) + if err != nil { + t.Fatalf("err %s", err) + } + + // Add the additional key/value pairs we need for this test to the existing VMX data + for _, data := range additionalTestVmxData { + vmxData[data.Key] = data.Value + } + + // Recreate the VMX file so it includes all the data needed for this test + err = WriteVMX(vmxPath, vmxData) if err != nil { t.Fatalf("err: %s", err) } @@ -144,7 +165,7 @@ func TestStepConfigureVMX_generatedAddresses(t *testing.T) { state.Put("vmx_path", vmxPath) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -156,7 +177,7 @@ func TestStepConfigureVMX_generatedAddresses(t *testing.T) { if err != nil { t.Fatalf("err: %s", err) } - vmxData := ParseVMX(string(vmxContents)) + vmxData = ParseVMX(string(vmxContents)) cases := []struct { Key string @@ -179,5 +200,71 @@ func TestStepConfigureVMX_generatedAddresses(t *testing.T) { } } } +} + +// Should fail if the displayName key is not found in the VMX +func TestStepConfigureVMX_displayNameMissing(t *testing.T) { + state := testState(t) + step := new(StepConfigureVMX) + + // 
testVMXFile adds displayName key/value pair to the VMX + vmxPath := testVMXFile(t) + defer os.Remove(vmxPath) + + // Bad: Delete displayName from the VMX/Create an empty VMX file + err := WriteVMX(vmxPath, map[string]string{}) + if err != nil { + t.Fatalf("err: %s", err) + } + + state.Put("vmx_path", vmxPath) + + // Test the run + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { + t.Fatalf("bad action: %#v. Should halt when displayName key is missing from VMX", action) + } + if _, ok := state.GetOk("error"); !ok { + t.Fatal("should store error in state when displayName key is missing from VMX") + } +} + +// Should store the value of displayName in the statebag +func TestStepConfigureVMX_displayNameStore(t *testing.T) { + state := testState(t) + step := new(StepConfigureVMX) + + // testVMXFile adds displayName key/value pair to the VMX + vmxPath := testVMXFile(t) + defer os.Remove(vmxPath) + + state.Put("vmx_path", vmxPath) + + // Test the run + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { + t.Fatalf("bad action: %#v", action) + } + if _, ok := state.GetOk("error"); ok { + t.Fatal("should NOT have error") + } + + // The value of displayName must be stored in the statebag + if _, ok := state.GetOk("display_name"); !ok { + t.Fatalf("displayName should be stored in the statebag as 'display_name'") + } +} + +func TestStepConfigureVMX_vmxPathBad(t *testing.T) { + state := testState(t) + step := new(StepConfigureVMX) + + state.Put("vmx_path", "some_bad_path") + + // Test the run + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { + t.Fatalf("bad action: %#v. Should halt when vmxPath is bad", action) + } + if _, ok := state.GetOk("error"); !ok { + t.Fatal("should store error in state when vmxPath is bad") + } } diff --git a/builder/vmware/common/step_configure_vnc.go b/builder/vmware/common/step_configure_vnc.go index 4e4798862..54f5ac182 100644 --- a/builder/vmware/common/step_configure_vnc.go +++ b/builder/vmware/common/step_configure_vnc.go @@ -1,15 +1,14 @@ package common import ( + "context" "fmt" - "io/ioutil" "log" "math/rand" "net" - "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step configures the VM to enable the VNC server. 
@@ -76,7 +75,7 @@ func VNCPassword(skipPassword bool) string { return string(password) } -func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConfigureVNC) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { if !s.Enabled { log.Println("Skipping VNC configuration step...") return multistep.ActionContinue @@ -86,17 +85,9 @@ func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) vmxPath := state.Get("vmx_path").(string) - f, err := os.Open(vmxPath) + vmxData, err := ReadVMX(vmxPath) if err != nil { - err := fmt.Errorf("Error reading VMX data: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - - vmxBytes, err := ioutil.ReadAll(f) - if err != nil { - err := fmt.Errorf("Error reading VMX data: %s", err) + err := fmt.Errorf("Error reading VMX file: %s", err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt @@ -120,7 +111,6 @@ func (s *StepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction { log.Printf("Found available VNC port: %d", vncPort) - vmxData := ParseVMX(string(vmxBytes)) vncFinder.UpdateVMX(vncBindAddress, vncPassword, vncPort, vmxData) if err := WriteVMX(vmxPath, vmxData); err != nil { diff --git a/builder/vmware/common/step_output_dir.go b/builder/vmware/common/step_output_dir.go index 2b679f1fc..90c71e18a 100644 --- a/builder/vmware/common/step_output_dir.go +++ b/builder/vmware/common/step_output_dir.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "log" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepOutputDir sets up the output directory by creating it if it does @@ -18,7 +19,7 @@ type StepOutputDir struct { success bool } -func (s *StepOutputDir) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepOutputDir) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { dir := state.Get("dir").(OutputDir) ui := state.Get("ui").(packer.Ui) diff --git a/builder/vmware/common/step_output_dir_test.go b/builder/vmware/common/step_output_dir_test.go index fcd64ca7e..3726a9354 100644 --- a/builder/vmware/common/step_output_dir_test.go +++ b/builder/vmware/common/step_output_dir_test.go @@ -1,10 +1,12 @@ package common import ( - "github.com/mitchellh/multistep" + "context" "io/ioutil" "os" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func testOutputDir(t *testing.T) *LocalOutputDir { @@ -28,10 +30,12 @@ func TestStepOutputDir(t *testing.T) { step := new(StepOutputDir) dir := testOutputDir(t) + // Delete the test output directory when done + defer os.RemoveAll(dir.dir) state.Put("dir", dir) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -59,9 +63,10 @@ func TestStepOutputDir_existsNoForce(t *testing.T) { if err := os.MkdirAll(dir.dir, 0755); err != nil { t.Fatalf("err: %s", err) } + defer os.RemoveAll(dir.dir) // Test the run - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); !ok { @@ -87,9 +92,10 @@ func TestStepOutputDir_existsForce(t *testing.T) { if err := 
os.MkdirAll(dir.dir, 0755); err != nil { t.Fatalf("err: %s", err) } + defer os.RemoveAll(dir.dir) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -108,7 +114,7 @@ func TestStepOutputDir_cancel(t *testing.T) { state.Put("dir", dir) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -134,7 +140,7 @@ func TestStepOutputDir_halt(t *testing.T) { state.Put("dir", dir) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/vmware/common/step_prepare_tools.go b/builder/vmware/common/step_prepare_tools.go index f0566d91e..72f8f1488 100644 --- a/builder/vmware/common/step_prepare_tools.go +++ b/builder/vmware/common/step_prepare_tools.go @@ -1,10 +1,11 @@ package common import ( + "context" "fmt" "os" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) type StepPrepareTools struct { @@ -12,7 +13,7 @@ type StepPrepareTools struct { ToolsUploadFlavor string } -func (c *StepPrepareTools) Run(state multistep.StateBag) multistep.StepAction { +func (c *StepPrepareTools) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) if c.RemoteType == "esx5" { diff --git a/builder/vmware/common/step_prepare_tools_test.go b/builder/vmware/common/step_prepare_tools_test.go index 0326ea0d2..cddf0ffba 100644 --- a/builder/vmware/common/step_prepare_tools_test.go +++ b/builder/vmware/common/step_prepare_tools_test.go @@ -1,11 +1,12 @@ package common import ( + "context" "io/ioutil" "os" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepPrepareTools_impl(t *testing.T) { @@ -32,7 +33,7 @@ func TestStepPrepareTools(t *testing.T) { driver.ToolsIsoPathResult = tf.Name() // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -67,7 +68,7 @@ func TestStepPrepareTools_esx5(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -93,7 +94,7 @@ func TestStepPrepareTools_nonExist(t *testing.T) { driver.ToolsIsoPathResult = "foo" // Test the run - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); !ok { diff --git a/builder/vmware/common/step_run.go b/builder/vmware/common/step_run.go index 2ea61f899..2dad6faed 100644 --- a/builder/vmware/common/step_run.go +++ b/builder/vmware/common/step_run.go @@ -1,11 +1,12 @@ package common import ( + 
"context" "fmt" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step runs the created virtual machine. @@ -18,7 +19,6 @@ import ( // Produces: // type StepRun struct { - BootWait time.Duration DurationBeforeStop time.Duration Headless bool @@ -26,7 +26,7 @@ type StepRun struct { vmxPath string } -func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRun) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmxPath := state.Get("vmx_path").(string) @@ -64,24 +64,6 @@ func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionHalt } - // Wait the wait amount - if int64(s.BootWait) > 0 { - ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait.String())) - wait := time.After(s.BootWait) - WAITLOOP: - for { - select { - case <-wait: - break WAITLOOP - case <-time.After(1 * time.Second): - if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } - } - } - - } - return multistep.ActionContinue } diff --git a/builder/vmware/common/step_run_test.go b/builder/vmware/common/step_run_test.go index 9888cdf10..e89dcb1cf 100644 --- a/builder/vmware/common/step_run_test.go +++ b/builder/vmware/common/step_run_test.go @@ -1,9 +1,10 @@ package common import ( + "context" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepRun_impl(t *testing.T) { @@ -19,7 +20,7 @@ func TestStepRun(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -53,7 +54,7 @@ func TestStepRun_cleanupRunning(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/vmware/common/step_shutdown.go b/builder/vmware/common/step_shutdown.go index babcea28c..bc8490ff3 100644 --- a/builder/vmware/common/step_shutdown.go +++ b/builder/vmware/common/step_shutdown.go @@ -2,14 +2,16 @@ package common import ( "bytes" + "context" "errors" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "regexp" "strings" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step shuts down the machine. 
It first attempts to do so gracefully, @@ -32,7 +34,7 @@ type StepShutdown struct { Testing bool } -func (s *StepShutdown) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepShutdown) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := state.Get("communicator").(packer.Communicator) dir := state.Get("dir").(OutputDir) driver := state.Get("driver").(Driver) diff --git a/builder/vmware/common/step_shutdown_test.go b/builder/vmware/common/step_shutdown_test.go index b693527da..0c3d98589 100644 --- a/builder/vmware/common/step_shutdown_test.go +++ b/builder/vmware/common/step_shutdown_test.go @@ -1,14 +1,15 @@ package common import ( + "context" "io/ioutil" "os" "path/filepath" "testing" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func testStepShutdownState(t *testing.T) multistep.StateBag { @@ -49,7 +50,7 @@ func TestStepShutdown_command(t *testing.T) { resultCh := make(chan multistep.StepAction, 1) go func() { - resultCh <- step.Run(state) + resultCh <- step.Run(context.Background(), state) }() select { @@ -84,6 +85,12 @@ func TestStepShutdown_command(t *testing.T) { if comm.StartCmd.Command != "foo" { t.Fatalf("bad: %#v", comm.StartCmd.Command) } + + // Clean up the created test output directory + dir := state.Get("dir").(*LocalOutputDir) + if err := dir.RemoveAll(); err != nil { + t.Fatalf("Error cleaning up directory: %s", err) + } } func TestStepShutdown_noCommand(t *testing.T) { @@ -94,7 +101,7 @@ func TestStepShutdown_noCommand(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -112,9 +119,19 @@ func TestStepShutdown_noCommand(t *testing.T) { if comm.StartCalled { t.Fatal("start should not be called") } + + // Clean up the created test output directory + dir := state.Get("dir").(*LocalOutputDir) + if err := dir.RemoveAll(); err != nil { + t.Fatalf("Error cleaning up directory: %s", err) + } } func TestStepShutdown_locks(t *testing.T) { + if os.Getenv("PACKER_ACC") == "" { + t.Skip("This test is only run with PACKER_ACC=1 due to the requirement of access to the VMware binaries.") + } + state := testStepShutdownState(t) step := new(StepShutdown) step.Testing = true @@ -138,7 +155,7 @@ func TestStepShutdown_locks(t *testing.T) { resultCh := make(chan multistep.StepAction, 1) go func() { - resultCh <- step.Run(state) + resultCh <- step.Run(context.Background(), state) }() select { diff --git a/builder/vmware/common/step_suppress_messages.go b/builder/vmware/common/step_suppress_messages.go index ca83506bf..02928123a 100644 --- a/builder/vmware/common/step_suppress_messages.go +++ b/builder/vmware/common/step_suppress_messages.go @@ -1,16 +1,18 @@ package common import ( + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step suppresses any messages that VMware product might show. 
type StepSuppressMessages struct{} -func (s *StepSuppressMessages) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepSuppressMessages) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) ui := state.Get("ui").(packer.Ui) vmxPath := state.Get("vmx_path").(string) diff --git a/builder/vmware/common/step_suppress_messages_test.go b/builder/vmware/common/step_suppress_messages_test.go index 998cfdb24..24044d470 100644 --- a/builder/vmware/common/step_suppress_messages_test.go +++ b/builder/vmware/common/step_suppress_messages_test.go @@ -1,9 +1,10 @@ package common import ( + "context" "testing" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestStepSuppressMessages_impl(t *testing.T) { @@ -19,7 +20,7 @@ func TestStepSuppressMessages(t *testing.T) { driver := state.Get("driver").(*DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/vmware/common/step_test.go b/builder/vmware/common/step_test.go index c6424b692..c8a12abb6 100644 --- a/builder/vmware/common/step_test.go +++ b/builder/vmware/common/step_test.go @@ -2,9 +2,10 @@ package common import ( "bytes" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/vmware/common/step_type_boot_command.go b/builder/vmware/common/step_type_boot_command.go index 28db3c72e..83d55ce11 100644 --- a/builder/vmware/common/step_type_boot_command.go +++ b/builder/vmware/common/step_type_boot_command.go @@ -1,31 +1,20 @@ package common import ( + "context" "fmt" "log" "net" - "os" - "runtime" - "strings" "time" - "unicode" - "unicode/utf8" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" "github.com/mitchellh/go-vnc" - "github.com/mitchellh/multistep" ) -const KeyLeftShift uint32 = 0xFFE1 - -type bootCommandTemplateData struct { - HTTPIP string - HTTPPort uint - Name string -} - // This step "types" the boot command into the VM over VNC. // // Uses: @@ -36,13 +25,19 @@ type bootCommandTemplateData struct { // Produces: // type StepTypeBootCommand struct { + BootCommand string VNCEnabled bool - BootCommand []string + BootWait time.Duration VMName string Ctx interpolate.Context } +type bootCommandTemplateData struct { + HTTPIP string + HTTPPort uint + Name string +} -func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepTypeBootCommand) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { if !s.VNCEnabled { log.Println("Skipping boot command step...") return multistep.ActionContinue @@ -56,13 +51,24 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction vncPort := state.Get("vnc_port").(uint) vncPassword := state.Get("vnc_password") + // Wait the for the vm to boot. 
+ if int64(s.BootWait) > 0 { + ui.Say(fmt.Sprintf("Waiting %s for boot...", s.BootWait.String())) + select { + case <-time.After(s.BootWait): + break + case <-ctx.Done(): + return multistep.ActionHalt + } + } + var pauseFn multistep.DebugPauseFn if debug { pauseFn = state.Get("pauseFn").(multistep.DebugPauseFn) } // Connect to VNC - ui.Say("Connecting to VM via VNC") + ui.Say(fmt.Sprintf("Connecting to VM via VNC (%s:%d)", vncIp, vncPort)) nc, err := net.Dial("tcp", fmt.Sprintf("%s:%d", vncIp, vncPort)) if err != nil { err := fmt.Errorf("Error connecting to VNC: %s", err) @@ -92,16 +98,7 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction log.Printf("Connected to VNC desktop: %s", c.DesktopName) // Determine the host IP - var ipFinder HostIPFinder - if finder, ok := driver.(HostIPFinder); ok { - ipFinder = finder - } else if runtime.GOOS == "windows" { - ipFinder = new(VMnetNatConfIPFinder) - } else { - ipFinder = &IfconfigIPFinder{Device: "vmnet8"} - } - - hostIP, err := ipFinder.HostIP() + hostIP, err := driver.HostIP(state) if err != nil { err := fmt.Errorf("Error detecting host IP: %s", err) state.Put("error", err) @@ -118,268 +115,38 @@ func (s *StepTypeBootCommand) Run(state multistep.StateBag) multistep.StepAction s.VMName, } + d := bootcommand.NewVNCDriver(c) + ui.Say("Typing the boot command over VNC...") - for i, command := range s.BootCommand { - command, err := interpolate.Render(command, &s.Ctx) - if err != nil { - err := fmt.Errorf("Error preparing boot command: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } + command, err := interpolate.Render(s.BootCommand, &s.Ctx) + if err != nil { + err := fmt.Errorf("Error preparing boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - // Check for interrupts between typing things so we can cancel - // since this isn't the fastest thing. 
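
The cancellable boot wait added above (moved here from StepRun) replaces the old loop that polled multistep.StateCancelled once per second; the step now selects on the context it receives. A minimal sketch of the same pattern, with an illustrative helper name that is not part of this diff:

```
package common

import (
	"context"
	"time"

	"github.com/hashicorp/packer/helper/multistep"
)

// waitForBoot blocks for bootWait, but returns early with ActionHalt if the
// step's context is cancelled. It mirrors the select in StepTypeBootCommand.Run.
func waitForBoot(ctx context.Context, bootWait time.Duration) multistep.StepAction {
	if bootWait <= 0 {
		return multistep.ActionContinue
	}
	select {
	case <-time.After(bootWait):
		return multistep.ActionContinue
	case <-ctx.Done():
		return multistep.ActionHalt
	}
}
```
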
- if _, ok := state.GetOk(multistep.StateCancelled); ok { - return multistep.ActionHalt - } + seq, err := bootcommand.GenerateExpressionSequence(command) + if err != nil { + err := fmt.Errorf("Error generating boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - if pauseFn != nil { - pauseFn(multistep.DebugLocationAfterRun, fmt.Sprintf("boot_command[%d]: %s", i, command), state) - } + if err := seq.Do(ctx, d); err != nil { + err := fmt.Errorf("Error running boot command: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } - vncSendString(c, command) + if pauseFn != nil { + pauseFn(multistep.DebugLocationAfterRun, + fmt.Sprintf("boot_command: %s", command), state) } return multistep.ActionContinue } func (*StepTypeBootCommand) Cleanup(multistep.StateBag) {} - -func vncSendString(c *vnc.ClientConn, original string) { - // Scancodes reference: https://github.com/qemu/qemu/blob/master/ui/vnc_keysym.h - special := make(map[string]uint32) - special["<bs>"] = 0xFF08 - special["<del>"] = 0xFFFF - special["<enter>"] = 0xFF0D - special["<esc>"] = 0xFF1B - special["<f1>"] = 0xFFBE - special["<f2>"] = 0xFFBF - special["<f3>"] = 0xFFC0 - special["<f4>"] = 0xFFC1 - special["<f5>"] = 0xFFC2 - special["<f6>"] = 0xFFC3 - special["<f7>"] = 0xFFC4 - special["<f8>"] = 0xFFC5 - special["<f9>"] = 0xFFC6 - special["<f10>"] = 0xFFC7 - special["<f11>"] = 0xFFC8 - special["<f12>"] = 0xFFC9 - special["<return>"] = 0xFF0D - special["<tab>"] = 0xFF09 - special["<up>"] = 0xFF52 - special["<down>"] = 0xFF54 - special["<left>"] = 0xFF51 - special["<right>"] = 0xFF53 - special["<spacebar>"] = 0x020 - special["<insert>"] = 0xFF63 - special["<home>"] = 0xFF50 - special["<end>"] = 0xFF57 - special["<pageUp>"] = 0xFF55 - special["<pageDown>"] = 0xFF56 - special["<leftAlt>"] = 0xFFE9 - special["<leftCtrl>"] = 0xFFE3 - special["<leftShift>"] = 0xFFE1 - special["<rightAlt>"] = 0xFFEA - special["<rightCtrl>"] = 0xFFE4 - special["<rightShift>"] = 0xFFE2 - - shiftedChars := "~!@#$%^&*()_+{}|:\"<>?" - - // We delay (default 100ms) between each key event to allow for CPU or - // network latency. See PackerKeyEnv for tuning. - keyInterval := common.PackerKeyDefault - if delay, err := time.ParseDuration(os.Getenv(common.PackerKeyEnv)); err == nil { - keyInterval = delay - } - - // TODO(mitchellh): Ripe for optimizations of some point, perhaps. 
- for len(original) > 0 { - var keyCode uint32 - keyShift := false - - if strings.HasPrefix(original, "<leftAltOn>") { - keyCode = special["<leftAlt>"] - original = original[len("<leftAltOn>"):] - log.Printf("Special code '<leftAltOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<leftCtrlOn>") { - keyCode = special["<leftCtrl>"] - original = original[len("<leftCtrlOn>"):] - log.Printf("Special code '<leftCtrlOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<leftShiftOn>") { - keyCode = special["<leftShift>"] - original = original[len("<leftShiftOn>"):] - log.Printf("Special code '<leftShiftOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<leftAltOff>") { - keyCode = special["<leftAlt>"] - original = original[len("<leftAltOff>"):] - log.Printf("Special code '<leftAltOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<leftCtrlOff>") { - keyCode = special["<leftCtrl>"] - original = original[len("<leftCtrlOff>"):] - log.Printf("Special code '<leftCtrlOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<leftShiftOff>") { - keyCode = special["<leftShift>"] - original = original[len("<leftShiftOff>"):] - log.Printf("Special code '<leftShiftOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<rightAltOn>") { - keyCode = special["<rightAlt>"] - original = original[len("<rightAltOn>"):] - log.Printf("Special code '<rightAltOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<rightCtrlOn>") { - keyCode = special["<rightCtrl>"] - original = original[len("<rightCtrlOn>"):] - log.Printf("Special code '<rightCtrlOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<rightShiftOn>") { - keyCode = special["<rightShift>"] - original = original[len("<rightShiftOn>"):] - log.Printf("Special code '<rightShiftOn>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<rightAltOff>") { - keyCode = special["<rightAlt>"] - original = original[len("<rightAltOff>"):] - log.Printf("Special code '<rightAltOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<rightCtrlOff>") { - keyCode = special["<rightCtrl>"] - original = original[len("<rightCtrlOff>"):] - log.Printf("Special code '<rightCtrlOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<rightShiftOff>") { - keyCode = special["<rightShift>"] - original = original[len("<rightShiftOff>"):] - log.Printf("Special code '<rightShiftOff>' found, replacing with: %d", keyCode) - - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - continue - } - - if strings.HasPrefix(original, "<wait>") { - log.Printf("Special code '<wait>' found, sleeping one second") - time.Sleep(1 * time.Second) - original = original[len("<wait>"):] - continue - } - - if strings.HasPrefix(original, "<wait5>") { - log.Printf("Special code '<wait5>' found, sleeping 5 seconds") - time.Sleep(5 * time.Second) - original = original[len("<wait5>"):] - continue - } - - if strings.HasPrefix(original, "<wait10>") { - log.Printf("Special code '<wait10>' found, sleeping 10 seconds") - time.Sleep(10 * time.Second) - original = original[len("<wait10>"):] - continue - } - - for specialCode, specialValue := range special { - if strings.HasPrefix(original, specialCode) { - 
log.Printf("Special code '%s' found, replacing with: %d", specialCode, specialValue) - keyCode = specialValue - original = original[len(specialCode):] - break - } - } - - if keyCode == 0 { - r, size := utf8.DecodeRuneInString(original) - original = original[size:] - keyCode = uint32(r) - keyShift = unicode.IsUpper(r) || strings.ContainsRune(shiftedChars, r) - - log.Printf("Sending char '%c', code %d, shift %v", r, keyCode, keyShift) - } - - if keyShift { - c.KeyEvent(KeyLeftShift, true) - } - - c.KeyEvent(keyCode, true) - time.Sleep(keyInterval) - c.KeyEvent(keyCode, false) - time.Sleep(keyInterval) - - if keyShift { - c.KeyEvent(KeyLeftShift, false) - } - } -} diff --git a/builder/vmware/common/step_upload_tools.go b/builder/vmware/common/step_upload_tools.go index dfa8fcb88..eed55ab0b 100644 --- a/builder/vmware/common/step_upload_tools.go +++ b/builder/vmware/common/step_upload_tools.go @@ -1,12 +1,13 @@ package common import ( + "context" "fmt" "os" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type toolsUploadPathTemplate struct { @@ -20,7 +21,7 @@ type StepUploadTools struct { Ctx interpolate.Context } -func (c *StepUploadTools) Run(state multistep.StateBag) multistep.StepAction { +func (c *StepUploadTools) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(Driver) if c.ToolsUploadFlavor == "" { diff --git a/builder/vmware/common/vmx.go b/builder/vmware/common/vmx.go index 1fcdfa35e..631e0106d 100644 --- a/builder/vmware/common/vmx.go +++ b/builder/vmware/common/vmx.go @@ -44,14 +44,14 @@ func EncodeVMX(contents map[string]string) string { } // a list of VMX key fragments that the value must not be quoted - // fragments are used to cover multliples (i.e. multiple disks) + // fragments are used to cover multiples (i.e. multiple disks) // keys are still lowercase at this point, use lower fragments noQuotes := []string{ ".virtualssd", } // a list of VMX key fragments that are case sensitive - // fragments are used to cover multliples (i.e. multiple disks) + // fragments are used to cover multiples (i.e. 
multiple disks) caseSensitive := []string{ ".virtualSSD", } diff --git a/builder/vmware/iso/artifact_test.go b/builder/vmware/iso/artifact_test.go index c18e60119..ea4bab42b 100644 --- a/builder/vmware/iso/artifact_test.go +++ b/builder/vmware/iso/artifact_test.go @@ -1,8 +1,9 @@ package iso import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestArtifact_Impl(t *testing.T) { diff --git a/builder/vmware/iso/builder.go b/builder/vmware/iso/builder.go index 5fa15317f..0944db927 100644 --- a/builder/vmware/iso/builder.go +++ b/builder/vmware/iso/builder.go @@ -11,11 +11,12 @@ import ( vmwcommon "github.com/hashicorp/packer/builder/vmware/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" "github.com/hashicorp/packer/helper/communicator" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const BuilderIdESX = "mitchellh.vmware-esx" @@ -30,6 +31,7 @@ type Config struct { common.HTTPConfig `mapstructure:",squash"` common.ISOConfig `mapstructure:",squash"` common.FloppyConfig `mapstructure:",squash"` + bootcommand.VNCConfig `mapstructure:",squash"` vmwcommon.DriverConfig `mapstructure:",squash"` vmwcommon.OutputConfig `mapstructure:",squash"` vmwcommon.RunConfig `mapstructure:",squash"` @@ -38,21 +40,43 @@ type Config struct { vmwcommon.ToolsConfig `mapstructure:",squash"` vmwcommon.VMXConfig `mapstructure:",squash"` - AdditionalDiskSize []uint `mapstructure:"disk_additional_size"` - DiskName string `mapstructure:"vmdk_name"` - DiskSize uint `mapstructure:"disk_size"` - DiskTypeId string `mapstructure:"disk_type_id"` - Format string `mapstructure:"format"` - GuestOSType string `mapstructure:"guest_os_type"` + // disk drives + AdditionalDiskSize []uint `mapstructure:"disk_additional_size"` + DiskAdapterType string `mapstructure:"disk_adapter_type"` + DiskName string `mapstructure:"vmdk_name"` + DiskSize uint `mapstructure:"disk_size"` + DiskTypeId string `mapstructure:"disk_type_id"` + Format string `mapstructure:"format"` + + // cdrom drive + CdromAdapterType string `mapstructure:"cdrom_adapter_type"` + + // platform information + GuestOSType string `mapstructure:"guest_os_type"` + Version string `mapstructure:"version"` + VMName string `mapstructure:"vm_name"` + + // Network adapter and type + NetworkAdapterType string `mapstructure:"network_adapter_type"` + Network string `mapstructure:"network"` + + // device presence + Sound bool `mapstructure:"sound"` + USB bool `mapstructure:"usb"` + + // communication ports + Serial string `mapstructure:"serial"` + Parallel string `mapstructure:"parallel"` + + // booting a guest KeepRegistered bool `mapstructure:"keep_registered"` OVFToolOptions []string `mapstructure:"ovftool_options"` SkipCompaction bool `mapstructure:"skip_compaction"` SkipExport bool `mapstructure:"skip_export"` - VMName string `mapstructure:"vm_name"` VMXDiskTemplatePath string `mapstructure:"vmx_disk_template_path"` VMXTemplatePath string `mapstructure:"vmx_template_path"` - Version string `mapstructure:"version"` + // remote vsphere RemoteType string `mapstructure:"remote_type"` RemoteDatastore string `mapstructure:"remote_datastore"` RemoteCacheDatastore string `mapstructure:"remote_cache_datastore"` @@ -100,6 +124,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { errs = packer.MultiErrorAppend(errs, 
b.config.ToolsConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.VMXConfig.Prepare(&b.config.ctx)...) errs = packer.MultiErrorAppend(errs, b.config.FloppyConfig.Prepare(&b.config.ctx)...) + errs = packer.MultiErrorAppend(errs, b.config.VNCConfig.Prepare(&b.config.ctx)...) if b.config.DiskName == "" { b.config.DiskName = "disk" @@ -109,6 +134,11 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { b.config.DiskSize = 40000 } + if b.config.DiskAdapterType == "" { + // Default is lsilogic + b.config.DiskAdapterType = "lsilogic" + } + if b.config.DiskTypeId == "" { // Default is growable virtual disk split in 2GB files. b.config.DiskTypeId = "1" @@ -158,19 +188,42 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } + if b.config.Network == "" { + b.config.Network = "nat" + } + + if !b.config.Sound { + b.config.Sound = false + } + + if !b.config.USB { + b.config.USB = false + } + // Remote configuration validation if b.config.RemoteType != "" { if b.config.RemoteHost == "" { errs = packer.MultiErrorAppend(errs, fmt.Errorf("remote_host must be specified")) } + if b.config.RemoteType != "esx5" { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Only 'esx5' value is accepted for remote_type")) + } } - if b.config.Format != "" { - if !(b.config.Format == "ova" || b.config.Format == "ovf" || b.config.Format == "vmx") { - errs = packer.MultiErrorAppend(errs, - fmt.Errorf("format must be one of ova, ovf, or vmx")) - } + if b.config.Format == "" { + b.config.Format = "ovf" + } + + if !(b.config.Format == "ova" || b.config.Format == "ovf" || b.config.Format == "vmx") { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("format must be one of ova, ovf, or vmx")) + } + + if b.config.RemoteType == "esx5" && b.config.SkipExport != true && b.config.RemotePassword == "" { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("exporting the vm (with ovftool) requires that you set a value for remote_password")) } // Warnings @@ -210,7 +263,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe exportOutputPath := b.config.OutputDir - if b.config.RemoteType != "" && b.config.Format != "" { + if b.config.RemoteType != "" { b.config.OutputDir = b.config.VMName } dir.SetOutputDir(b.config.OutputDir) @@ -247,8 +300,9 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Directories: b.config.FloppyConfig.FloppyDirectories, }, &stepRemoteUpload{ - Key: "floppy_path", - Message: "Uploading Floppy to remote machine...", + Key: "floppy_path", + Message: "Uploading Floppy to remote machine...", + DoCleanup: true, }, &stepRemoteUpload{ Key: "iso_path", @@ -276,13 +330,13 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe Format: b.config.Format, }, &vmwcommon.StepRun{ - BootWait: b.config.BootWait, DurationBeforeStop: 5 * time.Second, Headless: b.config.Headless, }, &vmwcommon.StepTypeBootCommand{ + BootWait: b.config.BootWait, VNCEnabled: !b.config.DisableVNC, - BootCommand: b.config.BootCommand, + BootCommand: b.config.FlatBootCommand(), VMName: b.config.VMName, Ctx: b.config.ctx, }, diff --git a/builder/vmware/iso/builder_test.go b/builder/vmware/iso/builder_test.go index ab01a270c..59e18907d 100644 --- a/builder/vmware/iso/builder_test.go +++ b/builder/vmware/iso/builder_test.go @@ -139,6 +139,91 @@ func TestBuilderPrepare_InvalidFloppies(t *testing.T) { } } +func TestBuilderPrepare_RemoteType(t *testing.T) { + var b Builder + config := testConfig() + + 
config["format"] = "ovf" + config["remote_host"] = "foobar.example.com" + config["remote_password"] = "supersecret" + // Bad + config["remote_type"] = "foobar" + warns, err := b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } + + config["remote_type"] = "esx5" + // Bad + config["remote_host"] = "" + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } + + // Good + config["remote_type"] = "" + config["remote_host"] = "" + config["remote_password"] = "" + config["remote_private_key_file"] = "" + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + // Good + config["remote_type"] = "esx5" + config["remote_host"] = "foobar.example.com" + config["remote_password"] = "supersecret" + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } +} + +func TestBuilderPrepare_RemoteExport(t *testing.T) { + var b Builder + config := testConfig() + + config["remote_type"] = "esx5" + config["remote_host"] = "foobar.example.com" + // Bad + config["remote_password"] = "" + warns, err := b.Prepare(config) + if len(warns) != 0 { + t.Fatalf("bad: %#v", warns) + } + if err == nil { + t.Fatal("should have error") + } + + // Good + config["remote_password"] = "supersecret" + b = Builder{} + warns, err = b.Prepare(config) + if len(warns) != 0 { + t.Fatalf("err: %s", err) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } +} + func TestBuilderPrepare_Format(t *testing.T) { var b Builder config := testConfig() diff --git a/builder/vmware/iso/driver_esx5.go b/builder/vmware/iso/driver_esx5.go index 86ddf015b..bfeee8d24 100644 --- a/builder/vmware/iso/driver_esx5.go +++ b/builder/vmware/iso/driver_esx5.go @@ -14,16 +14,19 @@ import ( "strings" "time" + vmwcommon "github.com/hashicorp/packer/builder/vmware/common" commonssh "github.com/hashicorp/packer/common/ssh" "github.com/hashicorp/packer/communicator/ssh" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" gossh "golang.org/x/crypto/ssh" ) // ESX5 driver talks to an ESXi5 hypervisor remotely over SSH to build // virtual machines. This driver can only manage one machine at a time. 
type ESX5Driver struct { + base vmwcommon.VmwareDriver + Host string Port uint Username string @@ -38,7 +41,7 @@ type ESX5Driver struct { vmId string } -func (d *ESX5Driver) Clone(dst, src string) error { +func (d *ESX5Driver) Clone(dst, src string, linked bool) error { return errors.New("Cloning is not supported with the ESX driver.") } @@ -46,9 +49,9 @@ func (d *ESX5Driver) CompactDisk(diskPathLocal string) error { return nil } -func (d *ESX5Driver) CreateDisk(diskPathLocal string, size string, typeId string) error { +func (d *ESX5Driver) CreateDisk(diskPathLocal string, size string, adapter_type string, typeId string) error { diskPath := d.datastorePath(diskPathLocal) - return d.sh("vmkfstools", "-c", size, "-d", typeId, "-a", "lsilogic", diskPath) + return d.sh("vmkfstools", "-c", size, "-d", typeId, "-a", adapter_type, diskPath) } func (d *ESX5Driver) IsRunning(string) (bool, error) { @@ -135,6 +138,12 @@ func (d *ESX5Driver) UploadISO(localPath string, checksum string, checksumType s return finalPath, nil } +func (d *ESX5Driver) RemoveCache(localPath string) error { + finalPath := d.cachePath(localPath) + log.Printf("Removing remote cache path %s (local %s)", finalPath, localPath) + return d.sh("rm", "-f", finalPath) +} + func (d *ESX5Driver) ToolsIsoPath(string) string { return "" } @@ -143,11 +152,30 @@ func (d *ESX5Driver) ToolsInstall() error { return d.sh("vim-cmd", "vmsvc/tools.install", d.vmId) } -func (d *ESX5Driver) DhcpLeasesPath(string) string { - return "" -} - func (d *ESX5Driver) Verify() error { + // Ensure that NetworkMapper is nil, since the mapping of device<->network + // is handled by ESX and thus can't be performed by packer unless we + // query things. + + // FIXME: If we want to expose the network devices to the user, then we can + // probably use esxcli to enumerate the portgroup and switchId + d.base.NetworkMapper = nil + + // Be safe/friendly and overwrite the rest of the utility functions with + // log functions despite the fact that these shouldn't be called anyways. 
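
The comments that follow describe the approach taken here: the shared vmwcommon.VmwareDriver base exposes overridable function fields, and the ESXi driver replaces the host-only lookups with logging stubs. A stripped-down sketch of that hook pattern (the helper name is illustrative, not part of this diff):

```
package iso

import (
	"log"

	vmwcommon "github.com/hashicorp/packer/builder/vmware/common"
)

// stubHostOnlyLookups is illustrative: it overrides one of the base driver's
// function fields so an unexpected call is logged instead of returning a
// path that cannot exist on a remote ESXi host.
func stubHostOnlyLookups(base *vmwcommon.VmwareDriver) {
	base.DhcpLeasesPath = func(device string) string {
		log.Printf("DhcpLeasesPath(%q) is not meaningful for a remote ESXi build", device)
		return ""
	}
}
```
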
+ d.base.DhcpLeasesPath = func(device string) string { + log.Printf("Unexpected error, ESX5 driver attempted to call DhcpLeasesPath(%#v)\n", device) + return "" + } + d.base.DhcpConfPath = func(device string) string { + log.Printf("Unexpected error, ESX5 driver attempted to call DhcpConfPath(%#v)\n", device) + return "" + } + d.base.VmnetnatConfPath = func(device string) string { + log.Printf("Unexpected error, ESX5 driver attempted to call VmnetnatConfPath(%#v)\n", device) + return "" + } + checks := []func() error{ d.connect, d.checkSystemVersion, @@ -159,11 +187,10 @@ func (d *ESX5Driver) Verify() error { return err } } - return nil } -func (d *ESX5Driver) HostIP() (string, error) { +func (d *ESX5Driver) HostIP(multistep.StateBag) (string, error) { conn, err := net.Dial("tcp", fmt.Sprintf("%s:%d", d.Host, d.Port)) if err != nil { return "", err @@ -174,6 +201,101 @@ func (d *ESX5Driver) HostIP() (string, error) { return host, err } +func (d *ESX5Driver) GuestIP(multistep.StateBag) (string, error) { + // GuestIP is defined by the user as d.Host..but let's validate it just to be sure + conn, err := net.Dial("tcp", fmt.Sprintf("%s:%d", d.Host, d.Port)) + defer conn.Close() + if err != nil { + return "", err + } + + host, _, err := net.SplitHostPort(conn.RemoteAddr().String()) + return host, err +} + +func (d *ESX5Driver) HostAddress(multistep.StateBag) (string, error) { + // make a connection + conn, err := net.Dial("tcp", fmt.Sprintf("%s:%d", d.Host, d.Port)) + defer conn.Close() + if err != nil { + return "", err + } + + // get the local address (the host) + host, _, err := net.SplitHostPort(conn.LocalAddr().String()) + if err != nil { + return "", fmt.Errorf("Unable to determine host address for ESXi: %v", err) + } + + // iterate through all the interfaces.. + interfaces, err := net.Interfaces() + if err != nil { + return "", fmt.Errorf("Unable to enumerate host interfaces : %v", err) + } + + for _, intf := range interfaces { + addrs, err := intf.Addrs() + if err != nil { + continue + } + + // ..checking to see if any if it's addrs match the host address + for _, addr := range addrs { + if addr.String() == host { // FIXME: Is this the proper way to compare two HardwareAddrs? 
+ return intf.HardwareAddr.String(), nil + } + } + } + + // ..unfortunately nothing was found + return "", fmt.Errorf("Unable to locate interface matching host address in ESXi: %v", host) +} + +func (d *ESX5Driver) GuestAddress(multistep.StateBag) (string, error) { + // list all the interfaces on the esx host + r, err := d.esxcli("network", "ip", "interface", "list") + if err != nil { + return "", fmt.Errorf("Could not retrieve network interfaces for ESXi: %v", err) + } + + // rip out the interface name and the MAC address from the csv output + addrs := make(map[string]string) + for record, err := r.read(); record != nil && err == nil; record, err = r.read() { + if strings.ToUpper(record["Enabled"]) != "TRUE" { + continue + } + addrs[record["Name"]] = record["MAC Address"] + } + + // list all the addresses on the esx host + r, err = d.esxcli("network", "ip", "interface", "ipv4", "get") + if err != nil { + return "", fmt.Errorf("Could not retrieve network addresses for ESXi: %v", err) + } + + // figure out the interface name that matches the specified d.Host address + var intf string + intf = "" + for record, err := r.read(); record != nil && err == nil; record, err = r.read() { + if record["IPv4 Address"] == d.Host && record["Name"] != "" { + intf = record["Name"] + break + } + } + if intf == "" { + return "", fmt.Errorf("Unable to find matching address for ESXi guest") + } + + // find the MAC address according to the interface name + result, ok := addrs[intf] + if !ok { + return "", fmt.Errorf("Unable to find address for ESXi guest interface") + } + + // ..and we're good + return result, nil +} + func (d *ESX5Driver) VNCAddress(_ string, portMin, portMax uint) (string, uint, error) { var vncPort uint @@ -267,7 +389,13 @@ func (d *ESX5Driver) CommHost(state multistep.StateBag) (string, error) { return "", err } - record, err := r.find("Name", config.VMName) + // The value in the Name field returned by 'esxcli network vm list' + // corresponds directly to the value of displayName set in the VMX file + var displayName string + if v, ok := state.GetOk("display_name"); ok { + displayName = v.(string) + } + record, err := r.find("Name", displayName) if err != nil { return "", err } @@ -302,6 +430,9 @@ func (d *ESX5Driver) CommHost(state multistep.StateBag) (string, error) { if e.Timeout() { log.Printf("Timeout connecting to %s", record["IPAddress"]) continue + } else if strings.Contains(e.Error(), "connection refused") { + log.Printf("Connection refused when connecting to: %s", record["IPAddress"]) + continue } } } else { @@ -528,6 +659,10 @@ func (d *ESX5Driver) esxcli(args ...string) (*esxcliReader, error) { return &esxcliReader{r, header}, nil } +func (d *ESX5Driver) GetVmwareDriver() vmwcommon.VmwareDriver { + return d.base +} + type esxcliReader struct { cr *csv.Reader header []string diff --git a/builder/vmware/iso/driver_esx5_test.go b/builder/vmware/iso/driver_esx5_test.go index 6ce714ff1..097d6e186 100644 --- a/builder/vmware/iso/driver_esx5_test.go +++ b/builder/vmware/iso/driver_esx5_test.go @@ -6,7 +6,7 @@ import ( "testing" vmwcommon "github.com/hashicorp/packer/builder/vmware/common" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" ) func TestESX5Driver_implDriver(t *testing.T) { @@ -50,8 +50,9 @@ func TestESX5Driver_HostIP(t *testing.T) { defer listen.Close() driver := ESX5Driver{Host: "localhost", Port: uint(port)} + state := new(multistep.BasicStateBag) - if host, _ := driver.HostIP(); host != expected_host { + if host, _ := 
driver.HostIP(state); host != expected_host { t.Error(fmt.Sprintf("Expected string, %s but got %s", expected_host, host)) } } diff --git a/builder/vmware/iso/remote_driver.go b/builder/vmware/iso/remote_driver.go index 4c6dc18e4..6e830042c 100644 --- a/builder/vmware/iso/remote_driver.go +++ b/builder/vmware/iso/remote_driver.go @@ -12,6 +12,9 @@ type RemoteDriver interface { // exists. UploadISO(string, string, string) (string, error) + // RemoveCache deletes localPath from the remote cache. + RemoveCache(localPath string) error + // Adds a VM to inventory specified by the path to the VMX given. Register(string) error diff --git a/builder/vmware/iso/remote_driver_mock.go b/builder/vmware/iso/remote_driver_mock.go index adf5fa713..03830ddcf 100644 --- a/builder/vmware/iso/remote_driver_mock.go +++ b/builder/vmware/iso/remote_driver_mock.go @@ -64,6 +64,10 @@ func (d *RemoteDriverMock) upload(dst, src string) error { return d.uploadErr } +func (d *RemoteDriverMock) RemoveCache(localPath string) error { + return nil +} + func (d *RemoteDriverMock) ReloadVM() error { return d.ReloadVMErr } diff --git a/builder/vmware/iso/step_create_disk.go b/builder/vmware/iso/step_create_disk.go index 463599780..b4ced8cd1 100644 --- a/builder/vmware/iso/step_create_disk.go +++ b/builder/vmware/iso/step_create_disk.go @@ -1,11 +1,14 @@ package iso import ( + "context" "fmt" - vmwcommon "github.com/hashicorp/packer/builder/vmware/common" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "log" "path/filepath" + + vmwcommon "github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step creates the virtual disks for the VM. @@ -16,47 +19,47 @@ import ( // ui packer.Ui // // Produces: -// full_disk_path (string) - The full path to the created disk. 
+// disk_full_paths ([]string) - The full paths to all created disks type stepCreateDisk struct{} -func (stepCreateDisk) Run(state multistep.StateBag) multistep.StepAction { +func (stepCreateDisk) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) driver := state.Get("driver").(vmwcommon.Driver) ui := state.Get("ui").(packer.Ui) - ui.Say("Creating virtual machine disk") - full_disk_path := filepath.Join(config.OutputDir, config.DiskName+".vmdk") - if err := driver.CreateDisk(full_disk_path, fmt.Sprintf("%dM", config.DiskSize), config.DiskTypeId); err != nil { - err := fmt.Errorf("Error creating disk: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - - state.Put("full_disk_path", full_disk_path) + ui.Say("Creating required virtual machine disks") + // Users can configure disks at several locations in the template so + // first collate all the disk requirements + var diskFullPaths, diskSizes []string + // The 'main' or 'default' disk + diskFullPaths = append(diskFullPaths, filepath.Join(config.OutputDir, config.DiskName+".vmdk")) + diskSizes = append(diskSizes, fmt.Sprintf("%dM", uint64(config.DiskSize))) + // Additional disks if len(config.AdditionalDiskSize) > 0 { - // stash the disk paths we create - additional_paths := make([]string, len(config.AdditionalDiskSize)) - - ui.Say("Creating additional hard drives...") - for i, additionalsize := range config.AdditionalDiskSize { - additionalpath := filepath.Join(config.OutputDir, fmt.Sprintf("%s-%d.vmdk", config.DiskName, i+1)) - size := fmt.Sprintf("%dM", uint64(additionalsize)) - - if err := driver.CreateDisk(additionalpath, size, config.DiskTypeId); err != nil { - err := fmt.Errorf("Error creating additional disk: %s", err) - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - - additional_paths[i] = additionalpath + for i, diskSize := range config.AdditionalDiskSize { + path := filepath.Join(config.OutputDir, fmt.Sprintf("%s-%d.vmdk", config.DiskName, i+1)) + diskFullPaths = append(diskFullPaths, path) + size := fmt.Sprintf("%dM", uint64(diskSize)) + diskSizes = append(diskSizes, size) } - - state.Put("additional_disk_paths", additional_paths) } + // Create all required disks + for i, diskFullPath := range diskFullPaths { + log.Printf("[INFO] Creating disk with Path: %s and Size: %s", diskFullPath, diskSizes[i]) + // Additional disks currently use the same adapter type and disk + // type as specified for the main disk + if err := driver.CreateDisk(diskFullPath, diskSizes[i], config.DiskAdapterType, config.DiskTypeId); err != nil { + err := fmt.Errorf("Error creating disk: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + } + + // Stash the disk paths so we can retrieve later e.g. 
when compacting + state.Put("disk_full_paths", diskFullPaths) return multistep.ActionContinue } diff --git a/builder/vmware/iso/step_create_vmx.go b/builder/vmware/iso/step_create_vmx.go index 1f4ee812f..db9010a04 100644 --- a/builder/vmware/iso/step_create_vmx.go +++ b/builder/vmware/iso/step_create_vmx.go @@ -1,23 +1,55 @@ package iso import ( + "context" "fmt" "io/ioutil" "os" "path/filepath" + "runtime" + "strings" vmwcommon "github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type vmxTemplateData struct { - Name string - GuestOS string - DiskName string - ISOPath string - Version string + Name string + GuestOS string + ISOPath string + Version string + + SCSI_Present string + SCSI_diskAdapterType string + SATA_Present string + NVME_Present string + + DiskName string + DiskType string + CDROMType string + CDROMType_MasterSlave string + + Network_Type string + Network_Device string + Network_Adapter string + + Sound_Present string + Usb_Present string + + Serial_Present string + Serial_Type string + Serial_Endpoint string + Serial_Host string + Serial_Yield string + Serial_Filename string + Serial_Auto string + + Parallel_Present string + Parallel_Bidirectional string + Parallel_Filename string + Parallel_Auto string } type additionalDiskTemplateData struct { @@ -38,11 +70,238 @@ type stepCreateVMX struct { tempDir string } -func (s *stepCreateVMX) Run(state multistep.StateBag) multistep.StepAction { +/* serial conversions */ +type serialConfigPipe struct { + filename string + endpoint string + host string + yield string +} + +type serialConfigFile struct { + filename string + yield string +} + +type serialConfigDevice struct { + devicename string + yield string +} + +type serialConfigAuto struct { + devicename string + yield string +} + +type serialUnion struct { + serialType interface{} + pipe *serialConfigPipe + file *serialConfigFile + device *serialConfigDevice + auto *serialConfigAuto +} + +func unformat_serial(config string) (*serialUnion, error) { + var defaultSerialPort string + if runtime.GOOS == "windows" { + defaultSerialPort = "COM1" + } else { + defaultSerialPort = "/dev/ttyS0" + } + + input := strings.SplitN(config, ":", 2) + if len(input) < 1 { + return nil, fmt.Errorf("Unexpected format for serial port: %s", config) + } + + var formatType, formatOptions string + formatType = input[0] + if len(input) == 2 { + formatOptions = input[1] + } else { + formatOptions = "" + } + + switch strings.ToUpper(formatType) { + case "PIPE": + comp := strings.Split(formatOptions, ",") + if len(comp) < 3 || len(comp) > 4 { + return nil, fmt.Errorf("Unexpected format for serial port : pipe : %s", config) + } + if res := strings.ToLower(comp[1]); res != "client" && res != "server" { + return nil, fmt.Errorf("Unexpected format for serial port : pipe : endpoint : %s : %s", res, config) + } + if res := strings.ToLower(comp[2]); res != "app" && res != "vm" { + return nil, fmt.Errorf("Unexpected format for serial port : pipe : host : %s : %s", res, config) + } + res := &serialConfigPipe{ + filename: comp[0], + endpoint: comp[1], + host: map[string]string{"app": "TRUE", "vm": "FALSE"}[strings.ToLower(comp[2])], + yield: "FALSE", + } + if len(comp) == 4 { + res.yield = strings.ToUpper(comp[3]) + } + if res.yield != "TRUE" && res.yield != "FALSE" { + return nil, fmt.Errorf("Unexpected format for serial port : pipe : yield : %s : 
%s", res.yield, config) + } + return &serialUnion{serialType: res, pipe: res}, nil + + case "FILE": + comp := strings.Split(formatOptions, ",") + if len(comp) > 2 { + return nil, fmt.Errorf("Unexpected format for serial port : file : %s", config) + } + + res := &serialConfigFile{yield: "FALSE"} + + res.filename = filepath.FromSlash(comp[0]) + + res.yield = map[bool]string{true: strings.ToUpper(comp[0]), false: "FALSE"}[len(comp) > 1] + if res.yield != "TRUE" && res.yield != "FALSE" { + return nil, fmt.Errorf("Unexpected format for serial port : file : yield : %s : %s", res.yield, config) + } + + return &serialUnion{serialType: res, file: res}, nil + + case "DEVICE": + comp := strings.Split(formatOptions, ",") + if len(comp) > 2 { + return nil, fmt.Errorf("Unexpected format for serial port : device : %s", config) + } + + res := new(serialConfigDevice) + + if len(comp) == 2 { + res.devicename = map[bool]string{true: filepath.FromSlash(comp[0]), false: defaultSerialPort}[len(comp[0]) > 0] + res.yield = strings.ToUpper(comp[1]) + } else if len(comp) == 1 { + res.devicename = map[bool]string{true: filepath.FromSlash(comp[0]), false: defaultSerialPort}[len(comp[0]) > 0] + res.yield = "FALSE" + } else if len(comp) == 0 { + res.devicename = defaultSerialPort + res.yield = "FALSE" + } + + if res.yield != "TRUE" && res.yield != "FALSE" { + return nil, fmt.Errorf("Unexpected format for serial port : device : yield : %s : %s", res.yield, config) + } + + return &serialUnion{serialType: res, device: res}, nil + + case "AUTO": + res := new(serialConfigAuto) + res.devicename = defaultSerialPort + + if len(formatOptions) > 0 { + res.yield = strings.ToUpper(formatOptions) + } else { + res.yield = "FALSE" + } + + if res.yield != "TRUE" && res.yield != "FALSE" { + return nil, fmt.Errorf("Unexpected format for serial port : auto : yield : %s : %s", res.yield, config) + } + + return &serialUnion{serialType: res, auto: res}, nil + + case "NONE": + return &serialUnion{serialType: nil}, nil + + default: + return nil, fmt.Errorf("Unknown serial type : %s : %s", strings.ToUpper(formatType), config) + } +} + +/* parallel port */ +type parallelUnion struct { + parallelType interface{} + file *parallelPortFile + device *parallelPortDevice + auto *parallelPortAuto +} +type parallelPortFile struct { + filename string +} +type parallelPortDevice struct { + bidirectional string + devicename string +} +type parallelPortAuto struct { + bidirectional string +} + +func unformat_parallel(config string) (*parallelUnion, error) { + input := strings.SplitN(config, ":", 2) + if len(input) < 1 { + return nil, fmt.Errorf("Unexpected format for parallel port: %s", config) + } + + var formatType, formatOptions string + formatType = input[0] + if len(input) == 2 { + formatOptions = input[1] + } else { + formatOptions = "" + } + + switch strings.ToUpper(formatType) { + case "FILE": + res := ¶llelPortFile{filename: filepath.FromSlash(formatOptions)} + return ¶llelUnion{parallelType: res, file: res}, nil + case "DEVICE": + comp := strings.Split(formatOptions, ",") + if len(comp) < 1 || len(comp) > 2 { + return nil, fmt.Errorf("Unexpected format for parallel port: %s", config) + } + res := new(parallelPortDevice) + res.bidirectional = "FALSE" + res.devicename = filepath.FromSlash(comp[0]) + if len(comp) > 1 { + switch strings.ToUpper(comp[1]) { + case "BI": + res.bidirectional = "TRUE" + case "UNI": + res.bidirectional = "FALSE" + default: + return nil, fmt.Errorf("Unknown parallel port direction : %s : %s", strings.ToUpper(comp[0]), 
config) + } + } + return ¶llelUnion{parallelType: res, device: res}, nil + + case "AUTO": + res := new(parallelPortAuto) + switch strings.ToUpper(formatOptions) { + case "": + fallthrough + case "UNI": + res.bidirectional = "FALSE" + case "BI": + res.bidirectional = "TRUE" + default: + return nil, fmt.Errorf("Unknown parallel port direction : %s : %s", strings.ToUpper(formatOptions), config) + } + return ¶llelUnion{parallelType: res, auto: res}, nil + + case "NONE": + return ¶llelUnion{parallelType: nil}, nil + } + + return nil, fmt.Errorf("Unexpected format for parallel port: %s", config) +} + +/* regular steps */ +func (s *stepCreateVMX) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { config := state.Get("config").(*Config) isoPath := state.Get("iso_path").(string) ui := state.Get("ui").(packer.Ui) + // Convert the iso_path into a path relative to the .vmx file if possible + if relativeIsoPath, err := filepath.Rel(config.VMXTemplatePath, filepath.FromSlash(isoPath)); err == nil { + isoPath = relativeIsoPath + } + ui.Say("Building and writing VMX file") vmxTemplate := DefaultVMXTemplate @@ -110,17 +369,233 @@ func (s *stepCreateVMX) Run(state multistep.StateBag) multistep.StepAction { } } - ctx.Data = &vmxTemplateData{ + templateData := vmxTemplateData{ Name: config.VMName, GuestOS: config.GuestOSType, DiskName: config.DiskName, Version: config.Version, ISOPath: isoPath, + + SCSI_Present: "FALSE", + SCSI_diskAdapterType: "lsilogic", + SATA_Present: "FALSE", + NVME_Present: "FALSE", + + DiskType: "scsi", + CDROMType: "ide", + CDROMType_MasterSlave: "0", + + Network_Adapter: "e1000", + + Sound_Present: map[bool]string{true: "TRUE", false: "FALSE"}[bool(config.Sound)], + Usb_Present: map[bool]string{true: "TRUE", false: "FALSE"}[bool(config.USB)], + + Serial_Present: "FALSE", + Parallel_Present: "FALSE", } + /// Use the disk adapter type that the user specified to tweak the .vmx + // Also sync the cdrom adapter type according to what's common for that disk type. + diskAdapterType := strings.ToLower(config.DiskAdapterType) + switch diskAdapterType { + case "ide": + templateData.DiskType = "ide" + templateData.CDROMType = "ide" + templateData.CDROMType_MasterSlave = "1" + case "sata": + templateData.SATA_Present = "TRUE" + templateData.DiskType = "sata" + templateData.CDROMType = "sata" + templateData.CDROMType_MasterSlave = "1" + case "nvme": + templateData.NVME_Present = "TRUE" + templateData.DiskType = "nvme" + templateData.SATA_Present = "TRUE" + templateData.CDROMType = "sata" + templateData.CDROMType_MasterSlave = "0" + case "scsi": + diskAdapterType = "lsilogic" + fallthrough + default: + templateData.SCSI_Present = "TRUE" + templateData.SCSI_diskAdapterType = diskAdapterType + templateData.DiskType = "scsi" + templateData.CDROMType = "ide" + templateData.CDROMType_MasterSlave = "0" + } + + /// Handle the cdrom adapter type. If the disk adapter type and the + // cdrom adapter type are the same, then ensure that the cdrom is the + // slave device on whatever bus the disk adapter is on. 
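
The switch statements that follow implement the comment above: the chosen disk adapter decides which buses are enabled in the VMX, and a CD-ROM on the same bus becomes the slave (0:1) device. In template terms, the new hardware options handled by this step could be set roughly like this (illustrative values, shown in the same map form the builder tests use):

```
// Illustrative only; not defaults from this diff.
var exampleHardwareConfig = map[string]string{
	"disk_adapter_type":  `"sata"`,                 // "scsi" (default, lsilogic), "ide", "sata" or "nvme"
	"cdrom_adapter_type": `"sata"`,                 // same bus as the disk, so the cdrom becomes device 0:1
	"sound":              `true`,
	"usb":                `true`,
	"serial":             `"file:/tmp/serial.out"`, // other forms: "pipe:...", "device:...", "auto", "none"
	"parallel":           `"auto:bi"`,              // other forms: "file:...", "device:...[,bi|uni]", "none"
}
```
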
+ cdromAdapterType := strings.ToLower(config.CdromAdapterType) + if cdromAdapterType == "" { + cdromAdapterType = templateData.CDROMType + } else if cdromAdapterType == diskAdapterType { + templateData.CDROMType_MasterSlave = "1" + } else { + templateData.CDROMType_MasterSlave = "0" + } + + switch cdromAdapterType { + case "ide": + templateData.CDROMType = "ide" + case "sata": + templateData.SATA_Present = "TRUE" + templateData.CDROMType = "sata" + case "scsi": + templateData.SCSI_Present = "TRUE" + templateData.CDROMType = "scsi" + default: + err := fmt.Errorf("Error processing VMX template: %s", cdromAdapterType) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + /// Assign the network adapter type into the template if one was specified. + network_adapter := strings.ToLower(config.NetworkAdapterType) + if network_adapter != "" { + templateData.Network_Adapter = network_adapter + } + + /// Check the network type that the user specified + network := config.Network + driver := state.Get("driver").(vmwcommon.Driver).GetVmwareDriver() + + // check to see if the driver implements a network mapper for mapping + // the network-type to its device-name. + if driver.NetworkMapper != nil { + + // read network map configuration into a NetworkNameMapper. + netmap, err := driver.NetworkMapper() + if err != nil { + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + // try and convert the specified network to a device. + devices, err := netmap.NameIntoDevices(network) + + if err == nil && len(devices) > 0 { + // If multiple devices exist, for example for network "nat", VMware chooses + // the actual device. Only type "custom" allows the exact choice of a + // specific virtual network (see below). We allow VMware to choose the device + // and for device-specific operations like GuestIP, try to go over all + // devices that match a name (e.g. "nat"). + // https://pubs.vmware.com/workstation-9/index.jsp?topic=%2Fcom.vmware.ws.using.doc%2FGUID-3B504F2F-7A0B-415F-AE01-62363A95D052.html + templateData.Network_Type = network + templateData.Network_Device = "" + } else { + // otherwise, we were unable to find the type, so assume it's a custom device + templateData.Network_Type = "custom" + templateData.Network_Device = network + } + + // if NetworkMapper is nil, then we're using something like ESX, so fall + // back to the previous logic of using "nat" despite it not mattering to ESX. 
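
As the note above explains, when the host driver provides a network mapper the template's network value is resolved against the host's network map, and anything that does not resolve is treated as a "custom" vmnet device; on ESXi the mapper is nil and the previous "nat" behaviour is kept. An illustrative fragment for the two new network keys (values are examples only):

```
// Illustrative only; not defaults from this diff.
var exampleNetworkConfig = map[string]string{
	"network":              `"nat"`,     // a name the host netmap knows ("nat", "hostonly", ...) or a vmnet device for type "custom"
	"network_adapter_type": `"vmxnet3"`, // written to ethernet0.virtualDev; the template default is "e1000"
}
```
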
+ } else { + templateData.Network_Type = "nat" + templateData.Network_Device = network + + network = "nat" + } + + // store the network so that we can later figure out what ip address to bind to + state.Put("vmnetwork", network) + + /// check if serial port has been configured + if config.Serial == "" { + templateData.Serial_Present = "FALSE" + } else { + serial, err := unformat_serial(config.Serial) + if err != nil { + err := fmt.Errorf("Error processing VMX template: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + templateData.Serial_Present = "TRUE" + templateData.Serial_Filename = "" + templateData.Serial_Yield = "" + templateData.Serial_Endpoint = "" + templateData.Serial_Host = "" + templateData.Serial_Auto = "FALSE" + + switch serial.serialType.(type) { + case *serialConfigPipe: + templateData.Serial_Type = "pipe" + templateData.Serial_Endpoint = serial.pipe.endpoint + templateData.Serial_Host = serial.pipe.host + templateData.Serial_Yield = serial.pipe.yield + templateData.Serial_Filename = filepath.FromSlash(serial.pipe.filename) + case *serialConfigFile: + templateData.Serial_Type = "file" + templateData.Serial_Filename = filepath.FromSlash(serial.file.filename) + case *serialConfigDevice: + templateData.Serial_Type = "device" + templateData.Serial_Filename = filepath.FromSlash(serial.device.devicename) + case *serialConfigAuto: + templateData.Serial_Type = "device" + templateData.Serial_Filename = filepath.FromSlash(serial.auto.devicename) + templateData.Serial_Yield = serial.auto.yield + templateData.Serial_Auto = "TRUE" + case nil: + templateData.Serial_Present = "FALSE" + break + + default: + err := fmt.Errorf("Error processing VMX template: %v", serial) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + } + + /// check if parallel port has been configured + if config.Parallel == "" { + templateData.Parallel_Present = "FALSE" + } else { + parallel, err := unformat_parallel(config.Parallel) + if err != nil { + err := fmt.Errorf("Error processing VMX template: %s", err) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + + templateData.Parallel_Auto = "FALSE" + switch parallel.parallelType.(type) { + case *parallelPortFile: + templateData.Parallel_Present = "TRUE" + templateData.Parallel_Filename = filepath.FromSlash(parallel.file.filename) + case *parallelPortDevice: + templateData.Parallel_Present = "TRUE" + templateData.Parallel_Bidirectional = parallel.device.bidirectional + templateData.Parallel_Filename = filepath.FromSlash(parallel.device.devicename) + case *parallelPortAuto: + templateData.Parallel_Present = "TRUE" + templateData.Parallel_Auto = "TRUE" + templateData.Parallel_Bidirectional = parallel.auto.bidirectional + case nil: + templateData.Parallel_Present = "FALSE" + break + + default: + err := fmt.Errorf("Error processing VMX template: %v", parallel) + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + } + + ctx.Data = &templateData + + /// render the .vmx template vmxContents, err := interpolate.Render(vmxTemplate, &ctx) if err != nil { - err := fmt.Errorf("Error procesing VMX template: %s", err) + err := fmt.Errorf("Error processing VMX template: %s", err) state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt @@ -175,12 +650,13 @@ ehci.pciSlotNumber = "34" ehci.present = "TRUE" ethernet0.addressType = "generated" ethernet0.bsdName = "en0" -ethernet0.connectionType = "nat" 
+ethernet0.connectionType = "{{ .Network_Type }}" +ethernet0.vnet = "{{ .Network_Device }}" ethernet0.displayName = "Ethernet" ethernet0.linkStatePropagation.enable = "FALSE" ethernet0.pciSlotNumber = "33" ethernet0.present = "TRUE" -ethernet0.virtualDev = "e1000" +ethernet0.virtualDev = "{{ .Network_Adapter }}" ethernet0.wakeOnPcktRcv = "FALSE" extendedConfigFile = "{{ .Name }}.vmxf" floppy0.present = "FALSE" @@ -189,9 +665,21 @@ gui.fullScreenAtPowerOn = "FALSE" gui.viewModeAtPowerOn = "windowed" hgfs.linkRootShare = "TRUE" hgfs.mapRootShare = "TRUE" -ide1:0.present = "TRUE" -ide1:0.fileName = "{{ .ISOPath }}" -ide1:0.deviceType = "cdrom-image" + +scsi0.present = "{{ .SCSI_Present }}" +scsi0.virtualDev = "{{ .SCSI_diskAdapterType }}" +scsi0.pciSlotNumber = "16" +scsi0:0.redo = "" +sata0.present = "{{ .SATA_Present }}" +nvme0.present = "{{ .NVME_Present }}" + +{{ .DiskType }}0:0.present = "TRUE" +{{ .DiskType }}0:0.fileName = "{{ .DiskName }}.vmdk" + +{{ .CDROMType }}0:{{ .CDROMType_MasterSlave }}.present = "TRUE" +{{ .CDROMType }}0:{{ .CDROMType_MasterSlave }}.fileName = "{{ .ISOPath }}" +{{ .CDROMType }}0:{{ .CDROMType_MasterSlave }}.deviceType = "cdrom-image" + isolation.tools.hgfs.disable = "FALSE" memsize = "512" nvram = "{{ .Name }}.nvram" @@ -220,17 +708,37 @@ powerType.suspend = "soft" proxyApps.publishToHost = "FALSE" replay.filename = "" replay.supported = "FALSE" -scsi0.pciSlotNumber = "16" -scsi0.present = "TRUE" -scsi0.virtualDev = "lsilogic" -scsi0:0.fileName = "{{ .DiskName }}.vmdk" -scsi0:0.present = "TRUE" -scsi0:0.redo = "" -sound.startConnected = "FALSE" + +// Sound +sound.startConnected = "{{ .Sound_Present }}" +sound.present = "{{ .Sound_Present }}" +sound.fileName = "-1" +sound.autodetect = "TRUE" + tools.syncTime = "TRUE" tools.upgrade.policy = "upgradeAtPowerCycle" + +// USB usb.pciSlotNumber = "32" -usb.present = "FALSE" +usb.present = "{{ .Usb_Present }}" + +// Serial +serial0.present = "{{ .Serial_Present }}" +serial0.startConnected = "{{ .Serial_Present }}" +serial0.fileName = "{{ .Serial_Filename }}" +serial0.autodetect = "{{ .Serial_Auto }}" +serial0.fileType = "{{ .Serial_Type }}" +serial0.yieldOnMsrRead = "{{ .Serial_Yield }}" +serial0.pipe.endPoint = "{{ .Serial_Endpoint }}" +serial0.tryNoRxLoss = "{{ .Serial_Host }}" + +// Parallel +parallel0.present = "{{ .Parallel_Present }}" +parallel0.startConnected = "{{ .Parallel_Present }}" +parallel0.fileName = "{{ .Parallel_Filename }}" +parallel0.autodetect = "{{ .Parallel_Auto }}" +parallel0.bidirectional = "{{ .Parallel_Bidirectional }}" + virtualHW.productCompatibility = "hosted" virtualHW.version = "{{ .Version }}" vmci0.id = "1861462627" diff --git a/builder/vmware/iso/step_create_vmx_test.go b/builder/vmware/iso/step_create_vmx_test.go new file mode 100644 index 000000000..26605a1f5 --- /dev/null +++ b/builder/vmware/iso/step_create_vmx_test.go @@ -0,0 +1,410 @@ +package iso + +import ( + "bytes" + "fmt" + "io/ioutil" + "math" + "math/rand" + "os" + "path/filepath" + "runtime" + "strconv" + "strings" + + "testing" + + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/provisioner/shell" + "github.com/hashicorp/packer/template" +) + +var vmxTestBuilderConfig = map[string]string{ + "type": `"vmware-iso"`, + "iso_url": `"https://archive.org/download/ut-ttylinux-i686-12.6/ut-ttylinux-i686-12.6.iso"`, + "iso_checksum_type": `"md5"`, + "iso_checksum": `"43c1feeae55a44c6ef694b8eb18408a6"`, + "ssh_username": `"root"`, + "ssh_password": `"password"`, + "ssh_wait_timeout": `"45s"`, + "boot_command": 
`["","rootpassword","udhcpc"]`, + "shutdown_command": `"/sbin/shutdown -h; exit 0"`, +} + +var vmxTestProvisionerConfig = map[string]string{ + "type": `"shell"`, + "inline": `["echo hola mundo"]`, +} + +const vmxTestTemplate string = `{"builders":[{%s}],"provisioners":[{%s}]}` + +func tmpnam(prefix string) string { + var path string + var err error + + const length = 16 + + dir := os.TempDir() + max := int(math.Pow(2, float64(length))) + + n, err := rand.Intn(max), nil + for path = filepath.Join(dir, prefix+strconv.Itoa(n)); err == nil; _, err = os.Stat(path) { + n = rand.Intn(max) + path = filepath.Join(dir, prefix+strconv.Itoa(n)) + } + return path +} + +func createFloppyOutput(prefix string) (string, string, error) { + output := tmpnam(prefix) + f, err := os.Create(output) + if err != nil { + return "", "", fmt.Errorf("Unable to create empty %s: %s", output, err) + } + f.Close() + + vmxData := []string{ + `"floppy0.present":"TRUE"`, + `"floppy0.fileType":"file"`, + `"floppy0.clientDevice":"FALSE"`, + `"floppy0.fileName":"%s"`, + `"floppy0.startConnected":"TRUE"`, + } + + outputFile := strings.Replace(output, "\\", "\\\\", -1) + vmxString := fmt.Sprintf("{"+strings.Join(vmxData, ",")+"}", outputFile) + return output, vmxString, nil +} + +func readFloppyOutput(path string) (string, error) { + f, err := os.Open(path) + if err != nil { + return "", fmt.Errorf("Unable to open file %s", path) + } + defer f.Close() + data, err := ioutil.ReadAll(f) + if err != nil { + return "", fmt.Errorf("Unable to read file: %s", err) + } + if len(data) == 0 { + return "", nil + } + return string(data[:bytes.IndexByte(data, 0)]), nil +} + +func setupVMwareBuild(t *testing.T, builderConfig map[string]string, provisionerConfig map[string]string) error { + ui := packer.TestUi(t) + + // create builder config and update with user-supplied options + cfgBuilder := map[string]string{} + for k, v := range vmxTestBuilderConfig { + cfgBuilder[k] = v + } + for k, v := range builderConfig { + cfgBuilder[k] = v + } + + // convert our builder config into a single sprintfable string + builderLines := []string{} + for k, v := range cfgBuilder { + builderLines = append(builderLines, fmt.Sprintf(`"%s":%s`, k, v)) + } + + // create provisioner config and update with user-supplied options + cfgProvisioner := map[string]string{} + for k, v := range vmxTestProvisionerConfig { + cfgProvisioner[k] = v + } + for k, v := range provisionerConfig { + cfgProvisioner[k] = v + } + + // convert our provisioner config into a single sprintfable string + provisionerLines := []string{} + for k, v := range cfgProvisioner { + provisionerLines = append(provisionerLines, fmt.Sprintf(`"%s":%s`, k, v)) + } + + // and now parse them into a template + configString := fmt.Sprintf(vmxTestTemplate, strings.Join(builderLines, `,`), strings.Join(provisionerLines, `,`)) + + tpl, err := template.Parse(strings.NewReader(configString)) + if err != nil { + t.Fatalf("Unable to parse test config: %s", err) + } + + // create our config to test the vmware-iso builder + components := packer.ComponentFinder{ + Builder: func(n string) (packer.Builder, error) { + return &Builder{}, nil + }, + Hook: func(n string) (packer.Hook, error) { + return &packer.DispatchHook{}, nil + }, + PostProcessor: func(n string) (packer.PostProcessor, error) { + return &packer.MockPostProcessor{}, nil + }, + Provisioner: func(n string) (packer.Provisioner, error) { + return &shell.Provisioner{}, nil + }, + } + config := packer.CoreConfig{ + Template: tpl, + Components: components, + } + + 
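Aside: the test helper above treats every map value as a ready-made JSON fragment (note the embedded quotes in `vmxTestBuilderConfig`) and simply joins the key/value pairs into the `{"builders":[{...}],"provisioners":[{...}]}` template string before handing it to `template.Parse`. A minimal, standalone sketch of that assembly step, using only the standard library and hypothetical example settings:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// joinConfig mirrors the helper's approach: each value is already a JSON
// fragment, so the pairs can simply be joined with commas.
func joinConfig(cfg map[string]string) string {
	pairs := []string{}
	for k, v := range cfg {
		pairs = append(pairs, fmt.Sprintf(`"%s":%s`, k, v))
	}
	return strings.Join(pairs, ",")
}

func main() {
	builder := map[string]string{"type": `"vmware-iso"`, "ssh_username": `"root"`}
	provisioner := map[string]string{"type": `"shell"`, "inline": `["echo hola mundo"]`}

	tpl := fmt.Sprintf(`{"builders":[{%s}],"provisioners":[{%s}]}`,
		joinConfig(builder), joinConfig(provisioner))

	// The real helper passes this string to template.Parse; here we only
	// confirm the assembled string is well-formed JSON.
	fmt.Println(json.Valid([]byte(tpl))) // true
}
```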
// create a core using our template + core, err := packer.NewCore(&config) + if err != nil { + t.Fatalf("Unable to create core: %s", err) + } + + // now we can prepare our build + b, err := core.Build("vmware-iso") + if err != nil { + t.Fatalf("Unable to create build: %s", err) + } + + warn, err := b.Prepare() + if len(warn) > 0 { + for _, w := range warn { + t.Logf("Configuration warning: %s", w) + } + } + + // and then finally build it + cache := &packer.FileCache{CacheDir: os.TempDir()} + artifacts, err := b.Run(ui, cache) + if err != nil { + t.Fatalf("Failed to build artifact: %s", err) + } + + // check to see that we only got one artifact back + if len(artifacts) == 1 { + return artifacts[0].Destroy() + } + + // otherwise some number of errors happened + t.Logf("Unexpected number of artifacts returned: %d", len(artifacts)) + errors := make([]error, 0) + for _, artifact := range artifacts { + if err := artifact.Destroy(); err != nil { + errors = append(errors, err) + } + } + if len(errors) > 0 { + t.Errorf("%d Errors returned while trying to destroy artifacts", len(errors)) + return fmt.Errorf("Error while trying to destroy artifacts: %v", errors) + } + return nil +} + +func TestStepCreateVmx_SerialFile(t *testing.T) { + if os.Getenv("PACKER_ACC") == "" { + t.Skip("This test is only run with PACKER_ACC=1 due to the requirement of access to the VMware binaries.") + } + + tmpfile := tmpnam("SerialFileInput.") + + serialConfig := map[string]string{ + "serial": fmt.Sprintf(`"file:%s"`, filepath.ToSlash(tmpfile)), + } + + error := setupVMwareBuild(t, serialConfig, map[string]string{}) + if error != nil { + t.Errorf("Unable to read file: %s", error) + } + + f, err := os.Stat(tmpfile) + if err != nil { + t.Errorf("VMware builder did not create a file for serial port: %s", err) + } + + if f != nil { + if err := os.Remove(tmpfile); err != nil { + t.Fatalf("Unable to remove file %s: %s", tmpfile, err) + } + } +} + +func TestStepCreateVmx_SerialPort(t *testing.T) { + if os.Getenv("PACKER_ACC") == "" { + t.Skip("This test is only run with PACKER_ACC=1 due to the requirement of access to the VMware binaries.") + } + + var defaultSerial string + if runtime.GOOS == "windows" { + defaultSerial = "COM1" + } else { + defaultSerial = "/dev/ttyS0" + } + + config := map[string]string{ + "serial": fmt.Sprintf(`"device:%s"`, filepath.ToSlash(defaultSerial)), + } + provision := map[string]string{ + "inline": `"dmesg | egrep -o '^serial8250: ttyS1 at' > /dev/fd0"`, + } + + // where to write output + output, vmxData, err := createFloppyOutput("SerialPortOutput.") + if err != nil { + t.Fatalf("Error creating output: %s", err) + } + defer func() { + if _, err := os.Stat(output); err == nil { + os.Remove(output) + } + }() + config["vmx_data"] = vmxData + t.Logf("Preparing to write output to %s", output) + + // whee + err = setupVMwareBuild(t, config, provision) + if err != nil { + t.Errorf("%s", err) + } + + // check the output + data, err := readFloppyOutput(output) + if err != nil { + t.Errorf("%s", err) + } + + if data != "serial8250: ttyS1 at\n" { + t.Errorf("Serial port not detected : %v", data) + } +} + +func TestStepCreateVmx_ParallelPort(t *testing.T) { + if os.Getenv("PACKER_ACC") == "" { + t.Skip("This test is only run with PACKER_ACC=1 due to the requirement of access to the VMware binaries.") + } + + var defaultParallel string + if runtime.GOOS == "windows" { + defaultParallel = "LPT1" + } else { + defaultParallel = "/dev/lp0" + } + + config := map[string]string{ + "parallel": 
fmt.Sprintf(`"device:%s,uni"`, filepath.ToSlash(defaultParallel)), + } + provision := map[string]string{ + "inline": `"cat /proc/modules | egrep -o '^parport ' > /dev/fd0"`, + } + + // where to write output + output, vmxData, err := createFloppyOutput("ParallelPortOutput.") + if err != nil { + t.Fatalf("Error creating output: %s", err) + } + defer func() { + if _, err := os.Stat(output); err == nil { + os.Remove(output) + } + }() + config["vmx_data"] = vmxData + t.Logf("Preparing to write output to %s", output) + + // whee + error := setupVMwareBuild(t, config, provision) + if error != nil { + t.Errorf("%s", error) + } + + // check the output + data, err := readFloppyOutput(output) + if err != nil { + t.Errorf("%s", err) + } + + if data != "parport \n" { + t.Errorf("Parallel port not detected : %v", data) + } +} + +func TestStepCreateVmx_Usb(t *testing.T) { + if os.Getenv("PACKER_ACC") == "" { + t.Skip("This test is only run with PACKER_ACC=1 due to the requirement of access to the VMware binaries.") + } + + config := map[string]string{ + "usb": `"TRUE"`, + } + provision := map[string]string{ + "inline": `"dmesg | egrep -m1 -o 'USB hub found$' > /dev/fd0"`, + } + + // where to write output + output, vmxData, err := createFloppyOutput("UsbOutput.") + if err != nil { + t.Fatalf("Error creating output: %s", err) + } + defer func() { + if _, err := os.Stat(output); err == nil { + os.Remove(output) + } + }() + config["vmx_data"] = vmxData + t.Logf("Preparing to write output to %s", output) + + // whee + error := setupVMwareBuild(t, config, provision) + if error != nil { + t.Errorf("%s", error) + } + + // check the output + data, err := readFloppyOutput(output) + if err != nil { + t.Errorf("%s", err) + } + + if data != "USB hub found\n" { + t.Errorf("USB support not detected : %v", data) + } +} + +func TestStepCreateVmx_Sound(t *testing.T) { + if os.Getenv("PACKER_ACC") == "" { + t.Skip("This test is only run with PACKER_ACC=1 due to the requirement of access to the VMware binaries.") + } + + config := map[string]string{ + "sound": `"TRUE"`, + } + provision := map[string]string{ + "inline": `"cat /proc/modules | egrep -o '^soundcore' > /dev/fd0"`, + } + + // where to write output + output, vmxData, err := createFloppyOutput("SoundOutput.") + if err != nil { + t.Fatalf("Error creating output: %s", err) + } + defer func() { + if _, err := os.Stat(output); err == nil { + os.Remove(output) + } + }() + config["vmx_data"] = vmxData + t.Logf("Preparing to write output to %s", output) + + // whee + error := setupVMwareBuild(t, config, provision) + if error != nil { + t.Errorf("Unable to read file: %s", error) + } + + // check the output + data, err := readFloppyOutput(output) + if err != nil { + t.Errorf("%s", err) + } + + if data != "soundcore\n" { + t.Errorf("Soundcard not detected : %v", data) + } +} diff --git a/builder/vmware/iso/step_export.go b/builder/vmware/iso/step_export.go index 91a2ce486..c39443c8f 100644 --- a/builder/vmware/iso/step_export.go +++ b/builder/vmware/iso/step_export.go @@ -2,6 +2,7 @@ package iso import ( "bytes" + "context" "fmt" "net/url" "os" @@ -9,17 +10,21 @@ import ( "runtime" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) +// This step exports a VM built on ESXi using ovftool +// +// Uses: +// display_name string type StepExport struct { Format string SkipExport bool OutputDir string } -func (s *StepExport) generateArgs(c *Config, hidePassword bool) []string { +func (s *StepExport) 
generateArgs(c *Config, displayName string, hidePassword bool) []string { password := url.QueryEscape(c.RemotePassword) if hidePassword { password = "****" @@ -28,13 +33,13 @@ func (s *StepExport) generateArgs(c *Config, hidePassword bool) []string { "--noSSLVerify=true", "--skipManifestCheck", "-tt=" + s.Format, - "vi://" + c.RemoteUser + ":" + password + "@" + c.RemoteHost + "/" + c.VMName, + "vi://" + c.RemoteUser + ":" + password + "@" + c.RemoteHost + "/" + displayName, s.OutputDir, } return append(c.OVFToolOptions, args...) } -func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepExport) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { c := state.Get("config").(*Config) ui := state.Get("ui").(packer.Ui) @@ -44,8 +49,8 @@ func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionContinue } - if c.RemoteType != "esx5" || s.Format == "" { - ui.Say("Skipping export of virtual machine (export is allowed only for ESXi and the format needs to be specified)...") + if c.RemoteType != "esx5" { + ui.Say("Skipping export of virtual machine (export is allowed only for ESXi)...") return multistep.ActionContinue } @@ -71,9 +76,13 @@ func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction { } ui.Say("Exporting virtual machine...") - ui.Message(fmt.Sprintf("Executing: %s %s", ovftool, strings.Join(s.generateArgs(c, true), " "))) + var displayName string + if v, ok := state.GetOk("display_name"); ok { + displayName = v.(string) + } + ui.Message(fmt.Sprintf("Executing: %s %s", ovftool, strings.Join(s.generateArgs(c, displayName, true), " "))) var out bytes.Buffer - cmd := exec.Command(ovftool, s.generateArgs(c, false)...) + cmd := exec.Command(ovftool, s.generateArgs(c, displayName, false)...) 
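Aside: the change above has `generateArgs` build the `vi://` locator from the VM's display name rather than `VMName`, and the caller masks the password when the command line is only being echoed to the UI. A standalone sketch of that argument construction, with the `Config` fields flattened into plain parameters for illustration:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildOVFToolArgs sketches the masking behaviour: when hide is true the
// escaped password is replaced with "****" so it never reaches the log.
func buildOVFToolArgs(user, password, host, displayName, format, outputDir string, hide bool) []string {
	pw := url.QueryEscape(password)
	if hide {
		pw = "****"
	}
	return []string{
		"--noSSLVerify=true",
		"--skipManifestCheck",
		"-tt=" + format,
		"vi://" + user + ":" + pw + "@" + host + "/" + displayName,
		outputDir,
	}
}

func main() {
	shown := buildOVFToolArgs("root", "s3cret", "esxi.local", "packer-vm", "ovf", "output", true)
	fmt.Println("ovftool " + strings.Join(shown, " ")) // password masked for display
}
```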
cmd.Stdout = &out if err := cmd.Run(); err != nil { err := fmt.Errorf("Error exporting virtual machine: %s\n%s\n", err, out.String()) diff --git a/builder/vmware/iso/step_export_test.go b/builder/vmware/iso/step_export_test.go index 9ea0d64b9..27c2a0c36 100644 --- a/builder/vmware/iso/step_export_test.go +++ b/builder/vmware/iso/step_export_test.go @@ -1,8 +1,10 @@ package iso import ( - "github.com/mitchellh/multistep" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepExport_impl(t *testing.T) { @@ -17,7 +19,7 @@ func testStepExport_wrongtype_impl(t *testing.T, remoteType string) { config.RemoteType = "foo" state.Put("config", &config) - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/vmware/iso/step_register.go b/builder/vmware/iso/step_register.go index 509b6e017..c97800910 100644 --- a/builder/vmware/iso/step_register.go +++ b/builder/vmware/iso/step_register.go @@ -1,12 +1,13 @@ package iso import ( + "context" "fmt" "time" vmwcommon "github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type StepRegister struct { @@ -14,7 +15,7 @@ type StepRegister struct { Format string } -func (s *StepRegister) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepRegister) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(vmwcommon.Driver) ui := state.Get("ui").(packer.Ui) vmxPath := state.Get("vmx_path").(string) diff --git a/builder/vmware/iso/step_register_test.go b/builder/vmware/iso/step_register_test.go index 4862cbd6d..099e067ab 100644 --- a/builder/vmware/iso/step_register_test.go +++ b/builder/vmware/iso/step_register_test.go @@ -1,8 +1,10 @@ package iso import ( - "github.com/mitchellh/multistep" + "context" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepRegister_impl(t *testing.T) { @@ -16,7 +18,7 @@ func TestStepRegister_regularDriver(t *testing.T) { state.Put("vmx_path", "foo") // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -40,7 +42,7 @@ func TestStepRegister_remoteDriver(t *testing.T) { state.Put("vmx_path", "foo") // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -80,7 +82,7 @@ func TestStepRegister_WithoutUnregister_remoteDriver(t *testing.T) { state.Put("vmx_path", "foo") // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { diff --git a/builder/vmware/iso/step_remote_upload.go b/builder/vmware/iso/step_remote_upload.go index a9f6ecc80..05050326d 100644 --- a/builder/vmware/iso/step_remote_upload.go +++ b/builder/vmware/iso/step_remote_upload.go @@ -1,22 +1,24 @@ package iso import ( + "context" "fmt" "log" vmwcommon 
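Aside: nearly every step touched in this change set moves from `github.com/mitchellh/multistep` to `github.com/hashicorp/packer/helper/multistep`, whose `Run` method takes a `context.Context` as its first argument (visible in the updated signatures and the `step.Run(context.Background(), state)` test calls). A minimal sketch of a step written against the new contract; `multistep.BasicStateBag` is assumed to be available in the helper package, as it is not shown in this diff:

```go
package main

import (
	"context"
	"fmt"

	"github.com/hashicorp/packer/helper/multistep"
)

// noopStep illustrates the updated interface: Run receives a context and a
// state bag, and returns a StepAction.
type noopStep struct{}

func (s *noopStep) Run(_ context.Context, state multistep.StateBag) multistep.StepAction {
	state.Put("ran", true)
	return multistep.ActionContinue
}

func (s *noopStep) Cleanup(state multistep.StateBag) {}

func main() {
	state := new(multistep.BasicStateBag)
	step := &noopStep{}
	if action := step.Run(context.Background(), state); action == multistep.ActionContinue {
		fmt.Println("step continued; ran =", state.Get("ran"))
	}
}
```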
"github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // stepRemoteUpload uploads some thing from the state bag to a remote driver // (if it can) and stores that new remote path into the state bag. type stepRemoteUpload struct { - Key string - Message string + Key string + Message string + DoCleanup bool } -func (s *stepRemoteUpload) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepRemoteUpload) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(vmwcommon.Driver) ui := state.Get("ui").(packer.Ui) @@ -60,4 +62,25 @@ func (s *stepRemoteUpload) Run(state multistep.StateBag) multistep.StepAction { } func (s *stepRemoteUpload) Cleanup(state multistep.StateBag) { + if !s.DoCleanup { + return + } + + driver := state.Get("driver").(vmwcommon.Driver) + + remote, ok := driver.(RemoteDriver) + if !ok { + return + } + + path, ok := state.Get(s.Key).(string) + if !ok { + return + } + + log.Printf("Cleaning up remote path: %s", path) + err := remote.RemoveCache(path) + if err != nil { + log.Printf("Error cleaning up: %s", err) + } } diff --git a/builder/vmware/iso/step_test.go b/builder/vmware/iso/step_test.go index 783d38796..958ab4734 100644 --- a/builder/vmware/iso/step_test.go +++ b/builder/vmware/iso/step_test.go @@ -2,10 +2,11 @@ package iso import ( "bytes" - vmwcommon "github.com/hashicorp/packer/builder/vmware/common" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "testing" + + vmwcommon "github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) func testState(t *testing.T) multistep.StateBag { diff --git a/builder/vmware/iso/step_upload_vmx.go b/builder/vmware/iso/step_upload_vmx.go index 39532674b..f0fec436c 100644 --- a/builder/vmware/iso/step_upload_vmx.go +++ b/builder/vmware/iso/step_upload_vmx.go @@ -1,11 +1,13 @@ package iso import ( + "context" "fmt" - vmwcommon "github.com/hashicorp/packer/builder/vmware/common" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "path/filepath" + + vmwcommon "github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // This step upload the VMX to the remote host @@ -21,7 +23,7 @@ type StepUploadVMX struct { RemoteType string } -func (c *StepUploadVMX) Run(state multistep.StateBag) multistep.StepAction { +func (c *StepUploadVMX) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(vmwcommon.Driver) ui := state.Get("ui").(packer.Ui) diff --git a/builder/vmware/vmx/builder.go b/builder/vmware/vmx/builder.go index b7c70d548..dbb74b7a1 100644 --- a/builder/vmware/vmx/builder.go +++ b/builder/vmware/vmx/builder.go @@ -9,8 +9,8 @@ import ( vmwcommon "github.com/hashicorp/packer/builder/vmware/common" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/communicator" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // Builder implements packer.Builder and builds the actual VMware @@ -32,7 +32,7 @@ func (b *Builder) Prepare(raws ...interface{}) ([]string, error) { } // Run executes a Packer build and returns a packer.Artifact representing -// a VirtualBox appliance. +// a VMware image. 
func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) { driver, err := vmwcommon.NewDriver(&b.config.DriverConfig, &b.config.SSHConfig) if err != nil { @@ -69,6 +69,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe OutputDir: b.config.OutputDir, Path: b.config.SourcePath, VMName: b.config.VMName, + Linked: b.config.Linked, }, &vmwcommon.StepConfigureVMX{ CustomData: b.config.VMXData, @@ -87,13 +88,13 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe VNCDisablePassword: b.config.VNCDisablePassword, }, &vmwcommon.StepRun{ - BootWait: b.config.BootWait, DurationBeforeStop: 5 * time.Second, Headless: b.config.Headless, }, &vmwcommon.StepTypeBootCommand{ + BootWait: b.config.BootWait, VNCEnabled: !b.config.DisableVNC, - BootCommand: b.config.BootCommand, + BootCommand: b.config.FlatBootCommand(), VMName: b.config.VMName, Ctx: b.config.ctx, }, diff --git a/builder/vmware/vmx/config.go b/builder/vmware/vmx/config.go index b4d1c009f..1e32ac7b7 100644 --- a/builder/vmware/vmx/config.go +++ b/builder/vmware/vmx/config.go @@ -6,6 +6,7 @@ import ( vmwcommon "github.com/hashicorp/packer/builder/vmware/common" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/common/bootcommand" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" @@ -16,6 +17,7 @@ type Config struct { common.PackerConfig `mapstructure:",squash"` common.HTTPConfig `mapstructure:",squash"` common.FloppyConfig `mapstructure:",squash"` + bootcommand.VNCConfig `mapstructure:",squash"` vmwcommon.DriverConfig `mapstructure:",squash"` vmwcommon.OutputConfig `mapstructure:",squash"` vmwcommon.RunConfig `mapstructure:",squash"` @@ -24,6 +26,7 @@ type Config struct { vmwcommon.ToolsConfig `mapstructure:",squash"` vmwcommon.VMXConfig `mapstructure:",squash"` + Linked bool `mapstructure:"linked"` RemoteType string `mapstructure:"remote_type"` SkipCompaction bool `mapstructure:"skip_compaction"` SourcePath string `mapstructure:"source_path"` @@ -65,6 +68,7 @@ func NewConfig(raws ...interface{}) (*Config, []string, error) { errs = packer.MultiErrorAppend(errs, c.ToolsConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.VMXConfig.Prepare(&c.ctx)...) errs = packer.MultiErrorAppend(errs, c.FloppyConfig.Prepare(&c.ctx)...) + errs = packer.MultiErrorAppend(errs, c.VNCConfig.Prepare(&c.ctx)...) if c.SourcePath == "" { errs = packer.MultiErrorAppend(errs, fmt.Errorf("source_path is blank, but is required")) diff --git a/builder/vmware/vmx/step_clone_vmx.go b/builder/vmware/vmx/step_clone_vmx.go index eaa6607c5..848476e9a 100644 --- a/builder/vmware/vmx/step_clone_vmx.go +++ b/builder/vmware/vmx/step_clone_vmx.go @@ -1,13 +1,15 @@ package vmx import ( + "context" "fmt" "log" "path/filepath" + "regexp" vmwcommon "github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepCloneVMX takes a VMX file and clones the VM into the output directory. 
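Aside: in the StepCloneVMX hunk that follows, attached disks are no longer looked up by a handful of hard-coded keys; instead every VMX key shaped like `scsi0:3.fileName` (and the sata/ide/nvme variants) is matched and any `.vmdk` value is kept. A standalone sketch of that enumeration, reusing the same regular expression against a hypothetical VMX data map:

```go
package main

import (
	"fmt"
	"path/filepath"
	"regexp"
)

// Same pattern as the clone step: adapter type, bus, unit, then ".fileName",
// matched case-insensitively because ReadVMX lowercases keys.
var diskPathKeyRe = regexp.MustCompile(`(?i)^(scsi|sata|ide|nvme)[[:digit:]]:[[:digit:]]{1,2}\.fileName`)

func attachedDisks(vmxData map[string]string) []string {
	var disks []string
	for k, v := range vmxData {
		if diskPathKeyRe.MatchString(k) && filepath.Ext(v) == ".vmdk" {
			disks = append(disks, v)
		}
	}
	return disks
}

func main() {
	vmx := map[string]string{
		"scsi0:0.filename":         "disk.vmdk",
		"nvme0:1.filename":         "scratch.vmdk",
		"ide1:0.filename":          "boot.iso", // not a .vmdk, skipped
		"ethernet0.connectiontype": "nat",      // unrelated key, skipped
	}
	fmt.Println(attachedDisks(vmx)) // [disk.vmdk scratch.vmdk] (map order varies)
}
```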
@@ -15,46 +17,89 @@ type StepCloneVMX struct { OutputDir string Path string VMName string + Linked bool } -func (s *StepCloneVMX) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCloneVMX) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { driver := state.Get("driver").(vmwcommon.Driver) ui := state.Get("ui").(packer.Ui) + // Set the path we want for the new .vmx file and clone vmxPath := filepath.Join(s.OutputDir, s.VMName+".vmx") - ui.Say("Cloning source VM...") log.Printf("Cloning from: %s", s.Path) log.Printf("Cloning to: %s", vmxPath) - if err := driver.Clone(vmxPath, s.Path); err != nil { + if err := driver.Clone(vmxPath, s.Path, s.Linked); err != nil { state.Put("error", err) return multistep.ActionHalt } + // Read in the machine configuration from the cloned VMX file + // + // * The main driver needs the path to the vmx (set above) and the + // network type so that it can work out things like IP's and MAC + // addresses + // * The disk compaction step needs the paths to all attached disks vmxData, err := vmwcommon.ReadVMX(vmxPath) if err != nil { state.Put("error", err) return multistep.ActionHalt } - var diskName string - if _, ok := vmxData["scsi0:0.filename"]; ok { - diskName = vmxData["scsi0:0.filename"] + var diskFilenames []string + // The VMX file stores the path to a configured disk, and information + // about that disks attachment to a virtual adapter/controller, as a + // key/value pair. + // For a virtual disk attached to bus ID 3 of the virtual machines + // first SCSI adapter the key/value pair would look something like: + // scsi0:3.fileName = "relative/path/to/scsiDisk.vmdk" + // The supported adapter types and configuration maximums for each type + // vary according to the VMware platform type and version, and the + // Virtual Machine Hardware version used. 
See the 'Virtual Machine + // Maximums' section within VMware's 'Configuration Maximums' + // documentation for each platform: + // https://kb.vmware.com/s/article/1003497 + // Information about the supported Virtual Machine Hardware versions: + // https://kb.vmware.com/s/article/1003746 + // The following regexp is used to match all possible disk attachment + // points that may be found in the VMX file across all VMware + // platforms/versions and Virtual Machine Hardware versions + diskPathKeyRe := regexp.MustCompile(`(?i)^(scsi|sata|ide|nvme)[[:digit:]]:[[:digit:]]{1,2}\.fileName`) + for k, v := range vmxData { + match := diskPathKeyRe.FindString(k) + if match != "" && filepath.Ext(v) == ".vmdk" { + diskFilenames = append(diskFilenames, v) + } } - if _, ok := vmxData["sata0:0.filename"]; ok { - diskName = vmxData["sata0:0.filename"] + + // Write out the relative, host filesystem paths to the disks + var diskFullPaths []string + for _, diskFilename := range diskFilenames { + log.Printf("Found attached disk with filename: %s", diskFilename) + diskFullPaths = append(diskFullPaths, filepath.Join(s.OutputDir, diskFilename)) } - if _, ok := vmxData["ide0:0.filename"]; ok { - diskName = vmxData["ide0:0.filename"] - } - if diskName == "" { - err := fmt.Errorf("Root disk filename could not be found!") - state.Put("error", err) + + if len(diskFullPaths) == 0 { + state.Put("error", fmt.Errorf("Could not enumerate disk info from the vmx file")) return multistep.ActionHalt } - state.Put("full_disk_path", filepath.Join(s.OutputDir, diskName)) + // Determine the network type by reading out of the .vmx + var networkType string + if _, ok := vmxData["ethernet0.connectiontype"]; ok { + networkType = vmxData["ethernet0.connectiontype"] + log.Printf("Discovered the network type: %s", networkType) + } + if networkType == "" { + networkType = "nat" + log.Printf("Defaulting to network type: %s", networkType) + } + + // Stash all required information in our state bag state.Put("vmx_path", vmxPath) + state.Put("disk_full_paths", diskFullPaths) + state.Put("vmnetwork", networkType) + return multistep.ActionContinue } diff --git a/builder/vmware/vmx/step_clone_vmx_test.go b/builder/vmware/vmx/step_clone_vmx_test.go index 3b4cea1d5..d781df687 100644 --- a/builder/vmware/vmx/step_clone_vmx_test.go +++ b/builder/vmware/vmx/step_clone_vmx_test.go @@ -1,13 +1,23 @@ package vmx import ( + "context" + "fmt" "io/ioutil" "os" "path/filepath" "testing" vmwcommon "github.com/hashicorp/packer/builder/vmware/common" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/helper/multistep" + "github.com/stretchr/testify/assert" +) + +const ( + scsiFilename = "scsiDisk.vmdk" + sataFilename = "sataDisk.vmdk" + nvmeFilename = "nvmeDisk.vmdk" + ideFilename = "ideDisk.vmdk" ) func TestStepCloneVMX_impl(t *testing.T) { @@ -22,6 +32,22 @@ func TestStepCloneVMX(t *testing.T) { } defer os.RemoveAll(td) + // Set up mock vmx file contents + var testCloneVMX = fmt.Sprintf("scsi0:0.filename = \"%s\"\n"+ + "sata0:0.filename = \"%s\"\n"+ + "nvme0:0.filename = \"%s\"\n"+ + "ide1:0.filename = \"%s\"\n"+ + "ide0:0.filename = \"auto detect\"\n"+ + "ethernet0.connectiontype = \"nat\"\n", scsiFilename, + sataFilename, nvmeFilename, ideFilename) + + // Set up expected mock disk file paths + diskFilenames := []string{scsiFilename, sataFilename, ideFilename, nvmeFilename} + var diskFullPaths []string + for _, diskFilename := range diskFilenames { + diskFullPaths = append(diskFullPaths, filepath.Join(td, diskFilename)) + } + // Create the 
source sourcePath := filepath.Join(td, "source.vmx") if err := ioutil.WriteFile(sourcePath, []byte(testCloneVMX), 0644); err != nil { @@ -43,7 +69,7 @@ func TestStepCloneVMX(t *testing.T) { driver := state.Get("driver").(*vmwcommon.DriverMock) // Test the run - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } if _, ok := state.GetOk("error"); ok { @@ -59,16 +85,20 @@ func TestStepCloneVMX(t *testing.T) { if vmxPath, ok := state.GetOk("vmx_path"); !ok { t.Fatal("should set vmx_path") } else if vmxPath != destPath { - t.Fatalf("bad: %#v", vmxPath) + t.Fatalf("bad path to vmx: %#v", vmxPath) } - if diskPath, ok := state.GetOk("full_disk_path"); !ok { - t.Fatal("should set full_disk_path") - } else if diskPath != filepath.Join(td, "foo") { - t.Fatalf("bad: %#v", diskPath) + if stateDiskPaths, ok := state.GetOk("disk_full_paths"); !ok { + t.Fatal("should set disk_full_paths") + } else { + assert.ElementsMatchf(t, stateDiskPaths.([]string), diskFullPaths, + "%s\nshould contain the same elements as:\n%s", stateDiskPaths.([]string), diskFullPaths) + } + + // Test we got the network type + if networkType, ok := state.GetOk("vmnetwork"); !ok { + t.Fatal("should set vmnetwork") + } else if networkType != "nat" { + t.Fatalf("bad network type: %#v", networkType) } } - -const testCloneVMX = ` -scsi0:0.fileName = "foo" -` diff --git a/builder/vmware/vmx/step_test.go b/builder/vmware/vmx/step_test.go index 60e2f890c..81439b3c4 100644 --- a/builder/vmware/vmx/step_test.go +++ b/builder/vmware/vmx/step_test.go @@ -5,8 +5,8 @@ import ( "testing" vmwcommon "github.com/hashicorp/packer/builder/vmware/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func testState(t *testing.T) multistep.StateBag { diff --git a/command/build.go b/command/build.go index de9da51ee..39e761ba5 100644 --- a/command/build.go +++ b/command/build.go @@ -9,17 +9,20 @@ import ( "strconv" "strings" "sync" + "syscall" "github.com/hashicorp/packer/helper/enumflag" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template" + + "github.com/posener/complete" ) type BuildCommand struct { Meta } -func (c BuildCommand) Run(args []string) int { +func (c *BuildCommand) Run(args []string) int { var cfgColor, cfgDebug, cfgForce, cfgParallel bool var cfgOnError string flags := c.Meta.FlagSet("build", FlagSetBuildFilter|FlagSetVars) @@ -138,13 +141,15 @@ func (c BuildCommand) Run(args []string) int { m map[string][]packer.Artifact }{m: make(map[string][]packer.Artifact)} errors := make(map[string]error) + // ctx := context.Background() for _, b := range builds { // Increment the waitgroup so we wait for this item to finish properly wg.Add(1) + // buildCtx, cancelCtx := ctx.WithCancel() // Handle interrupts for this build sigCh := make(chan os.Signal, 1) - signal.Notify(sigCh, os.Interrupt) + signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM) defer signal.Stop(sigCh) go func(b packer.Build) { <-sigCh @@ -154,6 +159,7 @@ func (c BuildCommand) Run(args []string) int { log.Printf("Stopping build: %s", b.Name()) b.Cancel() + //cancelCtx() log.Printf("Build cancelled: %s", b.Name()) }(b) @@ -279,7 +285,7 @@ func (c BuildCommand) Run(args []string) int { return 0 } -func (BuildCommand) Help() string { +func (*BuildCommand) Help() string { helpText := ` Usage: packer build [options] TEMPLATE @@ -303,6 +309,25 @@ 
Options: return strings.TrimSpace(helpText) } -func (BuildCommand) Synopsis() string { +func (*BuildCommand) Synopsis() string { return "build image(s) from template" } + +func (*BuildCommand) AutocompleteArgs() complete.Predictor { + return complete.PredictNothing +} + +func (*BuildCommand) AutocompleteFlags() complete.Flags { + return complete.Flags{ + "-color": complete.PredictNothing, + "-debug": complete.PredictNothing, + "-except": complete.PredictNothing, + "-only": complete.PredictNothing, + "-force": complete.PredictNothing, + "-machine-readable": complete.PredictNothing, + "-on-error": complete.PredictNothing, + "-parallel": complete.PredictNothing, + "-var": complete.PredictNothing, + "-var-file": complete.PredictNothing, + } +} diff --git a/command/fix.go b/command/fix.go index 1a94f5786..d04cdfd0b 100644 --- a/command/fix.go +++ b/command/fix.go @@ -10,6 +10,8 @@ import ( "github.com/hashicorp/packer/fix" "github.com/hashicorp/packer/template" + + "github.com/posener/complete" ) type FixCommand struct { @@ -85,7 +87,7 @@ func (c *FixCommand) Run(args []string) int { c.Ui.Say(result) if flagValidate { - // Attemot to parse and validate the template + // Attempt to parse and validate the template tpl, err := template.Parse(strings.NewReader(result)) if err != nil { c.Ui.Error(fmt.Sprintf( @@ -120,14 +122,37 @@ Usage: packer fix [options] TEMPLATE Fixes that are run: - iso-md5 Replaces "iso_md5" in builders with newer "iso_checksum" - createtime Replaces ".CreateTime" in builder configs with "{{timestamp}}" - virtualbox-gaattach Updates VirtualBox builders using "guest_additions_attach" - to use "guest_additions_mode" - pp-vagrant-override Replaces old-style provider overrides for the Vagrant - post-processor to new-style as of Packer 0.5.0. - virtualbox-rename Updates "virtualbox" builders to "virtualbox-iso" - vmware-rename Updates "vmware" builders to "vmware-iso" + iso-md5 Replaces "iso_md5" in builders with newer + "iso_checksum" + createtime Replaces ".CreateTime" in builder configs with + "{{timestamp}}" + virtualbox-gaattach Updates VirtualBox builders using + "guest_additions_attach" to use + "guest_additions_mode" + pp-vagrant-override Replaces old-style provider overrides for the + Vagrant post-processor to new-style as of Packer + 0.5.0. 
+ virtualbox-rename Updates "virtualbox" builders to "virtualbox-iso" + vmware-rename Updates "vmware" builders to "vmware-iso" + parallels-headless Removes unused "headless" setting from Parallels + builders + parallels-deprecations Removes deprecated "parallels_tools_host_path" from + Parallels builders + sshkeypath Updates builders using "ssh_key_path" to use + "ssh_private_key_file" + sshdisableagent Updates builders using "ssh_disable_agent" to use + "ssh_disable_agent_forwarding" + manifest-filename Updates "manifest" post-processor so any "filename" + field is renamed to "output" + amazon-shutdown_behavior Changes "shutdown_behaviour" to "shutdown_behavior" + in Amazon builders + amazon-enhanced-networking Replaces "enhanced_networking" in builders with + "ena_support" + amazon-private-ip Replaces "ssh_private_ip": true in amazon builders + with "ssh_interface": "private_ip" + docker-email Removes "login_email" from the Docker builder + powershell-escapes Removes PowerShell escapes from user env vars and + elevated username and password strings Options: @@ -140,3 +165,13 @@ Options: func (c *FixCommand) Synopsis() string { return "fixes templates from old versions of packer" } + +func (c *FixCommand) AutocompleteArgs() complete.Predictor { + return complete.PredictNothing +} + +func (c *FixCommand) AutocompleteFlags() complete.Flags { + return complete.Flags{ + "-validate": complete.PredictNothing, + } +} diff --git a/command/inspect.go b/command/inspect.go index 71ddbba1d..f71efcf20 100644 --- a/command/inspect.go +++ b/command/inspect.go @@ -6,6 +6,8 @@ import ( "strings" "github.com/hashicorp/packer/template" + + "github.com/posener/complete" ) type InspectCommand struct { @@ -160,3 +162,13 @@ Options: func (c *InspectCommand) Synopsis() string { return "see components of a template" } + +func (c *InspectCommand) AutocompleteArgs() complete.Predictor { + return complete.PredictNothing +} + +func (c *InspectCommand) AutocompleteFlags() complete.Flags { + return complete.Flags{ + "-machine-readable": complete.PredictNothing, + } +} diff --git a/command/plugin.go b/command/plugin.go index d9e7ea577..34062a11e 100644 --- a/command/plugin.go +++ b/command/plugin.go @@ -29,14 +29,17 @@ import ( hypervvmcxbuilder "github.com/hashicorp/packer/builder/hyperv/vmcx" lxcbuilder "github.com/hashicorp/packer/builder/lxc" lxdbuilder "github.com/hashicorp/packer/builder/lxd" + ncloudbuilder "github.com/hashicorp/packer/builder/ncloud" nullbuilder "github.com/hashicorp/packer/builder/null" oneandonebuilder "github.com/hashicorp/packer/builder/oneandone" openstackbuilder "github.com/hashicorp/packer/builder/openstack" + oracleclassicbuilder "github.com/hashicorp/packer/builder/oracle/classic" oracleocibuilder "github.com/hashicorp/packer/builder/oracle/oci" parallelsisobuilder "github.com/hashicorp/packer/builder/parallels/iso" parallelspvmbuilder "github.com/hashicorp/packer/builder/parallels/pvm" profitbricksbuilder "github.com/hashicorp/packer/builder/profitbricks" qemubuilder "github.com/hashicorp/packer/builder/qemu" + scalewaybuilder "github.com/hashicorp/packer/builder/scaleway" tritonbuilder "github.com/hashicorp/packer/builder/triton" virtualboxisobuilder "github.com/hashicorp/packer/builder/virtualbox/iso" virtualboxovfbuilder "github.com/hashicorp/packer/builder/virtualbox/ovf" @@ -53,6 +56,7 @@ import ( dockersavepostprocessor "github.com/hashicorp/packer/post-processor/docker-save" dockertagpostprocessor "github.com/hashicorp/packer/post-processor/docker-tag" 
googlecomputeexportpostprocessor "github.com/hashicorp/packer/post-processor/googlecompute-export" + googlecomputeimportpostprocessor "github.com/hashicorp/packer/post-processor/googlecompute-import" manifestpostprocessor "github.com/hashicorp/packer/post-processor/manifest" shelllocalpostprocessor "github.com/hashicorp/packer/post-processor/shell-local" vagrantpostprocessor "github.com/hashicorp/packer/post-processor/vagrant" @@ -96,14 +100,17 @@ var Builders = map[string]packer.Builder{ "hyperv-vmcx": new(hypervvmcxbuilder.Builder), "lxc": new(lxcbuilder.Builder), "lxd": new(lxdbuilder.Builder), + "ncloud": new(ncloudbuilder.Builder), "null": new(nullbuilder.Builder), "oneandone": new(oneandonebuilder.Builder), "openstack": new(openstackbuilder.Builder), + "oracle-classic": new(oracleclassicbuilder.Builder), "oracle-oci": new(oracleocibuilder.Builder), "parallels-iso": new(parallelsisobuilder.Builder), "parallels-pvm": new(parallelspvmbuilder.Builder), "profitbricks": new(profitbricksbuilder.Builder), "qemu": new(qemubuilder.Builder), + "scaleway": new(scalewaybuilder.Builder), "triton": new(tritonbuilder.Builder), "virtualbox-iso": new(virtualboxisobuilder.Builder), "virtualbox-ovf": new(virtualboxovfbuilder.Builder), @@ -140,6 +147,7 @@ var PostProcessors = map[string]packer.PostProcessor{ "docker-save": new(dockersavepostprocessor.PostProcessor), "docker-tag": new(dockertagpostprocessor.PostProcessor), "googlecompute-export": new(googlecomputeexportpostprocessor.PostProcessor), + "googlecompute-import": new(googlecomputeimportpostprocessor.PostProcessor), "manifest": new(manifestpostprocessor.PostProcessor), "shell-local": new(shelllocalpostprocessor.PostProcessor), "vagrant": new(vagrantpostprocessor.PostProcessor), diff --git a/command/push.go b/command/push.go index e1d6c7db2..41685e949 100644 --- a/command/push.go +++ b/command/push.go @@ -8,12 +8,15 @@ import ( "path/filepath" "regexp" "strings" + "syscall" "github.com/hashicorp/atlas-go/archive" "github.com/hashicorp/atlas-go/v1" "github.com/hashicorp/packer/helper/flag-kv" "github.com/hashicorp/packer/helper/flag-slice" "github.com/hashicorp/packer/template" + + "github.com/posener/complete" ) // archiveTemplateEntry is the name the template always takes within the slug. @@ -182,15 +185,15 @@ func (c *PushCommand) Run(args []string) int { info := &uploadBuildInfo{Type: b.Type} // todo: remove post-migration if b.Type == "vagrant" { - c.Ui.Message("\n-----------------------------------------------------------------------------------\n" + - "Warning: Vagrant-related functionality will be moved from Terraform Enterprise into \n" + - "its own product, Vagrant Cloud. This migration is currently planned for June 27th, \n" + - "2017 at 6PM EDT/3PM PDT/10PM UTC. For more information see \n" + + c.Ui.Error("\n-----------------------------------------------------------------------------------\n" + + "Vagrant-related functionality has been moved from Terraform Enterprise into \n" + + "its own product, Vagrant Cloud. For more information see " + "https://www.vagrantup.com/docs/vagrant-cloud/vagrant-cloud-migration.html\n" + - "In the meantime, you should activate your Vagrant Cloud account and replace your \n" + - "Atlas post-processor with the Vagrant Cloud post-processor. See\n" + - "https://www.packer.io/docs/post-processors/vagrant-cloud.html for more details." 
+ + "Please replace the Atlas post-processor with the Vagrant Cloud post-processor,\n" + + "and see https://www.packer.io/docs/post-processors/vagrant-cloud.html for\n" + + "more detail.\n" + "-----------------------------------------------------------------------------------\n") + return 1 } // Determine if we're artifacting this build @@ -250,6 +253,15 @@ func (c *PushCommand) Run(args []string) int { "Builds: %s\n\n", strings.Join(badBuilds, ", "))) } + c.Ui.Message("\n-----------------------------------------------------------------------\n" + + "Deprecation warning: The Packer and Artifact Registry features of Atlas\n" + + "will no longer be actively developed or maintained and will be fully\n" + + "decommissioned. Please see our guide on building immutable\n" + + "infrastructure with Packer on CI/CD for ideas on implementing\n" + + "these features yourself: https://www.packer.io/guides/packer-on-cicd/\n" + + "-----------------------------------------------------------------------\n", + ) + // Start the archiving process r, err := archive.CreateArchive(path, &opts) if err != nil { @@ -267,7 +279,7 @@ func (c *PushCommand) Run(args []string) int { // Make a ctrl-C channel sigCh := make(chan os.Signal, 1) - signal.Notify(sigCh, os.Interrupt) + signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM) defer signal.Stop(sigCh) err = nil @@ -324,6 +336,20 @@ func (*PushCommand) Synopsis() string { return "push a template and supporting files to a Packer build service" } +func (*PushCommand) AutocompleteArgs() complete.Predictor { + return complete.PredictNothing +} + +func (*PushCommand) AutocompleteFlags() complete.Flags { + return complete.Flags{ + "-name": complete.PredictNothing, + "-token": complete.PredictNothing, + "-sensitive": complete.PredictNothing, + "-var": complete.PredictNothing, + "-var-file": complete.PredictNothing, + } +} + func (c *PushCommand) upload( r *archive.Archive, opts *uploadOpts) (<-chan struct{}, <-chan error, error) { if c.uploadFn != nil { diff --git a/command/validate.go b/command/validate.go index a2fb47ff6..f31b106dd 100644 --- a/command/validate.go +++ b/command/validate.go @@ -1,12 +1,17 @@ package command import ( + "encoding/json" "fmt" "log" "strings" + "github.com/hashicorp/packer/fix" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template" + + "github.com/google/go-cmp/cmp" + "github.com/posener/complete" ) type ValidateCommand struct { @@ -78,6 +83,58 @@ func (c *ValidateCommand) Run(args []string) int { } } + // Check if any of the configuration is fixable + var rawTemplateData map[string]interface{} + input := make(map[string]interface{}) + templateData := make(map[string]interface{}) + json.Unmarshal(tpl.RawContents, &rawTemplateData) + for k, v := range rawTemplateData { + if vals, ok := v.([]interface{}); ok { + if len(vals) == 0 { + continue + } + } + templateData[strings.ToLower(k)] = v + input[strings.ToLower(k)] = v + } + + // fix rawTemplateData into input + for _, name := range fix.FixerOrder { + var err error + fixer, ok := fix.Fixers[name] + if !ok { + panic("fixer not found: " + name) + } + input, err = fixer.Fix(input) + if err != nil { + c.Ui.Error(fmt.Sprintf("Error checking against fixers: %s", err)) + return 1 + } + } + // delete empty top-level keys since the fixers seem to add them + // willy-nilly + for k := range input { + ml, ok := input[k].([]map[string]interface{}) + if !ok { + continue + } + if len(ml) == 0 { + delete(input, k) + } + } + // marshal/unmarshal to make comparable to templateData + var 
fixedData map[string]interface{} + // Guaranteed to be valid json, so we can ignore errors + j, _ := json.Marshal(input) + json.Unmarshal(j, &fixedData) + + if diff := cmp.Diff(templateData, fixedData); diff != "" { + c.Ui.Say("[warning] Fixable configuration found.") + c.Ui.Say("You may need to run `packer fix` to get your build to run") + c.Ui.Say("correctly. See debug log for more information.\n") + log.Printf("Fixable config differences:\n%s", diff) + } + if len(errs) > 0 { c.Ui.Error("Template validation failed. Errors are shown below.\n") for i, err := range errs { @@ -87,7 +144,6 @@ func (c *ValidateCommand) Run(args []string) int { c.Ui.Error("") } } - return 1 } @@ -136,3 +192,17 @@ Options: func (*ValidateCommand) Synopsis() string { return "check that a template is valid" } + +func (*ValidateCommand) AutocompleteArgs() complete.Predictor { + return complete.PredictNothing +} + +func (*ValidateCommand) AutocompleteFlags() complete.Flags { + return complete.Flags{ + "-syntax-only": complete.PredictNothing, + "-except": complete.PredictNothing, + "-only": complete.PredictNothing, + "-var": complete.PredictNothing, + "-var-file": complete.PredictNothing, + } +} diff --git a/command/version.go b/command/version.go index 1142e1747..3d45205bd 100644 --- a/command/version.go +++ b/command/version.go @@ -1,18 +1,16 @@ package command import ( - "bytes" "fmt" + + "github.com/hashicorp/packer/version" ) // VersionCommand is a Command implementation prints the version. type VersionCommand struct { Meta - Revision string - Version string - VersionPrerelease string - CheckFunc VersionCheckFunc + CheckFunc VersionCheckFunc } // VersionCheckFunc is the callback called by the Version command to @@ -29,25 +27,15 @@ type VersionCheckInfo struct { } func (c *VersionCommand) Help() string { - return "" + return "Prints the Packer version, and checks for new release." } func (c *VersionCommand) Run(args []string) int { - c.Ui.Machine("version", c.Version) - c.Ui.Machine("version-prelease", c.VersionPrerelease) - c.Ui.Machine("version-commit", c.Revision) + c.Ui.Machine("version", version.Version) + c.Ui.Machine("version-prelease", version.VersionPrerelease) + c.Ui.Machine("version-commit", version.GitCommit) - var versionString bytes.Buffer - fmt.Fprintf(&versionString, "Packer v%s", c.Version) - if c.VersionPrerelease != "" { - fmt.Fprintf(&versionString, "-%s", c.VersionPrerelease) - - if c.Revision != "" { - fmt.Fprintf(&versionString, " (%s)", c.Revision) - } - } - - c.Ui.Say(versionString.String()) + c.Ui.Say(fmt.Sprintf("Packer v%s", version.FormattedVersion())) // If we have a version check function, then let's check for // the latest version as well. diff --git a/commands.go b/commands.go index 17b6a7f36..47ee467a5 100644 --- a/commands.go +++ b/commands.go @@ -2,7 +2,6 @@ package main import ( "github.com/hashicorp/packer/command" - "github.com/hashicorp/packer/version" "github.com/mitchellh/cli" ) @@ -50,11 +49,8 @@ func init() { "version": func() (cli.Command, error) { return &command.VersionCommand{ - Meta: *CommandMeta, - Revision: version.GitCommit, - Version: version.Version, - VersionPrerelease: version.VersionPrerelease, - CheckFunc: commandVersionCheck, + Meta: *CommandMeta, + CheckFunc: commandVersionCheck, }, nil }, diff --git a/common/bootcommand/boot_command.go b/common/bootcommand/boot_command.go new file mode 100644 index 000000000..95133f334 --- /dev/null +++ b/common/bootcommand/boot_command.go @@ -0,0 +1,2074 @@ +// Code generated by pigeon; DO NOT EDIT. 
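Aside: the validate command change above detects "fixable" templates by running every registered fixer over the raw template data, normalising both sides through a JSON round-trip, and warning when `cmp.Diff` reports any difference. A standalone sketch of that comparison idea; the before/after values below are illustrative only, modelled on the `iso-md5` and `vmware-rename` fixers listed in the fix help text:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// roundTrip normalises a value through JSON so the two sides of the
// comparison use the same generic types, mirroring the marshal/unmarshal
// step in validate.go.
func roundTrip(v interface{}) map[string]interface{} {
	var out map[string]interface{}
	b, _ := json.Marshal(v) // inputs here are known to marshal cleanly
	json.Unmarshal(b, &out)
	return out
}

func main() {
	original := map[string]interface{}{
		"builders": []interface{}{map[string]interface{}{"type": "vmware", "iso_md5": "abc"}},
	}
	// What a fixer pass might produce: builder renamed, checksum fields updated.
	fixed := map[string]interface{}{
		"builders": []interface{}{map[string]interface{}{"type": "vmware-iso", "iso_checksum": "abc", "iso_checksum_type": "md5"}},
	}

	if diff := cmp.Diff(roundTrip(original), roundTrip(fixed)); diff != "" {
		fmt.Println("[warning] Fixable configuration found.")
		fmt.Println(diff)
	}
}
```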
+ +package bootcommand + +import ( + "bytes" + "errors" + "fmt" + "io" + "io/ioutil" + "math" + "os" + "sort" + "strconv" + "strings" + "time" + "unicode" + "unicode/utf8" +) + +var g = &grammar{ + rules: []*rule{ + { + name: "Input", + pos: position{line: 6, col: 1, offset: 26}, + expr: &actionExpr{ + pos: position{line: 6, col: 10, offset: 35}, + run: (*parser).callonInput1, + expr: &seqExpr{ + pos: position{line: 6, col: 10, offset: 35}, + exprs: []interface{}{ + &labeledExpr{ + pos: position{line: 6, col: 10, offset: 35}, + label: "expr", + expr: &ruleRefExpr{ + pos: position{line: 6, col: 15, offset: 40}, + name: "Expr", + }, + }, + &ruleRefExpr{ + pos: position{line: 6, col: 20, offset: 45}, + name: "EOF", + }, + }, + }, + }, + }, + { + name: "Expr", + pos: position{line: 10, col: 1, offset: 75}, + expr: &actionExpr{ + pos: position{line: 10, col: 9, offset: 83}, + run: (*parser).callonExpr1, + expr: &labeledExpr{ + pos: position{line: 10, col: 9, offset: 83}, + label: "l", + expr: &oneOrMoreExpr{ + pos: position{line: 10, col: 11, offset: 85}, + expr: &choiceExpr{ + pos: position{line: 10, col: 13, offset: 87}, + alternatives: []interface{}{ + &ruleRefExpr{ + pos: position{line: 10, col: 13, offset: 87}, + name: "Wait", + }, + &ruleRefExpr{ + pos: position{line: 10, col: 20, offset: 94}, + name: "CharToggle", + }, + &ruleRefExpr{ + pos: position{line: 10, col: 33, offset: 107}, + name: "Special", + }, + &ruleRefExpr{ + pos: position{line: 10, col: 43, offset: 117}, + name: "Literal", + }, + }, + }, + }, + }, + }, + }, + { + name: "Wait", + pos: position{line: 14, col: 1, offset: 150}, + expr: &actionExpr{ + pos: position{line: 14, col: 8, offset: 157}, + run: (*parser).callonWait1, + expr: &seqExpr{ + pos: position{line: 14, col: 8, offset: 157}, + exprs: []interface{}{ + &ruleRefExpr{ + pos: position{line: 14, col: 8, offset: 157}, + name: "ExprStart", + }, + &litMatcher{ + pos: position{line: 14, col: 18, offset: 167}, + val: "wait", + ignoreCase: false, + }, + &labeledExpr{ + pos: position{line: 14, col: 25, offset: 174}, + label: "duration", + expr: &zeroOrOneExpr{ + pos: position{line: 14, col: 34, offset: 183}, + expr: &choiceExpr{ + pos: position{line: 14, col: 36, offset: 185}, + alternatives: []interface{}{ + &ruleRefExpr{ + pos: position{line: 14, col: 36, offset: 185}, + name: "Duration", + }, + &ruleRefExpr{ + pos: position{line: 14, col: 47, offset: 196}, + name: "Integer", + }, + }, + }, + }, + }, + &ruleRefExpr{ + pos: position{line: 14, col: 58, offset: 207}, + name: "ExprEnd", + }, + }, + }, + }, + }, + { + name: "CharToggle", + pos: position{line: 27, col: 1, offset: 453}, + expr: &actionExpr{ + pos: position{line: 27, col: 14, offset: 466}, + run: (*parser).callonCharToggle1, + expr: &seqExpr{ + pos: position{line: 27, col: 14, offset: 466}, + exprs: []interface{}{ + &ruleRefExpr{ + pos: position{line: 27, col: 14, offset: 466}, + name: "ExprStart", + }, + &labeledExpr{ + pos: position{line: 27, col: 24, offset: 476}, + label: "lit", + expr: &ruleRefExpr{ + pos: position{line: 27, col: 29, offset: 481}, + name: "Literal", + }, + }, + &labeledExpr{ + pos: position{line: 27, col: 38, offset: 490}, + label: "t", + expr: &choiceExpr{ + pos: position{line: 27, col: 41, offset: 493}, + alternatives: []interface{}{ + &ruleRefExpr{ + pos: position{line: 27, col: 41, offset: 493}, + name: "On", + }, + &ruleRefExpr{ + pos: position{line: 27, col: 46, offset: 498}, + name: "Off", + }, + }, + }, + }, + &ruleRefExpr{ + pos: position{line: 27, col: 51, offset: 503}, + name: 
"ExprEnd", + }, + }, + }, + }, + }, + { + name: "Special", + pos: position{line: 31, col: 1, offset: 574}, + expr: &actionExpr{ + pos: position{line: 31, col: 11, offset: 584}, + run: (*parser).callonSpecial1, + expr: &seqExpr{ + pos: position{line: 31, col: 11, offset: 584}, + exprs: []interface{}{ + &ruleRefExpr{ + pos: position{line: 31, col: 11, offset: 584}, + name: "ExprStart", + }, + &labeledExpr{ + pos: position{line: 31, col: 21, offset: 594}, + label: "s", + expr: &ruleRefExpr{ + pos: position{line: 31, col: 24, offset: 597}, + name: "SpecialKey", + }, + }, + &labeledExpr{ + pos: position{line: 31, col: 36, offset: 609}, + label: "t", + expr: &zeroOrOneExpr{ + pos: position{line: 31, col: 38, offset: 611}, + expr: &choiceExpr{ + pos: position{line: 31, col: 39, offset: 612}, + alternatives: []interface{}{ + &ruleRefExpr{ + pos: position{line: 31, col: 39, offset: 612}, + name: "On", + }, + &ruleRefExpr{ + pos: position{line: 31, col: 44, offset: 617}, + name: "Off", + }, + }, + }, + }, + }, + &ruleRefExpr{ + pos: position{line: 31, col: 50, offset: 623}, + name: "ExprEnd", + }, + }, + }, + }, + }, + { + name: "Number", + pos: position{line: 39, col: 1, offset: 810}, + expr: &actionExpr{ + pos: position{line: 39, col: 10, offset: 819}, + run: (*parser).callonNumber1, + expr: &seqExpr{ + pos: position{line: 39, col: 10, offset: 819}, + exprs: []interface{}{ + &zeroOrOneExpr{ + pos: position{line: 39, col: 10, offset: 819}, + expr: &litMatcher{ + pos: position{line: 39, col: 10, offset: 819}, + val: "-", + ignoreCase: false, + }, + }, + &ruleRefExpr{ + pos: position{line: 39, col: 15, offset: 824}, + name: "Integer", + }, + &zeroOrOneExpr{ + pos: position{line: 39, col: 23, offset: 832}, + expr: &seqExpr{ + pos: position{line: 39, col: 25, offset: 834}, + exprs: []interface{}{ + &litMatcher{ + pos: position{line: 39, col: 25, offset: 834}, + val: ".", + ignoreCase: false, + }, + &oneOrMoreExpr{ + pos: position{line: 39, col: 29, offset: 838}, + expr: &ruleRefExpr{ + pos: position{line: 39, col: 29, offset: 838}, + name: "Digit", + }, + }, + }, + }, + }, + }, + }, + }, + }, + { + name: "Integer", + pos: position{line: 43, col: 1, offset: 884}, + expr: &choiceExpr{ + pos: position{line: 43, col: 11, offset: 894}, + alternatives: []interface{}{ + &litMatcher{ + pos: position{line: 43, col: 11, offset: 894}, + val: "0", + ignoreCase: false, + }, + &actionExpr{ + pos: position{line: 43, col: 17, offset: 900}, + run: (*parser).callonInteger3, + expr: &seqExpr{ + pos: position{line: 43, col: 17, offset: 900}, + exprs: []interface{}{ + &ruleRefExpr{ + pos: position{line: 43, col: 17, offset: 900}, + name: "NonZeroDigit", + }, + &zeroOrMoreExpr{ + pos: position{line: 43, col: 30, offset: 913}, + expr: &ruleRefExpr{ + pos: position{line: 43, col: 30, offset: 913}, + name: "Digit", + }, + }, + }, + }, + }, + }, + }, + }, + { + name: "Duration", + pos: position{line: 47, col: 1, offset: 977}, + expr: &actionExpr{ + pos: position{line: 47, col: 12, offset: 988}, + run: (*parser).callonDuration1, + expr: &oneOrMoreExpr{ + pos: position{line: 47, col: 12, offset: 988}, + expr: &seqExpr{ + pos: position{line: 47, col: 14, offset: 990}, + exprs: []interface{}{ + &ruleRefExpr{ + pos: position{line: 47, col: 14, offset: 990}, + name: "Number", + }, + &ruleRefExpr{ + pos: position{line: 47, col: 21, offset: 997}, + name: "TimeUnit", + }, + }, + }, + }, + }, + }, + { + name: "On", + pos: position{line: 51, col: 1, offset: 1060}, + expr: &actionExpr{ + pos: position{line: 51, col: 6, offset: 1065}, + 
run: (*parser).callonOn1, + expr: &litMatcher{ + pos: position{line: 51, col: 6, offset: 1065}, + val: "on", + ignoreCase: true, + }, + }, + }, + { + name: "Off", + pos: position{line: 55, col: 1, offset: 1098}, + expr: &actionExpr{ + pos: position{line: 55, col: 7, offset: 1104}, + run: (*parser).callonOff1, + expr: &litMatcher{ + pos: position{line: 55, col: 7, offset: 1104}, + val: "off", + ignoreCase: true, + }, + }, + }, + { + name: "Literal", + pos: position{line: 59, col: 1, offset: 1139}, + expr: &actionExpr{ + pos: position{line: 59, col: 11, offset: 1149}, + run: (*parser).callonLiteral1, + expr: &anyMatcher{ + line: 59, col: 11, offset: 1149, + }, + }, + }, + { + name: "ExprEnd", + pos: position{line: 64, col: 1, offset: 1230}, + expr: &litMatcher{ + pos: position{line: 64, col: 11, offset: 1240}, + val: ">", + ignoreCase: false, + }, + }, + { + name: "ExprStart", + pos: position{line: 65, col: 1, offset: 1244}, + expr: &litMatcher{ + pos: position{line: 65, col: 13, offset: 1256}, + val: "<", + ignoreCase: false, + }, + }, + { + name: "SpecialKey", + pos: position{line: 66, col: 1, offset: 1260}, + expr: &choiceExpr{ + pos: position{line: 66, col: 14, offset: 1273}, + alternatives: []interface{}{ + &litMatcher{ + pos: position{line: 66, col: 14, offset: 1273}, + val: "bs", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 66, col: 22, offset: 1281}, + val: "del", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 66, col: 31, offset: 1290}, + val: "enter", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 66, col: 42, offset: 1301}, + val: "esc", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 66, col: 51, offset: 1310}, + val: "f10", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 66, col: 60, offset: 1319}, + val: "f11", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 66, col: 69, offset: 1328}, + val: "f12", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 11, offset: 1345}, + val: "f1", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 19, offset: 1353}, + val: "f2", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 27, offset: 1361}, + val: "f3", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 35, offset: 1369}, + val: "f4", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 43, offset: 1377}, + val: "f5", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 51, offset: 1385}, + val: "f6", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 59, offset: 1393}, + val: "f7", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 67, offset: 1401}, + val: "f8", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 67, col: 75, offset: 1409}, + val: "f9", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 68, col: 12, offset: 1426}, + val: "return", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 68, col: 24, offset: 1438}, + val: "tab", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 68, col: 33, offset: 1447}, + val: "up", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 68, col: 41, offset: 1455}, + val: "down", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 68, col: 51, offset: 1465}, + val: "spacebar", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 68, col: 65, offset: 1479}, + val: "insert", + ignoreCase: true, + }, + &litMatcher{ + pos: 
position{line: 68, col: 77, offset: 1491}, + val: "home", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 69, col: 11, offset: 1509}, + val: "end", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 69, col: 20, offset: 1518}, + val: "pageup", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 69, col: 32, offset: 1530}, + val: "pagedown", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 69, col: 46, offset: 1544}, + val: "leftalt", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 69, col: 59, offset: 1557}, + val: "leftctrl", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 69, col: 73, offset: 1571}, + val: "leftshift", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 70, col: 11, offset: 1594}, + val: "rightalt", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 70, col: 25, offset: 1608}, + val: "rightctrl", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 70, col: 40, offset: 1623}, + val: "rightshift", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 70, col: 56, offset: 1639}, + val: "leftsuper", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 70, col: 71, offset: 1654}, + val: "rightsuper", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 71, col: 11, offset: 1678}, + val: "left", + ignoreCase: true, + }, + &litMatcher{ + pos: position{line: 71, col: 21, offset: 1688}, + val: "right", + ignoreCase: true, + }, + }, + }, + }, + { + name: "NonZeroDigit", + pos: position{line: 73, col: 1, offset: 1698}, + expr: &charClassMatcher{ + pos: position{line: 73, col: 16, offset: 1713}, + val: "[1-9]", + ranges: []rune{'1', '9'}, + ignoreCase: false, + inverted: false, + }, + }, + { + name: "Digit", + pos: position{line: 74, col: 1, offset: 1719}, + expr: &charClassMatcher{ + pos: position{line: 74, col: 9, offset: 1727}, + val: "[0-9]", + ranges: []rune{'0', '9'}, + ignoreCase: false, + inverted: false, + }, + }, + { + name: "TimeUnit", + pos: position{line: 75, col: 1, offset: 1733}, + expr: &choiceExpr{ + pos: position{line: 75, col: 13, offset: 1745}, + alternatives: []interface{}{ + &litMatcher{ + pos: position{line: 75, col: 13, offset: 1745}, + val: "ns", + ignoreCase: false, + }, + &litMatcher{ + pos: position{line: 75, col: 20, offset: 1752}, + val: "us", + ignoreCase: false, + }, + &litMatcher{ + pos: position{line: 75, col: 27, offset: 1759}, + val: "µs", + ignoreCase: false, + }, + &litMatcher{ + pos: position{line: 75, col: 34, offset: 1767}, + val: "ms", + ignoreCase: false, + }, + &litMatcher{ + pos: position{line: 75, col: 41, offset: 1774}, + val: "s", + ignoreCase: false, + }, + &litMatcher{ + pos: position{line: 75, col: 47, offset: 1780}, + val: "m", + ignoreCase: false, + }, + &litMatcher{ + pos: position{line: 75, col: 53, offset: 1786}, + val: "h", + ignoreCase: false, + }, + }, + }, + }, + { + name: "_", + displayName: "\"whitespace\"", + pos: position{line: 77, col: 1, offset: 1792}, + expr: &zeroOrMoreExpr{ + pos: position{line: 77, col: 19, offset: 1810}, + expr: &charClassMatcher{ + pos: position{line: 77, col: 19, offset: 1810}, + val: "[ \\n\\t\\r]", + chars: []rune{' ', '\n', '\t', '\r'}, + ignoreCase: false, + inverted: false, + }, + }, + }, + { + name: "EOF", + pos: position{line: 79, col: 1, offset: 1822}, + expr: ¬Expr{ + pos: position{line: 79, col: 8, offset: 1829}, + expr: &anyMatcher{ + line: 79, col: 9, offset: 1830, + }, + }, + }, + }, +} + +func (c *current) onInput1(expr interface{}) (interface{}, 
error) { + return expr, nil +} + +func (p *parser) callonInput1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onInput1(stack["expr"]) +} + +func (c *current) onExpr1(l interface{}) (interface{}, error) { + return l, nil +} + +func (p *parser) callonExpr1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onExpr1(stack["l"]) +} + +func (c *current) onWait1(duration interface{}) (interface{}, error) { + var d time.Duration + switch t := duration.(type) { + case time.Duration: + d = t + case int64: + d = time.Duration(t) * time.Second + default: + d = time.Second + } + return &waitExpression{d}, nil +} + +func (p *parser) callonWait1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onWait1(stack["duration"]) +} + +func (c *current) onCharToggle1(lit, t interface{}) (interface{}, error) { + return &literal{lit.(*literal).s, t.(KeyAction)}, nil +} + +func (p *parser) callonCharToggle1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onCharToggle1(stack["lit"], stack["t"]) +} + +func (c *current) onSpecial1(s, t interface{}) (interface{}, error) { + l := strings.ToLower(string(s.([]byte))) + if t == nil { + return &specialExpression{l, KeyPress}, nil + } + return &specialExpression{l, t.(KeyAction)}, nil +} + +func (p *parser) callonSpecial1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onSpecial1(stack["s"], stack["t"]) +} + +func (c *current) onNumber1() (interface{}, error) { + return string(c.text), nil +} + +func (p *parser) callonNumber1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onNumber1() +} + +func (c *current) onInteger3() (interface{}, error) { + return strconv.ParseInt(string(c.text), 10, 64) +} + +func (p *parser) callonInteger3() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onInteger3() +} + +func (c *current) onDuration1() (interface{}, error) { + return time.ParseDuration(string(c.text)) +} + +func (p *parser) callonDuration1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onDuration1() +} + +func (c *current) onOn1() (interface{}, error) { + return KeyOn, nil +} + +func (p *parser) callonOn1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onOn1() +} + +func (c *current) onOff1() (interface{}, error) { + return KeyOff, nil +} + +func (p *parser) callonOff1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onOff1() +} + +func (c *current) onLiteral1() (interface{}, error) { + r, _ := utf8.DecodeRune(c.text) + return &literal{r, KeyPress}, nil +} + +func (p *parser) callonLiteral1() (interface{}, error) { + stack := p.vstack[len(p.vstack)-1] + _ = stack + return p.cur.onLiteral1() +} + +var ( + // errNoRule is returned when the grammar to parse has no rule. + errNoRule = errors.New("grammar has no rule") + + // errInvalidEntrypoint is returned when the specified entrypoint rule + // does not exit. + errInvalidEntrypoint = errors.New("invalid entrypoint") + + // errInvalidEncoding is returned when the source is not properly + // utf8-encoded. + errInvalidEncoding = errors.New("invalid encoding") + + // errMaxExprCnt is used to signal that the maximum number of + // expressions have been parsed. 
+ errMaxExprCnt = errors.New("max number of expresssions parsed") +) + +// Option is a function that can set an option on the parser. It returns +// the previous setting as an Option. +type Option func(*parser) Option + +// MaxExpressions creates an Option to stop parsing after the provided +// number of expressions have been parsed, if the value is 0 then the parser will +// parse for as many steps as needed (possibly an infinite number). +// +// The default for maxExprCnt is 0. +func MaxExpressions(maxExprCnt uint64) Option { + return func(p *parser) Option { + oldMaxExprCnt := p.maxExprCnt + p.maxExprCnt = maxExprCnt + return MaxExpressions(oldMaxExprCnt) + } +} + +// Entrypoint creates an Option to set the rule name to use as entrypoint. +// The rule name must have been specified in the -alternate-entrypoints +// if generating the parser with the -optimize-grammar flag, otherwise +// it may have been optimized out. Passing an empty string sets the +// entrypoint to the first rule in the grammar. +// +// The default is to start parsing at the first rule in the grammar. +func Entrypoint(ruleName string) Option { + return func(p *parser) Option { + oldEntrypoint := p.entrypoint + p.entrypoint = ruleName + if ruleName == "" { + p.entrypoint = g.rules[0].name + } + return Entrypoint(oldEntrypoint) + } +} + +// Statistics adds a user provided Stats struct to the parser to allow +// the user to process the results after the parsing has finished. +// Also the key for the "no match" counter is set. +// +// Example usage: +// +// input := "input" +// stats := Stats{} +// _, err := Parse("input-file", []byte(input), Statistics(&stats, "no match")) +// if err != nil { +// log.Panicln(err) +// } +// b, err := json.MarshalIndent(stats.ChoiceAltCnt, "", " ") +// if err != nil { +// log.Panicln(err) +// } +// fmt.Println(string(b)) +// +func Statistics(stats *Stats, choiceNoMatch string) Option { + return func(p *parser) Option { + oldStats := p.Stats + p.Stats = stats + oldChoiceNoMatch := p.choiceNoMatch + p.choiceNoMatch = choiceNoMatch + if p.Stats.ChoiceAltCnt == nil { + p.Stats.ChoiceAltCnt = make(map[string]map[string]int) + } + return Statistics(oldStats, oldChoiceNoMatch) + } +} + +// Debug creates an Option to set the debug flag to b. When set to true, +// debugging information is printed to stdout while parsing. +// +// The default is false. +func Debug(b bool) Option { + return func(p *parser) Option { + old := p.debug + p.debug = b + return Debug(old) + } +} + +// Memoize creates an Option to set the memoize flag to b. When set to true, +// the parser will cache all results so each expression is evaluated only +// once. This guarantees linear parsing time even for pathological cases, +// at the expense of more memory and slower times for typical cases. +// +// The default is false. +func Memoize(b bool) Option { + return func(p *parser) Option { + old := p.memoize + p.memoize = b + return Memoize(old) + } +} + +// AllowInvalidUTF8 creates an Option to allow invalid UTF-8 bytes. +// Every invalid UTF-8 byte is treated as a utf8.RuneError (U+FFFD) +// by character class matchers and is matched by the any matcher. +// The returned matched value, c.text and c.offset are NOT affected. +// +// The default is false. +func AllowInvalidUTF8(b bool) Option { + return func(p *parser) Option { + old := p.allowInvalidUTF8 + p.allowInvalidUTF8 = b + return AllowInvalidUTF8(old) + } +} + +// Recover creates an Option to set the recover flag to b. 
When set to +// true, this causes the parser to recover from panics and convert it +// to an error. Setting it to false can be useful while debugging to +// access the full stack trace. +// +// The default is true. +func Recover(b bool) Option { + return func(p *parser) Option { + old := p.recover + p.recover = b + return Recover(old) + } +} + +// GlobalStore creates an Option to set a key to a certain value in +// the globalStore. +func GlobalStore(key string, value interface{}) Option { + return func(p *parser) Option { + old := p.cur.globalStore[key] + p.cur.globalStore[key] = value + return GlobalStore(key, old) + } +} + +// InitState creates an Option to set a key to a certain value in +// the global "state" store. +func InitState(key string, value interface{}) Option { + return func(p *parser) Option { + old := p.cur.state[key] + p.cur.state[key] = value + return InitState(key, old) + } +} + +// ParseFile parses the file identified by filename. +func ParseFile(filename string, opts ...Option) (i interface{}, err error) { + f, err := os.Open(filename) + if err != nil { + return nil, err + } + defer func() { + if closeErr := f.Close(); closeErr != nil { + err = closeErr + } + }() + return ParseReader(filename, f, opts...) +} + +// ParseReader parses the data from r using filename as information in the +// error messages. +func ParseReader(filename string, r io.Reader, opts ...Option) (interface{}, error) { + b, err := ioutil.ReadAll(r) + if err != nil { + return nil, err + } + + return Parse(filename, b, opts...) +} + +// Parse parses the data from b using filename as information in the +// error messages. +func Parse(filename string, b []byte, opts ...Option) (interface{}, error) { + return newParser(filename, b, opts...).parse(g) +} + +// position records a position in the text. +type position struct { + line, col, offset int +} + +func (p position) String() string { + return fmt.Sprintf("%d:%d [%d]", p.line, p.col, p.offset) +} + +// savepoint stores all state required to go back to this point in the +// parser. +type savepoint struct { + position + rn rune + w int +} + +type current struct { + pos position // start position of the match + text []byte // raw text of the match + + // state is a store for arbitrary key,value pairs that the user wants to be + // tied to the backtracking of the parser. + // This is always rolled back if a parsing rule fails. + state storeDict + + // globalStore is a general store for the user to store arbitrary key-value + // pairs that they need to manage and that they do not want tied to the + // backtracking of the parser. This is only modified by the user and never + // rolled back by the parser. It is always up to the user to keep this in a + // consistent state. + globalStore storeDict +} + +type storeDict map[string]interface{} + +// the AST types... 
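The options above are ordinary functions that mutate the parser and hand back an `Option` restoring the previous setting, so they compose directly in a call to `Parse`. A minimal sketch, assuming it lives alongside the generated parser in the `bootcommand` package and using only the exported names shown above (`Parse`, `Memoize`, `Statistics`, `Stats`); the helper name and input string are made up for illustration:

```go
package bootcommand

import (
	"fmt"
	"log"
)

// exampleParseWithOptions is illustrative only: each Option records the
// previous setting and returns an Option that would restore it.
func exampleParseWithOptions() {
	stats := Stats{}
	val, err := Parse("boot_command", []byte("<wait5s>hello<enter>"),
		Memoize(true),                  // packrat memoization: linear-time parsing
		Statistics(&stats, "no match"), // collect per-choice match counts
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("parsed %d expressions into %v\n", stats.ExprCnt, val)
}
```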
+ +type grammar struct { + pos position + rules []*rule +} + +type rule struct { + pos position + name string + displayName string + expr interface{} +} + +type choiceExpr struct { + pos position + alternatives []interface{} +} + +type actionExpr struct { + pos position + expr interface{} + run func(*parser) (interface{}, error) +} + +type recoveryExpr struct { + pos position + expr interface{} + recoverExpr interface{} + failureLabel []string +} + +type seqExpr struct { + pos position + exprs []interface{} +} + +type throwExpr struct { + pos position + label string +} + +type labeledExpr struct { + pos position + label string + expr interface{} +} + +type expr struct { + pos position + expr interface{} +} + +type andExpr expr +type notExpr expr +type zeroOrOneExpr expr +type zeroOrMoreExpr expr +type oneOrMoreExpr expr + +type ruleRefExpr struct { + pos position + name string +} + +type stateCodeExpr struct { + pos position + run func(*parser) error +} + +type andCodeExpr struct { + pos position + run func(*parser) (bool, error) +} + +type notCodeExpr struct { + pos position + run func(*parser) (bool, error) +} + +type litMatcher struct { + pos position + val string + ignoreCase bool +} + +type charClassMatcher struct { + pos position + val string + basicLatinChars [128]bool + chars []rune + ranges []rune + classes []*unicode.RangeTable + ignoreCase bool + inverted bool +} + +type anyMatcher position + +// errList cumulates the errors found by the parser. +type errList []error + +func (e *errList) add(err error) { + *e = append(*e, err) +} + +func (e errList) err() error { + if len(e) == 0 { + return nil + } + e.dedupe() + return e +} + +func (e *errList) dedupe() { + var cleaned []error + set := make(map[string]bool) + for _, err := range *e { + if msg := err.Error(); !set[msg] { + set[msg] = true + cleaned = append(cleaned, err) + } + } + *e = cleaned +} + +func (e errList) Error() string { + switch len(e) { + case 0: + return "" + case 1: + return e[0].Error() + default: + var buf bytes.Buffer + + for i, err := range e { + if i > 0 { + buf.WriteRune('\n') + } + buf.WriteString(err.Error()) + } + return buf.String() + } +} + +// parserError wraps an error with a prefix indicating the rule in which +// the error occurred. The original error is stored in the Inner field. +type parserError struct { + Inner error + pos position + prefix string + expected []string +} + +// Error returns the error message. +func (p *parserError) Error() string { + return p.prefix + ": " + p.Inner.Error() +} + +// newParser creates a parser with the specified input source and options. +func newParser(filename string, b []byte, opts ...Option) *parser { + stats := Stats{ + ChoiceAltCnt: make(map[string]map[string]int), + } + + p := &parser{ + filename: filename, + errs: new(errList), + data: b, + pt: savepoint{position: position{line: 1}}, + recover: true, + cur: current{ + state: make(storeDict), + globalStore: make(storeDict), + }, + maxFailPos: position{col: 1, line: 1}, + maxFailExpected: make([]string, 0, 20), + Stats: &stats, + // start rule is rule [0] unless an alternate entrypoint is specified + entrypoint: g.rules[0].name, + emptyState: make(storeDict), + } + p.setOptions(opts) + + if p.maxExprCnt == 0 { + p.maxExprCnt = math.MaxUint64 + } + + return p +} + +// setOptions applies the options to the parser. 
+func (p *parser) setOptions(opts []Option) { + for _, opt := range opts { + opt(p) + } +} + +type resultTuple struct { + v interface{} + b bool + end savepoint +} + +const choiceNoMatch = -1 + +// Stats stores some statistics, gathered during parsing +type Stats struct { + // ExprCnt counts the number of expressions processed during parsing + // This value is compared to the maximum number of expressions allowed + // (set by the MaxExpressions option). + ExprCnt uint64 + + // ChoiceAltCnt is used to count for each ordered choice expression, + // which alternative is used how may times. + // These numbers allow to optimize the order of the ordered choice expression + // to increase the performance of the parser + // + // The outer key of ChoiceAltCnt is composed of the name of the rule as well + // as the line and the column of the ordered choice. + // The inner key of ChoiceAltCnt is the number (one-based) of the matching alternative. + // For each alternative the number of matches are counted. If an ordered choice does not + // match, a special counter is incremented. The name of this counter is set with + // the parser option Statistics. + // For an alternative to be included in ChoiceAltCnt, it has to match at least once. + ChoiceAltCnt map[string]map[string]int +} + +type parser struct { + filename string + pt savepoint + cur current + + data []byte + errs *errList + + depth int + recover bool + debug bool + + memoize bool + // memoization table for the packrat algorithm: + // map[offset in source] map[expression or rule] {value, match} + memo map[int]map[interface{}]resultTuple + + // rules table, maps the rule identifier to the rule node + rules map[string]*rule + // variables stack, map of label to value + vstack []map[string]interface{} + // rule stack, allows identification of the current rule in errors + rstack []*rule + + // parse fail + maxFailPos position + maxFailExpected []string + maxFailInvertExpected bool + + // max number of expressions to be parsed + maxExprCnt uint64 + // entrypoint for the parser + entrypoint string + + allowInvalidUTF8 bool + + *Stats + + choiceNoMatch string + // recovery expression stack, keeps track of the currently available recovery expression, these are traversed in reverse + recoveryStack []map[string]interface{} + + // emptyState contains an empty storeDict, which is used to optimize cloneState if global "state" store is not used. + emptyState storeDict +} + +// push a variable set on the vstack. +func (p *parser) pushV() { + if cap(p.vstack) == len(p.vstack) { + // create new empty slot in the stack + p.vstack = append(p.vstack, nil) + } else { + // slice to 1 more + p.vstack = p.vstack[:len(p.vstack)+1] + } + + // get the last args set + m := p.vstack[len(p.vstack)-1] + if m != nil && len(m) == 0 { + // empty map, all good + return + } + + m = make(map[string]interface{}) + p.vstack[len(p.vstack)-1] = m +} + +// pop a variable set from the vstack. 
+func (p *parser) popV() { + // if the map is not empty, clear it + m := p.vstack[len(p.vstack)-1] + if len(m) > 0 { + // GC that map + p.vstack[len(p.vstack)-1] = nil + } + p.vstack = p.vstack[:len(p.vstack)-1] +} + +// push a recovery expression with its labels to the recoveryStack +func (p *parser) pushRecovery(labels []string, expr interface{}) { + if cap(p.recoveryStack) == len(p.recoveryStack) { + // create new empty slot in the stack + p.recoveryStack = append(p.recoveryStack, nil) + } else { + // slice to 1 more + p.recoveryStack = p.recoveryStack[:len(p.recoveryStack)+1] + } + + m := make(map[string]interface{}, len(labels)) + for _, fl := range labels { + m[fl] = expr + } + p.recoveryStack[len(p.recoveryStack)-1] = m +} + +// pop a recovery expression from the recoveryStack +func (p *parser) popRecovery() { + // GC that map + p.recoveryStack[len(p.recoveryStack)-1] = nil + + p.recoveryStack = p.recoveryStack[:len(p.recoveryStack)-1] +} + +func (p *parser) print(prefix, s string) string { + if !p.debug { + return s + } + + fmt.Printf("%s %d:%d:%d: %s [%#U]\n", + prefix, p.pt.line, p.pt.col, p.pt.offset, s, p.pt.rn) + return s +} + +func (p *parser) in(s string) string { + p.depth++ + return p.print(strings.Repeat(" ", p.depth)+">", s) +} + +func (p *parser) out(s string) string { + p.depth-- + return p.print(strings.Repeat(" ", p.depth)+"<", s) +} + +func (p *parser) addErr(err error) { + p.addErrAt(err, p.pt.position, []string{}) +} + +func (p *parser) addErrAt(err error, pos position, expected []string) { + var buf bytes.Buffer + if p.filename != "" { + buf.WriteString(p.filename) + } + if buf.Len() > 0 { + buf.WriteString(":") + } + buf.WriteString(fmt.Sprintf("%d:%d (%d)", pos.line, pos.col, pos.offset)) + if len(p.rstack) > 0 { + if buf.Len() > 0 { + buf.WriteString(": ") + } + rule := p.rstack[len(p.rstack)-1] + if rule.displayName != "" { + buf.WriteString("rule " + rule.displayName) + } else { + buf.WriteString("rule " + rule.name) + } + } + pe := &parserError{Inner: err, pos: pos, prefix: buf.String(), expected: expected} + p.errs.add(pe) +} + +func (p *parser) failAt(fail bool, pos position, want string) { + // process fail if parsing fails and not inverted or parsing succeeds and invert is set + if fail == p.maxFailInvertExpected { + if pos.offset < p.maxFailPos.offset { + return + } + + if pos.offset > p.maxFailPos.offset { + p.maxFailPos = pos + p.maxFailExpected = p.maxFailExpected[:0] + } + + if p.maxFailInvertExpected { + want = "!" + want + } + p.maxFailExpected = append(p.maxFailExpected, want) + } +} + +// read advances the parser to the next rune. +func (p *parser) read() { + p.pt.offset += p.pt.w + rn, n := utf8.DecodeRune(p.data[p.pt.offset:]) + p.pt.rn = rn + p.pt.w = n + p.pt.col++ + if rn == '\n' { + p.pt.line++ + p.pt.col = 0 + } + + if rn == utf8.RuneError && n == 1 { // see utf8.DecodeRune + if !p.allowInvalidUTF8 { + p.addErr(errInvalidEncoding) + } + } +} + +// restore parser position to the savepoint pt. +func (p *parser) restore(pt savepoint) { + if p.debug { + defer p.out(p.in("restore")) + } + if pt.offset == p.pt.offset { + return + } + p.pt = pt +} + +// Cloner is implemented by any value that has a Clone method, which returns a +// copy of the value. This is mainly used for types which are not passed by +// value (e.g map, slice, chan) or structs that contain such types. 
+// +// This is used in conjunction with the global state feature to create proper +// copies of the state to allow the parser to properly restore the state in +// the case of backtracking. +type Cloner interface { + Clone() interface{} +} + +// clone and return parser current state. +func (p *parser) cloneState() storeDict { + if p.debug { + defer p.out(p.in("cloneState")) + } + + if len(p.cur.state) == 0 { + if len(p.emptyState) > 0 { + p.emptyState = make(storeDict) + } + return p.emptyState + } + + state := make(storeDict, len(p.cur.state)) + for k, v := range p.cur.state { + if c, ok := v.(Cloner); ok { + state[k] = c.Clone() + } else { + state[k] = v + } + } + return state +} + +// restore parser current state to the state storeDict. +// every restoreState should applied only one time for every cloned state +func (p *parser) restoreState(state storeDict) { + if p.debug { + defer p.out(p.in("restoreState")) + } + p.cur.state = state +} + +// get the slice of bytes from the savepoint start to the current position. +func (p *parser) sliceFrom(start savepoint) []byte { + return p.data[start.position.offset:p.pt.position.offset] +} + +func (p *parser) getMemoized(node interface{}) (resultTuple, bool) { + if len(p.memo) == 0 { + return resultTuple{}, false + } + m := p.memo[p.pt.offset] + if len(m) == 0 { + return resultTuple{}, false + } + res, ok := m[node] + return res, ok +} + +func (p *parser) setMemoized(pt savepoint, node interface{}, tuple resultTuple) { + if p.memo == nil { + p.memo = make(map[int]map[interface{}]resultTuple) + } + m := p.memo[pt.offset] + if m == nil { + m = make(map[interface{}]resultTuple) + p.memo[pt.offset] = m + } + m[node] = tuple +} + +func (p *parser) buildRulesTable(g *grammar) { + p.rules = make(map[string]*rule, len(g.rules)) + for _, r := range g.rules { + p.rules[r.name] = r + } +} + +func (p *parser) parse(g *grammar) (val interface{}, err error) { + if len(g.rules) == 0 { + p.addErr(errNoRule) + return nil, p.errs.err() + } + + // TODO : not super critical but this could be generated + p.buildRulesTable(g) + + if p.recover { + // panic can be used in action code to stop parsing immediately + // and return the panic as an error. + defer func() { + if e := recover(); e != nil { + if p.debug { + defer p.out(p.in("panic handler")) + } + val = nil + switch e := e.(type) { + case error: + p.addErr(e) + default: + p.addErr(fmt.Errorf("%v", e)) + } + err = p.errs.err() + } + }() + } + + startRule, ok := p.rules[p.entrypoint] + if !ok { + p.addErr(errInvalidEntrypoint) + return nil, p.errs.err() + } + + p.read() // advance to first rune + val, ok = p.parseRule(startRule) + if !ok { + if len(*p.errs) == 0 { + // If parsing fails, but no errors have been recorded, the expected values + // for the farthest parser position are returned as error. 
+ maxFailExpectedMap := make(map[string]struct{}, len(p.maxFailExpected)) + for _, v := range p.maxFailExpected { + maxFailExpectedMap[v] = struct{}{} + } + expected := make([]string, 0, len(maxFailExpectedMap)) + eof := false + if _, ok := maxFailExpectedMap["!."]; ok { + delete(maxFailExpectedMap, "!.") + eof = true + } + for k := range maxFailExpectedMap { + expected = append(expected, k) + } + sort.Strings(expected) + if eof { + expected = append(expected, "EOF") + } + p.addErrAt(errors.New("no match found, expected: "+listJoin(expected, ", ", "or")), p.maxFailPos, expected) + } + + return nil, p.errs.err() + } + return val, p.errs.err() +} + +func listJoin(list []string, sep string, lastSep string) string { + switch len(list) { + case 0: + return "" + case 1: + return list[0] + default: + return fmt.Sprintf("%s %s %s", strings.Join(list[:len(list)-1], sep), lastSep, list[len(list)-1]) + } +} + +func (p *parser) parseRule(rule *rule) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseRule " + rule.name)) + } + + if p.memoize { + res, ok := p.getMemoized(rule) + if ok { + p.restore(res.end) + return res.v, res.b + } + } + + start := p.pt + p.rstack = append(p.rstack, rule) + p.pushV() + val, ok := p.parseExpr(rule.expr) + p.popV() + p.rstack = p.rstack[:len(p.rstack)-1] + if ok && p.debug { + p.print(strings.Repeat(" ", p.depth)+"MATCH", string(p.sliceFrom(start))) + } + + if p.memoize { + p.setMemoized(start, rule, resultTuple{val, ok, p.pt}) + } + return val, ok +} + +func (p *parser) parseExpr(expr interface{}) (interface{}, bool) { + var pt savepoint + + if p.memoize { + res, ok := p.getMemoized(expr) + if ok { + p.restore(res.end) + return res.v, res.b + } + pt = p.pt + } + + p.ExprCnt++ + if p.ExprCnt > p.maxExprCnt { + panic(errMaxExprCnt) + } + + var val interface{} + var ok bool + switch expr := expr.(type) { + case *actionExpr: + val, ok = p.parseActionExpr(expr) + case *andCodeExpr: + val, ok = p.parseAndCodeExpr(expr) + case *andExpr: + val, ok = p.parseAndExpr(expr) + case *anyMatcher: + val, ok = p.parseAnyMatcher(expr) + case *charClassMatcher: + val, ok = p.parseCharClassMatcher(expr) + case *choiceExpr: + val, ok = p.parseChoiceExpr(expr) + case *labeledExpr: + val, ok = p.parseLabeledExpr(expr) + case *litMatcher: + val, ok = p.parseLitMatcher(expr) + case *notCodeExpr: + val, ok = p.parseNotCodeExpr(expr) + case *notExpr: + val, ok = p.parseNotExpr(expr) + case *oneOrMoreExpr: + val, ok = p.parseOneOrMoreExpr(expr) + case *recoveryExpr: + val, ok = p.parseRecoveryExpr(expr) + case *ruleRefExpr: + val, ok = p.parseRuleRefExpr(expr) + case *seqExpr: + val, ok = p.parseSeqExpr(expr) + case *stateCodeExpr: + val, ok = p.parseStateCodeExpr(expr) + case *throwExpr: + val, ok = p.parseThrowExpr(expr) + case *zeroOrMoreExpr: + val, ok = p.parseZeroOrMoreExpr(expr) + case *zeroOrOneExpr: + val, ok = p.parseZeroOrOneExpr(expr) + default: + panic(fmt.Sprintf("unknown expression type %T", expr)) + } + if p.memoize { + p.setMemoized(pt, expr, resultTuple{val, ok, p.pt}) + } + return val, ok +} + +func (p *parser) parseActionExpr(act *actionExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseActionExpr")) + } + + start := p.pt + val, ok := p.parseExpr(act.expr) + if ok { + p.cur.pos = start.position + p.cur.text = p.sliceFrom(start) + state := p.cloneState() + actVal, err := act.run(p) + if err != nil { + p.addErrAt(err, start.position, []string{}) + } + p.restoreState(state) + + val = actVal + } + if ok && p.debug { + p.print(strings.Repeat(" ", 
p.depth)+"MATCH", string(p.sliceFrom(start))) + } + return val, ok +} + +func (p *parser) parseAndCodeExpr(and *andCodeExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseAndCodeExpr")) + } + + state := p.cloneState() + + ok, err := and.run(p) + if err != nil { + p.addErr(err) + } + p.restoreState(state) + + return nil, ok +} + +func (p *parser) parseAndExpr(and *andExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseAndExpr")) + } + + pt := p.pt + state := p.cloneState() + p.pushV() + _, ok := p.parseExpr(and.expr) + p.popV() + p.restoreState(state) + p.restore(pt) + + return nil, ok +} + +func (p *parser) parseAnyMatcher(any *anyMatcher) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseAnyMatcher")) + } + + if p.pt.rn == utf8.RuneError && p.pt.w == 0 { + // EOF - see utf8.DecodeRune + p.failAt(false, p.pt.position, ".") + return nil, false + } + start := p.pt + p.read() + p.failAt(true, start.position, ".") + return p.sliceFrom(start), true +} + +func (p *parser) parseCharClassMatcher(chr *charClassMatcher) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseCharClassMatcher")) + } + + cur := p.pt.rn + start := p.pt + + // can't match EOF + if cur == utf8.RuneError && p.pt.w == 0 { // see utf8.DecodeRune + p.failAt(false, start.position, chr.val) + return nil, false + } + + if chr.ignoreCase { + cur = unicode.ToLower(cur) + } + + // try to match in the list of available chars + for _, rn := range chr.chars { + if rn == cur { + if chr.inverted { + p.failAt(false, start.position, chr.val) + return nil, false + } + p.read() + p.failAt(true, start.position, chr.val) + return p.sliceFrom(start), true + } + } + + // try to match in the list of ranges + for i := 0; i < len(chr.ranges); i += 2 { + if cur >= chr.ranges[i] && cur <= chr.ranges[i+1] { + if chr.inverted { + p.failAt(false, start.position, chr.val) + return nil, false + } + p.read() + p.failAt(true, start.position, chr.val) + return p.sliceFrom(start), true + } + } + + // try to match in the list of Unicode classes + for _, cl := range chr.classes { + if unicode.Is(cl, cur) { + if chr.inverted { + p.failAt(false, start.position, chr.val) + return nil, false + } + p.read() + p.failAt(true, start.position, chr.val) + return p.sliceFrom(start), true + } + } + + if chr.inverted { + p.read() + p.failAt(true, start.position, chr.val) + return p.sliceFrom(start), true + } + p.failAt(false, start.position, chr.val) + return nil, false +} + +func (p *parser) incChoiceAltCnt(ch *choiceExpr, altI int) { + choiceIdent := fmt.Sprintf("%s %d:%d", p.rstack[len(p.rstack)-1].name, ch.pos.line, ch.pos.col) + m := p.ChoiceAltCnt[choiceIdent] + if m == nil { + m = make(map[string]int) + p.ChoiceAltCnt[choiceIdent] = m + } + // We increment altI by 1, so the keys do not start at 0 + alt := strconv.Itoa(altI + 1) + if altI == choiceNoMatch { + alt = p.choiceNoMatch + } + m[alt]++ +} + +func (p *parser) parseChoiceExpr(ch *choiceExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseChoiceExpr")) + } + + for altI, alt := range ch.alternatives { + // dummy assignment to prevent compile error if optimized + _ = altI + + state := p.cloneState() + + p.pushV() + val, ok := p.parseExpr(alt) + p.popV() + if ok { + p.incChoiceAltCnt(ch, altI) + return val, ok + } + p.restoreState(state) + } + p.incChoiceAltCnt(ch, choiceNoMatch) + return nil, false +} + +func (p *parser) parseLabeledExpr(lab *labeledExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseLabeledExpr")) + } + + 
p.pushV() + val, ok := p.parseExpr(lab.expr) + p.popV() + if ok && lab.label != "" { + m := p.vstack[len(p.vstack)-1] + m[lab.label] = val + } + return val, ok +} + +func (p *parser) parseLitMatcher(lit *litMatcher) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseLitMatcher")) + } + + ignoreCase := "" + if lit.ignoreCase { + ignoreCase = "i" + } + val := fmt.Sprintf("%q%s", lit.val, ignoreCase) + start := p.pt + for _, want := range lit.val { + cur := p.pt.rn + if lit.ignoreCase { + cur = unicode.ToLower(cur) + } + if cur != want { + p.failAt(false, start.position, val) + p.restore(start) + return nil, false + } + p.read() + } + p.failAt(true, start.position, val) + return p.sliceFrom(start), true +} + +func (p *parser) parseNotCodeExpr(not *notCodeExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseNotCodeExpr")) + } + + state := p.cloneState() + + ok, err := not.run(p) + if err != nil { + p.addErr(err) + } + p.restoreState(state) + + return nil, !ok +} + +func (p *parser) parseNotExpr(not *notExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseNotExpr")) + } + + pt := p.pt + state := p.cloneState() + p.pushV() + p.maxFailInvertExpected = !p.maxFailInvertExpected + _, ok := p.parseExpr(not.expr) + p.maxFailInvertExpected = !p.maxFailInvertExpected + p.popV() + p.restoreState(state) + p.restore(pt) + + return nil, !ok +} + +func (p *parser) parseOneOrMoreExpr(expr *oneOrMoreExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseOneOrMoreExpr")) + } + + var vals []interface{} + + for { + p.pushV() + val, ok := p.parseExpr(expr.expr) + p.popV() + if !ok { + if len(vals) == 0 { + // did not match once, no match + return nil, false + } + return vals, true + } + vals = append(vals, val) + } +} + +func (p *parser) parseRecoveryExpr(recover *recoveryExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseRecoveryExpr (" + strings.Join(recover.failureLabel, ",") + ")")) + } + + p.pushRecovery(recover.failureLabel, recover.recoverExpr) + val, ok := p.parseExpr(recover.expr) + p.popRecovery() + + return val, ok +} + +func (p *parser) parseRuleRefExpr(ref *ruleRefExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseRuleRefExpr " + ref.name)) + } + + if ref.name == "" { + panic(fmt.Sprintf("%s: invalid rule: missing name", ref.pos)) + } + + rule := p.rules[ref.name] + if rule == nil { + p.addErr(fmt.Errorf("undefined rule: %s", ref.name)) + return nil, false + } + return p.parseRule(rule) +} + +func (p *parser) parseSeqExpr(seq *seqExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseSeqExpr")) + } + + vals := make([]interface{}, 0, len(seq.exprs)) + + pt := p.pt + state := p.cloneState() + for _, expr := range seq.exprs { + val, ok := p.parseExpr(expr) + if !ok { + p.restoreState(state) + p.restore(pt) + return nil, false + } + vals = append(vals, val) + } + return vals, true +} + +func (p *parser) parseStateCodeExpr(state *stateCodeExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseStateCodeExpr")) + } + + err := state.run(p) + if err != nil { + p.addErr(err) + } + return nil, true +} + +func (p *parser) parseThrowExpr(expr *throwExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseThrowExpr")) + } + + for i := len(p.recoveryStack) - 1; i >= 0; i-- { + if recoverExpr, ok := p.recoveryStack[i][expr.label]; ok { + if val, ok := p.parseExpr(recoverExpr); ok { + return val, ok + } + } + } + + return nil, false +} + +func (p *parser) parseZeroOrMoreExpr(expr 
*zeroOrMoreExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseZeroOrMoreExpr")) + } + + var vals []interface{} + + for { + p.pushV() + val, ok := p.parseExpr(expr.expr) + p.popV() + if !ok { + return vals, true + } + vals = append(vals, val) + } +} + +func (p *parser) parseZeroOrOneExpr(expr *zeroOrOneExpr) (interface{}, bool) { + if p.debug { + defer p.out(p.in("parseZeroOrOneExpr")) + } + + p.pushV() + val, _ := p.parseExpr(expr.expr) + p.popV() + // whether it matched or not, consider it a match + return val, true +} diff --git a/common/bootcommand/boot_command.pigeon b/common/bootcommand/boot_command.pigeon new file mode 100644 index 000000000..fdfbba0cf --- /dev/null +++ b/common/bootcommand/boot_command.pigeon @@ -0,0 +1,79 @@ +{ +package bootcommand + +} + +Input <- expr:Expr EOF { + return expr, nil +} + +Expr <- l:( Wait / CharToggle / Special / Literal)+ { + return l, nil +} + +Wait = ExprStart "wait" duration:( Duration / Integer )? ExprEnd { + var d time.Duration + switch t := duration.(type) { + case time.Duration: + d = t + case int64: + d = time.Duration(t) * time.Second + default: + d = time.Second + } + return &waitExpression{d}, nil +} + +CharToggle = ExprStart lit:(Literal) t:(On / Off) ExprEnd { + return &literal{lit.(*literal).s, t.(KeyAction)}, nil +} + +Special = ExprStart s:(SpecialKey) t:(On / Off)? ExprEnd { + l := strings.ToLower(string(s.([]byte))) + if t == nil { + return &specialExpression{l, KeyPress}, nil + } + return &specialExpression{l, t.(KeyAction)}, nil +} + +Number = '-'? Integer ( '.' Digit+ )? { + return string(c.text), nil +} + +Integer = '0' / NonZeroDigit Digit* { + return strconv.ParseInt(string(c.text), 10, 64) +} + +Duration = ( Number TimeUnit )+ { + return time.ParseDuration(string(c.text)) +} + +On = "on"i { + return KeyOn, nil +} + +Off = "off"i { + return KeyOff, nil +} + +Literal = . { + r, _ := utf8.DecodeRune(c.text) + return &literal{r, KeyPress}, nil +} + +ExprEnd = ">" +ExprStart = "<" +SpecialKey = "bs"i / "del"i / "enter"i / "esc"i / "f10"i / "f11"i / "f12"i + / "f1"i / "f2"i / "f3"i / "f4"i / "f5"i / "f6"i / "f7"i / "f8"i / "f9"i + / "return"i / "tab"i / "up"i / "down"i / "spacebar"i / "insert"i / "home"i + / "end"i / "pageUp"i / "pageDown"i / "leftAlt"i / "leftCtrl"i / "leftShift"i + / "rightAlt"i / "rightCtrl"i / "rightShift"i / "leftSuper"i / "rightSuper"i + / "left"i / "right"i + +NonZeroDigit = [1-9] +Digit = [0-9] +TimeUnit = ("ns" / "us" / "µs" / "ms" / "s" / "m" / "h") + +_ "whitespace" <- [ \n\t\r]* + +EOF <- !. diff --git a/common/bootcommand/boot_command_ast.go b/common/bootcommand/boot_command_ast.go new file mode 100644 index 000000000..f680345c4 --- /dev/null +++ b/common/bootcommand/boot_command_ast.go @@ -0,0 +1,157 @@ +package bootcommand + +import ( + "context" + "fmt" + "log" + "strings" + "time" +) + +// KeysAction represents what we want to do with a key press. +// It can take 3 states. 
We either want to: +// * press the key once +// * press and hold +// * press and release +type KeyAction int + +const ( + KeyOn KeyAction = 1 << iota + KeyOff + KeyPress +) + +func (k KeyAction) String() string { + switch k { + case KeyOn: + return "On" + case KeyOff: + return "Off" + case KeyPress: + return "Press" + } + panic(fmt.Sprintf("Unknwon KeyAction %d", k)) +} + +type expression interface { + // Do executes the expression + Do(context.Context, BCDriver) error + // Validate validates the expression without executing it + Validate() error +} + +type expressionSequence []expression + +// Do executes every expression in the sequence and then flushes remaining +// scancodes. +func (s expressionSequence) Do(ctx context.Context, b BCDriver) error { + // validate should never fail here, since it should be called before + // expressionSequence.Do. Only reason we don't panic is so we can clean up. + if errs := s.Validate(); errs != nil { + return fmt.Errorf("Found an invalid boot command. This is likely an error in Packer, so please open a ticket.") + } + + for _, exp := range s { + if err := ctx.Err(); err != nil { + return err + } + if err := exp.Do(ctx, b); err != nil { + return err + } + } + return b.Flush() +} + +// Validate tells us if every expression in the sequence is valid. +func (s expressionSequence) Validate() (errs []error) { + for _, exp := range s { + if err := exp.Validate(); err != nil { + errs = append(errs, err) + } + } + return +} + +// GenerateExpressionSequence generates a sequence of expressions from the +// given command. This is the primary entry point to the boot command parser. +func GenerateExpressionSequence(command string) (expressionSequence, error) { + seq := expressionSequence{} + if command == "" { + return seq, nil + } + got, err := ParseReader("", strings.NewReader(command)) + if err != nil { + return nil, err + } + for _, exp := range got.([]interface{}) { + seq = append(seq, exp.(expression)) + } + return seq, nil +} + +type waitExpression struct { + d time.Duration +} + +// Do waits the amount of time described by the expression. It is cancellable +// through the context. +func (w *waitExpression) Do(ctx context.Context, driver BCDriver) error { + driver.Flush() + log.Printf("[INFO] Waiting %s", w.d) + select { + case <-time.After(w.d): + return nil + case <-ctx.Done(): + return ctx.Err() + } +} + +// Validate returns an error if the time is <= 0 +func (w *waitExpression) Validate() error { + if w.d <= 0 { + return fmt.Errorf("Expecting a positive wait value. Got %s", w.d) + } + return nil +} + +func (w *waitExpression) String() string { + return fmt.Sprintf("Wait<%s>", w.d) +} + +type specialExpression struct { + s string + action KeyAction +} + +// Do sends the special command to the driver, along with the key action. +func (s *specialExpression) Do(ctx context.Context, driver BCDriver) error { + return driver.SendSpecial(s.s, s.action) +} + +// Validate always passes +func (s *specialExpression) Validate() error { + return nil +} + +func (s *specialExpression) String() string { + return fmt.Sprintf("Spec-%s(%s)", s.action, s.s) +} + +type literal struct { + s rune + action KeyAction +} + +// Do sends the key to the driver, along with the key action. 
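Taken together, the grammar and these expression types form a small pipeline: parse a boot command into an `expressionSequence`, then `Do` it against any `BCDriver`. A rough end-to-end sketch with a made-up driver that only prints what it is asked to send (`recordingDriver` and `exampleBootCommand` are illustrative; everything else is defined in this package):

```go
package bootcommand

import (
	"context"
	"fmt"
	"log"
)

// recordingDriver is a stand-in BCDriver used only for illustration.
type recordingDriver struct{}

func (d *recordingDriver) SendKey(key rune, action KeyAction) error {
	fmt.Printf("key %q %s\n", key, action)
	return nil
}

func (d *recordingDriver) SendSpecial(special string, action KeyAction) error {
	fmt.Printf("special <%s> %s\n", special, action)
	return nil
}

func (d *recordingDriver) Flush() error { return nil }

func exampleBootCommand() {
	seq, err := GenerateExpressionSequence("<wait5><leftCtrlOn>c<leftCtrlOff><enter>")
	if err != nil {
		log.Fatal(err)
	}
	if err := seq.Do(context.Background(), &recordingDriver{}); err != nil {
		log.Fatal(err)
	}
}
```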
+func (l *literal) Do(ctx context.Context, driver BCDriver) error { + return driver.SendKey(l.s, l.action) +} + +// Validate always passes +func (l *literal) Validate() error { + return nil +} + +func (l *literal) String() string { + return fmt.Sprintf("LIT-%s(%s)", l.action, string(l.s)) +} diff --git a/common/bootcommand/boot_command_ast_test.go b/common/bootcommand/boot_command_ast_test.go new file mode 100644 index 000000000..49bbb5cd4 --- /dev/null +++ b/common/bootcommand/boot_command_ast_test.go @@ -0,0 +1,134 @@ +package bootcommand + +import ( + "fmt" + "log" + "testing" + + "github.com/stretchr/testify/assert" +) + +func Test_parse(t *testing.T) { + in := "" + in += "foo/bar > one 界" + in += " b" + in += "" + expected := []string{ + "Wait<1s>", + "Wait<20s>", + "Wait<3s>", + "Wait<4m0.000000002s>", + "LIT-Press(f)", + "LIT-Press(o)", + "LIT-Press(o)", + "LIT-Press(/)", + "LIT-Press(b)", + "LIT-Press(a)", + "LIT-Press(r)", + "LIT-Press( )", + "LIT-Press(>)", + "LIT-Press( )", + "LIT-Press(o)", + "LIT-Press(n)", + "LIT-Press(e)", + "LIT-Press( )", + "LIT-Press(界)", + "LIT-On(f)", + "LIT-Press( )", + "LIT-Press(b)", + "LIT-Off(f)", + "LIT-Press(<)", + "LIT-Press(f)", + "LIT-Press(o)", + "LIT-Press(o)", + "LIT-Press(>)", + "Spec-Press(f3)", + "Spec-Press(f12)", + "Spec-Press(spacebar)", + "Spec-Press(leftalt)", + "Spec-Press(rightshift)", + "Spec-Press(rightsuper)", + } + + seq, err := GenerateExpressionSequence(in) + if err != nil { + log.Fatal(err) + } + for i, exp := range seq { + assert.Equal(t, expected[i], fmt.Sprintf("%s", exp)) + log.Printf("%s\n", exp) + } +} + +func Test_special(t *testing.T) { + var specials = []struct { + in string + out string + }{ + { + "", + "Spec-Press(rightshift)", + }, + { + "", + "Spec-On(del)", + }, + { + "", + "Spec-Off(enter)", + }, + } + for _, tt := range specials { + seq, err := GenerateExpressionSequence(tt.in) + if err != nil { + log.Fatal(err) + } + for _, exp := range seq { + assert.Equal(t, tt.out, exp.(*specialExpression).String()) + } + } +} + +func Test_validation(t *testing.T) { + var expressions = []struct { + in string + valid bool + }{ + { + "", + true, + }, + { + "", + false, + }, + { + "", + true, + }, + { + "<", + true, + }, + } + for _, tt := range expressions { + exp, err := GenerateExpressionSequence(tt.in) + if err != nil { + log.Fatal(err) + } + + assert.Len(t, exp, 1) + err = exp[0].Validate() + if tt.valid { + assert.NoError(t, err) + } else { + assert.Error(t, err) + } + } +} + +func Test_empty(t *testing.T) { + exp, err := GenerateExpressionSequence("") + assert.NoError(t, err, "should have parsed an empty input okay.") + assert.Len(t, exp, 0) +} diff --git a/common/bootcommand/config.go b/common/bootcommand/config.go new file mode 100644 index 000000000..532c7c82c --- /dev/null +++ b/common/bootcommand/config.go @@ -0,0 +1,60 @@ +package bootcommand + +import ( + "fmt" + "strings" + "time" + + "github.com/hashicorp/packer/template/interpolate" +) + +type BootConfig struct { + RawBootWait string `mapstructure:"boot_wait"` + BootCommand []string `mapstructure:"boot_command"` + + BootWait time.Duration `` +} + +type VNCConfig struct { + BootConfig `mapstructure:",squash"` + DisableVNC bool `mapstructure:"disable_vnc"` +} + +func (c *BootConfig) Prepare(ctx *interpolate.Context) (errs []error) { + if c.RawBootWait == "" { + c.RawBootWait = "10s" + } + if c.RawBootWait != "" { + bw, err := time.ParseDuration(c.RawBootWait) + if err != nil { + errs = append( + errs, fmt.Errorf("Failed parsing boot_wait: %s", err)) + } else 
{ + c.BootWait = bw + } + } + + if c.BootCommand != nil { + expSeq, err := GenerateExpressionSequence(c.FlatBootCommand()) + if err != nil { + errs = append(errs, err) + } else if vErrs := expSeq.Validate(); vErrs != nil { + errs = append(errs, vErrs...) + } + } + + return +} + +func (c *BootConfig) FlatBootCommand() string { + return strings.Join(c.BootCommand, "") +} + +func (c *VNCConfig) Prepare(ctx *interpolate.Context) (errs []error) { + if len(c.BootCommand) > 0 && c.DisableVNC { + errs = append(errs, + fmt.Errorf("A boot command cannot be used when vnc is disabled.")) + } + errs = append(errs, c.BootConfig.Prepare(ctx)...) + return +} diff --git a/common/bootcommand/config_test.go b/common/bootcommand/config_test.go new file mode 100644 index 000000000..551f2c785 --- /dev/null +++ b/common/bootcommand/config_test.go @@ -0,0 +1,65 @@ +package bootcommand + +import ( + "testing" + + "github.com/hashicorp/packer/template/interpolate" +) + +func TestConfigPrepare(t *testing.T) { + var c *BootConfig + + // Test a default boot_wait + c = new(BootConfig) + c.RawBootWait = "" + errs := c.Prepare(&interpolate.Context{}) + if len(errs) > 0 { + t.Fatalf("bad: %#v", errs) + } + if c.RawBootWait != "10s" { + t.Fatalf("bad value: %s", c.RawBootWait) + } + + // Test with a bad boot_wait + c = new(BootConfig) + c.RawBootWait = "this is not good" + errs = c.Prepare(&interpolate.Context{}) + if len(errs) == 0 { + t.Fatal("should error") + } + + // Test with a good one + c = new(BootConfig) + c.RawBootWait = "5s" + errs = c.Prepare(&interpolate.Context{}) + if len(errs) > 0 { + t.Fatalf("bad: %#v", errs) + } +} + +func TestVNCConfigPrepare(t *testing.T) { + var c *VNCConfig + + // Test with a boot command + c = new(VNCConfig) + c.BootCommand = []string{"a", "b"} + errs := c.Prepare(&interpolate.Context{}) + if len(errs) > 0 { + t.Fatalf("bad: %#v", errs) + } + + // Test with disabled vnc + c.DisableVNC = true + errs = c.Prepare(&interpolate.Context{}) + if len(errs) == 0 { + t.Fatal("should error") + } + + // Test no boot command with no vnc + c = new(VNCConfig) + c.DisableVNC = true + errs = c.Prepare(&interpolate.Context{}) + if len(errs) > 0 { + t.Fatalf("bad: %#v", errs) + } +} diff --git a/common/bootcommand/driver.go b/common/bootcommand/driver.go new file mode 100644 index 000000000..04b0eecd6 --- /dev/null +++ b/common/bootcommand/driver.go @@ -0,0 +1,11 @@ +package bootcommand + +const shiftedChars = "~!@#$%^&*()_+{}|:\"<>?" + +// BCDriver is our access to the VM we want to type boot commands to +type BCDriver interface { + SendKey(key rune, action KeyAction) error + SendSpecial(special string, action KeyAction) error + // Flush will be called when we want to send scancodes to the VM. 
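For builders, the intended flow around the config types above is: mapstructure fills `BootConfig` (or `VNCConfig`) from the `boot_wait` and `boot_command` template keys, `Prepare` validates them, and `FlatBootCommand` hands the joined string to the parser. A hedged sketch of that flow, constructing the config by hand instead of via a template:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/packer/common/bootcommand"
	"github.com/hashicorp/packer/template/interpolate"
)

func main() {
	c := bootcommand.BootConfig{
		RawBootWait: "10s",
		BootCommand: []string{"<esc><wait>", "install<enter>"},
	}
	if errs := c.Prepare(&interpolate.Context{}); len(errs) > 0 {
		log.Fatalf("invalid boot configuration: %v", errs)
	}
	fmt.Println(c.BootWait, c.FlatBootCommand())
}
```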
+ Flush() error +} diff --git a/common/bootcommand/gen.go b/common/bootcommand/gen.go new file mode 100644 index 000000000..c5796e65e --- /dev/null +++ b/common/bootcommand/gen.go @@ -0,0 +1,3 @@ +package bootcommand + +//go:generate pigeon -o boot_command.go boot_command.pigeon diff --git a/common/bootcommand/pc_xt_driver.go b/common/bootcommand/pc_xt_driver.go new file mode 100644 index 000000000..7aca80852 --- /dev/null +++ b/common/bootcommand/pc_xt_driver.go @@ -0,0 +1,211 @@ +package bootcommand + +import ( + "fmt" + "log" + "os" + "strings" + "time" + "unicode" + "unicode/utf8" + + "github.com/hashicorp/packer/common" +) + +// SendCodeFunc will be called to send codes to the VM +type SendCodeFunc func([]string) error +type scMap map[string]*scancode + +type pcXTDriver struct { + interval time.Duration + sendImpl SendCodeFunc + specialMap scMap + scancodeMap map[rune]byte + buffer [][]string + // TODO: set from env + scancodeChunkSize int +} + +type scancode struct { + make []string + break_ []string +} + +func (sc *scancode) makeBreak() []string { + return append(sc.make, sc.break_...) +} + +// NewPCXTDriver creates a new boot command driver for VMs that expect PC-XT +// keyboard codes. `send` should send its argument to the VM. `chunkSize` should +// be the maximum number of keyboard codes to send to `send` at one time. +func NewPCXTDriver(send SendCodeFunc, chunkSize int) *pcXTDriver { + // We delay (default 100ms) between each input event to allow for CPU or + // network latency. See PackerKeyEnv for tuning. + keyInterval := common.PackerKeyDefault + if delay, err := time.ParseDuration(os.Getenv(common.PackerKeyEnv)); err == nil { + keyInterval = delay + } + // Scancodes reference: https://www.win.tue.nl/~aeb/linux/kbd/scancodes-1.html + // https://www.win.tue.nl/~aeb/linux/kbd/scancodes-10.html + // + // Scancodes are recorded here in pairs. The first entry represents + // the key press and the second entry represents the key release and is + // derived from the first by the addition of 0x80. 
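As a quick illustration of that make/break relationship (the values match the "enter" entry in the table that follows), purely for reference:

```go
package main

import "fmt"

func main() {
	// The break (release) code is the make (press) code plus 0x80;
	// for "enter" the recorded pair is 1c / 9c.
	enterMake := byte(0x1c)
	enterBreak := enterMake + 0x80
	fmt.Printf("%02x %02x\n", enterMake, enterBreak) // prints "1c 9c"
}
```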
+ sMap := make(scMap) + sMap["bs"] = &scancode{[]string{"0e"}, []string{"8e"}} + sMap["del"] = &scancode{[]string{"e0", "53"}, []string{"e0", "d3"}} + sMap["down"] = &scancode{[]string{"e0", "50"}, []string{"e0", "d0"}} + sMap["end"] = &scancode{[]string{"e0", "4f"}, []string{"e0", "cf"}} + sMap["enter"] = &scancode{[]string{"1c"}, []string{"9c"}} + sMap["esc"] = &scancode{[]string{"01"}, []string{"81"}} + sMap["f1"] = &scancode{[]string{"3b"}, []string{"bb"}} + sMap["f2"] = &scancode{[]string{"3c"}, []string{"bc"}} + sMap["f3"] = &scancode{[]string{"3d"}, []string{"bd"}} + sMap["f4"] = &scancode{[]string{"3e"}, []string{"be"}} + sMap["f5"] = &scancode{[]string{"3f"}, []string{"bf"}} + sMap["f6"] = &scancode{[]string{"40"}, []string{"c0"}} + sMap["f7"] = &scancode{[]string{"41"}, []string{"c1"}} + sMap["f8"] = &scancode{[]string{"42"}, []string{"c2"}} + sMap["f9"] = &scancode{[]string{"43"}, []string{"c3"}} + sMap["f10"] = &scancode{[]string{"44"}, []string{"c4"}} + sMap["f11"] = &scancode{[]string{"57"}, []string{"d7"}} + sMap["f12"] = &scancode{[]string{"58"}, []string{"d8"}} + sMap["home"] = &scancode{[]string{"e0", "47"}, []string{"e0", "c7"}} + sMap["insert"] = &scancode{[]string{"e0", "52"}, []string{"e0", "d2"}} + sMap["left"] = &scancode{[]string{"e0", "4b"}, []string{"e0", "cb"}} + sMap["leftalt"] = &scancode{[]string{"38"}, []string{"b8"}} + sMap["leftctrl"] = &scancode{[]string{"1d"}, []string{"9d"}} + sMap["leftshift"] = &scancode{[]string{"2a"}, []string{"aa"}} + sMap["leftsuper"] = &scancode{[]string{"e0", "5b"}, []string{"e0", "db"}} + sMap["menu"] = &scancode{[]string{"e0", "5d"}, []string{"e0", "dd"}} + sMap["pagedown"] = &scancode{[]string{"e0", "51"}, []string{"e0", "d1"}} + sMap["pageup"] = &scancode{[]string{"e0", "49"}, []string{"e0", "c9"}} + sMap["return"] = &scancode{[]string{"1c"}, []string{"9c"}} + sMap["right"] = &scancode{[]string{"e0", "4d"}, []string{"e0", "cd"}} + sMap["rightalt"] = &scancode{[]string{"e0", "38"}, []string{"e0", "b8"}} + sMap["rightctrl"] = &scancode{[]string{"e0", "1d"}, []string{"e0", "9d"}} + sMap["rightshift"] = &scancode{[]string{"36"}, []string{"b6"}} + sMap["rightsuper"] = &scancode{[]string{"e0", "5c"}, []string{"e0", "dc"}} + sMap["spacebar"] = &scancode{[]string{"39"}, []string{"b9"}} + sMap["tab"] = &scancode{[]string{"0f"}, []string{"8f"}} + sMap["up"] = &scancode{[]string{"e0", "48"}, []string{"e0", "c8"}} + + scancodeIndex := make(map[string]byte) + scancodeIndex["1234567890-="] = 0x02 + scancodeIndex["!@#$%^&*()_+"] = 0x02 + scancodeIndex["qwertyuiop[]"] = 0x10 + scancodeIndex["QWERTYUIOP{}"] = 0x10 + scancodeIndex["asdfghjkl;'`"] = 0x1e + scancodeIndex[`ASDFGHJKL:"~`] = 0x1e + scancodeIndex[`\zxcvbnm,./`] = 0x2b + scancodeIndex["|ZXCVBNM<>?"] = 0x2b + scancodeIndex[" "] = 0x39 + + scancodeMap := make(map[rune]byte) + for chars, start := range scancodeIndex { + var i byte = 0 + for len(chars) > 0 { + r, size := utf8.DecodeRuneInString(chars) + chars = chars[size:] + scancodeMap[r] = start + i + i += 1 + } + } + + return &pcXTDriver{ + interval: keyInterval, + sendImpl: send, + specialMap: sMap, + scancodeMap: scancodeMap, + scancodeChunkSize: chunkSize, + } +} + +// Flush send all scanecodes. 
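A hedged sketch of wiring the driver to a collector function, showing how a shifted character expands into shift make, key make, shift break, key break; the closure, the helper name, and the expected output are illustrative, while `NewPCXTDriver`, `SendKey`, `KeyPress` and `Flush` are the ones defined here:

```go
package bootcommand

import "fmt"

func examplePCXT() {
	var sent [][]string
	d := NewPCXTDriver(func(codes []string) error {
		sent = append(sent, codes)
		return nil
	}, -1) // chunk size -1: everything goes out in a single Flush

	d.SendKey('A', KeyPress) // shifted: "2a" (shift make), "1e" ('a' make), "aa", "9e"
	if err := d.Flush(); err != nil {
		fmt.Println("flush failed:", err)
	}
	fmt.Println(sent) // expected: [[2a 1e aa 9e]]
}
```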
+func (d *pcXTDriver) Flush() error { + defer func() { + d.buffer = nil + }() + sc, err := chunkScanCodes(d.buffer, d.scancodeChunkSize) + if err != nil { + return err + } + for _, b := range sc { + if err := d.sendImpl(b); err != nil { + return err + } + time.Sleep(d.interval) + } + return nil +} + +func (d *pcXTDriver) SendKey(key rune, action KeyAction) error { + keyShift := unicode.IsUpper(key) || strings.ContainsRune(shiftedChars, key) + + var sc []string + + if action&(KeyOn|KeyPress) != 0 { + scInt := d.scancodeMap[key] + if keyShift { + sc = append(sc, "2a") + } + sc = append(sc, fmt.Sprintf("%02x", scInt)) + } + + if action&(KeyOff|KeyPress) != 0 { + scInt := d.scancodeMap[key] + 0x80 + if keyShift { + sc = append(sc, "aa") + } + sc = append(sc, fmt.Sprintf("%02x", scInt)) + } + + log.Printf("Sending char '%c', code '%s', shift %v", + key, strings.Join(sc, ""), keyShift) + + d.send(sc) + return nil +} + +func (d *pcXTDriver) SendSpecial(special string, action KeyAction) error { + keyCode, ok := d.specialMap[special] + if !ok { + return fmt.Errorf("special %s not found.", special) + } + log.Printf("Special code '%s' '<%s>' found, replacing with: %v", action.String(), special, keyCode) + + switch action { + case KeyOn: + d.send(keyCode.make) + case KeyOff: + d.send(keyCode.break_) + case KeyPress: + d.send(keyCode.makeBreak()) + } + return nil +} + +// send stores the codes in an internal buffer. Use Flush to send them. +func (d *pcXTDriver) send(codes []string) { + d.buffer = append(d.buffer, codes) +} + +func chunkScanCodes(sc [][]string, size int) (out [][]string, err error) { + var running []string + for _, codes := range sc { + if size > 0 { + if len(codes) > size { + return nil, fmt.Errorf("chunkScanCodes: size cannot be smaller than sc width.") + } + if len(running)+len(codes) > size { + out = append(out, running) + running = nil + } + } + running = append(running, codes...) 
+ } + if running != nil { + out = append(out, running) + } + return +} diff --git a/common/bootcommand/pc_xt_driver_test.go b/common/bootcommand/pc_xt_driver_test.go new file mode 100644 index 000000000..cd0e04ec8 --- /dev/null +++ b/common/bootcommand/pc_xt_driver_test.go @@ -0,0 +1,123 @@ +package bootcommand + +import ( + "context" + "testing" + + "github.com/stretchr/testify/assert" +) + +func Test_chunkScanCodes(t *testing.T) { + + var chunktests = []struct { + size int + in [][]string + out [][]string + }{ + { + 3, + [][]string{ + {"a", "b"}, + {"c"}, + {"d"}, + {"e", "f"}, + {"g", "h"}, + {"i", "j"}, + {"k"}, + {"l", "m"}, + }, + [][]string{ + {"a", "b", "c"}, + {"d", "e", "f"}, + {"g", "h"}, + {"i", "j", "k"}, + {"l", "m"}, + }, + }, + { + -1, + [][]string{ + {"a", "b"}, + {"c"}, + {"d"}, + {"e", "f"}, + {"g", "h"}, + {"i", "j"}, + {"k"}, + {"l", "m"}, + }, + [][]string{ + {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m"}, + }, + }, + } + + for _, tt := range chunktests { + out, err := chunkScanCodes(tt.in, tt.size) + assert.NoError(t, err) + assert.Equalf(t, tt.out, out, "expecting chunks of %d.", tt.size) + } +} + +func Test_chunkScanCodeError(t *testing.T) { + // can't go from wider to thinner + in := [][]string{ + {"a", "b", "c"}, + {"d", "e", "f"}, + {"g", "h"}, + } + + _, err := chunkScanCodes(in, 2) + assert.Error(t, err) +} + +func Test_pcxtSpecialOnOff(t *testing.T) { + in := "" + expected := []string{"36", "b6", "b6", "36"} + var codes []string + sendCodes := func(c []string) error { + codes = c + return nil + } + d := NewPCXTDriver(sendCodes, -1) + seq, err := GenerateExpressionSequence(in) + assert.NoError(t, err) + err = seq.Do(context.Background(), d) + assert.NoError(t, err) + assert.Equal(t, expected, codes) +} + +func Test_pcxtSpecial(t *testing.T) { + in := "" + expected := []string{"e0", "4b", "e0", "cb"} + var codes []string + sendCodes := func(c []string) error { + codes = c + return nil + } + d := NewPCXTDriver(sendCodes, -1) + seq, err := GenerateExpressionSequence(in) + assert.NoError(t, err) + err = seq.Do(context.Background(), d) + assert.NoError(t, err) + assert.Equal(t, expected, codes) +} + +func Test_flushes(t *testing.T) { + in := "abc123098" + expected := [][]string{ + {"1e", "9e", "30", "b0", "2e", "ae", "02", "82", "03", "83", "04", "84"}, + {"0b", "8b", "0a", "8a", "09", "89"}, + } + var actual [][]string + sendCodes := func(c []string) error { + actual = append(actual, c) + return nil + } + d := NewPCXTDriver(sendCodes, -1) + seq, err := GenerateExpressionSequence(in) + assert.NoError(t, err) + err = seq.Do(context.Background(), d) + assert.NoError(t, err) + assert.Equal(t, expected, actual) +} diff --git a/common/bootcommand/vnc_driver.go b/common/bootcommand/vnc_driver.go new file mode 100644 index 000000000..dfe75b9b2 --- /dev/null +++ b/common/bootcommand/vnc_driver.go @@ -0,0 +1,147 @@ +package bootcommand + +import ( + "fmt" + "log" + "os" + "strings" + "time" + "unicode" + + "github.com/hashicorp/packer/common" +) + +const KeyLeftShift uint32 = 0xFFE1 + +type VNCKeyEvent interface { + KeyEvent(uint32, bool) error +} + +type vncDriver struct { + c VNCKeyEvent + interval time.Duration + specialMap map[string]uint32 + // keyEvent can set this error which will prevent it from continuing + err error +} + +func NewVNCDriver(c VNCKeyEvent) *vncDriver { + // We delay (default 100ms) between each key event to allow for CPU or + // network latency. See PackerKeyEnv for tuning. 
+ keyInterval := common.PackerKeyDefault + if delay, err := time.ParseDuration(os.Getenv(common.PackerKeyEnv)); err == nil { + keyInterval = delay + } + + // Scancodes reference: https://github.com/qemu/qemu/blob/master/ui/vnc_keysym.h + sMap := make(map[string]uint32) + sMap["bs"] = 0xFF08 + sMap["del"] = 0xFFFF + sMap["down"] = 0xFF54 + sMap["end"] = 0xFF57 + sMap["enter"] = 0xFF0D + sMap["esc"] = 0xFF1B + sMap["f1"] = 0xFFBE + sMap["f2"] = 0xFFBF + sMap["f3"] = 0xFFC0 + sMap["f4"] = 0xFFC1 + sMap["f5"] = 0xFFC2 + sMap["f6"] = 0xFFC3 + sMap["f7"] = 0xFFC4 + sMap["f8"] = 0xFFC5 + sMap["f9"] = 0xFFC6 + sMap["f10"] = 0xFFC7 + sMap["f11"] = 0xFFC8 + sMap["f12"] = 0xFFC9 + sMap["home"] = 0xFF50 + sMap["insert"] = 0xFF63 + sMap["left"] = 0xFF51 + sMap["leftalt"] = 0xFFE9 + sMap["leftctrl"] = 0xFFE3 + sMap["leftshift"] = 0xFFE1 + sMap["leftsuper"] = 0xFFEB + sMap["menu"] = 0xFF67 + sMap["pagedown"] = 0xFF56 + sMap["pageup"] = 0xFF55 + sMap["return"] = 0xFF0D + sMap["right"] = 0xFF53 + sMap["rightalt"] = 0xFFEA + sMap["rightctrl"] = 0xFFE4 + sMap["rightshift"] = 0xFFE2 + sMap["rightsuper"] = 0xFFEC + sMap["spacebar"] = 0x020 + sMap["tab"] = 0xFF09 + sMap["up"] = 0xFF52 + + return &vncDriver{ + c: c, + interval: keyInterval, + specialMap: sMap, + } +} + +func (d *vncDriver) keyEvent(k uint32, down bool) error { + if d.err != nil { + return nil + } + if err := d.c.KeyEvent(k, down); err != nil { + d.err = err + return err + } + time.Sleep(d.interval) + return nil +} + +// Flush does nothing here +func (d *vncDriver) Flush() error { + return nil +} + +func (d *vncDriver) SendKey(key rune, action KeyAction) error { + keyShift := unicode.IsUpper(key) || strings.ContainsRune(shiftedChars, key) + keyCode := uint32(key) + log.Printf("Sending char '%c', code 0x%X, shift %v", key, keyCode, keyShift) + + switch action { + case KeyOn: + if keyShift { + d.keyEvent(KeyLeftShift, true) + } + d.keyEvent(keyCode, true) + case KeyOff: + if keyShift { + d.keyEvent(KeyLeftShift, false) + } + d.keyEvent(keyCode, false) + case KeyPress: + if keyShift { + d.keyEvent(KeyLeftShift, true) + } + d.keyEvent(keyCode, true) + d.keyEvent(keyCode, false) + if keyShift { + d.keyEvent(KeyLeftShift, false) + } + } + return d.err +} + +func (d *vncDriver) SendSpecial(special string, action KeyAction) error { + keyCode, ok := d.specialMap[special] + if !ok { + return fmt.Errorf("special %s not found.", special) + } + log.Printf("Special code '<%s>' found, replacing with: 0x%X", special, keyCode) + + switch action { + case KeyOn: + d.keyEvent(keyCode, true) + case KeyOff: + d.keyEvent(keyCode, false) + case KeyPress: + d.keyEvent(keyCode, true) + d.keyEvent(keyCode, false) + } + + return d.err +} diff --git a/common/bootcommand/vnc_driver_test.go b/common/bootcommand/vnc_driver_test.go new file mode 100644 index 000000000..11ca41669 --- /dev/null +++ b/common/bootcommand/vnc_driver_test.go @@ -0,0 +1,39 @@ +package bootcommand + +import ( + "context" + "testing" + + "github.com/stretchr/testify/assert" +) + +type event struct { + u uint32 + down bool +} + +type sender struct { + e []event +} + +func (s *sender) KeyEvent(u uint32, down bool) error { + s.e = append(s.e, event{u, down}) + return nil +} + +func Test_vncSpecialLookup(t *testing.T) { + in := "" + expected := []event{ + {0xFFE2, true}, + {0xFFE2, false}, + {0xFFE2, false}, + {0xFFE2, true}, + } + s := &sender{} + d := NewVNCDriver(s) + seq, err := GenerateExpressionSequence(in) + assert.NoError(t, err) + err = seq.Do(context.Background(), d) + assert.NoError(t, err) + 
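+ // the driver should have forwarded exactly the expected key events, in order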
assert.Equal(t, expected, s.e) +} diff --git a/common/config.go b/common/config.go index cc7eced6a..aa632ceae 100644 --- a/common/config.go +++ b/common/config.go @@ -5,6 +5,7 @@ import ( "net/url" "os" "path/filepath" + "regexp" "runtime" "strings" "time" @@ -43,93 +44,176 @@ func ChooseString(vals ...string) string { return "" } -// DownloadableURL processes a URL that may also be a file path and returns -// a completely valid URL. For example, the original URL might be "local/file.iso" -// which isn't a valid URL. DownloadableURL will return "file:///local/file.iso" -func DownloadableURL(original string) (string, error) { - if runtime.GOOS == "windows" { - // If the distance to the first ":" is just one character, assume - // we're dealing with a drive letter and thus a file path. - idx := strings.Index(original, ":") - if idx == 1 { - original = "file:///" + original - } +// SupportedProtocol verifies that the url passed is actually supported or not +// This will also validate that the protocol is one that's actually implemented. +func SupportedProtocol(u *url.URL) bool { + // url.Parse shouldn't return nil except on error....but it can. + if u == nil { + return false } - url, err := url.Parse(original) + // build a dummy NewDownloadClient since this is the only place that valid + // protocols are actually exposed. + cli := NewDownloadClient(&DownloadConfig{}) + + // Iterate through each downloader to see if a protocol was found. + ok := false + for scheme := range cli.config.DownloaderMap { + if strings.ToLower(u.Scheme) == strings.ToLower(scheme) { + ok = true + } + } + return ok +} + +// DownloadableURL processes a URL that may also be a file path and returns +// a completely valid URL representing the requested file. For example, +// the original URL might be "local/file.iso" which isn't a valid URL, +// and so DownloadableURL will return "file://local/file.iso" +// No other transformations are done to the path. +func DownloadableURL(original string) (string, error) { + var absPrefix, result string + + absPrefix = "" + if runtime.GOOS == "windows" { + absPrefix = "/" + } + + // Check that the user specified a UNC path, and promote it to an smb:// uri. + if strings.HasPrefix(original, "\\\\") && len(original) > 2 && original[2] != '?' { + result = filepath.ToSlash(original[2:]) + return fmt.Sprintf("smb://%s", result), nil + } + + // Fix the url if it's using bad characters commonly mistaken with a path. + original = filepath.ToSlash(original) + + // Check to see that this is a parseable URL with a scheme and a host. + // If so, then just pass it through. + if u, err := url.Parse(original); err == nil && u.Scheme != "" && u.Host != "" { + return original, nil + } + + // If it's a file scheme, then convert it back to a regular path so the next + // case which forces it to an absolute path, will correct it. + if u, err := url.Parse(original); err == nil && strings.ToLower(u.Scheme) == "file" { + original = u.Path + } + + // If we're on Windows and we start with a slash, then this absolute path + // is wrong. Fix it up, so the next case can figure out the absolute path. + if rpath := strings.SplitN(original, "/", 2); rpath[0] == "" && runtime.GOOS == "windows" { + result = rpath[1] + } else { + result = original + } + + // Since we should be some kind of path (relative or absolute), check + // that the file exists, then make it an absolute path so we can return an + // absolute uri. 
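+ // e.g. (illustrative) "test-fixtures/root/basic.txt" would become
+ // "file:///home/user/packer/test-fixtures/root/basic.txt" on a POSIX system,
+ // or "file:///C:/Users/user/packer/test-fixtures/root/basic.txt" on Windows
+ // (the extra leading slash comes from absPrefix above).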
+ if _, err := os.Stat(result); err == nil { + result, err = filepath.Abs(filepath.FromSlash(result)) + if err != nil { + return "", err + } + + result, err = filepath.EvalSymlinks(result) + if err != nil { + return "", err + } + + result = filepath.Clean(result) + return fmt.Sprintf("file://%s%s", absPrefix, filepath.ToSlash(result)), nil + } + + // Otherwise, check if it was originally an absolute path, and fix it if so. + if strings.HasPrefix(original, "/") { + return fmt.Sprintf("file://%s%s", absPrefix, result), nil + } + + // Anything left should be a non-existent relative path. So fix it up here. + result = filepath.ToSlash(filepath.Clean(result)) + return fmt.Sprintf("file://./%s", result), nil +} + +// Force the parameter into a url. This will transform the parameter into +// a proper url, removing slashes, adding the proper prefix, etc. +func ValidatedURL(original string) (string, error) { + + // See if the user failed to give a url + if ok, _ := regexp.MatchString("(?m)^[^[:punct:]]+://", original); !ok { + + // So since no magic was found, this must be a path. + result, err := DownloadableURL(original) + if err == nil { + return ValidatedURL(result) + } + + return "", err + } + + // Verify that the url is parseable...just in case. + u, err := url.Parse(original) if err != nil { return "", err } - if url.Scheme == "" { - url.Scheme = "file" + // We should now have a url, so verify that it's a protocol we support. + if !SupportedProtocol(u) { + return "", fmt.Errorf("Unsupported protocol scheme! (%#v)", u) } - if url.Scheme == "file" { - // Windows file handling is all sorts of tricky... - if runtime.GOOS == "windows" { - // If the path is using Windows-style slashes, URL parses - // it into the host field. - if url.Path == "" && strings.Contains(url.Host, `\`) { - url.Path = url.Host - url.Host = "" - } - - // For Windows absolute file paths, remove leading / prior to processing - // since net/url turns "C:/" into "/C:/" - if len(url.Path) > 0 && url.Path[0] == '/' { - url.Path = url.Path[1:len(url.Path)] - } - } - - // Only do the filepath transformations if the file appears - // to actually exist. - if _, err := os.Stat(url.Path); err == nil { - url.Path, err = filepath.Abs(url.Path) - if err != nil { - return "", err - } - - url.Path, err = filepath.EvalSymlinks(url.Path) - if err != nil { - return "", err - } - - url.Path = filepath.Clean(url.Path) - } - - if runtime.GOOS == "windows" { - // Also replace all backslashes with forwardslashes since Windows - // users are likely to do this but the URL should actually only - // contain forward slashes. - url.Path = strings.Replace(url.Path, `\`, `/`, -1) - } - } - - // Make sure it is lowercased - url.Scheme = strings.ToLower(url.Scheme) - - // This is to work around issue #5927. This can safely be removed once - // we distribute with a version of Go that fixes that bug. - // - // See: https://code.google.com/p/go/issues/detail?id=5927 - if url.Path != "" && url.Path[0] != '/' { - url.Path = "/" + url.Path - } - - // Verify that the scheme is something we support in our common downloader. 
- supported := []string{"file", "http", "https"} - found := false - for _, s := range supported { - if url.Scheme == s { - found = true - break - } - } - - if !found { - return "", fmt.Errorf("Unsupported URL scheme: %s", url.Scheme) - } - - return url.String(), nil + // We should now have a properly formatted and supported url + return u.String(), nil +} + +// FileExistsLocally takes the URL output from DownloadableURL, and determines +// whether it is present on the file system. +// example usage: +// +// myFile, err = common.DownloadableURL(c.SourcePath) +// ... +// fileExists := common.StatURL(myFile) +// possible output: +// true -- should occur if the file is present, or if the file is not present, +// but is not supposed to be (e.g. the schema is http://, not file://) +// false -- should occur if there was an error stating the file, so the +// file is not present when it should be. + +func FileExistsLocally(original string) bool { + // original should be something like file://C:/my/path.iso + u, _ := url.Parse(original) + + // First create a dummy downloader so we can figure out which + // protocol to use. + cli := NewDownloadClient(&DownloadConfig{}) + d, ok := cli.config.DownloaderMap[u.Scheme] + if !ok { + return false + } + + // Check to see that it's got a Local way of doing things. + local, ok := d.(LocalDownloader) + if !ok { + return true // XXX: Remote URLs short-circuit this logic. + } + + // Figure out where we're at. + wd, err := os.Getwd() + if err != nil { + return false + } + + // Now figure out the real path to the file. + realpath, err := local.toPath(wd, *u) + if err != nil { + return false + } + + // Finally we can seek the truth via os.Stat. + _, err = os.Stat(realpath) + if err != nil { + return false + } + return true } diff --git a/common/config_test.go b/common/config_test.go index 8bd684739..8f170fe1d 100644 --- a/common/config_test.go +++ b/common/config_test.go @@ -37,21 +37,21 @@ func TestChooseString(t *testing.T) { } } -func TestDownloadableURL(t *testing.T) { +func TestValidatedURL(t *testing.T) { // Invalid URL: has hex code in host - _, err := DownloadableURL("http://what%20.com") + _, err := ValidatedURL("http://what%20.com") if err == nil { - t.Fatal("expected err") + t.Fatalf("expected err : %s", err) } // Invalid: unsupported scheme - _, err = DownloadableURL("ftp://host.com/path") + _, err = ValidatedURL("ftp://host.com/path") if err == nil { - t.Fatal("expected err") + t.Fatalf("expected err : %s", err) } // Valid: http - u, err := DownloadableURL("HTTP://packer.io/path") + u, err := ValidatedURL("HTTP://packer.io/path") if err != nil { t.Fatalf("err: %s", err) } @@ -60,14 +60,153 @@ func TestDownloadableURL(t *testing.T) { t.Fatalf("bad: %s", u) } - // No path - u, err = DownloadableURL("HTTP://packer.io") - if err != nil { - t.Fatalf("err: %s", err) + cases := []struct { + InputString string + OutputURL string + ErrExpected bool + }{ + // Invalid URL: has hex code in host + {"http://what%20.com", "", true}, + // Valid: http + {"HTTP://packer.io/path", "http://packer.io/path", false}, + // No path + {"HTTP://packer.io", "http://packer.io", false}, + // Invalid: unsupported scheme + {"ftp://host.com/path", "", true}, } - if u != "http://packer.io" { - t.Fatalf("bad: %s", u) + for _, tc := range cases { + u, err := ValidatedURL(tc.InputString) + if u != tc.OutputURL { + t.Fatal(fmt.Sprintf("Error with URL %s: got %s but expected %s", + tc.InputString, tc.OutputURL, u)) + } + if (err != nil) != tc.ErrExpected { + if tc.ErrExpected == true { + 
t.Fatal(fmt.Sprintf("Error with URL %s: we expected "+ + "ValidatedURL to return an error but didn't get one.", + tc.InputString)) + } else { + t.Fatal(fmt.Sprintf("Error with URL %s: we did not expect an "+ + " error from ValidatedURL but we got: %s", + tc.InputString, err)) + } + } + } +} + +func GetNativePathToTestFixtures(t *testing.T) string { + const path = "./test-fixtures" + res, err := filepath.Abs(path) + if err != nil { + t.Fatalf("err converting test-fixtures path into an absolute path : %s", err) + } + return res +} + +func GetPortablePathToTestFixtures(t *testing.T) string { + res := GetNativePathToTestFixtures(t) + return filepath.ToSlash(res) +} + +func TestDownloadableURL_WindowsFiles(t *testing.T) { + if runtime.GOOS == "windows" { + portablepath := GetPortablePathToTestFixtures(t) + nativepath := GetNativePathToTestFixtures(t) + + dirCases := []struct { + InputString string + OutputURL string + ErrExpected bool + }{ // TODO: add different directories + { + fmt.Sprintf("%s\\SomeDir\\myfile.txt", nativepath), + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // without the drive makes this native path a relative file:// uri + "test-fixtures\\SomeDir\\myfile.txt", + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // without the drive makes this native path a relative file:// uri + "test-fixtures/SomeDir/myfile.txt", + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // UNC paths being promoted to smb:// uri scheme. + fmt.Sprintf("\\\\localhost\\C$\\%s\\SomeDir\\myfile.txt", nativepath), + fmt.Sprintf("smb://localhost/C$/%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // Absolute uri (incorrect slash type) + fmt.Sprintf("file:///%s\\SomeDir\\myfile.txt", nativepath), + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // Absolute uri (existing and mis-spelled) + fmt.Sprintf("file:///%s/Somedir/myfile.txt", nativepath), + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // Absolute path (non-existing) + "\\absolute\\path\\to\\non-existing\\file.txt", + "file:///absolute/path/to/non-existing/file.txt", + false, + }, + { // Absolute paths (existing) + fmt.Sprintf("%s/SomeDir/myfile.txt", nativepath), + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // Relative path (non-existing) + "./nonexisting/relative/path/to/file.txt", + "file://./nonexisting/relative/path/to/file.txt", + false, + }, + { // Relative path (existing) + "./test-fixtures/SomeDir/myfile.txt", + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // Absolute uri (existing and with `/` prefix) + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + fmt.Sprintf("file:///%s/SomeDir/myfile.txt", portablepath), + false, + }, + { // Absolute uri (non-existing and with `/` prefix) + "file:///path/to/non-existing/file.txt", + "file:///path/to/non-existing/file.txt", + false, + }, + { // Absolute uri (non-existing and missing `/` prefix) + "file://path/to/non-existing/file.txt", + "file://path/to/non-existing/file.txt", + false, + }, + { // Absolute uri and volume (non-existing and with `/` prefix) + "file:///T:/path/to/non-existing/file.txt", + "file:///T:/path/to/non-existing/file.txt", + false, + }, + { // Absolute uri and volume (non-existing and missing `/` prefix) + "file://T:/path/to/non-existing/file.txt", + "file://T:/path/to/non-existing/file.txt", + false, + }, + } + // Run through test cases 
to make sure they all parse correctly + for idx, tc := range dirCases { + u, err := DownloadableURL(tc.InputString) + if (err != nil) != tc.ErrExpected { + t.Fatalf("Test Case %d failed: Expected err = %#v, err = %#v, input = %s", + idx, tc.ErrExpected, err, tc.InputString) + } + if u != tc.OutputURL { + t.Fatalf("Test Case %d failed: Expected %s but received %s from input %s", + idx, tc.OutputURL, u, tc.InputString) + } + } } } @@ -85,10 +224,12 @@ func TestDownloadableURL_FilePaths(t *testing.T) { } tfPath = filepath.Clean(tfPath) - filePrefix := "file://" + + // If we're running windows, then absolute URIs are `/`-prefixed. + platformPrefix := "" if runtime.GOOS == "windows" { - filePrefix += "/" + platformPrefix = "/" } // Relative filepath. We run this test in a func so that @@ -111,8 +252,9 @@ func TestDownloadableURL_FilePaths(t *testing.T) { t.Fatalf("err: %s", err) } - expected := fmt.Sprintf("%s%s", + expected := fmt.Sprintf("%s%s%s", filePrefix, + platformPrefix, strings.Replace(tfPath, `\`, `/`, -1)) if u != expected { t.Fatalf("unexpected: %#v != %#v", u, expected) @@ -120,21 +262,22 @@ func TestDownloadableURL_FilePaths(t *testing.T) { }() // Test some cases with and without a schema prefix - for _, prefix := range []string{"", filePrefix} { + for _, prefix := range []string{"", filePrefix + platformPrefix} { // Nonexistent file _, err = DownloadableURL(prefix + "i/dont/exist") if err != nil { t.Fatalf("err: %s", err) } - // Good file + // Good file (absolute) u, err := DownloadableURL(prefix + tfPath) if err != nil { t.Fatalf("err: %s", err) } - expected := fmt.Sprintf("%s%s", + expected := fmt.Sprintf("%s%s%s", filePrefix, + platformPrefix, strings.Replace(tfPath, `\`, `/`, -1)) if u != expected { t.Fatalf("unexpected: %s != %s", u, expected) @@ -142,6 +285,32 @@ func TestDownloadableURL_FilePaths(t *testing.T) { } } +func TestFileExistsLocally(t *testing.T) { + portablepath := GetPortablePathToTestFixtures(t) + + dirCases := []struct { + Input string + Output bool + }{ + // file exists locally + {fmt.Sprintf("file://%s/SomeDir/myfile.txt", portablepath), true}, + // remote protocols short-circuit and are considered to exist locally + {"https://myfile.iso", true}, + // non-existent protocols do not exist and hence fail + {"nonexistent-protocol://myfile.iso", false}, + // file does not exist locally + {"file:///C/i/dont/exist", false}, + } + // Run through test cases to make sure they all parse correctly + for _, tc := range dirCases { + fileOK := FileExistsLocally(tc.Input) + if fileOK != tc.Output { + t.Fatalf("Test Case failed: Expected %#v, received = %#v, input = %s", + tc.Output, fileOK, tc.Input) + } + } +} + func TestScrubConfig(t *testing.T) { type Inner struct { Baz string diff --git a/common/download.go b/common/download.go index 86dea34d1..59d882b50 100644 --- a/common/download.go +++ b/common/download.go @@ -10,12 +10,19 @@ import ( "errors" "fmt" "hash" - "io" "log" - "net/http" "net/url" "os" + "path" "runtime" + "strings" +) + +// imports related to each Downloader implementation +import ( + "io" + "net/http" + "path/filepath" ) // DownloadConfig is the configuration given to instantiate a new @@ -75,23 +82,38 @@ func HashForType(t string) hash.Hash { // NewDownloadClient returns a new DownloadClient for the given // configuration. func NewDownloadClient(c *DownloadConfig) *DownloadClient { + const mtu = 1500 /* ethernet */ - 20 /* ipv4 */ - 20 /* tcp */ + + // Create downloader map if it hasn't been specified already. 
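+ // Each supported uri scheme maps to the Downloader implementation that
+ // handles it; callers can pre-populate DownloaderMap themselves to override
+ // or extend this default set.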
if c.DownloaderMap == nil { c.DownloaderMap = map[string]Downloader{ + "file": &FileDownloader{bufferSize: nil}, "http": &HTTPDownloader{userAgent: c.UserAgent}, "https": &HTTPDownloader{userAgent: c.UserAgent}, + "smb": &SMBDownloader{bufferSize: nil}, } } - return &DownloadClient{config: c} } -// A downloader is responsible for actually taking a remote URL and -// downloading it. +// A downloader implements the ability to transfer, cancel, or resume a file. type Downloader interface { + Resume() Cancel() + Progress() uint64 + Total() uint64 +} + +// A LocalDownloader is responsible for converting a uri to a local path +// that the platform can open directly. +type LocalDownloader interface { + toPath(string, url.URL) (string, error) +} + +// A RemoteDownloader is responsible for actually taking a remote URL and +// downloading it. +type RemoteDownloader interface { Download(*os.File, *url.URL) error - Progress() uint - Total() uint } func (d *DownloadClient) Cancel() { @@ -105,61 +127,64 @@ func (d *DownloadClient) Get() (string, error) { return d.config.TargetPath, nil } + /* parse the configuration url into a net/url object */ u, err := url.Parse(d.config.Url) if err != nil { return "", err } - log.Printf("Parsed URL: %#v", u) - // Files when we don't copy the file are special cased. - var f *os.File - var finalPath string - sourcePath := "" - if u.Scheme == "file" && !d.config.CopyFile { - // This is special case for relative path in this case user specify - // file:../ and after parse destination goes to Opaque - if u.Path != "" { - // If url.Path is set just use this - finalPath = u.Path - } else if u.Opaque != "" { - // otherwise try url.Opaque - finalPath = u.Opaque - } - // This is a special case where we use a source file that already exists - // locally and we don't make a copy. Normally we would copy or download. - log.Printf("[DEBUG] Using local file: %s", finalPath) + /* use the current working directory as the base for relative uri's */ + cwd, err := os.Getwd() + if err != nil { + return "", err + } - // Remove forward slash on absolute Windows file URLs before processing - if runtime.GOOS == "windows" && len(finalPath) > 0 && finalPath[0] == '/' { - finalPath = finalPath[1:] - } - // Keep track of the source so we can make sure not to delete this later - sourcePath = finalPath - if _, err = os.Stat(finalPath); err != nil { - return "", err - } - } else { + // Determine which is the correct downloader to use + var finalPath string + + var ok bool + d.downloader, ok = d.config.DownloaderMap[u.Scheme] + if !ok { + return "", fmt.Errorf("No downloader for scheme: %s", u.Scheme) + } + + remote, ok := d.downloader.(RemoteDownloader) + if !ok { + return "", fmt.Errorf("Unable to treat uri scheme %s as a Downloader. : %T", u.Scheme, d.downloader) + } + + local, ok := d.downloader.(LocalDownloader) + if !ok && !d.config.CopyFile { + d.config.CopyFile = true + } + + // If we're copying the file, then just use the actual downloader + if d.config.CopyFile { + var f *os.File finalPath = d.config.TargetPath - var ok bool - d.downloader, ok = d.config.DownloaderMap[u.Scheme] - if !ok { - return "", fmt.Errorf("No downloader for scheme: %s", u.Scheme) - } - - // Otherwise, download using the downloader. 
f, err = os.OpenFile(finalPath, os.O_RDWR|os.O_CREATE, os.FileMode(0666)) if err != nil { return "", err } log.Printf("[DEBUG] Downloading: %s", u.String()) - err = d.downloader.Download(f, u) + err = remote.Download(f, u) f.Close() if err != nil { return "", err } + + // Otherwise if our Downloader is a LocalDownloader we can just use the + // path after transforming it. + } else { + finalPath, err = local.toPath(cwd, *u) + if err != nil { + return "", err + } + + log.Printf("[DEBUG] Using local file: %s", finalPath) } if d.config.Hash != nil { @@ -167,7 +192,7 @@ func (d *DownloadClient) Get() (string, error) { verify, err = d.VerifyChecksum(finalPath) if err == nil && !verify { // Only delete the file if we made a copy or downloaded it - if sourcePath != finalPath { + if d.config.CopyFile { os.Remove(finalPath) } @@ -180,7 +205,6 @@ func (d *DownloadClient) Get() (string, error) { return finalPath, err } -// PercentProgress returns the download progress as a percentage. func (d *DownloadClient) PercentProgress() int { if d.downloader == nil { return -1 @@ -211,17 +235,21 @@ func (d *DownloadClient) VerifyChecksum(path string) (bool, error) { // HTTPDownloader is an implementation of Downloader that downloads // files over HTTP. type HTTPDownloader struct { - progress uint - total uint + current uint64 + total uint64 userAgent string } -func (*HTTPDownloader) Cancel() { +func (d *HTTPDownloader) Cancel() { + // TODO(mitchellh): Implement +} + +func (d *HTTPDownloader) Resume() { // TODO(mitchellh): Implement } func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error { - log.Printf("Starting download: %s", src.String()) + log.Printf("Starting download over HTTP: %s", src.String()) // Seek to the beginning by default if _, err := dst.Seek(0, 0); err != nil { @@ -229,7 +257,7 @@ func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error { } // Reset our progress - d.progress = 0 + d.current = 0 // Make the request. We first make a HEAD request so we can check // if the server supports range queries. If the server/URL doesn't @@ -250,16 +278,23 @@ func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error { } resp, err := httpClient.Do(req) - if err == nil && (resp.StatusCode >= 200 && resp.StatusCode < 300) { - // If the HEAD request succeeded, then attempt to set the range - // query if we can. - if resp.Header.Get("Accept-Ranges") == "bytes" { - if fi, err := dst.Stat(); err == nil { - if _, err = dst.Seek(0, os.SEEK_END); err == nil { - req.Header.Set("Range", fmt.Sprintf("bytes=%d-", fi.Size())) - d.progress = uint(fi.Size()) + if err != nil { + log.Printf("[DEBUG] (download) Error making HTTP HEAD request: %s", err.Error()) + } else { + if resp.StatusCode >= 200 && resp.StatusCode < 300 { + // If the HEAD request succeeded, then attempt to set the range + // query if we can. 
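+ // For instance (illustrative), a partially written target file of 4096
+ // bytes results in a "Range: bytes=4096-" request header below, so the
+ // download resumes where it left off.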
+ if resp.Header.Get("Accept-Ranges") == "bytes" { + if fi, err := dst.Stat(); err == nil { + if _, err = dst.Seek(0, os.SEEK_END); err == nil { + req.Header.Set("Range", fmt.Sprintf("bytes=%d-", fi.Size())) + + d.current = uint64(fi.Size()) + } } } + } else { + log.Printf("[DEBUG] (download) Unexpected HTTP response during HEAD request: %s", resp.Status) } } @@ -269,9 +304,14 @@ func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error { resp, err = httpClient.Do(req) if err != nil { return err + } else { + if resp.StatusCode >= 400 && resp.StatusCode < 600 { + return fmt.Errorf("Error making HTTP GET request: %s", resp.Status) + } } - d.total = d.progress + uint(resp.ContentLength) + d.total = d.current + uint64(resp.ContentLength) + var buffer [4096]byte for { n, err := resp.Body.Read(buffer[:]) @@ -279,7 +319,7 @@ func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error { return err } - d.progress += uint(n) + d.current += uint64(n) if _, werr := dst.Write(buffer[:n]); werr != nil { return werr @@ -289,14 +329,253 @@ func (d *HTTPDownloader) Download(dst *os.File, src *url.URL) error { break } } - return nil } -func (d *HTTPDownloader) Progress() uint { - return d.progress +func (d *HTTPDownloader) Progress() uint64 { + return d.current } -func (d *HTTPDownloader) Total() uint { +func (d *HTTPDownloader) Total() uint64 { return d.total } + +// FileDownloader is an implementation of Downloader that downloads +// files using the regular filesystem. +type FileDownloader struct { + bufferSize *uint + + active bool + current uint64 + total uint64 +} + +func (d *FileDownloader) Progress() uint64 { + return d.current +} + +func (d *FileDownloader) Total() uint64 { + return d.total +} + +func (d *FileDownloader) Cancel() { + d.active = false +} + +func (d *FileDownloader) Resume() { + // TODO: Implement +} + +func (d *FileDownloader) toPath(base string, uri url.URL) (string, error) { + var result string + + // absolute path -- file://c:/absolute/path -> c:/absolute/path + if strings.HasSuffix(uri.Host, ":") { + result = path.Join(uri.Host, uri.Path) + + // semi-absolute path (current drive letter) + // -- file:///absolute/path -> drive:/absolute/path + } else if uri.Host == "" && strings.HasPrefix(uri.Path, "/") { + apath := uri.Path + components := strings.Split(apath, "/") + volume := filepath.VolumeName(base) + + // semi-absolute absolute path (includes volume letter) + // -- file://drive:/path -> drive:/absolute/path + if len(components) > 1 && strings.HasSuffix(components[1], ":") { + volume = components[1] + apath = path.Join(components[2:]...) + } + + result = path.Join(volume, apath) + + // relative path -- file://./relative/path -> ./relative/path + } else if uri.Host == "." 
{ + result = path.Join(base, uri.Path) + + // relative path -- file://relative/path -> ./relative/path + } else { + result = path.Join(base, uri.Host, uri.Path) + } + return filepath.ToSlash(result), nil +} + +func (d *FileDownloader) Download(dst *os.File, src *url.URL) error { + d.active = false + + /* check the uri's scheme to make sure it matches */ + if src == nil || src.Scheme != "file" { + return fmt.Errorf("Unexpected uri scheme: %s", src.Scheme) + } + uri := src + + /* use the current working directory as the base for relative uri's */ + cwd, err := os.Getwd() + if err != nil { + return err + } + + /* determine which uri format is being used and convert to a real path */ + realpath, err := d.toPath(cwd, *uri) + if err != nil { + return err + } + + /* download the file using the operating system's facilities */ + d.current = 0 + d.active = true + + f, err := os.Open(realpath) + if err != nil { + return err + } + defer f.Close() + + // get the file size + fi, err := f.Stat() + if err != nil { + return err + } + d.total = uint64(fi.Size()) + + // no bufferSize specified, so copy synchronously. + if d.bufferSize == nil { + var n int64 + n, err = io.Copy(dst, f) + d.active = false + + d.current += uint64(n) + + // use a goro in case someone else wants to enable cancel/resume + } else { + errch := make(chan error) + go func(d *FileDownloader, r io.Reader, w io.Writer, e chan error) { + for d.active { + n, err := io.CopyN(w, r, int64(*d.bufferSize)) + if err != nil { + break + } + + d.current += uint64(n) + } + d.active = false + e <- err + }(d, f, dst, errch) + + // ...and we spin until it's done + err = <-errch + } + f.Close() + return err +} + +// SMBDownloader is an implementation of Downloader that downloads +// files using the "\\" path format on Windows +type SMBDownloader struct { + bufferSize *uint + + active bool + current uint64 + total uint64 +} + +func (d *SMBDownloader) Progress() uint64 { + return d.current +} + +func (d *SMBDownloader) Total() uint64 { + return d.total +} + +func (d *SMBDownloader) Cancel() { + d.active = false +} + +func (d *SMBDownloader) Resume() { + // TODO: Implement +} + +func (d *SMBDownloader) toPath(base string, uri url.URL) (string, error) { + const UNCPrefix = string(os.PathSeparator) + string(os.PathSeparator) + + if runtime.GOOS != "windows" { + return "", fmt.Errorf("Support for SMB based uri's are not supported on %s", runtime.GOOS) + } + + return UNCPrefix + filepath.ToSlash(path.Join(uri.Host, uri.Path)), nil +} + +func (d *SMBDownloader) Download(dst *os.File, src *url.URL) error { + + /* first we warn the world if we're not running windows */ + if runtime.GOOS != "windows" { + return fmt.Errorf("Support for SMB based uri's are not supported on %s", runtime.GOOS) + } + + d.active = false + + /* convert the uri using the net/url module to a UNC path */ + if src == nil || src.Scheme != "smb" { + return fmt.Errorf("Unexpected uri scheme: %s", src.Scheme) + } + uri := src + + /* use the current working directory as the base for relative uri's */ + cwd, err := os.Getwd() + if err != nil { + return err + } + + /* convert uri to an smb-path */ + realpath, err := d.toPath(cwd, *uri) + if err != nil { + return err + } + + /* Open up the "\\"-prefixed path using the Windows filesystem */ + d.current = 0 + d.active = true + + f, err := os.Open(realpath) + if err != nil { + return err + } + defer f.Close() + + // get the file size (at the risk of performance) + fi, err := f.Stat() + if err != nil { + return err + } + d.total = uint64(fi.Size()) + + 
// no bufferSize specified, so copy synchronously. + if d.bufferSize == nil { + var n int64 + n, err = io.Copy(dst, f) + d.active = false + + d.current += uint64(n) + + // use a goro in case someone else wants to enable cancel/resume + } else { + errch := make(chan error) + go func(d *SMBDownloader, r io.Reader, w io.Writer, e chan error) { + for d.active { + n, err := io.CopyN(w, r, int64(*d.bufferSize)) + if err != nil { + break + } + + d.current += uint64(n) + } + d.active = false + e <- err + }(d, f, dst, errch) + + // ...and as usual we spin until it's done + err = <-errch + } + f.Close() + return err +} diff --git a/common/download_test.go b/common/download_test.go index 497a05649..c9175b013 100644 --- a/common/download_test.go +++ b/common/download_test.go @@ -8,7 +8,9 @@ import ( "net/http" "net/http/httptest" "os" + "path/filepath" "runtime" + "strings" "testing" ) @@ -48,7 +50,7 @@ func TestDownloadClientVerifyChecksum(t *testing.T) { func TestDownloadClient_basic(t *testing.T) { tf, _ := ioutil.TempFile("", "packer") tf.Close() - os.Remove(tf.Name()) + defer os.Remove(tf.Name()) ts := httptest.NewServer(http.FileServer(http.Dir("./test-fixtures/root"))) defer ts.Close() @@ -56,6 +58,7 @@ func TestDownloadClient_basic(t *testing.T) { client := NewDownloadClient(&DownloadConfig{ Url: ts.URL + "/basic.txt", TargetPath: tf.Name(), + CopyFile: true, }) path, err := client.Get() @@ -81,7 +84,7 @@ func TestDownloadClient_checksumBad(t *testing.T) { tf, _ := ioutil.TempFile("", "packer") tf.Close() - os.Remove(tf.Name()) + defer os.Remove(tf.Name()) ts := httptest.NewServer(http.FileServer(http.Dir("./test-fixtures/root"))) defer ts.Close() @@ -91,7 +94,9 @@ func TestDownloadClient_checksumBad(t *testing.T) { TargetPath: tf.Name(), Hash: HashForType("md5"), Checksum: checksum, + CopyFile: true, }) + if _, err := client.Get(); err == nil { t.Fatal("should error") } @@ -105,7 +110,7 @@ func TestDownloadClient_checksumGood(t *testing.T) { tf, _ := ioutil.TempFile("", "packer") tf.Close() - os.Remove(tf.Name()) + defer os.Remove(tf.Name()) ts := httptest.NewServer(http.FileServer(http.Dir("./test-fixtures/root"))) defer ts.Close() @@ -115,7 +120,9 @@ func TestDownloadClient_checksumGood(t *testing.T) { TargetPath: tf.Name(), Hash: HashForType("md5"), Checksum: checksum, + CopyFile: true, }) + path, err := client.Get() if err != nil { t.Fatalf("err: %s", err) @@ -145,6 +152,7 @@ func TestDownloadClient_checksumNoDownload(t *testing.T) { TargetPath: "./test-fixtures/root/another.txt", Hash: HashForType("md5"), Checksum: checksum, + CopyFile: true, }) path, err := client.Get() if err != nil { @@ -164,10 +172,29 @@ func TestDownloadClient_checksumNoDownload(t *testing.T) { } } +func TestDownloadClient_notFound(t *testing.T) { + tf, _ := ioutil.TempFile("", "packer") + tf.Close() + defer os.Remove(tf.Name()) + + ts := httptest.NewServer(http.FileServer(http.Dir("./test-fixtures/root"))) + defer ts.Close() + + client := NewDownloadClient(&DownloadConfig{ + Url: ts.URL + "/not-found.txt", + TargetPath: tf.Name(), + }) + + if _, err := client.Get(); err == nil { + t.Fatal("should error") + } +} + func TestDownloadClient_resume(t *testing.T) { tf, _ := ioutil.TempFile("", "packer") tf.Write([]byte("w")) tf.Close() + defer os.Remove(tf.Name()) ts := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) { if r.Method == "HEAD" { @@ -183,7 +210,9 @@ func TestDownloadClient_resume(t *testing.T) { client := NewDownloadClient(&DownloadConfig{ Url: ts.URL, TargetPath: tf.Name(), 
+ CopyFile: true, }) + path, err := client.Get() if err != nil { t.Fatalf("err: %s", err) @@ -204,6 +233,7 @@ func TestDownloadClient_usesDefaultUserAgent(t *testing.T) { if err != nil { t.Fatalf("tempfile error: %s", err) } + tf.Close() defer os.Remove(tf.Name()) defaultUserAgent := "" @@ -240,6 +270,7 @@ func TestDownloadClient_usesDefaultUserAgent(t *testing.T) { config := &DownloadConfig{ Url: server.URL, TargetPath: tf.Name(), + CopyFile: true, } client := NewDownloadClient(config) @@ -258,6 +289,7 @@ func TestDownloadClient_setsUserAgent(t *testing.T) { if err != nil { t.Fatalf("tempfile error: %s", err) } + tf.Close() defer os.Remove(tf.Name()) asserted := false @@ -271,6 +303,7 @@ func TestDownloadClient_setsUserAgent(t *testing.T) { Url: server.URL, TargetPath: tf.Name(), UserAgent: "fancy user agent", + CopyFile: true, } client := NewDownloadClient(config) @@ -351,6 +384,7 @@ func TestDownloadFileUrl(t *testing.T) { if err != nil { t.Fatalf("Unable to detect working directory: %s", err) } + cwd = filepath.ToSlash(cwd) // source_path is a file path and source is a network path sourcePath := fmt.Sprintf("%s/test-fixtures/fileurl/%s", cwd, "cake") @@ -376,11 +410,116 @@ func TestDownloadFileUrl(t *testing.T) { // Verify that we fail to match the checksum _, err = client.Get() if err.Error() != "checksums didn't match expected: 6e6f7065" { - t.Fatalf("Unexpected failure; expected checksum not to match") + t.Fatalf("Unexpected failure; expected checksum not to match. Error was \"%v\"", err) } if _, err = os.Stat(sourcePath); err != nil { t.Errorf("Could not stat source file: %s", sourcePath) } - +} + +// SimulateFileUriDownload is a simple utility function that converts a uri +// into a testable file path whilst ignoring a correct checksum match, stripping +// UNC path info, and then calling stat to ensure the correct file exists. +// (used by TestFileUriTransforms) +func SimulateFileUriDownload(t *testing.T, uri string) (string, error) { + // source_path is a file path and source is a network path + source := fmt.Sprintf(uri) + t.Logf("Trying to download %s", source) + + config := &DownloadConfig{ + Url: source, + // This should be wrong. We want to make sure we don't delete + Checksum: []byte("nope"), + Hash: HashForType("sha256"), + CopyFile: false, + } + + // go go go + client := NewDownloadClient(config) + path, err := client.Get() + + // ignore any non-important checksum errors if it's not a unc path + if !strings.HasPrefix(path, "\\\\") && err.Error() != "checksums didn't match expected: 6e6f7065" { + t.Fatalf("Unexpected failure; expected checksum not to match") + } + + // if it's a unc path, then remove the host and share name so we don't have + // to force the user to enable ADMIN$ and Windows File Sharing + if strings.HasPrefix(path, "\\\\") { + res := strings.SplitN(path, "/", 3) + path = "/" + res[2] + } + + if _, err = os.Stat(path); err != nil { + t.Errorf("Could not stat source file: %s", path) + } + return path, err +} + +// TestFileUriTransforms tests the case where we use a local file uri +// for iso_url. There's a few different formats that a file uri can exist as +// and so we try to test the most useful and common ones. 
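+// For instance (illustrative): "file://./relative/path", "file:///absolute/path",
+// and, on Windows, "file:///C:/absolute/path" or "smb://host/share/path" should
+// all resolve to the same on-disk file when they point at the same location.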
+func TestFileUriTransforms(t *testing.T) { + const testpath = /* have your */ "test-fixtures/fileurl/cake" /* and eat it too */ + const host = "localhost" + + var cwd string + var volume string + var share string + + cwd, err := os.Getwd() + if err != nil { + t.Fatalf("Unable to detect working directory: %s", err) + return + } + cwd = filepath.ToSlash(cwd) + volume = filepath.VolumeName(cwd) + share = volume + + // if a volume was found (on windows), replace the ':' from + // C: to C$ to convert it into a hidden windows share. + if len(share) > 1 && share[len(share)-1] == ':' { + share = share[:len(share)-1] + "$" + } + cwd = cwd[len(volume):] + + t.Logf("TestFileUriTransforms : Running with cwd : '%s'", cwd) + t.Logf("TestFileUriTransforms : Running with volume : '%s'", volume) + + // ./relative/path -> ./relative/path + // /absolute/path -> /absolute/path + // c:/windows/absolute -> c:/windows/absolute + testcases := []string{ + "./%s", + cwd + "/%s", + volume + cwd + "/%s", + } + + // all regular slashed testcases + for _, testcase := range testcases { + uri := "file://" + fmt.Sprintf(testcase, testpath) + t.Logf("TestFileUriTransforms : Trying Uri '%s'", uri) + res, err := SimulateFileUriDownload(t, uri) + if err != nil { + t.Errorf("Unable to transform uri '%s' into a path : %v", uri, err) + } + t.Logf("TestFileUriTransforms : Result Path '%s'", res) + } + + // smb protocol depends on platform support which currently + // only exists on windows. + if runtime.GOOS == "windows" { + // ...and finally the oddball windows native path + // smb://host/sharename/file -> \\host\sharename\file + testcase := host + "/" + share + "/" + cwd[1:] + "/%s" + uri := "smb://" + fmt.Sprintf(testcase, testpath) + t.Logf("TestFileUriTransforms : Trying Uri '%s'", uri) + res, err := SimulateFileUriDownload(t, uri) + if err != nil { + t.Errorf("Unable to transform uri '%s' into a path", uri) + return + } + t.Logf("TestFileUriTransforms : Result Path '%s'", res) + } } diff --git a/common/iso_config.go b/common/iso_config.go index 1ddb78d17..857c13f57 100644 --- a/common/iso_config.go +++ b/common/iso_config.go @@ -95,7 +95,6 @@ func (c *ISOConfig) Prepare(ctx *interpolate.Context) (warnings []string, errs [ errs = append(errs, err) return warnings, errs } - case "": break default: @@ -111,7 +110,7 @@ func (c *ISOConfig) Prepare(ctx *interpolate.Context) (warnings []string, errs [ c.ISOChecksum = strings.ToLower(c.ISOChecksum) for i, url := range c.ISOUrls { - url, err := DownloadableURL(url) + url, err := ValidatedURL(url) if err != nil { errs = append( errs, fmt.Errorf("Failed to parse iso_url %d: %s", i+1, err)) @@ -140,9 +139,26 @@ func (c *ISOConfig) parseCheckSumFile(rd *bufio.Reader) error { if err != nil { return err } + + checksumurl, err := url.Parse(c.ISOChecksumURL) + if err != nil { + return err + } + + absPath, err := filepath.Abs(u.Path) + if err != nil { + return fmt.Errorf("Unable to generate absolute path from provided iso_url: %s", err) + } + + relpath, err := filepath.Rel(filepath.Dir(checksumurl.Path), absPath) + if err != nil { + return err + } + filename := filepath.Base(u.Path) - errNotFound := fmt.Errorf("No checksum for %q found at: %s", filename, c.ISOChecksumURL) + errNotFound := fmt.Errorf("No checksum for %q, %q or %q found at: %s", + filename, relpath, u.Path, c.ISOChecksumURL) for { line, err := rd.ReadString('\n') if err != nil && line == "" { @@ -152,11 +168,14 @@ func (c *ISOConfig) parseCheckSumFile(rd *bufio.Reader) error { if len(parts) < 2 { continue } + options := 
[]string{filename, relpath, "./" + relpath, absPath} if strings.ToLower(parts[0]) == c.ISOChecksumType { // BSD-style checksum - if parts[1] == fmt.Sprintf("(%s)", filename) { - c.ISOChecksum = parts[3] - return nil + for _, match := range options { + if parts[1] == fmt.Sprintf("(%s)", match) { + c.ISOChecksum = parts[3] + return nil + } } } else { // Standard checksum @@ -164,9 +183,11 @@ func (c *ISOConfig) parseCheckSumFile(rd *bufio.Reader) error { // Binary mode parts[1] = parts[1][1:] } - if parts[1] == filename { - c.ISOChecksum = parts[0] - return nil + for _, match := range options { + if parts[1] == match { + c.ISOChecksum = parts[0] + return nil + } } } } diff --git a/common/iso_config_test.go b/common/iso_config_test.go index 7a5a0b026..b465bead8 100644 --- a/common/iso_config_test.go +++ b/common/iso_config_test.go @@ -5,6 +5,7 @@ import ( "io/ioutil" "net/http" "net/http/httptest" + "os" "reflect" "runtime" "testing" @@ -24,11 +25,21 @@ MD5 (other.iso) = bAr MD5 (the-OS.iso) = baZ ` +var cs_bsd_style_subdir = ` +MD5 (other.iso) = bAr +MD5 (./subdir/the-OS.iso) = baZ +` + var cs_gnu_style = ` bAr0 *the-OS.iso baZ0 other.iso ` +var cs_gnu_style_subdir = ` +bAr0 *./subdir/the-OS.iso +baZ0 other.iso +` + var cs_bsd_style_no_newline = ` MD5 (other.iso) = bAr MD5 (the-OS.iso) = baZ` @@ -80,6 +91,8 @@ func TestISOConfigPrepare_ISOChecksumURL(t *testing.T) { i = testISOConfig() i.ISOChecksum = "" cs_file, _ := ioutil.TempFile("", "packer-test-") + defer os.Remove(cs_file.Name()) + defer cs_file.Close() ioutil.WriteFile(cs_file.Name(), []byte(cs_bsd_style), 0666) filePrefix := "file://" if runtime.GOOS == "windows" { @@ -102,6 +115,8 @@ func TestISOConfigPrepare_ISOChecksumURL(t *testing.T) { i = testISOConfig() i.ISOChecksum = "" cs_file, _ = ioutil.TempFile("", "packer-test-") + defer os.Remove(cs_file.Name()) + defer cs_file.Close() ioutil.WriteFile(cs_file.Name(), []byte(cs_gnu_style), 0666) i.ISOChecksumURL = fmt.Sprintf("%s%s", filePrefix, cs_file.Name()) warns, err = i.Prepare(nil) @@ -120,6 +135,8 @@ func TestISOConfigPrepare_ISOChecksumURL(t *testing.T) { i = testISOConfig() i.ISOChecksum = "" cs_file, _ = ioutil.TempFile("", "packer-test-") + defer os.Remove(cs_file.Name()) + defer cs_file.Close() ioutil.WriteFile(cs_file.Name(), []byte(cs_bsd_style_no_newline), 0666) i.ISOChecksumURL = fmt.Sprintf("file://%s", cs_file.Name()) warns, err = i.Prepare(nil) @@ -134,10 +151,35 @@ func TestISOConfigPrepare_ISOChecksumURL(t *testing.T) { t.Fatalf("should've found \"baz\" got: %s", i.ISOChecksum) } + // Test good - ISOChecksumURL BSD style with relative path + i = testISOConfig() + i.ISOChecksum = "" + + cs_dir, _ := ioutil.TempDir("", "packer-testdir-") + cs_file, _ = ioutil.TempFile(cs_dir, "packer-test-") + defer os.RemoveAll(cs_dir) // Removes the file as well + defer cs_file.Close() + ioutil.WriteFile(cs_file.Name(), []byte(cs_bsd_style_subdir), 0666) + i.ISOChecksumURL = fmt.Sprintf("%s%s", filePrefix, cs_file.Name()) + i.RawSingleISOUrl = fmt.Sprintf("%s%s", cs_dir, "/subdir/the-OS.iso") + warns, err = i.Prepare(nil) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + if i.ISOChecksum != "baz" { + t.Fatalf("should've found \"baz\" got: %s", i.ISOChecksum) + } + // Test good - ISOChecksumURL GNU style no newline i = testISOConfig() i.ISOChecksum = "" cs_file, _ = ioutil.TempFile("", "packer-test-") + defer os.Remove(cs_file.Name()) + defer cs_file.Close() ioutil.WriteFile(cs_file.Name(), 
[]byte(cs_gnu_style_no_newline), 0666) i.ISOChecksumURL = fmt.Sprintf("file://%s", cs_file.Name()) warns, err = i.Prepare(nil) @@ -158,6 +200,8 @@ func TestISOConfigPrepare_ISOChecksumURL(t *testing.T) { i.RawSingleISOUrl = "http://www.packer.io/the-OS.iso?stuff=boo" cs_file, _ = ioutil.TempFile("", "packer-test-") + defer os.Remove(cs_file.Name()) + defer cs_file.Close() ioutil.WriteFile(cs_file.Name(), []byte(cs_gnu_style), 0666) i.ISOChecksumURL = fmt.Sprintf("%s%s", filePrefix, cs_file.Name()) warns, err = i.Prepare(nil) @@ -171,6 +215,28 @@ func TestISOConfigPrepare_ISOChecksumURL(t *testing.T) { if i.ISOChecksum != "bar0" { t.Fatalf("should've found \"bar0\" got: %s", i.ISOChecksum) } + + // Test good - ISOChecksumURL GNU style with relative path + i = testISOConfig() + i.ISOChecksum = "" + + cs_file, _ = ioutil.TempFile(cs_dir, "packer-test-") + defer os.Remove(cs_file.Name()) + defer cs_file.Close() + ioutil.WriteFile(cs_file.Name(), []byte(cs_gnu_style_subdir), 0666) + i.ISOChecksumURL = fmt.Sprintf("%s%s", filePrefix, cs_file.Name()) + i.RawSingleISOUrl = fmt.Sprintf("%s%s", cs_dir, "/subdir/the-OS.iso") + warns, err = i.Prepare(nil) + if len(warns) > 0 { + t.Fatalf("bad: %#v", warns) + } + if err != nil { + t.Fatalf("should not have error: %s", err) + } + + if i.ISOChecksum != "bar0" { + t.Fatalf("should've found \"bar0\" got: %s", i.ISOChecksum) + } } func TestISOConfigPrepare_ISOChecksumType(t *testing.T) { diff --git a/common/multistep_debug.go b/common/multistep_debug.go index b726ce2c3..2d1a58291 100644 --- a/common/multistep_debug.go +++ b/common/multistep_debug.go @@ -2,10 +2,11 @@ package common import ( "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "log" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // MultistepDebugFn will return a proper multistep.DebugPauseFn to diff --git a/common/multistep_runner.go b/common/multistep_runner.go index 6f7c0a7cb..b34ee3a04 100644 --- a/common/multistep_runner.go +++ b/common/multistep_runner.go @@ -1,6 +1,7 @@ package common import ( + "context" "fmt" "log" "os" @@ -8,8 +9,8 @@ import ( "strings" "time" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func newRunner(steps []multistep.Step, config PackerConfig, ui packer.Ui) (multistep.Runner, multistep.DebugPauseFn) { @@ -65,11 +66,15 @@ func (s abortStep) InnerStepName() string { return typeName(s.step) } -func (s abortStep) Run(state multistep.StateBag) multistep.StepAction { - return s.step.Run(state) +func (s abortStep) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { + return s.step.Run(ctx, state) } func (s abortStep) Cleanup(state multistep.StateBag) { + err, ok := state.GetOk("error") + if ok { + s.ui.Error(fmt.Sprintf("%s", err)) + } if _, ok := state.GetOk(multistep.StateCancelled); ok { s.ui.Error("Interrupted, aborting...") os.Exit(1) @@ -90,14 +95,19 @@ func (s askStep) InnerStepName() string { return typeName(s.step) } -func (s askStep) Run(state multistep.StateBag) (action multistep.StepAction) { +func (s askStep) Run(ctx context.Context, state multistep.StateBag) (action multistep.StepAction) { for { - action = s.step.Run(state) + action = s.step.Run(ctx, state) if action != multistep.ActionHalt { return } + err, ok := state.GetOk("error") + if ok { + s.ui.Error(fmt.Sprintf("%s", err)) + } + switch ask(s.ui, typeName(s.step), state) { case askCleanup: return diff --git 
a/common/powershell/hyperv/hyperv.go b/common/powershell/hyperv/hyperv.go index b793e171a..343e02776 100644 --- a/common/powershell/hyperv/hyperv.go +++ b/common/powershell/hyperv/hyperv.go @@ -1,7 +1,9 @@ package hyperv import ( + "context" "errors" + "os/exec" "strconv" "strings" @@ -12,7 +14,7 @@ func GetHostAdapterIpAddressForSwitch(switchName string) (string, error) { var script = ` param([string]$switchName, [int]$addressIndex) -$HostVMAdapter = Get-VMNetworkAdapter -ManagementOS -SwitchName $switchName +$HostVMAdapter = Hyper-V\Get-VMNetworkAdapter -ManagementOS -SwitchName $switchName if ($HostVMAdapter){ $HostNetAdapter = Get-NetAdapter | ?{ $_.DeviceID -eq $HostVMAdapter.DeviceId } if ($HostNetAdapter){ @@ -36,10 +38,19 @@ func GetVirtualMachineNetworkAdapterAddress(vmName string) (string, error) { var script = ` param([string]$vmName, [int]$addressIndex) try { - $adapter = Get-VMNetworkAdapter -VMName $vmName -ErrorAction SilentlyContinue - $ip = $adapter.IPAddresses[$addressIndex] - if($ip -eq $null) { - return $false + $adapter = Hyper-V\Get-VMNetworkAdapter -VMName $vmName -ErrorAction SilentlyContinue + if ($adapter.IPAddresses) { + $ip = $adapter.IPAddresses[$addressIndex] + } else { + $vm = Get-CimInstance -ClassName Msvm_ComputerSystem -Namespace root\virtualization\v2 -Filter "ElementName='$vmName'" + $ip_details = (Get-CimAssociatedInstance -InputObject $vm -ResultClassName Msvm_KvpExchangeComponent).GuestIntrinsicExchangeItems | %{ [xml]$_ } | ?{ $_.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Name']/VALUE[child::text()='NetworkAddressIPv4']") } + + if ($null -eq $ip_details) { + return $false + } + + $ip_addresses = $ip_details.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Data']/VALUE/child::text()").Value + $ip = ($ip_addresses -split ";")[0] } } catch { return $false @@ -59,8 +70,8 @@ func CreateDvdDrive(vmName string, isoPath string, generation uint) (uint, uint, script = ` param([string]$vmName, [string]$isoPath) -$dvdController = Add-VMDvdDrive -VMName $vmName -path $isoPath -Passthru -$dvdController | Set-VMDvdDrive -path $null +$dvdController = Hyper-V\Add-VMDvdDrive -VMName $vmName -path $isoPath -Passthru +$dvdController | Hyper-V\Set-VMDvdDrive -path $null $result = "$($dvdController.ControllerNumber),$($dvdController.ControllerLocation)" $result ` @@ -94,9 +105,9 @@ func MountDvdDrive(vmName string, path string, controllerNumber uint, controller var script = ` param([string]$vmName,[string]$path,[string]$controllerNumber,[string]$controllerLocation) -$vmDvdDrive = Get-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation +$vmDvdDrive = Hyper-V\Get-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation if (!$vmDvdDrive) {throw 'unable to find dvd drive'} -Set-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation -Path $path +Hyper-V\Set-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation -Path $path ` var ps powershell.PowerShellCmd @@ -107,9 +118,9 @@ Set-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLo func UnmountDvdDrive(vmName string, controllerNumber uint, controllerLocation uint) error { var script = ` param([string]$vmName,[int]$controllerNumber,[int]$controllerLocation) -$vmDvdDrive = Get-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation +$vmDvdDrive = Hyper-V\Get-VMDvdDrive 
-VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation if (!$vmDvdDrive) {throw 'unable to find dvd drive'} -Set-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation -Path $null +Hyper-V\Set-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation -Path $null ` var ps powershell.PowerShellCmd @@ -122,7 +133,7 @@ func SetBootDvdDrive(vmName string, controllerNumber uint, controllerLocation ui if generation < 2 { script := ` param([string]$vmName) -Set-VMBios -VMName $vmName -StartupOrder @("CD", "IDE","LegacyNetworkAdapter","Floppy") +Hyper-V\Set-VMBios -VMName $vmName -StartupOrder @("CD", "IDE","LegacyNetworkAdapter","Floppy") ` var ps powershell.PowerShellCmd err := ps.Run(script, vmName) @@ -130,9 +141,9 @@ Set-VMBios -VMName $vmName -StartupOrder @("CD", "IDE","LegacyNetworkAdapter","F } else { script := ` param([string]$vmName,[int]$controllerNumber,[int]$controllerLocation) -$vmDvdDrive = Get-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation +$vmDvdDrive = Hyper-V\Get-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation if (!$vmDvdDrive) {throw 'unable to find dvd drive'} -Set-VMFirmware -VMName $vmName -FirstBootDevice $vmDvdDrive -ErrorAction SilentlyContinue +Hyper-V\Set-VMFirmware -VMName $vmName -FirstBootDevice $vmDvdDrive -ErrorAction SilentlyContinue ` var ps powershell.PowerShellCmd err := ps.Run(script, vmName, strconv.FormatInt(int64(controllerNumber), 10), strconv.FormatInt(int64(controllerLocation), 10)) @@ -143,9 +154,9 @@ Set-VMFirmware -VMName $vmName -FirstBootDevice $vmDvdDrive -ErrorAction Silentl func DeleteDvdDrive(vmName string, controllerNumber uint, controllerLocation uint) error { var script = ` param([string]$vmName,[int]$controllerNumber,[int]$controllerLocation) -$vmDvdDrive = Get-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation +$vmDvdDrive = Hyper-V\Get-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation if (!$vmDvdDrive) {throw 'unable to find dvd drive'} -Remove-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation +Hyper-V\Remove-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -ControllerLocation $controllerLocation ` var ps powershell.PowerShellCmd @@ -156,7 +167,7 @@ Remove-VMDvdDrive -VMName $vmName -ControllerNumber $controllerNumber -Controlle func DeleteAllDvdDrives(vmName string) error { var script = ` param([string]$vmName) -Get-VMDvdDrive -VMName $vmName | Remove-VMDvdDrive +Hyper-V\Get-VMDvdDrive -VMName $vmName | Hyper-V\Remove-VMDvdDrive ` var ps powershell.PowerShellCmd @@ -167,7 +178,7 @@ Get-VMDvdDrive -VMName $vmName | Remove-VMDvdDrive func MountFloppyDrive(vmName string, path string) error { var script = ` param([string]$vmName, [string]$path) -Set-VMFloppyDiskDrive -VMName $vmName -Path $path +Hyper-V\Set-VMFloppyDiskDrive -VMName $vmName -Path $path ` var ps powershell.PowerShellCmd @@ -179,7 +190,7 @@ func UnmountFloppyDrive(vmName string) error { var script = ` param([string]$vmName) -Set-VMFloppyDiskDrive -VMName $vmName -Path $null +Hyper-V\Set-VMFloppyDiskDrive -VMName $vmName -Path $null ` var ps powershell.PowerShellCmd @@ -187,40 +198,61 @@ Set-VMFloppyDiskDrive -VMName $vmName -Path $null return err } -func 
CreateVirtualMachine(vmName string, path string, harddrivePath string, vhdRoot string, ram int64, diskSize int64, switchName string, generation uint) error { +func CreateVirtualMachine(vmName string, path string, harddrivePath string, vhdRoot string, ram int64, diskSize int64, diskBlockSize int64, switchName string, generation uint, diffDisks bool, fixedVHD bool) error { if generation == 2 { var script = ` -param([string]$vmName, [string]$path, [string]$harddrivePath, [string]$vhdRoot, [long]$memoryStartupBytes, [long]$newVHDSizeBytes, [string]$switchName, [int]$generation) +param([string]$vmName, [string]$path, [string]$harddrivePath, [string]$vhdRoot, [long]$memoryStartupBytes, [long]$newVHDSizeBytes, [long]$vhdBlockSizeBytes, [string]$switchName, [int]$generation, [string]$diffDisks) $vhdx = $vmName + '.vhdx' $vhdPath = Join-Path -Path $vhdRoot -ChildPath $vhdx if ($harddrivePath){ - Copy-Item -Path $harddrivePath -Destination $vhdPath - New-VM -Name $vmName -Path $path -MemoryStartupBytes $memoryStartupBytes -VHDPath $vhdPath -SwitchName $switchName -Generation $generation + if($diffDisks -eq "true"){ + New-VHD -Path $vhdPath -ParentPath $harddrivePath -Differencing -BlockSizeBytes $vhdBlockSizeBytes + } else { + Copy-Item -Path $harddrivePath -Destination $vhdPath + } + Hyper-V\New-VM -Name $vmName -Path $path -MemoryStartupBytes $memoryStartupBytes -VHDPath $vhdPath -SwitchName $switchName -Generation $generation } else { - New-VM -Name $vmName -Path $path -MemoryStartupBytes $memoryStartupBytes -NewVHDPath $vhdPath -NewVHDSizeBytes $newVHDSizeBytes -SwitchName $switchName -Generation $generation + Hyper-V\New-VHD -Path $vhdPath -SizeBytes $newVHDSizeBytes -BlockSizeBytes $vhdBlockSizeBytes + Hyper-V\New-VM -Name $vmName -Path $path -MemoryStartupBytes $memoryStartupBytes -VHDPath $vhdPath -SwitchName $switchName -Generation $generation } ` var ps powershell.PowerShellCmd - if err := ps.Run(script, vmName, path, harddrivePath, vhdRoot, strconv.FormatInt(ram, 10), strconv.FormatInt(diskSize, 10), switchName, strconv.FormatInt(int64(generation), 10)); err != nil { + if err := ps.Run(script, vmName, path, harddrivePath, vhdRoot, strconv.FormatInt(ram, 10), strconv.FormatInt(diskSize, 10), strconv.FormatInt(diskBlockSize, 10), switchName, strconv.FormatInt(int64(generation), 10), strconv.FormatBool(diffDisks)); err != nil { return err } return DisableAutomaticCheckpoints(vmName) } else { var script = ` -param([string]$vmName, [string]$path, [string]$harddrivePath, [string]$vhdRoot, [long]$memoryStartupBytes, [long]$newVHDSizeBytes, [string]$switchName) -$vhdx = $vmName + '.vhdx' +param([string]$vmName, [string]$path, [string]$harddrivePath, [string]$vhdRoot, [long]$memoryStartupBytes, [long]$newVHDSizeBytes, [long]$vhdBlockSizeBytes, [string]$switchName, [string]$diffDisks, [string]$fixedVHD) +if($fixedVHD -eq "true"){ + $vhdx = $vmName + '.vhd' +} +else{ + $vhdx = $vmName + '.vhdx' +} $vhdPath = Join-Path -Path $vhdRoot -ChildPath $vhdx if ($harddrivePath){ - Copy-Item -Path $harddrivePath -Destination $vhdPath - New-VM -Name $vmName -Path $path -MemoryStartupBytes $memoryStartupBytes -VHDPath $vhdPath -SwitchName $switchName + if($diffDisks -eq "true"){ + New-VHD -Path $vhdPath -ParentPath $harddrivePath -Differencing -BlockSizeBytes $vhdBlockSizeBytes + } + else{ + Copy-Item -Path $harddrivePath -Destination $vhdPath + } + Hyper-V\New-VM -Name $vmName -Path $path -MemoryStartupBytes $memoryStartupBytes -VHDPath $vhdPath -SwitchName $switchName } else { - New-VM -Name $vmName -Path 
$path -MemoryStartupBytes $memoryStartupBytes -NewVHDPath $vhdPath -NewVHDSizeBytes $newVHDSizeBytes -SwitchName $switchName + if($fixedVHD -eq "true"){ + Hyper-V\New-VHD -Path $vhdPath -Fixed -SizeBytes $newVHDSizeBytes + } + else { + Hyper-V\New-VHD -Path $vhdPath -SizeBytes $newVHDSizeBytes -BlockSizeBytes $vhdBlockSizeBytes + } + Hyper-V\New-VM -Name $vmName -Path $path -MemoryStartupBytes $memoryStartupBytes -VHDPath $vhdPath -SwitchName $switchName } ` var ps powershell.PowerShellCmd - if err := ps.Run(script, vmName, path, harddrivePath, vhdRoot, strconv.FormatInt(ram, 10), strconv.FormatInt(diskSize, 10), switchName); err != nil { + if err := ps.Run(script, vmName, path, harddrivePath, vhdRoot, strconv.FormatInt(ram, 10), strconv.FormatInt(diskSize, 10), strconv.FormatInt(diskBlockSize, 10), switchName, strconv.FormatBool(diffDisks), strconv.FormatBool(fixedVHD)); err != nil { return err } @@ -235,8 +267,8 @@ if ($harddrivePath){ func DisableAutomaticCheckpoints(vmName string) error { var script = ` param([string]$vmName) -if ((Get-Command Set-Vm).Parameters["AutomaticCheckpointsEnabled"]) { - Set-Vm -Name $vmName -AutomaticCheckpointsEnabled $false } +if ((Get-Command Hyper-V\Set-Vm).Parameters["AutomaticCheckpointsEnabled"]) { + Hyper-V\Set-Vm -Name $vmName -AutomaticCheckpointsEnabled $false } ` var ps powershell.PowerShellCmd err := ps.Run(script, vmName) @@ -247,7 +279,7 @@ func ExportVmxcVirtualMachine(exportPath string, vmName string, snapshotName str var script = ` param([string]$exportPath, [string]$vmName, [string]$snapshotName, [string]$allSnapshotsString) -$WorkingPath = Join-Path $exportPath $vmName +$WorkingPath = Join-Path $exportPath $vmName if (Test-Path $WorkingPath) { throw "Export path working directory: $WorkingPath already exists!" @@ -256,22 +288,22 @@ if (Test-Path $WorkingPath) { $allSnapshots = [System.Boolean]::Parse($allSnapshotsString) if ($snapshotName) { - $snapshot = Get-VMSnapshot -VMName $vmName -Name $snapshotName - Export-VMSnapshot -VMSnapshot $snapshot -Path $exportPath -ErrorAction Stop + $snapshot = Hyper-V\Get-VMSnapshot -VMName $vmName -Name $snapshotName + Hyper-V\Export-VMSnapshot -VMSnapshot $snapshot -Path $exportPath -ErrorAction Stop } else { if (!$allSnapshots) { #Use last snapshot if one was not specified - $snapshot = Get-VMSnapshot -VMName $vmName | Select -Last 1 + $snapshot = Hyper-V\Get-VMSnapshot -VMName $vmName | Select -Last 1 } else { $snapshot = $null } - + if (!$snapshot) { #No snapshot clone - Export-VM -Name $vmName -Path $exportPath -ErrorAction Stop + Hyper-V\Export-VM -Name $vmName -Path $exportPath -ErrorAction Stop } else { #Snapshot clone - Export-VMSnapshot -VMSnapshot $snapshot -Path $exportPath -ErrorAction Stop + Hyper-V\Export-VMSnapshot -VMSnapshot $snapshot -Path $exportPath -ErrorAction Stop } } @@ -296,7 +328,7 @@ param([string]$exportPath, [string]$cloneFromVmxcPath) if (!(Test-Path $cloneFromVmxcPath)){ throw "Clone from vmxc directory: $cloneFromVmxcPath does not exist!" 
} - + if (!(Test-Path $exportPath)){ New-Item -ItemType Directory -Force -Path $exportPath } @@ -310,6 +342,18 @@ Copy-Item $cloneFromVmxcPath $exportPath -Recurse -Force return err } +func SetVmNetworkAdapterMacAddress(vmName string, mac string) error { + var script = ` +param([string]$vmName, [string]$mac) +Hyper-V\Set-VMNetworkAdapter $vmName -staticmacaddress $mac + ` + + var ps powershell.PowerShellCmd + err := ps.Run(script, vmName, mac) + + return err +} + func ImportVmxcVirtualMachine(importPath string, vmName string, harddrivePath string, ram int64, switchName string) error { var script = ` param([string]$importPath, [string]$vmName, [string]$harddrivePath, [long]$memoryStartupBytes, [string]$switchName) @@ -338,24 +382,24 @@ if (!$VirtualMachinePath){ $VirtualMachinePath = Get-ChildItem -Path $importPath -Filter *.xml -Recurse -ErrorAction SilentlyContinue | select -First 1 | %{$_.FullName} } -$compatibilityReport = Compare-VM -Path $VirtualMachinePath -VirtualMachinePath $importPath -SmartPagingFilePath $importPath -SnapshotFilePath $importPath -VhdDestinationPath $VirtualHarddisksPath -GenerateNewId -Copy:$false +$compatibilityReport = Hyper-V\Compare-VM -Path $VirtualMachinePath -VirtualMachinePath $importPath -SmartPagingFilePath $importPath -SnapshotFilePath $importPath -VhdDestinationPath $VirtualHarddisksPath -GenerateNewId -Copy:$false if ($vhdPath){ Copy-Item -Path $harddrivePath -Destination $vhdPath $existingFirstHarddrive = $compatibilityReport.VM.HardDrives | Select -First 1 if ($existingFirstHarddrive) { - $existingFirstHarddrive | Set-VMHardDiskDrive -Path $vhdPath + $existingFirstHarddrive | Hyper-V\Set-VMHardDiskDrive -Path $vhdPath } else { - Add-VMHardDiskDrive -VM $compatibilityReport.VM -Path $vhdPath - } + Hyper-V\Add-VMHardDiskDrive -VM $compatibilityReport.VM -Path $vhdPath + } } -Set-VMMemory -VM $compatibilityReport.VM -StartupBytes $memoryStartupBytes +Hyper-V\Set-VMMemory -VM $compatibilityReport.VM -StartupBytes $memoryStartupBytes $networkAdaptor = $compatibilityReport.VM.NetworkAdapters | Select -First 1 -Disconnect-VMNetworkAdapter -VMNetworkAdapter $networkAdaptor -Connect-VMNetworkAdapter -VMNetworkAdapter $networkAdaptor -SwitchName $switchName -$vm = Import-VM -CompatibilityReport $compatibilityReport +Hyper-V\Disconnect-VMNetworkAdapter -VMNetworkAdapter $networkAdaptor +Hyper-V\Connect-VMNetworkAdapter -VMNetworkAdapter $networkAdaptor -SwitchName $switchName +$vm = Hyper-V\Import-VM -CompatibilityReport $compatibilityReport if ($vm) { - $result = Rename-VM -VM $vm -NewName $VMName + $result = Hyper-V\Rename-VM -VM $vm -NewName $VMName } ` @@ -388,7 +432,7 @@ func CloneVirtualMachine(cloneFromVmxcPath string, cloneFromVmName string, clone func GetVirtualMachineGeneration(vmName string) (uint, error) { var script = ` param([string]$vmName) -$generation = Get-Vm -Name $vmName | %{$_.Generation} +$generation = Hyper-V\Get-Vm -Name $vmName | %{$_.Generation} if (!$generation){ $generation = 1 } @@ -416,7 +460,7 @@ func SetVirtualMachineCpuCount(vmName string, cpu uint) error { var script = ` param([string]$vmName, [int]$cpu) -Set-VMProcessor -VMName $vmName -Count $cpu +Hyper-V\Set-VMProcessor -VMName $vmName -Count $cpu ` var ps powershell.PowerShellCmd err := ps.Run(script, vmName, strconv.FormatInt(int64(cpu), 10)) @@ -428,7 +472,7 @@ func SetVirtualMachineVirtualizationExtensions(vmName string, enableVirtualizati var script = ` param([string]$vmName, [string]$exposeVirtualizationExtensionsString) $exposeVirtualizationExtensions = 
[System.Boolean]::Parse($exposeVirtualizationExtensionsString) -Set-VMProcessor -VMName $vmName -ExposeVirtualizationExtensions $exposeVirtualizationExtensions +Hyper-V\Set-VMProcessor -VMName $vmName -ExposeVirtualizationExtensions $exposeVirtualizationExtensions ` exposeVirtualizationExtensionsString := "False" if enableVirtualizationExtensions { @@ -444,7 +488,7 @@ func SetVirtualMachineDynamicMemory(vmName string, enableDynamicMemory bool) err var script = ` param([string]$vmName, [string]$enableDynamicMemoryString) $enableDynamicMemory = [System.Boolean]::Parse($enableDynamicMemoryString) -Set-VMMemory -VMName $vmName -DynamicMemoryEnabled $enableDynamicMemory +Hyper-V\Set-VMMemory -VMName $vmName -DynamicMemoryEnabled $enableDynamicMemory ` enableDynamicMemoryString := "False" if enableDynamicMemory { @@ -458,7 +502,7 @@ Set-VMMemory -VMName $vmName -DynamicMemoryEnabled $enableDynamicMemory func SetVirtualMachineMacSpoofing(vmName string, enableMacSpoofing bool) error { var script = ` param([string]$vmName, $enableMacSpoofing) -Set-VMNetworkAdapter -VMName $vmName -MacAddressSpoofing $enableMacSpoofing +Hyper-V\Set-VMNetworkAdapter -VMName $vmName -MacAddressSpoofing $enableMacSpoofing ` var ps powershell.PowerShellCmd @@ -472,10 +516,16 @@ Set-VMNetworkAdapter -VMName $vmName -MacAddressSpoofing $enableMacSpoofing return err } -func SetVirtualMachineSecureBoot(vmName string, enableSecureBoot bool) error { +func SetVirtualMachineSecureBoot(vmName string, enableSecureBoot bool, templateName string) error { var script = ` -param([string]$vmName, $enableSecureBoot) -Set-VMFirmware -VMName $vmName -EnableSecureBoot $enableSecureBoot +param([string]$vmName, [string]$enableSecureBootString, [string]$templateName) +$cmdlet = Get-Command Hyper-V\Set-VMFirmware +# The SecureBootTemplate parameter is only available in later versions +if ($cmdlet.Parameters.SecureBootTemplate) { + Hyper-V\Set-VMFirmware -VMName $vmName -EnableSecureBoot $enableSecureBootString -SecureBootTemplate $templateName +} else { + Hyper-V\Set-VMFirmware -VMName $vmName -EnableSecureBoot $enableSecureBootString +} ` var ps powershell.PowerShellCmd @@ -485,7 +535,11 @@ Set-VMFirmware -VMName $vmName -EnableSecureBoot $enableSecureBoot enableSecureBootString = "On" } - err := ps.Run(script, vmName, enableSecureBootString) + if templateName == "" { + templateName = "MicrosoftWindows" + } + + err := ps.Run(script, vmName, enableSecureBootString, templateName) return err } @@ -494,12 +548,12 @@ func DeleteVirtualMachine(vmName string) error { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName +$vm = Hyper-V\Get-VM -Name $vmName if (($vm.State -ne [Microsoft.HyperV.PowerShell.VMState]::Off) -and ($vm.State -ne [Microsoft.HyperV.PowerShell.VMState]::OffCritical)) { - Stop-VM -VM $vm -TurnOff -Force -Confirm:$false + Hyper-V\Stop-VM -VM $vm -TurnOff -Force -Confirm:$false } -Remove-VM -Name $vmName -Force -Confirm:$false +Hyper-V\Remove-VM -Name $vmName -Force -Confirm:$false ` var ps powershell.PowerShellCmd @@ -511,12 +565,12 @@ func ExportVirtualMachine(vmName string, path string) error { var script = ` param([string]$vmName, [string]$path) -Export-VM -Name $vmName -Path $path +Hyper-V\Export-VM -Name $vmName -Path $path if (Test-Path -Path ([IO.Path]::Combine($path, $vmName, 'Virtual Machines', '*.VMCX'))) { - $vm = Get-VM -Name $vmName - $vm_adapter = Get-VMNetworkAdapter -VM $vm | Select -First 1 + $vm = Hyper-V\Get-VM -Name $vmName + $vm_adapter = Hyper-V\Get-VMNetworkAdapter -VM $vm | Select -First 1 
$config = [xml]@" @@ -550,24 +604,24 @@ if (Test-Path -Path ([IO.Path]::Combine($path, $vmName, 'Virtual Machines', '*.V if ($vm.Generation -eq 1) { - $vm_controllers = Get-VMIdeController -VM $vm + $vm_controllers = Hyper-V\Get-VMIdeController -VM $vm $controller_type = $config.SelectSingleNode('/configuration/vm-controllers') # IDE controllers are not stored in a special XML container } else { - $vm_controllers = Get-VMScsiController -VM $vm + $vm_controllers = Hyper-V\Get-VMScsiController -VM $vm $controller_type = $config.CreateElement('scsi') $controller_type.SetAttribute('ChannelInstanceGuid', 'x') # SCSI controllers are stored in the scsi XML container - if ((Get-VMFirmware -VM $vm).SecureBoot -eq [Microsoft.HyperV.PowerShell.OnOffState]::On) + if ((Hyper-V\Get-VMFirmware -VM $vm).SecureBoot -eq [Microsoft.HyperV.PowerShell.OnOffState]::On) { - $config.configuration.secure_boot_enabled.'#text' = 'True' - } + $config.configuration.secure_boot_enabled.'#text' = 'True' + } else { $config.configuration.secure_boot_enabled.'#text' = 'False' - } + } } $vm_controllers | ForEach { @@ -628,9 +682,9 @@ func CopyExportedVirtualMachine(expPath string, outputPath string, vhdDir string var script = ` param([string]$srcPath, [string]$dstPath, [string]$vhdDirName, [string]$vmDir) -Move-Item -Path $srcPath/*.* -Destination $dstPath -Move-Item -Path $srcPath/$vhdDirName -Destination $dstPath -Move-Item -Path $srcPath/$vmDir -Destination $dstPath +Move-Item -Path (Join-Path (Get-Item $srcPath).FullName "*.*") -Destination $dstPath +Move-Item -Path (Join-Path (Get-Item $srcPath).FullName $vhdDirName) -Destination $dstPath +Move-Item -Path (Join-Path (Get-Item $srcPath).FullName $vmDir) -Destination $dstPath ` var ps powershell.PowerShellCmd @@ -642,9 +696,9 @@ func CreateVirtualSwitch(switchName string, switchType string) (bool, error) { var script = ` param([string]$switchName,[string]$switchType) -$switches = Get-VMSwitch -Name $switchName -ErrorAction SilentlyContinue +$switches = Hyper-V\Get-VMSwitch -Name $switchName -ErrorAction SilentlyContinue if ($switches.Count -eq 0) { - New-VMSwitch -Name $switchName -SwitchType $switchType + Hyper-V\New-VMSwitch -Name $switchName -SwitchType $switchType return $true } return $false @@ -660,9 +714,9 @@ func DeleteVirtualSwitch(switchName string) error { var script = ` param([string]$switchName) -$switch = Get-VMSwitch -Name $switchName -ErrorAction SilentlyContinue +$switch = Hyper-V\Get-VMSwitch -Name $switchName -ErrorAction SilentlyContinue if ($switch -ne $null) { - $switch | Remove-VMSwitch -Force -Confirm:$false + $switch | Hyper-V\Remove-VMSwitch -Force -Confirm:$false } ` @@ -675,9 +729,9 @@ func StartVirtualMachine(vmName string) error { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue +$vm = Hyper-V\Get-VM -Name $vmName -ErrorAction SilentlyContinue if ($vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Off) { - Start-VM -Name $vmName -Confirm:$false + Hyper-V\Start-VM -Name $vmName -Confirm:$false } ` @@ -690,7 +744,7 @@ func RestartVirtualMachine(vmName string) error { var script = ` param([string]$vmName) -Restart-VM $vmName -Force -Confirm:$false +Hyper-V\Restart-VM $vmName -Force -Confirm:$false ` var ps powershell.PowerShellCmd @@ -702,9 +756,9 @@ func StopVirtualMachine(vmName string) error { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName +$vm = Hyper-V\Get-VM -Name $vmName if ($vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Running) { - Stop-VM -VM $vm -Force 
-Confirm:$false + Hyper-V\Stop-VM -VM $vm -Force -Confirm:$false } ` @@ -735,7 +789,7 @@ func EnableVirtualMachineIntegrationService(vmName string, integrationServiceNam var script = ` param([string]$vmName,[string]$integrationServiceId) -Get-VMIntegrationService -VmName $vmName | ?{$_.Id -match $integrationServiceId} | Enable-VMIntegrationService +Hyper-V\Get-VMIntegrationService -VmName $vmName | ?{$_.Id -match $integrationServiceId} | Hyper-V\Enable-VMIntegrationService ` var ps powershell.PowerShellCmd @@ -747,7 +801,7 @@ func SetNetworkAdapterVlanId(switchName string, vlanId string) error { var script = ` param([string]$networkAdapterName,[string]$vlanId) -Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $networkAdapterName -Access -VlanId $vlanId +Hyper-V\Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $networkAdapterName -Access -VlanId $vlanId ` var ps powershell.PowerShellCmd @@ -759,7 +813,7 @@ func SetVirtualMachineVlanId(vmName string, vlanId string) error { var script = ` param([string]$vmName,[string]$vlanId) -Set-VMNetworkAdapterVlan -VMName $vmName -Access -VlanId $vlanId +Hyper-V\Set-VMNetworkAdapterVlan -VMName $vmName -Access -VlanId $vlanId ` var ps powershell.PowerShellCmd err := ps.Run(script, vmName, vlanId) @@ -771,7 +825,7 @@ func GetExternalOnlineVirtualSwitch() (string, error) { var script = ` $adapters = Get-NetAdapter -Physical -ErrorAction SilentlyContinue | Where-Object { $_.Status -eq 'Up' } | Sort-Object -Descending -Property Speed foreach ($adapter in $adapters) { - $switch = Get-VMSwitch -SwitchType External | Where-Object { $_.NetAdapterInterfaceDescription -eq $adapter.InterfaceDescription } + $switch = Hyper-V\Get-VMSwitch -SwitchType External | Where-Object { $_.NetAdapterInterfaceDescription -eq $adapter.InterfaceDescription } if ($switch -ne $null) { $switch.Name @@ -801,10 +855,10 @@ $adapters = foreach ($name in $names) { } foreach ($adapter in $adapters) { - $switch = Get-VMSwitch -SwitchType External | where { $_.NetAdapterInterfaceDescription -eq $adapter.InterfaceDescription } + $switch = Hyper-V\Get-VMSwitch -SwitchType External | where { $_.NetAdapterInterfaceDescription -eq $adapter.InterfaceDescription } if ($switch -eq $null) { - $switch = New-VMSwitch -Name $switchName -NetAdapterName $adapter.Name -AllowManagementOS $true -Notes 'Parent OS, VMs, WiFi' + $switch = Hyper-V\New-VMSwitch -Name $switchName -NetAdapterName $adapter.Name -AllowManagementOS $true -Notes 'Parent OS, VMs, WiFi' } if ($switch -ne $null) { @@ -813,7 +867,7 @@ foreach ($adapter in $adapters) { } if($switch -ne $null) { - Get-VMNetworkAdapter -VMName $vmName | Connect-VMNetworkAdapter -VMSwitch $switch + Hyper-V\Get-VMNetworkAdapter -VMName $vmName | Hyper-V\Connect-VMNetworkAdapter -VMSwitch $switch } else { Write-Error 'No internet adapters found' } @@ -827,7 +881,7 @@ func GetVirtualMachineSwitchName(vmName string) (string, error) { var script = ` param([string]$vmName) -(Get-VMNetworkAdapter -VMName $vmName).SwitchName +(Hyper-V\Get-VMNetworkAdapter -VMName $vmName).SwitchName ` var ps powershell.PowerShellCmd @@ -843,7 +897,7 @@ func ConnectVirtualMachineNetworkAdapterToSwitch(vmName string, switchName strin var script = ` param([string]$vmName,[string]$switchName) -Get-VMNetworkAdapter -VMName $vmName | Connect-VMNetworkAdapter -SwitchName $switchName +Hyper-V\Get-VMNetworkAdapter -VMName $vmName | Hyper-V\Connect-VMNetworkAdapter -SwitchName $switchName ` var ps powershell.PowerShellCmd @@ -851,16 +905,16 @@ Get-VMNetworkAdapter 
-VMName $vmName | Connect-VMNetworkAdapter -SwitchName $swi return err } -func AddVirtualMachineHardDiskDrive(vmName string, vhdRoot string, vhdName string, vhdSizeBytes int64, controllerType string) error { +func AddVirtualMachineHardDiskDrive(vmName string, vhdRoot string, vhdName string, vhdSizeBytes int64, vhdBlockSize int64, controllerType string) error { var script = ` -param([string]$vmName,[string]$vhdRoot, [string]$vhdName, [string]$vhdSizeInBytes, [string]$controllerType) +param([string]$vmName,[string]$vhdRoot, [string]$vhdName, [string]$vhdSizeInBytes, [string]$vhdBlockSizeInByte, [string]$controllerType) $vhdPath = Join-Path -Path $vhdRoot -ChildPath $vhdName -New-VHD $vhdPath -SizeBytes $vhdSizeInBytes -Add-VMHardDiskDrive -VMName $vmName -path $vhdPath -controllerType $controllerType +Hyper-V\New-VHD -path $vhdPath -SizeBytes $vhdSizeInBytes -BlockSizeBytes $vhdBlockSizeInByte +Hyper-V\Add-VMHardDiskDrive -VMName $vmName -path $vhdPath -controllerType $controllerType ` var ps powershell.PowerShellCmd - err := ps.Run(script, vmName, vhdRoot, vhdName, strconv.FormatInt(vhdSizeBytes, 10), controllerType) + err := ps.Run(script, vmName, vhdRoot, vhdName, strconv.FormatInt(vhdSizeBytes, 10), strconv.FormatInt(vhdBlockSize, 10), controllerType) return err } @@ -868,8 +922,8 @@ func UntagVirtualMachineNetworkAdapterVlan(vmName string, switchName string) err var script = ` param([string]$vmName,[string]$switchName) -Set-VMNetworkAdapterVlan -VMName $vmName -Untagged -Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $switchName -Untagged +Hyper-V\Set-VMNetworkAdapterVlan -VMName $vmName -Untagged +Hyper-V\Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $switchName -Untagged ` var ps powershell.PowerShellCmd @@ -881,7 +935,7 @@ func IsRunning(vmName string) (bool, error) { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue +$vm = Hyper-V\Get-VM -Name $vmName -ErrorAction SilentlyContinue $vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Running ` @@ -900,7 +954,7 @@ func IsOff(vmName string) (bool, error) { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue +$vm = Hyper-V\Get-VM -Name $vmName -ErrorAction SilentlyContinue $vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Off ` @@ -919,7 +973,7 @@ func Uptime(vmName string) (uint64, error) { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue +$vm = Hyper-V\Get-VM -Name $vmName -ErrorAction SilentlyContinue $vm.Uptime.TotalSeconds ` var ps powershell.PowerShellCmd @@ -938,7 +992,7 @@ func Mac(vmName string) (string, error) { var script = ` param([string]$vmName, [int]$adapterIndex) try { - $adapter = Get-VMNetworkAdapter -VMName $vmName -ErrorAction SilentlyContinue + $adapter = Hyper-V\Get-VMNetworkAdapter -VMName $vmName -ErrorAction SilentlyContinue $mac = $adapter[$adapterIndex].MacAddress if($mac -eq $null) { return "" @@ -959,10 +1013,23 @@ func IpAddress(mac string) (string, error) { var script = ` param([string]$mac, [int]$addressIndex) try { - $ip = Get-Vm | %{$_.NetworkAdapters} | ?{$_.MacAddress -eq $mac} | %{$_.IpAddresses[$addressIndex]} + $vm = Hyper-V\Get-VM | ?{$_.NetworkAdapters.MacAddress -eq $mac} + if ($vm.NetworkAdapters.IpAddresses) { + $ipAddresses = $vm.NetworkAdapters.IPAddresses + if ($ipAddresses -isnot [array]) { + $ipAddresses = @($ipAddresses) + } + $ip = $ipAddresses[$addressIndex] + } else { + $vm_info = Get-CimInstance -ClassName 
Msvm_ComputerSystem -Namespace root\virtualization\v2 -Filter "ElementName='$($vm.Name)'" + $ip_details = (Get-CimAssociatedInstance -InputObject $vm_info -ResultClassName Msvm_KvpExchangeComponent).GuestIntrinsicExchangeItems | %{ [xml]$_ } | ?{ $_.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Name']/VALUE[child::text()='NetworkAddressIPv4']") } - if($ip -eq $null) { - return "" + if ($null -eq $ip_details) { + return "" + } + + $ip_addresses = $ip_details.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Data']/VALUE/child::text()").Value + $ip = ($ip_addresses -split ";")[0] } } catch { return "" @@ -980,9 +1047,9 @@ func TurnOff(vmName string) error { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue +$vm = Hyper-V\Get-VM -Name $vmName -ErrorAction SilentlyContinue if ($vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Running) { - Stop-VM -Name $vmName -TurnOff -Force -Confirm:$false + Hyper-V\Stop-VM -Name $vmName -TurnOff -Force -Confirm:$false } ` @@ -995,9 +1062,9 @@ func ShutDown(vmName string) error { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue +$vm = Hyper-V\Get-VM -Name $vmName -ErrorAction SilentlyContinue if ($vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Running) { - Stop-VM -Name $vmName -Force -Confirm:$false + Hyper-V\Stop-VM -Name $vmName -Force -Confirm:$false } ` @@ -1015,7 +1082,7 @@ func TypeScanCodes(vmName string, scanCodes string) error { param([string]$vmName, [string]$scanCodes) #Requires -Version 3 - function Get-VMConsole + function Hyper-V\Get-VMConsole { [CmdletBinding()] param ( @@ -1143,7 +1210,7 @@ param([string]$vmName, [string]$scanCodes) return $console } - $vmConsole = Get-VMConsole -VMName $vmName + $vmConsole = Hyper-V\Get-VMConsole -VMName $vmName $scanCodesToSend = '' $scanCodes.Split(' ') | %{ $scanCode = $_ @@ -1189,3 +1256,18 @@ param([string]$vmName, [string]$scanCodes) err := ps.Run(script, vmName, scanCodes) return err } + +func ConnectVirtualMachine(vmName string) (context.CancelFunc, error) { + ctx, cancel := context.WithCancel(context.Background()) + cmd := exec.CommandContext(ctx, "vmconnect.exe", "localhost", vmName) + err := cmd.Start() + if err != nil { + // Failed to start so cancel function not required + cancel = nil + } + return cancel, err +} + +func DisconnectVirtualMachine(cancel context.CancelFunc) { + cancel() +} diff --git a/common/powershell/powershell.go b/common/powershell/powershell.go index 4d550cb8b..6540c7705 100644 --- a/common/powershell/powershell.go +++ b/common/powershell/powershell.go @@ -240,7 +240,7 @@ param([string]$moduleName) func HasVirtualMachineVirtualizationExtensions() (bool, error) { var script = ` -(GET-Command Set-VMProcessor).parameters.keys -contains "ExposeVirtualizationExtensions" +(GET-Command Hyper-V\Set-VMProcessor).parameters.keys -contains "ExposeVirtualizationExtensions" ` var ps PowerShellCmd @@ -258,7 +258,7 @@ func DoesVirtualMachineExist(vmName string) (bool, error) { var script = ` param([string]$vmName) -return (Get-VM | ?{$_.Name -eq $vmName}) -ne $null +return (Hyper-V\Get-VM | ?{$_.Name -eq $vmName}) -ne $null ` var ps PowerShellCmd @@ -276,7 +276,7 @@ func DoesVirtualMachineSnapshotExist(vmName string, snapshotName string) (bool, var script = ` param([string]$vmName, [string]$snapshotName) -return (Get-VMSnapshot -VMName $vmName | ?{$_.Name -eq $snapshotName}) -ne $null +return (Hyper-V\Get-VMSnapshot -VMName $vmName | ?{$_.Name -eq $snapshotName}) -ne $null ` var ps PowerShellCmd 
@@ -294,7 +294,7 @@ func IsVirtualMachineOn(vmName string) (bool, error) { var script = ` param([string]$vmName) -$vm = Get-VM -Name $vmName -ErrorAction SilentlyContinue +$vm = Hyper-V\Get-VM -Name $vmName -ErrorAction SilentlyContinue $vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Running ` @@ -312,7 +312,7 @@ $vm.State -eq [Microsoft.HyperV.PowerShell.VMState]::Running func GetVirtualMachineGeneration(vmName string) (uint, error) { var script = ` param([string]$vmName) -$generation = Get-Vm -Name $vmName | %{$_.Generation} +$generation = Hyper-V\Get-Vm -Name $vmName | %{$_.Generation} if (!$generation){ $generation = 1 } diff --git a/provisioner/shell-local/communicator.go b/common/shell-local/communicator.go similarity index 75% rename from provisioner/shell-local/communicator.go rename to common/shell-local/communicator.go index 2afbe1028..4055c96b5 100644 --- a/provisioner/shell-local/communicator.go +++ b/common/shell-local/communicator.go @@ -1,36 +1,27 @@ -package shell +package shell_local import ( "fmt" "io" + "log" "os" "os/exec" "syscall" "github.com/hashicorp/packer/packer" - "github.com/hashicorp/packer/template/interpolate" ) type Communicator struct { ExecuteCommand []string - Ctx interpolate.Context } func (c *Communicator) Start(cmd *packer.RemoteCmd) error { - // Render the template so that we know how to execute the command - c.Ctx.Data = &ExecuteCommandTemplate{ - Command: cmd.Command, - } - for i, field := range c.ExecuteCommand { - command, err := interpolate.Render(field, &c.Ctx) - if err != nil { - return fmt.Errorf("Error processing command: %s", err) - } - - c.ExecuteCommand[i] = command + if len(c.ExecuteCommand) == 0 { + return fmt.Errorf("Error launching command via shell-local communicator: No ExecuteCommand provided") } // Build the local command to execute + log.Printf("[INFO] (shell-local communicator): Executing local shell command %s", c.ExecuteCommand) localCmd := exec.Command(c.ExecuteCommand[0], c.ExecuteCommand[1:]...) 
localCmd.Stdin = cmd.Stdin localCmd.Stdout = cmd.Stdout @@ -79,7 +70,3 @@ func (c *Communicator) Download(string, io.Writer) error { func (c *Communicator) DownloadDir(string, string, []string) error { return fmt.Errorf("downloadDir not supported") } - -type ExecuteCommandTemplate struct { - Command string -} diff --git a/post-processor/shell-local/communicator_test.go b/common/shell-local/communicator_test.go similarity index 86% rename from post-processor/shell-local/communicator_test.go rename to common/shell-local/communicator_test.go index 025deec54..9a8cb9057 100644 --- a/post-processor/shell-local/communicator_test.go +++ b/common/shell-local/communicator_test.go @@ -19,12 +19,13 @@ func TestCommunicator(t *testing.T) { return } - c := &Communicator{} + c := &Communicator{ + ExecuteCommand: []string{"/bin/sh", "-c", "echo foo"}, + } var buf bytes.Buffer cmd := &packer.RemoteCmd{ - Command: "/bin/echo foo", - Stdout: &buf, + Stdout: &buf, } if err := c.Start(cmd); err != nil { diff --git a/common/shell-local/config.go b/common/shell-local/config.go new file mode 100644 index 000000000..ba2508120 --- /dev/null +++ b/common/shell-local/config.go @@ -0,0 +1,227 @@ +package shell_local + +import ( + "errors" + "fmt" + "os" + "path/filepath" + "runtime" + "strings" + + "github.com/hashicorp/packer/common" + configHelper "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/template/interpolate" +) + +type Config struct { + common.PackerConfig `mapstructure:",squash"` + + // ** DEPRECATED: USE INLINE INSTEAD ** + // ** Only present for backwards compatibility ** + // Command is the command to execute + Command string + + // An inline script to execute. Multiple strings are all executed + // in the context of a single shell. + Inline []string + + // The shebang value used when running inline scripts. + InlineShebang string `mapstructure:"inline_shebang"` + + // The file extension to use for the file generated from the inline commands + TempfileExtension string `mapstructure:"tempfile_extension"` + + // The local path of the shell script to execute. + Script string + + // An array of multiple scripts to run. + Scripts []string + + // An array of environment variables that will be injected before + // your command(s) are executed. + Vars []string `mapstructure:"environment_vars"` + + EnvVarFormat string `mapstructure:"env_var_format"` + // End dedupe with postprocessor + + // The command used to execute the script. The '{{ .Script }}' variable + // should be used to specify where the script goes, {{ .Vars }} + // can be used to inject the environment_vars into the environment. + ExecuteCommand []string `mapstructure:"execute_command"` + + UseLinuxPathing bool `mapstructure:"use_linux_pathing"` + + Ctx interpolate.Context +} + +func Decode(config *Config, raws ...interface{}) error { + //Create passthrough for winrm password so we can fill it in once we know it + config.Ctx.Data = &EnvVarsTemplate{ + WinRMPassword: `{{.WinRMPassword}}`, + } + + err := configHelper.Decode(&config, &configHelper.DecodeOpts{ + Interpolate: true, + InterpolateContext: &config.Ctx, + InterpolateFilter: &interpolate.RenderFilter{ + Exclude: []string{ + "execute_command", + }, + }, + }, raws...)
+ if err != nil { + return fmt.Errorf("Error decoding config: %s, config is %#v, and raws is %#v", err, config, raws) + } + + return nil +} + +func Validate(config *Config) error { + var errs *packer.MultiError + + if runtime.GOOS == "windows" { + if len(config.ExecuteCommand) == 0 { + config.ExecuteCommand = []string{ + "cmd", + "/V", + "/C", + "{{.Vars}}", + "call", + "{{.Script}}", + } + } + } else { + if config.InlineShebang == "" { + config.InlineShebang = "/bin/sh -e" + } + if len(config.ExecuteCommand) == 0 { + config.ExecuteCommand = []string{ + "/bin/sh", + "-c", + "{{.Vars}} {{.Script}}", + } + } + } + + // Clean up input + if config.Inline != nil && len(config.Inline) == 0 { + config.Inline = make([]string, 0) + } + + if config.Scripts == nil { + config.Scripts = make([]string, 0) + } + + if config.Vars == nil { + config.Vars = make([]string, 0) + } + + // Verify that the user has given us a command to run + if config.Command == "" && len(config.Inline) == 0 && + len(config.Scripts) == 0 && config.Script == "" { + errs = packer.MultiErrorAppend(errs, + errors.New("Command, Inline, Script and Scripts options cannot all be empty.")) + } + + // Check that user hasn't given us too many commands to run + tooManyOptionsErr := errors.New("You may only specify one of the " + + "following options: Command, Inline, Script or Scripts. Please" + + " consolidate these options in your config.") + + if config.Command != "" { + if len(config.Inline) != 0 || len(config.Scripts) != 0 || config.Script != "" { + errs = packer.MultiErrorAppend(errs, tooManyOptionsErr) + } else { + config.Inline = []string{config.Command} + } + } + + if config.Script != "" { + if len(config.Scripts) > 0 || len(config.Inline) > 0 { + errs = packer.MultiErrorAppend(errs, tooManyOptionsErr) + } else { + config.Scripts = []string{config.Script} + } + } + + if len(config.Scripts) > 0 && config.Inline != nil { + errs = packer.MultiErrorAppend(errs, tooManyOptionsErr) + } + + // Check that all scripts we need to run exist locally + for _, path := range config.Scripts { + if _, err := os.Stat(path); err != nil { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Bad script '%s': %s", path, err)) + } + } + if config.UseLinuxPathing { + for index, script := range config.Scripts { + scriptAbsPath, err := filepath.Abs(script) + if err != nil { + return fmt.Errorf("Error converting %s to absolute path: %s", script, err.Error()) + } + converted, err := ConvertToLinuxPath(scriptAbsPath) + if err != nil { + return err + } + config.Scripts[index] = converted + } + // Interoperability issues with WSL makes creating and running tempfiles + // via golang's os package basically impossible. + if len(config.Inline) > 0 { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Packer is unable to use the Command and Inline "+ + "features with the Windows Linux Subsystem. Please use "+ + "the Script or Scripts options instead")) + } + } + // This is currently undocumented and not a feature users are expected to + // interact with. + if config.EnvVarFormat == "" { + if (runtime.GOOS == "windows") && !config.UseLinuxPathing { + config.EnvVarFormat = "set %s=%s && " + } else { + config.EnvVarFormat = "%s='%s' " + } + } + + // drop unnecessary "." in extension; we add this later. 
+ if config.TempfileExtension != "" { + if strings.HasPrefix(config.TempfileExtension, ".") { + config.TempfileExtension = config.TempfileExtension[1:] + } + } + + // Do a check for bad environment variables, such as '=foo', 'foobar' + for _, kv := range config.Vars { + vs := strings.SplitN(kv, "=", 2) + if len(vs) != 2 || vs[0] == "" { + errs = packer.MultiErrorAppend(errs, + fmt.Errorf("Environment variable not in format 'key=value': %s", kv)) + } + } + + if errs != nil && len(errs.Errors) > 0 { + return errs + } + + return nil +} + +// C:/path/to/your/file becomes /mnt/c/path/to/your/file +func ConvertToLinuxPath(winAbsPath string) (string, error) { + // get absolute path of script, and morph it into the bash path + winAbsPath = strings.Replace(winAbsPath, "\\", "/", -1) + splitPath := strings.SplitN(winAbsPath, ":/", 2) + if len(splitPath) == 2 { + winBashPath := fmt.Sprintf("/mnt/%s/%s", strings.ToLower(splitPath[0]), splitPath[1]) + return winBashPath, nil + } else { + err := fmt.Errorf("There was an error splitting your absolute path; expected "+ + "to find a drive following the format ':/' but did not: absolute "+ + "path: %s", winAbsPath) + return "", err + } +} diff --git a/common/shell-local/config_test.go b/common/shell-local/config_test.go new file mode 100644 index 000000000..7c74581ae --- /dev/null +++ b/common/shell-local/config_test.go @@ -0,0 +1,16 @@ +package shell_local + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestConvertToLinuxPath(t *testing.T) { + winPath := "C:/path/to/your/file" + winBashPath := "/mnt/c/path/to/your/file" + converted, _ := ConvertToLinuxPath(winPath) + assert.Equal(t, winBashPath, converted, + "Should have converted %s to %s -- not %s", winPath, winBashPath, converted) + +} diff --git a/common/shell-local/run.go b/common/shell-local/run.go new file mode 100644 index 000000000..ede99e30d --- /dev/null +++ b/common/shell-local/run.go @@ -0,0 +1,208 @@ +package shell_local + +import ( + "bufio" + "fmt" + "io/ioutil" + "log" + "os" + "sort" + "strings" + + "github.com/hashicorp/packer/common" + commonhelper "github.com/hashicorp/packer/helper/common" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/template/interpolate" +) + +type ExecuteCommandTemplate struct { + Vars string + Script string + Command string + WinRMPassword string +} + +type EnvVarsTemplate struct { + WinRMPassword string +} + +func Run(ui packer.Ui, config *Config) (bool, error) { + scripts := make([]string, len(config.Scripts)) + if len(config.Scripts) > 0 { + copy(scripts, config.Scripts) + } else if config.Inline != nil { + // If we have an inline script, then turn that into a temporary + // shell script and use that. + tempScriptFileName, err := createInlineScriptFile(config) + if err != nil { + return false, err + } + scripts = append(scripts, tempScriptFileName) + + // figure out what extension the file should have, and rename it. 
+ if config.TempfileExtension != "" { + os.Rename(tempScriptFileName, fmt.Sprintf("%s.%s", tempScriptFileName, config.TempfileExtension)) + tempScriptFileName = fmt.Sprintf("%s.%s", tempScriptFileName, config.TempfileExtension) + } + defer os.Remove(tempScriptFileName) + } + + // Create environment variables to set before executing the command + flattenedEnvVars, err := createFlattenedEnvVars(config) + if err != nil { + return false, err + } + + for _, script := range scripts { + interpolatedCmds, err := createInterpolatedCommands(config, script, flattenedEnvVars) + if err != nil { + return false, err + } + ui.Say(fmt.Sprintf("Running local shell script: %s", script)) + + comm := &Communicator{ + ExecuteCommand: interpolatedCmds, + } + + // The remoteCmd generated here isn't actually run, but it allows us to + // use the same interface for the shell-local communicator as we use for + // the other communicators; ultimately, this command is just used for + // buffers and for reading the final exit status. + flattenedCmd := strings.Join(interpolatedCmds, " ") + cmd := &packer.RemoteCmd{Command: flattenedCmd} + sanitized := flattenedCmd + if len(getWinRMPassword(config.PackerBuildName)) > 0 { + sanitized = strings.Replace(flattenedCmd, + getWinRMPassword(config.PackerBuildName), "*****", -1) + } + log.Printf("[INFO] (shell-local): starting local command: %s", sanitized) + if err := cmd.StartWithUi(comm, ui); err != nil { + return false, fmt.Errorf( + "Error executing script: %s\n\n"+ + "Please see output above for more information.", + script) + } + if cmd.ExitStatus != 0 { + return false, fmt.Errorf( + "Erroneous exit code %d while executing script: %s\n\n"+ + "Please see output above for more information.", + cmd.ExitStatus, + script) + } + } + + return true, nil +} + +func createInlineScriptFile(config *Config) (string, error) { + tf, err := ioutil.TempFile("", "packer-shell") + if err != nil { + return "", fmt.Errorf("Error preparing shell script: %s", err) + } + defer tf.Close() + // Write our contents to it + writer := bufio.NewWriter(tf) + if config.InlineShebang != "" { + shebang := fmt.Sprintf("#!%s\n", config.InlineShebang) + log.Printf("[INFO] (shell-local): Prepending inline script with %s", shebang) + writer.WriteString(shebang) + } + + // generate context so you can interpolate the command + config.Ctx.Data = &EnvVarsTemplate{ + WinRMPassword: getWinRMPassword(config.PackerBuildName), + } + + for _, command := range config.Inline { + // interpolate command to check for template variables.
+ command, err := interpolate.Render(command, &config.Ctx) + if err != nil { + return "", err + } + + if _, err := writer.WriteString(command + "\n"); err != nil { + return "", fmt.Errorf("Error preparing shell script: %s", err) + } + } + + if err := writer.Flush(); err != nil { + return "", fmt.Errorf("Error preparing shell script: %s", err) + } + + err = os.Chmod(tf.Name(), 0700) + if err != nil { + log.Printf("[ERROR] (shell-local): error modifying permissions of temp script file: %s", err.Error()) + } + return tf.Name(), nil +} + +// Generates the final command to send to the communicator, using either the +// user-provided ExecuteCommand or defaulting to something that makes sense for +// the host OS +func createInterpolatedCommands(config *Config, script string, flattenedEnvVars string) ([]string, error) { + config.Ctx.Data = &ExecuteCommandTemplate{ + Vars: flattenedEnvVars, + Script: script, + Command: script, + WinRMPassword: getWinRMPassword(config.PackerBuildName), + } + + interpolatedCmds := make([]string, len(config.ExecuteCommand)) + for i, cmd := range config.ExecuteCommand { + interpolatedCmd, err := interpolate.Render(cmd, &config.Ctx) + if err != nil { + return nil, fmt.Errorf("Error processing command: %s", err) + } + interpolatedCmds[i] = interpolatedCmd + } + return interpolatedCmds, nil +} + +func createFlattenedEnvVars(config *Config) (string, error) { + flattened := "" + envVars := make(map[string]string) + + // Always available Packer provided env vars + envVars["PACKER_BUILD_NAME"] = fmt.Sprintf("%s", config.PackerBuildName) + envVars["PACKER_BUILDER_TYPE"] = fmt.Sprintf("%s", config.PackerBuilderType) + + // expose PACKER_HTTP_ADDR + httpAddr := common.GetHTTPAddr() + if httpAddr != "" { + envVars["PACKER_HTTP_ADDR"] = fmt.Sprintf("%s", httpAddr) + } + + // interpolate environment variables + config.Ctx.Data = &EnvVarsTemplate{ + WinRMPassword: getWinRMPassword(config.PackerBuildName), + } + // Split vars into key/value components + for _, envVar := range config.Vars { + envVar, err := interpolate.Render(envVar, &config.Ctx) + if err != nil { + return "", err + } + // Split vars into key/value components + keyValue := strings.SplitN(envVar, "=", 2) + // Store pair, replacing any single quotes in value so they parse + // correctly with required environment variable format + envVars[keyValue[0]] = strings.Replace(keyValue[1], "'", `'"'"'`, -1) + } + + // Create a list of env var keys in sorted order + var keys []string + for k := range envVars { + keys = append(keys, k) + } + sort.Strings(keys) + + for _, key := range keys { + flattened += fmt.Sprintf(config.EnvVarFormat, key, envVars[key]) + } + return flattened, nil +} + +func getWinRMPassword(buildName string) string { + winRMPass, _ := commonhelper.RetrieveSharedState("winrm_password", buildName) + return winRMPass +} diff --git a/common/step_create_floppy.go b/common/step_create_floppy.go index 5f05eea78..dd4ead840 100644 --- a/common/step_create_floppy.go +++ b/common/step_create_floppy.go @@ -1,6 +1,7 @@ package common import ( + "context" "fmt" "io" "io/ioutil" @@ -10,10 +11,10 @@ import ( "path/filepath" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/mitchellh/go-fs" "github.com/mitchellh/go-fs/fat" - "github.com/mitchellh/multistep" ) // StepCreateFloppy will create a floppy disk with the given files. 
@@ -26,7 +27,7 @@ type StepCreateFloppy struct { FilesAdded map[string]bool } -func (s *StepCreateFloppy) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepCreateFloppy) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { if len(s.Files) == 0 && len(s.Directories) == 0 { log.Println("No floppy files specified. Floppy disk will not be made.") return multistep.ActionContinue @@ -155,7 +156,10 @@ func (s *StepCreateFloppy) Run(state multistep.StateBag) multistep.StepAction { } for _, crawlfilename := range crawlDirectoryFiles { - s.Add(cache, crawlfilename) + if err = s.Add(cache, crawlfilename); err != nil { + state.Put("error", fmt.Errorf("Error adding file from floppy_files : %s : %s", filename, err)) + return multistep.ActionHalt + } s.FilesAdded[crawlfilename] = true } @@ -165,7 +169,10 @@ func (s *StepCreateFloppy) Run(state multistep.StateBag) multistep.StepAction { // add just a single file ui.Message(fmt.Sprintf("Copying file: %s", filename)) - s.Add(cache, filename) + if err = s.Add(cache, filename); err != nil { + state.Put("error", fmt.Errorf("Error adding file from floppy_files : %s : %s", filename, err)) + return multistep.ActionHalt + } s.FilesAdded[filename] = true } ui.Message("Done copying files from floppy_files") diff --git a/common/step_create_floppy_test.go b/common/step_create_floppy_test.go index f5cc16b64..4c43a6839 100644 --- a/common/step_create_floppy_test.go +++ b/common/step_create_floppy_test.go @@ -2,9 +2,8 @@ package common import ( "bytes" + "context" "fmt" - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "io/ioutil" "log" "os" @@ -13,6 +12,9 @@ import ( "strconv" "strings" "testing" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) const TestFixtures = "test-fixtures" @@ -88,7 +90,7 @@ func TestStepCreateFloppy(t *testing.T) { } for _, step.Files = range lists { - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v for %v", action, step.Files) } @@ -139,7 +141,7 @@ func xxxTestStepCreateFloppy_missing(t *testing.T) { } for _, step.Files = range lists { - if action := step.Run(state); action != multistep.ActionHalt { + if action := step.Run(context.Background(), state); action != multistep.ActionHalt { t.Fatalf("bad action: %#v for %v", action, step.Files) } @@ -189,7 +191,7 @@ func xxxTestStepCreateFloppy_notfound(t *testing.T) { } for _, step.Files = range lists { - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v for %v", action, step.Files) } @@ -264,7 +266,7 @@ func TestStepCreateFloppyDirectories(t *testing.T) { log.Println(fmt.Sprintf("Trying against floppy_dirs : %v", step.Directories)) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v for %v : %v", action, step.Directories, state.Get("error")) } diff --git a/common/step_download.go b/common/step_download.go index f63070a04..03340b974 100644 --- a/common/step_download.go +++ b/common/step_download.go @@ -1,14 +1,16 @@ package common import ( + "context" "crypto/sha1" "encoding/hex" "fmt" "log" "time" + "github.com/hashicorp/packer/helper/multistep" + 
"github.com/hashicorp/packer/helper/useragent" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // StepDownload downloads a remote file using the download client within @@ -45,7 +47,7 @@ type StepDownload struct { Extension string } -func (s *StepDownload) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepDownload) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { cache := state.Get("cache").(packer.Cache) ui := state.Get("ui").(packer.Ui) @@ -61,10 +63,12 @@ func (s *StepDownload) Run(state multistep.StateBag) multistep.StepAction { ui.Say(fmt.Sprintf("Downloading or copying %s", s.Description)) - var finalPath string - for _, url := range s.Url { - ui.Message(fmt.Sprintf("Downloading or copying: %s", url)) + // First try to use any already downloaded file + // If it fails, proceed to regular download logic + var downloadConfigs = make([]*DownloadConfig, len(s.Url)) + var finalPath string + for i, url := range s.Url { targetPath := s.TargetPath if targetPath == "" { // Determine a cache key. This is normally just the URL but @@ -88,24 +92,39 @@ func (s *StepDownload) Run(state multistep.StateBag) multistep.StepAction { CopyFile: false, Hash: HashForType(s.ChecksumType), Checksum: checksum, - UserAgent: "Packer", + UserAgent: useragent.String(), } + downloadConfigs[i] = config - path, err, retry := s.download(config, state) - if err != nil { - ui.Message(fmt.Sprintf("Error downloading: %s", err)) - } - - if !retry { - return multistep.ActionHalt - } - - if err == nil { - finalPath = path + if match, _ := NewDownloadClient(config).VerifyChecksum(config.TargetPath); match { + ui.Message(fmt.Sprintf("Found already downloaded, initial checksum matched, no download needed: %s", url)) + finalPath = config.TargetPath break } } + if finalPath == "" { + for i, url := range s.Url { + ui.Message(fmt.Sprintf("Downloading or copying: %s", url)) + + config := downloadConfigs[i] + + path, err, retry := s.download(config, state) + if err != nil { + ui.Message(fmt.Sprintf("Error downloading: %s", err)) + } + + if !retry { + return multistep.ActionHalt + } + + if err == nil { + finalPath = path + break + } + } + } + if finalPath == "" { err := fmt.Errorf("%s download failed.", s.Description) state.Put("error", err) diff --git a/common/step_download_test.go b/common/step_download_test.go index 258beda09..2b6476614 100644 --- a/common/step_download_test.go +++ b/common/step_download_test.go @@ -1,8 +1,9 @@ package common import ( - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepDownload_Impl(t *testing.T) { diff --git a/common/step_http_server.go b/common/step_http_server.go index 3752b51c6..394042961 100644 --- a/common/step_http_server.go +++ b/common/step_http_server.go @@ -1,17 +1,16 @@ package common import ( + "context" "fmt" - "io/ioutil" "log" "math/rand" "net" "net/http" - "os" - "path/filepath" + "github.com/hashicorp/packer/helper/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) // This step creates and runs the HTTP server that is serving files from the @@ -31,7 +30,7 @@ type StepHTTPServer struct { l net.Listener } -func (s *StepHTTPServer) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepHTTPServer) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) var httpPort uint = 0 @@ -76,25 +75,21 @@ func (s *StepHTTPServer) 
Run(state multistep.StateBag) multistep.StepAction { return multistep.ActionContinue } -func httpAddrFilename(suffix string) string { - uuid := os.Getenv("PACKER_RUN_UUID") - return filepath.Join(os.TempDir(), fmt.Sprintf("packer-%s-%s", uuid, suffix)) -} - func SetHTTPPort(port string) error { - return ioutil.WriteFile(httpAddrFilename("port"), []byte(port), 0644) + return common.SetSharedState("port", port, "") } func SetHTTPIP(ip string) error { - return ioutil.WriteFile(httpAddrFilename("ip"), []byte(ip), 0644) + return common.SetSharedState("ip", ip, "") } func GetHTTPAddr() string { - ip, err := ioutil.ReadFile(httpAddrFilename("ip")) + ip, err := common.RetrieveSharedState("ip", "") if err != nil { return "" } - port, err := ioutil.ReadFile(httpAddrFilename("port")) + + port, err := common.RetrieveSharedState("port", "") if err != nil { return "" } @@ -106,6 +101,6 @@ func (s *StepHTTPServer) Cleanup(multistep.StateBag) { // Close the listener so that the HTTP server stops s.l.Close() } - os.Remove(httpAddrFilename("port")) - os.Remove(httpAddrFilename("ip")) + common.RemoveSharedStateFile("port", "") + common.RemoveSharedStateFile("ip", "") } diff --git a/common/step_provision.go b/common/step_provision.go index 14074d7a1..9c842044b 100644 --- a/common/step_provision.go +++ b/common/step_provision.go @@ -1,10 +1,12 @@ package common import ( - "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "context" "log" "time" + + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" ) // StepProvision runs the provisioners. @@ -20,7 +22,7 @@ type StepProvision struct { Comm packer.Communicator } -func (s *StepProvision) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepProvision) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { comm := s.Comm if comm == nil { raw, ok := state.Get("communicator").(packer.Communicator) diff --git a/common/step_provision_test.go b/common/step_provision_test.go index 6da846cb3..a37d7681b 100644 --- a/common/step_provision_test.go +++ b/common/step_provision_test.go @@ -1,8 +1,9 @@ package common import ( - "github.com/mitchellh/multistep" "testing" + + "github.com/hashicorp/packer/helper/multistep" ) func TestStepProvision_Impl(t *testing.T) { diff --git a/common/test-fixtures/SomeDir/myfile.txt b/common/test-fixtures/SomeDir/myfile.txt new file mode 100644 index 000000000..e69de29bb diff --git a/common/test-fixtures/decompress-tar/outside_parent.tar b/common/test-fixtures/decompress-tar/outside_parent.tar new file mode 100644 index 000000000..f08df1e65 Binary files /dev/null and b/common/test-fixtures/decompress-tar/outside_parent.tar differ diff --git a/communicator/ssh/communicator.go b/communicator/ssh/communicator.go index c10e97dcc..8fbddc968 100644 --- a/communicator/ssh/communicator.go +++ b/communicator/ssh/communicator.go @@ -55,6 +55,13 @@ type Config struct { // UseSftp, if true, sftp will be used instead of scp for file transfers UseSftp bool + + // KeepAliveInterval sets how often we send a channel request to the + // server. A value < 0 disables. + KeepAliveInterval time.Duration + + // Timeout is how long to wait for a read or write to succeed. + Timeout time.Duration } // Creates a new packer.Communicator implementation over SSH. 
This takes @@ -98,12 +105,26 @@ func (c *comm) Start(cmd *packer.RemoteCmd) (err error) { } } - log.Printf("starting remote command: %s", cmd.Command) + log.Printf("[DEBUG] starting remote command: %s", cmd.Command) err = session.Start(cmd.Command + "\n") if err != nil { return } + go func() { + if c.config.KeepAliveInterval <= 0 { + return + } + c := time.NewTicker(c.config.KeepAliveInterval) + defer c.Stop() + for range c.C { + _, err := session.SendRequest("keepalive@packer.io", true, nil) + if err != nil { + return + } + } + }() + // Start a goroutine to wait for the session to end and set the // exit boolean and status. go func() { @@ -115,12 +136,12 @@ func (c *comm) Start(cmd *packer.RemoteCmd) (err error) { switch err.(type) { case *ssh.ExitError: exitStatus = err.(*ssh.ExitError).ExitStatus() - log.Printf("Remote command exited with '%d': %s", exitStatus, cmd.Command) + log.Printf("[ERROR] Remote command exited with '%d': %s", exitStatus, cmd.Command) case *ssh.ExitMissingError: - log.Printf("Remote command exited without exit status or exit signal.") + log.Printf("[ERROR] Remote command exited without exit status or exit signal.") exitStatus = packer.CmdDisconnect default: - log.Printf("Error occurred waiting for ssh session: %s", err.Error()) + log.Printf("[ERROR] Error occurred waiting for ssh session: %s", err.Error()) } } cmd.SetExited(exitStatus) @@ -137,7 +158,7 @@ func (c *comm) Upload(path string, input io.Reader, fi *os.FileInfo) error { } func (c *comm) UploadDir(dst string, src string, excl []string) error { - log.Printf("Upload dir '%s' to '%s'", src, dst) + log.Printf("[DEBUG] Upload dir '%s' to '%s'", src, dst) if c.config.UseSftp { return c.sftpUploadDirSession(dst, src, excl) } else { @@ -146,7 +167,7 @@ func (c *comm) UploadDir(dst string, src string, excl []string) error { } func (c *comm) DownloadDir(src string, dst string, excl []string) error { - log.Printf("Download dir '%s' to '%s'", src, dst) + log.Printf("[DEBUG] Download dir '%s' to '%s'", src, dst) scpFunc := func(w io.Writer, stdoutR *bufio.Reader) error { dirStack := []string{dst} for { @@ -181,7 +202,7 @@ func (c *comm) DownloadDir(src string, dst string, excl []string) error { var mode int64 var size int64 var name string - log.Printf("Download dir str:%s", fi) + log.Printf("[DEBUG] Download dir str:%s", fi) n, err := fmt.Sscanf(fi[1:], "%o %d %s", &mode, &size, &name) if err != nil || n != 3 { return fmt.Errorf("can't parse server response (%s)", fi) @@ -190,7 +211,7 @@ func (c *comm) DownloadDir(src string, dst string, excl []string) error { return fmt.Errorf("negative file size") } - log.Printf("Download dir mode:%0o size:%d name:%s", mode, size, name) + log.Printf("[DEBUG] Download dir mode:%0o size:%d name:%s", mode, size, name) dst = filepath.Join(dirStack...) 
switch fi[0] { @@ -225,7 +246,7 @@ func (c *comm) Download(path string, output io.Writer) error { } func (c *comm) newSession() (session *ssh.Session, err error) { - log.Println("opening new ssh session") + log.Println("[DEBUG] Opening new ssh session") if c.client == nil { err = errors.New("client not available") } else { @@ -233,7 +254,7 @@ func (c *comm) newSession() (session *ssh.Session, err error) { } if err != nil { - log.Printf("ssh session open error: '%s', attempting reconnect", err) + log.Printf("[ERROR] ssh session open error: '%s', attempting reconnect", err) if err := c.reconnect(); err != nil { return nil, err } @@ -258,7 +279,7 @@ func (c *comm) reconnect() (err error) { c.conn = nil c.client = nil - log.Printf("reconnecting to TCP connection for SSH") + log.Printf("[DEBUG] reconnecting to TCP connection for SSH") c.conn, err = c.config.Connection() if err != nil { // Explicitly set this to the REAL nil. Connection() can return @@ -269,11 +290,15 @@ func (c *comm) reconnect() (err error) { // http://golang.org/doc/faq#nil_error c.conn = nil - log.Printf("reconnection error: %s", err) + log.Printf("[ERROR] reconnection error: %s", err) return } - log.Printf("handshaking with SSH") + if c.config.Timeout > 0 { + c.conn = &timeoutConn{c.conn, c.config.Timeout, c.config.Timeout} + } + + log.Printf("[DEBUG] handshaking with SSH") // Default timeout to 1 minute if it wasn't specified (zero value). For // when you need to handshake from low orbit. @@ -310,10 +335,9 @@ func (c *comm) reconnect() (err error) { } if err != nil { - log.Printf("handshake error: %s", err) return } - log.Printf("handshake complete!") + log.Printf("[DEBUG] handshake complete!") if sshConn != nil { c.client = ssh.NewClient(sshConn, sshChan, req) } @@ -377,15 +401,14 @@ func (c *comm) connectToAgent() { func (c *comm) sftpUploadSession(path string, input io.Reader, fi *os.FileInfo) error { sftpFunc := func(client *sftp.Client) error { - return sftpUploadFile(path, input, client, fi) + return c.sftpUploadFile(path, input, client, fi) } return c.sftpSession(sftpFunc) } -func sftpUploadFile(path string, input io.Reader, client *sftp.Client, fi *os.FileInfo) error { +func (c *comm) sftpUploadFile(path string, input io.Reader, client *sftp.Client, fi *os.FileInfo) error { log.Printf("[DEBUG] sftp: uploading %s", path) - f, err := client.Create(path) if err != nil { return err @@ -411,7 +434,7 @@ func (c *comm) sftpUploadDirSession(dst string, src string, excl []string) error sftpFunc := func(client *sftp.Client) error { rootDst := dst if src[len(src)-1] != '/' { - log.Printf("No trailing slash, creating the source directory name") + log.Printf("[DEBUG] No trailing slash, creating the source directory name") rootDst = filepath.Join(dst, filepath.Base(src)) } walkFunc := func(path string, info os.FileInfo, err error) error { @@ -436,7 +459,7 @@ func (c *comm) sftpUploadDirSession(dst string, src string, excl []string) error return nil } - return sftpVisitFile(finalDst, path, info, client) + return c.sftpVisitFile(finalDst, path, info, client) } return filepath.Walk(src, walkFunc) @@ -445,7 +468,7 @@ func (c *comm) sftpUploadDirSession(dst string, src string, excl []string) error return c.sftpSession(sftpFunc) } -func sftpMkdir(path string, client *sftp.Client, fi os.FileInfo) error { +func (c *comm) sftpMkdir(path string, client *sftp.Client, fi os.FileInfo) error { log.Printf("[DEBUG] sftp: creating dir %s", path) if err := client.Mkdir(path); err != nil { @@ -463,16 +486,16 @@ func sftpMkdir(path string, client 
*sftp.Client, fi os.FileInfo) error { return nil } -func sftpVisitFile(dst string, src string, fi os.FileInfo, client *sftp.Client) error { +func (c *comm) sftpVisitFile(dst string, src string, fi os.FileInfo, client *sftp.Client) error { if !fi.IsDir() { f, err := os.Open(src) if err != nil { return err } defer f.Close() - return sftpUploadFile(dst, f, client, &fi) + return c.sftpUploadFile(dst, f, client, &fi) } else { - err := sftpMkdir(dst, client, fi) + err := c.sftpMkdir(dst, client, fi) return err } } @@ -498,7 +521,7 @@ func (c *comm) sftpDownloadSession(path string, output io.Writer) error { func (c *comm) sftpSession(f func(*sftp.Client) error) error { client, err := c.newSftpClient() if err != nil { - return err + return fmt.Errorf("sftpSession error: %s", err.Error()) } defer client.Close() @@ -524,7 +547,15 @@ func (c *comm) newSftpClient() (*sftp.Client, error) { return nil, err } - return sftp.NewClientPipe(pr, pw) + // Capture stdout so we can return errors to the user + var stdout bytes.Buffer + tee := io.TeeReader(pr, &stdout) + client, err := sftp.NewClientPipe(tee, pw) + if err != nil && stdout.Len() > 0 { + log.Printf("[ERROR] Upload failed: %s", stdout.Bytes()) + } + + return client, err } func (c *comm) scpUploadSession(path string, input io.Reader, fi *os.FileInfo) error { @@ -533,7 +564,7 @@ func (c *comm) scpUploadSession(path string, input io.Reader, fi *os.FileInfo) e target_dir := filepath.Dir(path) target_file := filepath.Base(path) - // On windows, filepath.Dir uses backslash seperators (ie. "\tmp"). + // On windows, filepath.Dir uses backslash separators (ie. "\tmp"). // This does not work when the target host is unix. Switch to forward slash // which works for unix and windows target_dir = filepath.ToSlash(target_dir) @@ -563,7 +594,7 @@ func (c *comm) scpUploadDirSession(dst string, src string, excl []string) error } if src[len(src)-1] != '/' { - log.Printf("No trailing slash, creating the source directory name") + log.Printf("[DEBUG] No trailing slash, creating the source directory name") fi, err := os.Stat(src) if err != nil { return err @@ -664,7 +695,7 @@ func (c *comm) scpSession(scpCommand string, f func(io.Writer, *bufio.Reader) er // Start the sink mode on the other side // TODO(mitchellh): There are probably issues with shell escaping the path - log.Println("Starting remote scp process: ", scpCommand) + log.Println("[DEBUG] Starting remote scp process: ", scpCommand) if err := session.Start(scpCommand); err != nil { return err } @@ -672,7 +703,7 @@ func (c *comm) scpSession(scpCommand string, f func(io.Writer, *bufio.Reader) er // Call our callback that executes in the context of SCP. We ignore // EOF errors if they occur because it usually means that SCP prematurely // ended on the other side. - log.Println("Started SCP session, beginning transfers...") + log.Println("[DEBUG] Started SCP session, beginning transfers...") if err := f(stdinW, stdoutR); err != nil && err != io.EOF { return err } @@ -680,24 +711,24 @@ func (c *comm) scpSession(scpCommand string, f func(io.Writer, *bufio.Reader) er // Close the stdin, which sends an EOF, and then set w to nil so that // our defer func doesn't close it again since that is unsafe with // the Go SSH package. - log.Println("SCP session complete, closing stdin pipe.") + log.Println("[DEBUG] SCP session complete, closing stdin pipe.") stdinW.Close() stdinW = nil // Wait for the SCP connection to close, meaning it has consumed all // our data and has completed. Or has errored. 
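The newSftpClient change above is worth calling out: by teeing the session's stdout into a buffer before handing it to the SFTP client, any banner or error text the server prints (for example, when SFTP is disabled) can be logged instead of silently discarded. A self-contained sketch of the same io.TeeReader trick, with a stand-in reader in place of the real SSH pipes:

```
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

func main() {
	// Stand-in for the remote stdout pipe of the SSH session.
	remoteStdout := strings.NewReader("This service allows sftp connections only.\n")

	// Everything the consumer reads is also copied into the buffer.
	var captured bytes.Buffer
	tee := io.TeeReader(remoteStdout, &captured)

	// Stand-in for sftp.NewClientPipe consuming the stream.
	if _, err := ioutil.ReadAll(tee); err != nil {
		fmt.Println("read error:", err)
	}

	// On failure, the captured output explains what the server actually said.
	fmt.Printf("captured %d bytes: %q\n", captured.Len(), captured.String())
}
```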
- log.Println("Waiting for SSH session to complete.") + log.Println("[DEBUG] Waiting for SSH session to complete.") err = session.Wait() if err != nil { if exitErr, ok := err.(*ssh.ExitError); ok { - // Otherwise, we have an ExitErorr, meaning we can just read + // Otherwise, we have an ExitError, meaning we can just read // the exit status - log.Printf("non-zero exit status: %d", exitErr.ExitStatus()) + log.Printf("[DEBUG] non-zero exit status: %d", exitErr.ExitStatus()) stdoutB, err := ioutil.ReadAll(stdoutR) if err != nil { return err } - log.Printf("scp output: %s", stdoutB) + log.Printf("[DEBUG] scp output: %s", stdoutB) // If we exited with status 127, it means SCP isn't available. // Return a more descriptive error for that. @@ -711,7 +742,7 @@ func (c *comm) scpSession(scpCommand string, f func(io.Writer, *bufio.Reader) er return err } - log.Printf("scp stderr (length %d): %s", stderr.Len(), stderr.String()) + log.Printf("[DEBUG] scp stderr (length %d): %s", stderr.Len(), stderr.String()) return nil } @@ -768,7 +799,7 @@ func scpUploadFile(dst string, src io.Reader, w io.Writer, r *bufio.Reader, fi * mode = 0644 - log.Println("Copying input data into temporary file so we can read the length") + log.Println("[DEBUG] Copying input data into temporary file so we can read the length") if _, err := io.Copy(tf, src); err != nil { return err } @@ -811,7 +842,7 @@ func scpUploadFile(dst string, src io.Reader, w io.Writer, r *bufio.Reader, fi * } func scpUploadDirProtocol(name string, w io.Writer, r *bufio.Reader, f func() error, fi os.FileInfo) error { - log.Printf("SCP: starting directory upload: %s", name) + log.Printf("[DEBUG] SCP: starting directory upload: %s", name) mode := fi.Mode().Perm() diff --git a/communicator/ssh/connect.go b/communicator/ssh/connect.go index 7622a90e3..80bf0a9f4 100644 --- a/communicator/ssh/connect.go +++ b/communicator/ssh/connect.go @@ -28,7 +28,7 @@ func ConnectFunc(network, addr string) func() (net.Conn, error) { } } -// ConnectFunc is a convenience method for returning a function +// ProxyConnectFunc is a convenience method for returning a function // that connects to a host using SOCKS5 proxy func ProxyConnectFunc(socksProxy string, socksAuth *proxy.Auth, network, addr string) func() (net.Conn, error) { return func() (net.Conn, error) { diff --git a/communicator/ssh/connection.go b/communicator/ssh/connection.go new file mode 100644 index 000000000..c3df04543 --- /dev/null +++ b/communicator/ssh/connection.go @@ -0,0 +1,30 @@ +package ssh + +import ( + "net" + "time" +) + +// timeoutConn wraps a net.Conn, and sets a deadline for every read +// and write operation. 
+type timeoutConn struct { + net.Conn + ReadTimeout time.Duration + WriteTimeout time.Duration +} + +func (c *timeoutConn) Read(b []byte) (int, error) { + err := c.Conn.SetReadDeadline(time.Now().Add(c.ReadTimeout)) + if err != nil { + return 0, err + } + return c.Conn.Read(b) +} + +func (c *timeoutConn) Write(b []byte) (int, error) { + err := c.Conn.SetWriteDeadline(time.Now().Add(c.WriteTimeout)) + if err != nil { + return 0, err + } + return c.Conn.Write(b) +} diff --git a/communicator/ssh/password_test.go b/communicator/ssh/password_test.go index 6e3e0a257..47a4c7782 100644 --- a/communicator/ssh/password_test.go +++ b/communicator/ssh/password_test.go @@ -1,9 +1,10 @@ package ssh import ( - "golang.org/x/crypto/ssh" "reflect" "testing" + + "golang.org/x/crypto/ssh" ) func TestPasswordKeyboardInteractive_Impl(t *testing.T) { @@ -14,7 +15,7 @@ func TestPasswordKeyboardInteractive_Impl(t *testing.T) { } } -func TestPasswordKeybardInteractive_Challenge(t *testing.T) { +func TestPasswordKeyboardInteractive_Challenge(t *testing.T) { p := PasswordKeyboardInteractive("foo") result, err := p("foo", "bar", []string{"one", "two"}, nil) if err != nil { diff --git a/communicator/winrm/communicator.go b/communicator/winrm/communicator.go index d1f7d2ab6..fc639ccb0 100644 --- a/communicator/winrm/communicator.go +++ b/communicator/winrm/communicator.go @@ -122,10 +122,14 @@ func runCommand(shell *winrm.Shell, cmd *winrm.Command, rc *packer.RemoteCmd) { } // Upload implementation of communicator.Communicator interface -func (c *Communicator) Upload(path string, input io.Reader, _ *os.FileInfo) error { +func (c *Communicator) Upload(path string, input io.Reader, fi *os.FileInfo) error { wcp, err := c.newCopyClient() if err != nil { - return err + return fmt.Errorf("Was unable to create winrm client: %s", err) + } + if strings.HasSuffix(path, `\`) { + // path is a directory + path += filepath.Base((*fi).Name()) } log.Printf("Uploading file to '%s'", path) return wcp.Write(path, input) @@ -145,8 +149,7 @@ func (c *Communicator) UploadDir(dst string, src string, exclude []string) error } func (c *Communicator) Download(src string, dst io.Writer) error { - endpoint := winrm.NewEndpoint(c.endpoint.Host, c.endpoint.Port, c.config.Https, c.config.Insecure, nil, nil, nil, c.config.Timeout) - client, err := winrm.NewClient(endpoint, c.config.Username, c.config.Password) + client, err := c.newWinRMClient() if err != nil { return err } @@ -165,9 +168,8 @@ func (c *Communicator) DownloadDir(src string, dst string, exclude []string) err return fmt.Errorf("WinRM doesn't support download dir.") } -func (c *Communicator) newCopyClient() (*winrmcp.Winrmcp, error) { - addr := fmt.Sprintf("%s:%d", c.endpoint.Host, c.endpoint.Port) - return winrmcp.New(addr, &winrmcp.Config{ +func (c *Communicator) getClientConfig() *winrmcp.Config { + return &winrmcp.Config{ Auth: winrmcp.Auth{ User: c.config.Username, Password: c.config.Password, @@ -177,7 +179,45 @@ func (c *Communicator) newCopyClient() (*winrmcp.Winrmcp, error) { OperationTimeout: c.config.Timeout, MaxOperationsPerShell: 15, // lowest common denominator TransportDecorator: c.config.TransportDecorator, - }) + } +} + +func (c *Communicator) newCopyClient() (*winrmcp.Winrmcp, error) { + addr := fmt.Sprintf("%s:%d", c.endpoint.Host, c.endpoint.Port) + clientConfig := c.getClientConfig() + return winrmcp.New(addr, clientConfig) +} + +func (c *Communicator) newWinRMClient() (*winrm.Client, error) { + conf := c.getClientConfig() + + // Shamelessly borrowed from the winrmcp 
client to ensure + // that the client is configured using the same defaulting behaviors that + // winrmcp uses even we we aren't using winrmcp. This ensures similar + // behavior between upload, download, and copy functions. We can't use the + // one generated by winrmcp because it isn't exported. + var endpoint *winrm.Endpoint + endpoint = &winrm.Endpoint{ + Host: c.endpoint.Host, + Port: c.endpoint.Port, + HTTPS: conf.Https, + Insecure: conf.Insecure, + TLSServerName: conf.TLSServerName, + CACert: conf.CACertBytes, + Timeout: conf.ConnectTimeout, + } + params := winrm.NewParameters( + winrm.DefaultParameters.Timeout, + winrm.DefaultParameters.Locale, + winrm.DefaultParameters.EnvelopeSize, + ) + + params.TransportDecorator = conf.TransportDecorator + params.Timeout = "PT3M" + + client, err := winrm.NewClientWithParameters( + endpoint, conf.Auth.User, conf.Auth.Password, params) + return client, err } type Base64Pipe struct { diff --git a/communicator/winrm/communicator_test.go b/communicator/winrm/communicator_test.go index d5eb974ac..3f6b50ae1 100644 --- a/communicator/winrm/communicator_test.go +++ b/communicator/winrm/communicator_test.go @@ -48,6 +48,12 @@ func newMockWinRMServer(t *testing.T) *winrmtest.Remote { func(out, err io.Writer) int { return 0 }) + wrm.CommandFunc( + winrmtest.MatchText(`powershell -Command "(Get-Item C:/Temp/packer.cmd) -is [System.IO.DirectoryInfo]"`), + func(out, err io.Writer) int { + out.Write([]byte("False")) + return 0 + }) return wrm } diff --git a/contrib/azure-setup.sh b/contrib/azure-setup.sh index b5f6de220..6473c5a08 100755 --- a/contrib/azure-setup.sh +++ b/contrib/azure-setup.sh @@ -17,7 +17,7 @@ create_sleep=10 showhelp() { echo "azure-setup" echo "" - echo " azure-setup helps you generate packer credentials for Azure" + echo " azure-setup helps you generate packer credentials for azure" echo "" echo " The script creates a resource group, storage account, application" echo " (client), service principal, and permissions and displays a snippet" @@ -49,13 +49,14 @@ showhelp() { requirements() { found=0 - azureversion=$(azure -v) + azureversion=$(az --version) if [ $? -eq 0 ]; then found=$((found + 1)) echo "Found azure-cli version: $azureversion" else echo "azure-cli is missing. Please install azure-cli from" - echo "https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/" + echo "https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest" + echo "Alternatively, you can use the Cloud Shell https://docs.microsoft.com/en-us/azure/cloud-shell/overview right from the Azure Portal or even VS Code." fi jqversion=$(jq --version) @@ -73,19 +74,20 @@ requirements() { } askSubscription() { - azure account list + az account list -otable echo "" echo "Please enter the Id of the account you wish to use. If you do not see" echo "a valid account in the list press Ctrl+C to abort and create one." echo "If you leave this blank we will use the Current account." 
echo -n "> " read azure_subscription_id + if [ "$azure_subscription_id" != "" ]; then - azure account set $azure_subscription_id + az account set --subscription $azure_subscription_id else - azure_subscription_id=$(azure account show --json | jq -r .[].id) + azure_subscription_id=$(az account list --output json | jq -r '.[] | select(.isDefault==true) | .id') fi - azure_tenant_id=$(azure account show --json | jq -r .[].tenantId) + azure_tenant_id=$(az account list --output json | jq -r '.[] | select(.id=="'$azure_subscription_id'") | .tenantId') echo "Using subscription_id: $azure_subscription_id" echo "Using tenant_id: $azure_tenant_id" } @@ -118,16 +120,16 @@ askSecret() { } askLocation() { - azure location list + az account list-locations -otable echo "" - echo "Choose which region your resource group and storage account will be created." + echo "Choose which region your resource group and storage account will be created. example: westus" echo -n "> " read location } createResourceGroup() { echo "==> Creating resource group" - azure group create -n $meta_name -l $location + az group create -n $meta_name -l $location if [ $? -eq 0 ]; then azure_group_name=$meta_name else @@ -138,7 +140,7 @@ createResourceGroup() { createStorageAccount() { echo "==> Creating storage account" - azure storage account create -g $meta_name -l $location --sku-name LRS --kind Storage $meta_name + az storage account create --name $meta_name --resource-group $meta_name --location $location --kind Storage --sku Standard_LRS if [ $? -eq 0 ]; then azure_storage_name=$meta_name else @@ -149,7 +151,17 @@ createStorageAccount() { createApplication() { echo "==> Creating application" - azure_client_id=$(azure ad app create -n $meta_name -i http://$meta_name --home-page http://$meta_name -p $azure_client_secret --json | jq -r .appId) + echo "==> Does application exist?" + azure_client_id=$(az ad app list --output json | jq -r '.[] | select(.displayName | contains("'$meta_name'")) ') + + if [ "$azure_client_id" != "" ]; then + echo "==> application already exist, grab appId" + azure_client_id=$(az ad app list az ad app list --output json | jq -r '.[] | select(.displayName | contains("'$meta_name'")) .appId') + else + echo "==> application does not exist" + azure_client_id=$(az ad app create --display-name $meta_name --identifier-uris http://$meta_name --homepage http://$meta_name --password $azure_client_secret --output json | jq -r .appId) + fi + if [ $? -ne 0 ]; then echo "Error creating application: $meta_name @ http://$meta_name" return 1 @@ -158,19 +170,8 @@ createApplication() { createServicePrincipal() { echo "==> Creating service principal" - # Azure CLI 0.10.2 introduced a breaking change, where appId must be supplied with the -a switch - # prior version accepted appId as the only parameter without a switch - newer_syntax=false - IFS='.' read -ra azureversionsemver <<< "$azureversion" - if [ ${azureversionsemver[0]} -ge 0 ] && [ ${azureversionsemver[1]} -ge 10 ] && [ ${azureversionsemver[2]} -ge 2 ]; then - newer_syntax=true - fi - - if [ "${newer_syntax}" = true ]; then - azure_object_id=$(azure ad sp create -a $azure_client_id --json | jq -r .objectId) - else - azure_object_id=$(azure ad sp create $azure_client_id --json | jq -r .objectId) - fi + azure_object_id=$(az ad sp create --id $azure_client_id --output json | jq -r .objectId) + echo $azure_object_id "was selected." if [ $? 
-ne 0 ]; then echo "Error creating service principal: $azure_client_id" @@ -180,10 +181,11 @@ createServicePrincipal() { createPermissions() { echo "==> Creating permissions" - azure role assignment create --objectId $azure_object_id -o "Owner" -c /subscriptions/$azure_subscription_id - # We want to use this more conservative scope but it does not work with the - # current implementation which uses temporary resource groups - # azure role assignment create --spn http://$meta_name -g $azure_group_name -o "API Management Service Contributor" + az role assignment create --assignee $azure_object_id --role "Owner" --scope /subscriptions/$azure_subscription_id + # If the user wants to use a more conservative scope, she can. She must + # configure the Azure builder to use build_resource_group_name. The + # easiest solution is subscription wide permission. + # az role assignment create --spn http://$meta_name -g $azure_group_name -o "API Management Service Contributor" if [ $? -ne 0 ]; then echo "Error creating permissions for: http://$meta_name" return 1 @@ -234,8 +236,7 @@ retryable() { setup() { requirements - azure config mode arm - azure login + az login askSubscription askName diff --git a/examples/alicloud/basic/alicloud.json b/examples/alicloud/basic/alicloud.json index 8f83262fe..d0c643f48 100644 --- a/examples/alicloud/basic/alicloud.json +++ b/examples/alicloud/basic/alicloud.json @@ -9,7 +9,7 @@ "secret_key":"{{user `secret_key`}}", "region":"cn-beijing", "image_name":"packer_basic", - "source_image":"centos_7_2_64_40G_base_20170222.vhd", + "source_image":"centos_7_03_64_20G_alibase_20170818.vhd", "ssh_username":"root", "instance_type":"ecs.n1.tiny", "internet_charge_type":"PayByTraffic", diff --git a/examples/alicloud/basic/alicloud_with_data_disk.json b/examples/alicloud/basic/alicloud_with_data_disk.json index 31f10ab70..dcc8c70ef 100644 --- a/examples/alicloud/basic/alicloud_with_data_disk.json +++ b/examples/alicloud/basic/alicloud_with_data_disk.json @@ -9,7 +9,7 @@ "secret_key":"{{user `secret_key`}}", "region":"cn-beijing", "image_name":"packer_with_data_disk", - "source_image":"centos_7_2_64_40G_base_20170222.vhd", + "source_image":"centos_7_03_64_20G_alibase_20170818.vhd", "ssh_username":"root", "instance_type":"ecs.n1.tiny", "internet_charge_type":"PayByTraffic", diff --git a/examples/alicloud/chef/chef.sh b/examples/alicloud/chef/chef.sh index 6d678b7cc..3e3144561 100644 --- a/examples/alicloud/chef/chef.sh +++ b/examples/alicloud/chef/chef.sh @@ -1,5 +1,5 @@ #!/bin/sh -#if the related deb pkg not found, please replace with it other avaiable repository url +#if the related deb pkg not found, please replace with it other available repository url HOSTNAME=`ifconfig eth1|grep 'inet addr'|cut -d ":" -f2|cut -d " " -f1` if [ not $HOSTNAME ] ; then HOSTNAME=`ifconfig eth0|grep 'inet addr'|cut -d ":" -f2|cut -d " " -f1` diff --git a/examples/alicloud/jenkins/jenkins.sh b/examples/alicloud/jenkins/jenkins.sh index af3eb51a3..72972c24e 100644 --- a/examples/alicloud/jenkins/jenkins.sh +++ b/examples/alicloud/jenkins/jenkins.sh @@ -32,7 +32,7 @@ mv $TOMCAT_NAME /opt wget $JENKINS_URL mv jenkins.war $TOMCAT_PATH/webapps/ -#set emvironment +#set environment echo "TOMCAT_PATH=\"$TOMCAT_PATH\"">>/etc/profile echo "JENKINS_HOME=\"$TOMCAT_PATH/webapps/jenkins\"">>/etc/profile echo PATH="\"\$PATH:\$TOMCAT_PATH:\$JENKINS_HOME\"">>/etc/profile diff --git a/examples/alicloud/local/centos.json b/examples/alicloud/local/centos.json index 45c19b3dc..07b2f2578 100644 --- 
a/examples/alicloud/local/centos.json +++ b/examples/alicloud/local/centos.json @@ -37,7 +37,7 @@ "ssh_password": "vagrant", "ssh_port": 22, "ssh_username": "root", - "ssh_wait_timeout": "10000s", + "ssh_timeout": "10000s", "type": "qemu", "vm_name": "{{ user `template` }}.raw", "net_device": "virtio-net", diff --git a/examples/ansible/connection-plugin/2.4.x/packer.py b/examples/ansible/connection-plugin/2.4.x/packer.py new file mode 100644 index 000000000..ea4210d32 --- /dev/null +++ b/examples/ansible/connection-plugin/2.4.x/packer.py @@ -0,0 +1,186 @@ +from __future__ import (absolute_import, division, print_function) +__metaclass__ = type + +from ansible.plugins.connection.ssh import Connection as SSHConnection + +DOCUMENTATION = ''' + connection: packer + short_description: ssh based connections for powershell via packer + description: + - This connection plugin allows ansible to communicate to the target packer machines via ssh based connections for powershell. + author: Packer Community + version_added: na + options: + host: + description: Hostname/ip to connect to. + default: inventory_hostname + vars: + - name: ansible_host + - name: ansible_ssh_host + host_key_checking: + #constant: HOST_KEY_CHECKING + description: Determines if ssh should check host keys + type: boolean + ini: + - section: defaults + key: 'host_key_checking' + env: + - name: ANSIBLE_HOST_KEY_CHECKING + password: + description: Authentication password for the C(remote_user). Can be supplied as CLI option. + vars: + - name: ansible_password + - name: ansible_ssh_pass + ssh_args: + description: Arguments to pass to all ssh cli tools + default: '-C -o ControlMaster=auto -o ControlPersist=60s' + ini: + - section: 'ssh_connection' + key: 'ssh_args' + env: + - name: ANSIBLE_SSH_ARGS + ssh_common_args: + description: Common extra args for all ssh CLI tools + vars: + - name: ansible_ssh_common_args + ssh_executable: + default: ssh + description: + - This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH. + - This option is usually not required, it might be useful when access to system ssh is restricted, + or when using ssh wrappers to connect to remote hosts. + env: [{name: ANSIBLE_SSH_EXECUTABLE}] + ini: + - {key: ssh_executable, section: ssh_connection} + yaml: {key: ssh_connection.ssh_executable} + #const: ANSIBLE_SSH_EXECUTABLE + version_added: "2.2" + scp_extra_args: + description: Extra exclusive to the 'scp' CLI + vars: + - name: ansible_scp_extra_args + sftp_extra_args: + description: Extra exclusive to the 'sftp' CLI + vars: + - name: ansible_sftp_extra_args + ssh_extra_args: + description: Extra exclusive to the 'ssh' CLI + vars: + - name: ansible_ssh_extra_args + retries: + # constant: ANSIBLE_SSH_RETRIES + description: Number of attempts to connect. + default: 3 + type: integer + env: + - name: ANSIBLE_SSH_RETRIES + ini: + - section: connection + key: retries + - section: ssh_connection + key: retries + port: + description: Remote port to connect to. + type: int + default: 22 + ini: + - section: defaults + key: remote_port + env: + - name: ANSIBLE_REMOTE_PORT + vars: + - name: ansible_port + - name: ansible_ssh_port + remote_user: + description: + - User name with which to login to the remote server, normally set by the remote_user keyword. 
+ - If no user is supplied, Ansible will let the ssh client binary choose the user as it normally + ini: + - section: defaults + key: remote_user + env: + - name: ANSIBLE_REMOTE_USER + vars: + - name: ansible_user + - name: ansible_ssh_user + pipelining: + default: ANSIBLE_PIPELINING + description: + - Pipelining reduces the number of SSH operations required to execute a module on the remote server, + by executing many Ansible modules without actual file transfer. + - This can result in a very significant performance improvement when enabled. + - However this conflicts with privilege escalation (become). + For example, when using sudo operations you must first disable 'requiretty' in the sudoers file for the target hosts, + which is why this feature is disabled by default. + env: + - name: ANSIBLE_PIPELINING + #- name: ANSIBLE_SSH_PIPELINING + ini: + - section: defaults + key: pipelining + #- section: ssh_connection + # key: pipelining + type: boolean + vars: + - name: ansible_pipelining + - name: ansible_ssh_pipelining + private_key_file: + description: + - Path to private key file to use for authentication + ini: + - section: defaults + key: private_key_file + env: + - name: ANSIBLE_PRIVATE_KEY_FILE + vars: + - name: ansible_private_key_file + - name: ansible_ssh_private_key_file + control_path: + default: null + description: + - This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution. + - Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting. + env: + - name: ANSIBLE_SSH_CONTROL_PATH + ini: + - key: control_path + section: ssh_connection + control_path_dir: + default: ~/.ansible/cp + description: + - This sets the directory to use for ssh control path if the control path setting is null. + - Also, provides the `%(directory)s` variable for the control path setting. 
+ env: + - name: ANSIBLE_SSH_CONTROL_PATH_DIR + ini: + - section: ssh_connection + key: control_path_dir + sftp_batch_mode: + default: True + description: 'TODO: write it' + env: [{name: ANSIBLE_SFTP_BATCH_MODE}] + ini: + - {key: sftp_batch_mode, section: ssh_connection} + type: boolean + scp_if_ssh: + default: smart + description: + - "Prefered method to use when transfering files over ssh" + - When set to smart, Ansible will try them until one succeeds or they all fail + - If set to True, it will force 'scp', if False it will use 'sftp' + env: [{name: ANSIBLE_SCP_IF_SSH}] + ini: + - {key: scp_if_ssh, section: ssh_connection} +''' + +class Connection(SSHConnection): + ''' ssh based connections for powershell via packer''' + + transport = 'packer' + has_pipelining = True + become_methods = [] + allow_executable = False + module_implementation_preferences = ('.ps1', '') + + def __init__(self, *args, **kwargs): + super(Connection, self).__init__(*args, **kwargs) \ No newline at end of file diff --git a/examples/ansible/connection-plugin/2.5.x/packer.py b/examples/ansible/connection-plugin/2.5.x/packer.py new file mode 100644 index 000000000..e133aeea4 --- /dev/null +++ b/examples/ansible/connection-plugin/2.5.x/packer.py @@ -0,0 +1,204 @@ +from __future__ import (absolute_import, division, print_function) +__metaclass__ = type + +from ansible.plugins.connection.ssh import Connection as SSHConnection + +DOCUMENTATION = ''' + connection: packer + short_description: ssh based connections for powershell via packer + description: + - This connection plugin allows ansible to communicate to the target packer machines via ssh based connections for powershell. + author: Packer Community + version_added: na + options: + host: + description: Hostname/ip to connect to. + default: inventory_hostname + vars: + - name: ansible_host + - name: ansible_ssh_host + host_key_checking: + description: Determines if ssh should check host keys + type: boolean + ini: + - section: defaults + key: 'host_key_checking' + - section: ssh_connection + key: 'host_key_checking' + version_added: '2.5' + env: + - name: ANSIBLE_HOST_KEY_CHECKING + - name: ANSIBLE_SSH_HOST_KEY_CHECKING + version_added: '2.5' + vars: + - name: ansible_host_key_checking + version_added: '2.5' + - name: ansible_ssh_host_key_checking + version_added: '2.5' + password: + description: Authentication password for the C(remote_user). Can be supplied as CLI option. + vars: + - name: ansible_password + - name: ansible_ssh_pass + ssh_args: + description: Arguments to pass to all ssh cli tools + default: '-C -o ControlMaster=auto -o ControlPersist=60s' + ini: + - section: 'ssh_connection' + key: 'ssh_args' + env: + - name: ANSIBLE_SSH_ARGS + ssh_common_args: + description: Common extra args for all ssh CLI tools + vars: + - name: ansible_ssh_common_args + ssh_executable: + default: ssh + description: + - This defines the location of the ssh binary. It defaults to `ssh` which will use the first ssh binary available in $PATH. + - This option is usually not required, it might be useful when access to system ssh is restricted, + or when using ssh wrappers to connect to remote hosts. 
+ env: [{name: ANSIBLE_SSH_EXECUTABLE}] + ini: + - {key: ssh_executable, section: ssh_connection} + yaml: {key: ssh_connection.ssh_executable} + #const: ANSIBLE_SSH_EXECUTABLE + version_added: "2.2" + scp_extra_args: + description: Extra exclusive to the 'scp' CLI + vars: + - name: ansible_scp_extra_args + sftp_extra_args: + description: Extra exclusive to the 'sftp' CLI + vars: + - name: ansible_sftp_extra_args + ssh_extra_args: + description: Extra exclusive to the 'ssh' CLI + vars: + - name: ansible_ssh_extra_args + retries: + # constant: ANSIBLE_SSH_RETRIES + description: Number of attempts to connect. + default: 3 + type: integer + env: + - name: ANSIBLE_SSH_RETRIES + ini: + - section: connection + key: retries + - section: ssh_connection + key: retries + port: + description: Remote port to connect to. + type: int + default: 22 + ini: + - section: defaults + key: remote_port + env: + - name: ANSIBLE_REMOTE_PORT + vars: + - name: ansible_port + - name: ansible_ssh_port + remote_user: + description: + - User name with which to login to the remote server, normally set by the remote_user keyword. + - If no user is supplied, Ansible will let the ssh client binary choose the user as it normally + ini: + - section: defaults + key: remote_user + env: + - name: ANSIBLE_REMOTE_USER + vars: + - name: ansible_user + - name: ansible_ssh_user + pipelining: + default: ANSIBLE_PIPELINING + description: + - Pipelining reduces the number of SSH operations required to execute a module on the remote server, + by executing many Ansible modules without actual file transfer. + - This can result in a very significant performance improvement when enabled. + - However this conflicts with privilege escalation (become). + For example, when using sudo operations you must first disable 'requiretty' in the sudoers file for the target hosts, + which is why this feature is disabled by default. + env: + - name: ANSIBLE_PIPELINING + #- name: ANSIBLE_SSH_PIPELINING + ini: + - section: defaults + key: pipelining + #- section: ssh_connection + # key: pipelining + type: boolean + vars: + - name: ansible_pipelining + - name: ansible_ssh_pipelining + private_key_file: + description: + - Path to private key file to use for authentication + ini: + - section: defaults + key: private_key_file + env: + - name: ANSIBLE_PRIVATE_KEY_FILE + vars: + - name: ansible_private_key_file + - name: ansible_ssh_private_key_file + control_path: + default: null + description: + - This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution. + - Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting. + env: + - name: ANSIBLE_SSH_CONTROL_PATH + ini: + - key: control_path + section: ssh_connection + control_path_dir: + default: ~/.ansible/cp + description: + - This sets the directory to use for ssh control path if the control path setting is null. + - Also, provides the `%(directory)s` variable for the control path setting. 
+ env: + - name: ANSIBLE_SSH_CONTROL_PATH_DIR + ini: + - section: ssh_connection + key: control_path_dir + sftp_batch_mode: + default: True + description: 'TODO: write it' + env: [{name: ANSIBLE_SFTP_BATCH_MODE}] + ini: + - {key: sftp_batch_mode, section: ssh_connection} + type: boolean + scp_if_ssh: + default: smart + description: + - "Prefered method to use when transfering files over ssh" + - When set to smart, Ansible will try them until one succeeds or they all fail + - If set to True, it will force 'scp', if False it will use 'sftp' + env: [{name: ANSIBLE_SCP_IF_SSH}] + ini: + - {key: scp_if_ssh, section: ssh_connection} + use_tty: + version_added: '2.5' + default: True + description: add -tt to ssh commands to force tty allocation + env: [{name: ANSIBLE_SSH_USETTY}] + ini: + - {key: usetty, section: ssh_connection} + type: boolean + yaml: {key: connection.usetty} +''' + +class Connection(SSHConnection): + ''' ssh based connections for powershell via packer''' + + transport = 'packer' + has_pipelining = True + become_methods = [] + allow_executable = False + module_implementation_preferences = ('.ps1', '') + + def __init__(self, *args, **kwargs): + super(Connection, self).__init__(*args, **kwargs) \ No newline at end of file diff --git a/examples/ansible/connection-plugin/2.6.x/packer.py b/examples/ansible/connection-plugin/2.6.x/packer.py new file mode 100644 index 000000000..8ef8c0548 --- /dev/null +++ b/examples/ansible/connection-plugin/2.6.x/packer.py @@ -0,0 +1,218 @@ +from __future__ import (absolute_import, division, print_function) +__metaclass__ = type + +from ansible.plugins.connection.ssh import Connection as SSHConnection + +DOCUMENTATION = ''' + connection: packer + short_description: ssh based connections for powershell via packer + description: + - This connection plugin allows ansible to communicate to the target packer machines via ssh based connections for powershell. + author: Packer Community + version_added: na + options: + host: + description: Hostname/ip to connect to. + default: inventory_hostname + vars: + - name: ansible_host + - name: ansible_ssh_host + host_key_checking: + description: Determines if ssh should check host keys + type: boolean + ini: + - section: defaults + key: 'host_key_checking' + - section: ssh_connection + key: 'host_key_checking' + version_added: '2.5' + env: + - name: ANSIBLE_HOST_KEY_CHECKING + - name: ANSIBLE_SSH_HOST_KEY_CHECKING + version_added: '2.5' + vars: + - name: ansible_host_key_checking + version_added: '2.5' + - name: ansible_ssh_host_key_checking + version_added: '2.5' + password: + description: Authentication password for the C(remote_user). Can be supplied as CLI option. + vars: + - name: ansible_password + - name: ansible_ssh_pass + ssh_args: + description: Arguments to pass to all ssh cli tools + default: '-C -o ControlMaster=auto -o ControlPersist=60s' + ini: + - section: 'ssh_connection' + key: 'ssh_args' + env: + - name: ANSIBLE_SSH_ARGS + ssh_common_args: + description: Common extra args for all ssh CLI tools + vars: + - name: ansible_ssh_common_args + ssh_executable: + default: ssh + description: + - This defines the location of the ssh binary. It defaults to ``ssh`` which will use the first ssh binary available in $PATH. + - This option is usually not required, it might be useful when access to system ssh is restricted, + or when using ssh wrappers to connect to remote hosts. 
+ env: [{name: ANSIBLE_SSH_EXECUTABLE}] + ini: + - {key: ssh_executable, section: ssh_connection} + #const: ANSIBLE_SSH_EXECUTABLE + version_added: "2.2" + sftp_executable: + default: sftp + description: + - This defines the location of the sftp binary. It defaults to ``sftp`` which will use the first binary available in $PATH. + env: [{name: ANSIBLE_SFTP_EXECUTABLE}] + ini: + - {key: sftp_executable, section: ssh_connection} + version_added: "2.6" + scp_executable: + default: scp + description: + - This defines the location of the scp binary. It defaults to `scp` which will use the first binary available in $PATH. + env: [{name: ANSIBLE_SCP_EXECUTABLE}] + ini: + - {key: scp_executable, section: ssh_connection} + version_added: "2.6" + scp_extra_args: + description: Extra exclusive to the ``scp`` CLI + vars: + - name: ansible_scp_extra_args + sftp_extra_args: + description: Extra exclusive to the ``sftp`` CLI + vars: + - name: ansible_sftp_extra_args + ssh_extra_args: + description: Extra exclusive to the 'ssh' CLI + vars: + - name: ansible_ssh_extra_args + retries: + # constant: ANSIBLE_SSH_RETRIES + description: Number of attempts to connect. + default: 3 + type: integer + env: + - name: ANSIBLE_SSH_RETRIES + ini: + - section: connection + key: retries + - section: ssh_connection + key: retries + port: + description: Remote port to connect to. + type: int + default: 22 + ini: + - section: defaults + key: remote_port + env: + - name: ANSIBLE_REMOTE_PORT + vars: + - name: ansible_port + - name: ansible_ssh_port + remote_user: + description: + - User name with which to login to the remote server, normally set by the remote_user keyword. + - If no user is supplied, Ansible will let the ssh client binary choose the user as it normally + ini: + - section: defaults + key: remote_user + env: + - name: ANSIBLE_REMOTE_USER + vars: + - name: ansible_user + - name: ansible_ssh_user + pipelining: + default: ANSIBLE_PIPELINING + description: + - Pipelining reduces the number of SSH operations required to execute a module on the remote server, + by executing many Ansible modules without actual file transfer. + - This can result in a very significant performance improvement when enabled. + - However this conflicts with privilege escalation (become). + For example, when using sudo operations you must first disable 'requiretty' in the sudoers file for the target hosts, + which is why this feature is disabled by default. + env: + - name: ANSIBLE_PIPELINING + #- name: ANSIBLE_SSH_PIPELINING + ini: + - section: defaults + key: pipelining + #- section: ssh_connection + # key: pipelining + type: boolean + vars: + - name: ansible_pipelining + - name: ansible_ssh_pipelining + private_key_file: + description: + - Path to private key file to use for authentication + ini: + - section: defaults + key: private_key_file + env: + - name: ANSIBLE_PRIVATE_KEY_FILE + vars: + - name: ansible_private_key_file + - name: ansible_ssh_private_key_file + control_path: + description: + - This is the location to save ssh's ControlPath sockets, it uses ssh's variable substitution. + - Since 2.3, if null, ansible will generate a unique hash. Use `%(directory)s` to indicate where to use the control dir path setting. + env: + - name: ANSIBLE_SSH_CONTROL_PATH + ini: + - key: control_path + section: ssh_connection + control_path_dir: + default: ~/.ansible/cp + description: + - This sets the directory to use for ssh control path if the control path setting is null. 
+ - Also, provides the `%(directory)s` variable for the control path setting. + env: + - name: ANSIBLE_SSH_CONTROL_PATH_DIR + ini: + - section: ssh_connection + key: control_path_dir + sftp_batch_mode: + default: 'yes' + description: 'TODO: write it' + env: [{name: ANSIBLE_SFTP_BATCH_MODE}] + ini: + - {key: sftp_batch_mode, section: ssh_connection} + type: bool + scp_if_ssh: + default: smart + description: + - "Prefered method to use when transfering files over ssh" + - When set to smart, Ansible will try them until one succeeds or they all fail + - If set to True, it will force 'scp', if False it will use 'sftp' + env: [{name: ANSIBLE_SCP_IF_SSH}] + ini: + - {key: scp_if_ssh, section: ssh_connection} + use_tty: + version_added: '2.5' + default: 'yes' + description: add -tt to ssh commands to force tty allocation + env: [{name: ANSIBLE_SSH_USETTY}] + ini: + - {key: usetty, section: ssh_connection} + type: bool + yaml: {key: connection.usetty} +''' + +class Connection(SSHConnection): + ''' ssh based connections for powershell via packer''' + + transport = 'packer' + has_pipelining = True + become_methods = [] + allow_executable = False + module_implementation_preferences = ('.ps1', '') + + def __init__(self, *args, **kwargs): + super(Connection, self).__init__(*args, **kwargs) \ No newline at end of file diff --git a/examples/azure/centos.json b/examples/azure/centos.json index 1b0e7a633..9570ce7e9 100644 --- a/examples/azure/centos.json +++ b/examples/azure/centos.json @@ -2,25 +2,21 @@ "variables": { "client_id": "{{env `ARM_CLIENT_ID`}}", "client_secret": "{{env `ARM_CLIENT_SECRET`}}", - "resource_group": "{{env `ARM_RESOURCE_GROUP`}}", - "storage_account": "{{env `ARM_STORAGE_ACCOUNT`}}", "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}", "tenant_id": "{{env `ARM_TENANT_ID`}}", "ssh_user": "centos", - "ssh_pass": null + "ssh_pass": "{{env `ARM_SSH_PASS`}}" }, "builders": [{ "type": "azure-arm", "client_id": "{{user `client_id`}}", "client_secret": "{{user `client_secret`}}", - "resource_group_name": "{{user `resource_group`}}", - "storage_account": "{{user `storage_account`}}", "subscription_id": "{{user `subscription_id`}}", "tenant_id": "{{user `tenant_id`}}", - "capture_container_name": "images", - "capture_name_prefix": "packer", + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyCentOSImage", "ssh_username": "{{user `ssh_user`}}", "ssh_password": "{{user `ssh_pass`}}", diff --git a/examples/azure/debian.json b/examples/azure/debian.json index 112ca4993..f1d97daad 100644 --- a/examples/azure/debian.json +++ b/examples/azure/debian.json @@ -2,11 +2,9 @@ "variables": { "client_id": "{{env `ARM_CLIENT_ID`}}", "client_secret": "{{env `ARM_CLIENT_SECRET`}}", - "resource_group": "{{env `ARM_RESOURCE_GROUP`}}", - "storage_account": "{{env `ARM_STORAGE_ACCOUNT`}}", "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}", "ssh_user": "packer", - "ssh_pass": null + "ssh_pass": "{{env `ARM_SSH_PASS`}}" }, "builders": [{ "type": "azure-arm", @@ -17,8 +15,8 @@ "storage_account": "{{user `storage_account`}}", "subscription_id": "{{user `subscription_id`}}", - "capture_container_name": "images", - "capture_name_prefix": "packer", + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyDebianOSImage", "ssh_username": "{{user `ssh_user`}}", "ssh_password": "{{user `ssh_pass`}}", @@ -26,7 +24,7 @@ "os_type": "Linux", "image_publisher": "credativ", "image_offer": "Debian", - "image_sku": "8", + "image_sku": "9", "ssh_pty": "true", "location": "South 
Central US", diff --git a/examples/azure/freebsd.json b/examples/azure/freebsd.json new file mode 100644 index 000000000..eedad52ea --- /dev/null +++ b/examples/azure/freebsd.json @@ -0,0 +1,43 @@ +{ + "variables": { + "client_id": "{{env `ARM_CLIENT_ID`}}", + "client_secret": "{{env `ARM_CLIENT_SECRET`}}", + "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}", + "ssh_user": "packer", + "ssh_pass": "{{env `ARM_SSH_PASS`}}" + }, + "builders": [{ + "type": "azure-arm", + + "client_id": "{{user `client_id`}}", + "client_secret": "{{user `client_secret`}}", + "subscription_id": "{{user `subscription_id`}}", + + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyFreeBsdOSImage", + + "ssh_username": "{{user `ssh_user`}}", + "ssh_password": "{{user `ssh_pass`}}", + + "os_type": "Linux", + "image_publisher": "MicrosoftOSTC", + "image_offer": "FreeBSD", + "image_sku": "11.1", + "image_version": "latest", + + "location": "West US 2", + "vm_size": "Standard_DS2_v2" + }], + "provisioners": [{ + "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'", + "inline": [ + "env ASSUME_ALWAYS_YES=YES pkg bootstrap", + "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync" + ], + "inline_shebang": "/bin/sh -x", + "type": "shell", + "skip_clean": "true", + "expect_disconnect": "true" + }] + +} diff --git a/examples/azure/ubuntu_managed_image.json b/examples/azure/marketplace_plan_info.json similarity index 72% rename from examples/azure/ubuntu_managed_image.json rename to examples/azure/marketplace_plan_info.json index ddb4a49c9..5cf58e280 100644 --- a/examples/azure/ubuntu_managed_image.json +++ b/examples/azure/marketplace_plan_info.json @@ -11,19 +11,25 @@ "client_secret": "{{user `client_secret`}}", "subscription_id": "{{user `subscription_id`}}", - "os_type": "Linux", - "image_publisher": "Canonical", - "image_offer": "UbuntuServer", - "image_sku": "16.04-LTS", + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyMarketplaceOSImage", - "managed_image_resource_group_name": "PackerImages", - "managed_image_name": "MyUbuntuImage", + "os_type": "Linux", + "image_publisher": "bitnami", + "image_offer": "rabbitmq", + "image_sku": "rabbitmq", "azure_tags": { "dept": "engineering", "task": "image deployment" }, + "plan_info": { + "plan_name": "rabbitmq", + "plan_product": "rabbitmq", + "plan_publisher": "bitnami" + }, + "location": "West US", "vm_size": "Standard_DS2_v2" }], diff --git a/examples/azure/rhel.json b/examples/azure/rhel.json index 7c4160268..895d4079b 100644 --- a/examples/azure/rhel.json +++ b/examples/azure/rhel.json @@ -2,25 +2,22 @@ "variables": { "client_id": "{{env `ARM_CLIENT_ID`}}", "client_secret": "{{env `ARM_CLIENT_SECRET`}}", - "resource_group": "{{env `ARM_RESOURCE_GROUP`}}", - "storage_account": "{{env `ARM_STORAGE_ACCOUNT`}}", "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}", "tenant_id": "{{env `ARM_TENANT_ID`}}", "ssh_user": "centos", - "ssh_pass": null + "ssh_pass": "{{env `ARM_SSH_PASS`}}" }, "builders": [{ "type": "azure-arm", "client_id": "{{user `client_id`}}", "client_secret": "{{user `client_secret`}}", - "resource_group_name": "{{user `resource_group`}}", - "storage_account": "{{user `storage_account`}}", "subscription_id": "{{user `subscription_id`}}", "tenant_id": "{{user `tenant_id`}}", - "capture_container_name": "images", - "capture_name_prefix": "packer", + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyRedHatOSImage", + "ssh_username": 
"{{user `ssh_user`}}", "ssh_password": "{{user `ssh_pass`}}", diff --git a/examples/azure/suse.json b/examples/azure/suse.json index c99b364ec..4c6ff8e9e 100644 --- a/examples/azure/suse.json +++ b/examples/azure/suse.json @@ -2,23 +2,19 @@ "variables": { "client_id": "{{env `ARM_CLIENT_ID`}}", "client_secret": "{{env `ARM_CLIENT_SECRET`}}", - "resource_group": "{{env `ARM_RESOURCE_GROUP`}}", - "storage_account": "{{env `ARM_STORAGE_ACCOUNT`}}", "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}", "ssh_user": "packer", - "ssh_pass": null + "ssh_pass": "{{env `ARM_SSH_PASS`}}" }, "builders": [{ "type": "azure-arm", "client_id": "{{user `client_id`}}", "client_secret": "{{user `client_secret`}}", - "resource_group_name": "{{user `resource_group`}}", - "storage_account": "{{user `storage_account`}}", "subscription_id": "{{user `subscription_id`}}", - - "capture_container_name": "images", - "capture_name_prefix": "packer", + + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MySuseOSImage", "ssh_username": "{{user `ssh_user`}}", "ssh_password": "{{user `ssh_pass`}}", @@ -26,7 +22,7 @@ "os_type": "Linux", "image_publisher": "SUSE", "image_offer": "SLES", - "image_sku": "12-SP2", + "image_sku": "12-SP3", "ssh_pty": "true", "location": "South Central US", @@ -40,6 +36,7 @@ "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync" ], "inline_shebang": "/bin/sh -x", + "skip_clean": true, "type": "shell" }] } diff --git a/examples/azure/ubuntu.json b/examples/azure/ubuntu.json index a9c657d06..eefe62299 100644 --- a/examples/azure/ubuntu.json +++ b/examples/azure/ubuntu.json @@ -2,8 +2,6 @@ "variables": { "client_id": "{{env `ARM_CLIENT_ID`}}", "client_secret": "{{env `ARM_CLIENT_SECRET`}}", - "resource_group": "{{env `ARM_RESOURCE_GROUP`}}", - "storage_account": "{{env `ARM_STORAGE_ACCOUNT`}}", "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}" }, "builders": [{ @@ -11,24 +9,22 @@ "client_id": "{{user `client_id`}}", "client_secret": "{{user `client_secret`}}", - "resource_group_name": "{{user `resource_group`}}", - "storage_account": "{{user `storage_account`}}", "subscription_id": "{{user `subscription_id`}}", - "capture_container_name": "images", - "capture_name_prefix": "packer", - "os_type": "Linux", "image_publisher": "Canonical", "image_offer": "UbuntuServer", "image_sku": "16.04-LTS", + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyUbuntuImage", + "azure_tags": { "dept": "engineering", "task": "image deployment" }, - "location": "West US", + "location": "South Central US", "vm_size": "Standard_DS2_v2" }], "provisioners": [{ diff --git a/examples/azure/windows.json b/examples/azure/windows.json index 23bd66b26..e0b03b7ae 100644 --- a/examples/azure/windows.json +++ b/examples/azure/windows.json @@ -2,23 +2,17 @@ "variables": { "client_id": "{{env `ARM_CLIENT_ID`}}", "client_secret": "{{env `ARM_CLIENT_SECRET`}}", - "resource_group": "{{env `ARM_RESOURCE_GROUP`}}", - "storage_account": "{{env `ARM_STORAGE_ACCOUNT`}}", - "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}", - "object_id": "{{env `ARM_OBJECT_ID`}}" + "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}" }, "builders": [{ "type": "azure-arm", "client_id": "{{user `client_id`}}", "client_secret": "{{user `client_secret`}}", - "resource_group_name": "{{user `resource_group`}}", - "storage_account": "{{user `storage_account`}}", "subscription_id": "{{user `subscription_id`}}", - "object_id": "{{user `object_id`}}", - "capture_container_name": "images", - 
"capture_name_prefix": "packer", + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyWindowsOSImage", "os_type": "Windows", "image_publisher": "MicrosoftWindowsServer", @@ -31,7 +25,7 @@ "winrm_timeout": "3m", "winrm_username": "packer", - "location": "West US", + "location": "South Central US", "vm_size": "Standard_DS2_v2" }], "provisioners": [{ diff --git a/examples/azure/windows_quickstart.json b/examples/azure/windows_quickstart.json new file mode 100644 index 000000000..3f7a1e9bb --- /dev/null +++ b/examples/azure/windows_quickstart.json @@ -0,0 +1,36 @@ +{ + "variables": { + "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}" + }, + "builders": [{ + "type": "azure-arm", + + "subscription_id": "{{user `subscription_id`}}", + + "managed_image_resource_group_name": "packertest", + "managed_image_name": "MyWindowsOSImage", + + "os_type": "Windows", + "image_publisher": "MicrosoftWindowsServer", + "image_offer": "WindowsServer", + "image_sku": "2012-R2-Datacenter", + + "communicator": "winrm", + "winrm_use_ssl": "true", + "winrm_insecure": "true", + "winrm_timeout": "3m", + "winrm_username": "packer", + + "location": "South Central US", + "vm_size": "Standard_DS2_v2" + }], + "provisioners": [{ + "type": "powershell", + "inline": [ + "if( Test-Path $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml ){ rm $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml -Force}", + "& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit", + "while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }" + ] + }] +} + diff --git a/fix/fixer.go b/fix/fixer.go index 5ca0b3a18..ae8aac40d 100644 --- a/fix/fixer.go +++ b/fix/fixer.go @@ -33,7 +33,9 @@ func init() { "manifest-filename": new(FixerManifestFilename), "amazon-shutdown_behavior": new(FixerAmazonShutdownBehavior), "amazon-enhanced-networking": new(FixerAmazonEnhancedNetworking), + "amazon-private-ip": new(FixerAmazonPrivateIP), "docker-email": new(FixerDockerEmail), + "powershell-escapes": new(FixerPowerShellEscapes), } FixerOrder = []string{ @@ -50,6 +52,8 @@ func init() { "manifest-filename", "amazon-shutdown_behavior", "amazon-enhanced-networking", + "amazon-private-ip", "docker-email", + "powershell-escapes", } } diff --git a/fix/fixer_amazon_enhanced_networking.go b/fix/fixer_amazon_enhanced_networking.go index 4c9330ebe..7188da7f7 100644 --- a/fix/fixer_amazon_enhanced_networking.go +++ b/fix/fixer_amazon_enhanced_networking.go @@ -1,6 +1,8 @@ package fix import ( + "strings" + "github.com/mitchellh/mapstructure" ) @@ -22,6 +24,19 @@ func (FixerAmazonEnhancedNetworking) Fix(input map[string]interface{}) (map[stri // Go through each builder and replace the enhanced_networking if we can for _, builder := range tpl.Builders { + builderTypeRaw, ok := builder["type"] + if !ok { + continue + } + + builderType, ok := builderTypeRaw.(string) + if !ok { + continue + } + + if !strings.HasPrefix(builderType, "amazon-") { + continue + } enhancedNetworkingRaw, ok := builder["enhanced_networking"] if !ok { continue diff --git a/fix/fixer_amazon_enhanced_networking_test.go b/fix/fixer_amazon_enhanced_networking_test.go index f8b5be178..78b58d582 100644 --- a/fix/fixer_amazon_enhanced_networking_test.go +++ b/fix/fixer_amazon_enhanced_networking_test.go @@ -17,12 +17,12 @@ func 
TestFixerAmazonEnhancedNetworking(t *testing.T) { // Attach field == false { Input: map[string]interface{}{ - "type": "ebs", + "type": "amazon-ebs", "enhanced_networking": false, }, Expected: map[string]interface{}{ - "type": "ebs", + "type": "amazon-ebs", "ena_support": false, }, }, @@ -30,12 +30,12 @@ func TestFixerAmazonEnhancedNetworking(t *testing.T) { // Attach field == true { Input: map[string]interface{}{ - "type": "ebs", + "type": "amazon-ebs", "enhanced_networking": true, }, Expected: map[string]interface{}{ - "type": "ebs", + "type": "amazon-ebs", "ena_support": true, }, }, diff --git a/fix/fixer_amazon_private_ip.go b/fix/fixer_amazon_private_ip.go new file mode 100644 index 000000000..7b0189ca5 --- /dev/null +++ b/fix/fixer_amazon_private_ip.go @@ -0,0 +1,75 @@ +package fix + +import ( + "log" + "strconv" + "strings" + + "github.com/mitchellh/mapstructure" +) + +// FixerAmazonPrivateIP is a Fixer that replaces instances of `"private_ip": +// true` with `"ssh_interface": "private_ip"` +type FixerAmazonPrivateIP struct{} + +func (FixerAmazonPrivateIP) Fix(input map[string]interface{}) (map[string]interface{}, error) { + type template struct { + Builders []map[string]interface{} + } + + // Decode the input into our structure, if we can + var tpl template + if err := mapstructure.Decode(input, &tpl); err != nil { + return nil, err + } + + // Go through each builder and replace the enhanced_networking if we can + for _, builder := range tpl.Builders { + builderTypeRaw, ok := builder["type"] + if !ok { + continue + } + + builderType, ok := builderTypeRaw.(string) + if !ok { + continue + } + + if !strings.HasPrefix(builderType, "amazon-") { + continue + } + + // if ssh_interface already set, do nothing + if _, ok := builder["ssh_interface"]; ok { + continue + } + + privateIPi, ok := builder["ssh_private_ip"] + if !ok { + continue + } + privateIP, ok := privateIPi.(bool) + if !ok { + var err error + privateIP, err = strconv.ParseBool(privateIPi.(string)) + if err != nil { + log.Fatalf("Wrong type for ssh_private_ip") + continue + } + } + + delete(builder, "ssh_private_ip") + if privateIP { + builder["ssh_interface"] = "private_ip" + } else { + builder["ssh_interface"] = "public_ip" + } + } + + input["builders"] = tpl.Builders + return input, nil +} + +func (FixerAmazonPrivateIP) Synopsis() string { + return "Replaces `\"ssh_private_ip\": true` in amazon builders with `\"ssh_interface\": \"private_ip\"`" +} diff --git a/fix/fixer_amazon_private_ip_test.go b/fix/fixer_amazon_private_ip_test.go new file mode 100644 index 000000000..71961f2f8 --- /dev/null +++ b/fix/fixer_amazon_private_ip_test.go @@ -0,0 +1,77 @@ +package fix + +import ( + "reflect" + "testing" +) + +func TestFixerAmazonPrivateIP_Impl(t *testing.T) { + var _ Fixer = new(FixerAmazonPrivateIP) +} + +func TestFixerAmazonPrivateIP(t *testing.T) { + cases := []struct { + Input map[string]interface{} + Expected map[string]interface{} + }{ + // Attach field == false + { + Input: map[string]interface{}{ + "type": "amazon-ebs", + "ssh_private_ip": false, + }, + + Expected: map[string]interface{}{ + "type": "amazon-ebs", + "ssh_interface": "public_ip", + }, + }, + + // Attach field == true + { + Input: map[string]interface{}{ + "type": "amazon-ebs", + "ssh_private_ip": true, + }, + + Expected: map[string]interface{}{ + "type": "amazon-ebs", + "ssh_interface": "private_ip", + }, + }, + + // ssh_private_ip specified as string + { + Input: map[string]interface{}{ + "type": "amazon-ebs", + "ssh_private_ip": "true", + }, + + Expected: 
map[string]interface{}{ + "type": "amazon-ebs", + "ssh_interface": "private_ip", + }, + }, + } + + for _, tc := range cases { + var f FixerAmazonPrivateIP + + input := map[string]interface{}{ + "builders": []map[string]interface{}{tc.Input}, + } + + expected := map[string]interface{}{ + "builders": []map[string]interface{}{tc.Expected}, + } + + output, err := f.Fix(input) + if err != nil { + t.Fatalf("err: %s", err) + } + + if !reflect.DeepEqual(output, expected) { + t.Fatalf("unexpected: %#v\nexpected: %#v\n", output, expected) + } + } +} diff --git a/fix/fixer_amazon_shutdown_behavior.go b/fix/fixer_amazon_shutdown_behavior.go index 2e47c5592..686ef5119 100644 --- a/fix/fixer_amazon_shutdown_behavior.go +++ b/fix/fixer_amazon_shutdown_behavior.go @@ -1,6 +1,8 @@ package fix import ( + "strings" + "github.com/mitchellh/mapstructure" ) @@ -31,7 +33,7 @@ func (FixerAmazonShutdownBehavior) Fix(input map[string]interface{}) (map[string continue } - if builderType != "amazon-ebs" && builderType != "amazon-ebsvolume" && builderType != "amazon-instance" && builderType != "amazon-chroot" { + if !strings.HasPrefix(builderType, "amazon-") { continue } diff --git a/fix/fixer_createtime.go b/fix/fixer_createtime.go index 81d23bfe0..67bd12284 100644 --- a/fix/fixer_createtime.go +++ b/fix/fixer_createtime.go @@ -1,8 +1,9 @@ package fix import ( - "github.com/mitchellh/mapstructure" "regexp" + + "github.com/mitchellh/mapstructure" ) // FixerCreateTime is a Fixer that replaces the ".CreateTime" template diff --git a/fix/fixer_powershell_escapes.go b/fix/fixer_powershell_escapes.go new file mode 100644 index 000000000..e9e3b0033 --- /dev/null +++ b/fix/fixer_powershell_escapes.go @@ -0,0 +1,74 @@ +package fix + +import ( + "strings" + + "github.com/mitchellh/mapstructure" +) + +// FixerPowerShellEscapes removes the PowerShell escape character from user +// environment variables and elevated username and password strings +type FixerPowerShellEscapes struct{} + +func (FixerPowerShellEscapes) Fix(input map[string]interface{}) (map[string]interface{}, error) { + type template struct { + Provisioners []interface{} + } + + var psUnescape = strings.NewReplacer( + "`$", "$", + "`\"", "\"", + "``", "`", + "`'", "'", + ) + + // Decode the input into our structure, if we can + var tpl template + if err := mapstructure.WeakDecode(input, &tpl); err != nil { + return nil, err + } + + for i, raw := range tpl.Provisioners { + var provisioners map[string]interface{} + if err := mapstructure.Decode(raw, &provisioners); err != nil { + // Ignore errors, could be a non-map + continue + } + + if ok := provisioners["type"] == "powershell"; !ok { + continue + } + + if _, ok := provisioners["elevated_user"]; ok { + provisioners["elevated_user"] = psUnescape.Replace(provisioners["elevated_user"].(string)) + } + if _, ok := provisioners["elevated_password"]; ok { + provisioners["elevated_password"] = psUnescape.Replace(provisioners["elevated_password"].(string)) + } + if raw, ok := provisioners["environment_vars"]; ok { + var env_vars []string + if err := mapstructure.Decode(raw, &env_vars); err != nil { + continue + } + env_vars_unescaped := make([]interface{}, len(env_vars)) + for j, env_var := range env_vars { + env_vars_unescaped[j] = psUnescape.Replace(env_var) + } + // Replace with unescaped environment variables + provisioners["environment_vars"] = env_vars_unescaped + } + + // Write all changes back to template + tpl.Provisioners[i] = provisioners + } + + if len(tpl.Provisioners) > 0 { + input["provisioners"] = 
tpl.Provisioners + } + + return input, nil +} + +func (FixerPowerShellEscapes) Synopsis() string { + return `Removes PowerShell escapes from user env vars and elevated username and password strings` +} diff --git a/fix/fixer_pp_vagrant_override.go b/fix/fixer_pp_vagrant_override.go index 36aadfa46..a0c530728 100644 --- a/fix/fixer_pp_vagrant_override.go +++ b/fix/fixer_pp_vagrant_override.go @@ -4,7 +4,7 @@ import ( "github.com/mitchellh/mapstructure" ) -// FixerVagrantPPOvveride is a Fixer that replaces the provider-specific +// FixerVagrantPPOverride is a Fixer that replaces the provider-specific // overrides for the Vagrant post-processor with the new style introduced // as part of Packer 0.5.0. type FixerVagrantPPOverride struct{} diff --git a/helper/common/shared_state.go b/helper/common/shared_state.go new file mode 100644 index 000000000..87e111cfb --- /dev/null +++ b/helper/common/shared_state.go @@ -0,0 +1,31 @@ +package common + +import ( + "fmt" + "io/ioutil" + "os" + "path/filepath" +) + +// Used to set variables which we need to access later in the build, where +// state bag and config information won't work +func sharedStateFilename(suffix string, buildName string) string { + uuid := os.Getenv("PACKER_RUN_UUID") + return filepath.Join(os.TempDir(), fmt.Sprintf("packer-%s-%s-%s", uuid, suffix, buildName)) +} + +func SetSharedState(key string, value string, buildName string) error { + return ioutil.WriteFile(sharedStateFilename(key, buildName), []byte(value), 0600) +} + +func RetrieveSharedState(key string, buildName string) (string, error) { + value, err := ioutil.ReadFile(sharedStateFilename(key, buildName)) + if err != nil { + return "", err + } + return string(value), nil +} + +func RemoveSharedStateFile(key string, buildName string) { + os.Remove(sharedStateFilename(key, buildName)) +} diff --git a/helper/communicator/config.go b/helper/communicator/config.go index 5ecbdfc65..f2664b192 100644 --- a/helper/communicator/config.go +++ b/helper/communicator/config.go @@ -37,6 +37,8 @@ type Config struct { SSHProxyPort int `mapstructure:"ssh_proxy_port"` SSHProxyUsername string `mapstructure:"ssh_proxy_username"` SSHProxyPassword string `mapstructure:"ssh_proxy_password"` + SSHKeepAliveInterval time.Duration `mapstructure:"ssh_keep_alive_interval"` + SSHReadWriteTimeout time.Duration `mapstructure:"ssh_read_write_timeout"` // WinRM WinRMUser string `mapstructure:"winrm_username"` @@ -131,6 +133,10 @@ func (c *Config) prepareSSH(ctx *interpolate.Context) []error { c.SSHTimeout = 5 * time.Minute } + if c.SSHKeepAliveInterval == 0 { + c.SSHKeepAliveInterval = 5 * time.Second + } + if c.SSHHandshakeAttempts == 0 { c.SSHHandshakeAttempts = 10 } diff --git a/helper/communicator/step_connect.go b/helper/communicator/step_connect.go index 7d738e4cc..273935710 100644 --- a/helper/communicator/step_connect.go +++ b/helper/communicator/step_connect.go @@ -1,12 +1,13 @@ package communicator import ( + "context" "fmt" "log" "github.com/hashicorp/packer/communicator/none" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" gossh "golang.org/x/crypto/ssh" ) @@ -43,7 +44,7 @@ type StepConnect struct { substep multistep.Step } -func (s *StepConnect) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConnect) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { typeMap := map[string]multistep.Step{ "none": nil, "ssh": &StepConnectSSH{ @@ -84,7 +85,7 @@ func (s *StepConnect) Run(state 
multistep.StateBag) multistep.StepAction { } s.substep = step - return s.substep.Run(state) + return s.substep.Run(ctx, state) } func (s *StepConnect) Cleanup(state multistep.StateBag) { diff --git a/helper/communicator/step_connect_ssh.go b/helper/communicator/step_connect_ssh.go index 9179f57e8..18fd699c7 100644 --- a/helper/communicator/step_connect_ssh.go +++ b/helper/communicator/step_connect_ssh.go @@ -1,6 +1,7 @@ package communicator import ( + "context" "errors" "fmt" "log" @@ -11,8 +12,8 @@ import ( commonssh "github.com/hashicorp/packer/common/ssh" "github.com/hashicorp/packer/communicator/ssh" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" gossh "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" "golang.org/x/net/proxy" @@ -29,7 +30,7 @@ type StepConnectSSH struct { SSHPort func(multistep.StateBag) (int, error) } -func (s *StepConnectSSH) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConnectSSH) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) var comm packer.Communicator @@ -181,6 +182,8 @@ func (s *StepConnectSSH) waitForSSH(state multistep.StateBag, cancel <-chan stru Pty: s.Config.SSHPty, DisableAgentForwarding: s.Config.SSHDisableAgentForwarding, UseSftp: s.Config.SSHFileTransferMethod == "sftp", + KeepAliveInterval: s.Config.SSHKeepAliveInterval, + Timeout: s.Config.SSHReadWriteTimeout, } log.Println("[INFO] Attempting SSH connection...") diff --git a/helper/communicator/step_connect_test.go b/helper/communicator/step_connect_test.go index d83625dd2..fb61f6463 100644 --- a/helper/communicator/step_connect_test.go +++ b/helper/communicator/step_connect_test.go @@ -2,10 +2,11 @@ package communicator import ( "bytes" + "context" "testing" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) func TestStepConnect_impl(t *testing.T) { @@ -23,7 +24,7 @@ func TestStepConnect_none(t *testing.T) { defer step.Cleanup(state) // run the step - if action := step.Run(state); action != multistep.ActionContinue { + if action := step.Run(context.Background(), state); action != multistep.ActionContinue { t.Fatalf("bad action: %#v", action) } } diff --git a/helper/communicator/step_connect_winrm.go b/helper/communicator/step_connect_winrm.go index 14f5cfc91..06e6236f8 100644 --- a/helper/communicator/step_connect_winrm.go +++ b/helper/communicator/step_connect_winrm.go @@ -2,6 +2,7 @@ package communicator import ( "bytes" + "context" "errors" "fmt" "io" @@ -10,9 +11,9 @@ import ( "time" "github.com/hashicorp/packer/communicator/winrm" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" winrmcmd "github.com/masterzen/winrm" - "github.com/mitchellh/multistep" ) // StepConnectWinRM is a multistep Step implementation that waits for WinRM @@ -32,7 +33,7 @@ type StepConnectWinRM struct { WinRMPort func(multistep.StateBag) (int, error) } -func (s *StepConnectWinRM) Run(state multistep.StateBag) multistep.StepAction { +func (s *StepConnectWinRM) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) var comm packer.Communicator diff --git a/vendor/github.com/mitchellh/multistep/LICENSE.md b/helper/multistep/LICENSE.md similarity index 100% rename from vendor/github.com/mitchellh/multistep/LICENSE.md rename to helper/multistep/LICENSE.md diff --git 
a/vendor/github.com/mitchellh/multistep/basic_runner.go b/helper/multistep/basic_runner.go similarity index 87% rename from vendor/github.com/mitchellh/multistep/basic_runner.go rename to helper/multistep/basic_runner.go index 35692a743..00723725d 100644 --- a/vendor/github.com/mitchellh/multistep/basic_runner.go +++ b/helper/multistep/basic_runner.go @@ -1,6 +1,7 @@ package multistep import ( + "context" "sync" "sync/atomic" ) @@ -19,28 +20,29 @@ type BasicRunner struct { // modified. Steps []Step - cancelCh chan struct{} - doneCh chan struct{} - state runState - l sync.Mutex + cancel context.CancelFunc + doneCh chan struct{} + state runState + l sync.Mutex } func (b *BasicRunner) Run(state StateBag) { + ctx, cancel := context.WithCancel(context.Background()) + b.l.Lock() if b.state != stateIdle { panic("already running") } - cancelCh := make(chan struct{}) doneCh := make(chan struct{}) - b.cancelCh = cancelCh + b.cancel = cancel b.doneCh = doneCh b.state = stateRunning b.l.Unlock() defer func() { b.l.Lock() - b.cancelCh = nil + b.cancel = nil b.doneCh = nil b.state = stateIdle close(doneCh) @@ -51,7 +53,7 @@ func (b *BasicRunner) Run(state StateBag) { // as quickly as possible into the state bag to mark it. go func() { select { - case <-cancelCh: + case <-ctx.Done(): // Flag cancel and wait for finish state.Put(StateCancelled, true) <-doneCh @@ -67,7 +69,7 @@ func (b *BasicRunner) Run(state StateBag) { break } - action := step.Run(state) + action := step.Run(ctx, state) defer step.Cleanup(state) if _, ok := state.GetOk(StateCancelled); ok { @@ -90,7 +92,7 @@ func (b *BasicRunner) Cancel() { return case stateRunning: // Running, so mark that we cancelled and set the state - close(b.cancelCh) + b.cancel() b.state = stateCancelling fallthrough case stateCancelling: diff --git a/helper/multistep/basic_runner_test.go b/helper/multistep/basic_runner_test.go new file mode 100644 index 000000000..433f288d2 --- /dev/null +++ b/helper/multistep/basic_runner_test.go @@ -0,0 +1,170 @@ +package multistep + +import ( + "reflect" + "testing" + "time" +) + +func TestBasicRunner_ImplRunner(t *testing.T) { + var raw interface{} + raw = &BasicRunner{} + if _, ok := raw.(Runner); !ok { + t.Fatalf("BasicRunner must be a Runner") + } +} + +func TestBasicRunner_Run(t *testing.T) { + data := new(BasicStateBag) + stepA := &TestStepAcc{Data: "a"} + stepB := &TestStepAcc{Data: "b"} + + r := &BasicRunner{Steps: []Step{stepA, stepB}} + r.Run(data) + + // Test run data + expected := []string{"a", "b"} + results := data.Get("data").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // Test cleanup data + expected = []string{"b", "a"} + results = data.Get("cleanup").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // Test no halted or cancelled + if _, ok := data.GetOk(StateCancelled); ok { + t.Errorf("cancelled should not be in state bag") + } + + if _, ok := data.GetOk(StateHalted); ok { + t.Errorf("halted should not be in state bag") + } +} + +func TestBasicRunner_Run_Halt(t *testing.T) { + data := new(BasicStateBag) + stepA := &TestStepAcc{Data: "a"} + stepB := &TestStepAcc{Data: "b", Halt: true} + stepC := &TestStepAcc{Data: "c"} + + r := &BasicRunner{Steps: []Step{stepA, stepB, stepC}} + r.Run(data) + + // Test run data + expected := []string{"a", "b"} + results := data.Get("data").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // 
Test cleanup data + expected = []string{"b", "a"} + results = data.Get("cleanup").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // Test that it says it is halted + halted := data.Get(StateHalted).(bool) + if !halted { + t.Errorf("not halted") + } +} + +// confirm that can't run twice +func TestBasicRunner_Run_Run(t *testing.T) { + defer func() { + recover() + }() + ch := make(chan chan bool) + stepInt := &TestStepSync{ch} + stepWait := &TestStepWaitForever{} + r := &BasicRunner{Steps: []Step{stepInt, stepWait}} + + go r.Run(new(BasicStateBag)) + // wait until really running + <-ch + + // now try to run aain + r.Run(new(BasicStateBag)) + + // should not get here in nominal codepath + t.Errorf("Was able to run an already running BasicRunner") +} + +func TestBasicRunner_Cancel(t *testing.T) { + ch := make(chan chan bool) + data := new(BasicStateBag) + stepA := &TestStepAcc{Data: "a"} + stepB := &TestStepAcc{Data: "b"} + stepInt := &TestStepSync{ch} + stepC := &TestStepAcc{Data: "c"} + + r := &BasicRunner{Steps: []Step{stepA, stepB, stepInt, stepC}} + + // cancelling an idle Runner is a no-op + r.Cancel() + + go r.Run(data) + + // Wait until we reach the sync point + responseCh := <-ch + + // Cancel then continue chain + cancelCh := make(chan bool) + go func() { + r.Cancel() + cancelCh <- true + }() + + for { + if _, ok := data.GetOk(StateCancelled); ok { + responseCh <- true + break + } + + time.Sleep(10 * time.Millisecond) + } + + <-cancelCh + + // Test run data + expected := []string{"a", "b"} + results := data.Get("data").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // Test cleanup data + expected = []string{"b", "a"} + results = data.Get("cleanup").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // Test that it says it is cancelled + cancelled := data.Get(StateCancelled).(bool) + if !cancelled { + t.Errorf("not cancelled") + } +} + +func TestBasicRunner_Cancel_Special(t *testing.T) { + stepOne := &TestStepInjectCancel{} + stepTwo := &TestStepInjectCancel{} + r := &BasicRunner{Steps: []Step{stepOne, stepTwo}} + + state := new(BasicStateBag) + state.Put("runner", r) + r.Run(state) + + // test that state contains cancelled + if _, ok := state.GetOk(StateCancelled); !ok { + t.Errorf("cancelled should be in state bag") + } +} diff --git a/vendor/github.com/mitchellh/multistep/debug_runner.go b/helper/multistep/debug_runner.go similarity index 94% rename from vendor/github.com/mitchellh/multistep/debug_runner.go rename to helper/multistep/debug_runner.go index 882009494..a6760c5f1 100644 --- a/vendor/github.com/mitchellh/multistep/debug_runner.go +++ b/helper/multistep/debug_runner.go @@ -1,12 +1,13 @@ package multistep import ( + "context" "fmt" "reflect" "sync" ) -// DebugLocation is the location where the pause is occuring when debugging +// DebugLocation is the location where the pause is occurring when debugging // a step sequence. "DebugLocationAfterRun" is after the run of the named // step. "DebugLocationBeforeCleanup" is before the cleanup of the named // step. 
@@ -113,7 +114,7 @@ type debugStepPause struct { PauseFn DebugPauseFn } -func (s *debugStepPause) Run(state StateBag) StepAction { +func (s *debugStepPause) Run(_ context.Context, state StateBag) StepAction { s.PauseFn(DebugLocationAfterRun, s.StepName, state) return ActionContinue } diff --git a/helper/multistep/debug_runner_test.go b/helper/multistep/debug_runner_test.go new file mode 100644 index 000000000..aae905d43 --- /dev/null +++ b/helper/multistep/debug_runner_test.go @@ -0,0 +1,174 @@ +package multistep + +import ( + "os" + "reflect" + "testing" + "time" +) + +func TestDebugRunner_Impl(t *testing.T) { + var raw interface{} + raw = &DebugRunner{} + if _, ok := raw.(Runner); !ok { + t.Fatal("DebugRunner must be a runner.") + } +} + +func TestDebugRunner_Run(t *testing.T) { + data := new(BasicStateBag) + stepA := &TestStepAcc{Data: "a"} + stepB := &TestStepAcc{Data: "b"} + + pauseFn := func(loc DebugLocation, name string, state StateBag) { + key := "data" + if loc == DebugLocationBeforeCleanup { + key = "cleanup" + } + + if _, ok := state.GetOk(key); !ok { + state.Put(key, make([]string, 0, 5)) + } + + data := state.Get(key).([]string) + state.Put(key, append(data, name)) + } + + r := &DebugRunner{ + Steps: []Step{stepA, stepB}, + PauseFn: pauseFn, + } + + r.Run(data) + + // Test data + expected := []string{"a", "TestStepAcc", "b", "TestStepAcc"} + results := data.Get("data").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected results: %#v", results) + } + + // Test cleanup + expected = []string{"TestStepAcc", "b", "TestStepAcc", "a"} + results = data.Get("cleanup").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected results: %#v", results) + } +} + +// confirm that can't run twice +func TestDebugRunner_Run_Run(t *testing.T) { + defer func() { + recover() + }() + ch := make(chan chan bool) + stepInt := &TestStepSync{ch} + stepWait := &TestStepWaitForever{} + r := &DebugRunner{Steps: []Step{stepInt, stepWait}} + + go r.Run(new(BasicStateBag)) + // wait until really running + <-ch + + // now try to run aain + r.Run(new(BasicStateBag)) + + // should not get here in nominal codepath + t.Errorf("Was able to run an already running DebugRunner") +} + +func TestDebugRunner_Cancel(t *testing.T) { + ch := make(chan chan bool) + data := new(BasicStateBag) + stepA := &TestStepAcc{Data: "a"} + stepB := &TestStepAcc{Data: "b"} + stepInt := &TestStepSync{ch} + stepC := &TestStepAcc{Data: "c"} + + r := &DebugRunner{} + r.Steps = []Step{stepA, stepB, stepInt, stepC} + + // cancelling an idle Runner is a no-op + r.Cancel() + + go r.Run(data) + + // Wait until we reach the sync point + responseCh := <-ch + + // Cancel then continue chain + cancelCh := make(chan bool) + go func() { + r.Cancel() + cancelCh <- true + }() + + for { + if _, ok := data.GetOk(StateCancelled); ok { + responseCh <- true + break + } + + time.Sleep(10 * time.Millisecond) + } + + <-cancelCh + + // Test run data + expected := []string{"a", "b"} + results := data.Get("data").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // Test cleanup data + expected = []string{"b", "a"} + results = data.Get("cleanup").([]string) + if !reflect.DeepEqual(results, expected) { + t.Errorf("unexpected result: %#v", results) + } + + // Test that it says it is cancelled + cancelled := data.Get(StateCancelled).(bool) + if !cancelled { + t.Errorf("not cancelled") + } +} + +func TestDebugPauseDefault(t *testing.T) { + + // Create a pipe pair 
so that writes/reads are blocked until we do it + r, w, err := os.Pipe() + if err != nil { + t.Fatalf("err: %s", err) + } + + // Set stdin so we can control it + oldStdin := os.Stdin + os.Stdin = r + defer func() { os.Stdin = oldStdin }() + + // Start pausing + complete := make(chan bool, 1) + go func() { + dr := &DebugRunner{Steps: []Step{ + &TestStepAcc{Data: "a"}, + }} + dr.Run(new(BasicStateBag)) + complete <- true + }() + + select { + case <-complete: + t.Fatal("shouldn't have completed") + case <-time.After(100 * time.Millisecond): + } + + w.Write([]byte("\n\n")) + + select { + case <-complete: + case <-time.After(100 * time.Millisecond): + t.Fatal("didn't complete") + } +} diff --git a/vendor/github.com/mitchellh/multistep/README.md b/helper/multistep/doc.go similarity index 82% rename from vendor/github.com/mitchellh/multistep/README.md rename to helper/multistep/doc.go index 9ceae012a..17570cda9 100644 --- a/vendor/github.com/mitchellh/multistep/README.md +++ b/helper/multistep/doc.go @@ -1,5 +1,4 @@ -# multistep - +/* multistep is a Go library for building up complex actions using discrete, individual "steps." These steps are strung together and run in sequence to achieve a more complex goal. The runner handles cleanup, cancelling, etc. @@ -13,13 +12,13 @@ which is passed between steps by the runner. ```go type stepAdd struct{} -func (s *stepAdd) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepAdd) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction { // Read our value and assert that it is they type we want value := state.Get("value").(int) fmt.Printf("Value is %d\n", value) - // Store some state back - state.Put("value", value + 1) + // Store some state back + state.Put("value", value + 1) return multistep.ActionContinue } @@ -35,7 +34,7 @@ Make a runner and call your array of Steps. func main() { // Our "bag of state" that we read the value from state := new(multistep.BasicStateBag) - state.Put("value", 0) + state.Put("value", 0) steps := []multistep.Step{ &stepAdd{}, @@ -46,7 +45,7 @@ func main() { runner := &multistep.BasicRunner{Steps: steps} // Executes the steps - runner.Run(state) + runner.Run(context.Background(), state) } ``` @@ -57,3 +56,6 @@ Value is 0 Value is 1 Value is 2 ``` +*/ + +package multistep diff --git a/vendor/github.com/mitchellh/multistep/multistep.go b/helper/multistep/multistep.go similarity index 79% rename from vendor/github.com/mitchellh/multistep/multistep.go rename to helper/multistep/multistep.go index feef1406f..39c44ed0a 100644 --- a/vendor/github.com/mitchellh/multistep/multistep.go +++ b/helper/multistep/multistep.go @@ -1,7 +1,9 @@ -// multistep is a library for bulding up complex actions using individual, +// multistep is a library for building up complex actions using individual, // discrete steps. package multistep +import "context" + // A StepAction determines the next step to take regarding multi-step actions. type StepAction uint @@ -20,13 +22,14 @@ const StateHalted = "halted" // Step is a single step that is part of a potentially large sequence // of other steps, responsible for performing some specific action. type Step interface { - // Run is called to perform the action. The parameter is a "state bag" - // of untyped things. Please be very careful about type-checking the + // Run is called to perform the action. The passed through context will be + // cancelled when the runner is cancelled. The second parameter is a "state + // bag" of untyped things. 
Please be very careful about type-checking the // items in this bag. // // The return value determines whether multi-step sequences continue // or should halt. - Run(StateBag) StepAction + Run(context.Context, StateBag) StepAction // Cleanup is called in reverse order of the steps that have run // and allow steps to clean up after themselves. Do not assume if this diff --git a/helper/multistep/multistep_test.go b/helper/multistep/multistep_test.go new file mode 100644 index 000000000..3ced60d83 --- /dev/null +++ b/helper/multistep/multistep_test.go @@ -0,0 +1,75 @@ +package multistep + +import "context" + +// A step for testing that accumulates data into a string slice in the +// the state bag. It always uses the "data" key in the state bag, and will +// initialize it. +type TestStepAcc struct { + // The data inserted into the state bag. + Data string + + // If true, it will halt at the step when it is run + Halt bool +} + +// A step that syncs by sending a channel and expecting a response. +type TestStepSync struct { + Ch chan chan bool +} + +// A step that sleeps forever +type TestStepWaitForever struct { +} + +// A step that manually flips state to cancelling in run +type TestStepInjectCancel struct { +} + +func (s TestStepAcc) Run(_ context.Context, state StateBag) StepAction { + s.insertData(state, "data") + + if s.Halt { + return ActionHalt + } + + return ActionContinue +} + +func (s TestStepAcc) Cleanup(state StateBag) { + s.insertData(state, "cleanup") +} + +func (s TestStepAcc) insertData(state StateBag, key string) { + if _, ok := state.GetOk(key); !ok { + state.Put(key, make([]string, 0, 5)) + } + + data := state.Get(key).([]string) + data = append(data, s.Data) + state.Put(key, data) +} + +func (s TestStepSync) Run(context.Context, StateBag) StepAction { + ch := make(chan bool) + s.Ch <- ch + <-ch + + return ActionContinue +} + +func (s TestStepSync) Cleanup(StateBag) {} + +func (s TestStepWaitForever) Run(context.Context, StateBag) StepAction { + select {} +} + +func (s TestStepWaitForever) Cleanup(StateBag) {} + +func (s TestStepInjectCancel) Run(_ context.Context, state StateBag) StepAction { + r := state.Get("runner").(*BasicRunner) + r.state = stateCancelling + return ActionContinue +} + +func (s TestStepInjectCancel) Cleanup(StateBag) {} diff --git a/vendor/github.com/mitchellh/multistep/statebag.go b/helper/multistep/statebag.go similarity index 92% rename from vendor/github.com/mitchellh/multistep/statebag.go rename to helper/multistep/statebag.go index dab712316..9efb6d998 100644 --- a/vendor/github.com/mitchellh/multistep/statebag.go +++ b/helper/multistep/statebag.go @@ -1,8 +1,8 @@ package multistep -import ( - "sync" -) +import "sync" + +// Add context to state bag to prevent changing step signature // StateBag holds the state that is used by the Runner and Steps. The // StateBag implementation must be safe for concurrent access. 
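The multistep changes above move the library from `vendor/github.com/mitchellh/multistep` into `helper/multistep` and add a `context.Context` parameter to `Step.Run`, with `BasicRunner` cancelling that context instead of closing a channel. As a minimal sketch (not part of the patch; the step name and duration are illustrative), a step written against the new interface can honor runner cancellation like this:

```go
// A minimal sketch of a step written against the new context-aware
// multistep.Step interface. The step name and duration are illustrative.
package example

import (
	"context"
	"time"

	"github.com/hashicorp/packer/helper/multistep"
)

type stepSleep struct {
	Duration time.Duration
}

// Run blocks until the duration elapses or the runner is cancelled; the
// context passed in by the runner is done once Cancel() has been called.
func (s *stepSleep) Run(ctx context.Context, state multistep.StateBag) multistep.StepAction {
	select {
	case <-time.After(s.Duration):
		return multistep.ActionContinue
	case <-ctx.Done():
		return multistep.ActionHalt
	}
}

// Cleanup runs in reverse step order after the sequence finishes or is cancelled.
func (s *stepSleep) Cleanup(state multistep.StateBag) {}
```

Note that `BasicRunner.Run` itself still takes only the state bag and creates the context internally, so callers keep invoking `r.Run(state)` and use `r.Cancel()` to stop a running sequence.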
diff --git a/helper/multistep/statebag_test.go b/helper/multistep/statebag_test.go new file mode 100644 index 000000000..1793ee6e2 --- /dev/null +++ b/helper/multistep/statebag_test.go @@ -0,0 +1,30 @@ +package multistep + +import ( + "testing" +) + +func TestBasicStateBag_ImplRunner(t *testing.T) { + var raw interface{} + raw = &BasicStateBag{} + if _, ok := raw.(StateBag); !ok { + t.Fatalf("must be a StateBag") + } +} + +func TestBasicStateBag(t *testing.T) { + b := new(BasicStateBag) + if b.Get("foo") != nil { + t.Fatalf("bad: %#v", b.Get("foo")) + } + + if _, ok := b.GetOk("foo"); ok { + t.Fatal("should not have foo") + } + + b.Put("foo", "bar") + + if b.Get("foo").(string) != "bar" { + t.Fatalf("bad") + } +} diff --git a/helper/useragent/useragent.go b/helper/useragent/useragent.go new file mode 100644 index 000000000..575d77aba --- /dev/null +++ b/helper/useragent/useragent.go @@ -0,0 +1,35 @@ +package useragent + +import ( + "fmt" + "runtime" + + "github.com/hashicorp/packer/version" +) + +var ( + // projectURL is the project URL. + projectURL = "https://www.packer.io/" + + // rt is the runtime - variable for tests. + rt = runtime.Version() + + // goos is the os - variable for tests. + goos = runtime.GOOS + + // goarch is the architecture - variable for tests. + goarch = runtime.GOARCH + + // versionFunc is the func that returns the current version. This is a + // function to take into account the different build processes and distinguish + // between enterprise and oss builds. + versionFunc = func() string { + return version.FormattedVersion() + } +) + +// String returns the consistent user-agent string for Packer. +func String() string { + return fmt.Sprintf("Packer/%s (+%s; %s; %s/%s)", + versionFunc(), projectURL, rt, goos, goarch) +} diff --git a/helper/useragent/useragent_test.go b/helper/useragent/useragent_test.go new file mode 100644 index 000000000..b792753c6 --- /dev/null +++ b/helper/useragent/useragent_test.go @@ -0,0 +1,20 @@ +package useragent + +import ( + "testing" +) + +func TestUserAgent(t *testing.T) { + projectURL = "https://packer-test.com" + rt = "go5.0" + goos = "linux" + goarch = "amd64" + versionFunc = func() string { return "1.2.3" } + + act := String() + + exp := "Packer/1.2.3 (+https://packer-test.com; go5.0; linux/amd64)" + if exp != act { + t.Errorf("expected %q to be %q", act, exp) + } +} diff --git a/main.go b/main.go index 0f00b9038..2fc626fde 100644 --- a/main.go +++ b/main.go @@ -1,6 +1,7 @@ // This is the main package for the `packer` application. //go:generate go run ./scripts/generate-plugins.go +//go:generate go generate ./common/bootcommand/... 
package main import ( @@ -13,6 +14,7 @@ import ( "path/filepath" "runtime" "sync" + "syscall" "time" "github.com/hashicorp/go-uuid" @@ -88,6 +90,7 @@ func realMain() int { wrapConfig.Writer = io.MultiWriter(logTempFile, logWriter) wrapConfig.Stdout = outW wrapConfig.DetectDuration = 500 * time.Millisecond + wrapConfig.ForwardSignals = []os.Signal{syscall.SIGTERM} exitStatus, err := panicwrap.Wrap(&wrapConfig) if err != nil { fmt.Fprintf(os.Stderr, "Couldn't start Packer: %s", err) @@ -206,11 +209,13 @@ func wrappedMain() int { } cli := &cli.CLI{ - Args: args, - Commands: Commands, - HelpFunc: excludeHelpFunc(Commands, []string{"plugin"}), - HelpWriter: os.Stdout, - Version: version.Version, + Args: args, + Autocomplete: true, + Commands: Commands, + HelpFunc: excludeHelpFunc(Commands, []string{"plugin"}), + HelpWriter: os.Stdout, + Name: "packer", + Version: version.Version, } exitCode, err := cli.Run() @@ -275,7 +280,9 @@ func loadConfig() (*config, error) { } configFilePath := os.Getenv("PACKER_CONFIG") - if configFilePath == "" { + if configFilePath != "" { + log.Printf("'PACKER_CONFIG' set, loading config from environment.") + } else { var err error configFilePath, err = packer.ConfigFile() diff --git a/packer/build.go b/packer/build.go index 996c9a331..bb441b522 100644 --- a/packer/build.go +++ b/packer/build.go @@ -194,11 +194,25 @@ func (b *coreBuild) Run(originalUi Ui, cache Cache) ([]Artifact, error) { // Add a hook for the provisioners if we have provisioners if len(b.provisioners) > 0 { - provisioners := make([]Provisioner, len(b.provisioners)) - provisionerTypes := make([]string, len(b.provisioners)) + hookedProvisioners := make([]*HookedProvisioner, len(b.provisioners)) for i, p := range b.provisioners { - provisioners[i] = p.provisioner - provisionerTypes[i] = p.pType + var pConfig interface{} + if len(p.config) > 0 { + pConfig = p.config[0] + } + if b.debug { + hookedProvisioners[i] = &HookedProvisioner{ + &DebuggedProvisioner{Provisioner: p.provisioner}, + pConfig, + p.pType, + } + } else { + hookedProvisioners[i] = &HookedProvisioner{ + p.provisioner, + pConfig, + p.pType, + } + } } if _, ok := hooks[HookProvision]; !ok { @@ -206,8 +220,7 @@ func (b *coreBuild) Run(originalUi Ui, cache Cache) ([]Artifact, error) { } hooks[HookProvision] = append(hooks[HookProvision], &ProvisionHook{ - Provisioners: provisioners, - ProvisionerTypes: provisionerTypes, + Provisioners: hookedProvisioners, }) } @@ -221,7 +234,7 @@ func (b *coreBuild) Run(originalUi Ui, cache Cache) ([]Artifact, error) { } log.Printf("Running builder: %s", b.builderType) - ts := CheckpointReporter.AddSpan(b.builderType, "builder") + ts := CheckpointReporter.AddSpan(b.builderType, "builder", b.builderConfig) builderArtifact, err := b.builder.Run(builderUi, hook, cache) ts.End(err) if err != nil { @@ -248,7 +261,7 @@ PostProcessorRunSeqLoop: } builderUi.Say(fmt.Sprintf("Running post-processor: %s", corePP.processorType)) - ts := CheckpointReporter.AddSpan(corePP.processorType, "post-processor") + ts := CheckpointReporter.AddSpan(corePP.processorType, "post-processor", corePP.config) artifact, keep, err := corePP.processor.PostProcess(ppUi, priorArtifact) ts.End(err) if err != nil { diff --git a/packer/build_test.go b/packer/build_test.go index 8fe7bc5af..a903c9803 100644 --- a/packer/build_test.go +++ b/packer/build_test.go @@ -102,7 +102,7 @@ func TestBuild_Prepare_Twice(t *testing.T) { build.Prepare() } -func TestBuildPrepare_BuilderWarniings(t *testing.T) { +func TestBuildPrepare_BuilderWarnings(t *testing.T) { 
expected := []string{"foo"} build := testBuild() @@ -191,7 +191,7 @@ func TestBuild_Run(t *testing.T) { t.Fatal("should be called") } - // Verify hooks are disapatchable + // Verify hooks are dispatchable dispatchHook := builder.RunHook dispatchHook.Run("foo", nil, nil, 42) diff --git a/packer/core.go b/packer/core.go index 7da3e4510..c8ac2cfb7 100644 --- a/packer/core.go +++ b/packer/core.go @@ -67,7 +67,7 @@ func NewCore(c *CoreConfig) (*Core, error) { return nil, err } - // Go through and interpolate all the build names. We shuld be able + // Go through and interpolate all the build names. We should be able // to do this at this point with the variables. result.builds = make(map[string]*template.Builder) for _, b := range c.Template.Builders { diff --git a/packer/plugin/builder.go b/packer/plugin/builder.go index 9036cc1b1..3b2debce7 100644 --- a/packer/plugin/builder.go +++ b/packer/plugin/builder.go @@ -1,8 +1,9 @@ package plugin import ( - "github.com/hashicorp/packer/packer" "log" + + "github.com/hashicorp/packer/packer" ) type cmdBuilder struct { diff --git a/packer/plugin/client.go b/packer/plugin/client.go index 67f2ad23e..edcab1d7a 100644 --- a/packer/plugin/client.go +++ b/packer/plugin/client.go @@ -4,8 +4,6 @@ import ( "bufio" "errors" "fmt" - "github.com/hashicorp/packer/packer" - packrpc "github.com/hashicorp/packer/packer/rpc" "io" "io/ioutil" "log" @@ -17,6 +15,9 @@ import ( "sync" "time" "unicode" + + "github.com/hashicorp/packer/packer" + packrpc "github.com/hashicorp/packer/packer/rpc" ) // If this is true, then the "unexpected EOF" panic will not be diff --git a/packer/plugin/hook.go b/packer/plugin/hook.go index 7d9f8f03f..ca28ecbee 100644 --- a/packer/plugin/hook.go +++ b/packer/plugin/hook.go @@ -1,8 +1,9 @@ package plugin import ( - "github.com/hashicorp/packer/packer" "log" + + "github.com/hashicorp/packer/packer" ) type cmdHook struct { diff --git a/packer/plugin/post_processor.go b/packer/plugin/post_processor.go index bb801448a..b65c76623 100644 --- a/packer/plugin/post_processor.go +++ b/packer/plugin/post_processor.go @@ -1,8 +1,9 @@ package plugin import ( - "github.com/hashicorp/packer/packer" "log" + + "github.com/hashicorp/packer/packer" ) type cmdPostProcessor struct { diff --git a/packer/plugin/post_processor_test.go b/packer/plugin/post_processor_test.go index 4c545142d..a1e2f0f65 100644 --- a/packer/plugin/post_processor_test.go +++ b/packer/plugin/post_processor_test.go @@ -1,9 +1,10 @@ package plugin import ( - "github.com/hashicorp/packer/packer" "os/exec" "testing" + + "github.com/hashicorp/packer/packer" ) type helperPostProcessor byte diff --git a/packer/plugin/provisioner.go b/packer/plugin/provisioner.go index d71854a58..91017062b 100644 --- a/packer/plugin/provisioner.go +++ b/packer/plugin/provisioner.go @@ -1,8 +1,9 @@ package plugin import ( - "github.com/hashicorp/packer/packer" "log" + + "github.com/hashicorp/packer/packer" ) type cmdProvisioner struct { diff --git a/packer/plugin/server.go b/packer/plugin/server.go index 667907a5f..470daf950 100644 --- a/packer/plugin/server.go +++ b/packer/plugin/server.go @@ -10,7 +10,6 @@ package plugin import ( "errors" "fmt" - packrpc "github.com/hashicorp/packer/packer/rpc" "io/ioutil" "log" "math/rand" @@ -20,7 +19,10 @@ import ( "runtime" "strconv" "sync/atomic" + "syscall" "time" + + packrpc "github.com/hashicorp/packer/packer/rpc" ) // This is a count of the number of interrupts the process has received. 
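The hunk that follows, together with the `main.go` change that sets `wrapConfig.ForwardSignals = []os.Signal{syscall.SIGTERM}` and the matching `packer/ui.go` change later in this patch, extends interrupt handling so SIGTERM is trapped alongside `os.Interrupt`. A standalone sketch of the pattern (illustrative only, not the plugin server's actual wiring):

```go
// Illustrative sketch of trapping SIGTERM alongside os.Interrupt, the pattern
// this patch applies in packer/plugin/server.go, packer/ui.go and main.go.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ch := make(chan os.Signal, 1)
	// Register for Ctrl-C style interrupts as well as SIGTERM (e.g. sent by
	// `kill` or a process supervisor) so both trigger the same shutdown path.
	signal.Notify(ch, os.Interrupt, syscall.SIGTERM)
	defer signal.Stop(ch)

	sig := <-ch
	fmt.Printf("received %s, shutting down\n", sig)
}
```

Trapping SIGTERM matters when Packer runs under CI systems or process supervisors, which typically stop processes with `kill` rather than an interactive Ctrl-C.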
@@ -87,7 +89,7 @@ func Server() (*packrpc.Server, error) { // Eat the interrupts ch := make(chan os.Signal, 1) - signal.Notify(ch, os.Interrupt) + signal.Notify(ch, os.Interrupt, syscall.SIGTERM) go func() { var count int32 = 0 for { diff --git a/packer/provisioner.go b/packer/provisioner.go index 639e8b1a8..a565c30e2 100644 --- a/packer/provisioner.go +++ b/packer/provisioner.go @@ -2,6 +2,7 @@ package packer import ( "fmt" + "log" "sync" "time" ) @@ -26,12 +27,18 @@ type Provisioner interface { Cancel() } +// A HookedProvisioner represents a provisioner and information describing it +type HookedProvisioner struct { + Provisioner Provisioner + Config interface{} + TypeName string +} + // A Hook implementation that runs the given provisioners. type ProvisionHook struct { // The provisioners to run as part of the hook. These should already // be prepared (by calling Prepare) at some earlier stage. - Provisioners []Provisioner - ProvisionerTypes []string + Provisioners []*HookedProvisioner lock sync.Mutex runningProvisioner Provisioner @@ -58,13 +65,15 @@ func (h *ProvisionHook) Run(name string, ui Ui, comm Communicator, data interfac h.runningProvisioner = nil }() - for i, p := range h.Provisioners { + for _, p := range h.Provisioners { h.lock.Lock() - h.runningProvisioner = p + h.runningProvisioner = p.Provisioner h.lock.Unlock() - ts := CheckpointReporter.AddSpan(h.ProvisionerTypes[i], "provisioner") - err := p.Provision(ui, comm) + ts := CheckpointReporter.AddSpan(p.TypeName, "provisioner", p.Config) + + err := p.Provisioner.Provision(ui, comm) + ts.End(err) if err != nil { return err @@ -74,7 +83,7 @@ func (h *ProvisionHook) Run(name string, ui Ui, comm Communicator, data interfac return nil } -// Cancels the privisioners that are still running. +// Cancels the provisioners that are still running. func (h *ProvisionHook) Cancel() { h.lock.Lock() defer h.lock.Unlock() @@ -160,3 +169,90 @@ func (p *PausedProvisioner) Cancel() { func (p *PausedProvisioner) provision(result chan<- error, ui Ui, comm Communicator) { result <- p.Provisioner.Provision(ui, comm) } + +// DebuggedProvisioner is a Provisioner implementation that waits until a key +// press before the provisioner is actually run. +type DebuggedProvisioner struct { + Provisioner Provisioner + + cancelCh chan struct{} + doneCh chan struct{} + lock sync.Mutex +} + +func (p *DebuggedProvisioner) Prepare(raws ...interface{}) error { + return p.Provisioner.Prepare(raws...) +} + +func (p *DebuggedProvisioner) Provision(ui Ui, comm Communicator) error { + p.lock.Lock() + cancelCh := make(chan struct{}) + p.cancelCh = cancelCh + + // Setup the done channel, which is trigger when we're done + doneCh := make(chan struct{}) + defer close(doneCh) + p.doneCh = doneCh + p.lock.Unlock() + + defer func() { + p.lock.Lock() + defer p.lock.Unlock() + if p.cancelCh == cancelCh { + p.cancelCh = nil + } + if p.doneCh == doneCh { + p.doneCh = nil + } + }() + + // Use a select to determine if we get cancelled during the wait + message := "Pausing before the next provisioner . Press enter to continue." 
+ + result := make(chan string, 1) + go func() { + line, err := ui.Ask(message) + if err != nil { + log.Printf("Error asking for input: %s", err) + } + + result <- line + }() + + select { + case <-result: + case <-cancelCh: + return nil + } + + provDoneCh := make(chan error, 1) + go p.provision(provDoneCh, ui, comm) + + select { + case err := <-provDoneCh: + return err + case <-cancelCh: + p.Provisioner.Cancel() + return <-provDoneCh + } +} + +func (p *DebuggedProvisioner) Cancel() { + var doneCh chan struct{} + + p.lock.Lock() + if p.cancelCh != nil { + close(p.cancelCh) + p.cancelCh = nil + } + if p.doneCh != nil { + doneCh = p.doneCh + } + p.lock.Unlock() + + <-doneCh +} + +func (p *DebuggedProvisioner) provision(result chan<- error, ui Ui, comm Communicator) { + result <- p.Provisioner.Provision(ui, comm) +} diff --git a/packer/provisioner_test.go b/packer/provisioner_test.go index f303bc7d2..4d370ef39 100644 --- a/packer/provisioner_test.go +++ b/packer/provisioner_test.go @@ -23,8 +23,10 @@ func TestProvisionHook(t *testing.T) { var data interface{} = nil hook := &ProvisionHook{ - Provisioners: []Provisioner{pA, pB}, - ProvisionerTypes: []string{"", ""}, + Provisioners: []*HookedProvisioner{ + {pA, nil, ""}, + {pB, nil, ""}, + }, } hook.Run("foo", ui, comm, data) @@ -47,8 +49,10 @@ func TestProvisionHook_nilComm(t *testing.T) { var data interface{} = nil hook := &ProvisionHook{ - Provisioners: []Provisioner{pA, pB}, - ProvisionerTypes: []string{"", ""}, + Provisioners: []*HookedProvisioner{ + {pA, nil, ""}, + {pB, nil, ""}, + }, } err := hook.Run("foo", ui, comm, data) @@ -63,7 +67,7 @@ func TestProvisionHook_cancel(t *testing.T) { p := &MockProvisioner{ ProvFunc: func() error { - time.Sleep(50 * time.Millisecond) + time.Sleep(100 * time.Millisecond) lock.Lock() defer lock.Unlock() @@ -74,8 +78,9 @@ func TestProvisionHook_cancel(t *testing.T) { } hook := &ProvisionHook{ - Provisioners: []Provisioner{p}, - ProvisionerTypes: []string{""}, + Provisioners: []*HookedProvisioner{ + {p, nil, ""}, + }, } finished := make(chan struct{}) @@ -192,3 +197,67 @@ func TestPausedProvisionerCancel(t *testing.T) { t.Fatal("cancel should be called") } } + +func TestDebuggedProvisioner_impl(t *testing.T) { + var _ Provisioner = new(DebuggedProvisioner) +} + +func TestDebuggedProvisionerPrepare(t *testing.T) { + mock := new(MockProvisioner) + prov := &DebuggedProvisioner{ + Provisioner: mock, + } + + prov.Prepare(42) + if !mock.PrepCalled { + t.Fatal("prepare should be called") + } + if mock.PrepConfigs[0] != 42 { + t.Fatal("should have proper configs") + } +} + +func TestDebuggedProvisionerProvision(t *testing.T) { + mock := new(MockProvisioner) + prov := &DebuggedProvisioner{ + Provisioner: mock, + } + + ui := testUi() + comm := new(MockCommunicator) + writeReader(ui, "\n") + prov.Provision(ui, comm) + if !mock.ProvCalled { + t.Fatal("prov should be called") + } + if mock.ProvUi != ui { + t.Fatal("should have proper ui") + } + if mock.ProvCommunicator != comm { + t.Fatal("should have proper comm") + } +} + +func TestDebuggedProvisionerCancel(t *testing.T) { + mock := new(MockProvisioner) + prov := &DebuggedProvisioner{ + Provisioner: mock, + } + + provCh := make(chan struct{}) + mock.ProvFunc = func() error { + close(provCh) + time.Sleep(10 * time.Millisecond) + return nil + } + + // Start provisioning and wait for it to start + go prov.Provision(testUi(), new(MockCommunicator)) + <-provCh + + // Cancel it + prov.Cancel() + if !mock.CancelCalled { + t.Fatal("cancel should be called") + } +} diff --git 
a/packer/rpc/artifact.go b/packer/rpc/artifact.go index 4d01bbabd..b1166afa7 100644 --- a/packer/rpc/artifact.go +++ b/packer/rpc/artifact.go @@ -1,8 +1,9 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "net/rpc" + + "github.com/hashicorp/packer/packer" ) // An implementation of packer.Artifact where the artifact is actually diff --git a/packer/rpc/artifact_test.go b/packer/rpc/artifact_test.go index 6959c3239..2a0e7dab9 100644 --- a/packer/rpc/artifact_test.go +++ b/packer/rpc/artifact_test.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "reflect" "testing" + + "github.com/hashicorp/packer/packer" ) func TestArtifactRPC(t *testing.T) { diff --git a/packer/rpc/builder.go b/packer/rpc/builder.go index ed847fa91..6365b1c23 100644 --- a/packer/rpc/builder.go +++ b/packer/rpc/builder.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "log" "net/rpc" + + "github.com/hashicorp/packer/packer" ) // An implementation of packer.Builder where the builder is actually executed diff --git a/packer/rpc/builder_test.go b/packer/rpc/builder_test.go index 255877352..0165e8d57 100644 --- a/packer/rpc/builder_test.go +++ b/packer/rpc/builder_test.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "reflect" "testing" + + "github.com/hashicorp/packer/packer" ) var testBuilderArtifact = &packer.MockArtifact{} diff --git a/packer/rpc/cache.go b/packer/rpc/cache.go index ce5a231bb..57da43cf3 100644 --- a/packer/rpc/cache.go +++ b/packer/rpc/cache.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "log" "net/rpc" + + "github.com/hashicorp/packer/packer" ) // An implementation of packer.Cache where the cache is actually executed diff --git a/packer/rpc/cache_test.go b/packer/rpc/cache_test.go index 1fa76f0e4..a6a8f2ac4 100644 --- a/packer/rpc/cache_test.go +++ b/packer/rpc/cache_test.go @@ -1,8 +1,9 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) type testCache struct { diff --git a/packer/rpc/communicator_test.go b/packer/rpc/communicator_test.go index 1af9e53c8..9712960dc 100644 --- a/packer/rpc/communicator_test.go +++ b/packer/rpc/communicator_test.go @@ -2,10 +2,11 @@ package rpc import ( "bufio" - "github.com/hashicorp/packer/packer" "io" "reflect" "testing" + + "github.com/hashicorp/packer/packer" ) func TestCommunicatorRPC(t *testing.T) { diff --git a/packer/rpc/hook.go b/packer/rpc/hook.go index 590b0899a..623f86ce7 100644 --- a/packer/rpc/hook.go +++ b/packer/rpc/hook.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "log" "net/rpc" + + "github.com/hashicorp/packer/packer" ) // An implementation of packer.Hook where the hook is actually executed diff --git a/packer/rpc/hook_test.go b/packer/rpc/hook_test.go index 8043e6ad9..d1ae7ea06 100644 --- a/packer/rpc/hook_test.go +++ b/packer/rpc/hook_test.go @@ -1,11 +1,12 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "reflect" "sync" "testing" "time" + + "github.com/hashicorp/packer/packer" ) func TestHookRPC(t *testing.T) { diff --git a/packer/rpc/post_processor_test.go b/packer/rpc/post_processor_test.go index 99258c50e..683b6dc16 100644 --- a/packer/rpc/post_processor_test.go +++ b/packer/rpc/post_processor_test.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "reflect" "testing" + + "github.com/hashicorp/packer/packer" ) var testPostProcessorArtifact = new(packer.MockArtifact) diff 
--git a/packer/rpc/provisioner.go b/packer/rpc/provisioner.go index 80c57e264..d8b3b3f66 100644 --- a/packer/rpc/provisioner.go +++ b/packer/rpc/provisioner.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "log" "net/rpc" + + "github.com/hashicorp/packer/packer" ) // An implementation of packer.Provisioner where the provisioner is actually diff --git a/packer/rpc/provisioner_test.go b/packer/rpc/provisioner_test.go index 448946682..df310369b 100644 --- a/packer/rpc/provisioner_test.go +++ b/packer/rpc/provisioner_test.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "reflect" "testing" + + "github.com/hashicorp/packer/packer" ) func TestProvisionerRPC(t *testing.T) { diff --git a/packer/rpc/ui.go b/packer/rpc/ui.go index 1b84e5cd1..1c6356a65 100644 --- a/packer/rpc/ui.go +++ b/packer/rpc/ui.go @@ -1,9 +1,10 @@ package rpc import ( - "github.com/hashicorp/packer/packer" "log" "net/rpc" + + "github.com/hashicorp/packer/packer" ) // An implementation of packer.Ui where the Ui is actually executed diff --git a/packer/telemetry.go b/packer/telemetry.go index 7224de582..0f493a83a 100644 --- a/packer/telemetry.go +++ b/packer/telemetry.go @@ -5,13 +5,14 @@ import ( "log" "os" "path/filepath" + "sort" "time" checkpoint "github.com/hashicorp/go-checkpoint" packerVersion "github.com/hashicorp/packer/version" ) -const TelemetryVersion string = "beta/packer/4" +const TelemetryVersion string = "beta/packer/5" const TelemetryPanicVersion string = "beta/packer_panic/4" var CheckpointReporter *CheckpointTelemetry @@ -85,15 +86,17 @@ func (c *CheckpointTelemetry) ReportPanic(m string) error { return checkpoint.Report(ctx, panicParams) } -func (c *CheckpointTelemetry) AddSpan(name, pluginType string) *TelemetrySpan { +func (c *CheckpointTelemetry) AddSpan(name, pluginType string, options interface{}) *TelemetrySpan { if c == nil { return nil } log.Printf("[INFO] (telemetry) Starting %s %s", pluginType, name) + ts := &TelemetrySpan{ Name: name, - Type: pluginType, + Options: flattenConfigKeys(options), StartTime: time.Now().UTC(), + Type: pluginType, } c.spans = append(c.spans, ts) return ts @@ -116,8 +119,10 @@ func (c *CheckpointTelemetry) Finalize(command string, errCode int, err error) e extra.Error = err.Error() } params.Payload = extra + // b, _ := json.MarshalIndent(params, "", " ") + // log.Println(string(b)) - ctx, cancel := context.WithTimeout(context.Background(), 1500*time.Millisecond) + ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second) defer cancel() log.Printf("[INFO] (telemetry) Finalizing.") @@ -125,11 +130,12 @@ func (c *CheckpointTelemetry) Finalize(command string, errCode int, err error) e } type TelemetrySpan struct { - Name string `json:"name"` - Type string `json:"type"` - StartTime time.Time `json:"start_time"` EndTime time.Time `json:"end_time"` Error string `json:"error"` + Name string `json:"name"` + Options []string `json:"options"` + StartTime time.Time `json:"start_time"` + Type string `json:"type"` } func (s *TelemetrySpan) End(err error) { @@ -140,6 +146,29 @@ func (s *TelemetrySpan) End(err error) { log.Printf("[INFO] (telemetry) ending %s", s.Name) if err != nil { s.Error = err.Error() - log.Printf("[INFO] (telemetry) found error: %s", err.Error()) } } + +func flattenConfigKeys(options interface{}) []string { + var flatten func(string, interface{}) []string + + flatten = func(prefix string, options interface{}) (strOpts []string) { + if m, ok := options.(map[string]interface{}); ok { + for k, 
v := range m { + if prefix != "" { + k = prefix + "/" + k + } + if n, ok := v.(map[string]interface{}); ok { + strOpts = append(strOpts, flatten(k, n)...) + } else { + strOpts = append(strOpts, k) + } + } + } + return + } + + flattened := flatten("", options) + sort.Strings(flattened) + return flattened +} diff --git a/packer/telemetry_test.go b/packer/telemetry_test.go new file mode 100644 index 000000000..c4192f61f --- /dev/null +++ b/packer/telemetry_test.go @@ -0,0 +1,32 @@ +package packer + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +func TestFlattenConfigKeys_nil(t *testing.T) { + f := flattenConfigKeys(nil) + assert.Zero(t, f, "Expected empty list.") +} + +func TestFlattenConfigKeys_nested(t *testing.T) { + inp := make(map[string]interface{}) + inp["A"] = "" + inp["B"] = []string{} + + c := make(map[string]interface{}) + c["X"] = "" + d := make(map[string]interface{}) + d["a"] = "" + + c["Y"] = d + inp["C"] = c + + assert.Equal(t, + []string{"A", "B", "C/X", "C/Y/a"}, + flattenConfigKeys(inp), + "Input didn't flatten correctly.", + ) +} diff --git a/packer/ui.go b/packer/ui.go index c107c23b8..c16a2eae0 100644 --- a/packer/ui.go +++ b/packer/ui.go @@ -180,7 +180,7 @@ func (rw *BasicUi) Ask(query string) (string, error) { rw.scanner = bufio.NewScanner(rw.Reader) } sigCh := make(chan os.Signal, 1) - signal.Notify(sigCh, os.Interrupt) + signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM) defer signal.Stop(sigCh) log.Printf("ui: ask: %s", query) diff --git a/packer/ui_test.go b/packer/ui_test.go index 76d0cd5b5..a1bb0193c 100644 --- a/packer/ui_test.go +++ b/packer/ui_test.go @@ -99,34 +99,34 @@ func TestColoredUi_noColorEnv(t *testing.T) { func TestTargetedUI(t *testing.T) { bufferUi := testUi() - targettedUi := &TargetedUI{ + targetedUi := &TargetedUI{ Target: "foo", Ui: bufferUi, } var actual, expected string - targettedUi.Say("foo") + targetedUi.Say("foo") actual = readWriter(bufferUi) expected = "==> foo: foo\n" if actual != expected { t.Fatalf("bad: %#v", actual) } - targettedUi.Message("foo") + targetedUi.Message("foo") actual = readWriter(bufferUi) expected = " foo: foo\n" if actual != expected { t.Fatalf("bad: %#v", actual) } - targettedUi.Error("bar") + targetedUi.Error("bar") actual = readErrorWriter(bufferUi) expected = "==> foo: bar\n" if actual != expected { t.Fatalf("bad: %#v", actual) } - targettedUi.Say("foo\nbar") + targetedUi.Say("foo\nbar") actual = readWriter(bufferUi) expected = "==> foo: foo\n==> foo: bar\n" if actual != expected { @@ -215,7 +215,7 @@ func TestBasicUi_Ask(t *testing.T) { } for _, testCase := range testCases { - // Because of the internal bufio we can't eaily reset the input, so create a new one each time + // Because of the internal bufio we can't easily reset the input, so create a new one each time bufferUi := testUi() writeReader(bufferUi, testCase.Input) diff --git a/post-processor/alicloud-import/post-processor.go b/post-processor/alicloud-import/post-processor.go index 2606e5fdf..853e97ef1 100644 --- a/post-processor/alicloud-import/post-processor.go +++ b/post-processor/alicloud-import/post-processor.go @@ -60,7 +60,7 @@ type Config struct { Architecture string `mapstructure:"image_architecture"` Size string `mapstructure:"image_system_size"` Format string `mapstructure:"format"` - AlicloudImageForceDetele bool `mapstructure:"image_force_delete"` + AlicloudImageForceDelete bool `mapstructure:"image_force_delete"` ctx interpolate.Context } @@ -160,7 +160,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact 
packer.Artifact) (pac getEndPonit(p.config.OSSBucket), p.config.OSSKey, err) } - if len(images) > 0 && !p.config.AlicloudImageForceDetele { + if len(images) > 0 && !p.config.AlicloudImageForceDelete { return nil, false, fmt.Errorf("Duplicated image exists, please delete the existing images " + "or set the 'image_force_delete' value as true") } @@ -185,7 +185,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac if err != nil { return nil, false, fmt.Errorf("Failed to upload image %s: %s", source, err) } - if len(images) > 0 && p.config.AlicloudImageForceDetele { + if len(images) > 0 && p.config.AlicloudImageForceDelete { if err = ecsClient.DeleteImage(packercommon.Region(p.config.AlicloudRegion), images[0].ImageId); err != nil { return nil, false, fmt.Errorf("Delete duplicated image %s failed", images[0].ImageName) @@ -214,7 +214,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac if err != nil { e, _ := err.(*packercommon.Error) - if e.Code == "NoSetRoletoECSServiceAcount" { + if e.Code == "NoSetRoletoECSServiceAccount" { ramClient := ram.NewClient(p.config.AlicloudAccessKey, p.config.AlicloudSecretKey) roleResponse, err := ramClient.GetRole(ram.RoleQueryRequest{ RoleName: "AliyunECSImageImportDefaultRole", @@ -282,7 +282,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac imageId, err = ecsClient.ImportImage(imageImageArgs) if err != nil { e, _ = err.(*packercommon.Error) - if e.Code == "NoSetRoletoECSServiceAcount" { + if e.Code == "NoSetRoletoECSServiceAccount" { time.Sleep(5 * time.Second) continue } else if e.Code == "ImageIsImporting" || diff --git a/post-processor/amazon-import/post-processor.go b/post-processor/amazon-import/post-processor.go index f66c2435e..e42fb1ae7 100644 --- a/post-processor/amazon-import/post-processor.go +++ b/post-processor/amazon-import/post-processor.go @@ -34,6 +34,7 @@ type Config struct { Users []string `mapstructure:"ami_users"` Groups []string `mapstructure:"ami_groups"` LicenseType string `mapstructure:"license_type"` + RoleName string `mapstructure:"role_name"` ctx interpolate.Context } @@ -91,7 +92,7 @@ func (p *PostProcessor) Configure(raws ...interface{}) error { return errs } - log.Println(common.ScrubConfig(p.config, p.config.AccessKey, p.config.SecretKey)) + log.Println(common.ScrubConfig(p.config, p.config.AccessKey, p.config.SecretKey, p.config.Token)) return nil } @@ -166,6 +167,10 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac }, } + if p.config.RoleName != "" { + params.SetRoleName(p.config.RoleName) + } + if p.config.LicenseType != "" { ui.Message(fmt.Sprintf("Setting license type to '%s'", p.config.LicenseType)) params.LicenseType = &p.config.LicenseType @@ -181,17 +186,11 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac // Wait for import process to complete, this takes a while ui.Message(fmt.Sprintf("Waiting for task %s to complete (may take a while)", *import_start.ImportTaskId)) - - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending", "active"}, - Refresh: awscommon.ImportImageRefreshFunc(ec2conn, *import_start.ImportTaskId), - Target: "completed", + err = awscommon.WaitUntilImageImported(aws.BackgroundContext(), ec2conn, *import_start.ImportTaskId) + if err != nil { + return nil, false, fmt.Errorf("Import task %s failed with error: %s", *import_start.ImportTaskId, err) } - // Actually do the wait for state change - // We ignore errors out of 
this and check job state in AWS API - awscommon.WaitForState(&stateChange) - // Retrieve what the outcome was for the import task import_result, err := ec2conn.DescribeImportImageTasks(&ec2.DescribeImportImageTasksInput{ ImportTaskIds: []*string{ @@ -230,13 +229,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac ui.Message(fmt.Sprintf("Waiting for AMI rename to complete (may take a while)")) - stateChange := awscommon.StateChangeConf{ - Pending: []string{"pending"}, - Target: "available", - Refresh: awscommon.AMIStateRefreshFunc(ec2conn, *resp.ImageId), - } - - if _, err := awscommon.WaitForState(&stateChange); err != nil { + if err := awscommon.WaitUntilAMIAvailable(aws.BackgroundContext(), ec2conn, *resp.ImageId); err != nil { return nil, false, fmt.Errorf("Error waiting for AMI (%s): %s", *resp.ImageId, err) } @@ -369,7 +362,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac *config.Region: createdami, }, BuilderIdValue: BuilderId, - Conn: ec2conn, + Session: session, } if !p.config.SkipClean { diff --git a/post-processor/atlas/post-processor.go b/post-processor/atlas/post-processor.go index 9b06e61ec..42d45c054 100644 --- a/post-processor/atlas/post-processor.go +++ b/post-processor/atlas/post-processor.go @@ -151,6 +151,16 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac "and see https://www.packer.io/docs/post-processors/vagrant-cloud.html for\n" + "more detail.\n") } + + ui.Message("\n-----------------------------------------------------------------------\n" + + "Deprecation warning: The Packer and Artifact Registry features of Atlas\n" + + "will no longer be actively developed or maintained and will be fully\n" + + "decommissioned. Please see our guide on building immutable\n" + + "infrastructure with Packer on CI/CD for ideas on implementing\n" + + "these features yourself: https://www.packer.io/guides/packer-on-cicd/\n" + + "-----------------------------------------------------------------------\n", + ) + if _, err := p.client.Artifact(p.config.user, p.config.name); err != nil { if err != atlas.ErrNotFound { return nil, false, fmt.Errorf( diff --git a/post-processor/checksum/post-processor_test.go b/post-processor/checksum/post-processor_test.go index fb4d0faf1..6b4420d39 100644 --- a/post-processor/checksum/post-processor_test.go +++ b/post-processor/checksum/post-processor_test.go @@ -35,7 +35,7 @@ func TestChecksumSHA1(t *testing.T) { t.Errorf("Unable to read checksum file: %s", err) } if buf, _ := ioutil.ReadAll(f); !bytes.Equal(buf, []byte("d3486ae9136e7856bc42212385ea797094475802\tpackage.txt\n")) { - t.Errorf("Failed to compate checksum: %s\n%s", buf, "d3486ae9136e7856bc42212385ea797094475802 package.txt") + t.Errorf("Failed to compute checksum: %s\n%s", buf, "d3486ae9136e7856bc42212385ea797094475802 package.txt") } defer f.Close() diff --git a/post-processor/compress/artifact_test.go b/post-processor/compress/artifact_test.go index 2abde78d7..ed39f4a73 100644 --- a/post-processor/compress/artifact_test.go +++ b/post-processor/compress/artifact_test.go @@ -1,8 +1,9 @@ package compress import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestArtifact_ImplementsArtifact(t *testing.T) { diff --git a/post-processor/compress/notes.txt b/post-processor/compress/notes.txt new file mode 100644 index 000000000..51f9f7f0b --- /dev/null +++ b/post-processor/compress/notes.txt @@ -0,0 +1,3 @@ +* 1.9.6 => GNU tar format +* 1.10.3 
w/ patch => GNU tar format +* 1.10.3 w/o patch => Posix tar format diff --git a/post-processor/compress/post-processor.go b/post-processor/compress/post-processor.go index d1bf89f1a..3e089f867 100644 --- a/post-processor/compress/post-processor.go +++ b/post-processor/compress/post-processor.go @@ -303,6 +303,9 @@ func createTarArchive(files []string, output io.WriteCloser) error { return fmt.Errorf("Failed to create tar header for %s: %s", path, err) } + // workaround for archive format on go >=1.10 + setHeaderFormat(header) + if err := archive.WriteHeader(header); err != nil { return fmt.Errorf("Failed to write tar header for %s: %s", path, err) } diff --git a/post-processor/compress/tar_fix.go b/post-processor/compress/tar_fix.go new file mode 100644 index 000000000..af60a2fff --- /dev/null +++ b/post-processor/compress/tar_fix.go @@ -0,0 +1,9 @@ +// +build !go1.10 + +package compress + +import "archive/tar" + +func setHeaderFormat(header *tar.Header) { + // no-op +} diff --git a/post-processor/compress/tar_fix_go110.go b/post-processor/compress/tar_fix_go110.go new file mode 100644 index 000000000..be8456321 --- /dev/null +++ b/post-processor/compress/tar_fix_go110.go @@ -0,0 +1,17 @@ +// +build go1.10 + +package compress + +import ( + "archive/tar" + "time" +) + +func setHeaderFormat(header *tar.Header) { + // We have to set the Format explicitly for the googlecompute-import + // post-processor. Google Cloud only allows importing GNU tar format. + header.Format = tar.FormatGNU + header.AccessTime = time.Time{} + header.ModTime = time.Time{} + header.ChangeTime = time.Time{} +} diff --git a/post-processor/docker-import/post-processor_test.go b/post-processor/docker-import/post-processor_test.go index 4df15486e..efe51c65f 100644 --- a/post-processor/docker-import/post-processor_test.go +++ b/post-processor/docker-import/post-processor_test.go @@ -2,8 +2,9 @@ package dockerimport import ( "bytes" - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { diff --git a/post-processor/docker-push/post-processor.go b/post-processor/docker-push/post-processor.go index f0fee00ca..2ffc3e5a5 100644 --- a/post-processor/docker-push/post-processor.go +++ b/post-processor/docker-push/post-processor.go @@ -12,6 +12,8 @@ import ( "github.com/hashicorp/packer/template/interpolate" ) +const BuilderIdImport = "packer.post-processor.docker-import" + type Config struct { common.PackerConfig `mapstructure:",squash"` @@ -103,5 +105,11 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac return nil, false, err } - return nil, false, nil + artifact = &docker.ImportArtifact{ + BuilderIdValue: BuilderIdImport, + Driver: driver, + IdValue: name, + } + + return artifact, true, nil } diff --git a/post-processor/docker-push/post-processor_test.go b/post-processor/docker-push/post-processor_test.go index a79b4dc21..1eee102cb 100644 --- a/post-processor/docker-push/post-processor_test.go +++ b/post-processor/docker-push/post-processor_test.go @@ -2,10 +2,11 @@ package dockerpush import ( "bytes" + "testing" + "github.com/hashicorp/packer/builder/docker" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/post-processor/docker-import" - "testing" ) func testConfig() map[string]interface{} { @@ -41,11 +42,11 @@ func TestPostProcessor_PostProcess(t *testing.T) { } result, keep, err := p.PostProcess(testUi(), artifact) - if result != nil { - t.Fatal("should be nil") + if _, ok := 
result.(packer.Artifact); !ok { + t.Fatal("should be instance of Artifact") } - if keep { - t.Fatal("should not keep") + if !keep { + t.Fatal("should keep") } if err != nil { t.Fatalf("err: %s", err) @@ -57,6 +58,9 @@ func TestPostProcessor_PostProcess(t *testing.T) { if driver.PushName != "foo/bar" { t.Fatal("bad name") } + if result.Id() != "foo/bar" { + t.Fatal("bad image id") + } } func TestPostProcessor_PostProcess_portInName(t *testing.T) { @@ -68,11 +72,11 @@ func TestPostProcessor_PostProcess_portInName(t *testing.T) { } result, keep, err := p.PostProcess(testUi(), artifact) - if result != nil { - t.Fatal("should be nil") + if _, ok := result.(packer.Artifact); !ok { + t.Fatal("should be instance of Artifact") } - if keep { - t.Fatal("should not keep") + if !keep { + t.Fatal("should keep") } if err != nil { t.Fatalf("err: %s", err) @@ -84,6 +88,9 @@ func TestPostProcessor_PostProcess_portInName(t *testing.T) { if driver.PushName != "localhost:5000/foo/bar" { t.Fatal("bad name") } + if result.Id() != "localhost:5000/foo/bar" { + t.Fatal("bad image id") + } } func TestPostProcessor_PostProcess_tags(t *testing.T) { @@ -95,11 +102,11 @@ func TestPostProcessor_PostProcess_tags(t *testing.T) { } result, keep, err := p.PostProcess(testUi(), artifact) - if result != nil { - t.Fatal("should be nil") + if _, ok := result.(packer.Artifact); !ok { + t.Fatal("should be instance of Artifact") } - if keep { - t.Fatal("should not keep") + if !keep { + t.Fatal("should keep") } if err != nil { t.Fatalf("err: %s", err) @@ -111,4 +118,7 @@ func TestPostProcessor_PostProcess_tags(t *testing.T) { if driver.PushName != "hashicorp/ubuntu:precise" { t.Fatalf("bad name: %s", driver.PushName) } + if result.Id() != "hashicorp/ubuntu:precise" { + t.Fatal("bad image id") + } } diff --git a/post-processor/docker-save/post-processor_test.go b/post-processor/docker-save/post-processor_test.go index 45ef41986..3caf78c35 100644 --- a/post-processor/docker-save/post-processor_test.go +++ b/post-processor/docker-save/post-processor_test.go @@ -2,8 +2,9 @@ package dockersave import ( "bytes" - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { diff --git a/post-processor/googlecompute-export/post-processor.go b/post-processor/googlecompute-export/post-processor.go index 8a5befbc8..eb1fc786f 100644 --- a/post-processor/googlecompute-export/post-processor.go +++ b/post-processor/googlecompute-export/post-processor.go @@ -2,15 +2,14 @@ package googlecomputeexport import ( "fmt" - "io/ioutil" "strings" "github.com/hashicorp/packer/builder/googlecompute" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) type Config struct { @@ -88,13 +87,12 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac exporterConfig.CalcTimeout() // Set up credentials and GCE driver. 
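Editorial note: the compress post-processor change above routes every tar header through a build-tag selected `setHeaderFormat` hook (a no-op before Go 1.10, GNU format with cleared extra timestamps on Go 1.10+) so the googlecompute-import post-processor keeps receiving GNU-format archives. A minimal sketch of where that hook slots into header writing, assuming it sits next to the tar_fix files in the compress package (the helper name is illustrative, not part of this patch):

```
package compress

import (
	"archive/tar"
	"io"
	"os"
)

// writeFileEntry is an illustrative helper (not part of this patch) showing
// where setHeaderFormat is applied: after building the header and before it
// is written, exactly as createTarArchive does above.
func writeFileEntry(archive *tar.Writer, path string) error {
	fi, err := os.Stat(path)
	if err != nil {
		return err
	}
	header, err := tar.FileInfoHeader(fi, "")
	if err != nil {
		return err
	}
	// No-op on go < 1.10; forces tar.FormatGNU (and zeroes the extra
	// timestamps) on go >= 1.10 so GCE image import accepts the archive.
	setHeaderFormat(header)
	if err := archive.WriteHeader(header); err != nil {
		return err
	}
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(archive, f)
	return err
}
```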
- b, err := ioutil.ReadFile(accountKeyFilePath) - if err != nil { - err = fmt.Errorf("Error fetching account credentials: %s", err) - return nil, p.config.KeepOriginalImage, err + if accountKeyFilePath != "" { + err := googlecompute.ProcessAccountFile(&exporterConfig.Account, accountKeyFilePath) + if err != nil { + return nil, p.config.KeepOriginalImage, err + } } - accountKeyContents := string(b) - googlecompute.ProcessAccountFile(&exporterConfig.Account, accountKeyContents) driver, err := googlecompute.NewDriverGCE(ui, projectId, &exporterConfig.Account) if err != nil { return nil, p.config.KeepOriginalImage, err diff --git a/post-processor/googlecompute-import/artifact.go b/post-processor/googlecompute-import/artifact.go new file mode 100644 index 000000000..1a43d7c50 --- /dev/null +++ b/post-processor/googlecompute-import/artifact.go @@ -0,0 +1,37 @@ +package googlecomputeimport + +import ( + "fmt" +) + +const BuilderId = "packer.post-processor.googlecompute-import" + +type Artifact struct { + paths []string +} + +func (*Artifact) BuilderId() string { + return BuilderId +} + +func (*Artifact) Id() string { + return "" +} + +func (a *Artifact) Files() []string { + pathsCopy := make([]string, len(a.paths)) + copy(pathsCopy, a.paths) + return pathsCopy +} + +func (a *Artifact) String() string { + return fmt.Sprintf("Exported artifacts in: %s", a.paths) +} + +func (*Artifact) State(name string) interface{} { + return nil +} + +func (a *Artifact) Destroy() error { + return nil +} diff --git a/post-processor/googlecompute-import/artifact_test.go b/post-processor/googlecompute-import/artifact_test.go new file mode 100644 index 000000000..9d53c0b07 --- /dev/null +++ b/post-processor/googlecompute-import/artifact_test.go @@ -0,0 +1,15 @@ +package googlecomputeimport + +import ( + "testing" + + "github.com/hashicorp/packer/packer" +) + +func TestArtifact_ImplementsArtifact(t *testing.T) { + var raw interface{} + raw = &Artifact{} + if _, ok := raw.(packer.Artifact); !ok { + t.Fatalf("Artifact should be a Artifact") + } +} diff --git a/post-processor/googlecompute-import/post-processor.go b/post-processor/googlecompute-import/post-processor.go new file mode 100644 index 000000000..3e773cbda --- /dev/null +++ b/post-processor/googlecompute-import/post-processor.go @@ -0,0 +1,276 @@ +package googlecomputeimport + +import ( + "fmt" + "net/http" + "os" + "strings" + "time" + + "google.golang.org/api/compute/v1" + "google.golang.org/api/storage/v1" + + "github.com/hashicorp/packer/builder/googlecompute" + "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" + "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/post-processor/compress" + "github.com/hashicorp/packer/template/interpolate" + + "golang.org/x/oauth2" + "golang.org/x/oauth2/jwt" +) + +type Config struct { + common.PackerConfig `mapstructure:",squash"` + + Bucket string `mapstructure:"bucket"` + GCSObjectName string `mapstructure:"gcs_object_name"` + ImageDescription string `mapstructure:"image_description"` + ImageFamily string `mapstructure:"image_family"` + ImageLabels map[string]string `mapstructure:"image_labels"` + ImageName string `mapstructure:"image_name"` + ProjectId string `mapstructure:"project_id"` + AccountFile string `mapstructure:"account_file"` + KeepOriginalImage bool `mapstructure:"keep_input_artifact"` + SkipClean bool `mapstructure:"skip_clean"` + + ctx interpolate.Context +} + +type PostProcessor struct { + config Config + 
runner multistep.Runner +} + +func (p *PostProcessor) Configure(raws ...interface{}) error { + err := config.Decode(&p.config, &config.DecodeOpts{ + Interpolate: true, + InterpolateContext: &p.config.ctx, + InterpolateFilter: &interpolate.RenderFilter{ + Exclude: []string{ + "gcs_object_name", + }, + }, + }, raws...) + if err != nil { + return err + } + + // Set defaults + if p.config.GCSObjectName == "" { + p.config.GCSObjectName = "packer-import-{{timestamp}}.tar.gz" + } + + errs := new(packer.MultiError) + + // Check and render gcs_object_name + if err = interpolate.Validate(p.config.GCSObjectName, &p.config.ctx); err != nil { + errs = packer.MultiErrorAppend( + errs, fmt.Errorf("Error parsing gcs_object_name template: %s", err)) + } + + templates := map[string]*string{ + "bucket": &p.config.Bucket, + "image_name": &p.config.ImageName, + "project_id": &p.config.ProjectId, + "account_file": &p.config.AccountFile, + } + for key, ptr := range templates { + if *ptr == "" { + errs = packer.MultiErrorAppend( + errs, fmt.Errorf("%s must be set", key)) + } + } + + if len(errs.Errors) > 0 { + return errs + } + + return nil +} + +func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) { + var err error + + if artifact.BuilderId() != compress.BuilderId { + err = fmt.Errorf( + "incompatible artifact type: %s\nCan only import from Compress post-processor artifacts", + artifact.BuilderId()) + return nil, false, err + } + + p.config.GCSObjectName, err = interpolate.Render(p.config.GCSObjectName, &p.config.ctx) + if err != nil { + return nil, false, fmt.Errorf("Error rendering gcs_object_name template: %s", err) + } + + rawImageGcsPath, err := UploadToBucket(p.config.AccountFile, ui, artifact, p.config.Bucket, p.config.GCSObjectName) + if err != nil { + return nil, p.config.KeepOriginalImage, err + } + + gceImageArtifact, err := CreateGceImage(p.config.AccountFile, ui, p.config.ProjectId, rawImageGcsPath, p.config.ImageName, p.config.ImageDescription, p.config.ImageFamily, p.config.ImageLabels) + if err != nil { + return nil, p.config.KeepOriginalImage, err + } + + if !p.config.SkipClean { + err = DeleteFromBucket(p.config.AccountFile, ui, p.config.Bucket, p.config.GCSObjectName) + if err != nil { + return nil, p.config.KeepOriginalImage, err + } + } + + return gceImageArtifact, p.config.KeepOriginalImage, nil +} + +func UploadToBucket(accountFile string, ui packer.Ui, artifact packer.Artifact, bucket string, gcsObjectName string) (string, error) { + var client *http.Client + var account googlecompute.AccountFile + + err := googlecompute.ProcessAccountFile(&account, accountFile) + if err != nil { + return "", err + } + + var DriverScopes = []string{"https://www.googleapis.com/auth/devstorage.full_control"} + conf := jwt.Config{ + Email: account.ClientEmail, + PrivateKey: []byte(account.PrivateKey), + Scopes: DriverScopes, + TokenURL: "https://accounts.google.com/o/oauth2/token", + } + + client = conf.Client(oauth2.NoContext) + service, err := storage.New(client) + if err != nil { + return "", err + } + + ui.Say("Looking for tar.gz file in list of artifacts...") + source := "" + for _, path := range artifact.Files() { + ui.Say(fmt.Sprintf("Found artifact %v...", path)) + if strings.HasSuffix(path, ".tar.gz") { + source = path + break + } + } + + if source == "" { + return "", fmt.Errorf("No tar.gz file found in list of articats") + } + + artifactFile, err := os.Open(source) + if err != nil { + err := fmt.Errorf("error opening %v", source) + return "", 
err + } + + ui.Say(fmt.Sprintf("Uploading file %v to GCS bucket %v/%v...", source, bucket, gcsObjectName)) + storageObject, err := service.Objects.Insert(bucket, &storage.Object{Name: gcsObjectName}).Media(artifactFile).Do() + if err != nil { + ui.Say(fmt.Sprintf("Failed to upload: %v", storageObject)) + return "", err + } + + return "https://storage.googleapis.com/" + bucket + "/" + gcsObjectName, nil +} + +func CreateGceImage(accountFile string, ui packer.Ui, project string, rawImageURL string, imageName string, imageDescription string, imageFamily string, imageLabels map[string]string) (packer.Artifact, error) { + var client *http.Client + var account googlecompute.AccountFile + + err := googlecompute.ProcessAccountFile(&account, accountFile) + if err != nil { + return nil, err + } + + var DriverScopes = []string{"https://www.googleapis.com/auth/compute", "https://www.googleapis.com/auth/devstorage.full_control"} + conf := jwt.Config{ + Email: account.ClientEmail, + PrivateKey: []byte(account.PrivateKey), + Scopes: DriverScopes, + TokenURL: "https://accounts.google.com/o/oauth2/token", + } + + client = conf.Client(oauth2.NoContext) + + service, err := compute.New(client) + if err != nil { + return nil, err + } + + gceImage := &compute.Image{ + Name: imageName, + Description: imageDescription, + Family: imageFamily, + Labels: imageLabels, + RawDisk: &compute.ImageRawDisk{Source: rawImageURL}, + SourceType: "RAW", + } + + ui.Say(fmt.Sprintf("Creating GCE image %v...", imageName)) + op, err := service.Images.Insert(project, gceImage).Do() + if err != nil { + ui.Say("Error creating GCE image") + return nil, err + } + + ui.Say("Waiting for GCE image creation operation to complete...") + for op.Status != "DONE" { + op, err = service.GlobalOperations.Get(project, op.Name).Do() + if err != nil { + return nil, err + } + + time.Sleep(5 * time.Second) + } + + // fail if image creation operation has an error + if op.Error != nil { + var imageError string + for _, error := range op.Error.Errors { + imageError += error.Message + } + err = fmt.Errorf("failed to create GCE image %s: %s", imageName, imageError) + return nil, err + } + + return &Artifact{paths: []string{op.TargetLink}}, nil +} + +func DeleteFromBucket(accountFile string, ui packer.Ui, bucket string, gcsObjectName string) error { + var client *http.Client + var account googlecompute.AccountFile + + err := googlecompute.ProcessAccountFile(&account, accountFile) + if err != nil { + return err + } + + var DriverScopes = []string{"https://www.googleapis.com/auth/devstorage.full_control"} + conf := jwt.Config{ + Email: account.ClientEmail, + PrivateKey: []byte(account.PrivateKey), + Scopes: DriverScopes, + TokenURL: "https://accounts.google.com/o/oauth2/token", + } + + client = conf.Client(oauth2.NoContext) + service, err := storage.New(client) + if err != nil { + return err + } + + ui.Say(fmt.Sprintf("Deleting import source from GCS %s/%s...", bucket, gcsObjectName)) + err = service.Objects.Delete(bucket, gcsObjectName).Do() + if err != nil { + ui.Say(fmt.Sprintf("Failed to delete: %v/%v", bucket, gcsObjectName)) + return err + } + + return nil +} diff --git a/post-processor/shell-local/communicator.go b/post-processor/shell-local/communicator.go deleted file mode 100644 index b0bfb008f..000000000 --- a/post-processor/shell-local/communicator.go +++ /dev/null @@ -1,63 +0,0 @@ -package shell_local - -import ( - "fmt" - "io" - "os" - "os/exec" - "syscall" - - "github.com/hashicorp/packer/packer" -) - -type Communicator struct{} - -func (c 
*Communicator) Start(cmd *packer.RemoteCmd) error { - localCmd := exec.Command("sh", "-c", cmd.Command) - localCmd.Stdin = cmd.Stdin - localCmd.Stdout = cmd.Stdout - localCmd.Stderr = cmd.Stderr - - // Start it. If it doesn't work, then error right away. - if err := localCmd.Start(); err != nil { - return err - } - - // We've started successfully. Start a goroutine to wait for - // it to complete and track exit status. - go func() { - var exitStatus int - err := localCmd.Wait() - if err != nil { - if exitErr, ok := err.(*exec.ExitError); ok { - exitStatus = 1 - - // There is no process-independent way to get the REAL - // exit status so we just try to go deeper. - if status, ok := exitErr.Sys().(syscall.WaitStatus); ok { - exitStatus = status.ExitStatus() - } - } - } - - cmd.SetExited(exitStatus) - }() - - return nil -} - -func (c *Communicator) Upload(string, io.Reader, *os.FileInfo) error { - return fmt.Errorf("upload not supported") -} - -func (c *Communicator) UploadDir(string, string, []string) error { - return fmt.Errorf("uploadDir not supported") -} - -func (c *Communicator) Download(string, io.Writer) error { - return fmt.Errorf("download not supported") -} - -func (c *Communicator) DownloadDir(src string, dst string, exclude []string) error { - return fmt.Errorf("downloadDir not supported") -} diff --git a/post-processor/shell-local/post-processor.go b/post-processor/shell-local/post-processor.go index c2bd2d5c0..c761a19f4 100644 --- a/post-processor/shell-local/post-processor.go +++ b/post-processor/shell-local/post-processor.go @@ -1,51 +1,12 @@ package shell_local import ( - "bufio" - "errors" - "fmt" - "io/ioutil" - "log" - "os" - "sort" - "strings" - - "github.com/hashicorp/packer/common" - "github.com/hashicorp/packer/helper/config" + sl "github.com/hashicorp/packer/common/shell-local" "github.com/hashicorp/packer/packer" - "github.com/hashicorp/packer/template/interpolate" ) -type Config struct { - common.PackerConfig `mapstructure:",squash"` - - // An inline script to execute. Multiple strings are all executed - // in the context of a single shell. - Inline []string - - // The shebang value used when running inline scripts. - InlineShebang string `mapstructure:"inline_shebang"` - - // The local path of the shell script to upload and execute. - Script string - - // An array of multiple scripts to run. - Scripts []string - - // An array of environment variables that will be injected before - // your command(s) are executed. - Vars []string `mapstructure:"environment_vars"` - - // The command used to execute the script. The '{{ .Path }}' variable - // should be used to specify where the script goes, {{ .Vars }} - // can be used to inject the environment_vars into the environment. - ExecuteCommand string `mapstructure:"execute_command"` - - ctx interpolate.Context -} - type PostProcessor struct { - config Config + config sl.Config } type ExecuteCommandTemplate struct { @@ -54,179 +15,34 @@ type ExecuteCommandTemplate struct { } func (p *PostProcessor) Configure(raws ...interface{}) error { - err := config.Decode(&p.config, &config.DecodeOpts{ - Interpolate: true, - InterpolateContext: &p.config.ctx, - InterpolateFilter: &interpolate.RenderFilter{ - Exclude: []string{ - "execute_command", - }, - }, - }, raws...) + err := sl.Decode(&p.config, raws...) 
if err != nil { return err } - - if p.config.ExecuteCommand == "" { - p.config.ExecuteCommand = `chmod +x "{{.Script}}"; {{.Vars}} "{{.Script}}"` + if len(p.config.ExecuteCommand) == 1 { + // Backwards compatibility -- before we merged the shell-local + // post-processor and provisioners, the post-processor accepted + // execute_command as a string rather than a slice of strings. It didn't + // have a configurable call to shell program, automatically prepending + // the user-supplied execute_command string with "sh -c". If users are + // still using the old way of defining ExecuteCommand (by supplying a + // single string rather than a slice of strings) then we need to + // prepend this command with the call that the post-processor defaulted + // to before. + p.config.ExecuteCommand = append([]string{"sh", "-c"}, p.config.ExecuteCommand...) } - if p.config.Inline != nil && len(p.config.Inline) == 0 { - p.config.Inline = nil - } - - if p.config.InlineShebang == "" { - p.config.InlineShebang = "/bin/sh -e" - } - - if p.config.Scripts == nil { - p.config.Scripts = make([]string, 0) - } - - if p.config.Vars == nil { - p.config.Vars = make([]string, 0) - } - - var errs *packer.MultiError - if p.config.Script != "" && len(p.config.Scripts) > 0 { - errs = packer.MultiErrorAppend(errs, - errors.New("Only one of script or scripts can be specified.")) - } - - if p.config.Script != "" { - p.config.Scripts = []string{p.config.Script} - } - - if len(p.config.Scripts) == 0 && p.config.Inline == nil { - errs = packer.MultiErrorAppend(errs, - errors.New("Either a script file or inline script must be specified.")) - } else if len(p.config.Scripts) > 0 && p.config.Inline != nil { - errs = packer.MultiErrorAppend(errs, - errors.New("Only a script file or an inline script can be specified, not both.")) - } - - for _, path := range p.config.Scripts { - if _, err := os.Stat(path); err != nil { - errs = packer.MultiErrorAppend(errs, - fmt.Errorf("Bad script '%s': %s", path, err)) - } - } - - // Do a check for bad environment variables, such as '=foo', 'foobar' - for _, kv := range p.config.Vars { - vs := strings.SplitN(kv, "=", 2) - if len(vs) != 2 || vs[0] == "" { - errs = packer.MultiErrorAppend(errs, - fmt.Errorf("Environment variable not in format 'key=value': %s", kv)) - } - } - - if errs != nil && len(errs.Errors) > 0 { - return errs - } - - return nil + return sl.Validate(&p.config) } func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) { + // this particular post-processor doesn't do anything with the artifact + // except to return it. - scripts := make([]string, len(p.config.Scripts)) - copy(scripts, p.config.Scripts) - - // If we have an inline script, then turn that into a temporary - // shell script and use that. 
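Editorial note: the backwards-compatibility branch above only fires when `execute_command` decodes to a single element. A hedged, test-style sketch of how the two accepted forms end up in `p.config.ExecuteCommand` (mirroring the new TestPostProcessorPrepare_ExecuteCommand further down; the inline command and test name here are illustrative only):

```
package shell_local

import "testing"

// TestExecuteCommandForms is an illustrative sketch, not part of this patch.
func TestExecuteCommandForms(t *testing.T) {
	// Legacy form: a bare string gets wrapped so it still runs via "sh -c".
	p := new(PostProcessor)
	if err := p.Configure(map[string]interface{}{
		"inline":          []interface{}{"echo hello"},
		"execute_command": "chmod +x {{.Script}}; {{.Vars}} {{.Script}}",
	}); err != nil {
		t.Fatalf("configure: %s", err)
	}
	if len(p.config.ExecuteCommand) != 3 || p.config.ExecuteCommand[0] != "sh" {
		t.Fatalf("expected sh -c prefix, got %#v", p.config.ExecuteCommand)
	}

	// Current form: a full argv slice is passed through unchanged.
	p = new(PostProcessor)
	if err := p.Configure(map[string]interface{}{
		"inline":          []interface{}{"echo hello"},
		"execute_command": []string{"/bin/sh", "-c", "{{.Vars}} {{.Script}}"},
	}); err != nil {
		t.Fatalf("configure: %s", err)
	}
	if p.config.ExecuteCommand[0] != "/bin/sh" {
		t.Fatalf("expected argv passthrough, got %#v", p.config.ExecuteCommand)
	}
}
```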
- if p.config.Inline != nil { - tf, err := ioutil.TempFile("", "packer-shell") - if err != nil { - return nil, false, fmt.Errorf("Error preparing shell script: %s", err) - } - defer os.Remove(tf.Name()) - - // Set the path to the temporary file - scripts = append(scripts, tf.Name()) - - // Write our contents to it - writer := bufio.NewWriter(tf) - writer.WriteString(fmt.Sprintf("#!%s\n", p.config.InlineShebang)) - for _, command := range p.config.Inline { - if _, err := writer.WriteString(command + "\n"); err != nil { - return nil, false, fmt.Errorf("Error preparing shell script: %s", err) - } - } - - if err := writer.Flush(); err != nil { - return nil, false, fmt.Errorf("Error preparing shell script: %s", err) - } - - tf.Close() + retBool, retErr := sl.Run(ui, &p.config) + if !retBool { + return nil, retBool, retErr } - // Create environment variables to set before executing the command - flattenedEnvVars := p.createFlattenedEnvVars() - - for _, script := range scripts { - - p.config.ctx.Data = &ExecuteCommandTemplate{ - Vars: flattenedEnvVars, - Script: script, - } - - command, err := interpolate.Render(p.config.ExecuteCommand, &p.config.ctx) - if err != nil { - return nil, false, fmt.Errorf("Error processing command: %s", err) - } - - ui.Say(fmt.Sprintf("Post processing with local shell script: %s", script)) - - comm := &Communicator{} - - cmd := &packer.RemoteCmd{Command: command} - - log.Printf("starting local command: %s", command) - if err := cmd.StartWithUi(comm, ui); err != nil { - return nil, false, fmt.Errorf( - "Error executing script: %s\n\n"+ - "Please see output above for more information.", - script) - } - if cmd.ExitStatus != 0 { - return nil, false, fmt.Errorf( - "Erroneous exit code %d while executing script: %s\n\n"+ - "Please see output above for more information.", - cmd.ExitStatus, - script) - } - } - - return artifact, true, nil -} - -func (p *PostProcessor) createFlattenedEnvVars() (flattened string) { - flattened = "" - envVars := make(map[string]string) - - // Always available Packer provided env vars - envVars["PACKER_BUILD_NAME"] = fmt.Sprintf("%s", p.config.PackerBuildName) - envVars["PACKER_BUILDER_TYPE"] = fmt.Sprintf("%s", p.config.PackerBuilderType) - - // Split vars into key/value components - for _, envVar := range p.config.Vars { - keyValue := strings.SplitN(envVar, "=", 2) - // Store pair, replacing any single quotes in value so they parse - // correctly with required environment variable format - envVars[keyValue[0]] = strings.Replace(keyValue[1], "'", `'"'"'`, -1) - } - - // Create a list of env var keys in sorted order - var keys []string - for k := range envVars { - keys = append(keys, k) - } - sort.Strings(keys) - - // Re-assemble vars surrounding value with single quotes and flatten - for _, key := range keys { - flattened += fmt.Sprintf("%s='%s' ", key, envVars[key]) - } - return + return artifact, retBool, retErr } diff --git a/post-processor/shell-local/post-processor_test.go b/post-processor/shell-local/post-processor_test.go index 094b84096..515704f9d 100644 --- a/post-processor/shell-local/post-processor_test.go +++ b/post-processor/shell-local/post-processor_test.go @@ -1,10 +1,13 @@ package shell_local import ( - "github.com/hashicorp/packer/packer" "io/ioutil" "os" + "runtime" "testing" + + "github.com/hashicorp/packer/packer" + "github.com/stretchr/testify/assert" ) func TestPostProcessor_ImplementsPostProcessor(t *testing.T) { @@ -27,32 +30,35 @@ func TestPostProcessor_Impl(t *testing.T) { func TestPostProcessorPrepare_Defaults(t 
*testing.T) { var p PostProcessor - config := testConfig() + raws := testConfig() - err := p.Configure(config) + err := p.Configure(raws) if err != nil { t.Fatalf("err: %s", err) } } func TestPostProcessorPrepare_InlineShebang(t *testing.T) { - config := testConfig() + raws := testConfig() - delete(config, "inline_shebang") + delete(raws, "inline_shebang") p := new(PostProcessor) - err := p.Configure(config) + err := p.Configure(raws) if err != nil { t.Fatalf("should not have error: %s", err) } - - if p.config.InlineShebang != "/bin/sh -e" { + expected := "" + if runtime.GOOS != "windows" { + expected = "/bin/sh -e" + } + if p.config.InlineShebang != expected { t.Fatalf("bad value: %s", p.config.InlineShebang) } // Test with a good one - config["inline_shebang"] = "foo" + raws["inline_shebang"] = "foo" p = new(PostProcessor) - err = p.Configure(config) + err = p.Configure(raws) if err != nil { t.Fatalf("should not have error: %s", err) } @@ -64,23 +70,23 @@ func TestPostProcessorPrepare_InlineShebang(t *testing.T) { func TestPostProcessorPrepare_InvalidKey(t *testing.T) { var p PostProcessor - config := testConfig() + raws := testConfig() // Add a random key - config["i_should_not_be_valid"] = true - err := p.Configure(config) + raws["i_should_not_be_valid"] = true + err := p.Configure(raws) if err == nil { t.Fatal("should have error") } } func TestPostProcessorPrepare_Script(t *testing.T) { - config := testConfig() - delete(config, "inline") + raws := testConfig() + delete(raws, "inline") - config["script"] = "/this/should/not/exist" + raws["script"] = "/this/should/not/exist" p := new(PostProcessor) - err := p.Configure(config) + err := p.Configure(raws) if err == nil { t.Fatal("should have error") } @@ -92,23 +98,65 @@ func TestPostProcessorPrepare_Script(t *testing.T) { } defer os.Remove(tf.Name()) - config["script"] = tf.Name() + raws["script"] = tf.Name() p = new(PostProcessor) - err = p.Configure(config) + err = p.Configure(raws) if err != nil { t.Fatalf("should not have error: %s", err) } } +func TestPostProcessorPrepare_ExecuteCommand(t *testing.T) { + // Check that passing a string will work (Backwards Compatibility) + p := new(PostProcessor) + raws := testConfig() + raws["execute_command"] = "foo bar" + err := p.Configure(raws) + expected := []string{"sh", "-c", "foo bar"} + if err != nil { + t.Fatalf("should handle backwards compatibility: %s", err) + } + assert.Equal(t, p.config.ExecuteCommand, expected, + "Did not get expected execute_command: expected: %#v; received %#v", expected, p.config.ExecuteCommand) + + // Check that passing a list will work + p = new(PostProcessor) + raws = testConfig() + raws["execute_command"] = []string{"foo", "bar"} + err = p.Configure(raws) + if err != nil { + t.Fatalf("should handle backwards compatibility: %s", err) + } + expected = []string{"foo", "bar"} + assert.Equal(t, p.config.ExecuteCommand, expected, + "Did not get expected execute_command: expected: %#v; received %#v", expected, p.config.ExecuteCommand) + + // Check that default is as expected + raws = testConfig() + delete(raws, "execute_command") + p = new(PostProcessor) + p.Configure(raws) + if runtime.GOOS != "windows" { + expected = []string{"/bin/sh", "-c", "{{.Vars}} {{.Script}}"} + } else { + expected = []string{"cmd", "/V", "/C", "{{.Vars}}", "call", "{{.Script}}"} + } + assert.Equal(t, p.config.ExecuteCommand, expected, + "Did not get expected default: expected: %#v; received %#v", expected, p.config.ExecuteCommand) +} + func TestPostProcessorPrepare_ScriptAndInline(t 
*testing.T) { var p PostProcessor - config := testConfig() + raws := testConfig() - delete(config, "inline") - delete(config, "script") - err := p.Configure(config) + // Error if no scripts/inline commands provided + delete(raws, "inline") + delete(raws, "script") + delete(raws, "command") + delete(raws, "scripts") + err := p.Configure(raws) if err == nil { - t.Fatal("should have error") + t.Fatalf("should error when no scripts/inline commands are provided") } // Test with both @@ -118,9 +166,9 @@ func TestPostProcessorPrepare_ScriptAndInline(t *testing.T) { } defer os.Remove(tf.Name()) - config["inline"] = []interface{}{"foo"} - config["script"] = tf.Name() - err = p.Configure(config) + raws["inline"] = []interface{}{"foo"} + raws["script"] = tf.Name() + err = p.Configure(raws) if err == nil { t.Fatal("should have error") } @@ -128,7 +176,7 @@ func TestPostProcessorPrepare_ScriptAndInline(t *testing.T) { func TestPostProcessorPrepare_ScriptAndScripts(t *testing.T) { var p PostProcessor - config := testConfig() + raws := testConfig() // Test with both tf, err := ioutil.TempFile("", "packer") @@ -137,21 +185,21 @@ func TestPostProcessorPrepare_ScriptAndScripts(t *testing.T) { } defer os.Remove(tf.Name()) - config["inline"] = []interface{}{"foo"} - config["scripts"] = []string{tf.Name()} - err = p.Configure(config) + raws["inline"] = []interface{}{"foo"} + raws["scripts"] = []string{tf.Name()} + err = p.Configure(raws) if err == nil { t.Fatal("should have error") } } func TestPostProcessorPrepare_Scripts(t *testing.T) { - config := testConfig() - delete(config, "inline") + raws := testConfig() + delete(raws, "inline") - config["scripts"] = []string{} + raws["scripts"] = []string{} p := new(PostProcessor) - err := p.Configure(config) + err := p.Configure(raws) if err == nil { t.Fatal("should have error") } @@ -163,92 +211,55 @@ func TestPostProcessorPrepare_Scripts(t *testing.T) { } defer os.Remove(tf.Name()) - config["scripts"] = []string{tf.Name()} + raws["scripts"] = []string{tf.Name()} p = new(PostProcessor) - err = p.Configure(config) + err = p.Configure(raws) if err != nil { t.Fatalf("should not have error: %s", err) } } func TestPostProcessorPrepare_EnvironmentVars(t *testing.T) { - config := testConfig() + raws := testConfig() // Test with a bad case - config["environment_vars"] = []string{"badvar", "good=var"} + raws["environment_vars"] = []string{"badvar", "good=var"} p := new(PostProcessor) - err := p.Configure(config) + err := p.Configure(raws) if err == nil { t.Fatal("should have error") } // Test with a trickier case - config["environment_vars"] = []string{"=bad"} + raws["environment_vars"] = []string{"=bad"} p = new(PostProcessor) - err = p.Configure(config) + err = p.Configure(raws) if err == nil { t.Fatal("should have error") } // Test with a good case // Note: baz= is a real env variable, just empty - config["environment_vars"] = []string{"FOO=bar", "baz="} + raws["environment_vars"] = []string{"FOO=bar", "baz="} p = new(PostProcessor) - err = p.Configure(config) + err = p.Configure(raws) if err != nil { t.Fatalf("should not have error: %s", err) } // Test when the env variable value contains an equals sign - config["environment_vars"] = []string{"good=withequals=true"} + raws["environment_vars"] = []string{"good=withequals=true"} p = new(PostProcessor) - err = p.Configure(config) + err = p.Configure(raws) if err != nil { t.Fatalf("should not have error: %s", err) } // Test when the env variable value starts with an equals sign - config["environment_vars"] = 
[]string{"good==true"} + raws["environment_vars"] = []string{"good==true"} p = new(PostProcessor) - err = p.Configure(config) + err = p.Configure(raws) if err != nil { t.Fatalf("should not have error: %s", err) } } - -func TestPostProcessor_createFlattenedEnvVars(t *testing.T) { - var flattenedEnvVars string - config := testConfig() - - userEnvVarTests := [][]string{ - {}, // No user env var - {"FOO=bar"}, // Single user env var - {"FOO=bar's"}, // User env var with single quote in value - {"FOO=bar", "BAZ=qux"}, // Multiple user env vars - {"FOO=bar=baz"}, // User env var with value containing equals - {"FOO==bar"}, // User env var with value starting with equals - } - expected := []string{ - `PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `, - `FOO='bar' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `, - `FOO='bar'"'"'s' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `, - `BAZ='qux' FOO='bar' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `, - `FOO='bar=baz' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `, - `FOO='=bar' PACKER_BUILDER_TYPE='iso' PACKER_BUILD_NAME='vmware' `, - } - - p := new(PostProcessor) - p.Configure(config) - - // Defaults provided by Packer - p.config.PackerBuildName = "vmware" - p.config.PackerBuilderType = "iso" - - for i, expectedValue := range expected { - p.config.Vars = userEnvVarTests[i] - flattenedEnvVars = p.createFlattenedEnvVars() - if flattenedEnvVars != expectedValue { - t.Fatalf("expected flattened env vars to be: %s, got %s.", expectedValue, flattenedEnvVars) - } - } -} diff --git a/post-processor/vagrant-cloud/artifact_test.go b/post-processor/vagrant-cloud/artifact_test.go index e22090612..6c633ea84 100644 --- a/post-processor/vagrant-cloud/artifact_test.go +++ b/post-processor/vagrant-cloud/artifact_test.go @@ -1,8 +1,9 @@ package vagrantcloud import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestArtifact_ImplementsArtifact(t *testing.T) { diff --git a/post-processor/vagrant-cloud/post-processor.go b/post-processor/vagrant-cloud/post-processor.go index b8eeb7105..273d2fdb8 100644 --- a/post-processor/vagrant-cloud/post-processor.go +++ b/post-processor/vagrant-cloud/post-processor.go @@ -12,9 +12,9 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" ) const VAGRANT_CLOUD_URL = "https://vagrantcloud.com/api/v1" @@ -189,6 +189,8 @@ func providerFromBuilderName(name string) string { switch name { case "aws": return "aws" + case "scaleway": + return "scaleway" case "digitalocean": return "digitalocean" case "virtualbox": diff --git a/post-processor/vagrant-cloud/step_create_provider.go b/post-processor/vagrant-cloud/step_create_provider.go index c3e3665c8..e664b8b19 100644 --- a/post-processor/vagrant-cloud/step_create_provider.go +++ b/post-processor/vagrant-cloud/step_create_provider.go @@ -1,9 +1,11 @@ package vagrantcloud import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type Provider struct { @@ -17,7 +19,7 @@ type stepCreateProvider struct { name string // the name of the provider } -func (s *stepCreateProvider) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateProvider) Run(_ context.Context, state multistep.StateBag) 
multistep.StepAction { client := state.Get("client").(*VagrantCloudClient) ui := state.Get("ui").(packer.Ui) box := state.Get("box").(*Box) diff --git a/post-processor/vagrant-cloud/step_create_version.go b/post-processor/vagrant-cloud/step_create_version.go index 0a544ceba..feb247f0a 100644 --- a/post-processor/vagrant-cloud/step_create_version.go +++ b/post-processor/vagrant-cloud/step_create_version.go @@ -1,9 +1,11 @@ package vagrantcloud import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type Version struct { @@ -14,7 +16,7 @@ type Version struct { type stepCreateVersion struct { } -func (s *stepCreateVersion) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateVersion) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*VagrantCloudClient) ui := state.Get("ui").(packer.Ui) config := state.Get("config").(Config) diff --git a/post-processor/vagrant-cloud/step_prepare_upload.go b/post-processor/vagrant-cloud/step_prepare_upload.go index 0723520b9..bdab16d10 100644 --- a/post-processor/vagrant-cloud/step_prepare_upload.go +++ b/post-processor/vagrant-cloud/step_prepare_upload.go @@ -1,10 +1,11 @@ package vagrantcloud import ( + "context" "fmt" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type Upload struct { @@ -14,7 +15,7 @@ type Upload struct { type stepPrepareUpload struct { } -func (s *stepPrepareUpload) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepPrepareUpload) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*VagrantCloudClient) ui := state.Get("ui").(packer.Ui) box := state.Get("box").(*Box) diff --git a/post-processor/vagrant-cloud/step_release_version.go b/post-processor/vagrant-cloud/step_release_version.go index cc4db611f..94e4b90da 100644 --- a/post-processor/vagrant-cloud/step_release_version.go +++ b/post-processor/vagrant-cloud/step_release_version.go @@ -1,17 +1,18 @@ package vagrantcloud import ( + "context" "fmt" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepReleaseVersion struct { } -func (s *stepReleaseVersion) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepReleaseVersion) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*VagrantCloudClient) ui := state.Get("ui").(packer.Ui) box := state.Get("box").(*Box) diff --git a/post-processor/vagrant-cloud/step_upload.go b/post-processor/vagrant-cloud/step_upload.go index 7004177fd..cce1e9de2 100644 --- a/post-processor/vagrant-cloud/step_upload.go +++ b/post-processor/vagrant-cloud/step_upload.go @@ -1,18 +1,19 @@ package vagrantcloud import ( + "context" "fmt" "log" "github.com/hashicorp/packer/common" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type stepUpload struct { } -func (s *stepUpload) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepUpload) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*VagrantCloudClient) ui := state.Get("ui").(packer.Ui) upload := state.Get("upload").(*Upload) diff --git a/post-processor/vagrant-cloud/step_verify_box.go 
b/post-processor/vagrant-cloud/step_verify_box.go index c3c7ef4a6..f1dafc991 100644 --- a/post-processor/vagrant-cloud/step_verify_box.go +++ b/post-processor/vagrant-cloud/step_verify_box.go @@ -1,9 +1,11 @@ package vagrantcloud import ( + "context" "fmt" + + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" ) type Box struct { @@ -23,7 +25,7 @@ func (b *Box) HasVersion(version string) (bool, *Version) { type stepVerifyBox struct { } -func (s *stepVerifyBox) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepVerifyBox) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { client := state.Get("client").(*VagrantCloudClient) ui := state.Get("ui").(packer.Ui) config := state.Get("config").(Config) diff --git a/post-processor/vagrant/artifact_test.go b/post-processor/vagrant/artifact_test.go index 3676e1d98..7b087d963 100644 --- a/post-processor/vagrant/artifact_test.go +++ b/post-processor/vagrant/artifact_test.go @@ -1,8 +1,9 @@ package vagrant import ( - "github.com/hashicorp/packer/packer" "testing" + + "github.com/hashicorp/packer/packer" ) func TestArtifact_ImplementsArtifact(t *testing.T) { diff --git a/post-processor/vagrant/digitalocean.go b/post-processor/vagrant/digitalocean.go index 2aaba9067..292b00600 100644 --- a/post-processor/vagrant/digitalocean.go +++ b/post-processor/vagrant/digitalocean.go @@ -3,9 +3,10 @@ package vagrant import ( "bytes" "fmt" - "github.com/hashicorp/packer/packer" "strings" "text/template" + + "github.com/hashicorp/packer/packer" ) type digitalOceanVagrantfileTemplate struct { diff --git a/post-processor/vagrant/docker.go b/post-processor/vagrant/docker.go new file mode 100644 index 000000000..6f5aeb3dc --- /dev/null +++ b/post-processor/vagrant/docker.go @@ -0,0 +1,29 @@ +package vagrant + +import ( + "fmt" + + "github.com/hashicorp/packer/packer" +) + +type DockerProvider struct{} + +func (p *DockerProvider) KeepInputArtifact() bool { + return false +} + +func (p *DockerProvider) Process(ui packer.Ui, artifact packer.Artifact, dir string) (vagrantfile string, metadata map[string]interface{}, err error) { + // Create the metadata + metadata = map[string]interface{}{"provider": "docker"} + + vagrantfile = fmt.Sprintf(dockerVagrantfile, artifact.Id()) + return +} + +var dockerVagrantfile = ` +Vagrant.configure("2") do |config| + config.vm.provider :docker do |docker, override| + docker.image = "%s" + end +end +` diff --git a/post-processor/vagrant/docker_test.go b/post-processor/vagrant/docker_test.go new file mode 100644 index 000000000..380846bcc --- /dev/null +++ b/post-processor/vagrant/docker_test.go @@ -0,0 +1,9 @@ +package vagrant + +import ( + "testing" +) + +func TestDockerProvider_impl(t *testing.T) { + var _ Provider = new(DockerProvider) +} diff --git a/post-processor/vagrant/google.go b/post-processor/vagrant/google.go new file mode 100644 index 000000000..a4d9dbf48 --- /dev/null +++ b/post-processor/vagrant/google.go @@ -0,0 +1,42 @@ +package vagrant + +import ( + "bytes" + "text/template" + + "github.com/hashicorp/packer/packer" +) + +type googleVagrantfileTemplate struct { + Image string "" +} + +type GoogleProvider struct{} + +func (p *GoogleProvider) KeepInputArtifact() bool { + return true +} + +func (p *GoogleProvider) Process(ui packer.Ui, artifact packer.Artifact, dir string) (vagrantfile string, metadata map[string]interface{}, err error) { + // Create the metadata + metadata = map[string]interface{}{"provider": "google"} + + // 
Build up the template data to build our Vagrantfile + tplData := &googleVagrantfileTemplate{} + tplData.Image = artifact.Id() + + // Build up the Vagrantfile + var contents bytes.Buffer + t := template.Must(template.New("vf").Parse(defaultGoogleVagrantfile)) + err = t.Execute(&contents, tplData) + vagrantfile = contents.String() + return +} + +var defaultGoogleVagrantfile = ` +Vagrant.configure("2") do |config| + config.vm.provider :google do |google| + google.image = "{{ .Image }}" + end +end +` diff --git a/post-processor/vagrant/google_test.go b/post-processor/vagrant/google_test.go new file mode 100644 index 000000000..22eab0c77 --- /dev/null +++ b/post-processor/vagrant/google_test.go @@ -0,0 +1,37 @@ +package vagrant + +import ( + "strings" + "testing" + + "github.com/hashicorp/packer/packer" +) + +func TestGoogleProvider_impl(t *testing.T) { + var _ Provider = new(GoogleProvider) +} + +func TestGoogleProvider_KeepInputArtifact(t *testing.T) { + p := new(GoogleProvider) + + if !p.KeepInputArtifact() { + t.Fatal("should keep input artifact") + } +} + +func TestGoogleProvider_ArtifactId(t *testing.T) { + p := new(GoogleProvider) + ui := testUi() + artifact := &packer.MockArtifact{ + IdValue: "packer-1234", + } + + vagrantfile, _, err := p.Process(ui, artifact, "foo") + if err != nil { + t.Fatalf("should not have error: %s", err) + } + result := `google.image = "packer-1234"` + if !strings.Contains(vagrantfile, result) { + t.Fatalf("wrong substitution: %s", vagrantfile) + } +} diff --git a/post-processor/vagrant/libvirt.go b/post-processor/vagrant/libvirt.go index ece535bf0..c20e8473b 100644 --- a/post-processor/vagrant/libvirt.go +++ b/post-processor/vagrant/libvirt.go @@ -2,9 +2,10 @@ package vagrant import ( "fmt" - "github.com/hashicorp/packer/packer" "path/filepath" "strings" + + "github.com/hashicorp/packer/packer" ) type LibVirtProvider struct{} diff --git a/post-processor/vagrant/lxc.go b/post-processor/vagrant/lxc.go new file mode 100644 index 000000000..771df9509 --- /dev/null +++ b/post-processor/vagrant/lxc.go @@ -0,0 +1,34 @@ +package vagrant + +import ( + "fmt" + "path/filepath" + + "github.com/hashicorp/packer/packer" +) + +type LXCProvider struct{} + +func (p *LXCProvider) KeepInputArtifact() bool { + return false +} + +func (p *LXCProvider) Process(ui packer.Ui, artifact packer.Artifact, dir string) (vagrantfile string, metadata map[string]interface{}, err error) { + // Create the metadata + metadata = map[string]interface{}{ + "provider": "lxc", + "version": "1.0.0", + } + + // Copy all of the original contents into the temporary directory + for _, path := range artifact.Files() { + ui.Message(fmt.Sprintf("Copying: %s", path)) + + dstPath := filepath.Join(dir, filepath.Base(path)) + if err = CopyContents(dstPath, path); err != nil { + return + } + } + + return +} diff --git a/post-processor/vagrant/lxc_test.go b/post-processor/vagrant/lxc_test.go new file mode 100644 index 000000000..da5532c3b --- /dev/null +++ b/post-processor/vagrant/lxc_test.go @@ -0,0 +1,9 @@ +package vagrant + +import ( + "testing" +) + +func TestLXCProvider_impl(t *testing.T) { + var _ Provider = new(LXCProvider) +} diff --git a/post-processor/vagrant/post-processor.go b/post-processor/vagrant/post-processor.go index aa65b2292..1aab0ca23 100644 --- a/post-processor/vagrant/post-processor.go +++ b/post-processor/vagrant/post-processor.go @@ -19,15 +19,21 @@ import ( ) var builtins = map[string]string{ - "mitchellh.amazonebs": "aws", - "mitchellh.amazon.instance": "aws", - "mitchellh.virtualbox": 
"virtualbox", - "mitchellh.vmware": "vmware", - "mitchellh.vmware-esx": "vmware", - "pearkes.digitalocean": "digitalocean", - "packer.parallels": "parallels", - "MSOpenTech.hyperv": "hyperv", - "transcend.qemu": "libvirt", + "mitchellh.amazonebs": "aws", + "mitchellh.amazon.instance": "aws", + "mitchellh.virtualbox": "virtualbox", + "mitchellh.vmware": "vmware", + "mitchellh.vmware-esx": "vmware", + "pearkes.digitalocean": "digitalocean", + "packer.googlecompute": "google", + "hashicorp.scaleway": "scaleway", + "packer.parallels": "parallels", + "MSOpenTech.hyperv": "hyperv", + "transcend.qemu": "libvirt", + "ustream.lxc": "lxc", + "packer.post-processor.docker-import": "docker", + "packer.post-processor.docker-tag": "docker", + "packer.post-processor.docker-push": "docker", } type Config struct { @@ -220,6 +226,8 @@ func providerForName(name string) Provider { switch name { case "aws": return new(AWSProvider) + case "scaleway": + return new(ScalewayProvider) case "digitalocean": return new(DigitalOceanProvider) case "virtualbox": @@ -232,12 +240,18 @@ func providerForName(name string) Provider { return new(HypervProvider) case "libvirt": return new(LibVirtProvider) + case "google": + return new(GoogleProvider) + case "lxc": + return new(LXCProvider) + case "docker": + return new(DockerProvider) default: return nil } } -// OutputPathTemplate is the structure that is availalable within the +// OutputPathTemplate is the structure that is available within the // OutputPath variables. type outputPathTemplate struct { ArtifactId string diff --git a/post-processor/vagrant/post-processor_test.go b/post-processor/vagrant/post-processor_test.go index 8b8bfe10a..8a5368737 100644 --- a/post-processor/vagrant/post-processor_test.go +++ b/post-processor/vagrant/post-processor_test.go @@ -3,11 +3,12 @@ package vagrant import ( "bytes" "compress/flate" - "github.com/hashicorp/packer/packer" "io/ioutil" "os" "strings" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { diff --git a/post-processor/vagrant/scaleway.go b/post-processor/vagrant/scaleway.go new file mode 100644 index 000000000..ee275cffd --- /dev/null +++ b/post-processor/vagrant/scaleway.go @@ -0,0 +1,53 @@ +package vagrant + +import ( + "bytes" + "fmt" + "strings" + "text/template" + + "github.com/hashicorp/packer/packer" +) + +type scalewayVagrantfileTemplate struct { + Image string "" + Region string "" +} + +type ScalewayProvider struct{} + +func (p *ScalewayProvider) KeepInputArtifact() bool { + return true +} + +func (p *ScalewayProvider) Process(ui packer.Ui, artifact packer.Artifact, dir string) (vagrantfile string, metadata map[string]interface{}, err error) { + // Create the metadata + metadata = map[string]interface{}{"provider": "scaleway"} + + // Determine the image and region... 
+ tplData := &scalewayVagrantfileTemplate{} + + parts := strings.Split(artifact.Id(), ":") + if len(parts) != 2 { + err = fmt.Errorf("Poorly formatted artifact ID: %s", artifact.Id()) + return + } + tplData.Region = parts[0] + tplData.Image = parts[1] + + // Build up the Vagrantfile + var contents bytes.Buffer + t := template.Must(template.New("vf").Parse(defaultScalewayVagrantfile)) + err = t.Execute(&contents, tplData) + vagrantfile = contents.String() + return +} + +var defaultScalewayVagrantfile = ` +Vagrant.configure("2") do |config| + config.vm.provider :scaleway do |scaleway| + scaleway.image = "{{ .Image }}" + scaleway.region = "{{ .Region }}" + end +end +` diff --git a/post-processor/vagrant/tar_fix.go b/post-processor/vagrant/tar_fix.go new file mode 100644 index 000000000..32f05d933 --- /dev/null +++ b/post-processor/vagrant/tar_fix.go @@ -0,0 +1,9 @@ +// +build !go1.10 + +package vagrant + +import "archive/tar" + +func setHeaderFormat(header *tar.Header) { + // no-op +} diff --git a/post-processor/vagrant/tar_fix_go110.go b/post-processor/vagrant/tar_fix_go110.go new file mode 100644 index 000000000..e9092ac08 --- /dev/null +++ b/post-processor/vagrant/tar_fix_go110.go @@ -0,0 +1,12 @@ +// +build go1.10 + +package vagrant + +import "archive/tar" + +func setHeaderFormat(header *tar.Header) { + // We have to set the Format explicitly because of a bug in + // libarchive. This affects eg. the tar in macOS listing huge + // files with zero byte length. + header.Format = tar.FormatGNU +} diff --git a/post-processor/vagrant/util.go b/post-processor/vagrant/util.go index de80912bb..55c89ae5c 100644 --- a/post-processor/vagrant/util.go +++ b/post-processor/vagrant/util.go @@ -126,6 +126,10 @@ func DirToBox(dst, dir string, ui packer.Ui, level int) error { return err } + // go >=1.10 wants to use GNU tar format to workaround issues in + // libarchive < 3.3.2 + setHeaderFormat(header) + // We have to set the Name explicitly because it is supposed to // be a relative path to the root. Otherwise, the tar ends up // being a bunch of files in the root, even if they're actually diff --git a/post-processor/vagrant/virtualbox.go b/post-processor/vagrant/virtualbox.go index 29201a01a..3efda5e9f 100644 --- a/post-processor/vagrant/virtualbox.go +++ b/post-processor/vagrant/virtualbox.go @@ -4,13 +4,14 @@ import ( "archive/tar" "errors" "fmt" - "github.com/hashicorp/packer/packer" "io" "io/ioutil" "log" "os" "path/filepath" "regexp" + + "github.com/hashicorp/packer/packer" ) type VBoxProvider struct{} @@ -132,7 +133,15 @@ func DecompressOva(dir, src string) error { if hdr == nil || err == io.EOF { break } + if err != nil { + return err + } + // We use the fileinfo to get the file name because we are not + // expecting path information as from the tar header. It's important + // that we not use the path name from the tar header without checking + // for the presence of `..`. If we accidentally allow for that, we can + // open ourselves up to a path traversal vulnerability. 
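Editorial note: as the comment above explains, DecompressOva deliberately derives the on-disk name from the header's FileInfo rather than trusting `hdr.Name`. A minimal sketch of why that is safe (illustrative helper only, not part of this patch; `tar.Header.FileInfo().Name()` reduces to the base name, so `..` components cannot escape the target directory):

```
package vagrant

import (
	"archive/tar"
	"path/filepath"
)

// safeDestination is an illustrative helper (not part of this patch): joining
// dir with the base file name from the header means an entry named
// "../../demo.poc" still lands inside dir instead of climbing above it.
func safeDestination(dir string, hdr *tar.Header) string {
	info := hdr.FileInfo() // Name() is path.Base of the header name
	return filepath.Join(dir, info.Name())
}
```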
info := hdr.FileInfo() // Shouldn't be any directories, skip them diff --git a/post-processor/vagrant/virtualbox_test.go b/post-processor/vagrant/virtualbox_test.go index c3815ce77..5082c9388 100644 --- a/post-processor/vagrant/virtualbox_test.go +++ b/post-processor/vagrant/virtualbox_test.go @@ -1,9 +1,27 @@ package vagrant import ( + "io/ioutil" + "os" + "path/filepath" "testing" + + "github.com/stretchr/testify/assert" ) func TestVBoxProvider_impl(t *testing.T) { var _ Provider = new(VBoxProvider) } + +func TestDecomressOVA(t *testing.T) { + td, err := ioutil.TempDir("", "pp-vagrant-virtualbox") + assert.NoError(t, err) + fixture := "../../common/test-fixtures/decompress-tar/outside_parent.tar" + err = DecompressOva(td, fixture) + assert.NoError(t, err) + _, err = os.Stat(filepath.Join(filepath.Base(td), "demo.poc")) + assert.Error(t, err) + _, err = os.Stat(filepath.Join(td, "demo.poc")) + assert.NoError(t, err) + os.RemoveAll(td) +} diff --git a/post-processor/vagrant/vmware.go b/post-processor/vagrant/vmware.go index 3fe2aed4e..66d75b1ba 100644 --- a/post-processor/vagrant/vmware.go +++ b/post-processor/vagrant/vmware.go @@ -2,8 +2,9 @@ package vagrant import ( "fmt" - "github.com/hashicorp/packer/packer" "path/filepath" + + "github.com/hashicorp/packer/packer" ) type VMwareProvider struct{} diff --git a/post-processor/vsphere-template/post-processor.go b/post-processor/vsphere-template/post-processor.go index f5fba9c4e..deed5c2d7 100644 --- a/post-processor/vsphere-template/post-processor.go +++ b/post-processor/vsphere-template/post-processor.go @@ -11,14 +11,16 @@ import ( "github.com/hashicorp/packer/builder/vmware/iso" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/config" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/post-processor/vsphere" "github.com/hashicorp/packer/template/interpolate" - "github.com/mitchellh/multistep" "github.com/vmware/govmomi" ) var builtins = map[string]string{ - "mitchellh.vmware-esx": "vmware", + vsphere.BuilderId: "vmware", + iso.BuilderIdESX: "vmware", } type Config struct { @@ -74,6 +76,7 @@ func (p *PostProcessor) Configure(raws ...interface{}) error { if err != nil { errs = packer.MultiErrorAppend( errs, fmt.Errorf("Error invalid vSphere sdk endpoint: %s", err)) + return errs } sdk.User = url.UserPassword(p.config.Username, p.config.Password) @@ -87,7 +90,10 @@ func (p *PostProcessor) Configure(raws ...interface{}) error { func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) { if _, ok := builtins[artifact.BuilderId()]; !ok { - return nil, false, fmt.Errorf("Unknown artifact type, can't build box: %s", artifact.BuilderId()) + return nil, false, fmt.Errorf("The Packer vSphere Template post-processor "+ + "can only take an artifact from the VMware-iso builder, built on "+ + "ESXi (i.e. remote) or an artifact from the vSphere post-processor. 
"+ + "Artifact type %s does not fit this requirement", artifact.BuilderId()) } f := artifact.State(iso.ArtifactConfFormat) @@ -120,9 +126,7 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac &stepCreateFolder{ Folder: p.config.Folder, }, - &stepMarkAsTemplate{ - VMName: artifact.Id(), - }, + NewStepMarkAsTemplate(artifact), } runner := common.NewRunnerWithPauseFn(steps, p.config.PackerConfig, ui, state) runner.Run(state) diff --git a/post-processor/vsphere-template/step_choose_datacenter.go b/post-processor/vsphere-template/step_choose_datacenter.go index 51959b6d7..450f2b946 100644 --- a/post-processor/vsphere-template/step_choose_datacenter.go +++ b/post-processor/vsphere-template/step_choose_datacenter.go @@ -3,8 +3,8 @@ package vsphere_template import ( "context" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/vmware/govmomi" "github.com/vmware/govmomi/find" ) @@ -13,7 +13,7 @@ type stepChooseDatacenter struct { Datacenter string } -func (s *stepChooseDatacenter) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepChooseDatacenter) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) cli := state.Get("client").(*govmomi.Client) finder := find.NewFinder(cli.Client, false) diff --git a/post-processor/vsphere-template/step_create_folder.go b/post-processor/vsphere-template/step_create_folder.go index a0e648275..49232c2f2 100644 --- a/post-processor/vsphere-template/step_create_folder.go +++ b/post-processor/vsphere-template/step_create_folder.go @@ -5,8 +5,8 @@ import ( "fmt" "path" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" "github.com/vmware/govmomi" "github.com/vmware/govmomi/object" ) @@ -15,7 +15,7 @@ type stepCreateFolder struct { Folder string } -func (s *stepCreateFolder) Run(state multistep.StateBag) multistep.StepAction { +func (s *stepCreateFolder) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) cli := state.Get("client").(*govmomi.Client) dcPath := state.Get("dcPath").(string) diff --git a/post-processor/vsphere-template/step_mark_as_template.go b/post-processor/vsphere-template/step_mark_as_template.go index 7dc9211fa..2cf533d55 100644 --- a/post-processor/vsphere-template/step_mark_as_template.go +++ b/post-processor/vsphere-template/step_mark_as_template.go @@ -7,18 +7,36 @@ import ( "regexp" "strings" + "github.com/hashicorp/packer/helper/multistep" "github.com/hashicorp/packer/packer" - "github.com/mitchellh/multistep" + "github.com/hashicorp/packer/post-processor/vsphere" "github.com/vmware/govmomi" "github.com/vmware/govmomi/object" "github.com/vmware/govmomi/vim25/types" ) type stepMarkAsTemplate struct { - VMName string + VMName string + RemoteFolder string } -func (s *stepMarkAsTemplate) Run(state multistep.StateBag) multistep.StepAction { +func NewStepMarkAsTemplate(artifact packer.Artifact) *stepMarkAsTemplate { + remoteFolder := "Discovered virtual machine" + vmname := artifact.Id() + + if artifact.BuilderId() == vsphere.BuilderId { + id := strings.Split(artifact.Id(), "::") + remoteFolder = id[1] + vmname = id[2] + } + + return &stepMarkAsTemplate{ + VMName: vmname, + RemoteFolder: remoteFolder, + } +} + +func (s *stepMarkAsTemplate) Run(_ context.Context, state multistep.StateBag) multistep.StepAction { ui := state.Get("ui").(packer.Ui) cli := 
state.Get("client").(*govmomi.Client) folder := state.Get("folder").(*object.Folder) @@ -26,19 +44,13 @@ func (s *stepMarkAsTemplate) Run(state multistep.StateBag) multistep.StepAction ui.Message("Marking as a template...") - vm, err := findRuntimeVM(cli, dcPath, s.VMName) + vm, err := findRuntimeVM(cli, dcPath, s.VMName, s.RemoteFolder) if err != nil { state.Put("error", err) ui.Error(err.Error()) return multistep.ActionHalt } - if err := unregisterPreviousVM(cli, folder, s.VMName); err != nil { - state.Put("error", err) - ui.Error(err.Error()) - return multistep.ActionHalt - } - dsPath, err := datastorePath(vm) if err != nil { state.Put("error", err) @@ -59,6 +71,12 @@ func (s *stepMarkAsTemplate) Run(state multistep.StateBag) multistep.StepAction return multistep.ActionHalt } + if err := unregisterPreviousVM(cli, folder, s.VMName); err != nil { + state.Put("error", err) + ui.Error(err.Error()) + return multistep.ActionHalt + } + task, err := folder.RegisterVM(context.Background(), dsPath.String(), s.VMName, true, nil, host) if err != nil { state.Put("error", err) @@ -100,13 +118,15 @@ func datastorePath(vm *object.VirtualMachine) (*object.DatastorePath, error) { datastore := re.FindStringSubmatch(disk)[1] vmxPath := path.Join("/", path.Dir(strings.Split(disk, " ")[1]), vm.Name()+".vmx") - return &object.DatastorePath{datastore, vmxPath}, nil + return &object.DatastorePath{ + Datastore: datastore, + Path: vmxPath, + }, nil } -// We will use the virtual machine created by vmware-iso builder -func findRuntimeVM(cli *govmomi.Client, dcPath, name string) (*object.VirtualMachine, error) { +func findRuntimeVM(cli *govmomi.Client, dcPath, name, remoteFolder string) (*object.VirtualMachine, error) { si := object.NewSearchIndex(cli.Client) - fullPath := path.Join(dcPath, "vm", "Discovered virtual machine", name) + fullPath := path.Join(dcPath, "vm", remoteFolder, name) ref, err := si.FindByInventoryPath(context.Background(), fullPath) if err != nil { diff --git a/post-processor/vsphere/artifact.go b/post-processor/vsphere/artifact.go new file mode 100644 index 000000000..90e475a28 --- /dev/null +++ b/post-processor/vsphere/artifact.go @@ -0,0 +1,47 @@ +package vsphere + +import ( + "fmt" +) + +const BuilderId = "packer.post-processor.vsphere" + +type Artifact struct { + files []string + datastore string + vmfolder string + vmname string +} + +func NewArtifact(datastore, vmfolder, vmname string, files []string) *Artifact { + return &Artifact{ + files: files, + datastore: datastore, + vmfolder: vmfolder, + vmname: vmname, + } +} + +func (*Artifact) BuilderId() string { + return BuilderId +} + +func (a *Artifact) Files() []string { + return a.files +} + +func (a *Artifact) Id() string { + return fmt.Sprintf("%s::%s::%s", a.datastore, a.vmfolder, a.vmname) +} + +func (a *Artifact) String() string { + return fmt.Sprintf("VM: %s Folder: %s Datastore: %s", a.vmname, a.vmfolder, a.datastore) +} + +func (*Artifact) State(name string) interface{} { + return nil +} + +func (a *Artifact) Destroy() error { + return nil +} diff --git a/post-processor/vsphere/artifact_test.go b/post-processor/vsphere/artifact_test.go new file mode 100644 index 000000000..f6d0d54be --- /dev/null +++ b/post-processor/vsphere/artifact_test.go @@ -0,0 +1,22 @@ +package vsphere + +import ( + "testing" + + "github.com/hashicorp/packer/packer" +) + +func TestArtifact_ImplementsArtifact(t *testing.T) { + var raw interface{} + raw = &Artifact{} + if _, ok := raw.(packer.Artifact); !ok { + t.Fatalf("Artifact should be a Artifact") + } +} 
+ +func TestArtifact_Id(t *testing.T) { + artifact := NewArtifact("datastore", "vmfolder", "vmname", nil) + if artifact.Id() != "datastore::vmfolder::vmname" { + t.Fatalf("must return datastore, vmfolder and vmname splitted by :: as Id") + } +} diff --git a/post-processor/vsphere/post-processor.go b/post-processor/vsphere/post-processor.go index c028cd087..fcf209ea7 100644 --- a/post-processor/vsphere/post-processor.go +++ b/post-processor/vsphere/post-processor.go @@ -110,9 +110,9 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac return nil, false, fmt.Errorf("VMX, OVF or OVA file not found") } - password := url.QueryEscape(p.config.Password) + password := escapeWithSpaces(p.config.Password) ovftool_uri := fmt.Sprintf("vi://%s:%s@%s/%s/host/%s", - url.QueryEscape(p.config.Username), + escapeWithSpaces(p.config.Username), password, p.config.Host, p.config.Datacenter, @@ -140,11 +140,13 @@ func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (pac ui.Message(p.filterLog(out.String())) + artifact = NewArtifact(p.config.Datastore, p.config.VMFolder, p.config.VMName, artifact.Files()) + return artifact, false, nil } func (p *PostProcessor) filterLog(s string) string { - password := url.QueryEscape(p.config.Password) + password := escapeWithSpaces(p.config.Password) return strings.Replace(s, password, "", -1) } @@ -184,3 +186,10 @@ func (p *PostProcessor) BuildArgs(source, ovftool_uri string) ([]string, error) return args, nil } + +// Encode everything except for + signs +func escapeWithSpaces(stringToEscape string) string { + escapedString := url.QueryEscape(stringToEscape) + escapedString = strings.Replace(escapedString, "+", `%20`, -1) + return escapedString +} diff --git a/post-processor/vsphere/post-processor_test.go b/post-processor/vsphere/post-processor_test.go index 61b58789b..65b99f9a7 100644 --- a/post-processor/vsphere/post-processor_test.go +++ b/post-processor/vsphere/post-processor_test.go @@ -40,3 +40,34 @@ func TestArgs(t *testing.T) { t.Logf("ovftool %s", strings.Join(args, " ")) } + +func TestEscaping(t *testing.T) { + type escapeCases struct { + Input string + Expected string + } + + cases := []escapeCases{ + {`this has spaces`, `this%20has%20spaces`}, + {`exclaimation_!`, `exclaimation_%21`}, + {`hash_#_dollar_$`, `hash_%23_dollar_%24`}, + {`ampersand_&awesome`, `ampersand_%26awesome`}, + {`single_quote_'_and_another_'`, `single_quote_%27_and_another_%27`}, + {`open_paren_(_close_paren_)`, `open_paren_%28_close_paren_%29`}, + {`asterisk_*_plus_+`, `asterisk_%2A_plus_%2B`}, + {`comma_,slash_/`, `comma_%2Cslash_%2F`}, + {`colon_:semicolon_;`, `colon_%3Asemicolon_%3B`}, + {`equal_=question_?`, `equal_%3Dquestion_%3F`}, + {`at_@`, `at_%40`}, + {`open_bracket_[closed_bracket]`, `open_bracket_%5Bclosed_bracket%5D`}, + {`user:password with $paces@host/name.foo`, `user%3Apassword%20with%20%24paces%40host%2Fname.foo`}, + } + for _, escapeCase := range cases { + received := escapeWithSpaces(escapeCase.Input) + + if escapeCase.Expected != received { + t.Errorf("Error escaping URL; expected %s got %s", escapeCase.Expected, received) + } + } + +} diff --git a/provisioner/ansible-local/communicator_mock.go b/provisioner/ansible-local/communicator_mock.go new file mode 100644 index 000000000..8ed59e9bb --- /dev/null +++ b/provisioner/ansible-local/communicator_mock.go @@ -0,0 +1,38 @@ +package ansiblelocal + +import ( + "github.com/hashicorp/packer/packer" + "io" + "os" +) + +type communicatorMock struct { + startCommand []string + 
uploadDestination []string +} + +func (c *communicatorMock) Start(cmd *packer.RemoteCmd) error { + c.startCommand = append(c.startCommand, cmd.Command) + cmd.SetExited(0) + return nil +} + +func (c *communicatorMock) Upload(dst string, _ io.Reader, _ *os.FileInfo) error { + c.uploadDestination = append(c.uploadDestination, dst) + return nil +} + +func (c *communicatorMock) UploadDir(dst, src string, exclude []string) error { + return nil +} + +func (c *communicatorMock) Download(src string, dst io.Writer) error { + return nil +} + +func (c *communicatorMock) DownloadDir(src, dst string, exclude []string) error { + return nil +} + +func (c *communicatorMock) verify() { +} diff --git a/provisioner/ansible-local/provisioner.go b/provisioner/ansible-local/provisioner.go index 2f6a873f9..bd4559388 100644 --- a/provisioner/ansible-local/provisioner.go +++ b/provisioner/ansible-local/provisioner.go @@ -38,6 +38,9 @@ type Config struct { // The main playbook file to execute. PlaybookFile string `mapstructure:"playbook_file"` + // The playbook files to execute. + PlaybookFiles []string `mapstructure:"playbook_files"` + // An array of local paths of playbook files to upload. PlaybookPaths []string `mapstructure:"playbook_paths"` @@ -48,6 +51,9 @@ type Config struct { // permissions in this directory. StagingDir string `mapstructure:"staging_directory"` + // If true, staging directory is removed after executing ansible. + CleanStagingDir bool `mapstructure:"clean_staging_directory"` + // The optional inventory file InventoryFile string `mapstructure:"inventory_file"` @@ -63,6 +69,8 @@ type Config struct { type Provisioner struct { config Config + + playbookFiles []string } func (p *Provisioner) Prepare(raws ...interface{}) error { @@ -77,6 +85,9 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { return err } + // Reset the state. 
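// The two options added above, playbook_files and clean_staging_directory,
// decode into PlaybookFiles and CleanStagingDir. A hypothetical template
// fragment showing how they would appear in a build; the file names are made
// up and this is only a sketch of the expected JSON shape.
const examplePlaybookFilesTemplate = `
{
  "provisioners": [
    {
      "type": "ansible-local",
      "playbook_files": ["playbooks/base.yml", "playbooks/app.yml"],
      "clean_staging_directory": true
    }
  ]
}
`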
+ p.playbookFiles = make([]string, 0, len(p.config.PlaybookFiles)) + // Defaults if p.config.Command == "" { p.config.Command = "ANSIBLE_FORCE_COLOR=1 PYTHONUNBUFFERED=1 ansible-playbook" @@ -91,9 +102,32 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { // Validation var errs *packer.MultiError - err = validateFileConfig(p.config.PlaybookFile, "playbook_file", true) - if err != nil { - errs = packer.MultiErrorAppend(errs, err) + + // Check that either playbook_file or playbook_files is specified + if len(p.config.PlaybookFiles) != 0 && p.config.PlaybookFile != "" { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("Either playbook_file or playbook_files can be specified, not both")) + } + if len(p.config.PlaybookFiles) == 0 && p.config.PlaybookFile == "" { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("Either playbook_file or playbook_files must be specified")) + } + if p.config.PlaybookFile != "" { + err = validateFileConfig(p.config.PlaybookFile, "playbook_file", true) + if err != nil { + errs = packer.MultiErrorAppend(errs, err) + } + } + + for _, playbookFile := range p.config.PlaybookFiles { + if err := validateFileConfig(playbookFile, "playbook_files", true); err != nil { + errs = packer.MultiErrorAppend(errs, err) + } else { + playbookFile, err := filepath.Abs(playbookFile) + if err != nil { + errs = packer.MultiErrorAppend(errs, err) + } else { + p.playbookFiles = append(p.playbookFiles, playbookFile) + } + } } // Check that the inventory file exists, if configured @@ -166,11 +200,15 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { } } - ui.Message("Uploading main Playbook file...") - src := p.config.PlaybookFile - dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src))) - if err := p.uploadFile(ui, comm, dst, src); err != nil { - return fmt.Errorf("Error uploading main playbook: %s", err) + if p.config.PlaybookFile != "" { + ui.Message("Uploading main Playbook file...") + src := p.config.PlaybookFile + dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src))) + if err := p.uploadFile(ui, comm, dst, src); err != nil { + return fmt.Errorf("Error uploading main playbook: %s", err) + } + } else if err := p.provisionPlaybookFiles(ui, comm); err != nil { + return err } if len(p.config.InventoryFile) == 0 { @@ -201,16 +239,16 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { if len(p.config.GalaxyFile) > 0 { ui.Message("Uploading galaxy file...") - src = p.config.GalaxyFile - dst = filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src))) + src := p.config.GalaxyFile + dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src))) if err := p.uploadFile(ui, comm, dst, src); err != nil { return fmt.Errorf("Error uploading galaxy file: %s", err) } } ui.Message("Uploading inventory file...") - src = p.config.InventoryFile - dst = filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src))) + src := p.config.InventoryFile + dst := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(src))) if err := p.uploadFile(ui, comm, dst, src); err != nil { return fmt.Errorf("Error uploading inventory file: %s", err) } @@ -260,6 +298,13 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { if err := p.executeAnsible(ui, comm); err != nil { return fmt.Errorf("Error executing Ansible: %s", err) } + + if p.config.CleanStagingDir { + ui.Message("Removing staging directory...") + if err := 
p.removeDir(ui, comm, p.config.StagingDir); err != nil { + return fmt.Errorf("Error removing staging directory: %s", err) + } + } return nil } @@ -269,6 +314,44 @@ func (p *Provisioner) Cancel() { os.Exit(0) } +func (p *Provisioner) provisionPlaybookFiles(ui packer.Ui, comm packer.Communicator) error { + var playbookDir string + if p.config.PlaybookDir != "" { + var err error + playbookDir, err = filepath.Abs(p.config.PlaybookDir) + if err != nil { + return err + } + } + for index, playbookFile := range p.playbookFiles { + if playbookDir != "" && strings.HasPrefix(playbookFile, playbookDir) { + p.playbookFiles[index] = strings.TrimPrefix(playbookFile, playbookDir) + continue + } + if err := p.provisionPlaybookFile(ui, comm, playbookFile); err != nil { + return err + } + } + return nil +} + +func (p *Provisioner) provisionPlaybookFile(ui packer.Ui, comm packer.Communicator, playbookFile string) error { + ui.Message(fmt.Sprintf("Uploading playbook file: %s", playbookFile)) + + remoteDir := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Dir(playbookFile))) + remotePlaybookFile := filepath.ToSlash(filepath.Join(p.config.StagingDir, playbookFile)) + + if err := p.createDir(ui, comm, remoteDir); err != nil { + return fmt.Errorf("Error uploading playbook file: %s [%s]", playbookFile, err) + } + + if err := p.uploadFile(ui, comm, remotePlaybookFile, playbookFile); err != nil { + return fmt.Errorf("Error uploading playbook: %s [%s]", playbookFile, err) + } + + return nil +} + func (p *Provisioner) executeGalaxy(ui packer.Ui, comm packer.Communicator) error { rolesDir := filepath.ToSlash(filepath.Join(p.config.StagingDir, "roles")) galaxyFile := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.GalaxyFile))) @@ -291,7 +374,6 @@ func (p *Provisioner) executeGalaxy(ui packer.Ui, comm packer.Communicator) erro } func (p *Provisioner) executeAnsible(ui packer.Ui, comm packer.Communicator) error { - playbook := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.PlaybookFile))) inventory := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.InventoryFile))) extraArgs := fmt.Sprintf(" --extra-vars \"packer_build_name=%s packer_builder_type=%s packer_http_addr=%s\" ", @@ -307,8 +389,28 @@ func (p *Provisioner) executeAnsible(ui packer.Ui, comm packer.Communicator) err } } + if p.config.PlaybookFile != "" { + playbookFile := filepath.ToSlash(filepath.Join(p.config.StagingDir, filepath.Base(p.config.PlaybookFile))) + if err := p.executeAnsiblePlaybook(ui, comm, playbookFile, extraArgs, inventory); err != nil { + return err + } + } + + for _, playbookFile := range p.playbookFiles { + playbookFile = filepath.ToSlash(filepath.Join(p.config.StagingDir, playbookFile)) + if err := p.executeAnsiblePlaybook(ui, comm, playbookFile, extraArgs, inventory); err != nil { + return err + } + } + return nil +} + +func (p *Provisioner) executeAnsiblePlaybook( + ui packer.Ui, comm packer.Communicator, playbookFile, extraArgs, inventory string, +) error { command := fmt.Sprintf("cd %s && %s %s%s -c local -i %s", - p.config.StagingDir, p.config.Command, playbook, extraArgs, inventory) + p.config.StagingDir, p.config.Command, playbookFile, extraArgs, inventory, + ) ui.Message(fmt.Sprintf("Executing Ansible: %s", command)) cmd := &packer.RemoteCmd{ Command: command, @@ -367,15 +469,33 @@ func (p *Provisioner) uploadFile(ui packer.Ui, comm packer.Communicator, dst, sr } func (p *Provisioner) createDir(ui packer.Ui, comm packer.Communicator, dir 
string) error { - ui.Message(fmt.Sprintf("Creating directory: %s", dir)) cmd := &packer.RemoteCmd{ Command: fmt.Sprintf("mkdir -p '%s'", dir), } + + ui.Message(fmt.Sprintf("Creating directory: %s", dir)) if err := cmd.StartWithUi(comm, ui); err != nil { return err } + if cmd.ExitStatus != 0 { - return fmt.Errorf("Non-zero exit status.") + return fmt.Errorf("Non-zero exit status. See output above for more information.") + } + return nil +} + +func (p *Provisioner) removeDir(ui packer.Ui, comm packer.Communicator, dir string) error { + cmd := &packer.RemoteCmd{ + Command: fmt.Sprintf("rm -rf '%s'", dir), + } + + ui.Message(fmt.Sprintf("Removing directory: %s", dir)) + if err := cmd.StartWithUi(comm, ui); err != nil { + return err + } + + if cmd.ExitStatus != 0 { + return fmt.Errorf("Non-zero exit status. See output above for more information.") } return nil } diff --git a/provisioner/ansible-local/provisioner_test.go b/provisioner/ansible-local/provisioner_test.go index 2195b7107..f3dec49cb 100644 --- a/provisioner/ansible-local/provisioner_test.go +++ b/provisioner/ansible-local/provisioner_test.go @@ -7,14 +7,14 @@ import ( "strings" "testing" + "fmt" + "github.com/hashicorp/packer/builder/docker" "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/provisioner/file" + "github.com/hashicorp/packer/template" + "os/exec" ) -func testConfig() map[string]interface{} { - m := make(map[string]interface{}) - return m -} - func TestProvisioner_Impl(t *testing.T) { var raw interface{} raw = &Provisioner{} @@ -73,6 +73,107 @@ func TestProvisionerPrepare_PlaybookFile(t *testing.T) { } } +func TestProvisionerPrepare_PlaybookFiles(t *testing.T) { + var p Provisioner + config := testConfig() + + err := p.Prepare(config) + if err == nil { + t.Fatal("should have error") + } + + config["playbook_file"] = "" + config["playbook_files"] = []string{} + err = p.Prepare(config) + if err == nil { + t.Fatal("should have error") + } + + playbook_file, err := ioutil.TempFile("", "playbook") + if err != nil { + t.Fatalf("err: %s", err) + } + defer os.Remove(playbook_file.Name()) + + config["playbook_file"] = playbook_file.Name() + config["playbook_files"] = []string{"some_other_file"} + err = p.Prepare(config) + if err == nil { + t.Fatal("should have error") + } + + p = Provisioner{} + config["playbook_file"] = playbook_file.Name() + config["playbook_files"] = []string{} + err = p.Prepare(config) + if err != nil { + t.Fatalf("err: %s", err) + } + + config["playbook_file"] = "" + config["playbook_files"] = []string{playbook_file.Name()} + err = p.Prepare(config) + if err != nil { + t.Fatalf("err: %s", err) + } +} + +func TestProvisionerProvision_PlaybookFiles(t *testing.T) { + var p Provisioner + config := testConfig() + + playbooks := createTempFiles("", 3) + defer removeFiles(playbooks...) 
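// A rough, standalone illustration of how provisionPlaybookFiles above treats
// paths, assuming playbook_dir itself is uploaded wholesale elsewhere: a
// playbook under playbook_dir is only re-pointed relative to that directory,
// while any other playbook gets uploaded individually. The paths are made up.

package main

import (
	"fmt"
	"strings"
)

func main() {
	playbookDir := "/local/playbooks"
	for _, pb := range []string{"/local/playbooks/site.yml", "/elsewhere/extra.yml"} {
		if strings.HasPrefix(pb, playbookDir) {
			fmt.Println("kept relative:", strings.TrimPrefix(pb, playbookDir)) // /site.yml
		} else {
			fmt.Println("uploaded to staging:", pb)
		}
	}
}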
+ + config["playbook_files"] = playbooks + err := p.Prepare(config) + if err != nil { + t.Fatalf("err: %s", err) + } + + comm := &communicatorMock{} + if err := p.Provision(&uiStub{}, comm); err != nil { + t.Fatalf("err: %s", err) + } + + assertPlaybooksUploaded(comm, playbooks) + assertPlaybooksExecuted(comm, playbooks) +} + +func TestProvisionerProvision_PlaybookFilesWithPlaybookDir(t *testing.T) { + var p Provisioner + config := testConfig() + + playbook_dir, err := ioutil.TempDir("", "") + if err != nil { + t.Fatalf("Failed to create playbook_dir: %s", err) + } + defer os.RemoveAll(playbook_dir) + playbooks := createTempFiles(playbook_dir, 3) + + playbookNames := make([]string, 0, len(playbooks)) + playbooksInPlaybookDir := make([]string, 0, len(playbooks)) + for _, playbook := range playbooks { + playbooksInPlaybookDir = append(playbooksInPlaybookDir, strings.TrimPrefix(playbook, playbook_dir)) + playbookNames = append(playbookNames, filepath.Base(playbook)) + } + + config["playbook_files"] = playbooks + config["playbook_dir"] = playbook_dir + err = p.Prepare(config) + if err != nil { + t.Fatalf("err: %s", err) + } + + comm := &communicatorMock{} + if err := p.Provision(&uiStub{}, comm); err != nil { + t.Fatalf("err: %s", err) + } + + assertPlaybooksNotUploaded(comm, playbookNames) + assertPlaybooksExecuted(comm, playbooksInPlaybookDir) +} + func TestProvisionerPrepare_InventoryFile(t *testing.T) { var p Provisioner config := testConfig() @@ -188,3 +289,239 @@ func TestProvisionerPrepare_Dirs(t *testing.T) { t.Fatalf("err: %s", err) } } + +func TestProvisionerPrepare_CleanStagingDir(t *testing.T) { + var p Provisioner + config := testConfig() + + playbook_file, err := ioutil.TempFile("", "playbook") + if err != nil { + t.Fatalf("err: %s", err) + } + defer os.Remove(playbook_file.Name()) + + config["playbook_file"] = playbook_file.Name() + config["clean_staging_directory"] = true + + err = p.Prepare(config) + if err != nil { + t.Fatalf("err: %s", err) + } + + if !p.config.CleanStagingDir { + t.Fatalf("expected clean_staging_directory to be set") + } +} + +func TestProvisionerProvisionDocker_PlaybookFiles(t *testing.T) { + testProvisionerProvisionDockerWithPlaybookFiles(t, playbookFilesDockerTemplate) +} + +func TestProvisionerProvisionDocker_PlaybookFilesWithPlaybookDir(t *testing.T) { + testProvisionerProvisionDockerWithPlaybookFiles(t, playbookFilesWithPlaybookDirDockerTemplate) +} + +func testProvisionerProvisionDockerWithPlaybookFiles(t *testing.T, templateString string) { + if os.Getenv("PACKER_ACC") == "" { + t.Skip("This test is only run with PACKER_ACC=1") + } + + ui := packer.TestUi(t) + cache := &packer.FileCache{CacheDir: os.TempDir()} + + tpl, err := template.Parse(strings.NewReader(templateString)) + if err != nil { + t.Fatalf("Unable to parse config: %s", err) + } + + // Check if docker executable can be found. 
+ _, err = exec.LookPath("docker") + if err != nil { + t.Error("docker command not found; please make sure docker is installed") + } + + // Setup the builder + builder := &docker.Builder{} + warnings, err := builder.Prepare(tpl.Builders["docker"].Config) + if err != nil { + t.Fatalf("Error preparing configuration %s", err) + } + if len(warnings) > 0 { + t.Fatal("Encountered configuration warnings; aborting") + } + + ansible := &Provisioner{} + err = ansible.Prepare(tpl.Provisioners[0].Config) + if err != nil { + t.Fatalf("Error preparing ansible-local provisioner: %s", err) + } + + download := &file.Provisioner{} + err = download.Prepare(tpl.Provisioners[1].Config) + if err != nil { + t.Fatalf("Error preparing download: %s", err) + } + + // Add hooks so the provisioners run during the build + hooks := map[string][]packer.Hook{} + hooks[packer.HookProvision] = []packer.Hook{ + &packer.ProvisionHook{ + Provisioners: []*packer.HookedProvisioner{ + {ansible, nil, ""}, + {download, nil, ""}, + }, + }, + } + hook := &packer.DispatchHook{Mapping: hooks} + + artifact, err := builder.Run(ui, hook, cache) + if err != nil { + t.Fatalf("Error running build %s", err) + } + defer os.Remove("hello_world") + defer artifact.Destroy() + + actualContent, err := ioutil.ReadFile("hello_world") + if err != nil { + t.Fatalf("Expected file not found: %s", err) + } + + expectedContent := "Hello world!" + if string(actualContent) != expectedContent { + t.Fatalf(`Unexpected file content: expected="%s", actual="%s"`, expectedContent, actualContent) + } +} + +func assertPlaybooksExecuted(comm *communicatorMock, playbooks []string) { + cmdIndex := 0 + for _, playbook := range playbooks { + playbook = filepath.ToSlash(playbook) + for ; cmdIndex < len(comm.startCommand); cmdIndex++ { + cmd := comm.startCommand[cmdIndex] + if strings.Contains(cmd, "ansible-playbook") && strings.Contains(cmd, playbook) { + break + } + } + if cmdIndex == len(comm.startCommand) { + panic(fmt.Sprintf("Playbook %s was not executed", playbook)) + } + } +} + +func assertPlaybooksUploaded(comm *communicatorMock, playbooks []string) { + uploadIndex := 0 + for _, playbook := range playbooks { + playbook = filepath.ToSlash(playbook) + for ; uploadIndex < len(comm.uploadDestination); uploadIndex++ { + dest := comm.uploadDestination[uploadIndex] + if strings.HasSuffix(dest, playbook) { + break + } + } + if uploadIndex == len(comm.uploadDestination) { + panic(fmt.Sprintf("Playbook %s was not uploaded", playbook)) + } + } +} + +func assertPlaybooksNotUploaded(comm *communicatorMock, playbooks []string) { + for _, playbook := range playbooks { + playbook = filepath.ToSlash(playbook) + for _, destination := range comm.uploadDestination { + if strings.HasSuffix(destination, playbook) { + panic(fmt.Sprintf("Playbook %s was uploaded", playbook)) + } + } + } +} + +func testConfig() map[string]interface{} { + m := make(map[string]interface{}) + return m +} + +func createTempFile(dir string) string { + file, err := ioutil.TempFile(dir, "") + if err != nil { + panic(fmt.Sprintf("err: %s", err)) + } + return file.Name() +} + +func createTempFiles(dir string, numFiles int) []string { + files := make([]string, 0, numFiles) + defer func() { + // Cleanup the files if not all were created. 
+ if len(files) < numFiles { + for _, file := range files { + os.Remove(file) + } + } + }() + + for i := 0; i < numFiles; i++ { + files = append(files, createTempFile(dir)) + } + return files +} + +func removeFiles(files ...string) { + for _, file := range files { + os.Remove(file) + } +} + +const playbookFilesDockerTemplate = ` +{ + "builders": [ + { + "type": "docker", + "image": "williamyeh/ansible:centos7", + "discard": true + } + ], + "provisioners": [ + { + "type": "ansible-local", + "playbook_files": [ + "test-fixtures/hello.yml", + "test-fixtures/world.yml" + ] + }, + { + "type": "file", + "source": "/tmp/hello_world", + "destination": "hello_world", + "direction": "download" + } + ] +} +` + +const playbookFilesWithPlaybookDirDockerTemplate = ` +{ + "builders": [ + { + "type": "docker", + "image": "williamyeh/ansible:centos7", + "discard": true + } + ], + "provisioners": [ + { + "type": "ansible-local", + "playbook_files": [ + "test-fixtures/hello.yml", + "test-fixtures/world.yml" + ], + "playbook_dir": "test-fixtures" + }, + { + "type": "file", + "source": "/tmp/hello_world", + "destination": "hello_world", + "direction": "download" + } + ] +} +` diff --git a/provisioner/ansible-local/test-fixtures/hello.yml b/provisioner/ansible-local/test-fixtures/hello.yml new file mode 100644 index 000000000..6bb8797d8 --- /dev/null +++ b/provisioner/ansible-local/test-fixtures/hello.yml @@ -0,0 +1,5 @@ +--- +- hosts: all + tasks: + - name: write Hello + shell: echo -n "Hello" >> /tmp/hello_world \ No newline at end of file diff --git a/provisioner/ansible-local/test-fixtures/world.yml b/provisioner/ansible-local/test-fixtures/world.yml new file mode 100644 index 000000000..98a205c7b --- /dev/null +++ b/provisioner/ansible-local/test-fixtures/world.yml @@ -0,0 +1,5 @@ +--- +- hosts: all + tasks: + - name: write world! + shell: echo -n " world!" 
>> /tmp/hello_world \ No newline at end of file diff --git a/provisioner/ansible-local/ui_stub.go b/provisioner/ansible-local/ui_stub.go new file mode 100644 index 000000000..4faa2a215 --- /dev/null +++ b/provisioner/ansible-local/ui_stub.go @@ -0,0 +1,15 @@ +package ansiblelocal + +type uiStub struct{} + +func (su *uiStub) Ask(string) (string, error) { + return "", nil +} + +func (su *uiStub) Error(string) {} + +func (su *uiStub) Machine(string, ...string) {} + +func (su *uiStub) Message(string) {} + +func (su *uiStub) Say(msg string) {} diff --git a/provisioner/ansible/provisioner.go b/provisioner/ansible/provisioner.go index dfe5aa6d6..99c4d07d9 100644 --- a/provisioner/ansible/provisioner.go +++ b/provisioner/ansible/provisioner.go @@ -15,6 +15,7 @@ import ( "net" "os" "os/exec" + "os/user" "path/filepath" "regexp" "strconv" @@ -25,6 +26,7 @@ import ( "golang.org/x/crypto/ssh" "github.com/hashicorp/packer/common" + commonhelper "github.com/hashicorp/packer/helper/common" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" @@ -55,7 +57,7 @@ type Config struct { SkipVersionCheck bool `mapstructure:"skip_version_check"` UseSFTP bool `mapstructure:"use_sftp"` InventoryDirectory string `mapstructure:"inventory_directory"` - inventoryFile string + InventoryFile string `mapstructure:"inventory_file"` } type Provisioner struct { @@ -66,9 +68,19 @@ type Provisioner struct { ansibleMajVersion uint } +type PassthroughTemplate struct { + WinRMPassword string +} + func (p *Provisioner) Prepare(raws ...interface{}) error { p.done = make(chan struct{}) + // Create passthrough for winrm password so we can fill it in once we know + // it + p.config.ctx.Data = &PassthroughTemplate{ + WinRMPassword: `{{.WinRMPassword}}`, + } + err := config.Decode(&p.config, &config.DecodeOpts{ Interpolate: true, InterpolateContext: &p.config.ctx, @@ -141,7 +153,12 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { } if p.config.User == "" { - p.config.User = os.Getenv("USER") + usr, err := user.Current() + if err != nil { + errs = packer.MultiErrorAppend(errs, err) + } else { + p.config.User = usr.Username + } } if p.config.User == "" { errs = packer.MultiErrorAppend(errs, fmt.Errorf("user: could not determine current user from environment.")) @@ -182,6 +199,25 @@ func (p *Provisioner) getVersion() error { func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { ui.Say("Provisioning with Ansible...") + // Interpolate env vars to check for .WinRMPassword + p.config.ctx.Data = &PassthroughTemplate{ + WinRMPassword: getWinRMPassword(p.config.PackerBuildName), + } + for i, envVar := range p.config.AnsibleEnvVars { + envVar, err := interpolate.Render(envVar, &p.config.ctx) + if err != nil { + return fmt.Errorf("Could not interpolate ansible env vars: %s", err) + } + p.config.AnsibleEnvVars[i] = envVar + } + // Interpolate extra vars to check for .WinRMPassword + for i, arg := range p.config.ExtraArguments { + arg, err := interpolate.Render(arg, &p.config.ctx) + if err != nil { + return fmt.Errorf("Could not interpolate ansible env vars: %s", err) + } + p.config.ExtraArguments[i] = arg + } k, err := newUserKey(p.config.SSHAuthorizedKeyFile) if err != nil { @@ -260,7 +296,7 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { go p.adapter.Serve() - if len(p.config.inventoryFile) == 0 { + if len(p.config.InventoryFile) == 0 { tf, err := ioutil.TempFile(p.config.InventoryDirectory, 
"packer-provisioner-ansible") if err != nil { return fmt.Errorf("Error preparing inventory file: %s", err) @@ -289,9 +325,9 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { return fmt.Errorf("Error preparing inventory file: %s", err) } tf.Close() - p.config.inventoryFile = tf.Name() + p.config.InventoryFile = tf.Name() defer func() { - p.config.inventoryFile = "" + p.config.InventoryFile = "" }() } @@ -314,15 +350,29 @@ func (p *Provisioner) Cancel() { func (p *Provisioner) executeAnsible(ui packer.Ui, comm packer.Communicator, privKeyFile string) error { playbook, _ := filepath.Abs(p.config.PlaybookFile) - inventory := p.config.inventoryFile + inventory := p.config.InventoryFile + if len(p.config.InventoryDirectory) > 0 { + inventory = p.config.InventoryDirectory + } var envvars []string args := []string{"--extra-vars", fmt.Sprintf("packer_build_name=%s packer_builder_type=%s", p.config.PackerBuildName, p.config.PackerBuilderType), "-i", inventory, playbook} if len(privKeyFile) > 0 { - args = append(args, "--private-key", privKeyFile) + // Changed this from using --private-key to supplying -e ansible_ssh_private_key_file as the latter + // is treated as a highest priority variable, and thus prevents overriding by dynamic variables + // as seen in #5852 + // args = append(args, "--private-key", privKeyFile) + args = append(args, "-e", fmt.Sprintf("ansible_ssh_private_key_file=%s", privKeyFile)) } + + // expose packer_http_addr extra variable + httpAddr := common.GetHTTPAddr() + if httpAddr != "" { + args = append(args, "--extra-vars", fmt.Sprintf("packer_http_addr=%s", httpAddr)) + } + args = append(args, p.config.ExtraArguments...) if len(p.config.AnsibleEnvVars) > 0 { envvars = append(envvars, p.config.AnsibleEnvVars...) @@ -368,7 +418,15 @@ func (p *Provisioner) executeAnsible(ui packer.Ui, comm packer.Communicator, pri go repeat(stdout) go repeat(stderr) - ui.Say(fmt.Sprintf("Executing Ansible: %s", strings.Join(cmd.Args, " "))) + // remove winrm password from command, if it's been added + flattenedCmd := strings.Join(cmd.Args, " ") + sanitized := flattenedCmd + if len(getWinRMPassword(p.config.PackerBuildName)) > 0 { + sanitized = strings.Replace(sanitized, + getWinRMPassword(p.config.PackerBuildName), "*****", -1) + } + ui.Say(fmt.Sprintf("Executing Ansible: %s", sanitized)) + if err := cmd.Start(); err != nil { return err } @@ -495,6 +553,11 @@ func newSigner(privKeyFile string) (*signer, error) { return signer, nil } +func getWinRMPassword(buildName string) string { + winRMPass, _ := commonhelper.RetrieveSharedState("winrm_password", buildName) + return winRMPass +} + // Ui provides concurrency-safe access to packer.Ui. 
type Ui struct { sem chan int diff --git a/provisioner/ansible/provisioner_test.go b/provisioner/ansible/provisioner_test.go index 47f185c9e..c3300bc6b 100644 --- a/provisioner/ansible/provisioner_test.go +++ b/provisioner/ansible/provisioner_test.go @@ -76,6 +76,16 @@ func TestProvisionerPrepare_Defaults(t *testing.T) { if err != nil { t.Fatalf("err: %s", err) } + defer os.Remove(playbook_file.Name()) + + err = os.Unsetenv("USER") + if err != nil { + t.Fatalf("err: %s", err) + } + err = p.Prepare(config) + if err != nil { + t.Fatalf("err: %s", err) + } } func TestProvisionerPrepare_PlaybookFile(t *testing.T) { diff --git a/provisioner/chef-client/provisioner.go b/provisioner/chef-client/provisioner.go index 3853613fc..13575f731 100644 --- a/provisioner/chef-client/provisioner.go +++ b/provisioner/chef-client/provisioner.go @@ -30,13 +30,13 @@ type guestOSTypeConfig struct { var guestOSTypeConfigs = map[string]guestOSTypeConfig{ provisioner.UnixOSType: { executeCommand: "{{if .Sudo}}sudo {{end}}chef-client --no-color -c {{.ConfigPath}} -j {{.JsonPath}}", - installCommand: "curl -L https://www.chef.io/chef/install.sh | {{if .Sudo}}sudo {{end}}bash", + installCommand: "curl -L https://omnitruck.chef.io/install.sh | {{if .Sudo}}sudo {{end}}bash", knifeCommand: "{{if .Sudo}}sudo {{end}}knife {{.Args}} {{.Flags}}", stagingDir: "/tmp/packer-chef-client", }, provisioner.WindowsOSType: { executeCommand: "c:/opscode/chef/bin/chef-client.bat --no-color -c {{.ConfigPath}} -j {{.JsonPath}}", - installCommand: "powershell.exe -Command \"(New-Object System.Net.WebClient).DownloadFile('http://chef.io/chef/install.msi', 'C:\\Windows\\Temp\\chef.msi');Start-Process 'msiexec' -ArgumentList '/qb /i C:\\Windows\\Temp\\chef.msi' -NoNewWindow -Wait\"", + installCommand: "powershell.exe -Command \". 
{ iwr -useb https://omnitruck.chef.io/install.ps1 } | iex; install\"", knifeCommand: "c:/opscode/chef/bin/knife.bat {{.Args}} {{.Flags}}", stagingDir: "C:/Windows/Temp/packer-chef-client", }, @@ -56,13 +56,17 @@ type Config struct { InstallCommand string `mapstructure:"install_command"` KnifeCommand string `mapstructure:"knife_command"` NodeName string `mapstructure:"node_name"` + PolicyGroup string `mapstructure:"policy_group"` + PolicyName string `mapstructure:"policy_name"` PreventSudo bool `mapstructure:"prevent_sudo"` RunList []string `mapstructure:"run_list"` ServerUrl string `mapstructure:"server_url"` SkipCleanClient bool `mapstructure:"skip_clean_client"` SkipCleanNode bool `mapstructure:"skip_clean_node"` + SkipCleanStagingDirectory bool `mapstructure:"skip_clean_staging_directory"` SkipInstall bool `mapstructure:"skip_install"` SslVerifyMode string `mapstructure:"ssl_verify_mode"` + TrustedCertsDir string `mapstructure:"trusted_certs_dir"` StagingDir string `mapstructure:"staging_directory"` ValidationClientName string `mapstructure:"validation_client_name"` ValidationKeyPath string `mapstructure:"validation_key_path"` @@ -81,8 +85,11 @@ type ConfigTemplate struct { ClientKey string EncryptedDataBagSecretPath string NodeName string + PolicyGroup string + PolicyName string ServerUrl string SslVerifyMode string + TrustedCertsDir string ValidationClientName string ValidationKeyPath string } @@ -190,6 +197,10 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { } } + if (p.config.PolicyName != "") != (p.config.PolicyGroup != "") { + errs = packer.MultiErrorAppend(errs, fmt.Errorf("If either policy_name or policy_group are set, they must both be set.")) + } + jsonValid := true for k, v := range p.config.Json { p.config.Json[k], err = p.deepJsonFix(k, v) @@ -268,7 +279,10 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { remoteValidationKeyPath, p.config.ValidationClientName, p.config.ChefEnvironment, - p.config.SslVerifyMode) + p.config.PolicyGroup, + p.config.PolicyName, + p.config.SslVerifyMode, + p.config.TrustedCertsDir) if err != nil { return fmt.Errorf("Error creating Chef config file: %s", err) } @@ -283,7 +297,7 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { if !(p.config.SkipCleanNode && p.config.SkipCleanClient) { knifeConfigPath, knifeErr := p.createKnifeConfig( - ui, comm, nodeName, serverUrl, p.config.ClientKey, p.config.SslVerifyMode) + ui, comm, nodeName, serverUrl, p.config.ClientKey, p.config.SslVerifyMode, p.config.TrustedCertsDir) if knifeErr != nil { return fmt.Errorf("Error creating knife config on node: %s", knifeErr) @@ -306,8 +320,10 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { return fmt.Errorf("Error executing Chef: %s", err) } - if err := p.removeDir(ui, comm, p.config.StagingDir); err != nil { - return fmt.Errorf("Error removing %s: %s", p.config.StagingDir, err) + if !p.config.SkipCleanStagingDirectory { + if err := p.removeDir(ui, comm, p.config.StagingDir); err != nil { + return fmt.Errorf("Error removing %s: %s", p.config.StagingDir, err) + } } return nil @@ -341,7 +357,10 @@ func (p *Provisioner) createConfig( remoteKeyPath string, validationClientName string, chefEnvironment string, - sslVerifyMode string) (string, error) { + policyGroup string, + policyName string, + sslVerifyMode string, + trustedCertsDir string) (string, error) { ui.Message("Creating configuration file 'client.rb'") @@ -370,7 +389,10 @@ func (p *Provisioner) 
createConfig( ValidationKeyPath: remoteKeyPath, ValidationClientName: validationClientName, ChefEnvironment: chefEnvironment, + PolicyGroup: policyGroup, + PolicyName: policyName, SslVerifyMode: sslVerifyMode, + TrustedCertsDir: trustedCertsDir, EncryptedDataBagSecretPath: encryptedDataBagSecretPath, } configString, err := interpolate.Render(tpl, &ctx) @@ -386,7 +408,7 @@ func (p *Provisioner) createConfig( return remotePath, nil } -func (p *Provisioner) createKnifeConfig(ui packer.Ui, comm packer.Communicator, nodeName string, serverUrl string, clientKey string, sslVerifyMode string) (string, error) { +func (p *Provisioner) createKnifeConfig(ui packer.Ui, comm packer.Communicator, nodeName string, serverUrl string, clientKey string, sslVerifyMode string, trustedCertsDir string) (string, error) { ui.Message("Creating configuration file 'knife.rb'") // Read the template @@ -394,10 +416,11 @@ func (p *Provisioner) createKnifeConfig(ui packer.Ui, comm packer.Communicator, ctx := p.config.ctx ctx.Data = &ConfigTemplate{ - NodeName: nodeName, - ServerUrl: serverUrl, - ClientKey: clientKey, - SslVerifyMode: sslVerifyMode, + NodeName: nodeName, + ServerUrl: serverUrl, + ClientKey: clientKey, + SslVerifyMode: sslVerifyMode, + TrustedCertsDir: trustedCertsDir, } configString, err := interpolate.Render(tpl, &ctx) if err != nil { @@ -682,9 +705,18 @@ node_name "{{.NodeName}}" {{if ne .ChefEnvironment ""}} environment "{{.ChefEnvironment}}" {{end}} +{{if ne .PolicyGroup ""}} +policy_group "{{.PolicyGroup}}" +{{end}} +{{if ne .PolicyName ""}} +policy_name "{{.PolicyName}}" +{{end}} {{if ne .SslVerifyMode ""}} ssl_verify_mode :{{.SslVerifyMode}} {{end}} +{{if ne .TrustedCertsDir ""}} +trusted_certs_dir "{{.TrustedCertsDir}}" +{{end}} ` var DefaultKnifeTemplate = ` @@ -696,4 +728,7 @@ node_name "{{.NodeName}}" {{if ne .SslVerifyMode ""}} ssl_verify_mode :{{.SslVerifyMode}} {{end}} +{{if ne .TrustedCertsDir ""}} +trusted_certs_dir "{{.TrustedCertsDir}}" +{{end}} ` diff --git a/provisioner/chef-client/provisioner_test.go b/provisioner/chef-client/provisioner_test.go index ac148721a..acb674b53 100644 --- a/provisioner/chef-client/provisioner_test.go +++ b/provisioner/chef-client/provisioner_test.go @@ -243,3 +243,27 @@ func TestProvisioner_removeDir(t *testing.T) { } } } + +func TestProvisionerPrepare_policy(t *testing.T) { + var p Provisioner + + var policyTests = []struct { + name string + group string + success bool + }{ + {"", "", true}, + {"a", "b", true}, + {"a", "", false}, + {"", "a", false}, + } + for _, tt := range policyTests { + config := testConfig() + config["policy_name"] = tt.name + config["policy_group"] = tt.group + err := p.Prepare(config) + if (err == nil) != tt.success { + t.Fatalf("wasn't expecting %+v to fail: %s", tt, err.Error()) + } + } +} diff --git a/provisioner/chef-solo/provisioner_test.go b/provisioner/chef-solo/provisioner_test.go index 15ae499c7..01d237330 100644 --- a/provisioner/chef-solo/provisioner_test.go +++ b/provisioner/chef-solo/provisioner_test.go @@ -1,10 +1,11 @@ package chefsolo import ( - "github.com/hashicorp/packer/packer" "io/ioutil" "os" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { diff --git a/provisioner/converge/provisioner.go b/provisioner/converge/provisioner.go index 0b8b93e5d..16dddd73f 100644 --- a/provisioner/converge/provisioner.go +++ b/provisioner/converge/provisioner.go @@ -34,7 +34,7 @@ type Config struct { // Execution Module string `mapstructure:"module"` WorkingDirectory string 
`mapstructure:"working_directory"` - Params map[string]string `mapstucture:"params"` + Params map[string]string `mapstructure:"params"` ExecuteCommand string `mapstructure:"execute_command"` PreventSudo bool `mapstructure:"prevent_sudo"` diff --git a/provisioner/file/provisioner.go b/provisioner/file/provisioner.go index b3283b20c..a0e8d2465 100644 --- a/provisioner/file/provisioner.go +++ b/provisioner/file/provisioner.go @@ -26,6 +26,9 @@ type Config struct { // Direction Direction string + // False if the sources have to exist. + Generated bool + ctx interpolate.Context } @@ -61,7 +64,7 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { if p.config.Direction == "upload" { for _, src := range p.config.Sources { - if _, err := os.Stat(src); err != nil { + if _, err := os.Stat(src); p.config.Generated == false && err != nil { errs = packer.MultiErrorAppend(errs, fmt.Errorf("Bad source '%s': %s", src, err)) } diff --git a/provisioner/file/provisioner_test.go b/provisioner/file/provisioner_test.go index 762eb83f8..175082aae 100644 --- a/provisioner/file/provisioner_test.go +++ b/provisioner/file/provisioner_test.go @@ -45,6 +45,12 @@ func TestProvisionerPrepare_InvalidSource(t *testing.T) { if err == nil { t.Fatalf("should require existing file") } + + config["generated"] = false + err = p.Prepare(config) + if err == nil { + t.Fatalf("should required existing file") + } } func TestProvisionerPrepare_ValidSource(t *testing.T) { @@ -58,11 +64,28 @@ func TestProvisionerPrepare_ValidSource(t *testing.T) { config := testConfig() config["source"] = tf.Name() - err = p.Prepare(config) if err != nil { t.Fatalf("should allow valid file: %s", err) } + + config["generated"] = false + err = p.Prepare(config) + if err != nil { + t.Fatalf("should allow valid file: %s", err) + } +} + +func TestProvisionerPrepare_GeneratedSource(t *testing.T) { + var p Provisioner + + config := testConfig() + config["source"] = "/this/should/not/exist" + config["generated"] = true + err := p.Prepare(config) + if err != nil { + t.Fatalf("should allow non-existing file: %s", err) + } } func TestProvisionerPrepare_EmptyDestination(t *testing.T) { diff --git a/provisioner/guest_commands.go b/provisioner/guest_commands.go index 715d6cb31..3a361a208 100644 --- a/provisioner/guest_commands.go +++ b/provisioner/guest_commands.go @@ -13,6 +13,8 @@ type guestOSTypeCommand struct { chmod string mkdir string removeDir string + statPath string + mv string } var guestOSTypeCommands = map[string]guestOSTypeCommand{ @@ -20,11 +22,15 @@ var guestOSTypeCommands = map[string]guestOSTypeCommand{ chmod: "chmod %s '%s'", mkdir: "mkdir -p '%s'", removeDir: "rm -rf '%s'", + statPath: "stat '%s'", + mv: "mv '%s' '%s'", }, WindowsOSType: { chmod: "echo 'skipping chmod %s %s'", // no-op mkdir: "powershell.exe -Command \"New-Item -ItemType directory -Force -ErrorAction SilentlyContinue -Path %s\"", removeDir: "powershell.exe -Command \"rm %s -recurse -force\"", + statPath: "powershell.exe -Command { if (test-path %s) { exit 0 } else { exit 1 } }", + mv: "powershell.exe -Command \"mv %s %s\"", }, } @@ -64,6 +70,14 @@ func (g *GuestCommands) escapePath(path string) string { return path } +func (g *GuestCommands) StatPath(path string) string { + return g.sudo(fmt.Sprintf(g.commands().statPath, g.escapePath(path))) +} + +func (g *GuestCommands) MovePath(srcPath string, dstPath string) string { + return g.sudo(fmt.Sprintf(g.commands().mv, g.escapePath(srcPath), g.escapePath(dstPath))) +} + func (g *GuestCommands) sudo(cmd string) string { if 
g.GuestOSType == UnixOSType && g.Sudo { return "sudo " + cmd diff --git a/provisioner/guest_commands_test.go b/provisioner/guest_commands_test.go index 50d15870a..08baca058 100644 --- a/provisioner/guest_commands_test.go +++ b/provisioner/guest_commands_test.go @@ -118,3 +118,78 @@ func TestRemoveDir(t *testing.T) { t.Fatalf("Unexpected Windows remove dir cmd: %s", cmd) } } + +func TestStatPath(t *testing.T) { + // *nix + guestCmd, err := NewGuestCommands(UnixOSType, false) + if err != nil { + t.Fatalf("Failed to create new GuestCommands for OS: %s", UnixOSType) + } + cmd := guestCmd.StatPath("/tmp/somedir") + if cmd != "stat '/tmp/somedir'" { + t.Fatalf("Unexpected Unix stat cmd: %s", cmd) + } + + guestCmd, err = NewGuestCommands(UnixOSType, true) + if err != nil { + t.Fatalf("Failed to create new GuestCommands for OS: %s", UnixOSType) + } + cmd = guestCmd.StatPath("/tmp/somedir") + if cmd != "sudo stat '/tmp/somedir'" { + t.Fatalf("Unexpected Unix stat cmd: %s", cmd) + } + + // Windows OS + guestCmd, err = NewGuestCommands(WindowsOSType, false) + if err != nil { + t.Fatalf("Failed to create new GuestCommands for OS: %s", WindowsOSType) + } + cmd = guestCmd.StatPath("C:\\Temp\\SomeDir") + if cmd != "powershell.exe -Command { if (test-path C:\\Temp\\SomeDir) { exit 0 } else { exit 1 } }" { + t.Fatalf("Unexpected Windows stat cmd: %s", cmd) + } + + // Windows OS w/ space in path + cmd = guestCmd.StatPath("C:\\Temp\\Some Dir") + if cmd != "powershell.exe -Command { if (test-path C:\\Temp\\Some` Dir) { exit 0 } else { exit 1 } }" { + t.Fatalf("Unexpected Windows stat cmd: %s", cmd) + } +} + +func TestMovePath(t *testing.T) { + // *nix + guestCmd, err := NewGuestCommands(UnixOSType, false) + if err != nil { + t.Fatalf("Failed to create new GuestCommands for OS: %s", UnixOSType) + } + cmd := guestCmd.MovePath("/tmp/somedir", "/tmp/newdir") + if cmd != "mv '/tmp/somedir' '/tmp/newdir'" { + t.Fatalf("Unexpected Unix move cmd: %s", cmd) + } + + // sudo *nix + guestCmd, err = NewGuestCommands(UnixOSType, true) + if err != nil { + t.Fatalf("Failed to create new sudo GuestCommands for OS: %s", UnixOSType) + } + cmd = guestCmd.MovePath("/tmp/somedir", "/tmp/newdir") + if cmd != "sudo mv '/tmp/somedir' '/tmp/newdir'" { + t.Fatalf("Unexpected Unix sudo mv cmd: %s", cmd) + } + + // Windows OS + guestCmd, err = NewGuestCommands(WindowsOSType, false) + if err != nil { + t.Fatalf("Failed to create new GuestCommands for OS: %s", WindowsOSType) + } + cmd = guestCmd.MovePath("C:\\Temp\\SomeDir", "C:\\Temp\\NewDir") + if cmd != "powershell.exe -Command \"mv C:\\Temp\\SomeDir C:\\Temp\\NewDir\"" { + t.Fatalf("Unexpected Windows remove dir cmd: %s", cmd) + } + + // Windows OS w/ space in path + cmd = guestCmd.MovePath("C:\\Temp\\Some Dir", "C:\\Temp\\New Dir") + if cmd != "powershell.exe -Command \"mv C:\\Temp\\Some` Dir C:\\Temp\\New` Dir\"" { + t.Fatalf("Unexpected Windows remove dir cmd: %s", cmd) + } +} diff --git a/provisioner/powershell/provisioner.go b/provisioner/powershell/provisioner.go index 4297acf08..f3600a4c3 100644 --- a/provisioner/powershell/provisioner.go +++ b/provisioner/powershell/provisioner.go @@ -1,5 +1,5 @@ -// This package implements a provisioner for Packer that executes -// powershell scripts within the remote machine. +// This package implements a provisioner for Packer that executes powershell +// scripts within the remote machine. 
package powershell import ( @@ -17,6 +17,7 @@ import ( "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/common/uuid" + commonhelper "github.com/hashicorp/packer/helper/common" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" "github.com/hashicorp/packer/template/interpolate" @@ -24,6 +25,13 @@ import ( var retryableSleep = 2 * time.Second +var psEscape = strings.NewReplacer( + "$", "`$", + "\"", "`\"", + "`", "``", + "'", "`'", +) + type Config struct { common.PackerConfig `mapstructure:",squash"` @@ -31,8 +39,8 @@ type Config struct { // converted from Windows to Unix-style. Binary bool - // An inline script to execute. Multiple strings are all executed - // in the context of a single shell. + // An inline script to execute. Multiple strings are all executed in the + // context of a single shell. Inline []string // The local path of the powershell script to upload and execute. @@ -41,27 +49,33 @@ type Config struct { // An array of multiple scripts to run. Scripts []string - // An array of environment variables that will be injected before - // your command(s) are executed. + // An array of environment variables that will be injected before your + // command(s) are executed. Vars []string `mapstructure:"environment_vars"` // The remote path where the local powershell script will be uploaded to. - // This should be set to a writable file that is in a pre-existing directory. + // This should be set to a writable file that is in a pre-existing + // directory. RemotePath string `mapstructure:"remote_path"` + // The remote path where the file containing the environment variables + // will be uploaded to. This should be set to a writable file that is in a + // pre-existing directory. + RemoteEnvVarPath string `mapstructure:"remote_env_var_path"` + // The command used to execute the script. The '{{ .Path }}' variable - // should be used to specify where the script goes, {{ .Vars }} - // can be used to inject the environment_vars into the environment. + // should be used to specify where the script goes, {{ .Vars }} can be + // used to inject the environment_vars into the environment. ExecuteCommand string `mapstructure:"execute_command"` - // The command used to execute the elevated script. The '{{ .Path }}' variable - // should be used to specify where the script goes, {{ .Vars }} + // The command used to execute the elevated script. The '{{ .Path }}' + // variable should be used to specify where the script goes, {{ .Vars }} // can be used to inject the environment_vars into the environment. ElevatedExecuteCommand string `mapstructure:"elevated_execute_command"` - // The timeout for retrying to start the process. Until this timeout - // is reached, if the provisioner can't start a process, it retries. - // This can be set high to allow for reboots. + // The timeout for retrying to start the process. Until this timeout is + // reached, if the provisioner can't start a process, it retries. This + // can be set high to allow for reboots. StartRetryTimeout time.Duration `mapstructure:"start_retry_timeout"` // This is used in the template generation to format environment variables @@ -72,15 +86,16 @@ type Config struct { // inside the `ElevatedExecuteCommand` template. 
ElevatedEnvVarFormat string `mapstructure:"elevated_env_var_format"` - // Instructs the communicator to run the remote script as a - // Windows scheduled task, effectively elevating the remote - // user by impersonating a logged-in user + // Instructs the communicator to run the remote script as a Windows + // scheduled task, effectively elevating the remote user by impersonating + // a logged-in user ElevatedUser string `mapstructure:"elevated_user"` ElevatedPassword string `mapstructure:"elevated_password"` - // Valid Exit Codes - 0 is not always the only valid error code! - // See http://www.symantec.com/connect/articles/windows-system-error-codes-exit-codes-description for examples - // such as 3010 - "The requested operation is successful. Changes will not be effective until the system is rebooted." + // Valid Exit Codes - 0 is not always the only valid error code! See + // http://www.symantec.com/connect/articles/windows-system-error-codes-exit-codes-description + // for examples such as 3010 - "The requested operation is successful. + // Changes will not be effective until the system is rebooted." ValidExitCodes []int `mapstructure:"valid_exit_codes"` ctx interpolate.Context @@ -92,11 +107,22 @@ type Provisioner struct { } type ExecuteCommandTemplate struct { - Vars string - Path string + Vars string + Path string + WinRMPassword string +} + +type EnvVarsTemplate struct { + WinRMPassword string } func (p *Provisioner) Prepare(raws ...interface{}) error { + // Create passthrough for winrm password so we can fill it in once we know + // it + p.config.ctx.Data = &EnvVarsTemplate{ + WinRMPassword: `{{.WinRMPassword}}`, + } + err := config.Decode(&p.config, &config.DecodeOpts{ Interpolate: true, InterpolateContext: &p.config.ctx, @@ -113,7 +139,7 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { } if p.config.EnvVarFormat == "" { - p.config.EnvVarFormat = `$env:%s=\"%s\"; ` + p.config.EnvVarFormat = `$env:%s="%s"; ` } if p.config.ElevatedEnvVarFormat == "" { @@ -121,7 +147,7 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { } if p.config.ExecuteCommand == "" { - p.config.ExecuteCommand = `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};{{.Vars}}&'{{.Path}}';exit $LastExitCode }"` + p.config.ExecuteCommand = `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};. {{.Vars}}; &'{{.Path}}';exit $LastExitCode }"` } if p.config.ElevatedExecuteCommand == "" { @@ -141,6 +167,11 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { p.config.RemotePath = fmt.Sprintf(`c:/Windows/Temp/script-%s.ps1`, uuid) } + if p.config.RemoteEnvVarPath == "" { + uuid := uuid.TimeOrderedUUID() + p.config.RemoteEnvVarPath = fmt.Sprintf(`c:/Windows/Temp/packer-ps-env-vars-%s.ps1`, uuid) + } + if p.config.Scripts == nil { p.config.Scripts = make([]string, 0) } @@ -204,9 +235,8 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { return nil } -// Takes the inline scripts, concatenates them -// into a temporary file and returns a string containing the location -// of said file. +// Takes the inline scripts, concatenates them into a temporary file and +// returns a string containing the location of said file. 
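With `remote_env_var_path` defaulting to a `packer-ps-env-vars` script under `c:/Windows/Temp`, the new default `execute_command` dot-sources `{{.Vars}}` (which, as the later hunks show, is rendered as that uploaded script's path) before invoking `{{.Path}}`. A rough, self-contained sketch of that rendering step; the data struct and paths below are illustrative stand-ins for the provisioner's own `ExecuteCommandTemplate`:

```
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/packer/template/interpolate"
)

// Stand-in for the provisioner's ExecuteCommandTemplate.
type execCommandData struct {
	Vars          string // remote path of the uploaded env-var script
	Path          string // remote path of the main script
	WinRMPassword string
}

func main() {
	// The new default execute_command from the hunk above.
	tmpl := `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};. {{.Vars}}; &'{{.Path}}';exit $LastExitCode }"`

	ctx := interpolate.Context{Data: &execCommandData{
		Vars: "c:/Windows/Temp/packer-ps-env-vars-1234.ps1", // illustrative; normally UUID-suffixed
		Path: "c:/Windows/Temp/script-1234.ps1",
	}}

	cmd, err := interpolate.Render(tmpl, &ctx)
	if err != nil {
		log.Fatal(err)
	}
	// The env-var script is sourced (". ...ps1;") before "&'...ps1'" executes.
	fmt.Println(cmd)
}
```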
func extractScript(p *Provisioner) (string, error) { temp, err := ioutil.TempFile(os.TempDir(), "packer-powershell-provisioner") if err != nil { @@ -241,6 +271,8 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { ui.Error(fmt.Sprintf("Unable to extract inline scripts into a file: %s", err)) } scripts = append(scripts, temp) + // Remove temp script containing the inline commands when done + defer os.Remove(temp) } for _, path := range scripts { @@ -258,11 +290,10 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { return fmt.Errorf("Error processing command: %s", err) } - // Upload the file and run the command. Do this in the context of - // a single retryable function so that we don't end up with - // the case that the upload succeeded, a restart is initiated, - // and then the command is executed but the file doesn't exist - // any longer. + // Upload the file and run the command. Do this in the context of a + // single retryable function so that we don't end up with the case + // that the upload succeeded, a restart is initiated, and then the + // command is executed but the file doesn't exist any longer. var cmd *packer.RemoteCmd err = p.retryable(func() error { if _, err := f.Seek(0, 0); err != nil { @@ -300,13 +331,13 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { } func (p *Provisioner) Cancel() { - // Just hard quit. It isn't a big deal if what we're doing keeps - // running on the other side. + // Just hard quit. It isn't a big deal if what we're doing keeps running + // on the other side. os.Exit(0) } -// retryable will retry the given function over and over until a -// non-error is returned. +// retryable will retry the given function over and over until a non-error is +// returned. func (p *Provisioner) retryable(f func() error) error { startTimeout := time.After(p.config.StartRetryTimeout) for { @@ -319,9 +350,8 @@ func (p *Provisioner) retryable(f func() error) error { err = fmt.Errorf("Retryable error: %s", err) log.Print(err.Error()) - // Check if we timed out, otherwise we retry. It is safe to - // retry since the only error case above is if the command - // failed to START. + // Check if we timed out, otherwise we retry. It is safe to retry + // since the only error case above is if the command failed to START. 
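The retry helper simply re-runs the supplied function until it returns nil or `start_retry_timeout` elapses, sleeping `retryableSleep` (two seconds, per the variable near the top of this file) between attempts; that is what makes the upload-then-execute step safe across guest reboots. A self-contained sketch of the same pattern, with a made-up timeout and operation:

```
package main

import (
	"errors"
	"fmt"
	"log"
	"time"
)

func main() {
	startTimeout := time.After(5 * time.Minute) // stand-in for start_retry_timeout
	retryableSleep := 2 * time.Second           // same default as the provisioner
	attempts := 0

	// Stand-in for "upload the script and start the command".
	op := func() error {
		attempts++
		if attempts < 3 {
			return errors.New("transient failure")
		}
		return nil
	}

	for {
		err := op()
		if err == nil {
			break
		}
		log.Printf("Retryable error: %s", err)
		select {
		case <-startTimeout:
			log.Fatalf("timed out: %s", err) // give up once the timeout fires
		case <-time.After(retryableSleep):
			// sleep, then retry
		}
	}
	fmt.Printf("succeeded after %d attempts\n", attempts)
}
```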
select { case <-startTimeout: return err @@ -331,6 +361,22 @@ func (p *Provisioner) retryable(f func() error) error { } } +// Environment variables required within the remote environment are uploaded +// within a PS script and then enabled by 'dot sourcing' the script +// immediately prior to execution of the main command +func (p *Provisioner) prepareEnvVars(elevated bool) (err error) { + // Collate all required env vars into a plain string with required + // formatting applied + flattenedEnvVars := p.createFlattenedEnvVars(elevated) + // Create a powershell script on the target build fs containing the + // flattened env vars + err = p.uploadEnvVars(flattenedEnvVars) + if err != nil { + return err + } + return +} + func (p *Provisioner) createFlattenedEnvVars(elevated bool) (flattened string) { flattened = "" envVars := make(map[string]string) @@ -338,15 +384,30 @@ func (p *Provisioner) createFlattenedEnvVars(elevated bool) (flattened string) { // Always available Packer provided env vars envVars["PACKER_BUILD_NAME"] = p.config.PackerBuildName envVars["PACKER_BUILDER_TYPE"] = p.config.PackerBuilderType + httpAddr := common.GetHTTPAddr() if httpAddr != "" { envVars["PACKER_HTTP_ADDR"] = httpAddr } + // interpolate environment variables + p.config.ctx.Data = &EnvVarsTemplate{ + WinRMPassword: getWinRMPassword(p.config.PackerBuildName), + } // Split vars into key/value components for _, envVar := range p.config.Vars { + envVar, err := interpolate.Render(envVar, &p.config.ctx) + if err != nil { + return + } keyValue := strings.SplitN(envVar, "=", 2) - envVars[keyValue[0]] = keyValue[1] + // Escape chars special to PS in each env var value + escapedEnvVarValue := psEscape.Replace(keyValue[1]) + if escapedEnvVarValue != keyValue[1] { + log.Printf("Env var %s converted to %s after escaping chars special to PS", keyValue[1], + escapedEnvVarValue) + } + envVars[keyValue[0]] = escapedEnvVarValue } // Create a list of env var keys in sorted order @@ -367,6 +428,25 @@ func (p *Provisioner) createFlattenedEnvVars(elevated bool) (flattened string) { return } +func (p *Provisioner) uploadEnvVars(flattenedEnvVars string) (err error) { + // Upload all env vars to a powershell script on the target build file + // system. 
Do this in the context of a single retryable function so that + // we gracefully handle any errors created by transient conditions such as + // a system restart + envVarReader := strings.NewReader(flattenedEnvVars) + log.Printf("Uploading env vars to %s", p.config.RemoteEnvVarPath) + err = p.retryable(func() error { + if err := p.communicator.Upload(p.config.RemoteEnvVarPath, envVarReader, nil); err != nil { + return fmt.Errorf("Error uploading ps script containing env vars: %s", err) + } + return err + }) + if err != nil { + return err + } + return +} + func (p *Provisioner) createCommandText() (command string, err error) { // Return the interpolated command if p.config.ElevatedUser == "" { @@ -377,12 +457,17 @@ func (p *Provisioner) createCommandText() (command string, err error) { } func (p *Provisioner) createCommandTextNonPrivileged() (command string, err error) { - // Create environment variables to set before executing the command - flattenedEnvVars := p.createFlattenedEnvVars(false) + // Prepare everything needed to enable the required env vars within the + // remote environment + err = p.prepareEnvVars(false) + if err != nil { + return "", err + } p.config.ctx.Data = &ExecuteCommandTemplate{ - Vars: flattenedEnvVars, - Path: p.config.RemotePath, + Path: p.config.RemotePath, + Vars: p.config.RemoteEnvVarPath, + WinRMPassword: getWinRMPassword(p.config.PackerBuildName), } command, err = interpolate.Render(p.config.ExecuteCommand, &p.config.ctx) @@ -394,31 +479,32 @@ func (p *Provisioner) createCommandTextNonPrivileged() (command string, err erro return command, nil } +func getWinRMPassword(buildName string) string { + winRMPass, _ := commonhelper.RetrieveSharedState("winrm_password", buildName) + return winRMPass +} + func (p *Provisioner) createCommandTextPrivileged() (command string, err error) { - // Can't double escape the env vars, lets create shiny new ones - flattenedEnvVars := p.createFlattenedEnvVars(true) - // Need to create a mini ps1 script containing all of the environment variables we want; - // we'll be dot-sourcing this later - envVarReader := strings.NewReader(flattenedEnvVars) - uuid := uuid.TimeOrderedUUID() - envVarPath := fmt.Sprintf(`${env:SYSTEMROOT}\Temp\packer-env-vars-%s.ps1`, uuid) - log.Printf("Uploading env vars to %s", envVarPath) - err = p.communicator.Upload(envVarPath, envVarReader, nil) + // Prepare everything needed to enable the required env vars within the + // remote environment + err = p.prepareEnvVars(true) if err != nil { - return "", fmt.Errorf("Error preparing elevated powershell script: %s", err) + return "", err } p.config.ctx.Data = &ExecuteCommandTemplate{ - Path: p.config.RemotePath, - Vars: envVarPath, + Path: p.config.RemotePath, + Vars: p.config.RemoteEnvVarPath, + WinRMPassword: getWinRMPassword(p.config.PackerBuildName), } command, err = interpolate.Render(p.config.ElevatedExecuteCommand, &p.config.ctx) if err != nil { return "", fmt.Errorf("Error processing command: %s", err) } - // OK so we need an elevated shell runner to wrap our command, this is going to have its own path - // generate the script and update the command runner in the process + // OK so we need an elevated shell runner to wrap our command, this is + // going to have its own path generate the script and update the command + // runner in the process path, err := p.generateElevatedRunner(command) if err != nil { return "", fmt.Errorf("Error generating elevated runner: %s", err) @@ -435,36 +521,55 @@ func (p *Provisioner) generateElevatedRunner(command string) 
(uploadedPath strin var buffer bytes.Buffer - // Output from the elevated command cannot be returned directly to - // the Packer console. In order to be able to view output from elevated - // commands and scripts an indirect approach is used by which the - // commands output is first redirected to file. The output file is then - // 'watched' by Packer while the elevated command is running and any - // content appearing in the file is written out to the console. - // Below the portion of command required to redirect output from the - // command to file is built and appended to the existing command string + // Output from the elevated command cannot be returned directly to the + // Packer console. In order to be able to view output from elevated + // commands and scripts an indirect approach is used by which the commands + // output is first redirected to file. The output file is then 'watched' + // by Packer while the elevated command is running and any content + // appearing in the file is written out to the console. Below the portion + // of command required to redirect output from the command to file is + // built and appended to the existing command string taskName := fmt.Sprintf("packer-%s", uuid.TimeOrderedUUID()) - // Only use %ENVVAR% format for environment variables when setting - // the log file path; Do NOT use $env:ENVVAR format as it won't be - // expanded correctly in the elevatedTemplate - logFile := `%SYSTEMROOT%\Temp\` + taskName + ".out" + // Only use %ENVVAR% format for environment variables when setting the log + // file path; Do NOT use $env:ENVVAR format as it won't be expanded + // correctly in the elevatedTemplate + logFile := `%SYSTEMROOT%/Temp/` + taskName + ".out" command += fmt.Sprintf(" > %s 2>&1", logFile) - // elevatedTemplate wraps the command in a single quoted XML text - // string so we need to escape characters considered 'special' in XML. + // elevatedTemplate wraps the command in a single quoted XML text string + // so we need to escape characters considered 'special' in XML. 
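Besides the XML escaping handled just below, the wrapper also needs the elevated user name and password made safe for PowerShell, which the following hunk does with the `psEscape` replacer introduced near the top of this file. A standalone sketch of that replacement set, with made-up sample values:

```
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Same replacement pairs as the provisioner's psEscape replacer:
	// $, ", ` and ' are each prefixed with a backtick.
	psEscape := strings.NewReplacer(
		"$", "`$",
		"\"", "`\"",
		"`", "``",
		"'", "`'",
	)

	for _, v := range []string{"pa$$word", `bar"baz`, "bar`baz", "it's"} {
		fmt.Printf("%s -> %s\n", v, psEscape.Replace(v))
	}
	// pa$$word -> pa`$`$word
	// bar"baz  -> bar`"baz
	// bar`baz  -> bar``baz
	// it's     -> it`'s
}
```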
err = xml.EscapeText(&buffer, []byte(command)) if err != nil { return "", fmt.Errorf("Error escaping characters special to XML in command %s: %s", command, err) } escapedCommand := buffer.String() log.Printf("Command [%s] converted to [%s] for use in XML string", command, escapedCommand) - buffer.Reset() + // Escape chars special to PowerShell in the ElevatedUser string + escapedElevatedUser := psEscape.Replace(p.config.ElevatedUser) + if escapedElevatedUser != p.config.ElevatedUser { + log.Printf("Elevated user %s converted to %s after escaping chars special to PowerShell", + p.config.ElevatedUser, escapedElevatedUser) + } + // Replace ElevatedPassword for winrm users who used this feature + p.config.ctx.Data = &EnvVarsTemplate{ + WinRMPassword: getWinRMPassword(p.config.PackerBuildName), + } + + p.config.ElevatedPassword, _ = interpolate.Render(p.config.ElevatedPassword, &p.config.ctx) + + // Escape chars special to PowerShell in the ElevatedPassword string + escapedElevatedPassword := psEscape.Replace(p.config.ElevatedPassword) + if escapedElevatedPassword != p.config.ElevatedPassword { + log.Printf("Elevated password %s converted to %s after escaping chars special to PowerShell", + p.config.ElevatedPassword, escapedElevatedPassword) + } + // Generate command err = elevatedTemplate.Execute(&buffer, elevatedOptions{ - User: p.config.ElevatedUser, - Password: p.config.ElevatedPassword, + User: escapedElevatedUser, + Password: escapedElevatedPassword, TaskName: taskName, TaskDescription: "Packer elevated task", LogFile: logFile, @@ -476,14 +581,11 @@ func (p *Provisioner) generateElevatedRunner(command string) (uploadedPath strin return "", err } uuid := uuid.TimeOrderedUUID() - path := fmt.Sprintf(`${env:TEMP}\packer-elevated-shell-%s.ps1`, uuid) + path := fmt.Sprintf(`C:/Windows/Temp/packer-elevated-shell-%s.ps1`, uuid) log.Printf("Uploading elevated shell wrapper for command [%s] to [%s]", command, path) err = p.communicator.Upload(path, &buffer, nil) if err != nil { return "", fmt.Errorf("Error preparing elevated powershell script: %s", err) } - - // CMD formatted Path required for this op - path = fmt.Sprintf("%s-%s.ps1", "%TEMP%\\packer-elevated-shell", uuid) return path, err } diff --git a/provisioner/powershell/provisioner_test.go b/provisioner/powershell/provisioner_test.go index e7e64d52e..7e739016c 100644 --- a/provisioner/powershell/provisioner_test.go +++ b/provisioner/powershell/provisioner_test.go @@ -5,7 +5,6 @@ import ( "errors" "fmt" "io/ioutil" - //"log" "os" "regexp" "strings" @@ -30,6 +29,7 @@ func TestProvisionerPrepare_extractScript(t *testing.T) { p := new(Provisioner) _ = p.Prepare(config) file, err := extractScript(p) + defer os.Remove(file) if err != nil { t.Fatalf("Should not be error: %s", err) } @@ -79,8 +79,8 @@ func TestProvisionerPrepare_Defaults(t *testing.T) { t.Error("expected elevated_password to be empty") } - if p.config.ExecuteCommand != `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};{{.Vars}}&'{{.Path}}';exit $LastExitCode }"` { - t.Fatalf(`Default command should be 'powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};{{.Vars}}&'{{.Path}}';exit $LastExitCode }"', but got '%s'`, p.config.ExecuteCommand) + if p.config.ExecuteCommand != `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};. 
{{.Vars}}; &'{{.Path}}';exit $LastExitCode }"` { + t.Fatalf(`Default command should be 'powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};. {{.Vars}}; &'{{.Path}}';exit $LastExitCode }"', but got '%s'`, p.config.ExecuteCommand) } if p.config.ElevatedExecuteCommand != `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};. {{.Vars}}; &'{{.Path}}'; exit $LastExitCode }"` { @@ -177,6 +177,7 @@ func TestProvisionerPrepare_Script(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["script"] = tf.Name() p = new(Provisioner) @@ -203,6 +204,7 @@ func TestProvisionerPrepare_ScriptAndInline(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["inline"] = []interface{}{"foo"} config["script"] = tf.Name() @@ -222,6 +224,7 @@ func TestProvisionerPrepare_ScriptAndScripts(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["inline"] = []interface{}{"foo"} config["scripts"] = []string{tf.Name()} @@ -248,6 +251,7 @@ func TestProvisionerPrepare_Scripts(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["scripts"] = []string{tf.Name()} p = new(Provisioner) @@ -403,7 +407,7 @@ func TestProvisionerProvision_Inline(t *testing.T) { ui := testUi() p := new(Provisioner) - // Defaults provided by Packer + // Defaults provided by Packer - env vars should not appear in cmd p.config.PackerBuildName = "vmware" p.config.PackerBuilderType = "iso" comm := new(packer.MockCommunicator) @@ -413,11 +417,14 @@ func TestProvisionerProvision_Inline(t *testing.T) { t.Fatal("should not have error") } - expectedCommand := `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};$env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; &'c:/Windows/Temp/inlineScript.ps1';exit $LastExitCode }"` - if comm.StartCmd.Command != expectedCommand { - t.Fatalf("Expect command to be: %s, got %s", expectedCommand, comm.StartCmd.Command) + cmd := comm.StartCmd.Command + re := regexp.MustCompile(`powershell -executionpolicy bypass "& { if \(Test-Path variable:global:ProgressPreference\){\$ProgressPreference='SilentlyContinue'};\. 
c:/Windows/Temp/packer-ps-env-vars-[[:alnum:]]{8}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{12}\.ps1; &'c:/Windows/Temp/inlineScript.ps1';exit \$LastExitCode }"`) + matched := re.MatchString(cmd) + if !matched { + t.Fatalf("Got unexpected command: %s", cmd) } + // User supplied env vars should not change things envVars := make([]string, 2) envVars[0] = "FOO=BAR" envVars[1] = "BAR=BAZ" @@ -430,15 +437,19 @@ func TestProvisionerProvision_Inline(t *testing.T) { t.Fatal("should not have error") } - expectedCommand = `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};$env:BAR=\"BAZ\"; $env:FOO=\"BAR\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; &'c:/Windows/Temp/inlineScript.ps1';exit $LastExitCode }"` - if comm.StartCmd.Command != expectedCommand { - t.Fatalf("Expect command to be: %s, got %s", expectedCommand, comm.StartCmd.Command) + cmd = comm.StartCmd.Command + re = regexp.MustCompile(`powershell -executionpolicy bypass "& { if \(Test-Path variable:global:ProgressPreference\){\$ProgressPreference='SilentlyContinue'};\. c:/Windows/Temp/packer-ps-env-vars-[[:alnum:]]{8}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{12}\.ps1; &'c:/Windows/Temp/inlineScript.ps1';exit \$LastExitCode }"`) + matched = re.MatchString(cmd) + if !matched { + t.Fatalf("Got unexpected command: %s", cmd) } } func TestProvisionerProvision_Scripts(t *testing.T) { tempFile, _ := ioutil.TempFile("", "packer") defer os.Remove(tempFile.Name()) + defer tempFile.Close() + config := testConfig() delete(config, "inline") config["scripts"] = []string{tempFile.Name()} @@ -455,11 +466,12 @@ func TestProvisionerProvision_Scripts(t *testing.T) { t.Fatal("should not have error") } - expectedCommand := `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};$env:PACKER_BUILDER_TYPE=\"footype\"; $env:PACKER_BUILD_NAME=\"foobuild\"; &'c:/Windows/Temp/script.ps1';exit $LastExitCode }"` - if comm.StartCmd.Command != expectedCommand { - t.Fatalf("Expect command to be: %s, got %s", expectedCommand, comm.StartCmd.Command) + cmd := comm.StartCmd.Command + re := regexp.MustCompile(`powershell -executionpolicy bypass "& { if \(Test-Path variable:global:ProgressPreference\){\$ProgressPreference='SilentlyContinue'};\. 
c:/Windows/Temp/packer-ps-env-vars-[[:alnum:]]{8}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{12}\.ps1; &'c:/Windows/Temp/script.ps1';exit \$LastExitCode }"`) + matched := re.MatchString(cmd) + if !matched { + t.Fatalf("Got unexpected command: %s", cmd) } - } func TestProvisionerProvision_ScriptsWithEnvVars(t *testing.T) { @@ -467,6 +479,8 @@ func TestProvisionerProvision_ScriptsWithEnvVars(t *testing.T) { config := testConfig() ui := testUi() defer os.Remove(tempFile.Name()) + defer tempFile.Close() + delete(config, "inline") config["scripts"] = []string{tempFile.Name()} @@ -488,9 +502,11 @@ func TestProvisionerProvision_ScriptsWithEnvVars(t *testing.T) { t.Fatal("should not have error") } - expectedCommand := `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};$env:BAR=\"BAZ\"; $env:FOO=\"BAR\"; $env:PACKER_BUILDER_TYPE=\"footype\"; $env:PACKER_BUILD_NAME=\"foobuild\"; &'c:/Windows/Temp/script.ps1';exit $LastExitCode }"` - if comm.StartCmd.Command != expectedCommand { - t.Fatalf("Expect command to be: %s, got %s", expectedCommand, comm.StartCmd.Command) + cmd := comm.StartCmd.Command + re := regexp.MustCompile(`powershell -executionpolicy bypass "& { if \(Test-Path variable:global:ProgressPreference\){\$ProgressPreference='SilentlyContinue'};\. c:/Windows/Temp/packer-ps-env-vars-[[:alnum:]]{8}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{12}\.ps1; &'c:/Windows/Temp/script.ps1';exit \$LastExitCode }"`) + matched := re.MatchString(cmd) + if !matched { + t.Fatalf("Got unexpected command: %s", cmd) } } @@ -510,6 +526,12 @@ func TestProvisioner_createFlattenedElevatedEnvVars_windows(t *testing.T) { {"FOO=bar", "BAZ=qux"}, // Multiple user env vars {"FOO=bar=baz"}, // User env var with value containing equals {"FOO==bar"}, // User env var with value starting with equals + // Test escaping of characters special to PowerShell + {"FOO=bar$baz"}, // User env var with value containing dollar + {"FOO=bar\"baz"}, // User env var with value containing a double quote + {"FOO=bar'baz"}, // User env var with value containing a single quote + {"FOO=bar`baz"}, // User env var with value containing a backtick + } expected := []string{ `$env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, @@ -517,6 +539,10 @@ func TestProvisioner_createFlattenedElevatedEnvVars_windows(t *testing.T) { `$env:BAZ="qux"; $env:FOO="bar"; $env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, `$env:FOO="bar=baz"; $env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, `$env:FOO="=bar"; $env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, + "$env:FOO=\"bar`$baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", + "$env:FOO=\"bar`\"baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", + "$env:FOO=\"bar`'baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", + "$env:FOO=\"bar``baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", } p := new(Provisioner) @@ -545,13 +571,22 @@ func TestProvisioner_createFlattenedEnvVars_windows(t *testing.T) { {"FOO=bar", "BAZ=qux"}, // Multiple user env vars {"FOO=bar=baz"}, // User env var with value containing equals {"FOO==bar"}, // User env var with value starting with equals + // Test escaping of characters special to PowerShell + {"FOO=bar$baz"}, // User env var with value containing dollar + {"FOO=bar\"baz"}, // User env var with value 
containing a double quote + {"FOO=bar'baz"}, // User env var with value containing a single quote + {"FOO=bar`baz"}, // User env var with value containing a backtick } expected := []string{ - `$env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; `, - `$env:FOO=\"bar\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; `, - `$env:BAZ=\"qux\"; $env:FOO=\"bar\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; `, - `$env:FOO=\"bar=baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; `, - `$env:FOO=\"=bar\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; `, + `$env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, + `$env:FOO="bar"; $env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, + `$env:BAZ="qux"; $env:FOO="bar"; $env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, + `$env:FOO="bar=baz"; $env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, + `$env:FOO="=bar"; $env:PACKER_BUILDER_TYPE="iso"; $env:PACKER_BUILD_NAME="vmware"; `, + "$env:FOO=\"bar`$baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", + "$env:FOO=\"bar`\"baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", + "$env:FOO=\"bar`'baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", + "$env:FOO=\"bar``baz\"; $env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; ", } p := new(Provisioner) @@ -571,7 +606,6 @@ func TestProvisioner_createFlattenedEnvVars_windows(t *testing.T) { } func TestProvision_createCommandText(t *testing.T) { - config := testConfig() config["remote_path"] = "c:/Windows/Temp/script.ps1" p := new(Provisioner) @@ -586,22 +620,40 @@ func TestProvision_createCommandText(t *testing.T) { // Non-elevated cmd, _ := p.createCommandText() - expectedCommand := `powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};$env:PACKER_BUILDER_TYPE=\"iso\"; $env:PACKER_BUILD_NAME=\"vmware\"; &'c:/Windows/Temp/script.ps1';exit $LastExitCode }"` - - if cmd != expectedCommand { - t.Fatalf("Expected Non-elevated command: %s, got %s", expectedCommand, cmd) + re := regexp.MustCompile(`powershell -executionpolicy bypass "& { if \(Test-Path variable:global:ProgressPreference\){\$ProgressPreference='SilentlyContinue'};\. 
c:/Windows/Temp/packer-ps-env-vars-[[:alnum:]]{8}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{12}\.ps1; &'c:/Windows/Temp/script.ps1';exit \$LastExitCode }"`) + matched := re.MatchString(cmd) + if !matched { + t.Fatalf("Got unexpected command: %s", cmd) } // Elevated p.config.ElevatedUser = "vagrant" p.config.ElevatedPassword = "vagrant" cmd, _ = p.createCommandText() - matched, _ := regexp.MatchString("powershell -executionpolicy bypass -file \"%TEMP%(.{1})packer-elevated-shell.*", cmd) + re = regexp.MustCompile(`powershell -executionpolicy bypass -file "C:/Windows/Temp/packer-elevated-shell-[[:alnum:]]{8}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{4}-[[:alnum:]]{12}\.ps1"`) + matched = re.MatchString(cmd) if !matched { t.Fatalf("Got unexpected elevated command: %s", cmd) } } +func TestProvision_uploadEnvVars(t *testing.T) { + p := new(Provisioner) + comm := new(packer.MockCommunicator) + p.communicator = comm + + flattenedEnvVars := `$env:PACKER_BUILDER_TYPE="footype"; $env:PACKER_BUILD_NAME="foobuild";` + + err := p.uploadEnvVars(flattenedEnvVars) + if err != nil { + t.Fatalf("Did not expect error: %s", err.Error()) + } + + if comm.UploadCalled != true { + t.Fatalf("Failed to upload env var file") + } +} + func TestProvision_generateElevatedShellRunner(t *testing.T) { // Non-elevated @@ -620,7 +672,7 @@ func TestProvision_generateElevatedShellRunner(t *testing.T) { t.Fatalf("Should have uploaded file") } - matched, _ := regexp.MatchString("%TEMP%(.{1})packer-elevated-shell.*", path) + matched, _ := regexp.MatchString("C:/Windows/Temp/packer-elevated-shell.*", path) if !matched { t.Fatalf("Got unexpected file: %s", path) } @@ -644,7 +696,7 @@ func TestRetryable(t *testing.T) { err := p.Prepare(config) err = p.retryable(retryMe) if err != nil { - t.Fatalf("should not have error retrying funuction") + t.Fatalf("should not have error retrying function") } count = 0 @@ -652,7 +704,7 @@ func TestRetryable(t *testing.T) { err = p.Prepare(config) err = p.retryable(retryMe) if err == nil { - t.Fatalf("should have error retrying funuction") + t.Fatalf("should have error retrying function") } } diff --git a/provisioner/puppet-masterless/provisioner.go b/provisioner/puppet-masterless/provisioner.go index 4b01c84d3..3254a6134 100644 --- a/provisioner/puppet-masterless/provisioner.go +++ b/provisioner/puppet-masterless/provisioner.go @@ -20,6 +20,12 @@ type Config struct { common.PackerConfig `mapstructure:",squash"` ctx interpolate.Context + // If true, staging directory is removed after executing puppet. + CleanStagingDir bool `mapstructure:"clean_staging_directory"` + + // The Guest OS Type (unix or windows) + GuestOSType string `mapstructure:"guest_os_type"` + // The command used to execute Puppet. ExecuteCommand string `mapstructure:"execute_command"` @@ -32,6 +38,9 @@ type Config struct { // Path to a hiera configuration file to upload and use. HieraConfigPath string `mapstructure:"hiera_config_path"` + // If true, packer will ignore all exit-codes from a puppet run + IgnoreExitCodes bool `mapstructure:"ignore_exit_codes"` + // An array of local paths of modules to upload. ModulePaths []string `mapstructure:"module_paths"` @@ -45,47 +54,42 @@ type Config struct { // If true, `sudo` will NOT be used to execute Puppet. PreventSudo bool `mapstructure:"prevent_sudo"` - // The directory where files will be uploaded. Packer requires write - // permissions in this directory. 
- StagingDir string `mapstructure:"staging_directory"` - - // If true, staging directory is removed after executing puppet. - CleanStagingDir bool `mapstructure:"clean_staging_directory"` - - // The directory from which the command will be executed. - // Packer requires the directory to exist when running puppet. - WorkingDir string `mapstructure:"working_directory"` - // The directory that contains the puppet binary. // E.g. if it can't be found on the standard path. PuppetBinDir string `mapstructure:"puppet_bin_dir"` - // If true, packer will ignore all exit-codes from a puppet run - IgnoreExitCodes bool `mapstructure:"ignore_exit_codes"` + // The directory where files will be uploaded. Packer requires write + // permissions in this directory. + StagingDir string `mapstructure:"staging_directory"` - // The Guest OS Type (unix or windows) - GuestOSType string `mapstructure:"guest_os_type"` + // The directory from which the command will be executed. + // Packer requires the directory to exist when running puppet. + WorkingDir string `mapstructure:"working_directory"` } type guestOSTypeConfig struct { - stagingDir string executeCommand string facterVarsFmt string facterVarsJoiner string modulePathJoiner string + stagingDir string + tempDir string } +// FIXME assumes both Packer host and target are same OS var guestOSTypeConfigs = map[string]guestOSTypeConfig{ provisioner.UnixOSType: { + tempDir: "/tmp", stagingDir: "/tmp/packer-puppet-masterless", executeCommand: "cd {{.WorkingDir}} && " + `{{if ne .FacterVars ""}}{{.FacterVars}} {{end}}` + "{{if .Sudo}}sudo -E {{end}}" + `{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}` + - `puppet apply --verbose --modulepath='{{.ModulePath}}' ` + + "puppet apply --detailed-exitcodes " + + "{{if .Debug}}--debug {{end}}" + + `{{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}}` + `{{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}}` + `{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}}` + - "--detailed-exitcodes " + `{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}` + "{{.ManifestFile}}", facterVarsFmt: "FACTER_%s='%s'", @@ -93,14 +97,16 @@ var guestOSTypeConfigs = map[string]guestOSTypeConfig{ modulePathJoiner: ":", }, provisioner.WindowsOSType: { - stagingDir: "C:/Windows/Temp/packer-puppet-masterless", + tempDir: filepath.ToSlash(os.Getenv("TEMP")), + stagingDir: filepath.ToSlash(os.Getenv("SYSTEMROOT")) + "/Temp/packer-puppet-masterless", executeCommand: "cd {{.WorkingDir}} && " + - "{{.FacterVars}} && " + + `{{if ne .FacterVars ""}}{{.FacterVars}} && {{end}}` + `{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}` + - `puppet apply --verbose --modulepath='{{.ModulePath}}' ` + + "puppet apply --detailed-exitcodes " + + "{{if .Debug}}--debug {{end}}" + + `{{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}}` + `{{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}}` + `{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}}` + - "--detailed-exitcodes " + `{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}` + "{{.ManifestFile}}", facterVarsFmt: `SET "FACTER_%s=%s"`, @@ -116,15 +122,17 @@ type Provisioner struct { } type ExecuteTemplate struct { - WorkingDir string - FacterVars string - HieraConfigPath string - ModulePath string - ManifestFile string - ManifestDir string - PuppetBinDir string - Sudo bool - ExtraArguments string + Debug bool + ExtraArguments string + FacterVars string + HieraConfigPath string + ModulePath string + ModulePathJoiner 
string + ManifestFile string + ManifestDir string + PuppetBinDir string + Sudo bool + WorkingDir string } func (p *Provisioner) Prepare(raws ...interface{}) error { @@ -134,6 +142,7 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { InterpolateFilter: &interpolate.RenderFilter{ Exclude: []string{ "execute_command", + "extra_arguments", }, }, }, raws...) @@ -162,10 +171,6 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { p.config.ExecuteCommand = p.guestOSTypeConfig.executeCommand } - if p.config.ExecuteCommand == "" { - p.config.ExecuteCommand = p.guestOSTypeConfig.executeCommand - } - if p.config.StagingDir == "" { p.config.StagingDir = p.guestOSTypeConfig.stagingDir } @@ -286,18 +291,26 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { facterVars = append(facterVars, fmt.Sprintf(p.guestOSTypeConfig.facterVarsFmt, k, v)) } - // Execute Puppet - p.config.ctx.Data = &ExecuteTemplate{ - FacterVars: strings.Join(facterVars, p.guestOSTypeConfig.facterVarsJoiner), - HieraConfigPath: remoteHieraConfigPath, - ManifestDir: remoteManifestDir, - ManifestFile: remoteManifestFile, - ModulePath: strings.Join(modulePaths, p.guestOSTypeConfig.modulePathJoiner), - PuppetBinDir: p.config.PuppetBinDir, - Sudo: !p.config.PreventSudo, - WorkingDir: p.config.WorkingDir, - ExtraArguments: strings.Join(p.config.ExtraArguments, " "), + data := ExecuteTemplate{ + ExtraArguments: "", + FacterVars: strings.Join(facterVars, p.guestOSTypeConfig.facterVarsJoiner), + HieraConfigPath: remoteHieraConfigPath, + ManifestDir: remoteManifestDir, + ManifestFile: remoteManifestFile, + ModulePath: strings.Join(modulePaths, p.guestOSTypeConfig.modulePathJoiner), + ModulePathJoiner: p.guestOSTypeConfig.modulePathJoiner, + PuppetBinDir: p.config.PuppetBinDir, + Sudo: !p.config.PreventSudo, + WorkingDir: p.config.WorkingDir, } + + p.config.ctx.Data = &data + _ExtraArguments, err := interpolate.Render(strings.Join(p.config.ExtraArguments, " "), &p.config.ctx) + if err != nil { + return err + } + data.ExtraArguments = _ExtraArguments + command, err := interpolate.Render(p.config.ExecuteCommand, &p.config.ctx) if err != nil { return err diff --git a/provisioner/puppet-masterless/provisioner_test.go b/provisioner/puppet-masterless/provisioner_test.go index dd6725768..c8adce89e 100644 --- a/provisioner/puppet-masterless/provisioner_test.go +++ b/provisioner/puppet-masterless/provisioner_test.go @@ -5,6 +5,7 @@ import ( "io/ioutil" "log" "os" + "path/filepath" "strings" "testing" @@ -13,15 +14,17 @@ import ( "github.com/stretchr/testify/assert" ) -func testConfig() map[string]interface{} { +func testConfig() (config map[string]interface{}, tf *os.File) { tf, err := ioutil.TempFile("", "packer") if err != nil { panic(err) } - return map[string]interface{}{ + config = map[string]interface{}{ "manifest_file": tf.Name(), } + + return config, tf } func TestProvisioner_Impl(t *testing.T) { @@ -33,7 +36,10 @@ func TestProvisioner_Impl(t *testing.T) { } func TestGuestOSConfig_empty_unix(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() + p := new(Provisioner) err := p.Prepare(config) if err != nil { @@ -53,12 +59,15 @@ func TestGuestOSConfig_empty_unix(t *testing.T) { } expected := "cd /tmp/packer-puppet-masterless && " + - "sudo -E puppet apply --verbose --modulepath='' --detailed-exitcodes /r/m/f" + "sudo -E puppet apply --detailed-exitcodes /r/m/f" assert.Equal(t, expected, command) } func 
TestGuestOSConfig_full_unix(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() + p := new(Provisioner) err := p.Prepare(config) if err != nil { @@ -89,13 +98,16 @@ func TestGuestOSConfig_full_unix(t *testing.T) { expected := "cd /tmp/packer-puppet-masterless && FACTER_lhs='rhs' FACTER_foo='bar' " + "sudo -E puppet apply " + - "--verbose --modulepath='/m/p:/a/b' --hiera_config='/h/c/p' " + - "--manifestdir='/r/m/d' --detailed-exitcodes /r/m/f" + "--detailed-exitcodes --modulepath='/m/p:/a/b' --hiera_config='/h/c/p' " + + "--manifestdir='/r/m/d' /r/m/f" assert.Equal(t, expected, command) } func TestGuestOSConfig_empty_windows(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() + config["guest_os_type"] = "windows" p := new(Provisioner) err := p.Prepare(config) @@ -115,12 +127,15 @@ func TestGuestOSConfig_empty_windows(t *testing.T) { t.Fatalf("err: %s", err) } - expected := "cd C:/Windows/Temp/packer-puppet-masterless && && puppet apply --verbose --modulepath='' --detailed-exitcodes /r/m/f" + expected := "cd " + filepath.ToSlash(os.Getenv("SYSTEMROOT")) + "/Temp/packer-puppet-masterless && puppet apply --detailed-exitcodes /r/m/f" assert.Equal(t, expected, command) } func TestGuestOSConfig_full_windows(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() + config["guest_os_type"] = "windows" p := new(Provisioner) err := p.Prepare(config) @@ -150,15 +165,17 @@ func TestGuestOSConfig_full_windows(t *testing.T) { t.Fatalf("err: %s", err) } - expected := "cd C:/Windows/Temp/packer-puppet-masterless && " + + expected := "cd " + filepath.ToSlash(os.Getenv("SYSTEMROOT")) + "/Temp/packer-puppet-masterless && " + "SET \"FACTER_lhs=rhs\" & SET \"FACTER_foo=bar\" && " + - "puppet apply --verbose --modulepath='/m/p;/a/b' --hiera_config='/h/c/p' " + - "--manifestdir='/r/m/d' --detailed-exitcodes /r/m/f" + "puppet apply --detailed-exitcodes --modulepath='/m/p;/a/b' --hiera_config='/h/c/p' " + + "--manifestdir='/r/m/d' /r/m/f" assert.Equal(t, expected, command) } func TestProvisionerPrepare_puppetBinDir(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "puppet_bin_dir") p := new(Provisioner) @@ -183,7 +200,9 @@ func TestProvisionerPrepare_puppetBinDir(t *testing.T) { } func TestProvisionerPrepare_hieraConfigPath(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "hiera_config_path") p := new(Provisioner) @@ -208,7 +227,9 @@ func TestProvisionerPrepare_hieraConfigPath(t *testing.T) { } func TestProvisionerPrepare_manifestFile(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "manifest_file") p := new(Provisioner) @@ -233,7 +254,9 @@ func TestProvisionerPrepare_manifestFile(t *testing.T) { } func TestProvisionerPrepare_manifestDir(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "manifestdir") p := new(Provisioner) @@ -258,7 +281,9 @@ func TestProvisionerPrepare_manifestDir(t *testing.T) { } func TestProvisionerPrepare_modulePaths(t *testing.T) { - config := 
testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "module_paths") p := new(Provisioner) @@ -291,7 +316,9 @@ func TestProvisionerPrepare_modulePaths(t *testing.T) { } func TestProvisionerPrepare_facterFacts(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "facter") p := new(Provisioner) @@ -346,7 +373,9 @@ func TestProvisionerPrepare_facterFacts(t *testing.T) { } func TestProvisionerPrepare_extraArguments(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() // Test with missing parameter delete(config, "extra_arguments") @@ -377,7 +406,9 @@ func TestProvisionerPrepare_extraArguments(t *testing.T) { } func TestProvisionerPrepare_stagingDir(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "staging_directory") p := new(Provisioner) @@ -405,7 +436,9 @@ func TestProvisionerPrepare_stagingDir(t *testing.T) { } func TestProvisionerPrepare_workingDir(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "working_directory") p := new(Provisioner) @@ -438,7 +471,10 @@ func TestProvisionerPrepare_workingDir(t *testing.T) { } func TestProvisionerProvision_extraArguments(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() + ui := &packer.MachineReadableUi{ Writer: ioutil.Discard, } diff --git a/provisioner/puppet-server/provisioner.go b/provisioner/puppet-server/provisioner.go index d7744952e..b7ec34ea7 100644 --- a/provisioner/puppet-server/provisioner.go +++ b/provisioner/puppet-server/provisioner.go @@ -5,6 +5,7 @@ package puppetserver import ( "fmt" "os" + "path/filepath" "strings" "github.com/hashicorp/packer/common" @@ -14,56 +15,12 @@ import ( "github.com/hashicorp/packer/template/interpolate" ) -type guestOSTypeConfig struct { - executeCommand string - facterVarsFmt string - facterVarsJoiner string - stagingDir string -} - -var guestOSTypeConfigs = map[string]guestOSTypeConfig{ - provisioner.UnixOSType: { - executeCommand: "{{.FacterVars}} {{if .Sudo}}sudo -E {{end}}" + - "{{if ne .PuppetBinDir \"\"}}{{.PuppetBinDir}}/{{end}}puppet agent " + - "--onetime --no-daemonize " + - "{{if ne .PuppetServer \"\"}}--server='{{.PuppetServer}}' {{end}}" + - "{{if ne .Options \"\"}}{{.Options}} {{end}}" + - "{{if ne .PuppetNode \"\"}}--certname={{.PuppetNode}} {{end}}" + - "{{if ne .ClientCertPath \"\"}}--certdir='{{.ClientCertPath}}' {{end}}" + - "{{if ne .ClientPrivateKeyPath \"\"}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}}" + - "--detailed-exitcodes", - facterVarsFmt: "FACTER_%s='%s'", - facterVarsJoiner: " ", - stagingDir: "/tmp/packer-puppet-server", - }, - provisioner.WindowsOSType: { - executeCommand: "{{.FacterVars}} " + - "{{if ne .PuppetBinDir \"\"}}{{.PuppetBinDir}}/{{end}}puppet agent " + - "--onetime --no-daemonize " + - "{{if ne .PuppetServer \"\"}}--server='{{.PuppetServer}}' {{end}}" + - "{{if ne .Options \"\"}}{{.Options}} {{end}}" + - "{{if ne .PuppetNode \"\"}}--certname={{.PuppetNode}} {{end}}" + - "{{if ne .ClientCertPath \"\"}}--certdir='{{.ClientCertPath}}' {{end}}" + - "{{if ne .ClientPrivateKeyPath 
\"\"}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}}" + - "--detailed-exitcodes", - facterVarsFmt: "SET \"FACTER_%s=%s\"", - facterVarsJoiner: " & ", - stagingDir: "C:/Windows/Temp/packer-puppet-server", - }, -} - type Config struct { common.PackerConfig `mapstructure:",squash"` ctx interpolate.Context - // The command used to execute Puppet. - ExecuteCommand string `mapstructure:"execute_command"` - - // The Guest OS Type (unix or windows) - GuestOSType string `mapstructure:"guest_os_type"` - - // Additional facts to set when executing Puppet - Facter map[string]string + // If true, staging directory is removed after executing puppet. + CleanStagingDir bool `mapstructure:"clean_staging_directory"` // A path to the client certificate ClientCertPath string `mapstructure:"client_cert_path"` @@ -71,28 +28,86 @@ type Config struct { // A path to a directory containing the client private keys ClientPrivateKeyPath string `mapstructure:"client_private_key_path"` + // The command used to execute Puppet. + ExecuteCommand string `mapstructure:"execute_command"` + + // Additional argument to pass when executing Puppet. + ExtraArguments []string `mapstructure:"extra_arguments"` + + // Additional facts to set when executing Puppet + Facter map[string]string + + // The Guest OS Type (unix or windows) + GuestOSType string `mapstructure:"guest_os_type"` + + // If true, packer will ignore all exit-codes from a puppet run + IgnoreExitCodes bool `mapstructure:"ignore_exit_codes"` + + // If true, `sudo` will NOT be used to execute Puppet. + PreventSudo bool `mapstructure:"prevent_sudo"` + + // The directory that contains the puppet binary. + // E.g. if it can't be found on the standard path. + PuppetBinDir string `mapstructure:"puppet_bin_dir"` + // The hostname of the Puppet node. PuppetNode string `mapstructure:"puppet_node"` // The hostname of the Puppet server. PuppetServer string `mapstructure:"puppet_server"` - // Additional options to be passed to `puppet agent`. - Options string `mapstructure:"options"` - - // If true, `sudo` will NOT be used to execute Puppet. - PreventSudo bool `mapstructure:"prevent_sudo"` - // The directory where files will be uploaded. Packer requires write // permissions in this directory. StagingDir string `mapstructure:"staging_dir"` - // The directory that contains the puppet binary. - // E.g. if it can't be found on the standard path. - PuppetBinDir string `mapstructure:"puppet_bin_dir"` + // The directory from which the command will be executed. + // Packer requires the directory to exist when running puppet. 
+ WorkingDir string `mapstructure:"working_directory"` +} - // If true, packer will ignore all exit-codes from a puppet run - IgnoreExitCodes bool `mapstructure:"ignore_exit_codes"` +type guestOSTypeConfig struct { + executeCommand string + facterVarsFmt string + facterVarsJoiner string + stagingDir string + tempDir string +} + +// FIXME assumes both Packer host and target are same OS +var guestOSTypeConfigs = map[string]guestOSTypeConfig{ + provisioner.UnixOSType: { + tempDir: "/tmp", + stagingDir: "/tmp/packer-puppet-server", + executeCommand: "cd {{.WorkingDir}} && " + + `{{if ne .FacterVars ""}}{{.FacterVars}} {{end}}` + + "{{if .Sudo}}sudo -E {{end}}" + + `{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}` + + "puppet agent --onetime --no-daemonize --detailed-exitcodes " + + "{{if .Debug}}--debug {{end}}" + + `{{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}}` + + `{{if ne .PuppetNode ""}}--certname='{{.PuppetNode}}' {{end}}` + + `{{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}}` + + `{{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}}` + + `{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}`, + facterVarsFmt: "FACTER_%s='%s'", + facterVarsJoiner: " ", + }, + provisioner.WindowsOSType: { + tempDir: filepath.ToSlash(os.Getenv("TEMP")), + stagingDir: filepath.ToSlash(os.Getenv("SYSTEMROOT")) + "/Temp/packer-puppet-server", + executeCommand: "cd {{.WorkingDir}} && " + + `{{if ne .FacterVars ""}}{{.FacterVars}} && {{end}}` + + `{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}` + + "puppet agent --onetime --no-daemonize --detailed-exitcodes " + + "{{if .Debug}}--debug {{end}}" + + `{{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}}` + + `{{if ne .PuppetNode ""}}--certname='{{.PuppetNode}}' {{end}}` + + `{{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}}` + + `{{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}}` + + `{{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}}`, + facterVarsFmt: `SET "FACTER_%s=%s"`, + facterVarsJoiner: " & ", + }, } type Provisioner struct { @@ -102,14 +117,16 @@ type Provisioner struct { } type ExecuteTemplate struct { - FacterVars string ClientCertPath string ClientPrivateKeyPath string + Debug bool + ExtraArguments string + FacterVars string PuppetNode string PuppetServer string - Options string PuppetBinDir string Sudo bool + WorkingDir string } func (p *Provisioner) Prepare(raws ...interface{}) error { @@ -119,6 +136,7 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { InterpolateFilter: &interpolate.RenderFilter{ Exclude: []string{ "execute_command", + "extra_arguments", }, }, }, raws...) 
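Excluding `extra_arguments` from this first interpolation pass matters because, as in the masterless provisioner above, the joined arguments are rendered later against the execute-command data, so they can reference fields such as `{{.WorkingDir}}`. A rough sketch of that second pass; the argument values and the data struct are illustrative only:

```
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/hashicorp/packer/template/interpolate"
)

func main() {
	// Illustrative subset of the ExecuteTemplate data filled in at Provision time.
	data := struct {
		WorkingDir   string
		PuppetServer string
	}{
		WorkingDir:   "/tmp/packer-puppet-server",
		PuppetServer: "puppet.example.com",
	}
	ctx := interpolate.Context{Data: &data}

	// Hypothetical extra_arguments; skipped by the first pass in Prepare and
	// rendered here instead, so {{.WorkingDir}} resolves against the data above.
	extraArguments := []string{"--logdest", "{{.WorkingDir}}/puppet.log"}

	rendered, err := interpolate.Render(strings.Join(extraArguments, " "), &ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(rendered) // --logdest /tmp/packer-puppet-server/puppet.log
}
```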
@@ -150,6 +168,10 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { p.config.StagingDir = p.guestOSTypeConfig.stagingDir } + if p.config.WorkingDir == "" { + p.config.WorkingDir = p.config.StagingDir + } + if p.config.Facter == nil { p.config.Facter = make(map[string]string) } @@ -223,17 +245,25 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { facterVars = append(facterVars, fmt.Sprintf(p.guestOSTypeConfig.facterVarsFmt, k, v)) } - // Execute Puppet - p.config.ctx.Data = &ExecuteTemplate{ - FacterVars: strings.Join(facterVars, p.guestOSTypeConfig.facterVarsJoiner), + data := ExecuteTemplate{ ClientCertPath: remoteClientCertPath, ClientPrivateKeyPath: remoteClientPrivateKeyPath, + ExtraArguments: "", + FacterVars: strings.Join(facterVars, p.guestOSTypeConfig.facterVarsJoiner), PuppetNode: p.config.PuppetNode, PuppetServer: p.config.PuppetServer, - Options: p.config.Options, PuppetBinDir: p.config.PuppetBinDir, Sudo: !p.config.PreventSudo, + WorkingDir: p.config.WorkingDir, } + + p.config.ctx.Data = &data + _ExtraArguments, err := interpolate.Render(strings.Join(p.config.ExtraArguments, " "), &p.config.ctx) + if err != nil { + return err + } + data.ExtraArguments = _ExtraArguments + command, err := interpolate.Render(p.config.ExecuteCommand, &p.config.ctx) if err != nil { return err @@ -252,6 +282,12 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { return fmt.Errorf("Puppet exited with a non-zero exit status: %d", cmd.ExitStatus) } + if p.config.CleanStagingDir { + if err := p.removeDir(ui, comm, p.config.StagingDir); err != nil { + return fmt.Errorf("Error removing staging directory: %s", err) + } + } + return nil } @@ -284,6 +320,22 @@ func (p *Provisioner) createDir(ui packer.Ui, comm packer.Communicator, dir stri return nil } +func (p *Provisioner) removeDir(ui packer.Ui, comm packer.Communicator, dir string) error { + cmd := &packer.RemoteCmd{ + Command: fmt.Sprintf("rm -fr '%s'", dir), + } + + if err := cmd.StartWithUi(comm, ui); err != nil { + return err + } + + if cmd.ExitStatus != 0 { + return fmt.Errorf("Non-zero exit status.") + } + + return nil +} + func (p *Provisioner) uploadDirectory(ui packer.Ui, comm packer.Communicator, dst string, src string) error { if err := p.createDir(ui, comm, dst); err != nil { return err diff --git a/provisioner/puppet-server/provisioner_test.go b/provisioner/puppet-server/provisioner_test.go index 4c7746c78..68e9369b6 100644 --- a/provisioner/puppet-server/provisioner_test.go +++ b/provisioner/puppet-server/provisioner_test.go @@ -8,15 +8,17 @@ import ( "github.com/hashicorp/packer/packer" ) -func testConfig() map[string]interface{} { +func testConfig() (config map[string]interface{}, tf *os.File) { tf, err := ioutil.TempFile("", "packer") if err != nil { panic(err) } - return map[string]interface{}{ + config = map[string]interface{}{ "puppet_server": tf.Name(), } + + return config, tf } func TestProvisioner_Impl(t *testing.T) { @@ -28,7 +30,9 @@ func TestProvisioner_Impl(t *testing.T) { } func TestProvisionerPrepare_puppetBinDir(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "puppet_bin_dir") p := new(Provisioner) @@ -53,7 +57,9 @@ func TestProvisionerPrepare_puppetBinDir(t *testing.T) { } func TestProvisionerPrepare_clientPrivateKeyPath(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer 
tempfile.Close() delete(config, "client_private_key_path") p := new(Provisioner) @@ -86,7 +92,9 @@ func TestProvisionerPrepare_clientPrivateKeyPath(t *testing.T) { } func TestProvisionerPrepare_clientCertPath(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "client_cert_path") p := new(Provisioner) @@ -119,7 +127,9 @@ func TestProvisionerPrepare_clientCertPath(t *testing.T) { } func TestProvisionerPrepare_executeCommand(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "execute_command") p := new(Provisioner) @@ -130,7 +140,9 @@ func TestProvisionerPrepare_executeCommand(t *testing.T) { } func TestProvisionerPrepare_facterFacts(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "facter") p := new(Provisioner) @@ -185,7 +197,9 @@ func TestProvisionerPrepare_facterFacts(t *testing.T) { } func TestProvisionerPrepare_stagingDir(t *testing.T) { - config := testConfig() + config, tempfile := testConfig() + defer os.Remove(tempfile.Name()) + defer tempfile.Close() delete(config, "staging_dir") p := new(Provisioner) diff --git a/provisioner/salt-masterless/provisioner.go b/provisioner/salt-masterless/provisioner.go index dad5f7109..76d2ff2d3 100644 --- a/provisioner/salt-masterless/provisioner.go +++ b/provisioner/salt-masterless/provisioner.go @@ -8,17 +8,15 @@ import ( "fmt" "os" "path/filepath" + "strings" "github.com/hashicorp/packer/common" "github.com/hashicorp/packer/helper/config" "github.com/hashicorp/packer/packer" + "github.com/hashicorp/packer/provisioner" "github.com/hashicorp/packer/template/interpolate" ) -const DefaultTempConfigDir = "/tmp/salt" -const DefaultStateTreeDir = "/srv/salt" -const DefaultPillarRootDir = "/srv/pillar" - type Config struct { common.PackerConfig `mapstructure:",squash"` @@ -67,11 +65,44 @@ type Config struct { // Command line args passed onto salt-call CmdArgs string "" + // The Guest OS Type (unix or windows) + GuestOSType string `mapstructure:"guest_os_type"` + ctx interpolate.Context } type Provisioner struct { - config Config + config Config + guestOSTypeConfig guestOSTypeConfig + guestCommands *provisioner.GuestCommands +} + +type guestOSTypeConfig struct { + tempDir string + stateRoot string + pillarRoot string + configDir string + bootstrapFetchCmd string + bootstrapRunCmd string +} + +var guestOSTypeConfigs = map[string]guestOSTypeConfig{ + provisioner.UnixOSType: { + configDir: "/etc/salt", + tempDir: "/tmp/salt", + stateRoot: "/srv/salt", + pillarRoot: "/srv/pillar", + bootstrapFetchCmd: "curl -L https://bootstrap.saltstack.com -o /tmp/install_salt.sh || wget -O /tmp/install_salt.sh https://bootstrap.saltstack.com", + bootstrapRunCmd: "sh /tmp/install_salt.sh", + }, + provisioner.WindowsOSType: { + configDir: "C:/salt/conf", + tempDir: "C:/Windows/Temp/salt/", + stateRoot: "C:/salt/state", + pillarRoot: "C:/salt/pillar/", + bootstrapFetchCmd: "powershell Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/saltstack/salt-bootstrap/stable/bootstrap-salt.ps1' -OutFile 'C:/Windows/Temp/bootstrap-salt.ps1'", + bootstrapRunCmd: "Powershell C:/Windows/Temp/bootstrap-salt.ps1", + }, } func (p *Provisioner) Prepare(raws ...interface{}) error { @@ -86,8 +117,25 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { return err } + if 
p.config.GuestOSType == "" { + p.config.GuestOSType = provisioner.DefaultOSType + } else { + p.config.GuestOSType = strings.ToLower(p.config.GuestOSType) + } + + var ok bool + p.guestOSTypeConfig, ok = guestOSTypeConfigs[p.config.GuestOSType] + if !ok { + return fmt.Errorf("Invalid guest_os_type: \"%s\"", p.config.GuestOSType) + } + + p.guestCommands, err = provisioner.NewGuestCommands(p.config.GuestOSType, false) + if err != nil { + return fmt.Errorf("Invalid guest_os_type: \"%s\"", p.config.GuestOSType) + } + if p.config.TempConfigDir == "" { - p.config.TempConfigDir = DefaultTempConfigDir + p.config.TempConfigDir = p.guestOSTypeConfig.tempDir } var errs *packer.MultiError @@ -135,14 +183,14 @@ func (p *Provisioner) Prepare(raws ...interface{}) error { cmd_args.WriteString(p.config.RemoteStateTree) } else { cmd_args.WriteString(" --file-root=") - cmd_args.WriteString(DefaultStateTreeDir) + cmd_args.WriteString(p.guestOSTypeConfig.stateRoot) } if p.config.RemotePillarRoots != "" { cmd_args.WriteString(" --pillar-root=") cmd_args.WriteString(p.config.RemotePillarRoots) } else { cmd_args.WriteString(" --pillar-root=") - cmd_args.WriteString(DefaultPillarRootDir) + cmd_args.WriteString(p.guestOSTypeConfig.pillarRoot) } } @@ -179,14 +227,14 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { if !p.config.SkipBootstrap { cmd := &packer.RemoteCmd{ // Fallback on wget if curl failed for any reason (such as not being installed) - Command: fmt.Sprintf("curl -L https://bootstrap.saltstack.com -o /tmp/install_salt.sh || wget -O /tmp/install_salt.sh https://bootstrap.saltstack.com"), + Command: fmt.Sprintf(p.guestOSTypeConfig.bootstrapFetchCmd), } ui.Message(fmt.Sprintf("Downloading saltstack bootstrap to /tmp/install_salt.sh")) if err = cmd.StartWithUi(comm, ui); err != nil { return fmt.Errorf("Unable to download Salt: %s", err) } cmd = &packer.RemoteCmd{ - Command: fmt.Sprintf("%s /tmp/install_salt.sh %s", p.sudo("sh"), p.config.BootstrapArgs), + Command: fmt.Sprintf("%s %s", p.sudo(p.guestOSTypeConfig.bootstrapRunCmd), p.config.BootstrapArgs), } ui.Message(fmt.Sprintf("Installing Salt with command %s", cmd.Command)) if err = cmd.StartWithUi(comm, ui); err != nil { @@ -208,14 +256,14 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { } // move minion config into /etc/salt - ui.Message(fmt.Sprintf("Make sure directory %s exists", "/etc/salt")) - if err := p.createDir(ui, comm, "/etc/salt"); err != nil { + ui.Message(fmt.Sprintf("Make sure directory %s exists", p.guestOSTypeConfig.configDir)) + if err := p.createDir(ui, comm, p.guestOSTypeConfig.configDir); err != nil { return fmt.Errorf("Error creating remote salt configuration directory: %s", err) } src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "minion")) - dst = "/etc/salt/minion" + dst = filepath.ToSlash(filepath.Join(p.guestOSTypeConfig.configDir, "minion")) if err = p.moveFile(ui, comm, dst, src); err != nil { - return fmt.Errorf("Unable to move %s/minion to /etc/salt/minion: %s", p.config.TempConfigDir, err) + return fmt.Errorf("Unable to move %s/minion to %s/minion: %s", p.config.TempConfigDir, p.guestOSTypeConfig.configDir, err) } } @@ -228,14 +276,14 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { } // move grains file into /etc/salt - ui.Message(fmt.Sprintf("Make sure directory %s exists", "/etc/salt")) - if err := p.createDir(ui, comm, "/etc/salt"); err != nil { + ui.Message(fmt.Sprintf("Make sure directory %s exists", 
p.guestOSTypeConfig.configDir)) + if err := p.createDir(ui, comm, p.guestOSTypeConfig.configDir); err != nil { return fmt.Errorf("Error creating remote salt configuration directory: %s", err) } src = filepath.ToSlash(filepath.Join(p.config.TempConfigDir, "grains")) - dst = "/etc/salt/grains" + dst = filepath.ToSlash(filepath.Join(p.guestOSTypeConfig.configDir, "grains")) if err = p.moveFile(ui, comm, dst, src); err != nil { - return fmt.Errorf("Unable to move %s/grains to /etc/salt/grains: %s", p.config.TempConfigDir, err) + return fmt.Errorf("Unable to move %s/grains to %s/grains: %s", p.config.TempConfigDir, p.guestOSTypeConfig.configDir, err) } } @@ -251,11 +299,15 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { if p.config.RemoteStateTree != "" { dst = p.config.RemoteStateTree } else { - dst = DefaultStateTreeDir + dst = p.guestOSTypeConfig.stateRoot } - if err = p.removeDir(ui, comm, dst); err != nil { - return fmt.Errorf("Unable to clear salt tree: %s", err) + + if err = p.statPath(ui, comm, dst); err != nil { + if err = p.removeDir(ui, comm, dst); err != nil { + return fmt.Errorf("Unable to clear salt tree: %s", err) + } } + if err = p.moveFile(ui, comm, dst, src); err != nil { return fmt.Errorf("Unable to move %s/states to %s: %s", p.config.TempConfigDir, dst, err) } @@ -273,11 +325,15 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { if p.config.RemotePillarRoots != "" { dst = p.config.RemotePillarRoots } else { - dst = DefaultPillarRootDir + dst = p.guestOSTypeConfig.pillarRoot } - if err = p.removeDir(ui, comm, dst); err != nil { - return fmt.Errorf("Unable to clear pillar root: %s", err) + + if err = p.statPath(ui, comm, dst); err != nil { + if err = p.removeDir(ui, comm, dst); err != nil { + return fmt.Errorf("Unable to clear pillar root: %s", err) + } } + if err = p.moveFile(ui, comm, dst, src); err != nil { return fmt.Errorf("Unable to move %s/pillar to %s: %s", p.config.TempConfigDir, dst, err) } @@ -304,7 +360,7 @@ func (p *Provisioner) Cancel() { // Prepends sudo to supplied command if config says to func (p *Provisioner) sudo(cmd string) string { - if p.config.DisableSudo { + if p.config.DisableSudo || (p.config.GuestOSType == provisioner.WindowsOSType) { return cmd } @@ -354,9 +410,11 @@ func (p *Provisioner) uploadFile(ui packer.Ui, comm packer.Communicator, dst, sr return nil } -func (p *Provisioner) moveFile(ui packer.Ui, comm packer.Communicator, dst, src string) error { +func (p *Provisioner) moveFile(ui packer.Ui, comm packer.Communicator, dst string, src string) error { ui.Message(fmt.Sprintf("Moving %s to %s", src, dst)) - cmd := &packer.RemoteCmd{Command: fmt.Sprintf(p.sudo("mv %s %s"), src, dst)} + cmd := &packer.RemoteCmd{ + Command: p.sudo(p.guestCommands.MovePath(src, dst)), + } if err := cmd.StartWithUi(comm, ui); err != nil || cmd.ExitStatus != 0 { if err == nil { err = fmt.Errorf("Bad exit status: %d", cmd.ExitStatus) @@ -370,7 +428,21 @@ func (p *Provisioner) moveFile(ui packer.Ui, comm packer.Communicator, dst, src func (p *Provisioner) createDir(ui packer.Ui, comm packer.Communicator, dir string) error { ui.Message(fmt.Sprintf("Creating directory: %s", dir)) cmd := &packer.RemoteCmd{ - Command: fmt.Sprintf("mkdir -p '%s'", dir), + Command: p.guestCommands.CreateDir(dir), + } + if err := cmd.StartWithUi(comm, ui); err != nil { + return err + } + if cmd.ExitStatus != 0 { + return fmt.Errorf("Non-zero exit status.") + } + return nil +} + +func (p *Provisioner) statPath(ui packer.Ui, comm 
packer.Communicator, path string) error { + ui.Message(fmt.Sprintf("Verifying Path: %s", path)) + cmd := &packer.RemoteCmd{ + Command: p.guestCommands.StatPath(path), } if err := cmd.StartWithUi(comm, ui); err != nil { return err @@ -384,7 +456,7 @@ func (p *Provisioner) createDir(ui packer.Ui, comm packer.Communicator, dir stri func (p *Provisioner) removeDir(ui packer.Ui, comm packer.Communicator, dir string) error { ui.Message(fmt.Sprintf("Removing directory: %s", dir)) cmd := &packer.RemoteCmd{ - Command: fmt.Sprintf(p.sudo("rm -rf '%s'"), dir), + Command: p.guestCommands.RemoveDir(dir), } if err := cmd.StartWithUi(comm, ui); err != nil { return err diff --git a/provisioner/salt-masterless/provisioner_test.go b/provisioner/salt-masterless/provisioner_test.go index 2b40887d3..c38a9b879 100644 --- a/provisioner/salt-masterless/provisioner_test.go +++ b/provisioner/salt-masterless/provisioner_test.go @@ -1,11 +1,12 @@ package saltmasterless import ( - "github.com/hashicorp/packer/packer" "io/ioutil" "os" "strings" "testing" + + "github.com/hashicorp/packer/packer" ) func testConfig() map[string]interface{} { @@ -31,7 +32,7 @@ func TestProvisionerPrepare_Defaults(t *testing.T) { t.Fatalf("err: %s", err) } - if p.config.TempConfigDir != DefaultTempConfigDir { + if p.config.TempConfigDir != p.guestOSTypeConfig.tempDir { t.Errorf("unexpected temp config dir: %s", p.config.TempConfigDir) } } @@ -48,7 +49,7 @@ func TestProvisionerPrepare_InvalidKey(t *testing.T) { } } -func TestProvisionerPrepare_CustomeState(t *testing.T) { +func TestProvisionerPrepare_CustomState(t *testing.T) { var p Provisioner config := testConfig() @@ -308,3 +309,19 @@ func TestProvisionerPrepare_LogLevel(t *testing.T) { t.Fatal("-l debug should be set in CmdArgs") } } + +func TestProvisionerPrepare_GuestOSType(t *testing.T) { + var p Provisioner + config := testConfig() + + config["guest_os_type"] = "Windows" + + err := p.Prepare(config) + if err != nil { + t.Fatalf("err: %s", err) + } + + if p.config.GuestOSType != "windows" { + t.Fatalf("GuestOSType should be 'windows'") + } +} diff --git a/provisioner/shell-local/communicator_test.go b/provisioner/shell-local/communicator_test.go deleted file mode 100644 index 8ebd4fa60..000000000 --- a/provisioner/shell-local/communicator_test.go +++ /dev/null @@ -1,45 +0,0 @@ -package shell - -import ( - "bytes" - "runtime" - "strings" - "testing" - - "github.com/hashicorp/packer/packer" -) - -func TestCommunicator_impl(t *testing.T) { - var _ packer.Communicator = new(Communicator) -} - -func TestCommunicator(t *testing.T) { - if runtime.GOOS == "windows" { - t.Skip("windows not supported for this test") - return - } - - c := &Communicator{ - ExecuteCommand: []string{"/bin/sh", "-c", "{{.Command}}"}, - } - - var buf bytes.Buffer - cmd := &packer.RemoteCmd{ - Command: "echo foo", - Stdout: &buf, - } - - if err := c.Start(cmd); err != nil { - t.Fatalf("err: %s", err) - } - - cmd.Wait() - - if cmd.ExitStatus != 0 { - t.Fatalf("err bad exit status: %d", cmd.ExitStatus) - } - - if strings.TrimSpace(buf.String()) != "foo" { - t.Fatalf("bad: %s", buf.String()) - } -} diff --git a/provisioner/shell-local/provisioner.go b/provisioner/shell-local/provisioner.go index 3f8222c19..16c3806e4 100644 --- a/provisioner/shell-local/provisioner.go +++ b/provisioner/shell-local/provisioner.go @@ -1,105 +1,32 @@ package shell import ( - "errors" - "fmt" - "runtime" - - "github.com/hashicorp/packer/common" - "github.com/hashicorp/packer/helper/config" + sl 
"github.com/hashicorp/packer/common/shell-local" "github.com/hashicorp/packer/packer" - "github.com/hashicorp/packer/template/interpolate" ) -type Config struct { - common.PackerConfig `mapstructure:",squash"` - - // Command is the command to execute - Command string - - // ExecuteCommand is the command used to execute the command. - ExecuteCommand []string `mapstructure:"execute_command"` - - ctx interpolate.Context -} - type Provisioner struct { - config Config + config sl.Config } func (p *Provisioner) Prepare(raws ...interface{}) error { - err := config.Decode(&p.config, &config.DecodeOpts{ - Interpolate: true, - InterpolateContext: &p.config.ctx, - InterpolateFilter: &interpolate.RenderFilter{ - Exclude: []string{ - "execute_command", - }, - }, - }, raws...) + err := sl.Decode(&p.config, raws...) if err != nil { return err } - if len(p.config.ExecuteCommand) == 0 { - if runtime.GOOS == "windows" { - p.config.ExecuteCommand = []string{ - "cmd", - "/C", - "{{.Command}}", - } - } else { - p.config.ExecuteCommand = []string{ - "/bin/sh", - "-c", - "{{.Command}}", - } - } - } - - var errs *packer.MultiError - if p.config.Command == "" { - errs = packer.MultiErrorAppend(errs, - errors.New("command must be specified")) - } - - if len(p.config.ExecuteCommand) == 0 { - errs = packer.MultiErrorAppend(errs, - errors.New("execute_command must not be empty")) - } - - if errs != nil && len(errs.Errors) > 0 { - return errs + err = sl.Validate(&p.config) + if err != nil { + return err } return nil } func (p *Provisioner) Provision(ui packer.Ui, _ packer.Communicator) error { - // Make another communicator for local - comm := &Communicator{ - Ctx: p.config.ctx, - ExecuteCommand: p.config.ExecuteCommand, - } - - // Build the remote command - cmd := &packer.RemoteCmd{Command: p.config.Command} - - ui.Say(fmt.Sprintf( - "Executing local command: %s", - p.config.Command)) - if err := cmd.StartWithUi(comm, ui); err != nil { - return fmt.Errorf( - "Error executing command: %s\n\n"+ - "Please see output above for more information.", - p.config.Command) - } - if cmd.ExitStatus != 0 { - return fmt.Errorf( - "Erroneous exit code %d while executing command: %s\n\n"+ - "Please see output above for more information.", - cmd.ExitStatus, - p.config.Command) + _, retErr := sl.Run(ui, &p.config) + if retErr != nil { + return retErr } return nil diff --git a/provisioner/windows-restart/provisioner.go b/provisioner/windows-restart/provisioner.go index 3b03cf116..ecf15e6a1 100644 --- a/provisioner/windows-restart/provisioner.go +++ b/provisioner/windows-restart/provisioner.go @@ -3,6 +3,7 @@ package restart import ( "bytes" "fmt" + "io" "log" "strings" @@ -113,6 +114,12 @@ var waitForRestart = func(p *Provisioner, comm packer.Communicator) error { var cmd *packer.RemoteCmd trycommand := TryCheckReboot abortcommand := AbortReboot + + // This sleep works around an azure/winrm bug. For more info see + // https://github.com/hashicorp/packer/issues/5257; we can remove the + // sleep when the underlying bug has been resolved. + time.Sleep(1 * time.Second) + // Stolen from Vagrant reboot checker for { log.Printf("Check if machine is rebooting...") @@ -212,20 +219,15 @@ var waitForCommunicator = func(p *Provisioner) error { // provisioning before powershell is actually ready. // In this next check, we parse stdout to make sure that the command is // actually running as expected. 
- var stdout, stderr bytes.Buffer - cmdModuleLoad := &packer.RemoteCmd{ - Command: DefaultRestartCheckCommand, - Stdin: nil, - Stdout: &stdout, - Stderr: &stderr} + cmdModuleLoad := &packer.RemoteCmd{Command: DefaultRestartCheckCommand} + var buf, buf2 bytes.Buffer + cmdModuleLoad.Stdout = &buf + cmdModuleLoad.Stdout = io.MultiWriter(cmdModuleLoad.Stdout, &buf2) - p.comm.Start(cmdModuleLoad) - cmdModuleLoad.Wait() + cmdModuleLoad.StartWithUi(p.comm, p.ui) + stdoutToRead := buf2.String() - stdoutToRead := stdout.String() - stderrToRead := stderr.String() if !strings.Contains(stdoutToRead, "restarted.") { - log.Printf("Stderr is %s", stderrToRead) log.Printf("echo didn't succeed; retrying...") continue } diff --git a/provisioner/windows-shell/provisioner.go b/provisioner/windows-shell/provisioner.go index 585fa437c..f9b105121 100644 --- a/provisioner/windows-shell/provisioner.go +++ b/provisioner/windows-shell/provisioner.go @@ -190,6 +190,8 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { ui.Error(fmt.Sprintf("Unable to extract inline scripts into a file: %s", err)) } scripts = append(scripts, temp) + // Remove temp script containing the inline commands when done + defer os.Remove(temp) } for _, path := range scripts { @@ -203,11 +205,11 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error { defer f.Close() // Create environment variables to set before executing the command - flattendVars := p.createFlattenedEnvVars() + flattenedVars := p.createFlattenedEnvVars() // Compile the command p.config.ctx.Data = &ExecuteCommandTemplate{ - Vars: flattendVars, + Vars: flattenedVars, Path: p.config.RemotePath, } command, err := interpolate.Render(p.config.ExecuteCommand, &p.config.ctx) diff --git a/provisioner/windows-shell/provisioner_test.go b/provisioner/windows-shell/provisioner_test.go index 1f2ee2cab..5f4149478 100644 --- a/provisioner/windows-shell/provisioner_test.go +++ b/provisioner/windows-shell/provisioner_test.go @@ -25,6 +25,7 @@ func TestProvisionerPrepare_extractScript(t *testing.T) { p := new(Provisioner) _ = p.Prepare(config) file, err := extractScript(p) + defer os.Remove(file) if err != nil { t.Fatalf("Should not be error: %s", err) } @@ -104,6 +105,7 @@ func TestProvisionerPrepare_Script(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["script"] = tf.Name() p = new(Provisioner) @@ -130,6 +132,7 @@ func TestProvisionerPrepare_ScriptAndInline(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["inline"] = []interface{}{"foo"} config["script"] = tf.Name() @@ -149,6 +152,7 @@ func TestProvisionerPrepare_ScriptAndScripts(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["inline"] = []interface{}{"foo"} config["scripts"] = []string{tf.Name()} @@ -175,6 +179,7 @@ func TestProvisionerPrepare_Scripts(t *testing.T) { t.Fatalf("error tempfile: %s", err) } defer os.Remove(tf.Name()) + defer tf.Close() config["scripts"] = []string{tf.Name()} p = new(Provisioner) @@ -322,11 +327,16 @@ func TestProvisionerProvision_Inline(t *testing.T) { } func TestProvisionerProvision_Scripts(t *testing.T) { - tempFile, _ := ioutil.TempFile("", "packer") - defer os.Remove(tempFile.Name()) + tf, err := ioutil.TempFile("", "packer") + if err != nil { + t.Fatalf("error tempfile: %s", err) + } + defer os.Remove(tf.Name()) + defer tf.Close() + config := testConfig() delete(config, 
"inline") - config["scripts"] = []string{tempFile.Name()} + config["scripts"] = []string{tf.Name()} config["packer_build_name"] = "foobuild" config["packer_builder_type"] = "footype" ui := testUi() @@ -334,7 +344,7 @@ func TestProvisionerProvision_Scripts(t *testing.T) { p := new(Provisioner) comm := new(packer.MockCommunicator) p.Prepare(config) - err := p.Provision(ui, comm) + err = p.Provision(ui, comm) if err != nil { t.Fatal("should not have error") } @@ -349,13 +359,18 @@ func TestProvisionerProvision_Scripts(t *testing.T) { } func TestProvisionerProvision_ScriptsWithEnvVars(t *testing.T) { - tempFile, _ := ioutil.TempFile("", "packer") + tf, err := ioutil.TempFile("", "packer") + if err != nil { + t.Fatalf("error tempfile: %s", err) + } + defer os.Remove(tf.Name()) + defer tf.Close() + config := testConfig() ui := testUi() - defer os.Remove(tempFile.Name()) delete(config, "inline") - config["scripts"] = []string{tempFile.Name()} + config["scripts"] = []string{tf.Name()} config["packer_build_name"] = "foobuild" config["packer_builder_type"] = "footype" @@ -368,7 +383,7 @@ func TestProvisionerProvision_ScriptsWithEnvVars(t *testing.T) { p := new(Provisioner) comm := new(packer.MockCommunicator) p.Prepare(config) - err := p.Provision(ui, comm) + err = p.Provision(ui, comm) if err != nil { t.Fatal("should not have error") } @@ -434,7 +449,7 @@ func TestRetryable(t *testing.T) { err := p.Prepare(config) err = p.retryable(retryMe) if err != nil { - t.Fatalf("should not have error retrying funuction") + t.Fatalf("should not have error retrying function") } count = 0 @@ -442,7 +457,7 @@ func TestRetryable(t *testing.T) { err = p.Prepare(config) err = p.retryable(retryMe) if err == nil { - t.Fatalf("should have error retrying funuction") + t.Fatalf("should have error retrying function") } } diff --git a/scripts/build.ps1 b/scripts/build.ps1 index 5603ea078..972dcd414 100644 --- a/scripts/build.ps1 +++ b/scripts/build.ps1 @@ -53,10 +53,18 @@ if ($LastExitCode -eq 0) { if (Test-Path env:PACKER_DEV) { $XC_OS=$(go.exe env GOOS) $XC_ARCH=$(go.exe env GOARCH) -} -elseif (-not (Test-Path env:XC_ARCH)) { - $XC_ARCH="386 amd64 arm arm64 ppc64le" - $XC_OS="linux darwin windows freebsd openbsd solaris" +} else { + if (Test-Path env:XC_ARCH) { + $XC_ARCH = $(Get-Content env:XC_ARCH) + } else { + $XC_ARCH="386 amd64 arm arm64 ppc64le" + } + + if (Test-Path env:XC_OS) { + $XC_OS = $(Get-Content env:XC_OS) + } else { + $XC_OS = "linux darwin windows freebsd openbsd solaris" + } } # Delete the old dir diff --git a/scripts/build.sh b/scripts/build.sh index 84bd7f67d..4f55b9893 100755 --- a/scripts/build.sh +++ b/scripts/build.sh @@ -1,6 +1,10 @@ #!/usr/bin/env bash -# + # This script builds the application from source for multiple platforms. +# Determine the arch/os combos we're building for +ALL_XC_ARCH="386 amd64 arm arm64 ppc64le" +ALL_XC_OS="linux darwin windows freebsd openbsd solaris" + set -e # Get the parent directory of where this script is. @@ -11,54 +15,88 @@ DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )" # Change into that directory cd $DIR -# Determine the arch/os combos we're building for -XC_ARCH=${XC_ARCH:-"386 amd64 arm arm64 ppc64le"} -XC_OS=${XC_OS:-linux darwin windows freebsd openbsd solaris} - # Delete the old dir echo "==> Removing old directory..." 
rm -f bin/* rm -rf pkg/* mkdir -p bin/ +# helpers for Cygwin-hosted builds +: ${OSTYPE:=`uname`} + +case $OSTYPE in + MINGW*|MSYS*|cygwin|CYGWIN*) + # cygwin only translates ';' to ':' on select environment variables + PATHSEP=';' + ;; + *) PATHSEP=':' +esac + +function convert_path() { + local flag + [ "${1:0:1}" = '-' ] && { flag="$1"; shift; } + + [ -n "$1" ] || return 0 + case ${OSTYPE:-`uname`} in + cygwin|CYGWIN*) + cygpath $flag -- "$1" + ;; + *) echo "$1" + esac +} + +# XXX works in MINGW? +which go &>/dev/null || PATH+=":`convert_path "${GOROOT:?}"`/bin" + +OLDIFS="$IFS" + +# make sure GOPATH is consistent - Windows binaries can't handle Cygwin-style paths +IFS="$PATHSEP" +for d in ${GOPATH:-$(go env GOPATH)}; do + _GOPATH+="${_GOPATH:+$PATHSEP}$(convert_path --windows "$d")" +done +GOPATH="$_GOPATH" + +# locate 'gox' and traverse GOPATH if needed +which "${GOX:=gox}" &>/dev/null || { + for d in $GOPATH; do + GOX="$(convert_path --unix "$d")/bin/gox" + [ -x "$GOX" ] && break || unset GOX + done +} +IFS="$OLDIFS" + # Build! echo "==> Building..." + +# If in dev mode, only build for ourself +if [ -n "${PACKER_DEV+x}" ]; then + XC_OS=$(go env GOOS) + XC_ARCH=$(go env GOARCH) +fi + set +e -gox \ - -os="${XC_OS}" \ - -arch="${XC_ARCH}" \ +${GOX:?command not found} \ + -os="${XC_OS:-$ALL_XC_OS}" \ + -arch="${XC_ARCH:-$ALL_XC_ARCH}" \ -osarch="!darwin/arm !darwin/arm64" \ -ldflags "${GOLDFLAGS}" \ -output "pkg/{{.OS}}_{{.Arch}}/packer" \ . set -e -# Move all the compiled things to the $GOPATH/bin -GOPATH=${GOPATH:-$(go env GOPATH)} -case $(uname) in - CYGWIN*) - GOPATH="$(cygpath $GOPATH)" - ;; -esac -OLDIFS=$IFS -IFS=: -case $(uname) in - MINGW*) - IFS=";" - ;; - MSYS*) - IFS=";" - ;; -esac +# trim GOPATH to first element +IFS="$PATHSEP" MAIN_GOPATH=($GOPATH) +MAIN_GOPATH="$(convert_path --unix "$MAIN_GOPATH")" IFS=$OLDIFS # Copy our OS/Arch to the bin/ directory echo "==> Copying binaries for this platform..." DEV_PLATFORM="./pkg/$(go env GOOS)_$(go env GOARCH)" -for F in $(find ${DEV_PLATFORM} -mindepth 1 -maxdepth 1 -type f); do - cp ${F} bin/ - cp ${F} ${MAIN_GOPATH}/bin/ +for F in $(find ${DEV_PLATFORM} -mindepth 1 -maxdepth 1 -type f 2>/dev/null); do + cp -v ${F} bin/ + cp -v ${F} ${MAIN_GOPATH}/bin/ done # Done! diff --git a/scripts/gofmtcheck.sh b/scripts/gofmtcheck.sh deleted file mode 100755 index 5b99bcdc4..000000000 --- a/scripts/gofmtcheck.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env bash - -# Check gofmt -echo "==> Checking that code complies with gofmt requirements..." -gofmt_files=$(gofmt -s -l ${@}) -if [[ -n ${gofmt_files} ]]; then - echo 'gofmt needs running on the following files:' - echo "${gofmt_files}" - echo "You can use the command: \`make fmt\` to reformat code." - exit 1 -fi -echo "Check passed." - -exit 0 diff --git a/scripts/vagrant-freebsd-priv-config.sh b/scripts/vagrant-freebsd-priv-config.sh index 30a2e8185..e2ea148dc 100755 --- a/scripts/vagrant-freebsd-priv-config.sh +++ b/scripts/vagrant-freebsd-priv-config.sh @@ -17,7 +17,7 @@ EOT pkg update pkg install -y \ - editors/vim-lite \ + editors/vim-console \ devel/git \ devel/gmake \ lang/go \ diff --git a/stdin.go b/stdin.go index 2df4153f5..758cc6864 100644 --- a/stdin.go +++ b/stdin.go @@ -5,6 +5,7 @@ import ( "log" "os" "os/signal" + "syscall" ) // setupStdin switches out stdin for a pipe. We do this so that we can @@ -26,7 +27,7 @@ func setupStdin() { // Register a signal handler for interrupt in order to close the // writer end of our pipe so that readers get EOF downstream. 
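Aside (illustrative, not part of the patch): the stdin.go hunk below widens the existing interrupt handler to also listen for SIGTERM; either signal closes the write end of the stdin pipe so downstream readers see EOF. A minimal sketch of that signal-to-EOF pattern on a throwaway pipe, independent of Packer's setupStdin:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}

	ch := make(chan os.Signal, 1)
	// Ctrl-C or a TERM signal closes the writer, which unblocks the reader below.
	signal.Notify(ch, os.Interrupt, syscall.SIGTERM)
	go func() {
		defer signal.Stop(ch)
		<-ch
		w.Close()
	}()

	// Blocks until the write end is closed by the signal handler.
	data, _ := ioutil.ReadAll(r)
	fmt.Printf("read %d bytes before EOF\n", len(data))
}
```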
ch := make(chan os.Signal, 1) - signal.Notify(ch, os.Interrupt) + signal.Notify(ch, os.Interrupt, syscall.SIGTERM) go func() { defer signal.Stop(ch) diff --git a/template/interpolate/funcs.go b/template/interpolate/funcs.go index ee5e21ccc..924c13d73 100644 --- a/template/interpolate/funcs.go +++ b/template/interpolate/funcs.go @@ -11,6 +11,7 @@ import ( "time" "github.com/hashicorp/packer/common/uuid" + "github.com/hashicorp/packer/version" ) // InitTime is the UTC time when this package was initialized. It is @@ -24,15 +25,16 @@ func init() { // Funcs are the interpolation funcs that are available within interpolations. var FuncGens = map[string]FuncGenerator{ - "build_name": funcGenBuildName, - "build_type": funcGenBuildType, - "env": funcGenEnv, - "isotime": funcGenIsotime, - "pwd": funcGenPwd, - "template_dir": funcGenTemplateDir, - "timestamp": funcGenTimestamp, - "uuid": funcGenUuid, - "user": funcGenUser, + "build_name": funcGenBuildName, + "build_type": funcGenBuildType, + "env": funcGenEnv, + "isotime": funcGenIsotime, + "pwd": funcGenPwd, + "template_dir": funcGenTemplateDir, + "timestamp": funcGenTimestamp, + "uuid": funcGenUuid, + "user": funcGenUser, + "packer_version": funcGenPackerVersion, "upper": funcGenPrimitive(strings.ToUpper), "lower": funcGenPrimitive(strings.ToLower), @@ -152,3 +154,9 @@ func funcGenUuid(ctx *Context) interface{} { return uuid.TimeOrderedUUID() } } + +func funcGenPackerVersion(ctx *Context) interface{} { + return func() string { + return version.FormattedVersion() + } +} diff --git a/template/interpolate/funcs_test.go b/template/interpolate/funcs_test.go index ab8ee36a4..eca121025 100644 --- a/template/interpolate/funcs_test.go +++ b/template/interpolate/funcs_test.go @@ -4,8 +4,11 @@ import ( "os" "path/filepath" "strconv" + "strings" "testing" "time" + + "github.com/hashicorp/packer/version" ) func TestFuncBuildName(t *testing.T) { @@ -257,3 +260,21 @@ func TestFuncUser(t *testing.T) { } } } + +func TestFuncPackerVersion(t *testing.T) { + template := `{{packer_version}}` + + ctx := &Context{} + i := &I{Value: template} + + result, err := i.Render(ctx) + if err != nil { + t.Fatalf("Input: %s\n\nerr: %s", template, err) + } + + // Only match the X.Y.Z portion of the whole version string. + if !strings.HasPrefix(result, version.Version) { + t.Fatalf("Expected input to include: %s\n\nGot: %s", + version.Version, result) + } +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/NOTICE b/vendor/github.com/Azure/azure-sdk-for-go/NOTICE new file mode 100644 index 000000000..2d1d72608 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/NOTICE @@ -0,0 +1,5 @@ +Microsoft Azure-SDK-for-Go +Copyright 2014-2017 Microsoft + +This product includes software developed at +the Microsoft Corporation (https://www.microsoft.com). diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/models.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/models.go deleted file mode 100644 index a9524daef..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/models.go +++ /dev/null @@ -1,1342 +0,0 @@ -package compute - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/date" - "github.com/Azure/go-autorest/autorest/to" - "net/http" -) - -// CachingTypes enumerates the values for caching types. -type CachingTypes string - -const ( - // None specifies the none state for caching types. - None CachingTypes = "None" - // ReadOnly specifies the read only state for caching types. - ReadOnly CachingTypes = "ReadOnly" - // ReadWrite specifies the read write state for caching types. - ReadWrite CachingTypes = "ReadWrite" -) - -// ComponentNames enumerates the values for component names. -type ComponentNames string - -const ( - // MicrosoftWindowsShellSetup specifies the microsoft windows shell setup - // state for component names. - MicrosoftWindowsShellSetup ComponentNames = "Microsoft-Windows-Shell-Setup" -) - -// DiskCreateOptionTypes enumerates the values for disk create option types. -type DiskCreateOptionTypes string - -const ( - // Attach specifies the attach state for disk create option types. - Attach DiskCreateOptionTypes = "attach" - // Empty specifies the empty state for disk create option types. - Empty DiskCreateOptionTypes = "empty" - // FromImage specifies the from image state for disk create option types. - FromImage DiskCreateOptionTypes = "fromImage" -) - -// InstanceViewTypes enumerates the values for instance view types. -type InstanceViewTypes string - -const ( - // InstanceView specifies the instance view state for instance view types. - InstanceView InstanceViewTypes = "instanceView" -) - -// OperatingSystemStateTypes enumerates the values for operating system state -// types. -type OperatingSystemStateTypes string - -const ( - // Generalized specifies the generalized state for operating system state - // types. - Generalized OperatingSystemStateTypes = "Generalized" - // Specialized specifies the specialized state for operating system state - // types. - Specialized OperatingSystemStateTypes = "Specialized" -) - -// OperatingSystemTypes enumerates the values for operating system types. -type OperatingSystemTypes string - -const ( - // Linux specifies the linux state for operating system types. - Linux OperatingSystemTypes = "Linux" - // Windows specifies the windows state for operating system types. - Windows OperatingSystemTypes = "Windows" -) - -// PassNames enumerates the values for pass names. -type PassNames string - -const ( - // OobeSystem specifies the oobe system state for pass names. - OobeSystem PassNames = "oobeSystem" -) - -// ProtocolTypes enumerates the values for protocol types. -type ProtocolTypes string - -const ( - // HTTP specifies the http state for protocol types. - HTTP ProtocolTypes = "Http" - // HTTPS specifies the https state for protocol types. - HTTPS ProtocolTypes = "Https" -) - -// ResourceIdentityType enumerates the values for resource identity type. 
-type ResourceIdentityType string - -const ( - // SystemAssigned specifies the system assigned state for resource identity - // type. - SystemAssigned ResourceIdentityType = "SystemAssigned" -) - -// SettingNames enumerates the values for setting names. -type SettingNames string - -const ( - // AutoLogon specifies the auto logon state for setting names. - AutoLogon SettingNames = "AutoLogon" - // FirstLogonCommands specifies the first logon commands state for setting - // names. - FirstLogonCommands SettingNames = "FirstLogonCommands" -) - -// StatusLevelTypes enumerates the values for status level types. -type StatusLevelTypes string - -const ( - // Error specifies the error state for status level types. - Error StatusLevelTypes = "Error" - // Info specifies the info state for status level types. - Info StatusLevelTypes = "Info" - // Warning specifies the warning state for status level types. - Warning StatusLevelTypes = "Warning" -) - -// StorageAccountTypes enumerates the values for storage account types. -type StorageAccountTypes string - -const ( - // PremiumLRS specifies the premium lrs state for storage account types. - PremiumLRS StorageAccountTypes = "Premium_LRS" - // StandardLRS specifies the standard lrs state for storage account types. - StandardLRS StorageAccountTypes = "Standard_LRS" -) - -// UpgradeMode enumerates the values for upgrade mode. -type UpgradeMode string - -const ( - // Automatic specifies the automatic state for upgrade mode. - Automatic UpgradeMode = "Automatic" - // Manual specifies the manual state for upgrade mode. - Manual UpgradeMode = "Manual" -) - -// VirtualMachineScaleSetSkuScaleType enumerates the values for virtual machine -// scale set sku scale type. -type VirtualMachineScaleSetSkuScaleType string - -const ( - // VirtualMachineScaleSetSkuScaleTypeAutomatic specifies the virtual - // machine scale set sku scale type automatic state for virtual machine - // scale set sku scale type. - VirtualMachineScaleSetSkuScaleTypeAutomatic VirtualMachineScaleSetSkuScaleType = "Automatic" - // VirtualMachineScaleSetSkuScaleTypeNone specifies the virtual machine - // scale set sku scale type none state for virtual machine scale set sku - // scale type. - VirtualMachineScaleSetSkuScaleTypeNone VirtualMachineScaleSetSkuScaleType = "None" -) - -// VirtualMachineSizeTypes enumerates the values for virtual machine size -// types. -type VirtualMachineSizeTypes string - -const ( - // BasicA0 specifies the basic a0 state for virtual machine size types. - BasicA0 VirtualMachineSizeTypes = "Basic_A0" - // BasicA1 specifies the basic a1 state for virtual machine size types. - BasicA1 VirtualMachineSizeTypes = "Basic_A1" - // BasicA2 specifies the basic a2 state for virtual machine size types. - BasicA2 VirtualMachineSizeTypes = "Basic_A2" - // BasicA3 specifies the basic a3 state for virtual machine size types. - BasicA3 VirtualMachineSizeTypes = "Basic_A3" - // BasicA4 specifies the basic a4 state for virtual machine size types. - BasicA4 VirtualMachineSizeTypes = "Basic_A4" - // StandardA0 specifies the standard a0 state for virtual machine size - // types. - StandardA0 VirtualMachineSizeTypes = "Standard_A0" - // StandardA1 specifies the standard a1 state for virtual machine size - // types. - StandardA1 VirtualMachineSizeTypes = "Standard_A1" - // StandardA10 specifies the standard a10 state for virtual machine size - // types. - StandardA10 VirtualMachineSizeTypes = "Standard_A10" - // StandardA11 specifies the standard a11 state for virtual machine size - // types. 
- StandardA11 VirtualMachineSizeTypes = "Standard_A11" - // StandardA2 specifies the standard a2 state for virtual machine size - // types. - StandardA2 VirtualMachineSizeTypes = "Standard_A2" - // StandardA3 specifies the standard a3 state for virtual machine size - // types. - StandardA3 VirtualMachineSizeTypes = "Standard_A3" - // StandardA4 specifies the standard a4 state for virtual machine size - // types. - StandardA4 VirtualMachineSizeTypes = "Standard_A4" - // StandardA5 specifies the standard a5 state for virtual machine size - // types. - StandardA5 VirtualMachineSizeTypes = "Standard_A5" - // StandardA6 specifies the standard a6 state for virtual machine size - // types. - StandardA6 VirtualMachineSizeTypes = "Standard_A6" - // StandardA7 specifies the standard a7 state for virtual machine size - // types. - StandardA7 VirtualMachineSizeTypes = "Standard_A7" - // StandardA8 specifies the standard a8 state for virtual machine size - // types. - StandardA8 VirtualMachineSizeTypes = "Standard_A8" - // StandardA9 specifies the standard a9 state for virtual machine size - // types. - StandardA9 VirtualMachineSizeTypes = "Standard_A9" - // StandardD1 specifies the standard d1 state for virtual machine size - // types. - StandardD1 VirtualMachineSizeTypes = "Standard_D1" - // StandardD11 specifies the standard d11 state for virtual machine size - // types. - StandardD11 VirtualMachineSizeTypes = "Standard_D11" - // StandardD11V2 specifies the standard d11v2 state for virtual machine - // size types. - StandardD11V2 VirtualMachineSizeTypes = "Standard_D11_v2" - // StandardD12 specifies the standard d12 state for virtual machine size - // types. - StandardD12 VirtualMachineSizeTypes = "Standard_D12" - // StandardD12V2 specifies the standard d12v2 state for virtual machine - // size types. - StandardD12V2 VirtualMachineSizeTypes = "Standard_D12_v2" - // StandardD13 specifies the standard d13 state for virtual machine size - // types. - StandardD13 VirtualMachineSizeTypes = "Standard_D13" - // StandardD13V2 specifies the standard d13v2 state for virtual machine - // size types. - StandardD13V2 VirtualMachineSizeTypes = "Standard_D13_v2" - // StandardD14 specifies the standard d14 state for virtual machine size - // types. - StandardD14 VirtualMachineSizeTypes = "Standard_D14" - // StandardD14V2 specifies the standard d14v2 state for virtual machine - // size types. - StandardD14V2 VirtualMachineSizeTypes = "Standard_D14_v2" - // StandardD15V2 specifies the standard d15v2 state for virtual machine - // size types. - StandardD15V2 VirtualMachineSizeTypes = "Standard_D15_v2" - // StandardD1V2 specifies the standard d1v2 state for virtual machine size - // types. - StandardD1V2 VirtualMachineSizeTypes = "Standard_D1_v2" - // StandardD2 specifies the standard d2 state for virtual machine size - // types. - StandardD2 VirtualMachineSizeTypes = "Standard_D2" - // StandardD2V2 specifies the standard d2v2 state for virtual machine size - // types. - StandardD2V2 VirtualMachineSizeTypes = "Standard_D2_v2" - // StandardD3 specifies the standard d3 state for virtual machine size - // types. - StandardD3 VirtualMachineSizeTypes = "Standard_D3" - // StandardD3V2 specifies the standard d3v2 state for virtual machine size - // types. - StandardD3V2 VirtualMachineSizeTypes = "Standard_D3_v2" - // StandardD4 specifies the standard d4 state for virtual machine size - // types. 
- StandardD4 VirtualMachineSizeTypes = "Standard_D4" - // StandardD4V2 specifies the standard d4v2 state for virtual machine size - // types. - StandardD4V2 VirtualMachineSizeTypes = "Standard_D4_v2" - // StandardD5V2 specifies the standard d5v2 state for virtual machine size - // types. - StandardD5V2 VirtualMachineSizeTypes = "Standard_D5_v2" - // StandardDS1 specifies the standard ds1 state for virtual machine size - // types. - StandardDS1 VirtualMachineSizeTypes = "Standard_DS1" - // StandardDS11 specifies the standard ds11 state for virtual machine size - // types. - StandardDS11 VirtualMachineSizeTypes = "Standard_DS11" - // StandardDS11V2 specifies the standard ds11v2 state for virtual machine - // size types. - StandardDS11V2 VirtualMachineSizeTypes = "Standard_DS11_v2" - // StandardDS12 specifies the standard ds12 state for virtual machine size - // types. - StandardDS12 VirtualMachineSizeTypes = "Standard_DS12" - // StandardDS12V2 specifies the standard ds12v2 state for virtual machine - // size types. - StandardDS12V2 VirtualMachineSizeTypes = "Standard_DS12_v2" - // StandardDS13 specifies the standard ds13 state for virtual machine size - // types. - StandardDS13 VirtualMachineSizeTypes = "Standard_DS13" - // StandardDS13V2 specifies the standard ds13v2 state for virtual machine - // size types. - StandardDS13V2 VirtualMachineSizeTypes = "Standard_DS13_v2" - // StandardDS14 specifies the standard ds14 state for virtual machine size - // types. - StandardDS14 VirtualMachineSizeTypes = "Standard_DS14" - // StandardDS14V2 specifies the standard ds14v2 state for virtual machine - // size types. - StandardDS14V2 VirtualMachineSizeTypes = "Standard_DS14_v2" - // StandardDS15V2 specifies the standard ds15v2 state for virtual machine - // size types. - StandardDS15V2 VirtualMachineSizeTypes = "Standard_DS15_v2" - // StandardDS1V2 specifies the standard ds1v2 state for virtual machine - // size types. - StandardDS1V2 VirtualMachineSizeTypes = "Standard_DS1_v2" - // StandardDS2 specifies the standard ds2 state for virtual machine size - // types. - StandardDS2 VirtualMachineSizeTypes = "Standard_DS2" - // StandardDS2V2 specifies the standard ds2v2 state for virtual machine - // size types. - StandardDS2V2 VirtualMachineSizeTypes = "Standard_DS2_v2" - // StandardDS3 specifies the standard ds3 state for virtual machine size - // types. - StandardDS3 VirtualMachineSizeTypes = "Standard_DS3" - // StandardDS3V2 specifies the standard ds3v2 state for virtual machine - // size types. - StandardDS3V2 VirtualMachineSizeTypes = "Standard_DS3_v2" - // StandardDS4 specifies the standard ds4 state for virtual machine size - // types. - StandardDS4 VirtualMachineSizeTypes = "Standard_DS4" - // StandardDS4V2 specifies the standard ds4v2 state for virtual machine - // size types. - StandardDS4V2 VirtualMachineSizeTypes = "Standard_DS4_v2" - // StandardDS5V2 specifies the standard ds5v2 state for virtual machine - // size types. - StandardDS5V2 VirtualMachineSizeTypes = "Standard_DS5_v2" - // StandardG1 specifies the standard g1 state for virtual machine size - // types. - StandardG1 VirtualMachineSizeTypes = "Standard_G1" - // StandardG2 specifies the standard g2 state for virtual machine size - // types. - StandardG2 VirtualMachineSizeTypes = "Standard_G2" - // StandardG3 specifies the standard g3 state for virtual machine size - // types. - StandardG3 VirtualMachineSizeTypes = "Standard_G3" - // StandardG4 specifies the standard g4 state for virtual machine size - // types. 
- StandardG4 VirtualMachineSizeTypes = "Standard_G4" - // StandardG5 specifies the standard g5 state for virtual machine size - // types. - StandardG5 VirtualMachineSizeTypes = "Standard_G5" - // StandardGS1 specifies the standard gs1 state for virtual machine size - // types. - StandardGS1 VirtualMachineSizeTypes = "Standard_GS1" - // StandardGS2 specifies the standard gs2 state for virtual machine size - // types. - StandardGS2 VirtualMachineSizeTypes = "Standard_GS2" - // StandardGS3 specifies the standard gs3 state for virtual machine size - // types. - StandardGS3 VirtualMachineSizeTypes = "Standard_GS3" - // StandardGS4 specifies the standard gs4 state for virtual machine size - // types. - StandardGS4 VirtualMachineSizeTypes = "Standard_GS4" - // StandardGS5 specifies the standard gs5 state for virtual machine size - // types. - StandardGS5 VirtualMachineSizeTypes = "Standard_GS5" -) - -// AdditionalUnattendContent is additional XML formatted information that can -// be included in the Unattend.xml file, which is used by Windows Setup. -// Contents are defined by setting name, component name, and the pass in which -// the content is a applied. -type AdditionalUnattendContent struct { - PassName PassNames `json:"passName,omitempty"` - ComponentName ComponentNames `json:"componentName,omitempty"` - SettingName SettingNames `json:"settingName,omitempty"` - Content *string `json:"content,omitempty"` -} - -// APIEntityReference is the API entity reference. -type APIEntityReference struct { - ID *string `json:"id,omitempty"` -} - -// APIError is api error. -type APIError struct { - Details *[]APIErrorBase `json:"details,omitempty"` - Innererror *InnerError `json:"innererror,omitempty"` - Code *string `json:"code,omitempty"` - Target *string `json:"target,omitempty"` - Message *string `json:"message,omitempty"` -} - -// APIErrorBase is api error base. -type APIErrorBase struct { - Code *string `json:"code,omitempty"` - Target *string `json:"target,omitempty"` - Message *string `json:"message,omitempty"` -} - -// AvailabilitySet is create or update availability set parameters. -type AvailabilitySet struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *AvailabilitySetProperties `json:"properties,omitempty"` - Sku *Sku `json:"sku,omitempty"` -} - -// AvailabilitySetListResult is the List Availability Set operation response. -type AvailabilitySetListResult struct { - autorest.Response `json:"-"` - Value *[]AvailabilitySet `json:"value,omitempty"` -} - -// AvailabilitySetProperties is the instance view of a resource. -type AvailabilitySetProperties struct { - PlatformUpdateDomainCount *int32 `json:"platformUpdateDomainCount,omitempty"` - PlatformFaultDomainCount *int32 `json:"platformFaultDomainCount,omitempty"` - VirtualMachines *[]SubResource `json:"virtualMachines,omitempty"` - Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` - Managed *bool `json:"managed,omitempty"` -} - -// BootDiagnostics is describes Boot Diagnostics. -type BootDiagnostics struct { - Enabled *bool `json:"enabled,omitempty"` - StorageURI *string `json:"storageUri,omitempty"` -} - -// BootDiagnosticsInstanceView is the instance view of a virtual machine boot -// diagnostics. 
-type BootDiagnosticsInstanceView struct { - ConsoleScreenshotBlobURI *string `json:"consoleScreenshotBlobUri,omitempty"` - SerialConsoleLogBlobURI *string `json:"serialConsoleLogBlobUri,omitempty"` -} - -// DataDisk is describes a data disk. -type DataDisk struct { - Lun *int32 `json:"lun,omitempty"` - Name *string `json:"name,omitempty"` - Vhd *VirtualHardDisk `json:"vhd,omitempty"` - Image *VirtualHardDisk `json:"image,omitempty"` - Caching CachingTypes `json:"caching,omitempty"` - CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` - DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` - ManagedDisk *ManagedDiskParameters `json:"managedDisk,omitempty"` -} - -// DataDiskImage is contains the data disk images information. -type DataDiskImage struct { - Lun *int32 `json:"lun,omitempty"` -} - -// DiagnosticsProfile is describes a diagnostics profile. -type DiagnosticsProfile struct { - BootDiagnostics *BootDiagnostics `json:"bootDiagnostics,omitempty"` -} - -// DiskEncryptionSettings is describes a Encryption Settings for a Disk -type DiskEncryptionSettings struct { - DiskEncryptionKey *KeyVaultSecretReference `json:"diskEncryptionKey,omitempty"` - KeyEncryptionKey *KeyVaultKeyReference `json:"keyEncryptionKey,omitempty"` - Enabled *bool `json:"enabled,omitempty"` -} - -// DiskInstanceView is the instance view of the disk. -type DiskInstanceView struct { - Name *string `json:"name,omitempty"` - Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` -} - -// HardwareProfile is describes a hardware profile. -type HardwareProfile struct { - VMSize VirtualMachineSizeTypes `json:"vmSize,omitempty"` -} - -// Image is describes an Image. -type Image struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *ImageProperties `json:"properties,omitempty"` -} - -// ImageDataDisk is describes a data disk. -type ImageDataDisk struct { - Lun *int32 `json:"lun,omitempty"` - Snapshot *SubResource `json:"snapshot,omitempty"` - ManagedDisk *SubResource `json:"managedDisk,omitempty"` - BlobURI *string `json:"blobUri,omitempty"` - Caching CachingTypes `json:"caching,omitempty"` - DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` -} - -// ImageListResult is the List Image operation response. -type ImageListResult struct { - autorest.Response `json:"-"` - Value *[]Image `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ImageListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ImageListResult) ImageListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ImageOSDisk is describes an Operating System disk. -type ImageOSDisk struct { - OsType OperatingSystemTypes `json:"osType,omitempty"` - OsState OperatingSystemStateTypes `json:"osState,omitempty"` - Snapshot *SubResource `json:"snapshot,omitempty"` - ManagedDisk *SubResource `json:"managedDisk,omitempty"` - BlobURI *string `json:"blobUri,omitempty"` - Caching CachingTypes `json:"caching,omitempty"` - DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` -} - -// ImageProperties is describes the properties of an Image. 
-type ImageProperties struct { - SourceVirtualMachine *SubResource `json:"sourceVirtualMachine,omitempty"` - StorageProfile *ImageStorageProfile `json:"storageProfile,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ImageReference is the image reference. -type ImageReference struct { - ID *string `json:"id,omitempty"` - Publisher *string `json:"publisher,omitempty"` - Offer *string `json:"offer,omitempty"` - Sku *string `json:"sku,omitempty"` - Version *string `json:"version,omitempty"` -} - -// ImageStorageProfile is describes a storage profile. -type ImageStorageProfile struct { - OsDisk *ImageOSDisk `json:"osDisk,omitempty"` - DataDisks *[]ImageDataDisk `json:"dataDisks,omitempty"` -} - -// InnerError is inner error details. -type InnerError struct { - Exceptiontype *string `json:"exceptiontype,omitempty"` - Errordetail *string `json:"errordetail,omitempty"` -} - -// InstanceViewStatus is instance view status. -type InstanceViewStatus struct { - Code *string `json:"code,omitempty"` - Level StatusLevelTypes `json:"level,omitempty"` - DisplayStatus *string `json:"displayStatus,omitempty"` - Message *string `json:"message,omitempty"` - Time *date.Time `json:"time,omitempty"` -} - -// KeyVaultKeyReference is describes a reference to Key Vault Key -type KeyVaultKeyReference struct { - KeyURL *string `json:"keyUrl,omitempty"` - SourceVault *SubResource `json:"sourceVault,omitempty"` -} - -// KeyVaultSecretReference is describes a reference to Key Vault Secret -type KeyVaultSecretReference struct { - SecretURL *string `json:"secretUrl,omitempty"` - SourceVault *SubResource `json:"sourceVault,omitempty"` -} - -// LinuxConfiguration is describes Windows configuration of the OS Profile. -type LinuxConfiguration struct { - DisablePasswordAuthentication *bool `json:"disablePasswordAuthentication,omitempty"` - SSH *SSHConfiguration `json:"ssh,omitempty"` -} - -// ListUsagesResult is the List Usages operation response. -type ListUsagesResult struct { - autorest.Response `json:"-"` - Value *[]Usage `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ListUsagesResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ListUsagesResult) ListUsagesResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ListVirtualMachineExtensionImage is -type ListVirtualMachineExtensionImage struct { - autorest.Response `json:"-"` - Value *[]VirtualMachineExtensionImage `json:"value,omitempty"` -} - -// ListVirtualMachineImageResource is -type ListVirtualMachineImageResource struct { - autorest.Response `json:"-"` - Value *[]VirtualMachineImageResource `json:"value,omitempty"` -} - -// LongRunningOperationProperties is compute-specific operation properties, -// including output -type LongRunningOperationProperties struct { - Output *map[string]interface{} `json:"output,omitempty"` -} - -// ManagedDiskParameters is the parameters of a managed disk. -type ManagedDiskParameters struct { - ID *string `json:"id,omitempty"` - StorageAccountType StorageAccountTypes `json:"storageAccountType,omitempty"` -} - -// NetworkInterfaceReference is describes a network interface reference. 
-type NetworkInterfaceReference struct { - ID *string `json:"id,omitempty"` - *NetworkInterfaceReferenceProperties `json:"properties,omitempty"` -} - -// NetworkInterfaceReferenceProperties is describes a network interface -// reference properties. -type NetworkInterfaceReferenceProperties struct { - Primary *bool `json:"primary,omitempty"` -} - -// NetworkProfile is describes a network profile. -type NetworkProfile struct { - NetworkInterfaces *[]NetworkInterfaceReference `json:"networkInterfaces,omitempty"` -} - -// OperationStatusResponse is operation status response -type OperationStatusResponse struct { - autorest.Response `json:"-"` - Name *string `json:"name,omitempty"` - Status *string `json:"status,omitempty"` - StartTime *date.Time `json:"startTime,omitempty"` - EndTime *date.Time `json:"endTime,omitempty"` - Error *APIError `json:"error,omitempty"` -} - -// OSDisk is describes an Operating System disk. -type OSDisk struct { - OsType OperatingSystemTypes `json:"osType,omitempty"` - EncryptionSettings *DiskEncryptionSettings `json:"encryptionSettings,omitempty"` - Name *string `json:"name,omitempty"` - Vhd *VirtualHardDisk `json:"vhd,omitempty"` - Image *VirtualHardDisk `json:"image,omitempty"` - Caching CachingTypes `json:"caching,omitempty"` - CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` - DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` - ManagedDisk *ManagedDiskParameters `json:"managedDisk,omitempty"` -} - -// OSDiskImage is contains the os disk image information. -type OSDiskImage struct { - OperatingSystem OperatingSystemTypes `json:"operatingSystem,omitempty"` -} - -// OSProfile is describes an OS profile. -type OSProfile struct { - ComputerName *string `json:"computerName,omitempty"` - AdminUsername *string `json:"adminUsername,omitempty"` - AdminPassword *string `json:"adminPassword,omitempty"` - CustomData *string `json:"customData,omitempty"` - WindowsConfiguration *WindowsConfiguration `json:"windowsConfiguration,omitempty"` - LinuxConfiguration *LinuxConfiguration `json:"linuxConfiguration,omitempty"` - Secrets *[]VaultSecretGroup `json:"secrets,omitempty"` -} - -// Plan is plan for the resource. -type Plan struct { - Name *string `json:"name,omitempty"` - Publisher *string `json:"publisher,omitempty"` - Product *string `json:"product,omitempty"` - PromotionCode *string `json:"promotionCode,omitempty"` -} - -// PurchasePlan is used for establishing the purchase context of any 3rd Party -// artifact through MarketPlace. -type PurchasePlan struct { - Publisher *string `json:"publisher,omitempty"` - Name *string `json:"name,omitempty"` - Product *string `json:"product,omitempty"` -} - -// Resource is the Resource model definition. -type Resource struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// Sku is describes a virtual machine scale set sku. -type Sku struct { - Name *string `json:"name,omitempty"` - Tier *string `json:"tier,omitempty"` - Capacity *int64 `json:"capacity,omitempty"` -} - -// SSHConfiguration is sSH configuration for Linux based VMs running on Azure -type SSHConfiguration struct { - PublicKeys *[]SSHPublicKey `json:"publicKeys,omitempty"` -} - -// SSHPublicKey is contains information about SSH certificate public key and -// the path on the Linux VM where the public key is placed. 
-type SSHPublicKey struct { - Path *string `json:"path,omitempty"` - KeyData *string `json:"keyData,omitempty"` -} - -// StorageProfile is describes a storage profile. -type StorageProfile struct { - ImageReference *ImageReference `json:"imageReference,omitempty"` - OsDisk *OSDisk `json:"osDisk,omitempty"` - DataDisks *[]DataDisk `json:"dataDisks,omitempty"` -} - -// SubResource is -type SubResource struct { - ID *string `json:"id,omitempty"` -} - -// SubResourceReadOnly is -type SubResourceReadOnly struct { - ID *string `json:"id,omitempty"` -} - -// UpgradePolicy is describes an upgrade policy - automatic or manual. -type UpgradePolicy struct { - Mode UpgradeMode `json:"mode,omitempty"` -} - -// Usage is describes Compute Resource Usage. -type Usage struct { - Unit *string `json:"unit,omitempty"` - CurrentValue *int32 `json:"currentValue,omitempty"` - Limit *int64 `json:"limit,omitempty"` - Name *UsageName `json:"name,omitempty"` -} - -// UsageName is the Usage Names. -type UsageName struct { - Value *string `json:"value,omitempty"` - LocalizedValue *string `json:"localizedValue,omitempty"` -} - -// VaultCertificate is describes a single certificate reference in a Key Vault, -// and where the certificate should reside on the VM. -type VaultCertificate struct { - CertificateURL *string `json:"certificateUrl,omitempty"` - CertificateStore *string `json:"certificateStore,omitempty"` -} - -// VaultSecretGroup is describes a set of certificates which are all in the -// same Key Vault. -type VaultSecretGroup struct { - SourceVault *SubResource `json:"sourceVault,omitempty"` - VaultCertificates *[]VaultCertificate `json:"vaultCertificates,omitempty"` -} - -// VirtualHardDisk is describes the uri of a disk. -type VirtualHardDisk struct { - URI *string `json:"uri,omitempty"` -} - -// VirtualMachine is describes a Virtual Machine. -type VirtualMachine struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - Plan *Plan `json:"plan,omitempty"` - *VirtualMachineProperties `json:"properties,omitempty"` - Resources *[]VirtualMachineExtension `json:"resources,omitempty"` - Identity *VirtualMachineIdentity `json:"identity,omitempty"` -} - -// VirtualMachineAgentInstanceView is the instance view of the VM Agent running -// on the virtual machine. -type VirtualMachineAgentInstanceView struct { - VMAgentVersion *string `json:"vmAgentVersion,omitempty"` - ExtensionHandlers *[]VirtualMachineExtensionHandlerInstanceView `json:"extensionHandlers,omitempty"` - Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` -} - -// VirtualMachineCaptureParameters is capture Virtual Machine parameters. -type VirtualMachineCaptureParameters struct { - VhdPrefix *string `json:"vhdPrefix,omitempty"` - DestinationContainerName *string `json:"destinationContainerName,omitempty"` - OverwriteVhds *bool `json:"overwriteVhds,omitempty"` -} - -// VirtualMachineCaptureResult is resource Id. 
-type VirtualMachineCaptureResult struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *VirtualMachineCaptureResultProperties `json:"properties,omitempty"` -} - -// VirtualMachineCaptureResultProperties is compute-specific operation -// properties, including output -type VirtualMachineCaptureResultProperties struct { - Output *map[string]interface{} `json:"output,omitempty"` -} - -// VirtualMachineExtension is describes a Virtual Machine Extension. -type VirtualMachineExtension struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *VirtualMachineExtensionProperties `json:"properties,omitempty"` -} - -// VirtualMachineExtensionHandlerInstanceView is the instance view of a virtual -// machine extension handler. -type VirtualMachineExtensionHandlerInstanceView struct { - Type *string `json:"type,omitempty"` - TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` - Status *InstanceViewStatus `json:"status,omitempty"` -} - -// VirtualMachineExtensionImage is describes a Virtual Machine Extension Image. -type VirtualMachineExtensionImage struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *VirtualMachineExtensionImageProperties `json:"properties,omitempty"` -} - -// VirtualMachineExtensionImageProperties is describes the properties of a -// Virtual Machine Extension Image. -type VirtualMachineExtensionImageProperties struct { - OperatingSystem *string `json:"operatingSystem,omitempty"` - ComputeRole *string `json:"computeRole,omitempty"` - HandlerSchema *string `json:"handlerSchema,omitempty"` - VMScaleSetEnabled *bool `json:"vmScaleSetEnabled,omitempty"` - SupportsMultipleExtensions *bool `json:"supportsMultipleExtensions,omitempty"` -} - -// VirtualMachineExtensionInstanceView is the instance view of a virtual -// machine extension. -type VirtualMachineExtensionInstanceView struct { - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` - Substatuses *[]InstanceViewStatus `json:"substatuses,omitempty"` - Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` -} - -// VirtualMachineExtensionProperties is describes the properties of a Virtual -// Machine Extension. -type VirtualMachineExtensionProperties struct { - ForceUpdateTag *string `json:"forceUpdateTag,omitempty"` - Publisher *string `json:"publisher,omitempty"` - Type *string `json:"type,omitempty"` - TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` - AutoUpgradeMinorVersion *bool `json:"autoUpgradeMinorVersion,omitempty"` - Settings *map[string]interface{} `json:"settings,omitempty"` - ProtectedSettings *map[string]interface{} `json:"protectedSettings,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` - InstanceView *VirtualMachineExtensionInstanceView `json:"instanceView,omitempty"` -} - -// VirtualMachineIdentity is identity for the virtual machine. 
-type VirtualMachineIdentity struct { - PrincipalID *string `json:"principalId,omitempty"` - TenantID *string `json:"tenantId,omitempty"` - Type ResourceIdentityType `json:"type,omitempty"` -} - -// VirtualMachineImage is describes a Virtual Machine Image. -type VirtualMachineImage struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *VirtualMachineImageProperties `json:"properties,omitempty"` -} - -// VirtualMachineImageProperties is describes the properties of a Virtual -// Machine Image. -type VirtualMachineImageProperties struct { - Plan *PurchasePlan `json:"plan,omitempty"` - OsDiskImage *OSDiskImage `json:"osDiskImage,omitempty"` - DataDiskImages *[]DataDiskImage `json:"dataDiskImages,omitempty"` -} - -// VirtualMachineImageResource is virtual machine image resource information. -type VirtualMachineImageResource struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// VirtualMachineInstanceView is the instance view of a virtual machine. -type VirtualMachineInstanceView struct { - PlatformUpdateDomain *int32 `json:"platformUpdateDomain,omitempty"` - PlatformFaultDomain *int32 `json:"platformFaultDomain,omitempty"` - RdpThumbPrint *string `json:"rdpThumbPrint,omitempty"` - VMAgent *VirtualMachineAgentInstanceView `json:"vmAgent,omitempty"` - Disks *[]DiskInstanceView `json:"disks,omitempty"` - Extensions *[]VirtualMachineExtensionInstanceView `json:"extensions,omitempty"` - BootDiagnostics *BootDiagnosticsInstanceView `json:"bootDiagnostics,omitempty"` - Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` -} - -// VirtualMachineListResult is the List Virtual Machine operation response. -type VirtualMachineListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualMachine `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualMachineListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client VirtualMachineListResult) VirtualMachineListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualMachineProperties is describes the properties of a Virtual Machine. -type VirtualMachineProperties struct { - HardwareProfile *HardwareProfile `json:"hardwareProfile,omitempty"` - StorageProfile *StorageProfile `json:"storageProfile,omitempty"` - OsProfile *OSProfile `json:"osProfile,omitempty"` - NetworkProfile *NetworkProfile `json:"networkProfile,omitempty"` - DiagnosticsProfile *DiagnosticsProfile `json:"diagnosticsProfile,omitempty"` - AvailabilitySet *SubResource `json:"availabilitySet,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` - InstanceView *VirtualMachineInstanceView `json:"instanceView,omitempty"` - LicenseType *string `json:"licenseType,omitempty"` - VMID *string `json:"vmId,omitempty"` -} - -// VirtualMachineScaleSet is describes a Virtual Machine Scale Set. 
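For orientation only (this is not part of the removed vendored file): a minimal sketch of how a caller, for example Packer's Azure builder, typically populates these generated model types. Every field is a pointer so `omitempty` can distinguish "unset" from a zero value, and the embedded `*VirtualMachineProperties` is marshalled under the `properties` key. All literal values below are placeholders.

package main

import (
	"encoding/json"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
	"github.com/Azure/go-autorest/autorest/to"
)

func main() {
	// Placeholder values; pointer helpers from autorest/to keep the literals short.
	vm := compute.VirtualMachine{
		Location: to.StringPtr("West US"),
		VirtualMachineProperties: &compute.VirtualMachineProperties{
			OsProfile: &compute.OSProfile{
				ComputerName:  to.StringPtr("example-vm"),
				AdminUsername: to.StringPtr("packer"),
			},
		},
	}

	// The embedded *VirtualMachineProperties carries a `json:"properties"` tag,
	// so it appears as a nested "properties" object in the marshalled JSON.
	body, _ := json.MarshalIndent(vm, "", "  ")
	fmt.Println(string(body))
}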
-type VirtualMachineScaleSet struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - Sku *Sku `json:"sku,omitempty"` - Plan *Plan `json:"plan,omitempty"` - *VirtualMachineScaleSetProperties `json:"properties,omitempty"` - Identity *VirtualMachineScaleSetIdentity `json:"identity,omitempty"` -} - -// VirtualMachineScaleSetDataDisk is describes a virtual machine scale set data -// disk. -type VirtualMachineScaleSetDataDisk struct { - Name *string `json:"name,omitempty"` - Lun *int32 `json:"lun,omitempty"` - Caching CachingTypes `json:"caching,omitempty"` - CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` - DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` - ManagedDisk *VirtualMachineScaleSetManagedDiskParameters `json:"managedDisk,omitempty"` -} - -// VirtualMachineScaleSetExtension is describes a Virtual Machine Scale Set -// Extension. -type VirtualMachineScaleSetExtension struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - *VirtualMachineScaleSetExtensionProperties `json:"properties,omitempty"` -} - -// VirtualMachineScaleSetExtensionProfile is describes a virtual machine scale -// set extension profile. -type VirtualMachineScaleSetExtensionProfile struct { - Extensions *[]VirtualMachineScaleSetExtension `json:"extensions,omitempty"` -} - -// VirtualMachineScaleSetExtensionProperties is describes the properties of a -// Virtual Machine Scale Set Extension. -type VirtualMachineScaleSetExtensionProperties struct { - Publisher *string `json:"publisher,omitempty"` - Type *string `json:"type,omitempty"` - TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` - AutoUpgradeMinorVersion *bool `json:"autoUpgradeMinorVersion,omitempty"` - Settings *map[string]interface{} `json:"settings,omitempty"` - ProtectedSettings *map[string]interface{} `json:"protectedSettings,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// VirtualMachineScaleSetIdentity is identity for the virtual machine scale -// set. -type VirtualMachineScaleSetIdentity struct { - PrincipalID *string `json:"principalId,omitempty"` - TenantID *string `json:"tenantId,omitempty"` - Type ResourceIdentityType `json:"type,omitempty"` -} - -// VirtualMachineScaleSetInstanceView is the instance view of a virtual machine -// scale set. -type VirtualMachineScaleSetInstanceView struct { - autorest.Response `json:"-"` - VirtualMachine *VirtualMachineScaleSetInstanceViewStatusesSummary `json:"virtualMachine,omitempty"` - Extensions *[]VirtualMachineScaleSetVMExtensionsSummary `json:"extensions,omitempty"` - Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` -} - -// VirtualMachineScaleSetInstanceViewStatusesSummary is instance view statuses -// summary for virtual machines of a virtual machine scale set. -type VirtualMachineScaleSetInstanceViewStatusesSummary struct { - StatusesSummary *[]VirtualMachineStatusCodeCount `json:"statusesSummary,omitempty"` -} - -// VirtualMachineScaleSetIPConfiguration is describes a virtual machine scale -// set network profile's IP configuration. 
-type VirtualMachineScaleSetIPConfiguration struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - *VirtualMachineScaleSetIPConfigurationProperties `json:"properties,omitempty"` -} - -// VirtualMachineScaleSetIPConfigurationProperties is describes a virtual -// machine scale set network profile's IP configuration properties. -type VirtualMachineScaleSetIPConfigurationProperties struct { - Subnet *APIEntityReference `json:"subnet,omitempty"` - ApplicationGatewayBackendAddressPools *[]SubResource `json:"applicationGatewayBackendAddressPools,omitempty"` - LoadBalancerBackendAddressPools *[]SubResource `json:"loadBalancerBackendAddressPools,omitempty"` - LoadBalancerInboundNatPools *[]SubResource `json:"loadBalancerInboundNatPools,omitempty"` -} - -// VirtualMachineScaleSetListResult is the List Virtual Machine operation -// response. -type VirtualMachineScaleSetListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualMachineScaleSet `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualMachineScaleSetListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client VirtualMachineScaleSetListResult) VirtualMachineScaleSetListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualMachineScaleSetListSkusResult is the Virtual Machine Scale Set List -// Skus operation response. -type VirtualMachineScaleSetListSkusResult struct { - autorest.Response `json:"-"` - Value *[]VirtualMachineScaleSetSku `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualMachineScaleSetListSkusResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client VirtualMachineScaleSetListSkusResult) VirtualMachineScaleSetListSkusResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualMachineScaleSetListWithLinkResult is the List Virtual Machine -// operation response. -type VirtualMachineScaleSetListWithLinkResult struct { - autorest.Response `json:"-"` - Value *[]VirtualMachineScaleSet `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualMachineScaleSetListWithLinkResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client VirtualMachineScaleSetListWithLinkResult) VirtualMachineScaleSetListWithLinkResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualMachineScaleSetManagedDiskParameters is describes the parameters of a -// ScaleSet managed disk. 
-type VirtualMachineScaleSetManagedDiskParameters struct { - StorageAccountType StorageAccountTypes `json:"storageAccountType,omitempty"` -} - -// VirtualMachineScaleSetNetworkConfiguration is describes a virtual machine -// scale set network profile's network configurations. -type VirtualMachineScaleSetNetworkConfiguration struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - *VirtualMachineScaleSetNetworkConfigurationProperties `json:"properties,omitempty"` -} - -// VirtualMachineScaleSetNetworkConfigurationProperties is describes a virtual -// machine scale set network profile's IP configuration. -type VirtualMachineScaleSetNetworkConfigurationProperties struct { - Primary *bool `json:"primary,omitempty"` - IPConfigurations *[]VirtualMachineScaleSetIPConfiguration `json:"ipConfigurations,omitempty"` -} - -// VirtualMachineScaleSetNetworkProfile is describes a virtual machine scale -// set network profile. -type VirtualMachineScaleSetNetworkProfile struct { - NetworkInterfaceConfigurations *[]VirtualMachineScaleSetNetworkConfiguration `json:"networkInterfaceConfigurations,omitempty"` -} - -// VirtualMachineScaleSetOSDisk is describes a virtual machine scale set -// operating system disk. -type VirtualMachineScaleSetOSDisk struct { - Name *string `json:"name,omitempty"` - Caching CachingTypes `json:"caching,omitempty"` - CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` - OsType OperatingSystemTypes `json:"osType,omitempty"` - Image *VirtualHardDisk `json:"image,omitempty"` - VhdContainers *[]string `json:"vhdContainers,omitempty"` - ManagedDisk *VirtualMachineScaleSetManagedDiskParameters `json:"managedDisk,omitempty"` -} - -// VirtualMachineScaleSetOSProfile is describes a virtual machine scale set OS -// profile. -type VirtualMachineScaleSetOSProfile struct { - ComputerNamePrefix *string `json:"computerNamePrefix,omitempty"` - AdminUsername *string `json:"adminUsername,omitempty"` - AdminPassword *string `json:"adminPassword,omitempty"` - CustomData *string `json:"customData,omitempty"` - WindowsConfiguration *WindowsConfiguration `json:"windowsConfiguration,omitempty"` - LinuxConfiguration *LinuxConfiguration `json:"linuxConfiguration,omitempty"` - Secrets *[]VaultSecretGroup `json:"secrets,omitempty"` -} - -// VirtualMachineScaleSetProperties is describes the properties of a Virtual -// Machine Scale Set. -type VirtualMachineScaleSetProperties struct { - UpgradePolicy *UpgradePolicy `json:"upgradePolicy,omitempty"` - VirtualMachineProfile *VirtualMachineScaleSetVMProfile `json:"virtualMachineProfile,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` - Overprovision *bool `json:"overprovision,omitempty"` - SinglePlacementGroup *bool `json:"singlePlacementGroup,omitempty"` -} - -// VirtualMachineScaleSetSku is describes an available virtual machine scale -// set sku. -type VirtualMachineScaleSetSku struct { - ResourceType *string `json:"resourceType,omitempty"` - Sku *Sku `json:"sku,omitempty"` - Capacity *VirtualMachineScaleSetSkuCapacity `json:"capacity,omitempty"` -} - -// VirtualMachineScaleSetSkuCapacity is describes scaling information of a sku. 
-type VirtualMachineScaleSetSkuCapacity struct { - Minimum *int64 `json:"minimum,omitempty"` - Maximum *int64 `json:"maximum,omitempty"` - DefaultCapacity *int64 `json:"defaultCapacity,omitempty"` - ScaleType VirtualMachineScaleSetSkuScaleType `json:"scaleType,omitempty"` -} - -// VirtualMachineScaleSetStorageProfile is describes a virtual machine scale -// set storage profile. -type VirtualMachineScaleSetStorageProfile struct { - ImageReference *ImageReference `json:"imageReference,omitempty"` - OsDisk *VirtualMachineScaleSetOSDisk `json:"osDisk,omitempty"` - DataDisks *[]VirtualMachineScaleSetDataDisk `json:"dataDisks,omitempty"` -} - -// VirtualMachineScaleSetVM is describes a virtual machine scale set virtual -// machine. -type VirtualMachineScaleSetVM struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - InstanceID *string `json:"instanceId,omitempty"` - Sku *Sku `json:"sku,omitempty"` - *VirtualMachineScaleSetVMProperties `json:"properties,omitempty"` - Plan *Plan `json:"plan,omitempty"` - Resources *[]VirtualMachineExtension `json:"resources,omitempty"` -} - -// VirtualMachineScaleSetVMExtensionsSummary is extensions summary for virtual -// machines of a virtual machine scale set. -type VirtualMachineScaleSetVMExtensionsSummary struct { - Name *string `json:"name,omitempty"` - StatusesSummary *[]VirtualMachineStatusCodeCount `json:"statusesSummary,omitempty"` -} - -// VirtualMachineScaleSetVMInstanceIDs is specifies a list of virtual machine -// instance IDs from the VM scale set. -type VirtualMachineScaleSetVMInstanceIDs struct { - InstanceIds *[]string `json:"instanceIds,omitempty"` -} - -// VirtualMachineScaleSetVMInstanceRequiredIDs is specifies a list of virtual -// machine instance IDs from the VM scale set. -type VirtualMachineScaleSetVMInstanceRequiredIDs struct { - InstanceIds *[]string `json:"instanceIds,omitempty"` -} - -// VirtualMachineScaleSetVMInstanceView is the instance view of a virtual -// machine scale set VM. -type VirtualMachineScaleSetVMInstanceView struct { - autorest.Response `json:"-"` - PlatformUpdateDomain *int32 `json:"platformUpdateDomain,omitempty"` - PlatformFaultDomain *int32 `json:"platformFaultDomain,omitempty"` - RdpThumbPrint *string `json:"rdpThumbPrint,omitempty"` - VMAgent *VirtualMachineAgentInstanceView `json:"vmAgent,omitempty"` - Disks *[]DiskInstanceView `json:"disks,omitempty"` - Extensions *[]VirtualMachineExtensionInstanceView `json:"extensions,omitempty"` - BootDiagnostics *BootDiagnosticsInstanceView `json:"bootDiagnostics,omitempty"` - Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` - PlacementGroupID *string `json:"placementGroupId,omitempty"` -} - -// VirtualMachineScaleSetVMListResult is the List Virtual Machine Scale Set VMs -// operation response. -type VirtualMachineScaleSetVMListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualMachineScaleSetVM `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualMachineScaleSetVMListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client VirtualMachineScaleSetVMListResult) VirtualMachineScaleSetVMListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualMachineScaleSetVMProfile is describes a virtual machine scale set -// virtual machine profile. -type VirtualMachineScaleSetVMProfile struct { - OsProfile *VirtualMachineScaleSetOSProfile `json:"osProfile,omitempty"` - StorageProfile *VirtualMachineScaleSetStorageProfile `json:"storageProfile,omitempty"` - NetworkProfile *VirtualMachineScaleSetNetworkProfile `json:"networkProfile,omitempty"` - ExtensionProfile *VirtualMachineScaleSetExtensionProfile `json:"extensionProfile,omitempty"` -} - -// VirtualMachineScaleSetVMProperties is describes the properties of a virtual -// machine scale set virtual machine. -type VirtualMachineScaleSetVMProperties struct { - LatestModelApplied *bool `json:"latestModelApplied,omitempty"` - VMID *string `json:"vmId,omitempty"` - InstanceView *VirtualMachineInstanceView `json:"instanceView,omitempty"` - HardwareProfile *HardwareProfile `json:"hardwareProfile,omitempty"` - StorageProfile *StorageProfile `json:"storageProfile,omitempty"` - OsProfile *OSProfile `json:"osProfile,omitempty"` - NetworkProfile *NetworkProfile `json:"networkProfile,omitempty"` - DiagnosticsProfile *DiagnosticsProfile `json:"diagnosticsProfile,omitempty"` - AvailabilitySet *SubResource `json:"availabilitySet,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` - LicenseType *string `json:"licenseType,omitempty"` -} - -// VirtualMachineSize is describes the properties of a VM size. -type VirtualMachineSize struct { - Name *string `json:"name,omitempty"` - NumberOfCores *int32 `json:"numberOfCores,omitempty"` - OsDiskSizeInMB *int32 `json:"osDiskSizeInMB,omitempty"` - ResourceDiskSizeInMB *int32 `json:"resourceDiskSizeInMB,omitempty"` - MemoryInMB *int32 `json:"memoryInMB,omitempty"` - MaxDataDiskCount *int32 `json:"maxDataDiskCount,omitempty"` -} - -// VirtualMachineSizeListResult is the List Virtual Machine operation response. -type VirtualMachineSizeListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualMachineSize `json:"value,omitempty"` -} - -// VirtualMachineStatusCodeCount is the status code and count of the virtual -// machine scale set instance view status summary. -type VirtualMachineStatusCodeCount struct { - Code *string `json:"code,omitempty"` - Count *int32 `json:"count,omitempty"` -} - -// WindowsConfiguration is describes Windows Configuration of the OS Profile. 
-type WindowsConfiguration struct { - ProvisionVMAgent *bool `json:"provisionVMAgent,omitempty"` - EnableAutomaticUpdates *bool `json:"enableAutomaticUpdates,omitempty"` - TimeZone *string `json:"timeZone,omitempty"` - AdditionalUnattendContent *[]AdditionalUnattendContent `json:"additionalUnattendContent,omitempty"` - WinRM *WinRMConfiguration `json:"winRM,omitempty"` -} - -// WinRMConfiguration is describes Windows Remote Management configuration of -// the VM -type WinRMConfiguration struct { - Listeners *[]WinRMListener `json:"listeners,omitempty"` -} - -// WinRMListener is describes Protocol and thumbprint of Windows Remote -// Management listener -type WinRMListener struct { - Protocol ProtocolTypes `json:"protocol,omitempty"` - CertificateURL *string `json:"certificateUrl,omitempty"` -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineextensions.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineextensions.go deleted file mode 100644 index 7a876cfef..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineextensions.go +++ /dev/null @@ -1,286 +0,0 @@ -package compute - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "net/http" -) - -// VirtualMachineExtensionsClient is the the Compute Management Client. -type VirtualMachineExtensionsClient struct { - ManagementClient -} - -// NewVirtualMachineExtensionsClient creates an instance of the -// VirtualMachineExtensionsClient client. -func NewVirtualMachineExtensionsClient(subscriptionID string) VirtualMachineExtensionsClient { - return NewVirtualMachineExtensionsClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewVirtualMachineExtensionsClientWithBaseURI creates an instance of the -// VirtualMachineExtensionsClient client. -func NewVirtualMachineExtensionsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineExtensionsClient { - return VirtualMachineExtensionsClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CreateOrUpdate the operation to create or update the extension. This method -// may poll for completion. Polling can be canceled by passing the cancel -// channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine where the extension should be create or updated. -// VMExtensionName is the name of the virtual machine extension. -// extensionParameters is parameters supplied to the Create Virtual Machine -// Extension operation. 
-func (client VirtualMachineExtensionsClient) CreateOrUpdate(resourceGroupName string, VMName string, VMExtensionName string, extensionParameters VirtualMachineExtension, cancel <-chan struct{}) (<-chan VirtualMachineExtension, <-chan error) { - resultChan := make(chan VirtualMachineExtension, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result VirtualMachineExtension - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, VMName, VMExtensionName, extensionParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client VirtualMachineExtensionsClient) CreateOrUpdatePreparer(resourceGroupName string, VMName string, VMExtensionName string, extensionParameters VirtualMachineExtension, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmExtensionName": autorest.Encode("path", VMExtensionName), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}", pathParameters), - autorest.WithJSON(extensionParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineExtensionsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client VirtualMachineExtensionsClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualMachineExtension, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete the operation to delete the extension. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. 
-// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine where the extension should be deleted. VMExtensionName -// is the name of the virtual machine extension. -func (client VirtualMachineExtensionsClient) Delete(resourceGroupName string, VMName string, VMExtensionName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, VMName, VMExtensionName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client VirtualMachineExtensionsClient) DeletePreparer(resourceGroupName string, VMName string, VMExtensionName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmExtensionName": autorest.Encode("path", VMExtensionName), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineExtensionsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. -func (client VirtualMachineExtensionsClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Get the operation to get the extension. -// -// resourceGroupName is the name of the resource group. 
VMName is the name of -// the virtual machine containing the extension. VMExtensionName is the name of -// the virtual machine extension. expand is the expand expression to apply on -// the operation. -func (client VirtualMachineExtensionsClient) Get(resourceGroupName string, VMName string, VMExtensionName string, expand string) (result VirtualMachineExtension, err error) { - req, err := client.GetPreparer(resourceGroupName, VMName, VMExtensionName, expand) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client VirtualMachineExtensionsClient) GetPreparer(resourceGroupName string, VMName string, VMExtensionName string, expand string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmExtensionName": autorest.Encode("path", VMExtensionName), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineExtensionsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client VirtualMachineExtensionsClient) GetResponder(resp *http.Response) (result VirtualMachineExtension, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachines.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachines.go deleted file mode 100644 index 686b7ace2..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachines.go +++ /dev/null @@ -1,1207 +0,0 @@ -package compute - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" - "net/http" -) - -// VirtualMachinesClient is the the Compute Management Client. -type VirtualMachinesClient struct { - ManagementClient -} - -// NewVirtualMachinesClient creates an instance of the VirtualMachinesClient -// client. -func NewVirtualMachinesClient(subscriptionID string) VirtualMachinesClient { - return NewVirtualMachinesClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewVirtualMachinesClientWithBaseURI creates an instance of the -// VirtualMachinesClient client. -func NewVirtualMachinesClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachinesClient { - return VirtualMachinesClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// Capture captures the VM by copying virtual hard disks of the VM and outputs -// a template that can be used to create similar VMs. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. parameters is parameters supplied to the Capture -// Virtual Machine operation. 
-func (client VirtualMachinesClient) Capture(resourceGroupName string, VMName string, parameters VirtualMachineCaptureParameters, cancel <-chan struct{}) (<-chan VirtualMachineCaptureResult, <-chan error) { - resultChan := make(chan VirtualMachineCaptureResult, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.VhdPrefix", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.DestinationContainerName", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.OverwriteVhds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "compute.VirtualMachinesClient", "Capture") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result VirtualMachineCaptureResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CapturePreparer(resourceGroupName, VMName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Capture", nil, "Failure preparing request") - return - } - - resp, err := client.CaptureSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Capture", resp, "Failure sending request") - return - } - - result, err = client.CaptureResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Capture", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CapturePreparer prepares the Capture request. -func (client VirtualMachinesClient) CapturePreparer(resourceGroupName string, VMName string, parameters VirtualMachineCaptureParameters, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/capture", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CaptureSender sends the Capture request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) CaptureSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CaptureResponder handles the response to the Capture request. The method always -// closes the http.Response Body. 
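Illustrative sketch, not part of the removed file: the validation block in `Capture` above requires `VhdPrefix`, `DestinationContainerName`, and `OverwriteVhds` to be set, and the result and error are each delivered exactly once on buffered channels (see the deferred sends above). The resource names and values below are placeholders.

package example

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
	"github.com/Azure/go-autorest/autorest/to"
)

func captureVM(client compute.VirtualMachinesClient, resourceGroup, vmName string) error {
	// All three parameters are required by the validation in Capture.
	params := compute.VirtualMachineCaptureParameters{
		VhdPrefix:                to.StringPtr("packer"),
		DestinationContainerName: to.StringPtr("images"),
		OverwriteVhds:            to.BoolPtr(true),
	}

	cancel := make(chan struct{}) // closing this would abort polling
	resultChan, errChan := client.Capture(resourceGroup, vmName, params, cancel)

	// Each channel carries exactly one buffered value, so receive from both.
	result := <-resultChan
	if err := <-errChan; err != nil {
		return err
	}
	fmt.Printf("capture result id: %s\n", to.String(result.ID))
	return nil
}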
-func (client VirtualMachinesClient) CaptureResponder(resp *http.Response) (result VirtualMachineCaptureResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ConvertToManagedDisks converts virtual machine disks from blob-based to -// managed disks. Virtual machine must be stop-deallocated before invoking this -// operation. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. -func (client VirtualMachinesClient) ConvertToManagedDisks(resourceGroupName string, VMName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ConvertToManagedDisksPreparer(resourceGroupName, VMName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ConvertToManagedDisks", nil, "Failure preparing request") - return - } - - resp, err := client.ConvertToManagedDisksSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ConvertToManagedDisks", resp, "Failure sending request") - return - } - - result, err = client.ConvertToManagedDisksResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ConvertToManagedDisks", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// ConvertToManagedDisksPreparer prepares the ConvertToManagedDisks request. -func (client VirtualMachinesClient) ConvertToManagedDisksPreparer(resourceGroupName string, VMName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/convertToManagedDisks", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// ConvertToManagedDisksSender sends the ConvertToManagedDisks request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) ConvertToManagedDisksSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// ConvertToManagedDisksResponder handles the response to the ConvertToManagedDisks request. 
The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) ConvertToManagedDisksResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// CreateOrUpdate the operation to create or update a virtual machine. This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. parameters is parameters supplied to the Create Virtual -// Machine operation. -func (client VirtualMachinesClient) CreateOrUpdate(resourceGroupName string, VMName string, parameters VirtualMachine, cancel <-chan struct{}) (<-chan VirtualMachine, <-chan error) { - resultChan := make(chan VirtualMachine, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.VirtualMachineProperties", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey.SecretURL", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, - }}, - {Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey.KeyURL", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, - }}, - }}, - }}, - }}, - }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "compute.VirtualMachinesClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result VirtualMachine - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, VMName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { 
- result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client VirtualMachinesClient) CreateOrUpdatePreparer(resourceGroupName string, VMName string, parameters VirtualMachine, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualMachine, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Deallocate shuts down the virtual machine and releases the compute -// resources. You are not billed for the compute resources that this virtual -// machine uses. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. 
-func (client VirtualMachinesClient) Deallocate(resourceGroupName string, VMName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeallocatePreparer(resourceGroupName, VMName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Deallocate", nil, "Failure preparing request") - return - } - - resp, err := client.DeallocateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Deallocate", resp, "Failure sending request") - return - } - - result, err = client.DeallocateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Deallocate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeallocatePreparer prepares the Deallocate request. -func (client VirtualMachinesClient) DeallocatePreparer(resourceGroupName string, VMName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/deallocate", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeallocateSender sends the Deallocate request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) DeallocateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeallocateResponder handles the response to the Deallocate request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) DeallocateResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete the operation to delete a virtual machine. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. 
-func (client VirtualMachinesClient) Delete(resourceGroupName string, VMName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, VMName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client VirtualMachinesClient) DeletePreparer(resourceGroupName string, VMName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Generalize sets the state of the virtual machine to generalized. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. 
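Illustrative sketch, not part of the removed file: the `cancel` channel accepted by `Delete` (and the other long-running operations above) aborts polling and any outstanding HTTP request when it is closed, which lets a caller impose its own timeout. Names below are placeholders.

package example

import (
	"errors"
	"time"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

func deleteWithTimeout(client compute.VirtualMachinesClient, resourceGroup, vmName string, timeout time.Duration) error {
	cancel := make(chan struct{})
	resultChan, errChan := client.Delete(resourceGroup, vmName, cancel)

	select {
	case <-time.After(timeout):
		close(cancel) // stops polling and any in-flight request
		return errors.New("timed out waiting for the virtual machine to be deleted")
	case err := <-errChan:
		<-resultChan // the result is buffered; drain it alongside the error
		return err
	}
}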
-func (client VirtualMachinesClient) Generalize(resourceGroupName string, VMName string) (result OperationStatusResponse, err error) { - req, err := client.GeneralizePreparer(resourceGroupName, VMName) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Generalize", nil, "Failure preparing request") - return - } - - resp, err := client.GeneralizeSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Generalize", resp, "Failure sending request") - return - } - - result, err = client.GeneralizeResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Generalize", resp, "Failure responding to request") - } - - return -} - -// GeneralizePreparer prepares the Generalize request. -func (client VirtualMachinesClient) GeneralizePreparer(resourceGroupName string, VMName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/generalize", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GeneralizeSender sends the Generalize request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) GeneralizeSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GeneralizeResponder handles the response to the Generalize request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) GeneralizeResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Get retrieves information about the model view or the instance view of a -// virtual machine. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. expand is the expand expression to apply on the -// operation. 
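Generalize, by contrast, is synchronous: it returns the `OperationStatusResponse` and error directly rather than over channels, as do Get, List, and the other read-only calls below. A minimal sketch, again assuming an authenticated client; the helper and variable names are illustrative:

```
package azureexample

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

// generalizeVM sets the VM's state to generalized so its disks can later be
// captured as an image. Errors come back directly; no channels are involved.
func generalizeVM(vmClient compute.VirtualMachinesClient, resourceGroup, vmName string) error {
	result, err := vmClient.Generalize(resourceGroup, vmName)
	if err != nil {
		return err
	}
	log.Printf("generalize returned %s", result.Response.Status)
	return nil
}
```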
-func (client VirtualMachinesClient) Get(resourceGroupName string, VMName string, expand InstanceViewTypes) (result VirtualMachine, err error) { - req, err := client.GetPreparer(resourceGroupName, VMName, expand) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client VirtualMachinesClient) GetPreparer(resourceGroupName string, VMName string, expand InstanceViewTypes) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(string(expand)) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) GetResponder(resp *http.Response) (result VirtualMachine, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List lists all of the virtual machines in the specified resource group. Use -// the nextLink property in the response to get the next page of virtual -// machines. -// -// resourceGroupName is the name of the resource group. -func (client VirtualMachinesClient) List(resourceGroupName string) (result VirtualMachineListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. 
-func (client VirtualMachinesClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) ListResponder(resp *http.Response) (result VirtualMachineListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. -func (client VirtualMachinesClient) ListNextResults(lastResults VirtualMachineListResult) (result VirtualMachineListResult, err error) { - req, err := lastResults.VirtualMachineListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// ListAll lists all of the virtual machines in the specified subscription. Use -// the nextLink property in the response to get the next page of virtual -// machines. -func (client VirtualMachinesClient) ListAll() (result VirtualMachineListResult, err error) { - req, err := client.ListAllPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", nil, "Failure preparing request") - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", resp, "Failure sending request") - return - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", resp, "Failure responding to request") - } - - return -} - -// ListAllPreparer prepares the ListAll request. 
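List returns one page at a time, and ListNextResults (above) follows the nextLink, yielding an empty result once there are no further pages. A pagination sketch, assuming an authenticated client and the `Value`/`NextLink` fields that `VirtualMachineListResult` carries in this SDK generation; the helper name is illustrative:

```
package azureexample

import "github.com/Azure/azure-sdk-for-go/arm/compute"

// listAllVMsInGroup pages through every virtual machine in a resource group
// by following nextLink until the service stops returning one.
func listAllVMsInGroup(vmClient compute.VirtualMachinesClient, resourceGroup string) ([]compute.VirtualMachine, error) {
	var vms []compute.VirtualMachine

	page, err := vmClient.List(resourceGroup)
	if err != nil {
		return nil, err
	}
	for {
		if page.Value != nil {
			vms = append(vms, *page.Value...)
		}
		if page.NextLink == nil || *page.NextLink == "" {
			break // last page
		}
		if page, err = vmClient.ListNextResults(page); err != nil {
			return nil, err
		}
	}
	return vms, nil
}
```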
-func (client VirtualMachinesClient) ListAllPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAllSender sends the ListAll request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAllResponder handles the response to the ListAll request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) ListAllResponder(resp *http.Response) (result VirtualMachineListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListAllNextResults retrieves the next set of results, if any. -func (client VirtualMachinesClient) ListAllNextResults(lastResults VirtualMachineListResult) (result VirtualMachineListResult, err error) { - req, err := lastResults.VirtualMachineListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", resp, "Failure sending next results request") - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", resp, "Failure responding to next results request") - } - - return -} - -// ListAvailableSizes lists all available virtual machine sizes to which the -// specified virtual machine can be resized. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. -func (client VirtualMachinesClient) ListAvailableSizes(resourceGroupName string, VMName string) (result VirtualMachineSizeListResult, err error) { - req, err := client.ListAvailableSizesPreparer(resourceGroupName, VMName) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAvailableSizes", nil, "Failure preparing request") - return - } - - resp, err := client.ListAvailableSizesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAvailableSizes", resp, "Failure sending request") - return - } - - result, err = client.ListAvailableSizesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAvailableSizes", resp, "Failure responding to request") - } - - return -} - -// ListAvailableSizesPreparer prepares the ListAvailableSizes request. 
-func (client VirtualMachinesClient) ListAvailableSizesPreparer(resourceGroupName string, VMName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/vmSizes", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAvailableSizesSender sends the ListAvailableSizes request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) ListAvailableSizesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAvailableSizesResponder handles the response to the ListAvailableSizes request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) ListAvailableSizesResponder(resp *http.Response) (result VirtualMachineSizeListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// PowerOff the operation to power off (stop) a virtual machine. The virtual -// machine can be restarted with the same provisioned resources. You are still -// charged for this virtual machine. This method may poll for completion. -// Polling can be canceled by passing the cancel channel argument. The channel -// will be used to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. -func (client VirtualMachinesClient) PowerOff(resourceGroupName string, VMName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.PowerOffPreparer(resourceGroupName, VMName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "PowerOff", nil, "Failure preparing request") - return - } - - resp, err := client.PowerOffSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "PowerOff", resp, "Failure sending request") - return - } - - result, err = client.PowerOffResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "PowerOff", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// PowerOffPreparer prepares the PowerOff request. 
-func (client VirtualMachinesClient) PowerOffPreparer(resourceGroupName string, VMName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/powerOff", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// PowerOffSender sends the PowerOff request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) PowerOffSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// PowerOffResponder handles the response to the PowerOff request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) PowerOffResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Redeploy the operation to redeploy a virtual machine. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. -func (client VirtualMachinesClient) Redeploy(resourceGroupName string, VMName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.RedeployPreparer(resourceGroupName, VMName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Redeploy", nil, "Failure preparing request") - return - } - - resp, err := client.RedeploySender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Redeploy", resp, "Failure sending request") - return - } - - result, err = client.RedeployResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Redeploy", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// RedeployPreparer prepares the Redeploy request. 
-func (client VirtualMachinesClient) RedeployPreparer(resourceGroupName string, VMName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/redeploy", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// RedeploySender sends the Redeploy request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) RedeploySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// RedeployResponder handles the response to the Redeploy request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) RedeployResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Restart the operation to restart a virtual machine. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. -func (client VirtualMachinesClient) Restart(resourceGroupName string, VMName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.RestartPreparer(resourceGroupName, VMName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Restart", nil, "Failure preparing request") - return - } - - resp, err := client.RestartSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Restart", resp, "Failure sending request") - return - } - - result, err = client.RestartResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Restart", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// RestartPreparer prepares the Restart request. 
-func (client VirtualMachinesClient) RestartPreparer(resourceGroupName string, VMName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/restart", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// RestartSender sends the Restart request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) RestartSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// RestartResponder handles the response to the Restart request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) RestartResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Start the operation to start a virtual machine. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. VMName is the name of -// the virtual machine. -func (client VirtualMachinesClient) Start(resourceGroupName string, VMName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.StartPreparer(resourceGroupName, VMName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Start", nil, "Failure preparing request") - return - } - - resp, err := client.StartSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Start", resp, "Failure sending request") - return - } - - result, err = client.StartResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Start", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// StartPreparer prepares the Start request. 
-func (client VirtualMachinesClient) StartPreparer(resourceGroupName string, VMName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmName": autorest.Encode("path", VMName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/start", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// StartSender sends the Start request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachinesClient) StartSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// StartResponder handles the response to the Start request. The method always -// closes the http.Response Body. -func (client VirtualMachinesClient) StartResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinescalesets.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinescalesets.go deleted file mode 100644 index 53700c8dd..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinescalesets.go +++ /dev/null @@ -1,1316 +0,0 @@ -package compute - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" - "net/http" -) - -// VirtualMachineScaleSetsClient is the the Compute Management Client. -type VirtualMachineScaleSetsClient struct { - ManagementClient -} - -// NewVirtualMachineScaleSetsClient creates an instance of the -// VirtualMachineScaleSetsClient client. 
-func NewVirtualMachineScaleSetsClient(subscriptionID string) VirtualMachineScaleSetsClient { - return NewVirtualMachineScaleSetsClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewVirtualMachineScaleSetsClientWithBaseURI creates an instance of the -// VirtualMachineScaleSetsClient client. -func NewVirtualMachineScaleSetsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineScaleSetsClient { - return VirtualMachineScaleSetsClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CreateOrUpdate create or update a VM scale set. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. name is the name of the -// VM scale set to create or update. parameters is the scale set object. -func (client VirtualMachineScaleSetsClient) CreateOrUpdate(resourceGroupName string, name string, parameters VirtualMachineScaleSet, cancel <-chan struct{}) (<-chan VirtualMachineScaleSet, <-chan error) { - resultChan := make(chan VirtualMachineScaleSet, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result VirtualMachineScaleSet - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, name, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client VirtualMachineScaleSetsClient) CreateOrUpdatePreparer(resourceGroupName string, name string, parameters VirtualMachineScaleSet, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "name": autorest.Encode("path", name), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{name}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. 
-func (client VirtualMachineScaleSetsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualMachineScaleSet, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Deallocate deallocates specific virtual machines in a VM scale set. Shuts -// down the virtual machines and releases the compute resources. You are not -// billed for the compute resources that this virtual machine scale set -// deallocates. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. VMInstanceIDs is a list of virtual machine -// instance IDs from the VM scale set. -func (client VirtualMachineScaleSetsClient) Deallocate(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeallocatePreparer(resourceGroupName, VMScaleSetName, VMInstanceIDs, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Deallocate", nil, "Failure preparing request") - return - } - - resp, err := client.DeallocateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Deallocate", resp, "Failure sending request") - return - } - - result, err = client.DeallocateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Deallocate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeallocatePreparer prepares the Deallocate request. 
-func (client VirtualMachineScaleSetsClient) DeallocatePreparer(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/deallocate", pathParameters), - autorest.WithQueryParameters(queryParameters)) - if VMInstanceIDs != nil { - preparer = autorest.DecoratePreparer(preparer, - autorest.WithJSON(VMInstanceIDs)) - } - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeallocateSender sends the Deallocate request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) DeallocateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeallocateResponder handles the response to the Deallocate request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) DeallocateResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes a VM scale set. This method may poll for completion. Polling -// can be canceled by passing the cancel channel argument. The channel will be -// used to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. -func (client VirtualMachineScaleSetsClient) Delete(resourceGroupName string, VMScaleSetName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, VMScaleSetName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. 
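The scale-set Deallocate takes a `*VirtualMachineScaleSetVMInstanceIDs`: as the preparer above shows, a nil pointer simply omits the instanceIds body, which the service then treats as applying the operation to the scale set as a whole. A sketch, assuming an authenticated `compute.VirtualMachineScaleSetsClient`; `ssClient`, the helper name, and the `InstanceIds` field name (as generated in this SDK version) are assumptions:

```
package azureexample

import "github.com/Azure/azure-sdk-for-go/arm/compute"

// deallocateScaleSet deallocates specific instances, or the whole scale set
// when instanceIDs is nil (no instanceIds body is sent in that case).
func deallocateScaleSet(ssClient compute.VirtualMachineScaleSetsClient, resourceGroup, scaleSetName string, instanceIDs *compute.VirtualMachineScaleSetVMInstanceIDs) error {
	cancel := make(chan struct{})
	resultCh, errCh := ssClient.Deallocate(resourceGroup, scaleSetName, instanceIDs, cancel)
	if err := <-errCh; err != nil {
		return err
	}
	<-resultCh
	return nil
}
```

Deallocating only instances "0" and "2", for example, would pass `&compute.VirtualMachineScaleSetVMInstanceIDs{InstanceIds: &[]string{"0", "2"}}` instead of nil.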
-func (client VirtualMachineScaleSetsClient) DeletePreparer(resourceGroupName string, VMScaleSetName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// DeleteInstances deletes virtual machines in a VM scale set. This method may -// poll for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. VMInstanceIDs is a list of virtual machine -// instance IDs from the VM scale set. 
-func (client VirtualMachineScaleSetsClient) DeleteInstances(resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: VMInstanceIDs, - Constraints: []validation.Constraint{{Target: "VMInstanceIDs.InstanceIds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "compute.VirtualMachineScaleSetsClient", "DeleteInstances") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeleteInstancesPreparer(resourceGroupName, VMScaleSetName, VMInstanceIDs, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "DeleteInstances", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteInstancesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "DeleteInstances", resp, "Failure sending request") - return - } - - result, err = client.DeleteInstancesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "DeleteInstances", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeleteInstancesPreparer prepares the DeleteInstances request. -func (client VirtualMachineScaleSetsClient) DeleteInstancesPreparer(resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/delete", pathParameters), - autorest.WithJSON(VMInstanceIDs), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteInstancesSender sends the DeleteInstances request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) DeleteInstancesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteInstancesResponder handles the response to the DeleteInstances request. The method always -// closes the http.Response Body. 
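DeleteInstances is the one scale-set call here with client-side validation: if `InstanceIds` is unset, the validation error is delivered on the error channel before any HTTP request is sent. A sketch using the required-IDs type, assuming an authenticated client; the helper and variable names are illustrative and the `InstanceIds` field name follows this SDK generation:

```
package azureexample

import "github.com/Azure/azure-sdk-for-go/arm/compute"

// deleteScaleSetInstances removes the given instances from a scale set.
// A missing InstanceIds value fails validation before any request is sent.
func deleteScaleSetInstances(ssClient compute.VirtualMachineScaleSetsClient, resourceGroup, scaleSetName string, ids []string) error {
	required := compute.VirtualMachineScaleSetVMInstanceRequiredIDs{InstanceIds: &ids}
	cancel := make(chan struct{})
	resultCh, errCh := ssClient.DeleteInstances(resourceGroup, scaleSetName, required, cancel)
	if err := <-errCh; err != nil {
		return err
	}
	<-resultCh
	return nil
}
```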
-func (client VirtualMachineScaleSetsClient) DeleteInstancesResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Get display information about a virtual machine scale set. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. -func (client VirtualMachineScaleSetsClient) Get(resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSet, err error) { - req, err := client.GetPreparer(resourceGroupName, VMScaleSetName) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client VirtualMachineScaleSetsClient) GetPreparer(resourceGroupName string, VMScaleSetName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) GetResponder(resp *http.Response) (result VirtualMachineScaleSet, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetInstanceView gets the status of a VM scale set instance. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. 
-func (client VirtualMachineScaleSetsClient) GetInstanceView(resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetInstanceView, err error) { - req, err := client.GetInstanceViewPreparer(resourceGroupName, VMScaleSetName) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetInstanceView", nil, "Failure preparing request") - return - } - - resp, err := client.GetInstanceViewSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetInstanceView", resp, "Failure sending request") - return - } - - result, err = client.GetInstanceViewResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetInstanceView", resp, "Failure responding to request") - } - - return -} - -// GetInstanceViewPreparer prepares the GetInstanceView request. -func (client VirtualMachineScaleSetsClient) GetInstanceViewPreparer(resourceGroupName string, VMScaleSetName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/instanceView", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetInstanceViewSender sends the GetInstanceView request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) GetInstanceViewSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetInstanceViewResponder handles the response to the GetInstanceView request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) GetInstanceViewResponder(resp *http.Response) (result VirtualMachineScaleSetInstanceView, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List gets a list of all VM scale sets under a resource group. -// -// resourceGroupName is the name of the resource group. 
-func (client VirtualMachineScaleSetsClient) List(resourceGroupName string) (result VirtualMachineScaleSetListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. -func (client VirtualMachineScaleSetsClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) ListResponder(resp *http.Response) (result VirtualMachineScaleSetListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. -func (client VirtualMachineScaleSetsClient) ListNextResults(lastResults VirtualMachineScaleSetListResult) (result VirtualMachineScaleSetListResult, err error) { - req, err := lastResults.VirtualMachineScaleSetListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// ListAll gets a list of all VM Scale Sets in the subscription, regardless of -// the associated resource group. Use nextLink property in the response to get -// the next page of VM Scale Sets. 
Do this till nextLink is not null to fetch -// all the VM Scale Sets. -func (client VirtualMachineScaleSetsClient) ListAll() (result VirtualMachineScaleSetListWithLinkResult, err error) { - req, err := client.ListAllPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", nil, "Failure preparing request") - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", resp, "Failure sending request") - return - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", resp, "Failure responding to request") - } - - return -} - -// ListAllPreparer prepares the ListAll request. -func (client VirtualMachineScaleSetsClient) ListAllPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachineScaleSets", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAllSender sends the ListAll request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAllResponder handles the response to the ListAll request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) ListAllResponder(resp *http.Response) (result VirtualMachineScaleSetListWithLinkResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListAllNextResults retrieves the next set of results, if any. -func (client VirtualMachineScaleSetsClient) ListAllNextResults(lastResults VirtualMachineScaleSetListWithLinkResult) (result VirtualMachineScaleSetListWithLinkResult, err error) { - req, err := lastResults.VirtualMachineScaleSetListWithLinkResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", resp, "Failure sending next results request") - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", resp, "Failure responding to next results request") - } - - return -} - -// ListSkus gets a list of SKUs available for your VM scale set, including the -// minimum and maximum VM instances allowed for each SKU. 
-// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. -func (client VirtualMachineScaleSetsClient) ListSkus(resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetListSkusResult, err error) { - req, err := client.ListSkusPreparer(resourceGroupName, VMScaleSetName) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", nil, "Failure preparing request") - return - } - - resp, err := client.ListSkusSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", resp, "Failure sending request") - return - } - - result, err = client.ListSkusResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", resp, "Failure responding to request") - } - - return -} - -// ListSkusPreparer prepares the ListSkus request. -func (client VirtualMachineScaleSetsClient) ListSkusPreparer(resourceGroupName string, VMScaleSetName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/skus", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSkusSender sends the ListSkus request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) ListSkusSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListSkusResponder handles the response to the ListSkus request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) ListSkusResponder(resp *http.Response) (result VirtualMachineScaleSetListSkusResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListSkusNextResults retrieves the next set of results, if any. 
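// Editor's note: an illustrative sketch only; it is not part of the deleted
// vendored file or of this diff. It shows how a caller would have walked every
// page returned by the List/ListNextResults pair above, following the nextLink
// behaviour described in the ListAll comment. The constructor
// NewVirtualMachineScaleSetsClient comes from earlier in this file; the Value,
// NextLink, and Name fields are assumed from the SDK's generated models, and
// the subscription and resource-group names are placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

func main() {
	// Credentials are omitted; client.Authorizer must be configured before use.
	client := compute.NewVirtualMachineScaleSetsClient("subscription-id")

	page, err := client.List("my-resource-group")
	if err != nil {
		log.Fatal(err)
	}
	for {
		if page.Value != nil {
			for _, ss := range *page.Value {
				if ss.Name != nil {
					fmt.Println(*ss.Name)
				}
			}
		}
		// Keep following nextLink until the service stops returning one.
		if page.NextLink == nil || *page.NextLink == "" {
			break
		}
		page, err = client.ListNextResults(page)
		if err != nil {
			log.Fatal(err)
		}
	}
}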
-func (client VirtualMachineScaleSetsClient) ListSkusNextResults(lastResults VirtualMachineScaleSetListSkusResult) (result VirtualMachineScaleSetListSkusResult, err error) { - req, err := lastResults.VirtualMachineScaleSetListSkusResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSkusSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", resp, "Failure sending next results request") - } - - result, err = client.ListSkusResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", resp, "Failure responding to next results request") - } - - return -} - -// PowerOff power off (stop) one or more virtual machines in a VM scale set. -// Note that resources are still attached and you are getting charged for the -// resources. Instead, use deallocate to release resources and avoid charges. -// This method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. VMInstanceIDs is a list of virtual machine -// instance IDs from the VM scale set. -func (client VirtualMachineScaleSetsClient) PowerOff(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.PowerOffPreparer(resourceGroupName, VMScaleSetName, VMInstanceIDs, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "PowerOff", nil, "Failure preparing request") - return - } - - resp, err := client.PowerOffSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "PowerOff", resp, "Failure sending request") - return - } - - result, err = client.PowerOffResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "PowerOff", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// PowerOffPreparer prepares the PowerOff request. 
-func (client VirtualMachineScaleSetsClient) PowerOffPreparer(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/poweroff", pathParameters), - autorest.WithQueryParameters(queryParameters)) - if VMInstanceIDs != nil { - preparer = autorest.DecoratePreparer(preparer, - autorest.WithJSON(VMInstanceIDs)) - } - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// PowerOffSender sends the PowerOff request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) PowerOffSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// PowerOffResponder handles the response to the PowerOff request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) PowerOffResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Reimage reimages (upgrade the operating system) one or more virtual machines -// in a VM scale set. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. 
-func (client VirtualMachineScaleSetsClient) Reimage(resourceGroupName string, VMScaleSetName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ReimagePreparer(resourceGroupName, VMScaleSetName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Reimage", nil, "Failure preparing request") - return - } - - resp, err := client.ReimageSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Reimage", resp, "Failure sending request") - return - } - - result, err = client.ReimageResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Reimage", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// ReimagePreparer prepares the Reimage request. -func (client VirtualMachineScaleSetsClient) ReimagePreparer(resourceGroupName string, VMScaleSetName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/reimage", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// ReimageSender sends the Reimage request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) ReimageSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// ReimageResponder handles the response to the Reimage request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) ReimageResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ReimageAll reimages all the disks ( including data disks ) in the virtual -// machines in a virtual machine scale set. This operation is only supported -// for managed disks. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. 
-func (client VirtualMachineScaleSetsClient) ReimageAll(resourceGroupName string, VMScaleSetName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ReimageAllPreparer(resourceGroupName, VMScaleSetName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ReimageAll", nil, "Failure preparing request") - return - } - - resp, err := client.ReimageAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ReimageAll", resp, "Failure sending request") - return - } - - result, err = client.ReimageAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ReimageAll", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// ReimageAllPreparer prepares the ReimageAll request. -func (client VirtualMachineScaleSetsClient) ReimageAllPreparer(resourceGroupName string, VMScaleSetName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/reimageall", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// ReimageAllSender sends the ReimageAll request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) ReimageAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// ReimageAllResponder handles the response to the ReimageAll request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) ReimageAllResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Restart restarts one or more virtual machines in a VM scale set. This method -// may poll for completion. Polling can be canceled by passing the cancel -// channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. VMInstanceIDs is a list of virtual machine -// instance IDs from the VM scale set. 
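// Editor's note: an illustrative sketch only; it is not part of the deleted
// vendored file or of this diff. It shows how the channel-based long-running
// operations above (PowerOff, Reimage, Restart, ...) were consumed: each call
// returns a buffered result channel and a buffered error channel, and the
// caller receives exactly one value from each. Passing a nil
// *VirtualMachineScaleSetVMInstanceIDs relies on the preparer's nil check above
// and appears to address the whole scale set. The Status field on
// OperationStatusResponse is assumed from the generated models; all names are
// placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

func main() {
	// Credentials are omitted; client.Authorizer must be configured before use.
	client := compute.NewVirtualMachineScaleSetsClient("subscription-id")

	// Closing this channel would cancel polling and any in-flight request.
	cancel := make(chan struct{})

	resultChan, errChan := client.Restart("my-resource-group", "my-scale-set", nil, cancel)

	result := <-resultChan
	if err := <-errChan; err != nil {
		log.Fatal(err)
	}
	if result.Status != nil {
		fmt.Println("restart status:", *result.Status)
	}
}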
-func (client VirtualMachineScaleSetsClient) Restart(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.RestartPreparer(resourceGroupName, VMScaleSetName, VMInstanceIDs, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Restart", nil, "Failure preparing request") - return - } - - resp, err := client.RestartSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Restart", resp, "Failure sending request") - return - } - - result, err = client.RestartResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Restart", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// RestartPreparer prepares the Restart request. -func (client VirtualMachineScaleSetsClient) RestartPreparer(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/restart", pathParameters), - autorest.WithQueryParameters(queryParameters)) - if VMInstanceIDs != nil { - preparer = autorest.DecoratePreparer(preparer, - autorest.WithJSON(VMInstanceIDs)) - } - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// RestartSender sends the Restart request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) RestartSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// RestartResponder handles the response to the Restart request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) RestartResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Start starts one or more virtual machines in a VM scale set. This method may -// poll for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. 
VMScaleSetName is the -// name of the VM scale set. VMInstanceIDs is a list of virtual machine -// instance IDs from the VM scale set. -func (client VirtualMachineScaleSetsClient) Start(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.StartPreparer(resourceGroupName, VMScaleSetName, VMInstanceIDs, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Start", nil, "Failure preparing request") - return - } - - resp, err := client.StartSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Start", resp, "Failure sending request") - return - } - - result, err = client.StartResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Start", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// StartPreparer prepares the Start request. -func (client VirtualMachineScaleSetsClient) StartPreparer(resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/start", pathParameters), - autorest.WithQueryParameters(queryParameters)) - if VMInstanceIDs != nil { - preparer = autorest.DecoratePreparer(preparer, - autorest.WithJSON(VMInstanceIDs)) - } - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// StartSender sends the Start request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) StartSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// StartResponder handles the response to the Start request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) StartResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// UpdateInstances upgrades one or more virtual machines to the latest SKU set -// in the VM scale set model. This method may poll for completion. Polling can -// be canceled by passing the cancel channel argument. 
The channel will be used -// to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. VMInstanceIDs is a list of virtual machine -// instance IDs from the VM scale set. -func (client VirtualMachineScaleSetsClient) UpdateInstances(resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: VMInstanceIDs, - Constraints: []validation.Constraint{{Target: "VMInstanceIDs.InstanceIds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "compute.VirtualMachineScaleSetsClient", "UpdateInstances") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.UpdateInstancesPreparer(resourceGroupName, VMScaleSetName, VMInstanceIDs, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "UpdateInstances", nil, "Failure preparing request") - return - } - - resp, err := client.UpdateInstancesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "UpdateInstances", resp, "Failure sending request") - return - } - - result, err = client.UpdateInstancesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "UpdateInstances", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// UpdateInstancesPreparer prepares the UpdateInstances request. -func (client VirtualMachineScaleSetsClient) UpdateInstancesPreparer(resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/manualupgrade", pathParameters), - autorest.WithJSON(VMInstanceIDs), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// UpdateInstancesSender sends the UpdateInstances request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetsClient) UpdateInstancesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// UpdateInstancesResponder handles the response to the UpdateInstances request. 
The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetsClient) UpdateInstancesResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinescalesetvms.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinescalesetvms.go deleted file mode 100644 index 34e0934dc..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinescalesetvms.go +++ /dev/null @@ -1,872 +0,0 @@ -package compute - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "net/http" -) - -// VirtualMachineScaleSetVMsClient is the the Compute Management Client. -type VirtualMachineScaleSetVMsClient struct { - ManagementClient -} - -// NewVirtualMachineScaleSetVMsClient creates an instance of the -// VirtualMachineScaleSetVMsClient client. -func NewVirtualMachineScaleSetVMsClient(subscriptionID string) VirtualMachineScaleSetVMsClient { - return NewVirtualMachineScaleSetVMsClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewVirtualMachineScaleSetVMsClientWithBaseURI creates an instance of the -// VirtualMachineScaleSetVMsClient client. -func NewVirtualMachineScaleSetVMsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineScaleSetVMsClient { - return VirtualMachineScaleSetVMsClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// Deallocate deallocates a specific virtual machine in a VM scale set. Shuts -// down the virtual machine and releases the compute resources it uses. You are -// not billed for the compute resources of this virtual machine once it is -// deallocated. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. 
-func (client VirtualMachineScaleSetVMsClient) Deallocate(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeallocatePreparer(resourceGroupName, VMScaleSetName, instanceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Deallocate", nil, "Failure preparing request") - return - } - - resp, err := client.DeallocateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Deallocate", resp, "Failure sending request") - return - } - - result, err = client.DeallocateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Deallocate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeallocatePreparer prepares the Deallocate request. -func (client VirtualMachineScaleSetVMsClient) DeallocatePreparer(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/deallocate", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeallocateSender sends the Deallocate request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) DeallocateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeallocateResponder handles the response to the Deallocate request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) DeallocateResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes a virtual machine from a VM scale set. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. 
instanceID is the instance ID of the virtual -// machine. -func (client VirtualMachineScaleSetVMsClient) Delete(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, VMScaleSetName, instanceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client VirtualMachineScaleSetVMsClient) DeletePreparer(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Get gets a virtual machine from a VM scale set. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. 
-func (client VirtualMachineScaleSetVMsClient) Get(resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVM, err error) { - req, err := client.GetPreparer(resourceGroupName, VMScaleSetName, instanceID) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client VirtualMachineScaleSetVMsClient) GetPreparer(resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) GetResponder(resp *http.Response) (result VirtualMachineScaleSetVM, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetInstanceView gets the status of a virtual machine from a VM scale set. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. 
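// Editor's note: an illustrative sketch only; it is not part of the deleted
// vendored file or of this diff. The synchronous Get above follows the same
// Preparer/Sender/Responder split as the rest of this generated client, so a
// caller simply invokes Get and checks the error. The InstanceID field is
// assumed from the generated VirtualMachineScaleSetVM model; the subscription,
// resource-group, scale-set, and instance identifiers are placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

func main() {
	// Credentials are omitted; client.Authorizer must be configured before use.
	vmsClient := compute.NewVirtualMachineScaleSetVMsClient("subscription-id")

	vm, err := vmsClient.Get("my-resource-group", "my-scale-set", "0")
	if err != nil {
		log.Fatal(err)
	}
	// Fields on the generated model are pointers, so guard against nil.
	if vm.InstanceID != nil {
		fmt.Println("found scale set VM instance", *vm.InstanceID)
	}
}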
-func (client VirtualMachineScaleSetVMsClient) GetInstanceView(resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMInstanceView, err error) { - req, err := client.GetInstanceViewPreparer(resourceGroupName, VMScaleSetName, instanceID) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "GetInstanceView", nil, "Failure preparing request") - return - } - - resp, err := client.GetInstanceViewSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "GetInstanceView", resp, "Failure sending request") - return - } - - result, err = client.GetInstanceViewResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "GetInstanceView", resp, "Failure responding to request") - } - - return -} - -// GetInstanceViewPreparer prepares the GetInstanceView request. -func (client VirtualMachineScaleSetVMsClient) GetInstanceViewPreparer(resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/instanceView", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetInstanceViewSender sends the GetInstanceView request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) GetInstanceViewSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetInstanceViewResponder handles the response to the GetInstanceView request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) GetInstanceViewResponder(resp *http.Response) (result VirtualMachineScaleSetVMInstanceView, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List gets a list of all virtual machines in a VM scale sets. -// -// resourceGroupName is the name of the resource group. -// virtualMachineScaleSetName is the name of the VM scale set. filter is the -// filter to apply to the operation. selectParameter is the list parameters. -// expand is the expand expression to apply to the operation. 
-func (client VirtualMachineScaleSetVMsClient) List(resourceGroupName string, virtualMachineScaleSetName string, filter string, selectParameter string, expand string) (result VirtualMachineScaleSetVMListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, virtualMachineScaleSetName, filter, selectParameter, expand) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. -func (client VirtualMachineScaleSetVMsClient) ListPreparer(resourceGroupName string, virtualMachineScaleSetName string, filter string, selectParameter string, expand string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(filter) > 0 { - queryParameters["$filter"] = autorest.Encode("query", filter) - } - if len(selectParameter) > 0 { - queryParameters["$select"] = autorest.Encode("query", selectParameter) - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) ListResponder(resp *http.Response) (result VirtualMachineScaleSetVMListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. 
-func (client VirtualMachineScaleSetVMsClient) ListNextResults(lastResults VirtualMachineScaleSetVMListResult) (result VirtualMachineScaleSetVMListResult, err error) { - req, err := lastResults.VirtualMachineScaleSetVMListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// PowerOff power off (stop) a virtual machine in a VM scale set. Note that -// resources are still attached and you are getting charged for the resources. -// Instead, use deallocate to release resources and avoid charges. This method -// may poll for completion. Polling can be canceled by passing the cancel -// channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. -func (client VirtualMachineScaleSetVMsClient) PowerOff(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.PowerOffPreparer(resourceGroupName, VMScaleSetName, instanceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "PowerOff", nil, "Failure preparing request") - return - } - - resp, err := client.PowerOffSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "PowerOff", resp, "Failure sending request") - return - } - - result, err = client.PowerOffResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "PowerOff", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// PowerOffPreparer prepares the PowerOff request. 
-func (client VirtualMachineScaleSetVMsClient) PowerOffPreparer(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/poweroff", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// PowerOffSender sends the PowerOff request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) PowerOffSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// PowerOffResponder handles the response to the PowerOff request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) PowerOffResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Reimage reimages (upgrade the operating system) a specific virtual machine -// in a VM scale set. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. 
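// Editor's note: an illustrative sketch only; it is not part of the deleted
// vendored file or of this diff. The cancel channel accepted by the
// asynchronous methods above is placed on the prepared request
// (http.Request{Cancel: cancel}), so closing it stops polling and aborts any
// outstanding HTTP call, as the method comments describe. The ten-minute
// timeout and all names are placeholders.
package main

import (
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

func main() {
	// Credentials are omitted; client.Authorizer must be configured before use.
	vmsClient := compute.NewVirtualMachineScaleSetVMsClient("subscription-id")

	cancel := make(chan struct{})
	resultChan, errChan := vmsClient.Reimage("my-resource-group", "my-scale-set", "0", cancel)

	// Give up after ten minutes; close(cancel) signals the request's Cancel
	// channel and the poller, and firing after completion is harmless.
	timer := time.AfterFunc(10*time.Minute, func() { close(cancel) })
	defer timer.Stop()

	// Both channels are buffered with one value each, so the receive order
	// does not matter.
	if err := <-errChan; err != nil {
		log.Fatal(err)
	}
	<-resultChan
	log.Println("reimage completed")
}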
-func (client VirtualMachineScaleSetVMsClient) Reimage(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ReimagePreparer(resourceGroupName, VMScaleSetName, instanceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Reimage", nil, "Failure preparing request") - return - } - - resp, err := client.ReimageSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Reimage", resp, "Failure sending request") - return - } - - result, err = client.ReimageResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Reimage", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// ReimagePreparer prepares the Reimage request. -func (client VirtualMachineScaleSetVMsClient) ReimagePreparer(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/reimage", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// ReimageSender sends the Reimage request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) ReimageSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// ReimageResponder handles the response to the Reimage request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) ReimageResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ReimageAll allows you to re-image all the disks ( including data disks ) in -// the a virtual machine scale set instance. This operation is only supported -// for managed disks. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. 
-// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. -func (client VirtualMachineScaleSetVMsClient) ReimageAll(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ReimageAllPreparer(resourceGroupName, VMScaleSetName, instanceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "ReimageAll", nil, "Failure preparing request") - return - } - - resp, err := client.ReimageAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "ReimageAll", resp, "Failure sending request") - return - } - - result, err = client.ReimageAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "ReimageAll", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// ReimageAllPreparer prepares the ReimageAll request. -func (client VirtualMachineScaleSetVMsClient) ReimageAllPreparer(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/reimageall", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// ReimageAllSender sends the ReimageAll request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) ReimageAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// ReimageAllResponder handles the response to the ReimageAll request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) ReimageAllResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Restart restarts a virtual machine in a VM scale set. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. 
The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. -func (client VirtualMachineScaleSetVMsClient) Restart(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.RestartPreparer(resourceGroupName, VMScaleSetName, instanceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Restart", nil, "Failure preparing request") - return - } - - resp, err := client.RestartSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Restart", resp, "Failure sending request") - return - } - - result, err = client.RestartResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Restart", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// RestartPreparer prepares the Restart request. -func (client VirtualMachineScaleSetVMsClient) RestartPreparer(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/restart", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// RestartSender sends the Restart request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) RestartSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// RestartResponder handles the response to the Restart request. The method always -// closes the http.Response Body. -func (client VirtualMachineScaleSetVMsClient) RestartResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Start starts a virtual machine in a VM scale set. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. 
-// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. VMScaleSetName is the -// name of the VM scale set. instanceID is the instance ID of the virtual -// machine. -func (client VirtualMachineScaleSetVMsClient) Start(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.StartPreparer(resourceGroupName, VMScaleSetName, instanceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Start", nil, "Failure preparing request") - return - } - - resp, err := client.StartSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Start", resp, "Failure sending request") - return - } - - result, err = client.StartResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Start", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// StartPreparer prepares the Start request. -func (client VirtualMachineScaleSetVMsClient) StartPreparer(resourceGroupName string, VMScaleSetName string, instanceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "instanceId": autorest.Encode("path", instanceID), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "vmScaleSetName": autorest.Encode("path", VMScaleSetName), - } - - const APIVersion = "2016-04-30-preview" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/start", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// StartSender sends the Start request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualMachineScaleSetVMsClient) StartSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// StartResponder handles the response to the Start request. The method always -// closes the http.Response Body. 
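All of the channel-returning wrappers in this removed file share the contract described in the comments above: the result and error channels are buffered with a single value and closed by the SDK's goroutine, and closing the `cancel` argument aborts both polling and any in-flight HTTP request. A minimal, hypothetical consumption sketch for the `Start` wrapper (it assumes the `compute` package's `NewVirtualMachineScaleSetVMsClient` constructor from elsewhere in this removed file, uses placeholder resource names, and omits authorizer/credential setup):

```
package main

import (
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/arm/compute"
)

func main() {
	// Placeholder subscription ID; authorizer/credential setup is omitted here.
	client := compute.NewVirtualMachineScaleSetVMsClient("00000000-0000-0000-0000-000000000000")

	// Closing cancel aborts polling and any outstanding HTTP request.
	cancel := make(chan struct{})
	timeout := time.AfterFunc(15*time.Minute, func() { close(cancel) })
	defer timeout.Stop()

	resultChan, errChan := client.Start("example-rg", "example-vmss", "0", cancel)

	// Each channel carries exactly one value and is then closed, so a single
	// receive from each is sufficient.
	result := <-resultChan
	if err := <-errChan; err != nil {
		fmt.Println("Start failed:", err)
		return
	}
	fmt.Println("Start accepted with HTTP status:", result.Response.StatusCode)
}
```

The same receive-once pattern applies to the Reimage, ReimageAll, and Restart wrappers above, and to the channel-returning methods of the network clients removed further down in this diff.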
-func (client VirtualMachineScaleSetVMsClient) StartResponder(resp *http.Response) (result OperationStatusResponse, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/applicationgateways.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/network/applicationgateways.go deleted file mode 100644 index 4ab4e0734..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/applicationgateways.go +++ /dev/null @@ -1,773 +0,0 @@ -package network - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" - "net/http" -) - -// ApplicationGatewaysClient is the composite Swagger for Network Client -type ApplicationGatewaysClient struct { - ManagementClient -} - -// NewApplicationGatewaysClient creates an instance of the -// ApplicationGatewaysClient client. -func NewApplicationGatewaysClient(subscriptionID string) ApplicationGatewaysClient { - return NewApplicationGatewaysClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewApplicationGatewaysClientWithBaseURI creates an instance of the -// ApplicationGatewaysClient client. -func NewApplicationGatewaysClientWithBaseURI(baseURI string, subscriptionID string) ApplicationGatewaysClient { - return ApplicationGatewaysClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// BackendHealth gets the backend health of the specified application gateway -// in a resource group. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. applicationGatewayName -// is the name of the application gateway. expand is expands BackendAddressPool -// and BackendHttpSettings referenced in backend health. 
-func (client ApplicationGatewaysClient) BackendHealth(resourceGroupName string, applicationGatewayName string, expand string, cancel <-chan struct{}) (<-chan ApplicationGatewayBackendHealth, <-chan error) { - resultChan := make(chan ApplicationGatewayBackendHealth, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result ApplicationGatewayBackendHealth - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.BackendHealthPreparer(resourceGroupName, applicationGatewayName, expand, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "BackendHealth", nil, "Failure preparing request") - return - } - - resp, err := client.BackendHealthSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "BackendHealth", resp, "Failure sending request") - return - } - - result, err = client.BackendHealthResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "BackendHealth", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// BackendHealthPreparer prepares the BackendHealth request. -func (client ApplicationGatewaysClient) BackendHealthPreparer(resourceGroupName string, applicationGatewayName string, expand string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "applicationGatewayName": autorest.Encode("path", applicationGatewayName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/backendhealth", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// BackendHealthSender sends the BackendHealth request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) BackendHealthSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// BackendHealthResponder handles the response to the BackendHealth request. The method always -// closes the http.Response Body. -func (client ApplicationGatewaysClient) BackendHealthResponder(resp *http.Response) (result ApplicationGatewayBackendHealth, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// CreateOrUpdate creates or updates the specified application gateway. This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. 
-// -// resourceGroupName is the name of the resource group. applicationGatewayName -// is the name of the application gateway. parameters is parameters supplied to -// the create or update application gateway operation. -func (client ApplicationGatewaysClient) CreateOrUpdate(resourceGroupName string, applicationGatewayName string, parameters ApplicationGateway, cancel <-chan struct{}) (<-chan ApplicationGateway, <-chan error) { - resultChan := make(chan ApplicationGateway, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.ApplicationGatewayPropertiesFormat", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.Enabled", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.RuleSetType", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.RuleSetVersion", Name: validation.Null, Rule: true, Chain: nil}, - }}, - }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.ApplicationGatewaysClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result ApplicationGateway - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, applicationGatewayName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client ApplicationGatewaysClient) CreateOrUpdatePreparer(resourceGroupName string, applicationGatewayName string, parameters ApplicationGateway, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "applicationGatewayName": autorest.Encode("path", applicationGatewayName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client ApplicationGatewaysClient) CreateOrUpdateResponder(resp *http.Response) (result ApplicationGateway, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes the specified application gateway. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. applicationGatewayName -// is the name of the application gateway. -func (client ApplicationGatewaysClient) Delete(resourceGroupName string, applicationGatewayName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, applicationGatewayName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. 
-func (client ApplicationGatewaysClient) DeletePreparer(resourceGroupName string, applicationGatewayName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "applicationGatewayName": autorest.Encode("path", applicationGatewayName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. -func (client ApplicationGatewaysClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusNoContent, http.StatusOK), - autorest.ByClosing()) - result.Response = resp - return -} - -// Get gets the specified application gateway. -// -// resourceGroupName is the name of the resource group. applicationGatewayName -// is the name of the application gateway. -func (client ApplicationGatewaysClient) Get(resourceGroupName string, applicationGatewayName string) (result ApplicationGateway, err error) { - req, err := client.GetPreparer(resourceGroupName, applicationGatewayName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. 
-func (client ApplicationGatewaysClient) GetPreparer(resourceGroupName string, applicationGatewayName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "applicationGatewayName": autorest.Encode("path", applicationGatewayName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client ApplicationGatewaysClient) GetResponder(resp *http.Response) (result ApplicationGateway, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List lists all application gateways in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client ApplicationGatewaysClient) List(resourceGroupName string) (result ApplicationGatewayListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. -func (client ApplicationGatewaysClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. 
-func (client ApplicationGatewaysClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client ApplicationGatewaysClient) ListResponder(resp *http.Response) (result ApplicationGatewayListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. -func (client ApplicationGatewaysClient) ListNextResults(lastResults ApplicationGatewayListResult) (result ApplicationGatewayListResult, err error) { - req, err := lastResults.ApplicationGatewayListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// ListAll gets all the application gateways in a subscription. -func (client ApplicationGatewaysClient) ListAll() (result ApplicationGatewayListResult, err error) { - req, err := client.ListAllPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", nil, "Failure preparing request") - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", resp, "Failure sending request") - return - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", resp, "Failure responding to request") - } - - return -} - -// ListAllPreparer prepares the ListAll request. -func (client ApplicationGatewaysClient) ListAllPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationGateways", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAllSender sends the ListAll request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAllResponder handles the response to the ListAll request. The method always -// closes the http.Response Body. 
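`List` returns a single page of results; `ListNextResults`, shown above, builds the follow-up request from the previous page and, per its `req == nil` branch, returns a zero value once there is no further page. A sketch of the paging loop under the same assumptions as before (placeholder names, no authorizer setup), keyed off the zero value's missing HTTP response:

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/arm/network"
)

func main() {
	// Placeholder subscription ID; authorizer/credential setup is omitted here.
	client := network.NewApplicationGatewaysClient("00000000-0000-0000-0000-000000000000")

	page, err := client.List("example-rg")
	if err != nil {
		fmt.Println("List failed:", err)
		return
	}

	for n := 1; ; n++ {
		fmt.Printf("page %d returned HTTP %d\n", n, page.Response.StatusCode)

		next, err := client.ListNextResults(page)
		if err != nil {
			fmt.Println("ListNextResults failed:", err)
			return
		}
		// A zero ApplicationGatewayListResult (no HTTP response) means the
		// previous page carried no next link.
		if next.Response.Response == nil {
			break
		}
		page = next
	}
}
```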
-func (client ApplicationGatewaysClient) ListAllResponder(resp *http.Response) (result ApplicationGatewayListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListAllNextResults retrieves the next set of results, if any. -func (client ApplicationGatewaysClient) ListAllNextResults(lastResults ApplicationGatewayListResult) (result ApplicationGatewayListResult, err error) { - req, err := lastResults.ApplicationGatewayListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", resp, "Failure sending next results request") - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", resp, "Failure responding to next results request") - } - - return -} - -// ListAvailableWafRuleSets lists all available web application firewall rule -// sets. -func (client ApplicationGatewaysClient) ListAvailableWafRuleSets() (result ApplicationGatewayAvailableWafRuleSetsResult, err error) { - req, err := client.ListAvailableWafRuleSetsPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableWafRuleSets", nil, "Failure preparing request") - return - } - - resp, err := client.ListAvailableWafRuleSetsSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableWafRuleSets", resp, "Failure sending request") - return - } - - result, err = client.ListAvailableWafRuleSetsResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableWafRuleSets", resp, "Failure responding to request") - } - - return -} - -// ListAvailableWafRuleSetsPreparer prepares the ListAvailableWafRuleSets request. -func (client ApplicationGatewaysClient) ListAvailableWafRuleSetsPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationGatewayAvailableWafRuleSets", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAvailableWafRuleSetsSender sends the ListAvailableWafRuleSets request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) ListAvailableWafRuleSetsSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAvailableWafRuleSetsResponder handles the response to the ListAvailableWafRuleSets request. 
The method always -// closes the http.Response Body. -func (client ApplicationGatewaysClient) ListAvailableWafRuleSetsResponder(resp *http.Response) (result ApplicationGatewayAvailableWafRuleSetsResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Start starts the specified application gateway. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. applicationGatewayName -// is the name of the application gateway. -func (client ApplicationGatewaysClient) Start(resourceGroupName string, applicationGatewayName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.StartPreparer(resourceGroupName, applicationGatewayName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Start", nil, "Failure preparing request") - return - } - - resp, err := client.StartSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Start", resp, "Failure sending request") - return - } - - result, err = client.StartResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Start", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// StartPreparer prepares the Start request. -func (client ApplicationGatewaysClient) StartPreparer(resourceGroupName string, applicationGatewayName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "applicationGatewayName": autorest.Encode("path", applicationGatewayName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/start", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// StartSender sends the Start request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) StartSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// StartResponder handles the response to the Start request. The method always -// closes the http.Response Body. 
-func (client ApplicationGatewaysClient) StartResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByClosing()) - result.Response = resp - return -} - -// Stop stops the specified application gateway in a resource group. This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. applicationGatewayName -// is the name of the application gateway. -func (client ApplicationGatewaysClient) Stop(resourceGroupName string, applicationGatewayName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.StopPreparer(resourceGroupName, applicationGatewayName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Stop", nil, "Failure preparing request") - return - } - - resp, err := client.StopSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Stop", resp, "Failure sending request") - return - } - - result, err = client.StopResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Stop", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// StopPreparer prepares the Stop request. -func (client ApplicationGatewaysClient) StopPreparer(resourceGroupName string, applicationGatewayName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "applicationGatewayName": autorest.Encode("path", applicationGatewayName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/stop", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// StopSender sends the Stop request. The method will close the -// http.Response Body if it receives an error. -func (client ApplicationGatewaysClient) StopSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// StopResponder handles the response to the Stop request. The method always -// closes the http.Response Body. 
-func (client ApplicationGatewaysClient) StopResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByClosing()) - result.Response = resp - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/interfaces.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/network/interfaces.go deleted file mode 100644 index 8918c08b4..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/interfaces.go +++ /dev/null @@ -1,871 +0,0 @@ -package network - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "net/http" -) - -// InterfacesClient is the composite Swagger for Network Client -type InterfacesClient struct { - ManagementClient -} - -// NewInterfacesClient creates an instance of the InterfacesClient client. -func NewInterfacesClient(subscriptionID string) InterfacesClient { - return NewInterfacesClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewInterfacesClientWithBaseURI creates an instance of the InterfacesClient -// client. -func NewInterfacesClientWithBaseURI(baseURI string, subscriptionID string) InterfacesClient { - return InterfacesClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CreateOrUpdate creates or updates a network interface. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. networkInterfaceName is -// the name of the network interface. parameters is parameters supplied to the -// create or update network interface operation. 
-func (client InterfacesClient) CreateOrUpdate(resourceGroupName string, networkInterfaceName string, parameters Interface, cancel <-chan struct{}) (<-chan Interface, <-chan error) { - resultChan := make(chan Interface, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result Interface - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, networkInterfaceName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client InterfacesClient) CreateOrUpdatePreparer(resourceGroupName string, networkInterfaceName string, parameters Interface, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkInterfaceName": autorest.Encode("path", networkInterfaceName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client InterfacesClient) CreateOrUpdateResponder(resp *http.Response) (result Interface, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes the specified network interface. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. networkInterfaceName is -// the name of the network interface. 
-func (client InterfacesClient) Delete(resourceGroupName string, networkInterfaceName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, networkInterfaceName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client InterfacesClient) DeletePreparer(resourceGroupName string, networkInterfaceName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkInterfaceName": autorest.Encode("path", networkInterfaceName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. -func (client InterfacesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusAccepted, http.StatusOK), - autorest.ByClosing()) - result.Response = resp - return -} - -// Get gets information about the specified network interface. -// -// resourceGroupName is the name of the resource group. networkInterfaceName is -// the name of the network interface. expand is expands referenced resources. 
-func (client InterfacesClient) Get(resourceGroupName string, networkInterfaceName string, expand string) (result Interface, err error) { - req, err := client.GetPreparer(resourceGroupName, networkInterfaceName, expand) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client InterfacesClient) GetPreparer(resourceGroupName string, networkInterfaceName string, expand string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkInterfaceName": autorest.Encode("path", networkInterfaceName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client InterfacesClient) GetResponder(resp *http.Response) (result Interface, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetEffectiveRouteTable gets all route tables applied to a network interface. -// This method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. networkInterfaceName is -// the name of the network interface. 
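Not every method in these removed clients is long-running: `Get`, shown above, is synchronous, takes no cancel channel, and only adds the `$expand` query parameter when the argument is non-empty. A minimal sketch with the same caveats (placeholder names, no authorizer setup):

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/arm/network"
)

func main() {
	// Placeholder subscription ID; authorizer/credential setup is omitted here.
	client := network.NewInterfacesClient("00000000-0000-0000-0000-000000000000")

	// An empty expand argument leaves the $expand query parameter off entirely.
	nic, err := client.Get("example-rg", "example-nic", "")
	if err != nil {
		fmt.Println("Get failed:", err)
		return
	}
	fmt.Println("Get returned HTTP", nic.Response.StatusCode)
}
```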
-func (client InterfacesClient) GetEffectiveRouteTable(resourceGroupName string, networkInterfaceName string, cancel <-chan struct{}) (<-chan EffectiveRouteListResult, <-chan error) { - resultChan := make(chan EffectiveRouteListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result EffectiveRouteListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetEffectiveRouteTablePreparer(resourceGroupName, networkInterfaceName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetEffectiveRouteTable", nil, "Failure preparing request") - return - } - - resp, err := client.GetEffectiveRouteTableSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetEffectiveRouteTable", resp, "Failure sending request") - return - } - - result, err = client.GetEffectiveRouteTableResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetEffectiveRouteTable", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetEffectiveRouteTablePreparer prepares the GetEffectiveRouteTable request. -func (client InterfacesClient) GetEffectiveRouteTablePreparer(resourceGroupName string, networkInterfaceName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkInterfaceName": autorest.Encode("path", networkInterfaceName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}/effectiveRouteTable", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetEffectiveRouteTableSender sends the GetEffectiveRouteTable request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) GetEffectiveRouteTableSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetEffectiveRouteTableResponder handles the response to the GetEffectiveRouteTable request. The method always -// closes the http.Response Body. -func (client InterfacesClient) GetEffectiveRouteTableResponder(resp *http.Response) (result EffectiveRouteListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetVirtualMachineScaleSetNetworkInterface get the specified network -// interface in a virtual machine scale set. -// -// resourceGroupName is the name of the resource group. -// virtualMachineScaleSetName is the name of the virtual machine scale set. -// virtualmachineIndex is the virtual machine index. networkInterfaceName is -// the name of the network interface. 
expand is expands referenced resources. -func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterface(resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, expand string) (result Interface, err error) { - req, err := client.GetVirtualMachineScaleSetNetworkInterfacePreparer(resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, expand) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetNetworkInterface", nil, "Failure preparing request") - return - } - - resp, err := client.GetVirtualMachineScaleSetNetworkInterfaceSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetNetworkInterface", resp, "Failure sending request") - return - } - - result, err = client.GetVirtualMachineScaleSetNetworkInterfaceResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetNetworkInterface", resp, "Failure responding to request") - } - - return -} - -// GetVirtualMachineScaleSetNetworkInterfacePreparer prepares the GetVirtualMachineScaleSetNetworkInterface request. -func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterfacePreparer(resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, expand string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkInterfaceName": autorest.Encode("path", networkInterfaceName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), - "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces/{networkInterfaceName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetVirtualMachineScaleSetNetworkInterfaceSender sends the GetVirtualMachineScaleSetNetworkInterface request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterfaceSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetVirtualMachineScaleSetNetworkInterfaceResponder handles the response to the GetVirtualMachineScaleSetNetworkInterface request. The method always -// closes the http.Response Body. 
-func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterfaceResponder(resp *http.Response) (result Interface, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List gets all network interfaces in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client InterfacesClient) List(resourceGroupName string) (result InterfaceListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. -func (client InterfacesClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client InterfacesClient) ListResponder(resp *http.Response) (result InterfaceListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. 
-func (client InterfacesClient) ListNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { - req, err := lastResults.InterfaceListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// ListAll gets all network interfaces in a subscription. -func (client InterfacesClient) ListAll() (result InterfaceListResult, err error) { - req, err := client.ListAllPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", nil, "Failure preparing request") - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", resp, "Failure sending request") - return - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", resp, "Failure responding to request") - } - - return -} - -// ListAllPreparer prepares the ListAll request. -func (client InterfacesClient) ListAllPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/networkInterfaces", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAllSender sends the ListAll request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAllResponder handles the response to the ListAll request. The method always -// closes the http.Response Body. -func (client InterfacesClient) ListAllResponder(resp *http.Response) (result InterfaceListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListAllNextResults retrieves the next set of results, if any. 
-func (client InterfacesClient) ListAllNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { - req, err := lastResults.InterfaceListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", resp, "Failure sending next results request") - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", resp, "Failure responding to next results request") - } - - return -} - -// ListEffectiveNetworkSecurityGroups gets all network security groups applied -// to a network interface. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. networkInterfaceName is -// the name of the network interface. -func (client InterfacesClient) ListEffectiveNetworkSecurityGroups(resourceGroupName string, networkInterfaceName string, cancel <-chan struct{}) (<-chan EffectiveNetworkSecurityGroupListResult, <-chan error) { - resultChan := make(chan EffectiveNetworkSecurityGroupListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result EffectiveNetworkSecurityGroupListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ListEffectiveNetworkSecurityGroupsPreparer(resourceGroupName, networkInterfaceName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListEffectiveNetworkSecurityGroups", nil, "Failure preparing request") - return - } - - resp, err := client.ListEffectiveNetworkSecurityGroupsSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListEffectiveNetworkSecurityGroups", resp, "Failure sending request") - return - } - - result, err = client.ListEffectiveNetworkSecurityGroupsResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListEffectiveNetworkSecurityGroups", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// ListEffectiveNetworkSecurityGroupsPreparer prepares the ListEffectiveNetworkSecurityGroups request. 
-func (client InterfacesClient) ListEffectiveNetworkSecurityGroupsPreparer(resourceGroupName string, networkInterfaceName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkInterfaceName": autorest.Encode("path", networkInterfaceName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}/effectiveNetworkSecurityGroups", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// ListEffectiveNetworkSecurityGroupsSender sends the ListEffectiveNetworkSecurityGroups request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) ListEffectiveNetworkSecurityGroupsSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// ListEffectiveNetworkSecurityGroupsResponder handles the response to the ListEffectiveNetworkSecurityGroups request. The method always -// closes the http.Response Body. -func (client InterfacesClient) ListEffectiveNetworkSecurityGroupsResponder(resp *http.Response) (result EffectiveNetworkSecurityGroupListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListVirtualMachineScaleSetNetworkInterfaces gets all network interfaces in a -// virtual machine scale set. -// -// resourceGroupName is the name of the resource group. -// virtualMachineScaleSetName is the name of the virtual machine scale set. -func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfaces(resourceGroupName string, virtualMachineScaleSetName string) (result InterfaceListResult, err error) { - req, err := client.ListVirtualMachineScaleSetNetworkInterfacesPreparer(resourceGroupName, virtualMachineScaleSetName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", nil, "Failure preparing request") - return - } - - resp, err := client.ListVirtualMachineScaleSetNetworkInterfacesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", resp, "Failure sending request") - return - } - - result, err = client.ListVirtualMachineScaleSetNetworkInterfacesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", resp, "Failure responding to request") - } - - return -} - -// ListVirtualMachineScaleSetNetworkInterfacesPreparer prepares the ListVirtualMachineScaleSetNetworkInterfaces request. 
-func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesPreparer(resourceGroupName string, virtualMachineScaleSetName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/networkInterfaces", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListVirtualMachineScaleSetNetworkInterfacesSender sends the ListVirtualMachineScaleSetNetworkInterfaces request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListVirtualMachineScaleSetNetworkInterfacesResponder handles the response to the ListVirtualMachineScaleSetNetworkInterfaces request. The method always -// closes the http.Response Body. -func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesResponder(resp *http.Response) (result InterfaceListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListVirtualMachineScaleSetNetworkInterfacesNextResults retrieves the next set of results, if any. -func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { - req, err := lastResults.InterfaceListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListVirtualMachineScaleSetNetworkInterfacesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", resp, "Failure sending next results request") - } - - result, err = client.ListVirtualMachineScaleSetNetworkInterfacesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", resp, "Failure responding to next results request") - } - - return -} - -// ListVirtualMachineScaleSetVMNetworkInterfaces gets information about all -// network interfaces in a virtual machine in a virtual machine scale set. -// -// resourceGroupName is the name of the resource group. -// virtualMachineScaleSetName is the name of the virtual machine scale set. -// virtualmachineIndex is the virtual machine index. 
-func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfaces(resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string) (result InterfaceListResult, err error) { - req, err := client.ListVirtualMachineScaleSetVMNetworkInterfacesPreparer(resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", nil, "Failure preparing request") - return - } - - resp, err := client.ListVirtualMachineScaleSetVMNetworkInterfacesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", resp, "Failure sending request") - return - } - - result, err = client.ListVirtualMachineScaleSetVMNetworkInterfacesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", resp, "Failure responding to request") - } - - return -} - -// ListVirtualMachineScaleSetVMNetworkInterfacesPreparer prepares the ListVirtualMachineScaleSetVMNetworkInterfaces request. -func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesPreparer(resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), - "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListVirtualMachineScaleSetVMNetworkInterfacesSender sends the ListVirtualMachineScaleSetVMNetworkInterfaces request. The method will close the -// http.Response Body if it receives an error. -func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListVirtualMachineScaleSetVMNetworkInterfacesResponder handles the response to the ListVirtualMachineScaleSetVMNetworkInterfaces request. The method always -// closes the http.Response Body. -func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesResponder(resp *http.Response) (result InterfaceListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListVirtualMachineScaleSetVMNetworkInterfacesNextResults retrieves the next set of results, if any. 
-func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { - req, err := lastResults.InterfaceListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListVirtualMachineScaleSetVMNetworkInterfacesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", resp, "Failure sending next results request") - } - - result, err = client.ListVirtualMachineScaleSetVMNetworkInterfacesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", resp, "Failure responding to next results request") - } - - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/models.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/network/models.go deleted file mode 100644 index 505691db0..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/models.go +++ /dev/null @@ -1,2996 +0,0 @@ -package network - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/date" - "github.com/Azure/go-autorest/autorest/to" - "net/http" -) - -// Access enumerates the values for access. -type Access string - -const ( - // Allow specifies the allow state for access. - Allow Access = "Allow" - // Deny specifies the deny state for access. - Deny Access = "Deny" -) - -// ApplicationGatewayBackendHealthServerHealth enumerates the values for -// application gateway backend health server health. -type ApplicationGatewayBackendHealthServerHealth string - -const ( - // Down specifies the down state for application gateway backend health - // server health. - Down ApplicationGatewayBackendHealthServerHealth = "Down" - // Draining specifies the draining state for application gateway backend - // health server health. - Draining ApplicationGatewayBackendHealthServerHealth = "Draining" - // Partial specifies the partial state for application gateway backend - // health server health. - Partial ApplicationGatewayBackendHealthServerHealth = "Partial" - // Unknown specifies the unknown state for application gateway backend - // health server health. - Unknown ApplicationGatewayBackendHealthServerHealth = "Unknown" - // Up specifies the up state for application gateway backend health server - // health. 
- Up ApplicationGatewayBackendHealthServerHealth = "Up" -) - -// ApplicationGatewayCookieBasedAffinity enumerates the values for application -// gateway cookie based affinity. -type ApplicationGatewayCookieBasedAffinity string - -const ( - // Disabled specifies the disabled state for application gateway cookie - // based affinity. - Disabled ApplicationGatewayCookieBasedAffinity = "Disabled" - // Enabled specifies the enabled state for application gateway cookie based - // affinity. - Enabled ApplicationGatewayCookieBasedAffinity = "Enabled" -) - -// ApplicationGatewayFirewallMode enumerates the values for application gateway -// firewall mode. -type ApplicationGatewayFirewallMode string - -const ( - // Detection specifies the detection state for application gateway firewall - // mode. - Detection ApplicationGatewayFirewallMode = "Detection" - // Prevention specifies the prevention state for application gateway - // firewall mode. - Prevention ApplicationGatewayFirewallMode = "Prevention" -) - -// ApplicationGatewayOperationalState enumerates the values for application -// gateway operational state. -type ApplicationGatewayOperationalState string - -const ( - // Running specifies the running state for application gateway operational - // state. - Running ApplicationGatewayOperationalState = "Running" - // Starting specifies the starting state for application gateway - // operational state. - Starting ApplicationGatewayOperationalState = "Starting" - // Stopped specifies the stopped state for application gateway operational - // state. - Stopped ApplicationGatewayOperationalState = "Stopped" - // Stopping specifies the stopping state for application gateway - // operational state. - Stopping ApplicationGatewayOperationalState = "Stopping" -) - -// ApplicationGatewayProtocol enumerates the values for application gateway -// protocol. -type ApplicationGatewayProtocol string - -const ( - // HTTP specifies the http state for application gateway protocol. - HTTP ApplicationGatewayProtocol = "Http" - // HTTPS specifies the https state for application gateway protocol. - HTTPS ApplicationGatewayProtocol = "Https" -) - -// ApplicationGatewayRequestRoutingRuleType enumerates the values for -// application gateway request routing rule type. -type ApplicationGatewayRequestRoutingRuleType string - -const ( - // Basic specifies the basic state for application gateway request routing - // rule type. - Basic ApplicationGatewayRequestRoutingRuleType = "Basic" - // PathBasedRouting specifies the path based routing state for application - // gateway request routing rule type. - PathBasedRouting ApplicationGatewayRequestRoutingRuleType = "PathBasedRouting" -) - -// ApplicationGatewaySkuName enumerates the values for application gateway sku -// name. -type ApplicationGatewaySkuName string - -const ( - // StandardLarge specifies the standard large state for application gateway - // sku name. - StandardLarge ApplicationGatewaySkuName = "Standard_Large" - // StandardMedium specifies the standard medium state for application - // gateway sku name. - StandardMedium ApplicationGatewaySkuName = "Standard_Medium" - // StandardSmall specifies the standard small state for application gateway - // sku name. - StandardSmall ApplicationGatewaySkuName = "Standard_Small" - // WAFLarge specifies the waf large state for application gateway sku name. - WAFLarge ApplicationGatewaySkuName = "WAF_Large" - // WAFMedium specifies the waf medium state for application gateway sku - // name. 
- WAFMedium ApplicationGatewaySkuName = "WAF_Medium" -) - -// ApplicationGatewaySslProtocol enumerates the values for application gateway -// ssl protocol. -type ApplicationGatewaySslProtocol string - -const ( - // TLSv10 specifies the tl sv 10 state for application gateway ssl - // protocol. - TLSv10 ApplicationGatewaySslProtocol = "TLSv1_0" - // TLSv11 specifies the tl sv 11 state for application gateway ssl - // protocol. - TLSv11 ApplicationGatewaySslProtocol = "TLSv1_1" - // TLSv12 specifies the tl sv 12 state for application gateway ssl - // protocol. - TLSv12 ApplicationGatewaySslProtocol = "TLSv1_2" -) - -// ApplicationGatewayTier enumerates the values for application gateway tier. -type ApplicationGatewayTier string - -const ( - // Standard specifies the standard state for application gateway tier. - Standard ApplicationGatewayTier = "Standard" - // WAF specifies the waf state for application gateway tier. - WAF ApplicationGatewayTier = "WAF" -) - -// AssociationType enumerates the values for association type. -type AssociationType string - -const ( - // Associated specifies the associated state for association type. - Associated AssociationType = "Associated" - // Contains specifies the contains state for association type. - Contains AssociationType = "Contains" -) - -// AuthorizationUseStatus enumerates the values for authorization use status. -type AuthorizationUseStatus string - -const ( - // Available specifies the available state for authorization use status. - Available AuthorizationUseStatus = "Available" - // InUse specifies the in use state for authorization use status. - InUse AuthorizationUseStatus = "InUse" -) - -// BgpPeerState enumerates the values for bgp peer state. -type BgpPeerState string - -const ( - // BgpPeerStateConnected specifies the bgp peer state connected state for - // bgp peer state. - BgpPeerStateConnected BgpPeerState = "Connected" - // BgpPeerStateConnecting specifies the bgp peer state connecting state for - // bgp peer state. - BgpPeerStateConnecting BgpPeerState = "Connecting" - // BgpPeerStateIdle specifies the bgp peer state idle state for bgp peer - // state. - BgpPeerStateIdle BgpPeerState = "Idle" - // BgpPeerStateStopped specifies the bgp peer state stopped state for bgp - // peer state. - BgpPeerStateStopped BgpPeerState = "Stopped" - // BgpPeerStateUnknown specifies the bgp peer state unknown state for bgp - // peer state. - BgpPeerStateUnknown BgpPeerState = "Unknown" -) - -// DhGroup enumerates the values for dh group. -type DhGroup string - -const ( - // DHGroup1 specifies the dh group 1 state for dh group. - DHGroup1 DhGroup = "DHGroup1" - // DHGroup14 specifies the dh group 14 state for dh group. - DHGroup14 DhGroup = "DHGroup14" - // DHGroup2 specifies the dh group 2 state for dh group. - DHGroup2 DhGroup = "DHGroup2" - // DHGroup2048 specifies the dh group 2048 state for dh group. - DHGroup2048 DhGroup = "DHGroup2048" - // DHGroup24 specifies the dh group 24 state for dh group. - DHGroup24 DhGroup = "DHGroup24" - // ECP256 specifies the ecp256 state for dh group. - ECP256 DhGroup = "ECP256" - // ECP384 specifies the ecp384 state for dh group. - ECP384 DhGroup = "ECP384" - // None specifies the none state for dh group. - None DhGroup = "None" -) - -// Direction enumerates the values for direction. -type Direction string - -const ( - // Inbound specifies the inbound state for direction. - Inbound Direction = "Inbound" - // Outbound specifies the outbound state for direction. 
- Outbound Direction = "Outbound" -) - -// EffectiveRouteSource enumerates the values for effective route source. -type EffectiveRouteSource string - -const ( - // EffectiveRouteSourceDefault specifies the effective route source default - // state for effective route source. - EffectiveRouteSourceDefault EffectiveRouteSource = "Default" - // EffectiveRouteSourceUnknown specifies the effective route source unknown - // state for effective route source. - EffectiveRouteSourceUnknown EffectiveRouteSource = "Unknown" - // EffectiveRouteSourceUser specifies the effective route source user state - // for effective route source. - EffectiveRouteSourceUser EffectiveRouteSource = "User" - // EffectiveRouteSourceVirtualNetworkGateway specifies the effective route - // source virtual network gateway state for effective route source. - EffectiveRouteSourceVirtualNetworkGateway EffectiveRouteSource = "VirtualNetworkGateway" -) - -// EffectiveRouteState enumerates the values for effective route state. -type EffectiveRouteState string - -const ( - // Active specifies the active state for effective route state. - Active EffectiveRouteState = "Active" - // Invalid specifies the invalid state for effective route state. - Invalid EffectiveRouteState = "Invalid" -) - -// ExpressRouteCircuitPeeringAdvertisedPublicPrefixState enumerates the values -// for express route circuit peering advertised public prefix state. -type ExpressRouteCircuitPeeringAdvertisedPublicPrefixState string - -const ( - // Configured specifies the configured state for express route circuit - // peering advertised public prefix state. - Configured ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "Configured" - // Configuring specifies the configuring state for express route circuit - // peering advertised public prefix state. - Configuring ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "Configuring" - // NotConfigured specifies the not configured state for express route - // circuit peering advertised public prefix state. - NotConfigured ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "NotConfigured" - // ValidationNeeded specifies the validation needed state for express route - // circuit peering advertised public prefix state. - ValidationNeeded ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "ValidationNeeded" -) - -// ExpressRouteCircuitPeeringState enumerates the values for express route -// circuit peering state. -type ExpressRouteCircuitPeeringState string - -const ( - // ExpressRouteCircuitPeeringStateDisabled specifies the express route - // circuit peering state disabled state for express route circuit peering - // state. - ExpressRouteCircuitPeeringStateDisabled ExpressRouteCircuitPeeringState = "Disabled" - // ExpressRouteCircuitPeeringStateEnabled specifies the express route - // circuit peering state enabled state for express route circuit peering - // state. - ExpressRouteCircuitPeeringStateEnabled ExpressRouteCircuitPeeringState = "Enabled" -) - -// ExpressRouteCircuitPeeringType enumerates the values for express route -// circuit peering type. -type ExpressRouteCircuitPeeringType string - -const ( - // AzurePrivatePeering specifies the azure private peering state for - // express route circuit peering type. - AzurePrivatePeering ExpressRouteCircuitPeeringType = "AzurePrivatePeering" - // AzurePublicPeering specifies the azure public peering state for express - // route circuit peering type. 
- AzurePublicPeering ExpressRouteCircuitPeeringType = "AzurePublicPeering" - // MicrosoftPeering specifies the microsoft peering state for express route - // circuit peering type. - MicrosoftPeering ExpressRouteCircuitPeeringType = "MicrosoftPeering" -) - -// ExpressRouteCircuitSkuFamily enumerates the values for express route circuit -// sku family. -type ExpressRouteCircuitSkuFamily string - -const ( - // MeteredData specifies the metered data state for express route circuit - // sku family. - MeteredData ExpressRouteCircuitSkuFamily = "MeteredData" - // UnlimitedData specifies the unlimited data state for express route - // circuit sku family. - UnlimitedData ExpressRouteCircuitSkuFamily = "UnlimitedData" -) - -// ExpressRouteCircuitSkuTier enumerates the values for express route circuit -// sku tier. -type ExpressRouteCircuitSkuTier string - -const ( - // ExpressRouteCircuitSkuTierPremium specifies the express route circuit - // sku tier premium state for express route circuit sku tier. - ExpressRouteCircuitSkuTierPremium ExpressRouteCircuitSkuTier = "Premium" - // ExpressRouteCircuitSkuTierStandard specifies the express route circuit - // sku tier standard state for express route circuit sku tier. - ExpressRouteCircuitSkuTierStandard ExpressRouteCircuitSkuTier = "Standard" -) - -// IkeEncryption enumerates the values for ike encryption. -type IkeEncryption string - -const ( - // AES128 specifies the aes128 state for ike encryption. - AES128 IkeEncryption = "AES128" - // AES192 specifies the aes192 state for ike encryption. - AES192 IkeEncryption = "AES192" - // AES256 specifies the aes256 state for ike encryption. - AES256 IkeEncryption = "AES256" - // DES specifies the des state for ike encryption. - DES IkeEncryption = "DES" - // DES3 specifies the des3 state for ike encryption. - DES3 IkeEncryption = "DES3" -) - -// IkeIntegrity enumerates the values for ike integrity. -type IkeIntegrity string - -const ( - // MD5 specifies the md5 state for ike integrity. - MD5 IkeIntegrity = "MD5" - // SHA1 specifies the sha1 state for ike integrity. - SHA1 IkeIntegrity = "SHA1" - // SHA256 specifies the sha256 state for ike integrity. - SHA256 IkeIntegrity = "SHA256" - // SHA384 specifies the sha384 state for ike integrity. - SHA384 IkeIntegrity = "SHA384" -) - -// IPAllocationMethod enumerates the values for ip allocation method. -type IPAllocationMethod string - -const ( - // Dynamic specifies the dynamic state for ip allocation method. - Dynamic IPAllocationMethod = "Dynamic" - // Static specifies the static state for ip allocation method. - Static IPAllocationMethod = "Static" -) - -// IpsecEncryption enumerates the values for ipsec encryption. -type IpsecEncryption string - -const ( - // IpsecEncryptionAES128 specifies the ipsec encryption aes128 state for - // ipsec encryption. - IpsecEncryptionAES128 IpsecEncryption = "AES128" - // IpsecEncryptionAES192 specifies the ipsec encryption aes192 state for - // ipsec encryption. - IpsecEncryptionAES192 IpsecEncryption = "AES192" - // IpsecEncryptionAES256 specifies the ipsec encryption aes256 state for - // ipsec encryption. - IpsecEncryptionAES256 IpsecEncryption = "AES256" - // IpsecEncryptionDES specifies the ipsec encryption des state for ipsec - // encryption. - IpsecEncryptionDES IpsecEncryption = "DES" - // IpsecEncryptionDES3 specifies the ipsec encryption des3 state for ipsec - // encryption. 
- IpsecEncryptionDES3 IpsecEncryption = "DES3" - // IpsecEncryptionGCMAES128 specifies the ipsec encryption gcmaes128 state - // for ipsec encryption. - IpsecEncryptionGCMAES128 IpsecEncryption = "GCMAES128" - // IpsecEncryptionGCMAES192 specifies the ipsec encryption gcmaes192 state - // for ipsec encryption. - IpsecEncryptionGCMAES192 IpsecEncryption = "GCMAES192" - // IpsecEncryptionGCMAES256 specifies the ipsec encryption gcmaes256 state - // for ipsec encryption. - IpsecEncryptionGCMAES256 IpsecEncryption = "GCMAES256" - // IpsecEncryptionNone specifies the ipsec encryption none state for ipsec - // encryption. - IpsecEncryptionNone IpsecEncryption = "None" -) - -// IpsecIntegrity enumerates the values for ipsec integrity. -type IpsecIntegrity string - -const ( - // IpsecIntegrityGCMAES128 specifies the ipsec integrity gcmaes128 state - // for ipsec integrity. - IpsecIntegrityGCMAES128 IpsecIntegrity = "GCMAES128" - // IpsecIntegrityGCMAES192 specifies the ipsec integrity gcmaes192 state - // for ipsec integrity. - IpsecIntegrityGCMAES192 IpsecIntegrity = "GCMAES192" - // IpsecIntegrityGCMAES256 specifies the ipsec integrity gcmaes256 state - // for ipsec integrity. - IpsecIntegrityGCMAES256 IpsecIntegrity = "GCMAES256" - // IpsecIntegrityMD5 specifies the ipsec integrity md5 state for ipsec - // integrity. - IpsecIntegrityMD5 IpsecIntegrity = "MD5" - // IpsecIntegritySHA1 specifies the ipsec integrity sha1 state for ipsec - // integrity. - IpsecIntegritySHA1 IpsecIntegrity = "SHA1" - // IpsecIntegritySHA256 specifies the ipsec integrity sha256 state for - // ipsec integrity. - IpsecIntegritySHA256 IpsecIntegrity = "SHA256" -) - -// IPVersion enumerates the values for ip version. -type IPVersion string - -const ( - // IPv4 specifies the i pv 4 state for ip version. - IPv4 IPVersion = "IPv4" - // IPv6 specifies the i pv 6 state for ip version. - IPv6 IPVersion = "IPv6" -) - -// LoadDistribution enumerates the values for load distribution. -type LoadDistribution string - -const ( - // Default specifies the default state for load distribution. - Default LoadDistribution = "Default" - // SourceIP specifies the source ip state for load distribution. - SourceIP LoadDistribution = "SourceIP" - // SourceIPProtocol specifies the source ip protocol state for load - // distribution. - SourceIPProtocol LoadDistribution = "SourceIPProtocol" -) - -// NextHopType enumerates the values for next hop type. -type NextHopType string - -const ( - // NextHopTypeHyperNetGateway specifies the next hop type hyper net gateway - // state for next hop type. - NextHopTypeHyperNetGateway NextHopType = "HyperNetGateway" - // NextHopTypeInternet specifies the next hop type internet state for next - // hop type. - NextHopTypeInternet NextHopType = "Internet" - // NextHopTypeNone specifies the next hop type none state for next hop - // type. - NextHopTypeNone NextHopType = "None" - // NextHopTypeVirtualAppliance specifies the next hop type virtual - // appliance state for next hop type. - NextHopTypeVirtualAppliance NextHopType = "VirtualAppliance" - // NextHopTypeVirtualNetworkGateway specifies the next hop type virtual - // network gateway state for next hop type. - NextHopTypeVirtualNetworkGateway NextHopType = "VirtualNetworkGateway" - // NextHopTypeVnetLocal specifies the next hop type vnet local state for - // next hop type. - NextHopTypeVnetLocal NextHopType = "VnetLocal" -) - -// OperationStatus enumerates the values for operation status. 
-type OperationStatus string - -const ( - // Failed specifies the failed state for operation status. - Failed OperationStatus = "Failed" - // InProgress specifies the in progress state for operation status. - InProgress OperationStatus = "InProgress" - // Succeeded specifies the succeeded state for operation status. - Succeeded OperationStatus = "Succeeded" -) - -// PcError enumerates the values for pc error. -type PcError string - -const ( - // AgentStopped specifies the agent stopped state for pc error. - AgentStopped PcError = "AgentStopped" - // CaptureFailed specifies the capture failed state for pc error. - CaptureFailed PcError = "CaptureFailed" - // InternalError specifies the internal error state for pc error. - InternalError PcError = "InternalError" - // LocalFileFailed specifies the local file failed state for pc error. - LocalFileFailed PcError = "LocalFileFailed" - // StorageFailed specifies the storage failed state for pc error. - StorageFailed PcError = "StorageFailed" -) - -// PcProtocol enumerates the values for pc protocol. -type PcProtocol string - -const ( - // Any specifies the any state for pc protocol. - Any PcProtocol = "Any" - // TCP specifies the tcp state for pc protocol. - TCP PcProtocol = "TCP" - // UDP specifies the udp state for pc protocol. - UDP PcProtocol = "UDP" -) - -// PcStatus enumerates the values for pc status. -type PcStatus string - -const ( - // PcStatusError specifies the pc status error state for pc status. - PcStatusError PcStatus = "Error" - // PcStatusNotStarted specifies the pc status not started state for pc - // status. - PcStatusNotStarted PcStatus = "NotStarted" - // PcStatusRunning specifies the pc status running state for pc status. - PcStatusRunning PcStatus = "Running" - // PcStatusStopped specifies the pc status stopped state for pc status. - PcStatusStopped PcStatus = "Stopped" - // PcStatusUnknown specifies the pc status unknown state for pc status. - PcStatusUnknown PcStatus = "Unknown" -) - -// PfsGroup enumerates the values for pfs group. -type PfsGroup string - -const ( - // PfsGroupECP256 specifies the pfs group ecp256 state for pfs group. - PfsGroupECP256 PfsGroup = "ECP256" - // PfsGroupECP384 specifies the pfs group ecp384 state for pfs group. - PfsGroupECP384 PfsGroup = "ECP384" - // PfsGroupNone specifies the pfs group none state for pfs group. - PfsGroupNone PfsGroup = "None" - // PfsGroupPFS1 specifies the pfs group pfs1 state for pfs group. - PfsGroupPFS1 PfsGroup = "PFS1" - // PfsGroupPFS2 specifies the pfs group pfs2 state for pfs group. - PfsGroupPFS2 PfsGroup = "PFS2" - // PfsGroupPFS2048 specifies the pfs group pfs2048 state for pfs group. - PfsGroupPFS2048 PfsGroup = "PFS2048" - // PfsGroupPFS24 specifies the pfs group pfs24 state for pfs group. - PfsGroupPFS24 PfsGroup = "PFS24" -) - -// ProbeProtocol enumerates the values for probe protocol. -type ProbeProtocol string - -const ( - // ProbeProtocolHTTP specifies the probe protocol http state for probe - // protocol. - ProbeProtocolHTTP ProbeProtocol = "Http" - // ProbeProtocolTCP specifies the probe protocol tcp state for probe - // protocol. - ProbeProtocolTCP ProbeProtocol = "Tcp" -) - -// ProcessorArchitecture enumerates the values for processor architecture. -type ProcessorArchitecture string - -const ( - // Amd64 specifies the amd 64 state for processor architecture. - Amd64 ProcessorArchitecture = "Amd64" - // X86 specifies the x86 state for processor architecture. - X86 ProcessorArchitecture = "X86" -) - -// Protocol enumerates the values for protocol. 
-type Protocol string - -const ( - // ProtocolTCP specifies the protocol tcp state for protocol. - ProtocolTCP Protocol = "TCP" - // ProtocolUDP specifies the protocol udp state for protocol. - ProtocolUDP Protocol = "UDP" -) - -// ProvisioningState enumerates the values for provisioning state. -type ProvisioningState string - -const ( - // ProvisioningStateDeleting specifies the provisioning state deleting - // state for provisioning state. - ProvisioningStateDeleting ProvisioningState = "Deleting" - // ProvisioningStateFailed specifies the provisioning state failed state - // for provisioning state. - ProvisioningStateFailed ProvisioningState = "Failed" - // ProvisioningStateSucceeded specifies the provisioning state succeeded - // state for provisioning state. - ProvisioningStateSucceeded ProvisioningState = "Succeeded" - // ProvisioningStateUpdating specifies the provisioning state updating - // state for provisioning state. - ProvisioningStateUpdating ProvisioningState = "Updating" -) - -// RouteNextHopType enumerates the values for route next hop type. -type RouteNextHopType string - -const ( - // RouteNextHopTypeInternet specifies the route next hop type internet - // state for route next hop type. - RouteNextHopTypeInternet RouteNextHopType = "Internet" - // RouteNextHopTypeNone specifies the route next hop type none state for - // route next hop type. - RouteNextHopTypeNone RouteNextHopType = "None" - // RouteNextHopTypeVirtualAppliance specifies the route next hop type - // virtual appliance state for route next hop type. - RouteNextHopTypeVirtualAppliance RouteNextHopType = "VirtualAppliance" - // RouteNextHopTypeVirtualNetworkGateway specifies the route next hop type - // virtual network gateway state for route next hop type. - RouteNextHopTypeVirtualNetworkGateway RouteNextHopType = "VirtualNetworkGateway" - // RouteNextHopTypeVnetLocal specifies the route next hop type vnet local - // state for route next hop type. - RouteNextHopTypeVnetLocal RouteNextHopType = "VnetLocal" -) - -// SecurityRuleAccess enumerates the values for security rule access. -type SecurityRuleAccess string - -const ( - // SecurityRuleAccessAllow specifies the security rule access allow state - // for security rule access. - SecurityRuleAccessAllow SecurityRuleAccess = "Allow" - // SecurityRuleAccessDeny specifies the security rule access deny state for - // security rule access. - SecurityRuleAccessDeny SecurityRuleAccess = "Deny" -) - -// SecurityRuleDirection enumerates the values for security rule direction. -type SecurityRuleDirection string - -const ( - // SecurityRuleDirectionInbound specifies the security rule direction - // inbound state for security rule direction. - SecurityRuleDirectionInbound SecurityRuleDirection = "Inbound" - // SecurityRuleDirectionOutbound specifies the security rule direction - // outbound state for security rule direction. - SecurityRuleDirectionOutbound SecurityRuleDirection = "Outbound" -) - -// SecurityRuleProtocol enumerates the values for security rule protocol. -type SecurityRuleProtocol string - -const ( - // SecurityRuleProtocolAsterisk specifies the security rule protocol - // asterisk state for security rule protocol. - SecurityRuleProtocolAsterisk SecurityRuleProtocol = "*" - // SecurityRuleProtocolTCP specifies the security rule protocol tcp state - // for security rule protocol. - SecurityRuleProtocolTCP SecurityRuleProtocol = "Tcp" - // SecurityRuleProtocolUDP specifies the security rule protocol udp state - // for security rule protocol. 
- SecurityRuleProtocolUDP SecurityRuleProtocol = "Udp" -) - -// ServiceProviderProvisioningState enumerates the values for service provider -// provisioning state. -type ServiceProviderProvisioningState string - -const ( - // Deprovisioning specifies the deprovisioning state for service provider - // provisioning state. - Deprovisioning ServiceProviderProvisioningState = "Deprovisioning" - // NotProvisioned specifies the not provisioned state for service provider - // provisioning state. - NotProvisioned ServiceProviderProvisioningState = "NotProvisioned" - // Provisioned specifies the provisioned state for service provider - // provisioning state. - Provisioned ServiceProviderProvisioningState = "Provisioned" - // Provisioning specifies the provisioning state for service provider - // provisioning state. - Provisioning ServiceProviderProvisioningState = "Provisioning" -) - -// TransportProtocol enumerates the values for transport protocol. -type TransportProtocol string - -const ( - // TransportProtocolTCP specifies the transport protocol tcp state for - // transport protocol. - TransportProtocolTCP TransportProtocol = "Tcp" - // TransportProtocolUDP specifies the transport protocol udp state for - // transport protocol. - TransportProtocolUDP TransportProtocol = "Udp" -) - -// VirtualNetworkGatewayConnectionStatus enumerates the values for virtual -// network gateway connection status. -type VirtualNetworkGatewayConnectionStatus string - -const ( - // VirtualNetworkGatewayConnectionStatusConnected specifies the virtual - // network gateway connection status connected state for virtual network - // gateway connection status. - VirtualNetworkGatewayConnectionStatusConnected VirtualNetworkGatewayConnectionStatus = "Connected" - // VirtualNetworkGatewayConnectionStatusConnecting specifies the virtual - // network gateway connection status connecting state for virtual network - // gateway connection status. - VirtualNetworkGatewayConnectionStatusConnecting VirtualNetworkGatewayConnectionStatus = "Connecting" - // VirtualNetworkGatewayConnectionStatusNotConnected specifies the virtual - // network gateway connection status not connected state for virtual - // network gateway connection status. - VirtualNetworkGatewayConnectionStatusNotConnected VirtualNetworkGatewayConnectionStatus = "NotConnected" - // VirtualNetworkGatewayConnectionStatusUnknown specifies the virtual - // network gateway connection status unknown state for virtual network - // gateway connection status. - VirtualNetworkGatewayConnectionStatusUnknown VirtualNetworkGatewayConnectionStatus = "Unknown" -) - -// VirtualNetworkGatewayConnectionType enumerates the values for virtual -// network gateway connection type. -type VirtualNetworkGatewayConnectionType string - -const ( - // ExpressRoute specifies the express route state for virtual network - // gateway connection type. - ExpressRoute VirtualNetworkGatewayConnectionType = "ExpressRoute" - // IPsec specifies the i psec state for virtual network gateway connection - // type. - IPsec VirtualNetworkGatewayConnectionType = "IPsec" - // Vnet2Vnet specifies the vnet 2 vnet state for virtual network gateway - // connection type. - Vnet2Vnet VirtualNetworkGatewayConnectionType = "Vnet2Vnet" - // VPNClient specifies the vpn client state for virtual network gateway - // connection type. - VPNClient VirtualNetworkGatewayConnectionType = "VPNClient" -) - -// VirtualNetworkGatewaySkuName enumerates the values for virtual network -// gateway sku name. 
-type VirtualNetworkGatewaySkuName string - -const ( - // VirtualNetworkGatewaySkuNameBasic specifies the virtual network gateway - // sku name basic state for virtual network gateway sku name. - VirtualNetworkGatewaySkuNameBasic VirtualNetworkGatewaySkuName = "Basic" - // VirtualNetworkGatewaySkuNameHighPerformance specifies the virtual - // network gateway sku name high performance state for virtual network - // gateway sku name. - VirtualNetworkGatewaySkuNameHighPerformance VirtualNetworkGatewaySkuName = "HighPerformance" - // VirtualNetworkGatewaySkuNameStandard specifies the virtual network - // gateway sku name standard state for virtual network gateway sku name. - VirtualNetworkGatewaySkuNameStandard VirtualNetworkGatewaySkuName = "Standard" - // VirtualNetworkGatewaySkuNameUltraPerformance specifies the virtual - // network gateway sku name ultra performance state for virtual network - // gateway sku name. - VirtualNetworkGatewaySkuNameUltraPerformance VirtualNetworkGatewaySkuName = "UltraPerformance" - // VirtualNetworkGatewaySkuNameVpnGw1 specifies the virtual network gateway - // sku name vpn gw 1 state for virtual network gateway sku name. - VirtualNetworkGatewaySkuNameVpnGw1 VirtualNetworkGatewaySkuName = "VpnGw1" - // VirtualNetworkGatewaySkuNameVpnGw2 specifies the virtual network gateway - // sku name vpn gw 2 state for virtual network gateway sku name. - VirtualNetworkGatewaySkuNameVpnGw2 VirtualNetworkGatewaySkuName = "VpnGw2" - // VirtualNetworkGatewaySkuNameVpnGw3 specifies the virtual network gateway - // sku name vpn gw 3 state for virtual network gateway sku name. - VirtualNetworkGatewaySkuNameVpnGw3 VirtualNetworkGatewaySkuName = "VpnGw3" -) - -// VirtualNetworkGatewaySkuTier enumerates the values for virtual network -// gateway sku tier. -type VirtualNetworkGatewaySkuTier string - -const ( - // VirtualNetworkGatewaySkuTierBasic specifies the virtual network gateway - // sku tier basic state for virtual network gateway sku tier. - VirtualNetworkGatewaySkuTierBasic VirtualNetworkGatewaySkuTier = "Basic" - // VirtualNetworkGatewaySkuTierHighPerformance specifies the virtual - // network gateway sku tier high performance state for virtual network - // gateway sku tier. - VirtualNetworkGatewaySkuTierHighPerformance VirtualNetworkGatewaySkuTier = "HighPerformance" - // VirtualNetworkGatewaySkuTierStandard specifies the virtual network - // gateway sku tier standard state for virtual network gateway sku tier. - VirtualNetworkGatewaySkuTierStandard VirtualNetworkGatewaySkuTier = "Standard" - // VirtualNetworkGatewaySkuTierUltraPerformance specifies the virtual - // network gateway sku tier ultra performance state for virtual network - // gateway sku tier. - VirtualNetworkGatewaySkuTierUltraPerformance VirtualNetworkGatewaySkuTier = "UltraPerformance" - // VirtualNetworkGatewaySkuTierVpnGw1 specifies the virtual network gateway - // sku tier vpn gw 1 state for virtual network gateway sku tier. - VirtualNetworkGatewaySkuTierVpnGw1 VirtualNetworkGatewaySkuTier = "VpnGw1" - // VirtualNetworkGatewaySkuTierVpnGw2 specifies the virtual network gateway - // sku tier vpn gw 2 state for virtual network gateway sku tier. - VirtualNetworkGatewaySkuTierVpnGw2 VirtualNetworkGatewaySkuTier = "VpnGw2" - // VirtualNetworkGatewaySkuTierVpnGw3 specifies the virtual network gateway - // sku tier vpn gw 3 state for virtual network gateway sku tier. 
- VirtualNetworkGatewaySkuTierVpnGw3 VirtualNetworkGatewaySkuTier = "VpnGw3" -) - -// VirtualNetworkGatewayType enumerates the values for virtual network gateway -// type. -type VirtualNetworkGatewayType string - -const ( - // VirtualNetworkGatewayTypeExpressRoute specifies the virtual network - // gateway type express route state for virtual network gateway type. - VirtualNetworkGatewayTypeExpressRoute VirtualNetworkGatewayType = "ExpressRoute" - // VirtualNetworkGatewayTypeVpn specifies the virtual network gateway type - // vpn state for virtual network gateway type. - VirtualNetworkGatewayTypeVpn VirtualNetworkGatewayType = "Vpn" -) - -// VirtualNetworkPeeringState enumerates the values for virtual network peering -// state. -type VirtualNetworkPeeringState string - -const ( - // Connected specifies the connected state for virtual network peering - // state. - Connected VirtualNetworkPeeringState = "Connected" - // Disconnected specifies the disconnected state for virtual network - // peering state. - Disconnected VirtualNetworkPeeringState = "Disconnected" - // Initiated specifies the initiated state for virtual network peering - // state. - Initiated VirtualNetworkPeeringState = "Initiated" -) - -// VpnType enumerates the values for vpn type. -type VpnType string - -const ( - // PolicyBased specifies the policy based state for vpn type. - PolicyBased VpnType = "PolicyBased" - // RouteBased specifies the route based state for vpn type. - RouteBased VpnType = "RouteBased" -) - -// AddressSpace is addressSpace contains an array of IP address ranges that can -// be used by subnets of the virtual network. -type AddressSpace struct { - AddressPrefixes *[]string `json:"addressPrefixes,omitempty"` -} - -// ApplicationGateway is application gateway resource -type ApplicationGateway struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *ApplicationGatewayPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayAuthenticationCertificate is authentication certificates -// of an application gateway. -type ApplicationGatewayAuthenticationCertificate struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayAuthenticationCertificatePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayAuthenticationCertificatePropertiesFormat is -// authentication certificates properties of an application gateway. -type ApplicationGatewayAuthenticationCertificatePropertiesFormat struct { - Data *string `json:"data,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayAvailableWafRuleSetsResult is response for -// ApplicationGatewayAvailableWafRuleSets API service call. -type ApplicationGatewayAvailableWafRuleSetsResult struct { - autorest.Response `json:"-"` - Value *[]ApplicationGatewayFirewallRuleSet `json:"value,omitempty"` -} - -// ApplicationGatewayBackendAddress is backend address of an application -// gateway. -type ApplicationGatewayBackendAddress struct { - Fqdn *string `json:"fqdn,omitempty"` - IPAddress *string `json:"ipAddress,omitempty"` -} - -// ApplicationGatewayBackendAddressPool is backend Address Pool of an -// application gateway. 
-type ApplicationGatewayBackendAddressPool struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayBackendAddressPoolPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayBackendAddressPoolPropertiesFormat is properties of -// Backend Address Pool of an application gateway. -type ApplicationGatewayBackendAddressPoolPropertiesFormat struct { - BackendIPConfigurations *[]InterfaceIPConfiguration `json:"backendIPConfigurations,omitempty"` - BackendAddresses *[]ApplicationGatewayBackendAddress `json:"backendAddresses,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayBackendHealth is list of -// ApplicationGatewayBackendHealthPool resources. -type ApplicationGatewayBackendHealth struct { - autorest.Response `json:"-"` - BackendAddressPools *[]ApplicationGatewayBackendHealthPool `json:"backendAddressPools,omitempty"` -} - -// ApplicationGatewayBackendHealthHTTPSettings is application gateway -// BackendHealthHttp settings. -type ApplicationGatewayBackendHealthHTTPSettings struct { - BackendHTTPSettings *ApplicationGatewayBackendHTTPSettings `json:"backendHttpSettings,omitempty"` - Servers *[]ApplicationGatewayBackendHealthServer `json:"servers,omitempty"` -} - -// ApplicationGatewayBackendHealthPool is application gateway BackendHealth -// pool. -type ApplicationGatewayBackendHealthPool struct { - BackendAddressPool *ApplicationGatewayBackendAddressPool `json:"backendAddressPool,omitempty"` - BackendHTTPSettingsCollection *[]ApplicationGatewayBackendHealthHTTPSettings `json:"backendHttpSettingsCollection,omitempty"` -} - -// ApplicationGatewayBackendHealthServer is application gateway backendhealth -// http settings. -type ApplicationGatewayBackendHealthServer struct { - Address *string `json:"address,omitempty"` - IPConfiguration *SubResource `json:"ipConfiguration,omitempty"` - Health ApplicationGatewayBackendHealthServerHealth `json:"health,omitempty"` -} - -// ApplicationGatewayBackendHTTPSettings is backend address pool settings of an -// application gateway. -type ApplicationGatewayBackendHTTPSettings struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayBackendHTTPSettingsPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayBackendHTTPSettingsPropertiesFormat is properties of -// Backend address pool settings of an application gateway. -type ApplicationGatewayBackendHTTPSettingsPropertiesFormat struct { - Port *int32 `json:"port,omitempty"` - Protocol ApplicationGatewayProtocol `json:"protocol,omitempty"` - CookieBasedAffinity ApplicationGatewayCookieBasedAffinity `json:"cookieBasedAffinity,omitempty"` - RequestTimeout *int32 `json:"requestTimeout,omitempty"` - Probe *SubResource `json:"probe,omitempty"` - AuthenticationCertificates *[]SubResource `json:"authenticationCertificates,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` - ConnectionDraining *ApplicationGatewayConnectionDraining `json:"connectionDraining,omitempty"` -} - -// ApplicationGatewayConnectionDraining is connection draining allows open -// connections to a backend server to be active for a specified time after the -// backend server got removed from the configuration. 
-type ApplicationGatewayConnectionDraining struct { - Enabled *bool `json:"enabled,omitempty"` - DrainTimeoutInSec *int32 `json:"drainTimeoutInSec,omitempty"` -} - -// ApplicationGatewayFirewallDisabledRuleGroup is allows to disable rules -// within a rule group or an entire rule group. -type ApplicationGatewayFirewallDisabledRuleGroup struct { - RuleGroupName *string `json:"ruleGroupName,omitempty"` - Rules *[]int32 `json:"rules,omitempty"` -} - -// ApplicationGatewayFirewallRule is a web application firewall rule. -type ApplicationGatewayFirewallRule struct { - RuleID *int32 `json:"ruleId,omitempty"` - Description *string `json:"description,omitempty"` -} - -// ApplicationGatewayFirewallRuleGroup is a web application firewall rule -// group. -type ApplicationGatewayFirewallRuleGroup struct { - RuleGroupName *string `json:"ruleGroupName,omitempty"` - Description *string `json:"description,omitempty"` - Rules *[]ApplicationGatewayFirewallRule `json:"rules,omitempty"` -} - -// ApplicationGatewayFirewallRuleSet is a web application firewall rule set. -type ApplicationGatewayFirewallRuleSet struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *ApplicationGatewayFirewallRuleSetPropertiesFormat `json:"properties,omitempty"` -} - -// ApplicationGatewayFirewallRuleSetPropertiesFormat is properties of the web -// application firewall rule set. -type ApplicationGatewayFirewallRuleSetPropertiesFormat struct { - ProvisioningState *string `json:"provisioningState,omitempty"` - RuleSetType *string `json:"ruleSetType,omitempty"` - RuleSetVersion *string `json:"ruleSetVersion,omitempty"` - RuleGroups *[]ApplicationGatewayFirewallRuleGroup `json:"ruleGroups,omitempty"` -} - -// ApplicationGatewayFrontendIPConfiguration is frontend IP configuration of an -// application gateway. -type ApplicationGatewayFrontendIPConfiguration struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayFrontendIPConfigurationPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayFrontendIPConfigurationPropertiesFormat is properties of -// Frontend IP configuration of an application gateway. -type ApplicationGatewayFrontendIPConfigurationPropertiesFormat struct { - PrivateIPAddress *string `json:"privateIPAddress,omitempty"` - PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` - Subnet *SubResource `json:"subnet,omitempty"` - PublicIPAddress *SubResource `json:"publicIPAddress,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayFrontendPort is frontend port of an application gateway. -type ApplicationGatewayFrontendPort struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayFrontendPortPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayFrontendPortPropertiesFormat is properties of Frontend -// port of an application gateway. -type ApplicationGatewayFrontendPortPropertiesFormat struct { - Port *int32 `json:"port,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayHTTPListener is http listener of an application gateway. 
-type ApplicationGatewayHTTPListener struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayHTTPListenerPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayHTTPListenerPropertiesFormat is properties of HTTP -// listener of an application gateway. -type ApplicationGatewayHTTPListenerPropertiesFormat struct { - FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` - FrontendPort *SubResource `json:"frontendPort,omitempty"` - Protocol ApplicationGatewayProtocol `json:"protocol,omitempty"` - HostName *string `json:"hostName,omitempty"` - SslCertificate *SubResource `json:"sslCertificate,omitempty"` - RequireServerNameIndication *bool `json:"requireServerNameIndication,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayIPConfiguration is iP configuration of an application -// gateway. Currently 1 public and 1 private IP configuration is allowed. -type ApplicationGatewayIPConfiguration struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayIPConfigurationPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayIPConfigurationPropertiesFormat is properties of IP -// configuration of an application gateway. -type ApplicationGatewayIPConfigurationPropertiesFormat struct { - Subnet *SubResource `json:"subnet,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayListResult is response for ListApplicationGateways API -// service call. -type ApplicationGatewayListResult struct { - autorest.Response `json:"-"` - Value *[]ApplicationGateway `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ApplicationGatewayListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ApplicationGatewayListResult) ApplicationGatewayListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ApplicationGatewayPathRule is path rule of URL path map of an application -// gateway. -type ApplicationGatewayPathRule struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayPathRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayPathRulePropertiesFormat is properties of probe of an -// application gateway. -type ApplicationGatewayPathRulePropertiesFormat struct { - Paths *[]string `json:"paths,omitempty"` - BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` - BackendHTTPSettings *SubResource `json:"backendHttpSettings,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayProbe is probe of the application gateway. -type ApplicationGatewayProbe struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayProbePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayProbePropertiesFormat is properties of probe of an -// application gateway. 
-type ApplicationGatewayProbePropertiesFormat struct { - Protocol ApplicationGatewayProtocol `json:"protocol,omitempty"` - Host *string `json:"host,omitempty"` - Path *string `json:"path,omitempty"` - Interval *int32 `json:"interval,omitempty"` - Timeout *int32 `json:"timeout,omitempty"` - UnhealthyThreshold *int32 `json:"unhealthyThreshold,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayPropertiesFormat is properties of the application gateway. -type ApplicationGatewayPropertiesFormat struct { - Sku *ApplicationGatewaySku `json:"sku,omitempty"` - SslPolicy *ApplicationGatewaySslPolicy `json:"sslPolicy,omitempty"` - OperationalState ApplicationGatewayOperationalState `json:"operationalState,omitempty"` - GatewayIPConfigurations *[]ApplicationGatewayIPConfiguration `json:"gatewayIPConfigurations,omitempty"` - AuthenticationCertificates *[]ApplicationGatewayAuthenticationCertificate `json:"authenticationCertificates,omitempty"` - SslCertificates *[]ApplicationGatewaySslCertificate `json:"sslCertificates,omitempty"` - FrontendIPConfigurations *[]ApplicationGatewayFrontendIPConfiguration `json:"frontendIPConfigurations,omitempty"` - FrontendPorts *[]ApplicationGatewayFrontendPort `json:"frontendPorts,omitempty"` - Probes *[]ApplicationGatewayProbe `json:"probes,omitempty"` - BackendAddressPools *[]ApplicationGatewayBackendAddressPool `json:"backendAddressPools,omitempty"` - BackendHTTPSettingsCollection *[]ApplicationGatewayBackendHTTPSettings `json:"backendHttpSettingsCollection,omitempty"` - HTTPListeners *[]ApplicationGatewayHTTPListener `json:"httpListeners,omitempty"` - URLPathMaps *[]ApplicationGatewayURLPathMap `json:"urlPathMaps,omitempty"` - RequestRoutingRules *[]ApplicationGatewayRequestRoutingRule `json:"requestRoutingRules,omitempty"` - WebApplicationFirewallConfiguration *ApplicationGatewayWebApplicationFirewallConfiguration `json:"webApplicationFirewallConfiguration,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayRequestRoutingRule is request routing rule of an -// application gateway. -type ApplicationGatewayRequestRoutingRule struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayRequestRoutingRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayRequestRoutingRulePropertiesFormat is properties of -// request routing rule of the application gateway. -type ApplicationGatewayRequestRoutingRulePropertiesFormat struct { - RuleType ApplicationGatewayRequestRoutingRuleType `json:"ruleType,omitempty"` - BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` - BackendHTTPSettings *SubResource `json:"backendHttpSettings,omitempty"` - HTTPListener *SubResource `json:"httpListener,omitempty"` - URLPathMap *SubResource `json:"urlPathMap,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewaySku is sKU of an application gateway -type ApplicationGatewaySku struct { - Name ApplicationGatewaySkuName `json:"name,omitempty"` - Tier ApplicationGatewayTier `json:"tier,omitempty"` - Capacity *int32 `json:"capacity,omitempty"` -} - -// ApplicationGatewaySslCertificate is sSL certificates of an application -// gateway. 
-type ApplicationGatewaySslCertificate struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewaySslCertificatePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewaySslCertificatePropertiesFormat is properties of SSL -// certificates of an application gateway. -type ApplicationGatewaySslCertificatePropertiesFormat struct { - Data *string `json:"data,omitempty"` - Password *string `json:"password,omitempty"` - PublicCertData *string `json:"publicCertData,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewaySslPolicy is application gateway SSL policy. -type ApplicationGatewaySslPolicy struct { - DisabledSslProtocols *[]ApplicationGatewaySslProtocol `json:"disabledSslProtocols,omitempty"` -} - -// ApplicationGatewayURLPathMap is urlPathMaps give a url path to the backend -// mapping information for PathBasedRouting. -type ApplicationGatewayURLPathMap struct { - ID *string `json:"id,omitempty"` - *ApplicationGatewayURLPathMapPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ApplicationGatewayURLPathMapPropertiesFormat is properties of UrlPathMap of -// the application gateway. -type ApplicationGatewayURLPathMapPropertiesFormat struct { - DefaultBackendAddressPool *SubResource `json:"defaultBackendAddressPool,omitempty"` - DefaultBackendHTTPSettings *SubResource `json:"defaultBackendHttpSettings,omitempty"` - PathRules *[]ApplicationGatewayPathRule `json:"pathRules,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// ApplicationGatewayWebApplicationFirewallConfiguration is application gateway -// web application firewall configuration. -type ApplicationGatewayWebApplicationFirewallConfiguration struct { - Enabled *bool `json:"enabled,omitempty"` - FirewallMode ApplicationGatewayFirewallMode `json:"firewallMode,omitempty"` - RuleSetType *string `json:"ruleSetType,omitempty"` - RuleSetVersion *string `json:"ruleSetVersion,omitempty"` - DisabledRuleGroups *[]ApplicationGatewayFirewallDisabledRuleGroup `json:"disabledRuleGroups,omitempty"` -} - -// AuthorizationListResult is response for ListAuthorizations API service call -// retrieves all authorizations that belongs to an ExpressRouteCircuit. -type AuthorizationListResult struct { - autorest.Response `json:"-"` - Value *[]ExpressRouteCircuitAuthorization `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// AuthorizationListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client AuthorizationListResult) AuthorizationListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// AuthorizationPropertiesFormat is -type AuthorizationPropertiesFormat struct { - AuthorizationKey *string `json:"authorizationKey,omitempty"` - AuthorizationUseStatus AuthorizationUseStatus `json:"authorizationUseStatus,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// AzureAsyncOperationResult is the response body contains the status of the -// specified asynchronous operation, indicating whether it has succeeded, is in -// progress, or has failed. 
Note that this status is distinct from the HTTP -// status code returned for the Get Operation Status operation itself. If the -// asynchronous operation succeeded, the response body includes the HTTP status -// code for the successful request. If the asynchronous operation failed, the -// response body includes the HTTP status code for the failed request and error -// information regarding the failure. -type AzureAsyncOperationResult struct { - Status OperationStatus `json:"status,omitempty"` - Error *Error `json:"error,omitempty"` -} - -// BackendAddressPool is pool of backend IP addresses. -type BackendAddressPool struct { - ID *string `json:"id,omitempty"` - *BackendAddressPoolPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// BackendAddressPoolPropertiesFormat is properties of the backend address -// pool. -type BackendAddressPoolPropertiesFormat struct { - BackendIPConfigurations *[]InterfaceIPConfiguration `json:"backendIPConfigurations,omitempty"` - LoadBalancingRules *[]SubResource `json:"loadBalancingRules,omitempty"` - OutboundNatRule *SubResource `json:"outboundNatRule,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// BGPCommunity is contains bgp community information offered in Service -// Community resources. -type BGPCommunity struct { - ServiceSupportedRegion *string `json:"serviceSupportedRegion,omitempty"` - CommunityName *string `json:"communityName,omitempty"` - CommunityValue *string `json:"communityValue,omitempty"` - CommunityPrefixes *[]string `json:"communityPrefixes,omitempty"` -} - -// BgpPeerStatus is bGP peer status details -type BgpPeerStatus struct { - LocalAddress *string `json:"localAddress,omitempty"` - Neighbor *string `json:"neighbor,omitempty"` - Asn *int32 `json:"asn,omitempty"` - State BgpPeerState `json:"state,omitempty"` - ConnectedDuration *string `json:"connectedDuration,omitempty"` - RoutesReceived *int64 `json:"routesReceived,omitempty"` - MessagesSent *int64 `json:"messagesSent,omitempty"` - MessagesReceived *int64 `json:"messagesReceived,omitempty"` -} - -// BgpPeerStatusListResult is response for list BGP peer status API service -// call -type BgpPeerStatusListResult struct { - autorest.Response `json:"-"` - Value *[]BgpPeerStatus `json:"value,omitempty"` -} - -// BgpServiceCommunity is service Community Properties. -type BgpServiceCommunity struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *BgpServiceCommunityPropertiesFormat `json:"properties,omitempty"` -} - -// BgpServiceCommunityListResult is response for the ListServiceCommunity API -// service call. -type BgpServiceCommunityListResult struct { - autorest.Response `json:"-"` - Value *[]BgpServiceCommunity `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// BgpServiceCommunityListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client BgpServiceCommunityListResult) BgpServiceCommunityListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// BgpServiceCommunityPropertiesFormat is properties of Service Community. -type BgpServiceCommunityPropertiesFormat struct { - ServiceName *string `json:"serviceName,omitempty"` - BgpCommunities *[]BGPCommunity `json:"bgpCommunities,omitempty"` -} - -// BgpSettings is bGP settings details -type BgpSettings struct { - Asn *int64 `json:"asn,omitempty"` - BgpPeeringAddress *string `json:"bgpPeeringAddress,omitempty"` - PeerWeight *int32 `json:"peerWeight,omitempty"` -} - -// ConnectionResetSharedKey is the virtual network connection reset shared key -type ConnectionResetSharedKey struct { - autorest.Response `json:"-"` - KeyLength *int32 `json:"keyLength,omitempty"` -} - -// ConnectionSharedKey is response for GetConnectionSharedKey API service call -type ConnectionSharedKey struct { - autorest.Response `json:"-"` - Value *string `json:"value,omitempty"` -} - -// DhcpOptions is dhcpOptions contains an array of DNS servers available to VMs -// deployed in the virtual network. Standard DHCP option for a subnet overrides -// VNET DHCP options. -type DhcpOptions struct { - DNSServers *[]string `json:"dnsServers,omitempty"` -} - -// DNSNameAvailabilityResult is response for the CheckDnsNameAvailability API -// service call. -type DNSNameAvailabilityResult struct { - autorest.Response `json:"-"` - Available *bool `json:"available,omitempty"` -} - -// EffectiveNetworkSecurityGroup is effective network security group. -type EffectiveNetworkSecurityGroup struct { - NetworkSecurityGroup *SubResource `json:"networkSecurityGroup,omitempty"` - Association *EffectiveNetworkSecurityGroupAssociation `json:"association,omitempty"` - EffectiveSecurityRules *[]EffectiveNetworkSecurityRule `json:"effectiveSecurityRules,omitempty"` -} - -// EffectiveNetworkSecurityGroupAssociation is the effective network security -// group association. -type EffectiveNetworkSecurityGroupAssociation struct { - Subnet *SubResource `json:"subnet,omitempty"` - NetworkInterface *SubResource `json:"networkInterface,omitempty"` -} - -// EffectiveNetworkSecurityGroupListResult is response for list effective -// network security groups API service call. -type EffectiveNetworkSecurityGroupListResult struct { - autorest.Response `json:"-"` - Value *[]EffectiveNetworkSecurityGroup `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// EffectiveNetworkSecurityRule is effective network security rules. 
-type EffectiveNetworkSecurityRule struct { - Name *string `json:"name,omitempty"` - Protocol SecurityRuleProtocol `json:"protocol,omitempty"` - SourcePortRange *string `json:"sourcePortRange,omitempty"` - DestinationPortRange *string `json:"destinationPortRange,omitempty"` - SourceAddressPrefix *string `json:"sourceAddressPrefix,omitempty"` - DestinationAddressPrefix *string `json:"destinationAddressPrefix,omitempty"` - ExpandedSourceAddressPrefix *[]string `json:"expandedSourceAddressPrefix,omitempty"` - ExpandedDestinationAddressPrefix *[]string `json:"expandedDestinationAddressPrefix,omitempty"` - Access SecurityRuleAccess `json:"access,omitempty"` - Priority *int32 `json:"priority,omitempty"` - Direction SecurityRuleDirection `json:"direction,omitempty"` -} - -// EffectiveRoute is effective Route -type EffectiveRoute struct { - Name *string `json:"name,omitempty"` - Source EffectiveRouteSource `json:"source,omitempty"` - State EffectiveRouteState `json:"state,omitempty"` - AddressPrefix *[]string `json:"addressPrefix,omitempty"` - NextHopIPAddress *[]string `json:"nextHopIpAddress,omitempty"` - NextHopType RouteNextHopType `json:"nextHopType,omitempty"` -} - -// EffectiveRouteListResult is response for list effective route API service -// call. -type EffectiveRouteListResult struct { - autorest.Response `json:"-"` - Value *[]EffectiveRoute `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// Error is -type Error struct { - Code *string `json:"code,omitempty"` - Message *string `json:"message,omitempty"` - Target *string `json:"target,omitempty"` - Details *[]ErrorDetails `json:"details,omitempty"` - InnerError *string `json:"innerError,omitempty"` -} - -// ErrorDetails is -type ErrorDetails struct { - Code *string `json:"code,omitempty"` - Target *string `json:"target,omitempty"` - Message *string `json:"message,omitempty"` -} - -// ExpressRouteCircuit is expressRouteCircuit resource -type ExpressRouteCircuit struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - Sku *ExpressRouteCircuitSku `json:"sku,omitempty"` - *ExpressRouteCircuitPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ExpressRouteCircuitArpTable is the ARP table associated with the -// ExpressRouteCircuit. -type ExpressRouteCircuitArpTable struct { - Age *int32 `json:"age,omitempty"` - Interface *string `json:"interface,omitempty"` - IPAddress *string `json:"ipAddress,omitempty"` - MacAddress *string `json:"macAddress,omitempty"` -} - -// ExpressRouteCircuitAuthorization is authorization in an ExpressRouteCircuit -// resource. -type ExpressRouteCircuitAuthorization struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *AuthorizationPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ExpressRouteCircuitListResult is response for ListExpressRouteCircuit API -// service call. -type ExpressRouteCircuitListResult struct { - autorest.Response `json:"-"` - Value *[]ExpressRouteCircuit `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ExpressRouteCircuitListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client ExpressRouteCircuitListResult) ExpressRouteCircuitListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ExpressRouteCircuitPeering is peering in an ExpressRouteCircuit resource. -type ExpressRouteCircuitPeering struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *ExpressRouteCircuitPeeringPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ExpressRouteCircuitPeeringConfig is specifies the peering configuration. -type ExpressRouteCircuitPeeringConfig struct { - AdvertisedPublicPrefixes *[]string `json:"advertisedPublicPrefixes,omitempty"` - AdvertisedPublicPrefixesState ExpressRouteCircuitPeeringAdvertisedPublicPrefixState `json:"advertisedPublicPrefixesState,omitempty"` - CustomerASN *int32 `json:"customerASN,omitempty"` - RoutingRegistryName *string `json:"routingRegistryName,omitempty"` -} - -// ExpressRouteCircuitPeeringListResult is response for ListPeering API service -// call retrieves all peerings that belong to an ExpressRouteCircuit. -type ExpressRouteCircuitPeeringListResult struct { - autorest.Response `json:"-"` - Value *[]ExpressRouteCircuitPeering `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ExpressRouteCircuitPeeringListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ExpressRouteCircuitPeeringListResult) ExpressRouteCircuitPeeringListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ExpressRouteCircuitPeeringPropertiesFormat is -type ExpressRouteCircuitPeeringPropertiesFormat struct { - PeeringType ExpressRouteCircuitPeeringType `json:"peeringType,omitempty"` - State ExpressRouteCircuitPeeringState `json:"state,omitempty"` - AzureASN *int32 `json:"azureASN,omitempty"` - PeerASN *int32 `json:"peerASN,omitempty"` - PrimaryPeerAddressPrefix *string `json:"primaryPeerAddressPrefix,omitempty"` - SecondaryPeerAddressPrefix *string `json:"secondaryPeerAddressPrefix,omitempty"` - PrimaryAzurePort *string `json:"primaryAzurePort,omitempty"` - SecondaryAzurePort *string `json:"secondaryAzurePort,omitempty"` - SharedKey *string `json:"sharedKey,omitempty"` - VlanID *int32 `json:"vlanId,omitempty"` - MicrosoftPeeringConfig *ExpressRouteCircuitPeeringConfig `json:"microsoftPeeringConfig,omitempty"` - Stats *ExpressRouteCircuitStats `json:"stats,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` - GatewayManagerEtag *string `json:"gatewayManagerEtag,omitempty"` - LastModifiedBy *string `json:"lastModifiedBy,omitempty"` - RouteFilter *RouteFilter `json:"routeFilter,omitempty"` -} - -// ExpressRouteCircuitPropertiesFormat is properties of ExpressRouteCircuit. 
-type ExpressRouteCircuitPropertiesFormat struct { - AllowClassicOperations *bool `json:"allowClassicOperations,omitempty"` - CircuitProvisioningState *string `json:"circuitProvisioningState,omitempty"` - ServiceProviderProvisioningState ServiceProviderProvisioningState `json:"serviceProviderProvisioningState,omitempty"` - Authorizations *[]ExpressRouteCircuitAuthorization `json:"authorizations,omitempty"` - Peerings *[]ExpressRouteCircuitPeering `json:"peerings,omitempty"` - ServiceKey *string `json:"serviceKey,omitempty"` - ServiceProviderNotes *string `json:"serviceProviderNotes,omitempty"` - ServiceProviderProperties *ExpressRouteCircuitServiceProviderProperties `json:"serviceProviderProperties,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` - GatewayManagerEtag *string `json:"gatewayManagerEtag,omitempty"` -} - -// ExpressRouteCircuitRoutesTable is the routes table associated with the -// ExpressRouteCircuit -type ExpressRouteCircuitRoutesTable struct { - NetworkProperty *string `json:"network,omitempty"` - NextHop *string `json:"nextHop,omitempty"` - LocPrf *string `json:"locPrf,omitempty"` - Weight *int32 `json:"weight,omitempty"` - Path *string `json:"path,omitempty"` -} - -// ExpressRouteCircuitRoutesTableSummary is the routes table associated with -// the ExpressRouteCircuit. -type ExpressRouteCircuitRoutesTableSummary struct { - Neighbor *string `json:"neighbor,omitempty"` - V *int32 `json:"v,omitempty"` - As *int32 `json:"as,omitempty"` - UpDown *string `json:"upDown,omitempty"` - StatePfxRcd *string `json:"statePfxRcd,omitempty"` -} - -// ExpressRouteCircuitsArpTableListResult is response for ListArpTable -// associated with the Express Route Circuits API. -type ExpressRouteCircuitsArpTableListResult struct { - autorest.Response `json:"-"` - Value *[]ExpressRouteCircuitArpTable `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ExpressRouteCircuitServiceProviderProperties is contains -// ServiceProviderProperties in an ExpressRouteCircuit. -type ExpressRouteCircuitServiceProviderProperties struct { - ServiceProviderName *string `json:"serviceProviderName,omitempty"` - PeeringLocation *string `json:"peeringLocation,omitempty"` - BandwidthInMbps *int32 `json:"bandwidthInMbps,omitempty"` -} - -// ExpressRouteCircuitSku is contains SKU in an ExpressRouteCircuit. -type ExpressRouteCircuitSku struct { - Name *string `json:"name,omitempty"` - Tier ExpressRouteCircuitSkuTier `json:"tier,omitempty"` - Family ExpressRouteCircuitSkuFamily `json:"family,omitempty"` -} - -// ExpressRouteCircuitsRoutesTableListResult is response for ListRoutesTable -// associated with the Express Route Circuits API. -type ExpressRouteCircuitsRoutesTableListResult struct { - autorest.Response `json:"-"` - Value *[]ExpressRouteCircuitRoutesTable `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ExpressRouteCircuitsRoutesTableSummaryListResult is response for -// ListRoutesTable associated with the Express Route Circuits API. -type ExpressRouteCircuitsRoutesTableSummaryListResult struct { - autorest.Response `json:"-"` - Value *[]ExpressRouteCircuitRoutesTableSummary `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ExpressRouteCircuitStats is contains stats associated with the peering. 
-type ExpressRouteCircuitStats struct { - autorest.Response `json:"-"` - PrimarybytesIn *int64 `json:"primarybytesIn,omitempty"` - PrimarybytesOut *int64 `json:"primarybytesOut,omitempty"` - SecondarybytesIn *int64 `json:"secondarybytesIn,omitempty"` - SecondarybytesOut *int64 `json:"secondarybytesOut,omitempty"` -} - -// ExpressRouteServiceProvider is a ExpressRouteResourceProvider object. -type ExpressRouteServiceProvider struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *ExpressRouteServiceProviderPropertiesFormat `json:"properties,omitempty"` -} - -// ExpressRouteServiceProviderBandwidthsOffered is contains bandwidths offered -// in ExpressRouteServiceProvider resources. -type ExpressRouteServiceProviderBandwidthsOffered struct { - OfferName *string `json:"offerName,omitempty"` - ValueInMbps *int32 `json:"valueInMbps,omitempty"` -} - -// ExpressRouteServiceProviderListResult is response for the -// ListExpressRouteServiceProvider API service call. -type ExpressRouteServiceProviderListResult struct { - autorest.Response `json:"-"` - Value *[]ExpressRouteServiceProvider `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ExpressRouteServiceProviderListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ExpressRouteServiceProviderListResult) ExpressRouteServiceProviderListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ExpressRouteServiceProviderPropertiesFormat is properties of -// ExpressRouteServiceProvider. -type ExpressRouteServiceProviderPropertiesFormat struct { - PeeringLocations *[]string `json:"peeringLocations,omitempty"` - BandwidthsOffered *[]ExpressRouteServiceProviderBandwidthsOffered `json:"bandwidthsOffered,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// FlowLogInformation is information on the configuration of flow log. -type FlowLogInformation struct { - autorest.Response `json:"-"` - TargetResourceID *string `json:"targetResourceId,omitempty"` - *FlowLogProperties `json:"properties,omitempty"` -} - -// FlowLogProperties is parameters that define the configuration of flow log. -type FlowLogProperties struct { - StorageID *string `json:"storageId,omitempty"` - Enabled *bool `json:"enabled,omitempty"` - RetentionPolicy *RetentionPolicyParameters `json:"retentionPolicy,omitempty"` -} - -// FlowLogStatusParameters is parameters that define a resource to query flow -// log status. -type FlowLogStatusParameters struct { - TargetResourceID *string `json:"targetResourceId,omitempty"` -} - -// FrontendIPConfiguration is frontend IP address of the load balancer. -type FrontendIPConfiguration struct { - ID *string `json:"id,omitempty"` - *FrontendIPConfigurationPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// FrontendIPConfigurationPropertiesFormat is properties of Frontend IP -// Configuration of the load balancer. 
-type FrontendIPConfigurationPropertiesFormat struct { - InboundNatRules *[]SubResource `json:"inboundNatRules,omitempty"` - InboundNatPools *[]SubResource `json:"inboundNatPools,omitempty"` - OutboundNatRules *[]SubResource `json:"outboundNatRules,omitempty"` - LoadBalancingRules *[]SubResource `json:"loadBalancingRules,omitempty"` - PrivateIPAddress *string `json:"privateIPAddress,omitempty"` - PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` - Subnet *Subnet `json:"subnet,omitempty"` - PublicIPAddress *PublicIPAddress `json:"publicIPAddress,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// GatewayRoute is gateway routing details -type GatewayRoute struct { - LocalAddress *string `json:"localAddress,omitempty"` - NetworkProperty *string `json:"network,omitempty"` - NextHop *string `json:"nextHop,omitempty"` - SourcePeer *string `json:"sourcePeer,omitempty"` - Origin *string `json:"origin,omitempty"` - AsPath *string `json:"asPath,omitempty"` - Weight *int32 `json:"weight,omitempty"` -} - -// GatewayRouteListResult is list of virtual network gateway routes -type GatewayRouteListResult struct { - autorest.Response `json:"-"` - Value *[]GatewayRoute `json:"value,omitempty"` -} - -// InboundNatPool is inbound NAT pool of the load balancer. -type InboundNatPool struct { - ID *string `json:"id,omitempty"` - *InboundNatPoolPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// InboundNatPoolPropertiesFormat is properties of Inbound NAT pool. -type InboundNatPoolPropertiesFormat struct { - FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` - Protocol TransportProtocol `json:"protocol,omitempty"` - FrontendPortRangeStart *int32 `json:"frontendPortRangeStart,omitempty"` - FrontendPortRangeEnd *int32 `json:"frontendPortRangeEnd,omitempty"` - BackendPort *int32 `json:"backendPort,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// InboundNatRule is inbound NAT rule of the load balancer. -type InboundNatRule struct { - ID *string `json:"id,omitempty"` - *InboundNatRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// InboundNatRulePropertiesFormat is properties of the inbound NAT rule. -type InboundNatRulePropertiesFormat struct { - FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` - BackendIPConfiguration *InterfaceIPConfiguration `json:"backendIPConfiguration,omitempty"` - Protocol TransportProtocol `json:"protocol,omitempty"` - FrontendPort *int32 `json:"frontendPort,omitempty"` - BackendPort *int32 `json:"backendPort,omitempty"` - IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` - EnableFloatingIP *bool `json:"enableFloatingIP,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// Interface is a network interface in a resource group. -type Interface struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *InterfacePropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// InterfaceAssociation is network interface and its custom security rules. 
-type InterfaceAssociation struct { - ID *string `json:"id,omitempty"` - SecurityRules *[]SecurityRule `json:"securityRules,omitempty"` -} - -// InterfaceDNSSettings is dNS settings of a network interface. -type InterfaceDNSSettings struct { - DNSServers *[]string `json:"dnsServers,omitempty"` - AppliedDNSServers *[]string `json:"appliedDnsServers,omitempty"` - InternalDNSNameLabel *string `json:"internalDnsNameLabel,omitempty"` - InternalFqdn *string `json:"internalFqdn,omitempty"` - InternalDomainNameSuffix *string `json:"internalDomainNameSuffix,omitempty"` -} - -// InterfaceIPConfiguration is iPConfiguration in a network interface. -type InterfaceIPConfiguration struct { - ID *string `json:"id,omitempty"` - *InterfaceIPConfigurationPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// InterfaceIPConfigurationPropertiesFormat is properties of IP configuration. -type InterfaceIPConfigurationPropertiesFormat struct { - ApplicationGatewayBackendAddressPools *[]ApplicationGatewayBackendAddressPool `json:"applicationGatewayBackendAddressPools,omitempty"` - LoadBalancerBackendAddressPools *[]BackendAddressPool `json:"loadBalancerBackendAddressPools,omitempty"` - LoadBalancerInboundNatRules *[]InboundNatRule `json:"loadBalancerInboundNatRules,omitempty"` - PrivateIPAddress *string `json:"privateIPAddress,omitempty"` - PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` - PrivateIPAddressVersion IPVersion `json:"privateIPAddressVersion,omitempty"` - Subnet *Subnet `json:"subnet,omitempty"` - Primary *bool `json:"primary,omitempty"` - PublicIPAddress *PublicIPAddress `json:"publicIPAddress,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// InterfaceListResult is response for the ListNetworkInterface API service -// call. -type InterfaceListResult struct { - autorest.Response `json:"-"` - Value *[]Interface `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// InterfaceListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client InterfaceListResult) InterfaceListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// InterfacePropertiesFormat is networkInterface properties. 
-type InterfacePropertiesFormat struct { - VirtualMachine *SubResource `json:"virtualMachine,omitempty"` - NetworkSecurityGroup *SecurityGroup `json:"networkSecurityGroup,omitempty"` - IPConfigurations *[]InterfaceIPConfiguration `json:"ipConfigurations,omitempty"` - DNSSettings *InterfaceDNSSettings `json:"dnsSettings,omitempty"` - MacAddress *string `json:"macAddress,omitempty"` - Primary *bool `json:"primary,omitempty"` - EnableAcceleratedNetworking *bool `json:"enableAcceleratedNetworking,omitempty"` - EnableIPForwarding *bool `json:"enableIPForwarding,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// IPAddressAvailabilityResult is response for CheckIPAddressAvailability API -// service call -type IPAddressAvailabilityResult struct { - autorest.Response `json:"-"` - Available *bool `json:"available,omitempty"` - AvailableIPAddresses *[]string `json:"availableIPAddresses,omitempty"` -} - -// IPConfiguration is iPConfiguration -type IPConfiguration struct { - ID *string `json:"id,omitempty"` - *IPConfigurationPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// IPConfigurationPropertiesFormat is properties of IP configuration. -type IPConfigurationPropertiesFormat struct { - PrivateIPAddress *string `json:"privateIPAddress,omitempty"` - PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` - Subnet *Subnet `json:"subnet,omitempty"` - PublicIPAddress *PublicIPAddress `json:"publicIPAddress,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// IpsecPolicy is an IPSec Policy configuration for a virtual network gateway -// connection -type IpsecPolicy struct { - SaLifeTimeSeconds *int32 `json:"saLifeTimeSeconds,omitempty"` - SaDataSizeKilobytes *int32 `json:"saDataSizeKilobytes,omitempty"` - IpsecEncryption IpsecEncryption `json:"ipsecEncryption,omitempty"` - IpsecIntegrity IpsecIntegrity `json:"ipsecIntegrity,omitempty"` - IkeEncryption IkeEncryption `json:"ikeEncryption,omitempty"` - IkeIntegrity IkeIntegrity `json:"ikeIntegrity,omitempty"` - DhGroup DhGroup `json:"dhGroup,omitempty"` - PfsGroup PfsGroup `json:"pfsGroup,omitempty"` -} - -// LoadBalancer is loadBalancer resource -type LoadBalancer struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *LoadBalancerPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// LoadBalancerListResult is response for ListLoadBalancers API service call. -type LoadBalancerListResult struct { - autorest.Response `json:"-"` - Value *[]LoadBalancer `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// LoadBalancerListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client LoadBalancerListResult) LoadBalancerListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// LoadBalancerPropertiesFormat is properties of the load balancer. 
-type LoadBalancerPropertiesFormat struct { - FrontendIPConfigurations *[]FrontendIPConfiguration `json:"frontendIPConfigurations,omitempty"` - BackendAddressPools *[]BackendAddressPool `json:"backendAddressPools,omitempty"` - LoadBalancingRules *[]LoadBalancingRule `json:"loadBalancingRules,omitempty"` - Probes *[]Probe `json:"probes,omitempty"` - InboundNatRules *[]InboundNatRule `json:"inboundNatRules,omitempty"` - InboundNatPools *[]InboundNatPool `json:"inboundNatPools,omitempty"` - OutboundNatRules *[]OutboundNatRule `json:"outboundNatRules,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// LoadBalancingRule is a loag balancing rule for a load balancer. -type LoadBalancingRule struct { - ID *string `json:"id,omitempty"` - *LoadBalancingRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// LoadBalancingRulePropertiesFormat is properties of the load balancer. -type LoadBalancingRulePropertiesFormat struct { - FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` - BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` - Probe *SubResource `json:"probe,omitempty"` - Protocol TransportProtocol `json:"protocol,omitempty"` - LoadDistribution LoadDistribution `json:"loadDistribution,omitempty"` - FrontendPort *int32 `json:"frontendPort,omitempty"` - BackendPort *int32 `json:"backendPort,omitempty"` - IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` - EnableFloatingIP *bool `json:"enableFloatingIP,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// LocalNetworkGateway is a common class for general resource information -type LocalNetworkGateway struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *LocalNetworkGatewayPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// LocalNetworkGatewayListResult is response for ListLocalNetworkGateways API -// service call. -type LocalNetworkGatewayListResult struct { - autorest.Response `json:"-"` - Value *[]LocalNetworkGateway `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// LocalNetworkGatewayListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client LocalNetworkGatewayListResult) LocalNetworkGatewayListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// LocalNetworkGatewayPropertiesFormat is localNetworkGateway properties -type LocalNetworkGatewayPropertiesFormat struct { - LocalNetworkAddressSpace *AddressSpace `json:"localNetworkAddressSpace,omitempty"` - GatewayIPAddress *string `json:"gatewayIpAddress,omitempty"` - BgpSettings *BgpSettings `json:"bgpSettings,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// NextHopParameters is parameters that define the source and destination -// endpoint. 
-type NextHopParameters struct { - TargetResourceID *string `json:"targetResourceId,omitempty"` - SourceIPAddress *string `json:"sourceIPAddress,omitempty"` - DestinationIPAddress *string `json:"destinationIPAddress,omitempty"` - TargetNicResourceID *string `json:"targetNicResourceId,omitempty"` -} - -// NextHopResult is the information about next hop from the specified VM. -type NextHopResult struct { - autorest.Response `json:"-"` - NextHopType NextHopType `json:"nextHopType,omitempty"` - NextHopIPAddress *string `json:"nextHopIpAddress,omitempty"` - RouteTableID *string `json:"routeTableId,omitempty"` -} - -// OutboundNatRule is outbound NAT pool of the load balancer. -type OutboundNatRule struct { - ID *string `json:"id,omitempty"` - *OutboundNatRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// OutboundNatRulePropertiesFormat is outbound NAT pool of the load balancer. -type OutboundNatRulePropertiesFormat struct { - AllocatedOutboundPorts *int32 `json:"allocatedOutboundPorts,omitempty"` - FrontendIPConfigurations *[]SubResource `json:"frontendIPConfigurations,omitempty"` - BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// PacketCapture is parameters that define the create packet capture operation. -type PacketCapture struct { - *PacketCaptureParameters `json:"properties,omitempty"` -} - -// PacketCaptureFilter is filter that is applied to packet capture request. -// Multiple filters can be applied. -type PacketCaptureFilter struct { - Protocol PcProtocol `json:"protocol,omitempty"` - LocalIPAddress *string `json:"localIPAddress,omitempty"` - RemoteIPAddress *string `json:"remoteIPAddress,omitempty"` - LocalPort *string `json:"localPort,omitempty"` - RemotePort *string `json:"remotePort,omitempty"` -} - -// PacketCaptureListResult is list of packet capture sessions. -type PacketCaptureListResult struct { - autorest.Response `json:"-"` - Value *[]PacketCaptureResult `json:"value,omitempty"` -} - -// PacketCaptureParameters is parameters that define the create packet capture -// operation. -type PacketCaptureParameters struct { - Target *string `json:"target,omitempty"` - BytesToCapturePerPacket *int32 `json:"bytesToCapturePerPacket,omitempty"` - TotalBytesPerSession *int32 `json:"totalBytesPerSession,omitempty"` - TimeLimitInSeconds *int32 `json:"timeLimitInSeconds,omitempty"` - StorageLocation *PacketCaptureStorageLocation `json:"storageLocation,omitempty"` - Filters *[]PacketCaptureFilter `json:"filters,omitempty"` -} - -// PacketCaptureQueryStatusResult is status of packet capture session. -type PacketCaptureQueryStatusResult struct { - autorest.Response `json:"-"` - Name *string `json:"name,omitempty"` - ID *string `json:"id,omitempty"` - CaptureStartTime *date.Time `json:"captureStartTime,omitempty"` - PacketCaptureStatus PcStatus `json:"packetCaptureStatus,omitempty"` - StopReason *string `json:"stopReason,omitempty"` - PacketCaptureError *[]PcError `json:"packetCaptureError,omitempty"` -} - -// PacketCaptureResult is information about packet capture session. -type PacketCaptureResult struct { - autorest.Response `json:"-"` - Name *string `json:"name,omitempty"` - ID *string `json:"id,omitempty"` - Etag *string `json:"etag,omitempty"` - *PacketCaptureResultProperties `json:"properties,omitempty"` -} - -// PacketCaptureResultProperties is describes the properties of a packet -// capture session. 
-type PacketCaptureResultProperties struct { - Target *string `json:"target,omitempty"` - BytesToCapturePerPacket *int32 `json:"bytesToCapturePerPacket,omitempty"` - TotalBytesPerSession *int32 `json:"totalBytesPerSession,omitempty"` - TimeLimitInSeconds *int32 `json:"timeLimitInSeconds,omitempty"` - StorageLocation *PacketCaptureStorageLocation `json:"storageLocation,omitempty"` - Filters *[]PacketCaptureFilter `json:"filters,omitempty"` - ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` -} - -// PacketCaptureStorageLocation is describes the storage location for a packet -// capture session. -type PacketCaptureStorageLocation struct { - StorageID *string `json:"storageId,omitempty"` - StoragePath *string `json:"storagePath,omitempty"` - FilePath *string `json:"filePath,omitempty"` -} - -// PatchRouteFilter is route Filter Resource. -type PatchRouteFilter struct { - ID *string `json:"id,omitempty"` - *RouteFilterPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` - Type *string `json:"type,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// PatchRouteFilterRule is route Filter Rule Resource -type PatchRouteFilterRule struct { - ID *string `json:"id,omitempty"` - *RouteFilterRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// Probe is a load balancer probe. -type Probe struct { - ID *string `json:"id,omitempty"` - *ProbePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ProbePropertiesFormat is -type ProbePropertiesFormat struct { - LoadBalancingRules *[]SubResource `json:"loadBalancingRules,omitempty"` - Protocol ProbeProtocol `json:"protocol,omitempty"` - Port *int32 `json:"port,omitempty"` - IntervalInSeconds *int32 `json:"intervalInSeconds,omitempty"` - NumberOfProbes *int32 `json:"numberOfProbes,omitempty"` - RequestPath *string `json:"requestPath,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// PublicIPAddress is public IP address resource. -type PublicIPAddress struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *PublicIPAddressPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// PublicIPAddressDNSSettings is contains FQDN of the DNS record associated -// with the public IP address -type PublicIPAddressDNSSettings struct { - DomainNameLabel *string `json:"domainNameLabel,omitempty"` - Fqdn *string `json:"fqdn,omitempty"` - ReverseFqdn *string `json:"reverseFqdn,omitempty"` -} - -// PublicIPAddressListResult is response for ListPublicIpAddresses API service -// call. -type PublicIPAddressListResult struct { - autorest.Response `json:"-"` - Value *[]PublicIPAddress `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// PublicIPAddressListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client PublicIPAddressListResult) PublicIPAddressListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// PublicIPAddressPropertiesFormat is public IP address properties. -type PublicIPAddressPropertiesFormat struct { - PublicIPAllocationMethod IPAllocationMethod `json:"publicIPAllocationMethod,omitempty"` - PublicIPAddressVersion IPVersion `json:"publicIPAddressVersion,omitempty"` - IPConfiguration *IPConfiguration `json:"ipConfiguration,omitempty"` - DNSSettings *PublicIPAddressDNSSettings `json:"dnsSettings,omitempty"` - IPAddress *string `json:"ipAddress,omitempty"` - IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// QueryTroubleshootingParameters is parameters that define the resource to -// query the troubleshooting result. -type QueryTroubleshootingParameters struct { - TargetResourceID *string `json:"targetResourceId,omitempty"` -} - -// Resource is -type Resource struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// ResourceNavigationLink is resourceNavigationLink resource. -type ResourceNavigationLink struct { - ID *string `json:"id,omitempty"` - *ResourceNavigationLinkFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// ResourceNavigationLinkFormat is properties of ResourceNavigationLink. -type ResourceNavigationLinkFormat struct { - LinkedResourceType *string `json:"linkedResourceType,omitempty"` - Link *string `json:"link,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// RetentionPolicyParameters is parameters that define the retention policy for -// flow log. -type RetentionPolicyParameters struct { - Days *int32 `json:"days,omitempty"` - Enabled *bool `json:"enabled,omitempty"` -} - -// Route is route resource -type Route struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *RoutePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// RouteFilter is route Filter Resource. -type RouteFilter struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *RouteFilterPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// RouteFilterListResult is response for the ListRouteFilters API service call. -type RouteFilterListResult struct { - autorest.Response `json:"-"` - Value *[]RouteFilter `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// RouteFilterListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client RouteFilterListResult) RouteFilterListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// RouteFilterPropertiesFormat is route Filter Resource -type RouteFilterPropertiesFormat struct { - Rules *[]RouteFilterRule `json:"rules,omitempty"` - Peerings *[]ExpressRouteCircuitPeering `json:"peerings,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// RouteFilterRule is route Filter Rule Resource -type RouteFilterRule struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *RouteFilterRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Location *string `json:"location,omitempty"` - Etag *string `json:"etag,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// RouteFilterRuleListResult is response for the ListRouteFilterRules API -// service call -type RouteFilterRuleListResult struct { - autorest.Response `json:"-"` - Value *[]RouteFilterRule `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// RouteFilterRuleListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client RouteFilterRuleListResult) RouteFilterRuleListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// RouteFilterRulePropertiesFormat is route Filter Rule Resource -type RouteFilterRulePropertiesFormat struct { - Access Access `json:"access,omitempty"` - RouteFilterRuleType *string `json:"routeFilterRuleType,omitempty"` - Communities *[]string `json:"communities,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// RouteListResult is response for the ListRoute API service call -type RouteListResult struct { - autorest.Response `json:"-"` - Value *[]Route `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// RouteListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client RouteListResult) RouteListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// RoutePropertiesFormat is route resource -type RoutePropertiesFormat struct { - AddressPrefix *string `json:"addressPrefix,omitempty"` - NextHopType RouteNextHopType `json:"nextHopType,omitempty"` - NextHopIPAddress *string `json:"nextHopIpAddress,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// RouteTable is route table resource. 
-type RouteTable struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *RouteTablePropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// RouteTableListResult is response for the ListRouteTable API service call. -type RouteTableListResult struct { - autorest.Response `json:"-"` - Value *[]RouteTable `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// RouteTableListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client RouteTableListResult) RouteTableListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// RouteTablePropertiesFormat is route Table resource -type RouteTablePropertiesFormat struct { - Routes *[]Route `json:"routes,omitempty"` - Subnets *[]Subnet `json:"subnets,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// SecurityGroup is networkSecurityGroup resource. -type SecurityGroup struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *SecurityGroupPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// SecurityGroupListResult is response for ListNetworkSecurityGroups API -// service call. -type SecurityGroupListResult struct { - autorest.Response `json:"-"` - Value *[]SecurityGroup `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// SecurityGroupListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client SecurityGroupListResult) SecurityGroupListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// SecurityGroupNetworkInterface is network interface and all its associated -// security rules. -type SecurityGroupNetworkInterface struct { - ID *string `json:"id,omitempty"` - SecurityRuleAssociations *SecurityRuleAssociations `json:"securityRuleAssociations,omitempty"` -} - -// SecurityGroupPropertiesFormat is network Security Group resource. -type SecurityGroupPropertiesFormat struct { - SecurityRules *[]SecurityRule `json:"securityRules,omitempty"` - DefaultSecurityRules *[]SecurityRule `json:"defaultSecurityRules,omitempty"` - NetworkInterfaces *[]Interface `json:"networkInterfaces,omitempty"` - Subnets *[]Subnet `json:"subnets,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// SecurityGroupViewParameters is parameters that define the VM to check -// security groups for. 
-type SecurityGroupViewParameters struct { - TargetResourceID *string `json:"targetResourceId,omitempty"` -} - -// SecurityGroupViewResult is the information about security rules applied to -// the specified VM. -type SecurityGroupViewResult struct { - autorest.Response `json:"-"` - NetworkInterfaces *[]SecurityGroupNetworkInterface `json:"networkInterfaces,omitempty"` -} - -// SecurityRule is network security rule. -type SecurityRule struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *SecurityRulePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// SecurityRuleAssociations is all security rules associated with the network -// interface. -type SecurityRuleAssociations struct { - NetworkInterfaceAssociation *InterfaceAssociation `json:"networkInterfaceAssociation,omitempty"` - SubnetAssociation *SubnetAssociation `json:"subnetAssociation,omitempty"` - DefaultSecurityRules *[]SecurityRule `json:"defaultSecurityRules,omitempty"` - EffectiveSecurityRules *[]EffectiveNetworkSecurityRule `json:"effectiveSecurityRules,omitempty"` -} - -// SecurityRuleListResult is response for ListSecurityRule API service call. -// Retrieves all security rules that belongs to a network security group. -type SecurityRuleListResult struct { - autorest.Response `json:"-"` - Value *[]SecurityRule `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// SecurityRuleListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client SecurityRuleListResult) SecurityRuleListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// SecurityRulePropertiesFormat is -type SecurityRulePropertiesFormat struct { - Description *string `json:"description,omitempty"` - Protocol SecurityRuleProtocol `json:"protocol,omitempty"` - SourcePortRange *string `json:"sourcePortRange,omitempty"` - DestinationPortRange *string `json:"destinationPortRange,omitempty"` - SourceAddressPrefix *string `json:"sourceAddressPrefix,omitempty"` - DestinationAddressPrefix *string `json:"destinationAddressPrefix,omitempty"` - Access SecurityRuleAccess `json:"access,omitempty"` - Priority *int32 `json:"priority,omitempty"` - Direction SecurityRuleDirection `json:"direction,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// String is -type String struct { - autorest.Response `json:"-"` - Value *string `json:"value,omitempty"` -} - -// Subnet is subnet in a virtual network resource. -type Subnet struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *SubnetPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// SubnetAssociation is network interface and its custom security rules. 
-type SubnetAssociation struct { - ID *string `json:"id,omitempty"` - SecurityRules *[]SecurityRule `json:"securityRules,omitempty"` -} - -// SubnetListResult is response for ListSubnets API service callRetrieves all -// subnet that belongs to a virtual network -type SubnetListResult struct { - autorest.Response `json:"-"` - Value *[]Subnet `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// SubnetListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client SubnetListResult) SubnetListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// SubnetPropertiesFormat is -type SubnetPropertiesFormat struct { - AddressPrefix *string `json:"addressPrefix,omitempty"` - NetworkSecurityGroup *SecurityGroup `json:"networkSecurityGroup,omitempty"` - RouteTable *RouteTable `json:"routeTable,omitempty"` - IPConfigurations *[]IPConfiguration `json:"ipConfigurations,omitempty"` - ResourceNavigationLinks *[]ResourceNavigationLink `json:"resourceNavigationLinks,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// SubResource is -type SubResource struct { - ID *string `json:"id,omitempty"` -} - -// Topology is topology of the specified resource group. -type Topology struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - CreatedDateTime *date.Time `json:"createdDateTime,omitempty"` - LastModified *date.Time `json:"lastModified,omitempty"` - Resources *[]TopologyResource `json:"resources,omitempty"` -} - -// TopologyAssociation is resources that have an association with the parent -// resource. -type TopologyAssociation struct { - Name *string `json:"name,omitempty"` - ResourceID *string `json:"resourceId,omitempty"` - AssociationType AssociationType `json:"associationType,omitempty"` -} - -// TopologyParameters is parameters that define the representation of topology. -type TopologyParameters struct { - TargetResourceGroupName *string `json:"targetResourceGroupName,omitempty"` -} - -// TopologyResource is the network resource topology information for the given -// resource group. -type TopologyResource struct { - Name *string `json:"name,omitempty"` - ID *string `json:"id,omitempty"` - Location *string `json:"location,omitempty"` - Associations *[]TopologyAssociation `json:"associations,omitempty"` -} - -// TroubleshootingDetails is information gained from troubleshooting of -// specified resource. -type TroubleshootingDetails struct { - ID *string `json:"id,omitempty"` - ReasonType *string `json:"reasonType,omitempty"` - Summary *string `json:"summary,omitempty"` - Detail *string `json:"detail,omitempty"` - RecommendedActions *[]TroubleshootingRecommendedActions `json:"recommendedActions,omitempty"` -} - -// TroubleshootingParameters is parameters that define the resource to -// troubleshoot. -type TroubleshootingParameters struct { - TargetResourceID *string `json:"targetResourceId,omitempty"` - *TroubleshootingProperties `json:"properties,omitempty"` -} - -// TroubleshootingProperties is storage location provided for troubleshoot. 
-type TroubleshootingProperties struct { - StorageID *string `json:"storageId,omitempty"` - StoragePath *string `json:"storagePath,omitempty"` -} - -// TroubleshootingRecommendedActions is recommended actions based on discovered -// issues. -type TroubleshootingRecommendedActions struct { - ActionID *string `json:"actionId,omitempty"` - ActionText *string `json:"actionText,omitempty"` - ActionURI *string `json:"actionUri,omitempty"` - ActionURIText *string `json:"actionUriText,omitempty"` -} - -// TroubleshootingResult is troubleshooting information gained from specified -// resource. -type TroubleshootingResult struct { - autorest.Response `json:"-"` - StartTime *date.Time `json:"startTime,omitempty"` - EndTime *date.Time `json:"endTime,omitempty"` - Code *string `json:"code,omitempty"` - Results *[]TroubleshootingDetails `json:"results,omitempty"` -} - -// TunnelConnectionHealth is virtualNetworkGatewayConnection properties -type TunnelConnectionHealth struct { - Tunnel *string `json:"tunnel,omitempty"` - ConnectionStatus VirtualNetworkGatewayConnectionStatus `json:"connectionStatus,omitempty"` - IngressBytesTransferred *int64 `json:"ingressBytesTransferred,omitempty"` - EgressBytesTransferred *int64 `json:"egressBytesTransferred,omitempty"` - LastConnectionEstablishedUtcTime *string `json:"lastConnectionEstablishedUtcTime,omitempty"` -} - -// Usage is describes network resource usage. -type Usage struct { - Unit *string `json:"unit,omitempty"` - CurrentValue *int64 `json:"currentValue,omitempty"` - Limit *int64 `json:"limit,omitempty"` - Name *UsageName `json:"name,omitempty"` -} - -// UsageName is the usage names. -type UsageName struct { - Value *string `json:"value,omitempty"` - LocalizedValue *string `json:"localizedValue,omitempty"` -} - -// UsagesListResult is the list usages operation response. -type UsagesListResult struct { - autorest.Response `json:"-"` - Value *[]Usage `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// UsagesListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client UsagesListResult) UsagesListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VerificationIPFlowParameters is parameters that define the IP flow to be -// verified. -type VerificationIPFlowParameters struct { - TargetResourceID *string `json:"targetResourceId,omitempty"` - Direction Direction `json:"direction,omitempty"` - Protocol Protocol `json:"protocol,omitempty"` - LocalPort *string `json:"localPort,omitempty"` - RemotePort *string `json:"remotePort,omitempty"` - LocalIPAddress *string `json:"localIPAddress,omitempty"` - RemoteIPAddress *string `json:"remoteIPAddress,omitempty"` - TargetNicResourceID *string `json:"targetNicResourceId,omitempty"` -} - -// VerificationIPFlowResult is results of IP flow verification on the target -// resource. -type VerificationIPFlowResult struct { - autorest.Response `json:"-"` - Access Access `json:"access,omitempty"` - RuleName *string `json:"ruleName,omitempty"` -} - -// VirtualNetwork is virtual Network resource. 
-type VirtualNetwork struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *VirtualNetworkPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// VirtualNetworkGateway is a common class for general resource information -type VirtualNetworkGateway struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *VirtualNetworkGatewayPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// VirtualNetworkGatewayConnection is a common class for general resource -// information -type VirtualNetworkGatewayConnection struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *VirtualNetworkGatewayConnectionPropertiesFormat `json:"properties,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// VirtualNetworkGatewayConnectionListResult is response for the -// ListVirtualNetworkGatewayConnections API service call -type VirtualNetworkGatewayConnectionListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualNetworkGatewayConnection `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualNetworkGatewayConnectionListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client VirtualNetworkGatewayConnectionListResult) VirtualNetworkGatewayConnectionListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualNetworkGatewayConnectionPropertiesFormat is -// virtualNetworkGatewayConnection properties -type VirtualNetworkGatewayConnectionPropertiesFormat struct { - AuthorizationKey *string `json:"authorizationKey,omitempty"` - VirtualNetworkGateway1 *VirtualNetworkGateway `json:"virtualNetworkGateway1,omitempty"` - VirtualNetworkGateway2 *VirtualNetworkGateway `json:"virtualNetworkGateway2,omitempty"` - LocalNetworkGateway2 *LocalNetworkGateway `json:"localNetworkGateway2,omitempty"` - ConnectionType VirtualNetworkGatewayConnectionType `json:"connectionType,omitempty"` - RoutingWeight *int32 `json:"routingWeight,omitempty"` - SharedKey *string `json:"sharedKey,omitempty"` - ConnectionStatus VirtualNetworkGatewayConnectionStatus `json:"connectionStatus,omitempty"` - TunnelConnectionStatus *[]TunnelConnectionHealth `json:"tunnelConnectionStatus,omitempty"` - EgressBytesTransferred *int64 `json:"egressBytesTransferred,omitempty"` - IngressBytesTransferred *int64 `json:"ingressBytesTransferred,omitempty"` - Peer *SubResource `json:"peer,omitempty"` - EnableBgp *bool `json:"enableBgp,omitempty"` - UsePolicyBasedTrafficSelectors *bool `json:"usePolicyBasedTrafficSelectors,omitempty"` - IpsecPolicies *[]IpsecPolicy `json:"ipsecPolicies,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// VirtualNetworkGatewayIPConfiguration is iP configuration for virtual network -// gateway -type VirtualNetworkGatewayIPConfiguration struct { - ID *string `json:"id,omitempty"` - *VirtualNetworkGatewayIPConfigurationPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// VirtualNetworkGatewayIPConfigurationPropertiesFormat is properties of -// VirtualNetworkGatewayIPConfiguration -type VirtualNetworkGatewayIPConfigurationPropertiesFormat struct { - PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` - Subnet *SubResource `json:"subnet,omitempty"` - PublicIPAddress *SubResource `json:"publicIPAddress,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// VirtualNetworkGatewayListResult is response for the -// ListVirtualNetworkGateways API service call. -type VirtualNetworkGatewayListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualNetworkGateway `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualNetworkGatewayListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client VirtualNetworkGatewayListResult) VirtualNetworkGatewayListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualNetworkGatewayPropertiesFormat is virtualNetworkGateway properties -type VirtualNetworkGatewayPropertiesFormat struct { - IPConfigurations *[]VirtualNetworkGatewayIPConfiguration `json:"ipConfigurations,omitempty"` - GatewayType VirtualNetworkGatewayType `json:"gatewayType,omitempty"` - VpnType VpnType `json:"vpnType,omitempty"` - EnableBgp *bool `json:"enableBgp,omitempty"` - ActiveActive *bool `json:"activeActive,omitempty"` - GatewayDefaultSite *SubResource `json:"gatewayDefaultSite,omitempty"` - Sku *VirtualNetworkGatewaySku `json:"sku,omitempty"` - VpnClientConfiguration *VpnClientConfiguration `json:"vpnClientConfiguration,omitempty"` - BgpSettings *BgpSettings `json:"bgpSettings,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// VirtualNetworkGatewaySku is virtualNetworkGatewaySku details -type VirtualNetworkGatewaySku struct { - Name VirtualNetworkGatewaySkuName `json:"name,omitempty"` - Tier VirtualNetworkGatewaySkuTier `json:"tier,omitempty"` - Capacity *int32 `json:"capacity,omitempty"` -} - -// VirtualNetworkListResult is response for the ListVirtualNetworks API service -// call. -type VirtualNetworkListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualNetwork `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualNetworkListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client VirtualNetworkListResult) VirtualNetworkListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualNetworkPeering is peerings in a virtual network resource. -type VirtualNetworkPeering struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - *VirtualNetworkPeeringPropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// VirtualNetworkPeeringListResult is response for ListSubnets API service -// call. Retrieves all subnets that belong to a virtual network. -type VirtualNetworkPeeringListResult struct { - autorest.Response `json:"-"` - Value *[]VirtualNetworkPeering `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// VirtualNetworkPeeringListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client VirtualNetworkPeeringListResult) VirtualNetworkPeeringListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// VirtualNetworkPeeringPropertiesFormat is -type VirtualNetworkPeeringPropertiesFormat struct { - AllowVirtualNetworkAccess *bool `json:"allowVirtualNetworkAccess,omitempty"` - AllowForwardedTraffic *bool `json:"allowForwardedTraffic,omitempty"` - AllowGatewayTransit *bool `json:"allowGatewayTransit,omitempty"` - UseRemoteGateways *bool `json:"useRemoteGateways,omitempty"` - RemoteVirtualNetwork *SubResource `json:"remoteVirtualNetwork,omitempty"` - PeeringState VirtualNetworkPeeringState `json:"peeringState,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// VirtualNetworkPropertiesFormat is -type VirtualNetworkPropertiesFormat struct { - AddressSpace *AddressSpace `json:"addressSpace,omitempty"` - DhcpOptions *DhcpOptions `json:"dhcpOptions,omitempty"` - Subnets *[]Subnet `json:"subnets,omitempty"` - VirtualNetworkPeerings *[]VirtualNetworkPeering `json:"virtualNetworkPeerings,omitempty"` - ResourceGUID *string `json:"resourceGuid,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// VpnClientConfiguration is vpnClientConfiguration for P2S client. -type VpnClientConfiguration struct { - VpnClientAddressPool *AddressSpace `json:"vpnClientAddressPool,omitempty"` - VpnClientRootCertificates *[]VpnClientRootCertificate `json:"vpnClientRootCertificates,omitempty"` - VpnClientRevokedCertificates *[]VpnClientRevokedCertificate `json:"vpnClientRevokedCertificates,omitempty"` -} - -// VpnClientParameters is vpn Client Parameters for package generation -type VpnClientParameters struct { - ProcessorArchitecture ProcessorArchitecture `json:"processorArchitecture,omitempty"` -} - -// VpnClientRevokedCertificate is vPN client revoked certificate of virtual -// network gateway. -type VpnClientRevokedCertificate struct { - ID *string `json:"id,omitempty"` - *VpnClientRevokedCertificatePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// VpnClientRevokedCertificatePropertiesFormat is properties of the revoked VPN -// client certificate of virtual network gateway. -type VpnClientRevokedCertificatePropertiesFormat struct { - Thumbprint *string `json:"thumbprint,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// VpnClientRootCertificate is vPN client root certificate of virtual network -// gateway -type VpnClientRootCertificate struct { - ID *string `json:"id,omitempty"` - *VpnClientRootCertificatePropertiesFormat `json:"properties,omitempty"` - Name *string `json:"name,omitempty"` - Etag *string `json:"etag,omitempty"` -} - -// VpnClientRootCertificatePropertiesFormat is properties of SSL certificates -// of application gateway -type VpnClientRootCertificatePropertiesFormat struct { - PublicCertData *string `json:"publicCertData,omitempty"` - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// Watcher is network watcher in a resource group. 
-type Watcher struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - Etag *string `json:"etag,omitempty"` - *WatcherPropertiesFormat `json:"properties,omitempty"` -} - -// WatcherListResult is list of network watcher resources. -type WatcherListResult struct { - autorest.Response `json:"-"` - Value *[]Watcher `json:"value,omitempty"` -} - -// WatcherPropertiesFormat is the network watcher properties. -type WatcherPropertiesFormat struct { - ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/publicipaddresses.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/network/publicipaddresses.go deleted file mode 100644 index 896bc817d..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/publicipaddresses.go +++ /dev/null @@ -1,465 +0,0 @@ -package network - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" - "net/http" -) - -// PublicIPAddressesClient is the composite Swagger for Network Client -type PublicIPAddressesClient struct { - ManagementClient -} - -// NewPublicIPAddressesClient creates an instance of the -// PublicIPAddressesClient client. -func NewPublicIPAddressesClient(subscriptionID string) PublicIPAddressesClient { - return NewPublicIPAddressesClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewPublicIPAddressesClientWithBaseURI creates an instance of the -// PublicIPAddressesClient client. -func NewPublicIPAddressesClientWithBaseURI(baseURI string, subscriptionID string) PublicIPAddressesClient { - return PublicIPAddressesClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CreateOrUpdate creates or updates a static or dynamic public IP address. -// This method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. publicIPAddressName is -// the name of the public IP address. parameters is parameters supplied to the -// create or update public IP address operation. 
-func (client PublicIPAddressesClient) CreateOrUpdate(resourceGroupName string, publicIPAddressName string, parameters PublicIPAddress, cancel <-chan struct{}) (<-chan PublicIPAddress, <-chan error) { - resultChan := make(chan PublicIPAddress, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat.IPConfiguration", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat.IPConfiguration.IPConfigurationPropertiesFormat", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat.IPConfiguration.IPConfigurationPropertiesFormat.PublicIPAddress", Name: validation.Null, Rule: false, Chain: nil}}}, - }}, - }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.PublicIPAddressesClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result PublicIPAddress - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, publicIPAddressName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client PublicIPAddressesClient) CreateOrUpdatePreparer(resourceGroupName string, publicIPAddressName string, parameters PublicIPAddress, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "publicIpAddressName": autorest.Encode("path", publicIPAddressName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses/{publicIpAddressName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. 
-func (client PublicIPAddressesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client PublicIPAddressesClient) CreateOrUpdateResponder(resp *http.Response) (result PublicIPAddress, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes the specified public IP address. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. publicIPAddressName is -// the name of the subnet. -func (client PublicIPAddressesClient) Delete(resourceGroupName string, publicIPAddressName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, publicIPAddressName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client PublicIPAddressesClient) DeletePreparer(resourceGroupName string, publicIPAddressName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "publicIpAddressName": autorest.Encode("path", publicIPAddressName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses/{publicIpAddressName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client PublicIPAddressesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. 
The method always -// closes the http.Response Body. -func (client PublicIPAddressesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusAccepted, http.StatusOK), - autorest.ByClosing()) - result.Response = resp - return -} - -// Get gets the specified public IP address in a specified resource group. -// -// resourceGroupName is the name of the resource group. publicIPAddressName is -// the name of the subnet. expand is expands referenced resources. -func (client PublicIPAddressesClient) Get(resourceGroupName string, publicIPAddressName string, expand string) (result PublicIPAddress, err error) { - req, err := client.GetPreparer(resourceGroupName, publicIPAddressName, expand) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client PublicIPAddressesClient) GetPreparer(resourceGroupName string, publicIPAddressName string, expand string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "publicIpAddressName": autorest.Encode("path", publicIPAddressName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses/{publicIpAddressName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client PublicIPAddressesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client PublicIPAddressesClient) GetResponder(resp *http.Response) (result PublicIPAddress, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List gets all public IP addresses in a resource group. -// -// resourceGroupName is the name of the resource group. 
-func (client PublicIPAddressesClient) List(resourceGroupName string) (result PublicIPAddressListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. -func (client PublicIPAddressesClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client PublicIPAddressesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client PublicIPAddressesClient) ListResponder(resp *http.Response) (result PublicIPAddressListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. -func (client PublicIPAddressesClient) ListNextResults(lastResults PublicIPAddressListResult) (result PublicIPAddressListResult, err error) { - req, err := lastResults.PublicIPAddressListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// ListAll gets all the public IP addresses in a subscription. 
-func (client PublicIPAddressesClient) ListAll() (result PublicIPAddressListResult, err error) { - req, err := client.ListAllPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", nil, "Failure preparing request") - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", resp, "Failure sending request") - return - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", resp, "Failure responding to request") - } - - return -} - -// ListAllPreparer prepares the ListAll request. -func (client PublicIPAddressesClient) ListAllPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/publicIPAddresses", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAllSender sends the ListAll request. The method will close the -// http.Response Body if it receives an error. -func (client PublicIPAddressesClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAllResponder handles the response to the ListAll request. The method always -// closes the http.Response Body. -func (client PublicIPAddressesClient) ListAllResponder(resp *http.Response) (result PublicIPAddressListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListAllNextResults retrieves the next set of results, if any. -func (client PublicIPAddressesClient) ListAllNextResults(lastResults PublicIPAddressListResult) (result PublicIPAddressListResult, err error) { - req, err := lastResults.PublicIPAddressListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", resp, "Failure sending next results request") - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", resp, "Failure responding to next results request") - } - - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkgateways.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkgateways.go deleted file mode 100644 index 720978e83..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkgateways.go +++ /dev/null @@ -1,785 +0,0 @@ -package network - -// Copyright (c) Microsoft and contributors. 
All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" - "net/http" -) - -// VirtualNetworkGatewaysClient is the composite Swagger for Network Client -type VirtualNetworkGatewaysClient struct { - ManagementClient -} - -// NewVirtualNetworkGatewaysClient creates an instance of the -// VirtualNetworkGatewaysClient client. -func NewVirtualNetworkGatewaysClient(subscriptionID string) VirtualNetworkGatewaysClient { - return NewVirtualNetworkGatewaysClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewVirtualNetworkGatewaysClientWithBaseURI creates an instance of the -// VirtualNetworkGatewaysClient client. -func NewVirtualNetworkGatewaysClientWithBaseURI(baseURI string, subscriptionID string) VirtualNetworkGatewaysClient { - return VirtualNetworkGatewaysClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CreateOrUpdate creates or updates a virtual network gateway in the specified -// resource group. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayName is the name of the virtual network gateway. -// parameters is parameters supplied to create or update virtual network -// gateway operation. 
-func (client VirtualNetworkGatewaysClient) CreateOrUpdate(resourceGroupName string, virtualNetworkGatewayName string, parameters VirtualNetworkGateway, cancel <-chan struct{}) (<-chan VirtualNetworkGateway, <-chan error) { - resultChan := make(chan VirtualNetworkGateway, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayPropertiesFormat", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.VirtualNetworkGatewaysClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result VirtualNetworkGateway - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, virtualNetworkGatewayName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client VirtualNetworkGatewaysClient) CreateOrUpdatePreparer(resourceGroupName string, virtualNetworkGatewayName string, parameters VirtualNetworkGateway, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. 
-func (client VirtualNetworkGatewaysClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualNetworkGateway, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes the specified virtual network gateway. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayName is the name of the virtual network gateway. -func (client VirtualNetworkGatewaysClient) Delete(resourceGroupName string, virtualNetworkGatewayName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, virtualNetworkGatewayName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client VirtualNetworkGatewaysClient) DeletePreparer(resourceGroupName string, virtualNetworkGatewayName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. 
-func (client VirtualNetworkGatewaysClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusAccepted, http.StatusOK), - autorest.ByClosing()) - result.Response = resp - return -} - -// Generatevpnclientpackage generates VPN client package for P2S client of the -// virtual network gateway in the specified resource group. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayName is the name of the virtual network gateway. -// parameters is parameters supplied to the generate virtual network gateway -// VPN client package operation. -func (client VirtualNetworkGatewaysClient) Generatevpnclientpackage(resourceGroupName string, virtualNetworkGatewayName string, parameters VpnClientParameters) (result String, err error) { - req, err := client.GeneratevpnclientpackagePreparer(resourceGroupName, virtualNetworkGatewayName, parameters) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Generatevpnclientpackage", nil, "Failure preparing request") - return - } - - resp, err := client.GeneratevpnclientpackageSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Generatevpnclientpackage", resp, "Failure sending request") - return - } - - result, err = client.GeneratevpnclientpackageResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Generatevpnclientpackage", resp, "Failure responding to request") - } - - return -} - -// GeneratevpnclientpackagePreparer prepares the Generatevpnclientpackage request. -func (client VirtualNetworkGatewaysClient) GeneratevpnclientpackagePreparer(resourceGroupName string, virtualNetworkGatewayName string, parameters VpnClientParameters) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/generatevpnclientpackage", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GeneratevpnclientpackageSender sends the Generatevpnclientpackage request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) GeneratevpnclientpackageSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GeneratevpnclientpackageResponder handles the response to the Generatevpnclientpackage request. The method always -// closes the http.Response Body. 
-func (client VirtualNetworkGatewaysClient) GeneratevpnclientpackageResponder(resp *http.Response) (result String, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result.Value), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Get gets the specified virtual network gateway by resource group. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayName is the name of the virtual network gateway. -func (client VirtualNetworkGatewaysClient) Get(resourceGroupName string, virtualNetworkGatewayName string) (result VirtualNetworkGateway, err error) { - req, err := client.GetPreparer(resourceGroupName, virtualNetworkGatewayName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client VirtualNetworkGatewaysClient) GetPreparer(resourceGroupName string, virtualNetworkGatewayName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client VirtualNetworkGatewaysClient) GetResponder(resp *http.Response) (result VirtualNetworkGateway, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetAdvertisedRoutes this operation retrieves a list of routes the virtual -// network gateway is advertising to the specified peer. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. 
-// virtualNetworkGatewayName is the name of the virtual network gateway. peer -// is the IP address of the peer -func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutes(resourceGroupName string, virtualNetworkGatewayName string, peer string, cancel <-chan struct{}) (<-chan GatewayRouteListResult, <-chan error) { - resultChan := make(chan GatewayRouteListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result GatewayRouteListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetAdvertisedRoutesPreparer(resourceGroupName, virtualNetworkGatewayName, peer, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetAdvertisedRoutes", nil, "Failure preparing request") - return - } - - resp, err := client.GetAdvertisedRoutesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetAdvertisedRoutes", resp, "Failure sending request") - return - } - - result, err = client.GetAdvertisedRoutesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetAdvertisedRoutes", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetAdvertisedRoutesPreparer prepares the GetAdvertisedRoutes request. -func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutesPreparer(resourceGroupName string, virtualNetworkGatewayName string, peer string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - "peer": autorest.Encode("query", peer), - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/getAdvertisedRoutes", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetAdvertisedRoutesSender sends the GetAdvertisedRoutes request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetAdvertisedRoutesResponder handles the response to the GetAdvertisedRoutes request. The method always -// closes the http.Response Body. -func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutesResponder(resp *http.Response) (result GatewayRouteListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetBgpPeerStatus the GetBgpPeerStatus operation retrieves the status of all -// BGP peers. This method may poll for completion. 
Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayName is the name of the virtual network gateway. peer -// is the IP address of the peer to retrieve the status of. -func (client VirtualNetworkGatewaysClient) GetBgpPeerStatus(resourceGroupName string, virtualNetworkGatewayName string, peer string, cancel <-chan struct{}) (<-chan BgpPeerStatusListResult, <-chan error) { - resultChan := make(chan BgpPeerStatusListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result BgpPeerStatusListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetBgpPeerStatusPreparer(resourceGroupName, virtualNetworkGatewayName, peer, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetBgpPeerStatus", nil, "Failure preparing request") - return - } - - resp, err := client.GetBgpPeerStatusSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetBgpPeerStatus", resp, "Failure sending request") - return - } - - result, err = client.GetBgpPeerStatusResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetBgpPeerStatus", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetBgpPeerStatusPreparer prepares the GetBgpPeerStatus request. -func (client VirtualNetworkGatewaysClient) GetBgpPeerStatusPreparer(resourceGroupName string, virtualNetworkGatewayName string, peer string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(peer) > 0 { - queryParameters["peer"] = autorest.Encode("query", peer) - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/getBgpPeerStatus", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetBgpPeerStatusSender sends the GetBgpPeerStatus request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) GetBgpPeerStatusSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetBgpPeerStatusResponder handles the response to the GetBgpPeerStatus request. The method always -// closes the http.Response Body. 
-func (client VirtualNetworkGatewaysClient) GetBgpPeerStatusResponder(resp *http.Response) (result BgpPeerStatusListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetLearnedRoutes this operation retrieves a list of routes the virtual -// network gateway has learned, including routes learned from BGP peers. This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayName is the name of the virtual network gateway. -func (client VirtualNetworkGatewaysClient) GetLearnedRoutes(resourceGroupName string, virtualNetworkGatewayName string, cancel <-chan struct{}) (<-chan GatewayRouteListResult, <-chan error) { - resultChan := make(chan GatewayRouteListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result GatewayRouteListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetLearnedRoutesPreparer(resourceGroupName, virtualNetworkGatewayName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetLearnedRoutes", nil, "Failure preparing request") - return - } - - resp, err := client.GetLearnedRoutesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetLearnedRoutes", resp, "Failure sending request") - return - } - - result, err = client.GetLearnedRoutesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetLearnedRoutes", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetLearnedRoutesPreparer prepares the GetLearnedRoutes request. -func (client VirtualNetworkGatewaysClient) GetLearnedRoutesPreparer(resourceGroupName string, virtualNetworkGatewayName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/getLearnedRoutes", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetLearnedRoutesSender sends the GetLearnedRoutes request. The method will close the -// http.Response Body if it receives an error. 
-func (client VirtualNetworkGatewaysClient) GetLearnedRoutesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetLearnedRoutesResponder handles the response to the GetLearnedRoutes request. The method always -// closes the http.Response Body. -func (client VirtualNetworkGatewaysClient) GetLearnedRoutesResponder(resp *http.Response) (result GatewayRouteListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List gets all virtual network gateways by resource group. -// -// resourceGroupName is the name of the resource group. -func (client VirtualNetworkGatewaysClient) List(resourceGroupName string) (result VirtualNetworkGatewayListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. -func (client VirtualNetworkGatewaysClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client VirtualNetworkGatewaysClient) ListResponder(resp *http.Response) (result VirtualNetworkGatewayListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. 
-func (client VirtualNetworkGatewaysClient) ListNextResults(lastResults VirtualNetworkGatewayListResult) (result VirtualNetworkGatewayListResult, err error) { - req, err := lastResults.VirtualNetworkGatewayListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// Reset resets the primary of the virtual network gateway in the specified -// resource group. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayName is the name of the virtual network gateway. -// gatewayVip is virtual network gateway vip address supplied to the begin -// reset of the active-active feature enabled gateway. -func (client VirtualNetworkGatewaysClient) Reset(resourceGroupName string, virtualNetworkGatewayName string, gatewayVip string, cancel <-chan struct{}) (<-chan VirtualNetworkGateway, <-chan error) { - resultChan := make(chan VirtualNetworkGateway, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result VirtualNetworkGateway - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ResetPreparer(resourceGroupName, virtualNetworkGatewayName, gatewayVip, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Reset", nil, "Failure preparing request") - return - } - - resp, err := client.ResetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Reset", resp, "Failure sending request") - return - } - - result, err = client.ResetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Reset", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// ResetPreparer prepares the Reset request. 
-func (client VirtualNetworkGatewaysClient) ResetPreparer(resourceGroupName string, virtualNetworkGatewayName string, gatewayVip string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(gatewayVip) > 0 { - queryParameters["gatewayVip"] = autorest.Encode("query", gatewayVip) - } - - preparer := autorest.CreatePreparer( - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/reset", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// ResetSender sends the Reset request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworkGatewaysClient) ResetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// ResetResponder handles the response to the Reset request. The method always -// closes the http.Response Body. -func (client VirtualNetworkGatewaysClient) ResetResponder(resp *http.Response) (result VirtualNetworkGateway, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworks.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworks.go deleted file mode 100644 index eab048c72..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworks.go +++ /dev/null @@ -1,521 +0,0 @@ -package network - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "net/http" -) - -// VirtualNetworksClient is the composite Swagger for Network Client -type VirtualNetworksClient struct { - ManagementClient -} - -// NewVirtualNetworksClient creates an instance of the VirtualNetworksClient -// client. 
-func NewVirtualNetworksClient(subscriptionID string) VirtualNetworksClient { - return NewVirtualNetworksClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewVirtualNetworksClientWithBaseURI creates an instance of the -// VirtualNetworksClient client. -func NewVirtualNetworksClientWithBaseURI(baseURI string, subscriptionID string) VirtualNetworksClient { - return VirtualNetworksClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CheckIPAddressAvailability checks whether a private IP address is available -// for use. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. IPAddress is the private IP address to be -// verified. -func (client VirtualNetworksClient) CheckIPAddressAvailability(resourceGroupName string, virtualNetworkName string, IPAddress string) (result IPAddressAvailabilityResult, err error) { - req, err := client.CheckIPAddressAvailabilityPreparer(resourceGroupName, virtualNetworkName, IPAddress) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CheckIPAddressAvailability", nil, "Failure preparing request") - return - } - - resp, err := client.CheckIPAddressAvailabilitySender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CheckIPAddressAvailability", resp, "Failure sending request") - return - } - - result, err = client.CheckIPAddressAvailabilityResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CheckIPAddressAvailability", resp, "Failure responding to request") - } - - return -} - -// CheckIPAddressAvailabilityPreparer prepares the CheckIPAddressAvailability request. -func (client VirtualNetworksClient) CheckIPAddressAvailabilityPreparer(resourceGroupName string, virtualNetworkName string, IPAddress string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkName": autorest.Encode("path", virtualNetworkName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(IPAddress) > 0 { - queryParameters["ipAddress"] = autorest.Encode("query", IPAddress) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/CheckIPAddressAvailability", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// CheckIPAddressAvailabilitySender sends the CheckIPAddressAvailability request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworksClient) CheckIPAddressAvailabilitySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// CheckIPAddressAvailabilityResponder handles the response to the CheckIPAddressAvailability request. The method always -// closes the http.Response Body. 
-func (client VirtualNetworksClient) CheckIPAddressAvailabilityResponder(resp *http.Response) (result IPAddressAvailabilityResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// CreateOrUpdate creates or updates a virtual network in the specified -// resource group. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. parameters is parameters supplied to the -// create or update virtual network operation -func (client VirtualNetworksClient) CreateOrUpdate(resourceGroupName string, virtualNetworkName string, parameters VirtualNetwork, cancel <-chan struct{}) (<-chan VirtualNetwork, <-chan error) { - resultChan := make(chan VirtualNetwork, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result VirtualNetwork - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, virtualNetworkName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client VirtualNetworksClient) CreateOrUpdatePreparer(resourceGroupName string, virtualNetworkName string, parameters VirtualNetwork, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkName": autorest.Encode("path", virtualNetworkName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. 
-func (client VirtualNetworksClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client VirtualNetworksClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualNetwork, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes the specified virtual network. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. -func (client VirtualNetworksClient) Delete(resourceGroupName string, virtualNetworkName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, virtualNetworkName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client VirtualNetworksClient) DeletePreparer(resourceGroupName string, virtualNetworkName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkName": autorest.Encode("path", virtualNetworkName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworksClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. 
The method always -// closes the http.Response Body. -func (client VirtualNetworksClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusNoContent, http.StatusOK), - autorest.ByClosing()) - result.Response = resp - return -} - -// Get gets the specified virtual network by resource group. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. expand is expands referenced resources. -func (client VirtualNetworksClient) Get(resourceGroupName string, virtualNetworkName string, expand string) (result VirtualNetwork, err error) { - req, err := client.GetPreparer(resourceGroupName, virtualNetworkName, expand) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client VirtualNetworksClient) GetPreparer(resourceGroupName string, virtualNetworkName string, expand string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - "virtualNetworkName": autorest.Encode("path", virtualNetworkName), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworksClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client VirtualNetworksClient) GetResponder(resp *http.Response) (result VirtualNetwork, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List gets all virtual networks in a resource group. -// -// resourceGroupName is the name of the resource group. 
-func (client VirtualNetworksClient) List(resourceGroupName string) (result VirtualNetworkListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. -func (client VirtualNetworksClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworksClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client VirtualNetworksClient) ListResponder(resp *http.Response) (result VirtualNetworkListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. -func (client VirtualNetworksClient) ListNextResults(lastResults VirtualNetworkListResult) (result VirtualNetworkListResult, err error) { - req, err := lastResults.VirtualNetworkListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// ListAll gets all virtual networks in a subscription. 
-func (client VirtualNetworksClient) ListAll() (result VirtualNetworkListResult, err error) { - req, err := client.ListAllPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", nil, "Failure preparing request") - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", resp, "Failure sending request") - return - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", resp, "Failure responding to request") - } - - return -} - -// ListAllPreparer prepares the ListAll request. -func (client VirtualNetworksClient) ListAllPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/virtualNetworks", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAllSender sends the ListAll request. The method will close the -// http.Response Body if it receives an error. -func (client VirtualNetworksClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAllResponder handles the response to the ListAll request. The method always -// closes the http.Response Body. -func (client VirtualNetworksClient) ListAllResponder(resp *http.Response) (result VirtualNetworkListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListAllNextResults retrieves the next set of results, if any. -func (client VirtualNetworksClient) ListAllNextResults(lastResults VirtualNetworkListResult) (result VirtualNetworkListResult, err error) { - req, err := lastResults.VirtualNetworkListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", resp, "Failure sending next results request") - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", resp, "Failure responding to next results request") - } - - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/watchers.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/network/watchers.go deleted file mode 100644 index 28febf847..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/watchers.go +++ /dev/null @@ -1,1131 +0,0 @@ -package network - -// Copyright (c) Microsoft and contributors. All rights reserved. 
-// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" - "net/http" -) - -// WatchersClient is the composite Swagger for Network Client -type WatchersClient struct { - ManagementClient -} - -// NewWatchersClient creates an instance of the WatchersClient client. -func NewWatchersClient(subscriptionID string) WatchersClient { - return NewWatchersClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewWatchersClientWithBaseURI creates an instance of the WatchersClient -// client. -func NewWatchersClientWithBaseURI(baseURI string, subscriptionID string) WatchersClient { - return WatchersClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CreateOrUpdate creates or updates a network watcher in the specified -// resource group. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. parameters is parameters that define the -// network watcher resource. -func (client WatchersClient) CreateOrUpdate(resourceGroupName string, networkWatcherName string, parameters Watcher) (result Watcher, err error) { - req, err := client.CreateOrUpdatePreparer(resourceGroupName, networkWatcherName, parameters) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "CreateOrUpdate", resp, "Failure responding to request") - } - - return -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client WatchersClient) CreateOrUpdatePreparer(resourceGroupName string, networkWatcherName string, parameters Watcher) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client WatchersClient) CreateOrUpdateResponder(resp *http.Response) (result Watcher, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes the specified network watcher resource. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. -func (client WatchersClient) Delete(resourceGroupName string, networkWatcherName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, networkWatcherName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.WatchersClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. 
-func (client WatchersClient) DeletePreparer(resourceGroupName string, networkWatcherName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. -func (client WatchersClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), - autorest.ByClosing()) - result.Response = resp - return -} - -// Get gets the specified network watcher by resource group. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. -func (client WatchersClient) Get(resourceGroupName string, networkWatcherName string) (result Watcher, err error) { - req, err := client.GetPreparer(resourceGroupName, networkWatcherName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client WatchersClient) GetPreparer(resourceGroupName string, networkWatcherName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. 
The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client WatchersClient) GetResponder(resp *http.Response) (result Watcher, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetFlowLogStatus queries status of flow log on a specified resource. This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the network watcher resource group. -// networkWatcherName is the name of the network watcher resource. parameters -// is parameters that define a resource to query flow log status. -func (client WatchersClient) GetFlowLogStatus(resourceGroupName string, networkWatcherName string, parameters FlowLogStatusParameters, cancel <-chan struct{}) (<-chan FlowLogInformation, <-chan error) { - resultChan := make(chan FlowLogInformation, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.WatchersClient", "GetFlowLogStatus") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result FlowLogInformation - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetFlowLogStatusPreparer(resourceGroupName, networkWatcherName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetFlowLogStatus", nil, "Failure preparing request") - return - } - - resp, err := client.GetFlowLogStatusSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetFlowLogStatus", resp, "Failure sending request") - return - } - - result, err = client.GetFlowLogStatusResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetFlowLogStatus", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetFlowLogStatusPreparer prepares the GetFlowLogStatus request. 
-func (client WatchersClient) GetFlowLogStatusPreparer(resourceGroupName string, networkWatcherName string, parameters FlowLogStatusParameters, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/queryFlowLogStatus", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetFlowLogStatusSender sends the GetFlowLogStatus request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) GetFlowLogStatusSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetFlowLogStatusResponder handles the response to the GetFlowLogStatus request. The method always -// closes the http.Response Body. -func (client WatchersClient) GetFlowLogStatusResponder(resp *http.Response) (result FlowLogInformation, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetNextHop gets the next hop from the specified VM. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. parameters is parameters that define the -// source and destination endpoint. 
-func (client WatchersClient) GetNextHop(resourceGroupName string, networkWatcherName string, parameters NextHopParameters, cancel <-chan struct{}) (<-chan NextHopResult, <-chan error) { - resultChan := make(chan NextHopResult, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.SourceIPAddress", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.DestinationIPAddress", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.WatchersClient", "GetNextHop") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result NextHopResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetNextHopPreparer(resourceGroupName, networkWatcherName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetNextHop", nil, "Failure preparing request") - return - } - - resp, err := client.GetNextHopSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetNextHop", resp, "Failure sending request") - return - } - - result, err = client.GetNextHopResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetNextHop", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetNextHopPreparer prepares the GetNextHop request. -func (client WatchersClient) GetNextHopPreparer(resourceGroupName string, networkWatcherName string, parameters NextHopParameters, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/nextHop", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetNextHopSender sends the GetNextHop request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) GetNextHopSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetNextHopResponder handles the response to the GetNextHop request. The method always -// closes the http.Response Body. 
-func (client WatchersClient) GetNextHopResponder(resp *http.Response) (result NextHopResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetTopology gets the current network topology by resource group. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. parameters is parameters that define the -// representation of topology. -func (client WatchersClient) GetTopology(resourceGroupName string, networkWatcherName string, parameters TopologyParameters) (result Topology, err error) { - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceGroupName", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "network.WatchersClient", "GetTopology") - } - - req, err := client.GetTopologyPreparer(resourceGroupName, networkWatcherName, parameters) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTopology", nil, "Failure preparing request") - return - } - - resp, err := client.GetTopologySender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTopology", resp, "Failure sending request") - return - } - - result, err = client.GetTopologyResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTopology", resp, "Failure responding to request") - } - - return -} - -// GetTopologyPreparer prepares the GetTopology request. -func (client WatchersClient) GetTopologyPreparer(resourceGroupName string, networkWatcherName string, parameters TopologyParameters) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/topology", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetTopologySender sends the GetTopology request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) GetTopologySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetTopologyResponder handles the response to the GetTopology request. The method always -// closes the http.Response Body. 
-func (client WatchersClient) GetTopologyResponder(resp *http.Response) (result Topology, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetTroubleshooting initiate troubleshooting on a specified resource This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher resource. parameters is parameters that -// define the resource to troubleshoot. -func (client WatchersClient) GetTroubleshooting(resourceGroupName string, networkWatcherName string, parameters TroubleshootingParameters, cancel <-chan struct{}) (<-chan TroubleshootingResult, <-chan error) { - resultChan := make(chan TroubleshootingResult, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.TroubleshootingProperties", Name: validation.Null, Rule: true, - Chain: []validation.Constraint{{Target: "parameters.TroubleshootingProperties.StorageID", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.TroubleshootingProperties.StoragePath", Name: validation.Null, Rule: true, Chain: nil}, - }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.WatchersClient", "GetTroubleshooting") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result TroubleshootingResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetTroubleshootingPreparer(resourceGroupName, networkWatcherName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshooting", nil, "Failure preparing request") - return - } - - resp, err := client.GetTroubleshootingSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshooting", resp, "Failure sending request") - return - } - - result, err = client.GetTroubleshootingResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshooting", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetTroubleshootingPreparer prepares the GetTroubleshooting request. 
-func (client WatchersClient) GetTroubleshootingPreparer(resourceGroupName string, networkWatcherName string, parameters TroubleshootingParameters, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/troubleshoot", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetTroubleshootingSender sends the GetTroubleshooting request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) GetTroubleshootingSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetTroubleshootingResponder handles the response to the GetTroubleshooting request. The method always -// closes the http.Response Body. -func (client WatchersClient) GetTroubleshootingResponder(resp *http.Response) (result TroubleshootingResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetTroubleshootingResult get the last completed troubleshooting result on a -// specified resource This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher resource. parameters is parameters that -// define the resource to query the troubleshooting result. 
-func (client WatchersClient) GetTroubleshootingResult(resourceGroupName string, networkWatcherName string, parameters QueryTroubleshootingParameters, cancel <-chan struct{}) (<-chan TroubleshootingResult, <-chan error) { - resultChan := make(chan TroubleshootingResult, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.WatchersClient", "GetTroubleshootingResult") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result TroubleshootingResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetTroubleshootingResultPreparer(resourceGroupName, networkWatcherName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshootingResult", nil, "Failure preparing request") - return - } - - resp, err := client.GetTroubleshootingResultSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshootingResult", resp, "Failure sending request") - return - } - - result, err = client.GetTroubleshootingResultResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshootingResult", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetTroubleshootingResultPreparer prepares the GetTroubleshootingResult request. -func (client WatchersClient) GetTroubleshootingResultPreparer(resourceGroupName string, networkWatcherName string, parameters QueryTroubleshootingParameters, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/queryTroubleshootResult", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetTroubleshootingResultSender sends the GetTroubleshootingResult request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) GetTroubleshootingResultSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetTroubleshootingResultResponder handles the response to the GetTroubleshootingResult request. The method always -// closes the http.Response Body. 
-func (client WatchersClient) GetTroubleshootingResultResponder(resp *http.Response) (result TroubleshootingResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetVMSecurityRules gets the configured and effective security group rules on -// the specified VM. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. parameters is parameters that define the VM -// to check security groups for. -func (client WatchersClient) GetVMSecurityRules(resourceGroupName string, networkWatcherName string, parameters SecurityGroupViewParameters, cancel <-chan struct{}) (<-chan SecurityGroupViewResult, <-chan error) { - resultChan := make(chan SecurityGroupViewResult, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.WatchersClient", "GetVMSecurityRules") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result SecurityGroupViewResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetVMSecurityRulesPreparer(resourceGroupName, networkWatcherName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetVMSecurityRules", nil, "Failure preparing request") - return - } - - resp, err := client.GetVMSecurityRulesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetVMSecurityRules", resp, "Failure sending request") - return - } - - result, err = client.GetVMSecurityRulesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetVMSecurityRules", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// GetVMSecurityRulesPreparer prepares the GetVMSecurityRules request. 
-func (client WatchersClient) GetVMSecurityRulesPreparer(resourceGroupName string, networkWatcherName string, parameters SecurityGroupViewParameters, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/securityGroupView", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// GetVMSecurityRulesSender sends the GetVMSecurityRules request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) GetVMSecurityRulesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// GetVMSecurityRulesResponder handles the response to the GetVMSecurityRules request. The method always -// closes the http.Response Body. -func (client WatchersClient) GetVMSecurityRulesResponder(resp *http.Response) (result SecurityGroupViewResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List gets all network watchers by resource group. -// -// resourceGroupName is the name of the resource group. -func (client WatchersClient) List(resourceGroupName string) (result WatcherListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. 
-func (client WatchersClient) ListPreparer(resourceGroupName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client WatchersClient) ListResponder(resp *http.Response) (result WatcherListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListAll gets all network watchers by subscription. -func (client WatchersClient) ListAll() (result WatcherListResult, err error) { - req, err := client.ListAllPreparer() - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAll", nil, "Failure preparing request") - return - } - - resp, err := client.ListAllSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAll", resp, "Failure sending request") - return - } - - result, err = client.ListAllResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAll", resp, "Failure responding to request") - } - - return -} - -// ListAllPreparer prepares the ListAll request. -func (client WatchersClient) ListAllPreparer() (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/networkWatchers", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListAllSender sends the ListAll request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListAllResponder handles the response to the ListAll request. The method always -// closes the http.Response Body. 
-func (client WatchersClient) ListAllResponder(resp *http.Response) (result WatcherListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// SetFlowLogConfiguration configures flow log on a specified resource. This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the network watcher resource group. -// networkWatcherName is the name of the network watcher resource. parameters -// is parameters that define the configuration of flow log. -func (client WatchersClient) SetFlowLogConfiguration(resourceGroupName string, networkWatcherName string, parameters FlowLogInformation, cancel <-chan struct{}) (<-chan FlowLogInformation, <-chan error) { - resultChan := make(chan FlowLogInformation, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.FlowLogProperties", Name: validation.Null, Rule: true, - Chain: []validation.Constraint{{Target: "parameters.FlowLogProperties.StorageID", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.FlowLogProperties.Enabled", Name: validation.Null, Rule: true, Chain: nil}, - }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.WatchersClient", "SetFlowLogConfiguration") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result FlowLogInformation - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.SetFlowLogConfigurationPreparer(resourceGroupName, networkWatcherName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "SetFlowLogConfiguration", nil, "Failure preparing request") - return - } - - resp, err := client.SetFlowLogConfigurationSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "SetFlowLogConfiguration", resp, "Failure sending request") - return - } - - result, err = client.SetFlowLogConfigurationResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "SetFlowLogConfiguration", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// SetFlowLogConfigurationPreparer prepares the SetFlowLogConfiguration request. 
-func (client WatchersClient) SetFlowLogConfigurationPreparer(resourceGroupName string, networkWatcherName string, parameters FlowLogInformation, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/configureFlowLog", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// SetFlowLogConfigurationSender sends the SetFlowLogConfiguration request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) SetFlowLogConfigurationSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// SetFlowLogConfigurationResponder handles the response to the SetFlowLogConfiguration request. The method always -// closes the http.Response Body. -func (client WatchersClient) SetFlowLogConfigurationResponder(resp *http.Response) (result FlowLogInformation, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// VerifyIPFlow verify IP flow from the specified VM to a location given the -// currently configured NSG rules. This method may poll for completion. Polling -// can be canceled by passing the cancel channel argument. The channel will be -// used to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. parameters is parameters that define the IP -// flow to be verified. 
-func (client WatchersClient) VerifyIPFlow(resourceGroupName string, networkWatcherName string, parameters VerificationIPFlowParameters, cancel <-chan struct{}) (<-chan VerificationIPFlowResult, <-chan error) { - resultChan := make(chan VerificationIPFlowResult, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.LocalPort", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.RemotePort", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.LocalIPAddress", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "parameters.RemoteIPAddress", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.WatchersClient", "VerifyIPFlow") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result VerificationIPFlowResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.VerifyIPFlowPreparer(resourceGroupName, networkWatcherName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "VerifyIPFlow", nil, "Failure preparing request") - return - } - - resp, err := client.VerifyIPFlowSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.WatchersClient", "VerifyIPFlow", resp, "Failure sending request") - return - } - - result, err = client.VerifyIPFlowResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.WatchersClient", "VerifyIPFlow", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// VerifyIPFlowPreparer prepares the VerifyIPFlow request. -func (client WatchersClient) VerifyIPFlowPreparer(resourceGroupName string, networkWatcherName string, parameters VerificationIPFlowParameters, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "networkWatcherName": autorest.Encode("path", networkWatcherName), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2017-03-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/ipFlowVerify", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// VerifyIPFlowSender sends the VerifyIPFlow request. The method will close the -// http.Response Body if it receives an error. -func (client WatchersClient) VerifyIPFlowSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// VerifyIPFlowResponder handles the response to the VerifyIPFlow request. The method always -// closes the http.Response Body. 
-func (client WatchersClient) VerifyIPFlowResponder(resp *http.Response) (result VerificationIPFlowResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/models.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/models.go deleted file mode 100644 index dd0f06f5f..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/models.go +++ /dev/null @@ -1,457 +0,0 @@ -package resources - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/date" - "github.com/Azure/go-autorest/autorest/to" - "net/http" -) - -// DeploymentMode enumerates the values for deployment mode. -type DeploymentMode string - -const ( - // Complete specifies the complete state for deployment mode. - Complete DeploymentMode = "Complete" - // Incremental specifies the incremental state for deployment mode. - Incremental DeploymentMode = "Incremental" -) - -// ResourceIdentityType enumerates the values for resource identity type. -type ResourceIdentityType string - -const ( - // SystemAssigned specifies the system assigned state for resource identity - // type. - SystemAssigned ResourceIdentityType = "SystemAssigned" -) - -// AliasPathType is the type of the paths for alias. -type AliasPathType struct { - Path *string `json:"path,omitempty"` - APIVersions *[]string `json:"apiVersions,omitempty"` -} - -// AliasType is the alias type. -type AliasType struct { - Name *string `json:"name,omitempty"` - Paths *[]AliasPathType `json:"paths,omitempty"` -} - -// BasicDependency is deployment dependency information. -type BasicDependency struct { - ID *string `json:"id,omitempty"` - ResourceType *string `json:"resourceType,omitempty"` - ResourceName *string `json:"resourceName,omitempty"` -} - -// DebugSetting is -type DebugSetting struct { - DetailLevel *string `json:"detailLevel,omitempty"` -} - -// Dependency is deployment dependency information. -type Dependency struct { - DependsOn *[]BasicDependency `json:"dependsOn,omitempty"` - ID *string `json:"id,omitempty"` - ResourceType *string `json:"resourceType,omitempty"` - ResourceName *string `json:"resourceName,omitempty"` -} - -// Deployment is deployment operation parameters. -type Deployment struct { - Properties *DeploymentProperties `json:"properties,omitempty"` -} - -// DeploymentExportResult is the deployment export result. 
-type DeploymentExportResult struct { - autorest.Response `json:"-"` - Template *map[string]interface{} `json:"template,omitempty"` -} - -// DeploymentExtended is deployment information. -type DeploymentExtended struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Properties *DeploymentPropertiesExtended `json:"properties,omitempty"` -} - -// DeploymentExtendedFilter is deployment filter. -type DeploymentExtendedFilter struct { - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// DeploymentListResult is list of deployments. -type DeploymentListResult struct { - autorest.Response `json:"-"` - Value *[]DeploymentExtended `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// DeploymentListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client DeploymentListResult) DeploymentListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// DeploymentOperation is deployment operation information. -type DeploymentOperation struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - OperationID *string `json:"operationId,omitempty"` - Properties *DeploymentOperationProperties `json:"properties,omitempty"` -} - -// DeploymentOperationProperties is deployment operation properties. -type DeploymentOperationProperties struct { - ProvisioningState *string `json:"provisioningState,omitempty"` - Timestamp *date.Time `json:"timestamp,omitempty"` - ServiceRequestID *string `json:"serviceRequestId,omitempty"` - StatusCode *string `json:"statusCode,omitempty"` - StatusMessage *map[string]interface{} `json:"statusMessage,omitempty"` - TargetResource *TargetResource `json:"targetResource,omitempty"` - Request *HTTPMessage `json:"request,omitempty"` - Response *HTTPMessage `json:"response,omitempty"` -} - -// DeploymentOperationsListResult is list of deployment operations. -type DeploymentOperationsListResult struct { - autorest.Response `json:"-"` - Value *[]DeploymentOperation `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// DeploymentOperationsListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client DeploymentOperationsListResult) DeploymentOperationsListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// DeploymentProperties is deployment properties. -type DeploymentProperties struct { - Template *map[string]interface{} `json:"template,omitempty"` - TemplateLink *TemplateLink `json:"templateLink,omitempty"` - Parameters *map[string]interface{} `json:"parameters,omitempty"` - ParametersLink *ParametersLink `json:"parametersLink,omitempty"` - Mode DeploymentMode `json:"mode,omitempty"` - DebugSetting *DebugSetting `json:"debugSetting,omitempty"` -} - -// DeploymentPropertiesExtended is deployment properties with additional -// details. 
-type DeploymentPropertiesExtended struct { - ProvisioningState *string `json:"provisioningState,omitempty"` - CorrelationID *string `json:"correlationId,omitempty"` - Timestamp *date.Time `json:"timestamp,omitempty"` - Outputs *map[string]interface{} `json:"outputs,omitempty"` - Providers *[]Provider `json:"providers,omitempty"` - Dependencies *[]Dependency `json:"dependencies,omitempty"` - Template *map[string]interface{} `json:"template,omitempty"` - TemplateLink *TemplateLink `json:"templateLink,omitempty"` - Parameters *map[string]interface{} `json:"parameters,omitempty"` - ParametersLink *ParametersLink `json:"parametersLink,omitempty"` - Mode DeploymentMode `json:"mode,omitempty"` - DebugSetting *DebugSetting `json:"debugSetting,omitempty"` -} - -// DeploymentValidateResult is information from validate template deployment -// response. -type DeploymentValidateResult struct { - autorest.Response `json:"-"` - Error *ManagementErrorWithDetails `json:"error,omitempty"` - Properties *DeploymentPropertiesExtended `json:"properties,omitempty"` -} - -// ExportTemplateRequest is export resource group template request parameters. -type ExportTemplateRequest struct { - ResourcesProperty *[]string `json:"resources,omitempty"` - Options *string `json:"options,omitempty"` -} - -// GenericResource is resource information. -type GenericResource struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - Plan *Plan `json:"plan,omitempty"` - Properties *map[string]interface{} `json:"properties,omitempty"` - Kind *string `json:"kind,omitempty"` - ManagedBy *string `json:"managedBy,omitempty"` - Sku *Sku `json:"sku,omitempty"` - Identity *Identity `json:"identity,omitempty"` -} - -// GenericResourceFilter is resource filter. -type GenericResourceFilter struct { - ResourceType *string `json:"resourceType,omitempty"` - Tagname *string `json:"tagname,omitempty"` - Tagvalue *string `json:"tagvalue,omitempty"` -} - -// Group is resource group information. -type Group struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Properties *GroupProperties `json:"properties,omitempty"` - Location *string `json:"location,omitempty"` - ManagedBy *string `json:"managedBy,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// GroupExportResult is -type GroupExportResult struct { - autorest.Response `json:"-"` - Template *map[string]interface{} `json:"template,omitempty"` - Error *ManagementErrorWithDetails `json:"error,omitempty"` -} - -// GroupFilter is resource group filter. -type GroupFilter struct { - TagName *string `json:"tagName,omitempty"` - TagValue *string `json:"tagValue,omitempty"` -} - -// GroupListResult is list of resource groups. -type GroupListResult struct { - autorest.Response `json:"-"` - Value *[]Group `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// GroupListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. 
-func (client GroupListResult) GroupListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// GroupProperties is the resource group properties. -type GroupProperties struct { - ProvisioningState *string `json:"provisioningState,omitempty"` -} - -// HTTPMessage is -type HTTPMessage struct { - Content *map[string]interface{} `json:"content,omitempty"` -} - -// Identity is identity for the resource. -type Identity struct { - PrincipalID *string `json:"principalId,omitempty"` - TenantID *string `json:"tenantId,omitempty"` - Type ResourceIdentityType `json:"type,omitempty"` -} - -// ListResult is list of resource groups. -type ListResult struct { - autorest.Response `json:"-"` - Value *[]GenericResource `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ListResult) ListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ManagementErrorWithDetails is -type ManagementErrorWithDetails struct { - Code *string `json:"code,omitempty"` - Message *string `json:"message,omitempty"` - Target *string `json:"target,omitempty"` - Details *[]ManagementErrorWithDetails `json:"details,omitempty"` -} - -// MoveInfo is parameters of move resources. -type MoveInfo struct { - ResourcesProperty *[]string `json:"resources,omitempty"` - TargetResourceGroup *string `json:"targetResourceGroup,omitempty"` -} - -// ParametersLink is entity representing the reference to the deployment -// paramaters. -type ParametersLink struct { - URI *string `json:"uri,omitempty"` - ContentVersion *string `json:"contentVersion,omitempty"` -} - -// Plan is plan for the resource. -type Plan struct { - Name *string `json:"name,omitempty"` - Publisher *string `json:"publisher,omitempty"` - Product *string `json:"product,omitempty"` - PromotionCode *string `json:"promotionCode,omitempty"` -} - -// Provider is resource provider information. -type Provider struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Namespace *string `json:"namespace,omitempty"` - RegistrationState *string `json:"registrationState,omitempty"` - ResourceTypes *[]ProviderResourceType `json:"resourceTypes,omitempty"` -} - -// ProviderListResult is list of resource providers. -type ProviderListResult struct { - autorest.Response `json:"-"` - Value *[]Provider `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ProviderListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ProviderListResult) ProviderListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// ProviderOperationDisplayProperties is resource provider operation's display -// properties. 
-type ProviderOperationDisplayProperties struct { - Publisher *string `json:"publisher,omitempty"` - Provider *string `json:"provider,omitempty"` - Resource *string `json:"resource,omitempty"` - Operation *string `json:"operation,omitempty"` - Description *string `json:"description,omitempty"` -} - -// ProviderResourceType is resource type managed by the resource provider. -type ProviderResourceType struct { - ResourceType *string `json:"resourceType,omitempty"` - Locations *[]string `json:"locations,omitempty"` - Aliases *[]AliasType `json:"aliases,omitempty"` - APIVersions *[]string `json:"apiVersions,omitempty"` - Properties *map[string]*string `json:"properties,omitempty"` -} - -// Resource is -type Resource struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// Sku is sKU for the resource. -type Sku struct { - Name *string `json:"name,omitempty"` - Tier *string `json:"tier,omitempty"` - Size *string `json:"size,omitempty"` - Family *string `json:"family,omitempty"` - Model *string `json:"model,omitempty"` - Capacity *int32 `json:"capacity,omitempty"` -} - -// SubResource is -type SubResource struct { - ID *string `json:"id,omitempty"` -} - -// TagCount is tag count. -type TagCount struct { - Type *string `json:"type,omitempty"` - Value *int32 `json:"value,omitempty"` -} - -// TagDetails is tag details. -type TagDetails struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - TagName *string `json:"tagName,omitempty"` - Count *TagCount `json:"count,omitempty"` - Values *[]TagValue `json:"values,omitempty"` -} - -// TagsListResult is list of subscription tags. -type TagsListResult struct { - autorest.Response `json:"-"` - Value *[]TagDetails `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// TagsListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client TagsListResult) TagsListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// TagValue is tag information. -type TagValue struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - TagValue *string `json:"tagValue,omitempty"` - Count *TagCount `json:"count,omitempty"` -} - -// TargetResource is target resource. -type TargetResource struct { - ID *string `json:"id,omitempty"` - ResourceName *string `json:"resourceName,omitempty"` - ResourceType *string `json:"resourceType,omitempty"` -} - -// TemplateLink is entity representing the reference to the template. -type TemplateLink struct { - URI *string `json:"uri,omitempty"` - ContentVersion *string `json:"contentVersion,omitempty"` -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/resourcesgroup.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/resourcesgroup.go deleted file mode 100644 index a7179dda8..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/resourcesgroup.go +++ /dev/null @@ -1,898 +0,0 @@ -package resources - -// Copyright (c) Microsoft and contributors. All rights reserved. 
-// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" - "net/http" -) - -// GroupClient is the provides operations for working with resources and -// resource groups. -type GroupClient struct { - ManagementClient -} - -// NewGroupClient creates an instance of the GroupClient client. -func NewGroupClient(subscriptionID string) GroupClient { - return NewGroupClientWithBaseURI(DefaultBaseURI, subscriptionID) -} - -// NewGroupClientWithBaseURI creates an instance of the GroupClient client. -func NewGroupClientWithBaseURI(baseURI string, subscriptionID string) GroupClient { - return GroupClient{NewWithBaseURI(baseURI, subscriptionID)} -} - -// CheckExistence checks whether a resource exists. -// -// resourceGroupName is the name of the resource group containing the resource -// to check. The name is case insensitive. resourceProviderNamespace is the -// resource provider of the resource to check. parentResourcePath is the parent -// resource identity. resourceType is the resource type. resourceName is the -// name of the resource to check whether it exists. -func (client GroupClient) CheckExistence(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result autorest.Response, err error) { - if err := validation.Validate([]validation.Validation{ - {TargetValue: resourceGroupName, - Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, - {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupClient", "CheckExistence") - } - - req, err := client.CheckExistencePreparer(resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CheckExistence", nil, "Failure preparing request") - return - } - - resp, err := client.CheckExistenceSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CheckExistence", resp, "Failure sending request") - return - } - - result, err = client.CheckExistenceResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CheckExistence", resp, "Failure responding to request") - } - - return -} - -// CheckExistencePreparer prepares the CheckExistence request. 
-func (client GroupClient) CheckExistencePreparer(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "parentResourcePath": parentResourcePath, - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "resourceName": autorest.Encode("path", resourceName), - "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), - "resourceType": resourceType, - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsHead(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// CheckExistenceSender sends the CheckExistence request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) CheckExistenceSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// CheckExistenceResponder handles the response to the CheckExistence request. The method always -// closes the http.Response Body. -func (client GroupClient) CheckExistenceResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusNotFound), - autorest.ByClosing()) - result.Response = resp - return -} - -// CheckExistenceByID checks by ID whether a resource exists. -// -// resourceID is the fully qualified ID of the resource, including the resource -// name and resource type. Use the format, -// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} -func (client GroupClient) CheckExistenceByID(resourceID string) (result autorest.Response, err error) { - req, err := client.CheckExistenceByIDPreparer(resourceID) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CheckExistenceByID", nil, "Failure preparing request") - return - } - - resp, err := client.CheckExistenceByIDSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CheckExistenceByID", resp, "Failure sending request") - return - } - - result, err = client.CheckExistenceByIDResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CheckExistenceByID", resp, "Failure responding to request") - } - - return -} - -// CheckExistenceByIDPreparer prepares the CheckExistenceByID request. 
-func (client GroupClient) CheckExistenceByIDPreparer(resourceID string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceId": resourceID, - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsHead(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/{resourceId}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// CheckExistenceByIDSender sends the CheckExistenceByID request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) CheckExistenceByIDSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// CheckExistenceByIDResponder handles the response to the CheckExistenceByID request. The method always -// closes the http.Response Body. -func (client GroupClient) CheckExistenceByIDResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusNotFound), - autorest.ByClosing()) - result.Response = resp - return -} - -// CreateOrUpdate creates a resource. This method may poll for completion. -// Polling can be canceled by passing the cancel channel argument. The channel -// will be used to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group for the resource. The -// name is case insensitive. resourceProviderNamespace is the namespace of the -// resource provider. parentResourcePath is the parent resource identity. -// resourceType is the resource type of the resource to create. resourceName is -// the name of the resource to create. parameters is parameters for creating or -// updating the resource. 
-func (client GroupClient) CreateOrUpdate(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource, cancel <-chan struct{}) (<-chan GenericResource, <-chan error) { - resultChan := make(chan GenericResource, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: resourceGroupName, - Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, - {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}, - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Pattern, Rule: `^[-\w\._,\(\)]+$`, Chain: nil}}}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "resources.GroupClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result GenericResource - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client GroupClient) CreateOrUpdatePreparer(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "parentResourcePath": parentResourcePath, - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "resourceName": autorest.Encode("path", resourceName), - "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), - "resourceType": resourceType, - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateSender sends the CreateOrUpdate request. 
The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always -// closes the http.Response Body. -func (client GroupClient) CreateOrUpdateResponder(resp *http.Response) (result GenericResource, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// CreateOrUpdateByID create a resource by ID. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceID is the fully qualified ID of the resource, including the resource -// name and resource type. Use the format, -// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} -// parameters is create or update resource parameters. -func (client GroupClient) CreateOrUpdateByID(resourceID string, parameters GenericResource, cancel <-chan struct{}) (<-chan GenericResource, <-chan error) { - resultChan := make(chan GenericResource, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Pattern, Rule: `^[-\w\._,\(\)]+$`, Chain: nil}}}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "resources.GroupClient", "CreateOrUpdateByID") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result GenericResource - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdateByIDPreparer(resourceID, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CreateOrUpdateByID", nil, "Failure preparing request") - return - } - - resp, err := client.CreateOrUpdateByIDSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CreateOrUpdateByID", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateByIDResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "CreateOrUpdateByID", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// CreateOrUpdateByIDPreparer prepares the CreateOrUpdateByID request. 
-func (client GroupClient) CreateOrUpdateByIDPreparer(resourceID string, parameters GenericResource, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceId": resourceID, - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPut(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/{resourceId}", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// CreateOrUpdateByIDSender sends the CreateOrUpdateByID request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) CreateOrUpdateByIDSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// CreateOrUpdateByIDResponder handles the response to the CreateOrUpdateByID request. The method always -// closes the http.Response Body. -func (client GroupClient) CreateOrUpdateByIDResponder(resp *http.Response) (result GenericResource, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK, http.StatusAccepted), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// Delete deletes a resource. This method may poll for completion. Polling can -// be canceled by passing the cancel channel argument. The channel will be used -// to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group that contains the -// resource to delete. The name is case insensitive. resourceProviderNamespace -// is the namespace of the resource provider. parentResourcePath is the parent -// resource identity. resourceType is the resource type. resourceName is the -// name of the resource to delete. 
-func (client GroupClient) Delete(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: resourceGroupName, - Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, - {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "resources.GroupClient", "Delete") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "Delete", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "resources.GroupClient", "Delete", resp, "Failure sending request") - return - } - - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeletePreparer prepares the Delete request. -func (client GroupClient) DeletePreparer(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "parentResourcePath": parentResourcePath, - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "resourceName": autorest.Encode("path", resourceName), - "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), - "resourceType": resourceType, - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteSender sends the Delete request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteResponder handles the response to the Delete request. The method always -// closes the http.Response Body. 
-func (client GroupClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusAccepted), - autorest.ByClosing()) - result.Response = resp - return -} - -// DeleteByID deletes a resource by ID. This method may poll for completion. -// Polling can be canceled by passing the cancel channel argument. The channel -// will be used to cancel polling and any outstanding HTTP requests. -// -// resourceID is the fully qualified ID of the resource, including the resource -// name and resource type. Use the format, -// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} -func (client GroupClient) DeleteByID(resourceID string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeleteByIDPreparer(resourceID, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "DeleteByID", nil, "Failure preparing request") - return - } - - resp, err := client.DeleteByIDSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "resources.GroupClient", "DeleteByID", resp, "Failure sending request") - return - } - - result, err = client.DeleteByIDResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "DeleteByID", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// DeleteByIDPreparer prepares the DeleteByID request. -func (client GroupClient) DeleteByIDPreparer(resourceID string, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceId": resourceID, - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsDelete(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/{resourceId}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// DeleteByIDSender sends the DeleteByID request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) DeleteByIDSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// DeleteByIDResponder handles the response to the DeleteByID request. The method always -// closes the http.Response Body. -func (client GroupClient) DeleteByIDResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusAccepted), - autorest.ByClosing()) - result.Response = resp - return -} - -// Get gets a resource. -// -// resourceGroupName is the name of the resource group containing the resource -// to get. The name is case insensitive. resourceProviderNamespace is the -// namespace of the resource provider. parentResourcePath is the parent -// resource identity. resourceType is the resource type of the resource. 
-// resourceName is the name of the resource to get. -func (client GroupClient) Get(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result GenericResource, err error) { - if err := validation.Validate([]validation.Validation{ - {TargetValue: resourceGroupName, - Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, - {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupClient", "Get") - } - - req, err := client.GetPreparer(resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "Get", nil, "Failure preparing request") - return - } - - resp, err := client.GetSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.GroupClient", "Get", resp, "Failure sending request") - return - } - - result, err = client.GetResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "Get", resp, "Failure responding to request") - } - - return -} - -// GetPreparer prepares the Get request. -func (client GroupClient) GetPreparer(resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "parentResourcePath": parentResourcePath, - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "resourceName": autorest.Encode("path", resourceName), - "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), - "resourceType": resourceType, - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetSender sends the Get request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetResponder handles the response to the Get request. The method always -// closes the http.Response Body. -func (client GroupClient) GetResponder(resp *http.Response) (result GenericResource, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// GetByID gets a resource by ID. -// -// resourceID is the fully qualified ID of the resource, including the resource -// name and resource type. 
Use the format, -// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} -func (client GroupClient) GetByID(resourceID string) (result GenericResource, err error) { - req, err := client.GetByIDPreparer(resourceID) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "GetByID", nil, "Failure preparing request") - return - } - - resp, err := client.GetByIDSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.GroupClient", "GetByID", resp, "Failure sending request") - return - } - - result, err = client.GetByIDResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "GetByID", resp, "Failure responding to request") - } - - return -} - -// GetByIDPreparer prepares the GetByID request. -func (client GroupClient) GetByIDPreparer(resourceID string) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceId": resourceID, - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/{resourceId}", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// GetByIDSender sends the GetByID request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) GetByIDSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// GetByIDResponder handles the response to the GetByID request. The method always -// closes the http.Response Body. -func (client GroupClient) GetByIDResponder(resp *http.Response) (result GenericResource, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// List get all the resources in a subscription. -// -// filter is the filter to apply on the operation. expand is the $expand query -// parameter. top is the number of results to return. If null is passed, -// returns all resource groups. -func (client GroupClient) List(filter string, expand string, top *int32) (result ListResult, err error) { - req, err := client.ListPreparer(filter, expand, top) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "List", nil, "Failure preparing request") - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.GroupClient", "List", resp, "Failure sending request") - return - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "List", resp, "Failure responding to request") - } - - return -} - -// ListPreparer prepares the List request. 
-func (client GroupClient) ListPreparer(filter string, expand string, top *int32) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(filter) > 0 { - queryParameters["$filter"] = autorest.Encode("query", filter) - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - if top != nil { - queryParameters["$top"] = autorest.Encode("query", *top) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resources", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListSender sends the List request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResponder handles the response to the List request. The method always -// closes the http.Response Body. -func (client GroupClient) ListResponder(resp *http.Response) (result ListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListNextResults retrieves the next set of results, if any. -func (client GroupClient) ListNextResults(lastResults ListResult) (result ListResult, err error) { - req, err := lastResults.ListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "resources.GroupClient", "List", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "resources.GroupClient", "List", resp, "Failure sending next results request") - } - - result, err = client.ListResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "List", resp, "Failure responding to next results request") - } - - return -} - -// MoveResources the resources to move must be in the same source resource -// group. The target resource group may be in a different subscription. When -// moving resources, both the source group and the target group are locked for -// the duration of the operation. Write and delete operations are blocked on -// the groups until the move completes. This method may poll for completion. -// Polling can be canceled by passing the cancel channel argument. The channel -// will be used to cancel polling and any outstanding HTTP requests. -// -// sourceResourceGroupName is the name of the resource group containing the -// rsources to move. parameters is parameters for moving resources. 
-func (client GroupClient) MoveResources(sourceResourceGroupName string, parameters MoveInfo, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: sourceResourceGroupName, - Constraints: []validation.Constraint{{Target: "sourceResourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, - {Target: "sourceResourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "sourceResourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "resources.GroupClient", "MoveResources") - close(errChan) - close(resultChan) - return resultChan, errChan - } - - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.MoveResourcesPreparer(sourceResourceGroupName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "MoveResources", nil, "Failure preparing request") - return - } - - resp, err := client.MoveResourcesSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "resources.GroupClient", "MoveResources", resp, "Failure sending request") - return - } - - result, err = client.MoveResourcesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupClient", "MoveResources", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - -// MoveResourcesPreparer prepares the MoveResources request. -func (client GroupClient) MoveResourcesPreparer(sourceResourceGroupName string, parameters MoveInfo, cancel <-chan struct{}) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "sourceResourceGroupName": autorest.Encode("path", sourceResourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - - preparer := autorest.CreatePreparer( - autorest.AsJSON(), - autorest.AsPost(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{sourceResourceGroupName}/moveResources", pathParameters), - autorest.WithJSON(parameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) -} - -// MoveResourcesSender sends the MoveResources request. The method will close the -// http.Response Body if it receives an error. -func (client GroupClient) MoveResourcesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) -} - -// MoveResourcesResponder handles the response to the MoveResources request. The method always -// closes the http.Response Body. 
-func (client GroupClient) MoveResourcesResponder(resp *http.Response) (result autorest.Response, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), - autorest.ByClosing()) - result.Response = resp - return -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/models.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/models.go deleted file mode 100644 index e2cf1743f..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/models.go +++ /dev/null @@ -1,132 +0,0 @@ -package subscriptions - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/to" - "net/http" -) - -// SpendingLimit enumerates the values for spending limit. -type SpendingLimit string - -const ( - // CurrentPeriodOff specifies the current period off state for spending - // limit. - CurrentPeriodOff SpendingLimit = "CurrentPeriodOff" - // Off specifies the off state for spending limit. - Off SpendingLimit = "Off" - // On specifies the on state for spending limit. - On SpendingLimit = "On" -) - -// State enumerates the values for state. -type State string - -const ( - // Deleted specifies the deleted state for state. - Deleted State = "Deleted" - // Disabled specifies the disabled state for state. - Disabled State = "Disabled" - // Enabled specifies the enabled state for state. - Enabled State = "Enabled" - // PastDue specifies the past due state for state. - PastDue State = "PastDue" - // Warned specifies the warned state for state. - Warned State = "Warned" -) - -// ListResult is subscription list operation response. -type ListResult struct { - autorest.Response `json:"-"` - Value *[]Subscription `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// ListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client ListResult) ListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} - -// Location is location information. -type Location struct { - ID *string `json:"id,omitempty"` - SubscriptionID *string `json:"subscriptionId,omitempty"` - Name *string `json:"name,omitempty"` - DisplayName *string `json:"displayName,omitempty"` - Latitude *string `json:"latitude,omitempty"` - Longitude *string `json:"longitude,omitempty"` -} - -// LocationListResult is location list operation response. 
-type LocationListResult struct { - autorest.Response `json:"-"` - Value *[]Location `json:"value,omitempty"` -} - -// Policies is subscription policies. -type Policies struct { - LocationPlacementID *string `json:"locationPlacementId,omitempty"` - QuotaID *string `json:"quotaId,omitempty"` - SpendingLimit SpendingLimit `json:"spendingLimit,omitempty"` -} - -// Subscription is subscription information. -type Subscription struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - SubscriptionID *string `json:"subscriptionId,omitempty"` - DisplayName *string `json:"displayName,omitempty"` - State State `json:"state,omitempty"` - SubscriptionPolicies *Policies `json:"subscriptionPolicies,omitempty"` - AuthorizationSource *string `json:"authorizationSource,omitempty"` -} - -// TenantIDDescription is tenant Id information. -type TenantIDDescription struct { - ID *string `json:"id,omitempty"` - TenantID *string `json:"tenantId,omitempty"` -} - -// TenantListResult is tenant Ids information. -type TenantListResult struct { - autorest.Response `json:"-"` - Value *[]TenantIDDescription `json:"value,omitempty"` - NextLink *string `json:"nextLink,omitempty"` -} - -// TenantListResultPreparer prepares a request to retrieve the next set of results. It returns -// nil if no more results exist. -func (client TenantListResult) TenantListResultPreparer() (*http.Request, error) { - if client.NextLink == nil || len(to.String(client.NextLink)) <= 0 { - return nil, nil - } - return autorest.Prepare(&http.Request{}, - autorest.AsJSON(), - autorest.AsGet(), - autorest.WithBaseURL(to.String(client.NextLink))) -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/models.go b/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/models.go deleted file mode 100644 index 2e2030184..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/models.go +++ /dev/null @@ -1,452 +0,0 @@ -package storage - -// Copyright (c) Microsoft and contributors. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// -// See the License for the specific language governing permissions and -// limitations under the License. -// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. - -import ( - "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/date" -) - -// AccessTier enumerates the values for access tier. -type AccessTier string - -const ( - // Cool specifies the cool state for access tier. - Cool AccessTier = "Cool" - // Hot specifies the hot state for access tier. - Hot AccessTier = "Hot" -) - -// AccountStatus enumerates the values for account status. -type AccountStatus string - -const ( - // Available specifies the available state for account status. - Available AccountStatus = "available" - // Unavailable specifies the unavailable state for account status. - Unavailable AccountStatus = "unavailable" -) - -// HTTPProtocol enumerates the values for http protocol. 
-type HTTPProtocol string - -const ( - // HTTPS specifies the https state for http protocol. - HTTPS HTTPProtocol = "https" - // Httpshttp specifies the httpshttp state for http protocol. - Httpshttp HTTPProtocol = "https,http" -) - -// KeyPermission enumerates the values for key permission. -type KeyPermission string - -const ( - // Full specifies the full state for key permission. - Full KeyPermission = "Full" - // Read specifies the read state for key permission. - Read KeyPermission = "Read" -) - -// Kind enumerates the values for kind. -type Kind string - -const ( - // BlobStorage specifies the blob storage state for kind. - BlobStorage Kind = "BlobStorage" - // Storage specifies the storage state for kind. - Storage Kind = "Storage" -) - -// Permissions enumerates the values for permissions. -type Permissions string - -const ( - // A specifies the a state for permissions. - A Permissions = "a" - // C specifies the c state for permissions. - C Permissions = "c" - // D specifies the d state for permissions. - D Permissions = "d" - // L specifies the l state for permissions. - L Permissions = "l" - // P specifies the p state for permissions. - P Permissions = "p" - // R specifies the r state for permissions. - R Permissions = "r" - // U specifies the u state for permissions. - U Permissions = "u" - // W specifies the w state for permissions. - W Permissions = "w" -) - -// Permissions1 enumerates the values for permissions 1. -type Permissions1 string - -const ( - // Permissions1A specifies the permissions 1a state for permissions 1. - Permissions1A Permissions1 = "a" - // Permissions1C specifies the permissions 1c state for permissions 1. - Permissions1C Permissions1 = "c" - // Permissions1D specifies the permissions 1d state for permissions 1. - Permissions1D Permissions1 = "d" - // Permissions1L specifies the permissions 1l state for permissions 1. - Permissions1L Permissions1 = "l" - // Permissions1P specifies the permissions 1p state for permissions 1. - Permissions1P Permissions1 = "p" - // Permissions1R specifies the permissions 1r state for permissions 1. - Permissions1R Permissions1 = "r" - // Permissions1U specifies the permissions 1u state for permissions 1. - Permissions1U Permissions1 = "u" - // Permissions1W specifies the permissions 1w state for permissions 1. - Permissions1W Permissions1 = "w" -) - -// ProvisioningState enumerates the values for provisioning state. -type ProvisioningState string - -const ( - // Creating specifies the creating state for provisioning state. - Creating ProvisioningState = "Creating" - // ResolvingDNS specifies the resolving dns state for provisioning state. - ResolvingDNS ProvisioningState = "ResolvingDNS" - // Succeeded specifies the succeeded state for provisioning state. - Succeeded ProvisioningState = "Succeeded" -) - -// Reason enumerates the values for reason. -type Reason string - -const ( - // AccountNameInvalid specifies the account name invalid state for reason. - AccountNameInvalid Reason = "AccountNameInvalid" - // AlreadyExists specifies the already exists state for reason. - AlreadyExists Reason = "AlreadyExists" -) - -// ResourceEnum enumerates the values for resource enum. -type ResourceEnum string - -const ( - // ResourceEnumB specifies the resource enum b state for resource enum. - ResourceEnumB ResourceEnum = "b" - // ResourceEnumC specifies the resource enum c state for resource enum. - ResourceEnumC ResourceEnum = "c" - // ResourceEnumF specifies the resource enum f state for resource enum. 
- ResourceEnumF ResourceEnum = "f" - // ResourceEnumS specifies the resource enum s state for resource enum. - ResourceEnumS ResourceEnum = "s" -) - -// ResourceTypes enumerates the values for resource types. -type ResourceTypes string - -const ( - // ResourceTypesC specifies the resource types c state for resource types. - ResourceTypesC ResourceTypes = "c" - // ResourceTypesO specifies the resource types o state for resource types. - ResourceTypesO ResourceTypes = "o" - // ResourceTypesS specifies the resource types s state for resource types. - ResourceTypesS ResourceTypes = "s" -) - -// Services enumerates the values for services. -type Services string - -const ( - // B specifies the b state for services. - B Services = "b" - // F specifies the f state for services. - F Services = "f" - // Q specifies the q state for services. - Q Services = "q" - // T specifies the t state for services. - T Services = "t" -) - -// SkuName enumerates the values for sku name. -type SkuName string - -const ( - // PremiumLRS specifies the premium lrs state for sku name. - PremiumLRS SkuName = "Premium_LRS" - // StandardGRS specifies the standard grs state for sku name. - StandardGRS SkuName = "Standard_GRS" - // StandardLRS specifies the standard lrs state for sku name. - StandardLRS SkuName = "Standard_LRS" - // StandardRAGRS specifies the standard ragrs state for sku name. - StandardRAGRS SkuName = "Standard_RAGRS" - // StandardZRS specifies the standard zrs state for sku name. - StandardZRS SkuName = "Standard_ZRS" -) - -// SkuTier enumerates the values for sku tier. -type SkuTier string - -const ( - // Premium specifies the premium state for sku tier. - Premium SkuTier = "Premium" - // Standard specifies the standard state for sku tier. - Standard SkuTier = "Standard" -) - -// UsageUnit enumerates the values for usage unit. -type UsageUnit string - -const ( - // Bytes specifies the bytes state for usage unit. - Bytes UsageUnit = "Bytes" - // BytesPerSecond specifies the bytes per second state for usage unit. - BytesPerSecond UsageUnit = "BytesPerSecond" - // Count specifies the count state for usage unit. - Count UsageUnit = "Count" - // CountsPerSecond specifies the counts per second state for usage unit. - CountsPerSecond UsageUnit = "CountsPerSecond" - // Percent specifies the percent state for usage unit. - Percent UsageUnit = "Percent" - // Seconds specifies the seconds state for usage unit. - Seconds UsageUnit = "Seconds" -) - -// Account is the storage account. -type Account struct { - autorest.Response `json:"-"` - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - Sku *Sku `json:"sku,omitempty"` - Kind Kind `json:"kind,omitempty"` - *AccountProperties `json:"properties,omitempty"` -} - -// AccountCheckNameAvailabilityParameters is the parameters used to check the -// availabity of the storage account name. -type AccountCheckNameAvailabilityParameters struct { - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` -} - -// AccountCreateParameters is the parameters used when creating a storage -// account. 
-type AccountCreateParameters struct { - Sku *Sku `json:"sku,omitempty"` - Kind Kind `json:"kind,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *AccountPropertiesCreateParameters `json:"properties,omitempty"` -} - -// AccountKey is an access key for the storage account. -type AccountKey struct { - KeyName *string `json:"keyName,omitempty"` - Value *string `json:"value,omitempty"` - Permissions KeyPermission `json:"permissions,omitempty"` -} - -// AccountListKeysResult is the response from the ListKeys operation. -type AccountListKeysResult struct { - autorest.Response `json:"-"` - Keys *[]AccountKey `json:"keys,omitempty"` -} - -// AccountListResult is the response from the List Storage Accounts operation. -type AccountListResult struct { - autorest.Response `json:"-"` - Value *[]Account `json:"value,omitempty"` -} - -// AccountProperties is properties of the storage account. -type AccountProperties struct { - ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` - PrimaryEndpoints *Endpoints `json:"primaryEndpoints,omitempty"` - PrimaryLocation *string `json:"primaryLocation,omitempty"` - StatusOfPrimary AccountStatus `json:"statusOfPrimary,omitempty"` - LastGeoFailoverTime *date.Time `json:"lastGeoFailoverTime,omitempty"` - SecondaryLocation *string `json:"secondaryLocation,omitempty"` - StatusOfSecondary AccountStatus `json:"statusOfSecondary,omitempty"` - CreationTime *date.Time `json:"creationTime,omitempty"` - CustomDomain *CustomDomain `json:"customDomain,omitempty"` - SecondaryEndpoints *Endpoints `json:"secondaryEndpoints,omitempty"` - Encryption *Encryption `json:"encryption,omitempty"` - AccessTier AccessTier `json:"accessTier,omitempty"` - EnableHTTPSTrafficOnly *bool `json:"supportsHttpsTrafficOnly,omitempty"` -} - -// AccountPropertiesCreateParameters is the parameters used to create the -// storage account. -type AccountPropertiesCreateParameters struct { - CustomDomain *CustomDomain `json:"customDomain,omitempty"` - Encryption *Encryption `json:"encryption,omitempty"` - AccessTier AccessTier `json:"accessTier,omitempty"` - EnableHTTPSTrafficOnly *bool `json:"supportsHttpsTrafficOnly,omitempty"` -} - -// AccountPropertiesUpdateParameters is the parameters used when updating a -// storage account. -type AccountPropertiesUpdateParameters struct { - CustomDomain *CustomDomain `json:"customDomain,omitempty"` - Encryption *Encryption `json:"encryption,omitempty"` - AccessTier AccessTier `json:"accessTier,omitempty"` - EnableHTTPSTrafficOnly *bool `json:"supportsHttpsTrafficOnly,omitempty"` -} - -// AccountRegenerateKeyParameters is the parameters used to regenerate the -// storage account key. -type AccountRegenerateKeyParameters struct { - KeyName *string `json:"keyName,omitempty"` -} - -// AccountSasParameters is the parameters to list SAS credentials of a storage -// account. 
-type AccountSasParameters struct { - Services Services `json:"signedServices,omitempty"` - ResourceTypes ResourceTypes `json:"signedResourceTypes,omitempty"` - Permissions Permissions `json:"signedPermission,omitempty"` - IPAddressOrRange *string `json:"signedIp,omitempty"` - Protocols HTTPProtocol `json:"signedProtocol,omitempty"` - SharedAccessStartTime *date.Time `json:"signedStart,omitempty"` - SharedAccessExpiryTime *date.Time `json:"signedExpiry,omitempty"` - KeyToSign *string `json:"keyToSign,omitempty"` -} - -// AccountUpdateParameters is the parameters that can be provided when updating -// the storage account properties. -type AccountUpdateParameters struct { - Sku *Sku `json:"sku,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` - *AccountPropertiesUpdateParameters `json:"properties,omitempty"` -} - -// CheckNameAvailabilityResult is the CheckNameAvailability operation response. -type CheckNameAvailabilityResult struct { - autorest.Response `json:"-"` - NameAvailable *bool `json:"nameAvailable,omitempty"` - Reason Reason `json:"reason,omitempty"` - Message *string `json:"message,omitempty"` -} - -// CustomDomain is the custom domain assigned to this storage account. This can -// be set via Update. -type CustomDomain struct { - Name *string `json:"name,omitempty"` - UseSubDomain *bool `json:"useSubDomain,omitempty"` -} - -// Encryption is the encryption settings on the storage account. -type Encryption struct { - Services *EncryptionServices `json:"services,omitempty"` - KeySource *string `json:"keySource,omitempty"` -} - -// EncryptionService is a service that allows server-side encryption to be -// used. -type EncryptionService struct { - Enabled *bool `json:"enabled,omitempty"` - LastEnabledTime *date.Time `json:"lastEnabledTime,omitempty"` -} - -// EncryptionServices is a list of services that support encryption. -type EncryptionServices struct { - Blob *EncryptionService `json:"blob,omitempty"` - File *EncryptionService `json:"file,omitempty"` - Table *EncryptionService `json:"table,omitempty"` - Queue *EncryptionService `json:"queue,omitempty"` -} - -// Endpoints is the URIs that are used to perform a retrieval of a public blob, -// queue, or table object. -type Endpoints struct { - Blob *string `json:"blob,omitempty"` - Queue *string `json:"queue,omitempty"` - Table *string `json:"table,omitempty"` - File *string `json:"file,omitempty"` -} - -// ListAccountSasResponse is the List SAS credentials operation response. -type ListAccountSasResponse struct { - autorest.Response `json:"-"` - AccountSasToken *string `json:"accountSasToken,omitempty"` -} - -// ListServiceSasResponse is the List service SAS credentials operation -// response. -type ListServiceSasResponse struct { - autorest.Response `json:"-"` - ServiceSasToken *string `json:"serviceSasToken,omitempty"` -} - -// Resource is describes a storage resource. -type Resource struct { - ID *string `json:"id,omitempty"` - Name *string `json:"name,omitempty"` - Type *string `json:"type,omitempty"` - Location *string `json:"location,omitempty"` - Tags *map[string]*string `json:"tags,omitempty"` -} - -// ServiceSasParameters is the parameters to list service SAS credentials of a -// speicific resource. 
-type ServiceSasParameters struct { - CanonicalizedResource *string `json:"canonicalizedResource,omitempty"` - Resource Resource `json:"signedResource,omitempty"` - Permissions Permissions `json:"signedPermission,omitempty"` - IPAddressOrRange *string `json:"signedIp,omitempty"` - Protocols HTTPProtocol `json:"signedProtocol,omitempty"` - SharedAccessStartTime *date.Time `json:"signedStart,omitempty"` - SharedAccessExpiryTime *date.Time `json:"signedExpiry,omitempty"` - Identifier *string `json:"signedIdentifier,omitempty"` - PartitionKeyStart *string `json:"startPk,omitempty"` - PartitionKeyEnd *string `json:"endPk,omitempty"` - RowKeyStart *string `json:"startRk,omitempty"` - RowKeyEnd *string `json:"endRk,omitempty"` - KeyToSign *string `json:"keyToSign,omitempty"` - CacheControl *string `json:"rscc,omitempty"` - ContentDisposition *string `json:"rscd,omitempty"` - ContentEncoding *string `json:"rsce,omitempty"` - ContentLanguage *string `json:"rscl,omitempty"` - ContentType *string `json:"rsct,omitempty"` -} - -// Sku is the SKU of the storage account. -type Sku struct { - Name SkuName `json:"name,omitempty"` - Tier SkuTier `json:"tier,omitempty"` -} - -// Usage is describes Storage Resource Usage. -type Usage struct { - Unit UsageUnit `json:"unit,omitempty"` - CurrentValue *int32 `json:"currentValue,omitempty"` - Limit *int32 `json:"limit,omitempty"` - Name *UsageName `json:"name,omitempty"` -} - -// UsageListResult is the response from the List Usages operation. -type UsageListResult struct { - autorest.Response `json:"-"` - Value *[]Usage `json:"value,omitempty"` -} - -// UsageName is the usage names that can be used; currently limited to -// StorageAccount. -type UsageName struct { - Value *string `json:"value,omitempty"` - LocalizedValue *string `json:"localizedValue,omitempty"` -} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/availabilitysets.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/availabilitysets.go similarity index 61% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/availabilitysets.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/availabilitysets.go index 738c2c61e..1f0007ff5 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/availabilitysets.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/availabilitysets.go @@ -14,40 +14,38 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// AvailabilitySetsClient is the the Compute Management Client. +// AvailabilitySetsClient is the compute Client type AvailabilitySetsClient struct { - ManagementClient + BaseClient } -// NewAvailabilitySetsClient creates an instance of the AvailabilitySetsClient -// client. +// NewAvailabilitySetsClient creates an instance of the AvailabilitySetsClient client. 
func NewAvailabilitySetsClient(subscriptionID string) AvailabilitySetsClient { return NewAvailabilitySetsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewAvailabilitySetsClientWithBaseURI creates an instance of the -// AvailabilitySetsClient client. +// NewAvailabilitySetsClientWithBaseURI creates an instance of the AvailabilitySetsClient client. func NewAvailabilitySetsClientWithBaseURI(baseURI string, subscriptionID string) AvailabilitySetsClient { return AvailabilitySetsClient{NewWithBaseURI(baseURI, subscriptionID)} } // CreateOrUpdate create or update an availability set. -// -// resourceGroupName is the name of the resource group. name is the name of the -// availability set. parameters is parameters supplied to the Create -// Availability Set operation. -func (client AvailabilitySetsClient) CreateOrUpdate(resourceGroupName string, name string, parameters AvailabilitySet) (result AvailabilitySet, err error) { - req, err := client.CreateOrUpdatePreparer(resourceGroupName, name, parameters) +// Parameters: +// resourceGroupName - the name of the resource group. +// availabilitySetName - the name of the availability set. +// parameters - parameters supplied to the Create Availability Set operation. +func (client AvailabilitySetsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, availabilitySetName string, parameters AvailabilitySet) (result AvailabilitySet, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, availabilitySetName, parameters) if err != nil { err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "CreateOrUpdate", nil, "Failure preparing request") return @@ -69,32 +67,33 @@ func (client AvailabilitySetsClient) CreateOrUpdate(resourceGroupName string, na } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client AvailabilitySetsClient) CreateOrUpdatePreparer(resourceGroupName string, name string, parameters AvailabilitySet) (*http.Request, error) { +func (client AvailabilitySetsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, availabilitySetName string, parameters AvailabilitySet) (*http.Request, error) { pathParameters := map[string]interface{}{ - "name": autorest.Encode("path", name), - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "availabilitySetName": autorest.Encode("path", availabilitySetName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/availabilitySets/{name}", pathParameters), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/availabilitySets/{availabilitySetName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. 
The method will close the // http.Response Body if it receives an error. func (client AvailabilitySetsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -111,11 +110,11 @@ func (client AvailabilitySetsClient) CreateOrUpdateResponder(resp *http.Response } // Delete delete an availability set. -// -// resourceGroupName is the name of the resource group. availabilitySetName is -// the name of the availability set. -func (client AvailabilitySetsClient) Delete(resourceGroupName string, availabilitySetName string) (result OperationStatusResponse, err error) { - req, err := client.DeletePreparer(resourceGroupName, availabilitySetName) +// Parameters: +// resourceGroupName - the name of the resource group. +// availabilitySetName - the name of the availability set. +func (client AvailabilitySetsClient) Delete(ctx context.Context, resourceGroupName string, availabilitySetName string) (result OperationStatusResponse, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, availabilitySetName) if err != nil { err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "Delete", nil, "Failure preparing request") return @@ -137,14 +136,14 @@ func (client AvailabilitySetsClient) Delete(resourceGroupName string, availabili } // DeletePreparer prepares the Delete request. -func (client AvailabilitySetsClient) DeletePreparer(resourceGroupName string, availabilitySetName string) (*http.Request, error) { +func (client AvailabilitySetsClient) DeletePreparer(ctx context.Context, resourceGroupName string, availabilitySetName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "availabilitySetName": autorest.Encode("path", availabilitySetName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -154,13 +153,14 @@ func (client AvailabilitySetsClient) DeletePreparer(resourceGroupName string, av autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/availabilitySets/{availabilitySetName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client AvailabilitySetsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // DeleteResponder handles the response to the Delete request. The method always @@ -177,11 +177,11 @@ func (client AvailabilitySetsClient) DeleteResponder(resp *http.Response) (resul } // Get retrieves information about an availability set. -// -// resourceGroupName is the name of the resource group. availabilitySetName is -// the name of the availability set. 
-func (client AvailabilitySetsClient) Get(resourceGroupName string, availabilitySetName string) (result AvailabilitySet, err error) { - req, err := client.GetPreparer(resourceGroupName, availabilitySetName) +// Parameters: +// resourceGroupName - the name of the resource group. +// availabilitySetName - the name of the availability set. +func (client AvailabilitySetsClient) Get(ctx context.Context, resourceGroupName string, availabilitySetName string) (result AvailabilitySet, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, availabilitySetName) if err != nil { err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "Get", nil, "Failure preparing request") return @@ -203,14 +203,14 @@ func (client AvailabilitySetsClient) Get(resourceGroupName string, availabilityS } // GetPreparer prepares the Get request. -func (client AvailabilitySetsClient) GetPreparer(resourceGroupName string, availabilitySetName string) (*http.Request, error) { +func (client AvailabilitySetsClient) GetPreparer(ctx context.Context, resourceGroupName string, availabilitySetName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "availabilitySetName": autorest.Encode("path", availabilitySetName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -220,13 +220,14 @@ func (client AvailabilitySetsClient) GetPreparer(resourceGroupName string, avail autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/availabilitySets/{availabilitySetName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client AvailabilitySetsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -243,10 +244,10 @@ func (client AvailabilitySetsClient) GetResponder(resp *http.Response) (result A } // List lists all availability sets in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client AvailabilitySetsClient) List(resourceGroupName string) (result AvailabilitySetListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. +func (client AvailabilitySetsClient) List(ctx context.Context, resourceGroupName string) (result AvailabilitySetListResult, err error) { + req, err := client.ListPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "List", nil, "Failure preparing request") return @@ -268,13 +269,13 @@ func (client AvailabilitySetsClient) List(resourceGroupName string) (result Avai } // ListPreparer prepares the List request. 
-func (client AvailabilitySetsClient) ListPreparer(resourceGroupName string) (*http.Request, error) { +func (client AvailabilitySetsClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -284,13 +285,14 @@ func (client AvailabilitySetsClient) ListPreparer(resourceGroupName string) (*ht autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/availabilitySets", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client AvailabilitySetsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -306,13 +308,13 @@ func (client AvailabilitySetsClient) ListResponder(resp *http.Response) (result return } -// ListAvailableSizes lists all available virtual machine sizes that can be -// used to create a new virtual machine in an existing availability set. -// -// resourceGroupName is the name of the resource group. availabilitySetName is -// the name of the availability set. -func (client AvailabilitySetsClient) ListAvailableSizes(resourceGroupName string, availabilitySetName string) (result VirtualMachineSizeListResult, err error) { - req, err := client.ListAvailableSizesPreparer(resourceGroupName, availabilitySetName) +// ListAvailableSizes lists all available virtual machine sizes that can be used to create a new virtual machine in an +// existing availability set. +// Parameters: +// resourceGroupName - the name of the resource group. +// availabilitySetName - the name of the availability set. +func (client AvailabilitySetsClient) ListAvailableSizes(ctx context.Context, resourceGroupName string, availabilitySetName string) (result VirtualMachineSizeListResult, err error) { + req, err := client.ListAvailableSizesPreparer(ctx, resourceGroupName, availabilitySetName) if err != nil { err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "ListAvailableSizes", nil, "Failure preparing request") return @@ -334,14 +336,14 @@ func (client AvailabilitySetsClient) ListAvailableSizes(resourceGroupName string } // ListAvailableSizesPreparer prepares the ListAvailableSizes request. 
-func (client AvailabilitySetsClient) ListAvailableSizesPreparer(resourceGroupName string, availabilitySetName string) (*http.Request, error) { +func (client AvailabilitySetsClient) ListAvailableSizesPreparer(ctx context.Context, resourceGroupName string, availabilitySetName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "availabilitySetName": autorest.Encode("path", availabilitySetName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -351,13 +353,14 @@ func (client AvailabilitySetsClient) ListAvailableSizesPreparer(resourceGroupNam autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/availabilitySets/{availabilitySetName}/vmSizes", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListAvailableSizesSender sends the ListAvailableSizes request. The method will close the // http.Response Body if it receives an error. func (client AvailabilitySetsClient) ListAvailableSizesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListAvailableSizesResponder handles the response to the ListAvailableSizes request. The method always @@ -372,3 +375,73 @@ func (client AvailabilitySetsClient) ListAvailableSizesResponder(resp *http.Resp result.Response = autorest.Response{Response: resp} return } + +// Update update an availability set. +// Parameters: +// resourceGroupName - the name of the resource group. +// availabilitySetName - the name of the availability set. +// parameters - parameters supplied to the Update Availability Set operation. +func (client AvailabilitySetsClient) Update(ctx context.Context, resourceGroupName string, availabilitySetName string, parameters AvailabilitySetUpdate) (result AvailabilitySet, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, availabilitySetName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "Update", nil, "Failure preparing request") + return + } + + resp, err := client.UpdateSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "Update", resp, "Failure sending request") + return + } + + result, err = client.UpdateResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.AvailabilitySetsClient", "Update", resp, "Failure responding to request") + } + + return +} + +// UpdatePreparer prepares the Update request. 
+func (client AvailabilitySetsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, availabilitySetName string, parameters AvailabilitySetUpdate) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "availabilitySetName": autorest.Encode("path", availabilitySetName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/availabilitySets/{availabilitySetName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. +func (client AvailabilitySetsClient) UpdateSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client AvailabilitySetsClient) UpdateResponder(resp *http.Response) (result AvailabilitySet, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/client.go similarity index 69% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/client.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/client.go index c60452b9d..b23c9ca74 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/client.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/client.go @@ -1,7 +1,6 @@ -// Package compute implements the Azure ARM Compute service API version -// 2016-04-30-preview. +// Package compute implements the Azure ARM Compute service API version . // -// The Compute Management Client. +// Compute Client package compute // Copyright (c) Microsoft and contributors. All rights reserved. @@ -18,9 +17,8 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( "github.com/Azure/go-autorest/autorest" @@ -31,21 +29,21 @@ const ( DefaultBaseURI = "https://management.azure.com" ) -// ManagementClient is the base client for Compute. -type ManagementClient struct { +// BaseClient is the base client for Compute. 
+type BaseClient struct { autorest.Client BaseURI string SubscriptionID string } -// New creates an instance of the ManagementClient client. -func New(subscriptionID string) ManagementClient { +// New creates an instance of the BaseClient client. +func New(subscriptionID string) BaseClient { return NewWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewWithBaseURI creates an instance of the ManagementClient client. -func NewWithBaseURI(baseURI string, subscriptionID string) ManagementClient { - return ManagementClient{ +// NewWithBaseURI creates an instance of the BaseClient client. +func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient { + return BaseClient{ Client: autorest.NewClientWithUserAgent(UserAgent()), BaseURI: baseURI, SubscriptionID: subscriptionID, diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/containerservices.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/containerservices.go new file mode 100644 index 000000000..b16788bfa --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/containerservices.go @@ -0,0 +1,475 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// ContainerServicesClient is the compute Client +type ContainerServicesClient struct { + BaseClient +} + +// NewContainerServicesClient creates an instance of the ContainerServicesClient client. +func NewContainerServicesClient(subscriptionID string) ContainerServicesClient { + return NewContainerServicesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewContainerServicesClientWithBaseURI creates an instance of the ContainerServicesClient client. +func NewContainerServicesClientWithBaseURI(baseURI string, subscriptionID string) ContainerServicesClient { + return ContainerServicesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates a container service with the specified configuration of orchestrator, masters, and +// agents. +// Parameters: +// resourceGroupName - the name of the resource group. +// containerServiceName - the name of the container service in the specified subscription and resource group. +// parameters - parameters supplied to the Create or Update a Container Service operation. 
+func (client ContainerServicesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, containerServiceName string, parameters ContainerService) (result ContainerServicesCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.ContainerServiceProperties", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.CustomProfile", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.CustomProfile.Orchestrator", Name: validation.Null, Rule: true, Chain: nil}}}, + {Target: "parameters.ContainerServiceProperties.ServicePrincipalProfile", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.ServicePrincipalProfile.ClientID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.ContainerServiceProperties.ServicePrincipalProfile.Secret", Name: validation.Null, Rule: true, Chain: nil}, + }}, + {Target: "parameters.ContainerServiceProperties.MasterProfile", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.MasterProfile.DNSPrefix", Name: validation.Null, Rule: true, Chain: nil}}}, + {Target: "parameters.ContainerServiceProperties.AgentPoolProfiles", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.ContainerServiceProperties.WindowsProfile", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.WindowsProfile.AdminUsername", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.WindowsProfile.AdminUsername", Name: validation.Pattern, Rule: `^[a-zA-Z0-9]+([._]?[a-zA-Z0-9]+)*$`, Chain: nil}}}, + {Target: "parameters.ContainerServiceProperties.WindowsProfile.AdminPassword", Name: validation.Null, Rule: true, Chain: nil}, + }}, + {Target: "parameters.ContainerServiceProperties.LinuxProfile", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.LinuxProfile.AdminUsername", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.LinuxProfile.AdminUsername", Name: validation.Pattern, Rule: `^[a-z][a-z0-9_-]*$`, Chain: nil}}}, + {Target: "parameters.ContainerServiceProperties.LinuxProfile.SSH", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.LinuxProfile.SSH.PublicKeys", Name: validation.Null, Rule: true, Chain: nil}}}, + }}, + {Target: "parameters.ContainerServiceProperties.DiagnosticsProfile", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.DiagnosticsProfile.VMDiagnostics", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ContainerServiceProperties.DiagnosticsProfile.VMDiagnostics.Enabled", Name: validation.Null, Rule: true, Chain: nil}}}, + }}, + }}}}}); err != nil { + return result, validation.NewError("compute.ContainerServicesClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, containerServiceName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", 
"CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client ContainerServicesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, containerServiceName string, parameters ContainerService) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "containerServiceName": autorest.Encode("path", containerServiceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-01-31" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/containerServices/{containerServiceName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client ContainerServicesClient) CreateOrUpdateSender(req *http.Request) (future ContainerServicesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client ContainerServicesClient) CreateOrUpdateResponder(resp *http.Response) (result ContainerService, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified container service in the specified subscription and resource group. The operation does +// not delete other resources created as part of creating a container service, including storage accounts, VMs, and +// availability sets. All the other resources created with the container service are part of the same resource group +// and can be deleted individually. +// Parameters: +// resourceGroupName - the name of the resource group. +// containerServiceName - the name of the container service in the specified subscription and resource group. 
+func (client ContainerServicesClient) Delete(ctx context.Context, resourceGroupName string, containerServiceName string) (result ContainerServicesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, containerServiceName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client ContainerServicesClient) DeletePreparer(ctx context.Context, resourceGroupName string, containerServiceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "containerServiceName": autorest.Encode("path", containerServiceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-01-31" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/containerServices/{containerServiceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client ContainerServicesClient) DeleteSender(req *http.Request) (future ContainerServicesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client ContainerServicesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets the properties of the specified container service in the specified subscription and resource group. The +// operation returns the properties including state, orchestrator, number of masters and agents, and FQDNs of masters +// and agents. +// Parameters: +// resourceGroupName - the name of the resource group. +// containerServiceName - the name of the container service in the specified subscription and resource group. 
+func (client ContainerServicesClient) Get(ctx context.Context, resourceGroupName string, containerServiceName string) (result ContainerService, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, containerServiceName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client ContainerServicesClient) GetPreparer(ctx context.Context, resourceGroupName string, containerServiceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "containerServiceName": autorest.Encode("path", containerServiceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-01-31" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/containerServices/{containerServiceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client ContainerServicesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client ContainerServicesClient) GetResponder(resp *http.Response) (result ContainerService, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets a list of container services in the specified subscription. The operation returns properties of each +// container service including state, orchestrator, number of masters and agents, and FQDNs of masters and agents. 
+func (client ContainerServicesClient) List(ctx context.Context) (result ContainerServiceListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.cslr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "List", resp, "Failure sending request") + return + } + + result.cslr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client ContainerServicesClient) ListPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-01-31" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/containerServices", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client ContainerServicesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client ContainerServicesClient) ListResponder(resp *http.Response) (result ContainerServiceListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client ContainerServicesClient) listNextResults(lastResults ContainerServiceListResult) (result ContainerServiceListResult, err error) { + req, err := lastResults.containerServiceListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client ContainerServicesClient) ListComplete(ctx context.Context) (result ContainerServiceListResultIterator, err error) { + result.page, err = client.List(ctx) + return +} + +// ListByResourceGroup gets a list of container services in the specified subscription and resource group. The +// operation returns properties of each container service including state, orchestrator, number of masters and agents, +// and FQDNs of masters and agents. +// Parameters: +// resourceGroupName - the name of the resource group. +func (client ContainerServicesClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result ContainerServiceListResultPage, err error) { + result.fn = client.listByResourceGroupNextResults + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "ListByResourceGroup", nil, "Failure preparing request") + return + } + + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.cslr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "ListByResourceGroup", resp, "Failure sending request") + return + } + + result.cslr, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "ListByResourceGroup", resp, "Failure responding to request") + } + + return +} + +// ListByResourceGroupPreparer prepares the ListByResourceGroup request. +func (client ContainerServicesClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-01-31" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/containerServices", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the +// http.Response Body if it receives an error. +func (client ContainerServicesClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always +// closes the http.Response Body. +func (client ContainerServicesClient) ListByResourceGroupResponder(resp *http.Response) (result ContainerServiceListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listByResourceGroupNextResults retrieves the next set of results, if any. 
+func (client ContainerServicesClient) listByResourceGroupNextResults(lastResults ContainerServiceListResult) (result ContainerServiceListResult, err error) { + req, err := lastResults.containerServiceListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "listByResourceGroupNextResults", resp, "Failure sending next results request") + } + result, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required. +func (client ContainerServicesClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string) (result ContainerServiceListResultIterator, err error) { + result.page, err = client.ListByResourceGroup(ctx, resourceGroupName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/disks.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/disks.go new file mode 100644 index 000000000..cdc570377 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/disks.go @@ -0,0 +1,692 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// DisksClient is the compute Client +type DisksClient struct { + BaseClient +} + +// NewDisksClient creates an instance of the DisksClient client. +func NewDisksClient(subscriptionID string) DisksClient { + return NewDisksClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewDisksClientWithBaseURI creates an instance of the DisksClient client. +func NewDisksClientWithBaseURI(baseURI string, subscriptionID string) DisksClient { + return DisksClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates a disk. +// Parameters: +// resourceGroupName - the name of the resource group. +// diskName - the name of the managed disk that is being created. The name can't be changed after the disk is +// created. Supported characters for the name are a-z, A-Z, 0-9 and _. 
The maximum name length is 80 +// characters. +// disk - disk object supplied in the body of the Put disk operation. +func (client DisksClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, diskName string, disk Disk) (result DisksCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: disk, + Constraints: []validation.Constraint{{Target: "disk.DiskProperties", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "disk.DiskProperties.CreationData", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "disk.DiskProperties.CreationData.ImageReference", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "disk.DiskProperties.CreationData.ImageReference.ID", Name: validation.Null, Rule: true, Chain: nil}}}, + }}, + {Target: "disk.DiskProperties.EncryptionSettings", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "disk.DiskProperties.EncryptionSettings.DiskEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "disk.DiskProperties.EncryptionSettings.DiskEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "disk.DiskProperties.EncryptionSettings.DiskEncryptionKey.SecretURL", Name: validation.Null, Rule: true, Chain: nil}, + }}, + {Target: "disk.DiskProperties.EncryptionSettings.KeyEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "disk.DiskProperties.EncryptionSettings.KeyEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "disk.DiskProperties.EncryptionSettings.KeyEncryptionKey.KeyURL", Name: validation.Null, Rule: true, Chain: nil}, + }}, + }}, + }}}}}); err != nil { + return result, validation.NewError("compute.DisksClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, diskName, disk) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client DisksClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, diskName string, disk Disk) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "diskName": autorest.Encode("path", diskName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{diskName}", pathParameters), + autorest.WithJSON(disk), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. 
+func (client DisksClient) CreateOrUpdateSender(req *http.Request) (future DisksCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client DisksClient) CreateOrUpdateResponder(resp *http.Response) (result Disk, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes a disk. +// Parameters: +// resourceGroupName - the name of the resource group. +// diskName - the name of the managed disk that is being created. The name can't be changed after the disk is +// created. Supported characters for the name are a-z, A-Z, 0-9 and _. The maximum name length is 80 +// characters. +func (client DisksClient) Delete(ctx context.Context, resourceGroupName string, diskName string) (result DisksDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, diskName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client DisksClient) DeletePreparer(ctx context.Context, resourceGroupName string, diskName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "diskName": autorest.Encode("path", diskName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{diskName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client DisksClient) DeleteSender(req *http.Request) (future DisksDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. 
+func (client DisksClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets information about a disk. +// Parameters: +// resourceGroupName - the name of the resource group. +// diskName - the name of the managed disk that is being created. The name can't be changed after the disk is +// created. Supported characters for the name are a-z, A-Z, 0-9 and _. The maximum name length is 80 +// characters. +func (client DisksClient) Get(ctx context.Context, resourceGroupName string, diskName string) (result Disk, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, diskName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.DisksClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client DisksClient) GetPreparer(ctx context.Context, resourceGroupName string, diskName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "diskName": autorest.Encode("path", diskName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{diskName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client DisksClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client DisksClient) GetResponder(resp *http.Response) (result Disk, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GrantAccess grants access to a disk. +// Parameters: +// resourceGroupName - the name of the resource group. +// diskName - the name of the managed disk that is being created. The name can't be changed after the disk is +// created. Supported characters for the name are a-z, A-Z, 0-9 and _. The maximum name length is 80 +// characters. +// grantAccessData - access data object supplied in the body of the get disk access operation. 
+func (client DisksClient) GrantAccess(ctx context.Context, resourceGroupName string, diskName string, grantAccessData GrantAccessData) (result DisksGrantAccessFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: grantAccessData, + Constraints: []validation.Constraint{{Target: "grantAccessData.DurationInSeconds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.DisksClient", "GrantAccess", err.Error()) + } + + req, err := client.GrantAccessPreparer(ctx, resourceGroupName, diskName, grantAccessData) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "GrantAccess", nil, "Failure preparing request") + return + } + + result, err = client.GrantAccessSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "GrantAccess", result.Response(), "Failure sending request") + return + } + + return +} + +// GrantAccessPreparer prepares the GrantAccess request. +func (client DisksClient) GrantAccessPreparer(ctx context.Context, resourceGroupName string, diskName string, grantAccessData GrantAccessData) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "diskName": autorest.Encode("path", diskName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{diskName}/beginGetAccess", pathParameters), + autorest.WithJSON(grantAccessData), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GrantAccessSender sends the GrantAccess request. The method will close the +// http.Response Body if it receives an error. +func (client DisksClient) GrantAccessSender(req *http.Request) (future DisksGrantAccessFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GrantAccessResponder handles the response to the GrantAccess request. The method always +// closes the http.Response Body. +func (client DisksClient) GrantAccessResponder(resp *http.Response) (result AccessURI, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List lists all the disks under a subscription. 
+func (client DisksClient) List(ctx context.Context) (result DiskListPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.dl.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.DisksClient", "List", resp, "Failure sending request") + return + } + + result.dl, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client DisksClient) ListPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/disks", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client DisksClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client DisksClient) ListResponder(resp *http.Response) (result DiskList, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client DisksClient) listNextResults(lastResults DiskList) (result DiskList, err error) { + req, err := lastResults.diskListPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.DisksClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.DisksClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client DisksClient) ListComplete(ctx context.Context) (result DiskListIterator, err error) { + result.page, err = client.List(ctx) + return +} + +// ListByResourceGroup lists all the disks under a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client DisksClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result DiskListPage, err error) { + result.fn = client.listByResourceGroupNextResults + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "ListByResourceGroup", nil, "Failure preparing request") + return + } + + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.dl.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.DisksClient", "ListByResourceGroup", resp, "Failure sending request") + return + } + + result.dl, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "ListByResourceGroup", resp, "Failure responding to request") + } + + return +} + +// ListByResourceGroupPreparer prepares the ListByResourceGroup request. +func (client DisksClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the +// http.Response Body if it receives an error. +func (client DisksClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always +// closes the http.Response Body. +func (client DisksClient) ListByResourceGroupResponder(resp *http.Response) (result DiskList, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listByResourceGroupNextResults retrieves the next set of results, if any. 
+func (client DisksClient) listByResourceGroupNextResults(lastResults DiskList) (result DiskList, err error) { + req, err := lastResults.diskListPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.DisksClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.DisksClient", "listByResourceGroupNextResults", resp, "Failure sending next results request") + } + result, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required. +func (client DisksClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string) (result DiskListIterator, err error) { + result.page, err = client.ListByResourceGroup(ctx, resourceGroupName) + return +} + +// RevokeAccess revokes access to a disk. +// Parameters: +// resourceGroupName - the name of the resource group. +// diskName - the name of the managed disk that is being created. The name can't be changed after the disk is +// created. Supported characters for the name are a-z, A-Z, 0-9 and _. The maximum name length is 80 +// characters. +func (client DisksClient) RevokeAccess(ctx context.Context, resourceGroupName string, diskName string) (result DisksRevokeAccessFuture, err error) { + req, err := client.RevokeAccessPreparer(ctx, resourceGroupName, diskName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "RevokeAccess", nil, "Failure preparing request") + return + } + + result, err = client.RevokeAccessSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "RevokeAccess", result.Response(), "Failure sending request") + return + } + + return +} + +// RevokeAccessPreparer prepares the RevokeAccess request. +func (client DisksClient) RevokeAccessPreparer(ctx context.Context, resourceGroupName string, diskName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "diskName": autorest.Encode("path", diskName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{diskName}/endGetAccess", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RevokeAccessSender sends the RevokeAccess request. The method will close the +// http.Response Body if it receives an error. 
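RevokeAccess above now hands back a DisksRevokeAccessFuture instead of result/error channels. A hedged sketch of the call-and-wait shape: revokeDiskAccess is an illustrative helper, not part of the vendored code, and WaitForCompletion is assumed from the go-autorest version vendored alongside this SDK (newer releases rename it WaitForCompletionRef).

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)

// revokeDiskAccess ends a previously granted SAS on a managed disk and blocks
// until the service reports that the long-running operation has finished.
func revokeDiskAccess(ctx context.Context, client compute.DisksClient, resourceGroup, diskName string) error {
	future, err := client.RevokeAccess(ctx, resourceGroup, diskName)
	if err != nil {
		return err
	}
	// Poll the operation until it reaches a terminal state (assumed helper
	// name for this go-autorest vintage).
	return future.WaitForCompletion(ctx, client.Client)
}
```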
+func (client DisksClient) RevokeAccessSender(req *http.Request) (future DisksRevokeAccessFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RevokeAccessResponder handles the response to the RevokeAccess request. The method always +// closes the http.Response Body. +func (client DisksClient) RevokeAccessResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByClosing()) + result.Response = resp + return +} + +// Update updates (patches) a disk. +// Parameters: +// resourceGroupName - the name of the resource group. +// diskName - the name of the managed disk that is being created. The name can't be changed after the disk is +// created. Supported characters for the name are a-z, A-Z, 0-9 and _. The maximum name length is 80 +// characters. +// disk - disk object supplied in the body of the Patch disk operation. +func (client DisksClient) Update(ctx context.Context, resourceGroupName string, diskName string, disk DiskUpdate) (result DisksUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, diskName, disk) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksClient", "Update", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdatePreparer prepares the Update request. +func (client DisksClient) UpdatePreparer(ctx context.Context, resourceGroupName string, diskName string, disk DiskUpdate) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "diskName": autorest.Encode("path", diskName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{diskName}", pathParameters), + autorest.WithJSON(disk), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. +func (client DisksClient) UpdateSender(req *http.Request) (future DisksUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. 
The method always +// closes the http.Response Body. +func (client DisksClient) UpdateResponder(resp *http.Response) (result Disk, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/images.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/images.go similarity index 52% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/images.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/images.go index 64f14dd08..d50a784b7 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/images.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/images.go @@ -14,20 +14,19 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// ImagesClient is the the Compute Management Client. +// ImagesClient is the compute Client type ImagesClient struct { - ManagementClient + BaseClient } // NewImagesClient creates an instance of the ImagesClient client. @@ -40,88 +39,65 @@ func NewImagesClientWithBaseURI(baseURI string, subscriptionID string) ImagesCli return ImagesClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate create or update an image. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. imageName is the name -// of the image. parameters is parameters supplied to the Create Image -// operation. -func (client ImagesClient) CreateOrUpdate(resourceGroupName string, imageName string, parameters Image, cancel <-chan struct{}) (<-chan Image, <-chan error) { - resultChan := make(chan Image, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: parameters, - Constraints: []validation.Constraint{{Target: "parameters.ImageProperties", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.ImageProperties.StorageProfile", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.ImageProperties.StorageProfile.OsDisk", Name: validation.Null, Rule: true, Chain: nil}}}, - }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "compute.ImagesClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan +// CreateOrUpdate create or update an image. +// Parameters: +// resourceGroupName - the name of the resource group. +// imageName - the name of the image. +// parameters - parameters supplied to the Create Image operation. 
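This hunk swaps the old cancel-channel CreateOrUpdate for a context-aware call that returns an ImagesCreateOrUpdateFuture. A sketch of the new call shape, assuming the generated future's WaitForCompletion and Result helpers from the vendored go-autorest; createImage and imageParams are illustrative placeholders, not SDK names.

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)

// createImage creates or updates a managed image and waits for the operation
// to finish. imageParams stands in for a fully populated compute.Image value
// (storage profile, source VM, and so on) built by the caller.
func createImage(ctx context.Context, client compute.ImagesClient, resourceGroup, imageName string, imageParams compute.Image) (compute.Image, error) {
	future, err := client.CreateOrUpdate(ctx, resourceGroup, imageName, imageParams)
	if err != nil {
		return compute.Image{}, err
	}
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return compute.Image{}, err
	}
	// Result runs the typed responder against the final response and returns
	// the created image.
	return future.Result(client)
}
```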
+func (client ImagesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, imageName string, parameters Image) (result ImagesCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, imageName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "CreateOrUpdate", nil, "Failure preparing request") + return } - go func() { - var err error - var result Image - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, imageName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client ImagesClient) CreateOrUpdatePreparer(resourceGroupName string, imageName string, parameters Image, cancel <-chan struct{}) (*http.Request, error) { +func (client ImagesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, imageName string, parameters Image) (*http.Request, error) { pathParameters := map[string]interface{}{ "imageName": autorest.Encode("path", imageName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/images/{imageName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. 
-func (client ImagesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ImagesClient) CreateOrUpdateSender(req *http.Request) (future ImagesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -137,54 +113,35 @@ func (client ImagesClient) CreateOrUpdateResponder(resp *http.Response) (result return } -// Delete deletes an Image. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. imageName is the name -// of the image. -func (client ImagesClient) Delete(resourceGroupName string, imageName string, cancel <-chan struct{}) (<-chan OperationStatusResponse, <-chan error) { - resultChan := make(chan OperationStatusResponse, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result OperationStatusResponse - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, imageName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes an Image. +// Parameters: +// resourceGroupName - the name of the resource group. +// imageName - the name of the image. +func (client ImagesClient) Delete(ctx context.Context, resourceGroupName string, imageName string) (result ImagesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, imageName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. 
-func (client ImagesClient) DeletePreparer(resourceGroupName string, imageName string, cancel <-chan struct{}) (*http.Request, error) { +func (client ImagesClient) DeletePreparer(ctx context.Context, resourceGroupName string, imageName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "imageName": autorest.Encode("path", imageName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -194,15 +151,24 @@ func (client ImagesClient) DeletePreparer(resourceGroupName string, imageName st autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/images/{imageName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client ImagesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ImagesClient) DeleteSender(req *http.Request) (future ImagesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -219,11 +185,12 @@ func (client ImagesClient) DeleteResponder(resp *http.Response) (result Operatio } // Get gets an image. -// -// resourceGroupName is the name of the resource group. imageName is the name -// of the image. expand is the expand expression to apply on the operation. -func (client ImagesClient) Get(resourceGroupName string, imageName string, expand string) (result Image, err error) { - req, err := client.GetPreparer(resourceGroupName, imageName, expand) +// Parameters: +// resourceGroupName - the name of the resource group. +// imageName - the name of the image. +// expand - the expand expression to apply on the operation. +func (client ImagesClient) Get(ctx context.Context, resourceGroupName string, imageName string, expand string) (result Image, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, imageName, expand) if err != nil { err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Get", nil, "Failure preparing request") return @@ -245,14 +212,14 @@ func (client ImagesClient) Get(resourceGroupName string, imageName string, expan } // GetPreparer prepares the Get request. 
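Get now threads the caller's context into the outgoing *http.Request, so a deadline can replace the old cancel channel. A small illustrative wrapper (getImage and the two-minute timeout are my choices, not part of the SDK):

```
package example

import (
	"context"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)

// getImage fetches a single managed image, bounding the call with a deadline
// carried by the context.
func getImage(client compute.ImagesClient, resourceGroup, imageName string) (compute.Image, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// An empty expand string means no $expand expression is applied.
	return client.Get(ctx, resourceGroup, imageName, "")
}
```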
-func (client ImagesClient) GetPreparer(resourceGroupName string, imageName string, expand string) (*http.Request, error) { +func (client ImagesClient) GetPreparer(ctx context.Context, resourceGroupName string, imageName string, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "imageName": autorest.Encode("path", imageName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -265,13 +232,14 @@ func (client ImagesClient) GetPreparer(resourceGroupName string, imageName strin autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/images/{imageName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client ImagesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -287,11 +255,11 @@ func (client ImagesClient) GetResponder(resp *http.Response) (result Image, err return } -// List gets the list of Images in the subscription. Use nextLink property in -// the response to get the next page of Images. Do this till nextLink is not -// null to fetch all the Images. -func (client ImagesClient) List() (result ImageListResult, err error) { - req, err := client.ListPreparer() +// List gets the list of Images in the subscription. Use nextLink property in the response to get the next page of +// Images. Do this till nextLink is null to fetch all the Images. +func (client ImagesClient) List(ctx context.Context) (result ImageListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "compute.ImagesClient", "List", nil, "Failure preparing request") return @@ -299,12 +267,12 @@ func (client ImagesClient) List() (result ImageListResult, err error) { resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.ilr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "compute.ImagesClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.ilr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "compute.ImagesClient", "List", resp, "Failure responding to request") } @@ -313,12 +281,12 @@ func (client ImagesClient) List() (result ImageListResult, err error) { } // ListPreparer prepares the List request. 
-func (client ImagesClient) ListPreparer() (*http.Request, error) { +func (client ImagesClient) ListPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -328,13 +296,14 @@ func (client ImagesClient) ListPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/images", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client ImagesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -350,35 +319,39 @@ func (client ImagesClient) ListResponder(resp *http.Response) (result ImageListR return } -// ListNextResults retrieves the next set of results, if any. -func (client ImagesClient) ListNextResults(lastResults ImageListResult) (result ImageListResult, err error) { - req, err := lastResults.ImageListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client ImagesClient) listNextResults(lastResults ImageListResult) (result ImageListResult, err error) { + req, err := lastResults.imageListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "listNextResults", resp, "Failure responding to next results request") } + return +} +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ImagesClient) ListComplete(ctx context.Context) (result ImageListResultIterator, err error) { + result.page, err = client.List(ctx) return } // ListByResourceGroup gets the list of images under a resource group. -// -// resourceGroupName is the name of the resource group. -func (client ImagesClient) ListByResourceGroup(resourceGroupName string) (result ImageListResult, err error) { - req, err := client.ListByResourceGroupPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. 
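ImagesClient.List likewise now returns an ImageListResultPage, so callers can work page by page instead of through the flattened iterator from ListComplete. An illustrative sketch, assuming the standard generated page methods NotDone/Values/Next; countImages is my name.

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)

// countImages consumes the pager returned by List one page at a time and
// tallies the images in the subscription.
func countImages(ctx context.Context, client compute.ImagesClient) (int, error) {
	page, err := client.List(ctx) // compute.ImageListResultPage
	if err != nil {
		return 0, err
	}
	total := 0
	for page.NotDone() {
		total += len(page.Values())
		if err := page.Next(); err != nil {
			return total, err
		}
	}
	return total, nil
}
```

The pager is the better fit when each page should be processed (or persisted) as a unit; the iterator from ListComplete is simpler when only individual items matter.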
+func (client ImagesClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result ImageListResultPage, err error) { + result.fn = client.listByResourceGroupNextResults + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "compute.ImagesClient", "ListByResourceGroup", nil, "Failure preparing request") return @@ -386,12 +359,12 @@ func (client ImagesClient) ListByResourceGroup(resourceGroupName string) (result resp, err := client.ListByResourceGroupSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.ilr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "compute.ImagesClient", "ListByResourceGroup", resp, "Failure sending request") return } - result, err = client.ListByResourceGroupResponder(resp) + result.ilr, err = client.ListByResourceGroupResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "compute.ImagesClient", "ListByResourceGroup", resp, "Failure responding to request") } @@ -400,13 +373,13 @@ func (client ImagesClient) ListByResourceGroup(resourceGroupName string) (result } // ListByResourceGroupPreparer prepares the ListByResourceGroup request. -func (client ImagesClient) ListByResourceGroupPreparer(resourceGroupName string) (*http.Request, error) { +func (client ImagesClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -416,13 +389,14 @@ func (client ImagesClient) ListByResourceGroupPreparer(resourceGroupName string) autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/images", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the // http.Response Body if it receives an error. func (client ImagesClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always @@ -438,26 +412,103 @@ func (client ImagesClient) ListByResourceGroupResponder(resp *http.Response) (re return } -// ListByResourceGroupNextResults retrieves the next set of results, if any. -func (client ImagesClient) ListByResourceGroupNextResults(lastResults ImageListResult) (result ImageListResult, err error) { - req, err := lastResults.ImageListResultPreparer() +// listByResourceGroupNextResults retrieves the next set of results, if any. 
+func (client ImagesClient) listByResourceGroupNextResults(lastResults ImageListResult) (result ImageListResult, err error) { + req, err := lastResults.imageListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "ListByResourceGroup", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListByResourceGroupSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "ListByResourceGroup", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "compute.ImagesClient", "listByResourceGroupNextResults", resp, "Failure sending next results request") } - result, err = client.ListByResourceGroupResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "compute.ImagesClient", "ListByResourceGroup", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required. +func (client ImagesClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string) (result ImageListResultIterator, err error) { + result.page, err = client.ListByResourceGroup(ctx, resourceGroupName) + return +} + +// Update update an image. +// Parameters: +// resourceGroupName - the name of the resource group. +// imageName - the name of the image. +// parameters - parameters supplied to the Update Image operation. +func (client ImagesClient) Update(ctx context.Context, resourceGroupName string, imageName string, parameters ImageUpdate) (result ImagesUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, imageName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesClient", "Update", result.Response(), "Failure sending request") + return } return } + +// UpdatePreparer prepares the Update request. +func (client ImagesClient) UpdatePreparer(ctx context.Context, resourceGroupName string, imageName string, parameters ImageUpdate) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "imageName": autorest.Encode("path", imageName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/images/{imageName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. 
The method will close the +// http.Response Body if it receives an error. +func (client ImagesClient) UpdateSender(req *http.Request) (future ImagesUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client ImagesClient) UpdateResponder(resp *http.Response) (result Image, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/loganalytics.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/loganalytics.go new file mode 100644 index 000000000..a309e61ab --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/loganalytics.go @@ -0,0 +1,199 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// LogAnalyticsClient is the compute Client +type LogAnalyticsClient struct { + BaseClient +} + +// NewLogAnalyticsClient creates an instance of the LogAnalyticsClient client. +func NewLogAnalyticsClient(subscriptionID string) LogAnalyticsClient { + return NewLogAnalyticsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewLogAnalyticsClientWithBaseURI creates an instance of the LogAnalyticsClient client. +func NewLogAnalyticsClientWithBaseURI(baseURI string, subscriptionID string) LogAnalyticsClient { + return LogAnalyticsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// ExportRequestRateByInterval export logs that show Api requests made by this subscription in the given time window to +// show throttling activities. +// Parameters: +// parameters - parameters supplied to the LogAnalytics getRequestRateByInterval Api. +// location - the location upon which virtual-machine-sizes is queried. 
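ExportRequestRateByInterval is another future-returning operation, and its location argument is validated client-side against `^[-\w\._]+$` before any request is sent. A hedged sketch of invoking it with a caller-built input: exportRequestRate is illustrative, the input's fields are defined in models.go and not spelled out here, and WaitForCompletion is assumed from the vendored go-autorest.

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)

// exportRequestRate starts a request-rate export and waits for it to finish.
// input stands in for a caller-built compute.RequestRateByIntervalInput (blob
// container SAS URI, time window, interval).
func exportRequestRate(ctx context.Context, client compute.LogAnalyticsClient, input compute.RequestRateByIntervalInput, location string) error {
	// location must match ^[-\w\._]+$ or client-side validation rejects the call.
	future, err := client.ExportRequestRateByInterval(ctx, input, location)
	if err != nil {
		return err
	}
	return future.WaitForCompletion(ctx, client.Client)
}
```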
+func (client LogAnalyticsClient) ExportRequestRateByInterval(ctx context.Context, parameters RequestRateByIntervalInput, location string) (result LogAnalyticsExportRequestRateByIntervalFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: location, + Constraints: []validation.Constraint{{Target: "location", Name: validation.Pattern, Rule: `^[-\w\._]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.LogAnalyticsClient", "ExportRequestRateByInterval", err.Error()) + } + + req, err := client.ExportRequestRateByIntervalPreparer(ctx, parameters, location) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsClient", "ExportRequestRateByInterval", nil, "Failure preparing request") + return + } + + result, err = client.ExportRequestRateByIntervalSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsClient", "ExportRequestRateByInterval", result.Response(), "Failure sending request") + return + } + + return +} + +// ExportRequestRateByIntervalPreparer prepares the ExportRequestRateByInterval request. +func (client LogAnalyticsClient) ExportRequestRateByIntervalPreparer(ctx context.Context, parameters RequestRateByIntervalInput, location string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "location": autorest.Encode("path", location), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/logAnalytics/apiAccess/getRequestRateByInterval", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ExportRequestRateByIntervalSender sends the ExportRequestRateByInterval request. The method will close the +// http.Response Body if it receives an error. +func (client LogAnalyticsClient) ExportRequestRateByIntervalSender(req *http.Request) (future LogAnalyticsExportRequestRateByIntervalFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ExportRequestRateByIntervalResponder handles the response to the ExportRequestRateByInterval request. The method always +// closes the http.Response Body. +func (client LogAnalyticsClient) ExportRequestRateByIntervalResponder(resp *http.Response) (result LogAnalyticsOperationResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ExportThrottledRequests export logs that show total throttled Api requests for this subscription in the given time +// window. +// Parameters: +// parameters - parameters supplied to the LogAnalytics getThrottledRequests Api. 
+// location - the location upon which virtual-machine-sizes is queried. +func (client LogAnalyticsClient) ExportThrottledRequests(ctx context.Context, parameters ThrottledRequestsInput, location string) (result LogAnalyticsExportThrottledRequestsFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: location, + Constraints: []validation.Constraint{{Target: "location", Name: validation.Pattern, Rule: `^[-\w\._]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.LogAnalyticsClient", "ExportThrottledRequests", err.Error()) + } + + req, err := client.ExportThrottledRequestsPreparer(ctx, parameters, location) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsClient", "ExportThrottledRequests", nil, "Failure preparing request") + return + } + + result, err = client.ExportThrottledRequestsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsClient", "ExportThrottledRequests", result.Response(), "Failure sending request") + return + } + + return +} + +// ExportThrottledRequestsPreparer prepares the ExportThrottledRequests request. +func (client LogAnalyticsClient) ExportThrottledRequestsPreparer(ctx context.Context, parameters ThrottledRequestsInput, location string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "location": autorest.Encode("path", location), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/logAnalytics/apiAccess/getThrottledRequests", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ExportThrottledRequestsSender sends the ExportThrottledRequests request. The method will close the +// http.Response Body if it receives an error. +func (client LogAnalyticsClient) ExportThrottledRequestsSender(req *http.Request) (future LogAnalyticsExportThrottledRequestsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ExportThrottledRequestsResponder handles the response to the ExportThrottledRequests request. The method always +// closes the http.Response Body. 
+func (client LogAnalyticsClient) ExportThrottledRequestsResponder(resp *http.Response) (result LogAnalyticsOperationResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/models.go new file mode 100644 index 000000000..f4efc7509 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/models.go @@ -0,0 +1,8700 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "encoding/json" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/date" + "github.com/Azure/go-autorest/autorest/to" + "net/http" +) + +// AccessLevel enumerates the values for access level. +type AccessLevel string + +const ( + // None ... + None AccessLevel = "None" + // Read ... + Read AccessLevel = "Read" +) + +// PossibleAccessLevelValues returns an array of possible values for the AccessLevel const type. +func PossibleAccessLevelValues() []AccessLevel { + return []AccessLevel{None, Read} +} + +// CachingTypes enumerates the values for caching types. +type CachingTypes string + +const ( + // CachingTypesNone ... + CachingTypesNone CachingTypes = "None" + // CachingTypesReadOnly ... + CachingTypesReadOnly CachingTypes = "ReadOnly" + // CachingTypesReadWrite ... + CachingTypesReadWrite CachingTypes = "ReadWrite" +) + +// PossibleCachingTypesValues returns an array of possible values for the CachingTypes const type. +func PossibleCachingTypesValues() []CachingTypes { + return []CachingTypes{CachingTypesNone, CachingTypesReadOnly, CachingTypesReadWrite} +} + +// ComponentNames enumerates the values for component names. +type ComponentNames string + +const ( + // MicrosoftWindowsShellSetup ... + MicrosoftWindowsShellSetup ComponentNames = "Microsoft-Windows-Shell-Setup" +) + +// PossibleComponentNamesValues returns an array of possible values for the ComponentNames const type. +func PossibleComponentNamesValues() []ComponentNames { + return []ComponentNames{MicrosoftWindowsShellSetup} +} + +// ContainerServiceOrchestratorTypes enumerates the values for container service orchestrator types. +type ContainerServiceOrchestratorTypes string + +const ( + // Custom ... + Custom ContainerServiceOrchestratorTypes = "Custom" + // DCOS ... + DCOS ContainerServiceOrchestratorTypes = "DCOS" + // Kubernetes ... 
+ Kubernetes ContainerServiceOrchestratorTypes = "Kubernetes" + // Swarm ... + Swarm ContainerServiceOrchestratorTypes = "Swarm" +) + +// PossibleContainerServiceOrchestratorTypesValues returns an array of possible values for the ContainerServiceOrchestratorTypes const type. +func PossibleContainerServiceOrchestratorTypesValues() []ContainerServiceOrchestratorTypes { + return []ContainerServiceOrchestratorTypes{Custom, DCOS, Kubernetes, Swarm} +} + +// ContainerServiceVMSizeTypes enumerates the values for container service vm size types. +type ContainerServiceVMSizeTypes string + +const ( + // StandardA0 ... + StandardA0 ContainerServiceVMSizeTypes = "Standard_A0" + // StandardA1 ... + StandardA1 ContainerServiceVMSizeTypes = "Standard_A1" + // StandardA10 ... + StandardA10 ContainerServiceVMSizeTypes = "Standard_A10" + // StandardA11 ... + StandardA11 ContainerServiceVMSizeTypes = "Standard_A11" + // StandardA2 ... + StandardA2 ContainerServiceVMSizeTypes = "Standard_A2" + // StandardA3 ... + StandardA3 ContainerServiceVMSizeTypes = "Standard_A3" + // StandardA4 ... + StandardA4 ContainerServiceVMSizeTypes = "Standard_A4" + // StandardA5 ... + StandardA5 ContainerServiceVMSizeTypes = "Standard_A5" + // StandardA6 ... + StandardA6 ContainerServiceVMSizeTypes = "Standard_A6" + // StandardA7 ... + StandardA7 ContainerServiceVMSizeTypes = "Standard_A7" + // StandardA8 ... + StandardA8 ContainerServiceVMSizeTypes = "Standard_A8" + // StandardA9 ... + StandardA9 ContainerServiceVMSizeTypes = "Standard_A9" + // StandardD1 ... + StandardD1 ContainerServiceVMSizeTypes = "Standard_D1" + // StandardD11 ... + StandardD11 ContainerServiceVMSizeTypes = "Standard_D11" + // StandardD11V2 ... + StandardD11V2 ContainerServiceVMSizeTypes = "Standard_D11_v2" + // StandardD12 ... + StandardD12 ContainerServiceVMSizeTypes = "Standard_D12" + // StandardD12V2 ... + StandardD12V2 ContainerServiceVMSizeTypes = "Standard_D12_v2" + // StandardD13 ... + StandardD13 ContainerServiceVMSizeTypes = "Standard_D13" + // StandardD13V2 ... + StandardD13V2 ContainerServiceVMSizeTypes = "Standard_D13_v2" + // StandardD14 ... + StandardD14 ContainerServiceVMSizeTypes = "Standard_D14" + // StandardD14V2 ... + StandardD14V2 ContainerServiceVMSizeTypes = "Standard_D14_v2" + // StandardD1V2 ... + StandardD1V2 ContainerServiceVMSizeTypes = "Standard_D1_v2" + // StandardD2 ... + StandardD2 ContainerServiceVMSizeTypes = "Standard_D2" + // StandardD2V2 ... + StandardD2V2 ContainerServiceVMSizeTypes = "Standard_D2_v2" + // StandardD3 ... + StandardD3 ContainerServiceVMSizeTypes = "Standard_D3" + // StandardD3V2 ... + StandardD3V2 ContainerServiceVMSizeTypes = "Standard_D3_v2" + // StandardD4 ... + StandardD4 ContainerServiceVMSizeTypes = "Standard_D4" + // StandardD4V2 ... + StandardD4V2 ContainerServiceVMSizeTypes = "Standard_D4_v2" + // StandardD5V2 ... + StandardD5V2 ContainerServiceVMSizeTypes = "Standard_D5_v2" + // StandardDS1 ... + StandardDS1 ContainerServiceVMSizeTypes = "Standard_DS1" + // StandardDS11 ... + StandardDS11 ContainerServiceVMSizeTypes = "Standard_DS11" + // StandardDS12 ... + StandardDS12 ContainerServiceVMSizeTypes = "Standard_DS12" + // StandardDS13 ... + StandardDS13 ContainerServiceVMSizeTypes = "Standard_DS13" + // StandardDS14 ... + StandardDS14 ContainerServiceVMSizeTypes = "Standard_DS14" + // StandardDS2 ... + StandardDS2 ContainerServiceVMSizeTypes = "Standard_DS2" + // StandardDS3 ... + StandardDS3 ContainerServiceVMSizeTypes = "Standard_DS3" + // StandardDS4 ... 
+ StandardDS4 ContainerServiceVMSizeTypes = "Standard_DS4" + // StandardG1 ... + StandardG1 ContainerServiceVMSizeTypes = "Standard_G1" + // StandardG2 ... + StandardG2 ContainerServiceVMSizeTypes = "Standard_G2" + // StandardG3 ... + StandardG3 ContainerServiceVMSizeTypes = "Standard_G3" + // StandardG4 ... + StandardG4 ContainerServiceVMSizeTypes = "Standard_G4" + // StandardG5 ... + StandardG5 ContainerServiceVMSizeTypes = "Standard_G5" + // StandardGS1 ... + StandardGS1 ContainerServiceVMSizeTypes = "Standard_GS1" + // StandardGS2 ... + StandardGS2 ContainerServiceVMSizeTypes = "Standard_GS2" + // StandardGS3 ... + StandardGS3 ContainerServiceVMSizeTypes = "Standard_GS3" + // StandardGS4 ... + StandardGS4 ContainerServiceVMSizeTypes = "Standard_GS4" + // StandardGS5 ... + StandardGS5 ContainerServiceVMSizeTypes = "Standard_GS5" +) + +// PossibleContainerServiceVMSizeTypesValues returns an array of possible values for the ContainerServiceVMSizeTypes const type. +func PossibleContainerServiceVMSizeTypesValues() []ContainerServiceVMSizeTypes { + return []ContainerServiceVMSizeTypes{StandardA0, StandardA1, StandardA10, StandardA11, StandardA2, StandardA3, StandardA4, StandardA5, StandardA6, StandardA7, StandardA8, StandardA9, StandardD1, StandardD11, StandardD11V2, StandardD12, StandardD12V2, StandardD13, StandardD13V2, StandardD14, StandardD14V2, StandardD1V2, StandardD2, StandardD2V2, StandardD3, StandardD3V2, StandardD4, StandardD4V2, StandardD5V2, StandardDS1, StandardDS11, StandardDS12, StandardDS13, StandardDS14, StandardDS2, StandardDS3, StandardDS4, StandardG1, StandardG2, StandardG3, StandardG4, StandardG5, StandardGS1, StandardGS2, StandardGS3, StandardGS4, StandardGS5} +} + +// DiskCreateOption enumerates the values for disk create option. +type DiskCreateOption string + +const ( + // Attach ... + Attach DiskCreateOption = "Attach" + // Copy ... + Copy DiskCreateOption = "Copy" + // Empty ... + Empty DiskCreateOption = "Empty" + // FromImage ... + FromImage DiskCreateOption = "FromImage" + // Import ... + Import DiskCreateOption = "Import" + // Restore ... + Restore DiskCreateOption = "Restore" +) + +// PossibleDiskCreateOptionValues returns an array of possible values for the DiskCreateOption const type. +func PossibleDiskCreateOptionValues() []DiskCreateOption { + return []DiskCreateOption{Attach, Copy, Empty, FromImage, Import, Restore} +} + +// DiskCreateOptionTypes enumerates the values for disk create option types. +type DiskCreateOptionTypes string + +const ( + // DiskCreateOptionTypesAttach ... + DiskCreateOptionTypesAttach DiskCreateOptionTypes = "Attach" + // DiskCreateOptionTypesEmpty ... + DiskCreateOptionTypesEmpty DiskCreateOptionTypes = "Empty" + // DiskCreateOptionTypesFromImage ... + DiskCreateOptionTypesFromImage DiskCreateOptionTypes = "FromImage" +) + +// PossibleDiskCreateOptionTypesValues returns an array of possible values for the DiskCreateOptionTypes const type. +func PossibleDiskCreateOptionTypesValues() []DiskCreateOptionTypes { + return []DiskCreateOptionTypes{DiskCreateOptionTypesAttach, DiskCreateOptionTypesEmpty, DiskCreateOptionTypesFromImage} +} + +// InstanceViewTypes enumerates the values for instance view types. +type InstanceViewTypes string + +const ( + // InstanceView ... + InstanceView InstanceViewTypes = "instanceView" +) + +// PossibleInstanceViewTypesValues returns an array of possible values for the InstanceViewTypes const type. 
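Each enum in models.go ships with a PossibleXValues helper listing its legal values, which is handy for validating user-supplied strings before a request is built. A tiny illustrative example using PossibleDiskCreateOptionValues from the hunk above (isValidDiskCreateOption is my helper, not part of the SDK):

```
package example

import "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"

// isValidDiskCreateOption reports whether s matches one of the generated
// DiskCreateOption constants.
func isValidDiskCreateOption(s string) bool {
	for _, v := range compute.PossibleDiskCreateOptionValues() {
		if string(v) == s {
			return true
		}
	}
	return false
}
```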
+func PossibleInstanceViewTypesValues() []InstanceViewTypes { + return []InstanceViewTypes{InstanceView} +} + +// IntervalInMins enumerates the values for interval in mins. +type IntervalInMins string + +const ( + // FiveMins ... + FiveMins IntervalInMins = "FiveMins" + // SixtyMins ... + SixtyMins IntervalInMins = "SixtyMins" + // ThirtyMins ... + ThirtyMins IntervalInMins = "ThirtyMins" + // ThreeMins ... + ThreeMins IntervalInMins = "ThreeMins" +) + +// PossibleIntervalInMinsValues returns an array of possible values for the IntervalInMins const type. +func PossibleIntervalInMinsValues() []IntervalInMins { + return []IntervalInMins{FiveMins, SixtyMins, ThirtyMins, ThreeMins} +} + +// IPVersion enumerates the values for ip version. +type IPVersion string + +const ( + // IPv4 ... + IPv4 IPVersion = "IPv4" + // IPv6 ... + IPv6 IPVersion = "IPv6" +) + +// PossibleIPVersionValues returns an array of possible values for the IPVersion const type. +func PossibleIPVersionValues() []IPVersion { + return []IPVersion{IPv4, IPv6} +} + +// MaintenanceOperationResultCodeTypes enumerates the values for maintenance operation result code types. +type MaintenanceOperationResultCodeTypes string + +const ( + // MaintenanceOperationResultCodeTypesMaintenanceAborted ... + MaintenanceOperationResultCodeTypesMaintenanceAborted MaintenanceOperationResultCodeTypes = "MaintenanceAborted" + // MaintenanceOperationResultCodeTypesMaintenanceCompleted ... + MaintenanceOperationResultCodeTypesMaintenanceCompleted MaintenanceOperationResultCodeTypes = "MaintenanceCompleted" + // MaintenanceOperationResultCodeTypesNone ... + MaintenanceOperationResultCodeTypesNone MaintenanceOperationResultCodeTypes = "None" + // MaintenanceOperationResultCodeTypesRetryLater ... + MaintenanceOperationResultCodeTypesRetryLater MaintenanceOperationResultCodeTypes = "RetryLater" +) + +// PossibleMaintenanceOperationResultCodeTypesValues returns an array of possible values for the MaintenanceOperationResultCodeTypes const type. +func PossibleMaintenanceOperationResultCodeTypesValues() []MaintenanceOperationResultCodeTypes { + return []MaintenanceOperationResultCodeTypes{MaintenanceOperationResultCodeTypesMaintenanceAborted, MaintenanceOperationResultCodeTypesMaintenanceCompleted, MaintenanceOperationResultCodeTypesNone, MaintenanceOperationResultCodeTypesRetryLater} +} + +// OperatingSystemStateTypes enumerates the values for operating system state types. +type OperatingSystemStateTypes string + +const ( + // Generalized ... + Generalized OperatingSystemStateTypes = "Generalized" + // Specialized ... + Specialized OperatingSystemStateTypes = "Specialized" +) + +// PossibleOperatingSystemStateTypesValues returns an array of possible values for the OperatingSystemStateTypes const type. +func PossibleOperatingSystemStateTypesValues() []OperatingSystemStateTypes { + return []OperatingSystemStateTypes{Generalized, Specialized} +} + +// OperatingSystemTypes enumerates the values for operating system types. +type OperatingSystemTypes string + +const ( + // Linux ... + Linux OperatingSystemTypes = "Linux" + // Windows ... + Windows OperatingSystemTypes = "Windows" +) + +// PossibleOperatingSystemTypesValues returns an array of possible values for the OperatingSystemTypes const type. +func PossibleOperatingSystemTypesValues() []OperatingSystemTypes { + return []OperatingSystemTypes{Linux, Windows} +} + +// PassNames enumerates the values for pass names. +type PassNames string + +const ( + // OobeSystem ... 
+ OobeSystem PassNames = "OobeSystem" +) + +// PossiblePassNamesValues returns an array of possible values for the PassNames const type. +func PossiblePassNamesValues() []PassNames { + return []PassNames{OobeSystem} +} + +// ProtocolTypes enumerates the values for protocol types. +type ProtocolTypes string + +const ( + // HTTP ... + HTTP ProtocolTypes = "Http" + // HTTPS ... + HTTPS ProtocolTypes = "Https" +) + +// PossibleProtocolTypesValues returns an array of possible values for the ProtocolTypes const type. +func PossibleProtocolTypesValues() []ProtocolTypes { + return []ProtocolTypes{HTTP, HTTPS} +} + +// ResourceIdentityType enumerates the values for resource identity type. +type ResourceIdentityType string + +const ( + // ResourceIdentityTypeNone ... + ResourceIdentityTypeNone ResourceIdentityType = "None" + // ResourceIdentityTypeSystemAssigned ... + ResourceIdentityTypeSystemAssigned ResourceIdentityType = "SystemAssigned" + // ResourceIdentityTypeSystemAssignedUserAssigned ... + ResourceIdentityTypeSystemAssignedUserAssigned ResourceIdentityType = "SystemAssigned, UserAssigned" + // ResourceIdentityTypeUserAssigned ... + ResourceIdentityTypeUserAssigned ResourceIdentityType = "UserAssigned" +) + +// PossibleResourceIdentityTypeValues returns an array of possible values for the ResourceIdentityType const type. +func PossibleResourceIdentityTypeValues() []ResourceIdentityType { + return []ResourceIdentityType{ResourceIdentityTypeNone, ResourceIdentityTypeSystemAssigned, ResourceIdentityTypeSystemAssignedUserAssigned, ResourceIdentityTypeUserAssigned} +} + +// ResourceSkuCapacityScaleType enumerates the values for resource sku capacity scale type. +type ResourceSkuCapacityScaleType string + +const ( + // ResourceSkuCapacityScaleTypeAutomatic ... + ResourceSkuCapacityScaleTypeAutomatic ResourceSkuCapacityScaleType = "Automatic" + // ResourceSkuCapacityScaleTypeManual ... + ResourceSkuCapacityScaleTypeManual ResourceSkuCapacityScaleType = "Manual" + // ResourceSkuCapacityScaleTypeNone ... + ResourceSkuCapacityScaleTypeNone ResourceSkuCapacityScaleType = "None" +) + +// PossibleResourceSkuCapacityScaleTypeValues returns an array of possible values for the ResourceSkuCapacityScaleType const type. +func PossibleResourceSkuCapacityScaleTypeValues() []ResourceSkuCapacityScaleType { + return []ResourceSkuCapacityScaleType{ResourceSkuCapacityScaleTypeAutomatic, ResourceSkuCapacityScaleTypeManual, ResourceSkuCapacityScaleTypeNone} +} + +// ResourceSkuRestrictionsReasonCode enumerates the values for resource sku restrictions reason code. +type ResourceSkuRestrictionsReasonCode string + +const ( + // NotAvailableForSubscription ... + NotAvailableForSubscription ResourceSkuRestrictionsReasonCode = "NotAvailableForSubscription" + // QuotaID ... + QuotaID ResourceSkuRestrictionsReasonCode = "QuotaId" +) + +// PossibleResourceSkuRestrictionsReasonCodeValues returns an array of possible values for the ResourceSkuRestrictionsReasonCode const type. +func PossibleResourceSkuRestrictionsReasonCodeValues() []ResourceSkuRestrictionsReasonCode { + return []ResourceSkuRestrictionsReasonCode{NotAvailableForSubscription, QuotaID} +} + +// ResourceSkuRestrictionsType enumerates the values for resource sku restrictions type. +type ResourceSkuRestrictionsType string + +const ( + // Location ... + Location ResourceSkuRestrictionsType = "Location" + // Zone ... 
+ Zone ResourceSkuRestrictionsType = "Zone" +) + +// PossibleResourceSkuRestrictionsTypeValues returns an array of possible values for the ResourceSkuRestrictionsType const type. +func PossibleResourceSkuRestrictionsTypeValues() []ResourceSkuRestrictionsType { + return []ResourceSkuRestrictionsType{Location, Zone} +} + +// RollingUpgradeActionType enumerates the values for rolling upgrade action type. +type RollingUpgradeActionType string + +const ( + // Cancel ... + Cancel RollingUpgradeActionType = "Cancel" + // Start ... + Start RollingUpgradeActionType = "Start" +) + +// PossibleRollingUpgradeActionTypeValues returns an array of possible values for the RollingUpgradeActionType const type. +func PossibleRollingUpgradeActionTypeValues() []RollingUpgradeActionType { + return []RollingUpgradeActionType{Cancel, Start} +} + +// RollingUpgradeStatusCode enumerates the values for rolling upgrade status code. +type RollingUpgradeStatusCode string + +const ( + // Cancelled ... + Cancelled RollingUpgradeStatusCode = "Cancelled" + // Completed ... + Completed RollingUpgradeStatusCode = "Completed" + // Faulted ... + Faulted RollingUpgradeStatusCode = "Faulted" + // RollingForward ... + RollingForward RollingUpgradeStatusCode = "RollingForward" +) + +// PossibleRollingUpgradeStatusCodeValues returns an array of possible values for the RollingUpgradeStatusCode const type. +func PossibleRollingUpgradeStatusCodeValues() []RollingUpgradeStatusCode { + return []RollingUpgradeStatusCode{Cancelled, Completed, Faulted, RollingForward} +} + +// SettingNames enumerates the values for setting names. +type SettingNames string + +const ( + // AutoLogon ... + AutoLogon SettingNames = "AutoLogon" + // FirstLogonCommands ... + FirstLogonCommands SettingNames = "FirstLogonCommands" +) + +// PossibleSettingNamesValues returns an array of possible values for the SettingNames const type. +func PossibleSettingNamesValues() []SettingNames { + return []SettingNames{AutoLogon, FirstLogonCommands} +} + +// SnapshotStorageAccountTypes enumerates the values for snapshot storage account types. +type SnapshotStorageAccountTypes string + +const ( + // PremiumLRS ... + PremiumLRS SnapshotStorageAccountTypes = "Premium_LRS" + // StandardLRS ... + StandardLRS SnapshotStorageAccountTypes = "Standard_LRS" + // StandardZRS ... + StandardZRS SnapshotStorageAccountTypes = "Standard_ZRS" +) + +// PossibleSnapshotStorageAccountTypesValues returns an array of possible values for the SnapshotStorageAccountTypes const type. +func PossibleSnapshotStorageAccountTypesValues() []SnapshotStorageAccountTypes { + return []SnapshotStorageAccountTypes{PremiumLRS, StandardLRS, StandardZRS} +} + +// StatusLevelTypes enumerates the values for status level types. +type StatusLevelTypes string + +const ( + // Error ... + Error StatusLevelTypes = "Error" + // Info ... + Info StatusLevelTypes = "Info" + // Warning ... + Warning StatusLevelTypes = "Warning" +) + +// PossibleStatusLevelTypesValues returns an array of possible values for the StatusLevelTypes const type. +func PossibleStatusLevelTypesValues() []StatusLevelTypes { + return []StatusLevelTypes{Error, Info, Warning} +} + +// StorageAccountTypes enumerates the values for storage account types. +type StorageAccountTypes string + +const ( + // StorageAccountTypesPremiumLRS ... + StorageAccountTypesPremiumLRS StorageAccountTypes = "Premium_LRS" + // StorageAccountTypesStandardLRS ... 
+ StorageAccountTypesStandardLRS StorageAccountTypes = "Standard_LRS" +) + +// PossibleStorageAccountTypesValues returns an array of possible values for the StorageAccountTypes const type. +func PossibleStorageAccountTypesValues() []StorageAccountTypes { + return []StorageAccountTypes{StorageAccountTypesPremiumLRS, StorageAccountTypesStandardLRS} +} + +// UpgradeMode enumerates the values for upgrade mode. +type UpgradeMode string + +const ( + // Automatic ... + Automatic UpgradeMode = "Automatic" + // Manual ... + Manual UpgradeMode = "Manual" + // Rolling ... + Rolling UpgradeMode = "Rolling" +) + +// PossibleUpgradeModeValues returns an array of possible values for the UpgradeMode const type. +func PossibleUpgradeModeValues() []UpgradeMode { + return []UpgradeMode{Automatic, Manual, Rolling} +} + +// UpgradeOperationInvoker enumerates the values for upgrade operation invoker. +type UpgradeOperationInvoker string + +const ( + // Platform ... + Platform UpgradeOperationInvoker = "Platform" + // Unknown ... + Unknown UpgradeOperationInvoker = "Unknown" + // User ... + User UpgradeOperationInvoker = "User" +) + +// PossibleUpgradeOperationInvokerValues returns an array of possible values for the UpgradeOperationInvoker const type. +func PossibleUpgradeOperationInvokerValues() []UpgradeOperationInvoker { + return []UpgradeOperationInvoker{Platform, Unknown, User} +} + +// UpgradeState enumerates the values for upgrade state. +type UpgradeState string + +const ( + // UpgradeStateCancelled ... + UpgradeStateCancelled UpgradeState = "Cancelled" + // UpgradeStateCompleted ... + UpgradeStateCompleted UpgradeState = "Completed" + // UpgradeStateFaulted ... + UpgradeStateFaulted UpgradeState = "Faulted" + // UpgradeStateRollingForward ... + UpgradeStateRollingForward UpgradeState = "RollingForward" +) + +// PossibleUpgradeStateValues returns an array of possible values for the UpgradeState const type. +func PossibleUpgradeStateValues() []UpgradeState { + return []UpgradeState{UpgradeStateCancelled, UpgradeStateCompleted, UpgradeStateFaulted, UpgradeStateRollingForward} +} + +// VirtualMachineEvictionPolicyTypes enumerates the values for virtual machine eviction policy types. +type VirtualMachineEvictionPolicyTypes string + +const ( + // Deallocate ... + Deallocate VirtualMachineEvictionPolicyTypes = "Deallocate" + // Delete ... + Delete VirtualMachineEvictionPolicyTypes = "Delete" +) + +// PossibleVirtualMachineEvictionPolicyTypesValues returns an array of possible values for the VirtualMachineEvictionPolicyTypes const type. +func PossibleVirtualMachineEvictionPolicyTypesValues() []VirtualMachineEvictionPolicyTypes { + return []VirtualMachineEvictionPolicyTypes{Deallocate, Delete} +} + +// VirtualMachinePriorityTypes enumerates the values for virtual machine priority types. +type VirtualMachinePriorityTypes string + +const ( + // Low ... + Low VirtualMachinePriorityTypes = "Low" + // Regular ... + Regular VirtualMachinePriorityTypes = "Regular" +) + +// PossibleVirtualMachinePriorityTypesValues returns an array of possible values for the VirtualMachinePriorityTypes const type. +func PossibleVirtualMachinePriorityTypesValues() []VirtualMachinePriorityTypes { + return []VirtualMachinePriorityTypes{Low, Regular} +} + +// VirtualMachineScaleSetSkuScaleType enumerates the values for virtual machine scale set sku scale type. +type VirtualMachineScaleSetSkuScaleType string + +const ( + // VirtualMachineScaleSetSkuScaleTypeAutomatic ... 
+ VirtualMachineScaleSetSkuScaleTypeAutomatic VirtualMachineScaleSetSkuScaleType = "Automatic" + // VirtualMachineScaleSetSkuScaleTypeNone ... + VirtualMachineScaleSetSkuScaleTypeNone VirtualMachineScaleSetSkuScaleType = "None" +) + +// PossibleVirtualMachineScaleSetSkuScaleTypeValues returns an array of possible values for the VirtualMachineScaleSetSkuScaleType const type. +func PossibleVirtualMachineScaleSetSkuScaleTypeValues() []VirtualMachineScaleSetSkuScaleType { + return []VirtualMachineScaleSetSkuScaleType{VirtualMachineScaleSetSkuScaleTypeAutomatic, VirtualMachineScaleSetSkuScaleTypeNone} +} + +// VirtualMachineSizeTypes enumerates the values for virtual machine size types. +type VirtualMachineSizeTypes string + +const ( + // VirtualMachineSizeTypesBasicA0 ... + VirtualMachineSizeTypesBasicA0 VirtualMachineSizeTypes = "Basic_A0" + // VirtualMachineSizeTypesBasicA1 ... + VirtualMachineSizeTypesBasicA1 VirtualMachineSizeTypes = "Basic_A1" + // VirtualMachineSizeTypesBasicA2 ... + VirtualMachineSizeTypesBasicA2 VirtualMachineSizeTypes = "Basic_A2" + // VirtualMachineSizeTypesBasicA3 ... + VirtualMachineSizeTypesBasicA3 VirtualMachineSizeTypes = "Basic_A3" + // VirtualMachineSizeTypesBasicA4 ... + VirtualMachineSizeTypesBasicA4 VirtualMachineSizeTypes = "Basic_A4" + // VirtualMachineSizeTypesStandardA0 ... + VirtualMachineSizeTypesStandardA0 VirtualMachineSizeTypes = "Standard_A0" + // VirtualMachineSizeTypesStandardA1 ... + VirtualMachineSizeTypesStandardA1 VirtualMachineSizeTypes = "Standard_A1" + // VirtualMachineSizeTypesStandardA10 ... + VirtualMachineSizeTypesStandardA10 VirtualMachineSizeTypes = "Standard_A10" + // VirtualMachineSizeTypesStandardA11 ... + VirtualMachineSizeTypesStandardA11 VirtualMachineSizeTypes = "Standard_A11" + // VirtualMachineSizeTypesStandardA1V2 ... + VirtualMachineSizeTypesStandardA1V2 VirtualMachineSizeTypes = "Standard_A1_v2" + // VirtualMachineSizeTypesStandardA2 ... + VirtualMachineSizeTypesStandardA2 VirtualMachineSizeTypes = "Standard_A2" + // VirtualMachineSizeTypesStandardA2mV2 ... + VirtualMachineSizeTypesStandardA2mV2 VirtualMachineSizeTypes = "Standard_A2m_v2" + // VirtualMachineSizeTypesStandardA2V2 ... + VirtualMachineSizeTypesStandardA2V2 VirtualMachineSizeTypes = "Standard_A2_v2" + // VirtualMachineSizeTypesStandardA3 ... + VirtualMachineSizeTypesStandardA3 VirtualMachineSizeTypes = "Standard_A3" + // VirtualMachineSizeTypesStandardA4 ... + VirtualMachineSizeTypesStandardA4 VirtualMachineSizeTypes = "Standard_A4" + // VirtualMachineSizeTypesStandardA4mV2 ... + VirtualMachineSizeTypesStandardA4mV2 VirtualMachineSizeTypes = "Standard_A4m_v2" + // VirtualMachineSizeTypesStandardA4V2 ... + VirtualMachineSizeTypesStandardA4V2 VirtualMachineSizeTypes = "Standard_A4_v2" + // VirtualMachineSizeTypesStandardA5 ... + VirtualMachineSizeTypesStandardA5 VirtualMachineSizeTypes = "Standard_A5" + // VirtualMachineSizeTypesStandardA6 ... + VirtualMachineSizeTypesStandardA6 VirtualMachineSizeTypes = "Standard_A6" + // VirtualMachineSizeTypesStandardA7 ... + VirtualMachineSizeTypesStandardA7 VirtualMachineSizeTypes = "Standard_A7" + // VirtualMachineSizeTypesStandardA8 ... + VirtualMachineSizeTypesStandardA8 VirtualMachineSizeTypes = "Standard_A8" + // VirtualMachineSizeTypesStandardA8mV2 ... + VirtualMachineSizeTypesStandardA8mV2 VirtualMachineSizeTypes = "Standard_A8m_v2" + // VirtualMachineSizeTypesStandardA8V2 ... + VirtualMachineSizeTypesStandardA8V2 VirtualMachineSizeTypes = "Standard_A8_v2" + // VirtualMachineSizeTypesStandardA9 ... 
+ VirtualMachineSizeTypesStandardA9 VirtualMachineSizeTypes = "Standard_A9" + // VirtualMachineSizeTypesStandardB1ms ... + VirtualMachineSizeTypesStandardB1ms VirtualMachineSizeTypes = "Standard_B1ms" + // VirtualMachineSizeTypesStandardB1s ... + VirtualMachineSizeTypesStandardB1s VirtualMachineSizeTypes = "Standard_B1s" + // VirtualMachineSizeTypesStandardB2ms ... + VirtualMachineSizeTypesStandardB2ms VirtualMachineSizeTypes = "Standard_B2ms" + // VirtualMachineSizeTypesStandardB2s ... + VirtualMachineSizeTypesStandardB2s VirtualMachineSizeTypes = "Standard_B2s" + // VirtualMachineSizeTypesStandardB4ms ... + VirtualMachineSizeTypesStandardB4ms VirtualMachineSizeTypes = "Standard_B4ms" + // VirtualMachineSizeTypesStandardB8ms ... + VirtualMachineSizeTypesStandardB8ms VirtualMachineSizeTypes = "Standard_B8ms" + // VirtualMachineSizeTypesStandardD1 ... + VirtualMachineSizeTypesStandardD1 VirtualMachineSizeTypes = "Standard_D1" + // VirtualMachineSizeTypesStandardD11 ... + VirtualMachineSizeTypesStandardD11 VirtualMachineSizeTypes = "Standard_D11" + // VirtualMachineSizeTypesStandardD11V2 ... + VirtualMachineSizeTypesStandardD11V2 VirtualMachineSizeTypes = "Standard_D11_v2" + // VirtualMachineSizeTypesStandardD12 ... + VirtualMachineSizeTypesStandardD12 VirtualMachineSizeTypes = "Standard_D12" + // VirtualMachineSizeTypesStandardD12V2 ... + VirtualMachineSizeTypesStandardD12V2 VirtualMachineSizeTypes = "Standard_D12_v2" + // VirtualMachineSizeTypesStandardD13 ... + VirtualMachineSizeTypesStandardD13 VirtualMachineSizeTypes = "Standard_D13" + // VirtualMachineSizeTypesStandardD13V2 ... + VirtualMachineSizeTypesStandardD13V2 VirtualMachineSizeTypes = "Standard_D13_v2" + // VirtualMachineSizeTypesStandardD14 ... + VirtualMachineSizeTypesStandardD14 VirtualMachineSizeTypes = "Standard_D14" + // VirtualMachineSizeTypesStandardD14V2 ... + VirtualMachineSizeTypesStandardD14V2 VirtualMachineSizeTypes = "Standard_D14_v2" + // VirtualMachineSizeTypesStandardD15V2 ... + VirtualMachineSizeTypesStandardD15V2 VirtualMachineSizeTypes = "Standard_D15_v2" + // VirtualMachineSizeTypesStandardD16sV3 ... + VirtualMachineSizeTypesStandardD16sV3 VirtualMachineSizeTypes = "Standard_D16s_v3" + // VirtualMachineSizeTypesStandardD16V3 ... + VirtualMachineSizeTypesStandardD16V3 VirtualMachineSizeTypes = "Standard_D16_v3" + // VirtualMachineSizeTypesStandardD1V2 ... + VirtualMachineSizeTypesStandardD1V2 VirtualMachineSizeTypes = "Standard_D1_v2" + // VirtualMachineSizeTypesStandardD2 ... + VirtualMachineSizeTypesStandardD2 VirtualMachineSizeTypes = "Standard_D2" + // VirtualMachineSizeTypesStandardD2sV3 ... + VirtualMachineSizeTypesStandardD2sV3 VirtualMachineSizeTypes = "Standard_D2s_v3" + // VirtualMachineSizeTypesStandardD2V2 ... + VirtualMachineSizeTypesStandardD2V2 VirtualMachineSizeTypes = "Standard_D2_v2" + // VirtualMachineSizeTypesStandardD2V3 ... + VirtualMachineSizeTypesStandardD2V3 VirtualMachineSizeTypes = "Standard_D2_v3" + // VirtualMachineSizeTypesStandardD3 ... + VirtualMachineSizeTypesStandardD3 VirtualMachineSizeTypes = "Standard_D3" + // VirtualMachineSizeTypesStandardD32sV3 ... + VirtualMachineSizeTypesStandardD32sV3 VirtualMachineSizeTypes = "Standard_D32s_v3" + // VirtualMachineSizeTypesStandardD32V3 ... + VirtualMachineSizeTypesStandardD32V3 VirtualMachineSizeTypes = "Standard_D32_v3" + // VirtualMachineSizeTypesStandardD3V2 ... + VirtualMachineSizeTypesStandardD3V2 VirtualMachineSizeTypes = "Standard_D3_v2" + // VirtualMachineSizeTypesStandardD4 ... 
+ VirtualMachineSizeTypesStandardD4 VirtualMachineSizeTypes = "Standard_D4" + // VirtualMachineSizeTypesStandardD4sV3 ... + VirtualMachineSizeTypesStandardD4sV3 VirtualMachineSizeTypes = "Standard_D4s_v3" + // VirtualMachineSizeTypesStandardD4V2 ... + VirtualMachineSizeTypesStandardD4V2 VirtualMachineSizeTypes = "Standard_D4_v2" + // VirtualMachineSizeTypesStandardD4V3 ... + VirtualMachineSizeTypesStandardD4V3 VirtualMachineSizeTypes = "Standard_D4_v3" + // VirtualMachineSizeTypesStandardD5V2 ... + VirtualMachineSizeTypesStandardD5V2 VirtualMachineSizeTypes = "Standard_D5_v2" + // VirtualMachineSizeTypesStandardD64sV3 ... + VirtualMachineSizeTypesStandardD64sV3 VirtualMachineSizeTypes = "Standard_D64s_v3" + // VirtualMachineSizeTypesStandardD64V3 ... + VirtualMachineSizeTypesStandardD64V3 VirtualMachineSizeTypes = "Standard_D64_v3" + // VirtualMachineSizeTypesStandardD8sV3 ... + VirtualMachineSizeTypesStandardD8sV3 VirtualMachineSizeTypes = "Standard_D8s_v3" + // VirtualMachineSizeTypesStandardD8V3 ... + VirtualMachineSizeTypesStandardD8V3 VirtualMachineSizeTypes = "Standard_D8_v3" + // VirtualMachineSizeTypesStandardDS1 ... + VirtualMachineSizeTypesStandardDS1 VirtualMachineSizeTypes = "Standard_DS1" + // VirtualMachineSizeTypesStandardDS11 ... + VirtualMachineSizeTypesStandardDS11 VirtualMachineSizeTypes = "Standard_DS11" + // VirtualMachineSizeTypesStandardDS11V2 ... + VirtualMachineSizeTypesStandardDS11V2 VirtualMachineSizeTypes = "Standard_DS11_v2" + // VirtualMachineSizeTypesStandardDS12 ... + VirtualMachineSizeTypesStandardDS12 VirtualMachineSizeTypes = "Standard_DS12" + // VirtualMachineSizeTypesStandardDS12V2 ... + VirtualMachineSizeTypesStandardDS12V2 VirtualMachineSizeTypes = "Standard_DS12_v2" + // VirtualMachineSizeTypesStandardDS13 ... + VirtualMachineSizeTypesStandardDS13 VirtualMachineSizeTypes = "Standard_DS13" + // VirtualMachineSizeTypesStandardDS132V2 ... + VirtualMachineSizeTypesStandardDS132V2 VirtualMachineSizeTypes = "Standard_DS13-2_v2" + // VirtualMachineSizeTypesStandardDS134V2 ... + VirtualMachineSizeTypesStandardDS134V2 VirtualMachineSizeTypes = "Standard_DS13-4_v2" + // VirtualMachineSizeTypesStandardDS13V2 ... + VirtualMachineSizeTypesStandardDS13V2 VirtualMachineSizeTypes = "Standard_DS13_v2" + // VirtualMachineSizeTypesStandardDS14 ... + VirtualMachineSizeTypesStandardDS14 VirtualMachineSizeTypes = "Standard_DS14" + // VirtualMachineSizeTypesStandardDS144V2 ... + VirtualMachineSizeTypesStandardDS144V2 VirtualMachineSizeTypes = "Standard_DS14-4_v2" + // VirtualMachineSizeTypesStandardDS148V2 ... + VirtualMachineSizeTypesStandardDS148V2 VirtualMachineSizeTypes = "Standard_DS14-8_v2" + // VirtualMachineSizeTypesStandardDS14V2 ... + VirtualMachineSizeTypesStandardDS14V2 VirtualMachineSizeTypes = "Standard_DS14_v2" + // VirtualMachineSizeTypesStandardDS15V2 ... + VirtualMachineSizeTypesStandardDS15V2 VirtualMachineSizeTypes = "Standard_DS15_v2" + // VirtualMachineSizeTypesStandardDS1V2 ... + VirtualMachineSizeTypesStandardDS1V2 VirtualMachineSizeTypes = "Standard_DS1_v2" + // VirtualMachineSizeTypesStandardDS2 ... + VirtualMachineSizeTypesStandardDS2 VirtualMachineSizeTypes = "Standard_DS2" + // VirtualMachineSizeTypesStandardDS2V2 ... + VirtualMachineSizeTypesStandardDS2V2 VirtualMachineSizeTypes = "Standard_DS2_v2" + // VirtualMachineSizeTypesStandardDS3 ... + VirtualMachineSizeTypesStandardDS3 VirtualMachineSizeTypes = "Standard_DS3" + // VirtualMachineSizeTypesStandardDS3V2 ... 
+ VirtualMachineSizeTypesStandardDS3V2 VirtualMachineSizeTypes = "Standard_DS3_v2" + // VirtualMachineSizeTypesStandardDS4 ... + VirtualMachineSizeTypesStandardDS4 VirtualMachineSizeTypes = "Standard_DS4" + // VirtualMachineSizeTypesStandardDS4V2 ... + VirtualMachineSizeTypesStandardDS4V2 VirtualMachineSizeTypes = "Standard_DS4_v2" + // VirtualMachineSizeTypesStandardDS5V2 ... + VirtualMachineSizeTypesStandardDS5V2 VirtualMachineSizeTypes = "Standard_DS5_v2" + // VirtualMachineSizeTypesStandardE16sV3 ... + VirtualMachineSizeTypesStandardE16sV3 VirtualMachineSizeTypes = "Standard_E16s_v3" + // VirtualMachineSizeTypesStandardE16V3 ... + VirtualMachineSizeTypesStandardE16V3 VirtualMachineSizeTypes = "Standard_E16_v3" + // VirtualMachineSizeTypesStandardE2sV3 ... + VirtualMachineSizeTypesStandardE2sV3 VirtualMachineSizeTypes = "Standard_E2s_v3" + // VirtualMachineSizeTypesStandardE2V3 ... + VirtualMachineSizeTypesStandardE2V3 VirtualMachineSizeTypes = "Standard_E2_v3" + // VirtualMachineSizeTypesStandardE3216V3 ... + VirtualMachineSizeTypesStandardE3216V3 VirtualMachineSizeTypes = "Standard_E32-16_v3" + // VirtualMachineSizeTypesStandardE328sV3 ... + VirtualMachineSizeTypesStandardE328sV3 VirtualMachineSizeTypes = "Standard_E32-8s_v3" + // VirtualMachineSizeTypesStandardE32sV3 ... + VirtualMachineSizeTypesStandardE32sV3 VirtualMachineSizeTypes = "Standard_E32s_v3" + // VirtualMachineSizeTypesStandardE32V3 ... + VirtualMachineSizeTypesStandardE32V3 VirtualMachineSizeTypes = "Standard_E32_v3" + // VirtualMachineSizeTypesStandardE4sV3 ... + VirtualMachineSizeTypesStandardE4sV3 VirtualMachineSizeTypes = "Standard_E4s_v3" + // VirtualMachineSizeTypesStandardE4V3 ... + VirtualMachineSizeTypesStandardE4V3 VirtualMachineSizeTypes = "Standard_E4_v3" + // VirtualMachineSizeTypesStandardE6416sV3 ... + VirtualMachineSizeTypesStandardE6416sV3 VirtualMachineSizeTypes = "Standard_E64-16s_v3" + // VirtualMachineSizeTypesStandardE6432sV3 ... + VirtualMachineSizeTypesStandardE6432sV3 VirtualMachineSizeTypes = "Standard_E64-32s_v3" + // VirtualMachineSizeTypesStandardE64sV3 ... + VirtualMachineSizeTypesStandardE64sV3 VirtualMachineSizeTypes = "Standard_E64s_v3" + // VirtualMachineSizeTypesStandardE64V3 ... + VirtualMachineSizeTypesStandardE64V3 VirtualMachineSizeTypes = "Standard_E64_v3" + // VirtualMachineSizeTypesStandardE8sV3 ... + VirtualMachineSizeTypesStandardE8sV3 VirtualMachineSizeTypes = "Standard_E8s_v3" + // VirtualMachineSizeTypesStandardE8V3 ... + VirtualMachineSizeTypesStandardE8V3 VirtualMachineSizeTypes = "Standard_E8_v3" + // VirtualMachineSizeTypesStandardF1 ... + VirtualMachineSizeTypesStandardF1 VirtualMachineSizeTypes = "Standard_F1" + // VirtualMachineSizeTypesStandardF16 ... + VirtualMachineSizeTypesStandardF16 VirtualMachineSizeTypes = "Standard_F16" + // VirtualMachineSizeTypesStandardF16s ... + VirtualMachineSizeTypesStandardF16s VirtualMachineSizeTypes = "Standard_F16s" + // VirtualMachineSizeTypesStandardF16sV2 ... + VirtualMachineSizeTypesStandardF16sV2 VirtualMachineSizeTypes = "Standard_F16s_v2" + // VirtualMachineSizeTypesStandardF1s ... + VirtualMachineSizeTypesStandardF1s VirtualMachineSizeTypes = "Standard_F1s" + // VirtualMachineSizeTypesStandardF2 ... + VirtualMachineSizeTypesStandardF2 VirtualMachineSizeTypes = "Standard_F2" + // VirtualMachineSizeTypesStandardF2s ... + VirtualMachineSizeTypesStandardF2s VirtualMachineSizeTypes = "Standard_F2s" + // VirtualMachineSizeTypesStandardF2sV2 ... 
+ VirtualMachineSizeTypesStandardF2sV2 VirtualMachineSizeTypes = "Standard_F2s_v2" + // VirtualMachineSizeTypesStandardF32sV2 ... + VirtualMachineSizeTypesStandardF32sV2 VirtualMachineSizeTypes = "Standard_F32s_v2" + // VirtualMachineSizeTypesStandardF4 ... + VirtualMachineSizeTypesStandardF4 VirtualMachineSizeTypes = "Standard_F4" + // VirtualMachineSizeTypesStandardF4s ... + VirtualMachineSizeTypesStandardF4s VirtualMachineSizeTypes = "Standard_F4s" + // VirtualMachineSizeTypesStandardF4sV2 ... + VirtualMachineSizeTypesStandardF4sV2 VirtualMachineSizeTypes = "Standard_F4s_v2" + // VirtualMachineSizeTypesStandardF64sV2 ... + VirtualMachineSizeTypesStandardF64sV2 VirtualMachineSizeTypes = "Standard_F64s_v2" + // VirtualMachineSizeTypesStandardF72sV2 ... + VirtualMachineSizeTypesStandardF72sV2 VirtualMachineSizeTypes = "Standard_F72s_v2" + // VirtualMachineSizeTypesStandardF8 ... + VirtualMachineSizeTypesStandardF8 VirtualMachineSizeTypes = "Standard_F8" + // VirtualMachineSizeTypesStandardF8s ... + VirtualMachineSizeTypesStandardF8s VirtualMachineSizeTypes = "Standard_F8s" + // VirtualMachineSizeTypesStandardF8sV2 ... + VirtualMachineSizeTypesStandardF8sV2 VirtualMachineSizeTypes = "Standard_F8s_v2" + // VirtualMachineSizeTypesStandardG1 ... + VirtualMachineSizeTypesStandardG1 VirtualMachineSizeTypes = "Standard_G1" + // VirtualMachineSizeTypesStandardG2 ... + VirtualMachineSizeTypesStandardG2 VirtualMachineSizeTypes = "Standard_G2" + // VirtualMachineSizeTypesStandardG3 ... + VirtualMachineSizeTypesStandardG3 VirtualMachineSizeTypes = "Standard_G3" + // VirtualMachineSizeTypesStandardG4 ... + VirtualMachineSizeTypesStandardG4 VirtualMachineSizeTypes = "Standard_G4" + // VirtualMachineSizeTypesStandardG5 ... + VirtualMachineSizeTypesStandardG5 VirtualMachineSizeTypes = "Standard_G5" + // VirtualMachineSizeTypesStandardGS1 ... + VirtualMachineSizeTypesStandardGS1 VirtualMachineSizeTypes = "Standard_GS1" + // VirtualMachineSizeTypesStandardGS2 ... + VirtualMachineSizeTypesStandardGS2 VirtualMachineSizeTypes = "Standard_GS2" + // VirtualMachineSizeTypesStandardGS3 ... + VirtualMachineSizeTypesStandardGS3 VirtualMachineSizeTypes = "Standard_GS3" + // VirtualMachineSizeTypesStandardGS4 ... + VirtualMachineSizeTypesStandardGS4 VirtualMachineSizeTypes = "Standard_GS4" + // VirtualMachineSizeTypesStandardGS44 ... + VirtualMachineSizeTypesStandardGS44 VirtualMachineSizeTypes = "Standard_GS4-4" + // VirtualMachineSizeTypesStandardGS48 ... + VirtualMachineSizeTypesStandardGS48 VirtualMachineSizeTypes = "Standard_GS4-8" + // VirtualMachineSizeTypesStandardGS5 ... + VirtualMachineSizeTypesStandardGS5 VirtualMachineSizeTypes = "Standard_GS5" + // VirtualMachineSizeTypesStandardGS516 ... + VirtualMachineSizeTypesStandardGS516 VirtualMachineSizeTypes = "Standard_GS5-16" + // VirtualMachineSizeTypesStandardGS58 ... + VirtualMachineSizeTypesStandardGS58 VirtualMachineSizeTypes = "Standard_GS5-8" + // VirtualMachineSizeTypesStandardH16 ... + VirtualMachineSizeTypesStandardH16 VirtualMachineSizeTypes = "Standard_H16" + // VirtualMachineSizeTypesStandardH16m ... + VirtualMachineSizeTypesStandardH16m VirtualMachineSizeTypes = "Standard_H16m" + // VirtualMachineSizeTypesStandardH16mr ... + VirtualMachineSizeTypesStandardH16mr VirtualMachineSizeTypes = "Standard_H16mr" + // VirtualMachineSizeTypesStandardH16r ... + VirtualMachineSizeTypesStandardH16r VirtualMachineSizeTypes = "Standard_H16r" + // VirtualMachineSizeTypesStandardH8 ... 
+ VirtualMachineSizeTypesStandardH8 VirtualMachineSizeTypes = "Standard_H8" + // VirtualMachineSizeTypesStandardH8m ... + VirtualMachineSizeTypesStandardH8m VirtualMachineSizeTypes = "Standard_H8m" + // VirtualMachineSizeTypesStandardL16s ... + VirtualMachineSizeTypesStandardL16s VirtualMachineSizeTypes = "Standard_L16s" + // VirtualMachineSizeTypesStandardL32s ... + VirtualMachineSizeTypesStandardL32s VirtualMachineSizeTypes = "Standard_L32s" + // VirtualMachineSizeTypesStandardL4s ... + VirtualMachineSizeTypesStandardL4s VirtualMachineSizeTypes = "Standard_L4s" + // VirtualMachineSizeTypesStandardL8s ... + VirtualMachineSizeTypesStandardL8s VirtualMachineSizeTypes = "Standard_L8s" + // VirtualMachineSizeTypesStandardM12832ms ... + VirtualMachineSizeTypesStandardM12832ms VirtualMachineSizeTypes = "Standard_M128-32ms" + // VirtualMachineSizeTypesStandardM12864ms ... + VirtualMachineSizeTypesStandardM12864ms VirtualMachineSizeTypes = "Standard_M128-64ms" + // VirtualMachineSizeTypesStandardM128ms ... + VirtualMachineSizeTypesStandardM128ms VirtualMachineSizeTypes = "Standard_M128ms" + // VirtualMachineSizeTypesStandardM128s ... + VirtualMachineSizeTypesStandardM128s VirtualMachineSizeTypes = "Standard_M128s" + // VirtualMachineSizeTypesStandardM6416ms ... + VirtualMachineSizeTypesStandardM6416ms VirtualMachineSizeTypes = "Standard_M64-16ms" + // VirtualMachineSizeTypesStandardM6432ms ... + VirtualMachineSizeTypesStandardM6432ms VirtualMachineSizeTypes = "Standard_M64-32ms" + // VirtualMachineSizeTypesStandardM64ms ... + VirtualMachineSizeTypesStandardM64ms VirtualMachineSizeTypes = "Standard_M64ms" + // VirtualMachineSizeTypesStandardM64s ... + VirtualMachineSizeTypesStandardM64s VirtualMachineSizeTypes = "Standard_M64s" + // VirtualMachineSizeTypesStandardNC12 ... + VirtualMachineSizeTypesStandardNC12 VirtualMachineSizeTypes = "Standard_NC12" + // VirtualMachineSizeTypesStandardNC12sV2 ... + VirtualMachineSizeTypesStandardNC12sV2 VirtualMachineSizeTypes = "Standard_NC12s_v2" + // VirtualMachineSizeTypesStandardNC12sV3 ... + VirtualMachineSizeTypesStandardNC12sV3 VirtualMachineSizeTypes = "Standard_NC12s_v3" + // VirtualMachineSizeTypesStandardNC24 ... + VirtualMachineSizeTypesStandardNC24 VirtualMachineSizeTypes = "Standard_NC24" + // VirtualMachineSizeTypesStandardNC24r ... + VirtualMachineSizeTypesStandardNC24r VirtualMachineSizeTypes = "Standard_NC24r" + // VirtualMachineSizeTypesStandardNC24rsV2 ... + VirtualMachineSizeTypesStandardNC24rsV2 VirtualMachineSizeTypes = "Standard_NC24rs_v2" + // VirtualMachineSizeTypesStandardNC24rsV3 ... + VirtualMachineSizeTypesStandardNC24rsV3 VirtualMachineSizeTypes = "Standard_NC24rs_v3" + // VirtualMachineSizeTypesStandardNC24sV2 ... + VirtualMachineSizeTypesStandardNC24sV2 VirtualMachineSizeTypes = "Standard_NC24s_v2" + // VirtualMachineSizeTypesStandardNC24sV3 ... + VirtualMachineSizeTypesStandardNC24sV3 VirtualMachineSizeTypes = "Standard_NC24s_v3" + // VirtualMachineSizeTypesStandardNC6 ... + VirtualMachineSizeTypesStandardNC6 VirtualMachineSizeTypes = "Standard_NC6" + // VirtualMachineSizeTypesStandardNC6sV2 ... + VirtualMachineSizeTypesStandardNC6sV2 VirtualMachineSizeTypes = "Standard_NC6s_v2" + // VirtualMachineSizeTypesStandardNC6sV3 ... + VirtualMachineSizeTypesStandardNC6sV3 VirtualMachineSizeTypes = "Standard_NC6s_v3" + // VirtualMachineSizeTypesStandardND12s ... + VirtualMachineSizeTypesStandardND12s VirtualMachineSizeTypes = "Standard_ND12s" + // VirtualMachineSizeTypesStandardND24rs ... 
+ VirtualMachineSizeTypesStandardND24rs VirtualMachineSizeTypes = "Standard_ND24rs" + // VirtualMachineSizeTypesStandardND24s ... + VirtualMachineSizeTypesStandardND24s VirtualMachineSizeTypes = "Standard_ND24s" + // VirtualMachineSizeTypesStandardND6s ... + VirtualMachineSizeTypesStandardND6s VirtualMachineSizeTypes = "Standard_ND6s" + // VirtualMachineSizeTypesStandardNV12 ... + VirtualMachineSizeTypesStandardNV12 VirtualMachineSizeTypes = "Standard_NV12" + // VirtualMachineSizeTypesStandardNV24 ... + VirtualMachineSizeTypesStandardNV24 VirtualMachineSizeTypes = "Standard_NV24" + // VirtualMachineSizeTypesStandardNV6 ... + VirtualMachineSizeTypesStandardNV6 VirtualMachineSizeTypes = "Standard_NV6" +) + +// PossibleVirtualMachineSizeTypesValues returns an array of possible values for the VirtualMachineSizeTypes const type. +func PossibleVirtualMachineSizeTypesValues() []VirtualMachineSizeTypes { + return []VirtualMachineSizeTypes{VirtualMachineSizeTypesBasicA0, VirtualMachineSizeTypesBasicA1, VirtualMachineSizeTypesBasicA2, VirtualMachineSizeTypesBasicA3, VirtualMachineSizeTypesBasicA4, VirtualMachineSizeTypesStandardA0, VirtualMachineSizeTypesStandardA1, VirtualMachineSizeTypesStandardA10, VirtualMachineSizeTypesStandardA11, VirtualMachineSizeTypesStandardA1V2, VirtualMachineSizeTypesStandardA2, VirtualMachineSizeTypesStandardA2mV2, VirtualMachineSizeTypesStandardA2V2, VirtualMachineSizeTypesStandardA3, VirtualMachineSizeTypesStandardA4, VirtualMachineSizeTypesStandardA4mV2, VirtualMachineSizeTypesStandardA4V2, VirtualMachineSizeTypesStandardA5, VirtualMachineSizeTypesStandardA6, VirtualMachineSizeTypesStandardA7, VirtualMachineSizeTypesStandardA8, VirtualMachineSizeTypesStandardA8mV2, VirtualMachineSizeTypesStandardA8V2, VirtualMachineSizeTypesStandardA9, VirtualMachineSizeTypesStandardB1ms, VirtualMachineSizeTypesStandardB1s, VirtualMachineSizeTypesStandardB2ms, VirtualMachineSizeTypesStandardB2s, VirtualMachineSizeTypesStandardB4ms, VirtualMachineSizeTypesStandardB8ms, VirtualMachineSizeTypesStandardD1, VirtualMachineSizeTypesStandardD11, VirtualMachineSizeTypesStandardD11V2, VirtualMachineSizeTypesStandardD12, VirtualMachineSizeTypesStandardD12V2, VirtualMachineSizeTypesStandardD13, VirtualMachineSizeTypesStandardD13V2, VirtualMachineSizeTypesStandardD14, VirtualMachineSizeTypesStandardD14V2, VirtualMachineSizeTypesStandardD15V2, VirtualMachineSizeTypesStandardD16sV3, VirtualMachineSizeTypesStandardD16V3, VirtualMachineSizeTypesStandardD1V2, VirtualMachineSizeTypesStandardD2, VirtualMachineSizeTypesStandardD2sV3, VirtualMachineSizeTypesStandardD2V2, VirtualMachineSizeTypesStandardD2V3, VirtualMachineSizeTypesStandardD3, VirtualMachineSizeTypesStandardD32sV3, VirtualMachineSizeTypesStandardD32V3, VirtualMachineSizeTypesStandardD3V2, VirtualMachineSizeTypesStandardD4, VirtualMachineSizeTypesStandardD4sV3, VirtualMachineSizeTypesStandardD4V2, VirtualMachineSizeTypesStandardD4V3, VirtualMachineSizeTypesStandardD5V2, VirtualMachineSizeTypesStandardD64sV3, VirtualMachineSizeTypesStandardD64V3, VirtualMachineSizeTypesStandardD8sV3, VirtualMachineSizeTypesStandardD8V3, VirtualMachineSizeTypesStandardDS1, VirtualMachineSizeTypesStandardDS11, VirtualMachineSizeTypesStandardDS11V2, VirtualMachineSizeTypesStandardDS12, VirtualMachineSizeTypesStandardDS12V2, VirtualMachineSizeTypesStandardDS13, VirtualMachineSizeTypesStandardDS132V2, VirtualMachineSizeTypesStandardDS134V2, VirtualMachineSizeTypesStandardDS13V2, VirtualMachineSizeTypesStandardDS14, VirtualMachineSizeTypesStandardDS144V2, 
VirtualMachineSizeTypesStandardDS148V2, VirtualMachineSizeTypesStandardDS14V2, VirtualMachineSizeTypesStandardDS15V2, VirtualMachineSizeTypesStandardDS1V2, VirtualMachineSizeTypesStandardDS2, VirtualMachineSizeTypesStandardDS2V2, VirtualMachineSizeTypesStandardDS3, VirtualMachineSizeTypesStandardDS3V2, VirtualMachineSizeTypesStandardDS4, VirtualMachineSizeTypesStandardDS4V2, VirtualMachineSizeTypesStandardDS5V2, VirtualMachineSizeTypesStandardE16sV3, VirtualMachineSizeTypesStandardE16V3, VirtualMachineSizeTypesStandardE2sV3, VirtualMachineSizeTypesStandardE2V3, VirtualMachineSizeTypesStandardE3216V3, VirtualMachineSizeTypesStandardE328sV3, VirtualMachineSizeTypesStandardE32sV3, VirtualMachineSizeTypesStandardE32V3, VirtualMachineSizeTypesStandardE4sV3, VirtualMachineSizeTypesStandardE4V3, VirtualMachineSizeTypesStandardE6416sV3, VirtualMachineSizeTypesStandardE6432sV3, VirtualMachineSizeTypesStandardE64sV3, VirtualMachineSizeTypesStandardE64V3, VirtualMachineSizeTypesStandardE8sV3, VirtualMachineSizeTypesStandardE8V3, VirtualMachineSizeTypesStandardF1, VirtualMachineSizeTypesStandardF16, VirtualMachineSizeTypesStandardF16s, VirtualMachineSizeTypesStandardF16sV2, VirtualMachineSizeTypesStandardF1s, VirtualMachineSizeTypesStandardF2, VirtualMachineSizeTypesStandardF2s, VirtualMachineSizeTypesStandardF2sV2, VirtualMachineSizeTypesStandardF32sV2, VirtualMachineSizeTypesStandardF4, VirtualMachineSizeTypesStandardF4s, VirtualMachineSizeTypesStandardF4sV2, VirtualMachineSizeTypesStandardF64sV2, VirtualMachineSizeTypesStandardF72sV2, VirtualMachineSizeTypesStandardF8, VirtualMachineSizeTypesStandardF8s, VirtualMachineSizeTypesStandardF8sV2, VirtualMachineSizeTypesStandardG1, VirtualMachineSizeTypesStandardG2, VirtualMachineSizeTypesStandardG3, VirtualMachineSizeTypesStandardG4, VirtualMachineSizeTypesStandardG5, VirtualMachineSizeTypesStandardGS1, VirtualMachineSizeTypesStandardGS2, VirtualMachineSizeTypesStandardGS3, VirtualMachineSizeTypesStandardGS4, VirtualMachineSizeTypesStandardGS44, VirtualMachineSizeTypesStandardGS48, VirtualMachineSizeTypesStandardGS5, VirtualMachineSizeTypesStandardGS516, VirtualMachineSizeTypesStandardGS58, VirtualMachineSizeTypesStandardH16, VirtualMachineSizeTypesStandardH16m, VirtualMachineSizeTypesStandardH16mr, VirtualMachineSizeTypesStandardH16r, VirtualMachineSizeTypesStandardH8, VirtualMachineSizeTypesStandardH8m, VirtualMachineSizeTypesStandardL16s, VirtualMachineSizeTypesStandardL32s, VirtualMachineSizeTypesStandardL4s, VirtualMachineSizeTypesStandardL8s, VirtualMachineSizeTypesStandardM12832ms, VirtualMachineSizeTypesStandardM12864ms, VirtualMachineSizeTypesStandardM128ms, VirtualMachineSizeTypesStandardM128s, VirtualMachineSizeTypesStandardM6416ms, VirtualMachineSizeTypesStandardM6432ms, VirtualMachineSizeTypesStandardM64ms, VirtualMachineSizeTypesStandardM64s, VirtualMachineSizeTypesStandardNC12, VirtualMachineSizeTypesStandardNC12sV2, VirtualMachineSizeTypesStandardNC12sV3, VirtualMachineSizeTypesStandardNC24, VirtualMachineSizeTypesStandardNC24r, VirtualMachineSizeTypesStandardNC24rsV2, VirtualMachineSizeTypesStandardNC24rsV3, VirtualMachineSizeTypesStandardNC24sV2, VirtualMachineSizeTypesStandardNC24sV3, VirtualMachineSizeTypesStandardNC6, VirtualMachineSizeTypesStandardNC6sV2, VirtualMachineSizeTypesStandardNC6sV3, VirtualMachineSizeTypesStandardND12s, VirtualMachineSizeTypesStandardND24rs, VirtualMachineSizeTypesStandardND24s, VirtualMachineSizeTypesStandardND6s, VirtualMachineSizeTypesStandardNV12, VirtualMachineSizeTypesStandardNV24, 
VirtualMachineSizeTypesStandardNV6} +} + +// AccessURI a disk access SAS uri. +type AccessURI struct { + autorest.Response `json:"-"` + // AccessSAS - A SAS uri for accessing a disk. + AccessSAS *string `json:"accessSAS,omitempty"` +} + +// AdditionalUnattendContent specifies additional XML formatted information that can be included in the +// Unattend.xml file, which is used by Windows Setup. Contents are defined by setting name, component name, and the +// pass in which the content is applied. +type AdditionalUnattendContent struct { + // PassName - The pass name. Currently, the only allowable value is OobeSystem. Possible values include: 'OobeSystem' + PassName PassNames `json:"passName,omitempty"` + // ComponentName - The component name. Currently, the only allowable value is Microsoft-Windows-Shell-Setup. Possible values include: 'MicrosoftWindowsShellSetup' + ComponentName ComponentNames `json:"componentName,omitempty"` + // SettingName - Specifies the name of the setting to which the content applies. Possible values are: FirstLogonCommands and AutoLogon. Possible values include: 'AutoLogon', 'FirstLogonCommands' + SettingName SettingNames `json:"settingName,omitempty"` + // Content - Specifies the XML formatted content that is added to the unattend.xml file for the specified path and component. The XML must be less than 4KB and must include the root element for the setting or feature that is being inserted. + Content *string `json:"content,omitempty"` +} + +// APIEntityReference the API entity reference. +type APIEntityReference struct { + // ID - The ARM resource id in the form of /subscriptions/{SubcriptionId}/resourceGroups/{ResourceGroupName}/... + ID *string `json:"id,omitempty"` +} + +// APIError api error. +type APIError struct { + // Details - The Api error details + Details *[]APIErrorBase `json:"details,omitempty"` + // Innererror - The Api inner error + Innererror *InnerError `json:"innererror,omitempty"` + // Code - The error code. + Code *string `json:"code,omitempty"` + // Target - The target of the particular error. + Target *string `json:"target,omitempty"` + // Message - The error message. + Message *string `json:"message,omitempty"` +} + +// APIErrorBase api error base. +type APIErrorBase struct { + // Code - The error code. + Code *string `json:"code,omitempty"` + // Target - The target of the particular error. + Target *string `json:"target,omitempty"` + // Message - The error message. + Message *string `json:"message,omitempty"` +} + +// AutoOSUpgradePolicy the configuration parameters used for performing automatic OS upgrade. +type AutoOSUpgradePolicy struct { + // DisableAutoRollback - Whether OS image rollback feature should be disabled. Default value is false. + DisableAutoRollback *bool `json:"disableAutoRollback,omitempty"` +} + +// AvailabilitySet specifies information about the availability set that the virtual machine should be assigned to. +// Virtual machines specified in the same availability set are allocated to different nodes to maximize +// availability. For more information about availability sets, see [Manage the availability of virtual +// machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-manage-availability?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). +//
For more information on Azure planned maintenance, see [Planned maintenance for virtual machines in +// Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-planned-maintenance?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) +//
Currently, a VM can only be added to availability set at creation time. An existing VM cannot be added +// to an availability set. +type AvailabilitySet struct { + autorest.Response `json:"-"` + *AvailabilitySetProperties `json:"properties,omitempty"` + // Sku - Sku of the availability set + Sku *Sku `json:"sku,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for AvailabilitySet. +func (as AvailabilitySet) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if as.AvailabilitySetProperties != nil { + objectMap["properties"] = as.AvailabilitySetProperties + } + if as.Sku != nil { + objectMap["sku"] = as.Sku + } + if as.ID != nil { + objectMap["id"] = as.ID + } + if as.Name != nil { + objectMap["name"] = as.Name + } + if as.Type != nil { + objectMap["type"] = as.Type + } + if as.Location != nil { + objectMap["location"] = as.Location + } + if as.Tags != nil { + objectMap["tags"] = as.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for AvailabilitySet struct. +func (as *AvailabilitySet) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var availabilitySetProperties AvailabilitySetProperties + err = json.Unmarshal(*v, &availabilitySetProperties) + if err != nil { + return err + } + as.AvailabilitySetProperties = &availabilitySetProperties + } + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + as.Sku = &sku + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + as.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + as.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + as.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + as.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + as.Tags = tags + } + } + } + + return nil +} + +// AvailabilitySetListResult the List Availability Set operation response. +type AvailabilitySetListResult struct { + autorest.Response `json:"-"` + // Value - The list of availability sets + Value *[]AvailabilitySet `json:"value,omitempty"` +} + +// AvailabilitySetProperties the instance view of a resource. +type AvailabilitySetProperties struct { + // PlatformUpdateDomainCount - Update Domain count. + PlatformUpdateDomainCount *int32 `json:"platformUpdateDomainCount,omitempty"` + // PlatformFaultDomainCount - Fault Domain count. + PlatformFaultDomainCount *int32 `json:"platformFaultDomainCount,omitempty"` + // VirtualMachines - A list of references to all virtual machines in the availability set. + VirtualMachines *[]SubResource `json:"virtualMachines,omitempty"` + // Statuses - The resource status information. 
+ Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` +} + +// AvailabilitySetUpdate specifies information about the availability set that the virtual machine should be +// assigned to. Only tags may be updated. +type AvailabilitySetUpdate struct { + *AvailabilitySetProperties `json:"properties,omitempty"` + // Sku - Sku of the availability set + Sku *Sku `json:"sku,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for AvailabilitySetUpdate. +func (asu AvailabilitySetUpdate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if asu.AvailabilitySetProperties != nil { + objectMap["properties"] = asu.AvailabilitySetProperties + } + if asu.Sku != nil { + objectMap["sku"] = asu.Sku + } + if asu.Tags != nil { + objectMap["tags"] = asu.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for AvailabilitySetUpdate struct. +func (asu *AvailabilitySetUpdate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var availabilitySetProperties AvailabilitySetProperties + err = json.Unmarshal(*v, &availabilitySetProperties) + if err != nil { + return err + } + asu.AvailabilitySetProperties = &availabilitySetProperties + } + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + asu.Sku = &sku + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + asu.Tags = tags + } + } + } + + return nil +} + +// BootDiagnostics boot Diagnostics is a debugging feature which allows you to view Console Output and Screenshot +// to diagnose VM status.
For Linux Virtual Machines, you can easily view the output of your console log. +//
For both Windows and Linux virtual machines, Azure also enables you to see a screenshot of the VM from +// the hypervisor. +type BootDiagnostics struct { + // Enabled - Whether boot diagnostics should be enabled on the Virtual Machine. + Enabled *bool `json:"enabled,omitempty"` + // StorageURI - Uri of the storage account to use for placing the console output and screenshot. + StorageURI *string `json:"storageUri,omitempty"` +} + +// BootDiagnosticsInstanceView the instance view of a virtual machine boot diagnostics. +type BootDiagnosticsInstanceView struct { + // ConsoleScreenshotBlobURI - The console screenshot blob URI. + ConsoleScreenshotBlobURI *string `json:"consoleScreenshotBlobUri,omitempty"` + // SerialConsoleLogBlobURI - The Linux serial console log blob Uri. + SerialConsoleLogBlobURI *string `json:"serialConsoleLogBlobUri,omitempty"` +} + +// ContainerService container service. +type ContainerService struct { + autorest.Response `json:"-"` + *ContainerServiceProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for ContainerService. +func (cs ContainerService) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if cs.ContainerServiceProperties != nil { + objectMap["properties"] = cs.ContainerServiceProperties + } + if cs.ID != nil { + objectMap["id"] = cs.ID + } + if cs.Name != nil { + objectMap["name"] = cs.Name + } + if cs.Type != nil { + objectMap["type"] = cs.Type + } + if cs.Location != nil { + objectMap["location"] = cs.Location + } + if cs.Tags != nil { + objectMap["tags"] = cs.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ContainerService struct. +func (cs *ContainerService) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var containerServiceProperties ContainerServiceProperties + err = json.Unmarshal(*v, &containerServiceProperties) + if err != nil { + return err + } + cs.ContainerServiceProperties = &containerServiceProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + cs.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + cs.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + cs.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + cs.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + cs.Tags = tags + } + } + } + + return nil +} + +// ContainerServiceAgentPoolProfile profile for the container service agent pool. +type ContainerServiceAgentPoolProfile struct { + // Name - Unique name of the agent pool profile in the context of the subscription and resource group. 
+ Name *string `json:"name,omitempty"` + // Count - Number of agents (VMs) to host docker containers. Allowed values must be in the range of 1 to 100 (inclusive). The default value is 1. + Count *int32 `json:"count,omitempty"` + // VMSize - Size of agent VMs. Possible values include: 'StandardA0', 'StandardA1', 'StandardA2', 'StandardA3', 'StandardA4', 'StandardA5', 'StandardA6', 'StandardA7', 'StandardA8', 'StandardA9', 'StandardA10', 'StandardA11', 'StandardD1', 'StandardD2', 'StandardD3', 'StandardD4', 'StandardD11', 'StandardD12', 'StandardD13', 'StandardD14', 'StandardD1V2', 'StandardD2V2', 'StandardD3V2', 'StandardD4V2', 'StandardD5V2', 'StandardD11V2', 'StandardD12V2', 'StandardD13V2', 'StandardD14V2', 'StandardG1', 'StandardG2', 'StandardG3', 'StandardG4', 'StandardG5', 'StandardDS1', 'StandardDS2', 'StandardDS3', 'StandardDS4', 'StandardDS11', 'StandardDS12', 'StandardDS13', 'StandardDS14', 'StandardGS1', 'StandardGS2', 'StandardGS3', 'StandardGS4', 'StandardGS5' + VMSize ContainerServiceVMSizeTypes `json:"vmSize,omitempty"` + // DNSPrefix - DNS prefix to be used to create the FQDN for the agent pool. + DNSPrefix *string `json:"dnsPrefix,omitempty"` + // Fqdn - FDQN for the agent pool. + Fqdn *string `json:"fqdn,omitempty"` +} + +// ContainerServiceCustomProfile properties to configure a custom container service cluster. +type ContainerServiceCustomProfile struct { + // Orchestrator - The name of the custom orchestrator to use. + Orchestrator *string `json:"orchestrator,omitempty"` +} + +// ContainerServiceDiagnosticsProfile ... +type ContainerServiceDiagnosticsProfile struct { + // VMDiagnostics - Profile for the container service VM diagnostic agent. + VMDiagnostics *ContainerServiceVMDiagnostics `json:"vmDiagnostics,omitempty"` +} + +// ContainerServiceLinuxProfile profile for Linux VMs in the container service cluster. +type ContainerServiceLinuxProfile struct { + // AdminUsername - The administrator username to use for Linux VMs. + AdminUsername *string `json:"adminUsername,omitempty"` + // SSH - The ssh key configuration for Linux VMs. + SSH *ContainerServiceSSHConfiguration `json:"ssh,omitempty"` +} + +// ContainerServiceListResult the response from the List Container Services operation. +type ContainerServiceListResult struct { + autorest.Response `json:"-"` + // Value - the list of container services. + Value *[]ContainerService `json:"value,omitempty"` + // NextLink - The URL to get the next set of container service results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ContainerServiceListResultIterator provides access to a complete listing of ContainerService values. +type ContainerServiceListResultIterator struct { + i int + page ContainerServiceListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ContainerServiceListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ContainerServiceListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. 
+func (iter ContainerServiceListResultIterator) Response() ContainerServiceListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ContainerServiceListResultIterator) Value() ContainerService { + if !iter.page.NotDone() { + return ContainerService{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (cslr ContainerServiceListResult) IsEmpty() bool { + return cslr.Value == nil || len(*cslr.Value) == 0 +} + +// containerServiceListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (cslr ContainerServiceListResult) containerServiceListResultPreparer() (*http.Request, error) { + if cslr.NextLink == nil || len(to.String(cslr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(cslr.NextLink))) +} + +// ContainerServiceListResultPage contains a page of ContainerService values. +type ContainerServiceListResultPage struct { + fn func(ContainerServiceListResult) (ContainerServiceListResult, error) + cslr ContainerServiceListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ContainerServiceListResultPage) Next() error { + next, err := page.fn(page.cslr) + if err != nil { + return err + } + page.cslr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ContainerServiceListResultPage) NotDone() bool { + return !page.cslr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ContainerServiceListResultPage) Response() ContainerServiceListResult { + return page.cslr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ContainerServiceListResultPage) Values() []ContainerService { + if page.cslr.IsEmpty() { + return nil + } + return *page.cslr.Value +} + +// ContainerServiceMasterProfile profile for the container service master. +type ContainerServiceMasterProfile struct { + // Count - Number of masters (VMs) in the container service cluster. Allowed values are 1, 3, and 5. The default value is 1. + Count *int32 `json:"count,omitempty"` + // DNSPrefix - DNS prefix to be used to create the FQDN for master. + DNSPrefix *string `json:"dnsPrefix,omitempty"` + // Fqdn - FDQN for the master. + Fqdn *string `json:"fqdn,omitempty"` +} + +// ContainerServiceOrchestratorProfile profile for the container service orchestrator. +type ContainerServiceOrchestratorProfile struct { + // OrchestratorType - The orchestrator to use to manage container service cluster resources. Valid values are Swarm, DCOS, and Custom. Possible values include: 'Swarm', 'DCOS', 'Custom', 'Kubernetes' + OrchestratorType ContainerServiceOrchestratorTypes `json:"orchestratorType,omitempty"` +} + +// ContainerServiceProperties properties of the container service. +type ContainerServiceProperties struct { + // ProvisioningState - the current deployment or provisioning state, which only appears in the response. + ProvisioningState *string `json:"provisioningState,omitempty"` + // OrchestratorProfile - Properties of the orchestrator. 
+ OrchestratorProfile *ContainerServiceOrchestratorProfile `json:"orchestratorProfile,omitempty"` + // CustomProfile - Properties for custom clusters. + CustomProfile *ContainerServiceCustomProfile `json:"customProfile,omitempty"` + // ServicePrincipalProfile - Properties for cluster service principals. + ServicePrincipalProfile *ContainerServiceServicePrincipalProfile `json:"servicePrincipalProfile,omitempty"` + // MasterProfile - Properties of master agents. + MasterProfile *ContainerServiceMasterProfile `json:"masterProfile,omitempty"` + // AgentPoolProfiles - Properties of the agent pool. + AgentPoolProfiles *[]ContainerServiceAgentPoolProfile `json:"agentPoolProfiles,omitempty"` + // WindowsProfile - Properties of Windows VMs. + WindowsProfile *ContainerServiceWindowsProfile `json:"windowsProfile,omitempty"` + // LinuxProfile - Properties of Linux VMs. + LinuxProfile *ContainerServiceLinuxProfile `json:"linuxProfile,omitempty"` + // DiagnosticsProfile - Properties of the diagnostic agent. + DiagnosticsProfile *ContainerServiceDiagnosticsProfile `json:"diagnosticsProfile,omitempty"` +} + +// ContainerServicesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ContainerServicesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ContainerServicesCreateOrUpdateFuture) Result(client ContainerServicesClient) (cs ContainerService, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.ContainerServicesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if cs.Response.Response, err = future.GetResult(sender); err == nil && cs.Response.Response.StatusCode != http.StatusNoContent { + cs, err = client.CreateOrUpdateResponder(cs.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesCreateOrUpdateFuture", "Result", cs.Response.Response, "Failure responding to request") + } + } + return +} + +// ContainerServicesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ContainerServicesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ContainerServicesDeleteFuture) Result(client ContainerServicesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ContainerServicesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.ContainerServicesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// ContainerServiceServicePrincipalProfile information about a service principal identity for the cluster to use +// for manipulating Azure APIs. +type ContainerServiceServicePrincipalProfile struct { + // ClientID - The ID for the service principal. 
+ ClientID *string `json:"clientId,omitempty"` + // Secret - The secret password associated with the service principal. + Secret *string `json:"secret,omitempty"` +} + +// ContainerServiceSSHConfiguration SSH configuration for Linux-based VMs running on Azure. +type ContainerServiceSSHConfiguration struct { + // PublicKeys - the list of SSH public keys used to authenticate with Linux-based VMs. + PublicKeys *[]ContainerServiceSSHPublicKey `json:"publicKeys,omitempty"` +} + +// ContainerServiceSSHPublicKey contains information about SSH certificate public key data. +type ContainerServiceSSHPublicKey struct { + // KeyData - Certificate public key used to authenticate with VMs through SSH. The certificate must be in PEM format with or without headers. + KeyData *string `json:"keyData,omitempty"` +} + +// ContainerServiceVMDiagnostics profile for diagnostics on the container service VMs. +type ContainerServiceVMDiagnostics struct { + // Enabled - Whether the VM diagnostic agent is provisioned on the VM. + Enabled *bool `json:"enabled,omitempty"` + // StorageURI - The URI of the storage account where diagnostics are stored. + StorageURI *string `json:"storageUri,omitempty"` +} + +// ContainerServiceWindowsProfile profile for Windows VMs in the container service cluster. +type ContainerServiceWindowsProfile struct { + // AdminUsername - The administrator username to use for Windows VMs. + AdminUsername *string `json:"adminUsername,omitempty"` + // AdminPassword - The administrator password to use for Windows VMs. + AdminPassword *string `json:"adminPassword,omitempty"` +} + +// CreationData data used when creating a disk. +type CreationData struct { + // CreateOption - This enumerates the possible sources of a disk's creation. Possible values include: 'Empty', 'Attach', 'FromImage', 'Import', 'Copy', 'Restore' + CreateOption DiskCreateOption `json:"createOption,omitempty"` + // StorageAccountID - If createOption is Import, the Azure Resource Manager identifier of the storage account containing the blob to import as a disk. Required only if the blob is in a different subscription + StorageAccountID *string `json:"storageAccountId,omitempty"` + // ImageReference - Disk source information. + ImageReference *ImageDiskReference `json:"imageReference,omitempty"` + // SourceURI - If createOption is Import, this is the URI of a blob to be imported into a managed disk. + SourceURI *string `json:"sourceUri,omitempty"` + // SourceResourceID - If createOption is Copy, this is the ARM id of the source snapshot or disk. + SourceResourceID *string `json:"sourceResourceId,omitempty"` +} + +// DataDisk describes a data disk. +type DataDisk struct { + // Lun - Specifies the logical unit number of the data disk. This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM. + Lun *int32 `json:"lun,omitempty"` + // Name - The disk name. + Name *string `json:"name,omitempty"` + // Vhd - The virtual hard disk. + Vhd *VirtualHardDisk `json:"vhd,omitempty"` + // Image - The source user image virtual hard disk. The virtual hard disk will be copied before being attached to the virtual machine. If SourceImage is provided, the destination virtual hard drive must not exist. + Image *VirtualHardDisk `json:"image,omitempty"` + // Caching - Specifies the caching requirements.

Possible values are: **None**, **ReadOnly**, **ReadWrite**.
Default: **None for Standard storage. ReadOnly for Premium storage**. Possible values include: 'CachingTypesNone', 'CachingTypesReadOnly', 'CachingTypesReadWrite' + Caching CachingTypes `json:"caching,omitempty"` + // WriteAcceleratorEnabled - Specifies whether writeAccelerator should be enabled or disabled on the disk. + WriteAcceleratorEnabled *bool `json:"writeAcceleratorEnabled,omitempty"` + // CreateOption - Specifies how the virtual machine should be created.

Possible values are: **Attach** – This value is used when you are using a specialized disk to create the virtual machine. **FromImage** – This value is used when you are using an image to create the virtual machine. If you are using a platform image, you also use the imageReference element described above. If you are using a marketplace image, you also use the plan element previously described. Possible values include: 'DiskCreateOptionTypesFromImage', 'DiskCreateOptionTypesEmpty', 'DiskCreateOptionTypesAttach' + CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` + // DiskSizeGB - Specifies the size of an empty data disk in gigabytes. This element can be used to overwrite the name of the disk in a virtual machine image.
This value cannot be larger than 1023 GB + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + // ManagedDisk - The managed disk parameters. + ManagedDisk *ManagedDiskParameters `json:"managedDisk,omitempty"` +} + +// DataDiskImage contains the data disk images information. +type DataDiskImage struct { + // Lun - Specifies the logical unit number of the data disk. This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM. + Lun *int32 `json:"lun,omitempty"` +} + +// DiagnosticsProfile specifies the boot diagnostic settings state.

Minimum api-version: 2015-06-15. +type DiagnosticsProfile struct { + // BootDiagnostics - Boot Diagnostics is a debugging feature which allows you to view Console Output and Screenshot to diagnose VM status.

For Linux Virtual Machines, you can easily view the output of your console log.
For both Windows and Linux virtual machines, Azure also enables you to see a screenshot of the VM from the hypervisor. + BootDiagnostics *BootDiagnostics `json:"bootDiagnostics,omitempty"` +} + +// Disk disk resource. +type Disk struct { + autorest.Response `json:"-"` + // ManagedBy - A relative URI containing the ID of the VM that has the disk attached. + ManagedBy *string `json:"managedBy,omitempty"` + Sku *DiskSku `json:"sku,omitempty"` + // Zones - The Logical zone list for Disk. + Zones *[]string `json:"zones,omitempty"` + *DiskProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Disk. +func (d Disk) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if d.ManagedBy != nil { + objectMap["managedBy"] = d.ManagedBy + } + if d.Sku != nil { + objectMap["sku"] = d.Sku + } + if d.Zones != nil { + objectMap["zones"] = d.Zones + } + if d.DiskProperties != nil { + objectMap["properties"] = d.DiskProperties + } + if d.ID != nil { + objectMap["id"] = d.ID + } + if d.Name != nil { + objectMap["name"] = d.Name + } + if d.Type != nil { + objectMap["type"] = d.Type + } + if d.Location != nil { + objectMap["location"] = d.Location + } + if d.Tags != nil { + objectMap["tags"] = d.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Disk struct. +func (d *Disk) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "managedBy": + if v != nil { + var managedBy string + err = json.Unmarshal(*v, &managedBy) + if err != nil { + return err + } + d.ManagedBy = &managedBy + } + case "sku": + if v != nil { + var sku DiskSku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + d.Sku = &sku + } + case "zones": + if v != nil { + var zones []string + err = json.Unmarshal(*v, &zones) + if err != nil { + return err + } + d.Zones = &zones + } + case "properties": + if v != nil { + var diskProperties DiskProperties + err = json.Unmarshal(*v, &diskProperties) + if err != nil { + return err + } + d.DiskProperties = &diskProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + d.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + d.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + d.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + d.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + d.Tags = tags + } + } + } + + return nil +} + +// DiskEncryptionSettings describes a Encryption Settings for a Disk +type DiskEncryptionSettings struct { + // DiskEncryptionKey - Specifies the location of the disk encryption key, which is a Key Vault Secret. 
+ DiskEncryptionKey *KeyVaultSecretReference `json:"diskEncryptionKey,omitempty"` + // KeyEncryptionKey - Specifies the location of the key encryption key in Key Vault. + KeyEncryptionKey *KeyVaultKeyReference `json:"keyEncryptionKey,omitempty"` + // Enabled - Specifies whether disk encryption should be enabled on the virtual machine. + Enabled *bool `json:"enabled,omitempty"` +} + +// DiskInstanceView the instance view of the disk. +type DiskInstanceView struct { + // Name - The disk name. + Name *string `json:"name,omitempty"` + // EncryptionSettings - Specifies the encryption settings for the OS Disk.

Minimum api-version: 2015-06-15 + EncryptionSettings *[]DiskEncryptionSettings `json:"encryptionSettings,omitempty"` + // Statuses - The resource status information. + Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` +} + +// DiskList the List Disks operation response. +type DiskList struct { + autorest.Response `json:"-"` + // Value - A list of disks. + Value *[]Disk `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of disks. Call ListNext() with this to fetch the next page of disks. + NextLink *string `json:"nextLink,omitempty"` +} + +// DiskListIterator provides access to a complete listing of Disk values. +type DiskListIterator struct { + i int + page DiskListPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *DiskListIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter DiskListIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter DiskListIterator) Response() DiskList { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter DiskListIterator) Value() Disk { + if !iter.page.NotDone() { + return Disk{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (dl DiskList) IsEmpty() bool { + return dl.Value == nil || len(*dl.Value) == 0 +} + +// diskListPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (dl DiskList) diskListPreparer() (*http.Request, error) { + if dl.NextLink == nil || len(to.String(dl.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(dl.NextLink))) +} + +// DiskListPage contains a page of Disk values. +type DiskListPage struct { + fn func(DiskList) (DiskList, error) + dl DiskList +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *DiskListPage) Next() error { + next, err := page.fn(page.dl) + if err != nil { + return err + } + page.dl = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page DiskListPage) NotDone() bool { + return !page.dl.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page DiskListPage) Response() DiskList { + return page.dl +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page DiskListPage) Values() []Disk { + if page.dl.IsEmpty() { + return nil + } + return *page.dl.Value +} + +// DiskProperties disk resource properties. +type DiskProperties struct { + // TimeCreated - The time when the disk was created. + TimeCreated *date.Time `json:"timeCreated,omitempty"` + // OsType - The Operating System type. 
Possible values include: 'Windows', 'Linux' + OsType OperatingSystemTypes `json:"osType,omitempty"` + // CreationData - Disk source information. CreationData information cannot be changed after the disk has been created. + CreationData *CreationData `json:"creationData,omitempty"` + // DiskSizeGB - If creationData.createOption is Empty, this field is mandatory and it indicates the size of the VHD to create. If this field is present for updates or creation with other options, it indicates a resize. Resizes are only allowed if the disk is not attached to a running VM, and can only increase the disk's size. + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + // EncryptionSettings - Encryption settings for disk or snapshot + EncryptionSettings *EncryptionSettings `json:"encryptionSettings,omitempty"` + // ProvisioningState - The disk provisioning state. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// DisksCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DisksCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *DisksCreateOrUpdateFuture) Result(client DisksClient) (d Disk, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.DisksCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if d.Response.Response, err = future.GetResult(sender); err == nil && d.Response.Response.StatusCode != http.StatusNoContent { + d, err = client.CreateOrUpdateResponder(d.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksCreateOrUpdateFuture", "Result", d.Response.Response, "Failure responding to request") + } + } + return +} + +// DisksDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DisksDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *DisksDeleteFuture) Result(client DisksClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.DisksDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// DisksGrantAccessFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DisksGrantAccessFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *DisksGrantAccessFuture) Result(client DisksClient) (au AccessURI, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksGrantAccessFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.DisksGrantAccessFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if au.Response.Response, err = future.GetResult(sender); err == nil && au.Response.Response.StatusCode != http.StatusNoContent { + au, err = client.GrantAccessResponder(au.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksGrantAccessFuture", "Result", au.Response.Response, "Failure responding to request") + } + } + return +} + +// DiskSku the disks sku name. Can be Standard_LRS or Premium_LRS. +type DiskSku struct { + // Name - The sku name. Possible values include: 'StorageAccountTypesStandardLRS', 'StorageAccountTypesPremiumLRS' + Name StorageAccountTypes `json:"name,omitempty"` + // Tier - The sku tier. + Tier *string `json:"tier,omitempty"` +} + +// DisksRevokeAccessFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DisksRevokeAccessFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *DisksRevokeAccessFuture) Result(client DisksClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksRevokeAccessFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.DisksRevokeAccessFuture") + return + } + ar.Response = future.Response() + return +} + +// DisksUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DisksUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *DisksUpdateFuture) Result(client DisksClient) (d Disk, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.DisksUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if d.Response.Response, err = future.GetResult(sender); err == nil && d.Response.Response.StatusCode != http.StatusNoContent { + d, err = client.UpdateResponder(d.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.DisksUpdateFuture", "Result", d.Response.Response, "Failure responding to request") + } + } + return +} + +// DiskUpdate disk update resource. +type DiskUpdate struct { + *DiskUpdateProperties `json:"properties,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` + Sku *DiskSku `json:"sku,omitempty"` +} + +// MarshalJSON is the custom marshaler for DiskUpdate. 
+func (du DiskUpdate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if du.DiskUpdateProperties != nil { + objectMap["properties"] = du.DiskUpdateProperties + } + if du.Tags != nil { + objectMap["tags"] = du.Tags + } + if du.Sku != nil { + objectMap["sku"] = du.Sku + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for DiskUpdate struct. +func (du *DiskUpdate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var diskUpdateProperties DiskUpdateProperties + err = json.Unmarshal(*v, &diskUpdateProperties) + if err != nil { + return err + } + du.DiskUpdateProperties = &diskUpdateProperties + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + du.Tags = tags + } + case "sku": + if v != nil { + var sku DiskSku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + du.Sku = &sku + } + } + } + + return nil +} + +// DiskUpdateProperties disk resource update properties. +type DiskUpdateProperties struct { + // OsType - the Operating System type. Possible values include: 'Windows', 'Linux' + OsType OperatingSystemTypes `json:"osType,omitempty"` + // DiskSizeGB - If creationData.createOption is Empty, this field is mandatory and it indicates the size of the VHD to create. If this field is present for updates or creation with other options, it indicates a resize. Resizes are only allowed if the disk is not attached to a running VM, and can only increase the disk's size. + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + // EncryptionSettings - Encryption settings for disk or snapshot + EncryptionSettings *EncryptionSettings `json:"encryptionSettings,omitempty"` +} + +// EncryptionSettings encryption settings for disk or snapshot +type EncryptionSettings struct { + // Enabled - Set this flag to true and provide DiskEncryptionKey and optional KeyEncryptionKey to enable encryption. Set this flag to false and remove DiskEncryptionKey and KeyEncryptionKey to disable encryption. If EncryptionSettings is null in the request object, the existing settings remain unchanged. + Enabled *bool `json:"enabled,omitempty"` + // DiskEncryptionKey - Key Vault Secret Url and vault id of the disk encryption key + DiskEncryptionKey *KeyVaultAndSecretReference `json:"diskEncryptionKey,omitempty"` + // KeyEncryptionKey - Key Vault Key Url and vault id of the key encryption key + KeyEncryptionKey *KeyVaultAndKeyReference `json:"keyEncryptionKey,omitempty"` +} + +// GrantAccessData data used for requesting a SAS. +type GrantAccessData struct { + // Access - Possible values include: 'None', 'Read' + Access AccessLevel `json:"access,omitempty"` + // DurationInSeconds - Time duration in seconds until the SAS access expires. + DurationInSeconds *int32 `json:"durationInSeconds,omitempty"` +} + +// HardwareProfile specifies the hardware settings for the virtual machine. +type HardwareProfile struct { + // VMSize - Specifies the size of the virtual machine. For more information about virtual machine sizes, see [Sizes for virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-sizes?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).

The available VM sizes depend on region and availability set. For a list of available sizes use these APIs:
[List all available virtual machine sizes in an availability set](https://docs.microsoft.com/rest/api/compute/availabilitysets/listavailablesizes)
[List all available virtual machine sizes in a region](https://docs.microsoft.com/rest/api/compute/virtualmachinesizes/list)
[List all available virtual machine sizes for resizing](https://docs.microsoft.com/rest/api/compute/virtualmachines/listavailablesizes). Possible values include: 'VirtualMachineSizeTypesBasicA0', 'VirtualMachineSizeTypesBasicA1', 'VirtualMachineSizeTypesBasicA2', 'VirtualMachineSizeTypesBasicA3', 'VirtualMachineSizeTypesBasicA4', 'VirtualMachineSizeTypesStandardA0', 'VirtualMachineSizeTypesStandardA1', 'VirtualMachineSizeTypesStandardA2', 'VirtualMachineSizeTypesStandardA3', 'VirtualMachineSizeTypesStandardA4', 'VirtualMachineSizeTypesStandardA5', 'VirtualMachineSizeTypesStandardA6', 'VirtualMachineSizeTypesStandardA7', 'VirtualMachineSizeTypesStandardA8', 'VirtualMachineSizeTypesStandardA9', 'VirtualMachineSizeTypesStandardA10', 'VirtualMachineSizeTypesStandardA11', 'VirtualMachineSizeTypesStandardA1V2', 'VirtualMachineSizeTypesStandardA2V2', 'VirtualMachineSizeTypesStandardA4V2', 'VirtualMachineSizeTypesStandardA8V2', 'VirtualMachineSizeTypesStandardA2mV2', 'VirtualMachineSizeTypesStandardA4mV2', 'VirtualMachineSizeTypesStandardA8mV2', 'VirtualMachineSizeTypesStandardB1s', 'VirtualMachineSizeTypesStandardB1ms', 'VirtualMachineSizeTypesStandardB2s', 'VirtualMachineSizeTypesStandardB2ms', 'VirtualMachineSizeTypesStandardB4ms', 'VirtualMachineSizeTypesStandardB8ms', 'VirtualMachineSizeTypesStandardD1', 'VirtualMachineSizeTypesStandardD2', 'VirtualMachineSizeTypesStandardD3', 'VirtualMachineSizeTypesStandardD4', 'VirtualMachineSizeTypesStandardD11', 'VirtualMachineSizeTypesStandardD12', 'VirtualMachineSizeTypesStandardD13', 'VirtualMachineSizeTypesStandardD14', 'VirtualMachineSizeTypesStandardD1V2', 'VirtualMachineSizeTypesStandardD2V2', 'VirtualMachineSizeTypesStandardD3V2', 'VirtualMachineSizeTypesStandardD4V2', 'VirtualMachineSizeTypesStandardD5V2', 'VirtualMachineSizeTypesStandardD2V3', 'VirtualMachineSizeTypesStandardD4V3', 'VirtualMachineSizeTypesStandardD8V3', 'VirtualMachineSizeTypesStandardD16V3', 'VirtualMachineSizeTypesStandardD32V3', 'VirtualMachineSizeTypesStandardD64V3', 'VirtualMachineSizeTypesStandardD2sV3', 'VirtualMachineSizeTypesStandardD4sV3', 'VirtualMachineSizeTypesStandardD8sV3', 'VirtualMachineSizeTypesStandardD16sV3', 'VirtualMachineSizeTypesStandardD32sV3', 'VirtualMachineSizeTypesStandardD64sV3', 'VirtualMachineSizeTypesStandardD11V2', 'VirtualMachineSizeTypesStandardD12V2', 'VirtualMachineSizeTypesStandardD13V2', 'VirtualMachineSizeTypesStandardD14V2', 'VirtualMachineSizeTypesStandardD15V2', 'VirtualMachineSizeTypesStandardDS1', 'VirtualMachineSizeTypesStandardDS2', 'VirtualMachineSizeTypesStandardDS3', 'VirtualMachineSizeTypesStandardDS4', 'VirtualMachineSizeTypesStandardDS11', 'VirtualMachineSizeTypesStandardDS12', 'VirtualMachineSizeTypesStandardDS13', 'VirtualMachineSizeTypesStandardDS14', 'VirtualMachineSizeTypesStandardDS1V2', 'VirtualMachineSizeTypesStandardDS2V2', 'VirtualMachineSizeTypesStandardDS3V2', 'VirtualMachineSizeTypesStandardDS4V2', 'VirtualMachineSizeTypesStandardDS5V2', 'VirtualMachineSizeTypesStandardDS11V2', 'VirtualMachineSizeTypesStandardDS12V2', 'VirtualMachineSizeTypesStandardDS13V2', 'VirtualMachineSizeTypesStandardDS14V2', 'VirtualMachineSizeTypesStandardDS15V2', 'VirtualMachineSizeTypesStandardDS134V2', 'VirtualMachineSizeTypesStandardDS132V2', 'VirtualMachineSizeTypesStandardDS148V2', 'VirtualMachineSizeTypesStandardDS144V2', 'VirtualMachineSizeTypesStandardE2V3', 'VirtualMachineSizeTypesStandardE4V3', 'VirtualMachineSizeTypesStandardE8V3', 'VirtualMachineSizeTypesStandardE16V3', 'VirtualMachineSizeTypesStandardE32V3', 
'VirtualMachineSizeTypesStandardE64V3', 'VirtualMachineSizeTypesStandardE2sV3', 'VirtualMachineSizeTypesStandardE4sV3', 'VirtualMachineSizeTypesStandardE8sV3', 'VirtualMachineSizeTypesStandardE16sV3', 'VirtualMachineSizeTypesStandardE32sV3', 'VirtualMachineSizeTypesStandardE64sV3', 'VirtualMachineSizeTypesStandardE3216V3', 'VirtualMachineSizeTypesStandardE328sV3', 'VirtualMachineSizeTypesStandardE6432sV3', 'VirtualMachineSizeTypesStandardE6416sV3', 'VirtualMachineSizeTypesStandardF1', 'VirtualMachineSizeTypesStandardF2', 'VirtualMachineSizeTypesStandardF4', 'VirtualMachineSizeTypesStandardF8', 'VirtualMachineSizeTypesStandardF16', 'VirtualMachineSizeTypesStandardF1s', 'VirtualMachineSizeTypesStandardF2s', 'VirtualMachineSizeTypesStandardF4s', 'VirtualMachineSizeTypesStandardF8s', 'VirtualMachineSizeTypesStandardF16s', 'VirtualMachineSizeTypesStandardF2sV2', 'VirtualMachineSizeTypesStandardF4sV2', 'VirtualMachineSizeTypesStandardF8sV2', 'VirtualMachineSizeTypesStandardF16sV2', 'VirtualMachineSizeTypesStandardF32sV2', 'VirtualMachineSizeTypesStandardF64sV2', 'VirtualMachineSizeTypesStandardF72sV2', 'VirtualMachineSizeTypesStandardG1', 'VirtualMachineSizeTypesStandardG2', 'VirtualMachineSizeTypesStandardG3', 'VirtualMachineSizeTypesStandardG4', 'VirtualMachineSizeTypesStandardG5', 'VirtualMachineSizeTypesStandardGS1', 'VirtualMachineSizeTypesStandardGS2', 'VirtualMachineSizeTypesStandardGS3', 'VirtualMachineSizeTypesStandardGS4', 'VirtualMachineSizeTypesStandardGS5', 'VirtualMachineSizeTypesStandardGS48', 'VirtualMachineSizeTypesStandardGS44', 'VirtualMachineSizeTypesStandardGS516', 'VirtualMachineSizeTypesStandardGS58', 'VirtualMachineSizeTypesStandardH8', 'VirtualMachineSizeTypesStandardH16', 'VirtualMachineSizeTypesStandardH8m', 'VirtualMachineSizeTypesStandardH16m', 'VirtualMachineSizeTypesStandardH16r', 'VirtualMachineSizeTypesStandardH16mr', 'VirtualMachineSizeTypesStandardL4s', 'VirtualMachineSizeTypesStandardL8s', 'VirtualMachineSizeTypesStandardL16s', 'VirtualMachineSizeTypesStandardL32s', 'VirtualMachineSizeTypesStandardM64s', 'VirtualMachineSizeTypesStandardM64ms', 'VirtualMachineSizeTypesStandardM128s', 'VirtualMachineSizeTypesStandardM128ms', 'VirtualMachineSizeTypesStandardM6432ms', 'VirtualMachineSizeTypesStandardM6416ms', 'VirtualMachineSizeTypesStandardM12864ms', 'VirtualMachineSizeTypesStandardM12832ms', 'VirtualMachineSizeTypesStandardNC6', 'VirtualMachineSizeTypesStandardNC12', 'VirtualMachineSizeTypesStandardNC24', 'VirtualMachineSizeTypesStandardNC24r', 'VirtualMachineSizeTypesStandardNC6sV2', 'VirtualMachineSizeTypesStandardNC12sV2', 'VirtualMachineSizeTypesStandardNC24sV2', 'VirtualMachineSizeTypesStandardNC24rsV2', 'VirtualMachineSizeTypesStandardNC6sV3', 'VirtualMachineSizeTypesStandardNC12sV3', 'VirtualMachineSizeTypesStandardNC24sV3', 'VirtualMachineSizeTypesStandardNC24rsV3', 'VirtualMachineSizeTypesStandardND6s', 'VirtualMachineSizeTypesStandardND12s', 'VirtualMachineSizeTypesStandardND24s', 'VirtualMachineSizeTypesStandardND24rs', 'VirtualMachineSizeTypesStandardNV6', 'VirtualMachineSizeTypesStandardNV12', 'VirtualMachineSizeTypesStandardNV24' + VMSize VirtualMachineSizeTypes `json:"vmSize,omitempty"` +} + +// Image the source user image virtual hard disk. The virtual hard disk will be copied before being attached to the +// virtual machine. If SourceImage is provided, the destination virtual hard drive must not exist. 
+type Image struct { + autorest.Response `json:"-"` + *ImageProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Image. +func (i Image) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if i.ImageProperties != nil { + objectMap["properties"] = i.ImageProperties + } + if i.ID != nil { + objectMap["id"] = i.ID + } + if i.Name != nil { + objectMap["name"] = i.Name + } + if i.Type != nil { + objectMap["type"] = i.Type + } + if i.Location != nil { + objectMap["location"] = i.Location + } + if i.Tags != nil { + objectMap["tags"] = i.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Image struct. +func (i *Image) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var imageProperties ImageProperties + err = json.Unmarshal(*v, &imageProperties) + if err != nil { + return err + } + i.ImageProperties = &imageProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + i.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + i.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + i.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + i.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + i.Tags = tags + } + } + } + + return nil +} + +// ImageDataDisk describes a data disk. +type ImageDataDisk struct { + // Lun - Specifies the logical unit number of the data disk. This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM. + Lun *int32 `json:"lun,omitempty"` + // Snapshot - The snapshot. + Snapshot *SubResource `json:"snapshot,omitempty"` + // ManagedDisk - The managedDisk. + ManagedDisk *SubResource `json:"managedDisk,omitempty"` + // BlobURI - The Virtual Hard Disk. + BlobURI *string `json:"blobUri,omitempty"` + // Caching - Specifies the caching requirements.

Possible values are: **None**, **ReadOnly**, **ReadWrite**.
Default: **None for Standard storage. ReadOnly for Premium storage**. Possible values include: 'CachingTypesNone', 'CachingTypesReadOnly', 'CachingTypesReadWrite' + Caching CachingTypes `json:"caching,omitempty"` + // DiskSizeGB - Specifies the size of empty data disks in gigabytes. This element can be used to overwrite the name of the disk in a virtual machine image.

This value cannot be larger than 1023 GB + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + // StorageAccountType - Specifies the storage account type for the managed disk. Possible values are: Standard_LRS or Premium_LRS. Possible values include: 'StorageAccountTypesStandardLRS', 'StorageAccountTypesPremiumLRS' + StorageAccountType StorageAccountTypes `json:"storageAccountType,omitempty"` +} + +// ImageDiskReference the source image used for creating the disk. +type ImageDiskReference struct { + // ID - A relative uri containing either a Platform Imgage Repository or user image reference. + ID *string `json:"id,omitempty"` + // Lun - If the disk is created from an image's data disk, this is an index that indicates which of the data disks in the image to use. For OS disks, this field is null. + Lun *int32 `json:"lun,omitempty"` +} + +// ImageListResult the List Image operation response. +type ImageListResult struct { + autorest.Response `json:"-"` + // Value - The list of Images. + Value *[]Image `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of Images. Call ListNext() with this to fetch the next page of Images. + NextLink *string `json:"nextLink,omitempty"` +} + +// ImageListResultIterator provides access to a complete listing of Image values. +type ImageListResultIterator struct { + i int + page ImageListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ImageListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ImageListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ImageListResultIterator) Response() ImageListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ImageListResultIterator) Value() Image { + if !iter.page.NotDone() { + return Image{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (ilr ImageListResult) IsEmpty() bool { + return ilr.Value == nil || len(*ilr.Value) == 0 +} + +// imageListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (ilr ImageListResult) imageListResultPreparer() (*http.Request, error) { + if ilr.NextLink == nil || len(to.String(ilr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(ilr.NextLink))) +} + +// ImageListResultPage contains a page of Image values. +type ImageListResultPage struct { + fn func(ImageListResult) (ImageListResult, error) + ilr ImageListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ImageListResultPage) Next() error { + next, err := page.fn(page.ilr) + if err != nil { + return err + } + page.ilr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. 
+func (page ImageListResultPage) NotDone() bool { + return !page.ilr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ImageListResultPage) Response() ImageListResult { + return page.ilr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ImageListResultPage) Values() []Image { + if page.ilr.IsEmpty() { + return nil + } + return *page.ilr.Value +} + +// ImageOSDisk describes an Operating System disk. +type ImageOSDisk struct { + // OsType - This property allows you to specify the type of the OS that is included in the disk if creating a VM from a custom image.

Possible values are: **Windows**,
**Linux**. Possible values include: 'Windows', 'Linux' + OsType OperatingSystemTypes `json:"osType,omitempty"` + // OsState - The OS State. Possible values include: 'Generalized', 'Specialized' + OsState OperatingSystemStateTypes `json:"osState,omitempty"` + // Snapshot - The snapshot. + Snapshot *SubResource `json:"snapshot,omitempty"` + // ManagedDisk - The managedDisk. + ManagedDisk *SubResource `json:"managedDisk,omitempty"` + // BlobURI - The Virtual Hard Disk. + BlobURI *string `json:"blobUri,omitempty"` + // Caching - Specifies the caching requirements.

Possible values are: **None**, **ReadOnly**, **ReadWrite**.
Default: **None for Standard storage. ReadOnly for Premium storage**. Possible values include: 'CachingTypesNone', 'CachingTypesReadOnly', 'CachingTypesReadWrite' + Caching CachingTypes `json:"caching,omitempty"` + // DiskSizeGB - Specifies the size of empty data disks in gigabytes. This element can be used to overwrite the name of the disk in a virtual machine image.

This value cannot be larger than 1023 GB + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + // StorageAccountType - Specifies the storage account type for the managed disk. Possible values are: Standard_LRS or Premium_LRS. Possible values include: 'StorageAccountTypesStandardLRS', 'StorageAccountTypesPremiumLRS' + StorageAccountType StorageAccountTypes `json:"storageAccountType,omitempty"` +} + +// ImageProperties describes the properties of an Image. +type ImageProperties struct { + // SourceVirtualMachine - The source virtual machine from which Image is created. + SourceVirtualMachine *SubResource `json:"sourceVirtualMachine,omitempty"` + // StorageProfile - Specifies the storage settings for the virtual machine disks. + StorageProfile *ImageStorageProfile `json:"storageProfile,omitempty"` + // ProvisioningState - The provisioning state. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ImageReference specifies information about the image to use. You can specify information about platform images, +// marketplace images, or virtual machine images. This element is required when you want to use a platform image, +// marketplace image, or virtual machine image, but is not used in other creation operations. +type ImageReference struct { + // Publisher - The image publisher. + Publisher *string `json:"publisher,omitempty"` + // Offer - Specifies the offer of the platform image or marketplace image used to create the virtual machine. + Offer *string `json:"offer,omitempty"` + // Sku - The image SKU. + Sku *string `json:"sku,omitempty"` + // Version - Specifies the version of the platform image or marketplace image used to create the virtual machine. The allowed formats are Major.Minor.Build or 'latest'. Major, Minor, and Build are decimal numbers. Specify 'latest' to use the latest version of an image available at deploy time. Even if you use 'latest', the VM image will not automatically update after deploy time even if a new version becomes available. + Version *string `json:"version,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// ImagesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type ImagesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ImagesCreateOrUpdateFuture) Result(client ImagesClient) (i Image, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.ImagesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if i.Response.Response, err = future.GetResult(sender); err == nil && i.Response.Response.StatusCode != http.StatusNoContent { + i, err = client.CreateOrUpdateResponder(i.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesCreateOrUpdateFuture", "Result", i.Response.Response, "Failure responding to request") + } + } + return +} + +// ImagesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. 
+type ImagesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ImagesDeleteFuture) Result(client ImagesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.ImagesDeleteFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeleteResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesDeleteFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// ImageStorageProfile describes a storage profile. +type ImageStorageProfile struct { + // OsDisk - Specifies information about the operating system disk used by the virtual machine.

For more information about disks, see [About disks and VHDs for Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). + OsDisk *ImageOSDisk `json:"osDisk,omitempty"` + // DataDisks - Specifies the parameters that are used to add a data disk to a virtual machine.

For more information about disks, see [About disks and VHDs for Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). + DataDisks *[]ImageDataDisk `json:"dataDisks,omitempty"` + // ZoneResilient - Specifies whether an image is zone resilient or not. Default is false. Zone resilient images can be created only in regions that provide Zone Redundant Storage (ZRS). + ZoneResilient *bool `json:"zoneResilient,omitempty"` +} + +// ImagesUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type ImagesUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ImagesUpdateFuture) Result(client ImagesClient) (i Image, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.ImagesUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if i.Response.Response, err = future.GetResult(sender); err == nil && i.Response.Response.StatusCode != http.StatusNoContent { + i, err = client.UpdateResponder(i.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ImagesUpdateFuture", "Result", i.Response.Response, "Failure responding to request") + } + } + return +} + +// ImageUpdate the source user image virtual hard disk. Only tags may be updated. +type ImageUpdate struct { + *ImageProperties `json:"properties,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for ImageUpdate. +func (iu ImageUpdate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if iu.ImageProperties != nil { + objectMap["properties"] = iu.ImageProperties + } + if iu.Tags != nil { + objectMap["tags"] = iu.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ImageUpdate struct. +func (iu *ImageUpdate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var imageProperties ImageProperties + err = json.Unmarshal(*v, &imageProperties) + if err != nil { + return err + } + iu.ImageProperties = &imageProperties + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + iu.Tags = tags + } + } + } + + return nil +} + +// InnerError inner error details. +type InnerError struct { + // Exceptiontype - The exception type. + Exceptiontype *string `json:"exceptiontype,omitempty"` + // Errordetail - The internal error message or exception dump. + Errordetail *string `json:"errordetail,omitempty"` +} + +// InstanceViewStatus instance view status. +type InstanceViewStatus struct { + // Code - The status code. + Code *string `json:"code,omitempty"` + // Level - The level code. 
Possible values include: 'Info', 'Warning', 'Error' + Level StatusLevelTypes `json:"level,omitempty"` + // DisplayStatus - The short localizable label for the status. + DisplayStatus *string `json:"displayStatus,omitempty"` + // Message - The detailed status message, including for alerts and error messages. + Message *string `json:"message,omitempty"` + // Time - The time of the status. + Time *date.Time `json:"time,omitempty"` +} + +// KeyVaultAndKeyReference key Vault Key Url and vault id of KeK, KeK is optional and when provided is used to +// unwrap the encryptionKey +type KeyVaultAndKeyReference struct { + // SourceVault - Resource id of the KeyVault containing the key or secret + SourceVault *SourceVault `json:"sourceVault,omitempty"` + // KeyURL - Url pointing to a key or secret in KeyVault + KeyURL *string `json:"keyUrl,omitempty"` +} + +// KeyVaultAndSecretReference key Vault Secret Url and vault id of the encryption key +type KeyVaultAndSecretReference struct { + // SourceVault - Resource id of the KeyVault containing the key or secret + SourceVault *SourceVault `json:"sourceVault,omitempty"` + // SecretURL - Url pointing to a key or secret in KeyVault + SecretURL *string `json:"secretUrl,omitempty"` +} + +// KeyVaultKeyReference describes a reference to Key Vault Key +type KeyVaultKeyReference struct { + // KeyURL - The URL referencing a key encryption key in Key Vault. + KeyURL *string `json:"keyUrl,omitempty"` + // SourceVault - The relative URL of the Key Vault containing the key. + SourceVault *SubResource `json:"sourceVault,omitempty"` +} + +// KeyVaultSecretReference describes a reference to Key Vault Secret +type KeyVaultSecretReference struct { + // SecretURL - The URL referencing a secret in a Key Vault. + SecretURL *string `json:"secretUrl,omitempty"` + // SourceVault - The relative URL of the Key Vault containing the secret. + SourceVault *SubResource `json:"sourceVault,omitempty"` +} + +// LinuxConfiguration specifies the Linux operating system settings on the virtual machine.

For a list of +// supported Linux distributions, see [Linux on Azure-Endorsed +// Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-endorsed-distros?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) +//
For running non-endorsed distributions, see [Information for Non-Endorsed +// Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-create-upload-generic?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). +type LinuxConfiguration struct { + // DisablePasswordAuthentication - Specifies whether password authentication should be disabled. + DisablePasswordAuthentication *bool `json:"disablePasswordAuthentication,omitempty"` + // SSH - Specifies the ssh key configuration for a Linux OS. + SSH *SSHConfiguration `json:"ssh,omitempty"` +} + +// ListUsagesResult the List Usages operation response. +type ListUsagesResult struct { + autorest.Response `json:"-"` + // Value - The list of compute resource usages. + Value *[]Usage `json:"value,omitempty"` + // NextLink - The URI to fetch the next page of compute resource usage information. Call ListNext() with this to fetch the next page of compute resource usage information. + NextLink *string `json:"nextLink,omitempty"` +} + +// ListUsagesResultIterator provides access to a complete listing of Usage values. +type ListUsagesResultIterator struct { + i int + page ListUsagesResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ListUsagesResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ListUsagesResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ListUsagesResultIterator) Response() ListUsagesResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ListUsagesResultIterator) Value() Usage { + if !iter.page.NotDone() { + return Usage{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lur ListUsagesResult) IsEmpty() bool { + return lur.Value == nil || len(*lur.Value) == 0 +} + +// listUsagesResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lur ListUsagesResult) listUsagesResultPreparer() (*http.Request, error) { + if lur.NextLink == nil || len(to.String(lur.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lur.NextLink))) +} + +// ListUsagesResultPage contains a page of Usage values. +type ListUsagesResultPage struct { + fn func(ListUsagesResult) (ListUsagesResult, error) + lur ListUsagesResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ListUsagesResultPage) Next() error { + next, err := page.fn(page.lur) + if err != nil { + return err + } + page.lur = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ListUsagesResultPage) NotDone() bool { + return !page.lur.IsEmpty() +} + +// Response returns the raw server response from the last page request. 
+func (page ListUsagesResultPage) Response() ListUsagesResult { + return page.lur +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ListUsagesResultPage) Values() []Usage { + if page.lur.IsEmpty() { + return nil + } + return *page.lur.Value +} + +// ListVirtualMachineExtensionImage ... +type ListVirtualMachineExtensionImage struct { + autorest.Response `json:"-"` + Value *[]VirtualMachineExtensionImage `json:"value,omitempty"` +} + +// ListVirtualMachineImageResource ... +type ListVirtualMachineImageResource struct { + autorest.Response `json:"-"` + Value *[]VirtualMachineImageResource `json:"value,omitempty"` +} + +// LogAnalyticsExportRequestRateByIntervalFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type LogAnalyticsExportRequestRateByIntervalFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *LogAnalyticsExportRequestRateByIntervalFuture) Result(client LogAnalyticsClient) (laor LogAnalyticsOperationResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsExportRequestRateByIntervalFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.LogAnalyticsExportRequestRateByIntervalFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if laor.Response.Response, err = future.GetResult(sender); err == nil && laor.Response.Response.StatusCode != http.StatusNoContent { + laor, err = client.ExportRequestRateByIntervalResponder(laor.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsExportRequestRateByIntervalFuture", "Result", laor.Response.Response, "Failure responding to request") + } + } + return +} + +// LogAnalyticsExportThrottledRequestsFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type LogAnalyticsExportThrottledRequestsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *LogAnalyticsExportThrottledRequestsFuture) Result(client LogAnalyticsClient) (laor LogAnalyticsOperationResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsExportThrottledRequestsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.LogAnalyticsExportThrottledRequestsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if laor.Response.Response, err = future.GetResult(sender); err == nil && laor.Response.Response.StatusCode != http.StatusNoContent { + laor, err = client.ExportThrottledRequestsResponder(laor.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.LogAnalyticsExportThrottledRequestsFuture", "Result", laor.Response.Response, "Failure responding to request") + } + } + return +} + +// LogAnalyticsInputBase api input base class for LogAnalytics Api. 
+type LogAnalyticsInputBase struct { + // BlobContainerSasURI - SAS Uri of the logging blob container to which LogAnalytics Api writes output logs to. + BlobContainerSasURI *string `json:"blobContainerSasUri,omitempty"` + // FromTime - From time of the query + FromTime *date.Time `json:"fromTime,omitempty"` + // ToTime - To time of the query + ToTime *date.Time `json:"toTime,omitempty"` + // GroupByThrottlePolicy - Group query result by Throttle Policy applied. + GroupByThrottlePolicy *bool `json:"groupByThrottlePolicy,omitempty"` + // GroupByOperationName - Group query result by by Operation Name. + GroupByOperationName *bool `json:"groupByOperationName,omitempty"` + // GroupByResourceName - Group query result by Resource Name. + GroupByResourceName *bool `json:"groupByResourceName,omitempty"` +} + +// LogAnalyticsOperationResult logAnalytics operation status response +type LogAnalyticsOperationResult struct { + autorest.Response `json:"-"` + // Properties - LogAnalyticsOutput + Properties *LogAnalyticsOutput `json:"properties,omitempty"` + // Name - Operation ID + Name *string `json:"name,omitempty"` + // Status - Operation status + Status *string `json:"status,omitempty"` + // StartTime - Start time of the operation + StartTime *date.Time `json:"startTime,omitempty"` + // EndTime - End time of the operation + EndTime *date.Time `json:"endTime,omitempty"` + // Error - Api error + Error *APIError `json:"error,omitempty"` +} + +// LogAnalyticsOutput logAnalytics output properties +type LogAnalyticsOutput struct { + // Output - Output file Uri path to blob container. + Output *string `json:"output,omitempty"` +} + +// LongRunningOperationProperties compute-specific operation properties, including output +type LongRunningOperationProperties struct { + // Output - Operation output data (raw JSON) + Output interface{} `json:"output,omitempty"` +} + +// MaintenanceRedeployStatus maintenance Operation Status. +type MaintenanceRedeployStatus struct { + // IsCustomerInitiatedMaintenanceAllowed - True, if customer is allowed to perform Maintenance. + IsCustomerInitiatedMaintenanceAllowed *bool `json:"isCustomerInitiatedMaintenanceAllowed,omitempty"` + // PreMaintenanceWindowStartTime - Start Time for the Pre Maintenance Window. + PreMaintenanceWindowStartTime *date.Time `json:"preMaintenanceWindowStartTime,omitempty"` + // PreMaintenanceWindowEndTime - End Time for the Pre Maintenance Window. + PreMaintenanceWindowEndTime *date.Time `json:"preMaintenanceWindowEndTime,omitempty"` + // MaintenanceWindowStartTime - Start Time for the Maintenance Window. + MaintenanceWindowStartTime *date.Time `json:"maintenanceWindowStartTime,omitempty"` + // MaintenanceWindowEndTime - End Time for the Maintenance Window. + MaintenanceWindowEndTime *date.Time `json:"maintenanceWindowEndTime,omitempty"` + // LastOperationResultCode - The Last Maintenance Operation Result Code. Possible values include: 'MaintenanceOperationResultCodeTypesNone', 'MaintenanceOperationResultCodeTypesRetryLater', 'MaintenanceOperationResultCodeTypesMaintenanceAborted', 'MaintenanceOperationResultCodeTypesMaintenanceCompleted' + LastOperationResultCode MaintenanceOperationResultCodeTypes `json:"lastOperationResultCode,omitempty"` + // LastOperationMessage - Message returned for the last Maintenance Operation. + LastOperationMessage *string `json:"lastOperationMessage,omitempty"` +} + +// ManagedDiskParameters the parameters of a managed disk. 
+type ManagedDiskParameters struct { + // StorageAccountType - Specifies the storage account type for the managed disk. Possible values are: Standard_LRS or Premium_LRS. Possible values include: 'StorageAccountTypesStandardLRS', 'StorageAccountTypesPremiumLRS' + StorageAccountType StorageAccountTypes `json:"storageAccountType,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// NetworkInterfaceReference describes a network interface reference. +type NetworkInterfaceReference struct { + *NetworkInterfaceReferenceProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for NetworkInterfaceReference. +func (nir NetworkInterfaceReference) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if nir.NetworkInterfaceReferenceProperties != nil { + objectMap["properties"] = nir.NetworkInterfaceReferenceProperties + } + if nir.ID != nil { + objectMap["id"] = nir.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for NetworkInterfaceReference struct. +func (nir *NetworkInterfaceReference) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var networkInterfaceReferenceProperties NetworkInterfaceReferenceProperties + err = json.Unmarshal(*v, &networkInterfaceReferenceProperties) + if err != nil { + return err + } + nir.NetworkInterfaceReferenceProperties = &networkInterfaceReferenceProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + nir.ID = &ID + } + } + } + + return nil +} + +// NetworkInterfaceReferenceProperties describes a network interface reference properties. +type NetworkInterfaceReferenceProperties struct { + // Primary - Specifies the primary network interface in case the virtual machine has more than 1 network interface. + Primary *bool `json:"primary,omitempty"` +} + +// NetworkProfile specifies the network interfaces of the virtual machine. +type NetworkProfile struct { + // NetworkInterfaces - Specifies the list of resource Ids for the network interfaces associated with the virtual machine. + NetworkInterfaces *[]NetworkInterfaceReference `json:"networkInterfaces,omitempty"` +} + +// OperationListResult the List Compute Operation operation response. +type OperationListResult struct { + autorest.Response `json:"-"` + // Value - The list of compute operations + Value *[]OperationValue `json:"value,omitempty"` +} + +// OperationStatusResponse operation status response +type OperationStatusResponse struct { + autorest.Response `json:"-"` + // Name - Operation ID + Name *string `json:"name,omitempty"` + // Status - Operation status + Status *string `json:"status,omitempty"` + // StartTime - Start time of the operation + StartTime *date.Time `json:"startTime,omitempty"` + // EndTime - End time of the operation + EndTime *date.Time `json:"endTime,omitempty"` + // Error - Api error + Error *APIError `json:"error,omitempty"` +} + +// OperationValue describes the properties of a Compute Operation value. +type OperationValue struct { + // Origin - The origin of the compute operation. + Origin *string `json:"origin,omitempty"` + // Name - The name of the compute operation. 
+ Name *string `json:"name,omitempty"` + *OperationValueDisplay `json:"display,omitempty"` +} + +// MarshalJSON is the custom marshaler for OperationValue. +func (ov OperationValue) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ov.Origin != nil { + objectMap["origin"] = ov.Origin + } + if ov.Name != nil { + objectMap["name"] = ov.Name + } + if ov.OperationValueDisplay != nil { + objectMap["display"] = ov.OperationValueDisplay + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for OperationValue struct. +func (ov *OperationValue) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "origin": + if v != nil { + var origin string + err = json.Unmarshal(*v, &origin) + if err != nil { + return err + } + ov.Origin = &origin + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + ov.Name = &name + } + case "display": + if v != nil { + var operationValueDisplay OperationValueDisplay + err = json.Unmarshal(*v, &operationValueDisplay) + if err != nil { + return err + } + ov.OperationValueDisplay = &operationValueDisplay + } + } + } + + return nil +} + +// OperationValueDisplay describes the properties of a Compute Operation Value Display. +type OperationValueDisplay struct { + // Operation - The display name of the compute operation. + Operation *string `json:"operation,omitempty"` + // Resource - The display name of the resource the operation applies to. + Resource *string `json:"resource,omitempty"` + // Description - The description of the operation. + Description *string `json:"description,omitempty"` + // Provider - The resource provider for the operation. + Provider *string `json:"provider,omitempty"` +} + +// OSDisk specifies information about the operating system disk used by the virtual machine.

For more +// information about disks, see [About disks and VHDs for Azure virtual +// machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). +type OSDisk struct { + // OsType - This property allows you to specify the type of the OS that is included in the disk if creating a VM from user-image or a specialized VHD. Possible values are: **Windows**, **Linux**. Possible values include: 'Windows', 'Linux' + OsType OperatingSystemTypes `json:"osType,omitempty"` + // EncryptionSettings - Specifies the encryption settings for the OS Disk. Minimum api-version: 2015-06-15 + EncryptionSettings *DiskEncryptionSettings `json:"encryptionSettings,omitempty"` + // Name - The disk name. + Name *string `json:"name,omitempty"` + // Vhd - The virtual hard disk. + Vhd *VirtualHardDisk `json:"vhd,omitempty"` + // Image - The source user image virtual hard disk. The virtual hard disk will be copied before being attached to the virtual machine. If SourceImage is provided, the destination virtual hard drive must not exist. + Image *VirtualHardDisk `json:"image,omitempty"` + // Caching - Specifies the caching requirements.

Possible values are: **None**, **ReadOnly**, **ReadWrite**. Default: **None for Standard storage. ReadOnly for Premium storage**. Possible values include: 'CachingTypesNone', 'CachingTypesReadOnly', 'CachingTypesReadWrite' + Caching CachingTypes `json:"caching,omitempty"` + // WriteAcceleratorEnabled - Specifies whether writeAccelerator should be enabled or disabled on the disk. + WriteAcceleratorEnabled *bool `json:"writeAcceleratorEnabled,omitempty"` + // CreateOption - Specifies how the virtual machine should be created.

Possible values are: **Attach** \u2013 This value is used when you are using a specialized disk to create the virtual machine. **FromImage** \u2013 This value is used when you are using an image to create the virtual machine. If you are using a platform image, you also use the imageReference element described above. If you are using a marketplace image, you also use the plan element previously described. Possible values include: 'DiskCreateOptionTypesFromImage', 'DiskCreateOptionTypesEmpty', 'DiskCreateOptionTypesAttach' + CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` + // DiskSizeGB - Specifies the size of an empty data disk in gigabytes. This element can be used to overwrite the name of the disk in a virtual machine image. This value cannot be larger than 1023 GB + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + // ManagedDisk - The managed disk parameters. + ManagedDisk *ManagedDiskParameters `json:"managedDisk,omitempty"` +} + +// OSDiskImage contains the os disk image information. +type OSDiskImage struct { + // OperatingSystem - The operating system of the osDiskImage. Possible values include: 'Windows', 'Linux' + OperatingSystem OperatingSystemTypes `json:"operatingSystem,omitempty"` +} + +// OSProfile specifies the operating system settings for the virtual machine. +type OSProfile struct { + // ComputerName - Specifies the host OS name of the virtual machine.

**Max-length (Windows):** 15 characters **Max-length (Linux):** 64 characters. For naming conventions and restrictions see [Azure infrastructure services implementation guidelines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-infrastructure-subscription-accounts-guidelines?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#1-naming-conventions). + ComputerName *string `json:"computerName,omitempty"` + // AdminUsername - Specifies the name of the administrator account.

**Windows-only restriction:** Cannot end in "." **Disallowed values:** "administrator", "admin", "user", "user1", "test", "user2", "test1", "user3", "admin1", "1", "123", "a", "actuser", "adm", "admin2", "aspnet", "backup", "console", "david", "guest", "john", "owner", "root", "server", "sql", "support", "support_388945a0", "sys", "test2", "test3", "user4", "user5". **Minimum-length (Linux):** 1 character **Max-length (Linux):** 64 characters **Max-length (Windows):** 20 characters. For root access to the Linux VM, see [Using root privileges on Linux virtual machines in Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-use-root-privileges?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). For a list of built-in system users on Linux that should not be used in this field, see [Selecting User Names for Linux on Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-usernames?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) + AdminUsername *string `json:"adminUsername,omitempty"` + // AdminPassword - Specifies the password of the administrator account.

**Minimum-length (Windows):** 8 characters **Minimum-length (Linux):** 6 characters **Max-length (Windows):** 123 characters **Max-length (Linux):** 72 characters **Complexity requirements:** 3 out of 4 conditions below need to be fulfilled: Has lower characters, Has upper characters, Has a digit, Has a special character (Regex match [\W_]) **Disallowed values:** "abc@123", "P@$$w0rd", "P@ssw0rd", "P@ssword123", "Pa$$word", "pass@word1", "Password!", "Password1", "Password22", "iloveyou!" For resetting the password, see [How to reset the Remote Desktop service or its login password in a Windows VM](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-reset-rdp?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) For resetting root password, see [Manage users, SSH, and check or repair disks on Azure Linux VMs using the VMAccess Extension](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-using-vmaccess-extension?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#reset-root-password) + AdminPassword *string `json:"adminPassword,omitempty"` + // CustomData - Specifies a base-64 encoded string of custom data. The base-64 encoded string is decoded to a binary array that is saved as a file on the Virtual Machine. The maximum length of the binary array is 65535 bytes.

For using cloud-init for your VM, see [Using cloud-init to customize a Linux VM during creation](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-using-cloud-init?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) + CustomData *string `json:"customData,omitempty"` + // WindowsConfiguration - Specifies Windows operating system settings on the virtual machine. + WindowsConfiguration *WindowsConfiguration `json:"windowsConfiguration,omitempty"` + // LinuxConfiguration - Specifies the Linux operating system settings on the virtual machine. For a list of supported Linux distributions, see [Linux on Azure-Endorsed Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-endorsed-distros?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
    For running non-endorsed distributions, see [Information for Non-Endorsed Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-create-upload-generic?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). + LinuxConfiguration *LinuxConfiguration `json:"linuxConfiguration,omitempty"` + // Secrets - Specifies set of certificates that should be installed onto the virtual machine. + Secrets *[]VaultSecretGroup `json:"secrets,omitempty"` +} + +// Plan specifies information about the marketplace image used to create the virtual machine. This element is only +// used for marketplace images. Before you can use a marketplace image from an API, you must enable the image for +// programmatic use. In the Azure portal, find the marketplace image that you want to use and then click **Want to +// deploy programmatically, Get Started ->**. Enter any required information and then click **Save**. +type Plan struct { + // Name - The plan ID. + Name *string `json:"name,omitempty"` + // Publisher - The publisher ID. + Publisher *string `json:"publisher,omitempty"` + // Product - Specifies the product of the image from the marketplace. This is the same value as Offer under the imageReference element. + Product *string `json:"product,omitempty"` + // PromotionCode - The promotion code. + PromotionCode *string `json:"promotionCode,omitempty"` +} + +// PurchasePlan used for establishing the purchase context of any 3rd Party artifact through MarketPlace. +type PurchasePlan struct { + // Publisher - The publisher ID. + Publisher *string `json:"publisher,omitempty"` + // Name - The plan ID. + Name *string `json:"name,omitempty"` + // Product - Specifies the product of the image from the marketplace. This is the same value as Offer under the imageReference element. + Product *string `json:"product,omitempty"` +} + +// RecoveryWalkResponse response after calling a manual recovery walk +type RecoveryWalkResponse struct { + autorest.Response `json:"-"` + // WalkPerformed - Whether the recovery walk was performed + WalkPerformed *bool `json:"walkPerformed,omitempty"` + // NextPlatformUpdateDomain - The next update domain that needs to be walked. Null means walk spanning all update domains has been completed + NextPlatformUpdateDomain *int32 `json:"nextPlatformUpdateDomain,omitempty"` +} + +// RequestRateByIntervalInput api request input for LogAnalytics getRequestRateByInterval Api. +type RequestRateByIntervalInput struct { + // IntervalLength - Interval value in minutes used to create LogAnalytics call rate logs. Possible values include: 'ThreeMins', 'FiveMins', 'ThirtyMins', 'SixtyMins' + IntervalLength IntervalInMins `json:"intervalLength,omitempty"` + // BlobContainerSasURI - SAS Uri of the logging blob container to which LogAnalytics Api writes output logs to. + BlobContainerSasURI *string `json:"blobContainerSasUri,omitempty"` + // FromTime - From time of the query + FromTime *date.Time `json:"fromTime,omitempty"` + // ToTime - To time of the query + ToTime *date.Time `json:"toTime,omitempty"` + // GroupByThrottlePolicy - Group query result by Throttle Policy applied. + GroupByThrottlePolicy *bool `json:"groupByThrottlePolicy,omitempty"` + // GroupByOperationName - Group query result by by Operation Name. + GroupByOperationName *bool `json:"groupByOperationName,omitempty"` + // GroupByResourceName - Group query result by Resource Name. + GroupByResourceName *bool `json:"groupByResourceName,omitempty"` +} + +// Resource the Resource model definition. 
+type Resource struct { + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Resource. +func (r Resource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if r.ID != nil { + objectMap["id"] = r.ID + } + if r.Name != nil { + objectMap["name"] = r.Name + } + if r.Type != nil { + objectMap["type"] = r.Type + } + if r.Location != nil { + objectMap["location"] = r.Location + } + if r.Tags != nil { + objectMap["tags"] = r.Tags + } + return json.Marshal(objectMap) +} + +// ResourceSku describes an available Compute SKU. +type ResourceSku struct { + // ResourceType - The type of resource the SKU applies to. + ResourceType *string `json:"resourceType,omitempty"` + // Name - The name of SKU. + Name *string `json:"name,omitempty"` + // Tier - Specifies the tier of virtual machines in a scale set.

Possible Values: **Standard**,
    **Basic** + Tier *string `json:"tier,omitempty"` + // Size - The Size of the SKU. + Size *string `json:"size,omitempty"` + // Family - The Family of this particular SKU. + Family *string `json:"family,omitempty"` + // Kind - The Kind of resources that are supported in this SKU. + Kind *string `json:"kind,omitempty"` + // Capacity - Specifies the number of virtual machines in the scale set. + Capacity *ResourceSkuCapacity `json:"capacity,omitempty"` + // Locations - The set of locations that the SKU is available. + Locations *[]string `json:"locations,omitempty"` + // LocationInfo - A list of locations and availability zones in those locations where the SKU is available. + LocationInfo *[]ResourceSkuLocationInfo `json:"locationInfo,omitempty"` + // APIVersions - The api versions that support this SKU. + APIVersions *[]string `json:"apiVersions,omitempty"` + // Costs - Metadata for retrieving price info. + Costs *[]ResourceSkuCosts `json:"costs,omitempty"` + // Capabilities - A name value pair to describe the capability. + Capabilities *[]ResourceSkuCapabilities `json:"capabilities,omitempty"` + // Restrictions - The restrictions because of which SKU cannot be used. This is empty if there are no restrictions. + Restrictions *[]ResourceSkuRestrictions `json:"restrictions,omitempty"` +} + +// ResourceSkuCapabilities describes The SKU capabilites object. +type ResourceSkuCapabilities struct { + // Name - An invariant to describe the feature. + Name *string `json:"name,omitempty"` + // Value - An invariant if the feature is measured by quantity. + Value *string `json:"value,omitempty"` +} + +// ResourceSkuCapacity describes scaling information of a SKU. +type ResourceSkuCapacity struct { + // Minimum - The minimum capacity. + Minimum *int64 `json:"minimum,omitempty"` + // Maximum - The maximum capacity that can be set. + Maximum *int64 `json:"maximum,omitempty"` + // Default - The default capacity. + Default *int64 `json:"default,omitempty"` + // ScaleType - The scale type applicable to the sku. Possible values include: 'ResourceSkuCapacityScaleTypeAutomatic', 'ResourceSkuCapacityScaleTypeManual', 'ResourceSkuCapacityScaleTypeNone' + ScaleType ResourceSkuCapacityScaleType `json:"scaleType,omitempty"` +} + +// ResourceSkuCosts describes metadata for retrieving price info. +type ResourceSkuCosts struct { + // MeterID - Used for querying price from commerce. + MeterID *string `json:"meterID,omitempty"` + // Quantity - The multiplier is needed to extend the base metered cost. + Quantity *int64 `json:"quantity,omitempty"` + // ExtendedUnit - An invariant to show the extended unit. + ExtendedUnit *string `json:"extendedUnit,omitempty"` +} + +// ResourceSkuLocationInfo ... +type ResourceSkuLocationInfo struct { + // Location - Location of the SKU + Location *string `json:"location,omitempty"` + // Zones - List of availability zones where the SKU is supported. + Zones *[]string `json:"zones,omitempty"` +} + +// ResourceSkuRestrictionInfo ... +type ResourceSkuRestrictionInfo struct { + // Locations - Locations where the SKU is restricted + Locations *[]string `json:"locations,omitempty"` + // Zones - List of availability zones where the SKU is restricted. + Zones *[]string `json:"zones,omitempty"` +} + +// ResourceSkuRestrictions describes scaling information of a SKU. +type ResourceSkuRestrictions struct { + // Type - The type of restrictions. Possible values include: 'Location', 'Zone' + Type ResourceSkuRestrictionsType `json:"type,omitempty"` + // Values - The value of restrictions. 
If the restriction type is set to location. This would be different locations where the SKU is restricted. + Values *[]string `json:"values,omitempty"` + // RestrictionInfo - The information about the restriction where the SKU cannot be used. + RestrictionInfo *ResourceSkuRestrictionInfo `json:"restrictionInfo,omitempty"` + // ReasonCode - The reason for restriction. Possible values include: 'QuotaID', 'NotAvailableForSubscription' + ReasonCode ResourceSkuRestrictionsReasonCode `json:"reasonCode,omitempty"` +} + +// ResourceSkusResult the Compute List Skus operation response. +type ResourceSkusResult struct { + autorest.Response `json:"-"` + // Value - The list of skus available for the subscription. + Value *[]ResourceSku `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of Compute Skus. Call ListNext() with this to fetch the next page of VMSS Skus. + NextLink *string `json:"nextLink,omitempty"` +} + +// ResourceSkusResultIterator provides access to a complete listing of ResourceSku values. +type ResourceSkusResultIterator struct { + i int + page ResourceSkusResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ResourceSkusResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ResourceSkusResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ResourceSkusResultIterator) Response() ResourceSkusResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ResourceSkusResultIterator) Value() ResourceSku { + if !iter.page.NotDone() { + return ResourceSku{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (rsr ResourceSkusResult) IsEmpty() bool { + return rsr.Value == nil || len(*rsr.Value) == 0 +} + +// resourceSkusResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (rsr ResourceSkusResult) resourceSkusResultPreparer() (*http.Request, error) { + if rsr.NextLink == nil || len(to.String(rsr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(rsr.NextLink))) +} + +// ResourceSkusResultPage contains a page of ResourceSku values. +type ResourceSkusResultPage struct { + fn func(ResourceSkusResult) (ResourceSkusResult, error) + rsr ResourceSkusResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ResourceSkusResultPage) Next() error { + next, err := page.fn(page.rsr) + if err != nil { + return err + } + page.rsr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ResourceSkusResultPage) NotDone() bool { + return !page.rsr.IsEmpty() +} + +// Response returns the raw server response from the last page request. 
+func (page ResourceSkusResultPage) Response() ResourceSkusResult { + return page.rsr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ResourceSkusResultPage) Values() []ResourceSku { + if page.rsr.IsEmpty() { + return nil + } + return *page.rsr.Value +} + +// RollbackStatusInfo information about rollback on failed VM instances after a OS Upgrade operation. +type RollbackStatusInfo struct { + // SuccessfullyRolledbackInstanceCount - The number of instances which have been successfully rolled back. + SuccessfullyRolledbackInstanceCount *int32 `json:"successfullyRolledbackInstanceCount,omitempty"` + // FailedRolledbackInstanceCount - The number of instances which failed to rollback. + FailedRolledbackInstanceCount *int32 `json:"failedRolledbackInstanceCount,omitempty"` + // RollbackError - Error details if OS rollback failed. + RollbackError *APIError `json:"rollbackError,omitempty"` +} + +// RollingUpgradePolicy the configuration parameters used while performing a rolling upgrade. +type RollingUpgradePolicy struct { + // MaxBatchInstancePercent - The maximum percent of total virtual machine instances that will be upgraded simultaneously by the rolling upgrade in one batch. As this is a maximum, unhealthy instances in previous or future batches can cause the percentage of instances in a batch to decrease to ensure higher reliability. The default value for this parameter is 20%. + MaxBatchInstancePercent *int32 `json:"maxBatchInstancePercent,omitempty"` + // MaxUnhealthyInstancePercent - The maximum percentage of the total virtual machine instances in the scale set that can be simultaneously unhealthy, either as a result of being upgraded, or by being found in an unhealthy state by the virtual machine health checks before the rolling upgrade aborts. This constraint will be checked prior to starting any batch. The default value for this parameter is 20%. + MaxUnhealthyInstancePercent *int32 `json:"maxUnhealthyInstancePercent,omitempty"` + // MaxUnhealthyUpgradedInstancePercent - The maximum percentage of upgraded virtual machine instances that can be found to be in an unhealthy state. This check will happen after each batch is upgraded. If this percentage is ever exceeded, the rolling update aborts. The default value for this parameter is 20%. + MaxUnhealthyUpgradedInstancePercent *int32 `json:"maxUnhealthyUpgradedInstancePercent,omitempty"` + // PauseTimeBetweenBatches - The wait time between completing the update for all virtual machines in one batch and starting the next batch. The time duration should be specified in ISO 8601 format. The default value is 0 seconds (PT0S). + PauseTimeBetweenBatches *string `json:"pauseTimeBetweenBatches,omitempty"` +} + +// RollingUpgradeProgressInfo information about the number of virtual machine instances in each upgrade state. +type RollingUpgradeProgressInfo struct { + // SuccessfulInstanceCount - The number of instances that have been successfully upgraded. + SuccessfulInstanceCount *int32 `json:"successfulInstanceCount,omitempty"` + // FailedInstanceCount - The number of instances that have failed to be upgraded successfully. + FailedInstanceCount *int32 `json:"failedInstanceCount,omitempty"` + // InProgressInstanceCount - The number of instances that are currently being upgraded. + InProgressInstanceCount *int32 `json:"inProgressInstanceCount,omitempty"` + // PendingInstanceCount - The number of instances that have not yet begun to be upgraded. 
+ PendingInstanceCount *int32 `json:"pendingInstanceCount,omitempty"` +} + +// RollingUpgradeRunningStatus information about the current running state of the overall upgrade. +type RollingUpgradeRunningStatus struct { + // Code - Code indicating the current status of the upgrade. Possible values include: 'RollingForward', 'Cancelled', 'Completed', 'Faulted' + Code RollingUpgradeStatusCode `json:"code,omitempty"` + // StartTime - Start time of the upgrade. + StartTime *date.Time `json:"startTime,omitempty"` + // LastAction - The last action performed on the rolling upgrade. Possible values include: 'Start', 'Cancel' + LastAction RollingUpgradeActionType `json:"lastAction,omitempty"` + // LastActionTime - Last action time of the upgrade. + LastActionTime *date.Time `json:"lastActionTime,omitempty"` +} + +// RollingUpgradeStatusInfo the status of the latest virtual machine scale set rolling upgrade. +type RollingUpgradeStatusInfo struct { + autorest.Response `json:"-"` + *RollingUpgradeStatusInfoProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for RollingUpgradeStatusInfo. +func (rusi RollingUpgradeStatusInfo) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if rusi.RollingUpgradeStatusInfoProperties != nil { + objectMap["properties"] = rusi.RollingUpgradeStatusInfoProperties + } + if rusi.ID != nil { + objectMap["id"] = rusi.ID + } + if rusi.Name != nil { + objectMap["name"] = rusi.Name + } + if rusi.Type != nil { + objectMap["type"] = rusi.Type + } + if rusi.Location != nil { + objectMap["location"] = rusi.Location + } + if rusi.Tags != nil { + objectMap["tags"] = rusi.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for RollingUpgradeStatusInfo struct. +func (rusi *RollingUpgradeStatusInfo) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var rollingUpgradeStatusInfoProperties RollingUpgradeStatusInfoProperties + err = json.Unmarshal(*v, &rollingUpgradeStatusInfoProperties) + if err != nil { + return err + } + rusi.RollingUpgradeStatusInfoProperties = &rollingUpgradeStatusInfoProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + rusi.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + rusi.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + rusi.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + rusi.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + rusi.Tags = tags + } + } + } + + return nil +} + +// RollingUpgradeStatusInfoProperties the status of the latest virtual machine scale set rolling upgrade. 
+type RollingUpgradeStatusInfoProperties struct { + // Policy - The rolling upgrade policies applied for this upgrade. + Policy *RollingUpgradePolicy `json:"policy,omitempty"` + // RunningStatus - Information about the current running state of the overall upgrade. + RunningStatus *RollingUpgradeRunningStatus `json:"runningStatus,omitempty"` + // Progress - Information about the number of virtual machine instances in each upgrade state. + Progress *RollingUpgradeProgressInfo `json:"progress,omitempty"` + // Error - Error details for this upgrade, if there are any. + Error *APIError `json:"error,omitempty"` +} + +// RunCommandDocument describes the properties of a Run Command. +type RunCommandDocument struct { + autorest.Response `json:"-"` + // Script - The script to be executed. + Script *[]string `json:"script,omitempty"` + // Parameters - The parameters used by the script. + Parameters *[]RunCommandParameterDefinition `json:"parameters,omitempty"` + // Schema - The VM run command schema. + Schema *string `json:"$schema,omitempty"` + // ID - The VM run command id. + ID *string `json:"id,omitempty"` + // OsType - The Operating System type. Possible values include: 'Windows', 'Linux' + OsType OperatingSystemTypes `json:"osType,omitempty"` + // Label - The VM run command label. + Label *string `json:"label,omitempty"` + // Description - The VM run command description. + Description *string `json:"description,omitempty"` +} + +// RunCommandDocumentBase describes the properties of a Run Command metadata. +type RunCommandDocumentBase struct { + // Schema - The VM run command schema. + Schema *string `json:"$schema,omitempty"` + // ID - The VM run command id. + ID *string `json:"id,omitempty"` + // OsType - The Operating System type. Possible values include: 'Windows', 'Linux' + OsType OperatingSystemTypes `json:"osType,omitempty"` + // Label - The VM run command label. + Label *string `json:"label,omitempty"` + // Description - The VM run command description. + Description *string `json:"description,omitempty"` +} + +// RunCommandInput capture Virtual Machine parameters. +type RunCommandInput struct { + // CommandID - The run command id. + CommandID *string `json:"commandId,omitempty"` + // Script - Optional. The script to be executed. When this value is given, the given script will override the default script of the command. + Script *[]string `json:"script,omitempty"` + // Parameters - The run command parameters. + Parameters *[]RunCommandInputParameter `json:"parameters,omitempty"` +} + +// RunCommandInputParameter describes the properties of a run command parameter. +type RunCommandInputParameter struct { + // Name - The run command parameter name. + Name *string `json:"name,omitempty"` + // Value - The run command parameter value. + Value *string `json:"value,omitempty"` +} + +// RunCommandListResult the List Virtual Machine operation response. +type RunCommandListResult struct { + autorest.Response `json:"-"` + // Value - The list of virtual machine run commands. + Value *[]RunCommandDocumentBase `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of run commands. Call ListNext() with this to fetch the next page of run commands. + NextLink *string `json:"nextLink,omitempty"` +} + +// RunCommandListResultIterator provides access to a complete listing of RunCommandDocumentBase values. +type RunCommandListResultIterator struct { + i int + page RunCommandListResultPage +} + +// Next advances to the next value. 
If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *RunCommandListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter RunCommandListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter RunCommandListResultIterator) Response() RunCommandListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter RunCommandListResultIterator) Value() RunCommandDocumentBase { + if !iter.page.NotDone() { + return RunCommandDocumentBase{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (rclr RunCommandListResult) IsEmpty() bool { + return rclr.Value == nil || len(*rclr.Value) == 0 +} + +// runCommandListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (rclr RunCommandListResult) runCommandListResultPreparer() (*http.Request, error) { + if rclr.NextLink == nil || len(to.String(rclr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(rclr.NextLink))) +} + +// RunCommandListResultPage contains a page of RunCommandDocumentBase values. +type RunCommandListResultPage struct { + fn func(RunCommandListResult) (RunCommandListResult, error) + rclr RunCommandListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *RunCommandListResultPage) Next() error { + next, err := page.fn(page.rclr) + if err != nil { + return err + } + page.rclr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page RunCommandListResultPage) NotDone() bool { + return !page.rclr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page RunCommandListResultPage) Response() RunCommandListResult { + return page.rclr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page RunCommandListResultPage) Values() []RunCommandDocumentBase { + if page.rclr.IsEmpty() { + return nil + } + return *page.rclr.Value +} + +// RunCommandParameterDefinition describes the properties of a run command parameter. +type RunCommandParameterDefinition struct { + // Name - The run command parameter name. + Name *string `json:"name,omitempty"` + // Type - The run command parameter type. + Type *string `json:"type,omitempty"` + // DefaultValue - The run command parameter default value. + DefaultValue *string `json:"defaultValue,omitempty"` + // Required - The run command parameter required. + Required *bool `json:"required,omitempty"` +} + +// RunCommandResult run command operation response. 
+type RunCommandResult struct { + autorest.Response `json:"-"` + *RunCommandResultProperties `json:"properties,omitempty"` + // Name - Operation ID + Name *string `json:"name,omitempty"` + // Status - Operation status + Status *string `json:"status,omitempty"` + // StartTime - Start time of the operation + StartTime *date.Time `json:"startTime,omitempty"` + // EndTime - End time of the operation + EndTime *date.Time `json:"endTime,omitempty"` + // Error - Api error + Error *APIError `json:"error,omitempty"` +} + +// MarshalJSON is the custom marshaler for RunCommandResult. +func (rcr RunCommandResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if rcr.RunCommandResultProperties != nil { + objectMap["properties"] = rcr.RunCommandResultProperties + } + if rcr.Name != nil { + objectMap["name"] = rcr.Name + } + if rcr.Status != nil { + objectMap["status"] = rcr.Status + } + if rcr.StartTime != nil { + objectMap["startTime"] = rcr.StartTime + } + if rcr.EndTime != nil { + objectMap["endTime"] = rcr.EndTime + } + if rcr.Error != nil { + objectMap["error"] = rcr.Error + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for RunCommandResult struct. +func (rcr *RunCommandResult) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var runCommandResultProperties RunCommandResultProperties + err = json.Unmarshal(*v, &runCommandResultProperties) + if err != nil { + return err + } + rcr.RunCommandResultProperties = &runCommandResultProperties + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + rcr.Name = &name + } + case "status": + if v != nil { + var status string + err = json.Unmarshal(*v, &status) + if err != nil { + return err + } + rcr.Status = &status + } + case "startTime": + if v != nil { + var startTime date.Time + err = json.Unmarshal(*v, &startTime) + if err != nil { + return err + } + rcr.StartTime = &startTime + } + case "endTime": + if v != nil { + var endTime date.Time + err = json.Unmarshal(*v, &endTime) + if err != nil { + return err + } + rcr.EndTime = &endTime + } + case "error": + if v != nil { + var errorVar APIError + err = json.Unmarshal(*v, &errorVar) + if err != nil { + return err + } + rcr.Error = &errorVar + } + } + } + + return nil +} + +// RunCommandResultProperties compute-specific operation properties, including output +type RunCommandResultProperties struct { + // Output - Operation output data (raw JSON) + Output interface{} `json:"output,omitempty"` +} + +// Sku describes a virtual machine scale set sku. +type Sku struct { + // Name - The sku name. + Name *string `json:"name,omitempty"` + // Tier - Specifies the tier of virtual machines in a scale set.

Possible Values: **Standard**,
    **Basic** + Tier *string `json:"tier,omitempty"` + // Capacity - Specifies the number of virtual machines in the scale set. + Capacity *int64 `json:"capacity,omitempty"` +} + +// Snapshot snapshot resource. +type Snapshot struct { + autorest.Response `json:"-"` + // ManagedBy - Unused. Always Null. + ManagedBy *string `json:"managedBy,omitempty"` + Sku *SnapshotSku `json:"sku,omitempty"` + *DiskProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Snapshot. +func (s Snapshot) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if s.ManagedBy != nil { + objectMap["managedBy"] = s.ManagedBy + } + if s.Sku != nil { + objectMap["sku"] = s.Sku + } + if s.DiskProperties != nil { + objectMap["properties"] = s.DiskProperties + } + if s.ID != nil { + objectMap["id"] = s.ID + } + if s.Name != nil { + objectMap["name"] = s.Name + } + if s.Type != nil { + objectMap["type"] = s.Type + } + if s.Location != nil { + objectMap["location"] = s.Location + } + if s.Tags != nil { + objectMap["tags"] = s.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Snapshot struct. +func (s *Snapshot) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "managedBy": + if v != nil { + var managedBy string + err = json.Unmarshal(*v, &managedBy) + if err != nil { + return err + } + s.ManagedBy = &managedBy + } + case "sku": + if v != nil { + var sku SnapshotSku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + s.Sku = &sku + } + case "properties": + if v != nil { + var diskProperties DiskProperties + err = json.Unmarshal(*v, &diskProperties) + if err != nil { + return err + } + s.DiskProperties = &diskProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + s.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + s.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + s.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + s.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + s.Tags = tags + } + } + } + + return nil +} + +// SnapshotList the List Snapshots operation response. +type SnapshotList struct { + autorest.Response `json:"-"` + // Value - A list of snapshots. + Value *[]Snapshot `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of snapshots. Call ListNext() with this to fetch the next page of snapshots. + NextLink *string `json:"nextLink,omitempty"` +} + +// SnapshotListIterator provides access to a complete listing of Snapshot values. +type SnapshotListIterator struct { + i int + page SnapshotListPage +} + +// Next advances to the next value. 
If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *SnapshotListIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter SnapshotListIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter SnapshotListIterator) Response() SnapshotList { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter SnapshotListIterator) Value() Snapshot { + if !iter.page.NotDone() { + return Snapshot{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (sl SnapshotList) IsEmpty() bool { + return sl.Value == nil || len(*sl.Value) == 0 +} + +// snapshotListPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (sl SnapshotList) snapshotListPreparer() (*http.Request, error) { + if sl.NextLink == nil || len(to.String(sl.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(sl.NextLink))) +} + +// SnapshotListPage contains a page of Snapshot values. +type SnapshotListPage struct { + fn func(SnapshotList) (SnapshotList, error) + sl SnapshotList +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *SnapshotListPage) Next() error { + next, err := page.fn(page.sl) + if err != nil { + return err + } + page.sl = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page SnapshotListPage) NotDone() bool { + return !page.sl.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page SnapshotListPage) Response() SnapshotList { + return page.sl +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page SnapshotListPage) Values() []Snapshot { + if page.sl.IsEmpty() { + return nil + } + return *page.sl.Value +} + +// SnapshotsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type SnapshotsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *SnapshotsCreateOrUpdateFuture) Result(client SnapshotsClient) (s Snapshot, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.SnapshotsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if s.Response.Response, err = future.GetResult(sender); err == nil && s.Response.Response.StatusCode != http.StatusNoContent { + s, err = client.CreateOrUpdateResponder(s.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsCreateOrUpdateFuture", "Result", s.Response.Response, "Failure responding to request") + } + } + return +} + +// SnapshotsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type SnapshotsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *SnapshotsDeleteFuture) Result(client SnapshotsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.SnapshotsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// SnapshotsGrantAccessFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type SnapshotsGrantAccessFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *SnapshotsGrantAccessFuture) Result(client SnapshotsClient) (au AccessURI, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsGrantAccessFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.SnapshotsGrantAccessFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if au.Response.Response, err = future.GetResult(sender); err == nil && au.Response.Response.StatusCode != http.StatusNoContent { + au, err = client.GrantAccessResponder(au.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsGrantAccessFuture", "Result", au.Response.Response, "Failure responding to request") + } + } + return +} + +// SnapshotSku the snapshots sku name. Can be Standard_LRS, Premium_LRS, or Standard_ZRS. +type SnapshotSku struct { + // Name - The sku name. Possible values include: 'StandardLRS', 'PremiumLRS', 'StandardZRS' + Name SnapshotStorageAccountTypes `json:"name,omitempty"` + // Tier - The sku tier. + Tier *string `json:"tier,omitempty"` +} + +// SnapshotsRevokeAccessFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type SnapshotsRevokeAccessFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. 
+// If the operation has not completed it will return an error. +func (future *SnapshotsRevokeAccessFuture) Result(client SnapshotsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsRevokeAccessFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.SnapshotsRevokeAccessFuture") + return + } + ar.Response = future.Response() + return +} + +// SnapshotsUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type SnapshotsUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *SnapshotsUpdateFuture) Result(client SnapshotsClient) (s Snapshot, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.SnapshotsUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if s.Response.Response, err = future.GetResult(sender); err == nil && s.Response.Response.StatusCode != http.StatusNoContent { + s, err = client.UpdateResponder(s.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsUpdateFuture", "Result", s.Response.Response, "Failure responding to request") + } + } + return +} + +// SnapshotUpdate snapshot update resource. +type SnapshotUpdate struct { + *DiskUpdateProperties `json:"properties,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` + Sku *SnapshotSku `json:"sku,omitempty"` +} + +// MarshalJSON is the custom marshaler for SnapshotUpdate. +func (su SnapshotUpdate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if su.DiskUpdateProperties != nil { + objectMap["properties"] = su.DiskUpdateProperties + } + if su.Tags != nil { + objectMap["tags"] = su.Tags + } + if su.Sku != nil { + objectMap["sku"] = su.Sku + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for SnapshotUpdate struct. 
+func (su *SnapshotUpdate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var diskUpdateProperties DiskUpdateProperties + err = json.Unmarshal(*v, &diskUpdateProperties) + if err != nil { + return err + } + su.DiskUpdateProperties = &diskUpdateProperties + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + su.Tags = tags + } + case "sku": + if v != nil { + var sku SnapshotSku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + su.Sku = &sku + } + } + } + + return nil +} + +// SourceVault the vault id is an Azure Resource Manager Resoure id in the form +// /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.KeyVault/vaults/{vaultName} +type SourceVault struct { + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// SSHConfiguration SSH configuration for Linux based VMs running on Azure +type SSHConfiguration struct { + // PublicKeys - The list of SSH public keys used to authenticate with linux based VMs. + PublicKeys *[]SSHPublicKey `json:"publicKeys,omitempty"` +} + +// SSHPublicKey contains information about SSH certificate public key and the path on the Linux VM where the public +// key is placed. +type SSHPublicKey struct { + // Path - Specifies the full path on the created VM where ssh public key is stored. If the file already exists, the specified key is appended to the file. Example: /home/user/.ssh/authorized_keys + Path *string `json:"path,omitempty"` + // KeyData - SSH public key certificate used to authenticate with the VM through ssh. The key needs to be at least 2048-bit and in ssh-rsa format.

For creating ssh keys, see [Create SSH keys on Linux and Mac for Linux VMs in Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-mac-create-ssh-keys?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). + KeyData *string `json:"keyData,omitempty"` +} + +// StorageProfile specifies the storage settings for the virtual machine disks. +type StorageProfile struct { + // ImageReference - Specifies information about the image to use. You can specify information about platform images, marketplace images, or virtual machine images. This element is required when you want to use a platform image, marketplace image, or virtual machine image, but is not used in other creation operations. + ImageReference *ImageReference `json:"imageReference,omitempty"` + // OsDisk - Specifies information about the operating system disk used by the virtual machine. For more information about disks, see [About disks and VHDs for Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). + OsDisk *OSDisk `json:"osDisk,omitempty"` + // DataDisks - Specifies the parameters that are used to add a data disk to a virtual machine.
    For more information about disks, see [About disks and VHDs for Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). + DataDisks *[]DataDisk `json:"dataDisks,omitempty"` +} + +// SubResource ... +type SubResource struct { + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// SubResourceReadOnly ... +type SubResourceReadOnly struct { + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// ThrottledRequestsInput api request input for LogAnalytics getThrottledRequests Api. +type ThrottledRequestsInput struct { + // BlobContainerSasURI - SAS Uri of the logging blob container to which LogAnalytics Api writes output logs to. + BlobContainerSasURI *string `json:"blobContainerSasUri,omitempty"` + // FromTime - From time of the query + FromTime *date.Time `json:"fromTime,omitempty"` + // ToTime - To time of the query + ToTime *date.Time `json:"toTime,omitempty"` + // GroupByThrottlePolicy - Group query result by Throttle Policy applied. + GroupByThrottlePolicy *bool `json:"groupByThrottlePolicy,omitempty"` + // GroupByOperationName - Group query result by by Operation Name. + GroupByOperationName *bool `json:"groupByOperationName,omitempty"` + // GroupByResourceName - Group query result by Resource Name. + GroupByResourceName *bool `json:"groupByResourceName,omitempty"` +} + +// UpdateResource the Update Resource model definition. +type UpdateResource struct { + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for UpdateResource. +func (ur UpdateResource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ur.Tags != nil { + objectMap["tags"] = ur.Tags + } + return json.Marshal(objectMap) +} + +// UpgradeOperationHistoricalStatusInfo virtual Machine Scale Set OS Upgrade History operation response. +type UpgradeOperationHistoricalStatusInfo struct { + // Properties - Information about the properties of the upgrade operation. + Properties *UpgradeOperationHistoricalStatusInfoProperties `json:"properties,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` +} + +// UpgradeOperationHistoricalStatusInfoProperties describes each OS upgrade on the Virtual Machine Scale Set. +type UpgradeOperationHistoricalStatusInfoProperties struct { + // RunningStatus - Information about the overall status of the upgrade operation. + RunningStatus *UpgradeOperationHistoryStatus `json:"runningStatus,omitempty"` + // Progress - Counts of the VM's in each state. + Progress *RollingUpgradeProgressInfo `json:"progress,omitempty"` + // Error - Error Details for this upgrade if there are any. + Error *APIError `json:"error,omitempty"` + // StartedBy - Invoker of the Upgrade Operation. Possible values include: 'Unknown', 'User', 'Platform' + StartedBy UpgradeOperationInvoker `json:"startedBy,omitempty"` + // TargetImageReference - Image Reference details + TargetImageReference *ImageReference `json:"targetImageReference,omitempty"` + // RollbackInfo - Information about OS rollback if performed + RollbackInfo *RollbackStatusInfo `json:"rollbackInfo,omitempty"` +} + +// UpgradeOperationHistoryStatus information about the current running state of the overall upgrade. +type UpgradeOperationHistoryStatus struct { + // Code - Code indicating the current status of the upgrade. 
Possible values include: 'UpgradeStateRollingForward', 'UpgradeStateCancelled', 'UpgradeStateCompleted', 'UpgradeStateFaulted' + Code UpgradeState `json:"code,omitempty"` + // StartTime - Start time of the upgrade. + StartTime *date.Time `json:"startTime,omitempty"` + // EndTime - End time of the upgrade. + EndTime *date.Time `json:"endTime,omitempty"` +} + +// UpgradePolicy describes an upgrade policy - automatic, manual, or rolling. +type UpgradePolicy struct { + // Mode - Specifies the mode of an upgrade to virtual machines in the scale set.

    Possible values are:

    **Manual** - You control the application of updates to virtual machines in the scale set. You do this by using the manualUpgrade action.

    **Automatic** - All virtual machines in the scale set are automatically updated at the same time. Possible values include: 'Automatic', 'Manual', 'Rolling' + Mode UpgradeMode `json:"mode,omitempty"` + // RollingUpgradePolicy - The configuration parameters used while performing a rolling upgrade. + RollingUpgradePolicy *RollingUpgradePolicy `json:"rollingUpgradePolicy,omitempty"` + // AutomaticOSUpgrade - Whether OS upgrades should automatically be applied to scale set instances in a rolling fashion when a newer version of the image becomes available. + AutomaticOSUpgrade *bool `json:"automaticOSUpgrade,omitempty"` + // AutoOSUpgradePolicy - Configuration parameters used for performing automatic OS Upgrade. + AutoOSUpgradePolicy *AutoOSUpgradePolicy `json:"autoOSUpgradePolicy,omitempty"` +} + +// Usage describes Compute Resource Usage. +type Usage struct { + // Unit - An enum describing the unit of usage measurement. + Unit *string `json:"unit,omitempty"` + // CurrentValue - The current usage of the resource. + CurrentValue *int32 `json:"currentValue,omitempty"` + // Limit - The maximum permitted usage of the resource. + Limit *int64 `json:"limit,omitempty"` + // Name - The name of the type of usage. + Name *UsageName `json:"name,omitempty"` +} + +// UsageName the Usage Names. +type UsageName struct { + // Value - The name of the resource. + Value *string `json:"value,omitempty"` + // LocalizedValue - The localized name of the resource. + LocalizedValue *string `json:"localizedValue,omitempty"` +} + +// VaultCertificate describes a single certificate reference in a Key Vault, and where the certificate should +// reside on the VM. +type VaultCertificate struct { + // CertificateURL - This is the URL of a certificate that has been uploaded to Key Vault as a secret. For adding a secret to the Key Vault, see [Add a key or secret to the key vault](https://docs.microsoft.com/azure/key-vault/key-vault-get-started/#add). In this case, your certificate needs to be It is the Base64 encoding of the following JSON Object which is encoded in UTF-8:

    {
    "data":"",
    "dataType":"pfx",
    "password":""
    } + CertificateURL *string `json:"certificateUrl,omitempty"` + // CertificateStore - For Windows VMs, specifies the certificate store on the Virtual Machine to which the certificate should be added. The specified certificate store is implicitly in the LocalMachine account.

    For Linux VMs, the certificate file is placed under the /var/lib/waagent directory, with the file name .crt for the X509 certificate file and .prv for private key. Both of these files are .pem formatted. + CertificateStore *string `json:"certificateStore,omitempty"` +} + +// VaultSecretGroup describes a set of certificates which are all in the same Key Vault. +type VaultSecretGroup struct { + // SourceVault - The relative URL of the Key Vault containing all of the certificates in VaultCertificates. + SourceVault *SubResource `json:"sourceVault,omitempty"` + // VaultCertificates - The list of key vault references in SourceVault which contain certificates. + VaultCertificates *[]VaultCertificate `json:"vaultCertificates,omitempty"` +} + +// VirtualHardDisk describes the uri of a disk. +type VirtualHardDisk struct { + // URI - Specifies the virtual hard disk's uri. + URI *string `json:"uri,omitempty"` +} + +// VirtualMachine describes a Virtual Machine. +type VirtualMachine struct { + autorest.Response `json:"-"` + // Plan - Specifies information about the marketplace image used to create the virtual machine. This element is only used for marketplace images. Before you can use a marketplace image from an API, you must enable the image for programmatic use. In the Azure portal, find the marketplace image that you want to use and then click **Want to deploy programmatically, Get Started ->**. Enter any required information and then click **Save**. + Plan *Plan `json:"plan,omitempty"` + *VirtualMachineProperties `json:"properties,omitempty"` + // Resources - The virtual machine child extension resources. + Resources *[]VirtualMachineExtension `json:"resources,omitempty"` + // Identity - The identity of the virtual machine, if configured. + Identity *VirtualMachineIdentity `json:"identity,omitempty"` + // Zones - The virtual machine zones. + Zones *[]string `json:"zones,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachine. +func (VM VirtualMachine) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if VM.Plan != nil { + objectMap["plan"] = VM.Plan + } + if VM.VirtualMachineProperties != nil { + objectMap["properties"] = VM.VirtualMachineProperties + } + if VM.Resources != nil { + objectMap["resources"] = VM.Resources + } + if VM.Identity != nil { + objectMap["identity"] = VM.Identity + } + if VM.Zones != nil { + objectMap["zones"] = VM.Zones + } + if VM.ID != nil { + objectMap["id"] = VM.ID + } + if VM.Name != nil { + objectMap["name"] = VM.Name + } + if VM.Type != nil { + objectMap["type"] = VM.Type + } + if VM.Location != nil { + objectMap["location"] = VM.Location + } + if VM.Tags != nil { + objectMap["tags"] = VM.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachine struct. 
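
For reference, the CertificateURL comment above spells out the secret format Key Vault expects for a VM certificate: a small JSON envelope around the Base64-encoded PFX, with the whole envelope Base64-encoded again and stored as the secret value. A minimal sketch of producing that value in Go; the helper and variable names are illustrative and not part of this SDK, and uploading the secret is out of scope here:

```
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// certificateSecret mirrors the JSON object documented for
// VaultCertificate.CertificateURL above.
type certificateSecret struct {
	Data     string `json:"data"`     // Base64 encoding of the PFX file
	DataType string `json:"dataType"` // always "pfx" per the comment above
	Password string `json:"password"` // password protecting the PFX
}

// buildCertificateSecret returns the value to store in Key Vault as a secret:
// the Base64 encoding of the UTF-8 JSON envelope described above.
// Illustrative helper only.
func buildCertificateSecret(pfx []byte, password string) (string, error) {
	envelope, err := json.Marshal(certificateSecret{
		Data:     base64.StdEncoding.EncodeToString(pfx),
		DataType: "pfx",
		Password: password,
	})
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(envelope), nil
}

func main() {
	secret, _ := buildCertificateSecret([]byte("fake-pfx-bytes"), "example-password")
	fmt.Println(secret)
}
```
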
+func (VM *VirtualMachine) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "plan": + if v != nil { + var plan Plan + err = json.Unmarshal(*v, &plan) + if err != nil { + return err + } + VM.Plan = &plan + } + case "properties": + if v != nil { + var virtualMachineProperties VirtualMachineProperties + err = json.Unmarshal(*v, &virtualMachineProperties) + if err != nil { + return err + } + VM.VirtualMachineProperties = &virtualMachineProperties + } + case "resources": + if v != nil { + var resources []VirtualMachineExtension + err = json.Unmarshal(*v, &resources) + if err != nil { + return err + } + VM.Resources = &resources + } + case "identity": + if v != nil { + var identity VirtualMachineIdentity + err = json.Unmarshal(*v, &identity) + if err != nil { + return err + } + VM.Identity = &identity + } + case "zones": + if v != nil { + var zones []string + err = json.Unmarshal(*v, &zones) + if err != nil { + return err + } + VM.Zones = &zones + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + VM.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + VM.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + VM.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + VM.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + VM.Tags = tags + } + } + } + + return nil +} + +// VirtualMachineAgentInstanceView the instance view of the VM Agent running on the virtual machine. +type VirtualMachineAgentInstanceView struct { + // VMAgentVersion - The VM Agent full version. + VMAgentVersion *string `json:"vmAgentVersion,omitempty"` + // ExtensionHandlers - The virtual machine extension handler instance view. + ExtensionHandlers *[]VirtualMachineExtensionHandlerInstanceView `json:"extensionHandlers,omitempty"` + // Statuses - The resource status information. + Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` +} + +// VirtualMachineCaptureParameters capture Virtual Machine parameters. +type VirtualMachineCaptureParameters struct { + // VhdPrefix - The captured virtual hard disk's name prefix. + VhdPrefix *string `json:"vhdPrefix,omitempty"` + // DestinationContainerName - The destination container name. + DestinationContainerName *string `json:"destinationContainerName,omitempty"` + // OverwriteVhds - Specifies whether to overwrite the destination virtual hard disk, in case of conflict. + OverwriteVhds *bool `json:"overwriteVhds,omitempty"` +} + +// VirtualMachineCaptureResult resource Id. +type VirtualMachineCaptureResult struct { + autorest.Response `json:"-"` + *VirtualMachineCaptureResultProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineCaptureResult. 
+func (vmcr VirtualMachineCaptureResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmcr.VirtualMachineCaptureResultProperties != nil { + objectMap["properties"] = vmcr.VirtualMachineCaptureResultProperties + } + if vmcr.ID != nil { + objectMap["id"] = vmcr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineCaptureResult struct. +func (vmcr *VirtualMachineCaptureResult) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualMachineCaptureResultProperties VirtualMachineCaptureResultProperties + err = json.Unmarshal(*v, &virtualMachineCaptureResultProperties) + if err != nil { + return err + } + vmcr.VirtualMachineCaptureResultProperties = &virtualMachineCaptureResultProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmcr.ID = &ID + } + } + } + + return nil +} + +// VirtualMachineCaptureResultProperties compute-specific operation properties, including output +type VirtualMachineCaptureResultProperties struct { + // Output - Operation output data (raw JSON) + Output interface{} `json:"output,omitempty"` +} + +// VirtualMachineExtension describes a Virtual Machine Extension. +type VirtualMachineExtension struct { + autorest.Response `json:"-"` + *VirtualMachineExtensionProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineExtension. +func (vme VirtualMachineExtension) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vme.VirtualMachineExtensionProperties != nil { + objectMap["properties"] = vme.VirtualMachineExtensionProperties + } + if vme.ID != nil { + objectMap["id"] = vme.ID + } + if vme.Name != nil { + objectMap["name"] = vme.Name + } + if vme.Type != nil { + objectMap["type"] = vme.Type + } + if vme.Location != nil { + objectMap["location"] = vme.Location + } + if vme.Tags != nil { + objectMap["tags"] = vme.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineExtension struct. 
+func (vme *VirtualMachineExtension) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualMachineExtensionProperties VirtualMachineExtensionProperties + err = json.Unmarshal(*v, &virtualMachineExtensionProperties) + if err != nil { + return err + } + vme.VirtualMachineExtensionProperties = &virtualMachineExtensionProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vme.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vme.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vme.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vme.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vme.Tags = tags + } + } + } + + return nil +} + +// VirtualMachineExtensionHandlerInstanceView the instance view of a virtual machine extension handler. +type VirtualMachineExtensionHandlerInstanceView struct { + // Type - Specifies the type of the extension; an example is "CustomScriptExtension". + Type *string `json:"type,omitempty"` + // TypeHandlerVersion - Specifies the version of the script handler. + TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` + // Status - The extension handler status. + Status *InstanceViewStatus `json:"status,omitempty"` +} + +// VirtualMachineExtensionImage describes a Virtual Machine Extension Image. +type VirtualMachineExtensionImage struct { + autorest.Response `json:"-"` + *VirtualMachineExtensionImageProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineExtensionImage. +func (vmei VirtualMachineExtensionImage) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmei.VirtualMachineExtensionImageProperties != nil { + objectMap["properties"] = vmei.VirtualMachineExtensionImageProperties + } + if vmei.ID != nil { + objectMap["id"] = vmei.ID + } + if vmei.Name != nil { + objectMap["name"] = vmei.Name + } + if vmei.Type != nil { + objectMap["type"] = vmei.Type + } + if vmei.Location != nil { + objectMap["location"] = vmei.Location + } + if vmei.Tags != nil { + objectMap["tags"] = vmei.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineExtensionImage struct. 
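
All of the custom marshalers in this file follow the same shape: nil fields are skipped, and the embedded *...Properties struct, when present, is re-nested under a "properties" key (the matching unmarshalers undo that flattening). A quick sketch of the behaviour from calling code; the import path's API-version segment is an assumption and should match whatever this PR vendors:

```
package main

import (
	"encoding/json"
	"fmt"

	// API-version segment assumed; use the compute package vendored by this PR.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute"
)

func main() {
	location := "westus"
	env := "test"
	ext := compute.VirtualMachineExtension{
		Location: &location,
		Tags:     map[string]*string{"environment": &env},
	}

	// Only non-nil fields are emitted; a populated embedded
	// *VirtualMachineExtensionProperties would appear under "properties".
	body, err := json.Marshal(ext)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // {"location":"westus","tags":{"environment":"test"}}

	// The custom UnmarshalJSON reverses the mapping on the way back in.
	var roundTrip compute.VirtualMachineExtension
	if err := json.Unmarshal(body, &roundTrip); err != nil {
		panic(err)
	}
	fmt.Println(*roundTrip.Location) // westus
}
```
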
+func (vmei *VirtualMachineExtensionImage) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualMachineExtensionImageProperties VirtualMachineExtensionImageProperties + err = json.Unmarshal(*v, &virtualMachineExtensionImageProperties) + if err != nil { + return err + } + vmei.VirtualMachineExtensionImageProperties = &virtualMachineExtensionImageProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmei.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmei.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vmei.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vmei.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vmei.Tags = tags + } + } + } + + return nil +} + +// VirtualMachineExtensionImageProperties describes the properties of a Virtual Machine Extension Image. +type VirtualMachineExtensionImageProperties struct { + // OperatingSystem - The operating system this extension supports. + OperatingSystem *string `json:"operatingSystem,omitempty"` + // ComputeRole - The type of role (IaaS or PaaS) this extension supports. + ComputeRole *string `json:"computeRole,omitempty"` + // HandlerSchema - The schema defined by publisher, where extension consumers should provide settings in a matching schema. + HandlerSchema *string `json:"handlerSchema,omitempty"` + // VMScaleSetEnabled - Whether the extension can be used on xRP VMScaleSets. By default existing extensions are usable on scalesets, but there might be cases where a publisher wants to explicitly indicate the extension is only enabled for CRP VMs but not VMSS. + VMScaleSetEnabled *bool `json:"vmScaleSetEnabled,omitempty"` + // SupportsMultipleExtensions - Whether the handler can support multiple extensions. + SupportsMultipleExtensions *bool `json:"supportsMultipleExtensions,omitempty"` +} + +// VirtualMachineExtensionInstanceView the instance view of a virtual machine extension. +type VirtualMachineExtensionInstanceView struct { + // Name - The virtual machine extension name. + Name *string `json:"name,omitempty"` + // Type - Specifies the type of the extension; an example is "CustomScriptExtension". + Type *string `json:"type,omitempty"` + // TypeHandlerVersion - Specifies the version of the script handler. + TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` + // Substatuses - The resource status information. + Substatuses *[]InstanceViewStatus `json:"substatuses,omitempty"` + // Statuses - The resource status information. + Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` +} + +// VirtualMachineExtensionProperties describes the properties of a Virtual Machine Extension. +type VirtualMachineExtensionProperties struct { + // ForceUpdateTag - How the extension handler should be forced to update even if the extension configuration has not changed. + ForceUpdateTag *string `json:"forceUpdateTag,omitempty"` + // Publisher - The name of the extension handler publisher. 
+ Publisher *string `json:"publisher,omitempty"` + // Type - Specifies the type of the extension; an example is "CustomScriptExtension". + Type *string `json:"type,omitempty"` + // TypeHandlerVersion - Specifies the version of the script handler. + TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` + // AutoUpgradeMinorVersion - Indicates whether the extension should use a newer minor version if one is available at deployment time. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. + AutoUpgradeMinorVersion *bool `json:"autoUpgradeMinorVersion,omitempty"` + // Settings - Json formatted public settings for the extension. + Settings interface{} `json:"settings,omitempty"` + // ProtectedSettings - The extension can contain either protectedSettings or protectedSettingsFromKeyVault or no protected settings at all. + ProtectedSettings interface{} `json:"protectedSettings,omitempty"` + // ProvisioningState - The provisioning state, which only appears in the response. + ProvisioningState *string `json:"provisioningState,omitempty"` + // InstanceView - The virtual machine extension instance view. + InstanceView *VirtualMachineExtensionInstanceView `json:"instanceView,omitempty"` +} + +// VirtualMachineExtensionsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineExtensionsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineExtensionsCreateOrUpdateFuture) Result(client VirtualMachineExtensionsClient) (vme VirtualMachineExtension, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineExtensionsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vme.Response.Response, err = future.GetResult(sender); err == nil && vme.Response.Response.StatusCode != http.StatusNoContent { + vme, err = client.CreateOrUpdateResponder(vme.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsCreateOrUpdateFuture", "Result", vme.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineExtensionsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineExtensionsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineExtensionsDeleteFuture) Result(client VirtualMachineExtensionsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineExtensionsDeleteFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeleteResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsDeleteFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineExtensionsListResult the List Extension operation response +type VirtualMachineExtensionsListResult struct { + autorest.Response `json:"-"` + // Value - The list of extensions + Value *[]VirtualMachineExtension `json:"value,omitempty"` +} + +// VirtualMachineExtensionsUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineExtensionsUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineExtensionsUpdateFuture) Result(client VirtualMachineExtensionsClient) (vme VirtualMachineExtension, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineExtensionsUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vme.Response.Response, err = future.GetResult(sender); err == nil && vme.Response.Response.StatusCode != http.StatusNoContent { + vme, err = client.UpdateResponder(vme.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsUpdateFuture", "Result", vme.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineExtensionUpdate describes a Virtual Machine Extension. +type VirtualMachineExtensionUpdate struct { + *VirtualMachineExtensionUpdateProperties `json:"properties,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineExtensionUpdate. +func (vmeu VirtualMachineExtensionUpdate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmeu.VirtualMachineExtensionUpdateProperties != nil { + objectMap["properties"] = vmeu.VirtualMachineExtensionUpdateProperties + } + if vmeu.Tags != nil { + objectMap["tags"] = vmeu.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineExtensionUpdate struct. 
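
The *Future wrappers above expose Done and Result, which is enough to poll a long-running extension operation by hand. A rough sketch follows; it assumes the caller already holds a configured client and the future returned by its CreateOrUpdate call (not shown here), and newer go-autorest releases also ship WaitForCompletion-style helpers on azure.Future that would normally replace the manual loop:

```
package example

import (
	"time"

	// API-version segment assumed; use the compute package vendored by this PR.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute"
)

// waitForExtension polls an in-flight CreateOrUpdate until it completes and
// then decodes the resulting extension. Illustrative helper; production code
// would add a context/timeout and smarter backoff.
func waitForExtension(client compute.VirtualMachineExtensionsClient,
	future compute.VirtualMachineExtensionsCreateOrUpdateFuture) (compute.VirtualMachineExtension, error) {

	for {
		// Done sends a polling request for the operation's current status.
		done, err := future.Done(client)
		if err != nil {
			return compute.VirtualMachineExtension{}, err
		}
		if done {
			break
		}
		time.Sleep(10 * time.Second)
	}
	// Result re-checks completion and unmarshals the final response body.
	return future.Result(client)
}
```
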
+func (vmeu *VirtualMachineExtensionUpdate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualMachineExtensionUpdateProperties VirtualMachineExtensionUpdateProperties + err = json.Unmarshal(*v, &virtualMachineExtensionUpdateProperties) + if err != nil { + return err + } + vmeu.VirtualMachineExtensionUpdateProperties = &virtualMachineExtensionUpdateProperties + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vmeu.Tags = tags + } + } + } + + return nil +} + +// VirtualMachineExtensionUpdateProperties describes the properties of a Virtual Machine Extension. +type VirtualMachineExtensionUpdateProperties struct { + // ForceUpdateTag - How the extension handler should be forced to update even if the extension configuration has not changed. + ForceUpdateTag *string `json:"forceUpdateTag,omitempty"` + // Publisher - The name of the extension handler publisher. + Publisher *string `json:"publisher,omitempty"` + // Type - Specifies the type of the extension; an example is "CustomScriptExtension". + Type *string `json:"type,omitempty"` + // TypeHandlerVersion - Specifies the version of the script handler. + TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` + // AutoUpgradeMinorVersion - Indicates whether the extension should use a newer minor version if one is available at deployment time. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. + AutoUpgradeMinorVersion *bool `json:"autoUpgradeMinorVersion,omitempty"` + // Settings - Json formatted public settings for the extension. + Settings interface{} `json:"settings,omitempty"` + // ProtectedSettings - The extension can contain either protectedSettings or protectedSettingsFromKeyVault or no protected settings at all. + ProtectedSettings interface{} `json:"protectedSettings,omitempty"` +} + +// VirtualMachineHealthStatus the health status of the VM. +type VirtualMachineHealthStatus struct { + // Status - The health status information for the VM. + Status *InstanceViewStatus `json:"status,omitempty"` +} + +// VirtualMachineIdentity identity for the virtual machine. +type VirtualMachineIdentity struct { + // PrincipalID - The principal id of virtual machine identity. This property will only be provided for a system assigned identity. + PrincipalID *string `json:"principalId,omitempty"` + // TenantID - The tenant id associated with the virtual machine. This property will only be provided for a system assigned identity. + TenantID *string `json:"tenantId,omitempty"` + // Type - The type of identity used for the virtual machine. The type 'SystemAssigned, UserAssigned' includes both an implicitly created identity and a set of user assigned identities. The type 'None' will remove any identities from the virtual machine. Possible values include: 'ResourceIdentityTypeSystemAssigned', 'ResourceIdentityTypeUserAssigned', 'ResourceIdentityTypeSystemAssignedUserAssigned', 'ResourceIdentityTypeNone' + Type ResourceIdentityType `json:"type,omitempty"` + // IdentityIds - The list of user identities associated with the Virtual Machine. 
The user identity references will be ARM resource ids in the form: '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/identities/{identityName}'. + IdentityIds *[]string `json:"identityIds,omitempty"` +} + +// VirtualMachineImage describes a Virtual Machine Image. +type VirtualMachineImage struct { + autorest.Response `json:"-"` + *VirtualMachineImageProperties `json:"properties,omitempty"` + // Name - The name of the resource. + Name *string `json:"name,omitempty"` + // Location - The supported Azure location of the resource. + Location *string `json:"location,omitempty"` + // Tags - Specifies the tags that are assigned to the virtual machine. For more information about using tags, see [Using tags to organize your Azure resources](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-using-tags.md). + Tags map[string]*string `json:"tags"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineImage. +func (vmi VirtualMachineImage) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmi.VirtualMachineImageProperties != nil { + objectMap["properties"] = vmi.VirtualMachineImageProperties + } + if vmi.Name != nil { + objectMap["name"] = vmi.Name + } + if vmi.Location != nil { + objectMap["location"] = vmi.Location + } + if vmi.Tags != nil { + objectMap["tags"] = vmi.Tags + } + if vmi.ID != nil { + objectMap["id"] = vmi.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineImage struct. +func (vmi *VirtualMachineImage) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualMachineImageProperties VirtualMachineImageProperties + err = json.Unmarshal(*v, &virtualMachineImageProperties) + if err != nil { + return err + } + vmi.VirtualMachineImageProperties = &virtualMachineImageProperties + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmi.Name = &name + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vmi.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vmi.Tags = tags + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmi.ID = &ID + } + } + } + + return nil +} + +// VirtualMachineImageProperties describes the properties of a Virtual Machine Image. +type VirtualMachineImageProperties struct { + Plan *PurchasePlan `json:"plan,omitempty"` + OsDiskImage *OSDiskImage `json:"osDiskImage,omitempty"` + DataDiskImages *[]DataDiskImage `json:"dataDiskImages,omitempty"` +} + +// VirtualMachineImageResource virtual machine image resource information. +type VirtualMachineImageResource struct { + // Name - The name of the resource. + Name *string `json:"name,omitempty"` + // Location - The supported Azure location of the resource. + Location *string `json:"location,omitempty"` + // Tags - Specifies the tags that are assigned to the virtual machine. 
For more information about using tags, see [Using tags to organize your Azure resources](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-using-tags.md). + Tags map[string]*string `json:"tags"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineImageResource. +func (vmir VirtualMachineImageResource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmir.Name != nil { + objectMap["name"] = vmir.Name + } + if vmir.Location != nil { + objectMap["location"] = vmir.Location + } + if vmir.Tags != nil { + objectMap["tags"] = vmir.Tags + } + if vmir.ID != nil { + objectMap["id"] = vmir.ID + } + return json.Marshal(objectMap) +} + +// VirtualMachineInstanceView the instance view of a virtual machine. +type VirtualMachineInstanceView struct { + autorest.Response `json:"-"` + // PlatformUpdateDomain - Specifies the update domain of the virtual machine. + PlatformUpdateDomain *int32 `json:"platformUpdateDomain,omitempty"` + // PlatformFaultDomain - Specifies the fault domain of the virtual machine. + PlatformFaultDomain *int32 `json:"platformFaultDomain,omitempty"` + // ComputerName - The computer name assigned to the virtual machine. + ComputerName *string `json:"computerName,omitempty"` + // OsName - The Operating System running on the virtual machine. + OsName *string `json:"osName,omitempty"` + // OsVersion - The version of Operating System running on the virtual machine. + OsVersion *string `json:"osVersion,omitempty"` + // RdpThumbPrint - The Remote desktop certificate thumbprint. + RdpThumbPrint *string `json:"rdpThumbPrint,omitempty"` + // VMAgent - The VM Agent running on the virtual machine. + VMAgent *VirtualMachineAgentInstanceView `json:"vmAgent,omitempty"` + // MaintenanceRedeployStatus - The Maintenance Operation status on the virtual machine. + MaintenanceRedeployStatus *MaintenanceRedeployStatus `json:"maintenanceRedeployStatus,omitempty"` + // Disks - The virtual machine disk information. + Disks *[]DiskInstanceView `json:"disks,omitempty"` + // Extensions - The extensions information. + Extensions *[]VirtualMachineExtensionInstanceView `json:"extensions,omitempty"` + // BootDiagnostics - Boot Diagnostics is a debugging feature which allows you to view Console Output and Screenshot to diagnose VM status.

    For Linux Virtual Machines, you can easily view the output of your console log.

    For both Windows and Linux virtual machines, Azure also enables you to see a screenshot of the VM from the hypervisor. + BootDiagnostics *BootDiagnosticsInstanceView `json:"bootDiagnostics,omitempty"` + // Statuses - The resource status information. + Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` +} + +// VirtualMachineListResult the List Virtual Machine operation response. +type VirtualMachineListResult struct { + autorest.Response `json:"-"` + // Value - The list of virtual machines. + Value *[]VirtualMachine `json:"value,omitempty"` + // NextLink - The URI to fetch the next page of VMs. Call ListNext() with this URI to fetch the next page of Virtual Machines. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualMachineListResultIterator provides access to a complete listing of VirtualMachine values. +type VirtualMachineListResultIterator struct { + i int + page VirtualMachineListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualMachineListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualMachineListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualMachineListResultIterator) Response() VirtualMachineListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualMachineListResultIterator) Value() VirtualMachine { + if !iter.page.NotDone() { + return VirtualMachine{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vmlr VirtualMachineListResult) IsEmpty() bool { + return vmlr.Value == nil || len(*vmlr.Value) == 0 +} + +// virtualMachineListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vmlr VirtualMachineListResult) virtualMachineListResultPreparer() (*http.Request, error) { + if vmlr.NextLink == nil || len(to.String(vmlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vmlr.NextLink))) +} + +// VirtualMachineListResultPage contains a page of VirtualMachine values. +type VirtualMachineListResultPage struct { + fn func(VirtualMachineListResult) (VirtualMachineListResult, error) + vmlr VirtualMachineListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualMachineListResultPage) Next() error { + next, err := page.fn(page.vmlr) + if err != nil { + return err + } + page.vmlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualMachineListResultPage) NotDone() bool { + return !page.vmlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. 
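
The iterator and page types above are the standard pagination plumbing for the List operations in this package. A short sketch of draining an iterator into a slice of names; obtaining the iterator from the VirtualMachines client's list call is assumed and not shown in this file:

```
package example

import (
	// API-version segment assumed; use the compute package vendored by this PR.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute"
)

// vmNames walks every page behind the iterator and collects the VM names.
func vmNames(iter compute.VirtualMachineListResultIterator) ([]string, error) {
	var names []string
	for iter.NotDone() {
		if vm := iter.Value(); vm.Name != nil {
			names = append(names, *vm.Name)
		}
		// Next fetches the following page once the current page's values
		// are exhausted.
		if err := iter.Next(); err != nil {
			return names, err
		}
	}
	return names, nil
}
```
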
+func (page VirtualMachineListResultPage) Response() VirtualMachineListResult { + return page.vmlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualMachineListResultPage) Values() []VirtualMachine { + if page.vmlr.IsEmpty() { + return nil + } + return *page.vmlr.Value +} + +// VirtualMachineProperties describes the properties of a Virtual Machine. +type VirtualMachineProperties struct { + // HardwareProfile - Specifies the hardware settings for the virtual machine. + HardwareProfile *HardwareProfile `json:"hardwareProfile,omitempty"` + // StorageProfile - Specifies the storage settings for the virtual machine disks. + StorageProfile *StorageProfile `json:"storageProfile,omitempty"` + // OsProfile - Specifies the operating system settings for the virtual machine. + OsProfile *OSProfile `json:"osProfile,omitempty"` + // NetworkProfile - Specifies the network interfaces of the virtual machine. + NetworkProfile *NetworkProfile `json:"networkProfile,omitempty"` + // DiagnosticsProfile - Specifies the boot diagnostic settings state.

    Minimum api-version: 2015-06-15. + DiagnosticsProfile *DiagnosticsProfile `json:"diagnosticsProfile,omitempty"` + // AvailabilitySet - Specifies information about the availability set that the virtual machine should be assigned to. Virtual machines specified in the same availability set are allocated to different nodes to maximize availability. For more information about availability sets, see [Manage the availability of virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-manage-availability?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).

    For more information on Azure planned maintenance, see [Planned maintenance for virtual machines in Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-planned-maintenance?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)

    Currently, a VM can only be added to an availability set at creation time. An existing VM cannot be added to an availability set. + AvailabilitySet *SubResource `json:"availabilitySet,omitempty"` + // ProvisioningState - The provisioning state, which only appears in the response. + ProvisioningState *string `json:"provisioningState,omitempty"` + // InstanceView - The virtual machine instance view. + InstanceView *VirtualMachineInstanceView `json:"instanceView,omitempty"` + // LicenseType - Specifies that the image or disk that is being used was licensed on-premises. This element is only used for images that contain the Windows Server operating system.

    Possible values are:

    Windows_Client

    Windows_Server

    If this element is included in a request for an update, the value must match the initial value. This value cannot be updated.

    For more information, see [Azure Hybrid Use Benefit for Windows Server](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)

    Minimum api-version: 2015-06-15 + LicenseType *string `json:"licenseType,omitempty"` + // VMID - Specifies the VM unique ID which is a 128-bits identifier that is encoded and stored in all Azure IaaS VMs SMBIOS and can be read using platform BIOS commands. + VMID *string `json:"vmId,omitempty"` +} + +// VirtualMachineScaleSet describes a Virtual Machine Scale Set. +type VirtualMachineScaleSet struct { + autorest.Response `json:"-"` + // Sku - The virtual machine scale set sku. + Sku *Sku `json:"sku,omitempty"` + // Plan - Specifies information about the marketplace image used to create the virtual machine. This element is only used for marketplace images. Before you can use a marketplace image from an API, you must enable the image for programmatic use. In the Azure portal, find the marketplace image that you want to use and then click **Want to deploy programmatically, Get Started ->**. Enter any required information and then click **Save**. + Plan *Plan `json:"plan,omitempty"` + *VirtualMachineScaleSetProperties `json:"properties,omitempty"` + // Identity - The identity of the virtual machine scale set, if configured. + Identity *VirtualMachineScaleSetIdentity `json:"identity,omitempty"` + // Zones - The virtual machine scale set zones. + Zones *[]string `json:"zones,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSet. +func (vmss VirtualMachineScaleSet) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmss.Sku != nil { + objectMap["sku"] = vmss.Sku + } + if vmss.Plan != nil { + objectMap["plan"] = vmss.Plan + } + if vmss.VirtualMachineScaleSetProperties != nil { + objectMap["properties"] = vmss.VirtualMachineScaleSetProperties + } + if vmss.Identity != nil { + objectMap["identity"] = vmss.Identity + } + if vmss.Zones != nil { + objectMap["zones"] = vmss.Zones + } + if vmss.ID != nil { + objectMap["id"] = vmss.ID + } + if vmss.Name != nil { + objectMap["name"] = vmss.Name + } + if vmss.Type != nil { + objectMap["type"] = vmss.Type + } + if vmss.Location != nil { + objectMap["location"] = vmss.Location + } + if vmss.Tags != nil { + objectMap["tags"] = vmss.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSet struct. 
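
As the LicenseType comment above notes, Azure Hybrid Use Benefit is requested simply by setting that field to one of the two documented string values on the VM properties. A minimal sketch; the import path's API-version segment is an assumption:

```
package main

import (
	"fmt"

	// API-version segment assumed; use the compute package vendored by this PR.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute"
)

func main() {
	// "Windows_Client" is the other documented value; the field only applies
	// to images containing the Windows Server operating system.
	licenseType := "Windows_Server"
	props := compute.VirtualMachineProperties{
		LicenseType: &licenseType,
	}
	fmt.Println(*props.LicenseType)
}
```
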
+func (vmss *VirtualMachineScaleSet) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + vmss.Sku = &sku + } + case "plan": + if v != nil { + var plan Plan + err = json.Unmarshal(*v, &plan) + if err != nil { + return err + } + vmss.Plan = &plan + } + case "properties": + if v != nil { + var virtualMachineScaleSetProperties VirtualMachineScaleSetProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetProperties) + if err != nil { + return err + } + vmss.VirtualMachineScaleSetProperties = &virtualMachineScaleSetProperties + } + case "identity": + if v != nil { + var identity VirtualMachineScaleSetIdentity + err = json.Unmarshal(*v, &identity) + if err != nil { + return err + } + vmss.Identity = &identity + } + case "zones": + if v != nil { + var zones []string + err = json.Unmarshal(*v, &zones) + if err != nil { + return err + } + vmss.Zones = &zones + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmss.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmss.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vmss.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vmss.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vmss.Tags = tags + } + } + } + + return nil +} + +// VirtualMachineScaleSetDataDisk describes a virtual machine scale set data disk. +type VirtualMachineScaleSetDataDisk struct { + // Name - The disk name. + Name *string `json:"name,omitempty"` + // Lun - Specifies the logical unit number of the data disk. This value is used to identify data disks within the VM and therefore must be unique for each data disk attached to a VM. + Lun *int32 `json:"lun,omitempty"` + // Caching - Specifies the caching requirements.

    Possible values are:

    **None**

    **ReadOnly**

    **ReadWrite**

    Default: **None for Standard storage. ReadOnly for Premium storage**. Possible values include: 'CachingTypesNone', 'CachingTypesReadOnly', 'CachingTypesReadWrite' + Caching CachingTypes `json:"caching,omitempty"` + // WriteAcceleratorEnabled - Specifies whether writeAccelerator should be enabled or disabled on the disk. + WriteAcceleratorEnabled *bool `json:"writeAcceleratorEnabled,omitempty"` + // CreateOption - The create option. Possible values include: 'DiskCreateOptionTypesFromImage', 'DiskCreateOptionTypesEmpty', 'DiskCreateOptionTypesAttach' + CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` + // DiskSizeGB - Specifies the size of an empty data disk in gigabytes. This element can be used to overwrite the name of the disk in a virtual machine image.

    This value cannot be larger than 1023 GB + DiskSizeGB *int32 `json:"diskSizeGB,omitempty"` + // ManagedDisk - The managed disk parameters. + ManagedDisk *VirtualMachineScaleSetManagedDiskParameters `json:"managedDisk,omitempty"` +} + +// VirtualMachineScaleSetExtension describes a Virtual Machine Scale Set Extension. +type VirtualMachineScaleSetExtension struct { + autorest.Response `json:"-"` + // Name - The name of the extension. + Name *string `json:"name,omitempty"` + *VirtualMachineScaleSetExtensionProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetExtension. +func (vmsse VirtualMachineScaleSetExtension) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmsse.Name != nil { + objectMap["name"] = vmsse.Name + } + if vmsse.VirtualMachineScaleSetExtensionProperties != nil { + objectMap["properties"] = vmsse.VirtualMachineScaleSetExtensionProperties + } + if vmsse.ID != nil { + objectMap["id"] = vmsse.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetExtension struct. +func (vmsse *VirtualMachineScaleSetExtension) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmsse.Name = &name + } + case "properties": + if v != nil { + var virtualMachineScaleSetExtensionProperties VirtualMachineScaleSetExtensionProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetExtensionProperties) + if err != nil { + return err + } + vmsse.VirtualMachineScaleSetExtensionProperties = &virtualMachineScaleSetExtensionProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmsse.ID = &ID + } + } + } + + return nil +} + +// VirtualMachineScaleSetExtensionListResult the List VM scale set extension operation response. +type VirtualMachineScaleSetExtensionListResult struct { + autorest.Response `json:"-"` + // Value - The list of VM scale set extensions. + Value *[]VirtualMachineScaleSetExtension `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of VM scale set extensions. Call ListNext() with this to fetch the next page of VM scale set extensions. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualMachineScaleSetExtensionListResultIterator provides access to a complete listing of +// VirtualMachineScaleSetExtension values. +type VirtualMachineScaleSetExtensionListResultIterator struct { + i int + page VirtualMachineScaleSetExtensionListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualMachineScaleSetExtensionListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualMachineScaleSetExtensionListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. 
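
Pulling the data-disk rules above together (a LUN unique within the VM, the caching default, and the 1023 GB ceiling for empty disks), here is a sketch of an empty data disk definition for a scale set; the LUN and size are illustrative, and the enum constants are the ones named in the Caching and CreateOption comments above:

```
package main

import (
	"fmt"

	// API-version segment assumed; use the compute package vendored by this PR.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute"
)

func main() {
	lun := int32(0)      // must be unique among the VM's data disks
	sizeGB := int32(512) // empty disks cannot exceed 1023 GB per the comment above
	disk := compute.VirtualMachineScaleSetDataDisk{
		Lun:          &lun,
		Caching:      compute.CachingTypesReadOnly,
		CreateOption: compute.DiskCreateOptionTypesEmpty,
		DiskSizeGB:   &sizeGB,
	}
	fmt.Println(*disk.Lun, disk.Caching, *disk.DiskSizeGB)
}
```
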
+func (iter VirtualMachineScaleSetExtensionListResultIterator) Response() VirtualMachineScaleSetExtensionListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualMachineScaleSetExtensionListResultIterator) Value() VirtualMachineScaleSetExtension { + if !iter.page.NotDone() { + return VirtualMachineScaleSetExtension{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vmsselr VirtualMachineScaleSetExtensionListResult) IsEmpty() bool { + return vmsselr.Value == nil || len(*vmsselr.Value) == 0 +} + +// virtualMachineScaleSetExtensionListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vmsselr VirtualMachineScaleSetExtensionListResult) virtualMachineScaleSetExtensionListResultPreparer() (*http.Request, error) { + if vmsselr.NextLink == nil || len(to.String(vmsselr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vmsselr.NextLink))) +} + +// VirtualMachineScaleSetExtensionListResultPage contains a page of VirtualMachineScaleSetExtension values. +type VirtualMachineScaleSetExtensionListResultPage struct { + fn func(VirtualMachineScaleSetExtensionListResult) (VirtualMachineScaleSetExtensionListResult, error) + vmsselr VirtualMachineScaleSetExtensionListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualMachineScaleSetExtensionListResultPage) Next() error { + next, err := page.fn(page.vmsselr) + if err != nil { + return err + } + page.vmsselr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualMachineScaleSetExtensionListResultPage) NotDone() bool { + return !page.vmsselr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualMachineScaleSetExtensionListResultPage) Response() VirtualMachineScaleSetExtensionListResult { + return page.vmsselr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualMachineScaleSetExtensionListResultPage) Values() []VirtualMachineScaleSetExtension { + if page.vmsselr.IsEmpty() { + return nil + } + return *page.vmsselr.Value +} + +// VirtualMachineScaleSetExtensionProfile describes a virtual machine scale set extension profile. +type VirtualMachineScaleSetExtensionProfile struct { + // Extensions - The virtual machine scale set child extension resources. + Extensions *[]VirtualMachineScaleSetExtension `json:"extensions,omitempty"` +} + +// VirtualMachineScaleSetExtensionProperties describes the properties of a Virtual Machine Scale Set Extension. +type VirtualMachineScaleSetExtensionProperties struct { + // ForceUpdateTag - If a value is provided and is different from the previous value, the extension handler will be forced to update even if the extension configuration has not changed. + ForceUpdateTag *string `json:"forceUpdateTag,omitempty"` + // Publisher - The name of the extension handler publisher. + Publisher *string `json:"publisher,omitempty"` + // Type - Specifies the type of the extension; an example is "CustomScriptExtension". 
+ Type *string `json:"type,omitempty"` + // TypeHandlerVersion - Specifies the version of the script handler. + TypeHandlerVersion *string `json:"typeHandlerVersion,omitempty"` + // AutoUpgradeMinorVersion - Indicates whether the extension should use a newer minor version if one is available at deployment time. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. + AutoUpgradeMinorVersion *bool `json:"autoUpgradeMinorVersion,omitempty"` + // Settings - Json formatted public settings for the extension. + Settings interface{} `json:"settings,omitempty"` + // ProtectedSettings - The extension can contain either protectedSettings or protectedSettingsFromKeyVault or no protected settings at all. + ProtectedSettings interface{} `json:"protectedSettings,omitempty"` + // ProvisioningState - The provisioning state, which only appears in the response. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VirtualMachineScaleSetExtensionsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of +// a long-running operation. +type VirtualMachineScaleSetExtensionsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetExtensionsCreateOrUpdateFuture) Result(client VirtualMachineScaleSetExtensionsClient) (vmsse VirtualMachineScaleSetExtension, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetExtensionsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vmsse.Response.Response, err = future.GetResult(sender); err == nil && vmsse.Response.Response.StatusCode != http.StatusNoContent { + vmsse, err = client.CreateOrUpdateResponder(vmsse.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsCreateOrUpdateFuture", "Result", vmsse.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetExtensionsDeleteFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetExtensionsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetExtensionsDeleteFuture) Result(client VirtualMachineScaleSetExtensionsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetExtensionsDeleteFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeleteResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsDeleteFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetIdentity identity for the virtual machine scale set. +type VirtualMachineScaleSetIdentity struct { + // PrincipalID - The principal id of virtual machine scale set identity. This property will only be provided for a system assigned identity. + PrincipalID *string `json:"principalId,omitempty"` + // TenantID - The tenant id associated with the virtual machine scale set. This property will only be provided for a system assigned identity. + TenantID *string `json:"tenantId,omitempty"` + // Type - The type of identity used for the virtual machine scale set. The type 'SystemAssigned, UserAssigned' includes both an implicitly created identity and a set of user assigned identities. The type 'None' will remove any identities from the virtual machine scale set. Possible values include: 'ResourceIdentityTypeSystemAssigned', 'ResourceIdentityTypeUserAssigned', 'ResourceIdentityTypeSystemAssignedUserAssigned', 'ResourceIdentityTypeNone' + Type ResourceIdentityType `json:"type,omitempty"` + // IdentityIds - The list of user identities associated with the virtual machine scale set. The user identity references will be ARM resource ids in the form: '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/identities/{identityName}'. + IdentityIds *[]string `json:"identityIds,omitempty"` +} + +// VirtualMachineScaleSetInstanceView the instance view of a virtual machine scale set. +type VirtualMachineScaleSetInstanceView struct { + autorest.Response `json:"-"` + // VirtualMachine - The instance view status summary for the virtual machine scale set. + VirtualMachine *VirtualMachineScaleSetInstanceViewStatusesSummary `json:"virtualMachine,omitempty"` + // Extensions - The extensions information. + Extensions *[]VirtualMachineScaleSetVMExtensionsSummary `json:"extensions,omitempty"` + // Statuses - The resource status information. + Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` +} + +// VirtualMachineScaleSetInstanceViewStatusesSummary instance view statuses summary for virtual machines of a +// virtual machine scale set. +type VirtualMachineScaleSetInstanceViewStatusesSummary struct { + // StatusesSummary - The extensions information. + StatusesSummary *[]VirtualMachineStatusCodeCount `json:"statusesSummary,omitempty"` +} + +// VirtualMachineScaleSetIPConfiguration describes a virtual machine scale set network profile's IP configuration. 
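
The identity block above is also where user-assigned identities are attached to a scale set; each IdentityIds entry is a full ARM resource id in the form quoted in the comment. A sketch with placeholder subscription, resource group, and identity names:

```
package example

import (
	// API-version segment assumed; use the compute package vendored by this PR.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute"
)

// userAssignedIdentity builds a scale set identity referencing a single
// user-assigned identity; all ids and names below are placeholders.
func userAssignedIdentity() compute.VirtualMachineScaleSetIdentity {
	id := "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.ManagedIdentity/identities/example-identity"
	return compute.VirtualMachineScaleSetIdentity{
		// Constant name taken from the Type comment above.
		Type:        compute.ResourceIdentityTypeUserAssigned,
		IdentityIds: &[]string{id},
	}
}
```
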
+type VirtualMachineScaleSetIPConfiguration struct { + // Name - The IP configuration name. + Name *string `json:"name,omitempty"` + *VirtualMachineScaleSetIPConfigurationProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetIPConfiguration. +func (vmssic VirtualMachineScaleSetIPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmssic.Name != nil { + objectMap["name"] = vmssic.Name + } + if vmssic.VirtualMachineScaleSetIPConfigurationProperties != nil { + objectMap["properties"] = vmssic.VirtualMachineScaleSetIPConfigurationProperties + } + if vmssic.ID != nil { + objectMap["id"] = vmssic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetIPConfiguration struct. +func (vmssic *VirtualMachineScaleSetIPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmssic.Name = &name + } + case "properties": + if v != nil { + var virtualMachineScaleSetIPConfigurationProperties VirtualMachineScaleSetIPConfigurationProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetIPConfigurationProperties) + if err != nil { + return err + } + vmssic.VirtualMachineScaleSetIPConfigurationProperties = &virtualMachineScaleSetIPConfigurationProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmssic.ID = &ID + } + } + } + + return nil +} + +// VirtualMachineScaleSetIPConfigurationProperties describes a virtual machine scale set network profile's IP +// configuration properties. +type VirtualMachineScaleSetIPConfigurationProperties struct { + // Subnet - Specifies the identifier of the subnet. + Subnet *APIEntityReference `json:"subnet,omitempty"` + // Primary - Specifies the primary network interface in case the virtual machine has more than 1 network interface. + Primary *bool `json:"primary,omitempty"` + // PublicIPAddressConfiguration - The publicIPAddressConfiguration. + PublicIPAddressConfiguration *VirtualMachineScaleSetPublicIPAddressConfiguration `json:"publicIPAddressConfiguration,omitempty"` + // PrivateIPAddressVersion - Available from Api-Version 2017-03-30 onwards, it represents whether the specific ipconfiguration is IPv4 or IPv6. Default is taken as IPv4. Possible values are: 'IPv4' and 'IPv6'. Possible values include: 'IPv4', 'IPv6' + PrivateIPAddressVersion IPVersion `json:"privateIPAddressVersion,omitempty"` + // ApplicationGatewayBackendAddressPools - Specifies an array of references to backend address pools of application gateways. A scale set can reference backend address pools of multiple application gateways. Multiple scale sets cannot use the same application gateway. + ApplicationGatewayBackendAddressPools *[]SubResource `json:"applicationGatewayBackendAddressPools,omitempty"` + // LoadBalancerBackendAddressPools - Specifies an array of references to backend address pools of load balancers. A scale set can reference backend address pools of one public and one internal load balancer. Multiple scale sets cannot use the same load balancer. 
+ LoadBalancerBackendAddressPools *[]SubResource `json:"loadBalancerBackendAddressPools,omitempty"` + // LoadBalancerInboundNatPools - Specifies an array of references to inbound Nat pools of the load balancers. A scale set can reference inbound nat pools of one public and one internal load balancer. Multiple scale sets cannot use the same load balancer + LoadBalancerInboundNatPools *[]SubResource `json:"loadBalancerInboundNatPools,omitempty"` +} + +// VirtualMachineScaleSetListOSUpgradeHistory list of Virtual Machine Scale Set OS Upgrade History operation +// response. +type VirtualMachineScaleSetListOSUpgradeHistory struct { + autorest.Response `json:"-"` + // Value - The list of OS upgrades performed on the virtual machine scale set. + Value *[]UpgradeOperationHistoricalStatusInfo `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of OS Upgrade History. Call ListNext() with this to fetch the next page of history of upgrades. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualMachineScaleSetListOSUpgradeHistoryIterator provides access to a complete listing of +// UpgradeOperationHistoricalStatusInfo values. +type VirtualMachineScaleSetListOSUpgradeHistoryIterator struct { + i int + page VirtualMachineScaleSetListOSUpgradeHistoryPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualMachineScaleSetListOSUpgradeHistoryIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualMachineScaleSetListOSUpgradeHistoryIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualMachineScaleSetListOSUpgradeHistoryIterator) Response() VirtualMachineScaleSetListOSUpgradeHistory { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualMachineScaleSetListOSUpgradeHistoryIterator) Value() UpgradeOperationHistoricalStatusInfo { + if !iter.page.NotDone() { + return UpgradeOperationHistoricalStatusInfo{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vmsslouh VirtualMachineScaleSetListOSUpgradeHistory) IsEmpty() bool { + return vmsslouh.Value == nil || len(*vmsslouh.Value) == 0 +} + +// virtualMachineScaleSetListOSUpgradeHistoryPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vmsslouh VirtualMachineScaleSetListOSUpgradeHistory) virtualMachineScaleSetListOSUpgradeHistoryPreparer() (*http.Request, error) { + if vmsslouh.NextLink == nil || len(to.String(vmsslouh.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vmsslouh.NextLink))) +} + +// VirtualMachineScaleSetListOSUpgradeHistoryPage contains a page of UpgradeOperationHistoricalStatusInfo values. 
+type VirtualMachineScaleSetListOSUpgradeHistoryPage struct { + fn func(VirtualMachineScaleSetListOSUpgradeHistory) (VirtualMachineScaleSetListOSUpgradeHistory, error) + vmsslouh VirtualMachineScaleSetListOSUpgradeHistory +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualMachineScaleSetListOSUpgradeHistoryPage) Next() error { + next, err := page.fn(page.vmsslouh) + if err != nil { + return err + } + page.vmsslouh = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualMachineScaleSetListOSUpgradeHistoryPage) NotDone() bool { + return !page.vmsslouh.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualMachineScaleSetListOSUpgradeHistoryPage) Response() VirtualMachineScaleSetListOSUpgradeHistory { + return page.vmsslouh +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualMachineScaleSetListOSUpgradeHistoryPage) Values() []UpgradeOperationHistoricalStatusInfo { + if page.vmsslouh.IsEmpty() { + return nil + } + return *page.vmsslouh.Value +} + +// VirtualMachineScaleSetListResult the List Virtual Machine operation response. +type VirtualMachineScaleSetListResult struct { + autorest.Response `json:"-"` + // Value - The list of virtual machine scale sets. + Value *[]VirtualMachineScaleSet `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of Virtual Machine Scale Sets. Call ListNext() with this to fetch the next page of VMSS. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualMachineScaleSetListResultIterator provides access to a complete listing of VirtualMachineScaleSet values. +type VirtualMachineScaleSetListResultIterator struct { + i int + page VirtualMachineScaleSetListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualMachineScaleSetListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualMachineScaleSetListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualMachineScaleSetListResultIterator) Response() VirtualMachineScaleSetListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualMachineScaleSetListResultIterator) Value() VirtualMachineScaleSet { + if !iter.page.NotDone() { + return VirtualMachineScaleSet{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vmsslr VirtualMachineScaleSetListResult) IsEmpty() bool { + return vmsslr.Value == nil || len(*vmsslr.Value) == 0 +} + +// virtualMachineScaleSetListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
+func (vmsslr VirtualMachineScaleSetListResult) virtualMachineScaleSetListResultPreparer() (*http.Request, error) { + if vmsslr.NextLink == nil || len(to.String(vmsslr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vmsslr.NextLink))) +} + +// VirtualMachineScaleSetListResultPage contains a page of VirtualMachineScaleSet values. +type VirtualMachineScaleSetListResultPage struct { + fn func(VirtualMachineScaleSetListResult) (VirtualMachineScaleSetListResult, error) + vmsslr VirtualMachineScaleSetListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualMachineScaleSetListResultPage) Next() error { + next, err := page.fn(page.vmsslr) + if err != nil { + return err + } + page.vmsslr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualMachineScaleSetListResultPage) NotDone() bool { + return !page.vmsslr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualMachineScaleSetListResultPage) Response() VirtualMachineScaleSetListResult { + return page.vmsslr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualMachineScaleSetListResultPage) Values() []VirtualMachineScaleSet { + if page.vmsslr.IsEmpty() { + return nil + } + return *page.vmsslr.Value +} + +// VirtualMachineScaleSetListSkusResult the Virtual Machine Scale Set List Skus operation response. +type VirtualMachineScaleSetListSkusResult struct { + autorest.Response `json:"-"` + // Value - The list of skus available for the virtual machine scale set. + Value *[]VirtualMachineScaleSetSku `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of Virtual Machine Scale Set Skus. Call ListNext() with this to fetch the next page of VMSS Skus. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualMachineScaleSetListSkusResultIterator provides access to a complete listing of VirtualMachineScaleSetSku +// values. +type VirtualMachineScaleSetListSkusResultIterator struct { + i int + page VirtualMachineScaleSetListSkusResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualMachineScaleSetListSkusResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualMachineScaleSetListSkusResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualMachineScaleSetListSkusResultIterator) Response() VirtualMachineScaleSetListSkusResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. 
+func (iter VirtualMachineScaleSetListSkusResultIterator) Value() VirtualMachineScaleSetSku { + if !iter.page.NotDone() { + return VirtualMachineScaleSetSku{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vmsslsr VirtualMachineScaleSetListSkusResult) IsEmpty() bool { + return vmsslsr.Value == nil || len(*vmsslsr.Value) == 0 +} + +// virtualMachineScaleSetListSkusResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vmsslsr VirtualMachineScaleSetListSkusResult) virtualMachineScaleSetListSkusResultPreparer() (*http.Request, error) { + if vmsslsr.NextLink == nil || len(to.String(vmsslsr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vmsslsr.NextLink))) +} + +// VirtualMachineScaleSetListSkusResultPage contains a page of VirtualMachineScaleSetSku values. +type VirtualMachineScaleSetListSkusResultPage struct { + fn func(VirtualMachineScaleSetListSkusResult) (VirtualMachineScaleSetListSkusResult, error) + vmsslsr VirtualMachineScaleSetListSkusResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualMachineScaleSetListSkusResultPage) Next() error { + next, err := page.fn(page.vmsslsr) + if err != nil { + return err + } + page.vmsslsr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualMachineScaleSetListSkusResultPage) NotDone() bool { + return !page.vmsslsr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualMachineScaleSetListSkusResultPage) Response() VirtualMachineScaleSetListSkusResult { + return page.vmsslsr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualMachineScaleSetListSkusResultPage) Values() []VirtualMachineScaleSetSku { + if page.vmsslsr.IsEmpty() { + return nil + } + return *page.vmsslsr.Value +} + +// VirtualMachineScaleSetListWithLinkResult the List Virtual Machine operation response. +type VirtualMachineScaleSetListWithLinkResult struct { + autorest.Response `json:"-"` + // Value - The list of virtual machine scale sets. + Value *[]VirtualMachineScaleSet `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of Virtual Machine Scale Sets. Call ListNext() with this to fetch the next page of Virtual Machine Scale Sets. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualMachineScaleSetListWithLinkResultIterator provides access to a complete listing of VirtualMachineScaleSet +// values. +type VirtualMachineScaleSetListWithLinkResultIterator struct { + i int + page VirtualMachineScaleSetListWithLinkResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualMachineScaleSetListWithLinkResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. 
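The generated list types above come in Page/Iterator pairs, and `NotDone`, `Value`, and `Next` are all a consumer needs to walk every page. A minimal consumption sketch, assuming the iterator has already been obtained from an authorized client (for example via one of its `List...Complete` helpers, which are not part of this file) and that the import path/API version below matches the vendored SDK:

```
package main

import (
	"fmt"
	"log"

	// Import path/API version is an assumption; match it to the vendored SDK.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)

// printScaleSets walks every page behind a VirtualMachineScaleSetListResultIterator.
// Obtaining the iterator (e.g. from a List...Complete helper on an authorized
// client) is assumed and not shown.
func printScaleSets(iter compute.VirtualMachineScaleSetListResultIterator) {
	for iter.NotDone() {
		vmss := iter.Value() // current element of the current page
		if vmss.Name != nil {
			fmt.Println(*vmss.Name)
		}
		if err := iter.Next(); err != nil { // transparently fetches the next page when needed
			log.Fatalf("paging failed: %v", err)
		}
	}
}

func main() {} // client and iterator construction omitted from this sketch
```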
+func (iter VirtualMachineScaleSetListWithLinkResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualMachineScaleSetListWithLinkResultIterator) Response() VirtualMachineScaleSetListWithLinkResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualMachineScaleSetListWithLinkResultIterator) Value() VirtualMachineScaleSet { + if !iter.page.NotDone() { + return VirtualMachineScaleSet{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vmsslwlr VirtualMachineScaleSetListWithLinkResult) IsEmpty() bool { + return vmsslwlr.Value == nil || len(*vmsslwlr.Value) == 0 +} + +// virtualMachineScaleSetListWithLinkResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vmsslwlr VirtualMachineScaleSetListWithLinkResult) virtualMachineScaleSetListWithLinkResultPreparer() (*http.Request, error) { + if vmsslwlr.NextLink == nil || len(to.String(vmsslwlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vmsslwlr.NextLink))) +} + +// VirtualMachineScaleSetListWithLinkResultPage contains a page of VirtualMachineScaleSet values. +type VirtualMachineScaleSetListWithLinkResultPage struct { + fn func(VirtualMachineScaleSetListWithLinkResult) (VirtualMachineScaleSetListWithLinkResult, error) + vmsslwlr VirtualMachineScaleSetListWithLinkResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualMachineScaleSetListWithLinkResultPage) Next() error { + next, err := page.fn(page.vmsslwlr) + if err != nil { + return err + } + page.vmsslwlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualMachineScaleSetListWithLinkResultPage) NotDone() bool { + return !page.vmsslwlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualMachineScaleSetListWithLinkResultPage) Response() VirtualMachineScaleSetListWithLinkResult { + return page.vmsslwlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualMachineScaleSetListWithLinkResultPage) Values() []VirtualMachineScaleSet { + if page.vmsslwlr.IsEmpty() { + return nil + } + return *page.vmsslwlr.Value +} + +// VirtualMachineScaleSetManagedDiskParameters describes the parameters of a ScaleSet managed disk. +type VirtualMachineScaleSetManagedDiskParameters struct { + // StorageAccountType - Specifies the storage account type for the managed disk. Possible values are: Standard_LRS or Premium_LRS. Possible values include: 'StorageAccountTypesStandardLRS', 'StorageAccountTypesPremiumLRS' + StorageAccountType StorageAccountTypes `json:"storageAccountType,omitempty"` +} + +// VirtualMachineScaleSetNetworkConfiguration describes a virtual machine scale set network profile's network +// configurations. +type VirtualMachineScaleSetNetworkConfiguration struct { + // Name - The network configuration name. 
+ Name *string `json:"name,omitempty"` + *VirtualMachineScaleSetNetworkConfigurationProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetNetworkConfiguration. +func (vmssnc VirtualMachineScaleSetNetworkConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmssnc.Name != nil { + objectMap["name"] = vmssnc.Name + } + if vmssnc.VirtualMachineScaleSetNetworkConfigurationProperties != nil { + objectMap["properties"] = vmssnc.VirtualMachineScaleSetNetworkConfigurationProperties + } + if vmssnc.ID != nil { + objectMap["id"] = vmssnc.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetNetworkConfiguration struct. +func (vmssnc *VirtualMachineScaleSetNetworkConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmssnc.Name = &name + } + case "properties": + if v != nil { + var virtualMachineScaleSetNetworkConfigurationProperties VirtualMachineScaleSetNetworkConfigurationProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetNetworkConfigurationProperties) + if err != nil { + return err + } + vmssnc.VirtualMachineScaleSetNetworkConfigurationProperties = &virtualMachineScaleSetNetworkConfigurationProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmssnc.ID = &ID + } + } + } + + return nil +} + +// VirtualMachineScaleSetNetworkConfigurationDNSSettings describes a virtual machines scale sets network +// configuration's DNS settings. +type VirtualMachineScaleSetNetworkConfigurationDNSSettings struct { + // DNSServers - List of DNS servers IP addresses + DNSServers *[]string `json:"dnsServers,omitempty"` +} + +// VirtualMachineScaleSetNetworkConfigurationProperties describes a virtual machine scale set network profile's IP +// configuration. +type VirtualMachineScaleSetNetworkConfigurationProperties struct { + // Primary - Specifies the primary network interface in case the virtual machine has more than 1 network interface. + Primary *bool `json:"primary,omitempty"` + // EnableAcceleratedNetworking - Specifies whether the network interface is accelerated networking-enabled. + EnableAcceleratedNetworking *bool `json:"enableAcceleratedNetworking,omitempty"` + // NetworkSecurityGroup - The network security group. + NetworkSecurityGroup *SubResource `json:"networkSecurityGroup,omitempty"` + // DNSSettings - The dns settings to be applied on the network interfaces. + DNSSettings *VirtualMachineScaleSetNetworkConfigurationDNSSettings `json:"dnsSettings,omitempty"` + // IPConfigurations - Specifies the IP configurations of the network interface. + IPConfigurations *[]VirtualMachineScaleSetIPConfiguration `json:"ipConfigurations,omitempty"` + // EnableIPForwarding - Whether IP forwarding enabled on this NIC. + EnableIPForwarding *bool `json:"enableIPForwarding,omitempty"` +} + +// VirtualMachineScaleSetNetworkProfile describes a virtual machine scale set network profile. +type VirtualMachineScaleSetNetworkProfile struct { + // HealthProbe - A reference to a load balancer probe used to determine the health of an instance in the virtual machine scale set. 
The reference will be in the form: '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/probes/{probeName}'. + HealthProbe *APIEntityReference `json:"healthProbe,omitempty"` + // NetworkInterfaceConfigurations - The list of network configurations. + NetworkInterfaceConfigurations *[]VirtualMachineScaleSetNetworkConfiguration `json:"networkInterfaceConfigurations,omitempty"` +} + +// VirtualMachineScaleSetOSDisk describes a virtual machine scale set operating system disk. +type VirtualMachineScaleSetOSDisk struct { + // Name - The disk name. + Name *string `json:"name,omitempty"` + // Caching - Specifies the caching requirements.

    Possible values are: **None**, **ReadOnly**, **ReadWrite**.
    Default: **None for Standard storage. ReadOnly for Premium storage**. Possible values include: 'CachingTypesNone', 'CachingTypesReadOnly', 'CachingTypesReadWrite' + Caching CachingTypes `json:"caching,omitempty"` + // WriteAcceleratorEnabled - Specifies whether writeAccelerator should be enabled or disabled on the disk. + WriteAcceleratorEnabled *bool `json:"writeAcceleratorEnabled,omitempty"` + // CreateOption - Specifies how the virtual machines in the scale set should be created.

    The only allowed value is: **FromImage** – This value is used when you are using an image to create the virtual machine. If you are using a platform image, you also use the imageReference element described above. If you are using a marketplace image, you also use the plan element previously described. Possible values include: 'DiskCreateOptionTypesFromImage', 'DiskCreateOptionTypesEmpty', 'DiskCreateOptionTypesAttach' + CreateOption DiskCreateOptionTypes `json:"createOption,omitempty"` + // OsType - This property allows you to specify the type of the OS that is included in the disk if creating a VM from user-image or a specialized VHD.

    Possible values are: **Windows**,
    **Linux**. Possible values include: 'Windows', 'Linux' + OsType OperatingSystemTypes `json:"osType,omitempty"` + // Image - Specifies information about the unmanaged user image to base the scale set on. + Image *VirtualHardDisk `json:"image,omitempty"` + // VhdContainers - Specifies the container urls that are used to store operating system disks for the scale set. + VhdContainers *[]string `json:"vhdContainers,omitempty"` + // ManagedDisk - The managed disk parameters. + ManagedDisk *VirtualMachineScaleSetManagedDiskParameters `json:"managedDisk,omitempty"` +} + +// VirtualMachineScaleSetOSProfile describes a virtual machine scale set OS profile. +type VirtualMachineScaleSetOSProfile struct { + // ComputerNamePrefix - Specifies the computer name prefix for all of the virtual machines in the scale set. Computer name prefixes must be 1 to 15 characters long. + ComputerNamePrefix *string `json:"computerNamePrefix,omitempty"` + // AdminUsername - Specifies the name of the administrator account.

    **Windows-only restriction:** Cannot end in "."
    **Disallowed values:** "administrator", "admin", "user", "user1", "test", "user2", "test1", "user3", "admin1", "1", "123", "a", "actuser", "adm", "admin2", "aspnet", "backup", "console", "david", "guest", "john", "owner", "root", "server", "sql", "support", "support_388945a0", "sys", "test2", "test3", "user4", "user5".
    **Minimum-length (Linux):** 1 character. **Max-length (Linux):** 64 characters. **Max-length (Windows):** 20 characters.
  • For root access to the Linux VM, see [Using root privileges on Linux virtual machines in Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-use-root-privileges?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)
  • For a list of built-in system users on Linux that should not be used in this field, see [Selecting User Names for Linux on Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-usernames?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) + AdminUsername *string `json:"adminUsername,omitempty"` + // AdminPassword - Specifies the password of the administrator account.

    **Minimum-length (Windows):** 8 characters. **Minimum-length (Linux):** 6 characters. **Max-length (Windows):** 123 characters. **Max-length (Linux):** 72 characters.
    **Complexity requirements:** 3 out of 4 conditions below need to be fulfilled:
    Has lower characters
    Has upper characters
    Has a digit
    Has a special character (Regex match [\W_])
    **Disallowed values:** "abc@123", "P@$$w0rd", "P@ssw0rd", "P@ssword123", "Pa$$word", "pass@word1", "Password!", "Password1", "Password22", "iloveyou!"
    For resetting the password, see [How to reset the Remote Desktop service or its login password in a Windows VM](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-reset-rdp?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)
    For resetting root password, see [Manage users, SSH, and check or repair disks on Azure Linux VMs using the VMAccess Extension](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-using-vmaccess-extension?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json#reset-root-password) + AdminPassword *string `json:"adminPassword,omitempty"` + // CustomData - Specifies a base-64 encoded string of custom data. The base-64 encoded string is decoded to a binary array that is saved as a file on the Virtual Machine. The maximum length of the binary array is 65535 bytes.

    For using cloud-init for your VM, see [Using cloud-init to customize a Linux VM during creation](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-using-cloud-init?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json) + CustomData *string `json:"customData,omitempty"` + // WindowsConfiguration - Specifies Windows operating system settings on the virtual machine. + WindowsConfiguration *WindowsConfiguration `json:"windowsConfiguration,omitempty"` + // LinuxConfiguration - Specifies the Linux operating system settings on the virtual machine.

    For a list of supported Linux distributions, see [Linux on Azure-Endorsed Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-endorsed-distros?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json)

    For running non-endorsed distributions, see [Information for Non-Endorsed Distributions](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-linux-create-upload-generic?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). + LinuxConfiguration *LinuxConfiguration `json:"linuxConfiguration,omitempty"` + // Secrets - Specifies set of certificates that should be installed onto the virtual machines in the scale set. + Secrets *[]VaultSecretGroup `json:"secrets,omitempty"` +} + +// VirtualMachineScaleSetProperties describes the properties of a Virtual Machine Scale Set. +type VirtualMachineScaleSetProperties struct { + // UpgradePolicy - The upgrade policy. + UpgradePolicy *UpgradePolicy `json:"upgradePolicy,omitempty"` + // VirtualMachineProfile - The virtual machine profile. + VirtualMachineProfile *VirtualMachineScaleSetVMProfile `json:"virtualMachineProfile,omitempty"` + // ProvisioningState - The provisioning state, which only appears in the response. + ProvisioningState *string `json:"provisioningState,omitempty"` + // Overprovision - Specifies whether the Virtual Machine Scale Set should be overprovisioned. + Overprovision *bool `json:"overprovision,omitempty"` + // UniqueID - Specifies the ID which uniquely identifies a Virtual Machine Scale Set. + UniqueID *string `json:"uniqueId,omitempty"` + // SinglePlacementGroup - When true this limits the scale set to a single placement group, of max size 100 virtual machines. + SinglePlacementGroup *bool `json:"singlePlacementGroup,omitempty"` + // ZoneBalance - Whether to force stictly even Virtual Machine distribution cross x-zones in case there is zone outage. + ZoneBalance *bool `json:"zoneBalance,omitempty"` + // PlatformFaultDomainCount - Fault Domain count for each placement group. + PlatformFaultDomainCount *int32 `json:"platformFaultDomainCount,omitempty"` +} + +// VirtualMachineScaleSetPublicIPAddressConfiguration describes a virtual machines scale set IP Configuration's +// PublicIPAddress configuration +type VirtualMachineScaleSetPublicIPAddressConfiguration struct { + // Name - The publicIP address configuration name. + Name *string `json:"name,omitempty"` + *VirtualMachineScaleSetPublicIPAddressConfigurationProperties `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetPublicIPAddressConfiguration. +func (vmsspiac VirtualMachineScaleSetPublicIPAddressConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmsspiac.Name != nil { + objectMap["name"] = vmsspiac.Name + } + if vmsspiac.VirtualMachineScaleSetPublicIPAddressConfigurationProperties != nil { + objectMap["properties"] = vmsspiac.VirtualMachineScaleSetPublicIPAddressConfigurationProperties + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetPublicIPAddressConfiguration struct. 
+func (vmsspiac *VirtualMachineScaleSetPublicIPAddressConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmsspiac.Name = &name + } + case "properties": + if v != nil { + var virtualMachineScaleSetPublicIPAddressConfigurationProperties VirtualMachineScaleSetPublicIPAddressConfigurationProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetPublicIPAddressConfigurationProperties) + if err != nil { + return err + } + vmsspiac.VirtualMachineScaleSetPublicIPAddressConfigurationProperties = &virtualMachineScaleSetPublicIPAddressConfigurationProperties + } + } + } + + return nil +} + +// VirtualMachineScaleSetPublicIPAddressConfigurationDNSSettings describes a virtual machines scale sets network +// configuration's DNS settings. +type VirtualMachineScaleSetPublicIPAddressConfigurationDNSSettings struct { + // DomainNameLabel - The Domain name label.The concatenation of the domain name label and vm index will be the domain name labels of the PublicIPAddress resources that will be created + DomainNameLabel *string `json:"domainNameLabel,omitempty"` +} + +// VirtualMachineScaleSetPublicIPAddressConfigurationProperties describes a virtual machines scale set IP +// Configuration's PublicIPAddress configuration +type VirtualMachineScaleSetPublicIPAddressConfigurationProperties struct { + // IdleTimeoutInMinutes - The idle timeout of the public IP address. + IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` + // DNSSettings - The dns settings to be applied on the publicIP addresses . + DNSSettings *VirtualMachineScaleSetPublicIPAddressConfigurationDNSSettings `json:"dnsSettings,omitempty"` +} + +// VirtualMachineScaleSetRollingUpgradesCancelFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetRollingUpgradesCancelFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetRollingUpgradesCancelFuture) Result(client VirtualMachineScaleSetRollingUpgradesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesCancelFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetRollingUpgradesCancelFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.CancelResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesCancelFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture an abstraction for monitoring and retrieving the +// results of a long-running operation. 
+type VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture) Result(client VirtualMachineScaleSetRollingUpgradesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.StartOSUpgradeResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetsCreateOrUpdateFuture) Result(client VirtualMachineScaleSetsClient) (vmss VirtualMachineScaleSet, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vmss.Response.Response, err = future.GetResult(sender); err == nil && vmss.Response.Response.StatusCode != http.StatusNoContent { + vmss, err = client.CreateOrUpdateResponder(vmss.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsCreateOrUpdateFuture", "Result", vmss.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsDeallocateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetsDeallocateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
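The `*Future` types in this vendored file all share the `Done`/`Result` long-running-operation shape. A minimal consumption sketch, assuming an already-authorized client, a future returned by `CreateOrUpdate`, a ten-second polling interval chosen arbitrarily, and the import path/API version below as a guess rather than a fact from this diff:

```
package main

import (
	"fmt"
	"log"
	"time"

	// Import path/API version is an assumption; match it to the vendored SDK.
	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
)

// waitForScaleSet drains the Done/Result pattern shown in this file. The
// authorized client and the future (e.g. returned by CreateOrUpdate) are
// assumed to exist already.
func waitForScaleSet(client compute.VirtualMachineScaleSetsClient, future compute.VirtualMachineScaleSetsCreateOrUpdateFuture) {
	for {
		done, err := future.Done(client) // each call probes the operation once
		if err != nil {
			log.Fatalf("polling failed: %v", err)
		}
		if done {
			break
		}
		time.Sleep(10 * time.Second) // arbitrary polling interval for the sketch
	}
	vmss, err := future.Result(client) // decodes the final VirtualMachineScaleSet
	if err != nil {
		log.Fatalf("operation failed: %v", err)
	}
	if vmss.Name != nil {
		fmt.Printf("provisioned scale set: %s\n", *vmss.Name)
	}
}

func main() {} // client and future construction omitted from this sketch
```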
+func (future *VirtualMachineScaleSetsDeallocateFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsDeallocateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsDeallocateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeallocateResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsDeallocateFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetsDeleteFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsDeleteFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeleteResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsDeleteFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsDeleteInstancesFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetsDeleteInstancesFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetsDeleteInstancesFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsDeleteInstancesFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsDeleteInstancesFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeleteInstancesResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsDeleteInstancesFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetSku describes an available virtual machine scale set sku. +type VirtualMachineScaleSetSku struct { + // ResourceType - The type of resource the sku applies to. + ResourceType *string `json:"resourceType,omitempty"` + // Sku - The Sku. + Sku *Sku `json:"sku,omitempty"` + // Capacity - Specifies the number of virtual machines in the scale set. + Capacity *VirtualMachineScaleSetSkuCapacity `json:"capacity,omitempty"` +} + +// VirtualMachineScaleSetSkuCapacity describes scaling information of a sku. +type VirtualMachineScaleSetSkuCapacity struct { + // Minimum - The minimum capacity. + Minimum *int64 `json:"minimum,omitempty"` + // Maximum - The maximum capacity that can be set. + Maximum *int64 `json:"maximum,omitempty"` + // DefaultCapacity - The default capacity. + DefaultCapacity *int64 `json:"defaultCapacity,omitempty"` + // ScaleType - The scale type applicable to the sku. Possible values include: 'VirtualMachineScaleSetSkuScaleTypeAutomatic', 'VirtualMachineScaleSetSkuScaleTypeNone' + ScaleType VirtualMachineScaleSetSkuScaleType `json:"scaleType,omitempty"` +} + +// VirtualMachineScaleSetsPerformMaintenanceFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetsPerformMaintenanceFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetsPerformMaintenanceFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsPerformMaintenanceFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsPerformMaintenanceFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.PerformMaintenanceResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsPerformMaintenanceFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsPowerOffFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetsPowerOffFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetsPowerOffFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsPowerOffFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsPowerOffFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.PowerOffResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsPowerOffFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsRedeployFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetsRedeployFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetsRedeployFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsRedeployFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsRedeployFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.RedeployResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsRedeployFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsReimageAllFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetsReimageAllFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetsReimageAllFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsReimageAllFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsReimageAllFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.ReimageAllResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsReimageAllFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsReimageFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetsReimageFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetsReimageFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsReimageFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsReimageFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.ReimageResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsReimageFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsRestartFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetsRestartFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetsRestartFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsRestartFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsRestartFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.RestartResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsRestartFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsStartFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetsStartFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetsStartFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsStartFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsStartFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.StartResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsStartFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetStorageProfile describes a virtual machine scale set storage profile. +type VirtualMachineScaleSetStorageProfile struct { + // ImageReference - Specifies information about the image to use. You can specify information about platform images, marketplace images, or virtual machine images. This element is required when you want to use a platform image, marketplace image, or virtual machine image, but is not used in other creation operations. + ImageReference *ImageReference `json:"imageReference,omitempty"` + // OsDisk - Specifies information about the operating system disk used by the virtual machines in the scale set.

    For more information about disks, see [About disks and VHDs for Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). + OsDisk *VirtualMachineScaleSetOSDisk `json:"osDisk,omitempty"` + // DataDisks - Specifies the parameters that are used to add data disks to the virtual machines in the scale set.

    For more information about disks, see [About disks and VHDs for Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-about-disks-vhds?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json). + DataDisks *[]VirtualMachineScaleSetDataDisk `json:"dataDisks,omitempty"` +} + +// VirtualMachineScaleSetsUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetsUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetsUpdateFuture) Result(client VirtualMachineScaleSetsClient) (vmss VirtualMachineScaleSet, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vmss.Response.Response, err = future.GetResult(sender); err == nil && vmss.Response.Response.StatusCode != http.StatusNoContent { + vmss, err = client.UpdateResponder(vmss.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsUpdateFuture", "Result", vmss.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetsUpdateInstancesFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetsUpdateInstancesFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetsUpdateInstancesFuture) Result(client VirtualMachineScaleSetsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsUpdateInstancesFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetsUpdateInstancesFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.UpdateInstancesResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsUpdateInstancesFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetUpdate describes a Virtual Machine Scale Set. +type VirtualMachineScaleSetUpdate struct { + // Sku - The virtual machine scale set sku. + Sku *Sku `json:"sku,omitempty"` + // Plan - The purchase plan when deploying a virtual machine scale set from VM Marketplace images. + Plan *Plan `json:"plan,omitempty"` + *VirtualMachineScaleSetUpdateProperties `json:"properties,omitempty"` + // Identity - The identity of the virtual machine scale set, if configured. 
+ Identity *VirtualMachineScaleSetIdentity `json:"identity,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetUpdate. +func (vmssu VirtualMachineScaleSetUpdate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmssu.Sku != nil { + objectMap["sku"] = vmssu.Sku + } + if vmssu.Plan != nil { + objectMap["plan"] = vmssu.Plan + } + if vmssu.VirtualMachineScaleSetUpdateProperties != nil { + objectMap["properties"] = vmssu.VirtualMachineScaleSetUpdateProperties + } + if vmssu.Identity != nil { + objectMap["identity"] = vmssu.Identity + } + if vmssu.Tags != nil { + objectMap["tags"] = vmssu.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetUpdate struct. +func (vmssu *VirtualMachineScaleSetUpdate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + vmssu.Sku = &sku + } + case "plan": + if v != nil { + var plan Plan + err = json.Unmarshal(*v, &plan) + if err != nil { + return err + } + vmssu.Plan = &plan + } + case "properties": + if v != nil { + var virtualMachineScaleSetUpdateProperties VirtualMachineScaleSetUpdateProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetUpdateProperties) + if err != nil { + return err + } + vmssu.VirtualMachineScaleSetUpdateProperties = &virtualMachineScaleSetUpdateProperties + } + case "identity": + if v != nil { + var identity VirtualMachineScaleSetIdentity + err = json.Unmarshal(*v, &identity) + if err != nil { + return err + } + vmssu.Identity = &identity + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vmssu.Tags = tags + } + } + } + + return nil +} + +// VirtualMachineScaleSetUpdateIPConfiguration describes a virtual machine scale set network profile's IP +// configuration. +type VirtualMachineScaleSetUpdateIPConfiguration struct { + // Name - The IP configuration name. + Name *string `json:"name,omitempty"` + *VirtualMachineScaleSetUpdateIPConfigurationProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetUpdateIPConfiguration. +func (vmssuic VirtualMachineScaleSetUpdateIPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmssuic.Name != nil { + objectMap["name"] = vmssuic.Name + } + if vmssuic.VirtualMachineScaleSetUpdateIPConfigurationProperties != nil { + objectMap["properties"] = vmssuic.VirtualMachineScaleSetUpdateIPConfigurationProperties + } + if vmssuic.ID != nil { + objectMap["id"] = vmssuic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetUpdateIPConfiguration struct. 
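The custom marshaler above only serializes fields that are set, so a sparse VirtualMachineScaleSetUpdate produces an equally sparse request body. Below is a minimal sketch of that round trip; the import path and the `to` pointer helpers are assumptions about how this vendored package is typically consumed, not something established by this diff.

```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute" // assumed vendor path
	"github.com/Azure/go-autorest/autorest/to"
)

func main() {
	// Only Sku and Tags are set; Plan, Identity, and the embedded update
	// properties stay nil and are therefore omitted by the custom marshaler.
	update := compute.VirtualMachineScaleSetUpdate{
		Sku: &compute.Sku{
			Name:     to.StringPtr("Standard_DS2_v2"),
			Capacity: to.Int64Ptr(3),
		},
		Tags: map[string]*string{"environment": to.StringPtr("test")},
	}

	body, err := json.Marshal(update)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // e.g. {"sku":{...},"tags":{"environment":"test"}}
}
```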
+func (vmssuic *VirtualMachineScaleSetUpdateIPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmssuic.Name = &name + } + case "properties": + if v != nil { + var virtualMachineScaleSetUpdateIPConfigurationProperties VirtualMachineScaleSetUpdateIPConfigurationProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetUpdateIPConfigurationProperties) + if err != nil { + return err + } + vmssuic.VirtualMachineScaleSetUpdateIPConfigurationProperties = &virtualMachineScaleSetUpdateIPConfigurationProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmssuic.ID = &ID + } + } + } + + return nil +} + +// VirtualMachineScaleSetUpdateIPConfigurationProperties describes a virtual machine scale set network profile's IP +// configuration properties. +type VirtualMachineScaleSetUpdateIPConfigurationProperties struct { + // Subnet - The subnet. + Subnet *APIEntityReference `json:"subnet,omitempty"` + // Primary - Specifies the primary IP Configuration in case the network interface has more than one IP Configuration. + Primary *bool `json:"primary,omitempty"` + // PublicIPAddressConfiguration - The publicIPAddressConfiguration. + PublicIPAddressConfiguration *VirtualMachineScaleSetUpdatePublicIPAddressConfiguration `json:"publicIPAddressConfiguration,omitempty"` + // PrivateIPAddressVersion - Available from Api-Version 2017-03-30 onwards, it represents whether the specific ipconfiguration is IPv4 or IPv6. Default is taken as IPv4. Possible values are: 'IPv4' and 'IPv6'. Possible values include: 'IPv4', 'IPv6' + PrivateIPAddressVersion IPVersion `json:"privateIPAddressVersion,omitempty"` + // ApplicationGatewayBackendAddressPools - The application gateway backend address pools. + ApplicationGatewayBackendAddressPools *[]SubResource `json:"applicationGatewayBackendAddressPools,omitempty"` + // LoadBalancerBackendAddressPools - The load balancer backend address pools. + LoadBalancerBackendAddressPools *[]SubResource `json:"loadBalancerBackendAddressPools,omitempty"` + // LoadBalancerInboundNatPools - The load balancer inbound nat pools. + LoadBalancerInboundNatPools *[]SubResource `json:"loadBalancerInboundNatPools,omitempty"` +} + +// VirtualMachineScaleSetUpdateNetworkConfiguration describes a virtual machine scale set network profile's network +// configurations. +type VirtualMachineScaleSetUpdateNetworkConfiguration struct { + // Name - The network configuration name. + Name *string `json:"name,omitempty"` + *VirtualMachineScaleSetUpdateNetworkConfigurationProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetUpdateNetworkConfiguration. 
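To show how the update types nest, here is a small, hypothetical construction of a single IP configuration; the subnet ID is a placeholder and the import paths are assumed. Marshaling and unmarshaling it exercises the custom (un)marshalers defined above, which flatten the embedded properties struct into the `properties` JSON key.

```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute" // assumed vendor path
	"github.com/Azure/go-autorest/autorest/to"
)

func main() {
	subnetID := "/subscriptions/.../virtualNetworks/example-vnet/subnets/example-subnet" // placeholder

	ipCfg := compute.VirtualMachineScaleSetUpdateIPConfiguration{
		Name: to.StringPtr("primary-ipcfg"),
		VirtualMachineScaleSetUpdateIPConfigurationProperties: &compute.VirtualMachineScaleSetUpdateIPConfigurationProperties{
			Subnet:                  &compute.APIEntityReference{ID: to.StringPtr(subnetID)},
			Primary:                 to.BoolPtr(true),
			PrivateIPAddressVersion: compute.IPv4,
		},
	}

	// Round-trip through the custom (un)marshalers defined above: the embedded
	// properties struct is written under, and read back from, "properties".
	body, err := json.Marshal(ipCfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))

	var decoded compute.VirtualMachineScaleSetUpdateIPConfiguration
	if err := json.Unmarshal(body, &decoded); err != nil {
		panic(err)
	}
	fmt.Println(*decoded.Primary) // true, promoted from the embedded properties struct
}
```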
+func (vmssunc VirtualMachineScaleSetUpdateNetworkConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmssunc.Name != nil { + objectMap["name"] = vmssunc.Name + } + if vmssunc.VirtualMachineScaleSetUpdateNetworkConfigurationProperties != nil { + objectMap["properties"] = vmssunc.VirtualMachineScaleSetUpdateNetworkConfigurationProperties + } + if vmssunc.ID != nil { + objectMap["id"] = vmssunc.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetUpdateNetworkConfiguration struct. +func (vmssunc *VirtualMachineScaleSetUpdateNetworkConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmssunc.Name = &name + } + case "properties": + if v != nil { + var virtualMachineScaleSetUpdateNetworkConfigurationProperties VirtualMachineScaleSetUpdateNetworkConfigurationProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetUpdateNetworkConfigurationProperties) + if err != nil { + return err + } + vmssunc.VirtualMachineScaleSetUpdateNetworkConfigurationProperties = &virtualMachineScaleSetUpdateNetworkConfigurationProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmssunc.ID = &ID + } + } + } + + return nil +} + +// VirtualMachineScaleSetUpdateNetworkConfigurationProperties describes a virtual machine scale set updatable +// network profile's IP configuration.Use this object for updating network profile's IP Configuration. +type VirtualMachineScaleSetUpdateNetworkConfigurationProperties struct { + // Primary - Whether this is a primary NIC on a virtual machine. + Primary *bool `json:"primary,omitempty"` + // EnableAcceleratedNetworking - Specifies whether the network interface is accelerated networking-enabled. + EnableAcceleratedNetworking *bool `json:"enableAcceleratedNetworking,omitempty"` + // NetworkSecurityGroup - The network security group. + NetworkSecurityGroup *SubResource `json:"networkSecurityGroup,omitempty"` + // DNSSettings - The dns settings to be applied on the network interfaces. + DNSSettings *VirtualMachineScaleSetNetworkConfigurationDNSSettings `json:"dnsSettings,omitempty"` + // IPConfigurations - The virtual machine scale set IP Configuration. + IPConfigurations *[]VirtualMachineScaleSetUpdateIPConfiguration `json:"ipConfigurations,omitempty"` + // EnableIPForwarding - Whether IP forwarding enabled on this NIC. + EnableIPForwarding *bool `json:"enableIPForwarding,omitempty"` +} + +// VirtualMachineScaleSetUpdateNetworkProfile describes a virtual machine scale set network profile. +type VirtualMachineScaleSetUpdateNetworkProfile struct { + // NetworkInterfaceConfigurations - The list of network configurations. + NetworkInterfaceConfigurations *[]VirtualMachineScaleSetUpdateNetworkConfiguration `json:"networkInterfaceConfigurations,omitempty"` +} + +// VirtualMachineScaleSetUpdateOSDisk describes virtual machine scale set operating system disk Update Object. This +// should be used for Updating VMSS OS Disk. +type VirtualMachineScaleSetUpdateOSDisk struct { + // Caching - The caching type. 
Possible values include: 'CachingTypesNone', 'CachingTypesReadOnly', 'CachingTypesReadWrite' + Caching CachingTypes `json:"caching,omitempty"` + // WriteAcceleratorEnabled - Specifies whether writeAccelerator should be enabled or disabled on the disk. + WriteAcceleratorEnabled *bool `json:"writeAcceleratorEnabled,omitempty"` + // Image - The Source User Image VirtualHardDisk. This VirtualHardDisk will be copied before using it to attach to the Virtual Machine. If SourceImage is provided, the destination VirtualHardDisk should not exist. + Image *VirtualHardDisk `json:"image,omitempty"` + // VhdContainers - The list of virtual hard disk container uris. + VhdContainers *[]string `json:"vhdContainers,omitempty"` + // ManagedDisk - The managed disk parameters. + ManagedDisk *VirtualMachineScaleSetManagedDiskParameters `json:"managedDisk,omitempty"` +} + +// VirtualMachineScaleSetUpdateOSProfile describes a virtual machine scale set OS profile. +type VirtualMachineScaleSetUpdateOSProfile struct { + // CustomData - A base-64 encoded string of custom data. + CustomData *string `json:"customData,omitempty"` + // WindowsConfiguration - The Windows Configuration of the OS profile. + WindowsConfiguration *WindowsConfiguration `json:"windowsConfiguration,omitempty"` + // LinuxConfiguration - The Linux Configuration of the OS profile. + LinuxConfiguration *LinuxConfiguration `json:"linuxConfiguration,omitempty"` + // Secrets - The List of certificates for addition to the VM. + Secrets *[]VaultSecretGroup `json:"secrets,omitempty"` +} + +// VirtualMachineScaleSetUpdateProperties describes the properties of a Virtual Machine Scale Set. +type VirtualMachineScaleSetUpdateProperties struct { + // UpgradePolicy - The upgrade policy. + UpgradePolicy *UpgradePolicy `json:"upgradePolicy,omitempty"` + // VirtualMachineProfile - The virtual machine profile. + VirtualMachineProfile *VirtualMachineScaleSetUpdateVMProfile `json:"virtualMachineProfile,omitempty"` + // Overprovision - Specifies whether the Virtual Machine Scale Set should be overprovisioned. + Overprovision *bool `json:"overprovision,omitempty"` + // SinglePlacementGroup - When true this limits the scale set to a single placement group, of max size 100 virtual machines. + SinglePlacementGroup *bool `json:"singlePlacementGroup,omitempty"` +} + +// VirtualMachineScaleSetUpdatePublicIPAddressConfiguration describes a virtual machines scale set IP +// Configuration's PublicIPAddress configuration +type VirtualMachineScaleSetUpdatePublicIPAddressConfiguration struct { + // Name - The publicIP address configuration name. + Name *string `json:"name,omitempty"` + *VirtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetUpdatePublicIPAddressConfiguration. +func (vmssupiac VirtualMachineScaleSetUpdatePublicIPAddressConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmssupiac.Name != nil { + objectMap["name"] = vmssupiac.Name + } + if vmssupiac.VirtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties != nil { + objectMap["properties"] = vmssupiac.VirtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetUpdatePublicIPAddressConfiguration struct. 
+func (vmssupiac *VirtualMachineScaleSetUpdatePublicIPAddressConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmssupiac.Name = &name + } + case "properties": + if v != nil { + var virtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties VirtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties) + if err != nil { + return err + } + vmssupiac.VirtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties = &virtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties + } + } + } + + return nil +} + +// VirtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties describes a virtual machines scale set IP +// Configuration's PublicIPAddress configuration +type VirtualMachineScaleSetUpdatePublicIPAddressConfigurationProperties struct { + // IdleTimeoutInMinutes - The idle timeout of the public IP address. + IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` + // DNSSettings - The dns settings to be applied on the publicIP addresses . + DNSSettings *VirtualMachineScaleSetPublicIPAddressConfigurationDNSSettings `json:"dnsSettings,omitempty"` +} + +// VirtualMachineScaleSetUpdateStorageProfile describes a virtual machine scale set storage profile. +type VirtualMachineScaleSetUpdateStorageProfile struct { + // ImageReference - The image reference. + ImageReference *ImageReference `json:"imageReference,omitempty"` + // OsDisk - The OS disk. + OsDisk *VirtualMachineScaleSetUpdateOSDisk `json:"osDisk,omitempty"` + // DataDisks - The data disks. + DataDisks *[]VirtualMachineScaleSetDataDisk `json:"dataDisks,omitempty"` +} + +// VirtualMachineScaleSetUpdateVMProfile describes a virtual machine scale set virtual machine profile. +type VirtualMachineScaleSetUpdateVMProfile struct { + // OsProfile - The virtual machine scale set OS profile. + OsProfile *VirtualMachineScaleSetUpdateOSProfile `json:"osProfile,omitempty"` + // StorageProfile - The virtual machine scale set storage profile. + StorageProfile *VirtualMachineScaleSetUpdateStorageProfile `json:"storageProfile,omitempty"` + // NetworkProfile - The virtual machine scale set network profile. + NetworkProfile *VirtualMachineScaleSetUpdateNetworkProfile `json:"networkProfile,omitempty"` + // DiagnosticsProfile - The virtual machine scale set diagnostics profile. + DiagnosticsProfile *DiagnosticsProfile `json:"diagnosticsProfile,omitempty"` + // ExtensionProfile - The virtual machine scale set extension profile. + ExtensionProfile *VirtualMachineScaleSetExtensionProfile `json:"extensionProfile,omitempty"` + // LicenseType - The license type, which is for bring your own license scenario. + LicenseType *string `json:"licenseType,omitempty"` +} + +// VirtualMachineScaleSetVM describes a virtual machine scale set virtual machine. +type VirtualMachineScaleSetVM struct { + autorest.Response `json:"-"` + // InstanceID - The virtual machine instance ID. + InstanceID *string `json:"instanceId,omitempty"` + // Sku - The virtual machine SKU. + Sku *Sku `json:"sku,omitempty"` + *VirtualMachineScaleSetVMProperties `json:"properties,omitempty"` + // Plan - Specifies information about the marketplace image used to create the virtual machine. 
This element is only used for marketplace images. Before you can use a marketplace image from an API, you must enable the image for programmatic use. In the Azure portal, find the marketplace image that you want to use and then click **Want to deploy programmatically, Get Started ->**. Enter any required information and then click **Save**. + Plan *Plan `json:"plan,omitempty"` + // Resources - The virtual machine child extension resources. + Resources *[]VirtualMachineExtension `json:"resources,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineScaleSetVM. +func (vmssv VirtualMachineScaleSetVM) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmssv.InstanceID != nil { + objectMap["instanceId"] = vmssv.InstanceID + } + if vmssv.Sku != nil { + objectMap["sku"] = vmssv.Sku + } + if vmssv.VirtualMachineScaleSetVMProperties != nil { + objectMap["properties"] = vmssv.VirtualMachineScaleSetVMProperties + } + if vmssv.Plan != nil { + objectMap["plan"] = vmssv.Plan + } + if vmssv.Resources != nil { + objectMap["resources"] = vmssv.Resources + } + if vmssv.ID != nil { + objectMap["id"] = vmssv.ID + } + if vmssv.Name != nil { + objectMap["name"] = vmssv.Name + } + if vmssv.Type != nil { + objectMap["type"] = vmssv.Type + } + if vmssv.Location != nil { + objectMap["location"] = vmssv.Location + } + if vmssv.Tags != nil { + objectMap["tags"] = vmssv.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineScaleSetVM struct. 
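Because *VirtualMachineScaleSetVMProperties is embedded, its fields are promoted onto VirtualMachineScaleSetVM once the custom unmarshaler above has populated it. A self-contained sketch with a hand-written JSON fragment (not a real API response); the import path is an assumption:

```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute" // assumed vendor path
)

func main() {
	// Hand-written fragment for illustration only; a real response carries many more fields.
	raw := []byte(`{
		"instanceId": "0",
		"name": "example-vmss_0",
		"location": "westus2",
		"properties": {
			"latestModelApplied": true,
			"provisioningState": "Succeeded"
		}
	}`)

	var vm compute.VirtualMachineScaleSetVM
	if err := json.Unmarshal(raw, &vm); err != nil {
		panic(err)
	}

	// InstanceID and Name are top-level fields; LatestModelApplied and
	// ProvisioningState are promoted from the embedded properties struct.
	fmt.Println(*vm.InstanceID, *vm.Name, *vm.LatestModelApplied, *vm.ProvisioningState)
}
```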
+func (vmssv *VirtualMachineScaleSetVM) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "instanceId": + if v != nil { + var instanceID string + err = json.Unmarshal(*v, &instanceID) + if err != nil { + return err + } + vmssv.InstanceID = &instanceID + } + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + vmssv.Sku = &sku + } + case "properties": + if v != nil { + var virtualMachineScaleSetVMProperties VirtualMachineScaleSetVMProperties + err = json.Unmarshal(*v, &virtualMachineScaleSetVMProperties) + if err != nil { + return err + } + vmssv.VirtualMachineScaleSetVMProperties = &virtualMachineScaleSetVMProperties + } + case "plan": + if v != nil { + var plan Plan + err = json.Unmarshal(*v, &plan) + if err != nil { + return err + } + vmssv.Plan = &plan + } + case "resources": + if v != nil { + var resources []VirtualMachineExtension + err = json.Unmarshal(*v, &resources) + if err != nil { + return err + } + vmssv.Resources = &resources + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vmssv.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vmssv.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vmssv.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vmssv.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vmssv.Tags = tags + } + } + } + + return nil +} + +// VirtualMachineScaleSetVMExtensionsSummary extensions summary for virtual machines of a virtual machine scale +// set. +type VirtualMachineScaleSetVMExtensionsSummary struct { + // Name - The extension name. + Name *string `json:"name,omitempty"` + // StatusesSummary - The extensions information. + StatusesSummary *[]VirtualMachineStatusCodeCount `json:"statusesSummary,omitempty"` +} + +// VirtualMachineScaleSetVMInstanceIDs specifies a list of virtual machine instance IDs from the VM scale set. +type VirtualMachineScaleSetVMInstanceIDs struct { + // InstanceIds - The virtual machine scale set instance ids. Omitting the virtual machine scale set instance ids will result in the operation being performed on all virtual machines in the virtual machine scale set. + InstanceIds *[]string `json:"instanceIds,omitempty"` +} + +// VirtualMachineScaleSetVMInstanceRequiredIDs specifies a list of virtual machine instance IDs from the VM scale +// set. +type VirtualMachineScaleSetVMInstanceRequiredIDs struct { + // InstanceIds - The virtual machine scale set instance ids. + InstanceIds *[]string `json:"instanceIds,omitempty"` +} + +// VirtualMachineScaleSetVMInstanceView the instance view of a virtual machine scale set VM. +type VirtualMachineScaleSetVMInstanceView struct { + autorest.Response `json:"-"` + // PlatformUpdateDomain - The Update Domain count. + PlatformUpdateDomain *int32 `json:"platformUpdateDomain,omitempty"` + // PlatformFaultDomain - The Fault Domain count. + PlatformFaultDomain *int32 `json:"platformFaultDomain,omitempty"` + // RdpThumbPrint - The Remote desktop certificate thumbprint. 
+ RdpThumbPrint *string `json:"rdpThumbPrint,omitempty"` + // VMAgent - The VM Agent running on the virtual machine. + VMAgent *VirtualMachineAgentInstanceView `json:"vmAgent,omitempty"` + // MaintenanceRedeployStatus - The Maintenance Operation status on the virtual machine. + MaintenanceRedeployStatus *MaintenanceRedeployStatus `json:"maintenanceRedeployStatus,omitempty"` + // Disks - The disks information. + Disks *[]DiskInstanceView `json:"disks,omitempty"` + // Extensions - The extensions information. + Extensions *[]VirtualMachineExtensionInstanceView `json:"extensions,omitempty"` + // VMHealth - The health status for the VM. + VMHealth *VirtualMachineHealthStatus `json:"vmHealth,omitempty"` + // BootDiagnostics - Boot Diagnostics is a debugging feature which allows you to view Console Output and Screenshot to diagnose VM status.

    For Linux Virtual Machines, you can easily view the output of your console log.

    For both Windows and Linux virtual machines, Azure also enables you to see a screenshot of the VM from the hypervisor. + BootDiagnostics *BootDiagnosticsInstanceView `json:"bootDiagnostics,omitempty"` + // Statuses - The resource status information. + Statuses *[]InstanceViewStatus `json:"statuses,omitempty"` + // PlacementGroupID - The placement group in which the VM is running. If the VM is deallocated it will not have a placementGroupId. + PlacementGroupID *string `json:"placementGroupId,omitempty"` +} + +// VirtualMachineScaleSetVMListResult the List Virtual Machine Scale Set VMs operation response. +type VirtualMachineScaleSetVMListResult struct { + autorest.Response `json:"-"` + // Value - The list of virtual machine scale sets VMs. + Value *[]VirtualMachineScaleSetVM `json:"value,omitempty"` + // NextLink - The uri to fetch the next page of Virtual Machine Scale Set VMs. Call ListNext() with this to fetch the next page of VMSS VMs + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualMachineScaleSetVMListResultIterator provides access to a complete listing of VirtualMachineScaleSetVM +// values. +type VirtualMachineScaleSetVMListResultIterator struct { + i int + page VirtualMachineScaleSetVMListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualMachineScaleSetVMListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualMachineScaleSetVMListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualMachineScaleSetVMListResultIterator) Response() VirtualMachineScaleSetVMListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualMachineScaleSetVMListResultIterator) Value() VirtualMachineScaleSetVM { + if !iter.page.NotDone() { + return VirtualMachineScaleSetVM{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vmssvlr VirtualMachineScaleSetVMListResult) IsEmpty() bool { + return vmssvlr.Value == nil || len(*vmssvlr.Value) == 0 +} + +// virtualMachineScaleSetVMListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vmssvlr VirtualMachineScaleSetVMListResult) virtualMachineScaleSetVMListResultPreparer() (*http.Request, error) { + if vmssvlr.NextLink == nil || len(to.String(vmssvlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vmssvlr.NextLink))) +} + +// VirtualMachineScaleSetVMListResultPage contains a page of VirtualMachineScaleSetVM values. +type VirtualMachineScaleSetVMListResultPage struct { + fn func(VirtualMachineScaleSetVMListResult) (VirtualMachineScaleSetVMListResult, error) + vmssvlr VirtualMachineScaleSetVMListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. 
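The ListResult/Iterator/Page trio above follows the SDK's usual paging pattern: NotDone reports whether values remain, Values exposes the current page, and Next follows nextLink. A hedged sketch of walking every page is below; the client constructor and the exact List parameters are assumptions about the surrounding client code and may differ between SDK versions.

```
package main

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute" // assumed vendor path
	"github.com/Azure/go-autorest/autorest/to"
)

// listInstanceIDs walks every page of scale set VMs using the Page type above.
// The List signature (filter, select, expand parameters) is an assumption.
func listInstanceIDs(ctx context.Context, client compute.VirtualMachineScaleSetVMsClient, rg, vmssName string) error {
	page, err := client.List(ctx, rg, vmssName, "", "", "")
	for err == nil && page.NotDone() {
		for _, vm := range page.Values() {
			fmt.Println(to.String(vm.InstanceID))
		}
		err = page.Next() // follows nextLink until NotDone() reports false
	}
	return err
}

func main() {
	client := compute.NewVirtualMachineScaleSetVMsClient("00000000-0000-0000-0000-000000000000") // placeholder subscription
	// client.Authorizer must be set with real credentials before this sketch will run.
	_ = listInstanceIDs(context.Background(), client, "example-rg", "example-vmss")
}
```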
+func (page *VirtualMachineScaleSetVMListResultPage) Next() error { + next, err := page.fn(page.vmssvlr) + if err != nil { + return err + } + page.vmssvlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualMachineScaleSetVMListResultPage) NotDone() bool { + return !page.vmssvlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualMachineScaleSetVMListResultPage) Response() VirtualMachineScaleSetVMListResult { + return page.vmssvlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualMachineScaleSetVMListResultPage) Values() []VirtualMachineScaleSetVM { + if page.vmssvlr.IsEmpty() { + return nil + } + return *page.vmssvlr.Value +} + +// VirtualMachineScaleSetVMProfile describes a virtual machine scale set virtual machine profile. +type VirtualMachineScaleSetVMProfile struct { + // OsProfile - Specifies the operating system settings for the virtual machines in the scale set. + OsProfile *VirtualMachineScaleSetOSProfile `json:"osProfile,omitempty"` + // StorageProfile - Specifies the storage settings for the virtual machine disks. + StorageProfile *VirtualMachineScaleSetStorageProfile `json:"storageProfile,omitempty"` + // NetworkProfile - Specifies properties of the network interfaces of the virtual machines in the scale set. + NetworkProfile *VirtualMachineScaleSetNetworkProfile `json:"networkProfile,omitempty"` + // DiagnosticsProfile - Specifies the boot diagnostic settings state.

    Minimum api-version: 2015-06-15. + DiagnosticsProfile *DiagnosticsProfile `json:"diagnosticsProfile,omitempty"` + // ExtensionProfile - Specifies a collection of settings for extensions installed on virtual machines in the scale set. + ExtensionProfile *VirtualMachineScaleSetExtensionProfile `json:"extensionProfile,omitempty"` + // LicenseType - Specifies that the image or disk that is being used was licensed on-premises. This element is only used for images that contain the Windows Server operating system.

    Possible values are: Windows_Client, Windows_Server. If this element is included in a request for an update, the value must match the initial value. This value cannot be updated. For more information, see [Azure Hybrid Use Benefit for Windows Server](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) Minimum api-version: 2015-06-15 + LicenseType *string `json:"licenseType,omitempty"` + // Priority - Specifies the priority for the virtual machines in the scale set.

    Minimum api-version: 2017-10-30-preview. Possible values include: 'Regular', 'Low' + Priority VirtualMachinePriorityTypes `json:"priority,omitempty"` + // EvictionPolicy - Specifies the eviction policy for virtual machines in a low priority scale set.

    Minimum api-version: 2017-10-30-preview. Possible values include: 'Deallocate', 'Delete' + EvictionPolicy VirtualMachineEvictionPolicyTypes `json:"evictionPolicy,omitempty"` +} + +// VirtualMachineScaleSetVMProperties describes the properties of a virtual machine scale set virtual machine. +type VirtualMachineScaleSetVMProperties struct { + // LatestModelApplied - Specifies whether the latest model has been applied to the virtual machine. + LatestModelApplied *bool `json:"latestModelApplied,omitempty"` + // VMID - Azure VM unique ID. + VMID *string `json:"vmId,omitempty"` + // InstanceView - The virtual machine instance view. + InstanceView *VirtualMachineInstanceView `json:"instanceView,omitempty"` + // HardwareProfile - Specifies the hardware settings for the virtual machine. + HardwareProfile *HardwareProfile `json:"hardwareProfile,omitempty"` + // StorageProfile - Specifies the storage settings for the virtual machine disks. + StorageProfile *StorageProfile `json:"storageProfile,omitempty"` + // OsProfile - Specifies the operating system settings for the virtual machine. + OsProfile *OSProfile `json:"osProfile,omitempty"` + // NetworkProfile - Specifies the network interfaces of the virtual machine. + NetworkProfile *NetworkProfile `json:"networkProfile,omitempty"` + // DiagnosticsProfile - Specifies the boot diagnostic settings state.

    Minimum api-version: 2015-06-15. + DiagnosticsProfile *DiagnosticsProfile `json:"diagnosticsProfile,omitempty"` + // AvailabilitySet - Specifies information about the availability set that the virtual machine should be assigned to. Virtual machines specified in the same availability set are allocated to different nodes to maximize availability. For more information about availability sets, see [Manage the availability of virtual machines](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-manage-availability?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json).

    For more information on Azure planned maintenance, see [Planned maintenance for virtual machines in Azure](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-planned-maintenance?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)

    Currently, a VM can only be added to an availability set at creation time. An existing VM cannot be added to an availability set. + AvailabilitySet *SubResource `json:"availabilitySet,omitempty"` + // ProvisioningState - The provisioning state, which only appears in the response. + ProvisioningState *string `json:"provisioningState,omitempty"` + // LicenseType - Specifies that the image or disk that is being used was licensed on-premises. This element is only used for images that contain the Windows Server operating system.

    Possible values are: Windows_Client, Windows_Server. If this element is included in a request for an update, the value must match the initial value. This value cannot be updated. For more information, see [Azure Hybrid Use Benefit for Windows Server](https://docs.microsoft.com/azure/virtual-machines/virtual-machines-windows-hybrid-use-benefit-licensing?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json)

    Minimum api-version: 2015-06-15 + LicenseType *string `json:"licenseType,omitempty"` +} + +// VirtualMachineScaleSetVMsDeallocateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetVMsDeallocateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetVMsDeallocateFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsDeallocateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsDeallocateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeallocateResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsDeallocateFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetVMsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetVMsDeleteFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsDeleteFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeleteResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsDeleteFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsPerformMaintenanceFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetVMsPerformMaintenanceFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetVMsPerformMaintenanceFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsPerformMaintenanceFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsPerformMaintenanceFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.PerformMaintenanceResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsPerformMaintenanceFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsPowerOffFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetVMsPowerOffFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetVMsPowerOffFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsPowerOffFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsPowerOffFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.PowerOffResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsPowerOffFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsRedeployFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetVMsRedeployFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetVMsRedeployFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsRedeployFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsRedeployFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.RedeployResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsRedeployFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsReimageAllFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetVMsReimageAllFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetVMsReimageAllFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsReimageAllFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsReimageAllFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.ReimageAllResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsReimageAllFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsReimageFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetVMsReimageFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetVMsReimageFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsReimageFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsReimageFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.ReimageResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsReimageFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsRestartFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachineScaleSetVMsRestartFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetVMsRestartFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsRestartFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsRestartFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.RestartResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsRestartFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsStartFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetVMsStartFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachineScaleSetVMsStartFuture) Result(client VirtualMachineScaleSetVMsClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsStartFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsStartFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.StartResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsStartFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineScaleSetVMsUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachineScaleSetVMsUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachineScaleSetVMsUpdateFuture) Result(client VirtualMachineScaleSetVMsClient) (vmssv VirtualMachineScaleSetVM, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachineScaleSetVMsUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vmssv.Response.Response, err = future.GetResult(sender); err == nil && vmssv.Response.Response.StatusCode != http.StatusNoContent { + vmssv, err = client.UpdateResponder(vmssv.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsUpdateFuture", "Result", vmssv.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesCaptureFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesCaptureFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachinesCaptureFuture) Result(client VirtualMachinesClient) (vmcr VirtualMachineCaptureResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesCaptureFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesCaptureFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vmcr.Response.Response, err = future.GetResult(sender); err == nil && vmcr.Response.Response.StatusCode != http.StatusNoContent { + vmcr, err = client.CaptureResponder(vmcr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesCaptureFuture", "Result", vmcr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesConvertToManagedDisksFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachinesConvertToManagedDisksFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachinesConvertToManagedDisksFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesConvertToManagedDisksFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesConvertToManagedDisksFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.ConvertToManagedDisksResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesConvertToManagedDisksFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachinesCreateOrUpdateFuture) Result(client VirtualMachinesClient) (VM VirtualMachine, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if VM.Response.Response, err = future.GetResult(sender); err == nil && VM.Response.Response.StatusCode != http.StatusNoContent { + VM, err = client.CreateOrUpdateResponder(VM.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesCreateOrUpdateFuture", "Result", VM.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesDeallocateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesDeallocateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachinesDeallocateFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesDeallocateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesDeallocateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeallocateResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesDeallocateFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
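All of these futures share one shape: Done polls the service, and Result decodes the terminal response once polling has finished. A typical caller blocks with the embedded azure.Future's WaitForCompletion before calling Result, roughly as sketched below; the Deallocate call, client constructor, and WaitForCompletion signature are assumptions about the surrounding package rather than something shown in this file.

```
package main

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute" // assumed vendor path
	"github.com/Azure/go-autorest/autorest/to"
)

// deallocateAndWait kicks off Deallocate, blocks until the service reports the
// long-running operation as done, and then decodes the final status through the
// future's Result method shown above. The caller supplies an authorized client.
func deallocateAndWait(ctx context.Context, client compute.VirtualMachinesClient, rg, name string) error {
	future, err := client.Deallocate(ctx, rg, name)
	if err != nil {
		return err
	}
	// WaitForCompletion comes from the embedded azure.Future (go-autorest); it polls until Done.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return err
	}
	osr, err := future.Result(client)
	if err != nil {
		return err
	}
	fmt.Println("final status:", to.String(osr.Status))
	return nil
}

func main() {
	client := compute.NewVirtualMachinesClient("00000000-0000-0000-0000-000000000000") // placeholder subscription
	// client.Authorizer must be set with real credentials before this sketch will run.
	_ = deallocateAndWait(context.Background(), client, "example-rg", "example-vm")
}
```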
+func (future *VirtualMachinesDeleteFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesDeleteFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.DeleteResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesDeleteFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineSize describes the properties of a VM size. +type VirtualMachineSize struct { + // Name - The name of the virtual machine size. + Name *string `json:"name,omitempty"` + // NumberOfCores - The number of cores supported by the virtual machine size. + NumberOfCores *int32 `json:"numberOfCores,omitempty"` + // OsDiskSizeInMB - The OS disk size, in MB, allowed by the virtual machine size. + OsDiskSizeInMB *int32 `json:"osDiskSizeInMB,omitempty"` + // ResourceDiskSizeInMB - The resource disk size, in MB, allowed by the virtual machine size. + ResourceDiskSizeInMB *int32 `json:"resourceDiskSizeInMB,omitempty"` + // MemoryInMB - The amount of memory, in MB, supported by the virtual machine size. + MemoryInMB *int32 `json:"memoryInMB,omitempty"` + // MaxDataDiskCount - The maximum number of data disks that can be attached to the virtual machine size. + MaxDataDiskCount *int32 `json:"maxDataDiskCount,omitempty"` +} + +// VirtualMachineSizeListResult the List Virtual Machine operation response. +type VirtualMachineSizeListResult struct { + autorest.Response `json:"-"` + // Value - The list of virtual machine sizes. + Value *[]VirtualMachineSize `json:"value,omitempty"` +} + +// VirtualMachinesPerformMaintenanceFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualMachinesPerformMaintenanceFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachinesPerformMaintenanceFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesPerformMaintenanceFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesPerformMaintenanceFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.PerformMaintenanceResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesPerformMaintenanceFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesPowerOffFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesPowerOffFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachinesPowerOffFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesPowerOffFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesPowerOffFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.PowerOffResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesPowerOffFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesRedeployFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesRedeployFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachinesRedeployFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesRedeployFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesRedeployFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.RedeployResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesRedeployFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesRestartFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesRestartFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachinesRestartFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesRestartFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesRestartFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.RestartResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesRestartFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesRunCommandFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesRunCommandFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachinesRunCommandFuture) Result(client VirtualMachinesClient) (rcr RunCommandResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesRunCommandFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesRunCommandFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if rcr.Response.Response, err = future.GetResult(sender); err == nil && rcr.Response.Response.StatusCode != http.StatusNoContent { + rcr, err = client.RunCommandResponder(rcr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesRunCommandFuture", "Result", rcr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachinesStartFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type VirtualMachinesStartFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualMachinesStartFuture) Result(client VirtualMachinesClient) (osr OperationStatusResponse, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesStartFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesStartFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if osr.Response.Response, err = future.GetResult(sender); err == nil && osr.Response.Response.StatusCode != http.StatusNoContent { + osr, err = client.StartResponder(osr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesStartFuture", "Result", osr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineStatusCodeCount the status code and count of the virtual machine scale set instance view status +// summary. +type VirtualMachineStatusCodeCount struct { + // Code - The instance view status code. + Code *string `json:"code,omitempty"` + // Count - The number of instances having a particular status code. + Count *int32 `json:"count,omitempty"` +} + +// VirtualMachinesUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualMachinesUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualMachinesUpdateFuture) Result(client VirtualMachinesClient) (VM VirtualMachine, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("compute.VirtualMachinesUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if VM.Response.Response, err = future.GetResult(sender); err == nil && VM.Response.Response.StatusCode != http.StatusNoContent { + VM, err = client.UpdateResponder(VM.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesUpdateFuture", "Result", VM.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualMachineUpdate describes a Virtual Machine Update. +type VirtualMachineUpdate struct { + // Plan - Specifies information about the marketplace image used to create the virtual machine. This element is only used for marketplace images. Before you can use a marketplace image from an API, you must enable the image for programmatic use. In the Azure portal, find the marketplace image that you want to use and then click **Want to deploy programmatically, Get Started ->**. Enter any required information and then click **Save**. + Plan *Plan `json:"plan,omitempty"` + *VirtualMachineProperties `json:"properties,omitempty"` + // Identity - The identity of the virtual machine, if configured. + Identity *VirtualMachineIdentity `json:"identity,omitempty"` + // Zones - The virtual machine zones. + Zones *[]string `json:"zones,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualMachineUpdate. +func (vmu VirtualMachineUpdate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vmu.Plan != nil { + objectMap["plan"] = vmu.Plan + } + if vmu.VirtualMachineProperties != nil { + objectMap["properties"] = vmu.VirtualMachineProperties + } + if vmu.Identity != nil { + objectMap["identity"] = vmu.Identity + } + if vmu.Zones != nil { + objectMap["zones"] = vmu.Zones + } + if vmu.Tags != nil { + objectMap["tags"] = vmu.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualMachineUpdate struct. 
+func (vmu *VirtualMachineUpdate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "plan": + if v != nil { + var plan Plan + err = json.Unmarshal(*v, &plan) + if err != nil { + return err + } + vmu.Plan = &plan + } + case "properties": + if v != nil { + var virtualMachineProperties VirtualMachineProperties + err = json.Unmarshal(*v, &virtualMachineProperties) + if err != nil { + return err + } + vmu.VirtualMachineProperties = &virtualMachineProperties + } + case "identity": + if v != nil { + var identity VirtualMachineIdentity + err = json.Unmarshal(*v, &identity) + if err != nil { + return err + } + vmu.Identity = &identity + } + case "zones": + if v != nil { + var zones []string + err = json.Unmarshal(*v, &zones) + if err != nil { + return err + } + vmu.Zones = &zones + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vmu.Tags = tags + } + } + } + + return nil +} + +// WindowsConfiguration specifies Windows operating system settings on the virtual machine. +type WindowsConfiguration struct { + // ProvisionVMAgent - Indicates whether virtual machine agent should be provisioned on the virtual machine.
+// When this property is not specified in the request body, default behavior is to set it to true. This will ensure that VM Agent is installed on the VM so that extensions can be added to the VM later. + ProvisionVMAgent *bool `json:"provisionVMAgent,omitempty"` + // EnableAutomaticUpdates - Indicates whether virtual machine is enabled for automatic updates. + EnableAutomaticUpdates *bool `json:"enableAutomaticUpdates,omitempty"` + // TimeZone - Specifies the time zone of the virtual machine. e.g. "Pacific Standard Time" + TimeZone *string `json:"timeZone,omitempty"` + // AdditionalUnattendContent - Specifies additional base-64 encoded XML formatted information that can be included in the Unattend.xml file, which is used by Windows Setup. + AdditionalUnattendContent *[]AdditionalUnattendContent `json:"additionalUnattendContent,omitempty"` + // WinRM - Specifies the Windows Remote Management listeners. This enables remote Windows PowerShell. + WinRM *WinRMConfiguration `json:"winRM,omitempty"` +} + +// WinRMConfiguration describes Windows Remote Management configuration of the VM +type WinRMConfiguration struct { + // Listeners - The list of Windows Remote Management listeners + Listeners *[]WinRMListener `json:"listeners,omitempty"` +} + +// WinRMListener describes Protocol and thumbprint of Windows Remote Management listener +type WinRMListener struct { + // Protocol - Specifies the protocol of the listener. Possible values are: **http**, **https**. Possible values include: 'HTTP', 'HTTPS' + Protocol ProtocolTypes `json:"protocol,omitempty"` + // CertificateURL - This is the URL of a certificate that has been uploaded to Key Vault as a secret. For adding a secret to the Key Vault, see [Add a key or secret to the key vault](https://docs.microsoft.com/azure/key-vault/key-vault-get-started/#add). In this case, your certificate needs to be the Base64 encoding of the following JSON Object, which is encoded in UTF-8:
+// { "data":"<Base64-encoded-certificate>", "dataType":"pfx", "password":"<pfx-file-password>"
    } + CertificateURL *string `json:"certificateUrl,omitempty"` +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/operations.go new file mode 100644 index 000000000..c2daed42b --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/operations.go @@ -0,0 +1,98 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// OperationsClient is the compute Client +type OperationsClient struct { + BaseClient +} + +// NewOperationsClient creates an instance of the OperationsClient client. +func NewOperationsClient(subscriptionID string) OperationsClient { + return NewOperationsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewOperationsClientWithBaseURI creates an instance of the OperationsClient client. +func NewOperationsClientWithBaseURI(baseURI string, subscriptionID string) OperationsClient { + return OperationsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List gets a list of compute operations. +func (client OperationsClient) List(ctx context.Context) (result OperationListResult, err error) { + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.OperationsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.OperationsClient", "List", resp, "Failure sending request") + return + } + + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.OperationsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) { + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPath("/providers/Microsoft.Compute/operations"), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. 
+func (client OperationsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client OperationsClient) ListResponder(resp *http.Response) (result OperationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/resourceskus.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/resourceskus.go new file mode 100644 index 000000000..de32a3df6 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/resourceskus.go @@ -0,0 +1,130 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// ResourceSkusClient is the compute Client +type ResourceSkusClient struct { + BaseClient +} + +// NewResourceSkusClient creates an instance of the ResourceSkusClient client. +func NewResourceSkusClient(subscriptionID string) ResourceSkusClient { + return NewResourceSkusClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewResourceSkusClientWithBaseURI creates an instance of the ResourceSkusClient client. +func NewResourceSkusClientWithBaseURI(baseURI string, subscriptionID string) ResourceSkusClient { + return ResourceSkusClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List gets the list of Microsoft.Compute SKUs available for your Subscription. +func (client ResourceSkusClient) List(ctx context.Context) (result ResourceSkusResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ResourceSkusClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.rsr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.ResourceSkusClient", "List", resp, "Failure sending request") + return + } + + result.rsr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ResourceSkusClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client ResourceSkusClient) ListPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-09-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/skus", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client ResourceSkusClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client ResourceSkusClient) ListResponder(resp *http.Response) (result ResourceSkusResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client ResourceSkusClient) listNextResults(lastResults ResourceSkusResult) (result ResourceSkusResult, err error) { + req, err := lastResults.resourceSkusResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.ResourceSkusClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.ResourceSkusClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.ResourceSkusClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ResourceSkusClient) ListComplete(ctx context.Context) (result ResourceSkusResultIterator, err error) { + result.page, err = client.List(ctx) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/snapshots.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/snapshots.go new file mode 100644 index 000000000..b87359354 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/snapshots.go @@ -0,0 +1,686 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// SnapshotsClient is the compute Client +type SnapshotsClient struct { + BaseClient +} + +// NewSnapshotsClient creates an instance of the SnapshotsClient client. +func NewSnapshotsClient(subscriptionID string) SnapshotsClient { + return NewSnapshotsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewSnapshotsClientWithBaseURI creates an instance of the SnapshotsClient client. +func NewSnapshotsClientWithBaseURI(baseURI string, subscriptionID string) SnapshotsClient { + return SnapshotsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates a snapshot. +// Parameters: +// resourceGroupName - the name of the resource group. +// snapshotName - the name of the snapshot that is being created. The name can't be changed after the snapshot +// is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters. +// snapshot - snapshot object supplied in the body of the Put disk operation. +func (client SnapshotsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, snapshotName string, snapshot Snapshot) (result SnapshotsCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: snapshot, + Constraints: []validation.Constraint{{Target: "snapshot.DiskProperties", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "snapshot.DiskProperties.CreationData", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "snapshot.DiskProperties.CreationData.ImageReference", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "snapshot.DiskProperties.CreationData.ImageReference.ID", Name: validation.Null, Rule: true, Chain: nil}}}, + }}, + {Target: "snapshot.DiskProperties.EncryptionSettings", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "snapshot.DiskProperties.EncryptionSettings.DiskEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "snapshot.DiskProperties.EncryptionSettings.DiskEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "snapshot.DiskProperties.EncryptionSettings.DiskEncryptionKey.SecretURL", Name: validation.Null, Rule: true, Chain: nil}, + }}, + {Target: "snapshot.DiskProperties.EncryptionSettings.KeyEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "snapshot.DiskProperties.EncryptionSettings.KeyEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "snapshot.DiskProperties.EncryptionSettings.KeyEncryptionKey.KeyURL", Name: validation.Null, Rule: true, Chain: nil}, + }}, + }}, + }}}}}); err != nil { + return result, validation.NewError("compute.SnapshotsClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, snapshotName, snapshot) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = 
client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client SnapshotsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, snapshotName string, snapshot Snapshot) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "snapshotName": autorest.Encode("path", snapshotName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/snapshots/{snapshotName}", pathParameters), + autorest.WithJSON(snapshot), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client SnapshotsClient) CreateOrUpdateSender(req *http.Request) (future SnapshotsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) CreateOrUpdateResponder(resp *http.Response) (result Snapshot, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes a snapshot. +// Parameters: +// resourceGroupName - the name of the resource group. +// snapshotName - the name of the snapshot that is being created. The name can't be changed after the snapshot +// is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters. +func (client SnapshotsClient) Delete(ctx context.Context, resourceGroupName string, snapshotName string) (result SnapshotsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, snapshotName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. 
+func (client SnapshotsClient) DeletePreparer(ctx context.Context, resourceGroupName string, snapshotName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "snapshotName": autorest.Encode("path", snapshotName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/snapshots/{snapshotName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client SnapshotsClient) DeleteSender(req *http.Request) (future SnapshotsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets information about a snapshot. +// Parameters: +// resourceGroupName - the name of the resource group. +// snapshotName - the name of the snapshot that is being created. The name can't be changed after the snapshot +// is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters. +func (client SnapshotsClient) Get(ctx context.Context, resourceGroupName string, snapshotName string) (result Snapshot, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, snapshotName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client SnapshotsClient) GetPreparer(ctx context.Context, resourceGroupName string, snapshotName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "snapshotName": autorest.Encode("path", snapshotName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/snapshots/{snapshotName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client SnapshotsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) GetResponder(resp *http.Response) (result Snapshot, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GrantAccess grants access to a snapshot. +// Parameters: +// resourceGroupName - the name of the resource group. +// snapshotName - the name of the snapshot that is being created. The name can't be changed after the snapshot +// is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters. +// grantAccessData - access data object supplied in the body of the get snapshot access operation. +func (client SnapshotsClient) GrantAccess(ctx context.Context, resourceGroupName string, snapshotName string, grantAccessData GrantAccessData) (result SnapshotsGrantAccessFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: grantAccessData, + Constraints: []validation.Constraint{{Target: "grantAccessData.DurationInSeconds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.SnapshotsClient", "GrantAccess", err.Error()) + } + + req, err := client.GrantAccessPreparer(ctx, resourceGroupName, snapshotName, grantAccessData) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "GrantAccess", nil, "Failure preparing request") + return + } + + result, err = client.GrantAccessSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "GrantAccess", result.Response(), "Failure sending request") + return + } + + return +} + +// GrantAccessPreparer prepares the GrantAccess request. 
+func (client SnapshotsClient) GrantAccessPreparer(ctx context.Context, resourceGroupName string, snapshotName string, grantAccessData GrantAccessData) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "snapshotName": autorest.Encode("path", snapshotName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/snapshots/{snapshotName}/beginGetAccess", pathParameters), + autorest.WithJSON(grantAccessData), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GrantAccessSender sends the GrantAccess request. The method will close the +// http.Response Body if it receives an error. +func (client SnapshotsClient) GrantAccessSender(req *http.Request) (future SnapshotsGrantAccessFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GrantAccessResponder handles the response to the GrantAccess request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) GrantAccessResponder(resp *http.Response) (result AccessURI, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List lists snapshots under a subscription. +func (client SnapshotsClient) List(ctx context.Context) (result SnapshotListPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.sl.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "List", resp, "Failure sending request") + return + } + + result.sl, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client SnapshotsClient) ListPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/snapshots", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client SnapshotsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) ListResponder(resp *http.Response) (result SnapshotList, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client SnapshotsClient) listNextResults(lastResults SnapshotList) (result SnapshotList, err error) { + req, err := lastResults.snapshotListPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.SnapshotsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.SnapshotsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client SnapshotsClient) ListComplete(ctx context.Context) (result SnapshotListIterator, err error) { + result.page, err = client.List(ctx) + return +} + +// ListByResourceGroup lists snapshots under a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client SnapshotsClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result SnapshotListPage, err error) { + result.fn = client.listByResourceGroupNextResults + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "ListByResourceGroup", nil, "Failure preparing request") + return + } + + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.sl.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "ListByResourceGroup", resp, "Failure sending request") + return + } + + result.sl, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "ListByResourceGroup", resp, "Failure responding to request") + } + + return +} + +// ListByResourceGroupPreparer prepares the ListByResourceGroup request. +func (client SnapshotsClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/snapshots", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the +// http.Response Body if it receives an error. +func (client SnapshotsClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) ListByResourceGroupResponder(resp *http.Response) (result SnapshotList, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listByResourceGroupNextResults retrieves the next set of results, if any. 
+func (client SnapshotsClient) listByResourceGroupNextResults(lastResults SnapshotList) (result SnapshotList, err error) { + req, err := lastResults.snapshotListPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.SnapshotsClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.SnapshotsClient", "listByResourceGroupNextResults", resp, "Failure sending next results request") + } + result, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required. +func (client SnapshotsClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string) (result SnapshotListIterator, err error) { + result.page, err = client.ListByResourceGroup(ctx, resourceGroupName) + return +} + +// RevokeAccess revokes access to a snapshot. +// Parameters: +// resourceGroupName - the name of the resource group. +// snapshotName - the name of the snapshot that is being created. The name can't be changed after the snapshot +// is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters. +func (client SnapshotsClient) RevokeAccess(ctx context.Context, resourceGroupName string, snapshotName string) (result SnapshotsRevokeAccessFuture, err error) { + req, err := client.RevokeAccessPreparer(ctx, resourceGroupName, snapshotName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "RevokeAccess", nil, "Failure preparing request") + return + } + + result, err = client.RevokeAccessSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "RevokeAccess", result.Response(), "Failure sending request") + return + } + + return +} + +// RevokeAccessPreparer prepares the RevokeAccess request. +func (client SnapshotsClient) RevokeAccessPreparer(ctx context.Context, resourceGroupName string, snapshotName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "snapshotName": autorest.Encode("path", snapshotName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/snapshots/{snapshotName}/endGetAccess", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RevokeAccessSender sends the RevokeAccess request. The method will close the +// http.Response Body if it receives an error. 
+func (client SnapshotsClient) RevokeAccessSender(req *http.Request) (future SnapshotsRevokeAccessFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RevokeAccessResponder handles the response to the RevokeAccess request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) RevokeAccessResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByClosing()) + result.Response = resp + return +} + +// Update updates (patches) a snapshot. +// Parameters: +// resourceGroupName - the name of the resource group. +// snapshotName - the name of the snapshot that is being created. The name can't be changed after the snapshot +// is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters. +// snapshot - snapshot object supplied in the body of the Patch snapshot operation. +func (client SnapshotsClient) Update(ctx context.Context, resourceGroupName string, snapshotName string, snapshot SnapshotUpdate) (result SnapshotsUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, snapshotName, snapshot) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.SnapshotsClient", "Update", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdatePreparer prepares the Update request. +func (client SnapshotsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, snapshotName string, snapshot SnapshotUpdate) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "snapshotName": autorest.Encode("path", snapshotName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-04-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/snapshots/{snapshotName}", pathParameters), + autorest.WithJSON(snapshot), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. 
+func (client SnapshotsClient) UpdateSender(req *http.Request) (future SnapshotsUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client SnapshotsClient) UpdateResponder(resp *http.Response) (result Snapshot, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/usage.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/usage.go similarity index 69% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/usage.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/usage.go index 97c53e0ef..e4bcd72b4 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/usage.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/usage.go @@ -14,20 +14,20 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// UsageClient is the the Compute Management Client. +// UsageClient is the compute Client type UsageClient struct { - ManagementClient + BaseClient } // NewUsageClient creates an instance of the UsageClient client. @@ -40,19 +40,19 @@ func NewUsageClientWithBaseURI(baseURI string, subscriptionID string) UsageClien return UsageClient{NewWithBaseURI(baseURI, subscriptionID)} } -// List gets, for the specified location, the current compute resource usage -// information as well as the limits for compute resources under the -// subscription. -// -// location is the location for which resource usage is queried. -func (client UsageClient) List(location string) (result ListUsagesResult, err error) { +// List gets, for the specified location, the current compute resource usage information as well as the limits for +// compute resources under the subscription. +// Parameters: +// location - the location for which resource usage is queried. 
+func (client UsageClient) List(ctx context.Context, location string) (result ListUsagesResultPage, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: location, Constraints: []validation.Constraint{{Target: "location", Name: validation.Pattern, Rule: `^[-\w\._]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "compute.UsageClient", "List") + return result, validation.NewError("compute.UsageClient", "List", err.Error()) } - req, err := client.ListPreparer(location) + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, location) if err != nil { err = autorest.NewErrorWithError(err, "compute.UsageClient", "List", nil, "Failure preparing request") return @@ -60,12 +60,12 @@ func (client UsageClient) List(location string) (result ListUsagesResult, err er resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.lur.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "compute.UsageClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.lur, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "compute.UsageClient", "List", resp, "Failure responding to request") } @@ -74,13 +74,13 @@ func (client UsageClient) List(location string) (result ListUsagesResult, err er } // ListPreparer prepares the List request. -func (client UsageClient) ListPreparer(location string) (*http.Request, error) { +func (client UsageClient) ListPreparer(ctx context.Context, location string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -90,13 +90,14 @@ func (client UsageClient) ListPreparer(location string) (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/usages", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client UsageClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -112,26 +113,29 @@ func (client UsageClient) ListResponder(resp *http.Response) (result ListUsagesR return } -// ListNextResults retrieves the next set of results, if any. -func (client UsageClient) ListNextResults(lastResults ListUsagesResult) (result ListUsagesResult, err error) { - req, err := lastResults.ListUsagesResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client UsageClient) listNextResults(lastResults ListUsagesResult) (result ListUsagesResult, err error) { + req, err := lastResults.listUsagesResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "compute.UsageClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "compute.UsageClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "compute.UsageClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "compute.UsageClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "compute.UsageClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "compute.UsageClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client UsageClient) ListComplete(ctx context.Context, location string) (result ListUsagesResultIterator, err error) { + result.page, err = client.List(ctx, location) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/version.go similarity index 80% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/version.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/version.go index a5318ebf7..87d03ecf9 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/version.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/version.go @@ -1,5 +1,7 @@ package compute +import "github.com/Azure/azure-sdk-for-go/version" + // Copyright (c) Microsoft and contributors. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -14,16 +16,15 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. // UserAgent returns the UserAgent string to use when sending http.Requests. func UserAgent() string { - return "Azure-SDK-For-Go/v10.0.2-beta arm-compute/2016-04-30-preview" + return "Azure-SDK-For-Go/" + version.Number + " compute/2018-04-01" } // Version returns the semantic version (see http://semver.org) of the client. 
func Version() string { - return "v10.0.2-beta" + return version.Number } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineextensionimages.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineextensionimages.go similarity index 77% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineextensionimages.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineextensionimages.go index fcd122704..ca8bf91f7 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineextensionimages.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineextensionimages.go @@ -14,37 +14,37 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// VirtualMachineExtensionImagesClient is the the Compute Management Client. +// VirtualMachineExtensionImagesClient is the compute Client type VirtualMachineExtensionImagesClient struct { - ManagementClient + BaseClient } -// NewVirtualMachineExtensionImagesClient creates an instance of the -// VirtualMachineExtensionImagesClient client. +// NewVirtualMachineExtensionImagesClient creates an instance of the VirtualMachineExtensionImagesClient client. func NewVirtualMachineExtensionImagesClient(subscriptionID string) VirtualMachineExtensionImagesClient { return NewVirtualMachineExtensionImagesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewVirtualMachineExtensionImagesClientWithBaseURI creates an instance of the -// VirtualMachineExtensionImagesClient client. +// NewVirtualMachineExtensionImagesClientWithBaseURI creates an instance of the VirtualMachineExtensionImagesClient +// client. func NewVirtualMachineExtensionImagesClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineExtensionImagesClient { return VirtualMachineExtensionImagesClient{NewWithBaseURI(baseURI, subscriptionID)} } // Get gets a virtual machine extension image. -// -func (client VirtualMachineExtensionImagesClient) Get(location string, publisherName string, typeParameter string, version string) (result VirtualMachineExtensionImage, err error) { - req, err := client.GetPreparer(location, publisherName, typeParameter, version) +// Parameters: +// location - the name of a supported Azure region. +func (client VirtualMachineExtensionImagesClient) Get(ctx context.Context, location string, publisherName string, typeParameter string, version string) (result VirtualMachineExtensionImage, err error) { + req, err := client.GetPreparer(ctx, location, publisherName, typeParameter, version) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionImagesClient", "Get", nil, "Failure preparing request") return @@ -66,7 +66,7 @@ func (client VirtualMachineExtensionImagesClient) Get(location string, publisher } // GetPreparer prepares the Get request. 
-func (client VirtualMachineExtensionImagesClient) GetPreparer(location string, publisherName string, typeParameter string, version string) (*http.Request, error) { +func (client VirtualMachineExtensionImagesClient) GetPreparer(ctx context.Context, location string, publisherName string, typeParameter string, version string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "publisherName": autorest.Encode("path", publisherName), @@ -75,7 +75,7 @@ func (client VirtualMachineExtensionImagesClient) GetPreparer(location string, p "version": autorest.Encode("path", version), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -85,13 +85,14 @@ func (client VirtualMachineExtensionImagesClient) GetPreparer(location string, p autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/artifacttypes/vmextension/types/{type}/versions/{version}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineExtensionImagesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -108,9 +109,10 @@ func (client VirtualMachineExtensionImagesClient) GetResponder(resp *http.Respon } // ListTypes gets a list of virtual machine extension image types. -// -func (client VirtualMachineExtensionImagesClient) ListTypes(location string, publisherName string) (result ListVirtualMachineExtensionImage, err error) { - req, err := client.ListTypesPreparer(location, publisherName) +// Parameters: +// location - the name of a supported Azure region. +func (client VirtualMachineExtensionImagesClient) ListTypes(ctx context.Context, location string, publisherName string) (result ListVirtualMachineExtensionImage, err error) { + req, err := client.ListTypesPreparer(ctx, location, publisherName) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionImagesClient", "ListTypes", nil, "Failure preparing request") return @@ -132,14 +134,14 @@ func (client VirtualMachineExtensionImagesClient) ListTypes(location string, pub } // ListTypesPreparer prepares the ListTypes request. 
-func (client VirtualMachineExtensionImagesClient) ListTypesPreparer(location string, publisherName string) (*http.Request, error) { +func (client VirtualMachineExtensionImagesClient) ListTypesPreparer(ctx context.Context, location string, publisherName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "publisherName": autorest.Encode("path", publisherName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -149,13 +151,14 @@ func (client VirtualMachineExtensionImagesClient) ListTypesPreparer(location str autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/artifacttypes/vmextension/types", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListTypesSender sends the ListTypes request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineExtensionImagesClient) ListTypesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListTypesResponder handles the response to the ListTypes request. The method always @@ -172,10 +175,11 @@ func (client VirtualMachineExtensionImagesClient) ListTypesResponder(resp *http. } // ListVersions gets a list of virtual machine extension image versions. -// -// filter is the filter to apply on the operation. -func (client VirtualMachineExtensionImagesClient) ListVersions(location string, publisherName string, typeParameter string, filter string, top *int32, orderby string) (result ListVirtualMachineExtensionImage, err error) { - req, err := client.ListVersionsPreparer(location, publisherName, typeParameter, filter, top, orderby) +// Parameters: +// location - the name of a supported Azure region. +// filter - the filter to apply on the operation. +func (client VirtualMachineExtensionImagesClient) ListVersions(ctx context.Context, location string, publisherName string, typeParameter string, filter string, top *int32, orderby string) (result ListVirtualMachineExtensionImage, err error) { + req, err := client.ListVersionsPreparer(ctx, location, publisherName, typeParameter, filter, top, orderby) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionImagesClient", "ListVersions", nil, "Failure preparing request") return @@ -197,7 +201,7 @@ func (client VirtualMachineExtensionImagesClient) ListVersions(location string, } // ListVersionsPreparer prepares the ListVersions request. 
-func (client VirtualMachineExtensionImagesClient) ListVersionsPreparer(location string, publisherName string, typeParameter string, filter string, top *int32, orderby string) (*http.Request, error) { +func (client VirtualMachineExtensionImagesClient) ListVersionsPreparer(ctx context.Context, location string, publisherName string, typeParameter string, filter string, top *int32, orderby string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "publisherName": autorest.Encode("path", publisherName), @@ -205,7 +209,7 @@ func (client VirtualMachineExtensionImagesClient) ListVersionsPreparer(location "type": autorest.Encode("path", typeParameter), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -224,13 +228,14 @@ func (client VirtualMachineExtensionImagesClient) ListVersionsPreparer(location autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/artifacttypes/vmextension/types/{type}/versions", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListVersionsSender sends the ListVersions request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineExtensionImagesClient) ListVersionsSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListVersionsResponder handles the response to the ListVersions request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineextensions.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineextensions.go new file mode 100644 index 000000000..8c5d964fe --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineextensions.go @@ -0,0 +1,338 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// VirtualMachineExtensionsClient is the compute Client +type VirtualMachineExtensionsClient struct { + BaseClient +} + +// NewVirtualMachineExtensionsClient creates an instance of the VirtualMachineExtensionsClient client. 
+func NewVirtualMachineExtensionsClient(subscriptionID string) VirtualMachineExtensionsClient { + return NewVirtualMachineExtensionsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualMachineExtensionsClientWithBaseURI creates an instance of the VirtualMachineExtensionsClient client. +func NewVirtualMachineExtensionsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineExtensionsClient { + return VirtualMachineExtensionsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate the operation to create or update the extension. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine where the extension should be created or updated. +// VMExtensionName - the name of the virtual machine extension. +// extensionParameters - parameters supplied to the Create Virtual Machine Extension operation. +func (client VirtualMachineExtensionsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string, extensionParameters VirtualMachineExtension) (result VirtualMachineExtensionsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, VMName, VMExtensionName, extensionParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client VirtualMachineExtensionsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string, extensionParameters VirtualMachineExtension) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmExtensionName": autorest.Encode("path", VMExtensionName), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}", pathParameters), + autorest.WithJSON(extensionParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. 
+func (client VirtualMachineExtensionsClient) CreateOrUpdateSender(req *http.Request) (future VirtualMachineExtensionsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client VirtualMachineExtensionsClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualMachineExtension, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete the operation to delete the extension. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine where the extension should be deleted. +// VMExtensionName - the name of the virtual machine extension. +func (client VirtualMachineExtensionsClient) Delete(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string) (result VirtualMachineExtensionsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, VMName, VMExtensionName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client VirtualMachineExtensionsClient) DeletePreparer(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmExtensionName": autorest.Encode("path", VMExtensionName), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. 
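CreateOrUpdate and Delete now return azure.Future-backed types rather than result channels. A minimal sketch of driving that long-running-operation flow, assuming the WaitForCompletion and generated Result helpers of this go-autorest/SDK vintage (the function and parameter names are illustrative, not part of the vendored code):

func installVMExtension(ctx context.Context, client compute.VirtualMachineExtensionsClient,
	resourceGroup, vmName, extName string, ext compute.VirtualMachineExtension) error {

	future, err := client.CreateOrUpdate(ctx, resourceGroup, vmName, extName, ext)
	if err != nil {
		return err
	}
	// Block until the service reports the operation finished, or ctx is cancelled.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return err
	}
	// Surface the operation's final response and any terminal error.
	if _, err := future.Result(client); err != nil {
		return err
	}
	return nil
}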
+func (client VirtualMachineExtensionsClient) DeleteSender(req *http.Request) (future VirtualMachineExtensionsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client VirtualMachineExtensionsClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Get the operation to get the extension. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine containing the extension. +// VMExtensionName - the name of the virtual machine extension. +// expand - the expand expression to apply on the operation. +func (client VirtualMachineExtensionsClient) Get(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string, expand string) (result VirtualMachineExtension, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, VMName, VMExtensionName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client VirtualMachineExtensionsClient) GetPreparer(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmExtensionName": autorest.Encode("path", VMExtensionName), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. 
+func (client VirtualMachineExtensionsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client VirtualMachineExtensionsClient) GetResponder(resp *http.Response) (result VirtualMachineExtension, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Update the operation to update the extension. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine where the extension should be updated. +// VMExtensionName - the name of the virtual machine extension. +// extensionParameters - parameters supplied to the Update Virtual Machine Extension operation. +func (client VirtualMachineExtensionsClient) Update(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string, extensionParameters VirtualMachineExtensionUpdate) (result VirtualMachineExtensionsUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, VMName, VMExtensionName, extensionParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineExtensionsClient", "Update", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdatePreparer prepares the Update request. +func (client VirtualMachineExtensionsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, VMName string, VMExtensionName string, extensionParameters VirtualMachineExtensionUpdate) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmExtensionName": autorest.Encode("path", VMExtensionName), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/{vmExtensionName}", pathParameters), + autorest.WithJSON(extensionParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. 
+func (client VirtualMachineExtensionsClient) UpdateSender(req *http.Request) (future VirtualMachineExtensionsUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client VirtualMachineExtensionsClient) UpdateResponder(resp *http.Response) (result VirtualMachineExtension, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineimages.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineimages.go similarity index 74% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineimages.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineimages.go index 6c090568f..946a3ab49 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachineimages.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineimages.go @@ -14,40 +14,40 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// VirtualMachineImagesClient is the the Compute Management Client. +// VirtualMachineImagesClient is the compute Client type VirtualMachineImagesClient struct { - ManagementClient + BaseClient } -// NewVirtualMachineImagesClient creates an instance of the -// VirtualMachineImagesClient client. +// NewVirtualMachineImagesClient creates an instance of the VirtualMachineImagesClient client. func NewVirtualMachineImagesClient(subscriptionID string) VirtualMachineImagesClient { return NewVirtualMachineImagesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewVirtualMachineImagesClientWithBaseURI creates an instance of the -// VirtualMachineImagesClient client. +// NewVirtualMachineImagesClientWithBaseURI creates an instance of the VirtualMachineImagesClient client. func NewVirtualMachineImagesClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineImagesClient { return VirtualMachineImagesClient{NewWithBaseURI(baseURI, subscriptionID)} } // Get gets a virtual machine image. -// -// location is the name of a supported Azure region. publisherName is a valid -// image publisher. offer is a valid image publisher offer. skus is a valid -// image SKU. version is a valid image SKU version. 
-func (client VirtualMachineImagesClient) Get(location string, publisherName string, offer string, skus string, version string) (result VirtualMachineImage, err error) { - req, err := client.GetPreparer(location, publisherName, offer, skus, version) +// Parameters: +// location - the name of a supported Azure region. +// publisherName - a valid image publisher. +// offer - a valid image publisher offer. +// skus - a valid image SKU. +// version - a valid image SKU version. +func (client VirtualMachineImagesClient) Get(ctx context.Context, location string, publisherName string, offer string, skus string, version string) (result VirtualMachineImage, err error) { + req, err := client.GetPreparer(ctx, location, publisherName, offer, skus, version) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineImagesClient", "Get", nil, "Failure preparing request") return @@ -69,7 +69,7 @@ func (client VirtualMachineImagesClient) Get(location string, publisherName stri } // GetPreparer prepares the Get request. -func (client VirtualMachineImagesClient) GetPreparer(location string, publisherName string, offer string, skus string, version string) (*http.Request, error) { +func (client VirtualMachineImagesClient) GetPreparer(ctx context.Context, location string, publisherName string, offer string, skus string, version string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "offer": autorest.Encode("path", offer), @@ -79,7 +79,7 @@ func (client VirtualMachineImagesClient) GetPreparer(location string, publisherN "version": autorest.Encode("path", version), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -89,13 +89,14 @@ func (client VirtualMachineImagesClient) GetPreparer(location string, publisherN autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/artifacttypes/vmimage/offers/{offer}/skus/{skus}/versions/{version}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineImagesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -111,14 +112,15 @@ func (client VirtualMachineImagesClient) GetResponder(resp *http.Response) (resu return } -// List gets a list of all virtual machine image versions for the specified -// location, publisher, offer, and SKU. -// -// location is the name of a supported Azure region. publisherName is a valid -// image publisher. offer is a valid image publisher offer. skus is a valid -// image SKU. filter is the filter to apply on the operation. 
-func (client VirtualMachineImagesClient) List(location string, publisherName string, offer string, skus string, filter string, top *int32, orderby string) (result ListVirtualMachineImageResource, err error) { - req, err := client.ListPreparer(location, publisherName, offer, skus, filter, top, orderby) +// List gets a list of all virtual machine image versions for the specified location, publisher, offer, and SKU. +// Parameters: +// location - the name of a supported Azure region. +// publisherName - a valid image publisher. +// offer - a valid image publisher offer. +// skus - a valid image SKU. +// filter - the filter to apply on the operation. +func (client VirtualMachineImagesClient) List(ctx context.Context, location string, publisherName string, offer string, skus string, filter string, top *int32, orderby string) (result ListVirtualMachineImageResource, err error) { + req, err := client.ListPreparer(ctx, location, publisherName, offer, skus, filter, top, orderby) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineImagesClient", "List", nil, "Failure preparing request") return @@ -140,7 +142,7 @@ func (client VirtualMachineImagesClient) List(location string, publisherName str } // ListPreparer prepares the List request. -func (client VirtualMachineImagesClient) ListPreparer(location string, publisherName string, offer string, skus string, filter string, top *int32, orderby string) (*http.Request, error) { +func (client VirtualMachineImagesClient) ListPreparer(ctx context.Context, location string, publisherName string, offer string, skus string, filter string, top *int32, orderby string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "offer": autorest.Encode("path", offer), @@ -149,7 +151,7 @@ func (client VirtualMachineImagesClient) ListPreparer(location string, publisher "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -168,13 +170,14 @@ func (client VirtualMachineImagesClient) ListPreparer(location string, publisher autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/artifacttypes/vmimage/offers/{offer}/skus/{skus}/versions", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineImagesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -190,13 +193,12 @@ func (client VirtualMachineImagesClient) ListResponder(resp *http.Response) (res return } -// ListOffers gets a list of virtual machine image offers for the specified -// location and publisher. -// -// location is the name of a supported Azure region. publisherName is a valid -// image publisher. 
-func (client VirtualMachineImagesClient) ListOffers(location string, publisherName string) (result ListVirtualMachineImageResource, err error) { - req, err := client.ListOffersPreparer(location, publisherName) +// ListOffers gets a list of virtual machine image offers for the specified location and publisher. +// Parameters: +// location - the name of a supported Azure region. +// publisherName - a valid image publisher. +func (client VirtualMachineImagesClient) ListOffers(ctx context.Context, location string, publisherName string) (result ListVirtualMachineImageResource, err error) { + req, err := client.ListOffersPreparer(ctx, location, publisherName) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineImagesClient", "ListOffers", nil, "Failure preparing request") return @@ -218,14 +220,14 @@ func (client VirtualMachineImagesClient) ListOffers(location string, publisherNa } // ListOffersPreparer prepares the ListOffers request. -func (client VirtualMachineImagesClient) ListOffersPreparer(location string, publisherName string) (*http.Request, error) { +func (client VirtualMachineImagesClient) ListOffersPreparer(ctx context.Context, location string, publisherName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "publisherName": autorest.Encode("path", publisherName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -235,13 +237,14 @@ func (client VirtualMachineImagesClient) ListOffersPreparer(location string, pub autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/artifacttypes/vmimage/offers", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListOffersSender sends the ListOffers request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineImagesClient) ListOffersSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListOffersResponder handles the response to the ListOffers request. The method always @@ -257,12 +260,11 @@ func (client VirtualMachineImagesClient) ListOffersResponder(resp *http.Response return } -// ListPublishers gets a list of virtual machine image publishers for the -// specified Azure location. -// -// location is the name of a supported Azure region. -func (client VirtualMachineImagesClient) ListPublishers(location string) (result ListVirtualMachineImageResource, err error) { - req, err := client.ListPublishersPreparer(location) +// ListPublishers gets a list of virtual machine image publishers for the specified Azure location. +// Parameters: +// location - the name of a supported Azure region. 
+func (client VirtualMachineImagesClient) ListPublishers(ctx context.Context, location string) (result ListVirtualMachineImageResource, err error) { + req, err := client.ListPublishersPreparer(ctx, location) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineImagesClient", "ListPublishers", nil, "Failure preparing request") return @@ -284,13 +286,13 @@ func (client VirtualMachineImagesClient) ListPublishers(location string) (result } // ListPublishersPreparer prepares the ListPublishers request. -func (client VirtualMachineImagesClient) ListPublishersPreparer(location string) (*http.Request, error) { +func (client VirtualMachineImagesClient) ListPublishersPreparer(ctx context.Context, location string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -300,13 +302,14 @@ func (client VirtualMachineImagesClient) ListPublishersPreparer(location string) autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListPublishersSender sends the ListPublishers request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineImagesClient) ListPublishersSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListPublishersResponder handles the response to the ListPublishers request. The method always @@ -322,13 +325,13 @@ func (client VirtualMachineImagesClient) ListPublishersResponder(resp *http.Resp return } -// ListSkus gets a list of virtual machine image SKUs for the specified -// location, publisher, and offer. -// -// location is the name of a supported Azure region. publisherName is a valid -// image publisher. offer is a valid image publisher offer. -func (client VirtualMachineImagesClient) ListSkus(location string, publisherName string, offer string) (result ListVirtualMachineImageResource, err error) { - req, err := client.ListSkusPreparer(location, publisherName, offer) +// ListSkus gets a list of virtual machine image SKUs for the specified location, publisher, and offer. +// Parameters: +// location - the name of a supported Azure region. +// publisherName - a valid image publisher. +// offer - a valid image publisher offer. +func (client VirtualMachineImagesClient) ListSkus(ctx context.Context, location string, publisherName string, offer string) (result ListVirtualMachineImageResource, err error) { + req, err := client.ListSkusPreparer(ctx, location, publisherName, offer) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineImagesClient", "ListSkus", nil, "Failure preparing request") return @@ -350,7 +353,7 @@ func (client VirtualMachineImagesClient) ListSkus(location string, publisherName } // ListSkusPreparer prepares the ListSkus request. 
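Every *Preparer above now attaches the caller's context to the outgoing request via (&http.Request{}).WithContext(ctx), so image lookups can be bounded or cancelled. A brief illustrative fragment (the helper name and the 30-second timeout are assumptions; assumed imports are "context", "fmt", "time" and the compute package):

func listImageSkus(client compute.VirtualMachineImagesClient, location, publisher, offer string) error {
	// Cancelling ctx (here via a timeout) aborts the underlying HTTP call.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	skus, err := client.ListSkus(ctx, location, publisher, offer)
	if err != nil {
		return err
	}
	if skus.Value != nil {
		for _, sku := range *skus.Value {
			fmt.Println(*sku.Name)
		}
	}
	return nil
}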
-func (client VirtualMachineImagesClient) ListSkusPreparer(location string, publisherName string, offer string) (*http.Request, error) { +func (client VirtualMachineImagesClient) ListSkusPreparer(ctx context.Context, location string, publisherName string, offer string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "offer": autorest.Encode("path", offer), @@ -358,7 +361,7 @@ func (client VirtualMachineImagesClient) ListSkusPreparer(location string, publi "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -368,13 +371,14 @@ func (client VirtualMachineImagesClient) ListSkusPreparer(location string, publi autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/artifacttypes/vmimage/offers/{offer}/skus", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSkusSender sends the ListSkus request. The method will close the // http.Response Body if it receives an error. func (client VirtualMachineImagesClient) ListSkusSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListSkusResponder handles the response to the ListSkus request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineruncommands.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineruncommands.go new file mode 100644 index 000000000..26d2a3377 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachineruncommands.go @@ -0,0 +1,213 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// VirtualMachineRunCommandsClient is the compute Client +type VirtualMachineRunCommandsClient struct { + BaseClient +} + +// NewVirtualMachineRunCommandsClient creates an instance of the VirtualMachineRunCommandsClient client. 
+func NewVirtualMachineRunCommandsClient(subscriptionID string) VirtualMachineRunCommandsClient { + return NewVirtualMachineRunCommandsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualMachineRunCommandsClientWithBaseURI creates an instance of the VirtualMachineRunCommandsClient client. +func NewVirtualMachineRunCommandsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineRunCommandsClient { + return VirtualMachineRunCommandsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Get gets specific run command for a subscription in a location. +// Parameters: +// location - the location upon which run commands is queried. +// commandID - the command id. +func (client VirtualMachineRunCommandsClient) Get(ctx context.Context, location string, commandID string) (result RunCommandDocument, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: location, + Constraints: []validation.Constraint{{Target: "location", Name: validation.Pattern, Rule: `^[-\w\._]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachineRunCommandsClient", "Get", err.Error()) + } + + req, err := client.GetPreparer(ctx, location, commandID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client VirtualMachineRunCommandsClient) GetPreparer(ctx context.Context, location string, commandID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "commandId": autorest.Encode("path", commandID), + "location": autorest.Encode("path", location), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/runCommands/{commandId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineRunCommandsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. 
+func (client VirtualMachineRunCommandsClient) GetResponder(resp *http.Response) (result RunCommandDocument, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List lists all available run commands for a subscription in a location. +// Parameters: +// location - the location upon which run commands is queried. +func (client VirtualMachineRunCommandsClient) List(ctx context.Context, location string) (result RunCommandListResultPage, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: location, + Constraints: []validation.Constraint{{Target: "location", Name: validation.Pattern, Rule: `^[-\w\._]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachineRunCommandsClient", "List", err.Error()) + } + + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, location) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.rclr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "List", resp, "Failure sending request") + return + } + + result.rclr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client VirtualMachineRunCommandsClient) ListPreparer(ctx context.Context, location string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "location": autorest.Encode("path", location), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/runCommands", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineRunCommandsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client VirtualMachineRunCommandsClient) ListResponder(resp *http.Response) (result RunCommandListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client VirtualMachineRunCommandsClient) listNextResults(lastResults RunCommandListResult) (result RunCommandListResult, err error) { + req, err := lastResults.runCommandListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineRunCommandsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualMachineRunCommandsClient) ListComplete(ctx context.Context, location string) (result RunCommandListResultIterator, err error) { + result.page, err = client.List(ctx, location) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachines.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachines.go new file mode 100644 index 000000000..ab8234321 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachines.go @@ -0,0 +1,1472 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// VirtualMachinesClient is the compute Client +type VirtualMachinesClient struct { + BaseClient +} + +// NewVirtualMachinesClient creates an instance of the VirtualMachinesClient client. +func NewVirtualMachinesClient(subscriptionID string) VirtualMachinesClient { + return NewVirtualMachinesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualMachinesClientWithBaseURI creates an instance of the VirtualMachinesClient client. +func NewVirtualMachinesClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachinesClient { + return VirtualMachinesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Capture captures the VM by copying virtual hard disks of the VM and outputs a template that can be used to create +// similar VMs. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. 
+// parameters - parameters supplied to the Capture Virtual Machine operation. +func (client VirtualMachinesClient) Capture(ctx context.Context, resourceGroupName string, VMName string, parameters VirtualMachineCaptureParameters) (result VirtualMachinesCaptureFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.VhdPrefix", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.DestinationContainerName", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.OverwriteVhds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachinesClient", "Capture", err.Error()) + } + + req, err := client.CapturePreparer(ctx, resourceGroupName, VMName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Capture", nil, "Failure preparing request") + return + } + + result, err = client.CaptureSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Capture", result.Response(), "Failure sending request") + return + } + + return +} + +// CapturePreparer prepares the Capture request. +func (client VirtualMachinesClient) CapturePreparer(ctx context.Context, resourceGroupName string, VMName string, parameters VirtualMachineCaptureParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/capture", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CaptureSender sends the Capture request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) CaptureSender(req *http.Request) (future VirtualMachinesCaptureFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CaptureResponder handles the response to the Capture request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) CaptureResponder(resp *http.Response) (result VirtualMachineCaptureResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ConvertToManagedDisks converts virtual machine disks from blob-based to managed disks. 
Virtual machine must be +// stop-deallocated before invoking this operation. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) ConvertToManagedDisks(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesConvertToManagedDisksFuture, err error) { + req, err := client.ConvertToManagedDisksPreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ConvertToManagedDisks", nil, "Failure preparing request") + return + } + + result, err = client.ConvertToManagedDisksSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ConvertToManagedDisks", result.Response(), "Failure sending request") + return + } + + return +} + +// ConvertToManagedDisksPreparer prepares the ConvertToManagedDisks request. +func (client VirtualMachinesClient) ConvertToManagedDisksPreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/convertToManagedDisks", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ConvertToManagedDisksSender sends the ConvertToManagedDisks request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) ConvertToManagedDisksSender(req *http.Request) (future VirtualMachinesConvertToManagedDisksFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ConvertToManagedDisksResponder handles the response to the ConvertToManagedDisks request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) ConvertToManagedDisksResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// CreateOrUpdate the operation to create or update a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +// parameters - parameters supplied to the Create Virtual Machine operation. 
+func (client VirtualMachinesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, VMName string, parameters VirtualMachine) (result VirtualMachinesCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.VirtualMachineProperties", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey.SecretURL", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + }}, + {Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey.KeyURL", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.VirtualMachineProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + }}, + }}, + }}, + }}, + }}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachinesClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, VMName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
+func (client VirtualMachinesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, VMName string, parameters VirtualMachine) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) CreateOrUpdateSender(req *http.Request) (future VirtualMachinesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualMachine, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Deallocate shuts down the virtual machine and releases the compute resources. You are not billed for the compute +// resources that this virtual machine uses. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) Deallocate(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesDeallocateFuture, err error) { + req, err := client.DeallocatePreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Deallocate", nil, "Failure preparing request") + return + } + + result, err = client.DeallocateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Deallocate", result.Response(), "Failure sending request") + return + } + + return +} + +// DeallocatePreparer prepares the Deallocate request. 
+func (client VirtualMachinesClient) DeallocatePreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/deallocate", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeallocateSender sends the Deallocate request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) DeallocateSender(req *http.Request) (future VirtualMachinesDeallocateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeallocateResponder handles the response to the Deallocate request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) DeallocateResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete the operation to delete a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) Delete(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. 
+func (client VirtualMachinesClient) DeletePreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) DeleteSender(req *http.Request) (future VirtualMachinesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Generalize sets the state of the virtual machine to generalized. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) Generalize(ctx context.Context, resourceGroupName string, VMName string) (result OperationStatusResponse, err error) { + req, err := client.GeneralizePreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Generalize", nil, "Failure preparing request") + return + } + + resp, err := client.GeneralizeSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Generalize", resp, "Failure sending request") + return + } + + result, err = client.GeneralizeResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Generalize", resp, "Failure responding to request") + } + + return +} + +// GeneralizePreparer prepares the Generalize request. 
+func (client VirtualMachinesClient) GeneralizePreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/generalize", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GeneralizeSender sends the Generalize request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) GeneralizeSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GeneralizeResponder handles the response to the Generalize request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) GeneralizeResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Get retrieves information about the model view or the instance view of a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +// expand - the expand expression to apply on the operation. +func (client VirtualMachinesClient) Get(ctx context.Context, resourceGroupName string, VMName string, expand InstanceViewTypes) (result VirtualMachine, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, VMName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client VirtualMachinesClient) GetPreparer(ctx context.Context, resourceGroupName string, VMName string, expand InstanceViewTypes) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(string(expand)) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) GetResponder(resp *http.Response) (result VirtualMachine, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetExtensions the operation to get all extensions of a Virtual Machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine containing the extension. +// expand - the expand expression to apply on the operation. +func (client VirtualMachinesClient) GetExtensions(ctx context.Context, resourceGroupName string, VMName string, expand string) (result VirtualMachineExtensionsListResult, err error) { + req, err := client.GetExtensionsPreparer(ctx, resourceGroupName, VMName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "GetExtensions", nil, "Failure preparing request") + return + } + + resp, err := client.GetExtensionsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "GetExtensions", resp, "Failure sending request") + return + } + + result, err = client.GetExtensionsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "GetExtensions", resp, "Failure responding to request") + } + + return +} + +// GetExtensionsPreparer prepares the GetExtensions request. 
+func (client VirtualMachinesClient) GetExtensionsPreparer(ctx context.Context, resourceGroupName string, VMName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetExtensionsSender sends the GetExtensions request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) GetExtensionsSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetExtensionsResponder handles the response to the GetExtensions request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) GetExtensionsResponder(resp *http.Response) (result VirtualMachineExtensionsListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// InstanceView retrieves information about the run-time state of a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) InstanceView(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachineInstanceView, err error) { + req, err := client.InstanceViewPreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "InstanceView", nil, "Failure preparing request") + return + } + + resp, err := client.InstanceViewSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "InstanceView", resp, "Failure sending request") + return + } + + result, err = client.InstanceViewResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "InstanceView", resp, "Failure responding to request") + } + + return +} + +// InstanceViewPreparer prepares the InstanceView request. 
+func (client VirtualMachinesClient) InstanceViewPreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/instanceView", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// InstanceViewSender sends the InstanceView request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) InstanceViewSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// InstanceViewResponder handles the response to the InstanceView request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) InstanceViewResponder(resp *http.Response) (result VirtualMachineInstanceView, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List lists all of the virtual machines in the specified resource group. Use the nextLink property in the response to +// get the next page of virtual machines. +// Parameters: +// resourceGroupName - the name of the resource group. +func (client VirtualMachinesClient) List(ctx context.Context, resourceGroupName string) (result VirtualMachineListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.vmlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", resp, "Failure sending request") + return + } + + result.vmlr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client VirtualMachinesClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) ListResponder(resp *http.Response) (result VirtualMachineListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client VirtualMachinesClient) listNextResults(lastResults VirtualMachineListResult) (result VirtualMachineListResult, err error) { + req, err := lastResults.virtualMachineListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualMachinesClient) ListComplete(ctx context.Context, resourceGroupName string) (result VirtualMachineListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListAll lists all of the virtual machines in the specified subscription. Use the nextLink property in the response +// to get the next page of virtual machines. 
+func (client VirtualMachinesClient) ListAll(ctx context.Context) (result VirtualMachineListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.vmlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", resp, "Failure sending request") + return + } + + result.vmlr, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. +func (client VirtualMachinesClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachines", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) ListAllResponder(resp *http.Response) (result VirtualMachineListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAllNextResults retrieves the next set of results, if any. +func (client VirtualMachinesClient) listAllNextResults(lastResults VirtualMachineListResult) (result VirtualMachineListResult, err error) { + req, err := lastResults.virtualMachineListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "listAllNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "listAllNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client VirtualMachinesClient) ListAllComplete(ctx context.Context) (result VirtualMachineListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// ListAvailableSizes lists all available virtual machine sizes to which the specified virtual machine can be resized. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) ListAvailableSizes(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachineSizeListResult, err error) { + req, err := client.ListAvailableSizesPreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAvailableSizes", nil, "Failure preparing request") + return + } + + resp, err := client.ListAvailableSizesSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAvailableSizes", resp, "Failure sending request") + return + } + + result, err = client.ListAvailableSizesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "ListAvailableSizes", resp, "Failure responding to request") + } + + return +} + +// ListAvailableSizesPreparer prepares the ListAvailableSizes request. +func (client VirtualMachinesClient) ListAvailableSizesPreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/vmSizes", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAvailableSizesSender sends the ListAvailableSizes request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) ListAvailableSizesSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAvailableSizesResponder handles the response to the ListAvailableSizes request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) ListAvailableSizesResponder(resp *http.Response) (result VirtualMachineSizeListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// PerformMaintenance the operation to perform maintenance on a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. 
+func (client VirtualMachinesClient) PerformMaintenance(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesPerformMaintenanceFuture, err error) { + req, err := client.PerformMaintenancePreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "PerformMaintenance", nil, "Failure preparing request") + return + } + + result, err = client.PerformMaintenanceSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "PerformMaintenance", result.Response(), "Failure sending request") + return + } + + return +} + +// PerformMaintenancePreparer prepares the PerformMaintenance request. +func (client VirtualMachinesClient) PerformMaintenancePreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/performMaintenance", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// PerformMaintenanceSender sends the PerformMaintenance request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) PerformMaintenanceSender(req *http.Request) (future VirtualMachinesPerformMaintenanceFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// PerformMaintenanceResponder handles the response to the PerformMaintenance request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) PerformMaintenanceResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// PowerOff the operation to power off (stop) a virtual machine. The virtual machine can be restarted with the same +// provisioned resources. You are still charged for this virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. 
+func (client VirtualMachinesClient) PowerOff(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesPowerOffFuture, err error) { + req, err := client.PowerOffPreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "PowerOff", nil, "Failure preparing request") + return + } + + result, err = client.PowerOffSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "PowerOff", result.Response(), "Failure sending request") + return + } + + return +} + +// PowerOffPreparer prepares the PowerOff request. +func (client VirtualMachinesClient) PowerOffPreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/powerOff", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// PowerOffSender sends the PowerOff request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) PowerOffSender(req *http.Request) (future VirtualMachinesPowerOffFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// PowerOffResponder handles the response to the PowerOff request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) PowerOffResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Redeploy the operation to redeploy a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) Redeploy(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesRedeployFuture, err error) { + req, err := client.RedeployPreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Redeploy", nil, "Failure preparing request") + return + } + + result, err = client.RedeploySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Redeploy", result.Response(), "Failure sending request") + return + } + + return +} + +// RedeployPreparer prepares the Redeploy request. 
+func (client VirtualMachinesClient) RedeployPreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/redeploy", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RedeploySender sends the Redeploy request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) RedeploySender(req *http.Request) (future VirtualMachinesRedeployFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RedeployResponder handles the response to the Redeploy request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) RedeployResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Restart the operation to restart a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) Restart(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesRestartFuture, err error) { + req, err := client.RestartPreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Restart", nil, "Failure preparing request") + return + } + + result, err = client.RestartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Restart", result.Response(), "Failure sending request") + return + } + + return +} + +// RestartPreparer prepares the Restart request. 
+func (client VirtualMachinesClient) RestartPreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/restart", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RestartSender sends the Restart request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) RestartSender(req *http.Request) (future VirtualMachinesRestartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RestartResponder handles the response to the Restart request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) RestartResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// RunCommand run command on the VM. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +// parameters - parameters supplied to the Run command operation. +func (client VirtualMachinesClient) RunCommand(ctx context.Context, resourceGroupName string, VMName string, parameters RunCommandInput) (result VirtualMachinesRunCommandFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.CommandID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachinesClient", "RunCommand", err.Error()) + } + + req, err := client.RunCommandPreparer(ctx, resourceGroupName, VMName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "RunCommand", nil, "Failure preparing request") + return + } + + result, err = client.RunCommandSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "RunCommand", result.Response(), "Failure sending request") + return + } + + return +} + +// RunCommandPreparer prepares the RunCommand request. 
+func (client VirtualMachinesClient) RunCommandPreparer(ctx context.Context, resourceGroupName string, VMName string, parameters RunCommandInput) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/runCommand", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RunCommandSender sends the RunCommand request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) RunCommandSender(req *http.Request) (future VirtualMachinesRunCommandFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RunCommandResponder handles the response to the RunCommand request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) RunCommandResponder(resp *http.Response) (result RunCommandResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Start the operation to start a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +func (client VirtualMachinesClient) Start(ctx context.Context, resourceGroupName string, VMName string) (result VirtualMachinesStartFuture, err error) { + req, err := client.StartPreparer(ctx, resourceGroupName, VMName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Start", nil, "Failure preparing request") + return + } + + result, err = client.StartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Start", result.Response(), "Failure sending request") + return + } + + return +} + +// StartPreparer prepares the Start request. 
+func (client VirtualMachinesClient) StartPreparer(ctx context.Context, resourceGroupName string, VMName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/start", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StartSender sends the Start request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) StartSender(req *http.Request) (future VirtualMachinesStartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StartResponder handles the response to the Start request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) StartResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Update the operation to update a virtual machine. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMName - the name of the virtual machine. +// parameters - parameters supplied to the Update Virtual Machine operation. +func (client VirtualMachinesClient) Update(ctx context.Context, resourceGroupName string, VMName string, parameters VirtualMachineUpdate) (result VirtualMachinesUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, VMName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachinesClient", "Update", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdatePreparer prepares the Update request. 
+func (client VirtualMachinesClient) UpdatePreparer(ctx context.Context, resourceGroupName string, VMName string, parameters VirtualMachineUpdate) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmName": autorest.Encode("path", VMName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachinesClient) UpdateSender(req *http.Request) (future VirtualMachinesUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client VirtualMachinesClient) UpdateResponder(resp *http.Response) (result VirtualMachine, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetextensions.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetextensions.go new file mode 100644 index 000000000..4ebd68c00 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetextensions.go @@ -0,0 +1,358 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+ +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// VirtualMachineScaleSetExtensionsClient is the compute Client +type VirtualMachineScaleSetExtensionsClient struct { + BaseClient +} + +// NewVirtualMachineScaleSetExtensionsClient creates an instance of the VirtualMachineScaleSetExtensionsClient client. +func NewVirtualMachineScaleSetExtensionsClient(subscriptionID string) VirtualMachineScaleSetExtensionsClient { + return NewVirtualMachineScaleSetExtensionsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualMachineScaleSetExtensionsClientWithBaseURI creates an instance of the +// VirtualMachineScaleSetExtensionsClient client. +func NewVirtualMachineScaleSetExtensionsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineScaleSetExtensionsClient { + return VirtualMachineScaleSetExtensionsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate the operation to create or update an extension. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set where the extension should be create or updated. +// vmssExtensionName - the name of the VM scale set extension. +// extensionParameters - parameters supplied to the Create VM scale set Extension operation. +func (client VirtualMachineScaleSetExtensionsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, VMScaleSetName string, vmssExtensionName string, extensionParameters VirtualMachineScaleSetExtension) (result VirtualMachineScaleSetExtensionsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, VMScaleSetName, vmssExtensionName, extensionParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client VirtualMachineScaleSetExtensionsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, vmssExtensionName string, extensionParameters VirtualMachineScaleSetExtension) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + "vmssExtensionName": autorest.Encode("path", vmssExtensionName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/extensions/{vmssExtensionName}", pathParameters), + autorest.WithJSON(extensionParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. 
The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetExtensionsClient) CreateOrUpdateSender(req *http.Request) (future VirtualMachineScaleSetExtensionsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetExtensionsClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualMachineScaleSetExtension, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete the operation to delete the extension. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set where the extension should be deleted. +// vmssExtensionName - the name of the VM scale set extension. +func (client VirtualMachineScaleSetExtensionsClient) Delete(ctx context.Context, resourceGroupName string, VMScaleSetName string, vmssExtensionName string) (result VirtualMachineScaleSetExtensionsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, VMScaleSetName, vmssExtensionName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client VirtualMachineScaleSetExtensionsClient) DeletePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, vmssExtensionName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + "vmssExtensionName": autorest.Encode("path", vmssExtensionName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/extensions/{vmssExtensionName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. 
+func (client VirtualMachineScaleSetExtensionsClient) DeleteSender(req *http.Request) (future VirtualMachineScaleSetExtensionsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetExtensionsClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Get the operation to get the extension. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set containing the extension. +// vmssExtensionName - the name of the VM scale set extension. +// expand - the expand expression to apply on the operation. +func (client VirtualMachineScaleSetExtensionsClient) Get(ctx context.Context, resourceGroupName string, VMScaleSetName string, vmssExtensionName string, expand string) (result VirtualMachineScaleSetExtension, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, VMScaleSetName, vmssExtensionName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client VirtualMachineScaleSetExtensionsClient) GetPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, vmssExtensionName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + "vmssExtensionName": autorest.Encode("path", vmssExtensionName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/extensions/{vmssExtensionName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetExtensionsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetExtensionsClient) GetResponder(resp *http.Response) (result VirtualMachineScaleSetExtension, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets a list of all extensions in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set containing the extension. +func (client VirtualMachineScaleSetExtensionsClient) List(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetExtensionListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.vmsselr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "List", resp, "Failure sending request") + return + } + + result.vmsselr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client VirtualMachineScaleSetExtensionsClient) ListPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/extensions", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetExtensionsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetExtensionsClient) ListResponder(resp *http.Response) (result VirtualMachineScaleSetExtensionListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client VirtualMachineScaleSetExtensionsClient) listNextResults(lastResults VirtualMachineScaleSetExtensionListResult) (result VirtualMachineScaleSetExtensionListResult, err error) { + req, err := lastResults.virtualMachineScaleSetExtensionListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetExtensionsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client VirtualMachineScaleSetExtensionsClient) ListComplete(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetExtensionListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, VMScaleSetName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetrollingupgrades.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetrollingupgrades.go new file mode 100644 index 000000000..aa23ddb93 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetrollingupgrades.go @@ -0,0 +1,252 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// VirtualMachineScaleSetRollingUpgradesClient is the compute Client +type VirtualMachineScaleSetRollingUpgradesClient struct { + BaseClient +} + +// NewVirtualMachineScaleSetRollingUpgradesClient creates an instance of the +// VirtualMachineScaleSetRollingUpgradesClient client. +func NewVirtualMachineScaleSetRollingUpgradesClient(subscriptionID string) VirtualMachineScaleSetRollingUpgradesClient { + return NewVirtualMachineScaleSetRollingUpgradesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualMachineScaleSetRollingUpgradesClientWithBaseURI creates an instance of the +// VirtualMachineScaleSetRollingUpgradesClient client. +func NewVirtualMachineScaleSetRollingUpgradesClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineScaleSetRollingUpgradesClient { + return VirtualMachineScaleSetRollingUpgradesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Cancel cancels the current virtual machine scale set rolling upgrade. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +func (client VirtualMachineScaleSetRollingUpgradesClient) Cancel(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetRollingUpgradesCancelFuture, err error) { + req, err := client.CancelPreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesClient", "Cancel", nil, "Failure preparing request") + return + } + + result, err = client.CancelSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesClient", "Cancel", result.Response(), "Failure sending request") + return + } + + return +} + +// CancelPreparer prepares the Cancel request. 
+func (client VirtualMachineScaleSetRollingUpgradesClient) CancelPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/rollingUpgrades/cancel", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CancelSender sends the Cancel request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetRollingUpgradesClient) CancelSender(req *http.Request) (future VirtualMachineScaleSetRollingUpgradesCancelFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CancelResponder handles the response to the Cancel request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetRollingUpgradesClient) CancelResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetLatest gets the status of the latest virtual machine scale set rolling upgrade. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +func (client VirtualMachineScaleSetRollingUpgradesClient) GetLatest(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result RollingUpgradeStatusInfo, err error) { + req, err := client.GetLatestPreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesClient", "GetLatest", nil, "Failure preparing request") + return + } + + resp, err := client.GetLatestSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesClient", "GetLatest", resp, "Failure sending request") + return + } + + result, err = client.GetLatestResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesClient", "GetLatest", resp, "Failure responding to request") + } + + return +} + +// GetLatestPreparer prepares the GetLatest request. 
+func (client VirtualMachineScaleSetRollingUpgradesClient) GetLatestPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/rollingUpgrades/latest", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetLatestSender sends the GetLatest request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetRollingUpgradesClient) GetLatestSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetLatestResponder handles the response to the GetLatest request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetRollingUpgradesClient) GetLatestResponder(resp *http.Response) (result RollingUpgradeStatusInfo, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// StartOSUpgrade starts a rolling upgrade to move all virtual machine scale set instances to the latest available +// Platform Image OS version. Instances which are already running the latest available OS version are not affected. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +func (client VirtualMachineScaleSetRollingUpgradesClient) StartOSUpgrade(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture, err error) { + req, err := client.StartOSUpgradePreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesClient", "StartOSUpgrade", nil, "Failure preparing request") + return + } + + result, err = client.StartOSUpgradeSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetRollingUpgradesClient", "StartOSUpgrade", result.Response(), "Failure sending request") + return + } + + return +} + +// StartOSUpgradePreparer prepares the StartOSUpgrade request. 
+func (client VirtualMachineScaleSetRollingUpgradesClient) StartOSUpgradePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/osRollingUpgrade", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StartOSUpgradeSender sends the StartOSUpgrade request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetRollingUpgradesClient) StartOSUpgradeSender(req *http.Request) (future VirtualMachineScaleSetRollingUpgradesStartOSUpgradeFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StartOSUpgradeResponder handles the response to the StartOSUpgrade request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetRollingUpgradesClient) StartOSUpgradeResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesets.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesets.go new file mode 100644 index 000000000..dc742f64f --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesets.go @@ -0,0 +1,1642 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
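// ----------------------------------------------------------------------------
// Editor's note (illustrative only; not part of the vendored, generated file).
// The scale-set client below follows the same pattern: long-running operations
// (CreateOrUpdate, Deallocate, Delete, DeleteInstances, ...) return future
// types for the caller to await, while List/ListAll return result pages and
// the *Complete variants wrap them in iterators. CreateOrUpdate also performs
// client-side validation of the rolling-upgrade policy (for example,
// MaxBatchInstancePercent and MaxUnhealthyInstancePercent must be between 5
// and 100, and MaxUnhealthyUpgradedInstancePercent between 0 and 100). A
// hedged usage sketch, with placeholder names and an authorizer assumed to be
// supplied by the caller:
//
//    import (
//        "context"
//
//        "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute"
//        "github.com/Azure/go-autorest/autorest"
//    )
//
//    func deleteScaleSet(ctx context.Context, authorizer autorest.Authorizer) error {
//        client := compute.NewVirtualMachineScaleSetsClient("<subscription-id>")
//        client.Authorizer = authorizer
//
//        future, err := client.Delete(ctx, "example-rg", "example-vmss")
//        if err != nil {
//            return err // request could not be prepared or sent
//        }
//        _ = future // normally awaited by the caller, then decoded with DeleteResponder
//        return nil
//    }
// ----------------------------------------------------------------------------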
+ +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// VirtualMachineScaleSetsClient is the compute Client +type VirtualMachineScaleSetsClient struct { + BaseClient +} + +// NewVirtualMachineScaleSetsClient creates an instance of the VirtualMachineScaleSetsClient client. +func NewVirtualMachineScaleSetsClient(subscriptionID string) VirtualMachineScaleSetsClient { + return NewVirtualMachineScaleSetsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualMachineScaleSetsClientWithBaseURI creates an instance of the VirtualMachineScaleSetsClient client. +func NewVirtualMachineScaleSetsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineScaleSetsClient { + return VirtualMachineScaleSetsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate create or update a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set to create or update. +// parameters - the scale set object. +func (client VirtualMachineScaleSetsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, VMScaleSetName string, parameters VirtualMachineScaleSet) (result VirtualMachineScaleSetsCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetProperties", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxBatchInstancePercent", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxBatchInstancePercent", Name: validation.InclusiveMaximum, Rule: 100, Chain: nil}, + {Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxBatchInstancePercent", Name: validation.InclusiveMinimum, Rule: 5, Chain: nil}, + }}, + {Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxUnhealthyInstancePercent", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxUnhealthyInstancePercent", Name: validation.InclusiveMaximum, Rule: 100, Chain: nil}, + {Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxUnhealthyInstancePercent", Name: validation.InclusiveMinimum, Rule: 5, Chain: nil}, + }}, + {Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxUnhealthyUpgradedInstancePercent", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxUnhealthyUpgradedInstancePercent", Name: validation.InclusiveMaximum, Rule: 100, Chain: nil}, + {Target: "parameters.VirtualMachineScaleSetProperties.UpgradePolicy.RollingUpgradePolicy.MaxUnhealthyUpgradedInstancePercent", Name: 
validation.InclusiveMinimum, Rule: 0, Chain: nil}, + }}, + }}, + }}, + }}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachineScaleSetsClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, VMScaleSetName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client VirtualMachineScaleSetsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, parameters VirtualMachineScaleSet) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) CreateOrUpdateSender(req *http.Request) (future VirtualMachineScaleSetsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualMachineScaleSet, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Deallocate deallocates specific virtual machines in a VM scale set. Shuts down the virtual machines and releases the +// compute resources. You are not billed for the compute resources that this virtual machine scale set deallocates. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
+func (client VirtualMachineScaleSetsClient) Deallocate(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsDeallocateFuture, err error) { + req, err := client.DeallocatePreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Deallocate", nil, "Failure preparing request") + return + } + + result, err = client.DeallocateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Deallocate", result.Response(), "Failure sending request") + return + } + + return +} + +// DeallocatePreparer prepares the Deallocate request. +func (client VirtualMachineScaleSetsClient) DeallocatePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/deallocate", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeallocateSender sends the Deallocate request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) DeallocateSender(req *http.Request) (future VirtualMachineScaleSetsDeallocateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeallocateResponder handles the response to the Deallocate request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) DeallocateResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. 
+func (client VirtualMachineScaleSetsClient) Delete(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client VirtualMachineScaleSetsClient) DeletePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) DeleteSender(req *http.Request) (future VirtualMachineScaleSetsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// DeleteInstances deletes virtual machines in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
+func (client VirtualMachineScaleSetsClient) DeleteInstances(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs) (result VirtualMachineScaleSetsDeleteInstancesFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: VMInstanceIDs, + Constraints: []validation.Constraint{{Target: "VMInstanceIDs.InstanceIds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachineScaleSetsClient", "DeleteInstances", err.Error()) + } + + req, err := client.DeleteInstancesPreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "DeleteInstances", nil, "Failure preparing request") + return + } + + result, err = client.DeleteInstancesSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "DeleteInstances", result.Response(), "Failure sending request") + return + } + + return +} + +// DeleteInstancesPreparer prepares the DeleteInstances request. +func (client VirtualMachineScaleSetsClient) DeleteInstancesPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/delete", pathParameters), + autorest.WithJSON(VMInstanceIDs), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteInstancesSender sends the DeleteInstances request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) DeleteInstancesSender(req *http.Request) (future VirtualMachineScaleSetsDeleteInstancesFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteInstancesResponder handles the response to the DeleteInstances request. The method always +// closes the http.Response Body. 
+func (client VirtualMachineScaleSetsClient) DeleteInstancesResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ForceRecoveryServiceFabricPlatformUpdateDomainWalk manual platform update domain walk to update virtual machines in +// a service fabric virtual machine scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// platformUpdateDomain - the platform update domain for which a manual recovery walk is requested +func (client VirtualMachineScaleSetsClient) ForceRecoveryServiceFabricPlatformUpdateDomainWalk(ctx context.Context, resourceGroupName string, VMScaleSetName string, platformUpdateDomain int32) (result RecoveryWalkResponse, err error) { + req, err := client.ForceRecoveryServiceFabricPlatformUpdateDomainWalkPreparer(ctx, resourceGroupName, VMScaleSetName, platformUpdateDomain) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ForceRecoveryServiceFabricPlatformUpdateDomainWalk", nil, "Failure preparing request") + return + } + + resp, err := client.ForceRecoveryServiceFabricPlatformUpdateDomainWalkSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ForceRecoveryServiceFabricPlatformUpdateDomainWalk", resp, "Failure sending request") + return + } + + result, err = client.ForceRecoveryServiceFabricPlatformUpdateDomainWalkResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ForceRecoveryServiceFabricPlatformUpdateDomainWalk", resp, "Failure responding to request") + } + + return +} + +// ForceRecoveryServiceFabricPlatformUpdateDomainWalkPreparer prepares the ForceRecoveryServiceFabricPlatformUpdateDomainWalk request. +func (client VirtualMachineScaleSetsClient) ForceRecoveryServiceFabricPlatformUpdateDomainWalkPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, platformUpdateDomain int32) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + "platformUpdateDomain": autorest.Encode("query", platformUpdateDomain), + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/forceRecoveryServiceFabricPlatformUpdateDomainWalk", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ForceRecoveryServiceFabricPlatformUpdateDomainWalkSender sends the ForceRecoveryServiceFabricPlatformUpdateDomainWalk request. The method will close the +// http.Response Body if it receives an error. 
+func (client VirtualMachineScaleSetsClient) ForceRecoveryServiceFabricPlatformUpdateDomainWalkSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ForceRecoveryServiceFabricPlatformUpdateDomainWalkResponder handles the response to the ForceRecoveryServiceFabricPlatformUpdateDomainWalk request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) ForceRecoveryServiceFabricPlatformUpdateDomainWalkResponder(resp *http.Response) (result RecoveryWalkResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Get display information about a virtual machine scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +func (client VirtualMachineScaleSetsClient) Get(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSet, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client VirtualMachineScaleSetsClient) GetPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. 
+func (client VirtualMachineScaleSetsClient) GetResponder(resp *http.Response) (result VirtualMachineScaleSet, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetInstanceView gets the status of a VM scale set instance. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +func (client VirtualMachineScaleSetsClient) GetInstanceView(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetInstanceView, err error) { + req, err := client.GetInstanceViewPreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetInstanceView", nil, "Failure preparing request") + return + } + + resp, err := client.GetInstanceViewSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetInstanceView", resp, "Failure sending request") + return + } + + result, err = client.GetInstanceViewResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetInstanceView", resp, "Failure responding to request") + } + + return +} + +// GetInstanceViewPreparer prepares the GetInstanceView request. +func (client VirtualMachineScaleSetsClient) GetInstanceViewPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/instanceView", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetInstanceViewSender sends the GetInstanceView request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) GetInstanceViewSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetInstanceViewResponder handles the response to the GetInstanceView request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) GetInstanceViewResponder(resp *http.Response) (result VirtualMachineScaleSetInstanceView, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetOSUpgradeHistory gets list of OS upgrades on a VM scale set instance. +// Parameters: +// resourceGroupName - the name of the resource group. 
+// VMScaleSetName - the name of the VM scale set. +func (client VirtualMachineScaleSetsClient) GetOSUpgradeHistory(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetListOSUpgradeHistoryPage, err error) { + result.fn = client.getOSUpgradeHistoryNextResults + req, err := client.GetOSUpgradeHistoryPreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetOSUpgradeHistory", nil, "Failure preparing request") + return + } + + resp, err := client.GetOSUpgradeHistorySender(req) + if err != nil { + result.vmsslouh.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetOSUpgradeHistory", resp, "Failure sending request") + return + } + + result.vmsslouh, err = client.GetOSUpgradeHistoryResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "GetOSUpgradeHistory", resp, "Failure responding to request") + } + + return +} + +// GetOSUpgradeHistoryPreparer prepares the GetOSUpgradeHistory request. +func (client VirtualMachineScaleSetsClient) GetOSUpgradeHistoryPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/osUpgradeHistory", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetOSUpgradeHistorySender sends the GetOSUpgradeHistory request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) GetOSUpgradeHistorySender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetOSUpgradeHistoryResponder handles the response to the GetOSUpgradeHistory request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) GetOSUpgradeHistoryResponder(resp *http.Response) (result VirtualMachineScaleSetListOSUpgradeHistory, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// getOSUpgradeHistoryNextResults retrieves the next set of results, if any. 
+func (client VirtualMachineScaleSetsClient) getOSUpgradeHistoryNextResults(lastResults VirtualMachineScaleSetListOSUpgradeHistory) (result VirtualMachineScaleSetListOSUpgradeHistory, err error) { + req, err := lastResults.virtualMachineScaleSetListOSUpgradeHistoryPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "getOSUpgradeHistoryNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.GetOSUpgradeHistorySender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "getOSUpgradeHistoryNextResults", resp, "Failure sending next results request") + } + result, err = client.GetOSUpgradeHistoryResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "getOSUpgradeHistoryNextResults", resp, "Failure responding to next results request") + } + return +} + +// GetOSUpgradeHistoryComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualMachineScaleSetsClient) GetOSUpgradeHistoryComplete(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetListOSUpgradeHistoryIterator, err error) { + result.page, err = client.GetOSUpgradeHistory(ctx, resourceGroupName, VMScaleSetName) + return +} + +// List gets a list of all VM scale sets under a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +func (client VirtualMachineScaleSetsClient) List(ctx context.Context, resourceGroupName string) (result VirtualMachineScaleSetListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.vmsslr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", resp, "Failure sending request") + return + } + + result.vmsslr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client VirtualMachineScaleSetsClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. 
+func (client VirtualMachineScaleSetsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) ListResponder(resp *http.Response) (result VirtualMachineScaleSetListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client VirtualMachineScaleSetsClient) listNextResults(lastResults VirtualMachineScaleSetListResult) (result VirtualMachineScaleSetListResult, err error) { + req, err := lastResults.virtualMachineScaleSetListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualMachineScaleSetsClient) ListComplete(ctx context.Context, resourceGroupName string) (result VirtualMachineScaleSetListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListAll gets a list of all VM Scale Sets in the subscription, regardless of the associated resource group. Use +// nextLink property in the response to get the next page of VM Scale Sets. Do this till nextLink is null to fetch all +// the VM Scale Sets. +func (client VirtualMachineScaleSetsClient) ListAll(ctx context.Context) (result VirtualMachineScaleSetListWithLinkResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.vmsslwlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", resp, "Failure sending request") + return + } + + result.vmsslwlr, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. 
+func (client VirtualMachineScaleSetsClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/virtualMachineScaleSets", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) ListAllResponder(resp *http.Response) (result VirtualMachineScaleSetListWithLinkResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAllNextResults retrieves the next set of results, if any. +func (client VirtualMachineScaleSetsClient) listAllNextResults(lastResults VirtualMachineScaleSetListWithLinkResult) (result VirtualMachineScaleSetListWithLinkResult, err error) { + req, err := lastResults.virtualMachineScaleSetListWithLinkResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listAllNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listAllNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualMachineScaleSetsClient) ListAllComplete(ctx context.Context) (result VirtualMachineScaleSetListWithLinkResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// ListSkus gets a list of SKUs available for your VM scale set, including the minimum and maximum VM instances allowed +// for each SKU. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. 
+func (client VirtualMachineScaleSetsClient) ListSkus(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetListSkusResultPage, err error) { + result.fn = client.listSkusNextResults + req, err := client.ListSkusPreparer(ctx, resourceGroupName, VMScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", nil, "Failure preparing request") + return + } + + resp, err := client.ListSkusSender(req) + if err != nil { + result.vmsslsr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", resp, "Failure sending request") + return + } + + result.vmsslsr, err = client.ListSkusResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ListSkus", resp, "Failure responding to request") + } + + return +} + +// ListSkusPreparer prepares the ListSkus request. +func (client VirtualMachineScaleSetsClient) ListSkusPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/skus", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSkusSender sends the ListSkus request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) ListSkusSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListSkusResponder handles the response to the ListSkus request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) ListSkusResponder(resp *http.Response) (result VirtualMachineScaleSetListSkusResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listSkusNextResults retrieves the next set of results, if any. 
+func (client VirtualMachineScaleSetsClient) listSkusNextResults(lastResults VirtualMachineScaleSetListSkusResult) (result VirtualMachineScaleSetListSkusResult, err error) { + req, err := lastResults.virtualMachineScaleSetListSkusResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listSkusNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSkusSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listSkusNextResults", resp, "Failure sending next results request") + } + result, err = client.ListSkusResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "listSkusNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListSkusComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualMachineScaleSetsClient) ListSkusComplete(ctx context.Context, resourceGroupName string, VMScaleSetName string) (result VirtualMachineScaleSetListSkusResultIterator, err error) { + result.page, err = client.ListSkus(ctx, resourceGroupName, VMScaleSetName) + return +} + +// PerformMaintenance perform maintenance on one or more virtual machines in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. +func (client VirtualMachineScaleSetsClient) PerformMaintenance(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsPerformMaintenanceFuture, err error) { + req, err := client.PerformMaintenancePreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "PerformMaintenance", nil, "Failure preparing request") + return + } + + result, err = client.PerformMaintenanceSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "PerformMaintenance", result.Response(), "Failure sending request") + return + } + + return +} + +// PerformMaintenancePreparer prepares the PerformMaintenance request. 
+func (client VirtualMachineScaleSetsClient) PerformMaintenancePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/performMaintenance", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// PerformMaintenanceSender sends the PerformMaintenance request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) PerformMaintenanceSender(req *http.Request) (future VirtualMachineScaleSetsPerformMaintenanceFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// PerformMaintenanceResponder handles the response to the PerformMaintenance request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) PerformMaintenanceResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// PowerOff power off (stop) one or more virtual machines in a VM scale set. Note that resources are still attached and +// you are getting charged for the resources. Instead, use deallocate to release resources and avoid charges. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
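+//
+// Example (illustrative sketch, not authoritative: assumes a configured
+// VirtualMachineScaleSetsClient named client and a context.Context named ctx;
+// the resource group and scale set names are placeholders, and passing nil for
+// VMInstanceIDs simply omits the optional request body, as the preparer below
+// shows):
+//
+//    future, err := client.PowerOff(ctx, "my-resource-group", "my-vmss", nil)
+//    if err != nil {
+//        // handle the error from preparing or sending the request
+//    }
+//    _ = future // poll the returned future to track the long-running power-off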
+func (client VirtualMachineScaleSetsClient) PowerOff(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsPowerOffFuture, err error) { + req, err := client.PowerOffPreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "PowerOff", nil, "Failure preparing request") + return + } + + result, err = client.PowerOffSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "PowerOff", result.Response(), "Failure sending request") + return + } + + return +} + +// PowerOffPreparer prepares the PowerOff request. +func (client VirtualMachineScaleSetsClient) PowerOffPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/poweroff", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// PowerOffSender sends the PowerOff request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) PowerOffSender(req *http.Request) (future VirtualMachineScaleSetsPowerOffFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// PowerOffResponder handles the response to the PowerOff request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) PowerOffResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Redeploy redeploy one or more virtual machines in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
+func (client VirtualMachineScaleSetsClient) Redeploy(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsRedeployFuture, err error) { + req, err := client.RedeployPreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Redeploy", nil, "Failure preparing request") + return + } + + result, err = client.RedeploySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Redeploy", result.Response(), "Failure sending request") + return + } + + return +} + +// RedeployPreparer prepares the Redeploy request. +func (client VirtualMachineScaleSetsClient) RedeployPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/redeploy", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RedeploySender sends the Redeploy request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) RedeploySender(req *http.Request) (future VirtualMachineScaleSetsRedeployFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RedeployResponder handles the response to the Redeploy request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) RedeployResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Reimage reimages (upgrade the operating system) one or more virtual machines in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
+func (client VirtualMachineScaleSetsClient) Reimage(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsReimageFuture, err error) { + req, err := client.ReimagePreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Reimage", nil, "Failure preparing request") + return + } + + result, err = client.ReimageSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Reimage", result.Response(), "Failure sending request") + return + } + + return +} + +// ReimagePreparer prepares the Reimage request. +func (client VirtualMachineScaleSetsClient) ReimagePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/reimage", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ReimageSender sends the Reimage request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) ReimageSender(req *http.Request) (future VirtualMachineScaleSetsReimageFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ReimageResponder handles the response to the Reimage request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) ReimageResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ReimageAll reimages all the disks ( including data disks ) in the virtual machines in a VM scale set. This operation +// is only supported for managed disks. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
+func (client VirtualMachineScaleSetsClient) ReimageAll(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsReimageAllFuture, err error) { + req, err := client.ReimageAllPreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ReimageAll", nil, "Failure preparing request") + return + } + + result, err = client.ReimageAllSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "ReimageAll", result.Response(), "Failure sending request") + return + } + + return +} + +// ReimageAllPreparer prepares the ReimageAll request. +func (client VirtualMachineScaleSetsClient) ReimageAllPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/reimageall", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ReimageAllSender sends the ReimageAll request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) ReimageAllSender(req *http.Request) (future VirtualMachineScaleSetsReimageAllFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ReimageAllResponder handles the response to the ReimageAll request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) ReimageAllResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Restart restarts one or more virtual machines in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
+func (client VirtualMachineScaleSetsClient) Restart(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsRestartFuture, err error) { + req, err := client.RestartPreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Restart", nil, "Failure preparing request") + return + } + + result, err = client.RestartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Restart", result.Response(), "Failure sending request") + return + } + + return +} + +// RestartPreparer prepares the Restart request. +func (client VirtualMachineScaleSetsClient) RestartPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/restart", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RestartSender sends the Restart request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) RestartSender(req *http.Request) (future VirtualMachineScaleSetsRestartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RestartResponder handles the response to the Restart request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) RestartResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Start starts one or more virtual machines in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
+func (client VirtualMachineScaleSetsClient) Start(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (result VirtualMachineScaleSetsStartFuture, err error) { + req, err := client.StartPreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Start", nil, "Failure preparing request") + return + } + + result, err = client.StartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Start", result.Response(), "Failure sending request") + return + } + + return +} + +// StartPreparer prepares the Start request. +func (client VirtualMachineScaleSetsClient) StartPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs *VirtualMachineScaleSetVMInstanceIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/start", pathParameters), + autorest.WithQueryParameters(queryParameters)) + if VMInstanceIDs != nil { + preparer = autorest.DecoratePreparer(preparer, + autorest.WithJSON(VMInstanceIDs)) + } + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StartSender sends the Start request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) StartSender(req *http.Request) (future VirtualMachineScaleSetsStartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StartResponder handles the response to the Start request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) StartResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Update update a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set to create or update. +// parameters - the scale set object. 
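+//
+// Example (illustrative sketch, not authoritative: "my-resource-group" and
+// "my-vmss" are placeholders, and parameters is assumed to be a populated
+// VirtualMachineScaleSetUpdate value built by the caller):
+//
+//    var parameters VirtualMachineScaleSetUpdate // fill in the fields to change
+//    future, err := client.Update(ctx, "my-resource-group", "my-vmss", parameters)
+//    if err != nil {
+//        // handle the error from preparing or sending the PATCH request
+//    }
+//    _ = future // the returned future tracks the long-running update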
+func (client VirtualMachineScaleSetsClient) Update(ctx context.Context, resourceGroupName string, VMScaleSetName string, parameters VirtualMachineScaleSetUpdate) (result VirtualMachineScaleSetsUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, VMScaleSetName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "Update", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdatePreparer prepares the Update request. +func (client VirtualMachineScaleSetsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, parameters VirtualMachineScaleSetUpdate) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) UpdateSender(req *http.Request) (future VirtualMachineScaleSetsUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetsClient) UpdateResponder(resp *http.Response) (result VirtualMachineScaleSet, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// UpdateInstances upgrades one or more virtual machines to the latest SKU set in the VM scale set model. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// VMInstanceIDs - a list of virtual machine instance IDs from the VM scale set. 
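+//
+// Example (illustrative sketch, not authoritative: the instance IDs and
+// resource names are placeholders, and InstanceIds is assumed to be the
+// *[]string field generated for VirtualMachineScaleSetVMInstanceRequiredIDs
+// elsewhere in this package):
+//
+//    ids := []string{"0", "1"}
+//    future, err := client.UpdateInstances(ctx, "my-resource-group", "my-vmss",
+//        VirtualMachineScaleSetVMInstanceRequiredIDs{InstanceIds: &ids})
+//    if err != nil {
+//        // a nil InstanceIds value is rejected up front by the validation in UpdateInstances
+//    }
+//    _ = future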
+func (client VirtualMachineScaleSetsClient) UpdateInstances(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs) (result VirtualMachineScaleSetsUpdateInstancesFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: VMInstanceIDs, + Constraints: []validation.Constraint{{Target: "VMInstanceIDs.InstanceIds", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachineScaleSetsClient", "UpdateInstances", err.Error()) + } + + req, err := client.UpdateInstancesPreparer(ctx, resourceGroupName, VMScaleSetName, VMInstanceIDs) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "UpdateInstances", nil, "Failure preparing request") + return + } + + result, err = client.UpdateInstancesSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetsClient", "UpdateInstances", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateInstancesPreparer prepares the UpdateInstances request. +func (client VirtualMachineScaleSetsClient) UpdateInstancesPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, VMInstanceIDs VirtualMachineScaleSetVMInstanceRequiredIDs) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/manualupgrade", pathParameters), + autorest.WithJSON(VMInstanceIDs), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateInstancesSender sends the UpdateInstances request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetsClient) UpdateInstancesSender(req *http.Request) (future VirtualMachineScaleSetsUpdateInstancesFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateInstancesResponder handles the response to the UpdateInstances request. The method always +// closes the http.Response Body. 
+func (client VirtualMachineScaleSetsClient) UpdateInstancesResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetvms.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetvms.go new file mode 100644 index 000000000..e8ec06789 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinescalesetvms.go @@ -0,0 +1,1044 @@ +package compute + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// VirtualMachineScaleSetVMsClient is the compute Client +type VirtualMachineScaleSetVMsClient struct { + BaseClient +} + +// NewVirtualMachineScaleSetVMsClient creates an instance of the VirtualMachineScaleSetVMsClient client. +func NewVirtualMachineScaleSetVMsClient(subscriptionID string) VirtualMachineScaleSetVMsClient { + return NewVirtualMachineScaleSetVMsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualMachineScaleSetVMsClientWithBaseURI creates an instance of the VirtualMachineScaleSetVMsClient client. +func NewVirtualMachineScaleSetVMsClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineScaleSetVMsClient { + return VirtualMachineScaleSetVMsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Deallocate deallocates a specific virtual machine in a VM scale set. Shuts down the virtual machine and releases the +// compute resources it uses. You are not billed for the compute resources of this virtual machine once it is +// deallocated. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. 
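+//
+// Example (illustrative sketch, not authoritative: the subscription ID,
+// resource group, scale set name, and instance ID are placeholders, and an
+// Authorizer still has to be configured on the client's embedded
+// autorest.Client before any request is sent):
+//
+//    client := NewVirtualMachineScaleSetVMsClient("00000000-0000-0000-0000-000000000000")
+//    future, err := client.Deallocate(ctx, "my-resource-group", "my-vmss", "0")
+//    if err != nil {
+//        // handle the error from preparing or sending the request
+//    }
+//    _ = future // poll the future to wait for deallocation to finish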
+func (client VirtualMachineScaleSetVMsClient) Deallocate(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsDeallocateFuture, err error) { + req, err := client.DeallocatePreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Deallocate", nil, "Failure preparing request") + return + } + + result, err = client.DeallocateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Deallocate", result.Response(), "Failure sending request") + return + } + + return +} + +// DeallocatePreparer prepares the Deallocate request. +func (client VirtualMachineScaleSetVMsClient) DeallocatePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/deallocate", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeallocateSender sends the Deallocate request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) DeallocateSender(req *http.Request) (future VirtualMachineScaleSetVMsDeallocateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeallocateResponder handles the response to the Deallocate request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) DeallocateResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes a virtual machine from a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. 
+func (client VirtualMachineScaleSetVMsClient) Delete(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client VirtualMachineScaleSetVMsClient) DeletePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) DeleteSender(req *http.Request) (future VirtualMachineScaleSetVMsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) DeleteResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Get gets a virtual machine from a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. 
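+//
+// Example (illustrative sketch, not authoritative: the resource group, scale
+// set name, and instance ID are placeholders):
+//
+//    vm, err := client.Get(ctx, "my-resource-group", "my-vmss", "0")
+//    if err != nil {
+//        // handle the request or response error
+//    }
+//    _ = vm // the raw HTTP response is embedded in vm.Response by GetResponder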
+func (client VirtualMachineScaleSetVMsClient) Get(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVM, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client VirtualMachineScaleSetVMsClient) GetPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) GetResponder(resp *http.Response) (result VirtualMachineScaleSetVM, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetInstanceView gets the status of a virtual machine from a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. 
+func (client VirtualMachineScaleSetVMsClient) GetInstanceView(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMInstanceView, err error) { + req, err := client.GetInstanceViewPreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "GetInstanceView", nil, "Failure preparing request") + return + } + + resp, err := client.GetInstanceViewSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "GetInstanceView", resp, "Failure sending request") + return + } + + result, err = client.GetInstanceViewResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "GetInstanceView", resp, "Failure responding to request") + } + + return +} + +// GetInstanceViewPreparer prepares the GetInstanceView request. +func (client VirtualMachineScaleSetVMsClient) GetInstanceViewPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/instanceView", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetInstanceViewSender sends the GetInstanceView request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) GetInstanceViewSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetInstanceViewResponder handles the response to the GetInstanceView request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) GetInstanceViewResponder(resp *http.Response) (result VirtualMachineScaleSetVMInstanceView, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets a list of all virtual machines in a VM scale sets. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the VM scale set. +// filter - the filter to apply to the operation. +// selectParameter - the list parameters. +// expand - the expand expression to apply to the operation. 
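+//
+// Example (illustrative sketch, not authoritative: resource names are
+// placeholders, the filter, select, and expand arguments are left empty, and
+// the iterator is assumed to expose the NotDone/Next/Value methods generated
+// for it elsewhere in this package):
+//
+//    iter, err := client.ListComplete(ctx, "my-resource-group", "my-vmss", "", "", "")
+//    if err != nil {
+//        // handle the error from requesting the first page
+//    }
+//    for iter.NotDone() {
+//        _ = iter.Value() // one VirtualMachineScaleSetVM per iteration
+//        if err := iter.Next(); err != nil {
+//            break
+//        }
+//    }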
+func (client VirtualMachineScaleSetVMsClient) List(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, filter string, selectParameter string, expand string) (result VirtualMachineScaleSetVMListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, virtualMachineScaleSetName, filter, selectParameter, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.vmssvlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", resp, "Failure sending request") + return + } + + result.vmssvlr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client VirtualMachineScaleSetVMsClient) ListPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, filter string, selectParameter string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + if len(selectParameter) > 0 { + queryParameters["$select"] = autorest.Encode("query", selectParameter) + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) ListResponder(resp *http.Response) (result VirtualMachineScaleSetVMListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client VirtualMachineScaleSetVMsClient) listNextResults(lastResults VirtualMachineScaleSetVMListResult) (result VirtualMachineScaleSetVMListResult, err error) { + req, err := lastResults.virtualMachineScaleSetVMListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualMachineScaleSetVMsClient) ListComplete(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, filter string, selectParameter string, expand string) (result VirtualMachineScaleSetVMListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, virtualMachineScaleSetName, filter, selectParameter, expand) + return +} + +// PerformMaintenance performs maintenance on a virtual machine in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. +func (client VirtualMachineScaleSetVMsClient) PerformMaintenance(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsPerformMaintenanceFuture, err error) { + req, err := client.PerformMaintenancePreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "PerformMaintenance", nil, "Failure preparing request") + return + } + + result, err = client.PerformMaintenanceSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "PerformMaintenance", result.Response(), "Failure sending request") + return + } + + return +} + +// PerformMaintenancePreparer prepares the PerformMaintenance request. 
+func (client VirtualMachineScaleSetVMsClient) PerformMaintenancePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/performMaintenance", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// PerformMaintenanceSender sends the PerformMaintenance request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) PerformMaintenanceSender(req *http.Request) (future VirtualMachineScaleSetVMsPerformMaintenanceFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// PerformMaintenanceResponder handles the response to the PerformMaintenance request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) PerformMaintenanceResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// PowerOff power off (stop) a virtual machine in a VM scale set. Note that resources are still attached and you are +// getting charged for the resources. Instead, use deallocate to release resources and avoid charges. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. +func (client VirtualMachineScaleSetVMsClient) PowerOff(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsPowerOffFuture, err error) { + req, err := client.PowerOffPreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "PowerOff", nil, "Failure preparing request") + return + } + + result, err = client.PowerOffSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "PowerOff", result.Response(), "Failure sending request") + return + } + + return +} + +// PowerOffPreparer prepares the PowerOff request. 
+func (client VirtualMachineScaleSetVMsClient) PowerOffPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/poweroff", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// PowerOffSender sends the PowerOff request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) PowerOffSender(req *http.Request) (future VirtualMachineScaleSetVMsPowerOffFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// PowerOffResponder handles the response to the PowerOff request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) PowerOffResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Redeploy redeploys a virtual machine in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. +func (client VirtualMachineScaleSetVMsClient) Redeploy(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsRedeployFuture, err error) { + req, err := client.RedeployPreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Redeploy", nil, "Failure preparing request") + return + } + + result, err = client.RedeploySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Redeploy", result.Response(), "Failure sending request") + return + } + + return +} + +// RedeployPreparer prepares the Redeploy request. 
+func (client VirtualMachineScaleSetVMsClient) RedeployPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/redeploy", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RedeploySender sends the Redeploy request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) RedeploySender(req *http.Request) (future VirtualMachineScaleSetVMsRedeployFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RedeployResponder handles the response to the Redeploy request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) RedeployResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Reimage reimages (upgrade the operating system) a specific virtual machine in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. +func (client VirtualMachineScaleSetVMsClient) Reimage(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsReimageFuture, err error) { + req, err := client.ReimagePreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Reimage", nil, "Failure preparing request") + return + } + + result, err = client.ReimageSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Reimage", result.Response(), "Failure sending request") + return + } + + return +} + +// ReimagePreparer prepares the Reimage request. 
+func (client VirtualMachineScaleSetVMsClient) ReimagePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/reimage", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ReimageSender sends the Reimage request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) ReimageSender(req *http.Request) (future VirtualMachineScaleSetVMsReimageFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ReimageResponder handles the response to the Reimage request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) ReimageResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ReimageAll allows you to re-image all the disks ( including data disks ) in the a VM scale set instance. This +// operation is only supported for managed disks. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. +func (client VirtualMachineScaleSetVMsClient) ReimageAll(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsReimageAllFuture, err error) { + req, err := client.ReimageAllPreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "ReimageAll", nil, "Failure preparing request") + return + } + + result, err = client.ReimageAllSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "ReimageAll", result.Response(), "Failure sending request") + return + } + + return +} + +// ReimageAllPreparer prepares the ReimageAll request. 
+func (client VirtualMachineScaleSetVMsClient) ReimageAllPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/reimageall", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ReimageAllSender sends the ReimageAll request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) ReimageAllSender(req *http.Request) (future VirtualMachineScaleSetVMsReimageAllFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ReimageAllResponder handles the response to the ReimageAll request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) ReimageAllResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Restart restarts a virtual machine in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. +func (client VirtualMachineScaleSetVMsClient) Restart(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsRestartFuture, err error) { + req, err := client.RestartPreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Restart", nil, "Failure preparing request") + return + } + + result, err = client.RestartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Restart", result.Response(), "Failure sending request") + return + } + + return +} + +// RestartPreparer prepares the Restart request. 
+func (client VirtualMachineScaleSetVMsClient) RestartPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/restart", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// RestartSender sends the Restart request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) RestartSender(req *http.Request) (future VirtualMachineScaleSetVMsRestartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// RestartResponder handles the response to the Restart request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) RestartResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Start starts a virtual machine in a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set. +// instanceID - the instance ID of the virtual machine. +func (client VirtualMachineScaleSetVMsClient) Start(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (result VirtualMachineScaleSetVMsStartFuture, err error) { + req, err := client.StartPreparer(ctx, resourceGroupName, VMScaleSetName, instanceID) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Start", nil, "Failure preparing request") + return + } + + result, err = client.StartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Start", result.Response(), "Failure sending request") + return + } + + return +} + +// StartPreparer prepares the Start request. 
+func (client VirtualMachineScaleSetVMsClient) StartPreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}/start", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StartSender sends the Start request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) StartSender(req *http.Request) (future VirtualMachineScaleSetVMsStartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StartResponder handles the response to the Start request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) StartResponder(resp *http.Response) (result OperationStatusResponse, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Update updates a virtual machine of a VM scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// VMScaleSetName - the name of the VM scale set where the extension should be create or updated. +// instanceID - the instance ID of the virtual machine. +// parameters - parameters supplied to the Update Virtual Machine Scale Sets VM operation. 
+func (client VirtualMachineScaleSetVMsClient) Update(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string, parameters VirtualMachineScaleSetVM) (result VirtualMachineScaleSetVMsUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetVMProperties", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk.EncryptionSettings", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey.SecretURL", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk.EncryptionSettings.DiskEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + }}, + {Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey.KeyURL", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.VirtualMachineScaleSetVMProperties.StorageProfile.OsDisk.EncryptionSettings.KeyEncryptionKey.SourceVault", Name: validation.Null, Rule: true, Chain: nil}, + }}, + }}, + }}, + }}, + }}}}}); err != nil { + return result, validation.NewError("compute.VirtualMachineScaleSetVMsClient", "Update", err.Error()) + } + + req, err := client.UpdatePreparer(ctx, resourceGroupName, VMScaleSetName, instanceID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "compute.VirtualMachineScaleSetVMsClient", "Update", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdatePreparer prepares the Update request. 
+func (client VirtualMachineScaleSetVMsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, VMScaleSetName string, instanceID string, parameters VirtualMachineScaleSetVM) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "instanceId": autorest.Encode("path", instanceID), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "vmScaleSetName": autorest.Encode("path", VMScaleSetName), + } + + const APIVersion = "2017-12-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/virtualmachines/{instanceId}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualMachineScaleSetVMsClient) UpdateSender(req *http.Request) (future VirtualMachineScaleSetVMsUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. +func (client VirtualMachineScaleSetVMsClient) UpdateResponder(resp *http.Response) (result VirtualMachineScaleSetVM, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinesizes.go b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinesizes.go similarity index 78% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinesizes.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinesizes.go index c76b203e7..49f81e550 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/compute/virtualmachinesizes.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute/virtualmachinesizes.go @@ -14,46 +14,43 @@ package compute // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// VirtualMachineSizesClient is the the Compute Management Client. +// VirtualMachineSizesClient is the compute Client type VirtualMachineSizesClient struct { - ManagementClient + BaseClient } -// NewVirtualMachineSizesClient creates an instance of the -// VirtualMachineSizesClient client. +// NewVirtualMachineSizesClient creates an instance of the VirtualMachineSizesClient client. func NewVirtualMachineSizesClient(subscriptionID string) VirtualMachineSizesClient { return NewVirtualMachineSizesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewVirtualMachineSizesClientWithBaseURI creates an instance of the -// VirtualMachineSizesClient client. +// NewVirtualMachineSizesClientWithBaseURI creates an instance of the VirtualMachineSizesClient client. func NewVirtualMachineSizesClientWithBaseURI(baseURI string, subscriptionID string) VirtualMachineSizesClient { return VirtualMachineSizesClient{NewWithBaseURI(baseURI, subscriptionID)} } -// List lists all available virtual machine sizes for a subscription in a -// location. -// -// location is the location upon which virtual-machine-sizes is queried. -func (client VirtualMachineSizesClient) List(location string) (result VirtualMachineSizeListResult, err error) { +// List lists all available virtual machine sizes for a subscription in a location. +// Parameters: +// location - the location upon which virtual-machine-sizes is queried. +func (client VirtualMachineSizesClient) List(ctx context.Context, location string) (result VirtualMachineSizeListResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: location, Constraints: []validation.Constraint{{Target: "location", Name: validation.Pattern, Rule: `^[-\w\._]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "compute.VirtualMachineSizesClient", "List") + return result, validation.NewError("compute.VirtualMachineSizesClient", "List", err.Error()) } - req, err := client.ListPreparer(location) + req, err := client.ListPreparer(ctx, location) if err != nil { err = autorest.NewErrorWithError(err, "compute.VirtualMachineSizesClient", "List", nil, "Failure preparing request") return @@ -75,13 +72,13 @@ func (client VirtualMachineSizesClient) List(location string) (result VirtualMac } // ListPreparer prepares the List request. -func (client VirtualMachineSizesClient) ListPreparer(location string) (*http.Request, error) { +func (client VirtualMachineSizesClient) ListPreparer(ctx context.Context, location string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-04-30-preview" + const APIVersion = "2017-12-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -91,13 +88,14 @@ func (client VirtualMachineSizesClient) ListPreparer(location string) (*http.Req autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/vmSizes", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. 
The method will close the // http.Response Body if it receives an error. func (client VirtualMachineSizesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/applicationgateways.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/applicationgateways.go new file mode 100644 index 000000000..bee104bec --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/applicationgateways.go @@ -0,0 +1,1019 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// ApplicationGatewaysClient is the network Client +type ApplicationGatewaysClient struct { + BaseClient +} + +// NewApplicationGatewaysClient creates an instance of the ApplicationGatewaysClient client. +func NewApplicationGatewaysClient(subscriptionID string) ApplicationGatewaysClient { + return NewApplicationGatewaysClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewApplicationGatewaysClientWithBaseURI creates an instance of the ApplicationGatewaysClient client. +func NewApplicationGatewaysClientWithBaseURI(baseURI string, subscriptionID string) ApplicationGatewaysClient { + return ApplicationGatewaysClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// BackendHealth gets the backend health of the specified application gateway in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationGatewayName - the name of the application gateway. +// expand - expands BackendAddressPool and BackendHttpSettings referenced in backend health. 
+func (client ApplicationGatewaysClient) BackendHealth(ctx context.Context, resourceGroupName string, applicationGatewayName string, expand string) (result ApplicationGatewaysBackendHealthFuture, err error) { + req, err := client.BackendHealthPreparer(ctx, resourceGroupName, applicationGatewayName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "BackendHealth", nil, "Failure preparing request") + return + } + + result, err = client.BackendHealthSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "BackendHealth", result.Response(), "Failure sending request") + return + } + + return +} + +// BackendHealthPreparer prepares the BackendHealth request. +func (client ApplicationGatewaysClient) BackendHealthPreparer(ctx context.Context, resourceGroupName string, applicationGatewayName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationGatewayName": autorest.Encode("path", applicationGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/backendhealth", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// BackendHealthSender sends the BackendHealth request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) BackendHealthSender(req *http.Request) (future ApplicationGatewaysBackendHealthFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// BackendHealthResponder handles the response to the BackendHealth request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) BackendHealthResponder(resp *http.Response) (result ApplicationGatewayBackendHealth, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// CreateOrUpdate creates or updates the specified application gateway. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationGatewayName - the name of the application gateway. +// parameters - parameters supplied to the create or update application gateway operation. 
+func (client ApplicationGatewaysClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, applicationGatewayName string, parameters ApplicationGateway) (result ApplicationGatewaysCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.ApplicationGatewayPropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.Enabled", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.RuleSetType", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.RuleSetVersion", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.MaxRequestBodySize", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.MaxRequestBodySize", Name: validation.InclusiveMaximum, Rule: 128, Chain: nil}, + {Target: "parameters.ApplicationGatewayPropertiesFormat.WebApplicationFirewallConfiguration.MaxRequestBodySize", Name: validation.InclusiveMinimum, Rule: 8, Chain: nil}, + }}, + }}, + }}}}}); err != nil { + return result, validation.NewError("network.ApplicationGatewaysClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, applicationGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client ApplicationGatewaysClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, applicationGatewayName string, parameters ApplicationGateway) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationGatewayName": autorest.Encode("path", applicationGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. 
+func (client ApplicationGatewaysClient) CreateOrUpdateSender(req *http.Request) (future ApplicationGatewaysCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) CreateOrUpdateResponder(resp *http.Response) (result ApplicationGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified application gateway. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationGatewayName - the name of the application gateway. +func (client ApplicationGatewaysClient) Delete(ctx context.Context, resourceGroupName string, applicationGatewayName string) (result ApplicationGatewaysDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, applicationGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client ApplicationGatewaysClient) DeletePreparer(ctx context.Context, resourceGroupName string, applicationGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationGatewayName": autorest.Encode("path", applicationGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) DeleteSender(req *http.Request) (future ApplicationGatewaysDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. 
The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets the specified application gateway. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationGatewayName - the name of the application gateway. +func (client ApplicationGatewaysClient) Get(ctx context.Context, resourceGroupName string, applicationGatewayName string) (result ApplicationGateway, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, applicationGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client ApplicationGatewaysClient) GetPreparer(ctx context.Context, resourceGroupName string, applicationGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationGatewayName": autorest.Encode("path", applicationGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) GetResponder(resp *http.Response) (result ApplicationGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetSslPredefinedPolicy gets Ssl predefined policy with the specified policy name. +// Parameters: +// predefinedPolicyName - name of Ssl predefined policy. 
+func (client ApplicationGatewaysClient) GetSslPredefinedPolicy(ctx context.Context, predefinedPolicyName string) (result ApplicationGatewaySslPredefinedPolicy, err error) { + req, err := client.GetSslPredefinedPolicyPreparer(ctx, predefinedPolicyName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "GetSslPredefinedPolicy", nil, "Failure preparing request") + return + } + + resp, err := client.GetSslPredefinedPolicySender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "GetSslPredefinedPolicy", resp, "Failure sending request") + return + } + + result, err = client.GetSslPredefinedPolicyResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "GetSslPredefinedPolicy", resp, "Failure responding to request") + } + + return +} + +// GetSslPredefinedPolicyPreparer prepares the GetSslPredefinedPolicy request. +func (client ApplicationGatewaysClient) GetSslPredefinedPolicyPreparer(ctx context.Context, predefinedPolicyName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "predefinedPolicyName": autorest.Encode("path", predefinedPolicyName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationGatewayAvailableSslOptions/default/predefinedPolicies/{predefinedPolicyName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSslPredefinedPolicySender sends the GetSslPredefinedPolicy request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) GetSslPredefinedPolicySender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetSslPredefinedPolicyResponder handles the response to the GetSslPredefinedPolicy request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) GetSslPredefinedPolicyResponder(resp *http.Response) (result ApplicationGatewaySslPredefinedPolicy, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List lists all application gateways in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client ApplicationGatewaysClient) List(ctx context.Context, resourceGroupName string) (result ApplicationGatewayListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.aglr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", resp, "Failure sending request") + return + } + + result.aglr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client ApplicationGatewaysClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) ListResponder(resp *http.Response) (result ApplicationGatewayListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client ApplicationGatewaysClient) listNextResults(lastResults ApplicationGatewayListResult) (result ApplicationGatewayListResult, err error) { + req, err := lastResults.applicationGatewayListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ApplicationGatewaysClient) ListComplete(ctx context.Context, resourceGroupName string) (result ApplicationGatewayListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListAll gets all the application gateways in a subscription. +func (client ApplicationGatewaysClient) ListAll(ctx context.Context) (result ApplicationGatewayListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.aglr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", resp, "Failure sending request") + return + } + + result.aglr, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. +func (client ApplicationGatewaysClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationGateways", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. 
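List and ListAll return ApplicationGatewayListResultPage values, and ListComplete/ListAllComplete wrap the same calls in iterators that follow the nextLink automatically. A rough usage sketch of the page-based form follows; the subscription ID and resource group are placeholders, authorizer setup is omitted, and the NewApplicationGatewaysClient constructor and the page's NotDone/Values/Next methods are assumed to follow the same generated pattern as the other clients in this package.

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func printGateways(ctx context.Context, subscriptionID, resourceGroup string) error {
	client := network.NewApplicationGatewaysClient(subscriptionID)
	// client.Authorizer must be configured before any call succeeds (omitted here).

	page, err := client.List(ctx, resourceGroup)
	if err != nil {
		return err
	}
	// Each page holds one API response; Next() follows the nextLink until exhausted.
	for page.NotDone() {
		for _, gw := range page.Values() {
			if gw.Name != nil {
				fmt.Println(*gw.Name)
			}
		}
		if err := page.Next(); err != nil {
			return err
		}
	}
	return nil
}
```

When per-page access is not needed, ListComplete(ctx, resourceGroup) returns the iterator form instead.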
+func (client ApplicationGatewaysClient) ListAllResponder(resp *http.Response) (result ApplicationGatewayListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAllNextResults retrieves the next set of results, if any. +func (client ApplicationGatewaysClient) listAllNextResults(lastResults ApplicationGatewayListResult) (result ApplicationGatewayListResult, err error) { + req, err := lastResults.applicationGatewayListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listAllNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listAllNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client ApplicationGatewaysClient) ListAllComplete(ctx context.Context) (result ApplicationGatewayListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// ListAvailableSslOptions lists available Ssl options for configuring Ssl policy. +func (client ApplicationGatewaysClient) ListAvailableSslOptions(ctx context.Context) (result ApplicationGatewayAvailableSslOptions, err error) { + req, err := client.ListAvailableSslOptionsPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableSslOptions", nil, "Failure preparing request") + return + } + + resp, err := client.ListAvailableSslOptionsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableSslOptions", resp, "Failure sending request") + return + } + + result, err = client.ListAvailableSslOptionsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableSslOptions", resp, "Failure responding to request") + } + + return +} + +// ListAvailableSslOptionsPreparer prepares the ListAvailableSslOptions request. +func (client ApplicationGatewaysClient) ListAvailableSslOptionsPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationGatewayAvailableSslOptions/default", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAvailableSslOptionsSender sends the ListAvailableSslOptions request. 
The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) ListAvailableSslOptionsSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAvailableSslOptionsResponder handles the response to the ListAvailableSslOptions request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) ListAvailableSslOptionsResponder(resp *http.Response) (result ApplicationGatewayAvailableSslOptions, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListAvailableSslPredefinedPolicies lists all SSL predefined policies for configuring Ssl policy. +func (client ApplicationGatewaysClient) ListAvailableSslPredefinedPolicies(ctx context.Context) (result ApplicationGatewayAvailableSslPredefinedPoliciesPage, err error) { + result.fn = client.listAvailableSslPredefinedPoliciesNextResults + req, err := client.ListAvailableSslPredefinedPoliciesPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableSslPredefinedPolicies", nil, "Failure preparing request") + return + } + + resp, err := client.ListAvailableSslPredefinedPoliciesSender(req) + if err != nil { + result.agaspp.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableSslPredefinedPolicies", resp, "Failure sending request") + return + } + + result.agaspp, err = client.ListAvailableSslPredefinedPoliciesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableSslPredefinedPolicies", resp, "Failure responding to request") + } + + return +} + +// ListAvailableSslPredefinedPoliciesPreparer prepares the ListAvailableSslPredefinedPolicies request. +func (client ApplicationGatewaysClient) ListAvailableSslPredefinedPoliciesPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationGatewayAvailableSslOptions/default/predefinedPolicies", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAvailableSslPredefinedPoliciesSender sends the ListAvailableSslPredefinedPolicies request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) ListAvailableSslPredefinedPoliciesSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAvailableSslPredefinedPoliciesResponder handles the response to the ListAvailableSslPredefinedPolicies request. The method always +// closes the http.Response Body. 
+func (client ApplicationGatewaysClient) ListAvailableSslPredefinedPoliciesResponder(resp *http.Response) (result ApplicationGatewayAvailableSslPredefinedPolicies, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAvailableSslPredefinedPoliciesNextResults retrieves the next set of results, if any. +func (client ApplicationGatewaysClient) listAvailableSslPredefinedPoliciesNextResults(lastResults ApplicationGatewayAvailableSslPredefinedPolicies) (result ApplicationGatewayAvailableSslPredefinedPolicies, err error) { + req, err := lastResults.applicationGatewayAvailableSslPredefinedPoliciesPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listAvailableSslPredefinedPoliciesNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAvailableSslPredefinedPoliciesSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listAvailableSslPredefinedPoliciesNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAvailableSslPredefinedPoliciesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "listAvailableSslPredefinedPoliciesNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAvailableSslPredefinedPoliciesComplete enumerates all values, automatically crossing page boundaries as required. +func (client ApplicationGatewaysClient) ListAvailableSslPredefinedPoliciesComplete(ctx context.Context) (result ApplicationGatewayAvailableSslPredefinedPoliciesIterator, err error) { + result.page, err = client.ListAvailableSslPredefinedPolicies(ctx) + return +} + +// ListAvailableWafRuleSets lists all available web application firewall rule sets. +func (client ApplicationGatewaysClient) ListAvailableWafRuleSets(ctx context.Context) (result ApplicationGatewayAvailableWafRuleSetsResult, err error) { + req, err := client.ListAvailableWafRuleSetsPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableWafRuleSets", nil, "Failure preparing request") + return + } + + resp, err := client.ListAvailableWafRuleSetsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableWafRuleSets", resp, "Failure sending request") + return + } + + result, err = client.ListAvailableWafRuleSetsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "ListAvailableWafRuleSets", resp, "Failure responding to request") + } + + return +} + +// ListAvailableWafRuleSetsPreparer prepares the ListAvailableWafRuleSets request. 
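ListAvailableSslOptions returns a single ApplicationGatewayAvailableSslOptions document, while ListAvailableSslPredefinedPoliciesComplete pages through the predefined policies. A minimal sketch, assuming the iterator exposes the usual NotDone/Value/Next methods and that ApplicationGatewaySslPredefinedPolicy carries a Name pointer (auth setup omitted):

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func printSslPolicies(ctx context.Context, subscriptionID string) error {
	client := network.NewApplicationGatewaysClient(subscriptionID)
	// client.Authorizer must be configured before use (omitted here).

	opts, err := client.ListAvailableSslOptions(ctx)
	if err != nil {
		return err
	}
	_ = opts // inspect the default policy, available cipher suites, etc.

	it, err := client.ListAvailableSslPredefinedPoliciesComplete(ctx)
	if err != nil {
		return err
	}
	for it.NotDone() {
		if p := it.Value(); p.Name != nil {
			fmt.Println(*p.Name)
		}
		if err := it.Next(); err != nil {
			return err
		}
	}
	return nil
}
```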
+func (client ApplicationGatewaysClient) ListAvailableWafRuleSetsPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationGatewayAvailableWafRuleSets", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAvailableWafRuleSetsSender sends the ListAvailableWafRuleSets request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) ListAvailableWafRuleSetsSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAvailableWafRuleSetsResponder handles the response to the ListAvailableWafRuleSets request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) ListAvailableWafRuleSetsResponder(resp *http.Response) (result ApplicationGatewayAvailableWafRuleSetsResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Start starts the specified application gateway. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationGatewayName - the name of the application gateway. +func (client ApplicationGatewaysClient) Start(ctx context.Context, resourceGroupName string, applicationGatewayName string) (result ApplicationGatewaysStartFuture, err error) { + req, err := client.StartPreparer(ctx, resourceGroupName, applicationGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Start", nil, "Failure preparing request") + return + } + + result, err = client.StartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Start", result.Response(), "Failure sending request") + return + } + + return +} + +// StartPreparer prepares the Start request. +func (client ApplicationGatewaysClient) StartPreparer(ctx context.Context, resourceGroupName string, applicationGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationGatewayName": autorest.Encode("path", applicationGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/start", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StartSender sends the Start request. 
The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) StartSender(req *http.Request) (future ApplicationGatewaysStartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StartResponder handles the response to the Start request. The method always +// closes the http.Response Body. +func (client ApplicationGatewaysClient) StartResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByClosing()) + result.Response = resp + return +} + +// Stop stops the specified application gateway in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationGatewayName - the name of the application gateway. +func (client ApplicationGatewaysClient) Stop(ctx context.Context, resourceGroupName string, applicationGatewayName string) (result ApplicationGatewaysStopFuture, err error) { + req, err := client.StopPreparer(ctx, resourceGroupName, applicationGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Stop", nil, "Failure preparing request") + return + } + + result, err = client.StopSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "Stop", result.Response(), "Failure sending request") + return + } + + return +} + +// StopPreparer prepares the Stop request. +func (client ApplicationGatewaysClient) StopPreparer(ctx context.Context, resourceGroupName string, applicationGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationGatewayName": autorest.Encode("path", applicationGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/stop", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StopSender sends the Stop request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) StopSender(req *http.Request) (future ApplicationGatewaysStopFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StopResponder handles the response to the Stop request. The method always +// closes the http.Response Body. 
+func (client ApplicationGatewaysClient) StopResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByClosing()) + result.Response = resp + return +} + +// UpdateTags updates the specified application gateway tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationGatewayName - the name of the application gateway. +// parameters - parameters supplied to update application gateway tags. +func (client ApplicationGatewaysClient) UpdateTags(ctx context.Context, resourceGroupName string, applicationGatewayName string, parameters TagsObject) (result ApplicationGatewaysUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, applicationGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysClient", "UpdateTags", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client ApplicationGatewaysClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, applicationGatewayName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationGatewayName": autorest.Encode("path", applicationGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationGatewaysClient) UpdateTagsSender(req *http.Request) (future ApplicationGatewaysUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. 
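Start, Stop, and UpdateTags are long-running operations: the sender wraps the initial response in an azure.Future instead of returning a body directly. Below is a sketch of starting a gateway and blocking until the operation finishes; it assumes this go-autorest vintage still exposes WaitForCompletion on azure.Future (later releases renamed it to WaitForCompletionRef), and the gateway name is a placeholder.

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func startGateway(ctx context.Context, subscriptionID, resourceGroup, gatewayName string) error {
	client := network.NewApplicationGatewaysClient(subscriptionID)
	// client.Authorizer must be configured before use (omitted here).

	future, err := client.Start(ctx, resourceGroup, gatewayName)
	if err != nil {
		return err
	}
	// Block until the service reports the operation as finished (or ctx is cancelled).
	return future.WaitForCompletion(ctx, client.Client)
}
```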
+func (client ApplicationGatewaysClient) UpdateTagsResponder(resp *http.Response) (result ApplicationGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/applicationsecuritygroups.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/applicationsecuritygroups.go new file mode 100644 index 000000000..727b7b1c1 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/applicationsecuritygroups.go @@ -0,0 +1,434 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// ApplicationSecurityGroupsClient is the network Client +type ApplicationSecurityGroupsClient struct { + BaseClient +} + +// NewApplicationSecurityGroupsClient creates an instance of the ApplicationSecurityGroupsClient client. +func NewApplicationSecurityGroupsClient(subscriptionID string) ApplicationSecurityGroupsClient { + return NewApplicationSecurityGroupsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewApplicationSecurityGroupsClientWithBaseURI creates an instance of the ApplicationSecurityGroupsClient client. +func NewApplicationSecurityGroupsClientWithBaseURI(baseURI string, subscriptionID string) ApplicationSecurityGroupsClient { + return ApplicationSecurityGroupsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates an application security group. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationSecurityGroupName - the name of the application security group. +// parameters - parameters supplied to the create or update ApplicationSecurityGroup operation. 
+func (client ApplicationSecurityGroupsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, applicationSecurityGroupName string, parameters ApplicationSecurityGroup) (result ApplicationSecurityGroupsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, applicationSecurityGroupName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client ApplicationSecurityGroupsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, applicationSecurityGroupName string, parameters ApplicationSecurityGroup) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationSecurityGroupName": autorest.Encode("path", applicationSecurityGroupName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationSecurityGroups/{applicationSecurityGroupName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationSecurityGroupsClient) CreateOrUpdateSender(req *http.Request) (future ApplicationSecurityGroupsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client ApplicationSecurityGroupsClient) CreateOrUpdateResponder(resp *http.Response) (result ApplicationSecurityGroup, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified application security group. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationSecurityGroupName - the name of the application security group. 
+func (client ApplicationSecurityGroupsClient) Delete(ctx context.Context, resourceGroupName string, applicationSecurityGroupName string) (result ApplicationSecurityGroupsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, applicationSecurityGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client ApplicationSecurityGroupsClient) DeletePreparer(ctx context.Context, resourceGroupName string, applicationSecurityGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationSecurityGroupName": autorest.Encode("path", applicationSecurityGroupName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationSecurityGroups/{applicationSecurityGroupName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationSecurityGroupsClient) DeleteSender(req *http.Request) (future ApplicationSecurityGroupsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client ApplicationSecurityGroupsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets information about the specified application security group. +// Parameters: +// resourceGroupName - the name of the resource group. +// applicationSecurityGroupName - the name of the application security group. 
+func (client ApplicationSecurityGroupsClient) Get(ctx context.Context, resourceGroupName string, applicationSecurityGroupName string) (result ApplicationSecurityGroup, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, applicationSecurityGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client ApplicationSecurityGroupsClient) GetPreparer(ctx context.Context, resourceGroupName string, applicationSecurityGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "applicationSecurityGroupName": autorest.Encode("path", applicationSecurityGroupName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationSecurityGroups/{applicationSecurityGroupName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationSecurityGroupsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client ApplicationSecurityGroupsClient) GetResponder(resp *http.Response) (result ApplicationSecurityGroup, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all the application security groups in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. 
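CreateOrUpdate and Delete on ApplicationSecurityGroupsClient follow the same future-based pattern, while Get is synchronous. A rough create-then-get sketch; the group name, location, and resource group are placeholders, to.StringPtr comes from go-autorest's to helper package, and the future's WaitForCompletion helper is assumed as in the earlier sketch.

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
	"github.com/Azure/go-autorest/autorest/to"
)

func ensureASG(ctx context.Context, subscriptionID, resourceGroup, name string) error {
	client := network.NewApplicationSecurityGroupsClient(subscriptionID)
	// client.Authorizer must be configured before use (omitted here).

	params := network.ApplicationSecurityGroup{Location: to.StringPtr("westus2")}
	future, err := client.CreateOrUpdate(ctx, resourceGroup, name, params)
	if err != nil {
		return err
	}
	// Wait for the PUT to finish before reading the group back.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return err
	}

	asg, err := client.Get(ctx, resourceGroup, name)
	if err != nil {
		return err
	}
	if asg.ID != nil {
		fmt.Println(*asg.ID)
	}
	return nil
}
```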
+func (client ApplicationSecurityGroupsClient) List(ctx context.Context, resourceGroupName string) (result ApplicationSecurityGroupListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.asglr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "List", resp, "Failure sending request") + return + } + + result.asglr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client ApplicationSecurityGroupsClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/applicationSecurityGroups", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationSecurityGroupsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client ApplicationSecurityGroupsClient) ListResponder(resp *http.Response) (result ApplicationSecurityGroupListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client ApplicationSecurityGroupsClient) listNextResults(lastResults ApplicationSecurityGroupListResult) (result ApplicationSecurityGroupListResult, err error) { + req, err := lastResults.applicationSecurityGroupListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ApplicationSecurityGroupsClient) ListComplete(ctx context.Context, resourceGroupName string) (result ApplicationSecurityGroupListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListAll gets all application security groups in a subscription. +func (client ApplicationSecurityGroupsClient) ListAll(ctx context.Context) (result ApplicationSecurityGroupListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.asglr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "ListAll", resp, "Failure sending request") + return + } + + result.asglr, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. +func (client ApplicationSecurityGroupsClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/applicationSecurityGroups", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client ApplicationSecurityGroupsClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. 
+func (client ApplicationSecurityGroupsClient) ListAllResponder(resp *http.Response) (result ApplicationSecurityGroupListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAllNextResults retrieves the next set of results, if any. +func (client ApplicationSecurityGroupsClient) listAllNextResults(lastResults ApplicationSecurityGroupListResult) (result ApplicationSecurityGroupListResult, err error) { + req, err := lastResults.applicationSecurityGroupListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "listAllNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "listAllNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client ApplicationSecurityGroupsClient) ListAllComplete(ctx context.Context) (result ApplicationSecurityGroupListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/availableendpointservices.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/availableendpointservices.go new file mode 100644 index 000000000..a386e084b --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/availableendpointservices.go @@ -0,0 +1,133 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// AvailableEndpointServicesClient is the network Client +type AvailableEndpointServicesClient struct { + BaseClient +} + +// NewAvailableEndpointServicesClient creates an instance of the AvailableEndpointServicesClient client. 
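For enumerating every application security group in the subscription, ListAllComplete hides the paging entirely. A minimal sketch, assuming the iterator's NotDone/Value/Next methods (auth omitted):

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func printAllASGs(ctx context.Context, subscriptionID string) error {
	client := network.NewApplicationSecurityGroupsClient(subscriptionID)
	// client.Authorizer must be configured before use (omitted here).

	it, err := client.ListAllComplete(ctx)
	if err != nil {
		return err
	}
	for it.NotDone() {
		if asg := it.Value(); asg.Name != nil {
			fmt.Println(*asg.Name)
		}
		if err := it.Next(); err != nil {
			return err
		}
	}
	return nil
}
```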
+func NewAvailableEndpointServicesClient(subscriptionID string) AvailableEndpointServicesClient { + return NewAvailableEndpointServicesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewAvailableEndpointServicesClientWithBaseURI creates an instance of the AvailableEndpointServicesClient client. +func NewAvailableEndpointServicesClientWithBaseURI(baseURI string, subscriptionID string) AvailableEndpointServicesClient { + return AvailableEndpointServicesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List list what values of endpoint services are available for use. +// Parameters: +// location - the location to check available endpoint services. +func (client AvailableEndpointServicesClient) List(ctx context.Context, location string) (result EndpointServicesListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, location) + if err != nil { + err = autorest.NewErrorWithError(err, "network.AvailableEndpointServicesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.eslr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.AvailableEndpointServicesClient", "List", resp, "Failure sending request") + return + } + + result.eslr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.AvailableEndpointServicesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client AvailableEndpointServicesClient) ListPreparer(ctx context.Context, location string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "location": autorest.Encode("path", location), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/locations/{location}/virtualNetworkAvailableEndpointServices", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client AvailableEndpointServicesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client AvailableEndpointServicesClient) ListResponder(resp *http.Response) (result EndpointServicesListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client AvailableEndpointServicesClient) listNextResults(lastResults EndpointServicesListResult) (result EndpointServicesListResult, err error) { + req, err := lastResults.endpointServicesListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.AvailableEndpointServicesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.AvailableEndpointServicesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.AvailableEndpointServicesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client AvailableEndpointServicesClient) ListComplete(ctx context.Context, location string) (result EndpointServicesListResultIterator, err error) { + result.page, err = client.List(ctx, location) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/bgpservicecommunities.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/bgpservicecommunities.go similarity index 68% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/bgpservicecommunities.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/bgpservicecommunities.go index 5024f5164..158a5c8d0 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/bgpservicecommunities.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/bgpservicecommunities.go @@ -14,36 +14,35 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// BgpServiceCommunitiesClient is the composite Swagger for Network Client +// BgpServiceCommunitiesClient is the network Client type BgpServiceCommunitiesClient struct { - ManagementClient + BaseClient } -// NewBgpServiceCommunitiesClient creates an instance of the -// BgpServiceCommunitiesClient client. +// NewBgpServiceCommunitiesClient creates an instance of the BgpServiceCommunitiesClient client. func NewBgpServiceCommunitiesClient(subscriptionID string) BgpServiceCommunitiesClient { return NewBgpServiceCommunitiesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewBgpServiceCommunitiesClientWithBaseURI creates an instance of the -// BgpServiceCommunitiesClient client. +// NewBgpServiceCommunitiesClientWithBaseURI creates an instance of the BgpServiceCommunitiesClient client. func NewBgpServiceCommunitiesClientWithBaseURI(baseURI string, subscriptionID string) BgpServiceCommunitiesClient { return BgpServiceCommunitiesClient{NewWithBaseURI(baseURI, subscriptionID)} } // List gets all the available bgp service communities. 
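AvailableEndpointServicesClient.List is scoped to a location and pages the same way. A sketch using the iterator form; the location is a placeholder, auth setup is omitted, and the iterator element (assumed to be EndpointServiceResult in this API version, not shown in this hunk) is assumed to expose a Name pointer.

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func printEndpointServices(ctx context.Context, subscriptionID, location string) error {
	client := network.NewAvailableEndpointServicesClient(subscriptionID)
	// client.Authorizer must be configured before use (omitted here).

	it, err := client.ListComplete(ctx, location)
	if err != nil {
		return err
	}
	for it.NotDone() {
		if svc := it.Value(); svc.Name != nil {
			fmt.Println(*svc.Name)
		}
		if err := it.Next(); err != nil {
			return err
		}
	}
	return nil
}
```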
-func (client BgpServiceCommunitiesClient) List() (result BgpServiceCommunityListResult, err error) { - req, err := client.ListPreparer() +func (client BgpServiceCommunitiesClient) List(ctx context.Context) (result BgpServiceCommunityListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "List", nil, "Failure preparing request") return @@ -51,12 +50,12 @@ func (client BgpServiceCommunitiesClient) List() (result BgpServiceCommunityList resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.bsclr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.bsclr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "List", resp, "Failure responding to request") } @@ -65,12 +64,12 @@ func (client BgpServiceCommunitiesClient) List() (result BgpServiceCommunityList } // ListPreparer prepares the List request. -func (client BgpServiceCommunitiesClient) ListPreparer() (*http.Request, error) { +func (client BgpServiceCommunitiesClient) ListPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -80,13 +79,14 @@ func (client BgpServiceCommunitiesClient) ListPreparer() (*http.Request, error) autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/bgpServiceCommunities", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client BgpServiceCommunitiesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -102,26 +102,29 @@ func (client BgpServiceCommunitiesClient) ListResponder(resp *http.Response) (re return } -// ListNextResults retrieves the next set of results, if any. -func (client BgpServiceCommunitiesClient) ListNextResults(lastResults BgpServiceCommunityListResult) (result BgpServiceCommunityListResult, err error) { - req, err := lastResults.BgpServiceCommunityListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client BgpServiceCommunitiesClient) listNextResults(lastResults BgpServiceCommunityListResult) (result BgpServiceCommunityListResult, err error) { + req, err := lastResults.bgpServiceCommunityListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.BgpServiceCommunitiesClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client BgpServiceCommunitiesClient) ListComplete(ctx context.Context) (result BgpServiceCommunityListResultIterator, err error) { + result.page, err = client.List(ctx) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/client.go similarity index 58% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/client.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/client.go index ec5bf7ced..3e657b1ab 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/client.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/client.go @@ -1,6 +1,6 @@ // Package network implements the Azure ARM Network service API version . // -// Composite Swagger for Network Client +// Network Client package network // Copyright (c) Microsoft and contributors. All rights reserved. @@ -17,11 +17,11 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" @@ -32,68 +32,65 @@ const ( DefaultBaseURI = "https://management.azure.com" ) -// ManagementClient is the base client for Network. -type ManagementClient struct { +// BaseClient is the base client for Network. +type BaseClient struct { autorest.Client BaseURI string SubscriptionID string } -// New creates an instance of the ManagementClient client. -func New(subscriptionID string) ManagementClient { +// New creates an instance of the BaseClient client. 
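The diff above removes the exported ListNextResults helper and makes BgpServiceCommunitiesClient.List return a page, so callers that previously looped over ListNextResults need to switch to the page or iterator form. A migration sketch using ListComplete (iterator methods assumed as elsewhere, auth omitted):

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func printBgpCommunities(ctx context.Context, subscriptionID string) error {
	client := network.NewBgpServiceCommunitiesClient(subscriptionID)
	// client.Authorizer must be configured before use (omitted here).

	it, err := client.ListComplete(ctx)
	if err != nil {
		return err
	}
	for it.NotDone() {
		if c := it.Value(); c.Name != nil {
			fmt.Println(*c.Name)
		}
		if err := it.Next(); err != nil {
			return err
		}
	}
	return nil
}
```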
+func New(subscriptionID string) BaseClient { return NewWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewWithBaseURI creates an instance of the ManagementClient client. -func NewWithBaseURI(baseURI string, subscriptionID string) ManagementClient { - return ManagementClient{ +// NewWithBaseURI creates an instance of the BaseClient client. +func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient { + return BaseClient{ Client: autorest.NewClientWithUserAgent(UserAgent()), BaseURI: baseURI, SubscriptionID: subscriptionID, } } -// CheckDNSNameAvailability checks whether a domain name in the cloudapp.net -// zone is available for use. -// -// location is the location of the domain name. domainNameLabel is the domain -// name to be verified. It must conform to the following regular expression: +// CheckDNSNameAvailability checks whether a domain name in the cloudapp.azure.com zone is available for use. +// Parameters: +// location - the location of the domain name. +// domainNameLabel - the domain name to be verified. It must conform to the following regular expression: // ^[a-z][a-z0-9-]{1,61}[a-z0-9]$. -func (client ManagementClient) CheckDNSNameAvailability(location string, domainNameLabel string) (result DNSNameAvailabilityResult, err error) { - req, err := client.CheckDNSNameAvailabilityPreparer(location, domainNameLabel) +func (client BaseClient) CheckDNSNameAvailability(ctx context.Context, location string, domainNameLabel string) (result DNSNameAvailabilityResult, err error) { + req, err := client.CheckDNSNameAvailabilityPreparer(ctx, location, domainNameLabel) if err != nil { - err = autorest.NewErrorWithError(err, "network.ManagementClient", "CheckDNSNameAvailability", nil, "Failure preparing request") + err = autorest.NewErrorWithError(err, "network.BaseClient", "CheckDNSNameAvailability", nil, "Failure preparing request") return } resp, err := client.CheckDNSNameAvailabilitySender(req) if err != nil { result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ManagementClient", "CheckDNSNameAvailability", resp, "Failure sending request") + err = autorest.NewErrorWithError(err, "network.BaseClient", "CheckDNSNameAvailability", resp, "Failure sending request") return } result, err = client.CheckDNSNameAvailabilityResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.ManagementClient", "CheckDNSNameAvailability", resp, "Failure responding to request") + err = autorest.NewErrorWithError(err, "network.BaseClient", "CheckDNSNameAvailability", resp, "Failure responding to request") } return } // CheckDNSNameAvailabilityPreparer prepares the CheckDNSNameAvailability request. 
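// [Editor's note] Illustrative sketch, not part of the vendored diff: the regenerated
// clients accept a context.Context on every operation, so cancellation and deadlines use
// the standard library instead of the old cancel-channel parameters. The authorizer setup
// and the Available field of DNSNameAvailabilityResult are assumptions; they are not shown
// in this hunk.
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func checkDNSName(subscriptionID, location, label string) error {
	client := network.New(subscriptionID) // BaseClient, formerly ManagementClient
	// client.Authorizer must be configured before calling (omitted here).

	// A deadline on the context now cancels the underlying HTTP request.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	res, err := client.CheckDNSNameAvailability(ctx, location, label)
	if err != nil {
		return err
	}
	if res.Available != nil {
		fmt.Println("domain name label available:", *res.Available)
	}
	return nil
}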
-func (client ManagementClient) CheckDNSNameAvailabilityPreparer(location string, domainNameLabel string) (*http.Request, error) { +func (client BaseClient) CheckDNSNameAvailabilityPreparer(ctx context.Context, location string, domainNameLabel string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(domainNameLabel) > 0 { - queryParameters["domainNameLabel"] = autorest.Encode("query", domainNameLabel) + "api-version": APIVersion, + "domainNameLabel": autorest.Encode("query", domainNameLabel), } preparer := autorest.CreatePreparer( @@ -101,18 +98,19 @@ func (client ManagementClient) CheckDNSNameAvailabilityPreparer(location string, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/locations/{location}/CheckDnsNameAvailability", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CheckDNSNameAvailabilitySender sends the CheckDNSNameAvailability request. The method will close the // http.Response Body if it receives an error. -func (client ManagementClient) CheckDNSNameAvailabilitySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) +func (client BaseClient) CheckDNSNameAvailabilitySender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CheckDNSNameAvailabilityResponder handles the response to the CheckDNSNameAvailability request. The method always // closes the http.Response Body. -func (client ManagementClient) CheckDNSNameAvailabilityResponder(resp *http.Response) (result DNSNameAvailabilityResult, err error) { +func (client BaseClient) CheckDNSNameAvailabilityResponder(resp *http.Response) (result DNSNameAvailabilityResult, err error) { err = autorest.Respond( resp, client.ByInspecting(), diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/connectionmonitors.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/connectionmonitors.go new file mode 100644 index 000000000..d19cbad06 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/connectionmonitors.go @@ -0,0 +1,552 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+ +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// ConnectionMonitorsClient is the network Client +type ConnectionMonitorsClient struct { + BaseClient +} + +// NewConnectionMonitorsClient creates an instance of the ConnectionMonitorsClient client. +func NewConnectionMonitorsClient(subscriptionID string) ConnectionMonitorsClient { + return NewConnectionMonitorsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewConnectionMonitorsClientWithBaseURI creates an instance of the ConnectionMonitorsClient client. +func NewConnectionMonitorsClientWithBaseURI(baseURI string, subscriptionID string) ConnectionMonitorsClient { + return ConnectionMonitorsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate create or update a connection monitor. +// Parameters: +// resourceGroupName - the name of the resource group containing Network Watcher. +// networkWatcherName - the name of the Network Watcher resource. +// connectionMonitorName - the name of the connection monitor. +// parameters - parameters that define the operation to create a connection monitor. +func (client ConnectionMonitorsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string, parameters ConnectionMonitor) (result ConnectionMonitorsCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.ConnectionMonitorParameters", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ConnectionMonitorParameters.Source", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ConnectionMonitorParameters.Source.ResourceID", Name: validation.Null, Rule: true, Chain: nil}}}, + {Target: "parameters.ConnectionMonitorParameters.Destination", Name: validation.Null, Rule: true, Chain: nil}, + }}}}}); err != nil { + return result, validation.NewError("network.ConnectionMonitorsClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, networkWatcherName, connectionMonitorName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
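// [Editor's note] Illustrative sketch, not part of the vendored diff: CreateOrUpdate now
// validates its parameters client-side (via autorest/validation) before any request is
// sent, so a ConnectionMonitor missing the required Source and Destination fields fails
// fast with a validation error rather than a service-side rejection. A minimal sketch
// under that assumption; resource names below are placeholders.
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func createEmptyMonitor(subscriptionID string) {
	client := network.NewConnectionMonitorsClient(subscriptionID)
	// An empty ConnectionMonitor violates the Null constraints declared above,
	// so this returns immediately without issuing an HTTP request.
	_, err := client.CreateOrUpdate(context.Background(), "rg", "watcher", "monitor", network.ConnectionMonitor{})
	fmt.Println(err) // validation error from network.ConnectionMonitorsClient "CreateOrUpdate"
}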
+func (client ConnectionMonitorsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string, parameters ConnectionMonitor) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "connectionMonitorName": autorest.Encode("path", connectionMonitorName), + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors/{connectionMonitorName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client ConnectionMonitorsClient) CreateOrUpdateSender(req *http.Request) (future ConnectionMonitorsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client ConnectionMonitorsClient) CreateOrUpdateResponder(resp *http.Response) (result ConnectionMonitorResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified connection monitor. +// Parameters: +// resourceGroupName - the name of the resource group containing Network Watcher. +// networkWatcherName - the name of the Network Watcher resource. +// connectionMonitorName - the name of the connection monitor. +func (client ConnectionMonitorsClient) Delete(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (result ConnectionMonitorsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, networkWatcherName, connectionMonitorName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. 
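// [Editor's note] Illustrative sketch, not part of the vendored diff: long-running
// operations now return typed futures instead of the old result/error channel pairs.
// The WaitForCompletion call below assumes the azure.Future API of the go-autorest
// version vendored alongside this SDK; that method is not shown in this hunk.
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func deleteMonitor(ctx context.Context, subscriptionID, rg, watcher, name string) error {
	client := network.NewConnectionMonitorsClient(subscriptionID)
	// client.Authorizer must be configured before calling (omitted here).

	future, err := client.Delete(ctx, rg, watcher, name)
	if err != nil {
		return err
	}
	// Block until the asynchronous operation reaches a terminal state.
	return future.WaitForCompletion(ctx, client.Client)
}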
+func (client ConnectionMonitorsClient) DeletePreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "connectionMonitorName": autorest.Encode("path", connectionMonitorName), + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors/{connectionMonitorName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client ConnectionMonitorsClient) DeleteSender(req *http.Request) (future ConnectionMonitorsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client ConnectionMonitorsClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets a connection monitor by name. +// Parameters: +// resourceGroupName - the name of the resource group containing Network Watcher. +// networkWatcherName - the name of the Network Watcher resource. +// connectionMonitorName - the name of the connection monitor. +func (client ConnectionMonitorsClient) Get(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (result ConnectionMonitorResult, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkWatcherName, connectionMonitorName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client ConnectionMonitorsClient) GetPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "connectionMonitorName": autorest.Encode("path", connectionMonitorName), + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors/{connectionMonitorName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client ConnectionMonitorsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client ConnectionMonitorsClient) GetResponder(resp *http.Response) (result ConnectionMonitorResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List lists all connection monitors for the specified Network Watcher. +// Parameters: +// resourceGroupName - the name of the resource group containing Network Watcher. +// networkWatcherName - the name of the Network Watcher resource. +func (client ConnectionMonitorsClient) List(ctx context.Context, resourceGroupName string, networkWatcherName string) (result ConnectionMonitorListResult, err error) { + req, err := client.ListPreparer(ctx, resourceGroupName, networkWatcherName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "List", resp, "Failure sending request") + return + } + + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client ConnectionMonitorsClient) ListPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client ConnectionMonitorsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client ConnectionMonitorsClient) ListResponder(resp *http.Response) (result ConnectionMonitorListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Query query a snapshot of the most recent connection states. +// Parameters: +// resourceGroupName - the name of the resource group containing Network Watcher. +// networkWatcherName - the name of the Network Watcher resource. +// connectionMonitorName - the name given to the connection monitor. +func (client ConnectionMonitorsClient) Query(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (result ConnectionMonitorsQueryFuture, err error) { + req, err := client.QueryPreparer(ctx, resourceGroupName, networkWatcherName, connectionMonitorName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Query", nil, "Failure preparing request") + return + } + + result, err = client.QuerySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Query", result.Response(), "Failure sending request") + return + } + + return +} + +// QueryPreparer prepares the Query request. 
+func (client ConnectionMonitorsClient) QueryPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "connectionMonitorName": autorest.Encode("path", connectionMonitorName), + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors/{connectionMonitorName}/query", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// QuerySender sends the Query request. The method will close the +// http.Response Body if it receives an error. +func (client ConnectionMonitorsClient) QuerySender(req *http.Request) (future ConnectionMonitorsQueryFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// QueryResponder handles the response to the Query request. The method always +// closes the http.Response Body. +func (client ConnectionMonitorsClient) QueryResponder(resp *http.Response) (result ConnectionMonitorQueryResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Start starts the specified connection monitor. +// Parameters: +// resourceGroupName - the name of the resource group containing Network Watcher. +// networkWatcherName - the name of the Network Watcher resource. +// connectionMonitorName - the name of the connection monitor. +func (client ConnectionMonitorsClient) Start(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (result ConnectionMonitorsStartFuture, err error) { + req, err := client.StartPreparer(ctx, resourceGroupName, networkWatcherName, connectionMonitorName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Start", nil, "Failure preparing request") + return + } + + result, err = client.StartSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Start", result.Response(), "Failure sending request") + return + } + + return +} + +// StartPreparer prepares the Start request. 
+func (client ConnectionMonitorsClient) StartPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "connectionMonitorName": autorest.Encode("path", connectionMonitorName), + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors/{connectionMonitorName}/start", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StartSender sends the Start request. The method will close the +// http.Response Body if it receives an error. +func (client ConnectionMonitorsClient) StartSender(req *http.Request) (future ConnectionMonitorsStartFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StartResponder handles the response to the Start request. The method always +// closes the http.Response Body. +func (client ConnectionMonitorsClient) StartResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByClosing()) + result.Response = resp + return +} + +// Stop stops the specified connection monitor. +// Parameters: +// resourceGroupName - the name of the resource group containing Network Watcher. +// networkWatcherName - the name of the Network Watcher resource. +// connectionMonitorName - the name of the connection monitor. +func (client ConnectionMonitorsClient) Stop(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (result ConnectionMonitorsStopFuture, err error) { + req, err := client.StopPreparer(ctx, resourceGroupName, networkWatcherName, connectionMonitorName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Stop", nil, "Failure preparing request") + return + } + + result, err = client.StopSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsClient", "Stop", result.Response(), "Failure sending request") + return + } + + return +} + +// StopPreparer prepares the Stop request. 
+func (client ConnectionMonitorsClient) StopPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, connectionMonitorName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "connectionMonitorName": autorest.Encode("path", connectionMonitorName), + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectionMonitors/{connectionMonitorName}/stop", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// StopSender sends the Stop request. The method will close the +// http.Response Body if it receives an error. +func (client ConnectionMonitorsClient) StopSender(req *http.Request) (future ConnectionMonitorsStopFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// StopResponder handles the response to the Stop request. The method always +// closes the http.Response Body. +func (client ConnectionMonitorsClient) StopResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByClosing()) + result.Response = resp + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/defaultsecurityrules.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/defaultsecurityrules.go new file mode 100644 index 000000000..a53a27b3e --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/defaultsecurityrules.go @@ -0,0 +1,204 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+ +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// DefaultSecurityRulesClient is the network Client +type DefaultSecurityRulesClient struct { + BaseClient +} + +// NewDefaultSecurityRulesClient creates an instance of the DefaultSecurityRulesClient client. +func NewDefaultSecurityRulesClient(subscriptionID string) DefaultSecurityRulesClient { + return NewDefaultSecurityRulesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewDefaultSecurityRulesClientWithBaseURI creates an instance of the DefaultSecurityRulesClient client. +func NewDefaultSecurityRulesClientWithBaseURI(baseURI string, subscriptionID string) DefaultSecurityRulesClient { + return DefaultSecurityRulesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Get get the specified default network security rule. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +// defaultSecurityRuleName - the name of the default security rule. +func (client DefaultSecurityRulesClient) Get(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, defaultSecurityRuleName string) (result SecurityRule, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkSecurityGroupName, defaultSecurityRuleName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client DefaultSecurityRulesClient) GetPreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, defaultSecurityRuleName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "defaultSecurityRuleName": autorest.Encode("path", defaultSecurityRuleName), + "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/defaultSecurityRules/{defaultSecurityRuleName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client DefaultSecurityRulesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. 
+func (client DefaultSecurityRulesClient) GetResponder(resp *http.Response) (result SecurityRule, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all default security rules in a network security group. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +func (client DefaultSecurityRulesClient) List(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (result SecurityRuleListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, networkSecurityGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.srlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "List", resp, "Failure sending request") + return + } + + result.srlr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client DefaultSecurityRulesClient) ListPreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/defaultSecurityRules", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client DefaultSecurityRulesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client DefaultSecurityRulesClient) ListResponder(resp *http.Response) (result SecurityRuleListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client DefaultSecurityRulesClient) listNextResults(lastResults SecurityRuleListResult) (result SecurityRuleListResult, err error) { + req, err := lastResults.securityRuleListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.DefaultSecurityRulesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client DefaultSecurityRulesClient) ListComplete(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (result SecurityRuleListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, networkSecurityGroupName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuitauthorizations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuitauthorizations.go similarity index 59% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuitauthorizations.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuitauthorizations.go index feb974fb9..35f9e2744 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuitauthorizations.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuitauthorizations.go @@ -14,78 +14,58 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// ExpressRouteCircuitAuthorizationsClient is the composite Swagger for Network -// Client +// ExpressRouteCircuitAuthorizationsClient is the network Client type ExpressRouteCircuitAuthorizationsClient struct { - ManagementClient + BaseClient } -// NewExpressRouteCircuitAuthorizationsClient creates an instance of the -// ExpressRouteCircuitAuthorizationsClient client. +// NewExpressRouteCircuitAuthorizationsClient creates an instance of the ExpressRouteCircuitAuthorizationsClient +// client. func NewExpressRouteCircuitAuthorizationsClient(subscriptionID string) ExpressRouteCircuitAuthorizationsClient { return NewExpressRouteCircuitAuthorizationsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewExpressRouteCircuitAuthorizationsClientWithBaseURI creates an instance of -// the ExpressRouteCircuitAuthorizationsClient client. +// NewExpressRouteCircuitAuthorizationsClientWithBaseURI creates an instance of the +// ExpressRouteCircuitAuthorizationsClient client. 
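// [Editor's note] Illustrative sketch, not part of the vendored diff: List now returns a
// page type and ListComplete returns an iterator that walks every page automatically. The
// NotDone/Value/Next iterator methods follow the usual pattern for this SDK generation but
// are defined elsewhere (models.go), so treat them as assumptions here.
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func printDefaultRules(ctx context.Context, subscriptionID, rg, nsg string) error {
	client := network.NewDefaultSecurityRulesClient(subscriptionID)
	// client.Authorizer must be configured before calling (omitted here).

	it, err := client.ListComplete(ctx, rg, nsg) // crosses page boundaries as required
	if err != nil {
		return err
	}
	for it.NotDone() {
		rule := it.Value()
		if rule.Name != nil {
			fmt.Println(*rule.Name)
		}
		if err := it.Next(); err != nil {
			return err
		}
	}
	return nil
}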
func NewExpressRouteCircuitAuthorizationsClientWithBaseURI(baseURI string, subscriptionID string) ExpressRouteCircuitAuthorizationsClient { return ExpressRouteCircuitAuthorizationsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates an authorization in the specified express -// route circuit. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. authorizationName is the name of the -// authorization. authorizationParameters is parameters supplied to the create -// or update express route circuit authorization operation. -func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdate(resourceGroupName string, circuitName string, authorizationName string, authorizationParameters ExpressRouteCircuitAuthorization, cancel <-chan struct{}) (<-chan ExpressRouteCircuitAuthorization, <-chan error) { - resultChan := make(chan ExpressRouteCircuitAuthorization, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result ExpressRouteCircuitAuthorization - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, circuitName, authorizationName, authorizationParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates an authorization in the specified express route circuit. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// authorizationName - the name of the authorization. +// authorizationParameters - parameters supplied to the create or update express route circuit authorization +// operation. +func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, circuitName string, authorizationName string, authorizationParameters ExpressRouteCircuitAuthorization) (result ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, circuitName, authorizationName, authorizationParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdatePreparer(resourceGroupName string, circuitName string, authorizationName string, authorizationParameters ExpressRouteCircuitAuthorization, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, circuitName string, authorizationName string, authorizationParameters ExpressRouteCircuitAuthorization) (*http.Request, error) { pathParameters := map[string]interface{}{ "authorizationName": autorest.Encode("path", authorizationName), "circuitName": autorest.Encode("path", circuitName), @@ -93,27 +73,36 @@ func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdatePreparer(res "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/authorizations/{authorizationName}", pathParameters), autorest.WithJSON(authorizationParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdateSender(req *http.Request) (future ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -122,56 +111,36 @@ func (client ExpressRouteCircuitAuthorizationsClient) CreateOrUpdateResponder(re err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } -// Delete deletes the specified authorization from the specified express route -// circuit. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. authorizationName is the name of the -// authorization. 
-func (client ExpressRouteCircuitAuthorizationsClient) Delete(resourceGroupName string, circuitName string, authorizationName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, circuitName, authorizationName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified authorization from the specified express route circuit. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// authorizationName - the name of the authorization. +func (client ExpressRouteCircuitAuthorizationsClient) Delete(ctx context.Context, resourceGroupName string, circuitName string, authorizationName string) (result ExpressRouteCircuitAuthorizationsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, circuitName, authorizationName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. 
-func (client ExpressRouteCircuitAuthorizationsClient) DeletePreparer(resourceGroupName string, circuitName string, authorizationName string, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitAuthorizationsClient) DeletePreparer(ctx context.Context, resourceGroupName string, circuitName string, authorizationName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "authorizationName": autorest.Encode("path", authorizationName), "circuitName": autorest.Encode("path", circuitName), @@ -179,7 +148,7 @@ func (client ExpressRouteCircuitAuthorizationsClient) DeletePreparer(resourceGro "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -189,15 +158,24 @@ func (client ExpressRouteCircuitAuthorizationsClient) DeletePreparer(resourceGro autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/authorizations/{authorizationName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client ExpressRouteCircuitAuthorizationsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitAuthorizationsClient) DeleteSender(req *http.Request) (future ExpressRouteCircuitAuthorizationsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -206,20 +184,19 @@ func (client ExpressRouteCircuitAuthorizationsClient) DeleteResponder(resp *http err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusOK, http.StatusNoContent), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } -// Get gets the specified authorization from the specified express route -// circuit. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. authorizationName is the name of the -// authorization. -func (client ExpressRouteCircuitAuthorizationsClient) Get(resourceGroupName string, circuitName string, authorizationName string) (result ExpressRouteCircuitAuthorization, err error) { - req, err := client.GetPreparer(resourceGroupName, circuitName, authorizationName) +// Get gets the specified authorization from the specified express route circuit. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// authorizationName - the name of the authorization. 
+func (client ExpressRouteCircuitAuthorizationsClient) Get(ctx context.Context, resourceGroupName string, circuitName string, authorizationName string) (result ExpressRouteCircuitAuthorization, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, circuitName, authorizationName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "Get", nil, "Failure preparing request") return @@ -241,7 +218,7 @@ func (client ExpressRouteCircuitAuthorizationsClient) Get(resourceGroupName stri } // GetPreparer prepares the Get request. -func (client ExpressRouteCircuitAuthorizationsClient) GetPreparer(resourceGroupName string, circuitName string, authorizationName string) (*http.Request, error) { +func (client ExpressRouteCircuitAuthorizationsClient) GetPreparer(ctx context.Context, resourceGroupName string, circuitName string, authorizationName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "authorizationName": autorest.Encode("path", authorizationName), "circuitName": autorest.Encode("path", circuitName), @@ -249,7 +226,7 @@ func (client ExpressRouteCircuitAuthorizationsClient) GetPreparer(resourceGroupN "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -259,13 +236,14 @@ func (client ExpressRouteCircuitAuthorizationsClient) GetPreparer(resourceGroupN autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/authorizations/{authorizationName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitAuthorizationsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -282,11 +260,12 @@ func (client ExpressRouteCircuitAuthorizationsClient) GetResponder(resp *http.Re } // List gets all authorizations in an express route circuit. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the circuit. -func (client ExpressRouteCircuitAuthorizationsClient) List(resourceGroupName string, circuitName string) (result AuthorizationListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, circuitName) +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the circuit. 
+func (client ExpressRouteCircuitAuthorizationsClient) List(ctx context.Context, resourceGroupName string, circuitName string) (result AuthorizationListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, circuitName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "List", nil, "Failure preparing request") return @@ -294,12 +273,12 @@ func (client ExpressRouteCircuitAuthorizationsClient) List(resourceGroupName str resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.alr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.alr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "List", resp, "Failure responding to request") } @@ -308,14 +287,14 @@ func (client ExpressRouteCircuitAuthorizationsClient) List(resourceGroupName str } // ListPreparer prepares the List request. -func (client ExpressRouteCircuitAuthorizationsClient) ListPreparer(resourceGroupName string, circuitName string) (*http.Request, error) { +func (client ExpressRouteCircuitAuthorizationsClient) ListPreparer(ctx context.Context, resourceGroupName string, circuitName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -325,13 +304,14 @@ func (client ExpressRouteCircuitAuthorizationsClient) ListPreparer(resourceGroup autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/authorizations", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitAuthorizationsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -347,26 +327,29 @@ func (client ExpressRouteCircuitAuthorizationsClient) ListResponder(resp *http.R return } -// ListNextResults retrieves the next set of results, if any. -func (client ExpressRouteCircuitAuthorizationsClient) ListNextResults(lastResults AuthorizationListResult) (result AuthorizationListResult, err error) { - req, err := lastResults.AuthorizationListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client ExpressRouteCircuitAuthorizationsClient) listNextResults(lastResults AuthorizationListResult) (result AuthorizationListResult, err error) { + req, err := lastResults.authorizationListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ExpressRouteCircuitAuthorizationsClient) ListComplete(ctx context.Context, resourceGroupName string, circuitName string) (result AuthorizationListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, circuitName) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuitpeerings.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuitpeerings.go similarity index 56% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuitpeerings.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuitpeerings.go index 28b926f24..8a33a4337 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuitpeerings.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuitpeerings.go @@ -14,78 +14,68 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" + "net/http" + "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" - "net/http" + "github.com/Azure/go-autorest/autorest/validation" ) -// ExpressRouteCircuitPeeringsClient is the composite Swagger for Network -// Client +// ExpressRouteCircuitPeeringsClient is the network Client type ExpressRouteCircuitPeeringsClient struct { - ManagementClient + BaseClient } -// NewExpressRouteCircuitPeeringsClient creates an instance of the -// ExpressRouteCircuitPeeringsClient client. +// NewExpressRouteCircuitPeeringsClient creates an instance of the ExpressRouteCircuitPeeringsClient client. 
func NewExpressRouteCircuitPeeringsClient(subscriptionID string) ExpressRouteCircuitPeeringsClient { return NewExpressRouteCircuitPeeringsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewExpressRouteCircuitPeeringsClientWithBaseURI creates an instance of the -// ExpressRouteCircuitPeeringsClient client. +// NewExpressRouteCircuitPeeringsClientWithBaseURI creates an instance of the ExpressRouteCircuitPeeringsClient client. func NewExpressRouteCircuitPeeringsClientWithBaseURI(baseURI string, subscriptionID string) ExpressRouteCircuitPeeringsClient { return ExpressRouteCircuitPeeringsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a peering in the specified express route -// circuits. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. peeringName is the name of the peering. -// peeringParameters is parameters supplied to the create or update express -// route circuit peering operation. -func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdate(resourceGroupName string, circuitName string, peeringName string, peeringParameters ExpressRouteCircuitPeering, cancel <-chan struct{}) (<-chan ExpressRouteCircuitPeering, <-chan error) { - resultChan := make(chan ExpressRouteCircuitPeering, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result ExpressRouteCircuitPeering - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, circuitName, peeringName, peeringParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates a peering in the specified express route circuits. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// peeringName - the name of the peering. +// peeringParameters - parameters supplied to the create or update express route circuit peering operation. 
+func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, peeringParameters ExpressRouteCircuitPeering) (result ExpressRouteCircuitPeeringsCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: peeringParameters, + Constraints: []validation.Constraint{{Target: "peeringParameters.ExpressRouteCircuitPeeringPropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "peeringParameters.ExpressRouteCircuitPeeringPropertiesFormat.PeerASN", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "peeringParameters.ExpressRouteCircuitPeeringPropertiesFormat.PeerASN", Name: validation.InclusiveMaximum, Rule: int64(4294967295), Chain: nil}, + {Target: "peeringParameters.ExpressRouteCircuitPeeringPropertiesFormat.PeerASN", Name: validation.InclusiveMinimum, Rule: 1, Chain: nil}, + }}, + }}}}}); err != nil { + return result, validation.NewError("network.ExpressRouteCircuitPeeringsClient", "CreateOrUpdate", err.Error()) + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, circuitName, peeringName, peeringParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdatePreparer(resourceGroupName string, circuitName string, peeringName string, peeringParameters ExpressRouteCircuitPeering, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, peeringParameters ExpressRouteCircuitPeering) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "peeringName": autorest.Encode("path", peeringName), @@ -93,27 +83,36 @@ func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdatePreparer(resourceG "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings/{peeringName}", pathParameters), autorest.WithJSON(peeringParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdateSender(req *http.Request) (future ExpressRouteCircuitPeeringsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -129,48 +128,29 @@ func (client ExpressRouteCircuitPeeringsClient) CreateOrUpdateResponder(resp *ht return } -// Delete deletes the specified peering from the specified express route -// circuit. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. peeringName is the name of the peering. 
-func (client ExpressRouteCircuitPeeringsClient) Delete(resourceGroupName string, circuitName string, peeringName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, circuitName, peeringName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified peering from the specified express route circuit. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// peeringName - the name of the peering. +func (client ExpressRouteCircuitPeeringsClient) Delete(ctx context.Context, resourceGroupName string, circuitName string, peeringName string) (result ExpressRouteCircuitPeeringsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, circuitName, peeringName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client ExpressRouteCircuitPeeringsClient) DeletePreparer(resourceGroupName string, circuitName string, peeringName string, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitPeeringsClient) DeletePreparer(ctx context.Context, resourceGroupName string, circuitName string, peeringName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "peeringName": autorest.Encode("path", peeringName), @@ -178,7 +158,7 @@ func (client ExpressRouteCircuitPeeringsClient) DeletePreparer(resourceGroupName "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -188,15 +168,24 @@ func (client ExpressRouteCircuitPeeringsClient) DeletePreparer(resourceGroupName autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings/{peeringName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. 
-func (client ExpressRouteCircuitPeeringsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitPeeringsClient) DeleteSender(req *http.Request) (future ExpressRouteCircuitPeeringsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -211,13 +200,13 @@ func (client ExpressRouteCircuitPeeringsClient) DeleteResponder(resp *http.Respo return } -// Get gets the specified authorization from the specified express route -// circuit. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. peeringName is the name of the peering. -func (client ExpressRouteCircuitPeeringsClient) Get(resourceGroupName string, circuitName string, peeringName string) (result ExpressRouteCircuitPeering, err error) { - req, err := client.GetPreparer(resourceGroupName, circuitName, peeringName) +// Get gets the specified authorization from the specified express route circuit. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// peeringName - the name of the peering. +func (client ExpressRouteCircuitPeeringsClient) Get(ctx context.Context, resourceGroupName string, circuitName string, peeringName string) (result ExpressRouteCircuitPeering, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, circuitName, peeringName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "Get", nil, "Failure preparing request") return @@ -239,7 +228,7 @@ func (client ExpressRouteCircuitPeeringsClient) Get(resourceGroupName string, ci } // GetPreparer prepares the Get request. -func (client ExpressRouteCircuitPeeringsClient) GetPreparer(resourceGroupName string, circuitName string, peeringName string) (*http.Request, error) { +func (client ExpressRouteCircuitPeeringsClient) GetPreparer(ctx context.Context, resourceGroupName string, circuitName string, peeringName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "peeringName": autorest.Encode("path", peeringName), @@ -247,7 +236,7 @@ func (client ExpressRouteCircuitPeeringsClient) GetPreparer(resourceGroupName st "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -257,13 +246,14 @@ func (client ExpressRouteCircuitPeeringsClient) GetPreparer(resourceGroupName st autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings/{peeringName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. 
The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitPeeringsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -280,11 +270,12 @@ func (client ExpressRouteCircuitPeeringsClient) GetResponder(resp *http.Response } // List gets all peerings in a specified express route circuit. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. -func (client ExpressRouteCircuitPeeringsClient) List(resourceGroupName string, circuitName string) (result ExpressRouteCircuitPeeringListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, circuitName) +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +func (client ExpressRouteCircuitPeeringsClient) List(ctx context.Context, resourceGroupName string, circuitName string) (result ExpressRouteCircuitPeeringListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, circuitName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "List", nil, "Failure preparing request") return @@ -292,12 +283,12 @@ func (client ExpressRouteCircuitPeeringsClient) List(resourceGroupName string, c resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.ercplr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.ercplr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "List", resp, "Failure responding to request") } @@ -306,14 +297,14 @@ func (client ExpressRouteCircuitPeeringsClient) List(resourceGroupName string, c } // ListPreparer prepares the List request. -func (client ExpressRouteCircuitPeeringsClient) ListPreparer(resourceGroupName string, circuitName string) (*http.Request, error) { +func (client ExpressRouteCircuitPeeringsClient) ListPreparer(ctx context.Context, resourceGroupName string, circuitName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -323,13 +314,14 @@ func (client ExpressRouteCircuitPeeringsClient) ListPreparer(resourceGroupName s autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. 
func (client ExpressRouteCircuitPeeringsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -345,26 +337,29 @@ func (client ExpressRouteCircuitPeeringsClient) ListResponder(resp *http.Respons return } -// ListNextResults retrieves the next set of results, if any. -func (client ExpressRouteCircuitPeeringsClient) ListNextResults(lastResults ExpressRouteCircuitPeeringListResult) (result ExpressRouteCircuitPeeringListResult, err error) { - req, err := lastResults.ExpressRouteCircuitPeeringListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client ExpressRouteCircuitPeeringsClient) listNextResults(lastResults ExpressRouteCircuitPeeringListResult) (result ExpressRouteCircuitPeeringListResult, err error) { + req, err := lastResults.expressRouteCircuitPeeringListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ExpressRouteCircuitPeeringsClient) ListComplete(ctx context.Context, resourceGroupName string, circuitName string) (result ExpressRouteCircuitPeeringListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, circuitName) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuits.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuits.go similarity index 55% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuits.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuits.go index b60aa10a4..b48f75e10 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressroutecircuits.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressroutecircuits.go @@ -14,103 +14,90 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. 
+// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// ExpressRouteCircuitsClient is the composite Swagger for Network Client +// ExpressRouteCircuitsClient is the network Client type ExpressRouteCircuitsClient struct { - ManagementClient + BaseClient } -// NewExpressRouteCircuitsClient creates an instance of the -// ExpressRouteCircuitsClient client. +// NewExpressRouteCircuitsClient creates an instance of the ExpressRouteCircuitsClient client. func NewExpressRouteCircuitsClient(subscriptionID string) ExpressRouteCircuitsClient { return NewExpressRouteCircuitsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewExpressRouteCircuitsClientWithBaseURI creates an instance of the -// ExpressRouteCircuitsClient client. +// NewExpressRouteCircuitsClientWithBaseURI creates an instance of the ExpressRouteCircuitsClient client. func NewExpressRouteCircuitsClientWithBaseURI(baseURI string, subscriptionID string) ExpressRouteCircuitsClient { return ExpressRouteCircuitsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates an express route circuit. This method may -// poll for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the circuit. parameters is parameters supplied to the create or update -// express route circuit operation. -func (client ExpressRouteCircuitsClient) CreateOrUpdate(resourceGroupName string, circuitName string, parameters ExpressRouteCircuit, cancel <-chan struct{}) (<-chan ExpressRouteCircuit, <-chan error) { - resultChan := make(chan ExpressRouteCircuit, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result ExpressRouteCircuit - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, circuitName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates an express route circuit. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the circuit. +// parameters - parameters supplied to the create or update express route circuit operation. 
+func (client ExpressRouteCircuitsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, circuitName string, parameters ExpressRouteCircuit) (result ExpressRouteCircuitsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, circuitName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client ExpressRouteCircuitsClient) CreateOrUpdatePreparer(resourceGroupName string, circuitName string, parameters ExpressRouteCircuit, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, circuitName string, parameters ExpressRouteCircuit) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client ExpressRouteCircuitsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitsClient) CreateOrUpdateSender(req *http.Request) (future ExpressRouteCircuitsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. 
The method always @@ -119,62 +106,42 @@ func (client ExpressRouteCircuitsClient) CreateOrUpdateResponder(resp *http.Resp err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } -// Delete deletes the specified express route circuit. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. -func (client ExpressRouteCircuitsClient) Delete(resourceGroupName string, circuitName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, circuitName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified express route circuit. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +func (client ExpressRouteCircuitsClient) Delete(ctx context.Context, resourceGroupName string, circuitName string) (result ExpressRouteCircuitsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, circuitName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. 
-func (client ExpressRouteCircuitsClient) DeletePreparer(resourceGroupName string, circuitName string, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) DeletePreparer(ctx context.Context, resourceGroupName string, circuitName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -184,15 +151,24 @@ func (client ExpressRouteCircuitsClient) DeletePreparer(resourceGroupName string autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client ExpressRouteCircuitsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitsClient) DeleteSender(req *http.Request) (future ExpressRouteCircuitsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -201,18 +177,18 @@ func (client ExpressRouteCircuitsClient) DeleteResponder(resp *http.Response) (r err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusAccepted, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets information about the specified express route circuit. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of express route circuit. -func (client ExpressRouteCircuitsClient) Get(resourceGroupName string, circuitName string) (result ExpressRouteCircuit, err error) { - req, err := client.GetPreparer(resourceGroupName, circuitName) +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of express route circuit. +func (client ExpressRouteCircuitsClient) Get(ctx context.Context, resourceGroupName string, circuitName string) (result ExpressRouteCircuit, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, circuitName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "Get", nil, "Failure preparing request") return @@ -234,14 +210,14 @@ func (client ExpressRouteCircuitsClient) Get(resourceGroupName string, circuitNa } // GetPreparer prepares the Get request. 
-func (client ExpressRouteCircuitsClient) GetPreparer(resourceGroupName string, circuitName string) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) GetPreparer(ctx context.Context, resourceGroupName string, circuitName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -251,13 +227,14 @@ func (client ExpressRouteCircuitsClient) GetPreparer(resourceGroupName string, c autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -273,13 +250,13 @@ func (client ExpressRouteCircuitsClient) GetResponder(resp *http.Response) (resu return } -// GetPeeringStats gets all stats from an express route circuit in a resource -// group. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. peeringName is the name of the peering. -func (client ExpressRouteCircuitsClient) GetPeeringStats(resourceGroupName string, circuitName string, peeringName string) (result ExpressRouteCircuitStats, err error) { - req, err := client.GetPeeringStatsPreparer(resourceGroupName, circuitName, peeringName) +// GetPeeringStats gets all stats from an express route circuit in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// peeringName - the name of the peering. +func (client ExpressRouteCircuitsClient) GetPeeringStats(ctx context.Context, resourceGroupName string, circuitName string, peeringName string) (result ExpressRouteCircuitStats, err error) { + req, err := client.GetPeeringStatsPreparer(ctx, resourceGroupName, circuitName, peeringName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "GetPeeringStats", nil, "Failure preparing request") return @@ -301,7 +278,7 @@ func (client ExpressRouteCircuitsClient) GetPeeringStats(resourceGroupName strin } // GetPeeringStatsPreparer prepares the GetPeeringStats request. 
-func (client ExpressRouteCircuitsClient) GetPeeringStatsPreparer(resourceGroupName string, circuitName string, peeringName string) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) GetPeeringStatsPreparer(ctx context.Context, resourceGroupName string, circuitName string, peeringName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "peeringName": autorest.Encode("path", peeringName), @@ -309,7 +286,7 @@ func (client ExpressRouteCircuitsClient) GetPeeringStatsPreparer(resourceGroupNa "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -319,13 +296,14 @@ func (client ExpressRouteCircuitsClient) GetPeeringStatsPreparer(resourceGroupNa autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings/{peeringName}/stats", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetPeeringStatsSender sends the GetPeeringStats request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitsClient) GetPeeringStatsSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetPeeringStatsResponder handles the response to the GetPeeringStats request. The method always @@ -341,13 +319,12 @@ func (client ExpressRouteCircuitsClient) GetPeeringStatsResponder(resp *http.Res return } -// GetStats gets all the stats from an express route circuit in a resource -// group. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. -func (client ExpressRouteCircuitsClient) GetStats(resourceGroupName string, circuitName string) (result ExpressRouteCircuitStats, err error) { - req, err := client.GetStatsPreparer(resourceGroupName, circuitName) +// GetStats gets all the stats from an express route circuit in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +func (client ExpressRouteCircuitsClient) GetStats(ctx context.Context, resourceGroupName string, circuitName string) (result ExpressRouteCircuitStats, err error) { + req, err := client.GetStatsPreparer(ctx, resourceGroupName, circuitName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "GetStats", nil, "Failure preparing request") return @@ -369,14 +346,14 @@ func (client ExpressRouteCircuitsClient) GetStats(resourceGroupName string, circ } // GetStatsPreparer prepares the GetStats request. 
-func (client ExpressRouteCircuitsClient) GetStatsPreparer(resourceGroupName string, circuitName string) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) GetStatsPreparer(ctx context.Context, resourceGroupName string, circuitName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -386,13 +363,14 @@ func (client ExpressRouteCircuitsClient) GetStatsPreparer(resourceGroupName stri autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/stats", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetStatsSender sends the GetStats request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitsClient) GetStatsSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetStatsResponder handles the response to the GetStats request. The method always @@ -409,10 +387,11 @@ func (client ExpressRouteCircuitsClient) GetStatsResponder(resp *http.Response) } // List gets all the express route circuits in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client ExpressRouteCircuitsClient) List(resourceGroupName string) (result ExpressRouteCircuitListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. +func (client ExpressRouteCircuitsClient) List(ctx context.Context, resourceGroupName string) (result ExpressRouteCircuitListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "List", nil, "Failure preparing request") return @@ -420,12 +399,12 @@ func (client ExpressRouteCircuitsClient) List(resourceGroupName string) (result resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.erclr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.erclr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "List", resp, "Failure responding to request") } @@ -434,13 +413,13 @@ func (client ExpressRouteCircuitsClient) List(resourceGroupName string) (result } // ListPreparer prepares the List request. 
-func (client ExpressRouteCircuitsClient) ListPreparer(resourceGroupName string) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -450,13 +429,14 @@ func (client ExpressRouteCircuitsClient) ListPreparer(resourceGroupName string) autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -472,33 +452,37 @@ func (client ExpressRouteCircuitsClient) ListResponder(resp *http.Response) (res return } -// ListNextResults retrieves the next set of results, if any. -func (client ExpressRouteCircuitsClient) ListNextResults(lastResults ExpressRouteCircuitListResult) (result ExpressRouteCircuitListResult, err error) { - req, err := lastResults.ExpressRouteCircuitListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client ExpressRouteCircuitsClient) listNextResults(lastResults ExpressRouteCircuitListResult) (result ExpressRouteCircuitListResult, err error) { + req, err := lastResults.expressRouteCircuitListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "listNextResults", resp, "Failure responding to next results request") } + return +} +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client ExpressRouteCircuitsClient) ListComplete(ctx context.Context, resourceGroupName string) (result ExpressRouteCircuitListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) return } // ListAll gets all the express route circuits in a subscription. -func (client ExpressRouteCircuitsClient) ListAll() (result ExpressRouteCircuitListResult, err error) { - req, err := client.ListAllPreparer() +func (client ExpressRouteCircuitsClient) ListAll(ctx context.Context) (result ExpressRouteCircuitListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListAll", nil, "Failure preparing request") return @@ -506,12 +490,12 @@ func (client ExpressRouteCircuitsClient) ListAll() (result ExpressRouteCircuitLi resp, err := client.ListAllSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.erclr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListAll", resp, "Failure sending request") return } - result, err = client.ListAllResponder(resp) + result.erclr, err = client.ListAllResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListAll", resp, "Failure responding to request") } @@ -520,12 +504,12 @@ func (client ExpressRouteCircuitsClient) ListAll() (result ExpressRouteCircuitLi } // ListAllPreparer prepares the ListAll request. -func (client ExpressRouteCircuitsClient) ListAllPreparer() (*http.Request, error) { +func (client ExpressRouteCircuitsClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -535,13 +519,14 @@ func (client ExpressRouteCircuitsClient) ListAllPreparer() (*http.Request, error autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteCircuits", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListAllSender sends the ListAll request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteCircuitsClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListAllResponder handles the response to the ListAll request. The method always @@ -557,74 +542,57 @@ func (client ExpressRouteCircuitsClient) ListAllResponder(resp *http.Response) ( return } -// ListAllNextResults retrieves the next set of results, if any. -func (client ExpressRouteCircuitsClient) ListAllNextResults(lastResults ExpressRouteCircuitListResult) (result ExpressRouteCircuitListResult, err error) { - req, err := lastResults.ExpressRouteCircuitListResultPreparer() +// listAllNextResults retrieves the next set of results, if any. 
+func (client ExpressRouteCircuitsClient) listAllNextResults(lastResults ExpressRouteCircuitListResult) (result ExpressRouteCircuitListResult, err error) { + req, err := lastResults.expressRouteCircuitListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListAll", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "listAllNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListAllSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListAll", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "listAllNextResults", resp, "Failure sending next results request") } - result, err = client.ListAllResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListAll", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client ExpressRouteCircuitsClient) ListAllComplete(ctx context.Context) (result ExpressRouteCircuitListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// ListArpTable gets the currently advertised ARP table associated with the express route circuit in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// peeringName - the name of the peering. +// devicePath - the path of the device. +func (client ExpressRouteCircuitsClient) ListArpTable(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, devicePath string) (result ExpressRouteCircuitsListArpTableFuture, err error) { + req, err := client.ListArpTablePreparer(ctx, resourceGroupName, circuitName, peeringName, devicePath) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListArpTable", nil, "Failure preparing request") + return + } + + result, err = client.ListArpTableSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListArpTable", result.Response(), "Failure sending request") + return } return } -// ListArpTable gets the currently advertised ARP table associated with the -// express route circuit in a resource group. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. peeringName is the name of the peering. -// devicePath is the path of the device. 
-func (client ExpressRouteCircuitsClient) ListArpTable(resourceGroupName string, circuitName string, peeringName string, devicePath string, cancel <-chan struct{}) (<-chan ExpressRouteCircuitsArpTableListResult, <-chan error) { - resultChan := make(chan ExpressRouteCircuitsArpTableListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result ExpressRouteCircuitsArpTableListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ListArpTablePreparer(resourceGroupName, circuitName, peeringName, devicePath, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListArpTable", nil, "Failure preparing request") - return - } - - resp, err := client.ListArpTableSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListArpTable", resp, "Failure sending request") - return - } - - result, err = client.ListArpTableResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListArpTable", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - // ListArpTablePreparer prepares the ListArpTable request. -func (client ExpressRouteCircuitsClient) ListArpTablePreparer(resourceGroupName string, circuitName string, peeringName string, devicePath string, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) ListArpTablePreparer(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, devicePath string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "devicePath": autorest.Encode("path", devicePath), @@ -633,7 +601,7 @@ func (client ExpressRouteCircuitsClient) ListArpTablePreparer(resourceGroupName "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -643,15 +611,24 @@ func (client ExpressRouteCircuitsClient) ListArpTablePreparer(resourceGroupName autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings/{peeringName}/arpTables/{devicePath}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListArpTableSender sends the ListArpTable request. The method will close the // http.Response Body if it receives an error. 
-func (client ExpressRouteCircuitsClient) ListArpTableSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitsClient) ListArpTableSender(req *http.Request) (future ExpressRouteCircuitsListArpTableFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // ListArpTableResponder handles the response to the ListArpTable request. The method always @@ -667,50 +644,31 @@ func (client ExpressRouteCircuitsClient) ListArpTableResponder(resp *http.Respon return } -// ListRoutesTable gets the currently advertised routes table associated with -// the express route circuit in a resource group. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. peeringName is the name of the peering. -// devicePath is the path of the device. -func (client ExpressRouteCircuitsClient) ListRoutesTable(resourceGroupName string, circuitName string, peeringName string, devicePath string, cancel <-chan struct{}) (<-chan ExpressRouteCircuitsRoutesTableListResult, <-chan error) { - resultChan := make(chan ExpressRouteCircuitsRoutesTableListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result ExpressRouteCircuitsRoutesTableListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ListRoutesTablePreparer(resourceGroupName, circuitName, peeringName, devicePath, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTable", nil, "Failure preparing request") - return - } +// ListRoutesTable gets the currently advertised routes table associated with the express route circuit in a resource +// group. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// peeringName - the name of the peering. +// devicePath - the path of the device. 
+func (client ExpressRouteCircuitsClient) ListRoutesTable(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, devicePath string) (result ExpressRouteCircuitsListRoutesTableFuture, err error) { + req, err := client.ListRoutesTablePreparer(ctx, resourceGroupName, circuitName, peeringName, devicePath) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTable", nil, "Failure preparing request") + return + } - resp, err := client.ListRoutesTableSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTable", resp, "Failure sending request") - return - } + result, err = client.ListRoutesTableSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTable", result.Response(), "Failure sending request") + return + } - result, err = client.ListRoutesTableResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTable", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // ListRoutesTablePreparer prepares the ListRoutesTable request. -func (client ExpressRouteCircuitsClient) ListRoutesTablePreparer(resourceGroupName string, circuitName string, peeringName string, devicePath string, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) ListRoutesTablePreparer(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, devicePath string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "devicePath": autorest.Encode("path", devicePath), @@ -719,7 +677,7 @@ func (client ExpressRouteCircuitsClient) ListRoutesTablePreparer(resourceGroupNa "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -729,15 +687,24 @@ func (client ExpressRouteCircuitsClient) ListRoutesTablePreparer(resourceGroupNa autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings/{peeringName}/routeTables/{devicePath}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListRoutesTableSender sends the ListRoutesTable request. The method will close the // http.Response Body if it receives an error. 
-func (client ExpressRouteCircuitsClient) ListRoutesTableSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitsClient) ListRoutesTableSender(req *http.Request) (future ExpressRouteCircuitsListRoutesTableFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // ListRoutesTableResponder handles the response to the ListRoutesTable request. The method always @@ -753,50 +720,31 @@ func (client ExpressRouteCircuitsClient) ListRoutesTableResponder(resp *http.Res return } -// ListRoutesTableSummary gets the currently advertised routes table summary -// associated with the express route circuit in a resource group. This method -// may poll for completion. Polling can be canceled by passing the cancel -// channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. circuitName is the name -// of the express route circuit. peeringName is the name of the peering. -// devicePath is the path of the device. -func (client ExpressRouteCircuitsClient) ListRoutesTableSummary(resourceGroupName string, circuitName string, peeringName string, devicePath string, cancel <-chan struct{}) (<-chan ExpressRouteCircuitsRoutesTableSummaryListResult, <-chan error) { - resultChan := make(chan ExpressRouteCircuitsRoutesTableSummaryListResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result ExpressRouteCircuitsRoutesTableSummaryListResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ListRoutesTableSummaryPreparer(resourceGroupName, circuitName, peeringName, devicePath, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTableSummary", nil, "Failure preparing request") - return - } +// ListRoutesTableSummary gets the currently advertised routes table summary associated with the express route circuit +// in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the express route circuit. +// peeringName - the name of the peering. +// devicePath - the path of the device. 
+func (client ExpressRouteCircuitsClient) ListRoutesTableSummary(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, devicePath string) (result ExpressRouteCircuitsListRoutesTableSummaryFuture, err error) { + req, err := client.ListRoutesTableSummaryPreparer(ctx, resourceGroupName, circuitName, peeringName, devicePath) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTableSummary", nil, "Failure preparing request") + return + } - resp, err := client.ListRoutesTableSummarySender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTableSummary", resp, "Failure sending request") - return - } + result, err = client.ListRoutesTableSummarySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTableSummary", result.Response(), "Failure sending request") + return + } - result, err = client.ListRoutesTableSummaryResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "ListRoutesTableSummary", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // ListRoutesTableSummaryPreparer prepares the ListRoutesTableSummary request. -func (client ExpressRouteCircuitsClient) ListRoutesTableSummaryPreparer(resourceGroupName string, circuitName string, peeringName string, devicePath string, cancel <-chan struct{}) (*http.Request, error) { +func (client ExpressRouteCircuitsClient) ListRoutesTableSummaryPreparer(ctx context.Context, resourceGroupName string, circuitName string, peeringName string, devicePath string) (*http.Request, error) { pathParameters := map[string]interface{}{ "circuitName": autorest.Encode("path", circuitName), "devicePath": autorest.Encode("path", devicePath), @@ -805,7 +753,7 @@ func (client ExpressRouteCircuitsClient) ListRoutesTableSummaryPreparer(resource "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -815,15 +763,24 @@ func (client ExpressRouteCircuitsClient) ListRoutesTableSummaryPreparer(resource autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}/peerings/{peeringName}/routeTablesSummary/{devicePath}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListRoutesTableSummarySender sends the ListRoutesTableSummary request. The method will close the // http.Response Body if it receives an error. 
-func (client ExpressRouteCircuitsClient) ListRoutesTableSummarySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client ExpressRouteCircuitsClient) ListRoutesTableSummarySender(req *http.Request) (future ExpressRouteCircuitsListRoutesTableSummaryFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // ListRoutesTableSummaryResponder handles the response to the ListRoutesTableSummary request. The method always @@ -838,3 +795,77 @@ func (client ExpressRouteCircuitsClient) ListRoutesTableSummaryResponder(resp *h result.Response = autorest.Response{Response: resp} return } + +// UpdateTags updates an express route circuit tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// circuitName - the name of the circuit. +// parameters - parameters supplied to update express route circuit tags. +func (client ExpressRouteCircuitsClient) UpdateTags(ctx context.Context, resourceGroupName string, circuitName string, parameters TagsObject) (result ExpressRouteCircuitsUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, circuitName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsClient", "UpdateTags", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client ExpressRouteCircuitsClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, circuitName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "circuitName": autorest.Encode("path", circuitName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/expressRouteCircuits/{circuitName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. 
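Aside from the API-version bump to 2018-01-01, the substantive change in this file is that long-running calls such as ListArpTable, ListRoutesTable and UpdateTags now accept a context.Context and return a typed future instead of spawning a goroutine over a cancel channel. The sketch below (illustrative only, not part of the vendored diff) shows how calling code might drive one of these futures. It assumes an already-authorized ExpressRouteCircuitsClient and the go-autorest future helpers of this vintage — WaitForCompletion plus the generated Result method (newer go-autorest releases rename the former to WaitForCompletionRef); the resource-group, circuit and tag names are made up for the example.

```
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
	"github.com/Azure/go-autorest/autorest/to"
)

// updateCircuitTags drives the context/future-based UpdateTags call shown above.
func updateCircuitTags(client network.ExpressRouteCircuitsClient, resourceGroup, circuitName string) error {
	// The context carries cancellation and deadlines, replacing the old cancel channel.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// Hypothetical tag value, purely for illustration.
	tags := network.TagsObject{Tags: map[string]*string{"environment": to.StringPtr("packer-test")}}

	// UpdateTags returns immediately with a future describing the long-running operation.
	future, err := client.UpdateTags(ctx, resourceGroup, circuitName, tags)
	if err != nil {
		return err
	}

	// Block until the service reports completion; polling is handled by go-autorest.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return err
	}

	// Fetch the final ExpressRouteCircuit from the completed future.
	circuit, err := future.Result(client)
	if err != nil {
		return err
	}
	fmt.Printf("updated tags on circuit %s\n", to.String(circuit.Name))
	return nil
}
```

Because the future embeds azure.Future, a caller can also skip the blocking wait and poll it from its own loop; the context passed to the original call bounds both the initial request and any subsequent polling.
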
+func (client ExpressRouteCircuitsClient) UpdateTagsSender(req *http.Request) (future ExpressRouteCircuitsUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. +func (client ExpressRouteCircuitsClient) UpdateTagsResponder(resp *http.Response) (result ExpressRouteCircuit, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressrouteserviceproviders.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressrouteserviceproviders.go similarity index 69% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressrouteserviceproviders.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressrouteserviceproviders.go index 94db9a4f6..9d8753de0 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/expressrouteserviceproviders.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/expressrouteserviceproviders.go @@ -14,37 +14,36 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// ExpressRouteServiceProvidersClient is the composite Swagger for Network -// Client +// ExpressRouteServiceProvidersClient is the network Client type ExpressRouteServiceProvidersClient struct { - ManagementClient + BaseClient } -// NewExpressRouteServiceProvidersClient creates an instance of the -// ExpressRouteServiceProvidersClient client. +// NewExpressRouteServiceProvidersClient creates an instance of the ExpressRouteServiceProvidersClient client. func NewExpressRouteServiceProvidersClient(subscriptionID string) ExpressRouteServiceProvidersClient { return NewExpressRouteServiceProvidersClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewExpressRouteServiceProvidersClientWithBaseURI creates an instance of the -// ExpressRouteServiceProvidersClient client. +// NewExpressRouteServiceProvidersClientWithBaseURI creates an instance of the ExpressRouteServiceProvidersClient +// client. func NewExpressRouteServiceProvidersClientWithBaseURI(baseURI string, subscriptionID string) ExpressRouteServiceProvidersClient { return ExpressRouteServiceProvidersClient{NewWithBaseURI(baseURI, subscriptionID)} } // List gets all the available express route service providers. 
-func (client ExpressRouteServiceProvidersClient) List() (result ExpressRouteServiceProviderListResult, err error) { - req, err := client.ListPreparer() +func (client ExpressRouteServiceProvidersClient) List(ctx context.Context) (result ExpressRouteServiceProviderListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "List", nil, "Failure preparing request") return @@ -52,12 +51,12 @@ func (client ExpressRouteServiceProvidersClient) List() (result ExpressRouteServ resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.ersplr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.ersplr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "List", resp, "Failure responding to request") } @@ -66,12 +65,12 @@ func (client ExpressRouteServiceProvidersClient) List() (result ExpressRouteServ } // ListPreparer prepares the List request. -func (client ExpressRouteServiceProvidersClient) ListPreparer() (*http.Request, error) { +func (client ExpressRouteServiceProvidersClient) ListPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -81,13 +80,14 @@ func (client ExpressRouteServiceProvidersClient) ListPreparer() (*http.Request, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/expressRouteServiceProviders", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client ExpressRouteServiceProvidersClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -103,26 +103,29 @@ func (client ExpressRouteServiceProvidersClient) ListResponder(resp *http.Respon return } -// ListNextResults retrieves the next set of results, if any. -func (client ExpressRouteServiceProvidersClient) ListNextResults(lastResults ExpressRouteServiceProviderListResult) (result ExpressRouteServiceProviderListResult, err error) { - req, err := lastResults.ExpressRouteServiceProviderListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client ExpressRouteServiceProvidersClient) listNextResults(lastResults ExpressRouteServiceProviderListResult) (result ExpressRouteServiceProviderListResult, err error) { + req, err := lastResults.expressRouteServiceProviderListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.ExpressRouteServiceProvidersClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ExpressRouteServiceProvidersClient) ListComplete(ctx context.Context) (result ExpressRouteServiceProviderListResultIterator, err error) { + result.page, err = client.List(ctx) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/inboundnatrules.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/inboundnatrules.go new file mode 100644 index 000000000..5e976c919 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/inboundnatrules.go @@ -0,0 +1,376 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// InboundNatRulesClient is the network Client +type InboundNatRulesClient struct { + BaseClient +} + +// NewInboundNatRulesClient creates an instance of the InboundNatRulesClient client. +func NewInboundNatRulesClient(subscriptionID string) InboundNatRulesClient { + return NewInboundNatRulesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewInboundNatRulesClientWithBaseURI creates an instance of the InboundNatRulesClient client. 
+func NewInboundNatRulesClientWithBaseURI(baseURI string, subscriptionID string) InboundNatRulesClient { + return InboundNatRulesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates a load balancer inbound nat rule. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// inboundNatRuleName - the name of the inbound nat rule. +// inboundNatRuleParameters - parameters supplied to the create or update inbound nat rule operation. +func (client InboundNatRulesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, loadBalancerName string, inboundNatRuleName string, inboundNatRuleParameters InboundNatRule) (result InboundNatRulesCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: inboundNatRuleParameters, + Constraints: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat.BackendIPConfiguration", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat.BackendIPConfiguration.InterfaceIPConfigurationPropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat.BackendIPConfiguration.InterfaceIPConfigurationPropertiesFormat.PublicIPAddress", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat.BackendIPConfiguration.InterfaceIPConfigurationPropertiesFormat.PublicIPAddress.PublicIPAddressPropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat.BackendIPConfiguration.InterfaceIPConfigurationPropertiesFormat.PublicIPAddress.PublicIPAddressPropertiesFormat.IPConfiguration", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat.BackendIPConfiguration.InterfaceIPConfigurationPropertiesFormat.PublicIPAddress.PublicIPAddressPropertiesFormat.IPConfiguration.IPConfigurationPropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "inboundNatRuleParameters.InboundNatRulePropertiesFormat.BackendIPConfiguration.InterfaceIPConfigurationPropertiesFormat.PublicIPAddress.PublicIPAddressPropertiesFormat.IPConfiguration.IPConfigurationPropertiesFormat.PublicIPAddress", Name: validation.Null, Rule: false, Chain: nil}}}, + }}, + }}, + }}, + }}, + }}, + }}}}}); err != nil { + return result, validation.NewError("network.InboundNatRulesClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, loadBalancerName, inboundNatRuleName, inboundNatRuleParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
+func (client InboundNatRulesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, inboundNatRuleName string, inboundNatRuleParameters InboundNatRule) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "inboundNatRuleName": autorest.Encode("path", inboundNatRuleName), + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/inboundNatRules/{inboundNatRuleName}", pathParameters), + autorest.WithJSON(inboundNatRuleParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client InboundNatRulesClient) CreateOrUpdateSender(req *http.Request) (future InboundNatRulesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client InboundNatRulesClient) CreateOrUpdateResponder(resp *http.Response) (result InboundNatRule, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified load balancer inbound nat rule. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// inboundNatRuleName - the name of the inbound nat rule. +func (client InboundNatRulesClient) Delete(ctx context.Context, resourceGroupName string, loadBalancerName string, inboundNatRuleName string) (result InboundNatRulesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, loadBalancerName, inboundNatRuleName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. 
+func (client InboundNatRulesClient) DeletePreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, inboundNatRuleName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "inboundNatRuleName": autorest.Encode("path", inboundNatRuleName), + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/inboundNatRules/{inboundNatRuleName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client InboundNatRulesClient) DeleteSender(req *http.Request) (future InboundNatRulesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client InboundNatRulesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets the specified load balancer inbound nat rule. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// inboundNatRuleName - the name of the inbound nat rule. +// expand - expands referenced resources. +func (client InboundNatRulesClient) Get(ctx context.Context, resourceGroupName string, loadBalancerName string, inboundNatRuleName string, expand string) (result InboundNatRule, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, loadBalancerName, inboundNatRuleName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client InboundNatRulesClient) GetPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, inboundNatRuleName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "inboundNatRuleName": autorest.Encode("path", inboundNatRuleName), + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/inboundNatRules/{inboundNatRuleName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client InboundNatRulesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client InboundNatRulesClient) GetResponder(resp *http.Response) (result InboundNatRule, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all the inbound nat rules in a load balancer. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +func (client InboundNatRulesClient) List(ctx context.Context, resourceGroupName string, loadBalancerName string) (result InboundNatRuleListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, loadBalancerName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.inrlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "List", resp, "Failure sending request") + return + } + + result.inrlr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client InboundNatRulesClient) ListPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/inboundNatRules", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client InboundNatRulesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client InboundNatRulesClient) ListResponder(resp *http.Response) (result InboundNatRuleListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client InboundNatRulesClient) listNextResults(lastResults InboundNatRuleListResult) (result InboundNatRuleListResult, err error) { + req, err := lastResults.inboundNatRuleListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client InboundNatRulesClient) ListComplete(ctx context.Context, resourceGroupName string, loadBalancerName string) (result InboundNatRuleListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, loadBalancerName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaceipconfigurations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaceipconfigurations.go new file mode 100644 index 000000000..86807b005 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaceipconfigurations.go @@ -0,0 +1,204 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. 
+// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// InterfaceIPConfigurationsClient is the network Client +type InterfaceIPConfigurationsClient struct { + BaseClient +} + +// NewInterfaceIPConfigurationsClient creates an instance of the InterfaceIPConfigurationsClient client. +func NewInterfaceIPConfigurationsClient(subscriptionID string) InterfaceIPConfigurationsClient { + return NewInterfaceIPConfigurationsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewInterfaceIPConfigurationsClientWithBaseURI creates an instance of the InterfaceIPConfigurationsClient client. +func NewInterfaceIPConfigurationsClientWithBaseURI(baseURI string, subscriptionID string) InterfaceIPConfigurationsClient { + return InterfaceIPConfigurationsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Get gets the specified network interface ip configuration. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +// IPConfigurationName - the name of the ip configuration name. +func (client InterfaceIPConfigurationsClient) Get(ctx context.Context, resourceGroupName string, networkInterfaceName string, IPConfigurationName string) (result InterfaceIPConfiguration, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkInterfaceName, IPConfigurationName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client InterfaceIPConfigurationsClient) GetPreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string, IPConfigurationName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "ipConfigurationName": autorest.Encode("path", IPConfigurationName), + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}/ipConfigurations/{ipConfigurationName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client InterfaceIPConfigurationsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client InterfaceIPConfigurationsClient) GetResponder(resp *http.Response) (result InterfaceIPConfiguration, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List get all ip configurations in a network interface +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +func (client InterfaceIPConfigurationsClient) List(ctx context.Context, resourceGroupName string, networkInterfaceName string) (result InterfaceIPConfigurationListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, networkInterfaceName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.iiclr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "List", resp, "Failure sending request") + return + } + + result.iiclr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client InterfaceIPConfigurationsClient) ListPreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}/ipConfigurations", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client InterfaceIPConfigurationsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client InterfaceIPConfigurationsClient) ListResponder(resp *http.Response) (result InterfaceIPConfigurationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client InterfaceIPConfigurationsClient) listNextResults(lastResults InterfaceIPConfigurationListResult) (result InterfaceIPConfigurationListResult, err error) { + req, err := lastResults.interfaceIPConfigurationListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceIPConfigurationsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
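The ListComplete helpers added throughout these clients wrap the new Page-typed results in an iterator that fetches subsequent pages transparently, so callers no longer chain ListNextResults by hand. A minimal consumption sketch follows (illustrative only, not part of the vendored diff); it assumes the generated iterator exposes the usual NotDone/Value/Next methods, and the resource-group and NIC names are placeholders.

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

// printIPConfigurations walks every IP configuration on a NIC via the iterator.
func printIPConfigurations(client network.InterfaceIPConfigurationsClient, resourceGroup, nicName string) error {
	ctx := context.Background()

	// ListComplete returns an iterator that crosses page boundaries on demand.
	it, err := client.ListComplete(ctx, resourceGroup, nicName)
	if err != nil {
		return err
	}
	for it.NotDone() {
		cfg := it.Value()
		if cfg.Name != nil {
			fmt.Println(*cfg.Name)
		}
		// Next advances to the following element, requesting the next page when needed.
		if err := it.Next(); err != nil {
			return err
		}
	}
	return nil
}
```

Callers that want page-at-a-time access (for example, to bound memory) can instead use List and loop over the returned Page with its own NotDone/Values/Next methods.
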
+func (client InterfaceIPConfigurationsClient) ListComplete(ctx context.Context, resourceGroupName string, networkInterfaceName string) (result InterfaceIPConfigurationListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, networkInterfaceName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaceloadbalancers.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaceloadbalancers.go new file mode 100644 index 000000000..cda30b0f4 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaceloadbalancers.go @@ -0,0 +1,135 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// InterfaceLoadBalancersClient is the network Client +type InterfaceLoadBalancersClient struct { + BaseClient +} + +// NewInterfaceLoadBalancersClient creates an instance of the InterfaceLoadBalancersClient client. +func NewInterfaceLoadBalancersClient(subscriptionID string) InterfaceLoadBalancersClient { + return NewInterfaceLoadBalancersClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewInterfaceLoadBalancersClientWithBaseURI creates an instance of the InterfaceLoadBalancersClient client. +func NewInterfaceLoadBalancersClientWithBaseURI(baseURI string, subscriptionID string) InterfaceLoadBalancersClient { + return InterfaceLoadBalancersClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List list all load balancers in a network interface. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +func (client InterfaceLoadBalancersClient) List(ctx context.Context, resourceGroupName string, networkInterfaceName string) (result InterfaceLoadBalancerListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, networkInterfaceName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceLoadBalancersClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.ilblr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfaceLoadBalancersClient", "List", resp, "Failure sending request") + return + } + + result.ilblr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceLoadBalancersClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client InterfaceLoadBalancersClient) ListPreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}/loadBalancers", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client InterfaceLoadBalancersClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client InterfaceLoadBalancersClient) ListResponder(resp *http.Response) (result InterfaceLoadBalancerListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client InterfaceLoadBalancersClient) listNextResults(lastResults InterfaceLoadBalancerListResult) (result InterfaceLoadBalancerListResult, err error) { + req, err := lastResults.interfaceLoadBalancerListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InterfaceLoadBalancersClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InterfaceLoadBalancersClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfaceLoadBalancersClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client InterfaceLoadBalancersClient) ListComplete(ctx context.Context, resourceGroupName string, networkInterfaceName string) (result InterfaceLoadBalancerListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, networkInterfaceName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaces.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaces.go new file mode 100644 index 000000000..6165ccc6d --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/interfaces.go @@ -0,0 +1,1104 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// InterfacesClient is the network Client +type InterfacesClient struct { + BaseClient +} + +// NewInterfacesClient creates an instance of the InterfacesClient client. +func NewInterfacesClient(subscriptionID string) InterfacesClient { + return NewInterfacesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewInterfacesClientWithBaseURI creates an instance of the InterfacesClient client. +func NewInterfacesClientWithBaseURI(baseURI string, subscriptionID string) InterfacesClient { + return InterfacesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates a network interface. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +// parameters - parameters supplied to the create or update network interface operation. +func (client InterfacesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, networkInterfaceName string, parameters Interface) (result InterfacesCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, networkInterfaceName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
+func (client InterfacesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string, parameters Interface) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) CreateOrUpdateSender(req *http.Request) (future InterfacesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client InterfacesClient) CreateOrUpdateResponder(resp *http.Response) (result Interface, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified network interface. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +func (client InterfacesClient) Delete(ctx context.Context, resourceGroupName string, networkInterfaceName string) (result InterfacesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, networkInterfaceName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. 
+func (client InterfacesClient) DeletePreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) DeleteSender(req *http.Request) (future InterfacesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client InterfacesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets information about the specified network interface. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +// expand - expands referenced resources. +func (client InterfacesClient) Get(ctx context.Context, resourceGroupName string, networkInterfaceName string, expand string) (result Interface, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkInterfaceName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client InterfacesClient) GetPreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client InterfacesClient) GetResponder(resp *http.Response) (result Interface, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetEffectiveRouteTable gets all route tables applied to a network interface. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +func (client InterfacesClient) GetEffectiveRouteTable(ctx context.Context, resourceGroupName string, networkInterfaceName string) (result InterfacesGetEffectiveRouteTableFuture, err error) { + req, err := client.GetEffectiveRouteTablePreparer(ctx, resourceGroupName, networkInterfaceName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetEffectiveRouteTable", nil, "Failure preparing request") + return + } + + result, err = client.GetEffectiveRouteTableSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetEffectiveRouteTable", result.Response(), "Failure sending request") + return + } + + return +} + +// GetEffectiveRouteTablePreparer prepares the GetEffectiveRouteTable request. 
+func (client InterfacesClient) GetEffectiveRouteTablePreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}/effectiveRouteTable", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetEffectiveRouteTableSender sends the GetEffectiveRouteTable request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) GetEffectiveRouteTableSender(req *http.Request) (future InterfacesGetEffectiveRouteTableFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetEffectiveRouteTableResponder handles the response to the GetEffectiveRouteTable request. The method always +// closes the http.Response Body. +func (client InterfacesClient) GetEffectiveRouteTableResponder(resp *http.Response) (result EffectiveRouteListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetVirtualMachineScaleSetIPConfiguration get the specified network interface ip configuration in a virtual machine +// scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +// virtualmachineIndex - the virtual machine index. +// networkInterfaceName - the name of the network interface. +// IPConfigurationName - the name of the ip configuration. +// expand - expands referenced resources. 
+func (client InterfacesClient) GetVirtualMachineScaleSetIPConfiguration(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, IPConfigurationName string, expand string) (result InterfaceIPConfiguration, err error) { + req, err := client.GetVirtualMachineScaleSetIPConfigurationPreparer(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, IPConfigurationName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetIPConfiguration", nil, "Failure preparing request") + return + } + + resp, err := client.GetVirtualMachineScaleSetIPConfigurationSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetIPConfiguration", resp, "Failure sending request") + return + } + + result, err = client.GetVirtualMachineScaleSetIPConfigurationResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetIPConfiguration", resp, "Failure responding to request") + } + + return +} + +// GetVirtualMachineScaleSetIPConfigurationPreparer prepares the GetVirtualMachineScaleSetIPConfiguration request. +func (client InterfacesClient) GetVirtualMachineScaleSetIPConfigurationPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, IPConfigurationName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "ipConfigurationName": autorest.Encode("path", IPConfigurationName), + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces/{networkInterfaceName}/ipConfigurations/{ipConfigurationName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetVirtualMachineScaleSetIPConfigurationSender sends the GetVirtualMachineScaleSetIPConfiguration request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) GetVirtualMachineScaleSetIPConfigurationSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetVirtualMachineScaleSetIPConfigurationResponder handles the response to the GetVirtualMachineScaleSetIPConfiguration request. The method always +// closes the http.Response Body. 
+func (client InterfacesClient) GetVirtualMachineScaleSetIPConfigurationResponder(resp *http.Response) (result InterfaceIPConfiguration, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetVirtualMachineScaleSetNetworkInterface get the specified network interface in a virtual machine scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +// virtualmachineIndex - the virtual machine index. +// networkInterfaceName - the name of the network interface. +// expand - expands referenced resources. +func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterface(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, expand string) (result Interface, err error) { + req, err := client.GetVirtualMachineScaleSetNetworkInterfacePreparer(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetNetworkInterface", nil, "Failure preparing request") + return + } + + resp, err := client.GetVirtualMachineScaleSetNetworkInterfaceSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetNetworkInterface", resp, "Failure sending request") + return + } + + result, err = client.GetVirtualMachineScaleSetNetworkInterfaceResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "GetVirtualMachineScaleSetNetworkInterface", resp, "Failure responding to request") + } + + return +} + +// GetVirtualMachineScaleSetNetworkInterfacePreparer prepares the GetVirtualMachineScaleSetNetworkInterface request. 
+func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterfacePreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces/{networkInterfaceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetVirtualMachineScaleSetNetworkInterfaceSender sends the GetVirtualMachineScaleSetNetworkInterface request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterfaceSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetVirtualMachineScaleSetNetworkInterfaceResponder handles the response to the GetVirtualMachineScaleSetNetworkInterface request. The method always +// closes the http.Response Body. +func (client InterfacesClient) GetVirtualMachineScaleSetNetworkInterfaceResponder(resp *http.Response) (result Interface, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all network interfaces in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +func (client InterfacesClient) List(ctx context.Context, resourceGroupName string) (result InterfaceListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.ilr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "List", resp, "Failure sending request") + return + } + + result.ilr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client InterfacesClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client InterfacesClient) ListResponder(resp *http.Response) (result InterfaceListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client InterfacesClient) listNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { + req, err := lastResults.interfaceListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client InterfacesClient) ListComplete(ctx context.Context, resourceGroupName string) (result InterfaceListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListAll gets all network interfaces in a subscription. 
+func (client InterfacesClient) ListAll(ctx context.Context) (result InterfaceListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.ilr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", resp, "Failure sending request") + return + } + + result.ilr, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. +func (client InterfacesClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/networkInterfaces", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. +func (client InterfacesClient) ListAllResponder(resp *http.Response) (result InterfaceListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAllNextResults retrieves the next set of results, if any. +func (client InterfacesClient) listAllNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { + req, err := lastResults.interfaceListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listAllNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listAllNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client InterfacesClient) ListAllComplete(ctx context.Context) (result InterfaceListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// ListEffectiveNetworkSecurityGroups gets all network security groups applied to a network interface. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +func (client InterfacesClient) ListEffectiveNetworkSecurityGroups(ctx context.Context, resourceGroupName string, networkInterfaceName string) (result InterfacesListEffectiveNetworkSecurityGroupsFuture, err error) { + req, err := client.ListEffectiveNetworkSecurityGroupsPreparer(ctx, resourceGroupName, networkInterfaceName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListEffectiveNetworkSecurityGroups", nil, "Failure preparing request") + return + } + + result, err = client.ListEffectiveNetworkSecurityGroupsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListEffectiveNetworkSecurityGroups", result.Response(), "Failure sending request") + return + } + + return +} + +// ListEffectiveNetworkSecurityGroupsPreparer prepares the ListEffectiveNetworkSecurityGroups request. +func (client InterfacesClient) ListEffectiveNetworkSecurityGroupsPreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}/effectiveNetworkSecurityGroups", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListEffectiveNetworkSecurityGroupsSender sends the ListEffectiveNetworkSecurityGroups request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) ListEffectiveNetworkSecurityGroupsSender(req *http.Request) (future InterfacesListEffectiveNetworkSecurityGroupsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ListEffectiveNetworkSecurityGroupsResponder handles the response to the ListEffectiveNetworkSecurityGroups request. The method always +// closes the http.Response Body. 
+func (client InterfacesClient) ListEffectiveNetworkSecurityGroupsResponder(resp *http.Response) (result EffectiveNetworkSecurityGroupListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListVirtualMachineScaleSetIPConfigurations get the specified network interface ip configuration in a virtual machine +// scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +// virtualmachineIndex - the virtual machine index. +// networkInterfaceName - the name of the network interface. +// expand - expands referenced resources. +func (client InterfacesClient) ListVirtualMachineScaleSetIPConfigurations(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, expand string) (result InterfaceIPConfigurationListResultPage, err error) { + result.fn = client.listVirtualMachineScaleSetIPConfigurationsNextResults + req, err := client.ListVirtualMachineScaleSetIPConfigurationsPreparer(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetIPConfigurations", nil, "Failure preparing request") + return + } + + resp, err := client.ListVirtualMachineScaleSetIPConfigurationsSender(req) + if err != nil { + result.iiclr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetIPConfigurations", resp, "Failure sending request") + return + } + + result.iiclr, err = client.ListVirtualMachineScaleSetIPConfigurationsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetIPConfigurations", resp, "Failure responding to request") + } + + return +} + +// ListVirtualMachineScaleSetIPConfigurationsPreparer prepares the ListVirtualMachineScaleSetIPConfigurations request. 
+func (client InterfacesClient) ListVirtualMachineScaleSetIPConfigurationsPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces/{networkInterfaceName}/ipConfigurations", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListVirtualMachineScaleSetIPConfigurationsSender sends the ListVirtualMachineScaleSetIPConfigurations request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) ListVirtualMachineScaleSetIPConfigurationsSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListVirtualMachineScaleSetIPConfigurationsResponder handles the response to the ListVirtualMachineScaleSetIPConfigurations request. The method always +// closes the http.Response Body. +func (client InterfacesClient) ListVirtualMachineScaleSetIPConfigurationsResponder(resp *http.Response) (result InterfaceIPConfigurationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listVirtualMachineScaleSetIPConfigurationsNextResults retrieves the next set of results, if any. 
+func (client InterfacesClient) listVirtualMachineScaleSetIPConfigurationsNextResults(lastResults InterfaceIPConfigurationListResult) (result InterfaceIPConfigurationListResult, err error) { + req, err := lastResults.interfaceIPConfigurationListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetIPConfigurationsNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListVirtualMachineScaleSetIPConfigurationsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetIPConfigurationsNextResults", resp, "Failure sending next results request") + } + result, err = client.ListVirtualMachineScaleSetIPConfigurationsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetIPConfigurationsNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListVirtualMachineScaleSetIPConfigurationsComplete enumerates all values, automatically crossing page boundaries as required. +func (client InterfacesClient) ListVirtualMachineScaleSetIPConfigurationsComplete(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, expand string) (result InterfaceIPConfigurationListResultIterator, err error) { + result.page, err = client.ListVirtualMachineScaleSetIPConfigurations(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, expand) + return +} + +// ListVirtualMachineScaleSetNetworkInterfaces gets all network interfaces in a virtual machine scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfaces(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string) (result InterfaceListResultPage, err error) { + result.fn = client.listVirtualMachineScaleSetNetworkInterfacesNextResults + req, err := client.ListVirtualMachineScaleSetNetworkInterfacesPreparer(ctx, resourceGroupName, virtualMachineScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", nil, "Failure preparing request") + return + } + + resp, err := client.ListVirtualMachineScaleSetNetworkInterfacesSender(req) + if err != nil { + result.ilr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", resp, "Failure sending request") + return + } + + result.ilr, err = client.ListVirtualMachineScaleSetNetworkInterfacesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetNetworkInterfaces", resp, "Failure responding to request") + } + + return +} + +// ListVirtualMachineScaleSetNetworkInterfacesPreparer prepares the ListVirtualMachineScaleSetNetworkInterfaces request. 
+func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/networkInterfaces", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListVirtualMachineScaleSetNetworkInterfacesSender sends the ListVirtualMachineScaleSetNetworkInterfaces request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListVirtualMachineScaleSetNetworkInterfacesResponder handles the response to the ListVirtualMachineScaleSetNetworkInterfaces request. The method always +// closes the http.Response Body. +func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesResponder(resp *http.Response) (result InterfaceListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listVirtualMachineScaleSetNetworkInterfacesNextResults retrieves the next set of results, if any. +func (client InterfacesClient) listVirtualMachineScaleSetNetworkInterfacesNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { + req, err := lastResults.interfaceListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetNetworkInterfacesNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListVirtualMachineScaleSetNetworkInterfacesSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetNetworkInterfacesNextResults", resp, "Failure sending next results request") + } + result, err = client.ListVirtualMachineScaleSetNetworkInterfacesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetNetworkInterfacesNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListVirtualMachineScaleSetNetworkInterfacesComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client InterfacesClient) ListVirtualMachineScaleSetNetworkInterfacesComplete(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string) (result InterfaceListResultIterator, err error) { + result.page, err = client.ListVirtualMachineScaleSetNetworkInterfaces(ctx, resourceGroupName, virtualMachineScaleSetName) + return +} + +// ListVirtualMachineScaleSetVMNetworkInterfaces gets information about all network interfaces in a virtual machine in +// a virtual machine scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +// virtualmachineIndex - the virtual machine index. +func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfaces(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string) (result InterfaceListResultPage, err error) { + result.fn = client.listVirtualMachineScaleSetVMNetworkInterfacesNextResults + req, err := client.ListVirtualMachineScaleSetVMNetworkInterfacesPreparer(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", nil, "Failure preparing request") + return + } + + resp, err := client.ListVirtualMachineScaleSetVMNetworkInterfacesSender(req) + if err != nil { + result.ilr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", resp, "Failure sending request") + return + } + + result.ilr, err = client.ListVirtualMachineScaleSetVMNetworkInterfacesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "ListVirtualMachineScaleSetVMNetworkInterfaces", resp, "Failure responding to request") + } + + return +} + +// ListVirtualMachineScaleSetVMNetworkInterfacesPreparer prepares the ListVirtualMachineScaleSetVMNetworkInterfaces request. +func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListVirtualMachineScaleSetVMNetworkInterfacesSender sends the ListVirtualMachineScaleSetVMNetworkInterfaces request. The method will close the +// http.Response Body if it receives an error. 
+func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListVirtualMachineScaleSetVMNetworkInterfacesResponder handles the response to the ListVirtualMachineScaleSetVMNetworkInterfaces request. The method always +// closes the http.Response Body. +func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesResponder(resp *http.Response) (result InterfaceListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listVirtualMachineScaleSetVMNetworkInterfacesNextResults retrieves the next set of results, if any. +func (client InterfacesClient) listVirtualMachineScaleSetVMNetworkInterfacesNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { + req, err := lastResults.interfaceListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetVMNetworkInterfacesNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListVirtualMachineScaleSetVMNetworkInterfacesSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetVMNetworkInterfacesNextResults", resp, "Failure sending next results request") + } + result, err = client.ListVirtualMachineScaleSetVMNetworkInterfacesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "listVirtualMachineScaleSetVMNetworkInterfacesNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListVirtualMachineScaleSetVMNetworkInterfacesComplete enumerates all values, automatically crossing page boundaries as required. +func (client InterfacesClient) ListVirtualMachineScaleSetVMNetworkInterfacesComplete(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string) (result InterfaceListResultIterator, err error) { + result.page, err = client.ListVirtualMachineScaleSetVMNetworkInterfaces(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex) + return +} + +// UpdateTags updates a network interface tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkInterfaceName - the name of the network interface. +// parameters - parameters supplied to update network interface tags. +func (client InterfacesClient) UpdateTags(ctx context.Context, resourceGroupName string, networkInterfaceName string, parameters TagsObject) (result InterfacesUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, networkInterfaceName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesClient", "UpdateTags", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. 
+func (client InterfacesClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, networkInterfaceName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkInterfaces/{networkInterfaceName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client InterfacesClient) UpdateTagsSender(req *http.Request) (future InterfacesUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. +func (client InterfacesClient) UpdateTagsResponder(resp *http.Response) (result Interface, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerbackendaddresspools.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerbackendaddresspools.go new file mode 100644 index 000000000..b7b82f3bf --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerbackendaddresspools.go @@ -0,0 +1,205 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+ +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// LoadBalancerBackendAddressPoolsClient is the network Client +type LoadBalancerBackendAddressPoolsClient struct { + BaseClient +} + +// NewLoadBalancerBackendAddressPoolsClient creates an instance of the LoadBalancerBackendAddressPoolsClient client. +func NewLoadBalancerBackendAddressPoolsClient(subscriptionID string) LoadBalancerBackendAddressPoolsClient { + return NewLoadBalancerBackendAddressPoolsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewLoadBalancerBackendAddressPoolsClientWithBaseURI creates an instance of the LoadBalancerBackendAddressPoolsClient +// client. +func NewLoadBalancerBackendAddressPoolsClientWithBaseURI(baseURI string, subscriptionID string) LoadBalancerBackendAddressPoolsClient { + return LoadBalancerBackendAddressPoolsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Get gets load balancer backend address pool. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// backendAddressPoolName - the name of the backend address pool. +func (client LoadBalancerBackendAddressPoolsClient) Get(ctx context.Context, resourceGroupName string, loadBalancerName string, backendAddressPoolName string) (result BackendAddressPool, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, loadBalancerName, backendAddressPoolName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client LoadBalancerBackendAddressPoolsClient) GetPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, backendAddressPoolName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "backendAddressPoolName": autorest.Encode("path", backendAddressPoolName), + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/backendAddressPools/{backendAddressPoolName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. 
+func (client LoadBalancerBackendAddressPoolsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client LoadBalancerBackendAddressPoolsClient) GetResponder(resp *http.Response) (result BackendAddressPool, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all the load balancer backed address pools. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +func (client LoadBalancerBackendAddressPoolsClient) List(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerBackendAddressPoolListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, loadBalancerName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.lbbaplr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "List", resp, "Failure sending request") + return + } + + result.lbbaplr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client LoadBalancerBackendAddressPoolsClient) ListPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/backendAddressPools", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerBackendAddressPoolsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. 
+func (client LoadBalancerBackendAddressPoolsClient) ListResponder(resp *http.Response) (result LoadBalancerBackendAddressPoolListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client LoadBalancerBackendAddressPoolsClient) listNextResults(lastResults LoadBalancerBackendAddressPoolListResult) (result LoadBalancerBackendAddressPoolListResult, err error) { + req, err := lastResults.loadBalancerBackendAddressPoolListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerBackendAddressPoolsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client LoadBalancerBackendAddressPoolsClient) ListComplete(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerBackendAddressPoolListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, loadBalancerName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerfrontendipconfigurations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerfrontendipconfigurations.go new file mode 100644 index 000000000..502c9379e --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerfrontendipconfigurations.go @@ -0,0 +1,206 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// LoadBalancerFrontendIPConfigurationsClient is the network Client +type LoadBalancerFrontendIPConfigurationsClient struct { + BaseClient +} + +// NewLoadBalancerFrontendIPConfigurationsClient creates an instance of the LoadBalancerFrontendIPConfigurationsClient +// client. 
+func NewLoadBalancerFrontendIPConfigurationsClient(subscriptionID string) LoadBalancerFrontendIPConfigurationsClient { + return NewLoadBalancerFrontendIPConfigurationsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewLoadBalancerFrontendIPConfigurationsClientWithBaseURI creates an instance of the +// LoadBalancerFrontendIPConfigurationsClient client. +func NewLoadBalancerFrontendIPConfigurationsClientWithBaseURI(baseURI string, subscriptionID string) LoadBalancerFrontendIPConfigurationsClient { + return LoadBalancerFrontendIPConfigurationsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Get gets load balancer frontend IP configuration. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// frontendIPConfigurationName - the name of the frontend IP configuration. +func (client LoadBalancerFrontendIPConfigurationsClient) Get(ctx context.Context, resourceGroupName string, loadBalancerName string, frontendIPConfigurationName string) (result FrontendIPConfiguration, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, loadBalancerName, frontendIPConfigurationName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client LoadBalancerFrontendIPConfigurationsClient) GetPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, frontendIPConfigurationName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "frontendIPConfigurationName": autorest.Encode("path", frontendIPConfigurationName), + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/frontendIPConfigurations/{frontendIPConfigurationName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerFrontendIPConfigurationsClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. 
+func (client LoadBalancerFrontendIPConfigurationsClient) GetResponder(resp *http.Response) (result FrontendIPConfiguration, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all the load balancer frontend IP configurations. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +func (client LoadBalancerFrontendIPConfigurationsClient) List(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerFrontendIPConfigurationListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, loadBalancerName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.lbficlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "List", resp, "Failure sending request") + return + } + + result.lbficlr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client LoadBalancerFrontendIPConfigurationsClient) ListPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/frontendIPConfigurations", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerFrontendIPConfigurationsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client LoadBalancerFrontendIPConfigurationsClient) ListResponder(resp *http.Response) (result LoadBalancerFrontendIPConfigurationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client LoadBalancerFrontendIPConfigurationsClient) listNextResults(lastResults LoadBalancerFrontendIPConfigurationListResult) (result LoadBalancerFrontendIPConfigurationListResult, err error) { + req, err := lastResults.loadBalancerFrontendIPConfigurationListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerFrontendIPConfigurationsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client LoadBalancerFrontendIPConfigurationsClient) ListComplete(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerFrontendIPConfigurationListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, loadBalancerName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerloadbalancingrules.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerloadbalancingrules.go new file mode 100644 index 000000000..c59288447 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerloadbalancingrules.go @@ -0,0 +1,205 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// LoadBalancerLoadBalancingRulesClient is the network Client +type LoadBalancerLoadBalancingRulesClient struct { + BaseClient +} + +// NewLoadBalancerLoadBalancingRulesClient creates an instance of the LoadBalancerLoadBalancingRulesClient client. +func NewLoadBalancerLoadBalancingRulesClient(subscriptionID string) LoadBalancerLoadBalancingRulesClient { + return NewLoadBalancerLoadBalancingRulesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewLoadBalancerLoadBalancingRulesClientWithBaseURI creates an instance of the LoadBalancerLoadBalancingRulesClient +// client. 
+func NewLoadBalancerLoadBalancingRulesClientWithBaseURI(baseURI string, subscriptionID string) LoadBalancerLoadBalancingRulesClient { + return LoadBalancerLoadBalancingRulesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Get gets the specified load balancer load balancing rule. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// loadBalancingRuleName - the name of the load balancing rule. +func (client LoadBalancerLoadBalancingRulesClient) Get(ctx context.Context, resourceGroupName string, loadBalancerName string, loadBalancingRuleName string) (result LoadBalancingRule, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, loadBalancerName, loadBalancingRuleName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client LoadBalancerLoadBalancingRulesClient) GetPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, loadBalancingRuleName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "loadBalancingRuleName": autorest.Encode("path", loadBalancingRuleName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/loadBalancingRules/{loadBalancingRuleName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerLoadBalancingRulesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client LoadBalancerLoadBalancingRulesClient) GetResponder(resp *http.Response) (result LoadBalancingRule, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all the load balancing rules in a load balancer. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. 
+func (client LoadBalancerLoadBalancingRulesClient) List(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerLoadBalancingRuleListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, loadBalancerName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.lblbrlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "List", resp, "Failure sending request") + return + } + + result.lblbrlr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client LoadBalancerLoadBalancingRulesClient) ListPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/loadBalancingRules", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerLoadBalancingRulesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client LoadBalancerLoadBalancingRulesClient) ListResponder(resp *http.Response) (result LoadBalancerLoadBalancingRuleListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client LoadBalancerLoadBalancingRulesClient) listNextResults(lastResults LoadBalancerLoadBalancingRuleListResult) (result LoadBalancerLoadBalancingRuleListResult, err error) { + req, err := lastResults.loadBalancerLoadBalancingRuleListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerLoadBalancingRulesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client LoadBalancerLoadBalancingRulesClient) ListComplete(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerLoadBalancingRuleListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, loadBalancerName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancernetworkinterfaces.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancernetworkinterfaces.go new file mode 100644 index 000000000..19aff0ee3 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancernetworkinterfaces.go @@ -0,0 +1,136 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// LoadBalancerNetworkInterfacesClient is the network Client +type LoadBalancerNetworkInterfacesClient struct { + BaseClient +} + +// NewLoadBalancerNetworkInterfacesClient creates an instance of the LoadBalancerNetworkInterfacesClient client. +func NewLoadBalancerNetworkInterfacesClient(subscriptionID string) LoadBalancerNetworkInterfacesClient { + return NewLoadBalancerNetworkInterfacesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewLoadBalancerNetworkInterfacesClientWithBaseURI creates an instance of the LoadBalancerNetworkInterfacesClient +// client. 
+func NewLoadBalancerNetworkInterfacesClientWithBaseURI(baseURI string, subscriptionID string) LoadBalancerNetworkInterfacesClient { + return LoadBalancerNetworkInterfacesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List gets associated load balancer network interfaces. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +func (client LoadBalancerNetworkInterfacesClient) List(ctx context.Context, resourceGroupName string, loadBalancerName string) (result InterfaceListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, loadBalancerName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerNetworkInterfacesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.ilr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerNetworkInterfacesClient", "List", resp, "Failure sending request") + return + } + + result.ilr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerNetworkInterfacesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client LoadBalancerNetworkInterfacesClient) ListPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/networkInterfaces", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerNetworkInterfacesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client LoadBalancerNetworkInterfacesClient) ListResponder(resp *http.Response) (result InterfaceListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client LoadBalancerNetworkInterfacesClient) listNextResults(lastResults InterfaceListResult) (result InterfaceListResult, err error) { + req, err := lastResults.interfaceListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.LoadBalancerNetworkInterfacesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.LoadBalancerNetworkInterfacesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerNetworkInterfacesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client LoadBalancerNetworkInterfacesClient) ListComplete(ctx context.Context, resourceGroupName string, loadBalancerName string) (result InterfaceListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, loadBalancerName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerprobes.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerprobes.go new file mode 100644 index 000000000..a4501d8fa --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancerprobes.go @@ -0,0 +1,204 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// LoadBalancerProbesClient is the network Client +type LoadBalancerProbesClient struct { + BaseClient +} + +// NewLoadBalancerProbesClient creates an instance of the LoadBalancerProbesClient client. +func NewLoadBalancerProbesClient(subscriptionID string) LoadBalancerProbesClient { + return NewLoadBalancerProbesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewLoadBalancerProbesClientWithBaseURI creates an instance of the LoadBalancerProbesClient client. +func NewLoadBalancerProbesClientWithBaseURI(baseURI string, subscriptionID string) LoadBalancerProbesClient { + return LoadBalancerProbesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// Get gets load balancer probe. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// probeName - the name of the probe. 
+func (client LoadBalancerProbesClient) Get(ctx context.Context, resourceGroupName string, loadBalancerName string, probeName string) (result Probe, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, loadBalancerName, probeName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client LoadBalancerProbesClient) GetPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, probeName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "probeName": autorest.Encode("path", probeName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/probes/{probeName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerProbesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client LoadBalancerProbesClient) GetResponder(resp *http.Response) (result Probe, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all the load balancer probes. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. 
+func (client LoadBalancerProbesClient) List(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerProbeListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, loadBalancerName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.lbplr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "List", resp, "Failure sending request") + return + } + + result.lbplr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client LoadBalancerProbesClient) ListPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}/probes", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancerProbesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client LoadBalancerProbesClient) ListResponder(resp *http.Response) (result LoadBalancerProbeListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client LoadBalancerProbesClient) listNextResults(lastResults LoadBalancerProbeListResult) (result LoadBalancerProbeListResult, err error) { + req, err := lastResults.loadBalancerProbeListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancerProbesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client LoadBalancerProbesClient) ListComplete(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancerProbeListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, loadBalancerName) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/loadbalancers.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancers.go similarity index 51% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/loadbalancers.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancers.go index 11d649b27..7513f9d14 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/loadbalancers.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/loadbalancers.go @@ -14,103 +14,90 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// LoadBalancersClient is the composite Swagger for Network Client +// LoadBalancersClient is the network Client type LoadBalancersClient struct { - ManagementClient + BaseClient } -// NewLoadBalancersClient creates an instance of the LoadBalancersClient -// client. +// NewLoadBalancersClient creates an instance of the LoadBalancersClient client. func NewLoadBalancersClient(subscriptionID string) LoadBalancersClient { return NewLoadBalancersClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewLoadBalancersClientWithBaseURI creates an instance of the -// LoadBalancersClient client. +// NewLoadBalancersClientWithBaseURI creates an instance of the LoadBalancersClient client. func NewLoadBalancersClientWithBaseURI(baseURI string, subscriptionID string) LoadBalancersClient { return LoadBalancersClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a load balancer. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. 
-// -// resourceGroupName is the name of the resource group. loadBalancerName is the -// name of the load balancer. parameters is parameters supplied to the create -// or update load balancer operation. -func (client LoadBalancersClient) CreateOrUpdate(resourceGroupName string, loadBalancerName string, parameters LoadBalancer, cancel <-chan struct{}) (<-chan LoadBalancer, <-chan error) { - resultChan := make(chan LoadBalancer, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result LoadBalancer - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, loadBalancerName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates a load balancer. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// parameters - parameters supplied to the create or update load balancer operation. +func (client LoadBalancersClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, loadBalancerName string, parameters LoadBalancer) (result LoadBalancersCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, loadBalancerName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client LoadBalancersClient) CreateOrUpdatePreparer(resourceGroupName string, loadBalancerName string, parameters LoadBalancer, cancel <-chan struct{}) (*http.Request, error) { +func (client LoadBalancersClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, parameters LoadBalancer) (*http.Request, error) { pathParameters := map[string]interface{}{ "loadBalancerName": autorest.Encode("path", loadBalancerName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client LoadBalancersClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client LoadBalancersClient) CreateOrUpdateSender(req *http.Request) (future LoadBalancersCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -119,62 +106,42 @@ func (client LoadBalancersClient) CreateOrUpdateResponder(resp *http.Response) ( err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } -// Delete deletes the specified load balancer. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. loadBalancerName is the -// name of the load balancer. 
-func (client LoadBalancersClient) Delete(resourceGroupName string, loadBalancerName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, loadBalancerName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified load balancer. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +func (client LoadBalancersClient) Delete(ctx context.Context, resourceGroupName string, loadBalancerName string) (result LoadBalancersDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, loadBalancerName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client LoadBalancersClient) DeletePreparer(resourceGroupName string, loadBalancerName string, cancel <-chan struct{}) (*http.Request, error) { +func (client LoadBalancersClient) DeletePreparer(ctx context.Context, resourceGroupName string, loadBalancerName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "loadBalancerName": autorest.Encode("path", loadBalancerName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -184,15 +151,24 @@ func (client LoadBalancersClient) DeletePreparer(resourceGroupName string, loadB autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. 
-func (client LoadBalancersClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client LoadBalancersClient) DeleteSender(req *http.Request) (future LoadBalancersDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -201,18 +177,19 @@ func (client LoadBalancersClient) DeleteResponder(resp *http.Response) (result a err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusAccepted, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified load balancer. -// -// resourceGroupName is the name of the resource group. loadBalancerName is the -// name of the load balancer. expand is expands referenced resources. -func (client LoadBalancersClient) Get(resourceGroupName string, loadBalancerName string, expand string) (result LoadBalancer, err error) { - req, err := client.GetPreparer(resourceGroupName, loadBalancerName, expand) +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// expand - expands referenced resources. +func (client LoadBalancersClient) Get(ctx context.Context, resourceGroupName string, loadBalancerName string, expand string) (result LoadBalancer, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, loadBalancerName, expand) if err != nil { err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "Get", nil, "Failure preparing request") return @@ -234,14 +211,14 @@ func (client LoadBalancersClient) Get(resourceGroupName string, loadBalancerName } // GetPreparer prepares the Get request. -func (client LoadBalancersClient) GetPreparer(resourceGroupName string, loadBalancerName string, expand string) (*http.Request, error) { +func (client LoadBalancersClient) GetPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "loadBalancerName": autorest.Encode("path", loadBalancerName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -254,13 +231,14 @@ func (client LoadBalancersClient) GetPreparer(resourceGroupName string, loadBala autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. 
func (client LoadBalancersClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -277,10 +255,11 @@ func (client LoadBalancersClient) GetResponder(resp *http.Response) (result Load } // List gets all the load balancers in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client LoadBalancersClient) List(resourceGroupName string) (result LoadBalancerListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. +func (client LoadBalancersClient) List(ctx context.Context, resourceGroupName string) (result LoadBalancerListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "List", nil, "Failure preparing request") return @@ -288,12 +267,12 @@ func (client LoadBalancersClient) List(resourceGroupName string) (result LoadBal resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.lblr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.lblr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "List", resp, "Failure responding to request") } @@ -302,13 +281,13 @@ func (client LoadBalancersClient) List(resourceGroupName string) (result LoadBal } // ListPreparer prepares the List request. -func (client LoadBalancersClient) ListPreparer(resourceGroupName string) (*http.Request, error) { +func (client LoadBalancersClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -318,13 +297,14 @@ func (client LoadBalancersClient) ListPreparer(resourceGroupName string) (*http. autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client LoadBalancersClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -340,33 +320,37 @@ func (client LoadBalancersClient) ListResponder(resp *http.Response) (result Loa return } -// ListNextResults retrieves the next set of results, if any. 
-func (client LoadBalancersClient) ListNextResults(lastResults LoadBalancerListResult) (result LoadBalancerListResult, err error) { - req, err := lastResults.LoadBalancerListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client LoadBalancersClient) listNextResults(lastResults LoadBalancerListResult) (result LoadBalancerListResult, err error) { + req, err := lastResults.loadBalancerListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "listNextResults", resp, "Failure responding to next results request") } + return +} +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client LoadBalancersClient) ListComplete(ctx context.Context, resourceGroupName string) (result LoadBalancerListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) return } // ListAll gets all the load balancers in a subscription. -func (client LoadBalancersClient) ListAll() (result LoadBalancerListResult, err error) { - req, err := client.ListAllPreparer() +func (client LoadBalancersClient) ListAll(ctx context.Context) (result LoadBalancerListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "ListAll", nil, "Failure preparing request") return @@ -374,12 +358,12 @@ func (client LoadBalancersClient) ListAll() (result LoadBalancerListResult, err resp, err := client.ListAllSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.lblr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "ListAll", resp, "Failure sending request") return } - result, err = client.ListAllResponder(resp) + result.lblr, err = client.ListAllResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "ListAll", resp, "Failure responding to request") } @@ -388,12 +372,12 @@ func (client LoadBalancersClient) ListAll() (result LoadBalancerListResult, err } // ListAllPreparer prepares the ListAll request. 
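List now returns a LoadBalancerListResultPage, and ListComplete wraps it in an iterator that crosses page boundaries on demand. A minimal sketch of consuming that iterator, assuming the generated iterator exposes the NotDone/Value/Next methods typical of this SDK generation:

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
	"github.com/Azure/go-autorest/autorest"
)

// printLoadBalancers walks every page of results for a resource group.
func printLoadBalancers(ctx context.Context, auth autorest.Authorizer, subscriptionID, resourceGroup string) error {
	client := network.NewLoadBalancersClient(subscriptionID)
	client.Authorizer = auth

	it, err := client.ListComplete(ctx, resourceGroup)
	if err != nil {
		return err
	}
	for it.NotDone() {
		lb := it.Value()
		if lb.Name != nil {
			fmt.Println(*lb.Name)
		}
		// Next advances to the following element, fetching the next page
		// from the service when the current one is exhausted.
		if err := it.Next(); err != nil {
			return err
		}
	}
	return nil
}
```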
-func (client LoadBalancersClient) ListAllPreparer() (*http.Request, error) { +func (client LoadBalancersClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -403,13 +387,14 @@ func (client LoadBalancersClient) ListAllPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/loadBalancers", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListAllSender sends the ListAll request. The method will close the // http.Response Body if it receives an error. func (client LoadBalancersClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListAllResponder handles the response to the ListAll request. The method always @@ -425,26 +410,103 @@ func (client LoadBalancersClient) ListAllResponder(resp *http.Response) (result return } -// ListAllNextResults retrieves the next set of results, if any. -func (client LoadBalancersClient) ListAllNextResults(lastResults LoadBalancerListResult) (result LoadBalancerListResult, err error) { - req, err := lastResults.LoadBalancerListResultPreparer() +// listAllNextResults retrieves the next set of results, if any. +func (client LoadBalancersClient) listAllNextResults(lastResults LoadBalancerListResult) (result LoadBalancerListResult, err error) { + req, err := lastResults.loadBalancerListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "ListAll", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "listAllNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListAllSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "ListAll", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.LoadBalancersClient", "listAllNextResults", resp, "Failure sending next results request") } - result, err = client.ListAllResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "ListAll", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client LoadBalancersClient) ListAllComplete(ctx context.Context) (result LoadBalancerListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// UpdateTags updates a load balancer tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// loadBalancerName - the name of the load balancer. +// parameters - parameters supplied to update load balancer tags. 
+func (client LoadBalancersClient) UpdateTags(ctx context.Context, resourceGroupName string, loadBalancerName string, parameters TagsObject) (result LoadBalancersUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, loadBalancerName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersClient", "UpdateTags", result.Response(), "Failure sending request") + return } return } + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client LoadBalancersClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, loadBalancerName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "loadBalancerName": autorest.Encode("path", loadBalancerName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/loadBalancers/{loadBalancerName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client LoadBalancersClient) UpdateTagsSender(req *http.Request) (future LoadBalancersUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. 
+func (client LoadBalancersClient) UpdateTagsResponder(resp *http.Response) (result LoadBalancer, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/localnetworkgateways.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/localnetworkgateways.go similarity index 50% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/localnetworkgateways.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/localnetworkgateways.go index c1e66076e..e43ee6f27 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/localnetworkgateways.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/localnetworkgateways.go @@ -14,115 +14,99 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// LocalNetworkGatewaysClient is the composite Swagger for Network Client +// LocalNetworkGatewaysClient is the network Client type LocalNetworkGatewaysClient struct { - ManagementClient + BaseClient } -// NewLocalNetworkGatewaysClient creates an instance of the -// LocalNetworkGatewaysClient client. +// NewLocalNetworkGatewaysClient creates an instance of the LocalNetworkGatewaysClient client. func NewLocalNetworkGatewaysClient(subscriptionID string) LocalNetworkGatewaysClient { return NewLocalNetworkGatewaysClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewLocalNetworkGatewaysClientWithBaseURI creates an instance of the -// LocalNetworkGatewaysClient client. +// NewLocalNetworkGatewaysClientWithBaseURI creates an instance of the LocalNetworkGatewaysClient client. func NewLocalNetworkGatewaysClientWithBaseURI(baseURI string, subscriptionID string) LocalNetworkGatewaysClient { return LocalNetworkGatewaysClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a local network gateway in the specified -// resource group. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. localNetworkGatewayName -// is the name of the local network gateway. parameters is parameters supplied -// to the create or update local network gateway operation. -func (client LocalNetworkGatewaysClient) CreateOrUpdate(resourceGroupName string, localNetworkGatewayName string, parameters LocalNetworkGateway, cancel <-chan struct{}) (<-chan LocalNetworkGateway, <-chan error) { - resultChan := make(chan LocalNetworkGateway, 1) - errChan := make(chan error, 1) +// CreateOrUpdate creates or updates a local network gateway in the specified resource group. 
+// Parameters: +// resourceGroupName - the name of the resource group. +// localNetworkGatewayName - the name of the local network gateway. +// parameters - parameters supplied to the create or update local network gateway operation. +func (client LocalNetworkGatewaysClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, localNetworkGatewayName string, parameters LocalNetworkGateway) (result LocalNetworkGatewaysCreateOrUpdateFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: localNetworkGatewayName, Constraints: []validation.Constraint{{Target: "localNetworkGatewayName", Name: validation.MinLength, Rule: 1, Chain: nil}}}, {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.LocalNetworkGatewayPropertiesFormat", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.LocalNetworkGatewaysClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("network.LocalNetworkGatewaysClient", "CreateOrUpdate", err.Error()) } - go func() { - var err error - var result LocalNetworkGateway - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, localNetworkGatewayName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, localNetworkGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client LocalNetworkGatewaysClient) CreateOrUpdatePreparer(resourceGroupName string, localNetworkGatewayName string, parameters LocalNetworkGateway, cancel <-chan struct{}) (*http.Request, error) { +func (client LocalNetworkGatewaysClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, localNetworkGatewayName string, parameters LocalNetworkGateway) (*http.Request, error) { pathParameters := map[string]interface{}{ "localNetworkGatewayName": autorest.Encode("path", localNetworkGatewayName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/localNetworkGateways/{localNetworkGatewayName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client LocalNetworkGatewaysClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client LocalNetworkGatewaysClient) CreateOrUpdateSender(req *http.Request) (future LocalNetworkGatewaysCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -131,71 +115,48 @@ func (client LocalNetworkGatewaysClient) CreateOrUpdateResponder(resp *http.Resp err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } -// Delete deletes the specified local network gateway. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. localNetworkGatewayName -// is the name of the local network gateway. -func (client LocalNetworkGatewaysClient) Delete(resourceGroupName string, localNetworkGatewayName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) +// Delete deletes the specified local network gateway. +// Parameters: +// resourceGroupName - the name of the resource group. +// localNetworkGatewayName - the name of the local network gateway. 
+func (client LocalNetworkGatewaysClient) Delete(ctx context.Context, resourceGroupName string, localNetworkGatewayName string) (result LocalNetworkGatewaysDeleteFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: localNetworkGatewayName, Constraints: []validation.Constraint{{Target: "localNetworkGatewayName", Name: validation.MinLength, Rule: 1, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.LocalNetworkGatewaysClient", "Delete") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("network.LocalNetworkGatewaysClient", "Delete", err.Error()) } - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, localNetworkGatewayName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "Delete", nil, "Failure preparing request") - return - } + req, err := client.DeletePreparer(ctx, resourceGroupName, localNetworkGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client LocalNetworkGatewaysClient) DeletePreparer(resourceGroupName string, localNetworkGatewayName string, cancel <-chan struct{}) (*http.Request, error) { +func (client LocalNetworkGatewaysClient) DeletePreparer(ctx context.Context, resourceGroupName string, localNetworkGatewayName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "localNetworkGatewayName": autorest.Encode("path", localNetworkGatewayName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -205,15 +166,24 @@ func (client LocalNetworkGatewaysClient) DeletePreparer(resourceGroupName string autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/localNetworkGateways/{localNetworkGatewayName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. 
-func (client LocalNetworkGatewaysClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client LocalNetworkGatewaysClient) DeleteSender(req *http.Request) (future LocalNetworkGatewaysDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -222,24 +192,24 @@ func (client LocalNetworkGatewaysClient) DeleteResponder(resp *http.Response) (r err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusOK, http.StatusAccepted), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified local network gateway in a resource group. -// -// resourceGroupName is the name of the resource group. localNetworkGatewayName -// is the name of the local network gateway. -func (client LocalNetworkGatewaysClient) Get(resourceGroupName string, localNetworkGatewayName string) (result LocalNetworkGateway, err error) { +// Parameters: +// resourceGroupName - the name of the resource group. +// localNetworkGatewayName - the name of the local network gateway. +func (client LocalNetworkGatewaysClient) Get(ctx context.Context, resourceGroupName string, localNetworkGatewayName string) (result LocalNetworkGateway, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: localNetworkGatewayName, Constraints: []validation.Constraint{{Target: "localNetworkGatewayName", Name: validation.MinLength, Rule: 1, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "network.LocalNetworkGatewaysClient", "Get") + return result, validation.NewError("network.LocalNetworkGatewaysClient", "Get", err.Error()) } - req, err := client.GetPreparer(resourceGroupName, localNetworkGatewayName) + req, err := client.GetPreparer(ctx, resourceGroupName, localNetworkGatewayName) if err != nil { err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "Get", nil, "Failure preparing request") return @@ -261,14 +231,14 @@ func (client LocalNetworkGatewaysClient) Get(resourceGroupName string, localNetw } // GetPreparer prepares the Get request. 
-func (client LocalNetworkGatewaysClient) GetPreparer(resourceGroupName string, localNetworkGatewayName string) (*http.Request, error) { +func (client LocalNetworkGatewaysClient) GetPreparer(ctx context.Context, resourceGroupName string, localNetworkGatewayName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "localNetworkGatewayName": autorest.Encode("path", localNetworkGatewayName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -278,13 +248,14 @@ func (client LocalNetworkGatewaysClient) GetPreparer(resourceGroupName string, l autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/localNetworkGateways/{localNetworkGatewayName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client LocalNetworkGatewaysClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -301,10 +272,11 @@ func (client LocalNetworkGatewaysClient) GetResponder(resp *http.Response) (resu } // List gets all the local network gateways in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client LocalNetworkGatewaysClient) List(resourceGroupName string) (result LocalNetworkGatewayListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. +func (client LocalNetworkGatewaysClient) List(ctx context.Context, resourceGroupName string) (result LocalNetworkGatewayListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "List", nil, "Failure preparing request") return @@ -312,12 +284,12 @@ func (client LocalNetworkGatewaysClient) List(resourceGroupName string) (result resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.lnglr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.lnglr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "List", resp, "Failure responding to request") } @@ -326,13 +298,13 @@ func (client LocalNetworkGatewaysClient) List(resourceGroupName string) (result } // ListPreparer prepares the List request. 
-func (client LocalNetworkGatewaysClient) ListPreparer(resourceGroupName string) (*http.Request, error) { +func (client LocalNetworkGatewaysClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -342,13 +314,14 @@ func (client LocalNetworkGatewaysClient) ListPreparer(resourceGroupName string) autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/localNetworkGateways", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client LocalNetworkGatewaysClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -364,26 +337,109 @@ func (client LocalNetworkGatewaysClient) ListResponder(resp *http.Response) (res return } -// ListNextResults retrieves the next set of results, if any. -func (client LocalNetworkGatewaysClient) ListNextResults(lastResults LocalNetworkGatewayListResult) (result LocalNetworkGatewayListResult, err error) { - req, err := lastResults.LocalNetworkGatewayListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client LocalNetworkGatewaysClient) listNextResults(lastResults LocalNetworkGatewayListResult) (result LocalNetworkGatewayListResult, err error) { + req, err := lastResults.localNetworkGatewayListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client LocalNetworkGatewaysClient) ListComplete(ctx context.Context, resourceGroupName string) (result LocalNetworkGatewayListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// UpdateTags updates a local network gateway tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// localNetworkGatewayName - the name of the local network gateway. +// parameters - parameters supplied to update local network gateway tags. +func (client LocalNetworkGatewaysClient) UpdateTags(ctx context.Context, resourceGroupName string, localNetworkGatewayName string, parameters TagsObject) (result LocalNetworkGatewaysUpdateTagsFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: localNetworkGatewayName, + Constraints: []validation.Constraint{{Target: "localNetworkGatewayName", Name: validation.MinLength, Rule: 1, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.LocalNetworkGatewaysClient", "UpdateTags", err.Error()) + } + + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, localNetworkGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysClient", "UpdateTags", result.Response(), "Failure sending request") + return } return } + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client LocalNetworkGatewaysClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, localNetworkGatewayName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "localNetworkGatewayName": autorest.Encode("path", localNetworkGatewayName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/localNetworkGateways/{localNetworkGatewayName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client LocalNetworkGatewaysClient) UpdateTagsSender(req *http.Request) (future LocalNetworkGatewaysUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. 
+func (client LocalNetworkGatewaysClient) UpdateTagsResponder(resp *http.Response) (result LocalNetworkGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/models.go new file mode 100644 index 000000000..994808f03 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/models.go @@ -0,0 +1,15753 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "encoding/json" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/date" + "github.com/Azure/go-autorest/autorest/to" + "net/http" +) + +// Access enumerates the values for access. +type Access string + +const ( + // Allow ... + Allow Access = "Allow" + // Deny ... + Deny Access = "Deny" +) + +// PossibleAccessValues returns an array of possible values for the Access const type. +func PossibleAccessValues() []Access { + return []Access{Allow, Deny} +} + +// ApplicationGatewayBackendHealthServerHealth enumerates the values for application gateway backend health +// server health. +type ApplicationGatewayBackendHealthServerHealth string + +const ( + // Down ... + Down ApplicationGatewayBackendHealthServerHealth = "Down" + // Draining ... + Draining ApplicationGatewayBackendHealthServerHealth = "Draining" + // Partial ... + Partial ApplicationGatewayBackendHealthServerHealth = "Partial" + // Unknown ... + Unknown ApplicationGatewayBackendHealthServerHealth = "Unknown" + // Up ... + Up ApplicationGatewayBackendHealthServerHealth = "Up" +) + +// PossibleApplicationGatewayBackendHealthServerHealthValues returns an array of possible values for the ApplicationGatewayBackendHealthServerHealth const type. +func PossibleApplicationGatewayBackendHealthServerHealthValues() []ApplicationGatewayBackendHealthServerHealth { + return []ApplicationGatewayBackendHealthServerHealth{Down, Draining, Partial, Unknown, Up} +} + +// ApplicationGatewayCookieBasedAffinity enumerates the values for application gateway cookie based affinity. +type ApplicationGatewayCookieBasedAffinity string + +const ( + // Disabled ... + Disabled ApplicationGatewayCookieBasedAffinity = "Disabled" + // Enabled ... + Enabled ApplicationGatewayCookieBasedAffinity = "Enabled" +) + +// PossibleApplicationGatewayCookieBasedAffinityValues returns an array of possible values for the ApplicationGatewayCookieBasedAffinity const type. 
+func PossibleApplicationGatewayCookieBasedAffinityValues() []ApplicationGatewayCookieBasedAffinity { + return []ApplicationGatewayCookieBasedAffinity{Disabled, Enabled} +} + +// ApplicationGatewayFirewallMode enumerates the values for application gateway firewall mode. +type ApplicationGatewayFirewallMode string + +const ( + // Detection ... + Detection ApplicationGatewayFirewallMode = "Detection" + // Prevention ... + Prevention ApplicationGatewayFirewallMode = "Prevention" +) + +// PossibleApplicationGatewayFirewallModeValues returns an array of possible values for the ApplicationGatewayFirewallMode const type. +func PossibleApplicationGatewayFirewallModeValues() []ApplicationGatewayFirewallMode { + return []ApplicationGatewayFirewallMode{Detection, Prevention} +} + +// ApplicationGatewayOperationalState enumerates the values for application gateway operational state. +type ApplicationGatewayOperationalState string + +const ( + // Running ... + Running ApplicationGatewayOperationalState = "Running" + // Starting ... + Starting ApplicationGatewayOperationalState = "Starting" + // Stopped ... + Stopped ApplicationGatewayOperationalState = "Stopped" + // Stopping ... + Stopping ApplicationGatewayOperationalState = "Stopping" +) + +// PossibleApplicationGatewayOperationalStateValues returns an array of possible values for the ApplicationGatewayOperationalState const type. +func PossibleApplicationGatewayOperationalStateValues() []ApplicationGatewayOperationalState { + return []ApplicationGatewayOperationalState{Running, Starting, Stopped, Stopping} +} + +// ApplicationGatewayProtocol enumerates the values for application gateway protocol. +type ApplicationGatewayProtocol string + +const ( + // HTTP ... + HTTP ApplicationGatewayProtocol = "Http" + // HTTPS ... + HTTPS ApplicationGatewayProtocol = "Https" +) + +// PossibleApplicationGatewayProtocolValues returns an array of possible values for the ApplicationGatewayProtocol const type. +func PossibleApplicationGatewayProtocolValues() []ApplicationGatewayProtocol { + return []ApplicationGatewayProtocol{HTTP, HTTPS} +} + +// ApplicationGatewayRedirectType enumerates the values for application gateway redirect type. +type ApplicationGatewayRedirectType string + +const ( + // Found ... + Found ApplicationGatewayRedirectType = "Found" + // Permanent ... + Permanent ApplicationGatewayRedirectType = "Permanent" + // SeeOther ... + SeeOther ApplicationGatewayRedirectType = "SeeOther" + // Temporary ... + Temporary ApplicationGatewayRedirectType = "Temporary" +) + +// PossibleApplicationGatewayRedirectTypeValues returns an array of possible values for the ApplicationGatewayRedirectType const type. +func PossibleApplicationGatewayRedirectTypeValues() []ApplicationGatewayRedirectType { + return []ApplicationGatewayRedirectType{Found, Permanent, SeeOther, Temporary} +} + +// ApplicationGatewayRequestRoutingRuleType enumerates the values for application gateway request routing rule +// type. +type ApplicationGatewayRequestRoutingRuleType string + +const ( + // Basic ... + Basic ApplicationGatewayRequestRoutingRuleType = "Basic" + // PathBasedRouting ... + PathBasedRouting ApplicationGatewayRequestRoutingRuleType = "PathBasedRouting" +) + +// PossibleApplicationGatewayRequestRoutingRuleTypeValues returns an array of possible values for the ApplicationGatewayRequestRoutingRuleType const type. 
+func PossibleApplicationGatewayRequestRoutingRuleTypeValues() []ApplicationGatewayRequestRoutingRuleType { + return []ApplicationGatewayRequestRoutingRuleType{Basic, PathBasedRouting} +} + +// ApplicationGatewaySkuName enumerates the values for application gateway sku name. +type ApplicationGatewaySkuName string + +const ( + // StandardLarge ... + StandardLarge ApplicationGatewaySkuName = "Standard_Large" + // StandardMedium ... + StandardMedium ApplicationGatewaySkuName = "Standard_Medium" + // StandardSmall ... + StandardSmall ApplicationGatewaySkuName = "Standard_Small" + // WAFLarge ... + WAFLarge ApplicationGatewaySkuName = "WAF_Large" + // WAFMedium ... + WAFMedium ApplicationGatewaySkuName = "WAF_Medium" +) + +// PossibleApplicationGatewaySkuNameValues returns an array of possible values for the ApplicationGatewaySkuName const type. +func PossibleApplicationGatewaySkuNameValues() []ApplicationGatewaySkuName { + return []ApplicationGatewaySkuName{StandardLarge, StandardMedium, StandardSmall, WAFLarge, WAFMedium} +} + +// ApplicationGatewaySslCipherSuite enumerates the values for application gateway ssl cipher suite. +type ApplicationGatewaySslCipherSuite string + +const ( + // TLSDHEDSSWITHAES128CBCSHA ... + TLSDHEDSSWITHAES128CBCSHA ApplicationGatewaySslCipherSuite = "TLS_DHE_DSS_WITH_AES_128_CBC_SHA" + // TLSDHEDSSWITHAES128CBCSHA256 ... + TLSDHEDSSWITHAES128CBCSHA256 ApplicationGatewaySslCipherSuite = "TLS_DHE_DSS_WITH_AES_128_CBC_SHA256" + // TLSDHEDSSWITHAES256CBCSHA ... + TLSDHEDSSWITHAES256CBCSHA ApplicationGatewaySslCipherSuite = "TLS_DHE_DSS_WITH_AES_256_CBC_SHA" + // TLSDHEDSSWITHAES256CBCSHA256 ... + TLSDHEDSSWITHAES256CBCSHA256 ApplicationGatewaySslCipherSuite = "TLS_DHE_DSS_WITH_AES_256_CBC_SHA256" + // TLSDHERSAWITHAES128CBCSHA ... + TLSDHERSAWITHAES128CBCSHA ApplicationGatewaySslCipherSuite = "TLS_DHE_RSA_WITH_AES_128_CBC_SHA" + // TLSDHERSAWITHAES128GCMSHA256 ... + TLSDHERSAWITHAES128GCMSHA256 ApplicationGatewaySslCipherSuite = "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256" + // TLSDHERSAWITHAES256CBCSHA ... + TLSDHERSAWITHAES256CBCSHA ApplicationGatewaySslCipherSuite = "TLS_DHE_RSA_WITH_AES_256_CBC_SHA" + // TLSDHERSAWITHAES256GCMSHA384 ... + TLSDHERSAWITHAES256GCMSHA384 ApplicationGatewaySslCipherSuite = "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384" + // TLSECDHEECDSAWITHAES128CBCSHA ... + TLSECDHEECDSAWITHAES128CBCSHA ApplicationGatewaySslCipherSuite = "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA" + // TLSECDHEECDSAWITHAES128CBCSHA256 ... + TLSECDHEECDSAWITHAES128CBCSHA256 ApplicationGatewaySslCipherSuite = "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256" + // TLSECDHEECDSAWITHAES128GCMSHA256 ... + TLSECDHEECDSAWITHAES128GCMSHA256 ApplicationGatewaySslCipherSuite = "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" + // TLSECDHEECDSAWITHAES256CBCSHA ... + TLSECDHEECDSAWITHAES256CBCSHA ApplicationGatewaySslCipherSuite = "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA" + // TLSECDHEECDSAWITHAES256CBCSHA384 ... + TLSECDHEECDSAWITHAES256CBCSHA384 ApplicationGatewaySslCipherSuite = "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384" + // TLSECDHEECDSAWITHAES256GCMSHA384 ... + TLSECDHEECDSAWITHAES256GCMSHA384 ApplicationGatewaySslCipherSuite = "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" + // TLSECDHERSAWITHAES128CBCSHA ... + TLSECDHERSAWITHAES128CBCSHA ApplicationGatewaySslCipherSuite = "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA" + // TLSECDHERSAWITHAES128CBCSHA256 ... + TLSECDHERSAWITHAES128CBCSHA256 ApplicationGatewaySslCipherSuite = "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256" + // TLSECDHERSAWITHAES256CBCSHA ... 
+ TLSECDHERSAWITHAES256CBCSHA ApplicationGatewaySslCipherSuite = "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA" + // TLSECDHERSAWITHAES256CBCSHA384 ... + TLSECDHERSAWITHAES256CBCSHA384 ApplicationGatewaySslCipherSuite = "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384" + // TLSRSAWITH3DESEDECBCSHA ... + TLSRSAWITH3DESEDECBCSHA ApplicationGatewaySslCipherSuite = "TLS_RSA_WITH_3DES_EDE_CBC_SHA" + // TLSRSAWITHAES128CBCSHA ... + TLSRSAWITHAES128CBCSHA ApplicationGatewaySslCipherSuite = "TLS_RSA_WITH_AES_128_CBC_SHA" + // TLSRSAWITHAES128CBCSHA256 ... + TLSRSAWITHAES128CBCSHA256 ApplicationGatewaySslCipherSuite = "TLS_RSA_WITH_AES_128_CBC_SHA256" + // TLSRSAWITHAES128GCMSHA256 ... + TLSRSAWITHAES128GCMSHA256 ApplicationGatewaySslCipherSuite = "TLS_RSA_WITH_AES_128_GCM_SHA256" + // TLSRSAWITHAES256CBCSHA ... + TLSRSAWITHAES256CBCSHA ApplicationGatewaySslCipherSuite = "TLS_RSA_WITH_AES_256_CBC_SHA" + // TLSRSAWITHAES256CBCSHA256 ... + TLSRSAWITHAES256CBCSHA256 ApplicationGatewaySslCipherSuite = "TLS_RSA_WITH_AES_256_CBC_SHA256" + // TLSRSAWITHAES256GCMSHA384 ... + TLSRSAWITHAES256GCMSHA384 ApplicationGatewaySslCipherSuite = "TLS_RSA_WITH_AES_256_GCM_SHA384" +) + +// PossibleApplicationGatewaySslCipherSuiteValues returns an array of possible values for the ApplicationGatewaySslCipherSuite const type. +func PossibleApplicationGatewaySslCipherSuiteValues() []ApplicationGatewaySslCipherSuite { + return []ApplicationGatewaySslCipherSuite{TLSDHEDSSWITHAES128CBCSHA, TLSDHEDSSWITHAES128CBCSHA256, TLSDHEDSSWITHAES256CBCSHA, TLSDHEDSSWITHAES256CBCSHA256, TLSDHERSAWITHAES128CBCSHA, TLSDHERSAWITHAES128GCMSHA256, TLSDHERSAWITHAES256CBCSHA, TLSDHERSAWITHAES256GCMSHA384, TLSECDHEECDSAWITHAES128CBCSHA, TLSECDHEECDSAWITHAES128CBCSHA256, TLSECDHEECDSAWITHAES128GCMSHA256, TLSECDHEECDSAWITHAES256CBCSHA, TLSECDHEECDSAWITHAES256CBCSHA384, TLSECDHEECDSAWITHAES256GCMSHA384, TLSECDHERSAWITHAES128CBCSHA, TLSECDHERSAWITHAES128CBCSHA256, TLSECDHERSAWITHAES256CBCSHA, TLSECDHERSAWITHAES256CBCSHA384, TLSRSAWITH3DESEDECBCSHA, TLSRSAWITHAES128CBCSHA, TLSRSAWITHAES128CBCSHA256, TLSRSAWITHAES128GCMSHA256, TLSRSAWITHAES256CBCSHA, TLSRSAWITHAES256CBCSHA256, TLSRSAWITHAES256GCMSHA384} +} + +// ApplicationGatewaySslPolicyName enumerates the values for application gateway ssl policy name. +type ApplicationGatewaySslPolicyName string + +const ( + // AppGwSslPolicy20150501 ... + AppGwSslPolicy20150501 ApplicationGatewaySslPolicyName = "AppGwSslPolicy20150501" + // AppGwSslPolicy20170401 ... + AppGwSslPolicy20170401 ApplicationGatewaySslPolicyName = "AppGwSslPolicy20170401" + // AppGwSslPolicy20170401S ... + AppGwSslPolicy20170401S ApplicationGatewaySslPolicyName = "AppGwSslPolicy20170401S" +) + +// PossibleApplicationGatewaySslPolicyNameValues returns an array of possible values for the ApplicationGatewaySslPolicyName const type. +func PossibleApplicationGatewaySslPolicyNameValues() []ApplicationGatewaySslPolicyName { + return []ApplicationGatewaySslPolicyName{AppGwSslPolicy20150501, AppGwSslPolicy20170401, AppGwSslPolicy20170401S} +} + +// ApplicationGatewaySslPolicyType enumerates the values for application gateway ssl policy type. +type ApplicationGatewaySslPolicyType string + +const ( + // Custom ... + Custom ApplicationGatewaySslPolicyType = "Custom" + // Predefined ... + Predefined ApplicationGatewaySslPolicyType = "Predefined" +) + +// PossibleApplicationGatewaySslPolicyTypeValues returns an array of possible values for the ApplicationGatewaySslPolicyType const type. 
+func PossibleApplicationGatewaySslPolicyTypeValues() []ApplicationGatewaySslPolicyType { + return []ApplicationGatewaySslPolicyType{Custom, Predefined} +} + +// ApplicationGatewaySslProtocol enumerates the values for application gateway ssl protocol. +type ApplicationGatewaySslProtocol string + +const ( + // TLSv10 ... + TLSv10 ApplicationGatewaySslProtocol = "TLSv1_0" + // TLSv11 ... + TLSv11 ApplicationGatewaySslProtocol = "TLSv1_1" + // TLSv12 ... + TLSv12 ApplicationGatewaySslProtocol = "TLSv1_2" +) + +// PossibleApplicationGatewaySslProtocolValues returns an array of possible values for the ApplicationGatewaySslProtocol const type. +func PossibleApplicationGatewaySslProtocolValues() []ApplicationGatewaySslProtocol { + return []ApplicationGatewaySslProtocol{TLSv10, TLSv11, TLSv12} +} + +// ApplicationGatewayTier enumerates the values for application gateway tier. +type ApplicationGatewayTier string + +const ( + // Standard ... + Standard ApplicationGatewayTier = "Standard" + // WAF ... + WAF ApplicationGatewayTier = "WAF" +) + +// PossibleApplicationGatewayTierValues returns an array of possible values for the ApplicationGatewayTier const type. +func PossibleApplicationGatewayTierValues() []ApplicationGatewayTier { + return []ApplicationGatewayTier{Standard, WAF} +} + +// AssociationType enumerates the values for association type. +type AssociationType string + +const ( + // Associated ... + Associated AssociationType = "Associated" + // Contains ... + Contains AssociationType = "Contains" +) + +// PossibleAssociationTypeValues returns an array of possible values for the AssociationType const type. +func PossibleAssociationTypeValues() []AssociationType { + return []AssociationType{Associated, Contains} +} + +// AuthenticationMethod enumerates the values for authentication method. +type AuthenticationMethod string + +const ( + // EAPMSCHAPv2 ... + EAPMSCHAPv2 AuthenticationMethod = "EAPMSCHAPv2" + // EAPTLS ... + EAPTLS AuthenticationMethod = "EAPTLS" +) + +// PossibleAuthenticationMethodValues returns an array of possible values for the AuthenticationMethod const type. +func PossibleAuthenticationMethodValues() []AuthenticationMethod { + return []AuthenticationMethod{EAPMSCHAPv2, EAPTLS} +} + +// AuthorizationUseStatus enumerates the values for authorization use status. +type AuthorizationUseStatus string + +const ( + // Available ... + Available AuthorizationUseStatus = "Available" + // InUse ... + InUse AuthorizationUseStatus = "InUse" +) + +// PossibleAuthorizationUseStatusValues returns an array of possible values for the AuthorizationUseStatus const type. +func PossibleAuthorizationUseStatusValues() []AuthorizationUseStatus { + return []AuthorizationUseStatus{Available, InUse} +} + +// BgpPeerState enumerates the values for bgp peer state. +type BgpPeerState string + +const ( + // BgpPeerStateConnected ... + BgpPeerStateConnected BgpPeerState = "Connected" + // BgpPeerStateConnecting ... + BgpPeerStateConnecting BgpPeerState = "Connecting" + // BgpPeerStateIdle ... + BgpPeerStateIdle BgpPeerState = "Idle" + // BgpPeerStateStopped ... + BgpPeerStateStopped BgpPeerState = "Stopped" + // BgpPeerStateUnknown ... + BgpPeerStateUnknown BgpPeerState = "Unknown" +) + +// PossibleBgpPeerStateValues returns an array of possible values for the BgpPeerState const type. 
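Each enum type in the regenerated models.go now ships with a PossibleXValues helper listing its accepted constants. A small sketch of leaning on one of these helpers, here PossibleApplicationGatewaySslProtocolValues, to validate user-supplied input; the isValidSslProtocol wrapper is an illustrative name, not part of the SDK:

```
package example

import "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"

// isValidSslProtocol reports whether v matches one of the
// ApplicationGatewaySslProtocol constants declared above.
func isValidSslProtocol(v string) bool {
	for _, p := range network.PossibleApplicationGatewaySslProtocolValues() {
		if string(p) == v {
			return true
		}
	}
	return false
}
```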
+func PossibleBgpPeerStateValues() []BgpPeerState { + return []BgpPeerState{BgpPeerStateConnected, BgpPeerStateConnecting, BgpPeerStateIdle, BgpPeerStateStopped, BgpPeerStateUnknown} +} + +// ConnectionState enumerates the values for connection state. +type ConnectionState string + +const ( + // ConnectionStateReachable ... + ConnectionStateReachable ConnectionState = "Reachable" + // ConnectionStateUnknown ... + ConnectionStateUnknown ConnectionState = "Unknown" + // ConnectionStateUnreachable ... + ConnectionStateUnreachable ConnectionState = "Unreachable" +) + +// PossibleConnectionStateValues returns an array of possible values for the ConnectionState const type. +func PossibleConnectionStateValues() []ConnectionState { + return []ConnectionState{ConnectionStateReachable, ConnectionStateUnknown, ConnectionStateUnreachable} +} + +// ConnectionStatus enumerates the values for connection status. +type ConnectionStatus string + +const ( + // ConnectionStatusConnected ... + ConnectionStatusConnected ConnectionStatus = "Connected" + // ConnectionStatusDegraded ... + ConnectionStatusDegraded ConnectionStatus = "Degraded" + // ConnectionStatusDisconnected ... + ConnectionStatusDisconnected ConnectionStatus = "Disconnected" + // ConnectionStatusUnknown ... + ConnectionStatusUnknown ConnectionStatus = "Unknown" +) + +// PossibleConnectionStatusValues returns an array of possible values for the ConnectionStatus const type. +func PossibleConnectionStatusValues() []ConnectionStatus { + return []ConnectionStatus{ConnectionStatusConnected, ConnectionStatusDegraded, ConnectionStatusDisconnected, ConnectionStatusUnknown} +} + +// DhGroup enumerates the values for dh group. +type DhGroup string + +const ( + // DHGroup1 ... + DHGroup1 DhGroup = "DHGroup1" + // DHGroup14 ... + DHGroup14 DhGroup = "DHGroup14" + // DHGroup2 ... + DHGroup2 DhGroup = "DHGroup2" + // DHGroup2048 ... + DHGroup2048 DhGroup = "DHGroup2048" + // DHGroup24 ... + DHGroup24 DhGroup = "DHGroup24" + // ECP256 ... + ECP256 DhGroup = "ECP256" + // ECP384 ... + ECP384 DhGroup = "ECP384" + // None ... + None DhGroup = "None" +) + +// PossibleDhGroupValues returns an array of possible values for the DhGroup const type. +func PossibleDhGroupValues() []DhGroup { + return []DhGroup{DHGroup1, DHGroup14, DHGroup2, DHGroup2048, DHGroup24, ECP256, ECP384, None} +} + +// Direction enumerates the values for direction. +type Direction string + +const ( + // Inbound ... + Inbound Direction = "Inbound" + // Outbound ... + Outbound Direction = "Outbound" +) + +// PossibleDirectionValues returns an array of possible values for the Direction const type. +func PossibleDirectionValues() []Direction { + return []Direction{Inbound, Outbound} +} + +// EffectiveRouteSource enumerates the values for effective route source. +type EffectiveRouteSource string + +const ( + // EffectiveRouteSourceDefault ... + EffectiveRouteSourceDefault EffectiveRouteSource = "Default" + // EffectiveRouteSourceUnknown ... + EffectiveRouteSourceUnknown EffectiveRouteSource = "Unknown" + // EffectiveRouteSourceUser ... + EffectiveRouteSourceUser EffectiveRouteSource = "User" + // EffectiveRouteSourceVirtualNetworkGateway ... + EffectiveRouteSourceVirtualNetworkGateway EffectiveRouteSource = "VirtualNetworkGateway" +) + +// PossibleEffectiveRouteSourceValues returns an array of possible values for the EffectiveRouteSource const type. 
+func PossibleEffectiveRouteSourceValues() []EffectiveRouteSource { + return []EffectiveRouteSource{EffectiveRouteSourceDefault, EffectiveRouteSourceUnknown, EffectiveRouteSourceUser, EffectiveRouteSourceVirtualNetworkGateway} +} + +// EffectiveRouteState enumerates the values for effective route state. +type EffectiveRouteState string + +const ( + // Active ... + Active EffectiveRouteState = "Active" + // Invalid ... + Invalid EffectiveRouteState = "Invalid" +) + +// PossibleEffectiveRouteStateValues returns an array of possible values for the EffectiveRouteState const type. +func PossibleEffectiveRouteStateValues() []EffectiveRouteState { + return []EffectiveRouteState{Active, Invalid} +} + +// EffectiveSecurityRuleProtocol enumerates the values for effective security rule protocol. +type EffectiveSecurityRuleProtocol string + +const ( + // All ... + All EffectiveSecurityRuleProtocol = "All" + // TCP ... + TCP EffectiveSecurityRuleProtocol = "Tcp" + // UDP ... + UDP EffectiveSecurityRuleProtocol = "Udp" +) + +// PossibleEffectiveSecurityRuleProtocolValues returns an array of possible values for the EffectiveSecurityRuleProtocol const type. +func PossibleEffectiveSecurityRuleProtocolValues() []EffectiveSecurityRuleProtocol { + return []EffectiveSecurityRuleProtocol{All, TCP, UDP} +} + +// EvaluationState enumerates the values for evaluation state. +type EvaluationState string + +const ( + // Completed ... + Completed EvaluationState = "Completed" + // InProgress ... + InProgress EvaluationState = "InProgress" + // NotStarted ... + NotStarted EvaluationState = "NotStarted" +) + +// PossibleEvaluationStateValues returns an array of possible values for the EvaluationState const type. +func PossibleEvaluationStateValues() []EvaluationState { + return []EvaluationState{Completed, InProgress, NotStarted} +} + +// ExpressRouteCircuitPeeringAdvertisedPublicPrefixState enumerates the values for express route circuit +// peering advertised public prefix state. +type ExpressRouteCircuitPeeringAdvertisedPublicPrefixState string + +const ( + // Configured ... + Configured ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "Configured" + // Configuring ... + Configuring ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "Configuring" + // NotConfigured ... + NotConfigured ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "NotConfigured" + // ValidationNeeded ... + ValidationNeeded ExpressRouteCircuitPeeringAdvertisedPublicPrefixState = "ValidationNeeded" +) + +// PossibleExpressRouteCircuitPeeringAdvertisedPublicPrefixStateValues returns an array of possible values for the ExpressRouteCircuitPeeringAdvertisedPublicPrefixState const type. +func PossibleExpressRouteCircuitPeeringAdvertisedPublicPrefixStateValues() []ExpressRouteCircuitPeeringAdvertisedPublicPrefixState { + return []ExpressRouteCircuitPeeringAdvertisedPublicPrefixState{Configured, Configuring, NotConfigured, ValidationNeeded} +} + +// ExpressRouteCircuitPeeringState enumerates the values for express route circuit peering state. +type ExpressRouteCircuitPeeringState string + +const ( + // ExpressRouteCircuitPeeringStateDisabled ... + ExpressRouteCircuitPeeringStateDisabled ExpressRouteCircuitPeeringState = "Disabled" + // ExpressRouteCircuitPeeringStateEnabled ... + ExpressRouteCircuitPeeringStateEnabled ExpressRouteCircuitPeeringState = "Enabled" +) + +// PossibleExpressRouteCircuitPeeringStateValues returns an array of possible values for the ExpressRouteCircuitPeeringState const type. 
+func PossibleExpressRouteCircuitPeeringStateValues() []ExpressRouteCircuitPeeringState { + return []ExpressRouteCircuitPeeringState{ExpressRouteCircuitPeeringStateDisabled, ExpressRouteCircuitPeeringStateEnabled} +} + +// ExpressRouteCircuitPeeringType enumerates the values for express route circuit peering type. +type ExpressRouteCircuitPeeringType string + +const ( + // AzurePrivatePeering ... + AzurePrivatePeering ExpressRouteCircuitPeeringType = "AzurePrivatePeering" + // AzurePublicPeering ... + AzurePublicPeering ExpressRouteCircuitPeeringType = "AzurePublicPeering" + // MicrosoftPeering ... + MicrosoftPeering ExpressRouteCircuitPeeringType = "MicrosoftPeering" +) + +// PossibleExpressRouteCircuitPeeringTypeValues returns an array of possible values for the ExpressRouteCircuitPeeringType const type. +func PossibleExpressRouteCircuitPeeringTypeValues() []ExpressRouteCircuitPeeringType { + return []ExpressRouteCircuitPeeringType{AzurePrivatePeering, AzurePublicPeering, MicrosoftPeering} +} + +// ExpressRouteCircuitSkuFamily enumerates the values for express route circuit sku family. +type ExpressRouteCircuitSkuFamily string + +const ( + // MeteredData ... + MeteredData ExpressRouteCircuitSkuFamily = "MeteredData" + // UnlimitedData ... + UnlimitedData ExpressRouteCircuitSkuFamily = "UnlimitedData" +) + +// PossibleExpressRouteCircuitSkuFamilyValues returns an array of possible values for the ExpressRouteCircuitSkuFamily const type. +func PossibleExpressRouteCircuitSkuFamilyValues() []ExpressRouteCircuitSkuFamily { + return []ExpressRouteCircuitSkuFamily{MeteredData, UnlimitedData} +} + +// ExpressRouteCircuitSkuTier enumerates the values for express route circuit sku tier. +type ExpressRouteCircuitSkuTier string + +const ( + // ExpressRouteCircuitSkuTierPremium ... + ExpressRouteCircuitSkuTierPremium ExpressRouteCircuitSkuTier = "Premium" + // ExpressRouteCircuitSkuTierStandard ... + ExpressRouteCircuitSkuTierStandard ExpressRouteCircuitSkuTier = "Standard" +) + +// PossibleExpressRouteCircuitSkuTierValues returns an array of possible values for the ExpressRouteCircuitSkuTier const type. +func PossibleExpressRouteCircuitSkuTierValues() []ExpressRouteCircuitSkuTier { + return []ExpressRouteCircuitSkuTier{ExpressRouteCircuitSkuTierPremium, ExpressRouteCircuitSkuTierStandard} +} + +// IkeEncryption enumerates the values for ike encryption. +type IkeEncryption string + +const ( + // AES128 ... + AES128 IkeEncryption = "AES128" + // AES192 ... + AES192 IkeEncryption = "AES192" + // AES256 ... + AES256 IkeEncryption = "AES256" + // DES ... + DES IkeEncryption = "DES" + // DES3 ... + DES3 IkeEncryption = "DES3" +) + +// PossibleIkeEncryptionValues returns an array of possible values for the IkeEncryption const type. +func PossibleIkeEncryptionValues() []IkeEncryption { + return []IkeEncryption{AES128, AES192, AES256, DES, DES3} +} + +// IkeIntegrity enumerates the values for ike integrity. +type IkeIntegrity string + +const ( + // MD5 ... + MD5 IkeIntegrity = "MD5" + // SHA1 ... + SHA1 IkeIntegrity = "SHA1" + // SHA256 ... + SHA256 IkeIntegrity = "SHA256" + // SHA384 ... + SHA384 IkeIntegrity = "SHA384" +) + +// PossibleIkeIntegrityValues returns an array of possible values for the IkeIntegrity const type. +func PossibleIkeIntegrityValues() []IkeIntegrity { + return []IkeIntegrity{MD5, SHA1, SHA256, SHA384} +} + +// IPAllocationMethod enumerates the values for ip allocation method. +type IPAllocationMethod string + +const ( + // Dynamic ... 
+ Dynamic IPAllocationMethod = "Dynamic" + // Static ... + Static IPAllocationMethod = "Static" +) + +// PossibleIPAllocationMethodValues returns an array of possible values for the IPAllocationMethod const type. +func PossibleIPAllocationMethodValues() []IPAllocationMethod { + return []IPAllocationMethod{Dynamic, Static} +} + +// IpsecEncryption enumerates the values for ipsec encryption. +type IpsecEncryption string + +const ( + // IpsecEncryptionAES128 ... + IpsecEncryptionAES128 IpsecEncryption = "AES128" + // IpsecEncryptionAES192 ... + IpsecEncryptionAES192 IpsecEncryption = "AES192" + // IpsecEncryptionAES256 ... + IpsecEncryptionAES256 IpsecEncryption = "AES256" + // IpsecEncryptionDES ... + IpsecEncryptionDES IpsecEncryption = "DES" + // IpsecEncryptionDES3 ... + IpsecEncryptionDES3 IpsecEncryption = "DES3" + // IpsecEncryptionGCMAES128 ... + IpsecEncryptionGCMAES128 IpsecEncryption = "GCMAES128" + // IpsecEncryptionGCMAES192 ... + IpsecEncryptionGCMAES192 IpsecEncryption = "GCMAES192" + // IpsecEncryptionGCMAES256 ... + IpsecEncryptionGCMAES256 IpsecEncryption = "GCMAES256" + // IpsecEncryptionNone ... + IpsecEncryptionNone IpsecEncryption = "None" +) + +// PossibleIpsecEncryptionValues returns an array of possible values for the IpsecEncryption const type. +func PossibleIpsecEncryptionValues() []IpsecEncryption { + return []IpsecEncryption{IpsecEncryptionAES128, IpsecEncryptionAES192, IpsecEncryptionAES256, IpsecEncryptionDES, IpsecEncryptionDES3, IpsecEncryptionGCMAES128, IpsecEncryptionGCMAES192, IpsecEncryptionGCMAES256, IpsecEncryptionNone} +} + +// IpsecIntegrity enumerates the values for ipsec integrity. +type IpsecIntegrity string + +const ( + // IpsecIntegrityGCMAES128 ... + IpsecIntegrityGCMAES128 IpsecIntegrity = "GCMAES128" + // IpsecIntegrityGCMAES192 ... + IpsecIntegrityGCMAES192 IpsecIntegrity = "GCMAES192" + // IpsecIntegrityGCMAES256 ... + IpsecIntegrityGCMAES256 IpsecIntegrity = "GCMAES256" + // IpsecIntegrityMD5 ... + IpsecIntegrityMD5 IpsecIntegrity = "MD5" + // IpsecIntegritySHA1 ... + IpsecIntegritySHA1 IpsecIntegrity = "SHA1" + // IpsecIntegritySHA256 ... + IpsecIntegritySHA256 IpsecIntegrity = "SHA256" +) + +// PossibleIpsecIntegrityValues returns an array of possible values for the IpsecIntegrity const type. +func PossibleIpsecIntegrityValues() []IpsecIntegrity { + return []IpsecIntegrity{IpsecIntegrityGCMAES128, IpsecIntegrityGCMAES192, IpsecIntegrityGCMAES256, IpsecIntegrityMD5, IpsecIntegritySHA1, IpsecIntegritySHA256} +} + +// IPVersion enumerates the values for ip version. +type IPVersion string + +const ( + // IPv4 ... + IPv4 IPVersion = "IPv4" + // IPv6 ... + IPv6 IPVersion = "IPv6" +) + +// PossibleIPVersionValues returns an array of possible values for the IPVersion const type. +func PossibleIPVersionValues() []IPVersion { + return []IPVersion{IPv4, IPv6} +} + +// IssueType enumerates the values for issue type. +type IssueType string + +const ( + // IssueTypeAgentStopped ... + IssueTypeAgentStopped IssueType = "AgentStopped" + // IssueTypeDNSResolution ... + IssueTypeDNSResolution IssueType = "DnsResolution" + // IssueTypeGuestFirewall ... + IssueTypeGuestFirewall IssueType = "GuestFirewall" + // IssueTypeNetworkSecurityRule ... + IssueTypeNetworkSecurityRule IssueType = "NetworkSecurityRule" + // IssueTypePlatform ... + IssueTypePlatform IssueType = "Platform" + // IssueTypePortThrottled ... + IssueTypePortThrottled IssueType = "PortThrottled" + // IssueTypeSocketBind ... + IssueTypeSocketBind IssueType = "SocketBind" + // IssueTypeUnknown ... 
+ IssueTypeUnknown IssueType = "Unknown" + // IssueTypeUserDefinedRoute ... + IssueTypeUserDefinedRoute IssueType = "UserDefinedRoute" +) + +// PossibleIssueTypeValues returns an array of possible values for the IssueType const type. +func PossibleIssueTypeValues() []IssueType { + return []IssueType{IssueTypeAgentStopped, IssueTypeDNSResolution, IssueTypeGuestFirewall, IssueTypeNetworkSecurityRule, IssueTypePlatform, IssueTypePortThrottled, IssueTypeSocketBind, IssueTypeUnknown, IssueTypeUserDefinedRoute} +} + +// LoadBalancerSkuName enumerates the values for load balancer sku name. +type LoadBalancerSkuName string + +const ( + // LoadBalancerSkuNameBasic ... + LoadBalancerSkuNameBasic LoadBalancerSkuName = "Basic" + // LoadBalancerSkuNameStandard ... + LoadBalancerSkuNameStandard LoadBalancerSkuName = "Standard" +) + +// PossibleLoadBalancerSkuNameValues returns an array of possible values for the LoadBalancerSkuName const type. +func PossibleLoadBalancerSkuNameValues() []LoadBalancerSkuName { + return []LoadBalancerSkuName{LoadBalancerSkuNameBasic, LoadBalancerSkuNameStandard} +} + +// LoadDistribution enumerates the values for load distribution. +type LoadDistribution string + +const ( + // Default ... + Default LoadDistribution = "Default" + // SourceIP ... + SourceIP LoadDistribution = "SourceIP" + // SourceIPProtocol ... + SourceIPProtocol LoadDistribution = "SourceIPProtocol" +) + +// PossibleLoadDistributionValues returns an array of possible values for the LoadDistribution const type. +func PossibleLoadDistributionValues() []LoadDistribution { + return []LoadDistribution{Default, SourceIP, SourceIPProtocol} +} + +// NextHopType enumerates the values for next hop type. +type NextHopType string + +const ( + // NextHopTypeHyperNetGateway ... + NextHopTypeHyperNetGateway NextHopType = "HyperNetGateway" + // NextHopTypeInternet ... + NextHopTypeInternet NextHopType = "Internet" + // NextHopTypeNone ... + NextHopTypeNone NextHopType = "None" + // NextHopTypeVirtualAppliance ... + NextHopTypeVirtualAppliance NextHopType = "VirtualAppliance" + // NextHopTypeVirtualNetworkGateway ... + NextHopTypeVirtualNetworkGateway NextHopType = "VirtualNetworkGateway" + // NextHopTypeVnetLocal ... + NextHopTypeVnetLocal NextHopType = "VnetLocal" +) + +// PossibleNextHopTypeValues returns an array of possible values for the NextHopType const type. +func PossibleNextHopTypeValues() []NextHopType { + return []NextHopType{NextHopTypeHyperNetGateway, NextHopTypeInternet, NextHopTypeNone, NextHopTypeVirtualAppliance, NextHopTypeVirtualNetworkGateway, NextHopTypeVnetLocal} +} + +// OperationStatus enumerates the values for operation status. +type OperationStatus string + +const ( + // OperationStatusFailed ... + OperationStatusFailed OperationStatus = "Failed" + // OperationStatusInProgress ... + OperationStatusInProgress OperationStatus = "InProgress" + // OperationStatusSucceeded ... + OperationStatusSucceeded OperationStatus = "Succeeded" +) + +// PossibleOperationStatusValues returns an array of possible values for the OperationStatus const type. +func PossibleOperationStatusValues() []OperationStatus { + return []OperationStatus{OperationStatusFailed, OperationStatusInProgress, OperationStatusSucceeded} +} + +// Origin enumerates the values for origin. +type Origin string + +const ( + // OriginInbound ... + OriginInbound Origin = "Inbound" + // OriginLocal ... + OriginLocal Origin = "Local" + // OriginOutbound ... 
+ OriginOutbound Origin = "Outbound" +) + +// PossibleOriginValues returns an array of possible values for the Origin const type. +func PossibleOriginValues() []Origin { + return []Origin{OriginInbound, OriginLocal, OriginOutbound} +} + +// PcError enumerates the values for pc error. +type PcError string + +const ( + // AgentStopped ... + AgentStopped PcError = "AgentStopped" + // CaptureFailed ... + CaptureFailed PcError = "CaptureFailed" + // InternalError ... + InternalError PcError = "InternalError" + // LocalFileFailed ... + LocalFileFailed PcError = "LocalFileFailed" + // StorageFailed ... + StorageFailed PcError = "StorageFailed" +) + +// PossiblePcErrorValues returns an array of possible values for the PcError const type. +func PossiblePcErrorValues() []PcError { + return []PcError{AgentStopped, CaptureFailed, InternalError, LocalFileFailed, StorageFailed} +} + +// PcProtocol enumerates the values for pc protocol. +type PcProtocol string + +const ( + // PcProtocolAny ... + PcProtocolAny PcProtocol = "Any" + // PcProtocolTCP ... + PcProtocolTCP PcProtocol = "TCP" + // PcProtocolUDP ... + PcProtocolUDP PcProtocol = "UDP" +) + +// PossiblePcProtocolValues returns an array of possible values for the PcProtocol const type. +func PossiblePcProtocolValues() []PcProtocol { + return []PcProtocol{PcProtocolAny, PcProtocolTCP, PcProtocolUDP} +} + +// PcStatus enumerates the values for pc status. +type PcStatus string + +const ( + // PcStatusError ... + PcStatusError PcStatus = "Error" + // PcStatusNotStarted ... + PcStatusNotStarted PcStatus = "NotStarted" + // PcStatusRunning ... + PcStatusRunning PcStatus = "Running" + // PcStatusStopped ... + PcStatusStopped PcStatus = "Stopped" + // PcStatusUnknown ... + PcStatusUnknown PcStatus = "Unknown" +) + +// PossiblePcStatusValues returns an array of possible values for the PcStatus const type. +func PossiblePcStatusValues() []PcStatus { + return []PcStatus{PcStatusError, PcStatusNotStarted, PcStatusRunning, PcStatusStopped, PcStatusUnknown} +} + +// PfsGroup enumerates the values for pfs group. +type PfsGroup string + +const ( + // PfsGroupECP256 ... + PfsGroupECP256 PfsGroup = "ECP256" + // PfsGroupECP384 ... + PfsGroupECP384 PfsGroup = "ECP384" + // PfsGroupNone ... + PfsGroupNone PfsGroup = "None" + // PfsGroupPFS1 ... + PfsGroupPFS1 PfsGroup = "PFS1" + // PfsGroupPFS2 ... + PfsGroupPFS2 PfsGroup = "PFS2" + // PfsGroupPFS2048 ... + PfsGroupPFS2048 PfsGroup = "PFS2048" + // PfsGroupPFS24 ... + PfsGroupPFS24 PfsGroup = "PFS24" +) + +// PossiblePfsGroupValues returns an array of possible values for the PfsGroup const type. +func PossiblePfsGroupValues() []PfsGroup { + return []PfsGroup{PfsGroupECP256, PfsGroupECP384, PfsGroupNone, PfsGroupPFS1, PfsGroupPFS2, PfsGroupPFS2048, PfsGroupPFS24} +} + +// ProbeProtocol enumerates the values for probe protocol. +type ProbeProtocol string + +const ( + // ProbeProtocolHTTP ... + ProbeProtocolHTTP ProbeProtocol = "Http" + // ProbeProtocolTCP ... + ProbeProtocolTCP ProbeProtocol = "Tcp" +) + +// PossibleProbeProtocolValues returns an array of possible values for the ProbeProtocol const type. +func PossibleProbeProtocolValues() []ProbeProtocol { + return []ProbeProtocol{ProbeProtocolHTTP, ProbeProtocolTCP} +} + +// ProcessorArchitecture enumerates the values for processor architecture. +type ProcessorArchitecture string + +const ( + // Amd64 ... + Amd64 ProcessorArchitecture = "Amd64" + // X86 ... 
+ X86 ProcessorArchitecture = "X86" +) + +// PossibleProcessorArchitectureValues returns an array of possible values for the ProcessorArchitecture const type. +func PossibleProcessorArchitectureValues() []ProcessorArchitecture { + return []ProcessorArchitecture{Amd64, X86} +} + +// Protocol enumerates the values for protocol. +type Protocol string + +const ( + // ProtocolTCP ... + ProtocolTCP Protocol = "TCP" + // ProtocolUDP ... + ProtocolUDP Protocol = "UDP" +) + +// PossibleProtocolValues returns an array of possible values for the Protocol const type. +func PossibleProtocolValues() []Protocol { + return []Protocol{ProtocolTCP, ProtocolUDP} +} + +// ProvisioningState enumerates the values for provisioning state. +type ProvisioningState string + +const ( + // Deleting ... + Deleting ProvisioningState = "Deleting" + // Failed ... + Failed ProvisioningState = "Failed" + // Succeeded ... + Succeeded ProvisioningState = "Succeeded" + // Updating ... + Updating ProvisioningState = "Updating" +) + +// PossibleProvisioningStateValues returns an array of possible values for the ProvisioningState const type. +func PossibleProvisioningStateValues() []ProvisioningState { + return []ProvisioningState{Deleting, Failed, Succeeded, Updating} +} + +// PublicIPAddressSkuName enumerates the values for public ip address sku name. +type PublicIPAddressSkuName string + +const ( + // PublicIPAddressSkuNameBasic ... + PublicIPAddressSkuNameBasic PublicIPAddressSkuName = "Basic" + // PublicIPAddressSkuNameStandard ... + PublicIPAddressSkuNameStandard PublicIPAddressSkuName = "Standard" +) + +// PossiblePublicIPAddressSkuNameValues returns an array of possible values for the PublicIPAddressSkuName const type. +func PossiblePublicIPAddressSkuNameValues() []PublicIPAddressSkuName { + return []PublicIPAddressSkuName{PublicIPAddressSkuNameBasic, PublicIPAddressSkuNameStandard} +} + +// RouteNextHopType enumerates the values for route next hop type. +type RouteNextHopType string + +const ( + // RouteNextHopTypeInternet ... + RouteNextHopTypeInternet RouteNextHopType = "Internet" + // RouteNextHopTypeNone ... + RouteNextHopTypeNone RouteNextHopType = "None" + // RouteNextHopTypeVirtualAppliance ... + RouteNextHopTypeVirtualAppliance RouteNextHopType = "VirtualAppliance" + // RouteNextHopTypeVirtualNetworkGateway ... + RouteNextHopTypeVirtualNetworkGateway RouteNextHopType = "VirtualNetworkGateway" + // RouteNextHopTypeVnetLocal ... + RouteNextHopTypeVnetLocal RouteNextHopType = "VnetLocal" +) + +// PossibleRouteNextHopTypeValues returns an array of possible values for the RouteNextHopType const type. +func PossibleRouteNextHopTypeValues() []RouteNextHopType { + return []RouteNextHopType{RouteNextHopTypeInternet, RouteNextHopTypeNone, RouteNextHopTypeVirtualAppliance, RouteNextHopTypeVirtualNetworkGateway, RouteNextHopTypeVnetLocal} +} + +// SecurityRuleAccess enumerates the values for security rule access. +type SecurityRuleAccess string + +const ( + // SecurityRuleAccessAllow ... + SecurityRuleAccessAllow SecurityRuleAccess = "Allow" + // SecurityRuleAccessDeny ... + SecurityRuleAccessDeny SecurityRuleAccess = "Deny" +) + +// PossibleSecurityRuleAccessValues returns an array of possible values for the SecurityRuleAccess const type. +func PossibleSecurityRuleAccessValues() []SecurityRuleAccess { + return []SecurityRuleAccess{SecurityRuleAccessAllow, SecurityRuleAccessDeny} +} + +// SecurityRuleDirection enumerates the values for security rule direction. 
+type SecurityRuleDirection string + +const ( + // SecurityRuleDirectionInbound ... + SecurityRuleDirectionInbound SecurityRuleDirection = "Inbound" + // SecurityRuleDirectionOutbound ... + SecurityRuleDirectionOutbound SecurityRuleDirection = "Outbound" +) + +// PossibleSecurityRuleDirectionValues returns an array of possible values for the SecurityRuleDirection const type. +func PossibleSecurityRuleDirectionValues() []SecurityRuleDirection { + return []SecurityRuleDirection{SecurityRuleDirectionInbound, SecurityRuleDirectionOutbound} +} + +// SecurityRuleProtocol enumerates the values for security rule protocol. +type SecurityRuleProtocol string + +const ( + // SecurityRuleProtocolAsterisk ... + SecurityRuleProtocolAsterisk SecurityRuleProtocol = "*" + // SecurityRuleProtocolTCP ... + SecurityRuleProtocolTCP SecurityRuleProtocol = "Tcp" + // SecurityRuleProtocolUDP ... + SecurityRuleProtocolUDP SecurityRuleProtocol = "Udp" +) + +// PossibleSecurityRuleProtocolValues returns an array of possible values for the SecurityRuleProtocol const type. +func PossibleSecurityRuleProtocolValues() []SecurityRuleProtocol { + return []SecurityRuleProtocol{SecurityRuleProtocolAsterisk, SecurityRuleProtocolTCP, SecurityRuleProtocolUDP} +} + +// ServiceProviderProvisioningState enumerates the values for service provider provisioning state. +type ServiceProviderProvisioningState string + +const ( + // Deprovisioning ... + Deprovisioning ServiceProviderProvisioningState = "Deprovisioning" + // NotProvisioned ... + NotProvisioned ServiceProviderProvisioningState = "NotProvisioned" + // Provisioned ... + Provisioned ServiceProviderProvisioningState = "Provisioned" + // Provisioning ... + Provisioning ServiceProviderProvisioningState = "Provisioning" +) + +// PossibleServiceProviderProvisioningStateValues returns an array of possible values for the ServiceProviderProvisioningState const type. +func PossibleServiceProviderProvisioningStateValues() []ServiceProviderProvisioningState { + return []ServiceProviderProvisioningState{Deprovisioning, NotProvisioned, Provisioned, Provisioning} +} + +// Severity enumerates the values for severity. +type Severity string + +const ( + // SeverityError ... + SeverityError Severity = "Error" + // SeverityWarning ... + SeverityWarning Severity = "Warning" +) + +// PossibleSeverityValues returns an array of possible values for the Severity const type. +func PossibleSeverityValues() []Severity { + return []Severity{SeverityError, SeverityWarning} +} + +// TransportProtocol enumerates the values for transport protocol. +type TransportProtocol string + +const ( + // TransportProtocolAll ... + TransportProtocolAll TransportProtocol = "All" + // TransportProtocolTCP ... + TransportProtocolTCP TransportProtocol = "Tcp" + // TransportProtocolUDP ... + TransportProtocolUDP TransportProtocol = "Udp" +) + +// PossibleTransportProtocolValues returns an array of possible values for the TransportProtocol const type. +func PossibleTransportProtocolValues() []TransportProtocol { + return []TransportProtocol{TransportProtocolAll, TransportProtocolTCP, TransportProtocolUDP} +} + +// VirtualNetworkGatewayConnectionStatus enumerates the values for virtual network gateway connection status. +type VirtualNetworkGatewayConnectionStatus string + +const ( + // VirtualNetworkGatewayConnectionStatusConnected ... + VirtualNetworkGatewayConnectionStatusConnected VirtualNetworkGatewayConnectionStatus = "Connected" + // VirtualNetworkGatewayConnectionStatusConnecting ... 
+ VirtualNetworkGatewayConnectionStatusConnecting VirtualNetworkGatewayConnectionStatus = "Connecting" + // VirtualNetworkGatewayConnectionStatusNotConnected ... + VirtualNetworkGatewayConnectionStatusNotConnected VirtualNetworkGatewayConnectionStatus = "NotConnected" + // VirtualNetworkGatewayConnectionStatusUnknown ... + VirtualNetworkGatewayConnectionStatusUnknown VirtualNetworkGatewayConnectionStatus = "Unknown" +) + +// PossibleVirtualNetworkGatewayConnectionStatusValues returns an array of possible values for the VirtualNetworkGatewayConnectionStatus const type. +func PossibleVirtualNetworkGatewayConnectionStatusValues() []VirtualNetworkGatewayConnectionStatus { + return []VirtualNetworkGatewayConnectionStatus{VirtualNetworkGatewayConnectionStatusConnected, VirtualNetworkGatewayConnectionStatusConnecting, VirtualNetworkGatewayConnectionStatusNotConnected, VirtualNetworkGatewayConnectionStatusUnknown} +} + +// VirtualNetworkGatewayConnectionType enumerates the values for virtual network gateway connection type. +type VirtualNetworkGatewayConnectionType string + +const ( + // ExpressRoute ... + ExpressRoute VirtualNetworkGatewayConnectionType = "ExpressRoute" + // IPsec ... + IPsec VirtualNetworkGatewayConnectionType = "IPsec" + // Vnet2Vnet ... + Vnet2Vnet VirtualNetworkGatewayConnectionType = "Vnet2Vnet" + // VPNClient ... + VPNClient VirtualNetworkGatewayConnectionType = "VPNClient" +) + +// PossibleVirtualNetworkGatewayConnectionTypeValues returns an array of possible values for the VirtualNetworkGatewayConnectionType const type. +func PossibleVirtualNetworkGatewayConnectionTypeValues() []VirtualNetworkGatewayConnectionType { + return []VirtualNetworkGatewayConnectionType{ExpressRoute, IPsec, Vnet2Vnet, VPNClient} +} + +// VirtualNetworkGatewaySkuName enumerates the values for virtual network gateway sku name. +type VirtualNetworkGatewaySkuName string + +const ( + // VirtualNetworkGatewaySkuNameBasic ... + VirtualNetworkGatewaySkuNameBasic VirtualNetworkGatewaySkuName = "Basic" + // VirtualNetworkGatewaySkuNameHighPerformance ... + VirtualNetworkGatewaySkuNameHighPerformance VirtualNetworkGatewaySkuName = "HighPerformance" + // VirtualNetworkGatewaySkuNameStandard ... + VirtualNetworkGatewaySkuNameStandard VirtualNetworkGatewaySkuName = "Standard" + // VirtualNetworkGatewaySkuNameUltraPerformance ... + VirtualNetworkGatewaySkuNameUltraPerformance VirtualNetworkGatewaySkuName = "UltraPerformance" + // VirtualNetworkGatewaySkuNameVpnGw1 ... + VirtualNetworkGatewaySkuNameVpnGw1 VirtualNetworkGatewaySkuName = "VpnGw1" + // VirtualNetworkGatewaySkuNameVpnGw2 ... + VirtualNetworkGatewaySkuNameVpnGw2 VirtualNetworkGatewaySkuName = "VpnGw2" + // VirtualNetworkGatewaySkuNameVpnGw3 ... + VirtualNetworkGatewaySkuNameVpnGw3 VirtualNetworkGatewaySkuName = "VpnGw3" +) + +// PossibleVirtualNetworkGatewaySkuNameValues returns an array of possible values for the VirtualNetworkGatewaySkuName const type. +func PossibleVirtualNetworkGatewaySkuNameValues() []VirtualNetworkGatewaySkuName { + return []VirtualNetworkGatewaySkuName{VirtualNetworkGatewaySkuNameBasic, VirtualNetworkGatewaySkuNameHighPerformance, VirtualNetworkGatewaySkuNameStandard, VirtualNetworkGatewaySkuNameUltraPerformance, VirtualNetworkGatewaySkuNameVpnGw1, VirtualNetworkGatewaySkuNameVpnGw2, VirtualNetworkGatewaySkuNameVpnGw3} +} + +// VirtualNetworkGatewaySkuTier enumerates the values for virtual network gateway sku tier. +type VirtualNetworkGatewaySkuTier string + +const ( + // VirtualNetworkGatewaySkuTierBasic ... 
+ VirtualNetworkGatewaySkuTierBasic VirtualNetworkGatewaySkuTier = "Basic" + // VirtualNetworkGatewaySkuTierHighPerformance ... + VirtualNetworkGatewaySkuTierHighPerformance VirtualNetworkGatewaySkuTier = "HighPerformance" + // VirtualNetworkGatewaySkuTierStandard ... + VirtualNetworkGatewaySkuTierStandard VirtualNetworkGatewaySkuTier = "Standard" + // VirtualNetworkGatewaySkuTierUltraPerformance ... + VirtualNetworkGatewaySkuTierUltraPerformance VirtualNetworkGatewaySkuTier = "UltraPerformance" + // VirtualNetworkGatewaySkuTierVpnGw1 ... + VirtualNetworkGatewaySkuTierVpnGw1 VirtualNetworkGatewaySkuTier = "VpnGw1" + // VirtualNetworkGatewaySkuTierVpnGw2 ... + VirtualNetworkGatewaySkuTierVpnGw2 VirtualNetworkGatewaySkuTier = "VpnGw2" + // VirtualNetworkGatewaySkuTierVpnGw3 ... + VirtualNetworkGatewaySkuTierVpnGw3 VirtualNetworkGatewaySkuTier = "VpnGw3" +) + +// PossibleVirtualNetworkGatewaySkuTierValues returns an array of possible values for the VirtualNetworkGatewaySkuTier const type. +func PossibleVirtualNetworkGatewaySkuTierValues() []VirtualNetworkGatewaySkuTier { + return []VirtualNetworkGatewaySkuTier{VirtualNetworkGatewaySkuTierBasic, VirtualNetworkGatewaySkuTierHighPerformance, VirtualNetworkGatewaySkuTierStandard, VirtualNetworkGatewaySkuTierUltraPerformance, VirtualNetworkGatewaySkuTierVpnGw1, VirtualNetworkGatewaySkuTierVpnGw2, VirtualNetworkGatewaySkuTierVpnGw3} +} + +// VirtualNetworkGatewayType enumerates the values for virtual network gateway type. +type VirtualNetworkGatewayType string + +const ( + // VirtualNetworkGatewayTypeExpressRoute ... + VirtualNetworkGatewayTypeExpressRoute VirtualNetworkGatewayType = "ExpressRoute" + // VirtualNetworkGatewayTypeVpn ... + VirtualNetworkGatewayTypeVpn VirtualNetworkGatewayType = "Vpn" +) + +// PossibleVirtualNetworkGatewayTypeValues returns an array of possible values for the VirtualNetworkGatewayType const type. +func PossibleVirtualNetworkGatewayTypeValues() []VirtualNetworkGatewayType { + return []VirtualNetworkGatewayType{VirtualNetworkGatewayTypeExpressRoute, VirtualNetworkGatewayTypeVpn} +} + +// VirtualNetworkPeeringState enumerates the values for virtual network peering state. +type VirtualNetworkPeeringState string + +const ( + // Connected ... + Connected VirtualNetworkPeeringState = "Connected" + // Disconnected ... + Disconnected VirtualNetworkPeeringState = "Disconnected" + // Initiated ... + Initiated VirtualNetworkPeeringState = "Initiated" +) + +// PossibleVirtualNetworkPeeringStateValues returns an array of possible values for the VirtualNetworkPeeringState const type. +func PossibleVirtualNetworkPeeringStateValues() []VirtualNetworkPeeringState { + return []VirtualNetworkPeeringState{Connected, Disconnected, Initiated} +} + +// VpnClientProtocol enumerates the values for vpn client protocol. +type VpnClientProtocol string + +const ( + // IkeV2 ... + IkeV2 VpnClientProtocol = "IkeV2" + // SSTP ... + SSTP VpnClientProtocol = "SSTP" +) + +// PossibleVpnClientProtocolValues returns an array of possible values for the VpnClientProtocol const type. +func PossibleVpnClientProtocolValues() []VpnClientProtocol { + return []VpnClientProtocol{IkeV2, SSTP} +} + +// VpnType enumerates the values for vpn type. +type VpnType string + +const ( + // PolicyBased ... + PolicyBased VpnType = "PolicyBased" + // RouteBased ... + RouteBased VpnType = "RouteBased" +) + +// PossibleVpnTypeValues returns an array of possible values for the VpnType const type. 
+func PossibleVpnTypeValues() []VpnType { + return []VpnType{PolicyBased, RouteBased} +} + +// AddressSpace addressSpace contains an array of IP address ranges that can be used by subnets of the virtual +// network. +type AddressSpace struct { + // AddressPrefixes - A list of address blocks reserved for this virtual network in CIDR notation. + AddressPrefixes *[]string `json:"addressPrefixes,omitempty"` +} + +// ApplicationGateway application gateway resource +type ApplicationGateway struct { + autorest.Response `json:"-"` + *ApplicationGatewayPropertiesFormat `json:"properties,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for ApplicationGateway. +func (ag ApplicationGateway) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ag.ApplicationGatewayPropertiesFormat != nil { + objectMap["properties"] = ag.ApplicationGatewayPropertiesFormat + } + if ag.Etag != nil { + objectMap["etag"] = ag.Etag + } + if ag.ID != nil { + objectMap["id"] = ag.ID + } + if ag.Name != nil { + objectMap["name"] = ag.Name + } + if ag.Type != nil { + objectMap["type"] = ag.Type + } + if ag.Location != nil { + objectMap["location"] = ag.Location + } + if ag.Tags != nil { + objectMap["tags"] = ag.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGateway struct. +func (ag *ApplicationGateway) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayPropertiesFormat ApplicationGatewayPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayPropertiesFormat) + if err != nil { + return err + } + ag.ApplicationGatewayPropertiesFormat = &applicationGatewayPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + ag.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + ag.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + ag.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + ag.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + ag.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + ag.Tags = tags + } + } + } + + return nil +} + +// ApplicationGatewayAuthenticationCertificate authentication certificates of an application gateway. +type ApplicationGatewayAuthenticationCertificate struct { + *ApplicationGatewayAuthenticationCertificatePropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. 
This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayAuthenticationCertificate. +func (agac ApplicationGatewayAuthenticationCertificate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agac.ApplicationGatewayAuthenticationCertificatePropertiesFormat != nil { + objectMap["properties"] = agac.ApplicationGatewayAuthenticationCertificatePropertiesFormat + } + if agac.Name != nil { + objectMap["name"] = agac.Name + } + if agac.Etag != nil { + objectMap["etag"] = agac.Etag + } + if agac.Type != nil { + objectMap["type"] = agac.Type + } + if agac.ID != nil { + objectMap["id"] = agac.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayAuthenticationCertificate struct. +func (agac *ApplicationGatewayAuthenticationCertificate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayAuthenticationCertificatePropertiesFormat ApplicationGatewayAuthenticationCertificatePropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayAuthenticationCertificatePropertiesFormat) + if err != nil { + return err + } + agac.ApplicationGatewayAuthenticationCertificatePropertiesFormat = &applicationGatewayAuthenticationCertificatePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agac.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agac.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agac.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agac.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayAuthenticationCertificatePropertiesFormat authentication certificates properties of an +// application gateway. +type ApplicationGatewayAuthenticationCertificatePropertiesFormat struct { + // Data - Certificate public data. + Data *string `json:"data,omitempty"` + // ProvisioningState - Provisioning state of the authentication certificate resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayAvailableSslOptions response for ApplicationGatewayAvailableSslOptions API service call. +type ApplicationGatewayAvailableSslOptions struct { + autorest.Response `json:"-"` + *ApplicationGatewayAvailableSslOptionsPropertiesFormat `json:"properties,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. 
+ Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayAvailableSslOptions. +func (agaso ApplicationGatewayAvailableSslOptions) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agaso.ApplicationGatewayAvailableSslOptionsPropertiesFormat != nil { + objectMap["properties"] = agaso.ApplicationGatewayAvailableSslOptionsPropertiesFormat + } + if agaso.ID != nil { + objectMap["id"] = agaso.ID + } + if agaso.Name != nil { + objectMap["name"] = agaso.Name + } + if agaso.Type != nil { + objectMap["type"] = agaso.Type + } + if agaso.Location != nil { + objectMap["location"] = agaso.Location + } + if agaso.Tags != nil { + objectMap["tags"] = agaso.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayAvailableSslOptions struct. +func (agaso *ApplicationGatewayAvailableSslOptions) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayAvailableSslOptionsPropertiesFormat ApplicationGatewayAvailableSslOptionsPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayAvailableSslOptionsPropertiesFormat) + if err != nil { + return err + } + agaso.ApplicationGatewayAvailableSslOptionsPropertiesFormat = &applicationGatewayAvailableSslOptionsPropertiesFormat + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agaso.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agaso.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agaso.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + agaso.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + agaso.Tags = tags + } + } + } + + return nil +} + +// ApplicationGatewayAvailableSslOptionsPropertiesFormat properties of ApplicationGatewayAvailableSslOptions +type ApplicationGatewayAvailableSslOptionsPropertiesFormat struct { + // PredefinedPolicies - List of available Ssl predefined policy. + PredefinedPolicies *[]SubResource `json:"predefinedPolicies,omitempty"` + // DefaultPolicy - Name of the Ssl predefined policy applied by default to application gateway. Possible values include: 'AppGwSslPolicy20150501', 'AppGwSslPolicy20170401', 'AppGwSslPolicy20170401S' + DefaultPolicy ApplicationGatewaySslPolicyName `json:"defaultPolicy,omitempty"` + // AvailableCipherSuites - List of available Ssl cipher suites. + AvailableCipherSuites *[]ApplicationGatewaySslCipherSuite `json:"availableCipherSuites,omitempty"` + // AvailableProtocols - List of available Ssl protocols. + AvailableProtocols *[]ApplicationGatewaySslProtocol `json:"availableProtocols,omitempty"` +} + +// ApplicationGatewayAvailableSslPredefinedPolicies response for ApplicationGatewayAvailableSslOptions API service +// call. +type ApplicationGatewayAvailableSslPredefinedPolicies struct { + autorest.Response `json:"-"` + // Value - List of available Ssl predefined policy. 
+ Value *[]ApplicationGatewaySslPredefinedPolicy `json:"value,omitempty"` + // NextLink - URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ApplicationGatewayAvailableSslPredefinedPoliciesIterator provides access to a complete listing of +// ApplicationGatewaySslPredefinedPolicy values. +type ApplicationGatewayAvailableSslPredefinedPoliciesIterator struct { + i int + page ApplicationGatewayAvailableSslPredefinedPoliciesPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ApplicationGatewayAvailableSslPredefinedPoliciesIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ApplicationGatewayAvailableSslPredefinedPoliciesIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ApplicationGatewayAvailableSslPredefinedPoliciesIterator) Response() ApplicationGatewayAvailableSslPredefinedPolicies { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ApplicationGatewayAvailableSslPredefinedPoliciesIterator) Value() ApplicationGatewaySslPredefinedPolicy { + if !iter.page.NotDone() { + return ApplicationGatewaySslPredefinedPolicy{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (agaspp ApplicationGatewayAvailableSslPredefinedPolicies) IsEmpty() bool { + return agaspp.Value == nil || len(*agaspp.Value) == 0 +} + +// applicationGatewayAvailableSslPredefinedPoliciesPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (agaspp ApplicationGatewayAvailableSslPredefinedPolicies) applicationGatewayAvailableSslPredefinedPoliciesPreparer() (*http.Request, error) { + if agaspp.NextLink == nil || len(to.String(agaspp.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(agaspp.NextLink))) +} + +// ApplicationGatewayAvailableSslPredefinedPoliciesPage contains a page of ApplicationGatewaySslPredefinedPolicy +// values. +type ApplicationGatewayAvailableSslPredefinedPoliciesPage struct { + fn func(ApplicationGatewayAvailableSslPredefinedPolicies) (ApplicationGatewayAvailableSslPredefinedPolicies, error) + agaspp ApplicationGatewayAvailableSslPredefinedPolicies +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ApplicationGatewayAvailableSslPredefinedPoliciesPage) Next() error { + next, err := page.fn(page.agaspp) + if err != nil { + return err + } + page.agaspp = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ApplicationGatewayAvailableSslPredefinedPoliciesPage) NotDone() bool { + return !page.agaspp.IsEmpty() +} + +// Response returns the raw server response from the last page request. 
+func (page ApplicationGatewayAvailableSslPredefinedPoliciesPage) Response() ApplicationGatewayAvailableSslPredefinedPolicies { + return page.agaspp +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ApplicationGatewayAvailableSslPredefinedPoliciesPage) Values() []ApplicationGatewaySslPredefinedPolicy { + if page.agaspp.IsEmpty() { + return nil + } + return *page.agaspp.Value +} + +// ApplicationGatewayAvailableWafRuleSetsResult response for ApplicationGatewayAvailableWafRuleSets API service +// call. +type ApplicationGatewayAvailableWafRuleSetsResult struct { + autorest.Response `json:"-"` + // Value - The list of application gateway rule sets. + Value *[]ApplicationGatewayFirewallRuleSet `json:"value,omitempty"` +} + +// ApplicationGatewayBackendAddress backend address of an application gateway. +type ApplicationGatewayBackendAddress struct { + // Fqdn - Fully qualified domain name (FQDN). + Fqdn *string `json:"fqdn,omitempty"` + // IPAddress - IP address + IPAddress *string `json:"ipAddress,omitempty"` +} + +// ApplicationGatewayBackendAddressPool backend Address Pool of an application gateway. +type ApplicationGatewayBackendAddressPool struct { + *ApplicationGatewayBackendAddressPoolPropertiesFormat `json:"properties,omitempty"` + // Name - Resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayBackendAddressPool. +func (agbap ApplicationGatewayBackendAddressPool) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agbap.ApplicationGatewayBackendAddressPoolPropertiesFormat != nil { + objectMap["properties"] = agbap.ApplicationGatewayBackendAddressPoolPropertiesFormat + } + if agbap.Name != nil { + objectMap["name"] = agbap.Name + } + if agbap.Etag != nil { + objectMap["etag"] = agbap.Etag + } + if agbap.Type != nil { + objectMap["type"] = agbap.Type + } + if agbap.ID != nil { + objectMap["id"] = agbap.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayBackendAddressPool struct. 
+func (agbap *ApplicationGatewayBackendAddressPool) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayBackendAddressPoolPropertiesFormat ApplicationGatewayBackendAddressPoolPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayBackendAddressPoolPropertiesFormat) + if err != nil { + return err + } + agbap.ApplicationGatewayBackendAddressPoolPropertiesFormat = &applicationGatewayBackendAddressPoolPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agbap.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agbap.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agbap.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agbap.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayBackendAddressPoolPropertiesFormat properties of Backend Address Pool of an application +// gateway. +type ApplicationGatewayBackendAddressPoolPropertiesFormat struct { + // BackendIPConfigurations - Collection of references to IPs defined in network interfaces. + BackendIPConfigurations *[]InterfaceIPConfiguration `json:"backendIPConfigurations,omitempty"` + // BackendAddresses - Backend addresses + BackendAddresses *[]ApplicationGatewayBackendAddress `json:"backendAddresses,omitempty"` + // ProvisioningState - Provisioning state of the backend address pool resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayBackendHealth list of ApplicationGatewayBackendHealthPool resources. +type ApplicationGatewayBackendHealth struct { + autorest.Response `json:"-"` + BackendAddressPools *[]ApplicationGatewayBackendHealthPool `json:"backendAddressPools,omitempty"` +} + +// ApplicationGatewayBackendHealthHTTPSettings application gateway BackendHealthHttp settings. +type ApplicationGatewayBackendHealthHTTPSettings struct { + // BackendHTTPSettings - Reference of an ApplicationGatewayBackendHttpSettings resource. + BackendHTTPSettings *ApplicationGatewayBackendHTTPSettings `json:"backendHttpSettings,omitempty"` + // Servers - List of ApplicationGatewayBackendHealthServer resources. + Servers *[]ApplicationGatewayBackendHealthServer `json:"servers,omitempty"` +} + +// ApplicationGatewayBackendHealthPool application gateway BackendHealth pool. +type ApplicationGatewayBackendHealthPool struct { + // BackendAddressPool - Reference of an ApplicationGatewayBackendAddressPool resource. + BackendAddressPool *ApplicationGatewayBackendAddressPool `json:"backendAddressPool,omitempty"` + // BackendHTTPSettingsCollection - List of ApplicationGatewayBackendHealthHttpSettings resources. + BackendHTTPSettingsCollection *[]ApplicationGatewayBackendHealthHTTPSettings `json:"backendHttpSettingsCollection,omitempty"` +} + +// ApplicationGatewayBackendHealthServer application gateway backendhealth http settings. +type ApplicationGatewayBackendHealthServer struct { + // Address - IP address or FQDN of backend server. 
+ Address *string `json:"address,omitempty"` + // IPConfiguration - Reference of IP configuration of backend server. + IPConfiguration *InterfaceIPConfiguration `json:"ipConfiguration,omitempty"` + // Health - Health of backend server. Possible values include: 'Unknown', 'Up', 'Down', 'Partial', 'Draining' + Health ApplicationGatewayBackendHealthServerHealth `json:"health,omitempty"` +} + +// ApplicationGatewayBackendHTTPSettings backend address pool settings of an application gateway. +type ApplicationGatewayBackendHTTPSettings struct { + *ApplicationGatewayBackendHTTPSettingsPropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayBackendHTTPSettings. +func (agbhs ApplicationGatewayBackendHTTPSettings) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agbhs.ApplicationGatewayBackendHTTPSettingsPropertiesFormat != nil { + objectMap["properties"] = agbhs.ApplicationGatewayBackendHTTPSettingsPropertiesFormat + } + if agbhs.Name != nil { + objectMap["name"] = agbhs.Name + } + if agbhs.Etag != nil { + objectMap["etag"] = agbhs.Etag + } + if agbhs.Type != nil { + objectMap["type"] = agbhs.Type + } + if agbhs.ID != nil { + objectMap["id"] = agbhs.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayBackendHTTPSettings struct. +func (agbhs *ApplicationGatewayBackendHTTPSettings) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayBackendHTTPSettingsPropertiesFormat ApplicationGatewayBackendHTTPSettingsPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayBackendHTTPSettingsPropertiesFormat) + if err != nil { + return err + } + agbhs.ApplicationGatewayBackendHTTPSettingsPropertiesFormat = &applicationGatewayBackendHTTPSettingsPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agbhs.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agbhs.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agbhs.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agbhs.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayBackendHTTPSettingsPropertiesFormat properties of Backend address pool settings of an +// application gateway. +type ApplicationGatewayBackendHTTPSettingsPropertiesFormat struct { + // Port - Port + Port *int32 `json:"port,omitempty"` + // Protocol - Protocol. Possible values include: 'HTTP', 'HTTPS' + Protocol ApplicationGatewayProtocol `json:"protocol,omitempty"` + // CookieBasedAffinity - Cookie based affinity. 
Possible values include: 'Enabled', 'Disabled' + CookieBasedAffinity ApplicationGatewayCookieBasedAffinity `json:"cookieBasedAffinity,omitempty"` + // RequestTimeout - Request timeout in seconds. Application Gateway will fail the request if response is not received within RequestTimeout. Acceptable values are from 1 second to 86400 seconds. + RequestTimeout *int32 `json:"requestTimeout,omitempty"` + // Probe - Probe resource of an application gateway. + Probe *SubResource `json:"probe,omitempty"` + // AuthenticationCertificates - Array of references to application gateway authentication certificates. + AuthenticationCertificates *[]SubResource `json:"authenticationCertificates,omitempty"` + // ConnectionDraining - Connection draining of the backend http settings resource. + ConnectionDraining *ApplicationGatewayConnectionDraining `json:"connectionDraining,omitempty"` + // HostName - Host header to be sent to the backend servers. + HostName *string `json:"hostName,omitempty"` + // PickHostNameFromBackendAddress - Whether to pick host header should be picked from the host name of the backend server. Default value is false. + PickHostNameFromBackendAddress *bool `json:"pickHostNameFromBackendAddress,omitempty"` + // AffinityCookieName - Cookie name to use for the affinity cookie. + AffinityCookieName *string `json:"affinityCookieName,omitempty"` + // ProbeEnabled - Whether the probe is enabled. Default value is false. + ProbeEnabled *bool `json:"probeEnabled,omitempty"` + // Path - Path which should be used as a prefix for all HTTP requests. Null means no path will be prefixed. Default value is null. + Path *string `json:"path,omitempty"` + // ProvisioningState - Provisioning state of the backend http settings resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayConnectionDraining connection draining allows open connections to a backend server to be +// active for a specified time after the backend server got removed from the configuration. +type ApplicationGatewayConnectionDraining struct { + // Enabled - Whether connection draining is enabled or not. + Enabled *bool `json:"enabled,omitempty"` + // DrainTimeoutInSec - The number of seconds connection draining is active. Acceptable values are from 1 second to 3600 seconds. + DrainTimeoutInSec *int32 `json:"drainTimeoutInSec,omitempty"` +} + +// ApplicationGatewayFirewallDisabledRuleGroup allows to disable rules within a rule group or an entire rule group. +type ApplicationGatewayFirewallDisabledRuleGroup struct { + // RuleGroupName - The name of the rule group that will be disabled. + RuleGroupName *string `json:"ruleGroupName,omitempty"` + // Rules - The list of rules that will be disabled. If null, all rules of the rule group will be disabled. + Rules *[]int32 `json:"rules,omitempty"` +} + +// ApplicationGatewayFirewallRule a web application firewall rule. +type ApplicationGatewayFirewallRule struct { + // RuleID - The identifier of the web application firewall rule. + RuleID *int32 `json:"ruleId,omitempty"` + // Description - The description of the web application firewall rule. + Description *string `json:"description,omitempty"` +} + +// ApplicationGatewayFirewallRuleGroup a web application firewall rule group. +type ApplicationGatewayFirewallRuleGroup struct { + // RuleGroupName - The name of the web application firewall rule group. 
+ RuleGroupName *string `json:"ruleGroupName,omitempty"` + // Description - The description of the web application firewall rule group. + Description *string `json:"description,omitempty"` + // Rules - The rules of the web application firewall rule group. + Rules *[]ApplicationGatewayFirewallRule `json:"rules,omitempty"` +} + +// ApplicationGatewayFirewallRuleSet a web application firewall rule set. +type ApplicationGatewayFirewallRuleSet struct { + *ApplicationGatewayFirewallRuleSetPropertiesFormat `json:"properties,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayFirewallRuleSet. +func (agfrs ApplicationGatewayFirewallRuleSet) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agfrs.ApplicationGatewayFirewallRuleSetPropertiesFormat != nil { + objectMap["properties"] = agfrs.ApplicationGatewayFirewallRuleSetPropertiesFormat + } + if agfrs.ID != nil { + objectMap["id"] = agfrs.ID + } + if agfrs.Name != nil { + objectMap["name"] = agfrs.Name + } + if agfrs.Type != nil { + objectMap["type"] = agfrs.Type + } + if agfrs.Location != nil { + objectMap["location"] = agfrs.Location + } + if agfrs.Tags != nil { + objectMap["tags"] = agfrs.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayFirewallRuleSet struct. +func (agfrs *ApplicationGatewayFirewallRuleSet) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayFirewallRuleSetPropertiesFormat ApplicationGatewayFirewallRuleSetPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayFirewallRuleSetPropertiesFormat) + if err != nil { + return err + } + agfrs.ApplicationGatewayFirewallRuleSetPropertiesFormat = &applicationGatewayFirewallRuleSetPropertiesFormat + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agfrs.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agfrs.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agfrs.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + agfrs.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + agfrs.Tags = tags + } + } + } + + return nil +} + +// ApplicationGatewayFirewallRuleSetPropertiesFormat properties of the web application firewall rule set. +type ApplicationGatewayFirewallRuleSetPropertiesFormat struct { + // ProvisioningState - The provisioning state of the web application firewall rule set. + ProvisioningState *string `json:"provisioningState,omitempty"` + // RuleSetType - The type of the web application firewall rule set. 
+ RuleSetType *string `json:"ruleSetType,omitempty"` + // RuleSetVersion - The version of the web application firewall rule set type. + RuleSetVersion *string `json:"ruleSetVersion,omitempty"` + // RuleGroups - The rule groups of the web application firewall rule set. + RuleGroups *[]ApplicationGatewayFirewallRuleGroup `json:"ruleGroups,omitempty"` +} + +// ApplicationGatewayFrontendIPConfiguration frontend IP configuration of an application gateway. +type ApplicationGatewayFrontendIPConfiguration struct { + *ApplicationGatewayFrontendIPConfigurationPropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayFrontendIPConfiguration. +func (agfic ApplicationGatewayFrontendIPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agfic.ApplicationGatewayFrontendIPConfigurationPropertiesFormat != nil { + objectMap["properties"] = agfic.ApplicationGatewayFrontendIPConfigurationPropertiesFormat + } + if agfic.Name != nil { + objectMap["name"] = agfic.Name + } + if agfic.Etag != nil { + objectMap["etag"] = agfic.Etag + } + if agfic.Type != nil { + objectMap["type"] = agfic.Type + } + if agfic.ID != nil { + objectMap["id"] = agfic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayFrontendIPConfiguration struct. +func (agfic *ApplicationGatewayFrontendIPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayFrontendIPConfigurationPropertiesFormat ApplicationGatewayFrontendIPConfigurationPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayFrontendIPConfigurationPropertiesFormat) + if err != nil { + return err + } + agfic.ApplicationGatewayFrontendIPConfigurationPropertiesFormat = &applicationGatewayFrontendIPConfigurationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agfic.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agfic.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agfic.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agfic.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayFrontendIPConfigurationPropertiesFormat properties of Frontend IP configuration of an +// application gateway. +type ApplicationGatewayFrontendIPConfigurationPropertiesFormat struct { + // PrivateIPAddress - PrivateIPAddress of the network interface IP Configuration. + PrivateIPAddress *string `json:"privateIPAddress,omitempty"` + // PrivateIPAllocationMethod - PrivateIP allocation method. 
Possible values include: 'Static', 'Dynamic' + PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` + // Subnet - Reference of the subnet resource. + Subnet *SubResource `json:"subnet,omitempty"` + // PublicIPAddress - Reference of the PublicIP resource. + PublicIPAddress *SubResource `json:"publicIPAddress,omitempty"` + // ProvisioningState - Provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayFrontendPort frontend port of an application gateway. +type ApplicationGatewayFrontendPort struct { + *ApplicationGatewayFrontendPortPropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayFrontendPort. +func (agfp ApplicationGatewayFrontendPort) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agfp.ApplicationGatewayFrontendPortPropertiesFormat != nil { + objectMap["properties"] = agfp.ApplicationGatewayFrontendPortPropertiesFormat + } + if agfp.Name != nil { + objectMap["name"] = agfp.Name + } + if agfp.Etag != nil { + objectMap["etag"] = agfp.Etag + } + if agfp.Type != nil { + objectMap["type"] = agfp.Type + } + if agfp.ID != nil { + objectMap["id"] = agfp.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayFrontendPort struct. +func (agfp *ApplicationGatewayFrontendPort) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayFrontendPortPropertiesFormat ApplicationGatewayFrontendPortPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayFrontendPortPropertiesFormat) + if err != nil { + return err + } + agfp.ApplicationGatewayFrontendPortPropertiesFormat = &applicationGatewayFrontendPortPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agfp.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agfp.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agfp.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agfp.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayFrontendPortPropertiesFormat properties of Frontend port of an application gateway. +type ApplicationGatewayFrontendPortPropertiesFormat struct { + // Port - Frontend port + Port *int32 `json:"port,omitempty"` + // ProvisioningState - Provisioning state of the frontend port resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. 
+ ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayHTTPListener http listener of an application gateway. +type ApplicationGatewayHTTPListener struct { + *ApplicationGatewayHTTPListenerPropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayHTTPListener. +func (aghl ApplicationGatewayHTTPListener) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if aghl.ApplicationGatewayHTTPListenerPropertiesFormat != nil { + objectMap["properties"] = aghl.ApplicationGatewayHTTPListenerPropertiesFormat + } + if aghl.Name != nil { + objectMap["name"] = aghl.Name + } + if aghl.Etag != nil { + objectMap["etag"] = aghl.Etag + } + if aghl.Type != nil { + objectMap["type"] = aghl.Type + } + if aghl.ID != nil { + objectMap["id"] = aghl.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayHTTPListener struct. +func (aghl *ApplicationGatewayHTTPListener) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayHTTPListenerPropertiesFormat ApplicationGatewayHTTPListenerPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayHTTPListenerPropertiesFormat) + if err != nil { + return err + } + aghl.ApplicationGatewayHTTPListenerPropertiesFormat = &applicationGatewayHTTPListenerPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + aghl.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + aghl.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + aghl.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + aghl.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayHTTPListenerPropertiesFormat properties of HTTP listener of an application gateway. +type ApplicationGatewayHTTPListenerPropertiesFormat struct { + // FrontendIPConfiguration - Frontend IP configuration resource of an application gateway. + FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` + // FrontendPort - Frontend port resource of an application gateway. + FrontendPort *SubResource `json:"frontendPort,omitempty"` + // Protocol - Protocol. Possible values include: 'HTTP', 'HTTPS' + Protocol ApplicationGatewayProtocol `json:"protocol,omitempty"` + // HostName - Host name of HTTP listener. + HostName *string `json:"hostName,omitempty"` + // SslCertificate - SSL certificate resource of an application gateway. + SslCertificate *SubResource `json:"sslCertificate,omitempty"` + // RequireServerNameIndication - Applicable only if protocol is https. Enables SNI for multi-hosting. 
+ RequireServerNameIndication *bool `json:"requireServerNameIndication,omitempty"` + // ProvisioningState - Provisioning state of the HTTP listener resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayIPConfiguration IP configuration of an application gateway. Currently 1 public and 1 private +// IP configuration is allowed. +type ApplicationGatewayIPConfiguration struct { + *ApplicationGatewayIPConfigurationPropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayIPConfiguration. +func (agic ApplicationGatewayIPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agic.ApplicationGatewayIPConfigurationPropertiesFormat != nil { + objectMap["properties"] = agic.ApplicationGatewayIPConfigurationPropertiesFormat + } + if agic.Name != nil { + objectMap["name"] = agic.Name + } + if agic.Etag != nil { + objectMap["etag"] = agic.Etag + } + if agic.Type != nil { + objectMap["type"] = agic.Type + } + if agic.ID != nil { + objectMap["id"] = agic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayIPConfiguration struct. +func (agic *ApplicationGatewayIPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayIPConfigurationPropertiesFormat ApplicationGatewayIPConfigurationPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayIPConfigurationPropertiesFormat) + if err != nil { + return err + } + agic.ApplicationGatewayIPConfigurationPropertiesFormat = &applicationGatewayIPConfigurationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agic.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agic.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agic.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agic.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayIPConfigurationPropertiesFormat properties of IP configuration of an application gateway. +type ApplicationGatewayIPConfigurationPropertiesFormat struct { + // Subnet - Reference of the subnet resource. A subnet from where application gateway gets its private address. + Subnet *SubResource `json:"subnet,omitempty"` + // ProvisioningState - Provisioning state of the application gateway subnet resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayListResult response for ListApplicationGateways API service call. 
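// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. The custom MarshalJSON/UnmarshalJSON methods above exist to
// flatten the embedded *PropertiesFormat struct into the top-level
// "properties" key expected by the Azure REST API, and to restore it when
// decoding. The helper name and the subnet ID are hypothetical placeholders.
func exampleIPConfigurationRoundTrip() (ApplicationGatewayIPConfiguration, error) {
	name := "gateway-ip-config"
	subnetID := "subnet-resource-id"
	cfg := ApplicationGatewayIPConfiguration{
		Name: &name,
		ApplicationGatewayIPConfigurationPropertiesFormat: &ApplicationGatewayIPConfigurationPropertiesFormat{
			Subnet: &SubResource{ID: &subnetID},
		},
	}
	// Encodes as {"name":"gateway-ip-config","properties":{"subnet":{"id":"subnet-resource-id"}}}.
	b, err := json.Marshal(cfg)
	if err != nil {
		return ApplicationGatewayIPConfiguration{}, err
	}
	var decoded ApplicationGatewayIPConfiguration
	// UnmarshalJSON re-populates the embedded struct from the "properties" key.
	err = json.Unmarshal(b, &decoded)
	return decoded, err
}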
+type ApplicationGatewayListResult struct { + autorest.Response `json:"-"` + // Value - List of an application gateways in a resource group. + Value *[]ApplicationGateway `json:"value,omitempty"` + // NextLink - URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ApplicationGatewayListResultIterator provides access to a complete listing of ApplicationGateway values. +type ApplicationGatewayListResultIterator struct { + i int + page ApplicationGatewayListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ApplicationGatewayListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ApplicationGatewayListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ApplicationGatewayListResultIterator) Response() ApplicationGatewayListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ApplicationGatewayListResultIterator) Value() ApplicationGateway { + if !iter.page.NotDone() { + return ApplicationGateway{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (aglr ApplicationGatewayListResult) IsEmpty() bool { + return aglr.Value == nil || len(*aglr.Value) == 0 +} + +// applicationGatewayListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (aglr ApplicationGatewayListResult) applicationGatewayListResultPreparer() (*http.Request, error) { + if aglr.NextLink == nil || len(to.String(aglr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(aglr.NextLink))) +} + +// ApplicationGatewayListResultPage contains a page of ApplicationGateway values. +type ApplicationGatewayListResultPage struct { + fn func(ApplicationGatewayListResult) (ApplicationGatewayListResult, error) + aglr ApplicationGatewayListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ApplicationGatewayListResultPage) Next() error { + next, err := page.fn(page.aglr) + if err != nil { + return err + } + page.aglr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ApplicationGatewayListResultPage) NotDone() bool { + return !page.aglr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ApplicationGatewayListResultPage) Response() ApplicationGatewayListResult { + return page.aglr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ApplicationGatewayListResultPage) Values() []ApplicationGateway { + if page.aglr.IsEmpty() { + return nil + } + return *page.aglr.Value +} + +// ApplicationGatewayPathRule path rule of URL path map of an application gateway. 
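// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. It shows the intended consumption pattern for the iterator
// defined above: NotDone reports whether a value is available, Value returns
// it, and Next advances, fetching the next page through NextLink when the
// current page is exhausted. The gatewayNames helper is hypothetical.
func gatewayNames(iter ApplicationGatewayListResultIterator) ([]string, error) {
	var names []string
	for iter.NotDone() {
		if ag := iter.Value(); ag.Name != nil {
			names = append(names, *ag.Name)
		}
		if err := iter.Next(); err != nil {
			return nil, err
		}
	}
	return names, nil
}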
+type ApplicationGatewayPathRule struct { + *ApplicationGatewayPathRulePropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayPathRule. +func (agpr ApplicationGatewayPathRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agpr.ApplicationGatewayPathRulePropertiesFormat != nil { + objectMap["properties"] = agpr.ApplicationGatewayPathRulePropertiesFormat + } + if agpr.Name != nil { + objectMap["name"] = agpr.Name + } + if agpr.Etag != nil { + objectMap["etag"] = agpr.Etag + } + if agpr.Type != nil { + objectMap["type"] = agpr.Type + } + if agpr.ID != nil { + objectMap["id"] = agpr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayPathRule struct. +func (agpr *ApplicationGatewayPathRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayPathRulePropertiesFormat ApplicationGatewayPathRulePropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayPathRulePropertiesFormat) + if err != nil { + return err + } + agpr.ApplicationGatewayPathRulePropertiesFormat = &applicationGatewayPathRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agpr.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agpr.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agpr.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agpr.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayPathRulePropertiesFormat properties of path rule of an application gateway. +type ApplicationGatewayPathRulePropertiesFormat struct { + // Paths - Path rules of URL path map. + Paths *[]string `json:"paths,omitempty"` + // BackendAddressPool - Backend address pool resource of URL path map path rule. + BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` + // BackendHTTPSettings - Backend http settings resource of URL path map path rule. + BackendHTTPSettings *SubResource `json:"backendHttpSettings,omitempty"` + // RedirectConfiguration - Redirect configuration resource of URL path map path rule. + RedirectConfiguration *SubResource `json:"redirectConfiguration,omitempty"` + // ProvisioningState - Path rule of URL path map resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayProbe probe of the application gateway. +type ApplicationGatewayProbe struct { + *ApplicationGatewayProbePropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. 
This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayProbe. +func (agp ApplicationGatewayProbe) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agp.ApplicationGatewayProbePropertiesFormat != nil { + objectMap["properties"] = agp.ApplicationGatewayProbePropertiesFormat + } + if agp.Name != nil { + objectMap["name"] = agp.Name + } + if agp.Etag != nil { + objectMap["etag"] = agp.Etag + } + if agp.Type != nil { + objectMap["type"] = agp.Type + } + if agp.ID != nil { + objectMap["id"] = agp.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayProbe struct. +func (agp *ApplicationGatewayProbe) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayProbePropertiesFormat ApplicationGatewayProbePropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayProbePropertiesFormat) + if err != nil { + return err + } + agp.ApplicationGatewayProbePropertiesFormat = &applicationGatewayProbePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agp.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agp.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agp.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agp.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayProbeHealthResponseMatch application gateway probe health response match +type ApplicationGatewayProbeHealthResponseMatch struct { + // Body - Body that must be contained in the health response. Default value is empty. + Body *string `json:"body,omitempty"` + // StatusCodes - Allowed ranges of healthy status codes. Default range of healthy status codes is 200-399. + StatusCodes *[]string `json:"statusCodes,omitempty"` +} + +// ApplicationGatewayProbePropertiesFormat properties of probe of an application gateway. +type ApplicationGatewayProbePropertiesFormat struct { + // Protocol - Protocol. Possible values include: 'HTTP', 'HTTPS' + Protocol ApplicationGatewayProtocol `json:"protocol,omitempty"` + // Host - Host name to send the probe to. + Host *string `json:"host,omitempty"` + // Path - Relative path of probe. Valid path starts from '/'. Probe is sent to ://: + Path *string `json:"path,omitempty"` + // Interval - The probing interval in seconds. This is the time interval between two consecutive probes. Acceptable values are from 1 second to 86400 seconds. + Interval *int32 `json:"interval,omitempty"` + // Timeout - the probe timeout in seconds. Probe marked as failed if valid response is not received with this timeout period. Acceptable values are from 1 second to 86400 seconds. + Timeout *int32 `json:"timeout,omitempty"` + // UnhealthyThreshold - The probe retry count. 
Backend server is marked down after consecutive probe failure count reaches UnhealthyThreshold. Acceptable values are from 1 second to 20. + UnhealthyThreshold *int32 `json:"unhealthyThreshold,omitempty"` + // PickHostNameFromBackendHTTPSettings - Whether the host header should be picked from the backend http settings. Default value is false. + PickHostNameFromBackendHTTPSettings *bool `json:"pickHostNameFromBackendHttpSettings,omitempty"` + // MinServers - Minimum number of servers that are always marked healthy. Default value is 0. + MinServers *int32 `json:"minServers,omitempty"` + // Match - Criterion for classifying a healthy probe response. + Match *ApplicationGatewayProbeHealthResponseMatch `json:"match,omitempty"` + // ProvisioningState - Provisioning state of the backend http settings resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayPropertiesFormat properties of the application gateway. +type ApplicationGatewayPropertiesFormat struct { + // Sku - SKU of the application gateway resource. + Sku *ApplicationGatewaySku `json:"sku,omitempty"` + // SslPolicy - SSL policy of the application gateway resource. + SslPolicy *ApplicationGatewaySslPolicy `json:"sslPolicy,omitempty"` + // OperationalState - Operational state of the application gateway resource. Possible values include: 'Stopped', 'Starting', 'Running', 'Stopping' + OperationalState ApplicationGatewayOperationalState `json:"operationalState,omitempty"` + // GatewayIPConfigurations - Subnets of application the gateway resource. + GatewayIPConfigurations *[]ApplicationGatewayIPConfiguration `json:"gatewayIPConfigurations,omitempty"` + // AuthenticationCertificates - Authentication certificates of the application gateway resource. + AuthenticationCertificates *[]ApplicationGatewayAuthenticationCertificate `json:"authenticationCertificates,omitempty"` + // SslCertificates - SSL certificates of the application gateway resource. + SslCertificates *[]ApplicationGatewaySslCertificate `json:"sslCertificates,omitempty"` + // FrontendIPConfigurations - Frontend IP addresses of the application gateway resource. + FrontendIPConfigurations *[]ApplicationGatewayFrontendIPConfiguration `json:"frontendIPConfigurations,omitempty"` + // FrontendPorts - Frontend ports of the application gateway resource. + FrontendPorts *[]ApplicationGatewayFrontendPort `json:"frontendPorts,omitempty"` + // Probes - Probes of the application gateway resource. + Probes *[]ApplicationGatewayProbe `json:"probes,omitempty"` + // BackendAddressPools - Backend address pool of the application gateway resource. + BackendAddressPools *[]ApplicationGatewayBackendAddressPool `json:"backendAddressPools,omitempty"` + // BackendHTTPSettingsCollection - Backend http settings of the application gateway resource. + BackendHTTPSettingsCollection *[]ApplicationGatewayBackendHTTPSettings `json:"backendHttpSettingsCollection,omitempty"` + // HTTPListeners - Http listeners of the application gateway resource. + HTTPListeners *[]ApplicationGatewayHTTPListener `json:"httpListeners,omitempty"` + // URLPathMaps - URL path map of the application gateway resource. + URLPathMaps *[]ApplicationGatewayURLPathMap `json:"urlPathMaps,omitempty"` + // RequestRoutingRules - Request routing rules of the application gateway resource. 
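// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. It builds a custom health probe as modelled above; the helper
// name and all values are hypothetical, with interval, timeout and threshold
// chosen inside the documented ranges and the 200-399 status range mirroring
// the documented default for a healthy response.
func exampleProbe() ApplicationGatewayProbe {
	name := "health-probe"
	host := "127.0.0.1"
	path := "/healthz"
	interval := int32(30)
	timeout := int32(30)
	unhealthyThreshold := int32(3)
	return ApplicationGatewayProbe{
		Name: &name,
		ApplicationGatewayProbePropertiesFormat: &ApplicationGatewayProbePropertiesFormat{
			Protocol:           HTTP,
			Host:               &host,
			Path:               &path,
			Interval:           &interval,
			Timeout:            &timeout,
			UnhealthyThreshold: &unhealthyThreshold,
			Match: &ApplicationGatewayProbeHealthResponseMatch{
				StatusCodes: &[]string{"200-399"},
			},
		},
	}
}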
+ RequestRoutingRules *[]ApplicationGatewayRequestRoutingRule `json:"requestRoutingRules,omitempty"` + // RedirectConfigurations - Redirect configurations of the application gateway resource. + RedirectConfigurations *[]ApplicationGatewayRedirectConfiguration `json:"redirectConfigurations,omitempty"` + // WebApplicationFirewallConfiguration - Web application firewall configuration. + WebApplicationFirewallConfiguration *ApplicationGatewayWebApplicationFirewallConfiguration `json:"webApplicationFirewallConfiguration,omitempty"` + // EnableHTTP2 - Whether HTTP2 is enabled on the application gateway resource. + EnableHTTP2 *bool `json:"enableHttp2,omitempty"` + // ResourceGUID - Resource GUID property of the application gateway resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - Provisioning state of the application gateway resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewayRedirectConfiguration redirect configuration of an application gateway. +type ApplicationGatewayRedirectConfiguration struct { + *ApplicationGatewayRedirectConfigurationPropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayRedirectConfiguration. +func (agrc ApplicationGatewayRedirectConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agrc.ApplicationGatewayRedirectConfigurationPropertiesFormat != nil { + objectMap["properties"] = agrc.ApplicationGatewayRedirectConfigurationPropertiesFormat + } + if agrc.Name != nil { + objectMap["name"] = agrc.Name + } + if agrc.Etag != nil { + objectMap["etag"] = agrc.Etag + } + if agrc.Type != nil { + objectMap["type"] = agrc.Type + } + if agrc.ID != nil { + objectMap["id"] = agrc.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayRedirectConfiguration struct. 
+func (agrc *ApplicationGatewayRedirectConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayRedirectConfigurationPropertiesFormat ApplicationGatewayRedirectConfigurationPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayRedirectConfigurationPropertiesFormat) + if err != nil { + return err + } + agrc.ApplicationGatewayRedirectConfigurationPropertiesFormat = &applicationGatewayRedirectConfigurationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agrc.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agrc.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agrc.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agrc.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayRedirectConfigurationPropertiesFormat properties of redirect configuration of the application +// gateway. +type ApplicationGatewayRedirectConfigurationPropertiesFormat struct { + // RedirectType - Supported http redirection types - Permanent, Temporary, Found, SeeOther. Possible values include: 'Permanent', 'Found', 'SeeOther', 'Temporary' + RedirectType ApplicationGatewayRedirectType `json:"redirectType,omitempty"` + // TargetListener - Reference to a listener to redirect the request to. + TargetListener *SubResource `json:"targetListener,omitempty"` + // TargetURL - Url to redirect the request to. + TargetURL *string `json:"targetUrl,omitempty"` + // IncludePath - Include path in the redirected url. + IncludePath *bool `json:"includePath,omitempty"` + // IncludeQueryString - Include query string in the redirected url. + IncludeQueryString *bool `json:"includeQueryString,omitempty"` + // RequestRoutingRules - Request routing specifying redirect configuration. + RequestRoutingRules *[]SubResource `json:"requestRoutingRules,omitempty"` + // URLPathMaps - Url path maps specifying default redirect configuration. + URLPathMaps *[]SubResource `json:"urlPathMaps,omitempty"` + // PathRules - Path rules specifying redirect configuration. + PathRules *[]SubResource `json:"pathRules,omitempty"` +} + +// ApplicationGatewayRequestRoutingRule request routing rule of an application gateway. +type ApplicationGatewayRequestRoutingRule struct { + *ApplicationGatewayRequestRoutingRulePropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayRequestRoutingRule. 
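// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. It builds a permanent HTTP-to-HTTPS redirect that points at
// another listener through TargetListener; a redirect can instead target an
// external URL through TargetURL. The helper name and listener ID parameter
// are hypothetical.
func exampleRedirectConfiguration(httpsListenerID string) ApplicationGatewayRedirectConfiguration {
	name := "http-to-https"
	includePath := true
	includeQueryString := true
	return ApplicationGatewayRedirectConfiguration{
		Name: &name,
		ApplicationGatewayRedirectConfigurationPropertiesFormat: &ApplicationGatewayRedirectConfigurationPropertiesFormat{
			RedirectType:       Permanent,
			TargetListener:     &SubResource{ID: &httpsListenerID},
			IncludePath:        &includePath,
			IncludeQueryString: &includeQueryString,
		},
	}
}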
+func (agrrr ApplicationGatewayRequestRoutingRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agrrr.ApplicationGatewayRequestRoutingRulePropertiesFormat != nil { + objectMap["properties"] = agrrr.ApplicationGatewayRequestRoutingRulePropertiesFormat + } + if agrrr.Name != nil { + objectMap["name"] = agrrr.Name + } + if agrrr.Etag != nil { + objectMap["etag"] = agrrr.Etag + } + if agrrr.Type != nil { + objectMap["type"] = agrrr.Type + } + if agrrr.ID != nil { + objectMap["id"] = agrrr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayRequestRoutingRule struct. +func (agrrr *ApplicationGatewayRequestRoutingRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayRequestRoutingRulePropertiesFormat ApplicationGatewayRequestRoutingRulePropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayRequestRoutingRulePropertiesFormat) + if err != nil { + return err + } + agrrr.ApplicationGatewayRequestRoutingRulePropertiesFormat = &applicationGatewayRequestRoutingRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agrrr.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agrrr.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agrrr.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agrrr.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayRequestRoutingRulePropertiesFormat properties of request routing rule of the application +// gateway. +type ApplicationGatewayRequestRoutingRulePropertiesFormat struct { + // RuleType - Rule type. Possible values include: 'Basic', 'PathBasedRouting' + RuleType ApplicationGatewayRequestRoutingRuleType `json:"ruleType,omitempty"` + // BackendAddressPool - Backend address pool resource of the application gateway. + BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` + // BackendHTTPSettings - Frontend port resource of the application gateway. + BackendHTTPSettings *SubResource `json:"backendHttpSettings,omitempty"` + // HTTPListener - Http listener resource of the application gateway. + HTTPListener *SubResource `json:"httpListener,omitempty"` + // URLPathMap - URL path map resource of the application gateway. + URLPathMap *SubResource `json:"urlPathMap,omitempty"` + // RedirectConfiguration - Redirect configuration resource of the application gateway. + RedirectConfiguration *SubResource `json:"redirectConfiguration,omitempty"` + // ProvisioningState - Provisioning state of the request routing rule resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewaysBackendHealthFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ApplicationGatewaysBackendHealthFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
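// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. It shows how a caller can drive one of the *Future types
// above to completion: Done polls the service once, and Result (implemented
// below) returns the typed payload only after the operation has finished.
// The helper name and the fixed 10-second delay are hypothetical, and the
// standard library "time" package is assumed to be imported.
func pollBackendHealth(client ApplicationGatewaysClient, future ApplicationGatewaysBackendHealthFuture) (ApplicationGatewayBackendHealth, error) {
	for {
		done, err := future.Done(client)
		if err != nil {
			return ApplicationGatewayBackendHealth{}, err
		}
		if done {
			return future.Result(client)
		}
		time.Sleep(10 * time.Second) // crude fixed back-off between polls
	}
}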
+func (future *ApplicationGatewaysBackendHealthFuture) Result(client ApplicationGatewaysClient) (agbh ApplicationGatewayBackendHealth, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysBackendHealthFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationGatewaysBackendHealthFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if agbh.Response.Response, err = future.GetResult(sender); err == nil && agbh.Response.Response.StatusCode != http.StatusNoContent { + agbh, err = client.BackendHealthResponder(agbh.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysBackendHealthFuture", "Result", agbh.Response.Response, "Failure responding to request") + } + } + return +} + +// ApplicationGatewaysCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ApplicationGatewaysCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ApplicationGatewaysCreateOrUpdateFuture) Result(client ApplicationGatewaysClient) (ag ApplicationGateway, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationGatewaysCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ag.Response.Response, err = future.GetResult(sender); err == nil && ag.Response.Response.StatusCode != http.StatusNoContent { + ag, err = client.CreateOrUpdateResponder(ag.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysCreateOrUpdateFuture", "Result", ag.Response.Response, "Failure responding to request") + } + } + return +} + +// ApplicationGatewaysDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ApplicationGatewaysDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ApplicationGatewaysDeleteFuture) Result(client ApplicationGatewaysClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationGatewaysDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// ApplicationGatewaySku SKU of an application gateway +type ApplicationGatewaySku struct { + // Name - Name of an application gateway SKU. Possible values include: 'StandardSmall', 'StandardMedium', 'StandardLarge', 'WAFMedium', 'WAFLarge' + Name ApplicationGatewaySkuName `json:"name,omitempty"` + // Tier - Tier of an application gateway. 
Possible values include: 'Standard', 'WAF' + Tier ApplicationGatewayTier `json:"tier,omitempty"` + // Capacity - Capacity (instance count) of an application gateway. + Capacity *int32 `json:"capacity,omitempty"` +} + +// ApplicationGatewaySslCertificate SSL certificates of an application gateway. +type ApplicationGatewaySslCertificate struct { + *ApplicationGatewaySslCertificatePropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewaySslCertificate. +func (agsc ApplicationGatewaySslCertificate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agsc.ApplicationGatewaySslCertificatePropertiesFormat != nil { + objectMap["properties"] = agsc.ApplicationGatewaySslCertificatePropertiesFormat + } + if agsc.Name != nil { + objectMap["name"] = agsc.Name + } + if agsc.Etag != nil { + objectMap["etag"] = agsc.Etag + } + if agsc.Type != nil { + objectMap["type"] = agsc.Type + } + if agsc.ID != nil { + objectMap["id"] = agsc.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewaySslCertificate struct. +func (agsc *ApplicationGatewaySslCertificate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewaySslCertificatePropertiesFormat ApplicationGatewaySslCertificatePropertiesFormat + err = json.Unmarshal(*v, &applicationGatewaySslCertificatePropertiesFormat) + if err != nil { + return err + } + agsc.ApplicationGatewaySslCertificatePropertiesFormat = &applicationGatewaySslCertificatePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agsc.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agsc.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agsc.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agsc.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewaySslCertificatePropertiesFormat properties of SSL certificates of an application gateway. +type ApplicationGatewaySslCertificatePropertiesFormat struct { + // Data - Base-64 encoded pfx certificate. Only applicable in PUT Request. + Data *string `json:"data,omitempty"` + // Password - Password for the pfx file specified in data. Only applicable in PUT request. + Password *string `json:"password,omitempty"` + // PublicCertData - Base-64 encoded Public cert data corresponding to pfx specified in data. Only applicable in GET request. + PublicCertData *string `json:"publicCertData,omitempty"` + // ProvisioningState - Provisioning state of the SSL certificate resource Possible values are: 'Updating', 'Deleting', and 'Failed'. 
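// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. When creating or updating a gateway only Data (the base-64
// encoded PFX) and Password are sent; PublicCertData is returned by the
// service on reads, as the field comments above describe. The helper name and
// parameters are hypothetical.
func exampleSslCertificate(pfxBase64, pfxPassword string) ApplicationGatewaySslCertificate {
	name := "frontend-cert"
	return ApplicationGatewaySslCertificate{
		Name: &name,
		ApplicationGatewaySslCertificatePropertiesFormat: &ApplicationGatewaySslCertificatePropertiesFormat{
			Data:     &pfxBase64,
			Password: &pfxPassword,
		},
	}
}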
+ ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationGatewaySslPolicy application Gateway Ssl policy. +type ApplicationGatewaySslPolicy struct { + // DisabledSslProtocols - Ssl protocols to be disabled on application gateway. + DisabledSslProtocols *[]ApplicationGatewaySslProtocol `json:"disabledSslProtocols,omitempty"` + // PolicyType - Type of Ssl Policy. Possible values include: 'Predefined', 'Custom' + PolicyType ApplicationGatewaySslPolicyType `json:"policyType,omitempty"` + // PolicyName - Name of Ssl predefined policy. Possible values include: 'AppGwSslPolicy20150501', 'AppGwSslPolicy20170401', 'AppGwSslPolicy20170401S' + PolicyName ApplicationGatewaySslPolicyName `json:"policyName,omitempty"` + // CipherSuites - Ssl cipher suites to be enabled in the specified order to application gateway. + CipherSuites *[]ApplicationGatewaySslCipherSuite `json:"cipherSuites,omitempty"` + // MinProtocolVersion - Minimum version of Ssl protocol to be supported on application gateway. Possible values include: 'TLSv10', 'TLSv11', 'TLSv12' + MinProtocolVersion ApplicationGatewaySslProtocol `json:"minProtocolVersion,omitempty"` +} + +// ApplicationGatewaySslPredefinedPolicy an Ssl predefined policy +type ApplicationGatewaySslPredefinedPolicy struct { + autorest.Response `json:"-"` + // Name - Name of Ssl predefined policy. + Name *string `json:"name,omitempty"` + *ApplicationGatewaySslPredefinedPolicyPropertiesFormat `json:"properties,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewaySslPredefinedPolicy. +func (agspp ApplicationGatewaySslPredefinedPolicy) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agspp.Name != nil { + objectMap["name"] = agspp.Name + } + if agspp.ApplicationGatewaySslPredefinedPolicyPropertiesFormat != nil { + objectMap["properties"] = agspp.ApplicationGatewaySslPredefinedPolicyPropertiesFormat + } + if agspp.ID != nil { + objectMap["id"] = agspp.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewaySslPredefinedPolicy struct. +func (agspp *ApplicationGatewaySslPredefinedPolicy) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agspp.Name = &name + } + case "properties": + if v != nil { + var applicationGatewaySslPredefinedPolicyPropertiesFormat ApplicationGatewaySslPredefinedPolicyPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewaySslPredefinedPolicyPropertiesFormat) + if err != nil { + return err + } + agspp.ApplicationGatewaySslPredefinedPolicyPropertiesFormat = &applicationGatewaySslPredefinedPolicyPropertiesFormat + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agspp.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewaySslPredefinedPolicyPropertiesFormat properties of ApplicationGatewaySslPredefinedPolicy +type ApplicationGatewaySslPredefinedPolicyPropertiesFormat struct { + // CipherSuites - Ssl cipher suites to be enabled in the specified order for application gateway. 
+ CipherSuites *[]ApplicationGatewaySslCipherSuite `json:"cipherSuites,omitempty"` + // MinProtocolVersion - Minimum version of Ssl protocol to be supported on application gateway. Possible values include: 'TLSv10', 'TLSv11', 'TLSv12' + MinProtocolVersion ApplicationGatewaySslProtocol `json:"minProtocolVersion,omitempty"` +} + +// ApplicationGatewaysStartFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ApplicationGatewaysStartFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ApplicationGatewaysStartFuture) Result(client ApplicationGatewaysClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysStartFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationGatewaysStartFuture") + return + } + ar.Response = future.Response() + return +} + +// ApplicationGatewaysStopFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ApplicationGatewaysStopFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ApplicationGatewaysStopFuture) Result(client ApplicationGatewaysClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysStopFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationGatewaysStopFuture") + return + } + ar.Response = future.Response() + return +} + +// ApplicationGatewaysUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ApplicationGatewaysUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ApplicationGatewaysUpdateTagsFuture) Result(client ApplicationGatewaysClient) (ag ApplicationGateway, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationGatewaysUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ag.Response.Response, err = future.GetResult(sender); err == nil && ag.Response.Response.StatusCode != http.StatusNoContent { + ag, err = client.UpdateTagsResponder(ag.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationGatewaysUpdateTagsFuture", "Result", ag.Response.Response, "Failure responding to request") + } + } + return +} + +// ApplicationGatewayURLPathMap urlPathMaps give a url path to the backend mapping information for +// PathBasedRouting. 
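// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. An SSL policy can reference a predefined policy by name or,
// as in this hypothetical helper, simply disable older protocol versions on
// the gateway.
func exampleSslPolicy() *ApplicationGatewaySslPolicy {
	return &ApplicationGatewaySslPolicy{
		DisabledSslProtocols: &[]ApplicationGatewaySslProtocol{TLSv10, TLSv11},
	}
}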
+type ApplicationGatewayURLPathMap struct { + *ApplicationGatewayURLPathMapPropertiesFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Type of the resource. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ApplicationGatewayURLPathMap. +func (agupm ApplicationGatewayURLPathMap) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if agupm.ApplicationGatewayURLPathMapPropertiesFormat != nil { + objectMap["properties"] = agupm.ApplicationGatewayURLPathMapPropertiesFormat + } + if agupm.Name != nil { + objectMap["name"] = agupm.Name + } + if agupm.Etag != nil { + objectMap["etag"] = agupm.Etag + } + if agupm.Type != nil { + objectMap["type"] = agupm.Type + } + if agupm.ID != nil { + objectMap["id"] = agupm.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ApplicationGatewayURLPathMap struct. +func (agupm *ApplicationGatewayURLPathMap) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationGatewayURLPathMapPropertiesFormat ApplicationGatewayURLPathMapPropertiesFormat + err = json.Unmarshal(*v, &applicationGatewayURLPathMapPropertiesFormat) + if err != nil { + return err + } + agupm.ApplicationGatewayURLPathMapPropertiesFormat = &applicationGatewayURLPathMapPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + agupm.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + agupm.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + agupm.Type = &typeVar + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + agupm.ID = &ID + } + } + } + + return nil +} + +// ApplicationGatewayURLPathMapPropertiesFormat properties of UrlPathMap of the application gateway. +type ApplicationGatewayURLPathMapPropertiesFormat struct { + // DefaultBackendAddressPool - Default backend address pool resource of URL path map. + DefaultBackendAddressPool *SubResource `json:"defaultBackendAddressPool,omitempty"` + // DefaultBackendHTTPSettings - Default backend http settings resource of URL path map. + DefaultBackendHTTPSettings *SubResource `json:"defaultBackendHttpSettings,omitempty"` + // DefaultRedirectConfiguration - Default redirect configuration resource of URL path map. + DefaultRedirectConfiguration *SubResource `json:"defaultRedirectConfiguration,omitempty"` + // PathRules - Path rule of URL path map resource. + PathRules *[]ApplicationGatewayPathRule `json:"pathRules,omitempty"` + // ProvisioningState - Provisioning state of the backend http settings resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. 
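// Editor's note: the following is an illustrative sketch, not part of the
// vendored SDK. It wires a URL path map for PathBasedRouting as modelled
// above: requests matching /images/* go to a dedicated pool, everything else
// falls through to the defaults. The helper name and all resource ID
// parameters are hypothetical.
func exampleURLPathMap(defaultPoolID, defaultSettingsID, imagePoolID string) ApplicationGatewayURLPathMap {
	mapName := "path-map"
	ruleName := "images"
	return ApplicationGatewayURLPathMap{
		Name: &mapName,
		ApplicationGatewayURLPathMapPropertiesFormat: &ApplicationGatewayURLPathMapPropertiesFormat{
			DefaultBackendAddressPool:  &SubResource{ID: &defaultPoolID},
			DefaultBackendHTTPSettings: &SubResource{ID: &defaultSettingsID},
			PathRules: &[]ApplicationGatewayPathRule{{
				Name: &ruleName,
				ApplicationGatewayPathRulePropertiesFormat: &ApplicationGatewayPathRulePropertiesFormat{
					Paths:               &[]string{"/images/*"},
					BackendAddressPool:  &SubResource{ID: &imagePoolID},
					BackendHTTPSettings: &SubResource{ID: &defaultSettingsID},
				},
			}},
		},
	}
}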
+ ProvisioningState *string `json:"provisioningState,omitempty"`
+}
+
+// ApplicationGatewayWebApplicationFirewallConfiguration application gateway web application firewall
+// configuration.
+type ApplicationGatewayWebApplicationFirewallConfiguration struct {
+ // Enabled - Whether the web application firewall is enabled or not.
+ Enabled *bool `json:"enabled,omitempty"`
+ // FirewallMode - Web application firewall mode. Possible values include: 'Detection', 'Prevention'
+ FirewallMode ApplicationGatewayFirewallMode `json:"firewallMode,omitempty"`
+ // RuleSetType - The type of the web application firewall rule set. Possible values are: 'OWASP'.
+ RuleSetType *string `json:"ruleSetType,omitempty"`
+ // RuleSetVersion - The version of the rule set type.
+ RuleSetVersion *string `json:"ruleSetVersion,omitempty"`
+ // DisabledRuleGroups - The disabled rule groups.
+ DisabledRuleGroups *[]ApplicationGatewayFirewallDisabledRuleGroup `json:"disabledRuleGroups,omitempty"`
+ // RequestBodyCheck - Whether to allow WAF to check the request body.
+ RequestBodyCheck *bool `json:"requestBodyCheck,omitempty"`
+ // MaxRequestBodySize - Maximum request body size for WAF.
+ MaxRequestBodySize *int32 `json:"maxRequestBodySize,omitempty"`
+}
+
+// ApplicationSecurityGroup an application security group in a resource group.
+type ApplicationSecurityGroup struct {
+ autorest.Response `json:"-"`
+ // ApplicationSecurityGroupPropertiesFormat - Properties of the application security group.
+ *ApplicationSecurityGroupPropertiesFormat `json:"properties,omitempty"`
+ // Etag - A unique read-only string that changes whenever the resource is updated.
+ Etag *string `json:"etag,omitempty"`
+ // ID - Resource ID.
+ ID *string `json:"id,omitempty"`
+ // Name - Resource name.
+ Name *string `json:"name,omitempty"`
+ // Type - Resource type.
+ Type *string `json:"type,omitempty"`
+ // Location - Resource location.
+ Location *string `json:"location,omitempty"`
+ // Tags - Resource tags.
+ Tags map[string]*string `json:"tags"`
+}
+
+// MarshalJSON is the custom marshaler for ApplicationSecurityGroup.
+func (asg ApplicationSecurityGroup) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if asg.ApplicationSecurityGroupPropertiesFormat != nil {
+ objectMap["properties"] = asg.ApplicationSecurityGroupPropertiesFormat
+ }
+ if asg.Etag != nil {
+ objectMap["etag"] = asg.Etag
+ }
+ if asg.ID != nil {
+ objectMap["id"] = asg.ID
+ }
+ if asg.Name != nil {
+ objectMap["name"] = asg.Name
+ }
+ if asg.Type != nil {
+ objectMap["type"] = asg.Type
+ }
+ if asg.Location != nil {
+ objectMap["location"] = asg.Location
+ }
+ if asg.Tags != nil {
+ objectMap["tags"] = asg.Tags
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ApplicationSecurityGroup struct.
+func (asg *ApplicationSecurityGroup) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var applicationSecurityGroupPropertiesFormat ApplicationSecurityGroupPropertiesFormat + err = json.Unmarshal(*v, &applicationSecurityGroupPropertiesFormat) + if err != nil { + return err + } + asg.ApplicationSecurityGroupPropertiesFormat = &applicationSecurityGroupPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + asg.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + asg.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + asg.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + asg.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + asg.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + asg.Tags = tags + } + } + } + + return nil +} + +// ApplicationSecurityGroupListResult a list of application security groups. +type ApplicationSecurityGroupListResult struct { + autorest.Response `json:"-"` + // Value - A list of application security groups. + Value *[]ApplicationSecurityGroup `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ApplicationSecurityGroupListResultIterator provides access to a complete listing of ApplicationSecurityGroup +// values. +type ApplicationSecurityGroupListResultIterator struct { + i int + page ApplicationSecurityGroupListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ApplicationSecurityGroupListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ApplicationSecurityGroupListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ApplicationSecurityGroupListResultIterator) Response() ApplicationSecurityGroupListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ApplicationSecurityGroupListResultIterator) Value() ApplicationSecurityGroup { + if !iter.page.NotDone() { + return ApplicationSecurityGroup{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (asglr ApplicationSecurityGroupListResult) IsEmpty() bool { + return asglr.Value == nil || len(*asglr.Value) == 0 +} + +// applicationSecurityGroupListResultPreparer prepares a request to retrieve the next set of results. 
+// It returns nil if no more results exist. +func (asglr ApplicationSecurityGroupListResult) applicationSecurityGroupListResultPreparer() (*http.Request, error) { + if asglr.NextLink == nil || len(to.String(asglr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(asglr.NextLink))) +} + +// ApplicationSecurityGroupListResultPage contains a page of ApplicationSecurityGroup values. +type ApplicationSecurityGroupListResultPage struct { + fn func(ApplicationSecurityGroupListResult) (ApplicationSecurityGroupListResult, error) + asglr ApplicationSecurityGroupListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ApplicationSecurityGroupListResultPage) Next() error { + next, err := page.fn(page.asglr) + if err != nil { + return err + } + page.asglr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ApplicationSecurityGroupListResultPage) NotDone() bool { + return !page.asglr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ApplicationSecurityGroupListResultPage) Response() ApplicationSecurityGroupListResult { + return page.asglr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ApplicationSecurityGroupListResultPage) Values() []ApplicationSecurityGroup { + if page.asglr.IsEmpty() { + return nil + } + return *page.asglr.Value +} + +// ApplicationSecurityGroupPropertiesFormat application security group properties. +type ApplicationSecurityGroupPropertiesFormat struct { + // ResourceGUID - The resource GUID property of the application security group resource. It uniquely identifies a resource, even if the user changes its name or migrate the resource across subscriptions or resource groups. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the application security group resource. Possible values are: 'Succeeded', 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// ApplicationSecurityGroupsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ApplicationSecurityGroupsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *ApplicationSecurityGroupsCreateOrUpdateFuture) Result(client ApplicationSecurityGroupsClient) (asg ApplicationSecurityGroup, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationSecurityGroupsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if asg.Response.Response, err = future.GetResult(sender); err == nil && asg.Response.Response.StatusCode != http.StatusNoContent { + asg, err = client.CreateOrUpdateResponder(asg.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsCreateOrUpdateFuture", "Result", asg.Response.Response, "Failure responding to request") + } + } + return +} + +// ApplicationSecurityGroupsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ApplicationSecurityGroupsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ApplicationSecurityGroupsDeleteFuture) Result(client ApplicationSecurityGroupsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ApplicationSecurityGroupsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ApplicationSecurityGroupsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// AuthorizationListResult response for ListAuthorizations API service call retrieves all authorizations that +// belongs to an ExpressRouteCircuit. +type AuthorizationListResult struct { + autorest.Response `json:"-"` + // Value - The authorizations in an ExpressRoute Circuit. + Value *[]ExpressRouteCircuitAuthorization `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// AuthorizationListResultIterator provides access to a complete listing of ExpressRouteCircuitAuthorization +// values. +type AuthorizationListResultIterator struct { + i int + page AuthorizationListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *AuthorizationListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter AuthorizationListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter AuthorizationListResultIterator) Response() AuthorizationListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. 
+func (iter AuthorizationListResultIterator) Value() ExpressRouteCircuitAuthorization { + if !iter.page.NotDone() { + return ExpressRouteCircuitAuthorization{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (alr AuthorizationListResult) IsEmpty() bool { + return alr.Value == nil || len(*alr.Value) == 0 +} + +// authorizationListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (alr AuthorizationListResult) authorizationListResultPreparer() (*http.Request, error) { + if alr.NextLink == nil || len(to.String(alr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(alr.NextLink))) +} + +// AuthorizationListResultPage contains a page of ExpressRouteCircuitAuthorization values. +type AuthorizationListResultPage struct { + fn func(AuthorizationListResult) (AuthorizationListResult, error) + alr AuthorizationListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *AuthorizationListResultPage) Next() error { + next, err := page.fn(page.alr) + if err != nil { + return err + } + page.alr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page AuthorizationListResultPage) NotDone() bool { + return !page.alr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page AuthorizationListResultPage) Response() AuthorizationListResult { + return page.alr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page AuthorizationListResultPage) Values() []ExpressRouteCircuitAuthorization { + if page.alr.IsEmpty() { + return nil + } + return *page.alr.Value +} + +// AuthorizationPropertiesFormat ... +type AuthorizationPropertiesFormat struct { + // AuthorizationKey - The authorization key. + AuthorizationKey *string `json:"authorizationKey,omitempty"` + // AuthorizationUseStatus - AuthorizationUseStatus. Possible values are: 'Available' and 'InUse'. Possible values include: 'Available', 'InUse' + AuthorizationUseStatus AuthorizationUseStatus `json:"authorizationUseStatus,omitempty"` + // ProvisioningState - Gets the provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// Availability availability of the metric. +type Availability struct { + // TimeGrain - The time grain of the availability. + TimeGrain *string `json:"timeGrain,omitempty"` + // Retention - The retention of the availability. + Retention *string `json:"retention,omitempty"` + // BlobDuration - Duration of the availability blob. + BlobDuration *string `json:"blobDuration,omitempty"` +} + +// AvailableProvidersList list of available countries with details. +type AvailableProvidersList struct { + autorest.Response `json:"-"` + // Countries - List of available countries. + Countries *[]AvailableProvidersListCountry `json:"countries,omitempty"` +} + +// AvailableProvidersListCity city or town details. +type AvailableProvidersListCity struct { + // CityName - The city or town name. + CityName *string `json:"cityName,omitempty"` + // Providers - A list of Internet service providers. 
+ Providers *[]string `json:"providers,omitempty"` +} + +// AvailableProvidersListCountry country details. +type AvailableProvidersListCountry struct { + // CountryName - The country name. + CountryName *string `json:"countryName,omitempty"` + // Providers - A list of Internet service providers. + Providers *[]string `json:"providers,omitempty"` + // States - List of available states in the country. + States *[]AvailableProvidersListState `json:"states,omitempty"` +} + +// AvailableProvidersListParameters constraints that determine the list of available Internet service providers. +type AvailableProvidersListParameters struct { + // AzureLocations - A list of Azure regions. + AzureLocations *[]string `json:"azureLocations,omitempty"` + // Country - The country for available providers list. + Country *string `json:"country,omitempty"` + // State - The state for available providers list. + State *string `json:"state,omitempty"` + // City - The city or town for available providers list. + City *string `json:"city,omitempty"` +} + +// AvailableProvidersListState state details. +type AvailableProvidersListState struct { + // StateName - The state name. + StateName *string `json:"stateName,omitempty"` + // Providers - A list of Internet service providers. + Providers *[]string `json:"providers,omitempty"` + // Cities - List of available cities or towns in the state. + Cities *[]AvailableProvidersListCity `json:"cities,omitempty"` +} + +// AzureAsyncOperationResult the response body contains the status of the specified asynchronous operation, +// indicating whether it has succeeded, is in progress, or has failed. Note that this status is distinct from the +// HTTP status code returned for the Get Operation Status operation itself. If the asynchronous operation +// succeeded, the response body includes the HTTP status code for the successful request. If the asynchronous +// operation failed, the response body includes the HTTP status code for the failed request and error information +// regarding the failure. +type AzureAsyncOperationResult struct { + // Status - Status of the Azure async operation. Possible values are: 'InProgress', 'Succeeded', and 'Failed'. Possible values include: 'OperationStatusInProgress', 'OperationStatusSucceeded', 'OperationStatusFailed' + Status OperationStatus `json:"status,omitempty"` + Error *Error `json:"error,omitempty"` +} + +// AzureReachabilityReport azure reachability report details. +type AzureReachabilityReport struct { + autorest.Response `json:"-"` + // AggregationLevel - The aggregation level of Azure reachability report. Can be Country, State or City. + AggregationLevel *string `json:"aggregationLevel,omitempty"` + ProviderLocation *AzureReachabilityReportLocation `json:"providerLocation,omitempty"` + // ReachabilityReport - List of Azure reachability report items. + ReachabilityReport *[]AzureReachabilityReportItem `json:"reachabilityReport,omitempty"` +} + +// AzureReachabilityReportItem azure reachability report details for a given provider location. +type AzureReachabilityReportItem struct { + // Provider - The Internet service provider. + Provider *string `json:"provider,omitempty"` + // AzureLocation - The Azure region. + AzureLocation *string `json:"azureLocation,omitempty"` + // Latencies - List of latency details for each of the time series. + Latencies *[]AzureReachabilityReportLatencyInfo `json:"latencies,omitempty"` +} + +// AzureReachabilityReportLatencyInfo details on latency for a time series. 
+type AzureReachabilityReportLatencyInfo struct { + // TimeStamp - The time stamp. + TimeStamp *date.Time `json:"timeStamp,omitempty"` + // Score - The relative latency score between 1 and 100, higher values indicating a faster connection. + Score *int32 `json:"score,omitempty"` +} + +// AzureReachabilityReportLocation parameters that define a geographic location. +type AzureReachabilityReportLocation struct { + // Country - The name of the country. + Country *string `json:"country,omitempty"` + // State - The name of the state. + State *string `json:"state,omitempty"` + // City - The name of the city or town. + City *string `json:"city,omitempty"` +} + +// AzureReachabilityReportParameters geographic and time constraints for Azure reachability report. +type AzureReachabilityReportParameters struct { + ProviderLocation *AzureReachabilityReportLocation `json:"providerLocation,omitempty"` + // Providers - List of Internet service providers. + Providers *[]string `json:"providers,omitempty"` + // AzureLocations - Optional Azure regions to scope the query to. + AzureLocations *[]string `json:"azureLocations,omitempty"` + // StartTime - The start time for the Azure reachability report. + StartTime *date.Time `json:"startTime,omitempty"` + // EndTime - The end time for the Azure reachability report. + EndTime *date.Time `json:"endTime,omitempty"` +} + +// BackendAddressPool pool of backend IP addresses. +type BackendAddressPool struct { + autorest.Response `json:"-"` + // BackendAddressPoolPropertiesFormat - Properties of load balancer backend address pool. + *BackendAddressPoolPropertiesFormat `json:"properties,omitempty"` + // Name - Gets name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for BackendAddressPool. +func (bap BackendAddressPool) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if bap.BackendAddressPoolPropertiesFormat != nil { + objectMap["properties"] = bap.BackendAddressPoolPropertiesFormat + } + if bap.Name != nil { + objectMap["name"] = bap.Name + } + if bap.Etag != nil { + objectMap["etag"] = bap.Etag + } + if bap.ID != nil { + objectMap["id"] = bap.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for BackendAddressPool struct. 
+func (bap *BackendAddressPool) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var backendAddressPoolPropertiesFormat BackendAddressPoolPropertiesFormat + err = json.Unmarshal(*v, &backendAddressPoolPropertiesFormat) + if err != nil { + return err + } + bap.BackendAddressPoolPropertiesFormat = &backendAddressPoolPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + bap.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + bap.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + bap.ID = &ID + } + } + } + + return nil +} + +// BackendAddressPoolPropertiesFormat properties of the backend address pool. +type BackendAddressPoolPropertiesFormat struct { + // BackendIPConfigurations - Gets collection of references to IP addresses defined in network interfaces. + BackendIPConfigurations *[]InterfaceIPConfiguration `json:"backendIPConfigurations,omitempty"` + // LoadBalancingRules - Gets load balancing rules that use this backend address pool. + LoadBalancingRules *[]SubResource `json:"loadBalancingRules,omitempty"` + // OutboundNatRule - Gets outbound rules that use this backend address pool. + OutboundNatRule *SubResource `json:"outboundNatRule,omitempty"` + // ProvisioningState - Get provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// BGPCommunity contains bgp community information offered in Service Community resources. +type BGPCommunity struct { + // ServiceSupportedRegion - The region which the service support. e.g. For O365, region is Global. + ServiceSupportedRegion *string `json:"serviceSupportedRegion,omitempty"` + // CommunityName - The name of the bgp community. e.g. Skype. + CommunityName *string `json:"communityName,omitempty"` + // CommunityValue - The value of the bgp community. For more information: https://docs.microsoft.com/en-us/azure/expressroute/expressroute-routing. + CommunityValue *string `json:"communityValue,omitempty"` + // CommunityPrefixes - The prefixes that the bgp community contains. + CommunityPrefixes *[]string `json:"communityPrefixes,omitempty"` + // IsAuthorizedToUse - Customer is authorized to use bgp community or not. + IsAuthorizedToUse *bool `json:"isAuthorizedToUse,omitempty"` + // ServiceGroup - The service group of the bgp community contains. + ServiceGroup *string `json:"serviceGroup,omitempty"` +} + +// BgpPeerStatus BGP peer status details +type BgpPeerStatus struct { + // LocalAddress - The virtual network gateway's local address + LocalAddress *string `json:"localAddress,omitempty"` + // Neighbor - The remote BGP peer + Neighbor *string `json:"neighbor,omitempty"` + // Asn - The autonomous system number of the remote BGP peer + Asn *int32 `json:"asn,omitempty"` + // State - The BGP peer state. 
Possible values include: 'BgpPeerStateUnknown', 'BgpPeerStateStopped', 'BgpPeerStateIdle', 'BgpPeerStateConnecting', 'BgpPeerStateConnected' + State BgpPeerState `json:"state,omitempty"` + // ConnectedDuration - For how long the peering has been up + ConnectedDuration *string `json:"connectedDuration,omitempty"` + // RoutesReceived - The number of routes learned from this peer + RoutesReceived *int64 `json:"routesReceived,omitempty"` + // MessagesSent - The number of BGP messages sent + MessagesSent *int64 `json:"messagesSent,omitempty"` + // MessagesReceived - The number of BGP messages received + MessagesReceived *int64 `json:"messagesReceived,omitempty"` +} + +// BgpPeerStatusListResult response for list BGP peer status API service call +type BgpPeerStatusListResult struct { + autorest.Response `json:"-"` + // Value - List of BGP peers + Value *[]BgpPeerStatus `json:"value,omitempty"` +} + +// BgpServiceCommunity service Community Properties. +type BgpServiceCommunity struct { + *BgpServiceCommunityPropertiesFormat `json:"properties,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for BgpServiceCommunity. +func (bsc BgpServiceCommunity) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if bsc.BgpServiceCommunityPropertiesFormat != nil { + objectMap["properties"] = bsc.BgpServiceCommunityPropertiesFormat + } + if bsc.ID != nil { + objectMap["id"] = bsc.ID + } + if bsc.Name != nil { + objectMap["name"] = bsc.Name + } + if bsc.Type != nil { + objectMap["type"] = bsc.Type + } + if bsc.Location != nil { + objectMap["location"] = bsc.Location + } + if bsc.Tags != nil { + objectMap["tags"] = bsc.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for BgpServiceCommunity struct. +func (bsc *BgpServiceCommunity) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var bgpServiceCommunityPropertiesFormat BgpServiceCommunityPropertiesFormat + err = json.Unmarshal(*v, &bgpServiceCommunityPropertiesFormat) + if err != nil { + return err + } + bsc.BgpServiceCommunityPropertiesFormat = &bgpServiceCommunityPropertiesFormat + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + bsc.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + bsc.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + bsc.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + bsc.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + bsc.Tags = tags + } + } + } + + return nil +} + +// BgpServiceCommunityListResult response for the ListServiceCommunity API service call. 
+type BgpServiceCommunityListResult struct { + autorest.Response `json:"-"` + // Value - A list of service community resources. + Value *[]BgpServiceCommunity `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// BgpServiceCommunityListResultIterator provides access to a complete listing of BgpServiceCommunity values. +type BgpServiceCommunityListResultIterator struct { + i int + page BgpServiceCommunityListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *BgpServiceCommunityListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter BgpServiceCommunityListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter BgpServiceCommunityListResultIterator) Response() BgpServiceCommunityListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter BgpServiceCommunityListResultIterator) Value() BgpServiceCommunity { + if !iter.page.NotDone() { + return BgpServiceCommunity{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (bsclr BgpServiceCommunityListResult) IsEmpty() bool { + return bsclr.Value == nil || len(*bsclr.Value) == 0 +} + +// bgpServiceCommunityListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (bsclr BgpServiceCommunityListResult) bgpServiceCommunityListResultPreparer() (*http.Request, error) { + if bsclr.NextLink == nil || len(to.String(bsclr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(bsclr.NextLink))) +} + +// BgpServiceCommunityListResultPage contains a page of BgpServiceCommunity values. +type BgpServiceCommunityListResultPage struct { + fn func(BgpServiceCommunityListResult) (BgpServiceCommunityListResult, error) + bsclr BgpServiceCommunityListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *BgpServiceCommunityListResultPage) Next() error { + next, err := page.fn(page.bsclr) + if err != nil { + return err + } + page.bsclr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page BgpServiceCommunityListResultPage) NotDone() bool { + return !page.bsclr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page BgpServiceCommunityListResultPage) Response() BgpServiceCommunityListResult { + return page.bsclr +} + +// Values returns the slice of values for the current page or nil if there are no values. 
+func (page BgpServiceCommunityListResultPage) Values() []BgpServiceCommunity { + if page.bsclr.IsEmpty() { + return nil + } + return *page.bsclr.Value +} + +// BgpServiceCommunityPropertiesFormat properties of Service Community. +type BgpServiceCommunityPropertiesFormat struct { + // ServiceName - The name of the bgp community. e.g. Skype. + ServiceName *string `json:"serviceName,omitempty"` + // BgpCommunities - Get a list of bgp communities. + BgpCommunities *[]BGPCommunity `json:"bgpCommunities,omitempty"` +} + +// BgpSettings BGP settings details +type BgpSettings struct { + // Asn - The BGP speaker's ASN. + Asn *int64 `json:"asn,omitempty"` + // BgpPeeringAddress - The BGP peering address and BGP identifier of this BGP speaker. + BgpPeeringAddress *string `json:"bgpPeeringAddress,omitempty"` + // PeerWeight - The weight added to routes learned from this BGP speaker. + PeerWeight *int32 `json:"peerWeight,omitempty"` +} + +// ConnectionMonitor parameters that define the operation to create a connection monitor. +type ConnectionMonitor struct { + // Location - Connection monitor location. + Location *string `json:"location,omitempty"` + // Tags - Connection monitor tags. + Tags map[string]*string `json:"tags"` + *ConnectionMonitorParameters `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for ConnectionMonitor. +func (cm ConnectionMonitor) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if cm.Location != nil { + objectMap["location"] = cm.Location + } + if cm.Tags != nil { + objectMap["tags"] = cm.Tags + } + if cm.ConnectionMonitorParameters != nil { + objectMap["properties"] = cm.ConnectionMonitorParameters + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ConnectionMonitor struct. +func (cm *ConnectionMonitor) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + cm.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + cm.Tags = tags + } + case "properties": + if v != nil { + var connectionMonitorParameters ConnectionMonitorParameters + err = json.Unmarshal(*v, &connectionMonitorParameters) + if err != nil { + return err + } + cm.ConnectionMonitorParameters = &connectionMonitorParameters + } + } + } + + return nil +} + +// ConnectionMonitorDestination describes the destination of connection monitor. +type ConnectionMonitorDestination struct { + // ResourceID - The ID of the resource used as the destination by connection monitor. + ResourceID *string `json:"resourceId,omitempty"` + // Address - Address of the connection monitor destination (IP or domain name). + Address *string `json:"address,omitempty"` + // Port - The destination port used by connection monitor. + Port *int32 `json:"port,omitempty"` +} + +// ConnectionMonitorListResult list of connection monitors. +type ConnectionMonitorListResult struct { + autorest.Response `json:"-"` + // Value - Information about connection monitors. + Value *[]ConnectionMonitorResult `json:"value,omitempty"` +} + +// ConnectionMonitorParameters parameters that define the operation to create a connection monitor. 
+type ConnectionMonitorParameters struct {
+ Source *ConnectionMonitorSource `json:"source,omitempty"`
+ Destination *ConnectionMonitorDestination `json:"destination,omitempty"`
+ // AutoStart - Determines if the connection monitor will start automatically once created.
+ AutoStart *bool `json:"autoStart,omitempty"`
+ // MonitoringIntervalInSeconds - Monitoring interval in seconds.
+ MonitoringIntervalInSeconds *int32 `json:"monitoringIntervalInSeconds,omitempty"`
+}
+
+// ConnectionMonitorQueryResult list of connection states snapshots.
+type ConnectionMonitorQueryResult struct {
+ autorest.Response `json:"-"`
+ // States - Information about connection states.
+ States *[]ConnectionStateSnapshot `json:"states,omitempty"`
+}
+
+// ConnectionMonitorResult information about the connection monitor.
+type ConnectionMonitorResult struct {
+ autorest.Response `json:"-"`
+ // Name - Name of the connection monitor.
+ Name *string `json:"name,omitempty"`
+ // ID - ID of the connection monitor.
+ ID *string `json:"id,omitempty"`
+ Etag *string `json:"etag,omitempty"`
+ // Type - Connection monitor type.
+ Type *string `json:"type,omitempty"`
+ // Location - Connection monitor location.
+ Location *string `json:"location,omitempty"`
+ // Tags - Connection monitor tags.
+ Tags map[string]*string `json:"tags"`
+ *ConnectionMonitorResultProperties `json:"properties,omitempty"`
+}
+
+// MarshalJSON is the custom marshaler for ConnectionMonitorResult.
+func (cmr ConnectionMonitorResult) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]interface{})
+ if cmr.Name != nil {
+ objectMap["name"] = cmr.Name
+ }
+ if cmr.ID != nil {
+ objectMap["id"] = cmr.ID
+ }
+ if cmr.Etag != nil {
+ objectMap["etag"] = cmr.Etag
+ }
+ if cmr.Type != nil {
+ objectMap["type"] = cmr.Type
+ }
+ if cmr.Location != nil {
+ objectMap["location"] = cmr.Location
+ }
+ if cmr.Tags != nil {
+ objectMap["tags"] = cmr.Tags
+ }
+ if cmr.ConnectionMonitorResultProperties != nil {
+ objectMap["properties"] = cmr.ConnectionMonitorResultProperties
+ }
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON is the custom unmarshaler for ConnectionMonitorResult struct.
+func (cmr *ConnectionMonitorResult) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + cmr.Name = &name + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + cmr.ID = &ID + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + cmr.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + cmr.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + cmr.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + cmr.Tags = tags + } + case "properties": + if v != nil { + var connectionMonitorResultProperties ConnectionMonitorResultProperties + err = json.Unmarshal(*v, &connectionMonitorResultProperties) + if err != nil { + return err + } + cmr.ConnectionMonitorResultProperties = &connectionMonitorResultProperties + } + } + } + + return nil +} + +// ConnectionMonitorResultProperties describes the properties of a connection monitor. +type ConnectionMonitorResultProperties struct { + // ProvisioningState - The provisioning state of the connection monitor. Possible values include: 'Succeeded', 'Updating', 'Deleting', 'Failed' + ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` + // StartTime - The date and time when the connection monitor was started. + StartTime *date.Time `json:"startTime,omitempty"` + // MonitoringStatus - The monitoring status of the connection monitor. + MonitoringStatus *string `json:"monitoringStatus,omitempty"` + Source *ConnectionMonitorSource `json:"source,omitempty"` + Destination *ConnectionMonitorDestination `json:"destination,omitempty"` + // AutoStart - Determines if the connection monitor will start automatically once created. + AutoStart *bool `json:"autoStart,omitempty"` + // MonitoringIntervalInSeconds - Monitoring interval in seconds. + MonitoringIntervalInSeconds *int32 `json:"monitoringIntervalInSeconds,omitempty"` +} + +// ConnectionMonitorsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ConnectionMonitorsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *ConnectionMonitorsCreateOrUpdateFuture) Result(client ConnectionMonitorsClient) (cmr ConnectionMonitorResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ConnectionMonitorsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if cmr.Response.Response, err = future.GetResult(sender); err == nil && cmr.Response.Response.StatusCode != http.StatusNoContent { + cmr, err = client.CreateOrUpdateResponder(cmr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsCreateOrUpdateFuture", "Result", cmr.Response.Response, "Failure responding to request") + } + } + return +} + +// ConnectionMonitorsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ConnectionMonitorsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ConnectionMonitorsDeleteFuture) Result(client ConnectionMonitorsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ConnectionMonitorsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// ConnectionMonitorSource describes the source of connection monitor. +type ConnectionMonitorSource struct { + // ResourceID - The ID of the resource used as the source by connection monitor. + ResourceID *string `json:"resourceId,omitempty"` + // Port - The source port used by connection monitor. + Port *int32 `json:"port,omitempty"` +} + +// ConnectionMonitorsQueryFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ConnectionMonitorsQueryFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *ConnectionMonitorsQueryFuture) Result(client ConnectionMonitorsClient) (cmqr ConnectionMonitorQueryResult, err error) {
+ var done bool
+ done, err = future.Done(client)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsQueryFuture", "Result", future.Response(), "Polling failure")
+ return
+ }
+ if !done {
+ err = azure.NewAsyncOpIncompleteError("network.ConnectionMonitorsQueryFuture")
+ return
+ }
+ sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...))
+ if cmqr.Response.Response, err = future.GetResult(sender); err == nil && cmqr.Response.Response.StatusCode != http.StatusNoContent {
+ cmqr, err = client.QueryResponder(cmqr.Response.Response)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsQueryFuture", "Result", cmqr.Response.Response, "Failure responding to request")
+ }
+ }
+ return
+}
+
+// ConnectionMonitorsStartFuture an abstraction for monitoring and retrieving the results of a long-running
+// operation.
+type ConnectionMonitorsStartFuture struct {
+ azure.Future
+}
+
+// Result returns the result of the asynchronous operation.
+// If the operation has not completed it will return an error.
+func (future *ConnectionMonitorsStartFuture) Result(client ConnectionMonitorsClient) (ar autorest.Response, err error) {
+ var done bool
+ done, err = future.Done(client)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsStartFuture", "Result", future.Response(), "Polling failure")
+ return
+ }
+ if !done {
+ err = azure.NewAsyncOpIncompleteError("network.ConnectionMonitorsStartFuture")
+ return
+ }
+ ar.Response = future.Response()
+ return
+}
+
+// ConnectionMonitorsStopFuture an abstraction for monitoring and retrieving the results of a long-running
+// operation.
+type ConnectionMonitorsStopFuture struct {
+ azure.Future
+}
+
+// Result returns the result of the asynchronous operation.
+// If the operation has not completed it will return an error.
+func (future *ConnectionMonitorsStopFuture) Result(client ConnectionMonitorsClient) (ar autorest.Response, err error) {
+ var done bool
+ done, err = future.Done(client)
+ if err != nil {
+ err = autorest.NewErrorWithError(err, "network.ConnectionMonitorsStopFuture", "Result", future.Response(), "Polling failure")
+ return
+ }
+ if !done {
+ err = azure.NewAsyncOpIncompleteError("network.ConnectionMonitorsStopFuture")
+ return
+ }
+ ar.Response = future.Response()
+ return
+}
+
+// ConnectionResetSharedKey the virtual network connection reset shared key
+type ConnectionResetSharedKey struct {
+ autorest.Response `json:"-"`
+ // KeyLength - The virtual network connection reset shared key length, should be between 1 and 128.
+ KeyLength *int32 `json:"keyLength,omitempty"`
+}
+
+// ConnectionSharedKey response for GetConnectionSharedKey API service call
+type ConnectionSharedKey struct {
+ autorest.Response `json:"-"`
+ // Value - The virtual network connection shared key value.
+ Value *string `json:"value,omitempty"`
+}
+
+// ConnectionStateSnapshot connection state snapshot.
+type ConnectionStateSnapshot struct {
+ // ConnectionState - The connection state. Possible values include: 'ConnectionStateReachable', 'ConnectionStateUnreachable', 'ConnectionStateUnknown'
+ ConnectionState ConnectionState `json:"connectionState,omitempty"`
+ // StartTime - The start time of the connection snapshot.
+ StartTime *date.Time `json:"startTime,omitempty"` + // EndTime - The end time of the connection snapshot. + EndTime *date.Time `json:"endTime,omitempty"` + // EvaluationState - Connectivity analysis evaluation state. Possible values include: 'NotStarted', 'InProgress', 'Completed' + EvaluationState EvaluationState `json:"evaluationState,omitempty"` + // Hops - List of hops between the source and the destination. + Hops *[]ConnectivityHop `json:"hops,omitempty"` +} + +// ConnectivityDestination parameters that define destination of connection. +type ConnectivityDestination struct { + // ResourceID - The ID of the resource to which a connection attempt will be made. + ResourceID *string `json:"resourceId,omitempty"` + // Address - The IP address or URI the resource to which a connection attempt will be made. + Address *string `json:"address,omitempty"` + // Port - Port on which check connectivity will be performed. + Port *int32 `json:"port,omitempty"` +} + +// ConnectivityHop information about a hop between the source and the destination. +type ConnectivityHop struct { + // Type - The type of the hop. + Type *string `json:"type,omitempty"` + // ID - The ID of the hop. + ID *string `json:"id,omitempty"` + // Address - The IP address of the hop. + Address *string `json:"address,omitempty"` + // ResourceID - The ID of the resource corresponding to this hop. + ResourceID *string `json:"resourceId,omitempty"` + // NextHopIds - List of next hop identifiers. + NextHopIds *[]string `json:"nextHopIds,omitempty"` + // Issues - List of issues. + Issues *[]ConnectivityIssue `json:"issues,omitempty"` +} + +// ConnectivityInformation information on the connectivity status. +type ConnectivityInformation struct { + autorest.Response `json:"-"` + // Hops - List of hops between the source and the destination. + Hops *[]ConnectivityHop `json:"hops,omitempty"` + // ConnectionStatus - The connection status. Possible values include: 'ConnectionStatusUnknown', 'ConnectionStatusConnected', 'ConnectionStatusDisconnected', 'ConnectionStatusDegraded' + ConnectionStatus ConnectionStatus `json:"connectionStatus,omitempty"` + // AvgLatencyInMs - Average latency in milliseconds. + AvgLatencyInMs *int32 `json:"avgLatencyInMs,omitempty"` + // MinLatencyInMs - Minimum latency in milliseconds. + MinLatencyInMs *int32 `json:"minLatencyInMs,omitempty"` + // MaxLatencyInMs - Maximum latency in milliseconds. + MaxLatencyInMs *int32 `json:"maxLatencyInMs,omitempty"` + // ProbesSent - Total number of probes sent. + ProbesSent *int32 `json:"probesSent,omitempty"` + // ProbesFailed - Number of failed probes. + ProbesFailed *int32 `json:"probesFailed,omitempty"` +} + +// ConnectivityIssue information about an issue encountered in the process of checking for connectivity. +type ConnectivityIssue struct { + // Origin - The origin of the issue. Possible values include: 'OriginLocal', 'OriginInbound', 'OriginOutbound' + Origin Origin `json:"origin,omitempty"` + // Severity - The severity of the issue. Possible values include: 'SeverityError', 'SeverityWarning' + Severity Severity `json:"severity,omitempty"` + // Type - The type of issue. Possible values include: 'IssueTypeUnknown', 'IssueTypeAgentStopped', 'IssueTypeGuestFirewall', 'IssueTypeDNSResolution', 'IssueTypeSocketBind', 'IssueTypeNetworkSecurityRule', 'IssueTypeUserDefinedRoute', 'IssueTypePortThrottled', 'IssueTypePlatform' + Type IssueType `json:"type,omitempty"` + // Context - Provides additional context on the issue. 
+ Context *[]map[string]*string `json:"context,omitempty"` +} + +// ConnectivityParameters parameters that determine how the connectivity check will be performed. +type ConnectivityParameters struct { + Source *ConnectivitySource `json:"source,omitempty"` + Destination *ConnectivityDestination `json:"destination,omitempty"` +} + +// ConnectivitySource parameters that define the source of the connection. +type ConnectivitySource struct { + // ResourceID - The ID of the resource from which a connectivity check will be initiated. + ResourceID *string `json:"resourceId,omitempty"` + // Port - The source port from which a connectivity check will be performed. + Port *int32 `json:"port,omitempty"` +} + +// DhcpOptions dhcpOptions contains an array of DNS servers available to VMs deployed in the virtual network. +// Standard DHCP option for a subnet overrides VNET DHCP options. +type DhcpOptions struct { + // DNSServers - The list of DNS servers IP addresses. + DNSServers *[]string `json:"dnsServers,omitempty"` +} + +// Dimension dimension of the metric. +type Dimension struct { + // Name - The name of the dimension. + Name *string `json:"name,omitempty"` + // DisplayName - The display name of the dimension. + DisplayName *string `json:"displayName,omitempty"` + // InternalName - The internal name of the dimension. + InternalName *string `json:"internalName,omitempty"` +} + +// DNSNameAvailabilityResult response for the CheckDnsNameAvailability API service call. +type DNSNameAvailabilityResult struct { + autorest.Response `json:"-"` + // Available - Domain availability (True/False). + Available *bool `json:"available,omitempty"` +} + +// EffectiveNetworkSecurityGroup effective network security group. +type EffectiveNetworkSecurityGroup struct { + // NetworkSecurityGroup - The ID of network security group that is applied. + NetworkSecurityGroup *SubResource `json:"networkSecurityGroup,omitempty"` + // Association - Associated resources. + Association *EffectiveNetworkSecurityGroupAssociation `json:"association,omitempty"` + // EffectiveSecurityRules - A collection of effective security rules. + EffectiveSecurityRules *[]EffectiveNetworkSecurityRule `json:"effectiveSecurityRules,omitempty"` + // TagMap - Mapping of tags to list of IP Addresses included within the tag. + TagMap map[string][]string `json:"tagMap"` +} + +// MarshalJSON is the custom marshaler for EffectiveNetworkSecurityGroup. +func (ensg EffectiveNetworkSecurityGroup) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ensg.NetworkSecurityGroup != nil { + objectMap["networkSecurityGroup"] = ensg.NetworkSecurityGroup + } + if ensg.Association != nil { + objectMap["association"] = ensg.Association + } + if ensg.EffectiveSecurityRules != nil { + objectMap["effectiveSecurityRules"] = ensg.EffectiveSecurityRules + } + if ensg.TagMap != nil { + objectMap["tagMap"] = ensg.TagMap + } + return json.Marshal(objectMap) +} + +// EffectiveNetworkSecurityGroupAssociation the effective network security group association. +type EffectiveNetworkSecurityGroupAssociation struct { + // Subnet - The ID of the subnet if assigned. + Subnet *SubResource `json:"subnet,omitempty"` + // NetworkInterface - The ID of the network interface if assigned. + NetworkInterface *SubResource `json:"networkInterface,omitempty"` +} + +// EffectiveNetworkSecurityGroupListResult response for list effective network security groups API service call. 
+type EffectiveNetworkSecurityGroupListResult struct {
+ autorest.Response `json:"-"`
+ // Value - A list of effective network security groups.
+ Value *[]EffectiveNetworkSecurityGroup `json:"value,omitempty"`
+ // NextLink - The URL to get the next set of results.
+ NextLink *string `json:"nextLink,omitempty"`
+}
+
+// EffectiveNetworkSecurityRule effective network security rules.
+type EffectiveNetworkSecurityRule struct {
+ // Name - The name of the security rule specified by the user (if created by the user).
+ Name *string `json:"name,omitempty"`
+ // Protocol - The network protocol this rule applies to. Possible values are: 'Tcp', 'Udp', and 'All'. Possible values include: 'TCP', 'UDP', 'All'
+ Protocol EffectiveSecurityRuleProtocol `json:"protocol,omitempty"`
+ // SourcePortRange - The source port or range.
+ SourcePortRange *string `json:"sourcePortRange,omitempty"`
+ // DestinationPortRange - The destination port or range.
+ DestinationPortRange *string `json:"destinationPortRange,omitempty"`
+ // SourcePortRanges - The source port ranges. Expected values include a single integer between 0 and 65535, a range using '-' as separator (e.g. 100-400), or an asterisk (*)
+ SourcePortRanges *[]string `json:"sourcePortRanges,omitempty"`
+ // DestinationPortRanges - The destination port ranges. Expected values include a single integer between 0 and 65535, a range using '-' as separator (e.g. 100-400), or an asterisk (*)
+ DestinationPortRanges *[]string `json:"destinationPortRanges,omitempty"`
+ // SourceAddressPrefix - The source address prefix.
+ SourceAddressPrefix *string `json:"sourceAddressPrefix,omitempty"`
+ // DestinationAddressPrefix - The destination address prefix.
+ DestinationAddressPrefix *string `json:"destinationAddressPrefix,omitempty"`
+ // SourceAddressPrefixes - The source address prefixes. Expected values include CIDR IP ranges, Default Tags (VirtualNetwork, AzureLoadBalancer, Internet), System Tags, and the asterisk (*).
+ SourceAddressPrefixes *[]string `json:"sourceAddressPrefixes,omitempty"`
+ // DestinationAddressPrefixes - The destination address prefixes. Expected values include CIDR IP ranges, Default Tags (VirtualNetwork, AzureLoadBalancer, Internet), System Tags, and the asterisk (*).
+ DestinationAddressPrefixes *[]string `json:"destinationAddressPrefixes,omitempty"`
+ // ExpandedSourceAddressPrefix - The expanded source address prefix.
+ ExpandedSourceAddressPrefix *[]string `json:"expandedSourceAddressPrefix,omitempty"`
+ // ExpandedDestinationAddressPrefix - Expanded destination address prefix.
+ ExpandedDestinationAddressPrefix *[]string `json:"expandedDestinationAddressPrefix,omitempty"`
+ // Access - Whether network traffic is allowed or denied. Possible values are: 'Allow' and 'Deny'. Possible values include: 'SecurityRuleAccessAllow', 'SecurityRuleAccessDeny'
+ Access SecurityRuleAccess `json:"access,omitempty"`
+ // Priority - The priority of the rule.
+ Priority *int32 `json:"priority,omitempty"`
+ // Direction - The direction of the rule. Possible values are: 'Inbound' and 'Outbound'. Possible values include: 'SecurityRuleDirectionInbound', 'SecurityRuleDirectionOutbound'
+ Direction SecurityRuleDirection `json:"direction,omitempty"`
+}
+
+// EffectiveRoute effective Route
+type EffectiveRoute struct {
+ // Name - The name of the user defined route. This is optional.
+ Name *string `json:"name,omitempty"`
+ // Source - Who created the route. Possible values are: 'Unknown', 'User', 'VirtualNetworkGateway', and 'Default'.
Possible values include: 'EffectiveRouteSourceUnknown', 'EffectiveRouteSourceUser', 'EffectiveRouteSourceVirtualNetworkGateway', 'EffectiveRouteSourceDefault' + Source EffectiveRouteSource `json:"source,omitempty"` + // State - The value of effective route. Possible values are: 'Active' and 'Invalid'. Possible values include: 'Active', 'Invalid' + State EffectiveRouteState `json:"state,omitempty"` + // AddressPrefix - The address prefixes of the effective routes in CIDR notation. + AddressPrefix *[]string `json:"addressPrefix,omitempty"` + // NextHopIPAddress - The IP address of the next hop of the effective route. + NextHopIPAddress *[]string `json:"nextHopIpAddress,omitempty"` + // NextHopType - The type of Azure hop the packet should be sent to. Possible values are: 'VirtualNetworkGateway', 'VnetLocal', 'Internet', 'VirtualAppliance', and 'None'. Possible values include: 'RouteNextHopTypeVirtualNetworkGateway', 'RouteNextHopTypeVnetLocal', 'RouteNextHopTypeInternet', 'RouteNextHopTypeVirtualAppliance', 'RouteNextHopTypeNone' + NextHopType RouteNextHopType `json:"nextHopType,omitempty"` +} + +// EffectiveRouteListResult response for list effective route API service call. +type EffectiveRouteListResult struct { + autorest.Response `json:"-"` + // Value - A list of effective routes. + Value *[]EffectiveRoute `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// EndpointServiceResult endpoint service. +type EndpointServiceResult struct { + // Name - Name of the endpoint service. + Name *string `json:"name,omitempty"` + // Type - Type of the endpoint service. + Type *string `json:"type,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// EndpointServicesListResult response for the ListAvailableEndpointServices API service call. +type EndpointServicesListResult struct { + autorest.Response `json:"-"` + // Value - List of available endpoint services in a region. + Value *[]EndpointServiceResult `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// EndpointServicesListResultIterator provides access to a complete listing of EndpointServiceResult values. +type EndpointServicesListResultIterator struct { + i int + page EndpointServicesListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *EndpointServicesListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter EndpointServicesListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter EndpointServicesListResultIterator) Response() EndpointServicesListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter EndpointServicesListResultIterator) Value() EndpointServiceResult { + if !iter.page.NotDone() { + return EndpointServiceResult{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. 
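The Next/NotDone/Value methods above make up the generated enumeration surface for paged list results. As a hedged sketch of how a caller typically drains such an iterator (assuming the code lives in this `network` package and that the iterator came from the corresponding client List call, which is defined outside this file):

```
// Illustrative sketch (not part of the generated file): walking every
// EndpointServiceResult reachable from the iterator defined above.
package network

import "fmt"

func printEndpointServices(iter EndpointServicesListResultIterator) error {
	for iter.NotDone() {
		svc := iter.Value() // zero value once the iterator is exhausted
		if svc.Name != nil {
			fmt.Println("endpoint service:", *svc.Name)
		}
		// Next fetches the following page when the current one is consumed and
		// returns any transport error from that request.
		if err := iter.Next(); err != nil {
			return err
		}
	}
	return nil
}
```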
+func (eslr EndpointServicesListResult) IsEmpty() bool { + return eslr.Value == nil || len(*eslr.Value) == 0 +} + +// endpointServicesListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (eslr EndpointServicesListResult) endpointServicesListResultPreparer() (*http.Request, error) { + if eslr.NextLink == nil || len(to.String(eslr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(eslr.NextLink))) +} + +// EndpointServicesListResultPage contains a page of EndpointServiceResult values. +type EndpointServicesListResultPage struct { + fn func(EndpointServicesListResult) (EndpointServicesListResult, error) + eslr EndpointServicesListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *EndpointServicesListResultPage) Next() error { + next, err := page.fn(page.eslr) + if err != nil { + return err + } + page.eslr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page EndpointServicesListResultPage) NotDone() bool { + return !page.eslr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page EndpointServicesListResultPage) Response() EndpointServicesListResult { + return page.eslr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page EndpointServicesListResultPage) Values() []EndpointServiceResult { + if page.eslr.IsEmpty() { + return nil + } + return *page.eslr.Value +} + +// Error ... +type Error struct { + Code *string `json:"code,omitempty"` + Message *string `json:"message,omitempty"` + Target *string `json:"target,omitempty"` + Details *[]ErrorDetails `json:"details,omitempty"` + InnerError *string `json:"innerError,omitempty"` +} + +// ErrorDetails ... +type ErrorDetails struct { + Code *string `json:"code,omitempty"` + Target *string `json:"target,omitempty"` + Message *string `json:"message,omitempty"` +} + +// ExpressRouteCircuit expressRouteCircuit resource +type ExpressRouteCircuit struct { + autorest.Response `json:"-"` + // Sku - The SKU. + Sku *ExpressRouteCircuitSku `json:"sku,omitempty"` + *ExpressRouteCircuitPropertiesFormat `json:"properties,omitempty"` + // Etag - Gets a unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for ExpressRouteCircuit. 
+func (erc ExpressRouteCircuit) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if erc.Sku != nil { + objectMap["sku"] = erc.Sku + } + if erc.ExpressRouteCircuitPropertiesFormat != nil { + objectMap["properties"] = erc.ExpressRouteCircuitPropertiesFormat + } + if erc.Etag != nil { + objectMap["etag"] = erc.Etag + } + if erc.ID != nil { + objectMap["id"] = erc.ID + } + if erc.Name != nil { + objectMap["name"] = erc.Name + } + if erc.Type != nil { + objectMap["type"] = erc.Type + } + if erc.Location != nil { + objectMap["location"] = erc.Location + } + if erc.Tags != nil { + objectMap["tags"] = erc.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ExpressRouteCircuit struct. +func (erc *ExpressRouteCircuit) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku ExpressRouteCircuitSku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + erc.Sku = &sku + } + case "properties": + if v != nil { + var expressRouteCircuitPropertiesFormat ExpressRouteCircuitPropertiesFormat + err = json.Unmarshal(*v, &expressRouteCircuitPropertiesFormat) + if err != nil { + return err + } + erc.ExpressRouteCircuitPropertiesFormat = &expressRouteCircuitPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + erc.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + erc.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + erc.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + erc.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + erc.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + erc.Tags = tags + } + } + } + + return nil +} + +// ExpressRouteCircuitArpTable the ARP table associated with the ExpressRouteCircuit. +type ExpressRouteCircuitArpTable struct { + // Age - Age + Age *int32 `json:"age,omitempty"` + // Interface - Interface + Interface *string `json:"interface,omitempty"` + // IPAddress - The IP address. + IPAddress *string `json:"ipAddress,omitempty"` + // MacAddress - The MAC address. + MacAddress *string `json:"macAddress,omitempty"` +} + +// ExpressRouteCircuitAuthorization authorization in an ExpressRouteCircuit resource. +type ExpressRouteCircuitAuthorization struct { + autorest.Response `json:"-"` + *AuthorizationPropertiesFormat `json:"properties,omitempty"` + // Name - Gets name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ExpressRouteCircuitAuthorization. 
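Each model with an inlined `*...PropertiesFormat` pointer gets a MarshalJSON/UnmarshalJSON pair like the one above so that the Go-level flattening round-trips to the nested "properties" object the REST API expects. A small round-trip sketch, with hypothetical values and written as if it sat in the same package as these models:

```
// Illustrative sketch only; names and values are hypothetical.
package network

import (
	"encoding/json"
	"fmt"
)

func circuitJSONRoundTrip() error {
	name := "example-circuit"   // hypothetical circuit name
	serviceKey := "example-key" // hypothetical service key
	erc := ExpressRouteCircuit{
		Name: &name,
		ExpressRouteCircuitPropertiesFormat: &ExpressRouteCircuitPropertiesFormat{
			ServiceKey: &serviceKey,
		},
	}

	b, err := json.Marshal(erc) // custom marshaler nests the embedded struct under "properties"
	if err != nil {
		return err
	}
	// Prints roughly: {"name":"example-circuit","properties":{"serviceKey":"example-key"}}
	fmt.Println(string(b))

	var decoded ExpressRouteCircuit
	if err := json.Unmarshal(b, &decoded); err != nil {
		return err
	}
	// The embedded properties pointer is repopulated by the custom unmarshaler.
	fmt.Println(*decoded.ServiceKey) // promoted from ExpressRouteCircuitPropertiesFormat
	return nil
}
```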
+func (erca ExpressRouteCircuitAuthorization) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if erca.AuthorizationPropertiesFormat != nil { + objectMap["properties"] = erca.AuthorizationPropertiesFormat + } + if erca.Name != nil { + objectMap["name"] = erca.Name + } + if erca.Etag != nil { + objectMap["etag"] = erca.Etag + } + if erca.ID != nil { + objectMap["id"] = erca.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ExpressRouteCircuitAuthorization struct. +func (erca *ExpressRouteCircuitAuthorization) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var authorizationPropertiesFormat AuthorizationPropertiesFormat + err = json.Unmarshal(*v, &authorizationPropertiesFormat) + if err != nil { + return err + } + erca.AuthorizationPropertiesFormat = &authorizationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + erca.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + erca.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + erca.ID = &ID + } + } + } + + return nil +} + +// ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results +// of a long-running operation. +type ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture) Result(client ExpressRouteCircuitAuthorizationsClient) (erca ExpressRouteCircuitAuthorization, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if erca.Response.Response, err = future.GetResult(sender); err == nil && erca.Response.Response.StatusCode != http.StatusNoContent { + erca, err = client.CreateOrUpdateResponder(erca.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsCreateOrUpdateFuture", "Result", erca.Response.Response, "Failure responding to request") + } + } + return +} + +// ExpressRouteCircuitAuthorizationsDeleteFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ExpressRouteCircuitAuthorizationsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
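Every long-running operation in this package is surfaced through one of these *Future wrappers around azure.Future: start the call through the client, wait for polling to finish, then call Result to decode the final resource. A hedged sketch of that flow for the authorization future; the client's CreateOrUpdate signature and the WaitForCompletionRef helper are assumptions about the surrounding client and vendored go-autorest code, not something defined in this file:

```
// Illustrative sketch only. CreateOrUpdate's parameters and the
// WaitForCompletionRef method on the embedded azure.Future are assumed from
// typical usage of this SDK generation; check them against the vendored sources.
package network

import "context"

func createCircuitAuthorization(ctx context.Context, client ExpressRouteCircuitAuthorizationsClient,
	resourceGroup, circuitName, authName string, auth ExpressRouteCircuitAuthorization) (ExpressRouteCircuitAuthorization, error) {

	future, err := client.CreateOrUpdate(ctx, resourceGroup, circuitName, authName, auth) // assumed signature
	if err != nil {
		return ExpressRouteCircuitAuthorization{}, err
	}
	// Result returns an AsyncOpIncomplete error if the operation has not finished,
	// so block on the poller first.
	if err := future.WaitForCompletionRef(ctx, client.Client); err != nil { // assumed helper
		return ExpressRouteCircuitAuthorization{}, err
	}
	return future.Result(client)
}
```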
+func (future *ExpressRouteCircuitAuthorizationsDeleteFuture) Result(client ExpressRouteCircuitAuthorizationsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitAuthorizationsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitAuthorizationsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// ExpressRouteCircuitListResult response for ListExpressRouteCircuit API service call. +type ExpressRouteCircuitListResult struct { + autorest.Response `json:"-"` + // Value - A list of ExpressRouteCircuits in a resource group. + Value *[]ExpressRouteCircuit `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ExpressRouteCircuitListResultIterator provides access to a complete listing of ExpressRouteCircuit values. +type ExpressRouteCircuitListResultIterator struct { + i int + page ExpressRouteCircuitListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ExpressRouteCircuitListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ExpressRouteCircuitListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ExpressRouteCircuitListResultIterator) Response() ExpressRouteCircuitListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ExpressRouteCircuitListResultIterator) Value() ExpressRouteCircuit { + if !iter.page.NotDone() { + return ExpressRouteCircuit{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (erclr ExpressRouteCircuitListResult) IsEmpty() bool { + return erclr.Value == nil || len(*erclr.Value) == 0 +} + +// expressRouteCircuitListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (erclr ExpressRouteCircuitListResult) expressRouteCircuitListResultPreparer() (*http.Request, error) { + if erclr.NextLink == nil || len(to.String(erclr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(erclr.NextLink))) +} + +// ExpressRouteCircuitListResultPage contains a page of ExpressRouteCircuit values. +type ExpressRouteCircuitListResultPage struct { + fn func(ExpressRouteCircuitListResult) (ExpressRouteCircuitListResult, error) + erclr ExpressRouteCircuitListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. 
+func (page *ExpressRouteCircuitListResultPage) Next() error { + next, err := page.fn(page.erclr) + if err != nil { + return err + } + page.erclr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ExpressRouteCircuitListResultPage) NotDone() bool { + return !page.erclr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ExpressRouteCircuitListResultPage) Response() ExpressRouteCircuitListResult { + return page.erclr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ExpressRouteCircuitListResultPage) Values() []ExpressRouteCircuit { + if page.erclr.IsEmpty() { + return nil + } + return *page.erclr.Value +} + +// ExpressRouteCircuitPeering peering in an ExpressRouteCircuit resource. +type ExpressRouteCircuitPeering struct { + autorest.Response `json:"-"` + *ExpressRouteCircuitPeeringPropertiesFormat `json:"properties,omitempty"` + // Name - Gets name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ExpressRouteCircuitPeering. +func (ercp ExpressRouteCircuitPeering) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ercp.ExpressRouteCircuitPeeringPropertiesFormat != nil { + objectMap["properties"] = ercp.ExpressRouteCircuitPeeringPropertiesFormat + } + if ercp.Name != nil { + objectMap["name"] = ercp.Name + } + if ercp.Etag != nil { + objectMap["etag"] = ercp.Etag + } + if ercp.ID != nil { + objectMap["id"] = ercp.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ExpressRouteCircuitPeering struct. +func (ercp *ExpressRouteCircuitPeering) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var expressRouteCircuitPeeringPropertiesFormat ExpressRouteCircuitPeeringPropertiesFormat + err = json.Unmarshal(*v, &expressRouteCircuitPeeringPropertiesFormat) + if err != nil { + return err + } + ercp.ExpressRouteCircuitPeeringPropertiesFormat = &expressRouteCircuitPeeringPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + ercp.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + ercp.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + ercp.ID = &ID + } + } + } + + return nil +} + +// ExpressRouteCircuitPeeringConfig specifies the peering configuration. +type ExpressRouteCircuitPeeringConfig struct { + // AdvertisedPublicPrefixes - The reference of AdvertisedPublicPrefixes. + AdvertisedPublicPrefixes *[]string `json:"advertisedPublicPrefixes,omitempty"` + // AdvertisedCommunities - The communities of BGP peering. Specified for Microsoft peering + AdvertisedCommunities *[]string `json:"advertisedCommunities,omitempty"` + // AdvertisedPublicPrefixesState - AdvertisedPublicPrefixState of the Peering resource.
Possible values are 'NotConfigured', 'Configuring', 'Configured', and 'ValidationNeeded'. Possible values include: 'NotConfigured', 'Configuring', 'Configured', 'ValidationNeeded' + AdvertisedPublicPrefixesState ExpressRouteCircuitPeeringAdvertisedPublicPrefixState `json:"advertisedPublicPrefixesState,omitempty"` + // LegacyMode - The legacy mode of the peering. + LegacyMode *int32 `json:"legacyMode,omitempty"` + // CustomerASN - The CustomerASN of the peering. + CustomerASN *int32 `json:"customerASN,omitempty"` + // RoutingRegistryName - The RoutingRegistryName of the configuration. + RoutingRegistryName *string `json:"routingRegistryName,omitempty"` +} + +// ExpressRouteCircuitPeeringListResult response for ListPeering API service call retrieves all peerings that +// belong to an ExpressRouteCircuit. +type ExpressRouteCircuitPeeringListResult struct { + autorest.Response `json:"-"` + // Value - The peerings in an express route circuit. + Value *[]ExpressRouteCircuitPeering `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ExpressRouteCircuitPeeringListResultIterator provides access to a complete listing of ExpressRouteCircuitPeering +// values. +type ExpressRouteCircuitPeeringListResultIterator struct { + i int + page ExpressRouteCircuitPeeringListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ExpressRouteCircuitPeeringListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ExpressRouteCircuitPeeringListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ExpressRouteCircuitPeeringListResultIterator) Response() ExpressRouteCircuitPeeringListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ExpressRouteCircuitPeeringListResultIterator) Value() ExpressRouteCircuitPeering { + if !iter.page.NotDone() { + return ExpressRouteCircuitPeering{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (ercplr ExpressRouteCircuitPeeringListResult) IsEmpty() bool { + return ercplr.Value == nil || len(*ercplr.Value) == 0 +} + +// expressRouteCircuitPeeringListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (ercplr ExpressRouteCircuitPeeringListResult) expressRouteCircuitPeeringListResultPreparer() (*http.Request, error) { + if ercplr.NextLink == nil || len(to.String(ercplr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(ercplr.NextLink))) +} + +// ExpressRouteCircuitPeeringListResultPage contains a page of ExpressRouteCircuitPeering values. 
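ExpressRouteCircuitPeeringConfig above carries the Microsoft-peering-specific settings (advertised public prefixes, customer ASN, routing registry); it is attached to a peering through the MicrosoftPeeringConfig field of ExpressRouteCircuitPeeringPropertiesFormat, defined further below. A hypothetical sketch of filling it in, written as if in the same package:

```
// Illustrative sketch (same package; all values are hypothetical).
package network

func exampleMicrosoftPeeringConfig() ExpressRouteCircuitPeeringConfig {
	prefixes := []string{"203.0.113.0/30"} // hypothetical advertised public prefix
	customerASN := int32(65010)            // hypothetical customer ASN
	registry := "ARIN"                     // hypothetical routing registry name

	return ExpressRouteCircuitPeeringConfig{
		AdvertisedPublicPrefixes: &prefixes,
		CustomerASN:              &customerASN,
		RoutingRegistryName:      &registry,
	}
}
```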
+type ExpressRouteCircuitPeeringListResultPage struct { + fn func(ExpressRouteCircuitPeeringListResult) (ExpressRouteCircuitPeeringListResult, error) + ercplr ExpressRouteCircuitPeeringListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ExpressRouteCircuitPeeringListResultPage) Next() error { + next, err := page.fn(page.ercplr) + if err != nil { + return err + } + page.ercplr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ExpressRouteCircuitPeeringListResultPage) NotDone() bool { + return !page.ercplr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ExpressRouteCircuitPeeringListResultPage) Response() ExpressRouteCircuitPeeringListResult { + return page.ercplr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ExpressRouteCircuitPeeringListResultPage) Values() []ExpressRouteCircuitPeering { + if page.ercplr.IsEmpty() { + return nil + } + return *page.ercplr.Value +} + +// ExpressRouteCircuitPeeringPropertiesFormat ... +type ExpressRouteCircuitPeeringPropertiesFormat struct { + // PeeringType - The PeeringType. Possible values are: 'AzurePublicPeering', 'AzurePrivatePeering', and 'MicrosoftPeering'. Possible values include: 'AzurePublicPeering', 'AzurePrivatePeering', 'MicrosoftPeering' + PeeringType ExpressRouteCircuitPeeringType `json:"peeringType,omitempty"` + // State - The state of peering. Possible values are: 'Disabled' and 'Enabled'. Possible values include: 'ExpressRouteCircuitPeeringStateDisabled', 'ExpressRouteCircuitPeeringStateEnabled' + State ExpressRouteCircuitPeeringState `json:"state,omitempty"` + // AzureASN - The Azure ASN. + AzureASN *int32 `json:"azureASN,omitempty"` + // PeerASN - The peer ASN. + PeerASN *int64 `json:"peerASN,omitempty"` + // PrimaryPeerAddressPrefix - The primary address prefix. + PrimaryPeerAddressPrefix *string `json:"primaryPeerAddressPrefix,omitempty"` + // SecondaryPeerAddressPrefix - The secondary address prefix. + SecondaryPeerAddressPrefix *string `json:"secondaryPeerAddressPrefix,omitempty"` + // PrimaryAzurePort - The primary port. + PrimaryAzurePort *string `json:"primaryAzurePort,omitempty"` + // SecondaryAzurePort - The secondary port. + SecondaryAzurePort *string `json:"secondaryAzurePort,omitempty"` + // SharedKey - The shared key. + SharedKey *string `json:"sharedKey,omitempty"` + // VlanID - The VLAN ID. + VlanID *int32 `json:"vlanId,omitempty"` + // MicrosoftPeeringConfig - The Microsoft peering configuration. + MicrosoftPeeringConfig *ExpressRouteCircuitPeeringConfig `json:"microsoftPeeringConfig,omitempty"` + // Stats - Gets peering stats. + Stats *ExpressRouteCircuitStats `json:"stats,omitempty"` + // ProvisioningState - Gets the provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` + // GatewayManagerEtag - The GatewayManager Etag. + GatewayManagerEtag *string `json:"gatewayManagerEtag,omitempty"` + // LastModifiedBy - Gets whether the provider or the customer last modified the peering. + LastModifiedBy *string `json:"lastModifiedBy,omitempty"` + // RouteFilter - The reference of the RouteFilter resource. 
+ RouteFilter *RouteFilter `json:"routeFilter,omitempty"` + // Ipv6PeeringConfig - The IPv6 peering configuration. + Ipv6PeeringConfig *Ipv6ExpressRouteCircuitPeeringConfig `json:"ipv6PeeringConfig,omitempty"` +} + +// ExpressRouteCircuitPeeringsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ExpressRouteCircuitPeeringsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ExpressRouteCircuitPeeringsCreateOrUpdateFuture) Result(client ExpressRouteCircuitPeeringsClient) (ercp ExpressRouteCircuitPeering, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitPeeringsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ercp.Response.Response, err = future.GetResult(sender); err == nil && ercp.Response.Response.StatusCode != http.StatusNoContent { + ercp, err = client.CreateOrUpdateResponder(ercp.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsCreateOrUpdateFuture", "Result", ercp.Response.Response, "Failure responding to request") + } + } + return +} + +// ExpressRouteCircuitPeeringsDeleteFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ExpressRouteCircuitPeeringsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ExpressRouteCircuitPeeringsDeleteFuture) Result(client ExpressRouteCircuitPeeringsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitPeeringsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitPeeringsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// ExpressRouteCircuitPropertiesFormat properties of ExpressRouteCircuit. +type ExpressRouteCircuitPropertiesFormat struct { + // AllowClassicOperations - Allow classic operations + AllowClassicOperations *bool `json:"allowClassicOperations,omitempty"` + // CircuitProvisioningState - The CircuitProvisioningState state of the resource. + CircuitProvisioningState *string `json:"circuitProvisioningState,omitempty"` + // ServiceProviderProvisioningState - The ServiceProviderProvisioningState state of the resource. Possible values are 'NotProvisioned', 'Provisioning', 'Provisioned', and 'Deprovisioning'. Possible values include: 'NotProvisioned', 'Provisioning', 'Provisioned', 'Deprovisioning' + ServiceProviderProvisioningState ServiceProviderProvisioningState `json:"serviceProviderProvisioningState,omitempty"` + // Authorizations - The list of authorizations. + Authorizations *[]ExpressRouteCircuitAuthorization `json:"authorizations,omitempty"` + // Peerings - The list of peerings. 
+ Peerings *[]ExpressRouteCircuitPeering `json:"peerings,omitempty"` + // ServiceKey - The ServiceKey. + ServiceKey *string `json:"serviceKey,omitempty"` + // ServiceProviderNotes - The ServiceProviderNotes. + ServiceProviderNotes *string `json:"serviceProviderNotes,omitempty"` + // ServiceProviderProperties - The ServiceProviderProperties. + ServiceProviderProperties *ExpressRouteCircuitServiceProviderProperties `json:"serviceProviderProperties,omitempty"` + // ProvisioningState - Gets the provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` + // GatewayManagerEtag - The GatewayManager Etag. + GatewayManagerEtag *string `json:"gatewayManagerEtag,omitempty"` +} + +// ExpressRouteCircuitRoutesTable the routes table associated with the ExpressRouteCircuit +type ExpressRouteCircuitRoutesTable struct { + // NetworkProperty - network + NetworkProperty *string `json:"network,omitempty"` + // NextHop - nextHop + NextHop *string `json:"nextHop,omitempty"` + // LocPrf - locPrf + LocPrf *string `json:"locPrf,omitempty"` + // Weight - weight. + Weight *int32 `json:"weight,omitempty"` + // Path - path + Path *string `json:"path,omitempty"` +} + +// ExpressRouteCircuitRoutesTableSummary the routes table associated with the ExpressRouteCircuit. +type ExpressRouteCircuitRoutesTableSummary struct { + // Neighbor - Neighbor + Neighbor *string `json:"neighbor,omitempty"` + // V - BGP version number spoken to the neighbor. + V *int32 `json:"v,omitempty"` + // As - Autonomous system number. + As *int32 `json:"as,omitempty"` + // UpDown - The length of time that the BGP session has been in the Established state, or the current status if not in the Established state. + UpDown *string `json:"upDown,omitempty"` + // StatePfxRcd - Current state of the BGP session, and the number of prefixes that have been received from a neighbor or peer group. + StatePfxRcd *string `json:"statePfxRcd,omitempty"` +} + +// ExpressRouteCircuitsArpTableListResult response for ListArpTable associated with the Express Route Circuits API. +type ExpressRouteCircuitsArpTableListResult struct { + autorest.Response `json:"-"` + // Value - Gets list of the ARP table. + Value *[]ExpressRouteCircuitArpTable `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ExpressRouteCircuitsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ExpressRouteCircuitsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *ExpressRouteCircuitsCreateOrUpdateFuture) Result(client ExpressRouteCircuitsClient) (erc ExpressRouteCircuit, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if erc.Response.Response, err = future.GetResult(sender); err == nil && erc.Response.Response.StatusCode != http.StatusNoContent { + erc, err = client.CreateOrUpdateResponder(erc.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsCreateOrUpdateFuture", "Result", erc.Response.Response, "Failure responding to request") + } + } + return +} + +// ExpressRouteCircuitsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ExpressRouteCircuitsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ExpressRouteCircuitsDeleteFuture) Result(client ExpressRouteCircuitsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// ExpressRouteCircuitServiceProviderProperties contains ServiceProviderProperties in an ExpressRouteCircuit. +type ExpressRouteCircuitServiceProviderProperties struct { + // ServiceProviderName - The serviceProviderName. + ServiceProviderName *string `json:"serviceProviderName,omitempty"` + // PeeringLocation - The peering location. + PeeringLocation *string `json:"peeringLocation,omitempty"` + // BandwidthInMbps - The BandwidthInMbps. + BandwidthInMbps *int32 `json:"bandwidthInMbps,omitempty"` +} + +// ExpressRouteCircuitSku contains SKU in an ExpressRouteCircuit. +type ExpressRouteCircuitSku struct { + // Name - The name of the SKU. + Name *string `json:"name,omitempty"` + // Tier - The tier of the SKU. Possible values are 'Standard' and 'Premium'. Possible values include: 'ExpressRouteCircuitSkuTierStandard', 'ExpressRouteCircuitSkuTierPremium' + Tier ExpressRouteCircuitSkuTier `json:"tier,omitempty"` + // Family - The family of the SKU. Possible values are: 'UnlimitedData' and 'MeteredData'. Possible values include: 'UnlimitedData', 'MeteredData' + Family ExpressRouteCircuitSkuFamily `json:"family,omitempty"` +} + +// ExpressRouteCircuitsListArpTableFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ExpressRouteCircuitsListArpTableFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *ExpressRouteCircuitsListArpTableFuture) Result(client ExpressRouteCircuitsClient) (ercatlr ExpressRouteCircuitsArpTableListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsListArpTableFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitsListArpTableFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ercatlr.Response.Response, err = future.GetResult(sender); err == nil && ercatlr.Response.Response.StatusCode != http.StatusNoContent { + ercatlr, err = client.ListArpTableResponder(ercatlr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsListArpTableFuture", "Result", ercatlr.Response.Response, "Failure responding to request") + } + } + return +} + +// ExpressRouteCircuitsListRoutesTableFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ExpressRouteCircuitsListRoutesTableFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ExpressRouteCircuitsListRoutesTableFuture) Result(client ExpressRouteCircuitsClient) (ercrtlr ExpressRouteCircuitsRoutesTableListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsListRoutesTableFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitsListRoutesTableFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ercrtlr.Response.Response, err = future.GetResult(sender); err == nil && ercrtlr.Response.Response.StatusCode != http.StatusNoContent { + ercrtlr, err = client.ListRoutesTableResponder(ercrtlr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsListRoutesTableFuture", "Result", ercrtlr.Response.Response, "Failure responding to request") + } + } + return +} + +// ExpressRouteCircuitsListRoutesTableSummaryFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type ExpressRouteCircuitsListRoutesTableSummaryFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *ExpressRouteCircuitsListRoutesTableSummaryFuture) Result(client ExpressRouteCircuitsClient) (ercrtslr ExpressRouteCircuitsRoutesTableSummaryListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsListRoutesTableSummaryFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitsListRoutesTableSummaryFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ercrtslr.Response.Response, err = future.GetResult(sender); err == nil && ercrtslr.Response.Response.StatusCode != http.StatusNoContent { + ercrtslr, err = client.ListRoutesTableSummaryResponder(ercrtslr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsListRoutesTableSummaryFuture", "Result", ercrtslr.Response.Response, "Failure responding to request") + } + } + return +} + +// ExpressRouteCircuitsRoutesTableListResult response for ListRoutesTable associated with the Express Route +// Circuits API. +type ExpressRouteCircuitsRoutesTableListResult struct { + autorest.Response `json:"-"` + // Value - The list of routes table. + Value *[]ExpressRouteCircuitRoutesTable `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ExpressRouteCircuitsRoutesTableSummaryListResult response for ListRoutesTable associated with the Express Route +// Circuits API. +type ExpressRouteCircuitsRoutesTableSummaryListResult struct { + autorest.Response `json:"-"` + // Value - A list of the routes table. + Value *[]ExpressRouteCircuitRoutesTableSummary `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ExpressRouteCircuitStats contains stats associated with the peering. +type ExpressRouteCircuitStats struct { + autorest.Response `json:"-"` + // PrimarybytesIn - Gets BytesIn of the peering. + PrimarybytesIn *int64 `json:"primarybytesIn,omitempty"` + // PrimarybytesOut - Gets BytesOut of the peering. + PrimarybytesOut *int64 `json:"primarybytesOut,omitempty"` + // SecondarybytesIn - Gets BytesIn of the peering. + SecondarybytesIn *int64 `json:"secondarybytesIn,omitempty"` + // SecondarybytesOut - Gets BytesOut of the peering. + SecondarybytesOut *int64 `json:"secondarybytesOut,omitempty"` +} + +// ExpressRouteCircuitsUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ExpressRouteCircuitsUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *ExpressRouteCircuitsUpdateTagsFuture) Result(client ExpressRouteCircuitsClient) (erc ExpressRouteCircuit, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.ExpressRouteCircuitsUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if erc.Response.Response, err = future.GetResult(sender); err == nil && erc.Response.Response.StatusCode != http.StatusNoContent { + erc, err = client.UpdateTagsResponder(erc.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.ExpressRouteCircuitsUpdateTagsFuture", "Result", erc.Response.Response, "Failure responding to request") + } + } + return +} + +// ExpressRouteServiceProvider a ExpressRouteResourceProvider object. +type ExpressRouteServiceProvider struct { + *ExpressRouteServiceProviderPropertiesFormat `json:"properties,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for ExpressRouteServiceProvider. +func (ersp ExpressRouteServiceProvider) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ersp.ExpressRouteServiceProviderPropertiesFormat != nil { + objectMap["properties"] = ersp.ExpressRouteServiceProviderPropertiesFormat + } + if ersp.ID != nil { + objectMap["id"] = ersp.ID + } + if ersp.Name != nil { + objectMap["name"] = ersp.Name + } + if ersp.Type != nil { + objectMap["type"] = ersp.Type + } + if ersp.Location != nil { + objectMap["location"] = ersp.Location + } + if ersp.Tags != nil { + objectMap["tags"] = ersp.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ExpressRouteServiceProvider struct. 
+func (ersp *ExpressRouteServiceProvider) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var expressRouteServiceProviderPropertiesFormat ExpressRouteServiceProviderPropertiesFormat + err = json.Unmarshal(*v, &expressRouteServiceProviderPropertiesFormat) + if err != nil { + return err + } + ersp.ExpressRouteServiceProviderPropertiesFormat = &expressRouteServiceProviderPropertiesFormat + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + ersp.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + ersp.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + ersp.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + ersp.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + ersp.Tags = tags + } + } + } + + return nil +} + +// ExpressRouteServiceProviderBandwidthsOffered contains bandwidths offered in ExpressRouteServiceProvider +// resources. +type ExpressRouteServiceProviderBandwidthsOffered struct { + // OfferName - The OfferName. + OfferName *string `json:"offerName,omitempty"` + // ValueInMbps - The ValueInMbps. + ValueInMbps *int32 `json:"valueInMbps,omitempty"` +} + +// ExpressRouteServiceProviderListResult response for the ListExpressRouteServiceProvider API service call. +type ExpressRouteServiceProviderListResult struct { + autorest.Response `json:"-"` + // Value - A list of ExpressRouteResourceProvider resources. + Value *[]ExpressRouteServiceProvider `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ExpressRouteServiceProviderListResultIterator provides access to a complete listing of +// ExpressRouteServiceProvider values. +type ExpressRouteServiceProviderListResultIterator struct { + i int + page ExpressRouteServiceProviderListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ExpressRouteServiceProviderListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ExpressRouteServiceProviderListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ExpressRouteServiceProviderListResultIterator) Response() ExpressRouteServiceProviderListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. 
+func (iter ExpressRouteServiceProviderListResultIterator) Value() ExpressRouteServiceProvider { + if !iter.page.NotDone() { + return ExpressRouteServiceProvider{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (ersplr ExpressRouteServiceProviderListResult) IsEmpty() bool { + return ersplr.Value == nil || len(*ersplr.Value) == 0 +} + +// expressRouteServiceProviderListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (ersplr ExpressRouteServiceProviderListResult) expressRouteServiceProviderListResultPreparer() (*http.Request, error) { + if ersplr.NextLink == nil || len(to.String(ersplr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(ersplr.NextLink))) +} + +// ExpressRouteServiceProviderListResultPage contains a page of ExpressRouteServiceProvider values. +type ExpressRouteServiceProviderListResultPage struct { + fn func(ExpressRouteServiceProviderListResult) (ExpressRouteServiceProviderListResult, error) + ersplr ExpressRouteServiceProviderListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ExpressRouteServiceProviderListResultPage) Next() error { + next, err := page.fn(page.ersplr) + if err != nil { + return err + } + page.ersplr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ExpressRouteServiceProviderListResultPage) NotDone() bool { + return !page.ersplr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ExpressRouteServiceProviderListResultPage) Response() ExpressRouteServiceProviderListResult { + return page.ersplr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ExpressRouteServiceProviderListResultPage) Values() []ExpressRouteServiceProvider { + if page.ersplr.IsEmpty() { + return nil + } + return *page.ersplr.Value +} + +// ExpressRouteServiceProviderPropertiesFormat properties of ExpressRouteServiceProvider. +type ExpressRouteServiceProviderPropertiesFormat struct { + // PeeringLocations - Get a list of peering locations. + PeeringLocations *[]string `json:"peeringLocations,omitempty"` + // BandwidthsOffered - Gets bandwidths offered. + BandwidthsOffered *[]ExpressRouteServiceProviderBandwidthsOffered `json:"bandwidthsOffered,omitempty"` + // ProvisioningState - Gets the provisioning state of the resource. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// FlowLogInformation information on the configuration of flow log and traffic analytics (optional). +type FlowLogInformation struct { + autorest.Response `json:"-"` + // TargetResourceID - The ID of the resource to configure for flow logging. + TargetResourceID *string `json:"targetResourceId,omitempty"` + *FlowLogProperties `json:"properties,omitempty"` + *TrafficAnalyticsProperties `json:"flowAnalyticsConfiguration,omitempty"` +} + +// MarshalJSON is the custom marshaler for FlowLogInformation. 
+func (fli FlowLogInformation) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if fli.TargetResourceID != nil { + objectMap["targetResourceId"] = fli.TargetResourceID + } + if fli.FlowLogProperties != nil { + objectMap["properties"] = fli.FlowLogProperties + } + if fli.TrafficAnalyticsProperties != nil { + objectMap["flowAnalyticsConfiguration"] = fli.TrafficAnalyticsProperties + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for FlowLogInformation struct. +func (fli *FlowLogInformation) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "targetResourceId": + if v != nil { + var targetResourceID string + err = json.Unmarshal(*v, &targetResourceID) + if err != nil { + return err + } + fli.TargetResourceID = &targetResourceID + } + case "properties": + if v != nil { + var flowLogProperties FlowLogProperties + err = json.Unmarshal(*v, &flowLogProperties) + if err != nil { + return err + } + fli.FlowLogProperties = &flowLogProperties + } + case "flowAnalyticsConfiguration": + if v != nil { + var trafficAnalyticsProperties TrafficAnalyticsProperties + err = json.Unmarshal(*v, &trafficAnalyticsProperties) + if err != nil { + return err + } + fli.TrafficAnalyticsProperties = &trafficAnalyticsProperties + } + } + } + + return nil +} + +// FlowLogProperties parameters that define the configuration of flow log. +type FlowLogProperties struct { + // StorageID - ID of the storage account which is used to store the flow log. + StorageID *string `json:"storageId,omitempty"` + // Enabled - Flag to enable/disable flow logging. + Enabled *bool `json:"enabled,omitempty"` + RetentionPolicy *RetentionPolicyParameters `json:"retentionPolicy,omitempty"` +} + +// FlowLogStatusParameters parameters that define a resource to query flow log and traffic analytics (optional) +// status. +type FlowLogStatusParameters struct { + // TargetResourceID - The target resource where getting the flow logging and traffic analytics (optional) status. + TargetResourceID *string `json:"targetResourceId,omitempty"` +} + +// FrontendIPConfiguration frontend IP address of the load balancer. +type FrontendIPConfiguration struct { + autorest.Response `json:"-"` + // FrontendIPConfigurationPropertiesFormat - Properties of the load balancer probe. + *FrontendIPConfigurationPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Zones - A list of availability zones denoting the IP allocated for the resource needs to come from. + Zones *[]string `json:"zones,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for FrontendIPConfiguration. 
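FlowLogInformation and FlowLogProperties above form the payload used to switch NSG flow logging on or off for a target resource. A hypothetical sketch of assembling that payload (same package as these models; the resource IDs are placeholders, and handing the payload to the watcher client happens outside this file):

```
// Illustrative sketch (same package; resource IDs are hypothetical placeholders).
package network

func exampleFlowLogPayload() FlowLogInformation {
	target := "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/networkSecurityGroups/<nsg>"
	storage := "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
	enabled := true

	return FlowLogInformation{
		TargetResourceID: &target,
		FlowLogProperties: &FlowLogProperties{
			StorageID: &storage,
			Enabled:   &enabled,
			// RetentionPolicy is optional and omitted here.
		},
	}
}
```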
+func (fic FrontendIPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if fic.FrontendIPConfigurationPropertiesFormat != nil { + objectMap["properties"] = fic.FrontendIPConfigurationPropertiesFormat + } + if fic.Name != nil { + objectMap["name"] = fic.Name + } + if fic.Etag != nil { + objectMap["etag"] = fic.Etag + } + if fic.Zones != nil { + objectMap["zones"] = fic.Zones + } + if fic.ID != nil { + objectMap["id"] = fic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for FrontendIPConfiguration struct. +func (fic *FrontendIPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var frontendIPConfigurationPropertiesFormat FrontendIPConfigurationPropertiesFormat + err = json.Unmarshal(*v, &frontendIPConfigurationPropertiesFormat) + if err != nil { + return err + } + fic.FrontendIPConfigurationPropertiesFormat = &frontendIPConfigurationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + fic.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + fic.Etag = &etag + } + case "zones": + if v != nil { + var zones []string + err = json.Unmarshal(*v, &zones) + if err != nil { + return err + } + fic.Zones = &zones + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + fic.ID = &ID + } + } + } + + return nil +} + +// FrontendIPConfigurationPropertiesFormat properties of Frontend IP Configuration of the load balancer. +type FrontendIPConfigurationPropertiesFormat struct { + // InboundNatRules - Read only. Inbound rules URIs that use this frontend IP. + InboundNatRules *[]SubResource `json:"inboundNatRules,omitempty"` + // InboundNatPools - Read only. Inbound pools URIs that use this frontend IP. + InboundNatPools *[]SubResource `json:"inboundNatPools,omitempty"` + // OutboundNatRules - Read only. Outbound rules URIs that use this frontend IP. + OutboundNatRules *[]SubResource `json:"outboundNatRules,omitempty"` + // LoadBalancingRules - Gets load balancing rules URIs that use this frontend IP. + LoadBalancingRules *[]SubResource `json:"loadBalancingRules,omitempty"` + // PrivateIPAddress - The private IP address of the IP configuration. + PrivateIPAddress *string `json:"privateIPAddress,omitempty"` + // PrivateIPAllocationMethod - The Private IP allocation method. Possible values are: 'Static' and 'Dynamic'. Possible values include: 'Static', 'Dynamic' + PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` + // Subnet - The reference of the subnet resource. + Subnet *Subnet `json:"subnet,omitempty"` + // PublicIPAddress - The reference of the Public IP resource. + PublicIPAddress *PublicIPAddress `json:"publicIPAddress,omitempty"` + // ProvisioningState - Gets the provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. 
+ ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// GatewayRoute gateway routing details +type GatewayRoute struct { + // LocalAddress - The gateway's local address + LocalAddress *string `json:"localAddress,omitempty"` + // NetworkProperty - The route's network prefix + NetworkProperty *string `json:"network,omitempty"` + // NextHop - The route's next hop + NextHop *string `json:"nextHop,omitempty"` + // SourcePeer - The peer this route was learned from + SourcePeer *string `json:"sourcePeer,omitempty"` + // Origin - The source this route was learned from + Origin *string `json:"origin,omitempty"` + // AsPath - The route's AS path sequence + AsPath *string `json:"asPath,omitempty"` + // Weight - The route's weight + Weight *int32 `json:"weight,omitempty"` +} + +// GatewayRouteListResult list of virtual network gateway routes +type GatewayRouteListResult struct { + autorest.Response `json:"-"` + // Value - List of gateway routes + Value *[]GatewayRoute `json:"value,omitempty"` +} + +// InboundNatPool inbound NAT pool of the load balancer. +type InboundNatPool struct { + // InboundNatPoolPropertiesFormat - Properties of load balancer inbound nat pool. + *InboundNatPoolPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for InboundNatPool. +func (inp InboundNatPool) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if inp.InboundNatPoolPropertiesFormat != nil { + objectMap["properties"] = inp.InboundNatPoolPropertiesFormat + } + if inp.Name != nil { + objectMap["name"] = inp.Name + } + if inp.Etag != nil { + objectMap["etag"] = inp.Etag + } + if inp.ID != nil { + objectMap["id"] = inp.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for InboundNatPool struct. +func (inp *InboundNatPool) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var inboundNatPoolPropertiesFormat InboundNatPoolPropertiesFormat + err = json.Unmarshal(*v, &inboundNatPoolPropertiesFormat) + if err != nil { + return err + } + inp.InboundNatPoolPropertiesFormat = &inboundNatPoolPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + inp.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + inp.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + inp.ID = &ID + } + } + } + + return nil +} + +// InboundNatPoolPropertiesFormat properties of Inbound NAT pool. +type InboundNatPoolPropertiesFormat struct { + // FrontendIPConfiguration - A reference to frontend IP addresses. 
+ FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` + // Protocol - Possible values include: 'TransportProtocolUDP', 'TransportProtocolTCP', 'TransportProtocolAll' + Protocol TransportProtocol `json:"protocol,omitempty"` + // FrontendPortRangeStart - The first port number in the range of external ports that will be used to provide Inbound Nat to NICs associated with a load balancer. Acceptable values range between 1 and 65534. + FrontendPortRangeStart *int32 `json:"frontendPortRangeStart,omitempty"` + // FrontendPortRangeEnd - The last port number in the range of external ports that will be used to provide Inbound Nat to NICs associated with a load balancer. Acceptable values range between 1 and 65535. + FrontendPortRangeEnd *int32 `json:"frontendPortRangeEnd,omitempty"` + // BackendPort - The port used for internal connections on the endpoint. Acceptable values are between 1 and 65535. + BackendPort *int32 `json:"backendPort,omitempty"` + // IdleTimeoutInMinutes - The timeout for the TCP idle connection. The value can be set between 4 and 30 minutes. The default value is 4 minutes. This element is only used when the protocol is set to TCP. + IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` + // EnableFloatingIP - Configures a virtual machine's endpoint for the floating IP capability required to configure a SQL AlwaysOn Availability Group. This setting is required when using the SQL AlwaysOn Availability Groups in SQL server. This setting can't be changed after you create the endpoint. + EnableFloatingIP *bool `json:"enableFloatingIP,omitempty"` + // ProvisioningState - Gets the provisioning state of the PublicIP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// InboundNatRule inbound NAT rule of the load balancer. +type InboundNatRule struct { + autorest.Response `json:"-"` + // InboundNatRulePropertiesFormat - Properties of load balancer inbound nat rule. + *InboundNatRulePropertiesFormat `json:"properties,omitempty"` + // Name - Gets name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for InboundNatRule. +func (inr InboundNatRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if inr.InboundNatRulePropertiesFormat != nil { + objectMap["properties"] = inr.InboundNatRulePropertiesFormat + } + if inr.Name != nil { + objectMap["name"] = inr.Name + } + if inr.Etag != nil { + objectMap["etag"] = inr.Etag + } + if inr.ID != nil { + objectMap["id"] = inr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for InboundNatRule struct. 
+func (inr *InboundNatRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var inboundNatRulePropertiesFormat InboundNatRulePropertiesFormat + err = json.Unmarshal(*v, &inboundNatRulePropertiesFormat) + if err != nil { + return err + } + inr.InboundNatRulePropertiesFormat = &inboundNatRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + inr.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + inr.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + inr.ID = &ID + } + } + } + + return nil +} + +// InboundNatRuleListResult response for ListInboundNatRule API service call. +type InboundNatRuleListResult struct { + autorest.Response `json:"-"` + // Value - A list of inbound nat rules in a load balancer. + Value *[]InboundNatRule `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// InboundNatRuleListResultIterator provides access to a complete listing of InboundNatRule values. +type InboundNatRuleListResultIterator struct { + i int + page InboundNatRuleListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *InboundNatRuleListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter InboundNatRuleListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter InboundNatRuleListResultIterator) Response() InboundNatRuleListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter InboundNatRuleListResultIterator) Value() InboundNatRule { + if !iter.page.NotDone() { + return InboundNatRule{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (inrlr InboundNatRuleListResult) IsEmpty() bool { + return inrlr.Value == nil || len(*inrlr.Value) == 0 +} + +// inboundNatRuleListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (inrlr InboundNatRuleListResult) inboundNatRuleListResultPreparer() (*http.Request, error) { + if inrlr.NextLink == nil || len(to.String(inrlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(inrlr.NextLink))) +} + +// InboundNatRuleListResultPage contains a page of InboundNatRule values. +type InboundNatRuleListResultPage struct { + fn func(InboundNatRuleListResult) (InboundNatRuleListResult, error) + inrlr InboundNatRuleListResult +} + +// Next advances to the next page of values. 
If there was an error making +// the request the page does not advance and the error is returned. +func (page *InboundNatRuleListResultPage) Next() error { + next, err := page.fn(page.inrlr) + if err != nil { + return err + } + page.inrlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page InboundNatRuleListResultPage) NotDone() bool { + return !page.inrlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page InboundNatRuleListResultPage) Response() InboundNatRuleListResult { + return page.inrlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page InboundNatRuleListResultPage) Values() []InboundNatRule { + if page.inrlr.IsEmpty() { + return nil + } + return *page.inrlr.Value +} + +// InboundNatRulePropertiesFormat properties of the inbound NAT rule. +type InboundNatRulePropertiesFormat struct { + // FrontendIPConfiguration - A reference to frontend IP addresses. + FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` + // BackendIPConfiguration - A reference to a private IP address defined on a network interface of a VM. Traffic sent to the frontend port of each of the frontend IP configurations is forwarded to the backend IP. + BackendIPConfiguration *InterfaceIPConfiguration `json:"backendIPConfiguration,omitempty"` + // Protocol - Possible values include: 'TransportProtocolUDP', 'TransportProtocolTCP', 'TransportProtocolAll' + Protocol TransportProtocol `json:"protocol,omitempty"` + // FrontendPort - The port for the external endpoint. Port numbers for each rule must be unique within the Load Balancer. Acceptable values range from 1 to 65534. + FrontendPort *int32 `json:"frontendPort,omitempty"` + // BackendPort - The port used for the internal endpoint. Acceptable values range from 1 to 65535. + BackendPort *int32 `json:"backendPort,omitempty"` + // IdleTimeoutInMinutes - The timeout for the TCP idle connection. The value can be set between 4 and 30 minutes. The default value is 4 minutes. This element is only used when the protocol is set to TCP. + IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` + // EnableFloatingIP - Configures a virtual machine's endpoint for the floating IP capability required to configure a SQL AlwaysOn Availability Group. This setting is required when using the SQL AlwaysOn Availability Groups in SQL server. This setting can't be changed after you create the endpoint. + EnableFloatingIP *bool `json:"enableFloatingIP,omitempty"` + // ProvisioningState - Gets the provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// InboundNatRulesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type InboundNatRulesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
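+
+// Editorial note (illustrative, not part of the generated SDK): the *Future types in this file wrap
+// azure.Future and surface a long-running operation's outcome through their Result methods. A
+// simplified polling sketch, assuming "future" is an InboundNatRulesCreateOrUpdateFuture returned by
+// the client's CreateOrUpdate call and "client" is a configured InboundNatRulesClient (in practice
+// the SDK's wait helpers are normally used instead of a hand-rolled loop):
+//
+//	for {
+//		done, err := future.Done(client)
+//		if err != nil {
+//			return err
+//		}
+//		if done {
+//			break
+//		}
+//		time.Sleep(10 * time.Second)
+//	}
+//	rule, err := future.Result(client) // the finished InboundNatRule, or an error
+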
+func (future *InboundNatRulesCreateOrUpdateFuture) Result(client InboundNatRulesClient) (inr InboundNatRule, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.InboundNatRulesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if inr.Response.Response, err = future.GetResult(sender); err == nil && inr.Response.Response.StatusCode != http.StatusNoContent { + inr, err = client.CreateOrUpdateResponder(inr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesCreateOrUpdateFuture", "Result", inr.Response.Response, "Failure responding to request") + } + } + return +} + +// InboundNatRulesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type InboundNatRulesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *InboundNatRulesDeleteFuture) Result(client InboundNatRulesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InboundNatRulesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.InboundNatRulesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// Interface a network interface in a resource group. +type Interface struct { + autorest.Response `json:"-"` + // InterfacePropertiesFormat - Properties of the network interface. + *InterfacePropertiesFormat `json:"properties,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Interface. +func (i Interface) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if i.InterfacePropertiesFormat != nil { + objectMap["properties"] = i.InterfacePropertiesFormat + } + if i.Etag != nil { + objectMap["etag"] = i.Etag + } + if i.ID != nil { + objectMap["id"] = i.ID + } + if i.Name != nil { + objectMap["name"] = i.Name + } + if i.Type != nil { + objectMap["type"] = i.Type + } + if i.Location != nil { + objectMap["location"] = i.Location + } + if i.Tags != nil { + objectMap["tags"] = i.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Interface struct. 
+func (i *Interface) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var interfacePropertiesFormat InterfacePropertiesFormat + err = json.Unmarshal(*v, &interfacePropertiesFormat) + if err != nil { + return err + } + i.InterfacePropertiesFormat = &interfacePropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + i.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + i.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + i.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + i.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + i.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + i.Tags = tags + } + } + } + + return nil +} + +// InterfaceAssociation network interface and its custom security rules. +type InterfaceAssociation struct { + // ID - Network interface ID. + ID *string `json:"id,omitempty"` + // SecurityRules - Collection of custom security rules. + SecurityRules *[]SecurityRule `json:"securityRules,omitempty"` +} + +// InterfaceDNSSettings DNS settings of a network interface. +type InterfaceDNSSettings struct { + // DNSServers - List of DNS servers IP addresses. Use 'AzureProvidedDNS' to switch to azure provided DNS resolution. 'AzureProvidedDNS' value cannot be combined with other IPs, it must be the only value in dnsServers collection. + DNSServers *[]string `json:"dnsServers,omitempty"` + // AppliedDNSServers - If the VM that uses this NIC is part of an Availability Set, then this list will have the union of all DNS servers from all NICs that are part of the Availability Set. This property is what is configured on each of those VMs. + AppliedDNSServers *[]string `json:"appliedDnsServers,omitempty"` + // InternalDNSNameLabel - Relative DNS name for this NIC used for internal communications between VMs in the same virtual network. + InternalDNSNameLabel *string `json:"internalDnsNameLabel,omitempty"` + // InternalFqdn - Fully qualified DNS name supporting internal communications between VMs in the same virtual network. + InternalFqdn *string `json:"internalFqdn,omitempty"` + // InternalDomainNameSuffix - Even if internalDnsNameLabel is not specified, a DNS entry is created for the primary NIC of the VM. This DNS name can be constructed by concatenating the VM name with the value of internalDomainNameSuffix. + InternalDomainNameSuffix *string `json:"internalDomainNameSuffix,omitempty"` +} + +// InterfaceIPConfiguration iPConfiguration in a network interface. +type InterfaceIPConfiguration struct { + autorest.Response `json:"-"` + // InterfaceIPConfigurationPropertiesFormat - Network interface IP configuration properties. + *InterfaceIPConfigurationPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. 
+ Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for InterfaceIPConfiguration. +func (iic InterfaceIPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if iic.InterfaceIPConfigurationPropertiesFormat != nil { + objectMap["properties"] = iic.InterfaceIPConfigurationPropertiesFormat + } + if iic.Name != nil { + objectMap["name"] = iic.Name + } + if iic.Etag != nil { + objectMap["etag"] = iic.Etag + } + if iic.ID != nil { + objectMap["id"] = iic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for InterfaceIPConfiguration struct. +func (iic *InterfaceIPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var interfaceIPConfigurationPropertiesFormat InterfaceIPConfigurationPropertiesFormat + err = json.Unmarshal(*v, &interfaceIPConfigurationPropertiesFormat) + if err != nil { + return err + } + iic.InterfaceIPConfigurationPropertiesFormat = &interfaceIPConfigurationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + iic.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + iic.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + iic.ID = &ID + } + } + } + + return nil +} + +// InterfaceIPConfigurationListResult response for list ip configurations API service call. +type InterfaceIPConfigurationListResult struct { + autorest.Response `json:"-"` + // Value - A list of ip configurations. + Value *[]InterfaceIPConfiguration `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// InterfaceIPConfigurationListResultIterator provides access to a complete listing of InterfaceIPConfiguration +// values. +type InterfaceIPConfigurationListResultIterator struct { + i int + page InterfaceIPConfigurationListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *InterfaceIPConfigurationListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter InterfaceIPConfigurationListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter InterfaceIPConfigurationListResultIterator) Response() InterfaceIPConfigurationListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. 
+func (iter InterfaceIPConfigurationListResultIterator) Value() InterfaceIPConfiguration { + if !iter.page.NotDone() { + return InterfaceIPConfiguration{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (iiclr InterfaceIPConfigurationListResult) IsEmpty() bool { + return iiclr.Value == nil || len(*iiclr.Value) == 0 +} + +// interfaceIPConfigurationListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (iiclr InterfaceIPConfigurationListResult) interfaceIPConfigurationListResultPreparer() (*http.Request, error) { + if iiclr.NextLink == nil || len(to.String(iiclr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(iiclr.NextLink))) +} + +// InterfaceIPConfigurationListResultPage contains a page of InterfaceIPConfiguration values. +type InterfaceIPConfigurationListResultPage struct { + fn func(InterfaceIPConfigurationListResult) (InterfaceIPConfigurationListResult, error) + iiclr InterfaceIPConfigurationListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *InterfaceIPConfigurationListResultPage) Next() error { + next, err := page.fn(page.iiclr) + if err != nil { + return err + } + page.iiclr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page InterfaceIPConfigurationListResultPage) NotDone() bool { + return !page.iiclr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page InterfaceIPConfigurationListResultPage) Response() InterfaceIPConfigurationListResult { + return page.iiclr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page InterfaceIPConfigurationListResultPage) Values() []InterfaceIPConfiguration { + if page.iiclr.IsEmpty() { + return nil + } + return *page.iiclr.Value +} + +// InterfaceIPConfigurationPropertiesFormat properties of IP configuration. +type InterfaceIPConfigurationPropertiesFormat struct { + // ApplicationGatewayBackendAddressPools - The reference of ApplicationGatewayBackendAddressPool resource. + ApplicationGatewayBackendAddressPools *[]ApplicationGatewayBackendAddressPool `json:"applicationGatewayBackendAddressPools,omitempty"` + // LoadBalancerBackendAddressPools - The reference of LoadBalancerBackendAddressPool resource. + LoadBalancerBackendAddressPools *[]BackendAddressPool `json:"loadBalancerBackendAddressPools,omitempty"` + // LoadBalancerInboundNatRules - A list of references of LoadBalancerInboundNatRules. + LoadBalancerInboundNatRules *[]InboundNatRule `json:"loadBalancerInboundNatRules,omitempty"` + // PrivateIPAddress - Private IP address of the IP configuration. + PrivateIPAddress *string `json:"privateIPAddress,omitempty"` + // PrivateIPAllocationMethod - Defines how a private IP address is assigned. Possible values are: 'Static' and 'Dynamic'. Possible values include: 'Static', 'Dynamic' + PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` + // PrivateIPAddressVersion - Available from Api-Version 2016-03-30 onwards, it represents whether the specific ipconfiguration is IPv4 or IPv6. Default is taken as IPv4. Possible values are: 'IPv4' and 'IPv6'. 
Possible values include: 'IPv4', 'IPv6' + PrivateIPAddressVersion IPVersion `json:"privateIPAddressVersion,omitempty"` + // Subnet - Subnet bound to the IP configuration. + Subnet *Subnet `json:"subnet,omitempty"` + // Primary - Gets whether this is a primary customer address on the network interface. + Primary *bool `json:"primary,omitempty"` + // PublicIPAddress - Public IP address bound to the IP configuration. + PublicIPAddress *PublicIPAddress `json:"publicIPAddress,omitempty"` + // ApplicationSecurityGroups - Application security groups in which the IP configuration is included. + ApplicationSecurityGroups *[]ApplicationSecurityGroup `json:"applicationSecurityGroups,omitempty"` + // ProvisioningState - The provisioning state of the network interface IP configuration. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// InterfaceListResult response for the ListNetworkInterface API service call. +type InterfaceListResult struct { + autorest.Response `json:"-"` + // Value - A list of network interfaces in a resource group. + Value *[]Interface `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// InterfaceListResultIterator provides access to a complete listing of Interface values. +type InterfaceListResultIterator struct { + i int + page InterfaceListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *InterfaceListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter InterfaceListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter InterfaceListResultIterator) Response() InterfaceListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter InterfaceListResultIterator) Value() Interface { + if !iter.page.NotDone() { + return Interface{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (ilr InterfaceListResult) IsEmpty() bool { + return ilr.Value == nil || len(*ilr.Value) == 0 +} + +// interfaceListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (ilr InterfaceListResult) interfaceListResultPreparer() (*http.Request, error) { + if ilr.NextLink == nil || len(to.String(ilr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(ilr.NextLink))) +} + +// InterfaceListResultPage contains a page of Interface values. +type InterfaceListResultPage struct { + fn func(InterfaceListResult) (InterfaceListResult, error) + ilr InterfaceListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. 
+func (page *InterfaceListResultPage) Next() error { + next, err := page.fn(page.ilr) + if err != nil { + return err + } + page.ilr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page InterfaceListResultPage) NotDone() bool { + return !page.ilr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page InterfaceListResultPage) Response() InterfaceListResult { + return page.ilr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page InterfaceListResultPage) Values() []Interface { + if page.ilr.IsEmpty() { + return nil + } + return *page.ilr.Value +} + +// InterfaceLoadBalancerListResult response for list ip configurations API service call. +type InterfaceLoadBalancerListResult struct { + autorest.Response `json:"-"` + // Value - A list of load balancers. + Value *[]LoadBalancer `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// InterfaceLoadBalancerListResultIterator provides access to a complete listing of LoadBalancer values. +type InterfaceLoadBalancerListResultIterator struct { + i int + page InterfaceLoadBalancerListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *InterfaceLoadBalancerListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter InterfaceLoadBalancerListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter InterfaceLoadBalancerListResultIterator) Response() InterfaceLoadBalancerListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter InterfaceLoadBalancerListResultIterator) Value() LoadBalancer { + if !iter.page.NotDone() { + return LoadBalancer{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (ilblr InterfaceLoadBalancerListResult) IsEmpty() bool { + return ilblr.Value == nil || len(*ilblr.Value) == 0 +} + +// interfaceLoadBalancerListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (ilblr InterfaceLoadBalancerListResult) interfaceLoadBalancerListResultPreparer() (*http.Request, error) { + if ilblr.NextLink == nil || len(to.String(ilblr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(ilblr.NextLink))) +} + +// InterfaceLoadBalancerListResultPage contains a page of LoadBalancer values. +type InterfaceLoadBalancerListResultPage struct { + fn func(InterfaceLoadBalancerListResult) (InterfaceLoadBalancerListResult, error) + ilblr InterfaceLoadBalancerListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. 
+func (page *InterfaceLoadBalancerListResultPage) Next() error { + next, err := page.fn(page.ilblr) + if err != nil { + return err + } + page.ilblr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page InterfaceLoadBalancerListResultPage) NotDone() bool { + return !page.ilblr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page InterfaceLoadBalancerListResultPage) Response() InterfaceLoadBalancerListResult { + return page.ilblr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page InterfaceLoadBalancerListResultPage) Values() []LoadBalancer { + if page.ilblr.IsEmpty() { + return nil + } + return *page.ilblr.Value +} + +// InterfacePropertiesFormat networkInterface properties. +type InterfacePropertiesFormat struct { + // VirtualMachine - The reference of a virtual machine. + VirtualMachine *SubResource `json:"virtualMachine,omitempty"` + // NetworkSecurityGroup - The reference of the NetworkSecurityGroup resource. + NetworkSecurityGroup *SecurityGroup `json:"networkSecurityGroup,omitempty"` + // IPConfigurations - A list of IPConfigurations of the network interface. + IPConfigurations *[]InterfaceIPConfiguration `json:"ipConfigurations,omitempty"` + // DNSSettings - The DNS settings in network interface. + DNSSettings *InterfaceDNSSettings `json:"dnsSettings,omitempty"` + // MacAddress - The MAC address of the network interface. + MacAddress *string `json:"macAddress,omitempty"` + // Primary - Gets whether this is a primary network interface on a virtual machine. + Primary *bool `json:"primary,omitempty"` + // EnableAcceleratedNetworking - If the network interface is accelerated networking enabled. + EnableAcceleratedNetworking *bool `json:"enableAcceleratedNetworking,omitempty"` + // EnableIPForwarding - Indicates whether IP forwarding is enabled on this network interface. + EnableIPForwarding *bool `json:"enableIPForwarding,omitempty"` + // ResourceGUID - The resource GUID property of the network interface resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// InterfacesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type InterfacesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *InterfacesCreateOrUpdateFuture) Result(client InterfacesClient) (i Interface, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.InterfacesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if i.Response.Response, err = future.GetResult(sender); err == nil && i.Response.Response.StatusCode != http.StatusNoContent { + i, err = client.CreateOrUpdateResponder(i.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesCreateOrUpdateFuture", "Result", i.Response.Response, "Failure responding to request") + } + } + return +} + +// InterfacesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type InterfacesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *InterfacesDeleteFuture) Result(client InterfacesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.InterfacesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// InterfacesGetEffectiveRouteTableFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type InterfacesGetEffectiveRouteTableFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *InterfacesGetEffectiveRouteTableFuture) Result(client InterfacesClient) (erlr EffectiveRouteListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesGetEffectiveRouteTableFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.InterfacesGetEffectiveRouteTableFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if erlr.Response.Response, err = future.GetResult(sender); err == nil && erlr.Response.Response.StatusCode != http.StatusNoContent { + erlr, err = client.GetEffectiveRouteTableResponder(erlr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesGetEffectiveRouteTableFuture", "Result", erlr.Response.Response, "Failure responding to request") + } + } + return +} + +// InterfacesListEffectiveNetworkSecurityGroupsFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type InterfacesListEffectiveNetworkSecurityGroupsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *InterfacesListEffectiveNetworkSecurityGroupsFuture) Result(client InterfacesClient) (ensglr EffectiveNetworkSecurityGroupListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesListEffectiveNetworkSecurityGroupsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.InterfacesListEffectiveNetworkSecurityGroupsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ensglr.Response.Response, err = future.GetResult(sender); err == nil && ensglr.Response.Response.StatusCode != http.StatusNoContent { + ensglr, err = client.ListEffectiveNetworkSecurityGroupsResponder(ensglr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesListEffectiveNetworkSecurityGroupsFuture", "Result", ensglr.Response.Response, "Failure responding to request") + } + } + return +} + +// InterfacesUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type InterfacesUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *InterfacesUpdateTagsFuture) Result(client InterfacesClient) (i Interface, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.InterfacesUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if i.Response.Response, err = future.GetResult(sender); err == nil && i.Response.Response.StatusCode != http.StatusNoContent { + i, err = client.UpdateTagsResponder(i.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.InterfacesUpdateTagsFuture", "Result", i.Response.Response, "Failure responding to request") + } + } + return +} + +// IPAddressAvailabilityResult response for CheckIPAddressAvailability API service call +type IPAddressAvailabilityResult struct { + autorest.Response `json:"-"` + // Available - Private IP address availability. + Available *bool `json:"available,omitempty"` + // AvailableIPAddresses - Contains other available private IP addresses if the asked for address is taken. + AvailableIPAddresses *[]string `json:"availableIPAddresses,omitempty"` +} + +// IPConfiguration IP configuration +type IPConfiguration struct { + // IPConfigurationPropertiesFormat - Properties of the IP configuration + *IPConfigurationPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for IPConfiguration. 
+func (ic IPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if ic.IPConfigurationPropertiesFormat != nil { + objectMap["properties"] = ic.IPConfigurationPropertiesFormat + } + if ic.Name != nil { + objectMap["name"] = ic.Name + } + if ic.Etag != nil { + objectMap["etag"] = ic.Etag + } + if ic.ID != nil { + objectMap["id"] = ic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for IPConfiguration struct. +func (ic *IPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var IPConfigurationPropertiesFormat IPConfigurationPropertiesFormat + err = json.Unmarshal(*v, &IPConfigurationPropertiesFormat) + if err != nil { + return err + } + ic.IPConfigurationPropertiesFormat = &IPConfigurationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + ic.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + ic.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + ic.ID = &ID + } + } + } + + return nil +} + +// IPConfigurationPropertiesFormat properties of IP configuration. +type IPConfigurationPropertiesFormat struct { + // PrivateIPAddress - The private IP address of the IP configuration. + PrivateIPAddress *string `json:"privateIPAddress,omitempty"` + // PrivateIPAllocationMethod - The private IP allocation method. Possible values are 'Static' and 'Dynamic'. Possible values include: 'Static', 'Dynamic' + PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` + // Subnet - The reference of the subnet resource. + Subnet *Subnet `json:"subnet,omitempty"` + // PublicIPAddress - The reference of the public IP resource. + PublicIPAddress *PublicIPAddress `json:"publicIPAddress,omitempty"` + // ProvisioningState - Gets the provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// IpsecPolicy an IPSec Policy configuration for a virtual network gateway connection +type IpsecPolicy struct { + // SaLifeTimeSeconds - The IPSec Security Association (also called Quick Mode or Phase 2 SA) lifetime in seconds for a site to site VPN tunnel. + SaLifeTimeSeconds *int32 `json:"saLifeTimeSeconds,omitempty"` + // SaDataSizeKilobytes - The IPSec Security Association (also called Quick Mode or Phase 2 SA) payload size in KB for a site to site VPN tunnel. + SaDataSizeKilobytes *int32 `json:"saDataSizeKilobytes,omitempty"` + // IpsecEncryption - The IPSec encryption algorithm (IKE phase 1). Possible values include: 'IpsecEncryptionNone', 'IpsecEncryptionDES', 'IpsecEncryptionDES3', 'IpsecEncryptionAES128', 'IpsecEncryptionAES192', 'IpsecEncryptionAES256', 'IpsecEncryptionGCMAES128', 'IpsecEncryptionGCMAES192', 'IpsecEncryptionGCMAES256' + IpsecEncryption IpsecEncryption `json:"ipsecEncryption,omitempty"` + // IpsecIntegrity - The IPSec integrity algorithm (IKE phase 1). 
Possible values include: 'IpsecIntegrityMD5', 'IpsecIntegritySHA1', 'IpsecIntegritySHA256', 'IpsecIntegrityGCMAES128', 'IpsecIntegrityGCMAES192', 'IpsecIntegrityGCMAES256' + IpsecIntegrity IpsecIntegrity `json:"ipsecIntegrity,omitempty"` + // IkeEncryption - The IKE encryption algorithm (IKE phase 2). Possible values include: 'DES', 'DES3', 'AES128', 'AES192', 'AES256' + IkeEncryption IkeEncryption `json:"ikeEncryption,omitempty"` + // IkeIntegrity - The IKE integrity algorithm (IKE phase 2). Possible values include: 'MD5', 'SHA1', 'SHA256', 'SHA384' + IkeIntegrity IkeIntegrity `json:"ikeIntegrity,omitempty"` + // DhGroup - The DH Groups used in IKE Phase 1 for initial SA. Possible values include: 'None', 'DHGroup1', 'DHGroup2', 'DHGroup14', 'DHGroup2048', 'ECP256', 'ECP384', 'DHGroup24' + DhGroup DhGroup `json:"dhGroup,omitempty"` + // PfsGroup - The DH Groups used in IKE Phase 2 for new child SA. Possible values include: 'PfsGroupNone', 'PfsGroupPFS1', 'PfsGroupPFS2', 'PfsGroupPFS2048', 'PfsGroupECP256', 'PfsGroupECP384', 'PfsGroupPFS24' + PfsGroup PfsGroup `json:"pfsGroup,omitempty"` +} + +// IPTag contains the IpTag associated with the public IP address +type IPTag struct { + // IPTagType - Gets or sets the ipTag type: Example FirstPartyUsage. + IPTagType *string `json:"ipTagType,omitempty"` + // Tag - Gets or sets value of the IpTag associated with the public IP. Example SQL, Storage etc + Tag *string `json:"tag,omitempty"` +} + +// Ipv6ExpressRouteCircuitPeeringConfig contains IPv6 peering config. +type Ipv6ExpressRouteCircuitPeeringConfig struct { + // PrimaryPeerAddressPrefix - The primary address prefix. + PrimaryPeerAddressPrefix *string `json:"primaryPeerAddressPrefix,omitempty"` + // SecondaryPeerAddressPrefix - The secondary address prefix. + SecondaryPeerAddressPrefix *string `json:"secondaryPeerAddressPrefix,omitempty"` + // MicrosoftPeeringConfig - The Microsoft peering configuration. + MicrosoftPeeringConfig *ExpressRouteCircuitPeeringConfig `json:"microsoftPeeringConfig,omitempty"` + // RouteFilter - The reference of the RouteFilter resource. + RouteFilter *RouteFilter `json:"routeFilter,omitempty"` + // State - The state of peering. Possible values are: 'Disabled' and 'Enabled'. Possible values include: 'ExpressRouteCircuitPeeringStateDisabled', 'ExpressRouteCircuitPeeringStateEnabled' + State ExpressRouteCircuitPeeringState `json:"state,omitempty"` +} + +// LoadBalancer loadBalancer resource +type LoadBalancer struct { + autorest.Response `json:"-"` + // Sku - The load balancer SKU. + Sku *LoadBalancerSku `json:"sku,omitempty"` + // LoadBalancerPropertiesFormat - Properties of load balancer. + *LoadBalancerPropertiesFormat `json:"properties,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for LoadBalancer. 
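+
+// Editorial note (illustrative, not part of the generated SDK): model fields in this package are
+// pointers (or pointers to slices) so that unset values can be distinguished from zero values; the
+// autorest "to" helpers already imported by this file are the usual way to build literals. A minimal
+// sketch using the FrontendIPConfiguration type defined earlier in this file (the name and address
+// are hypothetical; the enum value is written as a string conversion to keep the sketch
+// self-contained):
+//
+//	fic := FrontendIPConfiguration{
+//		Name: to.StringPtr("example-frontend"),
+//		FrontendIPConfigurationPropertiesFormat: &FrontendIPConfigurationPropertiesFormat{
+//			PrivateIPAddress:          to.StringPtr("10.0.0.4"),
+//			PrivateIPAllocationMethod: IPAllocationMethod("Static"),
+//		},
+//	}
+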
+func (lb LoadBalancer) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if lb.Sku != nil { + objectMap["sku"] = lb.Sku + } + if lb.LoadBalancerPropertiesFormat != nil { + objectMap["properties"] = lb.LoadBalancerPropertiesFormat + } + if lb.Etag != nil { + objectMap["etag"] = lb.Etag + } + if lb.ID != nil { + objectMap["id"] = lb.ID + } + if lb.Name != nil { + objectMap["name"] = lb.Name + } + if lb.Type != nil { + objectMap["type"] = lb.Type + } + if lb.Location != nil { + objectMap["location"] = lb.Location + } + if lb.Tags != nil { + objectMap["tags"] = lb.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for LoadBalancer struct. +func (lb *LoadBalancer) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku LoadBalancerSku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + lb.Sku = &sku + } + case "properties": + if v != nil { + var loadBalancerPropertiesFormat LoadBalancerPropertiesFormat + err = json.Unmarshal(*v, &loadBalancerPropertiesFormat) + if err != nil { + return err + } + lb.LoadBalancerPropertiesFormat = &loadBalancerPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + lb.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + lb.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + lb.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + lb.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + lb.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + lb.Tags = tags + } + } + } + + return nil +} + +// LoadBalancerBackendAddressPoolListResult response for ListBackendAddressPool API service call. +type LoadBalancerBackendAddressPoolListResult struct { + autorest.Response `json:"-"` + // Value - A list of backend address pools in a load balancer. + Value *[]BackendAddressPool `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// LoadBalancerBackendAddressPoolListResultIterator provides access to a complete listing of BackendAddressPool +// values. +type LoadBalancerBackendAddressPoolListResultIterator struct { + i int + page LoadBalancerBackendAddressPoolListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *LoadBalancerBackendAddressPoolListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. 
+func (iter LoadBalancerBackendAddressPoolListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter LoadBalancerBackendAddressPoolListResultIterator) Response() LoadBalancerBackendAddressPoolListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter LoadBalancerBackendAddressPoolListResultIterator) Value() BackendAddressPool { + if !iter.page.NotDone() { + return BackendAddressPool{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lbbaplr LoadBalancerBackendAddressPoolListResult) IsEmpty() bool { + return lbbaplr.Value == nil || len(*lbbaplr.Value) == 0 +} + +// loadBalancerBackendAddressPoolListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lbbaplr LoadBalancerBackendAddressPoolListResult) loadBalancerBackendAddressPoolListResultPreparer() (*http.Request, error) { + if lbbaplr.NextLink == nil || len(to.String(lbbaplr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lbbaplr.NextLink))) +} + +// LoadBalancerBackendAddressPoolListResultPage contains a page of BackendAddressPool values. +type LoadBalancerBackendAddressPoolListResultPage struct { + fn func(LoadBalancerBackendAddressPoolListResult) (LoadBalancerBackendAddressPoolListResult, error) + lbbaplr LoadBalancerBackendAddressPoolListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *LoadBalancerBackendAddressPoolListResultPage) Next() error { + next, err := page.fn(page.lbbaplr) + if err != nil { + return err + } + page.lbbaplr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page LoadBalancerBackendAddressPoolListResultPage) NotDone() bool { + return !page.lbbaplr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page LoadBalancerBackendAddressPoolListResultPage) Response() LoadBalancerBackendAddressPoolListResult { + return page.lbbaplr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page LoadBalancerBackendAddressPoolListResultPage) Values() []BackendAddressPool { + if page.lbbaplr.IsEmpty() { + return nil + } + return *page.lbbaplr.Value +} + +// LoadBalancerFrontendIPConfigurationListResult response for ListFrontendIPConfiguration API service call. +type LoadBalancerFrontendIPConfigurationListResult struct { + autorest.Response `json:"-"` + // Value - A list of frontend IP configurations in a load balancer. + Value *[]FrontendIPConfiguration `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// LoadBalancerFrontendIPConfigurationListResultIterator provides access to a complete listing of +// FrontendIPConfiguration values. +type LoadBalancerFrontendIPConfigurationListResultIterator struct { + i int + page LoadBalancerFrontendIPConfigurationListResultPage +} + +// Next advances to the next value. 
If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *LoadBalancerFrontendIPConfigurationListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter LoadBalancerFrontendIPConfigurationListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter LoadBalancerFrontendIPConfigurationListResultIterator) Response() LoadBalancerFrontendIPConfigurationListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter LoadBalancerFrontendIPConfigurationListResultIterator) Value() FrontendIPConfiguration { + if !iter.page.NotDone() { + return FrontendIPConfiguration{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lbficlr LoadBalancerFrontendIPConfigurationListResult) IsEmpty() bool { + return lbficlr.Value == nil || len(*lbficlr.Value) == 0 +} + +// loadBalancerFrontendIPConfigurationListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lbficlr LoadBalancerFrontendIPConfigurationListResult) loadBalancerFrontendIPConfigurationListResultPreparer() (*http.Request, error) { + if lbficlr.NextLink == nil || len(to.String(lbficlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lbficlr.NextLink))) +} + +// LoadBalancerFrontendIPConfigurationListResultPage contains a page of FrontendIPConfiguration values. +type LoadBalancerFrontendIPConfigurationListResultPage struct { + fn func(LoadBalancerFrontendIPConfigurationListResult) (LoadBalancerFrontendIPConfigurationListResult, error) + lbficlr LoadBalancerFrontendIPConfigurationListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *LoadBalancerFrontendIPConfigurationListResultPage) Next() error { + next, err := page.fn(page.lbficlr) + if err != nil { + return err + } + page.lbficlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page LoadBalancerFrontendIPConfigurationListResultPage) NotDone() bool { + return !page.lbficlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page LoadBalancerFrontendIPConfigurationListResultPage) Response() LoadBalancerFrontendIPConfigurationListResult { + return page.lbficlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page LoadBalancerFrontendIPConfigurationListResultPage) Values() []FrontendIPConfiguration { + if page.lbficlr.IsEmpty() { + return nil + } + return *page.lbficlr.Value +} + +// LoadBalancerListResult response for ListLoadBalancers API service call. +type LoadBalancerListResult struct { + autorest.Response `json:"-"` + // Value - A list of load balancers in a resource group. 
+ Value *[]LoadBalancer `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// LoadBalancerListResultIterator provides access to a complete listing of LoadBalancer values. +type LoadBalancerListResultIterator struct { + i int + page LoadBalancerListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *LoadBalancerListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter LoadBalancerListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter LoadBalancerListResultIterator) Response() LoadBalancerListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter LoadBalancerListResultIterator) Value() LoadBalancer { + if !iter.page.NotDone() { + return LoadBalancer{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lblr LoadBalancerListResult) IsEmpty() bool { + return lblr.Value == nil || len(*lblr.Value) == 0 +} + +// loadBalancerListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lblr LoadBalancerListResult) loadBalancerListResultPreparer() (*http.Request, error) { + if lblr.NextLink == nil || len(to.String(lblr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lblr.NextLink))) +} + +// LoadBalancerListResultPage contains a page of LoadBalancer values. +type LoadBalancerListResultPage struct { + fn func(LoadBalancerListResult) (LoadBalancerListResult, error) + lblr LoadBalancerListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *LoadBalancerListResultPage) Next() error { + next, err := page.fn(page.lblr) + if err != nil { + return err + } + page.lblr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page LoadBalancerListResultPage) NotDone() bool { + return !page.lblr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page LoadBalancerListResultPage) Response() LoadBalancerListResult { + return page.lblr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page LoadBalancerListResultPage) Values() []LoadBalancer { + if page.lblr.IsEmpty() { + return nil + } + return *page.lblr.Value +} + +// LoadBalancerLoadBalancingRuleListResult response for ListLoadBalancingRule API service call. +type LoadBalancerLoadBalancingRuleListResult struct { + autorest.Response `json:"-"` + // Value - A list of load balancing rules in a load balancer. + Value *[]LoadBalancingRule `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. 
+ NextLink *string `json:"nextLink,omitempty"` +} + +// LoadBalancerLoadBalancingRuleListResultIterator provides access to a complete listing of LoadBalancingRule +// values. +type LoadBalancerLoadBalancingRuleListResultIterator struct { + i int + page LoadBalancerLoadBalancingRuleListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *LoadBalancerLoadBalancingRuleListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter LoadBalancerLoadBalancingRuleListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter LoadBalancerLoadBalancingRuleListResultIterator) Response() LoadBalancerLoadBalancingRuleListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter LoadBalancerLoadBalancingRuleListResultIterator) Value() LoadBalancingRule { + if !iter.page.NotDone() { + return LoadBalancingRule{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lblbrlr LoadBalancerLoadBalancingRuleListResult) IsEmpty() bool { + return lblbrlr.Value == nil || len(*lblbrlr.Value) == 0 +} + +// loadBalancerLoadBalancingRuleListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lblbrlr LoadBalancerLoadBalancingRuleListResult) loadBalancerLoadBalancingRuleListResultPreparer() (*http.Request, error) { + if lblbrlr.NextLink == nil || len(to.String(lblbrlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lblbrlr.NextLink))) +} + +// LoadBalancerLoadBalancingRuleListResultPage contains a page of LoadBalancingRule values. +type LoadBalancerLoadBalancingRuleListResultPage struct { + fn func(LoadBalancerLoadBalancingRuleListResult) (LoadBalancerLoadBalancingRuleListResult, error) + lblbrlr LoadBalancerLoadBalancingRuleListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *LoadBalancerLoadBalancingRuleListResultPage) Next() error { + next, err := page.fn(page.lblbrlr) + if err != nil { + return err + } + page.lblbrlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page LoadBalancerLoadBalancingRuleListResultPage) NotDone() bool { + return !page.lblbrlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page LoadBalancerLoadBalancingRuleListResultPage) Response() LoadBalancerLoadBalancingRuleListResult { + return page.lblbrlr +} + +// Values returns the slice of values for the current page or nil if there are no values. 
+func (page LoadBalancerLoadBalancingRuleListResultPage) Values() []LoadBalancingRule { + if page.lblbrlr.IsEmpty() { + return nil + } + return *page.lblbrlr.Value +} + +// LoadBalancerProbeListResult response for ListProbe API service call. +type LoadBalancerProbeListResult struct { + autorest.Response `json:"-"` + // Value - A list of probes in a load balancer. + Value *[]Probe `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// LoadBalancerProbeListResultIterator provides access to a complete listing of Probe values. +type LoadBalancerProbeListResultIterator struct { + i int + page LoadBalancerProbeListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *LoadBalancerProbeListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter LoadBalancerProbeListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter LoadBalancerProbeListResultIterator) Response() LoadBalancerProbeListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter LoadBalancerProbeListResultIterator) Value() Probe { + if !iter.page.NotDone() { + return Probe{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lbplr LoadBalancerProbeListResult) IsEmpty() bool { + return lbplr.Value == nil || len(*lbplr.Value) == 0 +} + +// loadBalancerProbeListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lbplr LoadBalancerProbeListResult) loadBalancerProbeListResultPreparer() (*http.Request, error) { + if lbplr.NextLink == nil || len(to.String(lbplr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lbplr.NextLink))) +} + +// LoadBalancerProbeListResultPage contains a page of Probe values. +type LoadBalancerProbeListResultPage struct { + fn func(LoadBalancerProbeListResult) (LoadBalancerProbeListResult, error) + lbplr LoadBalancerProbeListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *LoadBalancerProbeListResultPage) Next() error { + next, err := page.fn(page.lbplr) + if err != nil { + return err + } + page.lbplr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page LoadBalancerProbeListResultPage) NotDone() bool { + return !page.lbplr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page LoadBalancerProbeListResultPage) Response() LoadBalancerProbeListResult { + return page.lbplr +} + +// Values returns the slice of values for the current page or nil if there are no values. 
+func (page LoadBalancerProbeListResultPage) Values() []Probe {
+	if page.lbplr.IsEmpty() {
+		return nil
+	}
+	return *page.lbplr.Value
+}
+
+// LoadBalancerPropertiesFormat properties of the load balancer.
+type LoadBalancerPropertiesFormat struct {
+	// FrontendIPConfigurations - Object representing the frontend IPs to be used for the load balancer
+	FrontendIPConfigurations *[]FrontendIPConfiguration `json:"frontendIPConfigurations,omitempty"`
+	// BackendAddressPools - Collection of backend address pools used by a load balancer
+	BackendAddressPools *[]BackendAddressPool `json:"backendAddressPools,omitempty"`
+	// LoadBalancingRules - Object collection representing the load balancing rules of the load balancer
+	LoadBalancingRules *[]LoadBalancingRule `json:"loadBalancingRules,omitempty"`
+	// Probes - Collection of probe objects used in the load balancer
+	Probes *[]Probe `json:"probes,omitempty"`
+	// InboundNatRules - Collection of inbound NAT Rules used by a load balancer. Defining inbound NAT rules on your load balancer is mutually exclusive with defining an inbound NAT pool. Inbound NAT pools are referenced from virtual machine scale sets. NICs that are associated with individual virtual machines cannot reference an Inbound NAT pool. They have to reference individual inbound NAT rules.
+	InboundNatRules *[]InboundNatRule `json:"inboundNatRules,omitempty"`
+	// InboundNatPools - Defines an external port range for inbound NAT to a single backend port on NICs associated with a load balancer. Inbound NAT rules are created automatically for each NIC associated with the Load Balancer using an external port from this range. Defining an Inbound NAT pool on your Load Balancer is mutually exclusive with defining inbound NAT rules. Inbound NAT pools are referenced from virtual machine scale sets. NICs that are associated with individual virtual machines cannot reference an inbound NAT pool. They have to reference individual inbound NAT rules.
+	InboundNatPools *[]InboundNatPool `json:"inboundNatPools,omitempty"`
+	// OutboundNatRules - The outbound NAT rules.
+	OutboundNatRules *[]OutboundNatRule `json:"outboundNatRules,omitempty"`
+	// ResourceGUID - The resource GUID property of the load balancer resource.
+	ResourceGUID *string `json:"resourceGuid,omitempty"`
+	// ProvisioningState - Gets the provisioning state of the PublicIP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'.
+	ProvisioningState *string `json:"provisioningState,omitempty"`
+}
+
+// LoadBalancersCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running
+// operation.
+type LoadBalancersCreateOrUpdateFuture struct {
+	azure.Future
+}
+
+// Result returns the result of the asynchronous operation.
+// If the operation has not completed it will return an error.
+func (future *LoadBalancersCreateOrUpdateFuture) Result(client LoadBalancersClient) (lb LoadBalancer, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.LoadBalancersCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if lb.Response.Response, err = future.GetResult(sender); err == nil && lb.Response.Response.StatusCode != http.StatusNoContent { + lb, err = client.CreateOrUpdateResponder(lb.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersCreateOrUpdateFuture", "Result", lb.Response.Response, "Failure responding to request") + } + } + return +} + +// LoadBalancersDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type LoadBalancersDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *LoadBalancersDeleteFuture) Result(client LoadBalancersClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.LoadBalancersDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// LoadBalancerSku SKU of a load balancer +type LoadBalancerSku struct { + // Name - Name of a load balancer SKU. Possible values include: 'LoadBalancerSkuNameBasic', 'LoadBalancerSkuNameStandard' + Name LoadBalancerSkuName `json:"name,omitempty"` +} + +// LoadBalancersUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type LoadBalancersUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *LoadBalancersUpdateTagsFuture) Result(client LoadBalancersClient) (lb LoadBalancer, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.LoadBalancersUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if lb.Response.Response, err = future.GetResult(sender); err == nil && lb.Response.Response.StatusCode != http.StatusNoContent { + lb, err = client.UpdateTagsResponder(lb.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LoadBalancersUpdateTagsFuture", "Result", lb.Response.Response, "Failure responding to request") + } + } + return +} + +// LoadBalancingRule a load balancing rule for a load balancer. +type LoadBalancingRule struct { + autorest.Response `json:"-"` + // LoadBalancingRulePropertiesFormat - Properties of load balancer load balancing rule. 
+ *LoadBalancingRulePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for LoadBalancingRule. +func (lbr LoadBalancingRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if lbr.LoadBalancingRulePropertiesFormat != nil { + objectMap["properties"] = lbr.LoadBalancingRulePropertiesFormat + } + if lbr.Name != nil { + objectMap["name"] = lbr.Name + } + if lbr.Etag != nil { + objectMap["etag"] = lbr.Etag + } + if lbr.ID != nil { + objectMap["id"] = lbr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for LoadBalancingRule struct. +func (lbr *LoadBalancingRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var loadBalancingRulePropertiesFormat LoadBalancingRulePropertiesFormat + err = json.Unmarshal(*v, &loadBalancingRulePropertiesFormat) + if err != nil { + return err + } + lbr.LoadBalancingRulePropertiesFormat = &loadBalancingRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + lbr.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + lbr.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + lbr.ID = &ID + } + } + } + + return nil +} + +// LoadBalancingRulePropertiesFormat properties of the load balancer. +type LoadBalancingRulePropertiesFormat struct { + // FrontendIPConfiguration - A reference to frontend IP addresses. + FrontendIPConfiguration *SubResource `json:"frontendIPConfiguration,omitempty"` + // BackendAddressPool - A reference to a pool of DIPs. Inbound traffic is randomly load balanced across IPs in the backend IPs. + BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` + // Probe - The reference of the load balancer probe used by the load balancing rule. + Probe *SubResource `json:"probe,omitempty"` + // Protocol - Possible values include: 'TransportProtocolUDP', 'TransportProtocolTCP', 'TransportProtocolAll' + Protocol TransportProtocol `json:"protocol,omitempty"` + // LoadDistribution - The load distribution policy for this rule. Possible values are 'Default', 'SourceIP', and 'SourceIPProtocol'. Possible values include: 'Default', 'SourceIP', 'SourceIPProtocol' + LoadDistribution LoadDistribution `json:"loadDistribution,omitempty"` + // FrontendPort - The port for the external endpoint. Port numbers for each rule must be unique within the Load Balancer. Acceptable values are between 0 and 65534. Note that value 0 enables "Any Port" + FrontendPort *int32 `json:"frontendPort,omitempty"` + // BackendPort - The port used for internal connections on the endpoint. Acceptable values are between 0 and 65535. Note that value 0 enables "Any Port" + BackendPort *int32 `json:"backendPort,omitempty"` + // IdleTimeoutInMinutes - The timeout for the TCP idle connection. 
The value can be set between 4 and 30 minutes. The default value is 4 minutes. This element is only used when the protocol is set to TCP. + IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` + // EnableFloatingIP - Configures a virtual machine's endpoint for the floating IP capability required to configure a SQL AlwaysOn Availability Group. This setting is required when using the SQL AlwaysOn Availability Groups in SQL server. This setting can't be changed after you create the endpoint. + EnableFloatingIP *bool `json:"enableFloatingIP,omitempty"` + // DisableOutboundSnat - Configures SNAT for the VMs in the backend pool to use the publicIP address specified in the frontend of the load balancing rule. + DisableOutboundSnat *bool `json:"disableOutboundSnat,omitempty"` + // ProvisioningState - Gets the provisioning state of the PublicIP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// LocalNetworkGateway a common class for general resource information +type LocalNetworkGateway struct { + autorest.Response `json:"-"` + // LocalNetworkGatewayPropertiesFormat - Properties of the local network gateway. + *LocalNetworkGatewayPropertiesFormat `json:"properties,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for LocalNetworkGateway. +func (lng LocalNetworkGateway) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if lng.LocalNetworkGatewayPropertiesFormat != nil { + objectMap["properties"] = lng.LocalNetworkGatewayPropertiesFormat + } + if lng.Etag != nil { + objectMap["etag"] = lng.Etag + } + if lng.ID != nil { + objectMap["id"] = lng.ID + } + if lng.Name != nil { + objectMap["name"] = lng.Name + } + if lng.Type != nil { + objectMap["type"] = lng.Type + } + if lng.Location != nil { + objectMap["location"] = lng.Location + } + if lng.Tags != nil { + objectMap["tags"] = lng.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for LocalNetworkGateway struct. 
+func (lng *LocalNetworkGateway) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var localNetworkGatewayPropertiesFormat LocalNetworkGatewayPropertiesFormat + err = json.Unmarshal(*v, &localNetworkGatewayPropertiesFormat) + if err != nil { + return err + } + lng.LocalNetworkGatewayPropertiesFormat = &localNetworkGatewayPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + lng.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + lng.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + lng.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + lng.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + lng.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + lng.Tags = tags + } + } + } + + return nil +} + +// LocalNetworkGatewayListResult response for ListLocalNetworkGateways API service call. +type LocalNetworkGatewayListResult struct { + autorest.Response `json:"-"` + // Value - A list of local network gateways that exists in a resource group. + Value *[]LocalNetworkGateway `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// LocalNetworkGatewayListResultIterator provides access to a complete listing of LocalNetworkGateway values. +type LocalNetworkGatewayListResultIterator struct { + i int + page LocalNetworkGatewayListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *LocalNetworkGatewayListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter LocalNetworkGatewayListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter LocalNetworkGatewayListResultIterator) Response() LocalNetworkGatewayListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter LocalNetworkGatewayListResultIterator) Value() LocalNetworkGateway { + if !iter.page.NotDone() { + return LocalNetworkGateway{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lnglr LocalNetworkGatewayListResult) IsEmpty() bool { + return lnglr.Value == nil || len(*lnglr.Value) == 0 +} + +// localNetworkGatewayListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
+func (lnglr LocalNetworkGatewayListResult) localNetworkGatewayListResultPreparer() (*http.Request, error) { + if lnglr.NextLink == nil || len(to.String(lnglr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lnglr.NextLink))) +} + +// LocalNetworkGatewayListResultPage contains a page of LocalNetworkGateway values. +type LocalNetworkGatewayListResultPage struct { + fn func(LocalNetworkGatewayListResult) (LocalNetworkGatewayListResult, error) + lnglr LocalNetworkGatewayListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *LocalNetworkGatewayListResultPage) Next() error { + next, err := page.fn(page.lnglr) + if err != nil { + return err + } + page.lnglr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page LocalNetworkGatewayListResultPage) NotDone() bool { + return !page.lnglr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page LocalNetworkGatewayListResultPage) Response() LocalNetworkGatewayListResult { + return page.lnglr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page LocalNetworkGatewayListResultPage) Values() []LocalNetworkGateway { + if page.lnglr.IsEmpty() { + return nil + } + return *page.lnglr.Value +} + +// LocalNetworkGatewayPropertiesFormat localNetworkGateway properties +type LocalNetworkGatewayPropertiesFormat struct { + // LocalNetworkAddressSpace - Local network site address space. + LocalNetworkAddressSpace *AddressSpace `json:"localNetworkAddressSpace,omitempty"` + // GatewayIPAddress - IP address of local network gateway. + GatewayIPAddress *string `json:"gatewayIpAddress,omitempty"` + // BgpSettings - Local network gateway's BGP speaker settings. + BgpSettings *BgpSettings `json:"bgpSettings,omitempty"` + // ResourceGUID - The resource GUID property of the LocalNetworkGateway resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the LocalNetworkGateway resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// LocalNetworkGatewaysCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type LocalNetworkGatewaysCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *LocalNetworkGatewaysCreateOrUpdateFuture) Result(client LocalNetworkGatewaysClient) (lng LocalNetworkGateway, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.LocalNetworkGatewaysCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if lng.Response.Response, err = future.GetResult(sender); err == nil && lng.Response.Response.StatusCode != http.StatusNoContent { + lng, err = client.CreateOrUpdateResponder(lng.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysCreateOrUpdateFuture", "Result", lng.Response.Response, "Failure responding to request") + } + } + return +} + +// LocalNetworkGatewaysDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type LocalNetworkGatewaysDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *LocalNetworkGatewaysDeleteFuture) Result(client LocalNetworkGatewaysClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.LocalNetworkGatewaysDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// LocalNetworkGatewaysUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type LocalNetworkGatewaysUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *LocalNetworkGatewaysUpdateTagsFuture) Result(client LocalNetworkGatewaysClient) (lng LocalNetworkGateway, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.LocalNetworkGatewaysUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if lng.Response.Response, err = future.GetResult(sender); err == nil && lng.Response.Response.StatusCode != http.StatusNoContent { + lng, err = client.UpdateTagsResponder(lng.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.LocalNetworkGatewaysUpdateTagsFuture", "Result", lng.Response.Response, "Failure responding to request") + } + } + return +} + +// LogSpecification description of logging specification. +type LogSpecification struct { + // Name - The name of the specification. + Name *string `json:"name,omitempty"` + // DisplayName - The display name of the specification. + DisplayName *string `json:"displayName,omitempty"` + // BlobDuration - Duration of the blob. 
+ BlobDuration *string `json:"blobDuration,omitempty"` +} + +// MetricSpecification description of metrics specification. +type MetricSpecification struct { + // Name - The name of the metric. + Name *string `json:"name,omitempty"` + // DisplayName - The display name of the metric. + DisplayName *string `json:"displayName,omitempty"` + // DisplayDescription - The description of the metric. + DisplayDescription *string `json:"displayDescription,omitempty"` + // Unit - Units the metric to be displayed in. + Unit *string `json:"unit,omitempty"` + // AggregationType - The aggregation type. + AggregationType *string `json:"aggregationType,omitempty"` + // Availabilities - List of availability. + Availabilities *[]Availability `json:"availabilities,omitempty"` + // EnableRegionalMdmAccount - Whether regional MDM account enabled. + EnableRegionalMdmAccount *bool `json:"enableRegionalMdmAccount,omitempty"` + // FillGapWithZero - Whether gaps would be filled with zeros. + FillGapWithZero *bool `json:"fillGapWithZero,omitempty"` + // MetricFilterPattern - Pattern for the filter of the metric. + MetricFilterPattern *string `json:"metricFilterPattern,omitempty"` + // Dimensions - List of dimensions. + Dimensions *[]Dimension `json:"dimensions,omitempty"` + // IsInternal - Whether the metric is internal. + IsInternal *bool `json:"isInternal,omitempty"` + // SourceMdmAccount - The source MDM account. + SourceMdmAccount *string `json:"sourceMdmAccount,omitempty"` + // SourceMdmNamespace - The source MDM namespace. + SourceMdmNamespace *string `json:"sourceMdmNamespace,omitempty"` + // ResourceIDDimensionNameOverride - The resource Id dimension name override. + ResourceIDDimensionNameOverride *string `json:"resourceIdDimensionNameOverride,omitempty"` +} + +// NextHopParameters parameters that define the source and destination endpoint. +type NextHopParameters struct { + // TargetResourceID - The resource identifier of the target resource against which the action is to be performed. + TargetResourceID *string `json:"targetResourceId,omitempty"` + // SourceIPAddress - The source IP address. + SourceIPAddress *string `json:"sourceIPAddress,omitempty"` + // DestinationIPAddress - The destination IP address. + DestinationIPAddress *string `json:"destinationIPAddress,omitempty"` + // TargetNicResourceID - The NIC ID. (If VM has multiple NICs and IP forwarding is enabled on any of the nics, then this parameter must be specified. Otherwise optional). + TargetNicResourceID *string `json:"targetNicResourceId,omitempty"` +} + +// NextHopResult the information about next hop from the specified VM. +type NextHopResult struct { + autorest.Response `json:"-"` + // NextHopType - Next hop type. Possible values include: 'NextHopTypeInternet', 'NextHopTypeVirtualAppliance', 'NextHopTypeVirtualNetworkGateway', 'NextHopTypeVnetLocal', 'NextHopTypeHyperNetGateway', 'NextHopTypeNone' + NextHopType NextHopType `json:"nextHopType,omitempty"` + // NextHopIPAddress - Next hop IP Address + NextHopIPAddress *string `json:"nextHopIpAddress,omitempty"` + // RouteTableID - The resource identifier for the route table associated with the route being returned. If the route being returned does not correspond to any user created routes then this field will be the string 'System Route'. + RouteTableID *string `json:"routeTableId,omitempty"` +} + +// Operation network REST API operation definition. 
+type Operation struct { + // Name - Operation name: {provider}/{resource}/{operation} + Name *string `json:"name,omitempty"` + // Display - Display metadata associated with the operation. + Display *OperationDisplay `json:"display,omitempty"` + // Origin - Origin of the operation. + Origin *string `json:"origin,omitempty"` + // OperationPropertiesFormat - Operation properties format. + *OperationPropertiesFormat `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for Operation. +func (o Operation) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if o.Name != nil { + objectMap["name"] = o.Name + } + if o.Display != nil { + objectMap["display"] = o.Display + } + if o.Origin != nil { + objectMap["origin"] = o.Origin + } + if o.OperationPropertiesFormat != nil { + objectMap["properties"] = o.OperationPropertiesFormat + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Operation struct. +func (o *Operation) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + o.Name = &name + } + case "display": + if v != nil { + var display OperationDisplay + err = json.Unmarshal(*v, &display) + if err != nil { + return err + } + o.Display = &display + } + case "origin": + if v != nil { + var origin string + err = json.Unmarshal(*v, &origin) + if err != nil { + return err + } + o.Origin = &origin + } + case "properties": + if v != nil { + var operationPropertiesFormat OperationPropertiesFormat + err = json.Unmarshal(*v, &operationPropertiesFormat) + if err != nil { + return err + } + o.OperationPropertiesFormat = &operationPropertiesFormat + } + } + } + + return nil +} + +// OperationDisplay display metadata associated with the operation. +type OperationDisplay struct { + // Provider - Service provider: Microsoft Network. + Provider *string `json:"provider,omitempty"` + // Resource - Resource on which the operation is performed. + Resource *string `json:"resource,omitempty"` + // Operation - Type of the operation: get, read, delete, etc. + Operation *string `json:"operation,omitempty"` + // Description - Description of the operation. + Description *string `json:"description,omitempty"` +} + +// OperationListResult result of the request to list Network operations. It contains a list of operations and a URL +// link to get the next set of results. +type OperationListResult struct { + autorest.Response `json:"-"` + // Value - List of Network operations supported by the Network resource provider. + Value *[]Operation `json:"value,omitempty"` + // NextLink - URL to get the next set of operation list results if there are any. + NextLink *string `json:"nextLink,omitempty"` +} + +// OperationListResultIterator provides access to a complete listing of Operation values. +type OperationListResultIterator struct { + i int + page OperationListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. 
+func (iter *OperationListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter OperationListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter OperationListResultIterator) Response() OperationListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter OperationListResultIterator) Value() Operation { + if !iter.page.NotDone() { + return Operation{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (olr OperationListResult) IsEmpty() bool { + return olr.Value == nil || len(*olr.Value) == 0 +} + +// operationListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (olr OperationListResult) operationListResultPreparer() (*http.Request, error) { + if olr.NextLink == nil || len(to.String(olr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(olr.NextLink))) +} + +// OperationListResultPage contains a page of Operation values. +type OperationListResultPage struct { + fn func(OperationListResult) (OperationListResult, error) + olr OperationListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *OperationListResultPage) Next() error { + next, err := page.fn(page.olr) + if err != nil { + return err + } + page.olr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page OperationListResultPage) NotDone() bool { + return !page.olr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page OperationListResultPage) Response() OperationListResult { + return page.olr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page OperationListResultPage) Values() []Operation { + if page.olr.IsEmpty() { + return nil + } + return *page.olr.Value +} + +// OperationPropertiesFormat description of operation properties format. +type OperationPropertiesFormat struct { + // ServiceSpecification - Specification of the service. + ServiceSpecification *OperationPropertiesFormatServiceSpecification `json:"serviceSpecification,omitempty"` +} + +// OperationPropertiesFormatServiceSpecification specification of the service. +type OperationPropertiesFormatServiceSpecification struct { + // MetricSpecifications - Operation service specification. + MetricSpecifications *[]MetricSpecification `json:"metricSpecifications,omitempty"` + // LogSpecifications - Operation log specification. + LogSpecifications *[]LogSpecification `json:"logSpecifications,omitempty"` +} + +// OutboundNatRule outbound NAT pool of the load balancer. +type OutboundNatRule struct { + // OutboundNatRulePropertiesFormat - Properties of load balancer outbound nat rule. 
+ *OutboundNatRulePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for OutboundNatRule. +func (onr OutboundNatRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if onr.OutboundNatRulePropertiesFormat != nil { + objectMap["properties"] = onr.OutboundNatRulePropertiesFormat + } + if onr.Name != nil { + objectMap["name"] = onr.Name + } + if onr.Etag != nil { + objectMap["etag"] = onr.Etag + } + if onr.ID != nil { + objectMap["id"] = onr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for OutboundNatRule struct. +func (onr *OutboundNatRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var outboundNatRulePropertiesFormat OutboundNatRulePropertiesFormat + err = json.Unmarshal(*v, &outboundNatRulePropertiesFormat) + if err != nil { + return err + } + onr.OutboundNatRulePropertiesFormat = &outboundNatRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + onr.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + onr.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + onr.ID = &ID + } + } + } + + return nil +} + +// OutboundNatRulePropertiesFormat outbound NAT pool of the load balancer. +type OutboundNatRulePropertiesFormat struct { + // AllocatedOutboundPorts - The number of outbound ports to be used for NAT. + AllocatedOutboundPorts *int32 `json:"allocatedOutboundPorts,omitempty"` + // FrontendIPConfigurations - The Frontend IP addresses of the load balancer. + FrontendIPConfigurations *[]SubResource `json:"frontendIPConfigurations,omitempty"` + // BackendAddressPool - A reference to a pool of DIPs. Outbound traffic is randomly load balanced across IPs in the backend IPs. + BackendAddressPool *SubResource `json:"backendAddressPool,omitempty"` + // ProvisioningState - Gets the provisioning state of the PublicIP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// PacketCapture parameters that define the create packet capture operation. +type PacketCapture struct { + // Name - Name of the packet capture. + Name *string `json:"name,omitempty"` + // ID - ID of the packet capture. + ID *string `json:"id,omitempty"` + // Type - Packet capture type. + Type *string `json:"type,omitempty"` + *PacketCaptureParameters `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for PacketCapture. 
+func (pc PacketCapture) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if pc.Name != nil { + objectMap["name"] = pc.Name + } + if pc.ID != nil { + objectMap["id"] = pc.ID + } + if pc.Type != nil { + objectMap["type"] = pc.Type + } + if pc.PacketCaptureParameters != nil { + objectMap["properties"] = pc.PacketCaptureParameters + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for PacketCapture struct. +func (pc *PacketCapture) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + pc.Name = &name + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + pc.ID = &ID + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + pc.Type = &typeVar + } + case "properties": + if v != nil { + var packetCaptureParameters PacketCaptureParameters + err = json.Unmarshal(*v, &packetCaptureParameters) + if err != nil { + return err + } + pc.PacketCaptureParameters = &packetCaptureParameters + } + } + } + + return nil +} + +// PacketCaptureFilter filter that is applied to packet capture request. Multiple filters can be applied. +type PacketCaptureFilter struct { + // Protocol - Protocol to be filtered on. Possible values include: 'PcProtocolTCP', 'PcProtocolUDP', 'PcProtocolAny' + Protocol PcProtocol `json:"protocol,omitempty"` + // LocalIPAddress - Local IP Address to be filtered on. Notation: "127.0.0.1" for single address entry. "127.0.0.1-127.0.0.255" for range. "127.0.0.1;127.0.0.5"? for multiple entries. Multiple ranges not currently supported. Mixing ranges with multiple entries not currently supported. Default = null. + LocalIPAddress *string `json:"localIPAddress,omitempty"` + // RemoteIPAddress - Local IP Address to be filtered on. Notation: "127.0.0.1" for single address entry. "127.0.0.1-127.0.0.255" for range. "127.0.0.1;127.0.0.5;" for multiple entries. Multiple ranges not currently supported. Mixing ranges with multiple entries not currently supported. Default = null. + RemoteIPAddress *string `json:"remoteIPAddress,omitempty"` + // LocalPort - Local port to be filtered on. Notation: "80" for single port entry."80-85" for range. "80;443;" for multiple entries. Multiple ranges not currently supported. Mixing ranges with multiple entries not currently supported. Default = null. + LocalPort *string `json:"localPort,omitempty"` + // RemotePort - Remote port to be filtered on. Notation: "80" for single port entry."80-85" for range. "80;443;" for multiple entries. Multiple ranges not currently supported. Mixing ranges with multiple entries not currently supported. Default = null. + RemotePort *string `json:"remotePort,omitempty"` +} + +// PacketCaptureListResult list of packet capture sessions. +type PacketCaptureListResult struct { + autorest.Response `json:"-"` + // Value - Information about packet capture sessions. + Value *[]PacketCaptureResult `json:"value,omitempty"` +} + +// PacketCaptureParameters parameters that define the create packet capture operation. +type PacketCaptureParameters struct { + // Target - The ID of the targeted resource, only VM is currently supported. 
+ Target *string `json:"target,omitempty"` + // BytesToCapturePerPacket - Number of bytes captured per packet, the remaining bytes are truncated. + BytesToCapturePerPacket *int32 `json:"bytesToCapturePerPacket,omitempty"` + // TotalBytesPerSession - Maximum size of the capture output. + TotalBytesPerSession *int32 `json:"totalBytesPerSession,omitempty"` + // TimeLimitInSeconds - Maximum duration of the capture session in seconds. + TimeLimitInSeconds *int32 `json:"timeLimitInSeconds,omitempty"` + StorageLocation *PacketCaptureStorageLocation `json:"storageLocation,omitempty"` + Filters *[]PacketCaptureFilter `json:"filters,omitempty"` +} + +// PacketCaptureQueryStatusResult status of packet capture session. +type PacketCaptureQueryStatusResult struct { + autorest.Response `json:"-"` + // Name - The name of the packet capture resource. + Name *string `json:"name,omitempty"` + // ID - The ID of the packet capture resource. + ID *string `json:"id,omitempty"` + // CaptureStartTime - The start time of the packet capture session. + CaptureStartTime *date.Time `json:"captureStartTime,omitempty"` + // PacketCaptureStatus - The status of the packet capture session. Possible values include: 'PcStatusNotStarted', 'PcStatusRunning', 'PcStatusStopped', 'PcStatusError', 'PcStatusUnknown' + PacketCaptureStatus PcStatus `json:"packetCaptureStatus,omitempty"` + // StopReason - The reason the current packet capture session was stopped. + StopReason *string `json:"stopReason,omitempty"` + // PacketCaptureError - List of errors of packet capture session. + PacketCaptureError *[]PcError `json:"packetCaptureError,omitempty"` +} + +// PacketCaptureResult information about packet capture session. +type PacketCaptureResult struct { + autorest.Response `json:"-"` + // Name - Name of the packet capture. + Name *string `json:"name,omitempty"` + // ID - ID of the packet capture. + ID *string `json:"id,omitempty"` + // Type - Packet capture type. + Type *string `json:"type,omitempty"` + Etag *string `json:"etag,omitempty"` + *PacketCaptureResultProperties `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for PacketCaptureResult. +func (pcr PacketCaptureResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if pcr.Name != nil { + objectMap["name"] = pcr.Name + } + if pcr.ID != nil { + objectMap["id"] = pcr.ID + } + if pcr.Type != nil { + objectMap["type"] = pcr.Type + } + if pcr.Etag != nil { + objectMap["etag"] = pcr.Etag + } + if pcr.PacketCaptureResultProperties != nil { + objectMap["properties"] = pcr.PacketCaptureResultProperties + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for PacketCaptureResult struct. 
+func (pcr *PacketCaptureResult) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + pcr.Name = &name + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + pcr.ID = &ID + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + pcr.Type = &typeVar + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + pcr.Etag = &etag + } + case "properties": + if v != nil { + var packetCaptureResultProperties PacketCaptureResultProperties + err = json.Unmarshal(*v, &packetCaptureResultProperties) + if err != nil { + return err + } + pcr.PacketCaptureResultProperties = &packetCaptureResultProperties + } + } + } + + return nil +} + +// PacketCaptureResultProperties describes the properties of a packet capture session. +type PacketCaptureResultProperties struct { + // ProvisioningState - The provisioning state of the packet capture session. Possible values include: 'Succeeded', 'Updating', 'Deleting', 'Failed' + ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` + // Target - The ID of the targeted resource, only VM is currently supported. + Target *string `json:"target,omitempty"` + // BytesToCapturePerPacket - Number of bytes captured per packet, the remaining bytes are truncated. + BytesToCapturePerPacket *int32 `json:"bytesToCapturePerPacket,omitempty"` + // TotalBytesPerSession - Maximum size of the capture output. + TotalBytesPerSession *int32 `json:"totalBytesPerSession,omitempty"` + // TimeLimitInSeconds - Maximum duration of the capture session in seconds. + TimeLimitInSeconds *int32 `json:"timeLimitInSeconds,omitempty"` + StorageLocation *PacketCaptureStorageLocation `json:"storageLocation,omitempty"` + Filters *[]PacketCaptureFilter `json:"filters,omitempty"` +} + +// PacketCapturesCreateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type PacketCapturesCreateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *PacketCapturesCreateFuture) Result(client PacketCapturesClient) (pcr PacketCaptureResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesCreateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.PacketCapturesCreateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if pcr.Response.Response, err = future.GetResult(sender); err == nil && pcr.Response.Response.StatusCode != http.StatusNoContent { + pcr, err = client.CreateResponder(pcr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesCreateFuture", "Result", pcr.Response.Response, "Failure responding to request") + } + } + return +} + +// PacketCapturesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. 
+type PacketCapturesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *PacketCapturesDeleteFuture) Result(client PacketCapturesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.PacketCapturesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// PacketCapturesGetStatusFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type PacketCapturesGetStatusFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *PacketCapturesGetStatusFuture) Result(client PacketCapturesClient) (pcqsr PacketCaptureQueryStatusResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesGetStatusFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.PacketCapturesGetStatusFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if pcqsr.Response.Response, err = future.GetResult(sender); err == nil && pcqsr.Response.Response.StatusCode != http.StatusNoContent { + pcqsr, err = client.GetStatusResponder(pcqsr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesGetStatusFuture", "Result", pcqsr.Response.Response, "Failure responding to request") + } + } + return +} + +// PacketCapturesStopFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type PacketCapturesStopFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *PacketCapturesStopFuture) Result(client PacketCapturesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesStopFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.PacketCapturesStopFuture") + return + } + ar.Response = future.Response() + return +} + +// PacketCaptureStorageLocation describes the storage location for a packet capture session. +type PacketCaptureStorageLocation struct { + // StorageID - The ID of the storage account to save the packet capture session. Required if no local file path is provided. + StorageID *string `json:"storageId,omitempty"` + // StoragePath - The URI of the storage path to save the packet capture. Must be a well-formed URI describing the location to save the packet capture. + StoragePath *string `json:"storagePath,omitempty"` + // FilePath - A valid local path on the targeting VM. Must include the name of the capture file (*.cap). For linux virtual machine it must start with /var/captures. Required if no storage ID is provided, otherwise optional. 
+ FilePath *string `json:"filePath,omitempty"` +} + +// PatchRouteFilter route Filter Resource. +type PatchRouteFilter struct { + *RouteFilterPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for PatchRouteFilter. +func (prf PatchRouteFilter) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if prf.RouteFilterPropertiesFormat != nil { + objectMap["properties"] = prf.RouteFilterPropertiesFormat + } + if prf.Name != nil { + objectMap["name"] = prf.Name + } + if prf.Etag != nil { + objectMap["etag"] = prf.Etag + } + if prf.Type != nil { + objectMap["type"] = prf.Type + } + if prf.Tags != nil { + objectMap["tags"] = prf.Tags + } + if prf.ID != nil { + objectMap["id"] = prf.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for PatchRouteFilter struct. +func (prf *PatchRouteFilter) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var routeFilterPropertiesFormat RouteFilterPropertiesFormat + err = json.Unmarshal(*v, &routeFilterPropertiesFormat) + if err != nil { + return err + } + prf.RouteFilterPropertiesFormat = &routeFilterPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + prf.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + prf.Etag = &etag + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + prf.Type = &typeVar + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + prf.Tags = tags + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + prf.ID = &ID + } + } + } + + return nil +} + +// PatchRouteFilterRule route Filter Rule Resource +type PatchRouteFilterRule struct { + *RouteFilterRulePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for PatchRouteFilterRule. 
+func (prfr PatchRouteFilterRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if prfr.RouteFilterRulePropertiesFormat != nil { + objectMap["properties"] = prfr.RouteFilterRulePropertiesFormat + } + if prfr.Name != nil { + objectMap["name"] = prfr.Name + } + if prfr.Etag != nil { + objectMap["etag"] = prfr.Etag + } + if prfr.ID != nil { + objectMap["id"] = prfr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for PatchRouteFilterRule struct. +func (prfr *PatchRouteFilterRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var routeFilterRulePropertiesFormat RouteFilterRulePropertiesFormat + err = json.Unmarshal(*v, &routeFilterRulePropertiesFormat) + if err != nil { + return err + } + prfr.RouteFilterRulePropertiesFormat = &routeFilterRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + prfr.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + prfr.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + prfr.ID = &ID + } + } + } + + return nil +} + +// Probe a load balancer probe. +type Probe struct { + autorest.Response `json:"-"` + // ProbePropertiesFormat - Properties of load balancer probe. + *ProbePropertiesFormat `json:"properties,omitempty"` + // Name - Gets name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for Probe. +func (p Probe) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if p.ProbePropertiesFormat != nil { + objectMap["properties"] = p.ProbePropertiesFormat + } + if p.Name != nil { + objectMap["name"] = p.Name + } + if p.Etag != nil { + objectMap["etag"] = p.Etag + } + if p.ID != nil { + objectMap["id"] = p.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Probe struct. +func (p *Probe) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var probePropertiesFormat ProbePropertiesFormat + err = json.Unmarshal(*v, &probePropertiesFormat) + if err != nil { + return err + } + p.ProbePropertiesFormat = &probePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + p.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + p.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + p.ID = &ID + } + } + } + + return nil +} + +// ProbePropertiesFormat load balancer probe resource. +type ProbePropertiesFormat struct { + // LoadBalancingRules - The load balancer rules that use this probe. 
+ LoadBalancingRules *[]SubResource `json:"loadBalancingRules,omitempty"` + // Protocol - The protocol of the endpoint. Possible values are: 'Http' or 'Tcp'. If 'Tcp' is specified, a received ACK is required for the probe to be successful. If 'Http' is specified, a 200 OK response from the specified URI is required for the probe to be successful. Possible values include: 'ProbeProtocolHTTP', 'ProbeProtocolTCP' + Protocol ProbeProtocol `json:"protocol,omitempty"` + // Port - The port for communicating the probe. Possible values range from 1 to 65535, inclusive. + Port *int32 `json:"port,omitempty"` + // IntervalInSeconds - The interval, in seconds, for how frequently to probe the endpoint for health status. Typically, the interval is slightly less than half the allocated timeout period (in seconds), which allows two full probes before taking the instance out of rotation. The default value is 15; the minimum value is 5. + IntervalInSeconds *int32 `json:"intervalInSeconds,omitempty"` + // NumberOfProbes - The number of probes with no response that will result in stopping further traffic from being delivered to the endpoint. This value allows endpoints to be taken out of rotation faster or slower than the typical times used in Azure. + NumberOfProbes *int32 `json:"numberOfProbes,omitempty"` + // RequestPath - The URI used for requesting health status from the VM. Path is required if the protocol is set to 'Http'. Otherwise, it is not allowed. There is no default value. + RequestPath *string `json:"requestPath,omitempty"` + // ProvisioningState - Gets the provisioning state of the probe resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// PublicIPAddress public IP address resource. +type PublicIPAddress struct { + autorest.Response `json:"-"` + // Sku - The public IP address SKU. + Sku *PublicIPAddressSku `json:"sku,omitempty"` + // PublicIPAddressPropertiesFormat - Public IP address properties. + *PublicIPAddressPropertiesFormat `json:"properties,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // Zones - A list of availability zones denoting the zone the IP allocated for the resource needs to come from. + Zones *[]string `json:"zones,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for PublicIPAddress. 
+func (pia PublicIPAddress) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if pia.Sku != nil { + objectMap["sku"] = pia.Sku + } + if pia.PublicIPAddressPropertiesFormat != nil { + objectMap["properties"] = pia.PublicIPAddressPropertiesFormat + } + if pia.Etag != nil { + objectMap["etag"] = pia.Etag + } + if pia.Zones != nil { + objectMap["zones"] = pia.Zones + } + if pia.ID != nil { + objectMap["id"] = pia.ID + } + if pia.Name != nil { + objectMap["name"] = pia.Name + } + if pia.Type != nil { + objectMap["type"] = pia.Type + } + if pia.Location != nil { + objectMap["location"] = pia.Location + } + if pia.Tags != nil { + objectMap["tags"] = pia.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for PublicIPAddress struct. +func (pia *PublicIPAddress) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku PublicIPAddressSku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + pia.Sku = &sku + } + case "properties": + if v != nil { + var publicIPAddressPropertiesFormat PublicIPAddressPropertiesFormat + err = json.Unmarshal(*v, &publicIPAddressPropertiesFormat) + if err != nil { + return err + } + pia.PublicIPAddressPropertiesFormat = &publicIPAddressPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + pia.Etag = &etag + } + case "zones": + if v != nil { + var zones []string + err = json.Unmarshal(*v, &zones) + if err != nil { + return err + } + pia.Zones = &zones + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + pia.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + pia.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + pia.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + pia.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + pia.Tags = tags + } + } + } + + return nil +} + +// PublicIPAddressDNSSettings contains the FQDN of the DNS record associated with the public IP address. +type PublicIPAddressDNSSettings struct { + // DomainNameLabel - Gets or sets the domain name label. The concatenation of the domain name label and the regionalized DNS zone makes up the fully qualified domain name associated with the public IP address. If a domain name label is specified, an A DNS record is created for the public IP in the Microsoft Azure DNS system. + DomainNameLabel *string `json:"domainNameLabel,omitempty"` + // Fqdn - Gets the FQDN (fully qualified domain name) of the A DNS record associated with the public IP. This is the concatenation of the domainNameLabel and the regionalized DNS zone. + Fqdn *string `json:"fqdn,omitempty"` + // ReverseFqdn - Gets or sets the reverse FQDN. A user-visible, fully qualified domain name that resolves to this public IP address. If the reverseFqdn is specified, then a PTR DNS record is created pointing from the IP address in the in-addr.arpa domain to the reverse FQDN. 
+ ReverseFqdn *string `json:"reverseFqdn,omitempty"` +} + +// PublicIPAddressesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type PublicIPAddressesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *PublicIPAddressesCreateOrUpdateFuture) Result(client PublicIPAddressesClient) (pia PublicIPAddress, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.PublicIPAddressesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if pia.Response.Response, err = future.GetResult(sender); err == nil && pia.Response.Response.StatusCode != http.StatusNoContent { + pia, err = client.CreateOrUpdateResponder(pia.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesCreateOrUpdateFuture", "Result", pia.Response.Response, "Failure responding to request") + } + } + return +} + +// PublicIPAddressesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type PublicIPAddressesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *PublicIPAddressesDeleteFuture) Result(client PublicIPAddressesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.PublicIPAddressesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// PublicIPAddressesUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type PublicIPAddressesUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *PublicIPAddressesUpdateTagsFuture) Result(client PublicIPAddressesClient) (pia PublicIPAddress, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.PublicIPAddressesUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if pia.Response.Response, err = future.GetResult(sender); err == nil && pia.Response.Response.StatusCode != http.StatusNoContent { + pia, err = client.UpdateTagsResponder(pia.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesUpdateTagsFuture", "Result", pia.Response.Response, "Failure responding to request") + } + } + return +} + +// PublicIPAddressListResult response for ListPublicIpAddresses API service call. +type PublicIPAddressListResult struct { + autorest.Response `json:"-"` + // Value - A list of public IP addresses that exists in a resource group. + Value *[]PublicIPAddress `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// PublicIPAddressListResultIterator provides access to a complete listing of PublicIPAddress values. +type PublicIPAddressListResultIterator struct { + i int + page PublicIPAddressListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *PublicIPAddressListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter PublicIPAddressListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter PublicIPAddressListResultIterator) Response() PublicIPAddressListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter PublicIPAddressListResultIterator) Value() PublicIPAddress { + if !iter.page.NotDone() { + return PublicIPAddress{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (pialr PublicIPAddressListResult) IsEmpty() bool { + return pialr.Value == nil || len(*pialr.Value) == 0 +} + +// publicIPAddressListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (pialr PublicIPAddressListResult) publicIPAddressListResultPreparer() (*http.Request, error) { + if pialr.NextLink == nil || len(to.String(pialr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(pialr.NextLink))) +} + +// PublicIPAddressListResultPage contains a page of PublicIPAddress values. 
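+ +// exampleCollectPublicIPNames is an illustrative, hand-written sketch (not part of the generated SDK code) of walking a paged listing with the iterator above: NotDone reports whether elements remain, Value reads the current element, and Next advances, fetching the following page when the current one is exhausted. +func exampleCollectPublicIPNames(iter PublicIPAddressListResultIterator) ([]string, error) { + var names []string + for iter.NotDone() { + if n := iter.Value().Name; n != nil { + names = append(names, *n) + } + if err := iter.Next(); err != nil { + return names, err + } + } + return names, nil +}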
+type PublicIPAddressListResultPage struct { + fn func(PublicIPAddressListResult) (PublicIPAddressListResult, error) + pialr PublicIPAddressListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *PublicIPAddressListResultPage) Next() error { + next, err := page.fn(page.pialr) + if err != nil { + return err + } + page.pialr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page PublicIPAddressListResultPage) NotDone() bool { + return !page.pialr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page PublicIPAddressListResultPage) Response() PublicIPAddressListResult { + return page.pialr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page PublicIPAddressListResultPage) Values() []PublicIPAddress { + if page.pialr.IsEmpty() { + return nil + } + return *page.pialr.Value +} + +// PublicIPAddressPropertiesFormat public IP address properties. +type PublicIPAddressPropertiesFormat struct { + // PublicIPAllocationMethod - The public IP allocation method. Possible values are: 'Static' and 'Dynamic'. Possible values include: 'Static', 'Dynamic' + PublicIPAllocationMethod IPAllocationMethod `json:"publicIPAllocationMethod,omitempty"` + // PublicIPAddressVersion - The public IP address version. Possible values are: 'IPv4' and 'IPv6'. Possible values include: 'IPv4', 'IPv6' + PublicIPAddressVersion IPVersion `json:"publicIPAddressVersion,omitempty"` + // IPConfiguration - The IP configuration associated with the public IP address. + IPConfiguration *IPConfiguration `json:"ipConfiguration,omitempty"` + // DNSSettings - The FQDN of the DNS record associated with the public IP address. + DNSSettings *PublicIPAddressDNSSettings `json:"dnsSettings,omitempty"` + // IPTags - The list of tags associated with the public IP address. + IPTags *[]IPTag `json:"ipTags,omitempty"` + // IPAddress - The IP address associated with the public IP address resource. + IPAddress *string `json:"ipAddress,omitempty"` + // IdleTimeoutInMinutes - The idle timeout of the public IP address. + IdleTimeoutInMinutes *int32 `json:"idleTimeoutInMinutes,omitempty"` + // ResourceGUID - The resource GUID property of the public IP resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the PublicIP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// PublicIPAddressSku SKU of a public IP address +type PublicIPAddressSku struct { + // Name - Name of a public IP address SKU. Possible values include: 'PublicIPAddressSkuNameBasic', 'PublicIPAddressSkuNameStandard' + Name PublicIPAddressSkuName `json:"name,omitempty"` +} + +// QueryTroubleshootingParameters parameters that define the resource to query the troubleshooting result. +type QueryTroubleshootingParameters struct { + // TargetResourceID - The target resource ID to query the troubleshooting result. + TargetResourceID *string `json:"targetResourceId,omitempty"` +} + +// Resource common resource representation. +type Resource struct { + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. 
+ Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Resource. +func (r Resource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if r.ID != nil { + objectMap["id"] = r.ID + } + if r.Name != nil { + objectMap["name"] = r.Name + } + if r.Type != nil { + objectMap["type"] = r.Type + } + if r.Location != nil { + objectMap["location"] = r.Location + } + if r.Tags != nil { + objectMap["tags"] = r.Tags + } + return json.Marshal(objectMap) +} + +// ResourceNavigationLink resourceNavigationLink resource. +type ResourceNavigationLink struct { + // ResourceNavigationLinkFormat - Resource navigation link properties format. + *ResourceNavigationLinkFormat `json:"properties,omitempty"` + // Name - Name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for ResourceNavigationLink. +func (rnl ResourceNavigationLink) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if rnl.ResourceNavigationLinkFormat != nil { + objectMap["properties"] = rnl.ResourceNavigationLinkFormat + } + if rnl.Name != nil { + objectMap["name"] = rnl.Name + } + if rnl.Etag != nil { + objectMap["etag"] = rnl.Etag + } + if rnl.ID != nil { + objectMap["id"] = rnl.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for ResourceNavigationLink struct. +func (rnl *ResourceNavigationLink) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var resourceNavigationLinkFormat ResourceNavigationLinkFormat + err = json.Unmarshal(*v, &resourceNavigationLinkFormat) + if err != nil { + return err + } + rnl.ResourceNavigationLinkFormat = &resourceNavigationLinkFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + rnl.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + rnl.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + rnl.ID = &ID + } + } + } + + return nil +} + +// ResourceNavigationLinkFormat properties of ResourceNavigationLink. +type ResourceNavigationLinkFormat struct { + // LinkedResourceType - Resource type of the linked resource. + LinkedResourceType *string `json:"linkedResourceType,omitempty"` + // Link - Link to the external resource + Link *string `json:"link,omitempty"` + // ProvisioningState - Provisioning state of the ResourceNavigationLink resource. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// RetentionPolicyParameters parameters that define the retention policy for flow log. +type RetentionPolicyParameters struct { + // Days - Number of days to retain flow log records. + Days *int32 `json:"days,omitempty"` + // Enabled - Flag to enable/disable retention. 
+ Enabled *bool `json:"enabled,omitempty"` +} + +// Route route resource +type Route struct { + autorest.Response `json:"-"` + // RoutePropertiesFormat - Properties of the route. + *RoutePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for Route. +func (r Route) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if r.RoutePropertiesFormat != nil { + objectMap["properties"] = r.RoutePropertiesFormat + } + if r.Name != nil { + objectMap["name"] = r.Name + } + if r.Etag != nil { + objectMap["etag"] = r.Etag + } + if r.ID != nil { + objectMap["id"] = r.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Route struct. +func (r *Route) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var routePropertiesFormat RoutePropertiesFormat + err = json.Unmarshal(*v, &routePropertiesFormat) + if err != nil { + return err + } + r.RoutePropertiesFormat = &routePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + r.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + r.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + r.ID = &ID + } + } + } + + return nil +} + +// RouteFilter route Filter Resource. +type RouteFilter struct { + autorest.Response `json:"-"` + *RouteFilterPropertiesFormat `json:"properties,omitempty"` + // Etag - Gets a unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for RouteFilter. +func (rf RouteFilter) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if rf.RouteFilterPropertiesFormat != nil { + objectMap["properties"] = rf.RouteFilterPropertiesFormat + } + if rf.Etag != nil { + objectMap["etag"] = rf.Etag + } + if rf.ID != nil { + objectMap["id"] = rf.ID + } + if rf.Name != nil { + objectMap["name"] = rf.Name + } + if rf.Type != nil { + objectMap["type"] = rf.Type + } + if rf.Location != nil { + objectMap["location"] = rf.Location + } + if rf.Tags != nil { + objectMap["tags"] = rf.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for RouteFilter struct. 
+func (rf *RouteFilter) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var routeFilterPropertiesFormat RouteFilterPropertiesFormat + err = json.Unmarshal(*v, &routeFilterPropertiesFormat) + if err != nil { + return err + } + rf.RouteFilterPropertiesFormat = &routeFilterPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + rf.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + rf.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + rf.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + rf.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + rf.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + rf.Tags = tags + } + } + } + + return nil +} + +// RouteFilterListResult response for the ListRouteFilters API service call. +type RouteFilterListResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of route filters in a resource group. + Value *[]RouteFilter `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// RouteFilterListResultIterator provides access to a complete listing of RouteFilter values. +type RouteFilterListResultIterator struct { + i int + page RouteFilterListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *RouteFilterListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter RouteFilterListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter RouteFilterListResultIterator) Response() RouteFilterListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter RouteFilterListResultIterator) Value() RouteFilter { + if !iter.page.NotDone() { + return RouteFilter{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (rflr RouteFilterListResult) IsEmpty() bool { + return rflr.Value == nil || len(*rflr.Value) == 0 +} + +// routeFilterListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
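+ +// exampleRouteFilterJSONRoundTrip is an illustrative, hand-written sketch (not part of the generated SDK code) exercising the custom (un)marshalers above: MarshalJSON keeps the embedded *RouteFilterPropertiesFormat nested under the wire-format "properties" key and omits unset fields, and UnmarshalJSON reverses that mapping. +func exampleRouteFilterJSONRoundTrip(rf RouteFilter) (RouteFilter, error) { + body, err := json.Marshal(rf) + if err != nil { + return RouteFilter{}, err + } + var out RouteFilter + err = json.Unmarshal(body, &out) + return out, err +}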
+func (rflr RouteFilterListResult) routeFilterListResultPreparer() (*http.Request, error) { + if rflr.NextLink == nil || len(to.String(rflr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(rflr.NextLink))) +} + +// RouteFilterListResultPage contains a page of RouteFilter values. +type RouteFilterListResultPage struct { + fn func(RouteFilterListResult) (RouteFilterListResult, error) + rflr RouteFilterListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *RouteFilterListResultPage) Next() error { + next, err := page.fn(page.rflr) + if err != nil { + return err + } + page.rflr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page RouteFilterListResultPage) NotDone() bool { + return !page.rflr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page RouteFilterListResultPage) Response() RouteFilterListResult { + return page.rflr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page RouteFilterListResultPage) Values() []RouteFilter { + if page.rflr.IsEmpty() { + return nil + } + return *page.rflr.Value +} + +// RouteFilterPropertiesFormat route Filter Resource +type RouteFilterPropertiesFormat struct { + // Rules - Collection of RouteFilterRules contained within a route filter. + Rules *[]RouteFilterRule `json:"rules,omitempty"` + // Peerings - A collection of references to express route circuit peerings. + Peerings *[]ExpressRouteCircuitPeering `json:"peerings,omitempty"` + // ProvisioningState - The provisioning state of the resource. Possible values are: 'Updating', 'Deleting', 'Succeeded' and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// RouteFilterRule route Filter Rule Resource +type RouteFilterRule struct { + autorest.Response `json:"-"` + *RouteFilterRulePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for RouteFilterRule. +func (rfr RouteFilterRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if rfr.RouteFilterRulePropertiesFormat != nil { + objectMap["properties"] = rfr.RouteFilterRulePropertiesFormat + } + if rfr.Name != nil { + objectMap["name"] = rfr.Name + } + if rfr.Location != nil { + objectMap["location"] = rfr.Location + } + if rfr.Etag != nil { + objectMap["etag"] = rfr.Etag + } + if rfr.ID != nil { + objectMap["id"] = rfr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for RouteFilterRule struct. 
+func (rfr *RouteFilterRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var routeFilterRulePropertiesFormat RouteFilterRulePropertiesFormat + err = json.Unmarshal(*v, &routeFilterRulePropertiesFormat) + if err != nil { + return err + } + rfr.RouteFilterRulePropertiesFormat = &routeFilterRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + rfr.Name = &name + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + rfr.Location = &location + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + rfr.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + rfr.ID = &ID + } + } + } + + return nil +} + +// RouteFilterRuleListResult response for the ListRouteFilterRules API service call +type RouteFilterRuleListResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of RouteFilterRules in a resource group. + Value *[]RouteFilterRule `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// RouteFilterRuleListResultIterator provides access to a complete listing of RouteFilterRule values. +type RouteFilterRuleListResultIterator struct { + i int + page RouteFilterRuleListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *RouteFilterRuleListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter RouteFilterRuleListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter RouteFilterRuleListResultIterator) Response() RouteFilterRuleListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter RouteFilterRuleListResultIterator) Value() RouteFilterRule { + if !iter.page.NotDone() { + return RouteFilterRule{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (rfrlr RouteFilterRuleListResult) IsEmpty() bool { + return rfrlr.Value == nil || len(*rfrlr.Value) == 0 +} + +// routeFilterRuleListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (rfrlr RouteFilterRuleListResult) routeFilterRuleListResultPreparer() (*http.Request, error) { + if rfrlr.NextLink == nil || len(to.String(rfrlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(rfrlr.NextLink))) +} + +// RouteFilterRuleListResultPage contains a page of RouteFilterRule values. 
+type RouteFilterRuleListResultPage struct { + fn func(RouteFilterRuleListResult) (RouteFilterRuleListResult, error) + rfrlr RouteFilterRuleListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *RouteFilterRuleListResultPage) Next() error { + next, err := page.fn(page.rfrlr) + if err != nil { + return err + } + page.rfrlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page RouteFilterRuleListResultPage) NotDone() bool { + return !page.rfrlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page RouteFilterRuleListResultPage) Response() RouteFilterRuleListResult { + return page.rfrlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page RouteFilterRuleListResultPage) Values() []RouteFilterRule { + if page.rfrlr.IsEmpty() { + return nil + } + return *page.rfrlr.Value +} + +// RouteFilterRulePropertiesFormat route Filter Rule Resource +type RouteFilterRulePropertiesFormat struct { + // Access - The access type of the rule. Valid values are: 'Allow', 'Deny'. Possible values include: 'Allow', 'Deny' + Access Access `json:"access,omitempty"` + // RouteFilterRuleType - The rule type of the rule. Valid value is: 'Community' + RouteFilterRuleType *string `json:"routeFilterRuleType,omitempty"` + // Communities - The collection for bgp community values to filter on. e.g. ['12076:5010','12076:5020'] + Communities *[]string `json:"communities,omitempty"` + // ProvisioningState - The provisioning state of the resource. Possible values are: 'Updating', 'Deleting', 'Succeeded' and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// RouteFilterRulesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type RouteFilterRulesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RouteFilterRulesCreateOrUpdateFuture) Result(client RouteFilterRulesClient) (rfr RouteFilterRule, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteFilterRulesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if rfr.Response.Response, err = future.GetResult(sender); err == nil && rfr.Response.Response.StatusCode != http.StatusNoContent { + rfr, err = client.CreateOrUpdateResponder(rfr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesCreateOrUpdateFuture", "Result", rfr.Response.Response, "Failure responding to request") + } + } + return +} + +// RouteFilterRulesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type RouteFilterRulesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *RouteFilterRulesDeleteFuture) Result(client RouteFilterRulesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteFilterRulesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// RouteFilterRulesUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type RouteFilterRulesUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RouteFilterRulesUpdateFuture) Result(client RouteFilterRulesClient) (rfr RouteFilterRule, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteFilterRulesUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if rfr.Response.Response, err = future.GetResult(sender); err == nil && rfr.Response.Response.StatusCode != http.StatusNoContent { + rfr, err = client.UpdateResponder(rfr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesUpdateFuture", "Result", rfr.Response.Response, "Failure responding to request") + } + } + return +} + +// RouteFiltersCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type RouteFiltersCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RouteFiltersCreateOrUpdateFuture) Result(client RouteFiltersClient) (rf RouteFilter, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteFiltersCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if rf.Response.Response, err = future.GetResult(sender); err == nil && rf.Response.Response.StatusCode != http.StatusNoContent { + rf, err = client.CreateOrUpdateResponder(rf.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersCreateOrUpdateFuture", "Result", rf.Response.Response, "Failure responding to request") + } + } + return +} + +// RouteFiltersDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type RouteFiltersDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *RouteFiltersDeleteFuture) Result(client RouteFiltersClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteFiltersDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// RouteFiltersUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type RouteFiltersUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RouteFiltersUpdateFuture) Result(client RouteFiltersClient) (rf RouteFilter, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteFiltersUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if rf.Response.Response, err = future.GetResult(sender); err == nil && rf.Response.Response.StatusCode != http.StatusNoContent { + rf, err = client.UpdateResponder(rf.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersUpdateFuture", "Result", rf.Response.Response, "Failure responding to request") + } + } + return +} + +// RouteListResult response for the ListRoute API service call +type RouteListResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of routes in a resource group. + Value *[]Route `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// RouteListResultIterator provides access to a complete listing of Route values. +type RouteListResultIterator struct { + i int + page RouteListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *RouteListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter RouteListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter RouteListResultIterator) Response() RouteListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter RouteListResultIterator) Value() Route { + if !iter.page.NotDone() { + return Route{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (rlr RouteListResult) IsEmpty() bool { + return rlr.Value == nil || len(*rlr.Value) == 0 +} + +// routeListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
+func (rlr RouteListResult) routeListResultPreparer() (*http.Request, error) { + if rlr.NextLink == nil || len(to.String(rlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(rlr.NextLink))) +} + +// RouteListResultPage contains a page of Route values. +type RouteListResultPage struct { + fn func(RouteListResult) (RouteListResult, error) + rlr RouteListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *RouteListResultPage) Next() error { + next, err := page.fn(page.rlr) + if err != nil { + return err + } + page.rlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page RouteListResultPage) NotDone() bool { + return !page.rlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page RouteListResultPage) Response() RouteListResult { + return page.rlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page RouteListResultPage) Values() []Route { + if page.rlr.IsEmpty() { + return nil + } + return *page.rlr.Value +} + +// RoutePropertiesFormat route resource +type RoutePropertiesFormat struct { + // AddressPrefix - The destination CIDR to which the route applies. + AddressPrefix *string `json:"addressPrefix,omitempty"` + // NextHopType - The type of Azure hop the packet should be sent to. Possible values are: 'VirtualNetworkGateway', 'VnetLocal', 'Internet', 'VirtualAppliance', and 'None'. Possible values include: 'RouteNextHopTypeVirtualNetworkGateway', 'RouteNextHopTypeVnetLocal', 'RouteNextHopTypeInternet', 'RouteNextHopTypeVirtualAppliance', 'RouteNextHopTypeNone' + NextHopType RouteNextHopType `json:"nextHopType,omitempty"` + // NextHopIPAddress - The IP address packets should be forwarded to. Next hop values are only allowed in routes where the next hop type is VirtualAppliance. + NextHopIPAddress *string `json:"nextHopIpAddress,omitempty"` + // ProvisioningState - The provisioning state of the resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// RoutesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type RoutesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *RoutesCreateOrUpdateFuture) Result(client RoutesClient) (r Route, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RoutesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RoutesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if r.Response.Response, err = future.GetResult(sender); err == nil && r.Response.Response.StatusCode != http.StatusNoContent { + r, err = client.CreateOrUpdateResponder(r.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RoutesCreateOrUpdateFuture", "Result", r.Response.Response, "Failure responding to request") + } + } + return +} + +// RoutesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type RoutesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RoutesDeleteFuture) Result(client RoutesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RoutesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RoutesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// RouteTable route table resource. +type RouteTable struct { + autorest.Response `json:"-"` + // RouteTablePropertiesFormat - Properties of the route table. + *RouteTablePropertiesFormat `json:"properties,omitempty"` + // Etag - Gets a unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for RouteTable. +func (rt RouteTable) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if rt.RouteTablePropertiesFormat != nil { + objectMap["properties"] = rt.RouteTablePropertiesFormat + } + if rt.Etag != nil { + objectMap["etag"] = rt.Etag + } + if rt.ID != nil { + objectMap["id"] = rt.ID + } + if rt.Name != nil { + objectMap["name"] = rt.Name + } + if rt.Type != nil { + objectMap["type"] = rt.Type + } + if rt.Location != nil { + objectMap["location"] = rt.Location + } + if rt.Tags != nil { + objectMap["tags"] = rt.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for RouteTable struct. 
+func (rt *RouteTable) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var routeTablePropertiesFormat RouteTablePropertiesFormat + err = json.Unmarshal(*v, &routeTablePropertiesFormat) + if err != nil { + return err + } + rt.RouteTablePropertiesFormat = &routeTablePropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + rt.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + rt.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + rt.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + rt.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + rt.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + rt.Tags = tags + } + } + } + + return nil +} + +// RouteTableListResult response for the ListRouteTable API service call. +type RouteTableListResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of route tables in a resource group. + Value *[]RouteTable `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// RouteTableListResultIterator provides access to a complete listing of RouteTable values. +type RouteTableListResultIterator struct { + i int + page RouteTableListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *RouteTableListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter RouteTableListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter RouteTableListResultIterator) Response() RouteTableListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter RouteTableListResultIterator) Value() RouteTable { + if !iter.page.NotDone() { + return RouteTable{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (rtlr RouteTableListResult) IsEmpty() bool { + return rtlr.Value == nil || len(*rtlr.Value) == 0 +} + +// routeTableListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
+func (rtlr RouteTableListResult) routeTableListResultPreparer() (*http.Request, error) { + if rtlr.NextLink == nil || len(to.String(rtlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(rtlr.NextLink))) +} + +// RouteTableListResultPage contains a page of RouteTable values. +type RouteTableListResultPage struct { + fn func(RouteTableListResult) (RouteTableListResult, error) + rtlr RouteTableListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *RouteTableListResultPage) Next() error { + next, err := page.fn(page.rtlr) + if err != nil { + return err + } + page.rtlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page RouteTableListResultPage) NotDone() bool { + return !page.rtlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page RouteTableListResultPage) Response() RouteTableListResult { + return page.rtlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page RouteTableListResultPage) Values() []RouteTable { + if page.rtlr.IsEmpty() { + return nil + } + return *page.rtlr.Value +} + +// RouteTablePropertiesFormat route Table resource +type RouteTablePropertiesFormat struct { + // Routes - Collection of routes contained within a route table. + Routes *[]Route `json:"routes,omitempty"` + // Subnets - A collection of references to subnets. + Subnets *[]Subnet `json:"subnets,omitempty"` + // DisableBgpRoutePropagation - Gets or sets whether to disable the routes learned by BGP on that route table. True means disable. + DisableBgpRoutePropagation *bool `json:"disableBgpRoutePropagation,omitempty"` + // ProvisioningState - The provisioning state of the resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// RouteTablesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type RouteTablesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RouteTablesCreateOrUpdateFuture) Result(client RouteTablesClient) (rt RouteTable, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteTablesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if rt.Response.Response, err = future.GetResult(sender); err == nil && rt.Response.Response.StatusCode != http.StatusNoContent { + rt, err = client.CreateOrUpdateResponder(rt.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesCreateOrUpdateFuture", "Result", rt.Response.Response, "Failure responding to request") + } + } + return +} + +// RouteTablesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. 
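The list types above come in the usual generated trio: a ListResult payload, a Page that re-issues the request for the nextLink, and an Iterator that flattens pages into single values. A short sketch of draining the iterator, assuming it was obtained from the generated RouteTablesClient list operation; the helper name collectRouteTables is hypothetical:

```
// collectRouteTables drains a RouteTableListResultIterator into a slice.
// Value returns the current element; Next transparently fetches the next
// page (via routeTableListResultPreparer) once the current one is exhausted.
func collectRouteTables(iter RouteTableListResultIterator) ([]RouteTable, error) {
	var all []RouteTable
	for iter.NotDone() {
		all = append(all, iter.Value())
		if err := iter.Next(); err != nil {
			return all, err
		}
	}
	return all, nil
}
```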
+type RouteTablesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RouteTablesDeleteFuture) Result(client RouteTablesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteTablesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// RouteTablesUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type RouteTablesUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *RouteTablesUpdateTagsFuture) Result(client RouteTablesClient) (rt RouteTable, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.RouteTablesUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if rt.Response.Response, err = future.GetResult(sender); err == nil && rt.Response.Response.StatusCode != http.StatusNoContent { + rt, err = client.UpdateTagsResponder(rt.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesUpdateTagsFuture", "Result", rt.Response.Response, "Failure responding to request") + } + } + return +} + +// SecurityGroup networkSecurityGroup resource. +type SecurityGroup struct { + autorest.Response `json:"-"` + // SecurityGroupPropertiesFormat - Properties of the network security group + *SecurityGroupPropertiesFormat `json:"properties,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for SecurityGroup. +func (sg SecurityGroup) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if sg.SecurityGroupPropertiesFormat != nil { + objectMap["properties"] = sg.SecurityGroupPropertiesFormat + } + if sg.Etag != nil { + objectMap["etag"] = sg.Etag + } + if sg.ID != nil { + objectMap["id"] = sg.ID + } + if sg.Name != nil { + objectMap["name"] = sg.Name + } + if sg.Type != nil { + objectMap["type"] = sg.Type + } + if sg.Location != nil { + objectMap["location"] = sg.Location + } + if sg.Tags != nil { + objectMap["tags"] = sg.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for SecurityGroup struct. 
+func (sg *SecurityGroup) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var securityGroupPropertiesFormat SecurityGroupPropertiesFormat + err = json.Unmarshal(*v, &securityGroupPropertiesFormat) + if err != nil { + return err + } + sg.SecurityGroupPropertiesFormat = &securityGroupPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + sg.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + sg.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + sg.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + sg.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + sg.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + sg.Tags = tags + } + } + } + + return nil +} + +// SecurityGroupListResult response for ListNetworkSecurityGroups API service call. +type SecurityGroupListResult struct { + autorest.Response `json:"-"` + // Value - A list of NetworkSecurityGroup resources. + Value *[]SecurityGroup `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// SecurityGroupListResultIterator provides access to a complete listing of SecurityGroup values. +type SecurityGroupListResultIterator struct { + i int + page SecurityGroupListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *SecurityGroupListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter SecurityGroupListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter SecurityGroupListResultIterator) Response() SecurityGroupListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter SecurityGroupListResultIterator) Value() SecurityGroup { + if !iter.page.NotDone() { + return SecurityGroup{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (sglr SecurityGroupListResult) IsEmpty() bool { + return sglr.Value == nil || len(*sglr.Value) == 0 +} + +// securityGroupListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
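The custom MarshalJSON/UnmarshalJSON pairs above exist to flatten the embedded *...PropertiesFormat struct into the ARM wire format, where those fields live under a top-level "properties" object. A small round-trip sketch, assuming the same package and the encoding/json and autorest "to" helpers this file already imports; the function name is hypothetical:

```
// securityGroupRoundTrip shows the effect of the custom (un)marshalers: the
// embedded SecurityGroupPropertiesFormat is written under "properties" on the
// wire and re-inflated on decode.
func securityGroupRoundTrip() (SecurityGroup, error) {
	sg := SecurityGroup{
		Name:     to.StringPtr("example-nsg"),
		Location: to.StringPtr("westus2"),
		SecurityGroupPropertiesFormat: &SecurityGroupPropertiesFormat{
			ResourceGUID: to.StringPtr("00000000-0000-0000-0000-000000000000"),
		},
	}
	body, err := json.Marshal(sg) // {"location":...,"name":...,"properties":{"resourceGuid":...}}
	if err != nil {
		return SecurityGroup{}, err
	}
	var decoded SecurityGroup
	err = json.Unmarshal(body, &decoded) // decoded.SecurityGroupPropertiesFormat is populated again
	return decoded, err
}
```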
+func (sglr SecurityGroupListResult) securityGroupListResultPreparer() (*http.Request, error) { + if sglr.NextLink == nil || len(to.String(sglr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(sglr.NextLink))) +} + +// SecurityGroupListResultPage contains a page of SecurityGroup values. +type SecurityGroupListResultPage struct { + fn func(SecurityGroupListResult) (SecurityGroupListResult, error) + sglr SecurityGroupListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *SecurityGroupListResultPage) Next() error { + next, err := page.fn(page.sglr) + if err != nil { + return err + } + page.sglr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page SecurityGroupListResultPage) NotDone() bool { + return !page.sglr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page SecurityGroupListResultPage) Response() SecurityGroupListResult { + return page.sglr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page SecurityGroupListResultPage) Values() []SecurityGroup { + if page.sglr.IsEmpty() { + return nil + } + return *page.sglr.Value +} + +// SecurityGroupNetworkInterface network interface and all its associated security rules. +type SecurityGroupNetworkInterface struct { + // ID - ID of the network interface. + ID *string `json:"id,omitempty"` + SecurityRuleAssociations *SecurityRuleAssociations `json:"securityRuleAssociations,omitempty"` +} + +// SecurityGroupPropertiesFormat network Security Group resource. +type SecurityGroupPropertiesFormat struct { + // SecurityRules - A collection of security rules of the network security group. + SecurityRules *[]SecurityRule `json:"securityRules,omitempty"` + // DefaultSecurityRules - The default security rules of network security group. + DefaultSecurityRules *[]SecurityRule `json:"defaultSecurityRules,omitempty"` + // NetworkInterfaces - A collection of references to network interfaces. + NetworkInterfaces *[]Interface `json:"networkInterfaces,omitempty"` + // Subnets - A collection of references to subnets. + Subnets *[]Subnet `json:"subnets,omitempty"` + // ResourceGUID - The resource GUID property of the network security group resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// SecurityGroupsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type SecurityGroupsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *SecurityGroupsCreateOrUpdateFuture) Result(client SecurityGroupsClient) (sg SecurityGroup, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.SecurityGroupsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if sg.Response.Response, err = future.GetResult(sender); err == nil && sg.Response.Response.StatusCode != http.StatusNoContent { + sg, err = client.CreateOrUpdateResponder(sg.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsCreateOrUpdateFuture", "Result", sg.Response.Response, "Failure responding to request") + } + } + return +} + +// SecurityGroupsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type SecurityGroupsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *SecurityGroupsDeleteFuture) Result(client SecurityGroupsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.SecurityGroupsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// SecurityGroupsUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type SecurityGroupsUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *SecurityGroupsUpdateTagsFuture) Result(client SecurityGroupsClient) (sg SecurityGroup, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.SecurityGroupsUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if sg.Response.Response, err = future.GetResult(sender); err == nil && sg.Response.Response.StatusCode != http.StatusNoContent { + sg, err = client.UpdateTagsResponder(sg.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsUpdateTagsFuture", "Result", sg.Response.Response, "Failure responding to request") + } + } + return +} + +// SecurityGroupViewParameters parameters that define the VM to check security groups for. +type SecurityGroupViewParameters struct { + // TargetResourceID - ID of the target VM. + TargetResourceID *string `json:"targetResourceId,omitempty"` +} + +// SecurityGroupViewResult the information about security rules applied to the specified VM. +type SecurityGroupViewResult struct { + autorest.Response `json:"-"` + // NetworkInterfaces - List of network interfaces on the specified VM. 
+ NetworkInterfaces *[]SecurityGroupNetworkInterface `json:"networkInterfaces,omitempty"` +} + +// SecurityRule network security rule. +type SecurityRule struct { + autorest.Response `json:"-"` + // SecurityRulePropertiesFormat - Properties of the security rule + *SecurityRulePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for SecurityRule. +func (sr SecurityRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if sr.SecurityRulePropertiesFormat != nil { + objectMap["properties"] = sr.SecurityRulePropertiesFormat + } + if sr.Name != nil { + objectMap["name"] = sr.Name + } + if sr.Etag != nil { + objectMap["etag"] = sr.Etag + } + if sr.ID != nil { + objectMap["id"] = sr.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for SecurityRule struct. +func (sr *SecurityRule) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var securityRulePropertiesFormat SecurityRulePropertiesFormat + err = json.Unmarshal(*v, &securityRulePropertiesFormat) + if err != nil { + return err + } + sr.SecurityRulePropertiesFormat = &securityRulePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + sr.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + sr.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + sr.ID = &ID + } + } + } + + return nil +} + +// SecurityRuleAssociations all security rules associated with the network interface. +type SecurityRuleAssociations struct { + NetworkInterfaceAssociation *InterfaceAssociation `json:"networkInterfaceAssociation,omitempty"` + SubnetAssociation *SubnetAssociation `json:"subnetAssociation,omitempty"` + // DefaultSecurityRules - Collection of default security rules of the network security group. + DefaultSecurityRules *[]SecurityRule `json:"defaultSecurityRules,omitempty"` + // EffectiveSecurityRules - Collection of effective security rules. + EffectiveSecurityRules *[]EffectiveNetworkSecurityRule `json:"effectiveSecurityRules,omitempty"` +} + +// SecurityRuleListResult response for ListSecurityRule API service call. Retrieves all security rules that belongs +// to a network security group. +type SecurityRuleListResult struct { + autorest.Response `json:"-"` + // Value - The security rules in a network security group. + Value *[]SecurityRule `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// SecurityRuleListResultIterator provides access to a complete listing of SecurityRule values. +type SecurityRuleListResultIterator struct { + i int + page SecurityRuleListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. 
+func (iter *SecurityRuleListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter SecurityRuleListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter SecurityRuleListResultIterator) Response() SecurityRuleListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter SecurityRuleListResultIterator) Value() SecurityRule { + if !iter.page.NotDone() { + return SecurityRule{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (srlr SecurityRuleListResult) IsEmpty() bool { + return srlr.Value == nil || len(*srlr.Value) == 0 +} + +// securityRuleListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (srlr SecurityRuleListResult) securityRuleListResultPreparer() (*http.Request, error) { + if srlr.NextLink == nil || len(to.String(srlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(srlr.NextLink))) +} + +// SecurityRuleListResultPage contains a page of SecurityRule values. +type SecurityRuleListResultPage struct { + fn func(SecurityRuleListResult) (SecurityRuleListResult, error) + srlr SecurityRuleListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *SecurityRuleListResultPage) Next() error { + next, err := page.fn(page.srlr) + if err != nil { + return err + } + page.srlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page SecurityRuleListResultPage) NotDone() bool { + return !page.srlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page SecurityRuleListResultPage) Response() SecurityRuleListResult { + return page.srlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page SecurityRuleListResultPage) Values() []SecurityRule { + if page.srlr.IsEmpty() { + return nil + } + return *page.srlr.Value +} + +// SecurityRulePropertiesFormat security rule resource. +type SecurityRulePropertiesFormat struct { + // Description - A description for this rule. Restricted to 140 chars. + Description *string `json:"description,omitempty"` + // Protocol - Network protocol this rule applies to. Possible values are 'Tcp', 'Udp', and '*'. Possible values include: 'SecurityRuleProtocolTCP', 'SecurityRuleProtocolUDP', 'SecurityRuleProtocolAsterisk' + Protocol SecurityRuleProtocol `json:"protocol,omitempty"` + // SourcePortRange - The source port or range. Integer or range between 0 and 65535. Asterix '*' can also be used to match all ports. + SourcePortRange *string `json:"sourcePortRange,omitempty"` + // DestinationPortRange - The destination port or range. Integer or range between 0 and 65535. Asterix '*' can also be used to match all ports. 
+ DestinationPortRange *string `json:"destinationPortRange,omitempty"` + // SourceAddressPrefix - The CIDR or source IP range. Asterix '*' can also be used to match all source IPs. Default tags such as 'VirtualNetwork', 'AzureLoadBalancer' and 'Internet' can also be used. If this is an ingress rule, specifies where network traffic originates from. + SourceAddressPrefix *string `json:"sourceAddressPrefix,omitempty"` + // SourceAddressPrefixes - The CIDR or source IP ranges. + SourceAddressPrefixes *[]string `json:"sourceAddressPrefixes,omitempty"` + // SourceApplicationSecurityGroups - The application security group specified as source. + SourceApplicationSecurityGroups *[]ApplicationSecurityGroup `json:"sourceApplicationSecurityGroups,omitempty"` + // DestinationAddressPrefix - The destination address prefix. CIDR or destination IP range. Asterix '*' can also be used to match all source IPs. Default tags such as 'VirtualNetwork', 'AzureLoadBalancer' and 'Internet' can also be used. + DestinationAddressPrefix *string `json:"destinationAddressPrefix,omitempty"` + // DestinationAddressPrefixes - The destination address prefixes. CIDR or destination IP ranges. + DestinationAddressPrefixes *[]string `json:"destinationAddressPrefixes,omitempty"` + // DestinationApplicationSecurityGroups - The application security group specified as destination. + DestinationApplicationSecurityGroups *[]ApplicationSecurityGroup `json:"destinationApplicationSecurityGroups,omitempty"` + // SourcePortRanges - The source port ranges. + SourcePortRanges *[]string `json:"sourcePortRanges,omitempty"` + // DestinationPortRanges - The destination port ranges. + DestinationPortRanges *[]string `json:"destinationPortRanges,omitempty"` + // Access - The network traffic is allowed or denied. Possible values are: 'Allow' and 'Deny'. Possible values include: 'SecurityRuleAccessAllow', 'SecurityRuleAccessDeny' + Access SecurityRuleAccess `json:"access,omitempty"` + // Priority - The priority of the rule. The value can be between 100 and 4096. The priority number must be unique for each rule in the collection. The lower the priority number, the higher the priority of the rule. + Priority *int32 `json:"priority,omitempty"` + // Direction - The direction of the rule. The direction specifies if rule will be evaluated on incoming or outcoming traffic. Possible values are: 'Inbound' and 'Outbound'. Possible values include: 'SecurityRuleDirectionInbound', 'SecurityRuleDirectionOutbound' + Direction SecurityRuleDirection `json:"direction,omitempty"` + // ProvisioningState - The provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// SecurityRulesCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type SecurityRulesCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
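SecurityRulePropertiesFormat above carries the full 5-tuple plus access, priority, and direction for a rule; the enum constants named in the field comments (SecurityRuleProtocolTCP, SecurityRuleAccessAllow, SecurityRuleDirectionInbound, and so on) are defined elsewhere in this package. A sketch of assembling an inbound allow-SSH rule; the function name and values are illustrative only:

```
// exampleAllowSSH builds an inbound rule permitting TCP/22 from any source.
// Priority must be between 100 and 4096 and unique within the security group.
func exampleAllowSSH() SecurityRule {
	return SecurityRule{
		Name: to.StringPtr("allow-ssh"),
		SecurityRulePropertiesFormat: &SecurityRulePropertiesFormat{
			Description:              to.StringPtr("Allow inbound SSH"),
			Protocol:                 SecurityRuleProtocolTCP,
			SourcePortRange:          to.StringPtr("*"),
			DestinationPortRange:     to.StringPtr("22"),
			SourceAddressPrefix:      to.StringPtr("*"),
			DestinationAddressPrefix: to.StringPtr("*"),
			Access:                   SecurityRuleAccessAllow,
			Priority:                 to.Int32Ptr(1000),
			Direction:                SecurityRuleDirectionInbound,
		},
	}
}
```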
+func (future *SecurityRulesCreateOrUpdateFuture) Result(client SecurityRulesClient) (sr SecurityRule, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityRulesCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.SecurityRulesCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if sr.Response.Response, err = future.GetResult(sender); err == nil && sr.Response.Response.StatusCode != http.StatusNoContent { + sr, err = client.CreateOrUpdateResponder(sr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityRulesCreateOrUpdateFuture", "Result", sr.Response.Response, "Failure responding to request") + } + } + return +} + +// SecurityRulesDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type SecurityRulesDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *SecurityRulesDeleteFuture) Result(client SecurityRulesClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityRulesDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.SecurityRulesDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// ServiceEndpointPropertiesFormat the service endpoint properties. +type ServiceEndpointPropertiesFormat struct { + // Service - The type of the endpoint service. + Service *string `json:"service,omitempty"` + // Locations - A list of locations. + Locations *[]string `json:"locations,omitempty"` + // ProvisioningState - The provisioning state of the resource. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// String ... +type String struct { + autorest.Response `json:"-"` + Value *string `json:"value,omitempty"` +} + +// Subnet subnet in a virtual network resource. +type Subnet struct { + autorest.Response `json:"-"` + // SubnetPropertiesFormat - Properties of the subnet. + *SubnetPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for Subnet. +func (s Subnet) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if s.SubnetPropertiesFormat != nil { + objectMap["properties"] = s.SubnetPropertiesFormat + } + if s.Name != nil { + objectMap["name"] = s.Name + } + if s.Etag != nil { + objectMap["etag"] = s.Etag + } + if s.ID != nil { + objectMap["id"] = s.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Subnet struct. 
+func (s *Subnet) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var subnetPropertiesFormat SubnetPropertiesFormat + err = json.Unmarshal(*v, &subnetPropertiesFormat) + if err != nil { + return err + } + s.SubnetPropertiesFormat = &subnetPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + s.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + s.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + s.ID = &ID + } + } + } + + return nil +} + +// SubnetAssociation network interface and its custom security rules. +type SubnetAssociation struct { + // ID - Subnet ID. + ID *string `json:"id,omitempty"` + // SecurityRules - Collection of custom security rules. + SecurityRules *[]SecurityRule `json:"securityRules,omitempty"` +} + +// SubnetListResult response for ListSubnets API service callRetrieves all subnet that belongs to a virtual network +type SubnetListResult struct { + autorest.Response `json:"-"` + // Value - The subnets in a virtual network. + Value *[]Subnet `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// SubnetListResultIterator provides access to a complete listing of Subnet values. +type SubnetListResultIterator struct { + i int + page SubnetListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *SubnetListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter SubnetListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter SubnetListResultIterator) Response() SubnetListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter SubnetListResultIterator) Value() Subnet { + if !iter.page.NotDone() { + return Subnet{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (slr SubnetListResult) IsEmpty() bool { + return slr.Value == nil || len(*slr.Value) == 0 +} + +// subnetListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (slr SubnetListResult) subnetListResultPreparer() (*http.Request, error) { + if slr.NextLink == nil || len(to.String(slr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(slr.NextLink))) +} + +// SubnetListResultPage contains a page of Subnet values. +type SubnetListResultPage struct { + fn func(SubnetListResult) (SubnetListResult, error) + slr SubnetListResult +} + +// Next advances to the next page of values. 
If there was an error making +// the request the page does not advance and the error is returned. +func (page *SubnetListResultPage) Next() error { + next, err := page.fn(page.slr) + if err != nil { + return err + } + page.slr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page SubnetListResultPage) NotDone() bool { + return !page.slr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page SubnetListResultPage) Response() SubnetListResult { + return page.slr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page SubnetListResultPage) Values() []Subnet { + if page.slr.IsEmpty() { + return nil + } + return *page.slr.Value +} + +// SubnetPropertiesFormat properties of the subnet. +type SubnetPropertiesFormat struct { + // AddressPrefix - The address prefix for the subnet. + AddressPrefix *string `json:"addressPrefix,omitempty"` + // NetworkSecurityGroup - The reference of the NetworkSecurityGroup resource. + NetworkSecurityGroup *SecurityGroup `json:"networkSecurityGroup,omitempty"` + // RouteTable - The reference of the RouteTable resource. + RouteTable *RouteTable `json:"routeTable,omitempty"` + // ServiceEndpoints - An array of service endpoints. + ServiceEndpoints *[]ServiceEndpointPropertiesFormat `json:"serviceEndpoints,omitempty"` + // IPConfigurations - Gets an array of references to the network interface IP configurations using subnet. + IPConfigurations *[]IPConfiguration `json:"ipConfigurations,omitempty"` + // ResourceNavigationLinks - Gets an array of references to the external resources using subnet. + ResourceNavigationLinks *[]ResourceNavigationLink `json:"resourceNavigationLinks,omitempty"` + // ProvisioningState - The provisioning state of the resource. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// SubnetsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type SubnetsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *SubnetsCreateOrUpdateFuture) Result(client SubnetsClient) (s Subnet, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SubnetsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.SubnetsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if s.Response.Response, err = future.GetResult(sender); err == nil && s.Response.Response.StatusCode != http.StatusNoContent { + s, err = client.CreateOrUpdateResponder(s.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SubnetsCreateOrUpdateFuture", "Result", s.Response.Response, "Failure responding to request") + } + } + return +} + +// SubnetsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type SubnetsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *SubnetsDeleteFuture) Result(client SubnetsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SubnetsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.SubnetsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// SubResource reference to another subresource. +type SubResource struct { + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// TagsObject tags object for patch operations. +type TagsObject struct { + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for TagsObject. +func (toVar TagsObject) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if toVar.Tags != nil { + objectMap["tags"] = toVar.Tags + } + return json.Marshal(objectMap) +} + +// Topology topology of the specified resource group. +type Topology struct { + autorest.Response `json:"-"` + // ID - GUID representing the operation id. + ID *string `json:"id,omitempty"` + // CreatedDateTime - The datetime when the topology was initially created for the resource group. + CreatedDateTime *date.Time `json:"createdDateTime,omitempty"` + // LastModified - The datetime when the topology was last modified. + LastModified *date.Time `json:"lastModified,omitempty"` + Resources *[]TopologyResource `json:"resources,omitempty"` +} + +// TopologyAssociation resources that have an association with the parent resource. +type TopologyAssociation struct { + // Name - The name of the resource that is associated with the parent resource. + Name *string `json:"name,omitempty"` + // ResourceID - The ID of the resource that is associated with the parent resource. + ResourceID *string `json:"resourceId,omitempty"` + // AssociationType - The association type of the child resource to the parent resource. Possible values include: 'Associated', 'Contains' + AssociationType AssociationType `json:"associationType,omitempty"` +} + +// TopologyParameters parameters that define the representation of topology. +type TopologyParameters struct { + // TargetResourceGroupName - The name of the target resource group to perform topology on. + TargetResourceGroupName *string `json:"targetResourceGroupName,omitempty"` + // TargetVirtualNetwork - The reference of the Virtual Network resource. + TargetVirtualNetwork *SubResource `json:"targetVirtualNetwork,omitempty"` + // TargetSubnet - The reference of the Subnet resource. + TargetSubnet *SubResource `json:"targetSubnet,omitempty"` +} + +// TopologyResource the network resource topology information for the given resource group. +type TopologyResource struct { + // Name - Name of the resource. + Name *string `json:"name,omitempty"` + // ID - ID of the resource. + ID *string `json:"id,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Associations - Holds the associations the resource has with other resources in the resource group. + Associations *[]TopologyAssociation `json:"associations,omitempty"` +} + +// TrafficAnalyticsConfigurationProperties parameters that define the configuration of traffic analytics. +type TrafficAnalyticsConfigurationProperties struct { + // Enabled - Flag to enable/disable traffic analytics. 
+ Enabled *bool `json:"enabled,omitempty"` + // WorkspaceID - The resource guid of the attached workspace + WorkspaceID *string `json:"workspaceId,omitempty"` + // WorkspaceRegion - The location of the attached workspace + WorkspaceRegion *string `json:"workspaceRegion,omitempty"` + // WorkspaceResourceID - Resource Id of the attached workspace + WorkspaceResourceID *string `json:"workspaceResourceId,omitempty"` +} + +// TrafficAnalyticsProperties parameters that define the configuration of traffic analytics. +type TrafficAnalyticsProperties struct { + NetworkWatcherFlowAnalyticsConfiguration *TrafficAnalyticsConfigurationProperties `json:"networkWatcherFlowAnalyticsConfiguration,omitempty"` +} + +// TroubleshootingDetails information gained from troubleshooting of specified resource. +type TroubleshootingDetails struct { + // ID - The id of the get troubleshoot operation. + ID *string `json:"id,omitempty"` + // ReasonType - Reason type of failure. + ReasonType *string `json:"reasonType,omitempty"` + // Summary - A summary of troubleshooting. + Summary *string `json:"summary,omitempty"` + // Detail - Details on troubleshooting results. + Detail *string `json:"detail,omitempty"` + // RecommendedActions - List of recommended actions. + RecommendedActions *[]TroubleshootingRecommendedActions `json:"recommendedActions,omitempty"` +} + +// TroubleshootingParameters parameters that define the resource to troubleshoot. +type TroubleshootingParameters struct { + // TargetResourceID - The target resource to troubleshoot. + TargetResourceID *string `json:"targetResourceId,omitempty"` + *TroubleshootingProperties `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for TroubleshootingParameters. +func (tp TroubleshootingParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if tp.TargetResourceID != nil { + objectMap["targetResourceId"] = tp.TargetResourceID + } + if tp.TroubleshootingProperties != nil { + objectMap["properties"] = tp.TroubleshootingProperties + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for TroubleshootingParameters struct. +func (tp *TroubleshootingParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "targetResourceId": + if v != nil { + var targetResourceID string + err = json.Unmarshal(*v, &targetResourceID) + if err != nil { + return err + } + tp.TargetResourceID = &targetResourceID + } + case "properties": + if v != nil { + var troubleshootingProperties TroubleshootingProperties + err = json.Unmarshal(*v, &troubleshootingProperties) + if err != nil { + return err + } + tp.TroubleshootingProperties = &troubleshootingProperties + } + } + } + + return nil +} + +// TroubleshootingProperties storage location provided for troubleshoot. +type TroubleshootingProperties struct { + // StorageID - The ID for the storage account to save the troubleshoot result. + StorageID *string `json:"storageId,omitempty"` + // StoragePath - The path to the blob to save the troubleshoot result in. + StoragePath *string `json:"storagePath,omitempty"` +} + +// TroubleshootingRecommendedActions recommended actions based on discovered issues. +type TroubleshootingRecommendedActions struct { + // ActionID - ID of the recommended action. + ActionID *string `json:"actionId,omitempty"` + // ActionText - Description of recommended actions. 
+ ActionText *string `json:"actionText,omitempty"` + // ActionURI - The uri linking to a documentation for the recommended troubleshooting actions. + ActionURI *string `json:"actionUri,omitempty"` + // ActionURIText - The information from the URI for the recommended troubleshooting actions. + ActionURIText *string `json:"actionUriText,omitempty"` +} + +// TroubleshootingResult troubleshooting information gained from specified resource. +type TroubleshootingResult struct { + autorest.Response `json:"-"` + // StartTime - The start time of the troubleshooting. + StartTime *date.Time `json:"startTime,omitempty"` + // EndTime - The end time of the troubleshooting. + EndTime *date.Time `json:"endTime,omitempty"` + // Code - The result code of the troubleshooting. + Code *string `json:"code,omitempty"` + // Results - Information from troubleshooting. + Results *[]TroubleshootingDetails `json:"results,omitempty"` +} + +// TunnelConnectionHealth virtualNetworkGatewayConnection properties +type TunnelConnectionHealth struct { + // Tunnel - Tunnel name. + Tunnel *string `json:"tunnel,omitempty"` + // ConnectionStatus - Virtual network Gateway connection status. Possible values include: 'VirtualNetworkGatewayConnectionStatusUnknown', 'VirtualNetworkGatewayConnectionStatusConnecting', 'VirtualNetworkGatewayConnectionStatusConnected', 'VirtualNetworkGatewayConnectionStatusNotConnected' + ConnectionStatus VirtualNetworkGatewayConnectionStatus `json:"connectionStatus,omitempty"` + // IngressBytesTransferred - The Ingress Bytes Transferred in this connection + IngressBytesTransferred *int64 `json:"ingressBytesTransferred,omitempty"` + // EgressBytesTransferred - The Egress Bytes Transferred in this connection + EgressBytesTransferred *int64 `json:"egressBytesTransferred,omitempty"` + // LastConnectionEstablishedUtcTime - The time at which connection was established in Utc format. + LastConnectionEstablishedUtcTime *string `json:"lastConnectionEstablishedUtcTime,omitempty"` +} + +// Usage describes network resource usage. +type Usage struct { + // ID - Resource identifier. + ID *string `json:"id,omitempty"` + // Unit - An enum describing the unit of measurement. + Unit *string `json:"unit,omitempty"` + // CurrentValue - The current value of the usage. + CurrentValue *int64 `json:"currentValue,omitempty"` + // Limit - The limit of usage. + Limit *int64 `json:"limit,omitempty"` + // Name - The name of the type of usage. + Name *UsageName `json:"name,omitempty"` +} + +// UsageName the usage names. +type UsageName struct { + // Value - A string describing the resource name. + Value *string `json:"value,omitempty"` + // LocalizedValue - A localized string describing the resource name. + LocalizedValue *string `json:"localizedValue,omitempty"` +} + +// UsagesListResult the list usages operation response. +type UsagesListResult struct { + autorest.Response `json:"-"` + // Value - The list network resource usages. + Value *[]Usage `json:"value,omitempty"` + // NextLink - URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// UsagesListResultIterator provides access to a complete listing of Usage values. +type UsagesListResultIterator struct { + i int + page UsagesListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. 
+func (iter *UsagesListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter UsagesListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter UsagesListResultIterator) Response() UsagesListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter UsagesListResultIterator) Value() Usage { + if !iter.page.NotDone() { + return Usage{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (ulr UsagesListResult) IsEmpty() bool { + return ulr.Value == nil || len(*ulr.Value) == 0 +} + +// usagesListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (ulr UsagesListResult) usagesListResultPreparer() (*http.Request, error) { + if ulr.NextLink == nil || len(to.String(ulr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(ulr.NextLink))) +} + +// UsagesListResultPage contains a page of Usage values. +type UsagesListResultPage struct { + fn func(UsagesListResult) (UsagesListResult, error) + ulr UsagesListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *UsagesListResultPage) Next() error { + next, err := page.fn(page.ulr) + if err != nil { + return err + } + page.ulr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page UsagesListResultPage) NotDone() bool { + return !page.ulr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page UsagesListResultPage) Response() UsagesListResult { + return page.ulr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page UsagesListResultPage) Values() []Usage { + if page.ulr.IsEmpty() { + return nil + } + return *page.ulr.Value +} + +// VerificationIPFlowParameters parameters that define the IP flow to be verified. +type VerificationIPFlowParameters struct { + // TargetResourceID - The ID of the target resource to perform next-hop on. + TargetResourceID *string `json:"targetResourceId,omitempty"` + // Direction - The direction of the packet represented as a 5-tuple. Possible values include: 'Inbound', 'Outbound' + Direction Direction `json:"direction,omitempty"` + // Protocol - Protocol to be verified on. Possible values include: 'ProtocolTCP', 'ProtocolUDP' + Protocol Protocol `json:"protocol,omitempty"` + // LocalPort - The local port. Acceptable values are a single integer in the range (0-65535). Support for * for the source port, which depends on the direction. + LocalPort *string `json:"localPort,omitempty"` + // RemotePort - The remote port. Acceptable values are a single integer in the range (0-65535). Support for * for the source port, which depends on the direction. 
+ RemotePort *string `json:"remotePort,omitempty"` + // LocalIPAddress - The local IP address. Acceptable values are valid IPv4 addresses. + LocalIPAddress *string `json:"localIPAddress,omitempty"` + // RemoteIPAddress - The remote IP address. Acceptable values are valid IPv4 addresses. + RemoteIPAddress *string `json:"remoteIPAddress,omitempty"` + // TargetNicResourceID - The NIC ID. (If VM has multiple NICs and IP forwarding is enabled on any of them, then this parameter must be specified. Otherwise optional). + TargetNicResourceID *string `json:"targetNicResourceId,omitempty"` +} + +// VerificationIPFlowResult results of IP flow verification on the target resource. +type VerificationIPFlowResult struct { + autorest.Response `json:"-"` + // Access - Indicates whether the traffic is allowed or denied. Possible values include: 'Allow', 'Deny' + Access Access `json:"access,omitempty"` + // RuleName - Name of the rule. If input is not matched against any security rule, it is not displayed. + RuleName *string `json:"ruleName,omitempty"` +} + +// VirtualNetwork virtual Network resource. +type VirtualNetwork struct { + autorest.Response `json:"-"` + // VirtualNetworkPropertiesFormat - Properties of the virtual network. + *VirtualNetworkPropertiesFormat `json:"properties,omitempty"` + // Etag - Gets a unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualNetwork. +func (vn VirtualNetwork) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vn.VirtualNetworkPropertiesFormat != nil { + objectMap["properties"] = vn.VirtualNetworkPropertiesFormat + } + if vn.Etag != nil { + objectMap["etag"] = vn.Etag + } + if vn.ID != nil { + objectMap["id"] = vn.ID + } + if vn.Name != nil { + objectMap["name"] = vn.Name + } + if vn.Type != nil { + objectMap["type"] = vn.Type + } + if vn.Location != nil { + objectMap["location"] = vn.Location + } + if vn.Tags != nil { + objectMap["tags"] = vn.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualNetwork struct. 
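VerificationIPFlowParameters above is the request body for Network Watcher's IP flow verify check; Direction and Protocol take the Inbound/Outbound and ProtocolTCP/ProtocolUDP constants mentioned in the field comments. A sketch of an outbound HTTPS check; the function name is hypothetical and the resource ID and addresses are placeholders:

```
// exampleIPFlowCheck asks whether outbound TCP traffic from the VM's local
// address to a remote endpoint on port 443 would be allowed by its NSGs.
func exampleIPFlowCheck() VerificationIPFlowParameters {
	return VerificationIPFlowParameters{
		TargetResourceID: to.StringPtr("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>"),
		Direction:        Outbound,
		Protocol:         ProtocolTCP,
		LocalPort:        to.StringPtr("49152"),
		RemotePort:       to.StringPtr("443"),
		LocalIPAddress:   to.StringPtr("10.0.0.4"),
		RemoteIPAddress:  to.StringPtr("203.0.113.10"),
	}
}
```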
+func (vn *VirtualNetwork) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualNetworkPropertiesFormat VirtualNetworkPropertiesFormat + err = json.Unmarshal(*v, &virtualNetworkPropertiesFormat) + if err != nil { + return err + } + vn.VirtualNetworkPropertiesFormat = &virtualNetworkPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vn.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vn.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vn.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vn.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vn.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vn.Tags = tags + } + } + } + + return nil +} + +// VirtualNetworkConnectionGatewayReference a reference to VirtualNetworkGateway or LocalNetworkGateway resource. +type VirtualNetworkConnectionGatewayReference struct { + // ID - The ID of VirtualNetworkGateway or LocalNetworkGateway resource. + ID *string `json:"id,omitempty"` +} + +// VirtualNetworkGateway a common class for general resource information +type VirtualNetworkGateway struct { + autorest.Response `json:"-"` + // VirtualNetworkGatewayPropertiesFormat - Properties of the virtual network gateway. + *VirtualNetworkGatewayPropertiesFormat `json:"properties,omitempty"` + // Etag - Gets a unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualNetworkGateway. +func (vng VirtualNetworkGateway) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vng.VirtualNetworkGatewayPropertiesFormat != nil { + objectMap["properties"] = vng.VirtualNetworkGatewayPropertiesFormat + } + if vng.Etag != nil { + objectMap["etag"] = vng.Etag + } + if vng.ID != nil { + objectMap["id"] = vng.ID + } + if vng.Name != nil { + objectMap["name"] = vng.Name + } + if vng.Type != nil { + objectMap["type"] = vng.Type + } + if vng.Location != nil { + objectMap["location"] = vng.Location + } + if vng.Tags != nil { + objectMap["tags"] = vng.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualNetworkGateway struct. 
+func (vng *VirtualNetworkGateway) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualNetworkGatewayPropertiesFormat VirtualNetworkGatewayPropertiesFormat + err = json.Unmarshal(*v, &virtualNetworkGatewayPropertiesFormat) + if err != nil { + return err + } + vng.VirtualNetworkGatewayPropertiesFormat = &virtualNetworkGatewayPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vng.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vng.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vng.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vng.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vng.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vng.Tags = tags + } + } + } + + return nil +} + +// VirtualNetworkGatewayConnection a common class for general resource information +type VirtualNetworkGatewayConnection struct { + autorest.Response `json:"-"` + // VirtualNetworkGatewayConnectionPropertiesFormat - Properties of the virtual network gateway connection. + *VirtualNetworkGatewayConnectionPropertiesFormat `json:"properties,omitempty"` + // Etag - Gets a unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualNetworkGatewayConnection. +func (vngc VirtualNetworkGatewayConnection) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vngc.VirtualNetworkGatewayConnectionPropertiesFormat != nil { + objectMap["properties"] = vngc.VirtualNetworkGatewayConnectionPropertiesFormat + } + if vngc.Etag != nil { + objectMap["etag"] = vngc.Etag + } + if vngc.ID != nil { + objectMap["id"] = vngc.ID + } + if vngc.Name != nil { + objectMap["name"] = vngc.Name + } + if vngc.Type != nil { + objectMap["type"] = vngc.Type + } + if vngc.Location != nil { + objectMap["location"] = vngc.Location + } + if vngc.Tags != nil { + objectMap["tags"] = vngc.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualNetworkGatewayConnection struct. 
+func (vngc *VirtualNetworkGatewayConnection) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualNetworkGatewayConnectionPropertiesFormat VirtualNetworkGatewayConnectionPropertiesFormat + err = json.Unmarshal(*v, &virtualNetworkGatewayConnectionPropertiesFormat) + if err != nil { + return err + } + vngc.VirtualNetworkGatewayConnectionPropertiesFormat = &virtualNetworkGatewayConnectionPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vngc.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vngc.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vngc.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vngc.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vngc.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vngc.Tags = tags + } + } + } + + return nil +} + +// VirtualNetworkGatewayConnectionListEntity a common class for general resource information +type VirtualNetworkGatewayConnectionListEntity struct { + autorest.Response `json:"-"` + // VirtualNetworkGatewayConnectionListEntityPropertiesFormat - Properties of the virtual network gateway connection. + *VirtualNetworkGatewayConnectionListEntityPropertiesFormat `json:"properties,omitempty"` + // Etag - Gets a unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for VirtualNetworkGatewayConnectionListEntity. +func (vngcle VirtualNetworkGatewayConnectionListEntity) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vngcle.VirtualNetworkGatewayConnectionListEntityPropertiesFormat != nil { + objectMap["properties"] = vngcle.VirtualNetworkGatewayConnectionListEntityPropertiesFormat + } + if vngcle.Etag != nil { + objectMap["etag"] = vngcle.Etag + } + if vngcle.ID != nil { + objectMap["id"] = vngcle.ID + } + if vngcle.Name != nil { + objectMap["name"] = vngcle.Name + } + if vngcle.Type != nil { + objectMap["type"] = vngcle.Type + } + if vngcle.Location != nil { + objectMap["location"] = vngcle.Location + } + if vngcle.Tags != nil { + objectMap["tags"] = vngcle.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualNetworkGatewayConnectionListEntity struct. 
+func (vngcle *VirtualNetworkGatewayConnectionListEntity) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualNetworkGatewayConnectionListEntityPropertiesFormat VirtualNetworkGatewayConnectionListEntityPropertiesFormat + err = json.Unmarshal(*v, &virtualNetworkGatewayConnectionListEntityPropertiesFormat) + if err != nil { + return err + } + vngcle.VirtualNetworkGatewayConnectionListEntityPropertiesFormat = &virtualNetworkGatewayConnectionListEntityPropertiesFormat + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vngcle.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vngcle.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vngcle.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + vngcle.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + vngcle.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + vngcle.Tags = tags + } + } + } + + return nil +} + +// VirtualNetworkGatewayConnectionListEntityPropertiesFormat virtualNetworkGatewayConnection properties +type VirtualNetworkGatewayConnectionListEntityPropertiesFormat struct { + // AuthorizationKey - The authorizationKey. + AuthorizationKey *string `json:"authorizationKey,omitempty"` + // VirtualNetworkGateway1 - The reference to virtual network gateway resource. + VirtualNetworkGateway1 *VirtualNetworkConnectionGatewayReference `json:"virtualNetworkGateway1,omitempty"` + // VirtualNetworkGateway2 - The reference to virtual network gateway resource. + VirtualNetworkGateway2 *VirtualNetworkConnectionGatewayReference `json:"virtualNetworkGateway2,omitempty"` + // LocalNetworkGateway2 - The reference to local network gateway resource. + LocalNetworkGateway2 *VirtualNetworkConnectionGatewayReference `json:"localNetworkGateway2,omitempty"` + // ConnectionType - Gateway connection type. Possible values are: 'Ipsec','Vnet2Vnet','ExpressRoute', and 'VPNClient. Possible values include: 'IPsec', 'Vnet2Vnet', 'ExpressRoute', 'VPNClient' + ConnectionType VirtualNetworkGatewayConnectionType `json:"connectionType,omitempty"` + // RoutingWeight - The routing weight. + RoutingWeight *int32 `json:"routingWeight,omitempty"` + // SharedKey - The IPSec shared key. + SharedKey *string `json:"sharedKey,omitempty"` + // ConnectionStatus - Virtual network Gateway connection status. Possible values are 'Unknown', 'Connecting', 'Connected' and 'NotConnected'. Possible values include: 'VirtualNetworkGatewayConnectionStatusUnknown', 'VirtualNetworkGatewayConnectionStatusConnecting', 'VirtualNetworkGatewayConnectionStatusConnected', 'VirtualNetworkGatewayConnectionStatusNotConnected' + ConnectionStatus VirtualNetworkGatewayConnectionStatus `json:"connectionStatus,omitempty"` + // TunnelConnectionStatus - Collection of all tunnels' connection health status. 
+ TunnelConnectionStatus *[]TunnelConnectionHealth `json:"tunnelConnectionStatus,omitempty"` + // EgressBytesTransferred - The egress bytes transferred in this connection. + EgressBytesTransferred *int64 `json:"egressBytesTransferred,omitempty"` + // IngressBytesTransferred - The ingress bytes transferred in this connection. + IngressBytesTransferred *int64 `json:"ingressBytesTransferred,omitempty"` + // Peer - The reference to peerings resource. + Peer *SubResource `json:"peer,omitempty"` + // EnableBgp - EnableBgp flag + EnableBgp *bool `json:"enableBgp,omitempty"` + // UsePolicyBasedTrafficSelectors - Enable policy-based traffic selectors. + UsePolicyBasedTrafficSelectors *bool `json:"usePolicyBasedTrafficSelectors,omitempty"` + // IpsecPolicies - The IPSec Policies to be considered by this connection. + IpsecPolicies *[]IpsecPolicy `json:"ipsecPolicies,omitempty"` + // ResourceGUID - The resource GUID property of the VirtualNetworkGatewayConnection resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the VirtualNetworkGatewayConnection resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VirtualNetworkGatewayConnectionListResult response for the ListVirtualNetworkGatewayConnections API service call +type VirtualNetworkGatewayConnectionListResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of VirtualNetworkGatewayConnection resources that exists in a resource group. + Value *[]VirtualNetworkGatewayConnection `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualNetworkGatewayConnectionListResultIterator provides access to a complete listing of +// VirtualNetworkGatewayConnection values. +type VirtualNetworkGatewayConnectionListResultIterator struct { + i int + page VirtualNetworkGatewayConnectionListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualNetworkGatewayConnectionListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualNetworkGatewayConnectionListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualNetworkGatewayConnectionListResultIterator) Response() VirtualNetworkGatewayConnectionListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualNetworkGatewayConnectionListResultIterator) Value() VirtualNetworkGatewayConnection { + if !iter.page.NotDone() { + return VirtualNetworkGatewayConnection{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vngclr VirtualNetworkGatewayConnectionListResult) IsEmpty() bool { + return vngclr.Value == nil || len(*vngclr.Value) == 0 +} + +// virtualNetworkGatewayConnectionListResultPreparer prepares a request to retrieve the next set of results. 
+// It returns nil if no more results exist. +func (vngclr VirtualNetworkGatewayConnectionListResult) virtualNetworkGatewayConnectionListResultPreparer() (*http.Request, error) { + if vngclr.NextLink == nil || len(to.String(vngclr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vngclr.NextLink))) +} + +// VirtualNetworkGatewayConnectionListResultPage contains a page of VirtualNetworkGatewayConnection values. +type VirtualNetworkGatewayConnectionListResultPage struct { + fn func(VirtualNetworkGatewayConnectionListResult) (VirtualNetworkGatewayConnectionListResult, error) + vngclr VirtualNetworkGatewayConnectionListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualNetworkGatewayConnectionListResultPage) Next() error { + next, err := page.fn(page.vngclr) + if err != nil { + return err + } + page.vngclr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualNetworkGatewayConnectionListResultPage) NotDone() bool { + return !page.vngclr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualNetworkGatewayConnectionListResultPage) Response() VirtualNetworkGatewayConnectionListResult { + return page.vngclr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualNetworkGatewayConnectionListResultPage) Values() []VirtualNetworkGatewayConnection { + if page.vngclr.IsEmpty() { + return nil + } + return *page.vngclr.Value +} + +// VirtualNetworkGatewayConnectionPropertiesFormat virtualNetworkGatewayConnection properties +type VirtualNetworkGatewayConnectionPropertiesFormat struct { + // AuthorizationKey - The authorizationKey. + AuthorizationKey *string `json:"authorizationKey,omitempty"` + // VirtualNetworkGateway1 - The reference to virtual network gateway resource. + VirtualNetworkGateway1 *VirtualNetworkGateway `json:"virtualNetworkGateway1,omitempty"` + // VirtualNetworkGateway2 - The reference to virtual network gateway resource. + VirtualNetworkGateway2 *VirtualNetworkGateway `json:"virtualNetworkGateway2,omitempty"` + // LocalNetworkGateway2 - The reference to local network gateway resource. + LocalNetworkGateway2 *LocalNetworkGateway `json:"localNetworkGateway2,omitempty"` + // ConnectionType - Gateway connection type. Possible values are: 'Ipsec','Vnet2Vnet','ExpressRoute', and 'VPNClient. Possible values include: 'IPsec', 'Vnet2Vnet', 'ExpressRoute', 'VPNClient' + ConnectionType VirtualNetworkGatewayConnectionType `json:"connectionType,omitempty"` + // RoutingWeight - The routing weight. + RoutingWeight *int32 `json:"routingWeight,omitempty"` + // SharedKey - The IPSec shared key. + SharedKey *string `json:"sharedKey,omitempty"` + // ConnectionStatus - Virtual network Gateway connection status. Possible values are 'Unknown', 'Connecting', 'Connected' and 'NotConnected'. Possible values include: 'VirtualNetworkGatewayConnectionStatusUnknown', 'VirtualNetworkGatewayConnectionStatusConnecting', 'VirtualNetworkGatewayConnectionStatusConnected', 'VirtualNetworkGatewayConnectionStatusNotConnected' + ConnectionStatus VirtualNetworkGatewayConnectionStatus `json:"connectionStatus,omitempty"` + // TunnelConnectionStatus - Collection of all tunnels' connection health status. 
+ TunnelConnectionStatus *[]TunnelConnectionHealth `json:"tunnelConnectionStatus,omitempty"` + // EgressBytesTransferred - The egress bytes transferred in this connection. + EgressBytesTransferred *int64 `json:"egressBytesTransferred,omitempty"` + // IngressBytesTransferred - The ingress bytes transferred in this connection. + IngressBytesTransferred *int64 `json:"ingressBytesTransferred,omitempty"` + // Peer - The reference to peerings resource. + Peer *SubResource `json:"peer,omitempty"` + // EnableBgp - EnableBgp flag + EnableBgp *bool `json:"enableBgp,omitempty"` + // UsePolicyBasedTrafficSelectors - Enable policy-based traffic selectors. + UsePolicyBasedTrafficSelectors *bool `json:"usePolicyBasedTrafficSelectors,omitempty"` + // IpsecPolicies - The IPSec Policies to be considered by this connection. + IpsecPolicies *[]IpsecPolicy `json:"ipsecPolicies,omitempty"` + // ResourceGUID - The resource GUID property of the VirtualNetworkGatewayConnection resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the VirtualNetworkGatewayConnection resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VirtualNetworkGatewayConnectionsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of +// a long-running operation. +type VirtualNetworkGatewayConnectionsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewayConnectionsCreateOrUpdateFuture) Result(client VirtualNetworkGatewayConnectionsClient) (vngc VirtualNetworkGatewayConnection, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewayConnectionsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vngc.Response.Response, err = future.GetResult(sender); err == nil && vngc.Response.Response.StatusCode != http.StatusNoContent { + vngc, err = client.CreateOrUpdateResponder(vngc.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsCreateOrUpdateFuture", "Result", vngc.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewayConnectionsDeleteFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkGatewayConnectionsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualNetworkGatewayConnectionsDeleteFuture) Result(client VirtualNetworkGatewayConnectionsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewayConnectionsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// VirtualNetworkGatewayConnectionsResetSharedKeyFuture an abstraction for monitoring and retrieving the results of +// a long-running operation. +type VirtualNetworkGatewayConnectionsResetSharedKeyFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewayConnectionsResetSharedKeyFuture) Result(client VirtualNetworkGatewayConnectionsClient) (crsk ConnectionResetSharedKey, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsResetSharedKeyFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewayConnectionsResetSharedKeyFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if crsk.Response.Response, err = future.GetResult(sender); err == nil && crsk.Response.Response.StatusCode != http.StatusNoContent { + crsk, err = client.ResetSharedKeyResponder(crsk.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsResetSharedKeyFuture", "Result", crsk.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewayConnectionsSetSharedKeyFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkGatewayConnectionsSetSharedKeyFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualNetworkGatewayConnectionsSetSharedKeyFuture) Result(client VirtualNetworkGatewayConnectionsClient) (csk ConnectionSharedKey, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsSetSharedKeyFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewayConnectionsSetSharedKeyFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if csk.Response.Response, err = future.GetResult(sender); err == nil && csk.Response.Response.StatusCode != http.StatusNoContent { + csk, err = client.SetSharedKeyResponder(csk.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsSetSharedKeyFuture", "Result", csk.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewayConnectionsUpdateTagsFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkGatewayConnectionsUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewayConnectionsUpdateTagsFuture) Result(client VirtualNetworkGatewayConnectionsClient) (vngcle VirtualNetworkGatewayConnectionListEntity, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewayConnectionsUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vngcle.Response.Response, err = future.GetResult(sender); err == nil && vngcle.Response.Response.StatusCode != http.StatusNoContent { + vngcle, err = client.UpdateTagsResponder(vngcle.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsUpdateTagsFuture", "Result", vngcle.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewayIPConfiguration IP configuration for virtual network gateway +type VirtualNetworkGatewayIPConfiguration struct { + // VirtualNetworkGatewayIPConfigurationPropertiesFormat - Properties of the virtual network gateway ip configuration. + *VirtualNetworkGatewayIPConfigurationPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualNetworkGatewayIPConfiguration. 
+func (vngic VirtualNetworkGatewayIPConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vngic.VirtualNetworkGatewayIPConfigurationPropertiesFormat != nil { + objectMap["properties"] = vngic.VirtualNetworkGatewayIPConfigurationPropertiesFormat + } + if vngic.Name != nil { + objectMap["name"] = vngic.Name + } + if vngic.Etag != nil { + objectMap["etag"] = vngic.Etag + } + if vngic.ID != nil { + objectMap["id"] = vngic.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualNetworkGatewayIPConfiguration struct. +func (vngic *VirtualNetworkGatewayIPConfiguration) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualNetworkGatewayIPConfigurationPropertiesFormat VirtualNetworkGatewayIPConfigurationPropertiesFormat + err = json.Unmarshal(*v, &virtualNetworkGatewayIPConfigurationPropertiesFormat) + if err != nil { + return err + } + vngic.VirtualNetworkGatewayIPConfigurationPropertiesFormat = &virtualNetworkGatewayIPConfigurationPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vngic.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vngic.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vngic.ID = &ID + } + } + } + + return nil +} + +// VirtualNetworkGatewayIPConfigurationPropertiesFormat properties of VirtualNetworkGatewayIPConfiguration +type VirtualNetworkGatewayIPConfigurationPropertiesFormat struct { + // PrivateIPAllocationMethod - The private IP allocation method. Possible values are: 'Static' and 'Dynamic'. Possible values include: 'Static', 'Dynamic' + PrivateIPAllocationMethod IPAllocationMethod `json:"privateIPAllocationMethod,omitempty"` + // Subnet - The reference of the subnet resource. + Subnet *SubResource `json:"subnet,omitempty"` + // PublicIPAddress - The reference of the public IP resource. + PublicIPAddress *SubResource `json:"publicIPAddress,omitempty"` + // ProvisioningState - The provisioning state of the public IP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VirtualNetworkGatewayListConnectionsResult response for the VirtualNetworkGatewayListConnections API service +// call +type VirtualNetworkGatewayListConnectionsResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of VirtualNetworkGatewayConnection resources that exists in a resource group. + Value *[]VirtualNetworkGatewayConnectionListEntity `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualNetworkGatewayListConnectionsResultIterator provides access to a complete listing of +// VirtualNetworkGatewayConnectionListEntity values. +type VirtualNetworkGatewayListConnectionsResultIterator struct { + i int + page VirtualNetworkGatewayListConnectionsResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. 
+func (iter *VirtualNetworkGatewayListConnectionsResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualNetworkGatewayListConnectionsResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualNetworkGatewayListConnectionsResultIterator) Response() VirtualNetworkGatewayListConnectionsResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualNetworkGatewayListConnectionsResultIterator) Value() VirtualNetworkGatewayConnectionListEntity { + if !iter.page.NotDone() { + return VirtualNetworkGatewayConnectionListEntity{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vnglcr VirtualNetworkGatewayListConnectionsResult) IsEmpty() bool { + return vnglcr.Value == nil || len(*vnglcr.Value) == 0 +} + +// virtualNetworkGatewayListConnectionsResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vnglcr VirtualNetworkGatewayListConnectionsResult) virtualNetworkGatewayListConnectionsResultPreparer() (*http.Request, error) { + if vnglcr.NextLink == nil || len(to.String(vnglcr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vnglcr.NextLink))) +} + +// VirtualNetworkGatewayListConnectionsResultPage contains a page of VirtualNetworkGatewayConnectionListEntity +// values. +type VirtualNetworkGatewayListConnectionsResultPage struct { + fn func(VirtualNetworkGatewayListConnectionsResult) (VirtualNetworkGatewayListConnectionsResult, error) + vnglcr VirtualNetworkGatewayListConnectionsResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualNetworkGatewayListConnectionsResultPage) Next() error { + next, err := page.fn(page.vnglcr) + if err != nil { + return err + } + page.vnglcr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualNetworkGatewayListConnectionsResultPage) NotDone() bool { + return !page.vnglcr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualNetworkGatewayListConnectionsResultPage) Response() VirtualNetworkGatewayListConnectionsResult { + return page.vnglcr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualNetworkGatewayListConnectionsResultPage) Values() []VirtualNetworkGatewayConnectionListEntity { + if page.vnglcr.IsEmpty() { + return nil + } + return *page.vnglcr.Value +} + +// VirtualNetworkGatewayListResult response for the ListVirtualNetworkGateways API service call. +type VirtualNetworkGatewayListResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of VirtualNetworkGateway resources that exists in a resource group. 
+ Value *[]VirtualNetworkGateway `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualNetworkGatewayListResultIterator provides access to a complete listing of VirtualNetworkGateway values. +type VirtualNetworkGatewayListResultIterator struct { + i int + page VirtualNetworkGatewayListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualNetworkGatewayListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualNetworkGatewayListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualNetworkGatewayListResultIterator) Response() VirtualNetworkGatewayListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualNetworkGatewayListResultIterator) Value() VirtualNetworkGateway { + if !iter.page.NotDone() { + return VirtualNetworkGateway{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vnglr VirtualNetworkGatewayListResult) IsEmpty() bool { + return vnglr.Value == nil || len(*vnglr.Value) == 0 +} + +// virtualNetworkGatewayListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vnglr VirtualNetworkGatewayListResult) virtualNetworkGatewayListResultPreparer() (*http.Request, error) { + if vnglr.NextLink == nil || len(to.String(vnglr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vnglr.NextLink))) +} + +// VirtualNetworkGatewayListResultPage contains a page of VirtualNetworkGateway values. +type VirtualNetworkGatewayListResultPage struct { + fn func(VirtualNetworkGatewayListResult) (VirtualNetworkGatewayListResult, error) + vnglr VirtualNetworkGatewayListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualNetworkGatewayListResultPage) Next() error { + next, err := page.fn(page.vnglr) + if err != nil { + return err + } + page.vnglr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualNetworkGatewayListResultPage) NotDone() bool { + return !page.vnglr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualNetworkGatewayListResultPage) Response() VirtualNetworkGatewayListResult { + return page.vnglr +} + +// Values returns the slice of values for the current page or nil if there are no values. 
+func (page VirtualNetworkGatewayListResultPage) Values() []VirtualNetworkGateway { + if page.vnglr.IsEmpty() { + return nil + } + return *page.vnglr.Value +} + +// VirtualNetworkGatewayPropertiesFormat virtualNetworkGateway properties +type VirtualNetworkGatewayPropertiesFormat struct { + // IPConfigurations - IP configurations for virtual network gateway. + IPConfigurations *[]VirtualNetworkGatewayIPConfiguration `json:"ipConfigurations,omitempty"` + // GatewayType - The type of this virtual network gateway. Possible values are: 'Vpn' and 'ExpressRoute'. Possible values include: 'VirtualNetworkGatewayTypeVpn', 'VirtualNetworkGatewayTypeExpressRoute' + GatewayType VirtualNetworkGatewayType `json:"gatewayType,omitempty"` + // VpnType - The type of this virtual network gateway. Possible values are: 'PolicyBased' and 'RouteBased'. Possible values include: 'PolicyBased', 'RouteBased' + VpnType VpnType `json:"vpnType,omitempty"` + // EnableBgp - Whether BGP is enabled for this virtual network gateway or not. + EnableBgp *bool `json:"enableBgp,omitempty"` + // ActiveActive - ActiveActive flag + ActiveActive *bool `json:"activeActive,omitempty"` + // GatewayDefaultSite - The reference of the LocalNetworkGateway resource which represents local network site having default routes. Assign Null value in case of removing existing default site setting. + GatewayDefaultSite *SubResource `json:"gatewayDefaultSite,omitempty"` + // Sku - The reference of the VirtualNetworkGatewaySku resource which represents the SKU selected for Virtual network gateway. + Sku *VirtualNetworkGatewaySku `json:"sku,omitempty"` + // VpnClientConfiguration - The reference of the VpnClientConfiguration resource which represents the P2S VpnClient configurations. + VpnClientConfiguration *VpnClientConfiguration `json:"vpnClientConfiguration,omitempty"` + // BgpSettings - Virtual network gateway's BGP speaker settings. + BgpSettings *BgpSettings `json:"bgpSettings,omitempty"` + // ResourceGUID - The resource GUID property of the VirtualNetworkGateway resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the VirtualNetworkGateway resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VirtualNetworkGatewaysCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkGatewaysCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualNetworkGatewaysCreateOrUpdateFuture) Result(client VirtualNetworkGatewaysClient) (vng VirtualNetworkGateway, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vng.Response.Response, err = future.GetResult(sender); err == nil && vng.Response.Response.StatusCode != http.StatusNoContent { + vng, err = client.CreateOrUpdateResponder(vng.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysCreateOrUpdateFuture", "Result", vng.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaysDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualNetworkGatewaysDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewaysDeleteFuture) Result(client VirtualNetworkGatewaysClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// VirtualNetworkGatewaysGeneratevpnclientpackageFuture an abstraction for monitoring and retrieving the results of +// a long-running operation. +type VirtualNetworkGatewaysGeneratevpnclientpackageFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewaysGeneratevpnclientpackageFuture) Result(client VirtualNetworkGatewaysClient) (s String, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGeneratevpnclientpackageFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysGeneratevpnclientpackageFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if s.Response.Response, err = future.GetResult(sender); err == nil && s.Response.Response.StatusCode != http.StatusNoContent { + s, err = client.GeneratevpnclientpackageResponder(s.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGeneratevpnclientpackageFuture", "Result", s.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaysGenerateVpnProfileFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. 
+type VirtualNetworkGatewaysGenerateVpnProfileFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewaysGenerateVpnProfileFuture) Result(client VirtualNetworkGatewaysClient) (s String, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGenerateVpnProfileFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysGenerateVpnProfileFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if s.Response.Response, err = future.GetResult(sender); err == nil && s.Response.Response.StatusCode != http.StatusNoContent { + s, err = client.GenerateVpnProfileResponder(s.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGenerateVpnProfileFuture", "Result", s.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaysGetAdvertisedRoutesFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkGatewaysGetAdvertisedRoutesFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewaysGetAdvertisedRoutesFuture) Result(client VirtualNetworkGatewaysClient) (grlr GatewayRouteListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetAdvertisedRoutesFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysGetAdvertisedRoutesFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if grlr.Response.Response, err = future.GetResult(sender); err == nil && grlr.Response.Response.StatusCode != http.StatusNoContent { + grlr, err = client.GetAdvertisedRoutesResponder(grlr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetAdvertisedRoutesFuture", "Result", grlr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaysGetBgpPeerStatusFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkGatewaysGetBgpPeerStatusFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualNetworkGatewaysGetBgpPeerStatusFuture) Result(client VirtualNetworkGatewaysClient) (bpslr BgpPeerStatusListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetBgpPeerStatusFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysGetBgpPeerStatusFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if bpslr.Response.Response, err = future.GetResult(sender); err == nil && bpslr.Response.Response.StatusCode != http.StatusNoContent { + bpslr, err = client.GetBgpPeerStatusResponder(bpslr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetBgpPeerStatusFuture", "Result", bpslr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaysGetLearnedRoutesFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkGatewaysGetLearnedRoutesFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewaysGetLearnedRoutesFuture) Result(client VirtualNetworkGatewaysClient) (grlr GatewayRouteListResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetLearnedRoutesFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysGetLearnedRoutesFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if grlr.Response.Response, err = future.GetResult(sender); err == nil && grlr.Response.Response.StatusCode != http.StatusNoContent { + grlr, err = client.GetLearnedRoutesResponder(grlr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetLearnedRoutesFuture", "Result", grlr.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaysGetVpnProfilePackageURLFuture an abstraction for monitoring and retrieving the results of +// a long-running operation. +type VirtualNetworkGatewaysGetVpnProfilePackageURLFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *VirtualNetworkGatewaysGetVpnProfilePackageURLFuture) Result(client VirtualNetworkGatewaysClient) (s String, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetVpnProfilePackageURLFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysGetVpnProfilePackageURLFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if s.Response.Response, err = future.GetResult(sender); err == nil && s.Response.Response.StatusCode != http.StatusNoContent { + s, err = client.GetVpnProfilePackageURLResponder(s.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysGetVpnProfilePackageURLFuture", "Result", s.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaySku virtualNetworkGatewaySku details +type VirtualNetworkGatewaySku struct { + // Name - Gateway SKU name. Possible values include: 'VirtualNetworkGatewaySkuNameBasic', 'VirtualNetworkGatewaySkuNameHighPerformance', 'VirtualNetworkGatewaySkuNameStandard', 'VirtualNetworkGatewaySkuNameUltraPerformance', 'VirtualNetworkGatewaySkuNameVpnGw1', 'VirtualNetworkGatewaySkuNameVpnGw2', 'VirtualNetworkGatewaySkuNameVpnGw3' + Name VirtualNetworkGatewaySkuName `json:"name,omitempty"` + // Tier - Gateway SKU tier. Possible values include: 'VirtualNetworkGatewaySkuTierBasic', 'VirtualNetworkGatewaySkuTierHighPerformance', 'VirtualNetworkGatewaySkuTierStandard', 'VirtualNetworkGatewaySkuTierUltraPerformance', 'VirtualNetworkGatewaySkuTierVpnGw1', 'VirtualNetworkGatewaySkuTierVpnGw2', 'VirtualNetworkGatewaySkuTierVpnGw3' + Tier VirtualNetworkGatewaySkuTier `json:"tier,omitempty"` + // Capacity - The capacity. + Capacity *int32 `json:"capacity,omitempty"` +} + +// VirtualNetworkGatewaysResetFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualNetworkGatewaysResetFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewaysResetFuture) Result(client VirtualNetworkGatewaysClient) (vng VirtualNetworkGateway, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysResetFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysResetFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vng.Response.Response, err = future.GetResult(sender); err == nil && vng.Response.Response.StatusCode != http.StatusNoContent { + vng, err = client.ResetResponder(vng.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysResetFuture", "Result", vng.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkGatewaysUpdateTagsFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. 
+type VirtualNetworkGatewaysUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkGatewaysUpdateTagsFuture) Result(client VirtualNetworkGatewaysClient) (vng VirtualNetworkGateway, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkGatewaysUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vng.Response.Response, err = future.GetResult(sender); err == nil && vng.Response.Response.StatusCode != http.StatusNoContent { + vng, err = client.UpdateTagsResponder(vng.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysUpdateTagsFuture", "Result", vng.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkListResult response for the ListVirtualNetworks API service call. +type VirtualNetworkListResult struct { + autorest.Response `json:"-"` + // Value - Gets a list of VirtualNetwork resources in a resource group. + Value *[]VirtualNetwork `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualNetworkListResultIterator provides access to a complete listing of VirtualNetwork values. +type VirtualNetworkListResultIterator struct { + i int + page VirtualNetworkListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualNetworkListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualNetworkListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualNetworkListResultIterator) Response() VirtualNetworkListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualNetworkListResultIterator) Value() VirtualNetwork { + if !iter.page.NotDone() { + return VirtualNetwork{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vnlr VirtualNetworkListResult) IsEmpty() bool { + return vnlr.Value == nil || len(*vnlr.Value) == 0 +} + +// virtualNetworkListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
+func (vnlr VirtualNetworkListResult) virtualNetworkListResultPreparer() (*http.Request, error) { + if vnlr.NextLink == nil || len(to.String(vnlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vnlr.NextLink))) +} + +// VirtualNetworkListResultPage contains a page of VirtualNetwork values. +type VirtualNetworkListResultPage struct { + fn func(VirtualNetworkListResult) (VirtualNetworkListResult, error) + vnlr VirtualNetworkListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualNetworkListResultPage) Next() error { + next, err := page.fn(page.vnlr) + if err != nil { + return err + } + page.vnlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualNetworkListResultPage) NotDone() bool { + return !page.vnlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualNetworkListResultPage) Response() VirtualNetworkListResult { + return page.vnlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualNetworkListResultPage) Values() []VirtualNetwork { + if page.vnlr.IsEmpty() { + return nil + } + return *page.vnlr.Value +} + +// VirtualNetworkListUsageResult response for the virtual networks GetUsage API service call. +type VirtualNetworkListUsageResult struct { + autorest.Response `json:"-"` + // Value - VirtualNetwork usage stats. + Value *[]VirtualNetworkUsage `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualNetworkListUsageResultIterator provides access to a complete listing of VirtualNetworkUsage values. +type VirtualNetworkListUsageResultIterator struct { + i int + page VirtualNetworkListUsageResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualNetworkListUsageResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualNetworkListUsageResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualNetworkListUsageResultIterator) Response() VirtualNetworkListUsageResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualNetworkListUsageResultIterator) Value() VirtualNetworkUsage { + if !iter.page.NotDone() { + return VirtualNetworkUsage{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vnlur VirtualNetworkListUsageResult) IsEmpty() bool { + return vnlur.Value == nil || len(*vnlur.Value) == 0 +} + +// virtualNetworkListUsageResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. 
+func (vnlur VirtualNetworkListUsageResult) virtualNetworkListUsageResultPreparer() (*http.Request, error) { + if vnlur.NextLink == nil || len(to.String(vnlur.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vnlur.NextLink))) +} + +// VirtualNetworkListUsageResultPage contains a page of VirtualNetworkUsage values. +type VirtualNetworkListUsageResultPage struct { + fn func(VirtualNetworkListUsageResult) (VirtualNetworkListUsageResult, error) + vnlur VirtualNetworkListUsageResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualNetworkListUsageResultPage) Next() error { + next, err := page.fn(page.vnlur) + if err != nil { + return err + } + page.vnlur = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualNetworkListUsageResultPage) NotDone() bool { + return !page.vnlur.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualNetworkListUsageResultPage) Response() VirtualNetworkListUsageResult { + return page.vnlur +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualNetworkListUsageResultPage) Values() []VirtualNetworkUsage { + if page.vnlur.IsEmpty() { + return nil + } + return *page.vnlur.Value +} + +// VirtualNetworkPeering peerings in a virtual network resource. +type VirtualNetworkPeering struct { + autorest.Response `json:"-"` + // VirtualNetworkPeeringPropertiesFormat - Properties of the virtual network peering. + *VirtualNetworkPeeringPropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VirtualNetworkPeering. +func (vnp VirtualNetworkPeering) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vnp.VirtualNetworkPeeringPropertiesFormat != nil { + objectMap["properties"] = vnp.VirtualNetworkPeeringPropertiesFormat + } + if vnp.Name != nil { + objectMap["name"] = vnp.Name + } + if vnp.Etag != nil { + objectMap["etag"] = vnp.Etag + } + if vnp.ID != nil { + objectMap["id"] = vnp.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VirtualNetworkPeering struct. 
+func (vnp *VirtualNetworkPeering) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var virtualNetworkPeeringPropertiesFormat VirtualNetworkPeeringPropertiesFormat + err = json.Unmarshal(*v, &virtualNetworkPeeringPropertiesFormat) + if err != nil { + return err + } + vnp.VirtualNetworkPeeringPropertiesFormat = &virtualNetworkPeeringPropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vnp.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vnp.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vnp.ID = &ID + } + } + } + + return nil +} + +// VirtualNetworkPeeringListResult response for ListSubnets API service call. Retrieves all subnets that belong to +// a virtual network. +type VirtualNetworkPeeringListResult struct { + autorest.Response `json:"-"` + // Value - The peerings in a virtual network. + Value *[]VirtualNetworkPeering `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// VirtualNetworkPeeringListResultIterator provides access to a complete listing of VirtualNetworkPeering values. +type VirtualNetworkPeeringListResultIterator struct { + i int + page VirtualNetworkPeeringListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *VirtualNetworkPeeringListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter VirtualNetworkPeeringListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter VirtualNetworkPeeringListResultIterator) Response() VirtualNetworkPeeringListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter VirtualNetworkPeeringListResultIterator) Value() VirtualNetworkPeering { + if !iter.page.NotDone() { + return VirtualNetworkPeering{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (vnplr VirtualNetworkPeeringListResult) IsEmpty() bool { + return vnplr.Value == nil || len(*vnplr.Value) == 0 +} + +// virtualNetworkPeeringListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (vnplr VirtualNetworkPeeringListResult) virtualNetworkPeeringListResultPreparer() (*http.Request, error) { + if vnplr.NextLink == nil || len(to.String(vnplr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(vnplr.NextLink))) +} + +// VirtualNetworkPeeringListResultPage contains a page of VirtualNetworkPeering values. 
+type VirtualNetworkPeeringListResultPage struct { + fn func(VirtualNetworkPeeringListResult) (VirtualNetworkPeeringListResult, error) + vnplr VirtualNetworkPeeringListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *VirtualNetworkPeeringListResultPage) Next() error { + next, err := page.fn(page.vnplr) + if err != nil { + return err + } + page.vnplr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page VirtualNetworkPeeringListResultPage) NotDone() bool { + return !page.vnplr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page VirtualNetworkPeeringListResultPage) Response() VirtualNetworkPeeringListResult { + return page.vnplr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page VirtualNetworkPeeringListResultPage) Values() []VirtualNetworkPeering { + if page.vnplr.IsEmpty() { + return nil + } + return *page.vnplr.Value +} + +// VirtualNetworkPeeringPropertiesFormat properties of the virtual network peering. +type VirtualNetworkPeeringPropertiesFormat struct { + // AllowVirtualNetworkAccess - Whether the VMs in the linked virtual network space would be able to access all the VMs in local Virtual network space. + AllowVirtualNetworkAccess *bool `json:"allowVirtualNetworkAccess,omitempty"` + // AllowForwardedTraffic - Whether the forwarded traffic from the VMs in the remote virtual network will be allowed/disallowed. + AllowForwardedTraffic *bool `json:"allowForwardedTraffic,omitempty"` + // AllowGatewayTransit - If gateway links can be used in remote virtual networking to link to this virtual network. + AllowGatewayTransit *bool `json:"allowGatewayTransit,omitempty"` + // UseRemoteGateways - If remote gateways can be used on this virtual network. If the flag is set to true, and allowGatewayTransit on remote peering is also true, virtual network will use gateways of remote virtual network for transit. Only one peering can have this flag set to true. This flag cannot be set if virtual network already has a gateway. + UseRemoteGateways *bool `json:"useRemoteGateways,omitempty"` + // RemoteVirtualNetwork - The reference of the remote virtual network. The remote virtual network can be in the same or different region (preview). See here to register for the preview and learn more (https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-create-peering). + RemoteVirtualNetwork *SubResource `json:"remoteVirtualNetwork,omitempty"` + // RemoteAddressSpace - The reference of the remote virtual network address space. + RemoteAddressSpace *AddressSpace `json:"remoteAddressSpace,omitempty"` + // PeeringState - The status of the virtual network peering. Possible values are 'Initiated', 'Connected', and 'Disconnected'. Possible values include: 'Initiated', 'Connected', 'Disconnected' + PeeringState VirtualNetworkPeeringState `json:"peeringState,omitempty"` + // ProvisioningState - The provisioning state of the resource. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VirtualNetworkPeeringsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type VirtualNetworkPeeringsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. 
+// If the operation has not completed it will return an error. +func (future *VirtualNetworkPeeringsCreateOrUpdateFuture) Result(client VirtualNetworkPeeringsClient) (vnp VirtualNetworkPeering, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkPeeringsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vnp.Response.Response, err = future.GetResult(sender); err == nil && vnp.Response.Response.StatusCode != http.StatusNoContent { + vnp, err = client.CreateOrUpdateResponder(vnp.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsCreateOrUpdateFuture", "Result", vnp.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkPeeringsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualNetworkPeeringsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworkPeeringsDeleteFuture) Result(client VirtualNetworkPeeringsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworkPeeringsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// VirtualNetworkPropertiesFormat properties of the virtual network. +type VirtualNetworkPropertiesFormat struct { + // AddressSpace - The AddressSpace that contains an array of IP address ranges that can be used by subnets. + AddressSpace *AddressSpace `json:"addressSpace,omitempty"` + // DhcpOptions - The dhcpOptions that contains an array of DNS servers available to VMs deployed in the virtual network. + DhcpOptions *DhcpOptions `json:"dhcpOptions,omitempty"` + // Subnets - A list of subnets in a Virtual Network. + Subnets *[]Subnet `json:"subnets,omitempty"` + // VirtualNetworkPeerings - A list of peerings in a Virtual Network. + VirtualNetworkPeerings *[]VirtualNetworkPeering `json:"virtualNetworkPeerings,omitempty"` + // ResourceGUID - The resourceGuid property of the Virtual Network resource. + ResourceGUID *string `json:"resourceGuid,omitempty"` + // ProvisioningState - The provisioning state of the PublicIP resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` + // EnableDdosProtection - Indicates if DDoS protection is enabled for all the protected resources in a Virtual Network. + EnableDdosProtection *bool `json:"enableDdosProtection,omitempty"` + // EnableVMProtection - Indicates if Vm protection is enabled for all the subnets in a Virtual Network. + EnableVMProtection *bool `json:"enableVmProtection,omitempty"` +} + +// VirtualNetworksCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. 
+type VirtualNetworksCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworksCreateOrUpdateFuture) Result(client VirtualNetworksClient) (vn VirtualNetwork, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworksCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vn.Response.Response, err = future.GetResult(sender); err == nil && vn.Response.Response.StatusCode != http.StatusNoContent { + vn, err = client.CreateOrUpdateResponder(vn.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksCreateOrUpdateFuture", "Result", vn.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworksDeleteFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualNetworksDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworksDeleteFuture) Result(client VirtualNetworksClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworksDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// VirtualNetworksUpdateTagsFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type VirtualNetworksUpdateTagsFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *VirtualNetworksUpdateTagsFuture) Result(client VirtualNetworksClient) (vn VirtualNetwork, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksUpdateTagsFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.VirtualNetworksUpdateTagsFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vn.Response.Response, err = future.GetResult(sender); err == nil && vn.Response.Response.StatusCode != http.StatusNoContent { + vn, err = client.UpdateTagsResponder(vn.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksUpdateTagsFuture", "Result", vn.Response.Response, "Failure responding to request") + } + } + return +} + +// VirtualNetworkUsage usage details for subnet. +type VirtualNetworkUsage struct { + // CurrentValue - Indicates number of IPs used from the Subnet. + CurrentValue *float64 `json:"currentValue,omitempty"` + // ID - Subnet identifier. 
+ ID *string `json:"id,omitempty"` + // Limit - Indicates the size of the subnet. + Limit *float64 `json:"limit,omitempty"` + // Name - The name containing common and localized value for usage. + Name *VirtualNetworkUsageName `json:"name,omitempty"` + // Unit - Usage units. Returns 'Count' + Unit *string `json:"unit,omitempty"` +} + +// VirtualNetworkUsageName usage strings container. +type VirtualNetworkUsageName struct { + // LocalizedValue - Localized subnet size and usage string. + LocalizedValue *string `json:"localizedValue,omitempty"` + // Value - Subnet size and usage string. + Value *string `json:"value,omitempty"` +} + +// VpnClientConfiguration vpnClientConfiguration for P2S client. +type VpnClientConfiguration struct { + // VpnClientAddressPool - The reference of the address space resource which represents Address space for P2S VpnClient. + VpnClientAddressPool *AddressSpace `json:"vpnClientAddressPool,omitempty"` + // VpnClientRootCertificates - VpnClientRootCertificate for virtual network gateway. + VpnClientRootCertificates *[]VpnClientRootCertificate `json:"vpnClientRootCertificates,omitempty"` + // VpnClientRevokedCertificates - VpnClientRevokedCertificate for Virtual network gateway. + VpnClientRevokedCertificates *[]VpnClientRevokedCertificate `json:"vpnClientRevokedCertificates,omitempty"` + // VpnClientProtocols - VpnClientProtocols for Virtual network gateway. + VpnClientProtocols *[]VpnClientProtocol `json:"vpnClientProtocols,omitempty"` + // RadiusServerAddress - The radius server address property of the VirtualNetworkGateway resource for vpn client connection. + RadiusServerAddress *string `json:"radiusServerAddress,omitempty"` + // RadiusServerSecret - The radius secret property of the VirtualNetworkGateway resource for vpn client connection. + RadiusServerSecret *string `json:"radiusServerSecret,omitempty"` +} + +// VpnClientParameters vpn Client Parameters for package generation +type VpnClientParameters struct { + // ProcessorArchitecture - VPN client Processor Architecture. Possible values are: 'AMD64' and 'X86'. Possible values include: 'Amd64', 'X86' + ProcessorArchitecture ProcessorArchitecture `json:"processorArchitecture,omitempty"` + // AuthenticationMethod - VPN client Authentication Method. Possible values are: 'EAPTLS' and 'EAPMSCHAPv2'. Possible values include: 'EAPTLS', 'EAPMSCHAPv2' + AuthenticationMethod AuthenticationMethod `json:"authenticationMethod,omitempty"` + // RadiusServerAuthCertificate - The public certificate data for the radius server authentication certificate as a Base-64 encoded string. Required only if external radius authentication has been configured with EAPTLS authentication. + RadiusServerAuthCertificate *string `json:"radiusServerAuthCertificate,omitempty"` + // ClientRootCertificates - A list of client root certificates public certificate data encoded as Base-64 strings. Optional parameter for external radius based authentication with EAPTLS. + ClientRootCertificates *[]string `json:"clientRootCertificates,omitempty"` +} + +// VpnClientRevokedCertificate VPN client revoked certificate of virtual network gateway. +type VpnClientRevokedCertificate struct { + // VpnClientRevokedCertificatePropertiesFormat - Properties of the vpn client revoked certificate. + *VpnClientRevokedCertificatePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. 
+ Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VpnClientRevokedCertificate. +func (vcrc VpnClientRevokedCertificate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vcrc.VpnClientRevokedCertificatePropertiesFormat != nil { + objectMap["properties"] = vcrc.VpnClientRevokedCertificatePropertiesFormat + } + if vcrc.Name != nil { + objectMap["name"] = vcrc.Name + } + if vcrc.Etag != nil { + objectMap["etag"] = vcrc.Etag + } + if vcrc.ID != nil { + objectMap["id"] = vcrc.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VpnClientRevokedCertificate struct. +func (vcrc *VpnClientRevokedCertificate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var vpnClientRevokedCertificatePropertiesFormat VpnClientRevokedCertificatePropertiesFormat + err = json.Unmarshal(*v, &vpnClientRevokedCertificatePropertiesFormat) + if err != nil { + return err + } + vcrc.VpnClientRevokedCertificatePropertiesFormat = &vpnClientRevokedCertificatePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vcrc.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vcrc.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vcrc.ID = &ID + } + } + } + + return nil +} + +// VpnClientRevokedCertificatePropertiesFormat properties of the revoked VPN client certificate of virtual network +// gateway. +type VpnClientRevokedCertificatePropertiesFormat struct { + // Thumbprint - The revoked VPN client certificate thumbprint. + Thumbprint *string `json:"thumbprint,omitempty"` + // ProvisioningState - The provisioning state of the VPN client revoked certificate resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VpnClientRootCertificate VPN client root certificate of virtual network gateway +type VpnClientRootCertificate struct { + // VpnClientRootCertificatePropertiesFormat - Properties of the vpn client root certificate. + *VpnClientRootCertificatePropertiesFormat `json:"properties,omitempty"` + // Name - The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string `json:"name,omitempty"` + // Etag - A unique read-only string that changes whenever the resource is updated. + Etag *string `json:"etag,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` +} + +// MarshalJSON is the custom marshaler for VpnClientRootCertificate. 
+func (vcrc VpnClientRootCertificate) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if vcrc.VpnClientRootCertificatePropertiesFormat != nil { + objectMap["properties"] = vcrc.VpnClientRootCertificatePropertiesFormat + } + if vcrc.Name != nil { + objectMap["name"] = vcrc.Name + } + if vcrc.Etag != nil { + objectMap["etag"] = vcrc.Etag + } + if vcrc.ID != nil { + objectMap["id"] = vcrc.ID + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for VpnClientRootCertificate struct. +func (vcrc *VpnClientRootCertificate) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "properties": + if v != nil { + var vpnClientRootCertificatePropertiesFormat VpnClientRootCertificatePropertiesFormat + err = json.Unmarshal(*v, &vpnClientRootCertificatePropertiesFormat) + if err != nil { + return err + } + vcrc.VpnClientRootCertificatePropertiesFormat = &vpnClientRootCertificatePropertiesFormat + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + vcrc.Name = &name + } + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + vcrc.Etag = &etag + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + vcrc.ID = &ID + } + } + } + + return nil +} + +// VpnClientRootCertificatePropertiesFormat properties of SSL certificates of application gateway +type VpnClientRootCertificatePropertiesFormat struct { + // PublicCertData - The certificate public data. + PublicCertData *string `json:"publicCertData,omitempty"` + // ProvisioningState - The provisioning state of the VPN client root certificate resource. Possible values are: 'Updating', 'Deleting', and 'Failed'. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// VpnDeviceScriptParameters vpn device configuration script generation parameters +type VpnDeviceScriptParameters struct { + // Vendor - The vendor for the vpn device. + Vendor *string `json:"vendor,omitempty"` + // DeviceFamily - The device family for the vpn device. + DeviceFamily *string `json:"deviceFamily,omitempty"` + // FirmwareVersion - The firmware version for the vpn device. + FirmwareVersion *string `json:"firmwareVersion,omitempty"` +} + +// Watcher network watcher in a resource group. +type Watcher struct { + autorest.Response `json:"-"` + Etag *string `json:"etag,omitempty"` + *WatcherPropertiesFormat `json:"properties,omitempty"` + // ID - Resource ID. + ID *string `json:"id,omitempty"` + // Name - Resource name. + Name *string `json:"name,omitempty"` + // Type - Resource type. + Type *string `json:"type,omitempty"` + // Location - Resource location. + Location *string `json:"location,omitempty"` + // Tags - Resource tags. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Watcher. 
+func (w Watcher) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if w.Etag != nil { + objectMap["etag"] = w.Etag + } + if w.WatcherPropertiesFormat != nil { + objectMap["properties"] = w.WatcherPropertiesFormat + } + if w.ID != nil { + objectMap["id"] = w.ID + } + if w.Name != nil { + objectMap["name"] = w.Name + } + if w.Type != nil { + objectMap["type"] = w.Type + } + if w.Location != nil { + objectMap["location"] = w.Location + } + if w.Tags != nil { + objectMap["tags"] = w.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Watcher struct. +func (w *Watcher) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "etag": + if v != nil { + var etag string + err = json.Unmarshal(*v, &etag) + if err != nil { + return err + } + w.Etag = &etag + } + case "properties": + if v != nil { + var watcherPropertiesFormat WatcherPropertiesFormat + err = json.Unmarshal(*v, &watcherPropertiesFormat) + if err != nil { + return err + } + w.WatcherPropertiesFormat = &watcherPropertiesFormat + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + w.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + w.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + w.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + w.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + w.Tags = tags + } + } + } + + return nil +} + +// WatcherListResult list of network watcher resources. +type WatcherListResult struct { + autorest.Response `json:"-"` + Value *[]Watcher `json:"value,omitempty"` +} + +// WatcherPropertiesFormat the network watcher properties. +type WatcherPropertiesFormat struct { + // ProvisioningState - The provisioning state of the resource. Possible values include: 'Succeeded', 'Updating', 'Deleting', 'Failed' + ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` +} + +// WatchersCheckConnectivityFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type WatchersCheckConnectivityFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *WatchersCheckConnectivityFuture) Result(client WatchersClient) (ci ConnectivityInformation, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersCheckConnectivityFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersCheckConnectivityFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if ci.Response.Response, err = future.GetResult(sender); err == nil && ci.Response.Response.StatusCode != http.StatusNoContent { + ci, err = client.CheckConnectivityResponder(ci.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersCheckConnectivityFuture", "Result", ci.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type WatchersDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *WatchersDeleteFuture) Result(client WatchersClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// WatchersGetAzureReachabilityReportFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type WatchersGetAzureReachabilityReportFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *WatchersGetAzureReachabilityReportFuture) Result(client WatchersClient) (arr AzureReachabilityReport, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetAzureReachabilityReportFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersGetAzureReachabilityReportFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if arr.Response.Response, err = future.GetResult(sender); err == nil && arr.Response.Response.StatusCode != http.StatusNoContent { + arr, err = client.GetAzureReachabilityReportResponder(arr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetAzureReachabilityReportFuture", "Result", arr.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersGetFlowLogStatusFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type WatchersGetFlowLogStatusFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *WatchersGetFlowLogStatusFuture) Result(client WatchersClient) (fli FlowLogInformation, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetFlowLogStatusFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersGetFlowLogStatusFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if fli.Response.Response, err = future.GetResult(sender); err == nil && fli.Response.Response.StatusCode != http.StatusNoContent { + fli, err = client.GetFlowLogStatusResponder(fli.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetFlowLogStatusFuture", "Result", fli.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersGetNextHopFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type WatchersGetNextHopFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *WatchersGetNextHopFuture) Result(client WatchersClient) (nhr NextHopResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetNextHopFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersGetNextHopFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if nhr.Response.Response, err = future.GetResult(sender); err == nil && nhr.Response.Response.StatusCode != http.StatusNoContent { + nhr, err = client.GetNextHopResponder(nhr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetNextHopFuture", "Result", nhr.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersGetTroubleshootingFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type WatchersGetTroubleshootingFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *WatchersGetTroubleshootingFuture) Result(client WatchersClient) (tr TroubleshootingResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetTroubleshootingFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersGetTroubleshootingFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if tr.Response.Response, err = future.GetResult(sender); err == nil && tr.Response.Response.StatusCode != http.StatusNoContent { + tr, err = client.GetTroubleshootingResponder(tr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetTroubleshootingFuture", "Result", tr.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersGetTroubleshootingResultFuture an abstraction for monitoring and retrieving the results of a +// long-running operation. +type WatchersGetTroubleshootingResultFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *WatchersGetTroubleshootingResultFuture) Result(client WatchersClient) (tr TroubleshootingResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetTroubleshootingResultFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersGetTroubleshootingResultFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if tr.Response.Response, err = future.GetResult(sender); err == nil && tr.Response.Response.StatusCode != http.StatusNoContent { + tr, err = client.GetTroubleshootingResultResponder(tr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetTroubleshootingResultFuture", "Result", tr.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersGetVMSecurityRulesFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type WatchersGetVMSecurityRulesFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *WatchersGetVMSecurityRulesFuture) Result(client WatchersClient) (sgvr SecurityGroupViewResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetVMSecurityRulesFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersGetVMSecurityRulesFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if sgvr.Response.Response, err = future.GetResult(sender); err == nil && sgvr.Response.Response.StatusCode != http.StatusNoContent { + sgvr, err = client.GetVMSecurityRulesResponder(sgvr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersGetVMSecurityRulesFuture", "Result", sgvr.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersListAvailableProvidersFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type WatchersListAvailableProvidersFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *WatchersListAvailableProvidersFuture) Result(client WatchersClient) (apl AvailableProvidersList, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersListAvailableProvidersFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersListAvailableProvidersFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if apl.Response.Response, err = future.GetResult(sender); err == nil && apl.Response.Response.StatusCode != http.StatusNoContent { + apl, err = client.ListAvailableProvidersResponder(apl.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersListAvailableProvidersFuture", "Result", apl.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersSetFlowLogConfigurationFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type WatchersSetFlowLogConfigurationFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *WatchersSetFlowLogConfigurationFuture) Result(client WatchersClient) (fli FlowLogInformation, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersSetFlowLogConfigurationFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersSetFlowLogConfigurationFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if fli.Response.Response, err = future.GetResult(sender); err == nil && fli.Response.Response.StatusCode != http.StatusNoContent { + fli, err = client.SetFlowLogConfigurationResponder(fli.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersSetFlowLogConfigurationFuture", "Result", fli.Response.Response, "Failure responding to request") + } + } + return +} + +// WatchersVerifyIPFlowFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type WatchersVerifyIPFlowFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *WatchersVerifyIPFlowFuture) Result(client WatchersClient) (vifr VerificationIPFlowResult, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersVerifyIPFlowFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("network.WatchersVerifyIPFlowFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if vifr.Response.Response, err = future.GetResult(sender); err == nil && vifr.Response.Response.StatusCode != http.StatusNoContent { + vifr, err = client.VerifyIPFlowResponder(vifr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersVerifyIPFlowFuture", "Result", vifr.Response.Response, "Failure responding to request") + } + } + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/operations.go new file mode 100644 index 000000000..f72f18163 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/operations.go @@ -0,0 +1,126 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
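The regenerated models above rely on two recurring patterns: `azure.Future` wrappers whose `Done` and `Result` methods drive long-running operations, and page/iterator pairs for paginated list calls. The sketch below is not part of the vendored, generated code; it only illustrates how calling code might consume these types. It assumes an already-authorized `network.VirtualNetworksClient` whose `CreateOrUpdate` returns the `VirtualNetworksCreateOrUpdateFuture` defined above, the `OperationsClient` added in the `operations.go` file that follows, and the `Operation` model generated earlier in `models.go`. The helper names `waitForVirtualNetwork` and `listOperations` are illustrative only.

```
package example

// Illustrative sketch only; not part of the vendored, generated SDK code.

import (
	"context"
	"time"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

// waitForVirtualNetwork polls a generated future until the long-running
// create/update finishes, using only the Done and Result methods defined in
// models.go above. The client is assumed to be authorized already.
func waitForVirtualNetwork(ctx context.Context, client network.VirtualNetworksClient, future network.VirtualNetworksCreateOrUpdateFuture) (network.VirtualNetwork, error) {
	for {
		done, err := future.Done(client)
		if err != nil {
			return network.VirtualNetwork{}, err
		}
		if done {
			break
		}
		select {
		case <-ctx.Done():
			return network.VirtualNetwork{}, ctx.Err()
		case <-time.After(10 * time.Second):
		}
	}
	return future.Result(client)
}

// listOperations walks every page returned by OperationsClient.ListComplete
// (defined in operations.go below) using the iterator's NotDone, Value, and
// Next methods, collecting all values across page boundaries.
func listOperations(ctx context.Context, client network.OperationsClient) ([]network.Operation, error) {
	iter, err := client.ListComplete(ctx)
	if err != nil {
		return nil, err
	}
	var ops []network.Operation
	for iter.NotDone() {
		ops = append(ops, iter.Value())
		if err := iter.Next(); err != nil {
			return nil, err
		}
	}
	return ops, nil
}
```

Later go-autorest releases add convenience helpers that replace the manual polling loop; the sketch deliberately sticks to the methods visible in this hunk.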
+ +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// OperationsClient is the network Client +type OperationsClient struct { + BaseClient +} + +// NewOperationsClient creates an instance of the OperationsClient client. +func NewOperationsClient(subscriptionID string) OperationsClient { + return NewOperationsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewOperationsClientWithBaseURI creates an instance of the OperationsClient client. +func NewOperationsClientWithBaseURI(baseURI string, subscriptionID string) OperationsClient { + return OperationsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List lists all of the available Network Rest API operations. +func (client OperationsClient) List(ctx context.Context) (result OperationListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.OperationsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.olr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.OperationsClient", "List", resp, "Failure sending request") + return + } + + result.olr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.OperationsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) { + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPath("/providers/Microsoft.Network/operations"), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client OperationsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client OperationsClient) ListResponder(resp *http.Response) (result OperationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client OperationsClient) listNextResults(lastResults OperationListResult) (result OperationListResult, err error) { + req, err := lastResults.operationListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.OperationsClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.OperationsClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.OperationsClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client OperationsClient) ListComplete(ctx context.Context) (result OperationListResultIterator, err error) { + result.page, err = client.List(ctx) + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/packetcaptures.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/packetcaptures.go similarity index 55% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/packetcaptures.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/packetcaptures.go index fbeb0d9ef..eb9a1376f 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/packetcaptures.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/packetcaptures.go @@ -14,90 +14,65 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// PacketCapturesClient is the composite Swagger for Network Client +// PacketCapturesClient is the network Client type PacketCapturesClient struct { - ManagementClient + BaseClient } -// NewPacketCapturesClient creates an instance of the PacketCapturesClient -// client. +// NewPacketCapturesClient creates an instance of the PacketCapturesClient client. func NewPacketCapturesClient(subscriptionID string) PacketCapturesClient { return NewPacketCapturesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewPacketCapturesClientWithBaseURI creates an instance of the -// PacketCapturesClient client. +// NewPacketCapturesClientWithBaseURI creates an instance of the PacketCapturesClient client. func NewPacketCapturesClientWithBaseURI(baseURI string, subscriptionID string) PacketCapturesClient { return PacketCapturesClient{NewWithBaseURI(baseURI, subscriptionID)} } -// Create create and start a packet capture on the specified VM. This method -// may poll for completion. Polling can be canceled by passing the cancel -// channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. 
networkWatcherName is -// the name of the network watcher. packetCaptureName is the name of the packet -// capture session. parameters is parameters that define the create packet -// capture operation. -func (client PacketCapturesClient) Create(resourceGroupName string, networkWatcherName string, packetCaptureName string, parameters PacketCapture, cancel <-chan struct{}) (<-chan PacketCaptureResult, <-chan error) { - resultChan := make(chan PacketCaptureResult, 1) - errChan := make(chan error, 1) +// Create create and start a packet capture on the specified VM. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// packetCaptureName - the name of the packet capture session. +// parameters - parameters that define the create packet capture operation. +func (client PacketCapturesClient) Create(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string, parameters PacketCapture) (result PacketCapturesCreateFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.PacketCaptureParameters", Name: validation.Null, Rule: true, Chain: []validation.Constraint{{Target: "parameters.PacketCaptureParameters.Target", Name: validation.Null, Rule: true, Chain: nil}, {Target: "parameters.PacketCaptureParameters.StorageLocation", Name: validation.Null, Rule: true, Chain: nil}, }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.PacketCapturesClient", "Create") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("network.PacketCapturesClient", "Create", err.Error()) } - go func() { - var err error - var result PacketCaptureResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreatePreparer(resourceGroupName, networkWatcherName, packetCaptureName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Create", nil, "Failure preparing request") - return - } + req, err := client.CreatePreparer(ctx, resourceGroupName, networkWatcherName, packetCaptureName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Create", nil, "Failure preparing request") + return + } - resp, err := client.CreateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Create", resp, "Failure sending request") - return - } + result, err = client.CreateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Create", result.Response(), "Failure sending request") + return + } - result, err = client.CreateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Create", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreatePreparer prepares the Create request. 
-func (client PacketCapturesClient) CreatePreparer(resourceGroupName string, networkWatcherName string, packetCaptureName string, parameters PacketCapture, cancel <-chan struct{}) (*http.Request, error) { +func (client PacketCapturesClient) CreatePreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string, parameters PacketCapture) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkWatcherName": autorest.Encode("path", networkWatcherName), "packetCaptureName": autorest.Encode("path", packetCaptureName), @@ -105,27 +80,36 @@ func (client PacketCapturesClient) CreatePreparer(resourceGroupName string, netw "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateSender sends the Create request. The method will close the // http.Response Body if it receives an error. -func (client PacketCapturesClient) CreateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client PacketCapturesClient) CreateSender(req *http.Request) (future PacketCapturesCreateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateResponder handles the response to the Create request. The method always @@ -141,49 +125,29 @@ func (client PacketCapturesClient) CreateResponder(resp *http.Response) (result return } -// Delete deletes the specified packet capture session. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. packetCaptureName is the name of the packet -// capture session. 
-func (client PacketCapturesClient) Delete(resourceGroupName string, networkWatcherName string, packetCaptureName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, networkWatcherName, packetCaptureName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified packet capture session. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// packetCaptureName - the name of the packet capture session. +func (client PacketCapturesClient) Delete(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (result PacketCapturesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, networkWatcherName, packetCaptureName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client PacketCapturesClient) DeletePreparer(resourceGroupName string, networkWatcherName string, packetCaptureName string, cancel <-chan struct{}) (*http.Request, error) { +func (client PacketCapturesClient) DeletePreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkWatcherName": autorest.Encode("path", networkWatcherName), "packetCaptureName": autorest.Encode("path", packetCaptureName), @@ -191,7 +155,7 @@ func (client PacketCapturesClient) DeletePreparer(resourceGroupName string, netw "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -201,15 +165,24 @@ func (client PacketCapturesClient) DeletePreparer(resourceGroupName string, netw autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. 
-func (client PacketCapturesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client PacketCapturesClient) DeleteSender(req *http.Request) (future PacketCapturesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -218,19 +191,19 @@ func (client PacketCapturesClient) DeleteResponder(resp *http.Response) (result err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusAccepted), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets a packet capture session by name. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. packetCaptureName is the name of the packet -// capture session. -func (client PacketCapturesClient) Get(resourceGroupName string, networkWatcherName string, packetCaptureName string) (result PacketCaptureResult, err error) { - req, err := client.GetPreparer(resourceGroupName, networkWatcherName, packetCaptureName) +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// packetCaptureName - the name of the packet capture session. +func (client PacketCapturesClient) Get(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (result PacketCaptureResult, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkWatcherName, packetCaptureName) if err != nil { err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Get", nil, "Failure preparing request") return @@ -252,7 +225,7 @@ func (client PacketCapturesClient) Get(resourceGroupName string, networkWatcherN } // GetPreparer prepares the Get request. 
-func (client PacketCapturesClient) GetPreparer(resourceGroupName string, networkWatcherName string, packetCaptureName string) (*http.Request, error) { +func (client PacketCapturesClient) GetPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkWatcherName": autorest.Encode("path", networkWatcherName), "packetCaptureName": autorest.Encode("path", packetCaptureName), @@ -260,7 +233,7 @@ func (client PacketCapturesClient) GetPreparer(resourceGroupName string, network "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -270,13 +243,14 @@ func (client PacketCapturesClient) GetPreparer(resourceGroupName string, network autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client PacketCapturesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -292,49 +266,29 @@ func (client PacketCapturesClient) GetResponder(resp *http.Response) (result Pac return } -// GetStatus query the status of a running packet capture session. This method -// may poll for completion. Polling can be canceled by passing the cancel -// channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the Network Watcher resource. packetCaptureName is the name -// given to the packet capture session. -func (client PacketCapturesClient) GetStatus(resourceGroupName string, networkWatcherName string, packetCaptureName string, cancel <-chan struct{}) (<-chan PacketCaptureQueryStatusResult, <-chan error) { - resultChan := make(chan PacketCaptureQueryStatusResult, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result PacketCaptureQueryStatusResult - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.GetStatusPreparer(resourceGroupName, networkWatcherName, packetCaptureName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "GetStatus", nil, "Failure preparing request") - return - } +// GetStatus query the status of a running packet capture session. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the Network Watcher resource. +// packetCaptureName - the name given to the packet capture session. 
+func (client PacketCapturesClient) GetStatus(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (result PacketCapturesGetStatusFuture, err error) { + req, err := client.GetStatusPreparer(ctx, resourceGroupName, networkWatcherName, packetCaptureName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "GetStatus", nil, "Failure preparing request") + return + } - resp, err := client.GetStatusSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "GetStatus", resp, "Failure sending request") - return - } + result, err = client.GetStatusSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "GetStatus", result.Response(), "Failure sending request") + return + } - result, err = client.GetStatusResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "GetStatus", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // GetStatusPreparer prepares the GetStatus request. -func (client PacketCapturesClient) GetStatusPreparer(resourceGroupName string, networkWatcherName string, packetCaptureName string, cancel <-chan struct{}) (*http.Request, error) { +func (client PacketCapturesClient) GetStatusPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkWatcherName": autorest.Encode("path", networkWatcherName), "packetCaptureName": autorest.Encode("path", packetCaptureName), @@ -342,7 +296,7 @@ func (client PacketCapturesClient) GetStatusPreparer(resourceGroupName string, n "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -352,15 +306,24 @@ func (client PacketCapturesClient) GetStatusPreparer(resourceGroupName string, n autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}/queryStatus", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetStatusSender sends the GetStatus request. The method will close the // http.Response Body if it receives an error. -func (client PacketCapturesClient) GetStatusSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client PacketCapturesClient) GetStatusSender(req *http.Request) (future PacketCapturesGetStatusFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // GetStatusResponder handles the response to the GetStatus request. 
The method always @@ -377,11 +340,11 @@ func (client PacketCapturesClient) GetStatusResponder(resp *http.Response) (resu } // List lists all packet capture sessions within the specified resource group. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the Network Watcher resource. -func (client PacketCapturesClient) List(resourceGroupName string, networkWatcherName string) (result PacketCaptureListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, networkWatcherName) +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the Network Watcher resource. +func (client PacketCapturesClient) List(ctx context.Context, resourceGroupName string, networkWatcherName string) (result PacketCaptureListResult, err error) { + req, err := client.ListPreparer(ctx, resourceGroupName, networkWatcherName) if err != nil { err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "List", nil, "Failure preparing request") return @@ -403,14 +366,14 @@ func (client PacketCapturesClient) List(resourceGroupName string, networkWatcher } // ListPreparer prepares the List request. -func (client PacketCapturesClient) ListPreparer(resourceGroupName string, networkWatcherName string) (*http.Request, error) { +func (client PacketCapturesClient) ListPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkWatcherName": autorest.Encode("path", networkWatcherName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -420,13 +383,14 @@ func (client PacketCapturesClient) ListPreparer(resourceGroupName string, networ autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client PacketCapturesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -442,49 +406,29 @@ func (client PacketCapturesClient) ListResponder(resp *http.Response) (result Pa return } -// Stop stops a specified packet capture session. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. networkWatcherName is -// the name of the network watcher. packetCaptureName is the name of the packet -// capture session. 
-func (client PacketCapturesClient) Stop(resourceGroupName string, networkWatcherName string, packetCaptureName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.StopPreparer(resourceGroupName, networkWatcherName, packetCaptureName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Stop", nil, "Failure preparing request") - return - } +// Stop stops a specified packet capture session. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// packetCaptureName - the name of the packet capture session. +func (client PacketCapturesClient) Stop(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (result PacketCapturesStopFuture, err error) { + req, err := client.StopPreparer(ctx, resourceGroupName, networkWatcherName, packetCaptureName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Stop", nil, "Failure preparing request") + return + } - resp, err := client.StopSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Stop", resp, "Failure sending request") - return - } + result, err = client.StopSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Stop", result.Response(), "Failure sending request") + return + } - result, err = client.StopResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.PacketCapturesClient", "Stop", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // StopPreparer prepares the Stop request. -func (client PacketCapturesClient) StopPreparer(resourceGroupName string, networkWatcherName string, packetCaptureName string, cancel <-chan struct{}) (*http.Request, error) { +func (client PacketCapturesClient) StopPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, packetCaptureName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkWatcherName": autorest.Encode("path", networkWatcherName), "packetCaptureName": autorest.Encode("path", packetCaptureName), @@ -492,7 +436,7 @@ func (client PacketCapturesClient) StopPreparer(resourceGroupName string, networ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -502,15 +446,24 @@ func (client PacketCapturesClient) StopPreparer(resourceGroupName string, networ autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}/stop", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // StopSender sends the Stop request. The method will close the // http.Response Body if it receives an error. 
-func (client PacketCapturesClient) StopSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client PacketCapturesClient) StopSender(req *http.Request) (future PacketCapturesStopFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // StopResponder handles the response to the Stop request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/publicipaddresses.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/publicipaddresses.go new file mode 100644 index 000000000..956756f94 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/publicipaddresses.go @@ -0,0 +1,801 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// PublicIPAddressesClient is the network Client +type PublicIPAddressesClient struct { + BaseClient +} + +// NewPublicIPAddressesClient creates an instance of the PublicIPAddressesClient client. +func NewPublicIPAddressesClient(subscriptionID string) PublicIPAddressesClient { + return NewPublicIPAddressesClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewPublicIPAddressesClientWithBaseURI creates an instance of the PublicIPAddressesClient client. +func NewPublicIPAddressesClientWithBaseURI(baseURI string, subscriptionID string) PublicIPAddressesClient { + return PublicIPAddressesClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates a static or dynamic public IP address. +// Parameters: +// resourceGroupName - the name of the resource group. +// publicIPAddressName - the name of the public IP address. +// parameters - parameters supplied to the create or update public IP address operation. 
+func (client PublicIPAddressesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, publicIPAddressName string, parameters PublicIPAddress) (result PublicIPAddressesCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat.IPConfiguration", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat.IPConfiguration.IPConfigurationPropertiesFormat", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.PublicIPAddressPropertiesFormat.IPConfiguration.IPConfigurationPropertiesFormat.PublicIPAddress", Name: validation.Null, Rule: false, Chain: nil}}}, + }}, + }}}}}); err != nil { + return result, validation.NewError("network.PublicIPAddressesClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, publicIPAddressName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client PublicIPAddressesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, publicIPAddressName string, parameters PublicIPAddress) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "publicIpAddressName": autorest.Encode("path", publicIPAddressName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses/{publicIpAddressName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) CreateOrUpdateSender(req *http.Request) (future PublicIPAddressesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. 
+func (client PublicIPAddressesClient) CreateOrUpdateResponder(resp *http.Response) (result PublicIPAddress, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified public IP address. +// Parameters: +// resourceGroupName - the name of the resource group. +// publicIPAddressName - the name of the subnet. +func (client PublicIPAddressesClient) Delete(ctx context.Context, resourceGroupName string, publicIPAddressName string) (result PublicIPAddressesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, publicIPAddressName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client PublicIPAddressesClient) DeletePreparer(ctx context.Context, resourceGroupName string, publicIPAddressName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "publicIpAddressName": autorest.Encode("path", publicIPAddressName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses/{publicIpAddressName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) DeleteSender(req *http.Request) (future PublicIPAddressesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client PublicIPAddressesClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets the specified public IP address in a specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// publicIPAddressName - the name of the subnet. +// expand - expands referenced resources. 
+func (client PublicIPAddressesClient) Get(ctx context.Context, resourceGroupName string, publicIPAddressName string, expand string) (result PublicIPAddress, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, publicIPAddressName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client PublicIPAddressesClient) GetPreparer(ctx context.Context, resourceGroupName string, publicIPAddressName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "publicIpAddressName": autorest.Encode("path", publicIPAddressName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses/{publicIpAddressName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client PublicIPAddressesClient) GetResponder(resp *http.Response) (result PublicIPAddress, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetVirtualMachineScaleSetPublicIPAddress get the specified public IP address in a virtual machine scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +// virtualmachineIndex - the virtual machine index. +// networkInterfaceName - the name of the network interface. +// IPConfigurationName - the name of the IP configuration. +// publicIPAddressName - the name of the public IP Address. +// expand - expands referenced resources. 
+func (client PublicIPAddressesClient) GetVirtualMachineScaleSetPublicIPAddress(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, IPConfigurationName string, publicIPAddressName string, expand string) (result PublicIPAddress, err error) { + req, err := client.GetVirtualMachineScaleSetPublicIPAddressPreparer(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, IPConfigurationName, publicIPAddressName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "GetVirtualMachineScaleSetPublicIPAddress", nil, "Failure preparing request") + return + } + + resp, err := client.GetVirtualMachineScaleSetPublicIPAddressSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "GetVirtualMachineScaleSetPublicIPAddress", resp, "Failure sending request") + return + } + + result, err = client.GetVirtualMachineScaleSetPublicIPAddressResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "GetVirtualMachineScaleSetPublicIPAddress", resp, "Failure responding to request") + } + + return +} + +// GetVirtualMachineScaleSetPublicIPAddressPreparer prepares the GetVirtualMachineScaleSetPublicIPAddress request. +func (client PublicIPAddressesClient) GetVirtualMachineScaleSetPublicIPAddressPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, IPConfigurationName string, publicIPAddressName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "ipConfigurationName": autorest.Encode("path", IPConfigurationName), + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "publicIpAddressName": autorest.Encode("path", publicIPAddressName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces/{networkInterfaceName}/ipconfigurations/{ipConfigurationName}/publicipaddresses/{publicIpAddressName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetVirtualMachineScaleSetPublicIPAddressSender sends the GetVirtualMachineScaleSetPublicIPAddress request. The method will close the +// http.Response Body if it receives an error. 
+func (client PublicIPAddressesClient) GetVirtualMachineScaleSetPublicIPAddressSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetVirtualMachineScaleSetPublicIPAddressResponder handles the response to the GetVirtualMachineScaleSetPublicIPAddress request. The method always +// closes the http.Response Body. +func (client PublicIPAddressesClient) GetVirtualMachineScaleSetPublicIPAddressResponder(resp *http.Response) (result PublicIPAddress, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all public IP addresses in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +func (client PublicIPAddressesClient) List(ctx context.Context, resourceGroupName string) (result PublicIPAddressListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.pialr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", resp, "Failure sending request") + return + } + + result.pialr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client PublicIPAddressesClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client PublicIPAddressesClient) ListResponder(resp *http.Response) (result PublicIPAddressListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client PublicIPAddressesClient) listNextResults(lastResults PublicIPAddressListResult) (result PublicIPAddressListResult, err error) { + req, err := lastResults.publicIPAddressListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client PublicIPAddressesClient) ListComplete(ctx context.Context, resourceGroupName string) (result PublicIPAddressListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListAll gets all the public IP addresses in a subscription. +func (client PublicIPAddressesClient) ListAll(ctx context.Context) (result PublicIPAddressListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.pialr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", resp, "Failure sending request") + return + } + + result.pialr, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. +func (client PublicIPAddressesClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/publicIPAddresses", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. 
+func (client PublicIPAddressesClient) ListAllResponder(resp *http.Response) (result PublicIPAddressListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAllNextResults retrieves the next set of results, if any. +func (client PublicIPAddressesClient) listAllNextResults(lastResults PublicIPAddressListResult) (result PublicIPAddressListResult, err error) { + req, err := lastResults.publicIPAddressListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listAllNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listAllNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client PublicIPAddressesClient) ListAllComplete(ctx context.Context) (result PublicIPAddressListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// ListVirtualMachineScaleSetPublicIPAddresses gets information about all public IP addresses on a virtual machine +// scale set level. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +func (client PublicIPAddressesClient) ListVirtualMachineScaleSetPublicIPAddresses(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string) (result PublicIPAddressListResultPage, err error) { + result.fn = client.listVirtualMachineScaleSetPublicIPAddressesNextResults + req, err := client.ListVirtualMachineScaleSetPublicIPAddressesPreparer(ctx, resourceGroupName, virtualMachineScaleSetName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListVirtualMachineScaleSetPublicIPAddresses", nil, "Failure preparing request") + return + } + + resp, err := client.ListVirtualMachineScaleSetPublicIPAddressesSender(req) + if err != nil { + result.pialr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListVirtualMachineScaleSetPublicIPAddresses", resp, "Failure sending request") + return + } + + result.pialr, err = client.ListVirtualMachineScaleSetPublicIPAddressesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListVirtualMachineScaleSetPublicIPAddresses", resp, "Failure responding to request") + } + + return +} + +// ListVirtualMachineScaleSetPublicIPAddressesPreparer prepares the ListVirtualMachineScaleSetPublicIPAddresses request. 
+func (client PublicIPAddressesClient) ListVirtualMachineScaleSetPublicIPAddressesPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/publicipaddresses", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListVirtualMachineScaleSetPublicIPAddressesSender sends the ListVirtualMachineScaleSetPublicIPAddresses request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) ListVirtualMachineScaleSetPublicIPAddressesSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListVirtualMachineScaleSetPublicIPAddressesResponder handles the response to the ListVirtualMachineScaleSetPublicIPAddresses request. The method always +// closes the http.Response Body. +func (client PublicIPAddressesClient) ListVirtualMachineScaleSetPublicIPAddressesResponder(resp *http.Response) (result PublicIPAddressListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listVirtualMachineScaleSetPublicIPAddressesNextResults retrieves the next set of results, if any. +func (client PublicIPAddressesClient) listVirtualMachineScaleSetPublicIPAddressesNextResults(lastResults PublicIPAddressListResult) (result PublicIPAddressListResult, err error) { + req, err := lastResults.publicIPAddressListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listVirtualMachineScaleSetPublicIPAddressesNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListVirtualMachineScaleSetPublicIPAddressesSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listVirtualMachineScaleSetPublicIPAddressesNextResults", resp, "Failure sending next results request") + } + result, err = client.ListVirtualMachineScaleSetPublicIPAddressesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listVirtualMachineScaleSetPublicIPAddressesNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListVirtualMachineScaleSetPublicIPAddressesComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client PublicIPAddressesClient) ListVirtualMachineScaleSetPublicIPAddressesComplete(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string) (result PublicIPAddressListResultIterator, err error) { + result.page, err = client.ListVirtualMachineScaleSetPublicIPAddresses(ctx, resourceGroupName, virtualMachineScaleSetName) + return +} + +// ListVirtualMachineScaleSetVMPublicIPAddresses gets information about all public IP addresses in a virtual machine IP +// configuration in a virtual machine scale set. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualMachineScaleSetName - the name of the virtual machine scale set. +// virtualmachineIndex - the virtual machine index. +// networkInterfaceName - the network interface name. +// IPConfigurationName - the IP configuration name. +func (client PublicIPAddressesClient) ListVirtualMachineScaleSetVMPublicIPAddresses(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, IPConfigurationName string) (result PublicIPAddressListResultPage, err error) { + result.fn = client.listVirtualMachineScaleSetVMPublicIPAddressesNextResults + req, err := client.ListVirtualMachineScaleSetVMPublicIPAddressesPreparer(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, IPConfigurationName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListVirtualMachineScaleSetVMPublicIPAddresses", nil, "Failure preparing request") + return + } + + resp, err := client.ListVirtualMachineScaleSetVMPublicIPAddressesSender(req) + if err != nil { + result.pialr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListVirtualMachineScaleSetVMPublicIPAddresses", resp, "Failure sending request") + return + } + + result.pialr, err = client.ListVirtualMachineScaleSetVMPublicIPAddressesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "ListVirtualMachineScaleSetVMPublicIPAddresses", resp, "Failure responding to request") + } + + return +} + +// ListVirtualMachineScaleSetVMPublicIPAddressesPreparer prepares the ListVirtualMachineScaleSetVMPublicIPAddresses request. 
+func (client PublicIPAddressesClient) ListVirtualMachineScaleSetVMPublicIPAddressesPreparer(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, IPConfigurationName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "ipConfigurationName": autorest.Encode("path", IPConfigurationName), + "networkInterfaceName": autorest.Encode("path", networkInterfaceName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualmachineIndex": autorest.Encode("path", virtualmachineIndex), + "virtualMachineScaleSetName": autorest.Encode("path", virtualMachineScaleSetName), + } + + const APIVersion = "2017-03-30" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{virtualMachineScaleSetName}/virtualMachines/{virtualmachineIndex}/networkInterfaces/{networkInterfaceName}/ipconfigurations/{ipConfigurationName}/publicipaddresses", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListVirtualMachineScaleSetVMPublicIPAddressesSender sends the ListVirtualMachineScaleSetVMPublicIPAddresses request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) ListVirtualMachineScaleSetVMPublicIPAddressesSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListVirtualMachineScaleSetVMPublicIPAddressesResponder handles the response to the ListVirtualMachineScaleSetVMPublicIPAddresses request. The method always +// closes the http.Response Body. +func (client PublicIPAddressesClient) ListVirtualMachineScaleSetVMPublicIPAddressesResponder(resp *http.Response) (result PublicIPAddressListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listVirtualMachineScaleSetVMPublicIPAddressesNextResults retrieves the next set of results, if any. 
+func (client PublicIPAddressesClient) listVirtualMachineScaleSetVMPublicIPAddressesNextResults(lastResults PublicIPAddressListResult) (result PublicIPAddressListResult, err error) { + req, err := lastResults.publicIPAddressListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listVirtualMachineScaleSetVMPublicIPAddressesNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListVirtualMachineScaleSetVMPublicIPAddressesSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listVirtualMachineScaleSetVMPublicIPAddressesNextResults", resp, "Failure sending next results request") + } + result, err = client.ListVirtualMachineScaleSetVMPublicIPAddressesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "listVirtualMachineScaleSetVMPublicIPAddressesNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListVirtualMachineScaleSetVMPublicIPAddressesComplete enumerates all values, automatically crossing page boundaries as required. +func (client PublicIPAddressesClient) ListVirtualMachineScaleSetVMPublicIPAddressesComplete(ctx context.Context, resourceGroupName string, virtualMachineScaleSetName string, virtualmachineIndex string, networkInterfaceName string, IPConfigurationName string) (result PublicIPAddressListResultIterator, err error) { + result.page, err = client.ListVirtualMachineScaleSetVMPublicIPAddresses(ctx, resourceGroupName, virtualMachineScaleSetName, virtualmachineIndex, networkInterfaceName, IPConfigurationName) + return +} + +// UpdateTags updates public IP address tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// publicIPAddressName - the name of the public IP address. +// parameters - parameters supplied to update public IP address tags. +func (client PublicIPAddressesClient) UpdateTags(ctx context.Context, resourceGroupName string, publicIPAddressName string, parameters TagsObject) (result PublicIPAddressesUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, publicIPAddressName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.PublicIPAddressesClient", "UpdateTags", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. 
+func (client PublicIPAddressesClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, publicIPAddressName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "publicIpAddressName": autorest.Encode("path", publicIPAddressName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPAddresses/{publicIpAddressName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client PublicIPAddressesClient) UpdateTagsSender(req *http.Request) (future PublicIPAddressesUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. +func (client PublicIPAddressesClient) UpdateTagsResponder(resp *http.Response) (result PublicIPAddress, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routefilterrules.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routefilterrules.go similarity index 56% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/routefilterrules.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routefilterrules.go index 378a75bb2..42afb6949 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routefilterrules.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routefilterrules.go @@ -14,90 +14,65 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// RouteFilterRulesClient is the composite Swagger for Network Client +// RouteFilterRulesClient is the network Client type RouteFilterRulesClient struct { - ManagementClient + BaseClient } -// NewRouteFilterRulesClient creates an instance of the RouteFilterRulesClient -// client. +// NewRouteFilterRulesClient creates an instance of the RouteFilterRulesClient client. func NewRouteFilterRulesClient(subscriptionID string) RouteFilterRulesClient { return NewRouteFilterRulesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewRouteFilterRulesClientWithBaseURI creates an instance of the -// RouteFilterRulesClient client. +// NewRouteFilterRulesClientWithBaseURI creates an instance of the RouteFilterRulesClient client. func NewRouteFilterRulesClientWithBaseURI(baseURI string, subscriptionID string) RouteFilterRulesClient { return RouteFilterRulesClient{NewWithBaseURI(baseURI, subscriptionID)} } // CreateOrUpdate creates or updates a route in the specified route filter. -// This method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. ruleName is the name of the route filter rule. -// routeFilterRuleParameters is parameters supplied to the create or update -// route filter rule operation. -func (client RouteFilterRulesClient) CreateOrUpdate(resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters RouteFilterRule, cancel <-chan struct{}) (<-chan RouteFilterRule, <-chan error) { - resultChan := make(chan RouteFilterRule, 1) - errChan := make(chan error, 1) +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. +// ruleName - the name of the route filter rule. +// routeFilterRuleParameters - parameters supplied to the create or update route filter rule operation. 
+func (client RouteFilterRulesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters RouteFilterRule) (result RouteFilterRulesCreateOrUpdateFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: routeFilterRuleParameters, Constraints: []validation.Constraint{{Target: "routeFilterRuleParameters.RouteFilterRulePropertiesFormat", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "routeFilterRuleParameters.RouteFilterRulePropertiesFormat.RouteFilterRuleType", Name: validation.Null, Rule: true, Chain: nil}, {Target: "routeFilterRuleParameters.RouteFilterRulePropertiesFormat.Communities", Name: validation.Null, Rule: true, Chain: nil}, }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.RouteFilterRulesClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("network.RouteFilterRulesClient", "CreateOrUpdate", err.Error()) } - go func() { - var err error - var result RouteFilterRule - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, routeFilterName, ruleName, routeFilterRuleParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, routeFilterName, ruleName, routeFilterRuleParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client RouteFilterRulesClient) CreateOrUpdatePreparer(resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters RouteFilterRule, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteFilterRulesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters RouteFilterRule) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), @@ -105,27 +80,36 @@ func (client RouteFilterRulesClient) CreateOrUpdatePreparer(resourceGroupName st "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}/routeFilterRules/{ruleName}", pathParameters), autorest.WithJSON(routeFilterRuleParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client RouteFilterRulesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteFilterRulesClient) CreateOrUpdateSender(req *http.Request) (future RouteFilterRulesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -141,48 +125,29 @@ func (client RouteFilterRulesClient) CreateOrUpdateResponder(resp *http.Response return } -// Delete deletes the specified rule from a route filter. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. ruleName is the name of the rule. 
-func (client RouteFilterRulesClient) Delete(resourceGroupName string, routeFilterName string, ruleName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, routeFilterName, ruleName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified rule from a route filter. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. +// ruleName - the name of the rule. +func (client RouteFilterRulesClient) Delete(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string) (result RouteFilterRulesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, routeFilterName, ruleName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client RouteFilterRulesClient) DeletePreparer(resourceGroupName string, routeFilterName string, ruleName string, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteFilterRulesClient) DeletePreparer(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), @@ -190,7 +155,7 @@ func (client RouteFilterRulesClient) DeletePreparer(resourceGroupName string, ro "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -200,15 +165,24 @@ func (client RouteFilterRulesClient) DeletePreparer(resourceGroupName string, ro autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}/routeFilterRules/{ruleName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. 
-func (client RouteFilterRulesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteFilterRulesClient) DeleteSender(req *http.Request) (future RouteFilterRulesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -217,18 +191,19 @@ func (client RouteFilterRulesClient) DeleteResponder(resp *http.Response) (resul err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusOK, http.StatusNoContent), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified rule from a route filter. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. ruleName is the name of the rule. -func (client RouteFilterRulesClient) Get(resourceGroupName string, routeFilterName string, ruleName string) (result RouteFilterRule, err error) { - req, err := client.GetPreparer(resourceGroupName, routeFilterName, ruleName) +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. +// ruleName - the name of the rule. +func (client RouteFilterRulesClient) Get(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string) (result RouteFilterRule, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, routeFilterName, ruleName) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Get", nil, "Failure preparing request") return @@ -250,7 +225,7 @@ func (client RouteFilterRulesClient) Get(resourceGroupName string, routeFilterNa } // GetPreparer prepares the Get request. 
-func (client RouteFilterRulesClient) GetPreparer(resourceGroupName string, routeFilterName string, ruleName string) (*http.Request, error) { +func (client RouteFilterRulesClient) GetPreparer(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), @@ -258,7 +233,7 @@ func (client RouteFilterRulesClient) GetPreparer(resourceGroupName string, route "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -268,13 +243,14 @@ func (client RouteFilterRulesClient) GetPreparer(resourceGroupName string, route autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}/routeFilterRules/{ruleName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client RouteFilterRulesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -291,11 +267,12 @@ func (client RouteFilterRulesClient) GetResponder(resp *http.Response) (result R } // ListByRouteFilter gets all RouteFilterRules in a route filter. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. -func (client RouteFilterRulesClient) ListByRouteFilter(resourceGroupName string, routeFilterName string) (result RouteFilterRuleListResult, err error) { - req, err := client.ListByRouteFilterPreparer(resourceGroupName, routeFilterName) +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. 
+func (client RouteFilterRulesClient) ListByRouteFilter(ctx context.Context, resourceGroupName string, routeFilterName string) (result RouteFilterRuleListResultPage, err error) { + result.fn = client.listByRouteFilterNextResults + req, err := client.ListByRouteFilterPreparer(ctx, resourceGroupName, routeFilterName) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "ListByRouteFilter", nil, "Failure preparing request") return @@ -303,12 +280,12 @@ func (client RouteFilterRulesClient) ListByRouteFilter(resourceGroupName string, resp, err := client.ListByRouteFilterSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.rfrlr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "ListByRouteFilter", resp, "Failure sending request") return } - result, err = client.ListByRouteFilterResponder(resp) + result.rfrlr, err = client.ListByRouteFilterResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "ListByRouteFilter", resp, "Failure responding to request") } @@ -317,14 +294,14 @@ func (client RouteFilterRulesClient) ListByRouteFilter(resourceGroupName string, } // ListByRouteFilterPreparer prepares the ListByRouteFilter request. -func (client RouteFilterRulesClient) ListByRouteFilterPreparer(resourceGroupName string, routeFilterName string) (*http.Request, error) { +func (client RouteFilterRulesClient) ListByRouteFilterPreparer(ctx context.Context, resourceGroupName string, routeFilterName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -334,13 +311,14 @@ func (client RouteFilterRulesClient) ListByRouteFilterPreparer(resourceGroupName autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}/routeFilterRules", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListByRouteFilterSender sends the ListByRouteFilter request. The method will close the // http.Response Body if it receives an error. func (client RouteFilterRulesClient) ListByRouteFilterSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListByRouteFilterResponder handles the response to the ListByRouteFilter request. The method always @@ -356,74 +334,57 @@ func (client RouteFilterRulesClient) ListByRouteFilterResponder(resp *http.Respo return } -// ListByRouteFilterNextResults retrieves the next set of results, if any. -func (client RouteFilterRulesClient) ListByRouteFilterNextResults(lastResults RouteFilterRuleListResult) (result RouteFilterRuleListResult, err error) { - req, err := lastResults.RouteFilterRuleListResultPreparer() +// listByRouteFilterNextResults retrieves the next set of results, if any. 
+func (client RouteFilterRulesClient) listByRouteFilterNextResults(lastResults RouteFilterRuleListResult) (result RouteFilterRuleListResult, err error) { + req, err := lastResults.routeFilterRuleListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "ListByRouteFilter", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "listByRouteFilterNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListByRouteFilterSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "ListByRouteFilter", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "listByRouteFilterNextResults", resp, "Failure sending next results request") } - result, err = client.ListByRouteFilterResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "ListByRouteFilter", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "listByRouteFilterNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListByRouteFilterComplete enumerates all values, automatically crossing page boundaries as required. +func (client RouteFilterRulesClient) ListByRouteFilterComplete(ctx context.Context, resourceGroupName string, routeFilterName string) (result RouteFilterRuleListResultIterator, err error) { + result.page, err = client.ListByRouteFilter(ctx, resourceGroupName, routeFilterName) + return +} + +// Update updates a route in the specified route filter. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. +// ruleName - the name of the route filter rule. +// routeFilterRuleParameters - parameters supplied to the update route filter rule operation. +func (client RouteFilterRulesClient) Update(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters PatchRouteFilterRule) (result RouteFilterRulesUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, routeFilterName, ruleName, routeFilterRuleParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Update", result.Response(), "Failure sending request") + return } return } -// Update updates a route in the specified route filter. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. ruleName is the name of the route filter rule. -// routeFilterRuleParameters is parameters supplied to the update route filter -// rule operation. 
-func (client RouteFilterRulesClient) Update(resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters PatchRouteFilterRule, cancel <-chan struct{}) (<-chan RouteFilterRule, <-chan error) { - resultChan := make(chan RouteFilterRule, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result RouteFilterRule - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.UpdatePreparer(resourceGroupName, routeFilterName, ruleName, routeFilterRuleParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Update", nil, "Failure preparing request") - return - } - - resp, err := client.UpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Update", resp, "Failure sending request") - return - } - - result, err = client.UpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFilterRulesClient", "Update", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - // UpdatePreparer prepares the Update request. -func (client RouteFilterRulesClient) UpdatePreparer(resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters PatchRouteFilterRule, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteFilterRulesClient) UpdatePreparer(ctx context.Context, resourceGroupName string, routeFilterName string, ruleName string, routeFilterRuleParameters PatchRouteFilterRule) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), @@ -431,27 +392,36 @@ func (client RouteFilterRulesClient) UpdatePreparer(resourceGroupName string, ro "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPatch(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}/routeFilterRules/{ruleName}", pathParameters), autorest.WithJSON(routeFilterRuleParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // UpdateSender sends the Update request. The method will close the // http.Response Body if it receives an error. 
-func (client RouteFilterRulesClient) UpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteFilterRulesClient) UpdateSender(req *http.Request) (future RouteFilterRulesUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // UpdateResponder handles the response to the Update request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routefilters.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routefilters.go similarity index 58% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/routefilters.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routefilters.go index 860ccae5c..f2df3875e 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routefilters.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routefilters.go @@ -14,19 +14,19 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// RouteFiltersClient is the composite Swagger for Network Client +// RouteFiltersClient is the network Client type RouteFiltersClient struct { - ManagementClient + BaseClient } // NewRouteFiltersClient creates an instance of the RouteFiltersClient client. @@ -34,82 +34,70 @@ func NewRouteFiltersClient(subscriptionID string) RouteFiltersClient { return NewRouteFiltersClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewRouteFiltersClientWithBaseURI creates an instance of the -// RouteFiltersClient client. +// NewRouteFiltersClientWithBaseURI creates an instance of the RouteFiltersClient client. func NewRouteFiltersClientWithBaseURI(baseURI string, subscriptionID string) RouteFiltersClient { return RouteFiltersClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a route filter in a specified resource -// group. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. routeFilterParameters is parameters supplied to -// the create or update route filter operation. 
-func (client RouteFiltersClient) CreateOrUpdate(resourceGroupName string, routeFilterName string, routeFilterParameters RouteFilter, cancel <-chan struct{}) (<-chan RouteFilter, <-chan error) { - resultChan := make(chan RouteFilter, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result RouteFilter - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, routeFilterName, routeFilterParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates a route filter in a specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. +// routeFilterParameters - parameters supplied to the create or update route filter operation. +func (client RouteFiltersClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, routeFilterName string, routeFilterParameters RouteFilter) (result RouteFiltersCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, routeFilterName, routeFilterParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client RouteFiltersClient) CreateOrUpdatePreparer(resourceGroupName string, routeFilterName string, routeFilterParameters RouteFilter, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteFiltersClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, routeFilterName string, routeFilterParameters RouteFilter) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}", pathParameters), autorest.WithJSON(routeFilterParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client RouteFiltersClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteFiltersClient) CreateOrUpdateSender(req *http.Request) (future RouteFiltersCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -125,55 +113,35 @@ func (client RouteFiltersClient) CreateOrUpdateResponder(resp *http.Response) (r return } -// Delete deletes the specified route filter. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. -func (client RouteFiltersClient) Delete(resourceGroupName string, routeFilterName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, routeFilterName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified route filter. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. 
+func (client RouteFiltersClient) Delete(ctx context.Context, resourceGroupName string, routeFilterName string) (result RouteFiltersDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, routeFilterName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client RouteFiltersClient) DeletePreparer(resourceGroupName string, routeFilterName string, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteFiltersClient) DeletePreparer(ctx context.Context, resourceGroupName string, routeFilterName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -183,15 +151,24 @@ func (client RouteFiltersClient) DeletePreparer(resourceGroupName string, routeF autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client RouteFiltersClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteFiltersClient) DeleteSender(req *http.Request) (future RouteFiltersDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -200,19 +177,19 @@ func (client RouteFiltersClient) DeleteResponder(resp *http.Response) (result au err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusOK, http.StatusNoContent), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified route filter. 
-// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. expand is expands referenced express route bgp -// peering resources. -func (client RouteFiltersClient) Get(resourceGroupName string, routeFilterName string, expand string) (result RouteFilter, err error) { - req, err := client.GetPreparer(resourceGroupName, routeFilterName, expand) +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. +// expand - expands referenced express route bgp peering resources. +func (client RouteFiltersClient) Get(ctx context.Context, resourceGroupName string, routeFilterName string, expand string) (result RouteFilter, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, routeFilterName, expand) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Get", nil, "Failure preparing request") return @@ -234,14 +211,14 @@ func (client RouteFiltersClient) Get(resourceGroupName string, routeFilterName s } // GetPreparer prepares the Get request. -func (client RouteFiltersClient) GetPreparer(resourceGroupName string, routeFilterName string, expand string) (*http.Request, error) { +func (client RouteFiltersClient) GetPreparer(ctx context.Context, resourceGroupName string, routeFilterName string, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -254,13 +231,14 @@ func (client RouteFiltersClient) GetPreparer(resourceGroupName string, routeFilt autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client RouteFiltersClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -277,8 +255,9 @@ func (client RouteFiltersClient) GetResponder(resp *http.Response) (result Route } // List gets all route filters in a subscription. 
-func (client RouteFiltersClient) List() (result RouteFilterListResult, err error) { - req, err := client.ListPreparer() +func (client RouteFiltersClient) List(ctx context.Context) (result RouteFilterListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "List", nil, "Failure preparing request") return @@ -286,12 +265,12 @@ func (client RouteFiltersClient) List() (result RouteFilterListResult, err error resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.rflr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.rflr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "List", resp, "Failure responding to request") } @@ -300,12 +279,12 @@ func (client RouteFiltersClient) List() (result RouteFilterListResult, err error } // ListPreparer prepares the List request. -func (client RouteFiltersClient) ListPreparer() (*http.Request, error) { +func (client RouteFiltersClient) ListPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -315,13 +294,14 @@ func (client RouteFiltersClient) ListPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/routeFilters", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client RouteFiltersClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -337,35 +317,39 @@ func (client RouteFiltersClient) ListResponder(resp *http.Response) (result Rout return } -// ListNextResults retrieves the next set of results, if any. -func (client RouteFiltersClient) ListNextResults(lastResults RouteFilterListResult) (result RouteFilterListResult, err error) { - req, err := lastResults.RouteFilterListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client RouteFiltersClient) listNextResults(lastResults RouteFilterListResult) (result RouteFilterListResult, err error) { + req, err := lastResults.routeFilterListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "listNextResults", resp, "Failure responding to next results request") } + return +} +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client RouteFiltersClient) ListComplete(ctx context.Context) (result RouteFilterListResultIterator, err error) { + result.page, err = client.List(ctx) return } // ListByResourceGroup gets all route filters in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client RouteFiltersClient) ListByResourceGroup(resourceGroupName string) (result RouteFilterListResult, err error) { - req, err := client.ListByResourceGroupPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. +func (client RouteFiltersClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result RouteFilterListResultPage, err error) { + result.fn = client.listByResourceGroupNextResults + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "ListByResourceGroup", nil, "Failure preparing request") return @@ -373,12 +357,12 @@ func (client RouteFiltersClient) ListByResourceGroup(resourceGroupName string) ( resp, err := client.ListByResourceGroupSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.rflr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "ListByResourceGroup", resp, "Failure sending request") return } - result, err = client.ListByResourceGroupResponder(resp) + result.rflr, err = client.ListByResourceGroupResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "ListByResourceGroup", resp, "Failure responding to request") } @@ -387,13 +371,13 @@ func (client RouteFiltersClient) ListByResourceGroup(resourceGroupName string) ( } // ListByResourceGroupPreparer prepares the ListByResourceGroup request. 
-func (client RouteFiltersClient) ListByResourceGroupPreparer(resourceGroupName string) (*http.Request, error) { +func (client RouteFiltersClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -403,13 +387,14 @@ func (client RouteFiltersClient) ListByResourceGroupPreparer(resourceGroupName s autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the // http.Response Body if it receives an error. func (client RouteFiltersClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always @@ -425,100 +410,92 @@ func (client RouteFiltersClient) ListByResourceGroupResponder(resp *http.Respons return } -// ListByResourceGroupNextResults retrieves the next set of results, if any. -func (client RouteFiltersClient) ListByResourceGroupNextResults(lastResults RouteFilterListResult) (result RouteFilterListResult, err error) { - req, err := lastResults.RouteFilterListResultPreparer() +// listByResourceGroupNextResults retrieves the next set of results, if any. +func (client RouteFiltersClient) listByResourceGroupNextResults(lastResults RouteFilterListResult) (result RouteFilterListResult, err error) { + req, err := lastResults.routeFilterListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "ListByResourceGroup", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListByResourceGroupSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "ListByResourceGroup", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.RouteFiltersClient", "listByResourceGroupNextResults", resp, "Failure sending next results request") } - result, err = client.ListByResourceGroupResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "ListByResourceGroup", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client RouteFiltersClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string) (result RouteFilterListResultIterator, err error) { + result.page, err = client.ListByResourceGroup(ctx, resourceGroupName) + return +} + +// Update updates a route filter in a specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeFilterName - the name of the route filter. +// routeFilterParameters - parameters supplied to the update route filter operation. +func (client RouteFiltersClient) Update(ctx context.Context, resourceGroupName string, routeFilterName string, routeFilterParameters PatchRouteFilter) (result RouteFiltersUpdateFuture, err error) { + req, err := client.UpdatePreparer(ctx, resourceGroupName, routeFilterName, routeFilterParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Update", result.Response(), "Failure sending request") + return } return } -// Update updates a route filter in a specified resource group. This method may -// poll for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. routeFilterName is the -// name of the route filter. routeFilterParameters is parameters supplied to -// the update route filter operation. -func (client RouteFiltersClient) Update(resourceGroupName string, routeFilterName string, routeFilterParameters PatchRouteFilter, cancel <-chan struct{}) (<-chan RouteFilter, <-chan error) { - resultChan := make(chan RouteFilter, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result RouteFilter - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.UpdatePreparer(resourceGroupName, routeFilterName, routeFilterParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Update", nil, "Failure preparing request") - return - } - - resp, err := client.UpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Update", resp, "Failure sending request") - return - } - - result, err = client.UpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteFiltersClient", "Update", resp, "Failure responding to request") - } - }() - return resultChan, errChan -} - // UpdatePreparer prepares the Update request. 
-func (client RouteFiltersClient) UpdatePreparer(resourceGroupName string, routeFilterName string, routeFilterParameters PatchRouteFilter, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteFiltersClient) UpdatePreparer(ctx context.Context, resourceGroupName string, routeFilterName string, routeFilterParameters PatchRouteFilter) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeFilterName": autorest.Encode("path", routeFilterName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPatch(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeFilters/{routeFilterName}", pathParameters), autorest.WithJSON(routeFilterParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // UpdateSender sends the Update request. The method will close the // http.Response Body if it receives an error. -func (client RouteFiltersClient) UpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteFiltersClient) UpdateSender(req *http.Request) (future RouteFiltersUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // UpdateResponder handles the response to the Update request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routes.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routes.go similarity index 59% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/routes.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routes.go index 1366d3c14..9a716de45 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routes.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routes.go @@ -14,19 +14,19 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// RoutesClient is the composite Swagger for Network Client +// RoutesClient is the network Client type RoutesClient struct { - ManagementClient + BaseClient } // NewRoutesClient creates an instance of the RoutesClient client. 
@@ -39,49 +39,30 @@ func NewRoutesClientWithBaseURI(baseURI string, subscriptionID string) RoutesCli return RoutesClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a route in the specified route table. This -// method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. routeTableName is the -// name of the route table. routeName is the name of the route. routeParameters -// is parameters supplied to the create or update route operation. -func (client RoutesClient) CreateOrUpdate(resourceGroupName string, routeTableName string, routeName string, routeParameters Route, cancel <-chan struct{}) (<-chan Route, <-chan error) { - resultChan := make(chan Route, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result Route - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, routeTableName, routeName, routeParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RoutesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates a route in the specified route table. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. +// routeName - the name of the route. +// routeParameters - parameters supplied to the create or update route operation. +func (client RoutesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, routeTableName string, routeName string, routeParameters Route) (result RoutesCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, routeTableName, routeName, routeParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RoutesClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.RoutesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RoutesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RoutesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client RoutesClient) CreateOrUpdatePreparer(resourceGroupName string, routeTableName string, routeName string, routeParameters Route, cancel <-chan struct{}) (*http.Request, error) { +func (client RoutesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, routeTableName string, routeName string, routeParameters Route) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeName": autorest.Encode("path", routeName), @@ -89,27 +70,36 @@ func (client RoutesClient) CreateOrUpdatePreparer(resourceGroupName string, rout "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}/routes/{routeName}", pathParameters), autorest.WithJSON(routeParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client RoutesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RoutesClient) CreateOrUpdateSender(req *http.Request) (future RoutesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -125,48 +115,29 @@ func (client RoutesClient) CreateOrUpdateResponder(resp *http.Response) (result return } -// Delete deletes the specified route from a route table. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. routeTableName is the -// name of the route table. routeName is the name of the route. -func (client RoutesClient) Delete(resourceGroupName string, routeTableName string, routeName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, routeTableName, routeName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RoutesClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified route from a route table. 
+// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. +// routeName - the name of the route. +func (client RoutesClient) Delete(ctx context.Context, resourceGroupName string, routeTableName string, routeName string) (result RoutesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, routeTableName, routeName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RoutesClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.RoutesClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RoutesClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RoutesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client RoutesClient) DeletePreparer(resourceGroupName string, routeTableName string, routeName string, cancel <-chan struct{}) (*http.Request, error) { +func (client RoutesClient) DeletePreparer(ctx context.Context, resourceGroupName string, routeTableName string, routeName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeName": autorest.Encode("path", routeName), @@ -174,7 +145,7 @@ func (client RoutesClient) DeletePreparer(resourceGroupName string, routeTableNa "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -184,15 +155,24 @@ func (client RoutesClient) DeletePreparer(resourceGroupName string, routeTableNa autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}/routes/{routeName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client RoutesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RoutesClient) DeleteSender(req *http.Request) (future RoutesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. 
The method always @@ -201,18 +181,19 @@ func (client RoutesClient) DeleteResponder(resp *http.Response) (result autorest err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusOK, http.StatusNoContent), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified route from a route table. -// -// resourceGroupName is the name of the resource group. routeTableName is the -// name of the route table. routeName is the name of the route. -func (client RoutesClient) Get(resourceGroupName string, routeTableName string, routeName string) (result Route, err error) { - req, err := client.GetPreparer(resourceGroupName, routeTableName, routeName) +// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. +// routeName - the name of the route. +func (client RoutesClient) Get(ctx context.Context, resourceGroupName string, routeTableName string, routeName string) (result Route, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, routeTableName, routeName) if err != nil { err = autorest.NewErrorWithError(err, "network.RoutesClient", "Get", nil, "Failure preparing request") return @@ -234,7 +215,7 @@ func (client RoutesClient) Get(resourceGroupName string, routeTableName string, } // GetPreparer prepares the Get request. -func (client RoutesClient) GetPreparer(resourceGroupName string, routeTableName string, routeName string) (*http.Request, error) { +func (client RoutesClient) GetPreparer(ctx context.Context, resourceGroupName string, routeTableName string, routeName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeName": autorest.Encode("path", routeName), @@ -242,7 +223,7 @@ func (client RoutesClient) GetPreparer(resourceGroupName string, routeTableName "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -252,13 +233,14 @@ func (client RoutesClient) GetPreparer(resourceGroupName string, routeTableName autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}/routes/{routeName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client RoutesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -275,11 +257,12 @@ func (client RoutesClient) GetResponder(resp *http.Response) (result Route, err } // List gets all routes in a route table. -// -// resourceGroupName is the name of the resource group. routeTableName is the -// name of the route table. 
-func (client RoutesClient) List(resourceGroupName string, routeTableName string) (result RouteListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, routeTableName) +// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. +func (client RoutesClient) List(ctx context.Context, resourceGroupName string, routeTableName string) (result RouteListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, routeTableName) if err != nil { err = autorest.NewErrorWithError(err, "network.RoutesClient", "List", nil, "Failure preparing request") return @@ -287,12 +270,12 @@ func (client RoutesClient) List(resourceGroupName string, routeTableName string) resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.rlr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.RoutesClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.rlr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.RoutesClient", "List", resp, "Failure responding to request") } @@ -301,14 +284,14 @@ func (client RoutesClient) List(resourceGroupName string, routeTableName string) } // ListPreparer prepares the List request. -func (client RoutesClient) ListPreparer(resourceGroupName string, routeTableName string) (*http.Request, error) { +func (client RoutesClient) ListPreparer(ctx context.Context, resourceGroupName string, routeTableName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeTableName": autorest.Encode("path", routeTableName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -318,13 +301,14 @@ func (client RoutesClient) ListPreparer(resourceGroupName string, routeTableName autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}/routes", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client RoutesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -340,26 +324,29 @@ func (client RoutesClient) ListResponder(resp *http.Response) (result RouteListR return } -// ListNextResults retrieves the next set of results, if any. -func (client RoutesClient) ListNextResults(lastResults RouteListResult) (result RouteListResult, err error) { - req, err := lastResults.RouteListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client RoutesClient) listNextResults(lastResults RouteListResult) (result RouteListResult, err error) { + req, err := lastResults.routeListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.RoutesClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.RoutesClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.RoutesClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.RoutesClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.RoutesClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.RoutesClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client RoutesClient) ListComplete(ctx context.Context, resourceGroupName string, routeTableName string) (result RouteListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, routeTableName) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routetables.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routetables.go similarity index 52% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/routetables.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routetables.go index 80339f207..21f30e9c9 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/routetables.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/routetables.go @@ -14,19 +14,19 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// RouteTablesClient is the composite Swagger for Network Client +// RouteTablesClient is the network Client type RouteTablesClient struct { - ManagementClient + BaseClient } // NewRouteTablesClient creates an instance of the RouteTablesClient client. @@ -34,82 +34,70 @@ func NewRouteTablesClient(subscriptionID string) RouteTablesClient { return NewRouteTablesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewRouteTablesClientWithBaseURI creates an instance of the RouteTablesClient -// client. +// NewRouteTablesClientWithBaseURI creates an instance of the RouteTablesClient client. func NewRouteTablesClientWithBaseURI(baseURI string, subscriptionID string) RouteTablesClient { return RouteTablesClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate create or updates a route table in a specified resource -// group. 
This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. routeTableName is the -// name of the route table. parameters is parameters supplied to the create or -// update route table operation. -func (client RouteTablesClient) CreateOrUpdate(resourceGroupName string, routeTableName string, parameters RouteTable, cancel <-chan struct{}) (<-chan RouteTable, <-chan error) { - resultChan := make(chan RouteTable, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result RouteTable - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, routeTableName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate create or updates a route table in a specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. +// parameters - parameters supplied to the create or update route table operation. +func (client RouteTablesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, routeTableName string, parameters RouteTable) (result RouteTablesCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, routeTableName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client RouteTablesClient) CreateOrUpdatePreparer(resourceGroupName string, routeTableName string, parameters RouteTable, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteTablesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, routeTableName string, parameters RouteTable) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeTableName": autorest.Encode("path", routeTableName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client RouteTablesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteTablesClient) CreateOrUpdateSender(req *http.Request) (future RouteTablesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -125,55 +113,35 @@ func (client RouteTablesClient) CreateOrUpdateResponder(resp *http.Response) (re return } -// Delete deletes the specified route table. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. routeTableName is the -// name of the route table. -func (client RouteTablesClient) Delete(resourceGroupName string, routeTableName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, routeTableName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified route table. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. 
+func (client RouteTablesClient) Delete(ctx context.Context, resourceGroupName string, routeTableName string) (result RouteTablesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, routeTableName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client RouteTablesClient) DeletePreparer(resourceGroupName string, routeTableName string, cancel <-chan struct{}) (*http.Request, error) { +func (client RouteTablesClient) DeletePreparer(ctx context.Context, resourceGroupName string, routeTableName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeTableName": autorest.Encode("path", routeTableName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -183,15 +151,24 @@ func (client RouteTablesClient) DeletePreparer(resourceGroupName string, routeTa autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client RouteTablesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client RouteTablesClient) DeleteSender(req *http.Request) (future RouteTablesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -200,18 +177,19 @@ func (client RouteTablesClient) DeleteResponder(resp *http.Response) (result aut err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusOK, http.StatusAccepted), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified route table. 
-// -// resourceGroupName is the name of the resource group. routeTableName is the -// name of the route table. expand is expands referenced resources. -func (client RouteTablesClient) Get(resourceGroupName string, routeTableName string, expand string) (result RouteTable, err error) { - req, err := client.GetPreparer(resourceGroupName, routeTableName, expand) +// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. +// expand - expands referenced resources. +func (client RouteTablesClient) Get(ctx context.Context, resourceGroupName string, routeTableName string, expand string) (result RouteTable, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, routeTableName, expand) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "Get", nil, "Failure preparing request") return @@ -233,14 +211,14 @@ func (client RouteTablesClient) Get(resourceGroupName string, routeTableName str } // GetPreparer prepares the Get request. -func (client RouteTablesClient) GetPreparer(resourceGroupName string, routeTableName string, expand string) (*http.Request, error) { +func (client RouteTablesClient) GetPreparer(ctx context.Context, resourceGroupName string, routeTableName string, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "routeTableName": autorest.Encode("path", routeTableName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -253,13 +231,14 @@ func (client RouteTablesClient) GetPreparer(resourceGroupName string, routeTable autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client RouteTablesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -276,10 +255,11 @@ func (client RouteTablesClient) GetResponder(resp *http.Response) (result RouteT } // List gets all route tables in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client RouteTablesClient) List(resourceGroupName string) (result RouteTableListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client RouteTablesClient) List(ctx context.Context, resourceGroupName string) (result RouteTableListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "List", nil, "Failure preparing request") return @@ -287,12 +267,12 @@ func (client RouteTablesClient) List(resourceGroupName string) (result RouteTabl resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.rtlr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.rtlr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "List", resp, "Failure responding to request") } @@ -301,13 +281,13 @@ func (client RouteTablesClient) List(resourceGroupName string) (result RouteTabl } // ListPreparer prepares the List request. -func (client RouteTablesClient) ListPreparer(resourceGroupName string) (*http.Request, error) { +func (client RouteTablesClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -317,13 +297,14 @@ func (client RouteTablesClient) ListPreparer(resourceGroupName string) (*http.Re autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client RouteTablesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -339,33 +320,37 @@ func (client RouteTablesClient) ListResponder(resp *http.Response) (result Route return } -// ListNextResults retrieves the next set of results, if any. -func (client RouteTablesClient) ListNextResults(lastResults RouteTableListResult) (result RouteTableListResult, err error) { - req, err := lastResults.RouteTableListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client RouteTablesClient) listNextResults(lastResults RouteTableListResult) (result RouteTableListResult, err error) { + req, err := lastResults.routeTableListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "listNextResults", resp, "Failure responding to next results request") } + return +} +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client RouteTablesClient) ListComplete(ctx context.Context, resourceGroupName string) (result RouteTableListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) return } // ListAll gets all route tables in a subscription. -func (client RouteTablesClient) ListAll() (result RouteTableListResult, err error) { - req, err := client.ListAllPreparer() +func (client RouteTablesClient) ListAll(ctx context.Context) (result RouteTableListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "ListAll", nil, "Failure preparing request") return @@ -373,12 +358,12 @@ func (client RouteTablesClient) ListAll() (result RouteTableListResult, err erro resp, err := client.ListAllSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.rtlr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "ListAll", resp, "Failure sending request") return } - result, err = client.ListAllResponder(resp) + result.rtlr, err = client.ListAllResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "ListAll", resp, "Failure responding to request") } @@ -387,12 +372,12 @@ func (client RouteTablesClient) ListAll() (result RouteTableListResult, err erro } // ListAllPreparer prepares the ListAll request. 
-func (client RouteTablesClient) ListAllPreparer() (*http.Request, error) { +func (client RouteTablesClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -402,13 +387,14 @@ func (client RouteTablesClient) ListAllPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/routeTables", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListAllSender sends the ListAll request. The method will close the // http.Response Body if it receives an error. func (client RouteTablesClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListAllResponder handles the response to the ListAll request. The method always @@ -424,26 +410,103 @@ func (client RouteTablesClient) ListAllResponder(resp *http.Response) (result Ro return } -// ListAllNextResults retrieves the next set of results, if any. -func (client RouteTablesClient) ListAllNextResults(lastResults RouteTableListResult) (result RouteTableListResult, err error) { - req, err := lastResults.RouteTableListResultPreparer() +// listAllNextResults retrieves the next set of results, if any. +func (client RouteTablesClient) listAllNextResults(lastResults RouteTableListResult) (result RouteTableListResult, err error) { + req, err := lastResults.routeTableListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "ListAll", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "listAllNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListAllSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "ListAll", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.RouteTablesClient", "listAllNextResults", resp, "Failure sending next results request") } - result, err = client.ListAllResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "ListAll", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client RouteTablesClient) ListAllComplete(ctx context.Context) (result RouteTableListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// UpdateTags updates a route table tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// routeTableName - the name of the route table. +// parameters - parameters supplied to update route table tags. 
+func (client RouteTablesClient) UpdateTags(ctx context.Context, resourceGroupName string, routeTableName string, parameters TagsObject) (result RouteTablesUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, routeTableName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.RouteTablesClient", "UpdateTags", result.Response(), "Failure sending request") + return } return } + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client RouteTablesClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, routeTableName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "routeTableName": autorest.Encode("path", routeTableName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/routeTables/{routeTableName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client RouteTablesClient) UpdateTagsSender(req *http.Request) (future RouteTablesUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. +func (client RouteTablesClient) UpdateTagsResponder(resp *http.Response) (result RouteTable, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/securitygroups.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/securitygroups.go similarity index 51% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/securitygroups.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/securitygroups.go index 3faaf007f..0c6d7f78e 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/securitygroups.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/securitygroups.go @@ -14,104 +14,90 @@ package network // See the License for the specific language governing permissions and // limitations under the License. 
// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// SecurityGroupsClient is the composite Swagger for Network Client +// SecurityGroupsClient is the network Client type SecurityGroupsClient struct { - ManagementClient + BaseClient } -// NewSecurityGroupsClient creates an instance of the SecurityGroupsClient -// client. +// NewSecurityGroupsClient creates an instance of the SecurityGroupsClient client. func NewSecurityGroupsClient(subscriptionID string) SecurityGroupsClient { return NewSecurityGroupsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewSecurityGroupsClientWithBaseURI creates an instance of the -// SecurityGroupsClient client. +// NewSecurityGroupsClientWithBaseURI creates an instance of the SecurityGroupsClient client. func NewSecurityGroupsClientWithBaseURI(baseURI string, subscriptionID string) SecurityGroupsClient { return SecurityGroupsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a network security group in the specified -// resource group. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// networkSecurityGroupName is the name of the network security group. -// parameters is parameters supplied to the create or update network security -// group operation. -func (client SecurityGroupsClient) CreateOrUpdate(resourceGroupName string, networkSecurityGroupName string, parameters SecurityGroup, cancel <-chan struct{}) (<-chan SecurityGroup, <-chan error) { - resultChan := make(chan SecurityGroup, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result SecurityGroup - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, networkSecurityGroupName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates a network security group in the specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +// parameters - parameters supplied to the create or update network security group operation. 
+func (client SecurityGroupsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, parameters SecurityGroup) (result SecurityGroupsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, networkSecurityGroupName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client SecurityGroupsClient) CreateOrUpdatePreparer(resourceGroupName string, networkSecurityGroupName string, parameters SecurityGroup, cancel <-chan struct{}) (*http.Request, error) { +func (client SecurityGroupsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, parameters SecurityGroup) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client SecurityGroupsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client SecurityGroupsClient) CreateOrUpdateSender(req *http.Request) (future SecurityGroupsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. 
The method always @@ -120,62 +106,42 @@ func (client SecurityGroupsClient) CreateOrUpdateResponder(resp *http.Response) err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } -// Delete deletes the specified network security group. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. -// networkSecurityGroupName is the name of the network security group. -func (client SecurityGroupsClient) Delete(resourceGroupName string, networkSecurityGroupName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, networkSecurityGroupName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified network security group. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +func (client SecurityGroupsClient) Delete(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (result SecurityGroupsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, networkSecurityGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. 
-func (client SecurityGroupsClient) DeletePreparer(resourceGroupName string, networkSecurityGroupName string, cancel <-chan struct{}) (*http.Request, error) { +func (client SecurityGroupsClient) DeletePreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -185,15 +151,24 @@ func (client SecurityGroupsClient) DeletePreparer(resourceGroupName string, netw autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client SecurityGroupsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client SecurityGroupsClient) DeleteSender(req *http.Request) (future SecurityGroupsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -202,19 +177,19 @@ func (client SecurityGroupsClient) DeleteResponder(resp *http.Response) (result err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusOK, http.StatusNoContent), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified network security group. -// -// resourceGroupName is the name of the resource group. -// networkSecurityGroupName is the name of the network security group. expand -// is expands referenced resources. -func (client SecurityGroupsClient) Get(resourceGroupName string, networkSecurityGroupName string, expand string) (result SecurityGroup, err error) { - req, err := client.GetPreparer(resourceGroupName, networkSecurityGroupName, expand) +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +// expand - expands referenced resources. 
+func (client SecurityGroupsClient) Get(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, expand string) (result SecurityGroup, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkSecurityGroupName, expand) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "Get", nil, "Failure preparing request") return @@ -236,14 +211,14 @@ func (client SecurityGroupsClient) Get(resourceGroupName string, networkSecurity } // GetPreparer prepares the Get request. -func (client SecurityGroupsClient) GetPreparer(resourceGroupName string, networkSecurityGroupName string, expand string) (*http.Request, error) { +func (client SecurityGroupsClient) GetPreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -256,13 +231,14 @@ func (client SecurityGroupsClient) GetPreparer(resourceGroupName string, network autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client SecurityGroupsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -279,10 +255,11 @@ func (client SecurityGroupsClient) GetResponder(resp *http.Response) (result Sec } // List gets all network security groups in a resource group. -// -// resourceGroupName is the name of the resource group. -func (client SecurityGroupsClient) List(resourceGroupName string) (result SecurityGroupListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client SecurityGroupsClient) List(ctx context.Context, resourceGroupName string) (result SecurityGroupListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "List", nil, "Failure preparing request") return @@ -290,12 +267,12 @@ func (client SecurityGroupsClient) List(resourceGroupName string) (result Securi resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.sglr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.sglr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "List", resp, "Failure responding to request") } @@ -304,13 +281,13 @@ func (client SecurityGroupsClient) List(resourceGroupName string) (result Securi } // ListPreparer prepares the List request. -func (client SecurityGroupsClient) ListPreparer(resourceGroupName string) (*http.Request, error) { +func (client SecurityGroupsClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -320,13 +297,14 @@ func (client SecurityGroupsClient) ListPreparer(resourceGroupName string) (*http autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client SecurityGroupsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -342,33 +320,37 @@ func (client SecurityGroupsClient) ListResponder(resp *http.Response) (result Se return } -// ListNextResults retrieves the next set of results, if any. -func (client SecurityGroupsClient) ListNextResults(lastResults SecurityGroupListResult) (result SecurityGroupListResult, err error) { - req, err := lastResults.SecurityGroupListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client SecurityGroupsClient) listNextResults(lastResults SecurityGroupListResult) (result SecurityGroupListResult, err error) { + req, err := lastResults.securityGroupListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "listNextResults", resp, "Failure responding to next results request") } + return +} +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client SecurityGroupsClient) ListComplete(ctx context.Context, resourceGroupName string) (result SecurityGroupListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) return } // ListAll gets all network security groups in a subscription. -func (client SecurityGroupsClient) ListAll() (result SecurityGroupListResult, err error) { - req, err := client.ListAllPreparer() +func (client SecurityGroupsClient) ListAll(ctx context.Context) (result SecurityGroupListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "ListAll", nil, "Failure preparing request") return @@ -376,12 +358,12 @@ func (client SecurityGroupsClient) ListAll() (result SecurityGroupListResult, er resp, err := client.ListAllSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.sglr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "ListAll", resp, "Failure sending request") return } - result, err = client.ListAllResponder(resp) + result.sglr, err = client.ListAllResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "ListAll", resp, "Failure responding to request") } @@ -390,12 +372,12 @@ func (client SecurityGroupsClient) ListAll() (result SecurityGroupListResult, er } // ListAllPreparer prepares the ListAll request. 
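With the regenerated client, List returns a `SecurityGroupListResultPage` and the new `ListComplete` wraps it in a `SecurityGroupListResultIterator` whose paging is driven by `listNextResults`. A sketch of enumerating every group in a resource group, assuming the iterator exposes the usual `NotDone`/`Value`/`Next` methods and that `SecurityGroup` carries a `Name *string` field (both live in the package's models.go, which is not part of this hunk):

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

// listNSGNames is a hypothetical helper that collects the names of every
// network security group in a resource group, letting the iterator follow
// nextLink pages via listNextResults.
func listNSGNames(ctx context.Context, client network.SecurityGroupsClient, resourceGroup string) ([]string, error) {
	it, err := client.ListComplete(ctx, resourceGroup)
	if err != nil {
		return nil, fmt.Errorf("listing NSGs: %v", err)
	}

	var names []string
	for it.NotDone() {
		if name := it.Value().Name; name != nil {
			names = append(names, *name)
		}
		if err := it.Next(); err != nil { // fetches the next page once the current one is exhausted
			return nil, err
		}
	}
	return names, nil
}
```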
-func (client SecurityGroupsClient) ListAllPreparer() (*http.Request, error) { +func (client SecurityGroupsClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -405,13 +387,14 @@ func (client SecurityGroupsClient) ListAllPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/networkSecurityGroups", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListAllSender sends the ListAll request. The method will close the // http.Response Body if it receives an error. func (client SecurityGroupsClient) ListAllSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListAllResponder handles the response to the ListAll request. The method always @@ -427,26 +410,103 @@ func (client SecurityGroupsClient) ListAllResponder(resp *http.Response) (result return } -// ListAllNextResults retrieves the next set of results, if any. -func (client SecurityGroupsClient) ListAllNextResults(lastResults SecurityGroupListResult) (result SecurityGroupListResult, err error) { - req, err := lastResults.SecurityGroupListResultPreparer() +// listAllNextResults retrieves the next set of results, if any. +func (client SecurityGroupsClient) listAllNextResults(lastResults SecurityGroupListResult) (result SecurityGroupListResult, err error) { + req, err := lastResults.securityGroupListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "ListAll", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "listAllNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListAllSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "ListAll", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "listAllNextResults", resp, "Failure sending next results request") } - result, err = client.ListAllResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "ListAll", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client SecurityGroupsClient) ListAllComplete(ctx context.Context) (result SecurityGroupListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// UpdateTags updates a network security group tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. 
+// parameters - parameters supplied to update network security group tags. +func (client SecurityGroupsClient) UpdateTags(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, parameters TagsObject) (result SecurityGroupsUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, networkSecurityGroupName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityGroupsClient", "UpdateTags", result.Response(), "Failure sending request") + return } return } + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client SecurityGroupsClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client SecurityGroupsClient) UpdateTagsSender(req *http.Request) (future SecurityGroupsUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. 
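UpdateTags is new in the 2018-01-01 surface: it PATCHes only the tags of an existing group and, like the other long-running operations here, hands back a `SecurityGroupsUpdateTagsFuture` instead of result/error channels. A sketch of driving it to completion, assuming the vendored go-autorest exposes `azure.Future.WaitForCompletion` and that the generated `TagsObject` model has a `Tags map[string]*string` field (neither is shown in this diff):

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
	"github.com/Azure/go-autorest/autorest/to"
)

// tagNSG is a hypothetical helper that replaces the tags on a network security
// group and blocks until the long-running operation finishes.
func tagNSG(ctx context.Context, client network.SecurityGroupsClient, resourceGroup, nsgName string) (network.SecurityGroup, error) {
	// Assumption: TagsObject exposes a Tags map, as the generated models of
	// this vintage do.
	params := network.TagsObject{
		Tags: map[string]*string{"managed-by": to.StringPtr("packer")},
	}

	future, err := client.UpdateTags(ctx, resourceGroup, nsgName, params)
	if err != nil {
		return network.SecurityGroup{}, fmt.Errorf("starting UpdateTags: %v", err)
	}

	// Assumption: the embedded azure.Future provides WaitForCompletion in the
	// vendored go-autorest; it polls until the operation reaches a terminal state.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return network.SecurityGroup{}, fmt.Errorf("waiting for UpdateTags: %v", err)
	}

	// UpdateTagsResponder unmarshals the final response into a SecurityGroup.
	return client.UpdateTagsResponder(future.Response())
}
```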
+func (client SecurityGroupsClient) UpdateTagsResponder(resp *http.Response) (result SecurityGroup, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/securityrules.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/securityrules.go similarity index 55% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/securityrules.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/securityrules.go index 2e2738605..942c34def 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/securityrules.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/securityrules.go @@ -14,90 +14,55 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" - "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// SecurityRulesClient is the composite Swagger for Network Client +// SecurityRulesClient is the network Client type SecurityRulesClient struct { - ManagementClient + BaseClient } -// NewSecurityRulesClient creates an instance of the SecurityRulesClient -// client. +// NewSecurityRulesClient creates an instance of the SecurityRulesClient client. func NewSecurityRulesClient(subscriptionID string) SecurityRulesClient { return NewSecurityRulesClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewSecurityRulesClientWithBaseURI creates an instance of the -// SecurityRulesClient client. +// NewSecurityRulesClientWithBaseURI creates an instance of the SecurityRulesClient client. func NewSecurityRulesClientWithBaseURI(baseURI string, subscriptionID string) SecurityRulesClient { return SecurityRulesClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a security rule in the specified network -// security group. This method may poll for completion. Polling can be canceled -// by passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// networkSecurityGroupName is the name of the network security group. -// securityRuleName is the name of the security rule. securityRuleParameters is -// parameters supplied to the create or update network security rule operation. 
-func (client SecurityRulesClient) CreateOrUpdate(resourceGroupName string, networkSecurityGroupName string, securityRuleName string, securityRuleParameters SecurityRule, cancel <-chan struct{}) (<-chan SecurityRule, <-chan error) { - resultChan := make(chan SecurityRule, 1) - errChan := make(chan error, 1) - if err := validation.Validate([]validation.Validation{ - {TargetValue: securityRuleParameters, - Constraints: []validation.Constraint{{Target: "securityRuleParameters.SecurityRulePropertiesFormat", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "securityRuleParameters.SecurityRulePropertiesFormat.SourceAddressPrefix", Name: validation.Null, Rule: true, Chain: nil}, - {Target: "securityRuleParameters.SecurityRulePropertiesFormat.DestinationAddressPrefix", Name: validation.Null, Rule: true, Chain: nil}, - }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.SecurityRulesClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan +// CreateOrUpdate creates or updates a security rule in the specified network security group. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +// securityRuleName - the name of the security rule. +// securityRuleParameters - parameters supplied to the create or update network security rule operation. +func (client SecurityRulesClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, securityRuleName string, securityRuleParameters SecurityRule) (result SecurityRulesCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, networkSecurityGroupName, securityRuleName, securityRuleParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "CreateOrUpdate", nil, "Failure preparing request") + return } - go func() { - var err error - var result SecurityRule - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, networkSecurityGroupName, securityRuleName, securityRuleParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "CreateOrUpdate", resp, "Failure sending request") - return - } - - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
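The old `SecurityRulesClient.CreateOrUpdate` spawned a goroutine, pushed the outcome onto a `<-chan SecurityRule` / `<-chan error` pair, and validated `SourceAddressPrefix`/`DestinationAddressPrefix` client-side; the regenerated method drops that validation and simply returns a `SecurityRulesCreateOrUpdateFuture`. A caller-side sketch of the new shape, again assuming `WaitForCompletion` is available on the embedded `azure.Future`:

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

// ensureRule is a hypothetical helper that creates or updates one security
// rule and waits for the result. With the old client this meant reading from
// the result and error channels; with the regenerated client the future
// carries the polling state instead.
func ensureRule(ctx context.Context, client network.SecurityRulesClient, resourceGroup, nsgName, ruleName string, rule network.SecurityRule) (network.SecurityRule, error) {
	future, err := client.CreateOrUpdate(ctx, resourceGroup, nsgName, ruleName, rule)
	if err != nil {
		return network.SecurityRule{}, fmt.Errorf("starting CreateOrUpdate: %v", err)
	}

	// Assumption: WaitForCompletion is exposed by the vendored azure.Future.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return network.SecurityRule{}, fmt.Errorf("waiting for CreateOrUpdate: %v", err)
	}

	// The responder turns the final polling response into the SecurityRule.
	return client.CreateOrUpdateResponder(future.Response())
}
```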
-func (client SecurityRulesClient) CreateOrUpdatePreparer(resourceGroupName string, networkSecurityGroupName string, securityRuleName string, securityRuleParameters SecurityRule, cancel <-chan struct{}) (*http.Request, error) { +func (client SecurityRulesClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, securityRuleName string, securityRuleParameters SecurityRule) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), "resourceGroupName": autorest.Encode("path", resourceGroupName), @@ -105,27 +70,36 @@ func (client SecurityRulesClient) CreateOrUpdatePreparer(resourceGroupName strin "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}", pathParameters), autorest.WithJSON(securityRuleParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client SecurityRulesClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client SecurityRulesClient) CreateOrUpdateSender(req *http.Request) (future SecurityRulesCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -141,49 +115,29 @@ func (client SecurityRulesClient) CreateOrUpdateResponder(resp *http.Response) ( return } -// Delete deletes the specified network security rule. This method may poll for -// completion. Polling can be canceled by passing the cancel channel argument. -// The channel will be used to cancel polling and any outstanding HTTP -// requests. -// -// resourceGroupName is the name of the resource group. -// networkSecurityGroupName is the name of the network security group. -// securityRuleName is the name of the security rule. 
-func (client SecurityRulesClient) Delete(resourceGroupName string, networkSecurityGroupName string, securityRuleName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, networkSecurityGroupName, securityRuleName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified network security rule. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +// securityRuleName - the name of the security rule. +func (client SecurityRulesClient) Delete(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, securityRuleName string) (result SecurityRulesDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, networkSecurityGroupName, securityRuleName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client SecurityRulesClient) DeletePreparer(resourceGroupName string, networkSecurityGroupName string, securityRuleName string, cancel <-chan struct{}) (*http.Request, error) { +func (client SecurityRulesClient) DeletePreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, securityRuleName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), "resourceGroupName": autorest.Encode("path", resourceGroupName), @@ -191,7 +145,7 @@ func (client SecurityRulesClient) DeletePreparer(resourceGroupName string, netwo "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -201,15 +155,24 @@ func (client SecurityRulesClient) DeletePreparer(resourceGroupName string, netwo autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. 
The method will close the // http.Response Body if it receives an error. -func (client SecurityRulesClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client SecurityRulesClient) DeleteSender(req *http.Request) (future SecurityRulesDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -218,19 +181,19 @@ func (client SecurityRulesClient) DeleteResponder(resp *http.Response) (result a err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusNoContent, http.StatusAccepted, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get get the specified network security rule. -// -// resourceGroupName is the name of the resource group. -// networkSecurityGroupName is the name of the network security group. -// securityRuleName is the name of the security rule. -func (client SecurityRulesClient) Get(resourceGroupName string, networkSecurityGroupName string, securityRuleName string) (result SecurityRule, err error) { - req, err := client.GetPreparer(resourceGroupName, networkSecurityGroupName, securityRuleName) +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. +// securityRuleName - the name of the security rule. +func (client SecurityRulesClient) Get(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, securityRuleName string) (result SecurityRule, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkSecurityGroupName, securityRuleName) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "Get", nil, "Failure preparing request") return @@ -252,7 +215,7 @@ func (client SecurityRulesClient) Get(resourceGroupName string, networkSecurityG } // GetPreparer prepares the Get request. 
-func (client SecurityRulesClient) GetPreparer(resourceGroupName string, networkSecurityGroupName string, securityRuleName string) (*http.Request, error) { +func (client SecurityRulesClient) GetPreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string, securityRuleName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), "resourceGroupName": autorest.Encode("path", resourceGroupName), @@ -260,7 +223,7 @@ func (client SecurityRulesClient) GetPreparer(resourceGroupName string, networkS "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -270,13 +233,14 @@ func (client SecurityRulesClient) GetPreparer(resourceGroupName string, networkS autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules/{securityRuleName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client SecurityRulesClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -293,11 +257,12 @@ func (client SecurityRulesClient) GetResponder(resp *http.Response) (result Secu } // List gets all security rules in a network security group. -// -// resourceGroupName is the name of the resource group. -// networkSecurityGroupName is the name of the network security group. -func (client SecurityRulesClient) List(resourceGroupName string, networkSecurityGroupName string) (result SecurityRuleListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, networkSecurityGroupName) +// Parameters: +// resourceGroupName - the name of the resource group. +// networkSecurityGroupName - the name of the network security group. 
+func (client SecurityRulesClient) List(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (result SecurityRuleListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, networkSecurityGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "List", nil, "Failure preparing request") return @@ -305,12 +270,12 @@ func (client SecurityRulesClient) List(resourceGroupName string, networkSecurity resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.srlr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.srlr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "List", resp, "Failure responding to request") } @@ -319,14 +284,14 @@ func (client SecurityRulesClient) List(resourceGroupName string, networkSecurity } // ListPreparer prepares the List request. -func (client SecurityRulesClient) ListPreparer(resourceGroupName string, networkSecurityGroupName string) (*http.Request, error) { +func (client SecurityRulesClient) ListPreparer(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "networkSecurityGroupName": autorest.Encode("path", networkSecurityGroupName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -336,13 +301,14 @@ func (client SecurityRulesClient) ListPreparer(resourceGroupName string, network autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkSecurityGroups/{networkSecurityGroupName}/securityRules", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client SecurityRulesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -358,26 +324,29 @@ func (client SecurityRulesClient) ListResponder(resp *http.Response) (result Sec return } -// ListNextResults retrieves the next set of results, if any. -func (client SecurityRulesClient) ListNextResults(lastResults SecurityRuleListResult) (result SecurityRuleListResult, err error) { - req, err := lastResults.SecurityRuleListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client SecurityRulesClient) listNextResults(lastResults SecurityRuleListResult) (result SecurityRuleListResult, err error) { + req, err := lastResults.securityRuleListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.SecurityRulesClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.SecurityRulesClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.SecurityRulesClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.SecurityRulesClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.SecurityRulesClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client SecurityRulesClient) ListComplete(ctx context.Context, resourceGroupName string, networkSecurityGroupName string) (result SecurityRuleListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, networkSecurityGroupName) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/subnets.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/subnets.go similarity index 59% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/subnets.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/subnets.go index 7d6ff882a..b288b8ba8 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/subnets.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/subnets.go @@ -14,19 +14,19 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// SubnetsClient is the composite Swagger for Network Client +// SubnetsClient is the network Client type SubnetsClient struct { - ManagementClient + BaseClient } // NewSubnetsClient creates an instance of the SubnetsClient client. @@ -40,49 +40,29 @@ func NewSubnetsClientWithBaseURI(baseURI string, subscriptionID string) SubnetsC } // CreateOrUpdate creates or updates a subnet in the specified virtual network. -// This method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. subnetName is the name of the subnet. 
-// subnetParameters is parameters supplied to the create or update subnet -// operation. -func (client SubnetsClient) CreateOrUpdate(resourceGroupName string, virtualNetworkName string, subnetName string, subnetParameters Subnet, cancel <-chan struct{}) (<-chan Subnet, <-chan error) { - resultChan := make(chan Subnet, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result Subnet - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, virtualNetworkName, subnetName, subnetParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SubnetsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// subnetName - the name of the subnet. +// subnetParameters - parameters supplied to the create or update subnet operation. +func (client SubnetsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, virtualNetworkName string, subnetName string, subnetParameters Subnet) (result SubnetsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, virtualNetworkName, subnetName, subnetParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SubnetsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.SubnetsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SubnetsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SubnetsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client SubnetsClient) CreateOrUpdatePreparer(resourceGroupName string, virtualNetworkName string, subnetName string, subnetParameters Subnet, cancel <-chan struct{}) (*http.Request, error) { +func (client SubnetsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, subnetName string, subnetParameters Subnet) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subnetName": autorest.Encode("path", subnetName), @@ -90,27 +70,36 @@ func (client SubnetsClient) CreateOrUpdatePreparer(resourceGroupName string, vir "virtualNetworkName": autorest.Encode("path", virtualNetworkName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName}", pathParameters), autorest.WithJSON(subnetParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client SubnetsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client SubnetsClient) CreateOrUpdateSender(req *http.Request) (future SubnetsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -126,47 +115,29 @@ func (client SubnetsClient) CreateOrUpdateResponder(resp *http.Response) (result return } -// Delete deletes the specified subnet. This method may poll for completion. -// Polling can be canceled by passing the cancel channel argument. The channel -// will be used to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. subnetName is the name of the subnet. -func (client SubnetsClient) Delete(resourceGroupName string, virtualNetworkName string, subnetName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, virtualNetworkName, subnetName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SubnetsClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified subnet. 
+// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// subnetName - the name of the subnet. +func (client SubnetsClient) Delete(ctx context.Context, resourceGroupName string, virtualNetworkName string, subnetName string) (result SubnetsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, virtualNetworkName, subnetName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SubnetsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.SubnetsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.SubnetsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.SubnetsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client SubnetsClient) DeletePreparer(resourceGroupName string, virtualNetworkName string, subnetName string, cancel <-chan struct{}) (*http.Request, error) { +func (client SubnetsClient) DeletePreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, subnetName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subnetName": autorest.Encode("path", subnetName), @@ -174,7 +145,7 @@ func (client SubnetsClient) DeletePreparer(resourceGroupName string, virtualNetw "virtualNetworkName": autorest.Encode("path", virtualNetworkName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -184,15 +155,24 @@ func (client SubnetsClient) DeletePreparer(resourceGroupName string, virtualNetw autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client SubnetsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client SubnetsClient) DeleteSender(req *http.Request) (future SubnetsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. 
The method always @@ -201,19 +181,20 @@ func (client SubnetsClient) DeleteResponder(resp *http.Response) (result autores err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusAccepted), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified subnet by virtual network and resource group. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. subnetName is the name of the subnet. -// expand is expands referenced resources. -func (client SubnetsClient) Get(resourceGroupName string, virtualNetworkName string, subnetName string, expand string) (result Subnet, err error) { - req, err := client.GetPreparer(resourceGroupName, virtualNetworkName, subnetName, expand) +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// subnetName - the name of the subnet. +// expand - expands referenced resources. +func (client SubnetsClient) Get(ctx context.Context, resourceGroupName string, virtualNetworkName string, subnetName string, expand string) (result Subnet, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, virtualNetworkName, subnetName, expand) if err != nil { err = autorest.NewErrorWithError(err, "network.SubnetsClient", "Get", nil, "Failure preparing request") return @@ -235,7 +216,7 @@ func (client SubnetsClient) Get(resourceGroupName string, virtualNetworkName str } // GetPreparer prepares the Get request. -func (client SubnetsClient) GetPreparer(resourceGroupName string, virtualNetworkName string, subnetName string, expand string) (*http.Request, error) { +func (client SubnetsClient) GetPreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, subnetName string, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subnetName": autorest.Encode("path", subnetName), @@ -243,7 +224,7 @@ func (client SubnetsClient) GetPreparer(resourceGroupName string, virtualNetwork "virtualNetworkName": autorest.Encode("path", virtualNetworkName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -256,13 +237,14 @@ func (client SubnetsClient) GetPreparer(resourceGroupName string, virtualNetwork autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client SubnetsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -279,11 +261,12 @@ func (client SubnetsClient) GetResponder(resp *http.Response) (result Subnet, er } // List gets all subnets in a virtual network. 
-// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. -func (client SubnetsClient) List(resourceGroupName string, virtualNetworkName string) (result SubnetListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, virtualNetworkName) +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +func (client SubnetsClient) List(ctx context.Context, resourceGroupName string, virtualNetworkName string) (result SubnetListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, virtualNetworkName) if err != nil { err = autorest.NewErrorWithError(err, "network.SubnetsClient", "List", nil, "Failure preparing request") return @@ -291,12 +274,12 @@ func (client SubnetsClient) List(resourceGroupName string, virtualNetworkName st resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.slr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.SubnetsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.slr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.SubnetsClient", "List", resp, "Failure responding to request") } @@ -305,14 +288,14 @@ func (client SubnetsClient) List(resourceGroupName string, virtualNetworkName st } // ListPreparer prepares the List request. -func (client SubnetsClient) ListPreparer(resourceGroupName string, virtualNetworkName string) (*http.Request, error) { +func (client SubnetsClient) ListPreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkName": autorest.Encode("path", virtualNetworkName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -322,13 +305,14 @@ func (client SubnetsClient) ListPreparer(resourceGroupName string, virtualNetwor autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client SubnetsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -344,26 +328,29 @@ func (client SubnetsClient) ListResponder(resp *http.Response) (result SubnetLis return } -// ListNextResults retrieves the next set of results, if any. -func (client SubnetsClient) ListNextResults(lastResults SubnetListResult) (result SubnetListResult, err error) { - req, err := lastResults.SubnetListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
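SubnetsClient.List follows the same pattern: it now returns a `SubnetListResultPage` whose `fn` hook (`listNextResults`) fetches follow-up pages on demand, with `ListComplete` flattening the pages into an iterator. A sketch that consumes results page by page rather than value by value, assuming the page type exposes the usual `NotDone`/`Values`/`Next` methods from models.go (not shown here):

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

// countSubnets is a hypothetical helper that tallies the subnets of a virtual
// network one page at a time.
func countSubnets(ctx context.Context, client network.SubnetsClient, resourceGroup, vnetName string) (int, error) {
	page, err := client.List(ctx, resourceGroup, vnetName)
	if err != nil {
		return 0, fmt.Errorf("listing subnets: %v", err)
	}

	total := 0
	for page.NotDone() {
		total += len(page.Values()) // Values returns the Subnet slice for the current page.
		if err := page.Next(); err != nil { // Next calls listNextResults under the hood.
			return 0, err
		}
	}
	return total, nil
}
```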
+func (client SubnetsClient) listNextResults(lastResults SubnetListResult) (result SubnetListResult, err error) { + req, err := lastResults.subnetListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.SubnetsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.SubnetsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.SubnetsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.SubnetsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.SubnetsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.SubnetsClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client SubnetsClient) ListComplete(ctx context.Context, resourceGroupName string, virtualNetworkName string) (result SubnetListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, virtualNetworkName) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/usages.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/usages.go similarity index 69% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/usages.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/usages.go index 34fa0df4b..96b8efdd2 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/usages.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/usages.go @@ -14,20 +14,20 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// UsagesClient is the composite Swagger for Network Client +// UsagesClient is the network Client type UsagesClient struct { - ManagementClient + BaseClient } // NewUsagesClient creates an instance of the UsagesClient client. @@ -40,17 +40,18 @@ func NewUsagesClientWithBaseURI(baseURI string, subscriptionID string) UsagesCli return UsagesClient{NewWithBaseURI(baseURI, subscriptionID)} } -// List lists compute usages for a subscription. -// -// location is the location where resource usage is queried. -func (client UsagesClient) List(location string) (result UsagesListResult, err error) { +// List list network usages for a subscription. +// Parameters: +// location - the location where resource usage is queried. 
+func (client UsagesClient) List(ctx context.Context, location string) (result UsagesListResultPage, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: location, Constraints: []validation.Constraint{{Target: "location", Name: validation.Pattern, Rule: `^[-\w\._]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "network.UsagesClient", "List") + return result, validation.NewError("network.UsagesClient", "List", err.Error()) } - req, err := client.ListPreparer(location) + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, location) if err != nil { err = autorest.NewErrorWithError(err, "network.UsagesClient", "List", nil, "Failure preparing request") return @@ -58,12 +59,12 @@ func (client UsagesClient) List(location string) (result UsagesListResult, err e resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.ulr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.UsagesClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.ulr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.UsagesClient", "List", resp, "Failure responding to request") } @@ -72,13 +73,13 @@ func (client UsagesClient) List(location string) (result UsagesListResult, err e } // ListPreparer prepares the List request. -func (client UsagesClient) ListPreparer(location string) (*http.Request, error) { +func (client UsagesClient) ListPreparer(ctx context.Context, location string) (*http.Request, error) { pathParameters := map[string]interface{}{ "location": autorest.Encode("path", location), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -88,13 +89,14 @@ func (client UsagesClient) ListPreparer(location string) (*http.Request, error) autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/locations/{location}/usages", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client UsagesClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -110,26 +112,29 @@ func (client UsagesClient) ListResponder(resp *http.Response) (result UsagesList return } -// ListNextResults retrieves the next set of results, if any. -func (client UsagesClient) ListNextResults(lastResults UsagesListResult) (result UsagesListResult, err error) { - req, err := lastResults.UsagesListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client UsagesClient) listNextResults(lastResults UsagesListResult) (result UsagesListResult, err error) { + req, err := lastResults.usagesListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.UsagesClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.UsagesClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.UsagesClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.UsagesClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.UsagesClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.UsagesClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client UsagesClient) ListComplete(ctx context.Context, location string) (result UsagesListResultIterator, err error) { + result.page, err = client.List(ctx, location) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/version.go similarity index 80% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/version.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/version.go index 50b836179..b631f9c17 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/version.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/version.go @@ -1,5 +1,7 @@ package network +import "github.com/Azure/azure-sdk-for-go/version" + // Copyright (c) Microsoft and contributors. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -14,16 +16,15 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. // UserAgent returns the UserAgent string to use when sending http.Requests. func UserAgent() string { - return "Azure-SDK-For-Go/v10.0.2-beta arm-network/" + return "Azure-SDK-For-Go/" + version.Number + " network/2018-01-01" } // Version returns the semantic version (see http://semver.org) of the client. 
func Version() string { - return "v10.0.2-beta" + return version.Number } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkgatewayconnections.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkgatewayconnections.go similarity index 55% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkgatewayconnections.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkgatewayconnections.go index 13e1392d2..d7a1d50a6 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkgatewayconnections.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkgatewayconnections.go @@ -14,47 +14,39 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// VirtualNetworkGatewayConnectionsClient is the composite Swagger for Network -// Client +// VirtualNetworkGatewayConnectionsClient is the network Client type VirtualNetworkGatewayConnectionsClient struct { - ManagementClient + BaseClient } -// NewVirtualNetworkGatewayConnectionsClient creates an instance of the -// VirtualNetworkGatewayConnectionsClient client. +// NewVirtualNetworkGatewayConnectionsClient creates an instance of the VirtualNetworkGatewayConnectionsClient client. func NewVirtualNetworkGatewayConnectionsClient(subscriptionID string) VirtualNetworkGatewayConnectionsClient { return NewVirtualNetworkGatewayConnectionsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewVirtualNetworkGatewayConnectionsClientWithBaseURI creates an instance of -// the VirtualNetworkGatewayConnectionsClient client. +// NewVirtualNetworkGatewayConnectionsClientWithBaseURI creates an instance of the +// VirtualNetworkGatewayConnectionsClient client. func NewVirtualNetworkGatewayConnectionsClientWithBaseURI(baseURI string, subscriptionID string) VirtualNetworkGatewayConnectionsClient { return VirtualNetworkGatewayConnectionsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a virtual network gateway connection in -// the specified resource group. This method may poll for completion. Polling -// can be canceled by passing the cancel channel argument. The channel will be -// used to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayConnectionName is the name of the virtual network -// gateway connection. parameters is parameters supplied to the create or -// update virtual network gateway connection operation. 
-func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdate(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VirtualNetworkGatewayConnection, cancel <-chan struct{}) (<-chan VirtualNetworkGatewayConnection, <-chan error) { - resultChan := make(chan VirtualNetworkGatewayConnection, 1) - errChan := make(chan error, 1) +// CreateOrUpdate creates or updates a virtual network gateway connection in the specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the name of the virtual network gateway connection. +// parameters - parameters supplied to the create or update virtual network gateway connection operation. +func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VirtualNetworkGatewayConnection) (result VirtualNetworkGatewayConnectionsCreateOrUpdateFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat", Name: validation.Null, Rule: true, @@ -65,71 +57,62 @@ func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdate(resourceGrou {Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.LocalNetworkGateway2", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayConnectionPropertiesFormat.LocalNetworkGateway2.LocalNetworkGatewayPropertiesFormat", Name: validation.Null, Rule: true, Chain: nil}}}, }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", err.Error()) } - go func() { - var err error - var result VirtualNetworkGatewayConnection - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, virtualNetworkGatewayConnectionName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // 
CreateOrUpdatePreparer prepares the CreateOrUpdate request. -func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdatePreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VirtualNetworkGatewayConnection, cancel <-chan struct{}) (*http.Request, error) { +func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VirtualNetworkGatewayConnection) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdateSender(req *http.Request) (future VirtualNetworkGatewayConnectionsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -145,56 +128,35 @@ func (client VirtualNetworkGatewayConnectionsClient) CreateOrUpdateResponder(res return } -// Delete deletes the specified virtual network Gateway connection. This method -// may poll for completion. Polling can be canceled by passing the cancel -// channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayConnectionName is the name of the virtual network -// gateway connection. 
-func (client VirtualNetworkGatewayConnectionsClient) Delete(resourceGroupName string, virtualNetworkGatewayConnectionName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, virtualNetworkGatewayConnectionName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified virtual network Gateway connection. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the name of the virtual network gateway connection. +func (client VirtualNetworkGatewayConnectionsClient) Delete(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string) (result VirtualNetworkGatewayConnectionsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client VirtualNetworkGatewayConnectionsClient) DeletePreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, cancel <-chan struct{}) (*http.Request, error) { +func (client VirtualNetworkGatewayConnectionsClient) DeletePreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -204,15 +166,24 @@ func (client VirtualNetworkGatewayConnectionsClient) DeletePreparer(resourceGrou autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. 
The method will close the // http.Response Body if it receives an error. -func (client VirtualNetworkGatewayConnectionsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client VirtualNetworkGatewayConnectionsClient) DeleteSender(req *http.Request) (future VirtualNetworkGatewayConnectionsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -228,12 +199,11 @@ func (client VirtualNetworkGatewayConnectionsClient) DeleteResponder(resp *http. } // Get gets the specified virtual network gateway connection by resource group. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayConnectionName is the name of the virtual network -// gateway connection. -func (client VirtualNetworkGatewayConnectionsClient) Get(resourceGroupName string, virtualNetworkGatewayConnectionName string) (result VirtualNetworkGatewayConnection, err error) { - req, err := client.GetPreparer(resourceGroupName, virtualNetworkGatewayConnectionName) +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the name of the virtual network gateway connection. +func (client VirtualNetworkGatewayConnectionsClient) Get(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string) (result VirtualNetworkGatewayConnection, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "Get", nil, "Failure preparing request") return @@ -255,14 +225,14 @@ func (client VirtualNetworkGatewayConnectionsClient) Get(resourceGroupName strin } // GetPreparer prepares the Get request. 
-func (client VirtualNetworkGatewayConnectionsClient) GetPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string) (*http.Request, error) { +func (client VirtualNetworkGatewayConnectionsClient) GetPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -272,13 +242,14 @@ func (client VirtualNetworkGatewayConnectionsClient) GetPreparer(resourceGroupNa autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkGatewayConnectionsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -294,15 +265,13 @@ func (client VirtualNetworkGatewayConnectionsClient) GetResponder(resp *http.Res return } -// GetSharedKey the Get VirtualNetworkGatewayConnectionSharedKey operation -// retrieves information about the specified virtual network gateway connection -// shared key through Network resource provider. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayConnectionName is the virtual network gateway -// connection shared key name. -func (client VirtualNetworkGatewayConnectionsClient) GetSharedKey(resourceGroupName string, virtualNetworkGatewayConnectionName string) (result ConnectionSharedKey, err error) { - req, err := client.GetSharedKeyPreparer(resourceGroupName, virtualNetworkGatewayConnectionName) +// GetSharedKey the Get VirtualNetworkGatewayConnectionSharedKey operation retrieves information about the specified +// virtual network gateway connection shared key through Network resource provider. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the virtual network gateway connection shared key name. +func (client VirtualNetworkGatewayConnectionsClient) GetSharedKey(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string) (result ConnectionSharedKey, err error) { + req, err := client.GetSharedKeyPreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "GetSharedKey", nil, "Failure preparing request") return @@ -324,14 +293,14 @@ func (client VirtualNetworkGatewayConnectionsClient) GetSharedKey(resourceGroupN } // GetSharedKeyPreparer prepares the GetSharedKey request. 
-func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeyPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string) (*http.Request, error) { +func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeyPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -341,13 +310,14 @@ func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeyPreparer(resour autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}/sharedkey", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSharedKeySender sends the GetSharedKey request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetSharedKeyResponder handles the response to the GetSharedKey request. The method always @@ -363,12 +333,13 @@ func (client VirtualNetworkGatewayConnectionsClient) GetSharedKeyResponder(resp return } -// List the List VirtualNetworkGatewayConnections operation retrieves all the -// virtual network gateways connections created. -// -// resourceGroupName is the name of the resource group. -func (client VirtualNetworkGatewayConnectionsClient) List(resourceGroupName string) (result VirtualNetworkGatewayConnectionListResult, err error) { - req, err := client.ListPreparer(resourceGroupName) +// List the List VirtualNetworkGatewayConnections operation retrieves all the virtual network gateways connections +// created. +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client VirtualNetworkGatewayConnectionsClient) List(ctx context.Context, resourceGroupName string) (result VirtualNetworkGatewayConnectionListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", nil, "Failure preparing request") return @@ -376,12 +347,12 @@ func (client VirtualNetworkGatewayConnectionsClient) List(resourceGroupName stri resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.vngclr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.vngclr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure responding to request") } @@ -390,13 +361,13 @@ func (client VirtualNetworkGatewayConnectionsClient) List(resourceGroupName stri } // ListPreparer prepares the List request. -func (client VirtualNetworkGatewayConnectionsClient) ListPreparer(resourceGroupName string) (*http.Request, error) { +func (client VirtualNetworkGatewayConnectionsClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -406,13 +377,14 @@ func (client VirtualNetworkGatewayConnectionsClient) ListPreparer(resourceGroupN autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkGatewayConnectionsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -428,116 +400,104 @@ func (client VirtualNetworkGatewayConnectionsClient) ListResponder(resp *http.Re return } -// ListNextResults retrieves the next set of results, if any. -func (client VirtualNetworkGatewayConnectionsClient) ListNextResults(lastResults VirtualNetworkGatewayConnectionListResult) (result VirtualNetworkGatewayConnectionListResult, err error) { - req, err := lastResults.VirtualNetworkGatewayConnectionListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client VirtualNetworkGatewayConnectionsClient) listNextResults(lastResults VirtualNetworkGatewayConnectionListResult) (result VirtualNetworkGatewayConnectionListResult, err error) { + req, err := lastResults.virtualNetworkGatewayConnectionListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "listNextResults", resp, "Failure responding to next results request") } - return } -// ResetSharedKey the VirtualNetworkGatewayConnectionResetSharedKey operation -// resets the virtual network gateway connection shared key for passed virtual -// network gateway connection in the specified resource group through Network -// resource provider. This method may poll for completion. Polling can be -// canceled by passing the cancel channel argument. The channel will be used to -// cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayConnectionName is the virtual network gateway -// connection reset shared key Name. parameters is parameters supplied to the -// begin reset virtual network gateway connection shared key operation through -// network resource provider. -func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKey(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionResetSharedKey, cancel <-chan struct{}) (<-chan ConnectionResetSharedKey, <-chan error) { - resultChan := make(chan ConnectionResetSharedKey, 1) - errChan := make(chan error, 1) +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualNetworkGatewayConnectionsClient) ListComplete(ctx context.Context, resourceGroupName string) (result VirtualNetworkGatewayConnectionListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ResetSharedKey the VirtualNetworkGatewayConnectionResetSharedKey operation resets the virtual network gateway +// connection shared key for passed virtual network gateway connection in the specified resource group through Network +// resource provider. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the virtual network gateway connection reset shared key Name. +// parameters - parameters supplied to the begin reset virtual network gateway connection shared key operation +// through network resource provider. 
+func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKey(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionResetSharedKey) (result VirtualNetworkGatewayConnectionsResetSharedKeyFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.KeyLength", Name: validation.Null, Rule: true, Chain: []validation.Constraint{{Target: "parameters.KeyLength", Name: validation.InclusiveMaximum, Rule: 128, Chain: nil}, {Target: "parameters.KeyLength", Name: validation.InclusiveMinimum, Rule: 1, Chain: nil}, }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", err.Error()) } - go func() { - var err error - var result ConnectionResetSharedKey - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.ResetSharedKeyPreparer(resourceGroupName, virtualNetworkGatewayConnectionName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", nil, "Failure preparing request") - return - } + req, err := client.ResetSharedKeyPreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", nil, "Failure preparing request") + return + } - resp, err := client.ResetSharedKeySender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", resp, "Failure sending request") - return - } + result, err = client.ResetSharedKeySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", result.Response(), "Failure sending request") + return + } - result, err = client.ResetSharedKeyResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "ResetSharedKey", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // ResetSharedKeyPreparer prepares the ResetSharedKey request. 
-func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeyPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionResetSharedKey, cancel <-chan struct{}) (*http.Request, error) { +func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeyPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionResetSharedKey) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPost(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}/sharedkey/reset", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ResetSharedKeySender sends the ResetSharedKey request. The method will close the // http.Response Body if it receives an error. -func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeySender(req *http.Request) (future VirtualNetworkGatewayConnectionsResetSharedKeyFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // ResetSharedKeyResponder handles the response to the ResetSharedKey request. The method always @@ -553,89 +513,74 @@ func (client VirtualNetworkGatewayConnectionsClient) ResetSharedKeyResponder(res return } -// SetSharedKey the Put VirtualNetworkGatewayConnectionSharedKey operation sets -// the virtual network gateway connection shared key for passed virtual network -// gateway connection in the specified resource group through Network resource -// provider. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. -// virtualNetworkGatewayConnectionName is the virtual network gateway -// connection name. parameters is parameters supplied to the Begin Set Virtual -// Network Gateway connection Shared key operation throughNetwork resource +// SetSharedKey the Put VirtualNetworkGatewayConnectionSharedKey operation sets the virtual network gateway connection +// shared key for passed virtual network gateway connection in the specified resource group through Network resource // provider. 
-func (client VirtualNetworkGatewayConnectionsClient) SetSharedKey(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionSharedKey, cancel <-chan struct{}) (<-chan ConnectionSharedKey, <-chan error) { - resultChan := make(chan ConnectionSharedKey, 1) - errChan := make(chan error, 1) +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the virtual network gateway connection name. +// parameters - parameters supplied to the Begin Set Virtual Network Gateway connection Shared key operation +// throughNetwork resource provider. +func (client VirtualNetworkGatewayConnectionsClient) SetSharedKey(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionSharedKey) (result VirtualNetworkGatewayConnectionsSetSharedKeyFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.Value", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", err.Error()) } - go func() { - var err error - var result ConnectionSharedKey - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.SetSharedKeyPreparer(resourceGroupName, virtualNetworkGatewayConnectionName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", nil, "Failure preparing request") - return - } + req, err := client.SetSharedKeyPreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", nil, "Failure preparing request") + return + } - resp, err := client.SetSharedKeySender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", resp, "Failure sending request") - return - } + result, err = client.SetSharedKeySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", result.Response(), "Failure sending request") + return + } - result, err = client.SetSharedKeyResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "SetSharedKey", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // SetSharedKeyPreparer prepares the SetSharedKey request. 
-func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeyPreparer(resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionSharedKey, cancel <-chan struct{}) (*http.Request, error) { +func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeyPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters ConnectionSharedKey) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}/sharedkey", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // SetSharedKeySender sends the SetSharedKey request. The method will close the // http.Response Body if it receives an error. -func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeySender(req *http.Request) (future VirtualNetworkGatewayConnectionsSetSharedKeyFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // SetSharedKeyResponder handles the response to the SetSharedKey request. The method always @@ -644,7 +589,81 @@ func (client VirtualNetworkGatewayConnectionsClient) SetSharedKeyResponder(resp err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// UpdateTags updates a virtual network gateway connection tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the name of the virtual network gateway connection. +// parameters - parameters supplied to update virtual network gateway connection tags. 
+func (client VirtualNetworkGatewayConnectionsClient) UpdateTags(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters TagsObject) (result VirtualNetworkGatewayConnectionsUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewayConnectionsClient", "UpdateTags", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client VirtualNetworkGatewayConnectionsClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewayConnectionsClient) UpdateTagsSender(req *http.Request) (future VirtualNetworkGatewayConnectionsUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. 
+func (client VirtualNetworkGatewayConnectionsClient) UpdateTagsResponder(resp *http.Response) (result VirtualNetworkGatewayConnectionListEntity, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkgateways.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkgateways.go new file mode 100644 index 000000000..aa314cddf --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkgateways.go @@ -0,0 +1,1177 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// VirtualNetworkGatewaysClient is the network Client +type VirtualNetworkGatewaysClient struct { + BaseClient +} + +// NewVirtualNetworkGatewaysClient creates an instance of the VirtualNetworkGatewaysClient client. +func NewVirtualNetworkGatewaysClient(subscriptionID string) VirtualNetworkGatewaysClient { + return NewVirtualNetworkGatewaysClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualNetworkGatewaysClientWithBaseURI creates an instance of the VirtualNetworkGatewaysClient client. +func NewVirtualNetworkGatewaysClientWithBaseURI(baseURI string, subscriptionID string) VirtualNetworkGatewaysClient { + return VirtualNetworkGatewaysClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CreateOrUpdate creates or updates a virtual network gateway in the specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. +// parameters - parameters supplied to create or update virtual network gateway operation. 
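// NOTE (editorial sketch, illustrative only, not part of the generated SDK file):
// CreateOrUpdate below returns a typed future rather than the old result/error
// channel pair. A caller would block on the future and then decode the typed
// result, roughly as follows. WaitForCompletion on the embedded azure.Future and
// the generated future's Result method are assumed from the vendored
// go-autorest and models code, which this diff does not show, and
// exampleCreateGateway is a hypothetical helper name.
func exampleCreateGateway(ctx context.Context, client VirtualNetworkGatewaysClient, resourceGroupName, gatewayName string, parameters VirtualNetworkGateway) (VirtualNetworkGateway, error) {
	future, err := client.CreateOrUpdate(ctx, resourceGroupName, gatewayName, parameters)
	if err != nil {
		return VirtualNetworkGateway{}, err
	}
	// Poll until the long-running Azure operation finishes, honoring ctx for
	// cancellation and deadlines instead of the removed cancel channel.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return VirtualNetworkGateway{}, err
	}
	// Decode the final response body into the typed result.
	return future.Result(client)
}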
+func (client VirtualNetworkGatewaysClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters VirtualNetworkGateway) (result VirtualNetworkGatewaysCreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.VirtualNetworkGatewayPropertiesFormat", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.VirtualNetworkGatewaysClient", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, virtualNetworkGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client VirtualNetworkGatewaysClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters VirtualNetworkGateway) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) CreateOrUpdateSender(req *http.Request) (future VirtualNetworkGatewaysCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualNetworkGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified virtual network gateway. 
+// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. +func (client VirtualNetworkGatewaysClient) Delete(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (result VirtualNetworkGatewaysDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, virtualNetworkGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client VirtualNetworkGatewaysClient) DeletePreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) DeleteSender(req *http.Request) (future VirtualNetworkGatewaysDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Generatevpnclientpackage generates VPN client package for P2S client of the virtual network gateway in the specified +// resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. +// parameters - parameters supplied to the generate virtual network gateway VPN client package operation. 
+func (client VirtualNetworkGatewaysClient) Generatevpnclientpackage(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters VpnClientParameters) (result VirtualNetworkGatewaysGeneratevpnclientpackageFuture, err error) { + req, err := client.GeneratevpnclientpackagePreparer(ctx, resourceGroupName, virtualNetworkGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Generatevpnclientpackage", nil, "Failure preparing request") + return + } + + result, err = client.GeneratevpnclientpackageSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Generatevpnclientpackage", result.Response(), "Failure sending request") + return + } + + return +} + +// GeneratevpnclientpackagePreparer prepares the Generatevpnclientpackage request. +func (client VirtualNetworkGatewaysClient) GeneratevpnclientpackagePreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters VpnClientParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/generatevpnclientpackage", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GeneratevpnclientpackageSender sends the Generatevpnclientpackage request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) GeneratevpnclientpackageSender(req *http.Request) (future VirtualNetworkGatewaysGeneratevpnclientpackageFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GeneratevpnclientpackageResponder handles the response to the Generatevpnclientpackage request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) GeneratevpnclientpackageResponder(resp *http.Response) (result String, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GenerateVpnProfile generates VPN profile for P2S client of the virtual network gateway in the specified resource +// group. Used for IKEV2 and radius based authentication. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. 
+// parameters - parameters supplied to the generate virtual network gateway VPN client package operation. +func (client VirtualNetworkGatewaysClient) GenerateVpnProfile(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters VpnClientParameters) (result VirtualNetworkGatewaysGenerateVpnProfileFuture, err error) { + req, err := client.GenerateVpnProfilePreparer(ctx, resourceGroupName, virtualNetworkGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GenerateVpnProfile", nil, "Failure preparing request") + return + } + + result, err = client.GenerateVpnProfileSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GenerateVpnProfile", result.Response(), "Failure sending request") + return + } + + return +} + +// GenerateVpnProfilePreparer prepares the GenerateVpnProfile request. +func (client VirtualNetworkGatewaysClient) GenerateVpnProfilePreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters VpnClientParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/generatevpnprofile", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GenerateVpnProfileSender sends the GenerateVpnProfile request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) GenerateVpnProfileSender(req *http.Request) (future VirtualNetworkGatewaysGenerateVpnProfileFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GenerateVpnProfileResponder handles the response to the GenerateVpnProfile request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) GenerateVpnProfileResponder(resp *http.Response) (result String, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Get gets the specified virtual network gateway by resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. 
+func (client VirtualNetworkGatewaysClient) Get(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (result VirtualNetworkGateway, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, virtualNetworkGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client VirtualNetworkGatewaysClient) GetPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) GetResponder(resp *http.Response) (result VirtualNetworkGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetAdvertisedRoutes this operation retrieves a list of routes the virtual network gateway is advertising to the +// specified peer. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. 
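In contrast to the operations that return futures, Get above is synchronous: GetResponder unmarshals the body straight into a VirtualNetworkGateway. A minimal sketch of a caller, with placeholder names and credential setup omitted (treating Name as a *string on the generated model is an assumption here):

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func main() {
	client := network.NewVirtualNetworkGatewaysClient("<subscription-id>")
	// client.Authorizer would normally be set here (e.g. from a service
	// principal token); credential setup is out of scope for this sketch.

	gw, err := client.Get(context.Background(), "example-rg", "example-gateway")
	if err != nil {
		log.Fatal(err)
	}
	// Model fields are pointers in the generated types, so guard against nil.
	if gw.Name != nil {
		fmt.Println("found gateway:", *gw.Name)
	}
}
```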
+// peer - the IP address of the peer +func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutes(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, peer string) (result VirtualNetworkGatewaysGetAdvertisedRoutesFuture, err error) { + req, err := client.GetAdvertisedRoutesPreparer(ctx, resourceGroupName, virtualNetworkGatewayName, peer) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetAdvertisedRoutes", nil, "Failure preparing request") + return + } + + result, err = client.GetAdvertisedRoutesSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetAdvertisedRoutes", result.Response(), "Failure sending request") + return + } + + return +} + +// GetAdvertisedRoutesPreparer prepares the GetAdvertisedRoutes request. +func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutesPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, peer string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + "peer": autorest.Encode("query", peer), + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/getAdvertisedRoutes", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetAdvertisedRoutesSender sends the GetAdvertisedRoutes request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutesSender(req *http.Request) (future VirtualNetworkGatewaysGetAdvertisedRoutesFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetAdvertisedRoutesResponder handles the response to the GetAdvertisedRoutes request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) GetAdvertisedRoutesResponder(resp *http.Response) (result GatewayRouteListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetBgpPeerStatus the GetBgpPeerStatus operation retrieves the status of all BGP peers. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. +// peer - the IP address of the peer to retrieve the status of. 
+func (client VirtualNetworkGatewaysClient) GetBgpPeerStatus(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, peer string) (result VirtualNetworkGatewaysGetBgpPeerStatusFuture, err error) { + req, err := client.GetBgpPeerStatusPreparer(ctx, resourceGroupName, virtualNetworkGatewayName, peer) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetBgpPeerStatus", nil, "Failure preparing request") + return + } + + result, err = client.GetBgpPeerStatusSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetBgpPeerStatus", result.Response(), "Failure sending request") + return + } + + return +} + +// GetBgpPeerStatusPreparer prepares the GetBgpPeerStatus request. +func (client VirtualNetworkGatewaysClient) GetBgpPeerStatusPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, peer string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(peer) > 0 { + queryParameters["peer"] = autorest.Encode("query", peer) + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/getBgpPeerStatus", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetBgpPeerStatusSender sends the GetBgpPeerStatus request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) GetBgpPeerStatusSender(req *http.Request) (future VirtualNetworkGatewaysGetBgpPeerStatusFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetBgpPeerStatusResponder handles the response to the GetBgpPeerStatus request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) GetBgpPeerStatusResponder(resp *http.Response) (result BgpPeerStatusListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetLearnedRoutes this operation retrieves a list of routes the virtual network gateway has learned, including routes +// learned from BGP peers. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. 
+func (client VirtualNetworkGatewaysClient) GetLearnedRoutes(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (result VirtualNetworkGatewaysGetLearnedRoutesFuture, err error) { + req, err := client.GetLearnedRoutesPreparer(ctx, resourceGroupName, virtualNetworkGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetLearnedRoutes", nil, "Failure preparing request") + return + } + + result, err = client.GetLearnedRoutesSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetLearnedRoutes", result.Response(), "Failure sending request") + return + } + + return +} + +// GetLearnedRoutesPreparer prepares the GetLearnedRoutes request. +func (client VirtualNetworkGatewaysClient) GetLearnedRoutesPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/getLearnedRoutes", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetLearnedRoutesSender sends the GetLearnedRoutes request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) GetLearnedRoutesSender(req *http.Request) (future VirtualNetworkGatewaysGetLearnedRoutesFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetLearnedRoutesResponder handles the response to the GetLearnedRoutes request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) GetLearnedRoutesResponder(resp *http.Response) (result GatewayRouteListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetVpnProfilePackageURL gets pre-generated VPN profile for P2S client of the virtual network gateway in the +// specified resource group. The profile needs to be generated first using generateVpnProfile. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. 
+func (client VirtualNetworkGatewaysClient) GetVpnProfilePackageURL(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (result VirtualNetworkGatewaysGetVpnProfilePackageURLFuture, err error) { + req, err := client.GetVpnProfilePackageURLPreparer(ctx, resourceGroupName, virtualNetworkGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetVpnProfilePackageURL", nil, "Failure preparing request") + return + } + + result, err = client.GetVpnProfilePackageURLSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "GetVpnProfilePackageURL", result.Response(), "Failure sending request") + return + } + + return +} + +// GetVpnProfilePackageURLPreparer prepares the GetVpnProfilePackageURL request. +func (client VirtualNetworkGatewaysClient) GetVpnProfilePackageURLPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/getvpnprofilepackageurl", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetVpnProfilePackageURLSender sends the GetVpnProfilePackageURL request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) GetVpnProfilePackageURLSender(req *http.Request) (future VirtualNetworkGatewaysGetVpnProfilePackageURLFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetVpnProfilePackageURLResponder handles the response to the GetVpnProfilePackageURL request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) GetVpnProfilePackageURLResponder(resp *http.Response) (result String, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all virtual network gateways by resource group. +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client VirtualNetworkGatewaysClient) List(ctx context.Context, resourceGroupName string) (result VirtualNetworkGatewayListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.vnglr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", resp, "Failure sending request") + return + } + + result.vnglr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client VirtualNetworkGatewaysClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) ListResponder(resp *http.Response) (result VirtualNetworkGatewayListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. 
+func (client VirtualNetworkGatewaysClient) listNextResults(lastResults VirtualNetworkGatewayListResult) (result VirtualNetworkGatewayListResult, err error) { + req, err := lastResults.virtualNetworkGatewayListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualNetworkGatewaysClient) ListComplete(ctx context.Context, resourceGroupName string) (result VirtualNetworkGatewayListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListConnections gets all the connections in a virtual network gateway. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. +func (client VirtualNetworkGatewaysClient) ListConnections(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (result VirtualNetworkGatewayListConnectionsResultPage, err error) { + result.fn = client.listConnectionsNextResults + req, err := client.ListConnectionsPreparer(ctx, resourceGroupName, virtualNetworkGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "ListConnections", nil, "Failure preparing request") + return + } + + resp, err := client.ListConnectionsSender(req) + if err != nil { + result.vnglcr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "ListConnections", resp, "Failure sending request") + return + } + + result.vnglcr, err = client.ListConnectionsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "ListConnections", resp, "Failure responding to request") + } + + return +} + +// ListConnectionsPreparer prepares the ListConnections request. 
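List above returns a single page (VirtualNetworkGatewayListResultPage) wired to listNextResults, while ListComplete wraps the same call in an iterator that requests further pages on demand. A sketch of walking every gateway in a resource group, assuming the NotDone/Value/Next methods that AutoRest generates for result iterators of this vintage; all names are placeholders:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

func main() {
	client := network.NewVirtualNetworkGatewaysClient("<subscription-id>")
	// client.Authorizer is assumed to be configured before real use.

	it, err := client.ListComplete(context.Background(), "example-rg")
	if err != nil {
		log.Fatal(err)
	}
	for it.NotDone() {
		if name := it.Value().Name; name != nil {
			fmt.Println(*name)
		}
		// Next advances the iterator, fetching the following page from the
		// service once the current page is exhausted.
		if err := it.Next(); err != nil {
			log.Fatal(err)
		}
	}
}
```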
+func (client VirtualNetworkGatewaysClient) ListConnectionsPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/connections", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListConnectionsSender sends the ListConnections request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) ListConnectionsSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListConnectionsResponder handles the response to the ListConnections request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) ListConnectionsResponder(resp *http.Response) (result VirtualNetworkGatewayListConnectionsResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listConnectionsNextResults retrieves the next set of results, if any. +func (client VirtualNetworkGatewaysClient) listConnectionsNextResults(lastResults VirtualNetworkGatewayListConnectionsResult) (result VirtualNetworkGatewayListConnectionsResult, err error) { + req, err := lastResults.virtualNetworkGatewayListConnectionsResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "listConnectionsNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListConnectionsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "listConnectionsNextResults", resp, "Failure sending next results request") + } + result, err = client.ListConnectionsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "listConnectionsNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListConnectionsComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualNetworkGatewaysClient) ListConnectionsComplete(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (result VirtualNetworkGatewayListConnectionsResultIterator, err error) { + result.page, err = client.ListConnections(ctx, resourceGroupName, virtualNetworkGatewayName) + return +} + +// Reset resets the primary of the virtual network gateway in the specified resource group. 
+// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. +// gatewayVip - virtual network gateway vip address supplied to the begin reset of the active-active feature +// enabled gateway. +func (client VirtualNetworkGatewaysClient) Reset(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, gatewayVip string) (result VirtualNetworkGatewaysResetFuture, err error) { + req, err := client.ResetPreparer(ctx, resourceGroupName, virtualNetworkGatewayName, gatewayVip) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Reset", nil, "Failure preparing request") + return + } + + result, err = client.ResetSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "Reset", result.Response(), "Failure sending request") + return + } + + return +} + +// ResetPreparer prepares the Reset request. +func (client VirtualNetworkGatewaysClient) ResetPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, gatewayVip string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(gatewayVip) > 0 { + queryParameters["gatewayVip"] = autorest.Encode("query", gatewayVip) + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/reset", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ResetSender sends the Reset request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) ResetSender(req *http.Request) (future VirtualNetworkGatewaysResetFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ResetResponder handles the response to the Reset request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) ResetResponder(resp *http.Response) (result VirtualNetworkGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// SupportedVpnDevices gets a xml format representation for supported vpn devices. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. 
+func (client VirtualNetworkGatewaysClient) SupportedVpnDevices(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (result String, err error) { + req, err := client.SupportedVpnDevicesPreparer(ctx, resourceGroupName, virtualNetworkGatewayName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "SupportedVpnDevices", nil, "Failure preparing request") + return + } + + resp, err := client.SupportedVpnDevicesSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "SupportedVpnDevices", resp, "Failure sending request") + return + } + + result, err = client.SupportedVpnDevicesResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "SupportedVpnDevices", resp, "Failure responding to request") + } + + return +} + +// SupportedVpnDevicesPreparer prepares the SupportedVpnDevices request. +func (client VirtualNetworkGatewaysClient) SupportedVpnDevicesPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}/supportedvpndevices", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// SupportedVpnDevicesSender sends the SupportedVpnDevices request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) SupportedVpnDevicesSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// SupportedVpnDevicesResponder handles the response to the SupportedVpnDevices request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) SupportedVpnDevicesResponder(resp *http.Response) (result String, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result.Value), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// UpdateTags updates a virtual network gateway tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayName - the name of the virtual network gateway. +// parameters - parameters supplied to update virtual network gateway tags. 
+func (client VirtualNetworkGatewaysClient) UpdateTags(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters TagsObject) (result VirtualNetworkGatewaysUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, virtualNetworkGatewayName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "UpdateTags", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. +func (client VirtualNetworkGatewaysClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayName": autorest.Encode("path", virtualNetworkGatewayName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworkGateways/{virtualNetworkGatewayName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) UpdateTagsSender(req *http.Request) (future VirtualNetworkGatewaysUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. +func (client VirtualNetworkGatewaysClient) UpdateTagsResponder(resp *http.Response) (result VirtualNetworkGateway, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// VpnDeviceConfigurationScript gets a xml format representation for vpn device configuration script. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkGatewayConnectionName - the name of the virtual network gateway connection for which the +// configuration script is generated. +// parameters - parameters supplied to the generate vpn device script operation. 
+func (client VirtualNetworkGatewaysClient) VpnDeviceConfigurationScript(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VpnDeviceScriptParameters) (result String, err error) { + req, err := client.VpnDeviceConfigurationScriptPreparer(ctx, resourceGroupName, virtualNetworkGatewayConnectionName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "VpnDeviceConfigurationScript", nil, "Failure preparing request") + return + } + + resp, err := client.VpnDeviceConfigurationScriptSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "VpnDeviceConfigurationScript", resp, "Failure sending request") + return + } + + result, err = client.VpnDeviceConfigurationScriptResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkGatewaysClient", "VpnDeviceConfigurationScript", resp, "Failure responding to request") + } + + return +} + +// VpnDeviceConfigurationScriptPreparer prepares the VpnDeviceConfigurationScript request. +func (client VirtualNetworkGatewaysClient) VpnDeviceConfigurationScriptPreparer(ctx context.Context, resourceGroupName string, virtualNetworkGatewayConnectionName string, parameters VpnDeviceScriptParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkGatewayConnectionName": autorest.Encode("path", virtualNetworkGatewayConnectionName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/connections/{virtualNetworkGatewayConnectionName}/vpndeviceconfigurationscript", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// VpnDeviceConfigurationScriptSender sends the VpnDeviceConfigurationScript request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworkGatewaysClient) VpnDeviceConfigurationScriptSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// VpnDeviceConfigurationScriptResponder handles the response to the VpnDeviceConfigurationScript request. The method always +// closes the http.Response Body. 
+func (client VirtualNetworkGatewaysClient) VpnDeviceConfigurationScriptResponder(resp *http.Response) (result String, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result.Value), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkpeerings.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkpeerings.go similarity index 58% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkpeerings.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkpeerings.go index 27fafdfd0..8b6b0765d 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/network/virtualnetworkpeerings.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworkpeerings.go @@ -14,77 +14,56 @@ package network // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// VirtualNetworkPeeringsClient is the composite Swagger for Network Client +// VirtualNetworkPeeringsClient is the network Client type VirtualNetworkPeeringsClient struct { - ManagementClient + BaseClient } -// NewVirtualNetworkPeeringsClient creates an instance of the -// VirtualNetworkPeeringsClient client. +// NewVirtualNetworkPeeringsClient creates an instance of the VirtualNetworkPeeringsClient client. func NewVirtualNetworkPeeringsClient(subscriptionID string) VirtualNetworkPeeringsClient { return NewVirtualNetworkPeeringsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewVirtualNetworkPeeringsClientWithBaseURI creates an instance of the -// VirtualNetworkPeeringsClient client. +// NewVirtualNetworkPeeringsClientWithBaseURI creates an instance of the VirtualNetworkPeeringsClient client. func NewVirtualNetworkPeeringsClientWithBaseURI(baseURI string, subscriptionID string) VirtualNetworkPeeringsClient { return VirtualNetworkPeeringsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate creates or updates a peering in the specified virtual -// network. This method may poll for completion. Polling can be canceled by -// passing the cancel channel argument. The channel will be used to cancel -// polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. virtualNetworkPeeringName is the name of -// the peering. virtualNetworkPeeringParameters is parameters supplied to the -// create or update virtual network peering operation. 
-func (client VirtualNetworkPeeringsClient) CreateOrUpdate(resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string, virtualNetworkPeeringParameters VirtualNetworkPeering, cancel <-chan struct{}) (<-chan VirtualNetworkPeering, <-chan error) { - resultChan := make(chan VirtualNetworkPeering, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result VirtualNetworkPeering - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, virtualNetworkName, virtualNetworkPeeringName, virtualNetworkPeeringParameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } +// CreateOrUpdate creates or updates a peering in the specified virtual network. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// virtualNetworkPeeringName - the name of the peering. +// virtualNetworkPeeringParameters - parameters supplied to the create or update virtual network peering +// operation. +func (client VirtualNetworkPeeringsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string, virtualNetworkPeeringParameters VirtualNetworkPeering) (result VirtualNetworkPeeringsCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, virtualNetworkName, virtualNetworkPeeringName, virtualNetworkPeeringParameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
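Compared with the channel-based signature being removed above (result and error channels plus a cancel channel), CreateOrUpdate now takes a context and returns a VirtualNetworkPeeringsCreateOrUpdateFuture, matching the rest of this SDK version. A rough caller-side sketch of the new flow, with placeholder names; WaitForCompletion and the generated Result helper are assumptions about the go-autorest/AutoRest output of this vintage:

```
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network"
)

// ensurePeering waits for the long-running create/update to finish. The old
// API instead returned (<-chan VirtualNetworkPeering, <-chan error) and took
// a cancel channel in place of the context.
func ensurePeering(ctx context.Context, client network.VirtualNetworkPeeringsClient, params network.VirtualNetworkPeering) {
	future, err := client.CreateOrUpdate(ctx, "example-rg", "example-vnet", "example-peering", params)
	if err != nil {
		log.Fatal(err)
	}
	// Both helpers below are assumed from this SDK vintage: WaitForCompletion
	// polls until the operation is done, Result invokes the responder.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		log.Fatal(err)
	}
	if _, err := future.Result(client); err != nil {
		log.Fatal(err)
	}
}

func main() {
	client := network.NewVirtualNetworkPeeringsClient("<subscription-id>")
	// client.Authorizer is assumed to be configured before real use.
	ensurePeering(context.Background(), client, network.VirtualNetworkPeering{})
}
```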
-func (client VirtualNetworkPeeringsClient) CreateOrUpdatePreparer(resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string, virtualNetworkPeeringParameters VirtualNetworkPeering, cancel <-chan struct{}) (*http.Request, error) { +func (client VirtualNetworkPeeringsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string, virtualNetworkPeeringParameters VirtualNetworkPeering) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), @@ -92,27 +71,36 @@ func (client VirtualNetworkPeeringsClient) CreateOrUpdatePreparer(resourceGroupN "virtualNetworkPeeringName": autorest.Encode("path", virtualNetworkPeeringName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/virtualNetworkPeerings/{virtualNetworkPeeringName}", pathParameters), autorest.WithJSON(virtualNetworkPeeringParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client VirtualNetworkPeeringsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client VirtualNetworkPeeringsClient) CreateOrUpdateSender(req *http.Request) (future VirtualNetworkPeeringsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -128,49 +116,29 @@ func (client VirtualNetworkPeeringsClient) CreateOrUpdateResponder(resp *http.Re return } -// Delete deletes the specified virtual network peering. This method may poll -// for completion. Polling can be canceled by passing the cancel channel -// argument. The channel will be used to cancel polling and any outstanding -// HTTP requests. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. virtualNetworkPeeringName is the name of -// the virtual network peering. 
-func (client VirtualNetworkPeeringsClient) Delete(resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, virtualNetworkName, virtualNetworkPeeringName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "Delete", nil, "Failure preparing request") - return - } +// Delete deletes the specified virtual network peering. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// virtualNetworkPeeringName - the name of the virtual network peering. +func (client VirtualNetworkPeeringsClient) Delete(ctx context.Context, resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string) (result VirtualNetworkPeeringsDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, virtualNetworkName, virtualNetworkPeeringName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. 
-func (client VirtualNetworkPeeringsClient) DeletePreparer(resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string, cancel <-chan struct{}) (*http.Request, error) { +func (client VirtualNetworkPeeringsClient) DeletePreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), @@ -178,7 +146,7 @@ func (client VirtualNetworkPeeringsClient) DeletePreparer(resourceGroupName stri "virtualNetworkPeeringName": autorest.Encode("path", virtualNetworkPeeringName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -188,15 +156,24 @@ func (client VirtualNetworkPeeringsClient) DeletePreparer(resourceGroupName stri autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/virtualNetworkPeerings/{virtualNetworkPeeringName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client VirtualNetworkPeeringsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client VirtualNetworkPeeringsClient) DeleteSender(req *http.Request) (future VirtualNetworkPeeringsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -205,19 +182,19 @@ func (client VirtualNetworkPeeringsClient) DeleteResponder(resp *http.Response) err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusAccepted), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), autorest.ByClosing()) result.Response = resp return } // Get gets the specified virtual network peering. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. virtualNetworkPeeringName is the name of -// the virtual network peering. -func (client VirtualNetworkPeeringsClient) Get(resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string) (result VirtualNetworkPeering, err error) { - req, err := client.GetPreparer(resourceGroupName, virtualNetworkName, virtualNetworkPeeringName) +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// virtualNetworkPeeringName - the name of the virtual network peering. 
+func (client VirtualNetworkPeeringsClient) Get(ctx context.Context, resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string) (result VirtualNetworkPeering, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, virtualNetworkName, virtualNetworkPeeringName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "Get", nil, "Failure preparing request") return @@ -239,7 +216,7 @@ func (client VirtualNetworkPeeringsClient) Get(resourceGroupName string, virtual } // GetPreparer prepares the Get request. -func (client VirtualNetworkPeeringsClient) GetPreparer(resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string) (*http.Request, error) { +func (client VirtualNetworkPeeringsClient) GetPreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, virtualNetworkPeeringName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), @@ -247,7 +224,7 @@ func (client VirtualNetworkPeeringsClient) GetPreparer(resourceGroupName string, "virtualNetworkPeeringName": autorest.Encode("path", virtualNetworkPeeringName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -257,13 +234,14 @@ func (client VirtualNetworkPeeringsClient) GetPreparer(resourceGroupName string, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/virtualNetworkPeerings/{virtualNetworkPeeringName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkPeeringsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -280,11 +258,12 @@ func (client VirtualNetworkPeeringsClient) GetResponder(resp *http.Response) (re } // List gets all virtual network peerings in a virtual network. -// -// resourceGroupName is the name of the resource group. virtualNetworkName is -// the name of the virtual network. -func (client VirtualNetworkPeeringsClient) List(resourceGroupName string, virtualNetworkName string) (result VirtualNetworkPeeringListResult, err error) { - req, err := client.ListPreparer(resourceGroupName, virtualNetworkName) +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. 
+func (client VirtualNetworkPeeringsClient) List(ctx context.Context, resourceGroupName string, virtualNetworkName string) (result VirtualNetworkPeeringListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, virtualNetworkName) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "List", nil, "Failure preparing request") return @@ -292,12 +271,12 @@ func (client VirtualNetworkPeeringsClient) List(resourceGroupName string, virtua resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.vnplr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.vnplr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "List", resp, "Failure responding to request") } @@ -306,14 +285,14 @@ func (client VirtualNetworkPeeringsClient) List(resourceGroupName string, virtua } // ListPreparer prepares the List request. -func (client VirtualNetworkPeeringsClient) ListPreparer(resourceGroupName string, virtualNetworkName string) (*http.Request, error) { +func (client VirtualNetworkPeeringsClient) ListPreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), "virtualNetworkName": autorest.Encode("path", virtualNetworkName), } - const APIVersion = "2017-03-01" + const APIVersion = "2018-01-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -323,13 +302,14 @@ func (client VirtualNetworkPeeringsClient) ListPreparer(resourceGroupName string autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/virtualNetworkPeerings", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client VirtualNetworkPeeringsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -345,26 +325,29 @@ func (client VirtualNetworkPeeringsClient) ListResponder(resp *http.Response) (r return } -// ListNextResults retrieves the next set of results, if any. -func (client VirtualNetworkPeeringsClient) ListNextResults(lastResults VirtualNetworkPeeringListResult) (result VirtualNetworkPeeringListResult, err error) { - req, err := lastResults.VirtualNetworkPeeringListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client VirtualNetworkPeeringsClient) listNextResults(lastResults VirtualNetworkPeeringListResult) (result VirtualNetworkPeeringListResult, err error) { + req, err := lastResults.virtualNetworkPeeringListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "network.VirtualNetworkPeeringsClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualNetworkPeeringsClient) ListComplete(ctx context.Context, resourceGroupName string, virtualNetworkName string) (result VirtualNetworkPeeringListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, virtualNetworkName) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworks.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworks.go new file mode 100644 index 000000000..261bdb40d --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/virtualnetworks.go @@ -0,0 +1,678 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// VirtualNetworksClient is the network Client +type VirtualNetworksClient struct { + BaseClient +} + +// NewVirtualNetworksClient creates an instance of the VirtualNetworksClient client. +func NewVirtualNetworksClient(subscriptionID string) VirtualNetworksClient { + return NewVirtualNetworksClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewVirtualNetworksClientWithBaseURI creates an instance of the VirtualNetworksClient client. 
+func NewVirtualNetworksClientWithBaseURI(baseURI string, subscriptionID string) VirtualNetworksClient { + return VirtualNetworksClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CheckIPAddressAvailability checks whether a private IP address is available for use. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// IPAddress - the private IP address to be verified. +func (client VirtualNetworksClient) CheckIPAddressAvailability(ctx context.Context, resourceGroupName string, virtualNetworkName string, IPAddress string) (result IPAddressAvailabilityResult, err error) { + req, err := client.CheckIPAddressAvailabilityPreparer(ctx, resourceGroupName, virtualNetworkName, IPAddress) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CheckIPAddressAvailability", nil, "Failure preparing request") + return + } + + resp, err := client.CheckIPAddressAvailabilitySender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CheckIPAddressAvailability", resp, "Failure sending request") + return + } + + result, err = client.CheckIPAddressAvailabilityResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CheckIPAddressAvailability", resp, "Failure responding to request") + } + + return +} + +// CheckIPAddressAvailabilityPreparer prepares the CheckIPAddressAvailability request. +func (client VirtualNetworksClient) CheckIPAddressAvailabilityPreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, IPAddress string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkName": autorest.Encode("path", virtualNetworkName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(IPAddress) > 0 { + queryParameters["ipAddress"] = autorest.Encode("query", IPAddress) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/CheckIPAddressAvailability", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CheckIPAddressAvailabilitySender sends the CheckIPAddressAvailability request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworksClient) CheckIPAddressAvailabilitySender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// CheckIPAddressAvailabilityResponder handles the response to the CheckIPAddressAvailability request. The method always +// closes the http.Response Body. 
+func (client VirtualNetworksClient) CheckIPAddressAvailabilityResponder(resp *http.Response) (result IPAddressAvailabilityResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// CreateOrUpdate creates or updates a virtual network in the specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// parameters - parameters supplied to the create or update virtual network operation +func (client VirtualNetworksClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, virtualNetworkName string, parameters VirtualNetwork) (result VirtualNetworksCreateOrUpdateFuture, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, virtualNetworkName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. +func (client VirtualNetworksClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, parameters VirtualNetwork) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkName": autorest.Encode("path", virtualNetworkName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworksClient) CreateOrUpdateSender(req *http.Request) (future VirtualNetworksCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. 
+func (client VirtualNetworksClient) CreateOrUpdateResponder(resp *http.Response) (result VirtualNetwork, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified virtual network. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +func (client VirtualNetworksClient) Delete(ctx context.Context, resourceGroupName string, virtualNetworkName string) (result VirtualNetworksDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, virtualNetworkName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client VirtualNetworksClient) DeletePreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkName": autorest.Encode("path", virtualNetworkName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworksClient) DeleteSender(req *http.Request) (future VirtualNetworksDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client VirtualNetworksClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets the specified virtual network by resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// expand - expands referenced resources. 
+func (client VirtualNetworksClient) Get(ctx context.Context, resourceGroupName string, virtualNetworkName string, expand string) (result VirtualNetwork, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, virtualNetworkName, expand) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client VirtualNetworksClient) GetPreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, expand string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkName": autorest.Encode("path", virtualNetworkName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworksClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client VirtualNetworksClient) GetResponder(resp *http.Response) (result VirtualNetwork, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all virtual networks in a resource group. +// Parameters: +// resourceGroupName - the name of the resource group. 
+func (client VirtualNetworksClient) List(ctx context.Context, resourceGroupName string) (result VirtualNetworkListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.vnlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", resp, "Failure sending request") + return + } + + result.vnlr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client VirtualNetworksClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworksClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client VirtualNetworksClient) ListResponder(resp *http.Response) (result VirtualNetworkListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client VirtualNetworksClient) listNextResults(lastResults VirtualNetworkListResult) (result VirtualNetworkListResult, err error) { + req, err := lastResults.virtualNetworkListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client VirtualNetworksClient) ListComplete(ctx context.Context, resourceGroupName string) (result VirtualNetworkListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName) + return +} + +// ListAll gets all virtual networks in a subscription. +func (client VirtualNetworksClient) ListAll(ctx context.Context) (result VirtualNetworkListResultPage, err error) { + result.fn = client.listAllNextResults + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.vnlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", resp, "Failure sending request") + return + } + + result.vnlr, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. +func (client VirtualNetworksClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/virtualNetworks", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworksClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. +func (client VirtualNetworksClient) ListAllResponder(resp *http.Response) (result VirtualNetworkListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listAllNextResults retrieves the next set of results, if any. 
+func (client VirtualNetworksClient) listAllNextResults(lastResults VirtualNetworkListResult) (result VirtualNetworkListResult, err error) { + req, err := lastResults.virtualNetworkListResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listAllNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listAllNextResults", resp, "Failure sending next results request") + } + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listAllNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListAllComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualNetworksClient) ListAllComplete(ctx context.Context) (result VirtualNetworkListResultIterator, err error) { + result.page, err = client.ListAll(ctx) + return +} + +// ListUsage lists usage stats. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +func (client VirtualNetworksClient) ListUsage(ctx context.Context, resourceGroupName string, virtualNetworkName string) (result VirtualNetworkListUsageResultPage, err error) { + result.fn = client.listUsageNextResults + req, err := client.ListUsagePreparer(ctx, resourceGroupName, virtualNetworkName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListUsage", nil, "Failure preparing request") + return + } + + resp, err := client.ListUsageSender(req) + if err != nil { + result.vnlur.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListUsage", resp, "Failure sending request") + return + } + + result.vnlur, err = client.ListUsageResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "ListUsage", resp, "Failure responding to request") + } + + return +} + +// ListUsagePreparer prepares the ListUsage request. +func (client VirtualNetworksClient) ListUsagePreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkName": autorest.Encode("path", virtualNetworkName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/usages", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListUsageSender sends the ListUsage request. The method will close the +// http.Response Body if it receives an error. 
+func (client VirtualNetworksClient) ListUsageSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListUsageResponder handles the response to the ListUsage request. The method always +// closes the http.Response Body. +func (client VirtualNetworksClient) ListUsageResponder(resp *http.Response) (result VirtualNetworkListUsageResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listUsageNextResults retrieves the next set of results, if any. +func (client VirtualNetworksClient) listUsageNextResults(lastResults VirtualNetworkListUsageResult) (result VirtualNetworkListUsageResult, err error) { + req, err := lastResults.virtualNetworkListUsageResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listUsageNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListUsageSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listUsageNextResults", resp, "Failure sending next results request") + } + result, err = client.ListUsageResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "listUsageNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListUsageComplete enumerates all values, automatically crossing page boundaries as required. +func (client VirtualNetworksClient) ListUsageComplete(ctx context.Context, resourceGroupName string, virtualNetworkName string) (result VirtualNetworkListUsageResultIterator, err error) { + result.page, err = client.ListUsage(ctx, resourceGroupName, virtualNetworkName) + return +} + +// UpdateTags updates a virtual network tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// virtualNetworkName - the name of the virtual network. +// parameters - parameters supplied to update virtual network tags. +func (client VirtualNetworksClient) UpdateTags(ctx context.Context, resourceGroupName string, virtualNetworkName string, parameters TagsObject) (result VirtualNetworksUpdateTagsFuture, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, virtualNetworkName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "UpdateTags", nil, "Failure preparing request") + return + } + + result, err = client.UpdateTagsSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.VirtualNetworksClient", "UpdateTags", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. 
+func (client VirtualNetworksClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, virtualNetworkName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + "virtualNetworkName": autorest.Encode("path", virtualNetworkName), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client VirtualNetworksClient) UpdateTagsSender(req *http.Request) (future VirtualNetworksUpdateTagsFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. +func (client VirtualNetworksClient) UpdateTagsResponder(resp *http.Response) (result VirtualNetwork, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/watchers.go b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/watchers.go new file mode 100644 index 000000000..877534e4e --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network/watchers.go @@ -0,0 +1,1338 @@ +package network + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+ +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// WatchersClient is the network Client +type WatchersClient struct { + BaseClient +} + +// NewWatchersClient creates an instance of the WatchersClient client. +func NewWatchersClient(subscriptionID string) WatchersClient { + return NewWatchersClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewWatchersClientWithBaseURI creates an instance of the WatchersClient client. +func NewWatchersClientWithBaseURI(baseURI string, subscriptionID string) WatchersClient { + return WatchersClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CheckConnectivity verifies the possibility of establishing a direct TCP connection from a virtual machine to a given +// endpoint including another VM or an arbitrary remote server. +// Parameters: +// resourceGroupName - the name of the network watcher resource group. +// networkWatcherName - the name of the network watcher resource. +// parameters - parameters that determine how the connectivity check will be performed. +func (client WatchersClient) CheckConnectivity(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters ConnectivityParameters) (result WatchersCheckConnectivityFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.Source", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.Source.ResourceID", Name: validation.Null, Rule: true, Chain: nil}}}, + {Target: "parameters.Destination", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "CheckConnectivity", err.Error()) + } + + req, err := client.CheckConnectivityPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "CheckConnectivity", nil, "Failure preparing request") + return + } + + result, err = client.CheckConnectivitySender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "CheckConnectivity", result.Response(), "Failure sending request") + return + } + + return +} + +// CheckConnectivityPreparer prepares the CheckConnectivity request. 
+func (client WatchersClient) CheckConnectivityPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters ConnectivityParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/connectivityCheck", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CheckConnectivitySender sends the CheckConnectivity request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) CheckConnectivitySender(req *http.Request) (future WatchersCheckConnectivityFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CheckConnectivityResponder handles the response to the CheckConnectivity request. The method always +// closes the http.Response Body. +func (client WatchersClient) CheckConnectivityResponder(resp *http.Response) (result ConnectivityInformation, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// CreateOrUpdate creates or updates a network watcher in the specified resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// parameters - parameters that define the network watcher resource. +func (client WatchersClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters Watcher) (result Watcher, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + resp, err := client.CreateOrUpdateSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.WatchersClient", "CreateOrUpdate", resp, "Failure sending request") + return + } + + result, err = client.CreateOrUpdateResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "CreateOrUpdate", resp, "Failure responding to request") + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
+func (client WatchersClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters Watcher) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client WatchersClient) CreateOrUpdateResponder(resp *http.Response) (result Watcher, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes the specified network watcher resource. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +func (client WatchersClient) Delete(ctx context.Context, resourceGroupName string, networkWatcherName string) (result WatchersDeleteFuture, err error) { + req, err := client.DeletePreparer(ctx, resourceGroupName, networkWatcherName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. 
+func (client WatchersClient) DeletePreparer(ctx context.Context, resourceGroupName string, networkWatcherName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) DeleteSender(req *http.Request) (future WatchersDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client WatchersClient) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets the specified network watcher by resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +func (client WatchersClient) Get(ctx context.Context, resourceGroupName string, networkWatcherName string) (result Watcher, err error) { + req, err := client.GetPreparer(ctx, resourceGroupName, networkWatcherName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.WatchersClient", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. 
+func (client WatchersClient) GetPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client WatchersClient) GetResponder(resp *http.Response) (result Watcher, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetAzureReachabilityReport gets the relative latency score for internet service providers from a specified location +// to Azure regions. +// Parameters: +// resourceGroupName - the name of the network watcher resource group. +// networkWatcherName - the name of the network watcher resource. +// parameters - parameters that determine Azure reachability report configuration. +func (client WatchersClient) GetAzureReachabilityReport(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters AzureReachabilityReportParameters) (result WatchersGetAzureReachabilityReportFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.ProviderLocation", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.ProviderLocation.Country", Name: validation.Null, Rule: true, Chain: nil}}}, + {Target: "parameters.StartTime", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.EndTime", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "GetAzureReachabilityReport", err.Error()) + } + + req, err := client.GetAzureReachabilityReportPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetAzureReachabilityReport", nil, "Failure preparing request") + return + } + + result, err = client.GetAzureReachabilityReportSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetAzureReachabilityReport", result.Response(), "Failure sending request") + return + } + + return +} + +// GetAzureReachabilityReportPreparer prepares the GetAzureReachabilityReport request. 
+func (client WatchersClient) GetAzureReachabilityReportPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters AzureReachabilityReportParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/azureReachabilityReport", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetAzureReachabilityReportSender sends the GetAzureReachabilityReport request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) GetAzureReachabilityReportSender(req *http.Request) (future WatchersGetAzureReachabilityReportFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetAzureReachabilityReportResponder handles the response to the GetAzureReachabilityReport request. The method always +// closes the http.Response Body. +func (client WatchersClient) GetAzureReachabilityReportResponder(resp *http.Response) (result AzureReachabilityReport, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetFlowLogStatus queries status of flow log and traffic analytics (optional) on a specified resource. +// Parameters: +// resourceGroupName - the name of the network watcher resource group. +// networkWatcherName - the name of the network watcher resource. +// parameters - parameters that define a resource to query flow log and traffic analytics (optional) status. 
+func (client WatchersClient) GetFlowLogStatus(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters FlowLogStatusParameters) (result WatchersGetFlowLogStatusFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "GetFlowLogStatus", err.Error()) + } + + req, err := client.GetFlowLogStatusPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetFlowLogStatus", nil, "Failure preparing request") + return + } + + result, err = client.GetFlowLogStatusSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetFlowLogStatus", result.Response(), "Failure sending request") + return + } + + return +} + +// GetFlowLogStatusPreparer prepares the GetFlowLogStatus request. +func (client WatchersClient) GetFlowLogStatusPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters FlowLogStatusParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/queryFlowLogStatus", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetFlowLogStatusSender sends the GetFlowLogStatus request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) GetFlowLogStatusSender(req *http.Request) (future WatchersGetFlowLogStatusFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetFlowLogStatusResponder handles the response to the GetFlowLogStatus request. The method always +// closes the http.Response Body. +func (client WatchersClient) GetFlowLogStatusResponder(resp *http.Response) (result FlowLogInformation, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetNextHop gets the next hop from the specified VM. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. 
+// parameters - parameters that define the source and destination endpoint. +func (client WatchersClient) GetNextHop(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters NextHopParameters) (result WatchersGetNextHopFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.SourceIPAddress", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.DestinationIPAddress", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "GetNextHop", err.Error()) + } + + req, err := client.GetNextHopPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetNextHop", nil, "Failure preparing request") + return + } + + result, err = client.GetNextHopSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetNextHop", result.Response(), "Failure sending request") + return + } + + return +} + +// GetNextHopPreparer prepares the GetNextHop request. +func (client WatchersClient) GetNextHopPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters NextHopParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/nextHop", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetNextHopSender sends the GetNextHop request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) GetNextHopSender(req *http.Request) (future WatchersGetNextHopFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetNextHopResponder handles the response to the GetNextHop request. The method always +// closes the http.Response Body. +func (client WatchersClient) GetNextHopResponder(resp *http.Response) (result NextHopResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetTopology gets the current network topology by resource group. 
+// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// parameters - parameters that define the representation of topology. +func (client WatchersClient) GetTopology(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters TopologyParameters) (result Topology, err error) { + req, err := client.GetTopologyPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTopology", nil, "Failure preparing request") + return + } + + resp, err := client.GetTopologySender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTopology", resp, "Failure sending request") + return + } + + result, err = client.GetTopologyResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTopology", resp, "Failure responding to request") + } + + return +} + +// GetTopologyPreparer prepares the GetTopology request. +func (client WatchersClient) GetTopologyPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters TopologyParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/topology", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetTopologySender sends the GetTopology request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) GetTopologySender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetTopologyResponder handles the response to the GetTopology request. The method always +// closes the http.Response Body. +func (client WatchersClient) GetTopologyResponder(resp *http.Response) (result Topology, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetTroubleshooting initiate troubleshooting on a specified resource +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher resource. +// parameters - parameters that define the resource to troubleshoot. 
+func (client WatchersClient) GetTroubleshooting(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters TroubleshootingParameters) (result WatchersGetTroubleshootingFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.TroubleshootingProperties", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.TroubleshootingProperties.StorageID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.TroubleshootingProperties.StoragePath", Name: validation.Null, Rule: true, Chain: nil}, + }}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "GetTroubleshooting", err.Error()) + } + + req, err := client.GetTroubleshootingPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshooting", nil, "Failure preparing request") + return + } + + result, err = client.GetTroubleshootingSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshooting", result.Response(), "Failure sending request") + return + } + + return +} + +// GetTroubleshootingPreparer prepares the GetTroubleshooting request. +func (client WatchersClient) GetTroubleshootingPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters TroubleshootingParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/troubleshoot", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetTroubleshootingSender sends the GetTroubleshooting request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) GetTroubleshootingSender(req *http.Request) (future WatchersGetTroubleshootingFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetTroubleshootingResponder handles the response to the GetTroubleshooting request. The method always +// closes the http.Response Body. 
+func (client WatchersClient) GetTroubleshootingResponder(resp *http.Response) (result TroubleshootingResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetTroubleshootingResult get the last completed troubleshooting result on a specified resource +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher resource. +// parameters - parameters that define the resource to query the troubleshooting result. +func (client WatchersClient) GetTroubleshootingResult(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters QueryTroubleshootingParameters) (result WatchersGetTroubleshootingResultFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "GetTroubleshootingResult", err.Error()) + } + + req, err := client.GetTroubleshootingResultPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshootingResult", nil, "Failure preparing request") + return + } + + result, err = client.GetTroubleshootingResultSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetTroubleshootingResult", result.Response(), "Failure sending request") + return + } + + return +} + +// GetTroubleshootingResultPreparer prepares the GetTroubleshootingResult request. +func (client WatchersClient) GetTroubleshootingResultPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters QueryTroubleshootingParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/queryTroubleshootResult", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetTroubleshootingResultSender sends the GetTroubleshootingResult request. The method will close the +// http.Response Body if it receives an error. 
+func (client WatchersClient) GetTroubleshootingResultSender(req *http.Request) (future WatchersGetTroubleshootingResultFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetTroubleshootingResultResponder handles the response to the GetTroubleshootingResult request. The method always +// closes the http.Response Body. +func (client WatchersClient) GetTroubleshootingResultResponder(resp *http.Response) (result TroubleshootingResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetVMSecurityRules gets the configured and effective security group rules on the specified VM. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// parameters - parameters that define the VM to check security groups for. +func (client WatchersClient) GetVMSecurityRules(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters SecurityGroupViewParameters) (result WatchersGetVMSecurityRulesFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "GetVMSecurityRules", err.Error()) + } + + req, err := client.GetVMSecurityRulesPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetVMSecurityRules", nil, "Failure preparing request") + return + } + + result, err = client.GetVMSecurityRulesSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "GetVMSecurityRules", result.Response(), "Failure sending request") + return + } + + return +} + +// GetVMSecurityRulesPreparer prepares the GetVMSecurityRules request. 
+func (client WatchersClient) GetVMSecurityRulesPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters SecurityGroupViewParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/securityGroupView", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetVMSecurityRulesSender sends the GetVMSecurityRules request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) GetVMSecurityRulesSender(req *http.Request) (future WatchersGetVMSecurityRulesFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// GetVMSecurityRulesResponder handles the response to the GetVMSecurityRules request. The method always +// closes the http.Response Body. +func (client WatchersClient) GetVMSecurityRulesResponder(resp *http.Response) (result SecurityGroupViewResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List gets all network watchers by resource group. +// Parameters: +// resourceGroupName - the name of the resource group. +func (client WatchersClient) List(ctx context.Context, resourceGroupName string) (result WatcherListResult, err error) { + req, err := client.ListPreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.WatchersClient", "List", resp, "Failure sending request") + return + } + + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client WatchersClient) ListPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client WatchersClient) ListResponder(resp *http.Response) (result WatcherListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListAll gets all network watchers by subscription. +func (client WatchersClient) ListAll(ctx context.Context) (result WatcherListResult, err error) { + req, err := client.ListAllPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAll", nil, "Failure preparing request") + return + } + + resp, err := client.ListAllSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAll", resp, "Failure sending request") + return + } + + result, err = client.ListAllResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAll", resp, "Failure responding to request") + } + + return +} + +// ListAllPreparer prepares the ListAll request. +func (client WatchersClient) ListAllPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Network/networkWatchers", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAllSender sends the ListAll request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) ListAllSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListAllResponder handles the response to the ListAll request. The method always +// closes the http.Response Body. 
+func (client WatchersClient) ListAllResponder(resp *http.Response) (result WatcherListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ListAvailableProviders lists all available internet service providers for a specified Azure region. +// Parameters: +// resourceGroupName - the name of the network watcher resource group. +// networkWatcherName - the name of the network watcher resource. +// parameters - parameters that scope the list of available providers. +func (client WatchersClient) ListAvailableProviders(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters AvailableProvidersListParameters) (result WatchersListAvailableProvidersFuture, err error) { + req, err := client.ListAvailableProvidersPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAvailableProviders", nil, "Failure preparing request") + return + } + + result, err = client.ListAvailableProvidersSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "ListAvailableProviders", result.Response(), "Failure sending request") + return + } + + return +} + +// ListAvailableProvidersPreparer prepares the ListAvailableProviders request. +func (client WatchersClient) ListAvailableProvidersPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters AvailableProvidersListParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/availableProvidersList", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListAvailableProvidersSender sends the ListAvailableProviders request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) ListAvailableProvidersSender(req *http.Request) (future WatchersListAvailableProvidersFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ListAvailableProvidersResponder handles the response to the ListAvailableProviders request. The method always +// closes the http.Response Body. 
+func (client WatchersClient) ListAvailableProvidersResponder(resp *http.Response) (result AvailableProvidersList, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// SetFlowLogConfiguration configures flow log and traffic analytics (optional) on a specified resource. +// Parameters: +// resourceGroupName - the name of the network watcher resource group. +// networkWatcherName - the name of the network watcher resource. +// parameters - parameters that define the configuration of flow log. +func (client WatchersClient) SetFlowLogConfiguration(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters FlowLogInformation) (result WatchersSetFlowLogConfigurationFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.FlowLogProperties", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.FlowLogProperties.StorageID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.FlowLogProperties.Enabled", Name: validation.Null, Rule: true, Chain: nil}, + }}, + {Target: "parameters.TrafficAnalyticsProperties", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.TrafficAnalyticsProperties.NetworkWatcherFlowAnalyticsConfiguration", Name: validation.Null, Rule: true, + Chain: []validation.Constraint{{Target: "parameters.TrafficAnalyticsProperties.NetworkWatcherFlowAnalyticsConfiguration.Enabled", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.TrafficAnalyticsProperties.NetworkWatcherFlowAnalyticsConfiguration.WorkspaceID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.TrafficAnalyticsProperties.NetworkWatcherFlowAnalyticsConfiguration.WorkspaceRegion", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.TrafficAnalyticsProperties.NetworkWatcherFlowAnalyticsConfiguration.WorkspaceResourceID", Name: validation.Null, Rule: true, Chain: nil}, + }}, + }}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "SetFlowLogConfiguration", err.Error()) + } + + req, err := client.SetFlowLogConfigurationPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "SetFlowLogConfiguration", nil, "Failure preparing request") + return + } + + result, err = client.SetFlowLogConfigurationSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "SetFlowLogConfiguration", result.Response(), "Failure sending request") + return + } + + return +} + +// SetFlowLogConfigurationPreparer prepares the SetFlowLogConfiguration request. 
+func (client WatchersClient) SetFlowLogConfigurationPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters FlowLogInformation) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/configureFlowLog", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// SetFlowLogConfigurationSender sends the SetFlowLogConfiguration request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) SetFlowLogConfigurationSender(req *http.Request) (future WatchersSetFlowLogConfigurationFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// SetFlowLogConfigurationResponder handles the response to the SetFlowLogConfiguration request. The method always +// closes the http.Response Body. +func (client WatchersClient) SetFlowLogConfigurationResponder(resp *http.Response) (result FlowLogInformation, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// UpdateTags updates a network watcher tags. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// parameters - parameters supplied to update network watcher tags. +func (client WatchersClient) UpdateTags(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters TagsObject) (result Watcher, err error) { + req, err := client.UpdateTagsPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "UpdateTags", nil, "Failure preparing request") + return + } + + resp, err := client.UpdateTagsSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "network.WatchersClient", "UpdateTags", resp, "Failure sending request") + return + } + + result, err = client.UpdateTagsResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "UpdateTags", resp, "Failure responding to request") + } + + return +} + +// UpdateTagsPreparer prepares the UpdateTags request. 
+func (client WatchersClient) UpdateTagsPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters TagsObject) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateTagsSender sends the UpdateTags request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) UpdateTagsSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// UpdateTagsResponder handles the response to the UpdateTags request. The method always +// closes the http.Response Body. +func (client WatchersClient) UpdateTagsResponder(resp *http.Response) (result Watcher, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// VerifyIPFlow verify IP flow from the specified VM to a location given the currently configured NSG rules. +// Parameters: +// resourceGroupName - the name of the resource group. +// networkWatcherName - the name of the network watcher. +// parameters - parameters that define the IP flow to be verified. +func (client WatchersClient) VerifyIPFlow(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters VerificationIPFlowParameters) (result WatchersVerifyIPFlowFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.TargetResourceID", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.LocalPort", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.RemotePort", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.LocalIPAddress", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.RemoteIPAddress", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { + return result, validation.NewError("network.WatchersClient", "VerifyIPFlow", err.Error()) + } + + req, err := client.VerifyIPFlowPreparer(ctx, resourceGroupName, networkWatcherName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "VerifyIPFlow", nil, "Failure preparing request") + return + } + + result, err = client.VerifyIPFlowSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "network.WatchersClient", "VerifyIPFlow", result.Response(), "Failure sending request") + return + } + + return +} + +// VerifyIPFlowPreparer prepares the VerifyIPFlow request. 
+func (client WatchersClient) VerifyIPFlowPreparer(ctx context.Context, resourceGroupName string, networkWatcherName string, parameters VerificationIPFlowParameters) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "networkWatcherName": autorest.Encode("path", networkWatcherName), + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-01-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/ipFlowVerify", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// VerifyIPFlowSender sends the VerifyIPFlow request. The method will close the +// http.Response Body if it receives an error. +func (client WatchersClient) VerifyIPFlowSender(req *http.Request) (future WatchersVerifyIPFlowFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// VerifyIPFlowResponder handles the response to the VerifyIPFlow request. The method always +// closes the http.Response Body. +func (client WatchersClient) VerifyIPFlowResponder(resp *http.Response) (result VerificationIPFlowResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/client.go similarity index 65% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/client.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/client.go index 31a609091..0195a20b3 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/client.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/client.go @@ -1,9 +1,7 @@ -// Package subscriptions implements the Azure ARM Subscriptions service API -// version 2016-06-01. +// Package subscriptions implements the Azure ARM Subscriptions service API version 2016-06-01. // -// All resource groups and resources exist within subscriptions. These -// operation enable you get information about your subscriptions and tenants. A -// tenant is a dedicated instance of Azure Active Directory (Azure AD) for your +// All resource groups and resources exist within subscriptions. These operation enable you get information about your +// subscriptions and tenants. A tenant is a dedicated instance of Azure Active Directory (Azure AD) for your // organization. 
package subscriptions @@ -21,9 +19,8 @@ package subscriptions // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( "github.com/Azure/go-autorest/autorest" @@ -34,20 +31,20 @@ const ( DefaultBaseURI = "https://management.azure.com" ) -// ManagementClient is the base client for Subscriptions. -type ManagementClient struct { +// BaseClient is the base client for Subscriptions. +type BaseClient struct { autorest.Client BaseURI string } -// New creates an instance of the ManagementClient client. -func New() ManagementClient { +// New creates an instance of the BaseClient client. +func New() BaseClient { return NewWithBaseURI(DefaultBaseURI) } -// NewWithBaseURI creates an instance of the ManagementClient client. -func NewWithBaseURI(baseURI string) ManagementClient { - return ManagementClient{ +// NewWithBaseURI creates an instance of the BaseClient client. +func NewWithBaseURI(baseURI string) BaseClient { + return BaseClient{ Client: autorest.NewClientWithUserAgent(UserAgent()), BaseURI: baseURI, } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/models.go new file mode 100644 index 000000000..d7a3c9373 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/models.go @@ -0,0 +1,324 @@ +package subscriptions + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/to" + "net/http" +) + +// SpendingLimit enumerates the values for spending limit. +type SpendingLimit string + +const ( + // CurrentPeriodOff ... + CurrentPeriodOff SpendingLimit = "CurrentPeriodOff" + // Off ... + Off SpendingLimit = "Off" + // On ... + On SpendingLimit = "On" +) + +// PossibleSpendingLimitValues returns an array of possible values for the SpendingLimit const type. +func PossibleSpendingLimitValues() []SpendingLimit { + return []SpendingLimit{CurrentPeriodOff, Off, On} +} + +// State enumerates the values for state. +type State string + +const ( + // Deleted ... + Deleted State = "Deleted" + // Disabled ... + Disabled State = "Disabled" + // Enabled ... + Enabled State = "Enabled" + // PastDue ... + PastDue State = "PastDue" + // Warned ... 
+ Warned State = "Warned" +) + +// PossibleStateValues returns an array of possible values for the State const type. +func PossibleStateValues() []State { + return []State{Deleted, Disabled, Enabled, PastDue, Warned} +} + +// ListResult subscription list operation response. +type ListResult struct { + autorest.Response `json:"-"` + // Value - An array of subscriptions. + Value *[]Subscription `json:"value,omitempty"` + // NextLink - The URL to get the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ListResultIterator provides access to a complete listing of Subscription values. +type ListResultIterator struct { + i int + page ListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ListResultIterator) Response() ListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ListResultIterator) Value() Subscription { + if !iter.page.NotDone() { + return Subscription{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lr ListResult) IsEmpty() bool { + return lr.Value == nil || len(*lr.Value) == 0 +} + +// listResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lr ListResult) listResultPreparer() (*http.Request, error) { + if lr.NextLink == nil || len(to.String(lr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lr.NextLink))) +} + +// ListResultPage contains a page of Subscription values. +type ListResultPage struct { + fn func(ListResult) (ListResult, error) + lr ListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ListResultPage) Next() error { + next, err := page.fn(page.lr) + if err != nil { + return err + } + page.lr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ListResultPage) NotDone() bool { + return !page.lr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ListResultPage) Response() ListResult { + return page.lr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ListResultPage) Values() []Subscription { + if page.lr.IsEmpty() { + return nil + } + return *page.lr.Value +} + +// Location location information. +type Location struct { + // ID - The fully qualified ID of the location. For example, /subscriptions/00000000-0000-0000-0000-000000000000/locations/westus. + ID *string `json:"id,omitempty"` + // SubscriptionID - The subscription ID. 
+ SubscriptionID *string `json:"subscriptionId,omitempty"` + // Name - The location name. + Name *string `json:"name,omitempty"` + // DisplayName - The display name of the location. + DisplayName *string `json:"displayName,omitempty"` + // Latitude - The latitude of the location. + Latitude *string `json:"latitude,omitempty"` + // Longitude - The longitude of the location. + Longitude *string `json:"longitude,omitempty"` +} + +// LocationListResult location list operation response. +type LocationListResult struct { + autorest.Response `json:"-"` + // Value - An array of locations. + Value *[]Location `json:"value,omitempty"` +} + +// Policies subscription policies. +type Policies struct { + // LocationPlacementID - The subscription location placement ID. The ID indicates which regions are visible for a subscription. For example, a subscription with a location placement Id of Public_2014-09-01 has access to Azure public regions. + LocationPlacementID *string `json:"locationPlacementId,omitempty"` + // QuotaID - The subscription quota ID. + QuotaID *string `json:"quotaId,omitempty"` + // SpendingLimit - The subscription spending limit. Possible values include: 'On', 'Off', 'CurrentPeriodOff' + SpendingLimit SpendingLimit `json:"spendingLimit,omitempty"` +} + +// Subscription subscription information. +type Subscription struct { + autorest.Response `json:"-"` + // ID - The fully qualified ID for the subscription. For example, /subscriptions/00000000-0000-0000-0000-000000000000. + ID *string `json:"id,omitempty"` + // SubscriptionID - The subscription ID. + SubscriptionID *string `json:"subscriptionId,omitempty"` + // DisplayName - The subscription display name. + DisplayName *string `json:"displayName,omitempty"` + // State - The subscription state. Possible values are Enabled, Warned, PastDue, Disabled, and Deleted. Possible values include: 'Enabled', 'Warned', 'PastDue', 'Disabled', 'Deleted' + State State `json:"state,omitempty"` + // SubscriptionPolicies - The subscription policies. + SubscriptionPolicies *Policies `json:"subscriptionPolicies,omitempty"` + // AuthorizationSource - The authorization source of the request. Valid values are one or more combinations of Legacy, RoleBased, Bypassed, Direct and Management. For example, 'Legacy, RoleBased'. + AuthorizationSource *string `json:"authorizationSource,omitempty"` +} + +// TenantIDDescription tenant Id information. +type TenantIDDescription struct { + // ID - The fully qualified ID of the tenant. For example, /tenants/00000000-0000-0000-0000-000000000000. + ID *string `json:"id,omitempty"` + // TenantID - The tenant ID. For example, 00000000-0000-0000-0000-000000000000. + TenantID *string `json:"tenantId,omitempty"` +} + +// TenantListResult tenant Ids information. +type TenantListResult struct { + autorest.Response `json:"-"` + // Value - An array of tenants. + Value *[]TenantIDDescription `json:"value,omitempty"` + // NextLink - The URL to use for getting the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// TenantListResultIterator provides access to a complete listing of TenantIDDescription values. +type TenantListResultIterator struct { + i int + page TenantListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. 
+func (iter *TenantListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter TenantListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter TenantListResultIterator) Response() TenantListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter TenantListResultIterator) Value() TenantIDDescription { + if !iter.page.NotDone() { + return TenantIDDescription{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (tlr TenantListResult) IsEmpty() bool { + return tlr.Value == nil || len(*tlr.Value) == 0 +} + +// tenantListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (tlr TenantListResult) tenantListResultPreparer() (*http.Request, error) { + if tlr.NextLink == nil || len(to.String(tlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(tlr.NextLink))) +} + +// TenantListResultPage contains a page of TenantIDDescription values. +type TenantListResultPage struct { + fn func(TenantListResult) (TenantListResult, error) + tlr TenantListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *TenantListResultPage) Next() error { + next, err := page.fn(page.tlr) + if err != nil { + return err + } + page.tlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page TenantListResultPage) NotDone() bool { + return !page.tlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page TenantListResultPage) Response() TenantListResult { + return page.tlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page TenantListResultPage) Values() []TenantIDDescription { + if page.tlr.IsEmpty() { + return nil + } + return *page.tlr.Value +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/subscriptionsgroup.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/subscriptions.go similarity index 51% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/subscriptionsgroup.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/subscriptions.go index b8042f65d..ecbd69126 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/subscriptionsgroup.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/subscriptions.go @@ -14,61 +14,60 @@ package subscriptions // See the License for the specific language governing permissions and // limitations under the License. 
// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// GroupClient is the all resource groups and resources exist within -// subscriptions. These operation enable you get information about your -// subscriptions and tenants. A tenant is a dedicated instance of Azure Active -// Directory (Azure AD) for your organization. -type GroupClient struct { - ManagementClient +// Client is the all resource groups and resources exist within subscriptions. These operation enable you get +// information about your subscriptions and tenants. A tenant is a dedicated instance of Azure Active Directory (Azure +// AD) for your organization. +type Client struct { + BaseClient } -// NewGroupClient creates an instance of the GroupClient client. -func NewGroupClient() GroupClient { - return NewGroupClientWithBaseURI(DefaultBaseURI) +// NewClient creates an instance of the Client client. +func NewClient() Client { + return NewClientWithBaseURI(DefaultBaseURI) } -// NewGroupClientWithBaseURI creates an instance of the GroupClient client. -func NewGroupClientWithBaseURI(baseURI string) GroupClient { - return GroupClient{NewWithBaseURI(baseURI)} +// NewClientWithBaseURI creates an instance of the Client client. +func NewClientWithBaseURI(baseURI string) Client { + return Client{NewWithBaseURI(baseURI)} } // Get gets details about a specified subscription. -// -// subscriptionID is the ID of the target subscription. -func (client GroupClient) Get(subscriptionID string) (result Subscription, err error) { - req, err := client.GetPreparer(subscriptionID) +// Parameters: +// subscriptionID - the ID of the target subscription. +func (client Client) Get(ctx context.Context, subscriptionID string) (result Subscription, err error) { + req, err := client.GetPreparer(ctx, subscriptionID) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "Get", nil, "Failure preparing request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "Get", nil, "Failure preparing request") return } resp, err := client.GetSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "Get", resp, "Failure sending request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "Get", resp, "Failure sending request") return } result, err = client.GetResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "Get", resp, "Failure responding to request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "Get", resp, "Failure responding to request") } return } // GetPreparer prepares the Get request. 
-func (client GroupClient) GetPreparer(subscriptionID string) (*http.Request, error) { +func (client Client) GetPreparer(ctx context.Context, subscriptionID string) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", subscriptionID), } @@ -83,18 +82,19 @@ func (client GroupClient) GetPreparer(subscriptionID string) (*http.Request, err autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. -func (client GroupClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) +func (client Client) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always // closes the http.Response Body. -func (client GroupClient) GetResponder(resp *http.Response) (result Subscription, err error) { +func (client Client) GetResponder(resp *http.Response) (result Subscription, err error) { err = autorest.Respond( resp, client.ByInspecting(), @@ -106,30 +106,31 @@ func (client GroupClient) GetResponder(resp *http.Response) (result Subscription } // List gets all subscriptions for a tenant. -func (client GroupClient) List() (result ListResult, err error) { - req, err := client.ListPreparer() +func (client Client) List(ctx context.Context) (result ListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "List", nil, "Failure preparing request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "List", nil, "Failure preparing request") return } resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "List", resp, "Failure sending request") + result.lr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "subscriptions.Client", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.lr, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "List", resp, "Failure responding to request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "List", resp, "Failure responding to request") } return } // ListPreparer prepares the List request. -func (client GroupClient) ListPreparer() (*http.Request, error) { +func (client Client) ListPreparer(ctx context.Context) (*http.Request, error) { const APIVersion = "2016-06-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, @@ -140,18 +141,19 @@ func (client GroupClient) ListPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPath("/subscriptions"), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. 
-func (client GroupClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) +func (client Client) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always // closes the http.Response Body. -func (client GroupClient) ListResponder(resp *http.Response) (result ListResult, err error) { +func (client Client) ListResponder(resp *http.Response) (result ListResult, err error) { err = autorest.Respond( resp, client.ByInspecting(), @@ -162,59 +164,61 @@ func (client GroupClient) ListResponder(resp *http.Response) (result ListResult, return } -// ListNextResults retrieves the next set of results, if any. -func (client GroupClient) ListNextResults(lastResults ListResult) (result ListResult, err error) { - req, err := lastResults.ListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client Client) listNextResults(lastResults ListResult) (result ListResult, err error) { + req, err := lastResults.listResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "subscriptions.GroupClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "subscriptions.Client", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "subscriptions.GroupClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "subscriptions.Client", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "listNextResults", resp, "Failure responding to next results request") } - return } -// ListLocations this operation provides all the locations that are available -// for resource providers; however, each resource provider may support a subset -// of this list. -// -// subscriptionID is the ID of the target subscription. -func (client GroupClient) ListLocations(subscriptionID string) (result LocationListResult, err error) { - req, err := client.ListLocationsPreparer(subscriptionID) +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client Client) ListComplete(ctx context.Context) (result ListResultIterator, err error) { + result.page, err = client.List(ctx) + return +} + +// ListLocations this operation provides all the locations that are available for resource providers; however, each +// resource provider may support a subset of this list. +// Parameters: +// subscriptionID - the ID of the target subscription. 
+func (client Client) ListLocations(ctx context.Context, subscriptionID string) (result LocationListResult, err error) { + req, err := client.ListLocationsPreparer(ctx, subscriptionID) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "ListLocations", nil, "Failure preparing request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "ListLocations", nil, "Failure preparing request") return } resp, err := client.ListLocationsSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "ListLocations", resp, "Failure sending request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "ListLocations", resp, "Failure sending request") return } result, err = client.ListLocationsResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.GroupClient", "ListLocations", resp, "Failure responding to request") + err = autorest.NewErrorWithError(err, "subscriptions.Client", "ListLocations", resp, "Failure responding to request") } return } // ListLocationsPreparer prepares the ListLocations request. -func (client GroupClient) ListLocationsPreparer(subscriptionID string) (*http.Request, error) { +func (client Client) ListLocationsPreparer(ctx context.Context, subscriptionID string) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", subscriptionID), } @@ -229,18 +233,19 @@ func (client GroupClient) ListLocationsPreparer(subscriptionID string) (*http.Re autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/locations", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListLocationsSender sends the ListLocations request. The method will close the // http.Response Body if it receives an error. -func (client GroupClient) ListLocationsSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) +func (client Client) ListLocationsSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListLocationsResponder handles the response to the ListLocations request. The method always // closes the http.Response Body. -func (client GroupClient) ListLocationsResponder(resp *http.Response) (result LocationListResult, err error) { +func (client Client) ListLocationsResponder(resp *http.Response) (result LocationListResult, err error) { err = autorest.Respond( resp, client.ByInspecting(), diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/tenants.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/tenants.go similarity index 67% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/tenants.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/tenants.go index 35b2cd26d..dea3e725b 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/tenants.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/tenants.go @@ -14,22 +14,21 @@ package subscriptions // See the License for the specific language governing permissions and // limitations under the License. 
// -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// TenantsClient is the all resource groups and resources exist within -// subscriptions. These operation enable you get information about your -// subscriptions and tenants. A tenant is a dedicated instance of Azure Active -// Directory (Azure AD) for your organization. +// TenantsClient is the all resource groups and resources exist within subscriptions. These operation enable you get +// information about your subscriptions and tenants. A tenant is a dedicated instance of Azure Active Directory (Azure +// AD) for your organization. type TenantsClient struct { - ManagementClient + BaseClient } // NewTenantsClient creates an instance of the TenantsClient client. @@ -43,8 +42,9 @@ func NewTenantsClientWithBaseURI(baseURI string) TenantsClient { } // List gets the tenants for your account. -func (client TenantsClient) List() (result TenantListResult, err error) { - req, err := client.ListPreparer() +func (client TenantsClient) List(ctx context.Context) (result TenantListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "List", nil, "Failure preparing request") return @@ -52,12 +52,12 @@ func (client TenantsClient) List() (result TenantListResult, err error) { resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.tlr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.tlr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "List", resp, "Failure responding to request") } @@ -66,7 +66,7 @@ func (client TenantsClient) List() (result TenantListResult, err error) { } // ListPreparer prepares the List request. -func (client TenantsClient) ListPreparer() (*http.Request, error) { +func (client TenantsClient) ListPreparer(ctx context.Context) (*http.Request, error) { const APIVersion = "2016-06-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, @@ -77,13 +77,14 @@ func (client TenantsClient) ListPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPath("/tenants"), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client TenantsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) } // ListResponder handles the response to the List request. 
The method always @@ -99,26 +100,29 @@ func (client TenantsClient) ListResponder(resp *http.Response) (result TenantLis return } -// ListNextResults retrieves the next set of results, if any. -func (client TenantsClient) ListNextResults(lastResults TenantListResult) (result TenantListResult, err error) { - req, err := lastResults.TenantListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client TenantsClient) listNextResults(lastResults TenantListResult) (result TenantListResult, err error) { + req, err := lastResults.tenantListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "subscriptions.TenantsClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client TenantsClient) ListComplete(ctx context.Context) (result TenantListResultIterator, err error) { + result.page, err = client.List(ctx) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/version.go similarity index 79% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/version.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/version.go index d624dd980..599c1d099 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions/version.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions/version.go @@ -1,5 +1,7 @@ package subscriptions +import "github.com/Azure/azure-sdk-for-go/version" + // Copyright (c) Microsoft and contributors. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -14,16 +16,15 @@ package subscriptions // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. // UserAgent returns the UserAgent string to use when sending http.Requests. 
func UserAgent() string { - return "Azure-SDK-For-Go/v10.0.2-beta arm-subscriptions/2016-06-01" + return "Azure-SDK-For-Go/" + version.Number + " subscriptions/2016-06-01" } // Version returns the semantic version (see http://semver.org) of the client. func Version() string { - return "v10.0.2-beta" + return version.Number } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/client.go similarity index 72% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/client.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/client.go index 3e13c72f9..05184f90b 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/client.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/client.go @@ -1,5 +1,4 @@ -// Package resources implements the Azure ARM Resources service API version -// 2016-09-01. +// Package resources implements the Azure ARM Resources service API version 2018-02-01. // // Provides operations for working with resources and resource groups. package resources @@ -18,9 +17,8 @@ package resources // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( "github.com/Azure/go-autorest/autorest" @@ -31,21 +29,21 @@ const ( DefaultBaseURI = "https://management.azure.com" ) -// ManagementClient is the base client for Resources. -type ManagementClient struct { +// BaseClient is the base client for Resources. +type BaseClient struct { autorest.Client BaseURI string SubscriptionID string } -// New creates an instance of the ManagementClient client. -func New(subscriptionID string) ManagementClient { +// New creates an instance of the BaseClient client. +func New(subscriptionID string) BaseClient { return NewWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewWithBaseURI creates an instance of the ManagementClient client. -func NewWithBaseURI(baseURI string, subscriptionID string) ManagementClient { - return ManagementClient{ +// NewWithBaseURI creates an instance of the BaseClient client. 
+func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient { + return BaseClient{ Client: autorest.NewClientWithUserAgent(UserAgent()), BaseURI: baseURI, SubscriptionID: subscriptionID, diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/deploymentoperations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/deploymentoperations.go similarity index 70% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/deploymentoperations.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/deploymentoperations.go index 34d12aac6..4b0be23e8 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/deploymentoperations.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/deploymentoperations.go @@ -14,41 +14,38 @@ package resources // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// DeploymentOperationsClient is the provides operations for working with -// resources and resource groups. +// DeploymentOperationsClient is the provides operations for working with resources and resource groups. type DeploymentOperationsClient struct { - ManagementClient + BaseClient } -// NewDeploymentOperationsClient creates an instance of the -// DeploymentOperationsClient client. +// NewDeploymentOperationsClient creates an instance of the DeploymentOperationsClient client. func NewDeploymentOperationsClient(subscriptionID string) DeploymentOperationsClient { return NewDeploymentOperationsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewDeploymentOperationsClientWithBaseURI creates an instance of the -// DeploymentOperationsClient client. +// NewDeploymentOperationsClientWithBaseURI creates an instance of the DeploymentOperationsClient client. func NewDeploymentOperationsClientWithBaseURI(baseURI string, subscriptionID string) DeploymentOperationsClient { return DeploymentOperationsClient{NewWithBaseURI(baseURI, subscriptionID)} } // Get gets a deployments operation. -// -// resourceGroupName is the name of the resource group. The name is case -// insensitive. deploymentName is the name of the deployment. operationID is -// the ID of the operation to get. -func (client DeploymentOperationsClient) Get(resourceGroupName string, deploymentName string, operationID string) (result DeploymentOperation, err error) { +// Parameters: +// resourceGroupName - the name of the resource group. The name is case insensitive. +// deploymentName - the name of the deployment. +// operationID - the ID of the operation to get. 
+func (client DeploymentOperationsClient) Get(ctx context.Context, resourceGroupName string, deploymentName string, operationID string) (result DeploymentOperation, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -58,10 +55,10 @@ func (client DeploymentOperationsClient) Get(resourceGroupName string, deploymen Constraints: []validation.Constraint{{Target: "deploymentName", Name: validation.MaxLength, Rule: 64, Chain: nil}, {Target: "deploymentName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "deploymentName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentOperationsClient", "Get") + return result, validation.NewError("resources.DeploymentOperationsClient", "Get", err.Error()) } - req, err := client.GetPreparer(resourceGroupName, deploymentName, operationID) + req, err := client.GetPreparer(ctx, resourceGroupName, deploymentName, operationID) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "Get", nil, "Failure preparing request") return @@ -83,7 +80,7 @@ func (client DeploymentOperationsClient) Get(resourceGroupName string, deploymen } // GetPreparer prepares the Get request. -func (client DeploymentOperationsClient) GetPreparer(resourceGroupName string, deploymentName string, operationID string) (*http.Request, error) { +func (client DeploymentOperationsClient) GetPreparer(ctx context.Context, resourceGroupName string, deploymentName string, operationID string) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "operationId": autorest.Encode("path", operationID), @@ -91,7 +88,7 @@ func (client DeploymentOperationsClient) GetPreparer(resourceGroupName string, d "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -101,13 +98,14 @@ func (client DeploymentOperationsClient) GetPreparer(resourceGroupName string, d autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/deployments/{deploymentName}/operations/{operationId}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client DeploymentOperationsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -124,11 +122,11 @@ func (client DeploymentOperationsClient) GetResponder(resp *http.Response) (resu } // List gets all deployments operations for a deployment. -// -// resourceGroupName is the name of the resource group. The name is case -// insensitive. deploymentName is the name of the deployment with the operation -// to get. top is the number of results to return. 
-func (client DeploymentOperationsClient) List(resourceGroupName string, deploymentName string, top *int32) (result DeploymentOperationsListResult, err error) { +// Parameters: +// resourceGroupName - the name of the resource group. The name is case insensitive. +// deploymentName - the name of the deployment with the operation to get. +// top - the number of results to return. +func (client DeploymentOperationsClient) List(ctx context.Context, resourceGroupName string, deploymentName string, top *int32) (result DeploymentOperationsListResultPage, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -138,10 +136,11 @@ func (client DeploymentOperationsClient) List(resourceGroupName string, deployme Constraints: []validation.Constraint{{Target: "deploymentName", Name: validation.MaxLength, Rule: 64, Chain: nil}, {Target: "deploymentName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "deploymentName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentOperationsClient", "List") + return result, validation.NewError("resources.DeploymentOperationsClient", "List", err.Error()) } - req, err := client.ListPreparer(resourceGroupName, deploymentName, top) + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, resourceGroupName, deploymentName, top) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "List", nil, "Failure preparing request") return @@ -149,12 +148,12 @@ func (client DeploymentOperationsClient) List(resourceGroupName string, deployme resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.dolr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.dolr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "List", resp, "Failure responding to request") } @@ -163,14 +162,14 @@ func (client DeploymentOperationsClient) List(resourceGroupName string, deployme } // ListPreparer prepares the List request. 
-func (client DeploymentOperationsClient) ListPreparer(resourceGroupName string, deploymentName string, top *int32) (*http.Request, error) { +func (client DeploymentOperationsClient) ListPreparer(ctx context.Context, resourceGroupName string, deploymentName string, top *int32) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -183,13 +182,14 @@ func (client DeploymentOperationsClient) ListPreparer(resourceGroupName string, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/deployments/{deploymentName}/operations", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client DeploymentOperationsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -205,26 +205,29 @@ func (client DeploymentOperationsClient) ListResponder(resp *http.Response) (res return } -// ListNextResults retrieves the next set of results, if any. -func (client DeploymentOperationsClient) ListNextResults(lastResults DeploymentOperationsListResult) (result DeploymentOperationsListResult, err error) { - req, err := lastResults.DeploymentOperationsListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client DeploymentOperationsClient) listNextResults(lastResults DeploymentOperationsListResult) (result DeploymentOperationsListResult, err error) { + req, err := lastResults.deploymentOperationsListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "resources.DeploymentOperationsClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client DeploymentOperationsClient) ListComplete(ctx context.Context, resourceGroupName string, deploymentName string, top *int32) (result DeploymentOperationsListResultIterator, err error) { + result.page, err = client.List(ctx, resourceGroupName, deploymentName, top) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/deployments.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/deployments.go similarity index 66% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/deployments.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/deployments.go index 7c3e19288..886129be8 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/deployments.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/deployments.go @@ -14,21 +14,20 @@ package resources // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// DeploymentsClient is the provides operations for working with resources and -// resource groups. +// DeploymentsClient is the provides operations for working with resources and resource groups. type DeploymentsClient struct { - ManagementClient + BaseClient } // NewDeploymentsClient creates an instance of the DeploymentsClient client. @@ -36,20 +35,18 @@ func NewDeploymentsClient(subscriptionID string) DeploymentsClient { return NewDeploymentsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewDeploymentsClientWithBaseURI creates an instance of the DeploymentsClient -// client. +// NewDeploymentsClientWithBaseURI creates an instance of the DeploymentsClient client. func NewDeploymentsClientWithBaseURI(baseURI string, subscriptionID string) DeploymentsClient { return DeploymentsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// Cancel you can cancel a deployment only if the provisioningState is Accepted -// or Running. After the deployment is canceled, the provisioningState is set -// to Canceled. Canceling a template deployment stops the currently running +// Cancel you can cancel a deployment only if the provisioningState is Accepted or Running. After the deployment is +// canceled, the provisioningState is set to Canceled. Canceling a template deployment stops the currently running // template deployment and leaves the resource group partially deployed. -// -// resourceGroupName is the name of the resource group. The name is case -// insensitive. deploymentName is the name of the deployment to cancel. -func (client DeploymentsClient) Cancel(resourceGroupName string, deploymentName string) (result autorest.Response, err error) { +// Parameters: +// resourceGroupName - the name of the resource group. The name is case insensitive. +// deploymentName - the name of the deployment to cancel. 
+func (client DeploymentsClient) Cancel(ctx context.Context, resourceGroupName string, deploymentName string) (result autorest.Response, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -59,10 +56,10 @@ func (client DeploymentsClient) Cancel(resourceGroupName string, deploymentName Constraints: []validation.Constraint{{Target: "deploymentName", Name: validation.MaxLength, Rule: 64, Chain: nil}, {Target: "deploymentName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "deploymentName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "Cancel") + return result, validation.NewError("resources.DeploymentsClient", "Cancel", err.Error()) } - req, err := client.CancelPreparer(resourceGroupName, deploymentName) + req, err := client.CancelPreparer(ctx, resourceGroupName, deploymentName) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Cancel", nil, "Failure preparing request") return @@ -84,14 +81,14 @@ func (client DeploymentsClient) Cancel(resourceGroupName string, deploymentName } // CancelPreparer prepares the Cancel request. -func (client DeploymentsClient) CancelPreparer(resourceGroupName string, deploymentName string) (*http.Request, error) { +func (client DeploymentsClient) CancelPreparer(ctx context.Context, resourceGroupName string, deploymentName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -101,13 +98,14 @@ func (client DeploymentsClient) CancelPreparer(resourceGroupName string, deploym autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}/cancel", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CancelSender sends the Cancel request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) CancelSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CancelResponder handles the response to the Cancel request. The method always @@ -123,11 +121,11 @@ func (client DeploymentsClient) CancelResponder(resp *http.Response) (result aut } // CheckExistence checks whether the deployment exists. -// -// resourceGroupName is the name of the resource group with the deployment to -// check. The name is case insensitive. deploymentName is the name of the -// deployment to check. -func (client DeploymentsClient) CheckExistence(resourceGroupName string, deploymentName string) (result autorest.Response, err error) { +// Parameters: +// resourceGroupName - the name of the resource group with the deployment to check. The name is case +// insensitive. 
+// deploymentName - the name of the deployment to check. +func (client DeploymentsClient) CheckExistence(ctx context.Context, resourceGroupName string, deploymentName string) (result autorest.Response, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -137,10 +135,10 @@ func (client DeploymentsClient) CheckExistence(resourceGroupName string, deploym Constraints: []validation.Constraint{{Target: "deploymentName", Name: validation.MaxLength, Rule: 64, Chain: nil}, {Target: "deploymentName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "deploymentName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "CheckExistence") + return result, validation.NewError("resources.DeploymentsClient", "CheckExistence", err.Error()) } - req, err := client.CheckExistencePreparer(resourceGroupName, deploymentName) + req, err := client.CheckExistencePreparer(ctx, resourceGroupName, deploymentName) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CheckExistence", nil, "Failure preparing request") return @@ -162,14 +160,14 @@ func (client DeploymentsClient) CheckExistence(resourceGroupName string, deploym } // CheckExistencePreparer prepares the CheckExistence request. -func (client DeploymentsClient) CheckExistencePreparer(resourceGroupName string, deploymentName string) (*http.Request, error) { +func (client DeploymentsClient) CheckExistencePreparer(ctx context.Context, resourceGroupName string, deploymentName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -179,13 +177,14 @@ func (client DeploymentsClient) CheckExistencePreparer(resourceGroupName string, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CheckExistenceSender sends the CheckExistence request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) CheckExistenceSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CheckExistenceResponder handles the response to the CheckExistence request. The method always @@ -200,18 +199,13 @@ func (client DeploymentsClient) CheckExistenceResponder(resp *http.Response) (re return } -// CreateOrUpdate you can provide the template and parameters directly in the -// request or link to JSON files. This method may poll for completion. Polling -// can be canceled by passing the cancel channel argument. The channel will be -// used to cancel polling and any outstanding HTTP requests. 
-// -// resourceGroupName is the name of the resource group to deploy the resources -// to. The name is case insensitive. The resource group must already exist. -// deploymentName is the name of the deployment. parameters is additional -// parameters supplied to the operation. -func (client DeploymentsClient) CreateOrUpdate(resourceGroupName string, deploymentName string, parameters Deployment, cancel <-chan struct{}) (<-chan DeploymentExtended, <-chan error) { - resultChan := make(chan DeploymentExtended, 1) - errChan := make(chan error, 1) +// CreateOrUpdate you can provide the template and parameters directly in the request or link to JSON files. +// Parameters: +// resourceGroupName - the name of the resource group to deploy the resources to. The name is case insensitive. +// The resource group must already exist. +// deploymentName - the name of the deployment. +// parameters - additional parameters supplied to the operation. +func (client DeploymentsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, deploymentName string, parameters Deployment) (result DeploymentsCreateOrUpdateFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -228,71 +222,62 @@ func (client DeploymentsClient) CreateOrUpdate(resourceGroupName string, deploym {Target: "parameters.Properties.ParametersLink", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.Properties.ParametersLink.URI", Name: validation.Null, Rule: true, Chain: nil}}}, }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "CreateOrUpdate") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("resources.DeploymentsClient", "CreateOrUpdate", err.Error()) } - go func() { - var err error - var result DeploymentExtended - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreateOrUpdatePreparer(resourceGroupName, deploymentName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CreateOrUpdate", nil, "Failure preparing request") - return - } + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, deploymentName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CreateOrUpdate", nil, "Failure preparing request") + return + } - resp, err := client.CreateOrUpdateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CreateOrUpdate", resp, "Failure sending request") - return - } + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } - result, err = client.CreateOrUpdateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "CreateOrUpdate", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client DeploymentsClient) CreateOrUpdatePreparer(resourceGroupName string, deploymentName string, parameters Deployment, cancel <-chan struct{}) (*http.Request, error) { +func (client DeploymentsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, deploymentName string, parameters Deployment) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. -func (client DeploymentsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client DeploymentsClient) CreateOrUpdateSender(req *http.Request) (future DeploymentsCreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -308,26 +293,18 @@ func (client DeploymentsClient) CreateOrUpdateResponder(resp *http.Response) (re return } -// Delete a template deployment that is currently running cannot be deleted. -// Deleting a template deployment removes the associated deployment operations. -// Deleting a template deployment does not affect the state of the resource -// group. This is an asynchronous operation that returns a status of 202 until -// the template deployment is successfully deleted. The Location response -// header contains the URI that is used to obtain the status of the process. -// While the process is running, a call to the URI in the Location header -// returns a status of 202. When the process finishes, the URI in the Location -// header returns a status of 204 on success. If the asynchronous request -// failed, the URI in the Location header returns an error-level status code. -// This method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group with the deployment to -// delete. The name is case insensitive. deploymentName is the name of the -// deployment to delete. 
-func (client DeploymentsClient) Delete(resourceGroupName string, deploymentName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) +// Delete a template deployment that is currently running cannot be deleted. Deleting a template deployment removes the +// associated deployment operations. Deleting a template deployment does not affect the state of the resource group. +// This is an asynchronous operation that returns a status of 202 until the template deployment is successfully +// deleted. The Location response header contains the URI that is used to obtain the status of the process. While the +// process is running, a call to the URI in the Location header returns a status of 202. When the process finishes, the +// URI in the Location header returns a status of 204 on success. If the asynchronous request failed, the URI in the +// Location header returns an error-level status code. +// Parameters: +// resourceGroupName - the name of the resource group with the deployment to delete. The name is case +// insensitive. +// deploymentName - the name of the deployment to delete. +func (client DeploymentsClient) Delete(ctx context.Context, resourceGroupName string, deploymentName string) (result DeploymentsDeleteFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -337,51 +314,33 @@ func (client DeploymentsClient) Delete(resourceGroupName string, deploymentName Constraints: []validation.Constraint{{Target: "deploymentName", Name: validation.MaxLength, Rule: 64, Chain: nil}, {Target: "deploymentName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "deploymentName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "Delete") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("resources.DeploymentsClient", "Delete", err.Error()) } - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, deploymentName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Delete", nil, "Failure preparing request") - return - } + req, err := client.DeletePreparer(ctx, resourceGroupName, deploymentName) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. 
-func (client DeploymentsClient) DeletePreparer(resourceGroupName string, deploymentName string, cancel <-chan struct{}) (*http.Request, error) { +func (client DeploymentsClient) DeletePreparer(ctx context.Context, resourceGroupName string, deploymentName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -391,15 +350,24 @@ func (client DeploymentsClient) DeletePreparer(resourceGroupName string, deploym autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. -func (client DeploymentsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client DeploymentsClient) DeleteSender(req *http.Request) (future DeploymentsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -415,11 +383,10 @@ func (client DeploymentsClient) DeleteResponder(resp *http.Response) (result aut } // ExportTemplate exports the template used for specified deployment. -// -// resourceGroupName is the name of the resource group. The name is case -// insensitive. deploymentName is the name of the deployment from which to get -// the template. -func (client DeploymentsClient) ExportTemplate(resourceGroupName string, deploymentName string) (result DeploymentExportResult, err error) { +// Parameters: +// resourceGroupName - the name of the resource group. The name is case insensitive. +// deploymentName - the name of the deployment from which to get the template. 
+func (client DeploymentsClient) ExportTemplate(ctx context.Context, resourceGroupName string, deploymentName string) (result DeploymentExportResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -429,10 +396,10 @@ func (client DeploymentsClient) ExportTemplate(resourceGroupName string, deploym Constraints: []validation.Constraint{{Target: "deploymentName", Name: validation.MaxLength, Rule: 64, Chain: nil}, {Target: "deploymentName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "deploymentName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "ExportTemplate") + return result, validation.NewError("resources.DeploymentsClient", "ExportTemplate", err.Error()) } - req, err := client.ExportTemplatePreparer(resourceGroupName, deploymentName) + req, err := client.ExportTemplatePreparer(ctx, resourceGroupName, deploymentName) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "ExportTemplate", nil, "Failure preparing request") return @@ -454,14 +421,14 @@ func (client DeploymentsClient) ExportTemplate(resourceGroupName string, deploym } // ExportTemplatePreparer prepares the ExportTemplate request. -func (client DeploymentsClient) ExportTemplatePreparer(resourceGroupName string, deploymentName string) (*http.Request, error) { +func (client DeploymentsClient) ExportTemplatePreparer(ctx context.Context, resourceGroupName string, deploymentName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -471,13 +438,14 @@ func (client DeploymentsClient) ExportTemplatePreparer(resourceGroupName string, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}/exportTemplate", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ExportTemplateSender sends the ExportTemplate request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) ExportTemplateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ExportTemplateResponder handles the response to the ExportTemplate request. The method always @@ -494,10 +462,10 @@ func (client DeploymentsClient) ExportTemplateResponder(resp *http.Response) (re } // Get gets a deployment. -// -// resourceGroupName is the name of the resource group. The name is case -// insensitive. deploymentName is the name of the deployment to get. -func (client DeploymentsClient) Get(resourceGroupName string, deploymentName string) (result DeploymentExtended, err error) { +// Parameters: +// resourceGroupName - the name of the resource group. 
The name is case insensitive. +// deploymentName - the name of the deployment to get. +func (client DeploymentsClient) Get(ctx context.Context, resourceGroupName string, deploymentName string) (result DeploymentExtended, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -507,10 +475,10 @@ func (client DeploymentsClient) Get(resourceGroupName string, deploymentName str Constraints: []validation.Constraint{{Target: "deploymentName", Name: validation.MaxLength, Rule: 64, Chain: nil}, {Target: "deploymentName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "deploymentName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "Get") + return result, validation.NewError("resources.DeploymentsClient", "Get", err.Error()) } - req, err := client.GetPreparer(resourceGroupName, deploymentName) + req, err := client.GetPreparer(ctx, resourceGroupName, deploymentName) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Get", nil, "Failure preparing request") return @@ -532,14 +500,14 @@ func (client DeploymentsClient) Get(resourceGroupName string, deploymentName str } // GetPreparer prepares the Get request. -func (client DeploymentsClient) GetPreparer(resourceGroupName string, deploymentName string) (*http.Request, error) { +func (client DeploymentsClient) GetPreparer(ctx context.Context, resourceGroupName string, deploymentName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -549,13 +517,14 @@ func (client DeploymentsClient) GetPreparer(resourceGroupName string, deployment autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -571,51 +540,52 @@ func (client DeploymentsClient) GetResponder(resp *http.Response) (result Deploy return } -// List get all the deployments for a resource group. -// -// resourceGroupName is the name of the resource group with the deployments to -// get. The name is case insensitive. filter is the filter to apply on the -// operation. For example, you can use $filter=provisioningState eq '{state}'. -// top is the number of results to get. If null is passed, returns all -// deployments. 
-func (client DeploymentsClient) List(resourceGroupName string, filter string, top *int32) (result DeploymentListResult, err error) { +// ListByResourceGroup get all the deployments for a resource group. +// Parameters: +// resourceGroupName - the name of the resource group with the deployments to get. The name is case +// insensitive. +// filter - the filter to apply on the operation. For example, you can use $filter=provisioningState eq +// '{state}'. +// top - the number of results to get. If null is passed, returns all deployments. +func (client DeploymentsClient) ListByResourceGroup(ctx context.Context, resourceGroupName string, filter string, top *int32) (result DeploymentListResultPage, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "List") + return result, validation.NewError("resources.DeploymentsClient", "ListByResourceGroup", err.Error()) } - req, err := client.ListPreparer(resourceGroupName, filter, top) + result.fn = client.listByResourceGroupNextResults + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName, filter, top) if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "List", nil, "Failure preparing request") + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "ListByResourceGroup", nil, "Failure preparing request") return } - resp, err := client.ListSender(req) + resp, err := client.ListByResourceGroupSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "List", resp, "Failure sending request") + result.dlr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "ListByResourceGroup", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.dlr, err = client.ListByResourceGroupResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "List", resp, "Failure responding to request") + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "ListByResourceGroup", resp, "Failure responding to request") } return } -// ListPreparer prepares the List request. -func (client DeploymentsClient) ListPreparer(resourceGroupName string, filter string, top *int32) (*http.Request, error) { +// ListByResourceGroupPreparer prepares the ListByResourceGroup request. 
+func (client DeploymentsClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string, filter string, top *int32) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -631,18 +601,19 @@ func (client DeploymentsClient) ListPreparer(resourceGroupName string, filter st autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } -// ListSender sends the List request. The method will close the +// ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the // http.Response Body if it receives an error. -func (client DeploymentsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) +func (client DeploymentsClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } -// ListResponder handles the response to the List request. The method always +// ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always // closes the http.Response Body. -func (client DeploymentsClient) ListResponder(resp *http.Response) (result DeploymentListResult, err error) { +func (client DeploymentsClient) ListByResourceGroupResponder(resp *http.Response) (result DeploymentListResult, err error) { err = autorest.Respond( resp, client.ByInspecting(), @@ -653,37 +624,41 @@ func (client DeploymentsClient) ListResponder(resp *http.Response) (result Deplo return } -// ListNextResults retrieves the next set of results, if any. -func (client DeploymentsClient) ListNextResults(lastResults DeploymentListResult) (result DeploymentListResult, err error) { - req, err := lastResults.DeploymentListResultPreparer() +// listByResourceGroupNextResults retrieves the next set of results, if any. 
+func (client DeploymentsClient) listByResourceGroupNextResults(lastResults DeploymentListResult) (result DeploymentListResult, err error) { + req, err := lastResults.deploymentListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "resources.DeploymentsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "resources.DeploymentsClient", "listByResourceGroupNextResults", nil, "Failure preparing next results request") } if req == nil { return } - - resp, err := client.ListSender(req) + resp, err := client.ListByResourceGroupSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "resources.DeploymentsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "resources.DeploymentsClient", "listByResourceGroupNextResults", resp, "Failure sending next results request") } - - result, err = client.ListResponder(resp) + result, err = client.ListByResourceGroupResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "listByResourceGroupNextResults", resp, "Failure responding to next results request") } - return } -// Validate validates whether the specified template is syntactically correct -// and will be accepted by Azure Resource Manager.. -// -// resourceGroupName is the name of the resource group the template will be -// deployed to. The name is case insensitive. deploymentName is the name of the -// deployment. parameters is parameters to validate. -func (client DeploymentsClient) Validate(resourceGroupName string, deploymentName string, parameters Deployment) (result DeploymentValidateResult, err error) { +// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required. +func (client DeploymentsClient) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string, filter string, top *int32) (result DeploymentListResultIterator, err error) { + result.page, err = client.ListByResourceGroup(ctx, resourceGroupName, filter, top) + return +} + +// Validate validates whether the specified template is syntactically correct and will be accepted by Azure Resource +// Manager.. +// Parameters: +// resourceGroupName - the name of the resource group the template will be deployed to. The name is case +// insensitive. +// deploymentName - the name of the deployment. +// parameters - parameters to validate. 
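These hunks rename List to ListByResourceGroup, return a DeploymentListResultPage instead of a bare list result, and add a ListByResourceGroupComplete variant whose iterator follows nextLink pages automatically. A hedged sketch of consuming the new paged API follows; the client and resource group name are placeholders, and the empty filter with a nil top simply requests everything.

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources"
)

// listDeployments assumes an already-authorized DeploymentsClient.
func listDeployments(ctx context.Context, client resources.DeploymentsClient, rg string) error {
	// The *Complete variant hides paging behind an iterator, replacing the
	// manual ListNextResults loops of the old package.
	iter, err := client.ListByResourceGroupComplete(ctx, rg, "", nil)
	if err != nil {
		return err
	}
	for iter.NotDone() {
		d := iter.Value()
		if d.Name != nil {
			fmt.Println(*d.Name)
		}
		if err := iter.Next(); err != nil {
			return err
		}
	}
	return nil
}
```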
+func (client DeploymentsClient) Validate(ctx context.Context, resourceGroupName string, deploymentName string, parameters Deployment) (result DeploymentValidateResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -700,10 +675,10 @@ func (client DeploymentsClient) Validate(resourceGroupName string, deploymentNam {Target: "parameters.Properties.ParametersLink", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.Properties.ParametersLink.URI", Name: validation.Null, Rule: true, Chain: nil}}}, }}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.DeploymentsClient", "Validate") + return result, validation.NewError("resources.DeploymentsClient", "Validate", err.Error()) } - req, err := client.ValidatePreparer(resourceGroupName, deploymentName, parameters) + req, err := client.ValidatePreparer(ctx, resourceGroupName, deploymentName, parameters) if err != nil { err = autorest.NewErrorWithError(err, "resources.DeploymentsClient", "Validate", nil, "Failure preparing request") return @@ -725,32 +700,33 @@ func (client DeploymentsClient) Validate(resourceGroupName string, deploymentNam } // ValidatePreparer prepares the Validate request. -func (client DeploymentsClient) ValidatePreparer(resourceGroupName string, deploymentName string, parameters Deployment) (*http.Request, error) { +func (client DeploymentsClient) ValidatePreparer(ctx context.Context, resourceGroupName string, deploymentName string, parameters Deployment) (*http.Request, error) { pathParameters := map[string]interface{}{ "deploymentName": autorest.Encode("path", deploymentName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPost(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}/validate", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ValidateSender sends the Validate request. The method will close the // http.Response Body if it receives an error. func (client DeploymentsClient) ValidateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ValidateResponder handles the response to the Validate request. 
The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/groups.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/groups.go similarity index 59% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/groups.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/groups.go index 13244a9ea..1b24ebcbf 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/groups.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/groups.go @@ -14,21 +14,20 @@ package resources // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" "net/http" ) -// GroupsClient is the provides operations for working with resources and -// resource groups. +// GroupsClient is the provides operations for working with resources and resource groups. type GroupsClient struct { - ManagementClient + BaseClient } // NewGroupsClient creates an instance of the GroupsClient client. @@ -42,19 +41,18 @@ func NewGroupsClientWithBaseURI(baseURI string, subscriptionID string) GroupsCli } // CheckExistence checks whether a resource group exists. -// -// resourceGroupName is the name of the resource group to check. The name is -// case insensitive. -func (client GroupsClient) CheckExistence(resourceGroupName string) (result autorest.Response, err error) { +// Parameters: +// resourceGroupName - the name of the resource group to check. The name is case insensitive. +func (client GroupsClient) CheckExistence(ctx context.Context, resourceGroupName string) (result autorest.Response, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupsClient", "CheckExistence") + return result, validation.NewError("resources.GroupsClient", "CheckExistence", err.Error()) } - req, err := client.CheckExistencePreparer(resourceGroupName) + req, err := client.CheckExistencePreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "resources.GroupsClient", "CheckExistence", nil, "Failure preparing request") return @@ -76,13 +74,13 @@ func (client GroupsClient) CheckExistence(resourceGroupName string) (result auto } // CheckExistencePreparer prepares the CheckExistence request. 
-func (client GroupsClient) CheckExistencePreparer(resourceGroupName string) (*http.Request, error) { +func (client GroupsClient) CheckExistencePreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -92,13 +90,14 @@ func (client GroupsClient) CheckExistencePreparer(resourceGroupName string) (*ht autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CheckExistenceSender sends the CheckExistence request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) CheckExistenceSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CheckExistenceResponder handles the response to the CheckExistence request. The method always @@ -113,11 +112,11 @@ func (client GroupsClient) CheckExistenceResponder(resp *http.Response) (result return } -// CreateOrUpdate creates a resource group. -// -// resourceGroupName is the name of the resource group to create or update. -// parameters is parameters supplied to the create or update a resource group. -func (client GroupsClient) CreateOrUpdate(resourceGroupName string, parameters Group) (result Group, err error) { +// CreateOrUpdate creates or updates a resource group. +// Parameters: +// resourceGroupName - the name of the resource group to create or update. +// parameters - parameters supplied to the create or update a resource group. +func (client GroupsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, parameters Group) (result Group, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -125,10 +124,10 @@ func (client GroupsClient) CreateOrUpdate(resourceGroupName string, parameters G {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}, {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.Location", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupsClient", "CreateOrUpdate") + return result, validation.NewError("resources.GroupsClient", "CreateOrUpdate", err.Error()) } - req, err := client.CreateOrUpdatePreparer(resourceGroupName, parameters) + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, parameters) if err != nil { err = autorest.NewErrorWithError(err, "resources.GroupsClient", "CreateOrUpdate", nil, "Failure preparing request") return @@ -150,31 +149,32 @@ func (client GroupsClient) CreateOrUpdate(resourceGroupName string, parameters G } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
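The GroupsClient hunks above follow the same pattern: CheckExistence and CreateOrUpdate gain a context parameter, preparers send an explicit JSON content type, and senders retry with provider registration. The sketch below is illustrative only; Group's fields are not visible in this part of the diff, so modelling Location as a *string set via to.StringPtr is an assumption inferred from the parameters.Location validation constraint above.

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources"
	"github.com/Azure/go-autorest/autorest/to"
)

// ensureGroup assumes an already-authorized GroupsClient. The Location field
// and to.StringPtr usage are assumptions, as noted above.
func ensureGroup(ctx context.Context, client resources.GroupsClient, rg, location string) error {
	// CreateOrUpdate is idempotent, so it also works as an "ensure exists"
	// step before submitting a deployment.
	_, err := client.CreateOrUpdate(ctx, rg, resources.Group{
		Location: to.StringPtr(location),
	})
	return err
}
```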
-func (client GroupsClient) CreateOrUpdatePreparer(resourceGroupName string, parameters Group) (*http.Request, error) { +func (client GroupsClient) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, parameters Group) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -183,73 +183,49 @@ func (client GroupsClient) CreateOrUpdateResponder(resp *http.Response) (result err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusCreated, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated), autorest.ByUnmarshallingJSON(&result), autorest.ByClosing()) result.Response = autorest.Response{Response: resp} return } -// Delete when you delete a resource group, all of its resources are also -// deleted. Deleting a resource group deletes all of its template deployments -// and currently stored operations. This method may poll for completion. -// Polling can be canceled by passing the cancel channel argument. The channel -// will be used to cancel polling and any outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group to delete. The name is -// case insensitive. -func (client GroupsClient) Delete(resourceGroupName string, cancel <-chan struct{}) (<-chan autorest.Response, <-chan error) { - resultChan := make(chan autorest.Response, 1) - errChan := make(chan error, 1) +// Delete when you delete a resource group, all of its resources are also deleted. Deleting a resource group deletes +// all of its template deployments and currently stored operations. +// Parameters: +// resourceGroupName - the name of the resource group to delete. The name is case insensitive. 
+func (client GroupsClient) Delete(ctx context.Context, resourceGroupName string) (result GroupsDeleteFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "resources.GroupsClient", "Delete") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("resources.GroupsClient", "Delete", err.Error()) } - go func() { - var err error - var result autorest.Response - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.DeletePreparer(resourceGroupName, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Delete", nil, "Failure preparing request") - return - } + req, err := client.DeletePreparer(ctx, resourceGroupName) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Delete", nil, "Failure preparing request") + return + } - resp, err := client.DeleteSender(req) - if err != nil { - result.Response = resp - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Delete", resp, "Failure sending request") - return - } + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Delete", result.Response(), "Failure sending request") + return + } - result, err = client.DeleteResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Delete", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // DeletePreparer prepares the Delete request. -func (client GroupsClient) DeletePreparer(resourceGroupName string, cancel <-chan struct{}) (*http.Request, error) { +func (client GroupsClient) DeletePreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -259,15 +235,24 @@ func (client GroupsClient) DeletePreparer(resourceGroupName string, cancel <-cha autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. 
-func (client GroupsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client GroupsClient) DeleteSender(req *http.Request) (future GroupsDeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // DeleteResponder handles the response to the Delete request. The method always @@ -276,26 +261,26 @@ func (client GroupsClient) DeleteResponder(resp *http.Response) (result autorest err = autorest.Respond( resp, client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusAccepted, http.StatusOK), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), autorest.ByClosing()) result.Response = resp return } // ExportTemplate captures the specified resource group as a template. -// -// resourceGroupName is the name of the resource group to export as a template. -// parameters is parameters for exporting the template. -func (client GroupsClient) ExportTemplate(resourceGroupName string, parameters ExportTemplateRequest) (result GroupExportResult, err error) { +// Parameters: +// resourceGroupName - the name of the resource group to export as a template. +// parameters - parameters for exporting the template. +func (client GroupsClient) ExportTemplate(ctx context.Context, resourceGroupName string, parameters ExportTemplateRequest) (result GroupExportResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupsClient", "ExportTemplate") + return result, validation.NewError("resources.GroupsClient", "ExportTemplate", err.Error()) } - req, err := client.ExportTemplatePreparer(resourceGroupName, parameters) + req, err := client.ExportTemplatePreparer(ctx, resourceGroupName, parameters) if err != nil { err = autorest.NewErrorWithError(err, "resources.GroupsClient", "ExportTemplate", nil, "Failure preparing request") return @@ -317,31 +302,32 @@ func (client GroupsClient) ExportTemplate(resourceGroupName string, parameters E } // ExportTemplatePreparer prepares the ExportTemplate request. 
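The Delete hunks replace the old channel-based long-running call with a GroupsDeleteFuture: DeleteSender now issues the request, validates the 200/202 response, and wraps it in an azure.Future built with NewFutureFromResponse. A hedged sketch of a caller that wants the old blocking behaviour follows; WaitForCompletion is assumed to be the blocking helper exposed by the vendored go-autorest, and its exact name differs between go-autorest releases.

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources"
)

// destroyGroup assumes an already-authorized GroupsClient. WaitForCompletion
// is an assumption about the vendored go-autorest version, as noted above.
func destroyGroup(ctx context.Context, client resources.GroupsClient, rg string) error {
	future, err := client.Delete(ctx, rg)
	if err != nil {
		return err
	}
	// Block until the long-running delete reaches a terminal state and
	// surface any failure recorded by the service.
	return future.WaitForCompletion(ctx, client.Client)
}
```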
-func (client GroupsClient) ExportTemplatePreparer(resourceGroupName string, parameters ExportTemplateRequest) (*http.Request, error) { +func (client GroupsClient) ExportTemplatePreparer(ctx context.Context, resourceGroupName string, parameters ExportTemplateRequest) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPost(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/exportTemplate", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ExportTemplateSender sends the ExportTemplate request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) ExportTemplateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ExportTemplateResponder handles the response to the ExportTemplate request. The method always @@ -358,19 +344,18 @@ func (client GroupsClient) ExportTemplateResponder(resp *http.Response) (result } // Get gets a resource group. -// -// resourceGroupName is the name of the resource group to get. The name is case -// insensitive. -func (client GroupsClient) Get(resourceGroupName string) (result Group, err error) { +// Parameters: +// resourceGroupName - the name of the resource group to get. The name is case insensitive. +func (client GroupsClient) Get(ctx context.Context, resourceGroupName string) (result Group, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupsClient", "Get") + return result, validation.NewError("resources.GroupsClient", "Get", err.Error()) } - req, err := client.GetPreparer(resourceGroupName) + req, err := client.GetPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Get", nil, "Failure preparing request") return @@ -392,13 +377,13 @@ func (client GroupsClient) Get(resourceGroupName string) (result Group, err erro } // GetPreparer prepares the Get request. 
-func (client GroupsClient) GetPreparer(resourceGroupName string) (*http.Request, error) { +func (client GroupsClient) GetPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -408,13 +393,14 @@ func (client GroupsClient) GetPreparer(resourceGroupName string) (*http.Request, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -431,11 +417,12 @@ func (client GroupsClient) GetResponder(resp *http.Response) (result Group, err } // List gets all the resource groups for a subscription. -// -// filter is the filter to apply on the operation. top is the number of results -// to return. If null is passed, returns all resource groups. -func (client GroupsClient) List(filter string, top *int32) (result GroupListResult, err error) { - req, err := client.ListPreparer(filter, top) +// Parameters: +// filter - the filter to apply on the operation. +// top - the number of results to return. If null is passed, returns all resource groups. +func (client GroupsClient) List(ctx context.Context, filter string, top *int32) (result GroupListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, filter, top) if err != nil { err = autorest.NewErrorWithError(err, "resources.GroupsClient", "List", nil, "Failure preparing request") return @@ -443,12 +430,12 @@ func (client GroupsClient) List(filter string, top *int32) (result GroupListResu resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.glr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "resources.GroupsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.glr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "resources.GroupsClient", "List", resp, "Failure responding to request") } @@ -457,12 +444,12 @@ func (client GroupsClient) List(filter string, top *int32) (result GroupListResu } // ListPreparer prepares the List request. 
-func (client GroupsClient) ListPreparer(filter string, top *int32) (*http.Request, error) { +func (client GroupsClient) ListPreparer(ctx context.Context, filter string, top *int32) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -478,13 +465,14 @@ func (client GroupsClient) ListPreparer(filter string, top *int32) (*http.Reques autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client GroupsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -500,206 +488,100 @@ func (client GroupsClient) ListResponder(resp *http.Response) (result GroupListR return } -// ListNextResults retrieves the next set of results, if any. -func (client GroupsClient) ListNextResults(lastResults GroupListResult) (result GroupListResult, err error) { - req, err := lastResults.GroupListResultPreparer() +// listNextResults retrieves the next set of results, if any. +func (client GroupsClient) listNextResults(lastResults GroupListResult) (result GroupListResult, err error) { + req, err := lastResults.groupListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "resources.GroupsClient", "listNextResults", resp, "Failure responding to next results request") } - return } -// ListResources get all the resources for a resource group. -// -// resourceGroupName is the resource group with the resources to get. filter is -// the filter to apply on the operation. expand is the $expand query parameter -// top is the number of results to return. If null is passed, returns all -// resources. -func (client GroupsClient) ListResources(resourceGroupName string, filter string, expand string, top *int32) (result ListResult, err error) { +// ListComplete enumerates all values, automatically crossing page boundaries as required. 
+func (client GroupsClient) ListComplete(ctx context.Context, filter string, top *int32) (result GroupListResultIterator, err error) { + result.page, err = client.List(ctx, filter, top) + return +} + +// Update resource groups can be updated through a simple PATCH operation to a group address. The format of the request +// is the same as that for creating a resource group. If a field is unspecified, the current value is retained. +// Parameters: +// resourceGroupName - the name of the resource group to update. The name is case insensitive. +// parameters - parameters supplied to update a resource group. +func (client GroupsClient) Update(ctx context.Context, resourceGroupName string, parameters GroupPatchable) (result Group, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupsClient", "ListResources") + return result, validation.NewError("resources.GroupsClient", "Update", err.Error()) } - req, err := client.ListResourcesPreparer(resourceGroupName, filter, expand, top) + req, err := client.UpdatePreparer(ctx, resourceGroupName, parameters) if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "ListResources", nil, "Failure preparing request") + err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Update", nil, "Failure preparing request") return } - resp, err := client.ListResourcesSender(req) + resp, err := client.UpdateSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "ListResources", resp, "Failure sending request") + err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Update", resp, "Failure sending request") return } - result, err = client.ListResourcesResponder(resp) + result, err = client.UpdateResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "ListResources", resp, "Failure responding to request") + err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Update", resp, "Failure responding to request") } return } -// ListResourcesPreparer prepares the ListResources request. -func (client GroupsClient) ListResourcesPreparer(resourceGroupName string, filter string, expand string, top *int32) (*http.Request, error) { +// UpdatePreparer prepares the Update request. 
+func (client GroupsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, parameters GroupPatchable) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" - queryParameters := map[string]interface{}{ - "api-version": APIVersion, - } - if len(filter) > 0 { - queryParameters["$filter"] = autorest.Encode("query", filter) - } - if len(expand) > 0 { - queryParameters["$expand"] = autorest.Encode("query", expand) - } - if top != nil { - queryParameters["$top"] = autorest.Encode("query", *top) - } - - preparer := autorest.CreatePreparer( - autorest.AsGet(), - autorest.WithBaseURL(client.BaseURI), - autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources", pathParameters), - autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) -} - -// ListResourcesSender sends the ListResources request. The method will close the -// http.Response Body if it receives an error. -func (client GroupsClient) ListResourcesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) -} - -// ListResourcesResponder handles the response to the ListResources request. The method always -// closes the http.Response Body. -func (client GroupsClient) ListResourcesResponder(resp *http.Response) (result ListResult, err error) { - err = autorest.Respond( - resp, - client.ByInspecting(), - azure.WithErrorUnlessStatusCode(http.StatusOK), - autorest.ByUnmarshallingJSON(&result), - autorest.ByClosing()) - result.Response = autorest.Response{Response: resp} - return -} - -// ListResourcesNextResults retrieves the next set of results, if any. -func (client GroupsClient) ListResourcesNextResults(lastResults ListResult) (result ListResult, err error) { - req, err := lastResults.ListResultPreparer() - if err != nil { - return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "ListResources", nil, "Failure preparing next results request") - } - if req == nil { - return - } - - resp, err := client.ListResourcesSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "resources.GroupsClient", "ListResources", resp, "Failure sending next results request") - } - - result, err = client.ListResourcesResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "ListResources", resp, "Failure responding to next results request") - } - - return -} - -// Patch resource groups can be updated through a simple PATCH operation to a -// group address. The format of the request is the same as that for creating a -// resource group. If a field is unspecified, the current value is retained. -// -// resourceGroupName is the name of the resource group to update. The name is -// case insensitive. parameters is parameters supplied to update a resource -// group. 
-func (client GroupsClient) Patch(resourceGroupName string, parameters Group) (result Group, err error) { - if err := validation.Validate([]validation.Validation{ - {TargetValue: resourceGroupName, - Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, - {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, - {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "resources.GroupsClient", "Patch") - } - - req, err := client.PatchPreparer(resourceGroupName, parameters) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Patch", nil, "Failure preparing request") - return - } - - resp, err := client.PatchSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Patch", resp, "Failure sending request") - return - } - - result, err = client.PatchResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "resources.GroupsClient", "Patch", resp, "Failure responding to request") - } - - return -} - -// PatchPreparer prepares the Patch request. -func (client GroupsClient) PatchPreparer(resourceGroupName string, parameters Group) (*http.Request, error) { - pathParameters := map[string]interface{}{ - "resourceGroupName": autorest.Encode("path", resourceGroupName), - "subscriptionId": autorest.Encode("path", client.SubscriptionID), - } - - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPatch(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } -// PatchSender sends the Patch request. The method will close the +// UpdateSender sends the Update request. The method will close the // http.Response Body if it receives an error. -func (client GroupsClient) PatchSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) +func (client GroupsClient) UpdateSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } -// PatchResponder handles the response to the Patch request. The method always +// UpdateResponder handles the response to the Update request. The method always // closes the http.Response Body. 
-func (client GroupsClient) PatchResponder(resp *http.Response) (result Group, err error) { +func (client GroupsClient) UpdateResponder(resp *http.Response) (result Group, err error) { err = autorest.Respond( resp, client.ByInspecting(), diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/models.go new file mode 100644 index 000000000..44e1b6102 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/models.go @@ -0,0 +1,1543 @@ +package resources + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "encoding/json" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/date" + "github.com/Azure/go-autorest/autorest/to" + "net/http" +) + +// DeploymentMode enumerates the values for deployment mode. +type DeploymentMode string + +const ( + // Complete ... + Complete DeploymentMode = "Complete" + // Incremental ... + Incremental DeploymentMode = "Incremental" +) + +// PossibleDeploymentModeValues returns an array of possible values for the DeploymentMode const type. +func PossibleDeploymentModeValues() []DeploymentMode { + return []DeploymentMode{Complete, Incremental} +} + +// OnErrorDeploymentType enumerates the values for on error deployment type. +type OnErrorDeploymentType string + +const ( + // LastSuccessful ... + LastSuccessful OnErrorDeploymentType = "LastSuccessful" + // SpecificDeployment ... + SpecificDeployment OnErrorDeploymentType = "SpecificDeployment" +) + +// PossibleOnErrorDeploymentTypeValues returns an array of possible values for the OnErrorDeploymentType const type. +func PossibleOnErrorDeploymentTypeValues() []OnErrorDeploymentType { + return []OnErrorDeploymentType{LastSuccessful, SpecificDeployment} +} + +// ResourceIdentityType enumerates the values for resource identity type. +type ResourceIdentityType string + +const ( + // None ... + None ResourceIdentityType = "None" + // SystemAssigned ... + SystemAssigned ResourceIdentityType = "SystemAssigned" + // SystemAssignedUserAssigned ... + SystemAssignedUserAssigned ResourceIdentityType = "SystemAssigned, UserAssigned" + // UserAssigned ... + UserAssigned ResourceIdentityType = "UserAssigned" +) + +// PossibleResourceIdentityTypeValues returns an array of possible values for the ResourceIdentityType const type. +func PossibleResourceIdentityTypeValues() []ResourceIdentityType { + return []ResourceIdentityType{None, SystemAssigned, SystemAssignedUserAssigned, UserAssigned} +} + +// AliasPathType the type of the paths for alias. +type AliasPathType struct { + // Path - The path of an alias. 
+ Path *string `json:"path,omitempty"` + // APIVersions - The API versions. + APIVersions *[]string `json:"apiVersions,omitempty"` +} + +// AliasType the alias type. +type AliasType struct { + // Name - The alias name. + Name *string `json:"name,omitempty"` + // Paths - The paths for an alias. + Paths *[]AliasPathType `json:"paths,omitempty"` +} + +// BasicDependency deployment dependency information. +type BasicDependency struct { + // ID - The ID of the dependency. + ID *string `json:"id,omitempty"` + // ResourceType - The dependency resource type. + ResourceType *string `json:"resourceType,omitempty"` + // ResourceName - The dependency resource name. + ResourceName *string `json:"resourceName,omitempty"` +} + +// CreateOrUpdateByIDFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type CreateOrUpdateByIDFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *CreateOrUpdateByIDFuture) Result(client Client) (gr GenericResource, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.CreateOrUpdateByIDFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.CreateOrUpdateByIDFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if gr.Response.Response, err = future.GetResult(sender); err == nil && gr.Response.Response.StatusCode != http.StatusNoContent { + gr, err = client.CreateOrUpdateByIDResponder(gr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.CreateOrUpdateByIDFuture", "Result", gr.Response.Response, "Failure responding to request") + } + } + return +} + +// CreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type CreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *CreateOrUpdateFuture) Result(client Client) (gr GenericResource, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.CreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.CreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if gr.Response.Response, err = future.GetResult(sender); err == nil && gr.Response.Response.StatusCode != http.StatusNoContent { + gr, err = client.CreateOrUpdateResponder(gr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.CreateOrUpdateFuture", "Result", gr.Response.Response, "Failure responding to request") + } + } + return +} + +// DebugSetting ... +type DebugSetting struct { + // DetailLevel - Specifies the type of information to log for debugging. The permitted values are none, requestContent, responseContent, or both requestContent and responseContent separated by a comma. The default is none. 
When setting this value, carefully consider the type of information you are passing in during deployment. By logging information about the request or response, you could potentially expose sensitive data that is retrieved through the deployment operations. + DetailLevel *string `json:"detailLevel,omitempty"` +} + +// DeleteByIDFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DeleteByIDFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *DeleteByIDFuture) Result(client Client) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeleteByIDFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.DeleteByIDFuture") + return + } + ar.Response = future.Response() + return +} + +// DeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *DeleteFuture) Result(client Client) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.DeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// Dependency deployment dependency information. +type Dependency struct { + // DependsOn - The list of dependencies. + DependsOn *[]BasicDependency `json:"dependsOn,omitempty"` + // ID - The ID of the dependency. + ID *string `json:"id,omitempty"` + // ResourceType - The dependency resource type. + ResourceType *string `json:"resourceType,omitempty"` + // ResourceName - The dependency resource name. + ResourceName *string `json:"resourceName,omitempty"` +} + +// Deployment deployment operation parameters. +type Deployment struct { + // Properties - The deployment properties. + Properties *DeploymentProperties `json:"properties,omitempty"` +} + +// DeploymentExportResult the deployment export result. +type DeploymentExportResult struct { + autorest.Response `json:"-"` + // Template - The template content. + Template interface{} `json:"template,omitempty"` +} + +// DeploymentExtended deployment information. +type DeploymentExtended struct { + autorest.Response `json:"-"` + // ID - The ID of the deployment. + ID *string `json:"id,omitempty"` + // Name - The name of the deployment. + Name *string `json:"name,omitempty"` + // Properties - Deployment properties. + Properties *DeploymentPropertiesExtended `json:"properties,omitempty"` +} + +// DeploymentExtendedFilter deployment filter. +type DeploymentExtendedFilter struct { + // ProvisioningState - The provisioning state. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// DeploymentListResult list of deployments. +type DeploymentListResult struct { + autorest.Response `json:"-"` + // Value - An array of deployments. + Value *[]DeploymentExtended `json:"value,omitempty"` + // NextLink - The URL to use for getting the next set of results. 
+ NextLink *string `json:"nextLink,omitempty"` +} + +// DeploymentListResultIterator provides access to a complete listing of DeploymentExtended values. +type DeploymentListResultIterator struct { + i int + page DeploymentListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *DeploymentListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter DeploymentListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter DeploymentListResultIterator) Response() DeploymentListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter DeploymentListResultIterator) Value() DeploymentExtended { + if !iter.page.NotDone() { + return DeploymentExtended{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (dlr DeploymentListResult) IsEmpty() bool { + return dlr.Value == nil || len(*dlr.Value) == 0 +} + +// deploymentListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (dlr DeploymentListResult) deploymentListResultPreparer() (*http.Request, error) { + if dlr.NextLink == nil || len(to.String(dlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(dlr.NextLink))) +} + +// DeploymentListResultPage contains a page of DeploymentExtended values. +type DeploymentListResultPage struct { + fn func(DeploymentListResult) (DeploymentListResult, error) + dlr DeploymentListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *DeploymentListResultPage) Next() error { + next, err := page.fn(page.dlr) + if err != nil { + return err + } + page.dlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page DeploymentListResultPage) NotDone() bool { + return !page.dlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page DeploymentListResultPage) Response() DeploymentListResult { + return page.dlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page DeploymentListResultPage) Values() []DeploymentExtended { + if page.dlr.IsEmpty() { + return nil + } + return *page.dlr.Value +} + +// DeploymentOperation deployment operation information. +type DeploymentOperation struct { + autorest.Response `json:"-"` + // ID - Full deployment operation ID. + ID *string `json:"id,omitempty"` + // OperationID - Deployment operation ID. + OperationID *string `json:"operationId,omitempty"` + // Properties - Deployment properties. + Properties *DeploymentOperationProperties `json:"properties,omitempty"` +} + +// DeploymentOperationProperties deployment operation properties. 
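The DeploymentListResultPage and DeploymentListResultIterator types above are the building blocks behind the *Complete helpers: the page type exposes NotDone, Values, and Next for page-level control, while the iterator flattens pages into single values. As an illustrative aside, walking results one page at a time could look like this sketch; the client and resource group are placeholders.

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources"
)

// pageThroughDeployments assumes an already-authorized DeploymentsClient and
// uses the page type directly, which is handy when a caller wants to stop
// after a bounded number of pages.
func pageThroughDeployments(ctx context.Context, client resources.DeploymentsClient, rg string) error {
	page, err := client.ListByResourceGroup(ctx, rg, "", nil)
	if err != nil {
		return err
	}
	for page.NotDone() {
		for _, d := range page.Values() {
			if d.Name != nil {
				fmt.Println(*d.Name)
			}
		}
		if err := page.Next(); err != nil {
			return err
		}
	}
	return nil
}
```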
+type DeploymentOperationProperties struct { + // ProvisioningState - The state of the provisioning. + ProvisioningState *string `json:"provisioningState,omitempty"` + // Timestamp - The date and time of the operation. + Timestamp *date.Time `json:"timestamp,omitempty"` + // ServiceRequestID - Deployment operation service request id. + ServiceRequestID *string `json:"serviceRequestId,omitempty"` + // StatusCode - Operation status code. + StatusCode *string `json:"statusCode,omitempty"` + // StatusMessage - Operation status message. + StatusMessage interface{} `json:"statusMessage,omitempty"` + // TargetResource - The target resource. + TargetResource *TargetResource `json:"targetResource,omitempty"` + // Request - The HTTP request message. + Request *HTTPMessage `json:"request,omitempty"` + // Response - The HTTP response message. + Response *HTTPMessage `json:"response,omitempty"` +} + +// DeploymentOperationsListResult list of deployment operations. +type DeploymentOperationsListResult struct { + autorest.Response `json:"-"` + // Value - An array of deployment operations. + Value *[]DeploymentOperation `json:"value,omitempty"` + // NextLink - The URL to use for getting the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// DeploymentOperationsListResultIterator provides access to a complete listing of DeploymentOperation values. +type DeploymentOperationsListResultIterator struct { + i int + page DeploymentOperationsListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *DeploymentOperationsListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter DeploymentOperationsListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter DeploymentOperationsListResultIterator) Response() DeploymentOperationsListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter DeploymentOperationsListResultIterator) Value() DeploymentOperation { + if !iter.page.NotDone() { + return DeploymentOperation{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (dolr DeploymentOperationsListResult) IsEmpty() bool { + return dolr.Value == nil || len(*dolr.Value) == 0 +} + +// deploymentOperationsListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (dolr DeploymentOperationsListResult) deploymentOperationsListResultPreparer() (*http.Request, error) { + if dolr.NextLink == nil || len(to.String(dolr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(dolr.NextLink))) +} + +// DeploymentOperationsListResultPage contains a page of DeploymentOperation values. 
+type DeploymentOperationsListResultPage struct { + fn func(DeploymentOperationsListResult) (DeploymentOperationsListResult, error) + dolr DeploymentOperationsListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *DeploymentOperationsListResultPage) Next() error { + next, err := page.fn(page.dolr) + if err != nil { + return err + } + page.dolr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page DeploymentOperationsListResultPage) NotDone() bool { + return !page.dolr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page DeploymentOperationsListResultPage) Response() DeploymentOperationsListResult { + return page.dolr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page DeploymentOperationsListResultPage) Values() []DeploymentOperation { + if page.dolr.IsEmpty() { + return nil + } + return *page.dolr.Value +} + +// DeploymentProperties deployment properties. +type DeploymentProperties struct { + // Template - The template content. You use this element when you want to pass the template syntax directly in the request rather than link to an existing template. It can be a JObject or well-formed JSON string. Use either the templateLink property or the template property, but not both. + Template interface{} `json:"template,omitempty"` + // TemplateLink - The URI of the template. Use either the templateLink property or the template property, but not both. + TemplateLink *TemplateLink `json:"templateLink,omitempty"` + // Parameters - Name and value pairs that define the deployment parameters for the template. You use this element when you want to provide the parameter values directly in the request rather than link to an existing parameter file. Use either the parametersLink property or the parameters property, but not both. It can be a JObject or a well formed JSON string. + Parameters interface{} `json:"parameters,omitempty"` + // ParametersLink - The URI of parameters file. You use this element to link to an existing parameters file. Use either the parametersLink property or the parameters property, but not both. + ParametersLink *ParametersLink `json:"parametersLink,omitempty"` + // Mode - The mode that is used to deploy resources. This value can be either Incremental or Complete. In Incremental mode, resources are deployed without deleting existing resources that are not included in the template. In Complete mode, resources are deployed and existing resources in the resource group that are not included in the template are deleted. Be careful when using Complete mode as you may unintentionally delete resources. Possible values include: 'Incremental', 'Complete' + Mode DeploymentMode `json:"mode,omitempty"` + // DebugSetting - The debug setting of the deployment. + DebugSetting *DebugSetting `json:"debugSetting,omitempty"` + // OnErrorDeployment - The deployment on error behavior. + OnErrorDeployment *OnErrorDeployment `json:"onErrorDeployment,omitempty"` +} + +// DeploymentPropertiesExtended deployment properties with additional details. +type DeploymentPropertiesExtended struct { + // ProvisioningState - The state of the provisioning. + ProvisioningState *string `json:"provisioningState,omitempty"` + // CorrelationID - The correlation ID of the deployment. 
+ CorrelationID *string `json:"correlationId,omitempty"` + // Timestamp - The timestamp of the template deployment. + Timestamp *date.Time `json:"timestamp,omitempty"` + // Outputs - Key/value pairs that represent deploymentoutput. + Outputs interface{} `json:"outputs,omitempty"` + // Providers - The list of resource providers needed for the deployment. + Providers *[]Provider `json:"providers,omitempty"` + // Dependencies - The list of deployment dependencies. + Dependencies *[]Dependency `json:"dependencies,omitempty"` + // Template - The template content. Use only one of Template or TemplateLink. + Template interface{} `json:"template,omitempty"` + // TemplateLink - The URI referencing the template. Use only one of Template or TemplateLink. + TemplateLink *TemplateLink `json:"templateLink,omitempty"` + // Parameters - Deployment parameters. Use only one of Parameters or ParametersLink. + Parameters interface{} `json:"parameters,omitempty"` + // ParametersLink - The URI referencing the parameters. Use only one of Parameters or ParametersLink. + ParametersLink *ParametersLink `json:"parametersLink,omitempty"` + // Mode - The deployment mode. Possible values are Incremental and Complete. Possible values include: 'Incremental', 'Complete' + Mode DeploymentMode `json:"mode,omitempty"` + // DebugSetting - The debug setting of the deployment. + DebugSetting *DebugSetting `json:"debugSetting,omitempty"` + // OnErrorDeployment - The deployment on error behavior. + OnErrorDeployment *OnErrorDeploymentExtended `json:"onErrorDeployment,omitempty"` +} + +// DeploymentsCreateOrUpdateFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type DeploymentsCreateOrUpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *DeploymentsCreateOrUpdateFuture) Result(client DeploymentsClient) (de DeploymentExtended, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsCreateOrUpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.DeploymentsCreateOrUpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if de.Response.Response, err = future.GetResult(sender); err == nil && de.Response.Response.StatusCode != http.StatusNoContent { + de, err = client.CreateOrUpdateResponder(de.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsCreateOrUpdateFuture", "Result", de.Response.Response, "Failure responding to request") + } + } + return +} + +// DeploymentsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type DeploymentsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
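+//
+// Illustrative polling sketch only: "client" is an initialized DeploymentsClient,
+// the resource group and deployment names are placeholders, and the Delete call
+// is assumed from elsewhere in this package; pacing of the loop is left to the
+// caller.
+//
+//	future, err := client.Delete(ctx, "example-group", "example-deployment")
+//	for err == nil {
+//		var done bool
+//		if done, err = future.Done(client); done || err != nil {
+//			break
+//		}
+//		time.Sleep(15 * time.Second)
+//	}
+//	if err == nil {
+//		_, err = future.Result(client)
+//	}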
+func (future *DeploymentsDeleteFuture) Result(client DeploymentsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.DeploymentsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.DeploymentsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// DeploymentValidateResult information from validate template deployment response. +type DeploymentValidateResult struct { + autorest.Response `json:"-"` + // Error - Validation error. + Error *ManagementErrorWithDetails `json:"error,omitempty"` + // Properties - The template deployment properties. + Properties *DeploymentPropertiesExtended `json:"properties,omitempty"` +} + +// ExportTemplateRequest export resource group template request parameters. +type ExportTemplateRequest struct { + // ResourcesProperty - The IDs of the resources. The only supported string currently is '*' (all resources). Future updates will support exporting specific resources. + ResourcesProperty *[]string `json:"resources,omitempty"` + // Options - The export template options. Supported values include 'IncludeParameterDefaultValue', 'IncludeComments' or 'IncludeParameterDefaultValue, IncludeComments + Options *string `json:"options,omitempty"` +} + +// GenericResource resource information. +type GenericResource struct { + autorest.Response `json:"-"` + // Plan - The plan of the resource. + Plan *Plan `json:"plan,omitempty"` + // Properties - The resource properties. + Properties interface{} `json:"properties,omitempty"` + // Kind - The kind of the resource. + Kind *string `json:"kind,omitempty"` + // ManagedBy - ID of the resource that manages this resource. + ManagedBy *string `json:"managedBy,omitempty"` + // Sku - The SKU of the resource. + Sku *Sku `json:"sku,omitempty"` + // Identity - The identity of the resource. + Identity *Identity `json:"identity,omitempty"` + // ID - Resource ID + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for GenericResource. +func (gr GenericResource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if gr.Plan != nil { + objectMap["plan"] = gr.Plan + } + objectMap["properties"] = gr.Properties + if gr.Kind != nil { + objectMap["kind"] = gr.Kind + } + if gr.ManagedBy != nil { + objectMap["managedBy"] = gr.ManagedBy + } + if gr.Sku != nil { + objectMap["sku"] = gr.Sku + } + if gr.Identity != nil { + objectMap["identity"] = gr.Identity + } + if gr.ID != nil { + objectMap["id"] = gr.ID + } + if gr.Name != nil { + objectMap["name"] = gr.Name + } + if gr.Type != nil { + objectMap["type"] = gr.Type + } + if gr.Location != nil { + objectMap["location"] = gr.Location + } + if gr.Tags != nil { + objectMap["tags"] = gr.Tags + } + return json.Marshal(objectMap) +} + +// GenericResourceFilter resource filter. +type GenericResourceFilter struct { + // ResourceType - The resource type. + ResourceType *string `json:"resourceType,omitempty"` + // Tagname - The tag name. + Tagname *string `json:"tagname,omitempty"` + // Tagvalue - The tag value. 
+ Tagvalue *string `json:"tagvalue,omitempty"` +} + +// Group resource group information. +type Group struct { + autorest.Response `json:"-"` + // ID - The ID of the resource group. + ID *string `json:"id,omitempty"` + // Name - The name of the resource group. + Name *string `json:"name,omitempty"` + Properties *GroupProperties `json:"properties,omitempty"` + // Location - The location of the resource group. It cannot be changed after the resource group has been created. It must be one of the supported Azure locations. + Location *string `json:"location,omitempty"` + // ManagedBy - The ID of the resource that manages this resource group. + ManagedBy *string `json:"managedBy,omitempty"` + // Tags - The tags attached to the resource group. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Group. +func (g Group) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if g.ID != nil { + objectMap["id"] = g.ID + } + if g.Name != nil { + objectMap["name"] = g.Name + } + if g.Properties != nil { + objectMap["properties"] = g.Properties + } + if g.Location != nil { + objectMap["location"] = g.Location + } + if g.ManagedBy != nil { + objectMap["managedBy"] = g.ManagedBy + } + if g.Tags != nil { + objectMap["tags"] = g.Tags + } + return json.Marshal(objectMap) +} + +// GroupExportResult resource group export result. +type GroupExportResult struct { + autorest.Response `json:"-"` + // Template - The template content. + Template interface{} `json:"template,omitempty"` + // Error - The error. + Error *ManagementErrorWithDetails `json:"error,omitempty"` +} + +// GroupFilter resource group filter. +type GroupFilter struct { + // TagName - The tag name. + TagName *string `json:"tagName,omitempty"` + // TagValue - The tag value. + TagValue *string `json:"tagValue,omitempty"` +} + +// GroupListResult list of resource groups. +type GroupListResult struct { + autorest.Response `json:"-"` + // Value - An array of resource groups. + Value *[]Group `json:"value,omitempty"` + // NextLink - The URL to use for getting the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// GroupListResultIterator provides access to a complete listing of Group values. +type GroupListResultIterator struct { + i int + page GroupListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *GroupListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter GroupListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter GroupListResultIterator) Response() GroupListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter GroupListResultIterator) Value() Group { + if !iter.page.NotDone() { + return Group{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. 
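+//
+// Because Value is a pointer to a slice, callers should check IsEmpty (or an
+// equivalent nil check) before dereferencing it; a minimal sketch:
+//
+//	if !glr.IsEmpty() {
+//		groups := *glr.Value
+//		_ = groups
+//	}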
+func (glr GroupListResult) IsEmpty() bool { + return glr.Value == nil || len(*glr.Value) == 0 +} + +// groupListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (glr GroupListResult) groupListResultPreparer() (*http.Request, error) { + if glr.NextLink == nil || len(to.String(glr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(glr.NextLink))) +} + +// GroupListResultPage contains a page of Group values. +type GroupListResultPage struct { + fn func(GroupListResult) (GroupListResult, error) + glr GroupListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *GroupListResultPage) Next() error { + next, err := page.fn(page.glr) + if err != nil { + return err + } + page.glr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page GroupListResultPage) NotDone() bool { + return !page.glr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page GroupListResultPage) Response() GroupListResult { + return page.glr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page GroupListResultPage) Values() []Group { + if page.glr.IsEmpty() { + return nil + } + return *page.glr.Value +} + +// GroupPatchable resource group information. +type GroupPatchable struct { + // Name - The name of the resource group. + Name *string `json:"name,omitempty"` + Properties *GroupProperties `json:"properties,omitempty"` + // ManagedBy - The ID of the resource that manages this resource group. + ManagedBy *string `json:"managedBy,omitempty"` + // Tags - The tags attached to the resource group. + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for GroupPatchable. +func (gp GroupPatchable) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if gp.Name != nil { + objectMap["name"] = gp.Name + } + if gp.Properties != nil { + objectMap["properties"] = gp.Properties + } + if gp.ManagedBy != nil { + objectMap["managedBy"] = gp.ManagedBy + } + if gp.Tags != nil { + objectMap["tags"] = gp.Tags + } + return json.Marshal(objectMap) +} + +// GroupProperties the resource group properties. +type GroupProperties struct { + // ProvisioningState - The provisioning state. + ProvisioningState *string `json:"provisioningState,omitempty"` +} + +// GroupsDeleteFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type GroupsDeleteFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *GroupsDeleteFuture) Result(client GroupsClient) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.GroupsDeleteFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.GroupsDeleteFuture") + return + } + ar.Response = future.Response() + return +} + +// HTTPMessage HTTP message. +type HTTPMessage struct { + // Content - HTTP message content. 
+ Content interface{} `json:"content,omitempty"` +} + +// Identity identity for the resource. +type Identity struct { + // PrincipalID - The principal ID of resource identity. + PrincipalID *string `json:"principalId,omitempty"` + // TenantID - The tenant ID of resource. + TenantID *string `json:"tenantId,omitempty"` + // Type - The identity type. Possible values include: 'SystemAssigned', 'UserAssigned', 'SystemAssignedUserAssigned', 'None' + Type ResourceIdentityType `json:"type,omitempty"` +} + +// ListResult list of resource groups. +type ListResult struct { + autorest.Response `json:"-"` + // Value - An array of resources. + Value *[]GenericResource `json:"value,omitempty"` + // NextLink - The URL to use for getting the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ListResultIterator provides access to a complete listing of GenericResource values. +type ListResultIterator struct { + i int + page ListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ListResultIterator) Response() ListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ListResultIterator) Value() GenericResource { + if !iter.page.NotDone() { + return GenericResource{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (lr ListResult) IsEmpty() bool { + return lr.Value == nil || len(*lr.Value) == 0 +} + +// listResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (lr ListResult) listResultPreparer() (*http.Request, error) { + if lr.NextLink == nil || len(to.String(lr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(lr.NextLink))) +} + +// ListResultPage contains a page of GenericResource values. +type ListResultPage struct { + fn func(ListResult) (ListResult, error) + lr ListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ListResultPage) Next() error { + next, err := page.fn(page.lr) + if err != nil { + return err + } + page.lr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ListResultPage) NotDone() bool { + return !page.lr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page ListResultPage) Response() ListResult { + return page.lr +} + +// Values returns the slice of values for the current page or nil if there are no values. 
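+//
+// A typical page walk over GenericResource values, shown as an illustrative
+// sketch only ("page" is a ListResultPage returned by a list call elsewhere in
+// this package); the nil return on empty pages makes the inner range safe:
+//
+//	for page.NotDone() {
+//		for _, res := range page.Values() {
+//			if res.ID != nil {
+//				fmt.Println(*res.ID)
+//			}
+//		}
+//		if err := page.Next(); err != nil {
+//			break
+//		}
+//	}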
+func (page ListResultPage) Values() []GenericResource { + if page.lr.IsEmpty() { + return nil + } + return *page.lr.Value +} + +// ManagementErrorWithDetails the detailed error message of resource management. +type ManagementErrorWithDetails struct { + // Code - The error code returned when exporting the template. + Code *string `json:"code,omitempty"` + // Message - The error message describing the export error. + Message *string `json:"message,omitempty"` + // Target - The target of the error. + Target *string `json:"target,omitempty"` + // Details - Validation error. + Details *[]ManagementErrorWithDetails `json:"details,omitempty"` +} + +// MoveInfo parameters of move resources. +type MoveInfo struct { + // ResourcesProperty - The IDs of the resources. + ResourcesProperty *[]string `json:"resources,omitempty"` + // TargetResourceGroup - The target resource group. + TargetResourceGroup *string `json:"targetResourceGroup,omitempty"` +} + +// MoveResourcesFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type MoveResourcesFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *MoveResourcesFuture) Result(client Client) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.MoveResourcesFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.MoveResourcesFuture") + return + } + ar.Response = future.Response() + return +} + +// OnErrorDeployment deployment on error behavior. +type OnErrorDeployment struct { + // Type - The deployment on error behavior type. Possible values are LastSuccessful and SpecificDeployment. Possible values include: 'LastSuccessful', 'SpecificDeployment' + Type OnErrorDeploymentType `json:"type,omitempty"` + // DeploymentName - The deployment to be used on error case. + DeploymentName *string `json:"deploymentName,omitempty"` +} + +// OnErrorDeploymentExtended deployment on error behavior with additional details. +type OnErrorDeploymentExtended struct { + // ProvisioningState - The state of the provisioning for the on error deployment. + ProvisioningState *string `json:"provisioningState,omitempty"` + // Type - The deployment on error behavior type. Possible values are LastSuccessful and SpecificDeployment. Possible values include: 'LastSuccessful', 'SpecificDeployment' + Type OnErrorDeploymentType `json:"type,omitempty"` + // DeploymentName - The deployment to be used on error case. + DeploymentName *string `json:"deploymentName,omitempty"` +} + +// ParametersLink entity representing the reference to the deployment paramaters. +type ParametersLink struct { + // URI - The URI of the parameters file. + URI *string `json:"uri,omitempty"` + // ContentVersion - If included, must match the ContentVersion in the template. + ContentVersion *string `json:"contentVersion,omitempty"` +} + +// Plan plan for the resource. +type Plan struct { + // Name - The plan ID. + Name *string `json:"name,omitempty"` + // Publisher - The publisher ID. + Publisher *string `json:"publisher,omitempty"` + // Product - The offer ID. + Product *string `json:"product,omitempty"` + // PromotionCode - The promotion code. + PromotionCode *string `json:"promotionCode,omitempty"` + // Version - The plan's version. 
+ Version *string `json:"version,omitempty"` +} + +// Provider resource provider information. +type Provider struct { + autorest.Response `json:"-"` + // ID - The provider ID. + ID *string `json:"id,omitempty"` + // Namespace - The namespace of the resource provider. + Namespace *string `json:"namespace,omitempty"` + // RegistrationState - The registration state of the provider. + RegistrationState *string `json:"registrationState,omitempty"` + // ResourceTypes - The collection of provider resource types. + ResourceTypes *[]ProviderResourceType `json:"resourceTypes,omitempty"` +} + +// ProviderListResult list of resource providers. +type ProviderListResult struct { + autorest.Response `json:"-"` + // Value - An array of resource providers. + Value *[]Provider `json:"value,omitempty"` + // NextLink - The URL to use for getting the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// ProviderListResultIterator provides access to a complete listing of Provider values. +type ProviderListResultIterator struct { + i int + page ProviderListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *ProviderListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter ProviderListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter ProviderListResultIterator) Response() ProviderListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter ProviderListResultIterator) Value() Provider { + if !iter.page.NotDone() { + return Provider{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (plr ProviderListResult) IsEmpty() bool { + return plr.Value == nil || len(*plr.Value) == 0 +} + +// providerListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (plr ProviderListResult) providerListResultPreparer() (*http.Request, error) { + if plr.NextLink == nil || len(to.String(plr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(plr.NextLink))) +} + +// ProviderListResultPage contains a page of Provider values. +type ProviderListResultPage struct { + fn func(ProviderListResult) (ProviderListResult, error) + plr ProviderListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *ProviderListResultPage) Next() error { + next, err := page.fn(page.plr) + if err != nil { + return err + } + page.plr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. +func (page ProviderListResultPage) NotDone() bool { + return !page.plr.IsEmpty() +} + +// Response returns the raw server response from the last page request. 
+func (page ProviderListResultPage) Response() ProviderListResult { + return page.plr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page ProviderListResultPage) Values() []Provider { + if page.plr.IsEmpty() { + return nil + } + return *page.plr.Value +} + +// ProviderOperationDisplayProperties resource provider operation's display properties. +type ProviderOperationDisplayProperties struct { + // Publisher - Operation description. + Publisher *string `json:"publisher,omitempty"` + // Provider - Operation provider. + Provider *string `json:"provider,omitempty"` + // Resource - Operation resource. + Resource *string `json:"resource,omitempty"` + // Operation - Operation. + Operation *string `json:"operation,omitempty"` + // Description - Operation description. + Description *string `json:"description,omitempty"` +} + +// ProviderResourceType resource type managed by the resource provider. +type ProviderResourceType struct { + // ResourceType - The resource type. + ResourceType *string `json:"resourceType,omitempty"` + // Locations - The collection of locations where this resource type can be created. + Locations *[]string `json:"locations,omitempty"` + // Aliases - The aliases that are supported by this resource type. + Aliases *[]AliasType `json:"aliases,omitempty"` + // APIVersions - The API version. + APIVersions *[]string `json:"apiVersions,omitempty"` + // Properties - The properties. + Properties map[string]*string `json:"properties"` +} + +// MarshalJSON is the custom marshaler for ProviderResourceType. +func (prt ProviderResourceType) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if prt.ResourceType != nil { + objectMap["resourceType"] = prt.ResourceType + } + if prt.Locations != nil { + objectMap["locations"] = prt.Locations + } + if prt.Aliases != nil { + objectMap["aliases"] = prt.Aliases + } + if prt.APIVersions != nil { + objectMap["apiVersions"] = prt.APIVersions + } + if prt.Properties != nil { + objectMap["properties"] = prt.Properties + } + return json.Marshal(objectMap) +} + +// Resource resource. +type Resource struct { + // ID - Resource ID + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Resource tags + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Resource. +func (r Resource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if r.ID != nil { + objectMap["id"] = r.ID + } + if r.Name != nil { + objectMap["name"] = r.Name + } + if r.Type != nil { + objectMap["type"] = r.Type + } + if r.Location != nil { + objectMap["location"] = r.Location + } + if r.Tags != nil { + objectMap["tags"] = r.Tags + } + return json.Marshal(objectMap) +} + +// Sku SKU for the resource. +type Sku struct { + // Name - The SKU name. + Name *string `json:"name,omitempty"` + // Tier - The SKU tier. + Tier *string `json:"tier,omitempty"` + // Size - The SKU size. + Size *string `json:"size,omitempty"` + // Family - The SKU family. + Family *string `json:"family,omitempty"` + // Model - The SKU model. + Model *string `json:"model,omitempty"` + // Capacity - The SKU capacity. + Capacity *int32 `json:"capacity,omitempty"` +} + +// SubResource sub-resource. 
+type SubResource struct { + // ID - Resource ID + ID *string `json:"id,omitempty"` +} + +// TagCount tag count. +type TagCount struct { + // Type - Type of count. + Type *string `json:"type,omitempty"` + // Value - Value of count. + Value *int32 `json:"value,omitempty"` +} + +// TagDetails tag details. +type TagDetails struct { + autorest.Response `json:"-"` + // ID - The tag ID. + ID *string `json:"id,omitempty"` + // TagName - The tag name. + TagName *string `json:"tagName,omitempty"` + // Count - The total number of resources that use the resource tag. When a tag is initially created and has no associated resources, the value is 0. + Count *TagCount `json:"count,omitempty"` + // Values - The list of tag values. + Values *[]TagValue `json:"values,omitempty"` +} + +// TagsListResult list of subscription tags. +type TagsListResult struct { + autorest.Response `json:"-"` + // Value - An array of tags. + Value *[]TagDetails `json:"value,omitempty"` + // NextLink - The URL to use for getting the next set of results. + NextLink *string `json:"nextLink,omitempty"` +} + +// TagsListResultIterator provides access to a complete listing of TagDetails values. +type TagsListResultIterator struct { + i int + page TagsListResultPage +} + +// Next advances to the next value. If there was an error making +// the request the iterator does not advance and the error is returned. +func (iter *TagsListResultIterator) Next() error { + iter.i++ + if iter.i < len(iter.page.Values()) { + return nil + } + err := iter.page.Next() + if err != nil { + iter.i-- + return err + } + iter.i = 0 + return nil +} + +// NotDone returns true if the enumeration should be started or is not yet complete. +func (iter TagsListResultIterator) NotDone() bool { + return iter.page.NotDone() && iter.i < len(iter.page.Values()) +} + +// Response returns the raw server response from the last page request. +func (iter TagsListResultIterator) Response() TagsListResult { + return iter.page.Response() +} + +// Value returns the current value or a zero-initialized value if the +// iterator has advanced beyond the end of the collection. +func (iter TagsListResultIterator) Value() TagDetails { + if !iter.page.NotDone() { + return TagDetails{} + } + return iter.page.Values()[iter.i] +} + +// IsEmpty returns true if the ListResult contains no values. +func (tlr TagsListResult) IsEmpty() bool { + return tlr.Value == nil || len(*tlr.Value) == 0 +} + +// tagsListResultPreparer prepares a request to retrieve the next set of results. +// It returns nil if no more results exist. +func (tlr TagsListResult) tagsListResultPreparer() (*http.Request, error) { + if tlr.NextLink == nil || len(to.String(tlr.NextLink)) < 1 { + return nil, nil + } + return autorest.Prepare(&http.Request{}, + autorest.AsJSON(), + autorest.AsGet(), + autorest.WithBaseURL(to.String(tlr.NextLink))) +} + +// TagsListResultPage contains a page of TagDetails values. +type TagsListResultPage struct { + fn func(TagsListResult) (TagsListResult, error) + tlr TagsListResult +} + +// Next advances to the next page of values. If there was an error making +// the request the page does not advance and the error is returned. +func (page *TagsListResultPage) Next() error { + next, err := page.fn(page.tlr) + if err != nil { + return err + } + page.tlr = next + return nil +} + +// NotDone returns true if the page enumeration should be started or is not yet complete. 
+func (page TagsListResultPage) NotDone() bool { + return !page.tlr.IsEmpty() +} + +// Response returns the raw server response from the last page request. +func (page TagsListResultPage) Response() TagsListResult { + return page.tlr +} + +// Values returns the slice of values for the current page or nil if there are no values. +func (page TagsListResultPage) Values() []TagDetails { + if page.tlr.IsEmpty() { + return nil + } + return *page.tlr.Value +} + +// TagValue tag information. +type TagValue struct { + autorest.Response `json:"-"` + // ID - The tag ID. + ID *string `json:"id,omitempty"` + // TagValue - The tag value. + TagValue *string `json:"tagValue,omitempty"` + // Count - The tag value count. + Count *TagCount `json:"count,omitempty"` +} + +// TargetResource target resource. +type TargetResource struct { + // ID - The ID of the resource. + ID *string `json:"id,omitempty"` + // ResourceName - The name of the resource. + ResourceName *string `json:"resourceName,omitempty"` + // ResourceType - The type of the resource. + ResourceType *string `json:"resourceType,omitempty"` +} + +// TemplateLink entity representing the reference to the template. +type TemplateLink struct { + // URI - The URI of the template to deploy. + URI *string `json:"uri,omitempty"` + // ContentVersion - If included, must match the ContentVersion in the template. + ContentVersion *string `json:"contentVersion,omitempty"` +} + +// UpdateByIDFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type UpdateByIDFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *UpdateByIDFuture) Result(client Client) (gr GenericResource, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.UpdateByIDFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.UpdateByIDFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if gr.Response.Response, err = future.GetResult(sender); err == nil && gr.Response.Response.StatusCode != http.StatusNoContent { + gr, err = client.UpdateByIDResponder(gr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.UpdateByIDFuture", "Result", gr.Response.Response, "Failure responding to request") + } + } + return +} + +// UpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type UpdateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. 
+func (future *UpdateFuture) Result(client Client) (gr GenericResource, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.UpdateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.UpdateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if gr.Response.Response, err = future.GetResult(sender); err == nil && gr.Response.Response.StatusCode != http.StatusNoContent { + gr, err = client.UpdateResponder(gr.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.UpdateFuture", "Result", gr.Response.Response, "Failure responding to request") + } + } + return +} + +// ValidateMoveResourcesFuture an abstraction for monitoring and retrieving the results of a long-running +// operation. +type ValidateMoveResourcesFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *ValidateMoveResourcesFuture) Result(client Client) (ar autorest.Response, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.ValidateMoveResourcesFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("resources.ValidateMoveResourcesFuture") + return + } + ar.Response = future.Response() + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/providers.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/providers.go similarity index 71% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/providers.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/providers.go index 7fd6dc158..db8a647f5 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/providers.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/providers.go @@ -14,20 +14,19 @@ package resources // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// ProvidersClient is the provides operations for working with resources and -// resource groups. +// ProvidersClient is the provides operations for working with resources and resource groups. type ProvidersClient struct { - ManagementClient + BaseClient } // NewProvidersClient creates an instance of the ProvidersClient client. @@ -35,19 +34,18 @@ func NewProvidersClient(subscriptionID string) ProvidersClient { return NewProvidersClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewProvidersClientWithBaseURI creates an instance of the ProvidersClient -// client. +// NewProvidersClientWithBaseURI creates an instance of the ProvidersClient client. 
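+//
+// Illustrative sketch only: the subscription ID, authorizer, and provider
+// namespace are placeholders; DefaultBaseURI comes from this package's client
+// setup and any autorest.Authorizer can be assigned before the first call.
+//
+//	client := NewProvidersClientWithBaseURI(DefaultBaseURI, "00000000-0000-0000-0000-000000000000")
+//	client.Authorizer = authorizer
+//	provider, err := client.Get(ctx, "Microsoft.Compute", "")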
func NewProvidersClientWithBaseURI(baseURI string, subscriptionID string) ProvidersClient { return ProvidersClient{NewWithBaseURI(baseURI, subscriptionID)} } // Get gets the specified resource provider. -// -// resourceProviderNamespace is the namespace of the resource provider. expand -// is the $expand query parameter. For example, to include property aliases in -// response, use $expand=resourceTypes/aliases. -func (client ProvidersClient) Get(resourceProviderNamespace string, expand string) (result Provider, err error) { - req, err := client.GetPreparer(resourceProviderNamespace, expand) +// Parameters: +// resourceProviderNamespace - the namespace of the resource provider. +// expand - the $expand query parameter. For example, to include property aliases in response, use +// $expand=resourceTypes/aliases. +func (client ProvidersClient) Get(ctx context.Context, resourceProviderNamespace string, expand string) (result Provider, err error) { + req, err := client.GetPreparer(ctx, resourceProviderNamespace, expand) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "Get", nil, "Failure preparing request") return @@ -69,13 +67,13 @@ func (client ProvidersClient) Get(resourceProviderNamespace string, expand strin } // GetPreparer prepares the Get request. -func (client ProvidersClient) GetPreparer(resourceProviderNamespace string, expand string) (*http.Request, error) { +func (client ProvidersClient) GetPreparer(ctx context.Context, resourceProviderNamespace string, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -88,13 +86,14 @@ func (client ProvidersClient) GetPreparer(resourceProviderNamespace string, expa autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetSender sends the Get request. The method will close the // http.Response Body if it receives an error. func (client ProvidersClient) GetSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetResponder handles the response to the Get request. The method always @@ -111,14 +110,14 @@ func (client ProvidersClient) GetResponder(resp *http.Response) (result Provider } // List gets all resource providers for a subscription. -// -// top is the number of results to return. If null is passed returns all -// deployments. expand is the properties to include in the results. For -// example, use &$expand=metadata in the query string to retrieve resource -// provider metadata. To include property aliases in response, use +// Parameters: +// top - the number of results to return. If null is passed returns all deployments. +// expand - the properties to include in the results. For example, use &$expand=metadata in the query string to +// retrieve resource provider metadata. To include property aliases in response, use // $expand=resourceTypes/aliases. 
-func (client ProvidersClient) List(top *int32, expand string) (result ProviderListResult, err error) { - req, err := client.ListPreparer(top, expand) +func (client ProvidersClient) List(ctx context.Context, top *int32, expand string) (result ProviderListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, top, expand) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "List", nil, "Failure preparing request") return @@ -126,12 +125,12 @@ func (client ProvidersClient) List(top *int32, expand string) (result ProviderLi resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.plr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.plr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "List", resp, "Failure responding to request") } @@ -140,12 +139,12 @@ func (client ProvidersClient) List(top *int32, expand string) (result ProviderLi } // ListPreparer prepares the List request. -func (client ProvidersClient) ListPreparer(top *int32, expand string) (*http.Request, error) { +func (client ProvidersClient) ListPreparer(ctx context.Context, top *int32, expand string) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -161,13 +160,14 @@ func (client ProvidersClient) ListPreparer(top *int32, expand string) (*http.Req autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client ProvidersClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -183,36 +183,38 @@ func (client ProvidersClient) ListResponder(resp *http.Response) (result Provide return } -// ListNextResults retrieves the next set of results, if any. -func (client ProvidersClient) ListNextResults(lastResults ProviderListResult) (result ProviderListResult, err error) { - req, err := lastResults.ProviderListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client ProvidersClient) listNextResults(lastResults ProviderListResult) (result ProviderListResult, err error) { + req, err := lastResults.providerListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "resources.ProvidersClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "resources.ProvidersClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "resources.ProvidersClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "resources.ProvidersClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "listNextResults", resp, "Failure responding to next results request") } + return +} +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client ProvidersClient) ListComplete(ctx context.Context, top *int32, expand string) (result ProviderListResultIterator, err error) { + result.page, err = client.List(ctx, top, expand) return } // Register registers a subscription with a resource provider. -// -// resourceProviderNamespace is the namespace of the resource provider to -// register. -func (client ProvidersClient) Register(resourceProviderNamespace string) (result Provider, err error) { - req, err := client.RegisterPreparer(resourceProviderNamespace) +// Parameters: +// resourceProviderNamespace - the namespace of the resource provider to register. +func (client ProvidersClient) Register(ctx context.Context, resourceProviderNamespace string) (result Provider, err error) { + req, err := client.RegisterPreparer(ctx, resourceProviderNamespace) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "Register", nil, "Failure preparing request") return @@ -234,13 +236,13 @@ func (client ProvidersClient) Register(resourceProviderNamespace string) (result } // RegisterPreparer prepares the Register request. -func (client ProvidersClient) RegisterPreparer(resourceProviderNamespace string) (*http.Request, error) { +func (client ProvidersClient) RegisterPreparer(ctx context.Context, resourceProviderNamespace string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -250,13 +252,14 @@ func (client ProvidersClient) RegisterPreparer(resourceProviderNamespace string) autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}/register", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // RegisterSender sends the Register request. The method will close the // http.Response Body if it receives an error. 
func (client ProvidersClient) RegisterSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // RegisterResponder handles the response to the Register request. The method always @@ -273,11 +276,10 @@ func (client ProvidersClient) RegisterResponder(resp *http.Response) (result Pro } // Unregister unregisters a subscription from a resource provider. -// -// resourceProviderNamespace is the namespace of the resource provider to -// unregister. -func (client ProvidersClient) Unregister(resourceProviderNamespace string) (result Provider, err error) { - req, err := client.UnregisterPreparer(resourceProviderNamespace) +// Parameters: +// resourceProviderNamespace - the namespace of the resource provider to unregister. +func (client ProvidersClient) Unregister(ctx context.Context, resourceProviderNamespace string) (result Provider, err error) { + req, err := client.UnregisterPreparer(ctx, resourceProviderNamespace) if err != nil { err = autorest.NewErrorWithError(err, "resources.ProvidersClient", "Unregister", nil, "Failure preparing request") return @@ -299,13 +301,13 @@ func (client ProvidersClient) Unregister(resourceProviderNamespace string) (resu } // UnregisterPreparer prepares the Unregister request. -func (client ProvidersClient) UnregisterPreparer(resourceProviderNamespace string) (*http.Request, error) { +func (client ProvidersClient) UnregisterPreparer(ctx context.Context, resourceProviderNamespace string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -315,13 +317,14 @@ func (client ProvidersClient) UnregisterPreparer(resourceProviderNamespace strin autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}/unregister", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // UnregisterSender sends the Unregister request. The method will close the // http.Response Body if it receives an error. func (client ProvidersClient) UnregisterSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // UnregisterResponder handles the response to the Unregister request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/resources.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/resources.go new file mode 100644 index 000000000..bc7e57190 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/resources.go @@ -0,0 +1,1201 @@ +package resources + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/validation" + "net/http" +) + +// Client is the provides operations for working with resources and resource groups. +type Client struct { + BaseClient +} + +// NewClient creates an instance of the Client client. +func NewClient(subscriptionID string) Client { + return NewClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewClientWithBaseURI creates an instance of the Client client. +func NewClientWithBaseURI(baseURI string, subscriptionID string) Client { + return Client{NewWithBaseURI(baseURI, subscriptionID)} +} + +// CheckExistence checks whether a resource exists. +// Parameters: +// resourceGroupName - the name of the resource group containing the resource to check. The name is case +// insensitive. +// resourceProviderNamespace - the resource provider of the resource to check. +// parentResourcePath - the parent resource identity. +// resourceType - the resource type. +// resourceName - the name of the resource to check whether it exists. +func (client Client) CheckExistence(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result autorest.Response, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: resourceGroupName, + Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("resources.Client", "CheckExistence", err.Error()) + } + + req, err := client.CheckExistencePreparer(ctx, resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CheckExistence", nil, "Failure preparing request") + return + } + + resp, err := client.CheckExistenceSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "resources.Client", "CheckExistence", resp, "Failure sending request") + return + } + + result, err = client.CheckExistenceResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CheckExistence", resp, "Failure responding to request") + } + + return +} + +// CheckExistencePreparer prepares the CheckExistence request. 
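+//
+// CheckExistence composes the three generated steps below; callers who need to
+// customize the request can reproduce the same chain by hand. Illustrative
+// sketch only (all arguments are placeholders):
+//
+//	req, err := client.CheckExistencePreparer(ctx, "example-group", "Microsoft.Compute", "", "virtualMachines", "example-vm")
+//	if err == nil {
+//		var resp *http.Response
+//		if resp, err = client.CheckExistenceSender(req); err == nil {
+//			_, err = client.CheckExistenceResponder(resp)
+//		}
+//	}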
+func (client Client) CheckExistencePreparer(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "parentResourcePath": parentResourcePath, + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "resourceName": autorest.Encode("path", resourceName), + "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), + "resourceType": resourceType, + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsHead(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CheckExistenceSender sends the CheckExistence request. The method will close the +// http.Response Body if it receives an error. +func (client Client) CheckExistenceSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// CheckExistenceResponder handles the response to the CheckExistence request. The method always +// closes the http.Response Body. +func (client Client) CheckExistenceResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusNotFound), + autorest.ByClosing()) + result.Response = resp + return +} + +// CheckExistenceByID checks by ID whether a resource exists. +// Parameters: +// resourceID - the fully qualified ID of the resource, including the resource name and resource type. Use the +// format, +// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} +func (client Client) CheckExistenceByID(ctx context.Context, resourceID string) (result autorest.Response, err error) { + req, err := client.CheckExistenceByIDPreparer(ctx, resourceID) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CheckExistenceByID", nil, "Failure preparing request") + return + } + + resp, err := client.CheckExistenceByIDSender(req) + if err != nil { + result.Response = resp + err = autorest.NewErrorWithError(err, "resources.Client", "CheckExistenceByID", resp, "Failure sending request") + return + } + + result, err = client.CheckExistenceByIDResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CheckExistenceByID", resp, "Failure responding to request") + } + + return +} + +// CheckExistenceByIDPreparer prepares the CheckExistenceByID request. 
+func (client Client) CheckExistenceByIDPreparer(ctx context.Context, resourceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceId": resourceID, + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsHead(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{resourceId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CheckExistenceByIDSender sends the CheckExistenceByID request. The method will close the +// http.Response Body if it receives an error. +func (client Client) CheckExistenceByIDSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) +} + +// CheckExistenceByIDResponder handles the response to the CheckExistenceByID request. The method always +// closes the http.Response Body. +func (client Client) CheckExistenceByIDResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusNoContent, http.StatusNotFound), + autorest.ByClosing()) + result.Response = resp + return +} + +// CreateOrUpdate creates a resource. +// Parameters: +// resourceGroupName - the name of the resource group for the resource. The name is case insensitive. +// resourceProviderNamespace - the namespace of the resource provider. +// parentResourcePath - the parent resource identity. +// resourceType - the resource type of the resource to create. +// resourceName - the name of the resource to create. +// parameters - parameters for creating or updating the resource. +func (client Client) CreateOrUpdate(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource) (result CreateOrUpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: resourceGroupName, + Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}, + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Pattern, Rule: `^[-\w\._,\(\)]+$`, Chain: nil}}}}}}); err != nil { + return result, validation.NewError("resources.Client", "CreateOrUpdate", err.Error()) + } + + req, err := client.CreateOrUpdatePreparer(ctx, resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CreateOrUpdate", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CreateOrUpdate", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
+func (client Client) CreateOrUpdatePreparer(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "parentResourcePath": parentResourcePath, + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "resourceName": autorest.Encode("path", resourceName), + "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), + "resourceType": resourceType, + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the +// http.Response Body if it receives an error. +func (client Client) CreateOrUpdateSender(req *http.Request) (future CreateOrUpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always +// closes the http.Response Body. +func (client Client) CreateOrUpdateResponder(resp *http.Response) (result GenericResource, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// CreateOrUpdateByID create a resource by ID. +// Parameters: +// resourceID - the fully qualified ID of the resource, including the resource name and resource type. Use the +// format, +// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} +// parameters - create or update resource parameters. 
+func (client Client) CreateOrUpdateByID(ctx context.Context, resourceID string, parameters GenericResource) (result CreateOrUpdateByIDFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: parameters, + Constraints: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.Kind", Name: validation.Pattern, Rule: `^[-\w\._,\(\)]+$`, Chain: nil}}}}}}); err != nil { + return result, validation.NewError("resources.Client", "CreateOrUpdateByID", err.Error()) + } + + req, err := client.CreateOrUpdateByIDPreparer(ctx, resourceID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CreateOrUpdateByID", nil, "Failure preparing request") + return + } + + result, err = client.CreateOrUpdateByIDSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "CreateOrUpdateByID", result.Response(), "Failure sending request") + return + } + + return +} + +// CreateOrUpdateByIDPreparer prepares the CreateOrUpdateByID request. +func (client Client) CreateOrUpdateByIDPreparer(ctx context.Context, resourceID string, parameters GenericResource) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceId": resourceID, + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPut(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{resourceId}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// CreateOrUpdateByIDSender sends the CreateOrUpdateByID request. The method will close the +// http.Response Body if it receives an error. +func (client Client) CreateOrUpdateByIDSender(req *http.Request) (future CreateOrUpdateByIDFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// CreateOrUpdateByIDResponder handles the response to the CreateOrUpdateByID request. The method always +// closes the http.Response Body. +func (client Client) CreateOrUpdateByIDResponder(resp *http.Response) (result GenericResource, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusCreated, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// Delete deletes a resource. +// Parameters: +// resourceGroupName - the name of the resource group that contains the resource to delete. The name is case +// insensitive. +// resourceProviderNamespace - the namespace of the resource provider. +// parentResourcePath - the parent resource identity. +// resourceType - the resource type. +// resourceName - the name of the resource to delete. 
+func (client Client) Delete(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result DeleteFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: resourceGroupName, + Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("resources.Client", "Delete", err.Error()) + } + + req, err := client.DeletePreparer(ctx, resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "Delete", nil, "Failure preparing request") + return + } + + result, err = client.DeleteSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "Delete", result.Response(), "Failure sending request") + return + } + + return +} + +// DeletePreparer prepares the Delete request. +func (client Client) DeletePreparer(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "parentResourcePath": parentResourcePath, + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "resourceName": autorest.Encode("path", resourceName), + "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), + "resourceType": resourceType, + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteSender sends the Delete request. The method will close the +// http.Response Body if it receives an error. +func (client Client) DeleteSender(req *http.Request) (future DeleteFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteResponder handles the response to the Delete request. The method always +// closes the http.Response Body. +func (client Client) DeleteResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// DeleteByID deletes a resource by ID. 
+// Parameters: +// resourceID - the fully qualified ID of the resource, including the resource name and resource type. Use the +// format, +// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} +func (client Client) DeleteByID(ctx context.Context, resourceID string) (result DeleteByIDFuture, err error) { + req, err := client.DeleteByIDPreparer(ctx, resourceID) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "DeleteByID", nil, "Failure preparing request") + return + } + + result, err = client.DeleteByIDSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "DeleteByID", result.Response(), "Failure sending request") + return + } + + return +} + +// DeleteByIDPreparer prepares the DeleteByID request. +func (client Client) DeleteByIDPreparer(ctx context.Context, resourceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceId": resourceID, + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsDelete(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{resourceId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// DeleteByIDSender sends the DeleteByID request. The method will close the +// http.Response Body if it receives an error. +func (client Client) DeleteByIDSender(req *http.Request) (future DeleteByIDFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// DeleteByIDResponder handles the response to the DeleteByID request. The method always +// closes the http.Response Body. +func (client Client) DeleteByIDResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Get gets a resource. +// Parameters: +// resourceGroupName - the name of the resource group containing the resource to get. The name is case +// insensitive. +// resourceProviderNamespace - the namespace of the resource provider. +// parentResourcePath - the parent resource identity. +// resourceType - the resource type of the resource. +// resourceName - the name of the resource to get. 
+func (client Client) Get(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (result GenericResource, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: resourceGroupName, + Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("resources.Client", "Get", err.Error()) + } + + req, err := client.GetPreparer(ctx, resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "Get", nil, "Failure preparing request") + return + } + + resp, err := client.GetSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "resources.Client", "Get", resp, "Failure sending request") + return + } + + result, err = client.GetResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "Get", resp, "Failure responding to request") + } + + return +} + +// GetPreparer prepares the Get request. +func (client Client) GetPreparer(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "parentResourcePath": parentResourcePath, + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "resourceName": autorest.Encode("path", resourceName), + "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), + "resourceType": resourceType, + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetSender sends the Get request. The method will close the +// http.Response Body if it receives an error. +func (client Client) GetSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// GetResponder handles the response to the Get request. The method always +// closes the http.Response Body. +func (client Client) GetResponder(resp *http.Response) (result GenericResource, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// GetByID gets a resource by ID. +// Parameters: +// resourceID - the fully qualified ID of the resource, including the resource name and resource type. 
Use the +// format, +// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} +func (client Client) GetByID(ctx context.Context, resourceID string) (result GenericResource, err error) { + req, err := client.GetByIDPreparer(ctx, resourceID) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "GetByID", nil, "Failure preparing request") + return + } + + resp, err := client.GetByIDSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "resources.Client", "GetByID", resp, "Failure sending request") + return + } + + result, err = client.GetByIDResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "GetByID", resp, "Failure responding to request") + } + + return +} + +// GetByIDPreparer prepares the GetByID request. +func (client Client) GetByIDPreparer(ctx context.Context, resourceID string) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceId": resourceID, + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{resourceId}", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// GetByIDSender sends the GetByID request. The method will close the +// http.Response Body if it receives an error. +func (client Client) GetByIDSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) +} + +// GetByIDResponder handles the response to the GetByID request. The method always +// closes the http.Response Body. +func (client Client) GetByIDResponder(resp *http.Response) (result GenericResource, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// List get all the resources in a subscription. +// Parameters: +// filter - the filter to apply on the operation. +// expand - the $expand query parameter. +// top - the number of results to return. If null is passed, returns all resource groups. +func (client Client) List(ctx context.Context, filter string, expand string, top *int32) (result ListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx, filter, expand, top) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.lr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "resources.Client", "List", resp, "Failure sending request") + return + } + + result.lr, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client Client) ListPreparer(ctx context.Context, filter string, expand string, top *int32) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + if top != nil { + queryParameters["$top"] = autorest.Encode("query", *top) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resources", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client Client) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client Client) ListResponder(resp *http.Response) (result ListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listNextResults retrieves the next set of results, if any. +func (client Client) listNextResults(lastResults ListResult) (result ListResult, err error) { + req, err := lastResults.listResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "resources.Client", "listNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "resources.Client", "listNextResults", resp, "Failure sending next results request") + } + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "listNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client Client) ListComplete(ctx context.Context, filter string, expand string, top *int32) (result ListResultIterator, err error) { + result.page, err = client.List(ctx, filter, expand, top) + return +} + +// ListByResourceGroup get all the resources for a resource group. +// Parameters: +// resourceGroupName - the resource group with the resources to get. +// filter - the filter to apply on the operation. +// expand - the $expand query parameter +// top - the number of results to return. If null is passed, returns all resources. 
+func (client Client) ListByResourceGroup(ctx context.Context, resourceGroupName string, filter string, expand string, top *int32) (result ListResultPage, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: resourceGroupName, + Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("resources.Client", "ListByResourceGroup", err.Error()) + } + + result.fn = client.listByResourceGroupNextResults + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName, filter, expand, top) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "ListByResourceGroup", nil, "Failure preparing request") + return + } + + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.lr.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "resources.Client", "ListByResourceGroup", resp, "Failure sending request") + return + } + + result.lr, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "ListByResourceGroup", resp, "Failure responding to request") + } + + return +} + +// ListByResourceGroupPreparer prepares the ListByResourceGroup request. +func (client Client) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string, filter string, expand string, top *int32) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + if len(filter) > 0 { + queryParameters["$filter"] = autorest.Encode("query", filter) + } + if len(expand) > 0 { + queryParameters["$expand"] = autorest.Encode("query", expand) + } + if top != nil { + queryParameters["$top"] = autorest.Encode("query", *top) + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/resources", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the +// http.Response Body if it receives an error. +func (client Client) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always +// closes the http.Response Body. +func (client Client) ListByResourceGroupResponder(resp *http.Response) (result ListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// listByResourceGroupNextResults retrieves the next set of results, if any. 
+func (client Client) listByResourceGroupNextResults(lastResults ListResult) (result ListResult, err error) { + req, err := lastResults.listResultPreparer() + if err != nil { + return result, autorest.NewErrorWithError(err, "resources.Client", "listByResourceGroupNextResults", nil, "Failure preparing next results request") + } + if req == nil { + return + } + resp, err := client.ListByResourceGroupSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + return result, autorest.NewErrorWithError(err, "resources.Client", "listByResourceGroupNextResults", resp, "Failure sending next results request") + } + result, err = client.ListByResourceGroupResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "listByResourceGroupNextResults", resp, "Failure responding to next results request") + } + return +} + +// ListByResourceGroupComplete enumerates all values, automatically crossing page boundaries as required. +func (client Client) ListByResourceGroupComplete(ctx context.Context, resourceGroupName string, filter string, expand string, top *int32) (result ListResultIterator, err error) { + result.page, err = client.ListByResourceGroup(ctx, resourceGroupName, filter, expand, top) + return +} + +// MoveResources the resources to move must be in the same source resource group. The target resource group may be in a +// different subscription. When moving resources, both the source group and the target group are locked for the +// duration of the operation. Write and delete operations are blocked on the groups until the move completes. +// Parameters: +// sourceResourceGroupName - the name of the resource group containing the resources to move. +// parameters - parameters for moving resources. +func (client Client) MoveResources(ctx context.Context, sourceResourceGroupName string, parameters MoveInfo) (result MoveResourcesFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: sourceResourceGroupName, + Constraints: []validation.Constraint{{Target: "sourceResourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "sourceResourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "sourceResourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("resources.Client", "MoveResources", err.Error()) + } + + req, err := client.MoveResourcesPreparer(ctx, sourceResourceGroupName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "MoveResources", nil, "Failure preparing request") + return + } + + result, err = client.MoveResourcesSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "MoveResources", result.Response(), "Failure sending request") + return + } + + return +} + +// MoveResourcesPreparer prepares the MoveResources request. 
+func (client Client) MoveResourcesPreparer(ctx context.Context, sourceResourceGroupName string, parameters MoveInfo) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "sourceResourceGroupName": autorest.Encode("path", sourceResourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{sourceResourceGroupName}/moveResources", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// MoveResourcesSender sends the MoveResources request. The method will close the +// http.Response Body if it receives an error. +func (client Client) MoveResourcesSender(req *http.Request) (future MoveResourcesFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// MoveResourcesResponder handles the response to the MoveResources request. The method always +// closes the http.Response Body. +func (client Client) MoveResourcesResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent), + autorest.ByClosing()) + result.Response = resp + return +} + +// Update updates a resource. +// Parameters: +// resourceGroupName - the name of the resource group for the resource. The name is case insensitive. +// resourceProviderNamespace - the namespace of the resource provider. +// parentResourcePath - the parent resource identity. +// resourceType - the resource type of the resource to update. +// resourceName - the name of the resource to update. +// parameters - parameters for updating the resource. 
+func (client Client) Update(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource) (result UpdateFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: resourceGroupName, + Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("resources.Client", "Update", err.Error()) + } + + req, err := client.UpdatePreparer(ctx, resourceGroupName, resourceProviderNamespace, parentResourcePath, resourceType, resourceName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "Update", nil, "Failure preparing request") + return + } + + result, err = client.UpdateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "Update", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdatePreparer prepares the Update request. +func (client Client) UpdatePreparer(ctx context.Context, resourceGroupName string, resourceProviderNamespace string, parentResourcePath string, resourceType string, resourceName string, parameters GenericResource) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "parentResourcePath": parentResourcePath, + "resourceGroupName": autorest.Encode("path", resourceGroupName), + "resourceName": autorest.Encode("path", resourceName), + "resourceProviderNamespace": autorest.Encode("path", resourceProviderNamespace), + "resourceType": resourceType, + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateSender sends the Update request. The method will close the +// http.Response Body if it receives an error. +func (client Client) UpdateSender(req *http.Request) (future UpdateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateResponder handles the response to the Update request. The method always +// closes the http.Response Body. 
+func (client Client) UpdateResponder(resp *http.Response) (result GenericResource, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// UpdateByID updates a resource by ID. +// Parameters: +// resourceID - the fully qualified ID of the resource, including the resource name and resource type. Use the +// format, +// /subscriptions/{guid}/resourceGroups/{resource-group-name}/{resource-provider-namespace}/{resource-type}/{resource-name} +// parameters - update resource parameters. +func (client Client) UpdateByID(ctx context.Context, resourceID string, parameters GenericResource) (result UpdateByIDFuture, err error) { + req, err := client.UpdateByIDPreparer(ctx, resourceID, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "UpdateByID", nil, "Failure preparing request") + return + } + + result, err = client.UpdateByIDSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "UpdateByID", result.Response(), "Failure sending request") + return + } + + return +} + +// UpdateByIDPreparer prepares the UpdateByID request. +func (client Client) UpdateByIDPreparer(ctx context.Context, resourceID string, parameters GenericResource) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "resourceId": resourceID, + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPatch(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/{resourceId}", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// UpdateByIDSender sends the UpdateByID request. The method will close the +// http.Response Body if it receives an error. +func (client Client) UpdateByIDSender(req *http.Request) (future UpdateByIDFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// UpdateByIDResponder handles the response to the UpdateByID request. The method always +// closes the http.Response Body. +func (client Client) UpdateByIDResponder(resp *http.Response) (result GenericResource, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} + +// ValidateMoveResources this operation checks whether the specified resources can be moved to the target. The +// resources to move must be in the same source resource group. The target resource group may be in a different +// subscription. If validation succeeds, it returns HTTP response code 204 (no content). 
If validation fails, it +// returns HTTP response code 409 (Conflict) with an error message. Retrieve the URL in the Location header value to +// check the result of the long-running operation. +// Parameters: +// sourceResourceGroupName - the name of the resource group containing the resources to validate for move. +// parameters - parameters for moving resources. +func (client Client) ValidateMoveResources(ctx context.Context, sourceResourceGroupName string, parameters MoveInfo) (result ValidateMoveResourcesFuture, err error) { + if err := validation.Validate([]validation.Validation{ + {TargetValue: sourceResourceGroupName, + Constraints: []validation.Constraint{{Target: "sourceResourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, + {Target: "sourceResourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, + {Target: "sourceResourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { + return result, validation.NewError("resources.Client", "ValidateMoveResources", err.Error()) + } + + req, err := client.ValidateMoveResourcesPreparer(ctx, sourceResourceGroupName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "ValidateMoveResources", nil, "Failure preparing request") + return + } + + result, err = client.ValidateMoveResourcesSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "resources.Client", "ValidateMoveResources", result.Response(), "Failure sending request") + return + } + + return +} + +// ValidateMoveResourcesPreparer prepares the ValidateMoveResources request. +func (client Client) ValidateMoveResourcesPreparer(ctx context.Context, sourceResourceGroupName string, parameters MoveInfo) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "sourceResourceGroupName": autorest.Encode("path", sourceResourceGroupName), + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2018-02-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsContentType("application/json; charset=utf-8"), + autorest.AsPost(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{sourceResourceGroupName}/validateMoveResources", pathParameters), + autorest.WithJSON(parameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ValidateMoveResourcesSender sends the ValidateMoveResources request. The method will close the +// http.Response Body if it receives an error. +func (client Client) ValidateMoveResourcesSender(req *http.Request) (future ValidateMoveResourcesFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent, http.StatusConflict)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return +} + +// ValidateMoveResourcesResponder handles the response to the ValidateMoveResources request. The method always +// closes the http.Response Body. 
+func (client Client) ValidateMoveResourcesResponder(resp *http.Response) (result autorest.Response, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted, http.StatusNoContent, http.StatusConflict), + autorest.ByClosing()) + result.Response = resp + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/tags.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/tags.go similarity index 74% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/tags.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/tags.go index 445fccf62..539938125 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/tags.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/tags.go @@ -14,20 +14,19 @@ package resources // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" ) -// TagsClient is the provides operations for working with resources and -// resource groups. +// TagsClient is the provides operations for working with resources and resource groups. type TagsClient struct { - ManagementClient + BaseClient } // NewTagsClient creates an instance of the TagsClient client. @@ -40,13 +39,12 @@ func NewTagsClientWithBaseURI(baseURI string, subscriptionID string) TagsClient return TagsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CreateOrUpdate the tag name can have a maximum of 512 characters and is case -// insensitive. Tag names created by Azure have prefixes of microsoft, azure, -// or windows. You cannot create tags with one of these prefixes. -// -// tagName is the name of the tag to create. -func (client TagsClient) CreateOrUpdate(tagName string) (result TagDetails, err error) { - req, err := client.CreateOrUpdatePreparer(tagName) +// CreateOrUpdate the tag name can have a maximum of 512 characters and is case insensitive. Tag names created by Azure +// have prefixes of microsoft, azure, or windows. You cannot create tags with one of these prefixes. +// Parameters: +// tagName - the name of the tag to create. +func (client TagsClient) CreateOrUpdate(ctx context.Context, tagName string) (result TagDetails, err error) { + req, err := client.CreateOrUpdatePreparer(ctx, tagName) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "CreateOrUpdate", nil, "Failure preparing request") return @@ -68,13 +66,13 @@ func (client TagsClient) CreateOrUpdate(tagName string) (result TagDetails, err } // CreateOrUpdatePreparer prepares the CreateOrUpdate request. 
-func (client TagsClient) CreateOrUpdatePreparer(tagName string) (*http.Request, error) { +func (client TagsClient) CreateOrUpdatePreparer(ctx context.Context, tagName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), "tagName": autorest.Encode("path", tagName), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -84,13 +82,14 @@ func (client TagsClient) CreateOrUpdatePreparer(tagName string) (*http.Request, autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/tagNames/{tagName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateSender sends the CreateOrUpdate request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) CreateOrUpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CreateOrUpdateResponder handles the response to the CreateOrUpdate request. The method always @@ -106,12 +105,12 @@ func (client TagsClient) CreateOrUpdateResponder(resp *http.Response) (result Ta return } -// CreateOrUpdateValue creates a tag value. The name of the tag must already -// exist. -// -// tagName is the name of the tag. tagValue is the value of the tag to create. -func (client TagsClient) CreateOrUpdateValue(tagName string, tagValue string) (result TagValue, err error) { - req, err := client.CreateOrUpdateValuePreparer(tagName, tagValue) +// CreateOrUpdateValue creates a tag value. The name of the tag must already exist. +// Parameters: +// tagName - the name of the tag. +// tagValue - the value of the tag to create. +func (client TagsClient) CreateOrUpdateValue(ctx context.Context, tagName string, tagValue string) (result TagValue, err error) { + req, err := client.CreateOrUpdateValuePreparer(ctx, tagName, tagValue) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "CreateOrUpdateValue", nil, "Failure preparing request") return @@ -133,14 +132,14 @@ func (client TagsClient) CreateOrUpdateValue(tagName string, tagValue string) (r } // CreateOrUpdateValuePreparer prepares the CreateOrUpdateValue request. 
-func (client TagsClient) CreateOrUpdateValuePreparer(tagName string, tagValue string) (*http.Request, error) { +func (client TagsClient) CreateOrUpdateValuePreparer(ctx context.Context, tagName string, tagValue string) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), "tagName": autorest.Encode("path", tagName), "tagValue": autorest.Encode("path", tagValue), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -150,13 +149,14 @@ func (client TagsClient) CreateOrUpdateValuePreparer(tagName string, tagValue st autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/tagNames/{tagName}/tagValues/{tagValue}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateOrUpdateValueSender sends the CreateOrUpdateValue request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) CreateOrUpdateValueSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CreateOrUpdateValueResponder handles the response to the CreateOrUpdateValue request. The method always @@ -172,12 +172,11 @@ func (client TagsClient) CreateOrUpdateValueResponder(resp *http.Response) (resu return } -// Delete you must remove all values from a resource tag before you can delete -// it. -// -// tagName is the name of the tag. -func (client TagsClient) Delete(tagName string) (result autorest.Response, err error) { - req, err := client.DeletePreparer(tagName) +// Delete you must remove all values from a resource tag before you can delete it. +// Parameters: +// tagName - the name of the tag. +func (client TagsClient) Delete(ctx context.Context, tagName string) (result autorest.Response, err error) { + req, err := client.DeletePreparer(ctx, tagName) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "Delete", nil, "Failure preparing request") return @@ -199,13 +198,13 @@ func (client TagsClient) Delete(tagName string) (result autorest.Response, err e } // DeletePreparer prepares the Delete request. -func (client TagsClient) DeletePreparer(tagName string) (*http.Request, error) { +func (client TagsClient) DeletePreparer(ctx context.Context, tagName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), "tagName": autorest.Encode("path", tagName), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -215,13 +214,14 @@ func (client TagsClient) DeletePreparer(tagName string) (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/tagNames/{tagName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. 
func (client TagsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // DeleteResponder handles the response to the Delete request. The method always @@ -237,10 +237,11 @@ func (client TagsClient) DeleteResponder(resp *http.Response) (result autorest.R } // DeleteValue deletes a tag value. -// -// tagName is the name of the tag. tagValue is the value of the tag to delete. -func (client TagsClient) DeleteValue(tagName string, tagValue string) (result autorest.Response, err error) { - req, err := client.DeleteValuePreparer(tagName, tagValue) +// Parameters: +// tagName - the name of the tag. +// tagValue - the value of the tag to delete. +func (client TagsClient) DeleteValue(ctx context.Context, tagName string, tagValue string) (result autorest.Response, err error) { + req, err := client.DeleteValuePreparer(ctx, tagName, tagValue) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "DeleteValue", nil, "Failure preparing request") return @@ -262,14 +263,14 @@ func (client TagsClient) DeleteValue(tagName string, tagValue string) (result au } // DeleteValuePreparer prepares the DeleteValue request. -func (client TagsClient) DeleteValuePreparer(tagName string, tagValue string) (*http.Request, error) { +func (client TagsClient) DeleteValuePreparer(ctx context.Context, tagName string, tagValue string) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), "tagName": autorest.Encode("path", tagName), "tagValue": autorest.Encode("path", tagValue), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -279,13 +280,14 @@ func (client TagsClient) DeleteValuePreparer(tagName string, tagValue string) (* autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/tagNames/{tagName}/tagValues/{tagValue}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteValueSender sends the DeleteValue request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) DeleteValueSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // DeleteValueResponder handles the response to the DeleteValue request. The method always @@ -300,10 +302,10 @@ func (client TagsClient) DeleteValueResponder(resp *http.Response) (result autor return } -// List gets the names and values of all resource tags that are defined in a -// subscription. -func (client TagsClient) List() (result TagsListResult, err error) { - req, err := client.ListPreparer() +// List gets the names and values of all resource tags that are defined in a subscription. 
+func (client TagsClient) List(ctx context.Context) (result TagsListResultPage, err error) { + result.fn = client.listNextResults + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "List", nil, "Failure preparing request") return @@ -311,12 +313,12 @@ func (client TagsClient) List() (result TagsListResult, err error) { resp, err := client.ListSender(req) if err != nil { - result.Response = autorest.Response{Response: resp} + result.tlr.Response = autorest.Response{Response: resp} err = autorest.NewErrorWithError(err, "resources.TagsClient", "List", resp, "Failure sending request") return } - result, err = client.ListResponder(resp) + result.tlr, err = client.ListResponder(resp) if err != nil { err = autorest.NewErrorWithError(err, "resources.TagsClient", "List", resp, "Failure responding to request") } @@ -325,12 +327,12 @@ func (client TagsClient) List() (result TagsListResult, err error) { } // ListPreparer prepares the List request. -func (client TagsClient) ListPreparer() (*http.Request, error) { +func (client TagsClient) ListPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-09-01" + const APIVersion = "2018-02-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -340,13 +342,14 @@ func (client TagsClient) ListPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/tagNames", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client TagsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -362,26 +365,29 @@ func (client TagsClient) ListResponder(resp *http.Response) (result TagsListResu return } -// ListNextResults retrieves the next set of results, if any. -func (client TagsClient) ListNextResults(lastResults TagsListResult) (result TagsListResult, err error) { - req, err := lastResults.TagsListResultPreparer() +// listNextResults retrieves the next set of results, if any. 
+func (client TagsClient) listNextResults(lastResults TagsListResult) (result TagsListResult, err error) { + req, err := lastResults.tagsListResultPreparer() if err != nil { - return result, autorest.NewErrorWithError(err, "resources.TagsClient", "List", nil, "Failure preparing next results request") + return result, autorest.NewErrorWithError(err, "resources.TagsClient", "listNextResults", nil, "Failure preparing next results request") } if req == nil { return } - resp, err := client.ListSender(req) if err != nil { result.Response = autorest.Response{Response: resp} - return result, autorest.NewErrorWithError(err, "resources.TagsClient", "List", resp, "Failure sending next results request") + return result, autorest.NewErrorWithError(err, "resources.TagsClient", "listNextResults", resp, "Failure sending next results request") } - result, err = client.ListResponder(resp) if err != nil { - err = autorest.NewErrorWithError(err, "resources.TagsClient", "List", resp, "Failure responding to next results request") + err = autorest.NewErrorWithError(err, "resources.TagsClient", "listNextResults", resp, "Failure responding to next results request") } - + return +} + +// ListComplete enumerates all values, automatically crossing page boundaries as required. +func (client TagsClient) ListComplete(ctx context.Context) (result TagsListResultIterator, err error) { + result.page, err = client.List(ctx) return } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/version.go similarity index 80% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/version.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/version.go index 998788856..ad9049822 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/resources/resources/version.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources/version.go @@ -1,5 +1,7 @@ package resources +import "github.com/Azure/azure-sdk-for-go/version" + // Copyright (c) Microsoft and contributors. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -14,16 +16,15 @@ package resources // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. // UserAgent returns the UserAgent string to use when sending http.Requests. func UserAgent() string { - return "Azure-SDK-For-Go/v10.0.2-beta arm-resources/2016-09-01" + return "Azure-SDK-For-Go/" + version.Number + " resources/2018-02-01" } // Version returns the semantic version (see http://semver.org) of the client. 
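As the hunks above show, `List` now returns a `TagsListResultPage`, paging is driven by the unexported `listNextResults`, and the new `ListComplete` wraps it all in a `TagsListResultIterator`. A sketch of walking every tag in the subscription, assuming the iterator exposes the usual generated `NotDone`/`Next`/`Value` methods and that the element type carries a `TagName` field:

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources"
)

// listTagNames collects every tag name in the subscription via the iterator
// returned by ListComplete, which crosses page boundaries automatically.
func listTagNames(ctx context.Context, client resources.TagsClient) ([]string, error) {
	it, err := client.ListComplete(ctx)
	if err != nil {
		return nil, err
	}

	var names []string
	for it.NotDone() {
		if d := it.Value(); d.TagName != nil {
			names = append(names, *d.TagName)
		}
		// Next advances within the page and fetches the following page
		// when the current one is exhausted.
		if err := it.Next(); err != nil {
			return nil, err
		}
	}
	return names, nil
}
```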
func Version() string { - return "v10.0.2-beta" + return version.Number } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/accounts.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/accounts.go similarity index 70% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/storage/accounts.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/accounts.go index f0606ac13..56dc300ec 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/accounts.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/accounts.go @@ -14,11 +14,11 @@ package storage // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "github.com/Azure/go-autorest/autorest/validation" @@ -27,7 +27,7 @@ import ( // AccountsClient is the the Azure Storage Management API. type AccountsClient struct { - ManagementClient + BaseClient } // NewAccountsClient creates an instance of the AccountsClient client. @@ -35,27 +35,24 @@ func NewAccountsClient(subscriptionID string) AccountsClient { return NewAccountsClientWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewAccountsClientWithBaseURI creates an instance of the AccountsClient -// client. +// NewAccountsClientWithBaseURI creates an instance of the AccountsClient client. func NewAccountsClientWithBaseURI(baseURI string, subscriptionID string) AccountsClient { return AccountsClient{NewWithBaseURI(baseURI, subscriptionID)} } -// CheckNameAvailability checks that the storage account name is valid and is -// not already in use. -// -// accountName is the name of the storage account within the specified resource -// group. Storage account names must be between 3 and 24 characters in length -// and use numbers and lower-case letters only. -func (client AccountsClient) CheckNameAvailability(accountName AccountCheckNameAvailabilityParameters) (result CheckNameAvailabilityResult, err error) { +// CheckNameAvailability checks that the storage account name is valid and is not already in use. +// Parameters: +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. 
+func (client AccountsClient) CheckNameAvailability(ctx context.Context, accountName AccountCheckNameAvailabilityParameters) (result CheckNameAvailabilityResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName.Name", Name: validation.Null, Rule: true, Chain: nil}, {Target: "accountName.Type", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "CheckNameAvailability") + return result, validation.NewError("storage.AccountsClient", "CheckNameAvailability", err.Error()) } - req, err := client.CheckNameAvailabilityPreparer(accountName) + req, err := client.CheckNameAvailabilityPreparer(ctx, accountName) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "CheckNameAvailability", nil, "Failure preparing request") return @@ -77,30 +74,31 @@ func (client AccountsClient) CheckNameAvailability(accountName AccountCheckNameA } // CheckNameAvailabilityPreparer prepares the CheckNameAvailability request. -func (client AccountsClient) CheckNameAvailabilityPreparer(accountName AccountCheckNameAvailabilityParameters) (*http.Request, error) { +func (client AccountsClient) CheckNameAvailabilityPreparer(ctx context.Context, accountName AccountCheckNameAvailabilityParameters) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPost(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Storage/checkNameAvailability", pathParameters), autorest.WithJSON(accountName), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CheckNameAvailabilitySender sends the CheckNameAvailability request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) CheckNameAvailabilitySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // CheckNameAvailabilityResponder handles the response to the CheckNameAvailability request. The method always @@ -116,24 +114,17 @@ func (client AccountsClient) CheckNameAvailabilityResponder(resp *http.Response) return } -// Create asynchronously creates a new storage account with the specified -// parameters. If an account is already created and a subsequent create request -// is issued with different properties, the account properties will be updated. -// If an account is already created and a subsequent create or update request -// is issued with the exact same set of properties, the request will succeed. -// This method may poll for completion. Polling can be canceled by passing the -// cancel channel argument. The channel will be used to cancel polling and any -// outstanding HTTP requests. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. 
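The CheckNameAvailability hunk above also swaps `validation.NewErrorWithValidationError` for `validation.NewError` and `autorest.AsJSON` for an explicit `application/json; charset=utf-8` content type. A hedged sketch of checking whether a storage account name is free before creating it; both `Name` and `Type` are required by the client-side validation shown above, and the `NameAvailable` field name is assumed from the generated result model:

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
	"github.com/Azure/go-autorest/autorest/to"
)

// nameIsFree reports whether the given storage account name can still be taken.
func nameIsFree(ctx context.Context, client storage.AccountsClient, name string) (bool, error) {
	result, err := client.CheckNameAvailability(ctx, storage.AccountCheckNameAvailabilityParameters{
		Name: to.StringPtr(name),
		Type: to.StringPtr("Microsoft.Storage/storageAccounts"),
	})
	if err != nil {
		return false, err
	}
	if result.NameAvailable == nil {
		return false, fmt.Errorf("service returned no availability flag for %q", name)
	}
	return *result.NameAvailable, nil
}
```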
accountName is the name of the -// storage account within the specified resource group. Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. parameters is the parameters to provide for the created -// account. -func (client AccountsClient) Create(resourceGroupName string, accountName string, parameters AccountCreateParameters, cancel <-chan struct{}) (<-chan Account, <-chan error) { - resultChan := make(chan Account, 1) - errChan := make(chan error, 1) +// Create asynchronously creates a new storage account with the specified parameters. If an account is already created +// and a subsequent create request is issued with different properties, the account properties will be updated. If an +// account is already created and a subsequent create or update request is issued with the exact same set of +// properties, the request will succeed. +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +// parameters - the parameters to provide for the created account. +func (client AccountsClient) Create(ctx context.Context, resourceGroupName string, accountName string, parameters AccountCreateParameters) (result AccountsCreateFuture, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -145,77 +136,68 @@ func (client AccountsClient) Create(resourceGroupName string, accountName string {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.Sku", Name: validation.Null, Rule: true, Chain: nil}, {Target: "parameters.Location", Name: validation.Null, Rule: true, Chain: nil}, + {Target: "parameters.Identity", Name: validation.Null, Rule: false, + Chain: []validation.Constraint{{Target: "parameters.Identity.Type", Name: validation.Null, Rule: true, Chain: nil}}}, {Target: "parameters.AccountPropertiesCreateParameters", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.AccountPropertiesCreateParameters.CustomDomain", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.AccountPropertiesCreateParameters.CustomDomain.Name", Name: validation.Null, Rule: true, Chain: nil}}}, - {Target: "parameters.AccountPropertiesCreateParameters.Encryption", Name: validation.Null, Rule: false, - Chain: []validation.Constraint{{Target: "parameters.AccountPropertiesCreateParameters.Encryption.KeySource", Name: validation.Null, Rule: true, Chain: nil}}}, }}}}}); err != nil { - errChan <- validation.NewErrorWithValidationError(err, "storage.AccountsClient", "Create") - close(errChan) - close(resultChan) - return resultChan, errChan + return result, validation.NewError("storage.AccountsClient", "Create", err.Error()) } - go func() { - var err error - var result Account - defer func() { - resultChan <- result - errChan <- err - close(resultChan) - close(errChan) - }() - req, err := client.CreatePreparer(resourceGroupName, accountName, parameters, cancel) - if err != nil { - err = autorest.NewErrorWithError(err, "storage.AccountsClient", "Create", nil, "Failure preparing request") - return - } + req, err := 
client.CreatePreparer(ctx, resourceGroupName, accountName, parameters) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.AccountsClient", "Create", nil, "Failure preparing request") + return + } - resp, err := client.CreateSender(req) - if err != nil { - result.Response = autorest.Response{Response: resp} - err = autorest.NewErrorWithError(err, "storage.AccountsClient", "Create", resp, "Failure sending request") - return - } + result, err = client.CreateSender(req) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.AccountsClient", "Create", result.Response(), "Failure sending request") + return + } - result, err = client.CreateResponder(resp) - if err != nil { - err = autorest.NewErrorWithError(err, "storage.AccountsClient", "Create", resp, "Failure responding to request") - } - }() - return resultChan, errChan + return } // CreatePreparer prepares the Create request. -func (client AccountsClient) CreatePreparer(resourceGroupName string, accountName string, parameters AccountCreateParameters, cancel <-chan struct{}) (*http.Request, error) { +func (client AccountsClient) CreatePreparer(ctx context.Context, resourceGroupName string, accountName string, parameters AccountCreateParameters) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPut(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{Cancel: cancel}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // CreateSender sends the Create request. The method will close the // http.Response Body if it receives an error. -func (client AccountsClient) CreateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, - req, - azure.DoPollForAsynchronous(client.PollingDelay)) +func (client AccountsClient) CreateSender(req *http.Request) (future AccountsCreateFuture, err error) { + var resp *http.Response + resp, err = autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) + if err != nil { + return + } + err = autorest.Respond(resp, azure.WithErrorUnlessStatusCode(http.StatusOK, http.StatusAccepted)) + if err != nil { + return + } + future.Future, err = azure.NewFutureFromResponse(resp) + return } // CreateResponder handles the response to the Create request. The method always @@ -232,13 +214,12 @@ func (client AccountsClient) CreateResponder(resp *http.Response) (result Accoun } // Delete deletes a storage account in Microsoft Azure. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. accountName is the name of the -// storage account within the specified resource group. Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. 
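The Create hunks above replace the old result/error channels and cancel channel with an `AccountsCreateFuture` built from the initial response. A sketch of blocking on that future, assuming the `WaitForCompletion` and generated `Result` methods of this SDK generation (later go-autorest releases renamed `WaitForCompletion` to `WaitForCompletionRef`); the resource group, account name, location, and SKU below are placeholders:

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
	"github.com/Azure/go-autorest/autorest/to"
)

// createAccount starts the long-running Create operation, waits for the
// returned future to reach a terminal state, and extracts the final Account.
func createAccount(ctx context.Context, client storage.AccountsClient, rg, name string) (storage.Account, error) {
	future, err := client.Create(ctx, rg, name, storage.AccountCreateParameters{
		Sku:      &storage.Sku{Name: storage.StandardLRS},
		Kind:     storage.StorageV2,
		Location: to.StringPtr("westus2"),
	})
	if err != nil {
		return storage.Account{}, err
	}

	// Cancelling ctx abandons the poll; there is no cancel channel anymore.
	if err := future.WaitForCompletion(ctx, client.Client); err != nil {
		return storage.Account{}, err
	}
	return future.Result(client)
}
```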
-func (client AccountsClient) Delete(resourceGroupName string, accountName string) (result autorest.Response, err error) { +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +func (client AccountsClient) Delete(ctx context.Context, resourceGroupName string, accountName string) (result autorest.Response, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -247,10 +228,10 @@ func (client AccountsClient) Delete(resourceGroupName string, accountName string {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, {Target: "accountName", Name: validation.MinLength, Rule: 3, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "Delete") + return result, validation.NewError("storage.AccountsClient", "Delete", err.Error()) } - req, err := client.DeletePreparer(resourceGroupName, accountName) + req, err := client.DeletePreparer(ctx, resourceGroupName, accountName) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "Delete", nil, "Failure preparing request") return @@ -272,14 +253,14 @@ func (client AccountsClient) Delete(resourceGroupName string, accountName string } // DeletePreparer prepares the Delete request. -func (client AccountsClient) DeletePreparer(resourceGroupName string, accountName string) (*http.Request, error) { +func (client AccountsClient) DeletePreparer(ctx context.Context, resourceGroupName string, accountName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -289,13 +270,14 @@ func (client AccountsClient) DeletePreparer(resourceGroupName string, accountNam autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // DeleteSender sends the Delete request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) DeleteSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // DeleteResponder handles the response to the Delete request. The method always @@ -310,16 +292,14 @@ func (client AccountsClient) DeleteResponder(resp *http.Response) (result autore return } -// GetProperties returns the properties for the specified storage account -// including but not limited to name, SKU name, location, and account status. 
-// The ListKeys operation should be used to retrieve storage keys. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. accountName is the name of the -// storage account within the specified resource group. Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. -func (client AccountsClient) GetProperties(resourceGroupName string, accountName string) (result Account, err error) { +// GetProperties returns the properties for the specified storage account including but not limited to name, SKU name, +// location, and account status. The ListKeys operation should be used to retrieve storage keys. +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +func (client AccountsClient) GetProperties(ctx context.Context, resourceGroupName string, accountName string) (result Account, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -328,10 +308,10 @@ func (client AccountsClient) GetProperties(resourceGroupName string, accountName {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, {Target: "accountName", Name: validation.MinLength, Rule: 3, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "GetProperties") + return result, validation.NewError("storage.AccountsClient", "GetProperties", err.Error()) } - req, err := client.GetPropertiesPreparer(resourceGroupName, accountName) + req, err := client.GetPropertiesPreparer(ctx, resourceGroupName, accountName) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "GetProperties", nil, "Failure preparing request") return @@ -353,14 +333,14 @@ func (client AccountsClient) GetProperties(resourceGroupName string, accountName } // GetPropertiesPreparer prepares the GetProperties request. 
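GetProperties follows the same context-aware pattern. A small sketch under the assumption that `Account` embeds the usual generated `Resource` model with `Name` and `Location` pointer fields:

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
)

// describeAccount fetches a single storage account. As the doc comment above
// notes, access keys are not returned here; use ListKeys for those.
func describeAccount(ctx context.Context, client storage.AccountsClient, rg, name string) error {
	acct, err := client.GetProperties(ctx, rg, name)
	if err != nil {
		return err
	}
	if acct.Name != nil && acct.Location != nil {
		fmt.Printf("account %s lives in %s\n", *acct.Name, *acct.Location)
	}
	return nil
}
```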
-func (client AccountsClient) GetPropertiesPreparer(resourceGroupName string, accountName string) (*http.Request, error) { +func (client AccountsClient) GetPropertiesPreparer(ctx context.Context, resourceGroupName string, accountName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -370,13 +350,14 @@ func (client AccountsClient) GetPropertiesPreparer(resourceGroupName string, acc autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // GetPropertiesSender sends the GetProperties request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) GetPropertiesSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // GetPropertiesResponder handles the response to the GetProperties request. The method always @@ -392,10 +373,10 @@ func (client AccountsClient) GetPropertiesResponder(resp *http.Response) (result return } -// List lists all the storage accounts available under the subscription. Note -// that storage keys are not returned; use the ListKeys operation for this. -func (client AccountsClient) List() (result AccountListResult, err error) { - req, err := client.ListPreparer() +// List lists all the storage accounts available under the subscription. Note that storage keys are not returned; use +// the ListKeys operation for this. +func (client AccountsClient) List(ctx context.Context) (result AccountListResult, err error) { + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "List", nil, "Failure preparing request") return @@ -417,12 +398,12 @@ func (client AccountsClient) List() (result AccountListResult, err error) { } // ListPreparer prepares the List request. -func (client AccountsClient) ListPreparer() (*http.Request, error) { +func (client AccountsClient) ListPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -432,13 +413,14 @@ func (client AccountsClient) ListPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Storage/storageAccounts", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. 
func (client AccountsClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always @@ -455,14 +437,13 @@ func (client AccountsClient) ListResponder(resp *http.Response) (result AccountL } // ListAccountSAS list SAS credentials of a storage account. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. accountName is the name of the -// storage account within the specified resource group. Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. parameters is the parameters to provide to list SAS -// credentials for the storage account. -func (client AccountsClient) ListAccountSAS(resourceGroupName string, accountName string, parameters AccountSasParameters) (result ListAccountSasResponse, err error) { +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +// parameters - the parameters to provide to list SAS credentials for the storage account. +func (client AccountsClient) ListAccountSAS(ctx context.Context, resourceGroupName string, accountName string, parameters AccountSasParameters) (result ListAccountSasResponse, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -473,10 +454,10 @@ func (client AccountsClient) ListAccountSAS(resourceGroupName string, accountNam {Target: "accountName", Name: validation.MinLength, Rule: 3, Chain: nil}}}, {TargetValue: parameters, Constraints: []validation.Constraint{{Target: "parameters.SharedAccessExpiryTime", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "ListAccountSAS") + return result, validation.NewError("storage.AccountsClient", "ListAccountSAS", err.Error()) } - req, err := client.ListAccountSASPreparer(resourceGroupName, accountName, parameters) + req, err := client.ListAccountSASPreparer(ctx, resourceGroupName, accountName, parameters) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "ListAccountSAS", nil, "Failure preparing request") return @@ -498,32 +479,33 @@ func (client AccountsClient) ListAccountSAS(resourceGroupName string, accountNam } // ListAccountSASPreparer prepares the ListAccountSAS request. 
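Unlike the Tags listing, the accounts `List` shown above still returns a flat `AccountListResult` rather than a paged type in this API version. A sketch of iterating it, assuming the generated `Value *[]Account` and `Name *string` fields:

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
)

// accountNames collects the names of every storage account in the subscription.
func accountNames(ctx context.Context, client storage.AccountsClient) ([]string, error) {
	result, err := client.List(ctx)
	if err != nil {
		return nil, err
	}
	var names []string
	if result.Value != nil {
		for _, a := range *result.Value {
			if a.Name != nil {
				names = append(names, *a.Name)
			}
		}
	}
	return names, nil
}
```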
-func (client AccountsClient) ListAccountSASPreparer(resourceGroupName string, accountName string, parameters AccountSasParameters) (*http.Request, error) { +func (client AccountsClient) ListAccountSASPreparer(ctx context.Context, resourceGroupName string, accountName string, parameters AccountSasParameters) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPost(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/ListAccountSas", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListAccountSASSender sends the ListAccountSAS request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) ListAccountSASSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListAccountSASResponder handles the response to the ListAccountSAS request. The method always @@ -539,22 +521,21 @@ func (client AccountsClient) ListAccountSASResponder(resp *http.Response) (resul return } -// ListByResourceGroup lists all the storage accounts available under the given -// resource group. Note that storage keys are not returned; use the ListKeys -// operation for this. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. -func (client AccountsClient) ListByResourceGroup(resourceGroupName string) (result AccountListResult, err error) { +// ListByResourceGroup lists all the storage accounts available under the given resource group. Note that storage keys +// are not returned; use the ListKeys operation for this. +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. 
+func (client AccountsClient) ListByResourceGroup(ctx context.Context, resourceGroupName string) (result AccountListResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, {Target: "resourceGroupName", Name: validation.MinLength, Rule: 1, Chain: nil}, {Target: "resourceGroupName", Name: validation.Pattern, Rule: `^[-\w\._\(\)]+$`, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "ListByResourceGroup") + return result, validation.NewError("storage.AccountsClient", "ListByResourceGroup", err.Error()) } - req, err := client.ListByResourceGroupPreparer(resourceGroupName) + req, err := client.ListByResourceGroupPreparer(ctx, resourceGroupName) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "ListByResourceGroup", nil, "Failure preparing request") return @@ -576,13 +557,13 @@ func (client AccountsClient) ListByResourceGroup(resourceGroupName string) (resu } // ListByResourceGroupPreparer prepares the ListByResourceGroup request. -func (client AccountsClient) ListByResourceGroupPreparer(resourceGroupName string) (*http.Request, error) { +func (client AccountsClient) ListByResourceGroupPreparer(ctx context.Context, resourceGroupName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -592,13 +573,14 @@ func (client AccountsClient) ListByResourceGroupPreparer(resourceGroupName strin autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListByResourceGroupSender sends the ListByResourceGroup request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) ListByResourceGroupSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListByResourceGroupResponder handles the response to the ListByResourceGroup request. The method always @@ -615,13 +597,12 @@ func (client AccountsClient) ListByResourceGroupResponder(resp *http.Response) ( } // ListKeys lists the access keys for the specified storage account. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. accountName is the name of the -// storage account within the specified resource group. Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. -func (client AccountsClient) ListKeys(resourceGroupName string, accountName string) (result AccountListKeysResult, err error) { +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. 
+// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +func (client AccountsClient) ListKeys(ctx context.Context, resourceGroupName string, accountName string) (result AccountListKeysResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -630,10 +611,10 @@ func (client AccountsClient) ListKeys(resourceGroupName string, accountName stri {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, {Target: "accountName", Name: validation.MinLength, Rule: 3, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "ListKeys") + return result, validation.NewError("storage.AccountsClient", "ListKeys", err.Error()) } - req, err := client.ListKeysPreparer(resourceGroupName, accountName) + req, err := client.ListKeysPreparer(ctx, resourceGroupName, accountName) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "ListKeys", nil, "Failure preparing request") return @@ -655,14 +636,14 @@ func (client AccountsClient) ListKeys(resourceGroupName string, accountName stri } // ListKeysPreparer prepares the ListKeys request. -func (client AccountsClient) ListKeysPreparer(resourceGroupName string, accountName string) (*http.Request, error) { +func (client AccountsClient) ListKeysPreparer(ctx context.Context, resourceGroupName string, accountName string) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -672,13 +653,14 @@ func (client AccountsClient) ListKeysPreparer(resourceGroupName string, accountN autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/listKeys", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListKeysSender sends the ListKeys request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) ListKeysSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListKeysResponder handles the response to the ListKeys request. The method always @@ -695,14 +677,13 @@ func (client AccountsClient) ListKeysResponder(resp *http.Response) (result Acco } // ListServiceSAS list service SAS credentials of a specific resource. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. accountName is the name of the -// storage account within the specified resource group. 
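ListKeys is the operation callers use to obtain account access keys (this is how Packer's Azure builder retrieves a key for the temporary storage account). A hedged sketch that pulls the first key value; the `Keys` and `Value` field names are assumptions from the generated `AccountListKeysResult`/`AccountKey` models:

```
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
)

// primaryKey returns the value of the first access key for the account.
func primaryKey(ctx context.Context, client storage.AccountsClient, rg, name string) (string, error) {
	keys, err := client.ListKeys(ctx, rg, name)
	if err != nil {
		return "", err
	}
	if keys.Keys == nil || len(*keys.Keys) == 0 {
		return "", fmt.Errorf("storage account %q returned no keys", name)
	}
	return *(*keys.Keys)[0].Value, nil
}
```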
Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. parameters is the parameters to provide to list service SAS -// credentials. -func (client AccountsClient) ListServiceSAS(resourceGroupName string, accountName string, parameters ServiceSasParameters) (result ListServiceSasResponse, err error) { +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +// parameters - the parameters to provide to list service SAS credentials. +func (client AccountsClient) ListServiceSAS(ctx context.Context, resourceGroupName string, accountName string, parameters ServiceSasParameters) (result ListServiceSasResponse, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -715,10 +696,10 @@ func (client AccountsClient) ListServiceSAS(resourceGroupName string, accountNam Constraints: []validation.Constraint{{Target: "parameters.CanonicalizedResource", Name: validation.Null, Rule: true, Chain: nil}, {Target: "parameters.Identifier", Name: validation.Null, Rule: false, Chain: []validation.Constraint{{Target: "parameters.Identifier", Name: validation.MaxLength, Rule: 64, Chain: nil}}}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "ListServiceSAS") + return result, validation.NewError("storage.AccountsClient", "ListServiceSAS", err.Error()) } - req, err := client.ListServiceSASPreparer(resourceGroupName, accountName, parameters) + req, err := client.ListServiceSASPreparer(ctx, resourceGroupName, accountName, parameters) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "ListServiceSAS", nil, "Failure preparing request") return @@ -740,32 +721,33 @@ func (client AccountsClient) ListServiceSAS(resourceGroupName string, accountNam } // ListServiceSASPreparer prepares the ListServiceSAS request. 
-func (client AccountsClient) ListServiceSASPreparer(resourceGroupName string, accountName string, parameters ServiceSasParameters) (*http.Request, error) { +func (client AccountsClient) ListServiceSASPreparer(ctx context.Context, resourceGroupName string, accountName string, parameters ServiceSasParameters) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPost(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/ListServiceSas", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListServiceSASSender sends the ListServiceSAS request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) ListServiceSASSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListServiceSASResponder handles the response to the ListServiceSAS request. The method always @@ -781,16 +763,14 @@ func (client AccountsClient) ListServiceSASResponder(resp *http.Response) (resul return } -// RegenerateKey regenerates one of the access keys for the specified storage -// account. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. accountName is the name of the -// storage account within the specified resource group. Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. regenerateKey is specifies name of the key which should be -// regenerated -- key1 or key2. -func (client AccountsClient) RegenerateKey(resourceGroupName string, accountName string, regenerateKey AccountRegenerateKeyParameters) (result AccountListKeysResult, err error) { +// RegenerateKey regenerates one of the access keys for the specified storage account. +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +// regenerateKey - specifies name of the key which should be regenerated -- key1 or key2. 
+func (client AccountsClient) RegenerateKey(ctx context.Context, resourceGroupName string, accountName string, regenerateKey AccountRegenerateKeyParameters) (result AccountListKeysResult, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -801,10 +781,10 @@ func (client AccountsClient) RegenerateKey(resourceGroupName string, accountName {Target: "accountName", Name: validation.MinLength, Rule: 3, Chain: nil}}}, {TargetValue: regenerateKey, Constraints: []validation.Constraint{{Target: "regenerateKey.KeyName", Name: validation.Null, Rule: true, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "RegenerateKey") + return result, validation.NewError("storage.AccountsClient", "RegenerateKey", err.Error()) } - req, err := client.RegenerateKeyPreparer(resourceGroupName, accountName, regenerateKey) + req, err := client.RegenerateKeyPreparer(ctx, resourceGroupName, accountName, regenerateKey) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "RegenerateKey", nil, "Failure preparing request") return @@ -826,32 +806,33 @@ func (client AccountsClient) RegenerateKey(resourceGroupName string, accountName } // RegenerateKeyPreparer prepares the RegenerateKey request. -func (client AccountsClient) RegenerateKeyPreparer(resourceGroupName string, accountName string, regenerateKey AccountRegenerateKeyParameters) (*http.Request, error) { +func (client AccountsClient) RegenerateKeyPreparer(ctx context.Context, resourceGroupName string, accountName string, regenerateKey AccountRegenerateKeyParameters) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPost(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/regenerateKey", pathParameters), autorest.WithJSON(regenerateKey), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // RegenerateKeySender sends the RegenerateKey request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) RegenerateKeySender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // RegenerateKeyResponder handles the response to the RegenerateKey request. The method always @@ -867,24 +848,19 @@ func (client AccountsClient) RegenerateKeyResponder(resp *http.Response) (result return } -// Update the update operation can be used to update the SKU, encryption, -// access tier, or tags for a storage account. It can also be used to map the -// account to a custom domain. 
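The RegenerateKey hunks above keep the same request body; only the context parameter and retry sender change. A minimal sketch of rotating `key1`, with `KeyName` being the only required field per the validation shown above:

```
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage"
	"github.com/Azure/go-autorest/autorest/to"
)

// rotateKey1 regenerates the account key named "key1" and returns the
// refreshed key list, matching the AccountListKeysResult signature above.
func rotateKey1(ctx context.Context, client storage.AccountsClient, rg, name string) (storage.AccountListKeysResult, error) {
	return client.RegenerateKey(ctx, rg, name, storage.AccountRegenerateKeyParameters{
		KeyName: to.StringPtr("key1"),
	})
}
```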
Only one custom domain is supported per storage -// account; the replacement/change of custom domain is not supported. In order -// to replace an old custom domain, the old value must be cleared/unregistered -// before a new value can be set. The update of multiple properties is -// supported. This call does not change the storage keys for the account. If -// you want to change the storage account keys, use the regenerate keys -// operation. The location and name of the storage account cannot be changed -// after creation. -// -// resourceGroupName is the name of the resource group within the user's -// subscription. The name is case insensitive. accountName is the name of the -// storage account within the specified resource group. Storage account names -// must be between 3 and 24 characters in length and use numbers and lower-case -// letters only. parameters is the parameters to provide for the updated -// account. -func (client AccountsClient) Update(resourceGroupName string, accountName string, parameters AccountUpdateParameters) (result Account, err error) { +// Update the update operation can be used to update the SKU, encryption, access tier, or tags for a storage account. +// It can also be used to map the account to a custom domain. Only one custom domain is supported per storage account; +// the replacement/change of custom domain is not supported. In order to replace an old custom domain, the old value +// must be cleared/unregistered before a new value can be set. The update of multiple properties is supported. This +// call does not change the storage keys for the account. If you want to change the storage account keys, use the +// regenerate keys operation. The location and name of the storage account cannot be changed after creation. +// Parameters: +// resourceGroupName - the name of the resource group within the user's subscription. The name is case +// insensitive. +// accountName - the name of the storage account within the specified resource group. Storage account names +// must be between 3 and 24 characters in length and use numbers and lower-case letters only. +// parameters - the parameters to provide for the updated account. +func (client AccountsClient) Update(ctx context.Context, resourceGroupName string, accountName string, parameters AccountUpdateParameters) (result Account, err error) { if err := validation.Validate([]validation.Validation{ {TargetValue: resourceGroupName, Constraints: []validation.Constraint{{Target: "resourceGroupName", Name: validation.MaxLength, Rule: 90, Chain: nil}, @@ -893,10 +869,10 @@ func (client AccountsClient) Update(resourceGroupName string, accountName string {TargetValue: accountName, Constraints: []validation.Constraint{{Target: "accountName", Name: validation.MaxLength, Rule: 24, Chain: nil}, {Target: "accountName", Name: validation.MinLength, Rule: 3, Chain: nil}}}}); err != nil { - return result, validation.NewErrorWithValidationError(err, "storage.AccountsClient", "Update") + return result, validation.NewError("storage.AccountsClient", "Update", err.Error()) } - req, err := client.UpdatePreparer(resourceGroupName, accountName, parameters) + req, err := client.UpdatePreparer(ctx, resourceGroupName, accountName, parameters) if err != nil { err = autorest.NewErrorWithError(err, "storage.AccountsClient", "Update", nil, "Failure preparing request") return @@ -918,32 +894,33 @@ func (client AccountsClient) Update(resourceGroupName string, accountName string } // UpdatePreparer prepares the Update request. 
-func (client AccountsClient) UpdatePreparer(resourceGroupName string, accountName string, parameters AccountUpdateParameters) (*http.Request, error) { +func (client AccountsClient) UpdatePreparer(ctx context.Context, resourceGroupName string, accountName string, parameters AccountUpdateParameters) (*http.Request, error) { pathParameters := map[string]interface{}{ "accountName": autorest.Encode("path", accountName), "resourceGroupName": autorest.Encode("path", resourceGroupName), "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } preparer := autorest.CreatePreparer( - autorest.AsJSON(), + autorest.AsContentType("application/json; charset=utf-8"), autorest.AsPatch(), autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}", pathParameters), autorest.WithJSON(parameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // UpdateSender sends the Update request. The method will close the // http.Response Body if it receives an error. func (client AccountsClient) UpdateSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // UpdateResponder handles the response to the Update request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/client.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/client.go similarity index 72% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/storage/client.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/client.go index a537bdd25..2be951c81 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/client.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/client.go @@ -1,5 +1,4 @@ -// Package storage implements the Azure ARM Storage service API version -// 2016-12-01. +// Package storage implements the Azure ARM Storage service API version 2017-10-01. // // The Azure Storage Management API. package storage @@ -18,9 +17,8 @@ package storage // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( "github.com/Azure/go-autorest/autorest" @@ -31,21 +29,21 @@ const ( DefaultBaseURI = "https://management.azure.com" ) -// ManagementClient is the base client for Storage. -type ManagementClient struct { +// BaseClient is the base client for Storage. +type BaseClient struct { autorest.Client BaseURI string SubscriptionID string } -// New creates an instance of the ManagementClient client. -func New(subscriptionID string) ManagementClient { +// New creates an instance of the BaseClient client. 
+func New(subscriptionID string) BaseClient { return NewWithBaseURI(DefaultBaseURI, subscriptionID) } -// NewWithBaseURI creates an instance of the ManagementClient client. -func NewWithBaseURI(baseURI string, subscriptionID string) ManagementClient { - return ManagementClient{ +// NewWithBaseURI creates an instance of the BaseClient client. +func NewWithBaseURI(baseURI string, subscriptionID string) BaseClient { + return BaseClient{ Client: autorest.NewClientWithUserAgent(UserAgent()), BaseURI: baseURI, SubscriptionID: subscriptionID, diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/models.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/models.go new file mode 100644 index 000000000..c215a75c8 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/models.go @@ -0,0 +1,1288 @@ +package storage + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "encoding/json" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "github.com/Azure/go-autorest/autorest/date" + "net/http" +) + +// AccessTier enumerates the values for access tier. +type AccessTier string + +const ( + // Cool ... + Cool AccessTier = "Cool" + // Hot ... + Hot AccessTier = "Hot" +) + +// PossibleAccessTierValues returns an array of possible values for the AccessTier const type. +func PossibleAccessTierValues() []AccessTier { + return []AccessTier{Cool, Hot} +} + +// AccountStatus enumerates the values for account status. +type AccountStatus string + +const ( + // Available ... + Available AccountStatus = "available" + // Unavailable ... + Unavailable AccountStatus = "unavailable" +) + +// PossibleAccountStatusValues returns an array of possible values for the AccountStatus const type. +func PossibleAccountStatusValues() []AccountStatus { + return []AccountStatus{Available, Unavailable} +} + +// Action enumerates the values for action. +type Action string + +const ( + // Allow ... + Allow Action = "Allow" +) + +// PossibleActionValues returns an array of possible values for the Action const type. +func PossibleActionValues() []Action { + return []Action{Allow} +} + +// Bypass enumerates the values for bypass. +type Bypass string + +const ( + // AzureServices ... + AzureServices Bypass = "AzureServices" + // Logging ... + Logging Bypass = "Logging" + // Metrics ... + Metrics Bypass = "Metrics" + // None ... + None Bypass = "None" +) + +// PossibleBypassValues returns an array of possible values for the Bypass const type. +func PossibleBypassValues() []Bypass { + return []Bypass{AzureServices, Logging, Metrics, None} +} + +// DefaultAction enumerates the values for default action. 
+type DefaultAction string + +const ( + // DefaultActionAllow ... + DefaultActionAllow DefaultAction = "Allow" + // DefaultActionDeny ... + DefaultActionDeny DefaultAction = "Deny" +) + +// PossibleDefaultActionValues returns an array of possible values for the DefaultAction const type. +func PossibleDefaultActionValues() []DefaultAction { + return []DefaultAction{DefaultActionAllow, DefaultActionDeny} +} + +// HTTPProtocol enumerates the values for http protocol. +type HTTPProtocol string + +const ( + // HTTPS ... + HTTPS HTTPProtocol = "https" + // Httpshttp ... + Httpshttp HTTPProtocol = "https,http" +) + +// PossibleHTTPProtocolValues returns an array of possible values for the HTTPProtocol const type. +func PossibleHTTPProtocolValues() []HTTPProtocol { + return []HTTPProtocol{HTTPS, Httpshttp} +} + +// KeyPermission enumerates the values for key permission. +type KeyPermission string + +const ( + // Full ... + Full KeyPermission = "Full" + // Read ... + Read KeyPermission = "Read" +) + +// PossibleKeyPermissionValues returns an array of possible values for the KeyPermission const type. +func PossibleKeyPermissionValues() []KeyPermission { + return []KeyPermission{Full, Read} +} + +// KeySource enumerates the values for key source. +type KeySource string + +const ( + // MicrosoftKeyvault ... + MicrosoftKeyvault KeySource = "Microsoft.Keyvault" + // MicrosoftStorage ... + MicrosoftStorage KeySource = "Microsoft.Storage" +) + +// PossibleKeySourceValues returns an array of possible values for the KeySource const type. +func PossibleKeySourceValues() []KeySource { + return []KeySource{MicrosoftKeyvault, MicrosoftStorage} +} + +// Kind enumerates the values for kind. +type Kind string + +const ( + // BlobStorage ... + BlobStorage Kind = "BlobStorage" + // Storage ... + Storage Kind = "Storage" + // StorageV2 ... + StorageV2 Kind = "StorageV2" +) + +// PossibleKindValues returns an array of possible values for the Kind const type. +func PossibleKindValues() []Kind { + return []Kind{BlobStorage, Storage, StorageV2} +} + +// Permissions enumerates the values for permissions. +type Permissions string + +const ( + // A ... + A Permissions = "a" + // C ... + C Permissions = "c" + // D ... + D Permissions = "d" + // L ... + L Permissions = "l" + // P ... + P Permissions = "p" + // R ... + R Permissions = "r" + // U ... + U Permissions = "u" + // W ... + W Permissions = "w" +) + +// PossiblePermissionsValues returns an array of possible values for the Permissions const type. +func PossiblePermissionsValues() []Permissions { + return []Permissions{A, C, D, L, P, R, U, W} +} + +// ProvisioningState enumerates the values for provisioning state. +type ProvisioningState string + +const ( + // Creating ... + Creating ProvisioningState = "Creating" + // ResolvingDNS ... + ResolvingDNS ProvisioningState = "ResolvingDNS" + // Succeeded ... + Succeeded ProvisioningState = "Succeeded" +) + +// PossibleProvisioningStateValues returns an array of possible values for the ProvisioningState const type. +func PossibleProvisioningStateValues() []ProvisioningState { + return []ProvisioningState{Creating, ResolvingDNS, Succeeded} +} + +// Reason enumerates the values for reason. +type Reason string + +const ( + // AccountNameInvalid ... + AccountNameInvalid Reason = "AccountNameInvalid" + // AlreadyExists ... + AlreadyExists Reason = "AlreadyExists" +) + +// PossibleReasonValues returns an array of possible values for the Reason const type. 
+func PossibleReasonValues() []Reason { + return []Reason{AccountNameInvalid, AlreadyExists} +} + +// ReasonCode enumerates the values for reason code. +type ReasonCode string + +const ( + // NotAvailableForSubscription ... + NotAvailableForSubscription ReasonCode = "NotAvailableForSubscription" + // QuotaID ... + QuotaID ReasonCode = "QuotaId" +) + +// PossibleReasonCodeValues returns an array of possible values for the ReasonCode const type. +func PossibleReasonCodeValues() []ReasonCode { + return []ReasonCode{NotAvailableForSubscription, QuotaID} +} + +// Services enumerates the values for services. +type Services string + +const ( + // B ... + B Services = "b" + // F ... + F Services = "f" + // Q ... + Q Services = "q" + // T ... + T Services = "t" +) + +// PossibleServicesValues returns an array of possible values for the Services const type. +func PossibleServicesValues() []Services { + return []Services{B, F, Q, T} +} + +// SignedResource enumerates the values for signed resource. +type SignedResource string + +const ( + // SignedResourceB ... + SignedResourceB SignedResource = "b" + // SignedResourceC ... + SignedResourceC SignedResource = "c" + // SignedResourceF ... + SignedResourceF SignedResource = "f" + // SignedResourceS ... + SignedResourceS SignedResource = "s" +) + +// PossibleSignedResourceValues returns an array of possible values for the SignedResource const type. +func PossibleSignedResourceValues() []SignedResource { + return []SignedResource{SignedResourceB, SignedResourceC, SignedResourceF, SignedResourceS} +} + +// SignedResourceTypes enumerates the values for signed resource types. +type SignedResourceTypes string + +const ( + // SignedResourceTypesC ... + SignedResourceTypesC SignedResourceTypes = "c" + // SignedResourceTypesO ... + SignedResourceTypesO SignedResourceTypes = "o" + // SignedResourceTypesS ... + SignedResourceTypesS SignedResourceTypes = "s" +) + +// PossibleSignedResourceTypesValues returns an array of possible values for the SignedResourceTypes const type. +func PossibleSignedResourceTypesValues() []SignedResourceTypes { + return []SignedResourceTypes{SignedResourceTypesC, SignedResourceTypesO, SignedResourceTypesS} +} + +// SkuName enumerates the values for sku name. +type SkuName string + +const ( + // PremiumLRS ... + PremiumLRS SkuName = "Premium_LRS" + // StandardGRS ... + StandardGRS SkuName = "Standard_GRS" + // StandardLRS ... + StandardLRS SkuName = "Standard_LRS" + // StandardRAGRS ... + StandardRAGRS SkuName = "Standard_RAGRS" + // StandardZRS ... + StandardZRS SkuName = "Standard_ZRS" +) + +// PossibleSkuNameValues returns an array of possible values for the SkuName const type. +func PossibleSkuNameValues() []SkuName { + return []SkuName{PremiumLRS, StandardGRS, StandardLRS, StandardRAGRS, StandardZRS} +} + +// SkuTier enumerates the values for sku tier. +type SkuTier string + +const ( + // Premium ... + Premium SkuTier = "Premium" + // Standard ... + Standard SkuTier = "Standard" +) + +// PossibleSkuTierValues returns an array of possible values for the SkuTier const type. +func PossibleSkuTierValues() []SkuTier { + return []SkuTier{Premium, Standard} +} + +// State enumerates the values for state. +type State string + +const ( + // StateDeprovisioning ... + StateDeprovisioning State = "deprovisioning" + // StateFailed ... + StateFailed State = "failed" + // StateNetworkSourceDeleted ... + StateNetworkSourceDeleted State = "networkSourceDeleted" + // StateProvisioning ... 
+ StateProvisioning State = "provisioning" + // StateSucceeded ... + StateSucceeded State = "succeeded" +) + +// PossibleStateValues returns an array of possible values for the State const type. +func PossibleStateValues() []State { + return []State{StateDeprovisioning, StateFailed, StateNetworkSourceDeleted, StateProvisioning, StateSucceeded} +} + +// UsageUnit enumerates the values for usage unit. +type UsageUnit string + +const ( + // Bytes ... + Bytes UsageUnit = "Bytes" + // BytesPerSecond ... + BytesPerSecond UsageUnit = "BytesPerSecond" + // Count ... + Count UsageUnit = "Count" + // CountsPerSecond ... + CountsPerSecond UsageUnit = "CountsPerSecond" + // Percent ... + Percent UsageUnit = "Percent" + // Seconds ... + Seconds UsageUnit = "Seconds" +) + +// PossibleUsageUnitValues returns an array of possible values for the UsageUnit const type. +func PossibleUsageUnitValues() []UsageUnit { + return []UsageUnit{Bytes, BytesPerSecond, Count, CountsPerSecond, Percent, Seconds} +} + +// Account the storage account. +type Account struct { + autorest.Response `json:"-"` + // Sku - Gets the SKU. + Sku *Sku `json:"sku,omitempty"` + // Kind - Gets the Kind. Possible values include: 'Storage', 'StorageV2', 'BlobStorage' + Kind Kind `json:"kind,omitempty"` + // Identity - The identity of the resource. + Identity *Identity `json:"identity,omitempty"` + // AccountProperties - Properties of the storage account. + *AccountProperties `json:"properties,omitempty"` + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Tags assigned to a resource; can be used for viewing and grouping a resource (across resource groups). + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Account. +func (a Account) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if a.Sku != nil { + objectMap["sku"] = a.Sku + } + if a.Kind != "" { + objectMap["kind"] = a.Kind + } + if a.Identity != nil { + objectMap["identity"] = a.Identity + } + if a.AccountProperties != nil { + objectMap["properties"] = a.AccountProperties + } + if a.ID != nil { + objectMap["id"] = a.ID + } + if a.Name != nil { + objectMap["name"] = a.Name + } + if a.Type != nil { + objectMap["type"] = a.Type + } + if a.Location != nil { + objectMap["location"] = a.Location + } + if a.Tags != nil { + objectMap["tags"] = a.Tags + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Account struct. 
+func (a *Account) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + a.Sku = &sku + } + case "kind": + if v != nil { + var kind Kind + err = json.Unmarshal(*v, &kind) + if err != nil { + return err + } + a.Kind = kind + } + case "identity": + if v != nil { + var identity Identity + err = json.Unmarshal(*v, &identity) + if err != nil { + return err + } + a.Identity = &identity + } + case "properties": + if v != nil { + var accountProperties AccountProperties + err = json.Unmarshal(*v, &accountProperties) + if err != nil { + return err + } + a.AccountProperties = &accountProperties + } + case "id": + if v != nil { + var ID string + err = json.Unmarshal(*v, &ID) + if err != nil { + return err + } + a.ID = &ID + } + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + a.Name = &name + } + case "type": + if v != nil { + var typeVar string + err = json.Unmarshal(*v, &typeVar) + if err != nil { + return err + } + a.Type = &typeVar + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + a.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + a.Tags = tags + } + } + } + + return nil +} + +// AccountCheckNameAvailabilityParameters the parameters used to check the availabity of the storage account name. +type AccountCheckNameAvailabilityParameters struct { + // Name - The storage account name. + Name *string `json:"name,omitempty"` + // Type - The type of resource, Microsoft.Storage/storageAccounts + Type *string `json:"type,omitempty"` +} + +// AccountCreateParameters the parameters used when creating a storage account. +type AccountCreateParameters struct { + // Sku - Required. Gets or sets the sku name. + Sku *Sku `json:"sku,omitempty"` + // Kind - Required. Indicates the type of storage account. Possible values include: 'Storage', 'StorageV2', 'BlobStorage' + Kind Kind `json:"kind,omitempty"` + // Location - Required. Gets or sets the location of the resource. This will be one of the supported and registered Azure Geo Regions (e.g. West US, East US, Southeast Asia, etc.). The geo region of a resource cannot be changed once it is created, but if an identical geo region is specified on update, the request will succeed. + Location *string `json:"location,omitempty"` + // Tags - Gets or sets a list of key value pairs that describe the resource. These tags can be used for viewing and grouping this resource (across resource groups). A maximum of 15 tags can be provided for a resource. Each tag must have a key with a length no greater than 128 characters and a value with a length no greater than 256 characters. + Tags map[string]*string `json:"tags"` + // Identity - The identity of the resource. + Identity *Identity `json:"identity,omitempty"` + // AccountPropertiesCreateParameters - The parameters used to create the storage account. + *AccountPropertiesCreateParameters `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for AccountCreateParameters. 
+func (acp AccountCreateParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if acp.Sku != nil { + objectMap["sku"] = acp.Sku + } + if acp.Kind != "" { + objectMap["kind"] = acp.Kind + } + if acp.Location != nil { + objectMap["location"] = acp.Location + } + if acp.Tags != nil { + objectMap["tags"] = acp.Tags + } + if acp.Identity != nil { + objectMap["identity"] = acp.Identity + } + if acp.AccountPropertiesCreateParameters != nil { + objectMap["properties"] = acp.AccountPropertiesCreateParameters + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for AccountCreateParameters struct. +func (acp *AccountCreateParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + acp.Sku = &sku + } + case "kind": + if v != nil { + var kind Kind + err = json.Unmarshal(*v, &kind) + if err != nil { + return err + } + acp.Kind = kind + } + case "location": + if v != nil { + var location string + err = json.Unmarshal(*v, &location) + if err != nil { + return err + } + acp.Location = &location + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + acp.Tags = tags + } + case "identity": + if v != nil { + var identity Identity + err = json.Unmarshal(*v, &identity) + if err != nil { + return err + } + acp.Identity = &identity + } + case "properties": + if v != nil { + var accountPropertiesCreateParameters AccountPropertiesCreateParameters + err = json.Unmarshal(*v, &accountPropertiesCreateParameters) + if err != nil { + return err + } + acp.AccountPropertiesCreateParameters = &accountPropertiesCreateParameters + } + } + } + + return nil +} + +// AccountKey an access key for the storage account. +type AccountKey struct { + // KeyName - Name of the key. + KeyName *string `json:"keyName,omitempty"` + // Value - Base 64-encoded value of the key. + Value *string `json:"value,omitempty"` + // Permissions - Permissions for the key -- read-only or full permissions. Possible values include: 'Read', 'Full' + Permissions KeyPermission `json:"permissions,omitempty"` +} + +// AccountListKeysResult the response from the ListKeys operation. +type AccountListKeysResult struct { + autorest.Response `json:"-"` + // Keys - Gets the list of storage account keys and their properties for the specified storage account. + Keys *[]AccountKey `json:"keys,omitempty"` +} + +// AccountListResult the response from the List Storage Accounts operation. +type AccountListResult struct { + autorest.Response `json:"-"` + // Value - Gets the list of storage accounts and their properties. + Value *[]Account `json:"value,omitempty"` +} + +// AccountProperties properties of the storage account. +type AccountProperties struct { + // ProvisioningState - Gets the status of the storage account at the time the operation was called. Possible values include: 'Creating', 'ResolvingDNS', 'Succeeded' + ProvisioningState ProvisioningState `json:"provisioningState,omitempty"` + // PrimaryEndpoints - Gets the URLs that are used to perform a retrieval of a public blob, queue, or table object. Note that Standard_ZRS and Premium_LRS accounts only return the blob endpoint. 
+ PrimaryEndpoints *Endpoints `json:"primaryEndpoints,omitempty"` + // PrimaryLocation - Gets the location of the primary data center for the storage account. + PrimaryLocation *string `json:"primaryLocation,omitempty"` + // StatusOfPrimary - Gets the status indicating whether the primary location of the storage account is available or unavailable. Possible values include: 'Available', 'Unavailable' + StatusOfPrimary AccountStatus `json:"statusOfPrimary,omitempty"` + // LastGeoFailoverTime - Gets the timestamp of the most recent instance of a failover to the secondary location. Only the most recent timestamp is retained. This element is not returned if there has never been a failover instance. Only available if the accountType is Standard_GRS or Standard_RAGRS. + LastGeoFailoverTime *date.Time `json:"lastGeoFailoverTime,omitempty"` + // SecondaryLocation - Gets the location of the geo-replicated secondary for the storage account. Only available if the accountType is Standard_GRS or Standard_RAGRS. + SecondaryLocation *string `json:"secondaryLocation,omitempty"` + // StatusOfSecondary - Gets the status indicating whether the secondary location of the storage account is available or unavailable. Only available if the SKU name is Standard_GRS or Standard_RAGRS. Possible values include: 'Available', 'Unavailable' + StatusOfSecondary AccountStatus `json:"statusOfSecondary,omitempty"` + // CreationTime - Gets the creation date and time of the storage account in UTC. + CreationTime *date.Time `json:"creationTime,omitempty"` + // CustomDomain - Gets the custom domain the user assigned to this storage account. + CustomDomain *CustomDomain `json:"customDomain,omitempty"` + // SecondaryEndpoints - Gets the URLs that are used to perform a retrieval of a public blob, queue, or table object from the secondary location of the storage account. Only available if the SKU name is Standard_RAGRS. + SecondaryEndpoints *Endpoints `json:"secondaryEndpoints,omitempty"` + // Encryption - Gets the encryption settings on the account. If unspecified, the account is unencrypted. + Encryption *Encryption `json:"encryption,omitempty"` + // AccessTier - Required for storage accounts where kind = BlobStorage. The access tier used for billing. Possible values include: 'Hot', 'Cool' + AccessTier AccessTier `json:"accessTier,omitempty"` + // EnableHTTPSTrafficOnly - Allows https traffic only to storage service if sets to true. + EnableHTTPSTrafficOnly *bool `json:"supportsHttpsTrafficOnly,omitempty"` + // NetworkRuleSet - Network rule set + NetworkRuleSet *NetworkRuleSet `json:"networkAcls,omitempty"` +} + +// AccountPropertiesCreateParameters the parameters used to create the storage account. +type AccountPropertiesCreateParameters struct { + // CustomDomain - User domain assigned to the storage account. Name is the CNAME source. Only one custom domain is supported per storage account at this time. To clear the existing custom domain, use an empty string for the custom domain name property. + CustomDomain *CustomDomain `json:"customDomain,omitempty"` + // Encryption - Provides the encryption settings on the account. If left unspecified the account encryption settings will remain the same. The default setting is unencrypted. + Encryption *Encryption `json:"encryption,omitempty"` + // NetworkRuleSet - Network rule set + NetworkRuleSet *NetworkRuleSet `json:"networkAcls,omitempty"` + // AccessTier - Required for storage accounts where kind = BlobStorage. The access tier used for billing. 
Possible values include: 'Hot', 'Cool' + AccessTier AccessTier `json:"accessTier,omitempty"` + // EnableHTTPSTrafficOnly - Allows https traffic only to storage service if sets to true. + EnableHTTPSTrafficOnly *bool `json:"supportsHttpsTrafficOnly,omitempty"` +} + +// AccountPropertiesUpdateParameters the parameters used when updating a storage account. +type AccountPropertiesUpdateParameters struct { + // CustomDomain - Custom domain assigned to the storage account by the user. Name is the CNAME source. Only one custom domain is supported per storage account at this time. To clear the existing custom domain, use an empty string for the custom domain name property. + CustomDomain *CustomDomain `json:"customDomain,omitempty"` + // Encryption - Provides the encryption settings on the account. The default setting is unencrypted. + Encryption *Encryption `json:"encryption,omitempty"` + // AccessTier - Required for storage accounts where kind = BlobStorage. The access tier used for billing. Possible values include: 'Hot', 'Cool' + AccessTier AccessTier `json:"accessTier,omitempty"` + // EnableHTTPSTrafficOnly - Allows https traffic only to storage service if sets to true. + EnableHTTPSTrafficOnly *bool `json:"supportsHttpsTrafficOnly,omitempty"` + // NetworkRuleSet - Network rule set + NetworkRuleSet *NetworkRuleSet `json:"networkAcls,omitempty"` +} + +// AccountRegenerateKeyParameters the parameters used to regenerate the storage account key. +type AccountRegenerateKeyParameters struct { + // KeyName - The name of storage keys that want to be regenerated, possible vaules are key1, key2. + KeyName *string `json:"keyName,omitempty"` +} + +// AccountSasParameters the parameters to list SAS credentials of a storage account. +type AccountSasParameters struct { + // Services - The signed services accessible with the account SAS. Possible values include: Blob (b), Queue (q), Table (t), File (f). Possible values include: 'B', 'Q', 'T', 'F' + Services Services `json:"signedServices,omitempty"` + // ResourceTypes - The signed resource types that are accessible with the account SAS. Service (s): Access to service-level APIs; Container (c): Access to container-level APIs; Object (o): Access to object-level APIs for blobs, queue messages, table entities, and files. Possible values include: 'SignedResourceTypesS', 'SignedResourceTypesC', 'SignedResourceTypesO' + ResourceTypes SignedResourceTypes `json:"signedResourceTypes,omitempty"` + // Permissions - The signed permissions for the account SAS. Possible values include: Read (r), Write (w), Delete (d), List (l), Add (a), Create (c), Update (u) and Process (p). Possible values include: 'R', 'D', 'W', 'L', 'A', 'C', 'U', 'P' + Permissions Permissions `json:"signedPermission,omitempty"` + // IPAddressOrRange - An IP address or a range of IP addresses from which to accept requests. + IPAddressOrRange *string `json:"signedIp,omitempty"` + // Protocols - The protocol permitted for a request made with the account SAS. Possible values include: 'Httpshttp', 'HTTPS' + Protocols HTTPProtocol `json:"signedProtocol,omitempty"` + // SharedAccessStartTime - The time at which the SAS becomes valid. + SharedAccessStartTime *date.Time `json:"signedStart,omitempty"` + // SharedAccessExpiryTime - The time at which the shared access signature becomes invalid. + SharedAccessExpiryTime *date.Time `json:"signedExpiry,omitempty"` + // KeyToSign - The key to sign the account SAS token with. 
+ KeyToSign *string `json:"keyToSign,omitempty"` +} + +// AccountsCreateFuture an abstraction for monitoring and retrieving the results of a long-running operation. +type AccountsCreateFuture struct { + azure.Future +} + +// Result returns the result of the asynchronous operation. +// If the operation has not completed it will return an error. +func (future *AccountsCreateFuture) Result(client AccountsClient) (a Account, err error) { + var done bool + done, err = future.Done(client) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.AccountsCreateFuture", "Result", future.Response(), "Polling failure") + return + } + if !done { + err = azure.NewAsyncOpIncompleteError("storage.AccountsCreateFuture") + return + } + sender := autorest.DecorateSender(client, autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) + if a.Response.Response, err = future.GetResult(sender); err == nil && a.Response.Response.StatusCode != http.StatusNoContent { + a, err = client.CreateResponder(a.Response.Response) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.AccountsCreateFuture", "Result", a.Response.Response, "Failure responding to request") + } + } + return +} + +// AccountUpdateParameters the parameters that can be provided when updating the storage account properties. +type AccountUpdateParameters struct { + // Sku - Gets or sets the SKU name. Note that the SKU name cannot be updated to Standard_ZRS or Premium_LRS, nor can accounts of those sku names be updated to any other value. + Sku *Sku `json:"sku,omitempty"` + // Tags - Gets or sets a list of key value pairs that describe the resource. These tags can be used in viewing and grouping this resource (across resource groups). A maximum of 15 tags can be provided for a resource. Each tag must have a key no greater in length than 128 characters and a value no greater in length than 256 characters. + Tags map[string]*string `json:"tags"` + // Identity - The identity of the resource. + Identity *Identity `json:"identity,omitempty"` + // AccountPropertiesUpdateParameters - The parameters used when updating a storage account. + *AccountPropertiesUpdateParameters `json:"properties,omitempty"` + // Kind - Optional. Indicates the type of storage account. Currently only StorageV2 value supported by server. Possible values include: 'Storage', 'StorageV2', 'BlobStorage' + Kind Kind `json:"kind,omitempty"` +} + +// MarshalJSON is the custom marshaler for AccountUpdateParameters. +func (aup AccountUpdateParameters) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if aup.Sku != nil { + objectMap["sku"] = aup.Sku + } + if aup.Tags != nil { + objectMap["tags"] = aup.Tags + } + if aup.Identity != nil { + objectMap["identity"] = aup.Identity + } + if aup.AccountPropertiesUpdateParameters != nil { + objectMap["properties"] = aup.AccountPropertiesUpdateParameters + } + if aup.Kind != "" { + objectMap["kind"] = aup.Kind + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for AccountUpdateParameters struct. 
+func (aup *AccountUpdateParameters) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "sku": + if v != nil { + var sku Sku + err = json.Unmarshal(*v, &sku) + if err != nil { + return err + } + aup.Sku = &sku + } + case "tags": + if v != nil { + var tags map[string]*string + err = json.Unmarshal(*v, &tags) + if err != nil { + return err + } + aup.Tags = tags + } + case "identity": + if v != nil { + var identity Identity + err = json.Unmarshal(*v, &identity) + if err != nil { + return err + } + aup.Identity = &identity + } + case "properties": + if v != nil { + var accountPropertiesUpdateParameters AccountPropertiesUpdateParameters + err = json.Unmarshal(*v, &accountPropertiesUpdateParameters) + if err != nil { + return err + } + aup.AccountPropertiesUpdateParameters = &accountPropertiesUpdateParameters + } + case "kind": + if v != nil { + var kind Kind + err = json.Unmarshal(*v, &kind) + if err != nil { + return err + } + aup.Kind = kind + } + } + } + + return nil +} + +// CheckNameAvailabilityResult the CheckNameAvailability operation response. +type CheckNameAvailabilityResult struct { + autorest.Response `json:"-"` + // NameAvailable - Gets a boolean value that indicates whether the name is available for you to use. If true, the name is available. If false, the name has already been taken or is invalid and cannot be used. + NameAvailable *bool `json:"nameAvailable,omitempty"` + // Reason - Gets the reason that a storage account name could not be used. The Reason element is only returned if NameAvailable is false. Possible values include: 'AccountNameInvalid', 'AlreadyExists' + Reason Reason `json:"reason,omitempty"` + // Message - Gets an error message explaining the Reason value in more detail. + Message *string `json:"message,omitempty"` +} + +// CustomDomain the custom domain assigned to this storage account. This can be set via Update. +type CustomDomain struct { + // Name - Gets or sets the custom domain name assigned to the storage account. Name is the CNAME source. + Name *string `json:"name,omitempty"` + // UseSubDomain - Indicates whether indirect CName validation is enabled. Default value is false. This should only be set on updates. + UseSubDomain *bool `json:"useSubDomain,omitempty"` +} + +// Dimension dimension of blobs, possiblly be blob type or access tier. +type Dimension struct { + // Name - Display name of dimension. + Name *string `json:"name,omitempty"` + // DisplayName - Display name of dimension. + DisplayName *string `json:"displayName,omitempty"` +} + +// Encryption the encryption settings on the storage account. +type Encryption struct { + // Services - List of services which support encryption. + Services *EncryptionServices `json:"services,omitempty"` + // KeySource - The encryption keySource (provider). Possible values (case-insensitive): Microsoft.Storage, Microsoft.Keyvault. Possible values include: 'MicrosoftStorage', 'MicrosoftKeyvault' + KeySource KeySource `json:"keySource,omitempty"` + // KeyVaultProperties - Properties provided by key vault. + KeyVaultProperties *KeyVaultProperties `json:"keyvaultproperties,omitempty"` +} + +// EncryptionService a service that allows server-side encryption to be used. +type EncryptionService struct { + // Enabled - A boolean indicating whether or not the service encrypts the data as it is stored. 
+ Enabled *bool `json:"enabled,omitempty"` + // LastEnabledTime - Gets a rough estimate of the date/time when the encryption was last enabled by the user. Only returned when encryption is enabled. There might be some unencrypted blobs which were written after this time, as it is just a rough estimate. + LastEnabledTime *date.Time `json:"lastEnabledTime,omitempty"` +} + +// EncryptionServices a list of services that support encryption. +type EncryptionServices struct { + // Blob - The encryption function of the blob storage service. + Blob *EncryptionService `json:"blob,omitempty"` + // File - The encryption function of the file storage service. + File *EncryptionService `json:"file,omitempty"` + // Table - The encryption function of the table storage service. + Table *EncryptionService `json:"table,omitempty"` + // Queue - The encryption function of the queue storage service. + Queue *EncryptionService `json:"queue,omitempty"` +} + +// Endpoints the URIs that are used to perform a retrieval of a public blob, queue, or table object. +type Endpoints struct { + // Blob - Gets the blob endpoint. + Blob *string `json:"blob,omitempty"` + // Queue - Gets the queue endpoint. + Queue *string `json:"queue,omitempty"` + // Table - Gets the table endpoint. + Table *string `json:"table,omitempty"` + // File - Gets the file endpoint. + File *string `json:"file,omitempty"` +} + +// Identity identity for the resource. +type Identity struct { + // PrincipalID - The principal ID of resource identity. + PrincipalID *string `json:"principalId,omitempty"` + // TenantID - The tenant ID of resource. + TenantID *string `json:"tenantId,omitempty"` + // Type - The identity type. + Type *string `json:"type,omitempty"` +} + +// IPRule IP rule with specific IP or IP range in CIDR format. +type IPRule struct { + // IPAddressOrRange - Specifies the IP or IP range in CIDR format. Only IPV4 address is allowed. + IPAddressOrRange *string `json:"value,omitempty"` + // Action - The action of IP ACL rule. Possible values include: 'Allow' + Action Action `json:"action,omitempty"` +} + +// KeyVaultProperties properties of key vault. +type KeyVaultProperties struct { + // KeyName - The name of KeyVault key. + KeyName *string `json:"keyname,omitempty"` + // KeyVersion - The version of KeyVault key. + KeyVersion *string `json:"keyversion,omitempty"` + // KeyVaultURI - The Uri of KeyVault. + KeyVaultURI *string `json:"keyvaulturi,omitempty"` +} + +// ListAccountSasResponse the List SAS credentials operation response. +type ListAccountSasResponse struct { + autorest.Response `json:"-"` + // AccountSasToken - List SAS credentials of storage account. + AccountSasToken *string `json:"accountSasToken,omitempty"` +} + +// ListServiceSasResponse the List service SAS credentials operation response. +type ListServiceSasResponse struct { + autorest.Response `json:"-"` + // ServiceSasToken - List service SAS credentials of speicific resource. + ServiceSasToken *string `json:"serviceSasToken,omitempty"` +} + +// MetricSpecification metric specification of operation. +type MetricSpecification struct { + // Name - Name of metric specification. + Name *string `json:"name,omitempty"` + // DisplayName - Display name of metric specification. + DisplayName *string `json:"displayName,omitempty"` + // DisplayDescription - Display description of metric specification. + DisplayDescription *string `json:"displayDescription,omitempty"` + // Unit - Unit could be Bytes or Count. 
+ Unit *string `json:"unit,omitempty"` + // Dimensions - Dimensions of blobs, including blob type and access tier. + Dimensions *[]Dimension `json:"dimensions,omitempty"` + // AggregationType - Aggregation type could be Average. + AggregationType *string `json:"aggregationType,omitempty"` + // FillGapWithZero - The property to decide fill gap with zero or not. + FillGapWithZero *bool `json:"fillGapWithZero,omitempty"` + // Category - The category this metric specification belong to, could be Capacity. + Category *string `json:"category,omitempty"` + // ResourceIDDimensionNameOverride - Account Resource Id. + ResourceIDDimensionNameOverride *string `json:"resourceIdDimensionNameOverride,omitempty"` +} + +// NetworkRuleSet network rule set +type NetworkRuleSet struct { + // Bypass - Specifies whether traffic is bypassed for Logging/Metrics/AzureServices. Possible values are any combination of Logging|Metrics|AzureServices (For example, "Logging, Metrics"), or None to bypass none of those traffics. Possible values include: 'None', 'Logging', 'Metrics', 'AzureServices' + Bypass Bypass `json:"bypass,omitempty"` + // VirtualNetworkRules - Sets the virtual network rules + VirtualNetworkRules *[]VirtualNetworkRule `json:"virtualNetworkRules,omitempty"` + // IPRules - Sets the IP ACL rules + IPRules *[]IPRule `json:"ipRules,omitempty"` + // DefaultAction - Specifies the default action of allow or deny when no other rules match. Possible values include: 'DefaultActionAllow', 'DefaultActionDeny' + DefaultAction DefaultAction `json:"defaultAction,omitempty"` +} + +// Operation storage REST API operation definition. +type Operation struct { + // Name - Operation name: {provider}/{resource}/{operation} + Name *string `json:"name,omitempty"` + // Display - Display metadata associated with the operation. + Display *OperationDisplay `json:"display,omitempty"` + // Origin - The origin of operations. + Origin *string `json:"origin,omitempty"` + // OperationProperties - Properties of operation, include metric specifications. + *OperationProperties `json:"properties,omitempty"` +} + +// MarshalJSON is the custom marshaler for Operation. +func (o Operation) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if o.Name != nil { + objectMap["name"] = o.Name + } + if o.Display != nil { + objectMap["display"] = o.Display + } + if o.Origin != nil { + objectMap["origin"] = o.Origin + } + if o.OperationProperties != nil { + objectMap["properties"] = o.OperationProperties + } + return json.Marshal(objectMap) +} + +// UnmarshalJSON is the custom unmarshaler for Operation struct. 
+func (o *Operation) UnmarshalJSON(body []byte) error { + var m map[string]*json.RawMessage + err := json.Unmarshal(body, &m) + if err != nil { + return err + } + for k, v := range m { + switch k { + case "name": + if v != nil { + var name string + err = json.Unmarshal(*v, &name) + if err != nil { + return err + } + o.Name = &name + } + case "display": + if v != nil { + var display OperationDisplay + err = json.Unmarshal(*v, &display) + if err != nil { + return err + } + o.Display = &display + } + case "origin": + if v != nil { + var origin string + err = json.Unmarshal(*v, &origin) + if err != nil { + return err + } + o.Origin = &origin + } + case "properties": + if v != nil { + var operationProperties OperationProperties + err = json.Unmarshal(*v, &operationProperties) + if err != nil { + return err + } + o.OperationProperties = &operationProperties + } + } + } + + return nil +} + +// OperationDisplay display metadata associated with the operation. +type OperationDisplay struct { + // Provider - Service provider: Microsoft Storage. + Provider *string `json:"provider,omitempty"` + // Resource - Resource on which the operation is performed etc. + Resource *string `json:"resource,omitempty"` + // Operation - Type of operation: get, read, delete, etc. + Operation *string `json:"operation,omitempty"` +} + +// OperationListResult result of the request to list Storage operations. It contains a list of operations and a URL +// link to get the next set of results. +type OperationListResult struct { + autorest.Response `json:"-"` + // Value - List of Storage operations supported by the Storage resource provider. + Value *[]Operation `json:"value,omitempty"` +} + +// OperationProperties properties of operation, include metric specifications. +type OperationProperties struct { + // ServiceSpecification - One property of operation, include metric specifications. + ServiceSpecification *ServiceSpecification `json:"serviceSpecification,omitempty"` +} + +// Resource describes a storage resource. +type Resource struct { + // ID - Resource Id + ID *string `json:"id,omitempty"` + // Name - Resource name + Name *string `json:"name,omitempty"` + // Type - Resource type + Type *string `json:"type,omitempty"` + // Location - Resource location + Location *string `json:"location,omitempty"` + // Tags - Tags assigned to a resource; can be used for viewing and grouping a resource (across resource groups). + Tags map[string]*string `json:"tags"` +} + +// MarshalJSON is the custom marshaler for Resource. +func (r Resource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]interface{}) + if r.ID != nil { + objectMap["id"] = r.ID + } + if r.Name != nil { + objectMap["name"] = r.Name + } + if r.Type != nil { + objectMap["type"] = r.Type + } + if r.Location != nil { + objectMap["location"] = r.Location + } + if r.Tags != nil { + objectMap["tags"] = r.Tags + } + return json.Marshal(objectMap) +} + +// Restriction the restriction because of which SKU cannot be used. +type Restriction struct { + // Type - The type of restrictions. As of now only possible value for this is location. + Type *string `json:"type,omitempty"` + // Values - The value of restrictions. If the restriction type is set to location. This would be different locations where the SKU is restricted. + Values *[]string `json:"values,omitempty"` + // ReasonCode - The reason for the restriction. As of now this can be “QuotaId” or “NotAvailableForSubscription”. 
Quota Id is set when the SKU has requiredQuotas parameter as the subscription does not belong to that quota. The “NotAvailableForSubscription” is related to capacity at DC. Possible values include: 'QuotaID', 'NotAvailableForSubscription' + ReasonCode ReasonCode `json:"reasonCode,omitempty"` +} + +// ServiceSasParameters the parameters to list service SAS credentials of a speicific resource. +type ServiceSasParameters struct { + // CanonicalizedResource - The canonical path to the signed resource. + CanonicalizedResource *string `json:"canonicalizedResource,omitempty"` + // Resource - The signed services accessible with the service SAS. Possible values include: Blob (b), Container (c), File (f), Share (s). Possible values include: 'SignedResourceB', 'SignedResourceC', 'SignedResourceF', 'SignedResourceS' + Resource SignedResource `json:"signedResource,omitempty"` + // Permissions - The signed permissions for the service SAS. Possible values include: Read (r), Write (w), Delete (d), List (l), Add (a), Create (c), Update (u) and Process (p). Possible values include: 'R', 'D', 'W', 'L', 'A', 'C', 'U', 'P' + Permissions Permissions `json:"signedPermission,omitempty"` + // IPAddressOrRange - An IP address or a range of IP addresses from which to accept requests. + IPAddressOrRange *string `json:"signedIp,omitempty"` + // Protocols - The protocol permitted for a request made with the account SAS. Possible values include: 'Httpshttp', 'HTTPS' + Protocols HTTPProtocol `json:"signedProtocol,omitempty"` + // SharedAccessStartTime - The time at which the SAS becomes valid. + SharedAccessStartTime *date.Time `json:"signedStart,omitempty"` + // SharedAccessExpiryTime - The time at which the shared access signature becomes invalid. + SharedAccessExpiryTime *date.Time `json:"signedExpiry,omitempty"` + // Identifier - A unique value up to 64 characters in length that correlates to an access policy specified for the container, queue, or table. + Identifier *string `json:"signedIdentifier,omitempty"` + // PartitionKeyStart - The start of partition key. + PartitionKeyStart *string `json:"startPk,omitempty"` + // PartitionKeyEnd - The end of partition key. + PartitionKeyEnd *string `json:"endPk,omitempty"` + // RowKeyStart - The start of row key. + RowKeyStart *string `json:"startRk,omitempty"` + // RowKeyEnd - The end of row key. + RowKeyEnd *string `json:"endRk,omitempty"` + // KeyToSign - The key to sign the account SAS token with. + KeyToSign *string `json:"keyToSign,omitempty"` + // CacheControl - The response header override for cache control. + CacheControl *string `json:"rscc,omitempty"` + // ContentDisposition - The response header override for content disposition. + ContentDisposition *string `json:"rscd,omitempty"` + // ContentEncoding - The response header override for content encoding. + ContentEncoding *string `json:"rsce,omitempty"` + // ContentLanguage - The response header override for content language. + ContentLanguage *string `json:"rscl,omitempty"` + // ContentType - The response header override for content type. + ContentType *string `json:"rsct,omitempty"` +} + +// ServiceSpecification one property of operation, include metric specifications. +type ServiceSpecification struct { + // MetricSpecifications - Metric specifications of operation. + MetricSpecifications *[]MetricSpecification `json:"metricSpecifications,omitempty"` +} + +// Sku the SKU of the storage account. +type Sku struct { + // Name - Gets or sets the sku name. Required for account creation; optional for update. 
Note that in older versions, sku name was called accountType. Possible values include: 'StandardLRS', 'StandardGRS', 'StandardRAGRS', 'StandardZRS', 'PremiumLRS' + Name SkuName `json:"name,omitempty"` + // Tier - Gets the sku tier. This is based on the SKU name. Possible values include: 'Standard', 'Premium' + Tier SkuTier `json:"tier,omitempty"` + // ResourceType - The type of the resource, usually it is 'storageAccounts'. + ResourceType *string `json:"resourceType,omitempty"` + // Kind - Indicates the type of storage account. Possible values include: 'Storage', 'StorageV2', 'BlobStorage' + Kind Kind `json:"kind,omitempty"` + // Locations - The set of locations that the SKU is available. This will be supported and registered Azure Geo Regions (e.g. West US, East US, Southeast Asia, etc.). + Locations *[]string `json:"locations,omitempty"` + // Capabilities - The capability information in the specified sku, including file encryption, network acls, change notification, etc. + Capabilities *[]SKUCapability `json:"capabilities,omitempty"` + // Restrictions - The restrictions because of which SKU cannot be used. This is empty if there are no restrictions. + Restrictions *[]Restriction `json:"restrictions,omitempty"` +} + +// SKUCapability the capability information in the specified sku, including file encryption, network acls, change +// notification, etc. +type SKUCapability struct { + // Name - The name of capability, The capability information in the specified sku, including file encryption, network acls, change notification, etc. + Name *string `json:"name,omitempty"` + // Value - A string value to indicate states of given capability. Possibly 'true' or 'false'. + Value *string `json:"value,omitempty"` +} + +// SkuListResult the response from the List Storage SKUs operation. +type SkuListResult struct { + autorest.Response `json:"-"` + // Value - Get the list result of storage SKUs and their properties. + Value *[]Sku `json:"value,omitempty"` +} + +// Usage describes Storage Resource Usage. +type Usage struct { + // Unit - Gets the unit of measurement. Possible values include: 'Count', 'Bytes', 'Seconds', 'Percent', 'CountsPerSecond', 'BytesPerSecond' + Unit UsageUnit `json:"unit,omitempty"` + // CurrentValue - Gets the current count of the allocated resources in the subscription. + CurrentValue *int32 `json:"currentValue,omitempty"` + // Limit - Gets the maximum count of the resources that can be allocated in the subscription. + Limit *int32 `json:"limit,omitempty"` + // Name - Gets the name of the type of usage. + Name *UsageName `json:"name,omitempty"` +} + +// UsageListResult the response from the List Usages operation. +type UsageListResult struct { + autorest.Response `json:"-"` + // Value - Gets or sets the list of Storage Resource Usages. + Value *[]Usage `json:"value,omitempty"` +} + +// UsageName the usage names that can be used; currently limited to StorageAccount. +type UsageName struct { + // Value - Gets a string describing the resource name. + Value *string `json:"value,omitempty"` + // LocalizedValue - Gets a localized string describing the resource name. + LocalizedValue *string `json:"localizedValue,omitempty"` +} + +// VirtualNetworkRule virtual Network rule. +type VirtualNetworkRule struct { + // VirtualNetworkResourceID - Resource ID of a subnet, for example: /subscriptions/{subscriptionId}/resourceGroups/{groupName}/providers/Microsoft.Network/virtualNetworks/{vnetName}/subnets/{subnetName}. 
+ VirtualNetworkResourceID *string `json:"id,omitempty"` + // Action - The action of virtual network rule. Possible values include: 'Allow' + Action Action `json:"action,omitempty"` + // State - Gets the state of virtual network rule. Possible values include: 'StateProvisioning', 'StateDeprovisioning', 'StateSucceeded', 'StateFailed', 'StateNetworkSourceDeleted' + State State `json:"state,omitempty"` +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/operations.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/operations.go new file mode 100644 index 000000000..b36bc6d68 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/operations.go @@ -0,0 +1,98 @@ +package storage + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// OperationsClient is the the Azure Storage Management API. +type OperationsClient struct { + BaseClient +} + +// NewOperationsClient creates an instance of the OperationsClient client. +func NewOperationsClient(subscriptionID string) OperationsClient { + return NewOperationsClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewOperationsClientWithBaseURI creates an instance of the OperationsClient client. +func NewOperationsClientWithBaseURI(baseURI string, subscriptionID string) OperationsClient { + return OperationsClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List lists all of the available Storage Rest API operations. +func (client OperationsClient) List(ctx context.Context) (result OperationListResult, err error) { + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.OperationsClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "storage.OperationsClient", "List", resp, "Failure sending request") + return + } + + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.OperationsClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. 
+func (client OperationsClient) ListPreparer(ctx context.Context) (*http.Request, error) { + const APIVersion = "2017-10-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPath("/providers/Microsoft.Storage/operations"), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client OperationsClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client OperationsClient) ListResponder(resp *http.Response) (result OperationListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/skus.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/skus.go new file mode 100644 index 000000000..20abb8c90 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/skus.go @@ -0,0 +1,102 @@ +package storage + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +import ( + "context" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/azure" + "net/http" +) + +// SkusClient is the the Azure Storage Management API. +type SkusClient struct { + BaseClient +} + +// NewSkusClient creates an instance of the SkusClient client. +func NewSkusClient(subscriptionID string) SkusClient { + return NewSkusClientWithBaseURI(DefaultBaseURI, subscriptionID) +} + +// NewSkusClientWithBaseURI creates an instance of the SkusClient client. +func NewSkusClientWithBaseURI(baseURI string, subscriptionID string) SkusClient { + return SkusClient{NewWithBaseURI(baseURI, subscriptionID)} +} + +// List lists the available SKUs supported by Microsoft.Storage for given subscription. 
+func (client SkusClient) List(ctx context.Context) (result SkuListResult, err error) { + req, err := client.ListPreparer(ctx) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.SkusClient", "List", nil, "Failure preparing request") + return + } + + resp, err := client.ListSender(req) + if err != nil { + result.Response = autorest.Response{Response: resp} + err = autorest.NewErrorWithError(err, "storage.SkusClient", "List", resp, "Failure sending request") + return + } + + result, err = client.ListResponder(resp) + if err != nil { + err = autorest.NewErrorWithError(err, "storage.SkusClient", "List", resp, "Failure responding to request") + } + + return +} + +// ListPreparer prepares the List request. +func (client SkusClient) ListPreparer(ctx context.Context) (*http.Request, error) { + pathParameters := map[string]interface{}{ + "subscriptionId": autorest.Encode("path", client.SubscriptionID), + } + + const APIVersion = "2017-10-01" + queryParameters := map[string]interface{}{ + "api-version": APIVersion, + } + + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(client.BaseURI), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Storage/skus", pathParameters), + autorest.WithQueryParameters(queryParameters)) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) +} + +// ListSender sends the List request. The method will close the +// http.Response Body if it receives an error. +func (client SkusClient) ListSender(req *http.Request) (*http.Response, error) { + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) +} + +// ListResponder handles the response to the List request. The method always +// closes the http.Response Body. +func (client SkusClient) ListResponder(resp *http.Response) (result SkuListResult, err error) { + err = autorest.Respond( + resp, + client.ByInspecting(), + azure.WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&result), + autorest.ByClosing()) + result.Response = autorest.Response{Response: resp} + return +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/usage.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/usage.go similarity index 85% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/storage/usage.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/usage.go index b12a6d315..16942e6f7 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/usage.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/usage.go @@ -14,11 +14,11 @@ package storage // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. import ( + "context" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" "net/http" @@ -26,7 +26,7 @@ import ( // UsageClient is the the Azure Storage Management API. type UsageClient struct { - ManagementClient + BaseClient } // NewUsageClient creates an instance of the UsageClient client. 
@@ -39,10 +39,9 @@ func NewUsageClientWithBaseURI(baseURI string, subscriptionID string) UsageClien return UsageClient{NewWithBaseURI(baseURI, subscriptionID)} } -// List gets the current usage count and the limit for the resources under the -// subscription. -func (client UsageClient) List() (result UsageListResult, err error) { - req, err := client.ListPreparer() +// List gets the current usage count and the limit for the resources under the subscription. +func (client UsageClient) List(ctx context.Context) (result UsageListResult, err error) { + req, err := client.ListPreparer(ctx) if err != nil { err = autorest.NewErrorWithError(err, "storage.UsageClient", "List", nil, "Failure preparing request") return @@ -64,12 +63,12 @@ func (client UsageClient) List() (result UsageListResult, err error) { } // ListPreparer prepares the List request. -func (client UsageClient) ListPreparer() (*http.Request, error) { +func (client UsageClient) ListPreparer(ctx context.Context) (*http.Request, error) { pathParameters := map[string]interface{}{ "subscriptionId": autorest.Encode("path", client.SubscriptionID), } - const APIVersion = "2016-12-01" + const APIVersion = "2017-10-01" queryParameters := map[string]interface{}{ "api-version": APIVersion, } @@ -79,13 +78,14 @@ func (client UsageClient) ListPreparer() (*http.Request, error) { autorest.WithBaseURL(client.BaseURI), autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/Microsoft.Storage/usages", pathParameters), autorest.WithQueryParameters(queryParameters)) - return preparer.Prepare(&http.Request{}) + return preparer.Prepare((&http.Request{}).WithContext(ctx)) } // ListSender sends the List request. The method will close the // http.Response Body if it receives an error. func (client UsageClient) ListSender(req *http.Request) (*http.Response, error) { - return autorest.SendWithSender(client, req) + return autorest.SendWithSender(client, req, + azure.DoRetryWithRegistration(client.Client)) } // ListResponder handles the response to the List request. The method always diff --git a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/version.go b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/version.go similarity index 80% rename from vendor/github.com/Azure/azure-sdk-for-go/arm/storage/version.go rename to vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/version.go index ac97c159f..884a2478f 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/arm/storage/version.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage/version.go @@ -1,5 +1,7 @@ package storage +import "github.com/Azure/azure-sdk-for-go/version" + // Copyright (c) Microsoft and contributors. All rights reserved. // // Licensed under the Apache License, Version 2.0 (the "License"); @@ -14,16 +16,15 @@ package storage // See the License for the specific language governing permissions and // limitations under the License. // -// Code generated by Microsoft (R) AutoRest Code Generator 1.0.1.0 -// Changes may cause incorrect behavior and will be lost if the code is -// regenerated. +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. // UserAgent returns the UserAgent string to use when sending http.Requests. 
func UserAgent() string { - return "Azure-SDK-For-Go/v10.0.2-beta arm-storage/2016-12-01" + return "Azure-SDK-For-Go/" + version.Number + " storage/2017-10-01" } // Version returns the semantic version (see http://semver.org) of the client. func Version() string { - return "v10.0.2-beta" + return version.Number } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/README.md b/vendor/github.com/Azure/azure-sdk-for-go/storage/README.md new file mode 100644 index 000000000..459b45831 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/README.md @@ -0,0 +1,22 @@ +# Azure Storage SDK for Go (Preview) + +:exclamation: IMPORTANT: This package is in maintenance only and will be deprecated in the +future. Please use one of the following packages instead. + +| Service | Import Path/Repo | +|---------|------------------| +| Storage - Blobs | [github.com/Azure/azure-storage-blob-go](https://github.com/Azure/azure-storage-blob-go) | +| Storage - Files | [github.com/Azure/azure-storage-file-go](https://github.com/Azure/azure-storage-file-go) | +| Storage - Queues | [github.com/Azure/azure-storage-queue-go](https://github.com/Azure/azure-storage-queue-go) | + +The `github.com/Azure/azure-sdk-for-go/storage` package is used to manage +[Azure Storage](https://docs.microsoft.com/en-us/azure/storage/) data plane +resources: containers, blobs, tables, and queues. + +To manage storage *accounts* use Azure Resource Manager (ARM) via the packages +at [github.com/Azure/azure-sdk-for-go/services/storage](https://github.com/Azure/azure-sdk-for-go/tree/master/services/storage). + +This package also supports the [Azure Storage +Emulator](https://azure.microsoft.com/documentation/articles/storage-use-emulator/) +(Windows only). + diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/appendblob.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/appendblob.go index 3292cb556..8b5b96d48 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/appendblob.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/appendblob.go @@ -1,7 +1,23 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" + "crypto/md5" + "encoding/base64" "fmt" "net/http" "net/url" @@ -11,6 +27,8 @@ import ( // PutAppendBlob initializes an empty append blob with specified name. An // append blob must be created using this method before appending blocks. // +// See CreateBlockBlobFromReader for more info on creating blobs. 
+// // See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob func (b *Blob) PutAppendBlob(options *PutBlobOptions) error { params := url.Values{} @@ -29,8 +47,7 @@ func (b *Blob) PutAppendBlob(options *PutBlobOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + return b.respondCreation(resp, BlobTypeAppend) } // AppendBlockOptions includes the options for an append block operation @@ -44,6 +61,7 @@ type AppendBlockOptions struct { IfMatch string `header:"If-Match"` IfNoneMatch string `header:"If-None-Match"` RequestID string `header:"x-ms-client-request-id"` + ContentMD5 bool } // AppendBlock appends a block to an append blob. @@ -58,6 +76,10 @@ func (b *Blob) AppendBlock(chunk []byte, options *AppendBlockOptions) error { if options != nil { params = addTimeout(params, options.Timeout) headers = mergeHeaders(headers, headersFromStruct(*options)) + if options.ContentMD5 { + md5sum := md5.Sum(chunk) + headers[headerContentMD5] = base64.StdEncoding.EncodeToString(md5sum[:]) + } } uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params) @@ -65,6 +87,5 @@ func (b *Blob) AppendBlock(chunk []byte, options *AppendBlockOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + return b.respondCreation(resp, BlobTypeAppend) } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/authorization.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/authorization.go index 608bf3133..76794c305 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/authorization.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/authorization.go @@ -1,6 +1,20 @@ // Package storage provides clients for Microsoft Azure Storage Services. package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "bytes" "fmt" @@ -41,16 +55,18 @@ const ( ) func (c *Client) addAuthorizationHeader(verb, url string, headers map[string]string, auth authentication) (map[string]string, error) { - authHeader, err := c.getSharedKey(verb, url, headers, auth) - if err != nil { - return nil, err + if !c.sasClient { + authHeader, err := c.getSharedKey(verb, url, headers, auth) + if err != nil { + return nil, err + } + headers[headerAuthorization] = authHeader } - headers[headerAuthorization] = authHeader return headers, nil } func (c *Client) getSharedKey(verb, url string, headers map[string]string, auth authentication) (string, error) { - canRes, err := c.buildCanonicalizedResource(url, auth) + canRes, err := c.buildCanonicalizedResource(url, auth, false) if err != nil { return "", err } @@ -62,15 +78,18 @@ func (c *Client) getSharedKey(verb, url string, headers map[string]string, auth return c.createAuthorizationHeader(canString, auth), nil } -func (c *Client) buildCanonicalizedResource(uri string, auth authentication) (string, error) { +func (c *Client) buildCanonicalizedResource(uri string, auth authentication, sas bool) (string, error) { errMsg := "buildCanonicalizedResource error: %s" u, err := url.Parse(uri) if err != nil { return "", fmt.Errorf(errMsg, err.Error()) } - cr := bytes.NewBufferString("/") - cr.WriteString(c.getCanonicalizedAccountName()) + cr := bytes.NewBufferString("") + if c.accountName != StorageEmulatorAccountName || !sas { + cr.WriteString("/") + cr.WriteString(c.getCanonicalizedAccountName()) + } if len(u.Path) > 0 { // Any portion of the CanonicalizedResource string that is derived from diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/blob.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/blob.go index dd9eb386c..1d2248625 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/blob.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/blob.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "encoding/xml" "errors" @@ -90,7 +104,7 @@ type BlobProperties struct { CacheControl string `xml:"Cache-Control" header:"x-ms-blob-cache-control"` ContentLanguage string `xml:"Cache-Language" header:"x-ms-blob-content-language"` ContentDisposition string `xml:"Content-Disposition" header:"x-ms-blob-content-disposition"` - BlobType BlobType `xml:"x-ms-blob-blob-type"` + BlobType BlobType `xml:"BlobType"` SequenceNumber int64 `xml:"x-ms-blob-sequence-number"` CopyID string `xml:"CopyId"` CopyStatus string `xml:"CopyStatus"` @@ -126,17 +140,16 @@ func (b *Blob) Exists() (bool, error) { headers := b.Container.bsc.client.getStandardHeaders() resp, err := b.Container.bsc.client.exec(http.MethodHead, uri, headers, nil, b.Container.bsc.auth) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusOK || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusOK, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusOK, nil } } return false, err } // GetURL gets the canonical URL to the blob with the specified name in the -// specified container. If name is not specified, the canonical URL for the entire -// container is obtained. +// specified container. // This method does not create a publicly accessible URL if the blob or container // is private and this method does not check if the blob exists. func (b *Blob) GetURL() string { @@ -174,11 +187,17 @@ type BlobRange struct { } func (br BlobRange) String() string { + if br.End == 0 { + return fmt.Sprintf("bytes=%d-", br.Start) + } return fmt.Sprintf("bytes=%d-%d", br.Start, br.End) } // Get returns a stream to read the blob. Caller must call both Read and Close() // to correctly close the underlying connection. +// +// See the GetRange method for use with a Range header. +// // See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Get-Blob func (b *Blob) Get(options *GetBlobOptions) (io.ReadCloser, error) { rangeOptions := GetBlobRangeOptions{ @@ -189,13 +208,13 @@ func (b *Blob) Get(options *GetBlobOptions) (io.ReadCloser, error) { return nil, err } - if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err := checkRespCode(resp, []int{http.StatusOK}); err != nil { return nil, err } - if err := b.writePropoerties(resp.headers); err != nil { - return resp.body, err + if err := b.writeProperties(resp.Header, true); err != nil { + return resp.Body, err } - return resp.body, nil + return resp.Body, nil } // GetRange reads the specified range of a blob to a stream. 
The bytesRange @@ -209,23 +228,27 @@ func (b *Blob) GetRange(options *GetBlobRangeOptions) (io.ReadCloser, error) { return nil, err } - if err := checkRespCode(resp.statusCode, []int{http.StatusPartialContent}); err != nil { + if err := checkRespCode(resp, []int{http.StatusPartialContent}); err != nil { return nil, err } - if err := b.writePropoerties(resp.headers); err != nil { - return resp.body, err + // Content-Length header should not be updated, as the service returns the range length + // (which is not alwys the full blob length) + if err := b.writeProperties(resp.Header, false); err != nil { + return resp.Body, err } - return resp.body, nil + return resp.Body, nil } -func (b *Blob) getRange(options *GetBlobRangeOptions) (*storageResponse, error) { +func (b *Blob) getRange(options *GetBlobRangeOptions) (*http.Response, error) { params := url.Values{} headers := b.Container.bsc.client.getStandardHeaders() if options != nil { if options.Range != nil { headers["Range"] = options.Range.String() - headers["x-ms-range-get-content-md5"] = fmt.Sprintf("%v", options.GetRangeContentMD5) + if options.GetRangeContentMD5 { + headers["x-ms-range-get-content-md5"] = "true" + } } if options.GetBlobOptions != nil { headers = mergeHeaders(headers, headersFromStruct(*options.GetBlobOptions)) @@ -270,13 +293,13 @@ func (b *Blob) CreateSnapshot(options *SnapshotOptions) (snapshotTimestamp *time if err != nil || resp == nil { return nil, err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{http.StatusCreated}); err != nil { + if err := checkRespCode(resp, []int{http.StatusCreated}); err != nil { return nil, err } - snapshotResponse := resp.headers.Get(http.CanonicalHeaderKey("x-ms-snapshot")) + snapshotResponse := resp.Header.Get(http.CanonicalHeaderKey("x-ms-snapshot")) if snapshotResponse != "" { snapshotTimestamp, err := time.Parse(time.RFC3339, snapshotResponse) if err != nil { @@ -317,23 +340,25 @@ func (b *Blob) GetProperties(options *GetBlobPropertiesOptions) error { if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { return err } - return b.writePropoerties(resp.headers) + return b.writeProperties(resp.Header, true) } -func (b *Blob) writePropoerties(h http.Header) error { +func (b *Blob) writeProperties(h http.Header, includeContentLen bool) error { var err error - var contentLength int64 - contentLengthStr := h.Get("Content-Length") - if contentLengthStr != "" { - contentLength, err = strconv.ParseInt(contentLengthStr, 0, 64) - if err != nil { - return err + contentLength := b.Properties.ContentLength + if includeContentLen { + contentLengthStr := h.Get("Content-Length") + if contentLengthStr != "" { + contentLength, err = strconv.ParseInt(contentLengthStr, 0, 64) + if err != nil { + return err + } } } @@ -425,8 +450,8 @@ func (b *Blob) SetProperties(options *SetBlobPropertiesOptions) error { uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params) if b.Properties.BlobType == BlobTypePage { - headers = addToHeaders(headers, "x-ms-blob-content-length", fmt.Sprintf("byte %v", b.Properties.ContentLength)) - if options != nil || options.SequenceNumberAction != nil { + headers = addToHeaders(headers, "x-ms-blob-content-length", fmt.Sprintf("%v", b.Properties.ContentLength)) + if options != nil && options.SequenceNumberAction != 
nil { headers = addToHeaders(headers, "x-ms-sequence-number-action", string(*options.SequenceNumberAction)) if *options.SequenceNumberAction != SequenceNumberActionIncrement { headers = addToHeaders(headers, "x-ms-blob-sequence-number", fmt.Sprintf("%v", b.Properties.SequenceNumber)) @@ -438,8 +463,8 @@ func (b *Blob) SetProperties(options *SetBlobPropertiesOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusOK}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusOK}) } // SetBlobMetadataOptions includes the options for a set blob metadata operation @@ -476,8 +501,8 @@ func (b *Blob) SetMetadata(options *SetBlobMetadataOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusOK}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusOK}) } // GetBlobMetadataOptions includes the options for a get blob metadata operation @@ -513,38 +538,18 @@ func (b *Blob) GetMetadata(options *GetBlobMetadataOptions) error { if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err := checkRespCode(resp, []int{http.StatusOK}); err != nil { return err } - b.writeMetadata(resp.headers) + b.writeMetadata(resp.Header) return nil } func (b *Blob) writeMetadata(h http.Header) { - metadata := make(map[string]string) - for k, v := range h { - // Can't trust CanonicalHeaderKey() to munge case - // reliably. "_" is allowed in identifiers: - // https://msdn.microsoft.com/en-us/library/azure/dd179414.aspx - // https://msdn.microsoft.com/library/aa664670(VS.71).aspx - // http://tools.ietf.org/html/rfc7230#section-3.2 - // ...but "_" is considered invalid by - // CanonicalMIMEHeaderKey in - // https://golang.org/src/net/textproto/reader.go?s=14615:14659#L542 - // so k can be "X-Ms-Meta-Lol" or "x-ms-meta-lol_rofl". 
- k = strings.ToLower(k) - if len(v) == 0 || !strings.HasPrefix(k, strings.ToLower(userDefinedMetadataHeaderPrefix)) { - continue - } - // metadata["lol"] = content of the last X-Ms-Meta-Lol header - k = k[len(userDefinedMetadataHeaderPrefix):] - metadata[k] = v[len(v)-1] - } - - b.Metadata = BlobMetadata(metadata) + b.Metadata = BlobMetadata(writeMetadata(h)) } // DeleteBlobOptions includes the options for a delete blob operation @@ -569,8 +574,8 @@ func (b *Blob) Delete(options *DeleteBlobOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusAccepted}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusAccepted}) } // DeleteIfExists deletes the given blob from the specified container If the @@ -580,15 +585,15 @@ func (b *Blob) Delete(options *DeleteBlobOptions) error { func (b *Blob) DeleteIfExists(options *DeleteBlobOptions) (bool, error) { resp, err := b.delete(options) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusAccepted, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusAccepted, nil } } return false, err } -func (b *Blob) delete(options *DeleteBlobOptions) (*storageResponse, error) { +func (b *Blob) delete(options *DeleteBlobOptions) (*http.Response, error) { params := url.Values{} headers := b.Container.bsc.client.getStandardHeaders() @@ -615,3 +620,13 @@ func pathForResource(container, name string) string { } return fmt.Sprintf("/%s", container) } + +func (b *Blob) respondCreation(resp *http.Response, bt BlobType) error { + defer drainRespBody(resp) + err := checkRespCode(resp, []int{http.StatusCreated}) + if err != nil { + return err + } + b.Properties.BlobType = bt + return nil +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go index 43173d3a4..31894dbfc 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "errors" "fmt" @@ -8,68 +22,126 @@ import ( "time" ) -// GetSASURIWithSignedIPAndProtocol creates an URL to the specified blob which contains the Shared -// Access Signature with specified permissions and expiration time. Also includes signedIPRange and allowed protocols. -// If old API version is used but no signedIP is passed (ie empty string) then this should still work. -// We only populate the signedIP when it non-empty. +// OverrideHeaders defines overridable response heaedrs in +// a request using a SAS URI. 
+// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas +type OverrideHeaders struct { + CacheControl string + ContentDisposition string + ContentEncoding string + ContentLanguage string + ContentType string +} + +// BlobSASOptions are options to construct a blob SAS +// URI. +// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas +type BlobSASOptions struct { + BlobServiceSASPermissions + OverrideHeaders + SASOptions +} + +// BlobServiceSASPermissions includes the available permissions for +// blob service SAS URI. +type BlobServiceSASPermissions struct { + Read bool + Add bool + Create bool + Write bool + Delete bool +} + +func (p BlobServiceSASPermissions) buildString() string { + permissions := "" + if p.Read { + permissions += "r" + } + if p.Add { + permissions += "a" + } + if p.Create { + permissions += "c" + } + if p.Write { + permissions += "w" + } + if p.Delete { + permissions += "d" + } + return permissions +} + +// GetSASURI creates an URL to the blob which contains the Shared +// Access Signature with the specified options. // -// See https://msdn.microsoft.com/en-us/library/azure/ee395415.aspx -func (b *Blob) GetSASURIWithSignedIPAndProtocol(expiry time.Time, permissions string, signedIPRange string, HTTPSOnly bool) (string, error) { - var ( - signedPermissions = permissions - blobURL = b.GetURL() - ) - canonicalizedResource, err := b.Container.bsc.client.buildCanonicalizedResource(blobURL, b.Container.bsc.auth) +// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas +func (b *Blob) GetSASURI(options BlobSASOptions) (string, error) { + uri := b.GetURL() + signedResource := "b" + canonicalizedResource, err := b.Container.bsc.client.buildCanonicalizedResource(uri, b.Container.bsc.auth, true) if err != nil { return "", err } - // "The canonicalizedresouce portion of the string is a canonical path to the signed resource. - // It must include the service name (blob, table, queue or file) for version 2015-02-21 or - // later, the storage account name, and the resource name, and must be URL-decoded. - // -- https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx + permissions := options.BlobServiceSASPermissions.buildString() + return b.Container.bsc.client.blobAndFileSASURI(options.SASOptions, uri, permissions, canonicalizedResource, signedResource, options.OverrideHeaders) +} + +func (c *Client) blobAndFileSASURI(options SASOptions, uri, permissions, canonicalizedResource, signedResource string, headers OverrideHeaders) (string, error) { + start := "" + if options.Start != (time.Time{}) { + start = options.Start.UTC().Format(time.RFC3339) + } + + expiry := options.Expiry.UTC().Format(time.RFC3339) // We need to replace + with %2b first to avoid being treated as a space (which is correct for query strings, but not the path component). 
canonicalizedResource = strings.Replace(canonicalizedResource, "+", "%2b", -1) - canonicalizedResource, err = url.QueryUnescape(canonicalizedResource) + canonicalizedResource, err := url.QueryUnescape(canonicalizedResource) if err != nil { return "", err } - signedExpiry := expiry.UTC().Format(time.RFC3339) - - //If blob name is missing, resource is a container - signedResource := "c" - if len(b.Name) > 0 { - signedResource = "b" - } - - protocols := "https,http" - if HTTPSOnly { + protocols := "" + if options.UseHTTPS { protocols = "https" } - stringToSign, err := blobSASStringToSign(b.Container.bsc.client.apiVersion, canonicalizedResource, signedExpiry, signedPermissions, signedIPRange, protocols) + stringToSign, err := blobSASStringToSign(permissions, start, expiry, canonicalizedResource, options.Identifier, options.IP, protocols, c.apiVersion, headers) if err != nil { return "", err } - sig := b.Container.bsc.client.computeHmac256(stringToSign) + sig := c.computeHmac256(stringToSign) sasParams := url.Values{ - "sv": {b.Container.bsc.client.apiVersion}, - "se": {signedExpiry}, + "sv": {c.apiVersion}, + "se": {expiry}, "sr": {signedResource}, - "sp": {signedPermissions}, + "sp": {permissions}, "sig": {sig}, } - if b.Container.bsc.client.apiVersion >= "2015-04-05" { - sasParams.Add("spr", protocols) - if signedIPRange != "" { - sasParams.Add("sip", signedIPRange) + if start != "" { + sasParams.Add("st", start) + } + + if c.apiVersion >= "2015-04-05" { + if protocols != "" { + sasParams.Add("spr", protocols) + } + if options.IP != "" { + sasParams.Add("sip", options.IP) } } - sasURL, err := url.Parse(blobURL) + // Add override response hedaers + addQueryParameter(sasParams, "rscc", headers.CacheControl) + addQueryParameter(sasParams, "rscd", headers.ContentDisposition) + addQueryParameter(sasParams, "rsce", headers.ContentEncoding) + addQueryParameter(sasParams, "rscl", headers.ContentLanguage) + addQueryParameter(sasParams, "rsct", headers.ContentType) + + sasURL, err := url.Parse(uri) if err != nil { return "", err } @@ -77,16 +149,12 @@ func (b *Blob) GetSASURIWithSignedIPAndProtocol(expiry time.Time, permissions st return sasURL.String(), nil } -// GetSASURI creates an URL to the specified blob which contains the Shared -// Access Signature with specified permissions and expiration time. 
-// -// See https://msdn.microsoft.com/en-us/library/azure/ee395415.aspx -func (b *Blob) GetSASURI(expiry time.Time, permissions string) (string, error) { - return b.GetSASURIWithSignedIPAndProtocol(expiry, permissions, "", false) -} - -func blobSASStringToSign(signedVersion, canonicalizedResource, signedExpiry, signedPermissions string, signedIP string, protocols string) (string, error) { - var signedStart, signedIdentifier, rscc, rscd, rsce, rscl, rsct string +func blobSASStringToSign(signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedIP, protocols, signedVersion string, headers OverrideHeaders) (string, error) { + rscc := headers.CacheControl + rscd := headers.ContentDisposition + rsce := headers.ContentEncoding + rscl := headers.ContentLanguage + rsct := headers.ContentType if signedVersion >= "2015-02-21" { canonicalizedResource = "/blob" + canonicalizedResource diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/blobserviceclient.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/blobserviceclient.go index 450b20f96..f6a9da282 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/blobserviceclient.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/blobserviceclient.go @@ -1,9 +1,26 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( + "encoding/xml" + "fmt" "net/http" "net/url" "strconv" + "strings" ) // BlobStorageClient contains operations for Microsoft Azure Blob Storage @@ -45,6 +62,21 @@ func (b *BlobStorageClient) GetContainerReference(name string) *Container { } } +// GetContainerReferenceFromSASURI returns a Container object for the specified +// container SASURI +func GetContainerReferenceFromSASURI(sasuri url.URL) (*Container, error) { + path := strings.Split(sasuri.Path, "/") + if len(path) <= 1 { + return nil, fmt.Errorf("could not find a container in URI: %s", sasuri.String()) + } + cli := newSASClient().GetBlobService() + return &Container{ + bsc: &cli, + Name: path[1], + sasuri: sasuri, + }, nil +} + // ListContainers returns the list of containers in a storage account along with // pagination token and other response details. 
// @@ -54,21 +86,53 @@ func (b BlobStorageClient) ListContainers(params ListContainersParameters) (*Con uri := b.client.getEndpoint(blobServiceName, "", q) headers := b.client.getStandardHeaders() - var out ContainerListResponse + type ContainerAlias struct { + bsc *BlobStorageClient + Name string `xml:"Name"` + Properties ContainerProperties `xml:"Properties"` + Metadata BlobMetadata + sasuri url.URL + } + type ContainerListResponseAlias struct { + XMLName xml.Name `xml:"EnumerationResults"` + Xmlns string `xml:"xmlns,attr"` + Prefix string `xml:"Prefix"` + Marker string `xml:"Marker"` + NextMarker string `xml:"NextMarker"` + MaxResults int64 `xml:"MaxResults"` + Containers []ContainerAlias `xml:"Containers>Container"` + } + + var outAlias ContainerListResponseAlias resp, err := b.client.exec(http.MethodGet, uri, headers, nil, b.auth) if err != nil { return nil, err } - defer resp.body.Close() - err = xmlUnmarshal(resp.body, &out) + defer resp.Body.Close() + err = xmlUnmarshal(resp.Body, &outAlias) if err != nil { return nil, err } - // assign our client to the newly created Container objects - for i := range out.Containers { - out.Containers[i].bsc = &b + out := ContainerListResponse{ + XMLName: outAlias.XMLName, + Xmlns: outAlias.Xmlns, + Prefix: outAlias.Prefix, + Marker: outAlias.Marker, + NextMarker: outAlias.NextMarker, + MaxResults: outAlias.MaxResults, + Containers: make([]Container, len(outAlias.Containers)), } + for i, cnt := range outAlias.Containers { + out.Containers[i] = Container{ + bsc: &b, + Name: cnt.Name, + Properties: cnt.Properties, + Metadata: map[string]string(cnt.Metadata), + sasuri: cnt.sasuri, + } + } + return &out, err } @@ -93,3 +157,26 @@ func (p ListContainersParameters) getParameters() url.Values { return out } + +func writeMetadata(h http.Header) map[string]string { + metadata := make(map[string]string) + for k, v := range h { + // Can't trust CanonicalHeaderKey() to munge case + // reliably. "_" is allowed in identifiers: + // https://msdn.microsoft.com/en-us/library/azure/dd179414.aspx + // https://msdn.microsoft.com/library/aa664670(VS.71).aspx + // http://tools.ietf.org/html/rfc7230#section-3.2 + // ...but "_" is considered invalid by + // CanonicalMIMEHeaderKey in + // https://golang.org/src/net/textproto/reader.go?s=14615:14659#L542 + // so k can be "X-Ms-Meta-Lol" or "x-ms-meta-lol_rofl". + k = strings.ToLower(k) + if len(v) == 0 || !strings.HasPrefix(k, strings.ToLower(userDefinedMetadataHeaderPrefix)) { + continue + } + // metadata["lol"] = content of the last X-Ms-Meta-Lol header + k = k[len(userDefinedMetadataHeaderPrefix):] + metadata[k] = v[len(v)-1] + } + return metadata +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go index 09b13cc18..c9c62d799 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" "encoding/xml" @@ -7,6 +21,7 @@ import ( "io" "net/http" "net/url" + "strconv" "strings" "time" ) @@ -67,6 +82,8 @@ type BlockResponse struct { // CreateBlockBlob initializes an empty block blob with no blocks. // +// See CreateBlockBlobFromReader for more info on creating blobs. +// // See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob func (b *Blob) CreateBlockBlob(options *PutBlobOptions) error { return b.CreateBlockBlobFromReader(nil, options) @@ -76,16 +93,46 @@ func (b *Blob) CreateBlockBlob(options *PutBlobOptions) error { // reader. Size must be the number of bytes read from reader. To // create an empty blob, use size==0 and reader==nil. // +// Any headers set in blob.Properties or metadata in blob.Metadata +// will be set on the blob. +// // The API rejects requests with size > 256 MiB (but this limit is not // checked by the SDK). To write a larger blob, use CreateBlockBlob, // PutBlock, and PutBlockList. // +// To create a blob from scratch, call container.GetBlobReference() to +// get an empty blob, fill in blob.Properties and blob.Metadata as +// appropriate then call this method. +// // See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob func (b *Blob) CreateBlockBlobFromReader(blob io.Reader, options *PutBlobOptions) error { params := url.Values{} headers := b.Container.bsc.client.getStandardHeaders() headers["x-ms-blob-type"] = string(BlobTypeBlock) - headers["Content-Length"] = fmt.Sprintf("%d", b.Properties.ContentLength) + + headers["Content-Length"] = "0" + var n int64 + var err error + if blob != nil { + type lener interface { + Len() int + } + // TODO(rjeczalik): handle io.ReadSeeker, in case blob is *os.File etc. 
+ if l, ok := blob.(lener); ok { + n = int64(l.Len()) + } else { + var buf bytes.Buffer + n, err = io.Copy(&buf, blob) + if err != nil { + return err + } + blob = &buf + } + + headers["Content-Length"] = strconv.FormatInt(n, 10) + } + b.Properties.ContentLength = n + headers = mergeHeaders(headers, headersFromStruct(b.Properties)) headers = b.Container.bsc.client.addMetadataToHeaders(headers, b.Metadata) @@ -99,8 +146,7 @@ func (b *Blob) CreateBlockBlobFromReader(blob io.Reader, options *PutBlobOptions if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + return b.respondCreation(resp, BlobTypeBlock) } // PutBlockOptions includes the options for a put block operation @@ -148,8 +194,7 @@ func (b *Blob) PutBlockWithLength(blockID string, size uint64, blob io.Reader, o if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + return b.respondCreation(resp, BlobTypeBlock) } // PutBlockListOptions includes the options for a put block list operation @@ -184,8 +229,8 @@ func (b *Blob) PutBlockList(blocks []Block, options *PutBlockListOptions) error if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusCreated}) } // GetBlockListOptions includes the options for a get block list operation @@ -218,8 +263,8 @@ func (b *Blob) GetBlockList(blockType BlockListType, options *GetBlockListOption if err != nil { return out, err } - defer resp.body.Close() + defer resp.Body.Close() - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) return out, err } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go index 8671e52eb..2930824b2 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/client.go @@ -1,9 +1,22 @@ // Package storage provides clients for Microsoft Azure Storage Services. package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bufio" - "bytes" "encoding/base64" "encoding/json" "encoding/xml" @@ -17,9 +30,11 @@ import ( "net/url" "regexp" "runtime" + "strconv" "strings" "time" + "github.com/Azure/azure-sdk-for-go/version" "github.com/Azure/go-autorest/autorest" "github.com/Azure/go-autorest/autorest/azure" ) @@ -33,7 +48,9 @@ const ( // basic client is created. 
DefaultAPIVersion = "2016-05-31" - defaultUseHTTPS = true + defaultUseHTTPS = true + defaultRetryAttempts = 5 + defaultRetryDuration = time.Second * 5 // StorageEmulatorAccountName is the fixed storage account used by Azure Storage Emulator StorageEmulatorAccountName = "devstoreaccount1" @@ -53,10 +70,28 @@ const ( userAgentHeader = "User-Agent" userDefinedMetadataHeaderPrefix = "x-ms-meta-" + + connectionStringAccountName = "accountname" + connectionStringAccountKey = "accountkey" + connectionStringEndpointSuffix = "endpointsuffix" + connectionStringEndpointProtocol = "defaultendpointsprotocol" + + connectionStringBlobEndpoint = "blobendpoint" + connectionStringFileEndpoint = "fileendpoint" + connectionStringQueueEndpoint = "queueendpoint" + connectionStringTableEndpoint = "tableendpoint" + connectionStringSAS = "sharedaccesssignature" ) var ( - validStorageAccount = regexp.MustCompile("^[0-9a-z]{3,24}$") + validStorageAccount = regexp.MustCompile("^[0-9a-z]{3,24}$") + defaultValidStatusCodes = []int{ + http.StatusRequestTimeout, // 408 + http.StatusInternalServerError, // 500 + http.StatusBadGateway, // 502 + http.StatusServiceUnavailable, // 503 + http.StatusGatewayTimeout, // 504 + } ) // Sender sends a request @@ -75,22 +110,17 @@ type DefaultSender struct { // Send is the default retry strategy in the client func (ds *DefaultSender) Send(c *Client, req *http.Request) (resp *http.Response, err error) { - b := []byte{} - if req.Body != nil { - b, err = ioutil.ReadAll(req.Body) + rr := autorest.NewRetriableRequest(req) + for attempts := 0; attempts < ds.RetryAttempts; attempts++ { + err = rr.Prepare() if err != nil { return resp, err } - } - - for attempts := 0; attempts < ds.RetryAttempts; attempts++ { - if len(b) > 0 { - req.Body = ioutil.NopCloser(bytes.NewBuffer(b)) - } - resp, err = c.HTTPClient.Do(req) + resp, err = c.HTTPClient.Do(rr.Request()) if err != nil || !autorest.ResponseHasStatusCode(resp, ds.ValidStatusCodes...) { return resp, err } + drainRespBody(resp) autorest.DelayForBackoff(ds.RetryDuration, attempts, req.Cancel) ds.attempts = attempts } @@ -118,16 +148,12 @@ type Client struct { baseURL string apiVersion string userAgent string -} - -type storageResponse struct { - statusCode int - headers http.Header - body io.ReadCloser + sasClient bool + accountSASToken url.Values } type odataResponse struct { - storageResponse + resp *http.Response odata odataErrorWrapper } @@ -167,6 +193,7 @@ type odataErrorWrapper struct { type UnexpectedStatusCodeError struct { allowed []int got int + inner error } func (e UnexpectedStatusCodeError) Error() string { @@ -177,7 +204,7 @@ func (e UnexpectedStatusCodeError) Error() string { for _, v := range e.allowed { expected = append(expected, s(v)) } - return fmt.Sprintf("storage: status code from service response is %s; was expecting %s", got, strings.Join(expected, " or ")) + return fmt.Sprintf("storage: status code from service response is %s; was expecting %s. Inner error: %+v", got, strings.Join(expected, " or "), e.inner) } // Got is the actual status code returned by Azure. @@ -185,6 +212,60 @@ func (e UnexpectedStatusCodeError) Got() int { return e.got } +// Inner returns any inner error info. +func (e UnexpectedStatusCodeError) Inner() error { + return e.inner +} + +// NewClientFromConnectionString creates a Client from the connection string. 
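// A minimal caller-side sketch of NewClientFromConnectionString (illustrative
// only; the account name, key and container name are placeholders, not real
// values):
//
//	cs := "DefaultEndpointsProtocol=https;AccountName=myaccount;" +
//		"AccountKey=<base64-encoded-key>;EndpointSuffix=core.windows.net"
//	client, err := storage.NewClientFromConnectionString(cs)
//	if err != nil {
//		return err
//	}
//	blobSvc := client.GetBlobService()
//	cnt := blobSvc.GetContainerReference("mycontainer")
//	exists, err := cnt.Exists()
//	_ = exists
//
// A connection string that carries a SharedAccessSignature instead of an
// account key is routed to NewAccountSASClientFromEndpointToken instead.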
+func NewClientFromConnectionString(input string) (Client, error) { + // build a map of connection string key/value pairs + parts := map[string]string{} + for _, pair := range strings.Split(input, ";") { + if pair == "" { + continue + } + + equalDex := strings.IndexByte(pair, '=') + if equalDex <= 0 { + return Client{}, fmt.Errorf("Invalid connection segment %q", pair) + } + + value := strings.TrimSpace(pair[equalDex+1:]) + key := strings.TrimSpace(strings.ToLower(pair[:equalDex])) + parts[key] = value + } + + // TODO: validate parameter sets? + + if parts[connectionStringAccountName] == StorageEmulatorAccountName { + return NewEmulatorClient() + } + + if parts[connectionStringSAS] != "" { + endpoint := "" + if parts[connectionStringBlobEndpoint] != "" { + endpoint = parts[connectionStringBlobEndpoint] + } else if parts[connectionStringFileEndpoint] != "" { + endpoint = parts[connectionStringFileEndpoint] + } else if parts[connectionStringQueueEndpoint] != "" { + endpoint = parts[connectionStringQueueEndpoint] + } else { + endpoint = parts[connectionStringTableEndpoint] + } + + return NewAccountSASClientFromEndpointToken(endpoint, parts[connectionStringSAS]) + } + + useHTTPS := defaultUseHTTPS + if parts[connectionStringEndpointProtocol] != "" { + useHTTPS = parts[connectionStringEndpointProtocol] == "https" + } + + return NewClient(parts[connectionStringAccountName], parts[connectionStringAccountKey], + parts[connectionStringEndpointSuffix], DefaultAPIVersion, useHTTPS) +} + // NewBasicClient constructs a Client with given storage service name and // key. func NewBasicClient(accountName, accountKey string) (Client, error) { @@ -212,13 +293,13 @@ func NewEmulatorClient() (Client, error) { // NewClient constructs a Client. This should be used if the caller wants // to specify whether to use HTTPS, a specific REST API version or a custom // storage endpoint than Azure Public Cloud. -func NewClient(accountName, accountKey, blobServiceBaseURL, apiVersion string, useHTTPS bool) (Client, error) { +func NewClient(accountName, accountKey, serviceBaseURL, apiVersion string, useHTTPS bool) (Client, error) { var c Client if !IsValidStorageAccount(accountName) { return c, fmt.Errorf("azure: account name is not valid: it must be between 3 and 24 characters, and only may contain numbers and lowercase letters: %v", accountName) } else if accountKey == "" { return c, fmt.Errorf("azure: account key required") - } else if blobServiceBaseURL == "" { + } else if serviceBaseURL == "" { return c, fmt.Errorf("azure: base storage service url required") } @@ -232,19 +313,14 @@ func NewClient(accountName, accountKey, blobServiceBaseURL, apiVersion string, u accountName: accountName, accountKey: key, useHTTPS: useHTTPS, - baseURL: blobServiceBaseURL, + baseURL: serviceBaseURL, apiVersion: apiVersion, + sasClient: false, UseSharedKeyLite: false, Sender: &DefaultSender{ - RetryAttempts: 5, - ValidStatusCodes: []int{ - http.StatusRequestTimeout, // 408 - http.StatusInternalServerError, // 500 - http.StatusBadGateway, // 502 - http.StatusServiceUnavailable, // 503 - http.StatusGatewayTimeout, // 504 - }, - RetryDuration: time.Second * 5, + RetryAttempts: defaultRetryAttempts, + ValidStatusCodes: defaultValidStatusCodes, + RetryDuration: defaultRetryDuration, }, } c.userAgent = c.getDefaultUserAgent() @@ -257,12 +333,90 @@ func IsValidStorageAccount(account string) bool { return validStorageAccount.MatchString(account) } +// NewAccountSASClient contructs a client that uses accountSAS authorization +// for its operations. 
+func NewAccountSASClient(account string, token url.Values, env azure.Environment) Client { + c := newSASClient() + c.accountSASToken = token + c.accountName = account + c.baseURL = env.StorageEndpointSuffix + + // Get API version and protocol from token + c.apiVersion = token.Get("sv") + c.useHTTPS = token.Get("spr") == "https" + return c +} + +// NewAccountSASClientFromEndpointToken constructs a client that uses accountSAS authorization +// for its operations using the specified endpoint and SAS token. +func NewAccountSASClientFromEndpointToken(endpoint string, sasToken string) (Client, error) { + u, err := url.Parse(endpoint) + if err != nil { + return Client{}, err + } + + token, err := url.ParseQuery(sasToken) + if err != nil { + return Client{}, err + } + + // the host name will look something like this + // - foo.blob.core.windows.net + // "foo" is the account name + // "core.windows.net" is the baseURL + + // find the first dot to get account name + i1 := strings.IndexByte(u.Host, '.') + if i1 < 0 { + return Client{}, fmt.Errorf("failed to find '.' in %s", u.Host) + } + + // now find the second dot to get the base URL + i2 := strings.IndexByte(u.Host[i1+1:], '.') + if i2 < 0 { + return Client{}, fmt.Errorf("failed to find '.' in %s", u.Host[i1+1:]) + } + + c := newSASClient() + c.accountSASToken = token + c.accountName = u.Host[:i1] + c.baseURL = u.Host[i1+i2+2:] + + // Get API version and protocol from token + c.apiVersion = token.Get("sv") + c.useHTTPS = token.Get("spr") == "https" + return c, nil +} + +func newSASClient() Client { + c := Client{ + HTTPClient: http.DefaultClient, + apiVersion: DefaultAPIVersion, + sasClient: true, + Sender: &DefaultSender{ + RetryAttempts: defaultRetryAttempts, + ValidStatusCodes: defaultValidStatusCodes, + RetryDuration: defaultRetryDuration, + }, + } + c.userAgent = c.getDefaultUserAgent() + return c +} + +func (c Client) isServiceSASClient() bool { + return c.sasClient && c.accountSASToken == nil +} + +func (c Client) isAccountSASClient() bool { + return c.sasClient && c.accountSASToken != nil +} + func (c Client) getDefaultUserAgent() string { return fmt.Sprintf("Go/%s (%s-%s) azure-storage-go/%s api-version/%s", runtime.Version(), runtime.GOARCH, runtime.GOOS, - sdkVersion, + version.Number, c.apiVersion, ) } @@ -329,6 +483,160 @@ func (c Client) getEndpoint(service, path string, params url.Values) string { return u.String() } +// AccountSASTokenOptions includes options for constructing +// an account SAS token. +// https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-an-account-sas +type AccountSASTokenOptions struct { + APIVersion string + Services Services + ResourceTypes ResourceTypes + Permissions Permissions + Start time.Time + Expiry time.Time + IP string + UseHTTPS bool +} + +// Services specify services accessible with an account SAS. +type Services struct { + Blob bool + Queue bool + Table bool + File bool +} + +// ResourceTypes specify the resources accesible with an +// account SAS. +type ResourceTypes struct { + Service bool + Container bool + Object bool +} + +// Permissions specifies permissions for an accountSAS. 
+type Permissions struct { + Read bool + Write bool + Delete bool + List bool + Add bool + Create bool + Update bool + Process bool +} + +// GetAccountSASToken creates an account SAS token +// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-an-account-sas +func (c Client) GetAccountSASToken(options AccountSASTokenOptions) (url.Values, error) { + if options.APIVersion == "" { + options.APIVersion = c.apiVersion + } + + if options.APIVersion < "2015-04-05" { + return url.Values{}, fmt.Errorf("account SAS does not support API versions prior to 2015-04-05. API version : %s", options.APIVersion) + } + + // build services string + services := "" + if options.Services.Blob { + services += "b" + } + if options.Services.Queue { + services += "q" + } + if options.Services.Table { + services += "t" + } + if options.Services.File { + services += "f" + } + + // build resources string + resources := "" + if options.ResourceTypes.Service { + resources += "s" + } + if options.ResourceTypes.Container { + resources += "c" + } + if options.ResourceTypes.Object { + resources += "o" + } + + // build permissions string + permissions := "" + if options.Permissions.Read { + permissions += "r" + } + if options.Permissions.Write { + permissions += "w" + } + if options.Permissions.Delete { + permissions += "d" + } + if options.Permissions.List { + permissions += "l" + } + if options.Permissions.Add { + permissions += "a" + } + if options.Permissions.Create { + permissions += "c" + } + if options.Permissions.Update { + permissions += "u" + } + if options.Permissions.Process { + permissions += "p" + } + + // build start time, if exists + start := "" + if options.Start != (time.Time{}) { + start = options.Start.UTC().Format(time.RFC3339) + } + + // build expiry time + expiry := options.Expiry.UTC().Format(time.RFC3339) + + protocol := "https,http" + if options.UseHTTPS { + protocol = "https" + } + + stringToSign := strings.Join([]string{ + c.accountName, + permissions, + services, + resources, + start, + expiry, + options.IP, + protocol, + options.APIVersion, + "", + }, "\n") + signature := c.computeHmac256(stringToSign) + + sasParams := url.Values{ + "sv": {options.APIVersion}, + "ss": {services}, + "srt": {resources}, + "sp": {permissions}, + "se": {expiry}, + "spr": {protocol}, + "sig": {signature}, + } + if start != "" { + sasParams.Add("st", start) + } + if options.IP != "" { + sasParams.Add("sip", options.IP) + } + + return sasParams, nil +} + // GetBlobService returns a BlobStorageClient which can operate on the blob // service of the storage account. func (c Client) GetBlobService() BlobStorageClient { @@ -393,7 +701,7 @@ func (c Client) getStandardHeaders() map[string]string { } } -func (c Client) exec(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*storageResponse, error) { +func (c Client) exec(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*http.Response, error) { headers, err := c.addAuthorizationHeader(verb, url, headers, auth) if err != nil { return nil, err @@ -404,8 +712,25 @@ func (c Client) exec(verb, url string, headers map[string]string, body io.Reader return nil, errors.New("azure/storage: error creating request: " + err.Error()) } + // http.NewRequest() will automatically set req.ContentLength for a handful of types + // otherwise we will handle here. 
+ if req.ContentLength < 1 { + if clstr, ok := headers["Content-Length"]; ok { + if cl, err := strconv.ParseInt(clstr, 10, 64); err == nil { + req.ContentLength = cl + } + } + } + for k, v := range headers { - req.Header.Add(k, v) + req.Header[k] = append(req.Header[k], v) // Must bypass case munging present in `Add` by using map functions directly. See https://github.com/Azure/azure-sdk-for-go/issues/645 + } + + if c.isAccountSASClient() { + // append the SAS token to the query params + v := req.URL.Query() + v = mergeParams(v, c.accountSASToken) + req.URL.RawQuery = v.Encode() } resp, err := c.Sender.Send(&c, req) @@ -414,48 +739,10 @@ func (c Client) exec(verb, url string, headers map[string]string, body io.Reader } if resp.StatusCode >= 400 && resp.StatusCode <= 505 { - var respBody []byte - respBody, err = readAndCloseBody(resp.Body) - if err != nil { - return nil, err - } - - requestID, date, version := getDebugHeaders(resp.Header) - if len(respBody) == 0 { - // no error in response body, might happen in HEAD requests - err = serviceErrFromStatusCode(resp.StatusCode, resp.Status, requestID, date, version) - } else { - storageErr := AzureStorageServiceError{ - StatusCode: resp.StatusCode, - RequestID: requestID, - Date: date, - APIVersion: version, - } - // response contains storage service error object, unmarshal - if resp.Header.Get("Content-Type") == "application/xml" { - errIn := serviceErrFromXML(respBody, &storageErr) - if err != nil { // error unmarshaling the error response - err = errIn - } - } else { - errIn := serviceErrFromJSON(respBody, &storageErr) - if err != nil { // error unmarshaling the error response - err = errIn - } - } - err = storageErr - } - return &storageResponse{ - statusCode: resp.StatusCode, - headers: resp.Header, - body: ioutil.NopCloser(bytes.NewReader(respBody)), /* restore the body */ - }, err + return resp, getErrorFromResponse(resp) } - return &storageResponse{ - statusCode: resp.StatusCode, - headers: resp.Header, - body: resp.Body}, nil + return resp, nil } func (c Client) execInternalJSONCommon(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*odataResponse, *http.Request, *http.Response, error) { @@ -474,10 +761,7 @@ func (c Client) execInternalJSONCommon(verb, url string, headers map[string]stri return nil, nil, nil, err } - respToRet := &odataResponse{} - respToRet.body = resp.Body - respToRet.statusCode = resp.StatusCode - respToRet.headers = resp.Header + respToRet := &odataResponse{resp: resp} statusCode := resp.StatusCode if statusCode >= 400 && statusCode <= 505 { @@ -562,7 +846,7 @@ func genChangesetReader(req *http.Request, respToRet *odataResponse, batchPartBu if err != nil { return err } - respToRet.statusCode = changesetResp.StatusCode + respToRet.resp = changesetResp } return nil @@ -597,6 +881,12 @@ func readAndCloseBody(body io.ReadCloser) ([]byte, error) { return out, err } +// reads the response body then closes it +func drainRespBody(resp *http.Response) { + io.Copy(ioutil.Discard, resp.Body) + resp.Body.Close() +} + func serviceErrFromXML(body []byte, storageErr *AzureStorageServiceError) error { if err := xml.Unmarshal(body, storageErr); err != nil { storageErr.Message = fmt.Sprintf("Response body could no be unmarshaled: %v. Body: %v.", err, string(body)) @@ -635,13 +925,18 @@ func (e AzureStorageServiceError) Error() string { // checkRespCode returns UnexpectedStatusError if the given response code is not // one of the allowed status codes; otherwise nil. 
-func checkRespCode(respCode int, allowed []int) error { +func checkRespCode(resp *http.Response, allowed []int) error { for _, v := range allowed { - if respCode == v { + if resp.StatusCode == v { return nil } } - return UnexpectedStatusCodeError{allowed, respCode} + err := getErrorFromResponse(resp) + return UnexpectedStatusCodeError{ + allowed: allowed, + got: resp.StatusCode, + inner: err, + } } func (c Client) addMetadataToHeaders(h map[string]string, metadata map[string]string) map[string]string { @@ -658,3 +953,37 @@ func getDebugHeaders(h http.Header) (requestID, date, version string) { date = h.Get("Date") return } + +func getErrorFromResponse(resp *http.Response) error { + respBody, err := readAndCloseBody(resp.Body) + if err != nil { + return err + } + + requestID, date, version := getDebugHeaders(resp.Header) + if len(respBody) == 0 { + // no error in response body, might happen in HEAD requests + err = serviceErrFromStatusCode(resp.StatusCode, resp.Status, requestID, date, version) + } else { + storageErr := AzureStorageServiceError{ + StatusCode: resp.StatusCode, + RequestID: requestID, + Date: date, + APIVersion: version, + } + // response contains storage service error object, unmarshal + if resp.Header.Get("Content-Type") == "application/xml" { + errIn := serviceErrFromXML(respBody, &storageErr) + if err != nil { // error unmarshaling the error response + err = errIn + } + } else { + errIn := serviceErrFromJSON(respBody, &storageErr) + if err != nil { // error unmarshaling the error response + err = errIn + } + } + err = storageErr + } + return err +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/commonsasuri.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/commonsasuri.go new file mode 100644 index 000000000..e898e9bfa --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/commonsasuri.go @@ -0,0 +1,38 @@ +package storage + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +import ( + "net/url" + "time" +) + +// SASOptions includes options used by SAS URIs for different +// services and resources. +type SASOptions struct { + APIVersion string + Start time.Time + Expiry time.Time + IP string + UseHTTPS bool + Identifier string +} + +func addQueryParameter(query url.Values, key, value string) url.Values { + if value != "" { + query.Add(key, value) + } + return query +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/container.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/container.go index c2c9c055b..056473d49 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/container.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/container.go @@ -1,8 +1,21 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "encoding/xml" - "errors" "fmt" "io" "net/http" @@ -18,20 +31,75 @@ type Container struct { Name string `xml:"Name"` Properties ContainerProperties `xml:"Properties"` Metadata map[string]string + sasuri url.URL +} + +// Client returns the HTTP client used by the Container reference. +func (c *Container) Client() *Client { + return &c.bsc.client } func (c *Container) buildPath() string { return fmt.Sprintf("/%s", c.Name) } +// GetURL gets the canonical URL to the container. +// This method does not create a publicly accessible URL if the container +// is private and this method does not check if the blob exists. +func (c *Container) GetURL() string { + container := c.Name + if container == "" { + container = "$root" + } + return c.bsc.client.getEndpoint(blobServiceName, pathForResource(container, ""), nil) +} + +// ContainerSASOptions are options to construct a container SAS +// URI. +// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas +type ContainerSASOptions struct { + ContainerSASPermissions + OverrideHeaders + SASOptions +} + +// ContainerSASPermissions includes the available permissions for +// a container SAS URI. +type ContainerSASPermissions struct { + BlobServiceSASPermissions + List bool +} + +// GetSASURI creates an URL to the container which contains the Shared +// Access Signature with the specified options. +// +// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas +func (c *Container) GetSASURI(options ContainerSASOptions) (string, error) { + uri := c.GetURL() + signedResource := "c" + canonicalizedResource, err := c.bsc.client.buildCanonicalizedResource(uri, c.bsc.auth, true) + if err != nil { + return "", err + } + + // build permissions string + permissions := options.BlobServiceSASPermissions.buildString() + if options.List { + permissions += "l" + } + + return c.bsc.client.blobAndFileSASURI(options.SASOptions, uri, permissions, canonicalizedResource, signedResource, options.OverrideHeaders) +} + // ContainerProperties contains various properties of a container returned from // various endpoints like ListContainers. type ContainerProperties struct { - LastModified string `xml:"Last-Modified"` - Etag string `xml:"Etag"` - LeaseStatus string `xml:"LeaseStatus"` - LeaseState string `xml:"LeaseState"` - LeaseDuration string `xml:"LeaseDuration"` + LastModified string `xml:"Last-Modified"` + Etag string `xml:"Etag"` + LeaseStatus string `xml:"LeaseStatus"` + LeaseState string `xml:"LeaseState"` + LeaseDuration string `xml:"LeaseDuration"` + PublicAccess ContainerAccessType `xml:"PublicAccess"` } // ContainerListResponse contains the response fields from @@ -190,8 +258,8 @@ func (c *Container) Create(options *CreateContainerOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusCreated}) } // CreateIfNotExists creates a blob container if it does not exist. 
Returns @@ -199,15 +267,15 @@ func (c *Container) Create(options *CreateContainerOptions) error { func (c *Container) CreateIfNotExists(options *CreateContainerOptions) (bool, error) { resp, err := c.create(options) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusCreated || resp.statusCode == http.StatusConflict { - return resp.statusCode == http.StatusCreated, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusConflict { + return resp.StatusCode == http.StatusCreated, nil } } return false, err } -func (c *Container) create(options *CreateContainerOptions) (*storageResponse, error) { +func (c *Container) create(options *CreateContainerOptions) (*http.Response, error) { query := url.Values{"restype": {"container"}} headers := c.bsc.client.getStandardHeaders() headers = c.bsc.client.addMetadataToHeaders(headers, c.Metadata) @@ -224,14 +292,24 @@ func (c *Container) create(options *CreateContainerOptions) (*storageResponse, e // Exists returns true if a container with given name exists // on the storage account, otherwise returns false. func (c *Container) Exists() (bool, error) { - uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), url.Values{"restype": {"container"}}) + q := url.Values{"restype": {"container"}} + var uri string + if c.bsc.client.isServiceSASClient() { + q = mergeParams(q, c.sasuri.Query()) + newURI := c.sasuri + newURI.RawQuery = q.Encode() + uri = newURI.String() + + } else { + uri = c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), q) + } headers := c.bsc.client.getStandardHeaders() resp, err := c.bsc.client.exec(http.MethodHead, uri, headers, nil, c.bsc.auth) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusOK || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusOK, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusOK, nil } } return false, err @@ -271,13 +349,8 @@ func (c *Container) SetPermissions(permissions ContainerPermissions, options *Se if err != nil { return err } - defer readAndCloseBody(resp.body) - - if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { - return errors.New("Unable to set permissions") - } - - return nil + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusOK}) } // GetContainerPermissionOptions includes options for a get container permissions operation @@ -307,14 +380,14 @@ func (c *Container) GetPermissions(options *GetContainerPermissionOptions) (*Con if err != nil { return nil, err } - defer resp.body.Close() + defer resp.Body.Close() var ap AccessPolicy - err = xmlUnmarshal(resp.body, &ap.SignedIdentifiersList) + err = xmlUnmarshal(resp.Body, &ap.SignedIdentifiersList) if err != nil { return nil, err } - return buildAccessPolicy(ap, &resp.headers), nil + return buildAccessPolicy(ap, &resp.Header), nil } func buildAccessPolicy(ap AccessPolicy, headers *http.Header) *ContainerPermissions { @@ -358,8 +431,8 @@ func (c *Container) Delete(options *DeleteContainerOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusAccepted}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusAccepted}) } // DeleteIfExists deletes the container with given name on the storage @@ -371,15 +444,15 @@ func (c *Container) Delete(options 
*DeleteContainerOptions) error { func (c *Container) DeleteIfExists(options *DeleteContainerOptions) (bool, error) { resp, err := c.delete(options) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusAccepted, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusAccepted, nil } } return false, err } -func (c *Container) delete(options *DeleteContainerOptions) (*storageResponse, error) { +func (c *Container) delete(options *DeleteContainerOptions) (*http.Response, error) { query := url.Values{"restype": {"container"}} headers := c.bsc.client.getStandardHeaders() @@ -399,9 +472,17 @@ func (c *Container) delete(options *DeleteContainerOptions) (*storageResponse, e func (c *Container) ListBlobs(params ListBlobsParameters) (BlobListResponse, error) { q := mergeParams(params.getParameters(), url.Values{ "restype": {"container"}, - "comp": {"list"}}, - ) - uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), q) + "comp": {"list"}, + }) + var uri string + if c.bsc.client.isServiceSASClient() { + q = mergeParams(q, c.sasuri.Query()) + newURI := c.sasuri + newURI.RawQuery = q.Encode() + uri = newURI.String() + } else { + uri = c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), q) + } headers := c.bsc.client.getStandardHeaders() headers = addToHeaders(headers, "x-ms-client-request-id", params.RequestID) @@ -411,15 +492,90 @@ func (c *Container) ListBlobs(params ListBlobsParameters) (BlobListResponse, err if err != nil { return out, err } - defer resp.body.Close() + defer resp.Body.Close() - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) for i := range out.Blobs { out.Blobs[i].Container = c } return out, err } +// ContainerMetadataOptions includes options for container metadata operations +type ContainerMetadataOptions struct { + Timeout uint + LeaseID string `header:"x-ms-lease-id"` + RequestID string `header:"x-ms-client-request-id"` +} + +// SetMetadata replaces the metadata for the specified container. +// +// Some keys may be converted to Camel-Case before sending. All keys +// are returned in lower case by GetBlobMetadata. HTTP header names +// are case-insensitive so case munging should not matter to other +// applications either. +// +// See https://docs.microsoft.com/en-us/rest/api/storageservices/set-container-metadata +func (c *Container) SetMetadata(options *ContainerMetadataOptions) error { + params := url.Values{ + "comp": {"metadata"}, + "restype": {"container"}, + } + headers := c.bsc.client.getStandardHeaders() + headers = c.bsc.client.addMetadataToHeaders(headers, c.Metadata) + + if options != nil { + params = addTimeout(params, options.Timeout) + headers = mergeHeaders(headers, headersFromStruct(*options)) + } + + uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params) + + resp, err := c.bsc.client.exec(http.MethodPut, uri, headers, nil, c.bsc.auth) + if err != nil { + return err + } + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusOK}) +} + +// GetMetadata returns all user-defined metadata for the specified container. +// +// All metadata keys will be returned in lower case. (HTTP header +// names are case-insensitive.) 
+// +// See https://docs.microsoft.com/en-us/rest/api/storageservices/get-container-metadata +func (c *Container) GetMetadata(options *ContainerMetadataOptions) error { + params := url.Values{ + "comp": {"metadata"}, + "restype": {"container"}, + } + headers := c.bsc.client.getStandardHeaders() + + if options != nil { + params = addTimeout(params, options.Timeout) + headers = mergeHeaders(headers, headersFromStruct(*options)) + } + + uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params) + + resp, err := c.bsc.client.exec(http.MethodGet, uri, headers, nil, c.bsc.auth) + if err != nil { + return err + } + defer drainRespBody(resp) + if err := checkRespCode(resp, []int{http.StatusOK}); err != nil { + return err + } + + c.writeMetadata(resp.Header) + return nil +} + +func (c *Container) writeMetadata(h http.Header) { + c.Metadata = writeMetadata(h) +} + func generateContainerACLpayload(policies []ContainerAccessPolicy) (io.Reader, int, error) { sil := SignedIdentifiers{ SignedIdentifiers: []SignedIdentifier{}, @@ -451,3 +607,34 @@ func (capd *ContainerAccessPolicy) generateContainerPermissions() (permissions s return permissions } + +// GetProperties updated the properties of the container. +// +// See https://docs.microsoft.com/en-us/rest/api/storageservices/get-container-properties +func (c *Container) GetProperties() error { + params := url.Values{ + "restype": {"container"}, + } + headers := c.bsc.client.getStandardHeaders() + + uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params) + + resp, err := c.bsc.client.exec(http.MethodGet, uri, headers, nil, c.bsc.auth) + if err != nil { + return err + } + defer resp.Body.Close() + if err := checkRespCode(resp, []int{http.StatusOK}); err != nil { + return err + } + + // update properties + c.Properties.Etag = resp.Header.Get(headerEtag) + c.Properties.LeaseStatus = resp.Header.Get("x-ms-lease-status") + c.Properties.LeaseState = resp.Header.Get("x-ms-lease-state") + c.Properties.LeaseDuration = resp.Header.Get("x-ms-lease-duration") + c.Properties.LastModified = resp.Header.Get("Last-Modified") + c.Properties.PublicAccess = ContainerAccessType(resp.Header.Get(ContainerAccessHeader)) + + return nil +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/copyblob.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/copyblob.go index 377a3c622..151e9a510 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/copyblob.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/copyblob.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "errors" "fmt" @@ -50,7 +64,7 @@ type IncrementalCopyOptionsConditions struct { // Copy starts a blob copy operation and waits for the operation to // complete. sourceBlob parameter must be a canonical URL to the blob (can be -// obtained using GetBlobURL method.) There is no SLA on blob copy and therefore +// obtained using the GetURL method.) 
There is no SLA on blob copy and therefore // this helper method works faster on smaller files. // // See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Copy-Blob @@ -65,7 +79,7 @@ func (b *Blob) Copy(sourceBlob string, options *CopyOptions) error { // StartCopy starts a blob copy operation. // sourceBlob parameter must be a canonical URL to the blob (can be -// obtained using GetBlobURL method.) +// obtained using the GetURL method.) // // See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Copy-Blob func (b *Blob) StartCopy(sourceBlob string, options *CopyOptions) (string, error) { @@ -96,13 +110,13 @@ func (b *Blob) StartCopy(sourceBlob string, options *CopyOptions) (string, error if err != nil { return "", err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{http.StatusAccepted, http.StatusCreated}); err != nil { + if err := checkRespCode(resp, []int{http.StatusAccepted, http.StatusCreated}); err != nil { return "", err } - copyID := resp.headers.Get("x-ms-copy-id") + copyID := resp.Header.Get("x-ms-copy-id") if copyID == "" { return "", errors.New("Got empty copy id header") } @@ -138,8 +152,8 @@ func (b *Blob) AbortCopy(copyID string, options *AbortCopyOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusNoContent}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusNoContent}) } // WaitForCopy loops until a BlobCopy operation is completed (or fails with error) @@ -209,13 +223,13 @@ func (b *Blob) IncrementalCopyBlob(sourceBlobURL string, snapshotTime time.Time, if err != nil { return "", err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{http.StatusAccepted}); err != nil { + if err := checkRespCode(resp, []int{http.StatusAccepted}); err != nil { return "", err } - copyID := resp.headers.Get("x-ms-copy-id") + copyID := resp.Header.Get("x-ms-copy-id") if copyID == "" { return "", errors.New("Got empty copy id header") } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/directory.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/directory.go index 29610329e..2e805e7df 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/directory.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/directory.go @@ -1,9 +1,24 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "encoding/xml" "net/http" "net/url" + "sync" ) // Directory represents a directory on a share. 
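For reference, a minimal usage sketch of the copy helpers touched above, using the canonical blob URL from `GetURL` as the revised doc comments describe. The account name, key, and container/blob names are placeholders, not values from this change.

```go
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Placeholder credentials; NewBasicClient expects a base64-encoded key.
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	bsc := client.GetBlobService()

	src := bsc.GetContainerReference("source").GetBlobReference("image.vhd")
	dst := bsc.GetContainerReference("dest").GetBlobReference("image.vhd")

	// Copy takes the canonical URL of the source blob (see GetURL) and
	// waits for the server-side copy to finish.
	if err := dst.Copy(src.GetURL(), nil); err != nil {
		log.Fatal(err)
	}
}
```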
@@ -92,10 +107,10 @@ func (d *Directory) CreateIfNotExists(options *FileRequestOptions) (bool, error) params := prepareOptions(options) resp, err := d.fsc.createResourceNoClose(d.buildPath(), resourceDirectory, params, nil) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusCreated || resp.statusCode == http.StatusConflict { - if resp.statusCode == http.StatusCreated { - d.updateEtagAndLastModified(resp.headers) + defer drainRespBody(resp) + if resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusConflict { + if resp.StatusCode == http.StatusCreated { + d.updateEtagAndLastModified(resp.Header) return true, nil } @@ -120,9 +135,9 @@ func (d *Directory) Delete(options *FileRequestOptions) error { func (d *Directory) DeleteIfExists(options *FileRequestOptions) (bool, error) { resp, err := d.fsc.deleteResourceNoClose(d.buildPath(), resourceDirectory, options) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusAccepted, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusAccepted, nil } } return false, err @@ -169,6 +184,7 @@ func (d *Directory) GetFileReference(name string) *File { Name: name, parent: d, share: d.share, + mutex: &sync.Mutex{}, } } @@ -184,9 +200,9 @@ func (d *Directory) ListDirsAndFiles(params ListDirsAndFilesParameters) (*DirsAn return nil, err } - defer resp.body.Close() + defer resp.Body.Close() var out DirsAndFilesListResponse - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) return &out, err } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go index 13e947507..fbbcb93ba 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go @@ -1,7 +1,22 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "bytes" + "encoding/base64" "encoding/json" "errors" "fmt" @@ -12,7 +27,7 @@ import ( "strings" "time" - "github.com/satori/uuid" + "github.com/satori/go.uuid" ) // Annotating as secure for gas scanning @@ -97,13 +112,13 @@ func (e *Entity) Get(timeout uint, ml MetadataLevel, options *GetEntityOptions) if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { return err } - respBody, err := ioutil.ReadAll(resp.body) + respBody, err := ioutil.ReadAll(resp.Body) if err != nil { return err } @@ -139,22 +154,21 @@ func (e *Entity) Insert(ml MetadataLevel, options *EntityOptions) error { if err != nil { return err } - defer resp.body.Close() - - data, err := ioutil.ReadAll(resp.body) - if err != nil { - return err - } + defer drainRespBody(resp) if ml != EmptyPayload { - if err = checkRespCode(resp.statusCode, []int{http.StatusCreated}); err != nil { + if err = checkRespCode(resp, []int{http.StatusCreated}); err != nil { + return err + } + data, err := ioutil.ReadAll(resp.Body) + if err != nil { return err } if err = e.UnmarshalJSON(data); err != nil { return err } } else { - if err = checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { + if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil { return err } } @@ -193,18 +207,18 @@ func (e *Entity) Delete(force bool, options *EntityOptions) error { uri := e.Table.tsc.client.getEndpoint(tableServiceName, e.buildPath(), query) resp, err := e.Table.tsc.client.exec(http.MethodDelete, uri, headers, nil, e.Table.tsc.auth) if err != nil { - if resp.statusCode == http.StatusPreconditionFailed { + if resp.StatusCode == http.StatusPreconditionFailed { return fmt.Errorf(etagErrorTemplate, err) } return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { + if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil { return err } - return e.updateTimestamp(resp.headers) + return e.updateTimestamp(resp.Header) } // InsertOrReplace inserts an entity or replaces the existing one. 
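For reference, a minimal sketch of the entity round trip these hunks adjust (the status code is now checked before the response body is decoded). Table, partition, and row key values are placeholders.

```go
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	tsc := client.GetTableService()

	table := tsc.GetTableReference("packerbuilds")
	if err := table.Create(30, storage.EmptyPayload, nil); err != nil {
		log.Fatal(err)
	}

	entity := table.GetEntityReference("builds", "run-001")
	entity.Properties = map[string]interface{}{
		"Status":  "running",
		"Payload": []byte{0x01, 0x02}, // typed as Edm.Binary by MarshalJSON
	}
	if err := entity.Insert(storage.EmptyPayload, nil); err != nil {
		log.Fatal(err)
	}

	// Re-read the entity; the timeout is in seconds.
	if err := entity.Get(30, storage.MinimalMetadata, nil); err != nil {
		log.Fatal(err)
	}
	log.Printf("properties: %v", entity.Properties)
}
```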
@@ -233,7 +247,7 @@ func (e *Entity) MarshalJSON() ([]byte, error) { switch t := v.(type) { case []byte: completeMap[typeKey] = OdataBinary - completeMap[k] = string(t) + completeMap[k] = t case time.Time: completeMap[typeKey] = OdataDateTime completeMap[k] = t.Format(time.RFC3339Nano) @@ -307,7 +321,10 @@ func (e *Entity) UnmarshalJSON(data []byte) error { } switch v { case OdataBinary: - props[valueKey] = []byte(str) + props[valueKey], err = base64.StdEncoding.DecodeString(str) + if err != nil { + return fmt.Errorf(errorTemplate, err) + } case OdataDateTime: t, err := time.Parse("2006-01-02T15:04:05Z", str) if err != nil { @@ -382,13 +399,13 @@ func (e *Entity) insertOr(verb string, options *EntityOptions) error { if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { + if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil { return err } - return e.updateEtagAndTimestamp(resp.headers) + return e.updateEtagAndTimestamp(resp.Header) } func (e *Entity) updateMerge(force bool, verb string, options *EntityOptions) error { @@ -406,18 +423,18 @@ func (e *Entity) updateMerge(force bool, verb string, options *EntityOptions) er uri := e.Table.tsc.client.getEndpoint(tableServiceName, e.buildPath(), query) resp, err := e.Table.tsc.client.exec(verb, uri, headers, bytes.NewReader(body), e.Table.tsc.auth) if err != nil { - if resp.statusCode == http.StatusPreconditionFailed { + if resp.StatusCode == http.StatusPreconditionFailed { return fmt.Errorf(etagErrorTemplate, err) } return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { + if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil { return err } - return e.updateEtagAndTimestamp(resp.headers) + return e.updateEtagAndTimestamp(resp.Header) } func stringFromMap(props map[string]interface{}, key string) string { diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go index 238ac6d6d..06bbe4ba0 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/file.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "errors" "fmt" @@ -8,11 +22,16 @@ import ( "net/http" "net/url" "strconv" + "sync" ) const fourMB = uint64(4194304) const oneTB = uint64(1099511627776) +// Export maximum range and file sizes +const MaxRangeSize = fourMB +const MaxFileSize = oneTB + // File represents a file on a share. type File struct { fsc *FileServiceClient @@ -22,6 +41,7 @@ type File struct { Properties FileProperties `xml:"Properties"` share *Share FileCopyProperties FileCopyState + mutex *sync.Mutex } // FileProperties contains various properties of a file. 
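For reference, a small sketch of how the newly exported `MaxRangeSize` limit can drive range-sized uploads; each chunk would then go through `WriteRange` (shown further below), which may legally be called from several goroutines at once, which is what the new per-file mutex protects when the Etag is refreshed. The helper name is illustrative.

```go
package fileutil

import "github.com/Azure/azure-sdk-for-go/storage"

// buildFileRanges splits an upload of size bytes into inclusive ranges no
// larger than storage.MaxRangeSize (4 MiB), the per-request limit exported
// by the vendored package.
func buildFileRanges(size uint64) []storage.FileRange {
	var ranges []storage.FileRange
	for start := uint64(0); start < size; start += storage.MaxRangeSize {
		end := start + storage.MaxRangeSize - 1
		if end > size-1 {
			end = size - 1
		}
		ranges = append(ranges, storage.FileRange{Start: start, End: end})
	}
	return ranges
}
```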
@@ -148,7 +168,9 @@ func (f *File) CopyFile(sourceURL string, options *FileRequestOptions) error { return err } - f.updateEtagLastModifiedAndCopyHeaders(headers) + f.updateEtagAndLastModified(headers) + f.FileCopyProperties.ID = headers.Get("X-Ms-Copy-Id") + f.FileCopyProperties.Status = headers.Get("X-Ms-Copy-Status") return nil } @@ -165,9 +187,9 @@ func (f *File) Delete(options *FileRequestOptions) error { func (f *File) DeleteIfExists(options *FileRequestOptions) (bool, error) { resp, err := f.fsc.deleteResourceNoClose(f.buildPath(), resourceFile, options) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusAccepted, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusAccepted, nil } } return false, err @@ -189,11 +211,11 @@ func (f *File) DownloadToStream(options *FileRequestOptions) (io.ReadCloser, err return nil, err } - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { - readAndCloseBody(resp.body) + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { + drainRespBody(resp) return nil, err } - return resp.body, nil + return resp.Body, nil } // DownloadRangeToStream operation downloads the specified range of this file with optional MD5 hash. @@ -219,14 +241,14 @@ func (f *File) DownloadRangeToStream(fileRange FileRange, options *GetFileOption return fs, err } - if err = checkRespCode(resp.statusCode, []int{http.StatusOK, http.StatusPartialContent}); err != nil { - readAndCloseBody(resp.body) + if err = checkRespCode(resp, []int{http.StatusOK, http.StatusPartialContent}); err != nil { + drainRespBody(resp) return fs, err } - fs.Body = resp.body + fs.Body = resp.Body if options != nil && options.GetContentMD5 { - fs.ContentMD5 = resp.headers.Get("Content-MD5") + fs.ContentMD5 = resp.Header.Get("Content-MD5") } return fs, nil } @@ -292,20 +314,20 @@ func (f *File) ListRanges(options *ListRangesOptions) (*FileRanges, error) { return nil, err } - defer resp.body.Close() + defer resp.Body.Close() var cl uint64 - cl, err = strconv.ParseUint(resp.headers.Get("x-ms-content-length"), 10, 64) + cl, err = strconv.ParseUint(resp.Header.Get("x-ms-content-length"), 10, 64) if err != nil { - ioutil.ReadAll(resp.body) + ioutil.ReadAll(resp.Body) return nil, err } var out FileRanges out.ContentLength = cl - out.ETag = resp.headers.Get("ETag") - out.LastModified = resp.headers.Get("Last-Modified") + out.ETag = resp.Header.Get("ETag") + out.LastModified = resp.Header.Get("Last-Modified") - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) return &out, err } @@ -353,8 +375,8 @@ func (f *File) modifyRange(bytes io.Reader, fileRange FileRange, timeout *uint, if err != nil { return nil, err } - defer readAndCloseBody(resp.body) - return resp.headers, checkRespCode(resp.statusCode, []int{http.StatusCreated}) + defer drainRespBody(resp) + return resp.Header, checkRespCode(resp, []int{http.StatusCreated}) } // SetMetadata replaces the metadata for this file. 
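For reference, a minimal sketch of the range download path updated above, asking the service for the optional Content-MD5 of the range. Share and file names are placeholders.

```go
package main

import (
	"io"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	fsc := client.GetFileService()

	file := fsc.GetShareReference("images").
		GetRootDirectoryReference().
		GetFileReference("disk.vhd")

	// Download the first 4 MiB; GetContentMD5 asks the service to return an
	// MD5 for the range, surfaced as fs.ContentMD5.
	fs, err := file.DownloadRangeToStream(
		storage.FileRange{Start: 0, End: storage.MaxRangeSize - 1},
		&storage.GetFileOptions{GetContentMD5: true},
	)
	if err != nil {
		log.Fatal(err)
	}
	defer fs.Body.Close()

	if _, err := io.Copy(os.Stdout, fs.Body); err != nil {
		log.Fatal(err)
	}
	log.Println("range md5:", fs.ContentMD5)
}
```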
@@ -399,14 +421,6 @@ func (f *File) updateEtagAndLastModified(headers http.Header) { f.Properties.LastModified = headers.Get("Last-Modified") } -// updates Etag, last modified date and x-ms-copy-id -func (f *File) updateEtagLastModifiedAndCopyHeaders(headers http.Header) { - f.Properties.Etag = headers.Get("Etag") - f.Properties.LastModified = headers.Get("Last-Modified") - f.FileCopyProperties.ID = headers.Get("X-Ms-Copy-Id") - f.FileCopyProperties.Status = headers.Get("X-Ms-Copy-Status") -} - // updates file properties from the specified HTTP header func (f *File) updateProperties(header http.Header) { size, err := strconv.ParseUint(header.Get("Content-Length"), 10, 64) @@ -430,7 +444,7 @@ func (f *File) URL() string { return f.fsc.client.getEndpoint(fileServiceName, f.buildPath(), nil) } -// WriteRangeOptions includes opptions for a write file range operation +// WriteRangeOptions includes options for a write file range operation type WriteRangeOptions struct { Timeout uint ContentMD5 string @@ -456,7 +470,11 @@ func (f *File) WriteRange(bytes io.Reader, fileRange FileRange, options *WriteRa if err != nil { return err } - + // it's perfectly legal for multiple go routines to call WriteRange + // on the same *File (e.g. concurrently writing non-overlapping ranges) + // so we must take the file mutex before updating our properties. + f.mutex.Lock() f.updateEtagAndLastModified(headers) + f.mutex.Unlock() return nil } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/fileserviceclient.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/fileserviceclient.go index 81217bdfa..1db8e7da6 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/fileserviceclient.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/fileserviceclient.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "encoding/xml" "fmt" @@ -140,8 +154,8 @@ func (f FileServiceClient) ListShares(params ListSharesParameters) (*ShareListRe if err != nil { return nil, err } - defer resp.body.Close() - err = xmlUnmarshal(resp.body, &out) + defer resp.Body.Close() + err = xmlUnmarshal(resp.Body, &out) // assign our client to the newly created Share objects for i := range out.Shares { @@ -165,7 +179,7 @@ func (f *FileServiceClient) SetServiceProperties(props ServiceProperties) error } // retrieves directory or share content -func (f FileServiceClient) listContent(path string, params url.Values, extraHeaders map[string]string) (*storageResponse, error) { +func (f FileServiceClient) listContent(path string, params url.Values, extraHeaders map[string]string) (*http.Response, error) { if err := f.checkForStorageEmulator(); err != nil { return nil, err } @@ -179,8 +193,8 @@ func (f FileServiceClient) listContent(path string, params url.Values, extraHead return nil, err } - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { - readAndCloseBody(resp.body) + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { + drainRespBody(resp) return nil, err } @@ -198,9 +212,9 @@ func (f FileServiceClient) resourceExists(path string, res resourceType) (bool, resp, err := f.client.exec(http.MethodHead, uri, headers, nil, f.auth) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusOK || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusOK, resp.headers, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusOK, resp.Header, nil } } return false, nil, err @@ -212,12 +226,12 @@ func (f FileServiceClient) createResource(path string, res resourceType, urlPara if err != nil { return nil, err } - defer readAndCloseBody(resp.body) - return resp.headers, checkRespCode(resp.statusCode, expectedResponseCodes) + defer drainRespBody(resp) + return resp.Header, checkRespCode(resp, expectedResponseCodes) } // creates a resource depending on the specified resource type, doesn't close the response body -func (f FileServiceClient) createResourceNoClose(path string, res resourceType, urlParams url.Values, extraHeaders map[string]string) (*storageResponse, error) { +func (f FileServiceClient) createResourceNoClose(path string, res resourceType, urlParams url.Values, extraHeaders map[string]string) (*http.Response, error) { if err := f.checkForStorageEmulator(); err != nil { return nil, err } @@ -237,17 +251,17 @@ func (f FileServiceClient) getResourceHeaders(path string, comp compType, res re if err != nil { return nil, err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { return nil, err } - return resp.headers, nil + return resp.Header, nil } // gets the specified resource, doesn't close the response body -func (f FileServiceClient) getResourceNoClose(path string, comp compType, res resourceType, params url.Values, verb string, extraHeaders map[string]string) (*storageResponse, error) { +func (f FileServiceClient) getResourceNoClose(path string, comp compType, res resourceType, params url.Values, verb string, extraHeaders map[string]string) (*http.Response, error) { if err := f.checkForStorageEmulator(); err != nil { return nil, err } @@ -265,12 +279,12 @@ func (f 
FileServiceClient) deleteResource(path string, res resourceType, options if err != nil { return err } - defer readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusAccepted}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusAccepted}) } // deletes the resource and returns the response, doesn't close the response body -func (f FileServiceClient) deleteResourceNoClose(path string, res resourceType, options *FileRequestOptions) (*storageResponse, error) { +func (f FileServiceClient) deleteResourceNoClose(path string, res resourceType, options *FileRequestOptions) (*http.Response, error) { if err := f.checkForStorageEmulator(); err != nil { return nil, err } @@ -309,9 +323,9 @@ func (f FileServiceClient) setResourceHeaders(path string, comp compType, res re if err != nil { return nil, err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - return resp.headers, checkRespCode(resp.statusCode, []int{http.StatusOK}) + return resp.Header, checkRespCode(resp, []int{http.StatusOK}) } //checkForStorageEmulator determines if the client is setup for use with diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/leaseblob.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/leaseblob.go index 415b74018..5b4a65145 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/leaseblob.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/leaseblob.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "errors" "net/http" @@ -39,13 +53,13 @@ func (b *Blob) leaseCommonPut(headers map[string]string, expectedStatus int, opt if err != nil { return nil, err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{expectedStatus}); err != nil { + if err := checkRespCode(resp, []int{expectedStatus}); err != nil { return nil, err } - return resp.headers, nil + return resp.Header, nil } // LeaseOptions includes options for all operations regarding leasing blobs diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/message.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/message.go index 3ededcd42..ce33dcb72 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/message.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/message.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "encoding/xml" "fmt" @@ -64,13 +78,16 @@ func (m *Message) Put(options *PutMessageOptions) error { if err != nil { return err } - defer readAndCloseBody(resp.body) - - err = xmlUnmarshal(resp.body, m) + defer drainRespBody(resp) + err = checkRespCode(resp, []int{http.StatusCreated}) if err != nil { return err } - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + err = xmlUnmarshal(resp.Body, m) + if err != nil { + return err + } + return nil } // UpdateMessageOptions is the set of options can be specified for Update Messsage @@ -111,10 +128,10 @@ func (m *Message) Update(options *UpdateMessageOptions) error { if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - m.PopReceipt = resp.headers.Get("x-ms-popreceipt") - nextTimeStr := resp.headers.Get("x-ms-time-next-visible") + m.PopReceipt = resp.Header.Get("x-ms-popreceipt") + nextTimeStr := resp.Header.Get("x-ms-time-next-visible") if nextTimeStr != "" { nextTime, err := time.Parse(time.RFC1123, nextTimeStr) if err != nil { @@ -123,7 +140,7 @@ func (m *Message) Update(options *UpdateMessageOptions) error { m.NextVisible = TimeRFC1123(nextTime) } - return checkRespCode(resp.statusCode, []int{http.StatusNoContent}) + return checkRespCode(resp, []int{http.StatusNoContent}) } // Delete operation deletes the specified message. @@ -143,8 +160,8 @@ func (m *Message) Delete(options *QueueServiceOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusNoContent}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusNoContent}) } type putMessageRequest struct { diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go index 41d832e2b..800adf129 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + // MetadataLevel determines if operations should return a paylod, // and it level of detail. type MetadataLevel string diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/pageblob.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/pageblob.go index bc5b398d3..7ffd63821 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/pageblob.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/pageblob.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "encoding/xml" "errors" @@ -73,10 +87,10 @@ func (b *Blob) modifyRange(blobRange BlobRange, bytes io.Reader, options *PutPag return errors.New("the value for rangeEnd must be greater than or equal to rangeStart") } if blobRange.Start%512 != 0 { - return errors.New("the value for rangeStart must be a modulus of 512") + return errors.New("the value for rangeStart must be a multiple of 512") } if blobRange.End%512 != 511 { - return errors.New("the value for rangeEnd must be a modulus of 511") + return errors.New("the value for rangeEnd must be a multiple of 512 - 1") } params := url.Values{"comp": {"page"}} @@ -107,9 +121,8 @@ func (b *Blob) modifyRange(blobRange BlobRange, bytes io.Reader, options *PutPag if err != nil { return err } - readAndCloseBody(resp.body) - - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusCreated}) } // GetPageRangesOptions includes the options for a get page ranges operation @@ -133,7 +146,7 @@ func (b *Blob) GetPageRanges(options *GetPageRangesOptions) (GetPageRangesRespon params = addTimeout(params, options.Timeout) params = addSnapshot(params, options.Snapshot) if options.PreviousSnapshot != nil { - params.Add("prevsnapshot", timeRfc1123Formatted(*options.PreviousSnapshot)) + params.Add("prevsnapshot", timeRFC3339Formatted(*options.PreviousSnapshot)) } if options.Range != nil { headers["Range"] = options.Range.String() @@ -147,12 +160,12 @@ func (b *Blob) GetPageRanges(options *GetPageRangesOptions) (GetPageRangesRespon if err != nil { return out, err } - defer resp.body.Close() + defer drainRespBody(resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { return out, err } - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) return out, err } @@ -160,6 +173,8 @@ func (b *Blob) GetPageRanges(options *GetPageRangesOptions) (GetPageRangesRespon // size in bytes (size must be aligned to a 512-byte boundary). A page blob must // be created using this method before writing pages. // +// See CreateBlockBlobFromReader for more info on creating blobs. 
+// // See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob func (b *Blob) PutPageBlob(options *PutBlobOptions) error { if b.Properties.ContentLength%512 != 0 { @@ -184,6 +199,5 @@ func (b *Blob) PutPageBlob(options *PutBlobOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + return b.respondCreation(resp, BlobTypePage) } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/queue.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/queue.go index c2c7f742c..55238ab15 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/queue.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/queue.go @@ -1,8 +1,21 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "encoding/xml" - "errors" "fmt" "io" "net/http" @@ -78,8 +91,8 @@ func (q *Queue) Create(options *QueueServiceOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusCreated}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusCreated}) } // Delete operation permanently deletes the specified queue. @@ -98,8 +111,8 @@ func (q *Queue) Delete(options *QueueServiceOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusNoContent}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusNoContent}) } // Exists returns true if a queue with given name exists. 
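For reference, a minimal sketch of the queue message flow these hunks touch (Put now checks the 201 status before decoding the XML echo). The queue name and message text are placeholders.

```go
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	qsc := client.GetQueueService()

	queue := qsc.GetQueueReference("builds")
	if err := queue.Create(nil); err != nil {
		log.Fatal(err)
	}

	msg := queue.GetMessageReference("hello from packer")
	if err := msg.Put(nil); err != nil {
		log.Fatal(err)
	}

	// Dequeue with default options and delete what was received.
	messages, err := queue.GetMessages(nil)
	if err != nil {
		log.Fatal(err)
	}
	for i := range messages {
		log.Printf("id=%s text=%q", messages[i].ID, messages[i].Text)
		if err := messages[i].Delete(nil); err != nil {
			log.Fatal(err)
		}
	}
}
```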
@@ -107,10 +120,11 @@ func (q *Queue) Exists() (bool, error) { uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), url.Values{"comp": {"metadata"}}) resp, err := q.qsc.client.exec(http.MethodGet, uri, q.qsc.client.getStandardHeaders(), nil, q.qsc.auth) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusOK || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusOK, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusOK, nil } + err = getErrorFromResponse(resp) } return false, err } @@ -134,8 +148,8 @@ func (q *Queue) SetMetadata(options *QueueServiceOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusNoContent}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusNoContent}) } // GetMetadata operation retrieves user-defined metadata and queue @@ -161,13 +175,13 @@ func (q *Queue) GetMetadata(options *QueueServiceOptions) error { if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err := checkRespCode(resp, []int{http.StatusOK}); err != nil { return err } - aproxMessagesStr := resp.headers.Get(http.CanonicalHeaderKey(approximateMessagesCountHeader)) + aproxMessagesStr := resp.Header.Get(http.CanonicalHeaderKey(approximateMessagesCountHeader)) if aproxMessagesStr != "" { aproxMessages, err := strconv.ParseUint(aproxMessagesStr, 10, 64) if err != nil { @@ -176,7 +190,7 @@ func (q *Queue) GetMetadata(options *QueueServiceOptions) error { q.AproxMessageCount = aproxMessages } - q.Metadata = getMetadataFromHeaders(resp.headers) + q.Metadata = getMetadataFromHeaders(resp.Header) return nil } @@ -227,10 +241,10 @@ func (q *Queue) GetMessages(options *GetMessagesOptions) ([]Message, error) { if err != nil { return []Message{}, err } - defer readAndCloseBody(resp.body) + defer resp.Body.Close() var out messages - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) if err != nil { return []Message{}, err } @@ -270,10 +284,10 @@ func (q *Queue) PeekMessages(options *PeekMessagesOptions) ([]Message, error) { if err != nil { return []Message{}, err } - defer readAndCloseBody(resp.body) + defer resp.Body.Close() var out messages - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) if err != nil { return []Message{}, err } @@ -300,8 +314,8 @@ func (q *Queue) ClearMessages(options *QueueServiceOptions) error { if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusNoContent}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusNoContent}) } // SetPermissions sets up queue permissions @@ -327,13 +341,8 @@ func (q *Queue) SetPermissions(permissions QueuePermissions, options *SetQueuePe if err != nil { return err } - defer readAndCloseBody(resp.body) - - if err := checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { - return errors.New("Unable to set permissions") - } - - return nil + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusNoContent}) } func generateQueueACLpayload(policies []QueueAccessPolicy) (io.Reader, int, error) { @@ -395,14 +404,14 @@ func (q *Queue) GetPermissions(options *GetQueuePermissionOptions) (*QueuePermis if err != 
nil { return nil, err } - defer resp.body.Close() + defer resp.Body.Close() var ap AccessPolicy - err = xmlUnmarshal(resp.body, &ap.SignedIdentifiersList) + err = xmlUnmarshal(resp.Body, &ap.SignedIdentifiersList) if err != nil { return nil, err } - return buildQueueAccessPolicy(ap, &resp.headers), nil + return buildQueueAccessPolicy(ap, &resp.Header), nil } func buildQueueAccessPolicy(ap AccessPolicy, headers *http.Header) *QueuePermissions { diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/queuesasuri.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/queuesasuri.go new file mode 100644 index 000000000..28d9ab937 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/queuesasuri.go @@ -0,0 +1,146 @@ +package storage + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +import ( + "errors" + "fmt" + "net/url" + "strings" + "time" +) + +// QueueSASOptions are options to construct a blob SAS +// URI. +// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas +type QueueSASOptions struct { + QueueSASPermissions + SASOptions +} + +// QueueSASPermissions includes the available permissions for +// a queue SAS URI. +type QueueSASPermissions struct { + Read bool + Add bool + Update bool + Process bool +} + +func (q QueueSASPermissions) buildString() string { + permissions := "" + + if q.Read { + permissions += "r" + } + if q.Add { + permissions += "a" + } + if q.Update { + permissions += "u" + } + if q.Process { + permissions += "p" + } + return permissions +} + +// GetSASURI creates an URL to the specified queue which contains the Shared +// Access Signature with specified permissions and expiration time. +// +// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas +func (q *Queue) GetSASURI(options QueueSASOptions) (string, error) { + canonicalizedResource, err := q.qsc.client.buildCanonicalizedResource(q.buildPath(), q.qsc.auth, true) + if err != nil { + return "", err + } + + // "The canonicalizedresouce portion of the string is a canonical path to the signed resource. + // It must include the service name (blob, table, queue or file) for version 2015-02-21 or + // later, the storage account name, and the resource name, and must be URL-decoded. + // -- https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx + // We need to replace + with %2b first to avoid being treated as a space (which is correct for query strings, but not the path component). 
+ canonicalizedResource = strings.Replace(canonicalizedResource, "+", "%2b", -1) + canonicalizedResource, err = url.QueryUnescape(canonicalizedResource) + if err != nil { + return "", err + } + + signedStart := "" + if options.Start != (time.Time{}) { + signedStart = options.Start.UTC().Format(time.RFC3339) + } + signedExpiry := options.Expiry.UTC().Format(time.RFC3339) + + protocols := "https,http" + if options.UseHTTPS { + protocols = "https" + } + + permissions := options.QueueSASPermissions.buildString() + stringToSign, err := queueSASStringToSign(q.qsc.client.apiVersion, canonicalizedResource, signedStart, signedExpiry, options.IP, permissions, protocols, options.Identifier) + if err != nil { + return "", err + } + + sig := q.qsc.client.computeHmac256(stringToSign) + sasParams := url.Values{ + "sv": {q.qsc.client.apiVersion}, + "se": {signedExpiry}, + "sp": {permissions}, + "sig": {sig}, + } + + if q.qsc.client.apiVersion >= "2015-04-05" { + sasParams.Add("spr", protocols) + addQueryParameter(sasParams, "sip", options.IP) + } + + uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), nil) + sasURL, err := url.Parse(uri) + if err != nil { + return "", err + } + sasURL.RawQuery = sasParams.Encode() + return sasURL.String(), nil +} + +func queueSASStringToSign(signedVersion, canonicalizedResource, signedStart, signedExpiry, signedIP, signedPermissions, protocols, signedIdentifier string) (string, error) { + + if signedVersion >= "2015-02-21" { + canonicalizedResource = "/queue" + canonicalizedResource + } + + // https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx#Anchor_12 + if signedVersion >= "2015-04-05" { + return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s", + signedPermissions, + signedStart, + signedExpiry, + canonicalizedResource, + signedIdentifier, + signedIP, + protocols, + signedVersion), nil + + } + + // reference: http://msdn.microsoft.com/en-us/library/azure/dn140255.aspx + if signedVersion >= "2013-08-15" { + return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s", signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedVersion), nil + } + + return "", errors.New("storage: not implemented SAS for versions earlier than 2013-08-15") +} diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/queueserviceclient.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/queueserviceclient.go index 19b44941c..29febe146 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/queueserviceclient.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/queueserviceclient.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + // QueueServiceClient contains operations for Microsoft Azure Queue Storage // Service. 
type QueueServiceClient struct { diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/share.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/share.go index e6a868081..cf75a2659 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/share.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/share.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "fmt" "net/http" @@ -61,10 +75,10 @@ func (s *Share) CreateIfNotExists(options *FileRequestOptions) (bool, error) { params := prepareOptions(options) resp, err := s.fsc.createResourceNoClose(s.buildPath(), resourceShare, params, extraheaders) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusCreated || resp.statusCode == http.StatusConflict { - if resp.statusCode == http.StatusCreated { - s.updateEtagAndLastModified(resp.headers) + defer drainRespBody(resp) + if resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusConflict { + if resp.StatusCode == http.StatusCreated { + s.updateEtagAndLastModified(resp.Header) return true, nil } return false, s.FetchAttributes(nil) @@ -89,9 +103,9 @@ func (s *Share) Delete(options *FileRequestOptions) error { func (s *Share) DeleteIfExists(options *FileRequestOptions) (bool, error) { resp, err := s.fsc.deleteResourceNoClose(s.buildPath(), resourceShare, options) if resp != nil { - defer readAndCloseBody(resp.body) - if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound { - return resp.statusCode == http.StatusAccepted, nil + defer drainRespBody(resp) + if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound { + return resp.StatusCode == http.StatusAccepted, nil } } return false, err diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/storagepolicy.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/storagepolicy.go index bee1c31ad..056ab398a 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/storagepolicy.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/storagepolicy.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "strings" "time" diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go index 88700fbc9..c338975ab 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "net/http" "net/url" @@ -63,14 +77,14 @@ func (c Client) getServiceProperties(service string, auth authentication) (*Serv if err != nil { return nil, err } - defer resp.body.Close() + defer resp.Body.Close() - if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err := checkRespCode(resp, []int{http.StatusOK}); err != nil { return nil, err } var out ServiceProperties - err = xmlUnmarshal(resp.body, &out) + err = xmlUnmarshal(resp.Body, &out) if err != nil { return nil, err } @@ -112,6 +126,6 @@ func (c Client) setServiceProperties(props ServiceProperties, service string, au if err != nil { return err } - readAndCloseBody(resp.body) - return checkRespCode(resp.statusCode, []int{http.StatusAccepted}) + defer drainRespBody(resp) + return checkRespCode(resp, []int{http.StatusAccepted}) } diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go index 4eae3af9d..22d9b4f5c 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/table.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "bytes" "encoding/json" @@ -83,13 +97,13 @@ func (t *Table) Get(timeout uint, ml MetadataLevel) error { if err != nil { return err } - defer readAndCloseBody(resp.body) + defer resp.Body.Close() - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { return err } - respBody, err := ioutil.ReadAll(resp.body) + respBody, err := ioutil.ReadAll(resp.Body) if err != nil { return err } @@ -129,20 +143,20 @@ func (t *Table) Create(timeout uint, ml MetadataLevel, options *TableOptions) er if err != nil { return err } - defer readAndCloseBody(resp.body) + defer resp.Body.Close() if ml == EmptyPayload { - if err := checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { + if err := checkRespCode(resp, []int{http.StatusNoContent}); err != nil { return err } } else { - if err := checkRespCode(resp.statusCode, []int{http.StatusCreated}); err != nil { + if err := checkRespCode(resp, []int{http.StatusCreated}); err != nil { return err } } if ml != EmptyPayload { - data, err := ioutil.ReadAll(resp.body) + data, err := ioutil.ReadAll(resp.Body) if err != nil { return err } @@ -172,13 +186,9 @@ func (t *Table) Delete(timeout uint, options *TableOptions) error { if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { - return err - - } - return nil + return checkRespCode(resp, []int{http.StatusNoContent}) } // QueryOptions includes options for a query entities operation. @@ -259,12 +269,9 @@ func (t *Table) SetPermissions(tap []TableAccessPolicy, timeout uint, options *T if err != nil { return err } - defer readAndCloseBody(resp.body) + defer drainRespBody(resp) - if err := checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil { - return err - } - return nil + return checkRespCode(resp, []int{http.StatusNoContent}) } func generateTableACLPayload(policies []TableAccessPolicy) (io.Reader, int, error) { @@ -294,14 +301,14 @@ func (t *Table) GetPermissions(timeout int, options *TableOptions) ([]TableAcces if err != nil { return nil, err } - defer resp.body.Close() + defer resp.Body.Close() - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { return nil, err } var ap AccessPolicy - err = xmlUnmarshal(resp.body, &ap.SignedIdentifiersList) + err = xmlUnmarshal(resp.Body, &ap.SignedIdentifiersList) if err != nil { return nil, err } @@ -318,13 +325,13 @@ func (t *Table) queryEntities(uri string, headers map[string]string, ml Metadata if err != nil { return nil, err } - defer resp.body.Close() + defer resp.Body.Close() - if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err = checkRespCode(resp, []int{http.StatusOK}); err != nil { return nil, err } - data, err := ioutil.ReadAll(resp.body) + data, err := ioutil.ReadAll(resp.Body) if err != nil { return nil, err } @@ -339,7 +346,7 @@ func (t *Table) queryEntities(uri string, headers map[string]string, ml Metadata } entities.table = t - contToken := extractContinuationTokenFromHeaders(resp.headers) + contToken := extractContinuationTokenFromHeaders(resp.Header) if contToken == nil { entities.NextLink = nil } else { diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go index 7a0f0915c..a2159e296 100644 --- 
a/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" "encoding/json" @@ -12,7 +26,7 @@ import ( "sort" "strings" - "github.com/satori/uuid" + "github.com/marstr/guid" ) // Operation type. Insert, Delete, Replace etc. @@ -117,14 +131,26 @@ func (t *TableBatch) MergeEntity(entity *Entity) { // the changesets. // As per document https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/performing-entity-group-transactions func (t *TableBatch) ExecuteBatch() error { - changesetBoundary := fmt.Sprintf("changeset_%s", uuid.NewV1()) + + // Using `github.com/marstr/guid` is in response to issue #947 (https://github.com/Azure/azure-sdk-for-go/issues/947). + id, err := guid.NewGUIDs(guid.CreationStrategyVersion1) + if err != nil { + return err + } + + changesetBoundary := fmt.Sprintf("changeset_%s", id.String()) uri := t.Table.tsc.client.getEndpoint(tableServiceName, "$batch", nil) changesetBody, err := t.generateChangesetBody(changesetBoundary) if err != nil { return err } - boundary := fmt.Sprintf("batch_%s", uuid.NewV1()) + id, err = guid.NewGUIDs(guid.CreationStrategyVersion1) + if err != nil { + return err + } + + boundary := fmt.Sprintf("batch_%s", id.String()) body, err := generateBody(changesetBody, changesetBoundary, boundary) if err != nil { return err @@ -137,15 +163,15 @@ func (t *TableBatch) ExecuteBatch() error { if err != nil { return err } - defer resp.body.Close() + defer drainRespBody(resp.resp) - if err = checkRespCode(resp.statusCode, []int{http.StatusAccepted}); err != nil { + if err = checkRespCode(resp.resp, []int{http.StatusAccepted}); err != nil { // check which batch failed. operationFailedMessage := t.getFailedOperation(resp.odata.Err.Message.Value) - requestID, date, version := getDebugHeaders(resp.headers) + requestID, date, version := getDebugHeaders(resp.resp.Header) return AzureStorageServiceError{ - StatusCode: resp.statusCode, + StatusCode: resp.resp.StatusCode, Code: resp.odata.Err.Code, RequestID: requestID, Date: date, diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/tableserviceclient.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/tableserviceclient.go index 895dcfded..1f063a395 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/tableserviceclient.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/tableserviceclient.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "encoding/json" "fmt" @@ -131,13 +145,13 @@ func (t *TableServiceClient) queryTables(uri string, headers map[string]string, if err != nil { return nil, err } - defer resp.body.Close() + defer resp.Body.Close() - if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil { + if err := checkRespCode(resp, []int{http.StatusOK}); err != nil { return nil, err } - respBody, err := ioutil.ReadAll(resp.body) + respBody, err := ioutil.ReadAll(resp.Body) if err != nil { return nil, err } @@ -152,7 +166,7 @@ func (t *TableServiceClient) queryTables(uri string, headers map[string]string, } out.tsc = t - nextLink := resp.headers.Get(http.CanonicalHeaderKey(headerXmsContinuation)) + nextLink := resp.Header.Get(http.CanonicalHeaderKey(headerXmsContinuation)) if nextLink == "" { out.NextLink = nil } else { diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go index 8a902be2f..e8a5dcf8c 100644 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go +++ b/vendor/github.com/Azure/azure-sdk-for-go/storage/util.go @@ -1,5 +1,19 @@ package storage +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "bytes" "crypto/hmac" @@ -18,7 +32,29 @@ import ( ) var ( - fixedTime = time.Date(2050, time.December, 20, 21, 55, 0, 0, time.FixedZone("GMT", -6)) + fixedTime = time.Date(2050, time.December, 20, 21, 55, 0, 0, time.FixedZone("GMT", -6)) + accountSASOptions = AccountSASTokenOptions{ + Services: Services{ + Blob: true, + }, + ResourceTypes: ResourceTypes{ + Service: true, + Container: true, + Object: true, + }, + Permissions: Permissions{ + Read: true, + Write: true, + Delete: true, + List: true, + Add: true, + Create: true, + Update: true, + Process: true, + }, + Expiry: fixedTime, + UseHTTPS: true, + } ) func (c Client) computeHmac256(message string) string { @@ -35,6 +71,10 @@ func timeRfc1123Formatted(t time.Time) string { return t.Format(http.TimeFormat) } +func timeRFC3339Formatted(t time.Time) string { + return t.Format("2006-01-02T15:04:05.0000000Z") +} + func mergeParams(v1, v2 url.Values) url.Values { out := url.Values{} for k, v := range v1 { @@ -136,7 +176,7 @@ func addTimeout(params url.Values, timeout uint) url.Values { func addSnapshot(params url.Values, snapshot *time.Time) url.Values { if snapshot != nil { - params.Add("snapshot", timeRfc1123Formatted(*snapshot)) + params.Add("snapshot", timeRFC3339Formatted(*snapshot)) } return params } @@ -169,6 +209,11 @@ func (t *TimeRFC1123) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error return nil } +// MarshalXML marshals using time.RFC1123. +func (t *TimeRFC1123) MarshalXML(e *xml.Encoder, start xml.StartElement) error { + return e.EncodeElement(time.Time(*t).Format(time.RFC1123), start) +} + // returns a map of custom metadata values from the specified HTTP header func getMetadataFromHeaders(header http.Header) map[string]string { metadata := make(map[string]string) diff --git a/vendor/github.com/Azure/azure-sdk-for-go/storage/version.go b/vendor/github.com/Azure/azure-sdk-for-go/storage/version.go deleted file mode 100644 index a23fff1e2..000000000 --- a/vendor/github.com/Azure/azure-sdk-for-go/storage/version.go +++ /dev/null @@ -1,5 +0,0 @@ -package storage - -var ( - sdkVersion = "10.0.2" -) diff --git a/vendor/github.com/Azure/azure-sdk-for-go/version/version.go b/vendor/github.com/Azure/azure-sdk-for-go/version/version.go new file mode 100644 index 000000000..0d0677e74 --- /dev/null +++ b/vendor/github.com/Azure/azure-sdk-for-go/version/version.go @@ -0,0 +1,21 @@ +package version + +// Copyright (c) Microsoft and contributors. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// +// See the License for the specific language governing permissions and +// limitations under the License. +// +// Code generated by Microsoft (R) AutoRest Code Generator. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +// Number contains the semantic version of this SDK. 
+const Number = "v17.3.1" diff --git a/vendor/github.com/Azure/go-ansiterm/LICENSE b/vendor/github.com/Azure/go-ansiterm/LICENSE new file mode 100644 index 000000000..e3d9a64d1 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2015 Microsoft Corporation + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. diff --git a/vendor/github.com/Azure/go-ansiterm/README.md b/vendor/github.com/Azure/go-ansiterm/README.md new file mode 100644 index 000000000..261c041e7 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/README.md @@ -0,0 +1,12 @@ +# go-ansiterm + +This is a cross platform Ansi Terminal Emulation library. It reads a stream of Ansi characters and produces the appropriate function calls. The results of the function calls are platform dependent. + +For example the parser might receive "ESC, [, A" as a stream of three characters. This is the code for Cursor Up (http://www.vt100.net/docs/vt510-rm/CUU). The parser then calls the cursor up function (CUU()) on an event handler. The event handler determines what platform specific work must be done to cause the cursor to move up one position. + +The parser (parser.go) is a partial implementation of this state machine (http://vt100.net/emu/vt500_parser.png). There are also two event handler implementations, one for tests (test_event_handler.go) to validate that the expected events are being produced and called, the other is a Windows implementation (winterm/win_event_handler.go). + +See parser_test.go for examples exercising the state machine and generating appropriate function calls. + +----- +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. 
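The README above describes the event-driven design of the vendored parser: a stream of bytes goes in, and the parser invokes methods on an `AnsiEventHandler`. Purely as an illustrative sketch (it is not part of the vendored change), the program below shows how the pieces added in this diff fit together. `CreateParser`, `Parse`, the `AnsiEventHandler` interface, and the `"Ground"` state name are taken from the files that follow; `demoHandler` and its log output are invented for the example.

```go
package main

// Hedged usage sketch for the vendored github.com/Azure/go-ansiterm parser.
// demoHandler and its output are illustrative only; the API calls below come
// from the parser.go and event_handler.go files added in this diff.

import (
	"fmt"

	"github.com/Azure/go-ansiterm"
)

// demoHandler satisfies ansiterm.AnsiEventHandler by logging selected events.
type demoHandler struct{}

func (demoHandler) Print(b byte) error     { fmt.Printf("print %q\n", b); return nil }
func (demoHandler) Execute(b byte) error   { fmt.Printf("execute %#x\n", b); return nil }
func (demoHandler) CUU(n int) error        { fmt.Println("cursor up", n); return nil }
func (demoHandler) CUD(n int) error        { fmt.Println("cursor down", n); return nil }
func (demoHandler) CUF(n int) error        { return nil }
func (demoHandler) CUB(n int) error        { return nil }
func (demoHandler) CNL(n int) error        { return nil }
func (demoHandler) CPL(n int) error        { return nil }
func (demoHandler) CHA(n int) error        { return nil }
func (demoHandler) VPA(n int) error        { return nil }
func (demoHandler) CUP(x, y int) error     { fmt.Println("cursor position", x, y); return nil }
func (demoHandler) HVP(x, y int) error     { return nil }
func (demoHandler) DECTCEM(v bool) error   { return nil }
func (demoHandler) DECOM(v bool) error     { return nil }
func (demoHandler) DECCOLM(v bool) error   { return nil }
func (demoHandler) ED(n int) error         { return nil }
func (demoHandler) EL(n int) error         { return nil }
func (demoHandler) IL(n int) error         { return nil }
func (demoHandler) DL(n int) error         { return nil }
func (demoHandler) ICH(n int) error        { return nil }
func (demoHandler) DCH(n int) error        { return nil }
func (demoHandler) SGR(v []int) error      { fmt.Println("set graphics rendition", v); return nil }
func (demoHandler) SU(n int) error         { return nil }
func (demoHandler) SD(n int) error         { return nil }
func (demoHandler) DA(v []string) error    { return nil }
func (demoHandler) DECSTBM(t, b int) error { return nil }
func (demoHandler) IND() error             { return nil }
func (demoHandler) RI() error              { return nil }
func (demoHandler) Flush() error           { return nil }

func main() {
	// Start in the "Ground" state; the state name matches parser.go in this diff.
	parser := ansiterm.CreateParser("Ground", demoHandler{})

	// "ESC [ A" is Cursor Up (CUU), the example walked through in the README above.
	if _, err := parser.Parse([]byte("\x1b[Ahello")); err != nil {
		fmt.Println("parse error:", err)
	}
	// Expected calls: CUU(1), then Print for each byte of "hello", then Flush.
}
```

Under those assumptions, feeding the parser `"\x1b[Ahello"` should yield a single `CUU(1)` call followed by one `Print` per byte of `hello`, mirroring the "ESC, [, A" walkthrough in the README.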
diff --git a/vendor/github.com/Azure/go-ansiterm/constants.go b/vendor/github.com/Azure/go-ansiterm/constants.go new file mode 100644 index 000000000..96504a33b --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/constants.go @@ -0,0 +1,188 @@ +package ansiterm + +const LogEnv = "DEBUG_TERMINAL" + +// ANSI constants +// References: +// -- http://www.ecma-international.org/publications/standards/Ecma-048.htm +// -- http://man7.org/linux/man-pages/man4/console_codes.4.html +// -- http://manpages.ubuntu.com/manpages/intrepid/man4/console_codes.4.html +// -- http://en.wikipedia.org/wiki/ANSI_escape_code +// -- http://vt100.net/emu/dec_ansi_parser +// -- http://vt100.net/emu/vt500_parser.svg +// -- http://invisible-island.net/xterm/ctlseqs/ctlseqs.html +// -- http://www.inwap.com/pdp10/ansicode.txt +const ( + // ECMA-48 Set Graphics Rendition + // Note: + // -- Constants leading with an underscore (e.g., _ANSI_xxx) are unsupported or reserved + // -- Fonts could possibly be supported via SetCurrentConsoleFontEx + // -- Windows does not expose the per-window cursor (i.e., caret) blink times + ANSI_SGR_RESET = 0 + ANSI_SGR_BOLD = 1 + ANSI_SGR_DIM = 2 + _ANSI_SGR_ITALIC = 3 + ANSI_SGR_UNDERLINE = 4 + _ANSI_SGR_BLINKSLOW = 5 + _ANSI_SGR_BLINKFAST = 6 + ANSI_SGR_REVERSE = 7 + _ANSI_SGR_INVISIBLE = 8 + _ANSI_SGR_LINETHROUGH = 9 + _ANSI_SGR_FONT_00 = 10 + _ANSI_SGR_FONT_01 = 11 + _ANSI_SGR_FONT_02 = 12 + _ANSI_SGR_FONT_03 = 13 + _ANSI_SGR_FONT_04 = 14 + _ANSI_SGR_FONT_05 = 15 + _ANSI_SGR_FONT_06 = 16 + _ANSI_SGR_FONT_07 = 17 + _ANSI_SGR_FONT_08 = 18 + _ANSI_SGR_FONT_09 = 19 + _ANSI_SGR_FONT_10 = 20 + _ANSI_SGR_DOUBLEUNDERLINE = 21 + ANSI_SGR_BOLD_DIM_OFF = 22 + _ANSI_SGR_ITALIC_OFF = 23 + ANSI_SGR_UNDERLINE_OFF = 24 + _ANSI_SGR_BLINK_OFF = 25 + _ANSI_SGR_RESERVED_00 = 26 + ANSI_SGR_REVERSE_OFF = 27 + _ANSI_SGR_INVISIBLE_OFF = 28 + _ANSI_SGR_LINETHROUGH_OFF = 29 + ANSI_SGR_FOREGROUND_BLACK = 30 + ANSI_SGR_FOREGROUND_RED = 31 + ANSI_SGR_FOREGROUND_GREEN = 32 + ANSI_SGR_FOREGROUND_YELLOW = 33 + ANSI_SGR_FOREGROUND_BLUE = 34 + ANSI_SGR_FOREGROUND_MAGENTA = 35 + ANSI_SGR_FOREGROUND_CYAN = 36 + ANSI_SGR_FOREGROUND_WHITE = 37 + _ANSI_SGR_RESERVED_01 = 38 + ANSI_SGR_FOREGROUND_DEFAULT = 39 + ANSI_SGR_BACKGROUND_BLACK = 40 + ANSI_SGR_BACKGROUND_RED = 41 + ANSI_SGR_BACKGROUND_GREEN = 42 + ANSI_SGR_BACKGROUND_YELLOW = 43 + ANSI_SGR_BACKGROUND_BLUE = 44 + ANSI_SGR_BACKGROUND_MAGENTA = 45 + ANSI_SGR_BACKGROUND_CYAN = 46 + ANSI_SGR_BACKGROUND_WHITE = 47 + _ANSI_SGR_RESERVED_02 = 48 + ANSI_SGR_BACKGROUND_DEFAULT = 49 + // 50 - 65: Unsupported + + ANSI_MAX_CMD_LENGTH = 4096 + + MAX_INPUT_EVENTS = 128 + DEFAULT_WIDTH = 80 + DEFAULT_HEIGHT = 24 + + ANSI_BEL = 0x07 + ANSI_BACKSPACE = 0x08 + ANSI_TAB = 0x09 + ANSI_LINE_FEED = 0x0A + ANSI_VERTICAL_TAB = 0x0B + ANSI_FORM_FEED = 0x0C + ANSI_CARRIAGE_RETURN = 0x0D + ANSI_ESCAPE_PRIMARY = 0x1B + ANSI_ESCAPE_SECONDARY = 0x5B + ANSI_OSC_STRING_ENTRY = 0x5D + ANSI_COMMAND_FIRST = 0x40 + ANSI_COMMAND_LAST = 0x7E + DCS_ENTRY = 0x90 + CSI_ENTRY = 0x9B + OSC_STRING = 0x9D + ANSI_PARAMETER_SEP = ";" + ANSI_CMD_G0 = '(' + ANSI_CMD_G1 = ')' + ANSI_CMD_G2 = '*' + ANSI_CMD_G3 = '+' + ANSI_CMD_DECPNM = '>' + ANSI_CMD_DECPAM = '=' + ANSI_CMD_OSC = ']' + ANSI_CMD_STR_TERM = '\\' + + KEY_CONTROL_PARAM_2 = ";2" + KEY_CONTROL_PARAM_3 = ";3" + KEY_CONTROL_PARAM_4 = ";4" + KEY_CONTROL_PARAM_5 = ";5" + KEY_CONTROL_PARAM_6 = ";6" + KEY_CONTROL_PARAM_7 = ";7" + KEY_CONTROL_PARAM_8 = ";8" + KEY_ESC_CSI = "\x1B[" + KEY_ESC_N = "\x1BN" + KEY_ESC_O = "\x1BO" + + FILL_CHARACTER = ' ' +) + +func 
getByteRange(start byte, end byte) []byte { + bytes := make([]byte, 0, 32) + for i := start; i <= end; i++ { + bytes = append(bytes, byte(i)) + } + + return bytes +} + +var toGroundBytes = getToGroundBytes() +var executors = getExecuteBytes() + +// SPACE 20+A0 hex Always and everywhere a blank space +// Intermediate 20-2F hex !"#$%&'()*+,-./ +var intermeds = getByteRange(0x20, 0x2F) + +// Parameters 30-3F hex 0123456789:;<=>? +// CSI Parameters 30-39, 3B hex 0123456789; +var csiParams = getByteRange(0x30, 0x3F) + +var csiCollectables = append(getByteRange(0x30, 0x39), getByteRange(0x3B, 0x3F)...) + +// Uppercase 40-5F hex @ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_ +var upperCase = getByteRange(0x40, 0x5F) + +// Lowercase 60-7E hex `abcdefghijlkmnopqrstuvwxyz{|}~ +var lowerCase = getByteRange(0x60, 0x7E) + +// Alphabetics 40-7E hex (all of upper and lower case) +var alphabetics = append(upperCase, lowerCase...) + +var printables = getByteRange(0x20, 0x7F) + +var escapeIntermediateToGroundBytes = getByteRange(0x30, 0x7E) +var escapeToGroundBytes = getEscapeToGroundBytes() + +// See http://www.vt100.net/emu/vt500_parser.png for description of the complex +// byte ranges below + +func getEscapeToGroundBytes() []byte { + escapeToGroundBytes := getByteRange(0x30, 0x4F) + escapeToGroundBytes = append(escapeToGroundBytes, getByteRange(0x51, 0x57)...) + escapeToGroundBytes = append(escapeToGroundBytes, 0x59) + escapeToGroundBytes = append(escapeToGroundBytes, 0x5A) + escapeToGroundBytes = append(escapeToGroundBytes, 0x5C) + escapeToGroundBytes = append(escapeToGroundBytes, getByteRange(0x60, 0x7E)...) + return escapeToGroundBytes +} + +func getExecuteBytes() []byte { + executeBytes := getByteRange(0x00, 0x17) + executeBytes = append(executeBytes, 0x19) + executeBytes = append(executeBytes, getByteRange(0x1C, 0x1F)...) + return executeBytes +} + +func getToGroundBytes() []byte { + groundBytes := []byte{0x18} + groundBytes = append(groundBytes, 0x1A) + groundBytes = append(groundBytes, getByteRange(0x80, 0x8F)...) + groundBytes = append(groundBytes, getByteRange(0x91, 0x97)...) 
+ groundBytes = append(groundBytes, 0x99) + groundBytes = append(groundBytes, 0x9A) + groundBytes = append(groundBytes, 0x9C) + return groundBytes +} + +// Delete 7F hex Always and everywhere ignored +// C1 Control 80-9F hex 32 additional control characters +// G1 Displayable A1-FE hex 94 additional displayable characters +// Special A0+FF hex Same as SPACE and DELETE diff --git a/vendor/github.com/Azure/go-ansiterm/context.go b/vendor/github.com/Azure/go-ansiterm/context.go new file mode 100644 index 000000000..8d66e777c --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/context.go @@ -0,0 +1,7 @@ +package ansiterm + +type ansiContext struct { + currentChar byte + paramBuffer []byte + interBuffer []byte +} diff --git a/vendor/github.com/Azure/go-ansiterm/csi_entry_state.go b/vendor/github.com/Azure/go-ansiterm/csi_entry_state.go new file mode 100644 index 000000000..bcbe00d0c --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/csi_entry_state.go @@ -0,0 +1,49 @@ +package ansiterm + +type csiEntryState struct { + baseState +} + +func (csiState csiEntryState) Handle(b byte) (s state, e error) { + csiState.parser.logf("CsiEntry::Handle %#x", b) + + nextState, err := csiState.baseState.Handle(b) + if nextState != nil || err != nil { + return nextState, err + } + + switch { + case sliceContains(alphabetics, b): + return csiState.parser.ground, nil + case sliceContains(csiCollectables, b): + return csiState.parser.csiParam, nil + case sliceContains(executors, b): + return csiState, csiState.parser.execute() + } + + return csiState, nil +} + +func (csiState csiEntryState) Transition(s state) error { + csiState.parser.logf("CsiEntry::Transition %s --> %s", csiState.Name(), s.Name()) + csiState.baseState.Transition(s) + + switch s { + case csiState.parser.ground: + return csiState.parser.csiDispatch() + case csiState.parser.csiParam: + switch { + case sliceContains(csiParams, csiState.parser.context.currentChar): + csiState.parser.collectParam() + case sliceContains(intermeds, csiState.parser.context.currentChar): + csiState.parser.collectInter() + } + } + + return nil +} + +func (csiState csiEntryState) Enter() error { + csiState.parser.clear() + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/csi_param_state.go b/vendor/github.com/Azure/go-ansiterm/csi_param_state.go new file mode 100644 index 000000000..7ed5e01c3 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/csi_param_state.go @@ -0,0 +1,38 @@ +package ansiterm + +type csiParamState struct { + baseState +} + +func (csiState csiParamState) Handle(b byte) (s state, e error) { + csiState.parser.logf("CsiParam::Handle %#x", b) + + nextState, err := csiState.baseState.Handle(b) + if nextState != nil || err != nil { + return nextState, err + } + + switch { + case sliceContains(alphabetics, b): + return csiState.parser.ground, nil + case sliceContains(csiCollectables, b): + csiState.parser.collectParam() + return csiState, nil + case sliceContains(executors, b): + return csiState, csiState.parser.execute() + } + + return csiState, nil +} + +func (csiState csiParamState) Transition(s state) error { + csiState.parser.logf("CsiParam::Transition %s --> %s", csiState.Name(), s.Name()) + csiState.baseState.Transition(s) + + switch s { + case csiState.parser.ground: + return csiState.parser.csiDispatch() + } + + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/escape_intermediate_state.go b/vendor/github.com/Azure/go-ansiterm/escape_intermediate_state.go new file mode 100644 index 000000000..1c719db9e --- 
/dev/null +++ b/vendor/github.com/Azure/go-ansiterm/escape_intermediate_state.go @@ -0,0 +1,36 @@ +package ansiterm + +type escapeIntermediateState struct { + baseState +} + +func (escState escapeIntermediateState) Handle(b byte) (s state, e error) { + escState.parser.logf("escapeIntermediateState::Handle %#x", b) + nextState, err := escState.baseState.Handle(b) + if nextState != nil || err != nil { + return nextState, err + } + + switch { + case sliceContains(intermeds, b): + return escState, escState.parser.collectInter() + case sliceContains(executors, b): + return escState, escState.parser.execute() + case sliceContains(escapeIntermediateToGroundBytes, b): + return escState.parser.ground, nil + } + + return escState, nil +} + +func (escState escapeIntermediateState) Transition(s state) error { + escState.parser.logf("escapeIntermediateState::Transition %s --> %s", escState.Name(), s.Name()) + escState.baseState.Transition(s) + + switch s { + case escState.parser.ground: + return escState.parser.escDispatch() + } + + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/escape_state.go b/vendor/github.com/Azure/go-ansiterm/escape_state.go new file mode 100644 index 000000000..6390abd23 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/escape_state.go @@ -0,0 +1,47 @@ +package ansiterm + +type escapeState struct { + baseState +} + +func (escState escapeState) Handle(b byte) (s state, e error) { + escState.parser.logf("escapeState::Handle %#x", b) + nextState, err := escState.baseState.Handle(b) + if nextState != nil || err != nil { + return nextState, err + } + + switch { + case b == ANSI_ESCAPE_SECONDARY: + return escState.parser.csiEntry, nil + case b == ANSI_OSC_STRING_ENTRY: + return escState.parser.oscString, nil + case sliceContains(executors, b): + return escState, escState.parser.execute() + case sliceContains(escapeToGroundBytes, b): + return escState.parser.ground, nil + case sliceContains(intermeds, b): + return escState.parser.escapeIntermediate, nil + } + + return escState, nil +} + +func (escState escapeState) Transition(s state) error { + escState.parser.logf("Escape::Transition %s --> %s", escState.Name(), s.Name()) + escState.baseState.Transition(s) + + switch s { + case escState.parser.ground: + return escState.parser.escDispatch() + case escState.parser.escapeIntermediate: + return escState.parser.collectInter() + } + + return nil +} + +func (escState escapeState) Enter() error { + escState.parser.clear() + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/event_handler.go b/vendor/github.com/Azure/go-ansiterm/event_handler.go new file mode 100644 index 000000000..98087b38c --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/event_handler.go @@ -0,0 +1,90 @@ +package ansiterm + +type AnsiEventHandler interface { + // Print + Print(b byte) error + + // Execute C0 commands + Execute(b byte) error + + // CUrsor Up + CUU(int) error + + // CUrsor Down + CUD(int) error + + // CUrsor Forward + CUF(int) error + + // CUrsor Backward + CUB(int) error + + // Cursor to Next Line + CNL(int) error + + // Cursor to Previous Line + CPL(int) error + + // Cursor Horizontal position Absolute + CHA(int) error + + // Vertical line Position Absolute + VPA(int) error + + // CUrsor Position + CUP(int, int) error + + // Horizontal and Vertical Position (depends on PUM) + HVP(int, int) error + + // Text Cursor Enable Mode + DECTCEM(bool) error + + // Origin Mode + DECOM(bool) error + + // 132 Column Mode + DECCOLM(bool) error + + // Erase in Display + ED(int) 
error + + // Erase in Line + EL(int) error + + // Insert Line + IL(int) error + + // Delete Line + DL(int) error + + // Insert Character + ICH(int) error + + // Delete Character + DCH(int) error + + // Set Graphics Rendition + SGR([]int) error + + // Pan Down + SU(int) error + + // Pan Up + SD(int) error + + // Device Attributes + DA([]string) error + + // Set Top and Bottom Margins + DECSTBM(int, int) error + + // Index + IND() error + + // Reverse Index + RI() error + + // Flush updates from previous commands + Flush() error +} diff --git a/vendor/github.com/Azure/go-ansiterm/ground_state.go b/vendor/github.com/Azure/go-ansiterm/ground_state.go new file mode 100644 index 000000000..52451e946 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/ground_state.go @@ -0,0 +1,24 @@ +package ansiterm + +type groundState struct { + baseState +} + +func (gs groundState) Handle(b byte) (s state, e error) { + gs.parser.context.currentChar = b + + nextState, err := gs.baseState.Handle(b) + if nextState != nil || err != nil { + return nextState, err + } + + switch { + case sliceContains(printables, b): + return gs, gs.parser.print() + + case sliceContains(executors, b): + return gs, gs.parser.execute() + } + + return gs, nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/osc_string_state.go b/vendor/github.com/Azure/go-ansiterm/osc_string_state.go new file mode 100644 index 000000000..593b10ab6 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/osc_string_state.go @@ -0,0 +1,31 @@ +package ansiterm + +type oscStringState struct { + baseState +} + +func (oscState oscStringState) Handle(b byte) (s state, e error) { + oscState.parser.logf("OscString::Handle %#x", b) + nextState, err := oscState.baseState.Handle(b) + if nextState != nil || err != nil { + return nextState, err + } + + switch { + case isOscStringTerminator(b): + return oscState.parser.ground, nil + } + + return oscState, nil +} + +// See below for OSC string terminators for linux +// http://man7.org/linux/man-pages/man4/console_codes.4.html +func isOscStringTerminator(b byte) bool { + + if b == ANSI_BEL || b == 0x5C { + return true + } + + return false +} diff --git a/vendor/github.com/Azure/go-ansiterm/parser.go b/vendor/github.com/Azure/go-ansiterm/parser.go new file mode 100644 index 000000000..03cec7ada --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/parser.go @@ -0,0 +1,151 @@ +package ansiterm + +import ( + "errors" + "log" + "os" +) + +type AnsiParser struct { + currState state + eventHandler AnsiEventHandler + context *ansiContext + csiEntry state + csiParam state + dcsEntry state + escape state + escapeIntermediate state + error state + ground state + oscString state + stateMap []state + + logf func(string, ...interface{}) +} + +type Option func(*AnsiParser) + +func WithLogf(f func(string, ...interface{})) Option { + return func(ap *AnsiParser) { + ap.logf = f + } +} + +func CreateParser(initialState string, evtHandler AnsiEventHandler, opts ...Option) *AnsiParser { + ap := &AnsiParser{ + eventHandler: evtHandler, + context: &ansiContext{}, + } + for _, o := range opts { + o(ap) + } + + if isDebugEnv := os.Getenv(LogEnv); isDebugEnv == "1" { + logFile, _ := os.Create("ansiParser.log") + logger := log.New(logFile, "", log.LstdFlags) + if ap.logf != nil { + l := ap.logf + ap.logf = func(s string, v ...interface{}) { + l(s, v...) + logger.Printf(s, v...) 
+ } + } else { + ap.logf = logger.Printf + } + } + + if ap.logf == nil { + ap.logf = func(string, ...interface{}) {} + } + + ap.csiEntry = csiEntryState{baseState{name: "CsiEntry", parser: ap}} + ap.csiParam = csiParamState{baseState{name: "CsiParam", parser: ap}} + ap.dcsEntry = dcsEntryState{baseState{name: "DcsEntry", parser: ap}} + ap.escape = escapeState{baseState{name: "Escape", parser: ap}} + ap.escapeIntermediate = escapeIntermediateState{baseState{name: "EscapeIntermediate", parser: ap}} + ap.error = errorState{baseState{name: "Error", parser: ap}} + ap.ground = groundState{baseState{name: "Ground", parser: ap}} + ap.oscString = oscStringState{baseState{name: "OscString", parser: ap}} + + ap.stateMap = []state{ + ap.csiEntry, + ap.csiParam, + ap.dcsEntry, + ap.escape, + ap.escapeIntermediate, + ap.error, + ap.ground, + ap.oscString, + } + + ap.currState = getState(initialState, ap.stateMap) + + ap.logf("CreateParser: parser %p", ap) + return ap +} + +func getState(name string, states []state) state { + for _, el := range states { + if el.Name() == name { + return el + } + } + + return nil +} + +func (ap *AnsiParser) Parse(bytes []byte) (int, error) { + for i, b := range bytes { + if err := ap.handle(b); err != nil { + return i, err + } + } + + return len(bytes), ap.eventHandler.Flush() +} + +func (ap *AnsiParser) handle(b byte) error { + ap.context.currentChar = b + newState, err := ap.currState.Handle(b) + if err != nil { + return err + } + + if newState == nil { + ap.logf("WARNING: newState is nil") + return errors.New("New state of 'nil' is invalid.") + } + + if newState != ap.currState { + if err := ap.changeState(newState); err != nil { + return err + } + } + + return nil +} + +func (ap *AnsiParser) changeState(newState state) error { + ap.logf("ChangeState %s --> %s", ap.currState.Name(), newState.Name()) + + // Exit old state + if err := ap.currState.Exit(); err != nil { + ap.logf("Exit state '%s' failed with : '%v'", ap.currState.Name(), err) + return err + } + + // Perform transition action + if err := ap.currState.Transition(newState); err != nil { + ap.logf("Transition from '%s' to '%s' failed with: '%v'", ap.currState.Name(), newState.Name, err) + return err + } + + // Enter new state + if err := newState.Enter(); err != nil { + ap.logf("Enter state '%s' failed with: '%v'", newState.Name(), err) + return err + } + + ap.currState = newState + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/parser_action_helpers.go b/vendor/github.com/Azure/go-ansiterm/parser_action_helpers.go new file mode 100644 index 000000000..de0a1f9cd --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/parser_action_helpers.go @@ -0,0 +1,99 @@ +package ansiterm + +import ( + "strconv" +) + +func parseParams(bytes []byte) ([]string, error) { + paramBuff := make([]byte, 0, 0) + params := []string{} + + for _, v := range bytes { + if v == ';' { + if len(paramBuff) > 0 { + // Completed parameter, append it to the list + s := string(paramBuff) + params = append(params, s) + paramBuff = make([]byte, 0, 0) + } + } else { + paramBuff = append(paramBuff, v) + } + } + + // Last parameter may not be terminated with ';' + if len(paramBuff) > 0 { + s := string(paramBuff) + params = append(params, s) + } + + return params, nil +} + +func parseCmd(context ansiContext) (string, error) { + return string(context.currentChar), nil +} + +func getInt(params []string, dflt int) int { + i := getInts(params, 1, dflt)[0] + return i +} + +func getInts(params []string, minCount int, dflt int) []int { + 
ints := []int{} + + for _, v := range params { + i, _ := strconv.Atoi(v) + // Zero is mapped to the default value in VT100. + if i == 0 { + i = dflt + } + ints = append(ints, i) + } + + if len(ints) < minCount { + remaining := minCount - len(ints) + for i := 0; i < remaining; i++ { + ints = append(ints, dflt) + } + } + + return ints +} + +func (ap *AnsiParser) modeDispatch(param string, set bool) error { + switch param { + case "?3": + return ap.eventHandler.DECCOLM(set) + case "?6": + return ap.eventHandler.DECOM(set) + case "?25": + return ap.eventHandler.DECTCEM(set) + } + return nil +} + +func (ap *AnsiParser) hDispatch(params []string) error { + if len(params) == 1 { + return ap.modeDispatch(params[0], true) + } + + return nil +} + +func (ap *AnsiParser) lDispatch(params []string) error { + if len(params) == 1 { + return ap.modeDispatch(params[0], false) + } + + return nil +} + +func getEraseParam(params []string) int { + param := getInt(params, 0) + if param < 0 || 3 < param { + param = 0 + } + + return param +} diff --git a/vendor/github.com/Azure/go-ansiterm/parser_actions.go b/vendor/github.com/Azure/go-ansiterm/parser_actions.go new file mode 100644 index 000000000..0bb5e51e9 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/parser_actions.go @@ -0,0 +1,119 @@ +package ansiterm + +func (ap *AnsiParser) collectParam() error { + currChar := ap.context.currentChar + ap.logf("collectParam %#x", currChar) + ap.context.paramBuffer = append(ap.context.paramBuffer, currChar) + return nil +} + +func (ap *AnsiParser) collectInter() error { + currChar := ap.context.currentChar + ap.logf("collectInter %#x", currChar) + ap.context.paramBuffer = append(ap.context.interBuffer, currChar) + return nil +} + +func (ap *AnsiParser) escDispatch() error { + cmd, _ := parseCmd(*ap.context) + intermeds := ap.context.interBuffer + ap.logf("escDispatch currentChar: %#x", ap.context.currentChar) + ap.logf("escDispatch: %v(%v)", cmd, intermeds) + + switch cmd { + case "D": // IND + return ap.eventHandler.IND() + case "E": // NEL, equivalent to CRLF + err := ap.eventHandler.Execute(ANSI_CARRIAGE_RETURN) + if err == nil { + err = ap.eventHandler.Execute(ANSI_LINE_FEED) + } + return err + case "M": // RI + return ap.eventHandler.RI() + } + + return nil +} + +func (ap *AnsiParser) csiDispatch() error { + cmd, _ := parseCmd(*ap.context) + params, _ := parseParams(ap.context.paramBuffer) + ap.logf("Parsed params: %v with length: %d", params, len(params)) + + ap.logf("csiDispatch: %v(%v)", cmd, params) + + switch cmd { + case "@": + return ap.eventHandler.ICH(getInt(params, 1)) + case "A": + return ap.eventHandler.CUU(getInt(params, 1)) + case "B": + return ap.eventHandler.CUD(getInt(params, 1)) + case "C": + return ap.eventHandler.CUF(getInt(params, 1)) + case "D": + return ap.eventHandler.CUB(getInt(params, 1)) + case "E": + return ap.eventHandler.CNL(getInt(params, 1)) + case "F": + return ap.eventHandler.CPL(getInt(params, 1)) + case "G": + return ap.eventHandler.CHA(getInt(params, 1)) + case "H": + ints := getInts(params, 2, 1) + x, y := ints[0], ints[1] + return ap.eventHandler.CUP(x, y) + case "J": + param := getEraseParam(params) + return ap.eventHandler.ED(param) + case "K": + param := getEraseParam(params) + return ap.eventHandler.EL(param) + case "L": + return ap.eventHandler.IL(getInt(params, 1)) + case "M": + return ap.eventHandler.DL(getInt(params, 1)) + case "P": + return ap.eventHandler.DCH(getInt(params, 1)) + case "S": + return ap.eventHandler.SU(getInt(params, 1)) + case "T": + return 
ap.eventHandler.SD(getInt(params, 1)) + case "c": + return ap.eventHandler.DA(params) + case "d": + return ap.eventHandler.VPA(getInt(params, 1)) + case "f": + ints := getInts(params, 2, 1) + x, y := ints[0], ints[1] + return ap.eventHandler.HVP(x, y) + case "h": + return ap.hDispatch(params) + case "l": + return ap.lDispatch(params) + case "m": + return ap.eventHandler.SGR(getInts(params, 1, 0)) + case "r": + ints := getInts(params, 2, 1) + top, bottom := ints[0], ints[1] + return ap.eventHandler.DECSTBM(top, bottom) + default: + ap.logf("ERROR: Unsupported CSI command: '%s', with full context: %v", cmd, ap.context) + return nil + } + +} + +func (ap *AnsiParser) print() error { + return ap.eventHandler.Print(ap.context.currentChar) +} + +func (ap *AnsiParser) clear() error { + ap.context = &ansiContext{} + return nil +} + +func (ap *AnsiParser) execute() error { + return ap.eventHandler.Execute(ap.context.currentChar) +} diff --git a/vendor/github.com/Azure/go-ansiterm/states.go b/vendor/github.com/Azure/go-ansiterm/states.go new file mode 100644 index 000000000..f2ea1fcd1 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/states.go @@ -0,0 +1,71 @@ +package ansiterm + +type stateID int + +type state interface { + Enter() error + Exit() error + Handle(byte) (state, error) + Name() string + Transition(state) error +} + +type baseState struct { + name string + parser *AnsiParser +} + +func (base baseState) Enter() error { + return nil +} + +func (base baseState) Exit() error { + return nil +} + +func (base baseState) Handle(b byte) (s state, e error) { + + switch { + case b == CSI_ENTRY: + return base.parser.csiEntry, nil + case b == DCS_ENTRY: + return base.parser.dcsEntry, nil + case b == ANSI_ESCAPE_PRIMARY: + return base.parser.escape, nil + case b == OSC_STRING: + return base.parser.oscString, nil + case sliceContains(toGroundBytes, b): + return base.parser.ground, nil + } + + return nil, nil +} + +func (base baseState) Name() string { + return base.name +} + +func (base baseState) Transition(s state) error { + if s == base.parser.ground { + execBytes := []byte{0x18} + execBytes = append(execBytes, 0x1A) + execBytes = append(execBytes, getByteRange(0x80, 0x8F)...) + execBytes = append(execBytes, getByteRange(0x91, 0x97)...) 
+ execBytes = append(execBytes, 0x99) + execBytes = append(execBytes, 0x9A) + + if sliceContains(execBytes, base.parser.context.currentChar) { + return base.parser.execute() + } + } + + return nil +} + +type dcsEntryState struct { + baseState +} + +type errorState struct { + baseState +} diff --git a/vendor/github.com/Azure/go-ansiterm/utilities.go b/vendor/github.com/Azure/go-ansiterm/utilities.go new file mode 100644 index 000000000..392114493 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/utilities.go @@ -0,0 +1,21 @@ +package ansiterm + +import ( + "strconv" +) + +func sliceContains(bytes []byte, b byte) bool { + for _, v := range bytes { + if v == b { + return true + } + } + + return false +} + +func convertBytesToInteger(bytes []byte) int { + s := string(bytes) + i, _ := strconv.Atoi(s) + return i +} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/ansi.go b/vendor/github.com/Azure/go-ansiterm/winterm/ansi.go new file mode 100644 index 000000000..a67327972 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/winterm/ansi.go @@ -0,0 +1,182 @@ +// +build windows + +package winterm + +import ( + "fmt" + "os" + "strconv" + "strings" + "syscall" + + "github.com/Azure/go-ansiterm" +) + +// Windows keyboard constants +// See https://msdn.microsoft.com/en-us/library/windows/desktop/dd375731(v=vs.85).aspx. +const ( + VK_PRIOR = 0x21 // PAGE UP key + VK_NEXT = 0x22 // PAGE DOWN key + VK_END = 0x23 // END key + VK_HOME = 0x24 // HOME key + VK_LEFT = 0x25 // LEFT ARROW key + VK_UP = 0x26 // UP ARROW key + VK_RIGHT = 0x27 // RIGHT ARROW key + VK_DOWN = 0x28 // DOWN ARROW key + VK_SELECT = 0x29 // SELECT key + VK_PRINT = 0x2A // PRINT key + VK_EXECUTE = 0x2B // EXECUTE key + VK_SNAPSHOT = 0x2C // PRINT SCREEN key + VK_INSERT = 0x2D // INS key + VK_DELETE = 0x2E // DEL key + VK_HELP = 0x2F // HELP key + VK_F1 = 0x70 // F1 key + VK_F2 = 0x71 // F2 key + VK_F3 = 0x72 // F3 key + VK_F4 = 0x73 // F4 key + VK_F5 = 0x74 // F5 key + VK_F6 = 0x75 // F6 key + VK_F7 = 0x76 // F7 key + VK_F8 = 0x77 // F8 key + VK_F9 = 0x78 // F9 key + VK_F10 = 0x79 // F10 key + VK_F11 = 0x7A // F11 key + VK_F12 = 0x7B // F12 key + + RIGHT_ALT_PRESSED = 0x0001 + LEFT_ALT_PRESSED = 0x0002 + RIGHT_CTRL_PRESSED = 0x0004 + LEFT_CTRL_PRESSED = 0x0008 + SHIFT_PRESSED = 0x0010 + NUMLOCK_ON = 0x0020 + SCROLLLOCK_ON = 0x0040 + CAPSLOCK_ON = 0x0080 + ENHANCED_KEY = 0x0100 +) + +type ansiCommand struct { + CommandBytes []byte + Command string + Parameters []string + IsSpecial bool +} + +func newAnsiCommand(command []byte) *ansiCommand { + + if isCharacterSelectionCmdChar(command[1]) { + // Is Character Set Selection commands + return &ansiCommand{ + CommandBytes: command, + Command: string(command), + IsSpecial: true, + } + } + + // last char is command character + lastCharIndex := len(command) - 1 + + ac := &ansiCommand{ + CommandBytes: command, + Command: string(command[lastCharIndex]), + IsSpecial: false, + } + + // more than a single escape + if lastCharIndex != 0 { + start := 1 + // skip if double char escape sequence + if command[0] == ansiterm.ANSI_ESCAPE_PRIMARY && command[1] == ansiterm.ANSI_ESCAPE_SECONDARY { + start++ + } + // convert this to GetNextParam method + ac.Parameters = strings.Split(string(command[start:lastCharIndex]), ansiterm.ANSI_PARAMETER_SEP) + } + + return ac +} + +func (ac *ansiCommand) paramAsSHORT(index int, defaultValue int16) int16 { + if index < 0 || index >= len(ac.Parameters) { + return defaultValue + } + + param, err := strconv.ParseInt(ac.Parameters[index], 10, 16) + if err != 
nil { + return defaultValue + } + + return int16(param) +} + +func (ac *ansiCommand) String() string { + return fmt.Sprintf("0x%v \"%v\" (\"%v\")", + bytesToHex(ac.CommandBytes), + ac.Command, + strings.Join(ac.Parameters, "\",\"")) +} + +// isAnsiCommandChar returns true if the passed byte falls within the range of ANSI commands. +// See http://manpages.ubuntu.com/manpages/intrepid/man4/console_codes.4.html. +func isAnsiCommandChar(b byte) bool { + switch { + case ansiterm.ANSI_COMMAND_FIRST <= b && b <= ansiterm.ANSI_COMMAND_LAST && b != ansiterm.ANSI_ESCAPE_SECONDARY: + return true + case b == ansiterm.ANSI_CMD_G1 || b == ansiterm.ANSI_CMD_OSC || b == ansiterm.ANSI_CMD_DECPAM || b == ansiterm.ANSI_CMD_DECPNM: + // non-CSI escape sequence terminator + return true + case b == ansiterm.ANSI_CMD_STR_TERM || b == ansiterm.ANSI_BEL: + // String escape sequence terminator + return true + } + return false +} + +func isXtermOscSequence(command []byte, current byte) bool { + return (len(command) >= 2 && command[0] == ansiterm.ANSI_ESCAPE_PRIMARY && command[1] == ansiterm.ANSI_CMD_OSC && current != ansiterm.ANSI_BEL) +} + +func isCharacterSelectionCmdChar(b byte) bool { + return (b == ansiterm.ANSI_CMD_G0 || b == ansiterm.ANSI_CMD_G1 || b == ansiterm.ANSI_CMD_G2 || b == ansiterm.ANSI_CMD_G3) +} + +// bytesToHex converts a slice of bytes to a human-readable string. +func bytesToHex(b []byte) string { + hex := make([]string, len(b)) + for i, ch := range b { + hex[i] = fmt.Sprintf("%X", ch) + } + return strings.Join(hex, "") +} + +// ensureInRange adjusts the passed value, if necessary, to ensure it is within +// the passed min / max range. +func ensureInRange(n int16, min int16, max int16) int16 { + if n < min { + return min + } else if n > max { + return max + } else { + return n + } +} + +func GetStdFile(nFile int) (*os.File, uintptr) { + var file *os.File + switch nFile { + case syscall.STD_INPUT_HANDLE: + file = os.Stdin + case syscall.STD_OUTPUT_HANDLE: + file = os.Stdout + case syscall.STD_ERROR_HANDLE: + file = os.Stderr + default: + panic(fmt.Errorf("Invalid standard handle identifier: %v", nFile)) + } + + fd, err := syscall.GetStdHandle(nFile) + if err != nil { + panic(fmt.Errorf("Invalid standard handle identifier: %v -- %v", nFile, err)) + } + + return file, uintptr(fd) +} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/api.go b/vendor/github.com/Azure/go-ansiterm/winterm/api.go new file mode 100644 index 000000000..6055e33b9 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/winterm/api.go @@ -0,0 +1,327 @@ +// +build windows + +package winterm + +import ( + "fmt" + "syscall" + "unsafe" +) + +//=========================================================================================================== +// IMPORTANT NOTE: +// +// The methods below make extensive use of the "unsafe" package to obtain the required pointers. +// Beginning in Go 1.3, the garbage collector may release local variables (e.g., incoming arguments, stack +// variables) the pointers reference *before* the API completes. +// +// As a result, in those cases, the code must hint that the variables remain in active by invoking the +// dummy method "use" (see below). Newer versions of Go are planned to change the mechanism to no longer +// require unsafe pointers. 
+// +// If you add or modify methods, ENSURE protection of local variables through the "use" builtin to inform +// the garbage collector the variables remain in use if: +// +// -- The value is not a pointer (e.g., int32, struct) +// -- The value is not referenced by the method after passing the pointer to Windows +// +// See http://golang.org/doc/go1.3. +//=========================================================================================================== + +var ( + kernel32DLL = syscall.NewLazyDLL("kernel32.dll") + + getConsoleCursorInfoProc = kernel32DLL.NewProc("GetConsoleCursorInfo") + setConsoleCursorInfoProc = kernel32DLL.NewProc("SetConsoleCursorInfo") + setConsoleCursorPositionProc = kernel32DLL.NewProc("SetConsoleCursorPosition") + setConsoleModeProc = kernel32DLL.NewProc("SetConsoleMode") + getConsoleScreenBufferInfoProc = kernel32DLL.NewProc("GetConsoleScreenBufferInfo") + setConsoleScreenBufferSizeProc = kernel32DLL.NewProc("SetConsoleScreenBufferSize") + scrollConsoleScreenBufferProc = kernel32DLL.NewProc("ScrollConsoleScreenBufferA") + setConsoleTextAttributeProc = kernel32DLL.NewProc("SetConsoleTextAttribute") + setConsoleWindowInfoProc = kernel32DLL.NewProc("SetConsoleWindowInfo") + writeConsoleOutputProc = kernel32DLL.NewProc("WriteConsoleOutputW") + readConsoleInputProc = kernel32DLL.NewProc("ReadConsoleInputW") + waitForSingleObjectProc = kernel32DLL.NewProc("WaitForSingleObject") +) + +// Windows Console constants +const ( + // Console modes + // See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx. + ENABLE_PROCESSED_INPUT = 0x0001 + ENABLE_LINE_INPUT = 0x0002 + ENABLE_ECHO_INPUT = 0x0004 + ENABLE_WINDOW_INPUT = 0x0008 + ENABLE_MOUSE_INPUT = 0x0010 + ENABLE_INSERT_MODE = 0x0020 + ENABLE_QUICK_EDIT_MODE = 0x0040 + ENABLE_EXTENDED_FLAGS = 0x0080 + ENABLE_AUTO_POSITION = 0x0100 + ENABLE_VIRTUAL_TERMINAL_INPUT = 0x0200 + + ENABLE_PROCESSED_OUTPUT = 0x0001 + ENABLE_WRAP_AT_EOL_OUTPUT = 0x0002 + ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004 + DISABLE_NEWLINE_AUTO_RETURN = 0x0008 + ENABLE_LVB_GRID_WORLDWIDE = 0x0010 + + // Character attributes + // Note: + // -- The attributes are combined to produce various colors (e.g., Blue + Green will create Cyan). + // Clearing all foreground or background colors results in black; setting all creates white. + // See https://msdn.microsoft.com/en-us/library/windows/desktop/ms682088(v=vs.85).aspx#_win32_character_attributes. + FOREGROUND_BLUE uint16 = 0x0001 + FOREGROUND_GREEN uint16 = 0x0002 + FOREGROUND_RED uint16 = 0x0004 + FOREGROUND_INTENSITY uint16 = 0x0008 + FOREGROUND_MASK uint16 = 0x000F + + BACKGROUND_BLUE uint16 = 0x0010 + BACKGROUND_GREEN uint16 = 0x0020 + BACKGROUND_RED uint16 = 0x0040 + BACKGROUND_INTENSITY uint16 = 0x0080 + BACKGROUND_MASK uint16 = 0x00F0 + + COMMON_LVB_MASK uint16 = 0xFF00 + COMMON_LVB_REVERSE_VIDEO uint16 = 0x4000 + COMMON_LVB_UNDERSCORE uint16 = 0x8000 + + // Input event types + // See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683499(v=vs.85).aspx. 
+ KEY_EVENT = 0x0001 + MOUSE_EVENT = 0x0002 + WINDOW_BUFFER_SIZE_EVENT = 0x0004 + MENU_EVENT = 0x0008 + FOCUS_EVENT = 0x0010 + + // WaitForSingleObject return codes + WAIT_ABANDONED = 0x00000080 + WAIT_FAILED = 0xFFFFFFFF + WAIT_SIGNALED = 0x0000000 + WAIT_TIMEOUT = 0x00000102 + + // WaitForSingleObject wait duration + WAIT_INFINITE = 0xFFFFFFFF + WAIT_ONE_SECOND = 1000 + WAIT_HALF_SECOND = 500 + WAIT_QUARTER_SECOND = 250 +) + +// Windows API Console types +// -- See https://msdn.microsoft.com/en-us/library/windows/desktop/ms682101(v=vs.85).aspx for Console specific types (e.g., COORD) +// -- See https://msdn.microsoft.com/en-us/library/aa296569(v=vs.60).aspx for comments on alignment +type ( + CHAR_INFO struct { + UnicodeChar uint16 + Attributes uint16 + } + + CONSOLE_CURSOR_INFO struct { + Size uint32 + Visible int32 + } + + CONSOLE_SCREEN_BUFFER_INFO struct { + Size COORD + CursorPosition COORD + Attributes uint16 + Window SMALL_RECT + MaximumWindowSize COORD + } + + COORD struct { + X int16 + Y int16 + } + + SMALL_RECT struct { + Left int16 + Top int16 + Right int16 + Bottom int16 + } + + // INPUT_RECORD is a C/C++ union of which KEY_EVENT_RECORD is one case, it is also the largest + // See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683499(v=vs.85).aspx. + INPUT_RECORD struct { + EventType uint16 + KeyEvent KEY_EVENT_RECORD + } + + KEY_EVENT_RECORD struct { + KeyDown int32 + RepeatCount uint16 + VirtualKeyCode uint16 + VirtualScanCode uint16 + UnicodeChar uint16 + ControlKeyState uint32 + } + + WINDOW_BUFFER_SIZE struct { + Size COORD + } +) + +// boolToBOOL converts a Go bool into a Windows int32. +func boolToBOOL(f bool) int32 { + if f { + return int32(1) + } else { + return int32(0) + } +} + +// GetConsoleCursorInfo retrieves information about the size and visiblity of the console cursor. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683163(v=vs.85).aspx. +func GetConsoleCursorInfo(handle uintptr, cursorInfo *CONSOLE_CURSOR_INFO) error { + r1, r2, err := getConsoleCursorInfoProc.Call(handle, uintptr(unsafe.Pointer(cursorInfo)), 0) + return checkError(r1, r2, err) +} + +// SetConsoleCursorInfo sets the size and visiblity of the console cursor. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686019(v=vs.85).aspx. +func SetConsoleCursorInfo(handle uintptr, cursorInfo *CONSOLE_CURSOR_INFO) error { + r1, r2, err := setConsoleCursorInfoProc.Call(handle, uintptr(unsafe.Pointer(cursorInfo)), 0) + return checkError(r1, r2, err) +} + +// SetConsoleCursorPosition location of the console cursor. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686025(v=vs.85).aspx. +func SetConsoleCursorPosition(handle uintptr, coord COORD) error { + r1, r2, err := setConsoleCursorPositionProc.Call(handle, coordToPointer(coord)) + use(coord) + return checkError(r1, r2, err) +} + +// GetConsoleMode gets the console mode for given file descriptor +// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms683167(v=vs.85).aspx. +func GetConsoleMode(handle uintptr) (mode uint32, err error) { + err = syscall.GetConsoleMode(syscall.Handle(handle), &mode) + return mode, err +} + +// SetConsoleMode sets the console mode for given file descriptor +// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx. 
+func SetConsoleMode(handle uintptr, mode uint32) error { + r1, r2, err := setConsoleModeProc.Call(handle, uintptr(mode), 0) + use(mode) + return checkError(r1, r2, err) +} + +// GetConsoleScreenBufferInfo retrieves information about the specified console screen buffer. +// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms683171(v=vs.85).aspx. +func GetConsoleScreenBufferInfo(handle uintptr) (*CONSOLE_SCREEN_BUFFER_INFO, error) { + info := CONSOLE_SCREEN_BUFFER_INFO{} + err := checkError(getConsoleScreenBufferInfoProc.Call(handle, uintptr(unsafe.Pointer(&info)), 0)) + if err != nil { + return nil, err + } + return &info, nil +} + +func ScrollConsoleScreenBuffer(handle uintptr, scrollRect SMALL_RECT, clipRect SMALL_RECT, destOrigin COORD, char CHAR_INFO) error { + r1, r2, err := scrollConsoleScreenBufferProc.Call(handle, uintptr(unsafe.Pointer(&scrollRect)), uintptr(unsafe.Pointer(&clipRect)), coordToPointer(destOrigin), uintptr(unsafe.Pointer(&char))) + use(scrollRect) + use(clipRect) + use(destOrigin) + use(char) + return checkError(r1, r2, err) +} + +// SetConsoleScreenBufferSize sets the size of the console screen buffer. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686044(v=vs.85).aspx. +func SetConsoleScreenBufferSize(handle uintptr, coord COORD) error { + r1, r2, err := setConsoleScreenBufferSizeProc.Call(handle, coordToPointer(coord)) + use(coord) + return checkError(r1, r2, err) +} + +// SetConsoleTextAttribute sets the attributes of characters written to the +// console screen buffer by the WriteFile or WriteConsole function. +// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686047(v=vs.85).aspx. +func SetConsoleTextAttribute(handle uintptr, attribute uint16) error { + r1, r2, err := setConsoleTextAttributeProc.Call(handle, uintptr(attribute), 0) + use(attribute) + return checkError(r1, r2, err) +} + +// SetConsoleWindowInfo sets the size and position of the console screen buffer's window. +// Note that the size and location must be within and no larger than the backing console screen buffer. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686125(v=vs.85).aspx. +func SetConsoleWindowInfo(handle uintptr, isAbsolute bool, rect SMALL_RECT) error { + r1, r2, err := setConsoleWindowInfoProc.Call(handle, uintptr(boolToBOOL(isAbsolute)), uintptr(unsafe.Pointer(&rect))) + use(isAbsolute) + use(rect) + return checkError(r1, r2, err) +} + +// WriteConsoleOutput writes the CHAR_INFOs from the provided buffer to the active console buffer. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms687404(v=vs.85).aspx. +func WriteConsoleOutput(handle uintptr, buffer []CHAR_INFO, bufferSize COORD, bufferCoord COORD, writeRegion *SMALL_RECT) error { + r1, r2, err := writeConsoleOutputProc.Call(handle, uintptr(unsafe.Pointer(&buffer[0])), coordToPointer(bufferSize), coordToPointer(bufferCoord), uintptr(unsafe.Pointer(writeRegion))) + use(buffer) + use(bufferSize) + use(bufferCoord) + return checkError(r1, r2, err) +} + +// ReadConsoleInput reads (and removes) data from the console input buffer. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms684961(v=vs.85).aspx. 
+func ReadConsoleInput(handle uintptr, buffer []INPUT_RECORD, count *uint32) error { + r1, r2, err := readConsoleInputProc.Call(handle, uintptr(unsafe.Pointer(&buffer[0])), uintptr(len(buffer)), uintptr(unsafe.Pointer(count))) + use(buffer) + return checkError(r1, r2, err) +} + +// WaitForSingleObject waits for the passed handle to be signaled. +// It returns true if the handle was signaled; false otherwise. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms687032(v=vs.85).aspx. +func WaitForSingleObject(handle uintptr, msWait uint32) (bool, error) { + r1, _, err := waitForSingleObjectProc.Call(handle, uintptr(uint32(msWait))) + switch r1 { + case WAIT_ABANDONED, WAIT_TIMEOUT: + return false, nil + case WAIT_SIGNALED: + return true, nil + } + use(msWait) + return false, err +} + +// String helpers +func (info CONSOLE_SCREEN_BUFFER_INFO) String() string { + return fmt.Sprintf("Size(%v) Cursor(%v) Window(%v) Max(%v)", info.Size, info.CursorPosition, info.Window, info.MaximumWindowSize) +} + +func (coord COORD) String() string { + return fmt.Sprintf("%v,%v", coord.X, coord.Y) +} + +func (rect SMALL_RECT) String() string { + return fmt.Sprintf("(%v,%v),(%v,%v)", rect.Left, rect.Top, rect.Right, rect.Bottom) +} + +// checkError evaluates the results of a Windows API call and returns the error if it failed. +func checkError(r1, r2 uintptr, err error) error { + // Windows APIs return non-zero to indicate success + if r1 != 0 { + return nil + } + + // Return the error if provided, otherwise default to EINVAL + if err != nil { + return err + } + return syscall.EINVAL +} + +// coordToPointer converts a COORD into a uintptr (by fooling the type system). +func coordToPointer(c COORD) uintptr { + // Note: This code assumes the two SHORTs are correctly laid out; the "cast" to uint32 is just to get a pointer to pass. + return uintptr(*((*uint32)(unsafe.Pointer(&c)))) +} + +// use is a no-op, but the compiler cannot see that it is. +// Calling use(p) ensures that p is kept live until that point. +func use(p interface{}) {} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/attr_translation.go b/vendor/github.com/Azure/go-ansiterm/winterm/attr_translation.go new file mode 100644 index 000000000..cbec8f728 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/winterm/attr_translation.go @@ -0,0 +1,100 @@ +// +build windows + +package winterm + +import "github.com/Azure/go-ansiterm" + +const ( + FOREGROUND_COLOR_MASK = FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE + BACKGROUND_COLOR_MASK = BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE +) + +// collectAnsiIntoWindowsAttributes modifies the passed Windows text mode flags to reflect the +// request represented by the passed ANSI mode. 
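WaitForSingleObject and ReadConsoleInput together allow polling for console input without blocking indefinitely. A hedged sketch of that pattern (illustrative only; it assumes stdin is a genuine console input handle):

```go
// +build windows

package main

import (
	"fmt"
	"os"

	"github.com/Azure/go-ansiterm/winterm"
)

// pollKeyEvents waits up to one second for console input and prints any key-down events.
func pollKeyEvents() error {
	fd := os.Stdin.Fd()

	signaled, err := winterm.WaitForSingleObject(fd, winterm.WAIT_ONE_SECOND)
	if err != nil || !signaled {
		return err // timed out (or failed) without any pending input
	}

	records := make([]winterm.INPUT_RECORD, 16)
	var count uint32
	if err := winterm.ReadConsoleInput(fd, records, &count); err != nil {
		return err
	}

	for _, r := range records[:count] {
		if r.EventType == winterm.KEY_EVENT && r.KeyEvent.KeyDown != 0 {
			fmt.Printf("key: 0x%04x\n", r.KeyEvent.VirtualKeyCode)
		}
	}
	return nil
}

func main() {
	_ = pollKeyEvents()
}
```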
+func collectAnsiIntoWindowsAttributes(windowsMode uint16, inverted bool, baseMode uint16, ansiMode int16) (uint16, bool) { + switch ansiMode { + + // Mode styles + case ansiterm.ANSI_SGR_BOLD: + windowsMode = windowsMode | FOREGROUND_INTENSITY + + case ansiterm.ANSI_SGR_DIM, ansiterm.ANSI_SGR_BOLD_DIM_OFF: + windowsMode &^= FOREGROUND_INTENSITY + + case ansiterm.ANSI_SGR_UNDERLINE: + windowsMode = windowsMode | COMMON_LVB_UNDERSCORE + + case ansiterm.ANSI_SGR_REVERSE: + inverted = true + + case ansiterm.ANSI_SGR_REVERSE_OFF: + inverted = false + + case ansiterm.ANSI_SGR_UNDERLINE_OFF: + windowsMode &^= COMMON_LVB_UNDERSCORE + + // Foreground colors + case ansiterm.ANSI_SGR_FOREGROUND_DEFAULT: + windowsMode = (windowsMode &^ FOREGROUND_MASK) | (baseMode & FOREGROUND_MASK) + + case ansiterm.ANSI_SGR_FOREGROUND_BLACK: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) + + case ansiterm.ANSI_SGR_FOREGROUND_RED: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED + + case ansiterm.ANSI_SGR_FOREGROUND_GREEN: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_GREEN + + case ansiterm.ANSI_SGR_FOREGROUND_YELLOW: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_GREEN + + case ansiterm.ANSI_SGR_FOREGROUND_BLUE: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_BLUE + + case ansiterm.ANSI_SGR_FOREGROUND_MAGENTA: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_BLUE + + case ansiterm.ANSI_SGR_FOREGROUND_CYAN: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_GREEN | FOREGROUND_BLUE + + case ansiterm.ANSI_SGR_FOREGROUND_WHITE: + windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE + + // Background colors + case ansiterm.ANSI_SGR_BACKGROUND_DEFAULT: + // Black with no intensity + windowsMode = (windowsMode &^ BACKGROUND_MASK) | (baseMode & BACKGROUND_MASK) + + case ansiterm.ANSI_SGR_BACKGROUND_BLACK: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) + + case ansiterm.ANSI_SGR_BACKGROUND_RED: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED + + case ansiterm.ANSI_SGR_BACKGROUND_GREEN: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_GREEN + + case ansiterm.ANSI_SGR_BACKGROUND_YELLOW: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_GREEN + + case ansiterm.ANSI_SGR_BACKGROUND_BLUE: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_BLUE + + case ansiterm.ANSI_SGR_BACKGROUND_MAGENTA: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_BLUE + + case ansiterm.ANSI_SGR_BACKGROUND_CYAN: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_GREEN | BACKGROUND_BLUE + + case ansiterm.ANSI_SGR_BACKGROUND_WHITE: + windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE + } + + return windowsMode, inverted +} + +// invertAttributes inverts the foreground and background colors of a Windows attributes value +func invertAttributes(windowsMode uint16) uint16 { + return (COMMON_LVB_MASK & windowsMode) | ((FOREGROUND_MASK & windowsMode) << 4) | ((BACKGROUND_MASK & windowsMode) >> 4) +} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/cursor_helpers.go b/vendor/github.com/Azure/go-ansiterm/winterm/cursor_helpers.go new file mode 100644 index 000000000..3ee06ea72 --- /dev/null +++ 
b/vendor/github.com/Azure/go-ansiterm/winterm/cursor_helpers.go @@ -0,0 +1,101 @@ +// +build windows + +package winterm + +const ( + horizontal = iota + vertical +) + +func (h *windowsAnsiEventHandler) getCursorWindow(info *CONSOLE_SCREEN_BUFFER_INFO) SMALL_RECT { + if h.originMode { + sr := h.effectiveSr(info.Window) + return SMALL_RECT{ + Top: sr.top, + Bottom: sr.bottom, + Left: 0, + Right: info.Size.X - 1, + } + } else { + return SMALL_RECT{ + Top: info.Window.Top, + Bottom: info.Window.Bottom, + Left: 0, + Right: info.Size.X - 1, + } + } +} + +// setCursorPosition sets the cursor to the specified position, bounded to the screen size +func (h *windowsAnsiEventHandler) setCursorPosition(position COORD, window SMALL_RECT) error { + position.X = ensureInRange(position.X, window.Left, window.Right) + position.Y = ensureInRange(position.Y, window.Top, window.Bottom) + err := SetConsoleCursorPosition(h.fd, position) + if err != nil { + return err + } + h.logf("Cursor position set: (%d, %d)", position.X, position.Y) + return err +} + +func (h *windowsAnsiEventHandler) moveCursorVertical(param int) error { + return h.moveCursor(vertical, param) +} + +func (h *windowsAnsiEventHandler) moveCursorHorizontal(param int) error { + return h.moveCursor(horizontal, param) +} + +func (h *windowsAnsiEventHandler) moveCursor(moveMode int, param int) error { + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + position := info.CursorPosition + switch moveMode { + case horizontal: + position.X += int16(param) + case vertical: + position.Y += int16(param) + } + + if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil { + return err + } + + return nil +} + +func (h *windowsAnsiEventHandler) moveCursorLine(param int) error { + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + position := info.CursorPosition + position.X = 0 + position.Y += int16(param) + + if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil { + return err + } + + return nil +} + +func (h *windowsAnsiEventHandler) moveCursorColumn(param int) error { + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + position := info.CursorPosition + position.X = int16(param) - 1 + + if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil { + return err + } + + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/erase_helpers.go b/vendor/github.com/Azure/go-ansiterm/winterm/erase_helpers.go new file mode 100644 index 000000000..244b5fa25 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/winterm/erase_helpers.go @@ -0,0 +1,84 @@ +// +build windows + +package winterm + +import "github.com/Azure/go-ansiterm" + +func (h *windowsAnsiEventHandler) clearRange(attributes uint16, fromCoord COORD, toCoord COORD) error { + // Ignore an invalid (negative area) request + if toCoord.Y < fromCoord.Y { + return nil + } + + var err error + + var coordStart = COORD{} + var coordEnd = COORD{} + + xCurrent, yCurrent := fromCoord.X, fromCoord.Y + xEnd, yEnd := toCoord.X, toCoord.Y + + // Clear any partial initial line + if xCurrent > 0 { + coordStart.X, coordStart.Y = xCurrent, yCurrent + coordEnd.X, coordEnd.Y = xEnd, yCurrent + + err = h.clearRect(attributes, coordStart, coordEnd) + if err != nil { + return err + } + + xCurrent = 0 + yCurrent += 1 + } + + // Clear intervening rectangular section + if yCurrent < yEnd { + coordStart.X, coordStart.Y = xCurrent, yCurrent + coordEnd.X, 
coordEnd.Y = xEnd, yEnd-1 + + err = h.clearRect(attributes, coordStart, coordEnd) + if err != nil { + return err + } + + xCurrent = 0 + yCurrent = yEnd + } + + // Clear remaining partial ending line + coordStart.X, coordStart.Y = xCurrent, yCurrent + coordEnd.X, coordEnd.Y = xEnd, yEnd + + err = h.clearRect(attributes, coordStart, coordEnd) + if err != nil { + return err + } + + return nil +} + +func (h *windowsAnsiEventHandler) clearRect(attributes uint16, fromCoord COORD, toCoord COORD) error { + region := SMALL_RECT{Top: fromCoord.Y, Left: fromCoord.X, Bottom: toCoord.Y, Right: toCoord.X} + width := toCoord.X - fromCoord.X + 1 + height := toCoord.Y - fromCoord.Y + 1 + size := uint32(width) * uint32(height) + + if size <= 0 { + return nil + } + + buffer := make([]CHAR_INFO, size) + + char := CHAR_INFO{ansiterm.FILL_CHARACTER, attributes} + for i := 0; i < int(size); i++ { + buffer[i] = char + } + + err := WriteConsoleOutput(h.fd, buffer, COORD{X: width, Y: height}, COORD{X: 0, Y: 0}, ®ion) + if err != nil { + return err + } + + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/scroll_helper.go b/vendor/github.com/Azure/go-ansiterm/winterm/scroll_helper.go new file mode 100644 index 000000000..2d27fa1d0 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/winterm/scroll_helper.go @@ -0,0 +1,118 @@ +// +build windows + +package winterm + +// effectiveSr gets the current effective scroll region in buffer coordinates +func (h *windowsAnsiEventHandler) effectiveSr(window SMALL_RECT) scrollRegion { + top := addInRange(window.Top, h.sr.top, window.Top, window.Bottom) + bottom := addInRange(window.Top, h.sr.bottom, window.Top, window.Bottom) + if top >= bottom { + top = window.Top + bottom = window.Bottom + } + return scrollRegion{top: top, bottom: bottom} +} + +func (h *windowsAnsiEventHandler) scrollUp(param int) error { + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + sr := h.effectiveSr(info.Window) + return h.scroll(param, sr, info) +} + +func (h *windowsAnsiEventHandler) scrollDown(param int) error { + return h.scrollUp(-param) +} + +func (h *windowsAnsiEventHandler) deleteLines(param int) error { + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + start := info.CursorPosition.Y + sr := h.effectiveSr(info.Window) + // Lines cannot be inserted or deleted outside the scrolling region. + if start >= sr.top && start <= sr.bottom { + sr.top = start + return h.scroll(param, sr, info) + } else { + return nil + } +} + +func (h *windowsAnsiEventHandler) insertLines(param int) error { + return h.deleteLines(-param) +} + +// scroll scrolls the provided scroll region by param lines. The scroll region is in buffer coordinates. 
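clearRect fills a rectangle by writing a buffer of identical CHAR_INFO cells through WriteConsoleOutput. The same technique can be used standalone via the exported wrapper; the sketch below is illustrative and not part of the vendored file:

```go
// +build windows

package main

import (
	"os"

	"github.com/Azure/go-ansiterm/winterm"
)

// fillRect paints the rectangle bounded by (left,top)-(right,bottom) with the
// given character and attributes, mirroring what clearRect does internally.
func fillRect(fd uintptr, top, left, bottom, right int16, ch uint16, attrs uint16) error {
	width := right - left + 1
	height := bottom - top + 1
	if width <= 0 || height <= 0 {
		return nil
	}

	// One CHAR_INFO per cell, all identical.
	buffer := make([]winterm.CHAR_INFO, int(width)*int(height))
	for i := range buffer {
		buffer[i] = winterm.CHAR_INFO{UnicodeChar: ch, Attributes: attrs}
	}

	region := winterm.SMALL_RECT{Top: top, Left: left, Bottom: bottom, Right: right}
	return winterm.WriteConsoleOutput(fd, buffer,
		winterm.COORD{X: width, Y: height}, // dimensions of the source buffer
		winterm.COORD{X: 0, Y: 0},          // top-left cell to copy from
		&region)
}

func main() {
	// Blank a 10x3 box starting at the origin using the current default attributes.
	info, err := winterm.GetConsoleScreenBufferInfo(os.Stdout.Fd())
	if err != nil {
		return
	}
	_ = fillRect(os.Stdout.Fd(), 0, 0, 2, 9, ' ', info.Attributes)
}
```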
+func (h *windowsAnsiEventHandler) scroll(param int, sr scrollRegion, info *CONSOLE_SCREEN_BUFFER_INFO) error { + h.logf("scroll: scrollTop: %d, scrollBottom: %d", sr.top, sr.bottom) + h.logf("scroll: windowTop: %d, windowBottom: %d", info.Window.Top, info.Window.Bottom) + + // Copy from and clip to the scroll region (full buffer width) + scrollRect := SMALL_RECT{ + Top: sr.top, + Bottom: sr.bottom, + Left: 0, + Right: info.Size.X - 1, + } + + // Origin to which area should be copied + destOrigin := COORD{ + X: 0, + Y: sr.top - int16(param), + } + + char := CHAR_INFO{ + UnicodeChar: ' ', + Attributes: h.attributes, + } + + if err := ScrollConsoleScreenBuffer(h.fd, scrollRect, scrollRect, destOrigin, char); err != nil { + return err + } + return nil +} + +func (h *windowsAnsiEventHandler) deleteCharacters(param int) error { + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + return h.scrollLine(param, info.CursorPosition, info) +} + +func (h *windowsAnsiEventHandler) insertCharacters(param int) error { + return h.deleteCharacters(-param) +} + +// scrollLine scrolls a line horizontally starting at the provided position by a number of columns. +func (h *windowsAnsiEventHandler) scrollLine(columns int, position COORD, info *CONSOLE_SCREEN_BUFFER_INFO) error { + // Copy from and clip to the scroll region (full buffer width) + scrollRect := SMALL_RECT{ + Top: position.Y, + Bottom: position.Y, + Left: position.X, + Right: info.Size.X - 1, + } + + // Origin to which area should be copied + destOrigin := COORD{ + X: position.X - int16(columns), + Y: position.Y, + } + + char := CHAR_INFO{ + UnicodeChar: ' ', + Attributes: h.attributes, + } + + if err := ScrollConsoleScreenBuffer(h.fd, scrollRect, scrollRect, destOrigin, char); err != nil { + return err + } + return nil +} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/utilities.go b/vendor/github.com/Azure/go-ansiterm/winterm/utilities.go new file mode 100644 index 000000000..afa7635d7 --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/winterm/utilities.go @@ -0,0 +1,9 @@ +// +build windows + +package winterm + +// AddInRange increments a value by the passed quantity while ensuring the values +// always remain within the supplied min / max range. 
+func addInRange(n int16, increment int16, min int16, max int16) int16 { + return ensureInRange(n+increment, min, max) +} diff --git a/vendor/github.com/Azure/go-ansiterm/winterm/win_event_handler.go b/vendor/github.com/Azure/go-ansiterm/winterm/win_event_handler.go new file mode 100644 index 000000000..2d40fb75a --- /dev/null +++ b/vendor/github.com/Azure/go-ansiterm/winterm/win_event_handler.go @@ -0,0 +1,743 @@ +// +build windows + +package winterm + +import ( + "bytes" + "log" + "os" + "strconv" + + "github.com/Azure/go-ansiterm" +) + +type windowsAnsiEventHandler struct { + fd uintptr + file *os.File + infoReset *CONSOLE_SCREEN_BUFFER_INFO + sr scrollRegion + buffer bytes.Buffer + attributes uint16 + inverted bool + wrapNext bool + drewMarginByte bool + originMode bool + marginByte byte + curInfo *CONSOLE_SCREEN_BUFFER_INFO + curPos COORD + logf func(string, ...interface{}) +} + +type Option func(*windowsAnsiEventHandler) + +func WithLogf(f func(string, ...interface{})) Option { + return func(w *windowsAnsiEventHandler) { + w.logf = f + } +} + +func CreateWinEventHandler(fd uintptr, file *os.File, opts ...Option) ansiterm.AnsiEventHandler { + infoReset, err := GetConsoleScreenBufferInfo(fd) + if err != nil { + return nil + } + + h := &windowsAnsiEventHandler{ + fd: fd, + file: file, + infoReset: infoReset, + attributes: infoReset.Attributes, + } + for _, o := range opts { + o(h) + } + + if isDebugEnv := os.Getenv(ansiterm.LogEnv); isDebugEnv == "1" { + logFile, _ := os.Create("winEventHandler.log") + logger := log.New(logFile, "", log.LstdFlags) + if h.logf != nil { + l := h.logf + h.logf = func(s string, v ...interface{}) { + l(s, v...) + logger.Printf(s, v...) + } + } else { + h.logf = logger.Printf + } + } + + if h.logf == nil { + h.logf = func(string, ...interface{}) {} + } + + return h +} + +type scrollRegion struct { + top int16 + bottom int16 +} + +// simulateLF simulates a LF or CR+LF by scrolling if necessary to handle the +// current cursor position and scroll region settings, in which case it returns +// true. If no special handling is necessary, then it does nothing and returns +// false. +// +// In the false case, the caller should ensure that a carriage return +// and line feed are inserted or that the text is otherwise wrapped. +func (h *windowsAnsiEventHandler) simulateLF(includeCR bool) (bool, error) { + if h.wrapNext { + if err := h.Flush(); err != nil { + return false, err + } + h.clearWrap() + } + pos, info, err := h.getCurrentInfo() + if err != nil { + return false, err + } + sr := h.effectiveSr(info.Window) + if pos.Y == sr.bottom { + // Scrolling is necessary. Let Windows automatically scroll if the scrolling region + // is the full window. + if sr.top == info.Window.Top && sr.bottom == info.Window.Bottom { + if includeCR { + pos.X = 0 + h.updatePos(pos) + } + return false, nil + } + + // A custom scroll region is active. Scroll the window manually to simulate + // the LF. + if err := h.Flush(); err != nil { + return false, err + } + h.logf("Simulating LF inside scroll region") + if err := h.scrollUp(1); err != nil { + return false, err + } + if includeCR { + pos.X = 0 + if err := SetConsoleCursorPosition(h.fd, pos); err != nil { + return false, err + } + } + return true, nil + + } else if pos.Y < info.Window.Bottom { + // Let Windows handle the LF. + pos.Y++ + if includeCR { + pos.X = 0 + } + h.updatePos(pos) + return false, nil + } else { + // The cursor is at the bottom of the screen but outside the scroll + // region. Skip the LF. 
+ h.logf("Simulating LF outside scroll region") + if includeCR { + if err := h.Flush(); err != nil { + return false, err + } + pos.X = 0 + if err := SetConsoleCursorPosition(h.fd, pos); err != nil { + return false, err + } + } + return true, nil + } +} + +// executeLF executes a LF without a CR. +func (h *windowsAnsiEventHandler) executeLF() error { + handled, err := h.simulateLF(false) + if err != nil { + return err + } + if !handled { + // Windows LF will reset the cursor column position. Write the LF + // and restore the cursor position. + pos, _, err := h.getCurrentInfo() + if err != nil { + return err + } + h.buffer.WriteByte(ansiterm.ANSI_LINE_FEED) + if pos.X != 0 { + if err := h.Flush(); err != nil { + return err + } + h.logf("Resetting cursor position for LF without CR") + if err := SetConsoleCursorPosition(h.fd, pos); err != nil { + return err + } + } + } + return nil +} + +func (h *windowsAnsiEventHandler) Print(b byte) error { + if h.wrapNext { + h.buffer.WriteByte(h.marginByte) + h.clearWrap() + if _, err := h.simulateLF(true); err != nil { + return err + } + } + pos, info, err := h.getCurrentInfo() + if err != nil { + return err + } + if pos.X == info.Size.X-1 { + h.wrapNext = true + h.marginByte = b + } else { + pos.X++ + h.updatePos(pos) + h.buffer.WriteByte(b) + } + return nil +} + +func (h *windowsAnsiEventHandler) Execute(b byte) error { + switch b { + case ansiterm.ANSI_TAB: + h.logf("Execute(TAB)") + // Move to the next tab stop, but preserve auto-wrap if already set. + if !h.wrapNext { + pos, info, err := h.getCurrentInfo() + if err != nil { + return err + } + pos.X = (pos.X + 8) - pos.X%8 + if pos.X >= info.Size.X { + pos.X = info.Size.X - 1 + } + if err := h.Flush(); err != nil { + return err + } + if err := SetConsoleCursorPosition(h.fd, pos); err != nil { + return err + } + } + return nil + + case ansiterm.ANSI_BEL: + h.buffer.WriteByte(ansiterm.ANSI_BEL) + return nil + + case ansiterm.ANSI_BACKSPACE: + if h.wrapNext { + if err := h.Flush(); err != nil { + return err + } + h.clearWrap() + } + pos, _, err := h.getCurrentInfo() + if err != nil { + return err + } + if pos.X > 0 { + pos.X-- + h.updatePos(pos) + h.buffer.WriteByte(ansiterm.ANSI_BACKSPACE) + } + return nil + + case ansiterm.ANSI_VERTICAL_TAB, ansiterm.ANSI_FORM_FEED: + // Treat as true LF. + return h.executeLF() + + case ansiterm.ANSI_LINE_FEED: + // Simulate a CR and LF for now since there is no way in go-ansiterm + // to tell if the LF should include CR (and more things break when it's + // missing than when it's incorrectly added). 
+ handled, err := h.simulateLF(true) + if handled || err != nil { + return err + } + return h.buffer.WriteByte(ansiterm.ANSI_LINE_FEED) + + case ansiterm.ANSI_CARRIAGE_RETURN: + if h.wrapNext { + if err := h.Flush(); err != nil { + return err + } + h.clearWrap() + } + pos, _, err := h.getCurrentInfo() + if err != nil { + return err + } + if pos.X != 0 { + pos.X = 0 + h.updatePos(pos) + h.buffer.WriteByte(ansiterm.ANSI_CARRIAGE_RETURN) + } + return nil + + default: + return nil + } +} + +func (h *windowsAnsiEventHandler) CUU(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CUU: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.moveCursorVertical(-param) +} + +func (h *windowsAnsiEventHandler) CUD(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CUD: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.moveCursorVertical(param) +} + +func (h *windowsAnsiEventHandler) CUF(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CUF: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.moveCursorHorizontal(param) +} + +func (h *windowsAnsiEventHandler) CUB(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CUB: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.moveCursorHorizontal(-param) +} + +func (h *windowsAnsiEventHandler) CNL(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CNL: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.moveCursorLine(param) +} + +func (h *windowsAnsiEventHandler) CPL(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CPL: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.moveCursorLine(-param) +} + +func (h *windowsAnsiEventHandler) CHA(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CHA: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.moveCursorColumn(param) +} + +func (h *windowsAnsiEventHandler) VPA(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("VPA: [[%d]]", param) + h.clearWrap() + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + window := h.getCursorWindow(info) + position := info.CursorPosition + position.Y = window.Top + int16(param) - 1 + return h.setCursorPosition(position, window) +} + +func (h *windowsAnsiEventHandler) CUP(row int, col int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("CUP: [[%d %d]]", row, col) + h.clearWrap() + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + window := h.getCursorWindow(info) + position := COORD{window.Left + int16(col) - 1, window.Top + int16(row) - 1} + return h.setCursorPosition(position, window) +} + +func (h *windowsAnsiEventHandler) HVP(row int, col int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("HVP: [[%d %d]]", row, col) + h.clearWrap() + return h.CUP(row, col) +} + +func (h *windowsAnsiEventHandler) DECTCEM(visible bool) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("DECTCEM: [%v]", []string{strconv.FormatBool(visible)}) + h.clearWrap() + return nil +} + +func (h *windowsAnsiEventHandler) DECOM(enable bool) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("DECOM: [%v]", []string{strconv.FormatBool(enable)}) + h.clearWrap() + h.originMode = enable + return h.CUP(1, 1) +} + +func (h 
*windowsAnsiEventHandler) DECCOLM(use132 bool) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("DECCOLM: [%v]", []string{strconv.FormatBool(use132)}) + h.clearWrap() + if err := h.ED(2); err != nil { + return err + } + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + targetWidth := int16(80) + if use132 { + targetWidth = 132 + } + if info.Size.X < targetWidth { + if err := SetConsoleScreenBufferSize(h.fd, COORD{targetWidth, info.Size.Y}); err != nil { + h.logf("set buffer failed: %v", err) + return err + } + } + window := info.Window + window.Left = 0 + window.Right = targetWidth - 1 + if err := SetConsoleWindowInfo(h.fd, true, window); err != nil { + h.logf("set window failed: %v", err) + return err + } + if info.Size.X > targetWidth { + if err := SetConsoleScreenBufferSize(h.fd, COORD{targetWidth, info.Size.Y}); err != nil { + h.logf("set buffer failed: %v", err) + return err + } + } + return SetConsoleCursorPosition(h.fd, COORD{0, 0}) +} + +func (h *windowsAnsiEventHandler) ED(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("ED: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + + // [J -- Erases from the cursor to the end of the screen, including the cursor position. + // [1J -- Erases from the beginning of the screen to the cursor, including the cursor position. + // [2J -- Erases the complete display. The cursor does not move. + // Notes: + // -- Clearing the entire buffer, versus just the Window, works best for Windows Consoles + + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + var start COORD + var end COORD + + switch param { + case 0: + start = info.CursorPosition + end = COORD{info.Size.X - 1, info.Size.Y - 1} + + case 1: + start = COORD{0, 0} + end = info.CursorPosition + + case 2: + start = COORD{0, 0} + end = COORD{info.Size.X - 1, info.Size.Y - 1} + } + + err = h.clearRange(h.attributes, start, end) + if err != nil { + return err + } + + // If the whole buffer was cleared, move the window to the top while preserving + // the window-relative cursor position. + if param == 2 { + pos := info.CursorPosition + window := info.Window + pos.Y -= window.Top + window.Bottom -= window.Top + window.Top = 0 + if err := SetConsoleCursorPosition(h.fd, pos); err != nil { + return err + } + if err := SetConsoleWindowInfo(h.fd, true, window); err != nil { + return err + } + } + + return nil +} + +func (h *windowsAnsiEventHandler) EL(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("EL: [%v]", strconv.Itoa(param)) + h.clearWrap() + + // [K -- Erases from the cursor to the end of the line, including the cursor position. + // [1K -- Erases from the beginning of the line to the cursor, including the cursor position. + // [2K -- Erases the complete line. 
+ + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + var start COORD + var end COORD + + switch param { + case 0: + start = info.CursorPosition + end = COORD{info.Size.X, info.CursorPosition.Y} + + case 1: + start = COORD{0, info.CursorPosition.Y} + end = info.CursorPosition + + case 2: + start = COORD{0, info.CursorPosition.Y} + end = COORD{info.Size.X, info.CursorPosition.Y} + } + + err = h.clearRange(h.attributes, start, end) + if err != nil { + return err + } + + return nil +} + +func (h *windowsAnsiEventHandler) IL(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("IL: [%v]", strconv.Itoa(param)) + h.clearWrap() + return h.insertLines(param) +} + +func (h *windowsAnsiEventHandler) DL(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("DL: [%v]", strconv.Itoa(param)) + h.clearWrap() + return h.deleteLines(param) +} + +func (h *windowsAnsiEventHandler) ICH(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("ICH: [%v]", strconv.Itoa(param)) + h.clearWrap() + return h.insertCharacters(param) +} + +func (h *windowsAnsiEventHandler) DCH(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("DCH: [%v]", strconv.Itoa(param)) + h.clearWrap() + return h.deleteCharacters(param) +} + +func (h *windowsAnsiEventHandler) SGR(params []int) error { + if err := h.Flush(); err != nil { + return err + } + strings := []string{} + for _, v := range params { + strings = append(strings, strconv.Itoa(v)) + } + + h.logf("SGR: [%v]", strings) + + if len(params) <= 0 { + h.attributes = h.infoReset.Attributes + h.inverted = false + } else { + for _, attr := range params { + + if attr == ansiterm.ANSI_SGR_RESET { + h.attributes = h.infoReset.Attributes + h.inverted = false + continue + } + + h.attributes, h.inverted = collectAnsiIntoWindowsAttributes(h.attributes, h.inverted, h.infoReset.Attributes, int16(attr)) + } + } + + attributes := h.attributes + if h.inverted { + attributes = invertAttributes(attributes) + } + err := SetConsoleTextAttribute(h.fd, attributes) + if err != nil { + return err + } + + return nil +} + +func (h *windowsAnsiEventHandler) SU(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("SU: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.scrollUp(param) +} + +func (h *windowsAnsiEventHandler) SD(param int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("SD: [%v]", []string{strconv.Itoa(param)}) + h.clearWrap() + return h.scrollDown(param) +} + +func (h *windowsAnsiEventHandler) DA(params []string) error { + h.logf("DA: [%v]", params) + // DA cannot be implemented because it must send data on the VT100 input stream, + // which is not available to go-ansiterm. + return nil +} + +func (h *windowsAnsiEventHandler) DECSTBM(top int, bottom int) error { + if err := h.Flush(); err != nil { + return err + } + h.logf("DECSTBM: [%d, %d]", top, bottom) + + // Windows is 0 indexed, Linux is 1 indexed + h.sr.top = int16(top - 1) + h.sr.bottom = int16(bottom - 1) + + // This command also moves the cursor to the origin. 
+ h.clearWrap() + return h.CUP(1, 1) +} + +func (h *windowsAnsiEventHandler) RI() error { + if err := h.Flush(); err != nil { + return err + } + h.logf("RI: []") + h.clearWrap() + + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + sr := h.effectiveSr(info.Window) + if info.CursorPosition.Y == sr.top { + return h.scrollDown(1) + } + + return h.moveCursorVertical(-1) +} + +func (h *windowsAnsiEventHandler) IND() error { + h.logf("IND: []") + return h.executeLF() +} + +func (h *windowsAnsiEventHandler) Flush() error { + h.curInfo = nil + if h.buffer.Len() > 0 { + h.logf("Flush: [%s]", h.buffer.Bytes()) + if _, err := h.buffer.WriteTo(h.file); err != nil { + return err + } + } + + if h.wrapNext && !h.drewMarginByte { + h.logf("Flush: drawing margin byte '%c'", h.marginByte) + + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return err + } + + charInfo := []CHAR_INFO{{UnicodeChar: uint16(h.marginByte), Attributes: info.Attributes}} + size := COORD{1, 1} + position := COORD{0, 0} + region := SMALL_RECT{Left: info.CursorPosition.X, Top: info.CursorPosition.Y, Right: info.CursorPosition.X, Bottom: info.CursorPosition.Y} + if err := WriteConsoleOutput(h.fd, charInfo, size, position, ®ion); err != nil { + return err + } + h.drewMarginByte = true + } + return nil +} + +// cacheConsoleInfo ensures that the current console screen information has been queried +// since the last call to Flush(). It must be called before accessing h.curInfo or h.curPos. +func (h *windowsAnsiEventHandler) getCurrentInfo() (COORD, *CONSOLE_SCREEN_BUFFER_INFO, error) { + if h.curInfo == nil { + info, err := GetConsoleScreenBufferInfo(h.fd) + if err != nil { + return COORD{}, nil, err + } + h.curInfo = info + h.curPos = info.CursorPosition + } + return h.curPos, h.curInfo, nil +} + +func (h *windowsAnsiEventHandler) updatePos(pos COORD) { + if h.curInfo == nil { + panic("failed to call getCurrentInfo before calling updatePos") + } + h.curPos = pos +} + +// clearWrap clears the state where the cursor is in the margin +// waiting for the next character before wrapping the line. This must +// be done before most operations that act on the cursor. +func (h *windowsAnsiEventHandler) clearWrap() { + h.wrapNext = false + h.drewMarginByte = false +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/README.md b/vendor/github.com/Azure/go-autorest/autorest/adal/README.md index a17cf98c6..7b0c4bc4d 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/README.md +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/README.md @@ -1,19 +1,24 @@ -# Azure Active Directory library for Go +# Azure Active Directory authentication for Go -This project provides a stand alone Azure Active Directory library for Go. The code was extracted -from [go-autorest](https://github.com/Azure/go-autorest/) project, which is used as a base for -[azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go). +This is a standalone package for authenticating with Azure Active +Directory from other Go libraries and applications, in particular the [Azure SDK +for Go](https://github.com/Azure/azure-sdk-for-go). +Note: Despite the package's name it is not related to other "ADAL" libraries +maintained in the [github.com/AzureAD](https://github.com/AzureAD) org. Issues +should be opened in [this repo's](https://github.com/Azure/go-autorest/issues) +or [the SDK's](https://github.com/Azure/azure-sdk-for-go/issues) issue +trackers. 
-## Installation +## Install -``` +```bash go get -u github.com/Azure/go-autorest/autorest/adal ``` ## Usage -An Active Directory application is required in order to use this library. An application can be registered in the [Azure Portal](https://portal.azure.com/) follow these [guidelines](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-integrating-applications) or using the [Azure CLI](https://github.com/Azure/azure-cli). +An Active Directory application is required in order to use this library. An application can be registered in the [Azure Portal](https://portal.azure.com/) by following these [guidelines](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-integrating-applications) or using the [Azure CLI](https://github.com/Azure/azure-cli). ### Register an Azure AD Application with secret @@ -218,6 +223,40 @@ if (err == nil) { } ``` +#### Username password authenticate + +```Go +spt, err := adal.NewServicePrincipalTokenFromUsernamePassword( + oauthConfig, + applicationID, + username, + password, + resource, + callbacks...) + +if (err == nil) { + token := spt.Token +} +``` + +#### Authorization code authenticate + +``` Go +spt, err := adal.NewServicePrincipalTokenFromAuthorizationCode( + oauthConfig, + applicationID, + clientSecret, + authorizationCode, + redirectURI, + resource, + callbacks...) + +err = spt.Refresh() +if (err == nil) { + token := spt.Token +} +``` + ### Command Line Tool A command line tool is available in `cmd/adal.go` that can acquire a token for a given resource. It supports all flows mentioned above. diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/config.go b/vendor/github.com/Azure/go-autorest/autorest/adal/config.go index 12375e0e4..bee5e61dd 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/config.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/config.go @@ -1,5 +1,19 @@ package adal +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "fmt" "net/url" @@ -12,14 +26,30 @@ const ( // OAuthConfig represents the endpoints needed // in OAuth operations type OAuthConfig struct { - AuthorityEndpoint url.URL - AuthorizeEndpoint url.URL - TokenEndpoint url.URL - DeviceCodeEndpoint url.URL + AuthorityEndpoint url.URL `json:"authorityEndpoint"` + AuthorizeEndpoint url.URL `json:"authorizeEndpoint"` + TokenEndpoint url.URL `json:"tokenEndpoint"` + DeviceCodeEndpoint url.URL `json:"deviceCodeEndpoint"` +} + +// IsZero returns true if the OAuthConfig object is zero-initialized. 
+func (oac OAuthConfig) IsZero() bool { + return oac == OAuthConfig{} +} + +func validateStringParam(param, name string) error { + if len(param) == 0 { + return fmt.Errorf("parameter '" + name + "' cannot be empty") + } + return nil } // NewOAuthConfig returns an OAuthConfig with tenant specific urls func NewOAuthConfig(activeDirectoryEndpoint, tenantID string) (*OAuthConfig, error) { + if err := validateStringParam(activeDirectoryEndpoint, "activeDirectoryEndpoint"); err != nil { + return nil, err + } + // it's legal for tenantID to be empty so don't validate it const activeDirectoryEndpointTemplate = "%s/oauth2/%s?api-version=%s" u, err := url.Parse(activeDirectoryEndpoint) if err != nil { diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go b/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go index 6c511f8c8..b38f4c245 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/devicetoken.go @@ -1,5 +1,19 @@ package adal +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + /* This file is largely based on rjw57/oauth2device's code, with the follow differences: * scope -> resource, and only allow a single one diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/persist.go b/vendor/github.com/Azure/go-autorest/autorest/adal/persist.go index 73711c667..9e15f2751 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/persist.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/persist.go @@ -1,5 +1,19 @@ package adal +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "encoding/json" "fmt" diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go b/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go index 7928c971a..0e5ad14d3 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/sender.go @@ -1,5 +1,19 @@ package adal +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "net/http" ) diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/token.go b/vendor/github.com/Azure/go-autorest/autorest/adal/token.go index 559fc6653..eec4dced7 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/adal/token.go +++ b/vendor/github.com/Azure/go-autorest/autorest/adal/token.go @@ -1,26 +1,45 @@ package adal +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( + "context" "crypto/rand" "crypto/rsa" "crypto/sha1" "crypto/x509" "encoding/base64" "encoding/json" + "errors" "fmt" "io/ioutil" + "math" + "net" "net/http" "net/url" "strconv" "strings" + "sync" "time" + "github.com/Azure/go-autorest/autorest/date" "github.com/dgrijalva/jwt-go" ) const ( defaultRefresh = 5 * time.Minute - tokenBaseDate = "1970-01-01T00:00:00Z" // OAuthGrantTypeDeviceCode is the "grant_type" identifier used in device flow OAuthGrantTypeDeviceCode = "device_code" @@ -28,24 +47,36 @@ const ( // OAuthGrantTypeClientCredentials is the "grant_type" identifier used in credential flows OAuthGrantTypeClientCredentials = "client_credentials" + // OAuthGrantTypeUserPass is the "grant_type" identifier used in username and password auth flows + OAuthGrantTypeUserPass = "password" + // OAuthGrantTypeRefreshToken is the "grant_type" identifier used in refresh token flows OAuthGrantTypeRefreshToken = "refresh_token" - // managedIdentitySettingsPath is the path to the MSI Extension settings file (to discover the endpoint) - managedIdentitySettingsPath = "/var/lib/waagent/ManagedIdentity-Settings" + // OAuthGrantTypeAuthorizationCode is the "grant_type" identifier used in authorization code flows + OAuthGrantTypeAuthorizationCode = "authorization_code" + + // metadataHeader is the header required by MSI extension + metadataHeader = "Metadata" + + // msiEndpoint is the well known endpoint for getting MSI authentications tokens + msiEndpoint = "http://169.254.169.254/metadata/identity/oauth2/token" + + // the default number of attempts to refresh an MSI authentication token + defaultMaxMSIRefreshAttempts = 5 ) -var expirationBase time.Time - -func init() { - expirationBase, _ = time.Parse(time.RFC3339, tokenBaseDate) -} - // OAuthTokenProvider is an interface which should be implemented by an access token retriever type OAuthTokenProvider interface { OAuthToken() string } +// TokenRefreshError is an interface used by errors returned during token refresh. 
+type TokenRefreshError interface { + error + Response() *http.Response +} + // Refresher is an interface for token refresh functionality type Refresher interface { Refresh() error @@ -53,6 +84,13 @@ type Refresher interface { EnsureFresh() error } +// RefresherWithContext is an interface for token refresh functionality +type RefresherWithContext interface { + RefreshWithContext(ctx context.Context) error + RefreshExchangeWithContext(ctx context.Context, resource string) error + EnsureFreshWithContext(ctx context.Context) error +} + // TokenRefreshCallback is the type representing callbacks that will be called after // a successful token refresh type TokenRefreshCallback func(Token) error @@ -70,13 +108,21 @@ type Token struct { Type string `json:"token_type"` } +// IsZero returns true if the token object is zero-initialized. +func (t Token) IsZero() bool { + return t == Token{} +} + // Expires returns the time.Time when the Token expires. func (t Token) Expires() time.Time { s, err := strconv.Atoi(t.ExpiresOn) if err != nil { s = -3600 } - return expirationBase.Add(time.Duration(s) * time.Second).UTC() + + expiration := date.NewUnixTimeFromSeconds(float64(s)) + + return time.Time(expiration).UTC() } // IsExpired returns true if the Token is expired, false otherwise. @@ -95,6 +141,12 @@ func (t *Token) OAuthToken() string { return t.AccessToken } +// ServicePrincipalSecret is an interface that allows various secret mechanism to fill the form +// that is submitted when acquiring an oAuth token. +type ServicePrincipalSecret interface { + SetAuthenticationValues(spt *ServicePrincipalToken, values *url.Values) error +} + // ServicePrincipalNoSecret represents a secret type that contains no secret // meaning it is not valid for fetching a fresh token. This is used by Manual type ServicePrincipalNoSecret struct { @@ -106,15 +158,19 @@ func (noSecret *ServicePrincipalNoSecret) SetAuthenticationValues(spt *ServicePr return fmt.Errorf("Manually created ServicePrincipalToken does not contain secret material to retrieve a new access token") } -// ServicePrincipalSecret is an interface that allows various secret mechanism to fill the form -// that is submitted when acquiring an oAuth token. -type ServicePrincipalSecret interface { - SetAuthenticationValues(spt *ServicePrincipalToken, values *url.Values) error +// MarshalJSON implements the json.Marshaler interface. +func (noSecret ServicePrincipalNoSecret) MarshalJSON() ([]byte, error) { + type tokenType struct { + Type string `json:"type"` + } + return json.Marshal(tokenType{ + Type: "ServicePrincipalNoSecret", + }) } // ServicePrincipalTokenSecret implements ServicePrincipalSecret for client_secret type authorization. type ServicePrincipalTokenSecret struct { - ClientSecret string + ClientSecret string `json:"value"` } // SetAuthenticationValues is a method of the interface ServicePrincipalSecret. @@ -124,23 +180,24 @@ func (tokenSecret *ServicePrincipalTokenSecret) SetAuthenticationValues(spt *Ser return nil } +// MarshalJSON implements the json.Marshaler interface. +func (tokenSecret ServicePrincipalTokenSecret) MarshalJSON() ([]byte, error) { + type tokenType struct { + Type string `json:"type"` + Value string `json:"value"` + } + return json.Marshal(tokenType{ + Type: "ServicePrincipalTokenSecret", + Value: tokenSecret.ClientSecret, + }) +} + // ServicePrincipalCertificateSecret implements ServicePrincipalSecret for generic RSA cert auth with signed JWTs. 
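Errors produced while refreshing a token may implement TokenRefreshError, which carries the HTTP response from the token endpoint. A caller can use a type assertion to surface that detail; a small sketch, assuming an already constructed *adal.ServicePrincipalToken:

```go
package example

import (
	"log"

	"github.com/Azure/go-autorest/autorest/adal"
)

// refreshWithDiagnostics refreshes spt and, when the refresh failure carries
// the token endpoint's response, logs the HTTP status it returned.
func refreshWithDiagnostics(spt *adal.ServicePrincipalToken) error {
	err := spt.Refresh()
	if err == nil {
		return nil
	}
	if tre, ok := err.(adal.TokenRefreshError); ok && tre.Response() != nil {
		log.Printf("token endpoint returned HTTP %d", tre.Response().StatusCode)
	}
	return err
}
```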
type ServicePrincipalCertificateSecret struct { Certificate *x509.Certificate PrivateKey *rsa.PrivateKey } -// ServicePrincipalMSISecret implements ServicePrincipalSecret for machines running the MSI Extension. -type ServicePrincipalMSISecret struct { -} - -// SetAuthenticationValues is a method of the interface ServicePrincipalSecret. -// MSI extension requires the authority field to be set to the real tenant authority endpoint -func (msiSecret *ServicePrincipalMSISecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error { - v.Set("authority", spt.oauthConfig.AuthorityEndpoint.String()) - return nil -} - // SignJwt returns the JWT signed with the certificate's private key. func (secret *ServicePrincipalCertificateSecret) SignJwt(spt *ServicePrincipalToken) (string, error) { hasher := sha1.New() @@ -161,9 +218,9 @@ func (secret *ServicePrincipalCertificateSecret) SignJwt(spt *ServicePrincipalTo token := jwt.New(jwt.SigningMethodRS256) token.Header["x5t"] = thumbprint token.Claims = jwt.MapClaims{ - "aud": spt.oauthConfig.TokenEndpoint.String(), - "iss": spt.clientID, - "sub": spt.clientID, + "aud": spt.inner.OauthConfig.TokenEndpoint.String(), + "iss": spt.inner.ClientID, + "sub": spt.inner.ClientID, "jti": base64.URLEncoding.EncodeToString(jti), "nbf": time.Now().Unix(), "exp": time.Now().Add(time.Hour * 24).Unix(), @@ -186,30 +243,184 @@ func (secret *ServicePrincipalCertificateSecret) SetAuthenticationValues(spt *Se return nil } +// MarshalJSON implements the json.Marshaler interface. +func (secret ServicePrincipalCertificateSecret) MarshalJSON() ([]byte, error) { + return nil, errors.New("marshalling ServicePrincipalCertificateSecret is not supported") +} + +// ServicePrincipalMSISecret implements ServicePrincipalSecret for machines running the MSI Extension. +type ServicePrincipalMSISecret struct { +} + +// SetAuthenticationValues is a method of the interface ServicePrincipalSecret. +func (msiSecret *ServicePrincipalMSISecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error { + return nil +} + +// MarshalJSON implements the json.Marshaler interface. +func (msiSecret ServicePrincipalMSISecret) MarshalJSON() ([]byte, error) { + return nil, errors.New("marshalling ServicePrincipalMSISecret is not supported") +} + +// ServicePrincipalUsernamePasswordSecret implements ServicePrincipalSecret for username and password auth. +type ServicePrincipalUsernamePasswordSecret struct { + Username string `json:"username"` + Password string `json:"password"` +} + +// SetAuthenticationValues is a method of the interface ServicePrincipalSecret. +func (secret *ServicePrincipalUsernamePasswordSecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error { + v.Set("username", secret.Username) + v.Set("password", secret.Password) + return nil +} + +// MarshalJSON implements the json.Marshaler interface. +func (secret ServicePrincipalUsernamePasswordSecret) MarshalJSON() ([]byte, error) { + type tokenType struct { + Type string `json:"type"` + Username string `json:"username"` + Password string `json:"password"` + } + return json.Marshal(tokenType{ + Type: "ServicePrincipalUsernamePasswordSecret", + Username: secret.Username, + Password: secret.Password, + }) +} + +// ServicePrincipalAuthorizationCodeSecret implements ServicePrincipalSecret for authorization code auth. 
+type ServicePrincipalAuthorizationCodeSecret struct { + ClientSecret string `json:"value"` + AuthorizationCode string `json:"authCode"` + RedirectURI string `json:"redirect"` +} + +// SetAuthenticationValues is a method of the interface ServicePrincipalSecret. +func (secret *ServicePrincipalAuthorizationCodeSecret) SetAuthenticationValues(spt *ServicePrincipalToken, v *url.Values) error { + v.Set("code", secret.AuthorizationCode) + v.Set("client_secret", secret.ClientSecret) + v.Set("redirect_uri", secret.RedirectURI) + return nil +} + +// MarshalJSON implements the json.Marshaler interface. +func (secret ServicePrincipalAuthorizationCodeSecret) MarshalJSON() ([]byte, error) { + type tokenType struct { + Type string `json:"type"` + Value string `json:"value"` + AuthCode string `json:"authCode"` + Redirect string `json:"redirect"` + } + return json.Marshal(tokenType{ + Type: "ServicePrincipalAuthorizationCodeSecret", + Value: secret.ClientSecret, + AuthCode: secret.AuthorizationCode, + Redirect: secret.RedirectURI, + }) +} + // ServicePrincipalToken encapsulates a Token created for a Service Principal. type ServicePrincipalToken struct { - Token - - secret ServicePrincipalSecret - oauthConfig OAuthConfig - clientID string - resource string - autoRefresh bool - refreshWithin time.Duration - sender Sender - + inner servicePrincipalToken + refreshLock *sync.RWMutex + sender Sender refreshCallbacks []TokenRefreshCallback + // MaxMSIRefreshAttempts is the maximum number of attempts to refresh an MSI token. + MaxMSIRefreshAttempts int +} + +// MarshalTokenJSON returns the marshalled inner token. +func (spt ServicePrincipalToken) MarshalTokenJSON() ([]byte, error) { + return json.Marshal(spt.inner.Token) +} + +// SetRefreshCallbacks replaces any existing refresh callbacks with the specified callbacks. +func (spt *ServicePrincipalToken) SetRefreshCallbacks(callbacks []TokenRefreshCallback) { + spt.refreshCallbacks = callbacks +} + +// MarshalJSON implements the json.Marshaler interface. +func (spt ServicePrincipalToken) MarshalJSON() ([]byte, error) { + return json.Marshal(spt.inner) +} + +// UnmarshalJSON implements the json.Unmarshaler interface. 
+func (spt *ServicePrincipalToken) UnmarshalJSON(data []byte) error { + // need to determine the token type + raw := map[string]interface{}{} + err := json.Unmarshal(data, &raw) + if err != nil { + return err + } + secret := raw["secret"].(map[string]interface{}) + switch secret["type"] { + case "ServicePrincipalNoSecret": + spt.inner.Secret = &ServicePrincipalNoSecret{} + case "ServicePrincipalTokenSecret": + spt.inner.Secret = &ServicePrincipalTokenSecret{} + case "ServicePrincipalCertificateSecret": + return errors.New("unmarshalling ServicePrincipalCertificateSecret is not supported") + case "ServicePrincipalMSISecret": + return errors.New("unmarshalling ServicePrincipalMSISecret is not supported") + case "ServicePrincipalUsernamePasswordSecret": + spt.inner.Secret = &ServicePrincipalUsernamePasswordSecret{} + case "ServicePrincipalAuthorizationCodeSecret": + spt.inner.Secret = &ServicePrincipalAuthorizationCodeSecret{} + default: + return fmt.Errorf("unrecognized token type '%s'", secret["type"]) + } + err = json.Unmarshal(data, &spt.inner) + if err != nil { + return err + } + spt.refreshLock = &sync.RWMutex{} + spt.sender = &http.Client{} + return nil +} + +// internal type used for marshalling/unmarshalling +type servicePrincipalToken struct { + Token Token `json:"token"` + Secret ServicePrincipalSecret `json:"secret"` + OauthConfig OAuthConfig `json:"oauth"` + ClientID string `json:"clientID"` + Resource string `json:"resource"` + AutoRefresh bool `json:"autoRefresh"` + RefreshWithin time.Duration `json:"refreshWithin"` +} + +func validateOAuthConfig(oac OAuthConfig) error { + if oac.IsZero() { + return fmt.Errorf("parameter 'oauthConfig' cannot be zero-initialized") + } + return nil } // NewServicePrincipalTokenWithSecret create a ServicePrincipalToken using the supplied ServicePrincipalSecret implementation. 
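The MarshalJSON/UnmarshalJSON pair above makes a ServicePrincipalToken persistable, including a tag identifying its secret type, while certificate and MSI secrets deliberately refuse to marshal. A minimal round-trip sketch (illustrative only):

```go
package example

import (
	"encoding/json"

	"github.com/Azure/go-autorest/autorest/adal"
)

// saveToken serializes a ServicePrincipalToken, including the tagged secret
// type, so it can be restored later. Certificate and MSI secrets cannot be
// round-tripped; their MarshalJSON implementations return an error.
func saveToken(spt *adal.ServicePrincipalToken) ([]byte, error) {
	return json.Marshal(spt)
}

// loadToken restores a ServicePrincipalToken persisted by saveToken. The
// custom UnmarshalJSON re-creates the secret, refresh lock, and default sender.
func loadToken(data []byte) (*adal.ServicePrincipalToken, error) {
	spt := &adal.ServicePrincipalToken{}
	if err := json.Unmarshal(data, spt); err != nil {
		return nil, err
	}
	return spt, nil
}
```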
func NewServicePrincipalTokenWithSecret(oauthConfig OAuthConfig, id string, resource string, secret ServicePrincipalSecret, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateOAuthConfig(oauthConfig); err != nil { + return nil, err + } + if err := validateStringParam(id, "id"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + if secret == nil { + return nil, fmt.Errorf("parameter 'secret' cannot be nil") + } spt := &ServicePrincipalToken{ - oauthConfig: oauthConfig, - secret: secret, - clientID: id, - resource: resource, - autoRefresh: true, - refreshWithin: defaultRefresh, + inner: servicePrincipalToken{ + OauthConfig: oauthConfig, + Secret: secret, + ClientID: id, + Resource: resource, + AutoRefresh: true, + RefreshWithin: defaultRefresh, + }, + refreshLock: &sync.RWMutex{}, sender: &http.Client{}, refreshCallbacks: callbacks, } @@ -218,6 +429,18 @@ func NewServicePrincipalTokenWithSecret(oauthConfig OAuthConfig, id string, reso // NewServicePrincipalTokenFromManualToken creates a ServicePrincipalToken using the supplied token func NewServicePrincipalTokenFromManualToken(oauthConfig OAuthConfig, clientID string, resource string, token Token, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateOAuthConfig(oauthConfig); err != nil { + return nil, err + } + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + if token.IsZero() { + return nil, fmt.Errorf("parameter 'token' cannot be zero-initialized") + } spt, err := NewServicePrincipalTokenWithSecret( oauthConfig, clientID, @@ -228,7 +451,39 @@ func NewServicePrincipalTokenFromManualToken(oauthConfig OAuthConfig, clientID s return nil, err } - spt.Token = token + spt.inner.Token = token + + return spt, nil +} + +// NewServicePrincipalTokenFromManualTokenSecret creates a ServicePrincipalToken using the supplied token and secret +func NewServicePrincipalTokenFromManualTokenSecret(oauthConfig OAuthConfig, clientID string, resource string, token Token, secret ServicePrincipalSecret, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateOAuthConfig(oauthConfig); err != nil { + return nil, err + } + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + if secret == nil { + return nil, fmt.Errorf("parameter 'secret' cannot be nil") + } + if token.IsZero() { + return nil, fmt.Errorf("parameter 'token' cannot be zero-initialized") + } + spt, err := NewServicePrincipalTokenWithSecret( + oauthConfig, + clientID, + resource, + secret, + callbacks...) + if err != nil { + return nil, err + } + + spt.inner.Token = token return spt, nil } @@ -236,6 +491,18 @@ func NewServicePrincipalTokenFromManualToken(oauthConfig OAuthConfig, clientID s // NewServicePrincipalToken creates a ServicePrincipalToken from the supplied Service Principal // credentials scoped to the named resource. 
func NewServicePrincipalToken(oauthConfig OAuthConfig, clientID string, secret string, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateOAuthConfig(oauthConfig); err != nil { + return nil, err + } + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(secret, "secret"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } return NewServicePrincipalTokenWithSecret( oauthConfig, clientID, @@ -247,8 +514,23 @@ func NewServicePrincipalToken(oauthConfig OAuthConfig, clientID string, secret s ) } -// NewServicePrincipalTokenFromCertificate create a ServicePrincipalToken from the supplied pkcs12 bytes. +// NewServicePrincipalTokenFromCertificate creates a ServicePrincipalToken from the supplied pkcs12 bytes. func NewServicePrincipalTokenFromCertificate(oauthConfig OAuthConfig, clientID string, certificate *x509.Certificate, privateKey *rsa.PrivateKey, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateOAuthConfig(oauthConfig); err != nil { + return nil, err + } + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + if certificate == nil { + return nil, fmt.Errorf("parameter 'certificate' cannot be nil") + } + if privateKey == nil { + return nil, fmt.Errorf("parameter 'privateKey' cannot be nil") + } return NewServicePrincipalTokenWithSecret( oauthConfig, clientID, @@ -261,57 +543,172 @@ func NewServicePrincipalTokenFromCertificate(oauthConfig OAuthConfig, clientID s ) } -// NewServicePrincipalTokenFromMSI creates a ServicePrincipalToken via the MSI VM Extension. -func NewServicePrincipalTokenFromMSI(oauthConfig OAuthConfig, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { - return newServicePrincipalTokenFromMSI(oauthConfig, resource, managedIdentitySettingsPath, callbacks...) +// NewServicePrincipalTokenFromUsernamePassword creates a ServicePrincipalToken from the username and password. 
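Before the username/password constructor defined next, a minimal usage sketch of the client-secret constructor above may help orient readers of this vendored change. It is illustrative only and not part of the diff: the tenant ID, client ID, secret, and resource URI are placeholders, and adal.NewOAuthConfig is assumed to be available from the same vendored package.

```
package main

import (
	"fmt"
	"log"

	"github.com/Azure/go-autorest/autorest/adal"
)

func main() {
	// Placeholder tenant; NewOAuthConfig builds the AAD token endpoints for it (assumed helper).
	oauthConfig, err := adal.NewOAuthConfig("https://login.microsoftonline.com/", "my-tenant-id")
	if err != nil {
		log.Fatal(err)
	}

	// NewServicePrincipalToken validates its inputs and defers to NewServicePrincipalTokenWithSecret.
	spt, err := adal.NewServicePrincipalToken(*oauthConfig, "my-client-id", "my-client-secret", "https://management.azure.com/")
	if err != nil {
		log.Fatal(err)
	}

	// EnsureFresh acquires the token on first use and refreshes it near expiry.
	if err := spt.EnsureFresh(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("token acquired:", spt.OAuthToken() != "")
}
```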
+func NewServicePrincipalTokenFromUsernamePassword(oauthConfig OAuthConfig, clientID string, username string, password string, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateOAuthConfig(oauthConfig); err != nil { + return nil, err + } + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(username, "username"); err != nil { + return nil, err + } + if err := validateStringParam(password, "password"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + return NewServicePrincipalTokenWithSecret( + oauthConfig, + clientID, + resource, + &ServicePrincipalUsernamePasswordSecret{ + Username: username, + Password: password, + }, + callbacks..., + ) } -func newServicePrincipalTokenFromMSI(oauthConfig OAuthConfig, resource, settingsPath string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { - // Read MSI settings - bytes, err := ioutil.ReadFile(settingsPath) - if err != nil { +// NewServicePrincipalTokenFromAuthorizationCode creates a ServicePrincipalToken from the +func NewServicePrincipalTokenFromAuthorizationCode(oauthConfig OAuthConfig, clientID string, clientSecret string, authorizationCode string, redirectURI string, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + + if err := validateOAuthConfig(oauthConfig); err != nil { return nil, err } - msiSettings := struct { - URL string `json:"url"` - }{} - err = json.Unmarshal(bytes, &msiSettings) - if err != nil { + if err := validateStringParam(clientID, "clientID"); err != nil { + return nil, err + } + if err := validateStringParam(clientSecret, "clientSecret"); err != nil { + return nil, err + } + if err := validateStringParam(authorizationCode, "authorizationCode"); err != nil { + return nil, err + } + if err := validateStringParam(redirectURI, "redirectURI"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { return nil, err } + return NewServicePrincipalTokenWithSecret( + oauthConfig, + clientID, + resource, + &ServicePrincipalAuthorizationCodeSecret{ + ClientSecret: clientSecret, + AuthorizationCode: authorizationCode, + RedirectURI: redirectURI, + }, + callbacks..., + ) +} + +// GetMSIVMEndpoint gets the MSI endpoint on Virtual Machines. +func GetMSIVMEndpoint() (string, error) { + return msiEndpoint, nil +} + +// NewServicePrincipalTokenFromMSI creates a ServicePrincipalToken via the MSI VM Extension. +// It will use the system assigned identity when creating the token. +func NewServicePrincipalTokenFromMSI(msiEndpoint, resource string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + return newServicePrincipalTokenFromMSI(msiEndpoint, resource, nil, callbacks...) +} + +// NewServicePrincipalTokenFromMSIWithUserAssignedID creates a ServicePrincipalToken via the MSI VM Extension. +// It will use the specified user assigned identity when creating the token. +func NewServicePrincipalTokenFromMSIWithUserAssignedID(msiEndpoint, resource string, userAssignedID string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + return newServicePrincipalTokenFromMSI(msiEndpoint, resource, &userAssignedID, callbacks...) 
+} + +func newServicePrincipalTokenFromMSI(msiEndpoint, resource string, userAssignedID *string, callbacks ...TokenRefreshCallback) (*ServicePrincipalToken, error) { + if err := validateStringParam(msiEndpoint, "msiEndpoint"); err != nil { + return nil, err + } + if err := validateStringParam(resource, "resource"); err != nil { + return nil, err + } + if userAssignedID != nil { + if err := validateStringParam(*userAssignedID, "userAssignedID"); err != nil { + return nil, err + } + } // We set the oauth config token endpoint to be MSI's endpoint - // We leave the authority as-is so MSI can POST it with the token request - msiEndpointURL, err := url.Parse(msiSettings.URL) + msiEndpointURL, err := url.Parse(msiEndpoint) if err != nil { return nil, err } - msiTokenEndpointURL, err := msiEndpointURL.Parse("/oauth2/token") - if err != nil { - return nil, err + v := url.Values{} + v.Set("resource", resource) + v.Set("api-version", "2018-02-01") + if userAssignedID != nil { + v.Set("client_id", *userAssignedID) } - - oauthConfig.TokenEndpoint = *msiTokenEndpointURL + msiEndpointURL.RawQuery = v.Encode() spt := &ServicePrincipalToken{ - oauthConfig: oauthConfig, - secret: &ServicePrincipalMSISecret{}, - resource: resource, - autoRefresh: true, - refreshWithin: defaultRefresh, - sender: &http.Client{}, - refreshCallbacks: callbacks, + inner: servicePrincipalToken{ + OauthConfig: OAuthConfig{ + TokenEndpoint: *msiEndpointURL, + }, + Secret: &ServicePrincipalMSISecret{}, + Resource: resource, + AutoRefresh: true, + RefreshWithin: defaultRefresh, + }, + refreshLock: &sync.RWMutex{}, + sender: &http.Client{}, + refreshCallbacks: callbacks, + MaxMSIRefreshAttempts: defaultMaxMSIRefreshAttempts, + } + + if userAssignedID != nil { + spt.inner.ClientID = *userAssignedID } return spt, nil } +// internal type that implements TokenRefreshError +type tokenRefreshError struct { + message string + resp *http.Response +} + +// Error implements the error interface which is part of the TokenRefreshError interface. +func (tre tokenRefreshError) Error() string { + return tre.message +} + +// Response implements the TokenRefreshError interface, it returns the raw HTTP response from the refresh operation. +func (tre tokenRefreshError) Response() *http.Response { + return tre.resp +} + +func newTokenRefreshError(message string, resp *http.Response) TokenRefreshError { + return tokenRefreshError{message: message, resp: resp} +} + // EnsureFresh will refresh the token if it will expire within the refresh window (as set by -// RefreshWithin) and autoRefresh flag is on. +// RefreshWithin) and autoRefresh flag is on. This method is safe for concurrent use. func (spt *ServicePrincipalToken) EnsureFresh() error { - if spt.autoRefresh && spt.WillExpireIn(spt.refreshWithin) { - return spt.Refresh() + return spt.EnsureFreshWithContext(context.Background()) +} + +// EnsureFreshWithContext will refresh the token if it will expire within the refresh window (as set by +// RefreshWithin) and autoRefresh flag is on. This method is safe for concurrent use. 
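Before EnsureFreshWithContext is defined next, here is a sketch of how the MSI constructors above are typically paired with GetMSIVMEndpoint. It is illustrative only; the resource URI is a placeholder and the code assumes it runs on an Azure VM (or other host) that exposes the IMDS endpoint to a managed identity.

```
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/Azure/go-autorest/autorest/adal"
)

func main() {
	// GetMSIVMEndpoint returns the fixed IMDS token endpoint used above.
	msiEndpoint, err := adal.GetMSIVMEndpoint()
	if err != nil {
		log.Fatal(err)
	}

	// System-assigned identity; the resource URI is a placeholder.
	spt, err := adal.NewServicePrincipalTokenFromMSI(msiEndpoint, "https://management.azure.com/")
	if err != nil {
		log.Fatal(err)
	}
	// For a user-assigned identity, NewServicePrincipalTokenFromMSIWithUserAssignedID
	// takes the identity's client ID as an extra argument.

	// Refresh performs the GET against IMDS (with the retry behaviour added further below).
	if err := spt.Refresh(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("token expires within the hour:", spt.Token().WillExpireIn(time.Hour))
}
```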
+func (spt *ServicePrincipalToken) EnsureFreshWithContext(ctx context.Context) error { + if spt.inner.AutoRefresh && spt.inner.Token.WillExpireIn(spt.inner.RefreshWithin) { + // take the write lock then check to see if the token was already refreshed + spt.refreshLock.Lock() + defer spt.refreshLock.Unlock() + if spt.inner.Token.WillExpireIn(spt.inner.RefreshWithin) { + return spt.refreshInternal(ctx, spt.inner.Resource) + } } return nil } @@ -320,7 +717,7 @@ func (spt *ServicePrincipalToken) EnsureFresh() error { func (spt *ServicePrincipalToken) InvokeRefreshCallbacks(token Token) error { if spt.refreshCallbacks != nil { for _, callback := range spt.refreshCallbacks { - err := callback(spt.Token) + err := callback(spt.inner.Token) if err != nil { return fmt.Errorf("adal: TokenRefreshCallback handler failed. Error = '%v'", err) } @@ -330,50 +727,118 @@ func (spt *ServicePrincipalToken) InvokeRefreshCallbacks(token Token) error { } // Refresh obtains a fresh token for the Service Principal. +// This method is not safe for concurrent use and should be synchronized. func (spt *ServicePrincipalToken) Refresh() error { - return spt.refreshInternal(spt.resource) + return spt.RefreshWithContext(context.Background()) +} + +// RefreshWithContext obtains a fresh token for the Service Principal. +// This method is not safe for concurrent use and should be synchronized. +func (spt *ServicePrincipalToken) RefreshWithContext(ctx context.Context) error { + spt.refreshLock.Lock() + defer spt.refreshLock.Unlock() + return spt.refreshInternal(ctx, spt.inner.Resource) } // RefreshExchange refreshes the token, but for a different resource. +// This method is not safe for concurrent use and should be synchronized. func (spt *ServicePrincipalToken) RefreshExchange(resource string) error { - return spt.refreshInternal(resource) + return spt.RefreshExchangeWithContext(context.Background(), resource) } -func (spt *ServicePrincipalToken) refreshInternal(resource string) error { - v := url.Values{} - v.Set("client_id", spt.clientID) - v.Set("resource", resource) +// RefreshExchangeWithContext refreshes the token, but for a different resource. +// This method is not safe for concurrent use and should be synchronized. 
+func (spt *ServicePrincipalToken) RefreshExchangeWithContext(ctx context.Context, resource string) error { + spt.refreshLock.Lock() + defer spt.refreshLock.Unlock() + return spt.refreshInternal(ctx, resource) +} - if spt.RefreshToken != "" { - v.Set("grant_type", OAuthGrantTypeRefreshToken) - v.Set("refresh_token", spt.RefreshToken) - } else { - v.Set("grant_type", OAuthGrantTypeClientCredentials) - err := spt.secret.SetAuthenticationValues(spt, &v) - if err != nil { - return err - } +func (spt *ServicePrincipalToken) getGrantType() string { + switch spt.inner.Secret.(type) { + case *ServicePrincipalUsernamePasswordSecret: + return OAuthGrantTypeUserPass + case *ServicePrincipalAuthorizationCodeSecret: + return OAuthGrantTypeAuthorizationCode + default: + return OAuthGrantTypeClientCredentials } +} - s := v.Encode() - body := ioutil.NopCloser(strings.NewReader(s)) - req, err := http.NewRequest(http.MethodPost, spt.oauthConfig.TokenEndpoint.String(), body) +func isIMDS(u url.URL) bool { + imds, err := url.Parse(msiEndpoint) + if err != nil { + return false + } + return u.Host == imds.Host && u.Path == imds.Path +} + +func (spt *ServicePrincipalToken) refreshInternal(ctx context.Context, resource string) error { + req, err := http.NewRequest(http.MethodPost, spt.inner.OauthConfig.TokenEndpoint.String(), nil) if err != nil { return fmt.Errorf("adal: Failed to build the refresh request. Error = '%v'", err) } + req = req.WithContext(ctx) + if !isIMDS(spt.inner.OauthConfig.TokenEndpoint) { + v := url.Values{} + v.Set("client_id", spt.inner.ClientID) + v.Set("resource", resource) - req.ContentLength = int64(len(s)) - req.Header.Set(contentType, mimeTypeFormPost) - resp, err := spt.sender.Do(req) + if spt.inner.Token.RefreshToken != "" { + v.Set("grant_type", OAuthGrantTypeRefreshToken) + v.Set("refresh_token", spt.inner.Token.RefreshToken) + // web apps must specify client_secret when refreshing tokens + // see https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-protocols-oauth-code#refreshing-the-access-tokens + if spt.getGrantType() == OAuthGrantTypeAuthorizationCode { + err := spt.inner.Secret.SetAuthenticationValues(spt, &v) + if err != nil { + return err + } + } + } else { + v.Set("grant_type", spt.getGrantType()) + err := spt.inner.Secret.SetAuthenticationValues(spt, &v) + if err != nil { + return err + } + } + + s := v.Encode() + body := ioutil.NopCloser(strings.NewReader(s)) + req.ContentLength = int64(len(s)) + req.Header.Set(contentType, mimeTypeFormPost) + req.Body = body + } + + if _, ok := spt.inner.Secret.(*ServicePrincipalMSISecret); ok { + req.Method = http.MethodGet + req.Header.Set(metadataHeader, "true") + } + + var resp *http.Response + if isIMDS(spt.inner.OauthConfig.TokenEndpoint) { + resp, err = retryForIMDS(spt.sender, req, spt.MaxMSIRefreshAttempts) + } else { + resp, err = spt.sender.Do(req) + } if err != nil { - return fmt.Errorf("adal: Failed to execute the refresh request. Error = '%v'", err) - } - defer resp.Body.Close() - if resp.StatusCode != http.StatusOK { - return fmt.Errorf("adal: Refresh request failed. Status Code = '%d'", resp.StatusCode) + return newTokenRefreshError(fmt.Sprintf("adal: Failed to execute the refresh request. Error = '%v'", err), nil) } + defer resp.Body.Close() rb, err := ioutil.ReadAll(resp.Body) + + if resp.StatusCode != http.StatusOK { + if err != nil { + return newTokenRefreshError(fmt.Sprintf("adal: Refresh request failed. Status Code = '%d'. 
Failed reading response body: %v", resp.StatusCode, err), resp) + } + return newTokenRefreshError(fmt.Sprintf("adal: Refresh request failed. Status Code = '%d'. Response body: %s", resp.StatusCode, string(rb)), resp) + } + + // for the following error cases don't return a TokenRefreshError. the operation succeeded + // but some transient failure happened during deserialization. by returning a generic error + // the retry logic will kick in (we don't retry on TokenRefreshError). + if err != nil { return fmt.Errorf("adal: Failed to read a new service principal token during refresh. Error = '%v'", err) } @@ -386,23 +851,116 @@ func (spt *ServicePrincipalToken) refreshInternal(resource string) error { return fmt.Errorf("adal: Failed to unmarshal the service principal token during refresh. Error = '%v' JSON = '%s'", err, string(rb)) } - spt.Token = token + spt.inner.Token = token return spt.InvokeRefreshCallbacks(token) } +// retry logic specific to retrieving a token from the IMDS endpoint +func retryForIMDS(sender Sender, req *http.Request, maxAttempts int) (resp *http.Response, err error) { + // copied from client.go due to circular dependency + retries := []int{ + http.StatusRequestTimeout, // 408 + http.StatusTooManyRequests, // 429 + http.StatusInternalServerError, // 500 + http.StatusBadGateway, // 502 + http.StatusServiceUnavailable, // 503 + http.StatusGatewayTimeout, // 504 + } + // extra retry status codes specific to IMDS + retries = append(retries, + http.StatusNotFound, + http.StatusGone, + // all remaining 5xx + http.StatusNotImplemented, + http.StatusHTTPVersionNotSupported, + http.StatusVariantAlsoNegotiates, + http.StatusInsufficientStorage, + http.StatusLoopDetected, + http.StatusNotExtended, + http.StatusNetworkAuthenticationRequired) + + // see https://docs.microsoft.com/en-us/azure/active-directory/managed-service-identity/how-to-use-vm-token#retry-guidance + + const maxDelay time.Duration = 60 * time.Second + + attempt := 0 + delay := time.Duration(0) + + for attempt < maxAttempts { + resp, err = sender.Do(req) + // retry on temporary network errors, e.g. transient network failures. + // if we don't receive a response then assume we can't connect to the + // endpoint so we're likely not running on an Azure VM so don't retry. + if (err != nil && !isTemporaryNetworkError(err)) || resp == nil || resp.StatusCode == http.StatusOK || !containsInt(retries, resp.StatusCode) { + return + } + + // perform exponential backoff with a cap. + // must increment attempt before calculating delay. + attempt++ + // the base value of 2 is the "delta backoff" as specified in the guidance doc + delay += (time.Duration(math.Pow(2, float64(attempt))) * time.Second) + if delay > maxDelay { + delay = maxDelay + } + + select { + case <-time.After(delay): + // intentionally left blank + case <-req.Context().Done(): + err = req.Context().Err() + return + } + } + return +} + +// returns true if the specified error is a temporary network error or false if it's not. +// if the error doesn't implement the net.Error interface the return value is true. +func isTemporaryNetworkError(err error) bool { + if netErr, ok := err.(net.Error); !ok || (ok && netErr.Temporary()) { + return true + } + return false +} + +// returns true if slice ints contains the value n +func containsInt(ints []int, n int) bool { + for _, i := range ints { + if i == n { + return true + } + } + return false +} + // SetAutoRefresh enables or disables automatic refreshing of stale tokens. 
func (spt *ServicePrincipalToken) SetAutoRefresh(autoRefresh bool) { - spt.autoRefresh = autoRefresh + spt.inner.AutoRefresh = autoRefresh } // SetRefreshWithin sets the interval within which if the token will expire, EnsureFresh will // refresh the token. func (spt *ServicePrincipalToken) SetRefreshWithin(d time.Duration) { - spt.refreshWithin = d + spt.inner.RefreshWithin = d return } // SetSender sets the http.Client used when obtaining the Service Principal token. An // undecorated http.Client is used by default. func (spt *ServicePrincipalToken) SetSender(s Sender) { spt.sender = s } + +// OAuthToken implements the OAuthTokenProvider interface. It returns the current access token. +func (spt *ServicePrincipalToken) OAuthToken() string { + spt.refreshLock.RLock() + defer spt.refreshLock.RUnlock() + return spt.inner.Token.OAuthToken() +} + +// Token returns a copy of the current token. +func (spt *ServicePrincipalToken) Token() Token { + spt.refreshLock.RLock() + defer spt.refreshLock.RUnlock() + return spt.inner.Token +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/authorization.go b/vendor/github.com/Azure/go-autorest/autorest/authorization.go index 7f4e3d845..77eff45bd 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/authorization.go +++ b/vendor/github.com/Azure/go-autorest/autorest/authorization.go @@ -1,12 +1,37 @@ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "fmt" "net/http" + "net/url" + "strings" "github.com/Azure/go-autorest/autorest/adal" ) +const ( + bearerChallengeHeader = "Www-Authenticate" + bearer = "Bearer" + tenantID = "tenantID" + apiKeyAuthorizerHeader = "Ocp-Apim-Subscription-Key" + bingAPISdkHeader = "X-BingApis-SDK-Client" + golangBingAPISdkHeaderValue = "Go-SDK" +) + // Authorizer is the interface that provides a PrepareDecorator used to supply request // authorization. Most often, the Authorizer decorator runs last so it has access to the full // state of the formed HTTP request. @@ -22,6 +47,53 @@ func (na NullAuthorizer) WithAuthorization() PrepareDecorator { return WithNothing() } +// APIKeyAuthorizer implements API Key authorization. +type APIKeyAuthorizer struct { + headers map[string]interface{} + queryParameters map[string]interface{} +} + +// NewAPIKeyAuthorizerWithHeaders creates an ApiKeyAuthorizer with headers. +func NewAPIKeyAuthorizerWithHeaders(headers map[string]interface{}) *APIKeyAuthorizer { + return NewAPIKeyAuthorizer(headers, nil) +} + +// NewAPIKeyAuthorizerWithQueryParameters creates an ApiKeyAuthorizer with query parameters. +func NewAPIKeyAuthorizerWithQueryParameters(queryParameters map[string]interface{}) *APIKeyAuthorizer { + return NewAPIKeyAuthorizer(nil, queryParameters) +} + +// NewAPIKeyAuthorizer creates an ApiKeyAuthorizer with headers. 
+func NewAPIKeyAuthorizer(headers map[string]interface{}, queryParameters map[string]interface{}) *APIKeyAuthorizer { + return &APIKeyAuthorizer{headers: headers, queryParameters: queryParameters} +} + +// WithAuthorization returns a PrepareDecorator that adds HTTP headers and query parameters +func (aka *APIKeyAuthorizer) WithAuthorization() PrepareDecorator { + return func(p Preparer) Preparer { + return DecoratePreparer(p, WithHeaders(aka.headers), WithQueryParameters(aka.queryParameters)) + } +} + +// CognitiveServicesAuthorizer implements authorization for Cognitive Services. +type CognitiveServicesAuthorizer struct { + subscriptionKey string +} + +// NewCognitiveServicesAuthorizer creates a CognitiveServicesAuthorizer using the supplied subscription key. +func NewCognitiveServicesAuthorizer(subscriptionKey string) *CognitiveServicesAuthorizer { + return &CognitiveServicesAuthorizer{subscriptionKey: subscriptionKey} +} + +// WithAuthorization returns a PrepareDecorator that adds the Cognitive Services subscription key and Bing SDK headers. +func (csa *CognitiveServicesAuthorizer) WithAuthorization() PrepareDecorator { + headers := make(map[string]interface{}) + headers[apiKeyAuthorizerHeader] = csa.subscriptionKey + headers[bingAPISdkHeader] = golangBingAPISdkHeaderValue + + return NewAPIKeyAuthorizerWithHeaders(headers).WithAuthorization() +} + // BearerAuthorizer implements the bearer authorization type BearerAuthorizer struct { tokenProvider adal.OAuthTokenProvider @@ -32,10 +104,6 @@ func NewBearerAuthorizer(tp adal.OAuthTokenProvider) *BearerAuthorizer { return &BearerAuthorizer{tokenProvider: tp} } -func (ba *BearerAuthorizer) withBearerAuthorization() PrepareDecorator { - return WithHeader(headerAuthorization, fmt.Sprintf("Bearer %s", ba.tokenProvider.OAuthToken())) -} - // WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose // value is "Bearer " followed by the token. // @@ -43,15 +111,149 @@ func (ba *BearerAuthorizer) withBearerAuthorization() PrepareDecorator { func (ba *BearerAuthorizer) WithAuthorization() PrepareDecorator { return func(p Preparer) Preparer { return PreparerFunc(func(r *http.Request) (*http.Request, error) { - refresher, ok := ba.tokenProvider.(adal.Refresher) - if ok { - err := refresher.EnsureFresh() + r, err := p.Prepare(r) + if err == nil { + // the ordering is important here, prefer RefresherWithContext if available + if refresher, ok := ba.tokenProvider.(adal.RefresherWithContext); ok { + err = refresher.EnsureFreshWithContext(r.Context()) + } else if refresher, ok := ba.tokenProvider.(adal.Refresher); ok { + err = refresher.EnsureFresh() + } if err != nil { - return r, NewErrorWithError(err, "azure.BearerAuthorizer", "WithAuthorization", nil, + var resp *http.Response + if tokError, ok := err.(adal.TokenRefreshError); ok { + resp = tokError.Response() + } + return r, NewErrorWithError(err, "azure.BearerAuthorizer", "WithAuthorization", resp, "Failed to refresh the Token for request to %s", r.URL) } + return Prepare(r, WithHeader(headerAuthorization, fmt.Sprintf("Bearer %s", ba.tokenProvider.OAuthToken()))) } - return (ba.withBearerAuthorization()(p)).Prepare(r) + return r, err }) } } + +// BearerAuthorizerCallbackFunc is the authentication callback signature. +type BearerAuthorizerCallbackFunc func(tenantID, resource string) (*BearerAuthorizer, error) + +// BearerAuthorizerCallback implements bearer authorization via a callback. +type BearerAuthorizerCallback struct { + sender Sender + callback BearerAuthorizerCallbackFunc +} + +// NewBearerAuthorizerCallback creates a bearer authorization callback. The callback +// is invoked when the HTTP request is submitted. 
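Before the callback-based authorizer constructor defined next, a brief sketch of how the header-based authorizers above decorate a request via autorest.Prepare. It is illustrative only and not part of the diff; the subscription key and endpoint are placeholders.

```
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/Azure/go-autorest/autorest"
)

func main() {
	// The subscription key and endpoint are placeholders.
	authorizer := autorest.NewCognitiveServicesAuthorizer("my-subscription-key")

	req, err := http.NewRequest(http.MethodGet, "https://api.cognitive.microsoft.com/bing/v7.0/search", nil)
	if err != nil {
		log.Fatal(err)
	}

	// WithAuthorization stamps the Ocp-Apim-Subscription-Key and
	// X-BingApis-SDK-Client headers onto the request.
	req, err = autorest.Prepare(req, authorizer.WithAuthorization())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("key header set:", req.Header.Get("Ocp-Apim-Subscription-Key") != "")
}
```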
+func NewBearerAuthorizerCallback(sender Sender, callback BearerAuthorizerCallbackFunc) *BearerAuthorizerCallback { + if sender == nil { + sender = &http.Client{} + } + return &BearerAuthorizerCallback{sender: sender, callback: callback} +} + +// WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose value +// is "Bearer " followed by the token. The BearerAuthorizer is obtained via a user-supplied callback. +// +// By default, the token will be automatically refreshed through the Refresher interface. +func (bacb *BearerAuthorizerCallback) WithAuthorization() PrepareDecorator { + return func(p Preparer) Preparer { + return PreparerFunc(func(r *http.Request) (*http.Request, error) { + r, err := p.Prepare(r) + if err == nil { + // make a copy of the request and remove the body as it's not + // required and avoids us having to create a copy of it. + rCopy := *r + removeRequestBody(&rCopy) + + resp, err := bacb.sender.Do(&rCopy) + if err == nil && resp.StatusCode == 401 { + defer resp.Body.Close() + if hasBearerChallenge(resp) { + bc, err := newBearerChallenge(resp) + if err != nil { + return r, err + } + if bacb.callback != nil { + ba, err := bacb.callback(bc.values[tenantID], bc.values["resource"]) + if err != nil { + return r, err + } + return Prepare(r, ba.WithAuthorization()) + } + } + } + } + return r, err + }) + } +} + +// returns true if the HTTP response contains a bearer challenge +func hasBearerChallenge(resp *http.Response) bool { + authHeader := resp.Header.Get(bearerChallengeHeader) + if len(authHeader) == 0 || strings.Index(authHeader, bearer) < 0 { + return false + } + return true +} + +type bearerChallenge struct { + values map[string]string +} + +func newBearerChallenge(resp *http.Response) (bc bearerChallenge, err error) { + challenge := strings.TrimSpace(resp.Header.Get(bearerChallengeHeader)) + trimmedChallenge := challenge[len(bearer)+1:] + + // challenge is a set of key=value pairs that are comma delimited + pairs := strings.Split(trimmedChallenge, ",") + if len(pairs) < 1 { + err = fmt.Errorf("challenge '%s' contains no pairs", challenge) + return bc, err + } + + bc.values = make(map[string]string) + for i := range pairs { + trimmedPair := strings.TrimSpace(pairs[i]) + pair := strings.Split(trimmedPair, "=") + if len(pair) == 2 { + // remove the enclosing quotes + key := strings.Trim(pair[0], "\"") + value := strings.Trim(pair[1], "\"") + + switch key { + case "authorization", "authorization_uri": + // strip the tenant ID from the authorization URL + asURL, err := url.Parse(value) + if err != nil { + return bc, err + } + bc.values[tenantID] = asURL.Path[1:] + default: + bc.values[key] = value + } + } + } + + return bc, err +} + +// EventGridKeyAuthorizer implements authorization for event grid using key authentication. +type EventGridKeyAuthorizer struct { + topicKey string +} + +// NewEventGridKeyAuthorizer creates a new EventGridKeyAuthorizer +// with the specified topic key. +func NewEventGridKeyAuthorizer(topicKey string) EventGridKeyAuthorizer { + return EventGridKeyAuthorizer{topicKey: topicKey} +} + +// WithAuthorization returns a PrepareDecorator that adds the aeg-sas-key authentication header. 
+func (egta EventGridKeyAuthorizer) WithAuthorization() PrepareDecorator { + headers := map[string]interface{}{ + "aeg-sas-key": egta.topicKey, + } + return NewAPIKeyAuthorizerWithHeaders(headers).WithAuthorization() +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/autorest.go b/vendor/github.com/Azure/go-autorest/autorest/autorest.go index 51f1c4bbc..aafdf021f 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/autorest.go +++ b/vendor/github.com/Azure/go-autorest/autorest/autorest.go @@ -57,7 +57,22 @@ generated clients, see the Client described below. */ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( + "context" "net/http" "time" ) @@ -73,6 +88,9 @@ const ( // ResponseHasStatusCode returns true if the status code in the HTTP Response is in the passed set // and false otherwise. func ResponseHasStatusCode(resp *http.Response, codes ...int) bool { + if resp == nil { + return false + } return containsInt(codes, resp.StatusCode) } @@ -113,3 +131,20 @@ func NewPollingRequest(resp *http.Response, cancel <-chan struct{}) (*http.Reque return req, nil } + +// NewPollingRequestWithContext allocates and returns a new http.Request with the specified context to poll for the passed response. +func NewPollingRequestWithContext(ctx context.Context, resp *http.Response) (*http.Request, error) { + location := GetLocation(resp) + if location == "" { + return nil, NewErrorWithResponse("autorest", "NewPollingRequestWithContext", resp, "Location header missing from response that requires polling") + } + + req, err := Prepare((&http.Request{}).WithContext(ctx), + AsGet(), + WithBaseURL(location)) + if err != nil { + return nil, NewErrorWithError(err, "autorest", "NewPollingRequestWithContext", nil, "Failure creating poll request to %s", location) + } + + return req, nil +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/async.go b/vendor/github.com/Azure/go-autorest/autorest/azure/async.go index 332a8909d..cda1e180a 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/async.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/async.go @@ -1,15 +1,31 @@ package azure +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "bytes" + "context" + "encoding/json" "fmt" "io/ioutil" "net/http" + "net/url" "strings" "time" "github.com/Azure/go-autorest/autorest" - "github.com/Azure/go-autorest/autorest/date" ) const ( @@ -23,280 +39,878 @@ const ( operationSucceeded string = "Succeeded" ) -// DoPollForAsynchronous returns a SendDecorator that polls if the http.Response is for an Azure -// long-running operation. It will delay between requests for the duration specified in the -// RetryAfter header or, if the header is absent, the passed delay. Polling may be canceled by -// closing the optional channel on the http.Request. -func DoPollForAsynchronous(delay time.Duration) autorest.SendDecorator { - return func(s autorest.Sender) autorest.Sender { - return autorest.SenderFunc(func(r *http.Request) (resp *http.Response, err error) { - resp, err = s.Do(r) - if err != nil { - return resp, err - } - pollingCodes := []int{http.StatusAccepted, http.StatusCreated, http.StatusOK} - if !autorest.ResponseHasStatusCode(resp, pollingCodes...) { - return resp, nil - } +var pollingCodes = [...]int{http.StatusNoContent, http.StatusAccepted, http.StatusCreated, http.StatusOK} - ps := pollingState{} - for err == nil { - err = updatePollingState(resp, &ps) - if err != nil { - break - } - if ps.hasTerminated() { - if !ps.hasSucceeded() { - err = ps - } - break - } +// Future provides a mechanism to access the status and results of an asynchronous request. +// Since futures are stateful they should be passed by value to avoid race conditions. +type Future struct { + req *http.Request // legacy + pt pollingTracker +} - r, err = newPollingRequest(resp, ps) - if err != nil { - return resp, err - } +// NewFuture returns a new Future object initialized with the specified request. +// Deprecated: Please use NewFutureFromResponse instead. +func NewFuture(req *http.Request) Future { + return Future{req: req} +} - delay = autorest.GetRetryAfter(resp, delay) - resp, err = autorest.SendWithSender(s, r, - autorest.AfterDelay(delay)) - } - - return resp, err - }) +// NewFutureFromResponse returns a new Future object initialized +// with the initial response from an asynchronous operation. +func NewFutureFromResponse(resp *http.Response) (Future, error) { + pt, err := createPollingTracker(resp) + if err != nil { + return Future{}, err } + return Future{pt: pt}, nil } -func getAsyncOperation(resp *http.Response) string { - return resp.Header.Get(http.CanonicalHeaderKey(headerAsyncOperation)) -} - -func hasSucceeded(state string) bool { - return state == operationSucceeded -} - -func hasTerminated(state string) bool { - switch state { - case operationCanceled, operationFailed, operationSucceeded: - return true - default: - return false +// Response returns the last HTTP response. +func (f Future) Response() *http.Response { + if f.pt == nil { + return nil } + return f.pt.latestResponse() } -func hasFailed(state string) bool { - return state == operationFailed -} - -type provisioningTracker interface { - state() string - hasSucceeded() bool - hasTerminated() bool -} - -type operationResource struct { - // Note: - // The specification states services should return the "id" field. However some return it as - // "operationId". 
- ID string `json:"id"` - OperationID string `json:"operationId"` - Name string `json:"name"` - Status string `json:"status"` - Properties map[string]interface{} `json:"properties"` - OperationError ServiceError `json:"error"` - StartTime date.Time `json:"startTime"` - EndTime date.Time `json:"endTime"` - PercentComplete float64 `json:"percentComplete"` -} - -func (or operationResource) state() string { - return or.Status -} - -func (or operationResource) hasSucceeded() bool { - return hasSucceeded(or.state()) -} - -func (or operationResource) hasTerminated() bool { - return hasTerminated(or.state()) -} - -type provisioningProperties struct { - ProvisioningState string `json:"provisioningState"` -} - -type provisioningStatus struct { - Properties provisioningProperties `json:"properties,omitempty"` - ProvisioningError ServiceError `json:"error,omitempty"` -} - -func (ps provisioningStatus) state() string { - return ps.Properties.ProvisioningState -} - -func (ps provisioningStatus) hasSucceeded() bool { - return hasSucceeded(ps.state()) -} - -func (ps provisioningStatus) hasTerminated() bool { - return hasTerminated(ps.state()) -} - -func (ps provisioningStatus) hasProvisioningError() bool { - return ps.ProvisioningError != ServiceError{} -} - -type pollingResponseFormat string - -const ( - usesOperationResponse pollingResponseFormat = "OperationResponse" - usesProvisioningStatus pollingResponseFormat = "ProvisioningStatus" - formatIsUnknown pollingResponseFormat = "" -) - -type pollingState struct { - responseFormat pollingResponseFormat - uri string - state string - code string - message string -} - -func (ps pollingState) hasSucceeded() bool { - return hasSucceeded(ps.state) -} - -func (ps pollingState) hasTerminated() bool { - return hasTerminated(ps.state) -} - -func (ps pollingState) hasFailed() bool { - return hasFailed(ps.state) -} - -func (ps pollingState) Error() string { - return fmt.Sprintf("Long running operation terminated with status '%s': Code=%q Message=%q", ps.state, ps.code, ps.message) -} - -// updatePollingState maps the operation status -- retrieved from either a provisioningState -// field, the status field of an OperationResource, or inferred from the HTTP status code -- -// into a well-known states. Since the process begins from the initial request, the state -// always comes from either a the provisioningState returned or is inferred from the HTTP -// status code. Subsequent requests will read an Azure OperationResource object if the -// service initially returned the Azure-AsyncOperation header. The responseFormat field notes -// the expected response format. -func updatePollingState(resp *http.Response, ps *pollingState) error { - // Determine the response shape - // -- The first response will always be a provisioningStatus response; only the polling requests, - // depending on the header returned, may be something otherwise. - var pt provisioningTracker - if ps.responseFormat == usesOperationResponse { - pt = &operationResource{} - } else { - pt = &provisioningStatus{} +// Status returns the last status message of the operation. 
+func (f Future) Status() string { + if f.pt == nil { + return "" } + return f.pt.pollingStatus() +} - // If this is the first request (that is, the polling response shape is unknown), determine how - // to poll and what to expect - if ps.responseFormat == formatIsUnknown { - req := resp.Request - if req == nil { - return autorest.NewError("azure", "updatePollingState", "Azure Polling Error - Original HTTP request is missing") +// PollingMethod returns the method used to monitor the status of the asynchronous operation. +func (f Future) PollingMethod() PollingMethodType { + if f.pt == nil { + return PollingUnknown + } + return f.pt.pollingMethod() +} + +// Done queries the service to see if the operation has completed. +func (f *Future) Done(sender autorest.Sender) (bool, error) { + // support for legacy Future implementation + if f.req != nil { + resp, err := sender.Do(f.req) + if err != nil { + return false, err } + pt, err := createPollingTracker(resp) + if err != nil { + return false, err + } + f.pt = pt + f.req = nil + } + // end legacy + if f.pt == nil { + return false, autorest.NewError("Future", "Done", "future is not initialized") + } + if f.pt.hasTerminated() { + return true, f.pt.pollingError() + } + if err := f.pt.pollForStatus(sender); err != nil { + return false, err + } + if err := f.pt.checkForErrors(); err != nil { + return f.pt.hasTerminated(), err + } + if err := f.pt.updatePollingState(f.pt.provisioningStateApplicable()); err != nil { + return false, err + } + if err := f.pt.updateHeaders(); err != nil { + return false, err + } + return f.pt.hasTerminated(), f.pt.pollingError() +} - // Prefer the Azure-AsyncOperation header - ps.uri = getAsyncOperation(resp) - if ps.uri != "" { - ps.responseFormat = usesOperationResponse +// GetPollingDelay returns a duration the application should wait before checking +// the status of the asynchronous request and true; this value is returned from +// the service via the Retry-After response header. If the header wasn't returned +// then the function returns the zero-value time.Duration and false. +func (f Future) GetPollingDelay() (time.Duration, bool) { + if f.pt == nil { + return 0, false + } + resp := f.pt.latestResponse() + if resp == nil { + return 0, false + } + + retry := resp.Header.Get(autorest.HeaderRetryAfter) + if retry == "" { + return 0, false + } + + d, err := time.ParseDuration(retry + "s") + if err != nil { + panic(err) + } + + return d, true +} + +// WaitForCompletion will return when one of the following conditions is met: the long +// running operation has completed, the provided context is cancelled, or the client's +// polling duration has been exceeded. It will retry failed polling attempts based on +// the retry value defined in the client up to the maximum retry attempts. +// Deprecated: Please use WaitForCompletionRef() instead. +func (f Future) WaitForCompletion(ctx context.Context, client autorest.Client) error { + return f.WaitForCompletionRef(ctx, client) +} + +// WaitForCompletionRef will return when one of the following conditions is met: the long +// running operation has completed, the provided context is cancelled, or the client's +// polling duration has been exceeded. It will retry failed polling attempts based on +// the retry value defined in the client up to the maximum retry attempts. 
+func (f *Future) WaitForCompletionRef(ctx context.Context, client autorest.Client) error { + ctx, cancel := context.WithTimeout(ctx, client.PollingDuration) + defer cancel() + done, err := f.Done(client) + for attempts := 0; !done; done, err = f.Done(client) { + if attempts >= client.RetryAttempts { + return autorest.NewErrorWithError(err, "Future", "WaitForCompletion", f.pt.latestResponse(), "the number of retries has been exceeded") + } + // we want delayAttempt to be zero in the non-error case so + // that DelayForBackoff doesn't perform exponential back-off + var delayAttempt int + var delay time.Duration + if err == nil { + // check for Retry-After delay, if not present use the client's polling delay + var ok bool + delay, ok = f.GetPollingDelay() + if !ok { + delay = client.PollingDelay + } } else { - ps.responseFormat = usesProvisioningStatus + // there was an error polling for status so perform exponential + // back-off based on the number of attempts using the client's retry + // duration. update attempts after delayAttempt to avoid off-by-one. + delayAttempt = attempts + delay = client.RetryDuration + attempts++ } - - // Else, use the Location header - if ps.uri == "" { - ps.uri = autorest.GetLocation(resp) - } - - // Lastly, requests against an existing resource, use the last request URI - if ps.uri == "" { - m := strings.ToUpper(req.Method) - if m == http.MethodPatch || m == http.MethodPut || m == http.MethodGet { - ps.uri = req.URL.String() - } + // wait until the delay elapses or the context is cancelled + delayElapsed := autorest.DelayForBackoff(delay, delayAttempt, ctx.Done()) + if !delayElapsed { + return autorest.NewErrorWithError(ctx.Err(), "Future", "WaitForCompletion", f.pt.latestResponse(), "context has been cancelled") } } + return err +} - // Read and interpret the response (saving the Body in case no polling is necessary) - b := &bytes.Buffer{} - err := autorest.Respond(resp, - autorest.ByCopying(b), - autorest.ByUnmarshallingJSON(pt), - autorest.ByClosing()) - resp.Body = ioutil.NopCloser(b) +// MarshalJSON implements the json.Marshaler interface. +func (f Future) MarshalJSON() ([]byte, error) { + return json.Marshal(f.pt) +} + +// UnmarshalJSON implements the json.Unmarshaler interface. +func (f *Future) UnmarshalJSON(data []byte) error { + // unmarshal into JSON object to determine the tracker type + obj := map[string]interface{}{} + err := json.Unmarshal(data, &obj) if err != nil { return err } + if obj["method"] == nil { + return autorest.NewError("Future", "UnmarshalJSON", "missing 'method' property") + } + method := obj["method"].(string) + switch strings.ToUpper(method) { + case http.MethodDelete: + f.pt = &pollingTrackerDelete{} + case http.MethodPatch: + f.pt = &pollingTrackerPatch{} + case http.MethodPost: + f.pt = &pollingTrackerPost{} + case http.MethodPut: + f.pt = &pollingTrackerPut{} + default: + return autorest.NewError("Future", "UnmarshalJSON", "unsupoorted method '%s'", method) + } + // now unmarshal into the tracker + return json.Unmarshal(data, &f.pt) +} - // Interpret the results - // -- Terminal states apply regardless - // -- Unknown states are per-service inprogress states - // -- Otherwise, infer state from HTTP status code - if pt.hasTerminated() { - ps.state = pt.state() - } else if pt.state() != "" { - ps.state = operationInProgress - } else { - switch resp.StatusCode { - case http.StatusAccepted: - ps.state = operationInProgress +// PollingURL returns the URL used for retrieving the status of the long-running operation. 
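Putting the Future pieces above together, a typical consumer looks roughly like the sketch below before the remaining accessors defined next. It is illustrative only: the request URL and timeouts are placeholders, the autorest.Client field layout (Sender plus the PollingDelay/PollingDuration/RetryAttempts/RetryDuration values read by WaitForCompletionRef above) is assumed from the same vendored package, and in a real SDK call the initial response would come from an ARM PUT/POST/DELETE.

```
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	client := autorest.Client{
		Sender:          &http.Client{},
		PollingDelay:    10 * time.Second,
		PollingDuration: 15 * time.Minute,
		RetryAttempts:   3,
		RetryDuration:   5 * time.Second,
	}

	// Placeholder long-running request; the response should be a 200/201/202 LRO reply.
	req, err := http.NewRequest(http.MethodPut, "https://management.azure.com/subscriptions/sub/resourceGroups/rg", nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := autorest.SendWithSender(client, req)
	if err != nil {
		log.Fatal(err)
	}

	future, err := azure.NewFutureFromResponse(resp)
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the operation reaches a terminal state or the polling duration expires.
	if err := future.WaitForCompletionRef(context.Background(), client); err != nil {
		log.Fatal(err)
	}

	// Fetch the final payload once the LRO has succeeded.
	final, err := future.GetResult(client)
	if err != nil {
		log.Fatal(err)
	}
	defer final.Body.Close()
}
```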
+func (f Future) PollingURL() string { + if f.pt == nil { + return "" + } + return f.pt.pollingURL() +} - case http.StatusNoContent, http.StatusCreated, http.StatusOK: - ps.state = operationSucceeded +// GetResult should be called once polling has completed successfully. +// It makes the final GET call to retrieve the resultant payload. +func (f Future) GetResult(sender autorest.Sender) (*http.Response, error) { + if f.pt.finalGetURL() == "" { + // we can end up in this situation if the async operation returns a 200 + // with no polling URLs. in that case return the response which should + // contain the JSON payload (only do this for successful terminal cases). + if lr := f.pt.latestResponse(); lr != nil && f.pt.hasSucceeded() { + return lr, nil + } + return nil, autorest.NewError("Future", "GetResult", "missing URL for retrieving result") + } + req, err := http.NewRequest(http.MethodGet, f.pt.finalGetURL(), nil) + if err != nil { + return nil, err + } + return sender.Do(req) +} - default: - ps.state = operationFailed +type pollingTracker interface { + // these methods can differ per tracker + + // checks the response headers and status code to determine the polling mechanism + updateHeaders() error + + // checks the response for tracker-specific error conditions + checkForErrors() error + + // returns true if provisioning state should be checked + provisioningStateApplicable() bool + + // methods common to all trackers + + // initializes the tracker's internal state, call this when the tracker is created + initializeState() error + + // makes an HTTP request to check the status of the LRO + pollForStatus(sender autorest.Sender) error + + // updates internal tracker state, call this after each call to pollForStatus + updatePollingState(provStateApl bool) error + + // returns the error response from the service, can be nil + pollingError() error + + // returns the polling method being used + pollingMethod() PollingMethodType + + // returns the state of the LRO as returned from the service + pollingStatus() string + + // returns the URL used for polling status + pollingURL() string + + // returns the URL used for the final GET to retrieve the resource + finalGetURL() string + + // returns true if the LRO is in a terminal state + hasTerminated() bool + + // returns true if the LRO is in a failed terminal state + hasFailed() bool + + // returns true if the LRO is in a successful terminal state + hasSucceeded() bool + + // returns the cached HTTP response after a call to pollForStatus(), can be nil + latestResponse() *http.Response +} + +type pollingTrackerBase struct { + // resp is the last response, either from the submission of the LRO or from polling + resp *http.Response + + // method is the HTTP verb, this is needed for deserialization + Method string `json:"method"` + + // rawBody is the raw JSON response body + rawBody map[string]interface{} + + // denotes if polling is using async-operation or location header + Pm PollingMethodType `json:"pollingMethod"` + + // the URL to poll for status + URI string `json:"pollingURI"` + + // the state of the LRO as returned from the service + State string `json:"lroState"` + + // the URL to GET for the final result + FinalGetURI string `json:"resultURI"` + + // used to hold an error object returned from the service + Err *ServiceError `json:"error,omitempty"` +} + +func (pt *pollingTrackerBase) initializeState() error { + // determine the initial polling state based on response body and/or HTTP status + // code. 
this is applicable to the initial LRO response, not polling responses! + pt.Method = pt.resp.Request.Method + if err := pt.updateRawBody(); err != nil { + return err + } + switch pt.resp.StatusCode { + case http.StatusOK: + if ps := pt.getProvisioningState(); ps != nil { + pt.State = *ps + } else { + pt.State = operationSucceeded + } + case http.StatusCreated: + if ps := pt.getProvisioningState(); ps != nil { + pt.State = *ps + } else { + pt.State = operationInProgress + } + case http.StatusAccepted: + pt.State = operationInProgress + case http.StatusNoContent: + pt.State = operationSucceeded + default: + pt.State = operationFailed + pt.updateErrorFromResponse() + } + return nil +} + +func (pt pollingTrackerBase) getProvisioningState() *string { + if pt.rawBody != nil && pt.rawBody["properties"] != nil { + p := pt.rawBody["properties"].(map[string]interface{}) + if ps := p["provisioningState"]; ps != nil { + s := ps.(string) + return &s } } + return nil +} - if ps.state == operationInProgress && ps.uri == "" { - return autorest.NewError("azure", "updatePollingState", "Azure Polling Error - Unable to obtain polling URI for %s %s", resp.Request.Method, resp.Request.URL) +func (pt *pollingTrackerBase) updateRawBody() error { + pt.rawBody = map[string]interface{}{} + if pt.resp.ContentLength != 0 { + defer pt.resp.Body.Close() + b, err := ioutil.ReadAll(pt.resp.Body) + if err != nil { + return autorest.NewErrorWithError(err, "pollingTrackerBase", "updateRawBody", nil, "failed to read response body") + } + // put the body back so it's available to other callers + pt.resp.Body = ioutil.NopCloser(bytes.NewReader(b)) + if err = json.Unmarshal(b, &pt.rawBody); err != nil { + return autorest.NewErrorWithError(err, "pollingTrackerBase", "updateRawBody", nil, "failed to unmarshal response body") + } } + return nil +} - // For failed operation, check for error code and message in - // -- Operation resource - // -- Response - // -- Otherwise, Unknown - if ps.hasFailed() { - if ps.responseFormat == usesOperationResponse { - or := pt.(*operationResource) - ps.code = or.OperationError.Code - ps.message = or.OperationError.Message - } else { - p := pt.(*provisioningStatus) - if p.hasProvisioningError() { - ps.code = p.ProvisioningError.Code - ps.message = p.ProvisioningError.Message +func (pt *pollingTrackerBase) pollForStatus(sender autorest.Sender) error { + req, err := http.NewRequest(http.MethodGet, pt.URI, nil) + if err != nil { + return autorest.NewErrorWithError(err, "pollingTrackerBase", "pollForStatus", nil, "failed to create HTTP request") + } + // attach the context from the original request if available (it will be absent for deserialized futures) + if pt.resp != nil { + req = req.WithContext(pt.resp.Request.Context()) + } + pt.resp, err = sender.Do(req) + if err != nil { + return autorest.NewErrorWithError(err, "pollingTrackerBase", "pollForStatus", nil, "failed to send HTTP request") + } + if autorest.ResponseHasStatusCode(pt.resp, pollingCodes[:]...) { + // reset the service error on success case + pt.Err = nil + err = pt.updateRawBody() + } else { + // check response body for error content + pt.updateErrorFromResponse() + } + return err +} + +// attempts to unmarshal a ServiceError type from the response body. +// if that fails then make a best attempt at creating something meaningful. 
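Before the error-extraction helper defined next, the body shape consulted by getProvisioningState above is worth seeing once. The sketch below decodes a made-up ARM payload the same way the tracker does; it is illustrative only.

```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Typical ARM body: the state lives under properties.provisioningState.
	body := []byte(`{"properties": {"provisioningState": "Succeeded"}}`)

	var raw map[string]interface{}
	if err := json.Unmarshal(body, &raw); err != nil {
		panic(err)
	}

	// Same two-level lookup the tracker performs on its rawBody.
	if p, ok := raw["properties"].(map[string]interface{}); ok {
		if ps, ok := p["provisioningState"].(string); ok {
			fmt.Println("provisioning state:", ps) // prints "Succeeded"
		}
	}
}
```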
+func (pt *pollingTrackerBase) updateErrorFromResponse() { + var err error + if pt.resp.ContentLength != 0 { + type respErr struct { + ServiceError *ServiceError `json:"error"` + } + re := respErr{} + defer pt.resp.Body.Close() + var b []byte + b, err = ioutil.ReadAll(pt.resp.Body) + if err != nil { + goto Default + } + if err = json.Unmarshal(b, &re); err != nil { + goto Default + } + // unmarshalling the error didn't yield anything, try unwrapped error + if re.ServiceError == nil { + err = json.Unmarshal(b, &re.ServiceError) + if err != nil { + goto Default + } + } + if re.ServiceError != nil { + pt.Err = re.ServiceError + return + } + } +Default: + se := &ServiceError{ + Code: fmt.Sprintf("HTTP status code %v", pt.resp.StatusCode), + Message: pt.resp.Status, + } + if err != nil { + se.InnerError = make(map[string]interface{}) + se.InnerError["unmarshalError"] = err.Error() + } + pt.Err = se +} + +func (pt *pollingTrackerBase) updatePollingState(provStateApl bool) error { + if pt.Pm == PollingAsyncOperation && pt.rawBody["status"] != nil { + pt.State = pt.rawBody["status"].(string) + } else { + if pt.resp.StatusCode == http.StatusAccepted { + pt.State = operationInProgress + } else if provStateApl { + if ps := pt.getProvisioningState(); ps != nil { + pt.State = *ps } else { - ps.code = "Unknown" - ps.message = "None" + pt.State = operationSucceeded + } + } else { + return autorest.NewError("pollingTrackerBase", "updatePollingState", "the response from the async operation has an invalid status code") + } + } + // if the operation has failed update the error state + if pt.hasFailed() { + pt.updateErrorFromResponse() + } + return nil +} + +func (pt pollingTrackerBase) pollingError() error { + if pt.Err == nil { + return nil + } + return pt.Err +} + +func (pt pollingTrackerBase) pollingMethod() PollingMethodType { + return pt.Pm +} + +func (pt pollingTrackerBase) pollingStatus() string { + return pt.State +} + +func (pt pollingTrackerBase) pollingURL() string { + return pt.URI +} + +func (pt pollingTrackerBase) finalGetURL() string { + return pt.FinalGetURI +} + +func (pt pollingTrackerBase) hasTerminated() bool { + return strings.EqualFold(pt.State, operationCanceled) || strings.EqualFold(pt.State, operationFailed) || strings.EqualFold(pt.State, operationSucceeded) +} + +func (pt pollingTrackerBase) hasFailed() bool { + return strings.EqualFold(pt.State, operationCanceled) || strings.EqualFold(pt.State, operationFailed) +} + +func (pt pollingTrackerBase) hasSucceeded() bool { + return strings.EqualFold(pt.State, operationSucceeded) +} + +func (pt pollingTrackerBase) latestResponse() *http.Response { + return pt.resp +} + +// error checking common to all trackers +func (pt pollingTrackerBase) baseCheckForErrors() error { + // for Azure-AsyncOperations the response body cannot be nil or empty + if pt.Pm == PollingAsyncOperation { + if pt.resp.Body == nil || pt.resp.ContentLength == 0 { + return autorest.NewError("pollingTrackerBase", "baseCheckForErrors", "for Azure-AsyncOperation response body cannot be nil") + } + if pt.rawBody["status"] == nil { + return autorest.NewError("pollingTrackerBase", "baseCheckForErrors", "missing status property in Azure-AsyncOperation response body") + } + } + return nil +} + +// DELETE + +type pollingTrackerDelete struct { + pollingTrackerBase +} + +func (pt *pollingTrackerDelete) updateHeaders() error { + // for 201 the Location header is required + if pt.resp.StatusCode == http.StatusCreated { + if lh, err := getURLFromLocationHeader(pt.resp); err != nil { + 
return err + } else if lh == "" { + return autorest.NewError("pollingTrackerDelete", "updateHeaders", "missing Location header in 201 response") + } else { + pt.URI = lh + } + pt.Pm = PollingLocation + pt.FinalGetURI = pt.URI + } + // for 202 prefer the Azure-AsyncOperation header but fall back to Location if necessary + if pt.resp.StatusCode == http.StatusAccepted { + ao, err := getURLFromAsyncOpHeader(pt.resp) + if err != nil { + return err + } else if ao != "" { + pt.URI = ao + pt.Pm = PollingAsyncOperation + } + // if the Location header is invalid and we already have a polling URL + // then we don't care if the Location header URL is malformed. + if lh, err := getURLFromLocationHeader(pt.resp); err != nil && pt.URI == "" { + return err + } else if lh != "" { + if ao == "" { + pt.URI = lh + pt.Pm = PollingLocation + } + // when both headers are returned we use the value in the Location header for the final GET + pt.FinalGetURI = lh + } + // make sure a polling URL was found + if pt.URI == "" { + return autorest.NewError("pollingTrackerPost", "updateHeaders", "didn't get any suitable polling URLs in 202 response") + } + } + return nil +} + +func (pt pollingTrackerDelete) checkForErrors() error { + return pt.baseCheckForErrors() +} + +func (pt pollingTrackerDelete) provisioningStateApplicable() bool { + return pt.resp.StatusCode == http.StatusOK || pt.resp.StatusCode == http.StatusNoContent +} + +// PATCH + +type pollingTrackerPatch struct { + pollingTrackerBase +} + +func (pt *pollingTrackerPatch) updateHeaders() error { + // by default we can use the original URL for polling and final GET + if pt.URI == "" { + pt.URI = pt.resp.Request.URL.String() + } + if pt.FinalGetURI == "" { + pt.FinalGetURI = pt.resp.Request.URL.String() + } + if pt.Pm == PollingUnknown { + pt.Pm = PollingRequestURI + } + // for 201 it's permissible for no headers to be returned + if pt.resp.StatusCode == http.StatusCreated { + if ao, err := getURLFromAsyncOpHeader(pt.resp); err != nil { + return err + } else if ao != "" { + pt.URI = ao + pt.Pm = PollingAsyncOperation + } + } + // for 202 prefer the Azure-AsyncOperation header but fall back to Location if necessary + // note the absense of the "final GET" mechanism for PATCH + if pt.resp.StatusCode == http.StatusAccepted { + ao, err := getURLFromAsyncOpHeader(pt.resp) + if err != nil { + return err + } else if ao != "" { + pt.URI = ao + pt.Pm = PollingAsyncOperation + } + if ao == "" { + if lh, err := getURLFromLocationHeader(pt.resp); err != nil { + return err + } else if lh == "" { + return autorest.NewError("pollingTrackerPatch", "updateHeaders", "didn't get any suitable polling URLs in 202 response") + } else { + pt.URI = lh + pt.Pm = PollingLocation } } } return nil } -func newPollingRequest(resp *http.Response, ps pollingState) (*http.Request, error) { - req := resp.Request - if req == nil { - return nil, autorest.NewError("azure", "newPollingRequest", "Azure Polling Error - Original HTTP request is missing") - } - - reqPoll, err := autorest.Prepare(&http.Request{Cancel: req.Cancel}, - autorest.AsGet(), - autorest.WithBaseURL(ps.uri)) - if err != nil { - return nil, autorest.NewErrorWithError(err, "azure", "newPollingRequest", nil, "Failure creating poll request to %s", ps.uri) - } - - return reqPoll, nil +func (pt pollingTrackerPatch) checkForErrors() error { + return pt.baseCheckForErrors() +} + +func (pt pollingTrackerPatch) provisioningStateApplicable() bool { + return pt.resp.StatusCode == http.StatusOK || pt.resp.StatusCode == http.StatusCreated +} + 
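The 202 handling above (and in the POST and PUT trackers that follow) always prefers the Azure-AsyncOperation header for polling and keeps Location for the final GET. The sketch below illustrates that selection; it is not part of the diff, the URLs are placeholders, and the literal "Azure-AsyncOperation" header name is assumed from the headerAsyncOperation constant defined elsewhere in this file.

```
package main

import (
	"fmt"
	"net/http"
	"net/url"

	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	// Simulate the initial 202 response to a DELETE that returns both LRO headers.
	u, err := url.Parse("https://management.azure.com/subscriptions/sub/resourceGroups/rg/providers/x/y/z")
	if err != nil {
		panic(err)
	}
	req := &http.Request{Method: http.MethodDelete, URL: u}

	header := http.Header{}
	header.Set("Azure-AsyncOperation", "https://management.azure.com/operationresults/op-1")
	header.Set("Location", "https://management.azure.com/subscriptions/sub/providers/x/y/z")

	resp := &http.Response{StatusCode: http.StatusAccepted, Header: header, Request: req}

	future, err := azure.NewFutureFromResponse(resp)
	if err != nil {
		panic(err)
	}

	// The async-operation URL wins for polling; Location is retained for the final GET.
	fmt.Println("polling URL:", future.PollingURL())
	fmt.Println("polling method:", future.PollingMethod()) // PollingAsyncOperation
}
```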
+// POST + +type pollingTrackerPost struct { + pollingTrackerBase +} + +func (pt *pollingTrackerPost) updateHeaders() error { + // 201 requires Location header + if pt.resp.StatusCode == http.StatusCreated { + if lh, err := getURLFromLocationHeader(pt.resp); err != nil { + return err + } else if lh == "" { + return autorest.NewError("pollingTrackerPost", "updateHeaders", "missing Location header in 201 response") + } else { + pt.URI = lh + pt.FinalGetURI = lh + pt.Pm = PollingLocation + } + } + // for 202 prefer the Azure-AsyncOperation header but fall back to Location if necessary + if pt.resp.StatusCode == http.StatusAccepted { + ao, err := getURLFromAsyncOpHeader(pt.resp) + if err != nil { + return err + } else if ao != "" { + pt.URI = ao + pt.Pm = PollingAsyncOperation + } + // if the Location header is invalid and we already have a polling URL + // then we don't care if the Location header URL is malformed. + if lh, err := getURLFromLocationHeader(pt.resp); err != nil && pt.URI == "" { + return err + } else if lh != "" { + if ao == "" { + pt.URI = lh + pt.Pm = PollingLocation + } + // when both headers are returned we use the value in the Location header for the final GET + pt.FinalGetURI = lh + } + // make sure a polling URL was found + if pt.URI == "" { + return autorest.NewError("pollingTrackerPost", "updateHeaders", "didn't get any suitable polling URLs in 202 response") + } + } + return nil +} + +func (pt pollingTrackerPost) checkForErrors() error { + return pt.baseCheckForErrors() +} + +func (pt pollingTrackerPost) provisioningStateApplicable() bool { + return pt.resp.StatusCode == http.StatusOK || pt.resp.StatusCode == http.StatusNoContent +} + +// PUT + +type pollingTrackerPut struct { + pollingTrackerBase +} + +func (pt *pollingTrackerPut) updateHeaders() error { + // by default we can use the original URL for polling and final GET + if pt.URI == "" { + pt.URI = pt.resp.Request.URL.String() + } + if pt.FinalGetURI == "" { + pt.FinalGetURI = pt.resp.Request.URL.String() + } + if pt.Pm == PollingUnknown { + pt.Pm = PollingRequestURI + } + // for 201 it's permissible for no headers to be returned + if pt.resp.StatusCode == http.StatusCreated { + if ao, err := getURLFromAsyncOpHeader(pt.resp); err != nil { + return err + } else if ao != "" { + pt.URI = ao + pt.Pm = PollingAsyncOperation + } + } + // for 202 prefer the Azure-AsyncOperation header but fall back to Location if necessary + if pt.resp.StatusCode == http.StatusAccepted { + ao, err := getURLFromAsyncOpHeader(pt.resp) + if err != nil { + return err + } else if ao != "" { + pt.URI = ao + pt.Pm = PollingAsyncOperation + } + // if the Location header is invalid and we already have a polling URL + // then we don't care if the Location header URL is malformed. 
+ if lh, err := getURLFromLocationHeader(pt.resp); err != nil && pt.URI == "" { + return err + } else if lh != "" { + if ao == "" { + pt.URI = lh + pt.Pm = PollingLocation + } + // when both headers are returned we use the value in the Location header for the final GET + pt.FinalGetURI = lh + } + // make sure a polling URL was found + if pt.URI == "" { + return autorest.NewError("pollingTrackerPut", "updateHeaders", "didn't get any suitable polling URLs in 202 response") + } + } + return nil +} + +func (pt pollingTrackerPut) checkForErrors() error { + err := pt.baseCheckForErrors() + if err != nil { + return err + } + // if there are no LRO headers then the body cannot be empty + ao, err := getURLFromAsyncOpHeader(pt.resp) + if err != nil { + return err + } + lh, err := getURLFromLocationHeader(pt.resp) + if err != nil { + return err + } + if ao == "" && lh == "" && len(pt.rawBody) == 0 { + return autorest.NewError("pollingTrackerPut", "checkForErrors", "the response did not contain a body") + } + return nil +} + +func (pt pollingTrackerPut) provisioningStateApplicable() bool { + return pt.resp.StatusCode == http.StatusOK || pt.resp.StatusCode == http.StatusCreated +} + +// creates a polling tracker based on the verb of the original request +func createPollingTracker(resp *http.Response) (pollingTracker, error) { + var pt pollingTracker + switch strings.ToUpper(resp.Request.Method) { + case http.MethodDelete: + pt = &pollingTrackerDelete{pollingTrackerBase: pollingTrackerBase{resp: resp}} + case http.MethodPatch: + pt = &pollingTrackerPatch{pollingTrackerBase: pollingTrackerBase{resp: resp}} + case http.MethodPost: + pt = &pollingTrackerPost{pollingTrackerBase: pollingTrackerBase{resp: resp}} + case http.MethodPut: + pt = &pollingTrackerPut{pollingTrackerBase: pollingTrackerBase{resp: resp}} + default: + return nil, autorest.NewError("azure", "createPollingTracker", "unsupported HTTP method %s", resp.Request.Method) + } + if err := pt.initializeState(); err != nil { + return pt, err + } + // this initializes the polling header values, we do this during creation in case the + // initial response send us invalid values; this way the API call will return a non-nil + // error (not doing this means the error shows up in Future.Done) + return pt, pt.updateHeaders() +} + +// gets the polling URL from the Azure-AsyncOperation header. +// ensures the URL is well-formed and absolute. +func getURLFromAsyncOpHeader(resp *http.Response) (string, error) { + s := resp.Header.Get(http.CanonicalHeaderKey(headerAsyncOperation)) + if s == "" { + return "", nil + } + if !isValidURL(s) { + return "", autorest.NewError("azure", "getURLFromAsyncOpHeader", "invalid polling URL '%s'", s) + } + return s, nil +} + +// gets the polling URL from the Location header. +// ensures the URL is well-formed and absolute. +func getURLFromLocationHeader(resp *http.Response) (string, error) { + s := resp.Header.Get(http.CanonicalHeaderKey(autorest.HeaderLocation)) + if s == "" { + return "", nil + } + if !isValidURL(s) { + return "", autorest.NewError("azure", "getURLFromLocationHeader", "invalid polling URL '%s'", s) + } + return s, nil +} + +// verify that the URL is valid and absolute +func isValidURL(s string) bool { + u, err := url.Parse(s) + return err == nil && u.IsAbs() +} + +// DoPollForAsynchronous returns a SendDecorator that polls if the http.Response is for an Azure +// long-running operation. 
It will delay between requests for the duration specified in the +// RetryAfter header or, if the header is absent, the passed delay. Polling may be canceled via +// the context associated with the http.Request. +// Deprecated: Prefer using Futures to allow for non-blocking async operations. +func DoPollForAsynchronous(delay time.Duration) autorest.SendDecorator { + return func(s autorest.Sender) autorest.Sender { + return autorest.SenderFunc(func(r *http.Request) (*http.Response, error) { + resp, err := s.Do(r) + if err != nil { + return resp, err + } + if !autorest.ResponseHasStatusCode(resp, pollingCodes[:]...) { + return resp, nil + } + future, err := NewFutureFromResponse(resp) + if err != nil { + return resp, err + } + // retry until either the LRO completes or we receive an error + var done bool + for done, err = future.Done(s); !done && err == nil; done, err = future.Done(s) { + // check for Retry-After delay, if not present use the specified polling delay + if pd, ok := future.GetPollingDelay(); ok { + delay = pd + } + // wait until the delay elapses or the context is cancelled + if delayElapsed := autorest.DelayForBackoff(delay, 0, r.Context().Done()); !delayElapsed { + return future.Response(), + autorest.NewErrorWithError(r.Context().Err(), "azure", "DoPollForAsynchronous", future.Response(), "context has been cancelled") + } + } + return future.Response(), err + }) + } +} + +// PollingMethodType defines a type used for enumerating polling mechanisms. +type PollingMethodType string + +const ( + // PollingAsyncOperation indicates the polling method uses the Azure-AsyncOperation header. + PollingAsyncOperation PollingMethodType = "AsyncOperation" + + // PollingLocation indicates the polling method uses the Location header. + PollingLocation PollingMethodType = "Location" + + // PollingRequestURI indicates the polling method uses the original request URI. + PollingRequestURI PollingMethodType = "RequestURI" + + // PollingUnknown indicates an unknown polling method and is the default value. + PollingUnknown PollingMethodType = "" +) + +// AsyncOpIncompleteError is the type that's returned from a future that has not completed. +type AsyncOpIncompleteError struct { + // FutureType is the name of the type composed of a azure.Future. + FutureType string +} + +// Error returns an error message including the originating type name of the error. +func (e AsyncOpIncompleteError) Error() string { + return fmt.Sprintf("%s: asynchronous operation has not completed", e.FutureType) +} + +// NewAsyncOpIncompleteError creates a new AsyncOpIncompleteError with the specified parameters. +func NewAsyncOpIncompleteError(futureType string) AsyncOpIncompleteError { + return AsyncOpIncompleteError{ + FutureType: futureType, + } } diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go b/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go index 3f4d13421..a702ffe75 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go @@ -1,16 +1,29 @@ -/* -Package azure provides Azure-specific implementations used with AutoRest. - -See the included examples for more detail. -*/ +// Package azure provides Azure-specific implementations used with AutoRest. +// See the included examples for more detail. package azure +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "encoding/json" "fmt" "io/ioutil" "net/http" + "regexp" "strconv" + "strings" "github.com/Azure/go-autorest/autorest" ) @@ -29,21 +42,88 @@ const ( ) // ServiceError encapsulates the error response from an Azure service. +// It adhears to the OData v4 specification for error responses. type ServiceError struct { - Code string `json:"code"` - Message string `json:"message"` - Details *[]interface{} `json:"details"` + Code string `json:"code"` + Message string `json:"message"` + Target *string `json:"target"` + Details []map[string]interface{} `json:"details"` + InnerError map[string]interface{} `json:"innererror"` } func (se ServiceError) Error() string { - if se.Details != nil { - d, err := json.Marshal(*(se.Details)) - if err != nil { - return fmt.Sprintf("Code=%q Message=%q Details=%v", se.Code, se.Message, *se.Details) - } - return fmt.Sprintf("Code=%q Message=%q Details=%v", se.Code, se.Message, string(d)) + result := fmt.Sprintf("Code=%q Message=%q", se.Code, se.Message) + + if se.Target != nil { + result += fmt.Sprintf(" Target=%q", *se.Target) } - return fmt.Sprintf("Code=%q Message=%q", se.Code, se.Message) + + if se.Details != nil { + d, err := json.Marshal(se.Details) + if err != nil { + result += fmt.Sprintf(" Details=%v", se.Details) + } + result += fmt.Sprintf(" Details=%v", string(d)) + } + + if se.InnerError != nil { + d, err := json.Marshal(se.InnerError) + if err != nil { + result += fmt.Sprintf(" InnerError=%v", se.InnerError) + } + result += fmt.Sprintf(" InnerError=%v", string(d)) + } + + return result +} + +// UnmarshalJSON implements the json.Unmarshaler interface for the ServiceError type. +func (se *ServiceError) UnmarshalJSON(b []byte) error { + // per the OData v4 spec the details field must be an array of JSON objects. + // unfortunately not all services adhear to the spec and just return a single + // object instead of an array with one object. so we have to perform some + // shenanigans to accommodate both cases. 
+ // http://docs.oasis-open.org/odata/odata-json-format/v4.0/os/odata-json-format-v4.0-os.html#_Toc372793091 + + type serviceError1 struct { + Code string `json:"code"` + Message string `json:"message"` + Target *string `json:"target"` + Details []map[string]interface{} `json:"details"` + InnerError map[string]interface{} `json:"innererror"` + } + + type serviceError2 struct { + Code string `json:"code"` + Message string `json:"message"` + Target *string `json:"target"` + Details map[string]interface{} `json:"details"` + InnerError map[string]interface{} `json:"innererror"` + } + + se1 := serviceError1{} + err := json.Unmarshal(b, &se1) + if err == nil { + se.populate(se1.Code, se1.Message, se1.Target, se1.Details, se1.InnerError) + return nil + } + + se2 := serviceError2{} + err = json.Unmarshal(b, &se2) + if err == nil { + se.populate(se2.Code, se2.Message, se2.Target, nil, se2.InnerError) + se.Details = append(se.Details, se2.Details) + return nil + } + return err +} + +func (se *ServiceError) populate(code, message string, target *string, details []map[string]interface{}, inner map[string]interface{}) { + se.Code = code + se.Message = message + se.Target = target + se.Details = details + se.InnerError = inner } // RequestError describes an error response returned by Azure service. @@ -69,6 +149,41 @@ func IsAzureError(e error) bool { return ok } +// Resource contains details about an Azure resource. +type Resource struct { + SubscriptionID string + ResourceGroup string + Provider string + ResourceType string + ResourceName string +} + +// ParseResourceID parses a resource ID into a ResourceDetails struct. +// See https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-functions-resource#return-value-4. +func ParseResourceID(resourceID string) (Resource, error) { + + const resourceIDPatternText = `(?i)subscriptions/(.+)/resourceGroups/(.+)/providers/(.+?)/(.+?)/(.+)` + resourceIDPattern := regexp.MustCompile(resourceIDPatternText) + match := resourceIDPattern.FindStringSubmatch(resourceID) + + if len(match) == 0 { + return Resource{}, fmt.Errorf("parsing failed for %s. Invalid resource Id format", resourceID) + } + + v := strings.Split(match[5], "/") + resourceName := v[len(v)-1] + + result := Resource{ + SubscriptionID: match[1], + ResourceGroup: match[2], + Provider: match[3], + ResourceType: match[4], + ResourceName: resourceName, + } + + return result, nil +} + // NewErrorWithError creates a new Error conforming object from the // passed packageType, method, statusCode of the given resp (UndefinedStatusCode // if resp is nil), message, and original error. message is treated as a format @@ -164,10 +279,29 @@ func WithErrorUnlessStatusCode(codes ...int) autorest.RespondDecorator { resp.Body = ioutil.NopCloser(&b) if decodeErr != nil { return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b.String(), decodeErr) - } else if e.ServiceError == nil { - e.ServiceError = &ServiceError{Code: "Unknown", Message: "Unknown service error"} } - + if e.ServiceError == nil { + // Check if error is unwrapped ServiceError + if err := json.Unmarshal(b.Bytes(), &e.ServiceError); err != nil { + return err + } + } + if e.ServiceError.Message == "" { + // if we're here it means the returned error wasn't OData v4 compliant. + // try to unmarshal the body as raw JSON in hopes of getting something. 
+ rawBody := map[string]interface{}{} + if err := json.Unmarshal(b.Bytes(), &rawBody); err != nil { + return err + } + e.ServiceError = &ServiceError{ + Code: "Unknown", + Message: "Unknown service error", + } + if len(rawBody) > 0 { + e.ServiceError.Details = []map[string]interface{}{rawBody} + } + } + e.Response = resp e.RequestID = ExtractRequestID(resp) if e.StatusCode == nil { e.StatusCode = resp.StatusCode diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go b/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go index 1cf55651f..7e41f7fd9 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/environments.go @@ -1,10 +1,31 @@ package azure +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( + "encoding/json" "fmt" + "io/ioutil" + "os" "strings" ) +// EnvironmentFilepathName captures the name of the environment variable containing the path to the file +// to be used while populating the Azure Environment. +const EnvironmentFilepathName = "AZURE_ENVIRONMENT_FILEPATH" + var environments = map[string]Environment{ "AZURECHINACLOUD": ChinaCloud, "AZUREGERMANCLOUD": GermanCloud, @@ -23,6 +44,8 @@ type Environment struct { GalleryEndpoint string `json:"galleryEndpoint"` KeyVaultEndpoint string `json:"keyVaultEndpoint"` GraphEndpoint string `json:"graphEndpoint"` + ServiceBusEndpoint string `json:"serviceBusEndpoint"` + BatchManagementEndpoint string `json:"batchManagementEndpoint"` StorageEndpointSuffix string `json:"storageEndpointSuffix"` SQLDatabaseDNSSuffix string `json:"sqlDatabaseDNSSuffix"` TrafficManagerDNSSuffix string `json:"trafficManagerDNSSuffix"` @@ -31,6 +54,7 @@ type Environment struct { ServiceManagementVMDNSSuffix string `json:"serviceManagementVMDNSSuffix"` ResourceManagerVMDNSSuffix string `json:"resourceManagerVMDNSSuffix"` ContainerRegistryDNSSuffix string `json:"containerRegistryDNSSuffix"` + TokenAudience string `json:"tokenAudience"` } var ( @@ -45,14 +69,17 @@ var ( GalleryEndpoint: "https://gallery.azure.com/", KeyVaultEndpoint: "https://vault.azure.net/", GraphEndpoint: "https://graph.windows.net/", + ServiceBusEndpoint: "https://servicebus.windows.net/", + BatchManagementEndpoint: "https://batch.core.windows.net/", StorageEndpointSuffix: "core.windows.net", SQLDatabaseDNSSuffix: "database.windows.net", TrafficManagerDNSSuffix: "trafficmanager.net", KeyVaultDNSSuffix: "vault.azure.net", - ServiceBusEndpointSuffix: "servicebus.azure.com", + ServiceBusEndpointSuffix: "servicebus.windows.net", ServiceManagementVMDNSSuffix: "cloudapp.net", ResourceManagerVMDNSSuffix: "cloudapp.azure.com", ContainerRegistryDNSSuffix: "azurecr.io", + TokenAudience: "https://management.azure.com/", } // USGovernmentCloud is the cloud environment for the US Government @@ -62,10 +89,12 @@ var ( PublishSettingsURL: "https://manage.windowsazure.us/publishsettings/index", ServiceManagementEndpoint: 
"https://management.core.usgovcloudapi.net/", ResourceManagerEndpoint: "https://management.usgovcloudapi.net/", - ActiveDirectoryEndpoint: "https://login.microsoftonline.com/", + ActiveDirectoryEndpoint: "https://login.microsoftonline.us/", GalleryEndpoint: "https://gallery.usgovcloudapi.net/", KeyVaultEndpoint: "https://vault.usgovcloudapi.net/", - GraphEndpoint: "https://graph.usgovcloudapi.net/", + GraphEndpoint: "https://graph.windows.net/", + ServiceBusEndpoint: "https://servicebus.usgovcloudapi.net/", + BatchManagementEndpoint: "https://batch.core.usgovcloudapi.net/", StorageEndpointSuffix: "core.usgovcloudapi.net", SQLDatabaseDNSSuffix: "database.usgovcloudapi.net", TrafficManagerDNSSuffix: "usgovtrafficmanager.net", @@ -74,6 +103,7 @@ var ( ServiceManagementVMDNSSuffix: "usgovcloudapp.net", ResourceManagerVMDNSSuffix: "cloudapp.windowsazure.us", ContainerRegistryDNSSuffix: "azurecr.io", + TokenAudience: "https://management.usgovcloudapi.net/", } // ChinaCloud is the cloud environment operated in China @@ -87,14 +117,17 @@ var ( GalleryEndpoint: "https://gallery.chinacloudapi.cn/", KeyVaultEndpoint: "https://vault.azure.cn/", GraphEndpoint: "https://graph.chinacloudapi.cn/", + ServiceBusEndpoint: "https://servicebus.chinacloudapi.cn/", + BatchManagementEndpoint: "https://batch.chinacloudapi.cn/", StorageEndpointSuffix: "core.chinacloudapi.cn", SQLDatabaseDNSSuffix: "database.chinacloudapi.cn", TrafficManagerDNSSuffix: "trafficmanager.cn", KeyVaultDNSSuffix: "vault.azure.cn", - ServiceBusEndpointSuffix: "servicebus.chinacloudapi.net", + ServiceBusEndpointSuffix: "servicebus.chinacloudapi.cn", ServiceManagementVMDNSSuffix: "chinacloudapp.cn", ResourceManagerVMDNSSuffix: "cloudapp.azure.cn", ContainerRegistryDNSSuffix: "azurecr.io", + TokenAudience: "https://management.chinacloudapi.cn/", } // GermanCloud is the cloud environment operated in Germany @@ -108,6 +141,8 @@ var ( GalleryEndpoint: "https://gallery.cloudapi.de/", KeyVaultEndpoint: "https://vault.microsoftazure.de/", GraphEndpoint: "https://graph.cloudapi.de/", + ServiceBusEndpoint: "https://servicebus.cloudapi.de/", + BatchManagementEndpoint: "https://batch.cloudapi.de/", StorageEndpointSuffix: "core.cloudapi.de", SQLDatabaseDNSSuffix: "database.cloudapi.de", TrafficManagerDNSSuffix: "azuretrafficmanager.de", @@ -116,15 +151,41 @@ var ( ServiceManagementVMDNSSuffix: "azurecloudapp.de", ResourceManagerVMDNSSuffix: "cloudapp.microsoftazure.de", ContainerRegistryDNSSuffix: "azurecr.io", + TokenAudience: "https://management.microsoftazure.de/", } ) -// EnvironmentFromName returns an Environment based on the common name specified +// EnvironmentFromName returns an Environment based on the common name specified. func EnvironmentFromName(name string) (Environment, error) { + // IMPORTANT + // As per @radhikagupta5: + // This is technical debt, fundamentally here because Kubernetes is not currently accepting + // contributions to the providers. Once that is an option, the provider should be updated to + // directly call `EnvironmentFromFile`. Until then, we rely on dispatching Azure Stack environment creation + // from this method based on the name that is provided to us. 
+ if strings.EqualFold(name, "AZURESTACKCLOUD") { + return EnvironmentFromFile(os.Getenv(EnvironmentFilepathName)) + } + name = strings.ToUpper(name) env, ok := environments[name] if !ok { return env, fmt.Errorf("autorest/azure: There is no cloud environment matching the name %q", name) } + return env, nil } + +// EnvironmentFromFile loads an Environment from a configuration file available on disk. +// This function is particularly useful in the Hybrid Cloud model, where one must define their own +// endpoints. +func EnvironmentFromFile(location string) (unmarshaled Environment, err error) { + fileContents, err := ioutil.ReadFile(location) + if err != nil { + return + } + + err = json.Unmarshal(fileContents, &unmarshaled) + + return +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/metadata_environment.go b/vendor/github.com/Azure/go-autorest/autorest/azure/metadata_environment.go new file mode 100644 index 000000000..507f9e95c --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/metadata_environment.go @@ -0,0 +1,245 @@ +package azure + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "net/http" + "strings" + + "github.com/Azure/go-autorest/autorest" +) + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +type audience []string + +type authentication struct { + LoginEndpoint string `json:"loginEndpoint"` + Audiences audience `json:"audiences"` +} + +type environmentMetadataInfo struct { + GalleryEndpoint string `json:"galleryEndpoint"` + GraphEndpoint string `json:"graphEndpoint"` + PortalEndpoint string `json:"portalEndpoint"` + Authentication authentication `json:"authentication"` +} + +// EnvironmentProperty represent property names that clients can override +type EnvironmentProperty string + +const ( + // EnvironmentName ... + EnvironmentName EnvironmentProperty = "name" + // EnvironmentManagementPortalURL .. + EnvironmentManagementPortalURL EnvironmentProperty = "managementPortalURL" + // EnvironmentPublishSettingsURL ... + EnvironmentPublishSettingsURL EnvironmentProperty = "publishSettingsURL" + // EnvironmentServiceManagementEndpoint ... + EnvironmentServiceManagementEndpoint EnvironmentProperty = "serviceManagementEndpoint" + // EnvironmentResourceManagerEndpoint ... + EnvironmentResourceManagerEndpoint EnvironmentProperty = "resourceManagerEndpoint" + // EnvironmentActiveDirectoryEndpoint ... + EnvironmentActiveDirectoryEndpoint EnvironmentProperty = "activeDirectoryEndpoint" + // EnvironmentGalleryEndpoint ... + EnvironmentGalleryEndpoint EnvironmentProperty = "galleryEndpoint" + // EnvironmentKeyVaultEndpoint ... + EnvironmentKeyVaultEndpoint EnvironmentProperty = "keyVaultEndpoint" + // EnvironmentGraphEndpoint ... + EnvironmentGraphEndpoint EnvironmentProperty = "graphEndpoint" + // EnvironmentServiceBusEndpoint ... + EnvironmentServiceBusEndpoint EnvironmentProperty = "serviceBusEndpoint" + // EnvironmentBatchManagementEndpoint ... 
+ EnvironmentBatchManagementEndpoint EnvironmentProperty = "batchManagementEndpoint" + // EnvironmentStorageEndpointSuffix ... + EnvironmentStorageEndpointSuffix EnvironmentProperty = "storageEndpointSuffix" + // EnvironmentSQLDatabaseDNSSuffix ... + EnvironmentSQLDatabaseDNSSuffix EnvironmentProperty = "sqlDatabaseDNSSuffix" + // EnvironmentTrafficManagerDNSSuffix ... + EnvironmentTrafficManagerDNSSuffix EnvironmentProperty = "trafficManagerDNSSuffix" + // EnvironmentKeyVaultDNSSuffix ... + EnvironmentKeyVaultDNSSuffix EnvironmentProperty = "keyVaultDNSSuffix" + // EnvironmentServiceBusEndpointSuffix ... + EnvironmentServiceBusEndpointSuffix EnvironmentProperty = "serviceBusEndpointSuffix" + // EnvironmentServiceManagementVMDNSSuffix ... + EnvironmentServiceManagementVMDNSSuffix EnvironmentProperty = "serviceManagementVMDNSSuffix" + // EnvironmentResourceManagerVMDNSSuffix ... + EnvironmentResourceManagerVMDNSSuffix EnvironmentProperty = "resourceManagerVMDNSSuffix" + // EnvironmentContainerRegistryDNSSuffix ... + EnvironmentContainerRegistryDNSSuffix EnvironmentProperty = "containerRegistryDNSSuffix" + // EnvironmentTokenAudience ... + EnvironmentTokenAudience EnvironmentProperty = "tokenAudience" +) + +// OverrideProperty represents property name and value that clients can override +type OverrideProperty struct { + Key EnvironmentProperty + Value string +} + +// EnvironmentFromURL loads an Environment from a URL +// This function is particularly useful in the Hybrid Cloud model, where one may define their own +// endpoints. +func EnvironmentFromURL(resourceManagerEndpoint string, properties ...OverrideProperty) (environment Environment, err error) { + var metadataEnvProperties environmentMetadataInfo + + if resourceManagerEndpoint == "" { + return environment, fmt.Errorf("Metadata resource manager endpoint is empty") + } + + if metadataEnvProperties, err = retrieveMetadataEnvironment(resourceManagerEndpoint); err != nil { + return environment, err + } + + // Give priority to user's override values + overrideProperties(&environment, properties) + + if environment.Name == "" { + environment.Name = "HybridEnvironment" + } + stampDNSSuffix := environment.StorageEndpointSuffix + if stampDNSSuffix == "" { + stampDNSSuffix = strings.TrimSuffix(strings.TrimPrefix(strings.Replace(resourceManagerEndpoint, strings.Split(resourceManagerEndpoint, ".")[0], "", 1), "."), "/") + environment.StorageEndpointSuffix = stampDNSSuffix + } + if environment.KeyVaultDNSSuffix == "" { + environment.KeyVaultDNSSuffix = fmt.Sprintf("%s.%s", "vault", stampDNSSuffix) + } + if environment.KeyVaultEndpoint == "" { + environment.KeyVaultEndpoint = fmt.Sprintf("%s%s", "https://", environment.KeyVaultDNSSuffix) + } + if environment.TokenAudience == "" { + environment.TokenAudience = metadataEnvProperties.Authentication.Audiences[0] + } + if environment.ActiveDirectoryEndpoint == "" { + environment.ActiveDirectoryEndpoint = metadataEnvProperties.Authentication.LoginEndpoint + } + if environment.ResourceManagerEndpoint == "" { + environment.ResourceManagerEndpoint = resourceManagerEndpoint + } + if environment.GalleryEndpoint == "" { + environment.GalleryEndpoint = metadataEnvProperties.GalleryEndpoint + } + if environment.GraphEndpoint == "" { + environment.GraphEndpoint = metadataEnvProperties.GraphEndpoint + } + + return environment, nil +} + +func overrideProperties(environment *Environment, properties []OverrideProperty) { + for _, property := range properties { + switch property.Key { + case EnvironmentName: + { + 
environment.Name = property.Value + } + case EnvironmentManagementPortalURL: + { + environment.ManagementPortalURL = property.Value + } + case EnvironmentPublishSettingsURL: + { + environment.PublishSettingsURL = property.Value + } + case EnvironmentServiceManagementEndpoint: + { + environment.ServiceManagementEndpoint = property.Value + } + case EnvironmentResourceManagerEndpoint: + { + environment.ResourceManagerEndpoint = property.Value + } + case EnvironmentActiveDirectoryEndpoint: + { + environment.ActiveDirectoryEndpoint = property.Value + } + case EnvironmentGalleryEndpoint: + { + environment.GalleryEndpoint = property.Value + } + case EnvironmentKeyVaultEndpoint: + { + environment.KeyVaultEndpoint = property.Value + } + case EnvironmentGraphEndpoint: + { + environment.GraphEndpoint = property.Value + } + case EnvironmentServiceBusEndpoint: + { + environment.ServiceBusEndpoint = property.Value + } + case EnvironmentBatchManagementEndpoint: + { + environment.BatchManagementEndpoint = property.Value + } + case EnvironmentStorageEndpointSuffix: + { + environment.StorageEndpointSuffix = property.Value + } + case EnvironmentSQLDatabaseDNSSuffix: + { + environment.SQLDatabaseDNSSuffix = property.Value + } + case EnvironmentTrafficManagerDNSSuffix: + { + environment.TrafficManagerDNSSuffix = property.Value + } + case EnvironmentKeyVaultDNSSuffix: + { + environment.KeyVaultDNSSuffix = property.Value + } + case EnvironmentServiceBusEndpointSuffix: + { + environment.ServiceBusEndpointSuffix = property.Value + } + case EnvironmentServiceManagementVMDNSSuffix: + { + environment.ServiceManagementVMDNSSuffix = property.Value + } + case EnvironmentResourceManagerVMDNSSuffix: + { + environment.ResourceManagerVMDNSSuffix = property.Value + } + case EnvironmentContainerRegistryDNSSuffix: + { + environment.ContainerRegistryDNSSuffix = property.Value + } + case EnvironmentTokenAudience: + { + environment.TokenAudience = property.Value + } + } + } +} + +func retrieveMetadataEnvironment(endpoint string) (environment environmentMetadataInfo, err error) { + client := autorest.NewClientWithUserAgent("") + managementEndpoint := fmt.Sprintf("%s%s", strings.TrimSuffix(endpoint, "/"), "/metadata/endpoints?api-version=1.0") + req, _ := http.NewRequest("GET", managementEndpoint, nil) + response, err := client.Do(req) + if err != nil { + return environment, err + } + defer response.Body.Close() + jsonResponse, err := ioutil.ReadAll(response.Body) + if err != nil { + return environment, err + } + err = json.Unmarshal(jsonResponse, &environment) + return environment, err +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go b/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go new file mode 100644 index 000000000..bd34f0ed5 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go @@ -0,0 +1,200 @@ +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
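For context on the environment helpers added in metadata_environment.go above: `EnvironmentFromURL` queries a Resource Manager metadata endpoint and lets callers override individual properties via `OverrideProperty`. A hedged caller-side sketch follows; the environment variable name, endpoint, and override values are illustrative assumptions, not part of this change:

```
package example

import (
	"os"

	"github.com/Azure/go-autorest/autorest/azure"
)

// resolveEnvironment is illustrative only and not part of this change.
func resolveEnvironment() (azure.Environment, error) {
	// Hybrid / Azure Stack: build the Environment from the stamp's Resource
	// Manager metadata endpoint, overriding selected properties.
	if rm := os.Getenv("ARM_ENDPOINT"); rm != "" { // hypothetical variable name
		return azure.EnvironmentFromURL(rm,
			azure.OverrideProperty{Key: azure.EnvironmentName, Value: "MyAzureStack"},
		)
	}
	// Otherwise resolve one of the well-known clouds by name.
	return azure.EnvironmentFromName("AZUREPUBLICCLOUD")
}
```

Note that `EnvironmentFromName` special-cases "AZURESTACKCLOUD" to read the file named by `AZURE_ENVIRONMENT_FILEPATH` via `EnvironmentFromFile`, so setting that variable and passing that name is an equivalent path.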
+
+package azure
+
+import (
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"strings"
+	"time"
+
+	"github.com/Azure/go-autorest/autorest"
+)
+
+// DoRetryWithRegistration tries to register the resource provider in case it is unregistered.
+// It also handles request retries.
+func DoRetryWithRegistration(client autorest.Client) autorest.SendDecorator {
+	return func(s autorest.Sender) autorest.Sender {
+		return autorest.SenderFunc(func(r *http.Request) (resp *http.Response, err error) {
+			rr := autorest.NewRetriableRequest(r)
+			for currentAttempt := 0; currentAttempt < client.RetryAttempts; currentAttempt++ {
+				err = rr.Prepare()
+				if err != nil {
+					return resp, err
+				}
+
+				resp, err = autorest.SendWithSender(s, rr.Request(),
+					autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...),
+				)
+				if err != nil {
+					return resp, err
+				}
+
+				if resp.StatusCode != http.StatusConflict || client.SkipResourceProviderRegistration {
+					return resp, err
+				}
+				var re RequestError
+				err = autorest.Respond(
+					resp,
+					autorest.ByUnmarshallingJSON(&re),
+				)
+				if err != nil {
+					return resp, err
+				}
+				err = re
+
+				if re.ServiceError != nil && re.ServiceError.Code == "MissingSubscriptionRegistration" {
+					regErr := register(client, r, re)
+					if regErr != nil {
+						return resp, fmt.Errorf("failed auto registering Resource Provider: %s. Original error: %s", regErr, err)
+					}
+				}
+			}
+			return resp, err
+		})
+	}
+}
+
+func getProvider(re RequestError) (string, error) {
+	if re.ServiceError != nil && len(re.ServiceError.Details) > 0 {
+		return re.ServiceError.Details[0]["target"].(string), nil
+	}
+	return "", errors.New("provider was not found in the response")
+}
+
+func register(client autorest.Client, originalReq *http.Request, re RequestError) error {
+	subID := getSubscription(originalReq.URL.Path)
+	if subID == "" {
+		return errors.New("missing parameter subscriptionID to register resource provider")
+	}
+	providerName, err := getProvider(re)
+	if err != nil {
+		return fmt.Errorf("missing parameter provider to register resource provider: %s", err)
+	}
+	newURL := url.URL{
+		Scheme: originalReq.URL.Scheme,
+		Host:   originalReq.URL.Host,
+	}
+
+	// taken from the resources SDK
+	// with almost identical code, these sections are easier to maintain
+	// It is also not a good idea to import the SDK here
+	// https://github.com/Azure/azure-sdk-for-go/blob/9f366792afa3e0ddaecdc860e793ba9d75e76c27/arm/resources/resources/providers.go#L252
+	pathParameters := map[string]interface{}{
+		"resourceProviderNamespace": autorest.Encode("path", providerName),
+		"subscriptionId":            autorest.Encode("path", subID),
+	}
+
+	const APIVersion = "2016-09-01"
+	queryParameters := map[string]interface{}{
+		"api-version": APIVersion,
+	}
+
+	preparer := autorest.CreatePreparer(
+		autorest.AsPost(),
+		autorest.WithBaseURL(newURL.String()),
+		autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}/register", pathParameters),
+		autorest.WithQueryParameters(queryParameters),
+	)
+
+	req, err := preparer.Prepare(&http.Request{})
+	if err != nil {
+		return err
+	}
+	req = req.WithContext(originalReq.Context())
+
+	resp, err := autorest.SendWithSender(client, req,
+		autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...),
+	)
+	if err != nil {
+		return err
+	}
+
+	type Provider struct {
+		RegistrationState *string `json:"registrationState,omitempty"`
+	}
+	var provider Provider
+
+	err = autorest.Respond(
+		resp,
+ WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&provider), + autorest.ByClosing(), + ) + if err != nil { + return err + } + + // poll for registered provisioning state + now := time.Now() + for err == nil && time.Since(now) < client.PollingDuration { + // taken from the resources SDK + // https://github.com/Azure/azure-sdk-for-go/blob/9f366792afa3e0ddaecdc860e793ba9d75e76c27/arm/resources/resources/providers.go#L45 + preparer := autorest.CreatePreparer( + autorest.AsGet(), + autorest.WithBaseURL(newURL.String()), + autorest.WithPathParameters("/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}", pathParameters), + autorest.WithQueryParameters(queryParameters), + ) + req, err = preparer.Prepare(&http.Request{}) + if err != nil { + return err + } + req = req.WithContext(originalReq.Context()) + + resp, err := autorest.SendWithSender(client, req, + autorest.DoRetryForStatusCodes(client.RetryAttempts, client.RetryDuration, autorest.StatusCodesForRetry...), + ) + if err != nil { + return err + } + + err = autorest.Respond( + resp, + WithErrorUnlessStatusCode(http.StatusOK), + autorest.ByUnmarshallingJSON(&provider), + autorest.ByClosing(), + ) + if err != nil { + return err + } + + if provider.RegistrationState != nil && + *provider.RegistrationState == "Registered" { + break + } + + delayed := autorest.DelayWithRetryAfter(resp, originalReq.Context().Done()) + if !delayed && !autorest.DelayForBackoff(client.PollingDelay, 0, originalReq.Context().Done()) { + return originalReq.Context().Err() + } + } + if !(time.Since(now) < client.PollingDuration) { + return errors.New("polling for resource provider registration has exceeded the polling duration") + } + return err +} + +func getSubscription(path string) string { + parts := strings.Split(path, "/") + for i, v := range parts { + if v == "subscriptions" && (i+1) < len(parts) { + return parts[i+1] + } + } + return "" +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/client.go b/vendor/github.com/Azure/go-autorest/autorest/client.go index b5f94b5c3..4e92dcad0 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/client.go +++ b/vendor/github.com/Azure/go-autorest/autorest/client.go @@ -1,5 +1,19 @@ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" "fmt" @@ -21,6 +35,9 @@ const ( // DefaultRetryAttempts is number of attempts for retry status codes (5xx). DefaultRetryAttempts = 3 + + // DefaultRetryDuration is the duration to wait between retries. 
+ DefaultRetryDuration = 30 * time.Second ) var ( @@ -33,8 +50,10 @@ var ( Version(), ) - statusCodesForRetry = []int{ + // StatusCodesForRetry are a defined group of status code for which the client will retry + StatusCodesForRetry = []int{ http.StatusRequestTimeout, // 408 + http.StatusTooManyRequests, // 429 http.StatusInternalServerError, // 500 http.StatusBadGateway, // 502 http.StatusServiceUnavailable, // 503 @@ -147,6 +166,9 @@ type Client struct { UserAgent string Jar http.CookieJar + + // Set to true to skip attempted registration of resource providers (false by default). + SkipResourceProviderRegistration bool } // NewClientWithUserAgent returns an instance of a Client with the UserAgent set to the passed @@ -156,9 +178,10 @@ func NewClientWithUserAgent(ua string) Client { PollingDelay: DefaultPollingDelay, PollingDuration: DefaultPollingDuration, RetryAttempts: DefaultRetryAttempts, - RetryDuration: 30 * time.Second, + RetryDuration: DefaultRetryDuration, UserAgent: defaultUserAgent, } + c.Sender = c.sender() c.AddToUserAgent(ua) return c } @@ -180,16 +203,22 @@ func (c Client) Do(r *http.Request) (*http.Response, error) { r, _ = Prepare(r, WithUserAgent(c.UserAgent)) } + // NOTE: c.WithInspection() must be last in the list so that it can inspect all preceding operations r, err := Prepare(r, - c.WithInspection(), - c.WithAuthorization()) + c.WithAuthorization(), + c.WithInspection()) if err != nil { - return nil, NewErrorWithError(err, "autorest/Client", "Do", nil, "Preparing request failed") + var resp *http.Response + if detErr, ok := err.(DetailedError); ok { + // if the authorization failed (e.g. invalid credentials) there will + // be a response associated with the error, be sure to return it. + resp = detErr.Response + } + return resp, NewErrorWithError(err, "autorest/Client", "Do", nil, "Preparing request failed") } - resp, err := SendWithSender(c.sender(), r, - DoRetryForStatusCodes(c.RetryAttempts, c.RetryDuration, statusCodesForRetry...)) - Respond(resp, - c.ByInspecting()) + + resp, err := SendWithSender(c.sender(), r) + Respond(resp, c.ByInspecting()) return resp, err } diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/date.go b/vendor/github.com/Azure/go-autorest/autorest/date/date.go index 80ca60e9b..c45710656 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/date/date.go +++ b/vendor/github.com/Azure/go-autorest/autorest/date/date.go @@ -5,6 +5,20 @@ time.Time types. And both convert to time.Time through a ToTime method. */ package date +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ import ( "fmt" "time" diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/time.go b/vendor/github.com/Azure/go-autorest/autorest/date/time.go index c1af62963..b453fad04 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/date/time.go +++ b/vendor/github.com/Azure/go-autorest/autorest/date/time.go @@ -1,5 +1,19 @@ package date +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "regexp" "time" diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/timerfc1123.go b/vendor/github.com/Azure/go-autorest/autorest/date/timerfc1123.go index 11995fb9f..48fb39ba9 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/date/timerfc1123.go +++ b/vendor/github.com/Azure/go-autorest/autorest/date/timerfc1123.go @@ -1,5 +1,19 @@ package date +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "errors" "time" diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/unixtime.go b/vendor/github.com/Azure/go-autorest/autorest/date/unixtime.go index e085c77ee..7073959b2 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/date/unixtime.go +++ b/vendor/github.com/Azure/go-autorest/autorest/date/unixtime.go @@ -1,5 +1,19 @@ package date +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" "encoding/binary" diff --git a/vendor/github.com/Azure/go-autorest/autorest/date/utility.go b/vendor/github.com/Azure/go-autorest/autorest/date/utility.go index 207b1a240..12addf0eb 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/date/utility.go +++ b/vendor/github.com/Azure/go-autorest/autorest/date/utility.go @@ -1,5 +1,19 @@ package date +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "strings" "time" diff --git a/vendor/github.com/Azure/go-autorest/autorest/error.go b/vendor/github.com/Azure/go-autorest/autorest/error.go index 4bcb8f27b..f724f3332 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/error.go +++ b/vendor/github.com/Azure/go-autorest/autorest/error.go @@ -1,5 +1,19 @@ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "fmt" "net/http" @@ -31,6 +45,9 @@ type DetailedError struct { // Service Error is the response body of failed API in bytes ServiceError []byte + + // Response is the response object that was returned during failure if applicable. + Response *http.Response } // NewError creates a new Error conforming object from the passed packageType, method, and @@ -67,6 +84,7 @@ func NewErrorWithError(original error, packageType string, method string, resp * Method: method, StatusCode: statusCode, Message: fmt.Sprintf(message, args...), + Response: resp, } } diff --git a/vendor/github.com/Azure/go-autorest/autorest/preparer.go b/vendor/github.com/Azure/go-autorest/autorest/preparer.go index afd114821..6d67bd733 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/preparer.go +++ b/vendor/github.com/Azure/go-autorest/autorest/preparer.go @@ -1,5 +1,19 @@ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" "encoding/json" @@ -13,8 +27,9 @@ import ( ) const ( - mimeTypeJSON = "application/json" - mimeTypeFormPost = "application/x-www-form-urlencoded" + mimeTypeJSON = "application/json" + mimeTypeOctetStream = "application/octet-stream" + mimeTypeFormPost = "application/x-www-form-urlencoded" headerAuthorization = "Authorization" headerContentType = "Content-Type" @@ -98,6 +113,28 @@ func WithHeader(header string, value string) PrepareDecorator { } } +// WithHeaders returns a PrepareDecorator that sets the specified HTTP headers of the http.Request to +// the passed value. It canonicalizes the passed headers name (via http.CanonicalHeaderKey) before +// adding them. 
+func WithHeaders(headers map[string]interface{}) PrepareDecorator { + h := ensureValueStrings(headers) + return func(p Preparer) Preparer { + return PreparerFunc(func(r *http.Request) (*http.Request, error) { + r, err := p.Prepare(r) + if err == nil { + if r.Header == nil { + r.Header = make(http.Header) + } + + for name, value := range h { + r.Header.Set(http.CanonicalHeaderKey(name), value) + } + } + return r, err + }) + } +} + // WithBearerAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose // value is "Bearer " followed by the supplied token. func WithBearerAuthorization(token string) PrepareDecorator { @@ -128,6 +165,11 @@ func AsJSON() PrepareDecorator { return AsContentType(mimeTypeJSON) } +// AsOctetStream returns a PrepareDecorator that adds the "application/octet-stream" Content-Type header. +func AsOctetStream() PrepareDecorator { + return AsContentType(mimeTypeOctetStream) +} + // WithMethod returns a PrepareDecorator that sets the HTTP method of the passed request. The // decorator does not validate that the passed method string is a known HTTP method. func WithMethod(method string) PrepareDecorator { @@ -201,6 +243,11 @@ func WithFormData(v url.Values) PrepareDecorator { r, err := p.Prepare(r) if err == nil { s := v.Encode() + + if r.Header == nil { + r.Header = make(http.Header) + } + r.Header.Set(http.CanonicalHeaderKey(headerContentType), mimeTypeFormPost) r.ContentLength = int64(len(s)) r.Body = ioutil.NopCloser(strings.NewReader(s)) } @@ -416,11 +463,16 @@ func WithQueryParameters(queryParameters map[string]interface{}) PrepareDecorato if r.URL == nil { return r, NewError("autorest", "WithQueryParameters", "Invoked with a nil URL") } + v := r.URL.Query() for key, value := range parameters { - v.Add(key, value) + d, err := url.QueryUnescape(value) + if err != nil { + return r, err + } + v.Add(key, d) } - r.URL.RawQuery = createQuery(v) + r.URL.RawQuery = v.Encode() } return r, err }) diff --git a/vendor/github.com/Azure/go-autorest/autorest/responder.go b/vendor/github.com/Azure/go-autorest/autorest/responder.go index 87f71e585..a908a0adb 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/responder.go +++ b/vendor/github.com/Azure/go-autorest/autorest/responder.go @@ -1,5 +1,19 @@ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" "encoding/json" diff --git a/vendor/github.com/Azure/go-autorest/autorest/retriablerequest.go b/vendor/github.com/Azure/go-autorest/autorest/retriablerequest.go new file mode 100644 index 000000000..fa11dbed7 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/retriablerequest.go @@ -0,0 +1,52 @@ +package autorest + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +import ( + "bytes" + "io" + "io/ioutil" + "net/http" +) + +// NewRetriableRequest returns a wrapper around an HTTP request that support retry logic. +func NewRetriableRequest(req *http.Request) *RetriableRequest { + return &RetriableRequest{req: req} +} + +// Request returns the wrapped HTTP request. +func (rr *RetriableRequest) Request() *http.Request { + return rr.req +} + +func (rr *RetriableRequest) prepareFromByteReader() (err error) { + // fall back to making a copy (only do this once) + b := []byte{} + if rr.req.ContentLength > 0 { + b = make([]byte, rr.req.ContentLength) + _, err = io.ReadFull(rr.req.Body, b) + if err != nil { + return err + } + } else { + b, err = ioutil.ReadAll(rr.req.Body) + if err != nil { + return err + } + } + rr.br = bytes.NewReader(b) + rr.req.Body = ioutil.NopCloser(rr.br) + return err +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/retriablerequest_1.7.go b/vendor/github.com/Azure/go-autorest/autorest/retriablerequest_1.7.go new file mode 100644 index 000000000..7143cc61b --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/retriablerequest_1.7.go @@ -0,0 +1,54 @@ +// +build !go1.8 + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package autorest + +import ( + "bytes" + "io/ioutil" + "net/http" +) + +// RetriableRequest provides facilities for retrying an HTTP request. +type RetriableRequest struct { + req *http.Request + br *bytes.Reader +} + +// Prepare signals that the request is about to be sent. 
+func (rr *RetriableRequest) Prepare() (err error) { + // preserve the request body; this is to support retry logic as + // the underlying transport will always close the reqeust body + if rr.req.Body != nil { + if rr.br != nil { + _, err = rr.br.Seek(0, 0 /*io.SeekStart*/) + rr.req.Body = ioutil.NopCloser(rr.br) + } + if err != nil { + return err + } + if rr.br == nil { + // fall back to making a copy (only do this once) + err = rr.prepareFromByteReader() + } + } + return err +} + +func removeRequestBody(req *http.Request) { + req.Body = nil + req.ContentLength = 0 +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/retriablerequest_1.8.go b/vendor/github.com/Azure/go-autorest/autorest/retriablerequest_1.8.go new file mode 100644 index 000000000..ae15c6bf9 --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/retriablerequest_1.8.go @@ -0,0 +1,66 @@ +// +build go1.8 + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package autorest + +import ( + "bytes" + "io" + "io/ioutil" + "net/http" +) + +// RetriableRequest provides facilities for retrying an HTTP request. +type RetriableRequest struct { + req *http.Request + rc io.ReadCloser + br *bytes.Reader +} + +// Prepare signals that the request is about to be sent. +func (rr *RetriableRequest) Prepare() (err error) { + // preserve the request body; this is to support retry logic as + // the underlying transport will always close the reqeust body + if rr.req.Body != nil { + if rr.rc != nil { + rr.req.Body = rr.rc + } else if rr.br != nil { + _, err = rr.br.Seek(0, io.SeekStart) + rr.req.Body = ioutil.NopCloser(rr.br) + } + if err != nil { + return err + } + if rr.req.GetBody != nil { + // this will allow us to preserve the body without having to + // make a copy. note we need to do this on each iteration + rr.rc, err = rr.req.GetBody() + if err != nil { + return err + } + } else if rr.br == nil { + // fall back to making a copy (only do this once) + err = rr.prepareFromByteReader() + } + } + return err +} + +func removeRequestBody(req *http.Request) { + req.Body = nil + req.GetBody = nil + req.ContentLength = 0 +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/sender.go b/vendor/github.com/Azure/go-autorest/autorest/sender.go index 9c0697815..cacbd8157 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/sender.go +++ b/vendor/github.com/Azure/go-autorest/autorest/sender.go @@ -1,12 +1,25 @@ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + import ( - "bytes" "fmt" - "io/ioutil" "log" "math" "net/http" + "strconv" "time" ) @@ -73,7 +86,7 @@ func SendWithSender(s Sender, r *http.Request, decorators ...SendDecorator) (*ht func AfterDelay(d time.Duration) SendDecorator { return func(s Sender) Sender { return SenderFunc(func(r *http.Request) (*http.Response, error) { - if !DelayForBackoff(d, 0, r.Cancel) { + if !DelayForBackoff(d, 0, r.Context().Done()) { return nil, fmt.Errorf("autorest: AfterDelay canceled before full delay") } return s.Do(r) @@ -152,7 +165,7 @@ func DoPollForStatusCodes(duration time.Duration, delay time.Duration, codes ... resp, err = s.Do(r) if err == nil && ResponseHasStatusCode(resp, codes...) { - r, err = NewPollingRequest(resp, r.Cancel) + r, err = NewPollingRequestWithContext(r.Context(), resp) for err == nil && ResponseHasStatusCode(resp, codes...) { Respond(resp, @@ -175,12 +188,19 @@ func DoPollForStatusCodes(duration time.Duration, delay time.Duration, codes ... func DoRetryForAttempts(attempts int, backoff time.Duration) SendDecorator { return func(s Sender) Sender { return SenderFunc(func(r *http.Request) (resp *http.Response, err error) { + rr := NewRetriableRequest(r) for attempt := 0; attempt < attempts; attempt++ { - resp, err = s.Do(r) + err = rr.Prepare() + if err != nil { + return resp, err + } + resp, err = s.Do(rr.Request()) if err == nil { return resp, err } - DelayForBackoff(backoff, attempt, r.Cancel) + if !DelayForBackoff(backoff, attempt, r.Context().Done()) { + return nil, r.Context().Err() + } } return resp, err }) @@ -194,29 +214,57 @@ func DoRetryForAttempts(attempts int, backoff time.Duration) SendDecorator { func DoRetryForStatusCodes(attempts int, backoff time.Duration, codes ...int) SendDecorator { return func(s Sender) Sender { return SenderFunc(func(r *http.Request) (resp *http.Response, err error) { - b := []byte{} - if r.Body != nil { - b, err = ioutil.ReadAll(r.Body) + rr := NewRetriableRequest(r) + // Increment to add the first call (attempts denotes number of retries) + attempts++ + for attempt := 0; attempt < attempts; { + err = rr.Prepare() if err != nil { return resp, err } - } - - // Increment to add the first call (attempts denotes number of retries) - attempts++ - for attempt := 0; attempt < attempts; attempt++ { - r.Body = ioutil.NopCloser(bytes.NewBuffer(b)) - resp, err = s.Do(r) - if err != nil || !ResponseHasStatusCode(resp, codes...) { + resp, err = s.Do(rr.Request()) + // if the error isn't temporary don't bother retrying + if err != nil && !IsTemporaryNetworkError(err) { + return nil, err + } + // we want to retry if err is not nil (e.g. transient network failure). note that for failed authentication + // resp and err will both have a value, so in this case we don't want to retry as it will never succeed. + if err == nil && !ResponseHasStatusCode(resp, codes...) 
|| IsTokenRefreshError(err) { return resp, err } - DelayForBackoff(backoff, attempt, r.Cancel) + delayed := DelayWithRetryAfter(resp, r.Context().Done()) + if !delayed && !DelayForBackoff(backoff, attempt, r.Context().Done()) { + return nil, r.Context().Err() + } + // don't count a 429 against the number of attempts + // so that we continue to retry until it succeeds + if resp == nil || resp.StatusCode != http.StatusTooManyRequests { + attempt++ + } } return resp, err }) } } +// DelayWithRetryAfter invokes time.After for the duration specified in the "Retry-After" header in +// responses with status code 429 +func DelayWithRetryAfter(resp *http.Response, cancel <-chan struct{}) bool { + if resp == nil { + return false + } + retryAfter, _ := strconv.Atoi(resp.Header.Get("Retry-After")) + if resp.StatusCode == http.StatusTooManyRequests && retryAfter > 0 { + select { + case <-time.After(time.Duration(retryAfter) * time.Second): + return true + case <-cancel: + return false + } + } + return false +} + // DoRetryForDuration returns a SendDecorator that retries the request until the total time is equal // to or greater than the specified duration, exponentially backing off between requests using the // supplied backoff time.Duration (which may be zero). Retrying may be canceled by closing the @@ -224,13 +272,20 @@ func DoRetryForStatusCodes(attempts int, backoff time.Duration, codes ...int) Se func DoRetryForDuration(d time.Duration, backoff time.Duration) SendDecorator { return func(s Sender) Sender { return SenderFunc(func(r *http.Request) (resp *http.Response, err error) { + rr := NewRetriableRequest(r) end := time.Now().Add(d) for attempt := 0; time.Now().Before(end); attempt++ { - resp, err = s.Do(r) + err = rr.Prepare() + if err != nil { + return resp, err + } + resp, err = s.Do(rr.Request()) if err == nil { return resp, err } - DelayForBackoff(backoff, attempt, r.Cancel) + if !DelayForBackoff(backoff, attempt, r.Context().Done()) { + return nil, r.Context().Err() + } } return resp, err }) diff --git a/vendor/github.com/Azure/go-autorest/autorest/to/convert.go b/vendor/github.com/Azure/go-autorest/autorest/to/convert.go index 7b180b866..fdda2ce1a 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/to/convert.go +++ b/vendor/github.com/Azure/go-autorest/autorest/to/convert.go @@ -3,6 +3,20 @@ Package to provides helpers to ease working with pointer values of marshalled st */ package to +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + // String returns a string value for the passed string pointer. It returns the empty string if the // pointer is nil. 
func String(s *string) string { diff --git a/vendor/github.com/Azure/go-autorest/autorest/utility.go b/vendor/github.com/Azure/go-autorest/autorest/utility.go index 78067148b..bfddd90b5 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/utility.go +++ b/vendor/github.com/Azure/go-autorest/autorest/utility.go @@ -1,15 +1,32 @@ package autorest +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "bytes" "encoding/json" "encoding/xml" "fmt" "io" + "net" + "net/http" "net/url" "reflect" - "sort" "strings" + + "github.com/Azure/go-autorest/autorest/adal" ) // EncodedAs is a series of constants specifying various data encodings @@ -123,13 +140,38 @@ func MapToValues(m map[string]interface{}) url.Values { return v } -// String method converts interface v to string. If interface is a list, it -// joins list elements using separator. -func String(v interface{}, sep ...string) string { - if len(sep) > 0 { - return ensureValueString(strings.Join(v.([]string), sep[0])) +// AsStringSlice method converts interface{} to []string. This expects a +//that the parameter passed to be a slice or array of a type that has the underlying +//type a string. +func AsStringSlice(s interface{}) ([]string, error) { + v := reflect.ValueOf(s) + if v.Kind() != reflect.Slice && v.Kind() != reflect.Array { + return nil, NewError("autorest", "AsStringSlice", "the value's type is not an array.") } - return ensureValueString(v) + stringSlice := make([]string, 0, v.Len()) + + for i := 0; i < v.Len(); i++ { + stringSlice = append(stringSlice, v.Index(i).String()) + } + return stringSlice, nil +} + +// String method converts interface v to string. If interface is a list, it +// joins list elements using the seperator. Note that only sep[0] will be used for +// joining if any separator is specified. +func String(v interface{}, sep ...string) string { + if len(sep) == 0 { + return ensureValueString(v) + } + stringSlice, ok := v.([]string) + if ok == false { + var err error + stringSlice, err = AsStringSlice(v) + if err != nil { + panic(fmt.Sprintf("autorest: Couldn't convert value to a string %s.", err)) + } + } + return ensureValueString(strings.Join(stringSlice, sep[0])) } // Encode method encodes url path and query parameters. @@ -153,26 +195,34 @@ func queryEscape(s string) string { return url.QueryEscape(s) } -// This method is same as Encode() method of "net/url" go package, -// except it does not encode the query parameters because they -// already come encoded. It formats values map in query format (bar=foo&a=b). 
-func createQuery(v url.Values) string { - var buf bytes.Buffer - keys := make([]string, 0, len(v)) - for k := range v { - keys = append(keys, k) - } - sort.Strings(keys) - for _, k := range keys { - vs := v[k] - prefix := url.QueryEscape(k) + "=" - for _, v := range vs { - if buf.Len() > 0 { - buf.WriteByte('&') - } - buf.WriteString(prefix) - buf.WriteString(v) - } - } - return buf.String() +// ChangeToGet turns the specified http.Request into a GET (it assumes it wasn't). +// This is mainly useful for long-running operations that use the Azure-AsyncOperation +// header, so we change the initial PUT into a GET to retrieve the final result. +func ChangeToGet(req *http.Request) *http.Request { + req.Method = "GET" + req.Body = nil + req.ContentLength = 0 + req.Header.Del("Content-Length") + return req +} + +// IsTokenRefreshError returns true if the specified error implements the TokenRefreshError +// interface. If err is a DetailedError it will walk the chain of Original errors. +func IsTokenRefreshError(err error) bool { + if _, ok := err.(adal.TokenRefreshError); ok { + return true + } + if de, ok := err.(DetailedError); ok { + return IsTokenRefreshError(de.Original) + } + return false +} + +// IsTemporaryNetworkError returns true if the specified error is a temporary network error or false +// if it's not. If the error doesn't implement the net.Error interface the return value is true. +func IsTemporaryNetworkError(err error) bool { + if netErr, ok := err.(net.Error); !ok || (ok && netErr.Temporary()) { + return true + } + return false } diff --git a/vendor/github.com/Azure/go-autorest/autorest/validation/error.go b/vendor/github.com/Azure/go-autorest/autorest/validation/error.go new file mode 100644 index 000000000..fed156dbf --- /dev/null +++ b/vendor/github.com/Azure/go-autorest/autorest/validation/error.go @@ -0,0 +1,48 @@ +package validation + +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +import ( + "fmt" +) + +// Error is the type that's returned when the validation of an APIs arguments constraints fails. +type Error struct { + // PackageType is the package type of the object emitting the error. For types, the value + // matches that produced the the '%T' format specifier of the fmt package. For other elements, + // such as functions, it is just the package name (e.g., "autorest"). + PackageType string + + // Method is the name of the method raising the error. + Method string + + // Message is the error message. + Message string +} + +// Error returns a string containing the details of the validation failure. +func (e Error) Error() string { + return fmt.Sprintf("%s#%s: Invalid input: %s", e.PackageType, e.Method, e.Message) +} + +// NewError creates a new Error object with the specified parameters. +// message is treated as a format string to which the optional args apply. 
+func NewError(packageType string, method string, message string, args ...interface{}) Error { + return Error{ + PackageType: packageType, + Method: method, + Message: fmt.Sprintf(message, args...), + } +} diff --git a/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go b/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go index d7b0eadc5..ae987f8fa 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go +++ b/vendor/github.com/Azure/go-autorest/autorest/validation/validation.go @@ -3,6 +3,20 @@ Package validation provides methods for validating parameter value using reflect */ package validation +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + import ( "fmt" "reflect" @@ -91,15 +105,12 @@ func validateStruct(x reflect.Value, v Constraint, name ...string) error { return createError(x, v, fmt.Sprintf("field %q doesn't exist", v.Target)) } - if err := Validate([]Validation{ + return Validate([]Validation{ { TargetValue: getInterfaceValue(f), Constraints: []Constraint{v}, }, - }); err != nil { - return err - } - return nil + }) } func validatePtr(x reflect.Value, v Constraint) error { @@ -125,29 +136,29 @@ func validatePtr(x reflect.Value, v Constraint) error { func validateInt(x reflect.Value, v Constraint) error { i := x.Int() - r, ok := v.Rule.(int) + r, ok := toInt64(v.Rule) if !ok { return createError(x, v, fmt.Sprintf("rule must be integer value for %v constraint; got: %v", v.Name, v.Rule)) } switch v.Name { case MultipleOf: - if i%int64(r) != 0 { + if i%r != 0 { return createError(x, v, fmt.Sprintf("value must be a multiple of %v", r)) } case ExclusiveMinimum: - if i <= int64(r) { + if i <= r { return createError(x, v, fmt.Sprintf("value must be greater than %v", r)) } case ExclusiveMaximum: - if i >= int64(r) { + if i >= r { return createError(x, v, fmt.Sprintf("value must be less than %v", r)) } case InclusiveMinimum: - if i < int64(r) { + if i < r { return createError(x, v, fmt.Sprintf("value must be greater than or equal to %v", r)) } case InclusiveMaximum: - if i > int64(r) { + if i > r { return createError(x, v, fmt.Sprintf("value must be less than or equal to %v", r)) } default: @@ -205,14 +216,14 @@ func validateString(x reflect.Value, v Constraint) error { return createError(x, v, fmt.Sprintf("rule must be integer value for %v constraint; got: %v", v.Name, v.Rule)) } if len(s) > v.Rule.(int) { - return createError(x, v, fmt.Sprintf("value length must be less than %v", v.Rule)) + return createError(x, v, fmt.Sprintf("value length must be less than or equal to %v", v.Rule)) } case MinLength: if _, ok := v.Rule.(int); !ok { return createError(x, v, fmt.Sprintf("rule must be integer value for %v constraint; got: %v", v.Name, v.Rule)) } if len(s) < v.Rule.(int) { - return createError(x, v, fmt.Sprintf("value length must be greater than %v", v.Rule)) + return createError(x, v, fmt.Sprintf("value length must be greater than or equal to %v", v.Rule)) } case ReadOnly: if 
len(s) > 0 { @@ -273,6 +284,17 @@ func validateArrayMap(x reflect.Value, v Constraint) error { if x.Len() != 0 { return createError(x, v, "readonly parameter; must send as nil or empty in request") } + case Pattern: + reg, err := regexp.Compile(v.Rule.(string)) + if err != nil { + return createError(x, v, err.Error()) + } + keys := x.MapKeys() + for _, k := range keys { + if !reg.MatchString(k.String()) { + return createError(k, v, fmt.Sprintf("map key doesn't match pattern %v", v.Rule)) + } + } default: return createError(x, v, fmt.Sprintf("constraint %v is not applicable to array, slice and map type", v.Name)) } @@ -366,8 +388,21 @@ func createError(x reflect.Value, v Constraint, err string) error { v.Target, v.Name, getInterfaceValue(x), err) } +func toInt64(v interface{}) (int64, bool) { + if i64, ok := v.(int64); ok { + return i64, true + } + // older generators emit max constants as int, so if int64 fails fall back to int + if i32, ok := v.(int); ok { + return int64(i32), true + } + return 0, false +} + // NewErrorWithValidationError appends package type and method name in // validation error. +// +// Deprecated: Please use validation.NewError() instead. func NewErrorWithValidationError(err error, packageType, method string) error { - return fmt.Errorf("%s#%s: Invalid input: %v", packageType, method, err) + return NewError(packageType, method, err.Error()) } diff --git a/vendor/github.com/Azure/go-autorest/autorest/version.go b/vendor/github.com/Azure/go-autorest/autorest/version.go index a222e8efa..e32cd68fe 100644 --- a/vendor/github.com/Azure/go-autorest/autorest/version.go +++ b/vendor/github.com/Azure/go-autorest/autorest/version.go @@ -1,35 +1,20 @@ package autorest -import ( - "bytes" - "fmt" - "strings" - "sync" -) - -const ( - major = 8 - minor = 0 - patch = 0 - tag = "" -) - -var once sync.Once -var version string +// Copyright 2017 Microsoft Corporation +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. // Version returns the semantic version (see http://semver.org). func Version() string { - once.Do(func() { - semver := fmt.Sprintf("%d.%d.%d", major, minor, patch) - verBuilder := bytes.NewBufferString(semver) - if tag != "" && tag != "-" { - updated := strings.TrimPrefix(tag, "-") - _, err := verBuilder.WriteString("-" + updated) - if err == nil { - verBuilder = bytes.NewBufferString(semver) - } - } - version = verBuilder.String() - }) - return version + return "v10.12.0" } diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/LICENSE b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/LICENSE new file mode 100644 index 000000000..4ae922683 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/LICENSE @@ -0,0 +1,7 @@ +Copyright 2017 NAVER BUSINESS PLATFORM Corp. 
+ +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/NOTICE b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/NOTICE new file mode 100644 index 000000000..f495434a4 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/NOTICE @@ -0,0 +1,26 @@ +ncp-sdk-go + +This project contains subcomponents with separate copyright notices and license terms. +Your use of the source code for these subcomponents is subject to the terms and conditions of the following licenses. + +======================================================================= +OAuth 1.0 Library for Go (https://github.com/mrjones/oauth/) +======================================================================= + +Copyright (C) 2013 Matthew R. Jones + +Permission is hereby granted, free of charge, to any person obtaining +a copy of this software and associated documentation files (the "Software"), +to deal in the Software without restriction, including without limitation +the rights to use, copy, modify, merge, publish, distribute, sublicense, +and/or sell copies of the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included +in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, +INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR +PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE +FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, +ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
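The go-autorest sender changes vendored above replace the deprecated `r.Cancel` channel with the request context and route every retrying decorator through the new `RetriableRequest` type, so request bodies survive multiple attempts and 429 responses are throttled via the `Retry-After` header. As a rough illustration of how a caller could exercise these decorators — the URL, payload, timeout, and status-code list below are placeholders for illustration only and are not part of this changeset:

```
package main

import (
	"context"
	"fmt"
	"net/http"
	"strings"
	"time"

	"github.com/Azure/go-autorest/autorest"
)

func main() {
	// A request with a body: DoRetryForStatusCodes wraps it in a RetriableRequest,
	// so the body is re-prepared before every attempt instead of being consumed
	// by the first try.
	req, err := http.NewRequest(http.MethodPut, "https://example.com/resource",
		strings.NewReader(`{"name":"demo"}`)) // placeholder URL and payload
	if err != nil {
		panic(err)
	}

	// Retries are now canceled through the request context rather than r.Cancel.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	req = req.WithContext(ctx)

	resp, err := autorest.SendWithSender(
		&http.Client{}, // *http.Client satisfies the autorest.Sender interface
		req,
		// Retry up to 3 times on these codes; a 429 with Retry-After is honored
		// by DelayWithRetryAfter and does not count against the attempt budget.
		autorest.DoRetryForStatusCodes(3, 2*time.Second,
			http.StatusTooManyRequests,
			http.StatusInternalServerError,
			http.StatusServiceUnavailable,
		),
	)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```

On Go 1.8+ the body is replayed through `req.GetBody`; on older toolchains the `!go1.8` build of `RetriableRequest` falls back to buffering the body once, as shown in the two build-tagged files above.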
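Similarly, the validation changes above let integer rules be supplied as either `int64` or plain `int` (via the new `toInt64` fallback), extend the `Pattern` constraint to map keys, and introduce `NewError` as the replacement for the deprecated `NewErrorWithValidationError`. A minimal sketch of how those code paths might be hit from calling code — the parameter names, rule values, and sample data are invented for illustration, and the exact error text is produced by the package's `createError`:

```
package main

import (
	"fmt"

	"github.com/Azure/go-autorest/autorest/validation"
)

func main() {
	pageSize := 5000                                      // violates the maximum below
	tags := map[string]string{"env": "dev", "Bad Key": "x"} // key violates the pattern below

	err := validation.Validate([]validation.Validation{
		{
			// Exercises the toInt64 fallback: the rule may be an int64 or,
			// with the new fallback, a plain int emitted by older generators.
			TargetValue: pageSize,
			Constraints: []validation.Constraint{
				{Target: "pageSize", Name: validation.InclusiveMaximum, Rule: int64(1000)},
			},
		},
		{
			// Exercises the new Pattern handling for map keys.
			TargetValue: tags,
			Constraints: []validation.Constraint{
				{Target: "tags", Name: validation.Pattern, Rule: `^[a-z]+$`},
			},
		},
	})
	if err != nil {
		// Validate returns the first violation it encounters.
		fmt.Println("validation failed:", err)
	}
}
```

Note that the "less than"/"greater than" wording fixes in `validateString` also change the `MaxLength`/`MinLength` error messages to say "or equal to", which matches how those constraints are actually evaluated.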
diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/commonResponse.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/commonResponse.go new file mode 100644 index 000000000..33f12f983 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/commonResponse.go @@ -0,0 +1,14 @@ +package common + +import "encoding/xml" + +type CommonResponse struct { + RequestID string `xml:"requestId"` + ReturnCode int `xml:"returnCode"` + ReturnMessage string `xml:"returnMessage"` +} + +type ResponseError struct { + ResponseError xml.Name `xml:"responseError"` + CommonResponse +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/commoncode.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/commoncode.go new file mode 100644 index 000000000..aff8bcf98 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/commoncode.go @@ -0,0 +1,10 @@ +package common + +type CommonCode struct { + CodeKind string `xml:"codeKind"` + DetailCategorizeCode string `xml:"detailCategorizeCode"` + Code string `xml:"code"` + CodeName string `xml:"codeName"` + CodeOrder int `xml:"codeOrder"` + JavaConstantCode string `xml:"javaConstantCode"` +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/parseErrorResponse.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/parseErrorResponse.go new file mode 100644 index 000000000..10c76f58c --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/parseErrorResponse.go @@ -0,0 +1,20 @@ +package common + +import ( + "encoding/xml" + "strings" +) + +func ParseErrorResponse(bytes []byte) (*ResponseError, error) { + responseError := ResponseError{} + + if err := xml.Unmarshal([]byte(bytes), &responseError); err != nil { + return nil, err + } + + if responseError.ReturnMessage != "" { + responseError.ReturnMessage = strings.TrimSpace(responseError.ReturnMessage) + } + + return &responseError, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/region.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/region.go new file mode 100644 index 000000000..eaa055f6e --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/region.go @@ -0,0 +1,7 @@ +package common + +type Region struct { + RegionNo string `xml:"regionNo"` + RegionCode string `xml:"regionCode"` + RegionName string `xml:"regionName"` +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/zone.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/zone.go new file mode 100644 index 000000000..f2ec78381 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/common/zone.go @@ -0,0 +1,7 @@ +package common + +type Zone struct { + ZoneNo string `xml:"zoneNo"` + ZoneName string `xml:"zoneName"` + ZoneDescription string `xml:"zoneDescription"` +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/oauth/oauth.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/oauth/oauth.go new file mode 100644 index 000000000..c6b917b4c --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/oauth/oauth.go @@ -0,0 +1,370 @@ +package oauth + +import ( + "crypto" + "crypto/hmac" + _ "crypto/sha1" + "encoding/base64" + "fmt" + "math/rand" + "net/url" + "os" + "sort" + "strconv" + "sync" + "time" +) + +const ( + OAUTH_VERSION = "1.0" + SIGNATURE_METHOD_HMAC = "HMAC-" + + HTTP_AUTH_HEADER = "Authorization" + OAUTH_HEADER = "OAuth " + CONSUMER_KEY_PARAM = "oauth_consumer_key" + 
NONCE_PARAM = "oauth_nonce" + SIGNATURE_METHOD_PARAM = "oauth_signature_method" + SIGNATURE_PARAM = "oauth_signature" + TIMESTAMP_PARAM = "oauth_timestamp" + TOKEN_PARAM = "oauth_token" + TOKEN_SECRET_PARAM = "oauth_token_secret" + VERSION_PARAM = "oauth_version" +) + +var HASH_METHOD_MAP = map[crypto.Hash]string{ + crypto.SHA1: "SHA1", + crypto.SHA256: "SHA256", +} + +// Creates a new Consumer instance, with a HMAC-SHA1 signer +func NewConsumer(consumerKey string, consumerSecret string, requestMethod string, requestURL string) *Consumer { + clock := &defaultClock{} + consumer := &Consumer{ + consumerKey: consumerKey, + consumerSecret: consumerSecret, + requestMethod: requestMethod, + requestURL: requestURL, + clock: clock, + nonceGenerator: newLockedNonceGenerator(clock), + + AdditionalParams: make(map[string]string), + } + + consumer.signer = &HMACSigner{ + consumerSecret: consumerSecret, + hashFunc: crypto.SHA1, + } + + return consumer +} + +// lockedNonceGenerator wraps a non-reentrant random number generator with alock +type lockedNonceGenerator struct { + nonceGenerator nonceGenerator + lock sync.Mutex +} + +func newLockedNonceGenerator(c clock) *lockedNonceGenerator { + return &lockedNonceGenerator{ + nonceGenerator: rand.New(rand.NewSource(c.Nanos())), + } +} + +func (n *lockedNonceGenerator) Int63() int64 { + n.lock.Lock() + r := n.nonceGenerator.Int63() + n.lock.Unlock() + return r +} + +type clock interface { + Seconds() int64 + Nanos() int64 +} + +type nonceGenerator interface { + Int63() int64 +} + +type signer interface { + Sign(message string) (string, error) + Verify(message string, signature string) error + SignatureMethod() string + HashFunc() crypto.Hash + Debug(enabled bool) +} + +type defaultClock struct{} + +func (*defaultClock) Seconds() int64 { + return time.Now().Unix() +} + +func (*defaultClock) Nanos() int64 { + return time.Now().UnixNano() +} + +type Consumer struct { + AdditionalParams map[string]string + + // The rest of this class is configured via the NewConsumer function. 
+ consumerKey string + + consumerSecret string + + requestMethod string + + requestURL string + + debug bool + + // Private seams for mocking dependencies when testing + clock clock + // Seeded generators are not reentrant + nonceGenerator nonceGenerator + signer signer +} + +type HMACSigner struct { + consumerSecret string + hashFunc crypto.Hash + debug bool +} + +func (s *HMACSigner) Debug(enabled bool) { + s.debug = enabled +} + +func (s *HMACSigner) Sign(message string) (string, error) { + key := escape(s.consumerSecret) + if s.debug { + fmt.Println("Signing:", message) + fmt.Println("Key:", key) + } + + h := hmac.New(s.HashFunc().New, []byte(key+"&")) + h.Write([]byte(message)) + rawSignature := h.Sum(nil) + + base64signature := base64.StdEncoding.EncodeToString(rawSignature) + if s.debug { + fmt.Println("Base64 signature:", base64signature) + } + return base64signature, nil +} + +func (s *HMACSigner) Verify(message string, signature string) error { + if s.debug { + fmt.Println("Verifying Base64 signature:", signature) + } + validSignature, err := s.Sign(message) + if err != nil { + return err + } + + if validSignature != signature { + decodedSigniture, _ := url.QueryUnescape(signature) + if validSignature != decodedSigniture { + return fmt.Errorf("signature did not match") + } + } + + return nil +} + +func (s *HMACSigner) SignatureMethod() string { + return SIGNATURE_METHOD_HMAC + HASH_METHOD_MAP[s.HashFunc()] +} + +func (s *HMACSigner) HashFunc() crypto.Hash { + return s.hashFunc +} + +func escape(s string) string { + t := make([]byte, 0, 3*len(s)) + for i := 0; i < len(s); i++ { + c := s[i] + if isEscapable(c) { + t = append(t, '%') + t = append(t, "0123456789ABCDEF"[c>>4]) + t = append(t, "0123456789ABCDEF"[c&15]) + } else { + t = append(t, s[i]) + } + } + return string(t) +} + +func isEscapable(b byte) bool { + return !('A' <= b && b <= 'Z' || 'a' <= b && b <= 'z' || '0' <= b && b <= '9' || b == '-' || b == '.' || b == '_' || b == '~') +} + +func (c *Consumer) Debug(enabled bool) { + c.debug = enabled + c.signer.Debug(enabled) +} + +func (c *Consumer) GetRequestUrl() (loginUrl string, err error) { + if os.Getenv("GO_ENV") == "development" { + c.AdditionalParams["approachKey"] = c.consumerKey + c.AdditionalParams["secretKey"] = c.consumerSecret + } + c.AdditionalParams["responseFormatType"] = "xml" + + params := c.baseParams(c.consumerKey, c.AdditionalParams) + + if c.debug { + fmt.Println("params:", params) + } + + req := &request{ + method: c.requestMethod, + url: c.requestURL, + oauthParams: params, + } + + signature, err := c.signRequest(req) + if err != nil { + return "", err + } + + result := req.url + "?" 
+ for pos, key := range req.oauthParams.Keys() { + for innerPos, value := range req.oauthParams.Get(key) { + if pos+innerPos != 0 { + result += "&" + } + result += fmt.Sprintf("%s=%s", key, value) + } + } + + result += fmt.Sprintf("&%s=%s", SIGNATURE_PARAM, escape(signature)) + + if c.debug { + fmt.Println("req: ", result) + } + + return result, nil +} + +func (c *Consumer) baseParams(consumerKey string, additionalParams map[string]string) *OrderedParams { + params := NewOrderedParams() + params.Add(VERSION_PARAM, OAUTH_VERSION) + params.Add(SIGNATURE_METHOD_PARAM, c.signer.SignatureMethod()) + params.Add(TIMESTAMP_PARAM, strconv.FormatInt(c.clock.Seconds(), 10)) + params.Add(NONCE_PARAM, strconv.FormatInt(c.nonceGenerator.Int63(), 10)) + params.Add(CONSUMER_KEY_PARAM, consumerKey) + + for key, value := range additionalParams { + params.Add(key, value) + } + + return params +} + +func (c *Consumer) signRequest(req *request) (string, error) { + baseString := c.requestString(req.method, req.url, req.oauthParams) + + if c.debug { + fmt.Println("baseString: ", baseString) + } + + signature, err := c.signer.Sign(baseString) + if err != nil { + return "", err + } + + return signature, nil +} + +func (c *Consumer) requestString(method string, url string, params *OrderedParams) string { + result := method + "&" + escape(url) + for pos, key := range params.Keys() { + for innerPos, value := range params.Get(key) { + if pos+innerPos == 0 { + result += "&" + } else { + result += escape("&") + } + result += escape(fmt.Sprintf("%s=%s", key, value)) + } + } + return result +} + +type request struct { + method string + url string + oauthParams *OrderedParams + userParams map[string]string +} + +// +// String Sorting helpers +// + +type ByValue []string + +func (a ByValue) Len() int { + return len(a) +} + +func (a ByValue) Swap(i, j int) { + a[i], a[j] = a[j], a[i] +} + +func (a ByValue) Less(i, j int) bool { + return a[i] < a[j] +} + +// +// ORDERED PARAMS +// + +type OrderedParams struct { + allParams map[string][]string + keyOrdering []string +} + +func NewOrderedParams() *OrderedParams { + return &OrderedParams{ + allParams: make(map[string][]string), + keyOrdering: make([]string, 0), + } +} + +func (o *OrderedParams) Get(key string) []string { + sort.Sort(ByValue(o.allParams[key])) + return o.allParams[key] +} + +func (o *OrderedParams) Keys() []string { + sort.Sort(o) + return o.keyOrdering +} + +func (o *OrderedParams) Add(key, value string) { + o.AddUnescaped(key, escape(value)) +} + +func (o *OrderedParams) AddUnescaped(key, value string) { + if _, exists := o.allParams[key]; !exists { + o.keyOrdering = append(o.keyOrdering, key) + o.allParams[key] = make([]string, 1) + o.allParams[key][0] = value + } else { + o.allParams[key] = append(o.allParams[key], value) + } +} + +func (o *OrderedParams) Len() int { + return len(o.keyOrdering) +} + +func (o *OrderedParams) Less(i int, j int) bool { + return o.keyOrdering[i] < o.keyOrdering[j] +} + +func (o *OrderedParams) Swap(i int, j int) { + o.keyOrdering[i], o.keyOrdering[j] = o.keyOrdering[j], o.keyOrdering[i] +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/request/request.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/request/request.go new file mode 100644 index 000000000..2f0d77d14 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/request/request.go @@ -0,0 +1,38 @@ +package request + +import ( + "io/ioutil" + "net/http" + + "github.com/NaverCloudPlatform/ncloud-sdk-go/oauth" +) + +// NewRequest is 
http request with oauth +func NewRequest(accessKey string, secretKey string, method string, url string, params map[string]string) ([]byte, *http.Response, error) { + c := oauth.NewConsumer(accessKey, secretKey, method, url) + + for k, v := range params { + c.AdditionalParams[k] = v + } + + reqURL, err := c.GetRequestUrl() + if err != nil { + return nil, nil, err + } + + req, err := http.NewRequest(method, reqURL, nil) + if err != nil { + return nil, nil, err + } + + client := &http.Client{} + resp, err := client.Do(req) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + bytes, _ := ioutil.ReadAll(resp.Body) + + return bytes, resp, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/connection.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/connection.go new file mode 100644 index 000000000..27f8a6197 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/connection.go @@ -0,0 +1,21 @@ +package sdk + +import ( + "os" +) + +// NewConnection create connection for server api +func NewConnection(accessKey string, secretKey string) *Conn { + conn := &Conn{ + accessKey: accessKey, + secretKey: secretKey, + apiURL: "https://api.ncloud.com/", + } + + // for other phase(dev, test, beta ...) test + if os.Getenv("NCLOUD_API_GW") != "" { + conn.apiURL = os.Getenv("NCLOUD_API_GW") + } + + return conn +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createBlockStorageInstance.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createBlockStorageInstance.go new file mode 100644 index 000000000..3fd8e61b7 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createBlockStorageInstance.go @@ -0,0 +1,87 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processCreateBlockStorageInstanceParams(reqParams *RequestBlockStorageInstance) (map[string]string, error) { + params := make(map[string]string) + + if reqParams.BlockStorageName != "" { + if len := len(reqParams.BlockStorageName); len < 3 || len > 30 { + return nil, errors.New("Length of BlockStorageName should be min 3 or max 30") + } + params["blockStorageName"] = reqParams.BlockStorageName + } + + if reqParams.BlockStorageSize < 10 || reqParams.BlockStorageSize > 2000 { + return nil, errors.New("BlockStorageSize should be min 10 or max 2000") + } + + if reqParams.BlockStorageDescription != "" { + if len := len(reqParams.BlockStorageDescription); len > 1000 { + return nil, errors.New("Length of BlockStorageDescription should be max 1000") + } + params["blockStorageDescription"] = reqParams.BlockStorageDescription + } + + if int(reqParams.BlockStorageSize/10)*10 != reqParams.BlockStorageSize { + return nil, errors.New("BlockStorageSize must be a multiple of 10 GB") + } + + if reqParams.BlockStorageSize == 0 { + return nil, errors.New("BlockStorageSize field is required") + } + + params["blockStorageSize"] = strconv.Itoa(reqParams.BlockStorageSize) + + if reqParams.ServerInstanceNo == "" { + return nil, errors.New("ServerInstanceNo field is required") + } + + params["serverInstanceNo"] = reqParams.ServerInstanceNo + + return params, nil +} + +// CreateBlockStorageInstance create block storage instance +func (s *Conn) CreateBlockStorageInstance(reqParams *RequestBlockStorageInstance) (*BlockStorageInstanceList, error) { + + params, err := 
processCreateBlockStorageInstanceParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "createBlockStorageInstance" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := BlockStorageInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var blockStorageInstanceList = BlockStorageInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &blockStorageInstanceList); err != nil { + return nil, err + } + + return &blockStorageInstanceList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createLoginKey.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createLoginKey.go new file mode 100644 index 000000000..b2cd490dc --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createLoginKey.go @@ -0,0 +1,64 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strings" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processCreateLoginKeyParams(keyName string) error { + if keyName == "" { + return errors.New("KeyName is required field") + } + + if len := len(keyName); len < 3 || len > 30 { + return errors.New("Length of KeyName should be min 3 or max 30") + } + + return nil +} + +// CreateLoginKey create loginkey with keyName +func (s *Conn) CreateLoginKey(keyName string) (*PrivateKey, error) { + if err := processCreateLoginKeyParams(keyName); err != nil { + return nil, err + } + + params := make(map[string]string) + + params["keyName"] = keyName + params["action"] = "createLoginKey" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := PrivateKey{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + privateKey := PrivateKey{} + if err := xml.Unmarshal([]byte(bytes), &privateKey); err != nil { + return nil, err + } + + if privateKey.PrivateKey != "" { + privateKey.PrivateKey = strings.TrimSpace(privateKey.PrivateKey) + } + + return &privateKey, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createPublicIpInstance.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createPublicIpInstance.go new file mode 100644 index 000000000..6dc5aa0fe --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createPublicIpInstance.go @@ -0,0 +1,74 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processCreatePublicIPInstanceParams(reqParams *RequestCreatePublicIPInstance) (map[string]string, error) { + params := 
make(map[string]string) + + if reqParams.ServerInstanceNo != "" { + params["serverInstanceNo"] = reqParams.ServerInstanceNo + } + + if reqParams.PublicIPDescription != "" { + if len := len(reqParams.PublicIPDescription); len > 1000 { + return params, errors.New("Length of publicIpDescription should be max 1000") + } + params["publicIpDescription"] = reqParams.PublicIPDescription + } + + if reqParams.InternetLineTypeCode != "" { + if reqParams.InternetLineTypeCode != "PUBLC" && reqParams.InternetLineTypeCode != "GLBL" { + return params, errors.New("InternetLineTypeCode should be PUBLC or GLBL") + } + params["internetLineTypeCode"] = reqParams.InternetLineTypeCode + } + + if reqParams.RegionNo != "" { + params["regionNo"] = reqParams.RegionNo + } + + return params, nil +} + +// CreatePublicIPInstance create public ip instance and allocate it to server instance +func (s *Conn) CreatePublicIPInstance(reqParams *RequestCreatePublicIPInstance) (*PublicIPInstanceList, error) { + params, err := processCreatePublicIPInstanceParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "createPublicIpInstance" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := PublicIPInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var responseCreatePublicIPInstances = PublicIPInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &responseCreatePublicIPInstances); err != nil { + fmt.Println(err) + return nil, err + } + + return &responseCreatePublicIPInstances, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createServerImage.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createServerImage.go new file mode 100644 index 000000000..11fb9b308 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createServerImage.go @@ -0,0 +1,71 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processCreateMemberServerImageParams(reqParams *RequestCreateServerImage) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil || reqParams.ServerInstanceNo == "" { + return params, errors.New("ServerInstanceNo is required field") + } + + if reqParams.MemberServerImageName != "" { + if len := len(reqParams.MemberServerImageName); len < 3 || len > 30 { + return nil, errors.New("Length of MemberServerImageName should be min 3 or max 30") + } + params["memberServerImageName"] = reqParams.MemberServerImageName + } + + if reqParams.MemberServerImageDescription != "" { + if len := len(reqParams.MemberServerImageDescription); len > 1000 { + return nil, errors.New("Length of MemberServerImageDescription should be smaller than 1000") + } + params["memberServerImageDescription"] = reqParams.MemberServerImageDescription + } + + params["serverInstanceNo"] = reqParams.ServerInstanceNo + + return params, nil +} + +// CreateMemberServerImage create member server image and retrun member server image list +func (s *Conn) 
CreateMemberServerImage(reqParams *RequestCreateServerImage) (*MemberServerImageList, error) { + params, err := processCreateMemberServerImageParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "createMemberServerImage" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := MemberServerImageList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var serverImageListsResp = MemberServerImageList{} + if err := xml.Unmarshal([]byte(bytes), &serverImageListsResp); err != nil { + return nil, err + } + + return &serverImageListsResp, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createServerInstances.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createServerInstances.go new file mode 100644 index 000000000..a00da9145 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/createServerInstances.go @@ -0,0 +1,148 @@ +package sdk + +import ( + "encoding/base64" + "encoding/xml" + "errors" + "fmt" + "strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processCreateServerInstancesParams(reqParams *RequestCreateServerInstance) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if reqParams.ServerImageProductCode != "" { + if len := len(reqParams.ServerImageProductCode); len > 20 { + return nil, errors.New("Length of ServerImageProductCode should be min 1 or max 20") + } + params["serverImageProductCode"] = reqParams.ServerImageProductCode + } + + if reqParams.ServerProductCode != "" { + if len := len(reqParams.ServerProductCode); len > 20 { + return nil, errors.New("Length of ServerProductCode should be min 1 or max 20") + } + params["serverProductCode"] = reqParams.ServerProductCode + } + + if reqParams.MemberServerImageNo != "" { + params["memberServerImageNo"] = reqParams.MemberServerImageNo + } + + if reqParams.ServerName != "" { + if len := len(reqParams.ServerName); len < 3 || len > 30 { + return nil, errors.New("Length of ServerName should be min 3 or max 30") + } + params["serverName"] = reqParams.ServerName + } + + if reqParams.ServerDescription != "" { + if len := len(reqParams.ServerDescription); len > 1000 { + return nil, errors.New("Length of ServerDescription should be min 1 or max 1000") + } + params["serverDescription"] = reqParams.ServerDescription + } + + if reqParams.LoginKeyName != "" { + if len := len(reqParams.LoginKeyName); len < 3 || len > 30 { + return nil, errors.New("Length of LoginKeyName should be min 3 or max 30") + } + params["loginKeyName"] = reqParams.LoginKeyName + } + + if reqParams.IsProtectServerTermination == true { + params["isProtectServerTermination"] = "true" + } + + if reqParams.ServerCreateCount > 0 { + if reqParams.ServerCreateCount > 20 { + return nil, errors.New("ServerCreateCount should be min 1 or max 20") + + } + params["serverCreateCount"] = strconv.Itoa(reqParams.ServerCreateCount) + } + + if reqParams.ServerCreateStartNo > 0 { + if 
reqParams.ServerCreateCount+reqParams.ServerCreateStartNo > 1000 { + return nil, errors.New("Sum of ServerCreateCount and ServerCreateStartNo should be less than 1000") + + } + params["serverCreateStartNo"] = strconv.Itoa(reqParams.ServerCreateStartNo) + } + + if reqParams.InternetLineTypeCode != "" { + if reqParams.InternetLineTypeCode != "PUBLC" && reqParams.InternetLineTypeCode != "GLBL" { + return nil, errors.New("InternetLineTypeCode should be PUBLC or GLBL") + } + params["internetLineTypeCode"] = reqParams.InternetLineTypeCode + } + + if reqParams.FeeSystemTypeCode != "" { + if reqParams.FeeSystemTypeCode != "FXSUM" && reqParams.FeeSystemTypeCode != "MTRAT" { + return nil, errors.New("FeeSystemTypeCode should be FXSUM or MTRAT") + } + params["feeSystemTypeCode"] = reqParams.FeeSystemTypeCode + } + + if reqParams.UserData != "" { + if len := len(reqParams.UserData); len > 21847 { + return nil, errors.New("Length of UserData should be min 1 or max 21847") + } + params["userData"] = base64.StdEncoding.EncodeToString([]byte(reqParams.UserData)) + } + + if reqParams.ZoneNo != "" { + params["zoneNo"] = reqParams.ZoneNo + } + + if len(reqParams.AccessControlGroupConfigurationNoList) > 0 { + for k, v := range reqParams.AccessControlGroupConfigurationNoList { + params[fmt.Sprintf("accessControlGroupConfigurationNoList.%d", k+1)] = v + } + } + + return params, nil +} + +// CreateServerInstances create server instances +func (s *Conn) CreateServerInstances(reqParams *RequestCreateServerInstance) (*ServerInstanceList, error) { + params, err := processCreateServerInstancesParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "createServerInstances" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := ServerInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var responseCreateServerInstances = ServerInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &responseCreateServerInstances); err != nil { + fmt.Println(err) + return nil, err + } + + return &responseCreateServerInstances, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deleteBlockStorageInstances.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deleteBlockStorageInstances.go new file mode 100644 index 000000000..c47f96bfb --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deleteBlockStorageInstances.go @@ -0,0 +1,60 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processDeleteBlockStorageInstancesParams(blockStorageInstanceNoList []string) (map[string]string, error) { + params := make(map[string]string) + + if len(blockStorageInstanceNoList) == 0 { + return params, errors.New("BlockStorageInstanceNoList field is Required") + } + + for k, v := range blockStorageInstanceNoList { + params[fmt.Sprintf("blockStorageInstanceNoList.%d", k+1)] = v + } + + return params, nil +} + +// DeleteBlockStorageInstances delete block storage 
instances +func (s *Conn) DeleteBlockStorageInstances(blockStorageInstanceNoList []string) (*BlockStorageInstanceList, error) { + params, err := processDeleteBlockStorageInstancesParams(blockStorageInstanceNoList) + if err != nil { + return nil, err + } + + params["action"] = "deleteBlockStorageInstances" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := BlockStorageInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var blockStorageInstanceList = BlockStorageInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &blockStorageInstanceList); err != nil { + fmt.Println(err) + return nil, err + } + + return &blockStorageInstanceList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deleteLoginKey.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deleteLoginKey.go new file mode 100644 index 000000000..7cc3cf257 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deleteLoginKey.go @@ -0,0 +1,59 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processDeleteLoginKeyParams(keyName string) error { + if keyName == "" { + return errors.New("KeyName is required field") + } + + if len := len(keyName); len < 3 || len > 30 { + return errors.New("Length of KeyName should be min 3 or max 30") + } + + return nil +} + +// DeleteLoginKey delete login key with keyName +func (s *Conn) DeleteLoginKey(keyName string) (*common.CommonResponse, error) { + if err := processDeleteLoginKeyParams(keyName); err != nil { + return nil, err + } + + params := make(map[string]string) + + params["keyName"] = keyName + params["action"] = "deleteLoginKey" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := common.CommonResponse{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var responseDeleteLoginKey = common.CommonResponse{} + if err := xml.Unmarshal([]byte(bytes), &responseDeleteLoginKey); err != nil { + return nil, err + } + + return &responseDeleteLoginKey, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deletePublicIpInstances.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deletePublicIpInstances.go new file mode 100644 index 000000000..5ea47602b --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/deletePublicIpInstances.go @@ -0,0 +1,62 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request 
"github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processDeletePublicIPInstancesParams(reqParams *RequestDeletePublicIPInstances) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil || len(reqParams.PublicIPInstanceNoList) == 0 { + return params, errors.New("Required field is not specified. location : publicIpInstanceNoList.N") + } + + if len(reqParams.PublicIPInstanceNoList) > 0 { + for k, v := range reqParams.PublicIPInstanceNoList { + params[fmt.Sprintf("publicIpInstanceNoList.%d", k+1)] = v + } + } + + return params, nil +} + +// DeletePublicIPInstances delete public ip instances +func (s *Conn) DeletePublicIPInstances(reqParams *RequestDeletePublicIPInstances) (*PublicIPInstanceList, error) { + params, err := processDeletePublicIPInstancesParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "deletePublicIpInstances" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := PublicIPInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var publicIPInstanceList = PublicIPInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &publicIPInstanceList); err != nil { + fmt.Println(err) + return nil, err + } + + return &publicIPInstanceList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/disassociatePublicIpFromServerInstance.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/disassociatePublicIpFromServerInstance.go new file mode 100644 index 000000000..6d1db9efc --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/disassociatePublicIpFromServerInstance.go @@ -0,0 +1,54 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processDisassociatePublicIPParams(PublicIPInstanceNo string) error { + if PublicIPInstanceNo == "" { + return errors.New("Required field is not specified. 
location : publicIpInstanceNo") + } + + return nil +} + +// DisassociatePublicIP disassociate public ip from server instance +func (s *Conn) DisassociatePublicIP(PublicIPInstanceNo string) (*PublicIPInstanceList, error) { + if err := processDisassociatePublicIPParams(PublicIPInstanceNo); err != nil { + return nil, err + } + + params := make(map[string]string) + params["publicIpInstanceNo"] = PublicIPInstanceNo + params["action"] = "disassociatePublicIpFromServerInstance" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := PublicIPInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var responseDisassociatePublicIPInstances = PublicIPInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &responseDisassociatePublicIPInstances); err != nil { + return nil, err + } + + return &responseDisassociatePublicIPInstances, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getAccessControlGroupList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getAccessControlGroupList.go new file mode 100644 index 000000000..937c877ce --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getAccessControlGroupList.go @@ -0,0 +1,89 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetAccessControlGroupListParams(reqParams *RequestAccessControlGroupList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if len(reqParams.AccessControlGroupConfigurationNoList) > 0 { + for k, v := range reqParams.AccessControlGroupConfigurationNoList { + params[fmt.Sprintf("accessControlGroupConfigurationNoList.%d", k+1)] = v + } + } + + if reqParams.IsDefault { + params["isDefault"] = "true" + } + + if reqParams.AccessControlGroupName != "" { + if len(reqParams.AccessControlGroupName) < 3 || len(reqParams.AccessControlGroupName) > 30 { + return nil, errors.New("AccessControlGroupName must be between 3 and 30 characters in length") + } + params["accessControlGroupName"] = reqParams.AccessControlGroupName + } + + if reqParams.PageNo != 0 { + if reqParams.PageNo > 2147483647 { + return nil, errors.New("PageNo should be up to 2147483647") + } + + params["pageNo"] = strconv.Itoa(reqParams.PageNo) + } + + if reqParams.PageSize != 0 { + if reqParams.PageSize > 2147483647 { + return nil, errors.New("PageSize should be up to 2147483647") + } + + params["pageSize"] = strconv.Itoa(reqParams.PageSize) + } + + return params, nil +} + +// GetAccessControlGroupList get access control group list +func (s *Conn) GetAccessControlGroupList(reqParams *RequestAccessControlGroupList) (*AccessControlGroupList, error) { + params, err := processGetAccessControlGroupListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getAccessControlGroupList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { 
return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := AccessControlGroupList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var AccessControlGroupList = AccessControlGroupList{} + if err := xml.Unmarshal([]byte(bytes), &AccessControlGroupList); err != nil { + return nil, err + } + + return &AccessControlGroupList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getAccessControlRuleList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getAccessControlRuleList.go new file mode 100644 index 000000000..c07f5985c --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getAccessControlRuleList.go @@ -0,0 +1,61 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func checkGetAccessControlRuleListParams(accessControlGroupConfigurationNo string) error { + if accessControlGroupConfigurationNo == "" { + return errors.New("accessControlGroupConfigurationNo is required") + } + + if no, err := strconv.Atoi(accessControlGroupConfigurationNo); err != nil { + return err + } else if no < 0 || no > 2147483647 { + return errors.New("accessControlGroupConfigurationNo must be up to 2147483647") + } + + return nil +} + +// GetAccessControlRuleList get access control rule list +func (s *Conn) GetAccessControlRuleList(accessControlGroupConfigurationNo string) (*AccessControlRuleList, error) { + if err := checkGetAccessControlRuleListParams(accessControlGroupConfigurationNo); err != nil { + return nil, err + } + + params := make(map[string]string) + params["accessControlGroupConfigurationNo"] = accessControlGroupConfigurationNo + params["action"] = "getAccessControlRuleList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := AccessControlRuleList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var AccessControlRuleList = AccessControlRuleList{} + if err := xml.Unmarshal([]byte(bytes), &AccessControlRuleList); err != nil { + return nil, err + } + + return &AccessControlRuleList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getBlockStorageInstanceList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getBlockStorageInstanceList.go new file mode 100644 index 000000000..48f1506f3 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getBlockStorageInstanceList.go @@ -0,0 +1,144 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + "strings" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func 
processGetBlockStorageInstanceListParams(reqParams *RequestBlockStorageInstanceList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if reqParams.ServerInstanceNo != "" { + params["serverInstanceNo"] = reqParams.ServerInstanceNo + } + + if len(reqParams.BlockStorageInstanceNoList) > 0 { + for k, v := range reqParams.BlockStorageInstanceNoList { + params[fmt.Sprintf("blockStorageInstanceNoList.%d", k+1)] = v + } + } + + if reqParams.SearchFilterName != "" { + if reqParams.SearchFilterName != "blockStorageName" && reqParams.SearchFilterName != "attachmentInformation" { + return nil, errors.New("SearchFilterName should be blockStorageName or attachmentInformation") + } + params["searchFilterName"] = reqParams.SearchFilterName + } + + if reqParams.SearchFilterValue != "" { + params["searchFilterValue"] = reqParams.SearchFilterValue + } + + if len(reqParams.BlockStorageTypeCodeList) > 0 { + for k, v := range reqParams.BlockStorageTypeCodeList { + if v != "BASIC" && v != "SVRBS" { + return nil, errors.New("BlockStorageTypeCodeList value should be BASIC or SVRBS") + } + params[fmt.Sprintf("blockStorageTypeCodeList.%d", k+1)] = v + } + } + + if reqParams.PageNo != 0 { + if reqParams.PageNo > 2147483647 { + return nil, errors.New("PageNo should be up to 2147483647") + } + + params["pageNo"] = strconv.Itoa(reqParams.PageNo) + } + + if reqParams.PageSize != 0 { + if reqParams.PageSize > 2147483647 { + return nil, errors.New("PageSize should be up to 2147483647") + } + + params["pageSize"] = strconv.Itoa(reqParams.PageSize) + } + + if reqParams.BlockStorageInstanceStatusCode != "" { + if reqParams.BlockStorageInstanceStatusCode != "ATTAC" && reqParams.BlockStorageInstanceStatusCode != "CRAET" { + return nil, errors.New("BlockStorageInstanceStatusCode should be ATTAC or CRAET") + } + params["blockStorageInstanceStatusCode"] = reqParams.BlockStorageInstanceStatusCode + } + + if reqParams.DiskTypeCode != "" { + if reqParams.DiskTypeCode != "NET" && reqParams.DiskTypeCode != "LOCAL" { + return nil, errors.New("DiskTypeCode should be NET or LOCAL") + } + params["diskTypeCode"] = reqParams.DiskTypeCode + } + + if reqParams.DiskDetailTypeCode != "" { + if reqParams.DiskDetailTypeCode != "HDD" && reqParams.DiskDetailTypeCode != "SSD" { + return nil, errors.New("DiskDetailTypeCode should be HDD or SSD") + } + params["diskDetailTypeCode"] = reqParams.DiskDetailTypeCode + } + + if reqParams.RegionNo != "" { + params["regionNo"] = reqParams.RegionNo + } + + if reqParams.SortedBy != "" { + if strings.EqualFold(reqParams.SortedBy, "blockStorageName") || strings.EqualFold(reqParams.SortedBy, "blockStorageInstanceNo") { + params["sortedBy"] = reqParams.SortedBy + } else { + return nil, errors.New("SortedBy should be blockStorageName or blockStorageInstanceNo") + } + } + + if reqParams.SortingOrder != "" { + if strings.EqualFold(reqParams.SortingOrder, "ascending") || strings.EqualFold(reqParams.SortingOrder, "descending") { + params["sortingOrder"] = reqParams.SortingOrder + } else { + return nil, errors.New("SortingOrder should be ascending or descending") + } + } + + return params, nil +} + +// GetBlockStorageInstance Get block storage instance list +func (s *Conn) GetBlockStorageInstance(reqParams *RequestBlockStorageInstanceList) (*BlockStorageInstanceList, error) { + params, err := processGetBlockStorageInstanceListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getBlockStorageInstanceList" + + bytes, resp, 
err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := BlockStorageInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var blockStorageInstanceList = BlockStorageInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &blockStorageInstanceList); err != nil { + return nil, err + } + + return &blockStorageInstanceList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getLoginKeyList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getLoginKeyList.go new file mode 100644 index 000000000..d6866ebaf --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getLoginKeyList.go @@ -0,0 +1,77 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetLoginKeyListParams(reqParams *RequestGetLoginKeyList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if reqParams.KeyName != "" { + if len := len(reqParams.KeyName); len < 3 || len > 20 { + return nil, errors.New("Length of KeyName should be min 3 or max 30") + } + params["keyName"] = reqParams.KeyName + } + + if reqParams.PageNo > 0 { + if reqParams.PageNo > 2147483647 { + return nil, errors.New("PageNo should be min 0 or max 2147483647") + } + params["pageNo"] = strconv.Itoa(reqParams.PageNo) + } + + if reqParams.PageSize > 0 { + if reqParams.PageSize > 2147483647 { + return nil, errors.New("PageSize should be min 0 or max 2147483647") + } + params["pageSize"] = strconv.Itoa(reqParams.PageSize) + } + + return params, nil +} + +// GetLoginKeyList get login key list +func (s *Conn) GetLoginKeyList(reqParams *RequestGetLoginKeyList) (*LoginKeyList, error) { + params, err := processGetLoginKeyListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getLoginKeyList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := LoginKeyList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + loginKeyList := LoginKeyList{} + if err := xml.Unmarshal([]byte(bytes), &loginKeyList); err != nil { + return nil, err + } + + return &loginKeyList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getMemberServerImagesList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getMemberServerImagesList.go new file mode 100644 index 000000000..f5fe9f8bf --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getMemberServerImagesList.go @@ -0,0 +1,87 @@ +package sdk + +import ( + "encoding/xml" + "fmt" + 
"strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetMemberServerImageListParams(reqParams *RequestServerImageList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if len(reqParams.MemberServerImageNoList) > 0 { + for k, v := range reqParams.MemberServerImageNoList { + params[fmt.Sprintf("memberServerImageNoList.%d", k+1)] = v + } + } + + if len(reqParams.PlatformTypeCodeList) > 0 { + for k, v := range reqParams.PlatformTypeCodeList { + params[fmt.Sprintf("platformTypeCodeList.%d", k+1)] = v + } + } + + if reqParams.PageNo != 0 { + params["pageNo"] = strconv.Itoa(reqParams.PageNo) + } + + if reqParams.PageSize != 0 { + params["pageSize"] = strconv.Itoa(reqParams.PageSize) + } + + if reqParams.RegionNo != "" { + params["regionNo"] = reqParams.RegionNo + } + + if reqParams.SortedBy != "" { + params["sortedBy"] = reqParams.SortedBy + } + + if reqParams.SortingOrder != "" { + params["sortingOrder"] = reqParams.SortingOrder + } + + return params, nil +} + +// GetMemberServerImageList get member server image list +func (s *Conn) GetMemberServerImageList(reqParams *RequestServerImageList) (*MemberServerImageList, error) { + params, err := processGetMemberServerImageListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getMemberServerImageList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := MemberServerImageList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var serverImageListsResp = MemberServerImageList{} + if err := xml.Unmarshal([]byte(bytes), &serverImageListsResp); err != nil { + return nil, err + } + + return &serverImageListsResp, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getPublicIpInstanceList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getPublicIpInstanceList.go new file mode 100644 index 000000000..a781728c6 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getPublicIpInstanceList.go @@ -0,0 +1,130 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + "strings" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetPublicIPInstanceListParams(reqParams *RequestPublicIPInstanceList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if reqParams.IsAssociated { + params["isAssociated"] = "true" + } + + if len(reqParams.PublicIPInstanceNoList) > 0 { + for k, v := range reqParams.PublicIPInstanceNoList { + params[fmt.Sprintf("publicIpInstanceNoList.%d", k+1)] = v + } + } + + if len(reqParams.PublicIPList) > 0 { + for k, v := range reqParams.PublicIPList { + if len(reqParams.PublicIPList) < 5 || len(reqParams.PublicIPList) > 15 { + return nil, errors.New("PublicIPList must be between 5 and 15 characters in length") + } + params[fmt.Sprintf("publicIpList.%d", 
k+1)] = v + } + } + + if reqParams.SearchFilterName != "" { + if reqParams.SearchFilterName != "publicIp" && reqParams.SearchFilterName != "associatedServerName" { + return nil, errors.New("SearchFilterName must be publicIp or associatedServerName") + } + params["searchFilterName"] = reqParams.SearchFilterName + } + + if reqParams.SearchFilterValue != "" { + params["searchFilterValue"] = reqParams.SearchFilterValue + } + + if reqParams.InternetLineTypeCode != "" { + if reqParams.InternetLineTypeCode != "PUBLC" && reqParams.InternetLineTypeCode != "GLBL" { + return params, errors.New("InternetLineTypeCode must be PUBLC or GLBL") + } + params["internetLineTypeCode"] = reqParams.InternetLineTypeCode + } + + if reqParams.RegionNo != "" { + params["regionNo"] = reqParams.RegionNo + } + + if reqParams.PageNo != 0 { + if reqParams.PageNo > 2147483647 { + return nil, errors.New("PageNo must be up to 2147483647") + } + + params["pageNo"] = strconv.Itoa(reqParams.PageNo) + } + + if reqParams.PageSize != 0 { + if reqParams.PageSize > 2147483647 { + return nil, errors.New("PageSize must be up to 2147483647") + } + + params["pageSize"] = strconv.Itoa(reqParams.PageSize) + } + + if reqParams.SortedBy != "" { + if strings.EqualFold(reqParams.SortedBy, "publicIp") || strings.EqualFold(reqParams.SortedBy, "publicIpInstanceNo") { + params["sortedBy"] = reqParams.SortedBy + } else { + return nil, errors.New("SortedBy must be publicIp or publicIpInstanceNo") + } + } + + if reqParams.SortingOrder != "" { + if strings.EqualFold(reqParams.SortingOrder, "ascending") || strings.EqualFold(reqParams.SortingOrder, "descending") { + params["sortingOrder"] = reqParams.SortingOrder + } else { + return nil, errors.New("SortingOrder must be ascending or descending") + } + } + + return params, nil +} + +// GetPublicIPInstanceList get public ip instance list +func (s *Conn) GetPublicIPInstanceList(reqParams *RequestPublicIPInstanceList) (*PublicIPInstanceList, error) { + params, err := processGetPublicIPInstanceListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getPublicIpInstanceList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := PublicIPInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var publicIPInstanceList = PublicIPInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &publicIPInstanceList); err != nil { + return nil, err + } + + return &publicIPInstanceList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getRegionList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getRegionList.go new file mode 100644 index 000000000..0e1647684 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getRegionList.go @@ -0,0 +1,40 @@ +package sdk + +import ( + "encoding/xml" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +// GetRegionList gets region list +func (s *Conn) GetRegionList() (*RegionList, error) { + params := make(map[string]string) + params["action"] = 
"getRegionList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := RegionList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + regionList := RegionList{} + if err := xml.Unmarshal([]byte(bytes), ®ionList); err != nil { + return nil, err + } + + return ®ionList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getRootPassword.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getRootPassword.go new file mode 100644 index 000000000..9ec595f49 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getRootPassword.go @@ -0,0 +1,67 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strings" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetRootPasswordParams(reqParams *RequestGetRootPassword) (map[string]string, error) { + params := make(map[string]string) + + if reqParams.ServerInstanceNo == "" { + return params, errors.New("Required field is not specified. location : serverInstanceNo") + } + + if reqParams.PrivateKey == "" { + return params, errors.New("Required field is not specified. location : privateKey") + } + + params["serverInstanceNo"] = reqParams.ServerInstanceNo + params["privateKey"] = reqParams.PrivateKey + + return params, nil +} + +// GetRootPassword get root password from server instance +func (s *Conn) GetRootPassword(reqParams *RequestGetRootPassword) (*RootPassword, error) { + params, err := processGetRootPasswordParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getRootPassword" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := RootPassword{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + responseGetRootPassword := RootPassword{} + if err := xml.Unmarshal([]byte(bytes), &responseGetRootPassword); err != nil { + return nil, err + } + + if responseGetRootPassword.RootPassword != "" { + responseGetRootPassword.RootPassword = strings.TrimSpace(responseGetRootPassword.RootPassword) + } + + return &responseGetRootPassword, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerImageProductList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerImageProductList.go new file mode 100644 index 000000000..645649097 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerImageProductList.go @@ -0,0 +1,88 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request 
"github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetServerImageProductListParams(reqParams *RequestGetServerImageProductList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if reqParams.ExclusionProductCode != "" { + if len(reqParams.ExclusionProductCode) > 20 { + return params, errors.New("Length of exclusionProductCode should be max 20") + } + params["exclusionProductCode"] = reqParams.ExclusionProductCode + } + + if reqParams.ProductCode != "" { + if len(reqParams.ProductCode) > 20 { + return params, errors.New("Length of productCode should be max 20") + } + params["productCode"] = reqParams.ProductCode + } + + if len(reqParams.PlatformTypeCodeList) > 0 { + for k, v := range reqParams.PlatformTypeCodeList { + params[fmt.Sprintf("platformTypeCodeList.%d", k+1)] = v + } + } + + if reqParams.BlockStorageSize > 0 { + if reqParams.BlockStorageSize != 50 && reqParams.BlockStorageSize != 100 { + return nil, errors.New("blockStorageSize should be null, 50 or 100") + } + params["blockStorageSize"] = strconv.Itoa(reqParams.BlockStorageSize) + } + + if reqParams.RegionNo != "" { + params["regionNo"] = reqParams.RegionNo + } + + return params, nil +} + +// GetServerImageProductList gets server image product list +func (s *Conn) GetServerImageProductList(reqParams *RequestGetServerImageProductList) (*ProductList, error) { + params, err := processGetServerImageProductListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getServerImageProductList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := ProductList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var productList = ProductList{} + if err := xml.Unmarshal([]byte(bytes), &productList); err != nil { + fmt.Println(err) + return nil, err + } + + return &productList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerInstanceList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerInstanceList.go new file mode 100644 index 000000000..a4de2206e --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerInstanceList.go @@ -0,0 +1,133 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + "strconv" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetServerInstanceListParams(reqParams *RequestGetServerInstanceList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil { + return params, nil + } + + if len(reqParams.ServerInstanceNoList) > 0 { + for k, v := range reqParams.ServerInstanceNoList { + params[fmt.Sprintf("serverInstanceNoList.%d", k+1)] = v + } + } + + if reqParams.SearchFilterName != "" { + params["searchFilterName"] = reqParams.SearchFilterName + } + + if reqParams.SearchFilterValue != "" { + params["searchFilterValue"] = reqParams.SearchFilterValue + } + + if reqParams.PageNo > 0 { + if reqParams.PageNo > 2147483647 { + return nil, 
errors.New("PageNo should be less than 2147483647") + + } + params["pageNo"] = strconv.Itoa(reqParams.PageNo) + } + + if reqParams.PageSize > 0 { + if reqParams.PageSize > 2147483647 { + return nil, errors.New("PageSize should be less than 2147483647") + + } + params["pageSize"] = strconv.Itoa(reqParams.PageSize) + } + + if reqParams.ServerInstanceStatusCode != "" { + if reqParams.ServerInstanceStatusCode != "RUN" && reqParams.ServerInstanceStatusCode != "NSTOP" && reqParams.ServerInstanceStatusCode != "ING" { + return nil, errors.New("ServerInstanceStatusCode should be RUN, NSTOP or ING") + } + params["serverInstanceStatusCode"] = reqParams.ServerInstanceStatusCode + } + + if reqParams.InternetLineTypeCode != "" { + if reqParams.InternetLineTypeCode != "PUBLC" && reqParams.InternetLineTypeCode != "GLBL" { + return nil, errors.New("InternetLineTypeCode should be PUBLC or GLBL") + } + params["internetLineTypeCode"] = reqParams.InternetLineTypeCode + } + + if reqParams.RegionNo != "" { + params["regionNo"] = reqParams.RegionNo + } + + if reqParams.BaseBlockStorageDiskTypeCode != "" { + if reqParams.BaseBlockStorageDiskTypeCode != "NET" && reqParams.BaseBlockStorageDiskTypeCode != "LOCAL" { + return nil, errors.New("BaseBlockStorageDiskTypeCode should be NET or LOCAL") + } + params["baseBlockStorageDiskTypeCode"] = reqParams.BaseBlockStorageDiskTypeCode + } + + if reqParams.BaseBlockStorageDiskDetailTypeCode != "" { + if reqParams.BaseBlockStorageDiskDetailTypeCode != "HDD" && reqParams.BaseBlockStorageDiskDetailTypeCode != "SSD" { + return nil, errors.New("BaseBlockStorageDiskDetailTypeCode should be HDD or SSD") + } + params["baseBlockStorageDiskDetailTypeCode"] = reqParams.BaseBlockStorageDiskDetailTypeCode + } + + if reqParams.SortedBy != "" { + if reqParams.SortedBy != "serverName" && reqParams.SortedBy != "serverInstanceNo" { + return nil, errors.New("SortedBy should be serverName or serverInstanceNo") + } + params["sortedBy"] = reqParams.SortedBy + } + + if reqParams.SortingOrder != "" { + if reqParams.SortingOrder != "ascending" && reqParams.SortingOrder != "descending" { + return nil, errors.New("SortingOrder should be ascending or descending") + } + params["sortingOrder"] = reqParams.SortingOrder + } + + return params, nil +} + +// GetServerInstanceList get server instance list +func (s *Conn) GetServerInstanceList(reqParams *RequestGetServerInstanceList) (*ServerInstanceList, error) { + params, err := processGetServerInstanceListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getServerInstanceList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := ServerInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var serverInstanceList = ServerInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &serverInstanceList); err != nil { + fmt.Println(err) + return nil, err + } + + return &serverInstanceList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerProductList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerProductList.go new 
file mode 100644 index 000000000..b932df876 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getServerProductList.go @@ -0,0 +1,79 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processGetServerProductListParams(reqParams *RequestGetServerProductList) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil || reqParams.ServerImageProductCode == "" { + return params, errors.New("ServerImageProductCode field is required") + } + + if len(reqParams.ServerImageProductCode) > 20 { + return params, errors.New("Length of serverImageProductCode should be max 20") + } + + params["serverImageProductCode"] = reqParams.ServerImageProductCode + + if reqParams.ExclusionProductCode != "" { + if len(reqParams.ExclusionProductCode) > 20 { + return params, errors.New("Length of exclusionProductCode should be max 20") + } + params["exclusionProductCode"] = reqParams.ExclusionProductCode + } + + if reqParams.ProductCode != "" { + if len(reqParams.ProductCode) > 20 { + return params, errors.New("Length of productCode should be max 20") + } + params["productCode"] = reqParams.ProductCode + } + + if reqParams.RegionNo != "" { + params["regionNo"] = reqParams.RegionNo + } + + return params, nil +} + +// GetServerProductList : Get Server product list with server image product code by default. +func (s *Conn) GetServerProductList(reqParams *RequestGetServerProductList) (*ProductList, error) { + params, err := processGetServerProductListParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "getServerProductList" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := ProductList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var productListResp = ProductList{} + if err := xml.Unmarshal([]byte(bytes), &productListResp); err != nil { + return nil, err + } + + return &productListResp, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getZoneList.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getZoneList.go new file mode 100644 index 000000000..42354eec0 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/getZoneList.go @@ -0,0 +1,44 @@ +package sdk + +import ( + "encoding/xml" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +// GetZoneList get zone list +func (s *Conn) GetZoneList(regionNo string) (*ZoneList, error) { + params := make(map[string]string) + params["action"] = "getZoneList" + + if regionNo != "" { + params["regionNo"] = regionNo + } + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := ZoneList{} + respError.ReturnCode = 
responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + ZoneList := ZoneList{} + if err := xml.Unmarshal([]byte(bytes), &ZoneList); err != nil { + return nil, err + } + + return &ZoneList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/model.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/model.go new file mode 100644 index 000000000..02cac088e --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/model.go @@ -0,0 +1,359 @@ +package sdk + +import ( + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" +) + +type Conn struct { + accessKey string + secretKey string + apiURL string +} + +// ServerImage structures +type ServerImage struct { + MemberServerImageNo string `xml:"memberServerImageNo"` + MemberServerImageName string `xml:"memberServerImageName"` + MemberServerImageDescription string `xml:"memberServerImageDescription"` + OriginalServerInstanceNo string `xml:"originalServerInstanceNo"` + OriginalServerProductCode string `xml:"originalServerProductCode"` + OriginalServerName string `xml:"originalServerName"` + OriginalBaseBlockStorageDiskType common.CommonCode `xml:"originalBaseBlockStorageDiskType"` + OriginalServerImageProductCode string `xml:"originalServerImageProductCode"` + OriginalOsInformation string `xml:"originalOsInformation"` + OriginalServerImageName string `xml:"originalServerImageName"` + MemberServerImageStatusName string `xml:"memberServerImageStatusName"` + MemberServerImageStatus common.CommonCode `xml:"memberServerImageStatus"` + MemberServerImageOperation common.CommonCode `xml:"memberServerImageOperation"` + MemberServerImagePlatformType common.CommonCode `xml:"memberServerImagePlatformType"` + CreateDate string `xml:"createDate"` + Zone common.Zone `xml:"zone"` + Region common.Region `xml:"region"` + MemberServerImageBlockStorageTotalRows int `xml:"memberServerImageBlockStorageTotalRows"` + MemberServerImageBlockStorageTotalSize int `xml:"memberServerImageBlockStorageTotalSize"` +} + +type MemberServerImageList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + MemberServerImageList []ServerImage `xml:"memberServerImageList>memberServerImage,omitempty"` +} + +type RequestServerImageList struct { + MemberServerImageNoList []string + PlatformTypeCodeList []string + PageNo int + PageSize int + RegionNo string + SortedBy string + SortingOrder string +} + +type RequestCreateServerImage struct { + MemberServerImageName string + MemberServerImageDescription string + ServerInstanceNo string +} + +type RequestGetServerImageProductList struct { + ExclusionProductCode string + ProductCode string + PlatformTypeCodeList []string + BlockStorageSize int + RegionNo string +} + +// ProductList : Response of server product list +type ProductList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + Product []Product `xml:"productList>product,omitempty"` +} + +// Product : Product information of Server +type Product struct { + ProductCode string `xml:"productCode"` + ProductName string `xml:"productName"` + ProductType common.CommonCode `xml:"productType"` + ProductDescription string `xml:"productDescription"` + InfraResourceType common.CommonCode `xml:"infraResourceType"` + CPUCount int `xml:"cpuCount"` + MemorySize int `xml:"memorySize"` + BaseBlockStorageSize int 
`xml:"baseBlockStorageSize"` + PlatformType common.CommonCode `xml:"platformType"` + OsInformation string `xml:"osInformation"` + AddBlockStroageSize int `xml:"addBlockStroageSize"` +} + +// RequestCreateServerInstance is Server Instances structures +type RequestCreateServerInstance struct { + ServerImageProductCode string + ServerProductCode string + MemberServerImageNo string + ServerName string + ServerDescription string + LoginKeyName string + IsProtectServerTermination bool + ServerCreateCount int + ServerCreateStartNo int + InternetLineTypeCode string + FeeSystemTypeCode string + UserData string + ZoneNo string + AccessControlGroupConfigurationNoList []string +} + +type ServerInstanceList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + ServerInstanceList []ServerInstance `xml:"serverInstanceList>serverInstance,omitempty"` +} + +type ServerInstance struct { + ServerInstanceNo string `xml:"serverInstanceNo"` + ServerName string `xml:"serverName"` + ServerDescription string `xml:"serverDescription"` + CPUCount int `xml:"cpuCount"` + MemorySize int `xml:"memorySize"` + BaseBlockStorageSize int `xml:"baseBlockStorageSize"` + PlatformType common.CommonCode `xml:"platformType"` + LoginKeyName string `xml:"loginKeyName"` + IsFeeChargingMonitoring bool `xml:"isFeeChargingMonitoring"` + PublicIP string `xml:"publicIp"` + PrivateIP string `xml:"privateIp"` + ServerImageName string `xml:"serverImageName"` + ServerInstanceStatus common.CommonCode `xml:"serverInstanceStatus"` + ServerInstanceOperation common.CommonCode `xml:"serverInstanceOperation"` + ServerInstanceStatusName string `xml:"serverInstanceStatusName"` + CreateDate string `xml:"createDate"` + Uptime string `xml:"uptime"` + ServerImageProductCode string `xml:"serverImageProductCode"` + ServerProductCode string `xml:"serverProductCode"` + IsProtectServerTermination bool `xml:"isProtectServerTermination"` + PortForwardingPublicIP string `xml:"portForwardingPublicIp"` + PortForwardingExternalPort int `xml:"portForwardingExternalPort"` + PortForwardingInternalPort int `xml:"portForwardingInternalPort"` + Zone common.Zone `xml:"zone"` + Region common.Region `xml:"region"` + BaseBlockStorageDiskType common.CommonCode `xml:"baseBlockStorageDiskType"` + BaseBlockStroageDiskDetailType common.CommonCode `xml:"baseBlockStroageDiskDetailType"` + InternetLineType common.CommonCode `xml:"internetLineType"` + UserData string `xml:"userData"` + AccessControlGroupList []AccessControlGroup `xml:"accessControlGroupList>accessControlGroup"` +} + +type AccessControlGroup struct { + AccessControlGroupConfigurationNo string `xml:"accessControlGroupConfigurationNo"` + AccessControlGroupName string `xml:"accessControlGroupName"` + AccessControlGroupDescription string `xml:"accessControlGroupDescription"` + IsDefault bool `xml:"isDefault"` + CreateDate string `xml:"createDate"` +} + +// RequestGetLoginKeyList is Login Key structures +type RequestGetLoginKeyList struct { + KeyName string + PageNo int + PageSize int +} + +type LoginKeyList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + LoginKeyList []LoginKey `xml:"loginKeyList>loginKey,omitempty"` +} + +type LoginKey struct { + Fingerprint string `xml:"fingerprint"` + KeyName string `xml:"keyName"` + CreateDate string `xml:"createDate"` +} + +type PrivateKey struct { + common.CommonResponse + PrivateKey string `xml:"privateKey"` +} +type RequestCreatePublicIPInstance struct { + ServerInstanceNo string + PublicIPDescription string + InternetLineTypeCode string + 
RegionNo string +} + +type RequestPublicIPInstanceList struct { + IsAssociated bool + PublicIPInstanceNoList []string + PublicIPList []string + SearchFilterName string + SearchFilterValue string + InternetLineTypeCode string + RegionNo string + PageNo int + PageSize int + SortedBy string + SortingOrder string +} + +type PublicIPInstanceList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + PublicIPInstanceList []PublicIPInstance `xml:"publicIpInstanceList>publicIpInstance,omitempty"` +} + +type PublicIPInstance struct { + PublicIPInstanceNo string `xml:"publicIpInstanceNo"` + PublicIP string `xml:"publicIp"` + PublicIPDescription string `xml:"publicIpDescription"` + CreateDate string `xml:"createDate"` + InternetLineType common.CommonCode `xml:"internetLineType"` + PublicIPInstanceStatusName string `xml:"publicIpInstanceStatusName"` + PublicIPInstanceStatus common.CommonCode `xml:"publicIpInstanceStatus"` + PublicIPInstanceOperation common.CommonCode `xml:"publicIpInstanceOperation"` + PublicIPKindType common.CommonCode `xml:"publicIpKindType"` + ServerInstance ServerInstance `xml:"serverInstanceAssociatedWithPublicIp"` +} + +type RequestDeletePublicIPInstances struct { + PublicIPInstanceNoList []string +} + +// RequestGetServerInstanceList : Get Server Instance List +type RequestGetServerInstanceList struct { + ServerInstanceNoList []string + SearchFilterName string + SearchFilterValue string + PageNo int + PageSize int + ServerInstanceStatusCode string + InternetLineTypeCode string + RegionNo string + BaseBlockStorageDiskTypeCode string + BaseBlockStorageDiskDetailTypeCode string + SortedBy string + SortingOrder string +} + +type RequestStopServerInstances struct { + ServerInstanceNoList []string +} + +type RequestTerminateServerInstances struct { + ServerInstanceNoList []string +} + +// RequestGetRootPassword : Request to get root password of the server +type RequestGetRootPassword struct { + ServerInstanceNo string + PrivateKey string +} + +// RootPassword : Response of getting root password of the server +type RootPassword struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + RootPassword string `xml:"rootPassword"` +} + +// RequestGetZoneList : Request to get zone list +type RequestGetZoneList struct { + regionNo string +} + +// ZoneList : Response of getting zone list +type ZoneList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + Zone []common.Zone `xml:"zoneList>zone"` +} + +// RegionList : Response of getting region list +type RegionList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + RegionList []common.Region `xml:"regionList>region,omitempty"` +} + +type RequestBlockStorageInstance struct { + BlockStorageName string + BlockStorageSize int + BlockStorageDescription string + ServerInstanceNo string +} + +type RequestBlockStorageInstanceList struct { + ServerInstanceNo string + BlockStorageInstanceNoList []string + SearchFilterName string + SearchFilterValue string + BlockStorageTypeCodeList []string + PageNo int + PageSize int + BlockStorageInstanceStatusCode string + DiskTypeCode string + DiskDetailTypeCode string + RegionNo string + SortedBy string + SortingOrder string +} + +type BlockStorageInstanceList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + BlockStorageInstance []BlockStorageInstance `xml:"blockStorageInstanceList>blockStorageInstance,omitempty"` +} + +type BlockStorageInstance struct { + BlockStorageInstanceNo string `xml:"blockStorageInstanceNo"` + ServerInstanceNo 
string `xml:"serverInstanceNo"` + ServerName string `xml:"serverName"` + BlockStorageType common.CommonCode `xml:"blockStorageType"` + BlockStorageName string `xml:"blockStorageName"` + BlockStorageSize int `xml:"blockStorageSize"` + DeviceName string `xml:"deviceName"` + BlockStorageProductCode string `xml:"blockStorageProductCode"` + BlockStorageInstanceStatus common.CommonCode `xml:"blockStorageInstanceStatus"` + BlockStorageInstanceOperation common.CommonCode `xml:"blockStorageInstanceOperation"` + BlockStorageInstanceStatusName string `xml:"blockStorageInstanceStatusName"` + CreateDate string `xml:"createDate"` + BlockStorageInstanceDescription string `xml:"blockStorageInstanceDescription"` + DiskType common.CommonCode `xml:"diskType"` + DiskDetailType common.CommonCode `xml:"diskDetailType"` +} + +// RequestGetServerProductList : Request to get server product list +type RequestGetServerProductList struct { + ExclusionProductCode string + ProductCode string + ServerImageProductCode string + RegionNo string +} + +type RequestAccessControlGroupList struct { + AccessControlGroupConfigurationNoList []string + IsDefault bool + AccessControlGroupName string + PageNo int + PageSize int +} + +type AccessControlGroupList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + AccessControlGroup []AccessControlGroup `xml:"accessControlGroupList>accessControlGroup,omitempty"` +} + +type AccessControlRuleList struct { + common.CommonResponse + TotalRows int `xml:"totalRows"` + AccessControlRuleList []AccessControlRule `xml:"accessControlRuleList>accessControlRule,omitempty"` +} + +type AccessControlRule struct { + AccessControlRuleConfigurationNo string `xml:"accessControlRuleConfigurationNo"` + AccessControlRuleDescription string `xml:"accessControlRuleDescription"` + SourceAccessControlRuleConfigurationNo string `xml:"sourceAccessControlRuleConfigurationNo"` + SourceAccessControlRuleName string `xml:"sourceAccessControlRuleName"` + ProtocolType common.CommonCode `xml:"protocolType"` + SourceIP string `xml:"sourceIp"` + DestinationPort string `xml:"destinationPort"` +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/stopServerInstances.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/stopServerInstances.go new file mode 100644 index 000000000..e3786e5fa --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/stopServerInstances.go @@ -0,0 +1,62 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processStopServerInstancesParams(reqParams *RequestStopServerInstances) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil || len(reqParams.ServerInstanceNoList) == 0 { + return params, errors.New("serverInstanceNoList is required") + } + + if len(reqParams.ServerInstanceNoList) > 0 { + for k, v := range reqParams.ServerInstanceNoList { + params[fmt.Sprintf("serverInstanceNoList.%d", k+1)] = v + } + } + + return params, nil +} + +// StopServerInstances stop server instances +func (s *Conn) StopServerInstances(reqParams *RequestStopServerInstances) (*ServerInstanceList, error) { + params, err := processStopServerInstancesParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "stopServerInstances" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, 
err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := ServerInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var serverInstanceList = ServerInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &serverInstanceList); err != nil { + fmt.Println(err) + return nil, err + } + + return &serverInstanceList, nil +} diff --git a/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/terminateServerInstances.go b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/terminateServerInstances.go new file mode 100644 index 000000000..598f8efa1 --- /dev/null +++ b/vendor/github.com/NaverCloudPlatform/ncloud-sdk-go/sdk/terminateServerInstances.go @@ -0,0 +1,62 @@ +package sdk + +import ( + "encoding/xml" + "errors" + "fmt" + + common "github.com/NaverCloudPlatform/ncloud-sdk-go/common" + request "github.com/NaverCloudPlatform/ncloud-sdk-go/request" +) + +func processTerminateServerInstancesParams(reqParams *RequestTerminateServerInstances) (map[string]string, error) { + params := make(map[string]string) + + if reqParams == nil || len(reqParams.ServerInstanceNoList) == 0 { + return params, errors.New("serverInstanceNoList is required") + } + + if len(reqParams.ServerInstanceNoList) > 0 { + for k, v := range reqParams.ServerInstanceNoList { + params[fmt.Sprintf("serverInstanceNoList.%d", k+1)] = v + } + } + + return params, nil +} + +// TerminateServerInstances terminate server instances +func (s *Conn) TerminateServerInstances(reqParams *RequestTerminateServerInstances) (*ServerInstanceList, error) { + params, err := processTerminateServerInstancesParams(reqParams) + if err != nil { + return nil, err + } + + params["action"] = "terminateServerInstances" + + bytes, resp, err := request.NewRequest(s.accessKey, s.secretKey, "GET", s.apiURL+"server/", params) + if err != nil { + return nil, err + } + + if resp.StatusCode != 200 { + responseError, err := common.ParseErrorResponse(bytes) + if err != nil { + return nil, err + } + + respError := ServerInstanceList{} + respError.ReturnCode = responseError.ReturnCode + respError.ReturnMessage = responseError.ReturnMessage + + return &respError, fmt.Errorf("%s %s - error code: %d , error message: %s", resp.Status, string(bytes), responseError.ReturnCode, responseError.ReturnMessage) + } + + var serverInstanceList = ServerInstanceList{} + if err := xml.Unmarshal([]byte(bytes), &serverInstanceList); err != nil { + fmt.Println(err) + return nil, err + } + + return &serverInstanceList, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md b/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md index b52655b67..dd7465367 100644 --- a/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md +++ b/vendor/github.com/aws/aws-sdk-go/CHANGELOG.md @@ -1,3 +1,1146 @@ +Release v1.12.72 (2018-02-07) +=== + +### Service Client Updates +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/glue`: Updates service API and documentation + * This new feature will now allow customers to add a customized json classifier. They can specify a json path to indicate the object, array or field of the json documents they'd like crawlers to inspect when they crawl json files. 
+* `service/servicecatalog`: Updates service API, documentation, and paginators + * This release of Service Catalog adds SearchProvisionedProducts API and ProvisionedProductPlan APIs. +* `service/servicediscovery`: Updates service API and documentation + * This release adds support for registering CNAME record types and creating Route 53 alias records that route traffic to Amazon Elastic Load Balancers using Amazon Route 53 Auto Naming APIs. +* `service/ssm`: Updates service API and documentation + * This Patch Manager release supports configuring Linux repos as part of patch baselines, controlling updates of non-OS security packages and also creating patch baselines for SUSE12 + +### SDK Enhancements +* `private/model/api`: Add validation to ensure there is no duplication of services in models/apis ([#1758](https://github.com/aws/aws-sdk-go/pull/1758)) + * Prevents the SDK from mistakenly generating code a single service multiple times with different model versions. +* `example/service/ec2/instancesbyRegion`: Fix typos in example ([#1762](https://github.com/aws/aws-sdk-go/pull/1762)) +* `private/model/api`: removing SDK API reference crosslinks from input/output shapes. (#1765) + +### SDK Bugs +* `aws/session`: Fix bug in session.New not supporting AWS_SDK_LOAD_CONFIG ([#1770](https://github.com/aws/aws-sdk-go/pull/1770)) + * Fixes a bug in the session.New function that was not correctly sourcing the shared configuration files' path. + * Fixes [#1771](https://github.com/aws/aws-sdk-go/pull/1771) +Release v1.12.71 (2018-02-05) +=== + +### Service Client Updates +* `service/acm`: Updates service documentation + * Documentation updates for acm +* `service/cloud9`: Updates service documentation and examples + * API usage examples for AWS Cloud9. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/kinesis`: Updates service API and documentation + * Using ListShards a Kinesis Data Streams customer or client can get information about shards in a data stream (including meta-data for each shard) without obtaining data stream level information. +* `service/opsworks`: Updates service API, documentation, and waiters + * AWS OpsWorks Stacks supports EBS encryption and HDD volume types. Also, a new DescribeOperatingSystems API is available, which lists all operating systems supported by OpsWorks Stacks. + +Release v1.12.70 (2018-01-26) +=== + +### Service Client Updates +* `service/devicefarm`: Updates service API and documentation + * Add InteractionMode in CreateRemoteAccessSession for DirectDeviceAccess feature. +* `service/medialive`: Updates service API and documentation + * Add InputSpecification to CreateChannel (specification of input attributes is used for channel sizing and affects pricing); add NotFoundException to DeleteInputSecurityGroups. +* `service/mturk-requester`: Updates service documentation + +Release v1.12.69 (2018-01-26) +=== + +### SDK Bugs +* `models/api`: Fix colliding names [#1754](https://github.com/aws/aws-sdk-go/pull/1754) [#1756](https://github.com/aws/aws-sdk-go/pull/1756) + * SDK had duplicate folders that were causing errors in some builds. + * Fixes [#1753](https://github.com/aws/aws-sdk-go/issues/1753) +Release v1.12.68 (2018-01-25) +=== + +### Service Client Updates +* `service/alexaforbusiness`: Updates service API and documentation +* `service/codebuild`: Updates service API and documentation + * Adding support for Shallow Clone and GitHub Enterprise in AWS CodeBuild. +* `aws/endpoints`: Updated Regions and Endpoints metadata. 
+* `service/guardduty`: Adds new service + * Added the missing AccessKeyDetails object to the resource shape. +* `service/lambda`: Updates service API and documentation + * AWS Lambda now supports Revision ID on your function versions and aliases, to track and apply conditional updates when you are updating your function version or alias resources. + +### SDK Bugs +* `service/s3/s3manager`: Fix check for nil OrigErr in Error() [#1749](https://github.com/aws/aws-sdk-go/issues/1749) + * S3 Manager's `Error` type did not check for nil of `OrigErr` when calling `Error()` + * Fixes [#1748](https://github.com/aws/aws-sdk-go/issues/1748) +Release v1.12.67 (2018-01-22) +=== + +### Service Client Updates +* `service/budgets`: Updates service API and documentation + * Add additional costTypes: IncludeDiscount, UseAmortized, to support finer control for different charges included in a cost budget. + +Release v1.12.66 (2018-01-19) +=== + +### Service Client Updates +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/glue`: Updates service API and documentation + * New AWS Glue DataCatalog APIs to manage table versions and a new feature to skip archiving of the old table version when updating table. +* `service/transcribe`: Adds new service + +Release v1.12.65 (2018-01-18) +=== + +### Service Client Updates +* `service/sagemaker`: Updates service API and documentation + * CreateTrainingJob and CreateEndpointConfig now supports KMS Key for volume encryption. + +Release v1.12.64 (2018-01-17) +=== + +### Service Client Updates +* `service/autoscaling-plans`: Updates service documentation +* `service/ec2`: Updates service documentation + * Documentation updates for EC2 + +Release v1.12.63 (2018-01-17) +=== + +### Service Client Updates +* `service/application-autoscaling`: Updates service API and documentation +* `service/autoscaling-plans`: Adds new service +* `service/rds`: Updates service API and documentation + * With this release you can now integrate RDS DB instances with CloudWatch Logs. We have added parameters to the operations for creating and modifying DB instances (for example CreateDBInstance) to allow you to take advantage of this capability through the CLI and API. Once you enable this feature, a stream of log events will publish to CloudWatch Logs for each log type you enable. + +Release v1.12.62 (2018-01-15) +=== + +### Service Client Updates +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/lambda`: Updates service API and documentation + * Support for creating Lambda Functions using 'dotnetcore2.0' and 'go1.x'. + +Release v1.12.61 (2018-01-12) +=== + +### Service Client Updates +* `service/glue`: Updates service API and documentation + * Support is added to generate ETL scripts in Scala which can now be run by AWS Glue ETL jobs. In addition, the trigger API now supports firing when any conditions are met (in addition to all conditions). Also, jobs can be triggered based on a "failed" or "stopped" job run (in addition to a "succeeded" job run). 
+
+Release v1.12.60 (2018-01-11)
+===
+
+### Service Client Updates
+* `service/elasticloadbalancing`: Updates service API and documentation
+* `service/elasticloadbalancingv2`: Updates service API and documentation
+* `service/rds`: Updates service API and documentation
+  * Read Replicas for Amazon RDS for MySQL, MariaDB, and PostgreSQL now support Multi-AZ deployments. Amazon RDS Read Replicas enable you to create one or more read-only copies of your database instance within the same AWS Region or in a different AWS Region. Updates made to the source database are asynchronously copied to the Read Replicas. In addition to providing scalability for read-heavy workloads, you can choose to promote a Read Replica to become a standalone DB instance when needed. Amazon RDS Multi-AZ Deployments provide enhanced availability for database instances within a single AWS Region. With Multi-AZ, your data is synchronously replicated to a standby in a different Availability Zone (AZ). In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby, minimizing disruption to your applications. You can now combine Read Replicas with Multi-AZ as part of a disaster recovery strategy for your production databases. A well-designed and tested plan is critical for maintaining business continuity after a disaster. Since Read Replicas can also be created in different regions than the source database, your Read Replica can be promoted to become the new production database in case of a regional disruption. You can also combine Read Replicas with Multi-AZ for your database engine upgrade process. You can create a Read Replica of your production database instance and upgrade it to a new database engine version. When the upgrade is complete, you can stop applications, promote the Read Replica to a standalone database instance, and switch over your applications. Since the database instance is already a Multi-AZ deployment, no additional steps are needed. For more information, see the Amazon RDS User Guide.
+* `service/ssm`: Updates service documentation
+  * Updates documentation for the HierarchyLevelLimitExceededException error.
+
+Release v1.12.59 (2018-01-09)
+===
+
+### Service Client Updates
+* `service/kms`: Updates service documentation
+  * Documentation updates for AWS KMS
+
+Release v1.12.58 (2018-01-09)
+===
+
+### Service Client Updates
+* `service/ds`: Updates service API and documentation
+  * On October 24 we introduced AWS Directory Service for Microsoft Active Directory (Standard Edition), also known as AWS Microsoft AD (Standard Edition), which is a managed Microsoft Active Directory (AD) that is optimized for small and midsize businesses (SMBs). With this SDK release, you can now create an AWS Microsoft AD directory using the API. This enables you to run typical SMB workloads using a cost-effective, highly available, and managed Microsoft AD in the AWS Cloud.
+
+Release v1.12.57 (2018-01-08)
+===
+
+### Service Client Updates
+* `service/codedeploy`: Updates service API and documentation
+  * The AWS CodeDeploy API was updated to support DeleteGitHubAccountToken, a new method that deletes a GitHub account connection.
+* `service/discovery`: Updates service API and documentation
+  * Documentation updates for AWS Application Discovery Service.
+* `service/route53`: Updates service API and documentation
+  * This release adds an exception to the CreateTrafficPolicyVersion API operation.
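+
+As a sketch of the Multi-AZ Read Replica support described under v1.12.60 above (the instance identifiers and region are placeholders, and `MultiAZ` on `CreateDBInstanceReadReplicaInput` is assumed to be the parameter surfaced by that update):
+
+```go
+package main
+
+import (
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/rds"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := rds.New(sess)
+
+	// Create a read replica of an existing instance as a Multi-AZ deployment.
+	out, err := svc.CreateDBInstanceReadReplica(&rds.CreateDBInstanceReadReplicaInput{
+		DBInstanceIdentifier:       aws.String("example-replica"), // placeholder
+		SourceDBInstanceIdentifier: aws.String("example-source"),  // placeholder
+		MultiAZ:                    aws.Bool(true),                // assumed new parameter
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	log.Println(out)
+}
+```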
+ +Release v1.12.56 (2018-01-05) +=== + +### Service Client Updates +* `service/inspector`: Updates service API, documentation, and examples + * Added 2 new attributes to the DescribeAssessmentTemplate response, indicating the total number of assessment runs and last assessment run ARN (if present.) +* `service/snowball`: Updates service documentation + * Documentation updates for snowball +* `service/ssm`: Updates service documentation + * Documentation updates for ssm + +Release v1.12.55 (2018-01-02) +=== + +### Service Client Updates +* `service/rds`: Updates service documentation + * Documentation updates for rds + +Release v1.12.54 (2017-12-29) +=== + +### Service Client Updates +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/workspaces`: Updates service API and documentation + * Modify WorkSpaces have been updated with flexible storage and switching of hardware bundles feature. The following configurations have been added to ModifyWorkSpacesProperties: storage and compute. This update provides the capability to configure the storage of a WorkSpace. It also adds the capability of switching hardware bundle of a WorkSpace by specifying an eligible compute (Value, Standard, Performance, Power). + +Release v1.12.53 (2017-12-22) +=== + +### Service Client Updates +* `service/ec2`: Updates service API + * This release fixes an issue with tags not showing in DescribeAddresses responses. +* `service/ecs`: Updates service API and documentation + * Amazon ECS users can now set a health check initialization wait period of their ECS services, the services that are associated with an Elastic Load Balancer (ELB) will wait for a period of time before the ELB become healthy. You can now configure this in Create and Update Service. +* `service/inspector`: Updates service API and documentation + * PreviewAgents API now returns additional fields within the AgentPreview data type. The API now shows the agent health and availability status for all instances included in the assessment target. This allows users to check the health status of Inspector Agents before running an assessment. In addition, it shows the instance ID, hostname, and IP address of the targeted instances. +* `service/sagemaker`: Updates service API and documentation + * SageMaker Models no longer support SupplementalContainers. API's that have been affected are CreateModel and DescribeModel. + +Release v1.12.52 (2017-12-21) +=== + +### Service Client Updates +* `service/codebuild`: Updates service API and documentation + * Adding support allowing AWS CodeBuild customers to select specific curated image versions. +* `service/ec2`: Updates service API and documentation + * Elastic IP tagging enables you to add key and value metadata to your Elastic IPs so that you can search, filter, and organize them according to your organization's needs. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/kinesisanalytics`: Updates service API and documentation + * Kinesis Analytics now supports AWS Lambda functions as output. + +Release v1.12.51 (2017-12-21) +=== + +### Service Client Updates +* `service/config`: Updates service API +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/iot`: Updates service API and documentation + * This release adds support for code signed Over-the-air update functionality for Amazon FreeRTOS. Users can now create and schedule Over-the-air updates to their Amazon FreeRTOS devices using these new APIs. 
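+
+Illustrating the Elastic IP tagging noted under v1.12.52 above, a minimal Go sketch using the existing `CreateTags` operation against an allocation ID (the allocation ID, tag key, and tag value are placeholders):
+
+```go
+package main
+
+import (
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/ec2"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := ec2.New(sess)
+
+	// Tag an Elastic IP by its allocation ID, the same way other EC2
+	// resources are tagged.
+	_, err := svc.CreateTags(&ec2.CreateTagsInput{
+		Resources: []*string{aws.String("eipalloc-0123456789abcdef0")}, // placeholder
+		Tags: []*ec2.Tag{
+			{Key: aws.String("team"), Value: aws.String("platform")},
+		},
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+}
+```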
+ +Release v1.12.50 (2017-12-19) +=== + +### Service Client Updates +* `service/apigateway`: Updates service API and documentation + * API Gateway now adds support for calling API with compressed payloads using one of the supported content codings, tagging an API stage for cost allocation, and returning API keys from a custom authorizer for use with a usage plan. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/mediastore-data`: Updates service documentation +* `service/route53`: Updates service API and documentation + * Route 53 added support for a new China (Ningxia) region, cn-northwest-1. You can now specify cn-northwest-1 as the region for latency-based or geoproximity routing. Route 53 also added support for a new EU (Paris) region, eu-west-3. You can now associate VPCs in eu-west-3 with private hosted zones and create alias records that route traffic to resources in eu-west-3. + +Release v1.12.49 (2017-12-19) +=== + +### Service Client Updates +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/monitoring`: Updates service documentation + * Documentation updates for monitoring + +Release v1.12.48 (2017-12-15) +=== + +### Service Client Updates +* `service/appstream`: Updates service API and documentation + * This API update is to enable customers to add tags to their Amazon AppStream 2.0 resources + +Release v1.12.47 (2017-12-14) +=== + +### Service Client Updates +* `service/apigateway`: Updates service API and documentation + * Adds support for Cognito Authorizer scopes at the API method level. +* `service/email`: Updates service documentation + * Added information about the maximum number of transactions per second for the SendCustomVerificationEmail operation. +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.12.46 (2017-12-12) +=== + +### Service Client Updates +* `service/workmail`: Adds new service + * Today, Amazon WorkMail released an administrative SDK and enabled AWS CloudTrail integration. With the administrative SDK, you can natively integrate WorkMail with your existing services. The SDK enables programmatic user, resource, and group management through API calls. This means your existing IT tools and workflows can now automate WorkMail management, and third party applications can streamline WorkMail migrations and account actions. + +Release v1.12.45 (2017-12-11) +=== + +### Service Client Updates +* `service/cognito-idp`: Updates service API and documentation +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/lex-models`: Updates service API and documentation +* `service/sagemaker`: Updates service API + * CreateModel API Update: The request parameter 'ExecutionRoleArn' has changed from optional to required. + +Release v1.12.44 (2017-12-08) +=== + +### Service Client Updates +* `service/appstream`: Updates service API and documentation + * This API update is to support the feature that allows customers to automatically consume the latest Amazon AppStream 2.0 agent as and when published by AWS. +* `service/ecs`: Updates service documentation + * Documentation updates for Windows containers. +* `service/monitoring`: Updates service API and documentation + * With this launch, you can now create a CloudWatch alarm that alerts you when M out of N datapoints of a metric are breaching your predefined threshold, such as three out of five times in any given five minutes interval or two out of six times in a thirty minutes interval. 
When M out of N datapoints are not breaching your threshold in an interval, the alarm will be in OK state. Please note that the M datapoints out of N datapoints in an interval can be of any order and does not need to be consecutive. Consequently, you can now get alerted even when the spikes in your metrics are intermittent over an interval. + +Release v1.12.43 (2017-12-07) +=== + +### Service Client Updates +* `service/email`: Updates service API, documentation, and paginators + * Customers can customize the emails that Amazon SES sends when verifying new identities. This feature is helpful for developers whose applications send email through Amazon SES on behalf of their customers. +* `service/es`: Updates service API and documentation + * Added support for encryption of data at rest on Amazon Elasticsearch Service using AWS KMS + +### SDK Bugs +* `models/apis` Fixes removes colliding sagemaker models folders ([#1686](https://github.com/aws/aws-sdk-go/pull/1686)) + * Fixes Release v1.12.42's SageMaker vs sagemaker model folders. +Release v1.12.42 (2017-12-06) +=== + +### Service Client Updates +* `service/clouddirectory`: Updates service API and documentation + * Amazon Cloud Directory makes it easier for you to apply schema changes across your directories with in-place schema upgrades. Your directories now remain available while backward-compatible schema changes are being applied, such as the addition of new fields. You also can view the history of your schema changes in Cloud Directory by using both major and minor version identifiers, which can help you track and audit schema versions across directories. +* `service/elasticbeanstalk`: Updates service documentation + * Documentation updates for AWS Elastic Beanstalk. +* `service/sagemaker`: Adds new service + * Initial waiters for common SageMaker workflows. + +Release v1.12.41 (2017-12-05) +=== + +### Service Client Updates +* `service/iot`: Updates service API and documentation + * Add error action API for RulesEngine. +* `service/servicecatalog`: Updates service API and documentation + * ServiceCatalog has two distinct personas for its use, an "admin" persona (who creates sets of products with different versions and prescribes who has access to them) and an "end-user" persona (who can launch cloud resources based on the configuration data their admins have given them access to). This API update will allow admin users to deactivate/activate product versions, end-user will only be able to access and launch active product versions. +* `service/servicediscovery`: Adds new service + * Amazon Route 53 Auto Naming lets you configure public or private namespaces that your microservice applications run in. When instances of the service become available, you can call the Auto Naming API to register the instance, and Amazon Route 53 automatically creates up to five DNS records and an optional health check. Clients that submit DNS queries for the service receive an answer that contains up to eight healthy records. + +Release v1.12.40 (2017-12-04) +=== + +### Service Client Updates +* `service/budgets`: Updates service API and documentation + * Add additional costTypes to support finer control for different charges included in a cost budget. 
+* `service/ecs`: Updates service documentation
+  * Documentation updates for ecs
+
+Release v1.12.39 (2017-12-01)
+===
+
+### Service Client Updates
+* `service/SageMaker`: Updates service waiters
+
+Release v1.12.38 (2017-11-30)
+===
+
+### Service Client Updates
+* `service/AWSMoneypenny`: Adds new service
+* `service/Cloud9`: Adds new service
+* `service/Serverless Registry`: Adds new service
+* `service/apigateway`: Updates service API, documentation, and paginators
+  * Added support for the Private Integration and VPC Link features in API Gateway. This allows you to create an API with the API Gateway private integration, thus providing clients access to HTTP/HTTPS resources in an Amazon VPC from outside of the VPC through a VpcLink resource.
+* `service/ec2`: Updates service API and documentation
+  * Adds the following updates: 1. Spread Placement ensures that instances are placed on distinct hardware in order to reduce correlated failures. 2. Inter-region VPC Peering allows customers to peer VPCs across different AWS regions without requiring additional gateways, VPN connections, or physical hardware.
+* `service/lambda`: Updates service API and documentation
+  * AWS Lambda now supports the ability to set concurrency limits for individual functions, and increases the maximum memory size to 3008 MB.
+
+Release v1.12.37 (2017-11-30)
+===
+
+### Service Client Updates
+* `service/Ardi`: Adds new service
+* `service/autoscaling`: Updates service API and documentation
+  * You can now use Auto Scaling with EC2 Launch Templates via the CreateAutoScalingGroup and UpdateAutoScalingGroup APIs.
+* `service/ec2`: Updates service API and documentation
+  * Adds the following updates: 1. T2 Unlimited enables high CPU performance for any period of time whenever required. 2. You are now able to create and launch EC2 m5 and h1 instances.
+* `service/lightsail`: Updates service API and documentation
+  * This release adds support for load balancer and TLS/SSL certificate management. This set of APIs allows customers to create, manage, and scale secure load balanced applications on Lightsail infrastructure. To provide support for customers who manage their DNS on Lightsail, we've added the ability to create an Alias A type record that can point to a load balancer DNS name via the CreateDomainEntry API http://docs.aws.amazon.com/lightsail/2016-11-28/api-reference/API_CreateDomainEntry.html.
+* `service/ssm`: Updates service API and documentation
+  * This release updates AWS Systems Manager APIs to enable executing automations at a controlled rate, targeting resources in resource groups, and executing an entire automation at once or a single step at a time. It is now also possible to use YAML, in addition to JSON, when creating Systems Manager documents.
+* `service/waf`: Updates service API and documentation
+  * This release adds support for rule groups and managed rule groups. A rule group is a container of rules that customers can create, put rules in, and associate with a WebACL. All rules in a rule group function identically to how they would if each rule were individually associated with the WebACL. A managed rule group is a pre-configured rule group composed by our security partners and made available via the AWS Marketplace. Customers can subscribe to these managed rule groups, associate a managed rule group with their WebACL, and start using them immediately to protect their resources.
+* `service/waf-regional`: Updates service API and documentation + +Release v1.12.36 (2017-11-29) +=== + +### Service Client Updates +* `service/DeepInsight`: Adds new service +* `service/IronmanRuntime`: Adds new service +* `service/Orchestra - Laser`: Adds new service +* `service/SageMaker`: Adds new service +* `service/Shine`: Adds new service +* `service/archived.kinesisvideo`: Adds new service +* `service/data.kinesisvideo`: Adds new service +* `service/dynamodb`: Updates service API and documentation + * Amazon DynamoDB now supports the following features: Global Table and On-Demand Backup. Global Table is a fully-managed, multi-region, multi-master database. DynamoDB customers can now write anywhere and read anywhere with single-digit millisecond latency by performing database operations closest to where end users reside. Global Table also enables customers to disaster-proof their applications, keeping them running and data accessible even in the face of natural disasters or region disruptions. Customers can set up Global Table with just a few clicks in the AWS Management Console-no application rewrites required. On-Demand Backup capability is to protect data from loss due to application errors, and meet customers' archival needs for compliance and regulatory reasons. Customers can backup and restore their DynamoDB table data anytime, with a single-click in the AWS management console or a single API call. Backup and restore actions execute with zero impact on table performance or availability. For more information, see the Amazon DynamoDB Developer Guide. +* `service/ecs`: Updates service API and documentation + * Amazon Elastic Container Service (Amazon ECS) released a new launch type for running containers on a serverless infrastructure. The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you. +* `service/glacier`: Updates service API and documentation + * This release includes support for Glacier Select, a new feature that allows you to filter and analyze your Glacier archives and store the results in a user-specified S3 location. +* `service/greengrass`: Updates service API and documentation + * Greengrass OTA feature allows updating Greengrass Core and Greengrass OTA Agent. Local Resource Access feature allows Greengrass Lambdas to access local resources such as peripheral devices and volumes. +* `service/iot`: Updates service API and documentation + * This release adds support for a number of new IoT features, including AWS IoT Device Management (Jobs, Fleet Index and Thing Registration), Thing Groups, Policies on Thing Groups, Registry & Job Events, JSON Logs, Fine-Grained Logging Controls, Custom Authorization and AWS Service Authentication Using X.509 Certificates. +* `service/kinesisvideo`: Adds new service + * Announcing Amazon Kinesis Video Streams, a fully managed video ingestion and storage service. Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for machine learning, analytics, and processing. You can also stream other time-encoded data like RADAR and LIDAR signals using Kinesis Video Streams. +* `service/rekognition`: Updates service API, documentation, and paginators + * This release introduces Amazon Rekognition support for video analysis. 
+* `service/s3`: Updates service API and documentation + * This release includes support for Glacier Select, a new feature that allows you to filter and analyze your Glacier storage class objects and store the results in a user-specified S3 location. + +Release v1.12.35 (2017-11-29) +=== + +### Service Client Updates +* `service/AmazonMQ`: Adds new service +* `service/GuardDuty`: Adds new service +* `service/apigateway`: Updates service API and documentation + * Changes related to CanaryReleaseDeployment feature. Enables API developer to create a deployment as canary deployment and test API changes with percentage of customers before promoting changes to all customers. +* `service/batch`: Updates service API and documentation + * Add support for Array Jobs which allow users to easily submit many copies of a job with a single API call. This change also enhances the job dependency model to support N_TO_N and sequential dependency chains. The ListJobs and DescribeJobs APIs now have the ability to list or describe the status of entire Array Jobs or individual elements within the array. +* `service/cognito-idp`: Updates service API and documentation +* `service/deepdish`: Adds new service + * AWS AppSync is an enterprise-level, fully managed GraphQL service with real-time data synchronization and offline programming features. +* `service/ec2`: Updates service API and documentation + * Adds the following updates: 1. You are now able to host a service powered by AWS PrivateLink to provide private connectivity to other VPCs. You are now also able to create endpoints to other services powered by PrivateLink including AWS services, Marketplace Seller services or custom services created by yourself or other AWS VPC customers. 2. You are now able to save launch parameters in a single template that can be used with Auto Scaling, Spot Fleet, Spot, and On Demand instances. 3. You are now able to launch Spot instances via the RunInstances API, using a single additional parameter. RunInstances will response synchronously with an instance ID should capacity be available for your Spot request. 4. A simplified Spot pricing model which delivers low, predictable prices that adjust gradually, based on long-term trends in supply and demand. 5. Amazon EC2 Spot can now hibernate Amazon EBS-backed instances in the event of an interruption, so your workloads pick up from where they left off. Spot can fulfill your request by resuming instances from a hibernated state when capacity is available. +* `service/lambda`: Updates service API and documentation + * Lambda aliases can now shift traffic between two function versions, based on preassigned weights. + +Release v1.12.34 (2017-11-27) +=== + +### Service Client Updates +* `service/data.mediastore`: Adds new service +* `service/mediaconvert`: Adds new service + * AWS Elemental MediaConvert is a file-based video conversion service that transforms media into formats required for traditional broadcast and for internet streaming to multi-screen devices. +* `service/medialive`: Adds new service + * AWS Elemental MediaLive is a video service that lets you easily create live outputs for broadcast and streaming delivery. +* `service/mediapackage`: Adds new service + * AWS Elemental MediaPackage is a just-in-time video packaging and origination service that lets you format highly secure and reliable live outputs for a variety of devices. +* `service/mediastore`: Adds new service + * AWS Elemental MediaStore is an AWS storage service optimized for media. 
It gives you the performance, consistency, and low latency required to deliver live and on-demand video content. AWS Elemental MediaStore acts as the origin store in your video workflow. + +Release v1.12.33 (2017-11-22) +=== + +### Service Client Updates +* `service/acm`: Updates service API and documentation + * AWS Certificate Manager now supports the ability to import domainless certs and additional Key Types as well as an additional validation method for DNS. +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.12.32 (2017-11-22) +=== + +### Service Client Updates +* `service/apigateway`: Updates service API and documentation + * Add support for Access logs and customizable integration timeouts +* `service/cloudformation`: Updates service API and documentation + * 1) Instance-level parameter overrides (CloudFormation-StackSet feature): This feature will allow the customers to override the template parameters on specific stackInstances. Customers will also have ability to update their existing instances with/without parameter-overrides using a new API "UpdateStackInstances" 2) Add support for SSM parameters in CloudFormation - This feature will allow the customers to use Systems Manager parameters in CloudFormation templates. They will be able to see values for these parameters in Describe APIs. +* `service/codebuild`: Updates service API and documentation + * Adding support for accessing Amazon VPC resources from AWS CodeBuild, dependency caching and build badges. +* `service/elasticmapreduce`: Updates service API and documentation + * Enable Kerberos on Amazon EMR. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/rekognition`: Updates service API and documentation + * This release includes updates to Amazon Rekognition for the following APIs. The new DetectText API allows you to recognize and extract textual content from images. Face Model Versioning has been added to operations that deal with face detection. +* `service/shield`: Updates service API, documentation, and paginators + * The AWS Shield SDK has been updated in order to support Elastic IP address protections, the addition of AttackProperties objects in DescribeAttack responses, and a new GetSubscriptionState operation. +* `service/storagegateway`: Updates service API and documentation + * AWS Storage Gateway now enables you to get notification when all your files written to your NFS file share have been uploaded to Amazon S3. Storage Gateway also enables guessing of the MIME type for uploaded objects based on file extensions. +* `service/xray`: Updates service API, documentation, and paginators + * Added automatic pagination support for AWS X-Ray APIs in the SDKs that support this feature. + +Release v1.12.31 (2017-11-20) +=== + +### Service Client Updates +* `service/apigateway`: Updates service documentation + * Documentation updates for Apigateway +* `service/codecommit`: Updates service API, documentation, and paginators + * AWS CodeCommit now supports pull requests. You can use pull requests to collaboratively review code changes for minor changes or fixes, major feature additions, or new versions of your released software. +* `service/firehose`: Updates service API and documentation + * This release includes a new Kinesis Firehose feature that supports Splunk as Kinesis Firehose delivery destination. You can now use Kinesis Firehose to ingest real-time data to Splunk in a serverless, reliable, and salable manner. 
This release also includes a new feature that allows you to configure the Lambda buffer size in the Kinesis Firehose data transformation feature. You can now customize the data buffer size before invoking the Lambda function in Kinesis Firehose for data transformation. This feature allows you to flexibly trade off processing and delivery latency with cost and efficiency based on your specific use cases and requirements.
+* `service/iis`: Adds new service
+  * The AWS Cost Explorer API gives customers programmatic access to AWS cost and usage information, allowing them to perform ad hoc queries and build interactive cost management applications that leverage this dataset.
+* `service/kinesis`: Updates service API and documentation
+  * Customers can now obtain the important characteristics of their stream with DescribeStreamSummary. The response will not include the shard list for the stream but will have the number of open shards, and all the other fields included in the DescribeStream response.
+* `service/workdocs`: Updates service API and documentation
+  * DescribeGroups API and miscellaneous enhancements
+
+### SDK Bugs
+* `aws/client`: Retry delays for throttled exceptions were not limited to 5 minutes [#1654](https://github.com/aws/aws-sdk-go/pull/1654)
+  * Fixes [#1653](https://github.com/aws/aws-sdk-go/issues/1653)
+Release v1.12.30 (2017-11-17)
+===
+
+### Service Client Updates
+* `service/application-autoscaling`: Updates service API and documentation
+* `service/dms`: Updates service API, documentation, and paginators
+  * Support for migration task assessment. Support for data validation after the migration.
+* `service/elasticloadbalancingv2`: Updates service API and documentation
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
+* `service/rds`: Updates service API and documentation
+  * Amazon RDS now supports importing MySQL databases by using backup files from Amazon S3.
+* `service/s3`: Updates service API
+  * Added ORC to the supported S3 Inventory formats.
+
+### SDK Bugs
+* `private/protocol/restjson`: Define JSONValue marshaling for body and querystring ([#1640](https://github.com/aws/aws-sdk-go/pull/1640))
+  * Adds support for APIs which use JSONValue for body and querystring targets.
+  * Fixes [#1636](https://github.com/aws/aws-sdk-go/issues/1636)
+Release v1.12.29 (2017-11-16)
+===
+
+### Service Client Updates
+* `service/application-autoscaling`: Updates service API and documentation
+* `service/ec2`: Updates service API
+  * You are now able to create and launch EC2 x1e smaller instance sizes
+* `service/glue`: Updates service API and documentation
+  * API update for AWS Glue. New crawler configuration attribute enables customers to specify crawler behavior. New XML classifier enables classification of XML data.
+* `service/opsworkscm`: Updates service API, documentation, and waiters
+  * Documentation updates for OpsWorks-cm: a new feature, OpsWorks for Puppet Enterprise, that allows users to create and manage OpsWorks-hosted Puppet Enterprise servers.
+* `service/organizations`: Updates service API, documentation, and paginators
+  * This release adds APIs that you can use to enable and disable integration with AWS services designed to work with AWS Organizations. This integration allows the AWS service to perform operations on your behalf on all of the accounts in your organization. Although you can use these APIs yourself, we recommend that you instead use the commands provided in the other AWS service to enable integration with AWS Organizations.
+* `service/route53`: Updates service API and documentation + * You can use Route 53's GetAccountLimit/GetHostedZoneLimit/GetReusableDelegationSetLimit APIs to view your current limits (including custom set limits) on Route 53 resources such as hosted zones and health checks. These APIs also return the number of each resource you're currently using to enable comparison against your current limits. + +Release v1.12.28 (2017-11-15) +=== + +### Service Client Updates +* `service/apigateway`: Updates service API and documentation + * 1. Extended GetDocumentationParts operation to support retrieving documentation parts resources without contents. 2. Added hosted zone ID in the custom domain response. +* `service/email`: Updates service API, documentation, and examples + * SES launches Configuration Set Reputation Metrics and Email Pausing Today, two features that build upon the capabilities of the reputation dashboard. The first is the ability to export reputation metrics for individual configuration sets. The second is the ability to temporarily pause email sending, either at the configuration set level, or across your entire Amazon SES account. +* `service/polly`: Updates service API + * Amazon Polly adds Korean language support with new female voice - "Seoyeon" and new Indian English female voice - "Aditi" +* `service/states`: Updates service API and documentation + * You can now use the UpdateStateMachine API to update your state machine definition and role ARN. Existing executions will continue to use the previous definition and role ARN. You can use the DescribeStateMachineForExecution API to determine which state machine definition and role ARN is associated with an execution + +Release v1.12.27 (2017-11-14) +=== + +### Service Client Updates +* `service/ecs`: Updates service API and documentation + * Added new mode for Task Networking in ECS, called awsvpc mode. Mode configuration parameters to be passed in via awsvpcConfiguration. Updated APIs now use/show this new mode - RegisterTaskDefinition, CreateService, UpdateService, RunTask, StartTask. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/lightsail`: Updates service API and documentation + * Lightsail now supports attached block storage, which allows you to scale your applications and protect application data with additional SSD-backed storage disks. This feature allows Lightsail customers to attach secure storage disks to their Lightsail instances and manage their attached disks, including creating and deleting disks, attaching and detaching disks from instances, and backing up disks via snapshot. +* `service/route53`: Updates service API and documentation + * When a Route 53 health check or hosted zone is created by a linked AWS service, the object now includes information about the service that created it. Hosted zones or health checks that are created by a linked service can't be updated or deleted using Route 53. +* `service/ssm`: Updates service API and documentation + * EC2 Systems Manager GetInventory API adds support for aggregation. + +### SDK Enhancements +* `aws/request`: Remove default port from HTTP host header ([#1618](https://github.com/aws/aws-sdk-go/pull/1618)) + * Updates the SDK to automatically remove default ports based on the URL's scheme when setting the HTTP Host header's value. 
+ * Fixes [#1537](https://github.com/aws/aws-sdk-go/issues/1537) + +Release v1.12.26 (2017-11-09) +=== + +### Service Client Updates +* `service/ec2`: Updates service API and documentation + * Introduces the following features: 1. Create a default subnet in an Availability Zone if no default subnet exists. 2. Spot Fleet integrates with Elastic Load Balancing to enable you to attach one or more load balancers to a Spot Fleet request. When you attach the load balancer, it automatically registers the instance in the Spot Fleet to the load balancers which distributes incoming traffic across the instances. +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.12.25 (2017-11-08) +=== + +### Service Client Updates +* `service/application-autoscaling`: Updates service API and documentation +* `service/batch`: Updates service documentation + * Documentation updates for AWS Batch. +* `service/ec2`: Updates service API and documentation + * AWS PrivateLink for Amazon Services - Customers can now privately access Amazon services from their Amazon Virtual Private Cloud (VPC), without using public IPs, and without requiring the traffic to traverse across the Internet. +* `service/elasticache`: Updates service API and documentation + * This release adds online resharding for ElastiCache for Redis offering, providing the ability to add and remove shards from a running cluster. Developers can now dynamically scale-out or scale-in their Redis cluster workloads to adapt to changes in demand. ElastiCache will resize the cluster by adding or removing shards and redistribute hash slots uniformly across the new shard configuration, all while the cluster continues to stay online and serves requests. +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.12.24 (2017-11-07) +=== + +### Service Client Updates +* `service/elasticloadbalancingv2`: Updates service documentation +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/rds`: Updates service API and documentation + * DescribeOrderableDBInstanceOptions now returns the minimum and maximum allowed values for storage size, total provisioned IOPS, and provisioned IOPS per GiB for a DB instance. +* `service/s3`: Updates service API, documentation, and examples + * This releases adds support for 4 features: 1. Default encryption for S3 Bucket, 2. Encryption status in inventory and Encryption support for inventory. 3. Cross region replication of KMS-encrypted objects, and 4. ownership overwrite for CRR. + +Release v1.12.23 (2017-11-07) +=== + +### Service Client Updates +* `service/api.pricing`: Adds new service +* `service/ec2`: Updates service API + * You are now able to create and launch EC2 C5 instances, the next generation of EC2's compute-optimized instances, in us-east-1, us-west-2 and eu-west-1. C5 instances offer up to 72 vCPUs, 144 GiB of DDR4 instance memory, 25 Gbps in Network bandwidth and improved EBS and Networking bandwidth on smaller instance sizes to deliver improved performance for compute-intensive workloads. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/kms`: Updates service API, documentation, and examples + * Documentation updates for AWS KMS. +* `service/organizations`: Updates service documentation + * This release updates permission statements for several API operations, and corrects some other minor errors. +* `service/states`: Updates service API, documentation, and paginators + * Documentation update. 
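+
+A minimal Go sketch of the S3 default bucket encryption added in v1.12.24 above (the bucket name is a placeholder; SSE-S3/AES256 is shown here, though a KMS key can be configured instead):
+
+```go
+package main
+
+import (
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/s3"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := s3.New(sess)
+
+	// Apply a default server-side encryption rule to the bucket.
+	_, err := svc.PutBucketEncryption(&s3.PutBucketEncryptionInput{
+		Bucket: aws.String("example-bucket"), // placeholder
+		ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
+			Rules: []*s3.ServerSideEncryptionRule{{
+				ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
+					SSEAlgorithm: aws.String("AES256"),
+				},
+			}},
+		},
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+}
+```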
+
+Release v1.12.22 (2017-11-03)
+===
+
+### Service Client Updates
+* `service/ecs`: Updates service API and documentation
+  * Amazon ECS users can now add devices to their containers and enable an init process in containers through the use of docker's 'devices' and 'init' features. These fields can be specified under linuxParameters in ContainerDefinition in the Task Definition Template.
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
+
+Release v1.12.21 (2017-11-02)
+===
+
+### Service Client Updates
+* `service/apigateway`: Updates service API and documentation
+  * This release supports creating and managing Regional and Edge-Optimized API endpoints.
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
+
+### SDK Bugs
+* `aws/request`: Fix bug in request presign creating an invalid URL ([#1624](https://github.com/aws/aws-sdk-go/pull/1624))
+  * Fixes a bug in the Request Presign and PresignRequest methods that allowed an invalid expire duration as input. An expire time of 0 was interpreted by the SDK as a request for a normal request signature, not a presigned URL, which made the returned URL unusable.
+  * Fixes [#1617](https://github.com/aws/aws-sdk-go/issues/1617)
+Release v1.12.20 (2017-11-01)
+===
+
+### Service Client Updates
+* `service/acm`: Updates service documentation
+  * Documentation updates for ACM
+* `service/cloudhsmv2`: Updates service documentation
+  * Minor documentation update for AWS CloudHSM (cloudhsmv2).
+* `service/directconnect`: Updates service API and documentation
+  * AWS DirectConnect now provides support for Global Access for Virtual Private Cloud (VPC) via a new feature called Direct Connect Gateway. A Direct Connect Gateway will allow you to group multiple Direct Connect Private Virtual Interfaces (DX-VIF) and Private Virtual Gateways (VGW) from different AWS regions (but belonging to the same AWS Account) and pass traffic from any DX-VIF to any VPC in the grouping.
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
+
+### SDK Enhancements
+* `aws/client`: Adding status code 429 to throttlable status codes in default retryer (#1621)
+
+Release v1.12.19 (2017-10-26)
+===
+
+### Service Client Updates
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
+
+Release v1.12.18 (2017-10-26)
+===
+
+### Service Client Updates
+* `service/cloudfront`: Updates service API and documentation
+  * You can now specify additional options for MinimumProtocolVersion, which controls the SSL/TLS protocol that CloudFront uses to communicate with viewers. The minimum protocol version that you choose also determines the ciphers that CloudFront uses to encrypt the content that it returns to viewers.
+* `service/ec2`: Updates service API
+  * You are now able to create and launch EC2 P3 instances, the next generation of GPU instances, optimized for machine learning and high performance computing applications. With up to eight NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance, as well as a 300 GB/s second-generation NVLink interconnect that enables high-speed, low-latency GPU-to-GPU communication. P3 instances also feature up to 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors, 488 GB of DRAM, and 25 Gbps of dedicated aggregate network bandwidth using the Elastic Network Adapter (ENA).
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
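+
+To illustrate the presign fix noted under v1.12.21 above, a minimal Go sketch that generates a presigned S3 GET URL with a non-zero expiry (the bucket and key are placeholders):
+
+```go
+package main
+
+import (
+	"log"
+	"time"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/s3"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := s3.New(sess)
+
+	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
+		Bucket: aws.String("example-bucket"), // placeholder
+		Key:    aws.String("example-key"),    // placeholder
+	})
+
+	// A zero expiry is the invalid input the v1.12.21 fix rejects; always
+	// pass a positive duration.
+	url, err := req.Presign(15 * time.Minute)
+	if err != nil {
+		log.Fatal(err)
+	}
+	log.Println(url)
+}
+```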
+
+Release v1.12.17 (2017-10-24)
+===
+
+### Service Client Updates
+* `service/config`: Updates service API
+* `service/elasticache`: Updates service API, documentation, and examples
+  * Amazon ElastiCache for Redis today announced support for data encryption both for data in-transit and data at-rest. The new encryption in-transit functionality enables ElastiCache for Redis customers to encrypt data for all communication between clients and the Redis engine, and all intra-cluster Redis communication. The encryption at-rest functionality allows customers to encrypt their S3 based backups. Customers can begin using the new functionality by simply enabling this functionality via the AWS console, and a small configuration change in their Redis clients. The ElastiCache for Redis service automatically manages the life cycle of the certificates required for encryption, including the issuance, renewal and expiration of certificates. Additionally, as part of this launch, customers will gain the ability to start using the Redis AUTH command that provides an added level of authentication.
+* `service/glue`: Adds new service
+  * AWS Glue: Adding a new API, BatchStopJobRun, to stop one or more job runs for a specified Job.
+* `service/pinpoint`: Updates service API and documentation
+  * Added support for APNs VoIP messages. Added support for collapsible IDs, message priority, and TTL for APNs and FCM/GCM.
+
+Release v1.12.16 (2017-10-23)
+===
+
+### Service Client Updates
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
+* `service/organizations`: Updates service API and documentation
+  * This release supports integrating other AWS services with AWS Organizations through the use of an IAM service-linked role called AWSServiceRoleForOrganizations. Certain operations automatically create that role if it does not already exist.
+
+Release v1.12.15 (2017-10-20)
+===
+
+### Service Client Updates
+* `service/ec2`: Updates service API and documentation
+  * Adding pagination support for DescribeSecurityGroups for EC2 Classic and VPC Security Groups
+
+Release v1.12.14 (2017-10-19)
+===
+
+### Service Client Updates
+* `service/sqs`: Updates service API and documentation
+  * Added support for tracking cost allocation by adding, updating, removing, and listing the metadata tags of Amazon SQS queues.
+* `service/ssm`: Updates service API and documentation
+  * EC2 Systems Manager versioning support for Parameter Store. Also support for referencing parameter versions in SSM Documents.
+
+Release v1.12.13 (2017-10-18)
+===
+
+### Service Client Updates
+* `service/lightsail`: Updates service API and documentation
+  * This release adds support for Windows Server-based Lightsail instances. The GetInstanceAccessDetails API now returns the password of your Windows Server-based instance when using the default key pair. GetInstanceAccessDetails also returns a PasswordData object for Windows Server instances containing the ciphertext and keyPairName. The Blueprint data type now includes a list of platform values (LINUX_UNIX or WINDOWS). The Bundle data type now includes a list of SupportedPlatforms values (LINUX_UNIX or WINDOWS).
+
+Release v1.12.12 (2017-10-17)
+===
+
+### Service Client Updates
+* `aws/endpoints`: Updated Regions and Endpoints metadata.
+* `service/es`: Updates service API and documentation
+  * This release adds support for VPC access to Amazon Elasticsearch Service.
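+
+A sketch of the SQS cost-allocation tagging noted under v1.12.14 above (the queue URL and tag values are placeholders, and `TagQueue` is assumed to be the operation exposed by that update):
+
+```go
+package main
+
+import (
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/sqs"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := sqs.New(sess)
+
+	// Attach cost-allocation tags to an existing queue.
+	_, err := svc.TagQueue(&sqs.TagQueueInput{
+		QueueUrl: aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"), // placeholder
+		Tags: map[string]*string{
+			"team": aws.String("platform"),
+		},
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+}
+```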
+ +Release v1.12.11 (2017-10-16) +=== + +### Service Client Updates +* `service/cloudhsm`: Updates service API and documentation + * Documentation updates for AWS CloudHSM Classic. +* `service/ec2`: Updates service API and documentation + * You can now change the tenancy of your VPC from dedicated to default with a single API operation. For more details refer to the documentation for changing VPC tenancy. +* `service/es`: Updates service API and documentation + * AWS Elasticsearch adds support for enabling slow log publishing. Using slow log publishing options customers can configure and enable index/query slow log publishing of their domain to preferred AWS Cloudwatch log group. +* `service/rds`: Updates service API and waiters + * Adds waiters for DBSnapshotAvailable and DBSnapshotDeleted. +* `service/waf`: Updates service API and documentation + * This release adds support for regular expressions as match conditions in rules, and support for geographical location by country of request IP address as a match condition in rules. +* `service/waf-regional`: Updates service API and documentation + +Release v1.12.10 (2017-10-12) +=== + +### Service Client Updates +* `service/codecommit`: Updates service API and documentation + * This release includes the DeleteBranch API and a change to the contents of a Commit object. +* `service/dms`: Updates service API and documentation + * This change includes addition of new optional parameter to an existing API +* `service/elasticbeanstalk`: Updates service API and documentation + * Added the ability to add, delete or update Tags +* `service/polly`: Updates service API + * Amazon Polly exposes two new voices: "Matthew" (US English) and "Takumi" (Japanese) +* `service/rds`: Updates service API and documentation + * You can now call DescribeValidDBInstanceModifications to learn what modifications you can make to your DB instance. You can use this information when you call ModifyDBInstance. + +Release v1.12.9 (2017-10-11) +=== + +### Service Client Updates +* `service/ecr`: Updates service API, documentation, and paginators + * Adds support for new API set used to manage Amazon ECR repository lifecycle policies. Amazon ECR lifecycle policies enable you to specify the lifecycle management of images in a repository. The configuration is a set of one or more rules, where each rule defines an action for Amazon ECR to apply to an image. This allows the automation of cleaning up unused images, for example expiring images based on age or status. A lifecycle policy preview API is provided as well, which allows you to see the impact of a lifecycle policy on an image repository before you execute it +* `service/email`: Updates service API and documentation + * Added content related to email template management and templated email sending operations. +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.12.8 (2017-10-10) +=== + +### Service Client Updates +* `service/ec2`: Updates service API and documentation + * This release includes updates to AWS Virtual Private Gateway. +* `service/elasticloadbalancingv2`: Updates service API and documentation +* `service/opsworkscm`: Updates service API and documentation + * Provide engine specific information for node associations. + +Release v1.12.7 (2017-10-06) +=== + +### Service Client Updates +* `service/sqs`: Updates service documentation + * Documentation updates regarding availability of FIFO queues and miscellaneous corrections. 
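+
+Illustrating the new Polly voices noted under v1.12.10 above, a minimal Go sketch that synthesizes speech with the "Matthew" voice and writes the audio to a local file (the output path is a placeholder):
+
+```go
+package main
+
+import (
+	"io/ioutil"
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/polly"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := polly.New(sess)
+
+	out, err := svc.SynthesizeSpeech(&polly.SynthesizeSpeechInput{
+		OutputFormat: aws.String("mp3"),
+		Text:         aws.String("Hello from the Matthew voice."),
+		VoiceId:      aws.String("Matthew"), // new US English voice
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer out.AudioStream.Close()
+
+	audio, err := ioutil.ReadAll(out.AudioStream)
+	if err != nil {
+		log.Fatal(err)
+	}
+	if err := ioutil.WriteFile("matthew.mp3", audio, 0644); err != nil {
+		log.Fatal(err)
+	}
+}
+```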
+ +Release v1.12.6 (2017-10-05) +=== + +### Service Client Updates +* `service/redshift`: Updates service API and documentation + * DescribeEventSubscriptions API supports tag keys and tag values as request parameters. + +Release v1.12.5 (2017-10-04) +=== + +### Service Client Updates +* `service/kinesisanalytics`: Updates service API and documentation + * Kinesis Analytics now supports schema discovery on objects in S3. Additionally, Kinesis Analytics now supports input data preprocessing through Lambda. +* `service/route53domains`: Updates service API and documentation + * Added a new API that checks whether a domain name can be transferred to Amazon Route 53. + +### SDK Bugs +* `service/s3/s3crypto`: Correct PutObjectRequest documentation ([#1568](https://github.com/aws/aws-sdk-go/pull/1568)) + * s3Crypto's PutObjectRequest docstring example was using an incorrect value. Corrected the type used in the example. +Release v1.12.4 (2017-10-03) +=== + +### Service Client Updates +* `service/ec2`: Updates service API, documentation, and waiters + * This release includes service updates to AWS VPN. +* `service/ssm`: Updates service API and documentation + * EC2 Systems Manager support for tagging SSM Documents. Also support for tag-based permissions to restrict access to SSM Documents based on these tags. + +Release v1.12.3 (2017-10-02) +=== + +### Service Client Updates +* `service/cloudhsm`: Updates service documentation and paginators + * Documentation updates for CloudHSM + +Release v1.12.2 (2017-09-29) +=== + +### Service Client Updates +* `service/appstream`: Updates service API and documentation + * Includes APIs for managing and accessing image builders, and deleting images. +* `service/codebuild`: Updates service API and documentation + * Adding support for Building GitHub Pull Requests in AWS CodeBuild +* `service/mturk-requester`: Updates service API and documentation +* `service/organizations`: Updates service API and documentation + * This release flags the HandshakeParty structure's Type and Id fields as 'required'. They effectively were required in the past, as you received an error if you did not include them. This is now reflected at the API definition level. +* `service/route53`: Updates service API and documentation + * This change allows customers to reset elements of health check. + +### SDK Bugs +* `private/protocol/query`: Fix query protocol handling of nested byte slices ([#1557](https://github.com/aws/aws-sdk-go/issues/1557)) + * Fixes the query protocol to correctly marshal nested []byte values of API operations. +* `service/s3`: Fix PutObject and UploadPart API to include ContentMD5 field ([#1559](https://github.com/aws/aws-sdk-go/pull/1559)) + * Fixes the SDK's S3 PutObject and UploadPart API code generation to correctly render the ContentMD5 field into the associated input types for these two API operations. + * Fixes [#1553](https://github.com/aws/aws-sdk-go/pull/1553) +Release v1.12.1 (2017-09-27) +=== + +### Service Client Updates +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/pinpoint`: Updates service API and documentation + * Added two new push notification channels: Amazon Device Messaging (ADM) and, for push notification support in China, Baidu Cloud Push. Added support for APNs auth via .p8 key file. Added operation for direct message deliveries to user IDs, enabling you to message an individual user on multiple endpoints. 
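+
+For the SSM document tagging noted under v1.12.4 above, a minimal Go sketch using `AddTagsToResource` (the document name and tag are placeholders, and the "Document" resource type value is assumed to be the one introduced by that release):
+
+```go
+package main
+
+import (
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/ssm"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := ssm.New(sess)
+
+	// Tag an existing Systems Manager document by name.
+	_, err := svc.AddTagsToResource(&ssm.AddTagsToResourceInput{
+		ResourceType: aws.String("Document"),         // assumed resource type for documents
+		ResourceId:   aws.String("example-document"), // placeholder
+		Tags: []*ssm.Tag{
+			{Key: aws.String("env"), Value: aws.String("test")},
+		},
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+}
+```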
+ +Release v1.12.0 (2017-09-26) +=== + +### SDK Bugs +* `API Marshaler`: Revert REST JSON and XML protocol marshaler improvements + * Bug [#1550](https://github.com/aws/aws-sdk-go/issues/1550) identified a missed condition in the Amazon Route 53 RESTXML protocol marshaling causing requests to that service to fail. Reverting the marshaler improvements until the bug can be fixed. + +Release v1.11.0 (2017-09-26) +=== + +### Service Client Updates +* `service/cloudformation`: Updates service API and documentation + * You can now prevent a stack from being accidentally deleted by enabling termination protection on the stack. If you attempt to delete a stack with termination protection enabled, the deletion fails and the stack, including its status, remains unchanged. You can enable termination protection on a stack when you create it. Termination protection on stacks is disabled by default. After creation, you can set termination protection on a stack whose status is CREATE_COMPLETE, UPDATE_COMPLETE, or UPDATE_ROLLBACK_COMPLETE. + +### SDK Features +* Add dep Go dependency management metadata files (#1544) + * Adds the Go `dep` dependency management metadata files to the SDK. + * Fixes [#1451](https://github.com/aws/aws-sdk-go/issues/1451) + * Fixes [#634](https://github.com/aws/aws-sdk-go/issues/634) +* `service/dynamodb/expression`: Add expression building utility for DynamoDB ([#1527](https://github.com/aws/aws-sdk-go/pull/1527)) + * Adds a new package, expression, to the SDK providing builder utilities to create DynamoDB expressions safely taking advantage of type safety. +* `API Marshaler`: Add generated marshalers for RESTXML protocol ([#1409](https://github.com/aws/aws-sdk-go/pull/1409)) + * Updates the RESTXML protocol marshaler to use generated code instead of reflection for REST XML based services. +* `API Marshaler`: Add generated marshalers for RESTJSON protocol ([#1547](https://github.com/aws/aws-sdk-go/pull/1547)) + * Updates the RESTJSON protocol marshaler to use generated code instead of reflection for REST JSON based services. + +### SDK Enhancements +* `private/protocol`: Update format of REST JSON and XMl benchmarks ([#1546](https://github.com/aws/aws-sdk-go/pull/1546)) + * Updates the format of the REST JSON and XML benchmarks to be readable. RESTJSON benchmarks were updated to more accurately bench building of the protocol. + +Release v1.10.51 (2017-09-22) +=== + +### Service Client Updates +* `service/config`: Updates service API and documentation +* `service/ecs`: Updates service API and documentation + * Amazon ECS users can now add and drop Linux capabilities to their containers through the use of docker's cap-add and cap-drop features. Customers can specify the capabilities they wish to add or drop for each container in their task definition. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/rds`: Updates service documentation + * Documentation updates for rds + +Release v1.10.50 (2017-09-21) +=== + +### Service Client Updates +* `service/budgets`: Updates service API + * Including "DuplicateRecordException" in UpdateNotification and UpdateSubscriber. +* `service/ec2`: Updates service API and documentation + * Add EC2 APIs to copy Amazon FPGA Images (AFIs) within the same region and across multiple regions, delete AFIs, and modify AFI attributes. AFI attributes include name, description and granting/denying other AWS accounts to load the AFI. 
+* `service/logs`: Updates service API and documentation + * Adds support for associating LogGroups with KMS Keys. + +### SDK Bugs +* Fix greengrass service model being duplicated with different casing. ([#1541](https://github.com/aws/aws-sdk-go/pull/1541)) + * Fixes [#1540](https://github.com/aws/aws-sdk-go/issues/1540) + * Fixes [#1539](https://github.com/aws/aws-sdk-go/issues/1539) +Release v1.10.49 (2017-09-20) +=== + +### Service Client Updates +* `service/Greengrass`: Adds new service +* `service/appstream`: Updates service API and documentation + * API updates for supporting On-Demand fleets. +* `service/codepipeline`: Updates service API and documentation + * This change includes a PipelineMetadata object that is part of the output from the GetPipeline API that includes the Pipeline ARN, created, and updated timestamp. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/rds`: Updates service API and documentation + * Introduces the --option-group-name parameter to the ModifyDBSnapshot CLI command. You can specify this parameter when you upgrade an Oracle DB snapshot. The same option group considerations apply when upgrading a DB snapshot as when upgrading a DB instance. For more information, see http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Oracle.html#USER_UpgradeDBInstance.Oracle.OGPG.OG +* `service/runtime.lex`: Updates service API and documentation + +Release v1.10.48 (2017-09-19) +=== + +### Service Client Updates +* `service/ec2`: Updates service API + * Fixed bug in EC2 clients preventing ElasticGpuSet from being set. + +### SDK Enhancements +* `aws/credentials`: Add EnvProviderName constant. ([#1531](https://github.com/aws/aws-sdk-go/issues/1531)) + * Adds the "EnvConfigCredentials" string literal as EnvProviderName constant. + * Fixes [#1444](https://github.com/aws/aws-sdk-go/issues/1444) + +Release v1.10.47 (2017-09-18) +=== + +### Service Client Updates +* `service/ec2`: Updates service API and documentation + * Amazon EC2 now lets you opt for Spot instances to be stopped in the event of an interruption instead of being terminated. Your Spot request can be fulfilled again by restarting instances from a previously stopped state, subject to availability of capacity at or below your preferred price. When you submit a persistent Spot request, you can choose from "terminate" or "stop" as the instance interruption behavior. Choosing "stop" will shutdown your Spot instances so you can continue from this stopped state later on. This feature is only available for instances with Amazon EBS volume as their root device. +* `service/email`: Updates service API and documentation + * Amazon Simple Email Service (Amazon SES) now lets you customize the domains used for tracking open and click events. Previously, open and click tracking links referred to destinations hosted on domains operated by Amazon SES. With this feature, you can use your own branded domains for capturing open and click events. +* `service/iam`: Updates service API and documentation + * A new API, DeleteServiceLinkedRole, submits a service-linked role deletion request and returns a DeletionTaskId, which you can use to check the status of the deletion. + +Release v1.10.46 (2017-09-15) +=== + +### Service Client Updates +* `service/apigateway`: Updates service API and documentation + * Add a new enum "REQUEST" to '--type ' field in the current create-authorizer API, and make "identitySource" optional. 
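+
+A sketch of the `DeleteServiceLinkedRole` flow noted under v1.10.47 above (the role name is a placeholder, and the status call is assumed to take the returned `DeletionTaskId`):
+
+```go
+package main
+
+import (
+	"log"
+
+	"github.com/aws/aws-sdk-go/aws"
+	"github.com/aws/aws-sdk-go/aws/session"
+	"github.com/aws/aws-sdk-go/service/iam"
+)
+
+func main() {
+	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
+	svc := iam.New(sess)
+
+	// Submit the deletion request and keep the task ID for polling.
+	del, err := svc.DeleteServiceLinkedRole(&iam.DeleteServiceLinkedRoleInput{
+		RoleName: aws.String("AWSServiceRoleForExample"), // placeholder
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+
+	status, err := svc.GetServiceLinkedRoleDeletionStatus(&iam.GetServiceLinkedRoleDeletionStatusInput{
+		DeletionTaskId: del.DeletionTaskId,
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+	log.Println(aws.StringValue(status.Status))
+}
+```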
+ +Release v1.10.45 (2017-09-14) +=== + +### Service Client Updates +* `service/codebuild`: Updates service API and documentation + * Supporting Parameter Store in environment variables for AWS CodeBuild +* `service/organizations`: Updates service documentation + * Documentation updates for AWS Organizations +* `service/servicecatalog`: Updates service API, documentation, and paginators + * This release of Service Catalog adds API support to copy products. + +Release v1.10.44 (2017-09-13) +=== + +### Service Client Updates +* `service/autoscaling`: Updates service API and documentation + * Customers can create Life Cycle Hooks at the time of creating Auto Scaling Groups through the CreateAutoScalingGroup API +* `service/batch`: Updates service documentation and examples + * Documentation updates for batch +* `service/ec2`: Updates service API + * You are now able to create and launch EC2 x1e.32xlarge instance, a new EC2 instance in the X1 family, in us-east-1, us-west-2, eu-west-1, and ap-northeast-1. x1e.32xlarge offers 128 vCPUs, 3,904 GiB of DDR4 instance memory, high memory bandwidth, large L3 caches, and leading reliability capabilities to boost the performance and reliability of in-memory applications. +* `service/events`: Updates service API and documentation + * Exposes ConcurrentModificationException as one of the valid exceptions for PutPermission and RemovePermission operation. + +### SDK Enhancements +* `service/autoscaling`: Fix documentation for PutScalingPolicy.AutoScalingGroupName [#1522](https://github.com/aws/aws-sdk-go/pull/1522) +* `service/s3/s3manager`: Clarify S3 Upload manager Concurrency config [#1521](https://github.com/aws/aws-sdk-go/pull/1521) + * Fixes [#1458](https://github.com/aws/aws-sdk-go/issues/1458) +* `service/dynamodb/dynamodbattribute`: Add support for time alias. [#1520](https://github.com/aws/aws-sdk-go/pull/1520) + * Related to [#1505](https://github.com/aws/aws-sdk-go/pull/1505) + +Release v1.10.43 (2017-09-12) +=== + +### Service Client Updates +* `service/ec2`: Updates service API + * Fixed bug in EC2 clients preventing HostOfferingSet from being set +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.10.42 (2017-09-12) +=== + +### Service Client Updates +* `service/devicefarm`: Updates service API and documentation + * DeviceFarm has added support for two features - RemoteDebugging and Customer Artifacts. Customers can now do remote Debugging on their Private Devices and can now retrieve custom files generated by their tests on the device and the device host (execution environment) on both public and private devices. + +Release v1.10.41 (2017-09-08) +=== + +### Service Client Updates +* `service/logs`: Updates service API and documentation + * Adds support for the PutResourcePolicy, DescribeResourcePolicy and DeleteResourcePolicy APIs. + +Release v1.10.40 (2017-09-07) +=== + +### Service Client Updates +* `service/application-autoscaling`: Updates service documentation +* `service/ec2`: Updates service API and documentation + * With Tagging support, you can add Key and Value metadata to search, filter and organize your NAT Gateways according to your organization's needs. +* `service/elasticloadbalancingv2`: Updates service API and documentation +* `aws/endpoints`: Updated Regions and Endpoints metadata. 
+* `service/lex-models`: Updates service API and documentation +* `service/route53`: Updates service API and documentation + * You can configure Amazon Route 53 to log information about the DNS queries that Amazon Route 53 receives for your domains and subdomains. When you configure query logging, Amazon Route 53 starts to send logs to CloudWatch Logs. You can use various tools, including the AWS console, to access the query logs. + +Release v1.10.39 (2017-09-06) +=== + +### Service Client Updates +* `service/budgets`: Updates service API and documentation + * Add an optional "thresholdType" to notifications to support percentage or absolute value thresholds. + +Release v1.10.38 (2017-09-05) +=== + +### Service Client Updates +* `service/codestar`: Updates service API and documentation + * Added support to tag CodeStar projects. Tags can be used to organize and find CodeStar projects on key-value pairs that you can choose. For example, you could add a tag with a key of "Release" and a value of "Beta" to projects your organization is working on for an upcoming beta release. +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.10.37 (2017-09-01) +=== + +### Service Client Updates +* `service/MobileHub`: Adds new service +* `service/gamelift`: Updates service API and documentation + * GameLift VPC resources can be peered with any other AWS VPC. R4 memory-optimized instances now available to deploy. +* `service/ssm`: Updates service API and documentation + * Adding KMS encryption support to SSM Inventory Resource Data Sync. Exposes the ClientToken parameter on SSM StartAutomationExecution to provide idempotent execution requests. + +Release v1.10.36 (2017-08-31) +=== + +### Service Client Updates +* `service/codebuild`: Updates service API, documentation, and examples + * The AWS CodeBuild HTTP API now provides the BatchDeleteBuilds operation, which enables you to delete existing builds. +* `service/ec2`: Updates service API and documentation + * Descriptions for Security Group Rules enables customers to be able to define a description for ingress and egress security group rules . The Descriptions for Security Group Rules feature supports one description field per Security Group rule for both ingress and egress rules . Descriptions for Security Group Rules provides a simple way to describe the purpose or function of a Security Group Rule allowing for easier customer identification of configuration elements . Prior to the release of Descriptions for Security Group Rules , customers had to maintain a separate system outside of AWS if they wanted to track Security Group Rule mapping and their purpose for being implemented. If a security group rule has already been created and you would like to update or change your description for that security group rule you can use the UpdateSecurityGroupRuleDescription API. +* `service/elasticloadbalancingv2`: Updates service API and documentation +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/lex-models`: Updates service API and documentation + +### SDK Bugs +* `aws/signer/v4`: Revert [#1491](https://github.com/aws/aws-sdk-go/issues/1491) as change conflicts with an undocumented AWS v4 signature test case. + * Related to: [#1495](https://github.com/aws/aws-sdk-go/issues/1495). 
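The v1.10.36 notes above describe per-rule descriptions for security groups. In the Go SDK the update path is split into the `UpdateSecurityGroupRuleDescriptionsIngress` and `UpdateSecurityGroupRuleDescriptionsEgress` operations; the sketch below (group ID, port, and CIDR are placeholders, and the field layout should be checked against the vendored API model) attaches a description to an existing ingress rule by matching its protocol, port range, and CIDR:

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)

	// Identify the existing ingress rule by protocol, port range, and CIDR,
	// and set its new description. Group ID and CIDR are placeholders.
	_, err := svc.UpdateSecurityGroupRuleDescriptionsIngress(
		&ec2.UpdateSecurityGroupRuleDescriptionsIngressInput{
			GroupId: aws.String("sg-0123456789abcdef0"),
			IpPermissions: []*ec2.IpPermission{{
				IpProtocol: aws.String("tcp"),
				FromPort:   aws.Int64(22),
				ToPort:     aws.Int64(22),
				IpRanges: []*ec2.IpRange{{
					CidrIp:      aws.String("203.0.113.0/24"),
					Description: aws.String("SSH from the office network"),
				}},
			}},
		})
	if err != nil {
		log.Fatal(err)
	}
}
```

The egress variant works the same way through `UpdateSecurityGroupRuleDescriptionsEgress`.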
+Release v1.10.35 (2017-08-30) +=== + +### Service Client Updates +* `service/application-autoscaling`: Updates service API and documentation +* `service/organizations`: Updates service API and documentation + * The exception ConstraintViolationException now contains a new reason subcode MASTERACCOUNT_MISSING_CONTACT_INFO to make it easier to understand why attempting to remove an account from an Organization can fail. We also improved several other of the text descriptions and examples. + +Release v1.10.34 (2017-08-29) +=== + +### Service Client Updates +* `service/config`: Updates service API and documentation +* `service/ec2`: Updates service API and documentation + * Provides capability to add secondary CIDR blocks to a VPC. + +### SDK Bugs +* `aws/signer/v4`: Fix Signing Unordered Multi Value Query Parameters ([#1491](https://github.com/aws/aws-sdk-go/pull/1491)) + * Removes sorting of query string values when calculating v4 signing as this is not part of the spec. The spec only requires the keys, not values, to be sorted which is achieved by Query.Encode(). +Release v1.10.33 (2017-08-25) +=== + +### Service Client Updates +* `service/cloudformation`: Updates service API and documentation + * Rollback triggers enable you to have AWS CloudFormation monitor the state of your application during stack creation and updating, and to roll back that operation if the application breaches the threshold of any of the alarms you've specified. +* `service/gamelift`: Updates service API + * Update spelling of MatchmakingTicket status values for internal consistency. +* `service/rds`: Updates service API and documentation + * Option group options now contain additional properties that identify requirements for certain options. Check these properties to determine if your DB instance must be in a VPC or have auto minor upgrade turned on before you can use an option. Check to see if you can downgrade the version of an option after you have installed it. + +### SDK Enhancements +* `example/service/ec2`: Add EC2 list instances example ([#1492](https://github.com/aws/aws-sdk-go/pull/1492)) + +Release v1.10.32 (2017-08-25) +=== + +### Service Client Updates +* `service/rekognition`: Updates service API, documentation, and examples + * Update the enum value of LandmarkType and GenderType to be consistent with service response + +Release v1.10.31 (2017-08-23) +=== + +### Service Client Updates +* `service/appstream`: Updates service documentation + * Documentation updates for appstream +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.10.30 (2017-08-22) +=== + +### Service Client Updates +* `service/ssm`: Updates service API and documentation + * Changes to associations in Systems Manager State Manager can now be recorded. Previously, when you edited associations, you could not go back and review older association settings. Now, associations are versioned, and can be named using human-readable strings, allowing you to see a trail of association changes. You can also perform rate-based scheduling, which allows you to schedule associations more granularly. + +Release v1.10.29 (2017-08-21) +=== + +### Service Client Updates +* `service/firehose`: Updates service API, documentation, and paginators + * This change will allow customers to attach a Firehose delivery stream to an existing Kinesis stream directly. You no longer need a forwarder to move data from a Kinesis stream to a Firehose delivery stream. 
You can now run your streaming applications on your Kinesis stream and easily attach a Firehose delivery stream to it for data delivery to S3, Redshift, or Elasticsearch concurrently. +* `service/route53`: Updates service API and documentation + * Amazon Route 53 now supports CAA resource record type. A CAA record controls which certificate authorities are allowed to issue certificates for the domain or subdomain. + +Release v1.10.28 (2017-08-18) +=== + +### Service Client Updates +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.10.27 (2017-08-16) +=== + +### Service Client Updates +* `service/gamelift`: Updates service API and documentation + * The Matchmaking Grouping Service is a new feature that groups player match requests for a given game together into game sessions based on developer configured rules. + +### SDK Enhancements +* `aws/arn`: aws/arn: Package for parsing and producing ARNs ([#1463](https://github.com/aws/aws-sdk-go/pull/1463)) + * Adds the `arn` package for AWS ARN parsing and building. Use this package to build AWS ARNs for services such as outlined in the [documentation](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html). + +### SDK Bugs +* `aws/signer/v4`: Correct V4 presign signature to include content sha256 in URL ([#1469](https://github.com/aws/aws-sdk-go/pull/1469)) + * Updates the V4 signer so that when a Presign is generated the `X-Amz-Content-Sha256` header is added to the query string instead of being required to be in the header. This allows you to generate presigned URLs for GET requests, e.g. S3.GetObject, that do not require additional headers to be set by the downstream users of the presigned URL. + * Related To: [#1467](https://github.com/aws/aws-sdk-go/issues/1467) + +Release v1.10.26 (2017-08-15) +=== + +### Service Client Updates +* `service/ec2`: Updates service API + * Fixed bug in EC2 clients preventing HostReservation from being set +* `aws/endpoints`: Updated Regions and Endpoints metadata. + +Release v1.10.25 (2017-08-14) +=== + +### Service Client Updates +* `service/AWS Glue`: Adds new service +* `service/batch`: Updates service API and documentation + * This release enhances the DescribeJobs API to include the CloudWatch logStreamName attribute in ContainerDetail and ContainerDetailAttempt +* `service/cloudhsmv2`: Adds new service + * CloudHSM provides hardware security modules for protecting sensitive data and cryptographic keys within an EC2 VPC, and enables the customer to maintain control over key access and use. This is a second-generation of the service that will improve security, lower cost and provide better customer usability. +* `service/elasticfilesystem`: Updates service API, documentation, and paginators + * Customers can create encrypted EFS file systems and specify a KMS master key to encrypt it with. +* `aws/endpoints`: Updated Regions and Endpoints metadata. +* `service/mgh`: Adds new service + * AWS Migration Hub provides a single location to track migrations across multiple AWS and partner solutions. Using Migration Hub allows you to choose the AWS and partner migration tools that best fit your needs, while providing visibility into the status of your entire migration portfolio. Migration Hub also provides key metrics and progress for individual applications, regardless of which tools are being used to migrate them.
For example, you might use AWS Database Migration Service, AWS Server Migration Service, and partner migration tools to migrate an application comprised of a database, virtualized web servers, and a bare metal server. Using Migration Hub will provide you with a single screen that shows the migration progress of all the resources in the application. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects. Migration Hub is available to all AWS customers at no additional charge. You only pay for the cost of the migration tools you use, and any resources being consumed on AWS. +* `service/ssm`: Updates service API and documentation + * Systems Manager Maintenance Windows include the following changes or enhancements: New task options using Systems Manager Automation, AWS Lambda, and AWS Step Functions; enhanced ability to edit the targets of a Maintenance Window, including specifying a target name and description, and ability to edit the owner field; enhanced ability to edits tasks; enhanced support for Run Command parameters; and you can now use a --safe flag when attempting to deregister a target. If this flag is enabled when you attempt to deregister a target, the system returns an error if the target is referenced by any task. Also, Systems Manager now includes Configuration Compliance to scan your fleet of managed instances for patch compliance and configuration inconsistencies. You can collect and aggregate data from multiple AWS accounts and Regions, and then drill down into specific resources that aren't compliant. +* `service/storagegateway`: Updates service API and documentation + * Add optional field ForceDelete to DeleteFileShare api. + +Release v1.10.24 (2017-08-11) +=== + +### Service Client Updates +* `service/codedeploy`: Updates service API and documentation + * Adds support for specifying Application Load Balancers in deployment groups, for both in-place and blue/green deployments. +* `service/cognito-idp`: Updates service API and documentation +* `service/ec2`: Updates service API and documentation + * Provides customers an opportunity to recover an EIP that was released + Release v1.10.23 (2017-08-10) === diff --git a/vendor/github.com/aws/aws-sdk-go/CONTRIBUTING.md b/vendor/github.com/aws/aws-sdk-go/CONTRIBUTING.md index 7c0186f0c..6f422a95e 100644 --- a/vendor/github.com/aws/aws-sdk-go/CONTRIBUTING.md +++ b/vendor/github.com/aws/aws-sdk-go/CONTRIBUTING.md @@ -67,7 +67,7 @@ Please be aware of the following notes prior to opening a pull request: 5. The JSON files under the SDK's `models` folder are sourced from outside the SDK. Such as `models/apis/ec2/2016-11-15/api.json`. We will not accept pull requests directly on these models. If you discover an issue with the models please - create a Github [issue](issues) describing the issue. + create a [GitHub issue][issues] describing the issue. ### Testing diff --git a/vendor/github.com/aws/aws-sdk-go/Gopkg.lock b/vendor/github.com/aws/aws-sdk-go/Gopkg.lock new file mode 100644 index 000000000..854c94fdf --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/Gopkg.lock @@ -0,0 +1,20 @@ +# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'. 
+ + +[[projects]] + name = "github.com/go-ini/ini" + packages = ["."] + revision = "300e940a926eb277d3901b20bdfcc54928ad3642" + version = "v1.25.4" + +[[projects]] + name = "github.com/jmespath/go-jmespath" + packages = ["."] + revision = "0b12d6b5" + +[solve-meta] + analyzer-name = "dep" + analyzer-version = 1 + inputs-digest = "51a86a867df617990082dec6b868e4efe2fdb2ed0e02a3daa93cd30f962b5085" + solver-name = "gps-cdcl" + solver-version = 1 diff --git a/vendor/github.com/aws/aws-sdk-go/Gopkg.toml b/vendor/github.com/aws/aws-sdk-go/Gopkg.toml new file mode 100644 index 000000000..664fc5955 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/Gopkg.toml @@ -0,0 +1,48 @@ + +# Gopkg.toml example +# +# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md +# for detailed Gopkg.toml documentation. +# +# required = ["github.com/user/thing/cmd/thing"] +# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"] +# +# [[constraint]] +# name = "github.com/user/project" +# version = "1.0.0" +# +# [[constraint]] +# name = "github.com/user/project2" +# branch = "dev" +# source = "github.com/myfork/project2" +# +# [[override]] +# name = "github.com/x/y" +# version = "2.4.0" + +ignored = [ + # Testing/Example/Codegen dependencies + "github.com/stretchr/testify", + "github.com/stretchr/testify/assert", + "github.com/stretchr/testify/require", + "github.com/go-sql-driver/mysql", + "github.com/gucumber/gucumber", + "github.com/pkg/errors", + "golang.org/x/net", + "golang.org/x/net/html", + "golang.org/x/net/http2", + "golang.org/x/text", + "golang.org/x/text/html", + "golang.org/x/tools", + "golang.org/x/tools/go/loader", +] + + +[[constraint]] + name = "github.com/go-ini/ini" + version = "1.25.4" + +[[constraint]] + name = "github.com/jmespath/go-jmespath" + revision = "0b12d6b5" + #version = "0.2.2" diff --git a/vendor/github.com/aws/aws-sdk-go/Makefile b/vendor/github.com/aws/aws-sdk-go/Makefile index df9803c2a..83ccc1e62 100644 --- a/vendor/github.com/aws/aws-sdk-go/Makefile +++ b/vendor/github.com/aws/aws-sdk-go/Makefile @@ -74,7 +74,7 @@ smoke-tests: get-deps-tests performance: get-deps-tests AWS_TESTING_LOG_RESULTS=${log-detailed} AWS_TESTING_REGION=$(region) AWS_TESTING_DB_TABLE=$(table) gucumber -go-tags "integration" ./awstesting/performance -sandbox-tests: sandbox-test-go15 sandbox-test-go15-novendorexp sandbox-test-go16 sandbox-test-go17 sandbox-test-go18 sandbox-test-gotip +sandbox-tests: sandbox-test-go15 sandbox-test-go15-novendorexp sandbox-test-go16 sandbox-test-go17 sandbox-test-go18 sandbox-test-go19 sandbox-test-gotip sandbox-build-go15: docker build -f ./awstesting/sandbox/Dockerfile.test.go1.5 -t "aws-sdk-go-1.5" . @@ -111,6 +111,13 @@ sandbox-go18: sandbox-build-go18 sandbox-test-go18: sandbox-build-go18 docker run -t aws-sdk-go-1.8 +sandbox-build-go19: + docker build -f ./awstesting/sandbox/Dockerfile.test.go1.8 -t "aws-sdk-go-1.9" . +sandbox-go19: sandbox-build-go19 + docker run -i -t aws-sdk-go-1.9 bash +sandbox-test-go19: sandbox-build-go19 + docker run -t aws-sdk-go-1.9 + sandbox-build-gotip: @echo "Run make update-aws-golang-tip, if this test fails because missing aws-golang:tip container" docker build -f ./awstesting/sandbox/Dockerfile.test.gotip -t "aws-sdk-go-tip" . 
diff --git a/vendor/github.com/aws/aws-sdk-go/README.md b/vendor/github.com/aws/aws-sdk-go/README.md index d5e048a5f..c32774491 100644 --- a/vendor/github.com/aws/aws-sdk-go/README.md +++ b/vendor/github.com/aws/aws-sdk-go/README.md @@ -6,6 +6,8 @@ aws-sdk-go is the official AWS SDK for the Go programming language. Checkout our [release notes](https://github.com/aws/aws-sdk-go/releases) for information about the latest bug fixes, updates, and features added to the SDK. +We [announced](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-2-0-developer-preview/) the Developer Preview for the [v2 AWS SDK for Go](). The v2 SDK is available at https://github.com/aws/aws-sdk-go-v2, and `go get github.com/aws/aws-sdk-go-v2` via `go get`. Check out the v2 SDK's [changes and updates](https://github.com/aws/aws-sdk-go-v2/blob/master/CHANGELOG.md), and let us know what you think. We want your feedback. + ## Installing If you are using Go 1.5 with the `GO15VENDOREXPERIMENT=1` vendoring flag, or 1.6 and higher you can use the following command to retrieve the SDK. The SDK's non-testing dependencies will be included and are vendored in the `vendor` folder. @@ -185,7 +187,7 @@ Option's SharedConfigState parameter. })) ``` -[credentials_pkg]: ttps://docs.aws.amazon.com/sdk-for-go/api/aws/credentials +[credentials_pkg]: https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials ### Configuring AWS Region @@ -305,7 +307,7 @@ documentation for the errors that could be returned. // will leak connections. defer result.Body.Close() - fmt.Println("Object Size:", aws.StringValue(result.ContentLength)) + fmt.Println("Object Size:", aws.Int64Value(result.ContentLength)) ``` ### API Request Pagination and Resource Waiters diff --git a/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go index e25a460fb..63d2df67c 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/client/default_retryer.go @@ -2,6 +2,7 @@ package client import ( "math/rand" + "strconv" "sync" "time" @@ -38,14 +39,18 @@ func (d DefaultRetryer) RetryRules(r *request.Request) time.Duration { minTime := 30 throttle := d.shouldThrottle(r) if throttle { + if delay, ok := getRetryDelay(r); ok { + return delay + } + minTime = 500 } retryCount := r.RetryCount - if retryCount > 13 { - retryCount = 13 - } else if throttle && retryCount > 8 { + if throttle && retryCount > 8 { retryCount = 8 + } else if retryCount > 13 { + retryCount = 13 } delay := (1 << uint(retryCount)) * (seededRand.Intn(minTime) + minTime) @@ -68,12 +73,49 @@ func (d DefaultRetryer) ShouldRetry(r *request.Request) bool { // ShouldThrottle returns true if the request should be throttled. 
func (d DefaultRetryer) shouldThrottle(r *request.Request) bool { - if r.HTTPResponse.StatusCode == 502 || - r.HTTPResponse.StatusCode == 503 || - r.HTTPResponse.StatusCode == 504 { - return true + switch r.HTTPResponse.StatusCode { + case 429: + case 502: + case 503: + case 504: + default: + return r.IsErrorThrottle() } - return r.IsErrorThrottle() + + return true +} + +// This will look in the Retry-After header, RFC 7231, for how long +// it will wait before attempting another request +func getRetryDelay(r *request.Request) (time.Duration, bool) { + if !canUseRetryAfterHeader(r) { + return 0, false + } + + delayStr := r.HTTPResponse.Header.Get("Retry-After") + if len(delayStr) == 0 { + return 0, false + } + + delay, err := strconv.Atoi(delayStr) + if err != nil { + return 0, false + } + + return time.Duration(delay) * time.Second, true +} + +// Will look at the status code to see if the retry header pertains to +// the status code. +func canUseRetryAfterHeader(r *request.Request) bool { + switch r.HTTPResponse.StatusCode { + case 429: + case 503: + default: + return false + } + + return true } // lockedSource is a thread-safe implementation of rand.Source diff --git a/vendor/github.com/aws/aws-sdk-go/aws/config.go b/vendor/github.com/aws/aws-sdk-go/aws/config.go index ae3a28696..4fd0d0724 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/config.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/config.go @@ -168,7 +168,7 @@ type Config struct { // EC2MetadataDisableTimeoutOverride *bool - // Instructs the endpiont to be generated for a service client to + // Instructs the endpoint to be generated for a service client to // be the dual stack endpoint. The dual stack endpoint will support // both IPv4 and IPv6 addressing. // diff --git a/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go index 07afe3b8e..2cb08182f 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/defaults/defaults.go @@ -9,6 +9,7 @@ package defaults import ( "fmt" + "net" "net/http" "net/url" "os" @@ -118,14 +119,43 @@ func RemoteCredProvider(cfg aws.Config, handlers request.Handlers) credentials.P return ec2RoleProvider(cfg, handlers) } +var lookupHostFn = net.LookupHost + +func isLoopbackHost(host string) (bool, error) { + ip := net.ParseIP(host) + if ip != nil { + return ip.IsLoopback(), nil + } + + // Host is not an ip, perform lookup + addrs, err := lookupHostFn(host) + if err != nil { + return false, err + } + for _, addr := range addrs { + if !net.ParseIP(addr).IsLoopback() { + return false, nil + } + } + + return true, nil +} + func localHTTPCredProvider(cfg aws.Config, handlers request.Handlers, u string) credentials.Provider { var errMsg string parsed, err := url.Parse(u) if err != nil { errMsg = fmt.Sprintf("invalid URL, %v", err) - } else if host := aws.URLHostname(parsed); !(host == "localhost" || host == "127.0.0.1") { - errMsg = fmt.Sprintf("invalid host address, %q, only localhost and 127.0.0.1 are valid.", host) + } else { + host := aws.URLHostname(parsed) + if len(host) == 0 { + errMsg = "unable to parse host from local HTTP cred provider URL" + } else if isLoopback, loopbackErr := isLoopbackHost(host); loopbackErr != nil { + errMsg = fmt.Sprintf("failed to resolve host %q, %v", host, loopbackErr) + } else if !isLoopback { + errMsg = fmt.Sprintf("invalid endpoint host, %q, only loopback hosts are allowed.", host) + } } if len(errMsg) > 0 { diff --git 
a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go index 74f72de07..6dc035a53 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go @@ -4,6 +4,7 @@ import ( "encoding/json" "fmt" "io" + "os" "github.com/aws/aws-sdk-go/aws/awserr" ) @@ -84,11 +85,34 @@ func decodeV3Endpoints(modelDef modelDefinition, opts DecodeModelOptions) (Resol custAddEC2Metadata(p) custAddS3DualStack(p) custRmIotDataService(p) + + custFixCloudHSMv2SigningName(p) } return ps, nil } +func custFixCloudHSMv2SigningName(p *partition) { + // Workaround for aws/aws-sdk-go#1745 until the endpoint model can be + // fixed upstream. TODO remove this once the endpoints model is updated. + + s, ok := p.Services["cloudhsmv2"] + if !ok { + return + } + + if len(s.Defaults.CredentialScope.Service) != 0 { + fmt.Fprintf(os.Stderr, "cloudhsmv2 signing name already set, ignoring override.\n") + // If the value is already set don't override + return + } + + s.Defaults.CredentialScope.Service = "cloudhsm" + fmt.Fprintf(os.Stderr, "cloudhsmv2 signing name not set, overriding.\n") + + p.Services["cloudhsmv2"] = s +} + func custAddS3DualStack(p *partition) { if p.ID != "aws" { return diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go index 899087ec5..9e728feee 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -24,6 +24,7 @@ const ( EuCentral1RegionID = "eu-central-1" // EU (Frankfurt). EuWest1RegionID = "eu-west-1" // EU (Ireland). EuWest2RegionID = "eu-west-2" // EU (London). + EuWest3RegionID = "eu-west-3" // EU (Paris). SaEast1RegionID = "sa-east-1" // South America (Sao Paulo). UsEast1RegionID = "us-east-1" // US East (N. Virginia). UsEast2RegionID = "us-east-2" // US East (Ohio). @@ -33,7 +34,8 @@ const ( // AWS China partition's regions. const ( - CnNorth1RegionID = "cn-north-1" // China (Beijing). + CnNorth1RegionID = "cn-north-1" // China (Beijing). + CnNorthwest1RegionID = "cn-northwest-1" // China (Ningxia). ) // AWS GovCloud (US) partition's regions. @@ -44,17 +46,20 @@ const ( // Service identifiers const ( AcmServiceID = "acm" // Acm. + ApiPricingServiceID = "api.pricing" // ApiPricing. ApigatewayServiceID = "apigateway" // Apigateway. ApplicationAutoscalingServiceID = "application-autoscaling" // ApplicationAutoscaling. Appstream2ServiceID = "appstream2" // Appstream2. AthenaServiceID = "athena" // Athena. AutoscalingServiceID = "autoscaling" // Autoscaling. + AutoscalingPlansServiceID = "autoscaling-plans" // AutoscalingPlans. BatchServiceID = "batch" // Batch. BudgetsServiceID = "budgets" // Budgets. ClouddirectoryServiceID = "clouddirectory" // Clouddirectory. CloudformationServiceID = "cloudformation" // Cloudformation. CloudfrontServiceID = "cloudfront" // Cloudfront. CloudhsmServiceID = "cloudhsm" // Cloudhsm. + Cloudhsmv2ServiceID = "cloudhsmv2" // Cloudhsmv2. CloudsearchServiceID = "cloudsearch" // Cloudsearch. CloudtrailServiceID = "cloudtrail" // Cloudtrail. CodebuildServiceID = "codebuild" // Codebuild. @@ -68,6 +73,7 @@ const ( ConfigServiceID = "config" // Config. CurServiceID = "cur" // Cur. DatapipelineServiceID = "datapipeline" // Datapipeline. + DaxServiceID = "dax" // Dax. DevicefarmServiceID = "devicefarm" // Devicefarm. DirectconnectServiceID = "directconnect" // Directconnect. 
DiscoveryServiceID = "discovery" // Discovery. @@ -91,6 +97,7 @@ const ( FirehoseServiceID = "firehose" // Firehose. GameliftServiceID = "gamelift" // Gamelift. GlacierServiceID = "glacier" // Glacier. + GlueServiceID = "glue" // Glue. GreengrassServiceID = "greengrass" // Greengrass. HealthServiceID = "health" // Health. IamServiceID = "iam" // Iam. @@ -106,6 +113,7 @@ const ( MachinelearningServiceID = "machinelearning" // Machinelearning. MarketplacecommerceanalyticsServiceID = "marketplacecommerceanalytics" // Marketplacecommerceanalytics. MeteringMarketplaceServiceID = "metering.marketplace" // MeteringMarketplace. + MghServiceID = "mgh" // Mgh. MobileanalyticsServiceID = "mobileanalytics" // Mobileanalytics. ModelsLexServiceID = "models.lex" // ModelsLex. MonitoringServiceID = "monitoring" // Monitoring. @@ -217,6 +225,9 @@ var awsPartition = partition{ "eu-west-2": region{ Description: "EU (London)", }, + "eu-west-3": region{ + Description: "EU (Paris)", + }, "sa-east-1": region{ Description: "South America (Sao Paulo)", }, @@ -246,6 +257,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -253,6 +265,17 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "api.pricing": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "pricing", + }, + }, + Endpoints: endpoints{ + "ap-south-1": endpoint{}, + "us-east-1": endpoint{}, + }, + }, "apigateway": service{ Endpoints: endpoints{ @@ -265,6 +288,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -290,6 +314,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -316,6 +341,8 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -336,6 +363,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -343,10 +371,27 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "autoscaling-plans": service{ + Defaults: endpoint{ + Hostname: "autoscaling.{region}.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "autoscaling-plans", + }, + }, + Endpoints: endpoints{ + "ap-southeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "batch": service{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, @@ -393,6 +438,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -429,6 +475,26 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + 
"cloudhsmv2": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "cloudhsm", + }, + }, + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "cloudsearch": service{ Endpoints: endpoints{ @@ -456,6 +522,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -467,6 +534,7 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, @@ -510,6 +578,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -521,6 +590,8 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, @@ -537,12 +608,16 @@ var awsPartition = partition{ "codestar": service{ Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -552,6 +627,7 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, @@ -567,6 +643,7 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, @@ -582,6 +659,7 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, @@ -603,6 +681,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -626,6 +705,18 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "dax": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "ap-south-1": endpoint{}, + "eu-west-1": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "devicefarm": service{ Endpoints: endpoints{ @@ -644,6 +735,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -669,6 +761,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": 
endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -681,14 +774,17 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -706,6 +802,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "local": endpoint{ Hostname: "localhost:8000", Protocols: []string{"http"}, @@ -734,6 +831,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -756,12 +854,16 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -772,12 +874,16 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -796,6 +902,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -815,6 +922,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -826,6 +934,7 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -834,7 +943,7 @@ var awsPartition = partition{ }, "elasticloadbalancing": service{ Defaults: endpoint{ - Protocols: []string{"http", "https"}, + Protocols: []string{"https"}, }, Endpoints: endpoints{ "ap-northeast-1": endpoint{}, @@ -846,6 +955,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -870,6 +980,7 @@ var awsPartition = partition{ }, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{ SSLCommonName: "{service}.{region}.{dnsSuffix}", @@ -922,6 +1033,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -941,6 
+1053,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -951,9 +1064,15 @@ var awsPartition = partition{ "firehose": service{ Endpoints: endpoints{ - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-west-2": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "gamelift": service{ @@ -963,10 +1082,15 @@ var awsPartition = partition{ "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -978,23 +1102,36 @@ var awsPartition = partition{ "ap-northeast-1": endpoint{}, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, + "glue": service{ + + Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, + "eu-west-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-2": endpoint{}, + }, + }, "greengrass": service{ IsRegionalized: boxedTrue, Defaults: endpoint{ Protocols: []string{"https"}, }, Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "us-east-1": endpoint{}, @@ -1042,8 +1179,10 @@ var awsPartition = partition{ "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, + "us-east-2": endpoint{}, "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -1079,6 +1218,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1106,6 +1246,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1125,6 +1266,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1159,6 +1301,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1202,6 +1345,12 @@ var awsPartition = partition{ "us-west-2": endpoint{}, }, }, + "mgh": service{ + + Endpoints: endpoints{ + "us-west-2": endpoint{}, + }, + }, "mobileanalytics": service{ Endpoints: endpoints{ @@ -1232,6 +1381,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + 
"eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1260,6 +1410,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1301,10 +1452,21 @@ var awsPartition = partition{ "polly": service{ Endpoints: endpoints{ - "eu-west-1": endpoint{}, - "us-east-1": endpoint{}, - "us-east-2": endpoint{}, - "us-west-2": endpoint{}, + "ap-northeast-1": endpoint{}, + "ap-northeast-2": endpoint{}, + "ap-south-1": endpoint{}, + "ap-southeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, + "eu-central-1": endpoint{}, + "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, + "us-east-1": endpoint{}, + "us-east-2": endpoint{}, + "us-west-1": endpoint{}, + "us-west-2": endpoint{}, }, }, "rds": service{ @@ -1319,6 +1481,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{ SSLCommonName: "{service}.{dnsSuffix}", @@ -1340,6 +1503,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1352,6 +1516,7 @@ var awsPartition = partition{ Endpoints: endpoints{ "eu-west-1": endpoint{}, "us-east-1": endpoint{}, + "us-east-2": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -1381,6 +1546,7 @@ var awsPartition = partition{ }, }, Endpoints: endpoints{ + "eu-west-1": endpoint{}, "us-east-1": endpoint{}, }, }, @@ -1396,26 +1562,27 @@ var awsPartition = partition{ }, Endpoints: endpoints{ "ap-northeast-1": endpoint{ - Hostname: "s3-ap-northeast-1.amazonaws.com", + Hostname: "s3.ap-northeast-1.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, }, "ap-northeast-2": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{ - Hostname: "s3-ap-southeast-1.amazonaws.com", + Hostname: "s3.ap-southeast-1.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, }, "ap-southeast-2": endpoint{ - Hostname: "s3-ap-southeast-2.amazonaws.com", + Hostname: "s3.ap-southeast-2.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, }, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{ - Hostname: "s3-eu-west-1.amazonaws.com", + Hostname: "s3.eu-west-1.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, }, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "s3-external-1": endpoint{ Hostname: "s3-external-1.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, @@ -1424,7 +1591,7 @@ var awsPartition = partition{ }, }, "sa-east-1": endpoint{ - Hostname: "s3-sa-east-1.amazonaws.com", + Hostname: "s3.sa-east-1.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, }, "us-east-1": endpoint{ @@ -1433,11 +1600,11 @@ var awsPartition = partition{ }, "us-east-2": endpoint{}, "us-west-1": endpoint{ - Hostname: "s3-us-west-1.amazonaws.com", + Hostname: "s3.us-west-1.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, }, "us-west-2": endpoint{ - Hostname: "s3-us-west-2.amazonaws.com", + Hostname: "s3.us-west-2.amazonaws.com", SignatureVersions: []string{"s3", "s3v4"}, }, }, @@ -1464,14 +1631,19 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-northeast-2": 
endpoint{}, + "ap-south-1": endpoint{}, "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -1495,19 +1667,24 @@ var awsPartition = partition{ "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-3": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, "snowball": service{ Endpoints: endpoints{ + "ap-northeast-1": endpoint{}, "ap-south-1": endpoint{}, "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, + "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-1": endpoint{}, @@ -1528,6 +1705,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1550,6 +1728,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{ SSLCommonName: "queue.{dnsSuffix}", @@ -1571,6 +1750,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1582,9 +1762,12 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-southeast-1": endpoint{}, "ap-southeast-2": endpoint{}, + "ca-central-1": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, "us-west-2": endpoint{}, @@ -1602,6 +1785,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1626,6 +1810,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "local": endpoint{ Hostname: "localhost:8000", Protocols: []string{"http"}, @@ -1664,6 +1849,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-1-fips": endpoint{ @@ -1713,6 +1899,7 @@ var awsPartition = partition{ "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, + "eu-west-3": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -1756,8 +1943,11 @@ var awsPartition = partition{ Endpoints: endpoints{ "ap-northeast-1": endpoint{}, + "ap-southeast-2": endpoint{}, + "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, "us-east-1": endpoint{}, + "us-west-1": endpoint{}, "us-west-2": endpoint{}, }, }, @@ -1780,6 +1970,7 @@ var awsPartition = partition{ "ap-southeast-2": endpoint{}, "eu-central-1": endpoint{}, "eu-west-1": endpoint{}, + "eu-west-2": endpoint{}, "us-east-1": endpoint{}, "us-west-2": endpoint{}, }, @@ -1830,30 +2021,62 @@ var awscnPartition = partition{ "cn-north-1": region{ Description: "China 
(Beijing)", }, + "cn-northwest-1": region{ + Description: "China (Ningxia)", + }, }, Services: services{ + "apigateway": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "application-autoscaling": service{ + Defaults: endpoint{ + Hostname: "autoscaling.{region}.amazonaws.com", + Protocols: []string{"http", "https"}, + CredentialScope: credentialScope{ + Service: "application-autoscaling", + }, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, "autoscaling": service{ Defaults: endpoint{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "cloudformation": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "cloudtrail": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "codedeploy": service{ + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "cognito-identity": service{ + Endpoints: endpoints{ "cn-north-1": endpoint{}, }, @@ -1861,13 +2084,15 @@ var awscnPartition = partition{ "config": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "directconnect": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "dynamodb": service{ @@ -1875,7 +2100,8 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "ec2": service{ @@ -1883,7 +2109,8 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "ec2metadata": service{ @@ -1912,21 +2139,24 @@ var awscnPartition = partition{ "elasticache": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "elasticbeanstalk": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "elasticloadbalancing": service{ Defaults: endpoint{ - Protocols: []string{"http", "https"}, + Protocols: []string{"https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "elasticmapreduce": service{ @@ -1934,13 +2164,21 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "es": service{ + + Endpoints: endpoints{ + "cn-northwest-1": endpoint{}, }, }, "events": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "glacier": service{ @@ -1948,7 +2186,8 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "iam": service{ @@ -1964,8 +2203,25 @@ var awscnPartition = partition{ }, }, }, + "iot": service{ + Defaults: endpoint{ + CredentialScope: credentialScope{ + Service: "execute-api", + }, + }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, 
"kinesis": service{ + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "lambda": service{ + Endpoints: endpoints{ "cn-north-1": endpoint{}, }, @@ -1973,7 +2229,8 @@ var awscnPartition = partition{ "logs": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "monitoring": service{ @@ -1981,19 +2238,22 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "rds": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "redshift": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "s3": service{ @@ -2001,6 +2261,19 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, SignatureVersions: []string{"s3v4"}, }, + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, + "sms": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + }, + }, + "snowball": service{ + Endpoints: endpoints{ "cn-north-1": endpoint{}, }, @@ -2010,7 +2283,8 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "sqs": service{ @@ -2019,13 +2293,15 @@ var awscnPartition = partition{ Protocols: []string{"http", "https"}, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "ssm": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "storagegateway": service{ @@ -2042,19 +2318,22 @@ var awscnPartition = partition{ }, }, Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "sts": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "swf": service{ Endpoints: endpoints{ - "cn-north-1": endpoint{}, + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, }, }, "tagging": service{ @@ -2092,6 +2371,18 @@ var awsusgovPartition = partition{ }, }, Services: services{ + "acm": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, + "apigateway": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "autoscaling": service{ Endpoints: endpoints{ @@ -2136,10 +2427,22 @@ var awsusgovPartition = partition{ "us-gov-west-1": endpoint{}, }, }, + "dms": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "dynamodb": service{ Endpoints: endpoints{ "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "dynamodb.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "ec2": service{ @@ -2159,12 +2462,24 @@ var awsusgovPartition = partition{ }, }, }, + "ecs": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "elasticache": service{ Endpoints: endpoints{ "us-gov-west-1": endpoint{}, }, }, + "elasticbeanstalk": service{ + + Endpoints: endpoints{ + "us-gov-west-1": endpoint{}, + }, + }, "elasticloadbalancing": service{ Endpoints: endpoints{ @@ -2268,7 +2583,7 @@ var awsusgovPartition = partition{ }, }, "us-gov-west-1": 
endpoint{ - Hostname: "s3-us-gov-west-1.amazonaws.com", + Hostname: "s3.us-gov-west-1.amazonaws.com", Protocols: []string{"http", "https"}, }, }, @@ -2316,6 +2631,12 @@ var awsusgovPartition = partition{ }, Endpoints: endpoints{ "us-gov-west-1": endpoint{}, + "us-gov-west-1-fips": endpoint{ + Hostname: "dynamodb.us-gov-west-1.amazonaws.com", + CredentialScope: credentialScope{ + Region: "us-gov-west-1", + }, + }, }, }, "sts": service{ diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go index 088ba5290..5c7db4982 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go @@ -28,6 +28,10 @@ const ( // during body reads. ErrCodeResponseTimeout = "ResponseTimeout" + // ErrCodeInvalidPresignExpire is returned when the expire time provided to + // presign is invalid + ErrCodeInvalidPresignExpire = "InvalidPresignExpireError" + // CanceledErrorCode is the error code that will be returned by an // API request that was canceled. Requests given a aws.Context may // return this error when canceled. @@ -42,7 +46,6 @@ type Request struct { Retryer Time time.Time - ExpireTime time.Duration Operation *Operation HTTPRequest *http.Request HTTPResponse *http.Response @@ -60,6 +63,11 @@ type Request struct { LastSignedAt time.Time DisableFollowRedirects bool + // A value greater than 0 instructs the request to be signed as Presigned URL + // You should not set this field directly. Instead use Request's + // Presign or PresignRequest methods. + ExpireTime time.Duration + context aws.Context built bool @@ -104,6 +112,8 @@ func New(cfg aws.Config, clientInfo metadata.ClientInfo, handlers Handlers, err = awserr.New("InvalidEndpointURL", "invalid endpoint uri", err) } + SanitizeHostForHeader(httpReq) + r := &Request{ Config: cfg, ClientInfo: clientInfo, @@ -250,34 +260,59 @@ func (r *Request) SetReaderBody(reader io.ReadSeeker) { // Presign returns the request's signed URL. Error will be returned // if the signing fails. -func (r *Request) Presign(expireTime time.Duration) (string, error) { - r.ExpireTime = expireTime +// +// It is invalid to create a presigned URL with a expire duration 0 or less. An +// error is returned if expire duration is 0 or less. +func (r *Request) Presign(expire time.Duration) (string, error) { + r = r.copy() + + // Presign requires all headers be hoisted. There is no way to retrieve + // the signed headers not hoisted without this. Making the presigned URL + // useless. r.NotHoist = false + u, _, err := getPresignedURL(r, expire) + return u, err +} + +// PresignRequest behaves just like presign, with the addition of returning a +// set of headers that were signed. +// +// It is invalid to create a presigned URL with a expire duration 0 or less. An +// error is returned if expire duration is 0 or less. +// +// Returns the URL string for the API operation with signature in the query string, +// and the HTTP headers that were included in the signature. These headers must +// be included in any HTTP request made with the presigned URL. +// +// To prevent hoisting any headers to the query string set NotHoist to true on +// this Request value prior to calling PresignRequest. 
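As context for the request presigning changes above: Presign now copies the request and always hoists signed headers into the query string, PresignRequest returns the headers that were signed, and both reject a zero or negative expire duration with ErrCodeInvalidPresignExpire. A minimal sketch of exercising the updated API, assuming credentials and a region are already configured in the environment; the bucket and key below are placeholders, not taken from this diff:

```
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Assumes credentials and a region are provided by the environment.
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)

	// Build (but do not send) a GetObject request; bucket and key are placeholders.
	req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-example-bucket"),
		Key:    aws.String("path/to/object"),
	})

	// Presign hoists all signed headers into the query string. A zero or
	// negative duration now fails with ErrCodeInvalidPresignExpire.
	urlStr, err := req.Presign(15 * time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("presigned URL:", urlStr)

	// PresignRequest also reports which headers were signed; those headers
	// must accompany any HTTP request made with the returned URL.
	req2, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String("my-example-bucket"),
		Key:    aws.String("path/to/object"),
	})
	urlStr2, signedHeaders, err := req2.PresignRequest(15 * time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("presigned URL:", urlStr2, "signed headers:", signedHeaders)
}
```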
+func (r *Request) PresignRequest(expire time.Duration) (string, http.Header, error) { + r = r.copy() + return getPresignedURL(r, expire) +} + +func getPresignedURL(r *Request, expire time.Duration) (string, http.Header, error) { + if expire <= 0 { + return "", nil, awserr.New( + ErrCodeInvalidPresignExpire, + "presigned URL requires an expire duration greater than 0", + nil, + ) + } + + r.ExpireTime = expire + if r.Operation.BeforePresignFn != nil { - r = r.copy() - err := r.Operation.BeforePresignFn(r) - if err != nil { - return "", err + if err := r.Operation.BeforePresignFn(r); err != nil { + return "", nil, err } } - r.Sign() - if r.Error != nil { - return "", r.Error + if err := r.Sign(); err != nil { + return "", nil, err } - return r.HTTPRequest.URL.String(), nil -} -// PresignRequest behaves just like presign, but hoists all headers and signs them. -// Also returns the signed hash back to the user -func (r *Request) PresignRequest(expireTime time.Duration) (string, http.Header, error) { - r.ExpireTime = expireTime - r.NotHoist = true - r.Sign() - if r.Error != nil { - return "", nil, r.Error - } return r.HTTPRequest.URL.String(), r.SignedHeaderVals, nil } @@ -573,3 +608,72 @@ func shouldRetryCancel(r *Request) bool { errStr != "net/http: request canceled while waiting for connection") } + +// SanitizeHostForHeader removes default port from host and updates request.Host +func SanitizeHostForHeader(r *http.Request) { + host := getHost(r) + port := portOnly(host) + if port != "" && isDefaultPort(r.URL.Scheme, port) { + r.Host = stripPort(host) + } +} + +// Returns host from request +func getHost(r *http.Request) string { + if r.Host != "" { + return r.Host + } + + return r.URL.Host +} + +// Hostname returns u.Host, without any port number. +// +// If Host is an IPv6 literal with a port number, Hostname returns the +// IPv6 literal without the square brackets. IPv6 literals may include +// a zone identifier. +// +// Copied from the Go 1.8 standard library (net/url) +func stripPort(hostport string) string { + colon := strings.IndexByte(hostport, ':') + if colon == -1 { + return hostport + } + if i := strings.IndexByte(hostport, ']'); i != -1 { + return strings.TrimPrefix(hostport[:i], "[") + } + return hostport[:colon] +} + +// Port returns the port part of u.Host, without the leading colon. +// If u.Host doesn't contain a port, Port returns an empty string. +// +// Copied from the Go 1.8 standard library (net/url) +func portOnly(hostport string) string { + colon := strings.IndexByte(hostport, ':') + if colon == -1 { + return "" + } + if i := strings.Index(hostport, "]:"); i != -1 { + return hostport[i+len("]:"):] + } + if strings.Contains(hostport, "]") { + return "" + } + return hostport[colon+len(":"):] +} + +// Returns true if the specified URI is using the standard port +// (i.e. 
port 80 for HTTP URIs or 443 for HTTPS URIs) +func isDefaultPort(scheme, port string) bool { + if port == "" { + return true + } + + lowerCaseScheme := strings.ToLower(scheme) + if (lowerCaseScheme == "http" && port == "80") || (lowerCaseScheme == "https" && port == "443") { + return true + } + + return false +} diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go index 59de6736b..159518a75 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go @@ -142,13 +142,28 @@ func (r *Request) nextPageTokens() []interface{} { tokens := []interface{}{} tokenAdded := false for _, outToken := range r.Operation.OutputTokens { - v, _ := awsutil.ValuesAtPath(r.Data, outToken) - if len(v) > 0 { - tokens = append(tokens, v[0]) - tokenAdded = true - } else { + vs, _ := awsutil.ValuesAtPath(r.Data, outToken) + if len(vs) == 0 { tokens = append(tokens, nil) + continue } + v := vs[0] + + switch tv := v.(type) { + case *string: + if len(aws.StringValue(tv)) == 0 { + tokens = append(tokens, nil) + continue + } + case string: + if len(tv) == 0 { + tokens = append(tokens, nil) + continue + } + } + + tokenAdded = true + tokens = append(tokens, v) } if !tokenAdded { return nil diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go index 4b102f8f2..12b452177 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go @@ -5,8 +5,12 @@ import ( "strconv" "github.com/aws/aws-sdk-go/aws/credentials" + "github.com/aws/aws-sdk-go/aws/defaults" ) +// EnvProviderName provides a name of the provider when config is loaded from environment. +const EnvProviderName = "EnvConfigCredentials" + // envConfig is a collection of environment values the SDK will read // setup config from. All environment values are optional. But some values // such as credentials require multiple values to be complete or the values @@ -157,7 +161,7 @@ func envConfigLoad(enableSharedConfig bool) envConfig { if len(cfg.Creds.AccessKeyID) == 0 || len(cfg.Creds.SecretAccessKey) == 0 { cfg.Creds = credentials.Value{} } else { - cfg.Creds.ProviderName = "EnvConfigCredentials" + cfg.Creds.ProviderName = EnvProviderName } regionKeys := regionEnvKeys @@ -173,6 +177,13 @@ func envConfigLoad(enableSharedConfig bool) envConfig { setFromEnvVal(&cfg.SharedCredentialsFile, sharedCredsFileEnvKey) setFromEnvVal(&cfg.SharedConfigFile, sharedConfigFileEnvKey) + if len(cfg.SharedCredentialsFile) == 0 { + cfg.SharedCredentialsFile = defaults.SharedCredentialsFilename() + } + if len(cfg.SharedConfigFile) == 0 { + cfg.SharedConfigFile = defaults.SharedConfigFilename() + } + cfg.CustomCABundle = os.Getenv("AWS_CA_BUNDLE") return cfg diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go index 9f75d5ac5..2fc789d6d 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go @@ -58,7 +58,12 @@ func New(cfgs ...*aws.Config) *Session { envCfg := loadEnvConfig() if envCfg.EnableSharedConfig { - s, err := newSession(Options{}, envCfg, cfgs...) + var cfg aws.Config + cfg.MergeIn(cfgs...) 
+ s, err := NewSessionWithOptions(Options{ + Config: cfg, + SharedConfigState: SharedConfigEnable, + }) if err != nil { // Old session.New expected all errors to be discovered when // a request is made, and would report the errors then. This @@ -243,13 +248,6 @@ func NewSessionWithOptions(opts Options) (*Session, error) { envCfg.EnableSharedConfig = true } - if len(envCfg.SharedCredentialsFile) == 0 { - envCfg.SharedCredentialsFile = defaults.SharedCredentialsFilename() - } - if len(envCfg.SharedConfigFile) == 0 { - envCfg.SharedConfigFile = defaults.SharedConfigFilename() - } - // Only use AWS_CA_BUNDLE if session option is not provided. if len(envCfg.CustomCABundle) != 0 && opts.CustomCABundle == nil { f, err := os.Open(envCfg.CustomCABundle) diff --git a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go index d68905acb..ccc88b4ac 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/signer/v4/v4.go @@ -268,7 +268,7 @@ type signingCtx struct { // "X-Amz-Content-Sha256" header with a precomputed value. The signer will // only compute the hash if the request header value is empty. func (v4 Signer) Sign(r *http.Request, body io.ReadSeeker, service, region string, signTime time.Time) (http.Header, error) { - return v4.signWithBody(r, body, service, region, 0, signTime) + return v4.signWithBody(r, body, service, region, 0, false, signTime) } // Presign signs AWS v4 requests with the provided body, service name, region @@ -302,10 +302,10 @@ func (v4 Signer) Sign(r *http.Request, body io.ReadSeeker, service, region strin // presigned request's signature you can set the "X-Amz-Content-Sha256" // HTTP header and that will be included in the request's signature. 
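As context for the v4 signer changes above: presigning is now driven by an explicit flag rather than inferred from a non-zero expiry, the body digest is computed before header hoisting, and default ports are stripped from the host before signing. The signer can also be driven directly against an arbitrary *http.Request; a rough sketch under that assumption, with placeholder static credentials and a hypothetical execute-api endpoint:

```
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	v4 "github.com/aws/aws-sdk-go/aws/signer/v4"
)

func main() {
	// Placeholder static credentials; real code would use the default chain.
	creds := credentials.NewStaticCredentials("AKID", "SECRET", "")
	signer := v4.NewSigner(creds)

	// Hypothetical API Gateway endpoint, used purely for illustration.
	req, err := http.NewRequest("GET",
		"https://example.execute-api.us-east-1.amazonaws.com/prod/items", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Presign puts the signature in the query string; any returned headers
	// must be sent with the request. Default ports (80/443) are now stripped
	// from the Host header before signing.
	signedHeaders, err := signer.Presign(req, strings.NewReader(""),
		"execute-api", "us-east-1", 15*time.Minute, time.Now())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("URL:", req.URL.String())
	fmt.Println("signed headers:", signedHeaders)
}
```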
func (v4 Signer) Presign(r *http.Request, body io.ReadSeeker, service, region string, exp time.Duration, signTime time.Time) (http.Header, error) { - return v4.signWithBody(r, body, service, region, exp, signTime) + return v4.signWithBody(r, body, service, region, exp, true, signTime) } -func (v4 Signer) signWithBody(r *http.Request, body io.ReadSeeker, service, region string, exp time.Duration, signTime time.Time) (http.Header, error) { +func (v4 Signer) signWithBody(r *http.Request, body io.ReadSeeker, service, region string, exp time.Duration, isPresign bool, signTime time.Time) (http.Header, error) { currentTimeFn := v4.currentTimeFn if currentTimeFn == nil { currentTimeFn = time.Now @@ -317,7 +317,7 @@ func (v4 Signer) signWithBody(r *http.Request, body io.ReadSeeker, service, regi Query: r.URL.Query(), Time: signTime, ExpireTime: exp, - isPresign: exp != 0, + isPresign: isPresign, ServiceName: service, Region: region, DisableURIPathEscaping: v4.DisableURIPathEscaping, @@ -339,6 +339,7 @@ func (v4 Signer) signWithBody(r *http.Request, body io.ReadSeeker, service, regi return http.Header{}, err } + ctx.sanitizeHostForHeader() ctx.assignAmzQueryValues() ctx.build(v4.DisableHeaderHoisting) @@ -363,6 +364,10 @@ func (v4 Signer) signWithBody(r *http.Request, body io.ReadSeeker, service, regi return ctx.SignedHeaderVals, nil } +func (ctx *signingCtx) sanitizeHostForHeader() { + request.SanitizeHostForHeader(ctx.Request) +} + func (ctx *signingCtx) handlePresignRemoval() { if !ctx.isPresign { return @@ -467,7 +472,7 @@ func signSDKRequestWithCurrTime(req *request.Request, curTimeFn func() time.Time } signedHeaders, err := v4.signWithBody(req.HTTPRequest, req.GetBody(), - name, region, req.ExpireTime, signingTime, + name, region, req.ExpireTime, req.ExpireTime > 0, signingTime, ) if err != nil { req.Error = err @@ -502,6 +507,8 @@ func (ctx *signingCtx) build(disableHeaderHoisting bool) { ctx.buildTime() // no depends ctx.buildCredentialString() // no depends + ctx.buildBodyDigest() + unsignedHeaders := ctx.Request.Header if ctx.isPresign { if !disableHeaderHoisting { @@ -513,7 +520,6 @@ func (ctx *signingCtx) build(disableHeaderHoisting bool) { } } - ctx.buildBodyDigest() ctx.buildCanonicalHeaders(ignoredHeaders, unsignedHeaders) ctx.buildCanonicalString() // depends on canon headers / signed headers ctx.buildStringToSign() // depends on canon string diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go index 7a4f26e39..02dfdd9e2 100644 --- a/vendor/github.com/aws/aws-sdk-go/aws/version.go +++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.10.23" +const SDKVersion = "1.12.72" diff --git a/vendor/github.com/aws/aws-sdk-go/buildspec.yml b/vendor/github.com/aws/aws-sdk-go/buildspec.yml new file mode 100644 index 000000000..427208edf --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/buildspec.yml @@ -0,0 +1,21 @@ +version: 0.2 + +phases: + build: + commands: + - echo Build started on `date` + - export GOPATH=/go + - export SDK_CB_ROOT=`pwd` + - export SDK_GO_ROOT=/go/src/github.com/aws/aws-sdk-go + - mkdir -p /go/src/github.com/aws + - ln -s $SDK_CB_ROOT $SDK_GO_ROOT + - cd $SDK_GO_ROOT + - make unit + - cd $SDK_CB_ROOT + - #echo Compiling the Go code... 
+ post_build: + commands: + - echo Build completed on `date` +#artifacts: +# files: +# - hello diff --git a/vendor/github.com/aws/aws-sdk-go/doc.go b/vendor/github.com/aws/aws-sdk-go/doc.go index 3e077e51d..32b806a4a 100644 --- a/vendor/github.com/aws/aws-sdk-go/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/doc.go @@ -197,7 +197,7 @@ // regions different from the Session's region. // // svc := s3.New(sess, &aws.Config{ -// Region: aws.String(ednpoints.UsWest2RegionID), +// Region: aws.String(endpoints.UsWest2RegionID), // }) // // See the Config type in the aws package for more information and additional diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go index 6efe43d5f..ec765ba25 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go @@ -12,6 +12,7 @@ import ( "strconv" "time" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/private/protocol" ) @@ -49,7 +50,10 @@ func buildAny(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) err t = "list" } case reflect.Map: - t = "map" + // cannot be a JSONValue map + if _, ok := value.Interface().(aws.JSONValue); !ok { + t = "map" + } } } @@ -210,14 +214,11 @@ func buildScalar(v reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) erro } buf.Write(strconv.AppendFloat(scratch[:0], f, 'f', -1, 64)) default: - switch value.Type() { - case timeType: - converted := v.Interface().(*time.Time) - + switch converted := value.Interface().(type) { + case time.Time: buf.Write(strconv.AppendInt(scratch[:0], converted.UTC().Unix(), 10)) - case byteSliceType: + case []byte: if !value.IsNil() { - converted := value.Interface().([]byte) buf.WriteByte('"') if len(converted) < 1024 { // for small buffers, using Encode directly is much faster. @@ -233,6 +234,12 @@ func buildScalar(v reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) erro } buf.WriteByte('"') } + case aws.JSONValue: + str, err := protocol.EncodeJSONValue(converted, protocol.QuotedEscape) + if err != nil { + return fmt.Errorf("unable to encode JSONValue, %v", err) + } + buf.WriteString(str) default: return fmt.Errorf("unsupported JSON value %v (%s)", value.Interface(), value.Type()) } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go index fea535613..037e1e7be 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go @@ -8,6 +8,9 @@ import ( "io/ioutil" "reflect" "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/private/protocol" ) // UnmarshalJSON reads a stream and unmarshals the results in object v. @@ -50,7 +53,10 @@ func unmarshalAny(value reflect.Value, data interface{}, tag reflect.StructTag) t = "list" } case reflect.Map: - t = "map" + // cannot be a JSONValue map + if _, ok := value.Interface().(aws.JSONValue); !ok { + t = "map" + } } } @@ -183,6 +189,13 @@ func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTa return err } value.Set(reflect.ValueOf(b)) + case aws.JSONValue: + // No need to use escaping as the value is a non-quoted string. 
+ v, err := protocol.DecodeJSONValue(d, protocol.NoEscape) + if err != nil { + return err + } + value.Set(reflect.ValueOf(v)) default: return errf() } diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonvalue.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonvalue.go new file mode 100644 index 000000000..776d11018 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonvalue.go @@ -0,0 +1,76 @@ +package protocol + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "strconv" + + "github.com/aws/aws-sdk-go/aws" +) + +// EscapeMode is the mode that should be use for escaping a value +type EscapeMode uint + +// The modes for escaping a value before it is marshaled, and unmarshaled. +const ( + NoEscape EscapeMode = iota + Base64Escape + QuotedEscape +) + +// EncodeJSONValue marshals the value into a JSON string, and optionally base64 +// encodes the string before returning it. +// +// Will panic if the escape mode is unknown. +func EncodeJSONValue(v aws.JSONValue, escape EscapeMode) (string, error) { + b, err := json.Marshal(v) + if err != nil { + return "", err + } + + switch escape { + case NoEscape: + return string(b), nil + case Base64Escape: + return base64.StdEncoding.EncodeToString(b), nil + case QuotedEscape: + return strconv.Quote(string(b)), nil + } + + panic(fmt.Sprintf("EncodeJSONValue called with unknown EscapeMode, %v", escape)) +} + +// DecodeJSONValue will attempt to decode the string input as a JSONValue. +// Optionally decoding base64 the value first before JSON unmarshaling. +// +// Will panic if the escape mode is unknown. +func DecodeJSONValue(v string, escape EscapeMode) (aws.JSONValue, error) { + var b []byte + var err error + + switch escape { + case NoEscape: + b = []byte(v) + case Base64Escape: + b, err = base64.StdEncoding.DecodeString(v) + case QuotedEscape: + var u string + u, err = strconv.Unquote(v) + b = []byte(u) + default: + panic(fmt.Sprintf("DecodeJSONValue called with unknown EscapeMode, %v", escape)) + } + + if err != nil { + return nil, err + } + + m := aws.JSONValue{} + err = json.Unmarshal(b, &m) + if err != nil { + return nil, err + } + + return m, nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go index 524ca952a..5ce9cba32 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/query/queryutil/queryutil.go @@ -121,6 +121,10 @@ func (q *queryParser) parseList(v url.Values, value reflect.Value, prefix string return nil } + if _, ok := value.Interface().([]byte); ok { + return q.parseScalar(v, value, prefix, tag) + } + // check for unflattened list member if !q.isEC2 && tag.Get("flattened") == "" { if listName := tag.Get("locationNameList"); listName == "" { diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go index 716183564..c405288d7 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/build.go @@ -4,7 +4,6 @@ package rest import ( "bytes" "encoding/base64" - "encoding/json" "fmt" "io" "net/http" @@ -18,6 +17,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" ) // RFC822 returns an RFC822 
formatted timestamp for AWS protocols @@ -252,13 +252,12 @@ func EscapePath(path string, encodeSep bool) string { return buf.String() } -func convertType(v reflect.Value, tag reflect.StructTag) (string, error) { +func convertType(v reflect.Value, tag reflect.StructTag) (str string, err error) { v = reflect.Indirect(v) if !v.IsValid() { return "", errValueNotSet } - var str string switch value := v.Interface().(type) { case string: str = value @@ -273,17 +272,19 @@ func convertType(v reflect.Value, tag reflect.StructTag) (string, error) { case time.Time: str = value.UTC().Format(RFC822) case aws.JSONValue: - b, err := json.Marshal(value) - if err != nil { - return "", err + if len(value) == 0 { + return "", errValueNotSet } + escaping := protocol.NoEscape if tag.Get("location") == "header" { - str = base64.StdEncoding.EncodeToString(b) - } else { - str = string(b) + escaping = protocol.Base64Escape + } + str, err = protocol.EncodeJSONValue(value, escaping) + if err != nil { + return "", fmt.Errorf("unable to encode JSONValue, %v", err) } default: - err := fmt.Errorf("Unsupported value for param %v (%s)", v.Interface(), v.Type()) + err := fmt.Errorf("unsupported value for param %v (%s)", v.Interface(), v.Type()) return "", err } return str, nil diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go index 7a779ee22..823f045ee 100644 --- a/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/rest/unmarshal.go @@ -3,7 +3,6 @@ package rest import ( "bytes" "encoding/base64" - "encoding/json" "fmt" "io" "io/ioutil" @@ -16,6 +15,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/awserr" "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" ) // UnmarshalHandler is a named request handler for unmarshaling rest protocol requests @@ -204,17 +204,11 @@ func unmarshalHeader(v reflect.Value, header string, tag reflect.StructTag) erro } v.Set(reflect.ValueOf(&t)) case aws.JSONValue: - b := []byte(header) - var err error + escaping := protocol.NoEscape if tag.Get("location") == "header" { - b, err = base64.StdEncoding.DecodeString(header) - if err != nil { - return err - } + escaping = protocol.Base64Escape } - - m := aws.JSONValue{} - err = json.Unmarshal(b, &m) + m, err := protocol.DecodeJSONValue(header, escaping) if err != nil { return err } diff --git a/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go b/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go index da2809630..3461bb96a 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go @@ -38,7 +38,7 @@ const opAcceptReservedInstancesExchangeQuote = "AcceptReservedInstancesExchangeQ // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptReservedInstancesExchangeQuote +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptReservedInstancesExchangeQuote func (c *EC2) AcceptReservedInstancesExchangeQuoteRequest(input *AcceptReservedInstancesExchangeQuoteInput) (req *request.Request, output *AcceptReservedInstancesExchangeQuoteOutput) { op := &request.Operation{ Name: opAcceptReservedInstancesExchangeQuote, @@ -66,7 +66,7 @@ func (c *EC2) AcceptReservedInstancesExchangeQuoteRequest(input *AcceptReservedI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API 
operation AcceptReservedInstancesExchangeQuote for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptReservedInstancesExchangeQuote +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptReservedInstancesExchangeQuote func (c *EC2) AcceptReservedInstancesExchangeQuote(input *AcceptReservedInstancesExchangeQuoteInput) (*AcceptReservedInstancesExchangeQuoteOutput, error) { req, out := c.AcceptReservedInstancesExchangeQuoteRequest(input) return out, req.Send() @@ -88,6 +88,81 @@ func (c *EC2) AcceptReservedInstancesExchangeQuoteWithContext(ctx aws.Context, i return out, req.Send() } +const opAcceptVpcEndpointConnections = "AcceptVpcEndpointConnections" + +// AcceptVpcEndpointConnectionsRequest generates a "aws/request.Request" representing the +// client's request for the AcceptVpcEndpointConnections operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AcceptVpcEndpointConnections for more information on using the AcceptVpcEndpointConnections +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AcceptVpcEndpointConnectionsRequest method. +// req, resp := client.AcceptVpcEndpointConnectionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcEndpointConnections +func (c *EC2) AcceptVpcEndpointConnectionsRequest(input *AcceptVpcEndpointConnectionsInput) (req *request.Request, output *AcceptVpcEndpointConnectionsOutput) { + op := &request.Operation{ + Name: opAcceptVpcEndpointConnections, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AcceptVpcEndpointConnectionsInput{} + } + + output = &AcceptVpcEndpointConnectionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// AcceptVpcEndpointConnections API operation for Amazon Elastic Compute Cloud. +// +// Accepts one or more interface VPC endpoint connection requests to your VPC +// endpoint service. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation AcceptVpcEndpointConnections for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcEndpointConnections +func (c *EC2) AcceptVpcEndpointConnections(input *AcceptVpcEndpointConnectionsInput) (*AcceptVpcEndpointConnectionsOutput, error) { + req, out := c.AcceptVpcEndpointConnectionsRequest(input) + return out, req.Send() +} + +// AcceptVpcEndpointConnectionsWithContext is the same as AcceptVpcEndpointConnections with the addition of +// the ability to pass a context and additional request options. +// +// See AcceptVpcEndpointConnections for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) AcceptVpcEndpointConnectionsWithContext(ctx aws.Context, input *AcceptVpcEndpointConnectionsInput, opts ...request.Option) (*AcceptVpcEndpointConnectionsOutput, error) { + req, out := c.AcceptVpcEndpointConnectionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opAcceptVpcPeeringConnection = "AcceptVpcPeeringConnection" // AcceptVpcPeeringConnectionRequest generates a "aws/request.Request" representing the @@ -113,7 +188,7 @@ const opAcceptVpcPeeringConnection = "AcceptVpcPeeringConnection" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcPeeringConnection func (c *EC2) AcceptVpcPeeringConnectionRequest(input *AcceptVpcPeeringConnectionInput) (req *request.Request, output *AcceptVpcPeeringConnectionOutput) { op := &request.Operation{ Name: opAcceptVpcPeeringConnection, @@ -137,13 +212,16 @@ func (c *EC2) AcceptVpcPeeringConnectionRequest(input *AcceptVpcPeeringConnectio // of the peer VPC. Use DescribeVpcPeeringConnections to view your outstanding // VPC peering connection requests. // +// For an inter-region VPC peering connection request, you must accept the VPC +// peering connection in the region of the accepter VPC. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AcceptVpcPeeringConnection for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcPeeringConnection func (c *EC2) AcceptVpcPeeringConnection(input *AcceptVpcPeeringConnectionInput) (*AcceptVpcPeeringConnectionOutput, error) { req, out := c.AcceptVpcPeeringConnectionRequest(input) return out, req.Send() @@ -190,7 +268,7 @@ const opAllocateAddress = "AllocateAddress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateAddress func (c *EC2) AllocateAddressRequest(input *AllocateAddressInput) (req *request.Request, output *AllocateAddressOutput) { op := &request.Operation{ Name: opAllocateAddress, @@ -209,10 +287,18 @@ func (c *EC2) AllocateAddressRequest(input *AllocateAddressInput) (req *request. // AllocateAddress API operation for Amazon Elastic Compute Cloud. // -// Acquires an Elastic IP address. +// Allocates an Elastic IP address. // // An Elastic IP address is for use either in the EC2-Classic platform or in -// a VPC. For more information, see Elastic IP Addresses (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) +// a VPC. By default, you can allocate 5 Elastic IP addresses for EC2-Classic +// per region and 5 Elastic IP addresses for EC2-VPC per region. +// +// If you release an Elastic IP address for use in a VPC, you might be able +// to recover it. To recover an Elastic IP address that you released, specify +// it in the Address parameter. 
Note that you cannot recover an Elastic IP address +// that you released after it is allocated to another AWS account. +// +// For more information, see Elastic IP Addresses (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -221,7 +307,7 @@ func (c *EC2) AllocateAddressRequest(input *AllocateAddressInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AllocateAddress for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateAddress func (c *EC2) AllocateAddress(input *AllocateAddressInput) (*AllocateAddressOutput, error) { req, out := c.AllocateAddressRequest(input) return out, req.Send() @@ -268,7 +354,7 @@ const opAllocateHosts = "AllocateHosts" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateHosts func (c *EC2) AllocateHostsRequest(input *AllocateHostsInput) (req *request.Request, output *AllocateHostsOutput) { op := &request.Operation{ Name: opAllocateHosts, @@ -297,7 +383,7 @@ func (c *EC2) AllocateHostsRequest(input *AllocateHostsInput) (req *request.Requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AllocateHosts for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateHosts func (c *EC2) AllocateHosts(input *AllocateHostsInput) (*AllocateHostsOutput, error) { req, out := c.AllocateHostsRequest(input) return out, req.Send() @@ -344,7 +430,7 @@ const opAssignIpv6Addresses = "AssignIpv6Addresses" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignIpv6Addresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignIpv6Addresses func (c *EC2) AssignIpv6AddressesRequest(input *AssignIpv6AddressesInput) (req *request.Request, output *AssignIpv6AddressesOutput) { op := &request.Operation{ Name: opAssignIpv6Addresses, @@ -378,7 +464,7 @@ func (c *EC2) AssignIpv6AddressesRequest(input *AssignIpv6AddressesInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssignIpv6Addresses for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignIpv6Addresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignIpv6Addresses func (c *EC2) AssignIpv6Addresses(input *AssignIpv6AddressesInput) (*AssignIpv6AddressesOutput, error) { req, out := c.AssignIpv6AddressesRequest(input) return out, req.Send() @@ -425,7 +511,7 @@ const opAssignPrivateIpAddresses = "AssignPrivateIpAddresses" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignPrivateIpAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignPrivateIpAddresses func (c *EC2) AssignPrivateIpAddressesRequest(input *AssignPrivateIpAddressesInput) (req *request.Request, output *AssignPrivateIpAddressesOutput) { op := &request.Operation{ Name: opAssignPrivateIpAddresses, @@ -464,7 +550,7 @@ func (c *EC2) AssignPrivateIpAddressesRequest(input *AssignPrivateIpAddressesInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssignPrivateIpAddresses for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignPrivateIpAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignPrivateIpAddresses func (c *EC2) AssignPrivateIpAddresses(input *AssignPrivateIpAddressesInput) (*AssignPrivateIpAddressesOutput, error) { req, out := c.AssignPrivateIpAddressesRequest(input) return out, req.Send() @@ -511,7 +597,7 @@ const opAssociateAddress = "AssociateAddress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateAddress func (c *EC2) AssociateAddressRequest(input *AssociateAddressInput) (req *request.Request, output *AssociateAddressOutput) { op := &request.Operation{ Name: opAssociateAddress, @@ -561,7 +647,7 @@ func (c *EC2) AssociateAddressRequest(input *AssociateAddressInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssociateAddress for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateAddress func (c *EC2) AssociateAddress(input *AssociateAddressInput) (*AssociateAddressOutput, error) { req, out := c.AssociateAddressRequest(input) return out, req.Send() @@ -608,7 +694,7 @@ const opAssociateDhcpOptions = "AssociateDhcpOptions" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateDhcpOptions func (c *EC2) AssociateDhcpOptionsRequest(input *AssociateDhcpOptionsInput) (req *request.Request, output *AssociateDhcpOptionsOutput) { op := &request.Operation{ Name: opAssociateDhcpOptions, @@ -648,7 +734,7 @@ func (c *EC2) AssociateDhcpOptionsRequest(input *AssociateDhcpOptionsInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssociateDhcpOptions for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateDhcpOptions func (c *EC2) AssociateDhcpOptions(input *AssociateDhcpOptionsInput) (*AssociateDhcpOptionsOutput, error) { req, out := c.AssociateDhcpOptionsRequest(input) return out, req.Send() @@ -695,7 +781,7 @@ const opAssociateIamInstanceProfile = "AssociateIamInstanceProfile" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateIamInstanceProfile +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateIamInstanceProfile func (c *EC2) AssociateIamInstanceProfileRequest(input *AssociateIamInstanceProfileInput) (req *request.Request, output *AssociateIamInstanceProfileOutput) { op := &request.Operation{ Name: opAssociateIamInstanceProfile, @@ -723,7 +809,7 @@ func (c *EC2) AssociateIamInstanceProfileRequest(input *AssociateIamInstanceProf // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssociateIamInstanceProfile for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateIamInstanceProfile +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateIamInstanceProfile func (c *EC2) AssociateIamInstanceProfile(input *AssociateIamInstanceProfileInput) (*AssociateIamInstanceProfileOutput, error) { req, out := c.AssociateIamInstanceProfileRequest(input) return out, req.Send() @@ -770,7 +856,7 @@ const opAssociateRouteTable = "AssociateRouteTable" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateRouteTable func (c *EC2) AssociateRouteTableRequest(input *AssociateRouteTableInput) (req *request.Request, output *AssociateRouteTableOutput) { op := &request.Operation{ Name: opAssociateRouteTable, @@ -804,7 +890,7 @@ func (c *EC2) AssociateRouteTableRequest(input *AssociateRouteTableInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssociateRouteTable for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateRouteTable func (c *EC2) AssociateRouteTable(input *AssociateRouteTableInput) (*AssociateRouteTableOutput, error) { req, out := c.AssociateRouteTableRequest(input) return out, req.Send() @@ -851,7 +937,7 @@ const opAssociateSubnetCidrBlock = "AssociateSubnetCidrBlock" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateSubnetCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateSubnetCidrBlock func (c *EC2) AssociateSubnetCidrBlockRequest(input *AssociateSubnetCidrBlockInput) (req *request.Request, output *AssociateSubnetCidrBlockOutput) { op := &request.Operation{ Name: opAssociateSubnetCidrBlock, @@ -880,7 +966,7 @@ func (c *EC2) AssociateSubnetCidrBlockRequest(input *AssociateSubnetCidrBlockInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssociateSubnetCidrBlock for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateSubnetCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateSubnetCidrBlock func (c *EC2) AssociateSubnetCidrBlock(input *AssociateSubnetCidrBlockInput) (*AssociateSubnetCidrBlockOutput, error) { req, out := c.AssociateSubnetCidrBlockRequest(input) return out, req.Send() @@ -927,7 +1013,7 @@ const opAssociateVpcCidrBlock = "AssociateVpcCidrBlock" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateVpcCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateVpcCidrBlock func (c *EC2) AssociateVpcCidrBlockRequest(input *AssociateVpcCidrBlockInput) (req *request.Request, output *AssociateVpcCidrBlockOutput) { op := &request.Operation{ Name: opAssociateVpcCidrBlock, @@ -946,8 +1032,13 @@ func (c *EC2) AssociateVpcCidrBlockRequest(input *AssociateVpcCidrBlockInput) (r // AssociateVpcCidrBlock API operation for Amazon Elastic Compute Cloud. // -// Associates a CIDR block with your VPC. You can only associate a single Amazon-provided -// IPv6 CIDR block with your VPC. The IPv6 CIDR block size is fixed at /56. +// Associates a CIDR block with your VPC. You can associate a secondary IPv4 +// CIDR block, or you can associate an Amazon-provided IPv6 CIDR block. The +// IPv6 CIDR block size is fixed at /56. +// +// For more information about associating CIDR blocks with your VPC and applicable +// restrictions, see VPC and Subnet Sizing (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html#VPC_Sizing) +// in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -955,7 +1046,7 @@ func (c *EC2) AssociateVpcCidrBlockRequest(input *AssociateVpcCidrBlockInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AssociateVpcCidrBlock for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateVpcCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateVpcCidrBlock func (c *EC2) AssociateVpcCidrBlock(input *AssociateVpcCidrBlockInput) (*AssociateVpcCidrBlockOutput, error) { req, out := c.AssociateVpcCidrBlockRequest(input) return out, req.Send() @@ -1002,7 +1093,7 @@ const opAttachClassicLinkVpc = "AttachClassicLinkVpc" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachClassicLinkVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachClassicLinkVpc func (c *EC2) AttachClassicLinkVpcRequest(input *AttachClassicLinkVpcInput) (req *request.Request, output *AttachClassicLinkVpcOutput) { op := &request.Operation{ Name: opAttachClassicLinkVpc, @@ -1040,7 +1131,7 @@ func (c *EC2) AttachClassicLinkVpcRequest(input *AttachClassicLinkVpcInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AttachClassicLinkVpc for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachClassicLinkVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachClassicLinkVpc func (c *EC2) AttachClassicLinkVpc(input *AttachClassicLinkVpcInput) (*AttachClassicLinkVpcOutput, error) { req, out := c.AttachClassicLinkVpcRequest(input) return out, req.Send() @@ -1087,7 +1178,7 @@ const opAttachInternetGateway = "AttachInternetGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachInternetGateway func (c *EC2) AttachInternetGatewayRequest(input *AttachInternetGatewayInput) (req *request.Request, output *AttachInternetGatewayOutput) { op := &request.Operation{ Name: opAttachInternetGateway, @@ -1118,7 +1209,7 @@ func (c *EC2) AttachInternetGatewayRequest(input *AttachInternetGatewayInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AttachInternetGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachInternetGateway func (c *EC2) AttachInternetGateway(input *AttachInternetGatewayInput) (*AttachInternetGatewayOutput, error) { req, out := c.AttachInternetGatewayRequest(input) return out, req.Send() @@ -1165,7 +1256,7 @@ const opAttachNetworkInterface = "AttachNetworkInterface" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachNetworkInterface func (c *EC2) AttachNetworkInterfaceRequest(input *AttachNetworkInterfaceInput) (req *request.Request, output *AttachNetworkInterfaceOutput) { op := &request.Operation{ Name: opAttachNetworkInterface, @@ -1192,7 +1283,7 @@ func (c *EC2) AttachNetworkInterfaceRequest(input *AttachNetworkInterfaceInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AttachNetworkInterface for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachNetworkInterface func (c *EC2) AttachNetworkInterface(input *AttachNetworkInterfaceInput) (*AttachNetworkInterfaceOutput, error) { req, out := c.AttachNetworkInterfaceRequest(input) return out, req.Send() @@ -1239,7 +1330,7 @@ const opAttachVolume = "AttachVolume" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVolume func (c *EC2) AttachVolumeRequest(input *AttachVolumeInput) (req *request.Request, output *VolumeAttachment) { op := &request.Operation{ Name: opAttachVolume, @@ -1295,7 +1386,7 @@ func (c *EC2) AttachVolumeRequest(input *AttachVolumeInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AttachVolume for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVolume func (c *EC2) AttachVolume(input *AttachVolumeInput) (*VolumeAttachment, error) { req, out := c.AttachVolumeRequest(input) return out, req.Send() @@ -1342,7 +1433,7 @@ const opAttachVpnGateway = "AttachVpnGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVpnGateway func (c *EC2) AttachVpnGatewayRequest(input *AttachVpnGatewayInput) (req *request.Request, output *AttachVpnGatewayOutput) { op := &request.Operation{ Name: opAttachVpnGateway, @@ -1364,8 +1455,7 @@ func (c *EC2) AttachVpnGatewayRequest(input *AttachVpnGatewayInput) (req *reques // Attaches a virtual private gateway to a VPC. You can attach one virtual private // gateway to one VPC at a time. // -// For more information, see Adding a Hardware Virtual Private Gateway to Your -// VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) +// For more information, see AWS Managed VPN Connections (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1374,7 +1464,7 @@ func (c *EC2) AttachVpnGatewayRequest(input *AttachVpnGatewayInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AttachVpnGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVpnGateway func (c *EC2) AttachVpnGateway(input *AttachVpnGatewayInput) (*AttachVpnGatewayOutput, error) { req, out := c.AttachVpnGatewayRequest(input) return out, req.Send() @@ -1421,7 +1511,7 @@ const opAuthorizeSecurityGroupEgress = "AuthorizeSecurityGroupEgress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupEgress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupEgress func (c *EC2) AuthorizeSecurityGroupEgressRequest(input *AuthorizeSecurityGroupEgressInput) (req *request.Request, output *AuthorizeSecurityGroupEgressOutput) { op := &request.Operation{ Name: opAuthorizeSecurityGroupEgress, @@ -1455,7 +1545,8 @@ func (c *EC2) AuthorizeSecurityGroupEgressRequest(input *AuthorizeSecurityGroupE // range or a source group. For the TCP and UDP protocols, you must also specify // the destination port or port range. For the ICMP protocol, you must also // specify the ICMP type and code. You can use -1 for the type or code to mean -// all types or all codes. +// all types or all codes. You can optionally specify a description for the +// rule. // // Rule changes are propagated to affected instances as quickly as possible. // However, a small delay might occur. @@ -1466,7 +1557,7 @@ func (c *EC2) AuthorizeSecurityGroupEgressRequest(input *AuthorizeSecurityGroupE // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AuthorizeSecurityGroupEgress for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupEgress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupEgress func (c *EC2) AuthorizeSecurityGroupEgress(input *AuthorizeSecurityGroupEgressInput) (*AuthorizeSecurityGroupEgressOutput, error) { req, out := c.AuthorizeSecurityGroupEgressRequest(input) return out, req.Send() @@ -1513,7 +1604,7 @@ const opAuthorizeSecurityGroupIngress = "AuthorizeSecurityGroupIngress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupIngress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupIngress func (c *EC2) AuthorizeSecurityGroupIngressRequest(input *AuthorizeSecurityGroupIngressInput) (req *request.Request, output *AuthorizeSecurityGroupIngressOutput) { op := &request.Operation{ Name: opAuthorizeSecurityGroupIngress, @@ -1552,13 +1643,15 @@ func (c *EC2) AuthorizeSecurityGroupIngressRequest(input *AuthorizeSecurityGroup // peer VPC in a VPC peering connection. For more information about VPC security // group limits, see Amazon VPC Limits (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html). // +// You can optionally specify a description for the security group rule. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation AuthorizeSecurityGroupIngress for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupIngress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupIngress func (c *EC2) AuthorizeSecurityGroupIngress(input *AuthorizeSecurityGroupIngressInput) (*AuthorizeSecurityGroupIngressOutput, error) { req, out := c.AuthorizeSecurityGroupIngressRequest(input) return out, req.Send() @@ -1605,7 +1698,7 @@ const opBundleInstance = "BundleInstance" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleInstance +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleInstance func (c *EC2) BundleInstanceRequest(input *BundleInstanceInput) (req *request.Request, output *BundleInstanceOutput) { op := &request.Operation{ Name: opBundleInstance, @@ -1640,7 +1733,7 @@ func (c *EC2) BundleInstanceRequest(input *BundleInstanceInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation BundleInstance for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleInstance +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleInstance func (c *EC2) BundleInstance(input *BundleInstanceInput) (*BundleInstanceOutput, error) { req, out := c.BundleInstanceRequest(input) return out, req.Send() @@ -1687,7 +1780,7 @@ const opCancelBundleTask = "CancelBundleTask" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelBundleTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelBundleTask func (c *EC2) CancelBundleTaskRequest(input *CancelBundleTaskInput) (req *request.Request, output *CancelBundleTaskOutput) { op := &request.Operation{ Name: opCancelBundleTask, @@ -1714,7 +1807,7 @@ func (c *EC2) CancelBundleTaskRequest(input *CancelBundleTaskInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CancelBundleTask for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelBundleTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelBundleTask func (c *EC2) CancelBundleTask(input *CancelBundleTaskInput) (*CancelBundleTaskOutput, error) { req, out := c.CancelBundleTaskRequest(input) return out, req.Send() @@ -1761,7 +1854,7 @@ const opCancelConversionTask = "CancelConversionTask" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelConversionTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelConversionTask func (c *EC2) CancelConversionTaskRequest(input *CancelConversionTaskInput) (req *request.Request, output *CancelConversionTaskOutput) { op := &request.Operation{ Name: opCancelConversionTask, @@ -1797,7 +1890,7 @@ func (c *EC2) CancelConversionTaskRequest(input *CancelConversionTaskInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CancelConversionTask for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelConversionTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelConversionTask func (c *EC2) CancelConversionTask(input *CancelConversionTaskInput) (*CancelConversionTaskOutput, error) { req, out := c.CancelConversionTaskRequest(input) return out, req.Send() @@ -1844,7 +1937,7 @@ const opCancelExportTask = "CancelExportTask" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelExportTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelExportTask func (c *EC2) CancelExportTaskRequest(input *CancelExportTaskInput) (req *request.Request, output *CancelExportTaskOutput) { op := &request.Operation{ Name: opCancelExportTask, @@ -1876,7 +1969,7 @@ func (c *EC2) CancelExportTaskRequest(input *CancelExportTaskInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CancelExportTask for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelExportTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelExportTask func (c *EC2) CancelExportTask(input *CancelExportTaskInput) (*CancelExportTaskOutput, error) { req, out := c.CancelExportTaskRequest(input) return out, req.Send() @@ -1923,7 +2016,7 @@ const opCancelImportTask = "CancelImportTask" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelImportTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelImportTask func (c *EC2) CancelImportTaskRequest(input *CancelImportTaskInput) (req *request.Request, output *CancelImportTaskOutput) { op := &request.Operation{ Name: opCancelImportTask, @@ -1950,7 +2043,7 @@ func (c *EC2) CancelImportTaskRequest(input *CancelImportTaskInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CancelImportTask for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelImportTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelImportTask func (c *EC2) CancelImportTask(input *CancelImportTaskInput) (*CancelImportTaskOutput, error) { req, out := c.CancelImportTaskRequest(input) return out, req.Send() @@ -1997,7 +2090,7 @@ const opCancelReservedInstancesListing = "CancelReservedInstancesListing" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelReservedInstancesListing +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelReservedInstancesListing func (c *EC2) CancelReservedInstancesListingRequest(input *CancelReservedInstancesListingInput) (req *request.Request, output *CancelReservedInstancesListingOutput) { op := &request.Operation{ Name: opCancelReservedInstancesListing, @@ -2028,7 +2121,7 @@ func (c *EC2) CancelReservedInstancesListingRequest(input *CancelReservedInstanc // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CancelReservedInstancesListing for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelReservedInstancesListing +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelReservedInstancesListing func (c *EC2) CancelReservedInstancesListing(input *CancelReservedInstancesListingInput) (*CancelReservedInstancesListingOutput, error) { req, out := c.CancelReservedInstancesListingRequest(input) return out, req.Send() @@ -2075,7 +2168,7 @@ const opCancelSpotFleetRequests = "CancelSpotFleetRequests" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequests func (c *EC2) CancelSpotFleetRequestsRequest(input *CancelSpotFleetRequestsInput) (req *request.Request, output *CancelSpotFleetRequestsOutput) { op := &request.Operation{ Name: opCancelSpotFleetRequests, @@ -2094,12 +2187,12 @@ func (c *EC2) CancelSpotFleetRequestsRequest(input *CancelSpotFleetRequestsInput // CancelSpotFleetRequests API operation for Amazon Elastic Compute Cloud. // -// Cancels the specified Spot fleet requests. +// Cancels the specified Spot Fleet requests. // -// After you cancel a Spot fleet request, the Spot fleet launches no new Spot -// instances. 
You must specify whether the Spot fleet should also terminate -// its Spot instances. If you terminate the instances, the Spot fleet request -// enters the cancelled_terminating state. Otherwise, the Spot fleet request +// After you cancel a Spot Fleet request, the Spot Fleet launches no new Spot +// Instances. You must specify whether the Spot Fleet should also terminate +// its Spot Instances. If you terminate the instances, the Spot Fleet request +// enters the cancelled_terminating state. Otherwise, the Spot Fleet request // enters the cancelled_running state and the instances continue to run until // they are interrupted or you terminate them manually. // @@ -2109,7 +2202,7 @@ func (c *EC2) CancelSpotFleetRequestsRequest(input *CancelSpotFleetRequestsInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CancelSpotFleetRequests for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequests func (c *EC2) CancelSpotFleetRequests(input *CancelSpotFleetRequestsInput) (*CancelSpotFleetRequestsOutput, error) { req, out := c.CancelSpotFleetRequestsRequest(input) return out, req.Send() @@ -2156,7 +2249,7 @@ const opCancelSpotInstanceRequests = "CancelSpotInstanceRequests" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotInstanceRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotInstanceRequests func (c *EC2) CancelSpotInstanceRequestsRequest(input *CancelSpotInstanceRequestsInput) (req *request.Request, output *CancelSpotInstanceRequestsOutput) { op := &request.Operation{ Name: opCancelSpotInstanceRequests, @@ -2175,14 +2268,13 @@ func (c *EC2) CancelSpotInstanceRequestsRequest(input *CancelSpotInstanceRequest // CancelSpotInstanceRequests API operation for Amazon Elastic Compute Cloud. // -// Cancels one or more Spot instance requests. Spot instances are instances -// that Amazon EC2 starts on your behalf when the bid price that you specify -// exceeds the current Spot price. Amazon EC2 periodically sets the Spot price -// based on available Spot instance capacity and current Spot instance requests. -// For more information, see Spot Instance Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) -// in the Amazon Elastic Compute Cloud User Guide. +// Cancels one or more Spot Instance requests. Spot Instances are instances +// that Amazon EC2 starts on your behalf when the maximum price that you specify +// exceeds the current Spot price. For more information, see Spot Instance Requests +// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) in +// the Amazon Elastic Compute Cloud User Guide. // -// Canceling a Spot instance request does not terminate running Spot instances +// Canceling a Spot Instance request does not terminate running Spot Instances // associated with the request. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -2191,7 +2283,7 @@ func (c *EC2) CancelSpotInstanceRequestsRequest(input *CancelSpotInstanceRequest // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CancelSpotInstanceRequests for usage and error information. 
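// Illustrative sketch (assumed package and helper names, placeholder request ID):
// cancelling a Spot Fleet request and choosing whether its Spot Instances are
// terminated, matching the cancelled_terminating / cancelled_running behaviour
// described for CancelSpotFleetRequests above.
package ec2examples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func cancelSpotFleet(svc *ec2.EC2, terminateInstances bool) error {
	_, err := svc.CancelSpotFleetRequests(&ec2.CancelSpotFleetRequestsInput{
		SpotFleetRequestIds: []*string{aws.String("sfr-12345678-1234-1234-1234-123456789012")}, // placeholder
		// true  -> the request enters cancelled_terminating and its instances are terminated
		// false -> the request enters cancelled_running and the instances keep running
		TerminateInstances: aws.Bool(terminateInstances),
	})
	return err
}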
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotInstanceRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotInstanceRequests func (c *EC2) CancelSpotInstanceRequests(input *CancelSpotInstanceRequestsInput) (*CancelSpotInstanceRequestsOutput, error) { req, out := c.CancelSpotInstanceRequestsRequest(input) return out, req.Send() @@ -2238,7 +2330,7 @@ const opConfirmProductInstance = "ConfirmProductInstance" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ConfirmProductInstance +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ConfirmProductInstance func (c *EC2) ConfirmProductInstanceRequest(input *ConfirmProductInstanceInput) (req *request.Request, output *ConfirmProductInstanceOutput) { op := &request.Operation{ Name: opConfirmProductInstance, @@ -2259,8 +2351,7 @@ func (c *EC2) ConfirmProductInstanceRequest(input *ConfirmProductInstanceInput) // // Determines whether a product code is associated with an instance. This action // can only be used by the owner of the product code. It is useful when a product -// code owner needs to verify whether another user's instance is eligible for -// support. +// code owner must verify whether another user's instance is eligible for support. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -2268,7 +2359,7 @@ func (c *EC2) ConfirmProductInstanceRequest(input *ConfirmProductInstanceInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ConfirmProductInstance for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ConfirmProductInstance +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ConfirmProductInstance func (c *EC2) ConfirmProductInstance(input *ConfirmProductInstanceInput) (*ConfirmProductInstanceOutput, error) { req, out := c.ConfirmProductInstanceRequest(input) return out, req.Send() @@ -2290,6 +2381,80 @@ func (c *EC2) ConfirmProductInstanceWithContext(ctx aws.Context, input *ConfirmP return out, req.Send() } +const opCopyFpgaImage = "CopyFpgaImage" + +// CopyFpgaImageRequest generates a "aws/request.Request" representing the +// client's request for the CopyFpgaImage operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CopyFpgaImage for more information on using the CopyFpgaImage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CopyFpgaImageRequest method. 
+// req, resp := client.CopyFpgaImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyFpgaImage +func (c *EC2) CopyFpgaImageRequest(input *CopyFpgaImageInput) (req *request.Request, output *CopyFpgaImageOutput) { + op := &request.Operation{ + Name: opCopyFpgaImage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CopyFpgaImageInput{} + } + + output = &CopyFpgaImageOutput{} + req = c.newRequest(op, input, output) + return +} + +// CopyFpgaImage API operation for Amazon Elastic Compute Cloud. +// +// Copies the specified Amazon FPGA Image (AFI) to the current region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CopyFpgaImage for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyFpgaImage +func (c *EC2) CopyFpgaImage(input *CopyFpgaImageInput) (*CopyFpgaImageOutput, error) { + req, out := c.CopyFpgaImageRequest(input) + return out, req.Send() +} + +// CopyFpgaImageWithContext is the same as CopyFpgaImage with the addition of +// the ability to pass a context and additional request options. +// +// See CopyFpgaImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CopyFpgaImageWithContext(ctx aws.Context, input *CopyFpgaImageInput, opts ...request.Option) (*CopyFpgaImageOutput, error) { + req, out := c.CopyFpgaImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCopyImage = "CopyImage" // CopyImageRequest generates a "aws/request.Request" representing the @@ -2315,7 +2480,7 @@ const opCopyImage = "CopyImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyImage func (c *EC2) CopyImageRequest(input *CopyImageInput) (req *request.Request, output *CopyImageOutput) { op := &request.Operation{ Name: opCopyImage, @@ -2348,7 +2513,7 @@ func (c *EC2) CopyImageRequest(input *CopyImageInput) (req *request.Request, out // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CopyImage for usage and error information. 
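// Illustrative sketch (assumed package and helper names, placeholder AFI ID and region):
// copying an Amazon FPGA Image into the client's current region with CopyFpgaImage.
package ec2examples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func copyFpgaImage(svc *ec2.EC2) (string, error) {
	out, err := svc.CopyFpgaImage(&ec2.CopyFpgaImageInput{
		SourceFpgaImageId: aws.String("afi-0123456789abcdef0"), // placeholder source AFI
		SourceRegion:      aws.String("us-east-1"),             // region that holds the source AFI
		Name:              aws.String("my-afi-copy"),
		Description:       aws.String("copy of afi-0123456789abcdef0"),
	})
	if err != nil {
		return "", err
	}
	return aws.StringValue(out.FpgaImageId), nil
}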
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyImage func (c *EC2) CopyImage(input *CopyImageInput) (*CopyImageOutput, error) { req, out := c.CopyImageRequest(input) return out, req.Send() @@ -2395,7 +2560,7 @@ const opCopySnapshot = "CopySnapshot" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopySnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopySnapshot func (c *EC2) CopySnapshotRequest(input *CopySnapshotInput) (req *request.Request, output *CopySnapshotOutput) { op := &request.Operation{ Name: opCopySnapshot, @@ -2441,7 +2606,7 @@ func (c *EC2) CopySnapshotRequest(input *CopySnapshotInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CopySnapshot for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopySnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopySnapshot func (c *EC2) CopySnapshot(input *CopySnapshotInput) (*CopySnapshotOutput, error) { req, out := c.CopySnapshotRequest(input) return out, req.Send() @@ -2488,7 +2653,7 @@ const opCreateCustomerGateway = "CreateCustomerGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCustomerGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCustomerGateway func (c *EC2) CreateCustomerGatewayRequest(input *CreateCustomerGatewayInput) (req *request.Request, output *CreateCustomerGatewayOutput) { op := &request.Operation{ Name: opCreateCustomerGateway, @@ -2523,9 +2688,9 @@ func (c *EC2) CreateCustomerGatewayRequest(input *CreateCustomerGatewayInput) (r // the exception of 7224, which is reserved in the us-east-1 region, and 9059, // which is reserved in the eu-west-1 region. // -// For more information about VPN customer gateways, see Adding a Hardware Virtual -// Private Gateway to Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) -// in the Amazon Virtual Private Cloud User Guide. +// For more information about VPN customer gateways, see AWS Managed VPN Connections +// (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) in the +// Amazon Virtual Private Cloud User Guide. // // You cannot create more than one customer gateway with the same VPN type, // IP address, and BGP ASN parameter values. If you run an identical request @@ -2539,7 +2704,7 @@ func (c *EC2) CreateCustomerGatewayRequest(input *CreateCustomerGatewayInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateCustomerGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCustomerGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCustomerGateway func (c *EC2) CreateCustomerGateway(input *CreateCustomerGatewayInput) (*CreateCustomerGatewayOutput, error) { req, out := c.CreateCustomerGatewayRequest(input) return out, req.Send() @@ -2561,6 +2726,84 @@ func (c *EC2) CreateCustomerGatewayWithContext(ctx aws.Context, input *CreateCus return out, req.Send() } +const opCreateDefaultSubnet = "CreateDefaultSubnet" + +// CreateDefaultSubnetRequest generates a "aws/request.Request" representing the +// client's request for the CreateDefaultSubnet operation. 
The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateDefaultSubnet for more information on using the CreateDefaultSubnet +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateDefaultSubnetRequest method. +// req, resp := client.CreateDefaultSubnetRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultSubnet +func (c *EC2) CreateDefaultSubnetRequest(input *CreateDefaultSubnetInput) (req *request.Request, output *CreateDefaultSubnetOutput) { + op := &request.Operation{ + Name: opCreateDefaultSubnet, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateDefaultSubnetInput{} + } + + output = &CreateDefaultSubnetOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateDefaultSubnet API operation for Amazon Elastic Compute Cloud. +// +// Creates a default subnet with a size /20 IPv4 CIDR block in the specified +// Availability Zone in your default VPC. You can have only one default subnet +// per Availability Zone. For more information, see Creating a Default Subnet +// (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html#create-default-subnet) +// in the Amazon Virtual Private Cloud User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CreateDefaultSubnet for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultSubnet +func (c *EC2) CreateDefaultSubnet(input *CreateDefaultSubnetInput) (*CreateDefaultSubnetOutput, error) { + req, out := c.CreateDefaultSubnetRequest(input) + return out, req.Send() +} + +// CreateDefaultSubnetWithContext is the same as CreateDefaultSubnet with the addition of +// the ability to pass a context and additional request options. +// +// See CreateDefaultSubnet for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CreateDefaultSubnetWithContext(ctx aws.Context, input *CreateDefaultSubnetInput, opts ...request.Option) (*CreateDefaultSubnetOutput, error) { + req, out := c.CreateDefaultSubnetRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opCreateDefaultVpc = "CreateDefaultVpc" // CreateDefaultVpcRequest generates a "aws/request.Request" representing the @@ -2586,7 +2829,7 @@ const opCreateDefaultVpc = "CreateDefaultVpc" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultVpc func (c *EC2) CreateDefaultVpcRequest(input *CreateDefaultVpcInput) (req *request.Request, output *CreateDefaultVpcOutput) { op := &request.Operation{ Name: opCreateDefaultVpc, @@ -2625,7 +2868,7 @@ func (c *EC2) CreateDefaultVpcRequest(input *CreateDefaultVpcInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateDefaultVpc for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultVpc func (c *EC2) CreateDefaultVpc(input *CreateDefaultVpcInput) (*CreateDefaultVpcOutput, error) { req, out := c.CreateDefaultVpcRequest(input) return out, req.Send() @@ -2672,7 +2915,7 @@ const opCreateDhcpOptions = "CreateDhcpOptions" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDhcpOptions func (c *EC2) CreateDhcpOptionsRequest(input *CreateDhcpOptionsInput) (req *request.Request, output *CreateDhcpOptionsOutput) { op := &request.Operation{ Name: opCreateDhcpOptions, @@ -2738,7 +2981,7 @@ func (c *EC2) CreateDhcpOptionsRequest(input *CreateDhcpOptionsInput) (req *requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateDhcpOptions for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDhcpOptions func (c *EC2) CreateDhcpOptions(input *CreateDhcpOptionsInput) (*CreateDhcpOptionsOutput, error) { req, out := c.CreateDhcpOptionsRequest(input) return out, req.Send() @@ -2785,7 +3028,7 @@ const opCreateEgressOnlyInternetGateway = "CreateEgressOnlyInternetGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateEgressOnlyInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateEgressOnlyInternetGateway func (c *EC2) CreateEgressOnlyInternetGatewayRequest(input *CreateEgressOnlyInternetGatewayInput) (req *request.Request, output *CreateEgressOnlyInternetGatewayOutput) { op := &request.Operation{ Name: opCreateEgressOnlyInternetGateway, @@ -2815,7 +3058,7 @@ func (c *EC2) CreateEgressOnlyInternetGatewayRequest(input *CreateEgressOnlyInte // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateEgressOnlyInternetGateway for usage and error information. 
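// Illustrative sketch (assumed package and helper names, placeholder Availability Zone):
// creating the default /20 subnet for one Availability Zone in the default VPC, as
// described for CreateDefaultSubnet above.
package ec2examples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createDefaultSubnet(svc *ec2.EC2) (string, error) {
	out, err := svc.CreateDefaultSubnet(&ec2.CreateDefaultSubnetInput{
		AvailabilityZone: aws.String("us-west-2a"), // placeholder AZ; only one default subnet per AZ
	})
	if err != nil {
		return "", err
	}
	return aws.StringValue(out.Subnet.SubnetId), nil
}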
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateEgressOnlyInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateEgressOnlyInternetGateway func (c *EC2) CreateEgressOnlyInternetGateway(input *CreateEgressOnlyInternetGatewayInput) (*CreateEgressOnlyInternetGatewayOutput, error) { req, out := c.CreateEgressOnlyInternetGatewayRequest(input) return out, req.Send() @@ -2862,7 +3105,7 @@ const opCreateFlowLogs = "CreateFlowLogs" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFlowLogs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFlowLogs func (c *EC2) CreateFlowLogsRequest(input *CreateFlowLogsInput) (req *request.Request, output *CreateFlowLogsOutput) { op := &request.Operation{ Name: opCreateFlowLogs, @@ -2898,7 +3141,7 @@ func (c *EC2) CreateFlowLogsRequest(input *CreateFlowLogsInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateFlowLogs for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFlowLogs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFlowLogs func (c *EC2) CreateFlowLogs(input *CreateFlowLogsInput) (*CreateFlowLogsOutput, error) { req, out := c.CreateFlowLogsRequest(input) return out, req.Send() @@ -2945,7 +3188,7 @@ const opCreateFpgaImage = "CreateFpgaImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFpgaImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFpgaImage func (c *EC2) CreateFpgaImageRequest(input *CreateFpgaImageInput) (req *request.Request, output *CreateFpgaImageOutput) { op := &request.Operation{ Name: opCreateFpgaImage, @@ -2979,7 +3222,7 @@ func (c *EC2) CreateFpgaImageRequest(input *CreateFpgaImageInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateFpgaImage for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFpgaImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFpgaImage func (c *EC2) CreateFpgaImage(input *CreateFpgaImageInput) (*CreateFpgaImageOutput, error) { req, out := c.CreateFpgaImageRequest(input) return out, req.Send() @@ -3026,7 +3269,7 @@ const opCreateImage = "CreateImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateImage func (c *EC2) CreateImageRequest(input *CreateImageInput) (req *request.Request, output *CreateImageOutput) { op := &request.Operation{ Name: opCreateImage, @@ -3062,7 +3305,7 @@ func (c *EC2) CreateImageRequest(input *CreateImageInput) (req *request.Request, // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateImage for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateImage func (c *EC2) CreateImage(input *CreateImageInput) (*CreateImageOutput, error) { req, out := c.CreateImageRequest(input) return out, req.Send() @@ -3109,7 +3352,7 @@ const opCreateInstanceExportTask = "CreateInstanceExportTask" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInstanceExportTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInstanceExportTask func (c *EC2) CreateInstanceExportTaskRequest(input *CreateInstanceExportTaskInput) (req *request.Request, output *CreateInstanceExportTaskOutput) { op := &request.Operation{ Name: opCreateInstanceExportTask, @@ -3141,7 +3384,7 @@ func (c *EC2) CreateInstanceExportTaskRequest(input *CreateInstanceExportTaskInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateInstanceExportTask for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInstanceExportTask +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInstanceExportTask func (c *EC2) CreateInstanceExportTask(input *CreateInstanceExportTaskInput) (*CreateInstanceExportTaskOutput, error) { req, out := c.CreateInstanceExportTaskRequest(input) return out, req.Send() @@ -3188,7 +3431,7 @@ const opCreateInternetGateway = "CreateInternetGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInternetGateway func (c *EC2) CreateInternetGatewayRequest(input *CreateInternetGatewayInput) (req *request.Request, output *CreateInternetGatewayOutput) { op := &request.Operation{ Name: opCreateInternetGateway, @@ -3219,7 +3462,7 @@ func (c *EC2) CreateInternetGatewayRequest(input *CreateInternetGatewayInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateInternetGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInternetGateway func (c *EC2) CreateInternetGateway(input *CreateInternetGatewayInput) (*CreateInternetGatewayOutput, error) { req, out := c.CreateInternetGatewayRequest(input) return out, req.Send() @@ -3266,7 +3509,7 @@ const opCreateKeyPair = "CreateKeyPair" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateKeyPair +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateKeyPair func (c *EC2) CreateKeyPairRequest(input *CreateKeyPairInput) (req *request.Request, output *CreateKeyPairOutput) { op := &request.Operation{ Name: opCreateKeyPair, @@ -3287,15 +3530,16 @@ func (c *EC2) CreateKeyPairRequest(input *CreateKeyPairInput) (req *request.Requ // // Creates a 2048-bit RSA key pair with the specified name. Amazon EC2 stores // the public key and displays the private key for you to save to a file. The -// private key is returned as an unencrypted PEM encoded PKCS#8 private key. +// private key is returned as an unencrypted PEM encoded PKCS#1 private key. // If a key with the specified name already exists, Amazon EC2 returns an error. 
// // You can have up to five thousand key pairs per region. // // The key pair returned to you is available only in the region in which you -// create it. To create a key pair that is available in all regions, use ImportKeyPair. +// create it. If you prefer, you can create your own key pair using a third-party +// tool and upload it to any region using ImportKeyPair. // -// For more information about key pairs, see Key Pairs (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) +// For more information, see Key Pairs (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3304,7 +3548,7 @@ func (c *EC2) CreateKeyPairRequest(input *CreateKeyPairInput) (req *request.Requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateKeyPair for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateKeyPair +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateKeyPair func (c *EC2) CreateKeyPair(input *CreateKeyPairInput) (*CreateKeyPairOutput, error) { req, out := c.CreateKeyPairRequest(input) return out, req.Send() @@ -3326,6 +3570,160 @@ func (c *EC2) CreateKeyPairWithContext(ctx aws.Context, input *CreateKeyPairInpu return out, req.Send() } +const opCreateLaunchTemplate = "CreateLaunchTemplate" + +// CreateLaunchTemplateRequest generates a "aws/request.Request" representing the +// client's request for the CreateLaunchTemplate operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateLaunchTemplate for more information on using the CreateLaunchTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateLaunchTemplateRequest method. +// req, resp := client.CreateLaunchTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateLaunchTemplate +func (c *EC2) CreateLaunchTemplateRequest(input *CreateLaunchTemplateInput) (req *request.Request, output *CreateLaunchTemplateOutput) { + op := &request.Operation{ + Name: opCreateLaunchTemplate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateLaunchTemplateInput{} + } + + output = &CreateLaunchTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateLaunchTemplate API operation for Amazon Elastic Compute Cloud. +// +// Creates a launch template. A launch template contains the parameters to launch +// an instance. When you launch an instance using RunInstances, you can specify +// a launch template instead of providing the launch parameters in the request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
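// Illustrative sketch (assumed package and helper names, placeholder key name and path):
// creating a key pair and saving the returned unencrypted PEM-encoded private key,
// which is only available at creation time, as described for CreateKeyPair above.
package ec2examples

import (
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createAndSaveKeyPair(svc *ec2.EC2) error {
	out, err := svc.CreateKeyPair(&ec2.CreateKeyPairInput{
		KeyName: aws.String("packer-example-key"), // placeholder key name
	})
	if err != nil {
		return err
	}
	// The private key material is returned only once; store it with restrictive permissions.
	return ioutil.WriteFile("packer-example-key.pem", []byte(aws.StringValue(out.KeyMaterial)), 0600)
}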
+// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CreateLaunchTemplate for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateLaunchTemplate +func (c *EC2) CreateLaunchTemplate(input *CreateLaunchTemplateInput) (*CreateLaunchTemplateOutput, error) { + req, out := c.CreateLaunchTemplateRequest(input) + return out, req.Send() +} + +// CreateLaunchTemplateWithContext is the same as CreateLaunchTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLaunchTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CreateLaunchTemplateWithContext(ctx aws.Context, input *CreateLaunchTemplateInput, opts ...request.Option) (*CreateLaunchTemplateOutput, error) { + req, out := c.CreateLaunchTemplateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateLaunchTemplateVersion = "CreateLaunchTemplateVersion" + +// CreateLaunchTemplateVersionRequest generates a "aws/request.Request" representing the +// client's request for the CreateLaunchTemplateVersion operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateLaunchTemplateVersion for more information on using the CreateLaunchTemplateVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateLaunchTemplateVersionRequest method. +// req, resp := client.CreateLaunchTemplateVersionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateLaunchTemplateVersion +func (c *EC2) CreateLaunchTemplateVersionRequest(input *CreateLaunchTemplateVersionInput) (req *request.Request, output *CreateLaunchTemplateVersionOutput) { + op := &request.Operation{ + Name: opCreateLaunchTemplateVersion, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateLaunchTemplateVersionInput{} + } + + output = &CreateLaunchTemplateVersionOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateLaunchTemplateVersion API operation for Amazon Elastic Compute Cloud. +// +// Creates a new version for a launch template. You can specify an existing +// version of launch template from which to base the new version. +// +// Launch template versions are numbered in the order in which they are created. +// You cannot specify, change, or replace the numbering of launch template versions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CreateLaunchTemplateVersion for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateLaunchTemplateVersion +func (c *EC2) CreateLaunchTemplateVersion(input *CreateLaunchTemplateVersionInput) (*CreateLaunchTemplateVersionOutput, error) { + req, out := c.CreateLaunchTemplateVersionRequest(input) + return out, req.Send() +} + +// CreateLaunchTemplateVersionWithContext is the same as CreateLaunchTemplateVersion with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLaunchTemplateVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CreateLaunchTemplateVersionWithContext(ctx aws.Context, input *CreateLaunchTemplateVersionInput, opts ...request.Option) (*CreateLaunchTemplateVersionOutput, error) { + req, out := c.CreateLaunchTemplateVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateNatGateway = "CreateNatGateway" // CreateNatGatewayRequest generates a "aws/request.Request" representing the @@ -3351,7 +3749,7 @@ const opCreateNatGateway = "CreateNatGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNatGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNatGateway func (c *EC2) CreateNatGatewayRequest(input *CreateNatGatewayInput) (req *request.Request, output *CreateNatGatewayOutput) { op := &request.Operation{ Name: opCreateNatGateway, @@ -3383,7 +3781,7 @@ func (c *EC2) CreateNatGatewayRequest(input *CreateNatGatewayInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateNatGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNatGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNatGateway func (c *EC2) CreateNatGateway(input *CreateNatGatewayInput) (*CreateNatGatewayOutput, error) { req, out := c.CreateNatGatewayRequest(input) return out, req.Send() @@ -3430,7 +3828,7 @@ const opCreateNetworkAcl = "CreateNetworkAcl" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAcl func (c *EC2) CreateNetworkAclRequest(input *CreateNetworkAclInput) (req *request.Request, output *CreateNetworkAclOutput) { op := &request.Operation{ Name: opCreateNetworkAcl, @@ -3461,7 +3859,7 @@ func (c *EC2) CreateNetworkAclRequest(input *CreateNetworkAclInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateNetworkAcl for usage and error information. 
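// Illustrative sketch (assumed package and helper names, placeholder AMI ID and template
// name): creating a launch template and then a second, automatically numbered version
// based on the first, as described for CreateLaunchTemplate and CreateLaunchTemplateVersion
// above.
package ec2examples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createTemplateWithNewVersion(svc *ec2.EC2) error {
	_, err := svc.CreateLaunchTemplate(&ec2.CreateLaunchTemplateInput{
		LaunchTemplateName: aws.String("web-server"), // placeholder template name
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder AMI
			InstanceType: aws.String("t2.micro"),
		},
	})
	if err != nil {
		return err
	}
	// Versions are numbered in creation order; base this one on version 1 and
	// override only the instance type.
	_, err = svc.CreateLaunchTemplateVersion(&ec2.CreateLaunchTemplateVersionInput{
		LaunchTemplateName: aws.String("web-server"),
		SourceVersion:      aws.String("1"),
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			InstanceType: aws.String("t2.small"),
		},
	})
	return err
}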
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAcl func (c *EC2) CreateNetworkAcl(input *CreateNetworkAclInput) (*CreateNetworkAclOutput, error) { req, out := c.CreateNetworkAclRequest(input) return out, req.Send() @@ -3508,7 +3906,7 @@ const opCreateNetworkAclEntry = "CreateNetworkAclEntry" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclEntry +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclEntry func (c *EC2) CreateNetworkAclEntryRequest(input *CreateNetworkAclEntryInput) (req *request.Request, output *CreateNetworkAclEntryOutput) { op := &request.Operation{ Name: opCreateNetworkAclEntry, @@ -3553,7 +3951,7 @@ func (c *EC2) CreateNetworkAclEntryRequest(input *CreateNetworkAclEntryInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateNetworkAclEntry for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclEntry +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclEntry func (c *EC2) CreateNetworkAclEntry(input *CreateNetworkAclEntryInput) (*CreateNetworkAclEntryOutput, error) { req, out := c.CreateNetworkAclEntryRequest(input) return out, req.Send() @@ -3600,7 +3998,7 @@ const opCreateNetworkInterface = "CreateNetworkInterface" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterface func (c *EC2) CreateNetworkInterfaceRequest(input *CreateNetworkInterfaceInput) (req *request.Request, output *CreateNetworkInterfaceOutput) { op := &request.Operation{ Name: opCreateNetworkInterface, @@ -3631,7 +4029,7 @@ func (c *EC2) CreateNetworkInterfaceRequest(input *CreateNetworkInterfaceInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateNetworkInterface for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterface func (c *EC2) CreateNetworkInterface(input *CreateNetworkInterfaceInput) (*CreateNetworkInterfaceOutput, error) { req, out := c.CreateNetworkInterfaceRequest(input) return out, req.Send() @@ -3678,7 +4076,7 @@ const opCreateNetworkInterfacePermission = "CreateNetworkInterfacePermission" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfacePermission +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfacePermission func (c *EC2) CreateNetworkInterfacePermissionRequest(input *CreateNetworkInterfacePermissionInput) (req *request.Request, output *CreateNetworkInterfacePermissionOutput) { op := &request.Operation{ Name: opCreateNetworkInterfacePermission, @@ -3709,7 +4107,7 @@ func (c *EC2) CreateNetworkInterfacePermissionRequest(input *CreateNetworkInterf // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateNetworkInterfacePermission for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfacePermission +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfacePermission func (c *EC2) CreateNetworkInterfacePermission(input *CreateNetworkInterfacePermissionInput) (*CreateNetworkInterfacePermissionOutput, error) { req, out := c.CreateNetworkInterfacePermissionRequest(input) return out, req.Send() @@ -3756,7 +4154,7 @@ const opCreatePlacementGroup = "CreatePlacementGroup" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreatePlacementGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreatePlacementGroup func (c *EC2) CreatePlacementGroupRequest(input *CreatePlacementGroupInput) (req *request.Request, output *CreatePlacementGroupOutput) { op := &request.Operation{ Name: opCreatePlacementGroup, @@ -3777,11 +4175,14 @@ func (c *EC2) CreatePlacementGroupRequest(input *CreatePlacementGroupInput) (req // CreatePlacementGroup API operation for Amazon Elastic Compute Cloud. // -// Creates a placement group that you launch cluster instances into. You must -// give the group a name that's unique within the scope of your account. +// Creates a placement group in which to launch instances. The strategy of the +// placement group determines how the instances are organized within the group. // -// For more information about placement groups and cluster instances, see Cluster -// Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using_cluster_computing.html) +// A cluster placement group is a logical grouping of instances within a single +// Availability Zone that benefit from low network latency, high network throughput. +// A spread placement group places instances on distinct hardware. +// +// For more information, see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -3790,7 +4191,7 @@ func (c *EC2) CreatePlacementGroupRequest(input *CreatePlacementGroupInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreatePlacementGroup for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreatePlacementGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreatePlacementGroup func (c *EC2) CreatePlacementGroup(input *CreatePlacementGroupInput) (*CreatePlacementGroupOutput, error) { req, out := c.CreatePlacementGroupRequest(input) return out, req.Send() @@ -3837,7 +4238,7 @@ const opCreateReservedInstancesListing = "CreateReservedInstancesListing" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateReservedInstancesListing +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateReservedInstancesListing func (c *EC2) CreateReservedInstancesListingRequest(input *CreateReservedInstancesListingInput) (req *request.Request, output *CreateReservedInstancesListingOutput) { op := &request.Operation{ Name: opCreateReservedInstancesListing, @@ -3887,7 +4288,7 @@ func (c *EC2) CreateReservedInstancesListingRequest(input *CreateReservedInstanc // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateReservedInstancesListing for usage and error information. 
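// Illustrative sketch (assumed package and helper names, placeholder group name):
// creating a placement group with either the "cluster" or the "spread" strategy
// described for CreatePlacementGroup above.
package ec2examples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createPlacementGroup(svc *ec2.EC2, spread bool) error {
	strategy := "cluster" // low-latency, high-throughput grouping within one Availability Zone
	if spread {
		strategy = "spread" // place instances on distinct underlying hardware
	}
	_, err := svc.CreatePlacementGroup(&ec2.CreatePlacementGroupInput{
		GroupName: aws.String("packer-example-group"), // placeholder group name
		Strategy:  aws.String(strategy),
	})
	return err
}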
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateReservedInstancesListing +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateReservedInstancesListing func (c *EC2) CreateReservedInstancesListing(input *CreateReservedInstancesListingInput) (*CreateReservedInstancesListingOutput, error) { req, out := c.CreateReservedInstancesListingRequest(input) return out, req.Send() @@ -3934,7 +4335,7 @@ const opCreateRoute = "CreateRoute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRoute func (c *EC2) CreateRouteRequest(input *CreateRouteInput) (req *request.Request, output *CreateRouteOutput) { op := &request.Operation{ Name: opCreateRoute, @@ -3980,7 +4381,7 @@ func (c *EC2) CreateRouteRequest(input *CreateRouteInput) (req *request.Request, // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateRoute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRoute func (c *EC2) CreateRoute(input *CreateRouteInput) (*CreateRouteOutput, error) { req, out := c.CreateRouteRequest(input) return out, req.Send() @@ -4027,7 +4428,7 @@ const opCreateRouteTable = "CreateRouteTable" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteTable func (c *EC2) CreateRouteTableRequest(input *CreateRouteTableInput) (req *request.Request, output *CreateRouteTableOutput) { op := &request.Operation{ Name: opCreateRouteTable, @@ -4058,7 +4459,7 @@ func (c *EC2) CreateRouteTableRequest(input *CreateRouteTableInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateRouteTable for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteTable func (c *EC2) CreateRouteTable(input *CreateRouteTableInput) (*CreateRouteTableOutput, error) { req, out := c.CreateRouteTableRequest(input) return out, req.Send() @@ -4105,7 +4506,7 @@ const opCreateSecurityGroup = "CreateSecurityGroup" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSecurityGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSecurityGroup func (c *EC2) CreateSecurityGroupRequest(input *CreateSecurityGroupInput) (req *request.Request, output *CreateSecurityGroupOutput) { op := &request.Operation{ Name: opCreateSecurityGroup, @@ -4158,7 +4559,7 @@ func (c *EC2) CreateSecurityGroupRequest(input *CreateSecurityGroupInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateSecurityGroup for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSecurityGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSecurityGroup func (c *EC2) CreateSecurityGroup(input *CreateSecurityGroupInput) (*CreateSecurityGroupOutput, error) { req, out := c.CreateSecurityGroupRequest(input) return out, req.Send() @@ -4205,7 +4606,7 @@ const opCreateSnapshot = "CreateSnapshot" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSnapshot func (c *EC2) CreateSnapshotRequest(input *CreateSnapshotInput) (req *request.Request, output *Snapshot) { op := &request.Operation{ Name: opCreateSnapshot, @@ -4259,7 +4660,7 @@ func (c *EC2) CreateSnapshotRequest(input *CreateSnapshotInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateSnapshot for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSnapshot func (c *EC2) CreateSnapshot(input *CreateSnapshotInput) (*Snapshot, error) { req, out := c.CreateSnapshotRequest(input) return out, req.Send() @@ -4306,7 +4707,7 @@ const opCreateSpotDatafeedSubscription = "CreateSpotDatafeedSubscription" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSpotDatafeedSubscription +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSpotDatafeedSubscription func (c *EC2) CreateSpotDatafeedSubscriptionRequest(input *CreateSpotDatafeedSubscriptionInput) (req *request.Request, output *CreateSpotDatafeedSubscriptionOutput) { op := &request.Operation{ Name: opCreateSpotDatafeedSubscription, @@ -4325,7 +4726,7 @@ func (c *EC2) CreateSpotDatafeedSubscriptionRequest(input *CreateSpotDatafeedSub // CreateSpotDatafeedSubscription API operation for Amazon Elastic Compute Cloud. // -// Creates a data feed for Spot instances, enabling you to view Spot instance +// Creates a data feed for Spot Instances, enabling you to view Spot Instance // usage logs. You can create one data feed per AWS account. For more information, // see Spot Instance Data Feed (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-data-feeds.html) // in the Amazon Elastic Compute Cloud User Guide. @@ -4336,7 +4737,7 @@ func (c *EC2) CreateSpotDatafeedSubscriptionRequest(input *CreateSpotDatafeedSub // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateSpotDatafeedSubscription for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSpotDatafeedSubscription +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSpotDatafeedSubscription func (c *EC2) CreateSpotDatafeedSubscription(input *CreateSpotDatafeedSubscriptionInput) (*CreateSpotDatafeedSubscriptionOutput, error) { req, out := c.CreateSpotDatafeedSubscriptionRequest(input) return out, req.Send() @@ -4383,7 +4784,7 @@ const opCreateSubnet = "CreateSubnet" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSubnet +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSubnet func (c *EC2) CreateSubnetRequest(input *CreateSubnetInput) (req *request.Request, output *CreateSubnetOutput) { op := &request.Operation{ Name: opCreateSubnet, @@ -4404,14 +4805,13 @@ func (c *EC2) CreateSubnetRequest(input *CreateSubnetInput) (req *request.Reques // // Creates a subnet in an existing VPC. // -// When you create each subnet, you provide the VPC ID and the CIDR block you -// want for the subnet. After you create a subnet, you can't change its CIDR -// block. The subnet's IPv4 CIDR block can be the same as the VPC's IPv4 CIDR -// block (assuming you want only a single subnet in the VPC), or a subset of -// the VPC's IPv4 CIDR block. If you create more than one subnet in a VPC, the -// subnets' CIDR blocks must not overlap. The smallest IPv4 subnet (and VPC) -// you can create uses a /28 netmask (16 IPv4 addresses), and the largest uses -// a /16 netmask (65,536 IPv4 addresses). +// When you create each subnet, you provide the VPC ID and the IPv4 CIDR block +// you want for the subnet. After you create a subnet, you can't change its +// CIDR block. The size of the subnet's IPv4 CIDR block can be the same as a +// VPC's IPv4 CIDR block, or a subset of a VPC's IPv4 CIDR block. If you create +// more than one subnet in a VPC, the subnets' CIDR blocks must not overlap. +// The smallest IPv4 subnet (and VPC) you can create uses a /28 netmask (16 +// IPv4 addresses), and the largest uses a /16 netmask (65,536 IPv4 addresses). // // If you've associated an IPv6 CIDR block with your VPC, you can create a subnet // with an IPv6 CIDR block that uses a /64 prefix length. @@ -4437,7 +4837,7 @@ func (c *EC2) CreateSubnetRequest(input *CreateSubnetInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateSubnet for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSubnet +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSubnet func (c *EC2) CreateSubnet(input *CreateSubnetInput) (*CreateSubnetOutput, error) { req, out := c.CreateSubnetRequest(input) return out, req.Send() @@ -4484,7 +4884,7 @@ const opCreateTags = "CreateTags" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateTags func (c *EC2) CreateTagsRequest(input *CreateTagsInput) (req *request.Request, output *CreateTagsOutput) { op := &request.Operation{ Name: opCreateTags, @@ -4521,7 +4921,7 @@ func (c *EC2) CreateTagsRequest(input *CreateTagsInput) (req *request.Request, o // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateTags for usage and error information. 
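// Illustrative sketch (assumed package and helper names, placeholder VPC ID and CIDR):
// creating a subnet whose IPv4 CIDR block is a non-overlapping subset of the VPC's
// CIDR block, per the sizing rules (/28 smallest, /16 largest) described for
// CreateSubnet above.
package ec2examples

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func createSubnet(svc *ec2.EC2) (string, error) {
	out, err := svc.CreateSubnet(&ec2.CreateSubnetInput{
		VpcId:            aws.String("vpc-12345678"), // placeholder VPC with a 10.0.0.0/16 block
		CidrBlock:        aws.String("10.0.1.0/24"),  // subset of the VPC CIDR, no overlap with other subnets
		AvailabilityZone: aws.String("us-west-2a"),   // placeholder AZ
	})
	if err != nil {
		return "", err
	}
	return aws.StringValue(out.Subnet.SubnetId), nil
}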
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateTags func (c *EC2) CreateTags(input *CreateTagsInput) (*CreateTagsOutput, error) { req, out := c.CreateTagsRequest(input) return out, req.Send() @@ -4568,7 +4968,7 @@ const opCreateVolume = "CreateVolume" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVolume func (c *EC2) CreateVolumeRequest(input *CreateVolumeInput) (req *request.Request, output *Volume) { op := &request.Operation{ Name: opCreateVolume, @@ -4613,7 +5013,7 @@ func (c *EC2) CreateVolumeRequest(input *CreateVolumeInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateVolume for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVolume func (c *EC2) CreateVolume(input *CreateVolumeInput) (*Volume, error) { req, out := c.CreateVolumeRequest(input) return out, req.Send() @@ -4660,7 +5060,7 @@ const opCreateVpc = "CreateVpc" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpc func (c *EC2) CreateVpcRequest(input *CreateVpcInput) (req *request.Request, output *CreateVpcOutput) { op := &request.Operation{ Name: opCreateVpc, @@ -4705,7 +5105,7 @@ func (c *EC2) CreateVpcRequest(input *CreateVpcInput) (req *request.Request, out // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateVpc for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpc func (c *EC2) CreateVpc(input *CreateVpcInput) (*CreateVpcOutput, error) { req, out := c.CreateVpcRequest(input) return out, req.Send() @@ -4752,7 +5152,7 @@ const opCreateVpcEndpoint = "CreateVpcEndpoint" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpoint +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpoint func (c *EC2) CreateVpcEndpointRequest(input *CreateVpcEndpointInput) (req *request.Request, output *CreateVpcEndpointOutput) { op := &request.Operation{ Name: opCreateVpcEndpoint, @@ -4771,13 +5171,23 @@ func (c *EC2) CreateVpcEndpointRequest(input *CreateVpcEndpointInput) (req *requ // CreateVpcEndpoint API operation for Amazon Elastic Compute Cloud. // -// Creates a VPC endpoint for a specified AWS service. An endpoint enables you -// to create a private connection between your VPC and another AWS service in -// your account. You can specify an endpoint policy to attach to the endpoint -// that will control access to the service from your VPC. You can also specify -// the VPC route tables that use the endpoint. +// Creates a VPC endpoint for a specified service. An endpoint enables you to +// create a private connection between your VPC and the service. The service +// may be provided by AWS, an AWS Marketplace partner, or another AWS account. 
+// For more information, see VPC Endpoints (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html) +// in the Amazon Virtual Private Cloud User Guide. // -// Use DescribeVpcEndpointServices to get a list of supported AWS services. +// A gateway endpoint serves as a target for a route in your route table for +// traffic destined for the AWS service. You can specify an endpoint policy +// to attach to the endpoint that will control access to the service from your +// VPC. You can also specify the VPC route tables that use the endpoint. +// +// An interface endpoint is a network interface in your subnet that serves as +// an endpoint for communicating with the specified service. You can specify +// the subnets in which to create an endpoint, and the security groups to associate +// with the endpoint network interface. +// +// Use DescribeVpcEndpointServices to get a list of supported services. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4785,7 +5195,7 @@ func (c *EC2) CreateVpcEndpointRequest(input *CreateVpcEndpointInput) (req *requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateVpcEndpoint for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpoint +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpoint func (c *EC2) CreateVpcEndpoint(input *CreateVpcEndpointInput) (*CreateVpcEndpointOutput, error) { req, out := c.CreateVpcEndpointRequest(input) return out, req.Send() @@ -4807,6 +5217,167 @@ func (c *EC2) CreateVpcEndpointWithContext(ctx aws.Context, input *CreateVpcEndp return out, req.Send() } +const opCreateVpcEndpointConnectionNotification = "CreateVpcEndpointConnectionNotification" + +// CreateVpcEndpointConnectionNotificationRequest generates a "aws/request.Request" representing the +// client's request for the CreateVpcEndpointConnectionNotification operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateVpcEndpointConnectionNotification for more information on using the CreateVpcEndpointConnectionNotification +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateVpcEndpointConnectionNotificationRequest method. 
+// req, resp := client.CreateVpcEndpointConnectionNotificationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpointConnectionNotification +func (c *EC2) CreateVpcEndpointConnectionNotificationRequest(input *CreateVpcEndpointConnectionNotificationInput) (req *request.Request, output *CreateVpcEndpointConnectionNotificationOutput) { + op := &request.Operation{ + Name: opCreateVpcEndpointConnectionNotification, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateVpcEndpointConnectionNotificationInput{} + } + + output = &CreateVpcEndpointConnectionNotificationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateVpcEndpointConnectionNotification API operation for Amazon Elastic Compute Cloud. +// +// Creates a connection notification for a specified VPC endpoint or VPC endpoint +// service. A connection notification notifies you of specific endpoint events. +// You must create an SNS topic to receive notifications. For more information, +// see Create a Topic (http://docs.aws.amazon.com/sns/latest/dg/CreateTopic.html) +// in the Amazon Simple Notification Service Developer Guide. +// +// You can create a connection notification for interface endpoints only. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CreateVpcEndpointConnectionNotification for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpointConnectionNotification +func (c *EC2) CreateVpcEndpointConnectionNotification(input *CreateVpcEndpointConnectionNotificationInput) (*CreateVpcEndpointConnectionNotificationOutput, error) { + req, out := c.CreateVpcEndpointConnectionNotificationRequest(input) + return out, req.Send() +} + +// CreateVpcEndpointConnectionNotificationWithContext is the same as CreateVpcEndpointConnectionNotification with the addition of +// the ability to pass a context and additional request options. +// +// See CreateVpcEndpointConnectionNotification for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CreateVpcEndpointConnectionNotificationWithContext(ctx aws.Context, input *CreateVpcEndpointConnectionNotificationInput, opts ...request.Option) (*CreateVpcEndpointConnectionNotificationOutput, error) { + req, out := c.CreateVpcEndpointConnectionNotificationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateVpcEndpointServiceConfiguration = "CreateVpcEndpointServiceConfiguration" + +// CreateVpcEndpointServiceConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the CreateVpcEndpointServiceConfiguration operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See CreateVpcEndpointServiceConfiguration for more information on using the CreateVpcEndpointServiceConfiguration +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateVpcEndpointServiceConfigurationRequest method. +// req, resp := client.CreateVpcEndpointServiceConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpointServiceConfiguration +func (c *EC2) CreateVpcEndpointServiceConfigurationRequest(input *CreateVpcEndpointServiceConfigurationInput) (req *request.Request, output *CreateVpcEndpointServiceConfigurationOutput) { + op := &request.Operation{ + Name: opCreateVpcEndpointServiceConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateVpcEndpointServiceConfigurationInput{} + } + + output = &CreateVpcEndpointServiceConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateVpcEndpointServiceConfiguration API operation for Amazon Elastic Compute Cloud. +// +// Creates a VPC endpoint service configuration to which service consumers (AWS +// accounts, IAM users, and IAM roles) can connect. Service consumers can create +// an interface VPC endpoint to connect to your service. +// +// To create an endpoint service configuration, you must first create a Network +// Load Balancer for your service. For more information, see VPC Endpoint Services +// (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/endpoint-service.html) +// in the Amazon Virtual Private Cloud User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation CreateVpcEndpointServiceConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpointServiceConfiguration +func (c *EC2) CreateVpcEndpointServiceConfiguration(input *CreateVpcEndpointServiceConfigurationInput) (*CreateVpcEndpointServiceConfigurationOutput, error) { + req, out := c.CreateVpcEndpointServiceConfigurationRequest(input) + return out, req.Send() +} + +// CreateVpcEndpointServiceConfigurationWithContext is the same as CreateVpcEndpointServiceConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See CreateVpcEndpointServiceConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) CreateVpcEndpointServiceConfigurationWithContext(ctx aws.Context, input *CreateVpcEndpointServiceConfigurationInput, opts ...request.Option) (*CreateVpcEndpointServiceConfigurationOutput, error) { + req, out := c.CreateVpcEndpointServiceConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opCreateVpcPeeringConnection = "CreateVpcPeeringConnection" // CreateVpcPeeringConnectionRequest generates a "aws/request.Request" representing the @@ -4832,7 +5403,7 @@ const opCreateVpcPeeringConnection = "CreateVpcPeeringConnection" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcPeeringConnection func (c *EC2) CreateVpcPeeringConnectionRequest(input *CreateVpcPeeringConnectionInput) (req *request.Request, output *CreateVpcPeeringConnectionOutput) { op := &request.Operation{ Name: opCreateVpcPeeringConnection, @@ -4852,16 +5423,21 @@ func (c *EC2) CreateVpcPeeringConnectionRequest(input *CreateVpcPeeringConnectio // CreateVpcPeeringConnection API operation for Amazon Elastic Compute Cloud. // // Requests a VPC peering connection between two VPCs: a requester VPC that -// you own and a peer VPC with which to create the connection. The peer VPC -// can belong to another AWS account. The requester VPC and peer VPC cannot -// have overlapping CIDR blocks. +// you own and an accepter VPC with which to create the connection. The accepter +// VPC can belong to another AWS account and can be in a different region to +// the requester VPC. The requester VPC and accepter VPC cannot have overlapping +// CIDR blocks. // -// The owner of the peer VPC must accept the peering request to activate the -// peering connection. The VPC peering connection request expires after 7 days, -// after which it cannot be accepted or rejected. +// Limitations and rules apply to a VPC peering connection. For more information, +// see the limitations (http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-basics.html#vpc-peering-limitations) +// section in the VPC Peering Guide. // -// If you try to create a VPC peering connection between VPCs that have overlapping -// CIDR blocks, the VPC peering connection status goes to failed. +// The owner of the accepter VPC must accept the peering request to activate +// the peering connection. The VPC peering connection request expires after +// 7 days, after which it cannot be accepted or rejected. +// +// If you create a VPC peering connection request between VPCs with overlapping +// CIDR blocks, the VPC peering connection has a status of failed. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -4869,7 +5445,7 @@ func (c *EC2) CreateVpcPeeringConnectionRequest(input *CreateVpcPeeringConnectio // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateVpcPeeringConnection for usage and error information. 
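The revised CreateVpcPeeringConnection documentation above distinguishes a requester VPC you own from an accepter VPC that may belong to another account. A minimal sketch of issuing that request follows; every ID below is a placeholder.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Request peering between a VPC you own and an accepter VPC that may live
	// in another account. All IDs below are placeholders.
	out, err := svc.CreateVpcPeeringConnection(&ec2.CreateVpcPeeringConnectionInput{
		VpcId:       aws.String("vpc-0requester"), // requester VPC (yours)
		PeerVpcId:   aws.String("vpc-0accepter"),  // accepter VPC
		PeerOwnerId: aws.String("123456789012"),   // accepter account ID, if different
	})
	if err != nil {
		log.Fatal(err)
	}
	// The accepter side still has to accept the request before the connection
	// becomes active; the request expires after 7 days, per the docs above.
	fmt.Println(aws.StringValue(out.VpcPeeringConnection.VpcPeeringConnectionId))
}
```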
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcPeeringConnection func (c *EC2) CreateVpcPeeringConnection(input *CreateVpcPeeringConnectionInput) (*CreateVpcPeeringConnectionOutput, error) { req, out := c.CreateVpcPeeringConnectionRequest(input) return out, req.Send() @@ -4916,7 +5492,7 @@ const opCreateVpnConnection = "CreateVpnConnection" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnection func (c *EC2) CreateVpnConnectionRequest(input *CreateVpnConnectionInput) (req *request.Request, output *CreateVpnConnectionOutput) { op := &request.Operation{ Name: opCreateVpnConnection, @@ -4952,8 +5528,7 @@ func (c *EC2) CreateVpnConnectionRequest(input *CreateVpnConnectionInput) (req * // This is an idempotent operation. If you perform the operation more than once, // Amazon EC2 doesn't return an error. // -// For more information about VPN connections, see Adding a Hardware Virtual -// Private Gateway to Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) +// For more information, see AWS Managed VPN Connections (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -4962,7 +5537,7 @@ func (c *EC2) CreateVpnConnectionRequest(input *CreateVpnConnectionInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateVpnConnection for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnection func (c *EC2) CreateVpnConnection(input *CreateVpnConnectionInput) (*CreateVpnConnectionOutput, error) { req, out := c.CreateVpnConnectionRequest(input) return out, req.Send() @@ -5009,7 +5584,7 @@ const opCreateVpnConnectionRoute = "CreateVpnConnectionRoute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionRoute func (c *EC2) CreateVpnConnectionRouteRequest(input *CreateVpnConnectionRouteInput) (req *request.Request, output *CreateVpnConnectionRouteOutput) { op := &request.Operation{ Name: opCreateVpnConnectionRoute, @@ -5035,9 +5610,9 @@ func (c *EC2) CreateVpnConnectionRouteRequest(input *CreateVpnConnectionRouteInp // traffic to be routed from the virtual private gateway to the VPN customer // gateway. // -// For more information about VPN connections, see Adding a Hardware Virtual -// Private Gateway to Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) -// in the Amazon Virtual Private Cloud User Guide. +// For more information about VPN connections, see AWS Managed VPN Connections +// (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) in the +// Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -5045,7 +5620,7 @@ func (c *EC2) CreateVpnConnectionRouteRequest(input *CreateVpnConnectionRouteInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateVpnConnectionRoute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionRoute func (c *EC2) CreateVpnConnectionRoute(input *CreateVpnConnectionRouteInput) (*CreateVpnConnectionRouteOutput, error) { req, out := c.CreateVpnConnectionRouteRequest(input) return out, req.Send() @@ -5092,7 +5667,7 @@ const opCreateVpnGateway = "CreateVpnGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnGateway func (c *EC2) CreateVpnGatewayRequest(input *CreateVpnGatewayInput) (req *request.Request, output *CreateVpnGatewayOutput) { op := &request.Operation{ Name: opCreateVpnGateway, @@ -5115,8 +5690,8 @@ func (c *EC2) CreateVpnGatewayRequest(input *CreateVpnGatewayInput) (req *reques // on the VPC side of your VPN connection. You can create a virtual private // gateway before creating the VPC itself. // -// For more information about virtual private gateways, see Adding a Hardware -// Virtual Private Gateway to Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) +// For more information about virtual private gateways, see AWS Managed VPN +// Connections (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -5125,7 +5700,7 @@ func (c *EC2) CreateVpnGatewayRequest(input *CreateVpnGatewayInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation CreateVpnGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnGateway func (c *EC2) CreateVpnGateway(input *CreateVpnGatewayInput) (*CreateVpnGatewayOutput, error) { req, out := c.CreateVpnGatewayRequest(input) return out, req.Send() @@ -5172,7 +5747,7 @@ const opDeleteCustomerGateway = "DeleteCustomerGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteCustomerGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteCustomerGateway func (c *EC2) DeleteCustomerGatewayRequest(input *DeleteCustomerGatewayInput) (req *request.Request, output *DeleteCustomerGatewayOutput) { op := &request.Operation{ Name: opDeleteCustomerGateway, @@ -5202,7 +5777,7 @@ func (c *EC2) DeleteCustomerGatewayRequest(input *DeleteCustomerGatewayInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteCustomerGateway for usage and error information. 
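Every operation documented here notes that service and SDK errors can be inspected as awserr.Error via a runtime type assertion on Code and Message. A short sketch of that pattern, using DeleteCustomerGateway as the call; the gateway ID is a placeholder.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	_, err := svc.DeleteCustomerGateway(&ec2.DeleteCustomerGatewayInput{
		CustomerGatewayId: aws.String("cgw-0example"), // placeholder ID
	})
	if err != nil {
		// Service and SDK errors satisfy awserr.Error; Code and Message carry
		// the details, as the doc comments above describe.
		if aerr, ok := err.(awserr.Error); ok {
			fmt.Printf("code=%s message=%s\n", aerr.Code(), aerr.Message())
			return
		}
		log.Fatal(err)
	}
}
```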
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteCustomerGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteCustomerGateway func (c *EC2) DeleteCustomerGateway(input *DeleteCustomerGatewayInput) (*DeleteCustomerGatewayOutput, error) { req, out := c.DeleteCustomerGatewayRequest(input) return out, req.Send() @@ -5249,7 +5824,7 @@ const opDeleteDhcpOptions = "DeleteDhcpOptions" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteDhcpOptions func (c *EC2) DeleteDhcpOptionsRequest(input *DeleteDhcpOptionsInput) (req *request.Request, output *DeleteDhcpOptionsOutput) { op := &request.Operation{ Name: opDeleteDhcpOptions, @@ -5281,7 +5856,7 @@ func (c *EC2) DeleteDhcpOptionsRequest(input *DeleteDhcpOptionsInput) (req *requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteDhcpOptions for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteDhcpOptions func (c *EC2) DeleteDhcpOptions(input *DeleteDhcpOptionsInput) (*DeleteDhcpOptionsOutput, error) { req, out := c.DeleteDhcpOptionsRequest(input) return out, req.Send() @@ -5328,7 +5903,7 @@ const opDeleteEgressOnlyInternetGateway = "DeleteEgressOnlyInternetGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteEgressOnlyInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteEgressOnlyInternetGateway func (c *EC2) DeleteEgressOnlyInternetGatewayRequest(input *DeleteEgressOnlyInternetGatewayInput) (req *request.Request, output *DeleteEgressOnlyInternetGatewayOutput) { op := &request.Operation{ Name: opDeleteEgressOnlyInternetGateway, @@ -5355,7 +5930,7 @@ func (c *EC2) DeleteEgressOnlyInternetGatewayRequest(input *DeleteEgressOnlyInte // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteEgressOnlyInternetGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteEgressOnlyInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteEgressOnlyInternetGateway func (c *EC2) DeleteEgressOnlyInternetGateway(input *DeleteEgressOnlyInternetGatewayInput) (*DeleteEgressOnlyInternetGatewayOutput, error) { req, out := c.DeleteEgressOnlyInternetGatewayRequest(input) return out, req.Send() @@ -5402,7 +5977,7 @@ const opDeleteFlowLogs = "DeleteFlowLogs" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFlowLogs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFlowLogs func (c *EC2) DeleteFlowLogsRequest(input *DeleteFlowLogsInput) (req *request.Request, output *DeleteFlowLogsOutput) { op := &request.Operation{ Name: opDeleteFlowLogs, @@ -5429,7 +6004,7 @@ func (c *EC2) DeleteFlowLogsRequest(input *DeleteFlowLogsInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteFlowLogs for usage and error information. 
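As a concrete illustration of the DeleteFlowLogs operation just documented, a minimal sketch is shown below. The flow log ID is a placeholder, and FlowLogIds as the input field name is an assumption taken from this SDK revision.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Delete one or more flow logs by ID; the ID below is a placeholder.
	out, err := svc.DeleteFlowLogs(&ec2.DeleteFlowLogsInput{
		FlowLogIds: []*string{aws.String("fl-0example")},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Any flow logs that could not be deleted are reported back.
	fmt.Println(out.Unsuccessful)
}
```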
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFlowLogs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFlowLogs func (c *EC2) DeleteFlowLogs(input *DeleteFlowLogsInput) (*DeleteFlowLogsOutput, error) { req, out := c.DeleteFlowLogsRequest(input) return out, req.Send() @@ -5451,6 +6026,80 @@ func (c *EC2) DeleteFlowLogsWithContext(ctx aws.Context, input *DeleteFlowLogsIn return out, req.Send() } +const opDeleteFpgaImage = "DeleteFpgaImage" + +// DeleteFpgaImageRequest generates a "aws/request.Request" representing the +// client's request for the DeleteFpgaImage operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteFpgaImage for more information on using the DeleteFpgaImage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteFpgaImageRequest method. +// req, resp := client.DeleteFpgaImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFpgaImage +func (c *EC2) DeleteFpgaImageRequest(input *DeleteFpgaImageInput) (req *request.Request, output *DeleteFpgaImageOutput) { + op := &request.Operation{ + Name: opDeleteFpgaImage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteFpgaImageInput{} + } + + output = &DeleteFpgaImageOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteFpgaImage API operation for Amazon Elastic Compute Cloud. +// +// Deletes the specified Amazon FPGA Image (AFI). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DeleteFpgaImage for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFpgaImage +func (c *EC2) DeleteFpgaImage(input *DeleteFpgaImageInput) (*DeleteFpgaImageOutput, error) { + req, out := c.DeleteFpgaImageRequest(input) + return out, req.Send() +} + +// DeleteFpgaImageWithContext is the same as DeleteFpgaImage with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteFpgaImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DeleteFpgaImageWithContext(ctx aws.Context, input *DeleteFpgaImageInput, opts ...request.Option) (*DeleteFpgaImageOutput, error) { + req, out := c.DeleteFpgaImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDeleteInternetGateway = "DeleteInternetGateway" // DeleteInternetGatewayRequest generates a "aws/request.Request" representing the @@ -5476,7 +6125,7 @@ const opDeleteInternetGateway = "DeleteInternetGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteInternetGateway func (c *EC2) DeleteInternetGatewayRequest(input *DeleteInternetGatewayInput) (req *request.Request, output *DeleteInternetGatewayOutput) { op := &request.Operation{ Name: opDeleteInternetGateway, @@ -5506,7 +6155,7 @@ func (c *EC2) DeleteInternetGatewayRequest(input *DeleteInternetGatewayInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteInternetGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteInternetGateway func (c *EC2) DeleteInternetGateway(input *DeleteInternetGatewayInput) (*DeleteInternetGatewayOutput, error) { req, out := c.DeleteInternetGatewayRequest(input) return out, req.Send() @@ -5553,7 +6202,7 @@ const opDeleteKeyPair = "DeleteKeyPair" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteKeyPair +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteKeyPair func (c *EC2) DeleteKeyPairRequest(input *DeleteKeyPairInput) (req *request.Request, output *DeleteKeyPairOutput) { op := &request.Operation{ Name: opDeleteKeyPair, @@ -5582,7 +6231,7 @@ func (c *EC2) DeleteKeyPairRequest(input *DeleteKeyPairInput) (req *request.Requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteKeyPair for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteKeyPair +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteKeyPair func (c *EC2) DeleteKeyPair(input *DeleteKeyPairInput) (*DeleteKeyPairOutput, error) { req, out := c.DeleteKeyPairRequest(input) return out, req.Send() @@ -5604,6 +6253,158 @@ func (c *EC2) DeleteKeyPairWithContext(ctx aws.Context, input *DeleteKeyPairInpu return out, req.Send() } +const opDeleteLaunchTemplate = "DeleteLaunchTemplate" + +// DeleteLaunchTemplateRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLaunchTemplate operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLaunchTemplate for more information on using the DeleteLaunchTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLaunchTemplateRequest method. 
+// req, resp := client.DeleteLaunchTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteLaunchTemplate +func (c *EC2) DeleteLaunchTemplateRequest(input *DeleteLaunchTemplateInput) (req *request.Request, output *DeleteLaunchTemplateOutput) { + op := &request.Operation{ + Name: opDeleteLaunchTemplate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLaunchTemplateInput{} + } + + output = &DeleteLaunchTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteLaunchTemplate API operation for Amazon Elastic Compute Cloud. +// +// Deletes a launch template. Deleting a launch template deletes all of its +// versions. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DeleteLaunchTemplate for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteLaunchTemplate +func (c *EC2) DeleteLaunchTemplate(input *DeleteLaunchTemplateInput) (*DeleteLaunchTemplateOutput, error) { + req, out := c.DeleteLaunchTemplateRequest(input) + return out, req.Send() +} + +// DeleteLaunchTemplateWithContext is the same as DeleteLaunchTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLaunchTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DeleteLaunchTemplateWithContext(ctx aws.Context, input *DeleteLaunchTemplateInput, opts ...request.Option) (*DeleteLaunchTemplateOutput, error) { + req, out := c.DeleteLaunchTemplateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteLaunchTemplateVersions = "DeleteLaunchTemplateVersions" + +// DeleteLaunchTemplateVersionsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLaunchTemplateVersions operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLaunchTemplateVersions for more information on using the DeleteLaunchTemplateVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLaunchTemplateVersionsRequest method. 
+// req, resp := client.DeleteLaunchTemplateVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteLaunchTemplateVersions +func (c *EC2) DeleteLaunchTemplateVersionsRequest(input *DeleteLaunchTemplateVersionsInput) (req *request.Request, output *DeleteLaunchTemplateVersionsOutput) { + op := &request.Operation{ + Name: opDeleteLaunchTemplateVersions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLaunchTemplateVersionsInput{} + } + + output = &DeleteLaunchTemplateVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteLaunchTemplateVersions API operation for Amazon Elastic Compute Cloud. +// +// Deletes one or more versions of a launch template. You cannot delete the +// default version of a launch template; you must first assign a different version +// as the default. If the default version is the only version for the launch +// template, you must delete the entire launch template using DeleteLaunchTemplate. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DeleteLaunchTemplateVersions for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteLaunchTemplateVersions +func (c *EC2) DeleteLaunchTemplateVersions(input *DeleteLaunchTemplateVersionsInput) (*DeleteLaunchTemplateVersionsOutput, error) { + req, out := c.DeleteLaunchTemplateVersionsRequest(input) + return out, req.Send() +} + +// DeleteLaunchTemplateVersionsWithContext is the same as DeleteLaunchTemplateVersions with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLaunchTemplateVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DeleteLaunchTemplateVersionsWithContext(ctx aws.Context, input *DeleteLaunchTemplateVersionsInput, opts ...request.Option) (*DeleteLaunchTemplateVersionsOutput, error) { + req, out := c.DeleteLaunchTemplateVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteNatGateway = "DeleteNatGateway" // DeleteNatGatewayRequest generates a "aws/request.Request" representing the @@ -5629,7 +6430,7 @@ const opDeleteNatGateway = "DeleteNatGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNatGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNatGateway func (c *EC2) DeleteNatGatewayRequest(input *DeleteNatGatewayInput) (req *request.Request, output *DeleteNatGatewayOutput) { op := &request.Operation{ Name: opDeleteNatGateway, @@ -5658,7 +6459,7 @@ func (c *EC2) DeleteNatGatewayRequest(input *DeleteNatGatewayInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteNatGateway for usage and error information. 
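The WithContext variants described throughout accept an aws.Context for request cancellation, and a standard context.Context satisfies it. A sketch using DeleteLaunchTemplateVersionsWithContext with a timeout; the template ID and version numbers are placeholders, and the Versions field name is assumed from this SDK revision.

```
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Cancel the request if it has not completed within 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Delete non-default versions only; per the docs above, the default version
	// can only go away by deleting the whole template with DeleteLaunchTemplate.
	out, err := svc.DeleteLaunchTemplateVersionsWithContext(ctx, &ec2.DeleteLaunchTemplateVersionsInput{
		LaunchTemplateId: aws.String("lt-0example"),           // placeholder
		Versions:         aws.StringSlice([]string{"2", "3"}), // placeholder version numbers
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```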
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNatGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNatGateway func (c *EC2) DeleteNatGateway(input *DeleteNatGatewayInput) (*DeleteNatGatewayOutput, error) { req, out := c.DeleteNatGatewayRequest(input) return out, req.Send() @@ -5705,7 +6506,7 @@ const opDeleteNetworkAcl = "DeleteNetworkAcl" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAcl func (c *EC2) DeleteNetworkAclRequest(input *DeleteNetworkAclInput) (req *request.Request, output *DeleteNetworkAclOutput) { op := &request.Operation{ Name: opDeleteNetworkAcl, @@ -5735,7 +6536,7 @@ func (c *EC2) DeleteNetworkAclRequest(input *DeleteNetworkAclInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteNetworkAcl for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAcl func (c *EC2) DeleteNetworkAcl(input *DeleteNetworkAclInput) (*DeleteNetworkAclOutput, error) { req, out := c.DeleteNetworkAclRequest(input) return out, req.Send() @@ -5782,7 +6583,7 @@ const opDeleteNetworkAclEntry = "DeleteNetworkAclEntry" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclEntry +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclEntry func (c *EC2) DeleteNetworkAclEntryRequest(input *DeleteNetworkAclEntryInput) (req *request.Request, output *DeleteNetworkAclEntryOutput) { op := &request.Operation{ Name: opDeleteNetworkAclEntry, @@ -5812,7 +6613,7 @@ func (c *EC2) DeleteNetworkAclEntryRequest(input *DeleteNetworkAclEntryInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteNetworkAclEntry for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclEntry +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclEntry func (c *EC2) DeleteNetworkAclEntry(input *DeleteNetworkAclEntryInput) (*DeleteNetworkAclEntryOutput, error) { req, out := c.DeleteNetworkAclEntryRequest(input) return out, req.Send() @@ -5859,7 +6660,7 @@ const opDeleteNetworkInterface = "DeleteNetworkInterface" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterface func (c *EC2) DeleteNetworkInterfaceRequest(input *DeleteNetworkInterfaceInput) (req *request.Request, output *DeleteNetworkInterfaceOutput) { op := &request.Operation{ Name: opDeleteNetworkInterface, @@ -5889,7 +6690,7 @@ func (c *EC2) DeleteNetworkInterfaceRequest(input *DeleteNetworkInterfaceInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteNetworkInterface for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterface func (c *EC2) DeleteNetworkInterface(input *DeleteNetworkInterfaceInput) (*DeleteNetworkInterfaceOutput, error) { req, out := c.DeleteNetworkInterfaceRequest(input) return out, req.Send() @@ -5936,7 +6737,7 @@ const opDeleteNetworkInterfacePermission = "DeleteNetworkInterfacePermission" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfacePermission +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfacePermission func (c *EC2) DeleteNetworkInterfacePermissionRequest(input *DeleteNetworkInterfacePermissionInput) (req *request.Request, output *DeleteNetworkInterfacePermissionOutput) { op := &request.Operation{ Name: opDeleteNetworkInterfacePermission, @@ -5966,7 +6767,7 @@ func (c *EC2) DeleteNetworkInterfacePermissionRequest(input *DeleteNetworkInterf // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteNetworkInterfacePermission for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfacePermission +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfacePermission func (c *EC2) DeleteNetworkInterfacePermission(input *DeleteNetworkInterfacePermissionInput) (*DeleteNetworkInterfacePermissionOutput, error) { req, out := c.DeleteNetworkInterfacePermissionRequest(input) return out, req.Send() @@ -6013,7 +6814,7 @@ const opDeletePlacementGroup = "DeletePlacementGroup" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeletePlacementGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeletePlacementGroup func (c *EC2) DeletePlacementGroupRequest(input *DeletePlacementGroupInput) (req *request.Request, output *DeletePlacementGroupOutput) { op := &request.Operation{ Name: opDeletePlacementGroup, @@ -6035,8 +6836,8 @@ func (c *EC2) DeletePlacementGroupRequest(input *DeletePlacementGroupInput) (req // DeletePlacementGroup API operation for Amazon Elastic Compute Cloud. // // Deletes the specified placement group. You must terminate all instances in -// the placement group before you can delete the placement group. For more information -// about placement groups and cluster instances, see Cluster Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using_cluster_computing.html) +// the placement group before you can delete the placement group. For more information, +// see Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -6045,7 +6846,7 @@ func (c *EC2) DeletePlacementGroupRequest(input *DeletePlacementGroupInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeletePlacementGroup for usage and error information. 
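The DeletePlacementGroup documentation above requires that all instances in the group be terminated first. A minimal sketch of the call itself; the group name is a placeholder.

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// All instances in the group must already be terminated, per the docs above.
	// The group name is a placeholder.
	if _, err := svc.DeletePlacementGroup(&ec2.DeletePlacementGroupInput{
		GroupName: aws.String("my-cluster-group"),
	}); err != nil {
		log.Fatal(err)
	}
}
```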
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeletePlacementGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeletePlacementGroup func (c *EC2) DeletePlacementGroup(input *DeletePlacementGroupInput) (*DeletePlacementGroupOutput, error) { req, out := c.DeletePlacementGroupRequest(input) return out, req.Send() @@ -6092,7 +6893,7 @@ const opDeleteRoute = "DeleteRoute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRoute func (c *EC2) DeleteRouteRequest(input *DeleteRouteInput) (req *request.Request, output *DeleteRouteOutput) { op := &request.Operation{ Name: opDeleteRoute, @@ -6121,7 +6922,7 @@ func (c *EC2) DeleteRouteRequest(input *DeleteRouteInput) (req *request.Request, // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteRoute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRoute func (c *EC2) DeleteRoute(input *DeleteRouteInput) (*DeleteRouteOutput, error) { req, out := c.DeleteRouteRequest(input) return out, req.Send() @@ -6168,7 +6969,7 @@ const opDeleteRouteTable = "DeleteRouteTable" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteTable func (c *EC2) DeleteRouteTableRequest(input *DeleteRouteTableInput) (req *request.Request, output *DeleteRouteTableOutput) { op := &request.Operation{ Name: opDeleteRouteTable, @@ -6199,7 +7000,7 @@ func (c *EC2) DeleteRouteTableRequest(input *DeleteRouteTableInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteRouteTable for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteTable func (c *EC2) DeleteRouteTable(input *DeleteRouteTableInput) (*DeleteRouteTableOutput, error) { req, out := c.DeleteRouteTableRequest(input) return out, req.Send() @@ -6246,7 +7047,7 @@ const opDeleteSecurityGroup = "DeleteSecurityGroup" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSecurityGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSecurityGroup func (c *EC2) DeleteSecurityGroupRequest(input *DeleteSecurityGroupInput) (req *request.Request, output *DeleteSecurityGroupOutput) { op := &request.Operation{ Name: opDeleteSecurityGroup, @@ -6279,7 +7080,7 @@ func (c *EC2) DeleteSecurityGroupRequest(input *DeleteSecurityGroupInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteSecurityGroup for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSecurityGroup +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSecurityGroup func (c *EC2) DeleteSecurityGroup(input *DeleteSecurityGroupInput) (*DeleteSecurityGroupOutput, error) { req, out := c.DeleteSecurityGroupRequest(input) return out, req.Send() @@ -6326,7 +7127,7 @@ const opDeleteSnapshot = "DeleteSnapshot" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSnapshot func (c *EC2) DeleteSnapshotRequest(input *DeleteSnapshotInput) (req *request.Request, output *DeleteSnapshotOutput) { op := &request.Operation{ Name: opDeleteSnapshot, @@ -6369,7 +7170,7 @@ func (c *EC2) DeleteSnapshotRequest(input *DeleteSnapshotInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteSnapshot for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSnapshot func (c *EC2) DeleteSnapshot(input *DeleteSnapshotInput) (*DeleteSnapshotOutput, error) { req, out := c.DeleteSnapshotRequest(input) return out, req.Send() @@ -6416,7 +7217,7 @@ const opDeleteSpotDatafeedSubscription = "DeleteSpotDatafeedSubscription" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSpotDatafeedSubscription +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSpotDatafeedSubscription func (c *EC2) DeleteSpotDatafeedSubscriptionRequest(input *DeleteSpotDatafeedSubscriptionInput) (req *request.Request, output *DeleteSpotDatafeedSubscriptionOutput) { op := &request.Operation{ Name: opDeleteSpotDatafeedSubscription, @@ -6437,7 +7238,7 @@ func (c *EC2) DeleteSpotDatafeedSubscriptionRequest(input *DeleteSpotDatafeedSub // DeleteSpotDatafeedSubscription API operation for Amazon Elastic Compute Cloud. // -// Deletes the data feed for Spot instances. +// Deletes the data feed for Spot Instances. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6445,7 +7246,7 @@ func (c *EC2) DeleteSpotDatafeedSubscriptionRequest(input *DeleteSpotDatafeedSub // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteSpotDatafeedSubscription for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSpotDatafeedSubscription +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSpotDatafeedSubscription func (c *EC2) DeleteSpotDatafeedSubscription(input *DeleteSpotDatafeedSubscriptionInput) (*DeleteSpotDatafeedSubscriptionOutput, error) { req, out := c.DeleteSpotDatafeedSubscriptionRequest(input) return out, req.Send() @@ -6492,7 +7293,7 @@ const opDeleteSubnet = "DeleteSubnet" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSubnet +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSubnet func (c *EC2) DeleteSubnetRequest(input *DeleteSubnetInput) (req *request.Request, output *DeleteSubnetOutput) { op := &request.Operation{ Name: opDeleteSubnet, @@ -6522,7 +7323,7 @@ func (c *EC2) DeleteSubnetRequest(input *DeleteSubnetInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteSubnet for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSubnet +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSubnet func (c *EC2) DeleteSubnet(input *DeleteSubnetInput) (*DeleteSubnetOutput, error) { req, out := c.DeleteSubnetRequest(input) return out, req.Send() @@ -6569,7 +7370,7 @@ const opDeleteTags = "DeleteTags" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteTags func (c *EC2) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Request, output *DeleteTagsOutput) { op := &request.Operation{ Name: opDeleteTags, @@ -6590,10 +7391,10 @@ func (c *EC2) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Request, o // DeleteTags API operation for Amazon Elastic Compute Cloud. // -// Deletes the specified set of tags from the specified set of resources. This -// call is designed to follow a DescribeTags request. +// Deletes the specified set of tags from the specified set of resources. // -// For more information about tags, see Tagging Your Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) +// To list the current tags, use DescribeTags. For more information about tags, +// see Tagging Your Resources (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -6602,7 +7403,7 @@ func (c *EC2) DeleteTagsRequest(input *DeleteTagsInput) (req *request.Request, o // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteTags for usage and error information. 
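The reworded DeleteTags documentation above points at DescribeTags for listing current tags before deleting them. A minimal sketch of removing one tag from one resource; the instance ID and tag values are placeholders.

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Remove a specific tag from a resource; the instance ID is a placeholder.
	// To see what is currently applied, DescribeTags can be called first.
	if _, err := svc.DeleteTags(&ec2.DeleteTagsInput{
		Resources: []*string{aws.String("i-0example")},
		Tags: []*ec2.Tag{
			{Key: aws.String("Environment"), Value: aws.String("staging")},
		},
	}); err != nil {
		log.Fatal(err)
	}
}
```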
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteTags func (c *EC2) DeleteTags(input *DeleteTagsInput) (*DeleteTagsOutput, error) { req, out := c.DeleteTagsRequest(input) return out, req.Send() @@ -6649,7 +7450,7 @@ const opDeleteVolume = "DeleteVolume" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVolume func (c *EC2) DeleteVolumeRequest(input *DeleteVolumeInput) (req *request.Request, output *DeleteVolumeOutput) { op := &request.Operation{ Name: opDeleteVolume, @@ -6684,7 +7485,7 @@ func (c *EC2) DeleteVolumeRequest(input *DeleteVolumeInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteVolume for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVolume func (c *EC2) DeleteVolume(input *DeleteVolumeInput) (*DeleteVolumeOutput, error) { req, out := c.DeleteVolumeRequest(input) return out, req.Send() @@ -6731,7 +7532,7 @@ const opDeleteVpc = "DeleteVpc" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpc func (c *EC2) DeleteVpcRequest(input *DeleteVpcInput) (req *request.Request, output *DeleteVpcOutput) { op := &request.Operation{ Name: opDeleteVpc, @@ -6764,7 +7565,7 @@ func (c *EC2) DeleteVpcRequest(input *DeleteVpcInput) (req *request.Request, out // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteVpc for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpc func (c *EC2) DeleteVpc(input *DeleteVpcInput) (*DeleteVpcOutput, error) { req, out := c.DeleteVpcRequest(input) return out, req.Send() @@ -6786,6 +7587,157 @@ func (c *EC2) DeleteVpcWithContext(ctx aws.Context, input *DeleteVpcInput, opts return out, req.Send() } +const opDeleteVpcEndpointConnectionNotifications = "DeleteVpcEndpointConnectionNotifications" + +// DeleteVpcEndpointConnectionNotificationsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteVpcEndpointConnectionNotifications operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteVpcEndpointConnectionNotifications for more information on using the DeleteVpcEndpointConnectionNotifications +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteVpcEndpointConnectionNotificationsRequest method. 
+// req, resp := client.DeleteVpcEndpointConnectionNotificationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpointConnectionNotifications +func (c *EC2) DeleteVpcEndpointConnectionNotificationsRequest(input *DeleteVpcEndpointConnectionNotificationsInput) (req *request.Request, output *DeleteVpcEndpointConnectionNotificationsOutput) { + op := &request.Operation{ + Name: opDeleteVpcEndpointConnectionNotifications, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteVpcEndpointConnectionNotificationsInput{} + } + + output = &DeleteVpcEndpointConnectionNotificationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteVpcEndpointConnectionNotifications API operation for Amazon Elastic Compute Cloud. +// +// Deletes one or more VPC endpoint connection notifications. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DeleteVpcEndpointConnectionNotifications for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpointConnectionNotifications +func (c *EC2) DeleteVpcEndpointConnectionNotifications(input *DeleteVpcEndpointConnectionNotificationsInput) (*DeleteVpcEndpointConnectionNotificationsOutput, error) { + req, out := c.DeleteVpcEndpointConnectionNotificationsRequest(input) + return out, req.Send() +} + +// DeleteVpcEndpointConnectionNotificationsWithContext is the same as DeleteVpcEndpointConnectionNotifications with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteVpcEndpointConnectionNotifications for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DeleteVpcEndpointConnectionNotificationsWithContext(ctx aws.Context, input *DeleteVpcEndpointConnectionNotificationsInput, opts ...request.Option) (*DeleteVpcEndpointConnectionNotificationsOutput, error) { + req, out := c.DeleteVpcEndpointConnectionNotificationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteVpcEndpointServiceConfigurations = "DeleteVpcEndpointServiceConfigurations" + +// DeleteVpcEndpointServiceConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the DeleteVpcEndpointServiceConfigurations operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteVpcEndpointServiceConfigurations for more information on using the DeleteVpcEndpointServiceConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DeleteVpcEndpointServiceConfigurationsRequest method. +// req, resp := client.DeleteVpcEndpointServiceConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpointServiceConfigurations +func (c *EC2) DeleteVpcEndpointServiceConfigurationsRequest(input *DeleteVpcEndpointServiceConfigurationsInput) (req *request.Request, output *DeleteVpcEndpointServiceConfigurationsOutput) { + op := &request.Operation{ + Name: opDeleteVpcEndpointServiceConfigurations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteVpcEndpointServiceConfigurationsInput{} + } + + output = &DeleteVpcEndpointServiceConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteVpcEndpointServiceConfigurations API operation for Amazon Elastic Compute Cloud. +// +// Deletes one or more VPC endpoint service configurations in your account. +// Before you delete the endpoint service configuration, you must reject any +// Available or PendingAcceptance interface endpoint connections that are attached +// to the service. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DeleteVpcEndpointServiceConfigurations for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpointServiceConfigurations +func (c *EC2) DeleteVpcEndpointServiceConfigurations(input *DeleteVpcEndpointServiceConfigurationsInput) (*DeleteVpcEndpointServiceConfigurationsOutput, error) { + req, out := c.DeleteVpcEndpointServiceConfigurationsRequest(input) + return out, req.Send() +} + +// DeleteVpcEndpointServiceConfigurationsWithContext is the same as DeleteVpcEndpointServiceConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteVpcEndpointServiceConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DeleteVpcEndpointServiceConfigurationsWithContext(ctx aws.Context, input *DeleteVpcEndpointServiceConfigurationsInput, opts ...request.Option) (*DeleteVpcEndpointServiceConfigurationsOutput, error) { + req, out := c.DeleteVpcEndpointServiceConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDeleteVpcEndpoints = "DeleteVpcEndpoints" // DeleteVpcEndpointsRequest generates a "aws/request.Request" representing the @@ -6811,7 +7763,7 @@ const opDeleteVpcEndpoints = "DeleteVpcEndpoints" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpoints +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpoints func (c *EC2) DeleteVpcEndpointsRequest(input *DeleteVpcEndpointsInput) (req *request.Request, output *DeleteVpcEndpointsOutput) { op := &request.Operation{ Name: opDeleteVpcEndpoints, @@ -6830,8 +7782,10 @@ func (c *EC2) DeleteVpcEndpointsRequest(input *DeleteVpcEndpointsInput) (req *re // DeleteVpcEndpoints API operation for Amazon Elastic Compute Cloud. // -// Deletes one or more specified VPC endpoints. Deleting the endpoint also deletes -// the endpoint routes in the route tables that were associated with the endpoint. +// Deletes one or more specified VPC endpoints. Deleting a gateway endpoint +// also deletes the endpoint routes in the route tables that were associated +// with the endpoint. Deleting an interface endpoint deletes the endpoint network +// interfaces. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -6839,7 +7793,7 @@ func (c *EC2) DeleteVpcEndpointsRequest(input *DeleteVpcEndpointsInput) (req *re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteVpcEndpoints for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpoints +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpoints func (c *EC2) DeleteVpcEndpoints(input *DeleteVpcEndpointsInput) (*DeleteVpcEndpointsOutput, error) { req, out := c.DeleteVpcEndpointsRequest(input) return out, req.Send() @@ -6886,7 +7840,7 @@ const opDeleteVpcPeeringConnection = "DeleteVpcPeeringConnection" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcPeeringConnection func (c *EC2) DeleteVpcPeeringConnectionRequest(input *DeleteVpcPeeringConnectionInput) (req *request.Request, output *DeleteVpcPeeringConnectionOutput) { op := &request.Operation{ Name: opDeleteVpcPeeringConnection, @@ -6906,8 +7860,8 @@ func (c *EC2) DeleteVpcPeeringConnectionRequest(input *DeleteVpcPeeringConnectio // DeleteVpcPeeringConnection API operation for Amazon Elastic Compute Cloud. // // Deletes a VPC peering connection. Either the owner of the requester VPC or -// the owner of the peer VPC can delete the VPC peering connection if it's in -// the active state. The owner of the requester VPC can delete a VPC peering +// the owner of the accepter VPC can delete the VPC peering connection if it's +// in the active state. The owner of the requester VPC can delete a VPC peering // connection in the pending-acceptance state. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -6916,7 +7870,7 @@ func (c *EC2) DeleteVpcPeeringConnectionRequest(input *DeleteVpcPeeringConnectio // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteVpcPeeringConnection for usage and error information. 
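
For orientation, a minimal sketch of driving the DeleteVpcEndpoints operation described above from application code. It assumes the input struct exposes a VpcEndpointIds field and uses a placeholder endpoint ID, so treat it as an illustration rather than the vendored SDK's own documented example.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Shared session using the default credential and region chain.
	svc := ec2.New(session.Must(session.NewSession()))

	// Deleting a gateway endpoint also removes its routes; deleting an
	// interface endpoint removes the endpoint network interfaces.
	// "vpce-0123456789abcdef0" is a placeholder ID.
	out, err := svc.DeleteVpcEndpoints(&ec2.DeleteVpcEndpointsInput{
		VpcEndpointIds: aws.StringSlice([]string{"vpce-0123456789abcdef0"}),
	})
	if err != nil {
		log.Fatal(err)
	}
	// Endpoints that could not be deleted are reported in the output.
	fmt.Println(out)
}
```
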
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcPeeringConnection func (c *EC2) DeleteVpcPeeringConnection(input *DeleteVpcPeeringConnectionInput) (*DeleteVpcPeeringConnectionOutput, error) { req, out := c.DeleteVpcPeeringConnectionRequest(input) return out, req.Send() @@ -6963,7 +7917,7 @@ const opDeleteVpnConnection = "DeleteVpnConnection" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnection func (c *EC2) DeleteVpnConnectionRequest(input *DeleteVpnConnectionInput) (req *request.Request, output *DeleteVpnConnectionOutput) { op := &request.Operation{ Name: opDeleteVpnConnection, @@ -7001,7 +7955,7 @@ func (c *EC2) DeleteVpnConnectionRequest(input *DeleteVpnConnectionInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteVpnConnection for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnection func (c *EC2) DeleteVpnConnection(input *DeleteVpnConnectionInput) (*DeleteVpnConnectionOutput, error) { req, out := c.DeleteVpnConnectionRequest(input) return out, req.Send() @@ -7048,7 +8002,7 @@ const opDeleteVpnConnectionRoute = "DeleteVpnConnectionRoute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionRoute func (c *EC2) DeleteVpnConnectionRouteRequest(input *DeleteVpnConnectionRouteInput) (req *request.Request, output *DeleteVpnConnectionRouteOutput) { op := &request.Operation{ Name: opDeleteVpnConnectionRoute, @@ -7080,7 +8034,7 @@ func (c *EC2) DeleteVpnConnectionRouteRequest(input *DeleteVpnConnectionRouteInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteVpnConnectionRoute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionRoute func (c *EC2) DeleteVpnConnectionRoute(input *DeleteVpnConnectionRouteInput) (*DeleteVpnConnectionRouteOutput, error) { req, out := c.DeleteVpnConnectionRouteRequest(input) return out, req.Send() @@ -7127,7 +8081,7 @@ const opDeleteVpnGateway = "DeleteVpnGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnGateway func (c *EC2) DeleteVpnGatewayRequest(input *DeleteVpnGatewayInput) (req *request.Request, output *DeleteVpnGatewayOutput) { op := &request.Operation{ Name: opDeleteVpnGateway, @@ -7160,7 +8114,7 @@ func (c *EC2) DeleteVpnGatewayRequest(input *DeleteVpnGatewayInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeleteVpnGateway for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnGateway func (c *EC2) DeleteVpnGateway(input *DeleteVpnGatewayInput) (*DeleteVpnGatewayOutput, error) { req, out := c.DeleteVpnGatewayRequest(input) return out, req.Send() @@ -7207,7 +8161,7 @@ const opDeregisterImage = "DeregisterImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeregisterImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeregisterImage func (c *EC2) DeregisterImageRequest(input *DeregisterImageInput) (req *request.Request, output *DeregisterImageOutput) { op := &request.Operation{ Name: opDeregisterImage, @@ -7229,9 +8183,14 @@ func (c *EC2) DeregisterImageRequest(input *DeregisterImageInput) (req *request. // DeregisterImage API operation for Amazon Elastic Compute Cloud. // // Deregisters the specified AMI. After you deregister an AMI, it can't be used -// to launch new instances. +// to launch new instances; however, it doesn't affect any instances that you've +// already launched from the AMI. You'll continue to incur usage costs for those +// instances until you terminate them. // -// This command does not delete the AMI. +// When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot +// that was created for the root volume of the instance during the AMI creation +// process. When you deregister an instance store-backed AMI, it doesn't affect +// the files that you uploaded to Amazon S3 when you created the AMI. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -7239,7 +8198,7 @@ func (c *EC2) DeregisterImageRequest(input *DeregisterImageInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DeregisterImage for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeregisterImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeregisterImage func (c *EC2) DeregisterImage(input *DeregisterImageInput) (*DeregisterImageOutput, error) { req, out := c.DeregisterImageRequest(input) return out, req.Send() @@ -7286,7 +8245,7 @@ const opDescribeAccountAttributes = "DescribeAccountAttributes" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAccountAttributes +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAccountAttributes func (c *EC2) DescribeAccountAttributesRequest(input *DescribeAccountAttributesInput) (req *request.Request, output *DescribeAccountAttributesOutput) { op := &request.Operation{ Name: opDescribeAccountAttributes, @@ -7331,7 +8290,7 @@ func (c *EC2) DescribeAccountAttributesRequest(input *DescribeAccountAttributesI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeAccountAttributes for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAccountAttributes +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAccountAttributes func (c *EC2) DescribeAccountAttributes(input *DescribeAccountAttributesInput) (*DescribeAccountAttributesOutput, error) { req, out := c.DescribeAccountAttributesRequest(input) return out, req.Send() @@ -7378,7 +8337,7 @@ const opDescribeAddresses = "DescribeAddresses" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAddresses func (c *EC2) DescribeAddressesRequest(input *DescribeAddressesInput) (req *request.Request, output *DescribeAddressesOutput) { op := &request.Operation{ Name: opDescribeAddresses, @@ -7409,7 +8368,7 @@ func (c *EC2) DescribeAddressesRequest(input *DescribeAddressesInput) (req *requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeAddresses for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAddresses func (c *EC2) DescribeAddresses(input *DescribeAddressesInput) (*DescribeAddressesOutput, error) { req, out := c.DescribeAddressesRequest(input) return out, req.Send() @@ -7456,7 +8415,7 @@ const opDescribeAvailabilityZones = "DescribeAvailabilityZones" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAvailabilityZones +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAvailabilityZones func (c *EC2) DescribeAvailabilityZonesRequest(input *DescribeAvailabilityZonesInput) (req *request.Request, output *DescribeAvailabilityZonesOutput) { op := &request.Operation{ Name: opDescribeAvailabilityZones, @@ -7489,7 +8448,7 @@ func (c *EC2) DescribeAvailabilityZonesRequest(input *DescribeAvailabilityZonesI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeAvailabilityZones for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAvailabilityZones +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAvailabilityZones func (c *EC2) DescribeAvailabilityZones(input *DescribeAvailabilityZonesInput) (*DescribeAvailabilityZonesOutput, error) { req, out := c.DescribeAvailabilityZonesRequest(input) return out, req.Send() @@ -7536,7 +8495,7 @@ const opDescribeBundleTasks = "DescribeBundleTasks" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeBundleTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeBundleTasks func (c *EC2) DescribeBundleTasksRequest(input *DescribeBundleTasksInput) (req *request.Request, output *DescribeBundleTasksOutput) { op := &request.Operation{ Name: opDescribeBundleTasks, @@ -7568,7 +8527,7 @@ func (c *EC2) DescribeBundleTasksRequest(input *DescribeBundleTasksInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeBundleTasks for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeBundleTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeBundleTasks func (c *EC2) DescribeBundleTasks(input *DescribeBundleTasksInput) (*DescribeBundleTasksOutput, error) { req, out := c.DescribeBundleTasksRequest(input) return out, req.Send() @@ -7615,7 +8574,7 @@ const opDescribeClassicLinkInstances = "DescribeClassicLinkInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeClassicLinkInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeClassicLinkInstances func (c *EC2) DescribeClassicLinkInstancesRequest(input *DescribeClassicLinkInstancesInput) (req *request.Request, output *DescribeClassicLinkInstancesOutput) { op := &request.Operation{ Name: opDescribeClassicLinkInstances, @@ -7645,7 +8604,7 @@ func (c *EC2) DescribeClassicLinkInstancesRequest(input *DescribeClassicLinkInst // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeClassicLinkInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeClassicLinkInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeClassicLinkInstances func (c *EC2) DescribeClassicLinkInstances(input *DescribeClassicLinkInstancesInput) (*DescribeClassicLinkInstancesOutput, error) { req, out := c.DescribeClassicLinkInstancesRequest(input) return out, req.Send() @@ -7692,7 +8651,7 @@ const opDescribeConversionTasks = "DescribeConversionTasks" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeConversionTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeConversionTasks func (c *EC2) DescribeConversionTasksRequest(input *DescribeConversionTasksInput) (req *request.Request, output *DescribeConversionTasksOutput) { op := &request.Operation{ Name: opDescribeConversionTasks, @@ -7723,7 +8682,7 @@ func (c *EC2) DescribeConversionTasksRequest(input *DescribeConversionTasksInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeConversionTasks for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeConversionTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeConversionTasks func (c *EC2) DescribeConversionTasks(input *DescribeConversionTasksInput) (*DescribeConversionTasksOutput, error) { req, out := c.DescribeConversionTasksRequest(input) return out, req.Send() @@ -7770,7 +8729,7 @@ const opDescribeCustomerGateways = "DescribeCustomerGateways" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCustomerGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCustomerGateways func (c *EC2) DescribeCustomerGatewaysRequest(input *DescribeCustomerGatewaysInput) (req *request.Request, output *DescribeCustomerGatewaysOutput) { op := &request.Operation{ Name: opDescribeCustomerGateways, @@ -7791,9 +8750,9 @@ func (c *EC2) DescribeCustomerGatewaysRequest(input *DescribeCustomerGatewaysInp // // Describes one or more of your VPN customer gateways. 
// -// For more information about VPN customer gateways, see Adding a Hardware Virtual -// Private Gateway to Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) -// in the Amazon Virtual Private Cloud User Guide. +// For more information about VPN customer gateways, see AWS Managed VPN Connections +// (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) in the +// Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -7801,7 +8760,7 @@ func (c *EC2) DescribeCustomerGatewaysRequest(input *DescribeCustomerGatewaysInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeCustomerGateways for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCustomerGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCustomerGateways func (c *EC2) DescribeCustomerGateways(input *DescribeCustomerGatewaysInput) (*DescribeCustomerGatewaysOutput, error) { req, out := c.DescribeCustomerGatewaysRequest(input) return out, req.Send() @@ -7848,7 +8807,7 @@ const opDescribeDhcpOptions = "DescribeDhcpOptions" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeDhcpOptions func (c *EC2) DescribeDhcpOptionsRequest(input *DescribeDhcpOptionsInput) (req *request.Request, output *DescribeDhcpOptionsOutput) { op := &request.Operation{ Name: opDescribeDhcpOptions, @@ -7878,7 +8837,7 @@ func (c *EC2) DescribeDhcpOptionsRequest(input *DescribeDhcpOptionsInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeDhcpOptions for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeDhcpOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeDhcpOptions func (c *EC2) DescribeDhcpOptions(input *DescribeDhcpOptionsInput) (*DescribeDhcpOptionsOutput, error) { req, out := c.DescribeDhcpOptionsRequest(input) return out, req.Send() @@ -7925,7 +8884,7 @@ const opDescribeEgressOnlyInternetGateways = "DescribeEgressOnlyInternetGateways // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeEgressOnlyInternetGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeEgressOnlyInternetGateways func (c *EC2) DescribeEgressOnlyInternetGatewaysRequest(input *DescribeEgressOnlyInternetGatewaysInput) (req *request.Request, output *DescribeEgressOnlyInternetGatewaysOutput) { op := &request.Operation{ Name: opDescribeEgressOnlyInternetGateways, @@ -7952,7 +8911,7 @@ func (c *EC2) DescribeEgressOnlyInternetGatewaysRequest(input *DescribeEgressOnl // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeEgressOnlyInternetGateways for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeEgressOnlyInternetGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeEgressOnlyInternetGateways func (c *EC2) DescribeEgressOnlyInternetGateways(input *DescribeEgressOnlyInternetGatewaysInput) (*DescribeEgressOnlyInternetGatewaysOutput, error) { req, out := c.DescribeEgressOnlyInternetGatewaysRequest(input) return out, req.Send() @@ -7999,7 +8958,7 @@ const opDescribeElasticGpus = "DescribeElasticGpus" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeElasticGpus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeElasticGpus func (c *EC2) DescribeElasticGpusRequest(input *DescribeElasticGpusInput) (req *request.Request, output *DescribeElasticGpusOutput) { op := &request.Operation{ Name: opDescribeElasticGpus, @@ -8019,7 +8978,7 @@ func (c *EC2) DescribeElasticGpusRequest(input *DescribeElasticGpusInput) (req * // DescribeElasticGpus API operation for Amazon Elastic Compute Cloud. // // Describes the Elastic GPUs associated with your instances. For more information -// about Elastic GPUs, see Amazon EC2 Elastic GPUs (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-gpus.html). +// about Elastic GPUs, see Amazon EC2 Elastic GPUs (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/elastic-gpus.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -8027,7 +8986,7 @@ func (c *EC2) DescribeElasticGpusRequest(input *DescribeElasticGpusInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeElasticGpus for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeElasticGpus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeElasticGpus func (c *EC2) DescribeElasticGpus(input *DescribeElasticGpusInput) (*DescribeElasticGpusOutput, error) { req, out := c.DescribeElasticGpusRequest(input) return out, req.Send() @@ -8074,7 +9033,7 @@ const opDescribeExportTasks = "DescribeExportTasks" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeExportTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeExportTasks func (c *EC2) DescribeExportTasksRequest(input *DescribeExportTasksInput) (req *request.Request, output *DescribeExportTasksOutput) { op := &request.Operation{ Name: opDescribeExportTasks, @@ -8101,7 +9060,7 @@ func (c *EC2) DescribeExportTasksRequest(input *DescribeExportTasksInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeExportTasks for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeExportTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeExportTasks func (c *EC2) DescribeExportTasks(input *DescribeExportTasksInput) (*DescribeExportTasksOutput, error) { req, out := c.DescribeExportTasksRequest(input) return out, req.Send() @@ -8148,7 +9107,7 @@ const opDescribeFlowLogs = "DescribeFlowLogs" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFlowLogs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFlowLogs func (c *EC2) DescribeFlowLogsRequest(input *DescribeFlowLogsInput) (req *request.Request, output *DescribeFlowLogsOutput) { op := &request.Operation{ Name: opDescribeFlowLogs, @@ -8177,7 +9136,7 @@ func (c *EC2) DescribeFlowLogsRequest(input *DescribeFlowLogsInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeFlowLogs for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFlowLogs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFlowLogs func (c *EC2) DescribeFlowLogs(input *DescribeFlowLogsInput) (*DescribeFlowLogsOutput, error) { req, out := c.DescribeFlowLogsRequest(input) return out, req.Send() @@ -8199,6 +9158,80 @@ func (c *EC2) DescribeFlowLogsWithContext(ctx aws.Context, input *DescribeFlowLo return out, req.Send() } +const opDescribeFpgaImageAttribute = "DescribeFpgaImageAttribute" + +// DescribeFpgaImageAttributeRequest generates a "aws/request.Request" representing the +// client's request for the DescribeFpgaImageAttribute operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeFpgaImageAttribute for more information on using the DescribeFpgaImageAttribute +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeFpgaImageAttributeRequest method. +// req, resp := client.DescribeFpgaImageAttributeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImageAttribute +func (c *EC2) DescribeFpgaImageAttributeRequest(input *DescribeFpgaImageAttributeInput) (req *request.Request, output *DescribeFpgaImageAttributeOutput) { + op := &request.Operation{ + Name: opDescribeFpgaImageAttribute, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeFpgaImageAttributeInput{} + } + + output = &DescribeFpgaImageAttributeOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeFpgaImageAttribute API operation for Amazon Elastic Compute Cloud. +// +// Describes the specified attribute of the specified Amazon FPGA Image (AFI). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
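
A rough sketch of calling the new DescribeFpgaImageAttribute operation; the FpgaImageId and Attribute field names, the placeholder AFI ID, and the "description" attribute value are assumptions not shown in this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Request a single attribute of one Amazon FPGA Image (AFI);
	// the ID and attribute name below are placeholders.
	out, err := svc.DescribeFpgaImageAttribute(&ec2.DescribeFpgaImageAttributeInput{
		FpgaImageId: aws.String("afi-0123456789abcdef0"),
		Attribute:   aws.String("description"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```
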
+// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeFpgaImageAttribute for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImageAttribute +func (c *EC2) DescribeFpgaImageAttribute(input *DescribeFpgaImageAttributeInput) (*DescribeFpgaImageAttributeOutput, error) { + req, out := c.DescribeFpgaImageAttributeRequest(input) + return out, req.Send() +} + +// DescribeFpgaImageAttributeWithContext is the same as DescribeFpgaImageAttribute with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeFpgaImageAttribute for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeFpgaImageAttributeWithContext(ctx aws.Context, input *DescribeFpgaImageAttributeInput, opts ...request.Option) (*DescribeFpgaImageAttributeOutput, error) { + req, out := c.DescribeFpgaImageAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeFpgaImages = "DescribeFpgaImages" // DescribeFpgaImagesRequest generates a "aws/request.Request" representing the @@ -8224,7 +9257,7 @@ const opDescribeFpgaImages = "DescribeFpgaImages" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImages func (c *EC2) DescribeFpgaImagesRequest(input *DescribeFpgaImagesInput) (req *request.Request, output *DescribeFpgaImagesOutput) { op := &request.Operation{ Name: opDescribeFpgaImages, @@ -8253,7 +9286,7 @@ func (c *EC2) DescribeFpgaImagesRequest(input *DescribeFpgaImagesInput) (req *re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeFpgaImages for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImages func (c *EC2) DescribeFpgaImages(input *DescribeFpgaImagesInput) (*DescribeFpgaImagesOutput, error) { req, out := c.DescribeFpgaImagesRequest(input) return out, req.Send() @@ -8300,7 +9333,7 @@ const opDescribeHostReservationOfferings = "DescribeHostReservationOfferings" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationOfferings +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationOfferings func (c *EC2) DescribeHostReservationOfferingsRequest(input *DescribeHostReservationOfferingsInput) (req *request.Request, output *DescribeHostReservationOfferingsOutput) { op := &request.Operation{ Name: opDescribeHostReservationOfferings, @@ -8335,7 +9368,7 @@ func (c *EC2) DescribeHostReservationOfferingsRequest(input *DescribeHostReserva // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeHostReservationOfferings for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationOfferings +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationOfferings func (c *EC2) DescribeHostReservationOfferings(input *DescribeHostReservationOfferingsInput) (*DescribeHostReservationOfferingsOutput, error) { req, out := c.DescribeHostReservationOfferingsRequest(input) return out, req.Send() @@ -8382,7 +9415,7 @@ const opDescribeHostReservations = "DescribeHostReservations" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservations +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservations func (c *EC2) DescribeHostReservationsRequest(input *DescribeHostReservationsInput) (req *request.Request, output *DescribeHostReservationsOutput) { op := &request.Operation{ Name: opDescribeHostReservations, @@ -8410,7 +9443,7 @@ func (c *EC2) DescribeHostReservationsRequest(input *DescribeHostReservationsInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeHostReservations for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservations +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservations func (c *EC2) DescribeHostReservations(input *DescribeHostReservationsInput) (*DescribeHostReservationsOutput, error) { req, out := c.DescribeHostReservationsRequest(input) return out, req.Send() @@ -8457,7 +9490,7 @@ const opDescribeHosts = "DescribeHosts" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHosts func (c *EC2) DescribeHostsRequest(input *DescribeHostsInput) (req *request.Request, output *DescribeHostsOutput) { op := &request.Operation{ Name: opDescribeHosts, @@ -8488,7 +9521,7 @@ func (c *EC2) DescribeHostsRequest(input *DescribeHostsInput) (req *request.Requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeHosts for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHosts func (c *EC2) DescribeHosts(input *DescribeHostsInput) (*DescribeHostsOutput, error) { req, out := c.DescribeHostsRequest(input) return out, req.Send() @@ -8535,7 +9568,7 @@ const opDescribeIamInstanceProfileAssociations = "DescribeIamInstanceProfileAsso // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIamInstanceProfileAssociations +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIamInstanceProfileAssociations func (c *EC2) DescribeIamInstanceProfileAssociationsRequest(input *DescribeIamInstanceProfileAssociationsInput) (req *request.Request, output *DescribeIamInstanceProfileAssociationsOutput) { op := &request.Operation{ Name: opDescribeIamInstanceProfileAssociations, @@ -8562,7 +9595,7 @@ func (c *EC2) DescribeIamInstanceProfileAssociationsRequest(input *DescribeIamIn // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeIamInstanceProfileAssociations for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIamInstanceProfileAssociations +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIamInstanceProfileAssociations func (c *EC2) DescribeIamInstanceProfileAssociations(input *DescribeIamInstanceProfileAssociationsInput) (*DescribeIamInstanceProfileAssociationsOutput, error) { req, out := c.DescribeIamInstanceProfileAssociationsRequest(input) return out, req.Send() @@ -8609,7 +9642,7 @@ const opDescribeIdFormat = "DescribeIdFormat" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdFormat func (c *EC2) DescribeIdFormatRequest(input *DescribeIdFormatInput) (req *request.Request, output *DescribeIdFormatOutput) { op := &request.Operation{ Name: opDescribeIdFormat, @@ -8649,7 +9682,7 @@ func (c *EC2) DescribeIdFormatRequest(input *DescribeIdFormatInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeIdFormat for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdFormat func (c *EC2) DescribeIdFormat(input *DescribeIdFormatInput) (*DescribeIdFormatOutput, error) { req, out := c.DescribeIdFormatRequest(input) return out, req.Send() @@ -8696,7 +9729,7 @@ const opDescribeIdentityIdFormat = "DescribeIdentityIdFormat" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdentityIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdentityIdFormat func (c *EC2) DescribeIdentityIdFormatRequest(input *DescribeIdentityIdFormatInput) (req *request.Request, output *DescribeIdentityIdFormatOutput) { op := &request.Operation{ Name: opDescribeIdentityIdFormat, @@ -8734,7 +9767,7 @@ func (c *EC2) DescribeIdentityIdFormatRequest(input *DescribeIdentityIdFormatInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeIdentityIdFormat for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdentityIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdentityIdFormat func (c *EC2) DescribeIdentityIdFormat(input *DescribeIdentityIdFormatInput) (*DescribeIdentityIdFormatOutput, error) { req, out := c.DescribeIdentityIdFormatRequest(input) return out, req.Send() @@ -8781,7 +9814,7 @@ const opDescribeImageAttribute = "DescribeImageAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImageAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImageAttribute func (c *EC2) DescribeImageAttributeRequest(input *DescribeImageAttributeInput) (req *request.Request, output *DescribeImageAttributeOutput) { op := &request.Operation{ Name: opDescribeImageAttribute, @@ -8809,7 +9842,7 @@ func (c *EC2) DescribeImageAttributeRequest(input *DescribeImageAttributeInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeImageAttribute for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImageAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImageAttribute func (c *EC2) DescribeImageAttribute(input *DescribeImageAttributeInput) (*DescribeImageAttributeOutput, error) { req, out := c.DescribeImageAttributeRequest(input) return out, req.Send() @@ -8856,7 +9889,7 @@ const opDescribeImages = "DescribeImages" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImages func (c *EC2) DescribeImagesRequest(input *DescribeImagesInput) (req *request.Request, output *DescribeImagesOutput) { op := &request.Operation{ Name: opDescribeImages, @@ -8889,7 +9922,7 @@ func (c *EC2) DescribeImagesRequest(input *DescribeImagesInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeImages for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImages func (c *EC2) DescribeImages(input *DescribeImagesInput) (*DescribeImagesOutput, error) { req, out := c.DescribeImagesRequest(input) return out, req.Send() @@ -8936,7 +9969,7 @@ const opDescribeImportImageTasks = "DescribeImportImageTasks" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportImageTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportImageTasks func (c *EC2) DescribeImportImageTasksRequest(input *DescribeImportImageTasksInput) (req *request.Request, output *DescribeImportImageTasksOutput) { op := &request.Operation{ Name: opDescribeImportImageTasks, @@ -8964,7 +9997,7 @@ func (c *EC2) DescribeImportImageTasksRequest(input *DescribeImportImageTasksInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeImportImageTasks for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportImageTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportImageTasks func (c *EC2) DescribeImportImageTasks(input *DescribeImportImageTasksInput) (*DescribeImportImageTasksOutput, error) { req, out := c.DescribeImportImageTasksRequest(input) return out, req.Send() @@ -9011,7 +10044,7 @@ const opDescribeImportSnapshotTasks = "DescribeImportSnapshotTasks" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportSnapshotTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportSnapshotTasks func (c *EC2) DescribeImportSnapshotTasksRequest(input *DescribeImportSnapshotTasksInput) (req *request.Request, output *DescribeImportSnapshotTasksOutput) { op := &request.Operation{ Name: opDescribeImportSnapshotTasks, @@ -9038,7 +10071,7 @@ func (c *EC2) DescribeImportSnapshotTasksRequest(input *DescribeImportSnapshotTa // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeImportSnapshotTasks for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportSnapshotTasks +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportSnapshotTasks func (c *EC2) DescribeImportSnapshotTasks(input *DescribeImportSnapshotTasksInput) (*DescribeImportSnapshotTasksOutput, error) { req, out := c.DescribeImportSnapshotTasksRequest(input) return out, req.Send() @@ -9085,7 +10118,7 @@ const opDescribeInstanceAttribute = "DescribeInstanceAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceAttribute func (c *EC2) DescribeInstanceAttributeRequest(input *DescribeInstanceAttributeInput) (req *request.Request, output *DescribeInstanceAttributeOutput) { op := &request.Operation{ Name: opDescribeInstanceAttribute, @@ -9116,7 +10149,7 @@ func (c *EC2) DescribeInstanceAttributeRequest(input *DescribeInstanceAttributeI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeInstanceAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceAttribute func (c *EC2) DescribeInstanceAttribute(input *DescribeInstanceAttributeInput) (*DescribeInstanceAttributeOutput, error) { req, out := c.DescribeInstanceAttributeRequest(input) return out, req.Send() @@ -9138,6 +10171,98 @@ func (c *EC2) DescribeInstanceAttributeWithContext(ctx aws.Context, input *Descr return out, req.Send() } +const opDescribeInstanceCreditSpecifications = "DescribeInstanceCreditSpecifications" + +// DescribeInstanceCreditSpecificationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeInstanceCreditSpecifications operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeInstanceCreditSpecifications for more information on using the DescribeInstanceCreditSpecifications +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeInstanceCreditSpecificationsRequest method. +// req, resp := client.DescribeInstanceCreditSpecificationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceCreditSpecifications +func (c *EC2) DescribeInstanceCreditSpecificationsRequest(input *DescribeInstanceCreditSpecificationsInput) (req *request.Request, output *DescribeInstanceCreditSpecificationsOutput) { + op := &request.Operation{ + Name: opDescribeInstanceCreditSpecifications, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeInstanceCreditSpecificationsInput{} + } + + output = &DescribeInstanceCreditSpecificationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeInstanceCreditSpecifications API operation for Amazon Elastic Compute Cloud. 
+// +// Describes the credit option for CPU usage of one or more of your T2 instances. +// The credit options are standard and unlimited. +// +// If you do not specify an instance ID, Amazon EC2 returns only the T2 instances +// with the unlimited credit option. If you specify one or more instance IDs, +// Amazon EC2 returns the credit option (standard or unlimited) of those instances. +// If you specify an instance ID that is not valid, such as an instance that +// is not a T2 instance, an error is returned. +// +// Recently terminated instances might appear in the returned results. This +// interval is usually less than one hour. +// +// If an Availability Zone is experiencing a service disruption and you specify +// instance IDs in the affected zone, or do not specify any instance IDs at +// all, the call fails. If you specify only instance IDs in an unaffected zone, +// the call works normally. +// +// For more information, see T2 Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeInstanceCreditSpecifications for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceCreditSpecifications +func (c *EC2) DescribeInstanceCreditSpecifications(input *DescribeInstanceCreditSpecificationsInput) (*DescribeInstanceCreditSpecificationsOutput, error) { + req, out := c.DescribeInstanceCreditSpecificationsRequest(input) + return out, req.Send() +} + +// DescribeInstanceCreditSpecificationsWithContext is the same as DescribeInstanceCreditSpecifications with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeInstanceCreditSpecifications for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeInstanceCreditSpecificationsWithContext(ctx aws.Context, input *DescribeInstanceCreditSpecificationsInput, opts ...request.Option) (*DescribeInstanceCreditSpecificationsOutput, error) { + req, out := c.DescribeInstanceCreditSpecificationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
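
As a rough illustration of the behaviour described above, the sketch below calls DescribeInstanceCreditSpecifications with an empty input (so only T2 instances using the unlimited credit option should be returned) through the WithContext variant; the timeout value and error handling are assumptions, not part of the vendored code.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// The context must be non-nil; it is used for request cancellation.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// With no instance IDs specified, only T2 instances with the
	// unlimited credit option are expected in the response.
	out, err := svc.DescribeInstanceCreditSpecificationsWithContext(ctx,
		&ec2.DescribeInstanceCreditSpecificationsInput{})
	if err != nil {
		// Service API and SDK errors come back as awserr.Error.
		if aerr, ok := err.(awserr.Error); ok {
			log.Fatalf("%s: %s", aerr.Code(), aerr.Message())
		}
		log.Fatal(err)
	}
	fmt.Println(out)
}
```
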
+ return out, req.Send() +} + const opDescribeInstanceStatus = "DescribeInstanceStatus" // DescribeInstanceStatusRequest generates a "aws/request.Request" representing the @@ -9163,7 +10288,7 @@ const opDescribeInstanceStatus = "DescribeInstanceStatus" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceStatus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceStatus func (c *EC2) DescribeInstanceStatusRequest(input *DescribeInstanceStatusInput) (req *request.Request, output *DescribeInstanceStatusOutput) { op := &request.Operation{ Name: opDescribeInstanceStatus, @@ -9217,7 +10342,7 @@ func (c *EC2) DescribeInstanceStatusRequest(input *DescribeInstanceStatusInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeInstanceStatus for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceStatus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceStatus func (c *EC2) DescribeInstanceStatus(input *DescribeInstanceStatusInput) (*DescribeInstanceStatusOutput, error) { req, out := c.DescribeInstanceStatusRequest(input) return out, req.Send() @@ -9314,7 +10439,7 @@ const opDescribeInstances = "DescribeInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstances func (c *EC2) DescribeInstancesRequest(input *DescribeInstancesInput) (req *request.Request, output *DescribeInstancesOutput) { op := &request.Operation{ Name: opDescribeInstances, @@ -9362,7 +10487,7 @@ func (c *EC2) DescribeInstancesRequest(input *DescribeInstancesInput) (req *requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstances func (c *EC2) DescribeInstances(input *DescribeInstancesInput) (*DescribeInstancesOutput, error) { req, out := c.DescribeInstancesRequest(input) return out, req.Send() @@ -9459,7 +10584,7 @@ const opDescribeInternetGateways = "DescribeInternetGateways" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInternetGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInternetGateways func (c *EC2) DescribeInternetGatewaysRequest(input *DescribeInternetGatewaysInput) (req *request.Request, output *DescribeInternetGatewaysOutput) { op := &request.Operation{ Name: opDescribeInternetGateways, @@ -9486,7 +10611,7 @@ func (c *EC2) DescribeInternetGatewaysRequest(input *DescribeInternetGatewaysInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeInternetGateways for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInternetGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInternetGateways func (c *EC2) DescribeInternetGateways(input *DescribeInternetGatewaysInput) (*DescribeInternetGatewaysOutput, error) { req, out := c.DescribeInternetGatewaysRequest(input) return out, req.Send() @@ -9533,7 +10658,7 @@ const opDescribeKeyPairs = "DescribeKeyPairs" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeKeyPairs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeKeyPairs func (c *EC2) DescribeKeyPairsRequest(input *DescribeKeyPairsInput) (req *request.Request, output *DescribeKeyPairsOutput) { op := &request.Operation{ Name: opDescribeKeyPairs, @@ -9563,7 +10688,7 @@ func (c *EC2) DescribeKeyPairsRequest(input *DescribeKeyPairsInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeKeyPairs for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeKeyPairs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeKeyPairs func (c *EC2) DescribeKeyPairs(input *DescribeKeyPairsInput) (*DescribeKeyPairsOutput, error) { req, out := c.DescribeKeyPairsRequest(input) return out, req.Send() @@ -9585,6 +10710,155 @@ func (c *EC2) DescribeKeyPairsWithContext(ctx aws.Context, input *DescribeKeyPai return out, req.Send() } +const opDescribeLaunchTemplateVersions = "DescribeLaunchTemplateVersions" + +// DescribeLaunchTemplateVersionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeLaunchTemplateVersions operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeLaunchTemplateVersions for more information on using the DescribeLaunchTemplateVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeLaunchTemplateVersionsRequest method. +// req, resp := client.DescribeLaunchTemplateVersionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeLaunchTemplateVersions +func (c *EC2) DescribeLaunchTemplateVersionsRequest(input *DescribeLaunchTemplateVersionsInput) (req *request.Request, output *DescribeLaunchTemplateVersionsOutput) { + op := &request.Operation{ + Name: opDescribeLaunchTemplateVersions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeLaunchTemplateVersionsInput{} + } + + output = &DescribeLaunchTemplateVersionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeLaunchTemplateVersions API operation for Amazon Elastic Compute Cloud. +// +// Describes one or more versions of a specified launch template. You can describe +// all versions, individual versions, or a range of versions. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeLaunchTemplateVersions for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeLaunchTemplateVersions +func (c *EC2) DescribeLaunchTemplateVersions(input *DescribeLaunchTemplateVersionsInput) (*DescribeLaunchTemplateVersionsOutput, error) { + req, out := c.DescribeLaunchTemplateVersionsRequest(input) + return out, req.Send() +} + +// DescribeLaunchTemplateVersionsWithContext is the same as DescribeLaunchTemplateVersions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLaunchTemplateVersions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeLaunchTemplateVersionsWithContext(ctx aws.Context, input *DescribeLaunchTemplateVersionsInput, opts ...request.Option) (*DescribeLaunchTemplateVersionsOutput, error) { + req, out := c.DescribeLaunchTemplateVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeLaunchTemplates = "DescribeLaunchTemplates" + +// DescribeLaunchTemplatesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeLaunchTemplates operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeLaunchTemplates for more information on using the DescribeLaunchTemplates +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeLaunchTemplatesRequest method. +// req, resp := client.DescribeLaunchTemplatesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeLaunchTemplates +func (c *EC2) DescribeLaunchTemplatesRequest(input *DescribeLaunchTemplatesInput) (req *request.Request, output *DescribeLaunchTemplatesOutput) { + op := &request.Operation{ + Name: opDescribeLaunchTemplates, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeLaunchTemplatesInput{} + } + + output = &DescribeLaunchTemplatesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeLaunchTemplates API operation for Amazon Elastic Compute Cloud. +// +// Describes one or more launch templates. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeLaunchTemplates for usage and error information. 
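
A minimal sketch tying the two launch-template operations together: list the account's launch templates with an empty input, then fetch the latest version of each. The LaunchTemplates and LaunchTemplateId field names, the Versions parameter, and the "$Latest" version token are assumptions not visible in this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// An empty input describes the launch templates in the account.
	tmpls, err := svc.DescribeLaunchTemplates(&ec2.DescribeLaunchTemplatesInput{})
	if err != nil {
		log.Fatal(err)
	}

	for _, t := range tmpls.LaunchTemplates {
		// "$Latest" selects the most recent version of each template.
		vers, err := svc.DescribeLaunchTemplateVersions(&ec2.DescribeLaunchTemplateVersionsInput{
			LaunchTemplateId: t.LaunchTemplateId,
			Versions:         aws.StringSlice([]string{"$Latest"}),
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(vers)
	}
}
```
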
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeLaunchTemplates +func (c *EC2) DescribeLaunchTemplates(input *DescribeLaunchTemplatesInput) (*DescribeLaunchTemplatesOutput, error) { + req, out := c.DescribeLaunchTemplatesRequest(input) + return out, req.Send() +} + +// DescribeLaunchTemplatesWithContext is the same as DescribeLaunchTemplates with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLaunchTemplates for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeLaunchTemplatesWithContext(ctx aws.Context, input *DescribeLaunchTemplatesInput, opts ...request.Option) (*DescribeLaunchTemplatesOutput, error) { + req, out := c.DescribeLaunchTemplatesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeMovingAddresses = "DescribeMovingAddresses" // DescribeMovingAddressesRequest generates a "aws/request.Request" representing the @@ -9610,7 +10884,7 @@ const opDescribeMovingAddresses = "DescribeMovingAddresses" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMovingAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMovingAddresses func (c *EC2) DescribeMovingAddressesRequest(input *DescribeMovingAddressesInput) (req *request.Request, output *DescribeMovingAddressesOutput) { op := &request.Operation{ Name: opDescribeMovingAddresses, @@ -9639,7 +10913,7 @@ func (c *EC2) DescribeMovingAddressesRequest(input *DescribeMovingAddressesInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeMovingAddresses for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMovingAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMovingAddresses func (c *EC2) DescribeMovingAddresses(input *DescribeMovingAddressesInput) (*DescribeMovingAddressesOutput, error) { req, out := c.DescribeMovingAddressesRequest(input) return out, req.Send() @@ -9686,7 +10960,7 @@ const opDescribeNatGateways = "DescribeNatGateways" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNatGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNatGateways func (c *EC2) DescribeNatGatewaysRequest(input *DescribeNatGatewaysInput) (req *request.Request, output *DescribeNatGatewaysOutput) { op := &request.Operation{ Name: opDescribeNatGateways, @@ -9719,7 +10993,7 @@ func (c *EC2) DescribeNatGatewaysRequest(input *DescribeNatGatewaysInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeNatGateways for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNatGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNatGateways func (c *EC2) DescribeNatGateways(input *DescribeNatGatewaysInput) (*DescribeNatGatewaysOutput, error) { req, out := c.DescribeNatGatewaysRequest(input) return out, req.Send() @@ -9816,7 +11090,7 @@ const opDescribeNetworkAcls = "DescribeNetworkAcls" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkAcls +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkAcls func (c *EC2) DescribeNetworkAclsRequest(input *DescribeNetworkAclsInput) (req *request.Request, output *DescribeNetworkAclsOutput) { op := &request.Operation{ Name: opDescribeNetworkAcls, @@ -9846,7 +11120,7 @@ func (c *EC2) DescribeNetworkAclsRequest(input *DescribeNetworkAclsInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeNetworkAcls for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkAcls +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkAcls func (c *EC2) DescribeNetworkAcls(input *DescribeNetworkAclsInput) (*DescribeNetworkAclsOutput, error) { req, out := c.DescribeNetworkAclsRequest(input) return out, req.Send() @@ -9893,7 +11167,7 @@ const opDescribeNetworkInterfaceAttribute = "DescribeNetworkInterfaceAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaceAttribute func (c *EC2) DescribeNetworkInterfaceAttributeRequest(input *DescribeNetworkInterfaceAttributeInput) (req *request.Request, output *DescribeNetworkInterfaceAttributeOutput) { op := &request.Operation{ Name: opDescribeNetworkInterfaceAttribute, @@ -9921,7 +11195,7 @@ func (c *EC2) DescribeNetworkInterfaceAttributeRequest(input *DescribeNetworkInt // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeNetworkInterfaceAttribute for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaceAttribute func (c *EC2) DescribeNetworkInterfaceAttribute(input *DescribeNetworkInterfaceAttributeInput) (*DescribeNetworkInterfaceAttributeOutput, error) { req, out := c.DescribeNetworkInterfaceAttributeRequest(input) return out, req.Send() @@ -9968,7 +11242,7 @@ const opDescribeNetworkInterfacePermissions = "DescribeNetworkInterfacePermissio // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacePermissions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacePermissions func (c *EC2) DescribeNetworkInterfacePermissionsRequest(input *DescribeNetworkInterfacePermissionsInput) (req *request.Request, output *DescribeNetworkInterfacePermissionsOutput) { op := &request.Operation{ Name: opDescribeNetworkInterfacePermissions, @@ -9995,7 +11269,7 @@ func (c *EC2) DescribeNetworkInterfacePermissionsRequest(input *DescribeNetworkI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeNetworkInterfacePermissions for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacePermissions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacePermissions func (c *EC2) DescribeNetworkInterfacePermissions(input *DescribeNetworkInterfacePermissionsInput) (*DescribeNetworkInterfacePermissionsOutput, error) { req, out := c.DescribeNetworkInterfacePermissionsRequest(input) return out, req.Send() @@ -10042,7 +11316,7 @@ const opDescribeNetworkInterfaces = "DescribeNetworkInterfaces" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaces +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaces func (c *EC2) DescribeNetworkInterfacesRequest(input *DescribeNetworkInterfacesInput) (req *request.Request, output *DescribeNetworkInterfacesOutput) { op := &request.Operation{ Name: opDescribeNetworkInterfaces, @@ -10069,7 +11343,7 @@ func (c *EC2) DescribeNetworkInterfacesRequest(input *DescribeNetworkInterfacesI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeNetworkInterfaces for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaces +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaces func (c *EC2) DescribeNetworkInterfaces(input *DescribeNetworkInterfacesInput) (*DescribeNetworkInterfacesOutput, error) { req, out := c.DescribeNetworkInterfacesRequest(input) return out, req.Send() @@ -10116,7 +11390,7 @@ const opDescribePlacementGroups = "DescribePlacementGroups" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePlacementGroups +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePlacementGroups func (c *EC2) DescribePlacementGroupsRequest(input *DescribePlacementGroupsInput) (req *request.Request, output *DescribePlacementGroupsOutput) { op := &request.Operation{ Name: opDescribePlacementGroups, @@ -10135,8 +11409,8 @@ func (c *EC2) DescribePlacementGroupsRequest(input *DescribePlacementGroupsInput // DescribePlacementGroups API operation for Amazon Elastic Compute Cloud. // -// Describes one or more of your placement groups. For more information about -// placement groups and cluster instances, see Cluster Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using_cluster_computing.html) +// Describes one or more of your placement groups. For more information, see +// Placement Groups (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -10145,7 +11419,7 @@ func (c *EC2) DescribePlacementGroupsRequest(input *DescribePlacementGroupsInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribePlacementGroups for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePlacementGroups +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePlacementGroups func (c *EC2) DescribePlacementGroups(input *DescribePlacementGroupsInput) (*DescribePlacementGroupsOutput, error) { req, out := c.DescribePlacementGroupsRequest(input) return out, req.Send() @@ -10192,7 +11466,7 @@ const opDescribePrefixLists = "DescribePrefixLists" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePrefixLists +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePrefixLists func (c *EC2) DescribePrefixListsRequest(input *DescribePrefixListsInput) (req *request.Request, output *DescribePrefixListsOutput) { op := &request.Operation{ Name: opDescribePrefixLists, @@ -10215,7 +11489,7 @@ func (c *EC2) DescribePrefixListsRequest(input *DescribePrefixListsInput) (req * // the prefix list name and prefix list ID of the service and the IP address // range for the service. A prefix list ID is required for creating an outbound // security group rule that allows traffic from a VPC to access an AWS service -// through a VPC endpoint. +// through a gateway VPC endpoint. // // Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -10223,7 +11497,7 @@ func (c *EC2) DescribePrefixListsRequest(input *DescribePrefixListsInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribePrefixLists for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePrefixLists +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePrefixLists func (c *EC2) DescribePrefixLists(input *DescribePrefixListsInput) (*DescribePrefixListsOutput, error) { req, out := c.DescribePrefixListsRequest(input) return out, req.Send() @@ -10270,7 +11544,7 @@ const opDescribeRegions = "DescribeRegions" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRegions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRegions func (c *EC2) DescribeRegionsRequest(input *DescribeRegionsInput) (req *request.Request, output *DescribeRegionsOutput) { op := &request.Operation{ Name: opDescribeRegions, @@ -10300,7 +11574,7 @@ func (c *EC2) DescribeRegionsRequest(input *DescribeRegionsInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeRegions for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRegions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRegions func (c *EC2) DescribeRegions(input *DescribeRegionsInput) (*DescribeRegionsOutput, error) { req, out := c.DescribeRegionsRequest(input) return out, req.Send() @@ -10347,7 +11621,7 @@ const opDescribeReservedInstances = "DescribeReservedInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstances func (c *EC2) DescribeReservedInstancesRequest(input *DescribeReservedInstancesInput) (req *request.Request, output *DescribeReservedInstancesOutput) { op := &request.Operation{ Name: opDescribeReservedInstances, @@ -10377,7 +11651,7 @@ func (c *EC2) DescribeReservedInstancesRequest(input *DescribeReservedInstancesI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeReservedInstances for usage and error information. 
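//
// The DescribePrefixLists notes above mention using a prefix list ID in an outbound
// security group rule for a gateway VPC endpoint. A sketch of looking one up, assuming
// the same *ec2.EC2 client as in the earlier example; the Filters field and the
// "prefix-list-name" filter follow the usual EC2 filter convention and are assumptions,
// not definitions from this file.
//
//    out, err := svc.DescribePrefixLists(&ec2.DescribePrefixListsInput{
//        Filters: []*ec2.Filter{{
//            Name:   aws.String("prefix-list-name"),
//            Values: []*string{aws.String("com.amazonaws.us-east-1.s3")},
//        }},
//    })
//    if err == nil {
//        fmt.Println(out) // the returned prefix list ID can be referenced from an egress rule
//    }
//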
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstances func (c *EC2) DescribeReservedInstances(input *DescribeReservedInstancesInput) (*DescribeReservedInstancesOutput, error) { req, out := c.DescribeReservedInstancesRequest(input) return out, req.Send() @@ -10424,7 +11698,7 @@ const opDescribeReservedInstancesListings = "DescribeReservedInstancesListings" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesListings +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesListings func (c *EC2) DescribeReservedInstancesListingsRequest(input *DescribeReservedInstancesListingsInput) (req *request.Request, output *DescribeReservedInstancesListingsOutput) { op := &request.Operation{ Name: opDescribeReservedInstancesListings, @@ -10472,7 +11746,7 @@ func (c *EC2) DescribeReservedInstancesListingsRequest(input *DescribeReservedIn // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeReservedInstancesListings for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesListings +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesListings func (c *EC2) DescribeReservedInstancesListings(input *DescribeReservedInstancesListingsInput) (*DescribeReservedInstancesListingsOutput, error) { req, out := c.DescribeReservedInstancesListingsRequest(input) return out, req.Send() @@ -10519,7 +11793,7 @@ const opDescribeReservedInstancesModifications = "DescribeReservedInstancesModif // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesModifications +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesModifications func (c *EC2) DescribeReservedInstancesModificationsRequest(input *DescribeReservedInstancesModificationsInput) (req *request.Request, output *DescribeReservedInstancesModificationsOutput) { op := &request.Operation{ Name: opDescribeReservedInstancesModifications, @@ -10558,7 +11832,7 @@ func (c *EC2) DescribeReservedInstancesModificationsRequest(input *DescribeReser // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeReservedInstancesModifications for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesModifications +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesModifications func (c *EC2) DescribeReservedInstancesModifications(input *DescribeReservedInstancesModificationsInput) (*DescribeReservedInstancesModificationsOutput, error) { req, out := c.DescribeReservedInstancesModificationsRequest(input) return out, req.Send() @@ -10655,7 +11929,7 @@ const opDescribeReservedInstancesOfferings = "DescribeReservedInstancesOfferings // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesOfferings +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesOfferings func (c *EC2) DescribeReservedInstancesOfferingsRequest(input *DescribeReservedInstancesOfferingsInput) (req *request.Request, output *DescribeReservedInstancesOfferingsOutput) { op := &request.Operation{ Name: opDescribeReservedInstancesOfferings, @@ -10699,7 +11973,7 @@ func (c *EC2) DescribeReservedInstancesOfferingsRequest(input *DescribeReservedI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeReservedInstancesOfferings for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesOfferings +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesOfferings func (c *EC2) DescribeReservedInstancesOfferings(input *DescribeReservedInstancesOfferingsInput) (*DescribeReservedInstancesOfferingsOutput, error) { req, out := c.DescribeReservedInstancesOfferingsRequest(input) return out, req.Send() @@ -10796,7 +12070,7 @@ const opDescribeRouteTables = "DescribeRouteTables" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRouteTables +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRouteTables func (c *EC2) DescribeRouteTablesRequest(input *DescribeRouteTablesInput) (req *request.Request, output *DescribeRouteTablesOutput) { op := &request.Operation{ Name: opDescribeRouteTables, @@ -10831,7 +12105,7 @@ func (c *EC2) DescribeRouteTablesRequest(input *DescribeRouteTablesInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeRouteTables for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRouteTables +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRouteTables func (c *EC2) DescribeRouteTables(input *DescribeRouteTablesInput) (*DescribeRouteTablesOutput, error) { req, out := c.DescribeRouteTablesRequest(input) return out, req.Send() @@ -10878,7 +12152,7 @@ const opDescribeScheduledInstanceAvailability = "DescribeScheduledInstanceAvaila // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstanceAvailability +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstanceAvailability func (c *EC2) DescribeScheduledInstanceAvailabilityRequest(input *DescribeScheduledInstanceAvailabilityInput) (req *request.Request, output *DescribeScheduledInstanceAvailabilityOutput) { op := &request.Operation{ Name: opDescribeScheduledInstanceAvailability, @@ -10913,7 +12187,7 @@ func (c *EC2) DescribeScheduledInstanceAvailabilityRequest(input *DescribeSchedu // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeScheduledInstanceAvailability for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstanceAvailability +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstanceAvailability func (c *EC2) DescribeScheduledInstanceAvailability(input *DescribeScheduledInstanceAvailabilityInput) (*DescribeScheduledInstanceAvailabilityOutput, error) { req, out := c.DescribeScheduledInstanceAvailabilityRequest(input) return out, req.Send() @@ -10960,7 +12234,7 @@ const opDescribeScheduledInstances = "DescribeScheduledInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstances func (c *EC2) DescribeScheduledInstancesRequest(input *DescribeScheduledInstancesInput) (req *request.Request, output *DescribeScheduledInstancesOutput) { op := &request.Operation{ Name: opDescribeScheduledInstances, @@ -10987,7 +12261,7 @@ func (c *EC2) DescribeScheduledInstancesRequest(input *DescribeScheduledInstance // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeScheduledInstances for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstances func (c *EC2) DescribeScheduledInstances(input *DescribeScheduledInstancesInput) (*DescribeScheduledInstancesOutput, error) { req, out := c.DescribeScheduledInstancesRequest(input) return out, req.Send() @@ -11034,7 +12308,7 @@ const opDescribeSecurityGroupReferences = "DescribeSecurityGroupReferences" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupReferences +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupReferences func (c *EC2) DescribeSecurityGroupReferencesRequest(input *DescribeSecurityGroupReferencesInput) (req *request.Request, output *DescribeSecurityGroupReferencesOutput) { op := &request.Operation{ Name: opDescribeSecurityGroupReferences, @@ -11062,7 +12336,7 @@ func (c *EC2) DescribeSecurityGroupReferencesRequest(input *DescribeSecurityGrou // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSecurityGroupReferences for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupReferences +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupReferences func (c *EC2) DescribeSecurityGroupReferences(input *DescribeSecurityGroupReferencesInput) (*DescribeSecurityGroupReferencesOutput, error) { req, out := c.DescribeSecurityGroupReferencesRequest(input) return out, req.Send() @@ -11109,7 +12383,7 @@ const opDescribeSecurityGroups = "DescribeSecurityGroups" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroups +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroups func (c *EC2) DescribeSecurityGroupsRequest(input *DescribeSecurityGroupsInput) (req *request.Request, output *DescribeSecurityGroupsOutput) { op := &request.Operation{ Name: opDescribeSecurityGroups, @@ -11143,7 +12417,7 @@ func (c *EC2) DescribeSecurityGroupsRequest(input *DescribeSecurityGroupsInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSecurityGroups for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroups +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroups func (c *EC2) DescribeSecurityGroups(input *DescribeSecurityGroupsInput) (*DescribeSecurityGroupsOutput, error) { req, out := c.DescribeSecurityGroupsRequest(input) return out, req.Send() @@ -11190,7 +12464,7 @@ const opDescribeSnapshotAttribute = "DescribeSnapshotAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotAttribute func (c *EC2) DescribeSnapshotAttributeRequest(input *DescribeSnapshotAttributeInput) (req *request.Request, output *DescribeSnapshotAttributeOutput) { op := &request.Operation{ Name: opDescribeSnapshotAttribute, @@ -11221,7 +12495,7 @@ func (c *EC2) DescribeSnapshotAttributeRequest(input *DescribeSnapshotAttributeI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSnapshotAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotAttribute func (c *EC2) DescribeSnapshotAttribute(input *DescribeSnapshotAttributeInput) (*DescribeSnapshotAttributeOutput, error) { req, out := c.DescribeSnapshotAttributeRequest(input) return out, req.Send() @@ -11268,7 +12542,7 @@ const opDescribeSnapshots = "DescribeSnapshots" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshots +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshots func (c *EC2) DescribeSnapshotsRequest(input *DescribeSnapshotsInput) (req *request.Request, output *DescribeSnapshotsOutput) { op := &request.Operation{ Name: opDescribeSnapshots, @@ -11346,7 +12620,7 @@ func (c *EC2) DescribeSnapshotsRequest(input *DescribeSnapshotsInput) (req *requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSnapshots for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshots +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshots func (c *EC2) DescribeSnapshots(input *DescribeSnapshotsInput) (*DescribeSnapshotsOutput, error) { req, out := c.DescribeSnapshotsRequest(input) return out, req.Send() @@ -11443,7 +12717,7 @@ const opDescribeSpotDatafeedSubscription = "DescribeSpotDatafeedSubscription" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotDatafeedSubscription +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotDatafeedSubscription func (c *EC2) DescribeSpotDatafeedSubscriptionRequest(input *DescribeSpotDatafeedSubscriptionInput) (req *request.Request, output *DescribeSpotDatafeedSubscriptionOutput) { op := &request.Operation{ Name: opDescribeSpotDatafeedSubscription, @@ -11462,7 +12736,7 @@ func (c *EC2) DescribeSpotDatafeedSubscriptionRequest(input *DescribeSpotDatafee // DescribeSpotDatafeedSubscription API operation for Amazon Elastic Compute Cloud. // -// Describes the data feed for Spot instances. For more information, see Spot +// Describes the data feed for Spot Instances. 
For more information, see Spot // Instance Data Feed (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-data-feeds.html) // in the Amazon Elastic Compute Cloud User Guide. // @@ -11472,7 +12746,7 @@ func (c *EC2) DescribeSpotDatafeedSubscriptionRequest(input *DescribeSpotDatafee // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSpotDatafeedSubscription for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotDatafeedSubscription +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotDatafeedSubscription func (c *EC2) DescribeSpotDatafeedSubscription(input *DescribeSpotDatafeedSubscriptionInput) (*DescribeSpotDatafeedSubscriptionOutput, error) { req, out := c.DescribeSpotDatafeedSubscriptionRequest(input) return out, req.Send() @@ -11519,7 +12793,7 @@ const opDescribeSpotFleetInstances = "DescribeSpotFleetInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetInstances func (c *EC2) DescribeSpotFleetInstancesRequest(input *DescribeSpotFleetInstancesInput) (req *request.Request, output *DescribeSpotFleetInstancesOutput) { op := &request.Operation{ Name: opDescribeSpotFleetInstances, @@ -11538,7 +12812,7 @@ func (c *EC2) DescribeSpotFleetInstancesRequest(input *DescribeSpotFleetInstance // DescribeSpotFleetInstances API operation for Amazon Elastic Compute Cloud. // -// Describes the running instances for the specified Spot fleet. +// Describes the running instances for the specified Spot Fleet. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -11546,7 +12820,7 @@ func (c *EC2) DescribeSpotFleetInstancesRequest(input *DescribeSpotFleetInstance // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSpotFleetInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetInstances func (c *EC2) DescribeSpotFleetInstances(input *DescribeSpotFleetInstancesInput) (*DescribeSpotFleetInstancesOutput, error) { req, out := c.DescribeSpotFleetInstancesRequest(input) return out, req.Send() @@ -11593,7 +12867,7 @@ const opDescribeSpotFleetRequestHistory = "DescribeSpotFleetRequestHistory" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestHistory +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestHistory func (c *EC2) DescribeSpotFleetRequestHistoryRequest(input *DescribeSpotFleetRequestHistoryInput) (req *request.Request, output *DescribeSpotFleetRequestHistoryOutput) { op := &request.Operation{ Name: opDescribeSpotFleetRequestHistory, @@ -11612,10 +12886,10 @@ func (c *EC2) DescribeSpotFleetRequestHistoryRequest(input *DescribeSpotFleetReq // DescribeSpotFleetRequestHistory API operation for Amazon Elastic Compute Cloud. // -// Describes the events for the specified Spot fleet request during the specified +// Describes the events for the specified Spot Fleet request during the specified // time. 
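//
// A sketch of the two Spot Fleet calls above, assuming the same *ec2.EC2 client as in
// the earlier examples plus the time package; the SpotFleetRequestId and StartTime
// fields and the example request ID are illustrative assumptions.
//
//    instances, err := svc.DescribeSpotFleetInstances(&ec2.DescribeSpotFleetInstancesInput{
//        SpotFleetRequestId: aws.String("sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE"),
//    })
//    if err == nil {
//        fmt.Println(instances)
//    }
//
//    history, err := svc.DescribeSpotFleetRequestHistory(&ec2.DescribeSpotFleetRequestHistoryInput{
//        SpotFleetRequestId: aws.String("sfr-73fbd2ce-aa30-494c-8788-1cee4EXAMPLE"),
//        StartTime:          aws.Time(time.Now().Add(-24 * time.Hour)),
//    })
//    if err == nil {
//        fmt.Println(history)
//    }
//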
// -// Spot fleet events are delayed by up to 30 seconds before they can be described. +// Spot Fleet events are delayed by up to 30 seconds before they can be described. // This ensures that you can query by the last evaluated time and not miss a // recorded event. // @@ -11625,7 +12899,7 @@ func (c *EC2) DescribeSpotFleetRequestHistoryRequest(input *DescribeSpotFleetReq // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSpotFleetRequestHistory for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestHistory +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestHistory func (c *EC2) DescribeSpotFleetRequestHistory(input *DescribeSpotFleetRequestHistoryInput) (*DescribeSpotFleetRequestHistoryOutput, error) { req, out := c.DescribeSpotFleetRequestHistoryRequest(input) return out, req.Send() @@ -11672,7 +12946,7 @@ const opDescribeSpotFleetRequests = "DescribeSpotFleetRequests" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequests func (c *EC2) DescribeSpotFleetRequestsRequest(input *DescribeSpotFleetRequestsInput) (req *request.Request, output *DescribeSpotFleetRequestsOutput) { op := &request.Operation{ Name: opDescribeSpotFleetRequests, @@ -11697,9 +12971,9 @@ func (c *EC2) DescribeSpotFleetRequestsRequest(input *DescribeSpotFleetRequestsI // DescribeSpotFleetRequests API operation for Amazon Elastic Compute Cloud. // -// Describes your Spot fleet requests. +// Describes your Spot Fleet requests. // -// Spot fleet requests are deleted 48 hours after they are canceled and their +// Spot Fleet requests are deleted 48 hours after they are canceled and their // instances are terminated. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -11708,7 +12982,7 @@ func (c *EC2) DescribeSpotFleetRequestsRequest(input *DescribeSpotFleetRequestsI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSpotFleetRequests for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequests func (c *EC2) DescribeSpotFleetRequests(input *DescribeSpotFleetRequestsInput) (*DescribeSpotFleetRequestsOutput, error) { req, out := c.DescribeSpotFleetRequestsRequest(input) return out, req.Send() @@ -11805,7 +13079,7 @@ const opDescribeSpotInstanceRequests = "DescribeSpotInstanceRequests" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotInstanceRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotInstanceRequests func (c *EC2) DescribeSpotInstanceRequestsRequest(input *DescribeSpotInstanceRequestsInput) (req *request.Request, output *DescribeSpotInstanceRequestsOutput) { op := &request.Operation{ Name: opDescribeSpotInstanceRequests, @@ -11824,20 +13098,19 @@ func (c *EC2) DescribeSpotInstanceRequestsRequest(input *DescribeSpotInstanceReq // DescribeSpotInstanceRequests API operation for Amazon Elastic Compute Cloud. // -// Describes the Spot instance requests that belong to your account. 
Spot instances -// are instances that Amazon EC2 launches when the bid price that you specify -// exceeds the current Spot price. Amazon EC2 periodically sets the Spot price -// based on available Spot instance capacity and current Spot instance requests. -// For more information, see Spot Instance Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) -// in the Amazon Elastic Compute Cloud User Guide. +// Describes the Spot Instance requests that belong to your account. Spot Instances +// are instances that Amazon EC2 launches when the Spot price that you specify +// exceeds the current Spot price. For more information, see Spot Instance Requests +// (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) in +// the Amazon Elastic Compute Cloud User Guide. // -// You can use DescribeSpotInstanceRequests to find a running Spot instance -// by examining the response. If the status of the Spot instance is fulfilled, +// You can use DescribeSpotInstanceRequests to find a running Spot Instance +// by examining the response. If the status of the Spot Instance is fulfilled, // the instance ID appears in the response and contains the identifier of the // instance. Alternatively, you can use DescribeInstances with a filter to look // for instances where the instance lifecycle is spot. // -// Spot instance requests are deleted 4 hours after they are canceled and their +// Spot Instance requests are deleted 4 hours after they are canceled and their // instances are terminated. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -11846,7 +13119,7 @@ func (c *EC2) DescribeSpotInstanceRequestsRequest(input *DescribeSpotInstanceReq // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSpotInstanceRequests for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotInstanceRequests +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotInstanceRequests func (c *EC2) DescribeSpotInstanceRequests(input *DescribeSpotInstanceRequestsInput) (*DescribeSpotInstanceRequestsOutput, error) { req, out := c.DescribeSpotInstanceRequestsRequest(input) return out, req.Send() @@ -11893,7 +13166,7 @@ const opDescribeSpotPriceHistory = "DescribeSpotPriceHistory" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotPriceHistory +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotPriceHistory func (c *EC2) DescribeSpotPriceHistoryRequest(input *DescribeSpotPriceHistoryInput) (req *request.Request, output *DescribeSpotPriceHistoryOutput) { op := &request.Operation{ Name: opDescribeSpotPriceHistory, @@ -11933,7 +13206,7 @@ func (c *EC2) DescribeSpotPriceHistoryRequest(input *DescribeSpotPriceHistoryInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSpotPriceHistory for usage and error information. 
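//
// The DescribeSpotInstanceRequests notes above give two ways to find running Spot
// Instances; a sketch of both, assuming the same *ec2.EC2 client as in the earlier
// examples ("instance-lifecycle" is the standard DescribeInstances filter for this):
//
//    // Inspect the Spot Instance requests themselves.
//    reqs, err := svc.DescribeSpotInstanceRequests(&ec2.DescribeSpotInstanceRequestsInput{})
//    if err == nil {
//        fmt.Println(reqs)
//    }
//
//    // Or filter DescribeInstances on the instance lifecycle.
//    spot, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{
//        Filters: []*ec2.Filter{{
//            Name:   aws.String("instance-lifecycle"),
//            Values: []*string{aws.String("spot")},
//        }},
//    })
//    if err == nil {
//        fmt.Println(spot)
//    }
//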
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotPriceHistory +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotPriceHistory func (c *EC2) DescribeSpotPriceHistory(input *DescribeSpotPriceHistoryInput) (*DescribeSpotPriceHistoryOutput, error) { req, out := c.DescribeSpotPriceHistoryRequest(input) return out, req.Send() @@ -12030,7 +13303,7 @@ const opDescribeStaleSecurityGroups = "DescribeStaleSecurityGroups" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeStaleSecurityGroups +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeStaleSecurityGroups func (c *EC2) DescribeStaleSecurityGroupsRequest(input *DescribeStaleSecurityGroupsInput) (req *request.Request, output *DescribeStaleSecurityGroupsOutput) { op := &request.Operation{ Name: opDescribeStaleSecurityGroups, @@ -12060,7 +13333,7 @@ func (c *EC2) DescribeStaleSecurityGroupsRequest(input *DescribeStaleSecurityGro // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeStaleSecurityGroups for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeStaleSecurityGroups +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeStaleSecurityGroups func (c *EC2) DescribeStaleSecurityGroups(input *DescribeStaleSecurityGroupsInput) (*DescribeStaleSecurityGroupsOutput, error) { req, out := c.DescribeStaleSecurityGroupsRequest(input) return out, req.Send() @@ -12107,7 +13380,7 @@ const opDescribeSubnets = "DescribeSubnets" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSubnets +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSubnets func (c *EC2) DescribeSubnetsRequest(input *DescribeSubnetsInput) (req *request.Request, output *DescribeSubnetsOutput) { op := &request.Operation{ Name: opDescribeSubnets, @@ -12137,7 +13410,7 @@ func (c *EC2) DescribeSubnetsRequest(input *DescribeSubnetsInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeSubnets for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSubnets +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSubnets func (c *EC2) DescribeSubnets(input *DescribeSubnetsInput) (*DescribeSubnetsOutput, error) { req, out := c.DescribeSubnetsRequest(input) return out, req.Send() @@ -12184,7 +13457,7 @@ const opDescribeTags = "DescribeTags" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeTags func (c *EC2) DescribeTagsRequest(input *DescribeTagsInput) (req *request.Request, output *DescribeTagsOutput) { op := &request.Operation{ Name: opDescribeTags, @@ -12220,7 +13493,7 @@ func (c *EC2) DescribeTagsRequest(input *DescribeTagsInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeTags for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeTags +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeTags func (c *EC2) DescribeTags(input *DescribeTagsInput) (*DescribeTagsOutput, error) { req, out := c.DescribeTagsRequest(input) return out, req.Send() @@ -12317,7 +13590,7 @@ const opDescribeVolumeAttribute = "DescribeVolumeAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeAttribute func (c *EC2) DescribeVolumeAttributeRequest(input *DescribeVolumeAttributeInput) (req *request.Request, output *DescribeVolumeAttributeOutput) { op := &request.Operation{ Name: opDescribeVolumeAttribute, @@ -12348,7 +13621,7 @@ func (c *EC2) DescribeVolumeAttributeRequest(input *DescribeVolumeAttributeInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVolumeAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeAttribute func (c *EC2) DescribeVolumeAttribute(input *DescribeVolumeAttributeInput) (*DescribeVolumeAttributeOutput, error) { req, out := c.DescribeVolumeAttributeRequest(input) return out, req.Send() @@ -12395,7 +13668,7 @@ const opDescribeVolumeStatus = "DescribeVolumeStatus" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeStatus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeStatus func (c *EC2) DescribeVolumeStatusRequest(input *DescribeVolumeStatusInput) (req *request.Request, output *DescribeVolumeStatusOutput) { op := &request.Operation{ Name: opDescribeVolumeStatus, @@ -12462,7 +13735,7 @@ func (c *EC2) DescribeVolumeStatusRequest(input *DescribeVolumeStatusInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVolumeStatus for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeStatus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeStatus func (c *EC2) DescribeVolumeStatus(input *DescribeVolumeStatusInput) (*DescribeVolumeStatusOutput, error) { req, out := c.DescribeVolumeStatusRequest(input) return out, req.Send() @@ -12559,7 +13832,7 @@ const opDescribeVolumes = "DescribeVolumes" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumes +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumes func (c *EC2) DescribeVolumesRequest(input *DescribeVolumesInput) (req *request.Request, output *DescribeVolumesOutput) { op := &request.Operation{ Name: opDescribeVolumes, @@ -12602,7 +13875,7 @@ func (c *EC2) DescribeVolumesRequest(input *DescribeVolumesInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVolumes for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumes +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumes func (c *EC2) DescribeVolumes(input *DescribeVolumesInput) (*DescribeVolumesOutput, error) { req, out := c.DescribeVolumesRequest(input) return out, req.Send() @@ -12699,7 +13972,7 @@ const opDescribeVolumesModifications = "DescribeVolumesModifications" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesModifications +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesModifications func (c *EC2) DescribeVolumesModificationsRequest(input *DescribeVolumesModificationsInput) (req *request.Request, output *DescribeVolumesModificationsOutput) { op := &request.Operation{ Name: opDescribeVolumesModifications, @@ -12738,7 +14011,7 @@ func (c *EC2) DescribeVolumesModificationsRequest(input *DescribeVolumesModifica // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVolumesModifications for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesModifications +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesModifications func (c *EC2) DescribeVolumesModifications(input *DescribeVolumesModificationsInput) (*DescribeVolumesModificationsOutput, error) { req, out := c.DescribeVolumesModificationsRequest(input) return out, req.Send() @@ -12785,7 +14058,7 @@ const opDescribeVpcAttribute = "DescribeVpcAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcAttribute func (c *EC2) DescribeVpcAttributeRequest(input *DescribeVpcAttributeInput) (req *request.Request, output *DescribeVpcAttributeOutput) { op := &request.Operation{ Name: opDescribeVpcAttribute, @@ -12813,7 +14086,7 @@ func (c *EC2) DescribeVpcAttributeRequest(input *DescribeVpcAttributeInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpcAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcAttribute func (c *EC2) DescribeVpcAttribute(input *DescribeVpcAttributeInput) (*DescribeVpcAttributeOutput, error) { req, out := c.DescribeVpcAttributeRequest(input) return out, req.Send() @@ -12860,7 +14133,7 @@ const opDescribeVpcClassicLink = "DescribeVpcClassicLink" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLink +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLink func (c *EC2) DescribeVpcClassicLinkRequest(input *DescribeVpcClassicLinkInput) (req *request.Request, output *DescribeVpcClassicLinkOutput) { op := &request.Operation{ Name: opDescribeVpcClassicLink, @@ -12887,7 +14160,7 @@ func (c *EC2) DescribeVpcClassicLinkRequest(input *DescribeVpcClassicLinkInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpcClassicLink for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLink +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLink func (c *EC2) DescribeVpcClassicLink(input *DescribeVpcClassicLinkInput) (*DescribeVpcClassicLinkOutput, error) { req, out := c.DescribeVpcClassicLinkRequest(input) return out, req.Send() @@ -12934,7 +14207,7 @@ const opDescribeVpcClassicLinkDnsSupport = "DescribeVpcClassicLinkDnsSupport" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkDnsSupport +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkDnsSupport func (c *EC2) DescribeVpcClassicLinkDnsSupportRequest(input *DescribeVpcClassicLinkDnsSupportInput) (req *request.Request, output *DescribeVpcClassicLinkDnsSupportOutput) { op := &request.Operation{ Name: opDescribeVpcClassicLinkDnsSupport, @@ -12967,7 +14240,7 @@ func (c *EC2) DescribeVpcClassicLinkDnsSupportRequest(input *DescribeVpcClassicL // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpcClassicLinkDnsSupport for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkDnsSupport +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkDnsSupport func (c *EC2) DescribeVpcClassicLinkDnsSupport(input *DescribeVpcClassicLinkDnsSupportInput) (*DescribeVpcClassicLinkDnsSupportOutput, error) { req, out := c.DescribeVpcClassicLinkDnsSupportRequest(input) return out, req.Send() @@ -12989,6 +14262,305 @@ func (c *EC2) DescribeVpcClassicLinkDnsSupportWithContext(ctx aws.Context, input return out, req.Send() } +const opDescribeVpcEndpointConnectionNotifications = "DescribeVpcEndpointConnectionNotifications" + +// DescribeVpcEndpointConnectionNotificationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeVpcEndpointConnectionNotifications operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeVpcEndpointConnectionNotifications for more information on using the DescribeVpcEndpointConnectionNotifications +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeVpcEndpointConnectionNotificationsRequest method. 
+// req, resp := client.DescribeVpcEndpointConnectionNotificationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointConnectionNotifications +func (c *EC2) DescribeVpcEndpointConnectionNotificationsRequest(input *DescribeVpcEndpointConnectionNotificationsInput) (req *request.Request, output *DescribeVpcEndpointConnectionNotificationsOutput) { + op := &request.Operation{ + Name: opDescribeVpcEndpointConnectionNotifications, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeVpcEndpointConnectionNotificationsInput{} + } + + output = &DescribeVpcEndpointConnectionNotificationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeVpcEndpointConnectionNotifications API operation for Amazon Elastic Compute Cloud. +// +// Describes the connection notifications for VPC endpoints and VPC endpoint +// services. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeVpcEndpointConnectionNotifications for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointConnectionNotifications +func (c *EC2) DescribeVpcEndpointConnectionNotifications(input *DescribeVpcEndpointConnectionNotificationsInput) (*DescribeVpcEndpointConnectionNotificationsOutput, error) { + req, out := c.DescribeVpcEndpointConnectionNotificationsRequest(input) + return out, req.Send() +} + +// DescribeVpcEndpointConnectionNotificationsWithContext is the same as DescribeVpcEndpointConnectionNotifications with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeVpcEndpointConnectionNotifications for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeVpcEndpointConnectionNotificationsWithContext(ctx aws.Context, input *DescribeVpcEndpointConnectionNotificationsInput, opts ...request.Option) (*DescribeVpcEndpointConnectionNotificationsOutput, error) { + req, out := c.DescribeVpcEndpointConnectionNotificationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeVpcEndpointConnections = "DescribeVpcEndpointConnections" + +// DescribeVpcEndpointConnectionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeVpcEndpointConnections operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeVpcEndpointConnections for more information on using the DescribeVpcEndpointConnections +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. 
Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeVpcEndpointConnectionsRequest method. +// req, resp := client.DescribeVpcEndpointConnectionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointConnections +func (c *EC2) DescribeVpcEndpointConnectionsRequest(input *DescribeVpcEndpointConnectionsInput) (req *request.Request, output *DescribeVpcEndpointConnectionsOutput) { + op := &request.Operation{ + Name: opDescribeVpcEndpointConnections, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeVpcEndpointConnectionsInput{} + } + + output = &DescribeVpcEndpointConnectionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeVpcEndpointConnections API operation for Amazon Elastic Compute Cloud. +// +// Describes the VPC endpoint connections to your VPC endpoint services, including +// any endpoints that are pending your acceptance. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeVpcEndpointConnections for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointConnections +func (c *EC2) DescribeVpcEndpointConnections(input *DescribeVpcEndpointConnectionsInput) (*DescribeVpcEndpointConnectionsOutput, error) { + req, out := c.DescribeVpcEndpointConnectionsRequest(input) + return out, req.Send() +} + +// DescribeVpcEndpointConnectionsWithContext is the same as DescribeVpcEndpointConnections with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeVpcEndpointConnections for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeVpcEndpointConnectionsWithContext(ctx aws.Context, input *DescribeVpcEndpointConnectionsInput, opts ...request.Option) (*DescribeVpcEndpointConnectionsOutput, error) { + req, out := c.DescribeVpcEndpointConnectionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeVpcEndpointServiceConfigurations = "DescribeVpcEndpointServiceConfigurations" + +// DescribeVpcEndpointServiceConfigurationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeVpcEndpointServiceConfigurations operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeVpcEndpointServiceConfigurations for more information on using the DescribeVpcEndpointServiceConfigurations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DescribeVpcEndpointServiceConfigurationsRequest method. +// req, resp := client.DescribeVpcEndpointServiceConfigurationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServiceConfigurations +func (c *EC2) DescribeVpcEndpointServiceConfigurationsRequest(input *DescribeVpcEndpointServiceConfigurationsInput) (req *request.Request, output *DescribeVpcEndpointServiceConfigurationsOutput) { + op := &request.Operation{ + Name: opDescribeVpcEndpointServiceConfigurations, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeVpcEndpointServiceConfigurationsInput{} + } + + output = &DescribeVpcEndpointServiceConfigurationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeVpcEndpointServiceConfigurations API operation for Amazon Elastic Compute Cloud. +// +// Describes the VPC endpoint service configurations in your account (your services). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeVpcEndpointServiceConfigurations for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServiceConfigurations +func (c *EC2) DescribeVpcEndpointServiceConfigurations(input *DescribeVpcEndpointServiceConfigurationsInput) (*DescribeVpcEndpointServiceConfigurationsOutput, error) { + req, out := c.DescribeVpcEndpointServiceConfigurationsRequest(input) + return out, req.Send() +} + +// DescribeVpcEndpointServiceConfigurationsWithContext is the same as DescribeVpcEndpointServiceConfigurations with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeVpcEndpointServiceConfigurations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeVpcEndpointServiceConfigurationsWithContext(ctx aws.Context, input *DescribeVpcEndpointServiceConfigurationsInput, opts ...request.Option) (*DescribeVpcEndpointServiceConfigurationsOutput, error) { + req, out := c.DescribeVpcEndpointServiceConfigurationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeVpcEndpointServicePermissions = "DescribeVpcEndpointServicePermissions" + +// DescribeVpcEndpointServicePermissionsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeVpcEndpointServicePermissions operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeVpcEndpointServicePermissions for more information on using the DescribeVpcEndpointServicePermissions +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeVpcEndpointServicePermissionsRequest method. +// req, resp := client.DescribeVpcEndpointServicePermissionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServicePermissions +func (c *EC2) DescribeVpcEndpointServicePermissionsRequest(input *DescribeVpcEndpointServicePermissionsInput) (req *request.Request, output *DescribeVpcEndpointServicePermissionsOutput) { + op := &request.Operation{ + Name: opDescribeVpcEndpointServicePermissions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeVpcEndpointServicePermissionsInput{} + } + + output = &DescribeVpcEndpointServicePermissionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeVpcEndpointServicePermissions API operation for Amazon Elastic Compute Cloud. +// +// Describes the principals (service consumers) that are permitted to discover +// your VPC endpoint service. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation DescribeVpcEndpointServicePermissions for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServicePermissions +func (c *EC2) DescribeVpcEndpointServicePermissions(input *DescribeVpcEndpointServicePermissionsInput) (*DescribeVpcEndpointServicePermissionsOutput, error) { + req, out := c.DescribeVpcEndpointServicePermissionsRequest(input) + return out, req.Send() +} + +// DescribeVpcEndpointServicePermissionsWithContext is the same as DescribeVpcEndpointServicePermissions with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeVpcEndpointServicePermissions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) DescribeVpcEndpointServicePermissionsWithContext(ctx aws.Context, input *DescribeVpcEndpointServicePermissionsInput, opts ...request.Option) (*DescribeVpcEndpointServicePermissionsOutput, error) { + req, out := c.DescribeVpcEndpointServicePermissionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opDescribeVpcEndpointServices = "DescribeVpcEndpointServices" // DescribeVpcEndpointServicesRequest generates a "aws/request.Request" representing the @@ -13014,7 +14586,7 @@ const opDescribeVpcEndpointServices = "DescribeVpcEndpointServices" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServices +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServices func (c *EC2) DescribeVpcEndpointServicesRequest(input *DescribeVpcEndpointServicesInput) (req *request.Request, output *DescribeVpcEndpointServicesOutput) { op := &request.Operation{ Name: opDescribeVpcEndpointServices, @@ -13033,8 +14605,7 @@ func (c *EC2) DescribeVpcEndpointServicesRequest(input *DescribeVpcEndpointServi // DescribeVpcEndpointServices API operation for Amazon Elastic Compute Cloud. // -// Describes all supported AWS services that can be specified when creating -// a VPC endpoint. +// Describes available services to which you can create a VPC endpoint. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -13042,7 +14613,7 @@ func (c *EC2) DescribeVpcEndpointServicesRequest(input *DescribeVpcEndpointServi // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpcEndpointServices for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServices +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServices func (c *EC2) DescribeVpcEndpointServices(input *DescribeVpcEndpointServicesInput) (*DescribeVpcEndpointServicesOutput, error) { req, out := c.DescribeVpcEndpointServicesRequest(input) return out, req.Send() @@ -13089,7 +14660,7 @@ const opDescribeVpcEndpoints = "DescribeVpcEndpoints" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpoints +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpoints func (c *EC2) DescribeVpcEndpointsRequest(input *DescribeVpcEndpointsInput) (req *request.Request, output *DescribeVpcEndpointsOutput) { op := &request.Operation{ Name: opDescribeVpcEndpoints, @@ -13116,7 +14687,7 @@ func (c *EC2) DescribeVpcEndpointsRequest(input *DescribeVpcEndpointsInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpcEndpoints for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpoints +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpoints func (c *EC2) DescribeVpcEndpoints(input *DescribeVpcEndpointsInput) (*DescribeVpcEndpointsOutput, error) { req, out := c.DescribeVpcEndpointsRequest(input) return out, req.Send() @@ -13163,7 +14734,7 @@ const opDescribeVpcPeeringConnections = "DescribeVpcPeeringConnections" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcPeeringConnections +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcPeeringConnections func (c *EC2) DescribeVpcPeeringConnectionsRequest(input *DescribeVpcPeeringConnectionsInput) (req *request.Request, output *DescribeVpcPeeringConnectionsOutput) { op := &request.Operation{ Name: opDescribeVpcPeeringConnections, @@ -13190,7 +14761,7 @@ func (c *EC2) DescribeVpcPeeringConnectionsRequest(input *DescribeVpcPeeringConn // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpcPeeringConnections for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcPeeringConnections +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcPeeringConnections func (c *EC2) DescribeVpcPeeringConnections(input *DescribeVpcPeeringConnectionsInput) (*DescribeVpcPeeringConnectionsOutput, error) { req, out := c.DescribeVpcPeeringConnectionsRequest(input) return out, req.Send() @@ -13237,7 +14808,7 @@ const opDescribeVpcs = "DescribeVpcs" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcs func (c *EC2) DescribeVpcsRequest(input *DescribeVpcsInput) (req *request.Request, output *DescribeVpcsOutput) { op := &request.Operation{ Name: opDescribeVpcs, @@ -13264,7 +14835,7 @@ func (c *EC2) DescribeVpcsRequest(input *DescribeVpcsInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpcs for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcs +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcs func (c *EC2) DescribeVpcs(input *DescribeVpcsInput) (*DescribeVpcsOutput, error) { req, out := c.DescribeVpcsRequest(input) return out, req.Send() @@ -13311,7 +14882,7 @@ const opDescribeVpnConnections = "DescribeVpnConnections" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnConnections +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnConnections func (c *EC2) DescribeVpnConnectionsRequest(input *DescribeVpnConnectionsInput) (req *request.Request, output *DescribeVpnConnectionsOutput) { op := &request.Operation{ Name: opDescribeVpnConnections, @@ -13332,9 +14903,9 @@ func (c *EC2) DescribeVpnConnectionsRequest(input *DescribeVpnConnectionsInput) // // Describes one or more of your VPN connections. // -// For more information about VPN connections, see Adding a Hardware Virtual -// Private Gateway to Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) -// in the Amazon Virtual Private Cloud User Guide. 
+// For more information about VPN connections, see AWS Managed VPN Connections +// (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) in the +// Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -13342,7 +14913,7 @@ func (c *EC2) DescribeVpnConnectionsRequest(input *DescribeVpnConnectionsInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpnConnections for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnConnections +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnConnections func (c *EC2) DescribeVpnConnections(input *DescribeVpnConnectionsInput) (*DescribeVpnConnectionsOutput, error) { req, out := c.DescribeVpnConnectionsRequest(input) return out, req.Send() @@ -13389,7 +14960,7 @@ const opDescribeVpnGateways = "DescribeVpnGateways" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnGateways func (c *EC2) DescribeVpnGatewaysRequest(input *DescribeVpnGatewaysInput) (req *request.Request, output *DescribeVpnGatewaysOutput) { op := &request.Operation{ Name: opDescribeVpnGateways, @@ -13410,8 +14981,8 @@ func (c *EC2) DescribeVpnGatewaysRequest(input *DescribeVpnGatewaysInput) (req * // // Describes one or more of your virtual private gateways. // -// For more information about virtual private gateways, see Adding an IPsec -// Hardware VPN to Your VPC (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) +// For more information about virtual private gateways, see AWS Managed VPN +// Connections (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) // in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -13420,7 +14991,7 @@ func (c *EC2) DescribeVpnGatewaysRequest(input *DescribeVpnGatewaysInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DescribeVpnGateways for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnGateways +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnGateways func (c *EC2) DescribeVpnGateways(input *DescribeVpnGatewaysInput) (*DescribeVpnGatewaysOutput, error) { req, out := c.DescribeVpnGatewaysRequest(input) return out, req.Send() @@ -13467,7 +15038,7 @@ const opDetachClassicLinkVpc = "DetachClassicLinkVpc" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachClassicLinkVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachClassicLinkVpc func (c *EC2) DetachClassicLinkVpcRequest(input *DetachClassicLinkVpcInput) (req *request.Request, output *DetachClassicLinkVpcOutput) { op := &request.Operation{ Name: opDetachClassicLinkVpc, @@ -13496,7 +15067,7 @@ func (c *EC2) DetachClassicLinkVpcRequest(input *DetachClassicLinkVpcInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DetachClassicLinkVpc for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachClassicLinkVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachClassicLinkVpc func (c *EC2) DetachClassicLinkVpc(input *DetachClassicLinkVpcInput) (*DetachClassicLinkVpcOutput, error) { req, out := c.DetachClassicLinkVpcRequest(input) return out, req.Send() @@ -13543,7 +15114,7 @@ const opDetachInternetGateway = "DetachInternetGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachInternetGateway func (c *EC2) DetachInternetGatewayRequest(input *DetachInternetGatewayInput) (req *request.Request, output *DetachInternetGatewayOutput) { op := &request.Operation{ Name: opDetachInternetGateway, @@ -13574,7 +15145,7 @@ func (c *EC2) DetachInternetGatewayRequest(input *DetachInternetGatewayInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DetachInternetGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachInternetGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachInternetGateway func (c *EC2) DetachInternetGateway(input *DetachInternetGatewayInput) (*DetachInternetGatewayOutput, error) { req, out := c.DetachInternetGatewayRequest(input) return out, req.Send() @@ -13621,7 +15192,7 @@ const opDetachNetworkInterface = "DetachNetworkInterface" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachNetworkInterface func (c *EC2) DetachNetworkInterfaceRequest(input *DetachNetworkInterfaceInput) (req *request.Request, output *DetachNetworkInterfaceOutput) { op := &request.Operation{ Name: opDetachNetworkInterface, @@ -13650,7 +15221,7 @@ func (c *EC2) DetachNetworkInterfaceRequest(input *DetachNetworkInterfaceInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DetachNetworkInterface for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachNetworkInterface +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachNetworkInterface func (c *EC2) DetachNetworkInterface(input *DetachNetworkInterfaceInput) (*DetachNetworkInterfaceOutput, error) { req, out := c.DetachNetworkInterfaceRequest(input) return out, req.Send() @@ -13697,7 +15268,7 @@ const opDetachVolume = "DetachVolume" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVolume func (c *EC2) DetachVolumeRequest(input *DetachVolumeInput) (req *request.Request, output *VolumeAttachment) { op := &request.Operation{ Name: opDetachVolume, @@ -13737,7 +15308,7 @@ func (c *EC2) DetachVolumeRequest(input *DetachVolumeInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DetachVolume for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVolume func (c *EC2) DetachVolume(input *DetachVolumeInput) (*VolumeAttachment, error) { req, out := c.DetachVolumeRequest(input) return out, req.Send() @@ -13784,7 +15355,7 @@ const opDetachVpnGateway = "DetachVpnGateway" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVpnGateway func (c *EC2) DetachVpnGatewayRequest(input *DetachVpnGatewayInput) (req *request.Request, output *DetachVpnGatewayOutput) { op := &request.Operation{ Name: opDetachVpnGateway, @@ -13820,7 +15391,7 @@ func (c *EC2) DetachVpnGatewayRequest(input *DetachVpnGatewayInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DetachVpnGateway for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVpnGateway +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVpnGateway func (c *EC2) DetachVpnGateway(input *DetachVpnGatewayInput) (*DetachVpnGatewayOutput, error) { req, out := c.DetachVpnGatewayRequest(input) return out, req.Send() @@ -13867,7 +15438,7 @@ const opDisableVgwRoutePropagation = "DisableVgwRoutePropagation" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVgwRoutePropagation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVgwRoutePropagation func (c *EC2) DisableVgwRoutePropagationRequest(input *DisableVgwRoutePropagationInput) (req *request.Request, output *DisableVgwRoutePropagationOutput) { op := &request.Operation{ Name: opDisableVgwRoutePropagation, @@ -13897,7 +15468,7 @@ func (c *EC2) DisableVgwRoutePropagationRequest(input *DisableVgwRoutePropagatio // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisableVgwRoutePropagation for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVgwRoutePropagation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVgwRoutePropagation func (c *EC2) DisableVgwRoutePropagation(input *DisableVgwRoutePropagationInput) (*DisableVgwRoutePropagationOutput, error) { req, out := c.DisableVgwRoutePropagationRequest(input) return out, req.Send() @@ -13944,7 +15515,7 @@ const opDisableVpcClassicLink = "DisableVpcClassicLink" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLink +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLink func (c *EC2) DisableVpcClassicLinkRequest(input *DisableVpcClassicLinkInput) (req *request.Request, output *DisableVpcClassicLinkOutput) { op := &request.Operation{ Name: opDisableVpcClassicLink, @@ -13972,7 +15543,7 @@ func (c *EC2) DisableVpcClassicLinkRequest(input *DisableVpcClassicLinkInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisableVpcClassicLink for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLink +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLink func (c *EC2) DisableVpcClassicLink(input *DisableVpcClassicLinkInput) (*DisableVpcClassicLinkOutput, error) { req, out := c.DisableVpcClassicLinkRequest(input) return out, req.Send() @@ -14019,7 +15590,7 @@ const opDisableVpcClassicLinkDnsSupport = "DisableVpcClassicLinkDnsSupport" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkDnsSupport +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkDnsSupport func (c *EC2) DisableVpcClassicLinkDnsSupportRequest(input *DisableVpcClassicLinkDnsSupportInput) (req *request.Request, output *DisableVpcClassicLinkDnsSupportOutput) { op := &request.Operation{ Name: opDisableVpcClassicLinkDnsSupport, @@ -14050,7 +15621,7 @@ func (c *EC2) DisableVpcClassicLinkDnsSupportRequest(input *DisableVpcClassicLin // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisableVpcClassicLinkDnsSupport for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkDnsSupport +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkDnsSupport func (c *EC2) DisableVpcClassicLinkDnsSupport(input *DisableVpcClassicLinkDnsSupportInput) (*DisableVpcClassicLinkDnsSupportOutput, error) { req, out := c.DisableVpcClassicLinkDnsSupportRequest(input) return out, req.Send() @@ -14097,7 +15668,7 @@ const opDisassociateAddress = "DisassociateAddress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateAddress func (c *EC2) DisassociateAddressRequest(input *DisassociateAddressInput) (req *request.Request, output *DisassociateAddressOutput) { op := &request.Operation{ Name: opDisassociateAddress, @@ -14134,7 +15705,7 @@ func (c *EC2) DisassociateAddressRequest(input *DisassociateAddressInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisassociateAddress for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateAddress func (c *EC2) DisassociateAddress(input *DisassociateAddressInput) (*DisassociateAddressOutput, error) { req, out := c.DisassociateAddressRequest(input) return out, req.Send() @@ -14181,7 +15752,7 @@ const opDisassociateIamInstanceProfile = "DisassociateIamInstanceProfile" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateIamInstanceProfile +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateIamInstanceProfile func (c *EC2) DisassociateIamInstanceProfileRequest(input *DisassociateIamInstanceProfileInput) (req *request.Request, output *DisassociateIamInstanceProfileOutput) { op := &request.Operation{ Name: opDisassociateIamInstanceProfile, @@ -14210,7 +15781,7 @@ func (c *EC2) DisassociateIamInstanceProfileRequest(input *DisassociateIamInstan // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisassociateIamInstanceProfile for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateIamInstanceProfile +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateIamInstanceProfile func (c *EC2) DisassociateIamInstanceProfile(input *DisassociateIamInstanceProfileInput) (*DisassociateIamInstanceProfileOutput, error) { req, out := c.DisassociateIamInstanceProfileRequest(input) return out, req.Send() @@ -14257,7 +15828,7 @@ const opDisassociateRouteTable = "DisassociateRouteTable" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateRouteTable func (c *EC2) DisassociateRouteTableRequest(input *DisassociateRouteTableInput) (req *request.Request, output *DisassociateRouteTableOutput) { op := &request.Operation{ Name: opDisassociateRouteTable, @@ -14291,7 +15862,7 @@ func (c *EC2) DisassociateRouteTableRequest(input *DisassociateRouteTableInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisassociateRouteTable for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateRouteTable +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateRouteTable func (c *EC2) DisassociateRouteTable(input *DisassociateRouteTableInput) (*DisassociateRouteTableOutput, error) { req, out := c.DisassociateRouteTableRequest(input) return out, req.Send() @@ -14338,7 +15909,7 @@ const opDisassociateSubnetCidrBlock = "DisassociateSubnetCidrBlock" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateSubnetCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateSubnetCidrBlock func (c *EC2) DisassociateSubnetCidrBlockRequest(input *DisassociateSubnetCidrBlockInput) (req *request.Request, output *DisassociateSubnetCidrBlockOutput) { op := &request.Operation{ Name: opDisassociateSubnetCidrBlock, @@ -14367,7 +15938,7 @@ func (c *EC2) DisassociateSubnetCidrBlockRequest(input *DisassociateSubnetCidrBl // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisassociateSubnetCidrBlock for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateSubnetCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateSubnetCidrBlock func (c *EC2) DisassociateSubnetCidrBlock(input *DisassociateSubnetCidrBlockInput) (*DisassociateSubnetCidrBlockOutput, error) { req, out := c.DisassociateSubnetCidrBlockRequest(input) return out, req.Send() @@ -14414,7 +15985,7 @@ const opDisassociateVpcCidrBlock = "DisassociateVpcCidrBlock" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateVpcCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateVpcCidrBlock func (c *EC2) DisassociateVpcCidrBlockRequest(input *DisassociateVpcCidrBlockInput) (req *request.Request, output *DisassociateVpcCidrBlockOutput) { op := &request.Operation{ Name: opDisassociateVpcCidrBlock, @@ -14433,9 +16004,13 @@ func (c *EC2) DisassociateVpcCidrBlockRequest(input *DisassociateVpcCidrBlockInp // DisassociateVpcCidrBlock API operation for Amazon Elastic Compute Cloud. // -// Disassociates a CIDR block from a VPC. 
Currently, you can disassociate an -// IPv6 CIDR block only. You must detach or delete all gateways and resources -// that are associated with the CIDR block before you can disassociate it. +// Disassociates a CIDR block from a VPC. To disassociate the CIDR block, you +// must specify its association ID. You can get the association ID by using +// DescribeVpcs. You must detach or delete all gateways and resources that are +// associated with the CIDR block before you can disassociate it. +// +// You cannot disassociate the CIDR block with which you originally created +// the VPC (the primary CIDR block). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -14443,7 +16018,7 @@ func (c *EC2) DisassociateVpcCidrBlockRequest(input *DisassociateVpcCidrBlockInp // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation DisassociateVpcCidrBlock for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateVpcCidrBlock +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateVpcCidrBlock func (c *EC2) DisassociateVpcCidrBlock(input *DisassociateVpcCidrBlockInput) (*DisassociateVpcCidrBlockOutput, error) { req, out := c.DisassociateVpcCidrBlockRequest(input) return out, req.Send() @@ -14490,7 +16065,7 @@ const opEnableVgwRoutePropagation = "EnableVgwRoutePropagation" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVgwRoutePropagation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVgwRoutePropagation func (c *EC2) EnableVgwRoutePropagationRequest(input *EnableVgwRoutePropagationInput) (req *request.Request, output *EnableVgwRoutePropagationOutput) { op := &request.Operation{ Name: opEnableVgwRoutePropagation, @@ -14520,7 +16095,7 @@ func (c *EC2) EnableVgwRoutePropagationRequest(input *EnableVgwRoutePropagationI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation EnableVgwRoutePropagation for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVgwRoutePropagation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVgwRoutePropagation func (c *EC2) EnableVgwRoutePropagation(input *EnableVgwRoutePropagationInput) (*EnableVgwRoutePropagationOutput, error) { req, out := c.EnableVgwRoutePropagationRequest(input) return out, req.Send() @@ -14567,7 +16142,7 @@ const opEnableVolumeIO = "EnableVolumeIO" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVolumeIO +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVolumeIO func (c *EC2) EnableVolumeIORequest(input *EnableVolumeIOInput) (req *request.Request, output *EnableVolumeIOOutput) { op := &request.Operation{ Name: opEnableVolumeIO, @@ -14597,7 +16172,7 @@ func (c *EC2) EnableVolumeIORequest(input *EnableVolumeIOInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation EnableVolumeIO for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVolumeIO +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVolumeIO func (c *EC2) EnableVolumeIO(input *EnableVolumeIOInput) (*EnableVolumeIOOutput, error) { req, out := c.EnableVolumeIORequest(input) return out, req.Send() @@ -14644,7 +16219,7 @@ const opEnableVpcClassicLink = "EnableVpcClassicLink" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLink +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLink func (c *EC2) EnableVpcClassicLinkRequest(input *EnableVpcClassicLinkInput) (req *request.Request, output *EnableVpcClassicLinkOutput) { op := &request.Operation{ Name: opEnableVpcClassicLink, @@ -14677,7 +16252,7 @@ func (c *EC2) EnableVpcClassicLinkRequest(input *EnableVpcClassicLinkInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation EnableVpcClassicLink for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLink +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLink func (c *EC2) EnableVpcClassicLink(input *EnableVpcClassicLinkInput) (*EnableVpcClassicLinkOutput, error) { req, out := c.EnableVpcClassicLinkRequest(input) return out, req.Send() @@ -14724,7 +16299,7 @@ const opEnableVpcClassicLinkDnsSupport = "EnableVpcClassicLinkDnsSupport" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkDnsSupport +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkDnsSupport func (c *EC2) EnableVpcClassicLinkDnsSupportRequest(input *EnableVpcClassicLinkDnsSupportInput) (req *request.Request, output *EnableVpcClassicLinkDnsSupportOutput) { op := &request.Operation{ Name: opEnableVpcClassicLinkDnsSupport, @@ -14757,7 +16332,7 @@ func (c *EC2) EnableVpcClassicLinkDnsSupportRequest(input *EnableVpcClassicLinkD // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation EnableVpcClassicLinkDnsSupport for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkDnsSupport +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkDnsSupport func (c *EC2) EnableVpcClassicLinkDnsSupport(input *EnableVpcClassicLinkDnsSupportInput) (*EnableVpcClassicLinkDnsSupportOutput, error) { req, out := c.EnableVpcClassicLinkDnsSupportRequest(input) return out, req.Send() @@ -14804,7 +16379,7 @@ const opGetConsoleOutput = "GetConsoleOutput" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleOutput +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleOutput func (c *EC2) GetConsoleOutputRequest(input *GetConsoleOutputInput) (req *request.Request, output *GetConsoleOutputOutput) { op := &request.Operation{ Name: opGetConsoleOutput, @@ -14831,7 +16406,7 @@ func (c *EC2) GetConsoleOutputRequest(input *GetConsoleOutputInput) (req *reques // the Amazon EC2 API and command line interface. // // Instance console output is buffered and posted shortly after instance boot, -// reboot, and termination. Amazon EC2 preserves the most recent 64 KB output +// reboot, and termination. 
Amazon EC2 preserves the most recent 64 KB output, // which is available for at least one hour after the most recent post. // // For Linux instances, the instance console output displays the exact console @@ -14848,7 +16423,7 @@ func (c *EC2) GetConsoleOutputRequest(input *GetConsoleOutputInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation GetConsoleOutput for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleOutput +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleOutput func (c *EC2) GetConsoleOutput(input *GetConsoleOutputInput) (*GetConsoleOutputOutput, error) { req, out := c.GetConsoleOutputRequest(input) return out, req.Send() @@ -14895,7 +16470,7 @@ const opGetConsoleScreenshot = "GetConsoleScreenshot" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleScreenshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleScreenshot func (c *EC2) GetConsoleScreenshotRequest(input *GetConsoleScreenshotInput) (req *request.Request, output *GetConsoleScreenshotOutput) { op := &request.Operation{ Name: opGetConsoleScreenshot, @@ -14924,7 +16499,7 @@ func (c *EC2) GetConsoleScreenshotRequest(input *GetConsoleScreenshotInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation GetConsoleScreenshot for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleScreenshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleScreenshot func (c *EC2) GetConsoleScreenshot(input *GetConsoleScreenshotInput) (*GetConsoleScreenshotOutput, error) { req, out := c.GetConsoleScreenshotRequest(input) return out, req.Send() @@ -14971,7 +16546,7 @@ const opGetHostReservationPurchasePreview = "GetHostReservationPurchasePreview" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetHostReservationPurchasePreview +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetHostReservationPurchasePreview func (c *EC2) GetHostReservationPurchasePreviewRequest(input *GetHostReservationPurchasePreviewInput) (req *request.Request, output *GetHostReservationPurchasePreviewOutput) { op := &request.Operation{ Name: opGetHostReservationPurchasePreview, @@ -15003,7 +16578,7 @@ func (c *EC2) GetHostReservationPurchasePreviewRequest(input *GetHostReservation // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation GetHostReservationPurchasePreview for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetHostReservationPurchasePreview +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetHostReservationPurchasePreview func (c *EC2) GetHostReservationPurchasePreview(input *GetHostReservationPurchasePreviewInput) (*GetHostReservationPurchasePreviewOutput, error) { req, out := c.GetHostReservationPurchasePreviewRequest(input) return out, req.Send() @@ -15025,6 +16600,81 @@ func (c *EC2) GetHostReservationPurchasePreviewWithContext(ctx aws.Context, inpu return out, req.Send() } +const opGetLaunchTemplateData = "GetLaunchTemplateData" + +// GetLaunchTemplateDataRequest generates a "aws/request.Request" representing the +// client's request for the GetLaunchTemplateData operation. 
The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLaunchTemplateData for more information on using the GetLaunchTemplateData +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLaunchTemplateDataRequest method. +// req, resp := client.GetLaunchTemplateDataRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetLaunchTemplateData +func (c *EC2) GetLaunchTemplateDataRequest(input *GetLaunchTemplateDataInput) (req *request.Request, output *GetLaunchTemplateDataOutput) { + op := &request.Operation{ + Name: opGetLaunchTemplateData, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetLaunchTemplateDataInput{} + } + + output = &GetLaunchTemplateDataOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLaunchTemplateData API operation for Amazon Elastic Compute Cloud. +// +// Retrieves the configuration data of the specified instance. You can use this +// data to create a launch template. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation GetLaunchTemplateData for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetLaunchTemplateData +func (c *EC2) GetLaunchTemplateData(input *GetLaunchTemplateDataInput) (*GetLaunchTemplateDataOutput, error) { + req, out := c.GetLaunchTemplateDataRequest(input) + return out, req.Send() +} + +// GetLaunchTemplateDataWithContext is the same as GetLaunchTemplateData with the addition of +// the ability to pass a context and additional request options. +// +// See GetLaunchTemplateData for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) GetLaunchTemplateDataWithContext(ctx aws.Context, input *GetLaunchTemplateDataInput, opts ...request.Option) (*GetLaunchTemplateDataOutput, error) { + req, out := c.GetLaunchTemplateDataRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opGetPasswordData = "GetPasswordData" // GetPasswordDataRequest generates a "aws/request.Request" representing the @@ -15050,7 +16700,7 @@ const opGetPasswordData = "GetPasswordData" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetPasswordData +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetPasswordData func (c *EC2) GetPasswordDataRequest(input *GetPasswordDataInput) (req *request.Request, output *GetPasswordDataOutput) { op := &request.Operation{ Name: opGetPasswordData, @@ -15069,20 +16719,24 @@ func (c *EC2) GetPasswordDataRequest(input *GetPasswordDataInput) (req *request. // GetPasswordData API operation for Amazon Elastic Compute Cloud. // -// Retrieves the encrypted administrator password for an instance running Windows. +// Retrieves the encrypted administrator password for a running Windows instance. // -// The Windows password is generated at boot if the EC2Config service plugin, -// Ec2SetPassword, is enabled. This usually only happens the first time an AMI -// is launched, and then Ec2SetPassword is automatically disabled. The password -// is not generated for rebundled AMIs unless Ec2SetPassword is enabled before -// bundling. +// The Windows password is generated at boot by the EC2Config service or EC2Launch +// scripts (Windows Server 2016 and later). This usually only happens the first +// time an instance is launched. For more information, see EC2Config (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/UsingConfig_WinAMI.html) +// and EC2Launch (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2launch.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// For the EC2Config service, the password is not generated for rebundled AMIs +// unless Ec2SetPassword is enabled before bundling. // // The password is encrypted using the key pair that you specified when you // launched the instance. You must provide the corresponding key pair file. // -// Password generation and encryption takes a few moments. We recommend that -// you wait up to 15 minutes after launching an instance before trying to retrieve -// the generated password. +// When you launch an instance, password generation and encryption may take +// a few minutes. If you try to retrieve the password before it's available, +// the output returns an empty string. We recommend that you wait up to 15 minutes +// after launching an instance before trying to retrieve the generated password. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -15090,7 +16744,7 @@ func (c *EC2) GetPasswordDataRequest(input *GetPasswordDataInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation GetPasswordData for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetPasswordData +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetPasswordData func (c *EC2) GetPasswordData(input *GetPasswordDataInput) (*GetPasswordDataOutput, error) { req, out := c.GetPasswordDataRequest(input) return out, req.Send() @@ -15137,7 +16791,7 @@ const opGetReservedInstancesExchangeQuote = "GetReservedInstancesExchangeQuote" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetReservedInstancesExchangeQuote +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetReservedInstancesExchangeQuote func (c *EC2) GetReservedInstancesExchangeQuoteRequest(input *GetReservedInstancesExchangeQuoteInput) (req *request.Request, output *GetReservedInstancesExchangeQuoteOutput) { op := &request.Operation{ Name: opGetReservedInstancesExchangeQuote, @@ -15156,9 +16810,10 @@ func (c *EC2) GetReservedInstancesExchangeQuoteRequest(input *GetReservedInstanc // GetReservedInstancesExchangeQuote API operation for Amazon Elastic Compute Cloud. // -// Returns details about the values and term of your specified Convertible Reserved -// Instances. When a target configuration is specified, it returns information -// about whether the exchange is valid and can be performed. +// Returns a quote and exchange information for exchanging one or more specified +// Convertible Reserved Instances for a new Convertible Reserved Instance. If +// the exchange cannot be performed, the reason is returned in the response. +// Use AcceptReservedInstancesExchangeQuote to perform the exchange. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -15166,7 +16821,7 @@ func (c *EC2) GetReservedInstancesExchangeQuoteRequest(input *GetReservedInstanc // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation GetReservedInstancesExchangeQuote for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetReservedInstancesExchangeQuote +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetReservedInstancesExchangeQuote func (c *EC2) GetReservedInstancesExchangeQuote(input *GetReservedInstancesExchangeQuoteInput) (*GetReservedInstancesExchangeQuoteOutput, error) { req, out := c.GetReservedInstancesExchangeQuoteRequest(input) return out, req.Send() @@ -15213,7 +16868,7 @@ const opImportImage = "ImportImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportImage func (c *EC2) ImportImageRequest(input *ImportImageInput) (req *request.Request, output *ImportImageOutput) { op := &request.Operation{ Name: opImportImage, @@ -15243,7 +16898,7 @@ func (c *EC2) ImportImageRequest(input *ImportImageInput) (req *request.Request, // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ImportImage for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportImage func (c *EC2) ImportImage(input *ImportImageInput) (*ImportImageOutput, error) { req, out := c.ImportImageRequest(input) return out, req.Send() @@ -15290,7 +16945,7 @@ const opImportInstance = "ImportInstance" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstance +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstance func (c *EC2) ImportInstanceRequest(input *ImportInstanceInput) (req *request.Request, output *ImportInstanceOutput) { op := &request.Operation{ Name: opImportInstance, @@ -15323,7 +16978,7 @@ func (c *EC2) ImportInstanceRequest(input *ImportInstanceInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ImportInstance for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstance +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstance func (c *EC2) ImportInstance(input *ImportInstanceInput) (*ImportInstanceOutput, error) { req, out := c.ImportInstanceRequest(input) return out, req.Send() @@ -15370,7 +17025,7 @@ const opImportKeyPair = "ImportKeyPair" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportKeyPair +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportKeyPair func (c *EC2) ImportKeyPairRequest(input *ImportKeyPairInput) (req *request.Request, output *ImportKeyPairOutput) { op := &request.Operation{ Name: opImportKeyPair, @@ -15404,7 +17059,7 @@ func (c *EC2) ImportKeyPairRequest(input *ImportKeyPairInput) (req *request.Requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ImportKeyPair for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportKeyPair +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportKeyPair func (c *EC2) ImportKeyPair(input *ImportKeyPairInput) (*ImportKeyPairOutput, error) { req, out := c.ImportKeyPairRequest(input) return out, req.Send() @@ -15451,7 +17106,7 @@ const opImportSnapshot = "ImportSnapshot" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportSnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportSnapshot func (c *EC2) ImportSnapshotRequest(input *ImportSnapshotInput) (req *request.Request, output *ImportSnapshotOutput) { op := &request.Operation{ Name: opImportSnapshot, @@ -15478,7 +17133,7 @@ func (c *EC2) ImportSnapshotRequest(input *ImportSnapshotInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ImportSnapshot for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportSnapshot +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportSnapshot func (c *EC2) ImportSnapshot(input *ImportSnapshotInput) (*ImportSnapshotOutput, error) { req, out := c.ImportSnapshotRequest(input) return out, req.Send() @@ -15525,7 +17180,7 @@ const opImportVolume = "ImportVolume" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportVolume func (c *EC2) ImportVolumeRequest(input *ImportVolumeInput) (req *request.Request, output *ImportVolumeOutput) { op := &request.Operation{ Name: opImportVolume, @@ -15556,7 +17211,7 @@ func (c *EC2) ImportVolumeRequest(input *ImportVolumeInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ImportVolume for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportVolume func (c *EC2) ImportVolume(input *ImportVolumeInput) (*ImportVolumeOutput, error) { req, out := c.ImportVolumeRequest(input) return out, req.Send() @@ -15578,6 +17233,80 @@ func (c *EC2) ImportVolumeWithContext(ctx aws.Context, input *ImportVolumeInput, return out, req.Send() } +const opModifyFpgaImageAttribute = "ModifyFpgaImageAttribute" + +// ModifyFpgaImageAttributeRequest generates a "aws/request.Request" representing the +// client's request for the ModifyFpgaImageAttribute operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyFpgaImageAttribute for more information on using the ModifyFpgaImageAttribute +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyFpgaImageAttributeRequest method. +// req, resp := client.ModifyFpgaImageAttributeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyFpgaImageAttribute +func (c *EC2) ModifyFpgaImageAttributeRequest(input *ModifyFpgaImageAttributeInput) (req *request.Request, output *ModifyFpgaImageAttributeOutput) { + op := &request.Operation{ + Name: opModifyFpgaImageAttribute, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyFpgaImageAttributeInput{} + } + + output = &ModifyFpgaImageAttributeOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyFpgaImageAttribute API operation for Amazon Elastic Compute Cloud. +// +// Modifies the specified attribute of the specified Amazon FPGA Image (AFI). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyFpgaImageAttribute for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyFpgaImageAttribute +func (c *EC2) ModifyFpgaImageAttribute(input *ModifyFpgaImageAttributeInput) (*ModifyFpgaImageAttributeOutput, error) { + req, out := c.ModifyFpgaImageAttributeRequest(input) + return out, req.Send() +} + +// ModifyFpgaImageAttributeWithContext is the same as ModifyFpgaImageAttribute with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyFpgaImageAttribute for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyFpgaImageAttributeWithContext(ctx aws.Context, input *ModifyFpgaImageAttributeInput, opts ...request.Option) (*ModifyFpgaImageAttributeOutput, error) { + req, out := c.ModifyFpgaImageAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opModifyHosts = "ModifyHosts" // ModifyHostsRequest generates a "aws/request.Request" representing the @@ -15603,7 +17332,7 @@ const opModifyHosts = "ModifyHosts" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyHosts func (c *EC2) ModifyHostsRequest(input *ModifyHostsInput) (req *request.Request, output *ModifyHostsOutput) { op := &request.Operation{ Name: opModifyHosts, @@ -15636,7 +17365,7 @@ func (c *EC2) ModifyHostsRequest(input *ModifyHostsInput) (req *request.Request, // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyHosts for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyHosts func (c *EC2) ModifyHosts(input *ModifyHostsInput) (*ModifyHostsOutput, error) { req, out := c.ModifyHostsRequest(input) return out, req.Send() @@ -15683,7 +17412,7 @@ const opModifyIdFormat = "ModifyIdFormat" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdFormat func (c *EC2) ModifyIdFormatRequest(input *ModifyIdFormatInput) (req *request.Request, output *ModifyIdFormatOutput) { op := &request.Operation{ Name: opModifyIdFormat, @@ -15726,7 +17455,7 @@ func (c *EC2) ModifyIdFormatRequest(input *ModifyIdFormatInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyIdFormat for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdFormat func (c *EC2) ModifyIdFormat(input *ModifyIdFormatInput) (*ModifyIdFormatOutput, error) { req, out := c.ModifyIdFormatRequest(input) return out, req.Send() @@ -15773,7 +17502,7 @@ const opModifyIdentityIdFormat = "ModifyIdentityIdFormat" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdentityIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdentityIdFormat func (c *EC2) ModifyIdentityIdFormatRequest(input *ModifyIdentityIdFormatInput) (req *request.Request, output *ModifyIdentityIdFormatOutput) { op := &request.Operation{ Name: opModifyIdentityIdFormat, @@ -15816,7 +17545,7 @@ func (c *EC2) ModifyIdentityIdFormatRequest(input *ModifyIdentityIdFormatInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyIdentityIdFormat for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdentityIdFormat +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdentityIdFormat func (c *EC2) ModifyIdentityIdFormat(input *ModifyIdentityIdFormatInput) (*ModifyIdentityIdFormatOutput, error) { req, out := c.ModifyIdentityIdFormatRequest(input) return out, req.Send() @@ -15863,7 +17592,7 @@ const opModifyImageAttribute = "ModifyImageAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyImageAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyImageAttribute func (c *EC2) ModifyImageAttributeRequest(input *ModifyImageAttributeInput) (req *request.Request, output *ModifyImageAttributeOutput) { op := &request.Operation{ Name: opModifyImageAttribute, @@ -15885,15 +17614,15 @@ func (c *EC2) ModifyImageAttributeRequest(input *ModifyImageAttributeInput) (req // ModifyImageAttribute API operation for Amazon Elastic Compute Cloud. // // Modifies the specified attribute of the specified AMI. You can specify only -// one attribute at a time. +// one attribute at a time. You can use the Attribute parameter to specify the +// attribute or one of the following parameters: Description, LaunchPermission, +// or ProductCode. // // AWS Marketplace product codes cannot be modified. Images with an AWS Marketplace // product code cannot be made public. // -// The SriovNetSupport enhanced networking attribute cannot be changed using -// this command. Instead, enable SriovNetSupport on an instance and create an -// AMI from the instance. This will result in an image with SriovNetSupport -// enabled. +// To enable the SriovNetSupport enhanced networking attribute of an image, +// enable SriovNetSupport on an instance and create an AMI from the instance. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -15901,7 +17630,7 @@ func (c *EC2) ModifyImageAttributeRequest(input *ModifyImageAttributeInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyImageAttribute for usage and error information. 
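Following the ModifyImageAttribute description above, a minimal sketch that grants launch permission on an AMI to another account. It reuses the `svc` client from the earlier sketch; the AMI and account IDs passed in are placeholders.

```
// shareAMI grants launch permission on an AMI to a single AWS account.
// svc is an *ec2.EC2 client as constructed in the earlier sketch.
func shareAMI(svc *ec2.EC2, imageID, accountID string) error {
	_, err := svc.ModifyImageAttribute(&ec2.ModifyImageAttributeInput{
		ImageId: aws.String(imageID),
		LaunchPermission: &ec2.LaunchPermissionModifications{
			Add: []*ec2.LaunchPermission{{UserId: aws.String(accountID)}},
		},
	})
	return err
}
```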
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyImageAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyImageAttribute func (c *EC2) ModifyImageAttribute(input *ModifyImageAttributeInput) (*ModifyImageAttributeOutput, error) { req, out := c.ModifyImageAttributeRequest(input) return out, req.Send() @@ -15948,7 +17677,7 @@ const opModifyInstanceAttribute = "ModifyInstanceAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceAttribute func (c *EC2) ModifyInstanceAttributeRequest(input *ModifyInstanceAttributeInput) (req *request.Request, output *ModifyInstanceAttributeOutput) { op := &request.Operation{ Name: opModifyInstanceAttribute, @@ -15982,7 +17711,7 @@ func (c *EC2) ModifyInstanceAttributeRequest(input *ModifyInstanceAttributeInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyInstanceAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceAttribute func (c *EC2) ModifyInstanceAttribute(input *ModifyInstanceAttributeInput) (*ModifyInstanceAttributeOutput, error) { req, out := c.ModifyInstanceAttributeRequest(input) return out, req.Send() @@ -16004,6 +17733,84 @@ func (c *EC2) ModifyInstanceAttributeWithContext(ctx aws.Context, input *ModifyI return out, req.Send() } +const opModifyInstanceCreditSpecification = "ModifyInstanceCreditSpecification" + +// ModifyInstanceCreditSpecificationRequest generates a "aws/request.Request" representing the +// client's request for the ModifyInstanceCreditSpecification operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyInstanceCreditSpecification for more information on using the ModifyInstanceCreditSpecification +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyInstanceCreditSpecificationRequest method. +// req, resp := client.ModifyInstanceCreditSpecificationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceCreditSpecification +func (c *EC2) ModifyInstanceCreditSpecificationRequest(input *ModifyInstanceCreditSpecificationInput) (req *request.Request, output *ModifyInstanceCreditSpecificationOutput) { + op := &request.Operation{ + Name: opModifyInstanceCreditSpecification, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyInstanceCreditSpecificationInput{} + } + + output = &ModifyInstanceCreditSpecificationOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyInstanceCreditSpecification API operation for Amazon Elastic Compute Cloud. +// +// Modifies the credit option for CPU usage on a running or stopped T2 instance. 
+// The credit options are standard and unlimited. +// +// For more information, see T2 Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyInstanceCreditSpecification for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceCreditSpecification +func (c *EC2) ModifyInstanceCreditSpecification(input *ModifyInstanceCreditSpecificationInput) (*ModifyInstanceCreditSpecificationOutput, error) { + req, out := c.ModifyInstanceCreditSpecificationRequest(input) + return out, req.Send() +} + +// ModifyInstanceCreditSpecificationWithContext is the same as ModifyInstanceCreditSpecification with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyInstanceCreditSpecification for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyInstanceCreditSpecificationWithContext(ctx aws.Context, input *ModifyInstanceCreditSpecificationInput, opts ...request.Option) (*ModifyInstanceCreditSpecificationOutput, error) { + req, out := c.ModifyInstanceCreditSpecificationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opModifyInstancePlacement = "ModifyInstancePlacement" // ModifyInstancePlacementRequest generates a "aws/request.Request" representing the @@ -16029,7 +17836,7 @@ const opModifyInstancePlacement = "ModifyInstancePlacement" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstancePlacement +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstancePlacement func (c *EC2) ModifyInstancePlacementRequest(input *ModifyInstancePlacementInput) (req *request.Request, output *ModifyInstancePlacementOutput) { op := &request.Operation{ Name: opModifyInstancePlacement, @@ -16074,7 +17881,7 @@ func (c *EC2) ModifyInstancePlacementRequest(input *ModifyInstancePlacementInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyInstancePlacement for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstancePlacement +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstancePlacement func (c *EC2) ModifyInstancePlacement(input *ModifyInstancePlacementInput) (*ModifyInstancePlacementOutput, error) { req, out := c.ModifyInstancePlacementRequest(input) return out, req.Send() @@ -16096,6 +17903,82 @@ func (c *EC2) ModifyInstancePlacementWithContext(ctx aws.Context, input *ModifyI return out, req.Send() } +const opModifyLaunchTemplate = "ModifyLaunchTemplate" + +// ModifyLaunchTemplateRequest generates a "aws/request.Request" representing the +// client's request for the ModifyLaunchTemplate operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. 
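A minimal sketch of the ModifyInstanceCreditSpecification operation just described, switching one T2 instance to the unlimited credit option; `svc` is the client from the first sketch and the instance ID is a placeholder.

```
// setUnlimitedCredits switches a T2 instance to the "unlimited" CPU credit option.
func setUnlimitedCredits(svc *ec2.EC2, instanceID string) error {
	_, err := svc.ModifyInstanceCreditSpecification(&ec2.ModifyInstanceCreditSpecificationInput{
		InstanceCreditSpecifications: []*ec2.InstanceCreditSpecificationRequest{
			{InstanceId: aws.String(instanceID), CpuCredits: aws.String("unlimited")},
		},
	})
	return err
}
```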
+// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyLaunchTemplate for more information on using the ModifyLaunchTemplate +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyLaunchTemplateRequest method. +// req, resp := client.ModifyLaunchTemplateRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyLaunchTemplate +func (c *EC2) ModifyLaunchTemplateRequest(input *ModifyLaunchTemplateInput) (req *request.Request, output *ModifyLaunchTemplateOutput) { + op := &request.Operation{ + Name: opModifyLaunchTemplate, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyLaunchTemplateInput{} + } + + output = &ModifyLaunchTemplateOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyLaunchTemplate API operation for Amazon Elastic Compute Cloud. +// +// Modifies a launch template. You can specify which version of the launch template +// to set as the default version. When launching an instance, the default version +// applies when a launch template version is not specified. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyLaunchTemplate for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyLaunchTemplate +func (c *EC2) ModifyLaunchTemplate(input *ModifyLaunchTemplateInput) (*ModifyLaunchTemplateOutput, error) { + req, out := c.ModifyLaunchTemplateRequest(input) + return out, req.Send() +} + +// ModifyLaunchTemplateWithContext is the same as ModifyLaunchTemplate with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyLaunchTemplate for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyLaunchTemplateWithContext(ctx aws.Context, input *ModifyLaunchTemplateInput, opts ...request.Option) (*ModifyLaunchTemplateOutput, error) { + req, out := c.ModifyLaunchTemplateRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opModifyNetworkInterfaceAttribute = "ModifyNetworkInterfaceAttribute" // ModifyNetworkInterfaceAttributeRequest generates a "aws/request.Request" representing the @@ -16121,7 +18004,7 @@ const opModifyNetworkInterfaceAttribute = "ModifyNetworkInterfaceAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyNetworkInterfaceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyNetworkInterfaceAttribute func (c *EC2) ModifyNetworkInterfaceAttributeRequest(input *ModifyNetworkInterfaceAttributeInput) (req *request.Request, output *ModifyNetworkInterfaceAttributeOutput) { op := &request.Operation{ Name: opModifyNetworkInterfaceAttribute, @@ -16151,7 +18034,7 @@ func (c *EC2) ModifyNetworkInterfaceAttributeRequest(input *ModifyNetworkInterfa // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyNetworkInterfaceAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyNetworkInterfaceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyNetworkInterfaceAttribute func (c *EC2) ModifyNetworkInterfaceAttribute(input *ModifyNetworkInterfaceAttributeInput) (*ModifyNetworkInterfaceAttributeOutput, error) { req, out := c.ModifyNetworkInterfaceAttributeRequest(input) return out, req.Send() @@ -16198,7 +18081,7 @@ const opModifyReservedInstances = "ModifyReservedInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyReservedInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyReservedInstances func (c *EC2) ModifyReservedInstancesRequest(input *ModifyReservedInstancesInput) (req *request.Request, output *ModifyReservedInstancesOutput) { op := &request.Operation{ Name: opModifyReservedInstances, @@ -16218,9 +18101,9 @@ func (c *EC2) ModifyReservedInstancesRequest(input *ModifyReservedInstancesInput // ModifyReservedInstances API operation for Amazon Elastic Compute Cloud. // // Modifies the Availability Zone, instance count, instance type, or network -// platform (EC2-Classic or EC2-VPC) of your Standard Reserved Instances. The -// Reserved Instances to be modified must be identical, except for Availability -// Zone, network platform, and instance type. +// platform (EC2-Classic or EC2-VPC) of your Reserved Instances. The Reserved +// Instances to be modified must be identical, except for Availability Zone, +// network platform, and instance type. // // For more information, see Modifying Reserved Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-modifying.html) // in the Amazon Elastic Compute Cloud User Guide. @@ -16231,7 +18114,7 @@ func (c *EC2) ModifyReservedInstancesRequest(input *ModifyReservedInstancesInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyReservedInstances for usage and error information. 
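For the ModifyLaunchTemplate operation introduced above, a hedged sketch that sets the default version of a launch template; the launch template ID and version number are placeholders, and the field names are assumed from the SDK's input struct.

```
// setDefaultLaunchTemplateVersion marks an existing version as the template's default.
func setDefaultLaunchTemplateVersion(svc *ec2.EC2, templateID, version string) error {
	_, err := svc.ModifyLaunchTemplate(&ec2.ModifyLaunchTemplateInput{
		LaunchTemplateId: aws.String(templateID),
		DefaultVersion:   aws.String(version), // e.g. "2"
	})
	return err
}
```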
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyReservedInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyReservedInstances func (c *EC2) ModifyReservedInstances(input *ModifyReservedInstancesInput) (*ModifyReservedInstancesOutput, error) { req, out := c.ModifyReservedInstancesRequest(input) return out, req.Send() @@ -16278,7 +18161,7 @@ const opModifySnapshotAttribute = "ModifySnapshotAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySnapshotAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySnapshotAttribute func (c *EC2) ModifySnapshotAttributeRequest(input *ModifySnapshotAttributeInput) (req *request.Request, output *ModifySnapshotAttributeOutput) { op := &request.Operation{ Name: opModifySnapshotAttribute, @@ -16319,7 +18202,7 @@ func (c *EC2) ModifySnapshotAttributeRequest(input *ModifySnapshotAttributeInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifySnapshotAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySnapshotAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySnapshotAttribute func (c *EC2) ModifySnapshotAttribute(input *ModifySnapshotAttributeInput) (*ModifySnapshotAttributeOutput, error) { req, out := c.ModifySnapshotAttributeRequest(input) return out, req.Send() @@ -16366,7 +18249,7 @@ const opModifySpotFleetRequest = "ModifySpotFleetRequest" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySpotFleetRequest +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySpotFleetRequest func (c *EC2) ModifySpotFleetRequestRequest(input *ModifySpotFleetRequestInput) (req *request.Request, output *ModifySpotFleetRequestOutput) { op := &request.Operation{ Name: opModifySpotFleetRequest, @@ -16385,26 +18268,29 @@ func (c *EC2) ModifySpotFleetRequestRequest(input *ModifySpotFleetRequestInput) // ModifySpotFleetRequest API operation for Amazon Elastic Compute Cloud. // -// Modifies the specified Spot fleet request. +// Modifies the specified Spot Fleet request. // -// While the Spot fleet request is being modified, it is in the modifying state. +// While the Spot Fleet request is being modified, it is in the modifying state. // -// To scale up your Spot fleet, increase its target capacity. The Spot fleet -// launches the additional Spot instances according to the allocation strategy -// for the Spot fleet request. If the allocation strategy is lowestPrice, the -// Spot fleet launches instances using the Spot pool with the lowest price. -// If the allocation strategy is diversified, the Spot fleet distributes the +// To scale up your Spot Fleet, increase its target capacity. The Spot Fleet +// launches the additional Spot Instances according to the allocation strategy +// for the Spot Fleet request. If the allocation strategy is lowestPrice, the +// Spot Fleet launches instances using the Spot pool with the lowest price. +// If the allocation strategy is diversified, the Spot Fleet distributes the // instances across the Spot pools. // -// To scale down your Spot fleet, decrease its target capacity. First, the Spot -// fleet cancels any open bids that exceed the new target capacity. 
You can -// request that the Spot fleet terminate Spot instances until the size of the -// fleet no longer exceeds the new target capacity. If the allocation strategy -// is lowestPrice, the Spot fleet terminates the instances with the highest -// price per unit. If the allocation strategy is diversified, the Spot fleet +// To scale down your Spot Fleet, decrease its target capacity. First, the Spot +// Fleet cancels any open requests that exceed the new target capacity. You +// can request that the Spot Fleet terminate Spot Instances until the size of +// the fleet no longer exceeds the new target capacity. If the allocation strategy +// is lowestPrice, the Spot Fleet terminates the instances with the highest +// price per unit. If the allocation strategy is diversified, the Spot Fleet // terminates instances across the Spot pools. Alternatively, you can request -// that the Spot fleet keep the fleet at its current size, but not replace any -// Spot instances that are interrupted or that you terminate manually. +// that the Spot Fleet keep the fleet at its current size, but not replace any +// Spot Instances that are interrupted or that you terminate manually. +// +// If you are finished with your Spot Fleet for now, but will use it again later, +// you can set the target capacity to 0. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -16412,7 +18298,7 @@ func (c *EC2) ModifySpotFleetRequestRequest(input *ModifySpotFleetRequestInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifySpotFleetRequest for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySpotFleetRequest +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySpotFleetRequest func (c *EC2) ModifySpotFleetRequest(input *ModifySpotFleetRequestInput) (*ModifySpotFleetRequestOutput, error) { req, out := c.ModifySpotFleetRequestRequest(input) return out, req.Send() @@ -16459,7 +18345,7 @@ const opModifySubnetAttribute = "ModifySubnetAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySubnetAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySubnetAttribute func (c *EC2) ModifySubnetAttributeRequest(input *ModifySubnetAttributeInput) (req *request.Request, output *ModifySubnetAttributeOutput) { op := &request.Operation{ Name: opModifySubnetAttribute, @@ -16488,7 +18374,7 @@ func (c *EC2) ModifySubnetAttributeRequest(input *ModifySubnetAttributeInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifySubnetAttribute for usage and error information. 
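To illustrate the updated ModifySpotFleetRequest description above, including the note about setting the target capacity to 0 when the fleet is idle, a minimal sketch; the Spot Fleet request ID is a placeholder.

```
// pauseSpotFleet scales a Spot Fleet down to a target capacity of 0.
func pauseSpotFleet(svc *ec2.EC2, fleetRequestID string) error {
	_, err := svc.ModifySpotFleetRequest(&ec2.ModifySpotFleetRequestInput{
		SpotFleetRequestId: aws.String(fleetRequestID),
		TargetCapacity:     aws.Int64(0),
	})
	return err
}
```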
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySubnetAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySubnetAttribute func (c *EC2) ModifySubnetAttribute(input *ModifySubnetAttributeInput) (*ModifySubnetAttributeOutput, error) { req, out := c.ModifySubnetAttributeRequest(input) return out, req.Send() @@ -16535,7 +18421,7 @@ const opModifyVolume = "ModifyVolume" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolume func (c *EC2) ModifyVolumeRequest(input *ModifyVolumeInput) (req *request.Request, output *ModifyVolumeOutput) { op := &request.Operation{ Name: opModifyVolume, @@ -16594,7 +18480,7 @@ func (c *EC2) ModifyVolumeRequest(input *ModifyVolumeInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyVolume for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolume +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolume func (c *EC2) ModifyVolume(input *ModifyVolumeInput) (*ModifyVolumeOutput, error) { req, out := c.ModifyVolumeRequest(input) return out, req.Send() @@ -16641,7 +18527,7 @@ const opModifyVolumeAttribute = "ModifyVolumeAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeAttribute func (c *EC2) ModifyVolumeAttributeRequest(input *ModifyVolumeAttributeInput) (req *request.Request, output *ModifyVolumeAttributeOutput) { op := &request.Operation{ Name: opModifyVolumeAttribute, @@ -16679,7 +18565,7 @@ func (c *EC2) ModifyVolumeAttributeRequest(input *ModifyVolumeAttributeInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyVolumeAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeAttribute func (c *EC2) ModifyVolumeAttribute(input *ModifyVolumeAttributeInput) (*ModifyVolumeAttributeOutput, error) { req, out := c.ModifyVolumeAttributeRequest(input) return out, req.Send() @@ -16726,7 +18612,7 @@ const opModifyVpcAttribute = "ModifyVpcAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcAttribute func (c *EC2) ModifyVpcAttributeRequest(input *ModifyVpcAttributeInput) (req *request.Request, output *ModifyVpcAttributeOutput) { op := &request.Operation{ Name: opModifyVpcAttribute, @@ -16755,7 +18641,7 @@ func (c *EC2) ModifyVpcAttributeRequest(input *ModifyVpcAttributeInput) (req *re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyVpcAttribute for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcAttribute func (c *EC2) ModifyVpcAttribute(input *ModifyVpcAttributeInput) (*ModifyVpcAttributeOutput, error) { req, out := c.ModifyVpcAttributeRequest(input) return out, req.Send() @@ -16802,7 +18688,7 @@ const opModifyVpcEndpoint = "ModifyVpcEndpoint" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpoint +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpoint func (c *EC2) ModifyVpcEndpointRequest(input *ModifyVpcEndpointInput) (req *request.Request, output *ModifyVpcEndpointOutput) { op := &request.Operation{ Name: opModifyVpcEndpoint, @@ -16821,9 +18707,10 @@ func (c *EC2) ModifyVpcEndpointRequest(input *ModifyVpcEndpointInput) (req *requ // ModifyVpcEndpoint API operation for Amazon Elastic Compute Cloud. // -// Modifies attributes of a specified VPC endpoint. You can modify the policy -// associated with the endpoint, and you can add and remove route tables associated -// with the endpoint. +// Modifies attributes of a specified VPC endpoint. The attributes that you +// can modify depend on the type of VPC endpoint (interface or gateway). For +// more information, see VPC Endpoints (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html) +// in the Amazon Virtual Private Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -16831,7 +18718,7 @@ func (c *EC2) ModifyVpcEndpointRequest(input *ModifyVpcEndpointInput) (req *requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyVpcEndpoint for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpoint +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpoint func (c *EC2) ModifyVpcEndpoint(input *ModifyVpcEndpointInput) (*ModifyVpcEndpointOutput, error) { req, out := c.ModifyVpcEndpointRequest(input) return out, req.Send() @@ -16853,6 +18740,235 @@ func (c *EC2) ModifyVpcEndpointWithContext(ctx aws.Context, input *ModifyVpcEndp return out, req.Send() } +const opModifyVpcEndpointConnectionNotification = "ModifyVpcEndpointConnectionNotification" + +// ModifyVpcEndpointConnectionNotificationRequest generates a "aws/request.Request" representing the +// client's request for the ModifyVpcEndpointConnectionNotification operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyVpcEndpointConnectionNotification for more information on using the ModifyVpcEndpointConnectionNotification +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyVpcEndpointConnectionNotificationRequest method. 
+// req, resp := client.ModifyVpcEndpointConnectionNotificationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointConnectionNotification +func (c *EC2) ModifyVpcEndpointConnectionNotificationRequest(input *ModifyVpcEndpointConnectionNotificationInput) (req *request.Request, output *ModifyVpcEndpointConnectionNotificationOutput) { + op := &request.Operation{ + Name: opModifyVpcEndpointConnectionNotification, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyVpcEndpointConnectionNotificationInput{} + } + + output = &ModifyVpcEndpointConnectionNotificationOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyVpcEndpointConnectionNotification API operation for Amazon Elastic Compute Cloud. +// +// Modifies a connection notification for VPC endpoint or VPC endpoint service. +// You can change the SNS topic for the notification, or the events for which +// to be notified. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyVpcEndpointConnectionNotification for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointConnectionNotification +func (c *EC2) ModifyVpcEndpointConnectionNotification(input *ModifyVpcEndpointConnectionNotificationInput) (*ModifyVpcEndpointConnectionNotificationOutput, error) { + req, out := c.ModifyVpcEndpointConnectionNotificationRequest(input) + return out, req.Send() +} + +// ModifyVpcEndpointConnectionNotificationWithContext is the same as ModifyVpcEndpointConnectionNotification with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyVpcEndpointConnectionNotification for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyVpcEndpointConnectionNotificationWithContext(ctx aws.Context, input *ModifyVpcEndpointConnectionNotificationInput, opts ...request.Option) (*ModifyVpcEndpointConnectionNotificationOutput, error) { + req, out := c.ModifyVpcEndpointConnectionNotificationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyVpcEndpointServiceConfiguration = "ModifyVpcEndpointServiceConfiguration" + +// ModifyVpcEndpointServiceConfigurationRequest generates a "aws/request.Request" representing the +// client's request for the ModifyVpcEndpointServiceConfiguration operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyVpcEndpointServiceConfiguration for more information on using the ModifyVpcEndpointServiceConfiguration +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyVpcEndpointServiceConfigurationRequest method. +// req, resp := client.ModifyVpcEndpointServiceConfigurationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointServiceConfiguration +func (c *EC2) ModifyVpcEndpointServiceConfigurationRequest(input *ModifyVpcEndpointServiceConfigurationInput) (req *request.Request, output *ModifyVpcEndpointServiceConfigurationOutput) { + op := &request.Operation{ + Name: opModifyVpcEndpointServiceConfiguration, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyVpcEndpointServiceConfigurationInput{} + } + + output = &ModifyVpcEndpointServiceConfigurationOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyVpcEndpointServiceConfiguration API operation for Amazon Elastic Compute Cloud. +// +// Modifies the attributes of your VPC endpoint service configuration. You can +// change the Network Load Balancers for your service, and you can specify whether +// acceptance is required for requests to connect to your endpoint service through +// an interface VPC endpoint. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyVpcEndpointServiceConfiguration for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointServiceConfiguration +func (c *EC2) ModifyVpcEndpointServiceConfiguration(input *ModifyVpcEndpointServiceConfigurationInput) (*ModifyVpcEndpointServiceConfigurationOutput, error) { + req, out := c.ModifyVpcEndpointServiceConfigurationRequest(input) + return out, req.Send() +} + +// ModifyVpcEndpointServiceConfigurationWithContext is the same as ModifyVpcEndpointServiceConfiguration with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyVpcEndpointServiceConfiguration for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyVpcEndpointServiceConfigurationWithContext(ctx aws.Context, input *ModifyVpcEndpointServiceConfigurationInput, opts ...request.Option) (*ModifyVpcEndpointServiceConfigurationOutput, error) { + req, out := c.ModifyVpcEndpointServiceConfigurationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opModifyVpcEndpointServicePermissions = "ModifyVpcEndpointServicePermissions" + +// ModifyVpcEndpointServicePermissionsRequest generates a "aws/request.Request" representing the +// client's request for the ModifyVpcEndpointServicePermissions operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. 
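A hedged sketch of the ModifyVpcEndpointServiceConfiguration operation described above, turning on acceptance for new connection requests; the service ID is a placeholder and the AcceptanceRequired field name is an assumption about the input struct.

```
// requireAcceptance makes an endpoint service require manual acceptance of connections.
func requireAcceptance(svc *ec2.EC2, serviceID string) error {
	_, err := svc.ModifyVpcEndpointServiceConfiguration(&ec2.ModifyVpcEndpointServiceConfigurationInput{
		ServiceId:          aws.String(serviceID),
		AcceptanceRequired: aws.Bool(true),
	})
	return err
}
```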
+// the "output" return value is not valid until after Send returns without error. +// +// See ModifyVpcEndpointServicePermissions for more information on using the ModifyVpcEndpointServicePermissions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyVpcEndpointServicePermissionsRequest method. +// req, resp := client.ModifyVpcEndpointServicePermissionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointServicePermissions +func (c *EC2) ModifyVpcEndpointServicePermissionsRequest(input *ModifyVpcEndpointServicePermissionsInput) (req *request.Request, output *ModifyVpcEndpointServicePermissionsOutput) { + op := &request.Operation{ + Name: opModifyVpcEndpointServicePermissions, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyVpcEndpointServicePermissionsInput{} + } + + output = &ModifyVpcEndpointServicePermissionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyVpcEndpointServicePermissions API operation for Amazon Elastic Compute Cloud. +// +// Modifies the permissions for your VPC endpoint service (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/endpoint-service.html). +// You can add or remove permissions for service consumers (IAM users, IAM roles, +// and AWS accounts) to connect to your endpoint service. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyVpcEndpointServicePermissions for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointServicePermissions +func (c *EC2) ModifyVpcEndpointServicePermissions(input *ModifyVpcEndpointServicePermissionsInput) (*ModifyVpcEndpointServicePermissionsOutput, error) { + req, out := c.ModifyVpcEndpointServicePermissionsRequest(input) + return out, req.Send() +} + +// ModifyVpcEndpointServicePermissionsWithContext is the same as ModifyVpcEndpointServicePermissions with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyVpcEndpointServicePermissions for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyVpcEndpointServicePermissionsWithContext(ctx aws.Context, input *ModifyVpcEndpointServicePermissionsInput, opts ...request.Option) (*ModifyVpcEndpointServicePermissionsOutput, error) { + req, out := c.ModifyVpcEndpointServicePermissionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opModifyVpcPeeringConnectionOptions = "ModifyVpcPeeringConnectionOptions" // ModifyVpcPeeringConnectionOptionsRequest generates a "aws/request.Request" representing the @@ -16878,7 +18994,7 @@ const opModifyVpcPeeringConnectionOptions = "ModifyVpcPeeringConnectionOptions" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcPeeringConnectionOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcPeeringConnectionOptions func (c *EC2) ModifyVpcPeeringConnectionOptionsRequest(input *ModifyVpcPeeringConnectionOptionsInput) (req *request.Request, output *ModifyVpcPeeringConnectionOptionsOutput) { op := &request.Operation{ Name: opModifyVpcPeeringConnectionOptions, @@ -16924,7 +19040,7 @@ func (c *EC2) ModifyVpcPeeringConnectionOptionsRequest(input *ModifyVpcPeeringCo // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ModifyVpcPeeringConnectionOptions for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcPeeringConnectionOptions +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcPeeringConnectionOptions func (c *EC2) ModifyVpcPeeringConnectionOptions(input *ModifyVpcPeeringConnectionOptionsInput) (*ModifyVpcPeeringConnectionOptionsOutput, error) { req, out := c.ModifyVpcPeeringConnectionOptionsRequest(input) return out, req.Send() @@ -16946,6 +19062,89 @@ func (c *EC2) ModifyVpcPeeringConnectionOptionsWithContext(ctx aws.Context, inpu return out, req.Send() } +const opModifyVpcTenancy = "ModifyVpcTenancy" + +// ModifyVpcTenancyRequest generates a "aws/request.Request" representing the +// client's request for the ModifyVpcTenancy operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyVpcTenancy for more information on using the ModifyVpcTenancy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyVpcTenancyRequest method. +// req, resp := client.ModifyVpcTenancyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcTenancy +func (c *EC2) ModifyVpcTenancyRequest(input *ModifyVpcTenancyInput) (req *request.Request, output *ModifyVpcTenancyOutput) { + op := &request.Operation{ + Name: opModifyVpcTenancy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyVpcTenancyInput{} + } + + output = &ModifyVpcTenancyOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyVpcTenancy API operation for Amazon Elastic Compute Cloud. +// +// Modifies the instance tenancy attribute of the specified VPC. You can change +// the instance tenancy attribute of a VPC to default only. You cannot change +// the instance tenancy attribute to dedicated. +// +// After you modify the tenancy of the VPC, any new instances that you launch +// into the VPC have a tenancy of default, unless you specify otherwise during +// launch. 
The tenancy of any existing instances in the VPC is not affected. +// +// For more information about Dedicated Instances, see Dedicated Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html) +// in the Amazon Elastic Compute Cloud User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ModifyVpcTenancy for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcTenancy +func (c *EC2) ModifyVpcTenancy(input *ModifyVpcTenancyInput) (*ModifyVpcTenancyOutput, error) { + req, out := c.ModifyVpcTenancyRequest(input) + return out, req.Send() +} + +// ModifyVpcTenancyWithContext is the same as ModifyVpcTenancy with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyVpcTenancy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ModifyVpcTenancyWithContext(ctx aws.Context, input *ModifyVpcTenancyInput, opts ...request.Option) (*ModifyVpcTenancyOutput, error) { + req, out := c.ModifyVpcTenancyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opMonitorInstances = "MonitorInstances" // MonitorInstancesRequest generates a "aws/request.Request" representing the @@ -16971,7 +19170,7 @@ const opMonitorInstances = "MonitorInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MonitorInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MonitorInstances func (c *EC2) MonitorInstancesRequest(input *MonitorInstancesInput) (req *request.Request, output *MonitorInstancesOutput) { op := &request.Operation{ Name: opMonitorInstances, @@ -17003,7 +19202,7 @@ func (c *EC2) MonitorInstancesRequest(input *MonitorInstancesInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation MonitorInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MonitorInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MonitorInstances func (c *EC2) MonitorInstances(input *MonitorInstancesInput) (*MonitorInstancesOutput, error) { req, out := c.MonitorInstancesRequest(input) return out, req.Send() @@ -17050,7 +19249,7 @@ const opMoveAddressToVpc = "MoveAddressToVpc" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MoveAddressToVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MoveAddressToVpc func (c *EC2) MoveAddressToVpcRequest(input *MoveAddressToVpcInput) (req *request.Request, output *MoveAddressToVpcOutput) { op := &request.Operation{ Name: opMoveAddressToVpc, @@ -17083,7 +19282,7 @@ func (c *EC2) MoveAddressToVpcRequest(input *MoveAddressToVpcInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation MoveAddressToVpc for usage and error information. 
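A minimal sketch of the ModifyVpcTenancy operation described above, changing a dedicated-tenancy VPC back to default tenancy; the VPC ID is a placeholder.

```
// resetVpcTenancy changes the instance tenancy attribute of a VPC to "default",
// the only value the operation accepts.
func resetVpcTenancy(svc *ec2.EC2, vpcID string) error {
	_, err := svc.ModifyVpcTenancy(&ec2.ModifyVpcTenancyInput{
		VpcId:           aws.String(vpcID),
		InstanceTenancy: aws.String("default"),
	})
	return err
}
```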
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MoveAddressToVpc +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MoveAddressToVpc func (c *EC2) MoveAddressToVpc(input *MoveAddressToVpcInput) (*MoveAddressToVpcOutput, error) { req, out := c.MoveAddressToVpcRequest(input) return out, req.Send() @@ -17130,7 +19329,7 @@ const opPurchaseHostReservation = "PurchaseHostReservation" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseHostReservation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseHostReservation func (c *EC2) PurchaseHostReservationRequest(input *PurchaseHostReservationInput) (req *request.Request, output *PurchaseHostReservationOutput) { op := &request.Operation{ Name: opPurchaseHostReservation, @@ -17160,7 +19359,7 @@ func (c *EC2) PurchaseHostReservationRequest(input *PurchaseHostReservationInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation PurchaseHostReservation for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseHostReservation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseHostReservation func (c *EC2) PurchaseHostReservation(input *PurchaseHostReservationInput) (*PurchaseHostReservationOutput, error) { req, out := c.PurchaseHostReservationRequest(input) return out, req.Send() @@ -17207,7 +19406,7 @@ const opPurchaseReservedInstancesOffering = "PurchaseReservedInstancesOffering" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseReservedInstancesOffering +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseReservedInstancesOffering func (c *EC2) PurchaseReservedInstancesOfferingRequest(input *PurchaseReservedInstancesOfferingInput) (req *request.Request, output *PurchaseReservedInstancesOfferingOutput) { op := &request.Operation{ Name: opPurchaseReservedInstancesOffering, @@ -17243,7 +19442,7 @@ func (c *EC2) PurchaseReservedInstancesOfferingRequest(input *PurchaseReservedIn // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation PurchaseReservedInstancesOffering for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseReservedInstancesOffering +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseReservedInstancesOffering func (c *EC2) PurchaseReservedInstancesOffering(input *PurchaseReservedInstancesOfferingInput) (*PurchaseReservedInstancesOfferingOutput, error) { req, out := c.PurchaseReservedInstancesOfferingRequest(input) return out, req.Send() @@ -17290,7 +19489,7 @@ const opPurchaseScheduledInstances = "PurchaseScheduledInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseScheduledInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseScheduledInstances func (c *EC2) PurchaseScheduledInstancesRequest(input *PurchaseScheduledInstancesInput) (req *request.Request, output *PurchaseScheduledInstancesOutput) { op := &request.Operation{ Name: opPurchaseScheduledInstances, @@ -17326,7 +19525,7 @@ func (c *EC2) PurchaseScheduledInstancesRequest(input *PurchaseScheduledInstance // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation PurchaseScheduledInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseScheduledInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseScheduledInstances func (c *EC2) PurchaseScheduledInstances(input *PurchaseScheduledInstancesInput) (*PurchaseScheduledInstancesOutput, error) { req, out := c.PurchaseScheduledInstancesRequest(input) return out, req.Send() @@ -17373,7 +19572,7 @@ const opRebootInstances = "RebootInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RebootInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RebootInstances func (c *EC2) RebootInstancesRequest(input *RebootInstancesInput) (req *request.Request, output *RebootInstancesOutput) { op := &request.Operation{ Name: opRebootInstances, @@ -17412,7 +19611,7 @@ func (c *EC2) RebootInstancesRequest(input *RebootInstancesInput) (req *request. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RebootInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RebootInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RebootInstances func (c *EC2) RebootInstances(input *RebootInstancesInput) (*RebootInstancesOutput, error) { req, out := c.RebootInstancesRequest(input) return out, req.Send() @@ -17459,7 +19658,7 @@ const opRegisterImage = "RegisterImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RegisterImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RegisterImage func (c *EC2) RegisterImageRequest(input *RegisterImageInput) (req *request.Request, output *RegisterImageOutput) { op := &request.Operation{ Name: opRegisterImage, @@ -17514,7 +19713,7 @@ func (c *EC2) RegisterImageRequest(input *RegisterImageInput) (req *request.Requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RegisterImage for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RegisterImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RegisterImage func (c *EC2) RegisterImage(input *RegisterImageInput) (*RegisterImageOutput, error) { req, out := c.RegisterImageRequest(input) return out, req.Send() @@ -17536,6 +19735,81 @@ func (c *EC2) RegisterImageWithContext(ctx aws.Context, input *RegisterImageInpu return out, req.Send() } +const opRejectVpcEndpointConnections = "RejectVpcEndpointConnections" + +// RejectVpcEndpointConnectionsRequest generates a "aws/request.Request" representing the +// client's request for the RejectVpcEndpointConnections operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See RejectVpcEndpointConnections for more information on using the RejectVpcEndpointConnections +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the RejectVpcEndpointConnectionsRequest method. +// req, resp := client.RejectVpcEndpointConnectionsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcEndpointConnections +func (c *EC2) RejectVpcEndpointConnectionsRequest(input *RejectVpcEndpointConnectionsInput) (req *request.Request, output *RejectVpcEndpointConnectionsOutput) { + op := &request.Operation{ + Name: opRejectVpcEndpointConnections, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &RejectVpcEndpointConnectionsInput{} + } + + output = &RejectVpcEndpointConnectionsOutput{} + req = c.newRequest(op, input, output) + return +} + +// RejectVpcEndpointConnections API operation for Amazon Elastic Compute Cloud. +// +// Rejects one or more VPC endpoint connection requests to your VPC endpoint +// service. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation RejectVpcEndpointConnections for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcEndpointConnections +func (c *EC2) RejectVpcEndpointConnections(input *RejectVpcEndpointConnectionsInput) (*RejectVpcEndpointConnectionsOutput, error) { + req, out := c.RejectVpcEndpointConnectionsRequest(input) + return out, req.Send() +} + +// RejectVpcEndpointConnectionsWithContext is the same as RejectVpcEndpointConnections with the addition of +// the ability to pass a context and additional request options. +// +// See RejectVpcEndpointConnections for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
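For the RejectVpcEndpointConnections operation described above, a hedged sketch that rejects pending endpoint connection requests; the service and endpoint IDs are placeholders and the field names are assumed from the input struct.

```
// rejectPendingConnections rejects connection requests to an endpoint service.
func rejectPendingConnections(svc *ec2.EC2, serviceID string, endpointIDs []string) error {
	_, err := svc.RejectVpcEndpointConnections(&ec2.RejectVpcEndpointConnectionsInput{
		ServiceId:      aws.String(serviceID),
		VpcEndpointIds: aws.StringSlice(endpointIDs),
	})
	return err
}
```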
+func (c *EC2) RejectVpcEndpointConnectionsWithContext(ctx aws.Context, input *RejectVpcEndpointConnectionsInput, opts ...request.Option) (*RejectVpcEndpointConnectionsOutput, error) { + req, out := c.RejectVpcEndpointConnectionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opRejectVpcPeeringConnection = "RejectVpcPeeringConnection" // RejectVpcPeeringConnectionRequest generates a "aws/request.Request" representing the @@ -17561,7 +19835,7 @@ const opRejectVpcPeeringConnection = "RejectVpcPeeringConnection" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcPeeringConnection func (c *EC2) RejectVpcPeeringConnectionRequest(input *RejectVpcPeeringConnectionInput) (req *request.Request, output *RejectVpcPeeringConnectionOutput) { op := &request.Operation{ Name: opRejectVpcPeeringConnection, @@ -17592,7 +19866,7 @@ func (c *EC2) RejectVpcPeeringConnectionRequest(input *RejectVpcPeeringConnectio // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RejectVpcPeeringConnection for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcPeeringConnection +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcPeeringConnection func (c *EC2) RejectVpcPeeringConnection(input *RejectVpcPeeringConnectionInput) (*RejectVpcPeeringConnectionOutput, error) { req, out := c.RejectVpcPeeringConnectionRequest(input) return out, req.Send() @@ -17639,7 +19913,7 @@ const opReleaseAddress = "ReleaseAddress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseAddress func (c *EC2) ReleaseAddressRequest(input *ReleaseAddressInput) (req *request.Request, output *ReleaseAddressOutput) { op := &request.Operation{ Name: opReleaseAddress, @@ -17662,19 +19936,22 @@ func (c *EC2) ReleaseAddressRequest(input *ReleaseAddressInput) (req *request.Re // // Releases the specified Elastic IP address. // -// After releasing an Elastic IP address, it is released to the IP address pool -// and might be unavailable to you. Be sure to update your DNS records and any -// servers or devices that communicate with the address. If you attempt to release -// an Elastic IP address that you already released, you'll get an AuthFailure -// error if the address is already allocated to another AWS account. -// // [EC2-Classic, default VPC] Releasing an Elastic IP address automatically // disassociates it from any instance that it's associated with. To disassociate // an Elastic IP address without releasing it, use DisassociateAddress. // // [Nondefault VPC] You must use DisassociateAddress to disassociate the Elastic -// IP address before you try to release it. Otherwise, Amazon EC2 returns an -// error (InvalidIPAddress.InUse). +// IP address before you can release it. Otherwise, Amazon EC2 returns an error +// (InvalidIPAddress.InUse). +// +// After releasing an Elastic IP address, it is released to the IP address pool. +// Be sure to update your DNS records and any servers or devices that communicate +// with the address. 
If you attempt to release an Elastic IP address that you +// already released, you'll get an AuthFailure error if the address is already +// allocated to another AWS account. +// +// [EC2-VPC] After you release an Elastic IP address for use in a VPC, you might +// be able to recover it. For more information, see AllocateAddress. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -17682,7 +19959,7 @@ func (c *EC2) ReleaseAddressRequest(input *ReleaseAddressInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReleaseAddress for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseAddress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseAddress func (c *EC2) ReleaseAddress(input *ReleaseAddressInput) (*ReleaseAddressOutput, error) { req, out := c.ReleaseAddressRequest(input) return out, req.Send() @@ -17729,7 +20006,7 @@ const opReleaseHosts = "ReleaseHosts" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseHosts func (c *EC2) ReleaseHostsRequest(input *ReleaseHostsInput) (req *request.Request, output *ReleaseHostsOutput) { op := &request.Operation{ Name: opReleaseHosts, @@ -17767,7 +20044,7 @@ func (c *EC2) ReleaseHostsRequest(input *ReleaseHostsInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReleaseHosts for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseHosts +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseHosts func (c *EC2) ReleaseHosts(input *ReleaseHostsInput) (*ReleaseHostsOutput, error) { req, out := c.ReleaseHostsRequest(input) return out, req.Send() @@ -17814,7 +20091,7 @@ const opReplaceIamInstanceProfileAssociation = "ReplaceIamInstanceProfileAssocia // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceIamInstanceProfileAssociation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceIamInstanceProfileAssociation func (c *EC2) ReplaceIamInstanceProfileAssociationRequest(input *ReplaceIamInstanceProfileAssociationInput) (req *request.Request, output *ReplaceIamInstanceProfileAssociationOutput) { op := &request.Operation{ Name: opReplaceIamInstanceProfileAssociation, @@ -17846,7 +20123,7 @@ func (c *EC2) ReplaceIamInstanceProfileAssociationRequest(input *ReplaceIamInsta // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReplaceIamInstanceProfileAssociation for usage and error information. 
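The revised ReleaseAddress documentation above covers both the AuthFailure case and the new ability to recover a released EC2-VPC address through AllocateAddress. A hedged sketch of that round trip follows; ReleaseAddressInput.AllocationId is the SDK's field name (not shown in this hunk), and the Address recovery parameter is the one introduced on AllocateAddressInput later in this diff. IDs and the IP are placeholders.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Release a VPC Elastic IP by its allocation ID.
	_, err := svc.ReleaseAddress(&ec2.ReleaseAddressInput{
		AllocationId: aws.String("eipalloc-0123456789abcdef0"),
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			// For example, AuthFailure if the address is already allocated to another account.
			log.Fatalf("release failed: %s: %s", aerr.Code(), aerr.Message())
		}
		log.Fatal(err)
	}

	// [EC2-VPC] Attempt to recover the released address via the new Address parameter.
	out, err := svc.AllocateAddress(&ec2.AllocateAddressInput{
		Domain:  aws.String("vpc"),
		Address: aws.String("198.51.100.10"), // the public IP that was just released
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("recovered allocation:", aws.StringValue(out.AllocationId))
}
```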
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceIamInstanceProfileAssociation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceIamInstanceProfileAssociation func (c *EC2) ReplaceIamInstanceProfileAssociation(input *ReplaceIamInstanceProfileAssociationInput) (*ReplaceIamInstanceProfileAssociationOutput, error) { req, out := c.ReplaceIamInstanceProfileAssociationRequest(input) return out, req.Send() @@ -17893,7 +20170,7 @@ const opReplaceNetworkAclAssociation = "ReplaceNetworkAclAssociation" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclAssociation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclAssociation func (c *EC2) ReplaceNetworkAclAssociationRequest(input *ReplaceNetworkAclAssociationInput) (req *request.Request, output *ReplaceNetworkAclAssociationOutput) { op := &request.Operation{ Name: opReplaceNetworkAclAssociation, @@ -17917,13 +20194,15 @@ func (c *EC2) ReplaceNetworkAclAssociationRequest(input *ReplaceNetworkAclAssoci // For more information about network ACLs, see Network ACLs (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html) // in the Amazon Virtual Private Cloud User Guide. // +// This is an idempotent operation. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReplaceNetworkAclAssociation for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclAssociation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclAssociation func (c *EC2) ReplaceNetworkAclAssociation(input *ReplaceNetworkAclAssociationInput) (*ReplaceNetworkAclAssociationOutput, error) { req, out := c.ReplaceNetworkAclAssociationRequest(input) return out, req.Send() @@ -17970,7 +20249,7 @@ const opReplaceNetworkAclEntry = "ReplaceNetworkAclEntry" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclEntry +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclEntry func (c *EC2) ReplaceNetworkAclEntryRequest(input *ReplaceNetworkAclEntryInput) (req *request.Request, output *ReplaceNetworkAclEntryOutput) { op := &request.Operation{ Name: opReplaceNetworkAclEntry, @@ -18001,7 +20280,7 @@ func (c *EC2) ReplaceNetworkAclEntryRequest(input *ReplaceNetworkAclEntryInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReplaceNetworkAclEntry for usage and error information. 
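The ReplaceNetworkAclAssociation text above now notes that the operation is idempotent; it swaps which network ACL a subnet association points at and returns a new association ID. A small hedged sketch follows, with the AssociationId/NetworkAclId field names and the NewAssociationId output field assumed from the SDK rather than shown in this hunk.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Point an existing subnet/ACL association at a different network ACL.
	out, err := svc.ReplaceNetworkAclAssociation(&ec2.ReplaceNetworkAclAssociationInput{
		AssociationId: aws.String("aclassoc-0123456789abcdef0"), // current association
		NetworkAclId:  aws.String("acl-0123456789abcdef0"),      // ACL to associate instead
	})
	if err != nil {
		log.Fatal(err)
	}
	// Keep the returned association ID; it is the handle for later replacements.
	fmt.Println("new association:", aws.StringValue(out.NewAssociationId))
}
```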
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclEntry +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclEntry func (c *EC2) ReplaceNetworkAclEntry(input *ReplaceNetworkAclEntryInput) (*ReplaceNetworkAclEntryOutput, error) { req, out := c.ReplaceNetworkAclEntryRequest(input) return out, req.Send() @@ -18048,7 +20327,7 @@ const opReplaceRoute = "ReplaceRoute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRoute func (c *EC2) ReplaceRouteRequest(input *ReplaceRouteInput) (req *request.Request, output *ReplaceRouteOutput) { op := &request.Operation{ Name: opReplaceRoute, @@ -18083,7 +20362,7 @@ func (c *EC2) ReplaceRouteRequest(input *ReplaceRouteInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReplaceRoute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRoute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRoute func (c *EC2) ReplaceRoute(input *ReplaceRouteInput) (*ReplaceRouteOutput, error) { req, out := c.ReplaceRouteRequest(input) return out, req.Send() @@ -18130,7 +20409,7 @@ const opReplaceRouteTableAssociation = "ReplaceRouteTableAssociation" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteTableAssociation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteTableAssociation func (c *EC2) ReplaceRouteTableAssociationRequest(input *ReplaceRouteTableAssociationInput) (req *request.Request, output *ReplaceRouteTableAssociationOutput) { op := &request.Operation{ Name: opReplaceRouteTableAssociation, @@ -18165,7 +20444,7 @@ func (c *EC2) ReplaceRouteTableAssociationRequest(input *ReplaceRouteTableAssoci // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReplaceRouteTableAssociation for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteTableAssociation +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteTableAssociation func (c *EC2) ReplaceRouteTableAssociation(input *ReplaceRouteTableAssociationInput) (*ReplaceRouteTableAssociationOutput, error) { req, out := c.ReplaceRouteTableAssociationRequest(input) return out, req.Send() @@ -18212,7 +20491,7 @@ const opReportInstanceStatus = "ReportInstanceStatus" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReportInstanceStatus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReportInstanceStatus func (c *EC2) ReportInstanceStatusRequest(input *ReportInstanceStatusInput) (req *request.Request, output *ReportInstanceStatusOutput) { op := &request.Operation{ Name: opReportInstanceStatus, @@ -18247,7 +20526,7 @@ func (c *EC2) ReportInstanceStatusRequest(input *ReportInstanceStatusInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ReportInstanceStatus for usage and error information. 
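ReplaceRoute and ReplaceRouteTableAssociation are only touched here for the doc-link rewording, but as a reminder of how the former is typically driven, here is a hedged sketch that re-targets an existing route; the RouteTableId, DestinationCidrBlock, and NatGatewayId field names are assumed from the SDK rather than shown in this hunk, and the IDs are placeholders.

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Re-target the default route in a route table at a NAT gateway.
	_, err := svc.ReplaceRoute(&ec2.ReplaceRouteInput{
		RouteTableId:         aws.String("rtb-0123456789abcdef0"),
		DestinationCidrBlock: aws.String("0.0.0.0/0"),
		NatGatewayId:         aws.String("nat-0123456789abcdef0"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("route replaced")
}
```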
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReportInstanceStatus +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReportInstanceStatus func (c *EC2) ReportInstanceStatus(input *ReportInstanceStatusInput) (*ReportInstanceStatusOutput, error) { req, out := c.ReportInstanceStatusRequest(input) return out, req.Send() @@ -18294,7 +20573,7 @@ const opRequestSpotFleet = "RequestSpotFleet" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotFleet +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotFleet func (c *EC2) RequestSpotFleetRequest(input *RequestSpotFleetInput) (req *request.Request, output *RequestSpotFleetOutput) { op := &request.Operation{ Name: opRequestSpotFleet, @@ -18313,21 +20592,24 @@ func (c *EC2) RequestSpotFleetRequest(input *RequestSpotFleetInput) (req *reques // RequestSpotFleet API operation for Amazon Elastic Compute Cloud. // -// Creates a Spot fleet request. +// Creates a Spot Fleet request. // // You can submit a single request that includes multiple launch specifications // that vary by instance type, AMI, Availability Zone, or subnet. // -// By default, the Spot fleet requests Spot instances in the Spot pool where +// By default, the Spot Fleet requests Spot Instances in the Spot pool where // the price per unit is the lowest. Each launch specification can include its // own instance weighting that reflects the value of the instance type to your // application workload. // -// Alternatively, you can specify that the Spot fleet distribute the target +// Alternatively, you can specify that the Spot Fleet distribute the target // capacity across the Spot pools included in its launch specifications. By -// ensuring that the Spot instances in your Spot fleet are in different Spot +// ensuring that the Spot Instances in your Spot Fleet are in different Spot // pools, you can improve the availability of your fleet. // +// You can specify tags for the Spot Instances. You cannot tag other resource +// types in a Spot Fleet request; only the instance resource type is supported. +// // For more information, see Spot Fleet Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet-requests.html) // in the Amazon Elastic Compute Cloud User Guide. // @@ -18337,7 +20619,7 @@ func (c *EC2) RequestSpotFleetRequest(input *RequestSpotFleetInput) (req *reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RequestSpotFleet for usage and error information. 
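The RequestSpotFleet documentation above adopts the "Spot Fleet"/"Spot Instances" capitalization and notes that only the instance resource type can be tagged through a fleet request. A hedged sketch of a diversified fleet request follows; SpotFleetRequestConfigData and its fields (IamFleetRole, TargetCapacity, AllocationStrategy, LaunchSpecifications) come from the SDK but are not part of this hunk, so treat the exact names, ARN, and AMI IDs as assumptions.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Two launch specifications in different pools; "diversified" spreads capacity across them.
	out, err := svc.RequestSpotFleet(&ec2.RequestSpotFleetInput{
		SpotFleetRequestConfig: &ec2.SpotFleetRequestConfigData{
			IamFleetRole:       aws.String("arn:aws:iam::123456789012:role/my-spot-fleet-role"),
			TargetCapacity:     aws.Int64(4),
			AllocationStrategy: aws.String("diversified"),
			LaunchSpecifications: []*ec2.SpotFleetLaunchSpecification{
				{ImageId: aws.String("ami-12345678"), InstanceType: aws.String("m3.medium")},
				{ImageId: aws.String("ami-12345678"), InstanceType: aws.String("m4.large")},
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("spot fleet request:", aws.StringValue(out.SpotFleetRequestId))
}
```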
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotFleet +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotFleet func (c *EC2) RequestSpotFleet(input *RequestSpotFleetInput) (*RequestSpotFleetOutput, error) { req, out := c.RequestSpotFleetRequest(input) return out, req.Send() @@ -18384,7 +20666,7 @@ const opRequestSpotInstances = "RequestSpotInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotInstances func (c *EC2) RequestSpotInstancesRequest(input *RequestSpotInstancesInput) (req *request.Request, output *RequestSpotInstancesOutput) { op := &request.Operation{ Name: opRequestSpotInstances, @@ -18403,11 +20685,9 @@ func (c *EC2) RequestSpotInstancesRequest(input *RequestSpotInstancesInput) (req // RequestSpotInstances API operation for Amazon Elastic Compute Cloud. // -// Creates a Spot instance request. Spot instances are instances that Amazon -// EC2 launches when the bid price that you specify exceeds the current Spot -// price. Amazon EC2 periodically sets the Spot price based on available Spot -// Instance capacity and current Spot instance requests. For more information, -// see Spot Instance Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) +// Creates a Spot Instance request. Spot Instances are instances that Amazon +// EC2 launches when the maximum price that you specify exceeds the current +// Spot price. For more information, see Spot Instance Requests (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html) // in the Amazon Elastic Compute Cloud User Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -18416,7 +20696,7 @@ func (c *EC2) RequestSpotInstancesRequest(input *RequestSpotInstancesInput) (req // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RequestSpotInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotInstances func (c *EC2) RequestSpotInstances(input *RequestSpotInstancesInput) (*RequestSpotInstancesOutput, error) { req, out := c.RequestSpotInstancesRequest(input) return out, req.Send() @@ -18438,6 +20718,81 @@ func (c *EC2) RequestSpotInstancesWithContext(ctx aws.Context, input *RequestSpo return out, req.Send() } +const opResetFpgaImageAttribute = "ResetFpgaImageAttribute" + +// ResetFpgaImageAttributeRequest generates a "aws/request.Request" representing the +// client's request for the ResetFpgaImageAttribute operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ResetFpgaImageAttribute for more information on using the ResetFpgaImageAttribute +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ResetFpgaImageAttributeRequest method. 
+// req, resp := client.ResetFpgaImageAttributeRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetFpgaImageAttribute +func (c *EC2) ResetFpgaImageAttributeRequest(input *ResetFpgaImageAttributeInput) (req *request.Request, output *ResetFpgaImageAttributeOutput) { + op := &request.Operation{ + Name: opResetFpgaImageAttribute, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ResetFpgaImageAttributeInput{} + } + + output = &ResetFpgaImageAttributeOutput{} + req = c.newRequest(op, input, output) + return +} + +// ResetFpgaImageAttribute API operation for Amazon Elastic Compute Cloud. +// +// Resets the specified attribute of the specified Amazon FPGA Image (AFI) to +// its default value. You can only reset the load permission attribute. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation ResetFpgaImageAttribute for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetFpgaImageAttribute +func (c *EC2) ResetFpgaImageAttribute(input *ResetFpgaImageAttributeInput) (*ResetFpgaImageAttributeOutput, error) { + req, out := c.ResetFpgaImageAttributeRequest(input) + return out, req.Send() +} + +// ResetFpgaImageAttributeWithContext is the same as ResetFpgaImageAttribute with the addition of +// the ability to pass a context and additional request options. +// +// See ResetFpgaImageAttribute for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) ResetFpgaImageAttributeWithContext(ctx aws.Context, input *ResetFpgaImageAttributeInput, opts ...request.Option) (*ResetFpgaImageAttributeOutput, error) { + req, out := c.ResetFpgaImageAttributeRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opResetImageAttribute = "ResetImageAttribute" // ResetImageAttributeRequest generates a "aws/request.Request" representing the @@ -18463,7 +20818,7 @@ const opResetImageAttribute = "ResetImageAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetImageAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetImageAttribute func (c *EC2) ResetImageAttributeRequest(input *ResetImageAttributeInput) (req *request.Request, output *ResetImageAttributeOutput) { op := &request.Operation{ Name: opResetImageAttribute, @@ -18494,7 +20849,7 @@ func (c *EC2) ResetImageAttributeRequest(input *ResetImageAttributeInput) (req * // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ResetImageAttribute for usage and error information. 
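RequestSpotInstances, reworded above in terms of a maximum price rather than a bid, is typically driven with a launch specification plus an optional price ceiling. A minimal hedged sketch follows; RequestSpotLaunchSpecification and the SpotPrice/InstanceCount fields are assumed from the SDK rather than shown in this hunk, and the AMI ID is a placeholder.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	out, err := svc.RequestSpotInstances(&ec2.RequestSpotInstancesInput{
		SpotPrice:     aws.String("0.05"), // maximum price per instance hour
		InstanceCount: aws.Int64(1),
		LaunchSpecification: &ec2.RequestSpotLaunchSpecification{
			ImageId:      aws.String("ami-12345678"),
			InstanceType: aws.String("t2.micro"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range out.SpotInstanceRequests {
		fmt.Println("spot request:", aws.StringValue(r.SpotInstanceRequestId))
	}
}
```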
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetImageAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetImageAttribute func (c *EC2) ResetImageAttribute(input *ResetImageAttributeInput) (*ResetImageAttributeOutput, error) { req, out := c.ResetImageAttributeRequest(input) return out, req.Send() @@ -18541,7 +20896,7 @@ const opResetInstanceAttribute = "ResetInstanceAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetInstanceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetInstanceAttribute func (c *EC2) ResetInstanceAttributeRequest(input *ResetInstanceAttributeInput) (req *request.Request, output *ResetInstanceAttributeOutput) { op := &request.Operation{ Name: opResetInstanceAttribute, @@ -18578,7 +20933,7 @@ func (c *EC2) ResetInstanceAttributeRequest(input *ResetInstanceAttributeInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ResetInstanceAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetInstanceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetInstanceAttribute func (c *EC2) ResetInstanceAttribute(input *ResetInstanceAttributeInput) (*ResetInstanceAttributeOutput, error) { req, out := c.ResetInstanceAttributeRequest(input) return out, req.Send() @@ -18625,7 +20980,7 @@ const opResetNetworkInterfaceAttribute = "ResetNetworkInterfaceAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetNetworkInterfaceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetNetworkInterfaceAttribute func (c *EC2) ResetNetworkInterfaceAttributeRequest(input *ResetNetworkInterfaceAttributeInput) (req *request.Request, output *ResetNetworkInterfaceAttributeOutput) { op := &request.Operation{ Name: opResetNetworkInterfaceAttribute, @@ -18655,7 +21010,7 @@ func (c *EC2) ResetNetworkInterfaceAttributeRequest(input *ResetNetworkInterface // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ResetNetworkInterfaceAttribute for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetNetworkInterfaceAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetNetworkInterfaceAttribute func (c *EC2) ResetNetworkInterfaceAttribute(input *ResetNetworkInterfaceAttributeInput) (*ResetNetworkInterfaceAttributeOutput, error) { req, out := c.ResetNetworkInterfaceAttributeRequest(input) return out, req.Send() @@ -18702,7 +21057,7 @@ const opResetSnapshotAttribute = "ResetSnapshotAttribute" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetSnapshotAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetSnapshotAttribute func (c *EC2) ResetSnapshotAttributeRequest(input *ResetSnapshotAttributeInput) (req *request.Request, output *ResetSnapshotAttributeOutput) { op := &request.Operation{ Name: opResetSnapshotAttribute, @@ -18735,7 +21090,7 @@ func (c *EC2) ResetSnapshotAttributeRequest(input *ResetSnapshotAttributeInput) // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation ResetSnapshotAttribute for usage and error information. 
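ResetImageAttribute and the other Reset* operations above put a single attribute back to its default; for an AMI that attribute is launchPermission. A hedged sketch follows, with the ImageId/Attribute field names and the "launchPermission" value assumed from the SDK rather than shown in this hunk.

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Remove all launch permissions from an AMI by resetting the attribute to its default.
	_, err := svc.ResetImageAttribute(&ec2.ResetImageAttributeInput{
		ImageId:   aws.String("ami-12345678"),
		Attribute: aws.String("launchPermission"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("launch permissions reset")
}
```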
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetSnapshotAttribute +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetSnapshotAttribute func (c *EC2) ResetSnapshotAttribute(input *ResetSnapshotAttributeInput) (*ResetSnapshotAttributeOutput, error) { req, out := c.ResetSnapshotAttributeRequest(input) return out, req.Send() @@ -18782,7 +21137,7 @@ const opRestoreAddressToClassic = "RestoreAddressToClassic" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RestoreAddressToClassic +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RestoreAddressToClassic func (c *EC2) RestoreAddressToClassicRequest(input *RestoreAddressToClassicInput) (req *request.Request, output *RestoreAddressToClassicOutput) { op := &request.Operation{ Name: opRestoreAddressToClassic, @@ -18812,7 +21167,7 @@ func (c *EC2) RestoreAddressToClassicRequest(input *RestoreAddressToClassicInput // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RestoreAddressToClassic for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RestoreAddressToClassic +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RestoreAddressToClassic func (c *EC2) RestoreAddressToClassic(input *RestoreAddressToClassicInput) (*RestoreAddressToClassicOutput, error) { req, out := c.RestoreAddressToClassicRequest(input) return out, req.Send() @@ -18859,7 +21214,7 @@ const opRevokeSecurityGroupEgress = "RevokeSecurityGroupEgress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupEgress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupEgress func (c *EC2) RevokeSecurityGroupEgressRequest(input *RevokeSecurityGroupEgressInput) (req *request.Request, output *RevokeSecurityGroupEgressOutput) { op := &request.Operation{ Name: opRevokeSecurityGroupEgress, @@ -18882,13 +21237,14 @@ func (c *EC2) RevokeSecurityGroupEgressRequest(input *RevokeSecurityGroupEgressI // // [EC2-VPC only] Removes one or more egress rules from a security group for // EC2-VPC. This action doesn't apply to security groups for use in EC2-Classic. -// The values that you specify in the revoke request (for example, ports) must -// match the existing rule's values for the rule to be revoked. +// To remove a rule, the values that you specify (for example, ports) must match +// the existing rule's values exactly. // // Each rule consists of the protocol and the IPv4 or IPv6 CIDR range or source // security group. For the TCP and UDP protocols, you must also specify the // destination port or range of ports. For the ICMP protocol, you must also -// specify the ICMP type and code. +// specify the ICMP type and code. If the security group rule has a description, +// you do not have to specify the description to revoke the rule. // // Rule changes are propagated to instances within the security group as quickly // as possible. However, a small delay might occur. @@ -18899,7 +21255,7 @@ func (c *EC2) RevokeSecurityGroupEgressRequest(input *RevokeSecurityGroupEgressI // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RevokeSecurityGroupEgress for usage and error information. 
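The revised RevokeSecurityGroupEgress wording above stresses that the values in the revoke call must match the existing rule exactly, except that a rule's description never has to be repeated. A hedged sketch that removes one egress rule follows; the GroupId/IpPermissions fields and the IpPermission/IpRange shapes are the SDK's, not shown in this hunk, and the group ID is a placeholder.

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// The protocol, port range, and CIDR must match the existing rule exactly;
	// the rule's description (if any) can be omitted.
	_, err := svc.RevokeSecurityGroupEgress(&ec2.RevokeSecurityGroupEgressInput{
		GroupId: aws.String("sg-0123456789abcdef0"),
		IpPermissions: []*ec2.IpPermission{{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(443),
			ToPort:     aws.Int64(443),
			IpRanges:   []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("egress rule revoked")
}
```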
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupEgress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupEgress func (c *EC2) RevokeSecurityGroupEgress(input *RevokeSecurityGroupEgressInput) (*RevokeSecurityGroupEgressOutput, error) { req, out := c.RevokeSecurityGroupEgressRequest(input) return out, req.Send() @@ -18946,7 +21302,7 @@ const opRevokeSecurityGroupIngress = "RevokeSecurityGroupIngress" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupIngress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupIngress func (c *EC2) RevokeSecurityGroupIngressRequest(input *RevokeSecurityGroupIngressInput) (req *request.Request, output *RevokeSecurityGroupIngressOutput) { op := &request.Operation{ Name: opRevokeSecurityGroupIngress, @@ -18967,14 +21323,19 @@ func (c *EC2) RevokeSecurityGroupIngressRequest(input *RevokeSecurityGroupIngres // RevokeSecurityGroupIngress API operation for Amazon Elastic Compute Cloud. // -// Removes one or more ingress rules from a security group. The values that -// you specify in the revoke request (for example, ports) must match the existing -// rule's values for the rule to be removed. +// Removes one or more ingress rules from a security group. To remove a rule, +// the values that you specify (for example, ports) must match the existing +// rule's values exactly. +// +// [EC2-Classic security groups only] If the values you specify do not match +// the existing rule's values, no error is returned. Use DescribeSecurityGroups +// to verify that the rule has been removed. // // Each rule consists of the protocol and the CIDR range or source security // group. For the TCP and UDP protocols, you must also specify the destination // port or range of ports. For the ICMP protocol, you must also specify the -// ICMP type and code. +// ICMP type and code. If the security group rule has a description, you do +// not have to specify the description to revoke the rule. // // Rule changes are propagated to instances within the security group as quickly // as possible. However, a small delay might occur. @@ -18985,7 +21346,7 @@ func (c *EC2) RevokeSecurityGroupIngressRequest(input *RevokeSecurityGroupIngres // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RevokeSecurityGroupIngress for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupIngress +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupIngress func (c *EC2) RevokeSecurityGroupIngress(input *RevokeSecurityGroupIngressInput) (*RevokeSecurityGroupIngressOutput, error) { req, out := c.RevokeSecurityGroupIngressRequest(input) return out, req.Send() @@ -19032,7 +21393,7 @@ const opRunInstances = "RunInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunInstances func (c *EC2) RunInstancesRequest(input *RunInstancesInput) (req *request.Request, output *Reservation) { op := &request.Operation{ Name: opRunInstances, @@ -19081,9 +21442,14 @@ func (c *EC2) RunInstancesRequest(input *RunInstancesInput) (req *request.Reques // * If any of the AMIs have a product code attached for which the user has // not subscribed, the request fails. 
// +// You can create a launch template (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html), +// which is a resource that contains the parameters to launch an instance. When +// you launch an instance using RunInstances, you can specify the launch template +// instead of specifying the launch parameters. +// // To ensure faster instance launches, break up large requests into smaller -// batches. For example, create 5 separate launch requests for 100 instances -// each instead of 1 launch request for 500 instances. +// batches. For example, create five separate launch requests for 100 instances +// each instead of one launch request for 500 instances. // // An instance is ready for you to use when it's in the running state. You can // check the state of your instance using DescribeInstances. You can tag instances @@ -19107,7 +21473,7 @@ func (c *EC2) RunInstancesRequest(input *RunInstancesInput) (req *request.Reques // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RunInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunInstances func (c *EC2) RunInstances(input *RunInstancesInput) (*Reservation, error) { req, out := c.RunInstancesRequest(input) return out, req.Send() @@ -19154,7 +21520,7 @@ const opRunScheduledInstances = "RunScheduledInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunScheduledInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunScheduledInstances func (c *EC2) RunScheduledInstancesRequest(input *RunScheduledInstancesInput) (req *request.Request, output *RunScheduledInstancesOutput) { op := &request.Operation{ Name: opRunScheduledInstances, @@ -19191,7 +21557,7 @@ func (c *EC2) RunScheduledInstancesRequest(input *RunScheduledInstancesInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation RunScheduledInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunScheduledInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunScheduledInstances func (c *EC2) RunScheduledInstances(input *RunScheduledInstancesInput) (*RunScheduledInstancesOutput, error) { req, out := c.RunScheduledInstancesRequest(input) return out, req.Send() @@ -19238,7 +21604,7 @@ const opStartInstances = "StartInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StartInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StartInstances func (c *EC2) StartInstancesRequest(input *StartInstancesInput) (req *request.Request, output *StartInstancesOutput) { op := &request.Operation{ Name: opStartInstances, @@ -19257,16 +21623,20 @@ func (c *EC2) StartInstancesRequest(input *StartInstancesInput) (req *request.Re // StartInstances API operation for Amazon Elastic Compute Cloud. // -// Starts an Amazon EBS-backed AMI that you've previously stopped. +// Starts an Amazon EBS-backed instance that you've previously stopped. // // Instances that use Amazon EBS volumes as their root devices can be quickly // stopped and started. When an instance is stopped, the compute resources are -// released and you are not billed for hourly instance usage. 
However, your -// root partition Amazon EBS volume remains, continues to persist your data, -// and you are charged for Amazon EBS volume usage. You can restart your instance -// at any time. Each time you transition an instance from stopped to started, -// Amazon EC2 charges a full instance hour, even if transitions happen multiple -// times within a single hour. +// released and you are not billed for instance usage. However, your root partition +// Amazon EBS volume remains and continues to persist your data, and you are +// charged for Amazon EBS volume usage. You can restart your instance at any +// time. Every time you start your Windows instance, Amazon EC2 charges you +// for a full instance hour. If you stop and restart your Windows instance, +// a new instance hour begins and Amazon EC2 charges you for another full instance +// hour even if you are still within the same 60-minute period when it was stopped. +// Every time you start your Linux instance, Amazon EC2 charges a one-minute +// minimum for instance usage, and thereafter charges per second for instance +// usage. // // Before stopping an instance, make sure it is in a state from which it can // be restarted. Stopping an instance does not preserve data stored in RAM. @@ -19283,7 +21653,7 @@ func (c *EC2) StartInstancesRequest(input *StartInstancesInput) (req *request.Re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation StartInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StartInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StartInstances func (c *EC2) StartInstances(input *StartInstancesInput) (*StartInstancesOutput, error) { req, out := c.StartInstancesRequest(input) return out, req.Send() @@ -19330,7 +21700,7 @@ const opStopInstances = "StopInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StopInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StopInstances func (c *EC2) StopInstancesRequest(input *StopInstancesInput) (req *request.Request, output *StopInstancesOutput) { op := &request.Operation{ Name: opStopInstances, @@ -19351,14 +21721,17 @@ func (c *EC2) StopInstancesRequest(input *StopInstancesInput) (req *request.Requ // // Stops an Amazon EBS-backed instance. // -// We don't charge hourly usage for a stopped instance, or data transfer fees; -// however, your root partition Amazon EBS volume remains, continues to persist -// your data, and you are charged for Amazon EBS volume usage. Each time you -// transition an instance from stopped to started, Amazon EC2 charges a full -// instance hour, even if transitions happen multiple times within a single -// hour. +// We don't charge usage for a stopped instance, or data transfer fees; however, +// your root partition Amazon EBS volume remains and continues to persist your +// data, and you are charged for Amazon EBS volume usage. Every time you start +// your Windows instance, Amazon EC2 charges you for a full instance hour. If +// you stop and restart your Windows instance, a new instance hour begins and +// Amazon EC2 charges you for another full instance hour even if you are still +// within the same 60-minute period when it was stopped. Every time you start +// your Linux instance, Amazon EC2 charges a one-minute minimum for instance +// usage, and thereafter charges per second for instance usage. 
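The StartInstances/StopInstances docs above replace the old flat "full instance hour per transition" wording with the split Windows (per-hour) versus Linux (per-second with a one-minute minimum) billing model. Below is a minimal hedged sketch of the stop/start round trip that wording describes; InstanceIds is the SDK's field name and WaitUntilInstanceStopped is its standard waiter, both assumed rather than shown here, and the instance ID is a placeholder.

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))
	ids := []*string{aws.String("i-0123456789abcdef0")}

	// Stop the EBS-backed instance; compute is released but the root volume (and its cost) remains.
	if _, err := svc.StopInstances(&ec2.StopInstancesInput{InstanceIds: ids}); err != nil {
		log.Fatal(err)
	}

	// Wait until it is fully stopped before starting it again.
	if err := svc.WaitUntilInstanceStopped(&ec2.DescribeInstancesInput{InstanceIds: ids}); err != nil {
		log.Fatal(err)
	}

	out, err := svc.StartInstances(&ec2.StartInstancesInput{InstanceIds: ids})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.StartingInstances)
}
```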
// -// You can't start or stop Spot instances, and you can't stop instance store-backed +// You can't start or stop Spot Instances, and you can't stop instance store-backed // instances. // // When you stop an instance, we shut it down. You can restart your instance @@ -19386,7 +21759,7 @@ func (c *EC2) StopInstancesRequest(input *StopInstancesInput) (req *request.Requ // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation StopInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StopInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StopInstances func (c *EC2) StopInstances(input *StopInstancesInput) (*StopInstancesOutput, error) { req, out := c.StopInstancesRequest(input) return out, req.Send() @@ -19433,7 +21806,7 @@ const opTerminateInstances = "TerminateInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TerminateInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TerminateInstances func (c *EC2) TerminateInstancesRequest(input *TerminateInstancesInput) (req *request.Request, output *TerminateInstancesOutput) { op := &request.Operation{ Name: opTerminateInstances, @@ -19484,7 +21857,7 @@ func (c *EC2) TerminateInstancesRequest(input *TerminateInstancesInput) (req *re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation TerminateInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TerminateInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TerminateInstances func (c *EC2) TerminateInstances(input *TerminateInstancesInput) (*TerminateInstancesOutput, error) { req, out := c.TerminateInstancesRequest(input) return out, req.Send() @@ -19531,7 +21904,7 @@ const opUnassignIpv6Addresses = "UnassignIpv6Addresses" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignIpv6Addresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignIpv6Addresses func (c *EC2) UnassignIpv6AddressesRequest(input *UnassignIpv6AddressesInput) (req *request.Request, output *UnassignIpv6AddressesOutput) { op := &request.Operation{ Name: opUnassignIpv6Addresses, @@ -19558,7 +21931,7 @@ func (c *EC2) UnassignIpv6AddressesRequest(input *UnassignIpv6AddressesInput) (r // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation UnassignIpv6Addresses for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignIpv6Addresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignIpv6Addresses func (c *EC2) UnassignIpv6Addresses(input *UnassignIpv6AddressesInput) (*UnassignIpv6AddressesOutput, error) { req, out := c.UnassignIpv6AddressesRequest(input) return out, req.Send() @@ -19605,7 +21978,7 @@ const opUnassignPrivateIpAddresses = "UnassignPrivateIpAddresses" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignPrivateIpAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignPrivateIpAddresses func (c *EC2) UnassignPrivateIpAddressesRequest(input *UnassignPrivateIpAddressesInput) (req *request.Request, output *UnassignPrivateIpAddressesOutput) { op := &request.Operation{ Name: opUnassignPrivateIpAddresses, @@ -19634,7 +22007,7 @@ func (c *EC2) UnassignPrivateIpAddressesRequest(input *UnassignPrivateIpAddresse // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation UnassignPrivateIpAddresses for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignPrivateIpAddresses +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignPrivateIpAddresses func (c *EC2) UnassignPrivateIpAddresses(input *UnassignPrivateIpAddressesInput) (*UnassignPrivateIpAddressesOutput, error) { req, out := c.UnassignPrivateIpAddressesRequest(input) return out, req.Send() @@ -19681,7 +22054,7 @@ const opUnmonitorInstances = "UnmonitorInstances" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnmonitorInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnmonitorInstances func (c *EC2) UnmonitorInstancesRequest(input *UnmonitorInstancesInput) (req *request.Request, output *UnmonitorInstancesOutput) { op := &request.Operation{ Name: opUnmonitorInstances, @@ -19710,7 +22083,7 @@ func (c *EC2) UnmonitorInstancesRequest(input *UnmonitorInstancesInput) (req *re // // See the AWS API reference guide for Amazon Elastic Compute Cloud's // API operation UnmonitorInstances for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnmonitorInstances +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnmonitorInstances func (c *EC2) UnmonitorInstances(input *UnmonitorInstancesInput) (*UnmonitorInstancesOutput, error) { req, out := c.UnmonitorInstancesRequest(input) return out, req.Send() @@ -19732,8 +22105,167 @@ func (c *EC2) UnmonitorInstancesWithContext(ctx aws.Context, input *UnmonitorIns return out, req.Send() } +const opUpdateSecurityGroupRuleDescriptionsEgress = "UpdateSecurityGroupRuleDescriptionsEgress" + +// UpdateSecurityGroupRuleDescriptionsEgressRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSecurityGroupRuleDescriptionsEgress operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSecurityGroupRuleDescriptionsEgress for more information on using the UpdateSecurityGroupRuleDescriptionsEgress +// API call, and error handling. 
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSecurityGroupRuleDescriptionsEgressRequest method. +// req, resp := client.UpdateSecurityGroupRuleDescriptionsEgressRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UpdateSecurityGroupRuleDescriptionsEgress +func (c *EC2) UpdateSecurityGroupRuleDescriptionsEgressRequest(input *UpdateSecurityGroupRuleDescriptionsEgressInput) (req *request.Request, output *UpdateSecurityGroupRuleDescriptionsEgressOutput) { + op := &request.Operation{ + Name: opUpdateSecurityGroupRuleDescriptionsEgress, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSecurityGroupRuleDescriptionsEgressInput{} + } + + output = &UpdateSecurityGroupRuleDescriptionsEgressOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSecurityGroupRuleDescriptionsEgress API operation for Amazon Elastic Compute Cloud. +// +// [EC2-VPC only] Updates the description of an egress (outbound) security group +// rule. You can replace an existing description, or add a description to a +// rule that did not have one previously. +// +// You specify the description as part of the IP permissions structure. You +// can remove a description for a security group rule by omitting the description +// parameter in the request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation UpdateSecurityGroupRuleDescriptionsEgress for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UpdateSecurityGroupRuleDescriptionsEgress +func (c *EC2) UpdateSecurityGroupRuleDescriptionsEgress(input *UpdateSecurityGroupRuleDescriptionsEgressInput) (*UpdateSecurityGroupRuleDescriptionsEgressOutput, error) { + req, out := c.UpdateSecurityGroupRuleDescriptionsEgressRequest(input) + return out, req.Send() +} + +// UpdateSecurityGroupRuleDescriptionsEgressWithContext is the same as UpdateSecurityGroupRuleDescriptionsEgress with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSecurityGroupRuleDescriptionsEgress for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) UpdateSecurityGroupRuleDescriptionsEgressWithContext(ctx aws.Context, input *UpdateSecurityGroupRuleDescriptionsEgressInput, opts ...request.Option) (*UpdateSecurityGroupRuleDescriptionsEgressOutput, error) { + req, out := c.UpdateSecurityGroupRuleDescriptionsEgressRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opUpdateSecurityGroupRuleDescriptionsIngress = "UpdateSecurityGroupRuleDescriptionsIngress" + +// UpdateSecurityGroupRuleDescriptionsIngressRequest generates a "aws/request.Request" representing the +// client's request for the UpdateSecurityGroupRuleDescriptionsIngress operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateSecurityGroupRuleDescriptionsIngress for more information on using the UpdateSecurityGroupRuleDescriptionsIngress +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateSecurityGroupRuleDescriptionsIngressRequest method. +// req, resp := client.UpdateSecurityGroupRuleDescriptionsIngressRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UpdateSecurityGroupRuleDescriptionsIngress +func (c *EC2) UpdateSecurityGroupRuleDescriptionsIngressRequest(input *UpdateSecurityGroupRuleDescriptionsIngressInput) (req *request.Request, output *UpdateSecurityGroupRuleDescriptionsIngressOutput) { + op := &request.Operation{ + Name: opUpdateSecurityGroupRuleDescriptionsIngress, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateSecurityGroupRuleDescriptionsIngressInput{} + } + + output = &UpdateSecurityGroupRuleDescriptionsIngressOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateSecurityGroupRuleDescriptionsIngress API operation for Amazon Elastic Compute Cloud. +// +// Updates the description of an ingress (inbound) security group rule. You +// can replace an existing description, or add a description to a rule that +// did not have one previously. +// +// You specify the description as part of the IP permissions structure. You +// can remove a description for a security group rule by omitting the description +// parameter in the request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Elastic Compute Cloud's +// API operation UpdateSecurityGroupRuleDescriptionsIngress for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UpdateSecurityGroupRuleDescriptionsIngress +func (c *EC2) UpdateSecurityGroupRuleDescriptionsIngress(input *UpdateSecurityGroupRuleDescriptionsIngressInput) (*UpdateSecurityGroupRuleDescriptionsIngressOutput, error) { + req, out := c.UpdateSecurityGroupRuleDescriptionsIngressRequest(input) + return out, req.Send() +} + +// UpdateSecurityGroupRuleDescriptionsIngressWithContext is the same as UpdateSecurityGroupRuleDescriptionsIngress with the addition of +// the ability to pass a context and additional request options. +// +// See UpdateSecurityGroupRuleDescriptionsIngress for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. 
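The two UpdateSecurityGroupRuleDescriptions* operations added above replace or add the description on an existing rule without otherwise changing it, and omitting the description removes it. A hedged sketch for the ingress variant follows; the GroupId/IpPermissions fields and IpRange.Description are the SDK's shapes, assumed rather than shown in this hunk, and the group ID and CIDR are placeholders.

```
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Identify the existing rule by its values and supply the new description inline.
	_, err := svc.UpdateSecurityGroupRuleDescriptionsIngress(&ec2.UpdateSecurityGroupRuleDescriptionsIngressInput{
		GroupId: aws.String("sg-0123456789abcdef0"),
		IpPermissions: []*ec2.IpPermission{{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(22),
			ToPort:     aws.Int64(22),
			IpRanges: []*ec2.IpRange{{
				CidrIp:      aws.String("203.0.113.0/24"),
				Description: aws.String("SSH from the office network"),
			}},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("rule description updated")
}
```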
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *EC2) UpdateSecurityGroupRuleDescriptionsIngressWithContext(ctx aws.Context, input *UpdateSecurityGroupRuleDescriptionsIngressInput, opts ...request.Option) (*UpdateSecurityGroupRuleDescriptionsIngressOutput, error) { + req, out := c.UpdateSecurityGroupRuleDescriptionsIngressRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + // Contains the parameters for accepting the quote. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptReservedInstancesExchangeQuoteRequest type AcceptReservedInstancesExchangeQuoteInput struct { _ struct{} `type:"structure"` @@ -19743,14 +22275,14 @@ type AcceptReservedInstancesExchangeQuoteInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // The IDs of the Convertible Reserved Instances to exchange for other Convertible - // Reserved Instances of the same or higher value. + // The IDs of the Convertible Reserved Instances to exchange for another Convertible + // Reserved Instance of the same or higher value. // // ReservedInstanceIds is a required field ReservedInstanceIds []*string `locationName:"ReservedInstanceId" locationNameList:"ReservedInstanceId" type:"list" required:"true"` - // The configurations of the Convertible Reserved Instance offerings that you - // are purchasing in this exchange. + // The configuration of the target Convertible Reserved Instance to exchange + // for your current Convertible Reserved Instances. TargetConfigurations []*TargetConfigurationRequest `locationName:"TargetConfiguration" locationNameList:"TargetConfigurationRequest" type:"list"` } @@ -19806,7 +22338,6 @@ func (s *AcceptReservedInstancesExchangeQuoteInput) SetTargetConfigurations(v [] } // The result of the exchange and whether it was successful. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptReservedInstancesExchangeQuoteResult type AcceptReservedInstancesExchangeQuoteOutput struct { _ struct{} `type:"structure"` @@ -19830,8 +22361,94 @@ func (s *AcceptReservedInstancesExchangeQuoteOutput) SetExchangeId(v string) *Ac return s } +type AcceptVpcEndpointConnectionsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the endpoint service. + // + // ServiceId is a required field + ServiceId *string `type:"string" required:"true"` + + // The IDs of one or more interface VPC endpoints. + // + // VpcEndpointIds is a required field + VpcEndpointIds []*string `locationName:"VpcEndpointId" locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s AcceptVpcEndpointConnectionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptVpcEndpointConnectionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *AcceptVpcEndpointConnectionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AcceptVpcEndpointConnectionsInput"} + if s.ServiceId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceId")) + } + if s.VpcEndpointIds == nil { + invalidParams.Add(request.NewErrParamRequired("VpcEndpointIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *AcceptVpcEndpointConnectionsInput) SetDryRun(v bool) *AcceptVpcEndpointConnectionsInput { + s.DryRun = &v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *AcceptVpcEndpointConnectionsInput) SetServiceId(v string) *AcceptVpcEndpointConnectionsInput { + s.ServiceId = &v + return s +} + +// SetVpcEndpointIds sets the VpcEndpointIds field's value. +func (s *AcceptVpcEndpointConnectionsInput) SetVpcEndpointIds(v []*string) *AcceptVpcEndpointConnectionsInput { + s.VpcEndpointIds = v + return s +} + +type AcceptVpcEndpointConnectionsOutput struct { + _ struct{} `type:"structure"` + + // Information about the interface endpoints that were not accepted, if applicable. + Unsuccessful []*UnsuccessfulItem `locationName:"unsuccessful" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s AcceptVpcEndpointConnectionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AcceptVpcEndpointConnectionsOutput) GoString() string { + return s.String() +} + +// SetUnsuccessful sets the Unsuccessful field's value. +func (s *AcceptVpcEndpointConnectionsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *AcceptVpcEndpointConnectionsOutput { + s.Unsuccessful = v + return s +} + // Contains the parameters for AcceptVpcPeeringConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcPeeringConnectionRequest type AcceptVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -19841,7 +22458,8 @@ type AcceptVpcPeeringConnectionInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The ID of the VPC peering connection. + // The ID of the VPC peering connection. You must specify this parameter in + // the request. VpcPeeringConnectionId *string `locationName:"vpcPeeringConnectionId" type:"string"` } @@ -19868,7 +22486,6 @@ func (s *AcceptVpcPeeringConnectionInput) SetVpcPeeringConnectionId(v string) *A } // Contains the output of AcceptVpcPeeringConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AcceptVpcPeeringConnectionResult type AcceptVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -19893,7 +22510,6 @@ func (s *AcceptVpcPeeringConnectionOutput) SetVpcPeeringConnection(v *VpcPeering } // Describes an account attribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AccountAttribute type AccountAttribute struct { _ struct{} `type:"structure"` @@ -19927,7 +22543,6 @@ func (s *AccountAttribute) SetAttributeValues(v []*AccountAttributeValue) *Accou } // Describes a value of an account attribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AccountAttributeValue type AccountAttributeValue struct { _ struct{} `type:"structure"` @@ -19951,8 +22566,7 @@ func (s *AccountAttributeValue) SetAttributeValue(v string) *AccountAttributeVal return s } -// Describes a running instance in a Spot fleet. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ActiveInstance +// Describes a running instance in a Spot Fleet. type ActiveInstance struct { _ struct{} `type:"structure"` @@ -19967,7 +22581,7 @@ type ActiveInstance struct { // The instance type. InstanceType *string `locationName:"instanceType" type:"string"` - // The ID of the Spot instance request. + // The ID of the Spot Instance request. SpotInstanceRequestId *string `locationName:"spotInstanceRequestId" type:"string"` } @@ -20006,7 +22620,6 @@ func (s *ActiveInstance) SetSpotInstanceRequestId(v string) *ActiveInstance { } // Describes an Elastic IP address. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Address type Address struct { _ struct{} `type:"structure"` @@ -20035,6 +22648,9 @@ type Address struct { // The Elastic IP address. PublicIp *string `locationName:"publicIp" type:"string"` + + // Any tags assigned to the Elastic IP address. + Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` } // String returns the string representation @@ -20095,11 +22711,19 @@ func (s *Address) SetPublicIp(v string) *Address { return s } +// SetTags sets the Tags field's value. +func (s *Address) SetTags(v []*Tag) *Address { + s.Tags = v + return s +} + // Contains the parameters for AllocateAddress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateAddressRequest type AllocateAddressInput struct { _ struct{} `type:"structure"` + // [EC2-VPC] The Elastic IP address to recover. + Address *string `type:"string"` + // Set to vpc to allocate the address for use with instances in a VPC. // // Default: The address is for use with instances in EC2-Classic. @@ -20122,6 +22746,12 @@ func (s AllocateAddressInput) GoString() string { return s.String() } +// SetAddress sets the Address field's value. +func (s *AllocateAddressInput) SetAddress(v string) *AllocateAddressInput { + s.Address = &v + return s +} + // SetDomain sets the Domain field's value. func (s *AllocateAddressInput) SetDomain(v string) *AllocateAddressInput { s.Domain = &v @@ -20135,7 +22765,6 @@ func (s *AllocateAddressInput) SetDryRun(v bool) *AllocateAddressInput { } // Contains the output of AllocateAddress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateAddressResult type AllocateAddressOutput struct { _ struct{} `type:"structure"` @@ -20180,7 +22809,6 @@ func (s *AllocateAddressOutput) SetPublicIp(v string) *AllocateAddressOutput { } // Contains the parameters for AllocateHosts. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateHostsRequest type AllocateHostsInput struct { _ struct{} `type:"structure"` @@ -20275,7 +22903,6 @@ func (s *AllocateHostsInput) SetQuantity(v int64) *AllocateHostsInput { } // Contains the output of AllocateHosts. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AllocateHostsResult type AllocateHostsOutput struct { _ struct{} `type:"structure"` @@ -20300,7 +22927,39 @@ func (s *AllocateHostsOutput) SetHostIds(v []*string) *AllocateHostsOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignIpv6AddressesRequest +// Describes a principal. +type AllowedPrincipal struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the principal. + Principal *string `locationName:"principal" type:"string"` + + // The type of principal. 
+ PrincipalType *string `locationName:"principalType" type:"string" enum:"PrincipalType"` +} + +// String returns the string representation +func (s AllowedPrincipal) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AllowedPrincipal) GoString() string { + return s.String() +} + +// SetPrincipal sets the Principal field's value. +func (s *AllowedPrincipal) SetPrincipal(v string) *AllowedPrincipal { + s.Principal = &v + return s +} + +// SetPrincipalType sets the PrincipalType field's value. +func (s *AllowedPrincipal) SetPrincipalType(v string) *AllowedPrincipal { + s.PrincipalType = &v + return s +} + type AssignIpv6AddressesInput struct { _ struct{} `type:"structure"` @@ -20360,7 +23019,6 @@ func (s *AssignIpv6AddressesInput) SetNetworkInterfaceId(v string) *AssignIpv6Ad return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignIpv6AddressesResult type AssignIpv6AddressesOutput struct { _ struct{} `type:"structure"` @@ -20394,7 +23052,6 @@ func (s *AssignIpv6AddressesOutput) SetNetworkInterfaceId(v string) *AssignIpv6A } // Contains the parameters for AssignPrivateIpAddresses. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignPrivateIpAddressesRequest type AssignPrivateIpAddressesInput struct { _ struct{} `type:"structure"` @@ -20467,7 +23124,6 @@ func (s *AssignPrivateIpAddressesInput) SetSecondaryPrivateIpAddressCount(v int6 return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssignPrivateIpAddressesOutput type AssignPrivateIpAddressesOutput struct { _ struct{} `type:"structure"` } @@ -20483,7 +23139,6 @@ func (s AssignPrivateIpAddressesOutput) GoString() string { } // Contains the parameters for AssociateAddress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateAddressRequest type AssociateAddressInput struct { _ struct{} `type:"structure"` @@ -20576,7 +23231,6 @@ func (s *AssociateAddressInput) SetPublicIp(v string) *AssociateAddressInput { } // Contains the output of AssociateAddress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateAddressResult type AssociateAddressOutput struct { _ struct{} `type:"structure"` @@ -20602,7 +23256,6 @@ func (s *AssociateAddressOutput) SetAssociationId(v string) *AssociateAddressOut } // Contains the parameters for AssociateDhcpOptions. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateDhcpOptionsRequest type AssociateDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -20668,7 +23321,6 @@ func (s *AssociateDhcpOptionsInput) SetVpcId(v string) *AssociateDhcpOptionsInpu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateDhcpOptionsOutput type AssociateDhcpOptionsOutput struct { _ struct{} `type:"structure"` } @@ -20683,7 +23335,6 @@ func (s AssociateDhcpOptionsOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateIamInstanceProfileRequest type AssociateIamInstanceProfileInput struct { _ struct{} `type:"structure"` @@ -20736,7 +23387,6 @@ func (s *AssociateIamInstanceProfileInput) SetInstanceId(v string) *AssociateIam return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateIamInstanceProfileResult type AssociateIamInstanceProfileOutput struct { _ struct{} `type:"structure"` @@ -20761,7 +23411,6 @@ func (s *AssociateIamInstanceProfileOutput) SetIamInstanceProfileAssociation(v * } // Contains the parameters for AssociateRouteTable. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateRouteTableRequest type AssociateRouteTableInput struct { _ struct{} `type:"structure"` @@ -20827,7 +23476,6 @@ func (s *AssociateRouteTableInput) SetSubnetId(v string) *AssociateRouteTableInp } // Contains the output of AssociateRouteTable. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateRouteTableResult type AssociateRouteTableOutput struct { _ struct{} `type:"structure"` @@ -20851,7 +23499,6 @@ func (s *AssociateRouteTableOutput) SetAssociationId(v string) *AssociateRouteTa return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateSubnetCidrBlockRequest type AssociateSubnetCidrBlockInput struct { _ struct{} `type:"structure"` @@ -20904,7 +23551,6 @@ func (s *AssociateSubnetCidrBlockInput) SetSubnetId(v string) *AssociateSubnetCi return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateSubnetCidrBlockResult type AssociateSubnetCidrBlockOutput struct { _ struct{} `type:"structure"` @@ -20937,7 +23583,6 @@ func (s *AssociateSubnetCidrBlockOutput) SetSubnetId(v string) *AssociateSubnetC return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateVpcCidrBlockRequest type AssociateVpcCidrBlockInput struct { _ struct{} `type:"structure"` @@ -20946,6 +23591,9 @@ type AssociateVpcCidrBlockInput struct { // CIDR block. AmazonProvidedIpv6CidrBlock *bool `locationName:"amazonProvidedIpv6CidrBlock" type:"boolean"` + // An IPv4 CIDR block to associate with the VPC. + CidrBlock *string `type:"string"` + // The ID of the VPC. // // VpcId is a required field @@ -20981,16 +23629,24 @@ func (s *AssociateVpcCidrBlockInput) SetAmazonProvidedIpv6CidrBlock(v bool) *Ass return s } +// SetCidrBlock sets the CidrBlock field's value. +func (s *AssociateVpcCidrBlockInput) SetCidrBlock(v string) *AssociateVpcCidrBlockInput { + s.CidrBlock = &v + return s +} + // SetVpcId sets the VpcId field's value. 
func (s *AssociateVpcCidrBlockInput) SetVpcId(v string) *AssociateVpcCidrBlockInput { s.VpcId = &v return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AssociateVpcCidrBlockResult type AssociateVpcCidrBlockOutput struct { _ struct{} `type:"structure"` + // Information about the IPv4 CIDR block association. + CidrBlockAssociation *VpcCidrBlockAssociation `locationName:"cidrBlockAssociation" type:"structure"` + // Information about the IPv6 CIDR block association. Ipv6CidrBlockAssociation *VpcIpv6CidrBlockAssociation `locationName:"ipv6CidrBlockAssociation" type:"structure"` @@ -21008,6 +23664,12 @@ func (s AssociateVpcCidrBlockOutput) GoString() string { return s.String() } +// SetCidrBlockAssociation sets the CidrBlockAssociation field's value. +func (s *AssociateVpcCidrBlockOutput) SetCidrBlockAssociation(v *VpcCidrBlockAssociation) *AssociateVpcCidrBlockOutput { + s.CidrBlockAssociation = v + return s +} + // SetIpv6CidrBlockAssociation sets the Ipv6CidrBlockAssociation field's value. func (s *AssociateVpcCidrBlockOutput) SetIpv6CidrBlockAssociation(v *VpcIpv6CidrBlockAssociation) *AssociateVpcCidrBlockOutput { s.Ipv6CidrBlockAssociation = v @@ -21021,7 +23683,6 @@ func (s *AssociateVpcCidrBlockOutput) SetVpcId(v string) *AssociateVpcCidrBlockO } // Contains the parameters for AttachClassicLinkVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachClassicLinkVpcRequest type AttachClassicLinkVpcInput struct { _ struct{} `type:"structure"` @@ -21102,7 +23763,6 @@ func (s *AttachClassicLinkVpcInput) SetVpcId(v string) *AttachClassicLinkVpcInpu } // Contains the output of AttachClassicLinkVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachClassicLinkVpcResult type AttachClassicLinkVpcOutput struct { _ struct{} `type:"structure"` @@ -21127,7 +23787,6 @@ func (s *AttachClassicLinkVpcOutput) SetReturn(v bool) *AttachClassicLinkVpcOutp } // Contains the parameters for AttachInternetGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachInternetGatewayRequest type AttachInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -21192,7 +23851,6 @@ func (s *AttachInternetGatewayInput) SetVpcId(v string) *AttachInternetGatewayIn return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachInternetGatewayOutput type AttachInternetGatewayOutput struct { _ struct{} `type:"structure"` } @@ -21208,7 +23866,6 @@ func (s AttachInternetGatewayOutput) GoString() string { } // Contains the parameters for AttachNetworkInterface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachNetworkInterfaceRequest type AttachNetworkInterfaceInput struct { _ struct{} `type:"structure"` @@ -21288,7 +23945,6 @@ func (s *AttachNetworkInterfaceInput) SetNetworkInterfaceId(v string) *AttachNet } // Contains the output of AttachNetworkInterface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachNetworkInterfaceResult type AttachNetworkInterfaceOutput struct { _ struct{} `type:"structure"` @@ -21313,11 +23969,10 @@ func (s *AttachNetworkInterfaceOutput) SetAttachmentId(v string) *AttachNetworkI } // Contains the parameters for AttachVolume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVolumeRequest type AttachVolumeInput struct { _ struct{} `type:"structure"` - // The device name to expose to the instance (for example, /dev/sdh or xvdh). 
+ // The device name (for example, /dev/sdh or xvdh). // // Device is a required field Device *string `type:"string" required:"true"` @@ -21394,7 +24049,6 @@ func (s *AttachVolumeInput) SetVolumeId(v string) *AttachVolumeInput { } // Contains the parameters for AttachVpnGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVpnGatewayRequest type AttachVpnGatewayInput struct { _ struct{} `type:"structure"` @@ -21460,7 +24114,6 @@ func (s *AttachVpnGatewayInput) SetVpnGatewayId(v string) *AttachVpnGatewayInput } // Contains the output of AttachVpnGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttachVpnGatewayResult type AttachVpnGatewayOutput struct { _ struct{} `type:"structure"` @@ -21485,7 +24138,6 @@ func (s *AttachVpnGatewayOutput) SetVpcAttachment(v *VpcAttachment) *AttachVpnGa } // Describes a value for a resource attribute that is a Boolean value. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttributeBooleanValue type AttributeBooleanValue struct { _ struct{} `type:"structure"` @@ -21510,7 +24162,6 @@ func (s *AttributeBooleanValue) SetValue(v bool) *AttributeBooleanValue { } // Describes a value for a resource attribute that is a String. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AttributeValue type AttributeValue struct { _ struct{} `type:"structure"` @@ -21535,12 +24186,10 @@ func (s *AttributeValue) SetValue(v string) *AttributeValue { } // Contains the parameters for AuthorizeSecurityGroupEgress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupEgressRequest type AuthorizeSecurityGroupEgressInput struct { _ struct{} `type:"structure"` - // The CIDR IPv4 address range. We recommend that you specify the CIDR range - // in a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the CIDR. CidrIp *string `locationName:"cidrIp" type:"string"` // Checks whether you have the required permissions for the action, without @@ -21549,8 +24198,7 @@ type AuthorizeSecurityGroupEgressInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The start of port range for the TCP and UDP protocols, or an ICMP type number. - // We recommend that you specify the port range in a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the port. FromPort *int64 `locationName:"fromPort" type:"integer"` // The ID of the security group. @@ -21558,26 +24206,23 @@ type AuthorizeSecurityGroupEgressInput struct { // GroupId is a required field GroupId *string `locationName:"groupId" type:"string" required:"true"` - // A set of IP permissions. You can't specify a destination security group and - // a CIDR IP address range. + // One or more sets of IP permissions. You can't specify a destination security + // group and a CIDR IP address range in the same set of permissions. IpPermissions []*IpPermission `locationName:"ipPermissions" locationNameList:"item" type:"list"` - // The IP protocol name or number. We recommend that you specify the protocol - // in a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the protocol name or + // number. IpProtocol *string `locationName:"ipProtocol" type:"string"` - // The name of a destination security group. To authorize outbound access to - // a destination security group, we recommend that you use a set of IP permissions - // instead. 
+ // Not supported. Use a set of IP permissions to specify a destination security + // group. SourceSecurityGroupName *string `locationName:"sourceSecurityGroupName" type:"string"` - // The AWS account number for a destination security group. To authorize outbound - // access to a destination security group, we recommend that you use a set of - // IP permissions instead. + // Not supported. Use a set of IP permissions to specify a destination security + // group. SourceSecurityGroupOwnerId *string `locationName:"sourceSecurityGroupOwnerId" type:"string"` - // The end of port range for the TCP and UDP protocols, or an ICMP type number. - // We recommend that you specify the port range in a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the port. ToPort *int64 `locationName:"toPort" type:"integer"` } @@ -21658,7 +24303,6 @@ func (s *AuthorizeSecurityGroupEgressInput) SetToPort(v int64) *AuthorizeSecurit return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupEgressOutput type AuthorizeSecurityGroupEgressOutput struct { _ struct{} `type:"structure"` } @@ -21674,7 +24318,6 @@ func (s AuthorizeSecurityGroupEgressOutput) GoString() string { } // Contains the parameters for AuthorizeSecurityGroupIngress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupIngressRequest type AuthorizeSecurityGroupIngressInput struct { _ struct{} `type:"structure"` @@ -21690,16 +24333,20 @@ type AuthorizeSecurityGroupIngressInput struct { // The start of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 // type number. For the ICMP/ICMPv6 type number, use -1 to specify all types. + // If you specify all ICMP/ICMPv6 types, you must specify all codes. FromPort *int64 `type:"integer"` - // The ID of the security group. Required for a nondefault VPC. + // The ID of the security group. You must specify either the security group + // ID or the security group name in the request. For security groups in a nondefault + // VPC, you must specify the security group ID. GroupId *string `type:"string"` - // [EC2-Classic, default VPC] The name of the security group. + // [EC2-Classic, default VPC] The name of the security group. You must specify + // either the security group ID or the security group name in the request. GroupName *string `type:"string"` - // A set of IP permissions. Can be used to specify multiple rules in a single - // command. + // One or more sets of IP permissions. Can be used to specify multiple rules + // in a single command. IpPermissions []*IpPermission `locationNameList:"item" type:"list"` // The IP protocol name (tcp, udp, icmp) or number (see Protocol Numbers (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)). @@ -21719,8 +24366,8 @@ type AuthorizeSecurityGroupIngressInput struct { // be in the same VPC. SourceSecurityGroupName *string `type:"string"` - // [EC2-Classic] The AWS account number for the source security group, if the - // source security group is in a different account. You can't specify this parameter + // [EC2-Classic] The AWS account ID for the source security group, if the source + // security group is in a different account. You can't specify this parameter // in combination with the following parameters: the CIDR IP address range, // the IP protocol, the start of the port range, and the end of the port range. // Creates rules that grant full ICMP, UDP, and TCP access. 
To create a rule @@ -21728,7 +24375,8 @@ type AuthorizeSecurityGroupIngressInput struct { SourceSecurityGroupOwnerId *string `type:"string"` // The end of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 code - // number. For the ICMP/ICMPv6 code number, use -1 to specify all codes. + // number. For the ICMP/ICMPv6 code number, use -1 to specify all codes. If + // you specify all ICMP/ICMPv6 types, you must specify all codes. ToPort *int64 `type:"integer"` } @@ -21802,7 +24450,6 @@ func (s *AuthorizeSecurityGroupIngressInput) SetToPort(v int64) *AuthorizeSecuri return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AuthorizeSecurityGroupIngressOutput type AuthorizeSecurityGroupIngressOutput struct { _ struct{} `type:"structure"` } @@ -21818,7 +24465,6 @@ func (s AuthorizeSecurityGroupIngressOutput) GoString() string { } // Describes an Availability Zone. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AvailabilityZone type AvailabilityZone struct { _ struct{} `type:"structure"` @@ -21870,7 +24516,6 @@ func (s *AvailabilityZone) SetZoneName(v string) *AvailabilityZone { } // Describes a message about an Availability Zone. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AvailabilityZoneMessage type AvailabilityZoneMessage struct { _ struct{} `type:"structure"` @@ -21895,7 +24540,6 @@ func (s *AvailabilityZoneMessage) SetMessage(v string) *AvailabilityZoneMessage } // The capacity information for instances launched onto the Dedicated Host. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/AvailableCapacity type AvailableCapacity struct { _ struct{} `type:"structure"` @@ -21928,7 +24572,6 @@ func (s *AvailableCapacity) SetAvailableVCpus(v int64) *AvailableCapacity { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BlobAttributeValue type BlobAttributeValue struct { _ struct{} `type:"structure"` @@ -21953,11 +24596,10 @@ func (s *BlobAttributeValue) SetValue(v []byte) *BlobAttributeValue { } // Describes a block device mapping. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BlockDeviceMapping type BlockDeviceMapping struct { _ struct{} `type:"structure"` - // The device name exposed to the instance (for example, /dev/sdh or xvdh). + // The device name (for example, /dev/sdh or xvdh). DeviceName *string `locationName:"deviceName" type:"string"` // Parameters used to automatically set up EBS volumes when the instance is @@ -22016,7 +24658,6 @@ func (s *BlockDeviceMapping) SetVirtualName(v string) *BlockDeviceMapping { } // Contains the parameters for BundleInstance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleInstanceRequest type BundleInstanceInput struct { _ struct{} `type:"structure"` @@ -22090,7 +24731,6 @@ func (s *BundleInstanceInput) SetStorage(v *Storage) *BundleInstanceInput { } // Contains the output of BundleInstance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleInstanceResult type BundleInstanceOutput struct { _ struct{} `type:"structure"` @@ -22115,7 +24755,6 @@ func (s *BundleInstanceOutput) SetBundleTask(v *BundleTask) *BundleInstanceOutpu } // Describes a bundle task. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleTask type BundleTask struct { _ struct{} `type:"structure"` @@ -22203,7 +24842,6 @@ func (s *BundleTask) SetUpdateTime(v time.Time) *BundleTask { } // Describes an error for BundleInstance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/BundleTaskError type BundleTaskError struct { _ struct{} `type:"structure"` @@ -22237,7 +24875,6 @@ func (s *BundleTaskError) SetMessage(v string) *BundleTaskError { } // Contains the parameters for CancelBundleTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelBundleTaskRequest type CancelBundleTaskInput struct { _ struct{} `type:"structure"` @@ -22289,7 +24926,6 @@ func (s *CancelBundleTaskInput) SetDryRun(v bool) *CancelBundleTaskInput { } // Contains the output of CancelBundleTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelBundleTaskResult type CancelBundleTaskOutput struct { _ struct{} `type:"structure"` @@ -22314,7 +24950,6 @@ func (s *CancelBundleTaskOutput) SetBundleTask(v *BundleTask) *CancelBundleTaskO } // Contains the parameters for CancelConversionTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelConversionRequest type CancelConversionTaskInput struct { _ struct{} `type:"structure"` @@ -22374,7 +25009,6 @@ func (s *CancelConversionTaskInput) SetReasonMessage(v string) *CancelConversion return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelConversionTaskOutput type CancelConversionTaskOutput struct { _ struct{} `type:"structure"` } @@ -22390,7 +25024,6 @@ func (s CancelConversionTaskOutput) GoString() string { } // Contains the parameters for CancelExportTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelExportTaskRequest type CancelExportTaskInput struct { _ struct{} `type:"structure"` @@ -22429,7 +25062,6 @@ func (s *CancelExportTaskInput) SetExportTaskId(v string) *CancelExportTaskInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelExportTaskOutput type CancelExportTaskOutput struct { _ struct{} `type:"structure"` } @@ -22445,7 +25077,6 @@ func (s CancelExportTaskOutput) GoString() string { } // Contains the parameters for CancelImportTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelImportTaskRequest type CancelImportTaskInput struct { _ struct{} `type:"structure"` @@ -22491,7 +25122,6 @@ func (s *CancelImportTaskInput) SetImportTaskId(v string) *CancelImportTaskInput } // Contains the output for CancelImportTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelImportTaskResult type CancelImportTaskOutput struct { _ struct{} `type:"structure"` @@ -22534,7 +25164,6 @@ func (s *CancelImportTaskOutput) SetState(v string) *CancelImportTaskOutput { } // Contains the parameters for CancelReservedInstancesListing. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelReservedInstancesListingRequest type CancelReservedInstancesListingInput struct { _ struct{} `type:"structure"` @@ -22574,7 +25203,6 @@ func (s *CancelReservedInstancesListingInput) SetReservedInstancesListingId(v st } // Contains the output of CancelReservedInstancesListing. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelReservedInstancesListingResult type CancelReservedInstancesListingOutput struct { _ struct{} `type:"structure"` @@ -22598,8 +25226,7 @@ func (s *CancelReservedInstancesListingOutput) SetReservedInstancesListings(v [] return s } -// Describes a Spot fleet error. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequestsError +// Describes a Spot Fleet error. type CancelSpotFleetRequestsError struct { _ struct{} `type:"structure"` @@ -22636,8 +25263,7 @@ func (s *CancelSpotFleetRequestsError) SetMessage(v string) *CancelSpotFleetRequ return s } -// Describes a Spot fleet request that was not successfully canceled. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequestsErrorItem +// Describes a Spot Fleet request that was not successfully canceled. type CancelSpotFleetRequestsErrorItem struct { _ struct{} `type:"structure"` @@ -22646,7 +25272,7 @@ type CancelSpotFleetRequestsErrorItem struct { // Error is a required field Error *CancelSpotFleetRequestsError `locationName:"error" type:"structure" required:"true"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -22675,7 +25301,6 @@ func (s *CancelSpotFleetRequestsErrorItem) SetSpotFleetRequestId(v string) *Canc } // Contains the parameters for CancelSpotFleetRequests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequestsRequest type CancelSpotFleetRequestsInput struct { _ struct{} `type:"structure"` @@ -22685,12 +25310,12 @@ type CancelSpotFleetRequestsInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The IDs of the Spot fleet requests. + // The IDs of the Spot Fleet requests. // // SpotFleetRequestIds is a required field SpotFleetRequestIds []*string `locationName:"spotFleetRequestId" locationNameList:"item" type:"list" required:"true"` - // Indicates whether to terminate instances for a Spot fleet request if it is + // Indicates whether to terminate instances for a Spot Fleet request if it is // canceled successfully. // // TerminateInstances is a required field @@ -22742,14 +25367,13 @@ func (s *CancelSpotFleetRequestsInput) SetTerminateInstances(v bool) *CancelSpot } // Contains the output of CancelSpotFleetRequests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequestsResponse type CancelSpotFleetRequestsOutput struct { _ struct{} `type:"structure"` - // Information about the Spot fleet requests that are successfully canceled. + // Information about the Spot Fleet requests that are successfully canceled. SuccessfulFleetRequests []*CancelSpotFleetRequestsSuccessItem `locationName:"successfulFleetRequestSet" locationNameList:"item" type:"list"` - // Information about the Spot fleet requests that are not successfully canceled. + // Information about the Spot Fleet requests that are not successfully canceled. UnsuccessfulFleetRequests []*CancelSpotFleetRequestsErrorItem `locationName:"unsuccessfulFleetRequestSet" locationNameList:"item" type:"list"` } @@ -22775,22 +25399,21 @@ func (s *CancelSpotFleetRequestsOutput) SetUnsuccessfulFleetRequests(v []*Cancel return s } -// Describes a Spot fleet request that was successfully canceled. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotFleetRequestsSuccessItem +// Describes a Spot Fleet request that was successfully canceled. type CancelSpotFleetRequestsSuccessItem struct { _ struct{} `type:"structure"` - // The current state of the Spot fleet request. + // The current state of the Spot Fleet request. // // CurrentSpotFleetRequestState is a required field CurrentSpotFleetRequestState *string `locationName:"currentSpotFleetRequestState" type:"string" required:"true" enum:"BatchState"` - // The previous state of the Spot fleet request. + // The previous state of the Spot Fleet request. // // PreviousSpotFleetRequestState is a required field PreviousSpotFleetRequestState *string `locationName:"previousSpotFleetRequestState" type:"string" required:"true" enum:"BatchState"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -22825,7 +25448,6 @@ func (s *CancelSpotFleetRequestsSuccessItem) SetSpotFleetRequestId(v string) *Ca } // Contains the parameters for CancelSpotInstanceRequests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotInstanceRequestsRequest type CancelSpotInstanceRequestsInput struct { _ struct{} `type:"structure"` @@ -22835,7 +25457,7 @@ type CancelSpotInstanceRequestsInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // One or more Spot instance request IDs. + // One or more Spot Instance request IDs. // // SpotInstanceRequestIds is a required field SpotInstanceRequestIds []*string `locationName:"SpotInstanceRequestId" locationNameList:"SpotInstanceRequestId" type:"list" required:"true"` @@ -22877,11 +25499,10 @@ func (s *CancelSpotInstanceRequestsInput) SetSpotInstanceRequestIds(v []*string) } // Contains the output of CancelSpotInstanceRequests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelSpotInstanceRequestsResult type CancelSpotInstanceRequestsOutput struct { _ struct{} `type:"structure"` - // One or more Spot instance requests. + // One or more Spot Instance requests. CancelledSpotInstanceRequests []*CancelledSpotInstanceRequest `locationName:"spotInstanceRequestSet" locationNameList:"item" type:"list"` } @@ -22901,15 +25522,14 @@ func (s *CancelSpotInstanceRequestsOutput) SetCancelledSpotInstanceRequests(v [] return s } -// Describes a request to cancel a Spot instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CancelledSpotInstanceRequest +// Describes a request to cancel a Spot Instance. type CancelledSpotInstanceRequest struct { _ struct{} `type:"structure"` - // The ID of the Spot instance request. + // The ID of the Spot Instance request. SpotInstanceRequestId *string `locationName:"spotInstanceRequestId" type:"string"` - // The state of the Spot instance request. + // The state of the Spot Instance request. State *string `locationName:"state" type:"string" enum:"CancelSpotInstanceRequestState"` } @@ -22935,8 +25555,31 @@ func (s *CancelledSpotInstanceRequest) SetState(v string) *CancelledSpotInstance return s } +// Describes an IPv4 CIDR block. +type CidrBlock struct { + _ struct{} `type:"structure"` + + // The IPv4 CIDR block. 
+ CidrBlock *string `locationName:"cidrBlock" type:"string"` +} + +// String returns the string representation +func (s CidrBlock) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CidrBlock) GoString() string { + return s.String() +} + +// SetCidrBlock sets the CidrBlock field's value. +func (s *CidrBlock) SetCidrBlock(v string) *CidrBlock { + s.CidrBlock = &v + return s +} + // Describes the ClassicLink DNS support status of a VPC. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ClassicLinkDnsSupport type ClassicLinkDnsSupport struct { _ struct{} `type:"structure"` @@ -22970,7 +25613,6 @@ func (s *ClassicLinkDnsSupport) SetVpcId(v string) *ClassicLinkDnsSupport { } // Describes a linked EC2-Classic instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ClassicLinkInstance type ClassicLinkInstance struct { _ struct{} `type:"structure"` @@ -23021,8 +25663,99 @@ func (s *ClassicLinkInstance) SetVpcId(v string) *ClassicLinkInstance { return s } +// Describes a Classic Load Balancer. +type ClassicLoadBalancer struct { + _ struct{} `type:"structure"` + + // The name of the load balancer. + // + // Name is a required field + Name *string `locationName:"name" type:"string" required:"true"` +} + +// String returns the string representation +func (s ClassicLoadBalancer) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClassicLoadBalancer) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ClassicLoadBalancer) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ClassicLoadBalancer"} + if s.Name == nil { + invalidParams.Add(request.NewErrParamRequired("Name")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetName sets the Name field's value. +func (s *ClassicLoadBalancer) SetName(v string) *ClassicLoadBalancer { + s.Name = &v + return s +} + +// Describes the Classic Load Balancers to attach to a Spot Fleet. Spot Fleet +// registers the running Spot Instances with these Classic Load Balancers. +type ClassicLoadBalancersConfig struct { + _ struct{} `type:"structure"` + + // One or more Classic Load Balancers. + // + // ClassicLoadBalancers is a required field + ClassicLoadBalancers []*ClassicLoadBalancer `locationName:"classicLoadBalancers" locationNameList:"item" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s ClassicLoadBalancersConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ClassicLoadBalancersConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ClassicLoadBalancersConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ClassicLoadBalancersConfig"} + if s.ClassicLoadBalancers == nil { + invalidParams.Add(request.NewErrParamRequired("ClassicLoadBalancers")) + } + if s.ClassicLoadBalancers != nil && len(s.ClassicLoadBalancers) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ClassicLoadBalancers", 1)) + } + if s.ClassicLoadBalancers != nil { + for i, v := range s.ClassicLoadBalancers { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ClassicLoadBalancers", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClassicLoadBalancers sets the ClassicLoadBalancers field's value. +func (s *ClassicLoadBalancersConfig) SetClassicLoadBalancers(v []*ClassicLoadBalancer) *ClassicLoadBalancersConfig { + s.ClassicLoadBalancers = v + return s +} + // Describes the client-specific data. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ClientData type ClientData struct { _ struct{} `type:"structure"` @@ -23074,7 +25807,6 @@ func (s *ClientData) SetUploadStart(v time.Time) *ClientData { } // Contains the parameters for ConfirmProductInstance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ConfirmProductInstanceRequest type ConfirmProductInstanceInput struct { _ struct{} `type:"structure"` @@ -23140,7 +25872,6 @@ func (s *ConfirmProductInstanceInput) SetProductCode(v string) *ConfirmProductIn } // Contains the output of ConfirmProductInstance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ConfirmProductInstanceResult type ConfirmProductInstanceOutput struct { _ struct{} `type:"structure"` @@ -23175,8 +25906,86 @@ func (s *ConfirmProductInstanceOutput) SetReturn(v bool) *ConfirmProductInstance return s } +// Describes a connection notification for a VPC endpoint or VPC endpoint service. +type ConnectionNotification struct { + _ struct{} `type:"structure"` + + // The events for the notification. Valid values are Accept, Connect, Delete, + // and Reject. + ConnectionEvents []*string `locationName:"connectionEvents" locationNameList:"item" type:"list"` + + // The ARN of the SNS topic for the notification. + ConnectionNotificationArn *string `locationName:"connectionNotificationArn" type:"string"` + + // The ID of the notification. + ConnectionNotificationId *string `locationName:"connectionNotificationId" type:"string"` + + // The state of the notification. + ConnectionNotificationState *string `locationName:"connectionNotificationState" type:"string" enum:"ConnectionNotificationState"` + + // The type of notification. + ConnectionNotificationType *string `locationName:"connectionNotificationType" type:"string" enum:"ConnectionNotificationType"` + + // The ID of the endpoint service. + ServiceId *string `locationName:"serviceId" type:"string"` + + // The ID of the VPC endpoint. + VpcEndpointId *string `locationName:"vpcEndpointId" type:"string"` +} + +// String returns the string representation +func (s ConnectionNotification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConnectionNotification) GoString() string { + return s.String() +} + +// SetConnectionEvents sets the ConnectionEvents field's value. 
+func (s *ConnectionNotification) SetConnectionEvents(v []*string) *ConnectionNotification { + s.ConnectionEvents = v + return s +} + +// SetConnectionNotificationArn sets the ConnectionNotificationArn field's value. +func (s *ConnectionNotification) SetConnectionNotificationArn(v string) *ConnectionNotification { + s.ConnectionNotificationArn = &v + return s +} + +// SetConnectionNotificationId sets the ConnectionNotificationId field's value. +func (s *ConnectionNotification) SetConnectionNotificationId(v string) *ConnectionNotification { + s.ConnectionNotificationId = &v + return s +} + +// SetConnectionNotificationState sets the ConnectionNotificationState field's value. +func (s *ConnectionNotification) SetConnectionNotificationState(v string) *ConnectionNotification { + s.ConnectionNotificationState = &v + return s +} + +// SetConnectionNotificationType sets the ConnectionNotificationType field's value. +func (s *ConnectionNotification) SetConnectionNotificationType(v string) *ConnectionNotification { + s.ConnectionNotificationType = &v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *ConnectionNotification) SetServiceId(v string) *ConnectionNotification { + s.ServiceId = &v + return s +} + +// SetVpcEndpointId sets the VpcEndpointId field's value. +func (s *ConnectionNotification) SetVpcEndpointId(v string) *ConnectionNotification { + s.VpcEndpointId = &v + return s +} + // Describes a conversion task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ConversionTask type ConversionTask struct { _ struct{} `type:"structure"` @@ -23261,8 +26070,122 @@ func (s *ConversionTask) SetTags(v []*Tag) *ConversionTask { return s } +type CopyFpgaImageInput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier that you provide to ensure the idempotency + // of the request. For more information, see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html). + ClientToken *string `type:"string"` + + // The description for the new AFI. + Description *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The name for the new AFI. The default is the name of the source AFI. + Name *string `type:"string"` + + // The ID of the source AFI. + // + // SourceFpgaImageId is a required field + SourceFpgaImageId *string `type:"string" required:"true"` + + // The region that contains the source AFI. + // + // SourceRegion is a required field + SourceRegion *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CopyFpgaImageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyFpgaImageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CopyFpgaImageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CopyFpgaImageInput"} + if s.SourceFpgaImageId == nil { + invalidParams.Add(request.NewErrParamRequired("SourceFpgaImageId")) + } + if s.SourceRegion == nil { + invalidParams.Add(request.NewErrParamRequired("SourceRegion")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *CopyFpgaImageInput) SetClientToken(v string) *CopyFpgaImageInput { + s.ClientToken = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *CopyFpgaImageInput) SetDescription(v string) *CopyFpgaImageInput { + s.Description = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CopyFpgaImageInput) SetDryRun(v bool) *CopyFpgaImageInput { + s.DryRun = &v + return s +} + +// SetName sets the Name field's value. +func (s *CopyFpgaImageInput) SetName(v string) *CopyFpgaImageInput { + s.Name = &v + return s +} + +// SetSourceFpgaImageId sets the SourceFpgaImageId field's value. +func (s *CopyFpgaImageInput) SetSourceFpgaImageId(v string) *CopyFpgaImageInput { + s.SourceFpgaImageId = &v + return s +} + +// SetSourceRegion sets the SourceRegion field's value. +func (s *CopyFpgaImageInput) SetSourceRegion(v string) *CopyFpgaImageInput { + s.SourceRegion = &v + return s +} + +type CopyFpgaImageOutput struct { + _ struct{} `type:"structure"` + + // The ID of the new AFI. + FpgaImageId *string `locationName:"fpgaImageId" type:"string"` +} + +// String returns the string representation +func (s CopyFpgaImageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CopyFpgaImageOutput) GoString() string { + return s.String() +} + +// SetFpgaImageId sets the FpgaImageId field's value. +func (s *CopyFpgaImageOutput) SetFpgaImageId(v string) *CopyFpgaImageOutput { + s.FpgaImageId = &v + return s +} + // Contains the parameters for CopyImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyImageRequest type CopyImageInput struct { _ struct{} `type:"structure"` @@ -23391,7 +26314,6 @@ func (s *CopyImageInput) SetSourceRegion(v string) *CopyImageInput { } // Contains the output of CopyImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopyImageResult type CopyImageOutput struct { _ struct{} `type:"structure"` @@ -23416,7 +26338,6 @@ func (s *CopyImageOutput) SetImageId(v string) *CopyImageOutput { } // Contains the parameters for CopySnapshot. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopySnapshotRequest type CopySnapshotInput struct { _ struct{} `type:"structure"` @@ -23558,7 +26479,6 @@ func (s *CopySnapshotInput) SetSourceSnapshotId(v string) *CopySnapshotInput { } // Contains the output of CopySnapshot. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CopySnapshotResult type CopySnapshotOutput struct { _ struct{} `type:"structure"` @@ -23583,7 +26503,6 @@ func (s *CopySnapshotOutput) SetSnapshotId(v string) *CopySnapshotOutput { } // Contains the parameters for CreateCustomerGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCustomerGatewayRequest type CreateCustomerGatewayInput struct { _ struct{} `type:"structure"` @@ -23666,7 +26585,6 @@ func (s *CreateCustomerGatewayInput) SetType(v string) *CreateCustomerGatewayInp } // Contains the output of CreateCustomerGateway. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateCustomerGatewayResult type CreateCustomerGatewayOutput struct { _ struct{} `type:"structure"` @@ -23690,8 +26608,80 @@ func (s *CreateCustomerGatewayOutput) SetCustomerGateway(v *CustomerGateway) *Cr return s } +type CreateDefaultSubnetInput struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which to create the default subnet. + // + // AvailabilityZone is a required field + AvailabilityZone *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s CreateDefaultSubnetInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDefaultSubnetInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDefaultSubnetInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDefaultSubnetInput"} + if s.AvailabilityZone == nil { + invalidParams.Add(request.NewErrParamRequired("AvailabilityZone")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *CreateDefaultSubnetInput) SetAvailabilityZone(v string) *CreateDefaultSubnetInput { + s.AvailabilityZone = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateDefaultSubnetInput) SetDryRun(v bool) *CreateDefaultSubnetInput { + s.DryRun = &v + return s +} + +type CreateDefaultSubnetOutput struct { + _ struct{} `type:"structure"` + + // Information about the subnet. + Subnet *Subnet `locationName:"subnet" type:"structure"` +} + +// String returns the string representation +func (s CreateDefaultSubnetOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDefaultSubnetOutput) GoString() string { + return s.String() +} + +// SetSubnet sets the Subnet field's value. +func (s *CreateDefaultSubnetOutput) SetSubnet(v *Subnet) *CreateDefaultSubnetOutput { + s.Subnet = v + return s +} + // Contains the parameters for CreateDefaultVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultVpcRequest type CreateDefaultVpcInput struct { _ struct{} `type:"structure"` @@ -23719,7 +26709,6 @@ func (s *CreateDefaultVpcInput) SetDryRun(v bool) *CreateDefaultVpcInput { } // Contains the output of CreateDefaultVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDefaultVpcResult type CreateDefaultVpcOutput struct { _ struct{} `type:"structure"` @@ -23744,7 +26733,6 @@ func (s *CreateDefaultVpcOutput) SetVpc(v *Vpc) *CreateDefaultVpcOutput { } // Contains the parameters for CreateDhcpOptions. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDhcpOptionsRequest type CreateDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -23796,7 +26784,6 @@ func (s *CreateDhcpOptionsInput) SetDryRun(v bool) *CreateDhcpOptionsInput { } // Contains the output of CreateDhcpOptions. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateDhcpOptionsResult type CreateDhcpOptionsOutput struct { _ struct{} `type:"structure"` @@ -23820,7 +26807,6 @@ func (s *CreateDhcpOptionsOutput) SetDhcpOptions(v *DhcpOptions) *CreateDhcpOpti return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateEgressOnlyInternetGatewayRequest type CreateEgressOnlyInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -23881,7 +26867,6 @@ func (s *CreateEgressOnlyInternetGatewayInput) SetVpcId(v string) *CreateEgressO return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateEgressOnlyInternetGatewayResult type CreateEgressOnlyInternetGatewayOutput struct { _ struct{} `type:"structure"` @@ -23916,7 +26901,6 @@ func (s *CreateEgressOnlyInternetGatewayOutput) SetEgressOnlyInternetGateway(v * } // Contains the parameters for CreateFlowLogs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFlowLogsRequest type CreateFlowLogsInput struct { _ struct{} `type:"structure"` @@ -24025,7 +27009,6 @@ func (s *CreateFlowLogsInput) SetTrafficType(v string) *CreateFlowLogsInput { } // Contains the output of CreateFlowLogs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFlowLogsResult type CreateFlowLogsOutput struct { _ struct{} `type:"structure"` @@ -24068,7 +27051,6 @@ func (s *CreateFlowLogsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *CreateFlo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFpgaImageRequest type CreateFpgaImageInput struct { _ struct{} `type:"structure"` @@ -24157,7 +27139,6 @@ func (s *CreateFpgaImageInput) SetName(v string) *CreateFpgaImageInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateFpgaImageResult type CreateFpgaImageOutput struct { _ struct{} `type:"structure"` @@ -24191,7 +27172,6 @@ func (s *CreateFpgaImageOutput) SetFpgaImageId(v string) *CreateFpgaImageOutput } // Contains the parameters for CreateImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateImageRequest type CreateImageInput struct { _ struct{} `type:"structure"` @@ -24291,7 +27271,6 @@ func (s *CreateImageInput) SetNoReboot(v bool) *CreateImageInput { } // Contains the output of CreateImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateImageResult type CreateImageOutput struct { _ struct{} `type:"structure"` @@ -24316,7 +27295,6 @@ func (s *CreateImageOutput) SetImageId(v string) *CreateImageOutput { } // Contains the parameters for CreateInstanceExportTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInstanceExportTaskRequest type CreateInstanceExportTaskInput struct { _ struct{} `type:"structure"` @@ -24384,7 +27362,6 @@ func (s *CreateInstanceExportTaskInput) SetTargetEnvironment(v string) *CreateIn } // Contains the output for CreateInstanceExportTask. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInstanceExportTaskResult type CreateInstanceExportTaskOutput struct { _ struct{} `type:"structure"` @@ -24409,7 +27386,6 @@ func (s *CreateInstanceExportTaskOutput) SetExportTask(v *ExportTask) *CreateIns } // Contains the parameters for CreateInternetGateway. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInternetGatewayRequest type CreateInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -24437,7 +27413,6 @@ func (s *CreateInternetGatewayInput) SetDryRun(v bool) *CreateInternetGatewayInp } // Contains the output of CreateInternetGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateInternetGatewayResult type CreateInternetGatewayOutput struct { _ struct{} `type:"structure"` @@ -24462,7 +27437,6 @@ func (s *CreateInternetGatewayOutput) SetInternetGateway(v *InternetGateway) *Cr } // Contains the parameters for CreateKeyPair. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateKeyPairRequest type CreateKeyPairInput struct { _ struct{} `type:"structure"` @@ -24516,7 +27490,6 @@ func (s *CreateKeyPairInput) SetKeyName(v string) *CreateKeyPairInput { } // Describes a key pair. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/KeyPair type CreateKeyPairOutput struct { _ struct{} `type:"structure"` @@ -24558,8 +27531,252 @@ func (s *CreateKeyPairOutput) SetKeyName(v string) *CreateKeyPairOutput { return s } +type CreateLaunchTemplateInput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. For more information, see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). + ClientToken *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The information for the launch template. + // + // LaunchTemplateData is a required field + LaunchTemplateData *RequestLaunchTemplateData `type:"structure" required:"true"` + + // A name for the launch template. + // + // LaunchTemplateName is a required field + LaunchTemplateName *string `min:"3" type:"string" required:"true"` + + // A description for the first version of the launch template. + VersionDescription *string `type:"string"` +} + +// String returns the string representation +func (s CreateLaunchTemplateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLaunchTemplateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateLaunchTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLaunchTemplateInput"} + if s.LaunchTemplateData == nil { + invalidParams.Add(request.NewErrParamRequired("LaunchTemplateData")) + } + if s.LaunchTemplateName == nil { + invalidParams.Add(request.NewErrParamRequired("LaunchTemplateName")) + } + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + if s.LaunchTemplateData != nil { + if err := s.LaunchTemplateData.Validate(); err != nil { + invalidParams.AddNested("LaunchTemplateData", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. 
+func (s *CreateLaunchTemplateInput) SetClientToken(v string) *CreateLaunchTemplateInput { + s.ClientToken = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateLaunchTemplateInput) SetDryRun(v bool) *CreateLaunchTemplateInput { + s.DryRun = &v + return s +} + +// SetLaunchTemplateData sets the LaunchTemplateData field's value. +func (s *CreateLaunchTemplateInput) SetLaunchTemplateData(v *RequestLaunchTemplateData) *CreateLaunchTemplateInput { + s.LaunchTemplateData = v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *CreateLaunchTemplateInput) SetLaunchTemplateName(v string) *CreateLaunchTemplateInput { + s.LaunchTemplateName = &v + return s +} + +// SetVersionDescription sets the VersionDescription field's value. +func (s *CreateLaunchTemplateInput) SetVersionDescription(v string) *CreateLaunchTemplateInput { + s.VersionDescription = &v + return s +} + +type CreateLaunchTemplateOutput struct { + _ struct{} `type:"structure"` + + // Information about the launch template. + LaunchTemplate *LaunchTemplate `locationName:"launchTemplate" type:"structure"` +} + +// String returns the string representation +func (s CreateLaunchTemplateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLaunchTemplateOutput) GoString() string { + return s.String() +} + +// SetLaunchTemplate sets the LaunchTemplate field's value. +func (s *CreateLaunchTemplateOutput) SetLaunchTemplate(v *LaunchTemplate) *CreateLaunchTemplateOutput { + s.LaunchTemplate = v + return s +} + +type CreateLaunchTemplateVersionInput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. For more information, see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). + ClientToken *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The information for the launch template. + // + // LaunchTemplateData is a required field + LaunchTemplateData *RequestLaunchTemplateData `type:"structure" required:"true"` + + // The ID of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateId *string `type:"string"` + + // The name of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateName *string `min:"3" type:"string"` + + // The version number of the launch template version on which to base the new + // version. The new version inherits the same launch parameters as the source + // version, except for parameters that you specify in LaunchTemplateData. + SourceVersion *string `type:"string"` + + // A description for the version of the launch template. + VersionDescription *string `type:"string"` +} + +// String returns the string representation +func (s CreateLaunchTemplateVersionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLaunchTemplateVersionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateLaunchTemplateVersionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLaunchTemplateVersionInput"} + if s.LaunchTemplateData == nil { + invalidParams.Add(request.NewErrParamRequired("LaunchTemplateData")) + } + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + if s.LaunchTemplateData != nil { + if err := s.LaunchTemplateData.Validate(); err != nil { + invalidParams.AddNested("LaunchTemplateData", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateLaunchTemplateVersionInput) SetClientToken(v string) *CreateLaunchTemplateVersionInput { + s.ClientToken = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateLaunchTemplateVersionInput) SetDryRun(v bool) *CreateLaunchTemplateVersionInput { + s.DryRun = &v + return s +} + +// SetLaunchTemplateData sets the LaunchTemplateData field's value. +func (s *CreateLaunchTemplateVersionInput) SetLaunchTemplateData(v *RequestLaunchTemplateData) *CreateLaunchTemplateVersionInput { + s.LaunchTemplateData = v + return s +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *CreateLaunchTemplateVersionInput) SetLaunchTemplateId(v string) *CreateLaunchTemplateVersionInput { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *CreateLaunchTemplateVersionInput) SetLaunchTemplateName(v string) *CreateLaunchTemplateVersionInput { + s.LaunchTemplateName = &v + return s +} + +// SetSourceVersion sets the SourceVersion field's value. +func (s *CreateLaunchTemplateVersionInput) SetSourceVersion(v string) *CreateLaunchTemplateVersionInput { + s.SourceVersion = &v + return s +} + +// SetVersionDescription sets the VersionDescription field's value. +func (s *CreateLaunchTemplateVersionInput) SetVersionDescription(v string) *CreateLaunchTemplateVersionInput { + s.VersionDescription = &v + return s +} + +type CreateLaunchTemplateVersionOutput struct { + _ struct{} `type:"structure"` + + // Information about the launch template version. + LaunchTemplateVersion *LaunchTemplateVersion `locationName:"launchTemplateVersion" type:"structure"` +} + +// String returns the string representation +func (s CreateLaunchTemplateVersionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLaunchTemplateVersionOutput) GoString() string { + return s.String() +} + +// SetLaunchTemplateVersion sets the LaunchTemplateVersion field's value. +func (s *CreateLaunchTemplateVersionOutput) SetLaunchTemplateVersion(v *LaunchTemplateVersion) *CreateLaunchTemplateVersionOutput { + s.LaunchTemplateVersion = v + return s +} + // Contains the parameters for CreateNatGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNatGatewayRequest type CreateNatGatewayInput struct { _ struct{} `type:"structure"` @@ -24627,7 +27844,6 @@ func (s *CreateNatGatewayInput) SetSubnetId(v string) *CreateNatGatewayInput { } // Contains the output of CreateNatGateway. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNatGatewayResult type CreateNatGatewayOutput struct { _ struct{} `type:"structure"` @@ -24662,7 +27878,6 @@ func (s *CreateNatGatewayOutput) SetNatGateway(v *NatGateway) *CreateNatGatewayO } // Contains the parameters for CreateNetworkAclEntry. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclEntryRequest type CreateNetworkAclEntryInput struct { _ struct{} `type:"structure"` @@ -24817,7 +28032,6 @@ func (s *CreateNetworkAclEntryInput) SetRuleNumber(v int64) *CreateNetworkAclEnt return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclEntryOutput type CreateNetworkAclEntryOutput struct { _ struct{} `type:"structure"` } @@ -24833,7 +28047,6 @@ func (s CreateNetworkAclEntryOutput) GoString() string { } // Contains the parameters for CreateNetworkAcl. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclRequest type CreateNetworkAclInput struct { _ struct{} `type:"structure"` @@ -24885,7 +28098,6 @@ func (s *CreateNetworkAclInput) SetVpcId(v string) *CreateNetworkAclInput { } // Contains the output of CreateNetworkAcl. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkAclResult type CreateNetworkAclOutput struct { _ struct{} `type:"structure"` @@ -24910,7 +28122,6 @@ func (s *CreateNetworkAclOutput) SetNetworkAcl(v *NetworkAcl) *CreateNetworkAclO } // Contains the parameters for CreateNetworkInterface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfaceRequest type CreateNetworkInterfaceInput struct { _ struct{} `type:"structure"` @@ -25052,7 +28263,6 @@ func (s *CreateNetworkInterfaceInput) SetSubnetId(v string) *CreateNetworkInterf } // Contains the output of CreateNetworkInterface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfaceResult type CreateNetworkInterfaceOutput struct { _ struct{} `type:"structure"` @@ -25077,7 +28287,6 @@ func (s *CreateNetworkInterfaceOutput) SetNetworkInterface(v *NetworkInterface) } // Contains the parameters for CreateNetworkInterfacePermission. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfacePermissionRequest type CreateNetworkInterfacePermissionInput struct { _ struct{} `type:"structure"` @@ -25161,7 +28370,6 @@ func (s *CreateNetworkInterfacePermissionInput) SetPermission(v string) *CreateN } // Contains the output of CreateNetworkInterfacePermission. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateNetworkInterfacePermissionResult type CreateNetworkInterfacePermissionOutput struct { _ struct{} `type:"structure"` @@ -25186,7 +28394,6 @@ func (s *CreateNetworkInterfacePermissionOutput) SetInterfacePermission(v *Netwo } // Contains the parameters for CreatePlacementGroup. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreatePlacementGroupRequest type CreatePlacementGroupInput struct { _ struct{} `type:"structure"` @@ -25196,7 +28403,8 @@ type CreatePlacementGroupInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // A name for the placement group. + // A name for the placement group. Must be unique within the scope of your account + // for the region. 
// // Constraints: Up to 255 ASCII characters // @@ -25253,7 +28461,6 @@ func (s *CreatePlacementGroupInput) SetStrategy(v string) *CreatePlacementGroupI return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreatePlacementGroupOutput type CreatePlacementGroupOutput struct { _ struct{} `type:"structure"` } @@ -25269,7 +28476,6 @@ func (s CreatePlacementGroupOutput) GoString() string { } // Contains the parameters for CreateReservedInstancesListing. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateReservedInstancesListingRequest type CreateReservedInstancesListingInput struct { _ struct{} `type:"structure"` @@ -25357,7 +28563,6 @@ func (s *CreateReservedInstancesListingInput) SetReservedInstancesId(v string) * } // Contains the output of CreateReservedInstancesListing. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateReservedInstancesListingResult type CreateReservedInstancesListingOutput struct { _ struct{} `type:"structure"` @@ -25382,7 +28587,6 @@ func (s *CreateReservedInstancesListingOutput) SetReservedInstancesListings(v [] } // Contains the parameters for CreateRoute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteRequest type CreateRouteInput struct { _ struct{} `type:"structure"` @@ -25510,7 +28714,6 @@ func (s *CreateRouteInput) SetVpcPeeringConnectionId(v string) *CreateRouteInput } // Contains the output of CreateRoute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteResult type CreateRouteOutput struct { _ struct{} `type:"structure"` @@ -25535,7 +28738,6 @@ func (s *CreateRouteOutput) SetReturn(v bool) *CreateRouteOutput { } // Contains the parameters for CreateRouteTable. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteTableRequest type CreateRouteTableInput struct { _ struct{} `type:"structure"` @@ -25587,7 +28789,6 @@ func (s *CreateRouteTableInput) SetVpcId(v string) *CreateRouteTableInput { } // Contains the output of CreateRouteTable. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateRouteTableResult type CreateRouteTableOutput struct { _ struct{} `type:"structure"` @@ -25612,7 +28813,6 @@ func (s *CreateRouteTableOutput) SetRouteTable(v *RouteTable) *CreateRouteTableO } // Contains the parameters for CreateSecurityGroup. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSecurityGroupRequest type CreateSecurityGroupInput struct { _ struct{} `type:"structure"` @@ -25699,7 +28899,6 @@ func (s *CreateSecurityGroupInput) SetVpcId(v string) *CreateSecurityGroupInput } // Contains the output of CreateSecurityGroup. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSecurityGroupResult type CreateSecurityGroupOutput struct { _ struct{} `type:"structure"` @@ -25724,7 +28923,6 @@ func (s *CreateSecurityGroupOutput) SetGroupId(v string) *CreateSecurityGroupOut } // Contains the parameters for CreateSnapshot. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSnapshotRequest type CreateSnapshotInput struct { _ struct{} `type:"structure"` @@ -25785,11 +28983,10 @@ func (s *CreateSnapshotInput) SetVolumeId(v string) *CreateSnapshotInput { } // Contains the parameters for CreateSpotDatafeedSubscription. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSpotDatafeedSubscriptionRequest type CreateSpotDatafeedSubscriptionInput struct { _ struct{} `type:"structure"` - // The Amazon S3 bucket in which to store the Spot instance data feed. + // The Amazon S3 bucket in which to store the Spot Instance data feed. // // Bucket is a required field Bucket *string `locationName:"bucket" type:"string" required:"true"` @@ -25846,11 +29043,10 @@ func (s *CreateSpotDatafeedSubscriptionInput) SetPrefix(v string) *CreateSpotDat } // Contains the output of CreateSpotDatafeedSubscription. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSpotDatafeedSubscriptionResult type CreateSpotDatafeedSubscriptionOutput struct { _ struct{} `type:"structure"` - // The Spot instance data feed subscription. + // The Spot Instance data feed subscription. SpotDatafeedSubscription *SpotDatafeedSubscription `locationName:"spotDatafeedSubscription" type:"structure"` } @@ -25871,7 +29067,6 @@ func (s *CreateSpotDatafeedSubscriptionOutput) SetSpotDatafeedSubscription(v *Sp } // Contains the parameters for CreateSubnet. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSubnetRequest type CreateSubnetInput struct { _ struct{} `type:"structure"` @@ -25959,7 +29154,6 @@ func (s *CreateSubnetInput) SetVpcId(v string) *CreateSubnetInput { } // Contains the output of CreateSubnet. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateSubnetResult type CreateSubnetOutput struct { _ struct{} `type:"structure"` @@ -25984,7 +29178,6 @@ func (s *CreateSubnetOutput) SetSubnet(v *Subnet) *CreateSubnetOutput { } // Contains the parameters for CreateTags. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateTagsRequest type CreateTagsInput struct { _ struct{} `type:"structure"` @@ -26051,7 +29244,6 @@ func (s *CreateTagsInput) SetTags(v []*Tag) *CreateTagsInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateTagsOutput type CreateTagsOutput struct { _ struct{} `type:"structure"` } @@ -26067,7 +29259,6 @@ func (s CreateTagsOutput) GoString() string { } // Contains the parameters for CreateVolume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVolumeRequest type CreateVolumeInput struct { _ struct{} `type:"structure"` @@ -26211,7 +29402,6 @@ func (s *CreateVolumeInput) SetVolumeType(v string) *CreateVolumeInput { // Describes the user or group to be added or removed from the permissions for // a volume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVolumePermission type CreateVolumePermission struct { _ struct{} `type:"structure"` @@ -26247,7 +29437,6 @@ func (s *CreateVolumePermission) SetUserId(v string) *CreateVolumePermission { } // Describes modifications to the permissions for a volume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVolumePermissionModifications type CreateVolumePermissionModifications struct { _ struct{} `type:"structure"` @@ -26282,8 +29471,133 @@ func (s *CreateVolumePermissionModifications) SetRemove(v []*CreateVolumePermiss return s } +type CreateVpcEndpointConnectionNotificationInput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. 
For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). + ClientToken *string `type:"string"` + + // One or more endpoint events for which to receive notifications. Valid values + // are Accept, Connect, Delete, and Reject. + // + // ConnectionEvents is a required field + ConnectionEvents []*string `locationNameList:"item" type:"list" required:"true"` + + // The ARN of the SNS topic for the notifications. + // + // ConnectionNotificationArn is a required field + ConnectionNotificationArn *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the endpoint service. + ServiceId *string `type:"string"` + + // The ID of the endpoint. + VpcEndpointId *string `type:"string"` +} + +// String returns the string representation +func (s CreateVpcEndpointConnectionNotificationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateVpcEndpointConnectionNotificationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateVpcEndpointConnectionNotificationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateVpcEndpointConnectionNotificationInput"} + if s.ConnectionEvents == nil { + invalidParams.Add(request.NewErrParamRequired("ConnectionEvents")) + } + if s.ConnectionNotificationArn == nil { + invalidParams.Add(request.NewErrParamRequired("ConnectionNotificationArn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateVpcEndpointConnectionNotificationInput) SetClientToken(v string) *CreateVpcEndpointConnectionNotificationInput { + s.ClientToken = &v + return s +} + +// SetConnectionEvents sets the ConnectionEvents field's value. +func (s *CreateVpcEndpointConnectionNotificationInput) SetConnectionEvents(v []*string) *CreateVpcEndpointConnectionNotificationInput { + s.ConnectionEvents = v + return s +} + +// SetConnectionNotificationArn sets the ConnectionNotificationArn field's value. +func (s *CreateVpcEndpointConnectionNotificationInput) SetConnectionNotificationArn(v string) *CreateVpcEndpointConnectionNotificationInput { + s.ConnectionNotificationArn = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateVpcEndpointConnectionNotificationInput) SetDryRun(v bool) *CreateVpcEndpointConnectionNotificationInput { + s.DryRun = &v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *CreateVpcEndpointConnectionNotificationInput) SetServiceId(v string) *CreateVpcEndpointConnectionNotificationInput { + s.ServiceId = &v + return s +} + +// SetVpcEndpointId sets the VpcEndpointId field's value. +func (s *CreateVpcEndpointConnectionNotificationInput) SetVpcEndpointId(v string) *CreateVpcEndpointConnectionNotificationInput { + s.VpcEndpointId = &v + return s +} + +type CreateVpcEndpointConnectionNotificationOutput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. 
+ ClientToken *string `locationName:"clientToken" type:"string"` + + // Information about the notification. + ConnectionNotification *ConnectionNotification `locationName:"connectionNotification" type:"structure"` +} + +// String returns the string representation +func (s CreateVpcEndpointConnectionNotificationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateVpcEndpointConnectionNotificationOutput) GoString() string { + return s.String() +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateVpcEndpointConnectionNotificationOutput) SetClientToken(v string) *CreateVpcEndpointConnectionNotificationOutput { + s.ClientToken = &v + return s +} + +// SetConnectionNotification sets the ConnectionNotification field's value. +func (s *CreateVpcEndpointConnectionNotificationOutput) SetConnectionNotification(v *ConnectionNotification) *CreateVpcEndpointConnectionNotificationOutput { + s.ConnectionNotification = v + return s +} + // Contains the parameters for CreateVpcEndpoint. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpointRequest type CreateVpcEndpointInput struct { _ struct{} `type:"structure"` @@ -26297,20 +29611,49 @@ type CreateVpcEndpointInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // A policy to attach to the endpoint that controls access to the service. The - // policy must be in valid JSON format. If this parameter is not specified, - // we attach a default policy that allows full access to the service. + // (Gateway endpoint) A policy to attach to the endpoint that controls access + // to the service. The policy must be in valid JSON format. If this parameter + // is not specified, we attach a default policy that allows full access to the + // service. PolicyDocument *string `type:"string"` - // One or more route table IDs. + // (Interface endpoint) Indicate whether to associate a private hosted zone + // with the specified VPC. The private hosted zone contains a record set for + // the default public DNS name for the service for the region (for example, + // kinesis.us-east-1.amazonaws.com) which resolves to the private IP addresses + // of the endpoint network interfaces in the VPC. This enables you to make requests + // to the default public DNS name for the service instead of the public DNS + // names that are automatically generated by the VPC endpoint service. + // + // To use a private hosted zone, you must set the following VPC attributes to + // true: enableDnsHostnames and enableDnsSupport. Use ModifyVpcAttribute to + // set the VPC attributes. + // + // Default: true + PrivateDnsEnabled *bool `type:"boolean"` + + // (Gateway endpoint) One or more route table IDs. RouteTableIds []*string `locationName:"RouteTableId" locationNameList:"item" type:"list"` - // The AWS service name, in the form com.amazonaws.region.service. To get a - // list of available services, use the DescribeVpcEndpointServices request. + // (Interface endpoint) The ID of one or more security groups to associate with + // the endpoint network interface. + SecurityGroupIds []*string `locationName:"SecurityGroupId" locationNameList:"item" type:"list"` + + // The service name. To get a list of available services, use the DescribeVpcEndpointServices + // request, or get the name from the service provider. 
// // ServiceName is a required field ServiceName *string `type:"string" required:"true"` + // (Interface endpoint) The ID of one or more subnets in which to create an + // endpoint network interface. + SubnetIds []*string `locationName:"SubnetId" locationNameList:"item" type:"list"` + + // The type of endpoint. + // + // Default: Gateway + VpcEndpointType *string `type:"string" enum:"VpcEndpointType"` + // The ID of the VPC in which the endpoint will be used. // // VpcId is a required field @@ -26361,18 +29704,42 @@ func (s *CreateVpcEndpointInput) SetPolicyDocument(v string) *CreateVpcEndpointI return s } +// SetPrivateDnsEnabled sets the PrivateDnsEnabled field's value. +func (s *CreateVpcEndpointInput) SetPrivateDnsEnabled(v bool) *CreateVpcEndpointInput { + s.PrivateDnsEnabled = &v + return s +} + // SetRouteTableIds sets the RouteTableIds field's value. func (s *CreateVpcEndpointInput) SetRouteTableIds(v []*string) *CreateVpcEndpointInput { s.RouteTableIds = v return s } +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *CreateVpcEndpointInput) SetSecurityGroupIds(v []*string) *CreateVpcEndpointInput { + s.SecurityGroupIds = v + return s +} + // SetServiceName sets the ServiceName field's value. func (s *CreateVpcEndpointInput) SetServiceName(v string) *CreateVpcEndpointInput { s.ServiceName = &v return s } +// SetSubnetIds sets the SubnetIds field's value. +func (s *CreateVpcEndpointInput) SetSubnetIds(v []*string) *CreateVpcEndpointInput { + s.SubnetIds = v + return s +} + +// SetVpcEndpointType sets the VpcEndpointType field's value. +func (s *CreateVpcEndpointInput) SetVpcEndpointType(v string) *CreateVpcEndpointInput { + s.VpcEndpointType = &v + return s +} + // SetVpcId sets the VpcId field's value. func (s *CreateVpcEndpointInput) SetVpcId(v string) *CreateVpcEndpointInput { s.VpcId = &v @@ -26380,7 +29747,6 @@ func (s *CreateVpcEndpointInput) SetVpcId(v string) *CreateVpcEndpointInput { } // Contains the output of CreateVpcEndpoint. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcEndpointResult type CreateVpcEndpointOutput struct { _ struct{} `type:"structure"` @@ -26414,8 +29780,111 @@ func (s *CreateVpcEndpointOutput) SetVpcEndpoint(v *VpcEndpoint) *CreateVpcEndpo return s } +type CreateVpcEndpointServiceConfigurationInput struct { + _ struct{} `type:"structure"` + + // Indicate whether requests from service consumers to create an endpoint to + // your service must be accepted. To accept a request, use AcceptVpcEndpointConnections. + AcceptanceRequired *bool `type:"boolean"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. For more information, see How to Ensure Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Run_Instance_Idempotency.html). + ClientToken *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The Amazon Resource Names (ARNs) of one or more Network Load Balancers for + // your service. 
+ // + // NetworkLoadBalancerArns is a required field + NetworkLoadBalancerArns []*string `locationName:"NetworkLoadBalancerArn" locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s CreateVpcEndpointServiceConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateVpcEndpointServiceConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateVpcEndpointServiceConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateVpcEndpointServiceConfigurationInput"} + if s.NetworkLoadBalancerArns == nil { + invalidParams.Add(request.NewErrParamRequired("NetworkLoadBalancerArns")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptanceRequired sets the AcceptanceRequired field's value. +func (s *CreateVpcEndpointServiceConfigurationInput) SetAcceptanceRequired(v bool) *CreateVpcEndpointServiceConfigurationInput { + s.AcceptanceRequired = &v + return s +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateVpcEndpointServiceConfigurationInput) SetClientToken(v string) *CreateVpcEndpointServiceConfigurationInput { + s.ClientToken = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *CreateVpcEndpointServiceConfigurationInput) SetDryRun(v bool) *CreateVpcEndpointServiceConfigurationInput { + s.DryRun = &v + return s +} + +// SetNetworkLoadBalancerArns sets the NetworkLoadBalancerArns field's value. +func (s *CreateVpcEndpointServiceConfigurationInput) SetNetworkLoadBalancerArns(v []*string) *CreateVpcEndpointServiceConfigurationInput { + s.NetworkLoadBalancerArns = v + return s +} + +type CreateVpcEndpointServiceConfigurationOutput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. + ClientToken *string `locationName:"clientToken" type:"string"` + + // Information about the service configuration. + ServiceConfiguration *ServiceConfiguration `locationName:"serviceConfiguration" type:"structure"` +} + +// String returns the string representation +func (s CreateVpcEndpointServiceConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateVpcEndpointServiceConfigurationOutput) GoString() string { + return s.String() +} + +// SetClientToken sets the ClientToken field's value. +func (s *CreateVpcEndpointServiceConfigurationOutput) SetClientToken(v string) *CreateVpcEndpointServiceConfigurationOutput { + s.ClientToken = &v + return s +} + +// SetServiceConfiguration sets the ServiceConfiguration field's value. +func (s *CreateVpcEndpointServiceConfigurationOutput) SetServiceConfiguration(v *ServiceConfiguration) *CreateVpcEndpointServiceConfigurationOutput { + s.ServiceConfiguration = v + return s +} + // Contains the parameters for CreateVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcRequest type CreateVpcInput struct { _ struct{} `type:"structure"` @@ -26496,7 +29965,6 @@ func (s *CreateVpcInput) SetInstanceTenancy(v string) *CreateVpcInput { } // Contains the output of CreateVpc. 
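The interface-endpoint fields added to CreateVpcEndpointInput above (VpcEndpointType, SubnetIds, SecurityGroupIds, PrivateDnsEnabled) can be exercised roughly as follows. This is a sketch with placeholder IDs and service name, assuming an already configured `*ec2.EC2` client like the one in the earlier example:

```
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// createInterfaceEndpoint creates an interface endpoint with a private hosted
// zone. The VPC, subnet, and security group IDs are placeholders.
func createInterfaceEndpoint(svc *ec2.EC2) error {
	out, err := svc.CreateVpcEndpoint(&ec2.CreateVpcEndpointInput{
		VpcId:             aws.String("vpc-12345678"),
		ServiceName:       aws.String("com.amazonaws.us-east-1.kinesis"),
		VpcEndpointType:   aws.String("Interface"),
		SubnetIds:         aws.StringSlice([]string{"subnet-12345678"}),
		SecurityGroupIds:  aws.StringSlice([]string{"sg-12345678"}),
		// Requires enableDnsHostnames and enableDnsSupport on the VPC,
		// as noted in the field documentation above.
		PrivateDnsEnabled: aws.Bool(true),
	})
	if err != nil {
		return err
	}
	fmt.Println("endpoint:", aws.StringValue(out.VpcEndpoint.VpcEndpointId))
	return nil
}
```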
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcResult type CreateVpcOutput struct { _ struct{} `type:"structure"` @@ -26521,7 +29989,6 @@ func (s *CreateVpcOutput) SetVpc(v *Vpc) *CreateVpcOutput { } // Contains the parameters for CreateVpcPeeringConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcPeeringConnectionRequest type CreateVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -26531,15 +29998,22 @@ type CreateVpcPeeringConnectionInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The AWS account ID of the owner of the peer VPC. + // The AWS account ID of the owner of the accepter VPC. // // Default: Your AWS account ID PeerOwnerId *string `locationName:"peerOwnerId" type:"string"` + // The region code for the accepter VPC, if the accepter VPC is located in a + // region other than the region in which you make the request. + // + // Default: The region in which you make the request. + PeerRegion *string `type:"string"` + // The ID of the VPC with which you are creating the VPC peering connection. + // You must specify this parameter in the request. PeerVpcId *string `locationName:"peerVpcId" type:"string"` - // The ID of the requester VPC. + // The ID of the requester VPC. You must specify this parameter in the request. VpcId *string `locationName:"vpcId" type:"string"` } @@ -26565,6 +30039,12 @@ func (s *CreateVpcPeeringConnectionInput) SetPeerOwnerId(v string) *CreateVpcPee return s } +// SetPeerRegion sets the PeerRegion field's value. +func (s *CreateVpcPeeringConnectionInput) SetPeerRegion(v string) *CreateVpcPeeringConnectionInput { + s.PeerRegion = &v + return s +} + // SetPeerVpcId sets the PeerVpcId field's value. func (s *CreateVpcPeeringConnectionInput) SetPeerVpcId(v string) *CreateVpcPeeringConnectionInput { s.PeerVpcId = &v @@ -26578,7 +30058,6 @@ func (s *CreateVpcPeeringConnectionInput) SetVpcId(v string) *CreateVpcPeeringCo } // Contains the output of CreateVpcPeeringConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpcPeeringConnectionResult type CreateVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -26603,7 +30082,6 @@ func (s *CreateVpcPeeringConnectionOutput) SetVpcPeeringConnection(v *VpcPeering } // Contains the parameters for CreateVpnConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionRequest type CreateVpnConnectionInput struct { _ struct{} `type:"structure"` @@ -26618,11 +30096,7 @@ type CreateVpnConnectionInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // Indicates whether the VPN connection requires static routes. If you are creating - // a VPN connection for a device that does not support BGP, you must specify - // true. - // - // Default: false + // The options for the VPN connection. Options *VpnConnectionOptionsSpecification `locationName:"options" type:"structure"` // The type of VPN connection (ipsec.1). @@ -26696,7 +30170,6 @@ func (s *CreateVpnConnectionInput) SetVpnGatewayId(v string) *CreateVpnConnectio } // Contains the output of CreateVpnConnection. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionResult type CreateVpnConnectionOutput struct { _ struct{} `type:"structure"` @@ -26721,7 +30194,6 @@ func (s *CreateVpnConnectionOutput) SetVpnConnection(v *VpnConnection) *CreateVp } // Contains the parameters for CreateVpnConnectionRoute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionRouteRequest type CreateVpnConnectionRouteInput struct { _ struct{} `type:"structure"` @@ -26774,7 +30246,6 @@ func (s *CreateVpnConnectionRouteInput) SetVpnConnectionId(v string) *CreateVpnC return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnConnectionRouteOutput type CreateVpnConnectionRouteOutput struct { _ struct{} `type:"structure"` } @@ -26790,10 +30261,16 @@ func (s CreateVpnConnectionRouteOutput) GoString() string { } // Contains the parameters for CreateVpnGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnGatewayRequest type CreateVpnGatewayInput struct { _ struct{} `type:"structure"` + // A private Autonomous System Number (ASN) for the Amazon side of a BGP session. + // If you're using a 16-bit ASN, it must be in the 64512 to 65534 range. If + // you're using a 32-bit ASN, it must be in the 4200000000 to 4294967294 range. + // + // Default: 64512 + AmazonSideAsn *int64 `type:"long"` + // The Availability Zone for the virtual private gateway. AvailabilityZone *string `type:"string"` @@ -26832,6 +30309,12 @@ func (s *CreateVpnGatewayInput) Validate() error { return nil } +// SetAmazonSideAsn sets the AmazonSideAsn field's value. +func (s *CreateVpnGatewayInput) SetAmazonSideAsn(v int64) *CreateVpnGatewayInput { + s.AmazonSideAsn = &v + return s +} + // SetAvailabilityZone sets the AvailabilityZone field's value. func (s *CreateVpnGatewayInput) SetAvailabilityZone(v string) *CreateVpnGatewayInput { s.AvailabilityZone = &v @@ -26851,7 +30334,6 @@ func (s *CreateVpnGatewayInput) SetType(v string) *CreateVpnGatewayInput { } // Contains the output of CreateVpnGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CreateVpnGatewayResult type CreateVpnGatewayOutput struct { _ struct{} `type:"structure"` @@ -26875,8 +30357,71 @@ func (s *CreateVpnGatewayOutput) SetVpnGateway(v *VpnGateway) *CreateVpnGatewayO return s } +// Describes the credit option for CPU usage of a T2 instance. +type CreditSpecification struct { + _ struct{} `type:"structure"` + + // The credit option for CPU usage of a T2 instance. + CpuCredits *string `locationName:"cpuCredits" type:"string"` +} + +// String returns the string representation +func (s CreditSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreditSpecification) GoString() string { + return s.String() +} + +// SetCpuCredits sets the CpuCredits field's value. +func (s *CreditSpecification) SetCpuCredits(v string) *CreditSpecification { + s.CpuCredits = &v + return s +} + +// The credit option for CPU usage of a T2 instance. +type CreditSpecificationRequest struct { + _ struct{} `type:"structure"` + + // The credit option for CPU usage of a T2 instance. Valid values are standard + // and unlimited. 
+ // + // CpuCredits is a required field + CpuCredits *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s CreditSpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreditSpecificationRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreditSpecificationRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreditSpecificationRequest"} + if s.CpuCredits == nil { + invalidParams.Add(request.NewErrParamRequired("CpuCredits")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCpuCredits sets the CpuCredits field's value. +func (s *CreditSpecificationRequest) SetCpuCredits(v string) *CreditSpecificationRequest { + s.CpuCredits = &v + return s +} + // Describes a customer gateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/CustomerGateway type CustomerGateway struct { _ struct{} `type:"structure"` @@ -26948,7 +30493,6 @@ func (s *CustomerGateway) SetType(v string) *CustomerGateway { } // Contains the parameters for DeleteCustomerGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteCustomerGatewayRequest type DeleteCustomerGatewayInput struct { _ struct{} `type:"structure"` @@ -26999,7 +30543,6 @@ func (s *DeleteCustomerGatewayInput) SetDryRun(v bool) *DeleteCustomerGatewayInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteCustomerGatewayOutput type DeleteCustomerGatewayOutput struct { _ struct{} `type:"structure"` } @@ -27015,7 +30558,6 @@ func (s DeleteCustomerGatewayOutput) GoString() string { } // Contains the parameters for DeleteDhcpOptions. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteDhcpOptionsRequest type DeleteDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -27066,7 +30608,6 @@ func (s *DeleteDhcpOptionsInput) SetDryRun(v bool) *DeleteDhcpOptionsInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteDhcpOptionsOutput type DeleteDhcpOptionsOutput struct { _ struct{} `type:"structure"` } @@ -27081,7 +30622,6 @@ func (s DeleteDhcpOptionsOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteEgressOnlyInternetGatewayRequest type DeleteEgressOnlyInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -27132,7 +30672,6 @@ func (s *DeleteEgressOnlyInternetGatewayInput) SetEgressOnlyInternetGatewayId(v return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteEgressOnlyInternetGatewayResult type DeleteEgressOnlyInternetGatewayOutput struct { _ struct{} `type:"structure"` @@ -27157,7 +30696,6 @@ func (s *DeleteEgressOnlyInternetGatewayOutput) SetReturnCode(v bool) *DeleteEgr } // Contains the parameters for DeleteFlowLogs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFlowLogsRequest type DeleteFlowLogsInput struct { _ struct{} `type:"structure"` @@ -27197,7 +30735,6 @@ func (s *DeleteFlowLogsInput) SetFlowLogIds(v []*string) *DeleteFlowLogsInput { } // Contains the output of DeleteFlowLogs. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteFlowLogsResult type DeleteFlowLogsOutput struct { _ struct{} `type:"structure"` @@ -27221,8 +30758,80 @@ func (s *DeleteFlowLogsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *DeleteFlo return s } +type DeleteFpgaImageInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the AFI. + // + // FpgaImageId is a required field + FpgaImageId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteFpgaImageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFpgaImageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteFpgaImageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteFpgaImageInput"} + if s.FpgaImageId == nil { + invalidParams.Add(request.NewErrParamRequired("FpgaImageId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DeleteFpgaImageInput) SetDryRun(v bool) *DeleteFpgaImageInput { + s.DryRun = &v + return s +} + +// SetFpgaImageId sets the FpgaImageId field's value. +func (s *DeleteFpgaImageInput) SetFpgaImageId(v string) *DeleteFpgaImageInput { + s.FpgaImageId = &v + return s +} + +type DeleteFpgaImageOutput struct { + _ struct{} `type:"structure"` + + // Is true if the request succeeds, and an error otherwise. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s DeleteFpgaImageOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteFpgaImageOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *DeleteFpgaImageOutput) SetReturn(v bool) *DeleteFpgaImageOutput { + s.Return = &v + return s +} + // Contains the parameters for DeleteInternetGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteInternetGatewayRequest type DeleteInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -27273,7 +30882,6 @@ func (s *DeleteInternetGatewayInput) SetInternetGatewayId(v string) *DeleteInter return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteInternetGatewayOutput type DeleteInternetGatewayOutput struct { _ struct{} `type:"structure"` } @@ -27289,7 +30897,6 @@ func (s DeleteInternetGatewayOutput) GoString() string { } // Contains the parameters for DeleteKeyPair. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteKeyPairRequest type DeleteKeyPairInput struct { _ struct{} `type:"structure"` @@ -27340,7 +30947,6 @@ func (s *DeleteKeyPairInput) SetKeyName(v string) *DeleteKeyPairInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteKeyPairOutput type DeleteKeyPairOutput struct { _ struct{} `type:"structure"` } @@ -27355,8 +30961,287 @@ func (s DeleteKeyPairOutput) GoString() string { return s.String() } +type DeleteLaunchTemplateInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateId *string `type:"string"` + + // The name of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateName *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s DeleteLaunchTemplateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLaunchTemplateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLaunchTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLaunchTemplateInput"} + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DeleteLaunchTemplateInput) SetDryRun(v bool) *DeleteLaunchTemplateInput { + s.DryRun = &v + return s +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *DeleteLaunchTemplateInput) SetLaunchTemplateId(v string) *DeleteLaunchTemplateInput { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *DeleteLaunchTemplateInput) SetLaunchTemplateName(v string) *DeleteLaunchTemplateInput { + s.LaunchTemplateName = &v + return s +} + +type DeleteLaunchTemplateOutput struct { + _ struct{} `type:"structure"` + + // Information about the launch template. + LaunchTemplate *LaunchTemplate `locationName:"launchTemplate" type:"structure"` +} + +// String returns the string representation +func (s DeleteLaunchTemplateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLaunchTemplateOutput) GoString() string { + return s.String() +} + +// SetLaunchTemplate sets the LaunchTemplate field's value. +func (s *DeleteLaunchTemplateOutput) SetLaunchTemplate(v *LaunchTemplate) *DeleteLaunchTemplateOutput { + s.LaunchTemplate = v + return s +} + +type DeleteLaunchTemplateVersionsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. 
Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateId *string `type:"string"` + + // The name of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateName *string `min:"3" type:"string"` + + // The version numbers of one or more launch template versions to delete. + // + // Versions is a required field + Versions []*string `locationName:"LaunchTemplateVersion" locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s DeleteLaunchTemplateVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLaunchTemplateVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLaunchTemplateVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLaunchTemplateVersionsInput"} + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + if s.Versions == nil { + invalidParams.Add(request.NewErrParamRequired("Versions")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DeleteLaunchTemplateVersionsInput) SetDryRun(v bool) *DeleteLaunchTemplateVersionsInput { + s.DryRun = &v + return s +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *DeleteLaunchTemplateVersionsInput) SetLaunchTemplateId(v string) *DeleteLaunchTemplateVersionsInput { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *DeleteLaunchTemplateVersionsInput) SetLaunchTemplateName(v string) *DeleteLaunchTemplateVersionsInput { + s.LaunchTemplateName = &v + return s +} + +// SetVersions sets the Versions field's value. +func (s *DeleteLaunchTemplateVersionsInput) SetVersions(v []*string) *DeleteLaunchTemplateVersionsInput { + s.Versions = v + return s +} + +type DeleteLaunchTemplateVersionsOutput struct { + _ struct{} `type:"structure"` + + // Information about the launch template versions that were successfully deleted. + SuccessfullyDeletedLaunchTemplateVersions []*DeleteLaunchTemplateVersionsResponseSuccessItem `locationName:"successfullyDeletedLaunchTemplateVersionSet" locationNameList:"item" type:"list"` + + // Information about the launch template versions that could not be deleted. + UnsuccessfullyDeletedLaunchTemplateVersions []*DeleteLaunchTemplateVersionsResponseErrorItem `locationName:"unsuccessfullyDeletedLaunchTemplateVersionSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DeleteLaunchTemplateVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLaunchTemplateVersionsOutput) GoString() string { + return s.String() +} + +// SetSuccessfullyDeletedLaunchTemplateVersions sets the SuccessfullyDeletedLaunchTemplateVersions field's value. 
+func (s *DeleteLaunchTemplateVersionsOutput) SetSuccessfullyDeletedLaunchTemplateVersions(v []*DeleteLaunchTemplateVersionsResponseSuccessItem) *DeleteLaunchTemplateVersionsOutput { + s.SuccessfullyDeletedLaunchTemplateVersions = v + return s +} + +// SetUnsuccessfullyDeletedLaunchTemplateVersions sets the UnsuccessfullyDeletedLaunchTemplateVersions field's value. +func (s *DeleteLaunchTemplateVersionsOutput) SetUnsuccessfullyDeletedLaunchTemplateVersions(v []*DeleteLaunchTemplateVersionsResponseErrorItem) *DeleteLaunchTemplateVersionsOutput { + s.UnsuccessfullyDeletedLaunchTemplateVersions = v + return s +} + +// Describes a launch template version that could not be deleted. +type DeleteLaunchTemplateVersionsResponseErrorItem struct { + _ struct{} `type:"structure"` + + // The ID of the launch template. + LaunchTemplateId *string `locationName:"launchTemplateId" type:"string"` + + // The name of the launch template. + LaunchTemplateName *string `locationName:"launchTemplateName" type:"string"` + + // Information about the error. + ResponseError *ResponseError `locationName:"responseError" type:"structure"` + + // The version number of the launch template. + VersionNumber *int64 `locationName:"versionNumber" type:"long"` +} + +// String returns the string representation +func (s DeleteLaunchTemplateVersionsResponseErrorItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLaunchTemplateVersionsResponseErrorItem) GoString() string { + return s.String() +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *DeleteLaunchTemplateVersionsResponseErrorItem) SetLaunchTemplateId(v string) *DeleteLaunchTemplateVersionsResponseErrorItem { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *DeleteLaunchTemplateVersionsResponseErrorItem) SetLaunchTemplateName(v string) *DeleteLaunchTemplateVersionsResponseErrorItem { + s.LaunchTemplateName = &v + return s +} + +// SetResponseError sets the ResponseError field's value. +func (s *DeleteLaunchTemplateVersionsResponseErrorItem) SetResponseError(v *ResponseError) *DeleteLaunchTemplateVersionsResponseErrorItem { + s.ResponseError = v + return s +} + +// SetVersionNumber sets the VersionNumber field's value. +func (s *DeleteLaunchTemplateVersionsResponseErrorItem) SetVersionNumber(v int64) *DeleteLaunchTemplateVersionsResponseErrorItem { + s.VersionNumber = &v + return s +} + +// Describes a launch template version that was successfully deleted. +type DeleteLaunchTemplateVersionsResponseSuccessItem struct { + _ struct{} `type:"structure"` + + // The ID of the launch template. + LaunchTemplateId *string `locationName:"launchTemplateId" type:"string"` + + // The name of the launch template. + LaunchTemplateName *string `locationName:"launchTemplateName" type:"string"` + + // The version number of the launch template. + VersionNumber *int64 `locationName:"versionNumber" type:"long"` +} + +// String returns the string representation +func (s DeleteLaunchTemplateVersionsResponseSuccessItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLaunchTemplateVersionsResponseSuccessItem) GoString() string { + return s.String() +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. 
+func (s *DeleteLaunchTemplateVersionsResponseSuccessItem) SetLaunchTemplateId(v string) *DeleteLaunchTemplateVersionsResponseSuccessItem { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *DeleteLaunchTemplateVersionsResponseSuccessItem) SetLaunchTemplateName(v string) *DeleteLaunchTemplateVersionsResponseSuccessItem { + s.LaunchTemplateName = &v + return s +} + +// SetVersionNumber sets the VersionNumber field's value. +func (s *DeleteLaunchTemplateVersionsResponseSuccessItem) SetVersionNumber(v int64) *DeleteLaunchTemplateVersionsResponseSuccessItem { + s.VersionNumber = &v + return s +} + // Contains the parameters for DeleteNatGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNatGatewayRequest type DeleteNatGatewayInput struct { _ struct{} `type:"structure"` @@ -27396,7 +31281,6 @@ func (s *DeleteNatGatewayInput) SetNatGatewayId(v string) *DeleteNatGatewayInput } // Contains the output of DeleteNatGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNatGatewayResult type DeleteNatGatewayOutput struct { _ struct{} `type:"structure"` @@ -27421,7 +31305,6 @@ func (s *DeleteNatGatewayOutput) SetNatGatewayId(v string) *DeleteNatGatewayOutp } // Contains the parameters for DeleteNetworkAclEntry. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclEntryRequest type DeleteNetworkAclEntryInput struct { _ struct{} `type:"structure"` @@ -27500,7 +31383,6 @@ func (s *DeleteNetworkAclEntryInput) SetRuleNumber(v int64) *DeleteNetworkAclEnt return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclEntryOutput type DeleteNetworkAclEntryOutput struct { _ struct{} `type:"structure"` } @@ -27516,7 +31398,6 @@ func (s DeleteNetworkAclEntryOutput) GoString() string { } // Contains the parameters for DeleteNetworkAcl. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclRequest type DeleteNetworkAclInput struct { _ struct{} `type:"structure"` @@ -27567,7 +31448,6 @@ func (s *DeleteNetworkAclInput) SetNetworkAclId(v string) *DeleteNetworkAclInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkAclOutput type DeleteNetworkAclOutput struct { _ struct{} `type:"structure"` } @@ -27583,7 +31463,6 @@ func (s DeleteNetworkAclOutput) GoString() string { } // Contains the parameters for DeleteNetworkInterface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfaceRequest type DeleteNetworkInterfaceInput struct { _ struct{} `type:"structure"` @@ -27634,7 +31513,6 @@ func (s *DeleteNetworkInterfaceInput) SetNetworkInterfaceId(v string) *DeleteNet return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfaceOutput type DeleteNetworkInterfaceOutput struct { _ struct{} `type:"structure"` } @@ -27650,7 +31528,6 @@ func (s DeleteNetworkInterfaceOutput) GoString() string { } // Contains the parameters for DeleteNetworkInterfacePermission. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfacePermissionRequest type DeleteNetworkInterfacePermissionInput struct { _ struct{} `type:"structure"` @@ -27712,7 +31589,6 @@ func (s *DeleteNetworkInterfacePermissionInput) SetNetworkInterfacePermissionId( } // Contains the output for DeleteNetworkInterfacePermission. 
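A sketch of how DeleteLaunchTemplateVersions and its per-version success/error items introduced above might be consumed; the template name and version numbers are illustrative, and the client is assumed to be configured as in the earlier example:

```
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// deleteTemplateVersions deletes two versions of a launch template and reports
// which versions were and were not deleted.
func deleteTemplateVersions(svc *ec2.EC2) error {
	out, err := svc.DeleteLaunchTemplateVersions(&ec2.DeleteLaunchTemplateVersionsInput{
		LaunchTemplateName: aws.String("example-template"),
		Versions:           aws.StringSlice([]string{"2", "3"}), // Versions is the required field
	})
	if err != nil {
		return err
	}
	for _, ok := range out.SuccessfullyDeletedLaunchTemplateVersions {
		fmt.Println("deleted version", aws.Int64Value(ok.VersionNumber))
	}
	for _, bad := range out.UnsuccessfullyDeletedLaunchTemplateVersions {
		fmt.Println("failed version", aws.Int64Value(bad.VersionNumber))
	}
	return nil
}
```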
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteNetworkInterfacePermissionResult type DeleteNetworkInterfacePermissionOutput struct { _ struct{} `type:"structure"` @@ -27737,7 +31613,6 @@ func (s *DeleteNetworkInterfacePermissionOutput) SetReturn(v bool) *DeleteNetwor } // Contains the parameters for DeletePlacementGroup. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeletePlacementGroupRequest type DeletePlacementGroupInput struct { _ struct{} `type:"structure"` @@ -27788,7 +31663,6 @@ func (s *DeletePlacementGroupInput) SetGroupName(v string) *DeletePlacementGroup return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeletePlacementGroupOutput type DeletePlacementGroupOutput struct { _ struct{} `type:"structure"` } @@ -27804,7 +31678,6 @@ func (s DeletePlacementGroupOutput) GoString() string { } // Contains the parameters for DeleteRoute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteRequest type DeleteRouteInput struct { _ struct{} `type:"structure"` @@ -27875,7 +31748,6 @@ func (s *DeleteRouteInput) SetRouteTableId(v string) *DeleteRouteInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteOutput type DeleteRouteOutput struct { _ struct{} `type:"structure"` } @@ -27891,7 +31763,6 @@ func (s DeleteRouteOutput) GoString() string { } // Contains the parameters for DeleteRouteTable. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteTableRequest type DeleteRouteTableInput struct { _ struct{} `type:"structure"` @@ -27942,7 +31813,6 @@ func (s *DeleteRouteTableInput) SetRouteTableId(v string) *DeleteRouteTableInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteRouteTableOutput type DeleteRouteTableOutput struct { _ struct{} `type:"structure"` } @@ -27958,7 +31828,6 @@ func (s DeleteRouteTableOutput) GoString() string { } // Contains the parameters for DeleteSecurityGroup. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSecurityGroupRequest type DeleteSecurityGroupInput struct { _ struct{} `type:"structure"` @@ -28004,7 +31873,6 @@ func (s *DeleteSecurityGroupInput) SetGroupName(v string) *DeleteSecurityGroupIn return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSecurityGroupOutput type DeleteSecurityGroupOutput struct { _ struct{} `type:"structure"` } @@ -28020,7 +31888,6 @@ func (s DeleteSecurityGroupOutput) GoString() string { } // Contains the parameters for DeleteSnapshot. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSnapshotRequest type DeleteSnapshotInput struct { _ struct{} `type:"structure"` @@ -28071,7 +31938,6 @@ func (s *DeleteSnapshotInput) SetSnapshotId(v string) *DeleteSnapshotInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSnapshotOutput type DeleteSnapshotOutput struct { _ struct{} `type:"structure"` } @@ -28087,7 +31953,6 @@ func (s DeleteSnapshotOutput) GoString() string { } // Contains the parameters for DeleteSpotDatafeedSubscription. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSpotDatafeedSubscriptionRequest type DeleteSpotDatafeedSubscriptionInput struct { _ struct{} `type:"structure"` @@ -28114,7 +31979,6 @@ func (s *DeleteSpotDatafeedSubscriptionInput) SetDryRun(v bool) *DeleteSpotDataf return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSpotDatafeedSubscriptionOutput type DeleteSpotDatafeedSubscriptionOutput struct { _ struct{} `type:"structure"` } @@ -28130,7 +31994,6 @@ func (s DeleteSpotDatafeedSubscriptionOutput) GoString() string { } // Contains the parameters for DeleteSubnet. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSubnetRequest type DeleteSubnetInput struct { _ struct{} `type:"structure"` @@ -28181,7 +32044,6 @@ func (s *DeleteSubnetInput) SetSubnetId(v string) *DeleteSubnetInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteSubnetOutput type DeleteSubnetOutput struct { _ struct{} `type:"structure"` } @@ -28197,7 +32059,6 @@ func (s DeleteSubnetOutput) GoString() string { } // Contains the parameters for DeleteTags. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteTagsRequest type DeleteTagsInput struct { _ struct{} `type:"structure"` @@ -28207,15 +32068,17 @@ type DeleteTagsInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The ID of the resource. For example, ami-1a2b3c4d. You can specify more than - // one resource ID. + // The IDs of one or more resources. // // Resources is a required field Resources []*string `locationName:"resourceId" type:"list" required:"true"` - // One or more tags to delete. If you omit the value parameter, we delete the - // tag regardless of its value. If you specify this parameter with an empty - // string as the value, we delete the key only if its value is an empty string. + // One or more tags to delete. If you omit this parameter, we delete all tags + // for the specified resources. Specify a tag key and an optional tag value + // to delete specific tags. If you specify a tag key without a tag value, we + // delete any tag with this key regardless of its value. If you specify a tag + // key with an empty string as the tag value, we delete the tag only if its + // value is an empty string. Tags []*Tag `locationName:"tag" locationNameList:"item" type:"list"` } @@ -28260,7 +32123,6 @@ func (s *DeleteTagsInput) SetTags(v []*Tag) *DeleteTagsInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteTagsOutput type DeleteTagsOutput struct { _ struct{} `type:"structure"` } @@ -28276,7 +32138,6 @@ func (s DeleteTagsOutput) GoString() string { } // Contains the parameters for DeleteVolume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVolumeRequest type DeleteVolumeInput struct { _ struct{} `type:"structure"` @@ -28327,7 +32188,6 @@ func (s *DeleteVolumeInput) SetVolumeId(v string) *DeleteVolumeInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVolumeOutput type DeleteVolumeOutput struct { _ struct{} `type:"structure"` } @@ -28342,8 +32202,153 @@ func (s DeleteVolumeOutput) GoString() string { return s.String() } +type DeleteVpcEndpointConnectionNotificationsInput struct { + _ struct{} `type:"structure"` + + // One or more notification IDs. 
+ // + // ConnectionNotificationIds is a required field + ConnectionNotificationIds []*string `locationName:"ConnectionNotificationId" locationNameList:"item" type:"list" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s DeleteVpcEndpointConnectionNotificationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVpcEndpointConnectionNotificationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteVpcEndpointConnectionNotificationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteVpcEndpointConnectionNotificationsInput"} + if s.ConnectionNotificationIds == nil { + invalidParams.Add(request.NewErrParamRequired("ConnectionNotificationIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConnectionNotificationIds sets the ConnectionNotificationIds field's value. +func (s *DeleteVpcEndpointConnectionNotificationsInput) SetConnectionNotificationIds(v []*string) *DeleteVpcEndpointConnectionNotificationsInput { + s.ConnectionNotificationIds = v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *DeleteVpcEndpointConnectionNotificationsInput) SetDryRun(v bool) *DeleteVpcEndpointConnectionNotificationsInput { + s.DryRun = &v + return s +} + +type DeleteVpcEndpointConnectionNotificationsOutput struct { + _ struct{} `type:"structure"` + + // Information about the notifications that could not be deleted successfully. + Unsuccessful []*UnsuccessfulItem `locationName:"unsuccessful" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DeleteVpcEndpointConnectionNotificationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVpcEndpointConnectionNotificationsOutput) GoString() string { + return s.String() +} + +// SetUnsuccessful sets the Unsuccessful field's value. +func (s *DeleteVpcEndpointConnectionNotificationsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *DeleteVpcEndpointConnectionNotificationsOutput { + s.Unsuccessful = v + return s +} + +type DeleteVpcEndpointServiceConfigurationsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The IDs of one or more services. + // + // ServiceIds is a required field + ServiceIds []*string `locationName:"ServiceId" locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s DeleteVpcEndpointServiceConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVpcEndpointServiceConfigurationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteVpcEndpointServiceConfigurationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteVpcEndpointServiceConfigurationsInput"} + if s.ServiceIds == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DeleteVpcEndpointServiceConfigurationsInput) SetDryRun(v bool) *DeleteVpcEndpointServiceConfigurationsInput { + s.DryRun = &v + return s +} + +// SetServiceIds sets the ServiceIds field's value. +func (s *DeleteVpcEndpointServiceConfigurationsInput) SetServiceIds(v []*string) *DeleteVpcEndpointServiceConfigurationsInput { + s.ServiceIds = v + return s +} + +type DeleteVpcEndpointServiceConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // Information about the service configurations that were not deleted, if applicable. + Unsuccessful []*UnsuccessfulItem `locationName:"unsuccessful" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DeleteVpcEndpointServiceConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteVpcEndpointServiceConfigurationsOutput) GoString() string { + return s.String() +} + +// SetUnsuccessful sets the Unsuccessful field's value. +func (s *DeleteVpcEndpointServiceConfigurationsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *DeleteVpcEndpointServiceConfigurationsOutput { + s.Unsuccessful = v + return s +} + // Contains the parameters for DeleteVpcEndpoints. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpointsRequest type DeleteVpcEndpointsInput struct { _ struct{} `type:"structure"` @@ -28353,7 +32358,7 @@ type DeleteVpcEndpointsInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // One or more endpoint IDs. + // One or more VPC endpoint IDs. // // VpcEndpointIds is a required field VpcEndpointIds []*string `locationName:"VpcEndpointId" locationNameList:"item" type:"list" required:"true"` @@ -28395,11 +32400,10 @@ func (s *DeleteVpcEndpointsInput) SetVpcEndpointIds(v []*string) *DeleteVpcEndpo } // Contains the output of DeleteVpcEndpoints. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcEndpointsResult type DeleteVpcEndpointsOutput struct { _ struct{} `type:"structure"` - // Information about the endpoints that were not successfully deleted. + // Information about the VPC endpoints that were not successfully deleted. Unsuccessful []*UnsuccessfulItem `locationName:"unsuccessful" locationNameList:"item" type:"list"` } @@ -28420,7 +32424,6 @@ func (s *DeleteVpcEndpointsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *Delet } // Contains the parameters for DeleteVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcRequest type DeleteVpcInput struct { _ struct{} `type:"structure"` @@ -28471,7 +32474,6 @@ func (s *DeleteVpcInput) SetVpcId(v string) *DeleteVpcInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcOutput type DeleteVpcOutput struct { _ struct{} `type:"structure"` } @@ -28487,7 +32489,6 @@ func (s DeleteVpcOutput) GoString() string { } // Contains the parameters for DeleteVpcPeeringConnection. 
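The hunk above adds the DeleteVpcEndpointConnectionNotifications and DeleteVpcEndpointServiceConfigurations shapes to the vendored EC2 API. As a hedged illustration only (not part of the vendored diff), a minimal caller might look like the sketch below; the session setup is the usual aws-sdk-go v1 boilerplate and the service ID is a placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Standard aws-sdk-go v1 client setup; region and credentials come from the environment.
	svc := ec2.New(session.Must(session.NewSession()))

	// ServiceIds is the only required field; the ID below is a placeholder.
	out, err := svc.DeleteVpcEndpointServiceConfigurations(&ec2.DeleteVpcEndpointServiceConfigurationsInput{
		ServiceIds: aws.StringSlice([]string{"vpce-svc-0123456789abcdef0"}),
	})
	if err != nil {
		log.Fatal(err)
	}
	// Unsuccessful lists any service configurations that could not be deleted.
	for _, item := range out.Unsuccessful {
		fmt.Println(item)
	}
}
```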
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcPeeringConnectionRequest type DeleteVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -28539,7 +32540,6 @@ func (s *DeleteVpcPeeringConnectionInput) SetVpcPeeringConnectionId(v string) *D } // Contains the output of DeleteVpcPeeringConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpcPeeringConnectionResult type DeleteVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -28564,7 +32564,6 @@ func (s *DeleteVpcPeeringConnectionOutput) SetReturn(v bool) *DeleteVpcPeeringCo } // Contains the parameters for DeleteVpnConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionRequest type DeleteVpnConnectionInput struct { _ struct{} `type:"structure"` @@ -28615,7 +32614,6 @@ func (s *DeleteVpnConnectionInput) SetVpnConnectionId(v string) *DeleteVpnConnec return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionOutput type DeleteVpnConnectionOutput struct { _ struct{} `type:"structure"` } @@ -28631,7 +32629,6 @@ func (s DeleteVpnConnectionOutput) GoString() string { } // Contains the parameters for DeleteVpnConnectionRoute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionRouteRequest type DeleteVpnConnectionRouteInput struct { _ struct{} `type:"structure"` @@ -28684,7 +32681,6 @@ func (s *DeleteVpnConnectionRouteInput) SetVpnConnectionId(v string) *DeleteVpnC return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnConnectionRouteOutput type DeleteVpnConnectionRouteOutput struct { _ struct{} `type:"structure"` } @@ -28700,7 +32696,6 @@ func (s DeleteVpnConnectionRouteOutput) GoString() string { } // Contains the parameters for DeleteVpnGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnGatewayRequest type DeleteVpnGatewayInput struct { _ struct{} `type:"structure"` @@ -28751,7 +32746,6 @@ func (s *DeleteVpnGatewayInput) SetVpnGatewayId(v string) *DeleteVpnGatewayInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeleteVpnGatewayOutput type DeleteVpnGatewayOutput struct { _ struct{} `type:"structure"` } @@ -28767,7 +32761,6 @@ func (s DeleteVpnGatewayOutput) GoString() string { } // Contains the parameters for DeregisterImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeregisterImageRequest type DeregisterImageInput struct { _ struct{} `type:"structure"` @@ -28818,7 +32811,6 @@ func (s *DeregisterImageInput) SetImageId(v string) *DeregisterImageInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DeregisterImageOutput type DeregisterImageOutput struct { _ struct{} `type:"structure"` } @@ -28834,7 +32826,6 @@ func (s DeregisterImageOutput) GoString() string { } // Contains the parameters for DescribeAccountAttributes. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAccountAttributesRequest type DescribeAccountAttributesInput struct { _ struct{} `type:"structure"` @@ -28871,7 +32862,6 @@ func (s *DescribeAccountAttributesInput) SetDryRun(v bool) *DescribeAccountAttri } // Contains the output of DescribeAccountAttributes. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAccountAttributesResult type DescribeAccountAttributesOutput struct { _ struct{} `type:"structure"` @@ -28896,7 +32886,6 @@ func (s *DescribeAccountAttributesOutput) SetAccountAttributes(v []*AccountAttri } // Contains the parameters for DescribeAddresses. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAddressesRequest type DescribeAddressesInput struct { _ struct{} `type:"structure"` @@ -28932,6 +32921,18 @@ type DescribeAddressesInput struct { // the Elastic IP address. // // * public-ip - The Elastic IP address. + // + // * tag:key=value - The key/value combination of a tag assigned to the resource. + // Specify the key of the tag in the filter name and the value of the tag + // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose + // for the filter name and X for the filter value. + // + // * tag-key - The key of a tag assigned to the resource. This filter is + // independent of the tag-value filter. For example, if you use both the + // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources + // assigned both the tag key Purpose (regardless of what the tag's value + // is), and the tag value X (regardless of the tag's key). If you want to + // list only resources where Purpose is X, see the tag:key=value filter. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` // [EC2-Classic] One or more Elastic IP addresses. @@ -28975,7 +32976,6 @@ func (s *DescribeAddressesInput) SetPublicIps(v []*string) *DescribeAddressesInp } // Contains the output of DescribeAddresses. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAddressesResult type DescribeAddressesOutput struct { _ struct{} `type:"structure"` @@ -29000,7 +33000,6 @@ func (s *DescribeAddressesOutput) SetAddresses(v []*Address) *DescribeAddressesO } // Contains the parameters for DescribeAvailabilityZones. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAvailabilityZonesRequest type DescribeAvailabilityZonesInput struct { _ struct{} `type:"structure"` @@ -29056,7 +33055,6 @@ func (s *DescribeAvailabilityZonesInput) SetZoneNames(v []*string) *DescribeAvai } // Contains the output of DescribeAvailabiltyZones. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeAvailabilityZonesResult type DescribeAvailabilityZonesOutput struct { _ struct{} `type:"structure"` @@ -29081,7 +33079,6 @@ func (s *DescribeAvailabilityZonesOutput) SetAvailabilityZones(v []*Availability } // Contains the parameters for DescribeBundleTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeBundleTasksRequest type DescribeBundleTasksInput struct { _ struct{} `type:"structure"` @@ -29151,7 +33148,6 @@ func (s *DescribeBundleTasksInput) SetFilters(v []*Filter) *DescribeBundleTasksI } // Contains the output of DescribeBundleTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeBundleTasksResult type DescribeBundleTasksOutput struct { _ struct{} `type:"structure"` @@ -29176,7 +33172,6 @@ func (s *DescribeBundleTasksOutput) SetBundleTasks(v []*BundleTask) *DescribeBun } // Contains the parameters for DescribeClassicLinkInstances. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeClassicLinkInstancesRequest type DescribeClassicLinkInstancesInput struct { _ struct{} `type:"structure"` @@ -29267,7 +33262,6 @@ func (s *DescribeClassicLinkInstancesInput) SetNextToken(v string) *DescribeClas } // Contains the output of DescribeClassicLinkInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeClassicLinkInstancesResult type DescribeClassicLinkInstancesOutput struct { _ struct{} `type:"structure"` @@ -29302,7 +33296,6 @@ func (s *DescribeClassicLinkInstancesOutput) SetNextToken(v string) *DescribeCla } // Contains the parameters for DescribeConversionTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeConversionTasksRequest type DescribeConversionTasksInput struct { _ struct{} `type:"structure"` @@ -29339,7 +33332,6 @@ func (s *DescribeConversionTasksInput) SetDryRun(v bool) *DescribeConversionTask } // Contains the output for DescribeConversionTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeConversionTasksResult type DescribeConversionTasksOutput struct { _ struct{} `type:"structure"` @@ -29364,7 +33356,6 @@ func (s *DescribeConversionTasksOutput) SetConversionTasks(v []*ConversionTask) } // Contains the parameters for DescribeCustomerGateways. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCustomerGatewaysRequest type DescribeCustomerGatewaysInput struct { _ struct{} `type:"structure"` @@ -29442,7 +33433,6 @@ func (s *DescribeCustomerGatewaysInput) SetFilters(v []*Filter) *DescribeCustome } // Contains the output of DescribeCustomerGateways. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeCustomerGatewaysResult type DescribeCustomerGatewaysOutput struct { _ struct{} `type:"structure"` @@ -29467,7 +33457,6 @@ func (s *DescribeCustomerGatewaysOutput) SetCustomerGateways(v []*CustomerGatewa } // Contains the parameters for DescribeDhcpOptions. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeDhcpOptionsRequest type DescribeDhcpOptionsInput struct { _ struct{} `type:"structure"` @@ -29537,7 +33526,6 @@ func (s *DescribeDhcpOptionsInput) SetFilters(v []*Filter) *DescribeDhcpOptionsI } // Contains the output of DescribeDhcpOptions. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeDhcpOptionsResult type DescribeDhcpOptionsOutput struct { _ struct{} `type:"structure"` @@ -29561,7 +33549,6 @@ func (s *DescribeDhcpOptionsOutput) SetDhcpOptions(v []*DhcpOptions) *DescribeDh return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeEgressOnlyInternetGatewaysRequest type DescribeEgressOnlyInternetGatewaysInput struct { _ struct{} `type:"structure"` @@ -29618,7 +33605,6 @@ func (s *DescribeEgressOnlyInternetGatewaysInput) SetNextToken(v string) *Descri return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeEgressOnlyInternetGatewaysResult type DescribeEgressOnlyInternetGatewaysOutput struct { _ struct{} `type:"structure"` @@ -29651,7 +33637,6 @@ func (s *DescribeEgressOnlyInternetGatewaysOutput) SetNextToken(v string) *Descr return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeElasticGpusRequest type DescribeElasticGpusInput struct { _ struct{} `type:"structure"` @@ -29726,12 +33711,11 @@ func (s *DescribeElasticGpusInput) SetNextToken(v string) *DescribeElasticGpusIn return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeElasticGpusResult type DescribeElasticGpusOutput struct { _ struct{} `type:"structure"` // Information about the Elastic GPUs. - ElasticGpuSet []*ElasticGpus `locationName:"elasticGpuSet" type:"list"` + ElasticGpuSet []*ElasticGpus `locationName:"elasticGpuSet" locationNameList:"item" type:"list"` // The total number of items to return. If the total number of items available // is more than the value specified in max-items then a Next-Token will be provided @@ -29772,7 +33756,6 @@ func (s *DescribeElasticGpusOutput) SetNextToken(v string) *DescribeElasticGpusO } // Contains the parameters for DescribeExportTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeExportTasksRequest type DescribeExportTasksInput struct { _ struct{} `type:"structure"` @@ -29797,7 +33780,6 @@ func (s *DescribeExportTasksInput) SetExportTaskIds(v []*string) *DescribeExport } // Contains the output for DescribeExportTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeExportTasksResult type DescribeExportTasksOutput struct { _ struct{} `type:"structure"` @@ -29822,7 +33804,6 @@ func (s *DescribeExportTasksOutput) SetExportTasks(v []*ExportTask) *DescribeExp } // Contains the parameters for DescribeFlowLogs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFlowLogsRequest type DescribeFlowLogsInput struct { _ struct{} `type:"structure"` @@ -29888,7 +33869,6 @@ func (s *DescribeFlowLogsInput) SetNextToken(v string) *DescribeFlowLogsInput { } // Contains the output of DescribeFlowLogs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFlowLogsResult type DescribeFlowLogsOutput struct { _ struct{} `type:"structure"` @@ -29922,7 +33902,93 @@ func (s *DescribeFlowLogsOutput) SetNextToken(v string) *DescribeFlowLogsOutput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImagesRequest +type DescribeFpgaImageAttributeInput struct { + _ struct{} `type:"structure"` + + // The AFI attribute. 
+ // + // Attribute is a required field + Attribute *string `type:"string" required:"true" enum:"FpgaImageAttributeName"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the AFI. + // + // FpgaImageId is a required field + FpgaImageId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeFpgaImageAttributeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFpgaImageAttributeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeFpgaImageAttributeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeFpgaImageAttributeInput"} + if s.Attribute == nil { + invalidParams.Add(request.NewErrParamRequired("Attribute")) + } + if s.FpgaImageId == nil { + invalidParams.Add(request.NewErrParamRequired("FpgaImageId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttribute sets the Attribute field's value. +func (s *DescribeFpgaImageAttributeInput) SetAttribute(v string) *DescribeFpgaImageAttributeInput { + s.Attribute = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeFpgaImageAttributeInput) SetDryRun(v bool) *DescribeFpgaImageAttributeInput { + s.DryRun = &v + return s +} + +// SetFpgaImageId sets the FpgaImageId field's value. +func (s *DescribeFpgaImageAttributeInput) SetFpgaImageId(v string) *DescribeFpgaImageAttributeInput { + s.FpgaImageId = &v + return s +} + +type DescribeFpgaImageAttributeOutput struct { + _ struct{} `type:"structure"` + + // Information about the attribute. + FpgaImageAttribute *FpgaImageAttribute `locationName:"fpgaImageAttribute" type:"structure"` +} + +// String returns the string representation +func (s DescribeFpgaImageAttributeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeFpgaImageAttributeOutput) GoString() string { + return s.String() +} + +// SetFpgaImageAttribute sets the FpgaImageAttribute field's value. 
+func (s *DescribeFpgaImageAttributeOutput) SetFpgaImageAttribute(v *FpgaImageAttribute) *DescribeFpgaImageAttributeOutput { + s.FpgaImageAttribute = v + return s +} + type DescribeFpgaImagesInput struct { _ struct{} `type:"structure"` @@ -30046,7 +34112,6 @@ func (s *DescribeFpgaImagesInput) SetOwners(v []*string) *DescribeFpgaImagesInpu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeFpgaImagesResult type DescribeFpgaImagesOutput struct { _ struct{} `type:"structure"` @@ -30080,7 +34145,6 @@ func (s *DescribeFpgaImagesOutput) SetNextToken(v string) *DescribeFpgaImagesOut return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationOfferingsRequest type DescribeHostReservationOfferingsInput struct { _ struct{} `type:"structure"` @@ -30164,7 +34228,6 @@ func (s *DescribeHostReservationOfferingsInput) SetOfferingId(v string) *Describ return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationOfferingsResult type DescribeHostReservationOfferingsOutput struct { _ struct{} `type:"structure"` @@ -30173,7 +34236,7 @@ type DescribeHostReservationOfferingsOutput struct { NextToken *string `locationName:"nextToken" type:"string"` // Information about the offerings. - OfferingSet []*HostOffering `locationName:"offeringSet" type:"list"` + OfferingSet []*HostOffering `locationName:"offeringSet" locationNameList:"item" type:"list"` } // String returns the string representation @@ -30198,7 +34261,6 @@ func (s *DescribeHostReservationOfferingsOutput) SetOfferingSet(v []*HostOfferin return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationsRequest type DescribeHostReservationsInput struct { _ struct{} `type:"structure"` @@ -30259,12 +34321,11 @@ func (s *DescribeHostReservationsInput) SetNextToken(v string) *DescribeHostRese return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostReservationsResult type DescribeHostReservationsOutput struct { _ struct{} `type:"structure"` // Details about the reservation's configuration. - HostReservationSet []*HostReservation `locationName:"hostReservationSet" type:"list"` + HostReservationSet []*HostReservation `locationName:"hostReservationSet" locationNameList:"item" type:"list"` // The token to use to retrieve the next page of results. This value is null // when there are no more results to return. @@ -30294,7 +34355,6 @@ func (s *DescribeHostReservationsOutput) SetNextToken(v string) *DescribeHostRes } // Contains the parameters for DescribeHosts. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostsRequest type DescribeHostsInput struct { _ struct{} `type:"structure"` @@ -30366,7 +34426,6 @@ func (s *DescribeHostsInput) SetNextToken(v string) *DescribeHostsInput { } // Contains the output of DescribeHosts. 
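The DescribeFpgaImageAttribute input/output shapes added just above could be exercised roughly as follows. This is an illustrative sketch, not vendored code: the AFI ID is a placeholder, and the attribute string is assumed to be one of the FpgaImageAttributeName enum values (description, name, loadPermission, productCodes).

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Both fields are required; the AFI ID is a placeholder and "loadPermission"
	// is assumed to be a valid FpgaImageAttributeName value.
	out, err := svc.DescribeFpgaImageAttribute(&ec2.DescribeFpgaImageAttributeInput{
		FpgaImageId: aws.String("afi-0123456789abcdef0"),
		Attribute:   aws.String("loadPermission"),
	})
	if err != nil {
		log.Fatal(err)
	}
	// FpgaImageAttribute holds the requested attribute information.
	fmt.Println(out.FpgaImageAttribute)
}
```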
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeHostsResult type DescribeHostsOutput struct { _ struct{} `type:"structure"` @@ -30400,7 +34459,6 @@ func (s *DescribeHostsOutput) SetNextToken(v string) *DescribeHostsOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIamInstanceProfileAssociationsRequest type DescribeIamInstanceProfileAssociationsInput struct { _ struct{} `type:"structure"` @@ -30473,7 +34531,6 @@ func (s *DescribeIamInstanceProfileAssociationsInput) SetNextToken(v string) *De return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIamInstanceProfileAssociationsResult type DescribeIamInstanceProfileAssociationsOutput struct { _ struct{} `type:"structure"` @@ -30508,7 +34565,6 @@ func (s *DescribeIamInstanceProfileAssociationsOutput) SetNextToken(v string) *D } // Contains the parameters for DescribeIdFormat. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdFormatRequest type DescribeIdFormatInput struct { _ struct{} `type:"structure"` @@ -30533,7 +34589,6 @@ func (s *DescribeIdFormatInput) SetResource(v string) *DescribeIdFormatInput { } // Contains the output of DescribeIdFormat. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdFormatResult type DescribeIdFormatOutput struct { _ struct{} `type:"structure"` @@ -30558,7 +34613,6 @@ func (s *DescribeIdFormatOutput) SetStatuses(v []*IdFormat) *DescribeIdFormatOut } // Contains the parameters for DescribeIdentityIdFormat. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdentityIdFormatRequest type DescribeIdentityIdFormatInput struct { _ struct{} `type:"structure"` @@ -30608,7 +34662,6 @@ func (s *DescribeIdentityIdFormatInput) SetResource(v string) *DescribeIdentityI } // Contains the output of DescribeIdentityIdFormat. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeIdentityIdFormatResult type DescribeIdentityIdFormatOutput struct { _ struct{} `type:"structure"` @@ -30633,7 +34686,6 @@ func (s *DescribeIdentityIdFormatOutput) SetStatuses(v []*IdFormat) *DescribeIde } // Contains the parameters for DescribeImageAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImageAttributeRequest type DescribeImageAttributeInput struct { _ struct{} `type:"structure"` @@ -30703,7 +34755,6 @@ func (s *DescribeImageAttributeInput) SetImageId(v string) *DescribeImageAttribu } // Describes an image attribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImageAttribute type DescribeImageAttributeOutput struct { _ struct{} `type:"structure"` @@ -30792,7 +34843,6 @@ func (s *DescribeImageAttributeOutput) SetSriovNetSupport(v *AttributeValue) *De } // Contains the parameters for DescribeImages. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImagesRequest type DescribeImagesInput struct { _ struct{} `type:"structure"` @@ -30813,8 +34863,8 @@ type DescribeImagesInput struct { // * block-device-mapping.delete-on-termination - A Boolean value that indicates // whether the Amazon EBS volume is deleted on instance termination. // - // * block-device-mapping.device-name - The device name for the EBS volume - // (for example, /dev/sdh). + // * block-device-mapping.device-name - The device name specified in the + // block device mapping (for example, /dev/sdh or xvdh). 
// // * block-device-mapping.snapshot-id - The ID of the snapshot used for the // EBS volume. @@ -30858,7 +34908,7 @@ type DescribeImagesInput struct { // // * ramdisk-id - The RAM disk ID. // - // * root-device-name - The name of the root device volume (for example, + // * root-device-name - The device name of the root device volume (for example, // /dev/sda1). // // * root-device-type - The type of the root device volume (ebs | instance-store). @@ -30869,6 +34919,9 @@ type DescribeImagesInput struct { // // * state-reason-message - The message for the state change. // + // * sriov-net-support - A value of simple indicates that enhanced networking + // with the Intel 82599 VF interface is enabled. + // // * tag:key=value - The key/value combination of a tag assigned to the resource. // Specify the key of the tag in the filter name and the value of the tag // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose @@ -30941,7 +34994,6 @@ func (s *DescribeImagesInput) SetOwners(v []*string) *DescribeImagesInput { } // Contains the output of DescribeImages. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImagesResult type DescribeImagesOutput struct { _ struct{} `type:"structure"` @@ -30966,7 +35018,6 @@ func (s *DescribeImagesOutput) SetImages(v []*Image) *DescribeImagesOutput { } // Contains the parameters for DescribeImportImageTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportImageTasksRequest type DescribeImportImageTasksInput struct { _ struct{} `type:"structure"` @@ -31032,7 +35083,6 @@ func (s *DescribeImportImageTasksInput) SetNextToken(v string) *DescribeImportIm } // Contains the output for DescribeImportImageTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportImageTasksResult type DescribeImportImageTasksOutput struct { _ struct{} `type:"structure"` @@ -31068,7 +35118,6 @@ func (s *DescribeImportImageTasksOutput) SetNextToken(v string) *DescribeImportI } // Contains the parameters for DescribeImportSnapshotTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportSnapshotTasksRequest type DescribeImportSnapshotTasksInput struct { _ struct{} `type:"structure"` @@ -31133,7 +35182,6 @@ func (s *DescribeImportSnapshotTasksInput) SetNextToken(v string) *DescribeImpor } // Contains the output for DescribeImportSnapshotTasks. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeImportSnapshotTasksResult type DescribeImportSnapshotTasksOutput struct { _ struct{} `type:"structure"` @@ -31169,7 +35217,6 @@ func (s *DescribeImportSnapshotTasksOutput) SetNextToken(v string) *DescribeImpo } // Contains the parameters for DescribeInstanceAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceAttributeRequest type DescribeInstanceAttributeInput struct { _ struct{} `type:"structure"` @@ -31237,7 +35284,6 @@ func (s *DescribeInstanceAttributeInput) SetInstanceId(v string) *DescribeInstan } // Describes an instance attribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceAttribute type DescribeInstanceAttributeOutput struct { _ struct{} `type:"structure"` @@ -31248,7 +35294,7 @@ type DescribeInstanceAttributeOutput struct { // EC2 console, CLI, or API; otherwise, you can. 
DisableApiTermination *AttributeBooleanValue `locationName:"disableApiTermination" type:"structure"` - // Indicates whether the instance is optimized for EBS I/O. + // Indicates whether the instance is optimized for Amazon EBS I/O. EbsOptimized *AttributeBooleanValue `locationName:"ebsOptimized" type:"structure"` // Indicates whether enhanced networking with ENA is enabled. @@ -31276,12 +35322,12 @@ type DescribeInstanceAttributeOutput struct { // The RAM disk ID. RamdiskId *AttributeValue `locationName:"ramdisk" type:"structure"` - // The name of the root device (for example, /dev/sda1 or /dev/xvda). + // The device name of the root device volume (for example, /dev/sda1). RootDeviceName *AttributeValue `locationName:"rootDeviceName" type:"structure"` // Indicates whether source/destination checking is enabled. A value of true - // means checking is enabled, and false means checking is disabled. This value - // must be false for a NAT instance to perform NAT. + // means that checking is enabled, and false means that checking is disabled. + // This value must be false for a NAT instance to perform NAT. SourceDestCheck *AttributeBooleanValue `locationName:"sourceDestCheck" type:"structure"` // Indicates whether enhanced networking with the Intel 82599 Virtual Function @@ -31392,8 +35438,111 @@ func (s *DescribeInstanceAttributeOutput) SetUserData(v *AttributeValue) *Descri return s } +type DescribeInstanceCreditSpecificationsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * instance-id - The ID of the instance. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // One or more instance IDs. + // + // Default: Describes all your instances. + // + // Constraints: Maximum 1000 explicitly specified instance IDs. + InstanceIds []*string `locationName:"InstanceId" locationNameList:"InstanceId" type:"list"` + + // The maximum number of results to return in a single call. To retrieve the + // remaining results, make another call with the returned NextToken value. This + // value can be between 5 and 1000. You cannot specify this parameter and the + // instance IDs parameter in the same call. + MaxResults *int64 `type:"integer"` + + // The token to retrieve the next page of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceCreditSpecificationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceCreditSpecificationsInput) GoString() string { + return s.String() +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeInstanceCreditSpecificationsInput) SetDryRun(v bool) *DescribeInstanceCreditSpecificationsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeInstanceCreditSpecificationsInput) SetFilters(v []*Filter) *DescribeInstanceCreditSpecificationsInput { + s.Filters = v + return s +} + +// SetInstanceIds sets the InstanceIds field's value. 
+func (s *DescribeInstanceCreditSpecificationsInput) SetInstanceIds(v []*string) *DescribeInstanceCreditSpecificationsInput { + s.InstanceIds = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeInstanceCreditSpecificationsInput) SetMaxResults(v int64) *DescribeInstanceCreditSpecificationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstanceCreditSpecificationsInput) SetNextToken(v string) *DescribeInstanceCreditSpecificationsInput { + s.NextToken = &v + return s +} + +type DescribeInstanceCreditSpecificationsOutput struct { + _ struct{} `type:"structure"` + + // Information about the credit option for CPU usage of an instance. + InstanceCreditSpecifications []*InstanceCreditSpecification `locationName:"instanceCreditSpecificationSet" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeInstanceCreditSpecificationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeInstanceCreditSpecificationsOutput) GoString() string { + return s.String() +} + +// SetInstanceCreditSpecifications sets the InstanceCreditSpecifications field's value. +func (s *DescribeInstanceCreditSpecificationsOutput) SetInstanceCreditSpecifications(v []*InstanceCreditSpecification) *DescribeInstanceCreditSpecificationsOutput { + s.InstanceCreditSpecifications = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeInstanceCreditSpecificationsOutput) SetNextToken(v string) *DescribeInstanceCreditSpecificationsOutput { + s.NextToken = &v + return s +} + // Contains the parameters for DescribeInstanceStatus. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceStatusRequest type DescribeInstanceStatusInput struct { _ struct{} `type:"structure"` @@ -31510,7 +35659,6 @@ func (s *DescribeInstanceStatusInput) SetNextToken(v string) *DescribeInstanceSt } // Contains the output of DescribeInstanceStatus. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstanceStatusResult type DescribeInstanceStatusOutput struct { _ struct{} `type:"structure"` @@ -31545,7 +35693,6 @@ func (s *DescribeInstanceStatusOutput) SetNextToken(v string) *DescribeInstanceS } // Contains the parameters for DescribeInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstancesRequest type DescribeInstancesInput struct { _ struct{} `type:"structure"` @@ -31570,8 +35717,8 @@ type DescribeInstancesInput struct { // * block-device-mapping.delete-on-termination - A Boolean that indicates // whether the EBS volume is deleted on instance termination. // - // * block-device-mapping.device-name - The device name for the EBS volume - // (for example, /dev/sdh or xvdh). + // * block-device-mapping.device-name - The device name specified in the + // block device mapping (for example, /dev/sdh or xvdh). // // * block-device-mapping.status - The status for the EBS volume (attaching // | attached | detaching | detached). @@ -31712,10 +35859,10 @@ type DescribeInstancesInput struct { // | in-use). // // * network-interface.source-dest-check - Whether the network interface - // performs source/destination checking. 
A value of true means checking is - // enabled, and false means checking is disabled. The value must be false - // for the network interface to perform network address translation (NAT) - // in your VPC. + // performs source/destination checking. A value of true means that checking + // is enabled, and false means that checking is disabled. The value must + // be false for the network interface to perform network address translation + // (NAT) in your VPC. // // * network-interface.subnet-id - The ID of the subnet for the network interface. // @@ -31750,22 +35897,21 @@ type DescribeInstancesInput struct { // ID is created any time you launch an instance. A reservation ID has a // one-to-one relationship with an instance launch request, but can be associated // with more than one instance if you launch multiple instances using the - // same launch request. For example, if you launch one instance, you'll get + // same launch request. For example, if you launch one instance, you get // one reservation ID. If you launch ten instances using the same launch - // request, you'll also get one reservation ID. + // request, you also get one reservation ID. // - // * root-device-name - The name of the root device for the instance (for - // example, /dev/sda1 or /dev/xvda). + // * root-device-name - The device name of the root device volume (for example, + // /dev/sda1). // - // * root-device-type - The type of root device that the instance uses (ebs - // | instance-store). + // * root-device-type - The type of the root device volume (ebs | instance-store). // // * source-dest-check - Indicates whether the instance performs source/destination // checking. A value of true means that checking is enabled, and false means - // checking is disabled. The value must be false for the instance to perform - // network address translation (NAT) in your VPC. + // that checking is disabled. The value must be false for the instance to + // perform network address translation (NAT) in your VPC. // - // * spot-instance-request-id - The ID of the Spot instance request. + // * spot-instance-request-id - The ID of the Spot Instance request. // // * state-reason-code - The reason code for the state change. // @@ -31782,9 +35928,8 @@ type DescribeInstancesInput struct { // independent of the tag-value filter. For example, if you use both the // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources // assigned both the tag key Purpose (regardless of what the tag's value - // is), and the tag value X (regardless of what the tag's key is). If you - // want to list only resources where Purpose is X, see the tag:key=value - // filter. + // is), and the tag value X (regardless of the tag's key). If you want to + // list only resources where Purpose is X, see the tag:key=value filter. // // * tag-value - The value of a tag assigned to the resource. This filter // is independent of the tag-key filter. @@ -31853,7 +35998,6 @@ func (s *DescribeInstancesInput) SetNextToken(v string) *DescribeInstancesInput } // Contains the output of DescribeInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInstancesResult type DescribeInstancesOutput struct { _ struct{} `type:"structure"` @@ -31888,7 +36032,6 @@ func (s *DescribeInstancesOutput) SetReservations(v []*Reservation) *DescribeIns } // Contains the parameters for DescribeInternetGateways. 
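For the DescribeInstanceCreditSpecifications shapes introduced a bit earlier in this hunk, a hedged sketch of paginating through the results might look like the following; the instance ID is a placeholder and the usual aws-sdk-go v1 session boilerplate is assumed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	input := &ec2.DescribeInstanceCreditSpecificationsInput{
		// Placeholder instance ID; omit InstanceIds to describe all instances.
		InstanceIds: aws.StringSlice([]string{"i-0123456789abcdef0"}),
	}
	for {
		out, err := svc.DescribeInstanceCreditSpecifications(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, spec := range out.InstanceCreditSpecifications {
			// The shape's String() method pretty-prints it via awsutil.Prettify.
			fmt.Println(spec)
		}
		// An empty NextToken means the last page has been returned.
		if aws.StringValue(out.NextToken) == "" {
			break
		}
		input.NextToken = out.NextToken
	}
}
```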
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInternetGatewaysRequest type DescribeInternetGatewaysInput struct { _ struct{} `type:"structure"` @@ -31959,7 +36102,6 @@ func (s *DescribeInternetGatewaysInput) SetInternetGatewayIds(v []*string) *Desc } // Contains the output of DescribeInternetGateways. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeInternetGatewaysResult type DescribeInternetGatewaysOutput struct { _ struct{} `type:"structure"` @@ -31984,7 +36126,6 @@ func (s *DescribeInternetGatewaysOutput) SetInternetGateways(v []*InternetGatewa } // Contains the parameters for DescribeKeyPairs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeKeyPairsRequest type DescribeKeyPairsInput struct { _ struct{} `type:"structure"` @@ -32036,7 +36177,6 @@ func (s *DescribeKeyPairsInput) SetKeyNames(v []*string) *DescribeKeyPairsInput } // Contains the output of DescribeKeyPairs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeKeyPairsResult type DescribeKeyPairsOutput struct { _ struct{} `type:"structure"` @@ -32060,8 +36200,295 @@ func (s *DescribeKeyPairsOutput) SetKeyPairs(v []*KeyPairInfo) *DescribeKeyPairs return s } +type DescribeLaunchTemplateVersionsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * create-time - The time the launch template version was created. + // + // * ebs-optimized - A boolean that indicates whether the instance is optimized + // for Amazon EBS I/O. + // + // * iam-instance-profile - The ARN of the IAM instance profile. + // + // * image-id - The ID of the AMI. + // + // * instance-type - The instance type. + // + // * is-default-version - A boolean that indicates whether the launch template + // version is the default version. + // + // * kernel-id - The kernel ID. + // + // * ram-disk-id - The RAM disk ID. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The ID of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateId *string `type:"string"` + + // The name of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateName *string `min:"3" type:"string"` + + // The maximum number of results to return in a single call. To retrieve the + // remaining results, make another call with the returned NextToken value. This + // value can be between 5 and 1000. + MaxResults *int64 `type:"integer"` + + // The version number up to which to describe launch template versions. + MaxVersion *string `type:"string"` + + // The version number after which to describe launch template versions. + MinVersion *string `type:"string"` + + // The token to request the next page of results. + NextToken *string `type:"string"` + + // One or more versions of the launch template. 
+ Versions []*string `locationName:"LaunchTemplateVersion" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DescribeLaunchTemplateVersionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLaunchTemplateVersionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeLaunchTemplateVersionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeLaunchTemplateVersionsInput"} + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetDryRun(v bool) *DescribeLaunchTemplateVersionsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetFilters(v []*Filter) *DescribeLaunchTemplateVersionsInput { + s.Filters = v + return s +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetLaunchTemplateId(v string) *DescribeLaunchTemplateVersionsInput { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetLaunchTemplateName(v string) *DescribeLaunchTemplateVersionsInput { + s.LaunchTemplateName = &v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetMaxResults(v int64) *DescribeLaunchTemplateVersionsInput { + s.MaxResults = &v + return s +} + +// SetMaxVersion sets the MaxVersion field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetMaxVersion(v string) *DescribeLaunchTemplateVersionsInput { + s.MaxVersion = &v + return s +} + +// SetMinVersion sets the MinVersion field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetMinVersion(v string) *DescribeLaunchTemplateVersionsInput { + s.MinVersion = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetNextToken(v string) *DescribeLaunchTemplateVersionsInput { + s.NextToken = &v + return s +} + +// SetVersions sets the Versions field's value. +func (s *DescribeLaunchTemplateVersionsInput) SetVersions(v []*string) *DescribeLaunchTemplateVersionsInput { + s.Versions = v + return s +} + +type DescribeLaunchTemplateVersionsOutput struct { + _ struct{} `type:"structure"` + + // Information about the launch template versions. + LaunchTemplateVersions []*LaunchTemplateVersion `locationName:"launchTemplateVersionSet" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeLaunchTemplateVersionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLaunchTemplateVersionsOutput) GoString() string { + return s.String() +} + +// SetLaunchTemplateVersions sets the LaunchTemplateVersions field's value. 
+func (s *DescribeLaunchTemplateVersionsOutput) SetLaunchTemplateVersions(v []*LaunchTemplateVersion) *DescribeLaunchTemplateVersionsOutput { + s.LaunchTemplateVersions = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLaunchTemplateVersionsOutput) SetNextToken(v string) *DescribeLaunchTemplateVersionsOutput { + s.NextToken = &v + return s +} + +type DescribeLaunchTemplatesInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * create-time - The time the launch template was created. + // + // * launch-template-name - The name of the launch template. + // + // * tag:key=value - The key/value combination of a tag assigned to the resource. + // Specify the key of the tag in the filter name and the value of the tag + // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose + // for the filter name and X for the filter value. + // + // * tag-key - The key of a tag assigned to the resource. This filter is + // independent of the tag-value filter. For example, if you use both the + // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources + // assigned both the tag key Purpose (regardless of what the tag's value + // is), and the tag value X (regardless of the tag's key). If you want to + // list only resources where Purpose is X, see the tag:key=value filter. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // One or more launch template IDs. + LaunchTemplateIds []*string `locationName:"LaunchTemplateId" locationNameList:"item" type:"list"` + + // One or more launch template names. + LaunchTemplateNames []*string `locationName:"LaunchTemplateName" locationNameList:"item" type:"list"` + + // The maximum number of results to return in a single call. To retrieve the + // remaining results, make another call with the returned NextToken value. This + // value can be between 5 and 1000. + MaxResults *int64 `type:"integer"` + + // The token to request the next page of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeLaunchTemplatesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLaunchTemplatesInput) GoString() string { + return s.String() +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeLaunchTemplatesInput) SetDryRun(v bool) *DescribeLaunchTemplatesInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeLaunchTemplatesInput) SetFilters(v []*Filter) *DescribeLaunchTemplatesInput { + s.Filters = v + return s +} + +// SetLaunchTemplateIds sets the LaunchTemplateIds field's value. +func (s *DescribeLaunchTemplatesInput) SetLaunchTemplateIds(v []*string) *DescribeLaunchTemplatesInput { + s.LaunchTemplateIds = v + return s +} + +// SetLaunchTemplateNames sets the LaunchTemplateNames field's value. +func (s *DescribeLaunchTemplatesInput) SetLaunchTemplateNames(v []*string) *DescribeLaunchTemplatesInput { + s.LaunchTemplateNames = v + return s +} + +// SetMaxResults sets the MaxResults field's value. 
+func (s *DescribeLaunchTemplatesInput) SetMaxResults(v int64) *DescribeLaunchTemplatesInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLaunchTemplatesInput) SetNextToken(v string) *DescribeLaunchTemplatesInput { + s.NextToken = &v + return s +} + +type DescribeLaunchTemplatesOutput struct { + _ struct{} `type:"structure"` + + // Information about the launch templates. + LaunchTemplates []*LaunchTemplate `locationName:"launchTemplates" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeLaunchTemplatesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLaunchTemplatesOutput) GoString() string { + return s.String() +} + +// SetLaunchTemplates sets the LaunchTemplates field's value. +func (s *DescribeLaunchTemplatesOutput) SetLaunchTemplates(v []*LaunchTemplate) *DescribeLaunchTemplatesOutput { + s.LaunchTemplates = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLaunchTemplatesOutput) SetNextToken(v string) *DescribeLaunchTemplatesOutput { + s.NextToken = &v + return s +} + // Contains the parameters for DescribeMovingAddresses. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMovingAddressesRequest type DescribeMovingAddressesInput struct { _ struct{} `type:"structure"` @@ -32133,7 +36560,6 @@ func (s *DescribeMovingAddressesInput) SetPublicIps(v []*string) *DescribeMoving } // Contains the output of DescribeMovingAddresses. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMovingAddressesResult type DescribeMovingAddressesOutput struct { _ struct{} `type:"structure"` @@ -32168,7 +36594,6 @@ func (s *DescribeMovingAddressesOutput) SetNextToken(v string) *DescribeMovingAd } // Contains the parameters for DescribeNatGateways. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNatGatewaysRequest type DescribeNatGatewaysInput struct { _ struct{} `type:"structure"` @@ -32181,6 +36606,22 @@ type DescribeNatGatewaysInput struct { // // * subnet-id - The ID of the subnet in which the NAT gateway resides. // + // * tag:key=value - The key/value combination of a tag assigned to the resource. + // Specify the key of the tag in the filter name and the value of the tag + // in the filter value. For example, for the tag Purpose=X, specify tag:Purpose + // for the filter name and X for the filter value. + // + // * tag-key - The key of a tag assigned to the resource. This filter is + // independent of the tag-value filter. For example, if you use both the + // filter "tag-key=Purpose" and the filter "tag-value=X", you get any resources + // assigned both the tag key Purpose (regardless of what the tag's value + // is), and the tag value X (regardless of what the tag's key is). If you + // want to list only resources where Purpose is X, see the tag:key=value + // filter. + // + // * tag-value - The value of a tag assigned to the resource. This filter + // is independent of the tag-key filter. + // // * vpc-id - The ID of the VPC in which the NAT gateway resides. 
Filter []*Filter `locationNameList:"Filter" type:"list"` @@ -32234,7 +36675,6 @@ func (s *DescribeNatGatewaysInput) SetNextToken(v string) *DescribeNatGatewaysIn } // Contains the output of DescribeNatGateways. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNatGatewaysResult type DescribeNatGatewaysOutput struct { _ struct{} `type:"structure"` @@ -32269,7 +36709,6 @@ func (s *DescribeNatGatewaysOutput) SetNextToken(v string) *DescribeNatGatewaysO } // Contains the parameters for DescribeNetworkAcls. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkAclsRequest type DescribeNetworkAclsInput struct { _ struct{} `type:"structure"` @@ -32371,7 +36810,6 @@ func (s *DescribeNetworkAclsInput) SetNetworkAclIds(v []*string) *DescribeNetwor } // Contains the output of DescribeNetworkAcls. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkAclsResult type DescribeNetworkAclsOutput struct { _ struct{} `type:"structure"` @@ -32396,7 +36834,6 @@ func (s *DescribeNetworkAclsOutput) SetNetworkAcls(v []*NetworkAcl) *DescribeNet } // Contains the parameters for DescribeNetworkInterfaceAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaceAttributeRequest type DescribeNetworkInterfaceAttributeInput struct { _ struct{} `type:"structure"` @@ -32457,7 +36894,6 @@ func (s *DescribeNetworkInterfaceAttributeInput) SetNetworkInterfaceId(v string) } // Contains the output of DescribeNetworkInterfaceAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfaceAttributeResult type DescribeNetworkInterfaceAttributeOutput struct { _ struct{} `type:"structure"` @@ -32518,7 +36954,6 @@ func (s *DescribeNetworkInterfaceAttributeOutput) SetSourceDestCheck(v *Attribut } // Contains the parameters for DescribeNetworkInterfacePermissions. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacePermissionsRequest type DescribeNetworkInterfacePermissionsInput struct { _ struct{} `type:"structure"` @@ -32585,7 +37020,6 @@ func (s *DescribeNetworkInterfacePermissionsInput) SetNextToken(v string) *Descr } // Contains the output for DescribeNetworkInterfacePermissions. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacePermissionsResult type DescribeNetworkInterfacePermissionsOutput struct { _ struct{} `type:"structure"` @@ -32619,7 +37053,6 @@ func (s *DescribeNetworkInterfacePermissionsOutput) SetNextToken(v string) *Desc } // Contains the parameters for DescribeNetworkInterfaces. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacesRequest type DescribeNetworkInterfacesInput struct { _ struct{} `type:"structure"` @@ -32777,7 +37210,6 @@ func (s *DescribeNetworkInterfacesInput) SetNetworkInterfaceIds(v []*string) *De } // Contains the output of DescribeNetworkInterfaces. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeNetworkInterfacesResult type DescribeNetworkInterfacesOutput struct { _ struct{} `type:"structure"` @@ -32802,7 +37234,6 @@ func (s *DescribeNetworkInterfacesOutput) SetNetworkInterfaces(v []*NetworkInter } // Contains the parameters for DescribePlacementGroups. 
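The new launch-template shapes above (DescribeLaunchTemplatesInput/Output and DescribeLaunchTemplateVersionsInput/Output) could be combined roughly as sketched below. This is illustrative only: the template name is a placeholder, and the LaunchTemplateId field read from each LaunchTemplate element is assumed from the SDK shape, which is not shown in this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// List launch templates matching a placeholder name.
	templates, err := svc.DescribeLaunchTemplates(&ec2.DescribeLaunchTemplatesInput{
		LaunchTemplateNames: aws.StringSlice([]string{"my-packer-template"}),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, lt := range templates.LaunchTemplates {
		// Describe up to five versions of each matching template.
		versions, err := svc.DescribeLaunchTemplateVersions(&ec2.DescribeLaunchTemplateVersionsInput{
			LaunchTemplateId: lt.LaunchTemplateId, // assumed field on LaunchTemplate
			MaxResults:       aws.Int64(5),
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, v := range versions.LaunchTemplateVersions {
			fmt.Println(v)
		}
	}
}
```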
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePlacementGroupsRequest type DescribePlacementGroupsInput struct { _ struct{} `type:"structure"` @@ -32819,7 +37250,7 @@ type DescribePlacementGroupsInput struct { // * state - The state of the placement group (pending | available | deleting // | deleted). // - // * strategy - The strategy of the placement group (cluster). + // * strategy - The strategy of the placement group (cluster | spread). Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` // One or more placement group names. @@ -32857,7 +37288,6 @@ func (s *DescribePlacementGroupsInput) SetGroupNames(v []*string) *DescribePlace } // Contains the output of DescribePlacementGroups. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePlacementGroupsResult type DescribePlacementGroupsOutput struct { _ struct{} `type:"structure"` @@ -32882,7 +37312,6 @@ func (s *DescribePlacementGroupsOutput) SetPlacementGroups(v []*PlacementGroup) } // Contains the parameters for DescribePrefixLists. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePrefixListsRequest type DescribePrefixListsInput struct { _ struct{} `type:"structure"` @@ -32956,7 +37385,6 @@ func (s *DescribePrefixListsInput) SetPrefixListIds(v []*string) *DescribePrefix } // Contains the output of DescribePrefixLists. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribePrefixListsResult type DescribePrefixListsOutput struct { _ struct{} `type:"structure"` @@ -32991,7 +37419,6 @@ func (s *DescribePrefixListsOutput) SetPrefixLists(v []*PrefixList) *DescribePre } // Contains the parameters for DescribeRegions. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRegionsRequest type DescribeRegionsInput struct { _ struct{} `type:"structure"` @@ -33041,7 +37468,6 @@ func (s *DescribeRegionsInput) SetRegionNames(v []*string) *DescribeRegionsInput } // Contains the output of DescribeRegions. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRegionsResult type DescribeRegionsOutput struct { _ struct{} `type:"structure"` @@ -33066,7 +37492,6 @@ func (s *DescribeRegionsOutput) SetRegions(v []*Region) *DescribeRegionsOutput { } // Contains the parameters for DescribeReservedInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesRequest type DescribeReservedInstancesInput struct { _ struct{} `type:"structure"` @@ -33186,7 +37611,6 @@ func (s *DescribeReservedInstancesInput) SetReservedInstancesIds(v []*string) *D } // Contains the parameters for DescribeReservedInstancesListings. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesListingsRequest type DescribeReservedInstancesListingsInput struct { _ struct{} `type:"structure"` @@ -33238,7 +37662,6 @@ func (s *DescribeReservedInstancesListingsInput) SetReservedInstancesListingId(v } // Contains the output of DescribeReservedInstancesListings. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesListingsResult type DescribeReservedInstancesListingsOutput struct { _ struct{} `type:"structure"` @@ -33263,7 +37686,6 @@ func (s *DescribeReservedInstancesListingsOutput) SetReservedInstancesListings(v } // Contains the parameters for DescribeReservedInstancesModifications. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesModificationsRequest type DescribeReservedInstancesModificationsInput struct { _ struct{} `type:"structure"` @@ -33339,7 +37761,6 @@ func (s *DescribeReservedInstancesModificationsInput) SetReservedInstancesModifi } // Contains the output of DescribeReservedInstancesModifications. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesModificationsResult type DescribeReservedInstancesModificationsOutput struct { _ struct{} `type:"structure"` @@ -33374,7 +37795,6 @@ func (s *DescribeReservedInstancesModificationsOutput) SetReservedInstancesModif } // Contains the parameters for DescribeReservedInstancesOfferings. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesOfferingsRequest type DescribeReservedInstancesOfferingsInput struct { _ struct{} `type:"structure"` @@ -33584,7 +38004,6 @@ func (s *DescribeReservedInstancesOfferingsInput) SetReservedInstancesOfferingId } // Contains the output of DescribeReservedInstancesOfferings. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesOfferingsResult type DescribeReservedInstancesOfferingsOutput struct { _ struct{} `type:"structure"` @@ -33619,7 +38038,6 @@ func (s *DescribeReservedInstancesOfferingsOutput) SetReservedInstancesOfferings } // Contains the output for DescribeReservedInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeReservedInstancesResult type DescribeReservedInstancesOutput struct { _ struct{} `type:"structure"` @@ -33644,7 +38062,6 @@ func (s *DescribeReservedInstancesOutput) SetReservedInstances(v []*ReservedInst } // Contains the parameters for DescribeRouteTables. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRouteTablesRequest type DescribeRouteTablesInput struct { _ struct{} `type:"structure"` @@ -33757,7 +38174,6 @@ func (s *DescribeRouteTablesInput) SetRouteTableIds(v []*string) *DescribeRouteT } // Contains the output of DescribeRouteTables. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeRouteTablesResult type DescribeRouteTablesOutput struct { _ struct{} `type:"structure"` @@ -33782,7 +38198,6 @@ func (s *DescribeRouteTablesOutput) SetRouteTables(v []*RouteTable) *DescribeRou } // Contains the parameters for DescribeScheduledInstanceAvailability. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstanceAvailabilityRequest type DescribeScheduledInstanceAvailabilityInput struct { _ struct{} `type:"structure"` @@ -33912,7 +38327,6 @@ func (s *DescribeScheduledInstanceAvailabilityInput) SetRecurrence(v *ScheduledI } // Contains the output of DescribeScheduledInstanceAvailability. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstanceAvailabilityResult type DescribeScheduledInstanceAvailabilityOutput struct { _ struct{} `type:"structure"` @@ -33947,7 +38361,6 @@ func (s *DescribeScheduledInstanceAvailabilityOutput) SetScheduledInstanceAvaila } // Contains the parameters for DescribeScheduledInstances. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstancesRequest type DescribeScheduledInstancesInput struct { _ struct{} `type:"structure"` @@ -34030,7 +38443,6 @@ func (s *DescribeScheduledInstancesInput) SetSlotStartTimeRange(v *SlotStartTime } // Contains the output of DescribeScheduledInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeScheduledInstancesResult type DescribeScheduledInstancesOutput struct { _ struct{} `type:"structure"` @@ -34064,7 +38476,6 @@ func (s *DescribeScheduledInstancesOutput) SetScheduledInstanceSet(v []*Schedule return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupReferencesRequest type DescribeSecurityGroupReferencesInput struct { _ struct{} `type:"structure"` @@ -34115,7 +38526,6 @@ func (s *DescribeSecurityGroupReferencesInput) SetGroupId(v []*string) *Describe return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupReferencesResult type DescribeSecurityGroupReferencesOutput struct { _ struct{} `type:"structure"` @@ -34140,7 +38550,6 @@ func (s *DescribeSecurityGroupReferencesOutput) SetSecurityGroupReferenceSet(v [ } // Contains the parameters for DescribeSecurityGroups. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupsRequest type DescribeSecurityGroupsInput struct { _ struct{} `type:"structure"` @@ -34156,36 +38565,63 @@ type DescribeSecurityGroupsInput struct { // // * description - The description of the security group. // + // * egress.ip-permission.cidr - An IPv4 CIDR block for an outbound security + // group rule. + // + // * egress.ip-permission.from-port - For an outbound rule, the start of + // port range for the TCP and UDP protocols, or an ICMP type number. + // + // * egress.ip-permission.group-id - The ID of a security group that has + // been referenced in an outbound security group rule. + // + // * egress.ip-permission.group-name - The name of a security group that + // has been referenced in an outbound security group rule. + // + // * egress.ip-permission.ipv6-cidr - An IPv6 CIDR block for an outbound + // security group rule. + // // * egress.ip-permission.prefix-list-id - The ID (prefix) of the AWS service - // to which the security group allows access. + // to which a security group rule allows outbound access. + // + // * egress.ip-permission.protocol - The IP protocol for an outbound security + // group rule (tcp | udp | icmp or a protocol number). + // + // * egress.ip-permission.to-port - For an outbound rule, the end of port + // range for the TCP and UDP protocols, or an ICMP code. + // + // * egress.ip-permission.user-id - The ID of an AWS account that has been + // referenced in an outbound security group rule. // // * group-id - The ID of the security group. // // * group-name - The name of the security group. // - // * ip-permission.cidr - An IPv4 CIDR range that has been granted permission - // in a security group rule. + // * ip-permission.cidr - An IPv4 CIDR block for an inbound security group + // rule. // - // * ip-permission.from-port - The start of port range for the TCP and UDP - // protocols, or an ICMP type number. + // * ip-permission.from-port - For an inbound rule, the start of port range + // for the TCP and UDP protocols, or an ICMP type number. // - // * ip-permission.group-id - The ID of a security group that has been granted - // permission. 
+ // * ip-permission.group-id - The ID of a security group that has been referenced + // in an inbound security group rule. // // * ip-permission.group-name - The name of a security group that has been - // granted permission. + // referenced in an inbound security group rule. // - // * ip-permission.ipv6-cidr - An IPv6 CIDR range that has been granted permission - // in a security group rule. + // * ip-permission.ipv6-cidr - An IPv6 CIDR block for an inbound security + // group rule. // - // * ip-permission.protocol - The IP protocol for the permission (tcp | udp - // | icmp or a protocol number). + // * ip-permission.prefix-list-id - The ID (prefix) of the AWS service from + // which a security group rule allows inbound access. // - // * ip-permission.to-port - The end of port range for the TCP and UDP protocols, - // or an ICMP code. + // * ip-permission.protocol - The IP protocol for an inbound security group + // rule (tcp | udp | icmp or a protocol number). // - // * ip-permission.user-id - The ID of an AWS account that has been granted - // permission. + // * ip-permission.to-port - For an inbound rule, the end of port range for + // the TCP and UDP protocols, or an ICMP code. + // + // * ip-permission.user-id - The ID of an AWS account that has been referenced + // in an inbound security group rule. // // * owner-id - The AWS account ID of the owner of the security group. // @@ -34209,6 +38645,14 @@ type DescribeSecurityGroupsInput struct { // // Default: Describes all your security groups. GroupNames []*string `locationName:"GroupName" locationNameList:"GroupName" type:"list"` + + // The maximum number of results to return in a single call. To retrieve the + // remaining results, make another request with the returned NextToken value. + // This value can be between 5 and 1000. + MaxResults *int64 `type:"integer"` + + // The token to request the next page of results. + NextToken *string `type:"string"` } // String returns the string representation @@ -34245,11 +38689,26 @@ func (s *DescribeSecurityGroupsInput) SetGroupNames(v []*string) *DescribeSecuri return s } +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeSecurityGroupsInput) SetMaxResults(v int64) *DescribeSecurityGroupsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSecurityGroupsInput) SetNextToken(v string) *DescribeSecurityGroupsInput { + s.NextToken = &v + return s +} + // Contains the output of DescribeSecurityGroups. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSecurityGroupsResult type DescribeSecurityGroupsOutput struct { _ struct{} `type:"structure"` + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + // Information about one or more security groups. SecurityGroups []*SecurityGroup `locationName:"securityGroupInfo" locationNameList:"item" type:"list"` } @@ -34264,6 +38723,12 @@ func (s DescribeSecurityGroupsOutput) GoString() string { return s.String() } +// SetNextToken sets the NextToken field's value. +func (s *DescribeSecurityGroupsOutput) SetNextToken(v string) *DescribeSecurityGroupsOutput { + s.NextToken = &v + return s +} + // SetSecurityGroups sets the SecurityGroups field's value. 
func (s *DescribeSecurityGroupsOutput) SetSecurityGroups(v []*SecurityGroup) *DescribeSecurityGroupsOutput { s.SecurityGroups = v @@ -34271,7 +38736,6 @@ func (s *DescribeSecurityGroupsOutput) SetSecurityGroups(v []*SecurityGroup) *De } // Contains the parameters for DescribeSnapshotAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotAttributeRequest type DescribeSnapshotAttributeInput struct { _ struct{} `type:"structure"` @@ -34337,7 +38801,6 @@ func (s *DescribeSnapshotAttributeInput) SetSnapshotId(v string) *DescribeSnapsh } // Contains the output of DescribeSnapshotAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotAttributeResult type DescribeSnapshotAttributeOutput struct { _ struct{} `type:"structure"` @@ -34380,7 +38843,6 @@ func (s *DescribeSnapshotAttributeOutput) SetSnapshotId(v string) *DescribeSnaps } // Contains the parameters for DescribeSnapshots. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotsRequest type DescribeSnapshotsInput struct { _ struct{} `type:"structure"` @@ -34514,7 +38976,6 @@ func (s *DescribeSnapshotsInput) SetSnapshotIds(v []*string) *DescribeSnapshotsI } // Contains the output of DescribeSnapshots. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSnapshotsResult type DescribeSnapshotsOutput struct { _ struct{} `type:"structure"` @@ -34551,7 +39012,6 @@ func (s *DescribeSnapshotsOutput) SetSnapshots(v []*Snapshot) *DescribeSnapshots } // Contains the parameters for DescribeSpotDatafeedSubscription. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotDatafeedSubscriptionRequest type DescribeSpotDatafeedSubscriptionInput struct { _ struct{} `type:"structure"` @@ -34579,11 +39039,10 @@ func (s *DescribeSpotDatafeedSubscriptionInput) SetDryRun(v bool) *DescribeSpotD } // Contains the output of DescribeSpotDatafeedSubscription. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotDatafeedSubscriptionResult type DescribeSpotDatafeedSubscriptionOutput struct { _ struct{} `type:"structure"` - // The Spot instance data feed subscription. + // The Spot Instance data feed subscription. SpotDatafeedSubscription *SpotDatafeedSubscription `locationName:"spotDatafeedSubscription" type:"structure"` } @@ -34604,7 +39063,6 @@ func (s *DescribeSpotDatafeedSubscriptionOutput) SetSpotDatafeedSubscription(v * } // Contains the parameters for DescribeSpotFleetInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetInstancesRequest type DescribeSpotFleetInstancesInput struct { _ struct{} `type:"structure"` @@ -34622,7 +39080,7 @@ type DescribeSpotFleetInstancesInput struct { // The token for the next set of results. NextToken *string `locationName:"nextToken" type:"string"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -34676,7 +39134,6 @@ func (s *DescribeSpotFleetInstancesInput) SetSpotFleetRequestId(v string) *Descr } // Contains the output of DescribeSpotFleetInstances. 
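The MaxResults and NextToken fields added to DescribeSecurityGroupsInput above (together with NextToken on the output) enable server-side paging for this call. A minimal sketch of a manual pagination loop using the setters defined in this hunk, assuming a configured session; the page size of 200 is an arbitrary value inside the documented 5-1000 range:

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	input := &ec2.DescribeSecurityGroupsInput{}
	input.SetMaxResults(200) // between 5 and 1000, per the field documentation above

	for {
		page, err := svc.DescribeSecurityGroups(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, sg := range page.SecurityGroups {
			fmt.Println(aws.StringValue(sg.GroupId))
		}
		// A nil NextToken means there are no more results to return.
		if page.NextToken == nil {
			break
		}
		input.SetNextToken(aws.StringValue(page.NextToken))
	}
}
```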
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetInstancesResponse type DescribeSpotFleetInstancesOutput struct { _ struct{} `type:"structure"` @@ -34690,7 +39147,7 @@ type DescribeSpotFleetInstancesOutput struct { // when there are no more results to return. NextToken *string `locationName:"nextToken" type:"string"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -34725,7 +39182,6 @@ func (s *DescribeSpotFleetInstancesOutput) SetSpotFleetRequestId(v string) *Desc } // Contains the parameters for DescribeSpotFleetRequestHistory. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestHistoryRequest type DescribeSpotFleetRequestHistoryInput struct { _ struct{} `type:"structure"` @@ -34746,7 +39202,7 @@ type DescribeSpotFleetRequestHistoryInput struct { // The token for the next set of results. NextToken *string `locationName:"nextToken" type:"string"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -34820,11 +39276,10 @@ func (s *DescribeSpotFleetRequestHistoryInput) SetStartTime(v time.Time) *Descri } // Contains the output of DescribeSpotFleetRequestHistory. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestHistoryResponse type DescribeSpotFleetRequestHistoryOutput struct { _ struct{} `type:"structure"` - // Information about the events in the history of the Spot fleet request. + // Information about the events in the history of the Spot Fleet request. // // HistoryRecords is a required field HistoryRecords []*HistoryRecord `locationName:"historyRecordSet" locationNameList:"item" type:"list" required:"true"` @@ -34841,7 +39296,7 @@ type DescribeSpotFleetRequestHistoryOutput struct { // when there are no more results to return. NextToken *string `locationName:"nextToken" type:"string"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -34893,7 +39348,6 @@ func (s *DescribeSpotFleetRequestHistoryOutput) SetStartTime(v time.Time) *Descr } // Contains the parameters for DescribeSpotFleetRequests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestsRequest type DescribeSpotFleetRequestsInput struct { _ struct{} `type:"structure"` @@ -34911,7 +39365,7 @@ type DescribeSpotFleetRequestsInput struct { // The token for the next set of results. NextToken *string `locationName:"nextToken" type:"string"` - // The IDs of the Spot fleet requests. + // The IDs of the Spot Fleet requests. SpotFleetRequestIds []*string `locationName:"spotFleetRequestId" locationNameList:"item" type:"list"` } @@ -34950,7 +39404,6 @@ func (s *DescribeSpotFleetRequestsInput) SetSpotFleetRequestIds(v []*string) *De } // Contains the output of DescribeSpotFleetRequests. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotFleetRequestsResponse type DescribeSpotFleetRequestsOutput struct { _ struct{} `type:"structure"` @@ -34958,7 +39411,7 @@ type DescribeSpotFleetRequestsOutput struct { // when there are no more results to return. NextToken *string `locationName:"nextToken" type:"string"` - // Information about the configuration of your Spot fleet. + // Information about the configuration of your Spot Fleet. // // SpotFleetRequestConfigs is a required field SpotFleetRequestConfigs []*SpotFleetRequestConfig `locationName:"spotFleetRequestConfigSet" locationNameList:"item" type:"list" required:"true"` @@ -34987,7 +39440,6 @@ func (s *DescribeSpotFleetRequestsOutput) SetSpotFleetRequestConfigs(v []*SpotFl } // Contains the parameters for DescribeSpotInstanceRequests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotInstanceRequestsRequest type DescribeSpotInstanceRequestsInput struct { _ struct{} `type:"structure"` @@ -35001,7 +39453,7 @@ type DescribeSpotInstanceRequestsInput struct { // // * availability-zone-group - The Availability Zone group. // - // * create-time - The time stamp when the Spot instance request was created. + // * create-time - The time stamp when the Spot Instance request was created. // // * fault-code - The fault code related to the request. // @@ -35009,23 +39461,23 @@ type DescribeSpotInstanceRequestsInput struct { // // * instance-id - The ID of the instance that fulfilled the request. // - // * launch-group - The Spot instance launch group. + // * launch-group - The Spot Instance launch group. // // * launch.block-device-mapping.delete-on-termination - Indicates whether - // the Amazon EBS volume is deleted on instance termination. + // the EBS volume is deleted on instance termination. // - // * launch.block-device-mapping.device-name - The device name for the Amazon - // EBS volume (for example, /dev/sdh). + // * launch.block-device-mapping.device-name - The device name for the volume + // in the block device mapping (for example, /dev/sdh or xvdh). // - // * launch.block-device-mapping.snapshot-id - The ID of the snapshot used - // for the Amazon EBS volume. + // * launch.block-device-mapping.snapshot-id - The ID of the snapshot for + // the EBS volume. // - // * launch.block-device-mapping.volume-size - The size of the Amazon EBS - // volume, in GiB. + // * launch.block-device-mapping.volume-size - The size of the EBS volume, + // in GiB. // - // * launch.block-device-mapping.volume-type - The type of the Amazon EBS - // volume: gp2 for General Purpose SSD, io1 for Provisioned IOPS SSD, st1 - // for Throughput Optimized HDD, sc1for Cold HDD, or standard for Magnetic. + // * launch.block-device-mapping.volume-type - The type of EBS volume: gp2 + // for General Purpose SSD, io1 for Provisioned IOPS SSD, st1 for Throughput + // Optimized HDD, sc1for Cold HDD, or standard for Magnetic. // // * launch.group-id - The security group for the instance. // @@ -35037,53 +39489,53 @@ type DescribeSpotInstanceRequestsInput struct { // // * launch.key-name - The name of the key pair the instance launched with. // - // * launch.monitoring-enabled - Whether monitoring is enabled for the Spot - // instance. + // * launch.monitoring-enabled - Whether detailed monitoring is enabled for + // the Spot Instance. // // * launch.ramdisk-id - The RAM disk ID. // - // * network-interface.network-interface-id - The ID of the network interface. 
- // - // * network-interface.device-index - The index of the device for the network - // interface attachment on the instance. - // - // * network-interface.subnet-id - The ID of the subnet for the instance. - // - // * network-interface.description - A description of the network interface. - // - // * network-interface.private-ip-address - The primary private IP address - // of the network interface. - // - // * network-interface.delete-on-termination - Indicates whether the network - // interface is deleted when the instance is terminated. - // - // * network-interface.group-id - The ID of the security group associated - // with the network interface. - // - // * network-interface.group-name - The name of the security group associated - // with the network interface. + // * launched-availability-zone - The Availability Zone in which the request + // is launched. // // * network-interface.addresses.primary - Indicates whether the IP address // is the primary private IP address. // + // * network-interface.delete-on-termination - Indicates whether the network + // interface is deleted when the instance is terminated. + // + // * network-interface.description - A description of the network interface. + // + // * network-interface.device-index - The index of the device for the network + // interface attachment on the instance. + // + // * network-interface.group-id - The ID of the security group associated + // with the network interface. + // + // * network-interface.network-interface-id - The ID of the network interface. + // + // * network-interface.private-ip-address - The primary private IP address + // of the network interface. + // + // * network-interface.subnet-id - The ID of the subnet for the instance. + // // * product-description - The product description associated with the instance // (Linux/UNIX | Windows). // - // * spot-instance-request-id - The Spot instance request ID. + // * spot-instance-request-id - The Spot Instance request ID. // - // * spot-price - The maximum hourly price for any Spot instance launched + // * spot-price - The maximum hourly price for any Spot Instance launched // to fulfill the request. // - // * state - The state of the Spot instance request (open | active | closed - // | cancelled | failed). Spot bid status information can help you track - // your Amazon EC2 Spot instance requests. For more information, see Spot - // Bid Status (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) + // * state - The state of the Spot Instance request (open | active | closed + // | cancelled | failed). Spot request status information can help you track + // your Amazon EC2 Spot Instance requests. For more information, see Spot + // Request Status (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) // in the Amazon Elastic Compute Cloud User Guide. // // * status-code - The short code describing the most recent evaluation of - // your Spot instance request. + // your Spot Instance request. // - // * status-message - The message explaining the status of the Spot instance + // * status-message - The message explaining the status of the Spot Instance // request. // // * tag:key=value - The key/value combination of a tag assigned to the resource. @@ -35102,17 +39554,14 @@ type DescribeSpotInstanceRequestsInput struct { // * tag-value - The value of a tag assigned to the resource. This filter // is independent of the tag-key filter. // - // * type - The type of Spot instance request (one-time | persistent). 
- // - // * launched-availability-zone - The Availability Zone in which the bid - // is launched. + // * type - The type of Spot Instance request (one-time | persistent). // // * valid-from - The start date of the request. // // * valid-until - The end date of the request. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` - // One or more Spot instance request IDs. + // One or more Spot Instance request IDs. SpotInstanceRequestIds []*string `locationName:"SpotInstanceRequestId" locationNameList:"SpotInstanceRequestId" type:"list"` } @@ -35145,11 +39594,10 @@ func (s *DescribeSpotInstanceRequestsInput) SetSpotInstanceRequestIds(v []*strin } // Contains the output of DescribeSpotInstanceRequests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotInstanceRequestsResult type DescribeSpotInstanceRequestsOutput struct { _ struct{} `type:"structure"` - // One or more Spot instance requests. + // One or more Spot Instance requests. SpotInstanceRequests []*SpotInstanceRequest `locationName:"spotInstanceRequestSet" locationNameList:"item" type:"list"` } @@ -35170,7 +39618,6 @@ func (s *DescribeSpotInstanceRequestsOutput) SetSpotInstanceRequests(v []*SpotIn } // Contains the parameters for DescribeSpotPriceHistory. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotPriceHistoryRequest type DescribeSpotPriceHistoryInput struct { _ struct{} `type:"structure"` @@ -35206,8 +39653,7 @@ type DescribeSpotPriceHistoryInput struct { // than or less than comparison is not supported. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` - // Filters the results by the specified instance types. Note that T2 and HS1 - // instance types are not supported. + // Filters the results by the specified instance types. InstanceTypes []*string `locationName:"InstanceType" type:"list"` // The maximum number of results to return in a single call. Specify a value @@ -35291,7 +39737,6 @@ func (s *DescribeSpotPriceHistoryInput) SetStartTime(v time.Time) *DescribeSpotP } // Contains the output of DescribeSpotPriceHistory. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSpotPriceHistoryResult type DescribeSpotPriceHistoryOutput struct { _ struct{} `type:"structure"` @@ -35325,7 +39770,6 @@ func (s *DescribeSpotPriceHistoryOutput) SetSpotPriceHistory(v []*SpotPrice) *De return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeStaleSecurityGroupsRequest type DescribeStaleSecurityGroupsInput struct { _ struct{} `type:"structure"` @@ -35403,7 +39847,6 @@ func (s *DescribeStaleSecurityGroupsInput) SetVpcId(v string) *DescribeStaleSecu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeStaleSecurityGroupsResult type DescribeStaleSecurityGroupsOutput struct { _ struct{} `type:"structure"` @@ -35438,7 +39881,6 @@ func (s *DescribeStaleSecurityGroupsOutput) SetStaleSecurityGroupSet(v []*StaleS } // Contains the parameters for DescribeSubnets. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSubnetsRequest type DescribeSubnetsInput struct { _ struct{} `type:"structure"` @@ -35530,7 +39972,6 @@ func (s *DescribeSubnetsInput) SetSubnetIds(v []*string) *DescribeSubnetsInput { } // Contains the output of DescribeSubnets. 
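The reworked Spot Instance filter documentation above (state, launched-availability-zone, launch.* and network-interface.* keys) applies to DescribeSpotInstanceRequestsInput. A minimal sketch that lists open and active Spot Instance requests, assuming a configured session:

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Filter on the request state, one of the filters documented above
	// (open | active | closed | cancelled | failed).
	out, err := svc.DescribeSpotInstanceRequests(&ec2.DescribeSpotInstanceRequestsInput{
		Filters: []*ec2.Filter{{
			Name:   aws.String("state"),
			Values: aws.StringSlice([]string{"open", "active"}),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, req := range out.SpotInstanceRequests {
		fmt.Println(aws.StringValue(req.SpotInstanceRequestId), aws.StringValue(req.State))
	}
}
```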
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeSubnetsResult type DescribeSubnetsOutput struct { _ struct{} `type:"structure"` @@ -35555,7 +39996,6 @@ func (s *DescribeSubnetsOutput) SetSubnets(v []*Subnet) *DescribeSubnetsOutput { } // Contains the parameters for DescribeTags. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeTagsRequest type DescribeTagsInput struct { _ struct{} `type:"structure"` @@ -35572,9 +40012,10 @@ type DescribeTagsInput struct { // * resource-id - The resource ID. // // * resource-type - The resource type (customer-gateway | dhcp-options | - // image | instance | internet-gateway | network-acl | network-interface - // | reserved-instances | route-table | security-group | snapshot | spot-instances-request - // | subnet | volume | vpc | vpn-connection | vpn-gateway). + // elastic-ip | fpga-image | image | instance | internet-gateway | launch-template + // | natgateway | network-acl | network-interface | reserved-instances | + // route-table | security-group | snapshot | spot-instances-request | subnet + // | volume | vpc | vpc-peering-connection | vpn-connection | vpn-gateway). // // * value - The tag value. Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` @@ -35623,7 +40064,6 @@ func (s *DescribeTagsInput) SetNextToken(v string) *DescribeTagsInput { } // Contains the output of DescribeTags. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeTagsResult type DescribeTagsOutput struct { _ struct{} `type:"structure"` @@ -35658,7 +40098,6 @@ func (s *DescribeTagsOutput) SetTags(v []*TagDescription) *DescribeTagsOutput { } // Contains the parameters for DescribeVolumeAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeAttributeRequest type DescribeVolumeAttributeInput struct { _ struct{} `type:"structure"` @@ -35719,7 +40158,6 @@ func (s *DescribeVolumeAttributeInput) SetVolumeId(v string) *DescribeVolumeAttr } // Contains the output of DescribeVolumeAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeAttributeResult type DescribeVolumeAttributeOutput struct { _ struct{} `type:"structure"` @@ -35762,7 +40200,6 @@ func (s *DescribeVolumeAttributeOutput) SetVolumeId(v string) *DescribeVolumeAtt } // Contains the parameters for DescribeVolumeStatus. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeStatusRequest type DescribeVolumeStatusInput struct { _ struct{} `type:"structure"` @@ -35868,7 +40305,6 @@ func (s *DescribeVolumeStatusInput) SetVolumeIds(v []*string) *DescribeVolumeSta } // Contains the output of DescribeVolumeStatus. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumeStatusResult type DescribeVolumeStatusOutput struct { _ struct{} `type:"structure"` @@ -35903,7 +40339,6 @@ func (s *DescribeVolumeStatusOutput) SetVolumeStatuses(v []*VolumeStatusItem) *D } // Contains the parameters for DescribeVolumes. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesRequest type DescribeVolumesInput struct { _ struct{} `type:"structure"` @@ -35920,7 +40355,7 @@ type DescribeVolumesInput struct { // * attachment.delete-on-termination - Whether the volume is deleted on // instance termination. 
// - // * attachment.device - The device name that is exposed to the instance + // * attachment.device - The device name specified in the block device mapping // (for example, /dev/sda1). // // * attachment.instance-id - The ID of the instance the volume is attached @@ -36026,7 +40461,6 @@ func (s *DescribeVolumesInput) SetVolumeIds(v []*string) *DescribeVolumesInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesModificationsRequest type DescribeVolumesModificationsInput struct { _ struct{} `type:"structure"` @@ -36092,7 +40526,6 @@ func (s *DescribeVolumesModificationsInput) SetVolumeIds(v []*string) *DescribeV return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesModificationsResult type DescribeVolumesModificationsOutput struct { _ struct{} `type:"structure"` @@ -36126,7 +40559,6 @@ func (s *DescribeVolumesModificationsOutput) SetVolumesModifications(v []*Volume } // Contains the output of DescribeVolumes. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVolumesResult type DescribeVolumesOutput struct { _ struct{} `type:"structure"` @@ -36163,7 +40595,6 @@ func (s *DescribeVolumesOutput) SetVolumes(v []*Volume) *DescribeVolumesOutput { } // Contains the parameters for DescribeVpcAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcAttributeRequest type DescribeVpcAttributeInput struct { _ struct{} `type:"structure"` @@ -36229,7 +40660,6 @@ func (s *DescribeVpcAttributeInput) SetVpcId(v string) *DescribeVpcAttributeInpu } // Contains the output of DescribeVpcAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcAttributeResult type DescribeVpcAttributeOutput struct { _ struct{} `type:"structure"` @@ -36276,7 +40706,6 @@ func (s *DescribeVpcAttributeOutput) SetVpcId(v string) *DescribeVpcAttributeOut } // Contains the parameters for DescribeVpcClassicLinkDnsSupport. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkDnsSupportRequest type DescribeVpcClassicLinkDnsSupportInput struct { _ struct{} `type:"structure"` @@ -36338,7 +40767,6 @@ func (s *DescribeVpcClassicLinkDnsSupportInput) SetVpcIds(v []*string) *Describe } // Contains the output of DescribeVpcClassicLinkDnsSupport. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkDnsSupportResult type DescribeVpcClassicLinkDnsSupportOutput struct { _ struct{} `type:"structure"` @@ -36372,7 +40800,6 @@ func (s *DescribeVpcClassicLinkDnsSupportOutput) SetVpcs(v []*ClassicLinkDnsSupp } // Contains the parameters for DescribeVpcClassicLink. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkRequest type DescribeVpcClassicLinkInput struct { _ struct{} `type:"structure"` @@ -36437,7 +40864,6 @@ func (s *DescribeVpcClassicLinkInput) SetVpcIds(v []*string) *DescribeVpcClassic } // Contains the output of DescribeVpcClassicLink. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcClassicLinkResult type DescribeVpcClassicLinkOutput struct { _ struct{} `type:"structure"` @@ -36461,8 +40887,440 @@ func (s *DescribeVpcClassicLinkOutput) SetVpcs(v []*VpcClassicLink) *DescribeVpc return s } +type DescribeVpcEndpointConnectionNotificationsInput struct { + _ struct{} `type:"structure"` + + // The ID of the notification. 
+ ConnectionNotificationId *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * connection-notification-arn - The ARN of SNS topic for the notification. + // + // * connection-notification-id - The ID of the notification. + // + // * connection-notification-state - The state of the notification (Enabled + // | Disabled). + // + // * connection-notification-type - The type of notification (Topic). + // + // * service-id - The ID of the endpoint service. + // + // * vpc-endpoint-id - The ID of the VPC endpoint. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The maximum number of results to return in a single call. To retrieve the + // remaining results, make another request with the returned NextToken value. + MaxResults *int64 `type:"integer"` + + // The token to request the next page of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeVpcEndpointConnectionNotificationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointConnectionNotificationsInput) GoString() string { + return s.String() +} + +// SetConnectionNotificationId sets the ConnectionNotificationId field's value. +func (s *DescribeVpcEndpointConnectionNotificationsInput) SetConnectionNotificationId(v string) *DescribeVpcEndpointConnectionNotificationsInput { + s.ConnectionNotificationId = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeVpcEndpointConnectionNotificationsInput) SetDryRun(v bool) *DescribeVpcEndpointConnectionNotificationsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeVpcEndpointConnectionNotificationsInput) SetFilters(v []*Filter) *DescribeVpcEndpointConnectionNotificationsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeVpcEndpointConnectionNotificationsInput) SetMaxResults(v int64) *DescribeVpcEndpointConnectionNotificationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointConnectionNotificationsInput) SetNextToken(v string) *DescribeVpcEndpointConnectionNotificationsInput { + s.NextToken = &v + return s +} + +type DescribeVpcEndpointConnectionNotificationsOutput struct { + _ struct{} `type:"structure"` + + // One or more notifications. + ConnectionNotificationSet []*ConnectionNotification `locationName:"connectionNotificationSet" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeVpcEndpointConnectionNotificationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointConnectionNotificationsOutput) GoString() string { + return s.String() +} + +// SetConnectionNotificationSet sets the ConnectionNotificationSet field's value. 
+func (s *DescribeVpcEndpointConnectionNotificationsOutput) SetConnectionNotificationSet(v []*ConnectionNotification) *DescribeVpcEndpointConnectionNotificationsOutput { + s.ConnectionNotificationSet = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointConnectionNotificationsOutput) SetNextToken(v string) *DescribeVpcEndpointConnectionNotificationsOutput { + s.NextToken = &v + return s +} + +type DescribeVpcEndpointConnectionsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * service-id - The ID of the service. + // + // * vpc-endpoint-owner - The AWS account number of the owner of the endpoint. + // + // * vpc-endpoint-state - The state of the endpoint (pendingAcceptance | + // pending | available | deleting | deleted | rejected | failed). + // + // * vpc-endpoint-id - The ID of the endpoint. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The maximum number of results to return for the request in a single page. + // The remaining results of the initial request can be seen by sending another + // request with the returned NextToken value. This value can be between 5 and + // 1000; if MaxResults is given a value larger than 1000, only 1000 results + // are returned. + MaxResults *int64 `type:"integer"` + + // The token to retrieve the next page of results. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s DescribeVpcEndpointConnectionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointConnectionsInput) GoString() string { + return s.String() +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeVpcEndpointConnectionsInput) SetDryRun(v bool) *DescribeVpcEndpointConnectionsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeVpcEndpointConnectionsInput) SetFilters(v []*Filter) *DescribeVpcEndpointConnectionsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeVpcEndpointConnectionsInput) SetMaxResults(v int64) *DescribeVpcEndpointConnectionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointConnectionsInput) SetNextToken(v string) *DescribeVpcEndpointConnectionsInput { + s.NextToken = &v + return s +} + +type DescribeVpcEndpointConnectionsOutput struct { + _ struct{} `type:"structure"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + + // Information about one or more VPC endpoint connections. 
+ VpcEndpointConnections []*VpcEndpointConnection `locationName:"vpcEndpointConnectionSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DescribeVpcEndpointConnectionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointConnectionsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointConnectionsOutput) SetNextToken(v string) *DescribeVpcEndpointConnectionsOutput { + s.NextToken = &v + return s +} + +// SetVpcEndpointConnections sets the VpcEndpointConnections field's value. +func (s *DescribeVpcEndpointConnectionsOutput) SetVpcEndpointConnections(v []*VpcEndpointConnection) *DescribeVpcEndpointConnectionsOutput { + s.VpcEndpointConnections = v + return s +} + +type DescribeVpcEndpointServiceConfigurationsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * service-name - The name of the service. + // + // * service-id - The ID of the service. + // + // * service-state - The state of the service (Pending | Available | Deleting + // | Deleted | Failed). + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The maximum number of results to return for the request in a single page. + // The remaining results of the initial request can be seen by sending another + // request with the returned NextToken value. This value can be between 5 and + // 1000; if MaxResults is given a value larger than 1000, only 1000 results + // are returned. + MaxResults *int64 `type:"integer"` + + // The token to retrieve the next page of results. + NextToken *string `type:"string"` + + // The IDs of one or more services. + ServiceIds []*string `locationName:"ServiceId" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DescribeVpcEndpointServiceConfigurationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointServiceConfigurationsInput) GoString() string { + return s.String() +} + +// SetDryRun sets the DryRun field's value. +func (s *DescribeVpcEndpointServiceConfigurationsInput) SetDryRun(v bool) *DescribeVpcEndpointServiceConfigurationsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeVpcEndpointServiceConfigurationsInput) SetFilters(v []*Filter) *DescribeVpcEndpointServiceConfigurationsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeVpcEndpointServiceConfigurationsInput) SetMaxResults(v int64) *DescribeVpcEndpointServiceConfigurationsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointServiceConfigurationsInput) SetNextToken(v string) *DescribeVpcEndpointServiceConfigurationsInput { + s.NextToken = &v + return s +} + +// SetServiceIds sets the ServiceIds field's value. 
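The DescribeVpcEndpointConnections types introduced in this hunk follow the same filter pattern as the rest of the generated API. A minimal sketch that lists endpoint connections still pending acceptance, assuming a configured session and that the matching DescribeVpcEndpointConnections operation wrapper is generated alongside these types elsewhere in the file:

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// vpc-endpoint-state is one of the filters documented on
	// DescribeVpcEndpointConnectionsInput above.
	input := &ec2.DescribeVpcEndpointConnectionsInput{}
	input.SetFilters([]*ec2.Filter{{
		Name:   aws.String("vpc-endpoint-state"),
		Values: aws.StringSlice([]string{"pendingAcceptance"}),
	}})

	out, err := svc.DescribeVpcEndpointConnections(input)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```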
+func (s *DescribeVpcEndpointServiceConfigurationsInput) SetServiceIds(v []*string) *DescribeVpcEndpointServiceConfigurationsInput { + s.ServiceIds = v + return s +} + +type DescribeVpcEndpointServiceConfigurationsOutput struct { + _ struct{} `type:"structure"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + + // Information about one or more services. + ServiceConfigurations []*ServiceConfiguration `locationName:"serviceConfigurationSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s DescribeVpcEndpointServiceConfigurationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointServiceConfigurationsOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointServiceConfigurationsOutput) SetNextToken(v string) *DescribeVpcEndpointServiceConfigurationsOutput { + s.NextToken = &v + return s +} + +// SetServiceConfigurations sets the ServiceConfigurations field's value. +func (s *DescribeVpcEndpointServiceConfigurationsOutput) SetServiceConfigurations(v []*ServiceConfiguration) *DescribeVpcEndpointServiceConfigurationsOutput { + s.ServiceConfigurations = v + return s +} + +type DescribeVpcEndpointServicePermissionsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more filters. + // + // * principal - The ARN of the principal. + // + // * principal-type - The principal type (All | Service | OrganizationUnit + // | Account | User | Role). + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + + // The maximum number of results to return for the request in a single page. + // The remaining results of the initial request can be seen by sending another + // request with the returned NextToken value. This value can be between 5 and + // 1000; if MaxResults is given a value larger than 1000, only 1000 results + // are returned. + MaxResults *int64 `type:"integer"` + + // The token to retrieve the next page of results. + NextToken *string `type:"string"` + + // The ID of the service. + // + // ServiceId is a required field + ServiceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeVpcEndpointServicePermissionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointServicePermissionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeVpcEndpointServicePermissionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeVpcEndpointServicePermissionsInput"} + if s.ServiceId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. 
+func (s *DescribeVpcEndpointServicePermissionsInput) SetDryRun(v bool) *DescribeVpcEndpointServicePermissionsInput { + s.DryRun = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeVpcEndpointServicePermissionsInput) SetFilters(v []*Filter) *DescribeVpcEndpointServicePermissionsInput { + s.Filters = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *DescribeVpcEndpointServicePermissionsInput) SetMaxResults(v int64) *DescribeVpcEndpointServicePermissionsInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointServicePermissionsInput) SetNextToken(v string) *DescribeVpcEndpointServicePermissionsInput { + s.NextToken = &v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *DescribeVpcEndpointServicePermissionsInput) SetServiceId(v string) *DescribeVpcEndpointServicePermissionsInput { + s.ServiceId = &v + return s +} + +type DescribeVpcEndpointServicePermissionsOutput struct { + _ struct{} `type:"structure"` + + // Information about one or more allowed principals. + AllowedPrincipals []*AllowedPrincipal `locationName:"allowedPrincipals" locationNameList:"item" type:"list"` + + // The token to use to retrieve the next page of results. This value is null + // when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s DescribeVpcEndpointServicePermissionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeVpcEndpointServicePermissionsOutput) GoString() string { + return s.String() +} + +// SetAllowedPrincipals sets the AllowedPrincipals field's value. +func (s *DescribeVpcEndpointServicePermissionsOutput) SetAllowedPrincipals(v []*AllowedPrincipal) *DescribeVpcEndpointServicePermissionsOutput { + s.AllowedPrincipals = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeVpcEndpointServicePermissionsOutput) SetNextToken(v string) *DescribeVpcEndpointServicePermissionsOutput { + s.NextToken = &v + return s +} + // Contains the parameters for DescribeVpcEndpointServices. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServicesRequest type DescribeVpcEndpointServicesInput struct { _ struct{} `type:"structure"` @@ -36472,6 +41330,11 @@ type DescribeVpcEndpointServicesInput struct { // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` + // One or more filters. + // + // * service-name: The name of the service. + Filters []*Filter `locationName:"Filter" locationNameList:"Filter" type:"list"` + // The maximum number of items to return for this request. The request returns // a token that you can specify in a subsequent call to get the next set of // results. @@ -36482,6 +41345,9 @@ type DescribeVpcEndpointServicesInput struct { // The token for the next set of items to return. (You received this token from // a prior call.) NextToken *string `type:"string"` + + // One or more service names. + ServiceNames []*string `locationName:"ServiceName" locationNameList:"item" type:"list"` } // String returns the string representation @@ -36500,6 +41366,12 @@ func (s *DescribeVpcEndpointServicesInput) SetDryRun(v bool) *DescribeVpcEndpoin return s } +// SetFilters sets the Filters field's value. 
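ServiceId is the only required field on DescribeVpcEndpointServicePermissionsInput, and the Validate method defined above rejects a request without it. A minimal sketch, assuming a configured session, that the matching DescribeVpcEndpointServicePermissions operation wrapper is generated alongside these types, and a placeholder service ID:

```
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	input := &ec2.DescribeVpcEndpointServicePermissionsInput{}
	if err := input.Validate(); err != nil {
		// Reports the missing required ServiceId parameter until it is set.
		fmt.Println("validation error:", err)
	}

	input.SetServiceId("vpce-svc-0123456789abcdef0") // placeholder service ID
	out, err := svc.DescribeVpcEndpointServicePermissions(input)
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range out.AllowedPrincipals {
		fmt.Println(p)
	}
}
```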
+func (s *DescribeVpcEndpointServicesInput) SetFilters(v []*Filter) *DescribeVpcEndpointServicesInput { + s.Filters = v + return s +} + // SetMaxResults sets the MaxResults field's value. func (s *DescribeVpcEndpointServicesInput) SetMaxResults(v int64) *DescribeVpcEndpointServicesInput { s.MaxResults = &v @@ -36512,8 +41384,13 @@ func (s *DescribeVpcEndpointServicesInput) SetNextToken(v string) *DescribeVpcEn return s } +// SetServiceNames sets the ServiceNames field's value. +func (s *DescribeVpcEndpointServicesInput) SetServiceNames(v []*string) *DescribeVpcEndpointServicesInput { + s.ServiceNames = v + return s +} + // Contains the output of DescribeVpcEndpointServices. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointServicesResult type DescribeVpcEndpointServicesOutput struct { _ struct{} `type:"structure"` @@ -36521,7 +41398,10 @@ type DescribeVpcEndpointServicesOutput struct { // items to return, the string is empty. NextToken *string `locationName:"nextToken" type:"string"` - // A list of supported AWS services. + // Information about the service. + ServiceDetails []*ServiceDetail `locationName:"serviceDetailSet" locationNameList:"item" type:"list"` + + // A list of supported services. ServiceNames []*string `locationName:"serviceNameSet" locationNameList:"item" type:"list"` } @@ -36541,6 +41421,12 @@ func (s *DescribeVpcEndpointServicesOutput) SetNextToken(v string) *DescribeVpcE return s } +// SetServiceDetails sets the ServiceDetails field's value. +func (s *DescribeVpcEndpointServicesOutput) SetServiceDetails(v []*ServiceDetail) *DescribeVpcEndpointServicesOutput { + s.ServiceDetails = v + return s +} + // SetServiceNames sets the ServiceNames field's value. func (s *DescribeVpcEndpointServicesOutput) SetServiceNames(v []*string) *DescribeVpcEndpointServicesOutput { s.ServiceNames = v @@ -36548,7 +41434,6 @@ func (s *DescribeVpcEndpointServicesOutput) SetServiceNames(v []*string) *Descri } // Contains the parameters for DescribeVpcEndpoints. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointsRequest type DescribeVpcEndpointsInput struct { _ struct{} `type:"structure"` @@ -36560,7 +41445,7 @@ type DescribeVpcEndpointsInput struct { // One or more filters. // - // * service-name: The name of the AWS service. + // * service-name: The name of the service. // // * vpc-id: The ID of the VPC in which the endpoint resides. // @@ -36626,7 +41511,6 @@ func (s *DescribeVpcEndpointsInput) SetVpcEndpointIds(v []*string) *DescribeVpcE } // Contains the output of DescribeVpcEndpoints. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcEndpointsResult type DescribeVpcEndpointsOutput struct { _ struct{} `type:"structure"` @@ -36661,7 +41545,6 @@ func (s *DescribeVpcEndpointsOutput) SetVpcEndpoints(v []*VpcEndpoint) *Describe } // Contains the parameters for DescribeVpcPeeringConnections. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcPeeringConnectionsRequest type DescribeVpcPeeringConnectionsInput struct { _ struct{} `type:"structure"` @@ -36673,12 +41556,12 @@ type DescribeVpcPeeringConnectionsInput struct { // One or more filters. // - // * accepter-vpc-info.cidr-block - The IPv4 CIDR block of the peer VPC. + // * accepter-vpc-info.cidr-block - The IPv4 CIDR block of the accepter VPC. // // * accepter-vpc-info.owner-id - The AWS account ID of the owner of the - // peer VPC. + // accepter VPC. 
// - // * accepter-vpc-info.vpc-id - The ID of the peer VPC. + // * accepter-vpc-info.vpc-id - The ID of the accepter VPC. // // * expiration-time - The expiration date and time for the VPC peering connection. // @@ -36691,7 +41574,7 @@ type DescribeVpcPeeringConnectionsInput struct { // * requester-vpc-info.vpc-id - The ID of the requester VPC. // // * status-code - The status of the VPC peering connection (pending-acceptance - // | failed | expired | provisioning | active | deleted | rejected). + // | failed | expired | provisioning | active | deleting | deleted | rejected). // // * status-message - A message that provides more information about the // status of the VPC peering connection, if applicable. @@ -36750,7 +41633,6 @@ func (s *DescribeVpcPeeringConnectionsInput) SetVpcPeeringConnectionIds(v []*str } // Contains the output of DescribeVpcPeeringConnections. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcPeeringConnectionsResult type DescribeVpcPeeringConnectionsOutput struct { _ struct{} `type:"structure"` @@ -36775,7 +41657,6 @@ func (s *DescribeVpcPeeringConnectionsOutput) SetVpcPeeringConnections(v []*VpcP } // Contains the parameters for DescribeVpcs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcsRequest type DescribeVpcsInput struct { _ struct{} `type:"structure"` @@ -36787,10 +41668,19 @@ type DescribeVpcsInput struct { // One or more filters. // - // * cidr - The IPv4 CIDR block of the VPC. The CIDR block you specify must - // exactly match the VPC's CIDR block for information to be returned for - // the VPC. Must contain the slash followed by one or two digits (for example, - // /28). + // * cidr - The primary IPv4 CIDR block of the VPC. The CIDR block you specify + // must exactly match the VPC's CIDR block for information to be returned + // for the VPC. Must contain the slash followed by one or two digits (for + // example, /28). + // + // * cidr-block-association.cidr-block - An IPv4 CIDR block associated with + // the VPC. + // + // * cidr-block-association.association-id - The association ID for an IPv4 + // CIDR block associated with the VPC. + // + // * cidr-block-association.state - The state of an IPv4 CIDR block associated + // with the VPC. // // * dhcp-options-id - The ID of a set of DHCP options. // @@ -36861,7 +41751,6 @@ func (s *DescribeVpcsInput) SetVpcIds(v []*string) *DescribeVpcsInput { } // Contains the output of DescribeVpcs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpcsResult type DescribeVpcsOutput struct { _ struct{} `type:"structure"` @@ -36886,7 +41775,6 @@ func (s *DescribeVpcsOutput) SetVpcs(v []*Vpc) *DescribeVpcsOutput { } // Contains the parameters for DescribeVpnConnections. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnConnectionsRequest type DescribeVpnConnectionsInput struct { _ struct{} `type:"structure"` @@ -36977,7 +41865,6 @@ func (s *DescribeVpnConnectionsInput) SetVpnConnectionIds(v []*string) *Describe } // Contains the output of DescribeVpnConnections. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnConnectionsResult type DescribeVpnConnectionsOutput struct { _ struct{} `type:"structure"` @@ -37002,7 +41889,6 @@ func (s *DescribeVpnConnectionsOutput) SetVpnConnections(v []*VpnConnection) *De } // Contains the parameters for DescribeVpnGateways. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnGatewaysRequest type DescribeVpnGatewaysInput struct { _ struct{} `type:"structure"` @@ -37014,6 +41900,9 @@ type DescribeVpnGatewaysInput struct { // One or more filters. // + // * amazon-side-asn - The Autonomous System Number (ASN) for the Amazon + // side of the gateway. + // // * attachment.state - The current state of the attachment between the gateway // and the VPC (attaching | attached | detaching | detached). // @@ -37082,7 +41971,6 @@ func (s *DescribeVpnGatewaysInput) SetVpnGatewayIds(v []*string) *DescribeVpnGat } // Contains the output of DescribeVpnGateways. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeVpnGatewaysResult type DescribeVpnGatewaysOutput struct { _ struct{} `type:"structure"` @@ -37107,7 +41995,6 @@ func (s *DescribeVpnGatewaysOutput) SetVpnGateways(v []*VpnGateway) *DescribeVpn } // Contains the parameters for DetachClassicLinkVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachClassicLinkVpcRequest type DetachClassicLinkVpcInput struct { _ struct{} `type:"structure"` @@ -37173,7 +42060,6 @@ func (s *DetachClassicLinkVpcInput) SetVpcId(v string) *DetachClassicLinkVpcInpu } // Contains the output of DetachClassicLinkVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachClassicLinkVpcResult type DetachClassicLinkVpcOutput struct { _ struct{} `type:"structure"` @@ -37198,7 +42084,6 @@ func (s *DetachClassicLinkVpcOutput) SetReturn(v bool) *DetachClassicLinkVpcOutp } // Contains the parameters for DetachInternetGateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachInternetGatewayRequest type DetachInternetGatewayInput struct { _ struct{} `type:"structure"` @@ -37263,7 +42148,6 @@ func (s *DetachInternetGatewayInput) SetVpcId(v string) *DetachInternetGatewayIn return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachInternetGatewayOutput type DetachInternetGatewayOutput struct { _ struct{} `type:"structure"` } @@ -37279,7 +42163,6 @@ func (s DetachInternetGatewayOutput) GoString() string { } // Contains the parameters for DetachNetworkInterface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachNetworkInterfaceRequest type DetachNetworkInterfaceInput struct { _ struct{} `type:"structure"` @@ -37339,7 +42222,6 @@ func (s *DetachNetworkInterfaceInput) SetForce(v bool) *DetachNetworkInterfaceIn return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachNetworkInterfaceOutput type DetachNetworkInterfaceOutput struct { _ struct{} `type:"structure"` } @@ -37355,7 +42237,6 @@ func (s DetachNetworkInterfaceOutput) GoString() string { } // Contains the parameters for DetachVolume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVolumeRequest type DetachVolumeInput struct { _ struct{} `type:"structure"` @@ -37440,7 +42321,6 @@ func (s *DetachVolumeInput) SetVolumeId(v string) *DetachVolumeInput { } // Contains the parameters for DetachVpnGateway. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVpnGatewayRequest type DetachVpnGatewayInput struct { _ struct{} `type:"structure"` @@ -37505,7 +42385,6 @@ func (s *DetachVpnGatewayInput) SetVpnGatewayId(v string) *DetachVpnGatewayInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DetachVpnGatewayOutput type DetachVpnGatewayOutput struct { _ struct{} `type:"structure"` } @@ -37521,7 +42400,6 @@ func (s DetachVpnGatewayOutput) GoString() string { } // Describes a DHCP configuration option. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DhcpConfiguration type DhcpConfiguration struct { _ struct{} `type:"structure"` @@ -37555,7 +42433,6 @@ func (s *DhcpConfiguration) SetValues(v []*AttributeValue) *DhcpConfiguration { } // Describes a set of DHCP options. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DhcpOptions type DhcpOptions struct { _ struct{} `type:"structure"` @@ -37598,7 +42475,6 @@ func (s *DhcpOptions) SetTags(v []*Tag) *DhcpOptions { } // Contains the parameters for DisableVgwRoutePropagation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVgwRoutePropagationRequest type DisableVgwRoutePropagationInput struct { _ struct{} `type:"structure"` @@ -37651,7 +42527,6 @@ func (s *DisableVgwRoutePropagationInput) SetRouteTableId(v string) *DisableVgwR return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVgwRoutePropagationOutput type DisableVgwRoutePropagationOutput struct { _ struct{} `type:"structure"` } @@ -37667,7 +42542,6 @@ func (s DisableVgwRoutePropagationOutput) GoString() string { } // Contains the parameters for DisableVpcClassicLinkDnsSupport. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkDnsSupportRequest type DisableVpcClassicLinkDnsSupportInput struct { _ struct{} `type:"structure"` @@ -37692,7 +42566,6 @@ func (s *DisableVpcClassicLinkDnsSupportInput) SetVpcId(v string) *DisableVpcCla } // Contains the output of DisableVpcClassicLinkDnsSupport. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkDnsSupportResult type DisableVpcClassicLinkDnsSupportOutput struct { _ struct{} `type:"structure"` @@ -37717,7 +42590,6 @@ func (s *DisableVpcClassicLinkDnsSupportOutput) SetReturn(v bool) *DisableVpcCla } // Contains the parameters for DisableVpcClassicLink. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkRequest type DisableVpcClassicLinkInput struct { _ struct{} `type:"structure"` @@ -37769,7 +42641,6 @@ func (s *DisableVpcClassicLinkInput) SetVpcId(v string) *DisableVpcClassicLinkIn } // Contains the output of DisableVpcClassicLink. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisableVpcClassicLinkResult type DisableVpcClassicLinkOutput struct { _ struct{} `type:"structure"` @@ -37794,7 +42665,6 @@ func (s *DisableVpcClassicLinkOutput) SetReturn(v bool) *DisableVpcClassicLinkOu } // Contains the parameters for DisassociateAddress. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateAddressRequest type DisassociateAddressInput struct { _ struct{} `type:"structure"` @@ -37839,7 +42709,6 @@ func (s *DisassociateAddressInput) SetPublicIp(v string) *DisassociateAddressInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateAddressOutput type DisassociateAddressOutput struct { _ struct{} `type:"structure"` } @@ -37854,7 +42723,6 @@ func (s DisassociateAddressOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateIamInstanceProfileRequest type DisassociateIamInstanceProfileInput struct { _ struct{} `type:"structure"` @@ -37893,7 +42761,6 @@ func (s *DisassociateIamInstanceProfileInput) SetAssociationId(v string) *Disass return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateIamInstanceProfileResult type DisassociateIamInstanceProfileOutput struct { _ struct{} `type:"structure"` @@ -37918,7 +42785,6 @@ func (s *DisassociateIamInstanceProfileOutput) SetIamInstanceProfileAssociation( } // Contains the parameters for DisassociateRouteTable. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateRouteTableRequest type DisassociateRouteTableInput struct { _ struct{} `type:"structure"` @@ -37970,7 +42836,6 @@ func (s *DisassociateRouteTableInput) SetDryRun(v bool) *DisassociateRouteTableI return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateRouteTableOutput type DisassociateRouteTableOutput struct { _ struct{} `type:"structure"` } @@ -37985,7 +42850,6 @@ func (s DisassociateRouteTableOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateSubnetCidrBlockRequest type DisassociateSubnetCidrBlockInput struct { _ struct{} `type:"structure"` @@ -38024,7 +42888,6 @@ func (s *DisassociateSubnetCidrBlockInput) SetAssociationId(v string) *Disassoci return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateSubnetCidrBlockResult type DisassociateSubnetCidrBlockOutput struct { _ struct{} `type:"structure"` @@ -38057,7 +42920,6 @@ func (s *DisassociateSubnetCidrBlockOutput) SetSubnetId(v string) *DisassociateS return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateVpcCidrBlockRequest type DisassociateVpcCidrBlockInput struct { _ struct{} `type:"structure"` @@ -38096,10 +42958,12 @@ func (s *DisassociateVpcCidrBlockInput) SetAssociationId(v string) *Disassociate return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DisassociateVpcCidrBlockResult type DisassociateVpcCidrBlockOutput struct { _ struct{} `type:"structure"` + // Information about the IPv4 CIDR block association. + CidrBlockAssociation *VpcCidrBlockAssociation `locationName:"cidrBlockAssociation" type:"structure"` + // Information about the IPv6 CIDR block association. Ipv6CidrBlockAssociation *VpcIpv6CidrBlockAssociation `locationName:"ipv6CidrBlockAssociation" type:"structure"` @@ -38117,6 +42981,12 @@ func (s DisassociateVpcCidrBlockOutput) GoString() string { return s.String() } +// SetCidrBlockAssociation sets the CidrBlockAssociation field's value. 
+func (s *DisassociateVpcCidrBlockOutput) SetCidrBlockAssociation(v *VpcCidrBlockAssociation) *DisassociateVpcCidrBlockOutput { + s.CidrBlockAssociation = v + return s +} + // SetIpv6CidrBlockAssociation sets the Ipv6CidrBlockAssociation field's value. func (s *DisassociateVpcCidrBlockOutput) SetIpv6CidrBlockAssociation(v *VpcIpv6CidrBlockAssociation) *DisassociateVpcCidrBlockOutput { s.Ipv6CidrBlockAssociation = v @@ -38130,7 +43000,6 @@ func (s *DisassociateVpcCidrBlockOutput) SetVpcId(v string) *DisassociateVpcCidr } // Describes a disk image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DiskImage type DiskImage struct { _ struct{} `type:"structure"` @@ -38193,7 +43062,6 @@ func (s *DiskImage) SetVolume(v *VolumeDetail) *DiskImage { } // Describes a disk image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DiskImageDescription type DiskImageDescription struct { _ struct{} `type:"structure"` @@ -38258,7 +43126,6 @@ func (s *DiskImageDescription) SetSize(v int64) *DiskImageDescription { } // Describes a disk image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DiskImageDetail type DiskImageDetail struct { _ struct{} `type:"structure"` @@ -38333,7 +43200,6 @@ func (s *DiskImageDetail) SetImportManifestUrl(v string) *DiskImageDetail { } // Describes a disk image volume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DiskImageVolumeDescription type DiskImageVolumeDescription struct { _ struct{} `type:"structure"` @@ -38368,16 +43234,50 @@ func (s *DiskImageVolumeDescription) SetSize(v int64) *DiskImageVolumeDescriptio return s } +// Describes a DNS entry. +type DnsEntry struct { + _ struct{} `type:"structure"` + + // The DNS name. + DnsName *string `locationName:"dnsName" type:"string"` + + // The ID of the private hosted zone. + HostedZoneId *string `locationName:"hostedZoneId" type:"string"` +} + +// String returns the string representation +func (s DnsEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DnsEntry) GoString() string { + return s.String() +} + +// SetDnsName sets the DnsName field's value. +func (s *DnsEntry) SetDnsName(v string) *DnsEntry { + s.DnsName = &v + return s +} + +// SetHostedZoneId sets the HostedZoneId field's value. +func (s *DnsEntry) SetHostedZoneId(v string) *DnsEntry { + s.HostedZoneId = &v + return s +} + // Describes a block device for an EBS volume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EbsBlockDevice type EbsBlockDevice struct { _ struct{} `type:"structure"` // Indicates whether the EBS volume is deleted on instance termination. DeleteOnTermination *bool `locationName:"deleteOnTermination" type:"boolean"` - // Indicates whether the EBS volume is encrypted. Encrypted Amazon EBS volumes - // may only be attached to instances that support Amazon EBS encryption. + // Indicates whether the EBS volume is encrypted. Encrypted volumes can only + // be attached to instances that support Amazon EBS encryption. If you are creating + // a volume from a snapshot, you can't specify an encryption value. This is + // because only blank volumes can be encrypted on creation. Encrypted *bool `locationName:"encrypted" type:"boolean"` // The number of I/O operations per second (IOPS) that the volume supports. @@ -38395,6 +43295,14 @@ type EbsBlockDevice struct { // it is not used in requests to create gp2, st1, sc1, or standard volumes. 
Iops *int64 `locationName:"iops" type:"integer"` + // ID for a user-managed CMK under which the EBS volume is encrypted. + // + // Note: This parameter is only supported on BlockDeviceMapping objects called + // by RunInstances (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html), + // RequestSpotFleet (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RequestSpotFleet.html), + // and RequestSpotInstances (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RequestSpotInstances.html). + KmsKeyId *string `type:"string"` + // The ID of the snapshot. SnapshotId *string `locationName:"snapshotId" type:"string"` @@ -38444,6 +43352,12 @@ func (s *EbsBlockDevice) SetIops(v int64) *EbsBlockDevice { return s } +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *EbsBlockDevice) SetKmsKeyId(v string) *EbsBlockDevice { + s.KmsKeyId = &v + return s +} + // SetSnapshotId sets the SnapshotId field's value. func (s *EbsBlockDevice) SetSnapshotId(v string) *EbsBlockDevice { s.SnapshotId = &v @@ -38463,7 +43377,6 @@ func (s *EbsBlockDevice) SetVolumeType(v string) *EbsBlockDevice { } // Describes a parameter used to set up an EBS volume in a block device mapping. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EbsInstanceBlockDevice type EbsInstanceBlockDevice struct { _ struct{} `type:"structure"` @@ -38516,7 +43429,6 @@ func (s *EbsInstanceBlockDevice) SetVolumeId(v string) *EbsInstanceBlockDevice { // Describes information used to set up an EBS volume specified in a block device // mapping. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EbsInstanceBlockDeviceSpecification type EbsInstanceBlockDeviceSpecification struct { _ struct{} `type:"structure"` @@ -38550,7 +43462,6 @@ func (s *EbsInstanceBlockDeviceSpecification) SetVolumeId(v string) *EbsInstance } // Describes an egress-only Internet gateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EgressOnlyInternetGateway type EgressOnlyInternetGateway struct { _ struct{} `type:"structure"` @@ -38584,7 +43495,6 @@ func (s *EgressOnlyInternetGateway) SetEgressOnlyInternetGatewayId(v string) *Eg } // Describes the association between an instance and an Elastic GPU. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ElasticGpuAssociation type ElasticGpuAssociation struct { _ struct{} `type:"structure"` @@ -38636,7 +43546,6 @@ func (s *ElasticGpuAssociation) SetElasticGpuId(v string) *ElasticGpuAssociation } // Describes the status of an Elastic GPU. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ElasticGpuHealth type ElasticGpuHealth struct { _ struct{} `type:"structure"` @@ -38661,7 +43570,6 @@ func (s *ElasticGpuHealth) SetStatus(v string) *ElasticGpuHealth { } // A specification for an Elastic GPU. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ElasticGpuSpecification type ElasticGpuSpecification struct { _ struct{} `type:"structure"` @@ -38700,8 +43608,31 @@ func (s *ElasticGpuSpecification) SetType(v string) *ElasticGpuSpecification { return s } +// Describes an elastic GPU. +type ElasticGpuSpecificationResponse struct { + _ struct{} `type:"structure"` + + // The elastic GPU type. 
+ Type *string `locationName:"type" type:"string"` +} + +// String returns the string representation +func (s ElasticGpuSpecificationResponse) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ElasticGpuSpecificationResponse) GoString() string { + return s.String() +} + +// SetType sets the Type field's value. +func (s *ElasticGpuSpecificationResponse) SetType(v string) *ElasticGpuSpecificationResponse { + s.Type = &v + return s +} + // Describes an Elastic GPU. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ElasticGpus type ElasticGpus struct { _ struct{} `type:"structure"` @@ -38771,7 +43702,6 @@ func (s *ElasticGpus) SetInstanceId(v string) *ElasticGpus { } // Contains the parameters for EnableVgwRoutePropagation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVgwRoutePropagationRequest type EnableVgwRoutePropagationInput struct { _ struct{} `type:"structure"` @@ -38824,7 +43754,6 @@ func (s *EnableVgwRoutePropagationInput) SetRouteTableId(v string) *EnableVgwRou return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVgwRoutePropagationOutput type EnableVgwRoutePropagationOutput struct { _ struct{} `type:"structure"` } @@ -38840,7 +43769,6 @@ func (s EnableVgwRoutePropagationOutput) GoString() string { } // Contains the parameters for EnableVolumeIO. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVolumeIORequest type EnableVolumeIOInput struct { _ struct{} `type:"structure"` @@ -38891,7 +43819,6 @@ func (s *EnableVolumeIOInput) SetVolumeId(v string) *EnableVolumeIOInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVolumeIOOutput type EnableVolumeIOOutput struct { _ struct{} `type:"structure"` } @@ -38907,7 +43834,6 @@ func (s EnableVolumeIOOutput) GoString() string { } // Contains the parameters for EnableVpcClassicLinkDnsSupport. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkDnsSupportRequest type EnableVpcClassicLinkDnsSupportInput struct { _ struct{} `type:"structure"` @@ -38932,7 +43858,6 @@ func (s *EnableVpcClassicLinkDnsSupportInput) SetVpcId(v string) *EnableVpcClass } // Contains the output of EnableVpcClassicLinkDnsSupport. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkDnsSupportResult type EnableVpcClassicLinkDnsSupportOutput struct { _ struct{} `type:"structure"` @@ -38957,7 +43882,6 @@ func (s *EnableVpcClassicLinkDnsSupportOutput) SetReturn(v bool) *EnableVpcClass } // Contains the parameters for EnableVpcClassicLink. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkRequest type EnableVpcClassicLinkInput struct { _ struct{} `type:"structure"` @@ -39009,7 +43933,6 @@ func (s *EnableVpcClassicLinkInput) SetVpcId(v string) *EnableVpcClassicLinkInpu } // Contains the output of EnableVpcClassicLink. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EnableVpcClassicLinkResult type EnableVpcClassicLinkOutput struct { _ struct{} `type:"structure"` @@ -39033,8 +43956,7 @@ func (s *EnableVpcClassicLinkOutput) SetReturn(v bool) *EnableVpcClassicLinkOutp return s } -// Describes a Spot fleet event. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/EventInformation +// Describes a Spot Fleet event. 
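
// A minimal usage sketch for the Spot Fleet history types nearby
// (HistoryRecord / EventInformation). This is an illustrative aside, not part
// of the generated SDK surface shown above: credentials are assumed to come
// from the default session chain and the Spot Fleet request ID is a placeholder.
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Ask for the last 24 hours of events on the (placeholder) fleet request.
	out, err := svc.DescribeSpotFleetRequestHistory(&ec2.DescribeSpotFleetRequestHistoryInput{
		SpotFleetRequestId: aws.String("sfr-00000000-0000-0000-0000-000000000000"), // placeholder ID
		StartTime:          aws.Time(time.Now().Add(-24 * time.Hour)),
	})
	if err != nil {
		fmt.Println("DescribeSpotFleetRequestHistory:", err)
		return
	}
	for _, rec := range out.HistoryRecords {
		// EventType is one of: error, fleetRequestChange, instanceChange, Information.
		fmt.Println(aws.TimeValue(rec.Timestamp), aws.StringValue(rec.EventType))
	}
}
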
type EventInformation struct { _ struct{} `type:"structure"` @@ -39043,9 +43965,9 @@ type EventInformation struct { // The event. // - // The following are the error events. + // The following are the error events: // - // * iamFleetRoleInvalid - The Spot fleet did not have the required permissions + // * iamFleetRoleInvalid - The Spot Fleet did not have the required permissions // either to launch or terminate an instance. // // * launchSpecTemporarilyBlacklisted - The configuration is not valid and @@ -39056,43 +43978,52 @@ type EventInformation struct { // For more information, see the description of the event. // // * spotInstanceCountLimitExceeded - You've reached the limit on the number - // of Spot instances that you can launch. + // of Spot Instances that you can launch. // - // The following are the fleetRequestChange events. + // The following are the fleetRequestChange events: // - // * active - The Spot fleet has been validated and Amazon EC2 is attempting - // to maintain the target number of running Spot instances. + // * active - The Spot Fleet has been validated and Amazon EC2 is attempting + // to maintain the target number of running Spot Instances. // - // * cancelled - The Spot fleet is canceled and has no running Spot instances. - // The Spot fleet will be deleted two days after its instances were terminated. + // * cancelled - The Spot Fleet is canceled and has no running Spot Instances. + // The Spot Fleet will be deleted two days after its instances were terminated. // - // * cancelled_running - The Spot fleet is canceled and will not launch additional - // Spot instances, but its existing Spot instances continue to run until + // * cancelled_running - The Spot Fleet is canceled and will not launch additional + // Spot Instances, but its existing Spot Instances continue to run until // they are interrupted or terminated. // - // * cancelled_terminating - The Spot fleet is canceled and its Spot instances + // * cancelled_terminating - The Spot Fleet is canceled and its Spot Instances // are terminating. // - // * expired - The Spot fleet request has expired. A subsequent event indicates + // * expired - The Spot Fleet request has expired. A subsequent event indicates // that the instances were terminated, if the request was created with TerminateInstancesWithExpiration // set. // - // * modify_in_progress - A request to modify the Spot fleet request was + // * modify_in_progress - A request to modify the Spot Fleet request was // accepted and is in progress. // - // * modify_successful - The Spot fleet request was modified. + // * modify_successful - The Spot Fleet request was modified. // - // * price_update - The bid price for a launch configuration was adjusted - // because it was too high. This change is permanent. + // * price_update - The price for a launch configuration was adjusted because + // it was too high. This change is permanent. // - // * submitted - The Spot fleet request is being evaluated and Amazon EC2 - // is preparing to launch the target number of Spot instances. + // * submitted - The Spot Fleet request is being evaluated and Amazon EC2 + // is preparing to launch the target number of Spot Instances. // - // The following are the instanceChange events. + // The following are the instanceChange events: // - // * launched - A bid was fulfilled and a new instance was launched. + // * launched - A request was fulfilled and a new instance was launched. // // * terminated - An instance was terminated by the user. 
+ // + // The following are the Information events: + // + // * launchSpecUnusable - The price in a launch specification is not valid + // because it is below the Spot price or the Spot price is above the On-Demand + // price. + // + // * fleetProgressHalted - The price in every launch specification is not + // valid. A launch specification might become valid if the Spot price changes. EventSubType *string `locationName:"eventSubType" type:"string"` // The ID of the instance. This information is available only for instanceChange @@ -39129,7 +44060,6 @@ func (s *EventInformation) SetInstanceId(v string) *EventInformation { } // Describes an instance export task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ExportTask type ExportTask struct { _ struct{} `type:"structure"` @@ -39199,7 +44129,6 @@ func (s *ExportTask) SetStatusMessage(v string) *ExportTask { } // Describes the format and location for an instance export task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ExportToS3Task type ExportToS3Task struct { _ struct{} `type:"structure"` @@ -39253,7 +44182,6 @@ func (s *ExportToS3Task) SetS3Key(v string) *ExportToS3Task { } // Describes an instance export task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ExportToS3TaskSpecification type ExportToS3TaskSpecification struct { _ struct{} `type:"structure"` @@ -39310,7 +44238,6 @@ func (s *ExportToS3TaskSpecification) SetS3Prefix(v string) *ExportToS3TaskSpeci // A filter name and value pair that is used to return a more specific list // of results. Filters can be used to match a set of resources by various criteria, // such as tags, attributes, or IDs. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Filter type Filter struct { _ struct{} `type:"structure"` @@ -39343,8 +44270,65 @@ func (s *Filter) SetValues(v []*string) *Filter { return s } +// Describes a launch template. +type FleetLaunchTemplateSpecification struct { + _ struct{} `type:"structure"` + + // The ID of the launch template. You must specify either a template ID or a + // template name. + LaunchTemplateId *string `locationName:"launchTemplateId" type:"string"` + + // The name of the launch template. You must specify either a template name + // or a template ID. + LaunchTemplateName *string `locationName:"launchTemplateName" min:"3" type:"string"` + + // The version number. By default, the default version of the launch template + // is used. + Version *string `locationName:"version" type:"string"` +} + +// String returns the string representation +func (s FleetLaunchTemplateSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FleetLaunchTemplateSpecification) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *FleetLaunchTemplateSpecification) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FleetLaunchTemplateSpecification"} + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. 
+func (s *FleetLaunchTemplateSpecification) SetLaunchTemplateId(v string) *FleetLaunchTemplateSpecification { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *FleetLaunchTemplateSpecification) SetLaunchTemplateName(v string) *FleetLaunchTemplateSpecification { + s.LaunchTemplateName = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *FleetLaunchTemplateSpecification) SetVersion(v string) *FleetLaunchTemplateSpecification { + s.Version = &v + return s +} + // Describes a flow log. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/FlowLog type FlowLog struct { _ struct{} `type:"structure"` @@ -39446,7 +44430,6 @@ func (s *FlowLog) SetTrafficType(v string) *FlowLog { } // Describes an Amazon FPGA image (AFI). -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/FpgaImage type FpgaImage struct { _ struct{} `type:"structure"` @@ -39477,6 +44460,9 @@ type FpgaImage struct { // The product codes for the AFI. ProductCodes []*ProductCode `locationName:"productCodes" locationNameList:"item" type:"list"` + // Indicates whether the AFI is public. + Public *bool `locationName:"public" type:"boolean"` + // The version of the AWS Shell that was used to create the bitstream. ShellVersion *string `locationName:"shellVersion" type:"string"` @@ -39554,6 +44540,12 @@ func (s *FpgaImage) SetProductCodes(v []*ProductCode) *FpgaImage { return s } +// SetPublic sets the Public field's value. +func (s *FpgaImage) SetPublic(v bool) *FpgaImage { + s.Public = &v + return s +} + // SetShellVersion sets the ShellVersion field's value. func (s *FpgaImage) SetShellVersion(v string) *FpgaImage { s.ShellVersion = &v @@ -39578,9 +44570,68 @@ func (s *FpgaImage) SetUpdateTime(v time.Time) *FpgaImage { return s } +// Describes an Amazon FPGA image (AFI) attribute. +type FpgaImageAttribute struct { + _ struct{} `type:"structure"` + + // The description of the AFI. + Description *string `locationName:"description" type:"string"` + + // The ID of the AFI. + FpgaImageId *string `locationName:"fpgaImageId" type:"string"` + + // One or more load permissions. + LoadPermissions []*LoadPermission `locationName:"loadPermissions" locationNameList:"item" type:"list"` + + // The name of the AFI. + Name *string `locationName:"name" type:"string"` + + // One or more product codes. + ProductCodes []*ProductCode `locationName:"productCodes" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s FpgaImageAttribute) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FpgaImageAttribute) GoString() string { + return s.String() +} + +// SetDescription sets the Description field's value. +func (s *FpgaImageAttribute) SetDescription(v string) *FpgaImageAttribute { + s.Description = &v + return s +} + +// SetFpgaImageId sets the FpgaImageId field's value. +func (s *FpgaImageAttribute) SetFpgaImageId(v string) *FpgaImageAttribute { + s.FpgaImageId = &v + return s +} + +// SetLoadPermissions sets the LoadPermissions field's value. +func (s *FpgaImageAttribute) SetLoadPermissions(v []*LoadPermission) *FpgaImageAttribute { + s.LoadPermissions = v + return s +} + +// SetName sets the Name field's value. +func (s *FpgaImageAttribute) SetName(v string) *FpgaImageAttribute { + s.Name = &v + return s +} + +// SetProductCodes sets the ProductCodes field's value. 
+func (s *FpgaImageAttribute) SetProductCodes(v []*ProductCode) *FpgaImageAttribute { + s.ProductCodes = v + return s +} + // Describes the state of the bitstream generation process for an Amazon FPGA // image (AFI). -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/FpgaImageState type FpgaImageState struct { _ struct{} `type:"structure"` @@ -39622,7 +44673,6 @@ func (s *FpgaImageState) SetMessage(v string) *FpgaImageState { } // Contains the parameters for GetConsoleOutput. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleOutputRequest type GetConsoleOutputInput struct { _ struct{} `type:"structure"` @@ -39674,7 +44724,6 @@ func (s *GetConsoleOutputInput) SetInstanceId(v string) *GetConsoleOutputInput { } // Contains the output of GetConsoleOutput. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleOutputResult type GetConsoleOutputOutput struct { _ struct{} `type:"structure"` @@ -39718,7 +44767,6 @@ func (s *GetConsoleOutputOutput) SetTimestamp(v time.Time) *GetConsoleOutputOutp } // Contains the parameters for the request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleScreenshotRequest type GetConsoleScreenshotInput struct { _ struct{} `type:"structure"` @@ -39780,7 +44828,6 @@ func (s *GetConsoleScreenshotInput) SetWakeUp(v bool) *GetConsoleScreenshotInput } // Contains the output of the request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetConsoleScreenshotResult type GetConsoleScreenshotOutput struct { _ struct{} `type:"structure"` @@ -39813,7 +44860,6 @@ func (s *GetConsoleScreenshotOutput) SetInstanceId(v string) *GetConsoleScreensh return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetHostReservationPurchasePreviewRequest type GetHostReservationPurchasePreviewInput struct { _ struct{} `type:"structure"` @@ -39867,7 +44913,6 @@ func (s *GetHostReservationPurchasePreviewInput) SetOfferingId(v string) *GetHos return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetHostReservationPurchasePreviewResult type GetHostReservationPurchasePreviewOutput struct { _ struct{} `type:"structure"` @@ -39877,7 +44922,7 @@ type GetHostReservationPurchasePreviewOutput struct { // The purchase information of the Dedicated Host Reservation and the Dedicated // Hosts associated with it. - Purchase []*Purchase `locationName:"purchase" type:"list"` + Purchase []*Purchase `locationName:"purchase" locationNameList:"item" type:"list"` // The potential total hourly price of the reservation per hour. TotalHourlyPrice *string `locationName:"totalHourlyPrice" type:"string"` @@ -39920,8 +44965,80 @@ func (s *GetHostReservationPurchasePreviewOutput) SetTotalUpfrontPrice(v string) return s } +type GetLaunchTemplateDataInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the instance. 
+ // + // InstanceId is a required field + InstanceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s GetLaunchTemplateDataInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLaunchTemplateDataInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLaunchTemplateDataInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLaunchTemplateDataInput"} + if s.InstanceId == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *GetLaunchTemplateDataInput) SetDryRun(v bool) *GetLaunchTemplateDataInput { + s.DryRun = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *GetLaunchTemplateDataInput) SetInstanceId(v string) *GetLaunchTemplateDataInput { + s.InstanceId = &v + return s +} + +type GetLaunchTemplateDataOutput struct { + _ struct{} `type:"structure"` + + // The instance data. + LaunchTemplateData *ResponseLaunchTemplateData `locationName:"launchTemplateData" type:"structure"` +} + +// String returns the string representation +func (s GetLaunchTemplateDataOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLaunchTemplateDataOutput) GoString() string { + return s.String() +} + +// SetLaunchTemplateData sets the LaunchTemplateData field's value. +func (s *GetLaunchTemplateDataOutput) SetLaunchTemplateData(v *ResponseLaunchTemplateData) *GetLaunchTemplateDataOutput { + s.LaunchTemplateData = v + return s +} + // Contains the parameters for GetPasswordData. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetPasswordDataRequest type GetPasswordDataInput struct { _ struct{} `type:"structure"` @@ -39973,14 +45090,14 @@ func (s *GetPasswordDataInput) SetInstanceId(v string) *GetPasswordDataInput { } // Contains the output of GetPasswordData. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetPasswordDataResult type GetPasswordDataOutput struct { _ struct{} `type:"structure"` // The ID of the Windows instance. InstanceId *string `locationName:"instanceId" type:"string"` - // The password of the instance. + // The password of the instance. Returns an empty string if the password is + // not available. PasswordData *string `locationName:"passwordData" type:"string"` // The time the data was last updated. @@ -40016,7 +45133,6 @@ func (s *GetPasswordDataOutput) SetTimestamp(v time.Time) *GetPasswordDataOutput } // Contains the parameters for GetReservedInstanceExchangeQuote. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetReservedInstancesExchangeQuoteRequest type GetReservedInstancesExchangeQuoteInput struct { _ struct{} `type:"structure"` @@ -40031,7 +45147,7 @@ type GetReservedInstancesExchangeQuoteInput struct { // ReservedInstanceIds is a required field ReservedInstanceIds []*string `locationName:"ReservedInstanceId" locationNameList:"ReservedInstanceId" type:"list" required:"true"` - // The configuration requirements of the Convertible Reserved Instances to exchange + // The configuration of the target Convertible Reserved Instance to exchange // for your current Convertible Reserved Instances. 
TargetConfigurations []*TargetConfigurationRequest `locationName:"TargetConfiguration" locationNameList:"TargetConfigurationRequest" type:"list"` } @@ -40088,7 +45204,6 @@ func (s *GetReservedInstancesExchangeQuoteInput) SetTargetConfigurations(v []*Ta } // Contains the output of GetReservedInstancesExchangeQuote. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GetReservedInstancesExchangeQuoteResult type GetReservedInstancesExchangeQuoteOutput struct { _ struct{} `type:"structure"` @@ -40185,7 +45300,6 @@ func (s *GetReservedInstancesExchangeQuoteOutput) SetValidationFailureReason(v s } // Describes a security group. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/GroupIdentifier type GroupIdentifier struct { _ struct{} `type:"structure"` @@ -40218,8 +45332,7 @@ func (s *GroupIdentifier) SetGroupName(v string) *GroupIdentifier { return s } -// Describes an event in the history of the Spot fleet request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/HistoryRecord +// Describes an event in the history of the Spot Fleet request. type HistoryRecord struct { _ struct{} `type:"structure"` @@ -40230,12 +45343,14 @@ type HistoryRecord struct { // The event type. // - // * error - Indicates an error with the Spot fleet request. + // * error - An error with the Spot Fleet request. // - // * fleetRequestChange - Indicates a change in the status or configuration - // of the Spot fleet request. + // * fleetRequestChange - A change in the status or configuration of the + // Spot Fleet request. // - // * instanceChange - Indicates that an instance was launched or terminated. + // * instanceChange - An instance was launched or terminated. + // + // * Information - An informational event. // // EventType is a required field EventType *string `locationName:"eventType" type:"string" required:"true" enum:"EventType"` @@ -40275,7 +45390,6 @@ func (s *HistoryRecord) SetTimestamp(v time.Time) *HistoryRecord { } // Describes the properties of the Dedicated Host. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Host type Host struct { _ struct{} `type:"structure"` @@ -40375,7 +45489,6 @@ func (s *Host) SetState(v string) *Host { } // Describes an instance running on a Dedicated Host. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/HostInstance type HostInstance struct { _ struct{} `type:"structure"` @@ -40409,7 +45522,6 @@ func (s *HostInstance) SetInstanceType(v string) *HostInstance { } // Details about the Dedicated Host Reservation offering. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/HostOffering type HostOffering struct { _ struct{} `type:"structure"` @@ -40488,7 +45600,6 @@ func (s *HostOffering) SetUpfrontPrice(v string) *HostOffering { } // Describes properties of a Dedicated Host. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/HostProperties type HostProperties struct { _ struct{} `type:"structure"` @@ -40540,7 +45651,6 @@ func (s *HostProperties) SetTotalVCpus(v int64) *HostProperties { } // Details about the Dedicated Host Reservation and associated Dedicated Hosts. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/HostReservation type HostReservation struct { _ struct{} `type:"structure"` @@ -40678,7 +45788,6 @@ func (s *HostReservation) SetUpfrontPrice(v string) *HostReservation { } // Describes an IAM instance profile. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/IamInstanceProfile type IamInstanceProfile struct { _ struct{} `type:"structure"` @@ -40712,7 +45821,6 @@ func (s *IamInstanceProfile) SetId(v string) *IamInstanceProfile { } // Describes an association between an IAM instance profile and an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/IamInstanceProfileAssociation type IamInstanceProfileAssociation struct { _ struct{} `type:"structure"` @@ -40773,7 +45881,6 @@ func (s *IamInstanceProfileAssociation) SetTimestamp(v time.Time) *IamInstancePr } // Describes an IAM instance profile. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/IamInstanceProfileSpecification type IamInstanceProfileSpecification struct { _ struct{} `type:"structure"` @@ -40807,7 +45914,6 @@ func (s *IamInstanceProfileSpecification) SetName(v string) *IamInstanceProfileS } // Describes the ICMP type and code. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/IcmpTypeCode type IcmpTypeCode struct { _ struct{} `type:"structure"` @@ -40841,7 +45947,6 @@ func (s *IcmpTypeCode) SetType(v int64) *IcmpTypeCode { } // Describes the ID format for a resource. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/IdFormat type IdFormat struct { _ struct{} `type:"structure"` @@ -40886,7 +45991,6 @@ func (s *IdFormat) SetUseLongIds(v bool) *IdFormat { } // Describes an image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Image type Image struct { _ struct{} `type:"structure"` @@ -40946,7 +46050,7 @@ type Image struct { // images. RamdiskId *string `locationName:"ramdiskId" type:"string"` - // The device name of the root device (for example, /dev/sda1 or /dev/xvda). + // The device name of the root device volume (for example, /dev/sda1). RootDeviceName *string `locationName:"rootDeviceName" type:"string"` // The type of root device used by the AMI. The AMI can use an EBS volume or @@ -41126,7 +46230,6 @@ func (s *Image) SetVirtualizationType(v string) *Image { } // Describes the disk container object for an import image task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImageDiskContainer type ImageDiskContainer struct { _ struct{} `type:"structure"` @@ -41199,7 +46302,6 @@ func (s *ImageDiskContainer) SetUserBucket(v *UserBucket) *ImageDiskContainer { } // Contains the parameters for ImportImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportImageRequest type ImportImageInput struct { _ struct{} `type:"structure"` @@ -41321,7 +46423,6 @@ func (s *ImportImageInput) SetRoleName(v string) *ImportImageInput { } // Contains the output for ImportImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportImageResult type ImportImageOutput struct { _ struct{} `type:"structure"` @@ -41436,7 +46537,6 @@ func (s *ImportImageOutput) SetStatusMessage(v string) *ImportImageOutput { } // Describes an import image task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportImageTask type ImportImageTask struct { _ struct{} `type:"structure"` @@ -41555,7 +46655,6 @@ func (s *ImportImageTask) SetStatusMessage(v string) *ImportImageTask { } // Contains the parameters for ImportInstance. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstanceRequest type ImportInstanceInput struct { _ struct{} `type:"structure"` @@ -41644,7 +46743,6 @@ func (s *ImportInstanceInput) SetPlatform(v string) *ImportInstanceInput { } // Describes the launch specification for VM import. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstanceLaunchSpecification type ImportInstanceLaunchSpecification struct { _ struct{} `type:"structure"` @@ -41764,7 +46862,6 @@ func (s *ImportInstanceLaunchSpecification) SetUserData(v *UserData) *ImportInst } // Contains the output for ImportInstance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstanceResult type ImportInstanceOutput struct { _ struct{} `type:"structure"` @@ -41789,7 +46886,6 @@ func (s *ImportInstanceOutput) SetConversionTask(v *ConversionTask) *ImportInsta } // Describes an import instance task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstanceTaskDetails type ImportInstanceTaskDetails struct { _ struct{} `type:"structure"` @@ -41843,7 +46939,6 @@ func (s *ImportInstanceTaskDetails) SetVolumes(v []*ImportInstanceVolumeDetailIt } // Describes an import volume task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportInstanceVolumeDetailItem type ImportInstanceVolumeDetailItem struct { _ struct{} `type:"structure"` @@ -41932,7 +47027,6 @@ func (s *ImportInstanceVolumeDetailItem) SetVolume(v *DiskImageVolumeDescription } // Contains the parameters for ImportKeyPair. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportKeyPairRequest type ImportKeyPairInput struct { _ struct{} `type:"structure"` @@ -42001,7 +47095,6 @@ func (s *ImportKeyPairInput) SetPublicKeyMaterial(v []byte) *ImportKeyPairInput } // Contains the output of ImportKeyPair. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportKeyPairResult type ImportKeyPairOutput struct { _ struct{} `type:"structure"` @@ -42035,7 +47128,6 @@ func (s *ImportKeyPairOutput) SetKeyName(v string) *ImportKeyPairOutput { } // Contains the parameters for ImportSnapshot. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportSnapshotRequest type ImportSnapshotInput struct { _ struct{} `type:"structure"` @@ -42108,7 +47200,6 @@ func (s *ImportSnapshotInput) SetRoleName(v string) *ImportSnapshotInput { } // Contains the output for ImportSnapshot. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportSnapshotResult type ImportSnapshotOutput struct { _ struct{} `type:"structure"` @@ -42151,7 +47242,6 @@ func (s *ImportSnapshotOutput) SetSnapshotTaskDetail(v *SnapshotTaskDetail) *Imp } // Describes an import snapshot task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportSnapshotTask type ImportSnapshotTask struct { _ struct{} `type:"structure"` @@ -42194,7 +47284,6 @@ func (s *ImportSnapshotTask) SetSnapshotTaskDetail(v *SnapshotTaskDetail) *Impor } // Contains the parameters for ImportVolume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportVolumeRequest type ImportVolumeInput struct { _ struct{} `type:"structure"` @@ -42293,7 +47382,6 @@ func (s *ImportVolumeInput) SetVolume(v *VolumeDetail) *ImportVolumeInput { } // Contains the output for ImportVolume. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportVolumeResult type ImportVolumeOutput struct { _ struct{} `type:"structure"` @@ -42318,7 +47406,6 @@ func (s *ImportVolumeOutput) SetConversionTask(v *ConversionTask) *ImportVolumeO } // Describes an import volume task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ImportVolumeTaskDetails type ImportVolumeTaskDetails struct { _ struct{} `type:"structure"` @@ -42387,7 +47474,6 @@ func (s *ImportVolumeTaskDetails) SetVolume(v *DiskImageVolumeDescription) *Impo } // Describes an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Instance type Instance struct { _ struct{} `type:"structure"` @@ -42404,7 +47490,7 @@ type Instance struct { // The idempotency token you provided when you launched the instance, if applicable. ClientToken *string `locationName:"clientToken" type:"string"` - // Indicates whether the instance is optimized for EBS I/O. This optimization + // Indicates whether the instance is optimized for Amazon EBS I/O. This optimization // provides dedicated throughput to Amazon EBS and an optimized configuration // stack to provide optimal I/O performance. This optimization isn't available // with all instance types. Additional usage charges apply when using an EBS @@ -42429,7 +47515,7 @@ type Instance struct { // The ID of the instance. InstanceId *string `locationName:"instanceId" type:"string"` - // Indicates whether this is a Spot instance or a Scheduled Instance. + // Indicates whether this is a Spot Instance or a Scheduled Instance. InstanceLifecycle *string `locationName:"instanceLifecycle" type:"string" enum:"InstanceLifecycleType"` // The instance type. @@ -42461,7 +47547,7 @@ type Instance struct { // DNS hostname can only be used inside the Amazon EC2 network. This name is // not available until the instance enters the running state. // - // [EC2-VPC] The Amazon-provided DNS server will resolve Amazon-provided private + // [EC2-VPC] The Amazon-provided DNS server resolves Amazon-provided private // DNS hostnames if you've enabled DNS resolution and DNS hostnames in your // VPC. If you are not using the Amazon-provided DNS server in your VPC, your // custom domain name servers must resolve the hostname as appropriate. @@ -42484,7 +47570,7 @@ type Instance struct { // The RAM disk associated with this instance, if applicable. RamdiskId *string `locationName:"ramdiskId" type:"string"` - // The root device name (for example, /dev/sda1 or /dev/xvda). + // The device name of the root device volume (for example, /dev/sda1). RootDeviceName *string `locationName:"rootDeviceName" type:"string"` // The root device type used by the AMI. The AMI can use an EBS volume or an @@ -42496,13 +47582,13 @@ type Instance struct { // Specifies whether to enable an instance launched in a VPC to perform NAT. // This controls whether source/destination checking is enabled on the instance. - // A value of true means checking is enabled, and false means checking is disabled. - // The value must be false for the instance to perform NAT. For more information, - // see NAT Instances (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html) + // A value of true means that checking is enabled, and false means that checking + // is disabled. The value must be false for the instance to perform NAT. 
For + // more information, see NAT Instances (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html) // in the Amazon Virtual Private Cloud User Guide. SourceDestCheck *bool `locationName:"sourceDestCheck" type:"boolean"` - // If the request is a Spot instance request, the ID of the request. + // If the request is a Spot Instance request, the ID of the request. SpotInstanceRequestId *string `locationName:"spotInstanceRequestId" type:"string"` // Specifies whether enhanced networking with the Intel 82599 Virtual Function @@ -42776,11 +47862,10 @@ func (s *Instance) SetVpcId(v string) *Instance { } // Describes a block device mapping. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceBlockDeviceMapping type InstanceBlockDeviceMapping struct { _ struct{} `type:"structure"` - // The device name exposed to the instance (for example, /dev/sdh or xvdh). + // The device name (for example, /dev/sdh or xvdh). DeviceName *string `locationName:"deviceName" type:"string"` // Parameters used to automatically set up EBS volumes when the instance is @@ -42811,11 +47896,10 @@ func (s *InstanceBlockDeviceMapping) SetEbs(v *EbsInstanceBlockDevice) *Instance } // Describes a block device mapping entry. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceBlockDeviceMappingSpecification type InstanceBlockDeviceMappingSpecification struct { _ struct{} `type:"structure"` - // The device name exposed to the instance (for example, /dev/sdh or xvdh). + // The device name (for example, /dev/sdh or xvdh). DeviceName *string `locationName:"deviceName" type:"string"` // Parameters used to automatically set up EBS volumes when the instance is @@ -42864,7 +47948,6 @@ func (s *InstanceBlockDeviceMappingSpecification) SetVirtualName(v string) *Inst } // Information about the instance type that the Dedicated Host supports. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceCapacity type InstanceCapacity struct { _ struct{} `type:"structure"` @@ -42907,7 +47990,6 @@ func (s *InstanceCapacity) SetTotalCapacity(v int64) *InstanceCapacity { } // Describes a Reserved Instance listing state. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceCount type InstanceCount struct { _ struct{} `type:"structure"` @@ -42940,8 +48022,75 @@ func (s *InstanceCount) SetState(v string) *InstanceCount { return s } +// Describes the credit option for CPU usage of a T2 instance. +type InstanceCreditSpecification struct { + _ struct{} `type:"structure"` + + // The credit option for CPU usage of the instance. Valid values are standard + // and unlimited. + CpuCredits *string `locationName:"cpuCredits" type:"string"` + + // The ID of the instance. + InstanceId *string `locationName:"instanceId" type:"string"` +} + +// String returns the string representation +func (s InstanceCreditSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceCreditSpecification) GoString() string { + return s.String() +} + +// SetCpuCredits sets the CpuCredits field's value. +func (s *InstanceCreditSpecification) SetCpuCredits(v string) *InstanceCreditSpecification { + s.CpuCredits = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstanceCreditSpecification) SetInstanceId(v string) *InstanceCreditSpecification { + s.InstanceId = &v + return s +} + +// Describes the credit option for CPU usage of a T2 instance. 
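
// A minimal, self-contained sketch of filling in the credit-specification
// request type that follows, using only its generated setters. The instance
// ID is a placeholder; per the field documentation, "standard" and "unlimited"
// are the accepted CPU-credit values.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	spec := &ec2.InstanceCreditSpecificationRequest{}
	spec.SetInstanceId("i-0123456789abcdef0") // placeholder instance ID
	spec.SetCpuCredits("unlimited")           // or "standard"

	// String() pretty-prints the struct via awsutil.Prettify.
	fmt.Println(spec)
}
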
+type InstanceCreditSpecificationRequest struct { + _ struct{} `type:"structure"` + + // The credit option for CPU usage of the instance. Valid values are standard + // and unlimited. + CpuCredits *string `type:"string"` + + // The ID of the instance. + InstanceId *string `type:"string"` +} + +// String returns the string representation +func (s InstanceCreditSpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceCreditSpecificationRequest) GoString() string { + return s.String() +} + +// SetCpuCredits sets the CpuCredits field's value. +func (s *InstanceCreditSpecificationRequest) SetCpuCredits(v string) *InstanceCreditSpecificationRequest { + s.CpuCredits = &v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *InstanceCreditSpecificationRequest) SetInstanceId(v string) *InstanceCreditSpecificationRequest { + s.InstanceId = &v + return s +} + // Describes an instance to export. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceExportDetails type InstanceExportDetails struct { _ struct{} `type:"structure"` @@ -42975,7 +48124,6 @@ func (s *InstanceExportDetails) SetTargetEnvironment(v string) *InstanceExportDe } // Describes an IPv6 address. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceIpv6Address type InstanceIpv6Address struct { _ struct{} `type:"structure"` @@ -42999,8 +48147,64 @@ func (s *InstanceIpv6Address) SetIpv6Address(v string) *InstanceIpv6Address { return s } +// Describes an IPv6 address. +type InstanceIpv6AddressRequest struct { + _ struct{} `type:"structure"` + + // The IPv6 address. + Ipv6Address *string `type:"string"` +} + +// String returns the string representation +func (s InstanceIpv6AddressRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceIpv6AddressRequest) GoString() string { + return s.String() +} + +// SetIpv6Address sets the Ipv6Address field's value. +func (s *InstanceIpv6AddressRequest) SetIpv6Address(v string) *InstanceIpv6AddressRequest { + s.Ipv6Address = &v + return s +} + +// Describes the market (purchasing) option for the instances. +type InstanceMarketOptionsRequest struct { + _ struct{} `type:"structure"` + + // The market type. + MarketType *string `type:"string" enum:"MarketType"` + + // The options for Spot Instances. + SpotOptions *SpotMarketOptions `type:"structure"` +} + +// String returns the string representation +func (s InstanceMarketOptionsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InstanceMarketOptionsRequest) GoString() string { + return s.String() +} + +// SetMarketType sets the MarketType field's value. +func (s *InstanceMarketOptionsRequest) SetMarketType(v string) *InstanceMarketOptionsRequest { + s.MarketType = &v + return s +} + +// SetSpotOptions sets the SpotOptions field's value. +func (s *InstanceMarketOptionsRequest) SetSpotOptions(v *SpotMarketOptions) *InstanceMarketOptionsRequest { + s.SpotOptions = v + return s +} + // Describes the monitoring of an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceMonitoring type InstanceMonitoring struct { _ struct{} `type:"structure"` @@ -43034,7 +48238,6 @@ func (s *InstanceMonitoring) SetMonitoring(v *Monitoring) *InstanceMonitoring { } // Describes a network interface. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceNetworkInterface type InstanceNetworkInterface struct { _ struct{} `type:"structure"` @@ -43186,7 +48389,6 @@ func (s *InstanceNetworkInterface) SetVpcId(v string) *InstanceNetworkInterface } // Describes association information for an Elastic IP address (IPv4). -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceNetworkInterfaceAssociation type InstanceNetworkInterfaceAssociation struct { _ struct{} `type:"structure"` @@ -43229,7 +48431,6 @@ func (s *InstanceNetworkInterfaceAssociation) SetPublicIp(v string) *InstanceNet } // Describes a network interface attachment. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceNetworkInterfaceAttachment type InstanceNetworkInterfaceAttachment struct { _ struct{} `type:"structure"` @@ -43290,7 +48491,6 @@ func (s *InstanceNetworkInterfaceAttachment) SetStatus(v string) *InstanceNetwor } // Describes a network interface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceNetworkInterfaceSpecification type InstanceNetworkInterfaceSpecification struct { _ struct{} `type:"structure"` @@ -43460,7 +48660,6 @@ func (s *InstanceNetworkInterfaceSpecification) SetSubnetId(v string) *InstanceN } // Describes a private IPv4 address. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstancePrivateIpAddress type InstancePrivateIpAddress struct { _ struct{} `type:"structure"` @@ -43513,7 +48712,6 @@ func (s *InstancePrivateIpAddress) SetPrivateIpAddress(v string) *InstancePrivat } // Describes the current state of an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceState type InstanceState struct { _ struct{} `type:"structure"` @@ -43560,7 +48758,6 @@ func (s *InstanceState) SetName(v string) *InstanceState { } // Describes an instance state change. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceStateChange type InstanceStateChange struct { _ struct{} `type:"structure"` @@ -43603,7 +48800,6 @@ func (s *InstanceStateChange) SetPreviousState(v *InstanceState) *InstanceStateC } // Describes the status of an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceStatus type InstanceStatus struct { _ struct{} `type:"structure"` @@ -43677,7 +48873,6 @@ func (s *InstanceStatus) SetSystemStatus(v *InstanceStatusSummary) *InstanceStat } // Describes the instance status. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceStatusDetails type InstanceStatusDetails struct { _ struct{} `type:"structure"` @@ -43721,7 +48916,6 @@ func (s *InstanceStatusDetails) SetStatus(v string) *InstanceStatusDetails { } // Describes a scheduled event for an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceStatusEvent type InstanceStatusEvent struct { _ struct{} `type:"structure"` @@ -43777,7 +48971,6 @@ func (s *InstanceStatusEvent) SetNotBefore(v time.Time) *InstanceStatusEvent { } // Describes the status of an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InstanceStatusSummary type InstanceStatusSummary struct { _ struct{} `type:"structure"` @@ -43811,7 +49004,6 @@ func (s *InstanceStatusSummary) SetStatus(v string) *InstanceStatusSummary { } // Describes an Internet gateway. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InternetGateway type InternetGateway struct { _ struct{} `type:"structure"` @@ -43855,11 +49047,11 @@ func (s *InternetGateway) SetTags(v []*Tag) *InternetGateway { // Describes the attachment of a VPC to an Internet gateway or an egress-only // Internet gateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/InternetGatewayAttachment type InternetGatewayAttachment struct { _ struct{} `type:"structure"` - // The current state of the attachment. + // The current state of the attachment. For an Internet gateway, the state is + // available when attached to a VPC; otherwise, this value is not returned. State *string `locationName:"state" type:"string" enum:"AttachmentStatus"` // The ID of the VPC. @@ -43888,13 +49080,13 @@ func (s *InternetGatewayAttachment) SetVpcId(v string) *InternetGatewayAttachmen return s } -// Describes a security group rule. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/IpPermission +// Describes a set of permissions for a security group rule. type IpPermission struct { _ struct{} `type:"structure"` // The start of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 - // type number. A value of -1 indicates all ICMP/ICMPv6 types. + // type number. A value of -1 indicates all ICMP/ICMPv6 types. If you specify + // all ICMP/ICMPv6 types, you must specify all codes. FromPort *int64 `locationName:"fromPort" type:"integer"` // The IP protocol name (tcp, udp, icmp) or number (see Protocol Numbers (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)). @@ -43913,14 +49105,16 @@ type IpPermission struct { // [EC2-VPC only] One or more IPv6 ranges. Ipv6Ranges []*Ipv6Range `locationName:"ipv6Ranges" locationNameList:"item" type:"list"` - // (Valid for AuthorizeSecurityGroupEgress, RevokeSecurityGroupEgress and DescribeSecurityGroups - // only) One or more prefix list IDs for an AWS service. In an AuthorizeSecurityGroupEgress - // request, this is the AWS service that you want to access through a VPC endpoint - // from instances associated with the security group. + // (EC2-VPC only; valid for AuthorizeSecurityGroupEgress, RevokeSecurityGroupEgress + // and DescribeSecurityGroups only) One or more prefix list IDs for an AWS service. + // In an AuthorizeSecurityGroupEgress request, this is the AWS service that + // you want to access through a VPC endpoint from instances associated with + // the security group. PrefixListIds []*PrefixListId `locationName:"prefixListIds" locationNameList:"item" type:"list"` // The end of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 code. // A value of -1 indicates all ICMP/ICMPv6 codes for the specified ICMP type. + // If you specify all ICMP/ICMPv6 types, you must specify all codes. ToPort *int64 `locationName:"toPort" type:"integer"` // One or more security group and AWS account ID pairs. @@ -43980,13 +49174,19 @@ func (s *IpPermission) SetUserIdGroupPairs(v []*UserIdGroupPair) *IpPermission { } // Describes an IPv4 range. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/IpRange type IpRange struct { _ struct{} `type:"structure"` // The IPv4 CIDR range. You can either specify a CIDR range or a source security - // group, not both. To specify a single IPv4 address, use the /32 prefix. + // group, not both. To specify a single IPv4 address, use the /32 prefix length. 
CidrIp *string `locationName:"cidrIp" type:"string"` + + // A description for the security group rule that references this IPv4 address + // range. + // + // Constraints: Up to 255 characters in length. Allowed characters are a-z, + // A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$* + Description *string `locationName:"description" type:"string"` } // String returns the string representation @@ -44005,8 +49205,13 @@ func (s *IpRange) SetCidrIp(v string) *IpRange { return s } +// SetDescription sets the Description field's value. +func (s *IpRange) SetDescription(v string) *IpRange { + s.Description = &v + return s +} + // Describes an IPv6 CIDR block. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Ipv6CidrBlock type Ipv6CidrBlock struct { _ struct{} `type:"structure"` @@ -44031,13 +49236,19 @@ func (s *Ipv6CidrBlock) SetIpv6CidrBlock(v string) *Ipv6CidrBlock { } // [EC2-VPC only] Describes an IPv6 range. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Ipv6Range type Ipv6Range struct { _ struct{} `type:"structure"` // The IPv6 CIDR range. You can either specify a CIDR range or a source security - // group, not both. To specify a single IPv6 address, use the /128 prefix. + // group, not both. To specify a single IPv6 address, use the /128 prefix length. CidrIpv6 *string `locationName:"cidrIpv6" type:"string"` + + // A description for the security group rule that references this IPv6 address + // range. + // + // Constraints: Up to 255 characters in length. Allowed characters are a-z, + // A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$* + Description *string `locationName:"description" type:"string"` } // String returns the string representation @@ -44056,8 +49267,13 @@ func (s *Ipv6Range) SetCidrIpv6(v string) *Ipv6Range { return s } +// SetDescription sets the Description field's value. +func (s *Ipv6Range) SetDescription(v string) *Ipv6Range { + s.Description = &v + return s +} + // Describes a key pair. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/KeyPairInfo type KeyPairInfo struct { _ struct{} `type:"structure"` @@ -44094,7 +49310,6 @@ func (s *KeyPairInfo) SetKeyName(v string) *KeyPairInfo { } // Describes a launch permission. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/LaunchPermission type LaunchPermission struct { _ struct{} `type:"structure"` @@ -44128,7 +49343,6 @@ func (s *LaunchPermission) SetUserId(v string) *LaunchPermission { } // Describes a launch permission modification. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/LaunchPermissionModifications type LaunchPermissionModifications struct { _ struct{} `type:"structure"` @@ -44163,7 +49377,6 @@ func (s *LaunchPermissionModifications) SetRemove(v []*LaunchPermission) *Launch } // Describes the launch specification for an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/LaunchSpecification type LaunchSpecification struct { _ struct{} `type:"structure"` @@ -44171,9 +49384,6 @@ type LaunchSpecification struct { AddressingType *string `locationName:"addressingType" type:"string"` // One or more block device mapping entries. - // - // Although you can specify encrypted EBS volumes in this block device mapping - // for your Spot Instances, these volumes are not encrypted. BlockDeviceMappings []*BlockDeviceMapping `locationName:"blockDeviceMapping" locationNameList:"item" type:"list"` // Indicates whether the instance is optimized for EBS I/O. 
This optimization @@ -44327,8 +49537,1697 @@ func (s *LaunchSpecification) SetUserData(v string) *LaunchSpecification { return s } +// Describes a launch template. +type LaunchTemplate struct { + _ struct{} `type:"structure"` + + // The time launch template was created. + CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + + // The principal that created the launch template. + CreatedBy *string `locationName:"createdBy" type:"string"` + + // The version number of the default version of the launch template. + DefaultVersionNumber *int64 `locationName:"defaultVersionNumber" type:"long"` + + // The version number of the latest version of the launch template. + LatestVersionNumber *int64 `locationName:"latestVersionNumber" type:"long"` + + // The ID of the launch template. + LaunchTemplateId *string `locationName:"launchTemplateId" type:"string"` + + // The name of the launch template. + LaunchTemplateName *string `locationName:"launchTemplateName" min:"3" type:"string"` + + // The tags for the launch template. + Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s LaunchTemplate) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplate) GoString() string { + return s.String() +} + +// SetCreateTime sets the CreateTime field's value. +func (s *LaunchTemplate) SetCreateTime(v time.Time) *LaunchTemplate { + s.CreateTime = &v + return s +} + +// SetCreatedBy sets the CreatedBy field's value. +func (s *LaunchTemplate) SetCreatedBy(v string) *LaunchTemplate { + s.CreatedBy = &v + return s +} + +// SetDefaultVersionNumber sets the DefaultVersionNumber field's value. +func (s *LaunchTemplate) SetDefaultVersionNumber(v int64) *LaunchTemplate { + s.DefaultVersionNumber = &v + return s +} + +// SetLatestVersionNumber sets the LatestVersionNumber field's value. +func (s *LaunchTemplate) SetLatestVersionNumber(v int64) *LaunchTemplate { + s.LatestVersionNumber = &v + return s +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *LaunchTemplate) SetLaunchTemplateId(v string) *LaunchTemplate { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *LaunchTemplate) SetLaunchTemplateName(v string) *LaunchTemplate { + s.LaunchTemplateName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *LaunchTemplate) SetTags(v []*Tag) *LaunchTemplate { + s.Tags = v + return s +} + +// Describes a block device mapping. +type LaunchTemplateBlockDeviceMapping struct { + _ struct{} `type:"structure"` + + // The device name. + DeviceName *string `locationName:"deviceName" type:"string"` + + // Information about the block device for an EBS volume. + Ebs *LaunchTemplateEbsBlockDevice `locationName:"ebs" type:"structure"` + + // Suppresses the specified device included in the block device mapping of the + // AMI. + NoDevice *string `locationName:"noDevice" type:"string"` + + // The virtual device name (ephemeralN). + VirtualName *string `locationName:"virtualName" type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateBlockDeviceMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateBlockDeviceMapping) GoString() string { + return s.String() +} + +// SetDeviceName sets the DeviceName field's value. 
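The LaunchTemplate type above is what the service returns when templates are created or described. A minimal sketch of creating one, assuming the CreateLaunchTemplate operation and RequestLaunchTemplateData type that accompany these structures in this SDK version; the AMI ID and template name are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Create a launch template from a placeholder AMI.
	out, err := svc.CreateLaunchTemplate(&ec2.CreateLaunchTemplateInput{
		LaunchTemplateName: aws.String("packer-example"),
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			ImageId:      aws.String("ami-12345678"),
			InstanceType: aws.String(ec2.InstanceTypeT2Micro),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s", aws.StringValue(out.LaunchTemplate.LaunchTemplateId))
}
```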
+func (s *LaunchTemplateBlockDeviceMapping) SetDeviceName(v string) *LaunchTemplateBlockDeviceMapping { + s.DeviceName = &v + return s +} + +// SetEbs sets the Ebs field's value. +func (s *LaunchTemplateBlockDeviceMapping) SetEbs(v *LaunchTemplateEbsBlockDevice) *LaunchTemplateBlockDeviceMapping { + s.Ebs = v + return s +} + +// SetNoDevice sets the NoDevice field's value. +func (s *LaunchTemplateBlockDeviceMapping) SetNoDevice(v string) *LaunchTemplateBlockDeviceMapping { + s.NoDevice = &v + return s +} + +// SetVirtualName sets the VirtualName field's value. +func (s *LaunchTemplateBlockDeviceMapping) SetVirtualName(v string) *LaunchTemplateBlockDeviceMapping { + s.VirtualName = &v + return s +} + +// Describes a block device mapping. +type LaunchTemplateBlockDeviceMappingRequest struct { + _ struct{} `type:"structure"` + + // The device name (for example, /dev/sdh or xvdh). + DeviceName *string `type:"string"` + + // Parameters used to automatically set up EBS volumes when the instance is + // launched. + Ebs *LaunchTemplateEbsBlockDeviceRequest `type:"structure"` + + // Suppresses the specified device included in the block device mapping of the + // AMI. + NoDevice *string `type:"string"` + + // The virtual device name (ephemeralN). Instance store volumes are numbered + // starting from 0. An instance type with 2 available instance store volumes + // can specify mappings for ephemeral0 and ephemeral1. The number of available + // instance store volumes depends on the instance type. After you connect to + // the instance, you must mount the volume. + VirtualName *string `type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateBlockDeviceMappingRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateBlockDeviceMappingRequest) GoString() string { + return s.String() +} + +// SetDeviceName sets the DeviceName field's value. +func (s *LaunchTemplateBlockDeviceMappingRequest) SetDeviceName(v string) *LaunchTemplateBlockDeviceMappingRequest { + s.DeviceName = &v + return s +} + +// SetEbs sets the Ebs field's value. +func (s *LaunchTemplateBlockDeviceMappingRequest) SetEbs(v *LaunchTemplateEbsBlockDeviceRequest) *LaunchTemplateBlockDeviceMappingRequest { + s.Ebs = v + return s +} + +// SetNoDevice sets the NoDevice field's value. +func (s *LaunchTemplateBlockDeviceMappingRequest) SetNoDevice(v string) *LaunchTemplateBlockDeviceMappingRequest { + s.NoDevice = &v + return s +} + +// SetVirtualName sets the VirtualName field's value. +func (s *LaunchTemplateBlockDeviceMappingRequest) SetVirtualName(v string) *LaunchTemplateBlockDeviceMappingRequest { + s.VirtualName = &v + return s +} + +// Describes a launch template and overrides. +type LaunchTemplateConfig struct { + _ struct{} `type:"structure"` + + // The launch template. + LaunchTemplateSpecification *FleetLaunchTemplateSpecification `locationName:"launchTemplateSpecification" type:"structure"` + + // Any parameters that you specify override the same parameters in the launch + // template. + Overrides []*LaunchTemplateOverrides `locationName:"overrides" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s LaunchTemplateConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *LaunchTemplateConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LaunchTemplateConfig"} + if s.LaunchTemplateSpecification != nil { + if err := s.LaunchTemplateSpecification.Validate(); err != nil { + invalidParams.AddNested("LaunchTemplateSpecification", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLaunchTemplateSpecification sets the LaunchTemplateSpecification field's value. +func (s *LaunchTemplateConfig) SetLaunchTemplateSpecification(v *FleetLaunchTemplateSpecification) *LaunchTemplateConfig { + s.LaunchTemplateSpecification = v + return s +} + +// SetOverrides sets the Overrides field's value. +func (s *LaunchTemplateConfig) SetOverrides(v []*LaunchTemplateOverrides) *LaunchTemplateConfig { + s.Overrides = v + return s +} + +// Describes a block device for an EBS volume. +type LaunchTemplateEbsBlockDevice struct { + _ struct{} `type:"structure"` + + // Indicates whether the EBS volume is deleted on instance termination. + DeleteOnTermination *bool `locationName:"deleteOnTermination" type:"boolean"` + + // Indicates whether the EBS volume is encrypted. + Encrypted *bool `locationName:"encrypted" type:"boolean"` + + // The number of I/O operations per second (IOPS) that the volume supports. + Iops *int64 `locationName:"iops" type:"integer"` + + // The ARN of the AWS Key Management Service (AWS KMS) CMK used for encryption. + KmsKeyId *string `locationName:"kmsKeyId" type:"string"` + + // The ID of the snapshot. + SnapshotId *string `locationName:"snapshotId" type:"string"` + + // The size of the volume, in GiB. + VolumeSize *int64 `locationName:"volumeSize" type:"integer"` + + // The volume type. + VolumeType *string `locationName:"volumeType" type:"string" enum:"VolumeType"` +} + +// String returns the string representation +func (s LaunchTemplateEbsBlockDevice) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateEbsBlockDevice) GoString() string { + return s.String() +} + +// SetDeleteOnTermination sets the DeleteOnTermination field's value. +func (s *LaunchTemplateEbsBlockDevice) SetDeleteOnTermination(v bool) *LaunchTemplateEbsBlockDevice { + s.DeleteOnTermination = &v + return s +} + +// SetEncrypted sets the Encrypted field's value. +func (s *LaunchTemplateEbsBlockDevice) SetEncrypted(v bool) *LaunchTemplateEbsBlockDevice { + s.Encrypted = &v + return s +} + +// SetIops sets the Iops field's value. +func (s *LaunchTemplateEbsBlockDevice) SetIops(v int64) *LaunchTemplateEbsBlockDevice { + s.Iops = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *LaunchTemplateEbsBlockDevice) SetKmsKeyId(v string) *LaunchTemplateEbsBlockDevice { + s.KmsKeyId = &v + return s +} + +// SetSnapshotId sets the SnapshotId field's value. +func (s *LaunchTemplateEbsBlockDevice) SetSnapshotId(v string) *LaunchTemplateEbsBlockDevice { + s.SnapshotId = &v + return s +} + +// SetVolumeSize sets the VolumeSize field's value. +func (s *LaunchTemplateEbsBlockDevice) SetVolumeSize(v int64) *LaunchTemplateEbsBlockDevice { + s.VolumeSize = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. +func (s *LaunchTemplateEbsBlockDevice) SetVolumeType(v string) *LaunchTemplateEbsBlockDevice { + s.VolumeType = &v + return s +} + +// The parameters for a block device for an EBS volume. 
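LaunchTemplateEbsBlockDeviceRequest below carries the request-side EBS settings for a template's block device mappings. A small sketch of a root-volume mapping, where the device name and size are assumptions to adjust for the AMI in use:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// rootVolumeMapping builds a mapping for a 40 GiB encrypted gp2 root volume
// that is deleted on termination. The device name is illustrative only.
func rootVolumeMapping() *ec2.LaunchTemplateBlockDeviceMappingRequest {
	return &ec2.LaunchTemplateBlockDeviceMappingRequest{
		DeviceName: aws.String("/dev/sda1"),
		Ebs: &ec2.LaunchTemplateEbsBlockDeviceRequest{
			VolumeSize:          aws.Int64(40),
			VolumeType:          aws.String(ec2.VolumeTypeGp2),
			Encrypted:           aws.Bool(true),
			DeleteOnTermination: aws.Bool(true),
		},
	}
}

func main() { fmt.Println(rootVolumeMapping()) }
```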
+type LaunchTemplateEbsBlockDeviceRequest struct { + _ struct{} `type:"structure"` + + // Indicates whether the EBS volume is deleted on instance termination. + DeleteOnTermination *bool `type:"boolean"` + + // Indicates whether the EBS volume is encrypted. Encrypted volumes can only + // be attached to instances that support Amazon EBS encryption. If you are creating + // a volume from a snapshot, you can't specify an encryption value. + Encrypted *bool `type:"boolean"` + + // The number of I/O operations per second (IOPS) that the volume supports. + // For io1, this represents the number of IOPS that are provisioned for the + // volume. For gp2, this represents the baseline performance of the volume and + // the rate at which the volume accumulates I/O credits for bursting. For more + // information about General Purpose SSD baseline performance, I/O credits, + // and bursting, see Amazon EBS Volume Types in the Amazon Elastic Compute Cloud + // User Guide. + // + // Condition: This parameter is required for requests to create io1 volumes; + // it is not used in requests to create gp2, st1, sc1, or standard volumes. + Iops *int64 `type:"integer"` + + // The ARN of the AWS Key Management Service (AWS KMS) CMK used for encryption. + KmsKeyId *string `type:"string"` + + // The ID of the snapshot. + SnapshotId *string `type:"string"` + + // The size of the volume, in GiB. + // + // Default: If you're creating the volume from a snapshot and don't specify + // a volume size, the default is the snapshot size. + VolumeSize *int64 `type:"integer"` + + // The volume type. + VolumeType *string `type:"string" enum:"VolumeType"` +} + +// String returns the string representation +func (s LaunchTemplateEbsBlockDeviceRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateEbsBlockDeviceRequest) GoString() string { + return s.String() +} + +// SetDeleteOnTermination sets the DeleteOnTermination field's value. +func (s *LaunchTemplateEbsBlockDeviceRequest) SetDeleteOnTermination(v bool) *LaunchTemplateEbsBlockDeviceRequest { + s.DeleteOnTermination = &v + return s +} + +// SetEncrypted sets the Encrypted field's value. +func (s *LaunchTemplateEbsBlockDeviceRequest) SetEncrypted(v bool) *LaunchTemplateEbsBlockDeviceRequest { + s.Encrypted = &v + return s +} + +// SetIops sets the Iops field's value. +func (s *LaunchTemplateEbsBlockDeviceRequest) SetIops(v int64) *LaunchTemplateEbsBlockDeviceRequest { + s.Iops = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *LaunchTemplateEbsBlockDeviceRequest) SetKmsKeyId(v string) *LaunchTemplateEbsBlockDeviceRequest { + s.KmsKeyId = &v + return s +} + +// SetSnapshotId sets the SnapshotId field's value. +func (s *LaunchTemplateEbsBlockDeviceRequest) SetSnapshotId(v string) *LaunchTemplateEbsBlockDeviceRequest { + s.SnapshotId = &v + return s +} + +// SetVolumeSize sets the VolumeSize field's value. +func (s *LaunchTemplateEbsBlockDeviceRequest) SetVolumeSize(v int64) *LaunchTemplateEbsBlockDeviceRequest { + s.VolumeSize = &v + return s +} + +// SetVolumeType sets the VolumeType field's value. +func (s *LaunchTemplateEbsBlockDeviceRequest) SetVolumeType(v string) *LaunchTemplateEbsBlockDeviceRequest { + s.VolumeType = &v + return s +} + +// Describes an IAM instance profile. +type LaunchTemplateIamInstanceProfileSpecification struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the instance profile. 
+ Arn *string `locationName:"arn" type:"string"` + + // The name of the instance profile. + Name *string `locationName:"name" type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateIamInstanceProfileSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateIamInstanceProfileSpecification) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *LaunchTemplateIamInstanceProfileSpecification) SetArn(v string) *LaunchTemplateIamInstanceProfileSpecification { + s.Arn = &v + return s +} + +// SetName sets the Name field's value. +func (s *LaunchTemplateIamInstanceProfileSpecification) SetName(v string) *LaunchTemplateIamInstanceProfileSpecification { + s.Name = &v + return s +} + +// An IAM instance profile. +type LaunchTemplateIamInstanceProfileSpecificationRequest struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the instance profile. + Arn *string `type:"string"` + + // The name of the instance profile. + Name *string `type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateIamInstanceProfileSpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateIamInstanceProfileSpecificationRequest) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *LaunchTemplateIamInstanceProfileSpecificationRequest) SetArn(v string) *LaunchTemplateIamInstanceProfileSpecificationRequest { + s.Arn = &v + return s +} + +// SetName sets the Name field's value. +func (s *LaunchTemplateIamInstanceProfileSpecificationRequest) SetName(v string) *LaunchTemplateIamInstanceProfileSpecificationRequest { + s.Name = &v + return s +} + +// The market (purchasing) option for the instances. +type LaunchTemplateInstanceMarketOptions struct { + _ struct{} `type:"structure"` + + // The market type. + MarketType *string `locationName:"marketType" type:"string" enum:"MarketType"` + + // The options for Spot Instances. + SpotOptions *LaunchTemplateSpotMarketOptions `locationName:"spotOptions" type:"structure"` +} + +// String returns the string representation +func (s LaunchTemplateInstanceMarketOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateInstanceMarketOptions) GoString() string { + return s.String() +} + +// SetMarketType sets the MarketType field's value. +func (s *LaunchTemplateInstanceMarketOptions) SetMarketType(v string) *LaunchTemplateInstanceMarketOptions { + s.MarketType = &v + return s +} + +// SetSpotOptions sets the SpotOptions field's value. +func (s *LaunchTemplateInstanceMarketOptions) SetSpotOptions(v *LaunchTemplateSpotMarketOptions) *LaunchTemplateInstanceMarketOptions { + s.SpotOptions = v + return s +} + +// The market (purchasing) option for the instances. +type LaunchTemplateInstanceMarketOptionsRequest struct { + _ struct{} `type:"structure"` + + // The market type. + MarketType *string `type:"string" enum:"MarketType"` + + // The options for Spot Instances. 
+ SpotOptions *LaunchTemplateSpotMarketOptionsRequest `type:"structure"` +} + +// String returns the string representation +func (s LaunchTemplateInstanceMarketOptionsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateInstanceMarketOptionsRequest) GoString() string { + return s.String() +} + +// SetMarketType sets the MarketType field's value. +func (s *LaunchTemplateInstanceMarketOptionsRequest) SetMarketType(v string) *LaunchTemplateInstanceMarketOptionsRequest { + s.MarketType = &v + return s +} + +// SetSpotOptions sets the SpotOptions field's value. +func (s *LaunchTemplateInstanceMarketOptionsRequest) SetSpotOptions(v *LaunchTemplateSpotMarketOptionsRequest) *LaunchTemplateInstanceMarketOptionsRequest { + s.SpotOptions = v + return s +} + +// Describes a network interface. +type LaunchTemplateInstanceNetworkInterfaceSpecification struct { + _ struct{} `type:"structure"` + + // Indicates whether to associate a public IPv4 address with eth0 for a new + // network interface. + AssociatePublicIpAddress *bool `locationName:"associatePublicIpAddress" type:"boolean"` + + // Indicates whether the network interface is deleted when the instance is terminated. + DeleteOnTermination *bool `locationName:"deleteOnTermination" type:"boolean"` + + // A description for the network interface. + Description *string `locationName:"description" type:"string"` + + // The device index for the network interface attachment. + DeviceIndex *int64 `locationName:"deviceIndex" type:"integer"` + + // The IDs of one or more security groups. + Groups []*string `locationName:"groupSet" locationNameList:"groupId" type:"list"` + + // The number of IPv6 addresses for the network interface. + Ipv6AddressCount *int64 `locationName:"ipv6AddressCount" type:"integer"` + + // The IPv6 addresses for the network interface. + Ipv6Addresses []*InstanceIpv6Address `locationName:"ipv6AddressesSet" locationNameList:"item" type:"list"` + + // The ID of the network interface. + NetworkInterfaceId *string `locationName:"networkInterfaceId" type:"string"` + + // The primary private IPv4 address of the network interface. + PrivateIpAddress *string `locationName:"privateIpAddress" type:"string"` + + // One or more private IPv4 addresses. + PrivateIpAddresses []*PrivateIpAddressSpecification `locationName:"privateIpAddressesSet" locationNameList:"item" type:"list"` + + // The number of secondary private IPv4 addresses for the network interface. + SecondaryPrivateIpAddressCount *int64 `locationName:"secondaryPrivateIpAddressCount" type:"integer"` + + // The ID of the subnet for the network interface. + SubnetId *string `locationName:"subnetId" type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateInstanceNetworkInterfaceSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateInstanceNetworkInterfaceSpecification) GoString() string { + return s.String() +} + +// SetAssociatePublicIpAddress sets the AssociatePublicIpAddress field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetAssociatePublicIpAddress(v bool) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.AssociatePublicIpAddress = &v + return s +} + +// SetDeleteOnTermination sets the DeleteOnTermination field's value. 
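LaunchTemplateInstanceMarketOptionsRequest is how a launch template opts its instances into the Spot market. A minimal sketch with an illustrative price cap; the enum constants are assumed to follow the SDK's usual generated naming:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Request one-time Spot Instances capped at an illustrative $0.05/hour.
	opts := &ec2.LaunchTemplateInstanceMarketOptionsRequest{
		MarketType: aws.String(ec2.MarketTypeSpot),
		SpotOptions: &ec2.LaunchTemplateSpotMarketOptionsRequest{
			MaxPrice:                     aws.String("0.05"),
			SpotInstanceType:             aws.String(ec2.SpotInstanceTypeOneTime),
			InstanceInterruptionBehavior: aws.String(ec2.InstanceInterruptionBehaviorTerminate),
		},
	}
	fmt.Println(opts)
}
```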
+func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetDeleteOnTermination(v bool) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.DeleteOnTermination = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetDescription(v string) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.Description = &v + return s +} + +// SetDeviceIndex sets the DeviceIndex field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetDeviceIndex(v int64) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.DeviceIndex = &v + return s +} + +// SetGroups sets the Groups field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetGroups(v []*string) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.Groups = v + return s +} + +// SetIpv6AddressCount sets the Ipv6AddressCount field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetIpv6AddressCount(v int64) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.Ipv6AddressCount = &v + return s +} + +// SetIpv6Addresses sets the Ipv6Addresses field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetIpv6Addresses(v []*InstanceIpv6Address) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.Ipv6Addresses = v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetNetworkInterfaceId(v string) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.NetworkInterfaceId = &v + return s +} + +// SetPrivateIpAddress sets the PrivateIpAddress field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetPrivateIpAddress(v string) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.PrivateIpAddress = &v + return s +} + +// SetPrivateIpAddresses sets the PrivateIpAddresses field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetPrivateIpAddresses(v []*PrivateIpAddressSpecification) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.PrivateIpAddresses = v + return s +} + +// SetSecondaryPrivateIpAddressCount sets the SecondaryPrivateIpAddressCount field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetSecondaryPrivateIpAddressCount(v int64) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.SecondaryPrivateIpAddressCount = &v + return s +} + +// SetSubnetId sets the SubnetId field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecification) SetSubnetId(v string) *LaunchTemplateInstanceNetworkInterfaceSpecification { + s.SubnetId = &v + return s +} + +// The parameters for a network interface. +type LaunchTemplateInstanceNetworkInterfaceSpecificationRequest struct { + _ struct{} `type:"structure"` + + // Associates a public IPv4 address with eth0 for a new network interface. + AssociatePublicIpAddress *bool `type:"boolean"` + + // Indicates whether the network interface is deleted when the instance is terminated. + DeleteOnTermination *bool `type:"boolean"` + + // A description for the network interface. + Description *string `type:"string"` + + // The device index for the network interface attachment. + DeviceIndex *int64 `type:"integer"` + + // The IDs of one or more security groups. + Groups []*string `locationName:"SecurityGroupId" locationNameList:"SecurityGroupId" type:"list"` + + // The number of IPv6 addresses to assign to a network interface. 
Amazon EC2 + // automatically selects the IPv6 addresses from the subnet range. You can't + // use this option if specifying specific IPv6 addresses. + Ipv6AddressCount *int64 `type:"integer"` + + // One or more specific IPv6 addresses from the IPv6 CIDR block range of your + // subnet. You can't use this option if you're specifying a number of IPv6 addresses. + Ipv6Addresses []*InstanceIpv6AddressRequest `locationNameList:"InstanceIpv6Address" type:"list"` + + // The ID of the network interface. + NetworkInterfaceId *string `type:"string"` + + // The primary private IPv4 address of the network interface. + PrivateIpAddress *string `type:"string"` + + // One or more private IPv4 addresses. + PrivateIpAddresses []*PrivateIpAddressSpecification `locationNameList:"item" type:"list"` + + // The number of secondary private IPv4 addresses to assign to a network interface. + SecondaryPrivateIpAddressCount *int64 `type:"integer"` + + // The ID of the subnet for the network interface. + SubnetId *string `type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LaunchTemplateInstanceNetworkInterfaceSpecificationRequest"} + if s.PrivateIpAddresses != nil { + for i, v := range s.PrivateIpAddresses { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PrivateIpAddresses", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAssociatePublicIpAddress sets the AssociatePublicIpAddress field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetAssociatePublicIpAddress(v bool) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.AssociatePublicIpAddress = &v + return s +} + +// SetDeleteOnTermination sets the DeleteOnTermination field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetDeleteOnTermination(v bool) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.DeleteOnTermination = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetDescription(v string) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.Description = &v + return s +} + +// SetDeviceIndex sets the DeviceIndex field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetDeviceIndex(v int64) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.DeviceIndex = &v + return s +} + +// SetGroups sets the Groups field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetGroups(v []*string) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.Groups = v + return s +} + +// SetIpv6AddressCount sets the Ipv6AddressCount field's value. 
+func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetIpv6AddressCount(v int64) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.Ipv6AddressCount = &v + return s +} + +// SetIpv6Addresses sets the Ipv6Addresses field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetIpv6Addresses(v []*InstanceIpv6AddressRequest) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.Ipv6Addresses = v + return s +} + +// SetNetworkInterfaceId sets the NetworkInterfaceId field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetNetworkInterfaceId(v string) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.NetworkInterfaceId = &v + return s +} + +// SetPrivateIpAddress sets the PrivateIpAddress field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetPrivateIpAddress(v string) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.PrivateIpAddress = &v + return s +} + +// SetPrivateIpAddresses sets the PrivateIpAddresses field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetPrivateIpAddresses(v []*PrivateIpAddressSpecification) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.PrivateIpAddresses = v + return s +} + +// SetSecondaryPrivateIpAddressCount sets the SecondaryPrivateIpAddressCount field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetSecondaryPrivateIpAddressCount(v int64) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.SecondaryPrivateIpAddressCount = &v + return s +} + +// SetSubnetId sets the SubnetId field's value. +func (s *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) SetSubnetId(v string) *LaunchTemplateInstanceNetworkInterfaceSpecificationRequest { + s.SubnetId = &v + return s +} + +// Describes overrides for a launch template. +type LaunchTemplateOverrides struct { + _ struct{} `type:"structure"` + + // The Availability Zone in which to launch the instances. + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` + + // The instance type. + InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` + + // The maximum price per unit hour that you are willing to pay for a Spot Instance. + SpotPrice *string `locationName:"spotPrice" type:"string"` + + // The ID of the subnet in which to launch the instances. + SubnetId *string `locationName:"subnetId" type:"string"` + + // The number of units provided by the specified instance type. + WeightedCapacity *float64 `locationName:"weightedCapacity" type:"double"` +} + +// String returns the string representation +func (s LaunchTemplateOverrides) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateOverrides) GoString() string { + return s.String() +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *LaunchTemplateOverrides) SetAvailabilityZone(v string) *LaunchTemplateOverrides { + s.AvailabilityZone = &v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *LaunchTemplateOverrides) SetInstanceType(v string) *LaunchTemplateOverrides { + s.InstanceType = &v + return s +} + +// SetSpotPrice sets the SpotPrice field's value. +func (s *LaunchTemplateOverrides) SetSpotPrice(v string) *LaunchTemplateOverrides { + s.SpotPrice = &v + return s +} + +// SetSubnetId sets the SubnetId field's value. 
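LaunchTemplateOverrides pairs with FleetLaunchTemplateSpecification inside a LaunchTemplateConfig so a single template can be fanned out across instance types and subnets. A rough sketch, with placeholder template ID, subnet IDs, and weights:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// One template, two capacity pools with different weights.
	cfg := &ec2.LaunchTemplateConfig{
		LaunchTemplateSpecification: &ec2.FleetLaunchTemplateSpecification{
			LaunchTemplateId: aws.String("lt-0123456789abcdef0"),
			Version:          aws.String("1"),
		},
		Overrides: []*ec2.LaunchTemplateOverrides{
			{InstanceType: aws.String(ec2.InstanceTypeM4Large), SubnetId: aws.String("subnet-11111111"), WeightedCapacity: aws.Float64(1)},
			{InstanceType: aws.String(ec2.InstanceTypeM4Xlarge), SubnetId: aws.String("subnet-22222222"), WeightedCapacity: aws.Float64(2)},
		},
	}
	fmt.Println(cfg)
}
```

Such a value would typically be supplied through a Spot Fleet request's launch template configuration list, which is assumed to be part of the same update.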
+func (s *LaunchTemplateOverrides) SetSubnetId(v string) *LaunchTemplateOverrides { + s.SubnetId = &v + return s +} + +// SetWeightedCapacity sets the WeightedCapacity field's value. +func (s *LaunchTemplateOverrides) SetWeightedCapacity(v float64) *LaunchTemplateOverrides { + s.WeightedCapacity = &v + return s +} + +// Describes the placement of an instance. +type LaunchTemplatePlacement struct { + _ struct{} `type:"structure"` + + // The affinity setting for the instance on the Dedicated Host. + Affinity *string `locationName:"affinity" type:"string"` + + // The Availability Zone of the instance. + AvailabilityZone *string `locationName:"availabilityZone" type:"string"` + + // The name of the placement group for the instance. + GroupName *string `locationName:"groupName" type:"string"` + + // The ID of the Dedicated Host for the instance. + HostId *string `locationName:"hostId" type:"string"` + + // Reserved for future use. + SpreadDomain *string `locationName:"spreadDomain" type:"string"` + + // The tenancy of the instance (if the instance is running in a VPC). An instance + // with a tenancy of dedicated runs on single-tenant hardware. + Tenancy *string `locationName:"tenancy" type:"string" enum:"Tenancy"` +} + +// String returns the string representation +func (s LaunchTemplatePlacement) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplatePlacement) GoString() string { + return s.String() +} + +// SetAffinity sets the Affinity field's value. +func (s *LaunchTemplatePlacement) SetAffinity(v string) *LaunchTemplatePlacement { + s.Affinity = &v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *LaunchTemplatePlacement) SetAvailabilityZone(v string) *LaunchTemplatePlacement { + s.AvailabilityZone = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *LaunchTemplatePlacement) SetGroupName(v string) *LaunchTemplatePlacement { + s.GroupName = &v + return s +} + +// SetHostId sets the HostId field's value. +func (s *LaunchTemplatePlacement) SetHostId(v string) *LaunchTemplatePlacement { + s.HostId = &v + return s +} + +// SetSpreadDomain sets the SpreadDomain field's value. +func (s *LaunchTemplatePlacement) SetSpreadDomain(v string) *LaunchTemplatePlacement { + s.SpreadDomain = &v + return s +} + +// SetTenancy sets the Tenancy field's value. +func (s *LaunchTemplatePlacement) SetTenancy(v string) *LaunchTemplatePlacement { + s.Tenancy = &v + return s +} + +// The placement for the instance. +type LaunchTemplatePlacementRequest struct { + _ struct{} `type:"structure"` + + // The affinity setting for an instance on a Dedicated Host. + Affinity *string `type:"string"` + + // The Availability Zone for the instance. + AvailabilityZone *string `type:"string"` + + // The name of the placement group for the instance. + GroupName *string `type:"string"` + + // The ID of the Dedicated Host for the instance. + HostId *string `type:"string"` + + // Reserved for future use. + SpreadDomain *string `type:"string"` + + // The tenancy of the instance (if the instance is running in a VPC). An instance + // with a tenancy of dedicated runs on single-tenant hardware. 
+ Tenancy *string `type:"string" enum:"Tenancy"` +} + +// String returns the string representation +func (s LaunchTemplatePlacementRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplatePlacementRequest) GoString() string { + return s.String() +} + +// SetAffinity sets the Affinity field's value. +func (s *LaunchTemplatePlacementRequest) SetAffinity(v string) *LaunchTemplatePlacementRequest { + s.Affinity = &v + return s +} + +// SetAvailabilityZone sets the AvailabilityZone field's value. +func (s *LaunchTemplatePlacementRequest) SetAvailabilityZone(v string) *LaunchTemplatePlacementRequest { + s.AvailabilityZone = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *LaunchTemplatePlacementRequest) SetGroupName(v string) *LaunchTemplatePlacementRequest { + s.GroupName = &v + return s +} + +// SetHostId sets the HostId field's value. +func (s *LaunchTemplatePlacementRequest) SetHostId(v string) *LaunchTemplatePlacementRequest { + s.HostId = &v + return s +} + +// SetSpreadDomain sets the SpreadDomain field's value. +func (s *LaunchTemplatePlacementRequest) SetSpreadDomain(v string) *LaunchTemplatePlacementRequest { + s.SpreadDomain = &v + return s +} + +// SetTenancy sets the Tenancy field's value. +func (s *LaunchTemplatePlacementRequest) SetTenancy(v string) *LaunchTemplatePlacementRequest { + s.Tenancy = &v + return s +} + +// The launch template to use. You must specify either the launch template ID +// or launch template name in the request. +type LaunchTemplateSpecification struct { + _ struct{} `type:"structure"` + + // The ID of the launch template. + LaunchTemplateId *string `type:"string"` + + // The name of the launch template. + LaunchTemplateName *string `type:"string"` + + // The version number of the launch template. + // + // Default: The default version for the launch template. + Version *string `type:"string"` +} + +// String returns the string representation +func (s LaunchTemplateSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateSpecification) GoString() string { + return s.String() +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *LaunchTemplateSpecification) SetLaunchTemplateId(v string) *LaunchTemplateSpecification { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *LaunchTemplateSpecification) SetLaunchTemplateName(v string) *LaunchTemplateSpecification { + s.LaunchTemplateName = &v + return s +} + +// SetVersion sets the Version field's value. +func (s *LaunchTemplateSpecification) SetVersion(v string) *LaunchTemplateSpecification { + s.Version = &v + return s +} + +// The options for Spot Instances. +type LaunchTemplateSpotMarketOptions struct { + _ struct{} `type:"structure"` + + // The required duration for the Spot Instances (also known as Spot blocks), + // in minutes. This value must be a multiple of 60 (60, 120, 180, 240, 300, + // or 360). + BlockDurationMinutes *int64 `locationName:"blockDurationMinutes" type:"integer"` + + // The behavior when a Spot Instance is interrupted. + InstanceInterruptionBehavior *string `locationName:"instanceInterruptionBehavior" type:"string" enum:"InstanceInterruptionBehavior"` + + // The maximum hourly price you're willing to pay for the Spot Instances. 
+ MaxPrice *string `locationName:"maxPrice" type:"string"` + + // The Spot Instance request type. + SpotInstanceType *string `locationName:"spotInstanceType" type:"string" enum:"SpotInstanceType"` + + // The end date of the request. For a one-time request, the request remains + // active until all instances launch, the request is canceled, or this date + // is reached. If the request is persistent, it remains active until it is canceled + // or this date and time is reached. + ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s LaunchTemplateSpotMarketOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateSpotMarketOptions) GoString() string { + return s.String() +} + +// SetBlockDurationMinutes sets the BlockDurationMinutes field's value. +func (s *LaunchTemplateSpotMarketOptions) SetBlockDurationMinutes(v int64) *LaunchTemplateSpotMarketOptions { + s.BlockDurationMinutes = &v + return s +} + +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. +func (s *LaunchTemplateSpotMarketOptions) SetInstanceInterruptionBehavior(v string) *LaunchTemplateSpotMarketOptions { + s.InstanceInterruptionBehavior = &v + return s +} + +// SetMaxPrice sets the MaxPrice field's value. +func (s *LaunchTemplateSpotMarketOptions) SetMaxPrice(v string) *LaunchTemplateSpotMarketOptions { + s.MaxPrice = &v + return s +} + +// SetSpotInstanceType sets the SpotInstanceType field's value. +func (s *LaunchTemplateSpotMarketOptions) SetSpotInstanceType(v string) *LaunchTemplateSpotMarketOptions { + s.SpotInstanceType = &v + return s +} + +// SetValidUntil sets the ValidUntil field's value. +func (s *LaunchTemplateSpotMarketOptions) SetValidUntil(v time.Time) *LaunchTemplateSpotMarketOptions { + s.ValidUntil = &v + return s +} + +// The options for Spot Instances. +type LaunchTemplateSpotMarketOptionsRequest struct { + _ struct{} `type:"structure"` + + // The required duration for the Spot Instances (also known as Spot blocks), + // in minutes. This value must be a multiple of 60 (60, 120, 180, 240, 300, + // or 360). + BlockDurationMinutes *int64 `type:"integer"` + + // The behavior when a Spot Instance is interrupted. The default is terminate. + InstanceInterruptionBehavior *string `type:"string" enum:"InstanceInterruptionBehavior"` + + // The maximum hourly price you're willing to pay for the Spot Instances. + MaxPrice *string `type:"string"` + + // The Spot Instance request type. + SpotInstanceType *string `type:"string" enum:"SpotInstanceType"` + + // The end date of the request. For a one-time request, the request remains + // active until all instances launch, the request is canceled, or this date + // is reached. If the request is persistent, it remains active until it is canceled + // or this date and time is reached. The default end date is 7 days from the + // current date. + ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s LaunchTemplateSpotMarketOptionsRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateSpotMarketOptionsRequest) GoString() string { + return s.String() +} + +// SetBlockDurationMinutes sets the BlockDurationMinutes field's value. 
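LaunchTemplateSpecification is the hook that lets RunInstances launch from a stored template instead of inline parameters. A minimal sketch, assuming RunInstancesInput carries a LaunchTemplate field in this SDK version and using a placeholder template name:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Launch one instance from the latest version of a stored template.
	_, err := svc.RunInstances(&ec2.RunInstancesInput{
		LaunchTemplate: &ec2.LaunchTemplateSpecification{
			LaunchTemplateName: aws.String("packer-example"),
			Version:            aws.String("$Latest"), // or a specific version number
		},
		MinCount: aws.Int64(1),
		MaxCount: aws.Int64(1),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```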
+func (s *LaunchTemplateSpotMarketOptionsRequest) SetBlockDurationMinutes(v int64) *LaunchTemplateSpotMarketOptionsRequest { + s.BlockDurationMinutes = &v + return s +} + +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. +func (s *LaunchTemplateSpotMarketOptionsRequest) SetInstanceInterruptionBehavior(v string) *LaunchTemplateSpotMarketOptionsRequest { + s.InstanceInterruptionBehavior = &v + return s +} + +// SetMaxPrice sets the MaxPrice field's value. +func (s *LaunchTemplateSpotMarketOptionsRequest) SetMaxPrice(v string) *LaunchTemplateSpotMarketOptionsRequest { + s.MaxPrice = &v + return s +} + +// SetSpotInstanceType sets the SpotInstanceType field's value. +func (s *LaunchTemplateSpotMarketOptionsRequest) SetSpotInstanceType(v string) *LaunchTemplateSpotMarketOptionsRequest { + s.SpotInstanceType = &v + return s +} + +// SetValidUntil sets the ValidUntil field's value. +func (s *LaunchTemplateSpotMarketOptionsRequest) SetValidUntil(v time.Time) *LaunchTemplateSpotMarketOptionsRequest { + s.ValidUntil = &v + return s +} + +// The tag specification for the launch template. +type LaunchTemplateTagSpecification struct { + _ struct{} `type:"structure"` + + // The type of resource. + ResourceType *string `locationName:"resourceType" type:"string" enum:"ResourceType"` + + // The tags for the resource. + Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s LaunchTemplateTagSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateTagSpecification) GoString() string { + return s.String() +} + +// SetResourceType sets the ResourceType field's value. +func (s *LaunchTemplateTagSpecification) SetResourceType(v string) *LaunchTemplateTagSpecification { + s.ResourceType = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *LaunchTemplateTagSpecification) SetTags(v []*Tag) *LaunchTemplateTagSpecification { + s.Tags = v + return s +} + +// The tags specification for the launch template. +type LaunchTemplateTagSpecificationRequest struct { + _ struct{} `type:"structure"` + + // The type of resource to tag. Currently, the resource types that support tagging + // on creation are instance and volume. + ResourceType *string `type:"string" enum:"ResourceType"` + + // The tags to apply to the resource. + Tags []*Tag `locationName:"Tag" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s LaunchTemplateTagSpecificationRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateTagSpecificationRequest) GoString() string { + return s.String() +} + +// SetResourceType sets the ResourceType field's value. +func (s *LaunchTemplateTagSpecificationRequest) SetResourceType(v string) *LaunchTemplateTagSpecificationRequest { + s.ResourceType = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *LaunchTemplateTagSpecificationRequest) SetTags(v []*Tag) *LaunchTemplateTagSpecificationRequest { + s.Tags = v + return s +} + +// Describes a launch template version. +type LaunchTemplateVersion struct { + _ struct{} `type:"structure"` + + // The time the version was created. + CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` + + // The principal that created the version. 
+ CreatedBy *string `locationName:"createdBy" type:"string"` + + // Indicates whether the version is the default version. + DefaultVersion *bool `locationName:"defaultVersion" type:"boolean"` + + // Information about the launch template. + LaunchTemplateData *ResponseLaunchTemplateData `locationName:"launchTemplateData" type:"structure"` + + // The ID of the launch template. + LaunchTemplateId *string `locationName:"launchTemplateId" type:"string"` + + // The name of the launch template. + LaunchTemplateName *string `locationName:"launchTemplateName" min:"3" type:"string"` + + // The description for the version. + VersionDescription *string `locationName:"versionDescription" type:"string"` + + // The version number. + VersionNumber *int64 `locationName:"versionNumber" type:"long"` +} + +// String returns the string representation +func (s LaunchTemplateVersion) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplateVersion) GoString() string { + return s.String() +} + +// SetCreateTime sets the CreateTime field's value. +func (s *LaunchTemplateVersion) SetCreateTime(v time.Time) *LaunchTemplateVersion { + s.CreateTime = &v + return s +} + +// SetCreatedBy sets the CreatedBy field's value. +func (s *LaunchTemplateVersion) SetCreatedBy(v string) *LaunchTemplateVersion { + s.CreatedBy = &v + return s +} + +// SetDefaultVersion sets the DefaultVersion field's value. +func (s *LaunchTemplateVersion) SetDefaultVersion(v bool) *LaunchTemplateVersion { + s.DefaultVersion = &v + return s +} + +// SetLaunchTemplateData sets the LaunchTemplateData field's value. +func (s *LaunchTemplateVersion) SetLaunchTemplateData(v *ResponseLaunchTemplateData) *LaunchTemplateVersion { + s.LaunchTemplateData = v + return s +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. +func (s *LaunchTemplateVersion) SetLaunchTemplateId(v string) *LaunchTemplateVersion { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *LaunchTemplateVersion) SetLaunchTemplateName(v string) *LaunchTemplateVersion { + s.LaunchTemplateName = &v + return s +} + +// SetVersionDescription sets the VersionDescription field's value. +func (s *LaunchTemplateVersion) SetVersionDescription(v string) *LaunchTemplateVersion { + s.VersionDescription = &v + return s +} + +// SetVersionNumber sets the VersionNumber field's value. +func (s *LaunchTemplateVersion) SetVersionNumber(v int64) *LaunchTemplateVersion { + s.VersionNumber = &v + return s +} + +// Describes the monitoring for the instance. +type LaunchTemplatesMonitoring struct { + _ struct{} `type:"structure"` + + // Indicates whether detailed monitoring is enabled. Otherwise, basic monitoring + // is enabled. + Enabled *bool `locationName:"enabled" type:"boolean"` +} + +// String returns the string representation +func (s LaunchTemplatesMonitoring) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplatesMonitoring) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. +func (s *LaunchTemplatesMonitoring) SetEnabled(v bool) *LaunchTemplatesMonitoring { + s.Enabled = &v + return s +} + +// Describes the monitoring for the instance. +type LaunchTemplatesMonitoringRequest struct { + _ struct{} `type:"structure"` + + // Specify true to enable detailed monitoring. Otherwise, basic monitoring is + // enabled. 
+ Enabled *bool `type:"boolean"` +} + +// String returns the string representation +func (s LaunchTemplatesMonitoringRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LaunchTemplatesMonitoringRequest) GoString() string { + return s.String() +} + +// SetEnabled sets the Enabled field's value. +func (s *LaunchTemplatesMonitoringRequest) SetEnabled(v bool) *LaunchTemplatesMonitoringRequest { + s.Enabled = &v + return s +} + +// Describes the Classic Load Balancers and target groups to attach to a Spot +// Fleet request. +type LoadBalancersConfig struct { + _ struct{} `type:"structure"` + + // The Classic Load Balancers. + ClassicLoadBalancersConfig *ClassicLoadBalancersConfig `locationName:"classicLoadBalancersConfig" type:"structure"` + + // The target groups. + TargetGroupsConfig *TargetGroupsConfig `locationName:"targetGroupsConfig" type:"structure"` +} + +// String returns the string representation +func (s LoadBalancersConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoadBalancersConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *LoadBalancersConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "LoadBalancersConfig"} + if s.ClassicLoadBalancersConfig != nil { + if err := s.ClassicLoadBalancersConfig.Validate(); err != nil { + invalidParams.AddNested("ClassicLoadBalancersConfig", err.(request.ErrInvalidParams)) + } + } + if s.TargetGroupsConfig != nil { + if err := s.TargetGroupsConfig.Validate(); err != nil { + invalidParams.AddNested("TargetGroupsConfig", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClassicLoadBalancersConfig sets the ClassicLoadBalancersConfig field's value. +func (s *LoadBalancersConfig) SetClassicLoadBalancersConfig(v *ClassicLoadBalancersConfig) *LoadBalancersConfig { + s.ClassicLoadBalancersConfig = v + return s +} + +// SetTargetGroupsConfig sets the TargetGroupsConfig field's value. +func (s *LoadBalancersConfig) SetTargetGroupsConfig(v *TargetGroupsConfig) *LoadBalancersConfig { + s.TargetGroupsConfig = v + return s +} + +// Describes a load permission. +type LoadPermission struct { + _ struct{} `type:"structure"` + + // The name of the group. + Group *string `locationName:"group" type:"string" enum:"PermissionGroup"` + + // The AWS account ID. + UserId *string `locationName:"userId" type:"string"` +} + +// String returns the string representation +func (s LoadPermission) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoadPermission) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *LoadPermission) SetGroup(v string) *LoadPermission { + s.Group = &v + return s +} + +// SetUserId sets the UserId field's value. +func (s *LoadPermission) SetUserId(v string) *LoadPermission { + s.UserId = &v + return s +} + +// Describes modifications to the load permissions of an Amazon FPGA image (AFI). +type LoadPermissionModifications struct { + _ struct{} `type:"structure"` + + // The load permissions to add. + Add []*LoadPermissionRequest `locationNameList:"item" type:"list"` + + // The load permissions to remove. 
+ Remove []*LoadPermissionRequest `locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s LoadPermissionModifications) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoadPermissionModifications) GoString() string { + return s.String() +} + +// SetAdd sets the Add field's value. +func (s *LoadPermissionModifications) SetAdd(v []*LoadPermissionRequest) *LoadPermissionModifications { + s.Add = v + return s +} + +// SetRemove sets the Remove field's value. +func (s *LoadPermissionModifications) SetRemove(v []*LoadPermissionRequest) *LoadPermissionModifications { + s.Remove = v + return s +} + +// Describes a load permission. +type LoadPermissionRequest struct { + _ struct{} `type:"structure"` + + // The name of the group. + Group *string `type:"string" enum:"PermissionGroup"` + + // The AWS account ID. + UserId *string `type:"string"` +} + +// String returns the string representation +func (s LoadPermissionRequest) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LoadPermissionRequest) GoString() string { + return s.String() +} + +// SetGroup sets the Group field's value. +func (s *LoadPermissionRequest) SetGroup(v string) *LoadPermissionRequest { + s.Group = &v + return s +} + +// SetUserId sets the UserId field's value. +func (s *LoadPermissionRequest) SetUserId(v string) *LoadPermissionRequest { + s.UserId = &v + return s +} + +type ModifyFpgaImageAttributeInput struct { + _ struct{} `type:"structure"` + + // The name of the attribute. + Attribute *string `type:"string" enum:"FpgaImageAttributeName"` + + // A description for the AFI. + Description *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the AFI. + // + // FpgaImageId is a required field + FpgaImageId *string `type:"string" required:"true"` + + // The load permission for the AFI. + LoadPermission *LoadPermissionModifications `type:"structure"` + + // A name for the AFI. + Name *string `type:"string"` + + // The operation type. + OperationType *string `type:"string" enum:"OperationType"` + + // One or more product codes. After you add a product code to an AFI, it can't + // be removed. This parameter is valid only when modifying the productCodes + // attribute. + ProductCodes []*string `locationName:"ProductCode" locationNameList:"ProductCode" type:"list"` + + // One or more user groups. This parameter is valid only when modifying the + // loadPermission attribute. + UserGroups []*string `locationName:"UserGroup" locationNameList:"UserGroup" type:"list"` + + // One or more AWS account IDs. This parameter is valid only when modifying + // the loadPermission attribute. + UserIds []*string `locationName:"UserId" locationNameList:"UserId" type:"list"` +} + +// String returns the string representation +func (s ModifyFpgaImageAttributeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyFpgaImageAttributeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ModifyFpgaImageAttributeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyFpgaImageAttributeInput"} + if s.FpgaImageId == nil { + invalidParams.Add(request.NewErrParamRequired("FpgaImageId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttribute sets the Attribute field's value. +func (s *ModifyFpgaImageAttributeInput) SetAttribute(v string) *ModifyFpgaImageAttributeInput { + s.Attribute = &v + return s +} + +// SetDescription sets the Description field's value. +func (s *ModifyFpgaImageAttributeInput) SetDescription(v string) *ModifyFpgaImageAttributeInput { + s.Description = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyFpgaImageAttributeInput) SetDryRun(v bool) *ModifyFpgaImageAttributeInput { + s.DryRun = &v + return s +} + +// SetFpgaImageId sets the FpgaImageId field's value. +func (s *ModifyFpgaImageAttributeInput) SetFpgaImageId(v string) *ModifyFpgaImageAttributeInput { + s.FpgaImageId = &v + return s +} + +// SetLoadPermission sets the LoadPermission field's value. +func (s *ModifyFpgaImageAttributeInput) SetLoadPermission(v *LoadPermissionModifications) *ModifyFpgaImageAttributeInput { + s.LoadPermission = v + return s +} + +// SetName sets the Name field's value. +func (s *ModifyFpgaImageAttributeInput) SetName(v string) *ModifyFpgaImageAttributeInput { + s.Name = &v + return s +} + +// SetOperationType sets the OperationType field's value. +func (s *ModifyFpgaImageAttributeInput) SetOperationType(v string) *ModifyFpgaImageAttributeInput { + s.OperationType = &v + return s +} + +// SetProductCodes sets the ProductCodes field's value. +func (s *ModifyFpgaImageAttributeInput) SetProductCodes(v []*string) *ModifyFpgaImageAttributeInput { + s.ProductCodes = v + return s +} + +// SetUserGroups sets the UserGroups field's value. +func (s *ModifyFpgaImageAttributeInput) SetUserGroups(v []*string) *ModifyFpgaImageAttributeInput { + s.UserGroups = v + return s +} + +// SetUserIds sets the UserIds field's value. +func (s *ModifyFpgaImageAttributeInput) SetUserIds(v []*string) *ModifyFpgaImageAttributeInput { + s.UserIds = v + return s +} + +type ModifyFpgaImageAttributeOutput struct { + _ struct{} `type:"structure"` + + // Information about the attribute. + FpgaImageAttribute *FpgaImageAttribute `locationName:"fpgaImageAttribute" type:"structure"` +} + +// String returns the string representation +func (s ModifyFpgaImageAttributeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyFpgaImageAttributeOutput) GoString() string { + return s.String() +} + +// SetFpgaImageAttribute sets the FpgaImageAttribute field's value. +func (s *ModifyFpgaImageAttributeOutput) SetFpgaImageAttribute(v *FpgaImageAttribute) *ModifyFpgaImageAttributeOutput { + s.FpgaImageAttribute = v + return s +} + // Contains the parameters for ModifyHosts. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyHostsRequest type ModifyHostsInput struct { _ struct{} `type:"structure"` @@ -44382,7 +51281,6 @@ func (s *ModifyHostsInput) SetHostIds(v []*string) *ModifyHostsInput { } // Contains the output of ModifyHosts. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyHostsResult type ModifyHostsOutput struct { _ struct{} `type:"structure"` @@ -44417,7 +51315,6 @@ func (s *ModifyHostsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *ModifyHostsO } // Contains the parameters of ModifyIdFormat. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdFormatRequest type ModifyIdFormatInput struct { _ struct{} `type:"structure"` @@ -44470,7 +51367,6 @@ func (s *ModifyIdFormatInput) SetUseLongIds(v bool) *ModifyIdFormatInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdFormatOutput type ModifyIdFormatOutput struct { _ struct{} `type:"structure"` } @@ -44486,7 +51382,6 @@ func (s ModifyIdFormatOutput) GoString() string { } // Contains the parameters of ModifyIdentityIdFormat. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdentityIdFormatRequest type ModifyIdentityIdFormatInput struct { _ struct{} `type:"structure"` @@ -44555,7 +51450,6 @@ func (s *ModifyIdentityIdFormatInput) SetUseLongIds(v bool) *ModifyIdentityIdFor return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyIdentityIdFormatOutput type ModifyIdentityIdFormatOutput struct { _ struct{} `type:"structure"` } @@ -44571,14 +51465,14 @@ func (s ModifyIdentityIdFormatOutput) GoString() string { } // Contains the parameters for ModifyImageAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyImageAttributeRequest type ModifyImageAttributeInput struct { _ struct{} `type:"structure"` - // The name of the attribute to modify. + // The name of the attribute to modify. The valid values are description, launchPermission, + // and productCodes. Attribute *string `type:"string"` - // A description for the AMI. + // A new description for the AMI. Description *AttributeValue `type:"structure"` // Checks whether you have the required permissions for the action, without @@ -44592,26 +51486,27 @@ type ModifyImageAttributeInput struct { // ImageId is a required field ImageId *string `type:"string" required:"true"` - // A launch permission modification. + // A new launch permission for the AMI. LaunchPermission *LaunchPermissionModifications `type:"structure"` - // The operation type. + // The operation type. This parameter can be used only when the Attribute parameter + // is launchPermission. OperationType *string `type:"string" enum:"OperationType"` - // One or more product codes. After you add a product code to an AMI, it can't - // be removed. This is only valid when modifying the productCodes attribute. + // One or more DevPay product codes. After you add a product code to an AMI, + // it can't be removed. ProductCodes []*string `locationName:"ProductCode" locationNameList:"ProductCode" type:"list"` - // One or more user groups. This is only valid when modifying the launchPermission - // attribute. + // One or more user groups. This parameter can be used only when the Attribute + // parameter is launchPermission. UserGroups []*string `locationName:"UserGroup" locationNameList:"UserGroup" type:"list"` - // One or more AWS account IDs. This is only valid when modifying the launchPermission - // attribute. + // One or more AWS account IDs. This parameter can be used only when the Attribute + // parameter is launchPermission. UserIds []*string `locationName:"UserId" locationNameList:"UserId" type:"list"` - // The value of the attribute being modified. 
This is only valid when modifying - // the description attribute. + // The value of the attribute being modified. This parameter can be used only + // when the Attribute parameter is description or productCodes. Value *string `type:"string"` } @@ -44698,7 +51593,6 @@ func (s *ModifyImageAttributeInput) SetValue(v string) *ModifyImageAttributeInpu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyImageAttributeOutput type ModifyImageAttributeOutput struct { _ struct{} `type:"structure"` } @@ -44714,7 +51608,6 @@ func (s ModifyImageAttributeOutput) GoString() string { } // Contains the parameters for ModifyInstanceAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceAttributeRequest type ModifyInstanceAttributeInput struct { _ struct{} `type:"structure"` @@ -44743,7 +51636,7 @@ type ModifyInstanceAttributeInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // Specifies whether the instance is optimized for EBS I/O. This optimization + // Specifies whether the instance is optimized for Amazon EBS I/O. This optimization // provides dedicated throughput to Amazon EBS and an optimized configuration // stack to provide optimal EBS I/O performance. This optimization isn't available // with all instance types. Additional usage charges apply when using an EBS @@ -44786,8 +51679,8 @@ type ModifyInstanceAttributeInput struct { Ramdisk *AttributeValue `locationName:"ramdisk" type:"structure"` // Specifies whether source/destination checking is enabled. A value of true - // means that checking is enabled, and false means checking is disabled. This - // value must be false for a NAT instance to perform NAT. + // means that checking is enabled, and false means that checking is disabled. + // This value must be false for a NAT instance to perform NAT. SourceDestCheck *AttributeBooleanValue `type:"structure"` // Set to simple to enable enhanced networking with the Intel 82599 Virtual @@ -44801,8 +51694,8 @@ type ModifyInstanceAttributeInput struct { SriovNetSupport *AttributeValue `locationName:"sriovNetSupport" type:"structure"` // Changes the instance's user data to the specified value. If you are using - // an AWS SDK or command line tool, Base64-encoding is performed for you, and - // you can load the text from a file. Otherwise, you must provide Base64-encoded + // an AWS SDK or command line tool, base64-encoding is performed for you, and + // you can load the text from a file. Otherwise, you must provide base64-encoded // text. UserData *BlobAttributeValue `locationName:"userData" type:"structure"` @@ -44930,7 +51823,6 @@ func (s *ModifyInstanceAttributeInput) SetValue(v string) *ModifyInstanceAttribu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstanceAttributeOutput type ModifyInstanceAttributeOutput struct { _ struct{} `type:"structure"` } @@ -44945,8 +51837,102 @@ func (s ModifyInstanceAttributeOutput) GoString() string { return s.String() } +type ModifyInstanceCreditSpecificationInput struct { + _ struct{} `type:"structure"` + + // A unique, case-sensitive token that you provide to ensure idempotency of + // your modification request. For more information, see Ensuring Idempotency + // (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). 
+ ClientToken *string `type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // Information about the credit option for CPU usage. + // + // InstanceCreditSpecifications is a required field + InstanceCreditSpecifications []*InstanceCreditSpecificationRequest `locationName:"InstanceCreditSpecification" locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s ModifyInstanceCreditSpecificationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceCreditSpecificationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyInstanceCreditSpecificationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyInstanceCreditSpecificationInput"} + if s.InstanceCreditSpecifications == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceCreditSpecifications")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *ModifyInstanceCreditSpecificationInput) SetClientToken(v string) *ModifyInstanceCreditSpecificationInput { + s.ClientToken = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyInstanceCreditSpecificationInput) SetDryRun(v bool) *ModifyInstanceCreditSpecificationInput { + s.DryRun = &v + return s +} + +// SetInstanceCreditSpecifications sets the InstanceCreditSpecifications field's value. +func (s *ModifyInstanceCreditSpecificationInput) SetInstanceCreditSpecifications(v []*InstanceCreditSpecificationRequest) *ModifyInstanceCreditSpecificationInput { + s.InstanceCreditSpecifications = v + return s +} + +type ModifyInstanceCreditSpecificationOutput struct { + _ struct{} `type:"structure"` + + // Information about the instances whose credit option for CPU usage was successfully + // modified. + SuccessfulInstanceCreditSpecifications []*SuccessfulInstanceCreditSpecificationItem `locationName:"successfulInstanceCreditSpecificationSet" locationNameList:"item" type:"list"` + + // Information about the instances whose credit option for CPU usage was not + // modified. + UnsuccessfulInstanceCreditSpecifications []*UnsuccessfulInstanceCreditSpecificationItem `locationName:"unsuccessfulInstanceCreditSpecificationSet" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s ModifyInstanceCreditSpecificationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyInstanceCreditSpecificationOutput) GoString() string { + return s.String() +} + +// SetSuccessfulInstanceCreditSpecifications sets the SuccessfulInstanceCreditSpecifications field's value. +func (s *ModifyInstanceCreditSpecificationOutput) SetSuccessfulInstanceCreditSpecifications(v []*SuccessfulInstanceCreditSpecificationItem) *ModifyInstanceCreditSpecificationOutput { + s.SuccessfulInstanceCreditSpecifications = v + return s +} + +// SetUnsuccessfulInstanceCreditSpecifications sets the UnsuccessfulInstanceCreditSpecifications field's value. 
+func (s *ModifyInstanceCreditSpecificationOutput) SetUnsuccessfulInstanceCreditSpecifications(v []*UnsuccessfulInstanceCreditSpecificationItem) *ModifyInstanceCreditSpecificationOutput { + s.UnsuccessfulInstanceCreditSpecifications = v + return s +} + // Contains the parameters for ModifyInstancePlacement. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstancePlacementRequest type ModifyInstancePlacementInput struct { _ struct{} `type:"structure"` @@ -45013,7 +51999,6 @@ func (s *ModifyInstancePlacementInput) SetTenancy(v string) *ModifyInstancePlace } // Contains the output of ModifyInstancePlacement. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyInstancePlacementResult type ModifyInstancePlacementOutput struct { _ struct{} `type:"structure"` @@ -45037,8 +52022,108 @@ func (s *ModifyInstancePlacementOutput) SetReturn(v bool) *ModifyInstancePlaceme return s } +type ModifyLaunchTemplateInput struct { + _ struct{} `type:"structure"` + + // Unique, case-sensitive identifier you provide to ensure the idempotency of + // the request. For more information, see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). + ClientToken *string `type:"string"` + + // The version number of the launch template to set as the default version. + DefaultVersion *string `locationName:"SetDefaultVersion" type:"string"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateId *string `type:"string"` + + // The name of the launch template. You must specify either the launch template + // ID or launch template name in the request. + LaunchTemplateName *string `min:"3" type:"string"` +} + +// String returns the string representation +func (s ModifyLaunchTemplateInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyLaunchTemplateInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyLaunchTemplateInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyLaunchTemplateInput"} + if s.LaunchTemplateName != nil && len(*s.LaunchTemplateName) < 3 { + invalidParams.Add(request.NewErrParamMinLen("LaunchTemplateName", 3)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetClientToken sets the ClientToken field's value. +func (s *ModifyLaunchTemplateInput) SetClientToken(v string) *ModifyLaunchTemplateInput { + s.ClientToken = &v + return s +} + +// SetDefaultVersion sets the DefaultVersion field's value. +func (s *ModifyLaunchTemplateInput) SetDefaultVersion(v string) *ModifyLaunchTemplateInput { + s.DefaultVersion = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyLaunchTemplateInput) SetDryRun(v bool) *ModifyLaunchTemplateInput { + s.DryRun = &v + return s +} + +// SetLaunchTemplateId sets the LaunchTemplateId field's value. 
+func (s *ModifyLaunchTemplateInput) SetLaunchTemplateId(v string) *ModifyLaunchTemplateInput { + s.LaunchTemplateId = &v + return s +} + +// SetLaunchTemplateName sets the LaunchTemplateName field's value. +func (s *ModifyLaunchTemplateInput) SetLaunchTemplateName(v string) *ModifyLaunchTemplateInput { + s.LaunchTemplateName = &v + return s +} + +type ModifyLaunchTemplateOutput struct { + _ struct{} `type:"structure"` + + // Information about the launch template. + LaunchTemplate *LaunchTemplate `locationName:"launchTemplate" type:"structure"` +} + +// String returns the string representation +func (s ModifyLaunchTemplateOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyLaunchTemplateOutput) GoString() string { + return s.String() +} + +// SetLaunchTemplate sets the LaunchTemplate field's value. +func (s *ModifyLaunchTemplateOutput) SetLaunchTemplate(v *LaunchTemplate) *ModifyLaunchTemplateOutput { + s.LaunchTemplate = v + return s +} + // Contains the parameters for ModifyNetworkInterfaceAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyNetworkInterfaceAttributeRequest type ModifyNetworkInterfaceAttributeInput struct { _ struct{} `type:"structure"` @@ -45133,7 +52218,6 @@ func (s *ModifyNetworkInterfaceAttributeInput) SetSourceDestCheck(v *AttributeBo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyNetworkInterfaceAttributeOutput type ModifyNetworkInterfaceAttributeOutput struct { _ struct{} `type:"structure"` } @@ -45149,7 +52233,6 @@ func (s ModifyNetworkInterfaceAttributeOutput) GoString() string { } // Contains the parameters for ModifyReservedInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyReservedInstancesRequest type ModifyReservedInstancesInput struct { _ struct{} `type:"structure"` @@ -45213,7 +52296,6 @@ func (s *ModifyReservedInstancesInput) SetTargetConfigurations(v []*ReservedInst } // Contains the output of ModifyReservedInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyReservedInstancesResult type ModifyReservedInstancesOutput struct { _ struct{} `type:"structure"` @@ -45238,7 +52320,6 @@ func (s *ModifyReservedInstancesOutput) SetReservedInstancesModificationId(v str } // Contains the parameters for ModifySnapshotAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySnapshotAttributeRequest type ModifySnapshotAttributeInput struct { _ struct{} `type:"structure"` @@ -45336,7 +52417,6 @@ func (s *ModifySnapshotAttributeInput) SetUserIds(v []*string) *ModifySnapshotAt return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySnapshotAttributeOutput type ModifySnapshotAttributeOutput struct { _ struct{} `type:"structure"` } @@ -45352,16 +52432,15 @@ func (s ModifySnapshotAttributeOutput) GoString() string { } // Contains the parameters for ModifySpotFleetRequest. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySpotFleetRequestRequest type ModifySpotFleetRequestInput struct { _ struct{} `type:"structure"` - // Indicates whether running Spot instances should be terminated if the target - // capacity of the Spot fleet request is decreased below the current size of - // the Spot fleet. 
+ // Indicates whether running Spot Instances should be terminated if the target + // capacity of the Spot Fleet request is decreased below the current size of + // the Spot Fleet. ExcessCapacityTerminationPolicy *string `locationName:"excessCapacityTerminationPolicy" type:"string" enum:"ExcessCapacityTerminationPolicy"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -45412,7 +52491,6 @@ func (s *ModifySpotFleetRequestInput) SetTargetCapacity(v int64) *ModifySpotFlee } // Contains the output of ModifySpotFleetRequest. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySpotFleetRequestResponse type ModifySpotFleetRequestOutput struct { _ struct{} `type:"structure"` @@ -45437,7 +52515,6 @@ func (s *ModifySpotFleetRequestOutput) SetReturn(v bool) *ModifySpotFleetRequest } // Contains the parameters for ModifySubnetAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySubnetAttributeRequest type ModifySubnetAttributeInput struct { _ struct{} `type:"structure"` @@ -45504,7 +52581,6 @@ func (s *ModifySubnetAttributeInput) SetSubnetId(v string) *ModifySubnetAttribut return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifySubnetAttributeOutput type ModifySubnetAttributeOutput struct { _ struct{} `type:"structure"` } @@ -45520,7 +52596,6 @@ func (s ModifySubnetAttributeOutput) GoString() string { } // Contains the parameters for ModifyVolumeAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeAttributeRequest type ModifyVolumeAttributeInput struct { _ struct{} `type:"structure"` @@ -45580,7 +52655,6 @@ func (s *ModifyVolumeAttributeInput) SetVolumeId(v string) *ModifyVolumeAttribut return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeAttributeOutput type ModifyVolumeAttributeOutput struct { _ struct{} `type:"structure"` } @@ -45595,7 +52669,6 @@ func (s ModifyVolumeAttributeOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeRequest type ModifyVolumeInput struct { _ struct{} `type:"structure"` @@ -45687,7 +52760,6 @@ func (s *ModifyVolumeInput) SetVolumeType(v string) *ModifyVolumeInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVolumeResult type ModifyVolumeOutput struct { _ struct{} `type:"structure"` @@ -45712,7 +52784,6 @@ func (s *ModifyVolumeOutput) SetVolumeModification(v *VolumeModification) *Modif } // Contains the parameters for ModifyVpcAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcAttributeRequest type ModifyVpcAttributeInput struct { _ struct{} `type:"structure"` @@ -45781,7 +52852,6 @@ func (s *ModifyVpcAttributeInput) SetVpcId(v string) *ModifyVpcAttributeInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcAttributeOutput type ModifyVpcAttributeOutput struct { _ struct{} `type:"structure"` } @@ -45796,29 +52866,138 @@ func (s ModifyVpcAttributeOutput) GoString() string { return s.String() } +type ModifyVpcEndpointConnectionNotificationInput struct { + _ struct{} `type:"structure"` + + // One or more events for the endpoint. Valid values are Accept, Connect, Delete, + // and Reject. 
+ ConnectionEvents []*string `locationNameList:"item" type:"list"` + + // The ARN for the SNS topic for the notification. + ConnectionNotificationArn *string `type:"string"` + + // The ID of the notification. + // + // ConnectionNotificationId is a required field + ConnectionNotificationId *string `type:"string" required:"true"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` +} + +// String returns the string representation +func (s ModifyVpcEndpointConnectionNotificationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcEndpointConnectionNotificationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyVpcEndpointConnectionNotificationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyVpcEndpointConnectionNotificationInput"} + if s.ConnectionNotificationId == nil { + invalidParams.Add(request.NewErrParamRequired("ConnectionNotificationId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetConnectionEvents sets the ConnectionEvents field's value. +func (s *ModifyVpcEndpointConnectionNotificationInput) SetConnectionEvents(v []*string) *ModifyVpcEndpointConnectionNotificationInput { + s.ConnectionEvents = v + return s +} + +// SetConnectionNotificationArn sets the ConnectionNotificationArn field's value. +func (s *ModifyVpcEndpointConnectionNotificationInput) SetConnectionNotificationArn(v string) *ModifyVpcEndpointConnectionNotificationInput { + s.ConnectionNotificationArn = &v + return s +} + +// SetConnectionNotificationId sets the ConnectionNotificationId field's value. +func (s *ModifyVpcEndpointConnectionNotificationInput) SetConnectionNotificationId(v string) *ModifyVpcEndpointConnectionNotificationInput { + s.ConnectionNotificationId = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyVpcEndpointConnectionNotificationInput) SetDryRun(v bool) *ModifyVpcEndpointConnectionNotificationInput { + s.DryRun = &v + return s +} + +type ModifyVpcEndpointConnectionNotificationOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, it returns an error. + ReturnValue *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ModifyVpcEndpointConnectionNotificationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcEndpointConnectionNotificationOutput) GoString() string { + return s.String() +} + +// SetReturnValue sets the ReturnValue field's value. +func (s *ModifyVpcEndpointConnectionNotificationOutput) SetReturnValue(v bool) *ModifyVpcEndpointConnectionNotificationOutput { + s.ReturnValue = &v + return s +} + // Contains the parameters for ModifyVpcEndpoint. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointRequest type ModifyVpcEndpointInput struct { _ struct{} `type:"structure"` - // One or more route tables IDs to associate with the endpoint. + // (Gateway endpoint) One or more route tables IDs to associate with the endpoint. 
AddRouteTableIds []*string `locationName:"AddRouteTableId" locationNameList:"item" type:"list"` + // (Interface endpoint) One or more security group IDs to associate with the + // network interface. + AddSecurityGroupIds []*string `locationName:"AddSecurityGroupId" locationNameList:"item" type:"list"` + + // (Interface endpoint) One or more subnet IDs in which to serve the endpoint. + AddSubnetIds []*string `locationName:"AddSubnetId" locationNameList:"item" type:"list"` + // Checks whether you have the required permissions for the action, without // actually making the request, and provides an error response. If you have // the required permissions, the error response is DryRunOperation. Otherwise, // it is UnauthorizedOperation. DryRun *bool `type:"boolean"` - // A policy document to attach to the endpoint. The policy must be in valid - // JSON format. + // (Gateway endpoint) A policy document to attach to the endpoint. The policy + // must be in valid JSON format. PolicyDocument *string `type:"string"` - // One or more route table IDs to disassociate from the endpoint. + // (Interface endpoint) Indicate whether a private hosted zone is associated + // with the VPC. + PrivateDnsEnabled *bool `type:"boolean"` + + // (Gateway endpoint) One or more route table IDs to disassociate from the endpoint. RemoveRouteTableIds []*string `locationName:"RemoveRouteTableId" locationNameList:"item" type:"list"` - // Specify true to reset the policy document to the default policy. The default - // policy allows access to the service. + // (Interface endpoint) One or more security group IDs to disassociate from + // the network interface. + RemoveSecurityGroupIds []*string `locationName:"RemoveSecurityGroupId" locationNameList:"item" type:"list"` + + // (Interface endpoint) One or more subnets IDs in which to remove the endpoint. + RemoveSubnetIds []*string `locationName:"RemoveSubnetId" locationNameList:"item" type:"list"` + + // (Gateway endpoint) Specify true to reset the policy document to the default + // policy. The default policy allows full access to the service. ResetPolicy *bool `type:"boolean"` // The ID of the endpoint. @@ -45856,6 +53035,18 @@ func (s *ModifyVpcEndpointInput) SetAddRouteTableIds(v []*string) *ModifyVpcEndp return s } +// SetAddSecurityGroupIds sets the AddSecurityGroupIds field's value. +func (s *ModifyVpcEndpointInput) SetAddSecurityGroupIds(v []*string) *ModifyVpcEndpointInput { + s.AddSecurityGroupIds = v + return s +} + +// SetAddSubnetIds sets the AddSubnetIds field's value. +func (s *ModifyVpcEndpointInput) SetAddSubnetIds(v []*string) *ModifyVpcEndpointInput { + s.AddSubnetIds = v + return s +} + // SetDryRun sets the DryRun field's value. func (s *ModifyVpcEndpointInput) SetDryRun(v bool) *ModifyVpcEndpointInput { s.DryRun = &v @@ -45868,12 +53059,30 @@ func (s *ModifyVpcEndpointInput) SetPolicyDocument(v string) *ModifyVpcEndpointI return s } +// SetPrivateDnsEnabled sets the PrivateDnsEnabled field's value. +func (s *ModifyVpcEndpointInput) SetPrivateDnsEnabled(v bool) *ModifyVpcEndpointInput { + s.PrivateDnsEnabled = &v + return s +} + // SetRemoveRouteTableIds sets the RemoveRouteTableIds field's value. func (s *ModifyVpcEndpointInput) SetRemoveRouteTableIds(v []*string) *ModifyVpcEndpointInput { s.RemoveRouteTableIds = v return s } +// SetRemoveSecurityGroupIds sets the RemoveSecurityGroupIds field's value. 
+func (s *ModifyVpcEndpointInput) SetRemoveSecurityGroupIds(v []*string) *ModifyVpcEndpointInput { + s.RemoveSecurityGroupIds = v + return s +} + +// SetRemoveSubnetIds sets the RemoveSubnetIds field's value. +func (s *ModifyVpcEndpointInput) SetRemoveSubnetIds(v []*string) *ModifyVpcEndpointInput { + s.RemoveSubnetIds = v + return s +} + // SetResetPolicy sets the ResetPolicy field's value. func (s *ModifyVpcEndpointInput) SetResetPolicy(v bool) *ModifyVpcEndpointInput { s.ResetPolicy = &v @@ -45886,8 +53095,6 @@ func (s *ModifyVpcEndpointInput) SetVpcEndpointId(v string) *ModifyVpcEndpointIn return s } -// Contains the output of ModifyVpcEndpoint. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcEndpointResult type ModifyVpcEndpointOutput struct { _ struct{} `type:"structure"` @@ -45911,7 +53118,201 @@ func (s *ModifyVpcEndpointOutput) SetReturn(v bool) *ModifyVpcEndpointOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcPeeringConnectionOptionsRequest +type ModifyVpcEndpointServiceConfigurationInput struct { + _ struct{} `type:"structure"` + + // Indicate whether requests to create an endpoint to your service must be accepted. + AcceptanceRequired *bool `type:"boolean"` + + // The Amazon Resource Names (ARNs) of Network Load Balancers to add to your + // service configuration. + AddNetworkLoadBalancerArns []*string `locationName:"addNetworkLoadBalancerArn" locationNameList:"item" type:"list"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The Amazon Resource Names (ARNs) of Network Load Balancers to remove from + // your service configuration. + RemoveNetworkLoadBalancerArns []*string `locationName:"removeNetworkLoadBalancerArn" locationNameList:"item" type:"list"` + + // The ID of the service. + // + // ServiceId is a required field + ServiceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ModifyVpcEndpointServiceConfigurationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcEndpointServiceConfigurationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyVpcEndpointServiceConfigurationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyVpcEndpointServiceConfigurationInput"} + if s.ServiceId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAcceptanceRequired sets the AcceptanceRequired field's value. +func (s *ModifyVpcEndpointServiceConfigurationInput) SetAcceptanceRequired(v bool) *ModifyVpcEndpointServiceConfigurationInput { + s.AcceptanceRequired = &v + return s +} + +// SetAddNetworkLoadBalancerArns sets the AddNetworkLoadBalancerArns field's value. +func (s *ModifyVpcEndpointServiceConfigurationInput) SetAddNetworkLoadBalancerArns(v []*string) *ModifyVpcEndpointServiceConfigurationInput { + s.AddNetworkLoadBalancerArns = v + return s +} + +// SetDryRun sets the DryRun field's value. 
+func (s *ModifyVpcEndpointServiceConfigurationInput) SetDryRun(v bool) *ModifyVpcEndpointServiceConfigurationInput { + s.DryRun = &v + return s +} + +// SetRemoveNetworkLoadBalancerArns sets the RemoveNetworkLoadBalancerArns field's value. +func (s *ModifyVpcEndpointServiceConfigurationInput) SetRemoveNetworkLoadBalancerArns(v []*string) *ModifyVpcEndpointServiceConfigurationInput { + s.RemoveNetworkLoadBalancerArns = v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *ModifyVpcEndpointServiceConfigurationInput) SetServiceId(v string) *ModifyVpcEndpointServiceConfigurationInput { + s.ServiceId = &v + return s +} + +type ModifyVpcEndpointServiceConfigurationOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, it returns an error. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ModifyVpcEndpointServiceConfigurationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcEndpointServiceConfigurationOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *ModifyVpcEndpointServiceConfigurationOutput) SetReturn(v bool) *ModifyVpcEndpointServiceConfigurationOutput { + s.Return = &v + return s +} + +type ModifyVpcEndpointServicePermissionsInput struct { + _ struct{} `type:"structure"` + + // One or more Amazon Resource Names (ARNs) of principals for which to allow + // permission. Specify * to allow all principals. + AddAllowedPrincipals []*string `locationNameList:"item" type:"list"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // One or more Amazon Resource Names (ARNs) of principals for which to remove + // permission. + RemoveAllowedPrincipals []*string `locationNameList:"item" type:"list"` + + // The ID of the service. + // + // ServiceId is a required field + ServiceId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ModifyVpcEndpointServicePermissionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcEndpointServicePermissionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyVpcEndpointServicePermissionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyVpcEndpointServicePermissionsInput"} + if s.ServiceId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAddAllowedPrincipals sets the AddAllowedPrincipals field's value. +func (s *ModifyVpcEndpointServicePermissionsInput) SetAddAllowedPrincipals(v []*string) *ModifyVpcEndpointServicePermissionsInput { + s.AddAllowedPrincipals = v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyVpcEndpointServicePermissionsInput) SetDryRun(v bool) *ModifyVpcEndpointServicePermissionsInput { + s.DryRun = &v + return s +} + +// SetRemoveAllowedPrincipals sets the RemoveAllowedPrincipals field's value. 
+func (s *ModifyVpcEndpointServicePermissionsInput) SetRemoveAllowedPrincipals(v []*string) *ModifyVpcEndpointServicePermissionsInput { + s.RemoveAllowedPrincipals = v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *ModifyVpcEndpointServicePermissionsInput) SetServiceId(v string) *ModifyVpcEndpointServicePermissionsInput { + s.ServiceId = &v + return s +} + +type ModifyVpcEndpointServicePermissionsOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, it returns an error. + ReturnValue *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ModifyVpcEndpointServicePermissionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcEndpointServicePermissionsOutput) GoString() string { + return s.String() +} + +// SetReturnValue sets the ReturnValue field's value. +func (s *ModifyVpcEndpointServicePermissionsOutput) SetReturnValue(v bool) *ModifyVpcEndpointServicePermissionsOutput { + s.ReturnValue = &v + return s +} + type ModifyVpcPeeringConnectionOptionsInput struct { _ struct{} `type:"structure"` @@ -45980,7 +53381,6 @@ func (s *ModifyVpcPeeringConnectionOptionsInput) SetVpcPeeringConnectionId(v str return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ModifyVpcPeeringConnectionOptionsResult type ModifyVpcPeeringConnectionOptionsOutput struct { _ struct{} `type:"structure"` @@ -46013,8 +53413,96 @@ func (s *ModifyVpcPeeringConnectionOptionsOutput) SetRequesterPeeringConnectionO return s } +// Contains the parameters for ModifyVpcTenancy. +type ModifyVpcTenancyInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the operation, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The instance tenancy attribute for the VPC. + // + // InstanceTenancy is a required field + InstanceTenancy *string `type:"string" required:"true" enum:"VpcTenancy"` + + // The ID of the VPC. + // + // VpcId is a required field + VpcId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ModifyVpcTenancyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcTenancyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ModifyVpcTenancyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ModifyVpcTenancyInput"} + if s.InstanceTenancy == nil { + invalidParams.Add(request.NewErrParamRequired("InstanceTenancy")) + } + if s.VpcId == nil { + invalidParams.Add(request.NewErrParamRequired("VpcId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *ModifyVpcTenancyInput) SetDryRun(v bool) *ModifyVpcTenancyInput { + s.DryRun = &v + return s +} + +// SetInstanceTenancy sets the InstanceTenancy field's value. +func (s *ModifyVpcTenancyInput) SetInstanceTenancy(v string) *ModifyVpcTenancyInput { + s.InstanceTenancy = &v + return s +} + +// SetVpcId sets the VpcId field's value. 
+func (s *ModifyVpcTenancyInput) SetVpcId(v string) *ModifyVpcTenancyInput { + s.VpcId = &v + return s +} + +// Contains the output of ModifyVpcTenancy. +type ModifyVpcTenancyOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, returns an error. + ReturnValue *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ModifyVpcTenancyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyVpcTenancyOutput) GoString() string { + return s.String() +} + +// SetReturnValue sets the ReturnValue field's value. +func (s *ModifyVpcTenancyOutput) SetReturnValue(v bool) *ModifyVpcTenancyOutput { + s.ReturnValue = &v + return s +} + // Contains the parameters for MonitorInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MonitorInstancesRequest type MonitorInstancesInput struct { _ struct{} `type:"structure"` @@ -46066,7 +53554,6 @@ func (s *MonitorInstancesInput) SetInstanceIds(v []*string) *MonitorInstancesInp } // Contains the output of MonitorInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MonitorInstancesResult type MonitorInstancesOutput struct { _ struct{} `type:"structure"` @@ -46091,7 +53578,6 @@ func (s *MonitorInstancesOutput) SetInstanceMonitorings(v []*InstanceMonitoring) } // Describes the monitoring of an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Monitoring type Monitoring struct { _ struct{} `type:"structure"` @@ -46117,7 +53603,6 @@ func (s *Monitoring) SetState(v string) *Monitoring { } // Contains the parameters for MoveAddressToVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MoveAddressToVpcRequest type MoveAddressToVpcInput struct { _ struct{} `type:"structure"` @@ -46169,7 +53654,6 @@ func (s *MoveAddressToVpcInput) SetPublicIp(v string) *MoveAddressToVpcInput { } // Contains the output of MoveAddressToVpc. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MoveAddressToVpcResult type MoveAddressToVpcOutput struct { _ struct{} `type:"structure"` @@ -46203,7 +53687,6 @@ func (s *MoveAddressToVpcOutput) SetStatus(v string) *MoveAddressToVpcOutput { } // Describes the status of a moving Elastic IP address. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MovingAddressStatus type MovingAddressStatus struct { _ struct{} `type:"structure"` @@ -46238,7 +53721,6 @@ func (s *MovingAddressStatus) SetPublicIp(v string) *MovingAddressStatus { } // Describes a NAT gateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NatGateway type NatGateway struct { _ struct{} `type:"structure"` @@ -46309,6 +53791,9 @@ type NatGateway struct { // The ID of the subnet in which the NAT gateway is located. SubnetId *string `locationName:"subnetId" type:"string"` + // The tags for the NAT gateway. + Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` + // The ID of the VPC in which the NAT gateway is located. VpcId *string `locationName:"vpcId" type:"string"` } @@ -46377,6 +53862,12 @@ func (s *NatGateway) SetSubnetId(v string) *NatGateway { return s } +// SetTags sets the Tags field's value. +func (s *NatGateway) SetTags(v []*Tag) *NatGateway { + s.Tags = v + return s +} + // SetVpcId sets the VpcId field's value. 
func (s *NatGateway) SetVpcId(v string) *NatGateway { s.VpcId = &v @@ -46384,7 +53875,6 @@ func (s *NatGateway) SetVpcId(v string) *NatGateway { } // Describes the IP addresses and network interface associated with a NAT gateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NatGatewayAddress type NatGatewayAddress struct { _ struct{} `type:"structure"` @@ -46437,7 +53927,6 @@ func (s *NatGatewayAddress) SetPublicIp(v string) *NatGatewayAddress { } // Describes a network ACL. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkAcl type NetworkAcl struct { _ struct{} `type:"structure"` @@ -46507,7 +53996,6 @@ func (s *NetworkAcl) SetVpcId(v string) *NetworkAcl { } // Describes an association between a network ACL and a subnet. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkAclAssociation type NetworkAclAssociation struct { _ struct{} `type:"structure"` @@ -46550,7 +54038,6 @@ func (s *NetworkAclAssociation) SetSubnetId(v string) *NetworkAclAssociation { } // Describes an entry in a network ACL. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkAclEntry type NetworkAclEntry struct { _ struct{} `type:"structure"` @@ -46640,7 +54127,6 @@ func (s *NetworkAclEntry) SetRuleNumber(v int64) *NetworkAclEntry { } // Describes a network interface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterface type NetworkInterface struct { _ struct{} `type:"structure"` @@ -46838,7 +54324,6 @@ func (s *NetworkInterface) SetVpcId(v string) *NetworkInterface { } // Describes association information for an Elastic IP address (IPv4 only). -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterfaceAssociation type NetworkInterfaceAssociation struct { _ struct{} `type:"structure"` @@ -46899,7 +54384,6 @@ func (s *NetworkInterfaceAssociation) SetPublicIp(v string) *NetworkInterfaceAss } // Describes a network interface attachment. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterfaceAttachment type NetworkInterfaceAttachment struct { _ struct{} `type:"structure"` @@ -46978,7 +54462,6 @@ func (s *NetworkInterfaceAttachment) SetStatus(v string) *NetworkInterfaceAttach } // Describes an attachment change. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterfaceAttachmentChanges type NetworkInterfaceAttachmentChanges struct { _ struct{} `type:"structure"` @@ -47012,7 +54495,6 @@ func (s *NetworkInterfaceAttachmentChanges) SetDeleteOnTermination(v bool) *Netw } // Describes an IPv6 address associated with a network interface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterfaceIpv6Address type NetworkInterfaceIpv6Address struct { _ struct{} `type:"structure"` @@ -47037,7 +54519,6 @@ func (s *NetworkInterfaceIpv6Address) SetIpv6Address(v string) *NetworkInterface } // Describes a permission for a network interface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterfacePermission type NetworkInterfacePermission struct { _ struct{} `type:"structure"` @@ -47107,7 +54588,6 @@ func (s *NetworkInterfacePermission) SetPermissionState(v *NetworkInterfacePermi } // Describes the state of a network interface permission. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterfacePermissionState type NetworkInterfacePermissionState struct { _ struct{} `type:"structure"` @@ -47141,7 +54621,6 @@ func (s *NetworkInterfacePermissionState) SetStatusMessage(v string) *NetworkInt } // Describes the private IPv4 address of a network interface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NetworkInterfacePrivateIpAddress type NetworkInterfacePrivateIpAddress struct { _ struct{} `type:"structure"` @@ -47194,7 +54673,6 @@ func (s *NetworkInterfacePrivateIpAddress) SetPrivateIpAddress(v string) *Networ return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/NewDhcpConfiguration type NewDhcpConfiguration struct { _ struct{} `type:"structure"` @@ -47227,7 +54705,6 @@ func (s *NewDhcpConfiguration) SetValues(v []*string) *NewDhcpConfiguration { // Describes the data that identifies an Amazon FPGA image (AFI) on the PCI // bus. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PciId type PciId struct { _ struct{} `type:"structure"` @@ -47279,7 +54756,6 @@ func (s *PciId) SetVendorId(v string) *PciId { } // Describes the VPC peering connection options. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PeeringConnectionOptions type PeeringConnectionOptions struct { _ struct{} `type:"structure"` @@ -47325,7 +54801,6 @@ func (s *PeeringConnectionOptions) SetAllowEgressFromLocalVpcToRemoteClassicLink } // The VPC peering connection options. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PeeringConnectionOptionsRequest type PeeringConnectionOptionsRequest struct { _ struct{} `type:"structure"` @@ -47371,7 +54846,6 @@ func (s *PeeringConnectionOptionsRequest) SetAllowEgressFromLocalVpcToRemoteClas } // Describes the placement of an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Placement type Placement struct { _ struct{} `type:"structure"` @@ -47445,7 +54919,6 @@ func (s *Placement) SetTenancy(v string) *Placement { } // Describes a placement group. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PlacementGroup type PlacementGroup struct { _ struct{} `type:"structure"` @@ -47488,7 +54961,6 @@ func (s *PlacementGroup) SetStrategy(v string) *PlacementGroup { } // Describes a range of ports. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PortRange type PortRange struct { _ struct{} `type:"structure"` @@ -47522,7 +54994,6 @@ func (s *PortRange) SetTo(v int64) *PortRange { } // Describes prefixes for AWS services. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PrefixList type PrefixList struct { _ struct{} `type:"structure"` @@ -47564,11 +55035,17 @@ func (s *PrefixList) SetPrefixListName(v string) *PrefixList { return s } -// The ID of the prefix. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PrefixListId +// [EC2-VPC only] The ID of the prefix. type PrefixListId struct { _ struct{} `type:"structure"` + // A description for the security group rule that references this prefix list + // ID. + // + // Constraints: Up to 255 characters in length. Allowed characters are a-z, + // A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$* + Description *string `locationName:"description" type:"string"` + // The ID of the prefix. 
PrefixListId *string `locationName:"prefixListId" type:"string"` } @@ -47583,6 +55060,12 @@ func (s PrefixListId) GoString() string { return s.String() } +// SetDescription sets the Description field's value. +func (s *PrefixListId) SetDescription(v string) *PrefixListId { + s.Description = &v + return s +} + // SetPrefixListId sets the PrefixListId field's value. func (s *PrefixListId) SetPrefixListId(v string) *PrefixListId { s.PrefixListId = &v @@ -47590,7 +55073,6 @@ func (s *PrefixListId) SetPrefixListId(v string) *PrefixListId { } // Describes the price for a Reserved Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PriceSchedule type PriceSchedule struct { _ struct{} `type:"structure"` @@ -47653,7 +55135,6 @@ func (s *PriceSchedule) SetTerm(v int64) *PriceSchedule { } // Describes the price for a Reserved Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PriceScheduleSpecification type PriceScheduleSpecification struct { _ struct{} `type:"structure"` @@ -47698,7 +55179,6 @@ func (s *PriceScheduleSpecification) SetTerm(v int64) *PriceScheduleSpecificatio } // Describes a Reserved Instance offering. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PricingDetail type PricingDetail struct { _ struct{} `type:"structure"` @@ -47732,7 +55212,6 @@ func (s *PricingDetail) SetPrice(v float64) *PricingDetail { } // Describes a secondary private IPv4 address for a network interface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PrivateIpAddressSpecification type PrivateIpAddressSpecification struct { _ struct{} `type:"structure"` @@ -47782,7 +55261,6 @@ func (s *PrivateIpAddressSpecification) SetPrivateIpAddress(v string) *PrivateIp } // Describes a product code. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ProductCode type ProductCode struct { _ struct{} `type:"structure"` @@ -47816,7 +55294,6 @@ func (s *ProductCode) SetProductCodeType(v string) *ProductCode { } // Describes a virtual private gateway propagating route. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PropagatingVgw type PropagatingVgw struct { _ struct{} `type:"structure"` @@ -47843,7 +55320,6 @@ func (s *PropagatingVgw) SetGatewayId(v string) *PropagatingVgw { // Reserved. If you need to sustain traffic greater than the documented limits // (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html), // contact us through the Support Center (https://console.aws.amazon.com/support/home?). -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ProvisionedBandwidth type ProvisionedBandwidth struct { _ struct{} `type:"structure"` @@ -47914,7 +55390,6 @@ func (s *ProvisionedBandwidth) SetStatus(v string) *ProvisionedBandwidth { } // Describes the result of the purchase. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Purchase type Purchase struct { _ struct{} `type:"structure"` @@ -48003,7 +55478,6 @@ func (s *Purchase) SetUpfrontPrice(v string) *Purchase { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseHostReservationRequest type PurchaseHostReservationInput struct { _ struct{} `type:"structure"` @@ -48093,7 +55567,6 @@ func (s *PurchaseHostReservationInput) SetOfferingId(v string) *PurchaseHostRese return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseHostReservationResult type PurchaseHostReservationOutput struct { _ struct{} `type:"structure"` @@ -48107,7 +55580,7 @@ type PurchaseHostReservationOutput struct { CurrencyCode *string `locationName:"currencyCode" type:"string" enum:"CurrencyCodeValues"` // Describes the details of the purchase. - Purchase []*Purchase `locationName:"purchase" type:"list"` + Purchase []*Purchase `locationName:"purchase" locationNameList:"item" type:"list"` // The total hourly price of the reservation calculated per hour. TotalHourlyPrice *string `locationName:"totalHourlyPrice" type:"string"` @@ -48158,7 +55631,6 @@ func (s *PurchaseHostReservationOutput) SetTotalUpfrontPrice(v string) *Purchase } // Describes a request to purchase Scheduled Instances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseRequest type PurchaseRequest struct { _ struct{} `type:"structure"` @@ -48212,7 +55684,6 @@ func (s *PurchaseRequest) SetPurchaseToken(v string) *PurchaseRequest { } // Contains the parameters for PurchaseReservedInstancesOffering. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseReservedInstancesOfferingRequest type PurchaseReservedInstancesOfferingInput struct { _ struct{} `type:"structure"` @@ -48289,7 +55760,6 @@ func (s *PurchaseReservedInstancesOfferingInput) SetReservedInstancesOfferingId( } // Contains the output of PurchaseReservedInstancesOffering. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseReservedInstancesOfferingResult type PurchaseReservedInstancesOfferingOutput struct { _ struct{} `type:"structure"` @@ -48314,7 +55784,6 @@ func (s *PurchaseReservedInstancesOfferingOutput) SetReservedInstancesId(v strin } // Contains the parameters for PurchaseScheduledInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseScheduledInstancesRequest type PurchaseScheduledInstancesInput struct { _ struct{} `type:"structure"` @@ -48389,7 +55858,6 @@ func (s *PurchaseScheduledInstancesInput) SetPurchaseRequests(v []*PurchaseReque } // Contains the output of PurchaseScheduledInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/PurchaseScheduledInstancesResult type PurchaseScheduledInstancesOutput struct { _ struct{} `type:"structure"` @@ -48414,7 +55882,6 @@ func (s *PurchaseScheduledInstancesOutput) SetScheduledInstanceSet(v []*Schedule } // Contains the parameters for RebootInstances. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RebootInstancesRequest type RebootInstancesInput struct { _ struct{} `type:"structure"` @@ -48465,7 +55932,6 @@ func (s *RebootInstancesInput) SetInstanceIds(v []*string) *RebootInstancesInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RebootInstancesOutput type RebootInstancesOutput struct { _ struct{} `type:"structure"` } @@ -48481,7 +55947,6 @@ func (s RebootInstancesOutput) GoString() string { } // Describes a recurring charge. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RecurringCharge type RecurringCharge struct { _ struct{} `type:"structure"` @@ -48515,7 +55980,6 @@ func (s *RecurringCharge) SetFrequency(v string) *RecurringCharge { } // Describes a region. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Region type Region struct { _ struct{} `type:"structure"` @@ -48549,7 +56013,6 @@ func (s *Region) SetRegionName(v string) *Region { } // Contains the parameters for RegisterImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RegisterImageRequest type RegisterImageInput struct { _ struct{} `type:"structure"` @@ -48601,7 +56064,7 @@ type RegisterImageInput struct { // The ID of the RAM disk. RamdiskId *string `locationName:"ramdiskId" type:"string"` - // The name of the root device (for example, /dev/sda1, or /dev/xvda). + // The device name of the root device volume (for example, /dev/sda1). RootDeviceName *string `locationName:"rootDeviceName" type:"string"` // Set to simple to enable enhanced networking with the Intel 82599 Virtual @@ -48614,7 +56077,7 @@ type RegisterImageInput struct { // PV AMI can make instances launched from the AMI unreachable. SriovNetSupport *string `locationName:"sriovNetSupport" type:"string"` - // The type of virtualization. + // The type of virtualization (hvm | paravirtual). // // Default: paravirtual VirtualizationType *string `locationName:"virtualizationType" type:"string"` @@ -48722,7 +56185,6 @@ func (s *RegisterImageInput) SetVirtualizationType(v string) *RegisterImageInput } // Contains the output of RegisterImage. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RegisterImageResult type RegisterImageOutput struct { _ struct{} `type:"structure"` @@ -48746,8 +56208,94 @@ func (s *RegisterImageOutput) SetImageId(v string) *RegisterImageOutput { return s } +type RejectVpcEndpointConnectionsInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the service. + // + // ServiceId is a required field + ServiceId *string `type:"string" required:"true"` + + // The IDs of one or more VPC endpoints. + // + // VpcEndpointIds is a required field + VpcEndpointIds []*string `locationName:"VpcEndpointId" locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s RejectVpcEndpointConnectionsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RejectVpcEndpointConnectionsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *RejectVpcEndpointConnectionsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RejectVpcEndpointConnectionsInput"} + if s.ServiceId == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceId")) + } + if s.VpcEndpointIds == nil { + invalidParams.Add(request.NewErrParamRequired("VpcEndpointIds")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *RejectVpcEndpointConnectionsInput) SetDryRun(v bool) *RejectVpcEndpointConnectionsInput { + s.DryRun = &v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *RejectVpcEndpointConnectionsInput) SetServiceId(v string) *RejectVpcEndpointConnectionsInput { + s.ServiceId = &v + return s +} + +// SetVpcEndpointIds sets the VpcEndpointIds field's value. +func (s *RejectVpcEndpointConnectionsInput) SetVpcEndpointIds(v []*string) *RejectVpcEndpointConnectionsInput { + s.VpcEndpointIds = v + return s +} + +type RejectVpcEndpointConnectionsOutput struct { + _ struct{} `type:"structure"` + + // Information about the endpoints that were not rejected, if applicable. + Unsuccessful []*UnsuccessfulItem `locationName:"unsuccessful" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s RejectVpcEndpointConnectionsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RejectVpcEndpointConnectionsOutput) GoString() string { + return s.String() +} + +// SetUnsuccessful sets the Unsuccessful field's value. +func (s *RejectVpcEndpointConnectionsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *RejectVpcEndpointConnectionsOutput { + s.Unsuccessful = v + return s +} + // Contains the parameters for RejectVpcPeeringConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcPeeringConnectionRequest type RejectVpcPeeringConnectionInput struct { _ struct{} `type:"structure"` @@ -48799,7 +56347,6 @@ func (s *RejectVpcPeeringConnectionInput) SetVpcPeeringConnectionId(v string) *R } // Contains the output of RejectVpcPeeringConnection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RejectVpcPeeringConnectionResult type RejectVpcPeeringConnectionOutput struct { _ struct{} `type:"structure"` @@ -48824,7 +56371,6 @@ func (s *RejectVpcPeeringConnectionOutput) SetReturn(v bool) *RejectVpcPeeringCo } // Contains the parameters for ReleaseAddress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseAddressRequest type ReleaseAddressInput struct { _ struct{} `type:"structure"` @@ -48869,7 +56415,6 @@ func (s *ReleaseAddressInput) SetPublicIp(v string) *ReleaseAddressInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseAddressOutput type ReleaseAddressOutput struct { _ struct{} `type:"structure"` } @@ -48885,7 +56430,6 @@ func (s ReleaseAddressOutput) GoString() string { } // Contains the parameters for ReleaseHosts. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseHostsRequest type ReleaseHostsInput struct { _ struct{} `type:"structure"` @@ -48925,7 +56469,6 @@ func (s *ReleaseHostsInput) SetHostIds(v []*string) *ReleaseHostsInput { } // Contains the output of ReleaseHosts. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReleaseHostsResult type ReleaseHostsOutput struct { _ struct{} `type:"structure"` @@ -48959,7 +56502,6 @@ func (s *ReleaseHostsOutput) SetUnsuccessful(v []*UnsuccessfulItem) *ReleaseHost return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceIamInstanceProfileAssociationRequest type ReplaceIamInstanceProfileAssociationInput struct { _ struct{} `type:"structure"` @@ -49012,7 +56554,6 @@ func (s *ReplaceIamInstanceProfileAssociationInput) SetIamInstanceProfile(v *Iam return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceIamInstanceProfileAssociationResult type ReplaceIamInstanceProfileAssociationOutput struct { _ struct{} `type:"structure"` @@ -49037,7 +56578,6 @@ func (s *ReplaceIamInstanceProfileAssociationOutput) SetIamInstanceProfileAssoci } // Contains the parameters for ReplaceNetworkAclAssociation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclAssociationRequest type ReplaceNetworkAclAssociationInput struct { _ struct{} `type:"structure"` @@ -49104,7 +56644,6 @@ func (s *ReplaceNetworkAclAssociationInput) SetNetworkAclId(v string) *ReplaceNe } // Contains the output of ReplaceNetworkAclAssociation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclAssociationResult type ReplaceNetworkAclAssociationOutput struct { _ struct{} `type:"structure"` @@ -49129,7 +56668,6 @@ func (s *ReplaceNetworkAclAssociationOutput) SetNewAssociationId(v string) *Repl } // Contains the parameters for ReplaceNetworkAclEntry. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclEntryRequest type ReplaceNetworkAclEntryInput struct { _ struct{} `type:"structure"` @@ -49282,7 +56820,6 @@ func (s *ReplaceNetworkAclEntryInput) SetRuleNumber(v int64) *ReplaceNetworkAclE return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceNetworkAclEntryOutput type ReplaceNetworkAclEntryOutput struct { _ struct{} `type:"structure"` } @@ -49298,7 +56835,6 @@ func (s ReplaceNetworkAclEntryOutput) GoString() string { } // Contains the parameters for ReplaceRoute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteRequest type ReplaceRouteInput struct { _ struct{} `type:"structure"` @@ -49423,7 +56959,6 @@ func (s *ReplaceRouteInput) SetVpcPeeringConnectionId(v string) *ReplaceRouteInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteOutput type ReplaceRouteOutput struct { _ struct{} `type:"structure"` } @@ -49439,7 +56974,6 @@ func (s ReplaceRouteOutput) GoString() string { } // Contains the parameters for ReplaceRouteTableAssociation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteTableAssociationRequest type ReplaceRouteTableAssociationInput struct { _ struct{} `type:"structure"` @@ -49505,7 +57039,6 @@ func (s *ReplaceRouteTableAssociationInput) SetRouteTableId(v string) *ReplaceRo } // Contains the output of ReplaceRouteTableAssociation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReplaceRouteTableAssociationResult type ReplaceRouteTableAssociationOutput struct { _ struct{} `type:"structure"` @@ -49530,7 +57063,6 @@ func (s *ReplaceRouteTableAssociationOutput) SetNewAssociationId(v string) *Repl } // Contains the parameters for ReportInstanceStatus. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReportInstanceStatusRequest type ReportInstanceStatusInput struct { _ struct{} `type:"structure"` @@ -49551,7 +57083,7 @@ type ReportInstanceStatusInput struct { // Instances is a required field Instances []*string `locationName:"instanceId" locationNameList:"InstanceId" type:"list" required:"true"` - // One or more reason codes that describes the health state of your instance. + // One or more reason codes that describe the health state of your instance. // // * instance-stuck-in-state: My instance is stuck in a state. // @@ -49562,13 +57094,13 @@ type ReportInstanceStatusInput struct { // * password-not-available: A password is not available for my instance. // // * performance-network: My instance is experiencing performance problems - // which I believe are network related. + // that I believe are network related. // // * performance-instance-store: My instance is experiencing performance - // problems which I believe are related to the instance stores. + // problems that I believe are related to the instance stores. // // * performance-ebs-volume: My instance is experiencing performance problems - // which I believe are related to an EBS volume. + // that I believe are related to an EBS volume. // // * performance-other: My instance is experiencing performance problems. // @@ -49657,7 +57189,6 @@ func (s *ReportInstanceStatusInput) SetStatus(v string) *ReportInstanceStatusInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReportInstanceStatusOutput type ReportInstanceStatusOutput struct { _ struct{} `type:"structure"` } @@ -49672,8 +57203,275 @@ func (s ReportInstanceStatusOutput) GoString() string { return s.String() } +// The information to include in the launch template. +type RequestLaunchTemplateData struct { + _ struct{} `type:"structure"` + + // The block device mapping. + // + // Supplying both a snapshot ID and an encryption value as arguments for block-device + // mapping results in an error. This is because only blank volumes can be encrypted + // on start, and these are not created from a snapshot. If a snapshot is the + // basis for the volume, it contains data by definition and its encryption status + // cannot be changed using this action. + BlockDeviceMappings []*LaunchTemplateBlockDeviceMappingRequest `locationName:"BlockDeviceMapping" locationNameList:"BlockDeviceMapping" type:"list"` + + // The credit option for CPU usage of the instance. Valid for T2 instances only. + CreditSpecification *CreditSpecificationRequest `type:"structure"` + + // If set to true, you can't terminate the instance using the Amazon EC2 console, + // CLI, or API. To change this attribute to false after launch, use ModifyInstanceAttribute. + DisableApiTermination *bool `type:"boolean"` + + // Indicates whether the instance is optimized for Amazon EBS I/O. This optimization + // provides dedicated throughput to Amazon EBS and an optimized configuration + // stack to provide optimal Amazon EBS I/O performance. This optimization isn't + // available with all instance types. Additional usage charges apply when using + // an EBS-optimized instance. + EbsOptimized *bool `type:"boolean"` + + // An elastic GPU to associate with the instance. + ElasticGpuSpecifications []*ElasticGpuSpecification `locationName:"ElasticGpuSpecification" locationNameList:"ElasticGpuSpecification" type:"list"` + + // The IAM instance profile. 
+ IamInstanceProfile *LaunchTemplateIamInstanceProfileSpecificationRequest `type:"structure"` + + // The ID of the AMI, which you can get by using DescribeImages. + ImageId *string `type:"string"` + + // Indicates whether an instance stops or terminates when you initiate shutdown + // from the instance (using the operating system command for system shutdown). + // + // Default: stop + InstanceInitiatedShutdownBehavior *string `type:"string" enum:"ShutdownBehavior"` + + // The market (purchasing) option for the instances. + InstanceMarketOptions *LaunchTemplateInstanceMarketOptionsRequest `type:"structure"` + + // The instance type. For more information, see Instance Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) + // in the Amazon Elastic Compute Cloud User Guide. + InstanceType *string `type:"string" enum:"InstanceType"` + + // The ID of the kernel. + // + // We recommend that you use PV-GRUB instead of kernels and RAM disks. For more + // information, see User Provided Kernels (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedkernels.html) + // in the Amazon Elastic Compute Cloud User Guide. + KernelId *string `type:"string"` + + // The name of the key pair. You can create a key pair using CreateKeyPair or + // ImportKeyPair. + // + // If you do not specify a key pair, you can't connect to the instance unless + // you choose an AMI that is configured to allow users another way to log in. + KeyName *string `type:"string"` + + // The monitoring for the instance. + Monitoring *LaunchTemplatesMonitoringRequest `type:"structure"` + + // One or more network interfaces. + NetworkInterfaces []*LaunchTemplateInstanceNetworkInterfaceSpecificationRequest `locationName:"NetworkInterface" locationNameList:"InstanceNetworkInterfaceSpecification" type:"list"` + + // The placement for the instance. + Placement *LaunchTemplatePlacementRequest `type:"structure"` + + // The ID of the RAM disk. + // + // We recommend that you use PV-GRUB instead of kernels and RAM disks. For more + // information, see User Provided Kernels (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedkernels.html) + // in the Amazon Elastic Compute Cloud User Guide. + RamDiskId *string `type:"string"` + + // One or more security group IDs. You can create a security group using CreateSecurityGroup. + // You cannot specify both a security group ID and security name in the same + // request. + SecurityGroupIds []*string `locationName:"SecurityGroupId" locationNameList:"SecurityGroupId" type:"list"` + + // [EC2-Classic, default VPC] One or more security group names. For a nondefault + // VPC, you must use security group IDs instead. You cannot specify both a security + // group ID and security name in the same request. + SecurityGroups []*string `locationName:"SecurityGroup" locationNameList:"SecurityGroup" type:"list"` + + // The tags to apply to the resources during launch. You can tag instances and + // volumes. The specified tags are applied to all instances or volumes that + // are created during launch. + TagSpecifications []*LaunchTemplateTagSpecificationRequest `locationName:"TagSpecification" locationNameList:"LaunchTemplateTagSpecificationRequest" type:"list"` + + // The user data to make available to the instance. 
For more information, see + // Running Commands on Your Linux Instance at Launch (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) + // (Linux) and Adding User Data (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html#instancedata-add-user-data) + // (Windows). If you are using a command line tool, base64-encoding is performed + // for you and you can load the text from a file. Otherwise, you must provide + // base64-encoded text. + UserData *string `type:"string"` +} + +// String returns the string representation +func (s RequestLaunchTemplateData) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RequestLaunchTemplateData) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *RequestLaunchTemplateData) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "RequestLaunchTemplateData"} + if s.CreditSpecification != nil { + if err := s.CreditSpecification.Validate(); err != nil { + invalidParams.AddNested("CreditSpecification", err.(request.ErrInvalidParams)) + } + } + if s.ElasticGpuSpecifications != nil { + for i, v := range s.ElasticGpuSpecifications { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "ElasticGpuSpecifications", i), err.(request.ErrInvalidParams)) + } + } + } + if s.NetworkInterfaces != nil { + for i, v := range s.NetworkInterfaces { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "NetworkInterfaces", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBlockDeviceMappings sets the BlockDeviceMappings field's value. +func (s *RequestLaunchTemplateData) SetBlockDeviceMappings(v []*LaunchTemplateBlockDeviceMappingRequest) *RequestLaunchTemplateData { + s.BlockDeviceMappings = v + return s +} + +// SetCreditSpecification sets the CreditSpecification field's value. +func (s *RequestLaunchTemplateData) SetCreditSpecification(v *CreditSpecificationRequest) *RequestLaunchTemplateData { + s.CreditSpecification = v + return s +} + +// SetDisableApiTermination sets the DisableApiTermination field's value. +func (s *RequestLaunchTemplateData) SetDisableApiTermination(v bool) *RequestLaunchTemplateData { + s.DisableApiTermination = &v + return s +} + +// SetEbsOptimized sets the EbsOptimized field's value. +func (s *RequestLaunchTemplateData) SetEbsOptimized(v bool) *RequestLaunchTemplateData { + s.EbsOptimized = &v + return s +} + +// SetElasticGpuSpecifications sets the ElasticGpuSpecifications field's value. +func (s *RequestLaunchTemplateData) SetElasticGpuSpecifications(v []*ElasticGpuSpecification) *RequestLaunchTemplateData { + s.ElasticGpuSpecifications = v + return s +} + +// SetIamInstanceProfile sets the IamInstanceProfile field's value. +func (s *RequestLaunchTemplateData) SetIamInstanceProfile(v *LaunchTemplateIamInstanceProfileSpecificationRequest) *RequestLaunchTemplateData { + s.IamInstanceProfile = v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *RequestLaunchTemplateData) SetImageId(v string) *RequestLaunchTemplateData { + s.ImageId = &v + return s +} + +// SetInstanceInitiatedShutdownBehavior sets the InstanceInitiatedShutdownBehavior field's value. 
+func (s *RequestLaunchTemplateData) SetInstanceInitiatedShutdownBehavior(v string) *RequestLaunchTemplateData { + s.InstanceInitiatedShutdownBehavior = &v + return s +} + +// SetInstanceMarketOptions sets the InstanceMarketOptions field's value. +func (s *RequestLaunchTemplateData) SetInstanceMarketOptions(v *LaunchTemplateInstanceMarketOptionsRequest) *RequestLaunchTemplateData { + s.InstanceMarketOptions = v + return s +} + +// SetInstanceType sets the InstanceType field's value. +func (s *RequestLaunchTemplateData) SetInstanceType(v string) *RequestLaunchTemplateData { + s.InstanceType = &v + return s +} + +// SetKernelId sets the KernelId field's value. +func (s *RequestLaunchTemplateData) SetKernelId(v string) *RequestLaunchTemplateData { + s.KernelId = &v + return s +} + +// SetKeyName sets the KeyName field's value. +func (s *RequestLaunchTemplateData) SetKeyName(v string) *RequestLaunchTemplateData { + s.KeyName = &v + return s +} + +// SetMonitoring sets the Monitoring field's value. +func (s *RequestLaunchTemplateData) SetMonitoring(v *LaunchTemplatesMonitoringRequest) *RequestLaunchTemplateData { + s.Monitoring = v + return s +} + +// SetNetworkInterfaces sets the NetworkInterfaces field's value. +func (s *RequestLaunchTemplateData) SetNetworkInterfaces(v []*LaunchTemplateInstanceNetworkInterfaceSpecificationRequest) *RequestLaunchTemplateData { + s.NetworkInterfaces = v + return s +} + +// SetPlacement sets the Placement field's value. +func (s *RequestLaunchTemplateData) SetPlacement(v *LaunchTemplatePlacementRequest) *RequestLaunchTemplateData { + s.Placement = v + return s +} + +// SetRamDiskId sets the RamDiskId field's value. +func (s *RequestLaunchTemplateData) SetRamDiskId(v string) *RequestLaunchTemplateData { + s.RamDiskId = &v + return s +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *RequestLaunchTemplateData) SetSecurityGroupIds(v []*string) *RequestLaunchTemplateData { + s.SecurityGroupIds = v + return s +} + +// SetSecurityGroups sets the SecurityGroups field's value. +func (s *RequestLaunchTemplateData) SetSecurityGroups(v []*string) *RequestLaunchTemplateData { + s.SecurityGroups = v + return s +} + +// SetTagSpecifications sets the TagSpecifications field's value. +func (s *RequestLaunchTemplateData) SetTagSpecifications(v []*LaunchTemplateTagSpecificationRequest) *RequestLaunchTemplateData { + s.TagSpecifications = v + return s +} + +// SetUserData sets the UserData field's value. +func (s *RequestLaunchTemplateData) SetUserData(v string) *RequestLaunchTemplateData { + s.UserData = &v + return s +} + // Contains the parameters for RequestSpotFleet. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotFleetRequest type RequestSpotFleetInput struct { _ struct{} `type:"structure"` @@ -49683,7 +57481,7 @@ type RequestSpotFleetInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The configuration for the Spot fleet request. + // The configuration for the Spot Fleet request. // // SpotFleetRequestConfig is a required field SpotFleetRequestConfig *SpotFleetRequestConfigData `locationName:"spotFleetRequestConfig" type:"structure" required:"true"` @@ -49730,11 +57528,10 @@ func (s *RequestSpotFleetInput) SetSpotFleetRequestConfig(v *SpotFleetRequestCon } // Contains the output of RequestSpotFleet. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotFleetResponse type RequestSpotFleetOutput struct { _ struct{} `type:"structure"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` @@ -49757,38 +57554,37 @@ func (s *RequestSpotFleetOutput) SetSpotFleetRequestId(v string) *RequestSpotFle } // Contains the parameters for RequestSpotInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotInstancesRequest type RequestSpotInstancesInput struct { _ struct{} `type:"structure"` - // The user-specified name for a logical grouping of bids. + // The user-specified name for a logical grouping of requests. // // When you specify an Availability Zone group in a Spot Instance request, all - // Spot instances in the request are launched in the same Availability Zone. + // Spot Instances in the request are launched in the same Availability Zone. // Instance proximity is maintained with this parameter, but the choice of Availability - // Zone is not. The group applies only to bids for Spot Instances of the same - // instance type. Any additional Spot instance requests that are specified with - // the same Availability Zone group name are launched in that same Availability + // Zone is not. The group applies only to requests for Spot Instances of the + // same instance type. Any additional Spot Instance requests that are specified + // with the same Availability Zone group name are launched in that same Availability // Zone, as long as at least one instance from the group is still active. // // If there is no active instance running in the Availability Zone group that - // you specify for a new Spot instance request (all instances are terminated, - // the bid is expired, or the bid falls below current market), then Amazon EC2 - // launches the instance in any Availability Zone where the constraint can be - // met. Consequently, the subsequent set of Spot instances could be placed in - // a different zone from the original request, even if you specified the same - // Availability Zone group. + // you specify for a new Spot Instance request (all instances are terminated, + // the request is expired, or the maximum price you specified falls below current + // Spot price), then Amazon EC2 launches the instance in any Availability Zone + // where the constraint can be met. Consequently, the subsequent set of Spot + // Instances could be placed in a different zone from the original request, + // even if you specified the same Availability Zone group. // // Default: Instances are launched in any available Availability Zone. AvailabilityZoneGroup *string `locationName:"availabilityZoneGroup" type:"string"` - // The required duration for the Spot instances (also known as Spot blocks), + // The required duration for the Spot Instances (also known as Spot blocks), // in minutes. This value must be a multiple of 60 (60, 120, 180, 240, 300, // or 360). // - // The duration period starts as soon as your Spot instance receives its instance - // ID. At the end of the duration period, Amazon EC2 marks the Spot instance - // for termination and provides a Spot instance termination notice, which gives + // The duration period starts as soon as your Spot Instance receives its instance + // ID. 
At the end of the duration period, Amazon EC2 marks the Spot Instance + // for termination and provides a Spot Instance termination notice, which gives // the instance a two-minute warning before it terminates. // // Note that you can't specify an Availability Zone group or a launch group @@ -49806,12 +57602,15 @@ type RequestSpotInstancesInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // The maximum number of Spot instances to launch. + // The maximum number of Spot Instances to launch. // // Default: 1 InstanceCount *int64 `locationName:"instanceCount" type:"integer"` - // The instance launch group. Launch groups are Spot instances that launch together + // The behavior when a Spot Instance is interrupted. The default is terminate. + InstanceInterruptionBehavior *string `type:"string" enum:"InstanceInterruptionBehavior"` + + // The instance launch group. Launch groups are Spot Instances that launch together // and terminate together. // // Default: Instances are launched and terminated individually @@ -49820,13 +57619,11 @@ type RequestSpotInstancesInput struct { // The launch specification. LaunchSpecification *RequestSpotLaunchSpecification `type:"structure"` - // The maximum hourly price (bid) for any Spot instance launched to fulfill - // the request. - // - // SpotPrice is a required field - SpotPrice *string `locationName:"spotPrice" type:"string" required:"true"` + // The maximum price per hour that you are willing to pay for a Spot Instance. + // The default is the On-Demand price. + SpotPrice *string `locationName:"spotPrice" type:"string"` - // The Spot instance request type. + // The Spot Instance request type. // // Default: one-time Type *string `locationName:"type" type:"string" enum:"SpotInstanceType"` @@ -49836,16 +57633,13 @@ type RequestSpotInstancesInput struct { // launch, the request expires, or the request is canceled. If the request is // persistent, the request becomes active at this date and time and remains // active until it expires or is canceled. - // - // Default: The request is effective indefinitely. ValidFrom *time.Time `locationName:"validFrom" type:"timestamp" timestampFormat:"iso8601"` // The end date of the request. If this is a one-time request, the request remains // active until all instances launch, the request is canceled, or this date // is reached. If the request is persistent, it remains active until it is canceled - // or this date and time is reached. - // - // Default: The request is effective indefinitely. + // or this date is reached. The default end date is 7 days from the current + // date. ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` } @@ -49862,9 +57656,6 @@ func (s RequestSpotInstancesInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *RequestSpotInstancesInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "RequestSpotInstancesInput"} - if s.SpotPrice == nil { - invalidParams.Add(request.NewErrParamRequired("SpotPrice")) - } if s.LaunchSpecification != nil { if err := s.LaunchSpecification.Validate(); err != nil { invalidParams.AddNested("LaunchSpecification", err.(request.ErrInvalidParams)) @@ -49907,6 +57698,12 @@ func (s *RequestSpotInstancesInput) SetInstanceCount(v int64) *RequestSpotInstan return s } +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. 
+func (s *RequestSpotInstancesInput) SetInstanceInterruptionBehavior(v string) *RequestSpotInstancesInput { + s.InstanceInterruptionBehavior = &v + return s +} + // SetLaunchGroup sets the LaunchGroup field's value. func (s *RequestSpotInstancesInput) SetLaunchGroup(v string) *RequestSpotInstancesInput { s.LaunchGroup = &v @@ -49944,11 +57741,10 @@ func (s *RequestSpotInstancesInput) SetValidUntil(v time.Time) *RequestSpotInsta } // Contains the output of RequestSpotInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotInstancesResult type RequestSpotInstancesOutput struct { _ struct{} `type:"structure"` - // One or more Spot instance requests. + // One or more Spot Instance requests. SpotInstanceRequests []*SpotInstanceRequest `locationName:"spotInstanceRequestSet" locationNameList:"item" type:"list"` } @@ -49969,17 +57765,16 @@ func (s *RequestSpotInstancesOutput) SetSpotInstanceRequests(v []*SpotInstanceRe } // Describes the launch specification for an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RequestSpotLaunchSpecification type RequestSpotLaunchSpecification struct { _ struct{} `type:"structure"` // Deprecated. AddressingType *string `locationName:"addressingType" type:"string"` - // One or more block device mapping entries. - // - // Although you can specify encrypted EBS volumes in this block device mapping - // for your Spot Instances, these volumes are not encrypted. + // One or more block device mapping entries. You can't specify both a snapshot + // ID and an encryption value. This is because only blank volumes can be encrypted + // on creation. If a snapshot is the basis for a volume, it is not blank and + // its encryption status is used for the volume encryption status. BlockDeviceMappings []*BlockDeviceMapping `locationName:"blockDeviceMapping" locationNameList:"item" type:"list"` // Indicates whether the instance is optimized for EBS I/O. This optimization @@ -50170,7 +57965,6 @@ func (s *RequestSpotLaunchSpecification) SetUserData(v string) *RequestSpotLaunc } // Describes a reservation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Reservation type Reservation struct { _ struct{} `type:"structure"` @@ -50232,7 +58026,6 @@ func (s *Reservation) SetReservationId(v string) *Reservation { } // The cost associated with the Reserved Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservationValue type ReservationValue struct { _ struct{} `type:"structure"` @@ -50276,7 +58069,6 @@ func (s *ReservationValue) SetRemainingUpfrontValue(v string) *ReservationValue } // Describes the limit price of a Reserved Instance offering. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstanceLimitPrice type ReservedInstanceLimitPrice struct { _ struct{} `type:"structure"` @@ -50312,7 +58104,6 @@ func (s *ReservedInstanceLimitPrice) SetCurrencyCode(v string) *ReservedInstance } // The total value of the Convertible Reserved Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstanceReservationValue type ReservedInstanceReservationValue struct { _ struct{} `type:"structure"` @@ -50346,7 +58137,6 @@ func (s *ReservedInstanceReservationValue) SetReservedInstanceId(v string) *Rese } // Describes a Reserved Instance. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstances type ReservedInstances struct { _ struct{} `type:"structure"` @@ -50525,7 +58315,6 @@ func (s *ReservedInstances) SetUsagePrice(v float64) *ReservedInstances { } // Describes the configuration settings for the modified Reserved Instances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstancesConfiguration type ReservedInstancesConfiguration struct { _ struct{} `type:"structure"` @@ -50588,7 +58377,6 @@ func (s *ReservedInstancesConfiguration) SetScope(v string) *ReservedInstancesCo } // Describes the ID of a Reserved Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstancesId type ReservedInstancesId struct { _ struct{} `type:"structure"` @@ -50613,7 +58401,6 @@ func (s *ReservedInstancesId) SetReservedInstancesId(v string) *ReservedInstance } // Describes a Reserved Instance listing. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstancesListing type ReservedInstancesListing struct { _ struct{} `type:"structure"` @@ -50721,7 +58508,6 @@ func (s *ReservedInstancesListing) SetUpdateDate(v time.Time) *ReservedInstances } // Describes a Reserved Instance modification. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstancesModification type ReservedInstancesModification struct { _ struct{} `type:"structure"` @@ -50820,7 +58606,6 @@ func (s *ReservedInstancesModification) SetUpdateDate(v time.Time) *ReservedInst } // Describes the modification request/s. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstancesModificationResult type ReservedInstancesModificationResult struct { _ struct{} `type:"structure"` @@ -50856,7 +58641,6 @@ func (s *ReservedInstancesModificationResult) SetTargetConfiguration(v *Reserved } // Describes a Reserved Instance offering. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ReservedInstancesOffering type ReservedInstancesOffering struct { _ struct{} `type:"structure"` @@ -51014,8 +58798,89 @@ func (s *ReservedInstancesOffering) SetUsagePrice(v float64) *ReservedInstancesO return s } +type ResetFpgaImageAttributeInput struct { + _ struct{} `type:"structure"` + + // The attribute. + Attribute *string `type:"string" enum:"ResetFpgaImageAttributeName"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the AFI. + // + // FpgaImageId is a required field + FpgaImageId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s ResetFpgaImageAttributeInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetFpgaImageAttributeInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ResetFpgaImageAttributeInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ResetFpgaImageAttributeInput"} + if s.FpgaImageId == nil { + invalidParams.Add(request.NewErrParamRequired("FpgaImageId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttribute sets the Attribute field's value. +func (s *ResetFpgaImageAttributeInput) SetAttribute(v string) *ResetFpgaImageAttributeInput { + s.Attribute = &v + return s +} + +// SetDryRun sets the DryRun field's value. +func (s *ResetFpgaImageAttributeInput) SetDryRun(v bool) *ResetFpgaImageAttributeInput { + s.DryRun = &v + return s +} + +// SetFpgaImageId sets the FpgaImageId field's value. +func (s *ResetFpgaImageAttributeInput) SetFpgaImageId(v string) *ResetFpgaImageAttributeInput { + s.FpgaImageId = &v + return s +} + +type ResetFpgaImageAttributeOutput struct { + _ struct{} `type:"structure"` + + // Is true if the request succeeds, and an error otherwise. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s ResetFpgaImageAttributeOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResetFpgaImageAttributeOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *ResetFpgaImageAttributeOutput) SetReturn(v bool) *ResetFpgaImageAttributeOutput { + s.Return = &v + return s +} + // Contains the parameters for ResetImageAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetImageAttributeRequest type ResetImageAttributeInput struct { _ struct{} `type:"structure"` @@ -51081,7 +58946,6 @@ func (s *ResetImageAttributeInput) SetImageId(v string) *ResetImageAttributeInpu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetImageAttributeOutput type ResetImageAttributeOutput struct { _ struct{} `type:"structure"` } @@ -51097,7 +58961,6 @@ func (s ResetImageAttributeOutput) GoString() string { } // Contains the parameters for ResetInstanceAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetInstanceAttributeRequest type ResetInstanceAttributeInput struct { _ struct{} `type:"structure"` @@ -51165,7 +59028,6 @@ func (s *ResetInstanceAttributeInput) SetInstanceId(v string) *ResetInstanceAttr return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetInstanceAttributeOutput type ResetInstanceAttributeOutput struct { _ struct{} `type:"structure"` } @@ -51181,7 +59043,6 @@ func (s ResetInstanceAttributeOutput) GoString() string { } // Contains the parameters for ResetNetworkInterfaceAttribute. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetNetworkInterfaceAttributeRequest type ResetNetworkInterfaceAttributeInput struct { _ struct{} `type:"structure"` @@ -51241,7 +59102,6 @@ func (s *ResetNetworkInterfaceAttributeInput) SetSourceDestCheck(v string) *Rese return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetNetworkInterfaceAttributeOutput type ResetNetworkInterfaceAttributeOutput struct { _ struct{} `type:"structure"` } @@ -51257,7 +59117,6 @@ func (s ResetNetworkInterfaceAttributeOutput) GoString() string { } // Contains the parameters for ResetSnapshotAttribute. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetSnapshotAttributeRequest type ResetSnapshotAttributeInput struct { _ struct{} `type:"structure"` @@ -51323,7 +59182,6 @@ func (s *ResetSnapshotAttributeInput) SetSnapshotId(v string) *ResetSnapshotAttr return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ResetSnapshotAttributeOutput type ResetSnapshotAttributeOutput struct { _ struct{} `type:"structure"` } @@ -51338,8 +59196,238 @@ func (s ResetSnapshotAttributeOutput) GoString() string { return s.String() } +// Describes the error that's returned when you cannot delete a launch template +// version. +type ResponseError struct { + _ struct{} `type:"structure"` + + // The error code. + Code *string `locationName:"code" type:"string" enum:"LaunchTemplateErrorCode"` + + // The error message, if applicable. + Message *string `locationName:"message" type:"string"` +} + +// String returns the string representation +func (s ResponseError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResponseError) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ResponseError) SetCode(v string) *ResponseError { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ResponseError) SetMessage(v string) *ResponseError { + s.Message = &v + return s +} + +// The information for a launch template. +type ResponseLaunchTemplateData struct { + _ struct{} `type:"structure"` + + // The block device mappings. + BlockDeviceMappings []*LaunchTemplateBlockDeviceMapping `locationName:"blockDeviceMappingSet" locationNameList:"item" type:"list"` + + // The credit option for CPU usage of the instance. + CreditSpecification *CreditSpecification `locationName:"creditSpecification" type:"structure"` + + // If set to true, indicates that the instance cannot be terminated using the + // Amazon EC2 console, command line tool, or API. + DisableApiTermination *bool `locationName:"disableApiTermination" type:"boolean"` + + // Indicates whether the instance is optimized for Amazon EBS I/O. + EbsOptimized *bool `locationName:"ebsOptimized" type:"boolean"` + + // The elastic GPU specification. + ElasticGpuSpecifications []*ElasticGpuSpecificationResponse `locationName:"elasticGpuSpecificationSet" locationNameList:"item" type:"list"` + + // The IAM instance profile. + IamInstanceProfile *LaunchTemplateIamInstanceProfileSpecification `locationName:"iamInstanceProfile" type:"structure"` + + // The ID of the AMI that was used to launch the instance. + ImageId *string `locationName:"imageId" type:"string"` + + // Indicates whether an instance stops or terminates when you initiate shutdown + // from the instance (using the operating system command for system shutdown). + InstanceInitiatedShutdownBehavior *string `locationName:"instanceInitiatedShutdownBehavior" type:"string" enum:"ShutdownBehavior"` + + // The market (purchasing) option for the instances. + InstanceMarketOptions *LaunchTemplateInstanceMarketOptions `locationName:"instanceMarketOptions" type:"structure"` + + // The instance type. + InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` + + // The ID of the kernel, if applicable. + KernelId *string `locationName:"kernelId" type:"string"` + + // The name of the key pair. + KeyName *string `locationName:"keyName" type:"string"` + + // The monitoring for the instance. 
+ Monitoring *LaunchTemplatesMonitoring `locationName:"monitoring" type:"structure"` + + // The network interfaces. + NetworkInterfaces []*LaunchTemplateInstanceNetworkInterfaceSpecification `locationName:"networkInterfaceSet" locationNameList:"item" type:"list"` + + // The placement of the instance. + Placement *LaunchTemplatePlacement `locationName:"placement" type:"structure"` + + // The ID of the RAM disk, if applicable. + RamDiskId *string `locationName:"ramDiskId" type:"string"` + + // The security group IDs. + SecurityGroupIds []*string `locationName:"securityGroupIdSet" locationNameList:"item" type:"list"` + + // The security group names. + SecurityGroups []*string `locationName:"securityGroupSet" locationNameList:"item" type:"list"` + + // The tags. + TagSpecifications []*LaunchTemplateTagSpecification `locationName:"tagSpecificationSet" locationNameList:"item" type:"list"` + + // The user data for the instance. + UserData *string `locationName:"userData" type:"string"` +} + +// String returns the string representation +func (s ResponseLaunchTemplateData) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResponseLaunchTemplateData) GoString() string { + return s.String() +} + +// SetBlockDeviceMappings sets the BlockDeviceMappings field's value. +func (s *ResponseLaunchTemplateData) SetBlockDeviceMappings(v []*LaunchTemplateBlockDeviceMapping) *ResponseLaunchTemplateData { + s.BlockDeviceMappings = v + return s +} + +// SetCreditSpecification sets the CreditSpecification field's value. +func (s *ResponseLaunchTemplateData) SetCreditSpecification(v *CreditSpecification) *ResponseLaunchTemplateData { + s.CreditSpecification = v + return s +} + +// SetDisableApiTermination sets the DisableApiTermination field's value. +func (s *ResponseLaunchTemplateData) SetDisableApiTermination(v bool) *ResponseLaunchTemplateData { + s.DisableApiTermination = &v + return s +} + +// SetEbsOptimized sets the EbsOptimized field's value. +func (s *ResponseLaunchTemplateData) SetEbsOptimized(v bool) *ResponseLaunchTemplateData { + s.EbsOptimized = &v + return s +} + +// SetElasticGpuSpecifications sets the ElasticGpuSpecifications field's value. +func (s *ResponseLaunchTemplateData) SetElasticGpuSpecifications(v []*ElasticGpuSpecificationResponse) *ResponseLaunchTemplateData { + s.ElasticGpuSpecifications = v + return s +} + +// SetIamInstanceProfile sets the IamInstanceProfile field's value. +func (s *ResponseLaunchTemplateData) SetIamInstanceProfile(v *LaunchTemplateIamInstanceProfileSpecification) *ResponseLaunchTemplateData { + s.IamInstanceProfile = v + return s +} + +// SetImageId sets the ImageId field's value. +func (s *ResponseLaunchTemplateData) SetImageId(v string) *ResponseLaunchTemplateData { + s.ImageId = &v + return s +} + +// SetInstanceInitiatedShutdownBehavior sets the InstanceInitiatedShutdownBehavior field's value. +func (s *ResponseLaunchTemplateData) SetInstanceInitiatedShutdownBehavior(v string) *ResponseLaunchTemplateData { + s.InstanceInitiatedShutdownBehavior = &v + return s +} + +// SetInstanceMarketOptions sets the InstanceMarketOptions field's value. +func (s *ResponseLaunchTemplateData) SetInstanceMarketOptions(v *LaunchTemplateInstanceMarketOptions) *ResponseLaunchTemplateData { + s.InstanceMarketOptions = v + return s +} + +// SetInstanceType sets the InstanceType field's value. 
+func (s *ResponseLaunchTemplateData) SetInstanceType(v string) *ResponseLaunchTemplateData { + s.InstanceType = &v + return s +} + +// SetKernelId sets the KernelId field's value. +func (s *ResponseLaunchTemplateData) SetKernelId(v string) *ResponseLaunchTemplateData { + s.KernelId = &v + return s +} + +// SetKeyName sets the KeyName field's value. +func (s *ResponseLaunchTemplateData) SetKeyName(v string) *ResponseLaunchTemplateData { + s.KeyName = &v + return s +} + +// SetMonitoring sets the Monitoring field's value. +func (s *ResponseLaunchTemplateData) SetMonitoring(v *LaunchTemplatesMonitoring) *ResponseLaunchTemplateData { + s.Monitoring = v + return s +} + +// SetNetworkInterfaces sets the NetworkInterfaces field's value. +func (s *ResponseLaunchTemplateData) SetNetworkInterfaces(v []*LaunchTemplateInstanceNetworkInterfaceSpecification) *ResponseLaunchTemplateData { + s.NetworkInterfaces = v + return s +} + +// SetPlacement sets the Placement field's value. +func (s *ResponseLaunchTemplateData) SetPlacement(v *LaunchTemplatePlacement) *ResponseLaunchTemplateData { + s.Placement = v + return s +} + +// SetRamDiskId sets the RamDiskId field's value. +func (s *ResponseLaunchTemplateData) SetRamDiskId(v string) *ResponseLaunchTemplateData { + s.RamDiskId = &v + return s +} + +// SetSecurityGroupIds sets the SecurityGroupIds field's value. +func (s *ResponseLaunchTemplateData) SetSecurityGroupIds(v []*string) *ResponseLaunchTemplateData { + s.SecurityGroupIds = v + return s +} + +// SetSecurityGroups sets the SecurityGroups field's value. +func (s *ResponseLaunchTemplateData) SetSecurityGroups(v []*string) *ResponseLaunchTemplateData { + s.SecurityGroups = v + return s +} + +// SetTagSpecifications sets the TagSpecifications field's value. +func (s *ResponseLaunchTemplateData) SetTagSpecifications(v []*LaunchTemplateTagSpecification) *ResponseLaunchTemplateData { + s.TagSpecifications = v + return s +} + +// SetUserData sets the UserData field's value. +func (s *ResponseLaunchTemplateData) SetUserData(v string) *ResponseLaunchTemplateData { + s.UserData = &v + return s +} + // Contains the parameters for RestoreAddressToClassic. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RestoreAddressToClassicRequest type RestoreAddressToClassicInput struct { _ struct{} `type:"structure"` @@ -51391,7 +59479,6 @@ func (s *RestoreAddressToClassicInput) SetPublicIp(v string) *RestoreAddressToCl } // Contains the output of RestoreAddressToClassic. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RestoreAddressToClassicResult type RestoreAddressToClassicOutput struct { _ struct{} `type:"structure"` @@ -51425,12 +59512,10 @@ func (s *RestoreAddressToClassicOutput) SetStatus(v string) *RestoreAddressToCla } // Contains the parameters for RevokeSecurityGroupEgress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupEgressRequest type RevokeSecurityGroupEgressInput struct { _ struct{} `type:"structure"` - // The CIDR IP address range. We recommend that you specify the CIDR range in - // a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the CIDR. CidrIp *string `locationName:"cidrIp" type:"string"` // Checks whether you have the required permissions for the action, without @@ -51439,8 +59524,7 @@ type RevokeSecurityGroupEgressInput struct { // it is UnauthorizedOperation. 
DryRun *bool `locationName:"dryRun" type:"boolean"` - // The start of port range for the TCP and UDP protocols, or an ICMP type number. - // We recommend that you specify the port range in a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the port. FromPort *int64 `locationName:"fromPort" type:"integer"` // The ID of the security group. @@ -51448,26 +59532,23 @@ type RevokeSecurityGroupEgressInput struct { // GroupId is a required field GroupId *string `locationName:"groupId" type:"string" required:"true"` - // A set of IP permissions. You can't specify a destination security group and - // a CIDR IP address range. + // One or more sets of IP permissions. You can't specify a destination security + // group and a CIDR IP address range in the same set of permissions. IpPermissions []*IpPermission `locationName:"ipPermissions" locationNameList:"item" type:"list"` - // The IP protocol name or number. We recommend that you specify the protocol - // in a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the protocol name or + // number. IpProtocol *string `locationName:"ipProtocol" type:"string"` - // The name of a destination security group. To revoke outbound access to a - // destination security group, we recommend that you use a set of IP permissions - // instead. + // Not supported. Use a set of IP permissions to specify a destination security + // group. SourceSecurityGroupName *string `locationName:"sourceSecurityGroupName" type:"string"` - // The AWS account number for a destination security group. To revoke outbound - // access to a destination security group, we recommend that you use a set of - // IP permissions instead. + // Not supported. Use a set of IP permissions to specify a destination security + // group. SourceSecurityGroupOwnerId *string `locationName:"sourceSecurityGroupOwnerId" type:"string"` - // The end of port range for the TCP and UDP protocols, or an ICMP type number. - // We recommend that you specify the port range in a set of IP permissions instead. + // Not supported. Use a set of IP permissions to specify the port. ToPort *int64 `locationName:"toPort" type:"integer"` } @@ -51548,7 +59629,6 @@ func (s *RevokeSecurityGroupEgressInput) SetToPort(v int64) *RevokeSecurityGroup return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupEgressOutput type RevokeSecurityGroupEgressOutput struct { _ struct{} `type:"structure"` } @@ -51564,7 +59644,6 @@ func (s RevokeSecurityGroupEgressOutput) GoString() string { } // Contains the parameters for RevokeSecurityGroupIngress. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupIngressRequest type RevokeSecurityGroupIngressInput struct { _ struct{} `type:"structure"` @@ -51582,15 +59661,17 @@ type RevokeSecurityGroupIngressInput struct { // For the ICMP type number, use -1 to specify all ICMP types. FromPort *int64 `type:"integer"` - // The ID of the security group. Required for a security group in a nondefault - // VPC. + // The ID of the security group. You must specify either the security group + // ID or the security group name in the request. For security groups in a nondefault + // VPC, you must specify the security group ID. GroupId *string `type:"string"` - // [EC2-Classic, default VPC] The name of the security group. + // [EC2-Classic, default VPC] The name of the security group. 
You must specify + // either the security group ID or the security group name in the request. GroupName *string `type:"string"` - // A set of IP permissions. You can't specify a source security group and a - // CIDR IP address range. + // One or more sets of IP permissions. You can't specify a source security group + // and a CIDR IP address range in the same set of permissions. IpPermissions []*IpPermission `locationNameList:"item" type:"list"` // The IP protocol name (tcp, udp, icmp) or number (see Protocol Numbers (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml)). @@ -51688,7 +59769,6 @@ func (s *RevokeSecurityGroupIngressInput) SetToPort(v int64) *RevokeSecurityGrou return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RevokeSecurityGroupIngressOutput type RevokeSecurityGroupIngressOutput struct { _ struct{} `type:"structure"` } @@ -51704,7 +59784,6 @@ func (s RevokeSecurityGroupIngressOutput) GoString() string { } // Describes a route in a route table. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Route type Route struct { _ struct{} `type:"structure"` @@ -51837,7 +59916,6 @@ func (s *Route) SetVpcPeeringConnectionId(v string) *Route { } // Describes a route table. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RouteTable type RouteTable struct { _ struct{} `type:"structure"` @@ -51907,7 +59985,6 @@ func (s *RouteTable) SetVpcId(v string) *RouteTable { } // Describes an association between a route table and a subnet. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RouteTableAssociation type RouteTableAssociation struct { _ struct{} `type:"structure"` @@ -51959,20 +60036,16 @@ func (s *RouteTableAssociation) SetSubnetId(v string) *RouteTableAssociation { } // Contains the parameters for RunInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunInstancesRequest type RunInstancesInput struct { _ struct{} `type:"structure"` // Reserved. AdditionalInfo *string `locationName:"additionalInfo" type:"string"` - // The block device mapping. - // - // Supplying both a snapshot ID and an encryption value as arguments for block-device - // mapping results in an error. This is because only blank volumes can be encrypted - // on start, and these are not created from a snapshot. If a snapshot is the - // basis for the volume, it contains data by definition and its encryption status - // cannot be changed using this action. + // One or more block device mapping entries. You can't specify both a snapshot + // ID and an encryption value. This is because only blank volumes can be encrypted + // on creation. If a snapshot is the basis for a volume, it is not blank and + // its encryption status is used for the volume encryption status. BlockDeviceMappings []*BlockDeviceMapping `locationName:"BlockDeviceMapping" locationNameList:"BlockDeviceMapping" type:"list"` // Unique, case-sensitive identifier you provide to ensure the idempotency of @@ -51981,6 +60054,14 @@ type RunInstancesInput struct { // Constraints: Maximum 64 ASCII characters ClientToken *string `locationName:"clientToken" type:"string"` + // The credit option for CPU usage of the instance. Valid values are standard + // and unlimited. To change this attribute after launch, use ModifyInstanceCreditSpecification. 
+ // For more information, see T2 Instances (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) + // in the Amazon Elastic Compute Cloud User Guide. + // + // Default: standard + CreditSpecification *CreditSpecificationRequest `type:"structure"` + // If you set this parameter to true, you can't terminate the instance using // the Amazon EC2 console, CLI, or API; otherwise, you can. To change this attribute // to false after launch, use ModifyInstanceAttribute. Alternatively, if you @@ -51996,25 +60077,25 @@ type RunInstancesInput struct { // it is UnauthorizedOperation. DryRun *bool `locationName:"dryRun" type:"boolean"` - // Indicates whether the instance is optimized for EBS I/O. This optimization + // Indicates whether the instance is optimized for Amazon EBS I/O. This optimization // provides dedicated throughput to Amazon EBS and an optimized configuration - // stack to provide optimal EBS I/O performance. This optimization isn't available - // with all instance types. Additional usage charges apply when using an EBS-optimized - // instance. + // stack to provide optimal Amazon EBS I/O performance. This optimization isn't + // available with all instance types. Additional usage charges apply when using + // an EBS-optimized instance. // // Default: false EbsOptimized *bool `locationName:"ebsOptimized" type:"boolean"` - // An Elastic GPU to associate with the instance. + // An elastic GPU to associate with the instance. ElasticGpuSpecification []*ElasticGpuSpecification `locationNameList:"item" type:"list"` // The IAM instance profile. IamInstanceProfile *IamInstanceProfileSpecification `locationName:"iamInstanceProfile" type:"structure"` - // The ID of the AMI, which you can get by calling DescribeImages. - // - // ImageId is a required field - ImageId *string `type:"string" required:"true"` + // The ID of the AMI, which you can get by calling DescribeImages. An AMI is + // required to launch an instance and must be specified here or in a launch + // template. + ImageId *string `type:"string"` // Indicates whether an instance stops or terminates when you initiate shutdown // from the instance (using the operating system command for system shutdown). @@ -52022,6 +60103,9 @@ type RunInstancesInput struct { // Default: stop InstanceInitiatedShutdownBehavior *string `locationName:"instanceInitiatedShutdownBehavior" type:"string" enum:"ShutdownBehavior"` + // The market (purchasing) option for the instances. + InstanceMarketOptions *InstanceMarketOptionsRequest `type:"structure"` + // The instance type. For more information, see Instance Types (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) // in the Amazon Elastic Compute Cloud User Guide. // @@ -52056,6 +60140,10 @@ type RunInstancesInput struct { // you choose an AMI that is configured to allow users another way to log in. KeyName *string `type:"string"` + // The launch template to use to launch the instances. Any parameters that you + // specify in RunInstances override the same parameters in the launch template. + LaunchTemplate *LaunchTemplateSpecification `type:"structure"` + // The maximum number of instances to launch. If you specify more instances // than Amazon EC2 can launch in the target Availability Zone, Amazon EC2 launches // the largest possible number of instances above MinCount. @@ -52127,9 +60215,9 @@ type RunInstancesInput struct { // The user data to make available to the instance. 
For more information, see // Running Commands on Your Linux Instance at Launch (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) // (Linux) and Adding User Data (http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-instance-metadata.html#instancedata-add-user-data) - // (Windows). If you are using an AWS SDK or command line tool, Base64-encoding - // is performed for you, and you can load the text from a file. Otherwise, you - // must provide Base64-encoded text. + // (Windows). If you are using a command line tool, base64-encoding is performed + // for you, and you can load the text from a file. Otherwise, you must provide + // base64-encoded text. UserData *string `type:"string"` } @@ -52146,15 +60234,17 @@ func (s RunInstancesInput) GoString() string { // Validate inspects the fields of the type to determine if they are valid. func (s *RunInstancesInput) Validate() error { invalidParams := request.ErrInvalidParams{Context: "RunInstancesInput"} - if s.ImageId == nil { - invalidParams.Add(request.NewErrParamRequired("ImageId")) - } if s.MaxCount == nil { invalidParams.Add(request.NewErrParamRequired("MaxCount")) } if s.MinCount == nil { invalidParams.Add(request.NewErrParamRequired("MinCount")) } + if s.CreditSpecification != nil { + if err := s.CreditSpecification.Validate(); err != nil { + invalidParams.AddNested("CreditSpecification", err.(request.ErrInvalidParams)) + } + } if s.ElasticGpuSpecification != nil { for i, v := range s.ElasticGpuSpecification { if v == nil { @@ -52205,6 +60295,12 @@ func (s *RunInstancesInput) SetClientToken(v string) *RunInstancesInput { return s } +// SetCreditSpecification sets the CreditSpecification field's value. +func (s *RunInstancesInput) SetCreditSpecification(v *CreditSpecificationRequest) *RunInstancesInput { + s.CreditSpecification = v + return s +} + // SetDisableApiTermination sets the DisableApiTermination field's value. func (s *RunInstancesInput) SetDisableApiTermination(v bool) *RunInstancesInput { s.DisableApiTermination = &v @@ -52247,6 +60343,12 @@ func (s *RunInstancesInput) SetInstanceInitiatedShutdownBehavior(v string) *RunI return s } +// SetInstanceMarketOptions sets the InstanceMarketOptions field's value. +func (s *RunInstancesInput) SetInstanceMarketOptions(v *InstanceMarketOptionsRequest) *RunInstancesInput { + s.InstanceMarketOptions = v + return s +} + // SetInstanceType sets the InstanceType field's value. func (s *RunInstancesInput) SetInstanceType(v string) *RunInstancesInput { s.InstanceType = &v @@ -52277,6 +60379,12 @@ func (s *RunInstancesInput) SetKeyName(v string) *RunInstancesInput { return s } +// SetLaunchTemplate sets the LaunchTemplate field's value. +func (s *RunInstancesInput) SetLaunchTemplate(v *LaunchTemplateSpecification) *RunInstancesInput { + s.LaunchTemplate = v + return s +} + // SetMaxCount sets the MaxCount field's value. func (s *RunInstancesInput) SetMaxCount(v int64) *RunInstancesInput { s.MaxCount = &v @@ -52350,7 +60458,6 @@ func (s *RunInstancesInput) SetUserData(v string) *RunInstancesInput { } // Describes the monitoring of an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunInstancesMonitoringEnabled type RunInstancesMonitoringEnabled struct { _ struct{} `type:"structure"` @@ -52391,7 +60498,6 @@ func (s *RunInstancesMonitoringEnabled) SetEnabled(v bool) *RunInstancesMonitori } // Contains the parameters for RunScheduledInstances. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunScheduledInstancesRequest type RunScheduledInstancesInput struct { _ struct{} `type:"structure"` @@ -52484,7 +60590,6 @@ func (s *RunScheduledInstancesInput) SetScheduledInstanceId(v string) *RunSchedu } // Contains the output of RunScheduledInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/RunScheduledInstancesResult type RunScheduledInstancesOutput struct { _ struct{} `type:"structure"` @@ -52510,7 +60615,6 @@ func (s *RunScheduledInstancesOutput) SetInstanceIdSet(v []*string) *RunSchedule // Describes the storage parameters for S3 and S3 buckets for an instance store-backed // AMI. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/S3Storage type S3Storage struct { _ struct{} `type:"structure"` @@ -52578,7 +60682,6 @@ func (s *S3Storage) SetUploadPolicySignature(v string) *S3Storage { } // Describes a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstance type ScheduledInstance struct { _ struct{} `type:"structure"` @@ -52729,7 +60832,6 @@ func (s *ScheduledInstance) SetTotalScheduledInstanceHours(v int64) *ScheduledIn } // Describes a schedule that is available for your Scheduled Instances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstanceAvailability type ScheduledInstanceAvailability struct { _ struct{} `type:"structure"` @@ -52863,7 +60965,6 @@ func (s *ScheduledInstanceAvailability) SetTotalScheduledInstanceHours(v int64) } // Describes the recurring schedule for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstanceRecurrence type ScheduledInstanceRecurrence struct { _ struct{} `type:"structure"` @@ -52928,7 +61029,6 @@ func (s *ScheduledInstanceRecurrence) SetOccurrenceUnit(v string) *ScheduledInst } // Describes the recurring schedule for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstanceRecurrenceRequest type ScheduledInstanceRecurrenceRequest struct { _ struct{} `type:"structure"` @@ -52996,11 +61096,10 @@ func (s *ScheduledInstanceRecurrenceRequest) SetOccurrenceUnit(v string) *Schedu } // Describes a block device mapping for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesBlockDeviceMapping type ScheduledInstancesBlockDeviceMapping struct { _ struct{} `type:"structure"` - // The device name exposed to the instance (for example, /dev/sdh or xvdh). + // The device name (for example, /dev/sdh or xvdh). DeviceName *string `type:"string"` // Parameters used to set up EBS volumes automatically when the instance is @@ -53013,7 +61112,7 @@ type ScheduledInstancesBlockDeviceMapping struct { // The virtual device name (ephemeralN). Instance store volumes are numbered // starting from 0. An instance type with two available instance store volumes - // can specify mappings for ephemeral0 and ephemeral1.The number of available + // can specify mappings for ephemeral0 and ephemeral1. The number of available // instance store volumes depends on the instance type. After you connect to // the instance, you must mount the volume. // @@ -53059,7 +61158,6 @@ func (s *ScheduledInstancesBlockDeviceMapping) SetVirtualName(v string) *Schedul } // Describes an EBS volume for a Scheduled Instance. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesEbs type ScheduledInstancesEbs struct { _ struct{} `type:"structure"` @@ -53148,7 +61246,6 @@ func (s *ScheduledInstancesEbs) SetVolumeType(v string) *ScheduledInstancesEbs { } // Describes an IAM instance profile for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesIamInstanceProfile type ScheduledInstancesIamInstanceProfile struct { _ struct{} `type:"structure"` @@ -53182,7 +61279,6 @@ func (s *ScheduledInstancesIamInstanceProfile) SetName(v string) *ScheduledInsta } // Describes an IPv6 address. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesIpv6Address type ScheduledInstancesIpv6Address struct { _ struct{} `type:"structure"` @@ -53211,7 +61307,6 @@ func (s *ScheduledInstancesIpv6Address) SetIpv6Address(v string) *ScheduledInsta // If you are launching the Scheduled Instance in EC2-VPC, you must specify // the ID of the subnet. You can specify the subnet using either SubnetId or // NetworkInterface. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesLaunchSpecification type ScheduledInstancesLaunchSpecification struct { _ struct{} `type:"structure"` @@ -53374,7 +61469,6 @@ func (s *ScheduledInstancesLaunchSpecification) SetUserData(v string) *Scheduled } // Describes whether monitoring is enabled for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesMonitoring type ScheduledInstancesMonitoring struct { _ struct{} `type:"structure"` @@ -53399,7 +61493,6 @@ func (s *ScheduledInstancesMonitoring) SetEnabled(v bool) *ScheduledInstancesMon } // Describes a network interface for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesNetworkInterface type ScheduledInstancesNetworkInterface struct { _ struct{} `type:"structure"` @@ -53528,7 +61621,6 @@ func (s *ScheduledInstancesNetworkInterface) SetSubnetId(v string) *ScheduledIns } // Describes the placement for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesPlacement type ScheduledInstancesPlacement struct { _ struct{} `type:"structure"` @@ -53562,7 +61654,6 @@ func (s *ScheduledInstancesPlacement) SetGroupName(v string) *ScheduledInstances } // Describes a private IPv4 address for a Scheduled Instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/ScheduledInstancesPrivateIpAddressConfig type ScheduledInstancesPrivateIpAddressConfig struct { _ struct{} `type:"structure"` @@ -53597,7 +61688,6 @@ func (s *ScheduledInstancesPrivateIpAddressConfig) SetPrivateIpAddress(v string) } // Describes a security group -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SecurityGroup type SecurityGroup struct { _ struct{} `type:"structure"` @@ -53684,8 +61774,40 @@ func (s *SecurityGroup) SetVpcId(v string) *SecurityGroup { return s } +// Describes a security group. +type SecurityGroupIdentifier struct { + _ struct{} `type:"structure"` + + // The ID of the security group. + GroupId *string `locationName:"groupId" type:"string"` + + // The name of the security group. 
+ GroupName *string `locationName:"groupName" type:"string"` +} + +// String returns the string representation +func (s SecurityGroupIdentifier) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SecurityGroupIdentifier) GoString() string { + return s.String() +} + +// SetGroupId sets the GroupId field's value. +func (s *SecurityGroupIdentifier) SetGroupId(v string) *SecurityGroupIdentifier { + s.GroupId = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *SecurityGroupIdentifier) SetGroupName(v string) *SecurityGroupIdentifier { + s.GroupName = &v + return s +} + // Describes a VPC with a security group that references your security group. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SecurityGroupReference type SecurityGroupReference struct { _ struct{} `type:"structure"` @@ -53731,9 +61853,217 @@ func (s *SecurityGroupReference) SetVpcPeeringConnectionId(v string) *SecurityGr return s } +// Describes a service configuration for a VPC endpoint service. +type ServiceConfiguration struct { + _ struct{} `type:"structure"` + + // Indicates whether requests from other AWS accounts to create an endpoint + // to the service must first be accepted. + AcceptanceRequired *bool `locationName:"acceptanceRequired" type:"boolean"` + + // In the Availability Zones in which the service is available. + AvailabilityZones []*string `locationName:"availabilityZoneSet" locationNameList:"item" type:"list"` + + // The DNS names for the service. + BaseEndpointDnsNames []*string `locationName:"baseEndpointDnsNameSet" locationNameList:"item" type:"list"` + + // The Amazon Resource Names (ARNs) of the Network Load Balancers for the service. + NetworkLoadBalancerArns []*string `locationName:"networkLoadBalancerArnSet" locationNameList:"item" type:"list"` + + // The private DNS name for the service. + PrivateDnsName *string `locationName:"privateDnsName" type:"string"` + + // The ID of the service. + ServiceId *string `locationName:"serviceId" type:"string"` + + // The name of the service. + ServiceName *string `locationName:"serviceName" type:"string"` + + // The service state. + ServiceState *string `locationName:"serviceState" type:"string" enum:"ServiceState"` + + // The type of service. + ServiceType []*ServiceTypeDetail `locationName:"serviceType" locationNameList:"item" type:"list"` +} + +// String returns the string representation +func (s ServiceConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceConfiguration) GoString() string { + return s.String() +} + +// SetAcceptanceRequired sets the AcceptanceRequired field's value. +func (s *ServiceConfiguration) SetAcceptanceRequired(v bool) *ServiceConfiguration { + s.AcceptanceRequired = &v + return s +} + +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *ServiceConfiguration) SetAvailabilityZones(v []*string) *ServiceConfiguration { + s.AvailabilityZones = v + return s +} + +// SetBaseEndpointDnsNames sets the BaseEndpointDnsNames field's value. +func (s *ServiceConfiguration) SetBaseEndpointDnsNames(v []*string) *ServiceConfiguration { + s.BaseEndpointDnsNames = v + return s +} + +// SetNetworkLoadBalancerArns sets the NetworkLoadBalancerArns field's value. 
+func (s *ServiceConfiguration) SetNetworkLoadBalancerArns(v []*string) *ServiceConfiguration { + s.NetworkLoadBalancerArns = v + return s +} + +// SetPrivateDnsName sets the PrivateDnsName field's value. +func (s *ServiceConfiguration) SetPrivateDnsName(v string) *ServiceConfiguration { + s.PrivateDnsName = &v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *ServiceConfiguration) SetServiceId(v string) *ServiceConfiguration { + s.ServiceId = &v + return s +} + +// SetServiceName sets the ServiceName field's value. +func (s *ServiceConfiguration) SetServiceName(v string) *ServiceConfiguration { + s.ServiceName = &v + return s +} + +// SetServiceState sets the ServiceState field's value. +func (s *ServiceConfiguration) SetServiceState(v string) *ServiceConfiguration { + s.ServiceState = &v + return s +} + +// SetServiceType sets the ServiceType field's value. +func (s *ServiceConfiguration) SetServiceType(v []*ServiceTypeDetail) *ServiceConfiguration { + s.ServiceType = v + return s +} + +// Describes a VPC endpoint service. +type ServiceDetail struct { + _ struct{} `type:"structure"` + + // Indicates whether VPC endpoint connection requests to the service must be + // accepted by the service owner. + AcceptanceRequired *bool `locationName:"acceptanceRequired" type:"boolean"` + + // The Availability Zones in which the service is available. + AvailabilityZones []*string `locationName:"availabilityZoneSet" locationNameList:"item" type:"list"` + + // The DNS names for the service. + BaseEndpointDnsNames []*string `locationName:"baseEndpointDnsNameSet" locationNameList:"item" type:"list"` + + // The AWS account ID of the service owner. + Owner *string `locationName:"owner" type:"string"` + + // The private DNS name for the service. + PrivateDnsName *string `locationName:"privateDnsName" type:"string"` + + // The Amazon Resource Name (ARN) of the service. + ServiceName *string `locationName:"serviceName" type:"string"` + + // The type of service. + ServiceType []*ServiceTypeDetail `locationName:"serviceType" locationNameList:"item" type:"list"` + + // Indicates whether the service supports endpoint policies. + VpcEndpointPolicySupported *bool `locationName:"vpcEndpointPolicySupported" type:"boolean"` +} + +// String returns the string representation +func (s ServiceDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceDetail) GoString() string { + return s.String() +} + +// SetAcceptanceRequired sets the AcceptanceRequired field's value. +func (s *ServiceDetail) SetAcceptanceRequired(v bool) *ServiceDetail { + s.AcceptanceRequired = &v + return s +} + +// SetAvailabilityZones sets the AvailabilityZones field's value. +func (s *ServiceDetail) SetAvailabilityZones(v []*string) *ServiceDetail { + s.AvailabilityZones = v + return s +} + +// SetBaseEndpointDnsNames sets the BaseEndpointDnsNames field's value. +func (s *ServiceDetail) SetBaseEndpointDnsNames(v []*string) *ServiceDetail { + s.BaseEndpointDnsNames = v + return s +} + +// SetOwner sets the Owner field's value. +func (s *ServiceDetail) SetOwner(v string) *ServiceDetail { + s.Owner = &v + return s +} + +// SetPrivateDnsName sets the PrivateDnsName field's value. +func (s *ServiceDetail) SetPrivateDnsName(v string) *ServiceDetail { + s.PrivateDnsName = &v + return s +} + +// SetServiceName sets the ServiceName field's value. 
+func (s *ServiceDetail) SetServiceName(v string) *ServiceDetail { + s.ServiceName = &v + return s +} + +// SetServiceType sets the ServiceType field's value. +func (s *ServiceDetail) SetServiceType(v []*ServiceTypeDetail) *ServiceDetail { + s.ServiceType = v + return s +} + +// SetVpcEndpointPolicySupported sets the VpcEndpointPolicySupported field's value. +func (s *ServiceDetail) SetVpcEndpointPolicySupported(v bool) *ServiceDetail { + s.VpcEndpointPolicySupported = &v + return s +} + +// Describes the type of service for a VPC endpoint. +type ServiceTypeDetail struct { + _ struct{} `type:"structure"` + + // The type of service. + ServiceType *string `locationName:"serviceType" type:"string" enum:"ServiceType"` +} + +// String returns the string representation +func (s ServiceTypeDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServiceTypeDetail) GoString() string { + return s.String() +} + +// SetServiceType sets the ServiceType field's value. +func (s *ServiceTypeDetail) SetServiceType(v string) *ServiceTypeDetail { + s.ServiceType = &v + return s +} + // Describes the time period for a Scheduled Instance to start its first schedule. // The time period must span less than one day. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SlotDateTimeRangeRequest type SlotDateTimeRangeRequest struct { _ struct{} `type:"structure"` @@ -53789,7 +62119,6 @@ func (s *SlotDateTimeRangeRequest) SetLatestTime(v time.Time) *SlotDateTimeRange } // Describes the time period for a Scheduled Instance to start its first schedule. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SlotStartTimeRangeRequest type SlotStartTimeRangeRequest struct { _ struct{} `type:"structure"` @@ -53823,7 +62152,6 @@ func (s *SlotStartTimeRangeRequest) SetLatestTime(v time.Time) *SlotStartTimeRan } // Describes a snapshot. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Snapshot type Snapshot struct { _ struct{} `type:"structure"` @@ -53981,7 +62309,6 @@ func (s *Snapshot) SetVolumeSize(v int64) *Snapshot { } // Describes the snapshot created from the imported disk. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SnapshotDetail type SnapshotDetail struct { _ struct{} `type:"structure"` @@ -54087,7 +62414,6 @@ func (s *SnapshotDetail) SetUserBucket(v *UserBucketDetails) *SnapshotDetail { } // The disk container object for the import snapshot request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SnapshotDiskContainer type SnapshotDiskContainer struct { _ struct{} `type:"structure"` @@ -54142,7 +62468,6 @@ func (s *SnapshotDiskContainer) SetUserBucket(v *UserBucket) *SnapshotDiskContai } // Details about the import snapshot task. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SnapshotTaskDetail type SnapshotTaskDetail struct { _ struct{} `type:"structure"` @@ -54238,15 +62563,14 @@ func (s *SnapshotTaskDetail) SetUserBucket(v *UserBucketDetails) *SnapshotTaskDe return s } -// Describes the data feed for a Spot instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotDatafeedSubscription +// Describes the data feed for a Spot Instance. type SpotDatafeedSubscription struct { _ struct{} `type:"structure"` - // The Amazon S3 bucket where the Spot instance data feed is located. + // The Amazon S3 bucket where the Spot Instance data feed is located. 
Bucket *string `locationName:"bucket" type:"string"` - // The fault codes for the Spot instance request, if any. + // The fault codes for the Spot Instance request, if any. Fault *SpotInstanceStateFault `locationName:"fault" type:"structure"` // The AWS account ID of the account. @@ -54255,7 +62579,7 @@ type SpotDatafeedSubscription struct { // The prefix that is prepended to data feed files. Prefix *string `locationName:"prefix" type:"string"` - // The state of the Spot instance data feed subscription. + // The state of the Spot Instance data feed subscription. State *string `locationName:"state" type:"string" enum:"DatafeedSubscriptionState"` } @@ -54299,15 +62623,17 @@ func (s *SpotDatafeedSubscription) SetState(v string) *SpotDatafeedSubscription return s } -// Describes the launch specification for one or more Spot instances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotFleetLaunchSpecification +// Describes the launch specification for one or more Spot Instances. type SpotFleetLaunchSpecification struct { _ struct{} `type:"structure"` // Deprecated. AddressingType *string `locationName:"addressingType" type:"string"` - // One or more block device mapping entries. + // One or more block device mapping entries. You can't specify both a snapshot + // ID and an encryption value. This is because only blank volumes can be encrypted + // on creation. If a snapshot is the basis for a volume, it is not blank and + // its encryption status is used for the volume encryption status. BlockDeviceMappings []*BlockDeviceMapping `locationName:"blockDeviceMapping" locationNameList:"item" type:"list"` // Indicates whether the instances are optimized for EBS I/O. This optimization @@ -54325,7 +62651,7 @@ type SpotFleetLaunchSpecification struct { // The ID of the AMI. ImageId *string `locationName:"imageId" type:"string"` - // The instance type. Note that T2 and HS1 instance types are not supported. + // The instance type. InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` // The ID of the kernel. @@ -54352,10 +62678,10 @@ type SpotFleetLaunchSpecification struct { // you can specify the names or the IDs of the security groups. SecurityGroups []*GroupIdentifier `locationName:"groupSet" locationNameList:"item" type:"list"` - // The bid price per unit hour for the specified instance type. If this value - // is not specified, the default is the Spot bid price specified for the fleet. - // To determine the bid price per unit hour, divide the Spot bid price by the - // value of WeightedCapacity. + // The maximum price per unit hour that you are willing to pay for a Spot Instance. + // If this value is not specified, the default is the Spot price specified for + // the fleet. To determine the Spot price per unit hour, divide the Spot price + // by the value of WeightedCapacity. SpotPrice *string `locationName:"spotPrice" type:"string"` // The ID of the subnet in which to launch the instances. To specify multiple @@ -54519,7 +62845,6 @@ func (s *SpotFleetLaunchSpecification) SetWeightedCapacity(v float64) *SpotFleet } // Describes whether monitoring is enabled. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotFleetMonitoring type SpotFleetMonitoring struct { _ struct{} `type:"structure"` @@ -54545,16 +62870,15 @@ func (s *SpotFleetMonitoring) SetEnabled(v bool) *SpotFleetMonitoring { return s } -// Describes a Spot fleet request. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotFleetRequestConfig +// Describes a Spot Fleet request. type SpotFleetRequestConfig struct { _ struct{} `type:"structure"` - // The progress of the Spot fleet request. If there is an error, the status - // is error. After all bids are placed, the status is pending_fulfillment. If - // the size of the fleet is equal to or greater than its target capacity, the - // status is fulfilled. If the size of the fleet is decreased, the status is - // pending_termination while Spot instances are terminating. + // The progress of the Spot Fleet request. If there is an error, the status + // is error. After all requests are placed, the status is pending_fulfillment. + // If the size of the fleet is equal to or greater than its target capacity, + // the status is fulfilled. If the size of the fleet is decreased, the status + // is pending_termination while Spot Instances are terminating. ActivityStatus *string `locationName:"activityStatus" type:"string" enum:"ActivityStatus"` // The creation date and time of the request. @@ -54562,17 +62886,17 @@ type SpotFleetRequestConfig struct { // CreateTime is a required field CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601" required:"true"` - // Information about the configuration of the Spot fleet request. + // The configuration of the Spot Fleet request. // // SpotFleetRequestConfig is a required field SpotFleetRequestConfig *SpotFleetRequestConfigData `locationName:"spotFleetRequestConfig" type:"structure" required:"true"` - // The ID of the Spot fleet request. + // The ID of the Spot Fleet request. // // SpotFleetRequestId is a required field SpotFleetRequestId *string `locationName:"spotFleetRequestId" type:"string" required:"true"` - // The state of the Spot fleet request. + // The state of the Spot Fleet request. // // SpotFleetRequestState is a required field SpotFleetRequestState *string `locationName:"spotFleetRequestState" type:"string" required:"true" enum:"BatchState"` @@ -54618,13 +62942,12 @@ func (s *SpotFleetRequestConfig) SetSpotFleetRequestState(v string) *SpotFleetRe return s } -// Describes the configuration of a Spot fleet request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotFleetRequestConfigData +// Describes the configuration of a Spot Fleet request. type SpotFleetRequestConfigData struct { _ struct{} `type:"structure"` // Indicates how to allocate the target capacity across the Spot pools specified - // by the Spot fleet request. The default is lowestPrice. + // by the Spot Fleet request. The default is lowestPrice. AllocationStrategy *string `locationName:"allocationStrategy" type:"string" enum:"AllocationStrategy"` // A unique, case-sensitive identifier you provide to ensure idempotency of @@ -54632,54 +62955,68 @@ type SpotFleetRequestConfigData struct { // see Ensuring Idempotency (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html). ClientToken *string `locationName:"clientToken" type:"string"` - // Indicates whether running Spot instances should be terminated if the target - // capacity of the Spot fleet request is decreased below the current size of - // the Spot fleet. + // Indicates whether running Spot Instances should be terminated if the target + // capacity of the Spot Fleet request is decreased below the current size of + // the Spot Fleet. 
ExcessCapacityTerminationPolicy *string `locationName:"excessCapacityTerminationPolicy" type:"string" enum:"ExcessCapacityTerminationPolicy"` // The number of units fulfilled by this request compared to the set target // capacity. FulfilledCapacity *float64 `locationName:"fulfilledCapacity" type:"double"` - // Grants the Spot fleet permission to terminate Spot instances on your behalf - // when you cancel its Spot fleet request using CancelSpotFleetRequests or when - // the Spot fleet request expires, if you set terminateInstancesWithExpiration. + // Grants the Spot Fleet permission to terminate Spot Instances on your behalf + // when you cancel its Spot Fleet request using CancelSpotFleetRequests or when + // the Spot Fleet request expires, if you set terminateInstancesWithExpiration. // // IamFleetRole is a required field IamFleetRole *string `locationName:"iamFleetRole" type:"string" required:"true"` - // Information about the launch specifications for the Spot fleet request. - // - // LaunchSpecifications is a required field - LaunchSpecifications []*SpotFleetLaunchSpecification `locationName:"launchSpecifications" locationNameList:"item" min:"1" type:"list" required:"true"` + // The behavior when a Spot Instance is interrupted. The default is terminate. + InstanceInterruptionBehavior *string `locationName:"instanceInterruptionBehavior" type:"string" enum:"InstanceInterruptionBehavior"` - // Indicates whether Spot fleet should replace unhealthy instances. + // The launch specifications for the Spot Fleet request. + LaunchSpecifications []*SpotFleetLaunchSpecification `locationName:"launchSpecifications" locationNameList:"item" type:"list"` + + // The launch template and overrides. + LaunchTemplateConfigs []*LaunchTemplateConfig `locationName:"launchTemplateConfigs" locationNameList:"item" type:"list"` + + // One or more Classic Load Balancers and target groups to attach to the Spot + // Fleet request. Spot Fleet registers the running Spot Instances with the specified + // Classic Load Balancers and target groups. + // + // With Network Load Balancers, Spot Fleet cannot register instances that have + // the following instance types: C1, CC1, CC2, CG1, CG2, CR1, CS1, G1, G2, HI1, + // HS1, M1, M2, M3, and T1. + LoadBalancersConfig *LoadBalancersConfig `locationName:"loadBalancersConfig" type:"structure"` + + // Indicates whether Spot Fleet should replace unhealthy instances. ReplaceUnhealthyInstances *bool `locationName:"replaceUnhealthyInstances" type:"boolean"` - // The bid price per unit hour. - // - // SpotPrice is a required field - SpotPrice *string `locationName:"spotPrice" type:"string" required:"true"` + // The maximum price per unit hour that you are willing to pay for a Spot Instance. + // The default is the On-Demand price. + SpotPrice *string `locationName:"spotPrice" type:"string"` // The number of units to request. You can choose to set the target capacity // in terms of instances or a performance characteristic that is important to - // your application workload, such as vCPUs, memory, or I/O. + // your application workload, such as vCPUs, memory, or I/O. If the request + // type is maintain, you can specify a target capacity of 0 and add capacity + // later. // // TargetCapacity is a required field TargetCapacity *int64 `locationName:"targetCapacity" type:"integer" required:"true"` - // Indicates whether running Spot instances should be terminated when the Spot - // fleet request expires. 
+ // Indicates whether running Spot Instances should be terminated when the Spot + // Fleet request expires. TerminateInstancesWithExpiration *bool `locationName:"terminateInstancesWithExpiration" type:"boolean"` // The type of request. Indicates whether the fleet will only request the target // capacity or also attempt to maintain it. When you request a certain target - // capacity, the fleet will only place the required bids. It will not attempt - // to replenish Spot instances if capacity is diminished, nor will it submit - // bids in alternative Spot pools if capacity is not available. When you want - // to maintain a certain target capacity, fleet will place the required bids - // to meet this target capacity. It will also automatically replenish any interrupted - // instances. Default: maintain. + // capacity, the fleet will only place the required requests. It will not attempt + // to replenish Spot Instances if capacity is diminished, nor will it submit + // requests in alternative Spot pools if capacity is not available. When you + // want to maintain a certain target capacity, fleet will place the required + // requests to meet this target capacity. It will also automatically replenish + // any interrupted instances. Default: maintain. Type *string `locationName:"type" type:"string" enum:"FleetType"` // The start date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). @@ -54687,8 +63024,8 @@ type SpotFleetRequestConfigData struct { ValidFrom *time.Time `locationName:"validFrom" type:"timestamp" timestampFormat:"iso8601"` // The end date and time of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). - // At this point, no new Spot instance requests are placed or enabled to fulfill - // the request. + // At this point, no new Spot Instance requests are placed or able to fulfill + // the request. The default end date is 7 days from the current date. ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` } @@ -54708,15 +63045,6 @@ func (s *SpotFleetRequestConfigData) Validate() error { if s.IamFleetRole == nil { invalidParams.Add(request.NewErrParamRequired("IamFleetRole")) } - if s.LaunchSpecifications == nil { - invalidParams.Add(request.NewErrParamRequired("LaunchSpecifications")) - } - if s.LaunchSpecifications != nil && len(s.LaunchSpecifications) < 1 { - invalidParams.Add(request.NewErrParamMinLen("LaunchSpecifications", 1)) - } - if s.SpotPrice == nil { - invalidParams.Add(request.NewErrParamRequired("SpotPrice")) - } if s.TargetCapacity == nil { invalidParams.Add(request.NewErrParamRequired("TargetCapacity")) } @@ -54730,6 +63058,21 @@ func (s *SpotFleetRequestConfigData) Validate() error { } } } + if s.LaunchTemplateConfigs != nil { + for i, v := range s.LaunchTemplateConfigs { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LaunchTemplateConfigs", i), err.(request.ErrInvalidParams)) + } + } + } + if s.LoadBalancersConfig != nil { + if err := s.LoadBalancersConfig.Validate(); err != nil { + invalidParams.AddNested("LoadBalancersConfig", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -54767,12 +63110,30 @@ func (s *SpotFleetRequestConfigData) SetIamFleetRole(v string) *SpotFleetRequest return s } +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. 
+func (s *SpotFleetRequestConfigData) SetInstanceInterruptionBehavior(v string) *SpotFleetRequestConfigData { + s.InstanceInterruptionBehavior = &v + return s +} + // SetLaunchSpecifications sets the LaunchSpecifications field's value. func (s *SpotFleetRequestConfigData) SetLaunchSpecifications(v []*SpotFleetLaunchSpecification) *SpotFleetRequestConfigData { s.LaunchSpecifications = v return s } +// SetLaunchTemplateConfigs sets the LaunchTemplateConfigs field's value. +func (s *SpotFleetRequestConfigData) SetLaunchTemplateConfigs(v []*LaunchTemplateConfig) *SpotFleetRequestConfigData { + s.LaunchTemplateConfigs = v + return s +} + +// SetLoadBalancersConfig sets the LoadBalancersConfig field's value. +func (s *SpotFleetRequestConfigData) SetLoadBalancersConfig(v *LoadBalancersConfig) *SpotFleetRequestConfigData { + s.LoadBalancersConfig = v + return s +} + // SetReplaceUnhealthyInstances sets the ReplaceUnhealthyInstances field's value. func (s *SpotFleetRequestConfigData) SetReplaceUnhealthyInstances(v bool) *SpotFleetRequestConfigData { s.ReplaceUnhealthyInstances = &v @@ -54815,8 +63176,7 @@ func (s *SpotFleetRequestConfigData) SetValidUntil(v time.Time) *SpotFleetReques return s } -// The tags for a Spot fleet resource. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotFleetTagSpecification +// The tags for a Spot Fleet resource. type SpotFleetTagSpecification struct { _ struct{} `type:"structure"` @@ -54850,67 +63210,68 @@ func (s *SpotFleetTagSpecification) SetTags(v []*Tag) *SpotFleetTagSpecification return s } -// Describes a Spot instance request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotInstanceRequest +// Describes a Spot Instance request. type SpotInstanceRequest struct { _ struct{} `type:"structure"` - // If you specified a duration and your Spot instance request was fulfilled, - // this is the fixed hourly price in effect for the Spot instance while it runs. + // If you specified a duration and your Spot Instance request was fulfilled, + // this is the fixed hourly price in effect for the Spot Instance while it runs. ActualBlockHourlyPrice *string `locationName:"actualBlockHourlyPrice" type:"string"` // The Availability Zone group. If you specify the same Availability Zone group - // for all Spot instance requests, all Spot instances are launched in the same + // for all Spot Instance requests, all Spot Instances are launched in the same // Availability Zone. AvailabilityZoneGroup *string `locationName:"availabilityZoneGroup" type:"string"` - // The duration for the Spot instance, in minutes. + // The duration for the Spot Instance, in minutes. BlockDurationMinutes *int64 `locationName:"blockDurationMinutes" type:"integer"` - // The date and time when the Spot instance request was created, in UTC format + // The date and time when the Spot Instance request was created, in UTC format // (for example, YYYY-MM-DDTHH:MM:SSZ). CreateTime *time.Time `locationName:"createTime" type:"timestamp" timestampFormat:"iso8601"` - // The fault codes for the Spot instance request, if any. + // The fault codes for the Spot Instance request, if any. Fault *SpotInstanceStateFault `locationName:"fault" type:"structure"` - // The instance ID, if an instance has been launched to fulfill the Spot instance + // The instance ID, if an instance has been launched to fulfill the Spot Instance // request. InstanceId *string `locationName:"instanceId" type:"string"` - // The instance launch group. 
Launch groups are Spot instances that launch together + // The behavior when a Spot Instance is interrupted. + InstanceInterruptionBehavior *string `locationName:"instanceInterruptionBehavior" type:"string" enum:"InstanceInterruptionBehavior"` + + // The instance launch group. Launch groups are Spot Instances that launch together // and terminate together. LaunchGroup *string `locationName:"launchGroup" type:"string"` // Additional information for launching instances. LaunchSpecification *LaunchSpecification `locationName:"launchSpecification" type:"structure"` - // The Availability Zone in which the bid is launched. + // The Availability Zone in which the request is launched. LaunchedAvailabilityZone *string `locationName:"launchedAvailabilityZone" type:"string"` - // The product description associated with the Spot instance. + // The product description associated with the Spot Instance. ProductDescription *string `locationName:"productDescription" type:"string" enum:"RIProductDescription"` - // The ID of the Spot instance request. + // The ID of the Spot Instance request. SpotInstanceRequestId *string `locationName:"spotInstanceRequestId" type:"string"` - // The maximum hourly price (bid) for the Spot instance launched to fulfill - // the request. + // The maximum price per hour that you are willing to pay for a Spot Instance. SpotPrice *string `locationName:"spotPrice" type:"string"` - // The state of the Spot instance request. Spot bid status information can help - // you track your Spot instance requests. For more information, see Spot Bid - // Status (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) + // The state of the Spot Instance request. Spot status information can help + // you track your Spot Instance requests. For more information, see Spot Status + // (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html) // in the Amazon Elastic Compute Cloud User Guide. State *string `locationName:"state" type:"string" enum:"SpotInstanceState"` - // The status code and status message describing the Spot instance request. + // The status code and status message describing the Spot Instance request. Status *SpotInstanceStatus `locationName:"status" type:"structure"` // Any tags assigned to the resource. Tags []*Tag `locationName:"tagSet" locationNameList:"item" type:"list"` - // The Spot instance request type. + // The Spot Instance request type. Type *string `locationName:"type" type:"string" enum:"SpotInstanceType"` // The start date of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). @@ -54920,7 +63281,8 @@ type SpotInstanceRequest struct { // The end date of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). // If this is a one-time request, it remains active until all instances launch, // the request is canceled, or this date is reached. If the request is persistent, - // it remains active until it is canceled or this date is reached. + // it remains active until it is canceled or this date is reached. The default + // end date is 7 days from the current date. ValidUntil *time.Time `locationName:"validUntil" type:"timestamp" timestampFormat:"iso8601"` } @@ -54970,6 +63332,12 @@ func (s *SpotInstanceRequest) SetInstanceId(v string) *SpotInstanceRequest { return s } +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. 
+func (s *SpotInstanceRequest) SetInstanceInterruptionBehavior(v string) *SpotInstanceRequest { + s.InstanceInterruptionBehavior = &v + return s +} + // SetLaunchGroup sets the LaunchGroup field's value. func (s *SpotInstanceRequest) SetLaunchGroup(v string) *SpotInstanceRequest { s.LaunchGroup = &v @@ -55042,15 +63410,14 @@ func (s *SpotInstanceRequest) SetValidUntil(v time.Time) *SpotInstanceRequest { return s } -// Describes a Spot instance state change. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotInstanceStateFault +// Describes a Spot Instance state change. type SpotInstanceStateFault struct { _ struct{} `type:"structure"` - // The reason code for the Spot instance state change. + // The reason code for the Spot Instance state change. Code *string `locationName:"code" type:"string"` - // The message for the Spot instance state change. + // The message for the Spot Instance state change. Message *string `locationName:"message" type:"string"` } @@ -55076,12 +63443,11 @@ func (s *SpotInstanceStateFault) SetMessage(v string) *SpotInstanceStateFault { return s } -// Describes the status of a Spot instance request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotInstanceStatus +// Describes the status of a Spot Instance request. type SpotInstanceStatus struct { _ struct{} `type:"structure"` - // The status code. For a list of status codes, see Spot Bid Status Codes (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html#spot-instance-bid-status-understand) + // The status code. For a list of status codes, see Spot Status Codes (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html#spot-instance-bid-status-understand) // in the Amazon Elastic Compute Cloud User Guide. Code *string `locationName:"code" type:"string"` @@ -55121,23 +63487,89 @@ func (s *SpotInstanceStatus) SetUpdateTime(v time.Time) *SpotInstanceStatus { return s } -// Describes Spot instance placement. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotPlacement +// The options for Spot Instances. +type SpotMarketOptions struct { + _ struct{} `type:"structure"` + + // The required duration for the Spot Instances (also known as Spot blocks), + // in minutes. This value must be a multiple of 60 (60, 120, 180, 240, 300, + // or 360). + BlockDurationMinutes *int64 `type:"integer"` + + // The behavior when a Spot Instance is interrupted. The default is terminate. + InstanceInterruptionBehavior *string `type:"string" enum:"InstanceInterruptionBehavior"` + + // The maximum hourly price you're willing to pay for the Spot Instances. The + // default is the On-Demand price. + MaxPrice *string `type:"string"` + + // The Spot Instance request type. + SpotInstanceType *string `type:"string" enum:"SpotInstanceType"` + + // The end date of the request. For a one-time request, the request remains + // active until all instances launch, the request is canceled, or this date + // is reached. If the request is persistent, it remains active until it is canceled + // or this date and time is reached. The default end date is 7 days from the + // current date. 
+ ValidUntil *time.Time `type:"timestamp" timestampFormat:"iso8601"` +} + +// String returns the string representation +func (s SpotMarketOptions) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SpotMarketOptions) GoString() string { + return s.String() +} + +// SetBlockDurationMinutes sets the BlockDurationMinutes field's value. +func (s *SpotMarketOptions) SetBlockDurationMinutes(v int64) *SpotMarketOptions { + s.BlockDurationMinutes = &v + return s +} + +// SetInstanceInterruptionBehavior sets the InstanceInterruptionBehavior field's value. +func (s *SpotMarketOptions) SetInstanceInterruptionBehavior(v string) *SpotMarketOptions { + s.InstanceInterruptionBehavior = &v + return s +} + +// SetMaxPrice sets the MaxPrice field's value. +func (s *SpotMarketOptions) SetMaxPrice(v string) *SpotMarketOptions { + s.MaxPrice = &v + return s +} + +// SetSpotInstanceType sets the SpotInstanceType field's value. +func (s *SpotMarketOptions) SetSpotInstanceType(v string) *SpotMarketOptions { + s.SpotInstanceType = &v + return s +} + +// SetValidUntil sets the ValidUntil field's value. +func (s *SpotMarketOptions) SetValidUntil(v time.Time) *SpotMarketOptions { + s.ValidUntil = &v + return s +} + +// Describes Spot Instance placement. type SpotPlacement struct { _ struct{} `type:"structure"` // The Availability Zone. // - // [Spot fleet only] To specify multiple Availability Zones, separate them using + // [Spot Fleet only] To specify multiple Availability Zones, separate them using // commas; for example, "us-west-2a, us-west-2b". AvailabilityZone *string `locationName:"availabilityZone" type:"string"` - // The name of the placement group (for cluster instances). + // The name of the placement group. GroupName *string `locationName:"groupName" type:"string"` // The tenancy of the instance (if the instance is running in a VPC). An instance // with a tenancy of dedicated runs on single-tenant hardware. The host tenancy - // is not supported for Spot instances. + // is not supported for Spot Instances. Tenancy *string `locationName:"tenancy" type:"string" enum:"Tenancy"` } @@ -55169,22 +63601,21 @@ func (s *SpotPlacement) SetTenancy(v string) *SpotPlacement { return s } -// Describes the maximum hourly price (bid) for any Spot instance launched to -// fulfill the request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SpotPrice +// Describes the maximum price per hour that you are willing to pay for a Spot +// Instance. type SpotPrice struct { _ struct{} `type:"structure"` // The Availability Zone. AvailabilityZone *string `locationName:"availabilityZone" type:"string"` - // The instance type. Note that T2 and HS1 instance types are not supported. + // The instance type. InstanceType *string `locationName:"instanceType" type:"string" enum:"InstanceType"` // A general description of the AMI. ProductDescription *string `locationName:"productDescription" type:"string" enum:"RIProductDescription"` - // The maximum price (bid) that you are willing to pay for a Spot instance. + // The maximum price per hour that you are willing to pay for a Spot Instance. SpotPrice *string `locationName:"spotPrice" type:"string"` // The date and time the request was created, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). @@ -55232,7 +63663,6 @@ func (s *SpotPrice) SetTimestamp(v time.Time) *SpotPrice { } // Describes a stale rule in a security group. 
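As a rough usage sketch of the new SpotMarketOptions type and its chainable setters shown above: the price, interruption behavior, and expiry below are illustrative values, and the request type that ultimately carries these options when launching instances is assumed to live elsewhere in the SDK.

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Cap the price, keep the instance (stopped) on interruption, and let the
	// request expire after a day. BlockDurationMinutes (multiples of 60) could
	// be set the same way if a defined-duration instance were wanted.
	opts := (&ec2.SpotMarketOptions{}).
		SetMaxPrice("0.05"). // illustrative price, in USD per hour
		SetInstanceInterruptionBehavior(ec2.InstanceInterruptionBehaviorStop).
		SetValidUntil(time.Now().Add(24 * time.Hour))

	fmt.Println(opts)
}
```

The chained style works because every generated Set* method returns its receiver.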
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StaleIpPermission type StaleIpPermission struct { _ struct{} `type:"structure"` @@ -55307,7 +63737,6 @@ func (s *StaleIpPermission) SetUserIdGroupPairs(v []*UserIdGroupPair) *StaleIpPe } // Describes a stale security group (a security group that contains stale rules). -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StaleSecurityGroup type StaleSecurityGroup struct { _ struct{} `type:"structure"` @@ -55379,7 +63808,6 @@ func (s *StaleSecurityGroup) SetVpcId(v string) *StaleSecurityGroup { } // Contains the parameters for StartInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StartInstancesRequest type StartInstancesInput struct { _ struct{} `type:"structure"` @@ -55440,7 +63868,6 @@ func (s *StartInstancesInput) SetInstanceIds(v []*string) *StartInstancesInput { } // Contains the output of StartInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StartInstancesResult type StartInstancesOutput struct { _ struct{} `type:"structure"` @@ -55465,7 +63892,6 @@ func (s *StartInstancesOutput) SetStartingInstances(v []*InstanceStateChange) *S } // Describes a state change. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StateReason type StateReason struct { _ struct{} `type:"structure"` @@ -55482,8 +63908,8 @@ type StateReason struct { // // * Server.ScheduledStop: The instance was stopped due to a scheduled retirement. // - // * Server.SpotInstanceTermination: A Spot instance was terminated due to - // an increase in the market price. + // * Server.SpotInstanceTermination: A Spot Instance was terminated due to + // an increase in the Spot price. // // * Client.InternalError: A client error caused the instance to terminate // on launch. @@ -55491,6 +63917,9 @@ type StateReason struct { // * Client.InstanceInitiatedShutdown: The instance was shut down using the // shutdown -h command from the instance. // + // * Client.InstanceTerminated: The instance was terminated or rebooted during + // AMI creation. + // // * Client.UserInitiatedShutdown: The instance was shut down using the Amazon // EC2 API. // @@ -55525,7 +63954,6 @@ func (s *StateReason) SetMessage(v string) *StateReason { } // Contains the parameters for StopInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StopInstancesRequest type StopInstancesInput struct { _ struct{} `type:"structure"` @@ -55591,7 +64019,6 @@ func (s *StopInstancesInput) SetInstanceIds(v []*string) *StopInstancesInput { } // Contains the output of StopInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StopInstancesResult type StopInstancesOutput struct { _ struct{} `type:"structure"` @@ -55616,7 +64043,6 @@ func (s *StopInstancesOutput) SetStoppingInstances(v []*InstanceStateChange) *St } // Describes the storage location for an instance store-backed AMI. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Storage type Storage struct { _ struct{} `type:"structure"` @@ -55641,7 +64067,6 @@ func (s *Storage) SetS3(v *S3Storage) *Storage { } // Describes a storage location in Amazon S3. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/StorageLocation type StorageLocation struct { _ struct{} `type:"structure"` @@ -55675,7 +64100,6 @@ func (s *StorageLocation) SetKey(v string) *StorageLocation { } // Describes a subnet. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Subnet type Subnet struct { _ struct{} `type:"structure"` @@ -55793,7 +64217,6 @@ func (s *Subnet) SetVpcId(v string) *Subnet { } // Describes the state of a CIDR block. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SubnetCidrBlockState type SubnetCidrBlockState struct { _ struct{} `type:"structure"` @@ -55827,7 +64250,6 @@ func (s *SubnetCidrBlockState) SetStatusMessage(v string) *SubnetCidrBlockState } // Describes an IPv6 CIDR block associated with a subnet. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/SubnetIpv6CidrBlockAssociation type SubnetIpv6CidrBlockAssociation struct { _ struct{} `type:"structure"` @@ -55869,8 +64291,32 @@ func (s *SubnetIpv6CidrBlockAssociation) SetIpv6CidrBlockState(v *SubnetCidrBloc return s } +// Describes the T2 instance whose credit option for CPU usage was successfully +// modified. +type SuccessfulInstanceCreditSpecificationItem struct { + _ struct{} `type:"structure"` + + // The ID of the instance. + InstanceId *string `locationName:"instanceId" type:"string"` +} + +// String returns the string representation +func (s SuccessfulInstanceCreditSpecificationItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SuccessfulInstanceCreditSpecificationItem) GoString() string { + return s.String() +} + +// SetInstanceId sets the InstanceId field's value. +func (s *SuccessfulInstanceCreditSpecificationItem) SetInstanceId(v string) *SuccessfulInstanceCreditSpecificationItem { + s.InstanceId = &v + return s +} + // Describes a tag. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Tag type Tag struct { _ struct{} `type:"structure"` @@ -55910,7 +64356,6 @@ func (s *Tag) SetValue(v string) *Tag { } // Describes a tag. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TagDescription type TagDescription struct { _ struct{} `type:"structure"` @@ -55962,7 +64407,6 @@ func (s *TagDescription) SetValue(v string) *TagDescription { } // The tags to apply to a resource when the resource is being created. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TagSpecification type TagSpecification struct { _ struct{} `type:"structure"` @@ -55997,7 +64441,6 @@ func (s *TagSpecification) SetTags(v []*Tag) *TagSpecification { } // Information about the Convertible Reserved Instance offering. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TargetConfiguration type TargetConfiguration struct { _ struct{} `type:"structure"` @@ -56032,7 +64475,6 @@ func (s *TargetConfiguration) SetOfferingId(v string) *TargetConfiguration { } // Details about the target configuration. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TargetConfigurationRequest type TargetConfigurationRequest struct { _ struct{} `type:"structure"` @@ -56081,8 +64523,99 @@ func (s *TargetConfigurationRequest) SetOfferingId(v string) *TargetConfiguratio return s } +// Describes a load balancer target group. +type TargetGroup struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the target group. 
+ // + // Arn is a required field + Arn *string `locationName:"arn" type:"string" required:"true"` +} + +// String returns the string representation +func (s TargetGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TargetGroup) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TargetGroup) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TargetGroup"} + if s.Arn == nil { + invalidParams.Add(request.NewErrParamRequired("Arn")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetArn sets the Arn field's value. +func (s *TargetGroup) SetArn(v string) *TargetGroup { + s.Arn = &v + return s +} + +// Describes the target groups to attach to a Spot Fleet. Spot Fleet registers +// the running Spot Instances with these target groups. +type TargetGroupsConfig struct { + _ struct{} `type:"structure"` + + // One or more target groups. + // + // TargetGroups is a required field + TargetGroups []*TargetGroup `locationName:"targetGroups" locationNameList:"item" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s TargetGroupsConfig) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TargetGroupsConfig) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TargetGroupsConfig) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TargetGroupsConfig"} + if s.TargetGroups == nil { + invalidParams.Add(request.NewErrParamRequired("TargetGroups")) + } + if s.TargetGroups != nil && len(s.TargetGroups) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetGroups", 1)) + } + if s.TargetGroups != nil { + for i, v := range s.TargetGroups { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "TargetGroups", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTargetGroups sets the TargetGroups field's value. +func (s *TargetGroupsConfig) SetTargetGroups(v []*TargetGroup) *TargetGroupsConfig { + s.TargetGroups = v + return s +} + // The total value of the new Convertible Reserved Instances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TargetReservationValue type TargetReservationValue struct { _ struct{} `type:"structure"` @@ -56119,7 +64652,6 @@ func (s *TargetReservationValue) SetTargetConfiguration(v *TargetConfiguration) } // Contains the parameters for TerminateInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TerminateInstancesRequest type TerminateInstancesInput struct { _ struct{} `type:"structure"` @@ -56174,7 +64706,6 @@ func (s *TerminateInstancesInput) SetInstanceIds(v []*string) *TerminateInstance } // Contains the output of TerminateInstances. 
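The TargetGroup and TargetGroupsConfig types above carry their own client-side validation. A minimal sketch, assuming a placeholder target group ARN and leaving out the Spot Fleet request that would embed the config:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// A single target group, identified by a placeholder ARN.
	tg := (&ec2.TargetGroup{}).
		SetArn("arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/example/1234567890123456")

	cfg := (&ec2.TargetGroupsConfig{}).SetTargetGroups([]*ec2.TargetGroup{tg})

	// Validate enforces the required / min:"1" constraints from the struct
	// tags before any request is sent.
	if err := cfg.Validate(); err != nil {
		fmt.Println("invalid config:", err)
		return
	}
	fmt.Println(cfg)
}
```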
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/TerminateInstancesResult type TerminateInstancesOutput struct { _ struct{} `type:"structure"` @@ -56198,7 +64729,6 @@ func (s *TerminateInstancesOutput) SetTerminatingInstances(v []*InstanceStateCha return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignIpv6AddressesRequest type UnassignIpv6AddressesInput struct { _ struct{} `type:"structure"` @@ -56251,7 +64781,6 @@ func (s *UnassignIpv6AddressesInput) SetNetworkInterfaceId(v string) *UnassignIp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignIpv6AddressesResult type UnassignIpv6AddressesOutput struct { _ struct{} `type:"structure"` @@ -56285,7 +64814,6 @@ func (s *UnassignIpv6AddressesOutput) SetUnassignedIpv6Addresses(v []*string) *U } // Contains the parameters for UnassignPrivateIpAddresses. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignPrivateIpAddressesRequest type UnassignPrivateIpAddressesInput struct { _ struct{} `type:"structure"` @@ -56339,7 +64867,6 @@ func (s *UnassignPrivateIpAddressesInput) SetPrivateIpAddresses(v []*string) *Un return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnassignPrivateIpAddressesOutput type UnassignPrivateIpAddressesOutput struct { _ struct{} `type:"structure"` } @@ -56355,7 +64882,6 @@ func (s UnassignPrivateIpAddressesOutput) GoString() string { } // Contains the parameters for UnmonitorInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnmonitorInstancesRequest type UnmonitorInstancesInput struct { _ struct{} `type:"structure"` @@ -56407,7 +64933,6 @@ func (s *UnmonitorInstancesInput) SetInstanceIds(v []*string) *UnmonitorInstance } // Contains the output of UnmonitorInstances. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnmonitorInstancesResult type UnmonitorInstancesOutput struct { _ struct{} `type:"structure"` @@ -56431,8 +64956,75 @@ func (s *UnmonitorInstancesOutput) SetInstanceMonitorings(v []*InstanceMonitorin return s } +// Describes the T2 instance whose credit option for CPU usage was not modified. +type UnsuccessfulInstanceCreditSpecificationItem struct { + _ struct{} `type:"structure"` + + // The applicable error for the T2 instance whose credit option for CPU usage + // was not modified. + Error *UnsuccessfulInstanceCreditSpecificationItemError `locationName:"error" type:"structure"` + + // The ID of the instance. + InstanceId *string `locationName:"instanceId" type:"string"` +} + +// String returns the string representation +func (s UnsuccessfulInstanceCreditSpecificationItem) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UnsuccessfulInstanceCreditSpecificationItem) GoString() string { + return s.String() +} + +// SetError sets the Error field's value. +func (s *UnsuccessfulInstanceCreditSpecificationItem) SetError(v *UnsuccessfulInstanceCreditSpecificationItemError) *UnsuccessfulInstanceCreditSpecificationItem { + s.Error = v + return s +} + +// SetInstanceId sets the InstanceId field's value. +func (s *UnsuccessfulInstanceCreditSpecificationItem) SetInstanceId(v string) *UnsuccessfulInstanceCreditSpecificationItem { + s.InstanceId = &v + return s +} + +// Information about the error for the T2 instance whose credit option for CPU +// usage was not modified. 
+type UnsuccessfulInstanceCreditSpecificationItemError struct { + _ struct{} `type:"structure"` + + // The error code. + Code *string `locationName:"code" type:"string" enum:"UnsuccessfulInstanceCreditSpecificationErrorCode"` + + // The applicable error message. + Message *string `locationName:"message" type:"string"` +} + +// String returns the string representation +func (s UnsuccessfulInstanceCreditSpecificationItemError) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UnsuccessfulInstanceCreditSpecificationItemError) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *UnsuccessfulInstanceCreditSpecificationItemError) SetCode(v string) *UnsuccessfulInstanceCreditSpecificationItemError { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *UnsuccessfulInstanceCreditSpecificationItemError) SetMessage(v string) *UnsuccessfulInstanceCreditSpecificationItemError { + s.Message = &v + return s +} + // Information about items that were not successfully processed in a batch call. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnsuccessfulItem type UnsuccessfulItem struct { _ struct{} `type:"structure"` @@ -56469,7 +65061,6 @@ func (s *UnsuccessfulItem) SetResourceId(v string) *UnsuccessfulItem { // Information about the error that occurred. For more information about errors, // see Error Codes (http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html). -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UnsuccessfulItemError type UnsuccessfulItemError struct { _ struct{} `type:"structure"` @@ -56506,8 +65097,199 @@ func (s *UnsuccessfulItemError) SetMessage(v string) *UnsuccessfulItemError { return s } +// Contains the parameters for UpdateSecurityGroupRuleDescriptionsEgress. +type UpdateSecurityGroupRuleDescriptionsEgressInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the security group. You must specify either the security group + // ID or the security group name in the request. For security groups in a nondefault + // VPC, you must specify the security group ID. + GroupId *string `type:"string"` + + // [Default VPC] The name of the security group. You must specify either the + // security group ID or the security group name in the request. + GroupName *string `type:"string"` + + // The IP permissions for the security group rule. + // + // IpPermissions is a required field + IpPermissions []*IpPermission `locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsEgressInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsEgressInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
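A small sketch of consuming the UnsuccessfulInstanceCreditSpecificationItem pair defined above; the slice is hand-built here with placeholder values, whereas in practice it would come back from a ModifyInstanceCreditSpecification call (that output type is not part of this excerpt).

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// reportCreditSpecFailures prints why each instance's credit option was not
// modified, using the item and error types defined above.
func reportCreditSpecFailures(items []*ec2.UnsuccessfulInstanceCreditSpecificationItem) {
	for _, item := range items {
		if item.Error == nil {
			continue
		}
		fmt.Printf("%s: %s (%s)\n",
			aws.StringValue(item.InstanceId),
			aws.StringValue(item.Error.Message),
			aws.StringValue(item.Error.Code))
	}
}

func main() {
	// Hand-built placeholder data standing in for an API response.
	reportCreditSpecFailures([]*ec2.UnsuccessfulInstanceCreditSpecificationItem{
		(&ec2.UnsuccessfulInstanceCreditSpecificationItem{}).
			SetInstanceId("i-0123456789abcdef0").
			SetError((&ec2.UnsuccessfulInstanceCreditSpecificationItemError{}).
				SetCode(ec2.UnsuccessfulInstanceCreditSpecificationErrorCodeIncorrectInstanceState).
				SetMessage("placeholder message")),
	})
}
```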
+func (s *UpdateSecurityGroupRuleDescriptionsEgressInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSecurityGroupRuleDescriptionsEgressInput"} + if s.IpPermissions == nil { + invalidParams.Add(request.NewErrParamRequired("IpPermissions")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *UpdateSecurityGroupRuleDescriptionsEgressInput) SetDryRun(v bool) *UpdateSecurityGroupRuleDescriptionsEgressInput { + s.DryRun = &v + return s +} + +// SetGroupId sets the GroupId field's value. +func (s *UpdateSecurityGroupRuleDescriptionsEgressInput) SetGroupId(v string) *UpdateSecurityGroupRuleDescriptionsEgressInput { + s.GroupId = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *UpdateSecurityGroupRuleDescriptionsEgressInput) SetGroupName(v string) *UpdateSecurityGroupRuleDescriptionsEgressInput { + s.GroupName = &v + return s +} + +// SetIpPermissions sets the IpPermissions field's value. +func (s *UpdateSecurityGroupRuleDescriptionsEgressInput) SetIpPermissions(v []*IpPermission) *UpdateSecurityGroupRuleDescriptionsEgressInput { + s.IpPermissions = v + return s +} + +// Contains the output of UpdateSecurityGroupRuleDescriptionsEgress. +type UpdateSecurityGroupRuleDescriptionsEgressOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, returns an error. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsEgressOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsEgressOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *UpdateSecurityGroupRuleDescriptionsEgressOutput) SetReturn(v bool) *UpdateSecurityGroupRuleDescriptionsEgressOutput { + s.Return = &v + return s +} + +// Contains the parameters for UpdateSecurityGroupRuleDescriptionsIngress. +type UpdateSecurityGroupRuleDescriptionsIngressInput struct { + _ struct{} `type:"structure"` + + // Checks whether you have the required permissions for the action, without + // actually making the request, and provides an error response. If you have + // the required permissions, the error response is DryRunOperation. Otherwise, + // it is UnauthorizedOperation. + DryRun *bool `type:"boolean"` + + // The ID of the security group. You must specify either the security group + // ID or the security group name in the request. For security groups in a nondefault + // VPC, you must specify the security group ID. + GroupId *string `type:"string"` + + // [EC2-Classic, default VPC] The name of the security group. You must specify + // either the security group ID or the security group name in the request. + GroupName *string `type:"string"` + + // The IP permissions for the security group rule. + // + // IpPermissions is a required field + IpPermissions []*IpPermission `locationNameList:"item" type:"list" required:"true"` +} + +// String returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsIngressInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsIngressInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *UpdateSecurityGroupRuleDescriptionsIngressInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UpdateSecurityGroupRuleDescriptionsIngressInput"} + if s.IpPermissions == nil { + invalidParams.Add(request.NewErrParamRequired("IpPermissions")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDryRun sets the DryRun field's value. +func (s *UpdateSecurityGroupRuleDescriptionsIngressInput) SetDryRun(v bool) *UpdateSecurityGroupRuleDescriptionsIngressInput { + s.DryRun = &v + return s +} + +// SetGroupId sets the GroupId field's value. +func (s *UpdateSecurityGroupRuleDescriptionsIngressInput) SetGroupId(v string) *UpdateSecurityGroupRuleDescriptionsIngressInput { + s.GroupId = &v + return s +} + +// SetGroupName sets the GroupName field's value. +func (s *UpdateSecurityGroupRuleDescriptionsIngressInput) SetGroupName(v string) *UpdateSecurityGroupRuleDescriptionsIngressInput { + s.GroupName = &v + return s +} + +// SetIpPermissions sets the IpPermissions field's value. +func (s *UpdateSecurityGroupRuleDescriptionsIngressInput) SetIpPermissions(v []*IpPermission) *UpdateSecurityGroupRuleDescriptionsIngressInput { + s.IpPermissions = v + return s +} + +// Contains the output of UpdateSecurityGroupRuleDescriptionsIngress. +type UpdateSecurityGroupRuleDescriptionsIngressOutput struct { + _ struct{} `type:"structure"` + + // Returns true if the request succeeds; otherwise, returns an error. + Return *bool `locationName:"return" type:"boolean"` +} + +// String returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsIngressOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UpdateSecurityGroupRuleDescriptionsIngressOutput) GoString() string { + return s.String() +} + +// SetReturn sets the Return field's value. +func (s *UpdateSecurityGroupRuleDescriptionsIngressOutput) SetReturn(v bool) *UpdateSecurityGroupRuleDescriptionsIngressOutput { + s.Return = &v + return s +} + // Describes the S3 bucket for the disk image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UserBucket type UserBucket struct { _ struct{} `type:"structure"` @@ -56541,7 +65323,6 @@ func (s *UserBucket) SetS3Key(v string) *UserBucket { } // Describes the S3 bucket for the disk image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UserBucketDetails type UserBucketDetails struct { _ struct{} `type:"structure"` @@ -56575,7 +65356,6 @@ func (s *UserBucketDetails) SetS3Key(v string) *UserBucketDetails { } // Describes the user data for an instance. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UserData type UserData struct { _ struct{} `type:"structure"` @@ -56602,23 +65382,35 @@ func (s *UserData) SetData(v string) *UserData { } // Describes a security group and AWS account ID pair. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/UserIdGroupPair type UserIdGroupPair struct { _ struct{} `type:"structure"` + // A description for the security group rule that references this user ID group + // pair. + // + // Constraints: Up to 255 characters in length. Allowed characters are a-z, + // A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$* + Description *string `locationName:"description" type:"string"` + // The ID of the security group. GroupId *string `locationName:"groupId" type:"string"` // The name of the security group. 
In a request, use this parameter for a security // group in EC2-Classic or a default VPC only. For a security group in a nondefault // VPC, use the security group ID. + // + // For a referenced security group in another VPC, this value is not returned + // if the referenced security group is deleted. GroupName *string `locationName:"groupName" type:"string"` // The status of a VPC peering connection, if applicable. PeeringStatus *string `locationName:"peeringStatus" type:"string"` - // The ID of an AWS account. For a referenced security group in another VPC, - // the account ID of the referenced security group is returned. + // The ID of an AWS account. + // + // For a referenced security group in another VPC, the account ID of the referenced + // security group is returned in the response. If the referenced security group + // is deleted, this value is not returned. // // [EC2-Classic] Required when adding or removing rules that reference a security // group in another AWS account. @@ -56641,6 +65433,12 @@ func (s UserIdGroupPair) GoString() string { return s.String() } +// SetDescription sets the Description field's value. +func (s *UserIdGroupPair) SetDescription(v string) *UserIdGroupPair { + s.Description = &v + return s +} + // SetGroupId sets the GroupId field's value. func (s *UserIdGroupPair) SetGroupId(v string) *UserIdGroupPair { s.GroupId = &v @@ -56678,7 +65476,6 @@ func (s *UserIdGroupPair) SetVpcPeeringConnectionId(v string) *UserIdGroupPair { } // Describes telemetry for a VPN tunnel. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VgwTelemetry type VgwTelemetry struct { _ struct{} `type:"structure"` @@ -56740,7 +65537,6 @@ func (s *VgwTelemetry) SetStatusMessage(v string) *VgwTelemetry { } // Describes a volume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Volume type Volume struct { _ struct{} `type:"structure"` @@ -56879,7 +65675,6 @@ func (s *Volume) SetVolumeType(v string) *Volume { } // Describes volume attachment details. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeAttachment type VolumeAttachment struct { _ struct{} `type:"structure"` @@ -56949,7 +65744,6 @@ func (s *VolumeAttachment) SetVolumeId(v string) *VolumeAttachment { } // Describes an EBS volume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeDetail type VolumeDetail struct { _ struct{} `type:"structure"` @@ -56991,7 +65785,6 @@ func (s *VolumeDetail) SetSize(v int64) *VolumeDetail { // Describes the modification status of an EBS volume. // // If the volume has never been modified, some element values will be null. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeModification type VolumeModification struct { _ struct{} `type:"structure"` @@ -57116,7 +65909,6 @@ func (s *VolumeModification) SetVolumeId(v string) *VolumeModification { } // Describes a volume status operation code. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeStatusAction type VolumeStatusAction struct { _ struct{} `type:"structure"` @@ -57168,7 +65960,6 @@ func (s *VolumeStatusAction) SetEventType(v string) *VolumeStatusAction { } // Describes a volume status. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeStatusDetails type VolumeStatusDetails struct { _ struct{} `type:"structure"` @@ -57202,7 +65993,6 @@ func (s *VolumeStatusDetails) SetStatus(v string) *VolumeStatusDetails { } // Describes a volume status event. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeStatusEvent type VolumeStatusEvent struct { _ struct{} `type:"structure"` @@ -57263,7 +66053,6 @@ func (s *VolumeStatusEvent) SetNotBefore(v time.Time) *VolumeStatusEvent { } // Describes the status of a volume. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeStatusInfo type VolumeStatusInfo struct { _ struct{} `type:"structure"` @@ -57297,7 +66086,6 @@ func (s *VolumeStatusInfo) SetStatus(v string) *VolumeStatusInfo { } // Describes the volume status. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VolumeStatusItem type VolumeStatusItem struct { _ struct{} `type:"structure"` @@ -57358,13 +66146,15 @@ func (s *VolumeStatusItem) SetVolumeStatus(v *VolumeStatusInfo) *VolumeStatusIte } // Describes a VPC. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/Vpc type Vpc struct { _ struct{} `type:"structure"` - // The IPv4 CIDR block for the VPC. + // The primary IPv4 CIDR block for the VPC. CidrBlock *string `locationName:"cidrBlock" type:"string"` + // Information about the IPv4 CIDR blocks associated with the VPC. + CidrBlockAssociationSet []*VpcCidrBlockAssociation `locationName:"cidrBlockAssociationSet" locationNameList:"item" type:"list"` + // The ID of the set of DHCP options you've associated with the VPC (or default // if the default options are associated with the VPC). DhcpOptionsId *string `locationName:"dhcpOptionsId" type:"string"` @@ -57404,6 +66194,12 @@ func (s *Vpc) SetCidrBlock(v string) *Vpc { return s } +// SetCidrBlockAssociationSet sets the CidrBlockAssociationSet field's value. +func (s *Vpc) SetCidrBlockAssociationSet(v []*VpcCidrBlockAssociation) *Vpc { + s.CidrBlockAssociationSet = v + return s +} + // SetDhcpOptionsId sets the DhcpOptionsId field's value. func (s *Vpc) SetDhcpOptionsId(v string) *Vpc { s.DhcpOptionsId = &v @@ -57447,7 +66243,6 @@ func (s *Vpc) SetVpcId(v string) *Vpc { } // Describes an attachment between a virtual private gateway and a VPC. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcAttachment type VpcAttachment struct { _ struct{} `type:"structure"` @@ -57480,8 +66275,49 @@ func (s *VpcAttachment) SetVpcId(v string) *VpcAttachment { return s } +// Describes an IPv4 CIDR block associated with a VPC. +type VpcCidrBlockAssociation struct { + _ struct{} `type:"structure"` + + // The association ID for the IPv4 CIDR block. + AssociationId *string `locationName:"associationId" type:"string"` + + // The IPv4 CIDR block. + CidrBlock *string `locationName:"cidrBlock" type:"string"` + + // Information about the state of the CIDR block. + CidrBlockState *VpcCidrBlockState `locationName:"cidrBlockState" type:"structure"` +} + +// String returns the string representation +func (s VpcCidrBlockAssociation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VpcCidrBlockAssociation) GoString() string { + return s.String() +} + +// SetAssociationId sets the AssociationId field's value. +func (s *VpcCidrBlockAssociation) SetAssociationId(v string) *VpcCidrBlockAssociation { + s.AssociationId = &v + return s +} + +// SetCidrBlock sets the CidrBlock field's value. +func (s *VpcCidrBlockAssociation) SetCidrBlock(v string) *VpcCidrBlockAssociation { + s.CidrBlock = &v + return s +} + +// SetCidrBlockState sets the CidrBlockState field's value. 
+func (s *VpcCidrBlockAssociation) SetCidrBlockState(v *VpcCidrBlockState) *VpcCidrBlockAssociation { + s.CidrBlockState = v + return s +} + // Describes the state of a CIDR block. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcCidrBlockState type VpcCidrBlockState struct { _ struct{} `type:"structure"` @@ -57515,7 +66351,6 @@ func (s *VpcCidrBlockState) SetStatusMessage(v string) *VpcCidrBlockState { } // Describes whether a VPC is enabled for ClassicLink. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcClassicLink type VpcClassicLink struct { _ struct{} `type:"structure"` @@ -57558,28 +66393,47 @@ func (s *VpcClassicLink) SetVpcId(v string) *VpcClassicLink { } // Describes a VPC endpoint. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcEndpoint type VpcEndpoint struct { _ struct{} `type:"structure"` // The date and time the VPC endpoint was created. CreationTimestamp *time.Time `locationName:"creationTimestamp" type:"timestamp" timestampFormat:"iso8601"` - // The policy document associated with the endpoint. + // (Interface endpoint) The DNS entries for the endpoint. + DnsEntries []*DnsEntry `locationName:"dnsEntrySet" locationNameList:"item" type:"list"` + + // (Interface endpoint) Information about the security groups associated with + // the network interface. + Groups []*SecurityGroupIdentifier `locationName:"groupSet" locationNameList:"item" type:"list"` + + // (Interface endpoint) One or more network interfaces for the endpoint. + NetworkInterfaceIds []*string `locationName:"networkInterfaceIdSet" locationNameList:"item" type:"list"` + + // The policy document associated with the endpoint, if applicable. PolicyDocument *string `locationName:"policyDocument" type:"string"` - // One or more route tables associated with the endpoint. + // (Interface endpoint) Indicates whether the VPC is associated with a private + // hosted zone. + PrivateDnsEnabled *bool `locationName:"privateDnsEnabled" type:"boolean"` + + // (Gateway endpoint) One or more route tables associated with the endpoint. RouteTableIds []*string `locationName:"routeTableIdSet" locationNameList:"item" type:"list"` - // The name of the AWS service to which the endpoint is associated. + // The name of the service to which the endpoint is associated. ServiceName *string `locationName:"serviceName" type:"string"` // The state of the VPC endpoint. State *string `locationName:"state" type:"string" enum:"State"` + // (Interface endpoint) One or more subnets in which the endpoint is located. + SubnetIds []*string `locationName:"subnetIdSet" locationNameList:"item" type:"list"` + // The ID of the VPC endpoint. VpcEndpointId *string `locationName:"vpcEndpointId" type:"string"` + // The type of endpoint. + VpcEndpointType *string `locationName:"vpcEndpointType" type:"string" enum:"VpcEndpointType"` + // The ID of the VPC to which the endpoint is associated. VpcId *string `locationName:"vpcId" type:"string"` } @@ -57600,12 +66454,36 @@ func (s *VpcEndpoint) SetCreationTimestamp(v time.Time) *VpcEndpoint { return s } +// SetDnsEntries sets the DnsEntries field's value. +func (s *VpcEndpoint) SetDnsEntries(v []*DnsEntry) *VpcEndpoint { + s.DnsEntries = v + return s +} + +// SetGroups sets the Groups field's value. +func (s *VpcEndpoint) SetGroups(v []*SecurityGroupIdentifier) *VpcEndpoint { + s.Groups = v + return s +} + +// SetNetworkInterfaceIds sets the NetworkInterfaceIds field's value. 
+func (s *VpcEndpoint) SetNetworkInterfaceIds(v []*string) *VpcEndpoint { + s.NetworkInterfaceIds = v + return s +} + // SetPolicyDocument sets the PolicyDocument field's value. func (s *VpcEndpoint) SetPolicyDocument(v string) *VpcEndpoint { s.PolicyDocument = &v return s } +// SetPrivateDnsEnabled sets the PrivateDnsEnabled field's value. +func (s *VpcEndpoint) SetPrivateDnsEnabled(v bool) *VpcEndpoint { + s.PrivateDnsEnabled = &v + return s +} + // SetRouteTableIds sets the RouteTableIds field's value. func (s *VpcEndpoint) SetRouteTableIds(v []*string) *VpcEndpoint { s.RouteTableIds = v @@ -57624,20 +66502,91 @@ func (s *VpcEndpoint) SetState(v string) *VpcEndpoint { return s } +// SetSubnetIds sets the SubnetIds field's value. +func (s *VpcEndpoint) SetSubnetIds(v []*string) *VpcEndpoint { + s.SubnetIds = v + return s +} + // SetVpcEndpointId sets the VpcEndpointId field's value. func (s *VpcEndpoint) SetVpcEndpointId(v string) *VpcEndpoint { s.VpcEndpointId = &v return s } +// SetVpcEndpointType sets the VpcEndpointType field's value. +func (s *VpcEndpoint) SetVpcEndpointType(v string) *VpcEndpoint { + s.VpcEndpointType = &v + return s +} + // SetVpcId sets the VpcId field's value. func (s *VpcEndpoint) SetVpcId(v string) *VpcEndpoint { s.VpcId = &v return s } +// Describes a VPC endpoint connection to a service. +type VpcEndpointConnection struct { + _ struct{} `type:"structure"` + + // The date and time the VPC endpoint was created. + CreationTimestamp *time.Time `locationName:"creationTimestamp" type:"timestamp" timestampFormat:"iso8601"` + + // The ID of the service to which the endpoint is connected. + ServiceId *string `locationName:"serviceId" type:"string"` + + // The ID of the VPC endpoint. + VpcEndpointId *string `locationName:"vpcEndpointId" type:"string"` + + // The AWS account ID of the owner of the VPC endpoint. + VpcEndpointOwner *string `locationName:"vpcEndpointOwner" type:"string"` + + // The state of the VPC endpoint. + VpcEndpointState *string `locationName:"vpcEndpointState" type:"string" enum:"State"` +} + +// String returns the string representation +func (s VpcEndpointConnection) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VpcEndpointConnection) GoString() string { + return s.String() +} + +// SetCreationTimestamp sets the CreationTimestamp field's value. +func (s *VpcEndpointConnection) SetCreationTimestamp(v time.Time) *VpcEndpointConnection { + s.CreationTimestamp = &v + return s +} + +// SetServiceId sets the ServiceId field's value. +func (s *VpcEndpointConnection) SetServiceId(v string) *VpcEndpointConnection { + s.ServiceId = &v + return s +} + +// SetVpcEndpointId sets the VpcEndpointId field's value. +func (s *VpcEndpointConnection) SetVpcEndpointId(v string) *VpcEndpointConnection { + s.VpcEndpointId = &v + return s +} + +// SetVpcEndpointOwner sets the VpcEndpointOwner field's value. +func (s *VpcEndpointConnection) SetVpcEndpointOwner(v string) *VpcEndpointConnection { + s.VpcEndpointOwner = &v + return s +} + +// SetVpcEndpointState sets the VpcEndpointState field's value. +func (s *VpcEndpointConnection) SetVpcEndpointState(v string) *VpcEndpointConnection { + s.VpcEndpointState = &v + return s +} + // Describes an IPv6 CIDR block associated with a VPC. 
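Tying together the Vpc.CidrBlockAssociationSet field and the VpcCidrBlockAssociation type introduced above, a hand-built sketch; the association ID and CIDRs are placeholders, and a real Vpc value would come from a Describe call rather than being constructed by hand.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// printVpcCidrs lists the primary CIDR plus every associated IPv4 CIDR block.
func printVpcCidrs(vpc *ec2.Vpc) {
	fmt.Println("primary:", aws.StringValue(vpc.CidrBlock))
	for _, assoc := range vpc.CidrBlockAssociationSet {
		fmt.Printf("  %s -> %s\n",
			aws.StringValue(assoc.AssociationId),
			aws.StringValue(assoc.CidrBlock))
	}
}

func main() {
	// Placeholder values standing in for a DescribeVpcs result.
	vpc := (&ec2.Vpc{}).
		SetCidrBlock("10.0.0.0/16").
		SetCidrBlockAssociationSet([]*ec2.VpcCidrBlockAssociation{
			(&ec2.VpcCidrBlockAssociation{}).
				SetAssociationId("vpc-cidr-assoc-EXAMPLE").
				SetCidrBlock("10.1.0.0/16"),
		})
	printVpcCidrs(vpc)
}
```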
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcIpv6CidrBlockAssociation type VpcIpv6CidrBlockAssociation struct { _ struct{} `type:"structure"` @@ -57680,7 +66629,6 @@ func (s *VpcIpv6CidrBlockAssociation) SetIpv6CidrBlockState(v *VpcCidrBlockState } // Describes a VPC peering connection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcPeeringConnection type VpcPeeringConnection struct { _ struct{} `type:"structure"` @@ -57752,7 +66700,6 @@ func (s *VpcPeeringConnection) SetVpcPeeringConnectionId(v string) *VpcPeeringCo } // Describes the VPC peering connection options. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcPeeringConnectionOptionsDescription type VpcPeeringConnectionOptionsDescription struct { _ struct{} `type:"structure"` @@ -57798,7 +66745,6 @@ func (s *VpcPeeringConnectionOptionsDescription) SetAllowEgressFromLocalVpcToRem } // Describes the status of a VPC peering connection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcPeeringConnectionStateReason type VpcPeeringConnectionStateReason struct { _ struct{} `type:"structure"` @@ -57832,13 +66778,15 @@ func (s *VpcPeeringConnectionStateReason) SetMessage(v string) *VpcPeeringConnec } // Describes a VPC in a VPC peering connection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpcPeeringConnectionVpcInfo type VpcPeeringConnectionVpcInfo struct { _ struct{} `type:"structure"` // The IPv4 CIDR block for the VPC. CidrBlock *string `locationName:"cidrBlock" type:"string"` + // Information about the IPv4 CIDR blocks for the VPC. + CidrBlockSet []*CidrBlock `locationName:"cidrBlockSet" locationNameList:"item" type:"list"` + // The IPv6 CIDR block for the VPC. Ipv6CidrBlockSet []*Ipv6CidrBlock `locationName:"ipv6CidrBlockSet" locationNameList:"item" type:"list"` @@ -57849,6 +66797,9 @@ type VpcPeeringConnectionVpcInfo struct { // requester VPC. PeeringOptions *VpcPeeringConnectionOptionsDescription `locationName:"peeringOptions" type:"structure"` + // The region in which the VPC is located. + Region *string `locationName:"region" type:"string"` + // The ID of the VPC. VpcId *string `locationName:"vpcId" type:"string"` } @@ -57869,6 +66820,12 @@ func (s *VpcPeeringConnectionVpcInfo) SetCidrBlock(v string) *VpcPeeringConnecti return s } +// SetCidrBlockSet sets the CidrBlockSet field's value. +func (s *VpcPeeringConnectionVpcInfo) SetCidrBlockSet(v []*CidrBlock) *VpcPeeringConnectionVpcInfo { + s.CidrBlockSet = v + return s +} + // SetIpv6CidrBlockSet sets the Ipv6CidrBlockSet field's value. func (s *VpcPeeringConnectionVpcInfo) SetIpv6CidrBlockSet(v []*Ipv6CidrBlock) *VpcPeeringConnectionVpcInfo { s.Ipv6CidrBlockSet = v @@ -57887,6 +66844,12 @@ func (s *VpcPeeringConnectionVpcInfo) SetPeeringOptions(v *VpcPeeringConnectionO return s } +// SetRegion sets the Region field's value. +func (s *VpcPeeringConnectionVpcInfo) SetRegion(v string) *VpcPeeringConnectionVpcInfo { + s.Region = &v + return s +} + // SetVpcId sets the VpcId field's value. func (s *VpcPeeringConnectionVpcInfo) SetVpcId(v string) *VpcPeeringConnectionVpcInfo { s.VpcId = &v @@ -57894,10 +66857,15 @@ func (s *VpcPeeringConnectionVpcInfo) SetVpcId(v string) *VpcPeeringConnectionVp } // Describes a VPN connection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpnConnection type VpnConnection struct { _ struct{} `type:"structure"` + // The category of the VPN connection. 
A value of VPN indicates an AWS VPN connection. + // A value of VPN-Classic indicates an AWS Classic VPN connection. For more + // information, see AWS Managed VPN Categories (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html#vpn-categories) + // in the Amazon Virtual Private Cloud User Guide. + Category *string `locationName:"category" type:"string"` + // The configuration information for the VPN connection's customer gateway (in // the native XML format). This element is always present in the CreateVpnConnection // response; however, it's present in the DescribeVpnConnections response only @@ -57942,6 +66910,12 @@ func (s VpnConnection) GoString() string { return s.String() } +// SetCategory sets the Category field's value. +func (s *VpnConnection) SetCategory(v string) *VpnConnection { + s.Category = &v + return s +} + // SetCustomerGatewayConfiguration sets the CustomerGatewayConfiguration field's value. func (s *VpnConnection) SetCustomerGatewayConfiguration(v string) *VpnConnection { s.CustomerGatewayConfiguration = &v @@ -58003,7 +66977,6 @@ func (s *VpnConnection) SetVpnGatewayId(v string) *VpnConnection { } // Describes VPN connection options. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpnConnectionOptions type VpnConnectionOptions struct { _ struct{} `type:"structure"` @@ -58029,13 +67002,18 @@ func (s *VpnConnectionOptions) SetStaticRoutesOnly(v bool) *VpnConnectionOptions } // Describes VPN connection options. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpnConnectionOptionsSpecification type VpnConnectionOptionsSpecification struct { _ struct{} `type:"structure"` - // Indicates whether the VPN connection uses static routes only. Static routes - // must be used for devices that don't support BGP. + // Indicate whether the VPN connection uses static routes only. If you are creating + // a VPN connection for a device that does not support BGP, you must specify + // true. + // + // Default: false StaticRoutesOnly *bool `locationName:"staticRoutesOnly" type:"boolean"` + + // The tunnel options for the VPN connection. + TunnelOptions []*VpnTunnelOptionsSpecification `locationNameList:"item" type:"list"` } // String returns the string representation @@ -58054,11 +67032,19 @@ func (s *VpnConnectionOptionsSpecification) SetStaticRoutesOnly(v bool) *VpnConn return s } +// SetTunnelOptions sets the TunnelOptions field's value. +func (s *VpnConnectionOptionsSpecification) SetTunnelOptions(v []*VpnTunnelOptionsSpecification) *VpnConnectionOptionsSpecification { + s.TunnelOptions = v + return s +} + // Describes a virtual private gateway. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpnGateway type VpnGateway struct { _ struct{} `type:"structure"` + // The private Autonomous System Number (ASN) for the Amazon side of a BGP session. + AmazonSideAsn *int64 `locationName:"amazonSideAsn" type:"long"` + // The Availability Zone where the virtual private gateway was created, if applicable. // This field may be empty or not returned. AvailabilityZone *string `locationName:"availabilityZone" type:"string"` @@ -58089,6 +67075,12 @@ func (s VpnGateway) GoString() string { return s.String() } +// SetAmazonSideAsn sets the AmazonSideAsn field's value. +func (s *VpnGateway) SetAmazonSideAsn(v int64) *VpnGateway { + s.AmazonSideAsn = &v + return s +} + // SetAvailabilityZone sets the AvailabilityZone field's value. 
func (s *VpnGateway) SetAvailabilityZone(v string) *VpnGateway { s.AvailabilityZone = &v @@ -58126,7 +67118,6 @@ func (s *VpnGateway) SetVpnGatewayId(v string) *VpnGateway { } // Describes a static route for a VPN connection. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/VpnStaticRoute type VpnStaticRoute struct { _ struct{} `type:"structure"` @@ -58168,6 +67159,62 @@ func (s *VpnStaticRoute) SetState(v string) *VpnStaticRoute { return s } +// The tunnel options for a VPN connection. +type VpnTunnelOptionsSpecification struct { + _ struct{} `type:"structure"` + + // The pre-shared key (PSK) to establish initial authentication between the + // virtual private gateway and customer gateway. + // + // Constraints: Allowed characters are alphanumeric characters and ._. Must + // be between 8 and 64 characters in length and cannot start with zero (0). + PreSharedKey *string `type:"string"` + + // The range of inside IP addresses for the tunnel. Any specified CIDR blocks + // must be unique across all VPN connections that use the same virtual private + // gateway. + // + // Constraints: A size /30 CIDR block from the 169.254.0.0/16 range. The following + // CIDR blocks are reserved and cannot be used: + // + // * 169.254.0.0/30 + // + // * 169.254.1.0/30 + // + // * 169.254.2.0/30 + // + // * 169.254.3.0/30 + // + // * 169.254.4.0/30 + // + // * 169.254.5.0/30 + // + // * 169.254.169.252/30 + TunnelInsideCidr *string `type:"string"` +} + +// String returns the string representation +func (s VpnTunnelOptionsSpecification) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s VpnTunnelOptionsSpecification) GoString() string { + return s.String() +} + +// SetPreSharedKey sets the PreSharedKey field's value. +func (s *VpnTunnelOptionsSpecification) SetPreSharedKey(v string) *VpnTunnelOptionsSpecification { + s.PreSharedKey = &v + return s +} + +// SetTunnelInsideCidr sets the TunnelInsideCidr field's value. 
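A sketch of pairing VpnConnectionOptionsSpecification with the new VpnTunnelOptionsSpecification above, using placeholder values that respect the documented constraints (a /30 inside 169.254.0.0/16 outside the reserved blocks, and an 8-64 character pre-shared key that does not start with zero); where these options attach to a create-VPN-connection request is assumed and not shown in this excerpt.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Placeholder per-tunnel values that satisfy the documented constraints.
	tunnel := (&ec2.VpnTunnelOptionsSpecification{}).
		SetTunnelInsideCidr("169.254.44.28/30").
		SetPreSharedKey("examplepresharedkey1")

	opts := (&ec2.VpnConnectionOptionsSpecification{}).
		SetStaticRoutesOnly(true).
		SetTunnelOptions([]*ec2.VpnTunnelOptionsSpecification{tunnel})

	fmt.Println(opts)
}
```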
+func (s *VpnTunnelOptionsSpecification) SetTunnelInsideCidr(v string) *VpnTunnelOptionsSpecification { + s.TunnelInsideCidr = &v + return s +} + const ( // AccountAttributeNameSupportedPlatforms is a AccountAttributeName enum value AccountAttributeNameSupportedPlatforms = "supported-platforms" @@ -58344,6 +67391,19 @@ const ( CancelSpotInstanceRequestStateCompleted = "completed" ) +const ( + // ConnectionNotificationStateEnabled is a ConnectionNotificationState enum value + ConnectionNotificationStateEnabled = "Enabled" + + // ConnectionNotificationStateDisabled is a ConnectionNotificationState enum value + ConnectionNotificationStateDisabled = "Disabled" +) + +const ( + // ConnectionNotificationTypeTopic is a ConnectionNotificationType enum value + ConnectionNotificationTypeTopic = "Topic" +) + const ( // ContainerFormatOva is a ContainerFormat enum value ContainerFormatOva = "ova" @@ -58496,6 +67556,20 @@ const ( FlowLogsResourceTypeNetworkInterface = "NetworkInterface" ) +const ( + // FpgaImageAttributeNameDescription is a FpgaImageAttributeName enum value + FpgaImageAttributeNameDescription = "description" + + // FpgaImageAttributeNameName is a FpgaImageAttributeName enum value + FpgaImageAttributeNameName = "name" + + // FpgaImageAttributeNameLoadPermission is a FpgaImageAttributeName enum value + FpgaImageAttributeNameLoadPermission = "loadPermission" + + // FpgaImageAttributeNameProductCodes is a FpgaImageAttributeName enum value + FpgaImageAttributeNameProductCodes = "productCodes" +) + const ( // FpgaImageStateCodePending is a FpgaImageStateCode enum value FpgaImageStateCodePending = "pending" @@ -58654,6 +67728,17 @@ const ( InstanceHealthStatusUnhealthy = "unhealthy" ) +const ( + // InstanceInterruptionBehaviorHibernate is a InstanceInterruptionBehavior enum value + InstanceInterruptionBehaviorHibernate = "hibernate" + + // InstanceInterruptionBehaviorStop is a InstanceInterruptionBehavior enum value + InstanceInterruptionBehaviorStop = "stop" + + // InstanceInterruptionBehaviorTerminate is a InstanceInterruptionBehavior enum value + InstanceInterruptionBehaviorTerminate = "terminate" +) + const ( // InstanceLifecycleTypeSpot is a InstanceLifecycleType enum value InstanceLifecycleTypeSpot = "spot" @@ -58800,6 +67885,24 @@ const ( // InstanceTypeX132xlarge is a InstanceType enum value InstanceTypeX132xlarge = "x1.32xlarge" + // InstanceTypeX1eXlarge is a InstanceType enum value + InstanceTypeX1eXlarge = "x1e.xlarge" + + // InstanceTypeX1e2xlarge is a InstanceType enum value + InstanceTypeX1e2xlarge = "x1e.2xlarge" + + // InstanceTypeX1e4xlarge is a InstanceType enum value + InstanceTypeX1e4xlarge = "x1e.4xlarge" + + // InstanceTypeX1e8xlarge is a InstanceType enum value + InstanceTypeX1e8xlarge = "x1e.8xlarge" + + // InstanceTypeX1e16xlarge is a InstanceType enum value + InstanceTypeX1e16xlarge = "x1e.16xlarge" + + // InstanceTypeX1e32xlarge is a InstanceType enum value + InstanceTypeX1e32xlarge = "x1e.32xlarge" + // InstanceTypeI2Xlarge is a InstanceType enum value InstanceTypeI2Xlarge = "i2.xlarge" @@ -58872,6 +67975,24 @@ const ( // InstanceTypeC48xlarge is a InstanceType enum value InstanceTypeC48xlarge = "c4.8xlarge" + // InstanceTypeC5Large is a InstanceType enum value + InstanceTypeC5Large = "c5.large" + + // InstanceTypeC5Xlarge is a InstanceType enum value + InstanceTypeC5Xlarge = "c5.xlarge" + + // InstanceTypeC52xlarge is a InstanceType enum value + InstanceTypeC52xlarge = "c5.2xlarge" + + // InstanceTypeC54xlarge is a InstanceType enum value + InstanceTypeC54xlarge = 
"c5.4xlarge" + + // InstanceTypeC59xlarge is a InstanceType enum value + InstanceTypeC59xlarge = "c5.9xlarge" + + // InstanceTypeC518xlarge is a InstanceType enum value + InstanceTypeC518xlarge = "c5.18xlarge" + // InstanceTypeCc14xlarge is a InstanceType enum value InstanceTypeCc14xlarge = "cc1.4xlarge" @@ -58905,6 +68026,15 @@ const ( // InstanceTypeP216xlarge is a InstanceType enum value InstanceTypeP216xlarge = "p2.16xlarge" + // InstanceTypeP32xlarge is a InstanceType enum value + InstanceTypeP32xlarge = "p3.2xlarge" + + // InstanceTypeP38xlarge is a InstanceType enum value + InstanceTypeP38xlarge = "p3.8xlarge" + + // InstanceTypeP316xlarge is a InstanceType enum value + InstanceTypeP316xlarge = "p3.16xlarge" + // InstanceTypeD2Xlarge is a InstanceType enum value InstanceTypeD2Xlarge = "d2.xlarge" @@ -58922,6 +68052,36 @@ const ( // InstanceTypeF116xlarge is a InstanceType enum value InstanceTypeF116xlarge = "f1.16xlarge" + + // InstanceTypeM5Large is a InstanceType enum value + InstanceTypeM5Large = "m5.large" + + // InstanceTypeM5Xlarge is a InstanceType enum value + InstanceTypeM5Xlarge = "m5.xlarge" + + // InstanceTypeM52xlarge is a InstanceType enum value + InstanceTypeM52xlarge = "m5.2xlarge" + + // InstanceTypeM54xlarge is a InstanceType enum value + InstanceTypeM54xlarge = "m5.4xlarge" + + // InstanceTypeM512xlarge is a InstanceType enum value + InstanceTypeM512xlarge = "m5.12xlarge" + + // InstanceTypeM524xlarge is a InstanceType enum value + InstanceTypeM524xlarge = "m5.24xlarge" + + // InstanceTypeH12xlarge is a InstanceType enum value + InstanceTypeH12xlarge = "h1.2xlarge" + + // InstanceTypeH14xlarge is a InstanceType enum value + InstanceTypeH14xlarge = "h1.4xlarge" + + // InstanceTypeH18xlarge is a InstanceType enum value + InstanceTypeH18xlarge = "h1.8xlarge" + + // InstanceTypeH116xlarge is a InstanceType enum value + InstanceTypeH116xlarge = "h1.16xlarge" ) const ( @@ -58932,6 +68092,26 @@ const ( InterfacePermissionTypeEipAssociate = "EIP-ASSOCIATE" ) +const ( + // LaunchTemplateErrorCodeLaunchTemplateIdDoesNotExist is a LaunchTemplateErrorCode enum value + LaunchTemplateErrorCodeLaunchTemplateIdDoesNotExist = "launchTemplateIdDoesNotExist" + + // LaunchTemplateErrorCodeLaunchTemplateIdMalformed is a LaunchTemplateErrorCode enum value + LaunchTemplateErrorCodeLaunchTemplateIdMalformed = "launchTemplateIdMalformed" + + // LaunchTemplateErrorCodeLaunchTemplateNameDoesNotExist is a LaunchTemplateErrorCode enum value + LaunchTemplateErrorCodeLaunchTemplateNameDoesNotExist = "launchTemplateNameDoesNotExist" + + // LaunchTemplateErrorCodeLaunchTemplateNameMalformed is a LaunchTemplateErrorCode enum value + LaunchTemplateErrorCodeLaunchTemplateNameMalformed = "launchTemplateNameMalformed" + + // LaunchTemplateErrorCodeLaunchTemplateVersionDoesNotExist is a LaunchTemplateErrorCode enum value + LaunchTemplateErrorCodeLaunchTemplateVersionDoesNotExist = "launchTemplateVersionDoesNotExist" + + // LaunchTemplateErrorCodeUnexpectedError is a LaunchTemplateErrorCode enum value + LaunchTemplateErrorCodeUnexpectedError = "unexpectedError" +) + const ( // ListingStateAvailable is a ListingState enum value ListingStateAvailable = "available" @@ -58960,6 +68140,11 @@ const ( ListingStatusClosed = "closed" ) +const ( + // MarketTypeSpot is a MarketType enum value + MarketTypeSpot = "spot" +) + const ( // MonitoringStateDisabled is a MonitoringState enum value MonitoringStateDisabled = "disabled" @@ -59118,6 +68303,9 @@ const ( const ( // PlacementStrategyCluster is a PlacementStrategy 
enum value PlacementStrategyCluster = "cluster" + + // PlacementStrategySpread is a PlacementStrategy enum value + PlacementStrategySpread = "spread" ) const ( @@ -59125,6 +68313,26 @@ const ( PlatformValuesWindows = "Windows" ) +const ( + // PrincipalTypeAll is a PrincipalType enum value + PrincipalTypeAll = "All" + + // PrincipalTypeService is a PrincipalType enum value + PrincipalTypeService = "Service" + + // PrincipalTypeOrganizationUnit is a PrincipalType enum value + PrincipalTypeOrganizationUnit = "OrganizationUnit" + + // PrincipalTypeAccount is a PrincipalType enum value + PrincipalTypeAccount = "Account" + + // PrincipalTypeUser is a PrincipalType enum value + PrincipalTypeUser = "User" + + // PrincipalTypeRole is a PrincipalType enum value + PrincipalTypeRole = "Role" +) + const ( // ProductCodeValuesDevpay is a ProductCodeValues enum value ProductCodeValuesDevpay = "devpay" @@ -59217,6 +68425,11 @@ const ( ReservedInstanceStateRetired = "retired" ) +const ( + // ResetFpgaImageAttributeNameLoadPermission is a ResetFpgaImageAttributeName enum value + ResetFpgaImageAttributeNameLoadPermission = "loadPermission" +) + const ( // ResetImageAttributeNameLaunchPermission is a ResetImageAttributeName enum value ResetImageAttributeNameLaunchPermission = "launchPermission" @@ -59302,6 +68515,31 @@ const ( RuleActionDeny = "deny" ) +const ( + // ServiceStatePending is a ServiceState enum value + ServiceStatePending = "Pending" + + // ServiceStateAvailable is a ServiceState enum value + ServiceStateAvailable = "Available" + + // ServiceStateDeleting is a ServiceState enum value + ServiceStateDeleting = "Deleting" + + // ServiceStateDeleted is a ServiceState enum value + ServiceStateDeleted = "Deleted" + + // ServiceStateFailed is a ServiceState enum value + ServiceStateFailed = "Failed" +) + +const ( + // ServiceTypeInterface is a ServiceType enum value + ServiceTypeInterface = "Interface" + + // ServiceTypeGateway is a ServiceType enum value + ServiceTypeGateway = "Gateway" +) + const ( // ShutdownBehaviorStop is a ShutdownBehavior enum value ShutdownBehaviorStop = "stop" @@ -59355,6 +68593,9 @@ const ( ) const ( + // StatePendingAcceptance is a State enum value + StatePendingAcceptance = "PendingAcceptance" + // StatePending is a State enum value StatePending = "Pending" @@ -59366,6 +68607,15 @@ const ( // StateDeleted is a State enum value StateDeleted = "Deleted" + + // StateRejected is a State enum value + StateRejected = "Rejected" + + // StateFailed is a State enum value + StateFailed = "Failed" + + // StateExpired is a State enum value + StateExpired = "Expired" ) const ( @@ -59473,6 +68723,20 @@ const ( TrafficTypeAll = "ALL" ) +const ( + // UnsuccessfulInstanceCreditSpecificationErrorCodeInvalidInstanceIdMalformed is a UnsuccessfulInstanceCreditSpecificationErrorCode enum value + UnsuccessfulInstanceCreditSpecificationErrorCodeInvalidInstanceIdMalformed = "InvalidInstanceID.Malformed" + + // UnsuccessfulInstanceCreditSpecificationErrorCodeInvalidInstanceIdNotFound is a UnsuccessfulInstanceCreditSpecificationErrorCode enum value + UnsuccessfulInstanceCreditSpecificationErrorCodeInvalidInstanceIdNotFound = "InvalidInstanceID.NotFound" + + // UnsuccessfulInstanceCreditSpecificationErrorCodeIncorrectInstanceState is a UnsuccessfulInstanceCreditSpecificationErrorCode enum value + UnsuccessfulInstanceCreditSpecificationErrorCodeIncorrectInstanceState = "IncorrectInstanceState" + + // UnsuccessfulInstanceCreditSpecificationErrorCodeInstanceCreditSpecificationNotSupported is a 
UnsuccessfulInstanceCreditSpecificationErrorCode enum value + UnsuccessfulInstanceCreditSpecificationErrorCodeInstanceCreditSpecificationNotSupported = "InstanceCreditSpecification.NotSupported" +) + const ( // VirtualizationTypeHvm is a VirtualizationType enum value VirtualizationTypeHvm = "hvm" @@ -59493,6 +68757,9 @@ const ( // VolumeAttachmentStateDetached is a VolumeAttachmentState enum value VolumeAttachmentStateDetached = "detached" + + // VolumeAttachmentStateBusy is a VolumeAttachmentState enum value + VolumeAttachmentStateBusy = "busy" ) const ( @@ -59601,6 +68868,14 @@ const ( VpcCidrBlockStateCodeFailed = "failed" ) +const ( + // VpcEndpointTypeInterface is a VpcEndpointType enum value + VpcEndpointTypeInterface = "Interface" + + // VpcEndpointTypeGateway is a VpcEndpointType enum value + VpcEndpointTypeGateway = "Gateway" +) + const ( // VpcPeeringConnectionStateReasonCodeInitiatingRequest is a VpcPeeringConnectionStateReasonCode enum value VpcPeeringConnectionStateReasonCodeInitiatingRequest = "initiating-request" @@ -59638,6 +68913,11 @@ const ( VpcStateAvailable = "available" ) +const ( + // VpcTenancyDefault is a VpcTenancy enum value + VpcTenancyDefault = "default" +) + const ( // VpnStatePending is a VpnState enum value VpnStatePending = "pending" diff --git a/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go b/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go index 547167782..432a54df4 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ec2/doc.go @@ -4,9 +4,8 @@ // requests to Amazon Elastic Compute Cloud. // // Amazon Elastic Compute Cloud (Amazon EC2) provides resizable computing capacity -// in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your -// need to invest in hardware up front, so you can develop and deploy applications -// faster. +// in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware +// up front, so you can develop and deploy applications faster. // // See https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15 for more information on this service. // @@ -15,7 +14,7 @@ // // Using the Client // -// To Amazon Elastic Compute Cloud with the SDK use the New function to create +// To contact Amazon Elastic Compute Cloud with the SDK use the New function to create // a new service client. With that client you can make API requests to the service. // These clients are safe to use concurrently. 
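As a quick illustration of the doc.go guidance above (create a client with `New`, then make API calls; clients are safe for concurrent use), a minimal sketch might look like the following. The region and the `DescribeRegions` call are illustrative assumptions and are not part of the vendored change.

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Create a session; session.Must panics if configuration fails.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // assumed region for this sketch
	}))

	// ec2.New returns a service client that is safe to use concurrently.
	svc := ec2.New(sess)

	// A simple read-only call to confirm the client is usable.
	out, err := svc.DescribeRegions(&ec2.DescribeRegionsInput{})
	if err != nil {
		fmt.Println("DescribeRegions failed:", err)
		return
	}
	fmt.Println("regions:", len(out.Regions))
}
```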
// diff --git a/vendor/github.com/aws/aws-sdk-go/service/ec2/waiters.go b/vendor/github.com/aws/aws-sdk-go/service/ec2/waiters.go index 6914a666b..0469f0f01 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ec2/waiters.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ec2/waiters.go @@ -1025,6 +1025,11 @@ func (c *EC2) WaitUntilSpotInstanceRequestFulfilledWithContext(ctx aws.Context, Matcher: request.PathAllWaiterMatch, Argument: "SpotInstanceRequests[].Status.Code", Expected: "fulfilled", }, + { + State: request.SuccessWaiterState, + Matcher: request.PathAllWaiterMatch, Argument: "SpotInstanceRequests[].Status.Code", + Expected: "request-canceled-and-instance-running", + }, { State: request.FailureWaiterState, Matcher: request.PathAnyWaiterMatch, Argument: "SpotInstanceRequests[].Status.Code", diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go b/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go index f9136852f..bf9630e4c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecr/api.go @@ -35,7 +35,7 @@ const opBatchCheckLayerAvailability = "BatchCheckLayerAvailability" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchCheckLayerAvailability +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchCheckLayerAvailability func (c *ECR) BatchCheckLayerAvailabilityRequest(input *BatchCheckLayerAvailabilityInput) (req *request.Request, output *BatchCheckLayerAvailabilityOutput) { op := &request.Operation{ Name: opBatchCheckLayerAvailability, @@ -80,7 +80,7 @@ func (c *ECR) BatchCheckLayerAvailabilityRequest(input *BatchCheckLayerAvailabil // * ErrCodeServerException "ServerException" // These errors are usually caused by a server-side issue. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchCheckLayerAvailability +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchCheckLayerAvailability func (c *ECR) BatchCheckLayerAvailability(input *BatchCheckLayerAvailabilityInput) (*BatchCheckLayerAvailabilityOutput, error) { req, out := c.BatchCheckLayerAvailabilityRequest(input) return out, req.Send() @@ -127,7 +127,7 @@ const opBatchDeleteImage = "BatchDeleteImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchDeleteImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchDeleteImage func (c *ECR) BatchDeleteImageRequest(input *BatchDeleteImageInput) (req *request.Request, output *BatchDeleteImageOutput) { op := &request.Operation{ Name: opBatchDeleteImage, @@ -175,7 +175,7 @@ func (c *ECR) BatchDeleteImageRequest(input *BatchDeleteImageInput) (req *reques // The specified repository could not be found. Check the spelling of the specified // repository and ensure that you are performing operations on the correct registry. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchDeleteImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchDeleteImage func (c *ECR) BatchDeleteImage(input *BatchDeleteImageInput) (*BatchDeleteImageOutput, error) { req, out := c.BatchDeleteImageRequest(input) return out, req.Send() @@ -222,7 +222,7 @@ const opBatchGetImage = "BatchGetImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchGetImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchGetImage func (c *ECR) BatchGetImageRequest(input *BatchGetImageInput) (req *request.Request, output *BatchGetImageOutput) { op := &request.Operation{ Name: opBatchGetImage, @@ -263,7 +263,7 @@ func (c *ECR) BatchGetImageRequest(input *BatchGetImageInput) (req *request.Requ // The specified repository could not be found. Check the spelling of the specified // repository and ensure that you are performing operations on the correct registry. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchGetImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchGetImage func (c *ECR) BatchGetImage(input *BatchGetImageInput) (*BatchGetImageOutput, error) { req, out := c.BatchGetImageRequest(input) return out, req.Send() @@ -310,7 +310,7 @@ const opCompleteLayerUpload = "CompleteLayerUpload" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CompleteLayerUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CompleteLayerUpload func (c *ECR) CompleteLayerUploadRequest(input *CompleteLayerUploadInput) (req *request.Request, output *CompleteLayerUploadOutput) { op := &request.Operation{ Name: opCompleteLayerUpload, @@ -329,9 +329,9 @@ func (c *ECR) CompleteLayerUploadRequest(input *CompleteLayerUploadInput) (req * // CompleteLayerUpload API operation for Amazon EC2 Container Registry. // -// Inform Amazon ECR that the image layer upload for a specified registry, repository -// name, and upload ID, has completed. You can optionally provide a sha256 digest -// of the image layer for data validation purposes. +// Informs Amazon ECR that the image layer upload has completed for a specified +// registry, repository name, and upload ID. You can optionally provide a sha256 +// digest of the image layer for data validation purposes. // // This operation is used by the Amazon ECR proxy, and it is not intended for // general use by customers for pulling and pushing images. In most cases, you @@ -373,7 +373,7 @@ func (c *ECR) CompleteLayerUploadRequest(input *CompleteLayerUploadInput) (req * // * ErrCodeEmptyUploadException "EmptyUploadException" // The specified layer upload does not contain any layer parts. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CompleteLayerUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CompleteLayerUpload func (c *ECR) CompleteLayerUpload(input *CompleteLayerUploadInput) (*CompleteLayerUploadOutput, error) { req, out := c.CompleteLayerUploadRequest(input) return out, req.Send() @@ -420,7 +420,7 @@ const opCreateRepository = "CreateRepository" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CreateRepository +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CreateRepository func (c *ECR) CreateRepositoryRequest(input *CreateRepositoryInput) (req *request.Request, output *CreateRepositoryOutput) { op := &request.Operation{ Name: opCreateRepository, @@ -465,7 +465,7 @@ func (c *ECR) CreateRepositoryRequest(input *CreateRepositoryInput) (req *reques // (http://docs.aws.amazon.com/AmazonECR/latest/userguide/service_limits.html) // in the Amazon EC2 Container Registry User Guide. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CreateRepository +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CreateRepository func (c *ECR) CreateRepository(input *CreateRepositoryInput) (*CreateRepositoryOutput, error) { req, out := c.CreateRepositoryRequest(input) return out, req.Send() @@ -487,6 +487,96 @@ func (c *ECR) CreateRepositoryWithContext(ctx aws.Context, input *CreateReposito return out, req.Send() } +const opDeleteLifecyclePolicy = "DeleteLifecyclePolicy" + +// DeleteLifecyclePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLifecyclePolicy operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLifecyclePolicy for more information on using the DeleteLifecyclePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLifecyclePolicyRequest method. +// req, resp := client.DeleteLifecyclePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteLifecyclePolicy +func (c *ECR) DeleteLifecyclePolicyRequest(input *DeleteLifecyclePolicyInput) (req *request.Request, output *DeleteLifecyclePolicyOutput) { + op := &request.Operation{ + Name: opDeleteLifecyclePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLifecyclePolicyInput{} + } + + output = &DeleteLifecyclePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteLifecyclePolicy API operation for Amazon EC2 Container Registry. +// +// Deletes the specified lifecycle policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Registry's +// API operation DeleteLifecyclePolicy for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// * ErrCodeRepositoryNotFoundException "RepositoryNotFoundException" +// The specified repository could not be found. Check the spelling of the specified +// repository and ensure that you are performing operations on the correct registry. +// +// * ErrCodeLifecyclePolicyNotFoundException "LifecyclePolicyNotFoundException" +// The lifecycle policy could not be found, and no policy is set to the repository. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteLifecyclePolicy +func (c *ECR) DeleteLifecyclePolicy(input *DeleteLifecyclePolicyInput) (*DeleteLifecyclePolicyOutput, error) { + req, out := c.DeleteLifecyclePolicyRequest(input) + return out, req.Send() +} + +// DeleteLifecyclePolicyWithContext is the same as DeleteLifecyclePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLifecyclePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECR) DeleteLifecyclePolicyWithContext(ctx aws.Context, input *DeleteLifecyclePolicyInput, opts ...request.Option) (*DeleteLifecyclePolicyOutput, error) { + req, out := c.DeleteLifecyclePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteRepository = "DeleteRepository" // DeleteRepositoryRequest generates a "aws/request.Request" representing the @@ -512,7 +602,7 @@ const opDeleteRepository = "DeleteRepository" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepository +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepository func (c *ECR) DeleteRepositoryRequest(input *DeleteRepositoryInput) (req *request.Request, output *DeleteRepositoryOutput) { op := &request.Operation{ Name: opDeleteRepository, @@ -557,7 +647,7 @@ func (c *ECR) DeleteRepositoryRequest(input *DeleteRepositoryInput) (req *reques // The specified repository contains images. To delete a repository that contains // images, you must force the deletion with the force parameter. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepository +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepository func (c *ECR) DeleteRepository(input *DeleteRepositoryInput) (*DeleteRepositoryOutput, error) { req, out := c.DeleteRepositoryRequest(input) return out, req.Send() @@ -604,7 +694,7 @@ const opDeleteRepositoryPolicy = "DeleteRepositoryPolicy" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryPolicy func (c *ECR) DeleteRepositoryPolicyRequest(input *DeleteRepositoryPolicyInput) (req *request.Request, output *DeleteRepositoryPolicyOutput) { op := &request.Operation{ Name: opDeleteRepositoryPolicy, @@ -648,7 +738,7 @@ func (c *ECR) DeleteRepositoryPolicyRequest(input *DeleteRepositoryPolicyInput) // The specified repository and registry combination does not have an associated // repository policy. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryPolicy func (c *ECR) DeleteRepositoryPolicy(input *DeleteRepositoryPolicyInput) (*DeleteRepositoryPolicyOutput, error) { req, out := c.DeleteRepositoryPolicyRequest(input) return out, req.Send() @@ -695,7 +785,7 @@ const opDescribeImages = "DescribeImages" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeImages func (c *ECR) DescribeImagesRequest(input *DescribeImagesInput) (req *request.Request, output *DescribeImagesOutput) { op := &request.Operation{ Name: opDescribeImages, @@ -750,7 +840,7 @@ func (c *ECR) DescribeImagesRequest(input *DescribeImagesInput) (req *request.Re // * ErrCodeImageNotFoundException "ImageNotFoundException" // The image requested does not exist in the specified repository. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeImages func (c *ECR) DescribeImages(input *DescribeImagesInput) (*DescribeImagesOutput, error) { req, out := c.DescribeImagesRequest(input) return out, req.Send() @@ -847,7 +937,7 @@ const opDescribeRepositories = "DescribeRepositories" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeRepositories +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeRepositories func (c *ECR) DescribeRepositoriesRequest(input *DescribeRepositoriesInput) (req *request.Request, output *DescribeRepositoriesOutput) { op := &request.Operation{ Name: opDescribeRepositories, @@ -893,7 +983,7 @@ func (c *ECR) DescribeRepositoriesRequest(input *DescribeRepositoriesInput) (req // The specified repository could not be found. Check the spelling of the specified // repository and ensure that you are performing operations on the correct registry. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeRepositories +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeRepositories func (c *ECR) DescribeRepositories(input *DescribeRepositoriesInput) (*DescribeRepositoriesOutput, error) { req, out := c.DescribeRepositoriesRequest(input) return out, req.Send() @@ -990,7 +1080,7 @@ const opGetAuthorizationToken = "GetAuthorizationToken" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetAuthorizationToken +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetAuthorizationToken func (c *ECR) GetAuthorizationTokenRequest(input *GetAuthorizationTokenInput) (req *request.Request, output *GetAuthorizationTokenOutput) { op := &request.Operation{ Name: opGetAuthorizationToken, @@ -1033,7 +1123,7 @@ func (c *ECR) GetAuthorizationTokenRequest(input *GetAuthorizationTokenInput) (r // The specified parameter is invalid. Review the available parameters for the // API request. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetAuthorizationToken +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetAuthorizationToken func (c *ECR) GetAuthorizationToken(input *GetAuthorizationTokenInput) (*GetAuthorizationTokenOutput, error) { req, out := c.GetAuthorizationTokenRequest(input) return out, req.Send() @@ -1080,7 +1170,7 @@ const opGetDownloadUrlForLayer = "GetDownloadUrlForLayer" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetDownloadUrlForLayer +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetDownloadUrlForLayer func (c *ECR) GetDownloadUrlForLayerRequest(input *GetDownloadUrlForLayerInput) (req *request.Request, output *GetDownloadUrlForLayerOutput) { op := &request.Operation{ Name: opGetDownloadUrlForLayer, @@ -1133,7 +1223,7 @@ func (c *ECR) GetDownloadUrlForLayerRequest(input *GetDownloadUrlForLayerInput) // The specified repository could not be found. Check the spelling of the specified // repository and ensure that you are performing operations on the correct registry. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetDownloadUrlForLayer +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetDownloadUrlForLayer func (c *ECR) GetDownloadUrlForLayer(input *GetDownloadUrlForLayerInput) (*GetDownloadUrlForLayerOutput, error) { req, out := c.GetDownloadUrlForLayerRequest(input) return out, req.Send() @@ -1155,6 +1245,186 @@ func (c *ECR) GetDownloadUrlForLayerWithContext(ctx aws.Context, input *GetDownl return out, req.Send() } +const opGetLifecyclePolicy = "GetLifecyclePolicy" + +// GetLifecyclePolicyRequest generates a "aws/request.Request" representing the +// client's request for the GetLifecyclePolicy operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLifecyclePolicy for more information on using the GetLifecyclePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the GetLifecyclePolicyRequest method. +// req, resp := client.GetLifecyclePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetLifecyclePolicy +func (c *ECR) GetLifecyclePolicyRequest(input *GetLifecyclePolicyInput) (req *request.Request, output *GetLifecyclePolicyOutput) { + op := &request.Operation{ + Name: opGetLifecyclePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetLifecyclePolicyInput{} + } + + output = &GetLifecyclePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLifecyclePolicy API operation for Amazon EC2 Container Registry. +// +// Retrieves the specified lifecycle policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Registry's +// API operation GetLifecyclePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// * ErrCodeRepositoryNotFoundException "RepositoryNotFoundException" +// The specified repository could not be found. Check the spelling of the specified +// repository and ensure that you are performing operations on the correct registry. +// +// * ErrCodeLifecyclePolicyNotFoundException "LifecyclePolicyNotFoundException" +// The lifecycle policy could not be found, and no policy is set to the repository. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetLifecyclePolicy +func (c *ECR) GetLifecyclePolicy(input *GetLifecyclePolicyInput) (*GetLifecyclePolicyOutput, error) { + req, out := c.GetLifecyclePolicyRequest(input) + return out, req.Send() +} + +// GetLifecyclePolicyWithContext is the same as GetLifecyclePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See GetLifecyclePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECR) GetLifecyclePolicyWithContext(ctx aws.Context, input *GetLifecyclePolicyInput, opts ...request.Option) (*GetLifecyclePolicyOutput, error) { + req, out := c.GetLifecyclePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opGetLifecyclePolicyPreview = "GetLifecyclePolicyPreview" + +// GetLifecyclePolicyPreviewRequest generates a "aws/request.Request" representing the +// client's request for the GetLifecyclePolicyPreview operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+// +// See GetLifecyclePolicyPreview for more information on using the GetLifecyclePolicyPreview +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLifecyclePolicyPreviewRequest method. +// req, resp := client.GetLifecyclePolicyPreviewRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetLifecyclePolicyPreview +func (c *ECR) GetLifecyclePolicyPreviewRequest(input *GetLifecyclePolicyPreviewInput) (req *request.Request, output *GetLifecyclePolicyPreviewOutput) { + op := &request.Operation{ + Name: opGetLifecyclePolicyPreview, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetLifecyclePolicyPreviewInput{} + } + + output = &GetLifecyclePolicyPreviewOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLifecyclePolicyPreview API operation for Amazon EC2 Container Registry. +// +// Retrieves the results of the specified lifecycle policy preview request. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Registry's +// API operation GetLifecyclePolicyPreview for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// * ErrCodeRepositoryNotFoundException "RepositoryNotFoundException" +// The specified repository could not be found. Check the spelling of the specified +// repository and ensure that you are performing operations on the correct registry. +// +// * ErrCodeLifecyclePolicyPreviewNotFoundException "LifecyclePolicyPreviewNotFoundException" +// There is no dry run for this repository. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetLifecyclePolicyPreview +func (c *ECR) GetLifecyclePolicyPreview(input *GetLifecyclePolicyPreviewInput) (*GetLifecyclePolicyPreviewOutput, error) { + req, out := c.GetLifecyclePolicyPreviewRequest(input) + return out, req.Send() +} + +// GetLifecyclePolicyPreviewWithContext is the same as GetLifecyclePolicyPreview with the addition of +// the ability to pass a context and additional request options. +// +// See GetLifecyclePolicyPreview for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECR) GetLifecyclePolicyPreviewWithContext(ctx aws.Context, input *GetLifecyclePolicyPreviewInput, opts ...request.Option) (*GetLifecyclePolicyPreviewOutput, error) { + req, out := c.GetLifecyclePolicyPreviewRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + const opGetRepositoryPolicy = "GetRepositoryPolicy" // GetRepositoryPolicyRequest generates a "aws/request.Request" representing the @@ -1180,7 +1450,7 @@ const opGetRepositoryPolicy = "GetRepositoryPolicy" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetRepositoryPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetRepositoryPolicy func (c *ECR) GetRepositoryPolicyRequest(input *GetRepositoryPolicyInput) (req *request.Request, output *GetRepositoryPolicyOutput) { op := &request.Operation{ Name: opGetRepositoryPolicy, @@ -1224,7 +1494,7 @@ func (c *ECR) GetRepositoryPolicyRequest(input *GetRepositoryPolicyInput) (req * // The specified repository and registry combination does not have an associated // repository policy. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetRepositoryPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetRepositoryPolicy func (c *ECR) GetRepositoryPolicy(input *GetRepositoryPolicyInput) (*GetRepositoryPolicyOutput, error) { req, out := c.GetRepositoryPolicyRequest(input) return out, req.Send() @@ -1271,7 +1541,7 @@ const opInitiateLayerUpload = "InitiateLayerUpload" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/InitiateLayerUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/InitiateLayerUpload func (c *ECR) InitiateLayerUploadRequest(input *InitiateLayerUploadInput) (req *request.Request, output *InitiateLayerUploadOutput) { op := &request.Operation{ Name: opInitiateLayerUpload, @@ -1315,7 +1585,7 @@ func (c *ECR) InitiateLayerUploadRequest(input *InitiateLayerUploadInput) (req * // The specified repository could not be found. Check the spelling of the specified // repository and ensure that you are performing operations on the correct registry. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/InitiateLayerUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/InitiateLayerUpload func (c *ECR) InitiateLayerUpload(input *InitiateLayerUploadInput) (*InitiateLayerUploadOutput, error) { req, out := c.InitiateLayerUploadRequest(input) return out, req.Send() @@ -1362,7 +1632,7 @@ const opListImages = "ListImages" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ListImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ListImages func (c *ECR) ListImagesRequest(input *ListImagesInput) (req *request.Request, output *ListImagesOutput) { op := &request.Operation{ Name: opListImages, @@ -1414,7 +1684,7 @@ func (c *ECR) ListImagesRequest(input *ListImagesInput) (req *request.Request, o // The specified repository could not be found. Check the spelling of the specified // repository and ensure that you are performing operations on the correct registry. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ListImages +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ListImages func (c *ECR) ListImages(input *ListImagesInput) (*ListImagesOutput, error) { req, out := c.ListImagesRequest(input) return out, req.Send() @@ -1511,7 +1781,7 @@ const opPutImage = "PutImage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutImage func (c *ECR) PutImageRequest(input *PutImageInput) (req *request.Request, output *PutImageOutput) { op := &request.Operation{ Name: opPutImage, @@ -1556,8 +1826,8 @@ func (c *ECR) PutImageRequest(input *PutImageInput) (req *request.Request, outpu // repository and ensure that you are performing operations on the correct registry. // // * ErrCodeImageAlreadyExistsException "ImageAlreadyExistsException" -// The specified image has already been pushed, and there are no changes to -// the manifest or image tag since the last push. +// The specified image has already been pushed, and there were no changes to +// the manifest or image tag after the last push. // // * ErrCodeLayersNotFoundException "LayersNotFoundException" // The specified layers could not be found, or the specified layer is not valid @@ -1569,7 +1839,7 @@ func (c *ECR) PutImageRequest(input *PutImageInput) (req *request.Request, outpu // (http://docs.aws.amazon.com/AmazonECR/latest/userguide/service_limits.html) // in the Amazon EC2 Container Registry User Guide. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutImage +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutImage func (c *ECR) PutImage(input *PutImageInput) (*PutImageOutput, error) { req, out := c.PutImageRequest(input) return out, req.Send() @@ -1591,6 +1861,93 @@ func (c *ECR) PutImageWithContext(ctx aws.Context, input *PutImageInput, opts .. return out, req.Send() } +const opPutLifecyclePolicy = "PutLifecyclePolicy" + +// PutLifecyclePolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutLifecyclePolicy operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutLifecyclePolicy for more information on using the PutLifecyclePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutLifecyclePolicyRequest method. 
+// req, resp := client.PutLifecyclePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutLifecyclePolicy +func (c *ECR) PutLifecyclePolicyRequest(input *PutLifecyclePolicyInput) (req *request.Request, output *PutLifecyclePolicyOutput) { + op := &request.Operation{ + Name: opPutLifecyclePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutLifecyclePolicyInput{} + } + + output = &PutLifecyclePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutLifecyclePolicy API operation for Amazon EC2 Container Registry. +// +// Creates or updates a lifecycle policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Registry's +// API operation PutLifecyclePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// * ErrCodeRepositoryNotFoundException "RepositoryNotFoundException" +// The specified repository could not be found. Check the spelling of the specified +// repository and ensure that you are performing operations on the correct registry. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutLifecyclePolicy +func (c *ECR) PutLifecyclePolicy(input *PutLifecyclePolicyInput) (*PutLifecyclePolicyOutput, error) { + req, out := c.PutLifecyclePolicyRequest(input) + return out, req.Send() +} + +// PutLifecyclePolicyWithContext is the same as PutLifecyclePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutLifecyclePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECR) PutLifecyclePolicyWithContext(ctx aws.Context, input *PutLifecyclePolicyInput, opts ...request.Option) (*PutLifecyclePolicyOutput, error) { + req, out := c.PutLifecyclePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opSetRepositoryPolicy = "SetRepositoryPolicy" // SetRepositoryPolicyRequest generates a "aws/request.Request" representing the @@ -1616,7 +1973,7 @@ const opSetRepositoryPolicy = "SetRepositoryPolicy" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/SetRepositoryPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/SetRepositoryPolicy func (c *ECR) SetRepositoryPolicyRequest(input *SetRepositoryPolicyInput) (req *request.Request, output *SetRepositoryPolicyOutput) { op := &request.Operation{ Name: opSetRepositoryPolicy, @@ -1656,7 +2013,7 @@ func (c *ECR) SetRepositoryPolicyRequest(input *SetRepositoryPolicyInput) (req * // The specified repository could not be found. 
Check the spelling of the specified // repository and ensure that you are performing operations on the correct registry. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/SetRepositoryPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/SetRepositoryPolicy func (c *ECR) SetRepositoryPolicy(input *SetRepositoryPolicyInput) (*SetRepositoryPolicyOutput, error) { req, out := c.SetRepositoryPolicyRequest(input) return out, req.Send() @@ -1678,6 +2035,101 @@ func (c *ECR) SetRepositoryPolicyWithContext(ctx aws.Context, input *SetReposito return out, req.Send() } +const opStartLifecyclePolicyPreview = "StartLifecyclePolicyPreview" + +// StartLifecyclePolicyPreviewRequest generates a "aws/request.Request" representing the +// client's request for the StartLifecyclePolicyPreview operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See StartLifecyclePolicyPreview for more information on using the StartLifecyclePolicyPreview +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the StartLifecyclePolicyPreviewRequest method. +// req, resp := client.StartLifecyclePolicyPreviewRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/StartLifecyclePolicyPreview +func (c *ECR) StartLifecyclePolicyPreviewRequest(input *StartLifecyclePolicyPreviewInput) (req *request.Request, output *StartLifecyclePolicyPreviewOutput) { + op := &request.Operation{ + Name: opStartLifecyclePolicyPreview, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &StartLifecyclePolicyPreviewInput{} + } + + output = &StartLifecyclePolicyPreviewOutput{} + req = c.newRequest(op, input, output) + return +} + +// StartLifecyclePolicyPreview API operation for Amazon EC2 Container Registry. +// +// Starts a preview of the specified lifecycle policy. This allows you to see +// the results before creating the lifecycle policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon EC2 Container Registry's +// API operation StartLifecyclePolicyPreview for usage and error information. +// +// Returned Error Codes: +// * ErrCodeServerException "ServerException" +// These errors are usually caused by a server-side issue. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// The specified parameter is invalid. Review the available parameters for the +// API request. +// +// * ErrCodeRepositoryNotFoundException "RepositoryNotFoundException" +// The specified repository could not be found. Check the spelling of the specified +// repository and ensure that you are performing operations on the correct registry. +// +// * ErrCodeLifecyclePolicyNotFoundException "LifecyclePolicyNotFoundException" +// The lifecycle policy could not be found, and no policy is set to the repository. 
+// +// * ErrCodeLifecyclePolicyPreviewInProgressException "LifecyclePolicyPreviewInProgressException" +// The previous lifecycle policy preview request has not completed. Please try +// again later. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/StartLifecyclePolicyPreview +func (c *ECR) StartLifecyclePolicyPreview(input *StartLifecyclePolicyPreviewInput) (*StartLifecyclePolicyPreviewOutput, error) { + req, out := c.StartLifecyclePolicyPreviewRequest(input) + return out, req.Send() +} + +// StartLifecyclePolicyPreviewWithContext is the same as StartLifecyclePolicyPreview with the addition of +// the ability to pass a context and additional request options. +// +// See StartLifecyclePolicyPreview for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *ECR) StartLifecyclePolicyPreviewWithContext(ctx aws.Context, input *StartLifecyclePolicyPreviewInput, opts ...request.Option) (*StartLifecyclePolicyPreviewOutput, error) { + req, out := c.StartLifecyclePolicyPreviewRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opUploadLayerPart = "UploadLayerPart" // UploadLayerPartRequest generates a "aws/request.Request" representing the @@ -1703,7 +2155,7 @@ const opUploadLayerPart = "UploadLayerPart" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/UploadLayerPart +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/UploadLayerPart func (c *ECR) UploadLayerPartRequest(input *UploadLayerPartInput) (req *request.Request, output *UploadLayerPartOutput) { op := &request.Operation{ Name: opUploadLayerPart, @@ -1761,7 +2213,7 @@ func (c *ECR) UploadLayerPartRequest(input *UploadLayerPartInput) (req *request. // (http://docs.aws.amazon.com/AmazonECR/latest/userguide/service_limits.html) // in the Amazon EC2 Container Registry User Guide. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/UploadLayerPart +// See also, https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/UploadLayerPart func (c *ECR) UploadLayerPart(input *UploadLayerPartInput) (*UploadLayerPartOutput, error) { req, out := c.UploadLayerPartRequest(input) return out, req.Send() @@ -1784,7 +2236,6 @@ func (c *ECR) UploadLayerPartWithContext(ctx aws.Context, input *UploadLayerPart } // An object representing authorization data for an Amazon ECR registry. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/AuthorizationData type AuthorizationData struct { _ struct{} `type:"structure"` @@ -1831,7 +2282,6 @@ func (s *AuthorizationData) SetProxyEndpoint(v string) *AuthorizationData { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchCheckLayerAvailabilityRequest type BatchCheckLayerAvailabilityInput struct { _ struct{} `type:"structure"` @@ -1900,7 +2350,6 @@ func (s *BatchCheckLayerAvailabilityInput) SetRepositoryName(v string) *BatchChe return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchCheckLayerAvailabilityResponse type BatchCheckLayerAvailabilityOutput struct { _ struct{} `type:"structure"` @@ -1936,7 +2385,6 @@ func (s *BatchCheckLayerAvailabilityOutput) SetLayers(v []*Layer) *BatchCheckLay // Deletes specified images within a specified repository. Images are specified // with either the imageTag or imageDigest. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchDeleteImageRequest type BatchDeleteImageInput struct { _ struct{} `type:"structure"` @@ -2006,7 +2454,6 @@ func (s *BatchDeleteImageInput) SetRepositoryName(v string) *BatchDeleteImageInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchDeleteImageResponse type BatchDeleteImageOutput struct { _ struct{} `type:"structure"` @@ -2039,7 +2486,6 @@ func (s *BatchDeleteImageOutput) SetImageIds(v []*ImageIdentifier) *BatchDeleteI return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchGetImageRequest type BatchGetImageInput struct { _ struct{} `type:"structure"` @@ -2124,7 +2570,6 @@ func (s *BatchGetImageInput) SetRepositoryName(v string) *BatchGetImageInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/BatchGetImageResponse type BatchGetImageOutput struct { _ struct{} `type:"structure"` @@ -2157,7 +2602,6 @@ func (s *BatchGetImageOutput) SetImages(v []*Image) *BatchGetImageOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CompleteLayerUploadRequest type CompleteLayerUploadInput struct { _ struct{} `type:"structure"` @@ -2241,7 +2685,6 @@ func (s *CompleteLayerUploadInput) SetUploadId(v string) *CompleteLayerUploadInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CompleteLayerUploadResponse type CompleteLayerUploadOutput struct { _ struct{} `type:"structure"` @@ -2292,7 +2735,6 @@ func (s *CompleteLayerUploadOutput) SetUploadId(v string) *CompleteLayerUploadOu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CreateRepositoryRequest type CreateRepositoryInput struct { _ struct{} `type:"structure"` @@ -2336,7 +2778,6 @@ func (s *CreateRepositoryInput) SetRepositoryName(v string) *CreateRepositoryInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/CreateRepositoryResponse type CreateRepositoryOutput struct { _ struct{} `type:"structure"` @@ -2360,11 +2801,112 @@ func (s *CreateRepositoryOutput) SetRepository(v *Repository) *CreateRepositoryO return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryRequest +type DeleteLifecyclePolicyInput struct { + _ struct{} `type:"structure"` + + // The AWS account ID associated with the registry that contains the repository. + // If you do not specify a registry, the default registry is assumed. 
+ RegistryId *string `locationName:"registryId" type:"string"` + + // The name of the repository that is associated with the repository policy + // to
 delete. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteLifecyclePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLifecyclePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLifecyclePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLifecyclePolicyInput"} + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRegistryId sets the RegistryId field's value. +func (s *DeleteLifecyclePolicyInput) SetRegistryId(v string) *DeleteLifecyclePolicyInput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *DeleteLifecyclePolicyInput) SetRepositoryName(v string) *DeleteLifecyclePolicyInput { + s.RepositoryName = &v + return s +} + +type DeleteLifecyclePolicyOutput struct { + _ struct{} `type:"structure"` + + // The time stamp of the last time that the lifecycle policy was run. + LastEvaluatedAt *time.Time `locationName:"lastEvaluatedAt" type:"timestamp" timestampFormat:"unix"` + + // The JSON repository policy text. + LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` + + // The registry ID associated with the request. + RegistryId *string `locationName:"registryId" type:"string"` + + // The repository name associated with the request. + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string"` +} + +// String returns the string representation +func (s DeleteLifecyclePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLifecyclePolicyOutput) GoString() string { + return s.String() +} + +// SetLastEvaluatedAt sets the LastEvaluatedAt field's value. +func (s *DeleteLifecyclePolicyOutput) SetLastEvaluatedAt(v time.Time) *DeleteLifecyclePolicyOutput { + s.LastEvaluatedAt = &v + return s +} + +// SetLifecyclePolicyText sets the LifecyclePolicyText field's value. +func (s *DeleteLifecyclePolicyOutput) SetLifecyclePolicyText(v string) *DeleteLifecyclePolicyOutput { + s.LifecyclePolicyText = &v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *DeleteLifecyclePolicyOutput) SetRegistryId(v string) *DeleteLifecyclePolicyOutput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *DeleteLifecyclePolicyOutput) SetRepositoryName(v string) *DeleteLifecyclePolicyOutput { + s.RepositoryName = &v + return s +} + type DeleteRepositoryInput struct { _ struct{} `type:"structure"` - // Force the deletion of the repository if it contains images. + // If a repository contains images, forces the deletion. 
Force *bool `locationName:"force" type:"boolean"` // The AWS account ID associated with the registry that contains the repository @@ -2421,7 +2963,6 @@ func (s *DeleteRepositoryInput) SetRepositoryName(v string) *DeleteRepositoryInp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryResponse type DeleteRepositoryOutput struct { _ struct{} `type:"structure"` @@ -2445,7 +2986,6 @@ func (s *DeleteRepositoryOutput) SetRepository(v *Repository) *DeleteRepositoryO return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryPolicyRequest type DeleteRepositoryPolicyInput struct { _ struct{} `type:"structure"` @@ -2499,7 +3039,6 @@ func (s *DeleteRepositoryPolicyInput) SetRepositoryName(v string) *DeleteReposit return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DeleteRepositoryPolicyResponse type DeleteRepositoryPolicyOutput struct { _ struct{} `type:"structure"` @@ -2542,7 +3081,6 @@ func (s *DeleteRepositoryPolicyOutput) SetRepositoryName(v string) *DeleteReposi } // An object representing a filter on a DescribeImages operation. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeImagesFilter type DescribeImagesFilter struct { _ struct{} `type:"structure"` @@ -2567,7 +3105,6 @@ func (s *DescribeImagesFilter) SetTagStatus(v string) *DescribeImagesFilter { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeImagesRequest type DescribeImagesInput struct { _ struct{} `type:"structure"` @@ -2672,7 +3209,6 @@ func (s *DescribeImagesInput) SetRepositoryName(v string) *DescribeImagesInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeImagesResponse type DescribeImagesOutput struct { _ struct{} `type:"structure"` @@ -2708,7 +3244,6 @@ func (s *DescribeImagesOutput) SetNextToken(v string) *DescribeImagesOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeRepositoriesRequest type DescribeRepositoriesInput struct { _ struct{} `type:"structure"` @@ -2791,7 +3326,6 @@ func (s *DescribeRepositoriesInput) SetRepositoryNames(v []*string) *DescribeRep return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/DescribeRepositoriesResponse type DescribeRepositoriesOutput struct { _ struct{} `type:"structure"` @@ -2827,7 +3361,6 @@ func (s *DescribeRepositoriesOutput) SetRepositories(v []*Repository) *DescribeR return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetAuthorizationTokenRequest type GetAuthorizationTokenInput struct { _ struct{} `type:"structure"` @@ -2866,7 +3399,6 @@ func (s *GetAuthorizationTokenInput) SetRegistryIds(v []*string) *GetAuthorizati return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetAuthorizationTokenResponse type GetAuthorizationTokenOutput struct { _ struct{} `type:"structure"` @@ -2891,7 +3423,6 @@ func (s *GetAuthorizationTokenOutput) SetAuthorizationData(v []*AuthorizationDat return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetDownloadUrlForLayerRequest type GetDownloadUrlForLayerInput struct { _ struct{} `type:"structure"` @@ -2957,7 +3488,6 @@ func (s *GetDownloadUrlForLayerInput) SetRepositoryName(v string) *GetDownloadUr return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetDownloadUrlForLayerResponse type 
GetDownloadUrlForLayerOutput struct { _ struct{} `type:"structure"` @@ -2990,7 +3520,292 @@ func (s *GetDownloadUrlForLayerOutput) SetLayerDigest(v string) *GetDownloadUrlF return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetRepositoryPolicyRequest +type GetLifecyclePolicyInput struct { + _ struct{} `type:"structure"` + + // The AWS account ID associated with the registry that contains the repository. + // If you do not specify a registry, the default registry is assumed. + RegistryId *string `locationName:"registryId" type:"string"` + + // The name of the repository with the policy to retrieve. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetLifecyclePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLifecyclePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLifecyclePolicyInput"} + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRegistryId sets the RegistryId field's value. +func (s *GetLifecyclePolicyInput) SetRegistryId(v string) *GetLifecyclePolicyInput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *GetLifecyclePolicyInput) SetRepositoryName(v string) *GetLifecyclePolicyInput { + s.RepositoryName = &v + return s +} + +type GetLifecyclePolicyOutput struct { + _ struct{} `type:"structure"` + + // The time stamp of the last time that the lifecycle policy was run. + LastEvaluatedAt *time.Time `locationName:"lastEvaluatedAt" type:"timestamp" timestampFormat:"unix"` + + // The JSON repository policy text. + LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` + + // The registry ID associated with the request. + RegistryId *string `locationName:"registryId" type:"string"` + + // The repository name associated with the request. + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string"` +} + +// String returns the string representation +func (s GetLifecyclePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePolicyOutput) GoString() string { + return s.String() +} + +// SetLastEvaluatedAt sets the LastEvaluatedAt field's value. +func (s *GetLifecyclePolicyOutput) SetLastEvaluatedAt(v time.Time) *GetLifecyclePolicyOutput { + s.LastEvaluatedAt = &v + return s +} + +// SetLifecyclePolicyText sets the LifecyclePolicyText field's value. +func (s *GetLifecyclePolicyOutput) SetLifecyclePolicyText(v string) *GetLifecyclePolicyOutput { + s.LifecyclePolicyText = &v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *GetLifecyclePolicyOutput) SetRegistryId(v string) *GetLifecyclePolicyOutput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. 
+func (s *GetLifecyclePolicyOutput) SetRepositoryName(v string) *GetLifecyclePolicyOutput { + s.RepositoryName = &v + return s +} + +type GetLifecyclePolicyPreviewInput struct { + _ struct{} `type:"structure"` + + // An optional parameter that filters results based on image tag status and + // all tags, if tagged. + Filter *LifecyclePolicyPreviewFilter `locationName:"filter" type:"structure"` + + // The list of imageIDs to be included. + ImageIds []*ImageIdentifier `locationName:"imageIds" min:"1" type:"list"` + + // The maximum number of repository results returned by GetLifecyclePolicyPreviewRequest + // in
 paginated output. When this parameter is used, GetLifecyclePolicyPreviewRequest + // only returns
 maxResults results in a single page along with a nextToken + // response element. The remaining results of the initial request can be seen + // by sending
 another GetLifecyclePolicyPreviewRequest request with the returned + // nextToken
 value. This value can be between 1 and 100. If this
 parameter + // is not used, then GetLifecyclePolicyPreviewRequest returns up to
 100 results + // and a nextToken value, if
 applicable. + MaxResults *int64 `locationName:"maxResults" min:"1" type:"integer"` + + // The nextToken value returned from a previous paginated
 GetLifecyclePolicyPreviewRequest + // request where maxResults was used and the
 results exceeded the value of + // that parameter. Pagination continues from the end of the
 previous results + // that returned the nextToken value. This value is
 null when there are no + // more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + + // The AWS account ID associated with the registry that contains the repository. + // If you do not specify a registry, the default registry is assumed. + RegistryId *string `locationName:"registryId" type:"string"` + + // The name of the repository with the policy to retrieve. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetLifecyclePolicyPreviewInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePolicyPreviewInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLifecyclePolicyPreviewInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLifecyclePolicyPreviewInput"} + if s.ImageIds != nil && len(s.ImageIds) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ImageIds", 1)) + } + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(request.NewErrParamMinValue("MaxResults", 1)) + } + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilter sets the Filter field's value. +func (s *GetLifecyclePolicyPreviewInput) SetFilter(v *LifecyclePolicyPreviewFilter) *GetLifecyclePolicyPreviewInput { + s.Filter = v + return s +} + +// SetImageIds sets the ImageIds field's value. +func (s *GetLifecyclePolicyPreviewInput) SetImageIds(v []*ImageIdentifier) *GetLifecyclePolicyPreviewInput { + s.ImageIds = v + return s +} + +// SetMaxResults sets the MaxResults field's value. +func (s *GetLifecyclePolicyPreviewInput) SetMaxResults(v int64) *GetLifecyclePolicyPreviewInput { + s.MaxResults = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetLifecyclePolicyPreviewInput) SetNextToken(v string) *GetLifecyclePolicyPreviewInput { + s.NextToken = &v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *GetLifecyclePolicyPreviewInput) SetRegistryId(v string) *GetLifecyclePolicyPreviewInput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *GetLifecyclePolicyPreviewInput) SetRepositoryName(v string) *GetLifecyclePolicyPreviewInput { + s.RepositoryName = &v + return s +} + +type GetLifecyclePolicyPreviewOutput struct { + _ struct{} `type:"structure"` + + // The JSON repository policy text. + LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` + + // The nextToken value to include in a future GetLifecyclePolicyPreview request. + // When the results of a GetLifecyclePolicyPreview request exceed maxResults, + // this value can be used to retrieve the next page of results. This value is + // null when there are no more results to return. + NextToken *string `locationName:"nextToken" type:"string"` + + // The results of the lifecycle policy preview request. + PreviewResults []*LifecyclePolicyPreviewResult `locationName:"previewResults" type:"list"` + + // The registry ID associated with the request. 
+ RegistryId *string `locationName:"registryId" type:"string"` + + // The repository name associated with the request. + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string"` + + // The status of the lifecycle policy preview request. + Status *string `locationName:"status" type:"string" enum:"LifecyclePolicyPreviewStatus"` + + // The list of images that is returned as a result of the action. + Summary *LifecyclePolicyPreviewSummary `locationName:"summary" type:"structure"` +} + +// String returns the string representation +func (s GetLifecyclePolicyPreviewOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLifecyclePolicyPreviewOutput) GoString() string { + return s.String() +} + +// SetLifecyclePolicyText sets the LifecyclePolicyText field's value. +func (s *GetLifecyclePolicyPreviewOutput) SetLifecyclePolicyText(v string) *GetLifecyclePolicyPreviewOutput { + s.LifecyclePolicyText = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetLifecyclePolicyPreviewOutput) SetNextToken(v string) *GetLifecyclePolicyPreviewOutput { + s.NextToken = &v + return s +} + +// SetPreviewResults sets the PreviewResults field's value. +func (s *GetLifecyclePolicyPreviewOutput) SetPreviewResults(v []*LifecyclePolicyPreviewResult) *GetLifecyclePolicyPreviewOutput { + s.PreviewResults = v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *GetLifecyclePolicyPreviewOutput) SetRegistryId(v string) *GetLifecyclePolicyPreviewOutput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *GetLifecyclePolicyPreviewOutput) SetRepositoryName(v string) *GetLifecyclePolicyPreviewOutput { + s.RepositoryName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *GetLifecyclePolicyPreviewOutput) SetStatus(v string) *GetLifecyclePolicyPreviewOutput { + s.Status = &v + return s +} + +// SetSummary sets the Summary field's value. +func (s *GetLifecyclePolicyPreviewOutput) SetSummary(v *LifecyclePolicyPreviewSummary) *GetLifecyclePolicyPreviewOutput { + s.Summary = v + return s +} + type GetRepositoryPolicyInput struct { _ struct{} `type:"structure"` @@ -2998,7 +3813,7 @@ type GetRepositoryPolicyInput struct { // If you do not specify a registry, the default registry is assumed. RegistryId *string `locationName:"registryId" type:"string"` - // The name of the repository whose policy you want to retrieve. + // The name of the repository with the policy to retrieve. // // RepositoryName is a required field RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` @@ -3042,7 +3857,6 @@ func (s *GetRepositoryPolicyInput) SetRepositoryName(v string) *GetRepositoryPol return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/GetRepositoryPolicyResponse type GetRepositoryPolicyOutput struct { _ struct{} `type:"structure"` @@ -3085,7 +3899,6 @@ func (s *GetRepositoryPolicyOutput) SetRepositoryName(v string) *GetRepositoryPo } // An object representing an Amazon ECR image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/Image type Image struct { _ struct{} `type:"structure"` @@ -3137,7 +3950,6 @@ func (s *Image) SetRepositoryName(v string) *Image { } // An object that describes an image returned by a DescribeImages operation. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ImageDetail type ImageDetail struct { _ struct{} `type:"structure"` @@ -3213,7 +4025,6 @@ func (s *ImageDetail) SetRepositoryName(v string) *ImageDetail { } // An object representing an Amazon ECR image failure. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ImageFailure type ImageFailure struct { _ struct{} `type:"structure"` @@ -3256,7 +4067,6 @@ func (s *ImageFailure) SetImageId(v *ImageIdentifier) *ImageFailure { } // An object with identifying information for an Amazon ECR image. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ImageIdentifier type ImageIdentifier struct { _ struct{} `type:"structure"` @@ -3289,15 +4099,14 @@ func (s *ImageIdentifier) SetImageTag(v string) *ImageIdentifier { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/InitiateLayerUploadRequest type InitiateLayerUploadInput struct { _ struct{} `type:"structure"` - // The AWS account ID associated with the registry that you intend to upload - // layers to. If you do not specify a registry, the default registry is assumed. + // The AWS account ID associated with the registry to which you intend to upload + // layers. If you do not specify a registry, the default registry is assumed. RegistryId *string `locationName:"registryId" type:"string"` - // The name of the repository that you intend to upload layers to. + // The name of the repository to which you intend to upload layers. // // RepositoryName is a required field RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` @@ -3341,7 +4150,6 @@ func (s *InitiateLayerUploadInput) SetRepositoryName(v string) *InitiateLayerUpl return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/InitiateLayerUploadResponse type InitiateLayerUploadOutput struct { _ struct{} `type:"structure"` @@ -3377,7 +4185,6 @@ func (s *InitiateLayerUploadOutput) SetUploadId(v string) *InitiateLayerUploadOu } // An object representing an Amazon ECR image layer. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/Layer type Layer struct { _ struct{} `type:"structure"` @@ -3430,7 +4237,6 @@ func (s *Layer) SetMediaType(v string) *Layer { } // An object representing an Amazon ECR image layer failure. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/LayerFailure type LayerFailure struct { _ struct{} `type:"structure"` @@ -3472,8 +4278,140 @@ func (s *LayerFailure) SetLayerDigest(v string) *LayerFailure { return s } +// The filter for the lifecycle policy preview. +type LifecyclePolicyPreviewFilter struct { + _ struct{} `type:"structure"` + + // The tag status of the image. + TagStatus *string `locationName:"tagStatus" type:"string" enum:"TagStatus"` +} + +// String returns the string representation +func (s LifecyclePolicyPreviewFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecyclePolicyPreviewFilter) GoString() string { + return s.String() +} + +// SetTagStatus sets the TagStatus field's value. +func (s *LifecyclePolicyPreviewFilter) SetTagStatus(v string) *LifecyclePolicyPreviewFilter { + s.TagStatus = &v + return s +} + +// The result of the lifecycle policy preview. +type LifecyclePolicyPreviewResult struct { + _ struct{} `type:"structure"` + + // The type of action to be taken. 
+ Action *LifecyclePolicyRuleAction `locationName:"action" type:"structure"` + + // The priority of the applied rule. + AppliedRulePriority *int64 `locationName:"appliedRulePriority" min:"1" type:"integer"` + + // The sha256 digest of the image manifest. + ImageDigest *string `locationName:"imageDigest" type:"string"` + + // The date and time, expressed in standard JavaScript date format, at which + // the current image was pushed to the repository. + ImagePushedAt *time.Time `locationName:"imagePushedAt" type:"timestamp" timestampFormat:"unix"` + + // The list of tags associated with this image. + ImageTags []*string `locationName:"imageTags" type:"list"` +} + +// String returns the string representation +func (s LifecyclePolicyPreviewResult) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecyclePolicyPreviewResult) GoString() string { + return s.String() +} + +// SetAction sets the Action field's value. +func (s *LifecyclePolicyPreviewResult) SetAction(v *LifecyclePolicyRuleAction) *LifecyclePolicyPreviewResult { + s.Action = v + return s +} + +// SetAppliedRulePriority sets the AppliedRulePriority field's value. +func (s *LifecyclePolicyPreviewResult) SetAppliedRulePriority(v int64) *LifecyclePolicyPreviewResult { + s.AppliedRulePriority = &v + return s +} + +// SetImageDigest sets the ImageDigest field's value. +func (s *LifecyclePolicyPreviewResult) SetImageDigest(v string) *LifecyclePolicyPreviewResult { + s.ImageDigest = &v + return s +} + +// SetImagePushedAt sets the ImagePushedAt field's value. +func (s *LifecyclePolicyPreviewResult) SetImagePushedAt(v time.Time) *LifecyclePolicyPreviewResult { + s.ImagePushedAt = &v + return s +} + +// SetImageTags sets the ImageTags field's value. +func (s *LifecyclePolicyPreviewResult) SetImageTags(v []*string) *LifecyclePolicyPreviewResult { + s.ImageTags = v + return s +} + +// The summary of the lifecycle policy preview request. +type LifecyclePolicyPreviewSummary struct { + _ struct{} `type:"structure"` + + // The number of expiring images. + ExpiringImageTotalCount *int64 `locationName:"expiringImageTotalCount" type:"integer"` +} + +// String returns the string representation +func (s LifecyclePolicyPreviewSummary) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecyclePolicyPreviewSummary) GoString() string { + return s.String() +} + +// SetExpiringImageTotalCount sets the ExpiringImageTotalCount field's value. +func (s *LifecyclePolicyPreviewSummary) SetExpiringImageTotalCount(v int64) *LifecyclePolicyPreviewSummary { + s.ExpiringImageTotalCount = &v + return s +} + +// The type of action to be taken. +type LifecyclePolicyRuleAction struct { + _ struct{} `type:"structure"` + + // The type of action to be taken. + Type *string `locationName:"type" type:"string" enum:"ImageActionType"` +} + +// String returns the string representation +func (s LifecyclePolicyRuleAction) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LifecyclePolicyRuleAction) GoString() string { + return s.String() +} + +// SetType sets the Type field's value. +func (s *LifecyclePolicyRuleAction) SetType(v string) *LifecyclePolicyRuleAction { + s.Type = &v + return s +} + // An object representing a filter on a ListImages operation. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ListImagesFilter type ListImagesFilter struct { _ struct{} `type:"structure"` @@ -3498,7 +4436,6 @@ func (s *ListImagesFilter) SetTagStatus(v string) *ListImagesFilter { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ListImagesRequest type ListImagesInput struct { _ struct{} `type:"structure"` @@ -3524,11 +4461,11 @@ type ListImagesInput struct { NextToken *string `locationName:"nextToken" type:"string"` // The AWS account ID associated with the registry that contains the repository - // to list images in. If you do not specify a registry, the default registry + // in which to list images. If you do not specify a registry, the default registry // is assumed. RegistryId *string `locationName:"registryId" type:"string"` - // The repository whose image IDs are to be listed. + // The repository with image IDs to be listed. // // RepositoryName is a required field RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` @@ -3593,7 +4530,6 @@ func (s *ListImagesInput) SetRepositoryName(v string) *ListImagesInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/ListImagesResponse type ListImagesOutput struct { _ struct{} `type:"structure"` @@ -3629,7 +4565,6 @@ func (s *ListImagesOutput) SetNextToken(v string) *ListImagesOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutImageRequest type PutImageInput struct { _ struct{} `type:"structure"` @@ -3706,7 +4641,6 @@ func (s *PutImageInput) SetRepositoryName(v string) *PutImageInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/PutImageResponse type PutImageOutput struct { _ struct{} `type:"structure"` @@ -3730,28 +4664,135 @@ func (s *PutImageOutput) SetImage(v *Image) *PutImageOutput { return s } +type PutLifecyclePolicyInput struct { + _ struct{} `type:"structure"` + + // The JSON repository policy text to apply to the repository. + // + // LifecyclePolicyText is a required field + LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string" required:"true"` + + // The AWS account ID associated with the registry that contains the repository. + // If you do
 not specify a registry, the default registry is assumed. + RegistryId *string `locationName:"registryId" type:"string"` + + // The name of the repository to receive the policy. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutLifecyclePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutLifecyclePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutLifecyclePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutLifecyclePolicyInput"} + if s.LifecyclePolicyText == nil { + invalidParams.Add(request.NewErrParamRequired("LifecyclePolicyText")) + } + if s.LifecyclePolicyText != nil && len(*s.LifecyclePolicyText) < 100 { + invalidParams.Add(request.NewErrParamMinLen("LifecyclePolicyText", 100)) + } + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLifecyclePolicyText sets the LifecyclePolicyText field's value. +func (s *PutLifecyclePolicyInput) SetLifecyclePolicyText(v string) *PutLifecyclePolicyInput { + s.LifecyclePolicyText = &v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *PutLifecyclePolicyInput) SetRegistryId(v string) *PutLifecyclePolicyInput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *PutLifecyclePolicyInput) SetRepositoryName(v string) *PutLifecyclePolicyInput { + s.RepositoryName = &v + return s +} + +type PutLifecyclePolicyOutput struct { + _ struct{} `type:"structure"` + + // The JSON repository policy text. + LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` + + // The registry ID associated with the request. + RegistryId *string `locationName:"registryId" type:"string"` + + // The repository name associated with the request. + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string"` +} + +// String returns the string representation +func (s PutLifecyclePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutLifecyclePolicyOutput) GoString() string { + return s.String() +} + +// SetLifecyclePolicyText sets the LifecyclePolicyText field's value. +func (s *PutLifecyclePolicyOutput) SetLifecyclePolicyText(v string) *PutLifecyclePolicyOutput { + s.LifecyclePolicyText = &v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *PutLifecyclePolicyOutput) SetRegistryId(v string) *PutLifecyclePolicyOutput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *PutLifecyclePolicyOutput) SetRepositoryName(v string) *PutLifecyclePolicyOutput { + s.RepositoryName = &v + return s +} + // An object representing a repository. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/Repository type Repository struct { _ struct{} `type:"structure"` - // The date and time, in JavaScript date/time format, when the repository was - // created. 
+ // The date and time, in JavaScript date format, when the repository was created. CreatedAt *time.Time `locationName:"createdAt" type:"timestamp" timestampFormat:"unix"` // The AWS account ID associated with the registry that contains the repository. RegistryId *string `locationName:"registryId" type:"string"` // The Amazon Resource Name (ARN) that identifies the repository. The ARN contains - // the arn:aws:ecr namespace, followed by the region of the repository, the - // AWS account ID of the repository owner, the repository namespace, and then - // the repository name. For example, arn:aws:ecr:region:012345678910:repository/test. + // the arn:aws:ecr namespace, followed by the region of the repository, AWS + // account ID of the repository owner, repository namespace, and repository + // name. For example, arn:aws:ecr:region:012345678910:repository/test. RepositoryArn *string `locationName:"repositoryArn" type:"string"` // The name of the repository. RepositoryName *string `locationName:"repositoryName" min:"2" type:"string"` - // The URI for the repository. You can use this URI for Docker push and pull + // The URI for the repository. You can use this URI for Docker push or pull // operations. RepositoryUri *string `locationName:"repositoryUri" type:"string"` } @@ -3796,7 +4837,6 @@ func (s *Repository) SetRepositoryUri(v string) *Repository { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/SetRepositoryPolicyRequest type SetRepositoryPolicyInput struct { _ struct{} `type:"structure"` @@ -3873,7 +4913,6 @@ func (s *SetRepositoryPolicyInput) SetRepositoryName(v string) *SetRepositoryPol return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/SetRepositoryPolicyResponse type SetRepositoryPolicyOutput struct { _ struct{} `type:"structure"` @@ -3915,7 +4954,120 @@ func (s *SetRepositoryPolicyOutput) SetRepositoryName(v string) *SetRepositoryPo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/UploadLayerPartRequest +type StartLifecyclePolicyPreviewInput struct { + _ struct{} `type:"structure"` + + // The policy to be evaluated against. If you do not specify a policy, the current + // policy for the repository is used. + LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` + + // The AWS account ID associated with the registry that contains the repository. + // If you do not specify a registry, the default registry is assumed. + RegistryId *string `locationName:"registryId" type:"string"` + + // The name of the repository to be evaluated. + // + // RepositoryName is a required field + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` +} + +// String returns the string representation +func (s StartLifecyclePolicyPreviewInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartLifecyclePolicyPreviewInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *StartLifecyclePolicyPreviewInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "StartLifecyclePolicyPreviewInput"} + if s.LifecyclePolicyText != nil && len(*s.LifecyclePolicyText) < 100 { + invalidParams.Add(request.NewErrParamMinLen("LifecyclePolicyText", 100)) + } + if s.RepositoryName == nil { + invalidParams.Add(request.NewErrParamRequired("RepositoryName")) + } + if s.RepositoryName != nil && len(*s.RepositoryName) < 2 { + invalidParams.Add(request.NewErrParamMinLen("RepositoryName", 2)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLifecyclePolicyText sets the LifecyclePolicyText field's value. +func (s *StartLifecyclePolicyPreviewInput) SetLifecyclePolicyText(v string) *StartLifecyclePolicyPreviewInput { + s.LifecyclePolicyText = &v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *StartLifecyclePolicyPreviewInput) SetRegistryId(v string) *StartLifecyclePolicyPreviewInput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *StartLifecyclePolicyPreviewInput) SetRepositoryName(v string) *StartLifecyclePolicyPreviewInput { + s.RepositoryName = &v + return s +} + +type StartLifecyclePolicyPreviewOutput struct { + _ struct{} `type:"structure"` + + // The JSON repository policy text. + LifecyclePolicyText *string `locationName:"lifecyclePolicyText" min:"100" type:"string"` + + // The registry ID associated with the request. + RegistryId *string `locationName:"registryId" type:"string"` + + // The repository name associated with the request. + RepositoryName *string `locationName:"repositoryName" min:"2" type:"string"` + + // The status of the lifecycle policy preview request. + Status *string `locationName:"status" type:"string" enum:"LifecyclePolicyPreviewStatus"` +} + +// String returns the string representation +func (s StartLifecyclePolicyPreviewOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s StartLifecyclePolicyPreviewOutput) GoString() string { + return s.String() +} + +// SetLifecyclePolicyText sets the LifecyclePolicyText field's value. +func (s *StartLifecyclePolicyPreviewOutput) SetLifecyclePolicyText(v string) *StartLifecyclePolicyPreviewOutput { + s.LifecyclePolicyText = &v + return s +} + +// SetRegistryId sets the RegistryId field's value. +func (s *StartLifecyclePolicyPreviewOutput) SetRegistryId(v string) *StartLifecyclePolicyPreviewOutput { + s.RegistryId = &v + return s +} + +// SetRepositoryName sets the RepositoryName field's value. +func (s *StartLifecyclePolicyPreviewOutput) SetRepositoryName(v string) *StartLifecyclePolicyPreviewOutput { + s.RepositoryName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *StartLifecyclePolicyPreviewOutput) SetStatus(v string) *StartLifecyclePolicyPreviewOutput { + s.Status = &v + return s +} + type UploadLayerPartInput struct { _ struct{} `type:"structure"` @@ -3936,11 +5088,11 @@ type UploadLayerPartInput struct { // PartLastByte is a required field PartLastByte *int64 `locationName:"partLastByte" type:"long" required:"true"` - // The AWS account ID associated with the registry that you are uploading layer - // parts to. If you do not specify a registry, the default registry is assumed. + // The AWS account ID associated with the registry to which you are uploading + // layer parts. If you do not specify a registry, the default registry is assumed. 
RegistryId *string `locationName:"registryId" type:"string"` - // The name of the repository that you are uploading layer parts to. + // The name of the repository to which you are uploading layer parts. // // RepositoryName is a required field RepositoryName *string `locationName:"repositoryName" min:"2" type:"string" required:"true"` @@ -4026,7 +5178,6 @@ func (s *UploadLayerPartInput) SetUploadId(v string) *UploadLayerPartInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21/UploadLayerPartResponse type UploadLayerPartOutput struct { _ struct{} `type:"structure"` @@ -4077,6 +5228,11 @@ func (s *UploadLayerPartOutput) SetUploadId(v string) *UploadLayerPartOutput { return s } +const ( + // ImageActionTypeExpire is a ImageActionType enum value + ImageActionTypeExpire = "EXPIRE" +) + const ( // ImageFailureCodeInvalidImageDigest is a ImageFailureCode enum value ImageFailureCodeInvalidImageDigest = "InvalidImageDigest" @@ -4110,6 +5266,20 @@ const ( LayerFailureCodeMissingLayerDigest = "MissingLayerDigest" ) +const ( + // LifecyclePolicyPreviewStatusInProgress is a LifecyclePolicyPreviewStatus enum value + LifecyclePolicyPreviewStatusInProgress = "IN_PROGRESS" + + // LifecyclePolicyPreviewStatusComplete is a LifecyclePolicyPreviewStatus enum value + LifecyclePolicyPreviewStatusComplete = "COMPLETE" + + // LifecyclePolicyPreviewStatusExpired is a LifecyclePolicyPreviewStatus enum value + LifecyclePolicyPreviewStatusExpired = "EXPIRED" + + // LifecyclePolicyPreviewStatusFailed is a LifecyclePolicyPreviewStatus enum value + LifecyclePolicyPreviewStatusFailed = "FAILED" +) + const ( // TagStatusTagged is a TagStatus enum value TagStatusTagged = "TAGGED" diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecr/doc.go b/vendor/github.com/aws/aws-sdk-go/service/ecr/doc.go index 987aa72fa..a2aea0f6e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecr/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecr/doc.go @@ -3,11 +3,11 @@ // Package ecr provides the client and types for making API // requests to Amazon EC2 Container Registry. // -// Amazon EC2 Container Registry (Amazon ECR) is a managed AWS Docker registry -// service. Customers can use the familiar Docker CLI to push, pull, and manage -// images. Amazon ECR provides a secure, scalable, and reliable registry. Amazon -// ECR supports private Docker repositories with resource-based permissions -// using AWS IAM so that specific users or Amazon EC2 instances can access repositories +// Amazon EC2 Container Registry (Amazon ECR) is a managed Docker registry service. +// Customers can use the familiar Docker CLI to push, pull, and manage images. +// Amazon ECR provides a secure, scalable, and reliable registry. Amazon ECR +// supports private Docker repositories with resource-based permissions using +// IAM so that specific users or Amazon EC2 instances can access repositories // and images. Developers can use the Docker CLI to author and manage images. // // See https://docs.aws.amazon.com/goto/WebAPI/ecr-2015-09-21 for more information on this service. @@ -17,7 +17,7 @@ // // Using the Client // -// To Amazon EC2 Container Registry with the SDK use the New function to create +// To contact Amazon EC2 Container Registry with the SDK use the New function to create // a new service client. With that client you can make API requests to the service. // These clients are safe to use concurrently. 
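// A minimal usage sketch of the ECR lifecycle-policy types and constants added
// above, driven through the generated client. It assumes the matching operation
// wrappers (PutLifecyclePolicy, StartLifecyclePolicyPreview,
// GetLifecyclePolicyPreview) are generated earlier in this api.go, as they are
// for every other input/output pair here; the repository name and policy JSON
// are placeholders, not values from this change.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ecr"
)

func main() {
	svc := ecr.New(session.Must(session.NewSession()))

	// LifecyclePolicyText is required and must be at least 100 characters
	// (min:"100" in the struct tag above).
	policy := `{"rules":[{"rulePriority":1,"description":"expire untagged images","selection":{"tagStatus":"untagged","countType":"imageCountMoreThan","countNumber":10},"action":{"type":"expire"}}]}`

	if _, err := svc.PutLifecyclePolicy(&ecr.PutLifecyclePolicyInput{
		RepositoryName:      aws.String("my-repo"),
		LifecyclePolicyText: aws.String(policy),
	}); err != nil {
		log.Fatal(err)
	}

	// Dry-run the policy against the repository's images...
	if _, err := svc.StartLifecyclePolicyPreview(&ecr.StartLifecyclePolicyPreviewInput{
		RepositoryName: aws.String("my-repo"),
	}); err != nil {
		log.Fatal(err)
	}

	// ...then page through the preview results using the MaxResults/NextToken
	// semantics documented on GetLifecyclePolicyPreviewInput. Real code would
	// keep polling until Status reaches LifecyclePolicyPreviewStatusComplete.
	input := &ecr.GetLifecyclePolicyPreviewInput{
		RepositoryName: aws.String("my-repo"),
		MaxResults:     aws.Int64(50),
	}
	for {
		out, err := svc.GetLifecyclePolicyPreview(input)
		if err != nil {
			if aerr, ok := err.(awserr.Error); ok && aerr.Code() == ecr.ErrCodeLifecyclePolicyPreviewNotFoundException {
				log.Fatal("no lifecycle policy preview (dry run) exists for this repository")
			}
			log.Fatal(err)
		}
		fmt.Println("preview status:", aws.StringValue(out.Status))
		for _, r := range out.PreviewResults {
			fmt.Println(aws.StringValue(r.ImageDigest), "rule", aws.Int64Value(r.AppliedRulePriority))
		}
		if out.NextToken == nil {
			break
		}
		input.NextToken = out.NextToken
	}
}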
// diff --git a/vendor/github.com/aws/aws-sdk-go/service/ecr/errors.go b/vendor/github.com/aws/aws-sdk-go/service/ecr/errors.go index 4399e6a29..2a2caaedd 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/ecr/errors.go +++ b/vendor/github.com/aws/aws-sdk-go/service/ecr/errors.go @@ -13,8 +13,8 @@ const ( // ErrCodeImageAlreadyExistsException for service response error code // "ImageAlreadyExistsException". // - // The specified image has already been pushed, and there are no changes to - // the manifest or image tag since the last push. + // The specified image has already been pushed, and there were no changes to + // the manifest or image tag after the last push. ErrCodeImageAlreadyExistsException = "ImageAlreadyExistsException" // ErrCodeImageNotFoundException for service response error code @@ -70,6 +70,25 @@ const ( // for this repository. ErrCodeLayersNotFoundException = "LayersNotFoundException" + // ErrCodeLifecyclePolicyNotFoundException for service response error code + // "LifecyclePolicyNotFoundException". + // + // The lifecycle policy could not be found, and no policy is set to the repository. + ErrCodeLifecyclePolicyNotFoundException = "LifecyclePolicyNotFoundException" + + // ErrCodeLifecyclePolicyPreviewInProgressException for service response error code + // "LifecyclePolicyPreviewInProgressException". + // + // The previous lifecycle policy preview request has not completed. Please try + // again later. + ErrCodeLifecyclePolicyPreviewInProgressException = "LifecyclePolicyPreviewInProgressException" + + // ErrCodeLifecyclePolicyPreviewNotFoundException for service response error code + // "LifecyclePolicyPreviewNotFoundException". + // + // There is no dry run for this repository. + ErrCodeLifecyclePolicyPreviewNotFoundException = "LifecyclePolicyPreviewNotFoundException" + // ErrCodeLimitExceededException for service response error code // "LimitExceededException". // diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go index e65ef266f..2faeee1ec 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go @@ -39,7 +39,7 @@ const opAbortMultipartUpload = "AbortMultipartUpload" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUpload func (c *S3) AbortMultipartUploadRequest(input *AbortMultipartUploadInput) (req *request.Request, output *AbortMultipartUploadOutput) { op := &request.Operation{ Name: opAbortMultipartUpload, @@ -75,7 +75,7 @@ func (c *S3) AbortMultipartUploadRequest(input *AbortMultipartUploadInput) (req // * ErrCodeNoSuchUpload "NoSuchUpload" // The specified multipart upload does not exist. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUpload func (c *S3) AbortMultipartUpload(input *AbortMultipartUploadInput) (*AbortMultipartUploadOutput, error) { req, out := c.AbortMultipartUploadRequest(input) return out, req.Send() @@ -122,7 +122,7 @@ const opCompleteMultipartUpload = "CompleteMultipartUpload" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUpload func (c *S3) CompleteMultipartUploadRequest(input *CompleteMultipartUploadInput) (req *request.Request, output *CompleteMultipartUploadOutput) { op := &request.Operation{ Name: opCompleteMultipartUpload, @@ -149,7 +149,7 @@ func (c *S3) CompleteMultipartUploadRequest(input *CompleteMultipartUploadInput) // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation CompleteMultipartUpload for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUpload func (c *S3) CompleteMultipartUpload(input *CompleteMultipartUploadInput) (*CompleteMultipartUploadOutput, error) { req, out := c.CompleteMultipartUploadRequest(input) return out, req.Send() @@ -196,7 +196,7 @@ const opCopyObject = "CopyObject" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObject func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, output *CopyObjectOutput) { op := &request.Operation{ Name: opCopyObject, @@ -229,7 +229,7 @@ func (c *S3) CopyObjectRequest(input *CopyObjectInput) (req *request.Request, ou // The source object of the COPY operation is not in the active tier and is // only stored in Amazon Glacier. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObject func (c *S3) CopyObject(input *CopyObjectInput) (*CopyObjectOutput, error) { req, out := c.CopyObjectRequest(input) return out, req.Send() @@ -276,7 +276,7 @@ const opCreateBucket = "CreateBucket" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucket +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucket func (c *S3) CreateBucketRequest(input *CreateBucketInput) (req *request.Request, output *CreateBucketOutput) { op := &request.Operation{ Name: opCreateBucket, @@ -311,7 +311,7 @@ func (c *S3) CreateBucketRequest(input *CreateBucketInput) (req *request.Request // // * ErrCodeBucketAlreadyOwnedByYou "BucketAlreadyOwnedByYou" // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucket +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucket func (c *S3) CreateBucket(input *CreateBucketInput) (*CreateBucketOutput, error) { req, out := c.CreateBucketRequest(input) return out, req.Send() @@ -358,7 +358,7 @@ const opCreateMultipartUpload = "CreateMultipartUpload" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUpload func (c *S3) CreateMultipartUploadRequest(input *CreateMultipartUploadInput) (req *request.Request, output *CreateMultipartUploadOutput) { op := &request.Operation{ Name: opCreateMultipartUpload, @@ -391,7 +391,7 @@ func (c *S3) CreateMultipartUploadRequest(input *CreateMultipartUploadInput) (re // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation CreateMultipartUpload for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUpload +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUpload func (c *S3) CreateMultipartUpload(input *CreateMultipartUploadInput) (*CreateMultipartUploadOutput, error) { req, out := c.CreateMultipartUploadRequest(input) return out, req.Send() @@ -438,7 +438,7 @@ const opDeleteBucket = "DeleteBucket" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucket +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucket func (c *S3) DeleteBucketRequest(input *DeleteBucketInput) (req *request.Request, output *DeleteBucketOutput) { op := &request.Operation{ Name: opDeleteBucket, @@ -468,7 +468,7 @@ func (c *S3) DeleteBucketRequest(input *DeleteBucketInput) (req *request.Request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucket for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucket +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucket func (c *S3) DeleteBucket(input *DeleteBucketInput) (*DeleteBucketOutput, error) { req, out := c.DeleteBucketRequest(input) return out, req.Send() @@ -515,7 +515,7 @@ const opDeleteBucketAnalyticsConfiguration = "DeleteBucketAnalyticsConfiguration // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfiguration func (c *S3) DeleteBucketAnalyticsConfigurationRequest(input *DeleteBucketAnalyticsConfigurationInput) (req *request.Request, output *DeleteBucketAnalyticsConfigurationOutput) { op := &request.Operation{ Name: opDeleteBucketAnalyticsConfiguration, @@ -545,7 +545,7 @@ func (c *S3) DeleteBucketAnalyticsConfigurationRequest(input *DeleteBucketAnalyt // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketAnalyticsConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfiguration func (c *S3) DeleteBucketAnalyticsConfiguration(input *DeleteBucketAnalyticsConfigurationInput) (*DeleteBucketAnalyticsConfigurationOutput, error) { req, out := c.DeleteBucketAnalyticsConfigurationRequest(input) return out, req.Send() @@ -592,7 +592,7 @@ const opDeleteBucketCors = "DeleteBucketCors" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCors +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCors func (c *S3) DeleteBucketCorsRequest(input *DeleteBucketCorsInput) (req *request.Request, output *DeleteBucketCorsOutput) { op := &request.Operation{ Name: opDeleteBucketCors, @@ -621,7 +621,7 @@ func (c *S3) DeleteBucketCorsRequest(input *DeleteBucketCorsInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketCors for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCors +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCors func (c *S3) DeleteBucketCors(input *DeleteBucketCorsInput) (*DeleteBucketCorsOutput, error) { req, out := c.DeleteBucketCorsRequest(input) return out, req.Send() @@ -643,6 +643,82 @@ func (c *S3) DeleteBucketCorsWithContext(ctx aws.Context, input *DeleteBucketCor return out, req.Send() } +const opDeleteBucketEncryption = "DeleteBucketEncryption" + +// DeleteBucketEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteBucketEncryption operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteBucketEncryption for more information on using the DeleteBucketEncryption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DeleteBucketEncryptionRequest method. +// req, resp := client.DeleteBucketEncryptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketEncryption +func (c *S3) DeleteBucketEncryptionRequest(input *DeleteBucketEncryptionInput) (req *request.Request, output *DeleteBucketEncryptionOutput) { + op := &request.Operation{ + Name: opDeleteBucketEncryption, + HTTPMethod: "DELETE", + HTTPPath: "/{Bucket}?encryption", + } + + if input == nil { + input = &DeleteBucketEncryptionInput{} + } + + output = &DeleteBucketEncryptionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteBucketEncryption API operation for Amazon Simple Storage Service. +// +// Deletes the server-side encryption configuration from the bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation DeleteBucketEncryption for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketEncryption +func (c *S3) DeleteBucketEncryption(input *DeleteBucketEncryptionInput) (*DeleteBucketEncryptionOutput, error) { + req, out := c.DeleteBucketEncryptionRequest(input) + return out, req.Send() +} + +// DeleteBucketEncryptionWithContext is the same as DeleteBucketEncryption with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteBucketEncryption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) DeleteBucketEncryptionWithContext(ctx aws.Context, input *DeleteBucketEncryptionInput, opts ...request.Option) (*DeleteBucketEncryptionOutput, error) { + req, out := c.DeleteBucketEncryptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteBucketInventoryConfiguration = "DeleteBucketInventoryConfiguration" // DeleteBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the @@ -668,7 +744,7 @@ const opDeleteBucketInventoryConfiguration = "DeleteBucketInventoryConfiguration // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfiguration func (c *S3) DeleteBucketInventoryConfigurationRequest(input *DeleteBucketInventoryConfigurationInput) (req *request.Request, output *DeleteBucketInventoryConfigurationOutput) { op := &request.Operation{ Name: opDeleteBucketInventoryConfiguration, @@ -698,7 +774,7 @@ func (c *S3) DeleteBucketInventoryConfigurationRequest(input *DeleteBucketInvent // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketInventoryConfiguration for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfiguration func (c *S3) DeleteBucketInventoryConfiguration(input *DeleteBucketInventoryConfigurationInput) (*DeleteBucketInventoryConfigurationOutput, error) { req, out := c.DeleteBucketInventoryConfigurationRequest(input) return out, req.Send() @@ -745,7 +821,7 @@ const opDeleteBucketLifecycle = "DeleteBucketLifecycle" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycle +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycle func (c *S3) DeleteBucketLifecycleRequest(input *DeleteBucketLifecycleInput) (req *request.Request, output *DeleteBucketLifecycleOutput) { op := &request.Operation{ Name: opDeleteBucketLifecycle, @@ -774,7 +850,7 @@ func (c *S3) DeleteBucketLifecycleRequest(input *DeleteBucketLifecycleInput) (re // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketLifecycle for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycle +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycle func (c *S3) DeleteBucketLifecycle(input *DeleteBucketLifecycleInput) (*DeleteBucketLifecycleOutput, error) { req, out := c.DeleteBucketLifecycleRequest(input) return out, req.Send() @@ -821,7 +897,7 @@ const opDeleteBucketMetricsConfiguration = "DeleteBucketMetricsConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfiguration func (c *S3) DeleteBucketMetricsConfigurationRequest(input *DeleteBucketMetricsConfigurationInput) (req *request.Request, output *DeleteBucketMetricsConfigurationOutput) { op := &request.Operation{ Name: opDeleteBucketMetricsConfiguration, @@ -851,7 +927,7 @@ func (c *S3) DeleteBucketMetricsConfigurationRequest(input *DeleteBucketMetricsC // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketMetricsConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfiguration func (c *S3) DeleteBucketMetricsConfiguration(input *DeleteBucketMetricsConfigurationInput) (*DeleteBucketMetricsConfigurationOutput, error) { req, out := c.DeleteBucketMetricsConfigurationRequest(input) return out, req.Send() @@ -898,7 +974,7 @@ const opDeleteBucketPolicy = "DeleteBucketPolicy" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicy func (c *S3) DeleteBucketPolicyRequest(input *DeleteBucketPolicyInput) (req *request.Request, output *DeleteBucketPolicyOutput) { op := &request.Operation{ Name: opDeleteBucketPolicy, @@ -927,7 +1003,7 @@ func (c *S3) DeleteBucketPolicyRequest(input *DeleteBucketPolicyInput) (req *req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketPolicy for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicy func (c *S3) DeleteBucketPolicy(input *DeleteBucketPolicyInput) (*DeleteBucketPolicyOutput, error) { req, out := c.DeleteBucketPolicyRequest(input) return out, req.Send() @@ -974,7 +1050,7 @@ const opDeleteBucketReplication = "DeleteBucketReplication" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplication +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplication func (c *S3) DeleteBucketReplicationRequest(input *DeleteBucketReplicationInput) (req *request.Request, output *DeleteBucketReplicationOutput) { op := &request.Operation{ Name: opDeleteBucketReplication, @@ -1003,7 +1079,7 @@ func (c *S3) DeleteBucketReplicationRequest(input *DeleteBucketReplicationInput) // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketReplication for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplication +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplication func (c *S3) DeleteBucketReplication(input *DeleteBucketReplicationInput) (*DeleteBucketReplicationOutput, error) { req, out := c.DeleteBucketReplicationRequest(input) return out, req.Send() @@ -1050,7 +1126,7 @@ const opDeleteBucketTagging = "DeleteBucketTagging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTagging func (c *S3) DeleteBucketTaggingRequest(input *DeleteBucketTaggingInput) (req *request.Request, output *DeleteBucketTaggingOutput) { op := &request.Operation{ Name: opDeleteBucketTagging, @@ -1079,7 +1155,7 @@ func (c *S3) DeleteBucketTaggingRequest(input *DeleteBucketTaggingInput) (req *r // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketTagging for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTagging func (c *S3) DeleteBucketTagging(input *DeleteBucketTaggingInput) (*DeleteBucketTaggingOutput, error) { req, out := c.DeleteBucketTaggingRequest(input) return out, req.Send() @@ -1126,7 +1202,7 @@ const opDeleteBucketWebsite = "DeleteBucketWebsite" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsite +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsite func (c *S3) DeleteBucketWebsiteRequest(input *DeleteBucketWebsiteInput) (req *request.Request, output *DeleteBucketWebsiteOutput) { op := &request.Operation{ Name: opDeleteBucketWebsite, @@ -1155,7 +1231,7 @@ func (c *S3) DeleteBucketWebsiteRequest(input *DeleteBucketWebsiteInput) (req *r // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteBucketWebsite for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsite +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsite func (c *S3) DeleteBucketWebsite(input *DeleteBucketWebsiteInput) (*DeleteBucketWebsiteOutput, error) { req, out := c.DeleteBucketWebsiteRequest(input) return out, req.Send() @@ -1202,7 +1278,7 @@ const opDeleteObject = "DeleteObject" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObject func (c *S3) DeleteObjectRequest(input *DeleteObjectInput) (req *request.Request, output *DeleteObjectOutput) { op := &request.Operation{ Name: opDeleteObject, @@ -1231,7 +1307,7 @@ func (c *S3) DeleteObjectRequest(input *DeleteObjectInput) (req *request.Request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteObject for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObject func (c *S3) DeleteObject(input *DeleteObjectInput) (*DeleteObjectOutput, error) { req, out := c.DeleteObjectRequest(input) return out, req.Send() @@ -1278,7 +1354,7 @@ const opDeleteObjectTagging = "DeleteObjectTagging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTagging func (c *S3) DeleteObjectTaggingRequest(input *DeleteObjectTaggingInput) (req *request.Request, output *DeleteObjectTaggingOutput) { op := &request.Operation{ Name: opDeleteObjectTagging, @@ -1305,7 +1381,7 @@ func (c *S3) DeleteObjectTaggingRequest(input *DeleteObjectTaggingInput) (req *r // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteObjectTagging for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTagging func (c *S3) DeleteObjectTagging(input *DeleteObjectTaggingInput) (*DeleteObjectTaggingOutput, error) { req, out := c.DeleteObjectTaggingRequest(input) return out, req.Send() @@ -1352,7 +1428,7 @@ const opDeleteObjects = "DeleteObjects" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjects +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjects func (c *S3) DeleteObjectsRequest(input *DeleteObjectsInput) (req *request.Request, output *DeleteObjectsOutput) { op := &request.Operation{ Name: opDeleteObjects, @@ -1380,7 +1456,7 @@ func (c *S3) DeleteObjectsRequest(input *DeleteObjectsInput) (req *request.Reque // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation DeleteObjects for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjects +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjects func (c *S3) DeleteObjects(input *DeleteObjectsInput) (*DeleteObjectsOutput, error) { req, out := c.DeleteObjectsRequest(input) return out, req.Send() @@ -1427,7 +1503,7 @@ const opGetBucketAccelerateConfiguration = "GetBucketAccelerateConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfiguration func (c *S3) GetBucketAccelerateConfigurationRequest(input *GetBucketAccelerateConfigurationInput) (req *request.Request, output *GetBucketAccelerateConfigurationOutput) { op := &request.Operation{ Name: opGetBucketAccelerateConfiguration, @@ -1454,7 +1530,7 @@ func (c *S3) GetBucketAccelerateConfigurationRequest(input *GetBucketAccelerateC // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketAccelerateConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfiguration func (c *S3) GetBucketAccelerateConfiguration(input *GetBucketAccelerateConfigurationInput) (*GetBucketAccelerateConfigurationOutput, error) { req, out := c.GetBucketAccelerateConfigurationRequest(input) return out, req.Send() @@ -1501,7 +1577,7 @@ const opGetBucketAcl = "GetBucketAcl" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAcl func (c *S3) GetBucketAclRequest(input *GetBucketAclInput) (req *request.Request, output *GetBucketAclOutput) { op := &request.Operation{ Name: opGetBucketAcl, @@ -1528,7 +1604,7 @@ func (c *S3) GetBucketAclRequest(input *GetBucketAclInput) (req *request.Request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketAcl for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAcl func (c *S3) GetBucketAcl(input *GetBucketAclInput) (*GetBucketAclOutput, error) { req, out := c.GetBucketAclRequest(input) return out, req.Send() @@ -1575,7 +1651,7 @@ const opGetBucketAnalyticsConfiguration = "GetBucketAnalyticsConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfiguration func (c *S3) GetBucketAnalyticsConfigurationRequest(input *GetBucketAnalyticsConfigurationInput) (req *request.Request, output *GetBucketAnalyticsConfigurationOutput) { op := &request.Operation{ Name: opGetBucketAnalyticsConfiguration, @@ -1603,7 +1679,7 @@ func (c *S3) GetBucketAnalyticsConfigurationRequest(input *GetBucketAnalyticsCon // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketAnalyticsConfiguration for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfiguration func (c *S3) GetBucketAnalyticsConfiguration(input *GetBucketAnalyticsConfigurationInput) (*GetBucketAnalyticsConfigurationOutput, error) { req, out := c.GetBucketAnalyticsConfigurationRequest(input) return out, req.Send() @@ -1650,7 +1726,7 @@ const opGetBucketCors = "GetBucketCors" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCors +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCors func (c *S3) GetBucketCorsRequest(input *GetBucketCorsInput) (req *request.Request, output *GetBucketCorsOutput) { op := &request.Operation{ Name: opGetBucketCors, @@ -1677,7 +1753,7 @@ func (c *S3) GetBucketCorsRequest(input *GetBucketCorsInput) (req *request.Reque // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketCors for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCors +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCors func (c *S3) GetBucketCors(input *GetBucketCorsInput) (*GetBucketCorsOutput, error) { req, out := c.GetBucketCorsRequest(input) return out, req.Send() @@ -1699,6 +1775,80 @@ func (c *S3) GetBucketCorsWithContext(ctx aws.Context, input *GetBucketCorsInput return out, req.Send() } +const opGetBucketEncryption = "GetBucketEncryption" + +// GetBucketEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the GetBucketEncryption operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetBucketEncryption for more information on using the GetBucketEncryption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetBucketEncryptionRequest method. +// req, resp := client.GetBucketEncryptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketEncryption +func (c *S3) GetBucketEncryptionRequest(input *GetBucketEncryptionInput) (req *request.Request, output *GetBucketEncryptionOutput) { + op := &request.Operation{ + Name: opGetBucketEncryption, + HTTPMethod: "GET", + HTTPPath: "/{Bucket}?encryption", + } + + if input == nil { + input = &GetBucketEncryptionInput{} + } + + output = &GetBucketEncryptionOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetBucketEncryption API operation for Amazon Simple Storage Service. +// +// Returns the server-side encryption configuration of a bucket. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation GetBucketEncryption for usage and error information. 
+// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketEncryption +func (c *S3) GetBucketEncryption(input *GetBucketEncryptionInput) (*GetBucketEncryptionOutput, error) { + req, out := c.GetBucketEncryptionRequest(input) + return out, req.Send() +} + +// GetBucketEncryptionWithContext is the same as GetBucketEncryption with the addition of +// the ability to pass a context and additional request options. +// +// See GetBucketEncryption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) GetBucketEncryptionWithContext(ctx aws.Context, input *GetBucketEncryptionInput, opts ...request.Option) (*GetBucketEncryptionOutput, error) { + req, out := c.GetBucketEncryptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetBucketInventoryConfiguration = "GetBucketInventoryConfiguration" // GetBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the @@ -1724,7 +1874,7 @@ const opGetBucketInventoryConfiguration = "GetBucketInventoryConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfiguration func (c *S3) GetBucketInventoryConfigurationRequest(input *GetBucketInventoryConfigurationInput) (req *request.Request, output *GetBucketInventoryConfigurationOutput) { op := &request.Operation{ Name: opGetBucketInventoryConfiguration, @@ -1752,7 +1902,7 @@ func (c *S3) GetBucketInventoryConfigurationRequest(input *GetBucketInventoryCon // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketInventoryConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfiguration func (c *S3) GetBucketInventoryConfiguration(input *GetBucketInventoryConfigurationInput) (*GetBucketInventoryConfigurationOutput, error) { req, out := c.GetBucketInventoryConfigurationRequest(input) return out, req.Send() @@ -1799,7 +1949,7 @@ const opGetBucketLifecycle = "GetBucketLifecycle" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle func (c *S3) GetBucketLifecycleRequest(input *GetBucketLifecycleInput) (req *request.Request, output *GetBucketLifecycleOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, GetBucketLifecycle, has been deprecated") @@ -1829,7 +1979,7 @@ func (c *S3) GetBucketLifecycleRequest(input *GetBucketLifecycleInput) (req *req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketLifecycle for usage and error information. 
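// A minimal, hypothetical usage sketch for the newly vendored GetBucketEncryption
// operation shown above (illustration only, not part of the vendored diff). It assumes
// the GetBucketEncryptionInput/GetBucketEncryptionOutput types defined elsewhere in this
// file; the bucket name and region below are placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Build an S3 client from the default credential chain; the region is a placeholder.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Fetch the bucket's server-side encryption configuration.
	out, err := svc.GetBucketEncryption(&s3.GetBucketEncryptionInput{
		Bucket: aws.String("example-bucket"), // placeholder bucket name
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}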
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycle func (c *S3) GetBucketLifecycle(input *GetBucketLifecycleInput) (*GetBucketLifecycleOutput, error) { req, out := c.GetBucketLifecycleRequest(input) return out, req.Send() @@ -1876,7 +2026,7 @@ const opGetBucketLifecycleConfiguration = "GetBucketLifecycleConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfiguration func (c *S3) GetBucketLifecycleConfigurationRequest(input *GetBucketLifecycleConfigurationInput) (req *request.Request, output *GetBucketLifecycleConfigurationOutput) { op := &request.Operation{ Name: opGetBucketLifecycleConfiguration, @@ -1903,7 +2053,7 @@ func (c *S3) GetBucketLifecycleConfigurationRequest(input *GetBucketLifecycleCon // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketLifecycleConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfiguration func (c *S3) GetBucketLifecycleConfiguration(input *GetBucketLifecycleConfigurationInput) (*GetBucketLifecycleConfigurationOutput, error) { req, out := c.GetBucketLifecycleConfigurationRequest(input) return out, req.Send() @@ -1950,7 +2100,7 @@ const opGetBucketLocation = "GetBucketLocation" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocation +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocation func (c *S3) GetBucketLocationRequest(input *GetBucketLocationInput) (req *request.Request, output *GetBucketLocationOutput) { op := &request.Operation{ Name: opGetBucketLocation, @@ -1977,7 +2127,7 @@ func (c *S3) GetBucketLocationRequest(input *GetBucketLocationInput) (req *reque // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketLocation for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocation +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocation func (c *S3) GetBucketLocation(input *GetBucketLocationInput) (*GetBucketLocationOutput, error) { req, out := c.GetBucketLocationRequest(input) return out, req.Send() @@ -2024,7 +2174,7 @@ const opGetBucketLogging = "GetBucketLogging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLogging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLogging func (c *S3) GetBucketLoggingRequest(input *GetBucketLoggingInput) (req *request.Request, output *GetBucketLoggingOutput) { op := &request.Operation{ Name: opGetBucketLogging, @@ -2052,7 +2202,7 @@ func (c *S3) GetBucketLoggingRequest(input *GetBucketLoggingInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketLogging for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLogging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLogging func (c *S3) GetBucketLogging(input *GetBucketLoggingInput) (*GetBucketLoggingOutput, error) { req, out := c.GetBucketLoggingRequest(input) return out, req.Send() @@ -2099,7 +2249,7 @@ const opGetBucketMetricsConfiguration = "GetBucketMetricsConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfiguration func (c *S3) GetBucketMetricsConfigurationRequest(input *GetBucketMetricsConfigurationInput) (req *request.Request, output *GetBucketMetricsConfigurationOutput) { op := &request.Operation{ Name: opGetBucketMetricsConfiguration, @@ -2127,7 +2277,7 @@ func (c *S3) GetBucketMetricsConfigurationRequest(input *GetBucketMetricsConfigu // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketMetricsConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfiguration func (c *S3) GetBucketMetricsConfiguration(input *GetBucketMetricsConfigurationInput) (*GetBucketMetricsConfigurationOutput, error) { req, out := c.GetBucketMetricsConfigurationRequest(input) return out, req.Send() @@ -2174,7 +2324,7 @@ const opGetBucketNotification = "GetBucketNotification" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification func (c *S3) GetBucketNotificationRequest(input *GetBucketNotificationConfigurationRequest) (req *request.Request, output *NotificationConfigurationDeprecated) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, GetBucketNotification, has been deprecated") @@ -2204,7 +2354,7 @@ func (c *S3) GetBucketNotificationRequest(input *GetBucketNotificationConfigurat // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketNotification for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotification func (c *S3) GetBucketNotification(input *GetBucketNotificationConfigurationRequest) (*NotificationConfigurationDeprecated, error) { req, out := c.GetBucketNotificationRequest(input) return out, req.Send() @@ -2251,7 +2401,7 @@ const opGetBucketNotificationConfiguration = "GetBucketNotificationConfiguration // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotificationConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotificationConfiguration func (c *S3) GetBucketNotificationConfigurationRequest(input *GetBucketNotificationConfigurationRequest) (req *request.Request, output *NotificationConfiguration) { op := &request.Operation{ Name: opGetBucketNotificationConfiguration, @@ -2278,7 +2428,7 @@ func (c *S3) GetBucketNotificationConfigurationRequest(input *GetBucketNotificat // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketNotificationConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotificationConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotificationConfiguration func (c *S3) GetBucketNotificationConfiguration(input *GetBucketNotificationConfigurationRequest) (*NotificationConfiguration, error) { req, out := c.GetBucketNotificationConfigurationRequest(input) return out, req.Send() @@ -2325,7 +2475,7 @@ const opGetBucketPolicy = "GetBucketPolicy" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicy func (c *S3) GetBucketPolicyRequest(input *GetBucketPolicyInput) (req *request.Request, output *GetBucketPolicyOutput) { op := &request.Operation{ Name: opGetBucketPolicy, @@ -2352,7 +2502,7 @@ func (c *S3) GetBucketPolicyRequest(input *GetBucketPolicyInput) (req *request.R // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketPolicy for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicy func (c *S3) GetBucketPolicy(input *GetBucketPolicyInput) (*GetBucketPolicyOutput, error) { req, out := c.GetBucketPolicyRequest(input) return out, req.Send() @@ -2399,7 +2549,7 @@ const opGetBucketReplication = "GetBucketReplication" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplication +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplication func (c *S3) GetBucketReplicationRequest(input *GetBucketReplicationInput) (req *request.Request, output *GetBucketReplicationOutput) { op := &request.Operation{ Name: opGetBucketReplication, @@ -2426,7 +2576,7 @@ func (c *S3) GetBucketReplicationRequest(input *GetBucketReplicationInput) (req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketReplication for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplication +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplication func (c *S3) GetBucketReplication(input *GetBucketReplicationInput) (*GetBucketReplicationOutput, error) { req, out := c.GetBucketReplicationRequest(input) return out, req.Send() @@ -2473,7 +2623,7 @@ const opGetBucketRequestPayment = "GetBucketRequestPayment" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPayment +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPayment func (c *S3) GetBucketRequestPaymentRequest(input *GetBucketRequestPaymentInput) (req *request.Request, output *GetBucketRequestPaymentOutput) { op := &request.Operation{ Name: opGetBucketRequestPayment, @@ -2500,7 +2650,7 @@ func (c *S3) GetBucketRequestPaymentRequest(input *GetBucketRequestPaymentInput) // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketRequestPayment for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPayment +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPayment func (c *S3) GetBucketRequestPayment(input *GetBucketRequestPaymentInput) (*GetBucketRequestPaymentOutput, error) { req, out := c.GetBucketRequestPaymentRequest(input) return out, req.Send() @@ -2547,7 +2697,7 @@ const opGetBucketTagging = "GetBucketTagging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTagging func (c *S3) GetBucketTaggingRequest(input *GetBucketTaggingInput) (req *request.Request, output *GetBucketTaggingOutput) { op := &request.Operation{ Name: opGetBucketTagging, @@ -2574,7 +2724,7 @@ func (c *S3) GetBucketTaggingRequest(input *GetBucketTaggingInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketTagging for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTagging func (c *S3) GetBucketTagging(input *GetBucketTaggingInput) (*GetBucketTaggingOutput, error) { req, out := c.GetBucketTaggingRequest(input) return out, req.Send() @@ -2621,7 +2771,7 @@ const opGetBucketVersioning = "GetBucketVersioning" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioning +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioning func (c *S3) GetBucketVersioningRequest(input *GetBucketVersioningInput) (req *request.Request, output *GetBucketVersioningOutput) { op := &request.Operation{ Name: opGetBucketVersioning, @@ -2648,7 +2798,7 @@ func (c *S3) GetBucketVersioningRequest(input *GetBucketVersioningInput) (req *r // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketVersioning for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioning +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioning func (c *S3) GetBucketVersioning(input *GetBucketVersioningInput) (*GetBucketVersioningOutput, error) { req, out := c.GetBucketVersioningRequest(input) return out, req.Send() @@ -2695,7 +2845,7 @@ const opGetBucketWebsite = "GetBucketWebsite" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsite +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsite func (c *S3) GetBucketWebsiteRequest(input *GetBucketWebsiteInput) (req *request.Request, output *GetBucketWebsiteOutput) { op := &request.Operation{ Name: opGetBucketWebsite, @@ -2722,7 +2872,7 @@ func (c *S3) GetBucketWebsiteRequest(input *GetBucketWebsiteInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetBucketWebsite for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsite +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsite func (c *S3) GetBucketWebsite(input *GetBucketWebsiteInput) (*GetBucketWebsiteOutput, error) { req, out := c.GetBucketWebsiteRequest(input) return out, req.Send() @@ -2769,7 +2919,7 @@ const opGetObject = "GetObject" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject func (c *S3) GetObjectRequest(input *GetObjectInput) (req *request.Request, output *GetObjectOutput) { op := &request.Operation{ Name: opGetObject, @@ -2801,7 +2951,7 @@ func (c *S3) GetObjectRequest(input *GetObjectInput) (req *request.Request, outp // * ErrCodeNoSuchKey "NoSuchKey" // The specified key does not exist. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObject func (c *S3) GetObject(input *GetObjectInput) (*GetObjectOutput, error) { req, out := c.GetObjectRequest(input) return out, req.Send() @@ -2848,7 +2998,7 @@ const opGetObjectAcl = "GetObjectAcl" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAcl func (c *S3) GetObjectAclRequest(input *GetObjectAclInput) (req *request.Request, output *GetObjectAclOutput) { op := &request.Operation{ Name: opGetObjectAcl, @@ -2880,7 +3030,7 @@ func (c *S3) GetObjectAclRequest(input *GetObjectAclInput) (req *request.Request // * ErrCodeNoSuchKey "NoSuchKey" // The specified key does not exist. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAcl func (c *S3) GetObjectAcl(input *GetObjectAclInput) (*GetObjectAclOutput, error) { req, out := c.GetObjectAclRequest(input) return out, req.Send() @@ -2927,7 +3077,7 @@ const opGetObjectTagging = "GetObjectTagging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTagging func (c *S3) GetObjectTaggingRequest(input *GetObjectTaggingInput) (req *request.Request, output *GetObjectTaggingOutput) { op := &request.Operation{ Name: opGetObjectTagging, @@ -2954,7 +3104,7 @@ func (c *S3) GetObjectTaggingRequest(input *GetObjectTaggingInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetObjectTagging for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTagging func (c *S3) GetObjectTagging(input *GetObjectTaggingInput) (*GetObjectTaggingOutput, error) { req, out := c.GetObjectTaggingRequest(input) return out, req.Send() @@ -3001,7 +3151,7 @@ const opGetObjectTorrent = "GetObjectTorrent" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrent +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrent func (c *S3) GetObjectTorrentRequest(input *GetObjectTorrentInput) (req *request.Request, output *GetObjectTorrentOutput) { op := &request.Operation{ Name: opGetObjectTorrent, @@ -3028,7 +3178,7 @@ func (c *S3) GetObjectTorrentRequest(input *GetObjectTorrentInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation GetObjectTorrent for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrent +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrent func (c *S3) GetObjectTorrent(input *GetObjectTorrentInput) (*GetObjectTorrentOutput, error) { req, out := c.GetObjectTorrentRequest(input) return out, req.Send() @@ -3075,7 +3225,7 @@ const opHeadBucket = "HeadBucket" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucket +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucket func (c *S3) HeadBucketRequest(input *HeadBucketInput) (req *request.Request, output *HeadBucketOutput) { op := &request.Operation{ Name: opHeadBucket, @@ -3110,7 +3260,7 @@ func (c *S3) HeadBucketRequest(input *HeadBucketInput) (req *request.Request, ou // * ErrCodeNoSuchBucket "NoSuchBucket" // The specified bucket does not exist. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucket +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucket func (c *S3) HeadBucket(input *HeadBucketInput) (*HeadBucketOutput, error) { req, out := c.HeadBucketRequest(input) return out, req.Send() @@ -3157,7 +3307,7 @@ const opHeadObject = "HeadObject" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObject func (c *S3) HeadObjectRequest(input *HeadObjectInput) (req *request.Request, output *HeadObjectOutput) { op := &request.Operation{ Name: opHeadObject, @@ -3189,7 +3339,7 @@ func (c *S3) HeadObjectRequest(input *HeadObjectInput) (req *request.Request, ou // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation HeadObject for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObject func (c *S3) HeadObject(input *HeadObjectInput) (*HeadObjectOutput, error) { req, out := c.HeadObjectRequest(input) return out, req.Send() @@ -3236,7 +3386,7 @@ const opListBucketAnalyticsConfigurations = "ListBucketAnalyticsConfigurations" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurations +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurations func (c *S3) ListBucketAnalyticsConfigurationsRequest(input *ListBucketAnalyticsConfigurationsInput) (req *request.Request, output *ListBucketAnalyticsConfigurationsOutput) { op := &request.Operation{ Name: opListBucketAnalyticsConfigurations, @@ -3263,7 +3413,7 @@ func (c *S3) ListBucketAnalyticsConfigurationsRequest(input *ListBucketAnalytics // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation ListBucketAnalyticsConfigurations for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurations +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurations func (c *S3) ListBucketAnalyticsConfigurations(input *ListBucketAnalyticsConfigurationsInput) (*ListBucketAnalyticsConfigurationsOutput, error) { req, out := c.ListBucketAnalyticsConfigurationsRequest(input) return out, req.Send() @@ -3310,7 +3460,7 @@ const opListBucketInventoryConfigurations = "ListBucketInventoryConfigurations" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurations +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurations func (c *S3) ListBucketInventoryConfigurationsRequest(input *ListBucketInventoryConfigurationsInput) (req *request.Request, output *ListBucketInventoryConfigurationsOutput) { op := &request.Operation{ Name: opListBucketInventoryConfigurations, @@ -3337,7 +3487,7 @@ func (c *S3) ListBucketInventoryConfigurationsRequest(input *ListBucketInventory // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation ListBucketInventoryConfigurations for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurations +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurations func (c *S3) ListBucketInventoryConfigurations(input *ListBucketInventoryConfigurationsInput) (*ListBucketInventoryConfigurationsOutput, error) { req, out := c.ListBucketInventoryConfigurationsRequest(input) return out, req.Send() @@ -3384,7 +3534,7 @@ const opListBucketMetricsConfigurations = "ListBucketMetricsConfigurations" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurations +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurations func (c *S3) ListBucketMetricsConfigurationsRequest(input *ListBucketMetricsConfigurationsInput) (req *request.Request, output *ListBucketMetricsConfigurationsOutput) { op := &request.Operation{ Name: opListBucketMetricsConfigurations, @@ -3411,7 +3561,7 @@ func (c *S3) ListBucketMetricsConfigurationsRequest(input *ListBucketMetricsConf // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation ListBucketMetricsConfigurations for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurations +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurations func (c *S3) ListBucketMetricsConfigurations(input *ListBucketMetricsConfigurationsInput) (*ListBucketMetricsConfigurationsOutput, error) { req, out := c.ListBucketMetricsConfigurationsRequest(input) return out, req.Send() @@ -3458,7 +3608,7 @@ const opListBuckets = "ListBuckets" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets func (c *S3) ListBucketsRequest(input *ListBucketsInput) (req *request.Request, output *ListBucketsOutput) { op := &request.Operation{ Name: opListBuckets, @@ -3485,7 +3635,7 @@ func (c *S3) ListBucketsRequest(input *ListBucketsInput) (req *request.Request, // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation ListBuckets for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBuckets func (c *S3) ListBuckets(input *ListBucketsInput) (*ListBucketsOutput, error) { req, out := c.ListBucketsRequest(input) return out, req.Send() @@ -3532,7 +3682,7 @@ const opListMultipartUploads = "ListMultipartUploads" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploads +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploads func (c *S3) ListMultipartUploadsRequest(input *ListMultipartUploadsInput) (req *request.Request, output *ListMultipartUploadsOutput) { op := &request.Operation{ Name: opListMultipartUploads, @@ -3565,7 +3715,7 @@ func (c *S3) ListMultipartUploadsRequest(input *ListMultipartUploadsInput) (req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation ListMultipartUploads for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploads +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploads func (c *S3) ListMultipartUploads(input *ListMultipartUploadsInput) (*ListMultipartUploadsOutput, error) { req, out := c.ListMultipartUploadsRequest(input) return out, req.Send() @@ -3662,7 +3812,7 @@ const opListObjectVersions = "ListObjectVersions" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersions +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersions func (c *S3) ListObjectVersionsRequest(input *ListObjectVersionsInput) (req *request.Request, output *ListObjectVersionsOutput) { op := &request.Operation{ Name: opListObjectVersions, @@ -3695,7 +3845,7 @@ func (c *S3) ListObjectVersionsRequest(input *ListObjectVersionsInput) (req *req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation ListObjectVersions for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersions +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersions func (c *S3) ListObjectVersions(input *ListObjectVersionsInput) (*ListObjectVersionsOutput, error) { req, out := c.ListObjectVersionsRequest(input) return out, req.Send() @@ -3792,7 +3942,7 @@ const opListObjects = "ListObjects" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects func (c *S3) ListObjectsRequest(input *ListObjectsInput) (req *request.Request, output *ListObjectsOutput) { op := &request.Operation{ Name: opListObjects, @@ -3832,7 +3982,7 @@ func (c *S3) ListObjectsRequest(input *ListObjectsInput) (req *request.Request, // * ErrCodeNoSuchBucket "NoSuchBucket" // The specified bucket does not exist. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjects func (c *S3) ListObjects(input *ListObjectsInput) (*ListObjectsOutput, error) { req, out := c.ListObjectsRequest(input) return out, req.Send() @@ -3929,7 +4079,7 @@ const opListObjectsV2 = "ListObjectsV2" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2 +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2 func (c *S3) ListObjectsV2Request(input *ListObjectsV2Input) (req *request.Request, output *ListObjectsV2Output) { op := &request.Operation{ Name: opListObjectsV2, @@ -3970,7 +4120,7 @@ func (c *S3) ListObjectsV2Request(input *ListObjectsV2Input) (req *request.Reque // * ErrCodeNoSuchBucket "NoSuchBucket" // The specified bucket does not exist. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2 +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2 func (c *S3) ListObjectsV2(input *ListObjectsV2Input) (*ListObjectsV2Output, error) { req, out := c.ListObjectsV2Request(input) return out, req.Send() @@ -4067,7 +4217,7 @@ const opListParts = "ListParts" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListParts +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListParts func (c *S3) ListPartsRequest(input *ListPartsInput) (req *request.Request, output *ListPartsOutput) { op := &request.Operation{ Name: opListParts, @@ -4100,7 +4250,7 @@ func (c *S3) ListPartsRequest(input *ListPartsInput) (req *request.Request, outp // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation ListParts for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListParts +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListParts func (c *S3) ListParts(input *ListPartsInput) (*ListPartsOutput, error) { req, out := c.ListPartsRequest(input) return out, req.Send() @@ -4197,7 +4347,7 @@ const opPutBucketAccelerateConfiguration = "PutBucketAccelerateConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfiguration func (c *S3) PutBucketAccelerateConfigurationRequest(input *PutBucketAccelerateConfigurationInput) (req *request.Request, output *PutBucketAccelerateConfigurationOutput) { op := &request.Operation{ Name: opPutBucketAccelerateConfiguration, @@ -4226,7 +4376,7 @@ func (c *S3) PutBucketAccelerateConfigurationRequest(input *PutBucketAccelerateC // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketAccelerateConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfiguration func (c *S3) PutBucketAccelerateConfiguration(input *PutBucketAccelerateConfigurationInput) (*PutBucketAccelerateConfigurationOutput, error) { req, out := c.PutBucketAccelerateConfigurationRequest(input) return out, req.Send() @@ -4273,7 +4423,7 @@ const opPutBucketAcl = "PutBucketAcl" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAcl func (c *S3) PutBucketAclRequest(input *PutBucketAclInput) (req *request.Request, output *PutBucketAclOutput) { op := &request.Operation{ Name: opPutBucketAcl, @@ -4302,7 +4452,7 @@ func (c *S3) PutBucketAclRequest(input *PutBucketAclInput) (req *request.Request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketAcl for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAcl func (c *S3) PutBucketAcl(input *PutBucketAclInput) (*PutBucketAclOutput, error) { req, out := c.PutBucketAclRequest(input) return out, req.Send() @@ -4349,7 +4499,7 @@ const opPutBucketAnalyticsConfiguration = "PutBucketAnalyticsConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfiguration func (c *S3) PutBucketAnalyticsConfigurationRequest(input *PutBucketAnalyticsConfigurationInput) (req *request.Request, output *PutBucketAnalyticsConfigurationOutput) { op := &request.Operation{ Name: opPutBucketAnalyticsConfiguration, @@ -4379,7 +4529,7 @@ func (c *S3) PutBucketAnalyticsConfigurationRequest(input *PutBucketAnalyticsCon // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketAnalyticsConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfiguration func (c *S3) PutBucketAnalyticsConfiguration(input *PutBucketAnalyticsConfigurationInput) (*PutBucketAnalyticsConfigurationOutput, error) { req, out := c.PutBucketAnalyticsConfigurationRequest(input) return out, req.Send() @@ -4426,7 +4576,7 @@ const opPutBucketCors = "PutBucketCors" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCors +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCors func (c *S3) PutBucketCorsRequest(input *PutBucketCorsInput) (req *request.Request, output *PutBucketCorsOutput) { op := &request.Operation{ Name: opPutBucketCors, @@ -4455,7 +4605,7 @@ func (c *S3) PutBucketCorsRequest(input *PutBucketCorsInput) (req *request.Reque // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketCors for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCors +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCors func (c *S3) PutBucketCors(input *PutBucketCorsInput) (*PutBucketCorsOutput, error) { req, out := c.PutBucketCorsRequest(input) return out, req.Send() @@ -4477,6 +4627,83 @@ func (c *S3) PutBucketCorsWithContext(ctx aws.Context, input *PutBucketCorsInput return out, req.Send() } +const opPutBucketEncryption = "PutBucketEncryption" + +// PutBucketEncryptionRequest generates a "aws/request.Request" representing the +// client's request for the PutBucketEncryption operation. The "output" return +// value will be populated with the request's response once the request complets +// successfuly. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See PutBucketEncryption for more information on using the PutBucketEncryption +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the PutBucketEncryptionRequest method. 
+// req, resp := client.PutBucketEncryptionRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketEncryption +func (c *S3) PutBucketEncryptionRequest(input *PutBucketEncryptionInput) (req *request.Request, output *PutBucketEncryptionOutput) { + op := &request.Operation{ + Name: opPutBucketEncryption, + HTTPMethod: "PUT", + HTTPPath: "/{Bucket}?encryption", + } + + if input == nil { + input = &PutBucketEncryptionInput{} + } + + output = &PutBucketEncryptionOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(restxml.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutBucketEncryption API operation for Amazon Simple Storage Service. +// +// Creates a new server-side encryption configuration (or replaces an existing +// one, if present). +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Simple Storage Service's +// API operation PutBucketEncryption for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketEncryption +func (c *S3) PutBucketEncryption(input *PutBucketEncryptionInput) (*PutBucketEncryptionOutput, error) { + req, out := c.PutBucketEncryptionRequest(input) + return out, req.Send() +} + +// PutBucketEncryptionWithContext is the same as PutBucketEncryption with the addition of +// the ability to pass a context and additional request options. +// +// See PutBucketEncryption for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *S3) PutBucketEncryptionWithContext(ctx aws.Context, input *PutBucketEncryptionInput, opts ...request.Option) (*PutBucketEncryptionOutput, error) { + req, out := c.PutBucketEncryptionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opPutBucketInventoryConfiguration = "PutBucketInventoryConfiguration" // PutBucketInventoryConfigurationRequest generates a "aws/request.Request" representing the @@ -4502,7 +4729,7 @@ const opPutBucketInventoryConfiguration = "PutBucketInventoryConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfiguration func (c *S3) PutBucketInventoryConfigurationRequest(input *PutBucketInventoryConfigurationInput) (req *request.Request, output *PutBucketInventoryConfigurationOutput) { op := &request.Operation{ Name: opPutBucketInventoryConfiguration, @@ -4532,7 +4759,7 @@ func (c *S3) PutBucketInventoryConfigurationRequest(input *PutBucketInventoryCon // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketInventoryConfiguration for usage and error information. 
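// A minimal, hypothetical usage sketch for the newly vendored PutBucketEncryption
// operation shown above (illustration only, not part of the vendored diff). It assumes
// the PutBucketEncryptionInput and ServerSideEncryptionConfiguration/Rule/ByDefault
// types defined elsewhere in this file; the bucket name, region, and AES256 default
// below are placeholders.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Apply a default SSE-S3 (AES256) encryption rule to the bucket.
	_, err := svc.PutBucketEncryption(&s3.PutBucketEncryptionInput{
		Bucket: aws.String("example-bucket"), // placeholder bucket name
		ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
			Rules: []*s3.ServerSideEncryptionRule{{
				ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
					SSEAlgorithm: aws.String("AES256"),
				},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}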
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfiguration func (c *S3) PutBucketInventoryConfiguration(input *PutBucketInventoryConfigurationInput) (*PutBucketInventoryConfigurationOutput, error) { req, out := c.PutBucketInventoryConfigurationRequest(input) return out, req.Send() @@ -4579,7 +4806,7 @@ const opPutBucketLifecycle = "PutBucketLifecycle" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle func (c *S3) PutBucketLifecycleRequest(input *PutBucketLifecycleInput) (req *request.Request, output *PutBucketLifecycleOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, PutBucketLifecycle, has been deprecated") @@ -4611,7 +4838,7 @@ func (c *S3) PutBucketLifecycleRequest(input *PutBucketLifecycleInput) (req *req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketLifecycle for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycle func (c *S3) PutBucketLifecycle(input *PutBucketLifecycleInput) (*PutBucketLifecycleOutput, error) { req, out := c.PutBucketLifecycleRequest(input) return out, req.Send() @@ -4658,7 +4885,7 @@ const opPutBucketLifecycleConfiguration = "PutBucketLifecycleConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfiguration func (c *S3) PutBucketLifecycleConfigurationRequest(input *PutBucketLifecycleConfigurationInput) (req *request.Request, output *PutBucketLifecycleConfigurationOutput) { op := &request.Operation{ Name: opPutBucketLifecycleConfiguration, @@ -4688,7 +4915,7 @@ func (c *S3) PutBucketLifecycleConfigurationRequest(input *PutBucketLifecycleCon // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketLifecycleConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfiguration func (c *S3) PutBucketLifecycleConfiguration(input *PutBucketLifecycleConfigurationInput) (*PutBucketLifecycleConfigurationOutput, error) { req, out := c.PutBucketLifecycleConfigurationRequest(input) return out, req.Send() @@ -4735,7 +4962,7 @@ const opPutBucketLogging = "PutBucketLogging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLogging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLogging func (c *S3) PutBucketLoggingRequest(input *PutBucketLoggingInput) (req *request.Request, output *PutBucketLoggingOutput) { op := &request.Operation{ Name: opPutBucketLogging, @@ -4766,7 +4993,7 @@ func (c *S3) PutBucketLoggingRequest(input *PutBucketLoggingInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketLogging for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLogging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLogging func (c *S3) PutBucketLogging(input *PutBucketLoggingInput) (*PutBucketLoggingOutput, error) { req, out := c.PutBucketLoggingRequest(input) return out, req.Send() @@ -4813,7 +5040,7 @@ const opPutBucketMetricsConfiguration = "PutBucketMetricsConfiguration" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfiguration func (c *S3) PutBucketMetricsConfigurationRequest(input *PutBucketMetricsConfigurationInput) (req *request.Request, output *PutBucketMetricsConfigurationOutput) { op := &request.Operation{ Name: opPutBucketMetricsConfiguration, @@ -4843,7 +5070,7 @@ func (c *S3) PutBucketMetricsConfigurationRequest(input *PutBucketMetricsConfigu // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketMetricsConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfiguration func (c *S3) PutBucketMetricsConfiguration(input *PutBucketMetricsConfigurationInput) (*PutBucketMetricsConfigurationOutput, error) { req, out := c.PutBucketMetricsConfigurationRequest(input) return out, req.Send() @@ -4890,7 +5117,7 @@ const opPutBucketNotification = "PutBucketNotification" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification func (c *S3) PutBucketNotificationRequest(input *PutBucketNotificationInput) (req *request.Request, output *PutBucketNotificationOutput) { if c.Client.Config.Logger != nil { c.Client.Config.Logger.Log("This operation, PutBucketNotification, has been deprecated") @@ -4922,7 +5149,7 @@ func (c *S3) PutBucketNotificationRequest(input *PutBucketNotificationInput) (re // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketNotification for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotification func (c *S3) PutBucketNotification(input *PutBucketNotificationInput) (*PutBucketNotificationOutput, error) { req, out := c.PutBucketNotificationRequest(input) return out, req.Send() @@ -4969,7 +5196,7 @@ const opPutBucketNotificationConfiguration = "PutBucketNotificationConfiguration // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfiguration func (c *S3) PutBucketNotificationConfigurationRequest(input *PutBucketNotificationConfigurationInput) (req *request.Request, output *PutBucketNotificationConfigurationOutput) { op := &request.Operation{ Name: opPutBucketNotificationConfiguration, @@ -4998,7 +5225,7 @@ func (c *S3) PutBucketNotificationConfigurationRequest(input *PutBucketNotificat // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketNotificationConfiguration for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfiguration +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfiguration func (c *S3) PutBucketNotificationConfiguration(input *PutBucketNotificationConfigurationInput) (*PutBucketNotificationConfigurationOutput, error) { req, out := c.PutBucketNotificationConfigurationRequest(input) return out, req.Send() @@ -5045,7 +5272,7 @@ const opPutBucketPolicy = "PutBucketPolicy" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicy func (c *S3) PutBucketPolicyRequest(input *PutBucketPolicyInput) (req *request.Request, output *PutBucketPolicyOutput) { op := &request.Operation{ Name: opPutBucketPolicy, @@ -5075,7 +5302,7 @@ func (c *S3) PutBucketPolicyRequest(input *PutBucketPolicyInput) (req *request.R // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketPolicy for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicy func (c *S3) PutBucketPolicy(input *PutBucketPolicyInput) (*PutBucketPolicyOutput, error) { req, out := c.PutBucketPolicyRequest(input) return out, req.Send() @@ -5122,7 +5349,7 @@ const opPutBucketReplication = "PutBucketReplication" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplication +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplication func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req *request.Request, output *PutBucketReplicationOutput) { op := &request.Operation{ Name: opPutBucketReplication, @@ -5152,7 +5379,7 @@ func (c *S3) PutBucketReplicationRequest(input *PutBucketReplicationInput) (req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketReplication for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplication +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplication func (c *S3) PutBucketReplication(input *PutBucketReplicationInput) (*PutBucketReplicationOutput, error) { req, out := c.PutBucketReplicationRequest(input) return out, req.Send() @@ -5199,7 +5426,7 @@ const opPutBucketRequestPayment = "PutBucketRequestPayment" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPayment +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPayment func (c *S3) PutBucketRequestPaymentRequest(input *PutBucketRequestPaymentInput) (req *request.Request, output *PutBucketRequestPaymentOutput) { op := &request.Operation{ Name: opPutBucketRequestPayment, @@ -5232,7 +5459,7 @@ func (c *S3) PutBucketRequestPaymentRequest(input *PutBucketRequestPaymentInput) // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketRequestPayment for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPayment +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPayment func (c *S3) PutBucketRequestPayment(input *PutBucketRequestPaymentInput) (*PutBucketRequestPaymentOutput, error) { req, out := c.PutBucketRequestPaymentRequest(input) return out, req.Send() @@ -5279,7 +5506,7 @@ const opPutBucketTagging = "PutBucketTagging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTagging func (c *S3) PutBucketTaggingRequest(input *PutBucketTaggingInput) (req *request.Request, output *PutBucketTaggingOutput) { op := &request.Operation{ Name: opPutBucketTagging, @@ -5308,7 +5535,7 @@ func (c *S3) PutBucketTaggingRequest(input *PutBucketTaggingInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketTagging for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTagging func (c *S3) PutBucketTagging(input *PutBucketTaggingInput) (*PutBucketTaggingOutput, error) { req, out := c.PutBucketTaggingRequest(input) return out, req.Send() @@ -5355,7 +5582,7 @@ const opPutBucketVersioning = "PutBucketVersioning" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioning +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioning func (c *S3) PutBucketVersioningRequest(input *PutBucketVersioningInput) (req *request.Request, output *PutBucketVersioningOutput) { op := &request.Operation{ Name: opPutBucketVersioning, @@ -5385,7 +5612,7 @@ func (c *S3) PutBucketVersioningRequest(input *PutBucketVersioningInput) (req *r // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketVersioning for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioning +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioning func (c *S3) PutBucketVersioning(input *PutBucketVersioningInput) (*PutBucketVersioningOutput, error) { req, out := c.PutBucketVersioningRequest(input) return out, req.Send() @@ -5432,7 +5659,7 @@ const opPutBucketWebsite = "PutBucketWebsite" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsite +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsite func (c *S3) PutBucketWebsiteRequest(input *PutBucketWebsiteInput) (req *request.Request, output *PutBucketWebsiteOutput) { op := &request.Operation{ Name: opPutBucketWebsite, @@ -5461,7 +5688,7 @@ func (c *S3) PutBucketWebsiteRequest(input *PutBucketWebsiteInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutBucketWebsite for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsite +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsite func (c *S3) PutBucketWebsite(input *PutBucketWebsiteInput) (*PutBucketWebsiteOutput, error) { req, out := c.PutBucketWebsiteRequest(input) return out, req.Send() @@ -5508,7 +5735,7 @@ const opPutObject = "PutObject" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObject func (c *S3) PutObjectRequest(input *PutObjectInput) (req *request.Request, output *PutObjectOutput) { op := &request.Operation{ Name: opPutObject, @@ -5535,7 +5762,7 @@ func (c *S3) PutObjectRequest(input *PutObjectInput) (req *request.Request, outp // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutObject for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObject func (c *S3) PutObject(input *PutObjectInput) (*PutObjectOutput, error) { req, out := c.PutObjectRequest(input) return out, req.Send() @@ -5582,7 +5809,7 @@ const opPutObjectAcl = "PutObjectAcl" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAcl func (c *S3) PutObjectAclRequest(input *PutObjectAclInput) (req *request.Request, output *PutObjectAclOutput) { op := &request.Operation{ Name: opPutObjectAcl, @@ -5615,7 +5842,7 @@ func (c *S3) PutObjectAclRequest(input *PutObjectAclInput) (req *request.Request // * ErrCodeNoSuchKey "NoSuchKey" // The specified key does not exist. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAcl +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAcl func (c *S3) PutObjectAcl(input *PutObjectAclInput) (*PutObjectAclOutput, error) { req, out := c.PutObjectAclRequest(input) return out, req.Send() @@ -5662,7 +5889,7 @@ const opPutObjectTagging = "PutObjectTagging" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTagging func (c *S3) PutObjectTaggingRequest(input *PutObjectTaggingInput) (req *request.Request, output *PutObjectTaggingOutput) { op := &request.Operation{ Name: opPutObjectTagging, @@ -5689,7 +5916,7 @@ func (c *S3) PutObjectTaggingRequest(input *PutObjectTaggingInput) (req *request // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation PutObjectTagging for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTagging +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTagging func (c *S3) PutObjectTagging(input *PutObjectTaggingInput) (*PutObjectTaggingOutput, error) { req, out := c.PutObjectTaggingRequest(input) return out, req.Send() @@ -5736,7 +5963,7 @@ const opRestoreObject = "RestoreObject" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObject func (c *S3) RestoreObjectRequest(input *RestoreObjectInput) (req *request.Request, output *RestoreObjectOutput) { op := &request.Operation{ Name: opRestoreObject, @@ -5768,7 +5995,7 @@ func (c *S3) RestoreObjectRequest(input *RestoreObjectInput) (req *request.Reque // * ErrCodeObjectAlreadyInActiveTierError "ObjectAlreadyInActiveTierError" // This operation is not allowed against this storage tier // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObject +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObject func (c *S3) RestoreObject(input *RestoreObjectInput) (*RestoreObjectOutput, error) { req, out := c.RestoreObjectRequest(input) return out, req.Send() @@ -5815,7 +6042,7 @@ const opUploadPart = "UploadPart" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart func (c *S3) UploadPartRequest(input *UploadPartInput) (req *request.Request, output *UploadPartOutput) { op := &request.Operation{ Name: opUploadPart, @@ -5848,7 +6075,7 @@ func (c *S3) UploadPartRequest(input *UploadPartInput) (req *request.Request, ou // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation UploadPart for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPart func (c *S3) UploadPart(input *UploadPartInput) (*UploadPartOutput, error) { req, out := c.UploadPartRequest(input) return out, req.Send() @@ -5895,7 +6122,7 @@ const opUploadPartCopy = "UploadPartCopy" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopy func (c *S3) UploadPartCopyRequest(input *UploadPartCopyInput) (req *request.Request, output *UploadPartCopyOutput) { op := &request.Operation{ Name: opUploadPartCopy, @@ -5922,7 +6149,7 @@ func (c *S3) UploadPartCopyRequest(input *UploadPartCopyInput) (req *request.Req // // See the AWS API reference guide for Amazon Simple Storage Service's // API operation UploadPartCopy for usage and error information. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopy +// See also, https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopy func (c *S3) UploadPartCopy(input *UploadPartCopyInput) (*UploadPartCopyOutput, error) { req, out := c.UploadPartCopyRequest(input) return out, req.Send() @@ -5946,7 +6173,6 @@ func (c *S3) UploadPartCopyWithContext(ctx aws.Context, input *UploadPartCopyInp // Specifies the days since the initiation of an Incomplete Multipart Upload // that Lifecycle will wait before permanently removing all parts of the upload. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortIncompleteMultipartUpload type AbortIncompleteMultipartUpload struct { _ struct{} `type:"structure"` @@ -5971,7 +6197,6 @@ func (s *AbortIncompleteMultipartUpload) SetDaysAfterInitiation(v int64) *AbortI return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUploadRequest type AbortMultipartUploadInput struct { _ struct{} `type:"structure"` @@ -6054,7 +6279,6 @@ func (s *AbortMultipartUploadInput) SetUploadId(v string) *AbortMultipartUploadI return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AbortMultipartUploadOutput type AbortMultipartUploadOutput struct { _ struct{} `type:"structure"` @@ -6079,7 +6303,6 @@ func (s *AbortMultipartUploadOutput) SetRequestCharged(v string) *AbortMultipart return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AccelerateConfiguration type AccelerateConfiguration struct { _ struct{} `type:"structure"` @@ -6103,7 +6326,6 @@ func (s *AccelerateConfiguration) SetStatus(v string) *AccelerateConfiguration { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AccessControlPolicy type AccessControlPolicy struct { _ struct{} `type:"structure"` @@ -6155,7 +6377,45 @@ func (s *AccessControlPolicy) SetOwner(v *Owner) *AccessControlPolicy { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AnalyticsAndOperator +// Container for information regarding the access control for replicas. +type AccessControlTranslation struct { + _ struct{} `type:"structure"` + + // The override value for the owner of the replica object. 
+ // + // Owner is a required field + Owner *string `type:"string" required:"true" enum:"OwnerOverride"` +} + +// String returns the string representation +func (s AccessControlTranslation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AccessControlTranslation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AccessControlTranslation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AccessControlTranslation"} + if s.Owner == nil { + invalidParams.Add(request.NewErrParamRequired("Owner")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetOwner sets the Owner field's value. +func (s *AccessControlTranslation) SetOwner(v string) *AccessControlTranslation { + s.Owner = &v + return s +} + type AnalyticsAndOperator struct { _ struct{} `type:"structure"` @@ -6208,7 +6468,6 @@ func (s *AnalyticsAndOperator) SetTags(v []*Tag) *AnalyticsAndOperator { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AnalyticsConfiguration type AnalyticsConfiguration struct { _ struct{} `type:"structure"` @@ -6283,7 +6542,6 @@ func (s *AnalyticsConfiguration) SetStorageClassAnalysis(v *StorageClassAnalysis return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AnalyticsExportDestination type AnalyticsExportDestination struct { _ struct{} `type:"structure"` @@ -6327,7 +6585,6 @@ func (s *AnalyticsExportDestination) SetS3BucketDestination(v *AnalyticsS3Bucket return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AnalyticsFilter type AnalyticsFilter struct { _ struct{} `type:"structure"` @@ -6390,7 +6647,6 @@ func (s *AnalyticsFilter) SetTag(v *Tag) *AnalyticsFilter { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/AnalyticsS3BucketDestination type AnalyticsS3BucketDestination struct { _ struct{} `type:"structure"` @@ -6470,7 +6726,6 @@ func (s *AnalyticsS3BucketDestination) SetPrefix(v string) *AnalyticsS3BucketDes return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Bucket type Bucket struct { _ struct{} `type:"structure"` @@ -6503,7 +6758,6 @@ func (s *Bucket) SetName(v string) *Bucket { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/BucketLifecycleConfiguration type BucketLifecycleConfiguration struct { _ struct{} `type:"structure"` @@ -6550,7 +6804,6 @@ func (s *BucketLifecycleConfiguration) SetRules(v []*LifecycleRule) *BucketLifec return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/BucketLoggingStatus type BucketLoggingStatus struct { _ struct{} `type:"structure"` @@ -6588,7 +6841,6 @@ func (s *BucketLoggingStatus) SetLoggingEnabled(v *LoggingEnabled) *BucketLoggin return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CORSConfiguration type CORSConfiguration struct { _ struct{} `type:"structure"` @@ -6635,7 +6887,6 @@ func (s *CORSConfiguration) SetCORSRules(v []*CORSRule) *CORSConfiguration { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CORSRule type CORSRule struct { _ struct{} `type:"structure"` @@ -6719,7 +6970,138 @@ func (s *CORSRule) SetMaxAgeSeconds(v int64) *CORSRule { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CloudFunctionConfiguration +// Describes how a 
CSV-formatted input object is formatted. +type CSVInput struct { + _ struct{} `type:"structure"` + + // Single character used to indicate a row should be ignored when present at + // the start of a row. + Comments *string `type:"string"` + + // Value used to separate individual fields in a record. + FieldDelimiter *string `type:"string"` + + // Describes the first line of input. Valid values: None, Ignore, Use. + FileHeaderInfo *string `type:"string" enum:"FileHeaderInfo"` + + // Value used for escaping where the field delimiter is part of the value. + QuoteCharacter *string `type:"string"` + + // Single character used for escaping the quote character inside an already + // escaped value. + QuoteEscapeCharacter *string `type:"string"` + + // Value used to separate individual records. + RecordDelimiter *string `type:"string"` +} + +// String returns the string representation +func (s CSVInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CSVInput) GoString() string { + return s.String() +} + +// SetComments sets the Comments field's value. +func (s *CSVInput) SetComments(v string) *CSVInput { + s.Comments = &v + return s +} + +// SetFieldDelimiter sets the FieldDelimiter field's value. +func (s *CSVInput) SetFieldDelimiter(v string) *CSVInput { + s.FieldDelimiter = &v + return s +} + +// SetFileHeaderInfo sets the FileHeaderInfo field's value. +func (s *CSVInput) SetFileHeaderInfo(v string) *CSVInput { + s.FileHeaderInfo = &v + return s +} + +// SetQuoteCharacter sets the QuoteCharacter field's value. +func (s *CSVInput) SetQuoteCharacter(v string) *CSVInput { + s.QuoteCharacter = &v + return s +} + +// SetQuoteEscapeCharacter sets the QuoteEscapeCharacter field's value. +func (s *CSVInput) SetQuoteEscapeCharacter(v string) *CSVInput { + s.QuoteEscapeCharacter = &v + return s +} + +// SetRecordDelimiter sets the RecordDelimiter field's value. +func (s *CSVInput) SetRecordDelimiter(v string) *CSVInput { + s.RecordDelimiter = &v + return s +} + +// Describes how CSV-formatted results are formatted. +type CSVOutput struct { + _ struct{} `type:"structure"` + + // Value used to separate individual fields in a record. + FieldDelimiter *string `type:"string"` + + // Value used for escaping where the field delimiter is part of the value. + QuoteCharacter *string `type:"string"` + + // Single character used for escaping the quote character inside an already + // escaped value. + QuoteEscapeCharacter *string `type:"string"` + + // Indicates whether or not all output fields should be quoted. + QuoteFields *string `type:"string" enum:"QuoteFields"` + + // Value used to separate individual records. + RecordDelimiter *string `type:"string"` +} + +// String returns the string representation +func (s CSVOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CSVOutput) GoString() string { + return s.String() +} + +// SetFieldDelimiter sets the FieldDelimiter field's value. +func (s *CSVOutput) SetFieldDelimiter(v string) *CSVOutput { + s.FieldDelimiter = &v + return s +} + +// SetQuoteCharacter sets the QuoteCharacter field's value. +func (s *CSVOutput) SetQuoteCharacter(v string) *CSVOutput { + s.QuoteCharacter = &v + return s +} + +// SetQuoteEscapeCharacter sets the QuoteEscapeCharacter field's value. +func (s *CSVOutput) SetQuoteEscapeCharacter(v string) *CSVOutput { + s.QuoteEscapeCharacter = &v + return s +} + +// SetQuoteFields sets the QuoteFields field's value. 
+func (s *CSVOutput) SetQuoteFields(v string) *CSVOutput { + s.QuoteFields = &v + return s +} + +// SetRecordDelimiter sets the RecordDelimiter field's value. +func (s *CSVOutput) SetRecordDelimiter(v string) *CSVOutput { + s.RecordDelimiter = &v + return s +} + type CloudFunctionConfiguration struct { _ struct{} `type:"structure"` @@ -6777,7 +7159,6 @@ func (s *CloudFunctionConfiguration) SetInvocationRole(v string) *CloudFunctionC return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CommonPrefix type CommonPrefix struct { _ struct{} `type:"structure"` @@ -6800,7 +7181,6 @@ func (s *CommonPrefix) SetPrefix(v string) *CommonPrefix { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUploadRequest type CompleteMultipartUploadInput struct { _ struct{} `type:"structure" payload:"MultipartUpload"` @@ -6891,7 +7271,6 @@ func (s *CompleteMultipartUploadInput) SetUploadId(v string) *CompleteMultipartU return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompleteMultipartUploadOutput type CompleteMultipartUploadOutput struct { _ struct{} `type:"structure"` @@ -6995,7 +7374,6 @@ func (s *CompleteMultipartUploadOutput) SetVersionId(v string) *CompleteMultipar return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompletedMultipartUpload type CompletedMultipartUpload struct { _ struct{} `type:"structure"` @@ -7018,7 +7396,6 @@ func (s *CompletedMultipartUpload) SetParts(v []*CompletedPart) *CompletedMultip return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CompletedPart type CompletedPart struct { _ struct{} `type:"structure"` @@ -7052,7 +7429,6 @@ func (s *CompletedPart) SetPartNumber(v int64) *CompletedPart { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Condition type Condition struct { _ struct{} `type:"structure"` @@ -7095,7 +7471,6 @@ func (s *Condition) SetKeyPrefixEquals(v string) *Condition { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObjectRequest type CopyObjectInput struct { _ struct{} `type:"structure"` @@ -7479,7 +7854,6 @@ func (s *CopyObjectInput) SetWebsiteRedirectLocation(v string) *CopyObjectInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObjectOutput type CopyObjectOutput struct { _ struct{} `type:"structure" payload:"CopyObjectResult"` @@ -7580,7 +7954,6 @@ func (s *CopyObjectOutput) SetVersionId(v string) *CopyObjectOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyObjectResult type CopyObjectResult struct { _ struct{} `type:"structure"` @@ -7611,7 +7984,6 @@ func (s *CopyObjectResult) SetLastModified(v time.Time) *CopyObjectResult { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CopyPartResult type CopyPartResult struct { _ struct{} `type:"structure"` @@ -7644,7 +8016,6 @@ func (s *CopyPartResult) SetLastModified(v time.Time) *CopyPartResult { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucketConfiguration type CreateBucketConfiguration struct { _ struct{} `type:"structure"` @@ -7669,7 +8040,6 @@ func (s *CreateBucketConfiguration) SetLocationConstraint(v string) *CreateBucke return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucketRequest type CreateBucketInput struct { _ struct{} `type:"structure" 
payload:"CreateBucketConfiguration"` @@ -7776,7 +8146,6 @@ func (s *CreateBucketInput) SetGrantWriteACP(v string) *CreateBucketInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateBucketOutput type CreateBucketOutput struct { _ struct{} `type:"structure"` @@ -7799,7 +8168,6 @@ func (s *CreateBucketOutput) SetLocation(v string) *CreateBucketOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUploadRequest type CreateMultipartUploadInput struct { _ struct{} `type:"structure"` @@ -8071,7 +8439,6 @@ func (s *CreateMultipartUploadInput) SetWebsiteRedirectLocation(v string) *Creat return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/CreateMultipartUploadOutput type CreateMultipartUploadOutput struct { _ struct{} `type:"structure"` @@ -8191,7 +8558,6 @@ func (s *CreateMultipartUploadOutput) SetUploadId(v string) *CreateMultipartUplo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Delete type Delete struct { _ struct{} `type:"structure"` @@ -8248,7 +8614,6 @@ func (s *Delete) SetQuiet(v bool) *Delete { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfigurationRequest type DeleteBucketAnalyticsConfigurationInput struct { _ struct{} `type:"structure"` @@ -8308,7 +8673,6 @@ func (s *DeleteBucketAnalyticsConfigurationInput) SetId(v string) *DeleteBucketA return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketAnalyticsConfigurationOutput type DeleteBucketAnalyticsConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -8323,7 +8687,6 @@ func (s DeleteBucketAnalyticsConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCorsRequest type DeleteBucketCorsInput struct { _ struct{} `type:"structure"` @@ -8367,7 +8730,6 @@ func (s *DeleteBucketCorsInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketCorsOutput type DeleteBucketCorsOutput struct { _ struct{} `type:"structure"` } @@ -8382,7 +8744,66 @@ func (s DeleteBucketCorsOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketRequest +type DeleteBucketEncryptionInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket containing the server-side encryption configuration + // to delete. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteBucketEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteBucketEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteBucketEncryptionInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. 
+func (s *DeleteBucketEncryptionInput) SetBucket(v string) *DeleteBucketEncryptionInput { + s.Bucket = &v + return s +} + +func (s *DeleteBucketEncryptionInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type DeleteBucketEncryptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteBucketEncryptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteBucketEncryptionOutput) GoString() string { + return s.String() +} + type DeleteBucketInput struct { _ struct{} `type:"structure"` @@ -8426,7 +8847,6 @@ func (s *DeleteBucketInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfigurationRequest type DeleteBucketInventoryConfigurationInput struct { _ struct{} `type:"structure"` @@ -8486,7 +8906,6 @@ func (s *DeleteBucketInventoryConfigurationInput) SetId(v string) *DeleteBucketI return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketInventoryConfigurationOutput type DeleteBucketInventoryConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -8501,7 +8920,6 @@ func (s DeleteBucketInventoryConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycleRequest type DeleteBucketLifecycleInput struct { _ struct{} `type:"structure"` @@ -8545,7 +8963,6 @@ func (s *DeleteBucketLifecycleInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketLifecycleOutput type DeleteBucketLifecycleOutput struct { _ struct{} `type:"structure"` } @@ -8560,7 +8977,6 @@ func (s DeleteBucketLifecycleOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfigurationRequest type DeleteBucketMetricsConfigurationInput struct { _ struct{} `type:"structure"` @@ -8620,7 +9036,6 @@ func (s *DeleteBucketMetricsConfigurationInput) SetId(v string) *DeleteBucketMet return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketMetricsConfigurationOutput type DeleteBucketMetricsConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -8635,7 +9050,6 @@ func (s DeleteBucketMetricsConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketOutput type DeleteBucketOutput struct { _ struct{} `type:"structure"` } @@ -8650,7 +9064,6 @@ func (s DeleteBucketOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicyRequest type DeleteBucketPolicyInput struct { _ struct{} `type:"structure"` @@ -8694,7 +9107,6 @@ func (s *DeleteBucketPolicyInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketPolicyOutput type DeleteBucketPolicyOutput struct { _ struct{} `type:"structure"` } @@ -8709,7 +9121,6 @@ func (s DeleteBucketPolicyOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplicationRequest type DeleteBucketReplicationInput struct { _ struct{} `type:"structure"` @@ -8753,7 +9164,6 @@ func (s 
*DeleteBucketReplicationInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketReplicationOutput type DeleteBucketReplicationOutput struct { _ struct{} `type:"structure"` } @@ -8768,7 +9178,6 @@ func (s DeleteBucketReplicationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTaggingRequest type DeleteBucketTaggingInput struct { _ struct{} `type:"structure"` @@ -8812,7 +9221,6 @@ func (s *DeleteBucketTaggingInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketTaggingOutput type DeleteBucketTaggingOutput struct { _ struct{} `type:"structure"` } @@ -8827,7 +9235,6 @@ func (s DeleteBucketTaggingOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsiteRequest type DeleteBucketWebsiteInput struct { _ struct{} `type:"structure"` @@ -8871,7 +9278,6 @@ func (s *DeleteBucketWebsiteInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteBucketWebsiteOutput type DeleteBucketWebsiteOutput struct { _ struct{} `type:"structure"` } @@ -8886,7 +9292,6 @@ func (s DeleteBucketWebsiteOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteMarkerEntry type DeleteMarkerEntry struct { _ struct{} `type:"structure"` @@ -8946,7 +9351,6 @@ func (s *DeleteMarkerEntry) SetVersionId(v string) *DeleteMarkerEntry { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectRequest type DeleteObjectInput struct { _ struct{} `type:"structure"` @@ -9036,7 +9440,6 @@ func (s *DeleteObjectInput) SetVersionId(v string) *DeleteObjectInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectOutput type DeleteObjectOutput struct { _ struct{} `type:"structure"` @@ -9081,7 +9484,6 @@ func (s *DeleteObjectOutput) SetVersionId(v string) *DeleteObjectOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTaggingRequest type DeleteObjectTaggingInput struct { _ struct{} `type:"structure"` @@ -9149,7 +9551,6 @@ func (s *DeleteObjectTaggingInput) SetVersionId(v string) *DeleteObjectTaggingIn return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectTaggingOutput type DeleteObjectTaggingOutput struct { _ struct{} `type:"structure"` @@ -9173,7 +9574,6 @@ func (s *DeleteObjectTaggingOutput) SetVersionId(v string) *DeleteObjectTaggingO return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectsRequest type DeleteObjectsInput struct { _ struct{} `type:"structure" payload:"Delete"` @@ -9256,7 +9656,6 @@ func (s *DeleteObjectsInput) SetRequestPayer(v string) *DeleteObjectsInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeleteObjectsOutput type DeleteObjectsOutput struct { _ struct{} `type:"structure"` @@ -9297,7 +9696,6 @@ func (s *DeleteObjectsOutput) SetRequestCharged(v string) *DeleteObjectsOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/DeletedObject type DeletedObject struct { _ struct{} `type:"structure"` @@ -9344,16 +9742,26 @@ func (s *DeletedObject) SetVersionId(v 
string) *DeletedObject { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Destination +// Container for replication destination information. type Destination struct { _ struct{} `type:"structure"` + // Container for information regarding the access control for replicas. + AccessControlTranslation *AccessControlTranslation `type:"structure"` + + // Account ID of the destination bucket. Currently this is only being verified + // if Access Control Translation is enabled + Account *string `type:"string"` + // Amazon resource name (ARN) of the bucket where you want Amazon S3 to store // replicas of the object identified by the rule. // // Bucket is a required field Bucket *string `type:"string" required:"true"` + // Container for information regarding encryption based configuration for replicas. + EncryptionConfiguration *EncryptionConfiguration `type:"structure"` + // The class of storage used to store the object. StorageClass *string `type:"string" enum:"StorageClass"` } @@ -9374,6 +9782,11 @@ func (s *Destination) Validate() error { if s.Bucket == nil { invalidParams.Add(request.NewErrParamRequired("Bucket")) } + if s.AccessControlTranslation != nil { + if err := s.AccessControlTranslation.Validate(); err != nil { + invalidParams.AddNested("AccessControlTranslation", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -9381,6 +9794,18 @@ func (s *Destination) Validate() error { return nil } +// SetAccessControlTranslation sets the AccessControlTranslation field's value. +func (s *Destination) SetAccessControlTranslation(v *AccessControlTranslation) *Destination { + s.AccessControlTranslation = v + return s +} + +// SetAccount sets the Account field's value. +func (s *Destination) SetAccount(v string) *Destination { + s.Account = &v + return s +} + // SetBucket sets the Bucket field's value. func (s *Destination) SetBucket(v string) *Destination { s.Bucket = &v @@ -9394,13 +9819,103 @@ func (s *Destination) getBucket() (v string) { return *s.Bucket } +// SetEncryptionConfiguration sets the EncryptionConfiguration field's value. +func (s *Destination) SetEncryptionConfiguration(v *EncryptionConfiguration) *Destination { + s.EncryptionConfiguration = v + return s +} + // SetStorageClass sets the StorageClass field's value. func (s *Destination) SetStorageClass(v string) *Destination { s.StorageClass = &v return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Error +// Describes the server-side encryption that will be applied to the restore +// results. +type Encryption struct { + _ struct{} `type:"structure"` + + // The server-side encryption algorithm used when storing job results in Amazon + // S3 (e.g., AES256, aws:kms). + // + // EncryptionType is a required field + EncryptionType *string `type:"string" required:"true" enum:"ServerSideEncryption"` + + // If the encryption type is aws:kms, this optional value can be used to specify + // the encryption context for the restore results. + KMSContext *string `type:"string"` + + // If the encryption type is aws:kms, this optional value specifies the AWS + // KMS key ID to use for encryption of job results. 
+ KMSKeyId *string `type:"string"` +} + +// String returns the string representation +func (s Encryption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Encryption) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *Encryption) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Encryption"} + if s.EncryptionType == nil { + invalidParams.Add(request.NewErrParamRequired("EncryptionType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEncryptionType sets the EncryptionType field's value. +func (s *Encryption) SetEncryptionType(v string) *Encryption { + s.EncryptionType = &v + return s +} + +// SetKMSContext sets the KMSContext field's value. +func (s *Encryption) SetKMSContext(v string) *Encryption { + s.KMSContext = &v + return s +} + +// SetKMSKeyId sets the KMSKeyId field's value. +func (s *Encryption) SetKMSKeyId(v string) *Encryption { + s.KMSKeyId = &v + return s +} + +// Container for information regarding encryption based configuration for replicas. +type EncryptionConfiguration struct { + _ struct{} `type:"structure"` + + // The id of the KMS key used to encrypt the replica object. + ReplicaKmsKeyID *string `type:"string"` +} + +// String returns the string representation +func (s EncryptionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s EncryptionConfiguration) GoString() string { + return s.String() +} + +// SetReplicaKmsKeyID sets the ReplicaKmsKeyID field's value. +func (s *EncryptionConfiguration) SetReplicaKmsKeyID(v string) *EncryptionConfiguration { + s.ReplicaKmsKeyID = &v + return s +} + type Error struct { _ struct{} `type:"structure"` @@ -9447,7 +9962,6 @@ func (s *Error) SetVersionId(v string) *Error { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ErrorDocument type ErrorDocument struct { _ struct{} `type:"structure"` @@ -9490,7 +10004,6 @@ func (s *ErrorDocument) SetKey(v string) *ErrorDocument { } // Container for key value pair that defines the criteria for the filter rule. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/FilterRule type FilterRule struct { _ struct{} `type:"structure"` @@ -9525,7 +10038,6 @@ func (s *FilterRule) SetValue(v string) *FilterRule { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfigurationRequest type GetBucketAccelerateConfigurationInput struct { _ struct{} `type:"structure"` @@ -9571,7 +10083,6 @@ func (s *GetBucketAccelerateConfigurationInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAccelerateConfigurationOutput type GetBucketAccelerateConfigurationOutput struct { _ struct{} `type:"structure"` @@ -9595,7 +10106,6 @@ func (s *GetBucketAccelerateConfigurationOutput) SetStatus(v string) *GetBucketA return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAclRequest type GetBucketAclInput struct { _ struct{} `type:"structure"` @@ -9639,7 +10149,6 @@ func (s *GetBucketAclInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAclOutput type GetBucketAclOutput struct { _ struct{} `type:"structure"` @@ -9671,7 +10180,6 @@ func (s *GetBucketAclOutput) SetOwner(v *Owner) *GetBucketAclOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfigurationRequest type GetBucketAnalyticsConfigurationInput struct { _ struct{} `type:"structure"` @@ -9731,7 +10239,6 @@ func (s *GetBucketAnalyticsConfigurationInput) SetId(v string) *GetBucketAnalyti return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketAnalyticsConfigurationOutput type GetBucketAnalyticsConfigurationOutput struct { _ struct{} `type:"structure" payload:"AnalyticsConfiguration"` @@ -9755,7 +10262,6 @@ func (s *GetBucketAnalyticsConfigurationOutput) SetAnalyticsConfiguration(v *Ana return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCorsRequest type GetBucketCorsInput struct { _ struct{} `type:"structure"` @@ -9799,7 +10305,6 @@ func (s *GetBucketCorsInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketCorsOutput type GetBucketCorsOutput struct { _ struct{} `type:"structure"` @@ -9822,7 +10327,76 @@ func (s *GetBucketCorsOutput) SetCORSRules(v []*CORSRule) *GetBucketCorsOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfigurationRequest +type GetBucketEncryptionInput struct { + _ struct{} `type:"structure"` + + // The name of the bucket from which the server-side encryption configuration + // is retrieved. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` +} + +// String returns the string representation +func (s GetBucketEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *GetBucketEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetBucketEncryptionInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *GetBucketEncryptionInput) SetBucket(v string) *GetBucketEncryptionInput { + s.Bucket = &v + return s +} + +func (s *GetBucketEncryptionInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +type GetBucketEncryptionOutput struct { + _ struct{} `type:"structure" payload:"ServerSideEncryptionConfiguration"` + + // Container for server-side encryption configuration rules. Currently S3 supports + // one rule only. + ServerSideEncryptionConfiguration *ServerSideEncryptionConfiguration `type:"structure"` +} + +// String returns the string representation +func (s GetBucketEncryptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetBucketEncryptionOutput) GoString() string { + return s.String() +} + +// SetServerSideEncryptionConfiguration sets the ServerSideEncryptionConfiguration field's value. +func (s *GetBucketEncryptionOutput) SetServerSideEncryptionConfiguration(v *ServerSideEncryptionConfiguration) *GetBucketEncryptionOutput { + s.ServerSideEncryptionConfiguration = v + return s +} + type GetBucketInventoryConfigurationInput struct { _ struct{} `type:"structure"` @@ -9882,7 +10456,6 @@ func (s *GetBucketInventoryConfigurationInput) SetId(v string) *GetBucketInvento return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketInventoryConfigurationOutput type GetBucketInventoryConfigurationOutput struct { _ struct{} `type:"structure" payload:"InventoryConfiguration"` @@ -9906,7 +10479,6 @@ func (s *GetBucketInventoryConfigurationOutput) SetInventoryConfiguration(v *Inv return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfigurationRequest type GetBucketLifecycleConfigurationInput struct { _ struct{} `type:"structure"` @@ -9950,7 +10522,6 @@ func (s *GetBucketLifecycleConfigurationInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleConfigurationOutput type GetBucketLifecycleConfigurationOutput struct { _ struct{} `type:"structure"` @@ -9973,7 +10544,6 @@ func (s *GetBucketLifecycleConfigurationOutput) SetRules(v []*LifecycleRule) *Ge return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleRequest type GetBucketLifecycleInput struct { _ struct{} `type:"structure"` @@ -10017,7 +10587,6 @@ func (s *GetBucketLifecycleInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLifecycleOutput type GetBucketLifecycleOutput struct { _ struct{} `type:"structure"` @@ -10040,7 +10609,6 @@ func (s *GetBucketLifecycleOutput) SetRules(v []*Rule) *GetBucketLifecycleOutput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocationRequest type GetBucketLocationInput struct { _ struct{} `type:"structure"` @@ -10084,7 +10652,6 @@ func (s *GetBucketLocationInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLocationOutput type 
GetBucketLocationOutput struct { _ struct{} `type:"structure"` @@ -10107,7 +10674,6 @@ func (s *GetBucketLocationOutput) SetLocationConstraint(v string) *GetBucketLoca return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLoggingRequest type GetBucketLoggingInput struct { _ struct{} `type:"structure"` @@ -10151,7 +10717,6 @@ func (s *GetBucketLoggingInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketLoggingOutput type GetBucketLoggingOutput struct { _ struct{} `type:"structure"` @@ -10174,7 +10739,6 @@ func (s *GetBucketLoggingOutput) SetLoggingEnabled(v *LoggingEnabled) *GetBucket return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfigurationRequest type GetBucketMetricsConfigurationInput struct { _ struct{} `type:"structure"` @@ -10234,7 +10798,6 @@ func (s *GetBucketMetricsConfigurationInput) SetId(v string) *GetBucketMetricsCo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketMetricsConfigurationOutput type GetBucketMetricsConfigurationOutput struct { _ struct{} `type:"structure" payload:"MetricsConfiguration"` @@ -10258,7 +10821,6 @@ func (s *GetBucketMetricsConfigurationOutput) SetMetricsConfiguration(v *Metrics return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketNotificationConfigurationRequest type GetBucketNotificationConfigurationRequest struct { _ struct{} `type:"structure"` @@ -10304,7 +10866,6 @@ func (s *GetBucketNotificationConfigurationRequest) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicyRequest type GetBucketPolicyInput struct { _ struct{} `type:"structure"` @@ -10348,7 +10909,6 @@ func (s *GetBucketPolicyInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketPolicyOutput type GetBucketPolicyOutput struct { _ struct{} `type:"structure" payload:"Policy"` @@ -10372,7 +10932,6 @@ func (s *GetBucketPolicyOutput) SetPolicy(v string) *GetBucketPolicyOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplicationRequest type GetBucketReplicationInput struct { _ struct{} `type:"structure"` @@ -10416,7 +10975,6 @@ func (s *GetBucketReplicationInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketReplicationOutput type GetBucketReplicationOutput struct { _ struct{} `type:"structure" payload:"ReplicationConfiguration"` @@ -10441,7 +10999,6 @@ func (s *GetBucketReplicationOutput) SetReplicationConfiguration(v *ReplicationC return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPaymentRequest type GetBucketRequestPaymentInput struct { _ struct{} `type:"structure"` @@ -10485,7 +11042,6 @@ func (s *GetBucketRequestPaymentInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketRequestPaymentOutput type GetBucketRequestPaymentOutput struct { _ struct{} `type:"structure"` @@ -10509,7 +11065,6 @@ func (s *GetBucketRequestPaymentOutput) SetPayer(v string) *GetBucketRequestPaym return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTaggingRequest type GetBucketTaggingInput struct { _ 
struct{} `type:"structure"` @@ -10553,7 +11108,6 @@ func (s *GetBucketTaggingInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketTaggingOutput type GetBucketTaggingOutput struct { _ struct{} `type:"structure"` @@ -10577,7 +11131,6 @@ func (s *GetBucketTaggingOutput) SetTagSet(v []*Tag) *GetBucketTaggingOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioningRequest type GetBucketVersioningInput struct { _ struct{} `type:"structure"` @@ -10621,7 +11174,6 @@ func (s *GetBucketVersioningInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketVersioningOutput type GetBucketVersioningOutput struct { _ struct{} `type:"structure"` @@ -10656,7 +11208,6 @@ func (s *GetBucketVersioningOutput) SetStatus(v string) *GetBucketVersioningOutp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsiteRequest type GetBucketWebsiteInput struct { _ struct{} `type:"structure"` @@ -10700,7 +11251,6 @@ func (s *GetBucketWebsiteInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetBucketWebsiteOutput type GetBucketWebsiteOutput struct { _ struct{} `type:"structure"` @@ -10747,7 +11297,6 @@ func (s *GetBucketWebsiteOutput) SetRoutingRules(v []*RoutingRule) *GetBucketWeb return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAclRequest type GetObjectAclInput struct { _ struct{} `type:"structure"` @@ -10827,7 +11376,6 @@ func (s *GetObjectAclInput) SetVersionId(v string) *GetObjectAclInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectAclOutput type GetObjectAclOutput struct { _ struct{} `type:"structure"` @@ -10869,7 +11417,6 @@ func (s *GetObjectAclOutput) SetRequestCharged(v string) *GetObjectAclOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectRequest type GetObjectInput struct { _ struct{} `type:"structure"` @@ -11104,7 +11651,6 @@ func (s *GetObjectInput) SetVersionId(v string) *GetObjectInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectOutput type GetObjectOutput struct { _ struct{} `type:"structure" payload:"Body"` @@ -11388,7 +11934,6 @@ func (s *GetObjectOutput) SetWebsiteRedirectLocation(v string) *GetObjectOutput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTaggingRequest type GetObjectTaggingInput struct { _ struct{} `type:"structure"` @@ -11455,7 +12000,6 @@ func (s *GetObjectTaggingInput) SetVersionId(v string) *GetObjectTaggingInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTaggingOutput type GetObjectTaggingOutput struct { _ struct{} `type:"structure"` @@ -11487,7 +12031,6 @@ func (s *GetObjectTaggingOutput) SetVersionId(v string) *GetObjectTaggingOutput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrentRequest type GetObjectTorrentInput struct { _ struct{} `type:"structure"` @@ -11558,7 +12101,6 @@ func (s *GetObjectTorrentInput) SetRequestPayer(v string) *GetObjectTorrentInput return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GetObjectTorrentOutput type GetObjectTorrentOutput struct { _ 
struct{} `type:"structure" payload:"Body"` @@ -11591,7 +12133,6 @@ func (s *GetObjectTorrentOutput) SetRequestCharged(v string) *GetObjectTorrentOu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/GlacierJobParameters type GlacierJobParameters struct { _ struct{} `type:"structure"` @@ -11630,7 +12171,6 @@ func (s *GlacierJobParameters) SetTier(v string) *GlacierJobParameters { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Grant type Grant struct { _ struct{} `type:"structure"` @@ -11677,7 +12217,6 @@ func (s *Grant) SetPermission(v string) *Grant { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Grantee type Grantee struct { _ struct{} `type:"structure" xmlPrefix:"xsi" xmlURI:"http://www.w3.org/2001/XMLSchema-instance"` @@ -11752,7 +12291,6 @@ func (s *Grantee) SetURI(v string) *Grantee { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucketRequest type HeadBucketInput struct { _ struct{} `type:"structure"` @@ -11796,7 +12334,6 @@ func (s *HeadBucketInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadBucketOutput type HeadBucketOutput struct { _ struct{} `type:"structure"` } @@ -11811,7 +12348,6 @@ func (s HeadBucketOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObjectRequest type HeadObjectInput struct { _ struct{} `type:"structure"` @@ -11993,7 +12529,6 @@ func (s *HeadObjectInput) SetVersionId(v string) *HeadObjectInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/HeadObjectOutput type HeadObjectOutput struct { _ struct{} `type:"structure"` @@ -12250,7 +12785,6 @@ func (s *HeadObjectOutput) SetWebsiteRedirectLocation(v string) *HeadObjectOutpu return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/IndexDocument type IndexDocument struct { _ struct{} `type:"structure"` @@ -12292,7 +12826,6 @@ func (s *IndexDocument) SetSuffix(v string) *IndexDocument { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Initiator type Initiator struct { _ struct{} `type:"structure"` @@ -12326,7 +12859,30 @@ func (s *Initiator) SetID(v string) *Initiator { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/InventoryConfiguration +// Describes the serialization format of the object. +type InputSerialization struct { + _ struct{} `type:"structure"` + + // Describes the serialization of a CSV-encoded object. + CSV *CSVInput `type:"structure"` +} + +// String returns the string representation +func (s InputSerialization) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputSerialization) GoString() string { + return s.String() +} + +// SetCSV sets the CSV field's value. 
+func (s *InputSerialization) SetCSV(v *CSVInput) *InputSerialization { + s.CSV = v + return s +} + type InventoryConfiguration struct { _ struct{} `type:"structure"` @@ -12455,7 +13011,6 @@ func (s *InventoryConfiguration) SetSchedule(v *InventorySchedule) *InventoryCon return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/InventoryDestination type InventoryDestination struct { _ struct{} `type:"structure"` @@ -12500,7 +13055,55 @@ func (s *InventoryDestination) SetS3BucketDestination(v *InventoryS3BucketDestin return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/InventoryFilter +// Contains the type of server-side encryption used to encrypt the inventory +// results. +type InventoryEncryption struct { + _ struct{} `type:"structure"` + + // Specifies the use of SSE-KMS to encrypt delievered Inventory reports. + SSEKMS *SSEKMS `locationName:"SSE-KMS" type:"structure"` + + // Specifies the use of SSE-S3 to encrypt delievered Inventory reports. + SSES3 *SSES3 `locationName:"SSE-S3" type:"structure"` +} + +// String returns the string representation +func (s InventoryEncryption) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InventoryEncryption) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InventoryEncryption) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InventoryEncryption"} + if s.SSEKMS != nil { + if err := s.SSEKMS.Validate(); err != nil { + invalidParams.AddNested("SSEKMS", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSSEKMS sets the SSEKMS field's value. +func (s *InventoryEncryption) SetSSEKMS(v *SSEKMS) *InventoryEncryption { + s.SSEKMS = v + return s +} + +// SetSSES3 sets the SSES3 field's value. +func (s *InventoryEncryption) SetSSES3(v *SSES3) *InventoryEncryption { + s.SSES3 = v + return s +} + type InventoryFilter struct { _ struct{} `type:"structure"` @@ -12539,7 +13142,6 @@ func (s *InventoryFilter) SetPrefix(v string) *InventoryFilter { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/InventoryS3BucketDestination type InventoryS3BucketDestination struct { _ struct{} `type:"structure"` @@ -12552,6 +13154,10 @@ type InventoryS3BucketDestination struct { // Bucket is a required field Bucket *string `type:"string" required:"true"` + // Contains the type of server-side encryption used to encrypt the inventory + // results. + Encryption *InventoryEncryption `type:"structure"` + // Specifies the output format of the inventory results. // // Format is a required field @@ -12580,6 +13186,11 @@ func (s *InventoryS3BucketDestination) Validate() error { if s.Format == nil { invalidParams.Add(request.NewErrParamRequired("Format")) } + if s.Encryption != nil { + if err := s.Encryption.Validate(); err != nil { + invalidParams.AddNested("Encryption", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -12606,6 +13217,12 @@ func (s *InventoryS3BucketDestination) getBucket() (v string) { return *s.Bucket } +// SetEncryption sets the Encryption field's value. +func (s *InventoryS3BucketDestination) SetEncryption(v *InventoryEncryption) *InventoryS3BucketDestination { + s.Encryption = v + return s +} + // SetFormat sets the Format field's value. 
func (s *InventoryS3BucketDestination) SetFormat(v string) *InventoryS3BucketDestination { s.Format = &v @@ -12618,7 +13235,6 @@ func (s *InventoryS3BucketDestination) SetPrefix(v string) *InventoryS3BucketDes return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/InventorySchedule type InventorySchedule struct { _ struct{} `type:"structure"` @@ -12658,7 +13274,6 @@ func (s *InventorySchedule) SetFrequency(v string) *InventorySchedule { } // Container for object key name prefix and suffix filtering rules. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/S3KeyFilter type KeyFilter struct { _ struct{} `type:"structure"` @@ -12684,7 +13299,6 @@ func (s *KeyFilter) SetFilterRules(v []*FilterRule) *KeyFilter { } // Container for specifying the AWS Lambda notification configuration. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/LambdaFunctionConfiguration type LambdaFunctionConfiguration struct { _ struct{} `type:"structure"` @@ -12756,7 +13370,6 @@ func (s *LambdaFunctionConfiguration) SetLambdaFunctionArn(v string) *LambdaFunc return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/LifecycleConfiguration type LifecycleConfiguration struct { _ struct{} `type:"structure"` @@ -12803,7 +13416,6 @@ func (s *LifecycleConfiguration) SetRules(v []*Rule) *LifecycleConfiguration { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/LifecycleExpiration type LifecycleExpiration struct { _ struct{} `type:"structure"` @@ -12850,7 +13462,6 @@ func (s *LifecycleExpiration) SetExpiredObjectDeleteMarker(v bool) *LifecycleExp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/LifecycleRule type LifecycleRule struct { _ struct{} `type:"structure"` @@ -12974,7 +13585,6 @@ func (s *LifecycleRule) SetTransitions(v []*Transition) *LifecycleRule { // This is used in a Lifecycle Rule Filter to apply a logical AND to two or // more predicates. The Lifecycle Rule will apply to any object matching all // of the predicates configured inside the And operator. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/LifecycleRuleAndOperator type LifecycleRuleAndOperator struct { _ struct{} `type:"structure"` @@ -13029,7 +13639,6 @@ func (s *LifecycleRuleAndOperator) SetTags(v []*Tag) *LifecycleRuleAndOperator { // The Filter is used to identify objects that a Lifecycle Rule applies to. // A Filter must have exactly one of Prefix, Tag, or And specified. 
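
// Illustrative sketch, not part of the vendored SDK code above: the new
// Encryption field on InventoryS3BucketDestination, together with the
// InventoryEncryption/SSES3/SSEKMS types, lets a bucket inventory request that
// delivered reports be encrypted. The destination bucket ARN and prefix below
// are placeholders; a real configuration would be embedded in an
// InventoryConfiguration and sent with PutBucketInventoryConfiguration.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	dest := &s3.InventoryS3BucketDestination{
		Bucket: aws.String("arn:aws:s3:::example-inventory-bucket"), // placeholder destination bucket ARN
		Format: aws.String(s3.InventoryFormatCsv),                   // ORC is also accepted via InventoryFormatOrc
		Prefix: aws.String("inventory/"),
		// Ask for SSE-S3 encryption of the delivered reports; SSEKMS with a
		// KeyId could be used here instead.
		Encryption: &s3.InventoryEncryption{
			SSES3: &s3.SSES3{},
		},
	}

	// Validate catches missing required fields before any request is built.
	if err := dest.Validate(); err != nil {
		fmt.Println("invalid inventory destination:", err)
		return
	}
	fmt.Println("inventory destination OK:", dest.String())
}
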
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/LifecycleRuleFilter type LifecycleRuleFilter struct { _ struct{} `type:"structure"` @@ -13093,7 +13702,6 @@ func (s *LifecycleRuleFilter) SetTag(v *Tag) *LifecycleRuleFilter { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurationsRequest type ListBucketAnalyticsConfigurationsInput struct { _ struct{} `type:"structure"` @@ -13149,7 +13757,6 @@ func (s *ListBucketAnalyticsConfigurationsInput) SetContinuationToken(v string) return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketAnalyticsConfigurationsOutput type ListBucketAnalyticsConfigurationsOutput struct { _ struct{} `type:"structure"` @@ -13204,7 +13811,6 @@ func (s *ListBucketAnalyticsConfigurationsOutput) SetNextContinuationToken(v str return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurationsRequest type ListBucketInventoryConfigurationsInput struct { _ struct{} `type:"structure"` @@ -13262,7 +13868,6 @@ func (s *ListBucketInventoryConfigurationsInput) SetContinuationToken(v string) return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketInventoryConfigurationsOutput type ListBucketInventoryConfigurationsOutput struct { _ struct{} `type:"structure"` @@ -13317,7 +13922,6 @@ func (s *ListBucketInventoryConfigurationsOutput) SetNextContinuationToken(v str return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurationsRequest type ListBucketMetricsConfigurationsInput struct { _ struct{} `type:"structure"` @@ -13375,7 +13979,6 @@ func (s *ListBucketMetricsConfigurationsInput) SetContinuationToken(v string) *L return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketMetricsConfigurationsOutput type ListBucketMetricsConfigurationsOutput struct { _ struct{} `type:"structure"` @@ -13432,7 +14035,6 @@ func (s *ListBucketMetricsConfigurationsOutput) SetNextContinuationToken(v strin return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketsInput type ListBucketsInput struct { _ struct{} `type:"structure"` } @@ -13447,7 +14049,6 @@ func (s ListBucketsInput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListBucketsOutput type ListBucketsOutput struct { _ struct{} `type:"structure"` @@ -13478,7 +14079,6 @@ func (s *ListBucketsOutput) SetOwner(v *Owner) *ListBucketsOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploadsRequest type ListMultipartUploadsInput struct { _ struct{} `type:"structure"` @@ -13587,7 +14187,6 @@ func (s *ListMultipartUploadsInput) SetUploadIdMarker(v string) *ListMultipartUp return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListMultipartUploadsOutput type ListMultipartUploadsOutput struct { _ struct{} `type:"structure"` @@ -13721,7 +14320,6 @@ func (s *ListMultipartUploadsOutput) SetUploads(v []*MultipartUpload) *ListMulti return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersionsRequest type ListObjectVersionsInput struct { _ struct{} `type:"structure"` @@ -13825,7 +14423,6 @@ func (s *ListObjectVersionsInput) SetVersionIdMarker(v string) *ListObjectVersio return s } -// Please also see 
https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectVersionsOutput type ListObjectVersionsOutput struct { _ struct{} `type:"structure"` @@ -13953,7 +14550,6 @@ func (s *ListObjectVersionsOutput) SetVersions(v []*ObjectVersion) *ListObjectVe return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsRequest type ListObjectsInput struct { _ struct{} `type:"structure"` @@ -14059,7 +14655,6 @@ func (s *ListObjectsInput) SetRequestPayer(v string) *ListObjectsInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsOutput type ListObjectsOutput struct { _ struct{} `type:"structure"` @@ -14164,7 +14759,6 @@ func (s *ListObjectsOutput) SetPrefix(v string) *ListObjectsOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2Request type ListObjectsV2Input struct { _ struct{} `type:"structure"` @@ -14290,7 +14884,6 @@ func (s *ListObjectsV2Input) SetStartAfter(v string) *ListObjectsV2Input { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListObjectsV2Output type ListObjectsV2Output struct { _ struct{} `type:"structure"` @@ -14424,7 +15017,6 @@ func (s *ListObjectsV2Output) SetStartAfter(v string) *ListObjectsV2Output { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListPartsRequest type ListPartsInput struct { _ struct{} `type:"structure"` @@ -14528,7 +15120,6 @@ func (s *ListPartsInput) SetUploadId(v string) *ListPartsInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ListPartsOutput type ListPartsOutput struct { _ struct{} `type:"structure"` @@ -14678,7 +15269,134 @@ func (s *ListPartsOutput) SetUploadId(v string) *ListPartsOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/LoggingEnabled +// Describes an S3 location that will receive the results of the restore request. +type Location struct { + _ struct{} `type:"structure"` + + // A list of grants that control access to the staged results. + AccessControlList []*Grant `locationNameList:"Grant" type:"list"` + + // The name of the bucket where the restore results will be placed. + // + // BucketName is a required field + BucketName *string `type:"string" required:"true"` + + // The canned ACL to apply to the restore results. + CannedACL *string `type:"string" enum:"ObjectCannedACL"` + + // Describes the server-side encryption that will be applied to the restore + // results. + Encryption *Encryption `type:"structure"` + + // The prefix that is prepended to the restore results for this request. + // + // Prefix is a required field + Prefix *string `type:"string" required:"true"` + + // The class of storage used to store the restore results. + StorageClass *string `type:"string" enum:"StorageClass"` + + // The tag-set that is applied to the restore results. + Tagging *Tagging `type:"structure"` + + // A list of metadata to store with the restore results in S3. + UserMetadata []*MetadataEntry `locationNameList:"MetadataEntry" type:"list"` +} + +// String returns the string representation +func (s Location) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Location) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *Location) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "Location"} + if s.BucketName == nil { + invalidParams.Add(request.NewErrParamRequired("BucketName")) + } + if s.Prefix == nil { + invalidParams.Add(request.NewErrParamRequired("Prefix")) + } + if s.AccessControlList != nil { + for i, v := range s.AccessControlList { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AccessControlList", i), err.(request.ErrInvalidParams)) + } + } + } + if s.Encryption != nil { + if err := s.Encryption.Validate(); err != nil { + invalidParams.AddNested("Encryption", err.(request.ErrInvalidParams)) + } + } + if s.Tagging != nil { + if err := s.Tagging.Validate(); err != nil { + invalidParams.AddNested("Tagging", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessControlList sets the AccessControlList field's value. +func (s *Location) SetAccessControlList(v []*Grant) *Location { + s.AccessControlList = v + return s +} + +// SetBucketName sets the BucketName field's value. +func (s *Location) SetBucketName(v string) *Location { + s.BucketName = &v + return s +} + +// SetCannedACL sets the CannedACL field's value. +func (s *Location) SetCannedACL(v string) *Location { + s.CannedACL = &v + return s +} + +// SetEncryption sets the Encryption field's value. +func (s *Location) SetEncryption(v *Encryption) *Location { + s.Encryption = v + return s +} + +// SetPrefix sets the Prefix field's value. +func (s *Location) SetPrefix(v string) *Location { + s.Prefix = &v + return s +} + +// SetStorageClass sets the StorageClass field's value. +func (s *Location) SetStorageClass(v string) *Location { + s.StorageClass = &v + return s +} + +// SetTagging sets the Tagging field's value. +func (s *Location) SetTagging(v *Tagging) *Location { + s.Tagging = v + return s +} + +// SetUserMetadata sets the UserMetadata field's value. +func (s *Location) SetUserMetadata(v []*MetadataEntry) *Location { + s.UserMetadata = v + return s +} + type LoggingEnabled struct { _ struct{} `type:"structure"` @@ -14745,7 +15463,37 @@ func (s *LoggingEnabled) SetTargetPrefix(v string) *LoggingEnabled { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/MetricsAndOperator +// A metadata key-value pair to store with an object. +type MetadataEntry struct { + _ struct{} `type:"structure"` + + Name *string `type:"string"` + + Value *string `type:"string"` +} + +// String returns the string representation +func (s MetadataEntry) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetadataEntry) GoString() string { + return s.String() +} + +// SetName sets the Name field's value. +func (s *MetadataEntry) SetName(v string) *MetadataEntry { + s.Name = &v + return s +} + +// SetValue sets the Value field's value. 
+func (s *MetadataEntry) SetValue(v string) *MetadataEntry { + s.Value = &v + return s +} + type MetricsAndOperator struct { _ struct{} `type:"structure"` @@ -14798,7 +15546,6 @@ func (s *MetricsAndOperator) SetTags(v []*Tag) *MetricsAndOperator { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/MetricsConfiguration type MetricsConfiguration struct { _ struct{} `type:"structure"` @@ -14853,7 +15600,6 @@ func (s *MetricsConfiguration) SetId(v string) *MetricsConfiguration { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/MetricsFilter type MetricsFilter struct { _ struct{} `type:"structure"` @@ -14917,7 +15663,6 @@ func (s *MetricsFilter) SetTag(v *Tag) *MetricsFilter { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/MultipartUpload type MultipartUpload struct { _ struct{} `type:"structure"` @@ -14990,7 +15735,6 @@ func (s *MultipartUpload) SetUploadId(v string) *MultipartUpload { // configuration action on a bucket that has versioning enabled (or suspended) // to request that Amazon S3 delete noncurrent object versions at a specific // period in the object's lifetime. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/NoncurrentVersionExpiration type NoncurrentVersionExpiration struct { _ struct{} `type:"structure"` @@ -15022,7 +15766,6 @@ func (s *NoncurrentVersionExpiration) SetNoncurrentDays(v int64) *NoncurrentVers // versioning-enabled (or versioning is suspended), you can set this action // to request that Amazon S3 transition noncurrent object versions to the STANDARD_IA // or GLACIER storage class at a specific period in the object's lifetime. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/NoncurrentVersionTransition type NoncurrentVersionTransition struct { _ struct{} `type:"structure"` @@ -15060,7 +15803,6 @@ func (s *NoncurrentVersionTransition) SetStorageClass(v string) *NoncurrentVersi // Container for specifying the notification configuration of the bucket. If // this element is empty, notifications are turned off on the bucket. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/NotificationConfiguration type NotificationConfiguration struct { _ struct{} `type:"structure"` @@ -15139,7 +15881,6 @@ func (s *NotificationConfiguration) SetTopicConfigurations(v []*TopicConfigurati return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/NotificationConfigurationDeprecated type NotificationConfigurationDeprecated struct { _ struct{} `type:"structure"` @@ -15180,7 +15921,6 @@ func (s *NotificationConfigurationDeprecated) SetTopicConfiguration(v *TopicConf // Container for object key name filtering rules. 
For information about key // name filtering, go to Configuring Event Notifications (http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/NotificationConfigurationFilter type NotificationConfigurationFilter struct { _ struct{} `type:"structure"` @@ -15204,7 +15944,6 @@ func (s *NotificationConfigurationFilter) SetKey(v *KeyFilter) *NotificationConf return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Object type Object struct { _ struct{} `type:"structure"` @@ -15268,7 +16007,6 @@ func (s *Object) SetStorageClass(v string) *Object { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ObjectIdentifier type ObjectIdentifier struct { _ struct{} `type:"structure"` @@ -15319,7 +16057,6 @@ func (s *ObjectIdentifier) SetVersionId(v string) *ObjectIdentifier { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ObjectVersion type ObjectVersion struct { _ struct{} `type:"structure"` @@ -15405,7 +16142,69 @@ func (s *ObjectVersion) SetVersionId(v string) *ObjectVersion { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Owner +// Describes the location where the restore job's output is stored. +type OutputLocation struct { + _ struct{} `type:"structure"` + + // Describes an S3 location that will receive the results of the restore request. + S3 *Location `type:"structure"` +} + +// String returns the string representation +func (s OutputLocation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputLocation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *OutputLocation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "OutputLocation"} + if s.S3 != nil { + if err := s.S3.Validate(); err != nil { + invalidParams.AddNested("S3", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetS3 sets the S3 field's value. +func (s *OutputLocation) SetS3(v *Location) *OutputLocation { + s.S3 = v + return s +} + +// Describes how results of the Select job are serialized. +type OutputSerialization struct { + _ struct{} `type:"structure"` + + // Describes the serialization of CSV-encoded Select results. + CSV *CSVOutput `type:"structure"` +} + +// String returns the string representation +func (s OutputSerialization) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputSerialization) GoString() string { + return s.String() +} + +// SetCSV sets the CSV field's value. 
+func (s *OutputSerialization) SetCSV(v *CSVOutput) *OutputSerialization { + s.CSV = v + return s +} + type Owner struct { _ struct{} `type:"structure"` @@ -15436,7 +16235,6 @@ func (s *Owner) SetID(v string) *Owner { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Part type Part struct { _ struct{} `type:"structure"` @@ -15488,7 +16286,6 @@ func (s *Part) SetSize(v int64) *Part { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfigurationRequest type PutBucketAccelerateConfigurationInput struct { _ struct{} `type:"structure" payload:"AccelerateConfiguration"` @@ -15548,7 +16345,6 @@ func (s *PutBucketAccelerateConfigurationInput) getBucket() (v string) { return *s.Bucket } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAccelerateConfigurationOutput type PutBucketAccelerateConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -15563,7 +16359,6 @@ func (s PutBucketAccelerateConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAclRequest type PutBucketAclInput struct { _ struct{} `type:"structure" payload:"AccessControlPolicy"` @@ -15675,7 +16470,6 @@ func (s *PutBucketAclInput) SetGrantWriteACP(v string) *PutBucketAclInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAclOutput type PutBucketAclOutput struct { _ struct{} `type:"structure"` } @@ -15690,7 +16484,6 @@ func (s PutBucketAclOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfigurationRequest type PutBucketAnalyticsConfigurationInput struct { _ struct{} `type:"structure" payload:"AnalyticsConfiguration"` @@ -15769,7 +16562,6 @@ func (s *PutBucketAnalyticsConfigurationInput) SetId(v string) *PutBucketAnalyti return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketAnalyticsConfigurationOutput type PutBucketAnalyticsConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -15784,7 +16576,6 @@ func (s PutBucketAnalyticsConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCorsRequest type PutBucketCorsInput struct { _ struct{} `type:"structure" payload:"CORSConfiguration"` @@ -15845,7 +16636,6 @@ func (s *PutBucketCorsInput) SetCORSConfiguration(v *CORSConfiguration) *PutBuck return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketCorsOutput type PutBucketCorsOutput struct { _ struct{} `type:"structure"` } @@ -15860,7 +16650,86 @@ func (s PutBucketCorsOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfigurationRequest +type PutBucketEncryptionInput struct { + _ struct{} `type:"structure" payload:"ServerSideEncryptionConfiguration"` + + // The name of the bucket for which the server-side encryption configuration + // is set. + // + // Bucket is a required field + Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + + // Container for server-side encryption configuration rules. Currently S3 supports + // one rule only. 
+ // + // ServerSideEncryptionConfiguration is a required field + ServerSideEncryptionConfiguration *ServerSideEncryptionConfiguration `locationName:"ServerSideEncryptionConfiguration" type:"structure" required:"true" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` +} + +// String returns the string representation +func (s PutBucketEncryptionInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketEncryptionInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutBucketEncryptionInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutBucketEncryptionInput"} + if s.Bucket == nil { + invalidParams.Add(request.NewErrParamRequired("Bucket")) + } + if s.ServerSideEncryptionConfiguration == nil { + invalidParams.Add(request.NewErrParamRequired("ServerSideEncryptionConfiguration")) + } + if s.ServerSideEncryptionConfiguration != nil { + if err := s.ServerSideEncryptionConfiguration.Validate(); err != nil { + invalidParams.AddNested("ServerSideEncryptionConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBucket sets the Bucket field's value. +func (s *PutBucketEncryptionInput) SetBucket(v string) *PutBucketEncryptionInput { + s.Bucket = &v + return s +} + +func (s *PutBucketEncryptionInput) getBucket() (v string) { + if s.Bucket == nil { + return v + } + return *s.Bucket +} + +// SetServerSideEncryptionConfiguration sets the ServerSideEncryptionConfiguration field's value. +func (s *PutBucketEncryptionInput) SetServerSideEncryptionConfiguration(v *ServerSideEncryptionConfiguration) *PutBucketEncryptionInput { + s.ServerSideEncryptionConfiguration = v + return s +} + +type PutBucketEncryptionOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutBucketEncryptionOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutBucketEncryptionOutput) GoString() string { + return s.String() +} + type PutBucketInventoryConfigurationInput struct { _ struct{} `type:"structure" payload:"InventoryConfiguration"` @@ -15939,7 +16808,6 @@ func (s *PutBucketInventoryConfigurationInput) SetInventoryConfiguration(v *Inve return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketInventoryConfigurationOutput type PutBucketInventoryConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -15954,7 +16822,6 @@ func (s PutBucketInventoryConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfigurationRequest type PutBucketLifecycleConfigurationInput struct { _ struct{} `type:"structure" payload:"LifecycleConfiguration"` @@ -16011,7 +16878,6 @@ func (s *PutBucketLifecycleConfigurationInput) SetLifecycleConfiguration(v *Buck return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleConfigurationOutput type PutBucketLifecycleConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -16026,7 +16892,6 @@ func (s PutBucketLifecycleConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleRequest type PutBucketLifecycleInput struct { _ struct{} `type:"structure" 
payload:"LifecycleConfiguration"` @@ -16083,7 +16948,6 @@ func (s *PutBucketLifecycleInput) SetLifecycleConfiguration(v *LifecycleConfigur return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLifecycleOutput type PutBucketLifecycleOutput struct { _ struct{} `type:"structure"` } @@ -16098,7 +16962,6 @@ func (s PutBucketLifecycleOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLoggingRequest type PutBucketLoggingInput struct { _ struct{} `type:"structure" payload:"BucketLoggingStatus"` @@ -16159,7 +17022,6 @@ func (s *PutBucketLoggingInput) SetBucketLoggingStatus(v *BucketLoggingStatus) * return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketLoggingOutput type PutBucketLoggingOutput struct { _ struct{} `type:"structure"` } @@ -16174,7 +17036,6 @@ func (s PutBucketLoggingOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfigurationRequest type PutBucketMetricsConfigurationInput struct { _ struct{} `type:"structure" payload:"MetricsConfiguration"` @@ -16253,7 +17114,6 @@ func (s *PutBucketMetricsConfigurationInput) SetMetricsConfiguration(v *MetricsC return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketMetricsConfigurationOutput type PutBucketMetricsConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -16268,7 +17128,6 @@ func (s PutBucketMetricsConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfigurationRequest type PutBucketNotificationConfigurationInput struct { _ struct{} `type:"structure" payload:"NotificationConfiguration"` @@ -16332,7 +17191,6 @@ func (s *PutBucketNotificationConfigurationInput) SetNotificationConfiguration(v return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationConfigurationOutput type PutBucketNotificationConfigurationOutput struct { _ struct{} `type:"structure"` } @@ -16347,7 +17205,6 @@ func (s PutBucketNotificationConfigurationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationRequest type PutBucketNotificationInput struct { _ struct{} `type:"structure" payload:"NotificationConfiguration"` @@ -16403,7 +17260,6 @@ func (s *PutBucketNotificationInput) SetNotificationConfiguration(v *Notificatio return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketNotificationOutput type PutBucketNotificationOutput struct { _ struct{} `type:"structure"` } @@ -16418,13 +17274,16 @@ func (s PutBucketNotificationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicyRequest type PutBucketPolicyInput struct { _ struct{} `type:"structure" payload:"Policy"` // Bucket is a required field Bucket *string `location:"uri" locationName:"Bucket" type:"string" required:"true"` + // Set this parameter to true to confirm that you want to remove your permissions + // to change this bucket policy in the future. + ConfirmRemoveSelfBucketAccess *bool `location:"header" locationName:"x-amz-confirm-remove-self-bucket-access" type:"boolean"` + // The bucket policy as a JSON document. 
// // Policy is a required field @@ -16470,13 +17329,18 @@ func (s *PutBucketPolicyInput) getBucket() (v string) { return *s.Bucket } +// SetConfirmRemoveSelfBucketAccess sets the ConfirmRemoveSelfBucketAccess field's value. +func (s *PutBucketPolicyInput) SetConfirmRemoveSelfBucketAccess(v bool) *PutBucketPolicyInput { + s.ConfirmRemoveSelfBucketAccess = &v + return s +} + // SetPolicy sets the Policy field's value. func (s *PutBucketPolicyInput) SetPolicy(v string) *PutBucketPolicyInput { s.Policy = &v return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketPolicyOutput type PutBucketPolicyOutput struct { _ struct{} `type:"structure"` } @@ -16491,7 +17355,6 @@ func (s PutBucketPolicyOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplicationRequest type PutBucketReplicationInput struct { _ struct{} `type:"structure" payload:"ReplicationConfiguration"` @@ -16555,7 +17418,6 @@ func (s *PutBucketReplicationInput) SetReplicationConfiguration(v *ReplicationCo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketReplicationOutput type PutBucketReplicationOutput struct { _ struct{} `type:"structure"` } @@ -16570,7 +17432,6 @@ func (s PutBucketReplicationOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPaymentRequest type PutBucketRequestPaymentInput struct { _ struct{} `type:"structure" payload:"RequestPaymentConfiguration"` @@ -16631,7 +17492,6 @@ func (s *PutBucketRequestPaymentInput) SetRequestPaymentConfiguration(v *Request return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketRequestPaymentOutput type PutBucketRequestPaymentOutput struct { _ struct{} `type:"structure"` } @@ -16646,7 +17506,6 @@ func (s PutBucketRequestPaymentOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTaggingRequest type PutBucketTaggingInput struct { _ struct{} `type:"structure" payload:"Tagging"` @@ -16707,7 +17566,6 @@ func (s *PutBucketTaggingInput) SetTagging(v *Tagging) *PutBucketTaggingInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketTaggingOutput type PutBucketTaggingOutput struct { _ struct{} `type:"structure"` } @@ -16722,7 +17580,6 @@ func (s PutBucketTaggingOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioningRequest type PutBucketVersioningInput struct { _ struct{} `type:"structure" payload:"VersioningConfiguration"` @@ -16788,7 +17645,6 @@ func (s *PutBucketVersioningInput) SetVersioningConfiguration(v *VersioningConfi return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketVersioningOutput type PutBucketVersioningOutput struct { _ struct{} `type:"structure"` } @@ -16803,7 +17659,6 @@ func (s PutBucketVersioningOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsiteRequest type PutBucketWebsiteInput struct { _ struct{} `type:"structure" payload:"WebsiteConfiguration"` @@ -16864,7 +17719,6 @@ func (s *PutBucketWebsiteInput) SetWebsiteConfiguration(v *WebsiteConfiguration) return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutBucketWebsiteOutput type 
PutBucketWebsiteOutput struct { _ struct{} `type:"structure"` } @@ -16879,7 +17733,6 @@ func (s PutBucketWebsiteOutput) GoString() string { return s.String() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAclRequest type PutObjectAclInput struct { _ struct{} `type:"structure" payload:"AccessControlPolicy"` @@ -17027,7 +17880,6 @@ func (s *PutObjectAclInput) SetVersionId(v string) *PutObjectAclInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectAclOutput type PutObjectAclOutput struct { _ struct{} `type:"structure"` @@ -17052,7 +17904,6 @@ func (s *PutObjectAclOutput) SetRequestCharged(v string) *PutObjectAclOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectRequest type PutObjectInput struct { _ struct{} `type:"structure" payload:"Body"` @@ -17085,6 +17936,9 @@ type PutObjectInput struct { // body cannot be determined automatically. ContentLength *int64 `location:"header" locationName:"Content-Length" type:"long"` + // The base64-encoded 128-bit MD5 digest of the part data. + ContentMD5 *string `location:"header" locationName:"Content-MD5" type:"string"` + // A standard MIME type describing the format of the object data. ContentType *string `location:"header" locationName:"Content-Type" type:"string"` @@ -17238,6 +18092,12 @@ func (s *PutObjectInput) SetContentLength(v int64) *PutObjectInput { return s } +// SetContentMD5 sets the ContentMD5 field's value. +func (s *PutObjectInput) SetContentMD5(v string) *PutObjectInput { + s.ContentMD5 = &v + return s +} + // SetContentType sets the ContentType field's value. func (s *PutObjectInput) SetContentType(v string) *PutObjectInput { s.ContentType = &v @@ -17347,7 +18207,6 @@ func (s *PutObjectInput) SetWebsiteRedirectLocation(v string) *PutObjectInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectOutput type PutObjectOutput struct { _ struct{} `type:"structure"` @@ -17442,7 +18301,6 @@ func (s *PutObjectOutput) SetVersionId(v string) *PutObjectOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTaggingRequest type PutObjectTaggingInput struct { _ struct{} `type:"structure" payload:"Tagging"` @@ -17526,7 +18384,6 @@ func (s *PutObjectTaggingInput) SetVersionId(v string) *PutObjectTaggingInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/PutObjectTaggingOutput type PutObjectTaggingOutput struct { _ struct{} `type:"structure"` @@ -17551,7 +18408,6 @@ func (s *PutObjectTaggingOutput) SetVersionId(v string) *PutObjectTaggingOutput // Container for specifying an configuration when you want Amazon S3 to publish // events to an Amazon Simple Queue Service (Amazon SQS) queue. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/QueueConfiguration type QueueConfiguration struct { _ struct{} `type:"structure"` @@ -17623,7 +18479,6 @@ func (s *QueueConfiguration) SetQueueArn(v string) *QueueConfiguration { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/QueueConfigurationDeprecated type QueueConfigurationDeprecated struct { _ struct{} `type:"structure"` @@ -17673,7 +18528,6 @@ func (s *QueueConfigurationDeprecated) SetQueue(v string) *QueueConfigurationDep return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Redirect type Redirect struct { _ struct{} `type:"structure"` @@ -17742,7 +18596,6 @@ func (s *Redirect) SetReplaceKeyWith(v string) *Redirect { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RedirectAllRequestsTo type RedirectAllRequestsTo struct { _ struct{} `type:"structure"` @@ -17793,7 +18646,6 @@ func (s *RedirectAllRequestsTo) SetProtocol(v string) *RedirectAllRequestsTo { // Container for replication rules. You can add as many as 1,000 rules. Total // replication configuration size can be up to 2 MB. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ReplicationConfiguration type ReplicationConfiguration struct { _ struct{} `type:"structure"` @@ -17858,10 +18710,12 @@ func (s *ReplicationConfiguration) SetRules(v []*ReplicationRule) *ReplicationCo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/ReplicationRule +// Container for information about a particular replication rule. type ReplicationRule struct { _ struct{} `type:"structure"` + // Container for replication destination information. + // // Destination is a required field Destination *Destination `type:"structure" required:"true"` @@ -17875,6 +18729,9 @@ type ReplicationRule struct { // Prefix is a required field Prefix *string `type:"string" required:"true"` + // Container for filters that define which source objects should be replicated. + SourceSelectionCriteria *SourceSelectionCriteria `type:"structure"` + // The rule is ignored if status is not Enabled. // // Status is a required field @@ -17908,6 +18765,11 @@ func (s *ReplicationRule) Validate() error { invalidParams.AddNested("Destination", err.(request.ErrInvalidParams)) } } + if s.SourceSelectionCriteria != nil { + if err := s.SourceSelectionCriteria.Validate(); err != nil { + invalidParams.AddNested("SourceSelectionCriteria", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -17933,13 +18795,18 @@ func (s *ReplicationRule) SetPrefix(v string) *ReplicationRule { return s } +// SetSourceSelectionCriteria sets the SourceSelectionCriteria field's value. +func (s *ReplicationRule) SetSourceSelectionCriteria(v *SourceSelectionCriteria) *ReplicationRule { + s.SourceSelectionCriteria = v + return s +} + // SetStatus sets the Status field's value. 
func (s *ReplicationRule) SetStatus(v string) *ReplicationRule { s.Status = &v return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RequestPaymentConfiguration type RequestPaymentConfiguration struct { _ struct{} `type:"structure"` @@ -17978,7 +18845,6 @@ func (s *RequestPaymentConfiguration) SetPayer(v string) *RequestPaymentConfigur return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObjectRequest type RestoreObjectInput struct { _ struct{} `type:"structure" payload:"RestoreRequest"` @@ -17994,6 +18860,7 @@ type RestoreObjectInput struct { // at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html RequestPayer *string `location:"header" locationName:"x-amz-request-payer" type:"string" enum:"RequestPayer"` + // Container for restore job parameters. RestoreRequest *RestoreRequest `locationName:"RestoreRequest" type:"structure" xmlURI:"http://s3.amazonaws.com/doc/2006-03-01/"` VersionId *string `location:"querystring" locationName:"versionId" type:"string"` @@ -18070,13 +18937,16 @@ func (s *RestoreObjectInput) SetVersionId(v string) *RestoreObjectInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreObjectOutput type RestoreObjectOutput struct { _ struct{} `type:"structure"` // If present, indicates that the requester was successfully charged for the // request. RequestCharged *string `location:"header" locationName:"x-amz-request-charged" type:"string" enum:"RequestCharged"` + + // Indicates the path in the provided S3 output location where Select results + // will be restored to. + RestoreOutputPath *string `location:"header" locationName:"x-amz-restore-output-path" type:"string"` } // String returns the string representation @@ -18095,17 +18965,38 @@ func (s *RestoreObjectOutput) SetRequestCharged(v string) *RestoreObjectOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RestoreRequest +// SetRestoreOutputPath sets the RestoreOutputPath field's value. +func (s *RestoreObjectOutput) SetRestoreOutputPath(v string) *RestoreObjectOutput { + s.RestoreOutputPath = &v + return s +} + +// Container for restore job parameters. type RestoreRequest struct { _ struct{} `type:"structure"` - // Lifetime of the active copy in days - // - // Days is a required field - Days *int64 `type:"integer" required:"true"` + // Lifetime of the active copy in days. Do not use with restores that specify + // OutputLocation. + Days *int64 `type:"integer"` - // Glacier related prameters pertaining to this job. + // The optional description for the job. + Description *string `type:"string"` + + // Glacier related parameters pertaining to this job. Do not use with restores + // that specify OutputLocation. GlacierJobParameters *GlacierJobParameters `type:"structure"` + + // Describes the location where the restore job's output is stored. + OutputLocation *OutputLocation `type:"structure"` + + // Describes the parameters for Select job types. + SelectParameters *SelectParameters `type:"structure"` + + // Glacier retrieval tier at which the restore will be processed. + Tier *string `type:"string" enum:"Tier"` + + // Type of restore request. + Type *string `type:"string" enum:"RestoreRequestType"` } // String returns the string representation @@ -18121,14 +19012,21 @@ func (s RestoreRequest) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
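
// Illustrative sketch, not part of the vendored SDK code above: the new
// SourceSelectionCriteria field on ReplicationRule can opt SSE-KMS encrypted
// objects into cross-region replication. The destination bucket ARN and prefix
// are placeholders; a complete setup would embed the rule in a
// ReplicationConfiguration passed to PutBucketReplication, and the service may
// additionally require destination-side KMS settings not shown here.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	rule := &s3.ReplicationRule{
		Prefix: aws.String(""), // empty prefix: replicate the whole bucket
		Status: aws.String(s3.ReplicationRuleStatusEnabled),
		Destination: &s3.Destination{
			Bucket: aws.String("arn:aws:s3:::example-replica-bucket"), // placeholder
		},
		// Also replicate objects that were encrypted with SSE-KMS.
		SourceSelectionCriteria: &s3.SourceSelectionCriteria{
			SseKmsEncryptedObjects: &s3.SseKmsEncryptedObjects{
				Status: aws.String(s3.SseKmsEncryptedObjectsStatusEnabled),
			},
		},
	}

	if err := rule.Validate(); err != nil {
		fmt.Println("invalid replication rule:", err)
		return
	}
	fmt.Println("replication rule OK")
}
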
func (s *RestoreRequest) Validate() error { invalidParams := request.ErrInvalidParams{Context: "RestoreRequest"} - if s.Days == nil { - invalidParams.Add(request.NewErrParamRequired("Days")) - } if s.GlacierJobParameters != nil { if err := s.GlacierJobParameters.Validate(); err != nil { invalidParams.AddNested("GlacierJobParameters", err.(request.ErrInvalidParams)) } } + if s.OutputLocation != nil { + if err := s.OutputLocation.Validate(); err != nil { + invalidParams.AddNested("OutputLocation", err.(request.ErrInvalidParams)) + } + } + if s.SelectParameters != nil { + if err := s.SelectParameters.Validate(); err != nil { + invalidParams.AddNested("SelectParameters", err.(request.ErrInvalidParams)) + } + } if invalidParams.Len() > 0 { return invalidParams @@ -18142,13 +19040,42 @@ func (s *RestoreRequest) SetDays(v int64) *RestoreRequest { return s } +// SetDescription sets the Description field's value. +func (s *RestoreRequest) SetDescription(v string) *RestoreRequest { + s.Description = &v + return s +} + // SetGlacierJobParameters sets the GlacierJobParameters field's value. func (s *RestoreRequest) SetGlacierJobParameters(v *GlacierJobParameters) *RestoreRequest { s.GlacierJobParameters = v return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/RoutingRule +// SetOutputLocation sets the OutputLocation field's value. +func (s *RestoreRequest) SetOutputLocation(v *OutputLocation) *RestoreRequest { + s.OutputLocation = v + return s +} + +// SetSelectParameters sets the SelectParameters field's value. +func (s *RestoreRequest) SetSelectParameters(v *SelectParameters) *RestoreRequest { + s.SelectParameters = v + return s +} + +// SetTier sets the Tier field's value. +func (s *RestoreRequest) SetTier(v string) *RestoreRequest { + s.Tier = &v + return s +} + +// SetType sets the Type field's value. +func (s *RestoreRequest) SetType(v string) *RestoreRequest { + s.Type = &v + return s +} + type RoutingRule struct { _ struct{} `type:"structure"` @@ -18201,7 +19128,6 @@ func (s *RoutingRule) SetRedirect(v *Redirect) *RoutingRule { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Rule type Rule struct { _ struct{} `type:"structure"` @@ -18316,7 +19242,365 @@ func (s *Rule) SetTransition(v *Transition) *Rule { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/StorageClassAnalysis +// Specifies the use of SSE-KMS to encrypt delievered Inventory reports. +type SSEKMS struct { + _ struct{} `locationName:"SSE-KMS" type:"structure"` + + // Specifies the ID of the AWS Key Management Service (KMS) master encryption + // key to use for encrypting Inventory reports. + // + // KeyId is a required field + KeyId *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s SSEKMS) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSEKMS) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SSEKMS) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SSEKMS"} + if s.KeyId == nil { + invalidParams.Add(request.NewErrParamRequired("KeyId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKeyId sets the KeyId field's value. +func (s *SSEKMS) SetKeyId(v string) *SSEKMS { + s.KeyId = &v + return s +} + +// Specifies the use of SSE-S3 to encrypt delievered Inventory reports. 
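
// Illustrative sketch, not part of the vendored SDK code above: a SELECT-type
// restore using the new RestoreRequest fields (Type, SelectParameters,
// OutputLocation). Bucket names, the object key, and the SQL expression are
// placeholders; CSVInput and CSVOutput are the serialization types referenced
// by InputSerialization and OutputSerialization earlier in this file.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	out, err := svc.RestoreObject(&s3.RestoreObjectInput{
		Bucket: aws.String("example-archive-bucket"), // placeholder
		Key:    aws.String("data/archive.csv"),       // placeholder
		RestoreRequest: &s3.RestoreRequest{
			Type:        aws.String(s3.RestoreRequestTypeSelect),
			Description: aws.String("select rows from an archived CSV"),
			SelectParameters: &s3.SelectParameters{
				Expression:     aws.String("SELECT * FROM S3Object s"), // placeholder query
				ExpressionType: aws.String(s3.ExpressionTypeSql),
				InputSerialization: &s3.InputSerialization{
					CSV: &s3.CSVInput{FileHeaderInfo: aws.String(s3.FileHeaderInfoUse)},
				},
				OutputSerialization: &s3.OutputSerialization{
					CSV: &s3.CSVOutput{},
				},
			},
			// Results are written to an S3 location instead of being restored
			// in place.
			OutputLocation: &s3.OutputLocation{
				S3: &s3.Location{
					BucketName: aws.String("example-results-bucket"), // placeholder
					Prefix:     aws.String("select-results/"),
					UserMetadata: []*s3.MetadataEntry{
						{Name: aws.String("source"), Value: aws.String("archive.csv")},
					},
				},
			},
		},
	})
	if err != nil {
		fmt.Println("restore failed:", err)
		return
	}
	// RestoreOutputPath reports where the Select results will land.
	fmt.Println("restore started, output path:", aws.StringValue(out.RestoreOutputPath))
}
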
+type SSES3 struct { + _ struct{} `locationName:"SSE-S3" type:"structure"` +} + +// String returns the string representation +func (s SSES3) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SSES3) GoString() string { + return s.String() +} + +// Describes the parameters for Select job types. +type SelectParameters struct { + _ struct{} `type:"structure"` + + // The expression that is used to query the object. + // + // Expression is a required field + Expression *string `type:"string" required:"true"` + + // The type of the provided expression (e.g., SQL). + // + // ExpressionType is a required field + ExpressionType *string `type:"string" required:"true" enum:"ExpressionType"` + + // Describes the serialization format of the object. + // + // InputSerialization is a required field + InputSerialization *InputSerialization `type:"structure" required:"true"` + + // Describes how the results of the Select job are serialized. + // + // OutputSerialization is a required field + OutputSerialization *OutputSerialization `type:"structure" required:"true"` +} + +// String returns the string representation +func (s SelectParameters) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SelectParameters) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SelectParameters) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SelectParameters"} + if s.Expression == nil { + invalidParams.Add(request.NewErrParamRequired("Expression")) + } + if s.ExpressionType == nil { + invalidParams.Add(request.NewErrParamRequired("ExpressionType")) + } + if s.InputSerialization == nil { + invalidParams.Add(request.NewErrParamRequired("InputSerialization")) + } + if s.OutputSerialization == nil { + invalidParams.Add(request.NewErrParamRequired("OutputSerialization")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetExpression sets the Expression field's value. +func (s *SelectParameters) SetExpression(v string) *SelectParameters { + s.Expression = &v + return s +} + +// SetExpressionType sets the ExpressionType field's value. +func (s *SelectParameters) SetExpressionType(v string) *SelectParameters { + s.ExpressionType = &v + return s +} + +// SetInputSerialization sets the InputSerialization field's value. +func (s *SelectParameters) SetInputSerialization(v *InputSerialization) *SelectParameters { + s.InputSerialization = v + return s +} + +// SetOutputSerialization sets the OutputSerialization field's value. +func (s *SelectParameters) SetOutputSerialization(v *OutputSerialization) *SelectParameters { + s.OutputSerialization = v + return s +} + +// Describes the default server-side encryption to apply to new objects in the +// bucket. If Put Object request does not specify any server-side encryption, +// this default encryption will be applied. +type ServerSideEncryptionByDefault struct { + _ struct{} `type:"structure"` + + // KMS master key ID to use for the default encryption. This parameter is allowed + // if SSEAlgorithm is aws:kms. + KMSMasterKeyID *string `type:"string"` + + // Server-side encryption algorithm to use for the default encryption. 
+ // + // SSEAlgorithm is a required field + SSEAlgorithm *string `type:"string" required:"true" enum:"ServerSideEncryption"` +} + +// String returns the string representation +func (s ServerSideEncryptionByDefault) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerSideEncryptionByDefault) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ServerSideEncryptionByDefault) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ServerSideEncryptionByDefault"} + if s.SSEAlgorithm == nil { + invalidParams.Add(request.NewErrParamRequired("SSEAlgorithm")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKMSMasterKeyID sets the KMSMasterKeyID field's value. +func (s *ServerSideEncryptionByDefault) SetKMSMasterKeyID(v string) *ServerSideEncryptionByDefault { + s.KMSMasterKeyID = &v + return s +} + +// SetSSEAlgorithm sets the SSEAlgorithm field's value. +func (s *ServerSideEncryptionByDefault) SetSSEAlgorithm(v string) *ServerSideEncryptionByDefault { + s.SSEAlgorithm = &v + return s +} + +// Container for server-side encryption configuration rules. Currently S3 supports +// one rule only. +type ServerSideEncryptionConfiguration struct { + _ struct{} `type:"structure"` + + // Container for information about a particular server-side encryption configuration + // rule. + // + // Rules is a required field + Rules []*ServerSideEncryptionRule `locationName:"Rule" type:"list" flattened:"true" required:"true"` +} + +// String returns the string representation +func (s ServerSideEncryptionConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerSideEncryptionConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ServerSideEncryptionConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ServerSideEncryptionConfiguration"} + if s.Rules == nil { + invalidParams.Add(request.NewErrParamRequired("Rules")) + } + if s.Rules != nil { + for i, v := range s.Rules { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Rules", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetRules sets the Rules field's value. +func (s *ServerSideEncryptionConfiguration) SetRules(v []*ServerSideEncryptionRule) *ServerSideEncryptionConfiguration { + s.Rules = v + return s +} + +// Container for information about a particular server-side encryption configuration +// rule. +type ServerSideEncryptionRule struct { + _ struct{} `type:"structure"` + + // Describes the default server-side encryption to apply to new objects in the + // bucket. If Put Object request does not specify any server-side encryption, + // this default encryption will be applied. + ApplyServerSideEncryptionByDefault *ServerSideEncryptionByDefault `type:"structure"` +} + +// String returns the string representation +func (s ServerSideEncryptionRule) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ServerSideEncryptionRule) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ServerSideEncryptionRule) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ServerSideEncryptionRule"} + if s.ApplyServerSideEncryptionByDefault != nil { + if err := s.ApplyServerSideEncryptionByDefault.Validate(); err != nil { + invalidParams.AddNested("ApplyServerSideEncryptionByDefault", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetApplyServerSideEncryptionByDefault sets the ApplyServerSideEncryptionByDefault field's value. +func (s *ServerSideEncryptionRule) SetApplyServerSideEncryptionByDefault(v *ServerSideEncryptionByDefault) *ServerSideEncryptionRule { + s.ApplyServerSideEncryptionByDefault = v + return s +} + +// Container for filters that define which source objects should be replicated. +type SourceSelectionCriteria struct { + _ struct{} `type:"structure"` + + // Container for filter information of selection of KMS Encrypted S3 objects. + SseKmsEncryptedObjects *SseKmsEncryptedObjects `type:"structure"` +} + +// String returns the string representation +func (s SourceSelectionCriteria) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SourceSelectionCriteria) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SourceSelectionCriteria) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SourceSelectionCriteria"} + if s.SseKmsEncryptedObjects != nil { + if err := s.SseKmsEncryptedObjects.Validate(); err != nil { + invalidParams.AddNested("SseKmsEncryptedObjects", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetSseKmsEncryptedObjects sets the SseKmsEncryptedObjects field's value. +func (s *SourceSelectionCriteria) SetSseKmsEncryptedObjects(v *SseKmsEncryptedObjects) *SourceSelectionCriteria { + s.SseKmsEncryptedObjects = v + return s +} + +// Container for filter information of selection of KMS Encrypted S3 objects. +type SseKmsEncryptedObjects struct { + _ struct{} `type:"structure"` + + // The replication for KMS encrypted S3 objects is disabled if status is not + // Enabled. + // + // Status is a required field + Status *string `type:"string" required:"true" enum:"SseKmsEncryptedObjectsStatus"` +} + +// String returns the string representation +func (s SseKmsEncryptedObjects) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SseKmsEncryptedObjects) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *SseKmsEncryptedObjects) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "SseKmsEncryptedObjects"} + if s.Status == nil { + invalidParams.Add(request.NewErrParamRequired("Status")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetStatus sets the Status field's value. 
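
// Illustrative sketch, not part of the vendored SDK code above: enabling
// default bucket encryption with the new PutBucketEncryption operation and the
// ServerSideEncryptionConfiguration types defined earlier in this file. The
// bucket name is a placeholder; aws:kms with a KMSMasterKeyID could be used in
// place of AES256.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	_, err := svc.PutBucketEncryption(&s3.PutBucketEncryptionInput{
		Bucket: aws.String("example-bucket"), // placeholder
		ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
			// S3 currently accepts a single rule.
			Rules: []*s3.ServerSideEncryptionRule{
				{
					ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
						SSEAlgorithm: aws.String(s3.ServerSideEncryptionAes256),
					},
				},
			},
		},
	})
	if err != nil {
		fmt.Println("failed to set default encryption:", err)
		return
	}
	fmt.Println("default encryption enabled")
}
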
+func (s *SseKmsEncryptedObjects) SetStatus(v string) *SseKmsEncryptedObjects { + s.Status = &v + return s +} + type StorageClassAnalysis struct { _ struct{} `type:"structure"` @@ -18356,7 +19640,6 @@ func (s *StorageClassAnalysis) SetDataExport(v *StorageClassAnalysisDataExport) return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/StorageClassAnalysisDataExport type StorageClassAnalysisDataExport struct { _ struct{} `type:"structure"` @@ -18414,7 +19697,6 @@ func (s *StorageClassAnalysisDataExport) SetOutputSchemaVersion(v string) *Stora return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Tag type Tag struct { _ struct{} `type:"structure"` @@ -18470,7 +19752,6 @@ func (s *Tag) SetValue(v string) *Tag { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Tagging type Tagging struct { _ struct{} `type:"structure"` @@ -18517,7 +19798,6 @@ func (s *Tagging) SetTagSet(v []*Tag) *Tagging { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/TargetGrant type TargetGrant struct { _ struct{} `type:"structure"` @@ -18566,7 +19846,6 @@ func (s *TargetGrant) SetPermission(v string) *TargetGrant { // Container for specifying the configuration when you want Amazon S3 to publish // events to an Amazon Simple Notification Service (Amazon SNS) topic. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/TopicConfiguration type TopicConfiguration struct { _ struct{} `type:"structure"` @@ -18638,7 +19917,6 @@ func (s *TopicConfiguration) SetTopicArn(v string) *TopicConfiguration { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/TopicConfigurationDeprecated type TopicConfigurationDeprecated struct { _ struct{} `type:"structure"` @@ -18690,7 +19968,6 @@ func (s *TopicConfigurationDeprecated) SetTopic(v string) *TopicConfigurationDep return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/Transition type Transition struct { _ struct{} `type:"structure"` @@ -18734,7 +20011,6 @@ func (s *Transition) SetStorageClass(v string) *Transition { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopyRequest type UploadPartCopyInput struct { _ struct{} `type:"structure"` @@ -18978,7 +20254,6 @@ func (s *UploadPartCopyInput) SetUploadId(v string) *UploadPartCopyInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartCopyOutput type UploadPartCopyOutput struct { _ struct{} `type:"structure" payload:"CopyPartResult"` @@ -19063,7 +20338,6 @@ func (s *UploadPartCopyOutput) SetServerSideEncryption(v string) *UploadPartCopy return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartRequest type UploadPartInput struct { _ struct{} `type:"structure" payload:"Body"` @@ -19079,6 +20353,9 @@ type UploadPartInput struct { // body cannot be determined automatically. ContentLength *int64 `location:"header" locationName:"Content-Length" type:"long"` + // The base64-encoded 128-bit MD5 digest of the part data. + ContentMD5 *string `location:"header" locationName:"Content-MD5" type:"string"` + // Object key for which the multipart upload was initiated. // // Key is a required field @@ -19178,6 +20455,12 @@ func (s *UploadPartInput) SetContentLength(v int64) *UploadPartInput { return s } +// SetContentMD5 sets the ContentMD5 field's value. 
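
// Illustrative sketch, not part of the vendored SDK code above: the ContentMD5
// field added to PutObjectInput and UploadPartInput carries the base64-encoded
// 128-bit MD5 of the request body so S3 can verify it on arrival. The bucket
// and key below are placeholders.
package main

import (
	"bytes"
	"crypto/md5"
	"encoding/base64"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	body := []byte("hello, world\n")

	// Base64-encoded MD5 digest of the payload, in the form the field expects.
	sum := md5.Sum(body)
	contentMD5 := base64.StdEncoding.EncodeToString(sum[:])

	svc := s3.New(session.Must(session.NewSession()))
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket:     aws.String("example-bucket"), // placeholder
		Key:        aws.String("hello.txt"),      // placeholder
		Body:       bytes.NewReader(body),
		ContentMD5: aws.String(contentMD5),
	})
	if err != nil {
		fmt.Println("put failed:", err)
		return
	}
	fmt.Println("uploaded with Content-MD5", contentMD5)
}
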
+func (s *UploadPartInput) SetContentMD5(v string) *UploadPartInput { + s.ContentMD5 = &v + return s +} + // SetKey sets the Key field's value. func (s *UploadPartInput) SetKey(v string) *UploadPartInput { s.Key = &v @@ -19227,7 +20510,6 @@ func (s *UploadPartInput) SetUploadId(v string) *UploadPartInput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/UploadPartOutput type UploadPartOutput struct { _ struct{} `type:"structure"` @@ -19303,7 +20585,6 @@ func (s *UploadPartOutput) SetServerSideEncryption(v string) *UploadPartOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/VersioningConfiguration type VersioningConfiguration struct { _ struct{} `type:"structure"` @@ -19338,7 +20619,6 @@ func (s *VersioningConfiguration) SetStatus(v string) *VersioningConfiguration { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/s3-2006-03-01/WebsiteConfiguration type WebsiteConfiguration struct { _ struct{} `type:"structure"` @@ -19550,6 +20830,22 @@ const ( ExpirationStatusDisabled = "Disabled" ) +const ( + // ExpressionTypeSql is a ExpressionType enum value + ExpressionTypeSql = "SQL" +) + +const ( + // FileHeaderInfoUse is a FileHeaderInfo enum value + FileHeaderInfoUse = "USE" + + // FileHeaderInfoIgnore is a FileHeaderInfo enum value + FileHeaderInfoIgnore = "IGNORE" + + // FileHeaderInfoNone is a FileHeaderInfo enum value + FileHeaderInfoNone = "NONE" +) + const ( // FilterRuleNamePrefix is a FilterRuleName enum value FilterRuleNamePrefix = "prefix" @@ -19561,6 +20857,9 @@ const ( const ( // InventoryFormatCsv is a InventoryFormat enum value InventoryFormatCsv = "CSV" + + // InventoryFormatOrc is a InventoryFormat enum value + InventoryFormatOrc = "ORC" ) const ( @@ -19597,6 +20896,9 @@ const ( // InventoryOptionalFieldReplicationStatus is a InventoryOptionalField enum value InventoryOptionalFieldReplicationStatus = "ReplicationStatus" + + // InventoryOptionalFieldEncryptionStatus is a InventoryOptionalField enum value + InventoryOptionalFieldEncryptionStatus = "EncryptionStatus" ) const ( @@ -19662,6 +20964,11 @@ const ( ObjectVersionStorageClassStandard = "STANDARD" ) +const ( + // OwnerOverrideDestination is a OwnerOverride enum value + OwnerOverrideDestination = "Destination" +) + const ( // PayerRequester is a Payer enum value PayerRequester = "Requester" @@ -19695,6 +21002,14 @@ const ( ProtocolHttps = "https" ) +const ( + // QuoteFieldsAlways is a QuoteFields enum value + QuoteFieldsAlways = "ALWAYS" + + // QuoteFieldsAsneeded is a QuoteFields enum value + QuoteFieldsAsneeded = "ASNEEDED" +) + const ( // ReplicationRuleStatusEnabled is a ReplicationRuleStatus enum value ReplicationRuleStatusEnabled = "Enabled" @@ -19733,6 +21048,11 @@ const ( RequestPayerRequester = "requester" ) +const ( + // RestoreRequestTypeSelect is a RestoreRequestType enum value + RestoreRequestTypeSelect = "SELECT" +) + const ( // ServerSideEncryptionAes256 is a ServerSideEncryption enum value ServerSideEncryptionAes256 = "AES256" @@ -19741,6 +21061,14 @@ const ( ServerSideEncryptionAwsKms = "aws:kms" ) +const ( + // SseKmsEncryptedObjectsStatusEnabled is a SseKmsEncryptedObjectsStatus enum value + SseKmsEncryptedObjectsStatusEnabled = "Enabled" + + // SseKmsEncryptedObjectsStatusDisabled is a SseKmsEncryptedObjectsStatus enum value + SseKmsEncryptedObjectsStatusDisabled = "Disabled" +) + const ( // StorageClassStandard is a StorageClass enum value StorageClassStandard = "STANDARD" diff --git 
a/vendor/github.com/aws/aws-sdk-go/service/s3/doc.go b/vendor/github.com/aws/aws-sdk-go/service/s3/doc.go index 30068d159..0def02255 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/doc.go @@ -10,7 +10,7 @@ // // Using the Client // -// To Amazon Simple Storage Service with the SDK use the New function to create +// To contact Amazon Simple Storage Service with the SDK use the New function to create // a new service client. With that client you can make API requests to the service. // These clients are safe to use concurrently. // diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go b/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go index b794a63ba..39b912c26 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/doc_custom.go @@ -35,7 +35,7 @@ // // The s3manager package's Downloader provides concurrently downloading of Objects // from S3. The Downloader will write S3 Object content with an io.WriterAt. -// Once the Downloader instance is created you can call Upload concurrently from +// Once the Downloader instance is created you can call Download concurrently from // multiple goroutines safely. // // // The session the S3 Downloader will use @@ -56,7 +56,7 @@ // Key: aws.String(myString), // }) // if err != nil { -// return fmt.Errorf("failed to upload file, %v", err) +// return fmt.Errorf("failed to download file, %v", err) // } // fmt.Printf("file downloaded, %d bytes\n", n) // diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/s3iface/interface.go b/vendor/github.com/aws/aws-sdk-go/service/s3/s3iface/interface.go index 34e71df0f..28c30d976 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/s3iface/interface.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/s3iface/interface.go @@ -92,6 +92,10 @@ type S3API interface { DeleteBucketCorsWithContext(aws.Context, *s3.DeleteBucketCorsInput, ...request.Option) (*s3.DeleteBucketCorsOutput, error) DeleteBucketCorsRequest(*s3.DeleteBucketCorsInput) (*request.Request, *s3.DeleteBucketCorsOutput) + DeleteBucketEncryption(*s3.DeleteBucketEncryptionInput) (*s3.DeleteBucketEncryptionOutput, error) + DeleteBucketEncryptionWithContext(aws.Context, *s3.DeleteBucketEncryptionInput, ...request.Option) (*s3.DeleteBucketEncryptionOutput, error) + DeleteBucketEncryptionRequest(*s3.DeleteBucketEncryptionInput) (*request.Request, *s3.DeleteBucketEncryptionOutput) + DeleteBucketInventoryConfiguration(*s3.DeleteBucketInventoryConfigurationInput) (*s3.DeleteBucketInventoryConfigurationOutput, error) DeleteBucketInventoryConfigurationWithContext(aws.Context, *s3.DeleteBucketInventoryConfigurationInput, ...request.Option) (*s3.DeleteBucketInventoryConfigurationOutput, error) DeleteBucketInventoryConfigurationRequest(*s3.DeleteBucketInventoryConfigurationInput) (*request.Request, *s3.DeleteBucketInventoryConfigurationOutput) @@ -148,6 +152,10 @@ type S3API interface { GetBucketCorsWithContext(aws.Context, *s3.GetBucketCorsInput, ...request.Option) (*s3.GetBucketCorsOutput, error) GetBucketCorsRequest(*s3.GetBucketCorsInput) (*request.Request, *s3.GetBucketCorsOutput) + GetBucketEncryption(*s3.GetBucketEncryptionInput) (*s3.GetBucketEncryptionOutput, error) + GetBucketEncryptionWithContext(aws.Context, *s3.GetBucketEncryptionInput, ...request.Option) (*s3.GetBucketEncryptionOutput, error) + GetBucketEncryptionRequest(*s3.GetBucketEncryptionInput) (*request.Request, *s3.GetBucketEncryptionOutput) 
+ GetBucketInventoryConfiguration(*s3.GetBucketInventoryConfigurationInput) (*s3.GetBucketInventoryConfigurationOutput, error) GetBucketInventoryConfigurationWithContext(aws.Context, *s3.GetBucketInventoryConfigurationInput, ...request.Option) (*s3.GetBucketInventoryConfigurationOutput, error) GetBucketInventoryConfigurationRequest(*s3.GetBucketInventoryConfigurationInput) (*request.Request, *s3.GetBucketInventoryConfigurationOutput) @@ -295,6 +303,10 @@ type S3API interface { PutBucketCorsWithContext(aws.Context, *s3.PutBucketCorsInput, ...request.Option) (*s3.PutBucketCorsOutput, error) PutBucketCorsRequest(*s3.PutBucketCorsInput) (*request.Request, *s3.PutBucketCorsOutput) + PutBucketEncryption(*s3.PutBucketEncryptionInput) (*s3.PutBucketEncryptionOutput, error) + PutBucketEncryptionWithContext(aws.Context, *s3.PutBucketEncryptionInput, ...request.Option) (*s3.PutBucketEncryptionOutput, error) + PutBucketEncryptionRequest(*s3.PutBucketEncryptionInput) (*request.Request, *s3.PutBucketEncryptionOutput) + PutBucketInventoryConfiguration(*s3.PutBucketInventoryConfigurationInput) (*s3.PutBucketInventoryConfigurationOutput, error) PutBucketInventoryConfigurationWithContext(aws.Context, *s3.PutBucketInventoryConfigurationInput, ...request.Option) (*s3.PutBucketInventoryConfigurationOutput, error) PutBucketInventoryConfigurationRequest(*s3.PutBucketInventoryConfigurationInput) (*request.Request, *s3.PutBucketInventoryConfigurationOutput) diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/batch.go b/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/batch.go index 33931c6a6..7f1674f4f 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/batch.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/batch.go @@ -60,7 +60,15 @@ func newError(err error, bucket, key *string) Error { } func (err *Error) Error() string { - return fmt.Sprintf("failed to upload %q to %q:\n%s", err.Key, err.Bucket, err.OrigErr.Error()) + origErr := "" + if err.OrigErr != nil { + origErr = ":\n" + err.OrigErr.Error() + } + return fmt.Sprintf("failed to upload %q to %q%s", + aws.StringValue(err.Key), + aws.StringValue(err.Bucket), + origErr, + ) } // NewBatchError will return a BatchError that satisfies the awserr.Error interface. @@ -206,7 +214,7 @@ type BatchDelete struct { // }, // } // -// if err := batcher.Delete(&s3manager.DeleteObjectsIterator{ +// if err := batcher.Delete(aws.BackgroundContext(), &s3manager.DeleteObjectsIterator{ // Objects: objects, // }); err != nil { // return err @@ -239,7 +247,7 @@ func NewBatchDeleteWithClient(client s3iface.S3API, options ...func(*BatchDelete // }, // } // -// if err := batcher.Delete(&s3manager.DeleteObjectsIterator{ +// if err := batcher.Delete(aws.BackgroundContext(), &s3manager.DeleteObjectsIterator{ // Objects: objects, // }); err != nil { // return err diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload.go b/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload.go index fc1f47205..a4079b83e 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload.go +++ b/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload.go @@ -117,6 +117,9 @@ type UploadInput struct { // The language the content is in. ContentLanguage *string `location:"header" locationName:"Content-Language" type:"string"` + // The base64-encoded 128-bit MD5 digest of the part data. 
+ ContentMD5 *string `location:"header" locationName:"Content-MD5" type:"string"` + // A standard MIME type describing the format of the object data. ContentType *string `location:"header" locationName:"Content-Type" type:"string"` @@ -218,8 +221,11 @@ type Uploader struct { // if this value is set to zero, the DefaultUploadPartSize value will be used. PartSize int64 - // The number of goroutines to spin up in parallel when sending parts. - // If this is set to zero, the DefaultUploadConcurrency value will be used. + // The number of goroutines to spin up in parallel per call to Upload when + // sending parts. If this is set to zero, the DefaultUploadConcurrency value + // will be used. + // + // The concurrency pool is not shared between calls to Upload. Concurrency int // Setting this value to true will cause the SDK to avoid calling diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/api.go b/vendor/github.com/aws/aws-sdk-go/service/sts/api.go index 3b8be4378..22a0a1285 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sts/api.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/api.go @@ -35,7 +35,7 @@ const opAssumeRole = "AssumeRole" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRole +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRole func (c *STS) AssumeRoleRequest(input *AssumeRoleInput) (req *request.Request, output *AssumeRoleOutput) { op := &request.Operation{ Name: opAssumeRole, @@ -168,7 +168,7 @@ func (c *STS) AssumeRoleRequest(input *AssumeRoleInput) (req *request.Request, o // and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) // in the IAM User Guide. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRole +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRole func (c *STS) AssumeRole(input *AssumeRoleInput) (*AssumeRoleOutput, error) { req, out := c.AssumeRoleRequest(input) return out, req.Send() @@ -215,7 +215,7 @@ const opAssumeRoleWithSAML = "AssumeRoleWithSAML" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAML +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAML func (c *STS) AssumeRoleWithSAMLRequest(input *AssumeRoleWithSAMLInput) (req *request.Request, output *AssumeRoleWithSAMLOutput) { op := &request.Operation{ Name: opAssumeRoleWithSAML, @@ -341,7 +341,7 @@ func (c *STS) AssumeRoleWithSAMLRequest(input *AssumeRoleWithSAMLInput) (req *re // and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) // in the IAM User Guide. 
// -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAML +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAML func (c *STS) AssumeRoleWithSAML(input *AssumeRoleWithSAMLInput) (*AssumeRoleWithSAMLOutput, error) { req, out := c.AssumeRoleWithSAMLRequest(input) return out, req.Send() @@ -388,7 +388,7 @@ const opAssumeRoleWithWebIdentity = "AssumeRoleWithWebIdentity" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentity func (c *STS) AssumeRoleWithWebIdentityRequest(input *AssumeRoleWithWebIdentityInput) (req *request.Request, output *AssumeRoleWithWebIdentityOutput) { op := &request.Operation{ Name: opAssumeRoleWithWebIdentity, @@ -543,7 +543,7 @@ func (c *STS) AssumeRoleWithWebIdentityRequest(input *AssumeRoleWithWebIdentityI // and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) // in the IAM User Guide. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentity func (c *STS) AssumeRoleWithWebIdentity(input *AssumeRoleWithWebIdentityInput) (*AssumeRoleWithWebIdentityOutput, error) { req, out := c.AssumeRoleWithWebIdentityRequest(input) return out, req.Send() @@ -590,7 +590,7 @@ const opDecodeAuthorizationMessage = "DecodeAuthorizationMessage" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessage +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessage func (c *STS) DecodeAuthorizationMessageRequest(input *DecodeAuthorizationMessageInput) (req *request.Request, output *DecodeAuthorizationMessageOutput) { op := &request.Operation{ Name: opDecodeAuthorizationMessage, @@ -655,7 +655,7 @@ func (c *STS) DecodeAuthorizationMessageRequest(input *DecodeAuthorizationMessag // invalid. This can happen if the token contains invalid characters, such as // linebreaks. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessage +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessage func (c *STS) DecodeAuthorizationMessage(input *DecodeAuthorizationMessageInput) (*DecodeAuthorizationMessageOutput, error) { req, out := c.DecodeAuthorizationMessageRequest(input) return out, req.Send() @@ -702,7 +702,7 @@ const opGetCallerIdentity = "GetCallerIdentity" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentity func (c *STS) GetCallerIdentityRequest(input *GetCallerIdentityInput) (req *request.Request, output *GetCallerIdentityOutput) { op := &request.Operation{ Name: opGetCallerIdentity, @@ -730,7 +730,7 @@ func (c *STS) GetCallerIdentityRequest(input *GetCallerIdentityInput) (req *requ // // See the AWS API reference guide for AWS Security Token Service's // API operation GetCallerIdentity for usage and error information. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentity +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentity func (c *STS) GetCallerIdentity(input *GetCallerIdentityInput) (*GetCallerIdentityOutput, error) { req, out := c.GetCallerIdentityRequest(input) return out, req.Send() @@ -777,7 +777,7 @@ const opGetFederationToken = "GetFederationToken" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationToken +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationToken func (c *STS) GetFederationTokenRequest(input *GetFederationTokenInput) (req *request.Request, output *GetFederationTokenOutput) { op := &request.Operation{ Name: opGetFederationToken, @@ -899,7 +899,7 @@ func (c *STS) GetFederationTokenRequest(input *GetFederationTokenInput) (req *re // and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) // in the IAM User Guide. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationToken +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationToken func (c *STS) GetFederationToken(input *GetFederationTokenInput) (*GetFederationTokenOutput, error) { req, out := c.GetFederationTokenRequest(input) return out, req.Send() @@ -946,7 +946,7 @@ const opGetSessionToken = "GetSessionToken" // fmt.Println(resp) // } // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionToken +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionToken func (c *STS) GetSessionTokenRequest(input *GetSessionTokenInput) (req *request.Request, output *GetSessionTokenOutput) { op := &request.Operation{ Name: opGetSessionToken, @@ -1027,7 +1027,7 @@ func (c *STS) GetSessionTokenRequest(input *GetSessionTokenInput) (req *request. // and Deactivating AWS STS in an AWS Region (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html) // in the IAM User Guide. // -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionToken +// See also, https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionToken func (c *STS) GetSessionToken(input *GetSessionTokenInput) (*GetSessionTokenOutput, error) { req, out := c.GetSessionTokenRequest(input) return out, req.Send() @@ -1049,7 +1049,6 @@ func (c *STS) GetSessionTokenWithContext(ctx aws.Context, input *GetSessionToken return out, req.Send() } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleRequest type AssumeRoleInput struct { _ struct{} `type:"structure"` @@ -1241,7 +1240,6 @@ func (s *AssumeRoleInput) SetTokenCode(v string) *AssumeRoleInput { // Contains the response to a successful AssumeRole request, including temporary // AWS credentials that can be used to make AWS requests. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleResponse type AssumeRoleOutput struct { _ struct{} `type:"structure"` @@ -1295,7 +1293,6 @@ func (s *AssumeRoleOutput) SetPackedPolicySize(v int64) *AssumeRoleOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAMLRequest type AssumeRoleWithSAMLInput struct { _ struct{} `type:"structure"` @@ -1436,7 +1433,6 @@ func (s *AssumeRoleWithSAMLInput) SetSAMLAssertion(v string) *AssumeRoleWithSAML // Contains the response to a successful AssumeRoleWithSAML request, including // temporary AWS credentials that can be used to make AWS requests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithSAMLResponse type AssumeRoleWithSAMLOutput struct { _ struct{} `type:"structure"` @@ -1548,7 +1544,6 @@ func (s *AssumeRoleWithSAMLOutput) SetSubjectType(v string) *AssumeRoleWithSAMLO return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentityRequest type AssumeRoleWithWebIdentityInput struct { _ struct{} `type:"structure"` @@ -1711,7 +1706,6 @@ func (s *AssumeRoleWithWebIdentityInput) SetWebIdentityToken(v string) *AssumeRo // Contains the response to a successful AssumeRoleWithWebIdentity request, // including temporary AWS credentials that can be used to make AWS requests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumeRoleWithWebIdentityResponse type AssumeRoleWithWebIdentityOutput struct { _ struct{} `type:"structure"` @@ -1804,7 +1798,6 @@ func (s *AssumeRoleWithWebIdentityOutput) SetSubjectFromWebIdentityToken(v strin // The identifiers for the temporary security credentials that the operation // returns. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/AssumedRoleUser type AssumedRoleUser struct { _ struct{} `type:"structure"` @@ -1847,7 +1840,6 @@ func (s *AssumedRoleUser) SetAssumedRoleId(v string) *AssumedRoleUser { } // AWS credentials for API authentication. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/Credentials type Credentials struct { _ struct{} `type:"structure"` @@ -1906,7 +1898,6 @@ func (s *Credentials) SetSessionToken(v string) *Credentials { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessageRequest type DecodeAuthorizationMessageInput struct { _ struct{} `type:"structure"` @@ -1951,7 +1942,6 @@ func (s *DecodeAuthorizationMessageInput) SetEncodedMessage(v string) *DecodeAut // A document that contains additional information about the authorization status // of a request from an encoded message that is returned in response to an AWS // request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/DecodeAuthorizationMessageResponse type DecodeAuthorizationMessageOutput struct { _ struct{} `type:"structure"` @@ -1976,7 +1966,6 @@ func (s *DecodeAuthorizationMessageOutput) SetDecodedMessage(v string) *DecodeAu } // Identifiers for the federated user that is associated with the credentials. 
-// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/FederatedUser type FederatedUser struct { _ struct{} `type:"structure"` @@ -2017,7 +2006,6 @@ func (s *FederatedUser) SetFederatedUserId(v string) *FederatedUser { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentityRequest type GetCallerIdentityInput struct { _ struct{} `type:"structure"` } @@ -2034,7 +2022,6 @@ func (s GetCallerIdentityInput) GoString() string { // Contains the response to a successful GetCallerIdentity request, including // information about the entity making the request. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetCallerIdentityResponse type GetCallerIdentityOutput struct { _ struct{} `type:"structure"` @@ -2080,7 +2067,6 @@ func (s *GetCallerIdentityOutput) SetUserId(v string) *GetCallerIdentityOutput { return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationTokenRequest type GetFederationTokenInput struct { _ struct{} `type:"structure"` @@ -2189,7 +2175,6 @@ func (s *GetFederationTokenInput) SetPolicy(v string) *GetFederationTokenInput { // Contains the response to a successful GetFederationToken request, including // temporary AWS credentials that can be used to make AWS requests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetFederationTokenResponse type GetFederationTokenOutput struct { _ struct{} `type:"structure"` @@ -2242,7 +2227,6 @@ func (s *GetFederationTokenOutput) SetPackedPolicySize(v int64) *GetFederationTo return s } -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionTokenRequest type GetSessionTokenInput struct { _ struct{} `type:"structure"` @@ -2327,7 +2311,6 @@ func (s *GetSessionTokenInput) SetTokenCode(v string) *GetSessionTokenInput { // Contains the response to a successful GetSessionToken request, including // temporary AWS credentials that can be used to make AWS requests. -// Please also see https://docs.aws.amazon.com/goto/WebAPI/sts-2011-06-15/GetSessionTokenResponse type GetSessionTokenOutput struct { _ struct{} `type:"structure"` diff --git a/vendor/github.com/aws/aws-sdk-go/service/sts/doc.go b/vendor/github.com/aws/aws-sdk-go/service/sts/doc.go index a43fa8055..ef681ab0c 100644 --- a/vendor/github.com/aws/aws-sdk-go/service/sts/doc.go +++ b/vendor/github.com/aws/aws-sdk-go/service/sts/doc.go @@ -56,7 +56,7 @@ // // Using the Client // -// To AWS Security Token Service with the SDK use the New function to create +// To contact AWS Security Token Service with the SDK use the New function to create // a new service client. With that client you can make API requests to the service. // These clients are safe to use concurrently. // diff --git a/vendor/github.com/creack/goselect/Dockerfile b/vendor/github.com/creack/goselect/Dockerfile new file mode 100644 index 000000000..d03b5a9d9 --- /dev/null +++ b/vendor/github.com/creack/goselect/Dockerfile @@ -0,0 +1,5 @@ +FROM google/golang:stable +MAINTAINER Guillaume J. Charmes +CMD /tmp/a.out +ADD . /src +RUN cd /src && go build -o /tmp/a.out diff --git a/vendor/github.com/creack/goselect/LICENSE b/vendor/github.com/creack/goselect/LICENSE new file mode 100644 index 000000000..95c600af1 --- /dev/null +++ b/vendor/github.com/creack/goselect/LICENSE @@ -0,0 +1,22 @@ +The MIT License (MIT) + +Copyright (c) 2014 Guillaume J. 
Charmes + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + diff --git a/vendor/github.com/creack/goselect/README.md b/vendor/github.com/creack/goselect/README.md new file mode 100644 index 000000000..d5d06510e --- /dev/null +++ b/vendor/github.com/creack/goselect/README.md @@ -0,0 +1,30 @@ +# go-select + +select(2) implementation in Go + +## Supported platforms + +| | 386 | amd64 | arm | arm64 | +|---------------|-----|-------|-----|-------| +| **linux** | yes | yes | yes | yes | +| **darwin** | yes | yes | n/a | ?? | +| **freebsd** | yes | yes | yes | ?? | +| **openbsd** | yes | yes | yes | ?? | +| **netbsd** | yes | yes | yes | ?? | +| **dragonfly** | n/a | yes | n/a | ?? | +| **solaris** | n/a | no | n/a | ?? | +| **plan9** | no | no | n/a | ?? | +| **windows** | yes | yes | n/a | ?? | +| **android** | n/a | n/a | no | ?? | + +*n/a: platform not supported by Go + +Go on `plan9` and `solaris` do not implement `syscall.Select` not `syscall.SYS_SELECT`. 
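(Editor's note, not part of the upstream README or this diff: the sketch below is a minimal, illustrative use of the API this vendored package adds — `FDSet` from `fdset.go` and `Select` from `select.go`, both shown later in this diff. It assumes a POSIX platform where `goselect.Select` is backed by `syscall.Select`; the choice of stdin as the watched descriptor and the one-second timeout are arbitrary.)

```go
// Minimal sketch (illustrative only): wait up to one second for stdin to
// become readable using the FDSet/Select API vendored in this diff.
package main

import (
	"fmt"
	"os"
	"time"

	"github.com/creack/goselect"
)

func main() {
	fd := os.Stdin.Fd()

	readFDs := &goselect.FDSet{}
	readFDs.Zero()  // start from an empty descriptor set
	readFDs.Set(fd) // watch stdin for readability

	// nil write/exception sets; one-second timeout.
	if err := goselect.Select(int(fd)+1, readFDs, nil, nil, time.Second); err != nil {
		fmt.Fprintln(os.Stderr, "select:", err)
		return
	}
	if readFDs.IsSet(fd) {
		fmt.Println("stdin is readable")
	} else {
		fmt.Println("timed out")
	}
}
```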
+ +## Cross compile + +Using davecheney's https://github.com/davecheney/golang-crosscompile + +``` +export PLATFORMS="darwin/386 darwin/amd64 freebsd/386 freebsd/amd64 freebsd/arm linux/386 linux/amd64 linux/arm windows/386 windows/amd64 openbsd/386 openbsd/amd64 netbsd/386 netbsd/amd64 dragonfly/amd64 plan9/386 plan9/amd64 solaris/amd64" +``` diff --git a/vendor/github.com/creack/goselect/fdset.go b/vendor/github.com/creack/goselect/fdset.go new file mode 100644 index 000000000..2ff3f6c7a --- /dev/null +++ b/vendor/github.com/creack/goselect/fdset.go @@ -0,0 +1,33 @@ +// +build !freebsd,!windows,!plan9 + +package goselect + +import "syscall" + +const FD_SETSIZE = syscall.FD_SETSIZE + +// FDSet wraps syscall.FdSet with convenience methods +type FDSet syscall.FdSet + +// Set adds the fd to the set +func (fds *FDSet) Set(fd uintptr) { + fds.Bits[fd/NFDBITS] |= (1 << (fd % NFDBITS)) +} + +// Clear remove the fd from the set +func (fds *FDSet) Clear(fd uintptr) { + fds.Bits[fd/NFDBITS] &^= (1 << (fd % NFDBITS)) +} + +// IsSet check if the given fd is set +func (fds *FDSet) IsSet(fd uintptr) bool { + return fds.Bits[fd/NFDBITS]&(1<<(fd%NFDBITS)) != 0 +} + +// Keep a null set to avoid reinstatiation +var nullFdSet = &FDSet{} + +// Zero empties the Set +func (fds *FDSet) Zero() { + copy(fds.Bits[:], (nullFdSet).Bits[:]) +} diff --git a/vendor/github.com/creack/goselect/fdset_32.go b/vendor/github.com/creack/goselect/fdset_32.go new file mode 100644 index 000000000..8b90d21a9 --- /dev/null +++ b/vendor/github.com/creack/goselect/fdset_32.go @@ -0,0 +1,10 @@ +// +build darwin openbsd netbsd 386 arm + +package goselect + +// darwin, netbsd and openbsd uses uint32 on both amd64 and 386 + +const ( + // NFDBITS is the amount of bits per mask + NFDBITS = 4 * 8 +) diff --git a/vendor/github.com/creack/goselect/fdset_64.go b/vendor/github.com/creack/goselect/fdset_64.go new file mode 100644 index 000000000..71b7c9cc3 --- /dev/null +++ b/vendor/github.com/creack/goselect/fdset_64.go @@ -0,0 +1,11 @@ +// +build !darwin,!netbsd,!openbsd +// +build amd64 arm64 ppc64le + +package goselect + +// darwin, netbsd and openbsd uses uint32 on both amd64 and 386 + +const ( + // NFDBITS is the amount of bits per mask + NFDBITS = 8 * 8 +) diff --git a/vendor/github.com/creack/goselect/fdset_doc.go b/vendor/github.com/creack/goselect/fdset_doc.go new file mode 100644 index 000000000..d9f15a1ce --- /dev/null +++ b/vendor/github.com/creack/goselect/fdset_doc.go @@ -0,0 +1,93 @@ +package goselect + +/** +From: XCode's MacOSX10.10.sdk/System/Library/Frameworks/Kernel.framework/Versions/A/Headers/sys/select.h +-- +// darwin/amd64 / 386 +sizeof(__int32_t) == 4 +-- + +typedef __int32_t __fd_mask; + +#define FD_SETSIZE 1024 +#define __NFDBITS (sizeof(__fd_mask) * 8) +#define __howmany(x, y) ((((x) % (y)) == 0) ? 
((x) / (y)) : (((x) / (y)) + 1)) + +typedef struct fd_set { + __fd_mask fds_bits[__howmany(__FD_SETSIZE, __NFDBITS)]; +} fd_set; + +#define __FD_MASK(n) ((__fd_mask)1 << ((n) % __NFDBITS)) +#define FD_SET(n, p) ((p)->fds_bits[(n)/__NFDBITS] |= __FD_MASK(n)) +#define FD_CLR(n, p) ((p)->fds_bits[(n)/__NFDBITS] &= ~__FD_MASK(n)) +#define FD_ISSET(n, p) (((p)->fds_bits[(n)/__NFDBITS] & __FD_MASK(n)) != 0) +*/ + +/** +From: /usr/include/i386-linux-gnu/sys/select.h +-- +// linux/i686 +sizeof(long int) == 4 +-- + +typedef long int __fd_mask; + +#define FD_SETSIZE 1024 +#define __NFDBITS (sizeof(__fd_mask) * 8) + + +typedef struct fd_set { + __fd_mask fds_bits[__FD_SETSIZE / __NFDBITS]; +} fd_set; + +#define __FD_MASK(n) ((__fd_mask)1 << ((n) % __NFDBITS)) +#define FD_SET(n, p) ((p)->fds_bits[(n)/__NFDBITS] |= __FD_MASK(n)) +#define FD_CLR(n, p) ((p)->fds_bits[(n)/__NFDBITS] &= ~__FD_MASK(n)) +#define FD_ISSET(n, p) (((p)->fds_bits[(n)/__NFDBITS] & __FD_MASK(n)) != 0) +*/ + +/** +From: /usr/include/x86_64-linux-gnu/sys/select.h +-- +// linux/amd64 +sizeof(long int) == 8 +-- + +typedef long int __fd_mask; + +#define FD_SETSIZE 1024 +#define __NFDBITS (sizeof(__fd_mask) * 8) + + +typedef struct fd_set { + __fd_mask fds_bits[__FD_SETSIZE / __NFDBITS]; +} fd_set; + +#define __FD_MASK(n) ((__fd_mask)1 << ((n) % __NFDBITS)) +#define FD_SET(n, p) ((p)->fds_bits[(n)/__NFDBITS] |= __FD_MASK(n)) +#define FD_CLR(n, p) ((p)->fds_bits[(n)/__NFDBITS] &= ~__FD_MASK(n)) +#define FD_ISSET(n, p) (((p)->fds_bits[(n)/__NFDBITS] & __FD_MASK(n)) != 0) +*/ + +/** +From: /usr/include/sys/select.h +-- +// freebsd/amd64 +sizeof(unsigned long) == 8 +-- + +typedef unsigned long __fd_mask; + +#define FD_SETSIZE 1024U +#define __NFDBITS (sizeof(__fd_mask) * 8) +#define _howmany(x, y) (((x) + ((y) - 1)) / (y)) + +typedef struct fd_set { + __fd_mask fds_bits[_howmany(FD_SETSIZE, __NFDBITS)]; +} fd_set; + +#define __FD_MASK(n) ((__fd_mask)1 << ((n) % __NFDBITS)) +#define FD_SET(n, p) ((p)->fds_bits[(n)/__NFDBITS] |= __FD_MASK(n)) +#define FD_CLR(n, p) ((p)->fds_bits[(n)/__NFDBITS] &= ~__FD_MASK(n)) +#define FD_ISSET(n, p) (((p)->fds_bits[(n)/__NFDBITS] & __FD_MASK(n)) != 0) +*/ diff --git a/vendor/github.com/creack/goselect/fdset_freebsd.go b/vendor/github.com/creack/goselect/fdset_freebsd.go new file mode 100644 index 000000000..03f7b9127 --- /dev/null +++ b/vendor/github.com/creack/goselect/fdset_freebsd.go @@ -0,0 +1,33 @@ +// +build freebsd + +package goselect + +import "syscall" + +const FD_SETSIZE = syscall.FD_SETSIZE + +// FDSet wraps syscall.FdSet with convenience methods +type FDSet syscall.FdSet + +// Set adds the fd to the set +func (fds *FDSet) Set(fd uintptr) { + fds.X__fds_bits[fd/NFDBITS] |= (1 << (fd % NFDBITS)) +} + +// Clear remove the fd from the set +func (fds *FDSet) Clear(fd uintptr) { + fds.X__fds_bits[fd/NFDBITS] &^= (1 << (fd % NFDBITS)) +} + +// IsSet check if the given fd is set +func (fds *FDSet) IsSet(fd uintptr) bool { + return fds.X__fds_bits[fd/NFDBITS]&(1<<(fd%NFDBITS)) != 0 +} + +// Keep a null set to avoid reinstatiation +var nullFdSet = &FDSet{} + +// Zero empties the Set +func (fds *FDSet) Zero() { + copy(fds.X__fds_bits[:], (nullFdSet).X__fds_bits[:]) +} diff --git a/vendor/github.com/creack/goselect/fdset_unsupported.go b/vendor/github.com/creack/goselect/fdset_unsupported.go new file mode 100644 index 000000000..cbd8d5880 --- /dev/null +++ b/vendor/github.com/creack/goselect/fdset_unsupported.go @@ -0,0 +1,20 @@ +// +build plan9 + +package goselect + +const FD_SETSIZE = 0 + +// FDSet wraps 
syscall.FdSet with convenience methods +type FDSet struct{} + +// Set adds the fd to the set +func (fds *FDSet) Set(fd uintptr) {} + +// Clear remove the fd from the set +func (fds *FDSet) Clear(fd uintptr) {} + +// IsSet check if the given fd is set +func (fds *FDSet) IsSet(fd uintptr) bool { return false } + +// Zero empties the Set +func (fds *FDSet) Zero() {} diff --git a/vendor/github.com/creack/goselect/fdset_windows.go b/vendor/github.com/creack/goselect/fdset_windows.go new file mode 100644 index 000000000..ee3491975 --- /dev/null +++ b/vendor/github.com/creack/goselect/fdset_windows.go @@ -0,0 +1,57 @@ +// +build windows + +package goselect + +import "syscall" + +const FD_SETSIZE = 64 + +// FDSet extracted from mingw libs source code +type FDSet struct { + fd_count uint + fd_array [FD_SETSIZE]uintptr +} + +// Set adds the fd to the set +func (fds *FDSet) Set(fd uintptr) { + var i uint + for i = 0; i < fds.fd_count; i++ { + if fds.fd_array[i] == fd { + break + } + } + if i == fds.fd_count { + if fds.fd_count < FD_SETSIZE { + fds.fd_array[i] = fd + fds.fd_count++ + } + } +} + +// Clear remove the fd from the set +func (fds *FDSet) Clear(fd uintptr) { + var i uint + for i = 0; i < fds.fd_count; i++ { + if fds.fd_array[i] == fd { + for i < fds.fd_count-1 { + fds.fd_array[i] = fds.fd_array[i+1] + i++ + } + fds.fd_count-- + break + } + } +} + +// IsSet check if the given fd is set +func (fds *FDSet) IsSet(fd uintptr) bool { + if isset, err := __WSAFDIsSet(syscall.Handle(fd), fds); err == nil && isset != 0 { + return true + } + return false +} + +// Zero empties the Set +func (fds *FDSet) Zero() { + fds.fd_count = 0 +} diff --git a/vendor/github.com/creack/goselect/select.go b/vendor/github.com/creack/goselect/select.go new file mode 100644 index 000000000..7f2875e73 --- /dev/null +++ b/vendor/github.com/creack/goselect/select.go @@ -0,0 +1,28 @@ +package goselect + +import ( + "syscall" + "time" +) + +// Select wraps syscall.Select with Go types +func Select(n int, r, w, e *FDSet, timeout time.Duration) error { + var timeval *syscall.Timeval + if timeout >= 0 { + t := syscall.NsecToTimeval(timeout.Nanoseconds()) + timeval = &t + } + + return sysSelect(n, r, w, e, timeval) +} + +// RetrySelect wraps syscall.Select with Go types, and retries a number of times, with a given retryDelay. 
+func RetrySelect(n int, r, w, e *FDSet, timeout time.Duration, retries int, retryDelay time.Duration) (err error) { + for i := 0; i < retries; i++ { + if err = Select(n, r, w, e, timeout); err != syscall.EINTR { + return err + } + time.Sleep(retryDelay) + } + return err +} diff --git a/vendor/github.com/creack/goselect/select_linux.go b/vendor/github.com/creack/goselect/select_linux.go new file mode 100644 index 000000000..acd569e9d --- /dev/null +++ b/vendor/github.com/creack/goselect/select_linux.go @@ -0,0 +1,10 @@ +// +build linux + +package goselect + +import "syscall" + +func sysSelect(n int, r, w, e *FDSet, timeout *syscall.Timeval) error { + _, err := syscall.Select(n, (*syscall.FdSet)(r), (*syscall.FdSet)(w), (*syscall.FdSet)(e), timeout) + return err +} diff --git a/vendor/github.com/creack/goselect/select_other.go b/vendor/github.com/creack/goselect/select_other.go new file mode 100644 index 000000000..6c8208147 --- /dev/null +++ b/vendor/github.com/creack/goselect/select_other.go @@ -0,0 +1,9 @@ +// +build !linux,!windows,!plan9,!solaris + +package goselect + +import "syscall" + +func sysSelect(n int, r, w, e *FDSet, timeout *syscall.Timeval) error { + return syscall.Select(n, (*syscall.FdSet)(r), (*syscall.FdSet)(w), (*syscall.FdSet)(e), timeout) +} diff --git a/vendor/github.com/creack/goselect/select_unsupported.go b/vendor/github.com/creack/goselect/select_unsupported.go new file mode 100644 index 000000000..bea0ad94a --- /dev/null +++ b/vendor/github.com/creack/goselect/select_unsupported.go @@ -0,0 +1,16 @@ +// +build plan9 solaris + +package goselect + +import ( + "fmt" + "runtime" + "syscall" +) + +// ErrUnsupported . +var ErrUnsupported = fmt.Errorf("Platofrm %s/%s unsupported", runtime.GOOS, runtime.GOARCH) + +func sysSelect(n int, r, w, e *FDSet, timeout *syscall.Timeval) error { + return ErrUnsupported +} diff --git a/vendor/github.com/creack/goselect/select_windows.go b/vendor/github.com/creack/goselect/select_windows.go new file mode 100644 index 000000000..0fb70d5d2 --- /dev/null +++ b/vendor/github.com/creack/goselect/select_windows.go @@ -0,0 +1,13 @@ +// +build windows + +package goselect + +import "syscall" + +//sys _select(nfds int, readfds *FDSet, writefds *FDSet, exceptfds *FDSet, timeout *syscall.Timeval) (total int, err error) = ws2_32.select +//sys __WSAFDIsSet(handle syscall.Handle, fdset *FDSet) (isset int, err error) = ws2_32.__WSAFDIsSet + +func sysSelect(n int, r, w, e *FDSet, timeout *syscall.Timeval) error { + _, err := _select(n, r, w, e, timeout) + return err +} diff --git a/vendor/github.com/creack/goselect/test_crosscompile.sh b/vendor/github.com/creack/goselect/test_crosscompile.sh new file mode 100755 index 000000000..533ca6647 --- /dev/null +++ b/vendor/github.com/creack/goselect/test_crosscompile.sh @@ -0,0 +1,17 @@ +export GOOS=linux; export GOARCH=arm; echo $GOOS/$GOARCH; go build +export GOOS=linux; export GOARCH=arm64; echo $GOOS/$GOARCH; go build +export GOOS=linux; export GOARCH=amd64; echo $GOOS/$GOARCH; go build +export GOOS=linux; export GOARCH=386; echo $GOOS/$GOARCH; go build +export GOOS=darwin; export GOARCH=arm; echo $GOOS/$GOARCH; go build +export GOOS=darwin; export GOARCH=arm64; echo $GOOS/$GOARCH; go build +export GOOS=darwin; export GOARCH=amd64; echo $GOOS/$GOARCH; go build +export GOOS=darwin; export GOARCH=386; echo $GOOS/$GOARCH; go build +export GOOS=freebsd; export GOARCH=arm; echo $GOOS/$GOARCH; go build +export GOOS=freebsd; export GOARCH=amd64; echo $GOOS/$GOARCH; go build +export GOOS=freebsd; export 
GOARCH=386; echo $GOOS/$GOARCH; go build +export GOOS=openbsd; export GOARCH=arm; echo $GOOS/$GOARCH; go build +export GOOS=openbsd; export GOARCH=amd64; echo $GOOS/$GOARCH; go build +export GOOS=openbsd; export GOARCH=386; echo $GOOS/$GOARCH; go build +export GOOS=netbsd; export GOARCH=arm; echo $GOOS/$GOARCH; go build +export GOOS=netbsd; export GOARCH=amd64; echo $GOOS/$GOARCH; go build +export GOOS=netbsd; export GOARCH=386; echo $GOOS/$GOARCH; go build diff --git a/vendor/github.com/creack/goselect/zselect_windows.go b/vendor/github.com/creack/goselect/zselect_windows.go new file mode 100644 index 000000000..c3b10e1d0 --- /dev/null +++ b/vendor/github.com/creack/goselect/zselect_windows.go @@ -0,0 +1,41 @@ +// MACHINE GENERATED BY 'go generate' COMMAND; DO NOT EDIT + +package goselect + +import "unsafe" +import "syscall" + +var _ unsafe.Pointer + +var ( + modws2_32 = syscall.NewLazyDLL("ws2_32.dll") + + procselect = modws2_32.NewProc("select") + proc__WSAFDIsSet = modws2_32.NewProc("__WSAFDIsSet") +) + +func _select(nfds int, readfds *FDSet, writefds *FDSet, exceptfds *FDSet, timeout *syscall.Timeval) (total int, err error) { + r0, _, e1 := syscall.Syscall6(procselect.Addr(), 5, uintptr(nfds), uintptr(unsafe.Pointer(readfds)), uintptr(unsafe.Pointer(writefds)), uintptr(unsafe.Pointer(exceptfds)), uintptr(unsafe.Pointer(timeout)), 0) + total = int(r0) + if total == 0 { + if e1 != 0 { + err = error(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func __WSAFDIsSet(handle syscall.Handle, fdset *FDSet) (isset int, err error) { + r0, _, e1 := syscall.Syscall(proc__WSAFDIsSet.Addr(), 2, uintptr(handle), uintptr(unsafe.Pointer(fdset)), 0) + isset = int(r0) + if isset == 0 { + if e1 != 0 { + err = error(e1) + } else { + err = syscall.EINVAL + } + } + return +} diff --git a/vendor/github.com/davecgh/go-spew/LICENSE b/vendor/github.com/davecgh/go-spew/LICENSE index bb6733231..c83641619 100644 --- a/vendor/github.com/davecgh/go-spew/LICENSE +++ b/vendor/github.com/davecgh/go-spew/LICENSE @@ -1,6 +1,6 @@ ISC License -Copyright (c) 2012-2013 Dave Collins +Copyright (c) 2012-2016 Dave Collins Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above diff --git a/vendor/github.com/davecgh/go-spew/spew/bypass.go b/vendor/github.com/davecgh/go-spew/spew/bypass.go index d42a0bc4a..8a4a6589a 100644 --- a/vendor/github.com/davecgh/go-spew/spew/bypass.go +++ b/vendor/github.com/davecgh/go-spew/spew/bypass.go @@ -1,4 +1,4 @@ -// Copyright (c) 2015 Dave Collins +// Copyright (c) 2015-2016 Dave Collins // // Permission to use, copy, modify, and distribute this software for any // purpose with or without fee is hereby granted, provided that the above diff --git a/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go b/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go deleted file mode 100644 index e47a4e795..000000000 --- a/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) 2015 Dave Collins -// -// Permission to use, copy, modify, and distribute this software for any -// purpose with or without fee is hereby granted, provided that the above -// copyright notice and this permission notice appear in all copies. -// -// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES -// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -// MERCHANTABILITY AND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR -// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF -// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -// NOTE: Due to the following build constraints, this file will only be compiled -// when the code is running on Google App Engine, compiled by GopherJS, or -// "-tags safe" is added to the go build command line. The "disableunsafe" -// tag is deprecated and thus should not be used. -// +build js appengine safe disableunsafe - -package spew - -import "reflect" - -const ( - // UnsafeDisabled is a build-time constant which specifies whether or - // not access to the unsafe package is available. - UnsafeDisabled = true -) - -// unsafeReflectValue typically converts the passed reflect.Value into a one -// that bypasses the typical safety restrictions preventing access to -// unaddressable and unexported data. However, doing this relies on access to -// the unsafe package. This is a stub version which simply returns the passed -// reflect.Value when the unsafe package is not available. -func unsafeReflectValue(v reflect.Value) reflect.Value { - return v -} diff --git a/vendor/github.com/davecgh/go-spew/spew/common.go b/vendor/github.com/davecgh/go-spew/spew/common.go index 14f02dc15..7c519ff47 100644 --- a/vendor/github.com/davecgh/go-spew/spew/common.go +++ b/vendor/github.com/davecgh/go-spew/spew/common.go @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013 Dave Collins + * Copyright (c) 2013-2016 Dave Collins * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above diff --git a/vendor/github.com/davecgh/go-spew/spew/config.go b/vendor/github.com/davecgh/go-spew/spew/config.go index 555282723..2e3d22f31 100644 --- a/vendor/github.com/davecgh/go-spew/spew/config.go +++ b/vendor/github.com/davecgh/go-spew/spew/config.go @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013 Dave Collins + * Copyright (c) 2013-2016 Dave Collins * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -67,6 +67,15 @@ type ConfigState struct { // Google App Engine or with the "safe" build tag specified. DisablePointerMethods bool + // DisablePointerAddresses specifies whether to disable the printing of + // pointer addresses. This is useful when diffing data structures in tests. + DisablePointerAddresses bool + + // DisableCapacities specifies whether to disable the printing of capacities + // for arrays, slices, maps and channels. This is useful when diffing + // data structures in tests. + DisableCapacities bool + // ContinueOnMethod specifies whether or not recursion should continue once // a custom error or Stringer interface is invoked. 
The default, false, // means it will print the results of invoking the custom error or Stringer diff --git a/vendor/github.com/davecgh/go-spew/spew/doc.go b/vendor/github.com/davecgh/go-spew/spew/doc.go index 5be0c4060..aacaac6f1 100644 --- a/vendor/github.com/davecgh/go-spew/spew/doc.go +++ b/vendor/github.com/davecgh/go-spew/spew/doc.go @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013 Dave Collins + * Copyright (c) 2013-2016 Dave Collins * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -91,6 +91,15 @@ The following configuration options are available: which only accept pointer receivers from non-pointer variables. Pointer method invocation is enabled by default. + * DisablePointerAddresses + DisablePointerAddresses specifies whether to disable the printing of + pointer addresses. This is useful when diffing data structures in tests. + + * DisableCapacities + DisableCapacities specifies whether to disable the printing of + capacities for arrays, slices, maps and channels. This is useful when + diffing data structures in tests. + * ContinueOnMethod Enables recursion into types after invoking error and Stringer interface methods. Recursion after method invocation is disabled by default. diff --git a/vendor/github.com/davecgh/go-spew/spew/dump.go b/vendor/github.com/davecgh/go-spew/spew/dump.go index a0ff95e27..df1d582a7 100644 --- a/vendor/github.com/davecgh/go-spew/spew/dump.go +++ b/vendor/github.com/davecgh/go-spew/spew/dump.go @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013 Dave Collins + * Copyright (c) 2013-2016 Dave Collins * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above @@ -129,7 +129,7 @@ func (d *dumpState) dumpPtr(v reflect.Value) { d.w.Write(closeParenBytes) // Display pointer information. 
- if len(pointerChain) > 0 { + if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 { d.w.Write(openParenBytes) for i, addr := range pointerChain { if i > 0 { @@ -282,13 +282,13 @@ func (d *dumpState) dump(v reflect.Value) { case reflect.Map, reflect.String: valueLen = v.Len() } - if valueLen != 0 || valueCap != 0 { + if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 { d.w.Write(openParenBytes) if valueLen != 0 { d.w.Write(lenEqualsBytes) printInt(d.w, int64(valueLen), 10) } - if valueCap != 0 { + if !d.cs.DisableCapacities && valueCap != 0 { if valueLen != 0 { d.w.Write(spaceBytes) } diff --git a/vendor/github.com/davecgh/go-spew/spew/format.go b/vendor/github.com/davecgh/go-spew/spew/format.go index ecf3b80e2..c49875bac 100644 --- a/vendor/github.com/davecgh/go-spew/spew/format.go +++ b/vendor/github.com/davecgh/go-spew/spew/format.go @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013 Dave Collins + * Copyright (c) 2013-2016 Dave Collins * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above diff --git a/vendor/github.com/davecgh/go-spew/spew/spew.go b/vendor/github.com/davecgh/go-spew/spew/spew.go index d8233f542..32c0e3388 100644 --- a/vendor/github.com/davecgh/go-spew/spew/spew.go +++ b/vendor/github.com/davecgh/go-spew/spew/spew.go @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013 Dave Collins + * Copyright (c) 2013-2016 Dave Collins * * Permission to use, copy, modify, and distribute this software for any * purpose with or without fee is hereby granted, provided that the above diff --git a/vendor/github.com/denverdino/aliyungo/common/client.go b/vendor/github.com/denverdino/aliyungo/common/client.go old mode 100755 new mode 100644 index a59789fee..436b239b2 --- a/vendor/github.com/denverdino/aliyungo/common/client.go +++ b/vendor/github.com/denverdino/aliyungo/common/client.go @@ -3,10 +3,14 @@ package common import ( "bytes" "encoding/json" + "errors" + "fmt" "io/ioutil" "log" "net/http" "net/url" + "os" + "strconv" "strings" "time" @@ -25,6 +29,7 @@ type UnderlineString string type Client struct { AccessKeyId string //Access Key Id AccessKeySecret string //Access Key Secret + securityToken string debug bool httpClient *http.Client endpoint string @@ -32,29 +37,67 @@ type Client struct { serviceCode string regionID Region businessInfo string - userAgent string + userAgent string } -// NewClient creates a new instance of ECS client +// Initialize properties of a client instance func (client *Client) Init(endpoint, version, accessKeyId, accessKeySecret string) { client.AccessKeyId = accessKeyId - client.AccessKeySecret = accessKeySecret + "&" + ak := accessKeySecret + if !strings.HasSuffix(ak, "&") { + ak += "&" + } + client.AccessKeySecret = ak client.debug = false - client.httpClient = &http.Client{} + handshakeTimeout, err := strconv.Atoi(os.Getenv("TLSHandshakeTimeout")) + if err != nil { + handshakeTimeout = 0 + } + if handshakeTimeout == 0 { + client.httpClient = &http.Client{} + } else { + t := &http.Transport{ + TLSHandshakeTimeout: time.Duration(handshakeTimeout) * time.Second} + client.httpClient = &http.Client{Transport: t} + } client.endpoint = endpoint client.version = version } +// Initialize properties of a client instance including regionID func (client *Client) NewInit(endpoint, version, accessKeyId, accessKeySecret, serviceCode string, regionID Region) { client.Init(endpoint, version, accessKeyId, accessKeySecret) client.serviceCode = serviceCode client.regionID = regionID - 
client.setEndpointByLocation(regionID, serviceCode, accessKeyId, accessKeySecret) + client.setEndpointByLocation(regionID, serviceCode, accessKeyId, accessKeySecret, client.securityToken) +} + +// Intialize client object when all properties are ready +func (client *Client) InitClient() *Client { + client.debug = false + handshakeTimeout, err := strconv.Atoi(os.Getenv("TLSHandshakeTimeout")) + if err != nil { + handshakeTimeout = 0 + } + if handshakeTimeout == 0 { + client.httpClient = &http.Client{} + } else { + t := &http.Transport{ + TLSHandshakeTimeout: time.Duration(handshakeTimeout) * time.Second} + client.httpClient = &http.Client{Transport: t} + } + client.setEndpointByLocation(client.regionID, client.serviceCode, client.AccessKeyId, client.AccessKeySecret, client.securityToken) + return client +} + +func (client *Client) NewInitForAssumeRole(endpoint, version, accessKeyId, accessKeySecret, serviceCode string, regionID Region, securityToken string) { + client.NewInit(endpoint, version, accessKeyId, accessKeySecret, serviceCode, regionID) + client.securityToken = securityToken } //NewClient using location service -func (client *Client) setEndpointByLocation(region Region, serviceCode, accessKeyId, accessKeySecret string) { - locationClient := NewLocationClient(accessKeyId, accessKeySecret) +func (client *Client) setEndpointByLocation(region Region, serviceCode, accessKeyId, accessKeySecret, securityToken string) { + locationClient := NewLocationClient(accessKeyId, accessKeySecret, securityToken) ep := locationClient.DescribeOpenAPIEndpoint(region, serviceCode) if ep == "" { ep = loadEndpointFromFile(region, serviceCode) @@ -65,6 +108,95 @@ func (client *Client) setEndpointByLocation(region Region, serviceCode, accessKe } } +// Ensure all necessary properties are valid +func (client *Client) ensureProperties() error { + var msg string + + if client.endpoint == "" { + msg = fmt.Sprintf("endpoint cannot be empty!") + } else if client.version == "" { + msg = fmt.Sprintf("version cannot be empty!") + } else if client.AccessKeyId == "" { + msg = fmt.Sprintf("AccessKeyId cannot be empty!") + } else if client.AccessKeySecret == "" { + msg = fmt.Sprintf("AccessKeySecret cannot be empty!") + } + + if msg != "" { + return errors.New(msg) + } + + return nil +} + +// ---------------------------------------------------- +// WithXXX methods +// ---------------------------------------------------- + +// WithEndpoint sets custom endpoint +func (client *Client) WithEndpoint(endpoint string) *Client { + client.SetEndpoint(endpoint) + return client +} + +// WithVersion sets custom version +func (client *Client) WithVersion(version string) *Client { + client.SetVersion(version) + return client +} + +// WithRegionID sets Region ID +func (client *Client) WithRegionID(regionID Region) *Client { + client.SetRegionID(regionID) + return client +} + +//WithServiceCode sets serviceCode +func (client *Client) WithServiceCode(serviceCode string) *Client { + client.SetServiceCode(serviceCode) + return client +} + +// WithAccessKeyId sets new AccessKeyId +func (client *Client) WithAccessKeyId(id string) *Client { + client.SetAccessKeyId(id) + return client +} + +// WithAccessKeySecret sets new AccessKeySecret +func (client *Client) WithAccessKeySecret(secret string) *Client { + client.SetAccessKeySecret(secret) + return client +} + +// WithSecurityToken sets securityToken +func (client *Client) WithSecurityToken(securityToken string) *Client { + client.SetSecurityToken(securityToken) + return client +} + +// 
WithDebug sets debug mode to log the request/response message +func (client *Client) WithDebug(debug bool) *Client { + client.SetDebug(debug) + return client +} + +// WithBusinessInfo sets business info to log the request/response message +func (client *Client) WithBusinessInfo(businessInfo string) *Client { + client.SetBusinessInfo(businessInfo) + return client +} + +// WithUserAgent sets user agent to the request/response message +func (client *Client) WithUserAgent(userAgent string) *Client { + client.SetUserAgent(userAgent) + return client +} + +// ---------------------------------------------------- +// SetXXX methods +// ---------------------------------------------------- + // SetEndpoint sets custom endpoint func (client *Client) SetEndpoint(endpoint string) { client.endpoint = endpoint @@ -75,6 +207,7 @@ func (client *Client) SetVersion(version string) { client.version = version } +// SetEndpoint sets Region ID func (client *Client) SetRegionID(regionID Region) { client.regionID = regionID } @@ -113,11 +246,19 @@ func (client *Client) SetUserAgent(userAgent string) { client.userAgent = userAgent } +//set SecurityToken +func (client *Client) SetSecurityToken(securityToken string) { + client.securityToken = securityToken +} + // Invoke sends the raw HTTP request for ECS services func (client *Client) Invoke(action string, args interface{}, response interface{}) error { + if err := client.ensureProperties(); err != nil { + return err + } request := Request{} - request.init(client.version, action, client.AccessKeyId) + request.init(client.version, action, client.AccessKeyId, client.securityToken, client.regionID) query := util.ConvertToQueryValues(request) util.SetQueryValues(args, &query) @@ -137,7 +278,7 @@ func (client *Client) Invoke(action string, args interface{}, response interface // TODO move to util and add build val flag httpReq.Header.Set("X-SDK-Client", `AliyunGO/`+Version+client.businessInfo) - httpReq.Header.Set("User-Agent", httpReq.UserAgent()+ " " +client.userAgent) + httpReq.Header.Set("User-Agent", httpReq.UserAgent()+" "+client.userAgent) t0 := time.Now() httpResp, err := client.httpClient.Do(httpReq) @@ -185,9 +326,12 @@ func (client *Client) Invoke(action string, args interface{}, response interface // Invoke sends the raw HTTP request for ECS services func (client *Client) InvokeByFlattenMethod(action string, args interface{}, response interface{}) error { + if err := client.ensureProperties(); err != nil { + return err + } request := Request{} - request.init(client.version, action, client.AccessKeyId) + request.init(client.version, action, client.AccessKeyId, client.securityToken, client.regionID) query := util.ConvertToQueryValues(request) @@ -208,7 +352,7 @@ func (client *Client) InvokeByFlattenMethod(action string, args interface{}, res // TODO move to util and add build val flag httpReq.Header.Set("X-SDK-Client", `AliyunGO/`+Version+client.businessInfo) - httpReq.Header.Set("User-Agent", httpReq.UserAgent()+ " " +client.userAgent) + httpReq.Header.Set("User-Agent", httpReq.UserAgent()+" "+client.userAgent) t0 := time.Now() httpResp, err := client.httpClient.Do(httpReq) @@ -258,10 +402,12 @@ func (client *Client) InvokeByFlattenMethod(action string, args interface{}, res //改进了一下上面那个方法,可以使用各种Http方法 //2017.1.30 增加了一个path参数,用来拓展访问的地址 func (client *Client) InvokeByAnyMethod(method, action, path string, args interface{}, response interface{}) error { + if err := client.ensureProperties(); err != nil { + return err + } request := Request{} - 
request.init(client.version, action, client.AccessKeyId) - + request.init(client.version, action, client.AccessKeyId, client.securityToken, client.regionID) data := util.ConvertToQueryValues(request) util.SetQueryValues(args, &data) @@ -290,8 +436,7 @@ func (client *Client) InvokeByAnyMethod(method, action, path string, args interf // TODO move to util and add build val flag httpReq.Header.Set("X-SDK-Client", `AliyunGO/`+Version+client.businessInfo) - - httpReq.Header.Set("User-Agent", httpReq.Header.Get("User-Agent")+ " " +client.userAgent) + httpReq.Header.Set("User-Agent", httpReq.Header.Get("User-Agent")+" "+client.userAgent) t0 := time.Now() httpResp, err := client.httpClient.Do(httpReq) diff --git a/vendor/github.com/denverdino/aliyungo/common/endpoint.go b/vendor/github.com/denverdino/aliyungo/common/endpoint.go index 16bcbf9d6..786606cf3 100644 --- a/vendor/github.com/denverdino/aliyungo/common/endpoint.go +++ b/vendor/github.com/denverdino/aliyungo/common/endpoint.go @@ -18,6 +18,51 @@ const ( var ( endpoints = make(map[Region]map[string]string) + + SpecailEnpoints = map[Region]map[string]string{ + APNorthEast1: { + "ecs": "https://ecs.ap-northeast-1.aliyuncs.com", + "slb": "https://slb.ap-northeast-1.aliyuncs.com", + "rds": "https://rds.ap-northeast-1.aliyuncs.com", + "vpc": "https://vpc.ap-northeast-1.aliyuncs.com", + }, + APSouthEast2: { + "ecs": "https://ecs.ap-southeast-2.aliyuncs.com", + "slb": "https://slb.ap-southeast-2.aliyuncs.com", + "rds": "https://rds.ap-southeast-2.aliyuncs.com", + "vpc": "https://vpc.ap-southeast-2.aliyuncs.com", + }, + APSouthEast3: { + "ecs": "https://ecs.ap-southeast-3.aliyuncs.com", + "slb": "https://slb.ap-southeast-3.aliyuncs.com", + "rds": "https://rds.ap-southeast-3.aliyuncs.com", + "vpc": "https://vpc.ap-southeast-3.aliyuncs.com", + }, + MEEast1: { + "ecs": "https://ecs.me-east-1.aliyuncs.com", + "slb": "https://slb.me-east-1.aliyuncs.com", + "rds": "https://rds.me-east-1.aliyuncs.com", + "vpc": "https://vpc.me-east-1.aliyuncs.com", + }, + EUCentral1: { + "ecs": "https://ecs.eu-central-1.aliyuncs.com", + "slb": "https://slb.eu-central-1.aliyuncs.com", + "rds": "https://rds.eu-central-1.aliyuncs.com", + "vpc": "https://vpc.eu-central-1.aliyuncs.com", + }, + Zhangjiakou: { + "ecs": "https://ecs.cn-zhangjiakou.aliyuncs.com", + "slb": "https://slb.cn-zhangjiakou.aliyuncs.com", + "rds": "https://rds.cn-zhangjiakou.aliyuncs.com", + "vpc": "https://vpc.cn-zhangjiakou.aliyuncs.com", + }, + Huhehaote: { + "ecs": "https://ecs.cn-huhehaote.aliyuncs.com", + "slb": "https://slb.cn-huhehaote.aliyuncs.com", + "rds": "https://rds.cn-huhehaote.aliyuncs.com", + "vpc": "https://vpc.cn-huhehaote.aliyuncs.com", + }, + } ) //init endpoints from file @@ -25,18 +70,39 @@ func init() { } -func NewLocationClient(accessKeyId, accessKeySecret string) *Client { +type LocationClient struct { + Client +} + +func NewLocationClient(accessKeyId, accessKeySecret, securityToken string) *LocationClient { endpoint := os.Getenv("LOCATION_ENDPOINT") if endpoint == "" { endpoint = locationDefaultEndpoint } - client := &Client{} + client := &LocationClient{} client.Init(endpoint, locationAPIVersion, accessKeyId, accessKeySecret) + client.securityToken = securityToken return client } -func (client *Client) DescribeEndpoint(args *DescribeEndpointArgs) (*DescribeEndpointResponse, error) { +func NewLocationClientWithSecurityToken(accessKeyId, accessKeySecret, securityToken string) *LocationClient { + endpoint := os.Getenv("LOCATION_ENDPOINT") + if endpoint == "" { + endpoint = 
locationDefaultEndpoint + } + + client := &LocationClient{} + client.WithEndpoint(endpoint). + WithVersion(locationAPIVersion). + WithAccessKeyId(accessKeyId). + WithAccessKeySecret(accessKeySecret). + WithSecurityToken(securityToken). + InitClient() + return client +} + +func (client *LocationClient) DescribeEndpoint(args *DescribeEndpointArgs) (*DescribeEndpointResponse, error) { response := &DescribeEndpointResponse{} err := client.Invoke("DescribeEndpoint", args, response) if err != nil { @@ -45,6 +111,15 @@ func (client *Client) DescribeEndpoint(args *DescribeEndpointArgs) (*DescribeEnd return response, err } +func (client *LocationClient) DescribeEndpoints(args *DescribeEndpointsArgs) (*DescribeEndpointsResponse, error) { + response := &DescribeEndpointsResponse{} + err := client.Invoke("DescribeEndpoints", args, response) + if err != nil { + return nil, err + } + return response, err +} + func getProductRegionEndpoint(region Region, serviceCode string) string { if sp, ok := endpoints[region]; ok { if endpoint, ok := sp[serviceCode]; ok { @@ -61,34 +136,34 @@ func setProductRegionEndpoint(region Region, serviceCode string, endpoint string } } -func (client *Client) DescribeOpenAPIEndpoint(region Region, serviceCode string) string { +func (client *LocationClient) DescribeOpenAPIEndpoint(region Region, serviceCode string) string { if endpoint := getProductRegionEndpoint(region, serviceCode); endpoint != "" { return endpoint } defaultProtocols := HTTP_PROTOCOL - args := &DescribeEndpointArgs{ + args := &DescribeEndpointsArgs{ Id: region, ServiceCode: serviceCode, Type: "openAPI", } - endpoint, err := client.DescribeEndpoint(args) - if err != nil || endpoint.Endpoint == "" { + endpoint, err := client.DescribeEndpoints(args) + if err != nil || len(endpoint.Endpoints.Endpoint) <= 0 { return "" } - for _, protocol := range endpoint.Protocols.Protocols { + for _, protocol := range endpoint.Endpoints.Endpoint[0].Protocols.Protocols { if strings.ToLower(protocol) == HTTPS_PROTOCOL { defaultProtocols = HTTPS_PROTOCOL break } } - ep := fmt.Sprintf("%s://%s", defaultProtocols, endpoint.Endpoint) + ep := fmt.Sprintf("%s://%s", defaultProtocols, endpoint.Endpoints.Endpoint[0].Endpoint) - setProductRegionEndpoint(region, serviceCode, ep) + //setProductRegionEndpoint(region, serviceCode, ep) return ep } @@ -97,13 +172,11 @@ func loadEndpointFromFile(region Region, serviceCode string) string { if err != nil { return "" } - var endpoints Endpoints err = xml.Unmarshal(data, &endpoints) if err != nil { return "" } - for _, endpoint := range endpoints.Endpoint { if endpoint.RegionIds.RegionId == string(region) { for _, product := range endpoint.Products.Product { diff --git a/vendor/github.com/denverdino/aliyungo/common/endpoints.xml b/vendor/github.com/denverdino/aliyungo/common/endpoints.xml index 4079bcd2b..21f3a0b2e 100644 --- a/vendor/github.com/denverdino/aliyungo/common/endpoints.xml +++ b/vendor/github.com/denverdino/aliyungo/common/endpoints.xml @@ -1346,4 +1346,14 @@ Slbslb.cn-zhangjiakou.aliyuncs.com + + cn-huhehaote + + Rdsrds.cn-huhehaote.aliyuncs.com + Ecsecs.cn-huhehaote.aliyuncs.com + Vpcvpc.cn-huhehaote.aliyuncs.com + Cmsmetrics.cn-hangzhou.aliyuncs.com + Slbslb.cn-huhehaote.aliyuncs.com + + diff --git a/vendor/github.com/denverdino/aliyungo/common/regions.go b/vendor/github.com/denverdino/aliyungo/common/regions.go index 62e6e9d81..38b14dd86 100644 --- a/vendor/github.com/denverdino/aliyungo/common/regions.go +++ b/vendor/github.com/denverdino/aliyungo/common/regions.go @@ -12,10 +12,15 
@@ const ( Shenzhen = Region("cn-shenzhen") Shanghai = Region("cn-shanghai") Zhangjiakou = Region("cn-zhangjiakou") + Huhehaote = Region("cn-huhehaote") APSouthEast1 = Region("ap-southeast-1") APNorthEast1 = Region("ap-northeast-1") APSouthEast2 = Region("ap-southeast-2") + APSouthEast3 = Region("ap-southeast-3") + APSouthEast5 = Region("ap-southeast-5") + + APSouth1 = Region("ap-south-1") USWest1 = Region("us-west-1") USEast1 = Region("us-east-1") @@ -23,12 +28,17 @@ const ( MEEast1 = Region("me-east-1") EUCentral1 = Region("eu-central-1") + + ShenZhenFinance = Region("cn-shenzhen-finance-1") + ShanghaiFinance = Region("cn-shanghai-finance-1") ) var ValidRegions = []Region{ - Hangzhou, Qingdao, Beijing, Shenzhen, Hongkong, Shanghai, Zhangjiakou, + Hangzhou, Qingdao, Beijing, Shenzhen, Hongkong, Shanghai, Zhangjiakou, Huhehaote, USWest1, USEast1, - APNorthEast1, APSouthEast1, APSouthEast2, + APNorthEast1, APSouthEast1, APSouthEast2, APSouthEast3, APSouthEast5, + APSouth1, MEEast1, EUCentral1, + ShenZhenFinance, ShanghaiFinance, } diff --git a/vendor/github.com/denverdino/aliyungo/common/request.go b/vendor/github.com/denverdino/aliyungo/common/request.go index 2a883f19b..f35c2990d 100644 --- a/vendor/github.com/denverdino/aliyungo/common/request.go +++ b/vendor/github.com/denverdino/aliyungo/common/request.go @@ -20,7 +20,9 @@ const ( type Request struct { Format string Version string + RegionId Region AccessKeyId string + SecurityToken string Signature string SignatureMethod string Timestamp util.ISO6801Time @@ -30,7 +32,7 @@ type Request struct { Action string } -func (request *Request) init(version string, action string, AccessKeyId string) { +func (request *Request) init(version string, action string, AccessKeyId string, securityToken string, regionId Region) { request.Format = JSONResponseFormat request.Timestamp = util.NewISO6801Time(time.Now().UTC()) request.Version = version @@ -39,6 +41,8 @@ func (request *Request) init(version string, action string, AccessKeyId string) request.SignatureNonce = util.CreateRandomString() request.Action = action request.AccessKeyId = AccessKeyId + request.SecurityToken = securityToken + request.RegionId = regionId } type Response struct { diff --git a/vendor/github.com/denverdino/aliyungo/common/types.go b/vendor/github.com/denverdino/aliyungo/common/types.go index a74e150e9..cf161f11b 100644 --- a/vendor/github.com/denverdino/aliyungo/common/types.go +++ b/vendor/github.com/denverdino/aliyungo/common/types.go @@ -36,6 +36,23 @@ type DescribeEndpointResponse struct { EndpointItem } +type DescribeEndpointsArgs struct { + Id Region + ServiceCode string + Type string +} + +type DescribeEndpointsResponse struct { + Response + Endpoints APIEndpoints + RequestId string + Success bool +} + +type APIEndpoints struct { + Endpoint []EndpointItem +} + type NetType string const ( @@ -48,6 +65,7 @@ type TimeType string const ( Hour = TimeType("Hour") Day = TimeType("Day") + Week = TimeType("Week") Month = TimeType("Month") Year = TimeType("Year") ) diff --git a/vendor/github.com/denverdino/aliyungo/ecs/client.go b/vendor/github.com/denverdino/aliyungo/ecs/client.go index d70a1554e..117483c35 100644 --- a/vendor/github.com/denverdino/aliyungo/ecs/client.go +++ b/vendor/github.com/denverdino/aliyungo/ecs/client.go @@ -20,8 +20,7 @@ const ( // ECSDefaultEndpoint is the default API endpoint of ECS services ECSDefaultEndpoint = "https://ecs-cn-hangzhou.aliyuncs.com" ECSAPIVersion = "2014-05-26" - - ECSServiceCode = "ecs" + ECSServiceCode = "ecs" VPCDefaultEndpoint = 
"https://vpc.aliyuncs.com" VPCAPIVersion = "2016-04-28" @@ -37,38 +36,80 @@ func NewClient(accessKeyId, accessKeySecret string) *Client { return NewClientWithEndpoint(endpoint, accessKeyId, accessKeySecret) } -func NewECSClient(accessKeyId, accessKeySecret string, regionID common.Region) *Client { - endpoint := os.Getenv("ECS_ENDPOINT") - if endpoint == "" { - endpoint = ECSDefaultEndpoint - } - - return NewClientWithRegion(endpoint, accessKeyId, accessKeySecret, regionID) -} - -func NewClientWithRegion(endpoint string, accessKeyId, accessKeySecret string, regionID common.Region) *Client { +func NewClientWithRegion(endpoint string, accessKeyId string, accessKeySecret string, regionID common.Region) *Client { client := &Client{} client.NewInit(endpoint, ECSAPIVersion, accessKeyId, accessKeySecret, ECSServiceCode, regionID) return client } -func NewClientWithEndpoint(endpoint string, accessKeyId, accessKeySecret string) *Client { +func NewClientWithEndpoint(endpoint string, accessKeyId string, accessKeySecret string) *Client { client := &Client{} client.Init(endpoint, ECSAPIVersion, accessKeyId, accessKeySecret) return client } -func NewVPCClient(accessKeyId, accessKeySecret string, regionID common.Region) *Client { +// --------------------------------------- +// NewECSClient creates a new instance of ECS client +// --------------------------------------- +func NewECSClient(accessKeyId, accessKeySecret string, regionID common.Region) *Client { + return NewECSClientWithSecurityToken(accessKeyId, accessKeySecret, "", regionID) +} + +func NewECSClientWithSecurityToken(accessKeyId string, accessKeySecret string, securityToken string, regionID common.Region) *Client { + endpoint := os.Getenv("ECS_ENDPOINT") + if endpoint == "" { + endpoint = ECSDefaultEndpoint + } + + return NewECSClientWithEndpointAndSecurityToken(endpoint, accessKeyId, accessKeySecret, securityToken, regionID) +} + +func NewECSClientWithEndpoint(endpoint string, accessKeyId string, accessKeySecret string, regionID common.Region) *Client { + return NewECSClientWithEndpointAndSecurityToken(endpoint, accessKeyId, accessKeySecret, "", regionID) +} + +func NewECSClientWithEndpointAndSecurityToken(endpoint string, accessKeyId string, accessKeySecret string, securityToken string, regionID common.Region) *Client { + client := &Client{} + client.WithEndpoint(endpoint). + WithVersion(ECSAPIVersion). + WithAccessKeyId(accessKeyId). + WithAccessKeySecret(accessKeySecret). + WithSecurityToken(securityToken). + WithServiceCode(ECSServiceCode). + WithRegionID(regionID). 
+ InitClient() + return client +} + +// --------------------------------------- +// NewVPCClient creates a new instance of VPC client +// --------------------------------------- +func NewVPCClient(accessKeyId string, accessKeySecret string, regionID common.Region) *Client { + return NewVPCClientWithSecurityToken(accessKeyId, accessKeySecret, "", regionID) +} + +func NewVPCClientWithSecurityToken(accessKeyId string, accessKeySecret string, securityToken string, regionID common.Region) *Client { endpoint := os.Getenv("VPC_ENDPOINT") if endpoint == "" { endpoint = VPCDefaultEndpoint } - return NewVPCClientWithRegion(endpoint, accessKeyId, accessKeySecret, regionID) + return NewVPCClientWithEndpointAndSecurityToken(endpoint, accessKeyId, accessKeySecret, securityToken, regionID) } -func NewVPCClientWithRegion(endpoint string, accessKeyId, accessKeySecret string, regionID common.Region) *Client { +func NewVPCClientWithEndpoint(endpoint string, accessKeyId string, accessKeySecret string, regionID common.Region) *Client { + return NewVPCClientWithEndpointAndSecurityToken(endpoint, accessKeyId, accessKeySecret, "", regionID) +} + +func NewVPCClientWithEndpointAndSecurityToken(endpoint string, accessKeyId string, accessKeySecret string, securityToken string, regionID common.Region) *Client { client := &Client{} - client.NewInit(endpoint, VPCAPIVersion, accessKeyId, accessKeySecret, VPCServiceCode, regionID) + client.WithEndpoint(endpoint). + WithVersion(VPCAPIVersion). + WithAccessKeyId(accessKeyId). + WithAccessKeySecret(accessKeySecret). + WithSecurityToken(securityToken). + WithServiceCode(VPCServiceCode). + WithRegionID(regionID). + InitClient() return client } diff --git a/vendor/github.com/denverdino/aliyungo/ecs/disks.go b/vendor/github.com/denverdino/aliyungo/ecs/disks.go index 7a67c380d..6b898c60d 100644 --- a/vendor/github.com/denverdino/aliyungo/ecs/disks.go +++ b/vendor/github.com/denverdino/aliyungo/ecs/disks.go @@ -135,6 +135,7 @@ type CreateDiskArgs struct { ZoneId string DiskName string Description string + Encrypted bool DiskCategory DiskCategory Size int SnapshotId string @@ -240,6 +241,29 @@ func (client *Client) DetachDisk(instanceId string, diskId string) error { return err } +type ResizeDiskArgs struct { + DiskId string + NewSize int +} + +type ResizeDiskResponse struct { + common.Response +} + +// +// ResizeDisk can only support to enlarge disk size +// You can read doc at https://help.aliyun.com/document_detail/25522.html +func (client *Client) ResizeDisk(diskId string, sizeGB int) error { + args := ResizeDiskArgs{ + DiskId:diskId, + NewSize:sizeGB, + } + response := ResizeDiskResponse{} + err := client.Invoke("ResizeDisk", &args, &response) + return err +} + + type ResetDiskArgs struct { DiskId string SnapshotId string @@ -249,6 +273,7 @@ type ResetDiskResponse struct { common.Response } + // ResetDisk resets disk to original status // // You can read doc at http://docs.aliyun.com/#/pub/ecs/open-api/disk&resetdisk diff --git a/vendor/github.com/denverdino/aliyungo/ecs/instances.go b/vendor/github.com/denverdino/aliyungo/ecs/instances.go index 4eb999537..a08ac1244 100644 --- a/vendor/github.com/denverdino/aliyungo/ecs/instances.go +++ b/vendor/github.com/denverdino/aliyungo/ecs/instances.go @@ -224,6 +224,7 @@ type SpotStrategyType string const ( NoSpot = SpotStrategyType("NoSpot") SpotWithPriceLimit = SpotStrategyType("SpotWithPriceLimit") + SpotAsPriceGo = SpotStrategyType("SpotAsPriceGo") ) // @@ -244,7 +245,7 @@ type InstanceAttributesType struct { SerialNumber string 
Status InstanceStatus OperationLocks OperationLocksType - SecurityGroupIds struct { + SecurityGroupIds struct { SecurityGroupId []string } PublicIpAddress IpAddressSetType @@ -259,11 +260,12 @@ type InstanceAttributesType struct { IoOptimized StringOrBool InstanceChargeType common.InstanceChargeType ExpiredTime util.ISO6801Time - Tags struct { + Tags struct { Tag []TagItemType } - SpotStrategy SpotStrategyType - KeyPairName string + SpotStrategy SpotStrategyType + SpotPriceLimit float64 + KeyPairName string } type DescribeInstanceAttributeResponse struct { @@ -535,7 +537,9 @@ type CreateInstanceArgs struct { AutoRenew bool AutoRenewPeriod int SpotStrategy SpotStrategyType + SpotPriceLimit float64 KeyPairName string + RamRoleName string } type CreateInstanceResponse struct { @@ -624,3 +628,57 @@ func (client *Client) LeaveSecurityGroup(instanceId string, securityGroupId stri err := client.Invoke("LeaveSecurityGroup", &args, &response) return err } + +type AttachInstancesArgs struct { + RegionId common.Region + RamRoleName string + InstanceIds string +} + +// AttachInstanceRamRole attach instances to ram role +// +// You can read doc at https://help.aliyun.com/document_detail/54244.html?spm=5176.doc54245.6.811.zEJcS5 +func (client *Client) AttachInstanceRamRole(args *AttachInstancesArgs) (err error) { + response := common.Response{} + err = client.Invoke("AttachInstanceRamRole", args, &response) + if err != nil { + return err + } + return nil +} + +// DetachInstanceRamRole detach instances from ram role +// +// You can read doc at https://help.aliyun.com/document_detail/54245.html?spm=5176.doc54243.6.813.bt8RB3 +func (client *Client) DetachInstanceRamRole(args *AttachInstancesArgs) (err error) { + response := common.Response{} + err = client.Invoke("DetachInstanceRamRole", args, &response) + if err != nil { + return err + } + return nil +} + +type DescribeInstanceRamRoleResponse struct { + common.Response + InstanceRamRoleSets struct { + InstanceRamRoleSet []InstanceRamRoleSetType + } +} + +type InstanceRamRoleSetType struct { + InstanceId string + RamRoleName string +} + +// DescribeInstanceRamRole +// +// You can read doc at https://help.aliyun.com/document_detail/54243.html?spm=5176.doc54245.6.812.RgNCoi +func (client *Client) DescribeInstanceRamRole(args *AttachInstancesArgs) (resp *DescribeInstanceRamRoleResponse, err error) { + response := &DescribeInstanceRamRoleResponse{} + err = client.Invoke("DescribeInstanceRamRole", args, response) + if err != nil { + return response, err + } + return response, nil +} diff --git a/vendor/github.com/denverdino/aliyungo/ecs/networks.go b/vendor/github.com/denverdino/aliyungo/ecs/networks.go index 100835c37..4c4cd7150 100644 --- a/vendor/github.com/denverdino/aliyungo/ecs/networks.go +++ b/vendor/github.com/denverdino/aliyungo/ecs/networks.go @@ -36,8 +36,8 @@ func (client *Client) AllocatePublicIpAddress(instanceId string) (ipAddress stri type ModifyInstanceNetworkSpec struct { InstanceId string - InternetMaxBandwidthOut *int - InternetMaxBandwidthIn *int + InternetMaxBandwidthOut int + InternetMaxBandwidthIn int } type ModifyInstanceNetworkSpecResponse struct { diff --git a/vendor/github.com/denverdino/aliyungo/ecs/route_tables.go b/vendor/github.com/denverdino/aliyungo/ecs/route_tables.go index cc85cb129..f8531a20b 100644 --- a/vendor/github.com/denverdino/aliyungo/ecs/route_tables.go +++ b/vendor/github.com/denverdino/aliyungo/ecs/route_tables.go @@ -102,6 +102,7 @@ type NextHopType string const ( NextHopIntance = NextHopType("Instance") 
//Default NextHopTunnel = NextHopType("Tunnel") + NextHopTunnelRouterInterface = NextHopType("RouterInterface") ) type CreateRouteEntryArgs struct { diff --git a/vendor/github.com/denverdino/aliyungo/ecs/router_interface.go b/vendor/github.com/denverdino/aliyungo/ecs/router_interface.go new file mode 100644 index 000000000..62af67860 --- /dev/null +++ b/vendor/github.com/denverdino/aliyungo/ecs/router_interface.go @@ -0,0 +1,227 @@ +package ecs + +import ( + "github.com/denverdino/aliyungo/common" +) + +type EcsCommonResponse struct { + common.Response +} +type RouterType string +type InterfaceStatus string +type Role string +type Spec string + +const ( + VRouter = RouterType("VRouter") + VBR = RouterType("VBR") + + Idl = InterfaceStatus("Idl") + Active = InterfaceStatus("Active") + Inactive = InterfaceStatus("Inactive") + + InitiatingSide = Role("InitiatingSide") + AcceptingSide = Role("AcceptingSide") + + Small1 = Spec("Small.1") + Small2 = Spec("Small.2") + Small5 = Spec("Small.5") + Middle1 = Spec("Middle.1") + Middle2 = Spec("Middle.2") + Middle5 = Spec("Middle.5") + Large1 = Spec("Large.1") + Large2 = Spec("Large.2") +) + +type CreateRouterInterfaceArgs struct { + RegionId common.Region + OppositeRegionId common.Region + RouterType RouterType + OppositeRouterType RouterType + RouterId string + OppositeRouterId string + Role Role + Spec Spec + AccessPointId string + OppositeAccessPointId string + OppositeInterfaceId string + OppositeInterfaceOwnerId string + Name string + Description string + HealthCheckSourceIp string + HealthCheckTargetIp string +} + +type CreateRouterInterfaceResponse struct { + common.Response + RouterInterfaceId string +} + +// CreateRouterInterface create Router interface +// +// You can read doc at https://help.aliyun.com/document_detail/36032.html?spm=5176.product27706.6.664.EbBsxC +func (client *Client) CreateRouterInterface(args *CreateRouterInterfaceArgs) (response *CreateRouterInterfaceResponse, err error) { + response = &CreateRouterInterfaceResponse{} + err = client.Invoke("CreateRouterInterface", args, &response) + if err != nil { + return response, err + } + return response, nil +} + +type Filter struct { + Key string + Value []string +} + +type DescribeRouterInterfacesArgs struct { + RegionId common.Region + common.Pagination + Filter []Filter +} + +type RouterInterfaceItemType struct { + ChargeType string + RouterInterfaceId string + AccessPointId string + OppositeRegionId string + OppositeAccessPointId string + Role Role + Spec Spec + Name string + Description string + RouterId string + RouterType RouterType + CreationTime string + Status string + BusinessStatus string + ConnectedTime string + OppositeInterfaceId string + OppositeInterfaceSpec string + OppositeInterfaceStatus string + OppositeInterfaceBusinessStatus string + OppositeRouterId string + OppositeRouterType RouterType + OppositeInterfaceOwnerId string + HealthCheckSourceIp string + HealthCheckTargetIp string +} + +type DescribeRouterInterfacesResponse struct { + RouterInterfaceSet struct { + RouterInterfaceType []RouterInterfaceItemType + } + common.PaginationResult +} + +// DescribeRouterInterfaces describe Router interfaces +// +// You can read doc at https://help.aliyun.com/document_detail/36032.html?spm=5176.product27706.6.664.EbBsxC +func (client *Client) DescribeRouterInterfaces(args *DescribeRouterInterfacesArgs) (response *DescribeRouterInterfacesResponse, err error) { + response = &DescribeRouterInterfacesResponse{} + err = client.Invoke("DescribeRouterInterfaces", args, 
&response) + if err != nil { + return response, err + } + return response, nil +} + +type OperateRouterInterfaceArgs struct { + RegionId common.Region + RouterInterfaceId string +} + +// ConnectRouterInterface +// +// You can read doc at https://help.aliyun.com/document_detail/36031.html?spm=5176.doc36035.6.666.wkyljN +func (client *Client) ConnectRouterInterface(args *OperateRouterInterfaceArgs) (response *EcsCommonResponse, err error) { + response = &EcsCommonResponse{} + err = client.Invoke("ConnectRouterInterface", args, &response) + if err != nil { + return response, err + } + return response, nil +} + +// ActivateRouterInterface active Router Interface +// +// You can read doc at https://help.aliyun.com/document_detail/36030.html?spm=5176.doc36031.6.667.DAuZLD +func (client *Client) ActivateRouterInterface(args *OperateRouterInterfaceArgs) (response *EcsCommonResponse, err error) { + response = &EcsCommonResponse{} + err = client.Invoke("ActivateRouterInterface", args, &response) + if err != nil { + return response, err + } + return response, nil +} + +// DeactivateRouterInterface deactivate Router Interface +// +// You can read doc at https://help.aliyun.com/document_detail/36033.html?spm=5176.doc36030.6.668.JqCWUz +func (client *Client) DeactivateRouterInterface(args *OperateRouterInterfaceArgs) (response *EcsCommonResponse, err error) { + response = &EcsCommonResponse{} + err = client.Invoke("DeactivateRouterInterface", args, &response) + if err != nil { + return response, err + } + return response, nil +} + +type ModifyRouterInterfaceSpecArgs struct { + RegionId common.Region + RouterInterfaceId string + Spec Spec +} + +type ModifyRouterInterfaceSpecResponse struct { + common.Response + Spec Spec +} + +// ModifyRouterInterfaceSpec +// +// You can read doc at https://help.aliyun.com/document_detail/36037.html?spm=5176.doc36036.6.669.McKiye +func (client *Client) ModifyRouterInterfaceSpec(args *ModifyRouterInterfaceSpecArgs) (response *ModifyRouterInterfaceSpecResponse, err error) { + response = &ModifyRouterInterfaceSpecResponse{} + err = client.Invoke("ModifyRouterInterfaceSpec", args, &response) + if err != nil { + return response, err + } + return response, nil +} + +type ModifyRouterInterfaceAttributeArgs struct { + RegionId common.Region + RouterInterfaceId string + Name string + Description string + OppositeInterfaceId string + OppositeRouterId string + OppositeInterfaceOwnerId string + HealthCheckSourceIp string + HealthCheckTargetIp string +} + +// ModifyRouterInterfaceAttribute +// +// You can read doc at https://help.aliyun.com/document_detail/36036.html?spm=5176.doc36037.6.670.Dcz3xS +func (client *Client) ModifyRouterInterfaceAttribute(args *ModifyRouterInterfaceAttributeArgs) (response *EcsCommonResponse, err error) { + response = &EcsCommonResponse{} + err = client.Invoke("ModifyRouterInterfaceAttribute", args, &response) + if err != nil { + return response, err + } + return response, nil +} + +// DeleteRouterInterface delete Router Interface +// +// You can read doc at https://help.aliyun.com/document_detail/36034.html?spm=5176.doc36036.6.671.y2xpNt +func (client *Client) DeleteRouterInterface(args *OperateRouterInterfaceArgs) (response *EcsCommonResponse, err error) { + response = &EcsCommonResponse{} + err = client.Invoke("DeleteRouterInterface", args, &response) + if err != nil { + return response, err + } + return response, nil +} diff --git a/vendor/github.com/denverdino/aliyungo/ram/client.go b/vendor/github.com/denverdino/aliyungo/ram/client.go index 
974bc023f..2a29bd804 100644 --- a/vendor/github.com/denverdino/aliyungo/ram/client.go +++ b/vendor/github.com/denverdino/aliyungo/ram/client.go @@ -1,8 +1,9 @@ package ram import ( - "github.com/denverdino/aliyungo/common" "os" + + "github.com/denverdino/aliyungo/common" ) const ( @@ -16,15 +17,29 @@ type RamClient struct { } func NewClient(accessKeyId string, accessKeySecret string) RamClientInterface { + return NewClientWithSecurityToken(accessKeyId, accessKeySecret, "") +} + +func NewClientWithSecurityToken(accessKeyId string, accessKeySecret string, securityToken string) RamClientInterface { endpoint := os.Getenv("RAM_ENDPOINT") if endpoint == "" { endpoint = RAMDefaultEndpoint } - return NewClientWithEndpoint(endpoint, accessKeyId, accessKeySecret) + + return NewClientWithEndpointAndSecurityToken(endpoint, accessKeyId, accessKeySecret, securityToken) } func NewClientWithEndpoint(endpoint string, accessKeyId string, accessKeySecret string) RamClientInterface { + return NewClientWithEndpointAndSecurityToken(endpoint, accessKeyId, accessKeySecret, "") +} + +func NewClientWithEndpointAndSecurityToken(endpoint string, accessKeyId string, accessKeySecret string, securityToken string) RamClientInterface { client := &RamClient{} - client.Init(endpoint, RAMAPIVersion, accessKeyId, accessKeySecret) + client.WithEndpoint(endpoint). + WithVersion(RAMAPIVersion). + WithAccessKeyId(accessKeyId). + WithAccessKeySecret(accessKeySecret). + WithSecurityToken(securityToken). + InitClient() return client } diff --git a/vendor/github.com/denverdino/aliyungo/ram/group.go b/vendor/github.com/denverdino/aliyungo/ram/group.go index 6f1224f5f..491e6931d 100644 --- a/vendor/github.com/denverdino/aliyungo/ram/group.go +++ b/vendor/github.com/denverdino/aliyungo/ram/group.go @@ -31,6 +31,8 @@ type GroupResponse struct { type GroupListResponse struct { RamCommonResponse + IsTruncated bool + Marker string Groups struct { Group []Group } diff --git a/vendor/github.com/denverdino/aliyungo/ram/policy.go b/vendor/github.com/denverdino/aliyungo/ram/policy.go index b0a84b86d..0acfd71a7 100644 --- a/vendor/github.com/denverdino/aliyungo/ram/policy.go +++ b/vendor/github.com/denverdino/aliyungo/ram/policy.go @@ -1,8 +1,15 @@ package ram +type Type string + +const ( + Custom Type = "Custom" + System Type = "System" +) + type PolicyRequest struct { PolicyName string - PolicyType string + PolicyType Type Description string PolicyDocument string SetAsDefault string @@ -21,15 +28,16 @@ type PolicyResponse struct { } type PolicyQueryRequest struct { - PolicyType string + PolicyType Type Marker string MaxItems int8 } type PolicyQueryResponse struct { + RamCommonResponse IsTruncated bool Marker string - Policies struct { + Policies struct { Policy []Policy } } diff --git a/vendor/github.com/denverdino/aliyungo/util/encoding.go b/vendor/github.com/denverdino/aliyungo/util/encoding.go index 8cb588288..99a508f5b 100644 --- a/vendor/github.com/denverdino/aliyungo/util/encoding.go +++ b/vendor/github.com/denverdino/aliyungo/util/encoding.go @@ -66,24 +66,26 @@ func setQueryValues(i interface{}, values *url.Values, prefix string) { // TODO Use Tag for validation // tag := typ.Field(i).Tag.Get("tagname") kind := field.Kind() + isPtr := false if (kind == reflect.Ptr || kind == reflect.Array || kind == reflect.Slice || kind == reflect.Map || kind == reflect.Chan) && field.IsNil() { continue } if kind == reflect.Ptr { field = field.Elem() kind = field.Kind() + isPtr = true } var value string //switch field.Interface().(type) { switch kind { 
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: i := field.Int() - if i != 0 { + if i != 0 || isPtr { value = strconv.FormatInt(i, 10) } case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: i := field.Uint() - if i != 0 { + if i != 0 || isPtr { value = strconv.FormatUint(i, 10) } case reflect.Float32: @@ -197,12 +199,14 @@ func setQueryValuesByFlattenMethod(i interface{}, values *url.Values, prefix str // tag := typ.Field(i).Tag.Get("tagname") kind := field.Kind() + isPtr := false if (kind == reflect.Ptr || kind == reflect.Array || kind == reflect.Slice || kind == reflect.Map || kind == reflect.Chan) && field.IsNil() { continue } if kind == reflect.Ptr { field = field.Elem() kind = field.Kind() + isPtr = true } var value string @@ -210,12 +214,12 @@ func setQueryValuesByFlattenMethod(i interface{}, values *url.Values, prefix str switch kind { case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: i := field.Int() - if i != 0 { + if i != 0 || isPtr { value = strconv.FormatInt(i, 10) } case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: i := field.Uint() - if i != 0 { + if i != 0 || isPtr { value = strconv.FormatUint(i, 10) } case reflect.Float32: diff --git a/vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md b/vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md index fd62e9490..7fc1f793c 100644 --- a/vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md +++ b/vendor/github.com/dgrijalva/jwt-go/MIGRATION_GUIDE.md @@ -56,8 +56,9 @@ This simple parsing example: is directly mapped to: ```go - if token, err := request.ParseFromRequest(tokenString, request.OAuth2Extractor, req, keyLookupFunc); err == nil { - fmt.Printf("Token for user %v expires %v", token.Claims["user"], token.Claims["exp"]) + if token, err := request.ParseFromRequest(req, request.OAuth2Extractor, keyLookupFunc); err == nil { + claims := token.Claims.(jwt.MapClaims) + fmt.Printf("Token for user %v expires %v", claims["user"], claims["exp"]) } ``` diff --git a/vendor/github.com/dgrijalva/jwt-go/README.md b/vendor/github.com/dgrijalva/jwt-go/README.md index 00f613672..d358d881b 100644 --- a/vendor/github.com/dgrijalva/jwt-go/README.md +++ b/vendor/github.com/dgrijalva/jwt-go/README.md @@ -1,11 +1,15 @@ -A [go](http://www.golang.org) (or 'golang' for search engine friendliness) implementation of [JSON Web Tokens](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) +# jwt-go [![Build Status](https://travis-ci.org/dgrijalva/jwt-go.svg?branch=master)](https://travis-ci.org/dgrijalva/jwt-go) +[![GoDoc](https://godoc.org/github.com/dgrijalva/jwt-go?status.svg)](https://godoc.org/github.com/dgrijalva/jwt-go) -**BREAKING CHANGES:*** Version 3.0.0 is here. It includes _a lot_ of changes including a few that break the API. We've tried to break as few things as possible, so there should just be a few type signature changes. A full list of breaking changes is available in `VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating your code. +A [go](http://www.golang.org) (or 'golang' for search engine friendliness) implementation of [JSON Web Tokens](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html) -**NOTICE:** A vulnerability in JWT was [recently published](https://auth0.com/blog/2015/03/31/critical-vulnerabilities-in-json-web-token-libraries/). As this library doesn't force users to validate the `alg` is what they expected, it's possible your usage is effected. 
There will be an update soon to remedy this, and it will likey require backwards-incompatible changes to the API. In the short term, please make sure your implementation verifies the `alg` is what you expect. +**NEW VERSION COMING:** There have been a lot of improvements suggested since the version 3.0.0 released in 2016. I'm working now on cutting two different releases: 3.2.0 will contain any non-breaking changes or enhancements. 4.0.0 will follow shortly which will include breaking changes. See the 4.0.0 milestone to get an idea of what's coming. If you have other ideas, or would like to participate in 4.0.0, now's the time. If you depend on this library and don't want to be interrupted, I recommend you use your dependency mangement tool to pin to version 3. +**SECURITY NOTICE:** Some older versions of Go have a security issue in the cryotp/elliptic. Recommendation is to upgrade to at least 1.8.3. See issue #216 for more detail. + +**SECURITY NOTICE:** It's important that you [validate the `alg` presented is what you expect](https://auth0.com/blog/2015/03/31/critical-vulnerabilities-in-json-web-token-libraries/). This library attempts to make it easy to do the right thing by requiring key types match the expected alg, but you should take the extra step to verify it in your usage. See the examples provided. ## What the heck is a JWT? @@ -25,8 +29,8 @@ This library supports the parsing and verification as well as the generation and See [the project documentation](https://godoc.org/github.com/dgrijalva/jwt-go) for examples of usage: -* [Simple example of parsing and validating a token](https://godoc.org/github.com/dgrijalva/jwt-go#example_Parse_hmac) -* [Simple example of building and signing a token](https://godoc.org/github.com/dgrijalva/jwt-go#example_New_hmac) +* [Simple example of parsing and validating a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-Parse--Hmac) +* [Simple example of building and signing a token](https://godoc.org/github.com/dgrijalva/jwt-go#example-New--Hmac) * [Directory of Examples](https://godoc.org/github.com/dgrijalva/jwt-go#pkg-examples) ## Extensions @@ -37,7 +41,7 @@ Here's an example of an extension that integrates with the Google App Engine sig ## Compliance -This library was last reviewed to comply with [RTF 7519](http://www.rfc-editor.org/info/rfc7519) dated May 2015 with a few notable differences: +This library was last reviewed to comply with [RTF 7519](http://www.rfc-editor.org/info/rfc7519) dated May 2015 with a few notable differences: * In order to protect against accidental use of [Unsecured JWTs](http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html#UnsecuredJWT), tokens using `alg=none` will only be accepted if the constant `jwt.UnsafeAllowNoneSignatureType` is provided as the key. @@ -47,7 +51,10 @@ This library is considered production ready. Feedback and feature requests are This project uses [Semantic Versioning 2.0.0](http://semver.org). Accepted pull requests will land on `master`. Periodically, versions will be tagged from `master`. You can find all the releases on [the project releases page](https://github.com/dgrijalva/jwt-go/releases). -While we try to make it obvious when we make breaking changes, there isn't a great mechanism for pushing announcements out to users. You may want to use this alternative package include: `gopkg.in/dgrijalva/jwt-go.v2`. It will do the right thing WRT semantic versioning. 
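Since the security notice above leaves `alg` verification to the caller, a minimal sketch of that check may be useful here. It is not taken from this diff: the secret and the token string are placeholders, and the only point illustrated is rejecting any token whose method is not in the expected (here, HMAC) family before returning a key.

```go
package main

import (
	"fmt"

	jwt "github.com/dgrijalva/jwt-go"
)

// hmacSecret is a placeholder; a real application would load it from configuration.
var hmacSecret = []byte("placeholder-secret")

// parseToken verifies the signing method inside the Keyfunc, as the security
// notice above recommends, before handing the verification key back.
func parseToken(tokenString string) (*jwt.Token, error) {
	return jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
		if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
		}
		return hmacSecret, nil
	})
}

func main() {
	// A deliberately malformed token, just to exercise the error path.
	if _, err := parseToken("not-a-real-token"); err != nil {
		fmt.Println("parse failed as expected:", err)
	}
}
```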
+While we try to make it obvious when we make breaking changes, there isn't a great mechanism for pushing announcements out to users. You may want to use this alternative package include: `gopkg.in/dgrijalva/jwt-go.v3`. It will do the right thing WRT semantic versioning. + +**BREAKING CHANGES:*** +* Version 3.0.0 includes _a lot_ of changes from the 2.x line, including a few that break the API. We've tried to break as few things as possible, so there should just be a few type signature changes. A full list of breaking changes is available in `VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating your code. ## Usage Tips @@ -68,18 +75,26 @@ Symmetric signing methods, such as HSA, use only a single secret. This is probab Asymmetric signing methods, such as RSA, use different keys for signing and verifying tokens. This makes it possible to produce tokens with a private key, and allow any consumer to access the public key for verification. +### Signing Methods and Key Types + +Each signing method expects a different object type for its signing keys. See the package documentation for details. Here are the most common ones: + +* The [HMAC signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodHMAC) (`HS256`,`HS384`,`HS512`) expect `[]byte` values for signing and validation +* The [RSA signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodRSA) (`RS256`,`RS384`,`RS512`) expect `*rsa.PrivateKey` for signing and `*rsa.PublicKey` for validation +* The [ECDSA signing method](https://godoc.org/github.com/dgrijalva/jwt-go#SigningMethodECDSA) (`ES256`,`ES384`,`ES512`) expect `*ecdsa.PrivateKey` for signing and `*ecdsa.PublicKey` for validation + ### JWT and OAuth It's worth mentioning that OAuth and JWT are not the same thing. A JWT token is simply a signed JSON object. It can be used anywhere such a thing is useful. There is some confusion, though, as JWT is the most common type of bearer token used in OAuth2 authentication. Without going too far down the rabbit hole, here's a description of the interaction of these technologies: -* OAuth is a protocol for allowing an identity provider to be separate from the service a user is logging in to. For example, whenever you use Facebook to log into a different service (Yelp, Spotify, etc), you are using OAuth. +* OAuth is a protocol for allowing an identity provider to be separate from the service a user is logging in to. For example, whenever you use Facebook to log into a different service (Yelp, Spotify, etc), you are using OAuth. * OAuth defines several options for passing around authentication data. One popular method is called a "bearer token". A bearer token is simply a string that _should_ only be held by an authenticated user. Thus, simply presenting this token proves your identity. You can probably derive from here why a JWT might make a good bearer token. * Because bearer tokens are used for authentication, it's important they're kept secret. This is why transactions that use bearer tokens typically happen over SSL. - + ## More Documentation can be found [on godoc.org](http://godoc.org/github.com/dgrijalva/jwt-go). -The command line utility included in this project (cmd/jwt) provides a straightforward example of token creation and parsing as well as a useful tool for debugging your own integration. You'll also find several implementation examples in to documentation. 
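To make the key-type table above concrete, here is a minimal sketch of building and signing a token with `HS256`. The secret and claim values are placeholders; the point is only that HMAC methods take a `[]byte` key, matching the new "Signing Methods and Key Types" section.

```go
package main

import (
	"fmt"
	"time"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	// HS256 is in the HMAC family, so the signing key must be a []byte.
	key := []byte("placeholder-secret")

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub": "example-user",
		"exp": time.Now().Add(time.Hour).Unix(),
	})

	signed, err := token.SignedString(key)
	if err != nil {
		fmt.Println("signing failed:", err)
		return
	}
	fmt.Println("signed token:", signed)
}
```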
+The command line utility included in this project (cmd/jwt) provides a straightforward example of token creation and parsing as well as a useful tool for debugging your own integration. You'll also find several implementation examples in the documentation. diff --git a/vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md b/vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md index b605b4509..637029831 100644 --- a/vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md +++ b/vendor/github.com/dgrijalva/jwt-go/VERSION_HISTORY.md @@ -1,5 +1,18 @@ ## `jwt-go` Version History +#### 3.2.0 + +* Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation +* HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate +* Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. Initial set include `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before. +* Deprecated `ParseFromRequestWithClaims` to simplify API in the future. + +#### 3.1.0 + +* Improvements to `jwt` command line tool +* Added `SkipClaimsValidation` option to `Parser` +* Documentation updates + #### 3.0.0 * **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code diff --git a/vendor/github.com/dgrijalva/jwt-go/ecdsa.go b/vendor/github.com/dgrijalva/jwt-go/ecdsa.go index 2f59a2223..f97738124 100644 --- a/vendor/github.com/dgrijalva/jwt-go/ecdsa.go +++ b/vendor/github.com/dgrijalva/jwt-go/ecdsa.go @@ -14,6 +14,7 @@ var ( ) // Implements the ECDSA family of signing methods signing methods +// Expects *ecdsa.PrivateKey for signing and *ecdsa.PublicKey for verification type SigningMethodECDSA struct { Name string Hash crypto.Hash diff --git a/vendor/github.com/dgrijalva/jwt-go/errors.go b/vendor/github.com/dgrijalva/jwt-go/errors.go index 662df19d4..1c93024aa 100644 --- a/vendor/github.com/dgrijalva/jwt-go/errors.go +++ b/vendor/github.com/dgrijalva/jwt-go/errors.go @@ -51,13 +51,9 @@ func (e ValidationError) Error() string { } else { return "token is invalid" } - return e.Inner.Error() } // No errors func (e *ValidationError) valid() bool { - if e.Errors > 0 { - return false - } - return true + return e.Errors == 0 } diff --git a/vendor/github.com/dgrijalva/jwt-go/hmac.go b/vendor/github.com/dgrijalva/jwt-go/hmac.go index c22991925..addbe5d40 100644 --- a/vendor/github.com/dgrijalva/jwt-go/hmac.go +++ b/vendor/github.com/dgrijalva/jwt-go/hmac.go @@ -7,6 +7,7 @@ import ( ) // Implements the HMAC-SHA family of signing methods signing methods +// Expects key type of []byte for both signing and validation type SigningMethodHMAC struct { Name string Hash crypto.Hash @@ -90,5 +91,5 @@ func (m *SigningMethodHMAC) Sign(signingString string, key interface{}) (string, return EncodeSegment(hasher.Sum(nil)), nil } - return "", ErrInvalidKey + return "", ErrInvalidKeyType } diff --git a/vendor/github.com/dgrijalva/jwt-go/parser.go b/vendor/github.com/dgrijalva/jwt-go/parser.go index 7020c52a1..d6901d9ad 100644 --- a/vendor/github.com/dgrijalva/jwt-go/parser.go +++ b/vendor/github.com/dgrijalva/jwt-go/parser.go @@ -8,8 +8,9 @@ import ( ) type Parser struct { - ValidMethods []string // If populated, only these methods will be considered valid - UseJSONNumber bool // Use JSON Number format in JSON decoder + ValidMethods []string // If populated, only these methods will be considered valid + UseJSONNumber bool // Use JSON Number format in JSON decoder + 
SkipClaimsValidation bool // Skip claims validation during token parsing } // Parse, validate, and return a token. @@ -20,55 +21,9 @@ func (p *Parser) Parse(tokenString string, keyFunc Keyfunc) (*Token, error) { } func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) { - parts := strings.Split(tokenString, ".") - if len(parts) != 3 { - return nil, NewValidationError("token contains an invalid number of segments", ValidationErrorMalformed) - } - - var err error - token := &Token{Raw: tokenString} - - // parse Header - var headerBytes []byte - if headerBytes, err = DecodeSegment(parts[0]); err != nil { - if strings.HasPrefix(strings.ToLower(tokenString), "bearer ") { - return token, NewValidationError("tokenstring should not contain 'bearer '", ValidationErrorMalformed) - } - return token, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} - } - if err = json.Unmarshal(headerBytes, &token.Header); err != nil { - return token, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} - } - - // parse Claims - var claimBytes []byte - token.Claims = claims - - if claimBytes, err = DecodeSegment(parts[1]); err != nil { - return token, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} - } - dec := json.NewDecoder(bytes.NewBuffer(claimBytes)) - if p.UseJSONNumber { - dec.UseNumber() - } - // JSON Decode. Special case for map type to avoid weird pointer behavior - if c, ok := token.Claims.(MapClaims); ok { - err = dec.Decode(&c) - } else { - err = dec.Decode(&claims) - } - // Handle decode error + token, parts, err := p.ParseUnverified(tokenString, claims) if err != nil { - return token, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} - } - - // Lookup signature method - if method, ok := token.Header["alg"].(string); ok { - if token.Method = GetSigningMethod(method); token.Method == nil { - return token, NewValidationError("signing method (alg) is unavailable.", ValidationErrorUnverifiable) - } - } else { - return token, NewValidationError("signing method (alg) is unspecified.", ValidationErrorUnverifiable) + return token, err } // Verify signing method is in the required set @@ -95,20 +50,25 @@ func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyf } if key, err = keyFunc(token); err != nil { // keyFunc returned an error + if ve, ok := err.(*ValidationError); ok { + return token, ve + } return token, &ValidationError{Inner: err, Errors: ValidationErrorUnverifiable} } vErr := &ValidationError{} // Validate Claims - if err := token.Claims.Valid(); err != nil { + if !p.SkipClaimsValidation { + if err := token.Claims.Valid(); err != nil { - // If the Claims Valid returned an error, check if it is a validation error, - // If it was another error type, create a ValidationError with a generic ClaimsInvalid flag set - if e, ok := err.(*ValidationError); !ok { - vErr = &ValidationError{Inner: err, Errors: ValidationErrorClaimsInvalid} - } else { - vErr = e + // If the Claims Valid returned an error, check if it is a validation error, + // If it was another error type, create a ValidationError with a generic ClaimsInvalid flag set + if e, ok := err.(*ValidationError); !ok { + vErr = &ValidationError{Inner: err, Errors: ValidationErrorClaimsInvalid} + } else { + vErr = e + } } } @@ -126,3 +86,63 @@ func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyf return token, vErr } + +// WARNING: Don't use this method unless you know what you're doing +// +// This method 
parses the token but doesn't validate the signature. It's only +// ever useful in cases where you know the signature is valid (because it has +// been checked previously in the stack) and you want to extract values from +// it. +func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) { + parts = strings.Split(tokenString, ".") + if len(parts) != 3 { + return nil, parts, NewValidationError("token contains an invalid number of segments", ValidationErrorMalformed) + } + + token = &Token{Raw: tokenString} + + // parse Header + var headerBytes []byte + if headerBytes, err = DecodeSegment(parts[0]); err != nil { + if strings.HasPrefix(strings.ToLower(tokenString), "bearer ") { + return token, parts, NewValidationError("tokenstring should not contain 'bearer '", ValidationErrorMalformed) + } + return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} + } + if err = json.Unmarshal(headerBytes, &token.Header); err != nil { + return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} + } + + // parse Claims + var claimBytes []byte + token.Claims = claims + + if claimBytes, err = DecodeSegment(parts[1]); err != nil { + return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} + } + dec := json.NewDecoder(bytes.NewBuffer(claimBytes)) + if p.UseJSONNumber { + dec.UseNumber() + } + // JSON Decode. Special case for map type to avoid weird pointer behavior + if c, ok := token.Claims.(MapClaims); ok { + err = dec.Decode(&c) + } else { + err = dec.Decode(&claims) + } + // Handle decode error + if err != nil { + return token, parts, &ValidationError{Inner: err, Errors: ValidationErrorMalformed} + } + + // Lookup signature method + if method, ok := token.Header["alg"].(string); ok { + if token.Method = GetSigningMethod(method); token.Method == nil { + return token, parts, NewValidationError("signing method (alg) is unavailable.", ValidationErrorUnverifiable) + } + } else { + return token, parts, NewValidationError("signing method (alg) is unspecified.", ValidationErrorUnverifiable) + } + + return token, parts, nil +} diff --git a/vendor/github.com/dgrijalva/jwt-go/rsa.go b/vendor/github.com/dgrijalva/jwt-go/rsa.go index 0ae0b1984..e4caf1ca4 100644 --- a/vendor/github.com/dgrijalva/jwt-go/rsa.go +++ b/vendor/github.com/dgrijalva/jwt-go/rsa.go @@ -7,6 +7,7 @@ import ( ) // Implements the RSA family of signing methods signing methods +// Expects *rsa.PrivateKey for signing and *rsa.PublicKey for validation type SigningMethodRSA struct { Name string Hash crypto.Hash @@ -44,7 +45,7 @@ func (m *SigningMethodRSA) Alg() string { } // Implements the Verify method from SigningMethod -// For this signing method, must be an rsa.PublicKey structure. +// For this signing method, must be an *rsa.PublicKey structure. func (m *SigningMethodRSA) Verify(signingString, signature string, key interface{}) error { var err error @@ -73,7 +74,7 @@ func (m *SigningMethodRSA) Verify(signingString, signature string, key interface } // Implements the Sign method from SigningMethod -// For this signing method, must be an rsa.PrivateKey structure. +// For this signing method, must be an *rsa.PrivateKey structure. 
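As a hedged illustration of the corrected comments (RSA methods sign with `*rsa.PrivateKey` and verify with `*rsa.PublicKey`), the sketch below generates a throwaway key, signs with `RS256`, and verifies with the matching public key. The key size, claim values, and error handling are placeholders, not part of this diff.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	// A throwaway key for the example; real code would load an existing key,
	// e.g. via jwt.ParseRSAPrivateKeyFromPEM.
	privateKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Println("key generation failed:", err)
		return
	}

	// Sign with the *rsa.PrivateKey, as the RSA method expects.
	token := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{"sub": "example-user"})
	signed, err := token.SignedString(privateKey)
	if err != nil {
		fmt.Println("signing failed:", err)
		return
	}

	// Verify with the corresponding *rsa.PublicKey, checking the alg family first.
	parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
		if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return &privateKey.PublicKey, nil
	})
	fmt.Println("token valid:", err == nil && parsed.Valid)
}
```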
func (m *SigningMethodRSA) Sign(signingString string, key interface{}) (string, error) { var rsaKey *rsa.PrivateKey var ok bool diff --git a/vendor/github.com/dgrijalva/jwt-go/rsa_utils.go b/vendor/github.com/dgrijalva/jwt-go/rsa_utils.go index 213a90dbb..a5ababf95 100644 --- a/vendor/github.com/dgrijalva/jwt-go/rsa_utils.go +++ b/vendor/github.com/dgrijalva/jwt-go/rsa_utils.go @@ -39,6 +39,38 @@ func ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error) { return pkey, nil } +// Parse PEM encoded PKCS1 or PKCS8 private key protected with password +func ParseRSAPrivateKeyFromPEMWithPassword(key []byte, password string) (*rsa.PrivateKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + var parsedKey interface{} + + var blockDecrypted []byte + if blockDecrypted, err = x509.DecryptPEMBlock(block, []byte(password)); err != nil { + return nil, err + } + + if parsedKey, err = x509.ParsePKCS1PrivateKey(blockDecrypted); err != nil { + if parsedKey, err = x509.ParsePKCS8PrivateKey(blockDecrypted); err != nil { + return nil, err + } + } + + var pkey *rsa.PrivateKey + var ok bool + if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok { + return nil, ErrNotRSAPrivateKey + } + + return pkey, nil +} + // Parse PEM encoded PKCS1 or PKCS8 public key func ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error) { var err error diff --git a/vendor/github.com/docker/docker/LICENSE b/vendor/github.com/docker/docker/LICENSE new file mode 100644 index 000000000..9c8e20ab8 --- /dev/null +++ b/vendor/github.com/docker/docker/LICENSE @@ -0,0 +1,191 @@ + + Apache License + Version 2.0, January 2004 + https://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2013-2017 Docker, Inc. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/docker/docker/NOTICE b/vendor/github.com/docker/docker/NOTICE new file mode 100644 index 000000000..0c74e15b0 --- /dev/null +++ b/vendor/github.com/docker/docker/NOTICE @@ -0,0 +1,19 @@ +Docker +Copyright 2012-2017 Docker, Inc. + +This product includes software developed at Docker, Inc. (https://www.docker.com). + +This product contains software (https://github.com/kr/pty) developed +by Keith Rarick, licensed under the MIT License. + +The following is courtesy of our legal counsel: + + +Use and transfer of Docker may be subject to certain restrictions by the +United States and other governments. +It is your responsibility to ensure that your use and/or transfer does not +violate applicable laws. + +For more information, please see https://www.bis.doc.gov + +See also https://www.apache.org/dev/crypto.html and/or seek legal counsel. 
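The rsa_utils.go hunk above adds `ParseRSAPrivateKeyFromPEMWithPassword`, which decrypts a passphrase-protected PEM block and then attempts PKCS#1 and PKCS#8 parsing. A minimal caller sketch (the key path and passphrase below are placeholders, not anything taken from this change):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	jwt "github.com/dgrijalva/jwt-go"
)

func main() {
	// Placeholder path and passphrase, for illustration only.
	pemBytes, err := ioutil.ReadFile("encrypted_key.pem")
	if err != nil {
		log.Fatal(err)
	}

	// Decrypts the PEM block with the passphrase, then parses PKCS#1 or PKCS#8.
	key, err := jwt.ParseRSAPrivateKeyFromPEMWithPassword(pemBytes, "passphrase")
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("parsed RSA private key, modulus size: %d bits\n", key.N.BitLen())
}
```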
diff --git a/vendor/github.com/docker/docker/pkg/namesgenerator/names-generator.go b/vendor/github.com/docker/docker/pkg/namesgenerator/names-generator.go new file mode 100644 index 000000000..d732a4795 --- /dev/null +++ b/vendor/github.com/docker/docker/pkg/namesgenerator/names-generator.go @@ -0,0 +1,608 @@ +package namesgenerator + +import ( + "fmt" + + "github.com/docker/docker/pkg/random" +) + +var ( + left = [...]string{ + "admiring", + "adoring", + "affectionate", + "agitated", + "amazing", + "angry", + "awesome", + "blissful", + "boring", + "brave", + "clever", + "cocky", + "compassionate", + "competent", + "condescending", + "confident", + "cranky", + "dazzling", + "determined", + "distracted", + "dreamy", + "eager", + "ecstatic", + "elastic", + "elated", + "elegant", + "eloquent", + "epic", + "fervent", + "festive", + "flamboyant", + "focused", + "friendly", + "frosty", + "gallant", + "gifted", + "goofy", + "gracious", + "happy", + "hardcore", + "heuristic", + "hopeful", + "hungry", + "infallible", + "inspiring", + "jolly", + "jovial", + "keen", + "kind", + "laughing", + "loving", + "lucid", + "mystifying", + "modest", + "musing", + "naughty", + "nervous", + "nifty", + "nostalgic", + "objective", + "optimistic", + "peaceful", + "pedantic", + "pensive", + "practical", + "priceless", + "quirky", + "quizzical", + "relaxed", + "reverent", + "romantic", + "sad", + "serene", + "sharp", + "silly", + "sleepy", + "stoic", + "stupefied", + "suspicious", + "tender", + "thirsty", + "trusting", + "unruffled", + "upbeat", + "vibrant", + "vigilant", + "vigorous", + "wizardly", + "wonderful", + "xenodochial", + "youthful", + "zealous", + "zen", + } + + // Docker, starting from 0.7.x, generates names from notable scientists and hackers. + // Please, for any amazing man that you add to the list, consider adding an equally amazing woman to it, and vice versa. + right = [...]string{ + // Muhammad ibn Jābir al-Ḥarrānī al-Battānī was a founding father of astronomy. https://en.wikipedia.org/wiki/Mu%E1%B8%A5ammad_ibn_J%C4%81bir_al-%E1%B8%A4arr%C4%81n%C4%AB_al-Batt%C4%81n%C4%AB + "albattani", + + // Frances E. Allen, became the first female IBM Fellow in 1989. In 2006, she became the first female recipient of the ACM's Turing Award. https://en.wikipedia.org/wiki/Frances_E._Allen + "allen", + + // June Almeida - Scottish virologist who took the first pictures of the rubella virus - https://en.wikipedia.org/wiki/June_Almeida + "almeida", + + // Maria Gaetana Agnesi - Italian mathematician, philosopher, theologian and humanitarian. She was the first woman to write a mathematics handbook and the first woman appointed as a Mathematics Professor at a University. https://en.wikipedia.org/wiki/Maria_Gaetana_Agnesi + "agnesi", + + // Archimedes was a physicist, engineer and mathematician who invented too many things to list them here. https://en.wikipedia.org/wiki/Archimedes + "archimedes", + + // Maria Ardinghelli - Italian translator, mathematician and physicist - https://en.wikipedia.org/wiki/Maria_Ardinghelli + "ardinghelli", + + // Aryabhata - Ancient Indian mathematician-astronomer during 476-550 CE https://en.wikipedia.org/wiki/Aryabhata + "aryabhata", + + // Wanda Austin - Wanda Austin is the President and CEO of The Aerospace Corporation, a leading architect for the US security space programs. https://en.wikipedia.org/wiki/Wanda_Austin + "austin", + + // Charles Babbage invented the concept of a programmable computer. https://en.wikipedia.org/wiki/Charles_Babbage. 
+ "babbage", + + // Stefan Banach - Polish mathematician, was one of the founders of modern functional analysis. https://en.wikipedia.org/wiki/Stefan_Banach + "banach", + + // John Bardeen co-invented the transistor - https://en.wikipedia.org/wiki/John_Bardeen + "bardeen", + + // Jean Bartik, born Betty Jean Jennings, was one of the original programmers for the ENIAC computer. https://en.wikipedia.org/wiki/Jean_Bartik + "bartik", + + // Laura Bassi, the world's first female professor https://en.wikipedia.org/wiki/Laura_Bassi + "bassi", + + // Hugh Beaver, British engineer, founder of the Guinness Book of World Records https://en.wikipedia.org/wiki/Hugh_Beaver + "beaver", + + // Alexander Graham Bell - an eminent Scottish-born scientist, inventor, engineer and innovator who is credited with inventing the first practical telephone - https://en.wikipedia.org/wiki/Alexander_Graham_Bell + "bell", + + // Karl Friedrich Benz - a German automobile engineer. Inventor of the first practical motorcar. https://en.wikipedia.org/wiki/Karl_Benz + "benz", + + // Homi J Bhabha - was an Indian nuclear physicist, founding director, and professor of physics at the Tata Institute of Fundamental Research. Colloquially known as "father of Indian nuclear programme"- https://en.wikipedia.org/wiki/Homi_J._Bhabha + "bhabha", + + // Bhaskara II - Ancient Indian mathematician-astronomer whose work on calculus predates Newton and Leibniz by over half a millennium - https://en.wikipedia.org/wiki/Bh%C4%81skara_II#Calculus + "bhaskara", + + // Elizabeth Blackwell - American doctor and first American woman to receive a medical degree - https://en.wikipedia.org/wiki/Elizabeth_Blackwell + "blackwell", + + // Niels Bohr is the father of quantum theory. https://en.wikipedia.org/wiki/Niels_Bohr. + "bohr", + + // Kathleen Booth, she's credited with writing the first assembly language. https://en.wikipedia.org/wiki/Kathleen_Booth + "booth", + + // Anita Borg - Anita Borg was the founding director of the Institute for Women and Technology (IWT). https://en.wikipedia.org/wiki/Anita_Borg + "borg", + + // Satyendra Nath Bose - He provided the foundation for Bose–Einstein statistics and the theory of the Bose–Einstein condensate. - https://en.wikipedia.org/wiki/Satyendra_Nath_Bose + "bose", + + // Evelyn Boyd Granville - She was one of the first African-American woman to receive a Ph.D. in mathematics; she earned it in 1949 from Yale University. https://en.wikipedia.org/wiki/Evelyn_Boyd_Granville + "boyd", + + // Brahmagupta - Ancient Indian mathematician during 598-670 CE who gave rules to compute with zero - https://en.wikipedia.org/wiki/Brahmagupta#Zero + "brahmagupta", + + // Walter Houser Brattain co-invented the transistor - https://en.wikipedia.org/wiki/Walter_Houser_Brattain + "brattain", + + // Emmett Brown invented time travel. https://en.wikipedia.org/wiki/Emmett_Brown (thanks Brian Goff) + "brown", + + // Rachel Carson - American marine biologist and conservationist, her book Silent Spring and other writings are credited with advancing the global environmental movement. https://en.wikipedia.org/wiki/Rachel_Carson + "carson", + + // Subrahmanyan Chandrasekhar - Astrophysicist known for his mathematical theory on different stages and evolution in structures of the stars. He has won nobel prize for physics - https://en.wikipedia.org/wiki/Subrahmanyan_Chandrasekhar + "chandrasekhar", + + //Claude Shannon - The father of information theory and founder of digital circuit design theory. 
(https://en.wikipedia.org/wiki/Claude_Shannon) + "shannon", + + // Joan Clarke - Bletchley Park code breaker during the Second World War who pioneered techniques that remained top secret for decades. Also an accomplished numismatist https://en.wikipedia.org/wiki/Joan_Clarke + "clarke", + + // Jane Colden - American botanist widely considered the first female American botanist - https://en.wikipedia.org/wiki/Jane_Colden + "colden", + + // Gerty Theresa Cori - American biochemist who became the third woman—and first American woman—to win a Nobel Prize in science, and the first woman to be awarded the Nobel Prize in Physiology or Medicine. Cori was born in Prague. https://en.wikipedia.org/wiki/Gerty_Cori + "cori", + + // Seymour Roger Cray was an American electrical engineer and supercomputer architect who designed a series of computers that were the fastest in the world for decades. https://en.wikipedia.org/wiki/Seymour_Cray + "cray", + + // This entry reflects a husband and wife team who worked together: + // Joan Curran was a Welsh scientist who developed radar and invented chaff, a radar countermeasure. https://en.wikipedia.org/wiki/Joan_Curran + // Samuel Curran was an Irish physicist who worked alongside his wife during WWII and invented the proximity fuse. https://en.wikipedia.org/wiki/Samuel_Curran + "curran", + + // Marie Curie discovered radioactivity. https://en.wikipedia.org/wiki/Marie_Curie. + "curie", + + // Charles Darwin established the principles of natural evolution. https://en.wikipedia.org/wiki/Charles_Darwin. + "darwin", + + // Leonardo Da Vinci invented too many things to list here. https://en.wikipedia.org/wiki/Leonardo_da_Vinci. + "davinci", + + // Edsger Wybe Dijkstra was a Dutch computer scientist and mathematical scientist. https://en.wikipedia.org/wiki/Edsger_W._Dijkstra. + "dijkstra", + + // Donna Dubinsky - played an integral role in the development of personal digital assistants (PDAs) serving as CEO of Palm, Inc. and co-founding Handspring. https://en.wikipedia.org/wiki/Donna_Dubinsky + "dubinsky", + + // Annie Easley - She was a leading member of the team which developed software for the Centaur rocket stage and one of the first African-Americans in her field. https://en.wikipedia.org/wiki/Annie_Easley + "easley", + + // Thomas Alva Edison, prolific inventor https://en.wikipedia.org/wiki/Thomas_Edison + "edison", + + // Albert Einstein invented the general theory of relativity. https://en.wikipedia.org/wiki/Albert_Einstein + "einstein", + + // Gertrude Elion - American biochemist, pharmacologist and the 1988 recipient of the Nobel Prize in Medicine - https://en.wikipedia.org/wiki/Gertrude_Elion + "elion", + + // Douglas Engelbart gave the mother of all demos: https://en.wikipedia.org/wiki/Douglas_Engelbart + "engelbart", + + // Euclid invented geometry. https://en.wikipedia.org/wiki/Euclid + "euclid", + + // Leonhard Euler invented large parts of modern mathematics. https://de.wikipedia.org/wiki/Leonhard_Euler + "euler", + + // Pierre de Fermat pioneered several aspects of modern mathematics. https://en.wikipedia.org/wiki/Pierre_de_Fermat + "fermat", + + // Enrico Fermi invented the first nuclear reactor. https://en.wikipedia.org/wiki/Enrico_Fermi. + "fermi", + + // Richard Feynman was a key contributor to quantum mechanics and particle physics. https://en.wikipedia.org/wiki/Richard_Feynman + "feynman", + + // Benjamin Franklin is famous for his experiments in electricity and the invention of the lightning rod. 
+ "franklin", + + // Galileo was a founding father of modern astronomy, and faced politics and obscurantism to establish scientific truth. https://en.wikipedia.org/wiki/Galileo_Galilei + "galileo", + + // William Henry "Bill" Gates III is an American business magnate, philanthropist, investor, computer programmer, and inventor. https://en.wikipedia.org/wiki/Bill_Gates + "gates", + + // Adele Goldberg, was one of the designers and developers of the Smalltalk language. https://en.wikipedia.org/wiki/Adele_Goldberg_(computer_scientist) + "goldberg", + + // Adele Goldstine, born Adele Katz, wrote the complete technical description for the first electronic digital computer, ENIAC. https://en.wikipedia.org/wiki/Adele_Goldstine + "goldstine", + + // Shafi Goldwasser is a computer scientist known for creating theoretical foundations of modern cryptography. Winner of 2012 ACM Turing Award. https://en.wikipedia.org/wiki/Shafi_Goldwasser + "goldwasser", + + // James Golick, all around gangster. + "golick", + + // Jane Goodall - British primatologist, ethologist, and anthropologist who is considered to be the world's foremost expert on chimpanzees - https://en.wikipedia.org/wiki/Jane_Goodall + "goodall", + + // Lois Haibt - American computer scientist, part of the team at IBM that developed FORTRAN - https://en.wikipedia.org/wiki/Lois_Haibt + "haibt", + + // Margaret Hamilton - Director of the Software Engineering Division of the MIT Instrumentation Laboratory, which developed on-board flight software for the Apollo space program. https://en.wikipedia.org/wiki/Margaret_Hamilton_(scientist) + "hamilton", + + // Stephen Hawking pioneered the field of cosmology by combining general relativity and quantum mechanics. https://en.wikipedia.org/wiki/Stephen_Hawking + "hawking", + + // Werner Heisenberg was a founding father of quantum mechanics. https://en.wikipedia.org/wiki/Werner_Heisenberg + "heisenberg", + + // Grete Hermann was a German philosopher noted for her philosophical work on the foundations of quantum mechanics. https://en.wikipedia.org/wiki/Grete_Hermann + "hermann", + + // Jaroslav Heyrovský was the inventor of the polarographic method, father of the electroanalytical method, and recipient of the Nobel Prize in 1959. His main field of work was polarography. https://en.wikipedia.org/wiki/Jaroslav_Heyrovsk%C3%BD + "heyrovsky", + + // Dorothy Hodgkin was a British biochemist, credited with the development of protein crystallography. She was awarded the Nobel Prize in Chemistry in 1964. https://en.wikipedia.org/wiki/Dorothy_Hodgkin + "hodgkin", + + // Erna Schneider Hoover revolutionized modern communication by inventing a computerized telephone switching method. https://en.wikipedia.org/wiki/Erna_Schneider_Hoover + "hoover", + + // Grace Hopper developed the first compiler for a computer programming language and is credited with popularizing the term "debugging" for fixing computer glitches. https://en.wikipedia.org/wiki/Grace_Hopper + "hopper", + + // Frances Hugle, she was an American scientist, engineer, and inventor who contributed to the understanding of semiconductors, integrated circuitry, and the unique electrical principles of microscopic materials. 
https://en.wikipedia.org/wiki/Frances_Hugle + "hugle", + + // Hypatia - Greek Alexandrine Neoplatonist philosopher in Egypt who was one of the earliest mothers of mathematics - https://en.wikipedia.org/wiki/Hypatia + "hypatia", + + // Mary Jackson, American mathematician and aerospace engineer who earned the highest title within NASA's engineering department - https://en.wikipedia.org/wiki/Mary_Jackson_(engineer) + "jackson", + + // Yeong-Sil Jang was a Korean scientist and astronomer during the Joseon Dynasty; he invented the first metal printing press and water gauge. https://en.wikipedia.org/wiki/Jang_Yeong-sil + "jang", + + // Betty Jennings - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Jean_Bartik + "jennings", + + // Mary Lou Jepsen, was the founder and chief technology officer of One Laptop Per Child (OLPC), and the founder of Pixel Qi. https://en.wikipedia.org/wiki/Mary_Lou_Jepsen + "jepsen", + + // Katherine Coleman Goble Johnson - American physicist and mathematician contributed to the NASA. https://en.wikipedia.org/wiki/Katherine_Johnson + "johnson", + + // Irène Joliot-Curie - French scientist who was awarded the Nobel Prize for Chemistry in 1935. Daughter of Marie and Pierre Curie. https://en.wikipedia.org/wiki/Ir%C3%A8ne_Joliot-Curie + "joliot", + + // Karen Spärck Jones came up with the concept of inverse document frequency, which is used in most search engines today. https://en.wikipedia.org/wiki/Karen_Sp%C3%A4rck_Jones + "jones", + + // A. P. J. Abdul Kalam - is an Indian scientist aka Missile Man of India for his work on the development of ballistic missile and launch vehicle technology - https://en.wikipedia.org/wiki/A._P._J._Abdul_Kalam + "kalam", + + // Susan Kare, created the icons and many of the interface elements for the original Apple Macintosh in the 1980s, and was an original employee of NeXT, working as the Creative Director. https://en.wikipedia.org/wiki/Susan_Kare + "kare", + + // Mary Kenneth Keller, Sister Mary Kenneth Keller became the first American woman to earn a PhD in Computer Science in 1965. https://en.wikipedia.org/wiki/Mary_Kenneth_Keller + "keller", + + // Johannes Kepler, German astronomer known for his three laws of planetary motion - https://en.wikipedia.org/wiki/Johannes_Kepler + "kepler", + + // Har Gobind Khorana - Indian-American biochemist who shared the 1968 Nobel Prize for Physiology - https://en.wikipedia.org/wiki/Har_Gobind_Khorana + "khorana", + + // Jack Kilby invented silicone integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Jack_Kilby + "kilby", + + // Maria Kirch - German astronomer and first woman to discover a comet - https://en.wikipedia.org/wiki/Maria_Margarethe_Kirch + "kirch", + + // Donald Knuth - American computer scientist, author of "The Art of Computer Programming" and creator of the TeX typesetting system. https://en.wikipedia.org/wiki/Donald_Knuth + "knuth", + + // Sophie Kowalevski - Russian mathematician responsible for important original contributions to analysis, differential equations and mechanics - https://en.wikipedia.org/wiki/Sofia_Kovalevskaya + "kowalevski", + + // Marie-Jeanne de Lalande - French astronomer, mathematician and cataloguer of stars - https://en.wikipedia.org/wiki/Marie-Jeanne_de_Lalande + "lalande", + + // Hedy Lamarr - Actress and inventor. The principles of her work are now incorporated into modern Wi-Fi, CDMA and Bluetooth technology. 
https://en.wikipedia.org/wiki/Hedy_Lamarr + "lamarr", + + // Leslie B. Lamport - American computer scientist. Lamport is best known for his seminal work in distributed systems and was the winner of the 2013 Turing Award. https://en.wikipedia.org/wiki/Leslie_Lamport + "lamport", + + // Mary Leakey - British paleoanthropologist who discovered the first fossilized Proconsul skull - https://en.wikipedia.org/wiki/Mary_Leakey + "leakey", + + // Henrietta Swan Leavitt - she was an American astronomer who discovered the relation between the luminosity and the period of Cepheid variable stars. https://en.wikipedia.org/wiki/Henrietta_Swan_Leavitt + "leavitt", + + //Daniel Lewin - Mathematician, Akamai co-founder, soldier, 9/11 victim-- Developed optimization techniques for routing traffic on the internet. Died attempting to stop the 9-11 hijackers. https://en.wikipedia.org/wiki/Daniel_Lewin + "lewin", + + // Ruth Lichterman - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Ruth_Teitelbaum + "lichterman", + + // Barbara Liskov - co-developed the Liskov substitution principle. Liskov was also the winner of the Turing Prize in 2008. - https://en.wikipedia.org/wiki/Barbara_Liskov + "liskov", + + // Ada Lovelace invented the first algorithm. https://en.wikipedia.org/wiki/Ada_Lovelace (thanks James Turnbull) + "lovelace", + + // Auguste and Louis Lumière - the first filmmakers in history - https://en.wikipedia.org/wiki/Auguste_and_Louis_Lumi%C3%A8re + "lumiere", + + // Mahavira - Ancient Indian mathematician during 9th century AD who discovered basic algebraic identities - https://en.wikipedia.org/wiki/Mah%C4%81v%C4%ABra_(mathematician) + "mahavira", + + // Maria Mayer - American theoretical physicist and Nobel laureate in Physics for proposing the nuclear shell model of the atomic nucleus - https://en.wikipedia.org/wiki/Maria_Mayer + "mayer", + + // John McCarthy invented LISP: https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist) + "mccarthy", + + // Barbara McClintock - a distinguished American cytogeneticist, 1983 Nobel Laureate in Physiology or Medicine for discovering transposons. https://en.wikipedia.org/wiki/Barbara_McClintock + "mcclintock", + + // Malcolm McLean invented the modern shipping container: https://en.wikipedia.org/wiki/Malcom_McLean + "mclean", + + // Kay McNulty - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Kathleen_Antonelli + "mcnulty", + + // Lise Meitner - Austrian/Swedish physicist who was involved in the discovery of nuclear fission. The element meitnerium is named after her - https://en.wikipedia.org/wiki/Lise_Meitner + "meitner", + + // Carla Meninsky, was the game designer and programmer for Atari 2600 games Dodge 'Em and Warlords. https://en.wikipedia.org/wiki/Carla_Meninsky + "meninsky", + + // Johanna Mestorf - German prehistoric archaeologist and first female museum director in Germany - https://en.wikipedia.org/wiki/Johanna_Mestorf + "mestorf", + + // Marvin Minsky - Pioneer in Artificial Intelligence, co-founder of the MIT's AI Lab, won the Turing Award in 1969. https://en.wikipedia.org/wiki/Marvin_Minsky + "minsky", + + // Maryam Mirzakhani - an Iranian mathematician and the first woman to win the Fields Medal. 
https://en.wikipedia.org/wiki/Maryam_Mirzakhani + "mirzakhani", + + // Samuel Morse - contributed to the invention of a single-wire telegraph system based on European telegraphs and was a co-developer of the Morse code - https://en.wikipedia.org/wiki/Samuel_Morse + "morse", + + // Ian Murdock - founder of the Debian project - https://en.wikipedia.org/wiki/Ian_Murdock + "murdock", + + // John von Neumann - todays computer architectures are based on the von Neumann architecture. https://en.wikipedia.org/wiki/Von_Neumann_architecture + "neumann", + + // Isaac Newton invented classic mechanics and modern optics. https://en.wikipedia.org/wiki/Isaac_Newton + "newton", + + // Florence Nightingale, more prominently known as a nurse, was also the first female member of the Royal Statistical Society and a pioneer in statistical graphics https://en.wikipedia.org/wiki/Florence_Nightingale#Statistics_and_sanitary_reform + "nightingale", + + // Alfred Nobel - a Swedish chemist, engineer, innovator, and armaments manufacturer (inventor of dynamite) - https://en.wikipedia.org/wiki/Alfred_Nobel + "nobel", + + // Emmy Noether, German mathematician. Noether's Theorem is named after her. https://en.wikipedia.org/wiki/Emmy_Noether + "noether", + + // Poppy Northcutt. Poppy Northcutt was the first woman to work as part of NASA’s Mission Control. http://www.businessinsider.com/poppy-northcutt-helped-apollo-astronauts-2014-12?op=1 + "northcutt", + + // Robert Noyce invented silicone integrated circuits and gave Silicon Valley its name. - https://en.wikipedia.org/wiki/Robert_Noyce + "noyce", + + // Panini - Ancient Indian linguist and grammarian from 4th century CE who worked on the world's first formal system - https://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini#Comparison_with_modern_formal_systems + "panini", + + // Ambroise Pare invented modern surgery. https://en.wikipedia.org/wiki/Ambroise_Par%C3%A9 + "pare", + + // Louis Pasteur discovered vaccination, fermentation and pasteurization. https://en.wikipedia.org/wiki/Louis_Pasteur. + "pasteur", + + // Cecilia Payne-Gaposchkin was an astronomer and astrophysicist who, in 1925, proposed in her Ph.D. thesis an explanation for the composition of stars in terms of the relative abundances of hydrogen and helium. https://en.wikipedia.org/wiki/Cecilia_Payne-Gaposchkin + "payne", + + // Radia Perlman is a software designer and network engineer and most famous for her invention of the spanning-tree protocol (STP). https://en.wikipedia.org/wiki/Radia_Perlman + "perlman", + + // Rob Pike was a key contributor to Unix, Plan 9, the X graphic system, utf-8, and the Go programming language. https://en.wikipedia.org/wiki/Rob_Pike + "pike", + + // Henri Poincaré made fundamental contributions in several fields of mathematics. https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9 + "poincare", + + // Laura Poitras is a director and producer whose work, made possible by open source crypto tools, advances the causes of truth and freedom of information by reporting disclosures by whistleblowers such as Edward Snowden. https://en.wikipedia.org/wiki/Laura_Poitras + "poitras", + + // Claudius Ptolemy - a Greco-Egyptian writer of Alexandria, known as a mathematician, astronomer, geographer, astrologer, and poet of a single epigram in the Greek Anthology - https://en.wikipedia.org/wiki/Ptolemy + "ptolemy", + + // C. V. Raman - Indian physicist who won the Nobel Prize in 1930 for proposing the Raman effect. 
- https://en.wikipedia.org/wiki/C._V._Raman + "raman", + + // Srinivasa Ramanujan - Indian mathematician and autodidact who made extraordinary contributions to mathematical analysis, number theory, infinite series, and continued fractions. - https://en.wikipedia.org/wiki/Srinivasa_Ramanujan + "ramanujan", + + // Sally Kristen Ride was an American physicist and astronaut. She was the first American woman in space, and the youngest American astronaut. https://en.wikipedia.org/wiki/Sally_Ride + "ride", + + // Rita Levi-Montalcini - Won Nobel Prize in Physiology or Medicine jointly with colleague Stanley Cohen for the discovery of nerve growth factor (https://en.wikipedia.org/wiki/Rita_Levi-Montalcini) + "montalcini", + + // Dennis Ritchie - co-creator of UNIX and the C programming language. - https://en.wikipedia.org/wiki/Dennis_Ritchie + "ritchie", + + // Wilhelm Conrad Röntgen - German physicist who was awarded the first Nobel Prize in Physics in 1901 for the discovery of X-rays (Röntgen rays). https://en.wikipedia.org/wiki/Wilhelm_R%C3%B6ntgen + "roentgen", + + // Rosalind Franklin - British biophysicist and X-ray crystallographer whose research was critical to the understanding of DNA - https://en.wikipedia.org/wiki/Rosalind_Franklin + "rosalind", + + // Meghnad Saha - Indian astrophysicist best known for his development of the Saha equation, used to describe chemical and physical conditions in stars - https://en.wikipedia.org/wiki/Meghnad_Saha + "saha", + + // Jean E. Sammet developed FORMAC, the first widely used computer language for symbolic manipulation of mathematical formulas. https://en.wikipedia.org/wiki/Jean_E._Sammet + "sammet", + + // Carol Shaw - Originally an Atari employee, Carol Shaw is said to be the first female video game designer. https://en.wikipedia.org/wiki/Carol_Shaw_(video_game_designer) + "shaw", + + // Dame Stephanie "Steve" Shirley - Founded a software company in 1962 employing women working from home. https://en.wikipedia.org/wiki/Steve_Shirley + "shirley", + + // William Shockley co-invented the transistor - https://en.wikipedia.org/wiki/William_Shockley + "shockley", + + // Françoise Barré-Sinoussi - French virologist and Nobel Prize Laureate in Physiology or Medicine; her work was fundamental in identifying HIV as the cause of AIDS. https://en.wikipedia.org/wiki/Fran%C3%A7oise_Barr%C3%A9-Sinoussi + "sinoussi", + + // Betty Snyder - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Betty_Holberton + "snyder", + + // Frances Spence - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Frances_Spence + "spence", + + // Richard Matthew Stallman - the founder of the Free Software movement, the GNU project, the Free Software Foundation, and the League for Programming Freedom. He also invented the concept of copyleft to protect the ideals of this movement, and enshrined this concept in the widely-used GPL (General Public License) for software. https://en.wikiquote.org/wiki/Richard_Stallman + "stallman", + + // Michael Stonebraker is a database research pioneer and architect of Ingres, Postgres, VoltDB and SciDB. Winner of 2014 ACM Turing Award. https://en.wikipedia.org/wiki/Michael_Stonebraker + "stonebraker", + + // Janese Swanson (with others) developed the first of the Carmen Sandiego games. She went on to found Girl Tech. 
https://en.wikipedia.org/wiki/Janese_Swanson + "swanson", + + // Aaron Swartz was influential in creating RSS, Markdown, Creative Commons, Reddit, and much of the internet as we know it today. He was devoted to freedom of information on the web. https://en.wikiquote.org/wiki/Aaron_Swartz + "swartz", + + // Bertha Swirles was a theoretical physicist who made a number of contributions to early quantum theory. https://en.wikipedia.org/wiki/Bertha_Swirles + "swirles", + + // Nikola Tesla invented the AC electric system and every gadget ever used by a James Bond villain. https://en.wikipedia.org/wiki/Nikola_Tesla + "tesla", + + // Ken Thompson - co-creator of UNIX and the C programming language - https://en.wikipedia.org/wiki/Ken_Thompson + "thompson", + + // Linus Torvalds invented Linux and Git. https://en.wikipedia.org/wiki/Linus_Torvalds + "torvalds", + + // Alan Turing was a founding father of computer science. https://en.wikipedia.org/wiki/Alan_Turing. + "turing", + + // Varahamihira - Ancient Indian mathematician who discovered trigonometric formulae during 505-587 CE - https://en.wikipedia.org/wiki/Var%C4%81hamihira#Contributions + "varahamihira", + + // Sir Mokshagundam Visvesvaraya - is a notable Indian engineer. He is a recipient of the Indian Republic's highest honour, the Bharat Ratna, in 1955. On his birthday, 15 September is celebrated as Engineer's Day in India in his memory - https://en.wikipedia.org/wiki/Visvesvaraya + "visvesvaraya", + + // Christiane Nüsslein-Volhard - German biologist, won Nobel Prize in Physiology or Medicine in 1995 for research on the genetic control of embryonic development. https://en.wikipedia.org/wiki/Christiane_N%C3%BCsslein-Volhard + "volhard", + + // Marlyn Wescoff - one of the original programmers of the ENIAC. https://en.wikipedia.org/wiki/ENIAC - https://en.wikipedia.org/wiki/Marlyn_Meltzer + "wescoff", + + // Andrew Wiles - Notable British mathematician who proved the enigmatic Fermat's Last Theorem - https://en.wikipedia.org/wiki/Andrew_Wiles + "wiles", + + // Roberta Williams, did pioneering work in graphical adventure games for personal computers, particularly the King's Quest series. https://en.wikipedia.org/wiki/Roberta_Williams + "williams", + + // Sophie Wilson designed the first Acorn Micro-Computer and the instruction set for ARM processors. https://en.wikipedia.org/wiki/Sophie_Wilson + "wilson", + + // Jeannette Wing - co-developed the Liskov substitution principle. - https://en.wikipedia.org/wiki/Jeannette_Wing + "wing", + + // Steve Wozniak invented the Apple I and Apple II. https://en.wikipedia.org/wiki/Steve_Wozniak + "wozniak", + + // The Wright brothers, Orville and Wilbur - credited with inventing and building the world's first successful airplane and making the first controlled, powered and sustained heavier-than-air human flight - https://en.wikipedia.org/wiki/Wright_brothers + "wright", + + // Rosalyn Sussman Yalow - Rosalyn Sussman Yalow was an American medical physicist, and a co-winner of the 1977 Nobel Prize in Physiology or Medicine for development of the radioimmunoassay technique. https://en.wikipedia.org/wiki/Rosalyn_Sussman_Yalow + "yalow", + + // Ada Yonath - an Israeli crystallographer, the first woman from the Middle East to win a Nobel prize in the sciences. https://en.wikipedia.org/wiki/Ada_Yonath + "yonath", + } +) + +// GetRandomName generates a random name from the list of adjectives and surnames in this package +// formatted as "adjective_surname". For example 'focused_turing'. 
If retry is non-zero, a random +// integer between 0 and 10 will be added to the end of the name, e.g `focused_turing3` +func GetRandomName(retry int) string { + rnd := random.Rand +begin: + name := fmt.Sprintf("%s_%s", left[rnd.Intn(len(left))], right[rnd.Intn(len(right))]) + if name == "boring_wozniak" /* Steve Wozniak is not boring */ { + goto begin + } + + if retry > 0 { + name = fmt.Sprintf("%s%d", name, rnd.Intn(10)) + } + return name +} diff --git a/vendor/github.com/docker/docker/pkg/random/random.go b/vendor/github.com/docker/docker/pkg/random/random.go new file mode 100644 index 000000000..70de4d130 --- /dev/null +++ b/vendor/github.com/docker/docker/pkg/random/random.go @@ -0,0 +1,71 @@ +package random + +import ( + cryptorand "crypto/rand" + "io" + "math" + "math/big" + "math/rand" + "sync" + "time" +) + +// Rand is a global *rand.Rand instance, which initialized with NewSource() source. +var Rand = rand.New(NewSource()) + +// Reader is a global, shared instance of a pseudorandom bytes generator. +// It doesn't consume entropy. +var Reader io.Reader = &reader{rnd: Rand} + +// copypaste from standard math/rand +type lockedSource struct { + lk sync.Mutex + src rand.Source +} + +func (r *lockedSource) Int63() (n int64) { + r.lk.Lock() + n = r.src.Int63() + r.lk.Unlock() + return +} + +func (r *lockedSource) Seed(seed int64) { + r.lk.Lock() + r.src.Seed(seed) + r.lk.Unlock() +} + +// NewSource returns math/rand.Source safe for concurrent use and initialized +// with current unix-nano timestamp +func NewSource() rand.Source { + var seed int64 + if cryptoseed, err := cryptorand.Int(cryptorand.Reader, big.NewInt(math.MaxInt64)); err != nil { + // This should not happen, but worst-case fallback to time-based seed. + seed = time.Now().UnixNano() + } else { + seed = cryptoseed.Int64() + } + return &lockedSource{ + src: rand.NewSource(seed), + } +} + +type reader struct { + rnd *rand.Rand +} + +func (r *reader) Read(b []byte) (int, error) { + i := 0 + for { + val := r.rnd.Int63() + for val > 0 { + b[i] = byte(val) + i++ + if i == len(b) { + return i, nil + } + val >>= 8 + } + } +} diff --git a/vendor/github.com/docker/docker/pkg/term/windows/ansi_reader.go b/vendor/github.com/docker/docker/pkg/term/windows/ansi_reader.go new file mode 100644 index 000000000..1d7c452cc --- /dev/null +++ b/vendor/github.com/docker/docker/pkg/term/windows/ansi_reader.go @@ -0,0 +1,263 @@ +// +build windows + +package windowsconsole // import "github.com/docker/docker/pkg/term/windows" + +import ( + "bytes" + "errors" + "fmt" + "io" + "os" + "strings" + "unsafe" + + ansiterm "github.com/Azure/go-ansiterm" + "github.com/Azure/go-ansiterm/winterm" +) + +const ( + escapeSequence = ansiterm.KEY_ESC_CSI +) + +// ansiReader wraps a standard input file (e.g., os.Stdin) providing ANSI sequence translation. +type ansiReader struct { + file *os.File + fd uintptr + buffer []byte + cbBuffer int + command []byte +} + +// NewAnsiReader returns an io.ReadCloser that provides VT100 terminal emulation on top of a +// Windows console input handle. +func NewAnsiReader(nFile int) io.ReadCloser { + initLogger() + file, fd := winterm.GetStdFile(nFile) + return &ansiReader{ + file: file, + fd: fd, + command: make([]byte, 0, ansiterm.ANSI_MAX_CMD_LENGTH), + buffer: make([]byte, 0), + } +} + +// Close closes the wrapped file. +func (ar *ansiReader) Close() (err error) { + return ar.file.Close() +} + +// Fd returns the file descriptor of the wrapped file. 
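+// The descriptor comes from winterm.GetStdFile and refers to the console input handle.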
+func (ar *ansiReader) Fd() uintptr { + return ar.fd +} + +// Read reads up to len(p) bytes of translated input events into p. +func (ar *ansiReader) Read(p []byte) (int, error) { + if len(p) == 0 { + return 0, nil + } + + // Previously read bytes exist, read as much as we can and return + if len(ar.buffer) > 0 { + logger.Debugf("Reading previously cached bytes") + + originalLength := len(ar.buffer) + copiedLength := copy(p, ar.buffer) + + if copiedLength == originalLength { + ar.buffer = make([]byte, 0, len(p)) + } else { + ar.buffer = ar.buffer[copiedLength:] + } + + logger.Debugf("Read from cache p[%d]: % x", copiedLength, p) + return copiedLength, nil + } + + // Read and translate key events + events, err := readInputEvents(ar.fd, len(p)) + if err != nil { + return 0, err + } else if len(events) == 0 { + logger.Debug("No input events detected") + return 0, nil + } + + keyBytes := translateKeyEvents(events, []byte(escapeSequence)) + + // Save excess bytes and right-size keyBytes + if len(keyBytes) > len(p) { + logger.Debugf("Received %d keyBytes, only room for %d bytes", len(keyBytes), len(p)) + ar.buffer = keyBytes[len(p):] + keyBytes = keyBytes[:len(p)] + } else if len(keyBytes) == 0 { + logger.Debug("No key bytes returned from the translator") + return 0, nil + } + + copiedLength := copy(p, keyBytes) + if copiedLength != len(keyBytes) { + return 0, errors.New("unexpected copy length encountered") + } + + logger.Debugf("Read p[%d]: % x", copiedLength, p) + logger.Debugf("Read keyBytes[%d]: % x", copiedLength, keyBytes) + return copiedLength, nil +} + +// readInputEvents polls until at least one event is available. +func readInputEvents(fd uintptr, maxBytes int) ([]winterm.INPUT_RECORD, error) { + // Determine the maximum number of records to retrieve + // -- Cast around the type system to obtain the size of a single INPUT_RECORD. + // unsafe.Sizeof requires an expression vs. a type-reference; the casting + // tricks the type system into believing it has such an expression. 
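+ // The same size could be obtained by declaring a zero winterm.INPUT_RECORD and
+ // taking unsafe.Sizeof of it; the pointer cast below just avoids the throwaway value.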
+ recordSize := int(unsafe.Sizeof(*((*winterm.INPUT_RECORD)(unsafe.Pointer(&maxBytes))))) + countRecords := maxBytes / recordSize + if countRecords > ansiterm.MAX_INPUT_EVENTS { + countRecords = ansiterm.MAX_INPUT_EVENTS + } else if countRecords == 0 { + countRecords = 1 + } + logger.Debugf("[windows] readInputEvents: Reading %v records (buffer size %v, record size %v)", countRecords, maxBytes, recordSize) + + // Wait for and read input events + events := make([]winterm.INPUT_RECORD, countRecords) + nEvents := uint32(0) + eventsExist, err := winterm.WaitForSingleObject(fd, winterm.WAIT_INFINITE) + if err != nil { + return nil, err + } + + if eventsExist { + err = winterm.ReadConsoleInput(fd, events, &nEvents) + if err != nil { + return nil, err + } + } + + // Return a slice restricted to the number of returned records + logger.Debugf("[windows] readInputEvents: Read %v events", nEvents) + return events[:nEvents], nil +} + +// KeyEvent Translation Helpers + +var arrowKeyMapPrefix = map[uint16]string{ + winterm.VK_UP: "%s%sA", + winterm.VK_DOWN: "%s%sB", + winterm.VK_RIGHT: "%s%sC", + winterm.VK_LEFT: "%s%sD", +} + +var keyMapPrefix = map[uint16]string{ + winterm.VK_UP: "\x1B[%sA", + winterm.VK_DOWN: "\x1B[%sB", + winterm.VK_RIGHT: "\x1B[%sC", + winterm.VK_LEFT: "\x1B[%sD", + winterm.VK_HOME: "\x1B[1%s~", // showkey shows ^[[1 + winterm.VK_END: "\x1B[4%s~", // showkey shows ^[[4 + winterm.VK_INSERT: "\x1B[2%s~", + winterm.VK_DELETE: "\x1B[3%s~", + winterm.VK_PRIOR: "\x1B[5%s~", + winterm.VK_NEXT: "\x1B[6%s~", + winterm.VK_F1: "", + winterm.VK_F2: "", + winterm.VK_F3: "\x1B[13%s~", + winterm.VK_F4: "\x1B[14%s~", + winterm.VK_F5: "\x1B[15%s~", + winterm.VK_F6: "\x1B[17%s~", + winterm.VK_F7: "\x1B[18%s~", + winterm.VK_F8: "\x1B[19%s~", + winterm.VK_F9: "\x1B[20%s~", + winterm.VK_F10: "\x1B[21%s~", + winterm.VK_F11: "\x1B[23%s~", + winterm.VK_F12: "\x1B[24%s~", +} + +// translateKeyEvents converts the input events into the appropriate ANSI string. +func translateKeyEvents(events []winterm.INPUT_RECORD, escapeSequence []byte) []byte { + var buffer bytes.Buffer + for _, event := range events { + if event.EventType == winterm.KEY_EVENT && event.KeyEvent.KeyDown != 0 { + buffer.WriteString(keyToString(&event.KeyEvent, escapeSequence)) + } + } + + return buffer.Bytes() +} + +// keyToString maps the given input event record to the corresponding string. +func keyToString(keyEvent *winterm.KEY_EVENT_RECORD, escapeSequence []byte) string { + if keyEvent.UnicodeChar == 0 { + return formatVirtualKey(keyEvent.VirtualKeyCode, keyEvent.ControlKeyState, escapeSequence) + } + + _, alt, control := getControlKeys(keyEvent.ControlKeyState) + if control { + // TODO(azlinux): Implement following control sequences + // -D Signals the end of input from the keyboard; also exits current shell. + // -H Deletes the first character to the left of the cursor. Also called the ERASE key. + // -Q Restarts printing after it has been stopped with -s. + // -S Suspends printing on the screen (does not stop the program). + // -U Deletes all characters on the current line. Also called the KILL key. + // -E Quits current command and creates a core + + } + + // +Key generates ESC N Key + if !control && alt { + return ansiterm.KEY_ESC_N + strings.ToLower(string(keyEvent.UnicodeChar)) + } + + return string(keyEvent.UnicodeChar) +} + +// formatVirtualKey converts a virtual key (e.g., up arrow) into the appropriate ANSI string. 
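+// For example, VK_UP with no modifier keys renders as the escape prefix followed by 'A'
+// (ESC [ A when the usual CSI prefix is passed as escapeSequence).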
+func formatVirtualKey(key uint16, controlState uint32, escapeSequence []byte) string { + shift, alt, control := getControlKeys(controlState) + modifier := getControlKeysModifier(shift, alt, control) + + if format, ok := arrowKeyMapPrefix[key]; ok { + return fmt.Sprintf(format, escapeSequence, modifier) + } + + if format, ok := keyMapPrefix[key]; ok { + return fmt.Sprintf(format, modifier) + } + + return "" +} + +// getControlKeys extracts the shift, alt, and ctrl key states. +func getControlKeys(controlState uint32) (shift, alt, control bool) { + shift = 0 != (controlState & winterm.SHIFT_PRESSED) + alt = 0 != (controlState & (winterm.LEFT_ALT_PRESSED | winterm.RIGHT_ALT_PRESSED)) + control = 0 != (controlState & (winterm.LEFT_CTRL_PRESSED | winterm.RIGHT_CTRL_PRESSED)) + return shift, alt, control +} + +// getControlKeysModifier returns the ANSI modifier for the given combination of control keys. +func getControlKeysModifier(shift, alt, control bool) string { + if shift && alt && control { + return ansiterm.KEY_CONTROL_PARAM_8 + } + if alt && control { + return ansiterm.KEY_CONTROL_PARAM_7 + } + if shift && control { + return ansiterm.KEY_CONTROL_PARAM_6 + } + if control { + return ansiterm.KEY_CONTROL_PARAM_5 + } + if shift && alt { + return ansiterm.KEY_CONTROL_PARAM_4 + } + if alt { + return ansiterm.KEY_CONTROL_PARAM_3 + } + if shift { + return ansiterm.KEY_CONTROL_PARAM_2 + } + return "" +} diff --git a/vendor/github.com/docker/docker/pkg/term/windows/ansi_writer.go b/vendor/github.com/docker/docker/pkg/term/windows/ansi_writer.go new file mode 100644 index 000000000..7799a03fc --- /dev/null +++ b/vendor/github.com/docker/docker/pkg/term/windows/ansi_writer.go @@ -0,0 +1,64 @@ +// +build windows + +package windowsconsole // import "github.com/docker/docker/pkg/term/windows" + +import ( + "io" + "os" + + ansiterm "github.com/Azure/go-ansiterm" + "github.com/Azure/go-ansiterm/winterm" +) + +// ansiWriter wraps a standard output file (e.g., os.Stdout) providing ANSI sequence translation. +type ansiWriter struct { + file *os.File + fd uintptr + infoReset *winterm.CONSOLE_SCREEN_BUFFER_INFO + command []byte + escapeSequence []byte + inAnsiSequence bool + parser *ansiterm.AnsiParser +} + +// NewAnsiWriter returns an io.Writer that provides VT100 terminal emulation on top of a +// Windows console output handle. +func NewAnsiWriter(nFile int) io.Writer { + initLogger() + file, fd := winterm.GetStdFile(nFile) + info, err := winterm.GetConsoleScreenBufferInfo(fd) + if err != nil { + return nil + } + + parser := ansiterm.CreateParser("Ground", winterm.CreateWinEventHandler(fd, file)) + logger.Infof("newAnsiWriter: parser %p", parser) + + aw := &ansiWriter{ + file: file, + fd: fd, + infoReset: info, + command: make([]byte, 0, ansiterm.ANSI_MAX_CMD_LENGTH), + escapeSequence: []byte(ansiterm.KEY_ESC_CSI), + parser: parser, + } + + logger.Infof("newAnsiWriter: aw.parser %p", aw.parser) + logger.Infof("newAnsiWriter: %v", aw) + return aw +} + +func (aw *ansiWriter) Fd() uintptr { + return aw.fd +} + +// Write writes len(p) bytes from p to the underlying data stream. 
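+// The data is run through the ANSI parser, which converts any escape sequences into the
+// corresponding Windows console API calls via the winterm event handler.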
+func (aw *ansiWriter) Write(p []byte) (total int, err error) { + if len(p) == 0 { + return 0, nil + } + + logger.Infof("Write: % x", p) + logger.Infof("Write: %s", string(p)) + return aw.parser.Parse(p) +} diff --git a/vendor/github.com/docker/docker/pkg/term/windows/console.go b/vendor/github.com/docker/docker/pkg/term/windows/console.go new file mode 100644 index 000000000..527401975 --- /dev/null +++ b/vendor/github.com/docker/docker/pkg/term/windows/console.go @@ -0,0 +1,35 @@ +// +build windows + +package windowsconsole // import "github.com/docker/docker/pkg/term/windows" + +import ( + "os" + + "github.com/Azure/go-ansiterm/winterm" +) + +// GetHandleInfo returns file descriptor and bool indicating whether the file is a console. +func GetHandleInfo(in interface{}) (uintptr, bool) { + switch t := in.(type) { + case *ansiReader: + return t.Fd(), true + case *ansiWriter: + return t.Fd(), true + } + + var inFd uintptr + var isTerminal bool + + if file, ok := in.(*os.File); ok { + inFd = file.Fd() + isTerminal = IsConsole(inFd) + } + return inFd, isTerminal +} + +// IsConsole returns true if the given file descriptor is a Windows Console. +// The code assumes that GetConsoleMode will return an error for file descriptors that are not a console. +func IsConsole(fd uintptr) bool { + _, e := winterm.GetConsoleMode(fd) + return e == nil +} diff --git a/vendor/github.com/docker/docker/pkg/term/windows/windows.go b/vendor/github.com/docker/docker/pkg/term/windows/windows.go new file mode 100644 index 000000000..1f8965969 --- /dev/null +++ b/vendor/github.com/docker/docker/pkg/term/windows/windows.go @@ -0,0 +1,33 @@ +// These files implement ANSI-aware input and output streams for use by the Docker Windows client. +// When asked for the set of standard streams (e.g., stdin, stdout, stderr), the code will create +// and return pseudo-streams that convert ANSI sequences to / from Windows Console API calls. + +package windowsconsole // import "github.com/docker/docker/pkg/term/windows" + +import ( + "io/ioutil" + "os" + "sync" + + ansiterm "github.com/Azure/go-ansiterm" + "github.com/sirupsen/logrus" +) + +var logger *logrus.Logger +var initOnce sync.Once + +func initLogger() { + initOnce.Do(func() { + logFile := ioutil.Discard + + if isDebugEnv := os.Getenv(ansiterm.LogEnv); isDebugEnv == "1" { + logFile, _ = os.Create("ansiReaderWriter.log") + } + + logger = &logrus.Logger{ + Out: logFile, + Formatter: new(logrus.TextFormatter), + Level: logrus.DebugLevel, + } + }) +} diff --git a/vendor/github.com/dustin/go-humanize/LICENSE b/vendor/github.com/dustin/go-humanize/LICENSE new file mode 100644 index 000000000..8d9a94a90 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/LICENSE @@ -0,0 +1,21 @@ +Copyright (c) 2005-2008 Dustin Sallings + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + + diff --git a/vendor/github.com/dustin/go-humanize/README.markdown b/vendor/github.com/dustin/go-humanize/README.markdown new file mode 100644 index 000000000..f69d3c870 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/README.markdown @@ -0,0 +1,92 @@ +# Humane Units [![Build Status](https://travis-ci.org/dustin/go-humanize.svg?branch=master)](https://travis-ci.org/dustin/go-humanize) [![GoDoc](https://godoc.org/github.com/dustin/go-humanize?status.svg)](https://godoc.org/github.com/dustin/go-humanize) + +Just a few functions for helping humanize times and sizes. + +`go get` it as `github.com/dustin/go-humanize`, import it as +`"github.com/dustin/go-humanize"`, use it as `humanize`. + +See [godoc](https://godoc.org/github.com/dustin/go-humanize) for +complete documentation. + +## Sizes + +This lets you take numbers like `82854982` and convert them to useful +strings like, `83 MB` or `79 MiB` (whichever you prefer). + +Example: + +```go +fmt.Printf("That file is %s.", humanize.Bytes(82854982)) // That file is 83 MB. +``` + +## Times + +This lets you take a `time.Time` and spit it out in relative terms. +For example, `12 seconds ago` or `3 days from now`. + +Example: + +```go +fmt.Printf("This was touched %s.", humanize.Time(someTimeInstance)) // This was touched 7 hours ago. +``` + +Thanks to Kyle Lemons for the time implementation from an IRC +conversation one day. It's pretty neat. + +## Ordinals + +From a [mailing list discussion][odisc] where a user wanted to be able +to label ordinals. + + 0 -> 0th + 1 -> 1st + 2 -> 2nd + 3 -> 3rd + 4 -> 4th + [...] + +Example: + +```go +fmt.Printf("You're my %s best friend.", humanize.Ordinal(193)) // You are my 193rd best friend. +``` + +## Commas + +Want to shove commas into numbers? Be my guest. + + 0 -> 0 + 100 -> 100 + 1000 -> 1,000 + 1000000000 -> 1,000,000,000 + -100000 -> -100,000 + +Example: + +```go +fmt.Printf("You owe $%s.\n", humanize.Comma(6582491)) // You owe $6,582,491. +``` + +## Ftoa + +Nicer float64 formatter that removes trailing zeros. + +```go +fmt.Printf("%f", 2.24) // 2.240000 +fmt.Printf("%s", humanize.Ftoa(2.24)) // 2.24 +fmt.Printf("%f", 2.0) // 2.000000 +fmt.Printf("%s", humanize.Ftoa(2.0)) // 2 +``` + +## SI notation + +Format numbers with [SI notation][sinotation]. 
+ +Example: + +```go +humanize.SI(0.00000000223, "M") // 2.23 nM +``` + +[odisc]: https://groups.google.com/d/topic/golang-nuts/l8NhI74jl-4/discussion +[sinotation]: http://en.wikipedia.org/wiki/Metric_prefix diff --git a/vendor/github.com/dustin/go-humanize/big.go b/vendor/github.com/dustin/go-humanize/big.go new file mode 100644 index 000000000..f49dc337d --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/big.go @@ -0,0 +1,31 @@ +package humanize + +import ( + "math/big" +) + +// order of magnitude (to a max order) +func oomm(n, b *big.Int, maxmag int) (float64, int) { + mag := 0 + m := &big.Int{} + for n.Cmp(b) >= 0 { + n.DivMod(n, b, m) + mag++ + if mag == maxmag && maxmag >= 0 { + break + } + } + return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag +} + +// total order of magnitude +// (same as above, but with no upper limit) +func oom(n, b *big.Int) (float64, int) { + mag := 0 + m := &big.Int{} + for n.Cmp(b) >= 0 { + n.DivMod(n, b, m) + mag++ + } + return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag +} diff --git a/vendor/github.com/dustin/go-humanize/bigbytes.go b/vendor/github.com/dustin/go-humanize/bigbytes.go new file mode 100644 index 000000000..1a2bf6172 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/bigbytes.go @@ -0,0 +1,173 @@ +package humanize + +import ( + "fmt" + "math/big" + "strings" + "unicode" +) + +var ( + bigIECExp = big.NewInt(1024) + + // BigByte is one byte in bit.Ints + BigByte = big.NewInt(1) + // BigKiByte is 1,024 bytes in bit.Ints + BigKiByte = (&big.Int{}).Mul(BigByte, bigIECExp) + // BigMiByte is 1,024 k bytes in bit.Ints + BigMiByte = (&big.Int{}).Mul(BigKiByte, bigIECExp) + // BigGiByte is 1,024 m bytes in bit.Ints + BigGiByte = (&big.Int{}).Mul(BigMiByte, bigIECExp) + // BigTiByte is 1,024 g bytes in bit.Ints + BigTiByte = (&big.Int{}).Mul(BigGiByte, bigIECExp) + // BigPiByte is 1,024 t bytes in bit.Ints + BigPiByte = (&big.Int{}).Mul(BigTiByte, bigIECExp) + // BigEiByte is 1,024 p bytes in bit.Ints + BigEiByte = (&big.Int{}).Mul(BigPiByte, bigIECExp) + // BigZiByte is 1,024 e bytes in bit.Ints + BigZiByte = (&big.Int{}).Mul(BigEiByte, bigIECExp) + // BigYiByte is 1,024 z bytes in bit.Ints + BigYiByte = (&big.Int{}).Mul(BigZiByte, bigIECExp) +) + +var ( + bigSIExp = big.NewInt(1000) + + // BigSIByte is one SI byte in big.Ints + BigSIByte = big.NewInt(1) + // BigKByte is 1,000 SI bytes in big.Ints + BigKByte = (&big.Int{}).Mul(BigSIByte, bigSIExp) + // BigMByte is 1,000 SI k bytes in big.Ints + BigMByte = (&big.Int{}).Mul(BigKByte, bigSIExp) + // BigGByte is 1,000 SI m bytes in big.Ints + BigGByte = (&big.Int{}).Mul(BigMByte, bigSIExp) + // BigTByte is 1,000 SI g bytes in big.Ints + BigTByte = (&big.Int{}).Mul(BigGByte, bigSIExp) + // BigPByte is 1,000 SI t bytes in big.Ints + BigPByte = (&big.Int{}).Mul(BigTByte, bigSIExp) + // BigEByte is 1,000 SI p bytes in big.Ints + BigEByte = (&big.Int{}).Mul(BigPByte, bigSIExp) + // BigZByte is 1,000 SI e bytes in big.Ints + BigZByte = (&big.Int{}).Mul(BigEByte, bigSIExp) + // BigYByte is 1,000 SI z bytes in big.Ints + BigYByte = (&big.Int{}).Mul(BigZByte, bigSIExp) +) + +var bigBytesSizeTable = map[string]*big.Int{ + "b": BigByte, + "kib": BigKiByte, + "kb": BigKByte, + "mib": BigMiByte, + "mb": BigMByte, + "gib": BigGiByte, + "gb": BigGByte, + "tib": BigTiByte, + "tb": BigTByte, + "pib": BigPiByte, + "pb": BigPByte, + "eib": BigEiByte, + "eb": BigEByte, + "zib": BigZiByte, + "zb": BigZByte, + "yib": BigYiByte, + "yb": BigYByte, + // Without suffix + 
"": BigByte, + "ki": BigKiByte, + "k": BigKByte, + "mi": BigMiByte, + "m": BigMByte, + "gi": BigGiByte, + "g": BigGByte, + "ti": BigTiByte, + "t": BigTByte, + "pi": BigPiByte, + "p": BigPByte, + "ei": BigEiByte, + "e": BigEByte, + "z": BigZByte, + "zi": BigZiByte, + "y": BigYByte, + "yi": BigYiByte, +} + +var ten = big.NewInt(10) + +func humanateBigBytes(s, base *big.Int, sizes []string) string { + if s.Cmp(ten) < 0 { + return fmt.Sprintf("%d B", s) + } + c := (&big.Int{}).Set(s) + val, mag := oomm(c, base, len(sizes)-1) + suffix := sizes[mag] + f := "%.0f %s" + if val < 10 { + f = "%.1f %s" + } + + return fmt.Sprintf(f, val, suffix) + +} + +// BigBytes produces a human readable representation of an SI size. +// +// See also: ParseBigBytes. +// +// BigBytes(82854982) -> 83 MB +func BigBytes(s *big.Int) string { + sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"} + return humanateBigBytes(s, bigSIExp, sizes) +} + +// BigIBytes produces a human readable representation of an IEC size. +// +// See also: ParseBigBytes. +// +// BigIBytes(82854982) -> 79 MiB +func BigIBytes(s *big.Int) string { + sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"} + return humanateBigBytes(s, bigIECExp, sizes) +} + +// ParseBigBytes parses a string representation of bytes into the number +// of bytes it represents. +// +// See also: BigBytes, BigIBytes. +// +// ParseBigBytes("42 MB") -> 42000000, nil +// ParseBigBytes("42 mib") -> 44040192, nil +func ParseBigBytes(s string) (*big.Int, error) { + lastDigit := 0 + hasComma := false + for _, r := range s { + if !(unicode.IsDigit(r) || r == '.' || r == ',') { + break + } + if r == ',' { + hasComma = true + } + lastDigit++ + } + + num := s[:lastDigit] + if hasComma { + num = strings.Replace(num, ",", "", -1) + } + + val := &big.Rat{} + _, err := fmt.Sscanf(num, "%f", val) + if err != nil { + return nil, err + } + + extra := strings.ToLower(strings.TrimSpace(s[lastDigit:])) + if m, ok := bigBytesSizeTable[extra]; ok { + mv := (&big.Rat{}).SetInt(m) + val.Mul(val, mv) + rv := &big.Int{} + rv.Div(val.Num(), val.Denom()) + return rv, nil + } + + return nil, fmt.Errorf("unhandled size name: %v", extra) +} diff --git a/vendor/github.com/dustin/go-humanize/bytes.go b/vendor/github.com/dustin/go-humanize/bytes.go new file mode 100644 index 000000000..0b498f488 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/bytes.go @@ -0,0 +1,143 @@ +package humanize + +import ( + "fmt" + "math" + "strconv" + "strings" + "unicode" +) + +// IEC Sizes. +// kibis of bits +const ( + Byte = 1 << (iota * 10) + KiByte + MiByte + GiByte + TiByte + PiByte + EiByte +) + +// SI Sizes. 
+const ( + IByte = 1 + KByte = IByte * 1000 + MByte = KByte * 1000 + GByte = MByte * 1000 + TByte = GByte * 1000 + PByte = TByte * 1000 + EByte = PByte * 1000 +) + +var bytesSizeTable = map[string]uint64{ + "b": Byte, + "kib": KiByte, + "kb": KByte, + "mib": MiByte, + "mb": MByte, + "gib": GiByte, + "gb": GByte, + "tib": TiByte, + "tb": TByte, + "pib": PiByte, + "pb": PByte, + "eib": EiByte, + "eb": EByte, + // Without suffix + "": Byte, + "ki": KiByte, + "k": KByte, + "mi": MiByte, + "m": MByte, + "gi": GiByte, + "g": GByte, + "ti": TiByte, + "t": TByte, + "pi": PiByte, + "p": PByte, + "ei": EiByte, + "e": EByte, +} + +func logn(n, b float64) float64 { + return math.Log(n) / math.Log(b) +} + +func humanateBytes(s uint64, base float64, sizes []string) string { + if s < 10 { + return fmt.Sprintf("%d B", s) + } + e := math.Floor(logn(float64(s), base)) + suffix := sizes[int(e)] + val := math.Floor(float64(s)/math.Pow(base, e)*10+0.5) / 10 + f := "%.0f %s" + if val < 10 { + f = "%.1f %s" + } + + return fmt.Sprintf(f, val, suffix) +} + +// Bytes produces a human readable representation of an SI size. +// +// See also: ParseBytes. +// +// Bytes(82854982) -> 83 MB +func Bytes(s uint64) string { + sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB"} + return humanateBytes(s, 1000, sizes) +} + +// IBytes produces a human readable representation of an IEC size. +// +// See also: ParseBytes. +// +// IBytes(82854982) -> 79 MiB +func IBytes(s uint64) string { + sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"} + return humanateBytes(s, 1024, sizes) +} + +// ParseBytes parses a string representation of bytes into the number +// of bytes it represents. +// +// See Also: Bytes, IBytes. +// +// ParseBytes("42 MB") -> 42000000, nil +// ParseBytes("42 mib") -> 44040192, nil +func ParseBytes(s string) (uint64, error) { + lastDigit := 0 + hasComma := false + for _, r := range s { + if !(unicode.IsDigit(r) || r == '.' || r == ',') { + break + } + if r == ',' { + hasComma = true + } + lastDigit++ + } + + num := s[:lastDigit] + if hasComma { + num = strings.Replace(num, ",", "", -1) + } + + f, err := strconv.ParseFloat(num, 64) + if err != nil { + return 0, err + } + + extra := strings.ToLower(strings.TrimSpace(s[lastDigit:])) + if m, ok := bytesSizeTable[extra]; ok { + f *= float64(m) + if f >= math.MaxUint64 { + return 0, fmt.Errorf("too large: %v", s) + } + return uint64(f), nil + } + + return 0, fmt.Errorf("unhandled size name: %v", extra) +} diff --git a/vendor/github.com/dustin/go-humanize/comma.go b/vendor/github.com/dustin/go-humanize/comma.go new file mode 100644 index 000000000..eb285cb95 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/comma.go @@ -0,0 +1,108 @@ +package humanize + +import ( + "bytes" + "math" + "math/big" + "strconv" + "strings" +) + +// Comma produces a string form of the given number in base 10 with +// commas after every three orders of magnitude. +// +// e.g. Comma(834142) -> 834,142 +func Comma(v int64) string { + sign := "" + + // minin64 can't be negated to a usable value, so it has to be special cased. 
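For orientation (editor's addition, not vendored code): the uint64-based helpers in bytes.go above are the common path for ordinary sizes. A short usage sketch assuming the same import path; the first two outputs are taken directly from the doc-comment examples.

```go
package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
)

func main() {
	// SI (base-1000) versus IEC (base-1024) rendering of the same count,
	// matching the Bytes/IBytes doc comments.
	fmt.Println(humanize.Bytes(82854982))  // "83 MB"
	fmt.Println(humanize.IBytes(82854982)) // "79 MiB"

	// ParseBytes lower-cases the unit and strips commas before converting.
	n, err := humanize.ParseBytes("1,234 KiB")
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // 1263616 (1234 * 1024)
}
```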
+ if v == math.MinInt64 { + return "-9,223,372,036,854,775,808" + } + + if v < 0 { + sign = "-" + v = 0 - v + } + + parts := []string{"", "", "", "", "", "", ""} + j := len(parts) - 1 + + for v > 999 { + parts[j] = strconv.FormatInt(v%1000, 10) + switch len(parts[j]) { + case 2: + parts[j] = "0" + parts[j] + case 1: + parts[j] = "00" + parts[j] + } + v = v / 1000 + j-- + } + parts[j] = strconv.Itoa(int(v)) + return sign + strings.Join(parts[j:], ",") +} + +// Commaf produces a string form of the given number in base 10 with +// commas after every three orders of magnitude. +// +// e.g. Commaf(834142.32) -> 834,142.32 +func Commaf(v float64) string { + buf := &bytes.Buffer{} + if v < 0 { + buf.Write([]byte{'-'}) + v = 0 - v + } + + comma := []byte{','} + + parts := strings.Split(strconv.FormatFloat(v, 'f', -1, 64), ".") + pos := 0 + if len(parts[0])%3 != 0 { + pos += len(parts[0]) % 3 + buf.WriteString(parts[0][:pos]) + buf.Write(comma) + } + for ; pos < len(parts[0]); pos += 3 { + buf.WriteString(parts[0][pos : pos+3]) + buf.Write(comma) + } + buf.Truncate(buf.Len() - 1) + + if len(parts) > 1 { + buf.Write([]byte{'.'}) + buf.WriteString(parts[1]) + } + return buf.String() +} + +// BigComma produces a string form of the given big.Int in base 10 +// with commas after every three orders of magnitude. +func BigComma(b *big.Int) string { + sign := "" + if b.Sign() < 0 { + sign = "-" + b.Abs(b) + } + + athousand := big.NewInt(1000) + c := (&big.Int{}).Set(b) + _, m := oom(c, athousand) + parts := make([]string, m+1) + j := len(parts) - 1 + + mod := &big.Int{} + for b.Cmp(athousand) >= 0 { + b.DivMod(b, athousand, mod) + parts[j] = strconv.FormatInt(mod.Int64(), 10) + switch len(parts[j]) { + case 2: + parts[j] = "0" + parts[j] + case 1: + parts[j] = "00" + parts[j] + } + j-- + } + parts[j] = strconv.Itoa(int(b.Int64())) + return sign + strings.Join(parts[j:], ",") +} diff --git a/vendor/github.com/dustin/go-humanize/commaf.go b/vendor/github.com/dustin/go-humanize/commaf.go new file mode 100644 index 000000000..620690dec --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/commaf.go @@ -0,0 +1,40 @@ +// +build go1.6 + +package humanize + +import ( + "bytes" + "math/big" + "strings" +) + +// BigCommaf produces a string form of the given big.Float in base 10 +// with commas after every three orders of magnitude. +func BigCommaf(v *big.Float) string { + buf := &bytes.Buffer{} + if v.Sign() < 0 { + buf.Write([]byte{'-'}) + v.Abs(v) + } + + comma := []byte{','} + + parts := strings.Split(v.Text('f', -1), ".") + pos := 0 + if len(parts[0])%3 != 0 { + pos += len(parts[0]) % 3 + buf.WriteString(parts[0][:pos]) + buf.Write(comma) + } + for ; pos < len(parts[0]); pos += 3 { + buf.WriteString(parts[0][pos : pos+3]) + buf.Write(comma) + } + buf.Truncate(buf.Len() - 1) + + if len(parts) > 1 { + buf.Write([]byte{'.'}) + buf.WriteString(parts[1]) + } + return buf.String() +} diff --git a/vendor/github.com/dustin/go-humanize/ftoa.go b/vendor/github.com/dustin/go-humanize/ftoa.go new file mode 100644 index 000000000..c76190b10 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/ftoa.go @@ -0,0 +1,23 @@ +package humanize + +import "strconv" + +func stripTrailingZeros(s string) string { + offset := len(s) - 1 + for offset > 0 { + if s[offset] == '.' { + offset-- + break + } + if s[offset] != '0' { + break + } + offset-- + } + return s[:offset+1] +} + +// Ftoa converts a float to a string with no trailing zeros. 
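A usage sketch for the comma helpers above (editor's addition, not part of the vendored file). The variants differ mainly in input type; note that BigComma modifies the *big.Int it is given (it calls Abs and DivMod on it), so pass a copy when the original value is still needed.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/dustin/go-humanize"
)

func main() {
	// Integer and float variants, matching the doc-comment examples above.
	fmt.Println(humanize.Comma(834142))     // "834,142"
	fmt.Println(humanize.Commaf(834142.32)) // "834,142.32"
	fmt.Println(humanize.Comma(-100000))    // "-100,000"

	// BigComma mutates its argument, so hand it a copy if the value is reused.
	b := big.NewInt(6582491)
	fmt.Println(humanize.BigComma(new(big.Int).Set(b))) // "6,582,491"
	fmt.Println(b)                                      // still 6582491
}
```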
+func Ftoa(num float64) string { + return stripTrailingZeros(strconv.FormatFloat(num, 'f', 6, 64)) +} diff --git a/vendor/github.com/dustin/go-humanize/humanize.go b/vendor/github.com/dustin/go-humanize/humanize.go new file mode 100644 index 000000000..a2c2da31e --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/humanize.go @@ -0,0 +1,8 @@ +/* +Package humanize converts boring ugly numbers to human-friendly strings and back. + +Durations can be turned into strings such as "3 days ago", numbers +representing sizes like 82854982 into useful strings like, "83 MB" or +"79 MiB" (whichever you prefer). +*/ +package humanize diff --git a/vendor/github.com/dustin/go-humanize/number.go b/vendor/github.com/dustin/go-humanize/number.go new file mode 100644 index 000000000..dec618659 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/number.go @@ -0,0 +1,192 @@ +package humanize + +/* +Slightly adapted from the source to fit go-humanize. + +Author: https://github.com/gorhill +Source: https://gist.github.com/gorhill/5285193 + +*/ + +import ( + "math" + "strconv" +) + +var ( + renderFloatPrecisionMultipliers = [...]float64{ + 1, + 10, + 100, + 1000, + 10000, + 100000, + 1000000, + 10000000, + 100000000, + 1000000000, + } + + renderFloatPrecisionRounders = [...]float64{ + 0.5, + 0.05, + 0.005, + 0.0005, + 0.00005, + 0.000005, + 0.0000005, + 0.00000005, + 0.000000005, + 0.0000000005, + } +) + +// FormatFloat produces a formatted number as string based on the following user-specified criteria: +// * thousands separator +// * decimal separator +// * decimal precision +// +// Usage: s := RenderFloat(format, n) +// The format parameter tells how to render the number n. +// +// See examples: http://play.golang.org/p/LXc1Ddm1lJ +// +// Examples of format strings, given n = 12345.6789: +// "#,###.##" => "12,345.67" +// "#,###." => "12,345" +// "#,###" => "12345,678" +// "#\u202F###,##" => "12 345,68" +// "#.###,###### => 12.345,678900 +// "" (aka default format) => 12,345.67 +// +// The highest precision allowed is 9 digits after the decimal symbol. +// There is also a version for integer number, FormatInteger(), +// which is convenient for calls within template. +func FormatFloat(format string, n float64) string { + // Special cases: + // NaN = "NaN" + // +Inf = "+Infinity" + // -Inf = "-Infinity" + if math.IsNaN(n) { + return "NaN" + } + if n > math.MaxFloat64 { + return "Infinity" + } + if n < -math.MaxFloat64 { + return "-Infinity" + } + + // default format + precision := 2 + decimalStr := "." 
+ thousandStr := "," + positiveStr := "" + negativeStr := "-" + + if len(format) > 0 { + format := []rune(format) + + // If there is an explicit format directive, + // then default values are these: + precision = 9 + thousandStr = "" + + // collect indices of meaningful formatting directives + formatIndx := []int{} + for i, char := range format { + if char != '#' && char != '0' { + formatIndx = append(formatIndx, i) + } + } + + if len(formatIndx) > 0 { + // Directive at index 0: + // Must be a '+' + // Raise an error if not the case + // index: 0123456789 + // +0.000,000 + // +000,000.0 + // +0000.00 + // +0000 + if formatIndx[0] == 0 { + if format[formatIndx[0]] != '+' { + panic("RenderFloat(): invalid positive sign directive") + } + positiveStr = "+" + formatIndx = formatIndx[1:] + } + + // Two directives: + // First is thousands separator + // Raise an error if not followed by 3-digit + // 0123456789 + // 0.000,000 + // 000,000.00 + if len(formatIndx) == 2 { + if (formatIndx[1] - formatIndx[0]) != 4 { + panic("RenderFloat(): thousands separator directive must be followed by 3 digit-specifiers") + } + thousandStr = string(format[formatIndx[0]]) + formatIndx = formatIndx[1:] + } + + // One directive: + // Directive is decimal separator + // The number of digit-specifier following the separator indicates wanted precision + // 0123456789 + // 0.00 + // 000,0000 + if len(formatIndx) == 1 { + decimalStr = string(format[formatIndx[0]]) + precision = len(format) - formatIndx[0] - 1 + } + } + } + + // generate sign part + var signStr string + if n >= 0.000000001 { + signStr = positiveStr + } else if n <= -0.000000001 { + signStr = negativeStr + n = -n + } else { + signStr = "" + n = 0.0 + } + + // split number into integer and fractional parts + intf, fracf := math.Modf(n + renderFloatPrecisionRounders[precision]) + + // generate integer part string + intStr := strconv.FormatInt(int64(intf), 10) + + // add thousand separator if required + if len(thousandStr) > 0 { + for i := len(intStr); i > 3; { + i -= 3 + intStr = intStr[:i] + thousandStr + intStr[i:] + } + } + + // no fractional part, we can leave now + if precision == 0 { + return signStr + intStr + } + + // generate fractional part + fracStr := strconv.Itoa(int(fracf * renderFloatPrecisionMultipliers[precision])) + // may need padding + if len(fracStr) < precision { + fracStr = "000000000000000"[:precision-len(fracStr)] + fracStr + } + + return signStr + intStr + decimalStr + fracStr +} + +// FormatInteger produces a formatted number as string. +// See FormatFloat. +func FormatInteger(format string, n int) string { + return FormatFloat(format, float64(n)) +} diff --git a/vendor/github.com/dustin/go-humanize/ordinals.go b/vendor/github.com/dustin/go-humanize/ordinals.go new file mode 100644 index 000000000..43d88a861 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/ordinals.go @@ -0,0 +1,25 @@ +package humanize + +import "strconv" + +// Ordinal gives you the input number in a rank/ordinal format. 
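An illustrative sketch of FormatFloat and FormatInteger from number.go above (editor's addition, not vendored code). Roughly: a separator followed by exactly three digit placeholders is treated as the thousands separator, and the final separator sets the decimal point, with the digits after it giving the precision. The values below are editor-chosen; the import path is the vendored one.

```go
package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
)

func main() {
	// "#,###." : ',' thousands separator, '.' decimal separator, 0 precision.
	fmt.Println(humanize.FormatInteger("#,###.", 1234567)) // "1,234,567"

	// "#,###.##" : same separators, two digits of precision.
	fmt.Println(humanize.FormatFloat("#,###.##", 1234.5)) // "1,234.50"

	// Separators are arbitrary runes: space for thousands, ',' for decimals.
	fmt.Println(humanize.FormatFloat("# ###,##", 1234.5)) // "1 234,50"
}
```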
+// +// Ordinal(3) -> 3rd +func Ordinal(x int) string { + suffix := "th" + switch x % 10 { + case 1: + if x%100 != 11 { + suffix = "st" + } + case 2: + if x%100 != 12 { + suffix = "nd" + } + case 3: + if x%100 != 13 { + suffix = "rd" + } + } + return strconv.Itoa(x) + suffix +} diff --git a/vendor/github.com/dustin/go-humanize/si.go b/vendor/github.com/dustin/go-humanize/si.go new file mode 100644 index 000000000..b24e48169 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/si.go @@ -0,0 +1,113 @@ +package humanize + +import ( + "errors" + "math" + "regexp" + "strconv" +) + +var siPrefixTable = map[float64]string{ + -24: "y", // yocto + -21: "z", // zepto + -18: "a", // atto + -15: "f", // femto + -12: "p", // pico + -9: "n", // nano + -6: "µ", // micro + -3: "m", // milli + 0: "", + 3: "k", // kilo + 6: "M", // mega + 9: "G", // giga + 12: "T", // tera + 15: "P", // peta + 18: "E", // exa + 21: "Z", // zetta + 24: "Y", // yotta +} + +var revSIPrefixTable = revfmap(siPrefixTable) + +// revfmap reverses the map and precomputes the power multiplier +func revfmap(in map[float64]string) map[string]float64 { + rv := map[string]float64{} + for k, v := range in { + rv[v] = math.Pow(10, k) + } + return rv +} + +var riParseRegex *regexp.Regexp + +func init() { + ri := `^([\-0-9.]+)\s?([` + for _, v := range siPrefixTable { + ri += v + } + ri += `]?)(.*)` + + riParseRegex = regexp.MustCompile(ri) +} + +// ComputeSI finds the most appropriate SI prefix for the given number +// and returns the prefix along with the value adjusted to be within +// that prefix. +// +// See also: SI, ParseSI. +// +// e.g. ComputeSI(2.2345e-12) -> (2.2345, "p") +func ComputeSI(input float64) (float64, string) { + if input == 0 { + return 0, "" + } + mag := math.Abs(input) + exponent := math.Floor(logn(mag, 10)) + exponent = math.Floor(exponent/3) * 3 + + value := mag / math.Pow(10, exponent) + + // Handle special case where value is exactly 1000.0 + // Should return 1 M instead of 1000 k + if value == 1000.0 { + exponent += 3 + value = mag / math.Pow(10, exponent) + } + + value = math.Copysign(value, input) + + prefix := siPrefixTable[exponent] + return value, prefix +} + +// SI returns a string with default formatting. +// +// SI uses Ftoa to format float value, removing trailing zeros. +// +// See also: ComputeSI, ParseSI. +// +// e.g. SI(1000000, "B") -> 1 MB +// e.g. SI(2.2345e-12, "F") -> 2.2345 pF +func SI(input float64, unit string) string { + value, prefix := ComputeSI(input) + return Ftoa(value) + " " + prefix + unit +} + +var errInvalid = errors.New("invalid input") + +// ParseSI parses an SI string back into the number and unit. +// +// See also: SI, ComputeSI. +// +// e.g. ParseSI("2.2345 pF") -> (2.2345e-12, "F", nil) +func ParseSI(input string) (float64, string, error) { + found := riParseRegex.FindStringSubmatch(input) + if len(found) != 4 { + return 0, "", errInvalid + } + mag := revSIPrefixTable[found[2]] + unit := found[3] + + base, err := strconv.ParseFloat(found[1], 64) + return base * mag, unit, err +} diff --git a/vendor/github.com/dustin/go-humanize/times.go b/vendor/github.com/dustin/go-humanize/times.go new file mode 100644 index 000000000..b311f11c8 --- /dev/null +++ b/vendor/github.com/dustin/go-humanize/times.go @@ -0,0 +1,117 @@ +package humanize + +import ( + "fmt" + "math" + "sort" + "time" +) + +// Seconds-based time units +const ( + Day = 24 * time.Hour + Week = 7 * Day + Month = 30 * Day + Year = 12 * Month + LongTime = 37 * Year +) + +// Time formats a time into a relative string. 
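si.go and ordinals.go are complete at this point, so a small usage sketch fits here (editor's addition, not part of the vendored files); the SI outputs follow the doc-comment examples, assuming the vendored import path.

```go
package main

import (
	"fmt"

	"github.com/dustin/go-humanize"
)

func main() {
	// SI picks the nearest metric prefix and Ftoa trims trailing zeros.
	fmt.Println(humanize.SI(1000000, "B"))    // "1 MB"
	fmt.Println(humanize.SI(2.2345e-12, "F")) // "2.2345 pF"

	// ParseSI is the inverse operation.
	v, unit, err := humanize.ParseSI("2.2345 pF")
	if err != nil {
		panic(err)
	}
	fmt.Println(v, unit) // 2.2345e-12 F

	// Ordinal handles the 11/12/13 exceptions.
	fmt.Println(humanize.Ordinal(11), humanize.Ordinal(22)) // 11th 22nd
}
```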
+// +// Time(someT) -> "3 weeks ago" +func Time(then time.Time) string { + return RelTime(then, time.Now(), "ago", "from now") +} + +// A RelTimeMagnitude struct contains a relative time point at which +// the relative format of time will switch to a new format string. A +// slice of these in ascending order by their "D" field is passed to +// CustomRelTime to format durations. +// +// The Format field is a string that may contain a "%s" which will be +// replaced with the appropriate signed label (e.g. "ago" or "from +// now") and a "%d" that will be replaced by the quantity. +// +// The DivBy field is the amount of time the time difference must be +// divided by in order to display correctly. +// +// e.g. if D is 2*time.Minute and you want to display "%d minutes %s" +// DivBy should be time.Minute so whatever the duration is will be +// expressed in minutes. +type RelTimeMagnitude struct { + D time.Duration + Format string + DivBy time.Duration +} + +var defaultMagnitudes = []RelTimeMagnitude{ + {time.Second, "now", time.Second}, + {2 * time.Second, "1 second %s", 1}, + {time.Minute, "%d seconds %s", time.Second}, + {2 * time.Minute, "1 minute %s", 1}, + {time.Hour, "%d minutes %s", time.Minute}, + {2 * time.Hour, "1 hour %s", 1}, + {Day, "%d hours %s", time.Hour}, + {2 * Day, "1 day %s", 1}, + {Week, "%d days %s", Day}, + {2 * Week, "1 week %s", 1}, + {Month, "%d weeks %s", Week}, + {2 * Month, "1 month %s", 1}, + {Year, "%d months %s", Month}, + {18 * Month, "1 year %s", 1}, + {2 * Year, "2 years %s", 1}, + {LongTime, "%d years %s", Year}, + {math.MaxInt64, "a long while %s", 1}, +} + +// RelTime formats a time into a relative string. +// +// It takes two times and two labels. In addition to the generic time +// delta string (e.g. 5 minutes), the labels are used applied so that +// the label corresponding to the smaller time is applied. +// +// RelTime(timeInPast, timeInFuture, "earlier", "later") -> "3 weeks earlier" +func RelTime(a, b time.Time, albl, blbl string) string { + return CustomRelTime(a, b, albl, blbl, defaultMagnitudes) +} + +// CustomRelTime formats a time into a relative string. +// +// It takes two times two labels and a table of relative time formats. +// In addition to the generic time delta string (e.g. 5 minutes), the +// labels are used applied so that the label corresponding to the +// smaller time is applied. +func CustomRelTime(a, b time.Time, albl, blbl string, magnitudes []RelTimeMagnitude) string { + lbl := albl + diff := b.Sub(a) + + if a.After(b) { + lbl = blbl + diff = a.Sub(b) + } + + n := sort.Search(len(magnitudes), func(i int) bool { + return magnitudes[i].D >= diff + }) + + if n >= len(magnitudes) { + n = len(magnitudes) - 1 + } + mag := magnitudes[n] + args := []interface{}{} + escaped := false + for _, ch := range mag.Format { + if escaped { + switch ch { + case 's': + args = append(args, lbl) + case 'd': + args = append(args, diff/mag.DivBy) + } + escaped = false + } else { + escaped = ch == '%' + } + } + return fmt.Sprintf(mag.Format, args...) +} diff --git a/vendor/github.com/google/go-cmp/LICENSE b/vendor/github.com/google/go-cmp/LICENSE new file mode 100644 index 000000000..32017f8fa --- /dev/null +++ b/vendor/github.com/google/go-cmp/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2017 The Go Authors. All rights reserved. 
+ +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/google/go-cmp/cmp/compare.go b/vendor/github.com/google/go-cmp/cmp/compare.go new file mode 100644 index 000000000..b56d63447 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/compare.go @@ -0,0 +1,599 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// Package cmp determines equality of values. +// +// This package is intended to be a more powerful and safer alternative to +// reflect.DeepEqual for comparing whether two values are semantically equal. +// +// The primary features of cmp are: +// +// • When the default behavior of equality does not suit the needs of the test, +// custom equality functions can override the equality operation. +// For example, an equality function may report floats as equal so long as they +// are within some tolerance of each other. +// +// • Types that have an Equal method may use that method to determine equality. +// This allows package authors to determine the equality operation for the types +// that they define. +// +// • If no custom equality functions are used and no Equal method is defined, +// equality is determined by recursively comparing the primitive kinds on both +// values, much like reflect.DeepEqual. Unlike reflect.DeepEqual, unexported +// fields are not compared by default; they result in panics unless suppressed +// by using an Ignore option (see cmpopts.IgnoreUnexported) or explicitly compared +// using the AllowUnexported option. +package cmp + +import ( + "fmt" + "reflect" + "strings" + + "github.com/google/go-cmp/cmp/internal/diff" + "github.com/google/go-cmp/cmp/internal/function" + "github.com/google/go-cmp/cmp/internal/value" +) + +// BUG(dsnet): Maps with keys containing NaN values cannot be properly compared due to +// the reflection package's inability to retrieve such entries. Equal will panic +// anytime it comes across a NaN key, but this behavior may change. +// +// See https://golang.org/issue/11104 for more details. 
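The package comment above describes the three ways cmp decides equality — options, Equal methods, and recursive comparison of kinds. A minimal sketch of the exported entry points (editor's addition, not part of the vendored file), using a Comparer option — defined elsewhere in this vendored package — to implement the float-tolerance case mentioned in the package comment:

```go
package main

import (
	"fmt"
	"math"

	"github.com/google/go-cmp/cmp"
)

type Point struct {
	X, Y float64 // exported fields, so no AllowUnexported is needed
}

func main() {
	a := Point{1, 2}
	b := Point{1, 2.0000001}

	// Default behavior: the float64 fields are compared with ==, so unequal.
	fmt.Println(cmp.Equal(a, b)) // false

	// A Comparer overrides equality for float64, treating values within a
	// small tolerance as equal.
	approx := cmp.Comparer(func(x, y float64) bool {
		return math.Abs(x-y) < 1e-3
	})
	fmt.Println(cmp.Equal(a, b, approx)) // true

	// Diff returns "" exactly when Equal reports true for the same options.
	fmt.Println(cmp.Diff(a, b, approx) == "") // true
}
```

The comparison function must be symmetric and deterministic; as the option-checking code later in this file shows, cmp occasionally re-runs it with swapped arguments to verify that.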
+ +var nothing = reflect.Value{} + +// Equal reports whether x and y are equal by recursively applying the +// following rules in the given order to x and y and all of their sub-values: +// +// • If two values are not of the same type, then they are never equal +// and the overall result is false. +// +// • Let S be the set of all Ignore, Transformer, and Comparer options that +// remain after applying all path filters, value filters, and type filters. +// If at least one Ignore exists in S, then the comparison is ignored. +// If the number of Transformer and Comparer options in S is greater than one, +// then Equal panics because it is ambiguous which option to use. +// If S contains a single Transformer, then use that to transform the current +// values and recursively call Equal on the output values. +// If S contains a single Comparer, then use that to compare the current values. +// Otherwise, evaluation proceeds to the next rule. +// +// • If the values have an Equal method of the form "(T) Equal(T) bool" or +// "(T) Equal(I) bool" where T is assignable to I, then use the result of +// x.Equal(y) even if x or y is nil. +// Otherwise, no such method exists and evaluation proceeds to the next rule. +// +// • Lastly, try to compare x and y based on their basic kinds. +// Simple kinds like booleans, integers, floats, complex numbers, strings, and +// channels are compared using the equivalent of the == operator in Go. +// Functions are only equal if they are both nil, otherwise they are unequal. +// Pointers are equal if the underlying values they point to are also equal. +// Interfaces are equal if their underlying concrete values are also equal. +// +// Structs are equal if all of their fields are equal. If a struct contains +// unexported fields, Equal panics unless the AllowUnexported option is used or +// an Ignore option (e.g., cmpopts.IgnoreUnexported) ignores that field. +// +// Arrays, slices, and maps are equal if they are both nil or both non-nil +// with the same length and the elements at each index or key are equal. +// Note that a non-nil empty slice and a nil slice are not equal. +// To equate empty slices and maps, consider using cmpopts.EquateEmpty. +// Map keys are equal according to the == operator. +// To use custom comparisons for map keys, consider using cmpopts.SortMaps. +func Equal(x, y interface{}, opts ...Option) bool { + s := newState(opts) + s.compareAny(reflect.ValueOf(x), reflect.ValueOf(y)) + return s.result.Equal() +} + +// Diff returns a human-readable report of the differences between two values. +// It returns an empty string if and only if Equal returns true for the same +// input values and options. The output string will use the "-" symbol to +// indicate elements removed from x, and the "+" symbol to indicate elements +// added to y. +// +// Do not depend on this output being stable. +func Diff(x, y interface{}, opts ...Option) string { + r := new(defaultReporter) + opts = Options{Options(opts), r} + eq := Equal(x, y, opts...) + d := r.String() + if (d == "") != eq { + panic("inconsistent difference and equality results") + } + return d +} + +type state struct { + // These fields represent the "comparison state". + // Calling statelessCompare must not result in observable changes to these. 
+ result diff.Result // The current result of comparison + curPath Path // The current path in the value tree + reporter reporter // Optional reporter used for difference formatting + + // recChecker checks for infinite cycles applying the same set of + // transformers upon the output of itself. + recChecker recChecker + + // dynChecker triggers pseudo-random checks for option correctness. + // It is safe for statelessCompare to mutate this value. + dynChecker dynChecker + + // These fields, once set by processOption, will not change. + exporters map[reflect.Type]bool // Set of structs with unexported field visibility + opts Options // List of all fundamental and filter options +} + +func newState(opts []Option) *state { + s := new(state) + for _, opt := range opts { + s.processOption(opt) + } + return s +} + +func (s *state) processOption(opt Option) { + switch opt := opt.(type) { + case nil: + case Options: + for _, o := range opt { + s.processOption(o) + } + case coreOption: + type filtered interface { + isFiltered() bool + } + if fopt, ok := opt.(filtered); ok && !fopt.isFiltered() { + panic(fmt.Sprintf("cannot use an unfiltered option: %v", opt)) + } + s.opts = append(s.opts, opt) + case visibleStructs: + if s.exporters == nil { + s.exporters = make(map[reflect.Type]bool) + } + for t := range opt { + s.exporters[t] = true + } + case reporter: + if s.reporter != nil { + panic("difference reporter already registered") + } + s.reporter = opt + default: + panic(fmt.Sprintf("unknown option %T", opt)) + } +} + +// statelessCompare compares two values and returns the result. +// This function is stateless in that it does not alter the current result, +// or output to any registered reporters. +func (s *state) statelessCompare(vx, vy reflect.Value) diff.Result { + // We do not save and restore the curPath because all of the compareX + // methods should properly push and pop from the path. + // It is an implementation bug if the contents of curPath differs from + // when calling this function to when returning from it. + + oldResult, oldReporter := s.result, s.reporter + s.result = diff.Result{} // Reset result + s.reporter = nil // Remove reporter to avoid spurious printouts + s.compareAny(vx, vy) + res := s.result + s.result, s.reporter = oldResult, oldReporter + return res +} + +func (s *state) compareAny(vx, vy reflect.Value) { + // TODO: Support cyclic data structures. + s.recChecker.Check(s.curPath) + + // Rule 0: Differing types are never equal. + if !vx.IsValid() || !vy.IsValid() { + s.report(vx.IsValid() == vy.IsValid(), vx, vy) + return + } + if vx.Type() != vy.Type() { + s.report(false, vx, vy) // Possible for path to be empty + return + } + t := vx.Type() + if len(s.curPath) == 0 { + s.curPath.push(&pathStep{typ: t}) + defer s.curPath.pop() + } + vx, vy = s.tryExporting(vx, vy) + + // Rule 1: Check whether an option applies on this node in the value tree. + if s.tryOptions(vx, vy, t) { + return + } + + // Rule 2: Check whether the type has a valid Equal method. + if s.tryMethod(vx, vy, t) { + return + } + + // Rule 3: Recursively descend into each value's underlying kind. 
+ switch t.Kind() { + case reflect.Bool: + s.report(vx.Bool() == vy.Bool(), vx, vy) + return + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + s.report(vx.Int() == vy.Int(), vx, vy) + return + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + s.report(vx.Uint() == vy.Uint(), vx, vy) + return + case reflect.Float32, reflect.Float64: + s.report(vx.Float() == vy.Float(), vx, vy) + return + case reflect.Complex64, reflect.Complex128: + s.report(vx.Complex() == vy.Complex(), vx, vy) + return + case reflect.String: + s.report(vx.String() == vy.String(), vx, vy) + return + case reflect.Chan, reflect.UnsafePointer: + s.report(vx.Pointer() == vy.Pointer(), vx, vy) + return + case reflect.Func: + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + case reflect.Ptr: + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + s.curPath.push(&indirect{pathStep{t.Elem()}}) + defer s.curPath.pop() + s.compareAny(vx.Elem(), vy.Elem()) + return + case reflect.Interface: + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + if vx.Elem().Type() != vy.Elem().Type() { + s.report(false, vx.Elem(), vy.Elem()) + return + } + s.curPath.push(&typeAssertion{pathStep{vx.Elem().Type()}}) + defer s.curPath.pop() + s.compareAny(vx.Elem(), vy.Elem()) + return + case reflect.Slice: + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + fallthrough + case reflect.Array: + s.compareArray(vx, vy, t) + return + case reflect.Map: + s.compareMap(vx, vy, t) + return + case reflect.Struct: + s.compareStruct(vx, vy, t) + return + default: + panic(fmt.Sprintf("%v kind not handled", t.Kind())) + } +} + +func (s *state) tryExporting(vx, vy reflect.Value) (reflect.Value, reflect.Value) { + if sf, ok := s.curPath[len(s.curPath)-1].(*structField); ok && sf.unexported { + if sf.force { + // Use unsafe pointer arithmetic to get read-write access to an + // unexported field in the struct. + vx = unsafeRetrieveField(sf.pvx, sf.field) + vy = unsafeRetrieveField(sf.pvy, sf.field) + } else { + // We are not allowed to export the value, so invalidate them + // so that tryOptions can panic later if not explicitly ignored. + vx = nothing + vy = nothing + } + } + return vx, vy +} + +func (s *state) tryOptions(vx, vy reflect.Value, t reflect.Type) bool { + // If there were no FilterValues, we will not detect invalid inputs, + // so manually check for them and append invalid if necessary. + // We still evaluate the options since an ignore can override invalid. + opts := s.opts + if !vx.IsValid() || !vy.IsValid() { + opts = Options{opts, invalid{}} + } + + // Evaluate all filters and apply the remaining options. + if opt := opts.filter(s, vx, vy, t); opt != nil { + opt.apply(s, vx, vy) + return true + } + return false +} + +func (s *state) tryMethod(vx, vy reflect.Value, t reflect.Type) bool { + // Check if this type even has an Equal method. + m, ok := t.MethodByName("Equal") + if !ok || !function.IsType(m.Type, function.EqualAssignable) { + return false + } + + eq := s.callTTBFunc(m.Func, vx, vy) + s.report(eq, vx, vy) + return true +} + +func (s *state) callTRFunc(f, v reflect.Value) reflect.Value { + v = sanitizeValue(v, f.Type().In(0)) + if !s.dynChecker.Next() { + return f.Call([]reflect.Value{v})[0] + } + + // Run the function twice and ensure that we get the same results back. 
+ // We run in goroutines so that the race detector (if enabled) can detect + // unsafe mutations to the input. + c := make(chan reflect.Value) + go detectRaces(c, f, v) + want := f.Call([]reflect.Value{v})[0] + if got := <-c; !s.statelessCompare(got, want).Equal() { + // To avoid false-positives with non-reflexive equality operations, + // we sanity check whether a value is equal to itself. + if !s.statelessCompare(want, want).Equal() { + return want + } + fn := getFuncName(f.Pointer()) + panic(fmt.Sprintf("non-deterministic function detected: %s", fn)) + } + return want +} + +func (s *state) callTTBFunc(f, x, y reflect.Value) bool { + x = sanitizeValue(x, f.Type().In(0)) + y = sanitizeValue(y, f.Type().In(1)) + if !s.dynChecker.Next() { + return f.Call([]reflect.Value{x, y})[0].Bool() + } + + // Swapping the input arguments is sufficient to check that + // f is symmetric and deterministic. + // We run in goroutines so that the race detector (if enabled) can detect + // unsafe mutations to the input. + c := make(chan reflect.Value) + go detectRaces(c, f, y, x) + want := f.Call([]reflect.Value{x, y})[0].Bool() + if got := <-c; !got.IsValid() || got.Bool() != want { + fn := getFuncName(f.Pointer()) + panic(fmt.Sprintf("non-deterministic or non-symmetric function detected: %s", fn)) + } + return want +} + +func detectRaces(c chan<- reflect.Value, f reflect.Value, vs ...reflect.Value) { + var ret reflect.Value + defer func() { + recover() // Ignore panics, let the other call to f panic instead + c <- ret + }() + ret = f.Call(vs)[0] +} + +// sanitizeValue converts nil interfaces of type T to those of type R, +// assuming that T is assignable to R. +// Otherwise, it returns the input value as is. +func sanitizeValue(v reflect.Value, t reflect.Type) reflect.Value { + // TODO(dsnet): Workaround for reflect bug (https://golang.org/issue/22143). + // The upstream fix landed in Go1.10, so we can remove this when drop support + // for Go1.9 and below. + if v.Kind() == reflect.Interface && v.IsNil() && v.Type() != t { + return reflect.New(t).Elem() + } + return v +} + +func (s *state) compareArray(vx, vy reflect.Value, t reflect.Type) { + step := &sliceIndex{pathStep{t.Elem()}, 0, 0} + s.curPath.push(step) + + // Compute an edit-script for slices vx and vy. + es := diff.Difference(vx.Len(), vy.Len(), func(ix, iy int) diff.Result { + step.xkey, step.ykey = ix, iy + return s.statelessCompare(vx.Index(ix), vy.Index(iy)) + }) + + // Report the entire slice as is if the arrays are of primitive kind, + // and the arrays are different enough. + isPrimitive := false + switch t.Elem().Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, + reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr, + reflect.Bool, reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128: + isPrimitive = true + } + if isPrimitive && es.Dist() > (vx.Len()+vy.Len())/4 { + s.curPath.pop() // Pop first since we are reporting the whole slice + s.report(false, vx, vy) + return + } + + // Replay the edit-script. 
+ var ix, iy int + for _, e := range es { + switch e { + case diff.UniqueX: + step.xkey, step.ykey = ix, -1 + s.report(false, vx.Index(ix), nothing) + ix++ + case diff.UniqueY: + step.xkey, step.ykey = -1, iy + s.report(false, nothing, vy.Index(iy)) + iy++ + default: + step.xkey, step.ykey = ix, iy + if e == diff.Identity { + s.report(true, vx.Index(ix), vy.Index(iy)) + } else { + s.compareAny(vx.Index(ix), vy.Index(iy)) + } + ix++ + iy++ + } + } + s.curPath.pop() + return +} + +func (s *state) compareMap(vx, vy reflect.Value, t reflect.Type) { + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + + // We combine and sort the two map keys so that we can perform the + // comparisons in a deterministic order. + step := &mapIndex{pathStep: pathStep{t.Elem()}} + s.curPath.push(step) + defer s.curPath.pop() + for _, k := range value.SortKeys(append(vx.MapKeys(), vy.MapKeys()...)) { + step.key = k + vvx := vx.MapIndex(k) + vvy := vy.MapIndex(k) + switch { + case vvx.IsValid() && vvy.IsValid(): + s.compareAny(vvx, vvy) + case vvx.IsValid() && !vvy.IsValid(): + s.report(false, vvx, nothing) + case !vvx.IsValid() && vvy.IsValid(): + s.report(false, nothing, vvy) + default: + // It is possible for both vvx and vvy to be invalid if the + // key contained a NaN value in it. There is no way in + // reflection to be able to retrieve these values. + // See https://golang.org/issue/11104 + panic(fmt.Sprintf("%#v has map key with NaNs", s.curPath)) + } + } +} + +func (s *state) compareStruct(vx, vy reflect.Value, t reflect.Type) { + var vax, vay reflect.Value // Addressable versions of vx and vy + + step := &structField{} + s.curPath.push(step) + defer s.curPath.pop() + for i := 0; i < t.NumField(); i++ { + vvx := vx.Field(i) + vvy := vy.Field(i) + step.typ = t.Field(i).Type + step.name = t.Field(i).Name + step.idx = i + step.unexported = !isExported(step.name) + if step.unexported { + // Defer checking of unexported fields until later to give an + // Ignore a chance to ignore the field. + if !vax.IsValid() || !vay.IsValid() { + // For unsafeRetrieveField to work, the parent struct must + // be addressable. Create a new copy of the values if + // necessary to make them addressable. + vax = makeAddressable(vx) + vay = makeAddressable(vy) + } + step.force = s.exporters[t] + step.pvx = vax + step.pvy = vay + step.field = t.Field(i) + } + s.compareAny(vvx, vvy) + } +} + +// report records the result of a single comparison. +// It also calls Report if any reporter is registered. +func (s *state) report(eq bool, vx, vy reflect.Value) { + if eq { + s.result.NSame++ + } else { + s.result.NDiff++ + } + if s.reporter != nil { + s.reporter.Report(vx, vy, eq, s.curPath) + } +} + +// recChecker tracks the state needed to periodically perform checks that +// user provided transformers are not stuck in an infinitely recursive cycle. +type recChecker struct{ next int } + +// Check scans the Path for any recursive transformers and panics when any +// recursive transformers are detected. Note that the presence of a +// recursive Transformer does not necessarily imply an infinite cycle. +// As such, this check only activates after some minimal number of path steps. +func (rc *recChecker) Check(p Path) { + const minLen = 1 << 16 + if rc.next == 0 { + rc.next = minLen + } + if len(p) < rc.next { + return + } + rc.next <<= 1 + + // Check whether the same transformer has appeared at least twice. 
+ var ss []string + m := map[Option]int{} + for _, ps := range p { + if t, ok := ps.(Transform); ok { + t := t.Option() + if m[t] == 1 { // Transformer was used exactly once before + tf := t.(*transformer).fnc.Type() + ss = append(ss, fmt.Sprintf("%v: %v => %v", t, tf.In(0), tf.Out(0))) + } + m[t]++ + } + } + if len(ss) > 0 { + const warning = "recursive set of Transformers detected" + const help = "consider using cmpopts.AcyclicTransformer" + set := strings.Join(ss, "\n\t") + panic(fmt.Sprintf("%s:\n\t%s\n%s", warning, set, help)) + } +} + +// dynChecker tracks the state needed to periodically perform checks that +// user provided functions are symmetric and deterministic. +// The zero value is safe for immediate use. +type dynChecker struct{ curr, next int } + +// Next increments the state and reports whether a check should be performed. +// +// Checks occur every Nth function call, where N is a triangular number: +// 0 1 3 6 10 15 21 28 36 45 55 66 78 91 105 120 136 153 171 190 ... +// See https://en.wikipedia.org/wiki/Triangular_number +// +// This sequence ensures that the cost of checks drops significantly as +// the number of functions calls grows larger. +func (dc *dynChecker) Next() bool { + ok := dc.curr == dc.next + if ok { + dc.curr = 0 + dc.next++ + } + dc.curr++ + return ok +} + +// makeAddressable returns a value that is always addressable. +// It returns the input verbatim if it is already addressable, +// otherwise it creates a new value and returns an addressable copy. +func makeAddressable(v reflect.Value) reflect.Value { + if v.CanAddr() { + return v + } + vc := reflect.New(v.Type()).Elem() + vc.Set(v) + return vc +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go new file mode 100644 index 000000000..42afa4960 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go @@ -0,0 +1,17 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// +build !debug + +package diff + +var debug debugger + +type debugger struct{} + +func (debugger) Begin(_, _ int, f EqualFunc, _, _ *EditScript) EqualFunc { + return f +} +func (debugger) Update() {} +func (debugger) Finish() {} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go new file mode 100644 index 000000000..fd9f7f177 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go @@ -0,0 +1,122 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. 
+ +// +build debug + +package diff + +import ( + "fmt" + "strings" + "sync" + "time" +) + +// The algorithm can be seen running in real-time by enabling debugging: +// go test -tags=debug -v +// +// Example output: +// === RUN TestDifference/#34 +// ┌───────────────────────────────┐ +// │ \ · · · · · · · · · · · · · · │ +// │ · # · · · · · · · · · · · · · │ +// │ · \ · · · · · · · · · · · · · │ +// │ · · \ · · · · · · · · · · · · │ +// │ · · · X # · · · · · · · · · · │ +// │ · · · # \ · · · · · · · · · · │ +// │ · · · · · # # · · · · · · · · │ +// │ · · · · · # \ · · · · · · · · │ +// │ · · · · · · · \ · · · · · · · │ +// │ · · · · · · · · \ · · · · · · │ +// │ · · · · · · · · · \ · · · · · │ +// │ · · · · · · · · · · \ · · # · │ +// │ · · · · · · · · · · · \ # # · │ +// │ · · · · · · · · · · · # # # · │ +// │ · · · · · · · · · · # # # # · │ +// │ · · · · · · · · · # # # # # · │ +// │ · · · · · · · · · · · · · · \ │ +// └───────────────────────────────┘ +// [.Y..M.XY......YXYXY.|] +// +// The grid represents the edit-graph where the horizontal axis represents +// list X and the vertical axis represents list Y. The start of the two lists +// is the top-left, while the ends are the bottom-right. The '·' represents +// an unexplored node in the graph. The '\' indicates that the two symbols +// from list X and Y are equal. The 'X' indicates that two symbols are similar +// (but not exactly equal) to each other. The '#' indicates that the two symbols +// are different (and not similar). The algorithm traverses this graph trying to +// make the paths starting in the top-left and the bottom-right connect. +// +// The series of '.', 'X', 'Y', and 'M' characters at the bottom represents +// the currently established path from the forward and reverse searches, +// separated by a '|' character. + +const ( + updateDelay = 100 * time.Millisecond + finishDelay = 500 * time.Millisecond + ansiTerminal = true // ANSI escape codes used to move terminal cursor +) + +var debug debugger + +type debugger struct { + sync.Mutex + p1, p2 EditScript + fwdPath, revPath *EditScript + grid []byte + lines int +} + +func (dbg *debugger) Begin(nx, ny int, f EqualFunc, p1, p2 *EditScript) EqualFunc { + dbg.Lock() + dbg.fwdPath, dbg.revPath = p1, p2 + top := "┌─" + strings.Repeat("──", nx) + "┐\n" + row := "│ " + strings.Repeat("· ", nx) + "│\n" + btm := "└─" + strings.Repeat("──", nx) + "┘\n" + dbg.grid = []byte(top + strings.Repeat(row, ny) + btm) + dbg.lines = strings.Count(dbg.String(), "\n") + fmt.Print(dbg) + + // Wrap the EqualFunc so that we can intercept each result. 
+ return func(ix, iy int) (r Result) { + cell := dbg.grid[len(top)+iy*len(row):][len("│ ")+len("· ")*ix:][:len("·")] + for i := range cell { + cell[i] = 0 // Zero out the multiple bytes of UTF-8 middle-dot + } + switch r = f(ix, iy); { + case r.Equal(): + cell[0] = '\\' + case r.Similar(): + cell[0] = 'X' + default: + cell[0] = '#' + } + return + } +} + +func (dbg *debugger) Update() { + dbg.print(updateDelay) +} + +func (dbg *debugger) Finish() { + dbg.print(finishDelay) + dbg.Unlock() +} + +func (dbg *debugger) String() string { + dbg.p1, dbg.p2 = *dbg.fwdPath, dbg.p2[:0] + for i := len(*dbg.revPath) - 1; i >= 0; i-- { + dbg.p2 = append(dbg.p2, (*dbg.revPath)[i]) + } + return fmt.Sprintf("%s[%v|%v]\n\n", dbg.grid, dbg.p1, dbg.p2) +} + +func (dbg *debugger) print(d time.Duration) { + if ansiTerminal { + fmt.Printf("\x1b[%dA", dbg.lines) // Reset terminal cursor + } + fmt.Print(dbg) + time.Sleep(d) +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go b/vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go new file mode 100644 index 000000000..260befea2 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go @@ -0,0 +1,363 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// Package diff implements an algorithm for producing edit-scripts. +// The edit-script is a sequence of operations needed to transform one list +// of symbols into another (or vice-versa). The edits allowed are insertions, +// deletions, and modifications. The summation of all edits is called the +// Levenshtein distance as this problem is well-known in computer science. +// +// This package prioritizes performance over accuracy. That is, the run time +// is more important than obtaining a minimal Levenshtein distance. +package diff + +// EditType represents a single operation within an edit-script. +type EditType uint8 + +const ( + // Identity indicates that a symbol pair is identical in both list X and Y. + Identity EditType = iota + // UniqueX indicates that a symbol only exists in X and not Y. + UniqueX + // UniqueY indicates that a symbol only exists in Y and not X. + UniqueY + // Modified indicates that a symbol pair is a modification of each other. + Modified +) + +// EditScript represents the series of differences between two lists. +type EditScript []EditType + +// String returns a human-readable string representing the edit-script where +// Identity, UniqueX, UniqueY, and Modified are represented by the +// '.', 'X', 'Y', and 'M' characters, respectively. +func (es EditScript) String() string { + b := make([]byte, len(es)) + for i, e := range es { + switch e { + case Identity: + b[i] = '.' + case UniqueX: + b[i] = 'X' + case UniqueY: + b[i] = 'Y' + case Modified: + b[i] = 'M' + default: + panic("invalid edit-type") + } + } + return string(b) +} + +// stats returns a histogram of the number of each type of edit operation. +func (es EditScript) stats() (s struct{ NI, NX, NY, NM int }) { + for _, e := range es { + switch e { + case Identity: + s.NI++ + case UniqueX: + s.NX++ + case UniqueY: + s.NY++ + case Modified: + s.NM++ + default: + panic("invalid edit-type") + } + } + return +} + +// Dist is the Levenshtein distance and is guaranteed to be 0 if and only if +// lists X and Y are equal. +func (es EditScript) Dist() int { return len(es) - es.stats().NI } + +// LenX is the length of the X list. 
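The edit-script type above is just a slice of edit operations, and its helpers are easiest to read with a concrete value. This package lives under cmp/internal, so outside code cannot import it; the sketch below (editor's addition, not part of the vendored file) assumes it sits in the same package, for example in a test file.

```go
package diff

import "fmt"

// editScriptSketch shows how an edit-script encodes turning
// X = [a b c] into Y = [a x c d].
func editScriptSketch() {
	es := EditScript{Identity, Modified, Identity, UniqueY}
	fmt.Println(es.String()) // ".M.Y"
	fmt.Println(es.Dist())   // 2 (one modification + one insertion)
	fmt.Println(es.LenX())   // 3 symbols consumed from X
	fmt.Println(es.LenY())   // 4 symbols consumed from Y
}
```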
+func (es EditScript) LenX() int { return len(es) - es.stats().NY } + +// LenY is the length of the Y list. +func (es EditScript) LenY() int { return len(es) - es.stats().NX } + +// EqualFunc reports whether the symbols at indexes ix and iy are equal. +// When called by Difference, the index is guaranteed to be within nx and ny. +type EqualFunc func(ix int, iy int) Result + +// Result is the result of comparison. +// NSame is the number of sub-elements that are equal. +// NDiff is the number of sub-elements that are not equal. +type Result struct{ NSame, NDiff int } + +// Equal indicates whether the symbols are equal. Two symbols are equal +// if and only if NDiff == 0. If Equal, then they are also Similar. +func (r Result) Equal() bool { return r.NDiff == 0 } + +// Similar indicates whether two symbols are similar and may be represented +// by using the Modified type. As a special case, we consider binary comparisons +// (i.e., those that return Result{1, 0} or Result{0, 1}) to be similar. +// +// The exact ratio of NSame to NDiff to determine similarity may change. +func (r Result) Similar() bool { + // Use NSame+1 to offset NSame so that binary comparisons are similar. + return r.NSame+1 >= r.NDiff +} + +// Difference reports whether two lists of lengths nx and ny are equal +// given the definition of equality provided as f. +// +// This function returns an edit-script, which is a sequence of operations +// needed to convert one list into the other. The following invariants for +// the edit-script are maintained: +// • eq == (es.Dist()==0) +// • nx == es.LenX() +// • ny == es.LenY() +// +// This algorithm is not guaranteed to be an optimal solution (i.e., one that +// produces an edit-script with a minimal Levenshtein distance). This algorithm +// favors performance over optimality. The exact output is not guaranteed to +// be stable and may change over time. +func Difference(nx, ny int, f EqualFunc) (es EditScript) { + // This algorithm is based on traversing what is known as an "edit-graph". + // See Figure 1 from "An O(ND) Difference Algorithm and Its Variations" + // by Eugene W. Myers. Since D can be as large as N itself, this is + // effectively O(N^2). Unlike the algorithm from that paper, we are not + // interested in the optimal path, but at least some "decent" path. + // + // For example, let X and Y be lists of symbols: + // X = [A B C A B B A] + // Y = [C B A B A C] + // + // The edit-graph can be drawn as the following: + // A B C A B B A + // ┌─────────────┐ + // C │_|_|\|_|_|_|_│ 0 + // B │_|\|_|_|\|\|_│ 1 + // A │\|_|_|\|_|_|\│ 2 + // B │_|\|_|_|\|\|_│ 3 + // A │\|_|_|\|_|_|\│ 4 + // C │ | |\| | | | │ 5 + // └─────────────┘ 6 + // 0 1 2 3 4 5 6 7 + // + // List X is written along the horizontal axis, while list Y is written + // along the vertical axis. At any point on this grid, if the symbol in + // list X matches the corresponding symbol in list Y, then a '\' is drawn. + // The goal of any minimal edit-script algorithm is to find a path from the + // top-left corner to the bottom-right corner, while traveling through the + // fewest horizontal or vertical edges. + // A horizontal edge is equivalent to inserting a symbol from list X. + // A vertical edge is equivalent to inserting a symbol from list Y. + // A diagonal edge is equivalent to a matching symbol between both X and Y. 
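The comment above walks through the edit-graph for X = [A B C A B B A] and Y = [C B A B A C]. The sketch below (editor's addition, not part of the vendored file) feeds those same lists to Difference; because the algorithm favors performance over optimality, it checks only the documented invariants rather than a specific edit-script. As with the previous sketch, this assumes code inside the internal diff package.

```go
package diff

import "fmt"

// differenceSketch runs Difference over the example lists from the
// edit-graph comment above.
func differenceSketch() {
	x := []byte("ABCABBA")
	y := []byte("CBABAC")

	es := Difference(len(x), len(y), func(ix, iy int) Result {
		if x[ix] == y[iy] {
			return Result{NSame: 1} // identical symbols
		}
		return Result{NDiff: 1} // different symbols
	})

	// Documented invariants: the script consumes all of X and all of Y.
	fmt.Println(es.LenX() == len(x)) // true
	fmt.Println(es.LenY() == len(y)) // true
	fmt.Println(es.Dist())           // number of non-identity edits
}
```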
+ + // Invariants: + // • 0 ≤ fwdPath.X ≤ (fwdFrontier.X, revFrontier.X) ≤ revPath.X ≤ nx + // • 0 ≤ fwdPath.Y ≤ (fwdFrontier.Y, revFrontier.Y) ≤ revPath.Y ≤ ny + // + // In general: + // • fwdFrontier.X < revFrontier.X + // • fwdFrontier.Y < revFrontier.Y + // Unless, it is time for the algorithm to terminate. + fwdPath := path{+1, point{0, 0}, make(EditScript, 0, (nx+ny)/2)} + revPath := path{-1, point{nx, ny}, make(EditScript, 0)} + fwdFrontier := fwdPath.point // Forward search frontier + revFrontier := revPath.point // Reverse search frontier + + // Search budget bounds the cost of searching for better paths. + // The longest sequence of non-matching symbols that can be tolerated is + // approximately the square-root of the search budget. + searchBudget := 4 * (nx + ny) // O(n) + + // The algorithm below is a greedy, meet-in-the-middle algorithm for + // computing sub-optimal edit-scripts between two lists. + // + // The algorithm is approximately as follows: + // • Searching for differences switches back-and-forth between + // a search that starts at the beginning (the top-left corner), and + // a search that starts at the end (the bottom-right corner). The goal of + // the search is connect with the search from the opposite corner. + // • As we search, we build a path in a greedy manner, where the first + // match seen is added to the path (this is sub-optimal, but provides a + // decent result in practice). When matches are found, we try the next pair + // of symbols in the lists and follow all matches as far as possible. + // • When searching for matches, we search along a diagonal going through + // through the "frontier" point. If no matches are found, we advance the + // frontier towards the opposite corner. + // • This algorithm terminates when either the X coordinates or the + // Y coordinates of the forward and reverse frontier points ever intersect. + // + // This algorithm is correct even if searching only in the forward direction + // or in the reverse direction. We do both because it is commonly observed + // that two lists commonly differ because elements were added to the front + // or end of the other list. + // + // Running the tests with the "debug" build tag prints a visualization of + // the algorithm running in real-time. This is educational for understanding + // how the algorithm works. See debug_enable.go. + f = debug.Begin(nx, ny, f, &fwdPath.es, &revPath.es) + for { + // Forward search from the beginning. + if fwdFrontier.X >= revFrontier.X || fwdFrontier.Y >= revFrontier.Y || searchBudget == 0 { + break + } + for stop1, stop2, i := false, false, 0; !(stop1 && stop2) && searchBudget > 0; i++ { + // Search in a diagonal pattern for a match. + z := zigzag(i) + p := point{fwdFrontier.X + z, fwdFrontier.Y - z} + switch { + case p.X >= revPath.X || p.Y < fwdPath.Y: + stop1 = true // Hit top-right corner + case p.Y >= revPath.Y || p.X < fwdPath.X: + stop2 = true // Hit bottom-left corner + case f(p.X, p.Y).Equal(): + // Match found, so connect the path to this point. + fwdPath.connect(p, f) + fwdPath.append(Identity) + // Follow sequence of matches as far as possible. + for fwdPath.X < revPath.X && fwdPath.Y < revPath.Y { + if !f(fwdPath.X, fwdPath.Y).Equal() { + break + } + fwdPath.append(Identity) + } + fwdFrontier = fwdPath.point + stop1, stop2 = true, true + default: + searchBudget-- // Match not found + } + debug.Update() + } + // Advance the frontier towards reverse point. 
+ if revPath.X-fwdFrontier.X >= revPath.Y-fwdFrontier.Y { + fwdFrontier.X++ + } else { + fwdFrontier.Y++ + } + + // Reverse search from the end. + if fwdFrontier.X >= revFrontier.X || fwdFrontier.Y >= revFrontier.Y || searchBudget == 0 { + break + } + for stop1, stop2, i := false, false, 0; !(stop1 && stop2) && searchBudget > 0; i++ { + // Search in a diagonal pattern for a match. + z := zigzag(i) + p := point{revFrontier.X - z, revFrontier.Y + z} + switch { + case fwdPath.X >= p.X || revPath.Y < p.Y: + stop1 = true // Hit bottom-left corner + case fwdPath.Y >= p.Y || revPath.X < p.X: + stop2 = true // Hit top-right corner + case f(p.X-1, p.Y-1).Equal(): + // Match found, so connect the path to this point. + revPath.connect(p, f) + revPath.append(Identity) + // Follow sequence of matches as far as possible. + for fwdPath.X < revPath.X && fwdPath.Y < revPath.Y { + if !f(revPath.X-1, revPath.Y-1).Equal() { + break + } + revPath.append(Identity) + } + revFrontier = revPath.point + stop1, stop2 = true, true + default: + searchBudget-- // Match not found + } + debug.Update() + } + // Advance the frontier towards forward point. + if revFrontier.X-fwdPath.X >= revFrontier.Y-fwdPath.Y { + revFrontier.X-- + } else { + revFrontier.Y-- + } + } + + // Join the forward and reverse paths and then append the reverse path. + fwdPath.connect(revPath.point, f) + for i := len(revPath.es) - 1; i >= 0; i-- { + t := revPath.es[i] + revPath.es = revPath.es[:i] + fwdPath.append(t) + } + debug.Finish() + return fwdPath.es +} + +type path struct { + dir int // +1 if forward, -1 if reverse + point // Leading point of the EditScript path + es EditScript +} + +// connect appends any necessary Identity, Modified, UniqueX, or UniqueY types +// to the edit-script to connect p.point to dst. +func (p *path) connect(dst point, f EqualFunc) { + if p.dir > 0 { + // Connect in forward direction. + for dst.X > p.X && dst.Y > p.Y { + switch r := f(p.X, p.Y); { + case r.Equal(): + p.append(Identity) + case r.Similar(): + p.append(Modified) + case dst.X-p.X >= dst.Y-p.Y: + p.append(UniqueX) + default: + p.append(UniqueY) + } + } + for dst.X > p.X { + p.append(UniqueX) + } + for dst.Y > p.Y { + p.append(UniqueY) + } + } else { + // Connect in reverse direction. + for p.X > dst.X && p.Y > dst.Y { + switch r := f(p.X-1, p.Y-1); { + case r.Equal(): + p.append(Identity) + case r.Similar(): + p.append(Modified) + case p.Y-dst.Y >= p.X-dst.X: + p.append(UniqueY) + default: + p.append(UniqueX) + } + } + for p.X > dst.X { + p.append(UniqueX) + } + for p.Y > dst.Y { + p.append(UniqueY) + } + } +} + +func (p *path) append(t EditType) { + p.es = append(p.es, t) + switch t { + case Identity, Modified: + p.add(p.dir, p.dir) + case UniqueX: + p.add(p.dir, 0) + case UniqueY: + p.add(0, p.dir) + } + debug.Update() +} + +type point struct{ X, Y int } + +func (p *point) add(dx, dy int) { p.X += dx; p.Y += dy } + +// zigzag maps a consecutive sequence of integers to a zig-zag sequence. +// [0 1 2 3 4 5 ...] => [0 -1 +1 -2 +2 ...] +func zigzag(x int) int { + if x&1 != 0 { + x = ^x + } + return x >> 1 +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/function/func.go b/vendor/github.com/google/go-cmp/cmp/internal/function/func.go new file mode 100644 index 000000000..4c35ff11e --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/function/func.go @@ -0,0 +1,49 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. 
+ +// Package function identifies function types. +package function + +import "reflect" + +type funcType int + +const ( + _ funcType = iota + + ttbFunc // func(T, T) bool + tibFunc // func(T, I) bool + trFunc // func(T) R + + Equal = ttbFunc // func(T, T) bool + EqualAssignable = tibFunc // func(T, I) bool; encapsulates func(T, T) bool + Transformer = trFunc // func(T) R + ValueFilter = ttbFunc // func(T, T) bool + Less = ttbFunc // func(T, T) bool +) + +var boolType = reflect.TypeOf(true) + +// IsType reports whether the reflect.Type is of the specified function type. +func IsType(t reflect.Type, ft funcType) bool { + if t == nil || t.Kind() != reflect.Func || t.IsVariadic() { + return false + } + ni, no := t.NumIn(), t.NumOut() + switch ft { + case ttbFunc: // func(T, T) bool + if ni == 2 && no == 1 && t.In(0) == t.In(1) && t.Out(0) == boolType { + return true + } + case tibFunc: // func(T, I) bool + if ni == 2 && no == 1 && t.In(0).AssignableTo(t.In(1)) && t.Out(0) == boolType { + return true + } + case trFunc: // func(T) R + if ni == 1 && no == 1 { + return true + } + } + return false +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/format.go b/vendor/github.com/google/go-cmp/cmp/internal/value/format.go new file mode 100644 index 000000000..bafb2d121 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/format.go @@ -0,0 +1,280 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// Package value provides functionality for reflect.Value types. +package value + +import ( + "fmt" + "reflect" + "strconv" + "strings" + "unicode" +) + +var stringerIface = reflect.TypeOf((*fmt.Stringer)(nil)).Elem() + +// Format formats the value v as a string. +// +// This is similar to fmt.Sprintf("%+v", v) except this: +// * Prints the type unless it can be elided +// * Avoids printing struct fields that are zero +// * Prints a nil-slice as being nil, not empty +// * Prints map entries in deterministic order +func Format(v reflect.Value, conf FormatConfig) string { + conf.printType = true + conf.followPointers = true + conf.realPointers = true + return formatAny(v, conf, visited{}) +} + +type FormatConfig struct { + UseStringer bool // Should the String method be used if available? + printType bool // Should we print the type before the value? + PrintPrimitiveType bool // Should we print the type of primitives? + followPointers bool // Should we recursively follow pointers? + realPointers bool // Should we print the real address of pointers? +} + +func formatAny(v reflect.Value, conf FormatConfig, m visited) string { + // TODO: Should this be a multi-line printout in certain situations? 
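As an aside on the `function` package introduced just above: `IsType` is the shape check that the public option constructors (`Comparer`, `Transformer`, `FilterValues`) later use to validate user-supplied functions. A minimal sketch of that check follows; it only compiles from within the go-cmp module itself, since the package lives under `internal/`.

```go
package main

import (
	"fmt"
	"reflect"
	"strings"

	// Internal package: importable only from within go-cmp itself;
	// shown here purely to illustrate the signature check.
	"github.com/google/go-cmp/cmp/internal/function"
)

func main() {
	for _, f := range []interface{}{
		func(x, y int) bool { return x == y },       // func(T, T) bool: comparer shape
		strings.EqualFold,                           // func(string, string) bool: comparer shape
		func(x int, y string) bool { return false }, // parameter types differ: rejected
		func(s string) int { return len(s) },        // func(T) R: transformer shape, not Equal
	} {
		t := reflect.TypeOf(f)
		fmt.Println(t, function.IsType(t, function.Equal))
	}
}
```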
+ + if !v.IsValid() { + return "" + } + if conf.UseStringer && v.Type().Implements(stringerIface) && v.CanInterface() { + if (v.Kind() == reflect.Ptr || v.Kind() == reflect.Interface) && v.IsNil() { + return "" + } + + const stringerPrefix = "s" // Indicates that the String method was used + s := v.Interface().(fmt.Stringer).String() + return stringerPrefix + formatString(s) + } + + switch v.Kind() { + case reflect.Bool: + return formatPrimitive(v.Type(), v.Bool(), conf) + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return formatPrimitive(v.Type(), v.Int(), conf) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + if v.Type().PkgPath() == "" || v.Kind() == reflect.Uintptr { + // Unnamed uints are usually bytes or words, so use hexadecimal. + return formatPrimitive(v.Type(), formatHex(v.Uint()), conf) + } + return formatPrimitive(v.Type(), v.Uint(), conf) + case reflect.Float32, reflect.Float64: + return formatPrimitive(v.Type(), v.Float(), conf) + case reflect.Complex64, reflect.Complex128: + return formatPrimitive(v.Type(), v.Complex(), conf) + case reflect.String: + return formatPrimitive(v.Type(), formatString(v.String()), conf) + case reflect.UnsafePointer, reflect.Chan, reflect.Func: + return formatPointer(v, conf) + case reflect.Ptr: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("(%v)(nil)", v.Type()) + } + return "" + } + if m.Visit(v) || !conf.followPointers { + return formatPointer(v, conf) + } + return "&" + formatAny(v.Elem(), conf, m) + case reflect.Interface: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("%v(nil)", v.Type()) + } + return "" + } + return formatAny(v.Elem(), conf, m) + case reflect.Slice: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("%v(nil)", v.Type()) + } + return "" + } + fallthrough + case reflect.Array: + var ss []string + subConf := conf + subConf.printType = v.Type().Elem().Kind() == reflect.Interface + for i := 0; i < v.Len(); i++ { + vi := v.Index(i) + if vi.CanAddr() { // Check for recursive elements + p := vi.Addr() + if m.Visit(p) { + subConf := conf + subConf.printType = true + ss = append(ss, "*"+formatPointer(p, subConf)) + continue + } + } + ss = append(ss, formatAny(vi, subConf, m)) + } + s := fmt.Sprintf("{%s}", strings.Join(ss, ", ")) + if conf.printType { + return v.Type().String() + s + } + return s + case reflect.Map: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("%v(nil)", v.Type()) + } + return "" + } + if m.Visit(v) { + return formatPointer(v, conf) + } + + var ss []string + keyConf, valConf := conf, conf + keyConf.printType = v.Type().Key().Kind() == reflect.Interface + keyConf.followPointers = false + valConf.printType = v.Type().Elem().Kind() == reflect.Interface + for _, k := range SortKeys(v.MapKeys()) { + sk := formatAny(k, keyConf, m) + sv := formatAny(v.MapIndex(k), valConf, m) + ss = append(ss, fmt.Sprintf("%s: %s", sk, sv)) + } + s := fmt.Sprintf("{%s}", strings.Join(ss, ", ")) + if conf.printType { + return v.Type().String() + s + } + return s + case reflect.Struct: + var ss []string + subConf := conf + subConf.printType = true + for i := 0; i < v.NumField(); i++ { + vv := v.Field(i) + if isZero(vv) { + continue // Elide zero value fields + } + name := v.Type().Field(i).Name + subConf.UseStringer = conf.UseStringer + s := formatAny(vv, subConf, m) + ss = append(ss, fmt.Sprintf("%s: %s", name, s)) + } + s := fmt.Sprintf("{%s}", strings.Join(ss, ", ")) + if conf.printType { + return 
v.Type().String() + s + } + return s + default: + panic(fmt.Sprintf("%v kind not handled", v.Kind())) + } +} + +func formatString(s string) string { + // Use quoted string if it the same length as a raw string literal. + // Otherwise, attempt to use the raw string form. + qs := strconv.Quote(s) + if len(qs) == 1+len(s)+1 { + return qs + } + + // Disallow newlines to ensure output is a single line. + // Only allow printable runes for readability purposes. + rawInvalid := func(r rune) bool { + return r == '`' || r == '\n' || !unicode.IsPrint(r) + } + if strings.IndexFunc(s, rawInvalid) < 0 { + return "`" + s + "`" + } + return qs +} + +func formatPrimitive(t reflect.Type, v interface{}, conf FormatConfig) string { + if conf.printType && (conf.PrintPrimitiveType || t.PkgPath() != "") { + return fmt.Sprintf("%v(%v)", t, v) + } + return fmt.Sprintf("%v", v) +} + +func formatPointer(v reflect.Value, conf FormatConfig) string { + p := v.Pointer() + if !conf.realPointers { + p = 0 // For deterministic printing purposes + } + s := formatHex(uint64(p)) + if conf.printType { + return fmt.Sprintf("(%v)(%s)", v.Type(), s) + } + return s +} + +func formatHex(u uint64) string { + var f string + switch { + case u <= 0xff: + f = "0x%02x" + case u <= 0xffff: + f = "0x%04x" + case u <= 0xffffff: + f = "0x%06x" + case u <= 0xffffffff: + f = "0x%08x" + case u <= 0xffffffffff: + f = "0x%010x" + case u <= 0xffffffffffff: + f = "0x%012x" + case u <= 0xffffffffffffff: + f = "0x%014x" + case u <= 0xffffffffffffffff: + f = "0x%016x" + } + return fmt.Sprintf(f, u) +} + +// isZero reports whether v is the zero value. +// This does not rely on Interface and so can be used on unexported fields. +func isZero(v reflect.Value) bool { + switch v.Kind() { + case reflect.Bool: + return v.Bool() == false + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Complex64, reflect.Complex128: + return v.Complex() == 0 + case reflect.String: + return v.String() == "" + case reflect.UnsafePointer: + return v.Pointer() == 0 + case reflect.Chan, reflect.Func, reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice: + return v.IsNil() + case reflect.Array: + for i := 0; i < v.Len(); i++ { + if !isZero(v.Index(i)) { + return false + } + } + return true + case reflect.Struct: + for i := 0; i < v.NumField(); i++ { + if !isZero(v.Field(i)) { + return false + } + } + return true + } + return false +} + +type visited map[Pointer]bool + +func (m visited) Visit(v reflect.Value) bool { + p := PointerOf(v) + visited := m[p] + m[p] = true + return visited +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/pointer_purego.go b/vendor/github.com/google/go-cmp/cmp/internal/value/pointer_purego.go new file mode 100644 index 000000000..0a01c4796 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/pointer_purego.go @@ -0,0 +1,23 @@ +// Copyright 2018, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// +build purego + +package value + +import "reflect" + +// Pointer is an opaque typed pointer and is guaranteed to be comparable. 
+type Pointer struct { + p uintptr + t reflect.Type +} + +// PointerOf returns a Pointer from v, which must be a +// reflect.Ptr, reflect.Slice, or reflect.Map. +func PointerOf(v reflect.Value) Pointer { + // NOTE: Storing a pointer as an uintptr is technically incorrect as it + // assumes that the GC implementation does not use a moving collector. + return Pointer{v.Pointer(), v.Type()} +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/pointer_unsafe.go b/vendor/github.com/google/go-cmp/cmp/internal/value/pointer_unsafe.go new file mode 100644 index 000000000..da134ae2a --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/pointer_unsafe.go @@ -0,0 +1,26 @@ +// Copyright 2018, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// +build !purego + +package value + +import ( + "reflect" + "unsafe" +) + +// Pointer is an opaque typed pointer and is guaranteed to be comparable. +type Pointer struct { + p unsafe.Pointer + t reflect.Type +} + +// PointerOf returns a Pointer from v, which must be a +// reflect.Ptr, reflect.Slice, or reflect.Map. +func PointerOf(v reflect.Value) Pointer { + // The proper representation of a pointer is unsafe.Pointer, + // which is necessary if the GC ever uses a moving collector. + return Pointer{unsafe.Pointer(v.Pointer()), v.Type()} +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go b/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go new file mode 100644 index 000000000..fe8aa27a0 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go @@ -0,0 +1,111 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +package value + +import ( + "fmt" + "math" + "reflect" + "sort" +) + +// SortKeys sorts a list of map keys, deduplicating keys if necessary. +// The type of each value must be comparable. +func SortKeys(vs []reflect.Value) []reflect.Value { + if len(vs) == 0 { + return vs + } + + // Sort the map keys. + sort.Sort(valueSorter(vs)) + + // Deduplicate keys (fails for NaNs). + vs2 := vs[:1] + for _, v := range vs[1:] { + if isLess(vs2[len(vs2)-1], v) { + vs2 = append(vs2, v) + } + } + return vs2 +} + +// TODO: Use sort.Slice once Google AppEngine is on Go1.8 or above. +type valueSorter []reflect.Value + +func (vs valueSorter) Len() int { return len(vs) } +func (vs valueSorter) Less(i, j int) bool { return isLess(vs[i], vs[j]) } +func (vs valueSorter) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] } + +// isLess is a generic function for sorting arbitrary map keys. +// The inputs must be of the same type and must be comparable. 
+func isLess(x, y reflect.Value) bool { + switch x.Type().Kind() { + case reflect.Bool: + return !x.Bool() && y.Bool() + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return x.Int() < y.Int() + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return x.Uint() < y.Uint() + case reflect.Float32, reflect.Float64: + fx, fy := x.Float(), y.Float() + return fx < fy || math.IsNaN(fx) && !math.IsNaN(fy) + case reflect.Complex64, reflect.Complex128: + cx, cy := x.Complex(), y.Complex() + rx, ix, ry, iy := real(cx), imag(cx), real(cy), imag(cy) + if rx == ry || (math.IsNaN(rx) && math.IsNaN(ry)) { + return ix < iy || math.IsNaN(ix) && !math.IsNaN(iy) + } + return rx < ry || math.IsNaN(rx) && !math.IsNaN(ry) + case reflect.Ptr, reflect.UnsafePointer, reflect.Chan: + return x.Pointer() < y.Pointer() + case reflect.String: + return x.String() < y.String() + case reflect.Array: + for i := 0; i < x.Len(); i++ { + if isLess(x.Index(i), y.Index(i)) { + return true + } + if isLess(y.Index(i), x.Index(i)) { + return false + } + } + return false + case reflect.Struct: + for i := 0; i < x.NumField(); i++ { + if isLess(x.Field(i), y.Field(i)) { + return true + } + if isLess(y.Field(i), x.Field(i)) { + return false + } + } + return false + case reflect.Interface: + vx, vy := x.Elem(), y.Elem() + if !vx.IsValid() || !vy.IsValid() { + return !vx.IsValid() && vy.IsValid() + } + tx, ty := vx.Type(), vy.Type() + if tx == ty { + return isLess(x.Elem(), y.Elem()) + } + if tx.Kind() != ty.Kind() { + return vx.Kind() < vy.Kind() + } + if tx.String() != ty.String() { + return tx.String() < ty.String() + } + if tx.PkgPath() != ty.PkgPath() { + return tx.PkgPath() < ty.PkgPath() + } + // This can happen in rare situations, so we fallback to just comparing + // the unique pointer for a reflect.Type. This guarantees deterministic + // ordering within a program, but it is obviously not stable. + return reflect.ValueOf(vx.Type()).Pointer() < reflect.ValueOf(vy.Type()).Pointer() + default: + // Must be Func, Map, or Slice; which are not comparable. + panic(fmt.Sprintf("%T is not comparable", x.Type())) + } +} diff --git a/vendor/github.com/google/go-cmp/cmp/options.go b/vendor/github.com/google/go-cmp/cmp/options.go new file mode 100644 index 000000000..cc3a4b9e7 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/options.go @@ -0,0 +1,456 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +package cmp + +import ( + "fmt" + "reflect" + "runtime" + "strings" + + "github.com/google/go-cmp/cmp/internal/function" +) + +// Option configures for specific behavior of Equal and Diff. In particular, +// the fundamental Option functions (Ignore, Transformer, and Comparer), +// configure how equality is determined. +// +// The fundamental options may be composed with filters (FilterPath and +// FilterValues) to control the scope over which they are applied. +// +// The cmp/cmpopts package provides helper functions for creating options that +// may be used with Equal and Diff. +type Option interface { + // filter applies all filters and returns the option that remains. + // Each option may only read s.curPath and call s.callTTBFunc. + // + // An Options is returned only if multiple comparers or transformers + // can apply simultaneously and will only contain values of those types + // or sub-Options containing values of those types. 
+ filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption +} + +// applicableOption represents the following types: +// Fundamental: ignore | invalid | *comparer | *transformer +// Grouping: Options +type applicableOption interface { + Option + + // apply executes the option, which may mutate s or panic. + apply(s *state, vx, vy reflect.Value) +} + +// coreOption represents the following types: +// Fundamental: ignore | invalid | *comparer | *transformer +// Filters: *pathFilter | *valuesFilter +type coreOption interface { + Option + isCore() +} + +type core struct{} + +func (core) isCore() {} + +// Options is a list of Option values that also satisfies the Option interface. +// Helper comparison packages may return an Options value when packing multiple +// Option values into a single Option. When this package processes an Options, +// it will be implicitly expanded into a flat list. +// +// Applying a filter on an Options is equivalent to applying that same filter +// on all individual options held within. +type Options []Option + +func (opts Options) filter(s *state, vx, vy reflect.Value, t reflect.Type) (out applicableOption) { + for _, opt := range opts { + switch opt := opt.filter(s, vx, vy, t); opt.(type) { + case ignore: + return ignore{} // Only ignore can short-circuit evaluation + case invalid: + out = invalid{} // Takes precedence over comparer or transformer + case *comparer, *transformer, Options: + switch out.(type) { + case nil: + out = opt + case invalid: + // Keep invalid + case *comparer, *transformer, Options: + out = Options{out, opt} // Conflicting comparers or transformers + } + } + } + return out +} + +func (opts Options) apply(s *state, _, _ reflect.Value) { + const warning = "ambiguous set of applicable options" + const help = "consider using filters to ensure at most one Comparer or Transformer may apply" + var ss []string + for _, opt := range flattenOptions(nil, opts) { + ss = append(ss, fmt.Sprint(opt)) + } + set := strings.Join(ss, "\n\t") + panic(fmt.Sprintf("%s at %#v:\n\t%s\n%s", warning, s.curPath, set, help)) +} + +func (opts Options) String() string { + var ss []string + for _, opt := range opts { + ss = append(ss, fmt.Sprint(opt)) + } + return fmt.Sprintf("Options{%s}", strings.Join(ss, ", ")) +} + +// FilterPath returns a new Option where opt is only evaluated if filter f +// returns true for the current Path in the value tree. +// +// The option passed in may be an Ignore, Transformer, Comparer, Options, or +// a previously filtered Option. +func FilterPath(f func(Path) bool, opt Option) Option { + if f == nil { + panic("invalid path filter function") + } + if opt := normalizeOption(opt); opt != nil { + return &pathFilter{fnc: f, opt: opt} + } + return nil +} + +type pathFilter struct { + core + fnc func(Path) bool + opt Option +} + +func (f pathFilter) filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption { + if f.fnc(s.curPath) { + return f.opt.filter(s, vx, vy, t) + } + return nil +} + +func (f pathFilter) String() string { + fn := getFuncName(reflect.ValueOf(f.fnc).Pointer()) + return fmt.Sprintf("FilterPath(%s, %v)", fn, f.opt) +} + +// FilterValues returns a new Option where opt is only evaluated if filter f, +// which is a function of the form "func(T, T) bool", returns true for the +// current pair of values being compared. If the type of the values is not +// assignable to T, then this filter implicitly returns false. 
+// +// The filter function must be +// symmetric (i.e., agnostic to the order of the inputs) and +// deterministic (i.e., produces the same result when given the same inputs). +// If T is an interface, it is possible that f is called with two values with +// different concrete types that both implement T. +// +// The option passed in may be an Ignore, Transformer, Comparer, Options, or +// a previously filtered Option. +func FilterValues(f interface{}, opt Option) Option { + v := reflect.ValueOf(f) + if !function.IsType(v.Type(), function.ValueFilter) || v.IsNil() { + panic(fmt.Sprintf("invalid values filter function: %T", f)) + } + if opt := normalizeOption(opt); opt != nil { + vf := &valuesFilter{fnc: v, opt: opt} + if ti := v.Type().In(0); ti.Kind() != reflect.Interface || ti.NumMethod() > 0 { + vf.typ = ti + } + return vf + } + return nil +} + +type valuesFilter struct { + core + typ reflect.Type // T + fnc reflect.Value // func(T, T) bool + opt Option +} + +func (f valuesFilter) filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption { + if !vx.IsValid() || !vy.IsValid() { + return invalid{} + } + if (f.typ == nil || t.AssignableTo(f.typ)) && s.callTTBFunc(f.fnc, vx, vy) { + return f.opt.filter(s, vx, vy, t) + } + return nil +} + +func (f valuesFilter) String() string { + fn := getFuncName(f.fnc.Pointer()) + return fmt.Sprintf("FilterValues(%s, %v)", fn, f.opt) +} + +// Ignore is an Option that causes all comparisons to be ignored. +// This value is intended to be combined with FilterPath or FilterValues. +// It is an error to pass an unfiltered Ignore option to Equal. +func Ignore() Option { return ignore{} } + +type ignore struct{ core } + +func (ignore) isFiltered() bool { return false } +func (ignore) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { return ignore{} } +func (ignore) apply(_ *state, _, _ reflect.Value) { return } +func (ignore) String() string { return "Ignore()" } + +// invalid is a sentinel Option type to indicate that some options could not +// be evaluated due to unexported fields. +type invalid struct{ core } + +func (invalid) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { return invalid{} } +func (invalid) apply(s *state, _, _ reflect.Value) { + const help = "consider using AllowUnexported or cmpopts.IgnoreUnexported" + panic(fmt.Sprintf("cannot handle unexported field: %#v\n%s", s.curPath, help)) +} + +// Transformer returns an Option that applies a transformation function that +// converts values of a certain type into that of another. +// +// The transformer f must be a function "func(T) R" that converts values of +// type T to those of type R and is implicitly filtered to input values +// assignable to T. The transformer must not mutate T in any way. +// +// To help prevent some cases of infinite recursive cycles applying the +// same transform to the output of itself (e.g., in the case where the +// input and output types are the same), an implicit filter is added such that +// a transformer is applicable only if that exact transformer is not already +// in the tail of the Path since the last non-Transform step. +// For situations where the implicit filter is still insufficient, +// consider using cmpopts.AcyclicTransformer, which adds a filter +// to prevent the transformer from being recursively applied upon itself. +// +// The name is a user provided label that is used as the Transform.Name in the +// transformation PathStep. If empty, an arbitrary name is used. 
+func Transformer(name string, f interface{}) Option { + v := reflect.ValueOf(f) + if !function.IsType(v.Type(), function.Transformer) || v.IsNil() { + panic(fmt.Sprintf("invalid transformer function: %T", f)) + } + if name == "" { + name = "λ" // Lambda-symbol as place-holder for anonymous transformer + } + if !isValid(name) { + panic(fmt.Sprintf("invalid name: %q", name)) + } + tr := &transformer{name: name, fnc: reflect.ValueOf(f)} + if ti := v.Type().In(0); ti.Kind() != reflect.Interface || ti.NumMethod() > 0 { + tr.typ = ti + } + return tr +} + +type transformer struct { + core + name string + typ reflect.Type // T + fnc reflect.Value // func(T) R +} + +func (tr *transformer) isFiltered() bool { return tr.typ != nil } + +func (tr *transformer) filter(s *state, _, _ reflect.Value, t reflect.Type) applicableOption { + for i := len(s.curPath) - 1; i >= 0; i-- { + if t, ok := s.curPath[i].(*transform); !ok { + break // Hit most recent non-Transform step + } else if tr == t.trans { + return nil // Cannot directly use same Transform + } + } + if tr.typ == nil || t.AssignableTo(tr.typ) { + return tr + } + return nil +} + +func (tr *transformer) apply(s *state, vx, vy reflect.Value) { + // Update path before calling the Transformer so that dynamic checks + // will use the updated path. + s.curPath.push(&transform{pathStep{tr.fnc.Type().Out(0)}, tr}) + defer s.curPath.pop() + + vx = s.callTRFunc(tr.fnc, vx) + vy = s.callTRFunc(tr.fnc, vy) + s.compareAny(vx, vy) +} + +func (tr transformer) String() string { + return fmt.Sprintf("Transformer(%s, %s)", tr.name, getFuncName(tr.fnc.Pointer())) +} + +// Comparer returns an Option that determines whether two values are equal +// to each other. +// +// The comparer f must be a function "func(T, T) bool" and is implicitly +// filtered to input values assignable to T. If T is an interface, it is +// possible that f is called with two values of different concrete types that +// both implement T. +// +// The equality function must be: +// • Symmetric: equal(x, y) == equal(y, x) +// • Deterministic: equal(x, y) == equal(x, y) +// • Pure: equal(x, y) does not modify x or y +func Comparer(f interface{}) Option { + v := reflect.ValueOf(f) + if !function.IsType(v.Type(), function.Equal) || v.IsNil() { + panic(fmt.Sprintf("invalid comparer function: %T", f)) + } + cm := &comparer{fnc: v} + if ti := v.Type().In(0); ti.Kind() != reflect.Interface || ti.NumMethod() > 0 { + cm.typ = ti + } + return cm +} + +type comparer struct { + core + typ reflect.Type // T + fnc reflect.Value // func(T, T) bool +} + +func (cm *comparer) isFiltered() bool { return cm.typ != nil } + +func (cm *comparer) filter(_ *state, _, _ reflect.Value, t reflect.Type) applicableOption { + if cm.typ == nil || t.AssignableTo(cm.typ) { + return cm + } + return nil +} + +func (cm *comparer) apply(s *state, vx, vy reflect.Value) { + eq := s.callTTBFunc(cm.fnc, vx, vy) + s.report(eq, vx, vy) +} + +func (cm comparer) String() string { + return fmt.Sprintf("Comparer(%s)", getFuncName(cm.fnc.Pointer())) +} + +// AllowUnexported returns an Option that forcibly allows operations on +// unexported fields in certain structs, which are specified by passing in a +// value of each struct type. +// +// Users of this option must understand that comparing on unexported fields +// from external packages is not safe since changes in the internal +// implementation of some external package may cause the result of Equal +// to unexpectedly change. 
However, it may be valid to use this option on types +// defined in an internal package where the semantic meaning of an unexported +// field is in the control of the user. +// +// For some cases, a custom Comparer should be used instead that defines +// equality as a function of the public API of a type rather than the underlying +// unexported implementation. +// +// For example, the reflect.Type documentation defines equality to be determined +// by the == operator on the interface (essentially performing a shallow pointer +// comparison) and most attempts to compare *regexp.Regexp types are interested +// in only checking that the regular expression strings are equal. +// Both of these are accomplished using Comparers: +// +// Comparer(func(x, y reflect.Type) bool { return x == y }) +// Comparer(func(x, y *regexp.Regexp) bool { return x.String() == y.String() }) +// +// In other cases, the cmpopts.IgnoreUnexported option can be used to ignore +// all unexported fields on specified struct types. +func AllowUnexported(types ...interface{}) Option { + if !supportAllowUnexported { + panic("AllowUnexported is not supported on purego builds, Google App Engine Standard, or GopherJS") + } + m := make(map[reflect.Type]bool) + for _, typ := range types { + t := reflect.TypeOf(typ) + if t.Kind() != reflect.Struct { + panic(fmt.Sprintf("invalid struct type: %T", typ)) + } + m[t] = true + } + return visibleStructs(m) +} + +type visibleStructs map[reflect.Type]bool + +func (visibleStructs) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { + panic("not implemented") +} + +// reporter is an Option that configures how differences are reported. +type reporter interface { + // TODO: Not exported yet. + // + // Perhaps add PushStep and PopStep and change Report to only accept + // a PathStep instead of the full-path? Adding a PushStep and PopStep makes + // it clear that we are traversing the value tree in a depth-first-search + // manner, which has an effect on how values are printed. + + Option + + // Report is called for every comparison made and will be provided with + // the two values being compared, the equality result, and the + // current path in the value tree. It is possible for x or y to be an + // invalid reflect.Value if one of the values is non-existent; + // which is possible with maps and slices. + Report(x, y reflect.Value, eq bool, p Path) +} + +// normalizeOption normalizes the input options such that all Options groups +// are flattened and groups with a single element are reduced to that element. +// Only coreOptions and Options containing coreOptions are allowed. +func normalizeOption(src Option) Option { + switch opts := flattenOptions(nil, Options{src}); len(opts) { + case 0: + return nil + case 1: + return opts[0] + default: + return opts + } +} + +// flattenOptions copies all options in src to dst as a flat list. +// Only coreOptions and Options containing coreOptions are allowed. +func flattenOptions(dst, src Options) Options { + for _, opt := range src { + switch opt := opt.(type) { + case nil: + continue + case Options: + dst = flattenOptions(dst, opt) + case coreOption: + dst = append(dst, opt) + default: + panic(fmt.Sprintf("invalid option type: %T", opt)) + } + } + return dst +} + +// getFuncName returns a short function name from the pointer. +// The string parsing logic works up until Go1.9. 
+func getFuncName(p uintptr) string { + fnc := runtime.FuncForPC(p) + if fnc == nil { + return "" + } + name := fnc.Name() // E.g., "long/path/name/mypkg.(mytype).(long/path/name/mypkg.myfunc)-fm" + if strings.HasSuffix(name, ")-fm") || strings.HasSuffix(name, ")·fm") { + // Strip the package name from method name. + name = strings.TrimSuffix(name, ")-fm") + name = strings.TrimSuffix(name, ")·fm") + if i := strings.LastIndexByte(name, '('); i >= 0 { + methodName := name[i+1:] // E.g., "long/path/name/mypkg.myfunc" + if j := strings.LastIndexByte(methodName, '.'); j >= 0 { + methodName = methodName[j+1:] // E.g., "myfunc" + } + name = name[:i] + methodName // E.g., "long/path/name/mypkg.(mytype)." + "myfunc" + } + } + if i := strings.LastIndexByte(name, '/'); i >= 0 { + // Strip the package name. + name = name[i+1:] // E.g., "mypkg.(mytype).myfunc" + } + return name +} diff --git a/vendor/github.com/google/go-cmp/cmp/path.go b/vendor/github.com/google/go-cmp/cmp/path.go new file mode 100644 index 000000000..c08a3cf80 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/path.go @@ -0,0 +1,309 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +package cmp + +import ( + "fmt" + "reflect" + "strings" + "unicode" + "unicode/utf8" +) + +type ( + // Path is a list of PathSteps describing the sequence of operations to get + // from some root type to the current position in the value tree. + // The first Path element is always an operation-less PathStep that exists + // simply to identify the initial type. + // + // When traversing structs with embedded structs, the embedded struct will + // always be accessed as a field before traversing the fields of the + // embedded struct themselves. That is, an exported field from the + // embedded struct will never be accessed directly from the parent struct. + Path []PathStep + + // PathStep is a union-type for specific operations to traverse + // a value's tree structure. Users of this package never need to implement + // these types as values of this type will be returned by this package. + PathStep interface { + String() string + Type() reflect.Type // Resulting type after performing the path step + isPathStep() + } + + // SliceIndex is an index operation on a slice or array at some index Key. + SliceIndex interface { + PathStep + Key() int // May return -1 if in a split state + + // SplitKeys returns the indexes for indexing into slices in the + // x and y values, respectively. These indexes may differ due to the + // insertion or removal of an element in one of the slices, causing + // all of the indexes to be shifted. If an index is -1, then that + // indicates that the element does not exist in the associated slice. + // + // Key is guaranteed to return -1 if and only if the indexes returned + // by SplitKeys are not the same. SplitKeys will never return -1 for + // both indexes. + SplitKeys() (x int, y int) + + isSliceIndex() + } + // MapIndex is an index operation on a map at some index Key. + MapIndex interface { + PathStep + Key() reflect.Value + isMapIndex() + } + // TypeAssertion represents a type assertion on an interface. + TypeAssertion interface { + PathStep + isTypeAssertion() + } + // StructField represents a struct field access on a field called Name. + StructField interface { + PathStep + Name() string + Index() int + isStructField() + } + // Indirect represents pointer indirection on the parent type. 
+ Indirect interface { + PathStep + isIndirect() + } + // Transform is a transformation from the parent type to the current type. + Transform interface { + PathStep + Name() string + Func() reflect.Value + + // Option returns the originally constructed Transformer option. + // The == operator can be used to detect the exact option used. + Option() Option + + isTransform() + } +) + +func (pa *Path) push(s PathStep) { + *pa = append(*pa, s) +} + +func (pa *Path) pop() { + *pa = (*pa)[:len(*pa)-1] +} + +// Last returns the last PathStep in the Path. +// If the path is empty, this returns a non-nil PathStep that reports a nil Type. +func (pa Path) Last() PathStep { + return pa.Index(-1) +} + +// Index returns the ith step in the Path and supports negative indexing. +// A negative index starts counting from the tail of the Path such that -1 +// refers to the last step, -2 refers to the second-to-last step, and so on. +// If index is invalid, this returns a non-nil PathStep that reports a nil Type. +func (pa Path) Index(i int) PathStep { + if i < 0 { + i = len(pa) + i + } + if i < 0 || i >= len(pa) { + return pathStep{} + } + return pa[i] +} + +// String returns the simplified path to a node. +// The simplified path only contains struct field accesses. +// +// For example: +// MyMap.MySlices.MyField +func (pa Path) String() string { + var ss []string + for _, s := range pa { + if _, ok := s.(*structField); ok { + ss = append(ss, s.String()) + } + } + return strings.TrimPrefix(strings.Join(ss, ""), ".") +} + +// GoString returns the path to a specific node using Go syntax. +// +// For example: +// (*root.MyMap["key"].(*mypkg.MyStruct).MySlices)[2][3].MyField +func (pa Path) GoString() string { + var ssPre, ssPost []string + var numIndirect int + for i, s := range pa { + var nextStep PathStep + if i+1 < len(pa) { + nextStep = pa[i+1] + } + switch s := s.(type) { + case *indirect: + numIndirect++ + pPre, pPost := "(", ")" + switch nextStep.(type) { + case *indirect: + continue // Next step is indirection, so let them batch up + case *structField: + numIndirect-- // Automatic indirection on struct fields + case nil: + pPre, pPost = "", "" // Last step; no need for parenthesis + } + if numIndirect > 0 { + ssPre = append(ssPre, pPre+strings.Repeat("*", numIndirect)) + ssPost = append(ssPost, pPost) + } + numIndirect = 0 + continue + case *transform: + ssPre = append(ssPre, s.trans.name+"(") + ssPost = append(ssPost, ")") + continue + case *typeAssertion: + // As a special-case, elide type assertions on anonymous types + // since they are typically generated dynamically and can be very + // verbose. For example, some transforms return interface{} because + // of Go's lack of generics, but typically take in and return the + // exact same concrete type. + if s.Type().PkgPath() == "" { + continue + } + } + ssPost = append(ssPost, s.String()) + } + for i, j := 0, len(ssPre)-1; i < j; i, j = i+1, j-1 { + ssPre[i], ssPre[j] = ssPre[j], ssPre[i] + } + return strings.Join(ssPre, "") + strings.Join(ssPost, "") +} + +type ( + pathStep struct { + typ reflect.Type + } + + sliceIndex struct { + pathStep + xkey, ykey int + } + mapIndex struct { + pathStep + key reflect.Value + } + typeAssertion struct { + pathStep + } + structField struct { + pathStep + name string + idx int + + // These fields are used for forcibly accessing an unexported field. + // pvx, pvy, and field are only valid if unexported is true. 
+ unexported bool + force bool // Forcibly allow visibility + pvx, pvy reflect.Value // Parent values + field reflect.StructField // Field information + } + indirect struct { + pathStep + } + transform struct { + pathStep + trans *transformer + } +) + +func (ps pathStep) Type() reflect.Type { return ps.typ } +func (ps pathStep) String() string { + if ps.typ == nil { + return "" + } + s := ps.typ.String() + if s == "" || strings.ContainsAny(s, "{}\n") { + return "root" // Type too simple or complex to print + } + return fmt.Sprintf("{%s}", s) +} + +func (si sliceIndex) String() string { + switch { + case si.xkey == si.ykey: + return fmt.Sprintf("[%d]", si.xkey) + case si.ykey == -1: + // [5->?] means "I don't know where X[5] went" + return fmt.Sprintf("[%d->?]", si.xkey) + case si.xkey == -1: + // [?->3] means "I don't know where Y[3] came from" + return fmt.Sprintf("[?->%d]", si.ykey) + default: + // [5->3] means "X[5] moved to Y[3]" + return fmt.Sprintf("[%d->%d]", si.xkey, si.ykey) + } +} +func (mi mapIndex) String() string { return fmt.Sprintf("[%#v]", mi.key) } +func (ta typeAssertion) String() string { return fmt.Sprintf(".(%v)", ta.typ) } +func (sf structField) String() string { return fmt.Sprintf(".%s", sf.name) } +func (in indirect) String() string { return "*" } +func (tf transform) String() string { return fmt.Sprintf("%s()", tf.trans.name) } + +func (si sliceIndex) Key() int { + if si.xkey != si.ykey { + return -1 + } + return si.xkey +} +func (si sliceIndex) SplitKeys() (x, y int) { return si.xkey, si.ykey } +func (mi mapIndex) Key() reflect.Value { return mi.key } +func (sf structField) Name() string { return sf.name } +func (sf structField) Index() int { return sf.idx } +func (tf transform) Name() string { return tf.trans.name } +func (tf transform) Func() reflect.Value { return tf.trans.fnc } +func (tf transform) Option() Option { return tf.trans } + +func (pathStep) isPathStep() {} +func (sliceIndex) isSliceIndex() {} +func (mapIndex) isMapIndex() {} +func (typeAssertion) isTypeAssertion() {} +func (structField) isStructField() {} +func (indirect) isIndirect() {} +func (transform) isTransform() {} + +var ( + _ SliceIndex = sliceIndex{} + _ MapIndex = mapIndex{} + _ TypeAssertion = typeAssertion{} + _ StructField = structField{} + _ Indirect = indirect{} + _ Transform = transform{} + + _ PathStep = sliceIndex{} + _ PathStep = mapIndex{} + _ PathStep = typeAssertion{} + _ PathStep = structField{} + _ PathStep = indirect{} + _ PathStep = transform{} +) + +// isExported reports whether the identifier is exported. +func isExported(id string) bool { + r, _ := utf8.DecodeRuneInString(id) + return unicode.IsUpper(r) +} + +// isValid reports whether the identifier is valid. +// Empty and underscore-only strings are not valid. +func isValid(id string) bool { + ok := id != "" && id != "_" + for j, c := range id { + ok = ok && (j > 0 || !unicode.IsDigit(c)) + ok = ok && (c == '_' || unicode.IsLetter(c) || unicode.IsDigit(c)) + } + return ok +} diff --git a/vendor/github.com/google/go-cmp/cmp/reporter.go b/vendor/github.com/google/go-cmp/cmp/reporter.go new file mode 100644 index 000000000..20e9f18e0 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/reporter.go @@ -0,0 +1,53 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. 
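For orientation, the fundamental options and filters above compose at the public `cmp` API roughly as in the following sketch. The `Server` type, its fields, and the 0.05 tolerance are illustrative assumptions, not anything defined by the package.

```go
package main

import (
	"fmt"
	"math"

	"github.com/google/go-cmp/cmp"
)

// Server is a made-up type for illustration.
type Server struct {
	Name string
	Load float64
	Tags []string
}

func main() {
	x := Server{Name: "web-1", Load: 0.30, Tags: []string{"prod"}}
	y := Server{Name: "web-1", Load: 0.31}

	opts := cmp.Options{
		// Ignore must be filtered; FilterPath scopes it to the Tags field by
		// matching the simplified Path.String() form ("Tags").
		cmp.FilterPath(func(p cmp.Path) bool { return p.String() == "Tags" }, cmp.Ignore()),
		// Comparer is implicitly filtered to float64 values and treats loads
		// within 0.05 of each other as equal.
		cmp.Comparer(func(a, b float64) bool { return math.Abs(a-b) < 0.05 }),
	}

	fmt.Println(cmp.Equal(x, y, opts)) // true under these options
}
```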
+ +package cmp + +import ( + "fmt" + "reflect" + "strings" + + "github.com/google/go-cmp/cmp/internal/value" +) + +type defaultReporter struct { + Option + diffs []string // List of differences, possibly truncated + ndiffs int // Total number of differences + nbytes int // Number of bytes in diffs + nlines int // Number of lines in diffs +} + +var _ reporter = (*defaultReporter)(nil) + +func (r *defaultReporter) Report(x, y reflect.Value, eq bool, p Path) { + if eq { + return // Ignore equal results + } + const maxBytes = 4096 + const maxLines = 256 + r.ndiffs++ + if r.nbytes < maxBytes && r.nlines < maxLines { + sx := value.Format(x, value.FormatConfig{UseStringer: true}) + sy := value.Format(y, value.FormatConfig{UseStringer: true}) + if sx == sy { + // Unhelpful output, so use more exact formatting. + sx = value.Format(x, value.FormatConfig{PrintPrimitiveType: true}) + sy = value.Format(y, value.FormatConfig{PrintPrimitiveType: true}) + } + s := fmt.Sprintf("%#v:\n\t-: %s\n\t+: %s\n", p, sx, sy) + r.diffs = append(r.diffs, s) + r.nbytes += len(s) + r.nlines += strings.Count(s, "\n") + } +} + +func (r *defaultReporter) String() string { + s := strings.Join(r.diffs, "") + if r.ndiffs == len(r.diffs) { + return s + } + return fmt.Sprintf("%s... %d more differences ...", s, r.ndiffs-len(r.diffs)) +} diff --git a/vendor/github.com/google/go-cmp/cmp/unsafe_reflect.go b/vendor/github.com/google/go-cmp/cmp/unsafe_reflect.go new file mode 100644 index 000000000..579b65507 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/unsafe_reflect.go @@ -0,0 +1,23 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// +build !purego,!appengine,!js + +package cmp + +import ( + "reflect" + "unsafe" +) + +const supportAllowUnexported = true + +// unsafeRetrieveField uses unsafe to forcibly retrieve any field from a struct +// such that the value has read-write permissions. +// +// The parent struct, v, must be addressable, while f must be a StructField +// describing the field to retrieve. +func unsafeRetrieveField(v reflect.Value, f reflect.StructField) reflect.Value { + return reflect.NewAt(f.Type, unsafe.Pointer(v.UnsafeAddr()+f.Offset)).Elem() +} diff --git a/vendor/github.com/gophercloud/gophercloud/FAQ.md b/vendor/github.com/gophercloud/gophercloud/FAQ.md deleted file mode 100644 index 88a366a28..000000000 --- a/vendor/github.com/gophercloud/gophercloud/FAQ.md +++ /dev/null @@ -1,148 +0,0 @@ -# Tips - -## Implementing default logging and re-authentication attempts - -You can implement custom logging and/or limit re-auth attempts by creating a custom HTTP client -like the following and setting it as the provider client's HTTP Client (via the -`gophercloud.ProviderClient.HTTPClient` field): - -```go -//... - -// LogRoundTripper satisfies the http.RoundTripper interface and is used to -// customize the default Gophercloud RoundTripper to allow for logging. -type LogRoundTripper struct { - rt http.RoundTripper - numReauthAttempts int -} - -// newHTTPClient return a custom HTTP client that allows for logging relevant -// information before and after the HTTP request. -func newHTTPClient() http.Client { - return http.Client{ - Transport: &LogRoundTripper{ - rt: http.DefaultTransport, - }, - } -} - -// RoundTrip performs a round-trip HTTP request and logs relevant information about it. 
-func (lrt *LogRoundTripper) RoundTrip(request *http.Request) (*http.Response, error) { - glog.Infof("Request URL: %s\n", request.URL) - - response, err := lrt.rt.RoundTrip(request) - if response == nil { - return nil, err - } - - if response.StatusCode == http.StatusUnauthorized { - if lrt.numReauthAttempts == 3 { - return response, fmt.Errorf("Tried to re-authenticate 3 times with no success.") - } - lrt.numReauthAttempts++ - } - - glog.Debugf("Response Status: %s\n", response.Status) - - return response, nil -} - -endpoint := "https://127.0.0.1/auth" -pc := openstack.NewClient(endpoint) -pc.HTTPClient = newHTTPClient() - -//... -``` - - -## Implementing custom objects - -OpenStack request/response objects may differ among variable names or types. - -### Custom request objects - -To pass custom options to a request, implement the desired `OptsBuilder` interface. For -example, to pass in - -```go -type MyCreateServerOpts struct { - Name string - Size int -} -``` - -to `servers.Create`, simply implement the `servers.CreateOptsBuilder` interface: - -```go -func (o MyCreateServeropts) ToServerCreateMap() (map[string]interface{}, error) { - return map[string]interface{}{ - "name": o.Name, - "size": o.Size, - }, nil -} -``` - -create an instance of your custom options object, and pass it to `servers.Create`: - -```go -// ... -myOpts := MyCreateServerOpts{ - Name: "s1", - Size: "100", -} -server, err := servers.Create(computeClient, myOpts).Extract() -// ... -``` - -### Custom response objects - -Some OpenStack services have extensions. Extensions that are supported in Gophercloud can be -combined to create a custom object: - -```go -// ... -type MyVolume struct { - volumes.Volume - tenantattr.VolumeExt -} - -var v struct { - MyVolume `json:"volume"` -} - -err := volumes.Get(client, volID).ExtractInto(&v) -// ... -``` - -## Overriding default `UnmarshalJSON` method - -For some response objects, a field may be a custom type or may be allowed to take on -different types. In these cases, overriding the default `UnmarshalJSON` method may be -necessary. To do this, declare the JSON `struct` field tag as "-" and create an `UnmarshalJSON` -method on the type: - -```go -// ... -type MyVolume struct { - ID string `json: "id"` - TimeCreated time.Time `json: "-"` -} - -func (r *MyVolume) UnmarshalJSON(b []byte) error { - type tmp MyVolume - var s struct { - tmp - TimeCreated gophercloud.JSONRFC3339MilliNoZ `json:"created_at"` - } - err := json.Unmarshal(b, &s) - if err != nil { - return err - } - *r = Volume(s.tmp) - - r.TimeCreated = time.Time(s.CreatedAt) - - return err -} -// ... -``` diff --git a/vendor/github.com/gophercloud/gophercloud/MIGRATING.md b/vendor/github.com/gophercloud/gophercloud/MIGRATING.md deleted file mode 100644 index aa383c9cc..000000000 --- a/vendor/github.com/gophercloud/gophercloud/MIGRATING.md +++ /dev/null @@ -1,32 +0,0 @@ -# Compute - -## Floating IPs - -* `github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingip` is now `github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips` -* `floatingips.Associate` and `floatingips.Disassociate` have been removed. -* `floatingips.DisassociateOpts` is now required to disassociate a Floating IP. - -## Security Groups - -* `secgroups.AddServerToGroup` is now `secgroups.AddServer`. -* `secgroups.RemoveServerFromGroup` is now `secgroups.RemoveServer`. 
- -## Servers - -* `servers.Reboot` now requires a `servers.RebootOpts` struct: - - ```golang - rebootOpts := &servers.RebootOpts{ - Type: servers.SoftReboot, - } - res := servers.Reboot(client, server.ID, rebootOpts) - ``` - -# Identity - -## V3 - -### Tokens - -* `Token.ExpiresAt` is now of type `gophercloud.JSONRFC3339Milli` instead of - `time.Time` diff --git a/vendor/github.com/gophercloud/gophercloud/README.md b/vendor/github.com/gophercloud/gophercloud/README.md index 60ca479de..8c5bfce79 100644 --- a/vendor/github.com/gophercloud/gophercloud/README.md +++ b/vendor/github.com/gophercloud/gophercloud/README.md @@ -127,7 +127,7 @@ new resource in the `server` variable (a ## Advanced Usage -Have a look at the [FAQ](./FAQ.md) for some tips on customizing the way Gophercloud works. +Have a look at the [FAQ](./docs/FAQ.md) for some tips on customizing the way Gophercloud works. ## Backwards-Compatibility Guarantees @@ -141,3 +141,19 @@ See the [contributing guide](./.github/CONTRIBUTING.md). If you're struggling with something or have spotted a potential bug, feel free to submit an issue to our [bug tracker](/issues). + +## Thank You + +We'd like to extend special thanks and appreciation to the following: + +### OpenLab + + + +OpenLab is providing a full CI environment to test each PR and merge for a variety of OpenStack releases. + +### VEXXHOST + + + +VEXXHOST is providing their services to assist with the development and testing of Gophercloud. diff --git a/vendor/github.com/gophercloud/gophercloud/STYLEGUIDE.md b/vendor/github.com/gophercloud/gophercloud/STYLEGUIDE.md deleted file mode 100644 index e7531a83d..000000000 --- a/vendor/github.com/gophercloud/gophercloud/STYLEGUIDE.md +++ /dev/null @@ -1,74 +0,0 @@ - -## On Pull Requests - -- Before you start a PR there needs to be a Github issue and a discussion about it - on that issue with a core contributor, even if it's just a 'SGTM'. - -- A PR's description must reference the issue it closes with a `For ` (e.g. For #293). - -- A PR's description must contain link(s) to the line(s) in the OpenStack - source code (on Github) that prove(s) the PR code to be valid. Links to documentation - are not good enough. The link(s) should be to a non-`master` branch. For example, - a pull request implementing the creation of a Neutron v2 subnet might put the - following link in the description: - - https://github.com/openstack/neutron/blob/stable/mitaka/neutron/api/v2/attributes.py#L749 - - From that link, a reviewer (or user) can verify the fields in the request/response - objects in the PR. - -- A PR that is in-progress should have `[wip]` in front of the PR's title. When - ready for review, remove the `[wip]` and ping a core contributor with an `@`. - -- Forcing PRs to be small can have the effect of users submitting PRs in a hierarchical chain, with - one depending on the next. If a PR depends on another one, it should have a [Pending #PRNUM] - prefix in the PR title. In addition, it will be the PR submitter's responsibility to remove the - [Pending #PRNUM] tag once the PR has been updated with the merged, dependent PR. That will - let reviewers know it is ready to review. - -- A PR should be small. Even if you intend on implementing an entire - service, a PR should only be one route of that service - (e.g. create server or get server, but not both). - -- Unless explicitly asked, do not squash commits in the middle of a review; only - append. It makes it difficult for the reviewer to see what's changed from one - review to the next. 
- -## On Code - -- In re design: follow as closely as is reasonable the code already in the library. - Most operations (e.g. create, delete) admit the same design. - -- Unit tests and acceptance (integration) tests must be written to cover each PR. - Tests for operations with several options (e.g. list, create) should include all - the options in the tests. This will allow users to verify an operation on their - own infrastructure and see an example of usage. - -- If in doubt, ask in-line on the PR. - -### File Structure - -- The following should be used in most cases: - - - `requests.go`: contains all the functions that make HTTP requests and the - types associated with the HTTP request (parameters for URL, body, etc) - - `results.go`: contains all the response objects and their methods - - `urls.go`: contains the endpoints to which the requests are made - -### Naming - -- For methods on a type in `results.go`, the receiver should be named `r` and the - variable into which it will be unmarshalled `s`. - -- Functions in `requests.go`, with the exception of functions that return a - `pagination.Pager`, should be named returns of the name `r`. - -- Functions in `requests.go` that accept request bodies should accept as their - last parameter an `interface` named `OptsBuilder` (eg `CreateOptsBuilder`). - This `interface` should have at the least a method named `ToMap` - (eg `ToPortCreateMap`). - -- Functions in `requests.go` that accept query strings should accept as their - last parameter an `interface` named `OptsBuilder` (eg `ListOptsBuilder`). - This `interface` should have at the least a method named `ToQuery` - (eg `ToServerListQuery`). diff --git a/vendor/github.com/gophercloud/gophercloud/auth_options.go b/vendor/github.com/gophercloud/gophercloud/auth_options.go index 19c08341a..5e693585c 100644 --- a/vendor/github.com/gophercloud/gophercloud/auth_options.go +++ b/vendor/github.com/gophercloud/gophercloud/auth_options.go @@ -9,12 +9,32 @@ ProviderClient representing an active session on that provider. Its fields are the union of those recognized by each identity implementation and provider. + +An example of manually providing authentication information: + + opts := gophercloud.AuthOptions{ + IdentityEndpoint: "https://openstack.example.com:5000/v2.0", + Username: "{username}", + Password: "{password}", + TenantID: "{tenant_id}", + } + + provider, err := openstack.AuthenticatedClient(opts) + +An example of using AuthOptionsFromEnv(), where the environment variables can +be read from a file, such as a standard openrc file: + + opts, err := openstack.AuthOptionsFromEnv() + provider, err := openstack.AuthenticatedClient(opts) */ type AuthOptions struct { // IdentityEndpoint specifies the HTTP endpoint that is required to work with // the Identity API of the appropriate version. While it's ultimately needed by // all of the identity services, it will often be populated by a provider-level // function. + // + // The IdentityEndpoint is typically referred to as the "auth_url" or + // "OS_AUTH_URL" in the information provided by the cloud operator. IdentityEndpoint string `json:"-"` // Username is required if using Identity V2 API. Consult with your provider's @@ -39,7 +59,7 @@ type AuthOptions struct { // If DomainID or DomainName are provided, they will also apply to TenantName. // It is not currently possible to authenticate with Username and a Domain // and scope to a Project in a different Domain by using TenantName. 
To - // accomplish that, the ProjectID will need to be provided to the TenantID + // accomplish that, the ProjectID will need to be provided as the TenantID // option. TenantID string `json:"tenantId,omitempty"` TenantName string `json:"tenantName,omitempty"` @@ -50,15 +70,28 @@ type AuthOptions struct { // false, it will not cache these settings, but re-authentication will not be // possible. This setting defaults to false. // - // NOTE: The reauth function will try to re-authenticate endlessly if left unchecked. - // The way to limit the number of attempts is to provide a custom HTTP client to the provider client - // and provide a transport that implements the RoundTripper interface and stores the number of failed retries. - // For an example of this, see here: https://github.com/rackspace/rack/blob/1.0.0/auth/clients.go#L311 + // NOTE: The reauth function will try to re-authenticate endlessly if left + // unchecked. The way to limit the number of attempts is to provide a custom + // HTTP client to the provider client and provide a transport that implements + // the RoundTripper interface and stores the number of failed retries. For an + // example of this, see here: + // https://github.com/rackspace/rack/blob/1.0.0/auth/clients.go#L311 AllowReauth bool `json:"-"` // TokenID allows users to authenticate (possibly as another user) with an // authentication token ID. TokenID string `json:"-"` + + // Scope determines the scoping of the authentication request. + Scope *AuthScope `json:"-"` +} + +// AuthScope allows a created token to be limited to a specific domain or project. +type AuthScope struct { + ProjectID string + ProjectName string + DomainID string + DomainName string } // ToTokenV2CreateMap allows AuthOptions to satisfy the AuthOptionsBuilder @@ -241,82 +274,85 @@ func (opts *AuthOptions) ToTokenV3CreateMap(scope map[string]interface{}) (map[s } func (opts *AuthOptions) ToTokenV3ScopeMap() (map[string]interface{}, error) { - - var scope struct { - ProjectID string - ProjectName string - DomainID string - DomainName string - } - - if opts.TenantID != "" { - scope.ProjectID = opts.TenantID - } else { - if opts.TenantName != "" { - scope.ProjectName = opts.TenantName - scope.DomainID = opts.DomainID - scope.DomainName = opts.DomainName + // For backwards compatibility. + // If AuthOptions.Scope was not set, try to determine it. + // This works well for common scenarios. + if opts.Scope == nil { + opts.Scope = new(AuthScope) + if opts.TenantID != "" { + opts.Scope.ProjectID = opts.TenantID + } else { + if opts.TenantName != "" { + opts.Scope.ProjectName = opts.TenantName + opts.Scope.DomainID = opts.DomainID + opts.Scope.DomainName = opts.DomainName + } } } - if scope.ProjectName != "" { + if opts.Scope.ProjectName != "" { // ProjectName provided: either DomainID or DomainName must also be supplied. // ProjectID may not be supplied. 
- if scope.DomainID == "" && scope.DomainName == "" { + if opts.Scope.DomainID == "" && opts.Scope.DomainName == "" { return nil, ErrScopeDomainIDOrDomainName{} } - if scope.ProjectID != "" { + if opts.Scope.ProjectID != "" { return nil, ErrScopeProjectIDOrProjectName{} } - if scope.DomainID != "" { + if opts.Scope.DomainID != "" { // ProjectName + DomainID return map[string]interface{}{ "project": map[string]interface{}{ - "name": &scope.ProjectName, - "domain": map[string]interface{}{"id": &scope.DomainID}, + "name": &opts.Scope.ProjectName, + "domain": map[string]interface{}{"id": &opts.Scope.DomainID}, }, }, nil } - if scope.DomainName != "" { + if opts.Scope.DomainName != "" { // ProjectName + DomainName return map[string]interface{}{ "project": map[string]interface{}{ - "name": &scope.ProjectName, - "domain": map[string]interface{}{"name": &scope.DomainName}, + "name": &opts.Scope.ProjectName, + "domain": map[string]interface{}{"name": &opts.Scope.DomainName}, }, }, nil } - } else if scope.ProjectID != "" { + } else if opts.Scope.ProjectID != "" { // ProjectID provided. ProjectName, DomainID, and DomainName may not be provided. - if scope.DomainID != "" { + if opts.Scope.DomainID != "" { return nil, ErrScopeProjectIDAlone{} } - if scope.DomainName != "" { + if opts.Scope.DomainName != "" { return nil, ErrScopeProjectIDAlone{} } // ProjectID return map[string]interface{}{ "project": map[string]interface{}{ - "id": &scope.ProjectID, + "id": &opts.Scope.ProjectID, }, }, nil - } else if scope.DomainID != "" { + } else if opts.Scope.DomainID != "" { // DomainID provided. ProjectID, ProjectName, and DomainName may not be provided. - if scope.DomainName != "" { + if opts.Scope.DomainName != "" { return nil, ErrScopeDomainIDOrDomainName{} } // DomainID return map[string]interface{}{ "domain": map[string]interface{}{ - "id": &scope.DomainID, + "id": &opts.Scope.DomainID, + }, + }, nil + } else if opts.Scope.DomainName != "" { + // DomainName + return map[string]interface{}{ + "domain": map[string]interface{}{ + "name": &opts.Scope.DomainName, }, }, nil - } else if scope.DomainName != "" { - return nil, ErrScopeDomainName{} } return nil, nil diff --git a/vendor/github.com/gophercloud/gophercloud/doc.go b/vendor/github.com/gophercloud/gophercloud/doc.go index b559516f9..30067aa35 100644 --- a/vendor/github.com/gophercloud/gophercloud/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/doc.go @@ -3,11 +3,17 @@ Package gophercloud provides a multi-vendor interface to OpenStack-compatible clouds. The library has a three-level hierarchy: providers, services, and resources. -Provider structs represent the service providers that offer and manage a -collection of services. The IdentityEndpoint is typically refered to as -"auth_url" in information provided by the cloud operator. Additionally, -the cloud may refer to TenantID or TenantName as project_id and project_name. -These are defined like so: +Authenticating with Providers + +Provider structs represent the cloud providers that offer and manage a +collection of services. You will generally want to create one Provider +client per OpenStack cloud. + +Use your OpenStack credentials to create a Provider client. The +IdentityEndpoint is typically refered to as "auth_url" or "OS_AUTH_URL" in +information provided by the cloud operator. Additionally, the cloud may refer to +TenantID or TenantName as project_id and project_name. 
Credentials are +specified like so: opts := gophercloud.AuthOptions{ IdentityEndpoint: "https://openstack.example.com:5000/v2.0", @@ -18,6 +24,16 @@ These are defined like so: provider, err := openstack.AuthenticatedClient(opts) +You may also use the openstack.AuthOptionsFromEnv() helper function. This +function reads in standard environment variables frequently found in an +OpenStack `openrc` file. Again note that Gophercloud currently uses "tenant" +instead of "project". + + opts, err := openstack.AuthOptionsFromEnv() + provider, err := openstack.AuthenticatedClient(opts) + +Service Clients + Service structs are specific to a provider and handle all of the logic and operations for a particular OpenStack service. Examples of services include: Compute, Object Storage, Block Storage. In order to define one, you need to @@ -27,6 +43,8 @@ pass in the parent provider, like so: client := openstack.NewComputeV2(provider, opts) +Resources + Resource structs are the domain models that services make use of in order to work with and represent the state of API resources: @@ -62,6 +80,12 @@ of results: return true, nil }) +If you want to obtain the entire collection of pages without doing any +intermediary processing on each page, you can use the AllPages method: + + allPages, err := servers.List(client, nil).AllPages() + allServers, err := servers.ExtractServers(allPages) + This top-level package contains utility functions and data types that are used throughout the provider and service packages. Of particular note for end users are the AuthOptions and EndpointOpts structs. diff --git a/vendor/github.com/gophercloud/gophercloud/endpoint_search.go b/vendor/github.com/gophercloud/gophercloud/endpoint_search.go index 9887947f6..2fbc3c97f 100644 --- a/vendor/github.com/gophercloud/gophercloud/endpoint_search.go +++ b/vendor/github.com/gophercloud/gophercloud/endpoint_search.go @@ -27,7 +27,7 @@ const ( // unambiguously identify one, and only one, endpoint within the catalog. // // Usually, these are passed to service client factory functions in a provider -// package, like "rackspace.NewComputeV2()". +// package, like "openstack.NewComputeV2()". type EndpointOpts struct { // Type [required] is the service type for the client (e.g., "compute", // "object-store"). Generally, this will be supplied by the service client diff --git a/vendor/github.com/gophercloud/gophercloud/errors.go b/vendor/github.com/gophercloud/gophercloud/errors.go index e0fe7c1e0..a5fa68d6d 100644 --- a/vendor/github.com/gophercloud/gophercloud/errors.go +++ b/vendor/github.com/gophercloud/gophercloud/errors.go @@ -1,6 +1,9 @@ package gophercloud -import "fmt" +import ( + "fmt" + "strings" +) // BaseError is an error type that all other error types embed. 
type BaseError struct { @@ -43,6 +46,33 @@ func (e ErrInvalidInput) Error() string { return e.choseErrString() } +// ErrMissingEnvironmentVariable is the error when environment variable is required +// in a particular situation but not provided by the user +type ErrMissingEnvironmentVariable struct { + BaseError + EnvironmentVariable string +} + +func (e ErrMissingEnvironmentVariable) Error() string { + e.DefaultErrString = fmt.Sprintf("Missing environment variable [%s]", e.EnvironmentVariable) + return e.choseErrString() +} + +// ErrMissingAnyoneOfEnvironmentVariables is the error when anyone of the environment variables +// is required in a particular situation but not provided by the user +type ErrMissingAnyoneOfEnvironmentVariables struct { + BaseError + EnvironmentVariables []string +} + +func (e ErrMissingAnyoneOfEnvironmentVariables) Error() string { + e.DefaultErrString = fmt.Sprintf( + "Missing one of the following environment variables [%s]", + strings.Join(e.EnvironmentVariables, ", "), + ) + return e.choseErrString() +} + // ErrUnexpectedResponseCode is returned by the Request method when a response code other than // those listed in OkCodes is encountered. type ErrUnexpectedResponseCode struct { @@ -72,6 +102,11 @@ type ErrDefault401 struct { ErrUnexpectedResponseCode } +// ErrDefault403 is the default error type returned on a 403 HTTP response code. +type ErrDefault403 struct { + ErrUnexpectedResponseCode +} + // ErrDefault404 is the default error type returned on a 404 HTTP response code. type ErrDefault404 struct { ErrUnexpectedResponseCode @@ -103,11 +138,22 @@ type ErrDefault503 struct { } func (e ErrDefault400) Error() string { - return "Invalid request due to incorrect syntax or missing required parameters." + e.DefaultErrString = fmt.Sprintf( + "Bad request with: [%s %s], error message: %s", + e.Method, e.URL, e.Body, + ) + return e.choseErrString() } func (e ErrDefault401) Error() string { return "Authentication failed" } +func (e ErrDefault403) Error() string { + e.DefaultErrString = fmt.Sprintf( + "Request forbidden: [%s %s], error message: %s", + e.Method, e.URL, e.Body, + ) + return e.choseErrString() +} func (e ErrDefault404) Error() string { return "Resource not found" } @@ -141,6 +187,12 @@ type Err401er interface { Error401(ErrUnexpectedResponseCode) error } +// Err403er is the interface resource error types implement to override the error message +// from a 403 error. +type Err403er interface { + Error403(ErrUnexpectedResponseCode) error +} + // Err404er is the interface resource error types implement to override the error message // from a 404 error. type Err404er interface { @@ -393,13 +445,6 @@ func (e ErrScopeProjectIDAlone) Error() string { return "ProjectID must be supplied alone in a Scope" } -// ErrScopeDomainName indicates that a DomainName was provided alone in a Scope. -type ErrScopeDomainName struct{ BaseError } - -func (e ErrScopeDomainName) Error() string { - return "DomainName must be supplied with a ProjectName or ProjectID in a Scope" -} - // ErrScopeEmpty indicates that no credentials were provided in a Scope. 
type ErrScopeEmpty struct{ BaseError } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/auth_env.go b/vendor/github.com/gophercloud/gophercloud/openstack/auth_env.go index f6d2eb194..994b5550c 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/auth_env.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/auth_env.go @@ -8,10 +8,27 @@ import ( var nilOptions = gophercloud.AuthOptions{} -// AuthOptionsFromEnv fills out an identity.AuthOptions structure with the settings found on the various OpenStack -// OS_* environment variables. The following variables provide sources of truth: OS_AUTH_URL, OS_USERNAME, -// OS_PASSWORD, OS_TENANT_ID, and OS_TENANT_NAME. Of these, OS_USERNAME, OS_PASSWORD, and OS_AUTH_URL must -// have settings, or an error will result. OS_TENANT_ID and OS_TENANT_NAME are optional. +/* +AuthOptionsFromEnv fills out an identity.AuthOptions structure with the +settings found on the various OpenStack OS_* environment variables. + +The following variables provide sources of truth: OS_AUTH_URL, OS_USERNAME, +OS_PASSWORD, OS_TENANT_ID, and OS_TENANT_NAME. + +Of these, OS_USERNAME, OS_PASSWORD, and OS_AUTH_URL must have settings, +or an error will result. OS_TENANT_ID, OS_TENANT_NAME, OS_PROJECT_ID, and +OS_PROJECT_NAME are optional. + +OS_TENANT_ID and OS_TENANT_NAME are mutually exclusive to OS_PROJECT_ID and +OS_PROJECT_NAME. If OS_PROJECT_ID and OS_PROJECT_NAME are set, they will +still be referred as "tenant" in Gophercloud. + +To use this function, first set the OS_* environment variables (for example, +by sourcing an `openrc` file), then: + + opts, err := openstack.AuthOptionsFromEnv() + provider, err := openstack.AuthenticatedClient(opts) +*/ func AuthOptionsFromEnv() (gophercloud.AuthOptions, error) { authURL := os.Getenv("OS_AUTH_URL") username := os.Getenv("OS_USERNAME") @@ -22,18 +39,34 @@ func AuthOptionsFromEnv() (gophercloud.AuthOptions, error) { domainID := os.Getenv("OS_DOMAIN_ID") domainName := os.Getenv("OS_DOMAIN_NAME") + // If OS_PROJECT_ID is set, overwrite tenantID with the value. + if v := os.Getenv("OS_PROJECT_ID"); v != "" { + tenantID = v + } + + // If OS_PROJECT_NAME is set, overwrite tenantName with the value. + if v := os.Getenv("OS_PROJECT_NAME"); v != "" { + tenantName = v + } + if authURL == "" { - err := gophercloud.ErrMissingInput{Argument: "authURL"} + err := gophercloud.ErrMissingEnvironmentVariable{ + EnvironmentVariable: "OS_AUTH_URL", + } return nilOptions, err } if username == "" && userID == "" { - err := gophercloud.ErrMissingInput{Argument: "username"} + err := gophercloud.ErrMissingAnyoneOfEnvironmentVariables{ + EnvironmentVariables: []string{"OS_USERNAME", "OS_USERID"}, + } return nilOptions, err } if password == "" { - err := gophercloud.ErrMissingInput{Argument: "password"} + err := gophercloud.ErrMissingEnvironmentVariable{ + EnvironmentVariable: "OS_PASSWORD", + } return nilOptions, err } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/client.go b/vendor/github.com/gophercloud/gophercloud/openstack/client.go index 09120e8fa..e554b7bc3 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/client.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/client.go @@ -2,7 +2,6 @@ package openstack import ( "fmt" - "net/url" "reflect" "github.com/gophercloud/gophercloud" @@ -12,43 +11,66 @@ import ( ) const ( - v20 = "v2.0" - v30 = "v3.0" + // v2 represents Keystone v2. + // It should never increase beyond 2.0. + v2 = "v2.0" + + // v3 represents Keystone v3. 
+ // The version can be anything from v3 to v3.x. + v3 = "v3" ) -// NewClient prepares an unauthenticated ProviderClient instance. -// Most users will probably prefer using the AuthenticatedClient function instead. -// This is useful if you wish to explicitly control the version of the identity service that's used for authentication explicitly, -// for example. +/* +NewClient prepares an unauthenticated ProviderClient instance. +Most users will probably prefer using the AuthenticatedClient function +instead. + +This is useful if you wish to explicitly control the version of the identity +service that's used for authentication explicitly, for example. + +A basic example of using this would be: + + ao, err := openstack.AuthOptionsFromEnv() + provider, err := openstack.NewClient(ao.IdentityEndpoint) + client, err := openstack.NewIdentityV3(provider, gophercloud.EndpointOpts{}) +*/ func NewClient(endpoint string) (*gophercloud.ProviderClient, error) { - u, err := url.Parse(endpoint) + base, err := utils.BaseEndpoint(endpoint) if err != nil { return nil, err } - hadPath := u.Path != "" - u.Path, u.RawQuery, u.Fragment = "", "", "" - base := u.String() endpoint = gophercloud.NormalizeURL(endpoint) base = gophercloud.NormalizeURL(base) - if hadPath { - return &gophercloud.ProviderClient{ - IdentityBase: base, - IdentityEndpoint: endpoint, - }, nil - } + p := new(gophercloud.ProviderClient) + p.IdentityBase = base + p.IdentityEndpoint = endpoint + p.UseTokenLock() - return &gophercloud.ProviderClient{ - IdentityBase: base, - IdentityEndpoint: "", - }, nil + return p, nil } -// AuthenticatedClient logs in to an OpenStack cloud found at the identity endpoint specified by options, acquires a token, and -// returns a Client instance that's ready to operate. -// It first queries the root identity endpoint to determine which versions of the identity service are supported, then chooses -// the most recent identity service available to proceed. +/* +AuthenticatedClient logs in to an OpenStack cloud found at the identity endpoint +specified by the options, acquires a token, and returns a Provider Client +instance that's ready to operate. + +If the full path to a versioned identity endpoint was specified (example: +http://example.com:5000/v3), that path will be used as the endpoint to query. + +If a versionless endpoint was specified (example: http://example.com:5000/), +the endpoint will be queried to determine which versions of the identity service +are available, then chooses the most recent or most supported version. + +Example: + + ao, err := openstack.AuthOptionsFromEnv() + provider, err := openstack.AuthenticatedClient(ao) + client, err := openstack.NewNetworkV2(client, gophercloud.EndpointOpts{ + Region: os.Getenv("OS_REGION_NAME"), + }) +*/ func AuthenticatedClient(options gophercloud.AuthOptions) (*gophercloud.ProviderClient, error) { client, err := NewClient(options.IdentityEndpoint) if err != nil { @@ -62,11 +84,12 @@ func AuthenticatedClient(options gophercloud.AuthOptions) (*gophercloud.Provider return client, nil } -// Authenticate or re-authenticate against the most recent identity service supported at the provided endpoint. +// Authenticate or re-authenticate against the most recent identity service +// supported at the provided endpoint. 
func Authenticate(client *gophercloud.ProviderClient, options gophercloud.AuthOptions) error { versions := []*utils.Version{ - {ID: v20, Priority: 20, Suffix: "/v2.0/"}, - {ID: v30, Priority: 30, Suffix: "/v3/"}, + {ID: v2, Priority: 20, Suffix: "/v2.0/"}, + {ID: v3, Priority: 30, Suffix: "/v3/"}, } chosen, endpoint, err := utils.ChooseVersion(client, versions) @@ -75,9 +98,9 @@ func Authenticate(client *gophercloud.ProviderClient, options gophercloud.AuthOp } switch chosen.ID { - case v20: + case v2: return v2auth(client, endpoint, options, gophercloud.EndpointOpts{}) - case v30: + case v3: return v3auth(client, endpoint, &options, gophercloud.EndpointOpts{}) default: // The switch statement must be out of date from the versions list. @@ -123,9 +146,21 @@ func v2auth(client *gophercloud.ProviderClient, endpoint string, options gopherc } if options.AllowReauth { + // here we're creating a throw-away client (tac). it's a copy of the user's provider client, but + // with the token and reauth func zeroed out. combined with setting `AllowReauth` to `false`, + // this should retry authentication only once + tac := *client + tac.ReauthFunc = nil + tac.TokenID = "" + tao := options + tao.AllowReauth = false client.ReauthFunc = func() error { - client.TokenID = "" - return v2auth(client, endpoint, options, eo) + err := v2auth(&tac, endpoint, tao, eo) + if err != nil { + return err + } + client.TokenID = tac.TokenID + return nil } } client.TokenID = token.ID @@ -167,9 +202,32 @@ func v3auth(client *gophercloud.ProviderClient, endpoint string, opts tokens3.Au client.TokenID = token.ID if opts.CanReauth() { + // here we're creating a throw-away client (tac). it's a copy of the user's provider client, but + // with the token and reauth func zeroed out. combined with setting `AllowReauth` to `false`, + // this should retry authentication only once + tac := *client + tac.ReauthFunc = nil + tac.TokenID = "" + var tao tokens3.AuthOptionsBuilder + switch ot := opts.(type) { + case *gophercloud.AuthOptions: + o := *ot + o.AllowReauth = false + tao = &o + case *tokens3.AuthOptions: + o := *ot + o.AllowReauth = false + tao = &o + default: + tao = opts + } client.ReauthFunc = func() error { - client.TokenID = "" - return v3auth(client, endpoint, opts, eo) + err := v3auth(&tac, endpoint, tao, eo) + if err != nil { + return err + } + client.TokenID = tac.TokenID + return nil } } client.EndpointLocator = func(opts gophercloud.EndpointOpts) (string, error) { @@ -179,7 +237,8 @@ func v3auth(client *gophercloud.ProviderClient, endpoint string, opts tokens3.Au return nil } -// NewIdentityV2 creates a ServiceClient that may be used to interact with the v2 identity service. +// NewIdentityV2 creates a ServiceClient that may be used to interact with the +// v2 identity service. func NewIdentityV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { endpoint := client.IdentityBase + "v2.0/" clientType := "identity" @@ -199,7 +258,8 @@ func NewIdentityV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOp }, nil } -// NewIdentityV3 creates a ServiceClient that may be used to access the v3 identity service. +// NewIdentityV3 creates a ServiceClient that may be used to access the v3 +// identity service. 
func NewIdentityV3(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { endpoint := client.IdentityBase + "v3/" clientType := "identity" @@ -212,6 +272,19 @@ func NewIdentityV3(client *gophercloud.ProviderClient, eo gophercloud.EndpointOp } } + // Ensure endpoint still has a suffix of v3. + // This is because EndpointLocator might have found a versionless + // endpoint or the published endpoint is still /v2.0. In both + // cases, we need to fix the endpoint to point to /v3. + base, err := utils.BaseEndpoint(endpoint) + if err != nil { + return nil, err + } + + base = gophercloud.NormalizeURL(base) + + endpoint = base + "v3/" + return &gophercloud.ServiceClient{ ProviderClient: client, Endpoint: endpoint, @@ -232,33 +305,43 @@ func initClientOpts(client *gophercloud.ProviderClient, eo gophercloud.EndpointO return sc, nil } -// NewObjectStorageV1 creates a ServiceClient that may be used with the v1 object storage package. +// NewObjectStorageV1 creates a ServiceClient that may be used with the v1 +// object storage package. func NewObjectStorageV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { return initClientOpts(client, eo, "object-store") } -// NewComputeV2 creates a ServiceClient that may be used with the v2 compute package. +// NewComputeV2 creates a ServiceClient that may be used with the v2 compute +// package. func NewComputeV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { return initClientOpts(client, eo, "compute") } -// NewNetworkV2 creates a ServiceClient that may be used with the v2 network package. +// NewNetworkV2 creates a ServiceClient that may be used with the v2 network +// package. func NewNetworkV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { sc, err := initClientOpts(client, eo, "network") sc.ResourceBase = sc.Endpoint + "v2.0/" return sc, err } -// NewBlockStorageV1 creates a ServiceClient that may be used to access the v1 block storage service. +// NewBlockStorageV1 creates a ServiceClient that may be used to access the v1 +// block storage service. func NewBlockStorageV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { return initClientOpts(client, eo, "volume") } -// NewBlockStorageV2 creates a ServiceClient that may be used to access the v2 block storage service. +// NewBlockStorageV2 creates a ServiceClient that may be used to access the v2 +// block storage service. func NewBlockStorageV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { return initClientOpts(client, eo, "volumev2") } +// NewBlockStorageV3 creates a ServiceClient that may be used to access the v3 block storage service. +func NewBlockStorageV3(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { + return initClientOpts(client, eo, "volumev3") +} + // NewSharedFileSystemV2 creates a ServiceClient that may be used to access the v2 shared file system service. 
func NewSharedFileSystemV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { return initClientOpts(client, eo, "sharev2") @@ -270,7 +353,8 @@ func NewCDNV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) ( return initClientOpts(client, eo, "cdn") } -// NewOrchestrationV1 creates a ServiceClient that may be used to access the v1 orchestration service. +// NewOrchestrationV1 creates a ServiceClient that may be used to access the v1 +// orchestration service. func NewOrchestrationV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { return initClientOpts(client, eo, "orchestration") } @@ -280,16 +364,45 @@ func NewDBV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (* return initClientOpts(client, eo, "database") } -// NewDNSV2 creates a ServiceClient that may be used to access the v2 DNS service. +// NewDNSV2 creates a ServiceClient that may be used to access the v2 DNS +// service. func NewDNSV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { sc, err := initClientOpts(client, eo, "dns") sc.ResourceBase = sc.Endpoint + "v2/" return sc, err } -// NewImageServiceV2 creates a ServiceClient that may be used to access the v2 image service. +// NewImageServiceV2 creates a ServiceClient that may be used to access the v2 +// image service. func NewImageServiceV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { sc, err := initClientOpts(client, eo, "image") sc.ResourceBase = sc.Endpoint + "v2/" return sc, err } + +// NewLoadBalancerV2 creates a ServiceClient that may be used to access the v2 +// load balancer service. +func NewLoadBalancerV2(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { + sc, err := initClientOpts(client, eo, "load-balancer") + sc.ResourceBase = sc.Endpoint + "v2.0/" + return sc, err +} + +// NewClusteringV1 creates a ServiceClient that may be used with the v1 clustering +// package. +func NewClusteringV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { + return initClientOpts(client, eo, "clustering") +} + +// NewMessagingV2 creates a ServiceClient that may be used with the v2 messaging +// service. +func NewMessagingV2(client *gophercloud.ProviderClient, clientID string, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { + sc, err := initClientOpts(client, eo, "messaging") + sc.MoreHeaders = map[string]string{"Client-ID": clientID} + return sc, err +} + +// NewContainerV1 creates a ServiceClient that may be used with v1 container package +func NewContainerV1(client *gophercloud.ProviderClient, eo gophercloud.EndpointOpts) (*gophercloud.ServiceClient, error) { + return initClientOpts(client, eo, "container") +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/doc.go deleted file mode 100644 index 4a168f4b2..000000000 --- a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/doc.go +++ /dev/null @@ -1,15 +0,0 @@ -// Package extensions provides information and interaction with the different extensions available -// for an OpenStack service. -// -// The purpose of OpenStack API extensions is to: -// -// - Introduce new features in the API without requiring a version change. 
-// - Introduce vendor-specific niche functionality. -// - Act as a proving ground for experimental functionalities that might be included in a future -// version of the API. -// -// Extensions usually have tags that prevent conflicts with other extensions that define attributes -// or resources with the same names, and with core resources and attributes. -// Because an extension might not be supported by all plug-ins, its availability varies with deployments -// and the specific plug-in. -package extensions diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/errors.go b/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/errors.go deleted file mode 100755 index aeec0fa75..000000000 --- a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/errors.go +++ /dev/null @@ -1 +0,0 @@ -package extensions diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/requests.go deleted file mode 100755 index 46b7d60cd..000000000 --- a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/requests.go +++ /dev/null @@ -1,20 +0,0 @@ -package extensions - -import ( - "github.com/gophercloud/gophercloud" - "github.com/gophercloud/gophercloud/pagination" -) - -// Get retrieves information for a specific extension using its alias. -func Get(c *gophercloud.ServiceClient, alias string) (r GetResult) { - _, r.Err = c.Get(ExtensionURL(c, alias), &r.Body, nil) - return -} - -// List returns a Pager which allows you to iterate over the full collection of extensions. -// It does not accept query parameters. -func List(c *gophercloud.ServiceClient) pagination.Pager { - return pagination.NewPager(c, ListExtensionURL(c), func(r pagination.PageResult) pagination.Page { - return ExtensionPage{pagination.SinglePageBase(r)} - }) -} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/results.go deleted file mode 100755 index d5f865091..000000000 --- a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/results.go +++ /dev/null @@ -1,53 +0,0 @@ -package extensions - -import ( - "github.com/gophercloud/gophercloud" - "github.com/gophercloud/gophercloud/pagination" -) - -// GetResult temporarily stores the result of a Get call. -// Use its Extract() method to interpret it as an Extension. -type GetResult struct { - gophercloud.Result -} - -// Extract interprets a GetResult as an Extension. -func (r GetResult) Extract() (*Extension, error) { - var s struct { - Extension *Extension `json:"extension"` - } - err := r.ExtractInto(&s) - return s.Extension, err -} - -// Extension is a struct that represents an OpenStack extension. -type Extension struct { - Updated string `json:"updated"` - Name string `json:"name"` - Links []interface{} `json:"links"` - Namespace string `json:"namespace"` - Alias string `json:"alias"` - Description string `json:"description"` -} - -// ExtensionPage is the page returned by a pager when traversing over a collection of extensions. -type ExtensionPage struct { - pagination.SinglePageBase -} - -// IsEmpty checks whether an ExtensionPage struct is empty. 
-func (r ExtensionPage) IsEmpty() (bool, error) { - is, err := ExtractExtensions(r) - return len(is) == 0, err -} - -// ExtractExtensions accepts a Page struct, specifically an ExtensionPage struct, and extracts the -// elements into a slice of Extension structs. -// In other words, a generic collection is mapped into a relevant slice. -func ExtractExtensions(r pagination.Page) ([]Extension, error) { - var s struct { - Extensions []Extension `json:"extensions"` - } - err := (r.(ExtensionPage)).ExtractInto(&s) - return s.Extensions, err -} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/urls.go b/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/urls.go deleted file mode 100644 index eaf38b2d1..000000000 --- a/vendor/github.com/gophercloud/gophercloud/openstack/common/extensions/urls.go +++ /dev/null @@ -1,13 +0,0 @@ -package extensions - -import "github.com/gophercloud/gophercloud" - -// ExtensionURL generates the URL for an extension resource by name. -func ExtensionURL(c *gophercloud.ServiceClient, name string) string { - return c.ServiceURL("extensions", name) -} - -// ListExtensionURL generates the URL for the extensions resource collection. -func ListExtensionURL(c *gophercloud.ServiceClient) string { - return c.ServiceURL("extensions") -} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/delegate.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/delegate.go deleted file mode 100644 index 00e7c3bec..000000000 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/delegate.go +++ /dev/null @@ -1,23 +0,0 @@ -package extensions - -import ( - "github.com/gophercloud/gophercloud" - common "github.com/gophercloud/gophercloud/openstack/common/extensions" - "github.com/gophercloud/gophercloud/pagination" -) - -// ExtractExtensions interprets a Page as a slice of Extensions. -func ExtractExtensions(page pagination.Page) ([]common.Extension, error) { - return common.ExtractExtensions(page) -} - -// Get retrieves information for a specific extension using its alias. -func Get(c *gophercloud.ServiceClient, alias string) common.GetResult { - return common.Get(c, alias) -} - -// List returns a Pager which allows you to iterate over the full collection of extensions. -// It does not accept query parameters. -func List(c *gophercloud.ServiceClient) pagination.Pager { - return common.List(c) -} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/doc.go deleted file mode 100644 index 2b447da1d..000000000 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/doc.go +++ /dev/null @@ -1,3 +0,0 @@ -// Package extensions provides information and interaction with the -// different extensions available for the OpenStack Compute service. 
-package extensions diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/doc.go index 6682fa629..f5dbdbf8b 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/doc.go @@ -1,3 +1,68 @@ -// Package floatingips provides the ability to manage floating ips through -// nova-network +/* +Package floatingips provides the ability to manage floating ips through the +Nova API. + +This API has been deprecated and will be removed from a future release of the +Nova API service. + +For environements that support this extension, this package can be used +regardless of if either Neutron or nova-network is used as the cloud's network +service. + +Example to List Floating IPs + + allPages, err := floatingips.List(computeClient).AllPages() + if err != nil { + panic(err) + } + + allFloatingIPs, err := floatingips.ExtractFloatingIPs(allPages) + if err != nil { + panic(err) + } + + for _, fip := range allFloatingIPs { + fmt.Printf("%+v\n", fip) + } + +Example to Create a Floating IP + + createOpts := floatingips.CreateOpts{ + Pool: "nova", + } + + fip, err := floatingips.Create(computeClient, createOpts).Extract() + if err != nil { + panic(err) + } + +Example to Delete a Floating IP + + err := floatingips.Delete(computeClient, "floatingip-id").ExtractErr() + if err != nil { + panic(err) + } + +Example to Associate a Floating IP With a Server + + associateOpts := floatingips.AssociateOpts{ + FloatingIP: "10.10.10.2", + } + + err := floatingips.AssociateInstance(computeClient, "server-id", associateOpts).ExtractErr() + if err != nil { + panic(err) + } + +Example to Disassociate a Floating IP From a Server + + disassociateOpts := floatingips.DisassociateOpts{ + FloatingIP: "10.10.10.2", + } + + err := floatingips.DisassociateInstance(computeClient, "server-id", disassociateOpts).ExtractErr() + if err != nil { + panic(err) + } +*/ package floatingips diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/requests.go index b36aeba59..a922639de 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/requests.go @@ -12,15 +12,15 @@ func List(client *gophercloud.ServiceClient) pagination.Pager { }) } -// CreateOptsBuilder describes struct types that can be accepted by the Create call. Notable, the -// CreateOpts struct in this package does. +// CreateOptsBuilder allows extensions to add additional parameters to the +// Create request. type CreateOptsBuilder interface { ToFloatingIPCreateMap() (map[string]interface{}, error) } -// CreateOpts specifies a Floating IP allocation request +// CreateOpts specifies a Floating IP allocation request. type CreateOpts struct { - // Pool is the pool of floating IPs to allocate one from + // Pool is the pool of Floating IPs to allocate one from. 
Pool string `json:"pool" required:"true"` } @@ -29,7 +29,7 @@ func (opts CreateOpts) ToFloatingIPCreateMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "") } -// Create requests the creation of a new floating IP +// Create requests the creation of a new Floating IP. func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r CreateResult) { b, err := opts.ToFloatingIPCreateMap() if err != nil { @@ -42,29 +42,30 @@ func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r Create return } -// Get returns data about a previously created FloatingIP. +// Get returns data about a previously created Floating IP. func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { _, r.Err = client.Get(getURL(client, id), &r.Body, nil) return } -// Delete requests the deletion of a previous allocated FloatingIP. +// Delete requests the deletion of a previous allocated Floating IP. func Delete(client *gophercloud.ServiceClient, id string) (r DeleteResult) { _, r.Err = client.Delete(deleteURL(client, id), nil) return } -// AssociateOptsBuilder is the interface types must satfisfy to be used as -// Associate options +// AssociateOptsBuilder allows extensions to add additional parameters to the +// Associate request. type AssociateOptsBuilder interface { ToFloatingIPAssociateMap() (map[string]interface{}, error) } -// AssociateOpts specifies the required information to associate a floating IP with an instance +// AssociateOpts specifies the required information to associate a Floating IP with an instance type AssociateOpts struct { - // FloatingIP is the floating IP to associate with an instance + // FloatingIP is the Floating IP to associate with an instance. FloatingIP string `json:"address" required:"true"` - // FixedIP is an optional fixed IP address of the server + + // FixedIP is an optional fixed IP address of the server. FixedIP string `json:"fixed_address,omitempty"` } @@ -73,7 +74,7 @@ func (opts AssociateOpts) ToFloatingIPAssociateMap() (map[string]interface{}, er return gophercloud.BuildRequestBody(opts, "addFloatingIp") } -// AssociateInstance pairs an allocated floating IP with an instance. +// AssociateInstance pairs an allocated Floating IP with a server. func AssociateInstance(client *gophercloud.ServiceClient, serverID string, opts AssociateOptsBuilder) (r AssociateResult) { b, err := opts.ToFloatingIPAssociateMap() if err != nil { @@ -84,23 +85,24 @@ func AssociateInstance(client *gophercloud.ServiceClient, serverID string, opts return } -// DisassociateOptsBuilder is the interface types must satfisfy to be used as -// Disassociate options +// DisassociateOptsBuilder allows extensions to add additional parameters to +// the Disassociate request. type DisassociateOptsBuilder interface { ToFloatingIPDisassociateMap() (map[string]interface{}, error) } -// DisassociateOpts specifies the required information to disassociate a floating IP with an instance +// DisassociateOpts specifies the required information to disassociate a +// Floating IP with a server. type DisassociateOpts struct { FloatingIP string `json:"address" required:"true"` } -// ToFloatingIPDisassociateMap constructs a request body from AssociateOpts. +// ToFloatingIPDisassociateMap constructs a request body from DisassociateOpts. 
func (opts DisassociateOpts) ToFloatingIPDisassociateMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "removeFloatingIp") } -// DisassociateInstance decouples an allocated floating IP from an instance +// DisassociateInstance decouples an allocated Floating IP from an instance func DisassociateInstance(client *gophercloud.ServiceClient, serverID string, opts DisassociateOptsBuilder) (r DisassociateResult) { b, err := opts.ToFloatingIPDisassociateMap() if err != nil { diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/results.go index 2f5b33844..da4e9da0e 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips/results.go @@ -8,21 +8,21 @@ import ( "github.com/gophercloud/gophercloud/pagination" ) -// A FloatingIP is an IP that can be associated with an instance +// A FloatingIP is an IP that can be associated with a server. type FloatingIP struct { // ID is a unique ID of the Floating IP ID string `json:"-"` - // FixedIP is the IP of the instance related to the Floating IP + // FixedIP is a specific IP on the server to pair the Floating IP with. FixedIP string `json:"fixed_ip,omitempty"` - // InstanceID is the ID of the instance that is using the Floating IP + // InstanceID is the ID of the server that is using the Floating IP. InstanceID string `json:"instance_id"` - // IP is the actual Floating IP + // IP is the actual Floating IP. IP string `json:"ip"` - // Pool is the pool of floating IPs that this floating IP belongs to + // Pool is the pool of Floating IPs that this Floating IP belongs to. Pool string `json:"pool"` } @@ -49,8 +49,7 @@ func (r *FloatingIP) UnmarshalJSON(b []byte) error { return err } -// FloatingIPPage stores a single, only page of FloatingIPs -// results from a List call. +// FloatingIPPage stores a single page of FloatingIPs from a List call. type FloatingIPPage struct { pagination.SinglePageBase } @@ -61,8 +60,7 @@ func (page FloatingIPPage) IsEmpty() (bool, error) { return len(va) == 0, err } -// ExtractFloatingIPs interprets a page of results as a slice of -// FloatingIPs. +// ExtractFloatingIPs interprets a page of results as a slice of FloatingIPs. func ExtractFloatingIPs(r pagination.Page) ([]FloatingIP, error) { var s struct { FloatingIPs []FloatingIP `json:"floating_ips"` @@ -86,32 +84,32 @@ func (r FloatingIPResult) Extract() (*FloatingIP, error) { return s.FloatingIP, err } -// CreateResult is the response from a Create operation. Call its Extract method to interpret it -// as a FloatingIP. +// CreateResult is the response from a Create operation. Call its Extract method +// to interpret it as a FloatingIP. type CreateResult struct { FloatingIPResult } -// GetResult is the response from a Get operation. Call its Extract method to interpret it -// as a FloatingIP. +// GetResult is the response from a Get operation. Call its Extract method to +// interpret it as a FloatingIP. type GetResult struct { FloatingIPResult } -// DeleteResult is the response from a Delete operation. Call its Extract method to determine if -// the call succeeded or failed. +// DeleteResult is the response from a Delete operation. Call its ExtractErr +// method to determine if the call succeeded or failed. 
type DeleteResult struct { gophercloud.ErrResult } -// AssociateResult is the response from a Delete operation. Call its Extract method to determine if -// the call succeeded or failed. +// AssociateResult is the response from a Delete operation. Call its ExtractErr +// method to determine if the call succeeded or failed. type AssociateResult struct { gophercloud.ErrResult } -// DisassociateResult is the response from a Delete operation. Call its Extract method to determine if -// the call succeeded or failed. +// DisassociateResult is the response from a Delete operation. Call its +// ExtractErr method to determine if the call succeeded or failed. type DisassociateResult struct { gophercloud.ErrResult } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/doc.go index 856f41bac..24c460772 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/doc.go @@ -1,3 +1,71 @@ -// Package keypairs provides information and interaction with the Keypairs -// extension for the OpenStack Compute service. +/* +Package keypairs provides the ability to manage key pairs as well as create +servers with a specified key pair. + +Example to List Key Pairs + + allPages, err := keypairs.List(computeClient).AllPages() + if err != nil { + panic(err) + } + + allKeyPairs, err := keypairs.ExtractKeyPairs(allPages) + if err != nil { + panic(err) + } + + for _, kp := range allKeyPairs { + fmt.Printf("%+v\n", kp) + } + +Example to Create a Key Pair + + createOpts := keypairs.CreateOpts{ + Name: "keypair-name", + } + + keypair, err := keypairs.Create(computeClient, createOpts).Extract() + if err != nil { + panic(err) + } + + fmt.Printf("%+v", keypair) + +Example to Import a Key Pair + + createOpts := keypairs.CreateOpts{ + Name: "keypair-name", + PublicKey: "public-key", + } + + keypair, err := keypairs.Create(computeClient, createOpts).Extract() + if err != nil { + panic(err) + } + +Example to Delete a Key Pair + + err := keypairs.Delete(computeClient, "keypair-name").ExtractErr() + if err != nil { + panic(err) + } + +Example to Create a Server With a Key Pair + + serverCreateOpts := servers.CreateOpts{ + Name: "server_name", + ImageRef: "image-uuid", + FlavorRef: "flavor-uuid", + } + + createOpts := keypairs.CreateOptsExt{ + CreateOptsBuilder: serverCreateOpts, + KeyName: "keypair-name", + } + + server, err := servers.Create(computeClient, createOpts).Extract() + if err != nil { + panic(err) + } +*/ package keypairs diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/requests.go index adf1e5596..4e5e499e3 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/requests.go @@ -9,11 +9,12 @@ import ( // CreateOptsExt adds a KeyPair option to the base CreateOpts. type CreateOptsExt struct { servers.CreateOptsBuilder + + // KeyName is the name of the key pair. KeyName string `json:"key_name,omitempty"` } -// ToServerCreateMap adds the key_name and, optionally, key_data options to -// the base server creation options. 
+// ToServerCreateMap adds the key_name to the base server creation options. func (opts CreateOptsExt) ToServerCreateMap() (map[string]interface{}, error) { base, err := opts.CreateOptsBuilder.ToServerCreateMap() if err != nil { @@ -37,18 +38,19 @@ func List(client *gophercloud.ServiceClient) pagination.Pager { }) } -// CreateOptsBuilder describes struct types that can be accepted by the Create call. Notable, the -// CreateOpts struct in this package does. +// CreateOptsBuilder allows extensions to add additional parameters to the +// Create request. type CreateOptsBuilder interface { ToKeyPairCreateMap() (map[string]interface{}, error) } -// CreateOpts specifies keypair creation or import parameters. +// CreateOpts specifies KeyPair creation or import parameters. type CreateOpts struct { // Name is a friendly name to refer to this KeyPair in other services. Name string `json:"name" required:"true"` - // PublicKey [optional] is a pregenerated OpenSSH-formatted public key. If provided, this key - // will be imported and no new key will be created. + + // PublicKey [optional] is a pregenerated OpenSSH-formatted public key. + // If provided, this key will be imported and no new key will be created. PublicKey string `json:"public_key,omitempty"` } @@ -57,8 +59,8 @@ func (opts CreateOpts) ToKeyPairCreateMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "keypair") } -// Create requests the creation of a new keypair on the server, or to import a pre-existing -// keypair. +// Create requests the creation of a new KeyPair on the server, or to import a +// pre-existing keypair. func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r CreateResult) { b, err := opts.ToKeyPairCreateMap() if err != nil { diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/results.go index 4c785a24c..2d71034b1 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs/results.go @@ -5,29 +5,33 @@ import ( "github.com/gophercloud/gophercloud/pagination" ) -// KeyPair is an SSH key known to the OpenStack Cloud that is available to be injected into -// servers. +// KeyPair is an SSH key known to the OpenStack Cloud that is available to be +// injected into servers. type KeyPair struct { - // Name is used to refer to this keypair from other services within this region. + // Name is used to refer to this keypair from other services within this + // region. Name string `json:"name"` - // Fingerprint is a short sequence of bytes that can be used to authenticate or validate a longer - // public key. + // Fingerprint is a short sequence of bytes that can be used to authenticate + // or validate a longer public key. Fingerprint string `json:"fingerprint"` - // PublicKey is the public key from this pair, in OpenSSH format. "ssh-rsa AAAAB3Nz..." + // PublicKey is the public key from this pair, in OpenSSH format. + // "ssh-rsa AAAAB3Nz..." PublicKey string `json:"public_key"` // PrivateKey is the private key from this pair, in PEM format. - // "-----BEGIN RSA PRIVATE KEY-----\nMIICXA..." It is only present if this keypair was just - // returned from a Create call + // "-----BEGIN RSA PRIVATE KEY-----\nMIICXA..." + // It is only present if this KeyPair was just returned from a Create call. 
PrivateKey string `json:"private_key"` - // UserID is the user who owns this keypair. + // UserID is the user who owns this KeyPair. UserID string `json:"user_id"` } -// KeyPairPage stores a single, only page of KeyPair results from a List call. +// KeyPairPage stores a single page of all KeyPair results from a List call. +// Use the ExtractKeyPairs function to convert the results to a slice of +// KeyPairs. type KeyPairPage struct { pagination.SinglePageBase } @@ -58,7 +62,8 @@ type keyPairResult struct { gophercloud.Result } -// Extract is a method that attempts to interpret any KeyPair resource response as a KeyPair struct. +// Extract is a method that attempts to interpret any KeyPair resource response +// as a KeyPair struct. func (r keyPairResult) Extract() (*KeyPair, error) { var s struct { KeyPair *KeyPair `json:"keypair"` @@ -67,20 +72,20 @@ func (r keyPairResult) Extract() (*KeyPair, error) { return s.KeyPair, err } -// CreateResult is the response from a Create operation. Call its Extract method to interpret it -// as a KeyPair. +// CreateResult is the response from a Create operation. Call its Extract method +// to interpret it as a KeyPair. type CreateResult struct { keyPairResult } -// GetResult is the response from a Get operation. Call its Extract method to interpret it -// as a KeyPair. +// GetResult is the response from a Get operation. Call its Extract method to +// interpret it as a KeyPair. type GetResult struct { keyPairResult } -// DeleteResult is the response from a Delete operation. Call its Extract method to determine if -// the call succeeded or failed. +// DeleteResult is the response from a Delete operation. Call its ExtractErr +// method to determine if the call succeeded or failed. type DeleteResult struct { gophercloud.ErrResult } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/doc.go index d2729f874..ab97edb77 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/doc.go @@ -1,5 +1,19 @@ /* Package startstop provides functionality to start and stop servers that have been provisioned by the OpenStack Compute service. + +Example to Stop and Start a Server + + serverID := "47b6b7b7-568d-40e4-868c-d5c41735532e" + + err := startstop.Stop(computeClient, serverID).ExtractErr() + if err != nil { + panic(err) + } + + err := startstop.Start(computeClient, serverID).ExtractErr() + if err != nil { + panic(err) + } */ package startstop diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/requests.go index 1d8a593b9..5b4f3f39d 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/requests.go @@ -7,13 +7,13 @@ func actionURL(client *gophercloud.ServiceClient, id string) string { } // Start is the operation responsible for starting a Compute server. 
-func Start(client *gophercloud.ServiceClient, id string) (r gophercloud.ErrResult) { +func Start(client *gophercloud.ServiceClient, id string) (r StartResult) { _, r.Err = client.Post(actionURL(client, id), map[string]interface{}{"os-start": nil}, nil, nil) return } // Stop is the operation responsible for stopping a Compute server. -func Stop(client *gophercloud.ServiceClient, id string) (r gophercloud.ErrResult) { +func Stop(client *gophercloud.ServiceClient, id string) (r StopResult) { _, r.Err = client.Post(actionURL(client, id), map[string]interface{}{"os-stop": nil}, nil, nil) return } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/results.go new file mode 100644 index 000000000..834968933 --- /dev/null +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop/results.go @@ -0,0 +1,15 @@ +package startstop + +import "github.com/gophercloud/gophercloud" + +// StartResult is the response from a Start operation. Call its ExtractErr +// method to determine if the request succeeded or failed. +type StartResult struct { + gophercloud.ErrResult +} + +// StopResult is the response from Stop operation. Call its ExtractErr +// method to determine if the request succeeded or failed. +type StopResult struct { + gophercloud.ErrResult +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/doc.go index 5822e1bcf..34d8764fa 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/doc.go @@ -1,7 +1,137 @@ -// Package flavors provides information and interaction with the flavor API -// resource in the OpenStack Compute service. -// -// A flavor is an available hardware configuration for a server. Each flavor -// has a unique combination of disk space, memory capacity and priority for CPU -// time. +/* +Package flavors provides information and interaction with the flavor API +in the OpenStack Compute service. + +A flavor is an available hardware configuration for a server. Each flavor +has a unique combination of disk space, memory capacity and priority for CPU +time. 
+ +Example to List Flavors + + listOpts := flavors.ListOpts{ + AccessType: flavors.PublicAccess, + } + + allPages, err := flavors.ListDetail(computeClient, listOpts).AllPages() + if err != nil { + panic(err) + } + + allFlavors, err := flavors.ExtractFlavors(allPages) + if err != nil { + panic(err) + } + + for _, flavor := range allFlavors { + fmt.Printf("%+v\n", flavor) + } + +Example to Create a Flavor + + createOpts := flavors.CreateOpts{ + ID: "1", + Name: "m1.tiny", + Disk: gophercloud.IntToPointer(1), + RAM: 512, + VCPUs: 1, + RxTxFactor: 1.0, + } + + flavor, err := flavors.Create(computeClient, createOpts).Extract() + if err != nil { + panic(err) + } + +Example to List Flavor Access + + flavorID := "e91758d6-a54a-4778-ad72-0c73a1cb695b" + + allPages, err := flavors.ListAccesses(computeClient, flavorID).AllPages() + if err != nil { + panic(err) + } + + allAccesses, err := flavors.ExtractAccesses(allPages) + if err != nil { + panic(err) + } + + for _, access := range allAccesses { + fmt.Printf("%+v", access) + } + +Example to Grant Access to a Flavor + + flavorID := "e91758d6-a54a-4778-ad72-0c73a1cb695b" + + accessOpts := flavors.AddAccessOpts{ + Tenant: "15153a0979884b59b0592248ef947921", + } + + accessList, err := flavors.AddAccess(computeClient, flavor.ID, accessOpts).Extract() + if err != nil { + panic(err) + } + +Example to Remove/Revoke Access to a Flavor + + flavorID := "e91758d6-a54a-4778-ad72-0c73a1cb695b" + + accessOpts := flavors.RemoveAccessOpts{ + Tenant: "15153a0979884b59b0592248ef947921", + } + + accessList, err := flavors.RemoveAccess(computeClient, flavor.ID, accessOpts).Extract() + if err != nil { + panic(err) + } + +Example to Create Extra Specs for a Flavor + + flavorID := "e91758d6-a54a-4778-ad72-0c73a1cb695b" + + createOpts := flavors.ExtraSpecsOpts{ + "hw:cpu_policy": "CPU-POLICY", + "hw:cpu_thread_policy": "CPU-THREAD-POLICY", + } + createdExtraSpecs, err := flavors.CreateExtraSpecs(computeClient, flavorID, createOpts).Extract() + if err != nil { + panic(err) + } + + fmt.Printf("%+v", createdExtraSpecs) + +Example to Get Extra Specs for a Flavor + + flavorID := "e91758d6-a54a-4778-ad72-0c73a1cb695b" + + extraSpecs, err := flavors.ListExtraSpecs(computeClient, flavorID).Extract() + if err != nil { + panic(err) + } + + fmt.Printf("%+v", extraSpecs) + +Example to Update Extra Specs for a Flavor + + flavorID := "e91758d6-a54a-4778-ad72-0c73a1cb695b" + + updateOpts := flavors.ExtraSpecsOpts{ + "hw:cpu_thread_policy": "CPU-THREAD-POLICY-UPDATED", + } + updatedExtraSpec, err := flavors.UpdateExtraSpec(computeClient, flavorID, updateOpts).Extract() + if err != nil { + panic(err) + } + + fmt.Printf("%+v", updatedExtraSpec) + +Example to Delete an Extra Spec for a Flavor + + flavorID := "e91758d6-a54a-4778-ad72-0c73a1cb695b" + err := flavors.DeleteExtraSpec(computeClient, flavorID, "hw:cpu_thread_policy").ExtractErr() + if err != nil { + panic(err) + } +*/ package flavors diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go index 03d7e8724..4b406df95 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/requests.go @@ -11,33 +11,44 @@ type ListOptsBuilder interface { ToFlavorListQuery() (string, error) } -// AccessType maps to OpenStack's Flavor.is_public field. 
Although the is_public field is boolean, the -// request options are ternary, which is why AccessType is a string. The following values are -// allowed: -// -// PublicAccess (the default): Returns public flavors and private flavors associated with that project. -// PrivateAccess (admin only): Returns private flavors, across all projects. -// AllAccess (admin only): Returns public and private flavors across all projects. -// -// The AccessType arguement is optional, and if it is not supplied, OpenStack returns the PublicAccess -// flavors. +/* + AccessType maps to OpenStack's Flavor.is_public field. Although the is_public + field is boolean, the request options are ternary, which is why AccessType is + a string. See the constants below for the allowed values. + + The AccessType argument is optional, and if it is not supplied, OpenStack + returns the PublicAccess flavors. +*/ type AccessType string const ( - PublicAccess AccessType = "true" + // PublicAccess returns public flavors and private flavors associated with + // that project. + PublicAccess AccessType = "true" + + // PrivateAccess (admin only) returns private flavors, across all projects. PrivateAccess AccessType = "false" - AllAccess AccessType = "None" + + // AllAccess (admin only) returns public and private flavors across all + // projects. + AllAccess AccessType = "None" ) -// ListOpts helps control the results returned by the List() function. -// For example, a flavor with a minDisk field of 10 will not be returned if you specify MinDisk set to 20. -// Typically, software will use the last ID of the previous call to List to set the Marker for the current call. -type ListOpts struct { +/* + ListOpts filters the results returned by the List() function. + For example, a flavor with a minDisk field of 10 will not be returned if you + specify MinDisk set to 20. - // ChangesSince, if provided, instructs List to return only those things which have changed since the timestamp provided. + Typically, software will use the last ID of the previous call to List to set + the Marker for the current call. +*/ +type ListOpts struct { + // ChangesSince, if provided, instructs List to return only those things which + // have changed since the timestamp provided. ChangesSince string `q:"changes-since"` - // MinDisk and MinRAM, if provided, elides flavors which do not meet your criteria. + // MinDisk and MinRAM, if provided, elide flavors which do not meet your + // criteria. MinDisk int `q:"minDisk"` MinRAM int `q:"minRam"` @@ -45,11 +56,12 @@ type ListOpts struct { // Marker instructs List where to start listing from. Marker string `q:"marker"` - // Limit instructs List to refrain from sending excessively large lists of flavors. + // Limit instructs List to refrain from sending excessively large lists of + // flavors. Limit int `q:"limit"` - // AccessType, if provided, instructs List which set of flavors to return. If IsPublic not provided, - // flavors for the current project are returned. + // AccessType, if provided, instructs List which set of flavors to return. + // If IsPublic not provided, flavors for the current project are returned. AccessType AccessType `q:"is_public"` } @@ -60,8 +72,8 @@ func (opts ListOpts) ToFlavorListQuery() (string, error) { } // ListDetail instructs OpenStack to provide a list of flavors. -// You may provide criteria by which List curtails its results for easier processing. -// See ListOpts for more details. +// You may provide criteria by which List curtails its results for easier +// processing.
func ListDetail(client *gophercloud.ServiceClient, opts ListOptsBuilder) pagination.Pager { url := listURL(client) if opts != nil { @@ -80,31 +92,42 @@ type CreateOptsBuilder interface { ToFlavorCreateMap() (map[string]interface{}, error) } -// CreateOpts is passed to Create to create a flavor -// Source: -// https://github.com/openstack/nova/blob/stable/newton/nova/api/openstack/compute/schemas/flavor_manage.py#L20 +// CreateOpts specifies parameters used for creating a flavor. type CreateOpts struct { + // Name is the name of the flavor. Name string `json:"name" required:"true"` - // memory size, in MBs - RAM int `json:"ram" required:"true"` + + // RAM is the memory of the flavor, measured in MB. + RAM int `json:"ram" required:"true"` + + // VCPUs is the number of vcpus for the flavor. VCPUs int `json:"vcpus" required:"true"` - // disk size, in GBs - Disk *int `json:"disk" required:"true"` - ID string `json:"id,omitempty"` - // non-zero, positive - Swap *int `json:"swap,omitempty"` + + // Disk the amount of root disk space, measured in GB. + Disk *int `json:"disk" required:"true"` + + // ID is a unique ID for the flavor. + ID string `json:"id,omitempty"` + + // Swap is the amount of swap space for the flavor, measured in MB. + Swap *int `json:"swap,omitempty"` + + // RxTxFactor alters the network bandwidth of a flavor. RxTxFactor float64 `json:"rxtx_factor,omitempty"` - IsPublic *bool `json:"os-flavor-access:is_public,omitempty"` - // ephemeral disk size, in GBs, non-zero, positive + + // IsPublic flags a flavor as being available to all projects or not. + IsPublic *bool `json:"os-flavor-access:is_public,omitempty"` + + // Ephemeral is the amount of ephemeral disk space, measured in GB. Ephemeral *int `json:"OS-FLV-EXT-DATA:ephemeral,omitempty"` } -// ToFlavorCreateMap satisfies the CreateOptsBuilder interface -func (opts *CreateOpts) ToFlavorCreateMap() (map[string]interface{}, error) { +// ToFlavorCreateMap constructs a request body from CreateOpts. +func (opts CreateOpts) ToFlavorCreateMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "flavor") } -// Create a flavor +// Create requests the creation of a new flavor. func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r CreateResult) { b, err := opts.ToFlavorCreateMap() if err != nil { @@ -117,14 +140,177 @@ func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r Create return } -// Get instructs OpenStack to provide details on a single flavor, identified by its ID. -// Use ExtractFlavor to convert its result into a Flavor. +// Get retrieves details of a single flavor. Use ExtractFlavor to convert its +// result into a Flavor. func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { _, r.Err = client.Get(getURL(client, id), &r.Body, nil) return } -// IDFromName is a convienience function that returns a flavor's ID given its name. +// Delete deletes the specified flavor ID. +func Delete(client *gophercloud.ServiceClient, id string) (r DeleteResult) { + _, r.Err = client.Delete(deleteURL(client, id), nil) + return +} + +// ListAccesses retrieves the tenants which have access to a flavor. +func ListAccesses(client *gophercloud.ServiceClient, id string) pagination.Pager { + url := accessURL(client, id) + + return pagination.NewPager(client, url, func(r pagination.PageResult) pagination.Page { + return AccessPage{pagination.SinglePageBase(r)} + }) +} + +// AddAccessOptsBuilder allows extensions to add additional parameters to the +// AddAccess requests. 
+type AddAccessOptsBuilder interface { + ToFlavorAddAccessMap() (map[string]interface{}, error) +} + +// AddAccessOpts represents options for adding access to a flavor. +type AddAccessOpts struct { + // Tenant is the project/tenant ID to grant access. + Tenant string `json:"tenant"` +} + +// ToFlavorAddAccessMap constructs a request body from AddAccessOpts. +func (opts AddAccessOpts) ToFlavorAddAccessMap() (map[string]interface{}, error) { + return gophercloud.BuildRequestBody(opts, "addTenantAccess") +} + +// AddAccess grants a tenant/project access to a flavor. +func AddAccess(client *gophercloud.ServiceClient, id string, opts AddAccessOptsBuilder) (r AddAccessResult) { + b, err := opts.ToFlavorAddAccessMap() + if err != nil { + r.Err = err + return + } + _, r.Err = client.Post(accessActionURL(client, id), b, &r.Body, &gophercloud.RequestOpts{ + OkCodes: []int{200}, + }) + return +} + +// RemoveAccessOptsBuilder allows extensions to add additional parameters to the +// RemoveAccess requests. +type RemoveAccessOptsBuilder interface { + ToFlavorRemoveAccessMap() (map[string]interface{}, error) +} + +// RemoveAccessOpts represents options for removing access to a flavor. +type RemoveAccessOpts struct { + // Tenant is the project/tenant ID to grant access. + Tenant string `json:"tenant"` +} + +// ToFlavorRemoveAccessMap constructs a request body from RemoveAccessOpts. +func (opts RemoveAccessOpts) ToFlavorRemoveAccessMap() (map[string]interface{}, error) { + return gophercloud.BuildRequestBody(opts, "removeTenantAccess") +} + +// RemoveAccess removes/revokes a tenant/project access to a flavor. +func RemoveAccess(client *gophercloud.ServiceClient, id string, opts RemoveAccessOptsBuilder) (r RemoveAccessResult) { + b, err := opts.ToFlavorRemoveAccessMap() + if err != nil { + r.Err = err + return + } + _, r.Err = client.Post(accessActionURL(client, id), b, &r.Body, &gophercloud.RequestOpts{ + OkCodes: []int{200}, + }) + return +} + +// ExtraSpecs requests all the extra-specs for the given flavor ID. +func ListExtraSpecs(client *gophercloud.ServiceClient, flavorID string) (r ListExtraSpecsResult) { + _, r.Err = client.Get(extraSpecsListURL(client, flavorID), &r.Body, nil) + return +} + +func GetExtraSpec(client *gophercloud.ServiceClient, flavorID string, key string) (r GetExtraSpecResult) { + _, r.Err = client.Get(extraSpecsGetURL(client, flavorID, key), &r.Body, nil) + return +} + +// CreateExtraSpecsOptsBuilder allows extensions to add additional parameters to the +// CreateExtraSpecs requests. +type CreateExtraSpecsOptsBuilder interface { + ToFlavorExtraSpecsCreateMap() (map[string]interface{}, error) +} + +// ExtraSpecsOpts is a map that contains key-value pairs. +type ExtraSpecsOpts map[string]string + +// ToFlavorExtraSpecsCreateMap assembles a body for a Create request based on +// the contents of ExtraSpecsOpts. +func (opts ExtraSpecsOpts) ToFlavorExtraSpecsCreateMap() (map[string]interface{}, error) { + return map[string]interface{}{"extra_specs": opts}, nil +} + +// CreateExtraSpecs will create or update the extra-specs key-value pairs for +// the specified Flavor. 
+func CreateExtraSpecs(client *gophercloud.ServiceClient, flavorID string, opts CreateExtraSpecsOptsBuilder) (r CreateExtraSpecsResult) { + b, err := opts.ToFlavorExtraSpecsCreateMap() + if err != nil { + r.Err = err + return + } + _, r.Err = client.Post(extraSpecsCreateURL(client, flavorID), b, &r.Body, &gophercloud.RequestOpts{ + OkCodes: []int{200}, + }) + return +} + +// UpdateExtraSpecOptsBuilder allows extensions to add additional parameters to +// the Update request. +type UpdateExtraSpecOptsBuilder interface { + ToFlavorExtraSpecUpdateMap() (map[string]string, string, error) +} + +// ToFlavorExtraSpecUpdateMap assembles a body for an Update request based on +// the contents of an ExtraSpecsOpts. +func (opts ExtraSpecsOpts) ToFlavorExtraSpecUpdateMap() (map[string]string, string, error) { + if len(opts) != 1 { + err := gophercloud.ErrInvalidInput{} + err.Argument = "flavors.ExtraSpecOpts" + err.Info = "Must have 1 and only one key-value pair" + return nil, "", err + } + + var key string + for k := range opts { + key = k + } + + return opts, key, nil +} + +// UpdateExtraSpec will update the value of the specified flavor's extra spec +// for the key in opts. +func UpdateExtraSpec(client *gophercloud.ServiceClient, flavorID string, opts UpdateExtraSpecOptsBuilder) (r UpdateExtraSpecResult) { + b, key, err := opts.ToFlavorExtraSpecUpdateMap() + if err != nil { + r.Err = err + return + } + _, r.Err = client.Put(extraSpecUpdateURL(client, flavorID, key), b, &r.Body, &gophercloud.RequestOpts{ + OkCodes: []int{200}, + }) + return +} + +// DeleteExtraSpec will delete the key-value pair with the given key for the given +// flavor ID. +func DeleteExtraSpec(client *gophercloud.ServiceClient, flavorID, key string) (r DeleteExtraSpecResult) { + _, r.Err = client.Delete(extraSpecDeleteURL(client, flavorID, key), &gophercloud.RequestOpts{ + OkCodes: []int{200}, + }) + return +} + +// IDFromName is a convenience function that returns a flavor's ID given its +// name. func IDFromName(client *gophercloud.ServiceClient, name string) (string, error) { count := 0 id := "" diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go index 121abbb8d..525cddaea 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/results.go @@ -12,16 +12,26 @@ type commonResult struct { gophercloud.Result } +// CreateResult is the response of a Create operation. Call its Extract method to +// interpret it as a Flavor. type CreateResult struct { commonResult } -// GetResult temporarily holds the response from a Get call. +// GetResult is the response of a Get operation. Call its Extract method to +// interpret it as a Flavor. type GetResult struct { commonResult } -// Extract provides access to the individual Flavor returned by the Get and Create functions. +// DeleteResult is the result from a Delete operation. Call its ExtractErr +// method to determine if the call succeeded or failed. +type DeleteResult struct { + gophercloud.ErrResult +} + +// Extract provides access to the individual Flavor returned by the Get and +// Create functions.
func (r commonResult) Extract() (*Flavor, error) { var s struct { Flavor *Flavor `json:"flavor"` @@ -30,24 +40,35 @@ func (r commonResult) Extract() (*Flavor, error) { return s.Flavor, err } -// Flavor records represent (virtual) hardware configurations for server resources in a region. +// Flavor represents (virtual) hardware configurations for server resources +// in a region. type Flavor struct { - // The Id field contains the flavor's unique identifier. - // For example, this identifier will be useful when specifying which hardware configuration to use for a new server instance. + // ID is the flavor's unique ID. ID string `json:"id"` - // The Disk and RA< fields provide a measure of storage space offered by the flavor, in GB and MB, respectively. + + // Disk is the amount of root disk, measured in GB. Disk int `json:"disk"` - RAM int `json:"ram"` - // The Name field provides a human-readable moniker for the flavor. - Name string `json:"name"` + + // RAM is the amount of memory, measured in MB. + RAM int `json:"ram"` + + // Name is the name of the flavor. + Name string `json:"name"` + + // RxTxFactor describes bandwidth alterations of the flavor. RxTxFactor float64 `json:"rxtx_factor"` - // Swap indicates how much space is reserved for swap. - // If not provided, this field will be set to 0. + + // Swap is the amount of swap space, measured in MB. Swap int `json:"swap"` + // VCPUs indicates how many (virtual) CPUs are available for this flavor. VCPUs int `json:"vcpus"` + // IsPublic indicates whether the flavor is public. - IsPublic bool `json:"is_public"` + IsPublic bool `json:"os-flavor-access:is_public"` + + // Ephemeral is the amount of ephemeral disk space, measured in GB. + Ephemeral int `json:"OS-FLV-EXT-DATA:ephemeral"` } func (r *Flavor) UnmarshalJSON(b []byte) error { @@ -82,18 +103,19 @@ func (r *Flavor) UnmarshalJSON(b []byte) error { return nil } -// FlavorPage contains a single page of the response from a List call. +// FlavorPage contains a single page of all flavors from a ListDetail call. type FlavorPage struct { pagination.LinkedPageBase } -// IsEmpty determines if a page contains any results. +// IsEmpty determines if a FlavorPage contains any results. func (page FlavorPage) IsEmpty() (bool, error) { flavors, err := ExtractFlavors(page) return len(flavors) == 0, err } -// NextPageURL uses the response's embedded link reference to navigate to the next page of results. +// NextPageURL uses the response's embedded link reference to navigate to the +// next page of results. func (page FlavorPage) NextPageURL() (string, error) { var s struct { Links []gophercloud.Link `json:"flavors_links"` @@ -105,7 +127,8 @@ func (page FlavorPage) NextPageURL() (string, error) { return gophercloud.ExtractNextURL(s.Links) } -// ExtractFlavors provides access to the list of flavors in a page acquired from the List operation. +// ExtractFlavors provides access to the list of flavors in a page acquired +// from the ListDetail operation. func ExtractFlavors(r pagination.Page) ([]Flavor, error) { var s struct { Flavors []Flavor `json:"flavors"` @@ -113,3 +136,117 @@ func ExtractFlavors(r pagination.Page) ([]Flavor, error) { err := (r.(FlavorPage)).ExtractInto(&s) return s.Flavors, err } + +// AccessPage contains a single page of all FlavorAccess entries for a flavor. +type AccessPage struct { + pagination.SinglePageBase +} + +// IsEmpty indicates whether an AccessPage is empty.
+func (page AccessPage) IsEmpty() (bool, error) { + v, err := ExtractAccesses(page) + return len(v) == 0, err +} + +// ExtractAccesses interprets a page of results as a slice of FlavorAccess. +func ExtractAccesses(r pagination.Page) ([]FlavorAccess, error) { + var s struct { + FlavorAccesses []FlavorAccess `json:"flavor_access"` + } + err := (r.(AccessPage)).ExtractInto(&s) + return s.FlavorAccesses, err +} + +type accessResult struct { + gophercloud.Result +} + +// AddAccessResult is the response of an AddAccess operation. Call its +// Extract method to interpret it as a slice of FlavorAccess. +type AddAccessResult struct { + accessResult +} + +// RemoveAccessResult is the response of a RemoveAccess operation. Call its +// Extract method to interpret it as a slice of FlavorAccess. +type RemoveAccessResult struct { + accessResult +} + +// Extract provides access to the result of an access create or delete. +// The result will be all accesses that the flavor has. +func (r accessResult) Extract() ([]FlavorAccess, error) { + var s struct { + FlavorAccesses []FlavorAccess `json:"flavor_access"` + } + err := r.ExtractInto(&s) + return s.FlavorAccesses, err +} + +// FlavorAccess represents an ACL of tenant access to a specific Flavor. +type FlavorAccess struct { + // FlavorID is the unique ID of the flavor. + FlavorID string `json:"flavor_id"` + + // TenantID is the unique ID of the tenant. + TenantID string `json:"tenant_id"` +} + +// Extract interprets any extraSpecsResult as ExtraSpecs, if possible. +func (r extraSpecsResult) Extract() (map[string]string, error) { + var s struct { + ExtraSpecs map[string]string `json:"extra_specs"` + } + err := r.ExtractInto(&s) + return s.ExtraSpecs, err +} + +// extraSpecsResult contains the result of a call for (potentially) multiple +// key-value pairs. Call its Extract method to interpret it as a +// map[string]string. +type extraSpecsResult struct { + gophercloud.Result +} + +// ListExtraSpecsResult contains the result of a Get operation. Call its Extract +// method to interpret it as a map[string]string. +type ListExtraSpecsResult struct { + extraSpecsResult +} + +// CreateExtraSpecsResult contains the result of a Create operation. Call its +// Extract method to interpret it as a map[string]string. +type CreateExtraSpecsResult struct { + extraSpecsResult +} + +// extraSpecResult contains the result of a call for an individual +// key-value pair. +type extraSpecResult struct { + gophercloud.Result +} + +// GetExtraSpecResult contains the result of a Get operation. Call its Extract +// method to interpret it as a map[string]string. +type GetExtraSpecResult struct { + extraSpecResult +} + +// UpdateExtraSpecResult contains the result of an Update operation. Call its +// Extract method to interpret it as a map[string]string. +type UpdateExtraSpecResult struct { + extraSpecResult +} + +// DeleteExtraSpecResult contains the result of a Delete operation. Call its +// ExtractErr method to determine if the call succeeded or failed. +type DeleteExtraSpecResult struct { + gophercloud.ErrResult +} + +// Extract interprets any extraSpecResult as an ExtraSpec, if possible.
+func (r extraSpecResult) Extract() (map[string]string, error) { + var s map[string]string + err := r.ExtractInto(&s) + return s, err +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go index 2fc21796f..8620dd78a 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/flavors/urls.go @@ -15,3 +15,35 @@ func listURL(client *gophercloud.ServiceClient) string { func createURL(client *gophercloud.ServiceClient) string { return client.ServiceURL("flavors") } + +func deleteURL(client *gophercloud.ServiceClient, id string) string { + return client.ServiceURL("flavors", id) +} + +func accessURL(client *gophercloud.ServiceClient, id string) string { + return client.ServiceURL("flavors", id, "os-flavor-access") +} + +func accessActionURL(client *gophercloud.ServiceClient, id string) string { + return client.ServiceURL("flavors", id, "action") +} + +func extraSpecsListURL(client *gophercloud.ServiceClient, id string) string { + return client.ServiceURL("flavors", id, "os-extra_specs") +} + +func extraSpecsGetURL(client *gophercloud.ServiceClient, id, key string) string { + return client.ServiceURL("flavors", id, "os-extra_specs", key) +} + +func extraSpecsCreateURL(client *gophercloud.ServiceClient, id string) string { + return client.ServiceURL("flavors", id, "os-extra_specs") +} + +func extraSpecUpdateURL(client *gophercloud.ServiceClient, id, key string) string { + return client.ServiceURL("flavors", id, "os-extra_specs", key) +} + +func extraSpecDeleteURL(client *gophercloud.ServiceClient, id, key string) string { + return client.ServiceURL("flavors", id, "os-extra_specs", key) +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/doc.go index 0edaa3f02..22410a79a 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/doc.go @@ -1,7 +1,32 @@ -// Package images provides information and interaction with the image API -// resource in the OpenStack Compute service. -// -// An image is a collection of files used to create or rebuild a server. -// Operators provide a number of pre-built OS images by default. You may also -// create custom images from cloud servers you have launched. +/* +Package images provides information and interaction with the images through +the OpenStack Compute service. + +This API is deprecated and will be removed from a future version of the Nova +API service. + +An image is a collection of files used to create or rebuild a server. +Operators provide a number of pre-built OS images by default. You may also +create custom images from cloud servers you have launched. 
+ +Example to List Images + + listOpts := images.ListOpts{ + Limit: 2, + } + + allPages, err := images.ListDetail(computeClient, listOpts).AllPages() + if err != nil { + panic(err) + } + + allImages, err := images.ExtractImages(allPages) + if err != nil { + panic(err) + } + + for _, image := range allImages { + fmt.Printf("%+v\n", image) + } +*/ package images diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/requests.go index df9f1da8f..558b481b9 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/requests.go @@ -6,26 +6,33 @@ import ( ) // ListOptsBuilder allows extensions to add additional parameters to the -// List request. +// ListDetail request. type ListOptsBuilder interface { ToImageListQuery() (string, error) } -// ListOpts contain options for limiting the number of Images returned from a call to ListDetail. +// ListOpts contains options for filtering Images returned from a call to ListDetail. type ListOpts struct { - // When the image last changed status (in date-time format). + // ChangesSince filters Images based on the last changed status (in date-time + // format). ChangesSince string `q:"changes-since"` - // The number of Images to return. + + // Limit limits the number of Images to return. Limit int `q:"limit"` - // UUID of the Image at which to set a marker. + + // Marker is an Image UUID at which to set a marker. Marker string `q:"marker"` - // The name of the Image. + + // Name is the name of the Image. Name string `q:"name"` - // The name of the Server (in URL format). + + // Server is the name of the Server (in URL format). Server string `q:"server"` - // The current status of the Image. + + // Status is the current status of the Image. Status string `q:"status"` - // The value of the type of image (e.g. BASE, SERVER, ALL) + + // Type is the type of image (e.g. BASE, SERVER, ALL). Type string `q:"type"` } @@ -50,8 +57,7 @@ func ListDetail(client *gophercloud.ServiceClient, opts ListOptsBuilder) paginat }) } -// Get acquires additional detail about a specific image by ID. -// Use ExtractImage() to interpret the result as an openstack Image. +// Get returns data about a specific image by its ID. func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { _, r.Err = client.Get(getURL(client, id), &r.Body, nil) return @@ -63,7 +69,8 @@ func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { _, r.Err = client.Delete(deleteURL(client, id), nil) return } -// IDFromName is a convienience function that returns an image's ID given its name. +// IDFromName is a convenience function that returns an image's ID given its +// name. func IDFromName(client *gophercloud.ServiceClient, name string) (string, error) { count := 0 id := "" diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go index f9ebc69e9..70d1018c7 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/images/results.go @@ -5,12 +5,14 @@ import ( "github.com/gophercloud/gophercloud/pagination" ) -// GetResult temporarily stores a Get response. +// GetResult is the response from a Get operation. Call its Extract method to +// interpret it as an Image.
type GetResult struct { gophercloud.Result } -// DeleteResult represents the result of an image.Delete operation. +// DeleteResult is the result from a Delete operation. Call its ExtractErr +// method to determine if the call succeeded or failed. type DeleteResult struct { gophercloud.ErrResult } @@ -24,44 +26,53 @@ func (r GetResult) Extract() (*Image, error) { return s.Image, err } -// Image is used for JSON (un)marshalling. -// It provides a description of an OS image. +// Image represents an Image returned by the Compute API. type Image struct { - // ID contains the image's unique identifier. + // ID is the unique ID of an image. ID string + // Created is the date when the image was created. Created string - // MinDisk and MinRAM specify the minimum resources a server must provide to be able to install the image. + // MinDisk is the minimum amount of disk a flavor must have to be able + // to create a server based on the image, measured in GB. MinDisk int - MinRAM int + + // MinRAM is the minimum amount of RAM a flavor must have to be able + // to create a server based on the image, measured in MB. + MinRAM int // Name provides a human-readable moniker for the OS image. Name string // The Progress and Status fields indicate image-creation status. - // Any usable image will have 100% progress. Progress int - Status string + // Status is the current status of the image. + Status string + + // Updated is the date when the image was updated. Updated string + // Metadata provides free-form key/value pairs that further describe the + // image. Metadata map[string]interface{} } -// ImagePage contains a single page of results from a List operation. -// Use ExtractImages to convert it into a slice of usable structs. +// ImagePage contains a single page of all Images returned from a ListDetail +// operation. Use ExtractImages to convert it into a slice of usable structs. type ImagePage struct { pagination.LinkedPageBase } -// IsEmpty returns true if a page contains no Image results. +// IsEmpty returns true if an ImagePage contains no Image results. func (page ImagePage) IsEmpty() (bool, error) { images, err := ExtractImages(page) return len(images) == 0, err } -// NextPageURL uses the response's embedded link reference to navigate to the next page of results. +// NextPageURL uses the response's embedded link reference to navigate to the +// next page of results. func (page ImagePage) NextPageURL() (string, error) { var s struct { Links []gophercloud.Link `json:"images_links"` @@ -73,7 +84,8 @@ func (page ImagePage) NextPageURL() (string, error) { return gophercloud.ExtractNextURL(s.Links) } -// ExtractImages converts a page of List results into a slice of usable Image structs. +// ExtractImages converts a page of List results into a slice of usable Image +// structs. func ExtractImages(r pagination.Page) ([]Image, error) { var s struct { Images []Image `json:"images"` diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/doc.go index fe4567120..3b0ab7836 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/doc.go @@ -1,6 +1,115 @@ -// Package servers provides information and interaction with the server API -// resource in the OpenStack Compute service. -// -// A server is a virtual machine instance in the compute system.
In order for -// one to be provisioned, a valid flavor and image are required. +/* +Package servers provides information and interaction with the server API +resource in the OpenStack Compute service. + +A server is a virtual machine instance in the compute system. In order for +one to be provisioned, a valid flavor and image are required. + +Example to List Servers + + listOpts := servers.ListOpts{ + AllTenants: true, + } + + allPages, err := servers.List(computeClient, listOpts).AllPages() + if err != nil { + panic(err) + } + + allServers, err := servers.ExtractServers(allPages) + if err != nil { + panic(err) + } + + for _, server := range allServers { + fmt.Printf("%+v\n", server) + } + +Example to Create a Server + + createOpts := servers.CreateOpts{ + Name: "server_name", + ImageRef: "image-uuid", + FlavorRef: "flavor-uuid", + } + + server, err := servers.Create(computeClient, createOpts).Extract() + if err != nil { + panic(err) + } + +Example to Delete a Server + + serverID := "d9072956-1560-487c-97f2-18bdf65ec749" + err := servers.Delete(computeClient, serverID).ExtractErr() + if err != nil { + panic(err) + } + +Example to Force Delete a Server + + serverID := "d9072956-1560-487c-97f2-18bdf65ec749" + err := servers.ForceDelete(computeClient, serverID).ExtractErr() + if err != nil { + panic(err) + } + +Example to Reboot a Server + + rebootOpts := servers.RebootOpts{ + Type: servers.SoftReboot, + } + + serverID := "d9072956-1560-487c-97f2-18bdf65ec749" + + err := servers.Reboot(computeClient, serverID, rebootOpts).ExtractErr() + if err != nil { + panic(err) + } + +Example to Rebuild a Server + + rebuildOpts := servers.RebuildOpts{ + Name: "new_name", + ImageID: "image-uuid", + } + + serverID := "d9072956-1560-487c-97f2-18bdf65ec749" + + server, err := servers.Rebuild(computeClient, serverID, rebuildOpts).Extract() + if err != nil { + panic(err) + } + +Example to Resize a Server + + resizeOpts := servers.ResizeOpts{ + FlavorRef: "flavor-uuid", + } + + serverID := "d9072956-1560-487c-97f2-18bdf65ec749" + + err := servers.Resize(computeClient, serverID, resizeOpts).ExtractErr() + if err != nil { + panic(err) + } + + err = servers.ConfirmResize(computeClient, serverID).ExtractErr() + if err != nil { + panic(err) + } + +Example to Snapshot a Server + + snapshotOpts := servers.CreateImageOpts{ + Name: "snapshot_name", + } + + serverID := "d9072956-1560-487c-97f2-18bdf65ec749" + + image, err := servers.CreateImage(computeClient, serverID, snapshotOpts).ExtractImageID() + if err != nil { + panic(err) + } +*/ package servers diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go index 961863731..626eb63e9 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/requests.go @@ -21,13 +21,13 @@ type ListOptsBuilder interface { // the server attributes you want to see returned. Marker and Limit are used // for pagination. type ListOpts struct { - // A time/date stamp for when the server last changed status. + // ChangesSince is a time/date stamp for when the server last changed status. ChangesSince string `q:"changes-since"` - // Name of the image in URL format. + // Image is the name of the image in URL format. Image string `q:"image"` - // Name of the flavor in URL format. + // Flavor is the name of the flavor in URL format.
Flavor string `q:"flavor"` // Name of the server as a string; can be queried with regular expressions. @@ -36,20 +36,25 @@ type ListOpts struct { // underlying database server implemented for Compute. Name string `q:"name"` - // Value of the status of the server so that you can filter on "ACTIVE" for example. + // Status is the value of the status of the server so that you can filter on + // "ACTIVE" for example. Status string `q:"status"` - // Name of the host as a string. + // Host is the name of the host as a string. Host string `q:"host"` - // UUID of the server at which you want to set a marker. + // Marker is a UUID of the server at which you want to set a marker. Marker string `q:"marker"` - // Integer value for the limit of values to return. + // Limit is an integer value for the limit of values to return. Limit int `q:"limit"` - // Bool to show all tenants + // AllTenants is a bool to show all tenants. AllTenants bool `q:"all_tenants"` + + // TenantID lists servers for a particular tenant. + // Setting "AllTenants = true" is required. + TenantID string `q:"tenant_id"` } // ToServerListQuery formats a ListOpts into a query string. @@ -73,15 +78,16 @@ func List(client *gophercloud.ServiceClient, opts ListOptsBuilder) pagination.Pa }) } -// CreateOptsBuilder describes struct types that can be accepted by the Create call. -// The CreateOpts struct in this package does. +// CreateOptsBuilder allows extensions to add additional parameters to the +// Create request. type CreateOptsBuilder interface { ToServerCreateMap() (map[string]interface{}, error) } -// Network is used within CreateOpts to control a new server's network attachments. +// Network is used within CreateOpts to control a new server's network +// attachments. type Network struct { - // UUID of a nova-network to attach to the newly provisioned server. + // UUID of a network to attach to the newly provisioned server. // Required unless Port is provided. UUID string @@ -89,19 +95,21 @@ type Network struct { // Required unless UUID is provided. Port string - // FixedIP [optional] specifies a fixed IPv4 address to be used on this network. + // FixedIP specifies a fixed IPv4 address to be used on this network. FixedIP string } // Personality is an array of files that are injected into the server at launch. type Personality []*File -// File is used within CreateOpts and RebuildOpts to inject a file into the server at launch. -// File implements the json.Marshaler interface, so when a Create or Rebuild operation is requested, -// json.Marshal will call File's MarshalJSON method. +// File is used within CreateOpts and RebuildOpts to inject a file into the +// server at launch. +// File implements the json.Marshaler interface, so when a Create or Rebuild +// operation is requested, json.Marshal will call File's MarshalJSON method. type File struct { - // Path of the file + // Path of the file. Path string + // Contents of the file. Maximum content size is 255 bytes. Contents []byte } @@ -123,13 +131,13 @@ type CreateOpts struct { // Name is the name to assign to the newly launched server. Name string `json:"name" required:"true"` - // ImageRef [optional; required if ImageName is not provided] is the ID or full - // URL to the image that contains the server's OS and initial state. + // ImageRef [optional; required if ImageName is not provided] is the ID or + // full URL to the image that contains the server's OS and initial state. // Also optional if using the boot-from-volume extension. 
ImageRef string `json:"imageRef"` - // ImageName [optional; required if ImageRef is not provided] is the name of the - // image that contains the server's OS and initial state. + // ImageName [optional; required if ImageRef is not provided] is the name of + // the image that contains the server's OS and initial state. // Also optional if using the boot-from-volume extension. ImageName string `json:"-"` @@ -141,7 +149,8 @@ type CreateOpts struct { // the flavor that describes the server's specs. FlavorName string `json:"-"` - // SecurityGroups lists the names of the security groups to which this server should belong. + // SecurityGroups lists the names of the security groups to which this server + // should belong. SecurityGroups []string `json:"-"` // UserData contains configuration information or scripts to use upon launch. @@ -152,10 +161,12 @@ type CreateOpts struct { AvailabilityZone string `json:"availability_zone,omitempty"` // Networks dictates how this server will be attached to available networks. - // By default, the server will be attached to all isolated networks for the tenant. + // By default, the server will be attached to all isolated networks for the + // tenant. Networks []Network `json:"-"` - // Metadata contains key-value pairs (up to 255 bytes each) to attach to the server. + // Metadata contains key-value pairs (up to 255 bytes each) to attach to the + // server. Metadata map[string]string `json:"metadata,omitempty"` // Personality includes files to inject into the server at launch. @@ -166,7 +177,7 @@ type CreateOpts struct { ConfigDrive *bool `json:"config_drive,omitempty"` // AdminPass sets the root user password. If not set, a randomly-generated - // password will be created and returned in the rponse. + // password will be created and returned in the response. AdminPass string `json:"adminPass,omitempty"` // AccessIPv4 specifies an IPv4 address for the instance. @@ -180,7 +191,8 @@ type CreateOpts struct { ServiceClient *gophercloud.ServiceClient `json:"-"` } -// ToServerCreateMap assembles a request body based on the contents of a CreateOpts. +// ToServerCreateMap assembles a request body based on the contents of a +// CreateOpts. func (opts CreateOpts) ToServerCreateMap() (map[string]interface{}, error) { sc := opts.ServiceClient opts.ServiceClient = nil @@ -274,13 +286,14 @@ func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r Create return } -// Delete requests that a server previously provisioned be removed from your account. +// Delete requests that a server previously provisioned be removed from your +// account. func Delete(client *gophercloud.ServiceClient, id string) (r DeleteResult) { _, r.Err = client.Delete(deleteURL(client, id), nil) return } -// ForceDelete forces the deletion of a server +// ForceDelete forces the deletion of a server. func ForceDelete(client *gophercloud.ServiceClient, id string) (r ActionResult) { _, r.Err = client.Post(actionURL(client, id), map[string]interface{}{"forceDelete": ""}, nil, nil) return @@ -294,12 +307,14 @@ func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { return } -// UpdateOptsBuilder allows extensions to add additional attributes to the Update request. +// UpdateOptsBuilder allows extensions to add additional attributes to the +// Update request. type UpdateOptsBuilder interface { ToServerUpdateMap() (map[string]interface{}, error) } -// UpdateOpts specifies the base attributes that may be updated on an existing server. 
+// UpdateOpts specifies the base attributes that may be updated on an existing +// server. type UpdateOpts struct { // Name changes the displayed name of the server. // The server host name will *not* change. @@ -331,7 +346,8 @@ func Update(client *gophercloud.ServiceClient, id string, opts UpdateOptsBuilder return } -// ChangeAdminPassword alters the administrator or root password for a specified server. +// ChangeAdminPassword alters the administrator or root password for a specified +// server. func ChangeAdminPassword(client *gophercloud.ServiceClient, id, newPassword string) (r ActionResult) { b := map[string]interface{}{ "changePassword": map[string]string{ @@ -354,33 +370,38 @@ const ( PowerCycle = HardReboot ) -// RebootOptsBuilder is an interface that options must satisfy in order to be -// used when rebooting a server instance +// RebootOptsBuilder allows extensions to add additional parameters to the +// reboot request. type RebootOptsBuilder interface { ToServerRebootMap() (map[string]interface{}, error) } -// RebootOpts satisfies the RebootOptsBuilder interface +// RebootOpts provides options to the reboot request. type RebootOpts struct { + // Type is the type of reboot to perform on the server. Type RebootMethod `json:"type" required:"true"` } -// ToServerRebootMap allows RebootOpts to satisfiy the RebootOptsBuilder -// interface +// ToServerRebootMap builds a body for the reboot request. func (opts *RebootOpts) ToServerRebootMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "reboot") } -// Reboot requests that a given server reboot. -// Two methods exist for rebooting a server: -// -// HardReboot (aka PowerCycle) starts the server instance by physically cutting power to the machine, or if a VM, -// terminating it at the hypervisor level. -// It's done. Caput. Full stop. -// Then, after a brief while, power is rtored or the VM instance rtarted. -// -// SoftReboot (aka OSReboot) simply tells the OS to rtart under its own procedur. -// E.g., in Linux, asking it to enter runlevel 6, or executing "sudo shutdown -r now", or by asking Windows to rtart the machine. +/* + Reboot requests that a given server reboot. + + Two methods exist for rebooting a server: + + HardReboot (aka PowerCycle) starts the server instance by physically cutting + power to the machine, or if a VM, terminating it at the hypervisor level. + It's done. Caput. Full stop. + Then, after a brief while, power is restored or the VM instance restarted. + + SoftReboot (aka OSReboot) simply tells the OS to restart under its own + procedure. + E.g., in Linux, asking it to enter runlevel 6, or executing + "sudo shutdown -r now", or by asking Windows to restart the machine. +*/ func Reboot(client *gophercloud.ServiceClient, id string, opts RebootOptsBuilder) (r ActionResult) { b, err := opts.ToServerRebootMap() if err != nil { @@ -391,31 +412,43 @@ func Reboot(client *gophercloud.ServiceClient, id string, opts RebootOptsBuilder return } -// RebuildOptsBuilder is an interface that allows extensions to override the -// default behaviour of rebuild options +// RebuildOptsBuilder allows extensions to provide additional parameters to the +// rebuild request. type RebuildOptsBuilder interface { ToServerRebuildMap() (map[string]interface{}, error) } // RebuildOpts represents the configuration options used in a server rebuild -// operation +// operation.
type RebuildOpts struct { - // The server's admin password + // AdminPass is the server's admin password AdminPass string `json:"adminPass,omitempty"` - // The ID of the image you want your server to be provisioned on - ImageID string `json:"imageRef"` + + // ImageID is the ID of the image you want your server to be provisioned on. + ImageID string `json:"imageRef"` + + // ImageName is readable name of an image. ImageName string `json:"-"` + // Name to set the server to Name string `json:"name,omitempty"` + // AccessIPv4 [optional] provides a new IPv4 address for the instance. AccessIPv4 string `json:"accessIPv4,omitempty"` + // AccessIPv6 [optional] provides a new IPv6 address for the instance. AccessIPv6 string `json:"accessIPv6,omitempty"` - // Metadata [optional] contains key-value pairs (up to 255 bytes each) to attach to the server. + + // Metadata [optional] contains key-value pairs (up to 255 bytes each) + // to attach to the server. Metadata map[string]string `json:"metadata,omitempty"` + // Personality [optional] includes files to inject into the server at launch. // Rebuild will base64-encode file contents for you. - Personality Personality `json:"personality,omitempty"` + Personality Personality `json:"personality,omitempty"` + + // ServiceClient will allow calls to be made to retrieve an image or + // flavor ID by name. ServiceClient *gophercloud.ServiceClient `json:"-"` } @@ -458,31 +491,34 @@ func Rebuild(client *gophercloud.ServiceClient, id string, opts RebuildOptsBuild return } -// ResizeOptsBuilder is an interface that allows extensions to override the default structure of -// a Resize request. +// ResizeOptsBuilder allows extensions to add additional parameters to the +// resize request. type ResizeOptsBuilder interface { ToServerResizeMap() (map[string]interface{}, error) } -// ResizeOpts represents the configuration options used to control a Resize operation. +// ResizeOpts represents the configuration options used to control a Resize +// operation. type ResizeOpts struct { // FlavorRef is the ID of the flavor you wish your server to become. FlavorRef string `json:"flavorRef" required:"true"` } -// ToServerResizeMap formats a ResizeOpts as a map that can be used as a JSON request body for the -// Resize request. +// ToServerResizeMap formats a ResizeOpts as a map that can be used as a JSON +// request body for the Resize request. func (opts ResizeOpts) ToServerResizeMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "resize") } // Resize instructs the provider to change the flavor of the server. +// // Note that this implies rebuilding it. +// // Unfortunately, one cannot pass rebuild parameters to the resize function. -// When the resize completes, the server will be in RESIZE_VERIFY state. -// While in this state, you can explore the use of the new server's configuration. -// If you like it, call ConfirmResize() to commit the resize permanently. -// Otherwise, call RevertResize() to restore the old configuration. +// When the resize completes, the server will be in VERIFY_RESIZE state. +// While in this state, you can explore the use of the new server's +// configuration. If you like it, call ConfirmResize() to commit the resize +// permanently. Otherwise, call RevertResize() to restore the old configuration. 
func Resize(client *gophercloud.ServiceClient, id string, opts ResizeOptsBuilder) (r ActionResult) { b, err := opts.ToServerResizeMap() if err != nil { @@ -542,8 +578,8 @@ func Rescue(client *gophercloud.ServiceClient, id string, opts RescueOptsBuilder return } -// ResetMetadataOptsBuilder allows extensions to add additional parameters to the -// Reset request. +// ResetMetadataOptsBuilder allows extensions to add additional parameters to +// the Reset request. type ResetMetadataOptsBuilder interface { ToMetadataResetMap() (map[string]interface{}, error) } @@ -551,20 +587,23 @@ type ResetMetadataOptsBuilder interface { // MetadataOpts is a map that contains key-value pairs. type MetadataOpts map[string]string -// ToMetadataResetMap assembles a body for a Reset request based on the contents of a MetadataOpts. +// ToMetadataResetMap assembles a body for a Reset request based on the contents +// of a MetadataOpts. func (opts MetadataOpts) ToMetadataResetMap() (map[string]interface{}, error) { return map[string]interface{}{"metadata": opts}, nil } -// ToMetadataUpdateMap assembles a body for an Update request based on the contents of a MetadataOpts. +// ToMetadataUpdateMap assembles a body for an Update request based on the +// contents of a MetadataOpts. func (opts MetadataOpts) ToMetadataUpdateMap() (map[string]interface{}, error) { return map[string]interface{}{"metadata": opts}, nil } -// ResetMetadata will create multiple new key-value pairs for the given server ID. -// Note: Using this operation will erase any already-existing metadata and create -// the new metadata provided. To keep any already-existing metadata, use the -// UpdateMetadatas or UpdateMetadata function. +// ResetMetadata will create multiple new key-value pairs for the given server +// ID. +// Note: Using this operation will erase any already-existing metadata and +// create the new metadata provided. To keep any already-existing metadata, +// use the UpdateMetadatas or UpdateMetadata function. func ResetMetadata(client *gophercloud.ServiceClient, id string, opts ResetMetadataOptsBuilder) (r ResetMetadataResult) { b, err := opts.ToMetadataResetMap() if err != nil { @@ -583,15 +622,15 @@ func Metadata(client *gophercloud.ServiceClient, id string) (r GetMetadataResult return } -// UpdateMetadataOptsBuilder allows extensions to add additional parameters to the -// Create request. +// UpdateMetadataOptsBuilder allows extensions to add additional parameters to +// the Create request. type UpdateMetadataOptsBuilder interface { ToMetadataUpdateMap() (map[string]interface{}, error) } -// UpdateMetadata updates (or creates) all the metadata specified by opts for the given server ID. -// This operation does not affect already-existing metadata that is not specified -// by opts. +// UpdateMetadata updates (or creates) all the metadata specified by opts for +// the given server ID. This operation does not affect already-existing metadata +// that is not specified by opts. func UpdateMetadata(client *gophercloud.ServiceClient, id string, opts UpdateMetadataOptsBuilder) (r UpdateMetadataResult) { b, err := opts.ToMetadataUpdateMap() if err != nil { @@ -613,7 +652,8 @@ type MetadatumOptsBuilder interface { // MetadatumOpts is a map of length one that contains a key-value pair. type MetadatumOpts map[string]string -// ToMetadatumCreateMap assembles a body for a Create request based on the contents of a MetadataumOpts. +// ToMetadatumCreateMap assembles a body for a Create request based on the +// contents of a MetadataumOpts. 
func (opts MetadatumOpts) ToMetadatumCreateMap() (map[string]interface{}, string, error) { if len(opts) != 1 { err := gophercloud.ErrInvalidInput{} @@ -629,7 +669,8 @@ func (opts MetadatumOpts) ToMetadatumCreateMap() (map[string]interface{}, string return metadatum, key, nil } -// CreateMetadatum will create or update the key-value pair with the given key for the given server ID. +// CreateMetadatum will create or update the key-value pair with the given key +// for the given server ID. func CreateMetadatum(client *gophercloud.ServiceClient, id string, opts MetadatumOptsBuilder) (r CreateMetadatumResult) { b, key, err := opts.ToMetadatumCreateMap() if err != nil { @@ -642,53 +683,60 @@ func CreateMetadatum(client *gophercloud.ServiceClient, id string, opts Metadatu return } -// Metadatum requests the key-value pair with the given key for the given server ID. +// Metadatum requests the key-value pair with the given key for the given +// server ID. func Metadatum(client *gophercloud.ServiceClient, id, key string) (r GetMetadatumResult) { _, r.Err = client.Get(metadatumURL(client, id, key), &r.Body, nil) return } -// DeleteMetadatum will delete the key-value pair with the given key for the given server ID. +// DeleteMetadatum will delete the key-value pair with the given key for the +// given server ID. func DeleteMetadatum(client *gophercloud.ServiceClient, id, key string) (r DeleteMetadatumResult) { _, r.Err = client.Delete(metadatumURL(client, id, key), nil) return } -// ListAddresses makes a request against the API to list the servers IP addresses. +// ListAddresses makes a request against the API to list the servers IP +// addresses. func ListAddresses(client *gophercloud.ServiceClient, id string) pagination.Pager { return pagination.NewPager(client, listAddressesURL(client, id), func(r pagination.PageResult) pagination.Page { return AddressPage{pagination.SinglePageBase(r)} }) } -// ListAddressesByNetwork makes a request against the API to list the servers IP addresses -// for the given network. +// ListAddressesByNetwork makes a request against the API to list the servers IP +// addresses for the given network. func ListAddressesByNetwork(client *gophercloud.ServiceClient, id, network string) pagination.Pager { return pagination.NewPager(client, listAddressesByNetworkURL(client, id, network), func(r pagination.PageResult) pagination.Page { return NetworkAddressPage{pagination.SinglePageBase(r)} }) } -// CreateImageOptsBuilder is the interface types must satisfy in order to be -// used as CreateImage options +// CreateImageOptsBuilder allows extensions to add additional parameters to the +// CreateImage request. type CreateImageOptsBuilder interface { ToServerCreateImageMap() (map[string]interface{}, error) } -// CreateImageOpts satisfies the CreateImageOptsBuilder +// CreateImageOpts provides options to pass to the CreateImage request. type CreateImageOpts struct { - // Name of the image/snapshot + // Name of the image/snapshot. Name string `json:"name" required:"true"` - // Metadata contains key-value pairs (up to 255 bytes each) to attach to the created image. + + // Metadata contains key-value pairs (up to 255 bytes each) to attach to + // the created image. Metadata map[string]string `json:"metadata,omitempty"` } -// ToServerCreateImageMap formats a CreateImageOpts structure into a request body. +// ToServerCreateImageMap formats a CreateImageOpts structure into a request +// body. 
func (opts CreateImageOpts) ToServerCreateImageMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "createImage") } -// CreateImage makes a request against the nova API to schedule an image to be created of the server +// CreateImage makes a request against the nova API to schedule an image to be +// created of the server func CreateImage(client *gophercloud.ServiceClient, id string, opts CreateImageOptsBuilder) (r CreateImageResult) { b, err := opts.ToServerCreateImageMap() if err != nil { @@ -703,7 +751,8 @@ func CreateImage(client *gophercloud.ServiceClient, id string, opts CreateImageO return } -// IDFromName is a convienience function that returns a server's ID given its name. +// IDFromName is a convienience function that returns a server's ID given its +// name. func IDFromName(client *gophercloud.ServiceClient, name string) (string, error) { count := 0 id := "" @@ -734,7 +783,8 @@ func IDFromName(client *gophercloud.ServiceClient, name string) (string, error) } } -// GetPassword makes a request against the nova API to get the encrypted administrative password. +// GetPassword makes a request against the nova API to get the encrypted +// administrative password. func GetPassword(client *gophercloud.ServiceClient, serverId string) (r GetPasswordResult) { _, r.Err = client.Get(passwordURL(client, serverId), &r.Body, nil) return diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go index 1ae1e91c7..c6c1ff43f 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go @@ -32,54 +32,64 @@ func ExtractServersInto(r pagination.Page, v interface{}) error { return r.(ServerPage).Result.ExtractIntoSlicePtr(v, "servers") } -// CreateResult temporarily contains the response from a Create call. +// CreateResult is the response from a Create operation. Call its Extract +// method to interpret it as a Server. type CreateResult struct { serverResult } -// GetResult temporarily contains the response from a Get call. +// GetResult is the response from a Get operation. Call its Extract +// method to interpret it as a Server. type GetResult struct { serverResult } -// UpdateResult temporarily contains the response from an Update call. +// UpdateResult is the response from an Update operation. Call its Extract +// method to interpret it as a Server. type UpdateResult struct { serverResult } -// DeleteResult temporarily contains the response from a Delete call. +// DeleteResult is the response from a Delete operation. Call its ExtractErr +// method to determine if the call succeeded or failed. type DeleteResult struct { gophercloud.ErrResult } -// RebuildResult temporarily contains the response from a Rebuild call. +// RebuildResult is the response from a Rebuild operation. Call its Extract +// method to interpret it as a Server. type RebuildResult struct { serverResult } -// ActionResult represents the result of server action operations, like reboot +// ActionResult represents the result of server action operations, like reboot. +// Call its ExtractErr method to determine if the action succeeded or failed. type ActionResult struct { gophercloud.ErrResult } -// RescueResult represents the result of a server rescue operation +// RescueResult is the response from a Rescue operation. 
Call its ExtractErr +// method to determine if the call succeeded or failed. type RescueResult struct { ActionResult } -// CreateImageResult represents the result of an image creation operation +// CreateImageResult is the response from a CreateImage operation. Call its +// ExtractImageID method to retrieve the ID of the newly created image. type CreateImageResult struct { gophercloud.Result } // GetPasswordResult represent the result of a get os-server-password operation. +// Call its ExtractPassword method to retrieve the password. type GetPasswordResult struct { gophercloud.Result } // ExtractPassword gets the encrypted password. // If privateKey != nil the password is decrypted with the private key. -// If privateKey == nil the encrypted password is returned and can be decrypted with: +// If privateKey == nil the encrypted password is returned and can be decrypted +// with: // echo '' | base64 -D | openssl rsautl -decrypt -inkey func (r GetPasswordResult) ExtractPassword(privateKey *rsa.PrivateKey) (string, error) { var s struct { @@ -107,7 +117,7 @@ func decryptPassword(encryptedPassword string, privateKey *rsa.PrivateKey) (stri return string(password), nil } -// ExtractImageID gets the ID of the newly created server image from the header +// ExtractImageID gets the ID of the newly created server image from the header. func (r CreateImageResult) ExtractImageID() (string, error) { if r.Err != nil { return "", r.Err @@ -133,45 +143,84 @@ func (r RescueResult) Extract() (string, error) { return s.AdminPass, err } -// Server exposes only the standard OpenStack fields corresponding to a given server on the user's account. +// Server represents a server/instance in the OpenStack cloud. type Server struct { - // ID uniquely identifies this server amongst all other servers, including those not accessible to the current tenant. + // ID uniquely identifies this server amongst all other servers, + // including those not accessible to the current tenant. ID string `json:"id"` + // TenantID identifies the tenant owning this server resource. TenantID string `json:"tenant_id"` + // UserID uniquely identifies the user account owning the tenant. UserID string `json:"user_id"` + // Name contains the human-readable name for the server. Name string `json:"name"` - // Updated and Created contain ISO-8601 timestamps of when the state of the server last changed, and when it was created. + + // Updated and Created contain ISO-8601 timestamps of when the state of the + // server last changed, and when it was created. Updated time.Time `json:"updated"` Created time.Time `json:"created"` - HostID string `json:"hostid"` - // Status contains the current operational status of the server, such as IN_PROGRESS or ACTIVE. + + // HostID is the host where the server is located in the cloud. + HostID string `json:"hostid"` + + // Status contains the current operational status of the server, + // such as IN_PROGRESS or ACTIVE. Status string `json:"status"` + // Progress ranges from 0..100. // A request made against the server completes only once Progress reaches 100. Progress int `json:"progress"` - // AccessIPv4 and AccessIPv6 contain the IP addresses of the server, suitable for remote access for administration. + + // AccessIPv4 and AccessIPv6 contain the IP addresses of the server, + // suitable for remote access for administration. AccessIPv4 string `json:"accessIPv4"` AccessIPv6 string `json:"accessIPv6"` - // Image refers to a JSON object, which itself indicates the OS image used to deploy the server. 
+ + // Image refers to a JSON object, which itself indicates the OS image used to + // deploy the server. Image map[string]interface{} `json:"-"` - // Flavor refers to a JSON object, which itself indicates the hardware configuration of the deployed server. + + // Flavor refers to a JSON object, which itself indicates the hardware + // configuration of the deployed server. Flavor map[string]interface{} `json:"flavor"` - // Addresses includes a list of all IP addresses assigned to the server, keyed by pool. + + // Addresses includes a list of all IP addresses assigned to the server, + // keyed by pool. Addresses map[string]interface{} `json:"addresses"` - // Metadata includes a list of all user-specified key-value pairs attached to the server. + + // Metadata includes a list of all user-specified key-value pairs attached + // to the server. Metadata map[string]string `json:"metadata"` - // Links includes HTTP references to the itself, useful for passing along to other APIs that might want a server reference. + + // Links includes HTTP references to the itself, useful for passing along to + // other APIs that might want a server reference. Links []interface{} `json:"links"` + // KeyName indicates which public key was injected into the server on launch. KeyName string `json:"key_name"` - // AdminPass will generally be empty (""). However, it will contain the administrative password chosen when provisioning a new server without a set AdminPass setting in the first place. + + // AdminPass will generally be empty (""). However, it will contain the + // administrative password chosen when provisioning a new server without a + // set AdminPass setting in the first place. // Note that this is the ONLY time this field will be valid. AdminPass string `json:"adminPass"` - // SecurityGroups includes the security groups that this instance has applied to it + + // SecurityGroups includes the security groups that this instance has applied + // to it. SecurityGroups []map[string]interface{} `json:"security_groups"` + + // Fault contains failure information about a server. + Fault Fault `json:"fault"` +} + +type Fault struct { + Code int `json:"code"` + Created time.Time `json:"created"` + Details string `json:"details"` + Message string `json:"message"` } func (r *Server) UnmarshalJSON(b []byte) error { @@ -200,9 +249,10 @@ func (r *Server) UnmarshalJSON(b []byte) error { return err } -// ServerPage abstracts the raw results of making a List() request against the API. -// As OpenStack extensions may freely alter the response bodies of structures returned to the client, you may only safely access the -// data provided through the ExtractServers call. +// ServerPage abstracts the raw results of making a List() request against +// the API. As OpenStack extensions may freely alter the response bodies of +// structures returned to the client, you may only safely access the data +// provided through the ExtractServers call. type ServerPage struct { pagination.LinkedPageBase } @@ -213,7 +263,8 @@ func (r ServerPage) IsEmpty() (bool, error) { return len(s) == 0, err } -// NextPageURL uses the response's embedded link reference to navigate to the next page of results. +// NextPageURL uses the response's embedded link reference to navigate to the +// next page of results. 
func (r ServerPage) NextPageURL() (string, error) { var s struct { Links []gophercloud.Link `json:"servers_links"` @@ -225,49 +276,59 @@ func (r ServerPage) NextPageURL() (string, error) { return gophercloud.ExtractNextURL(s.Links) } -// ExtractServers interprets the results of a single page from a List() call, producing a slice of Server entities. +// ExtractServers interprets the results of a single page from a List() call, +// producing a slice of Server entities. func ExtractServers(r pagination.Page) ([]Server, error) { var s []Server err := ExtractServersInto(r, &s) return s, err } -// MetadataResult contains the result of a call for (potentially) multiple key-value pairs. +// MetadataResult contains the result of a call for (potentially) multiple +// key-value pairs. Call its Extract method to interpret it as a +// map[string]interface. type MetadataResult struct { gophercloud.Result } -// GetMetadataResult temporarily contains the response from a metadata Get call. +// GetMetadataResult contains the result of a Get operation. Call its Extract +// method to interpret it as a map[string]interface. type GetMetadataResult struct { MetadataResult } -// ResetMetadataResult temporarily contains the response from a metadata Reset call. +// ResetMetadataResult contains the result of a Reset operation. Call its +// Extract method to interpret it as a map[string]interface. type ResetMetadataResult struct { MetadataResult } -// UpdateMetadataResult temporarily contains the response from a metadata Update call. +// UpdateMetadataResult contains the result of an Update operation. Call its +// Extract method to interpret it as a map[string]interface. type UpdateMetadataResult struct { MetadataResult } -// MetadatumResult contains the result of a call for individual a single key-value pair. +// MetadatumResult contains the result of a call for individual a single +// key-value pair. type MetadatumResult struct { gophercloud.Result } -// GetMetadatumResult temporarily contains the response from a metadatum Get call. +// GetMetadatumResult contains the result of a Get operation. Call its Extract +// method to interpret it as a map[string]interface. type GetMetadatumResult struct { MetadatumResult } -// CreateMetadatumResult temporarily contains the response from a metadatum Create call. +// CreateMetadatumResult contains the result of a Create operation. Call its +// Extract method to interpret it as a map[string]interface. type CreateMetadatumResult struct { MetadatumResult } -// DeleteMetadatumResult temporarily contains the response from a metadatum Delete call. +// DeleteMetadatumResult contains the result of a Delete operation. Call its +// ExtractErr method to determine if the call succeeded or failed. type DeleteMetadatumResult struct { gophercloud.ErrResult } @@ -296,9 +357,10 @@ type Address struct { Address string `json:"addr"` } -// AddressPage abstracts the raw results of making a ListAddresses() request against the API. -// As OpenStack extensions may freely alter the response bodies of structures returned -// to the client, you may only safely access the data provided through the ExtractAddresses call. +// AddressPage abstracts the raw results of making a ListAddresses() request +// against the API. As OpenStack extensions may freely alter the response bodies +// of structures returned to the client, you may only safely access the data +// provided through the ExtractAddresses call. 
type AddressPage struct { pagination.SinglePageBase } @@ -309,8 +371,8 @@ func (r AddressPage) IsEmpty() (bool, error) { return len(addresses) == 0, err } -// ExtractAddresses interprets the results of a single page from a ListAddresses() call, -// producing a map of addresses. +// ExtractAddresses interprets the results of a single page from a +// ListAddresses() call, producing a map of addresses. func ExtractAddresses(r pagination.Page) (map[string][]Address, error) { var s struct { Addresses map[string][]Address `json:"addresses"` @@ -319,9 +381,11 @@ func ExtractAddresses(r pagination.Page) (map[string][]Address, error) { return s.Addresses, err } -// NetworkAddressPage abstracts the raw results of making a ListAddressesByNetwork() request against the API. -// As OpenStack extensions may freely alter the response bodies of structures returned -// to the client, you may only safely access the data provided through the ExtractAddresses call. +// NetworkAddressPage abstracts the raw results of making a +// ListAddressesByNetwork() request against the API. +// As OpenStack extensions may freely alter the response bodies of structures +// returned to the client, you may only safely access the data provided through +// the ExtractAddresses call. type NetworkAddressPage struct { pagination.SinglePageBase } @@ -332,8 +396,8 @@ func (r NetworkAddressPage) IsEmpty() (bool, error) { return len(addresses) == 0, err } -// ExtractNetworkAddresses interprets the results of a single page from a ListAddressesByNetwork() call, -// producing a slice of addresses. +// ExtractNetworkAddresses interprets the results of a single page from a +// ListAddressesByNetwork() call, producing a slice of addresses. func ExtractNetworkAddresses(r pagination.Page) ([]Address, error) { var s map[string][]Address err := (r.(NetworkAddressPage)).ExtractInto(&s) diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/util.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/util.go index 494a0e4dc..cadef0545 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/util.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/util.go @@ -2,8 +2,9 @@ package servers import "github.com/gophercloud/gophercloud" -// WaitForStatus will continually poll a server until it successfully transitions to a specified -// status. It will do this for at most the number of seconds specified. +// WaitForStatus will continually poll a server until it successfully +// transitions to a specified status. It will do this for at most the number +// of seconds specified. func WaitForStatus(c *gophercloud.ServiceClient, id, status string, secs int) error { return gophercloud.WaitFor(secs, func() (bool, error) { current, err := Get(c, id).Extract() diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/doc.go new file mode 100644 index 000000000..cedf1f4d3 --- /dev/null +++ b/vendor/github.com/gophercloud/gophercloud/openstack/doc.go @@ -0,0 +1,14 @@ +/* +Package openstack contains resources for the individual OpenStack projects +supported in Gophercloud. It also includes functions to authenticate to an +OpenStack cloud and for provisioning various service-level clients. 
+ +Example of Creating a Service Client + + ao, err := openstack.AuthOptionsFromEnv() + provider, err := openstack.AuthenticatedClient(ao) + client, err := openstack.NewNetworkV2(client, gophercloud.EndpointOpts{ + Region: os.Getenv("OS_REGION_NAME"), + }) +*/ +package openstack diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/endpoint_location.go b/vendor/github.com/gophercloud/gophercloud/openstack/endpoint_location.go index ea37f5b27..12c8aebcf 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/endpoint_location.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/endpoint_location.go @@ -6,12 +6,16 @@ import ( tokens3 "github.com/gophercloud/gophercloud/openstack/identity/v3/tokens" ) -// V2EndpointURL discovers the endpoint URL for a specific service from a ServiceCatalog acquired -// during the v2 identity service. The specified EndpointOpts are used to identify a unique, -// unambiguous endpoint to return. It's an error both when multiple endpoints match the provided -// criteria and when none do. The minimum that can be specified is a Type, but you will also often -// need to specify a Name and/or a Region depending on what's available on your OpenStack -// deployment. +/* +V2EndpointURL discovers the endpoint URL for a specific service from a +ServiceCatalog acquired during the v2 identity service. + +The specified EndpointOpts are used to identify a unique, unambiguous endpoint +to return. It's an error both when multiple endpoints match the provided +criteria and when none do. The minimum that can be specified is a Type, but you +will also often need to specify a Name and/or a Region depending on what's +available on your OpenStack deployment. +*/ func V2EndpointURL(catalog *tokens2.ServiceCatalog, opts gophercloud.EndpointOpts) (string, error) { // Extract Endpoints from the catalog entries that match the requested Type, Name if provided, and Region if provided. var endpoints = make([]tokens2.Endpoint, 0, 1) @@ -54,12 +58,16 @@ func V2EndpointURL(catalog *tokens2.ServiceCatalog, opts gophercloud.EndpointOpt return "", err } -// V3EndpointURL discovers the endpoint URL for a specific service from a Catalog acquired -// during the v3 identity service. The specified EndpointOpts are used to identify a unique, -// unambiguous endpoint to return. It's an error both when multiple endpoints match the provided -// criteria and when none do. The minimum that can be specified is a Type, but you will also often -// need to specify a Name and/or a Region depending on what's available on your OpenStack -// deployment. +/* +V3EndpointURL discovers the endpoint URL for a specific service from a Catalog +acquired during the v3 identity service. + +The specified EndpointOpts are used to identify a unique, unambiguous endpoint +to return. It's an error both when multiple endpoints match the provided +criteria and when none do. The minimum that can be specified is a Type, but you +will also often need to specify a Name and/or a Region depending on what's +available on your OpenStack deployment. +*/ func V3EndpointURL(catalog *tokens3.ServiceCatalog, opts gophercloud.EndpointOpts) (string, error) { // Extract Endpoints from the catalog entries that match the requested Type, Interface, // Name if provided, and Region if provided. 
@@ -76,7 +84,7 @@ func V3EndpointURL(catalog *tokens3.ServiceCatalog, opts gophercloud.EndpointOpt return "", err } if (opts.Availability == gophercloud.Availability(endpoint.Interface)) && - (opts.Region == "" || endpoint.Region == opts.Region) { + (opts.Region == "" || endpoint.Region == opts.Region || endpoint.RegionID == opts.Region) { endpoints = append(endpoints, endpoint) } } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/doc.go index 0c2d49d56..45623369e 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/doc.go @@ -1,7 +1,65 @@ -// Package tenants provides information and interaction with the -// tenants API resource for the OpenStack Identity service. -// -// See http://developer.openstack.org/api-ref-identity-v2.html#identity-auth-v2 -// and http://developer.openstack.org/api-ref-identity-v2.html#admin-tenants -// for more information. +/* +Package tenants provides information and interaction with the +tenants API resource for the OpenStack Identity service. + +See http://developer.openstack.org/api-ref-identity-v2.html#identity-auth-v2 +and http://developer.openstack.org/api-ref-identity-v2.html#admin-tenants +for more information. + +Example to List Tenants + + listOpts := tenants.ListOpts{ + Limit: 2, + } + + allPages, err := tenants.List(identityClient, listOpts).AllPages() + if err != nil { + panic(err) + } + + allTenants, err := tenants.ExtractTenants(allPages) + if err != nil { + panic(err) + } + + for _, tenant := range allTenants { + fmt.Printf("%+v\n", tenant) + } + +Example to Create a Tenant + + createOpts := tenants.CreateOpts{ + Name: "tenant_name", + Description: "this is a tenant", + Enabled: gophercloud.Enabled, + } + + tenant, err := tenants.Create(identityClient, createOpts).Extract() + if err != nil { + panic(err) + } + +Example to Update a Tenant + + tenantID := "e6db6ed6277c461a853458589063b295" + + updateOpts := tenants.UpdateOpts{ + Description: "this is a new description", + Enabled: gophercloud.Disabled, + } + + tenant, err := tenants.Update(identityClient, tenantID, updateOpts).Extract() + if err != nil { + panic(err) + } + +Example to Delete a Tenant + + tenantID := "e6db6ed6277c461a853458589063b295" + + err := tenants.Delete(identitYClient, tenantID).ExtractErr() + if err != nil { + panic(err) + } +*/ package tenants diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/requests.go index b6550ce60..60f58c8ce 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/requests.go @@ -9,6 +9,7 @@ import ( type ListOpts struct { // Marker is the ID of the last Tenant on the previous page. Marker string `q:"marker"` + // Limit specifies the page size. Limit int `q:"limit"` } @@ -32,18 +33,22 @@ func List(client *gophercloud.ServiceClient, opts *ListOpts) pagination.Pager { type CreateOpts struct { // Name is the name of the tenant. Name string `json:"name" required:"true"` + // Description is the description of the tenant. Description string `json:"description,omitempty"` + // Enabled sets the tenant status to enabled or disabled. 
Enabled *bool `json:"enabled,omitempty"` } -// CreateOptsBuilder describes struct types that can be accepted by the Create call. +// CreateOptsBuilder enables extensions to add additional parameters to the +// Create request. type CreateOptsBuilder interface { ToTenantCreateMap() (map[string]interface{}, error) } -// ToTenantCreateMap assembles a request body based on the contents of a CreateOpts. +// ToTenantCreateMap assembles a request body based on the contents of +// a CreateOpts. func (opts CreateOpts) ToTenantCreateMap() (map[string]interface{}, error) { return gophercloud.BuildRequestBody(opts, "tenant") } @@ -67,17 +72,21 @@ func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { return } -// UpdateOptsBuilder allows extensions to add additional attributes to the Update request. +// UpdateOptsBuilder allows extensions to add additional parameters to the +// Update request. type UpdateOptsBuilder interface { ToTenantUpdateMap() (map[string]interface{}, error) } -// UpdateOpts specifies the base attributes that may be updated on an existing server. +// UpdateOpts specifies the base attributes that may be updated on an existing +// tenant. type UpdateOpts struct { // Name is the name of the tenant. Name string `json:"name,omitempty"` + // Description is the description of the tenant. Description string `json:"description,omitempty"` + // Enabled sets the tenant status to enabled or disabled. Enabled *bool `json:"enabled,omitempty"` } @@ -100,7 +109,7 @@ func Update(client *gophercloud.ServiceClient, id string, opts UpdateOptsBuilder return } -// Delete is the operation responsible for permanently deleting an API tenant. +// Delete is the operation responsible for permanently deleting a tenant. func Delete(client *gophercloud.ServiceClient, id string) (r DeleteResult) { _, r.Err = client.Delete(deleteURL(client, id), nil) return diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/results.go index 5a319de5c..bb6c2c6b0 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tenants/results.go @@ -43,7 +43,8 @@ func (r TenantPage) NextPageURL() (string, error) { return gophercloud.ExtractNextURL(s.Links) } -// ExtractTenants returns a slice of Tenants contained in a single page of results. +// ExtractTenants returns a slice of Tenants contained in a single page of +// results. func ExtractTenants(r pagination.Page) ([]Tenant, error) { var s struct { Tenants []Tenant `json:"tenants"` @@ -56,7 +57,7 @@ type tenantResult struct { gophercloud.Result } -// Extract interprets any tenantResults as a tenant. +// Extract interprets any tenantResults as a Tenant. func (r tenantResult) Extract() (*Tenant, error) { var s struct { Tenant *Tenant `json:"tenant"` @@ -65,22 +66,26 @@ func (r tenantResult) Extract() (*Tenant, error) { return s.Tenant, err } -// GetResult temporarily contains the response from the Get call. +// GetResult is the response from a Get request. Call its Extract method to +// interpret it as a Tenant. type GetResult struct { tenantResult } -// CreateResult temporarily contains the reponse from the Create call. +// CreateResult is the response from a Create request. Call its Extract method +// to interpret it as a Tenant. type CreateResult struct { tenantResult } -// DeleteResult temporarily contains the response from the Delete call. 
+// DeleteResult is the response from a Get request. Call its ExtractErr method +// to determine if the call succeeded or failed. type DeleteResult struct { gophercloud.ErrResult } -// UpdateResult temporarily contains the response from the Update call. +// UpdateResult is the response from a Update request. Call its Extract method +// to interpret it as a Tenant. type UpdateResult struct { tenantResult } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/doc.go index 31cacc5e1..5375eea87 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/doc.go @@ -1,5 +1,46 @@ -// Package tokens provides information and interaction with the token API -// resource for the OpenStack Identity service. -// For more information, see: -// http://developer.openstack.org/api-ref-identity-v2.html#identity-auth-v2 +/* +Package tokens provides information and interaction with the token API +resource for the OpenStack Identity service. + +For more information, see: +http://developer.openstack.org/api-ref-identity-v2.html#identity-auth-v2 + +Example to Create an Unscoped Token from a Password + + authOpts := gophercloud.AuthOptions{ + Username: "user", + Password: "pass" + } + + token, err := tokens.Create(identityClient, authOpts).ExtractToken() + if err != nil { + panic(err) + } + +Example to Create a Token from a Tenant ID and Password + + authOpts := gophercloud.AuthOptions{ + Username: "user", + Password: "password", + TenantID: "fc394f2ab2df4114bde39905f800dc57" + } + + token, err := tokens.Create(identityClient, authOpts).ExtractToken() + if err != nil { + panic(err) + } + +Example to Create a Token from a Tenant Name and Password + + authOpts := gophercloud.AuthOptions{ + Username: "user", + Password: "password", + TenantName: "tenantname" + } + + token, err := tokens.Create(identityClient, authOpts).ExtractToken() + if err != nil { + panic(err) + } +*/ package tokens diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/requests.go index 4983031e7..ab32368cc 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/requests.go @@ -2,17 +2,21 @@ package tokens import "github.com/gophercloud/gophercloud" +// PasswordCredentialsV2 represents the required options to authenticate +// with a username and password. type PasswordCredentialsV2 struct { Username string `json:"username" required:"true"` Password string `json:"password" required:"true"` } +// TokenCredentialsV2 represents the required options to authenticate +// with a token. type TokenCredentialsV2 struct { ID string `json:"id,omitempty" required:"true"` } -// AuthOptionsV2 wraps a gophercloud AuthOptions in order to adhere to the AuthOptionsBuilder -// interface. +// AuthOptionsV2 wraps a gophercloud AuthOptions in order to adhere to the +// AuthOptionsBuilder interface. 
type AuthOptionsV2 struct { PasswordCredentials *PasswordCredentialsV2 `json:"passwordCredentials,omitempty" xor:"TokenCredentials"` @@ -23,15 +27,16 @@ type AuthOptionsV2 struct { TenantID string `json:"tenantId,omitempty"` TenantName string `json:"tenantName,omitempty"` - // TokenCredentials allows users to authenticate (possibly as another user) with an - // authentication token ID. + // TokenCredentials allows users to authenticate (possibly as another user) + // with an authentication token ID. TokenCredentials *TokenCredentialsV2 `json:"token,omitempty" xor:"PasswordCredentials"` } -// AuthOptionsBuilder describes any argument that may be passed to the Create call. +// AuthOptionsBuilder allows extensions to add additional parameters to the +// token create request. type AuthOptionsBuilder interface { - // ToTokenCreateMap assembles the Create request body, returning an error if parameters are - // missing or inconsistent. + // ToTokenCreateMap assembles the Create request body, returning an error + // if parameters are missing or inconsistent. ToTokenV2CreateMap() (map[string]interface{}, error) } @@ -47,8 +52,7 @@ type AuthOptions struct { TokenID string } -// ToTokenV2CreateMap allows AuthOptions to satisfy the AuthOptionsBuilder -// interface in the v2 tokens package +// ToTokenV2CreateMap builds a token request body from the given AuthOptions. func (opts AuthOptions) ToTokenV2CreateMap() (map[string]interface{}, error) { v2Opts := AuthOptionsV2{ TenantID: opts.TenantID, @@ -74,9 +78,9 @@ func (opts AuthOptions) ToTokenV2CreateMap() (map[string]interface{}, error) { } // Create authenticates to the identity service and attempts to acquire a Token. -// If successful, the CreateResult -// Generally, rather than interact with this call directly, end users should call openstack.AuthenticatedClient(), -// which abstracts all of the gory details about navigating service catalogs and such. +// Generally, rather than interact with this call directly, end users should +// call openstack.AuthenticatedClient(), which abstracts all of the gory details +// about navigating service catalogs and such. func Create(client *gophercloud.ServiceClient, auth AuthOptionsBuilder) (r CreateResult) { b, err := auth.ToTokenV2CreateMap() if err != nil { diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/results.go index 6b3649370..b11326772 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v2/tokens/results.go @@ -7,20 +7,24 @@ import ( "github.com/gophercloud/gophercloud/openstack/identity/v2/tenants" ) -// Token provides only the most basic information related to an authentication token. +// Token provides only the most basic information related to an authentication +// token. type Token struct { // ID provides the primary means of identifying a user to the OpenStack API. - // OpenStack defines this field as an opaque value, so do not depend on its content. - // It is safe, however, to compare for equality. + // OpenStack defines this field as an opaque value, so do not depend on its + // content. It is safe, however, to compare for equality. ID string - // ExpiresAt provides a timestamp in ISO 8601 format, indicating when the authentication token becomes invalid. - // After this point in time, future API requests made using this authentication token will respond with errors. 
- // Either the caller will need to reauthenticate manually, or more preferably, the caller should exploit automatic re-authentication. + // ExpiresAt provides a timestamp in ISO 8601 format, indicating when the + // authentication token becomes invalid. After this point in time, future + // API requests made using this authentication token will respond with + // errors. Either the caller will need to reauthenticate manually, or more + // preferably, the caller should exploit automatic re-authentication. // See the AuthOptions structure for more details. ExpiresAt time.Time - // Tenant provides information about the tenant to which this token grants access. + // Tenant provides information about the tenant to which this token grants + // access. Tenant tenants.Tenant } @@ -38,13 +42,17 @@ type User struct { } // Endpoint represents a single API endpoint offered by a service. -// It provides the public and internal URLs, if supported, along with a region specifier, again if provided. +// It provides the public and internal URLs, if supported, along with a region +// specifier, again if provided. +// // The significance of the Region field will depend upon your provider. // -// In addition, the interface offered by the service will have version information associated with it -// through the VersionId, VersionInfo, and VersionList fields, if provided or supported. +// In addition, the interface offered by the service will have version +// information associated with it through the VersionId, VersionInfo, and +// VersionList fields, if provided or supported. // -// In all cases, fields which aren't supported by the provider and service combined will assume a zero-value (""). +// In all cases, fields which aren't supported by the provider and service +// combined will assume a zero-value (""). type Endpoint struct { TenantID string `json:"tenantId"` PublicURL string `json:"publicURL"` @@ -56,38 +64,44 @@ type Endpoint struct { VersionList string `json:"versionList"` } -// CatalogEntry provides a type-safe interface to an Identity API V2 service catalog listing. -// Each class of service, such as cloud DNS or block storage services, will have a single -// CatalogEntry representing it. +// CatalogEntry provides a type-safe interface to an Identity API V2 service +// catalog listing. // -// Note: when looking for the desired service, try, whenever possible, to key off the type field. -// Otherwise, you'll tie the representation of the service to a specific provider. +// Each class of service, such as cloud DNS or block storage services, will have +// a single CatalogEntry representing it. +// +// Note: when looking for the desired service, try, whenever possible, to key +// off the type field. Otherwise, you'll tie the representation of the service +// to a specific provider. type CatalogEntry struct { // Name will contain the provider-specified name for the service. Name string `json:"name"` - // Type will contain a type string if OpenStack defines a type for the service. - // Otherwise, for provider-specific services, the provider may assign their own type strings. + // Type will contain a type string if OpenStack defines a type for the + // service. Otherwise, for provider-specific services, the provider may assign + // their own type strings. Type string `json:"type"` - // Endpoints will let the caller iterate over all the different endpoints that may exist for - // the service. + // Endpoints will let the caller iterate over all the different endpoints that + // may exist for the service. 
Endpoints []Endpoint `json:"endpoints"` } -// ServiceCatalog provides a view into the service catalog from a previous, successful authentication. +// ServiceCatalog provides a view into the service catalog from a previous, +// successful authentication. type ServiceCatalog struct { Entries []CatalogEntry } -// CreateResult defers the interpretation of a created token. -// Use ExtractToken() to interpret it as a Token, or ExtractServiceCatalog() to interpret it as a service catalog. +// CreateResult is the response from a Create request. Use ExtractToken() to +// interpret it as a Token, or ExtractServiceCatalog() to interpret it as a +// service catalog. type CreateResult struct { gophercloud.Result } -// GetResult is the deferred response from a Get call, which is the same with a Created token. -// Use ExtractUser() to interpret it as a User. +// GetResult is the deferred response from a Get call, which is the same with a +// Created token. Use ExtractUser() to interpret it as a User. type GetResult struct { CreateResult } @@ -121,7 +135,8 @@ func (r CreateResult) ExtractToken() (*Token, error) { }, nil } -// ExtractServiceCatalog returns the ServiceCatalog that was generated along with the user's Token. +// ExtractServiceCatalog returns the ServiceCatalog that was generated along +// with the user's Token. func (r CreateResult) ExtractServiceCatalog() (*ServiceCatalog, error) { var s struct { Access struct { diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/doc.go index 76ff5f473..966e128f1 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/doc.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/doc.go @@ -1,6 +1,108 @@ -// Package tokens provides information and interaction with the token API -// resource for the OpenStack Identity service. -// -// For more information, see: -// http://developer.openstack.org/api-ref-identity-v3.html#tokens-v3 +/* +Package tokens provides information and interaction with the token API +resource for the OpenStack Identity service. 
+ +For more information, see: +http://developer.openstack.org/api-ref-identity-v3.html#tokens-v3 + +Example to Create a Token From a Username and Password + + authOptions := tokens.AuthOptions{ + UserID: "username", + Password: "password", + } + + token, err := tokens.Create(identityClient, authOptions).ExtractToken() + if err != nil { + panic(err) + } + +Example to Create a Token From a Username, Password, and Domain + + authOptions := tokens.AuthOptions{ + UserID: "username", + Password: "password", + DomainID: "default", + } + + token, err := tokens.Create(identityClient, authOptions).ExtractToken() + if err != nil { + panic(err) + } + + authOptions = tokens.AuthOptions{ + UserID: "username", + Password: "password", + DomainName: "default", + } + + token, err = tokens.Create(identityClient, authOptions).ExtractToken() + if err != nil { + panic(err) + } + +Example to Create a Token From a Token + + authOptions := tokens.AuthOptions{ + TokenID: "token_id", + } + + token, err := tokens.Create(identityClient, authOptions).ExtractToken() + if err != nil { + panic(err) + } + +Example to Create a Token from a Username and Password with Project ID Scope + + scope := tokens.Scope{ + ProjectID: "0fe36e73809d46aeae6705c39077b1b3", + } + + authOptions := tokens.AuthOptions{ + Scope: &scope, + UserID: "username", + Password: "password", + } + + token, err = tokens.Create(identityClient, authOptions).ExtractToken() + if err != nil { + panic(err) + } + +Example to Create a Token from a Username and Password with Domain ID Scope + + scope := tokens.Scope{ + DomainID: "default", + } + + authOptions := tokens.AuthOptions{ + Scope: &scope, + UserID: "username", + Password: "password", + } + + token, err = tokens.Create(identityClient, authOptions).ExtractToken() + if err != nil { + panic(err) + } + +Example to Create a Token from a Username and Password with Project Name Scope + + scope := tokens.Scope{ + ProjectName: "project_name", + DomainID: "default", + } + + authOptions := tokens.AuthOptions{ + Scope: &scope, + UserID: "username", + Password: "password", + } + + token, err = tokens.Create(identityClient, authOptions).ExtractToken() + if err != nil { + panic(err) + } + +*/ package tokens diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/requests.go index 39c19aee3..6e99a793c 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/requests.go @@ -10,20 +10,22 @@ type Scope struct { DomainName string } -// AuthOptionsBuilder describes any argument that may be passed to the Create call. +// AuthOptionsBuilder provides the ability for extensions to add additional +// parameters to AuthOptions. Extensions must satisfy all required methods. type AuthOptionsBuilder interface { - // ToTokenV3CreateMap assembles the Create request body, returning an error if parameters are - // missing or inconsistent. + // ToTokenV3CreateMap assembles the Create request body, returning an error + // if parameters are missing or inconsistent. ToTokenV3CreateMap(map[string]interface{}) (map[string]interface{}, error) ToTokenV3ScopeMap() (map[string]interface{}, error) CanReauth() bool } +// AuthOptions represents options for authenticating a user. 
type AuthOptions struct { // IdentityEndpoint specifies the HTTP endpoint that is required to work with - // the Identity API of the appropriate version. While it's ultimately needed by - // all of the identity services, it will often be populated by a provider-level - // function. + // the Identity API of the appropriate version. While it's ultimately needed + // by all of the identity services, it will often be populated by a + // provider-level function. IdentityEndpoint string `json:"-"` // Username is required if using Identity V2 API. Consult with your provider's @@ -39,11 +41,11 @@ type AuthOptions struct { DomainID string `json:"-"` DomainName string `json:"name,omitempty"` - // AllowReauth should be set to true if you grant permission for Gophercloud to - // cache your credentials in memory, and to allow Gophercloud to attempt to - // re-authenticate automatically if/when your token expires. If you set it to - // false, it will not cache these settings, but re-authentication will not be - // possible. This setting defaults to false. + // AllowReauth should be set to true if you grant permission for Gophercloud + // to cache your credentials in memory, and to allow Gophercloud to attempt + // to re-authenticate automatically if/when your token expires. If you set + // it to false, it will not cache these settings, but re-authentication will + // not be possible. This setting defaults to false. AllowReauth bool `json:"-"` // TokenID allows users to authenticate (possibly as another user) with an @@ -53,6 +55,7 @@ type AuthOptions struct { Scope Scope `json:"-"` } +// ToTokenV3CreateMap builds a request body from AuthOptions. func (opts *AuthOptions) ToTokenV3CreateMap(scope map[string]interface{}) (map[string]interface{}, error) { gophercloudAuthOpts := gophercloud.AuthOptions{ Username: opts.Username, @@ -67,68 +70,17 @@ func (opts *AuthOptions) ToTokenV3CreateMap(scope map[string]interface{}) (map[s return gophercloudAuthOpts.ToTokenV3CreateMap(scope) } +// ToTokenV3CreateMap builds a scope request body from AuthOptions. func (opts *AuthOptions) ToTokenV3ScopeMap() (map[string]interface{}, error) { - if opts.Scope.ProjectName != "" { - // ProjectName provided: either DomainID or DomainName must also be supplied. - // ProjectID may not be supplied. - if opts.Scope.DomainID == "" && opts.Scope.DomainName == "" { - return nil, gophercloud.ErrScopeDomainIDOrDomainName{} - } - if opts.Scope.ProjectID != "" { - return nil, gophercloud.ErrScopeProjectIDOrProjectName{} - } + scope := gophercloud.AuthScope(opts.Scope) - if opts.Scope.DomainID != "" { - // ProjectName + DomainID - return map[string]interface{}{ - "project": map[string]interface{}{ - "name": &opts.Scope.ProjectName, - "domain": map[string]interface{}{"id": &opts.Scope.DomainID}, - }, - }, nil - } - - if opts.Scope.DomainName != "" { - // ProjectName + DomainName - return map[string]interface{}{ - "project": map[string]interface{}{ - "name": &opts.Scope.ProjectName, - "domain": map[string]interface{}{"name": &opts.Scope.DomainName}, - }, - }, nil - } - } else if opts.Scope.ProjectID != "" { - // ProjectID provided. ProjectName, DomainID, and DomainName may not be provided. 
- if opts.Scope.DomainID != "" { - return nil, gophercloud.ErrScopeProjectIDAlone{} - } - if opts.Scope.DomainName != "" { - return nil, gophercloud.ErrScopeProjectIDAlone{} - } - - // ProjectID - return map[string]interface{}{ - "project": map[string]interface{}{ - "id": &opts.Scope.ProjectID, - }, - }, nil - } else if opts.Scope.DomainID != "" { - // DomainID provided. ProjectID, ProjectName, and DomainName may not be provided. - if opts.Scope.DomainName != "" { - return nil, gophercloud.ErrScopeDomainIDOrDomainName{} - } - - // DomainID - return map[string]interface{}{ - "domain": map[string]interface{}{ - "id": &opts.Scope.DomainID, - }, - }, nil - } else if opts.Scope.DomainName != "" { - return nil, gophercloud.ErrScopeDomainName{} + gophercloudAuthOpts := gophercloud.AuthOptions{ + Scope: &scope, + DomainID: opts.DomainID, + DomainName: opts.DomainName, } - return nil, nil + return gophercloudAuthOpts.ToTokenV3ScopeMap() } func (opts *AuthOptions) CanReauth() bool { @@ -141,7 +93,8 @@ func subjectTokenHeaders(c *gophercloud.ServiceClient, subjectToken string) map[ } } -// Create authenticates and either generates a new token, or changes the Scope of an existing token. +// Create authenticates and either generates a new token, or changes the Scope +// of an existing token. func Create(c *gophercloud.ServiceClient, opts AuthOptionsBuilder) (r CreateResult) { scope, err := opts.ToTokenV3ScopeMap() if err != nil { @@ -180,7 +133,7 @@ func Get(c *gophercloud.ServiceClient, token string) (r GetResult) { // Validate determines if a specified token is valid or not. func Validate(c *gophercloud.ServiceClient, token string) (bool, error) { - resp, err := c.Request("HEAD", tokenURL(c), &gophercloud.RequestOpts{ + resp, err := c.Head(tokenURL(c), &gophercloud.RequestOpts{ MoreHeaders: subjectTokenHeaders(c, token), OkCodes: []int{200, 204, 404}, }) diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go index 7c306e83f..ebdca58f6 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go @@ -13,41 +13,50 @@ import ( type Endpoint struct { ID string `json:"id"` Region string `json:"region"` + RegionID string `json:"region_id"` Interface string `json:"interface"` URL string `json:"url"` } -// CatalogEntry provides a type-safe interface to an Identity API V3 service catalog listing. -// Each class of service, such as cloud DNS or block storage services, could have multiple -// CatalogEntry representing it (one by interface type, e.g public, admin or internal). +// CatalogEntry provides a type-safe interface to an Identity API V3 service +// catalog listing. Each class of service, such as cloud DNS or block storage +// services, could have multiple CatalogEntry representing it (one by interface +// type, e.g public, admin or internal). // -// Note: when looking for the desired service, try, whenever possible, to key off the type field. -// Otherwise, you'll tie the representation of the service to a specific provider. +// Note: when looking for the desired service, try, whenever possible, to key +// off the type field. Otherwise, you'll tie the representation of the service +// to a specific provider. type CatalogEntry struct { // Service ID ID string `json:"id"` + // Name will contain the provider-specified name for the service. 
Name string `json:"name"` - // Type will contain a type string if OpenStack defines a type for the service. - // Otherwise, for provider-specific services, the provider may assign their own type strings. + + // Type will contain a type string if OpenStack defines a type for the + // service. Otherwise, for provider-specific services, the provider may + // assign their own type strings. Type string `json:"type"` - // Endpoints will let the caller iterate over all the different endpoints that may exist for - // the service. + + // Endpoints will let the caller iterate over all the different endpoints that + // may exist for the service. Endpoints []Endpoint `json:"endpoints"` } -// ServiceCatalog provides a view into the service catalog from a previous, successful authentication. +// ServiceCatalog provides a view into the service catalog from a previous, +// successful authentication. type ServiceCatalog struct { Entries []CatalogEntry `json:"catalog"` } -// Domain provides information about the domain to which this token grants access. +// Domain provides information about the domain to which this token grants +// access. type Domain struct { ID string `json:"id"` Name string `json:"name"` } -// User represents a user resource that exists on the API. +// User represents a user resource that exists in the Identity Service. type User struct { Domain Domain `json:"domain"` ID string `json:"id"` @@ -67,7 +76,8 @@ type Project struct { Name string `json:"name"` } -// commonResult is the deferred result of a Create or a Get call. +// commonResult is the response from a request. A commonResult has various +// methods which can be used to extract different details about the result. type commonResult struct { gophercloud.Result } @@ -92,7 +102,8 @@ func (r commonResult) ExtractToken() (*Token, error) { return &s, err } -// ExtractServiceCatalog returns the ServiceCatalog that was generated along with the user's Token. +// ExtractServiceCatalog returns the ServiceCatalog that was generated along +// with the user's Token. func (r commonResult) ExtractServiceCatalog() (*ServiceCatalog, error) { var s ServiceCatalog err := r.ExtractInto(&s) @@ -126,27 +137,31 @@ func (r commonResult) ExtractProject() (*Project, error) { return s.Project, err } -// CreateResult defers the interpretation of a created token. -// Use ExtractToken() to interpret it as a Token, or ExtractServiceCatalog() to interpret it as a service catalog. +// CreateResult is the response from a Create request. Use ExtractToken() +// to interpret it as a Token, or ExtractServiceCatalog() to interpret it +// as a service catalog. type CreateResult struct { commonResult } -// GetResult is the deferred response from a Get call. +// GetResult is the response from a Get request. Use ExtractToken() +// to interpret it as a Token, or ExtractServiceCatalog() to interpret it +// as a service catalog. type GetResult struct { commonResult } -// RevokeResult is the deferred response from a Revoke call. +// RevokeResult is response from a Revoke request. type RevokeResult struct { commonResult } -// Token is a string that grants a user access to a controlled set of services in an OpenStack provider. -// Each Token is valid for a set length of time. +// Token is a string that grants a user access to a controlled set of services +// in an OpenStack provider. Each Token is valid for a set length of time. type Token struct { // ID is the issued token. ID string `json:"id"` + // ExpiresAt is the timestamp at which this token will no longer be accepted. 
ExpiresAt time.Time `json:"expires_at"` } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/doc.go new file mode 100644 index 000000000..14da9ac90 --- /dev/null +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/doc.go @@ -0,0 +1,60 @@ +/* +Package images enables management and retrieval of images from the OpenStack +Image Service. + +Example to List Images + + images.ListOpts{ + Owner: "a7509e1ae65945fda83f3e52c6296017", + } + + allPages, err := images.List(imagesClient, listOpts).AllPages() + if err != nil { + panic(err) + } + + allImages, err := images.ExtractImages(allPages) + if err != nil { + panic(err) + } + + for _, image := range allImages { + fmt.Printf("%+v\n", image) + } + +Example to Create an Image + + createOpts := images.CreateOpts{ + Name: "image_name", + Visibility: images.ImageVisibilityPrivate, + } + + image, err := images.Create(imageClient, createOpts) + if err != nil { + panic(err) + } + +Example to Update an Image + + imageID := "1bea47ed-f6a9-463b-b423-14b9cca9ad27" + + updateOpts := images.UpdateOpts{ + images.ReplaceImageName{ + NewName: "new_name", + }, + } + + image, err := images.Update(imageClient, imageID, updateOpts).Extract() + if err != nil { + panic(err) + } + +Example to Delete an Image + + imageID := "1bea47ed-f6a9-463b-b423-14b9cca9ad27" + err := images.Delete(imageClient, imageID).ExtractErr() + if err != nil { + panic(err) + } +*/ +package images diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go index 044b5cb95..88cd4d265 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/requests.go @@ -1,6 +1,10 @@ package images import ( + "fmt" + "net/url" + "time" + "github.com/gophercloud/gophercloud" "github.com/gophercloud/gophercloud/pagination" ) @@ -15,33 +19,106 @@ type ListOptsBuilder interface { // the API. Filtering is achieved by passing in struct field values that map to // the server attributes you want to see returned. Marker and Limit are used // for pagination. -//http://developer.openstack.org/api-ref-image-v2.html +// +// http://developer.openstack.org/api-ref-image-v2.html type ListOpts struct { + // ID is the ID of the image. + // Multiple IDs can be specified by constructing a string + // such as "in:uuid1,uuid2,uuid3". + ID string `q:"id"` + // Integer value for the limit of values to return. Limit int `q:"limit"` // UUID of the server at which you want to set a marker. Marker string `q:"marker"` - Name string `q:"name"` - Visibility ImageVisibility `q:"visibility"` + // Name filters on the name of the image. + // Multiple names can be specified by constructing a string + // such as "in:name1,name2,name3". + Name string `q:"name"` + + // Visibility filters on the visibility of the image. + Visibility ImageVisibility `q:"visibility"` + + // MemberStatus filters on the member status of the image. MemberStatus ImageMemberStatus `q:"member_status"` - Owner string `q:"owner"` - Status ImageStatus `q:"status"` - SizeMin int64 `q:"size_min"` - SizeMax int64 `q:"size_max"` - SortKey string `q:"sort_key"` - SortDir string `q:"sort_dir"` - Tag string `q:"tag"` + + // Owner filters on the project ID of the image. 
+ Owner string `q:"owner"` + + // Status filters on the status of the image. + // Multiple statuses can be specified by constructing a string + // such as "in:saving,queued". + Status ImageStatus `q:"status"` + + // SizeMin filters on the size_min image property. + SizeMin int64 `q:"size_min"` + + // SizeMax filters on the size_max image property. + SizeMax int64 `q:"size_max"` + + // Sort sorts the results using the new style of sorting. See the OpenStack + // Image API reference for the exact syntax. + // + // Sort cannot be used with the classic sort options (sort_key and sort_dir). + Sort string `q:"sort"` + + // SortKey will sort the results based on a specified image property. + SortKey string `q:"sort_key"` + + // SortDir will sort the list results either ascending or decending. + SortDir string `q:"sort_dir"` + + // Tags filters on specific image tags. + Tags []string `q:"tag"` + + // CreatedAtQuery filters images based on their creation date. + CreatedAtQuery *ImageDateQuery + + // UpdatedAtQuery filters images based on their updated date. + UpdatedAtQuery *ImageDateQuery + + // ContainerFormat filters images based on the container_format. + // Multiple container formats can be specified by constructing a + // string such as "in:bare,ami". + ContainerFormat string `q:"container_format"` + + // DiskFormat filters images based on the disk_format. + // Multiple disk formats can be specified by constructing a string + // such as "in:qcow2,iso". + DiskFormat string `q:"disk_format"` } // ToImageListQuery formats a ListOpts into a query string. func (opts ListOpts) ToImageListQuery() (string, error) { q, err := gophercloud.BuildQueryString(opts) + params := q.Query() + + if opts.CreatedAtQuery != nil { + createdAt := opts.CreatedAtQuery.Date.Format(time.RFC3339) + if v := opts.CreatedAtQuery.Filter; v != "" { + createdAt = fmt.Sprintf("%s:%s", v, createdAt) + } + + params.Add("created_at", createdAt) + } + + if opts.UpdatedAtQuery != nil { + updatedAt := opts.UpdatedAtQuery.Date.Format(time.RFC3339) + if v := opts.UpdatedAtQuery.Filter; v != "" { + updatedAt = fmt.Sprintf("%s:%s", v, updatedAt) + } + + params.Add("updated_at", updatedAt) + } + + q = &url.URL{RawQuery: params.Encode()} + return q.String(), err } -// List implements image list request +// List implements image list request. func List(c *gophercloud.ServiceClient, opts ListOptsBuilder) pagination.Pager { url := listURL(c) if opts != nil { @@ -56,14 +133,13 @@ func List(c *gophercloud.ServiceClient, opts ListOptsBuilder) pagination.Pager { }) } -// CreateOptsBuilder describes struct types that can be accepted by the Create call. -// The CreateOpts struct in this package does. +// CreateOptsBuilder allows extensions to add parameters to the Create request. type CreateOptsBuilder interface { // Returns value that can be passed to json.Marshal ToImageCreateMap() (map[string]interface{}, error) } -// CreateOpts implements CreateOptsBuilder +// CreateOpts represents options used to create an image. type CreateOpts struct { // Name is the name of the new image. Name string `json:"name" required:"true"` @@ -118,7 +194,7 @@ func (opts CreateOpts) ToImageCreateMap() (map[string]interface{}, error) { return b, nil } -// Create implements create image request +// Create implements create image request. 
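A sketch of the expanded `ListOpts` filtering described above, assuming `imageClient` is an Image service client (placeholder name), the deployment accepts the `in:` multi-value syntax, and the sort string follows the Image API reference.

```
listOpts := images.ListOpts{
	Name:   "in:ubuntu-16.04,ubuntu-18.04",
	Status: images.ImageStatusActive,
	Tags:   []string{"production"},
	Sort:   "created_at:desc",
}

allPages, err := images.List(imageClient, listOpts).AllPages()
if err != nil {
	panic(err)
}

allImages, err := images.ExtractImages(allPages)
if err != nil {
	panic(err)
}
```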
func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r CreateResult) { b, err := opts.ToImageCreateMap() if err != nil { @@ -129,19 +205,19 @@ func Create(client *gophercloud.ServiceClient, opts CreateOptsBuilder) (r Create return } -// Delete implements image delete request +// Delete implements image delete request. func Delete(client *gophercloud.ServiceClient, id string) (r DeleteResult) { _, r.Err = client.Delete(deleteURL(client, id), nil) return } -// Get implements image get request +// Get implements image get request. func Get(client *gophercloud.ServiceClient, id string) (r GetResult) { _, r.Err = client.Get(getURL(client, id), &r.Body, nil) return } -// Update implements image updated request +// Update implements image updated request. func Update(client *gophercloud.ServiceClient, id string, opts UpdateOptsBuilder) (r UpdateResult) { b, err := opts.ToImageUpdateMap() if err != nil { @@ -155,9 +231,11 @@ func Update(client *gophercloud.ServiceClient, id string, opts UpdateOptsBuilder return } -// UpdateOptsBuilder implements UpdateOptsBuilder +// UpdateOptsBuilder allows extensions to add additional parameters to the +// Update request. type UpdateOptsBuilder interface { - // returns value implementing json.Marshaler which when marshaled matches the patch schema: + // returns value implementing json.Marshaler which when marshaled matches + // the patch schema: // http://specs.openstack.org/openstack/glance-specs/specs/api/v2/http-patch-image-api-v2.html ToImageUpdateMap() ([]interface{}, error) } @@ -165,7 +243,8 @@ type UpdateOptsBuilder interface { // UpdateOpts implements UpdateOpts type UpdateOpts []Patch -// ToImageUpdateMap builder +// ToImageUpdateMap assembles a request body based on the contents of +// UpdateOpts. func (opts UpdateOpts) ToImageUpdateMap() ([]interface{}, error) { m := make([]interface{}, len(opts)) for i, patch := range opts { @@ -175,18 +254,18 @@ func (opts UpdateOpts) ToImageUpdateMap() ([]interface{}, error) { return m, nil } -// Patch represents a single update to an existing image. Multiple updates to an image can be -// submitted at the same time. +// Patch represents a single update to an existing image. Multiple updates +// to an image can be submitted at the same time. type Patch interface { ToImagePatchMap() map[string]interface{} } -// UpdateVisibility updated visibility +// UpdateVisibility represents an updated visibility property request. type UpdateVisibility struct { Visibility ImageVisibility } -// ToImagePatchMap builder +// ToImagePatchMap assembles a request body based on UpdateVisibility. func (u UpdateVisibility) ToImagePatchMap() map[string]interface{} { return map[string]interface{}{ "op": "replace", @@ -195,12 +274,12 @@ func (u UpdateVisibility) ToImagePatchMap() map[string]interface{} { } } -// ReplaceImageName implements Patch +// ReplaceImageName represents an updated image_name property request. type ReplaceImageName struct { NewName string } -// ToImagePatchMap builder +// ToImagePatchMap assembles a request body based on ReplaceImageName. func (r ReplaceImageName) ToImagePatchMap() map[string]interface{} { return map[string]interface{}{ "op": "replace", @@ -209,12 +288,12 @@ func (r ReplaceImageName) ToImagePatchMap() map[string]interface{} { } } -// ReplaceImageChecksum implements Patch +// ReplaceImageChecksum represents an updated checksum property request. 
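An illustrative sketch (not part of this hunk) of combining several of the `Patch` implementations in this file into one `Update` call; `imageClient` and `imageID` are placeholders.

```
updateOpts := images.UpdateOpts{
	images.ReplaceImageName{NewName: "new_name"},
	images.ReplaceImageTags{NewTags: []string{"nightly", "amd64"}},
	images.UpdateVisibility{Visibility: images.ImageVisibilityPrivate},
}

image, err := images.Update(imageClient, imageID, updateOpts).Extract()
if err != nil {
	panic(err)
}
```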
type ReplaceImageChecksum struct { Checksum string } -// ReplaceImageChecksum builder +// ReplaceImageChecksum assembles a request body based on ReplaceImageChecksum. func (rc ReplaceImageChecksum) ToImagePatchMap() map[string]interface{} { return map[string]interface{}{ "op": "replace", @@ -223,12 +302,12 @@ func (rc ReplaceImageChecksum) ToImagePatchMap() map[string]interface{} { } } -// ReplaceImageTags implements Patch +// ReplaceImageTags represents an updated tags property request. type ReplaceImageTags struct { NewTags []string } -// ToImagePatchMap builder +// ToImagePatchMap assembles a request body based on ReplaceImageTags. func (r ReplaceImageTags) ToImagePatchMap() map[string]interface{} { return map[string]interface{}{ "op": "replace", diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/results.go index 632186b72..e25606884 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/results.go @@ -11,11 +11,9 @@ import ( "github.com/gophercloud/gophercloud/pagination" ) -// Image model -// Does not include the literal image data; just metadata. -// returned by listing images, and by fetching a specific image. +// Image represents an image found in the OpenStack Image service. type Image struct { - // ID is the image UUID + // ID is the image UUID. ID string `json:"id"` // Name is the human-readable display name for the image. @@ -34,16 +32,19 @@ type Image struct { ContainerFormat string `json:"container_format"` // DiskFormat is the format of the disk. - // If set, valid values are ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso. + // If set, valid values are ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, + // and iso. DiskFormat string `json:"disk_format"` - // MinDiskGigabytes is the amount of disk space in GB that is required to boot the image. + // MinDiskGigabytes is the amount of disk space in GB that is required to + // boot the image. MinDiskGigabytes int `json:"min_disk"` - // MinRAMMegabytes [optional] is the amount of RAM in MB that is required to boot the image. + // MinRAMMegabytes [optional] is the amount of RAM in MB that is required to + // boot the image. MinRAMMegabytes int `json:"min_ram"` - // Owner is the tenant the image belongs to. + // Owner is the tenant ID the image belongs to. Owner string `json:"owner"` // Protected is whether the image is deletable or not. @@ -52,7 +53,7 @@ type Image struct { // Visibility defines who can see/use the image. Visibility ImageVisibility `json:"visibility"` - // Checksum is the checksum of the data that's associated with the image + // Checksum is the checksum of the data that's associated with the image. Checksum string `json:"checksum"` // SizeBytes is the size of the data that's associated with the image. @@ -60,23 +61,27 @@ type Image struct { // Metadata is a set of metadata associated with the image. // Image metadata allow for meaningfully define the image properties - // and tags. See http://docs.openstack.org/developer/glance/metadefs-concepts.html. + // and tags. + // See http://docs.openstack.org/developer/glance/metadefs-concepts.html. Metadata map[string]string `json:"metadata"` - // Properties is a set of key-value pairs, if any, that are associated with the image. + // Properties is a set of key-value pairs, if any, that are associated with + // the image. 
Properties map[string]interface{} `json:"-"` // CreatedAt is the date when the image has been created. CreatedAt time.Time `json:"created_at"` - // UpdatedAt is the date when the last change has been made to the image or it's properties. + // UpdatedAt is the date when the last change has been made to the image or + // it's properties. UpdatedAt time.Time `json:"updated_at"` - // File is the trailing path after the glance endpoint that represent the location - // of the image or the path to retrieve it. + // File is the trailing path after the glance endpoint that represent the + // location of the image or the path to retrieve it. File string `json:"file"` - // Schema is the path to the JSON-schema that represent the image or image entity. + // Schema is the path to the JSON-schema that represent the image or image + // entity. Schema string `json:"schema"` // VirtualSize is the virtual size of the image @@ -97,7 +102,7 @@ func (r *Image) UnmarshalJSON(b []byte) error { switch t := s.SizeBytes.(type) { case nil: - return nil + r.SizeBytes = 0 case float32: r.SizeBytes = int64(t) case float64: @@ -131,38 +136,43 @@ func (r commonResult) Extract() (*Image, error) { return s, err } -// CreateResult represents the result of a Create operation +// CreateResult represents the result of a Create operation. Call its Extract +// method to interpret it as an Image. type CreateResult struct { commonResult } -// UpdateResult represents the result of an Update operation +// UpdateResult represents the result of an Update operation. Call its Extract +// method to interpret it as an Image. type UpdateResult struct { commonResult } -// GetResult represents the result of a Get operation +// GetResult represents the result of a Get operation. Call its Extract +// method to interpret it as an Image. type GetResult struct { commonResult } -//DeleteResult model +// DeleteResult represents the result of a Delete operation. Call its +// ExtractErr method to interpret it as an Image. type DeleteResult struct { gophercloud.ErrResult } -// ImagePage represents page +// ImagePage represents the results of a List request. type ImagePage struct { pagination.LinkedPageBase } -// IsEmpty returns true if a page contains no Images results. +// IsEmpty returns true if an ImagePage contains no Images results. func (r ImagePage) IsEmpty() (bool, error) { images, err := ExtractImages(r) return len(images) == 0, err } -// NextPageURL uses the response's embedded link reference to navigate to the next page of results. +// NextPageURL uses the response's embedded link reference to navigate to +// the next page of results. func (r ImagePage) NextPageURL() (string, error) { var s struct { Next string `json:"next"` @@ -179,7 +189,8 @@ func (r ImagePage) NextPageURL() (string, error) { return nextPageURL(r.URL.String(), s.Next) } -// ExtractImages interprets the results of a single page from a List() call, producing a slice of Image entities. +// ExtractImages interprets the results of a single page from a List() call, +// producing a slice of Image entities. 
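The images/doc.go example earlier collects every page at once with `AllPages`; as a sketch, the same pager can also be walked page by page with `EachPage` and `ExtractImages`. `imageClient` and `projectID` are placeholders.

```
err := images.List(imageClient, images.ListOpts{Owner: projectID}).EachPage(
	func(page pagination.Page) (bool, error) {
		imgs, err := images.ExtractImages(page)
		if err != nil {
			return false, err
		}
		for _, img := range imgs {
			fmt.Printf("%s %s\n", img.ID, img.Name)
		}
		return true, nil
	})
if err != nil {
	panic(err)
}
```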
func ExtractImages(r pagination.Page) ([]Image, error) { var s struct { Images []Image `json:"images"` diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/types.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/types.go index 086e7e5d5..d2f9cbd3b 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/types.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/images/types.go @@ -1,5 +1,9 @@ package images +import ( + "time" +) + // ImageStatus image statuses // http://docs.openstack.org/developer/glance/statuses.html type ImageStatus string @@ -9,7 +13,8 @@ const ( // been reserved for an image in the image registry. ImageStatusQueued ImageStatus = "queued" - // ImageStatusSaving denotes that an image’s raw data is currently being uploaded to Glance + // ImageStatusSaving denotes that an image’s raw data is currently being + // uploaded to Glance ImageStatusSaving ImageStatus = "saving" // ImageStatusActive denotes an image that is fully available in Glance. @@ -23,16 +28,18 @@ const ( // The image information is retained in the image registry. ImageStatusDeleted ImageStatus = "deleted" - // ImageStatusPendingDelete is similar to Delete, but the image is not yet deleted. + // ImageStatusPendingDelete is similar to Delete, but the image is not yet + // deleted. ImageStatusPendingDelete ImageStatus = "pending_delete" - // ImageStatusDeactivated denotes that access to image data is not allowed to any non-admin user. + // ImageStatusDeactivated denotes that access to image data is not allowed to + // any non-admin user. ImageStatusDeactivated ImageStatus = "deactivated" ) // ImageVisibility denotes an image that is fully available in Glance. -// This occurs when the image data is uploaded, or the image size -// is explicitly set to zero on creation. +// This occurs when the image data is uploaded, or the image size is explicitly +// set to zero on creation. // According to design // https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design type ImageVisibility string @@ -52,12 +59,13 @@ const ( // ImageVisibilityCommunity images: // - all users can see and boot it - // - users with tenantId in the member-list of the image with member_status == 'accepted' - // have this image in their default image-list + // - users with tenantId in the member-list of the image with + // member_status == 'accepted' have this image in their default image-list. ImageVisibilityCommunity ImageVisibility = "community" ) -// MemberStatus is a status for adding a new member (tenant) to an image member list. +// MemberStatus is a status for adding a new member (tenant) to an image +// member list. type ImageMemberStatus string const ( @@ -73,3 +81,24 @@ const ( // ImageMemberStatusAll ImageMemberStatusAll ImageMemberStatus = "all" ) + +// ImageDateFilter represents a valid filter to use for filtering +// images by their date during a List. +type ImageDateFilter string + +const ( + FilterGT ImageDateFilter = "gt" + FilterGTE ImageDateFilter = "gte" + FilterLT ImageDateFilter = "lt" + FilterLTE ImageDateFilter = "lte" + FilterNEQ ImageDateFilter = "neq" + FilterEQ ImageDateFilter = "eq" +) + +// ImageDateQuery represents a date field to be used for listing images. +// If no filter is specified, the query will act as though FilterEQ was +// set. 
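A minimal sketch of the date filtering described above, combining the `ImageDateQuery` type defined just below with `ListOpts.CreatedAtQuery`; `imageClient` is a placeholder.

```
listOpts := images.ListOpts{
	CreatedAtQuery: &images.ImageDateQuery{
		Date:   time.Date(2018, 1, 1, 0, 0, 0, 0, time.UTC),
		Filter: images.FilterGTE,
	},
}

allPages, err := images.List(imageClient, listOpts).AllPages()
if err != nil {
	panic(err)
}
```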
+type ImageDateQuery struct { + Date time.Time + Filter ImageDateFilter +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/doc.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/doc.go new file mode 100644 index 000000000..1a7132045 --- /dev/null +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/doc.go @@ -0,0 +1,58 @@ +/* +Package members enables management and retrieval of image members. + +Members are projects other than the image owner who have access to the image. + +Example to List Members of an Image + + imageID := "2b6cacd4-cfd6-4b95-8302-4c04ccf0be3f" + + allPages, err := members.List(imageID).AllPages() + if err != nil { + panic(err) + } + + allMembers, err := members.ExtractMembers(allPages) + if err != nil { + panic(err) + } + + for _, member := range allMembers { + fmt.Printf("%+v\n", member) + } + +Example to Add a Member to an Image + + imageID := "2b6cacd4-cfd6-4b95-8302-4c04ccf0be3f" + projectID := "fc404778935a4cebaddcb4788fb3ff2c" + + member, err := members.Create(imageClient, imageID, projectID).Extract() + if err != nil { + panic(err) + } + +Example to Update the Status of a Member + + imageID := "2b6cacd4-cfd6-4b95-8302-4c04ccf0be3f" + projectID := "fc404778935a4cebaddcb4788fb3ff2c" + + updateOpts := members.UpdateOpts{ + Status: "accepted", + } + + member, err := members.Update(imageClient, imageID, projectID, updateOpts).Extract() + if err != nil { + panic(err) + } + +Example to Delete a Member from an Image + + imageID := "2b6cacd4-cfd6-4b95-8302-4c04ccf0be3f" + projectID := "fc404778935a4cebaddcb4788fb3ff2c" + + err := members.Delete(imageClient, imageID, projectID).ExtractErr() + if err != nil { + panic(err) + } +*/ +package members diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/requests.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/requests.go index b16fb82d4..b80e54e83 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/requests.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/requests.go @@ -5,16 +5,24 @@ import ( "github.com/gophercloud/gophercloud/pagination" ) -// Create member for specific image -// -// Preconditions -// The specified images must exist. -// You can only add a new member to an image which 'visibility' attribute is private. -// You must be the owner of the specified image. -// Synchronous Postconditions -// With correct permissions, you can see the member status of the image as pending through API calls. -// -// More details here: http://developer.openstack.org/api-ref-image-v2.html#createImageMember-v2 +/* + Create member for specific image + + Preconditions + + * The specified images must exist. + * You can only add a new member to an image which 'visibility' attribute is + private. + * You must be the owner of the specified image. + + Synchronous Postconditions + + With correct permissions, you can see the member status of the image as + pending through API calls. 
+ + More details here: + http://developer.openstack.org/api-ref-image-v2.html#createImageMember-v2 +*/ func Create(client *gophercloud.ServiceClient, id string, member string) (r CreateResult) { b := map[string]interface{}{"member": member} _, r.Err = client.Post(createMemberURL(client, id), b, &r.Body, &gophercloud.RequestOpts{ @@ -23,8 +31,7 @@ func Create(client *gophercloud.ServiceClient, id string, member string) (r Crea return } -// List members returns list of members for specifed image id -// More details: http://developer.openstack.org/api-ref-image-v2.html#listImageMembers-v2 +// List members returns list of members for specifed image id. func List(client *gophercloud.ServiceClient, id string) pagination.Pager { return pagination.NewPager(client, listMembersURL(client, id), func(r pagination.PageResult) pagination.Page { return MemberPage{pagination.SinglePageBase(r)} @@ -32,26 +39,24 @@ func List(client *gophercloud.ServiceClient, id string) pagination.Pager { } // Get image member details. -// More details: http://developer.openstack.org/api-ref-image-v2.html#getImageMember-v2 func Get(client *gophercloud.ServiceClient, imageID string, memberID string) (r DetailsResult) { _, r.Err = client.Get(getMemberURL(client, imageID, memberID), &r.Body, &gophercloud.RequestOpts{OkCodes: []int{200}}) return } -// Delete membership for given image. -// Callee should be image owner -// More details: http://developer.openstack.org/api-ref-image-v2.html#deleteImageMember-v2 +// Delete membership for given image. Callee should be image owner. func Delete(client *gophercloud.ServiceClient, imageID string, memberID string) (r DeleteResult) { _, r.Err = client.Delete(deleteMemberURL(client, imageID, memberID), &gophercloud.RequestOpts{OkCodes: []int{204}}) return } -// UpdateOptsBuilder allows extensions to add additional attributes to the Update request. +// UpdateOptsBuilder allows extensions to add additional attributes to the +// Update request. type UpdateOptsBuilder interface { ToImageMemberUpdateMap() (map[string]interface{}, error) } -// UpdateOpts implements UpdateOptsBuilder +// UpdateOpts represents options to an Update request. type UpdateOpts struct { Status string } @@ -63,8 +68,7 @@ func (opts UpdateOpts) ToImageMemberUpdateMap() (map[string]interface{}, error) }, nil } -// Update function updates member -// More details: http://developer.openstack.org/api-ref-image-v2.html#updateImageMember-v2 +// Update function updates member. func Update(client *gophercloud.ServiceClient, imageID string, memberID string, opts UpdateOptsBuilder) (r UpdateResult) { b, err := opts.ToImageMemberUpdateMap() if err != nil { diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/results.go index d3cc1ceaf..ab694bdc0 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/results.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/imageservice/v2/members/results.go @@ -7,7 +7,7 @@ import ( "github.com/gophercloud/gophercloud/pagination" ) -// Member model +// Member represents a member of an Image. type Member struct { CreatedAt time.Time `json:"created_at"` ImageID string `json:"image_id"` @@ -17,7 +17,7 @@ type Member struct { UpdatedAt time.Time `json:"updated_at"` } -// Extract Member model from request if possible +// Extract Member model from a request. 
func (r commonResult) Extract() (*Member, error) { var s *Member err := r.ExtractInto(&s) @@ -29,7 +29,8 @@ type MemberPage struct { pagination.SinglePageBase } -// ExtractMembers returns a slice of Members contained in a single page of results. +// ExtractMembers returns a slice of Members contained in a single page +// of results. func ExtractMembers(r pagination.Page) ([]Member, error) { var s struct { Members []Member `json:"members"` @@ -38,7 +39,7 @@ func ExtractMembers(r pagination.Page) ([]Member, error) { return s.Members, err } -// IsEmpty determines whether or not a page of Members contains any results. +// IsEmpty determines whether or not a MemberPage contains any results. func (r MemberPage) IsEmpty() (bool, error) { members, err := ExtractMembers(r) return len(members) == 0, err @@ -48,22 +49,26 @@ type commonResult struct { gophercloud.Result } -// CreateResult result model +// CreateResult represents the result of a Create operation. Call its Extract +// method to interpret it as a Member. type CreateResult struct { commonResult } -// DetailsResult model +// DetailsResult represents the result of a Get operation. Call its Extract +// method to interpret it as a Member. type DetailsResult struct { commonResult } -// UpdateResult model +// UpdateResult represents the result of an Update operation. Call its Extract +// method to interpret it as a Member. type UpdateResult struct { commonResult } -// DeleteResult model +// DeleteResult represents the result of a Delete operation. Call its +// ExtractErr method to determine if the request succeeded or failed. type DeleteResult struct { gophercloud.ErrResult } diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/utils/base_endpoint.go b/vendor/github.com/gophercloud/gophercloud/openstack/utils/base_endpoint.go new file mode 100644 index 000000000..d6f9e34ea --- /dev/null +++ b/vendor/github.com/gophercloud/gophercloud/openstack/utils/base_endpoint.go @@ -0,0 +1,29 @@ +package utils + +import ( + "net/url" + "regexp" + "strings" +) + +// BaseEndpoint will return a URL without the /vX.Y +// portion of the URL. +func BaseEndpoint(endpoint string) (string, error) { + var base string + + u, err := url.Parse(endpoint) + if err != nil { + return base, err + } + + u.RawQuery, u.Fragment = "", "" + + versionRe := regexp.MustCompile("v[0-9.]+/?") + if version := versionRe.FindString(u.Path); version != "" { + base = strings.Replace(u.String(), version, "", -1) + } else { + base = u.String() + } + + return base, nil +} diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/utils/choose_version.go b/vendor/github.com/gophercloud/gophercloud/openstack/utils/choose_version.go index c605d0844..27da19f91 100644 --- a/vendor/github.com/gophercloud/gophercloud/openstack/utils/choose_version.go +++ b/vendor/github.com/gophercloud/gophercloud/openstack/utils/choose_version.go @@ -68,11 +68,6 @@ func ChooseVersion(client *gophercloud.ProviderClient, recognized []*Version) (* return nil, "", err } - byID := make(map[string]*Version) - for _, version := range recognized { - byID[version.ID] = version - } - var highest *Version var endpoint string @@ -84,20 +79,22 @@ func ChooseVersion(client *gophercloud.ProviderClient, recognized []*Version) (* } } - if matching, ok := byID[value.ID]; ok { - // Prefer a version that exactly matches the provided endpoint. 
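As an aside to the `BaseEndpoint` helper introduced above in the new utils/base_endpoint.go file: a minimal sketch of what it returns for a versioned Identity endpoint (the hostname is a placeholder).

```
base, err := utils.BaseEndpoint("https://identity.example.com:5000/v3/")
if err != nil {
	panic(err)
}

// base is now "https://identity.example.com:5000/"
fmt.Println(base)
```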
- if href == identityEndpoint { - if href == "" { - return nil, "", fmt.Errorf("Endpoint missing in version %s response from %s", value.ID, client.IdentityBase) + for _, version := range recognized { + if strings.Contains(value.ID, version.ID) { + // Prefer a version that exactly matches the provided endpoint. + if href == identityEndpoint { + if href == "" { + return nil, "", fmt.Errorf("Endpoint missing in version %s response from %s", value.ID, client.IdentityBase) + } + return version, href, nil } - return matching, href, nil - } - // Otherwise, find the highest-priority version with a whitelisted status. - if goodStatus[strings.ToLower(value.Status)] { - if highest == nil || matching.Priority > highest.Priority { - highest = matching - endpoint = href + // Otherwise, find the highest-priority version with a whitelisted status. + if goodStatus[strings.ToLower(value.Status)] { + if highest == nil || version.Priority > highest.Priority { + highest = version + endpoint = href + } } } } diff --git a/vendor/github.com/gophercloud/gophercloud/pagination/pager.go b/vendor/github.com/gophercloud/gophercloud/pagination/pager.go index 6f1609ef2..7c65926b7 100644 --- a/vendor/github.com/gophercloud/gophercloud/pagination/pager.go +++ b/vendor/github.com/gophercloud/gophercloud/pagination/pager.go @@ -22,7 +22,6 @@ var ( // Depending on the pagination strategy of a particular resource, there may be an additional subinterface that the result type // will need to implement. type Page interface { - // NextPageURL generates the URL for the page of data that follows this collection. // Return "" if no such page exists. NextPageURL() (string, error) diff --git a/vendor/github.com/gophercloud/gophercloud/params.go b/vendor/github.com/gophercloud/gophercloud/params.go index e484fe1c1..19b8cf7bf 100644 --- a/vendor/github.com/gophercloud/gophercloud/params.go +++ b/vendor/github.com/gophercloud/gophercloud/params.go @@ -10,10 +10,28 @@ import ( "time" ) -// BuildRequestBody builds a map[string]interface from the given `struct`. If -// parent is not the empty string, the final map[string]interface returned will -// encapsulate the built one -// +/* +BuildRequestBody builds a map[string]interface from the given `struct`. If +parent is not an empty string, the final map[string]interface returned will +encapsulate the built one. For example: + + disk := 1 + createOpts := flavors.CreateOpts{ + ID: "1", + Name: "m1.tiny", + Disk: &disk, + RAM: 512, + VCPUs: 1, + RxTxFactor: 1.0, + } + + body, err := gophercloud.BuildRequestBody(createOpts, "flavor") + +The above example can be run as-is, however it is recommended to look at how +BuildRequestBody is used within Gophercloud to more fully understand how it +fits within the request process as a whole rather than use it directly as shown +above. 
+*/ func BuildRequestBody(opts interface{}, parent string) (map[string]interface{}, error) { optsValue := reflect.ValueOf(opts) if optsValue.Kind() == reflect.Ptr { @@ -97,10 +115,15 @@ func BuildRequestBody(opts interface{}, parent string) (map[string]interface{}, } } + jsonTag := f.Tag.Get("json") + if jsonTag == "-" { + continue + } + if v.Kind() == reflect.Struct || (v.Kind() == reflect.Ptr && v.Elem().Kind() == reflect.Struct) { if zero { //fmt.Printf("value before change: %+v\n", optsValue.Field(i)) - if jsonTag := f.Tag.Get("json"); jsonTag != "" { + if jsonTag != "" { jsonTagPieces := strings.Split(jsonTag, ",") if len(jsonTagPieces) > 1 && jsonTagPieces[1] == "omitempty" { if v.CanSet() { @@ -329,12 +352,20 @@ func BuildQueryString(opts interface{}) (*url.URL, error) { params.Add(tags[0], v.Index(i).String()) } } + case reflect.Map: + if v.Type().Key().Kind() == reflect.String && v.Type().Elem().Kind() == reflect.String { + var s []string + for _, k := range v.MapKeys() { + value := v.MapIndex(k).String() + s = append(s, fmt.Sprintf("'%s':'%s'", k.String(), value)) + } + params.Add(tags[0], fmt.Sprintf("{%s}", strings.Join(s, ", "))) + } } } else { - // Otherwise, the field is not set. - if len(tags) == 2 && tags[1] == "required" { - // And the field is required. Return an error. - return nil, fmt.Errorf("Required query parameter [%s] not set.", f.Name) + // if the field has a 'required' tag, it can't have a zero-value + if requiredTag := f.Tag.Get("required"); requiredTag == "true" { + return &url.URL{}, fmt.Errorf("Required query parameter [%s] not set.", f.Name) } } } @@ -407,10 +438,9 @@ func BuildHeaders(opts interface{}) (map[string]string, error) { optsMap[tags[0]] = strconv.FormatBool(v.Bool()) } } else { - // Otherwise, the field is not set. - if len(tags) == 2 && tags[1] == "required" { - // And the field is required. Return an error. - return optsMap, fmt.Errorf("Required header not set.") + // if the field has a 'required' tag, it can't have a zero-value + if requiredTag := f.Tag.Get("required"); requiredTag == "true" { + return optsMap, fmt.Errorf("Required header [%s] not set.", f.Name) } } } diff --git a/vendor/github.com/gophercloud/gophercloud/provider_client.go b/vendor/github.com/gophercloud/gophercloud/provider_client.go index f88682381..17e451274 100644 --- a/vendor/github.com/gophercloud/gophercloud/provider_client.go +++ b/vendor/github.com/gophercloud/gophercloud/provider_client.go @@ -7,6 +7,7 @@ import ( "io/ioutil" "net/http" "strings" + "sync" ) // DefaultUserAgent is the default User-Agent string set in the request header. @@ -51,6 +52,8 @@ type ProviderClient struct { IdentityEndpoint string // TokenID is the ID of the most recently issued valid token. + // NOTE: Aside from within a custom ReauthFunc, this field shouldn't be set by an application. + // To safely read or write this value, call `Token` or `SetToken`, respectively TokenID string // EndpointLocator describes how this provider discovers the endpoints for @@ -68,16 +71,89 @@ type ProviderClient struct { // authentication functions for different Identity service versions. ReauthFunc func() error - Debug bool + mut *sync.RWMutex + + reauthmut *reauthlock +} + +type reauthlock struct { + sync.RWMutex + reauthing bool } // AuthenticatedHeaders returns a map of HTTP headers that are common for all // authenticated service requests. 
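A sketch of the new map handling added to `BuildQueryString` in this hunk; the `listOpts` struct below is hypothetical and exists only to show roughly what query string is produced.

```
type listOpts struct {
	Name     string            `q:"name"`
	Metadata map[string]string `q:"metadata"`
}

opts := listOpts{
	Name:     "test",
	Metadata: map[string]string{"env": "prod"},
}

query, err := gophercloud.BuildQueryString(opts)
if err != nil {
	panic(err)
}

// Roughly: ?metadata=%7B%27env%27%3A%27prod%27%7D&name=test
fmt.Println(query.String())
```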
-func (client *ProviderClient) AuthenticatedHeaders() map[string]string { - if client.TokenID == "" { - return map[string]string{} +func (client *ProviderClient) AuthenticatedHeaders() (m map[string]string) { + if client.reauthmut != nil { + client.reauthmut.RLock() + if client.reauthmut.reauthing { + client.reauthmut.RUnlock() + return + } + client.reauthmut.RUnlock() } - return map[string]string{"X-Auth-Token": client.TokenID} + t := client.Token() + if t == "" { + return + } + return map[string]string{"X-Auth-Token": t} +} + +// UseTokenLock creates a mutex that is used to allow safe concurrent access to the auth token. +// If the application's ProviderClient is not used concurrently, this doesn't need to be called. +func (client *ProviderClient) UseTokenLock() { + client.mut = new(sync.RWMutex) + client.reauthmut = new(reauthlock) +} + +// Token safely reads the value of the auth token from the ProviderClient. Applications should +// call this method to access the token instead of the TokenID field +func (client *ProviderClient) Token() string { + if client.mut != nil { + client.mut.RLock() + defer client.mut.RUnlock() + } + return client.TokenID +} + +// SetToken safely sets the value of the auth token in the ProviderClient. Applications may +// use this method in a custom ReauthFunc +func (client *ProviderClient) SetToken(t string) { + if client.mut != nil { + client.mut.Lock() + defer client.mut.Unlock() + } + client.TokenID = t +} + +//Reauthenticate calls client.ReauthFunc in a thread-safe way. If this is +//called because of a 401 response, the caller may pass the previous token. In +//this case, the reauthentication can be skipped if another thread has already +//reauthenticated in the meantime. If no previous token is known, an empty +//string should be passed instead to force unconditional reauthentication. +func (client *ProviderClient) Reauthenticate(previousToken string) (err error) { + if client.ReauthFunc == nil { + return nil + } + + if client.mut == nil { + return client.ReauthFunc() + } + client.mut.Lock() + defer client.mut.Unlock() + + client.reauthmut.Lock() + client.reauthmut.reauthing = true + client.reauthmut.Unlock() + + if previousToken == "" || client.TokenID == previousToken { + err = client.ReauthFunc() + } + + client.reauthmut.Lock() + client.reauthmut.reauthing = false + client.reauthmut.Unlock() + return } // RequestOpts customizes the behavior of the provider.Request() method. @@ -145,10 +221,6 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts) } req.Header.Set("Accept", applicationJSON) - for k, v := range client.AuthenticatedHeaders() { - req.Header.Add(k, v) - } - // Set the User-Agent header req.Header.Set("User-Agent", client.UserAgent.Join()) @@ -162,9 +234,16 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts) } } + // get latest token from client + for k, v := range client.AuthenticatedHeaders() { + req.Header.Set(k, v) + } + // Set connection parameter to close the connection immediately when we've got the response req.Close = true + prereqtok := req.Header.Get("X-Auth-Token") + // Issue the request. 
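An illustrative sketch (not part of this hunk) of the token-locking helpers added above (`UseTokenLock` and `Token`), assuming `authOpts` is a valid `gophercloud.AuthOptions` value.

```
provider, err := openstack.AuthenticatedClient(authOpts)
if err != nil {
	panic(err)
}

// Enable the internal mutexes before sharing the client across goroutines.
provider.UseTokenLock()

// Read the current token through the accessor instead of the TokenID field.
fmt.Println(provider.Token())
```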
resp, err := client.HTTPClient.Do(req) if err != nil { @@ -188,9 +267,6 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts) if !ok { body, _ := ioutil.ReadAll(resp.Body) resp.Body.Close() - //pc := make([]uintptr, 1) - //runtime.Callers(2, pc) - //f := runtime.FuncForPC(pc[0]) respErr := ErrUnexpectedResponseCode{ URL: url, Method: method, @@ -198,7 +274,6 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts) Actual: resp.StatusCode, Body: body, } - //respErr.Function = "gophercloud.ProviderClient.Request" errType := options.ErrorContext switch resp.StatusCode { @@ -209,7 +284,7 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts) } case http.StatusUnauthorized: if client.ReauthFunc != nil { - err = client.ReauthFunc() + err = client.Reauthenticate(prereqtok) if err != nil { e := &ErrUnableToReauthenticate{} e.ErrOriginal = respErr @@ -239,6 +314,11 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts) if error401er, ok := errType.(Err401er); ok { err = error401er.Error401(respErr) } + case http.StatusForbidden: + err = ErrDefault403{respErr} + if error403er, ok := errType.(Err403er); ok { + err = error403er.Error403(respErr) + } case http.StatusNotFound: err = ErrDefault404{respErr} if error404er, ok := errType.(Err404er); ok { diff --git a/vendor/github.com/gophercloud/gophercloud/results.go b/vendor/github.com/gophercloud/gophercloud/results.go index 76c16ef8f..fdd4830ec 100644 --- a/vendor/github.com/gophercloud/gophercloud/results.go +++ b/vendor/github.com/gophercloud/gophercloud/results.go @@ -78,6 +78,53 @@ func (r Result) extractIntoPtr(to interface{}, label string) error { return err } + toValue := reflect.ValueOf(to) + if toValue.Kind() == reflect.Ptr { + toValue = toValue.Elem() + } + + switch toValue.Kind() { + case reflect.Slice: + typeOfV := toValue.Type().Elem() + if typeOfV.Kind() == reflect.Struct { + if typeOfV.NumField() > 0 && typeOfV.Field(0).Anonymous { + newSlice := reflect.MakeSlice(reflect.SliceOf(typeOfV), 0, 0) + newType := reflect.New(typeOfV).Elem() + + for _, v := range m[label].([]interface{}) { + b, err := json.Marshal(v) + if err != nil { + return err + } + + for i := 0; i < newType.NumField(); i++ { + s := newType.Field(i).Addr().Interface() + err = json.NewDecoder(bytes.NewReader(b)).Decode(s) + if err != nil { + return err + } + } + newSlice = reflect.Append(newSlice, newType) + } + toValue.Set(newSlice) + } + } + case reflect.Struct: + typeOfV := toValue.Type() + if typeOfV.NumField() > 0 && typeOfV.Field(0).Anonymous { + for i := 0; i < toValue.NumField(); i++ { + toField := toValue.Field(i) + if toField.Kind() == reflect.Struct { + s := toField.Addr().Interface() + err = json.NewDecoder(bytes.NewReader(b)).Decode(s) + if err != nil { + return err + } + } + } + } + } + err = json.Unmarshal(b, &to) return err } @@ -177,9 +224,8 @@ type HeaderResult struct { Result } -// ExtractHeader will return the http.Header and error from the HeaderResult. -// -// header, err := objects.Create(client, "my_container", objects.CreateOpts{}).ExtractHeader() +// ExtractInto allows users to provide an object into which `Extract` will +// extract the http.Header headers of the result. func (r HeaderResult) ExtractInto(to interface{}) error { if r.Err != nil { return r.Err @@ -299,6 +345,27 @@ func (jt *JSONRFC3339NoZ) UnmarshalJSON(data []byte) error { return nil } +// RFC3339ZNoT is the time format used in Zun (Containers Service). 
+const RFC3339ZNoT = "2006-01-02 15:04:05-07:00" + +type JSONRFC3339ZNoT time.Time + +func (jt *JSONRFC3339ZNoT) UnmarshalJSON(data []byte) error { + var s string + if err := json.Unmarshal(data, &s); err != nil { + return err + } + if s == "" { + return nil + } + t, err := time.Parse(RFC3339ZNoT, s) + if err != nil { + return err + } + *jt = JSONRFC3339ZNoT(t) + return nil +} + /* Link is an internal type to be used in packages of collection resources that are paginated in a certain way. diff --git a/vendor/github.com/gophercloud/gophercloud/service_client.go b/vendor/github.com/gophercloud/gophercloud/service_client.go index 1160fefa7..2734510e1 100644 --- a/vendor/github.com/gophercloud/gophercloud/service_client.go +++ b/vendor/github.com/gophercloud/gophercloud/service_client.go @@ -28,6 +28,10 @@ type ServiceClient struct { // The microversion of the service to use. Set this to use a particular microversion. Microversion string + + // MoreHeaders allows users (or Gophercloud) to set service-wide headers on requests. Put another way, + // values set in this field will be set on all the HTTP requests the service client sends. + MoreHeaders map[string]string } // ResourceBaseURL returns the base URL of any resources used by this service. It MUST end with a /. @@ -108,15 +112,39 @@ func (client *ServiceClient) Delete(url string, opts *RequestOpts) (*http.Respon return client.Request("DELETE", url, opts) } +// Head calls `Request` with the "HEAD" HTTP verb. +func (client *ServiceClient) Head(url string, opts *RequestOpts) (*http.Response, error) { + if opts == nil { + opts = new(RequestOpts) + } + client.initReqOpts(url, nil, nil, opts) + return client.Request("HEAD", url, opts) +} + func (client *ServiceClient) setMicroversionHeader(opts *RequestOpts) { switch client.Type { case "compute": opts.MoreHeaders["X-OpenStack-Nova-API-Version"] = client.Microversion case "sharev2": opts.MoreHeaders["X-OpenStack-Manila-API-Version"] = client.Microversion + case "volume": + opts.MoreHeaders["X-OpenStack-Volume-API-Version"] = client.Microversion } if client.Type != "" { opts.MoreHeaders["OpenStack-API-Version"] = client.Type + " " + client.Microversion } } + +// Request carries out the HTTP operation for the service client +func (client *ServiceClient) Request(method, url string, options *RequestOpts) (*http.Response, error) { + if len(client.MoreHeaders) > 0 { + if options == nil { + options = new(RequestOpts) + } + for k, v := range client.MoreHeaders { + options.MoreHeaders[k] = v + } + } + return client.ProviderClient.Request(method, url, options) +} diff --git a/vendor/github.com/gophercloud/utils/LICENSE b/vendor/github.com/gophercloud/utils/LICENSE new file mode 100644 index 000000000..8dada3eda --- /dev/null +++ b/vendor/github.com/gophercloud/utils/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
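Before the clientconfig package below, an illustrative sketch of the `ServiceClient` additions from earlier in this diff (the `MoreHeaders` field and per-client microversions); the provider, region, microversion, and header values are all placeholders.

```
computeClient, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{
	Region: "RegionOne",
})
if err != nil {
	panic(err)
}

// Ask for a specific microversion and attach a header to every request
// sent through this service client.
computeClient.Microversion = "2.42"
computeClient.MoreHeaders = map[string]string{
	"X-Example-Header": "packer",
}
```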
diff --git a/vendor/github.com/gophercloud/utils/openstack/clientconfig/doc.go b/vendor/github.com/gophercloud/utils/openstack/clientconfig/doc.go new file mode 100644 index 000000000..1f3be2128 --- /dev/null +++ b/vendor/github.com/gophercloud/utils/openstack/clientconfig/doc.go @@ -0,0 +1,49 @@ +/* +Package clientconfig provides convienent functions for creating OpenStack +clients. It is based on the Python os-client-config library. + +See https://docs.openstack.org/os-client-config/latest for details. + +Example to Create a Provider Client From clouds.yaml + + opts := &clientconfig.ClientOpts{ + Name: "hawaii", + } + + pClient, err := clientconfig.AuthenticatedClient(opts) + if err != nil { + panic(err) + } + + +Example to Manually Create a Provider Client + + opts := &clientconfig.ClientOpts{ + AuthInfo: &clientconfig.AuthInfo{ + AuthURL: "https://hi.example.com:5000/v3", + Username: "jdoe", + Password: "password", + ProjectName: "Some Project", + DomainName: "default", + }, + } + + pClient, err := clientconfig.AuthenticatedClient(opts) + if err != nil { + panic(err) + } + + +Example to Create a Service Client from clouds.yaml + + opts := &clientconfig.ClientOpts{ + Name: "hawaii", + } + + computeClient, err := clientconfig.NewServiceClient("compute", opts) + if err != nil { + panic(err) + } + +*/ +package clientconfig diff --git a/vendor/github.com/gophercloud/utils/openstack/clientconfig/requests.go b/vendor/github.com/gophercloud/utils/openstack/clientconfig/requests.go new file mode 100644 index 000000000..254f72776 --- /dev/null +++ b/vendor/github.com/gophercloud/utils/openstack/clientconfig/requests.go @@ -0,0 +1,596 @@ +package clientconfig + +import ( + "fmt" + "os" + "strings" + + "github.com/gophercloud/gophercloud" + "github.com/gophercloud/gophercloud/openstack" + + "gopkg.in/yaml.v2" +) + +// AuthType respresents a valid method of authentication. +type AuthType string + +const ( + AuthPassword AuthType = "password" + AuthToken AuthType = "token" + + AuthV2Password AuthType = "v2password" + AuthV2Token AuthType = "v2token" + + AuthV3Password AuthType = "v3password" + AuthV3Token AuthType = "v3token" +) + +// ClientOpts represents options to customize the way a client is +// configured. +type ClientOpts struct { + // Cloud is the cloud entry in clouds.yaml to use. + Cloud string + + // EnvPrefix allows a custom environment variable prefix to be used. + EnvPrefix string + + // AuthType specifies the type of authentication to use. + // By default, this is "password". + AuthType AuthType + + // AuthInfo defines the authentication information needed to + // authenticate to a cloud when clouds.yaml isn't used. + AuthInfo *AuthInfo +} + +// LoadYAML will load a clouds.yaml file and return the full config. +func LoadYAML() (map[string]Cloud, error) { + content, err := findAndReadYAML() + if err != nil { + return nil, err + } + + var clouds Clouds + err = yaml.Unmarshal(content, &clouds) + if err != nil { + return nil, fmt.Errorf("failed to unmarshal yaml: %v", err) + } + + return clouds.Clouds, nil +} + +// GetCloudFromYAML will return a cloud entry from a clouds.yaml file. +func GetCloudFromYAML(opts *ClientOpts) (*Cloud, error) { + clouds, err := LoadYAML() + if err != nil { + return nil, fmt.Errorf("unable to load clouds.yaml: %s", err) + } + + // Determine which cloud to use. + // First see if a cloud name was explicitly set in opts. 
+ var cloudName string + if opts != nil && opts.Cloud != "" { + cloudName = opts.Cloud + } + + // Next see if a cloud name was specified as an environment variable. + // This is supposed to override an explicit opts setting. + envPrefix := "OS_" + if opts.EnvPrefix != "" { + envPrefix = opts.EnvPrefix + } + + if v := os.Getenv(envPrefix + "CLOUD"); v != "" { + cloudName = v + } + + var cloud *Cloud + if cloudName != "" { + v, ok := clouds[cloudName] + if !ok { + return nil, fmt.Errorf("cloud %s does not exist in clouds.yaml", cloudName) + } + cloud = &v + } + + // If a cloud was not specified, and clouds only contains + // a single entry, use that entry. + if cloudName == "" && len(clouds) == 1 { + for _, v := range clouds { + cloud = &v + } + } + + if cloud == nil { + return nil, fmt.Errorf("Unable to determine a valid entry in clouds.yaml") + } + + return cloud, nil +} + +// AuthOptions creates a gophercloud.AuthOptions structure with the +// settings found in a specific cloud entry of a clouds.yaml file or +// based on authentication settings given in ClientOpts. +// +// This attempts to be a single point of entry for all OpenStack authentication. +// +// See http://docs.openstack.org/developer/os-client-config and +// https://github.com/openstack/os-client-config/blob/master/os_client_config/config.py. +func AuthOptions(opts *ClientOpts) (*gophercloud.AuthOptions, error) { + cloud := new(Cloud) + + // If no opts were passed in, create an empty ClientOpts. + if opts == nil { + opts = new(ClientOpts) + } + + // Determine if a clouds.yaml entry should be retrieved. + // Start by figuring out the cloud name. + // First check if one was explicitly specified in opts. + var cloudName string + if opts.Cloud != "" { + cloudName = opts.Cloud + } + + // Next see if a cloud name was specified as an environment variable. + envPrefix := "OS_" + if opts.EnvPrefix != "" { + envPrefix = opts.EnvPrefix + } + + if v := os.Getenv(envPrefix + "CLOUD"); v != "" { + cloudName = v + } + + // If a cloud name was determined, try to look it up in clouds.yaml. + if cloudName != "" { + // Get the requested cloud. + var err error + cloud, err = GetCloudFromYAML(opts) + if err != nil { + return nil, err + } + } + + // If cloud.AuthInfo is nil, then no cloud was specified. + if cloud.AuthInfo == nil { + // If opts.Auth is not nil, then try using the auth settings from it. + if opts.AuthInfo != nil { + cloud.AuthInfo = opts.AuthInfo + } + + // If cloud.AuthInfo is still nil, then set it to an empty Auth struct + // and rely on environment variables to do the authentication. 
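An illustrative sketch (not part of this function) of the `EnvPrefix` handling above: selecting a clouds.yaml entry through an environment variable with a non-standard prefix. The `PACKER_` prefix and the `hawaii` cloud name are placeholders.

```
os.Setenv("PACKER_CLOUD", "hawaii")

opts := &clientconfig.ClientOpts{
	EnvPrefix: "PACKER_",
}

ao, err := clientconfig.AuthOptions(opts)
if err != nil {
	panic(err)
}

fmt.Printf("%+v\n", ao)
```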
+ if cloud.AuthInfo == nil { + cloud.AuthInfo = new(AuthInfo) + } + } + + identityAPI := determineIdentityAPI(cloud, opts) + switch identityAPI { + case "2.0", "2": + return v2auth(cloud, opts) + case "3": + return v3auth(cloud, opts) + } + + return nil, fmt.Errorf("Unable to build AuthOptions") +} + +func determineIdentityAPI(cloud *Cloud, opts *ClientOpts) string { + var identityAPI string + if cloud.IdentityAPIVersion != "" { + identityAPI = cloud.IdentityAPIVersion + } + + envPrefix := "OS_" + if opts != nil && opts.EnvPrefix != "" { + envPrefix = opts.EnvPrefix + } + + if v := os.Getenv(envPrefix + "IDENTITY_API_VERSION"); v != "" { + identityAPI = v + } + + if identityAPI == "" { + if cloud.AuthInfo != nil { + if strings.Contains(cloud.AuthInfo.AuthURL, "v2.0") { + identityAPI = "2.0" + } + + if strings.Contains(cloud.AuthInfo.AuthURL, "v3") { + identityAPI = "3" + } + } + } + + if identityAPI == "" { + switch cloud.AuthType { + case AuthV2Password: + identityAPI = "2.0" + case AuthV2Token: + identityAPI = "2.0" + case AuthV3Password: + identityAPI = "3" + case AuthV3Token: + identityAPI = "3" + } + } + + // If an Identity API version could not be determined, + // default to v3. + if identityAPI == "" { + identityAPI = "3" + } + + return identityAPI +} + +// v2auth creates a v2-compatible gophercloud.AuthOptions struct. +func v2auth(cloud *Cloud, opts *ClientOpts) (*gophercloud.AuthOptions, error) { + // Environment variable overrides. + envPrefix := "OS_" + if opts != nil && opts.EnvPrefix != "" { + envPrefix = opts.EnvPrefix + } + + if v := os.Getenv(envPrefix + "AUTH_URL"); v != "" { + cloud.AuthInfo.AuthURL = v + } + + if v := os.Getenv(envPrefix + "TOKEN"); v != "" { + cloud.AuthInfo.Token = v + } + + if v := os.Getenv(envPrefix + "AUTH_TOKEN"); v != "" { + cloud.AuthInfo.Token = v + } + + if v := os.Getenv(envPrefix + "USERNAME"); v != "" { + cloud.AuthInfo.Username = v + } + + if v := os.Getenv(envPrefix + "PASSWORD"); v != "" { + cloud.AuthInfo.Password = v + } + + if v := os.Getenv(envPrefix + "TENANT_ID"); v != "" { + cloud.AuthInfo.ProjectID = v + } + + if v := os.Getenv(envPrefix + "PROJECT_ID"); v != "" { + cloud.AuthInfo.ProjectID = v + } + + if v := os.Getenv(envPrefix + "TENANT_NAME"); v != "" { + cloud.AuthInfo.ProjectName = v + } + + if v := os.Getenv(envPrefix + "PROJECT_NAME"); v != "" { + cloud.AuthInfo.ProjectName = v + } + + ao := &gophercloud.AuthOptions{ + IdentityEndpoint: cloud.AuthInfo.AuthURL, + TokenID: cloud.AuthInfo.Token, + Username: cloud.AuthInfo.Username, + Password: cloud.AuthInfo.Password, + TenantID: cloud.AuthInfo.ProjectID, + TenantName: cloud.AuthInfo.ProjectName, + } + + return ao, nil +} + +// v3auth creates a v3-compatible gophercloud.AuthOptions struct. +func v3auth(cloud *Cloud, opts *ClientOpts) (*gophercloud.AuthOptions, error) { + // Environment variable overrides. 
+ envPrefix := "OS_" + if opts != nil && opts.EnvPrefix != "" { + envPrefix = opts.EnvPrefix + } + + if v := os.Getenv(envPrefix + "AUTH_URL"); v != "" { + cloud.AuthInfo.AuthURL = v + } + + if v := os.Getenv(envPrefix + "TOKEN"); v != "" { + cloud.AuthInfo.Token = v + } + + if v := os.Getenv(envPrefix + "AUTH_TOKEN"); v != "" { + cloud.AuthInfo.Token = v + } + + if v := os.Getenv(envPrefix + "USERNAME"); v != "" { + cloud.AuthInfo.Username = v + } + + if v := os.Getenv(envPrefix + "USER_ID"); v != "" { + cloud.AuthInfo.UserID = v + } + + if v := os.Getenv(envPrefix + "PASSWORD"); v != "" { + cloud.AuthInfo.Password = v + } + + if v := os.Getenv(envPrefix + "TENANT_ID"); v != "" { + cloud.AuthInfo.ProjectID = v + } + + if v := os.Getenv(envPrefix + "PROJECT_ID"); v != "" { + cloud.AuthInfo.ProjectID = v + } + + if v := os.Getenv(envPrefix + "TENANT_NAME"); v != "" { + cloud.AuthInfo.ProjectName = v + } + + if v := os.Getenv(envPrefix + "PROJECT_NAME"); v != "" { + cloud.AuthInfo.ProjectName = v + } + + if v := os.Getenv(envPrefix + "DOMAIN_ID"); v != "" { + cloud.AuthInfo.DomainID = v + } + + if v := os.Getenv(envPrefix + "DOMAIN_NAME"); v != "" { + cloud.AuthInfo.DomainName = v + } + + if v := os.Getenv(envPrefix + "DEFAULT_DOMAIN"); v != "" { + cloud.AuthInfo.DefaultDomain = v + } + + if v := os.Getenv(envPrefix + "PROJECT_DOMAIN_ID"); v != "" { + cloud.AuthInfo.ProjectDomainID = v + } + + if v := os.Getenv(envPrefix + "PROJECT_DOMAIN_NAME"); v != "" { + cloud.AuthInfo.ProjectDomainName = v + } + + if v := os.Getenv(envPrefix + "USER_DOMAIN_ID"); v != "" { + cloud.AuthInfo.UserDomainID = v + } + + if v := os.Getenv(envPrefix + "USER_DOMAIN_NAME"); v != "" { + cloud.AuthInfo.UserDomainName = v + } + + // Build a scope and try to do it correctly. + // https://github.com/openstack/os-client-config/blob/master/os_client_config/config.py#L595 + scope := new(gophercloud.AuthScope) + + if !isProjectScoped(cloud.AuthInfo) { + if cloud.AuthInfo.DomainID != "" { + scope.DomainID = cloud.AuthInfo.DomainID + } else if cloud.AuthInfo.DomainName != "" { + scope.DomainName = cloud.AuthInfo.DomainName + } + } else { + // If Domain* is set, but UserDomain* or ProjectDomain* aren't, + // then use Domain* as the default setting. + cloud = setDomainIfNeeded(cloud) + + if cloud.AuthInfo.ProjectID != "" { + scope.ProjectID = cloud.AuthInfo.ProjectID + } else { + scope.ProjectName = cloud.AuthInfo.ProjectName + scope.DomainID = cloud.AuthInfo.ProjectDomainID + scope.DomainName = cloud.AuthInfo.ProjectDomainName + } + } + + ao := &gophercloud.AuthOptions{ + Scope: scope, + IdentityEndpoint: cloud.AuthInfo.AuthURL, + TokenID: cloud.AuthInfo.Token, + Username: cloud.AuthInfo.Username, + UserID: cloud.AuthInfo.UserID, + Password: cloud.AuthInfo.Password, + TenantID: cloud.AuthInfo.ProjectID, + TenantName: cloud.AuthInfo.ProjectName, + DomainID: cloud.AuthInfo.UserDomainID, + DomainName: cloud.AuthInfo.UserDomainName, + } + + // If an auth_type of "token" was specified, then make sure + // Gophercloud properly authenticates with a token. This involves + // unsetting a few other auth options. The reason this is done + // here is to wait until all auth settings (both in clouds.yaml + // and via environment variables) are set and then unset them. + if strings.Contains(string(cloud.AuthType), "token") || ao.TokenID != "" { + ao.Username = "" + ao.Password = "" + ao.UserID = "" + ao.DomainID = "" + ao.DomainName = "" + } + + // Check for absolute minimum requirements. 
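Because v2auth and v3auth read every `OS_`-prefixed variable (or a custom `EnvPrefix`) before building the `gophercloud.AuthOptions`, a caller can skip clouds.yaml entirely and drive authentication from the environment. A hedged sketch of that path, assuming the usual `OS_*` variables are exported:

```
package main

import (
	"fmt"
	"log"

	"github.com/gophercloud/utils/openstack/clientconfig"
)

func main() {
	// With no ClientOpts and no cloud selected, AuthOptions falls back to an
	// empty AuthInfo and lets the v3 path above fill it in from OS_AUTH_URL,
	// OS_USERNAME, OS_PASSWORD, OS_PROJECT_NAME, OS_USER_DOMAIN_NAME,
	// OS_PROJECT_DOMAIN_NAME and friends.
	ao, err := clientconfig.AuthOptions(nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("authenticating against", ao.IdentityEndpoint)
}
```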
+ if ao.IdentityEndpoint == "" { + err := gophercloud.ErrMissingInput{Argument: "auth_url"} + return nil, err + } + + return ao, nil +} + +// AuthenticatedClient is a convenience function to get a new provider client +// based on a clouds.yaml entry. +func AuthenticatedClient(opts *ClientOpts) (*gophercloud.ProviderClient, error) { + ao, err := AuthOptions(opts) + if err != nil { + return nil, err + } + + return openstack.AuthenticatedClient(*ao) +} + +// NewServiceClient is a convenience function to get a new service client. +func NewServiceClient(service string, opts *ClientOpts) (*gophercloud.ServiceClient, error) { + cloud := new(Cloud) + + // If no opts were passed in, create an empty ClientOpts. + if opts == nil { + opts = new(ClientOpts) + } + + // Determine if a clouds.yaml entry should be retrieved. + // Start by figuring out the cloud name. + // First check if one was explicitly specified in opts. + var cloudName string + if opts.Cloud != "" { + cloudName = opts.Cloud + } + + // Next see if a cloud name was specified as an environment variable. + envPrefix := "OS_" + if opts.EnvPrefix != "" { + envPrefix = opts.EnvPrefix + } + + if v := os.Getenv(envPrefix + "CLOUD"); v != "" { + cloudName = v + } + + // If a cloud name was determined, try to look it up in clouds.yaml. + if cloudName != "" { + // Get the requested cloud. + var err error + cloud, err = GetCloudFromYAML(opts) + if err != nil { + return nil, err + } + } + + // Get a Provider Client + pClient, err := AuthenticatedClient(opts) + if err != nil { + return nil, err + } + + // Determine the region to use. + var region string + if v := cloud.RegionName; v != "" { + region = cloud.RegionName + } + + if v := os.Getenv(envPrefix + "REGION_NAME"); v != "" { + region = v + } + + eo := gophercloud.EndpointOpts{ + Region: region, + } + + switch service { + case "clustering": + return openstack.NewClusteringV1(pClient, eo) + case "compute": + return openstack.NewComputeV2(pClient, eo) + case "container": + return openstack.NewContainerV1(pClient, eo) + case "database": + return openstack.NewDBV1(pClient, eo) + case "dns": + return openstack.NewDNSV2(pClient, eo) + case "identity": + identityVersion := "3" + if v := cloud.IdentityAPIVersion; v != "" { + identityVersion = v + } + + switch identityVersion { + case "v2", "2", "2.0": + return openstack.NewIdentityV2(pClient, eo) + case "v3", "3": + return openstack.NewIdentityV3(pClient, eo) + default: + return nil, fmt.Errorf("invalid identity API version") + } + case "image": + return openstack.NewImageServiceV2(pClient, eo) + case "load-balancer": + return openstack.NewLoadBalancerV2(pClient, eo) + case "network": + return openstack.NewNetworkV2(pClient, eo) + case "object-store": + return openstack.NewObjectStorageV1(pClient, eo) + case "orchestration": + return openstack.NewOrchestrationV1(pClient, eo) + case "sharev2": + return openstack.NewSharedFileSystemV2(pClient, eo) + case "volume": + volumeVersion := "2" + if v := cloud.VolumeAPIVersion; v != "" { + volumeVersion = v + } + + switch volumeVersion { + case "v1", "1": + return openstack.NewBlockStorageV1(pClient, eo) + case "v2", "2": + return openstack.NewBlockStorageV2(pClient, eo) + case "v3", "3": + return openstack.NewBlockStorageV3(pClient, eo) + default: + return nil, fmt.Errorf("invalid volume API version") + } + } + + return nil, fmt.Errorf("unable to create a service client for %s", service) +} + +// isProjectScoped determines if an auth struct is project scoped. 
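NewServiceClient, defined above, picks the region from the cloud entry or `OS_REGION_NAME` and maps a service name onto the matching gophercloud constructor, honoring the `identity_api_version` and `volume_api_version` overrides (isProjectScoped's body follows below). A hedged sketch with a hypothetical cloud name:

```
package main

import (
	"log"

	"github.com/gophercloud/utils/openstack/clientconfig"
)

func main() {
	opts := &clientconfig.ClientOpts{Cloud: "hawaii"}

	// "volume" honors volume_api_version from clouds.yaml (v1, v2 or v3);
	// unknown service names fall through to the final error return above.
	blockStorage, err := clientconfig.NewServiceClient("volume", opts)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("block storage endpoint:", blockStorage.Endpoint)
}
```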
+func isProjectScoped(authInfo *AuthInfo) bool { + if authInfo.ProjectID == "" && authInfo.ProjectName == "" { + return false + } + + return true +} + +// setDomainIfNeeded will set a DomainID and DomainName +// to ProjectDomain* and UserDomain* if not already set. +func setDomainIfNeeded(cloud *Cloud) *Cloud { + if cloud.AuthInfo.DomainID != "" { + if cloud.AuthInfo.UserDomainID == "" { + cloud.AuthInfo.UserDomainID = cloud.AuthInfo.DomainID + } + + if cloud.AuthInfo.ProjectDomainID == "" { + cloud.AuthInfo.ProjectDomainID = cloud.AuthInfo.DomainID + } + + cloud.AuthInfo.DomainID = "" + } + + if cloud.AuthInfo.DomainName != "" { + if cloud.AuthInfo.UserDomainName == "" { + cloud.AuthInfo.UserDomainName = cloud.AuthInfo.DomainName + } + + if cloud.AuthInfo.ProjectDomainName == "" { + cloud.AuthInfo.ProjectDomainName = cloud.AuthInfo.DomainName + } + + cloud.AuthInfo.DomainName = "" + } + + // If Domain fields are still not set, and if DefaultDomain has a value, + // set UserDomainID and ProjectDomainID to DefaultDomain. + // https://github.com/openstack/osc-lib/blob/86129e6f88289ef14bfaa3f7c9cdfbea8d9fc944/osc_lib/cli/client_config.py#L117-L146 + if cloud.AuthInfo.DefaultDomain != "" { + if cloud.AuthInfo.UserDomainName == "" && cloud.AuthInfo.UserDomainID == "" { + cloud.AuthInfo.UserDomainID = cloud.AuthInfo.DefaultDomain + } + + if cloud.AuthInfo.ProjectDomainName == "" && cloud.AuthInfo.ProjectDomainID == "" { + cloud.AuthInfo.ProjectDomainID = cloud.AuthInfo.DefaultDomain + } + } + + return cloud +} diff --git a/vendor/github.com/gophercloud/utils/openstack/clientconfig/results.go b/vendor/github.com/gophercloud/utils/openstack/clientconfig/results.go new file mode 100644 index 000000000..cb7603700 --- /dev/null +++ b/vendor/github.com/gophercloud/utils/openstack/clientconfig/results.go @@ -0,0 +1,88 @@ +package clientconfig + +// Clouds represents a collection of Cloud entries in a clouds.yaml file. +// The format of clouds.yaml is documented at +// https://docs.openstack.org/os-client-config/latest/user/configuration.html. +type Clouds struct { + Clouds map[string]Cloud `yaml:"clouds"` +} + +// Cloud represents an entry in a clouds.yaml file. +type Cloud struct { + AuthInfo *AuthInfo `yaml:"auth"` + AuthType AuthType `yaml:"auth_type"` + RegionName string `yaml:"region_name"` + Regions []interface{} `yaml:"regions"` + + // API Version overrides. + IdentityAPIVersion string `yaml:"identity_api_version"` + VolumeAPIVersion string `yaml:"volume_api_version"` +} + +// Auth represents the auth section of a cloud entry or +// auth options entered explicitly in ClientOpts. +type AuthInfo struct { + // AuthURL is the keystone/identity endpoint URL. + AuthURL string `yaml:"auth_url"` + + // Token is a pre-generated authentication token. + Token string `yaml:"token"` + + // Username is the username of the user. + Username string `yaml:"username"` + + // UserID is the unique ID of a user. + UserID string `yaml:"user_id"` + + // Password is the password of the user. + Password string `yaml:"password"` + + // ProjectName is the common/human-readable name of a project. + // Users can be scoped to a project. + // ProjectName on its own is not enough to ensure a unique scope. It must + // also be combined with either a ProjectDomainName or ProjectDomainID. + // ProjectName cannot be combined with ProjectID in a scope. + ProjectName string `yaml:"project_name"` + + // ProjectID is the unique ID of a project. + // It can be used to scope a user to a specific project. 
+ ProjectID string `yaml:"project_id"` + + // UserDomainName is the name of the domain where a user resides. + // It is used to identify the source domain of a user. + UserDomainName string `yaml:"user_domain_name"` + + // UserDomainID is the unique ID of the domain where a user resides. + // It is used to identify the source domain of a user. + UserDomainID string `yaml:"user_domain_id"` + + // ProjectDomainName is the name of the domain where a project resides. + // It is used to identify the source domain of a project. + // ProjectDomainName can be used in addition to a ProjectName when scoping + // a user to a specific project. + ProjectDomainName string `yaml:"project_domain_name"` + + // ProjectDomainID is the name of the domain where a project resides. + // It is used to identify the source domain of a project. + // ProjectDomainID can be used in addition to a ProjectName when scoping + // a user to a specific project. + ProjectDomainID string `yaml:"project_domain_id"` + + // DomainName is the name of a domain which can be used to identify the + // source domain of either a user or a project. + // If UserDomainName and ProjectDomainName are not specified, then DomainName + // is used as a default choice. + // It can also be used be used to specify a domain-only scope. + DomainName string `yaml:"domain_name"` + + // DomainID is the unique ID of a domain which can be used to identify the + // source domain of eitehr a user or a project. + // If UserDomainID and ProjectDomainID are not specified, then DomainID is + // used as a default choice. + // It can also be used be used to specify a domain-only scope. + DomainID string `yaml:"domain_id"` + + // DefaultDomain is the domain ID to fall back on if no other domain has + // been specified and a domain is required for scope. + DefaultDomain string `yaml:"default_domain"` +} diff --git a/vendor/github.com/gophercloud/utils/openstack/clientconfig/utils.go b/vendor/github.com/gophercloud/utils/openstack/clientconfig/utils.go new file mode 100644 index 000000000..34518c6ed --- /dev/null +++ b/vendor/github.com/gophercloud/utils/openstack/clientconfig/utils.go @@ -0,0 +1,67 @@ +package clientconfig + +import ( + "fmt" + "io/ioutil" + "os" + "os/user" + "path/filepath" +) + +// findAndLoadYAML attempts to locate a clouds.yaml file in the following +// locations: +// +// 1. OS_CLIENT_CONFIG_FILE +// 2. Current directory. +// 3. unix-specific user_config_dir (~/.config/openstack/clouds.yaml) +// 4. unix-specific site_config_dir (/etc/openstack/clouds.yaml) +// +// If found, the contents of the file is returned. +func findAndReadYAML() ([]byte, error) { + // OS_CLIENT_CONFIG_FILE + if v := os.Getenv("OS_CLIENT_CONFIG_FILE"); v != "" { + if ok := fileExists(v); ok { + return ioutil.ReadFile(v) + } + } + + // current directory + cwd, err := os.Getwd() + if err != nil { + return nil, fmt.Errorf("unable to determine working directory: %s", err) + } + + filename := filepath.Join(cwd, "clouds.yaml") + if ok := fileExists(filename); ok { + return ioutil.ReadFile(filename) + } + + // unix user config directory: ~/.config/openstack. + currentUser, err := user.Current() + if err != nil { + return nil, fmt.Errorf("unable to get current user: %s", err) + } + + homeDir := currentUser.HomeDir + if homeDir != "" { + filename := filepath.Join(homeDir, ".config/openstack/clouds.yaml") + if ok := fileExists(filename); ok { + return ioutil.ReadFile(filename) + } + } + + // unix-specific site config directory: /etc/openstack. 
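The yaml struct tags on Clouds, Cloud and AuthInfo define the clouds.yaml schema this package consumes, and findAndReadYAML locates that file via `OS_CLIENT_CONFIG_FILE`, the working directory, `~/.config/openstack` or `/etc/openstack`. The sketch below unmarshals a minimal, hypothetical document directly to show the mapping; in normal use LoadYAML and GetCloudFromYAML read it from one of those locations.

```
package main

import (
	"fmt"

	"github.com/gophercloud/utils/openstack/clientconfig"
	"gopkg.in/yaml.v2"
)

// sample is a minimal, hypothetical clouds.yaml matching the yaml tags above.
const sample = `
clouds:
  hawaii:
    region_name: RegionOne
    identity_api_version: "3"
    auth:
      auth_url: https://hi.example.com:5000/v3
      username: jdoe
      password: secret
      project_name: Some Project
      user_domain_name: Default
      project_domain_name: Default
`

func main() {
	var clouds clientconfig.Clouds
	if err := yaml.Unmarshal([]byte(sample), &clouds); err != nil {
		panic(err)
	}
	fmt.Println(clouds.Clouds["hawaii"].AuthInfo.AuthURL)
}
```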
+ if ok := fileExists("/etc/openstack/clouds.yaml"); ok { + return ioutil.ReadFile("/etc/openstack/clouds.yaml") + } + + return nil, fmt.Errorf("no clouds.yaml file found") +} + +// fileExists checks for the existence of a file at a given location. +func fileExists(filename string) bool { + if _, err := os.Stat(filename); err == nil { + return true + } + return false +} diff --git a/vendor/github.com/gorilla/websocket/AUTHORS b/vendor/github.com/gorilla/websocket/AUTHORS new file mode 100644 index 000000000..b003eca0c --- /dev/null +++ b/vendor/github.com/gorilla/websocket/AUTHORS @@ -0,0 +1,8 @@ +# This is the official list of Gorilla WebSocket authors for copyright +# purposes. +# +# Please keep the list sorted. + +Gary Burd +Joachim Bauch + diff --git a/vendor/github.com/gorilla/websocket/LICENSE b/vendor/github.com/gorilla/websocket/LICENSE new file mode 100644 index 000000000..9171c9722 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/LICENSE @@ -0,0 +1,22 @@ +Copyright (c) 2013 The Gorilla WebSocket Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + + Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/gorilla/websocket/README.md b/vendor/github.com/gorilla/websocket/README.md new file mode 100644 index 000000000..33c3d2be3 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/README.md @@ -0,0 +1,64 @@ +# Gorilla WebSocket + +Gorilla WebSocket is a [Go](http://golang.org/) implementation of the +[WebSocket](http://www.rfc-editor.org/rfc/rfc6455.txt) protocol. 
+ +[![Build Status](https://travis-ci.org/gorilla/websocket.svg?branch=master)](https://travis-ci.org/gorilla/websocket) +[![GoDoc](https://godoc.org/github.com/gorilla/websocket?status.svg)](https://godoc.org/github.com/gorilla/websocket) + +### Documentation + +* [API Reference](http://godoc.org/github.com/gorilla/websocket) +* [Chat example](https://github.com/gorilla/websocket/tree/master/examples/chat) +* [Command example](https://github.com/gorilla/websocket/tree/master/examples/command) +* [Client and server example](https://github.com/gorilla/websocket/tree/master/examples/echo) +* [File watch example](https://github.com/gorilla/websocket/tree/master/examples/filewatch) + +### Status + +The Gorilla WebSocket package provides a complete and tested implementation of +the [WebSocket](http://www.rfc-editor.org/rfc/rfc6455.txt) protocol. The +package API is stable. + +### Installation + + go get github.com/gorilla/websocket + +### Protocol Compliance + +The Gorilla WebSocket package passes the server tests in the [Autobahn Test +Suite](http://autobahn.ws/testsuite) using the application in the [examples/autobahn +subdirectory](https://github.com/gorilla/websocket/tree/master/examples/autobahn). + +### Gorilla WebSocket compared with other packages + + + + + + + + + + + + + + + + + + +
+|  | github.com/gorilla | golang.org/x/net |
+| --- | --- | --- |
+| **RFC 6455 Features** | | |
+| Passes Autobahn Test Suite | Yes | No |
+| Receive fragmented message | Yes | No, see note 1 |
+| Send close message | Yes | No |
+| Send pings and receive pongs | Yes | No |
+| Get the type of a received data message | Yes | Yes, see note 2 |
+| **Other Features** | | |
+| Compression Extensions | Experimental | No |
+| Read message using io.Reader | Yes | No, see note 3 |
+| Write message using io.WriteCloser | Yes | No, see note 3 |
    + +Notes: + +1. Large messages are fragmented in [Chrome's new WebSocket implementation](http://www.ietf.org/mail-archive/web/hybi/current/msg10503.html). +2. The application can get the type of a received data message by implementing + a [Codec marshal](http://godoc.org/golang.org/x/net/websocket#Codec.Marshal) + function. +3. The go.net io.Reader and io.Writer operate across WebSocket frame boundaries. + Read returns when the input buffer is full or a frame boundary is + encountered. Each call to Write sends a single frame message. The Gorilla + io.Reader and io.WriteCloser operate on a single WebSocket message. + diff --git a/vendor/github.com/gorilla/websocket/client.go b/vendor/github.com/gorilla/websocket/client.go new file mode 100644 index 000000000..43a87c753 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/client.go @@ -0,0 +1,392 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "bufio" + "bytes" + "crypto/tls" + "encoding/base64" + "errors" + "io" + "io/ioutil" + "net" + "net/http" + "net/url" + "strings" + "time" +) + +// ErrBadHandshake is returned when the server response to opening handshake is +// invalid. +var ErrBadHandshake = errors.New("websocket: bad handshake") + +var errInvalidCompression = errors.New("websocket: invalid compression negotiation") + +// NewClient creates a new client connection using the given net connection. +// The URL u specifies the host and request URI. Use requestHeader to specify +// the origin (Origin), subprotocols (Sec-WebSocket-Protocol) and cookies +// (Cookie). Use the response.Header to get the selected subprotocol +// (Sec-WebSocket-Protocol) and cookies (Set-Cookie). +// +// If the WebSocket handshake fails, ErrBadHandshake is returned along with a +// non-nil *http.Response so that callers can handle redirects, authentication, +// etc. +// +// Deprecated: Use Dialer instead. +func NewClient(netConn net.Conn, u *url.URL, requestHeader http.Header, readBufSize, writeBufSize int) (c *Conn, response *http.Response, err error) { + d := Dialer{ + ReadBufferSize: readBufSize, + WriteBufferSize: writeBufSize, + NetDial: func(net, addr string) (net.Conn, error) { + return netConn, nil + }, + } + return d.Dial(u.String(), requestHeader) +} + +// A Dialer contains options for connecting to WebSocket server. +type Dialer struct { + // NetDial specifies the dial function for creating TCP connections. If + // NetDial is nil, net.Dial is used. + NetDial func(network, addr string) (net.Conn, error) + + // Proxy specifies a function to return a proxy for a given + // Request. If the function returns a non-nil error, the + // request is aborted with the provided error. + // If Proxy is nil or returns a nil *URL, no proxy is used. + Proxy func(*http.Request) (*url.URL, error) + + // TLSClientConfig specifies the TLS configuration to use with tls.Client. + // If nil, the default configuration is used. + TLSClientConfig *tls.Config + + // HandshakeTimeout specifies the duration for the handshake to complete. + HandshakeTimeout time.Duration + + // ReadBufferSize and WriteBufferSize specify I/O buffer sizes. If a buffer + // size is zero, then a useful default size is used. The I/O buffer sizes + // do not limit the size of the messages that can be sent or received. + ReadBufferSize, WriteBufferSize int + + // Subprotocols specifies the client's requested subprotocols. 
+ Subprotocols []string + + // EnableCompression specifies if the client should attempt to negotiate + // per message compression (RFC 7692). Setting this value to true does not + // guarantee that compression will be supported. Currently only "no context + // takeover" modes are supported. + EnableCompression bool + + // Jar specifies the cookie jar. + // If Jar is nil, cookies are not sent in requests and ignored + // in responses. + Jar http.CookieJar +} + +var errMalformedURL = errors.New("malformed ws or wss URL") + +// parseURL parses the URL. +// +// This function is a replacement for the standard library url.Parse function. +// In Go 1.4 and earlier, url.Parse loses information from the path. +func parseURL(s string) (*url.URL, error) { + // From the RFC: + // + // ws-URI = "ws:" "//" host [ ":" port ] path [ "?" query ] + // wss-URI = "wss:" "//" host [ ":" port ] path [ "?" query ] + var u url.URL + switch { + case strings.HasPrefix(s, "ws://"): + u.Scheme = "ws" + s = s[len("ws://"):] + case strings.HasPrefix(s, "wss://"): + u.Scheme = "wss" + s = s[len("wss://"):] + default: + return nil, errMalformedURL + } + + if i := strings.Index(s, "?"); i >= 0 { + u.RawQuery = s[i+1:] + s = s[:i] + } + + if i := strings.Index(s, "/"); i >= 0 { + u.Opaque = s[i:] + s = s[:i] + } else { + u.Opaque = "/" + } + + u.Host = s + + if strings.Contains(u.Host, "@") { + // Don't bother parsing user information because user information is + // not allowed in websocket URIs. + return nil, errMalformedURL + } + + return &u, nil +} + +func hostPortNoPort(u *url.URL) (hostPort, hostNoPort string) { + hostPort = u.Host + hostNoPort = u.Host + if i := strings.LastIndex(u.Host, ":"); i > strings.LastIndex(u.Host, "]") { + hostNoPort = hostNoPort[:i] + } else { + switch u.Scheme { + case "wss": + hostPort += ":443" + case "https": + hostPort += ":443" + default: + hostPort += ":80" + } + } + return hostPort, hostNoPort +} + +// DefaultDialer is a dialer with all fields set to the default zero values. +var DefaultDialer = &Dialer{ + Proxy: http.ProxyFromEnvironment, +} + +// Dial creates a new client connection. Use requestHeader to specify the +// origin (Origin), subprotocols (Sec-WebSocket-Protocol) and cookies (Cookie). +// Use the response.Header to get the selected subprotocol +// (Sec-WebSocket-Protocol) and cookies (Set-Cookie). +// +// If the WebSocket handshake fails, ErrBadHandshake is returned along with a +// non-nil *http.Response so that callers can handle redirects, authentication, +// etcetera. The response body may not contain the entire response and does not +// need to be closed by the application. +func (d *Dialer) Dial(urlStr string, requestHeader http.Header) (*Conn, *http.Response, error) { + + if d == nil { + d = &Dialer{ + Proxy: http.ProxyFromEnvironment, + } + } + + challengeKey, err := generateChallengeKey() + if err != nil { + return nil, nil, err + } + + u, err := parseURL(urlStr) + if err != nil { + return nil, nil, err + } + + switch u.Scheme { + case "ws": + u.Scheme = "http" + case "wss": + u.Scheme = "https" + default: + return nil, nil, errMalformedURL + } + + if u.User != nil { + // User name and password are not allowed in websocket URIs. 
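Dial, whose body continues through the next hunks, performs the whole opening handshake: URL parsing, an optional proxy CONNECT, TLS, and validation of the Sec-WebSocket-Accept response. A minimal hypothetical client built on DefaultDialer (the endpoint URL is made up):

```
package main

import (
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	// DefaultDialer uses http.ProxyFromEnvironment and default buffer sizes.
	conn, _, err := websocket.DefaultDialer.Dial("wss://echo.example.com/ws", nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer conn.Close()

	if err := conn.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
		log.Fatal("write:", err)
	}
	_, msg, err := conn.ReadMessage()
	if err != nil {
		log.Fatal("read:", err)
	}
	log.Printf("received: %s", msg)
}
```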
+ return nil, nil, errMalformedURL + } + + req := &http.Request{ + Method: "GET", + URL: u, + Proto: "HTTP/1.1", + ProtoMajor: 1, + ProtoMinor: 1, + Header: make(http.Header), + Host: u.Host, + } + + // Set the cookies present in the cookie jar of the dialer + if d.Jar != nil { + for _, cookie := range d.Jar.Cookies(u) { + req.AddCookie(cookie) + } + } + + // Set the request headers using the capitalization for names and values in + // RFC examples. Although the capitalization shouldn't matter, there are + // servers that depend on it. The Header.Set method is not used because the + // method canonicalizes the header names. + req.Header["Upgrade"] = []string{"websocket"} + req.Header["Connection"] = []string{"Upgrade"} + req.Header["Sec-WebSocket-Key"] = []string{challengeKey} + req.Header["Sec-WebSocket-Version"] = []string{"13"} + if len(d.Subprotocols) > 0 { + req.Header["Sec-WebSocket-Protocol"] = []string{strings.Join(d.Subprotocols, ", ")} + } + for k, vs := range requestHeader { + switch { + case k == "Host": + if len(vs) > 0 { + req.Host = vs[0] + } + case k == "Upgrade" || + k == "Connection" || + k == "Sec-Websocket-Key" || + k == "Sec-Websocket-Version" || + k == "Sec-Websocket-Extensions" || + (k == "Sec-Websocket-Protocol" && len(d.Subprotocols) > 0): + return nil, nil, errors.New("websocket: duplicate header not allowed: " + k) + default: + req.Header[k] = vs + } + } + + if d.EnableCompression { + req.Header.Set("Sec-Websocket-Extensions", "permessage-deflate; server_no_context_takeover; client_no_context_takeover") + } + + hostPort, hostNoPort := hostPortNoPort(u) + + var proxyURL *url.URL + // Check wether the proxy method has been configured + if d.Proxy != nil { + proxyURL, err = d.Proxy(req) + } + if err != nil { + return nil, nil, err + } + + var targetHostPort string + if proxyURL != nil { + targetHostPort, _ = hostPortNoPort(proxyURL) + } else { + targetHostPort = hostPort + } + + var deadline time.Time + if d.HandshakeTimeout != 0 { + deadline = time.Now().Add(d.HandshakeTimeout) + } + + netDial := d.NetDial + if netDial == nil { + netDialer := &net.Dialer{Deadline: deadline} + netDial = netDialer.Dial + } + + netConn, err := netDial("tcp", targetHostPort) + if err != nil { + return nil, nil, err + } + + defer func() { + if netConn != nil { + netConn.Close() + } + }() + + if err := netConn.SetDeadline(deadline); err != nil { + return nil, nil, err + } + + if proxyURL != nil { + connectHeader := make(http.Header) + if user := proxyURL.User; user != nil { + proxyUser := user.Username() + if proxyPassword, passwordSet := user.Password(); passwordSet { + credential := base64.StdEncoding.EncodeToString([]byte(proxyUser + ":" + proxyPassword)) + connectHeader.Set("Proxy-Authorization", "Basic "+credential) + } + } + connectReq := &http.Request{ + Method: "CONNECT", + URL: &url.URL{Opaque: hostPort}, + Host: hostPort, + Header: connectHeader, + } + + connectReq.Write(netConn) + + // Read response. + // Okay to use and discard buffered reader here, because + // TLS server will not speak until spoken to. 
+ br := bufio.NewReader(netConn) + resp, err := http.ReadResponse(br, connectReq) + if err != nil { + return nil, nil, err + } + if resp.StatusCode != 200 { + f := strings.SplitN(resp.Status, " ", 2) + return nil, nil, errors.New(f[1]) + } + } + + if u.Scheme == "https" { + cfg := cloneTLSConfig(d.TLSClientConfig) + if cfg.ServerName == "" { + cfg.ServerName = hostNoPort + } + tlsConn := tls.Client(netConn, cfg) + netConn = tlsConn + if err := tlsConn.Handshake(); err != nil { + return nil, nil, err + } + if !cfg.InsecureSkipVerify { + if err := tlsConn.VerifyHostname(cfg.ServerName); err != nil { + return nil, nil, err + } + } + } + + conn := newConn(netConn, false, d.ReadBufferSize, d.WriteBufferSize) + + if err := req.Write(netConn); err != nil { + return nil, nil, err + } + + resp, err := http.ReadResponse(conn.br, req) + if err != nil { + return nil, nil, err + } + + if d.Jar != nil { + if rc := resp.Cookies(); len(rc) > 0 { + d.Jar.SetCookies(u, rc) + } + } + + if resp.StatusCode != 101 || + !strings.EqualFold(resp.Header.Get("Upgrade"), "websocket") || + !strings.EqualFold(resp.Header.Get("Connection"), "upgrade") || + resp.Header.Get("Sec-Websocket-Accept") != computeAcceptKey(challengeKey) { + // Before closing the network connection on return from this + // function, slurp up some of the response to aid application + // debugging. + buf := make([]byte, 1024) + n, _ := io.ReadFull(resp.Body, buf) + resp.Body = ioutil.NopCloser(bytes.NewReader(buf[:n])) + return nil, resp, ErrBadHandshake + } + + for _, ext := range parseExtensions(resp.Header) { + if ext[""] != "permessage-deflate" { + continue + } + _, snct := ext["server_no_context_takeover"] + _, cnct := ext["client_no_context_takeover"] + if !snct || !cnct { + return nil, resp, errInvalidCompression + } + conn.newCompressionWriter = compressNoContextTakeover + conn.newDecompressionReader = decompressNoContextTakeover + break + } + + resp.Body = ioutil.NopCloser(bytes.NewReader([]byte{})) + conn.subprotocol = resp.Header.Get("Sec-Websocket-Protocol") + + netConn.SetDeadline(time.Time{}) + netConn = nil // to avoid close in defer. + return conn, resp, nil +} diff --git a/vendor/github.com/gorilla/websocket/client_clone.go b/vendor/github.com/gorilla/websocket/client_clone.go new file mode 100644 index 000000000..4f0d94372 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/client_clone.go @@ -0,0 +1,16 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.8 + +package websocket + +import "crypto/tls" + +func cloneTLSConfig(cfg *tls.Config) *tls.Config { + if cfg == nil { + return &tls.Config{} + } + return cfg.Clone() +} diff --git a/vendor/github.com/gorilla/websocket/client_clone_legacy.go b/vendor/github.com/gorilla/websocket/client_clone_legacy.go new file mode 100644 index 000000000..babb007fb --- /dev/null +++ b/vendor/github.com/gorilla/websocket/client_clone_legacy.go @@ -0,0 +1,38 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.8 + +package websocket + +import "crypto/tls" + +// cloneTLSConfig clones all public fields except the fields +// SessionTicketsDisabled and SessionTicketKey. This avoids copying the +// sync.Mutex in the sync.Once and makes it safe to call cloneTLSConfig on a +// config in active use. 
+func cloneTLSConfig(cfg *tls.Config) *tls.Config { + if cfg == nil { + return &tls.Config{} + } + return &tls.Config{ + Rand: cfg.Rand, + Time: cfg.Time, + Certificates: cfg.Certificates, + NameToCertificate: cfg.NameToCertificate, + GetCertificate: cfg.GetCertificate, + RootCAs: cfg.RootCAs, + NextProtos: cfg.NextProtos, + ServerName: cfg.ServerName, + ClientAuth: cfg.ClientAuth, + ClientCAs: cfg.ClientCAs, + InsecureSkipVerify: cfg.InsecureSkipVerify, + CipherSuites: cfg.CipherSuites, + PreferServerCipherSuites: cfg.PreferServerCipherSuites, + ClientSessionCache: cfg.ClientSessionCache, + MinVersion: cfg.MinVersion, + MaxVersion: cfg.MaxVersion, + CurvePreferences: cfg.CurvePreferences, + } +} diff --git a/vendor/github.com/gorilla/websocket/compression.go b/vendor/github.com/gorilla/websocket/compression.go new file mode 100644 index 000000000..813ffb1e8 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/compression.go @@ -0,0 +1,148 @@ +// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "compress/flate" + "errors" + "io" + "strings" + "sync" +) + +const ( + minCompressionLevel = -2 // flate.HuffmanOnly not defined in Go < 1.6 + maxCompressionLevel = flate.BestCompression + defaultCompressionLevel = 1 +) + +var ( + flateWriterPools [maxCompressionLevel - minCompressionLevel + 1]sync.Pool + flateReaderPool = sync.Pool{New: func() interface{} { + return flate.NewReader(nil) + }} +) + +func decompressNoContextTakeover(r io.Reader) io.ReadCloser { + const tail = + // Add four bytes as specified in RFC + "\x00\x00\xff\xff" + + // Add final block to squelch unexpected EOF error from flate reader. + "\x01\x00\x00\xff\xff" + + fr, _ := flateReaderPool.Get().(io.ReadCloser) + fr.(flate.Resetter).Reset(io.MultiReader(r, strings.NewReader(tail)), nil) + return &flateReadWrapper{fr} +} + +func isValidCompressionLevel(level int) bool { + return minCompressionLevel <= level && level <= maxCompressionLevel +} + +func compressNoContextTakeover(w io.WriteCloser, level int) io.WriteCloser { + p := &flateWriterPools[level-minCompressionLevel] + tw := &truncWriter{w: w} + fw, _ := p.Get().(*flate.Writer) + if fw == nil { + fw, _ = flate.NewWriter(tw, level) + } else { + fw.Reset(tw) + } + return &flateWriteWrapper{fw: fw, tw: tw, p: p} +} + +// truncWriter is an io.Writer that writes all but the last four bytes of the +// stream to another io.Writer. +type truncWriter struct { + w io.WriteCloser + n int + p [4]byte +} + +func (w *truncWriter) Write(p []byte) (int, error) { + n := 0 + + // fill buffer first for simplicity. 
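The flate plumbing above implements "no context takeover" per-message compression, but it only activates when the handshake negotiates the permessage-deflate extension; on the client side that is requested through Dialer.EnableCompression (see the Sec-Websocket-Extensions header set in Dial). A hedged configuration sketch with made-up endpoint and timeout values:

```
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	dialer := &websocket.Dialer{
		Proxy:             http.ProxyFromEnvironment,
		HandshakeTimeout:  10 * time.Second,
		EnableCompression: true, // request permessage-deflate (no context takeover)
		TLSClientConfig:   &tls.Config{MinVersion: tls.VersionTLS12},
	}

	conn, _, err := dialer.Dial("wss://example.com/stream", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Compression is only used if the server agreed during the handshake.
	conn.EnableWriteCompression(true)
}
```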
+ if w.n < len(w.p) { + n = copy(w.p[w.n:], p) + p = p[n:] + w.n += n + if len(p) == 0 { + return n, nil + } + } + + m := len(p) + if m > len(w.p) { + m = len(w.p) + } + + if nn, err := w.w.Write(w.p[:m]); err != nil { + return n + nn, err + } + + copy(w.p[:], w.p[m:]) + copy(w.p[len(w.p)-m:], p[len(p)-m:]) + nn, err := w.w.Write(p[:len(p)-m]) + return n + nn, err +} + +type flateWriteWrapper struct { + fw *flate.Writer + tw *truncWriter + p *sync.Pool +} + +func (w *flateWriteWrapper) Write(p []byte) (int, error) { + if w.fw == nil { + return 0, errWriteClosed + } + return w.fw.Write(p) +} + +func (w *flateWriteWrapper) Close() error { + if w.fw == nil { + return errWriteClosed + } + err1 := w.fw.Flush() + w.p.Put(w.fw) + w.fw = nil + if w.tw.p != [4]byte{0, 0, 0xff, 0xff} { + return errors.New("websocket: internal error, unexpected bytes at end of flate stream") + } + err2 := w.tw.w.Close() + if err1 != nil { + return err1 + } + return err2 +} + +type flateReadWrapper struct { + fr io.ReadCloser +} + +func (r *flateReadWrapper) Read(p []byte) (int, error) { + if r.fr == nil { + return 0, io.ErrClosedPipe + } + n, err := r.fr.Read(p) + if err == io.EOF { + // Preemptively place the reader back in the pool. This helps with + // scenarios where the application does not call NextReader() soon after + // this final read. + r.Close() + } + return n, err +} + +func (r *flateReadWrapper) Close() error { + if r.fr == nil { + return io.ErrClosedPipe + } + err := r.fr.Close() + flateReaderPool.Put(r.fr) + r.fr = nil + return err +} diff --git a/vendor/github.com/gorilla/websocket/conn.go b/vendor/github.com/gorilla/websocket/conn.go new file mode 100644 index 000000000..97e1dbacb --- /dev/null +++ b/vendor/github.com/gorilla/websocket/conn.go @@ -0,0 +1,1149 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "bufio" + "encoding/binary" + "errors" + "io" + "io/ioutil" + "math/rand" + "net" + "strconv" + "sync" + "time" + "unicode/utf8" +) + +const ( + // Frame header byte 0 bits from Section 5.2 of RFC 6455 + finalBit = 1 << 7 + rsv1Bit = 1 << 6 + rsv2Bit = 1 << 5 + rsv3Bit = 1 << 4 + + // Frame header byte 1 bits from Section 5.2 of RFC 6455 + maskBit = 1 << 7 + + maxFrameHeaderSize = 2 + 8 + 4 // Fixed header + length + mask + maxControlFramePayloadSize = 125 + + writeWait = time.Second + + defaultReadBufferSize = 4096 + defaultWriteBufferSize = 4096 + + continuationFrame = 0 + noFrame = -1 +) + +// Close codes defined in RFC 6455, section 11.7. +const ( + CloseNormalClosure = 1000 + CloseGoingAway = 1001 + CloseProtocolError = 1002 + CloseUnsupportedData = 1003 + CloseNoStatusReceived = 1005 + CloseAbnormalClosure = 1006 + CloseInvalidFramePayloadData = 1007 + ClosePolicyViolation = 1008 + CloseMessageTooBig = 1009 + CloseMandatoryExtension = 1010 + CloseInternalServerErr = 1011 + CloseServiceRestart = 1012 + CloseTryAgainLater = 1013 + CloseTLSHandshake = 1015 +) + +// The message types are defined in RFC 6455, section 11.8. +const ( + // TextMessage denotes a text data message. The text message payload is + // interpreted as UTF-8 encoded text data. + TextMessage = 1 + + // BinaryMessage denotes a binary data message. + BinaryMessage = 2 + + // CloseMessage denotes a close control message. The optional message + // payload contains a numeric code and text. Use the FormatCloseMessage + // function to format a close message payload. 
+ CloseMessage = 8 + + // PingMessage denotes a ping control message. The optional message payload + // is UTF-8 encoded text. + PingMessage = 9 + + // PongMessage denotes a ping control message. The optional message payload + // is UTF-8 encoded text. + PongMessage = 10 +) + +// ErrCloseSent is returned when the application writes a message to the +// connection after sending a close message. +var ErrCloseSent = errors.New("websocket: close sent") + +// ErrReadLimit is returned when reading a message that is larger than the +// read limit set for the connection. +var ErrReadLimit = errors.New("websocket: read limit exceeded") + +// netError satisfies the net Error interface. +type netError struct { + msg string + temporary bool + timeout bool +} + +func (e *netError) Error() string { return e.msg } +func (e *netError) Temporary() bool { return e.temporary } +func (e *netError) Timeout() bool { return e.timeout } + +// CloseError represents close frame. +type CloseError struct { + + // Code is defined in RFC 6455, section 11.7. + Code int + + // Text is the optional text payload. + Text string +} + +func (e *CloseError) Error() string { + s := []byte("websocket: close ") + s = strconv.AppendInt(s, int64(e.Code), 10) + switch e.Code { + case CloseNormalClosure: + s = append(s, " (normal)"...) + case CloseGoingAway: + s = append(s, " (going away)"...) + case CloseProtocolError: + s = append(s, " (protocol error)"...) + case CloseUnsupportedData: + s = append(s, " (unsupported data)"...) + case CloseNoStatusReceived: + s = append(s, " (no status)"...) + case CloseAbnormalClosure: + s = append(s, " (abnormal closure)"...) + case CloseInvalidFramePayloadData: + s = append(s, " (invalid payload data)"...) + case ClosePolicyViolation: + s = append(s, " (policy violation)"...) + case CloseMessageTooBig: + s = append(s, " (message too big)"...) + case CloseMandatoryExtension: + s = append(s, " (mandatory extension missing)"...) + case CloseInternalServerErr: + s = append(s, " (internal server error)"...) + case CloseTLSHandshake: + s = append(s, " (TLS handshake error)"...) + } + if e.Text != "" { + s = append(s, ": "...) + s = append(s, e.Text...) + } + return string(s) +} + +// IsCloseError returns boolean indicating whether the error is a *CloseError +// with one of the specified codes. +func IsCloseError(err error, codes ...int) bool { + if e, ok := err.(*CloseError); ok { + for _, code := range codes { + if e.Code == code { + return true + } + } + } + return false +} + +// IsUnexpectedCloseError returns boolean indicating whether the error is a +// *CloseError with a code not in the list of expected codes. 
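CloseError plus the IsCloseError and IsUnexpectedCloseError helpers (the latter's body follows) let an application tell orderly shutdowns apart from failures. A hypothetical read loop using them; the endpoint is made up:

```
package main

import (
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("wss://example.com/events", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			// CloseNormalClosure and CloseGoingAway are expected shutdowns;
			// everything else is flagged by IsUnexpectedCloseError.
			if websocket.IsUnexpectedCloseError(err, websocket.CloseNormalClosure, websocket.CloseGoingAway) {
				log.Printf("unexpected close: %v", err)
			}
			return
		}
		log.Printf("recv: %s", msg)
	}
}
```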
+func IsUnexpectedCloseError(err error, expectedCodes ...int) bool { + if e, ok := err.(*CloseError); ok { + for _, code := range expectedCodes { + if e.Code == code { + return false + } + } + return true + } + return false +} + +var ( + errWriteTimeout = &netError{msg: "websocket: write timeout", timeout: true, temporary: true} + errUnexpectedEOF = &CloseError{Code: CloseAbnormalClosure, Text: io.ErrUnexpectedEOF.Error()} + errBadWriteOpCode = errors.New("websocket: bad write message type") + errWriteClosed = errors.New("websocket: write closed") + errInvalidControlFrame = errors.New("websocket: invalid control frame") +) + +func newMaskKey() [4]byte { + n := rand.Uint32() + return [4]byte{byte(n), byte(n >> 8), byte(n >> 16), byte(n >> 24)} +} + +func hideTempErr(err error) error { + if e, ok := err.(net.Error); ok && e.Temporary() { + err = &netError{msg: e.Error(), timeout: e.Timeout()} + } + return err +} + +func isControl(frameType int) bool { + return frameType == CloseMessage || frameType == PingMessage || frameType == PongMessage +} + +func isData(frameType int) bool { + return frameType == TextMessage || frameType == BinaryMessage +} + +var validReceivedCloseCodes = map[int]bool{ + // see http://www.iana.org/assignments/websocket/websocket.xhtml#close-code-number + + CloseNormalClosure: true, + CloseGoingAway: true, + CloseProtocolError: true, + CloseUnsupportedData: true, + CloseNoStatusReceived: false, + CloseAbnormalClosure: false, + CloseInvalidFramePayloadData: true, + ClosePolicyViolation: true, + CloseMessageTooBig: true, + CloseMandatoryExtension: true, + CloseInternalServerErr: true, + CloseServiceRestart: true, + CloseTryAgainLater: true, + CloseTLSHandshake: false, +} + +func isValidReceivedCloseCode(code int) bool { + return validReceivedCloseCodes[code] || (code >= 3000 && code <= 4999) +} + +// The Conn type represents a WebSocket connection. +type Conn struct { + conn net.Conn + isServer bool + subprotocol string + + // Write fields + mu chan bool // used as mutex to protect write to conn + writeBuf []byte // frame is constructed in this buffer. + writeDeadline time.Time + writer io.WriteCloser // the current writer returned to the application + isWriting bool // for best-effort concurrent write detection + + writeErrMu sync.Mutex + writeErr error + + enableWriteCompression bool + compressionLevel int + newCompressionWriter func(io.WriteCloser, int) io.WriteCloser + + // Read fields + reader io.ReadCloser // the current reader returned to the application + readErr error + br *bufio.Reader + readRemaining int64 // bytes remaining in current frame. + readFinal bool // true the current message has more frames. + readLength int64 // Message size. + readLimit int64 // Maximum message size. 
+ readMaskPos int + readMaskKey [4]byte + handlePong func(string) error + handlePing func(string) error + handleClose func(int, string) error + readErrCount int + messageReader *messageReader // the current low-level reader + + readDecompress bool // whether last read frame had RSV1 set + newDecompressionReader func(io.Reader) io.ReadCloser +} + +func newConn(conn net.Conn, isServer bool, readBufferSize, writeBufferSize int) *Conn { + return newConnBRW(conn, isServer, readBufferSize, writeBufferSize, nil) +} + +type writeHook struct { + p []byte +} + +func (wh *writeHook) Write(p []byte) (int, error) { + wh.p = p + return len(p), nil +} + +func newConnBRW(conn net.Conn, isServer bool, readBufferSize, writeBufferSize int, brw *bufio.ReadWriter) *Conn { + mu := make(chan bool, 1) + mu <- true + + var br *bufio.Reader + if readBufferSize == 0 && brw != nil && brw.Reader != nil { + // Reuse the supplied bufio.Reader if the buffer has a useful size. + // This code assumes that peek on a reader returns + // bufio.Reader.buf[:0]. + brw.Reader.Reset(conn) + if p, err := brw.Reader.Peek(0); err == nil && cap(p) >= 256 { + br = brw.Reader + } + } + if br == nil { + if readBufferSize == 0 { + readBufferSize = defaultReadBufferSize + } + if readBufferSize < maxControlFramePayloadSize { + readBufferSize = maxControlFramePayloadSize + } + br = bufio.NewReaderSize(conn, readBufferSize) + } + + var writeBuf []byte + if writeBufferSize == 0 && brw != nil && brw.Writer != nil { + // Use the bufio.Writer's buffer if the buffer has a useful size. This + // code assumes that bufio.Writer.buf[:1] is passed to the + // bufio.Writer's underlying writer. + var wh writeHook + brw.Writer.Reset(&wh) + brw.Writer.WriteByte(0) + brw.Flush() + if cap(wh.p) >= maxFrameHeaderSize+256 { + writeBuf = wh.p[:cap(wh.p)] + } + } + + if writeBuf == nil { + if writeBufferSize == 0 { + writeBufferSize = defaultWriteBufferSize + } + writeBuf = make([]byte, writeBufferSize+maxFrameHeaderSize) + } + + c := &Conn{ + isServer: isServer, + br: br, + conn: conn, + mu: mu, + readFinal: true, + writeBuf: writeBuf, + enableWriteCompression: true, + compressionLevel: defaultCompressionLevel, + } + c.SetCloseHandler(nil) + c.SetPingHandler(nil) + c.SetPongHandler(nil) + return c +} + +// Subprotocol returns the negotiated protocol for the connection. +func (c *Conn) Subprotocol() string { + return c.subprotocol +} + +// Close closes the underlying network connection without sending or waiting for a close frame. +func (c *Conn) Close() error { + return c.conn.Close() +} + +// LocalAddr returns the local network address. +func (c *Conn) LocalAddr() net.Addr { + return c.conn.LocalAddr() +} + +// RemoteAddr returns the remote network address. 
+func (c *Conn) RemoteAddr() net.Addr { + return c.conn.RemoteAddr() +} + +// Write methods + +func (c *Conn) writeFatal(err error) error { + err = hideTempErr(err) + c.writeErrMu.Lock() + if c.writeErr == nil { + c.writeErr = err + } + c.writeErrMu.Unlock() + return err +} + +func (c *Conn) write(frameType int, deadline time.Time, bufs ...[]byte) error { + <-c.mu + defer func() { c.mu <- true }() + + c.writeErrMu.Lock() + err := c.writeErr + c.writeErrMu.Unlock() + if err != nil { + return err + } + + c.conn.SetWriteDeadline(deadline) + for _, buf := range bufs { + if len(buf) > 0 { + _, err := c.conn.Write(buf) + if err != nil { + return c.writeFatal(err) + } + } + } + + if frameType == CloseMessage { + c.writeFatal(ErrCloseSent) + } + return nil +} + +// WriteControl writes a control message with the given deadline. The allowed +// message types are CloseMessage, PingMessage and PongMessage. +func (c *Conn) WriteControl(messageType int, data []byte, deadline time.Time) error { + if !isControl(messageType) { + return errBadWriteOpCode + } + if len(data) > maxControlFramePayloadSize { + return errInvalidControlFrame + } + + b0 := byte(messageType) | finalBit + b1 := byte(len(data)) + if !c.isServer { + b1 |= maskBit + } + + buf := make([]byte, 0, maxFrameHeaderSize+maxControlFramePayloadSize) + buf = append(buf, b0, b1) + + if c.isServer { + buf = append(buf, data...) + } else { + key := newMaskKey() + buf = append(buf, key[:]...) + buf = append(buf, data...) + maskBytes(key, 0, buf[6:]) + } + + d := time.Hour * 1000 + if !deadline.IsZero() { + d = deadline.Sub(time.Now()) + if d < 0 { + return errWriteTimeout + } + } + + timer := time.NewTimer(d) + select { + case <-c.mu: + timer.Stop() + case <-timer.C: + return errWriteTimeout + } + defer func() { c.mu <- true }() + + c.writeErrMu.Lock() + err := c.writeErr + c.writeErrMu.Unlock() + if err != nil { + return err + } + + c.conn.SetWriteDeadline(deadline) + _, err = c.conn.Write(buf) + if err != nil { + return c.writeFatal(err) + } + if messageType == CloseMessage { + c.writeFatal(ErrCloseSent) + } + return err +} + +func (c *Conn) prepWrite(messageType int) error { + // Close previous writer if not already closed by the application. It's + // probably better to return an error in this situation, but we cannot + // change this without breaking existing applications. + if c.writer != nil { + c.writer.Close() + c.writer = nil + } + + if !isControl(messageType) && !isData(messageType) { + return errBadWriteOpCode + } + + c.writeErrMu.Lock() + err := c.writeErr + c.writeErrMu.Unlock() + return err +} + +// NextWriter returns a writer for the next message to send. The writer's Close +// method flushes the complete message to the network. +// +// There can be at most one open writer on a connection. NextWriter closes the +// previous writer if the application has not already done so. +func (c *Conn) NextWriter(messageType int) (io.WriteCloser, error) { + if err := c.prepWrite(messageType); err != nil { + return nil, err + } + + mw := &messageWriter{ + c: c, + frameType: messageType, + pos: maxFrameHeaderSize, + } + c.writer = mw + if c.newCompressionWriter != nil && c.enableWriteCompression && isData(messageType) { + w := c.newCompressionWriter(c.writer, c.compressionLevel) + mw.compress = true + c.writer = w + } + return c.writer, nil +} + +type messageWriter struct { + c *Conn + compress bool // whether next call to flushFrame should set RSV1 + pos int // end of data in writeBuf. + frameType int // type of the current frame. 
+ err error +} + +func (w *messageWriter) fatal(err error) error { + if w.err != nil { + w.err = err + w.c.writer = nil + } + return err +} + +// flushFrame writes buffered data and extra as a frame to the network. The +// final argument indicates that this is the last frame in the message. +func (w *messageWriter) flushFrame(final bool, extra []byte) error { + c := w.c + length := w.pos - maxFrameHeaderSize + len(extra) + + // Check for invalid control frames. + if isControl(w.frameType) && + (!final || length > maxControlFramePayloadSize) { + return w.fatal(errInvalidControlFrame) + } + + b0 := byte(w.frameType) + if final { + b0 |= finalBit + } + if w.compress { + b0 |= rsv1Bit + } + w.compress = false + + b1 := byte(0) + if !c.isServer { + b1 |= maskBit + } + + // Assume that the frame starts at beginning of c.writeBuf. + framePos := 0 + if c.isServer { + // Adjust up if mask not included in the header. + framePos = 4 + } + + switch { + case length >= 65536: + c.writeBuf[framePos] = b0 + c.writeBuf[framePos+1] = b1 | 127 + binary.BigEndian.PutUint64(c.writeBuf[framePos+2:], uint64(length)) + case length > 125: + framePos += 6 + c.writeBuf[framePos] = b0 + c.writeBuf[framePos+1] = b1 | 126 + binary.BigEndian.PutUint16(c.writeBuf[framePos+2:], uint16(length)) + default: + framePos += 8 + c.writeBuf[framePos] = b0 + c.writeBuf[framePos+1] = b1 | byte(length) + } + + if !c.isServer { + key := newMaskKey() + copy(c.writeBuf[maxFrameHeaderSize-4:], key[:]) + maskBytes(key, 0, c.writeBuf[maxFrameHeaderSize:w.pos]) + if len(extra) > 0 { + return c.writeFatal(errors.New("websocket: internal error, extra used in client mode")) + } + } + + // Write the buffers to the connection with best-effort detection of + // concurrent writes. See the concurrency section in the package + // documentation for more info. + + if c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = true + + err := c.write(w.frameType, c.writeDeadline, c.writeBuf[framePos:w.pos], extra) + + if !c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = false + + if err != nil { + return w.fatal(err) + } + + if final { + c.writer = nil + return nil + } + + // Setup for next frame. + w.pos = maxFrameHeaderSize + w.frameType = continuationFrame + return nil +} + +func (w *messageWriter) ncopy(max int) (int, error) { + n := len(w.c.writeBuf) - w.pos + if n <= 0 { + if err := w.flushFrame(false, nil); err != nil { + return 0, err + } + n = len(w.c.writeBuf) - w.pos + } + if n > max { + n = max + } + return n, nil +} + +func (w *messageWriter) Write(p []byte) (int, error) { + if w.err != nil { + return 0, w.err + } + + if len(p) > 2*len(w.c.writeBuf) && w.c.isServer { + // Don't buffer large messages. 
+ err := w.flushFrame(false, p) + if err != nil { + return 0, err + } + return len(p), nil + } + + nn := len(p) + for len(p) > 0 { + n, err := w.ncopy(len(p)) + if err != nil { + return 0, err + } + copy(w.c.writeBuf[w.pos:], p[:n]) + w.pos += n + p = p[n:] + } + return nn, nil +} + +func (w *messageWriter) WriteString(p string) (int, error) { + if w.err != nil { + return 0, w.err + } + + nn := len(p) + for len(p) > 0 { + n, err := w.ncopy(len(p)) + if err != nil { + return 0, err + } + copy(w.c.writeBuf[w.pos:], p[:n]) + w.pos += n + p = p[n:] + } + return nn, nil +} + +func (w *messageWriter) ReadFrom(r io.Reader) (nn int64, err error) { + if w.err != nil { + return 0, w.err + } + for { + if w.pos == len(w.c.writeBuf) { + err = w.flushFrame(false, nil) + if err != nil { + break + } + } + var n int + n, err = r.Read(w.c.writeBuf[w.pos:]) + w.pos += n + nn += int64(n) + if err != nil { + if err == io.EOF { + err = nil + } + break + } + } + return nn, err +} + +func (w *messageWriter) Close() error { + if w.err != nil { + return w.err + } + if err := w.flushFrame(true, nil); err != nil { + return err + } + w.err = errWriteClosed + return nil +} + +// WritePreparedMessage writes prepared message into connection. +func (c *Conn) WritePreparedMessage(pm *PreparedMessage) error { + frameType, frameData, err := pm.frame(prepareKey{ + isServer: c.isServer, + compress: c.newCompressionWriter != nil && c.enableWriteCompression && isData(pm.messageType), + compressionLevel: c.compressionLevel, + }) + if err != nil { + return err + } + if c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = true + err = c.write(frameType, c.writeDeadline, frameData, nil) + if !c.isWriting { + panic("concurrent write to websocket connection") + } + c.isWriting = false + return err +} + +// WriteMessage is a helper method for getting a writer using NextWriter, +// writing the message and closing the writer. +func (c *Conn) WriteMessage(messageType int, data []byte) error { + + if c.isServer && (c.newCompressionWriter == nil || !c.enableWriteCompression) { + // Fast path with no allocations and single frame. + + if err := c.prepWrite(messageType); err != nil { + return err + } + mw := messageWriter{c: c, frameType: messageType, pos: maxFrameHeaderSize} + n := copy(c.writeBuf[mw.pos:], data) + mw.pos += n + data = data[n:] + return mw.flushFrame(true, data) + } + + w, err := c.NextWriter(messageType) + if err != nil { + return err + } + if _, err = w.Write(data); err != nil { + return err + } + return w.Close() +} + +// SetWriteDeadline sets the write deadline on the underlying network +// connection. After a write has timed out, the websocket state is corrupt and +// all future writes will return an error. A zero value for t means writes will +// not time out. +func (c *Conn) SetWriteDeadline(t time.Time) error { + c.writeDeadline = t + return nil +} + +// Read methods + +func (c *Conn) advanceFrame() (int, error) { + + // 1. Skip remainder of previous frame. + + if c.readRemaining > 0 { + if _, err := io.CopyN(ioutil.Discard, c.br, c.readRemaining); err != nil { + return noFrame, err + } + } + + // 2. Read and parse first two bytes of frame header. 
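WriteMessage above is the single-frame fast path, NextWriter streams a message across frames, and WriteControl sends control frames under their own deadline; advanceFrame, the read side, continues below. A hypothetical sketch combining the three write entry points:

```
package main

import (
	"io"
	"log"
	"strings"
	"time"

	"github.com/gorilla/websocket"
)

func main() {
	// Hypothetical endpoint.
	conn, _, err := websocket.DefaultDialer.Dial("wss://example.com/upload", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Small message: a single frame via WriteMessage.
	if err := conn.WriteMessage(websocket.TextMessage, []byte("hello")); err != nil {
		log.Fatal(err)
	}

	// Larger payload: stream it through NextWriter; Close flushes the final frame.
	w, err := conn.NextWriter(websocket.BinaryMessage)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(w, strings.NewReader(strings.Repeat("x", 1<<20))); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Keepalive ping with a one-second deadline, as permitted by WriteControl.
	if err := conn.WriteControl(websocket.PingMessage, nil, time.Now().Add(time.Second)); err != nil {
		log.Fatal(err)
	}
}
```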
+ + p, err := c.read(2) + if err != nil { + return noFrame, err + } + + final := p[0]&finalBit != 0 + frameType := int(p[0] & 0xf) + mask := p[1]&maskBit != 0 + c.readRemaining = int64(p[1] & 0x7f) + + c.readDecompress = false + if c.newDecompressionReader != nil && (p[0]&rsv1Bit) != 0 { + c.readDecompress = true + p[0] &^= rsv1Bit + } + + if rsv := p[0] & (rsv1Bit | rsv2Bit | rsv3Bit); rsv != 0 { + return noFrame, c.handleProtocolError("unexpected reserved bits 0x" + strconv.FormatInt(int64(rsv), 16)) + } + + switch frameType { + case CloseMessage, PingMessage, PongMessage: + if c.readRemaining > maxControlFramePayloadSize { + return noFrame, c.handleProtocolError("control frame length > 125") + } + if !final { + return noFrame, c.handleProtocolError("control frame not final") + } + case TextMessage, BinaryMessage: + if !c.readFinal { + return noFrame, c.handleProtocolError("message start before final message frame") + } + c.readFinal = final + case continuationFrame: + if c.readFinal { + return noFrame, c.handleProtocolError("continuation after final message frame") + } + c.readFinal = final + default: + return noFrame, c.handleProtocolError("unknown opcode " + strconv.Itoa(frameType)) + } + + // 3. Read and parse frame length. + + switch c.readRemaining { + case 126: + p, err := c.read(2) + if err != nil { + return noFrame, err + } + c.readRemaining = int64(binary.BigEndian.Uint16(p)) + case 127: + p, err := c.read(8) + if err != nil { + return noFrame, err + } + c.readRemaining = int64(binary.BigEndian.Uint64(p)) + } + + // 4. Handle frame masking. + + if mask != c.isServer { + return noFrame, c.handleProtocolError("incorrect mask flag") + } + + if mask { + c.readMaskPos = 0 + p, err := c.read(len(c.readMaskKey)) + if err != nil { + return noFrame, err + } + copy(c.readMaskKey[:], p) + } + + // 5. For text and binary messages, enforce read limit and return. + + if frameType == continuationFrame || frameType == TextMessage || frameType == BinaryMessage { + + c.readLength += c.readRemaining + if c.readLimit > 0 && c.readLength > c.readLimit { + c.WriteControl(CloseMessage, FormatCloseMessage(CloseMessageTooBig, ""), time.Now().Add(writeWait)) + return noFrame, ErrReadLimit + } + + return frameType, nil + } + + // 6. Read control frame payload. + + var payload []byte + if c.readRemaining > 0 { + payload, err = c.read(int(c.readRemaining)) + c.readRemaining = 0 + if err != nil { + return noFrame, err + } + if c.isServer { + maskBytes(c.readMaskKey, 0, payload) + } + } + + // 7. Process control frame payload. 
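> Editorial aside (not part of the vendored file): the two header bytes parsed in steps 2-3 above pack the FIN flag, reserved bits, opcode, mask flag and 7-bit length exactly as RFC 6455 describes. A small sketch of that bit layout, using the same masks as the code:

```go
package example

// decodeHeader mirrors the bit twiddling in advanceFrame: FIN and MASK are the
// high bits of their bytes, the opcode is the low nibble of byte 0, and the low
// 7 bits of byte 1 hold the payload length (values 126 and 127 select the
// extended 16-bit and 64-bit length fields that follow).
func decodeHeader(b0, b1 byte) (final bool, opcode int, masked bool, len7 int64) {
	final = b0&0x80 != 0
	opcode = int(b0 & 0x0f)
	masked = b1&0x80 != 0
	len7 = int64(b1 & 0x7f)
	return
}
```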
+ + switch frameType { + case PongMessage: + if err := c.handlePong(string(payload)); err != nil { + return noFrame, err + } + case PingMessage: + if err := c.handlePing(string(payload)); err != nil { + return noFrame, err + } + case CloseMessage: + closeCode := CloseNoStatusReceived + closeText := "" + if len(payload) >= 2 { + closeCode = int(binary.BigEndian.Uint16(payload)) + if !isValidReceivedCloseCode(closeCode) { + return noFrame, c.handleProtocolError("invalid close code") + } + closeText = string(payload[2:]) + if !utf8.ValidString(closeText) { + return noFrame, c.handleProtocolError("invalid utf8 payload in close frame") + } + } + if err := c.handleClose(closeCode, closeText); err != nil { + return noFrame, err + } + return noFrame, &CloseError{Code: closeCode, Text: closeText} + } + + return frameType, nil +} + +func (c *Conn) handleProtocolError(message string) error { + c.WriteControl(CloseMessage, FormatCloseMessage(CloseProtocolError, message), time.Now().Add(writeWait)) + return errors.New("websocket: " + message) +} + +// NextReader returns the next data message received from the peer. The +// returned messageType is either TextMessage or BinaryMessage. +// +// There can be at most one open reader on a connection. NextReader discards +// the previous message if the application has not already consumed it. +// +// Applications must break out of the application's read loop when this method +// returns a non-nil error value. Errors returned from this method are +// permanent. Once this method returns a non-nil error, all subsequent calls to +// this method return the same error. +func (c *Conn) NextReader() (messageType int, r io.Reader, err error) { + // Close previous reader, only relevant for decompression. + if c.reader != nil { + c.reader.Close() + c.reader = nil + } + + c.messageReader = nil + c.readLength = 0 + + for c.readErr == nil { + frameType, err := c.advanceFrame() + if err != nil { + c.readErr = hideTempErr(err) + break + } + if frameType == TextMessage || frameType == BinaryMessage { + c.messageReader = &messageReader{c} + c.reader = c.messageReader + if c.readDecompress { + c.reader = c.newDecompressionReader(c.reader) + } + return frameType, c.reader, nil + } + } + + // Applications that do handle the error returned from this method spin in + // tight loop on connection failure. To help application developers detect + // this error, panic on repeated reads to the failed connection. 
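> Editorial aside (not part of the vendored file): close frames handled above surface to the application as a `*CloseError` from the read methods. A usage sketch, assuming `conn` is an established `*websocket.Conn`:

```go
package example

import (
	"log"

	"github.com/gorilla/websocket"
)

// readLoop drains the connection; errors returned by ReadMessage are permanent.
func readLoop(conn *websocket.Conn) {
	for {
		msgType, msg, err := conn.ReadMessage()
		if err != nil {
			if ce, ok := err.(*websocket.CloseError); ok {
				log.Printf("peer closed: code=%d text=%q", ce.Code, ce.Text)
			} else {
				log.Println("read:", err)
			}
			return
		}
		log.Printf("received type=%d len=%d", msgType, len(msg))
	}
}
```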
+ c.readErrCount++ + if c.readErrCount >= 1000 { + panic("repeated read on failed websocket connection") + } + + return noFrame, nil, c.readErr +} + +type messageReader struct{ c *Conn } + +func (r *messageReader) Read(b []byte) (int, error) { + c := r.c + if c.messageReader != r { + return 0, io.EOF + } + + for c.readErr == nil { + + if c.readRemaining > 0 { + if int64(len(b)) > c.readRemaining { + b = b[:c.readRemaining] + } + n, err := c.br.Read(b) + c.readErr = hideTempErr(err) + if c.isServer { + c.readMaskPos = maskBytes(c.readMaskKey, c.readMaskPos, b[:n]) + } + c.readRemaining -= int64(n) + if c.readRemaining > 0 && c.readErr == io.EOF { + c.readErr = errUnexpectedEOF + } + return n, c.readErr + } + + if c.readFinal { + c.messageReader = nil + return 0, io.EOF + } + + frameType, err := c.advanceFrame() + switch { + case err != nil: + c.readErr = hideTempErr(err) + case frameType == TextMessage || frameType == BinaryMessage: + c.readErr = errors.New("websocket: internal error, unexpected text or binary in Reader") + } + } + + err := c.readErr + if err == io.EOF && c.messageReader == r { + err = errUnexpectedEOF + } + return 0, err +} + +func (r *messageReader) Close() error { + return nil +} + +// ReadMessage is a helper method for getting a reader using NextReader and +// reading from that reader to a buffer. +func (c *Conn) ReadMessage() (messageType int, p []byte, err error) { + var r io.Reader + messageType, r, err = c.NextReader() + if err != nil { + return messageType, nil, err + } + p, err = ioutil.ReadAll(r) + return messageType, p, err +} + +// SetReadDeadline sets the read deadline on the underlying network connection. +// After a read has timed out, the websocket connection state is corrupt and +// all future reads will return an error. A zero value for t means reads will +// not time out. +func (c *Conn) SetReadDeadline(t time.Time) error { + return c.conn.SetReadDeadline(t) +} + +// SetReadLimit sets the maximum size for a message read from the peer. If a +// message exceeds the limit, the connection sends a close frame to the peer +// and returns ErrReadLimit to the application. +func (c *Conn) SetReadLimit(limit int64) { + c.readLimit = limit +} + +// CloseHandler returns the current close handler +func (c *Conn) CloseHandler() func(code int, text string) error { + return c.handleClose +} + +// SetCloseHandler sets the handler for close messages received from the peer. +// The code argument to h is the received close code or CloseNoStatusReceived +// if the close message is empty. The default close handler sends a close frame +// back to the peer. +// +// The application must read the connection to process close messages as +// described in the section on Control Frames above. +// +// The connection read methods return a CloseError when a close frame is +// received. Most applications should handle close messages as part of their +// normal error handling. Applications should only set a close handler when the +// application must perform some action before sending a close frame back to +// the peer. 
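> Editorial aside (not part of the vendored file): `ReadMessage` buffers the whole message, while `NextReader` exposes it as an `io.Reader`, which pairs naturally with `SetReadLimit` when messages may be large. A sketch, assuming `conn` is a live connection and the 1 MiB limit is an application choice:

```go
package example

import (
	"crypto/sha256"
	"encoding/hex"
	"io"

	"github.com/gorilla/websocket"
)

// hashNextMessage streams one message through a hash instead of holding it in memory.
func hashNextMessage(conn *websocket.Conn) (string, error) {
	conn.SetReadLimit(1 << 20) // larger messages cause the read to fail with ErrReadLimit
	_, r, err := conn.NextReader()
	if err != nil {
		return "", err
	}
	h := sha256.New()
	if _, err := io.Copy(h, r); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```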
+func (c *Conn) SetCloseHandler(h func(code int, text string) error) { + if h == nil { + h = func(code int, text string) error { + message := []byte{} + if code != CloseNoStatusReceived { + message = FormatCloseMessage(code, "") + } + c.WriteControl(CloseMessage, message, time.Now().Add(writeWait)) + return nil + } + } + c.handleClose = h +} + +// PingHandler returns the current ping handler +func (c *Conn) PingHandler() func(appData string) error { + return c.handlePing +} + +// SetPingHandler sets the handler for ping messages received from the peer. +// The appData argument to h is the PING frame application data. The default +// ping handler sends a pong to the peer. +// +// The application must read the connection to process ping messages as +// described in the section on Control Frames above. +func (c *Conn) SetPingHandler(h func(appData string) error) { + if h == nil { + h = func(message string) error { + err := c.WriteControl(PongMessage, []byte(message), time.Now().Add(writeWait)) + if err == ErrCloseSent { + return nil + } else if e, ok := err.(net.Error); ok && e.Temporary() { + return nil + } + return err + } + } + c.handlePing = h +} + +// PongHandler returns the current pong handler +func (c *Conn) PongHandler() func(appData string) error { + return c.handlePong +} + +// SetPongHandler sets the handler for pong messages received from the peer. +// The appData argument to h is the PONG frame application data. The default +// pong handler does nothing. +// +// The application must read the connection to process ping messages as +// described in the section on Control Frames above. +func (c *Conn) SetPongHandler(h func(appData string) error) { + if h == nil { + h = func(string) error { return nil } + } + c.handlePong = h +} + +// UnderlyingConn returns the internal net.Conn. This can be used to further +// modifications to connection specific flags. +func (c *Conn) UnderlyingConn() net.Conn { + return c.conn +} + +// EnableWriteCompression enables and disables write compression of +// subsequent text and binary messages. This function is a noop if +// compression was not negotiated with the peer. +func (c *Conn) EnableWriteCompression(enable bool) { + c.enableWriteCompression = enable +} + +// SetCompressionLevel sets the flate compression level for subsequent text and +// binary messages. This function is a noop if compression was not negotiated +// with the peer. See the compress/flate package for a description of +// compression levels. +func (c *Conn) SetCompressionLevel(level int) error { + if !isValidCompressionLevel(level) { + return errors.New("websocket: invalid compression level") + } + c.compressionLevel = level + return nil +} + +// FormatCloseMessage formats closeCode and text as a WebSocket close message. +func FormatCloseMessage(closeCode int, text string) []byte { + buf := make([]byte, 2+len(text)) + binary.BigEndian.PutUint16(buf, uint16(closeCode)) + copy(buf[2:], text) + return buf +} diff --git a/vendor/github.com/gorilla/websocket/conn_read.go b/vendor/github.com/gorilla/websocket/conn_read.go new file mode 100644 index 000000000..1ea15059e --- /dev/null +++ b/vendor/github.com/gorilla/websocket/conn_read.go @@ -0,0 +1,18 @@ +// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
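> Editorial aside (not part of the vendored file): the ping/pong handlers above enable the usual keep-alive pattern: extend the read deadline whenever a pong arrives and send pings on a ticker. A sketch; `conn`, `done` and the period constants are assumptions supplied by the application:

```go
package example

import (
	"time"

	"github.com/gorilla/websocket"
)

const (
	pongWait   = 60 * time.Second
	pingPeriod = (pongWait * 9) / 10
)

// keepAlive extends the read deadline on pongs and pings the peer periodically.
// WriteControl is safe to call concurrently with the connection's writer.
func keepAlive(conn *websocket.Conn, done <-chan struct{}) {
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})
	ticker := time.NewTicker(pingPeriod)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			if err := conn.WriteControl(websocket.PingMessage, nil, time.Now().Add(10*time.Second)); err != nil {
				return
			}
		case <-done:
			return
		}
	}
}
```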
+ +// +build go1.5 + +package websocket + +import "io" + +func (c *Conn) read(n int) ([]byte, error) { + p, err := c.br.Peek(n) + if err == io.EOF { + err = errUnexpectedEOF + } + c.br.Discard(len(p)) + return p, err +} diff --git a/vendor/github.com/gorilla/websocket/conn_read_legacy.go b/vendor/github.com/gorilla/websocket/conn_read_legacy.go new file mode 100644 index 000000000..018541cf6 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/conn_read_legacy.go @@ -0,0 +1,21 @@ +// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.5 + +package websocket + +import "io" + +func (c *Conn) read(n int) ([]byte, error) { + p, err := c.br.Peek(n) + if err == io.EOF { + err = errUnexpectedEOF + } + if len(p) > 0 { + // advance over the bytes just read + io.ReadFull(c.br, p) + } + return p, err +} diff --git a/vendor/github.com/gorilla/websocket/doc.go b/vendor/github.com/gorilla/websocket/doc.go new file mode 100644 index 000000000..e291a952c --- /dev/null +++ b/vendor/github.com/gorilla/websocket/doc.go @@ -0,0 +1,180 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package websocket implements the WebSocket protocol defined in RFC 6455. +// +// Overview +// +// The Conn type represents a WebSocket connection. A server application uses +// the Upgrade function from an Upgrader object with a HTTP request handler +// to get a pointer to a Conn: +// +// var upgrader = websocket.Upgrader{ +// ReadBufferSize: 1024, +// WriteBufferSize: 1024, +// } +// +// func handler(w http.ResponseWriter, r *http.Request) { +// conn, err := upgrader.Upgrade(w, r, nil) +// if err != nil { +// log.Println(err) +// return +// } +// ... Use conn to send and receive messages. +// } +// +// Call the connection's WriteMessage and ReadMessage methods to send and +// receive messages as a slice of bytes. This snippet of code shows how to echo +// messages using these methods: +// +// for { +// messageType, p, err := conn.ReadMessage() +// if err != nil { +// return +// } +// if err = conn.WriteMessage(messageType, p); err != nil { +// return err +// } +// } +// +// In above snippet of code, p is a []byte and messageType is an int with value +// websocket.BinaryMessage or websocket.TextMessage. +// +// An application can also send and receive messages using the io.WriteCloser +// and io.Reader interfaces. To send a message, call the connection NextWriter +// method to get an io.WriteCloser, write the message to the writer and close +// the writer when done. To receive a message, call the connection NextReader +// method to get an io.Reader and read until io.EOF is returned. This snippet +// shows how to echo messages using the NextWriter and NextReader methods: +// +// for { +// messageType, r, err := conn.NextReader() +// if err != nil { +// return +// } +// w, err := conn.NextWriter(messageType) +// if err != nil { +// return err +// } +// if _, err := io.Copy(w, r); err != nil { +// return err +// } +// if err := w.Close(); err != nil { +// return err +// } +// } +// +// Data Messages +// +// The WebSocket protocol distinguishes between text and binary data messages. +// Text messages are interpreted as UTF-8 encoded text. The interpretation of +// binary messages is left to the application. 
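> Editorial aside (not part of the vendored files): the two `read` variants above differ only in how they consume the peeked bytes: `bufio.Reader.Discard` on Go 1.5+, `io.ReadFull` on older releases. A standalone illustration of the Go 1.5 path, using only the standard library:

```go
package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"strings"
)

func main() {
	br := bufio.NewReader(strings.NewReader("hello world"))
	p, err := br.Peek(5) // look at the next 5 bytes without consuming them
	if err != nil {
		panic(err)
	}
	br.Discard(len(p)) // now advance past the peeked bytes (Go 1.5+)
	rest, _ := ioutil.ReadAll(br)
	fmt.Printf("peeked %q, remaining %q\n", p, rest)
	// Output: peeked "hello", remaining " world"
}
```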
+// +// This package uses the TextMessage and BinaryMessage integer constants to +// identify the two data message types. The ReadMessage and NextReader methods +// return the type of the received message. The messageType argument to the +// WriteMessage and NextWriter methods specifies the type of a sent message. +// +// It is the application's responsibility to ensure that text messages are +// valid UTF-8 encoded text. +// +// Control Messages +// +// The WebSocket protocol defines three types of control messages: close, ping +// and pong. Call the connection WriteControl, WriteMessage or NextWriter +// methods to send a control message to the peer. +// +// Connections handle received close messages by sending a close message to the +// peer and returning a *CloseError from the the NextReader, ReadMessage or the +// message Read method. +// +// Connections handle received ping and pong messages by invoking callback +// functions set with SetPingHandler and SetPongHandler methods. The callback +// functions are called from the NextReader, ReadMessage and the message Read +// methods. +// +// The default ping handler sends a pong to the peer. The application's reading +// goroutine can block for a short time while the handler writes the pong data +// to the connection. +// +// The application must read the connection to process ping, pong and close +// messages sent from the peer. If the application is not otherwise interested +// in messages from the peer, then the application should start a goroutine to +// read and discard messages from the peer. A simple example is: +// +// func readLoop(c *websocket.Conn) { +// for { +// if _, _, err := c.NextReader(); err != nil { +// c.Close() +// break +// } +// } +// } +// +// Concurrency +// +// Connections support one concurrent reader and one concurrent writer. +// +// Applications are responsible for ensuring that no more than one goroutine +// calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, +// WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and +// that no more than one goroutine calls the read methods (NextReader, +// SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) +// concurrently. +// +// The Close and WriteControl methods can be called concurrently with all other +// methods. +// +// Origin Considerations +// +// Web browsers allow Javascript applications to open a WebSocket connection to +// any host. It's up to the server to enforce an origin policy using the Origin +// request header sent by the browser. +// +// The Upgrader calls the function specified in the CheckOrigin field to check +// the origin. If the CheckOrigin function returns false, then the Upgrade +// method fails the WebSocket handshake with HTTP status 403. +// +// If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail +// the handshake if the Origin request header is present and not equal to the +// Host request header. +// +// An application can allow connections from any origin by specifying a +// function that always returns true: +// +// var upgrader = websocket.Upgrader{ +// CheckOrigin: func(r *http.Request) bool { return true }, +// } +// +// The deprecated Upgrade function does not enforce an origin policy. It's the +// application's responsibility to check the Origin header before calling +// Upgrade. +// +// Compression EXPERIMENTAL +// +// Per message compression extensions (RFC 7692) are experimentally supported +// by this package in a limited capacity. 
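> Editorial aside (not part of the vendored file): because a connection supports only one concurrent writer (see the Concurrency section above), applications commonly funnel all writes through a single goroutine. A hedged sketch of that pattern; the `out` channel and its element type are assumptions, not part of the package:

```go
package example

import "github.com/gorilla/websocket"

// writePump is the only goroutine allowed to call write methods on conn.
// Other goroutines send payloads on the out channel instead of writing directly.
func writePump(conn *websocket.Conn, out <-chan []byte) {
	for msg := range out {
		if err := conn.WriteMessage(websocket.TextMessage, msg); err != nil {
			return
		}
	}
	// Channel closed: tell the peer we are done.
	conn.WriteMessage(websocket.CloseMessage,
		websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
}
```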
Setting the EnableCompression option +// to true in Dialer or Upgrader will attempt to negotiate per message deflate +// support. +// +// var upgrader = websocket.Upgrader{ +// EnableCompression: true, +// } +// +// If compression was successfully negotiated with the connection's peer, any +// message received in compressed form will be automatically decompressed. +// All Read methods will return uncompressed bytes. +// +// Per message compression of messages written to a connection can be enabled +// or disabled by calling the corresponding Conn method: +// +// conn.EnableWriteCompression(false) +// +// Currently this package does not support compression with "context takeover". +// This means that messages must be compressed and decompressed in isolation, +// without retaining sliding window or dictionary state across messages. For +// more details refer to RFC 7692. +// +// Use of compression is experimental and may result in decreased performance. +package websocket diff --git a/vendor/github.com/gorilla/websocket/json.go b/vendor/github.com/gorilla/websocket/json.go new file mode 100644 index 000000000..4f0e36875 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/json.go @@ -0,0 +1,55 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "encoding/json" + "io" +) + +// WriteJSON is deprecated, use c.WriteJSON instead. +func WriteJSON(c *Conn, v interface{}) error { + return c.WriteJSON(v) +} + +// WriteJSON writes the JSON encoding of v to the connection. +// +// See the documentation for encoding/json Marshal for details about the +// conversion of Go values to JSON. +func (c *Conn) WriteJSON(v interface{}) error { + w, err := c.NextWriter(TextMessage) + if err != nil { + return err + } + err1 := json.NewEncoder(w).Encode(v) + err2 := w.Close() + if err1 != nil { + return err1 + } + return err2 +} + +// ReadJSON is deprecated, use c.ReadJSON instead. +func ReadJSON(c *Conn, v interface{}) error { + return c.ReadJSON(v) +} + +// ReadJSON reads the next JSON-encoded message from the connection and stores +// it in the value pointed to by v. +// +// See the documentation for the encoding/json Unmarshal function for details +// about the conversion of JSON to a Go value. +func (c *Conn) ReadJSON(v interface{}) error { + _, r, err := c.NextReader() + if err != nil { + return err + } + err = json.NewDecoder(r).Decode(v) + if err == io.EOF { + // One value is expected in the message. + err = io.ErrUnexpectedEOF + } + return err +} diff --git a/vendor/github.com/gorilla/websocket/mask.go b/vendor/github.com/gorilla/websocket/mask.go new file mode 100644 index 000000000..6a88bbc74 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/mask.go @@ -0,0 +1,55 @@ +// Copyright 2016 The Gorilla WebSocket Authors. All rights reserved. Use of +// this source code is governed by a BSD-style license that can be found in the +// LICENSE file. + +// +build !appengine + +package websocket + +import "unsafe" + +const wordSize = int(unsafe.Sizeof(uintptr(0))) + +func maskBytes(key [4]byte, pos int, b []byte) int { + + // Mask one byte at a time for small buffers. + if len(b) < 2*wordSize { + for i := range b { + b[i] ^= key[pos&3] + pos++ + } + return pos & 3 + } + + // Mask one byte at a time to word boundary. 
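> Editorial aside (not part of the vendored file): the word-at-a-time code that follows is an optimization of the simple loop above. Functionally, masking is a rotating 4-byte XOR, so applying the same key stream twice restores the original bytes. A standalone sketch of the unoptimized equivalent:

```go
package main

import "fmt"

// maskSimple is the unoptimized equivalent of maskBytes: XOR each byte with
// the rotating 4-byte key and return the updated key position.
func maskSimple(key [4]byte, pos int, b []byte) int {
	for i := range b {
		b[i] ^= key[pos&3]
		pos++
	}
	return pos & 3
}

func main() {
	key := [4]byte{0x12, 0x34, 0x56, 0x78}
	b := []byte("masking is just XOR")
	maskSimple(key, 0, b)  // mask
	maskSimple(key, 0, b)  // unmask: the same key stream restores the text
	fmt.Println(string(b)) // "masking is just XOR"
}
```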
+ if n := int(uintptr(unsafe.Pointer(&b[0]))) % wordSize; n != 0 { + n = wordSize - n + for i := range b[:n] { + b[i] ^= key[pos&3] + pos++ + } + b = b[n:] + } + + // Create aligned word size key. + var k [wordSize]byte + for i := range k { + k[i] = key[(pos+i)&3] + } + kw := *(*uintptr)(unsafe.Pointer(&k)) + + // Mask one word at a time. + n := (len(b) / wordSize) * wordSize + for i := 0; i < n; i += wordSize { + *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&b[0])) + uintptr(i))) ^= kw + } + + // Mask one byte at a time for remaining bytes. + b = b[n:] + for i := range b { + b[i] ^= key[pos&3] + pos++ + } + + return pos & 3 +} diff --git a/vendor/github.com/gorilla/websocket/prepared.go b/vendor/github.com/gorilla/websocket/prepared.go new file mode 100644 index 000000000..1efffbd1e --- /dev/null +++ b/vendor/github.com/gorilla/websocket/prepared.go @@ -0,0 +1,103 @@ +// Copyright 2017 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "bytes" + "net" + "sync" + "time" +) + +// PreparedMessage caches on the wire representations of a message payload. +// Use PreparedMessage to efficiently send a message payload to multiple +// connections. PreparedMessage is especially useful when compression is used +// because the CPU and memory expensive compression operation can be executed +// once for a given set of compression options. +type PreparedMessage struct { + messageType int + data []byte + err error + mu sync.Mutex + frames map[prepareKey]*preparedFrame +} + +// prepareKey defines a unique set of options to cache prepared frames in PreparedMessage. +type prepareKey struct { + isServer bool + compress bool + compressionLevel int +} + +// preparedFrame contains data in wire representation. +type preparedFrame struct { + once sync.Once + data []byte +} + +// NewPreparedMessage returns an initialized PreparedMessage. You can then send +// it to connection using WritePreparedMessage method. Valid wire +// representation will be calculated lazily only once for a set of current +// connection options. +func NewPreparedMessage(messageType int, data []byte) (*PreparedMessage, error) { + pm := &PreparedMessage{ + messageType: messageType, + frames: make(map[prepareKey]*preparedFrame), + data: data, + } + + // Prepare a plain server frame. + _, frameData, err := pm.frame(prepareKey{isServer: true, compress: false}) + if err != nil { + return nil, err + } + + // To protect against caller modifying the data argument, remember the data + // copied to the plain server frame. + pm.data = frameData[len(frameData)-len(data):] + return pm, nil +} + +func (pm *PreparedMessage) frame(key prepareKey) (int, []byte, error) { + pm.mu.Lock() + frame, ok := pm.frames[key] + if !ok { + frame = &preparedFrame{} + pm.frames[key] = frame + } + pm.mu.Unlock() + + var err error + frame.once.Do(func() { + // Prepare a frame using a 'fake' connection. + // TODO: Refactor code in conn.go to allow more direct construction of + // the frame. 
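> Editorial aside (not part of the vendored file): `PreparedMessage` exists so the framing (and optional compression) work is done once per message rather than once per connection. A broadcast sketch, where `conns` is assumed to be the application's set of live connections:

```go
package example

import (
	"log"

	"github.com/gorilla/websocket"
)

// broadcast prepares the payload once and reuses the cached frame for every peer.
func broadcast(conns []*websocket.Conn, payload []byte) {
	pm, err := websocket.NewPreparedMessage(websocket.TextMessage, payload)
	if err != nil {
		log.Println("prepare:", err)
		return
	}
	for _, c := range conns {
		if err := c.WritePreparedMessage(pm); err != nil {
			log.Println("broadcast:", err)
		}
	}
}
```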
+ mu := make(chan bool, 1) + mu <- true + var nc prepareConn + c := &Conn{ + conn: &nc, + mu: mu, + isServer: key.isServer, + compressionLevel: key.compressionLevel, + enableWriteCompression: true, + writeBuf: make([]byte, defaultWriteBufferSize+maxFrameHeaderSize), + } + if key.compress { + c.newCompressionWriter = compressNoContextTakeover + } + err = c.WriteMessage(pm.messageType, pm.data) + frame.data = nc.buf.Bytes() + }) + return pm.messageType, frame.data, err +} + +type prepareConn struct { + buf bytes.Buffer + net.Conn +} + +func (pc *prepareConn) Write(p []byte) (int, error) { return pc.buf.Write(p) } +func (pc *prepareConn) SetWriteDeadline(t time.Time) error { return nil } diff --git a/vendor/github.com/gorilla/websocket/server.go b/vendor/github.com/gorilla/websocket/server.go new file mode 100644 index 000000000..3495e0f1a --- /dev/null +++ b/vendor/github.com/gorilla/websocket/server.go @@ -0,0 +1,291 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "bufio" + "errors" + "net" + "net/http" + "net/url" + "strings" + "time" +) + +// HandshakeError describes an error with the handshake from the peer. +type HandshakeError struct { + message string +} + +func (e HandshakeError) Error() string { return e.message } + +// Upgrader specifies parameters for upgrading an HTTP connection to a +// WebSocket connection. +type Upgrader struct { + // HandshakeTimeout specifies the duration for the handshake to complete. + HandshakeTimeout time.Duration + + // ReadBufferSize and WriteBufferSize specify I/O buffer sizes. If a buffer + // size is zero, then buffers allocated by the HTTP server are used. The + // I/O buffer sizes do not limit the size of the messages that can be sent + // or received. + ReadBufferSize, WriteBufferSize int + + // Subprotocols specifies the server's supported protocols in order of + // preference. If this field is set, then the Upgrade method negotiates a + // subprotocol by selecting the first match in this list with a protocol + // requested by the client. + Subprotocols []string + + // Error specifies the function for generating HTTP error responses. If Error + // is nil, then http.Error is used to generate the HTTP response. + Error func(w http.ResponseWriter, r *http.Request, status int, reason error) + + // CheckOrigin returns true if the request Origin header is acceptable. If + // CheckOrigin is nil, the host in the Origin header must not be set or + // must match the host of the request. + CheckOrigin func(r *http.Request) bool + + // EnableCompression specify if the server should attempt to negotiate per + // message compression (RFC 7692). Setting this value to true does not + // guarantee that compression will be supported. Currently only "no context + // takeover" modes are supported. + EnableCompression bool +} + +func (u *Upgrader) returnError(w http.ResponseWriter, r *http.Request, status int, reason string) (*Conn, error) { + err := HandshakeError{reason} + if u.Error != nil { + u.Error(w, r, status, err) + } else { + w.Header().Set("Sec-Websocket-Version", "13") + http.Error(w, http.StatusText(status), status) + } + return nil, err +} + +// checkSameOrigin returns true if the origin is not set or is equal to the request host. 
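> Editorial aside (not part of the vendored file): the `Upgrader` fields above cover buffer sizes, handshake timeout, subprotocol negotiation and origin checking. A configuration sketch; the allowed origin and the "chat" subprotocol are assumptions for illustration:

```go
package example

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/websocket"
)

// Leaving CheckOrigin nil falls back to the same-origin check defined just
// below; this example replaces it with an explicit allow-list entry.
var upgrader = websocket.Upgrader{
	ReadBufferSize:   1024,
	WriteBufferSize:  1024,
	HandshakeTimeout: 10 * time.Second,
	Subprotocols:     []string{"chat"},
	CheckOrigin: func(r *http.Request) bool {
		return r.Header.Get("Origin") == "https://app.example.com"
	},
}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		// Upgrade has already written an HTTP error response.
		log.Println("upgrade:", err)
		return
	}
	defer conn.Close()
	log.Println("negotiated subprotocol:", conn.Subprotocol())
}
```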
+func checkSameOrigin(r *http.Request) bool { + origin := r.Header["Origin"] + if len(origin) == 0 { + return true + } + u, err := url.Parse(origin[0]) + if err != nil { + return false + } + return u.Host == r.Host +} + +func (u *Upgrader) selectSubprotocol(r *http.Request, responseHeader http.Header) string { + if u.Subprotocols != nil { + clientProtocols := Subprotocols(r) + for _, serverProtocol := range u.Subprotocols { + for _, clientProtocol := range clientProtocols { + if clientProtocol == serverProtocol { + return clientProtocol + } + } + } + } else if responseHeader != nil { + return responseHeader.Get("Sec-Websocket-Protocol") + } + return "" +} + +// Upgrade upgrades the HTTP server connection to the WebSocket protocol. +// +// The responseHeader is included in the response to the client's upgrade +// request. Use the responseHeader to specify cookies (Set-Cookie) and the +// application negotiated subprotocol (Sec-Websocket-Protocol). +// +// If the upgrade fails, then Upgrade replies to the client with an HTTP error +// response. +func (u *Upgrader) Upgrade(w http.ResponseWriter, r *http.Request, responseHeader http.Header) (*Conn, error) { + if r.Method != "GET" { + return u.returnError(w, r, http.StatusMethodNotAllowed, "websocket: not a websocket handshake: request method is not GET") + } + + if _, ok := responseHeader["Sec-Websocket-Extensions"]; ok { + return u.returnError(w, r, http.StatusInternalServerError, "websocket: application specific 'Sec-Websocket-Extensions' headers are unsupported") + } + + if !tokenListContainsValue(r.Header, "Connection", "upgrade") { + return u.returnError(w, r, http.StatusBadRequest, "websocket: not a websocket handshake: 'upgrade' token not found in 'Connection' header") + } + + if !tokenListContainsValue(r.Header, "Upgrade", "websocket") { + return u.returnError(w, r, http.StatusBadRequest, "websocket: not a websocket handshake: 'websocket' token not found in 'Upgrade' header") + } + + if !tokenListContainsValue(r.Header, "Sec-Websocket-Version", "13") { + return u.returnError(w, r, http.StatusBadRequest, "websocket: unsupported version: 13 not found in 'Sec-Websocket-Version' header") + } + + checkOrigin := u.CheckOrigin + if checkOrigin == nil { + checkOrigin = checkSameOrigin + } + if !checkOrigin(r) { + return u.returnError(w, r, http.StatusForbidden, "websocket: 'Origin' header value not allowed") + } + + challengeKey := r.Header.Get("Sec-Websocket-Key") + if challengeKey == "" { + return u.returnError(w, r, http.StatusBadRequest, "websocket: not a websocket handshake: `Sec-Websocket-Key' header is missing or blank") + } + + subprotocol := u.selectSubprotocol(r, responseHeader) + + // Negotiate PMCE + var compress bool + if u.EnableCompression { + for _, ext := range parseExtensions(r.Header) { + if ext[""] != "permessage-deflate" { + continue + } + compress = true + break + } + } + + var ( + netConn net.Conn + err error + ) + + h, ok := w.(http.Hijacker) + if !ok { + return u.returnError(w, r, http.StatusInternalServerError, "websocket: response does not implement http.Hijacker") + } + var brw *bufio.ReadWriter + netConn, brw, err = h.Hijack() + if err != nil { + return u.returnError(w, r, http.StatusInternalServerError, err.Error()) + } + + if brw.Reader.Buffered() > 0 { + netConn.Close() + return nil, errors.New("websocket: client sent data before handshake is complete") + } + + c := newConnBRW(netConn, true, u.ReadBufferSize, u.WriteBufferSize, brw) + c.subprotocol = subprotocol + + if compress { + c.newCompressionWriter = 
compressNoContextTakeover + c.newDecompressionReader = decompressNoContextTakeover + } + + p := c.writeBuf[:0] + p = append(p, "HTTP/1.1 101 Switching Protocols\r\nUpgrade: websocket\r\nConnection: Upgrade\r\nSec-WebSocket-Accept: "...) + p = append(p, computeAcceptKey(challengeKey)...) + p = append(p, "\r\n"...) + if c.subprotocol != "" { + p = append(p, "Sec-Websocket-Protocol: "...) + p = append(p, c.subprotocol...) + p = append(p, "\r\n"...) + } + if compress { + p = append(p, "Sec-Websocket-Extensions: permessage-deflate; server_no_context_takeover; client_no_context_takeover\r\n"...) + } + for k, vs := range responseHeader { + if k == "Sec-Websocket-Protocol" { + continue + } + for _, v := range vs { + p = append(p, k...) + p = append(p, ": "...) + for i := 0; i < len(v); i++ { + b := v[i] + if b <= 31 { + // prevent response splitting. + b = ' ' + } + p = append(p, b) + } + p = append(p, "\r\n"...) + } + } + p = append(p, "\r\n"...) + + // Clear deadlines set by HTTP server. + netConn.SetDeadline(time.Time{}) + + if u.HandshakeTimeout > 0 { + netConn.SetWriteDeadline(time.Now().Add(u.HandshakeTimeout)) + } + if _, err = netConn.Write(p); err != nil { + netConn.Close() + return nil, err + } + if u.HandshakeTimeout > 0 { + netConn.SetWriteDeadline(time.Time{}) + } + + return c, nil +} + +// Upgrade upgrades the HTTP server connection to the WebSocket protocol. +// +// This function is deprecated, use websocket.Upgrader instead. +// +// The application is responsible for checking the request origin before +// calling Upgrade. An example implementation of the same origin policy is: +// +// if req.Header.Get("Origin") != "http://"+req.Host { +// http.Error(w, "Origin not allowed", 403) +// return +// } +// +// If the endpoint supports subprotocols, then the application is responsible +// for negotiating the protocol used on the connection. Use the Subprotocols() +// function to get the subprotocols requested by the client. Use the +// Sec-Websocket-Protocol response header to specify the subprotocol selected +// by the application. +// +// The responseHeader is included in the response to the client's upgrade +// request. Use the responseHeader to specify cookies (Set-Cookie) and the +// negotiated subprotocol (Sec-Websocket-Protocol). +// +// The connection buffers IO to the underlying network connection. The +// readBufSize and writeBufSize parameters specify the size of the buffers to +// use. Messages can be larger than the buffers. +// +// If the request is not a valid WebSocket handshake, then Upgrade returns an +// error of type HandshakeError. Applications should handle this error by +// replying to the client with an HTTP error response. +func Upgrade(w http.ResponseWriter, r *http.Request, responseHeader http.Header, readBufSize, writeBufSize int) (*Conn, error) { + u := Upgrader{ReadBufferSize: readBufSize, WriteBufferSize: writeBufSize} + u.Error = func(w http.ResponseWriter, r *http.Request, status int, reason error) { + // don't return errors to maintain backwards compatibility + } + u.CheckOrigin = func(r *http.Request) bool { + // allow all connections by default + return true + } + return u.Upgrade(w, r, responseHeader) +} + +// Subprotocols returns the subprotocols requested by the client in the +// Sec-Websocket-Protocol header. 
+func Subprotocols(r *http.Request) []string { + h := strings.TrimSpace(r.Header.Get("Sec-Websocket-Protocol")) + if h == "" { + return nil + } + protocols := strings.Split(h, ",") + for i := range protocols { + protocols[i] = strings.TrimSpace(protocols[i]) + } + return protocols +} + +// IsWebSocketUpgrade returns true if the client requested upgrade to the +// WebSocket protocol. +func IsWebSocketUpgrade(r *http.Request) bool { + return tokenListContainsValue(r.Header, "Connection", "upgrade") && + tokenListContainsValue(r.Header, "Upgrade", "websocket") +} diff --git a/vendor/github.com/gorilla/websocket/util.go b/vendor/github.com/gorilla/websocket/util.go new file mode 100644 index 000000000..9a4908df2 --- /dev/null +++ b/vendor/github.com/gorilla/websocket/util.go @@ -0,0 +1,214 @@ +// Copyright 2013 The Gorilla WebSocket Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package websocket + +import ( + "crypto/rand" + "crypto/sha1" + "encoding/base64" + "io" + "net/http" + "strings" +) + +var keyGUID = []byte("258EAFA5-E914-47DA-95CA-C5AB0DC85B11") + +func computeAcceptKey(challengeKey string) string { + h := sha1.New() + h.Write([]byte(challengeKey)) + h.Write(keyGUID) + return base64.StdEncoding.EncodeToString(h.Sum(nil)) +} + +func generateChallengeKey() (string, error) { + p := make([]byte, 16) + if _, err := io.ReadFull(rand.Reader, p); err != nil { + return "", err + } + return base64.StdEncoding.EncodeToString(p), nil +} + +// Octet types from RFC 2616. +var octetTypes [256]byte + +const ( + isTokenOctet = 1 << iota + isSpaceOctet +) + +func init() { + // From RFC 2616 + // + // OCTET = + // CHAR = + // CTL = + // CR = + // LF = + // SP = + // HT = + // <"> = + // CRLF = CR LF + // LWS = [CRLF] 1*( SP | HT ) + // TEXT = + // separators = "(" | ")" | "<" | ">" | "@" | "," | ";" | ":" | "\" | <"> + // | "/" | "[" | "]" | "?" | "=" | "{" | "}" | SP | HT + // token = 1* + // qdtext = > + + for c := 0; c < 256; c++ { + var t byte + isCtl := c <= 31 || c == 127 + isChar := 0 <= c && c <= 127 + isSeparator := strings.IndexRune(" \t\"(),/:;<=>?@[]\\{}", rune(c)) >= 0 + if strings.IndexRune(" \t\r\n", rune(c)) >= 0 { + t |= isSpaceOctet + } + if isChar && !isCtl && !isSeparator { + t |= isTokenOctet + } + octetTypes[c] = t + } +} + +func skipSpace(s string) (rest string) { + i := 0 + for ; i < len(s); i++ { + if octetTypes[s[i]]&isSpaceOctet == 0 { + break + } + } + return s[i:] +} + +func nextToken(s string) (token, rest string) { + i := 0 + for ; i < len(s); i++ { + if octetTypes[s[i]]&isTokenOctet == 0 { + break + } + } + return s[:i], s[i:] +} + +func nextTokenOrQuoted(s string) (value string, rest string) { + if !strings.HasPrefix(s, "\"") { + return nextToken(s) + } + s = s[1:] + for i := 0; i < len(s); i++ { + switch s[i] { + case '"': + return s[:i], s[i+1:] + case '\\': + p := make([]byte, len(s)-1) + j := copy(p, s[:i]) + escape := true + for i = i + 1; i < len(s); i++ { + b := s[i] + switch { + case escape: + escape = false + p[j] = b + j += 1 + case b == '\\': + escape = true + case b == '"': + return string(p[:j]), s[i+1:] + default: + p[j] = b + j += 1 + } + } + return "", "" + } + } + return "", "" +} + +// tokenListContainsValue returns true if the 1#token header with the given +// name contains token. 
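> Editorial aside (not part of the vendored file): `computeAcceptKey` above implements the RFC 6455 handshake step of SHA-1 hashing the client key concatenated with the fixed GUID and base64-encoding the digest. A standalone check against the example values given in RFC 6455 section 1.3:

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

func main() {
	const guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
	challenge := "dGhlIHNhbXBsZSBub25jZQ==" // example Sec-WebSocket-Key from RFC 6455
	h := sha1.New()
	h.Write([]byte(challenge))
	h.Write([]byte(guid))
	fmt.Println(base64.StdEncoding.EncodeToString(h.Sum(nil)))
	// Prints "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=", the accept value from the RFC example.
}
```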
+func tokenListContainsValue(header http.Header, name string, value string) bool { +headers: + for _, s := range header[name] { + for { + var t string + t, s = nextToken(skipSpace(s)) + if t == "" { + continue headers + } + s = skipSpace(s) + if s != "" && s[0] != ',' { + continue headers + } + if strings.EqualFold(t, value) { + return true + } + if s == "" { + continue headers + } + s = s[1:] + } + } + return false +} + +// parseExtensiosn parses WebSocket extensions from a header. +func parseExtensions(header http.Header) []map[string]string { + + // From RFC 6455: + // + // Sec-WebSocket-Extensions = extension-list + // extension-list = 1#extension + // extension = extension-token *( ";" extension-param ) + // extension-token = registered-token + // registered-token = token + // extension-param = token [ "=" (token | quoted-string) ] + // ;When using the quoted-string syntax variant, the value + // ;after quoted-string unescaping MUST conform to the + // ;'token' ABNF. + + var result []map[string]string +headers: + for _, s := range header["Sec-Websocket-Extensions"] { + for { + var t string + t, s = nextToken(skipSpace(s)) + if t == "" { + continue headers + } + ext := map[string]string{"": t} + for { + s = skipSpace(s) + if !strings.HasPrefix(s, ";") { + break + } + var k string + k, s = nextToken(skipSpace(s[1:])) + if k == "" { + continue headers + } + s = skipSpace(s) + var v string + if strings.HasPrefix(s, "=") { + v, s = nextTokenOrQuoted(skipSpace(s[1:])) + s = skipSpace(s) + } + if s != "" && s[0] != ',' && s[0] != ';' { + continue headers + } + ext[k] = v + } + if s != "" && s[0] != ',' { + continue headers + } + result = append(result, ext) + if s == "" { + continue headers + } + s = s[1:] + } + } + return result +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/CHANGELOG.md b/vendor/github.com/hashicorp/go-oracle-terraform/CHANGELOG.md new file mode 100644 index 000000000..70e5b3c31 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/CHANGELOG.md @@ -0,0 +1,134 @@ +## 0.6.6 (January 11, 2018) + +* compute: Create and delete machine images [GH-101] + +## 0.6.5 (January 8, 2018) + +* compute: Orchestration failures should explicitly tell the user why it failed [GH-100] + +## 0.6.4 (Decemeber 20, 2017) + +* compute: Added suspend functionality to orchestrated instances [GH-99] + +## 0.6.3 (December 13, 2017) + +* storage: Added remove header option to storage objects and containers [GH-96] + +## 0.6.2 (November 28, 2017) + +* client: Added a UserAgent to the Client [GH-98] + +## 0.6.1 (Novemeber 26, 2017) + +* compute: Added is_default_gateway to network attributes for instances [GH-97] + + +## 0.6.0 (November 10, 2017) + +* compute: Added is_default_gateway to network attributes for instances [GH-90] + +* compute: Added the orchestration resource, specifically for instance creation [GH-91] + +## 0.5.1 (October 5, 2017) + +* java: Fixed subscription_type field + +## 0.5.0 (October 5, 2017) + +* java: Added more fields to java service instance [GH-89] + +## 0.4.0 (September 14, 2017) + +* database: Add utility resources [GH-87] + +* compute: Increase storage volume snapshot create timeout [GH-88] + +## 0.3.4 (August 16, 2017) + +* storage_volumes: Actually capture errors during a storage volume create ([#86](https://github.com/hashicorp/go-oracle-terraform/issues/86)) + +## 0.3.3 (August 10, 2017) + +* Add `ExposedHeaders` to storage containers ([#85](https://github.com/hashicorp/go-oracle-terraform/issues/85)) + +* Fixed `AllowedOrigins` in 
storage containers ([#85](https://github.com/hashicorp/go-oracle-terraform/issues/85)) + +## 0.3.2 (August 7, 2017) + +* Add `id` for storage objects ([#84](https://github.com/hashicorp/go-oracle-terraform/issues/84)) + +## 0.3.1 (August 7, 2017) + +* Update tests for Database parameter changes ([#83](https://github.com/hashicorp/go-oracle-terraform/issues/83)) + +## 0.3.0 (August 7, 2017) + + * Add JaaS Service Instances ([#82](https://github.com/hashicorp/go-oracle-terraform/issues/82)) + + * Add storage objects ([#81](https://github.com/hashicorp/go-oracle-terraform/issues/81)) + +## 0.2.0 (July 27, 2017) + + * service_instance: Switches yes/no strings to bool in input struct and then converts back to strings for ease of use on user end ([#80](https://github.com/hashicorp/go-oracle-terraform/issues/80)) + +## 0.1.9 (July 20, 2017) + + * service_instance: Update delete retry count ([#79](https://github.com/hashicorp/go-oracle-terraform/issues/79)) + + * service_instance: Add additional fields ([#79](https://github.com/hashicorp/go-oracle-terraform/issues/79)) + +## 0.1.8 (July 19, 2017) + + * storage_volumes: Add SSD support ([#78](https://github.com/hashicorp/go-oracle-terraform/issues/78)) + +## 0.1.7 (July 19, 2017) + + * database: Adds the Oracle Database Cloud to the available sdks. ([#77](https://github.com/hashicorp/go-oracle-terraform/issues/77)) + + * database: Adds Service Instances to the database sdk ([#77](https://github.com/hashicorp/go-oracle-terraform/issues/77)) + +## 0.1.6 (July 18, 2017) + + * opc: Add timeouts to instance and storage inputs ([#75](https://github.com/hashicorp/go-oracle-terraform/issues/75)) + +## 0.1.5 (July 5, 2017) + + * storage: User must pass in Storage URL to CRUD resources ([#74](https://github.com/hashicorp/go-oracle-terraform/issues/74)) + +## 0.1.4 (June 30, 2017) + + * opc: Fix infinite loop around auth token exceeding it's 25 minute duration. ([#73](https://github.com/hashicorp/go-oracle-terraform/issues/73)) + +## 0.1.3 (June 30, 2017) + + * opc: Add additional logs instance logs ([#72](https://github.com/hashicorp/go-oracle-terraform/issues/72)) + + * opc: Increase instance creation and deletion timeout ([#72](https://github.com/hashicorp/go-oracle-terraform/issues/72)) + +## 0.1.2 (June 30, 2017) + + +FEATURES: + + * opc: Add image snapshots ([#67](https://github.com/hashicorp/go-oracle-terraform/issues/67)) + + * storage: Storage containers have been added ([#70](https://github.com/hashicorp/go-oracle-terraform/issues/70)) + + +IMPROVEMENTS: + + * opc: Refactored client to be generic for multiple Oracle api endpoints ([#68](https://github.com/hashicorp/go-oracle-terraform/issues/68)) + + * opc: Instance creation retries when an instance enters a deleted state ([#71](https://github.com/hashicorp/go-oracle-terraform/issues/71)) + +## 0.1.1 (May 31, 2017) + +IMPROVEMENTS: + + * opc: Add max_retries capabilities ([#66](https://github.com/hashicorp/go-oracle-terraform/issues/66)) + +## 0.1.0 (May 25, 2017) + +BACKWARDS INCOMPATIBILITIES / NOTES: + + * Initial Release of OPC SDK diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/GNUmakefile b/vendor/github.com/hashicorp/go-oracle-terraform/GNUmakefile new file mode 100644 index 000000000..9c5a0e2f2 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/GNUmakefile @@ -0,0 +1,41 @@ +TEST?=$$(go list ./... |grep -v 'vendor') +GOFMT_FILES?=$$(find . 
-name '*.go' |grep -v vendor) + +test: fmtcheck errcheck + go test -i $(TEST) || exit 1 + echo $(TEST) | \ + xargs -t -n4 go test $(TESTARGS) -timeout=60m -parallel=4 + +testacc: fmtcheck + ORACLE_ACC=1 go test -v $(TEST) $(TESTARGS) -timeout 120m + +testrace: fmtcheck + ORACLE_ACC= go test -race $(TEST) $(TESTARGS) + +cover: + @go tool cover 2>/dev/null; if [ $$? -eq 3 ]; then \ + go get -u golang.org/x/tools/cmd/cover; \ + fi + go test $(TEST) -coverprofile=coverage.out + go tool cover -html=coverage.out + rm coverage.out + +vet: + @echo "go vet ." + @go vet $$(go list ./... | grep -v vendor/) ; if [ $$? -eq 1 ]; then \ + echo ""; \ + echo "Vet found suspicious constructs. Please check the reported constructs"; \ + echo "and fix them if necessary before submitting the code for review."; \ + exit 1; \ + fi + +fmt: + gofmt -w $(GOFMT_FILES) + +fmtcheck: + @sh -c "'$(CURDIR)/scripts/gofmtcheck.sh'" + +errcheck: + @sh -c "'$(CURDIR)/scripts/errcheck.sh'" + +.PHONY: tools build test testacc testrace cover vet fmt fmtcheck errcheck diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/LICENSE b/vendor/github.com/hashicorp/go-oracle-terraform/LICENSE new file mode 100644 index 000000000..a612ad981 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/LICENSE @@ -0,0 +1,373 @@ +Mozilla Public License Version 2.0 +================================== + +1. Definitions +-------------- + +1.1. "Contributor" + means each individual or legal entity that creates, contributes to + the creation of, or owns Covered Software. + +1.2. "Contributor Version" + means the combination of the Contributions of others (if any) used + by a Contributor and that particular Contributor's Contribution. + +1.3. "Contribution" + means Covered Software of a particular Contributor. + +1.4. "Covered Software" + means Source Code Form to which the initial Contributor has attached + the notice in Exhibit A, the Executable Form of such Source Code + Form, and Modifications of such Source Code Form, in each case + including portions thereof. + +1.5. "Incompatible With Secondary Licenses" + means + + (a) that the initial Contributor has attached the notice described + in Exhibit B to the Covered Software; or + + (b) that the Covered Software was made available under the terms of + version 1.1 or earlier of the License, but not also under the + terms of a Secondary License. + +1.6. "Executable Form" + means any form of the work other than Source Code Form. + +1.7. "Larger Work" + means a work that combines Covered Software with other material, in + a separate file or files, that is not Covered Software. + +1.8. "License" + means this document. + +1.9. "Licensable" + means having the right to grant, to the maximum extent possible, + whether at the time of the initial grant or subsequently, any and + all of the rights conveyed by this License. + +1.10. "Modifications" + means any of the following: + + (a) any file in Source Code Form that results from an addition to, + deletion from, or modification of the contents of Covered + Software; or + + (b) any new file in Source Code Form that contains any Covered + Software. + +1.11. "Patent Claims" of a Contributor + means any patent claim(s), including without limitation, method, + process, and apparatus claims, in any patent Licensable by such + Contributor that would be infringed, but for the grant of the + License, by the making, using, selling, offering for sale, having + made, import, or transfer of either its Contributions or its + Contributor Version. + +1.12. 
"Secondary License" + means either the GNU General Public License, Version 2.0, the GNU + Lesser General Public License, Version 2.1, the GNU Affero General + Public License, Version 3.0, or any later versions of those + licenses. + +1.13. "Source Code Form" + means the form of the work preferred for making modifications. + +1.14. "You" (or "Your") + means an individual or a legal entity exercising rights under this + License. For legal entities, "You" includes any entity that + controls, is controlled by, or is under common control with You. For + purposes of this definition, "control" means (a) the power, direct + or indirect, to cause the direction or management of such entity, + whether by contract or otherwise, or (b) ownership of more than + fifty percent (50%) of the outstanding shares or beneficial + ownership of such entity. + +2. License Grants and Conditions +-------------------------------- + +2.1. Grants + +Each Contributor hereby grants You a world-wide, royalty-free, +non-exclusive license: + +(a) under intellectual property rights (other than patent or trademark) + Licensable by such Contributor to use, reproduce, make available, + modify, display, perform, distribute, and otherwise exploit its + Contributions, either on an unmodified basis, with Modifications, or + as part of a Larger Work; and + +(b) under Patent Claims of such Contributor to make, use, sell, offer + for sale, have made, import, and otherwise transfer either its + Contributions or its Contributor Version. + +2.2. Effective Date + +The licenses granted in Section 2.1 with respect to any Contribution +become effective for each Contribution on the date the Contributor first +distributes such Contribution. + +2.3. Limitations on Grant Scope + +The licenses granted in this Section 2 are the only rights granted under +this License. No additional rights or licenses will be implied from the +distribution or licensing of Covered Software under this License. +Notwithstanding Section 2.1(b) above, no patent license is granted by a +Contributor: + +(a) for any code that a Contributor has removed from Covered Software; + or + +(b) for infringements caused by: (i) Your and any other third party's + modifications of Covered Software, or (ii) the combination of its + Contributions with other software (except as part of its Contributor + Version); or + +(c) under Patent Claims infringed by Covered Software in the absence of + its Contributions. + +This License does not grant any rights in the trademarks, service marks, +or logos of any Contributor (except as may be necessary to comply with +the notice requirements in Section 3.4). + +2.4. Subsequent Licenses + +No Contributor makes additional grants as a result of Your choice to +distribute the Covered Software under a subsequent version of this +License (see Section 10.2) or under the terms of a Secondary License (if +permitted under the terms of Section 3.3). + +2.5. Representation + +Each Contributor represents that the Contributor believes its +Contributions are its original creation(s) or it has sufficient rights +to grant the rights to its Contributions conveyed by this License. + +2.6. Fair Use + +This License is not intended to limit any rights You have under +applicable copyright doctrines of fair use, fair dealing, or other +equivalents. + +2.7. Conditions + +Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted +in Section 2.1. + +3. Responsibilities +------------------- + +3.1. 
Distribution of Source Form + +All distribution of Covered Software in Source Code Form, including any +Modifications that You create or to which You contribute, must be under +the terms of this License. You must inform recipients that the Source +Code Form of the Covered Software is governed by the terms of this +License, and how they can obtain a copy of this License. You may not +attempt to alter or restrict the recipients' rights in the Source Code +Form. + +3.2. Distribution of Executable Form + +If You distribute Covered Software in Executable Form then: + +(a) such Covered Software must also be made available in Source Code + Form, as described in Section 3.1, and You must inform recipients of + the Executable Form how they can obtain a copy of such Source Code + Form by reasonable means in a timely manner, at a charge no more + than the cost of distribution to the recipient; and + +(b) You may distribute such Executable Form under the terms of this + License, or sublicense it under different terms, provided that the + license for the Executable Form does not attempt to limit or alter + the recipients' rights in the Source Code Form under this License. + +3.3. Distribution of a Larger Work + +You may create and distribute a Larger Work under terms of Your choice, +provided that You also comply with the requirements of this License for +the Covered Software. If the Larger Work is a combination of Covered +Software with a work governed by one or more Secondary Licenses, and the +Covered Software is not Incompatible With Secondary Licenses, this +License permits You to additionally distribute such Covered Software +under the terms of such Secondary License(s), so that the recipient of +the Larger Work may, at their option, further distribute the Covered +Software under the terms of either this License or such Secondary +License(s). + +3.4. Notices + +You may not remove or alter the substance of any license notices +(including copyright notices, patent notices, disclaimers of warranty, +or limitations of liability) contained within the Source Code Form of +the Covered Software, except that You may alter any license notices to +the extent required to remedy known factual inaccuracies. + +3.5. Application of Additional Terms + +You may choose to offer, and to charge a fee for, warranty, support, +indemnity or liability obligations to one or more recipients of Covered +Software. However, You may do so only on Your own behalf, and not on +behalf of any Contributor. You must make it absolutely clear that any +such warranty, support, indemnity, or liability obligation is offered by +You alone, and You hereby agree to indemnify every Contributor for any +liability incurred by such Contributor as a result of warranty, support, +indemnity or liability terms You offer. You may include additional +disclaimers of warranty and limitations of liability specific to any +jurisdiction. + +4. Inability to Comply Due to Statute or Regulation +--------------------------------------------------- + +If it is impossible for You to comply with any of the terms of this +License with respect to some or all of the Covered Software due to +statute, judicial order, or regulation then You must: (a) comply with +the terms of this License to the maximum extent possible; and (b) +describe the limitations and the code they affect. Such description must +be placed in a text file included with all distributions of the Covered +Software under this License. 
Except to the extent prohibited by statute +or regulation, such description must be sufficiently detailed for a +recipient of ordinary skill to be able to understand it. + +5. Termination +-------------- + +5.1. The rights granted under this License will terminate automatically +if You fail to comply with any of its terms. However, if You become +compliant, then the rights granted under this License from a particular +Contributor are reinstated (a) provisionally, unless and until such +Contributor explicitly and finally terminates Your grants, and (b) on an +ongoing basis, if such Contributor fails to notify You of the +non-compliance by some reasonable means prior to 60 days after You have +come back into compliance. Moreover, Your grants from a particular +Contributor are reinstated on an ongoing basis if such Contributor +notifies You of the non-compliance by some reasonable means, this is the +first time You have received notice of non-compliance with this License +from such Contributor, and You become compliant prior to 30 days after +Your receipt of the notice. + +5.2. If You initiate litigation against any entity by asserting a patent +infringement claim (excluding declaratory judgment actions, +counter-claims, and cross-claims) alleging that a Contributor Version +directly or indirectly infringes any patent, then the rights granted to +You by any and all Contributors for the Covered Software under Section +2.1 of this License shall terminate. + +5.3. In the event of termination under Sections 5.1 or 5.2 above, all +end user license agreements (excluding distributors and resellers) which +have been validly granted by You or Your distributors under this License +prior to termination shall survive termination. + +************************************************************************ +* * +* 6. Disclaimer of Warranty * +* ------------------------- * +* * +* Covered Software is provided under this License on an "as is" * +* basis, without warranty of any kind, either expressed, implied, or * +* statutory, including, without limitation, warranties that the * +* Covered Software is free of defects, merchantable, fit for a * +* particular purpose or non-infringing. The entire risk as to the * +* quality and performance of the Covered Software is with You. * +* Should any Covered Software prove defective in any respect, You * +* (not any Contributor) assume the cost of any necessary servicing, * +* repair, or correction. This disclaimer of warranty constitutes an * +* essential part of this License. No use of any Covered Software is * +* authorized under this License except under this disclaimer. * +* * +************************************************************************ + +************************************************************************ +* * +* 7. Limitation of Liability * +* -------------------------- * +* * +* Under no circumstances and under no legal theory, whether tort * +* (including negligence), contract, or otherwise, shall any * +* Contributor, or anyone who distributes Covered Software as * +* permitted above, be liable to You for any direct, indirect, * +* special, incidental, or consequential damages of any character * +* including, without limitation, damages for lost profits, loss of * +* goodwill, work stoppage, computer failure or malfunction, or any * +* and all other commercial damages or losses, even if such party * +* shall have been informed of the possibility of such damages. 
This * +* limitation of liability shall not apply to liability for death or * +* personal injury resulting from such party's negligence to the * +* extent applicable law prohibits such limitation. Some * +* jurisdictions do not allow the exclusion or limitation of * +* incidental or consequential damages, so this exclusion and * +* limitation may not apply to You. * +* * +************************************************************************ + +8. Litigation +------------- + +Any litigation relating to this License may be brought only in the +courts of a jurisdiction where the defendant maintains its principal +place of business and such litigation shall be governed by laws of that +jurisdiction, without reference to its conflict-of-law provisions. +Nothing in this Section shall prevent a party's ability to bring +cross-claims or counter-claims. + +9. Miscellaneous +---------------- + +This License represents the complete agreement concerning the subject +matter hereof. If any provision of this License is held to be +unenforceable, such provision shall be reformed only to the extent +necessary to make it enforceable. Any law or regulation which provides +that the language of a contract shall be construed against the drafter +shall not be used to construe this License against a Contributor. + +10. Versions of the License +--------------------------- + +10.1. New Versions + +Mozilla Foundation is the license steward. Except as provided in Section +10.3, no one other than the license steward has the right to modify or +publish new versions of this License. Each version will be given a +distinguishing version number. + +10.2. Effect of New Versions + +You may distribute the Covered Software under the terms of the version +of the License under which You originally received the Covered Software, +or under the terms of any subsequent version published by the license +steward. + +10.3. Modified Versions + +If you create software not governed by this License, and you want to +create a new license for such software, you may create and use a +modified version of this License if you rename the license and remove +any references to the name of the license steward (except to note that +such modified license differs from this License). + +10.4. Distributing Source Code Form that is Incompatible With Secondary +Licenses + +If You choose to distribute Source Code Form that is Incompatible With +Secondary Licenses under the terms of this version of the License, the +notice described in Exhibit B of this License must be attached. + +Exhibit A - Source Code Form License Notice +------------------------------------------- + + This Source Code Form is subject to the terms of the Mozilla Public + License, v. 2.0. If a copy of the MPL was not distributed with this + file, You can obtain one at http://mozilla.org/MPL/2.0/. + +If it is not possible or desirable to put the notice in a particular +file, then You may include the notice in a location (such as a LICENSE +file in a relevant directory) where a recipient would be likely to look +for such a notice. + +You may add additional accurate notices of copyright ownership. + +Exhibit B - "Incompatible With Secondary Licenses" Notice +--------------------------------------------------------- + + This Source Code Form is "Incompatible With Secondary Licenses", as + defined by the Mozilla Public License, v. 2.0. 
diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/README.md b/vendor/github.com/hashicorp/go-oracle-terraform/README.md new file mode 100644 index 000000000..0accc2c32 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/README.md @@ -0,0 +1,108 @@ +Oracle SDK for Terraform +=========================================== + +**Note:** This SDK is _not_ meant to be a comprehensive SDK for Oracle Cloud. This is meant to be used solely with Terraform. + +OPC Config +---------- + +To create the Oracle clients, a populated configuration struct is required. +The config struct holds the following fields: + +* `Username` - (`*string`) The Username used to authenticate to Oracle Public Cloud. +* `Password` - (`*string`) The Password used to authenticate to Oracle Public Cloud. +* `IdentityDomain` - (`*string`) The identity domain for Oracle Public Cloud. +* `APIEndpoint` - (`*url.URL`) The API Endpoint provided by Oracle Public Cloud. +* `LogLevel` - (`LogLevelType`) Defaults to `opc.LogOff`, can be either `opc.LogOff` or `opc.LogDebug`. +* `Logger` - (`Logger`) Must satisfy the generic `Logger` interface. Defaults to `ioutil.Discard` for the `LogOff` loglevel, and `os.Stderr` for the `LogDebug` loglevel. +* `HTTPClient` - (`*http.Client`) Defaults to generic HTTP Client if unspecified. + +Oracle Compute Client +---------------------- +The Oracle Compute Client requires an OPC Config object to be populated in order to create the client. + +Full example to create an OPC Compute instance: +```go +package main + +import ( + "fmt" + "net/url" + "github.com/hashicorp/go-oracle-terraform/opc" + "github.com/hashicorp/go-oracle-terraform/compute" +) + +func main() { + apiEndpoint, err := url.Parse("myAPIEndpoint") + if err != nil { + fmt.Errorf("Error parsing API Endpoint: %s", err) + } + + config := &opc.Config{ + Username: opc.String("myusername"), + Password: opc.String("mypassword"), + IdentityDomain: opc.String("myidentitydomain"), + APIEndpoint: apiEndpoint, + LogLevel: opc.LogDebug, + // Logger: # Leave blank to use the default logger, or provide your own + // HTTPClient: # Leave blank to use default HTTP Client, or provider your own + } + // Create the Compute Client + client, err := compute.NewComputeClient(config) + if err != nil { + fmt.Errorf("Error creating OPC Compute Client: %s", err) + } + // Create instances client + instanceClient := client.Instances() + + // Instances Input + input := &compute.CreateInstanceInput{ + Name: "test-instance", + Label: "test", + Shape: "oc3", + ImageList: "/oracle/public/oel_6.7_apaas_16.4.5_1610211300", + Storage: nil, + BootOrder: nil, + SSHKeys: []string{}, + Attributes: map[string]interface{}{}, + } + + // Create the instance + instance, err := instanceClient.CreateInstance(input) + if err != nil { + fmt.Errorf("Error creating instance: %s", err) + } + fmt.Printf("Instance Created: %#v", instance) +} +``` + +Please refer to inline documentation for each resource that the compute client provides. + +Running the SDK Integration Tests +----------------------------- + +To authenticate with the Oracle Compute Cloud the following credentails must be set in the following environment variables: + +- `OPC_ENDPOINT` - Endpoint provided by Oracle Public Cloud (e.g. 
https://api-z13.compute.em2.oraclecloud.com/\) +- `OPC_USERNAME` - Username for Oracle Public Cloud +- `OPC_PASSWORD` - Password for Oracle Public Cloud +- `OPC_IDENTITY_DOMAIN` - Identity domain for Oracle Public Cloud + + +The Integration tests can be ran with the following command: +```sh +$ make testacc +``` + +Isolating a single SDK package can be done via the `TEST` environment variable +```sh +$ make testacc TEST=./compute +``` + +Isolating a single test within a package can be done via the `TESTARGS` environment variable +```sh +$ make testacc TEST=./compute TESTARGS='-run=TestAccIPAssociationLifeCycle' +``` + +Tests are ran with logs being sent to `ioutil.Discard` by default. +Display debug logs inside of tests by setting the `ORACLE_LOG` environment variable to any value. diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/client/client.go b/vendor/github.com/hashicorp/go-oracle-terraform/client/client.go new file mode 100644 index 000000000..4188d111b --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/client/client.go @@ -0,0 +1,245 @@ +package client + +import ( + "bytes" + "encoding/json" + "fmt" + "io" + "net/http" + "net/url" + "runtime" + "time" + + "github.com/hashicorp/go-oracle-terraform/opc" +) + +const DEFAULT_MAX_RETRIES = 1 +const USER_AGENT_HEADER = "User-Agent" + +var ( + // defaultUserAgent builds a string containing the Go version, system archityecture and OS, + // and the go-autorest version. + defaultUserAgent = fmt.Sprintf("Go/%s (%s-%s) go-oracle-terraform/%s", + runtime.Version(), + runtime.GOARCH, + runtime.GOOS, + Version(), + ) +) + +// Client represents an authenticated compute client, with compute credentials and an api client. +type Client struct { + IdentityDomain *string + UserName *string + Password *string + APIEndpoint *url.URL + httpClient *http.Client + MaxRetries *int + UserAgent *string + logger opc.Logger + loglevel opc.LogLevelType +} + +func NewClient(c *opc.Config) (*Client, error) { + // First create a client + client := &Client{ + IdentityDomain: c.IdentityDomain, + UserName: c.Username, + Password: c.Password, + APIEndpoint: c.APIEndpoint, + UserAgent: &defaultUserAgent, + httpClient: c.HTTPClient, + MaxRetries: c.MaxRetries, + loglevel: c.LogLevel, + } + if c.UserAgent != nil { + client.UserAgent = c.UserAgent + } + + // Setup logger; defaults to stdout + if c.Logger == nil { + client.logger = opc.NewDefaultLogger() + } else { + client.logger = c.Logger + } + + // If LogLevel was not set to something different, + // double check for env var + if c.LogLevel == 0 { + client.loglevel = opc.LogLevel() + } + + // Default max retries if unset + if c.MaxRetries == nil { + client.MaxRetries = opc.Int(DEFAULT_MAX_RETRIES) + } + + // Protect against any nil http client + if c.HTTPClient == nil { + return nil, fmt.Errorf("No HTTP client specified in config") + } + + return client, nil +} + +// Marshalls the request body and returns the resulting byte slice +// This is split out of the BuildRequestBody method so as to allow +// the developer to print a debug string of the request body if they +// should so choose. 
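+// A minimal usage sketch (illustrative only, not part of the upstream SDK), assuming a
+// *Client c and a JSON-marshalable value input; "/example/path" is a placeholder path:
+//
+//	payload, err := c.MarshallRequestBody(input)
+//	if err != nil {
+//		return nil, err
+//	}
+//	c.DebugLogString(fmt.Sprintf("request payload: %s", payload))
+//	req, err := c.BuildRequestBody("POST", "/example/path", payload)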
+func (c *Client) MarshallRequestBody(body interface{}) ([]byte, error) { + // Verify interface isnt' nil + if body == nil { + return nil, nil + } + + return json.Marshal(body) +} + +// Builds an HTTP Request that accepts a pre-marshaled body parameter as a raw byte array +// Returns the raw HTTP Request and any error occured +func (c *Client) BuildRequestBody(method, path string, body []byte) (*http.Request, error) { + // Parse URL Path + urlPath, err := url.Parse(path) + if err != nil { + return nil, err + } + + var requestBody io.ReadSeeker + if len(body) != 0 { + requestBody = bytes.NewReader(body) + } + + // Create Request + req, err := http.NewRequest(method, c.formatURL(urlPath), requestBody) + if err != nil { + return nil, err + } + // Adding UserAgent Header + req.Header.Add(USER_AGENT_HEADER, *c.UserAgent) + + return req, nil +} + +// Build a new HTTP request that doesn't marshall the request body +func (c *Client) BuildNonJSONRequest(method, path string, body io.ReadSeeker) (*http.Request, error) { + // Parse URL Path + urlPath, err := url.Parse(path) + if err != nil { + return nil, err + } + + // Create request + req, err := http.NewRequest(method, c.formatURL(urlPath), body) + if err != nil { + return nil, err + } + // Adding UserAgentHeader + req.Header.Add(USER_AGENT_HEADER, *c.UserAgent) + + return req, nil +} + +// This method executes the http.Request from the BuildRequest method. +// It is split up to add additional authentication that is Oracle API dependent. +func (c *Client) ExecuteRequest(req *http.Request) (*http.Response, error) { + // Execute request with supplied client + resp, err := c.retryRequest(req) + if err != nil { + return resp, err + } + + if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusMultipleChoices { + return resp, nil + } + + oracleErr := &opc.OracleError{ + StatusCode: resp.StatusCode, + } + + // Even though the returned body will be in json form, it's undocumented what + // fields are actually returned. Once we get documentation of the actual + // error fields that are possible to be returned we can have stricter error types. + if resp.Body != nil { + buf := new(bytes.Buffer) + buf.ReadFrom(resp.Body) + oracleErr.Message = buf.String() + } + + // Should return the response object regardless of error, + // some resources need to verify and check status code on errors to + // determine if an error actually occurs or not. 
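+	// For instance (an illustrative sketch, not upstream code), a caller that only
+	// cares whether the resource exists can inspect the typed error:
+	//
+	//	resp, err := c.ExecuteRequest(req)
+	//	if err != nil && !WasNotFoundError(err) {
+	//		return nil, err // a genuine failure rather than a missing resource
+	//	}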
+ return resp, oracleErr +} + +// Allow retrying the request until it either returns no error, +// or we exceed the number of max retries +func (c *Client) retryRequest(req *http.Request) (*http.Response, error) { + // Double check maxRetries is not nil + var retries int + if c.MaxRetries == nil { + retries = DEFAULT_MAX_RETRIES + } else { + retries = *c.MaxRetries + } + + var statusCode int + var errMessage string + + for i := 0; i < retries; i++ { + resp, err := c.httpClient.Do(req) + if err != nil { + return resp, err + } + + if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusMultipleChoices { + return resp, nil + } + + buf := new(bytes.Buffer) + buf.ReadFrom(resp.Body) + errMessage = buf.String() + statusCode = resp.StatusCode + c.DebugLogString(fmt.Sprintf("Encountered HTTP (%d) Error: %s", statusCode, errMessage)) + c.DebugLogString(fmt.Sprintf("%d/%d retries left", i+1, retries)) + } + + oracleErr := &opc.OracleError{ + StatusCode: statusCode, + Message: errMessage, + } + + // We ran out of retries to make, return the error and response + return nil, oracleErr +} + +func (c *Client) formatURL(path *url.URL) string { + return c.APIEndpoint.ResolveReference(path).String() +} + +// Retry function +func (c *Client) WaitFor(description string, timeout time.Duration, test func() (bool, error)) error { + tick := time.Tick(1 * time.Second) + + timeoutSeconds := int(timeout.Seconds()) + + for i := 0; i < timeoutSeconds; i++ { + select { + case <-tick: + completed, err := test() + c.DebugLogString(fmt.Sprintf("Waiting for %s (%d/%ds)", description, i, timeoutSeconds)) + if err != nil || completed { + return err + } + } + } + return fmt.Errorf("Timeout waiting for %s", description) +} + +// Used to determine if the checked resource was found or not. +func WasNotFoundError(e error) bool { + err, ok := e.(*opc.OracleError) + if ok { + return err.StatusCode == 404 + } + return false +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/client/logging.go b/vendor/github.com/hashicorp/go-oracle-terraform/client/logging.go new file mode 100644 index 000000000..893997486 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/client/logging.go @@ -0,0 +1,28 @@ +package client + +import ( + "bytes" + "fmt" + "net/http" + + "github.com/hashicorp/go-oracle-terraform/opc" +) + +// Log a string if debug logs are on +func (c *Client) DebugLogString(str string) { + if c.loglevel != opc.LogDebug { + return + } + c.logger.Log(str) +} + +func (c *Client) DebugLogReq(req *http.Request) { + // Don't need to log this if not debugging + if c.loglevel != opc.LogDebug { + return + } + buf := new(bytes.Buffer) + buf.ReadFrom(req.Body) + c.logger.Log(fmt.Sprintf("DEBUG: HTTP %s Req %s: %s", + req.Method, req.URL.String(), buf.String())) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/client/version.go b/vendor/github.com/hashicorp/go-oracle-terraform/client/version.go new file mode 100644 index 000000000..3538fd181 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/client/version.go @@ -0,0 +1,35 @@ +package client + +import ( + "bytes" + "fmt" + "strings" + "sync" +) + +const ( + major = 0 + minor = 6 + patch = 2 + tag = "" +) + +var once sync.Once +var version string + +// Version returns the semantic version (see http://semver.org). 
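+// With the constants defined above (major 0, minor 6, patch 2, empty tag), the
+// cached result is simply "0.6.2".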
+func Version() string { + once.Do(func() { + semver := fmt.Sprintf("%d.%d.%d", major, minor, patch) + verBuilder := bytes.NewBufferString(semver) + if tag != "" && tag != "-" { + updated := strings.TrimPrefix(tag, "-") + _, err := verBuilder.WriteString("-" + updated) + if err == nil { + verBuilder = bytes.NewBufferString(semver) + } + } + version = verBuilder.String() + }) + return version +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/acl.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/acl.go new file mode 100644 index 000000000..877f51c28 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/acl.go @@ -0,0 +1,138 @@ +package compute + +// ACLsClient is a client for the ACLs functions of the Compute API. +type ACLsClient struct { + ResourceClient +} + +const ( + ACLDescription = "acl" + ACLContainerPath = "/network/v1/acl/" + ACLResourcePath = "/network/v1/acl" +) + +// ACLs obtains a ACLsClient which can be used to access to the +// ACLs functions of the Compute API +func (c *ComputeClient) ACLs() *ACLsClient { + return &ACLsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: ACLDescription, + ContainerPath: ACLContainerPath, + ResourceRootPath: ACLResourcePath, + }} +} + +// ACLInfo describes an existing ACL. +type ACLInfo struct { + // Description of the ACL + Description string `json:"description"` + // Indicates whether the ACL is enabled + Enabled bool `json:"enabledFlag"` + // The name of the ACL + Name string `json:"name"` + // Tags associated with the ACL + Tags []string `json:"tags"` + // Uniform Resource Identifier for the ACL + URI string `json:"uri"` +} + +// CreateACLInput defines a ACL to be created. +type CreateACLInput struct { + // Description of the ACL + // Optional + Description string `json:"description"` + + // Enables or disables the ACL. Set to true by default. + //Set this to false to disable the ACL. + // Optional + Enabled bool `json:"enabledFlag"` + + // The name of the ACL to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Strings that you can use to tag the ACL. + // Optional + Tags []string `json:"tags"` +} + +// CreateACL creates a new ACL. +func (c *ACLsClient) CreateACL(createInput *CreateACLInput) (*ACLInfo, error) { + createInput.Name = c.getQualifiedName(createInput.Name) + + var aclInfo ACLInfo + if err := c.createResource(createInput, &aclInfo); err != nil { + return nil, err + } + + return c.success(&aclInfo) +} + +// GetACLInput describes the ACL to get +type GetACLInput struct { + // The name of the ACL to query for + // Required + Name string `json:"name"` +} + +// GetACL retrieves the ACL with the given name. +func (c *ACLsClient) GetACL(getInput *GetACLInput) (*ACLInfo, error) { + var aclInfo ACLInfo + if err := c.getResource(getInput.Name, &aclInfo); err != nil { + return nil, err + } + + return c.success(&aclInfo) +} + +// UpdateACLInput describes a secruity rule to update +type UpdateACLInput struct { + // Description of the ACL + // Optional + Description string `json:"description"` + + // Enables or disables the ACL. Set to true by default. + //Set this to false to disable the ACL. + // Optional + Enabled bool `json:"enabledFlag"` + + // The name of the ACL to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. 
+ // Required + Name string `json:"name"` + + // Strings that you can use to tag the ACL. + // Optional + Tags []string `json:"tags"` +} + +// UpdateACL modifies the properties of the ACL with the given name. +func (c *ACLsClient) UpdateACL(updateInput *UpdateACLInput) (*ACLInfo, error) { + updateInput.Name = c.getQualifiedName(updateInput.Name) + + var aclInfo ACLInfo + if err := c.updateResource(updateInput.Name, updateInput, &aclInfo); err != nil { + return nil, err + } + + return c.success(&aclInfo) +} + +// DeleteACLInput describes the ACL to delete +type DeleteACLInput struct { + // The name of the ACL to delete. + // Required + Name string `json:"name"` +} + +// DeleteACL deletes the ACL with the given name. +func (c *ACLsClient) DeleteACL(deleteInput *DeleteACLInput) error { + return c.deleteResource(deleteInput.Name) +} + +func (c *ACLsClient) success(aclInfo *ACLInfo) (*ACLInfo, error) { + aclInfo.Name = c.getUnqualifiedName(aclInfo.Name) + return aclInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/authentication.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/authentication.go new file mode 100644 index 000000000..6c62d7ddd --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/authentication.go @@ -0,0 +1,34 @@ +package compute + +import ( + "fmt" + "time" +) + +// AuthenticationReq represents the body of an authentication request. +type AuthenticationReq struct { + User string `json:"user"` + Password string `json:"password"` +} + +// Get a new auth cookie for the compute client +func (c *ComputeClient) getAuthenticationCookie() error { + req := AuthenticationReq{ + User: c.getUserName(), + Password: *c.client.Password, + } + + rsp, err := c.executeRequest("POST", "/authenticate/", req) + if err != nil { + return err + } + + if len(rsp.Cookies()) == 0 { + return fmt.Errorf("No authentication cookie found in response %#v", rsp) + } + + c.client.DebugLogString("Successfully authenticated to OPC") + c.authCookie = rsp.Cookies()[0] + c.cookieIssued = time.Now() + return nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/compute_client.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/compute_client.go new file mode 100644 index 000000000..3719dbb15 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/compute_client.go @@ -0,0 +1,167 @@ +package compute + +import ( + "fmt" + "net/http" + "regexp" + "strings" + "time" + + "github.com/hashicorp/go-oracle-terraform/client" + "github.com/hashicorp/go-oracle-terraform/opc" +) + +const CMP_ACME = "/Compute-%s" +const CMP_USERNAME = "/Compute-%s/%s" +const CMP_QUALIFIED_NAME = "%s/%s" + +// Client represents an authenticated compute client, with compute credentials and an api client. 
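+// A ComputeClient wraps the generic client.Client and keeps the Oracle Compute
+// authentication cookie alongside the time it was issued; executeRequest
+// re-authenticates automatically once the cookie is older than roughly 25 minutes
+// (see getAuthenticationCookie).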
+type ComputeClient struct { + client *client.Client + authCookie *http.Cookie + cookieIssued time.Time +} + +func NewComputeClient(c *opc.Config) (*ComputeClient, error) { + computeClient := &ComputeClient{} + client, err := client.NewClient(c) + if err != nil { + return nil, err + } + computeClient.client = client + + if err := computeClient.getAuthenticationCookie(); err != nil { + return nil, err + } + + return computeClient, nil +} + +func (c *ComputeClient) executeRequest(method, path string, body interface{}) (*http.Response, error) { + reqBody, err := c.client.MarshallRequestBody(body) + if err != nil { + return nil, err + } + + req, err := c.client.BuildRequestBody(method, path, reqBody) + if err != nil { + return nil, err + } + + debugReqString := fmt.Sprintf("HTTP %s Req (%s)", method, path) + if body != nil { + req.Header.Set("Content-Type", "application/oracle-compute-v3+json") + // Don't leak credentials in STDERR + if path != "/authenticate/" { + debugReqString = fmt.Sprintf("%s:\n %+v", debugReqString, string(reqBody)) + } + } + // Log the request before the authentication cookie, so as not to leak credentials + c.client.DebugLogString(debugReqString) + // If we have an authentication cookie, let's authenticate, refreshing cookie if need be + if c.authCookie != nil { + if time.Since(c.cookieIssued).Minutes() > 25 { + c.authCookie = nil + if err := c.getAuthenticationCookie(); err != nil { + return nil, err + } + } + req.AddCookie(c.authCookie) + } + + resp, err := c.client.ExecuteRequest(req) + if err != nil { + return nil, err + } + return resp, nil +} + +func (c *ComputeClient) getACME() string { + return fmt.Sprintf(CMP_ACME, *c.client.IdentityDomain) +} + +func (c *ComputeClient) getUserName() string { + return fmt.Sprintf(CMP_USERNAME, *c.client.IdentityDomain, *c.client.UserName) +} + +func (c *ComputeClient) getQualifiedACMEName(name string) string { + if name == "" { + return "" + } + if strings.HasPrefix(name, "/Compute-") && len(strings.Split(name, "/")) == 1 { + return name + } + return fmt.Sprintf(CMP_QUALIFIED_NAME, c.getACME(), name) +} + +// From compute_client +// GetObjectName returns the fully-qualified name of an OPC object, e.g. /identity-domain/user@email/{name} +func (c *ComputeClient) getQualifiedName(name string) string { + if name == "" { + return "" + } + if strings.HasPrefix(name, "/oracle") || strings.HasPrefix(name, "/Compute-") { + return name + } + return fmt.Sprintf(CMP_QUALIFIED_NAME, c.getUserName(), name) +} + +func (c *ComputeClient) getObjectPath(root, name string) string { + return fmt.Sprintf("%s%s", root, c.getQualifiedName(name)) +} + +// GetUnqualifiedName returns the unqualified name of an OPC object, e.g. 
the {name} part of /identity-domain/user@email/{name} +func (c *ComputeClient) getUnqualifiedName(name string) string { + if name == "" { + return name + } + if strings.HasPrefix(name, "/oracle") { + return name + } + if !strings.Contains(name, "/") { + return name + } + + nameParts := strings.Split(name, "/") + return strings.Join(nameParts[3:], "/") +} + +func (c *ComputeClient) unqualify(names ...*string) { + for _, name := range names { + *name = c.getUnqualifiedName(*name) + } +} + +func (c *ComputeClient) unqualifyUrl(url *string) { + var validID = regexp.MustCompile(`(\/(Compute[^\/\s]+))(\/[^\/\s]+)(\/[^\/\s]+)`) + name := validID.FindString(*url) + *url = c.getUnqualifiedName(name) +} + +func (c *ComputeClient) getQualifiedList(list []string) []string { + for i, name := range list { + list[i] = c.getQualifiedName(name) + } + return list +} + +func (c *ComputeClient) getUnqualifiedList(list []string) []string { + for i, name := range list { + list[i] = c.getUnqualifiedName(name) + } + return list +} + +func (c *ComputeClient) getQualifiedListName(name string) string { + nameParts := strings.Split(name, ":") + listType := nameParts[0] + listName := nameParts[1] + return fmt.Sprintf("%s:%s", listType, c.getQualifiedName(listName)) +} + +func (c *ComputeClient) unqualifyListName(qualifiedName string) string { + nameParts := strings.Split(qualifiedName, ":") + listType := nameParts[0] + listName := nameParts[1] + return fmt.Sprintf("%s:%s", listType, c.getUnqualifiedName(listName)) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/compute_resource_client.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/compute_resource_client.go new file mode 100644 index 000000000..2c3e1dc9e --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/compute_resource_client.go @@ -0,0 +1,113 @@ +package compute + +import ( + "bytes" + "encoding/json" + "fmt" + "net/http" + + "github.com/mitchellh/mapstructure" +) + +// ResourceClient is an AuthenticatedClient with some additional information about the resources to be addressed. 
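+// For example, the ACLs client in acl.go embeds a ResourceClient configured with
+// ResourceDescription "acl", ContainerPath "/network/v1/acl/" and ResourceRootPath
+// "/network/v1/acl": createResource POSTs to the container path, while the get,
+// update and delete helpers operate on the root path plus the qualified object name.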
+type ResourceClient struct { + *ComputeClient + ResourceDescription string + ContainerPath string + ResourceRootPath string +} + +func (c *ResourceClient) createResource(requestBody interface{}, responseBody interface{}) error { + resp, err := c.executeRequest("POST", c.ContainerPath, requestBody) + if err != nil { + return err + } + + return c.unmarshalResponseBody(resp, responseBody) +} + +func (c *ResourceClient) updateResource(name string, requestBody interface{}, responseBody interface{}) error { + resp, err := c.executeRequest("PUT", c.getObjectPath(c.ResourceRootPath, name), requestBody) + if err != nil { + return err + } + + return c.unmarshalResponseBody(resp, responseBody) +} + +func (c *ResourceClient) getResource(name string, responseBody interface{}) error { + var objectPath string + if name != "" { + objectPath = c.getObjectPath(c.ResourceRootPath, name) + } else { + objectPath = c.ResourceRootPath + } + resp, err := c.executeRequest("GET", objectPath, nil) + if err != nil { + return err + } + + return c.unmarshalResponseBody(resp, responseBody) +} + +func (c *ResourceClient) deleteResource(name string) error { + var objectPath string + if name != "" { + objectPath = c.getObjectPath(c.ResourceRootPath, name) + } else { + objectPath = c.ResourceRootPath + } + _, err := c.executeRequest("DELETE", objectPath, nil) + if err != nil { + return err + } + + // No errors and no response body to write + return nil +} + +func (c *ResourceClient) deleteOrchestration(name string) error { + var objectPath string + if name != "" { + objectPath = c.getObjectPath(c.ResourceRootPath, name) + } else { + objectPath = c.ResourceRootPath + } + // Set terminate to true as we always want to delete an orchestration + objectPath = fmt.Sprintf("%s?terminate=True", objectPath) + + _, err := c.executeRequest("DELETE", objectPath, nil) + if err != nil { + return err + } + + // No errors and no response body to write + return nil +} + +func (c *ResourceClient) unmarshalResponseBody(resp *http.Response, iface interface{}) error { + buf := new(bytes.Buffer) + buf.ReadFrom(resp.Body) + c.client.DebugLogString(fmt.Sprintf("HTTP Resp (%d): %s", resp.StatusCode, buf.String())) + // JSON decode response into interface + var tmp interface{} + dcd := json.NewDecoder(buf) + if err := dcd.Decode(&tmp); err != nil { + return err + } + + // Use mapstructure to weakly decode into the resulting interface + msdcd, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{ + WeaklyTypedInput: true, + Result: iface, + TagName: "json", + }) + if err != nil { + return err + } + + if err := msdcd.Decode(tmp); err != nil { + return err + } + return nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/image_list.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/image_list.go new file mode 100644 index 000000000..1d908987c --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/image_list.go @@ -0,0 +1,154 @@ +package compute + +const ( + ImageListDescription = "Image List" + ImageListContainerPath = "/imagelist/" + ImageListResourcePath = "/imagelist" +) + +// ImageListClient is a client for the Image List functions of the Compute API. 
+type ImageListClient struct { + ResourceClient +} + +// ImageList obtains an ImageListClient which can be used to access to the +// Image List functions of the Compute API +func (c *ComputeClient) ImageList() *ImageListClient { + return &ImageListClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: ImageListDescription, + ContainerPath: ImageListContainerPath, + ResourceRootPath: ImageListResourcePath, + }} +} + +type ImageListEntry struct { + // User-defined parameters, in JSON format, that can be passed to an instance of this machine image when it is launched. + Attributes map[string]interface{} `json:"attributes"` + + // Name of the Image List. + ImageList string `json:"imagelist"` + + // A list of machine images. + MachineImages []string `json:"machineimages"` + + // Uniform Resource Identifier. + URI string `json:"uri"` + + // Version number of these Machine Images in the Image List. + Version int `json:"version"` +} + +// ImageList describes an existing Image List. +type ImageList struct { + // The image list entry to be used, by default, when launching instances using this image list + Default int `json:"default"` + + // A description of this image list. + Description string `json:"description"` + + // Each machine image in an image list is identified by an image list entry. + Entries []ImageListEntry `json:"entries"` + + // The name of the Image List + Name string `json:"name"` + + // Uniform Resource Identifier + URI string `json:"uri"` +} + +// CreateImageListInput defines an Image List to be created. +type CreateImageListInput struct { + // The image list entry to be used, by default, when launching instances using this image list. + // If you don't specify this value, it is set to 1. + // Optional + Default int `json:"default"` + + // A description of this image list. + // Required + Description string `json:"description"` + + // The name of the Image List + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` +} + +// CreateImageList creates a new Image List with the given name, key and enabled flag. +func (c *ImageListClient) CreateImageList(createInput *CreateImageListInput) (*ImageList, error) { + var imageList ImageList + createInput.Name = c.getQualifiedName(createInput.Name) + if err := c.createResource(&createInput, &imageList); err != nil { + return nil, err + } + + return c.success(&imageList) +} + +// DeleteKeyInput describes the image list to delete +type DeleteImageListInput struct { + // The name of the Image List + Name string `json:"name"` +} + +// DeleteImageList deletes the Image List with the given name. +func (c *ImageListClient) DeleteImageList(deleteInput *DeleteImageListInput) error { + deleteInput.Name = c.getQualifiedName(deleteInput.Name) + return c.deleteResource(deleteInput.Name) +} + +// GetImageListInput describes the image list to get +type GetImageListInput struct { + // The name of the Image List + Name string `json:"name"` +} + +// GetImageList retrieves the Image List with the given name. 
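+// A minimal usage sketch (names are illustrative), assuming a *ComputeClient obtained
+// from NewComputeClient:
+//
+//	list, err := computeClient.ImageList().GetImageList(&GetImageListInput{Name: "my_image_list"})
+//	if err == nil {
+//		fmt.Printf("default entry: %d, entries: %d\n", list.Default, len(list.Entries))
+//	}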
+func (c *ImageListClient) GetImageList(getInput *GetImageListInput) (*ImageList, error) { + getInput.Name = c.getQualifiedName(getInput.Name) + + var imageList ImageList + if err := c.getResource(getInput.Name, &imageList); err != nil { + return nil, err + } + + return c.success(&imageList) +} + +// UpdateImageListInput defines an Image List to be updated +type UpdateImageListInput struct { + // The image list entry to be used, by default, when launching instances using this image list. + // If you don't specify this value, it is set to 1. + // Optional + Default int `json:"default"` + + // A description of this image list. + // Required + Description string `json:"description"` + + // The name of the Image List + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` +} + +// UpdateImageList updates the key and enabled flag of the Image List with the given name. +func (c *ImageListClient) UpdateImageList(updateInput *UpdateImageListInput) (*ImageList, error) { + var imageList ImageList + updateInput.Name = c.getQualifiedName(updateInput.Name) + if err := c.updateResource(updateInput.Name, updateInput, &imageList); err != nil { + return nil, err + } + return c.success(&imageList) +} + +func (c *ImageListClient) success(imageList *ImageList) (*ImageList, error) { + c.unqualify(&imageList.Name) + + for _, v := range imageList.Entries { + v.MachineImages = c.getUnqualifiedList(v.MachineImages) + } + + return imageList, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/image_list_entries.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/image_list_entries.go new file mode 100644 index 000000000..ac72ef45a --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/image_list_entries.go @@ -0,0 +1,122 @@ +package compute + +import "fmt" + +const ( + ImageListEntryDescription = "image list entry" + ImageListEntryContainerPath = "/imagelist" + ImageListEntryResourcePath = "/imagelist" +) + +type ImageListEntriesClient struct { + ResourceClient +} + +// ImageListEntries() returns an ImageListEntriesClient that can be used to access the +// necessary CRUD functions for Image List Entry's. +func (c *ComputeClient) ImageListEntries() *ImageListEntriesClient { + return &ImageListEntriesClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: ImageListEntryDescription, + ContainerPath: ImageListEntryContainerPath, + ResourceRootPath: ImageListEntryResourcePath, + }, + } +} + +// ImageListEntryInfo contains the exported fields necessary to hold all the information about an +// Image List Entry +type ImageListEntryInfo struct { + // User-defined parameters, in JSON format, that can be passed to an instance of this machine + // image when it is launched. This field can be used, for example, to specify the location of + // a database server and login details. Instance metadata, including user-defined data is available + // at http://192.0.0.192/ within an instance. See Retrieving User-Defined Instance Attributes in Using + // Oracle Compute Cloud Service (IaaS). + Attributes map[string]interface{} `json:"attributes"` + // Name of the imagelist. + Name string `json:"imagelist"` + // A list of machine images. + MachineImages []string `json:"machineimages"` + // Uniform Resource Identifier for the Image List Entry + Uri string `json:"uri"` + // Version number of these machineImages in the imagelist. 
+ Version int `json:"version"` +} + +type CreateImageListEntryInput struct { + // The name of the Image List + Name string + // User-defined parameters, in JSON format, that can be passed to an instance of this machine + // image when it is launched. This field can be used, for example, to specify the location of + // a database server and login details. Instance metadata, including user-defined data is + //available at http://192.0.0.192/ within an instance. See Retrieving User-Defined Instance + //Attributes in Using Oracle Compute Cloud Service (IaaS). + // Optional + Attributes map[string]interface{} `json:"attributes"` + // A list of machine images. + // Required + MachineImages []string `json:"machineimages"` + // The unique version of the entry in the image list. + // Required + Version int `json:"version"` +} + +// Create a new Image List Entry from an ImageListEntriesClient and an input struct. +// Returns a populated Info struct for the Image List Entry, and any errors +func (c *ImageListEntriesClient) CreateImageListEntry(input *CreateImageListEntryInput) (*ImageListEntryInfo, error) { + c.updateClientPaths(input.Name, -1) + var imageListEntryInfo ImageListEntryInfo + if err := c.createResource(&input, &imageListEntryInfo); err != nil { + return nil, err + } + return c.success(&imageListEntryInfo) +} + +type GetImageListEntryInput struct { + // The name of the Image List + Name string + // Version number of these machineImages in the imagelist. + Version int +} + +// Returns a populated ImageListEntryInfo struct from an input struct +func (c *ImageListEntriesClient) GetImageListEntry(input *GetImageListEntryInput) (*ImageListEntryInfo, error) { + c.updateClientPaths(input.Name, input.Version) + var imageListEntryInfo ImageListEntryInfo + if err := c.getResource("", &imageListEntryInfo); err != nil { + return nil, err + } + return c.success(&imageListEntryInfo) +} + +type DeleteImageListEntryInput struct { + // The name of the Image List + Name string + // Version number of these machineImages in the imagelist. 
+ Version int +} + +func (c *ImageListEntriesClient) DeleteImageListEntry(input *DeleteImageListEntryInput) error { + c.updateClientPaths(input.Name, input.Version) + return c.deleteResource("") +} + +func (c *ImageListEntriesClient) updateClientPaths(name string, version int) { + var containerPath, resourcePath string + name = c.getQualifiedName(name) + containerPath = ImageListEntryContainerPath + name + "/entry/" + resourcePath = ImageListEntryContainerPath + name + "/entry" + if version != -1 { + containerPath = fmt.Sprintf("%s%d", containerPath, version) + resourcePath = fmt.Sprintf("%s/%d", resourcePath, version) + } + c.ContainerPath = containerPath + c.ResourceRootPath = resourcePath +} + +// Unqualifies any qualified fields in the IPNetworkInfo struct +func (c *ImageListEntriesClient) success(info *ImageListEntryInfo) (*ImageListEntryInfo, error) { + c.unqualifyUrl(&info.Uri) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/instances.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/instances.go new file mode 100644 index 000000000..ddfa8d6ec --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/instances.go @@ -0,0 +1,788 @@ +package compute + +import ( + "errors" + "fmt" + "strings" + "time" + + "github.com/hashicorp/go-oracle-terraform/client" +) + +const WaitForInstanceReadyTimeout = time.Duration(3600 * time.Second) +const WaitForInstanceDeleteTimeout = time.Duration(3600 * time.Second) + +// InstancesClient is a client for the Instance functions of the Compute API. +type InstancesClient struct { + ResourceClient +} + +// Instances obtains an InstancesClient which can be used to access to the +// Instance functions of the Compute API +func (c *ComputeClient) Instances() *InstancesClient { + return &InstancesClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "instance", + ContainerPath: "/launchplan/", + ResourceRootPath: "/instance", + }} +} + +type InstanceState string + +const ( + InstanceRunning InstanceState = "running" + InstanceInitializing InstanceState = "initializing" + InstancePreparing InstanceState = "preparing" + InstanceStarting InstanceState = "starting" + InstanceStopping InstanceState = "stopping" + InstanceShutdown InstanceState = "shutdown" + InstanceQueued InstanceState = "queued" + InstanceError InstanceState = "error" +) + +type InstanceDesiredState string + +const ( + InstanceDesiredRunning InstanceDesiredState = "running" + InstanceDesiredShutdown InstanceDesiredState = "shutdown" +) + +// InstanceInfo represents the Compute API's view of the state of an instance. +type InstanceInfo struct { + // The ID for the instance. Set by the SDK based on the request - not the API. + ID string + + // A dictionary of attributes to be made available to the instance. + // A value with the key "userdata" will be made available in an EC2-compatible manner. + Attributes map[string]interface{} `json:"attributes"` + + // The availability domain for the instance + AvailabilityDomain string `json:"availability_domain"` + + // Boot order list. + BootOrder []int `json:"boot_order"` + + // The default domain to use for the hostname and DNS lookups + Domain string `json:"domain"` + + // The desired state of an instance + DesiredState InstanceDesiredState `json:"desired_state"` + + // Optional ImageListEntry number. Default will be used if not specified + Entry int `json:"entry"` + + // The reason for the instance going to error state, if available. 
+ ErrorReason string `json:"error_reason"` + + // SSH Server Fingerprint presented by the instance + Fingerprint string `json:"fingerprint"` + + // The hostname for the instance + Hostname string `json:"hostname"` + + // The format of the image + ImageFormat string `json:"image_format"` + + // Name of imagelist to be launched. + ImageList string `json:"imagelist"` + + // IP address of the instance. + IPAddress string `json:"ip"` + + // A label assigned by the user, specifically for defining inter-instance relationships. + Label string `json:"label"` + + // Name of this instance, generated by the server. + Name string `json:"name"` + + // Mapping of to network specifiers for virtual NICs to be attached to this instance. + Networking map[string]NetworkingInfo `json:"networking"` + + // A list of strings specifying arbitrary tags on nodes to be matched on placement. + PlacementRequirements []string `json:"placement_requirements"` + + // The OS platform for the instance. + Platform string `json:"platform"` + + // The priority at which this instance will be run + Priority string `json:"priority"` + + // Reference to the QuotaReservation, to be destroyed with the instance + QuotaReservation string `json:"quota_reservation"` + + // Array of relationship specifications to be satisfied on this instance's placement + Relationships []string `json:"relationships"` + + // Resolvers to use instead of the default resolvers + Resolvers []string `json:"resolvers"` + + // Add PTR records for the hostname + ReverseDNS bool `json:"reverse_dns"` + + // Type of instance, as defined on site configuration. + Shape string `json:"shape"` + + // Site to run on + Site string `json:"site"` + + // ID's of SSH keys that will be exposed to the instance. + SSHKeys []string `json:"sshkeys"` + + // The start time of the instance + StartTime string `json:"start_time"` + + // State of the instance. + State InstanceState `json:"state"` + + // The Storage Attachment information. + Storage []StorageAttachment `json:"storage_attachments"` + + // Array of tags associated with the instance. + Tags []string `json:"tags"` + + // vCable for this instance. + VCableID string `json:"vcable_id"` + + // Specify if the devices created for the instance are virtio devices. If not specified, the default + // will come from the cluster configuration file + Virtio bool `json:"virtio,omitempty"` + + // IP Address and port of the VNC console for the instance + VNC string `json:"vnc"` +} + +type StorageAttachment struct { + // The index number for the volume. + Index int `json:"index"` + + // The three-part name (/Compute-identity_domain/user/object) of the storage attachment. + Name string `json:"name"` + + // The three-part name (/Compute-identity_domain/user/object) of the storage volume attached to the instance. + StorageVolumeName string `json:"storage_volume_name"` +} + +func (i *InstanceInfo) getInstanceName() string { + return fmt.Sprintf(CMP_QUALIFIED_NAME, i.Name, i.ID) +} + +type CreateInstanceInput struct { + // A dictionary of user-defined attributes to be made available to the instance. + // Optional + Attributes map[string]interface{} `json:"attributes"` + // Boot order list + // Optional + BootOrder []int `json:"boot_order,omitempty"` + // The desired state of the opc instance. Can only be `running` or `shutdown` + // Omits if empty. + // Optional + DesiredState InstanceDesiredState `json:"desired_state,omitempty"` + // The host name assigned to the instance. 
On an Oracle Linux instance, + // this host name is displayed in response to the hostname command. + // Only relative DNS is supported. The domain name is suffixed to the host name + // that you specify. The host name must not end with a period. If you don't specify a + // host name, then a name is generated automatically. + // Optional + Hostname string `json:"hostname"` + // Name of imagelist to be launched. + // Optional + ImageList string `json:"imagelist"` + // A label assigned by the user, specifically for defining inter-instance relationships. + // Optional + Label string `json:"label"` + // Name of this instance, generated by the server. + // Optional + Name string `json:"name"` + // Networking information. + // Optional + Networking map[string]NetworkingInfo `json:"networking"` + // If set to true (default), then reverse DNS records are created. + // If set to false, no reverse DNS records are created. + // Optional + ReverseDNS bool `json:"reverse_dns,omitempty"` + // Type of instance, as defined on site configuration. + // Required + Shape string `json:"shape"` + // A list of the Storage Attachments you want to associate with the instance. + // Optional + Storage []StorageAttachmentInput `json:"storage_attachments,omitempty"` + // A list of the SSH public keys that you want to associate with the instance. + // Optional + SSHKeys []string `json:"sshkeys"` + // A list of tags to be supplied to the instance + // Optional + Tags []string `json:"tags"` + // Time to wait for an instance to be ready + Timeout time.Duration `json:"-"` +} + +type StorageAttachmentInput struct { + // The index number for the volume. The allowed range is 1 to 10. + // If you want to use a storage volume as the boot disk for an instance, you must specify the index number for that volume as 1. + // The index determines the device name by which the volume is exposed to the instance. + Index int `json:"index"` + // The three-part name (/Compute-identity_domain/user/object) of the storage volume that you want to attach to the instance. + // Note that volumes attached to an instance at launch time can't be detached. + Volume string `json:"volume"` +} + +const ReservationPrefix = "ipreservation" +const ReservationIPPrefix = "network/v1/ipreservation" + +type NICModel string + +const ( + NICDefaultModel NICModel = "e1000" +) + +// Struct of Networking info from a populated instance, or to be used as input to create an instance +type NetworkingInfo struct { + // The DNS name for the Shared network (Required) + // DNS A Record for an IP Network (Optional) + DNS []string `json:"dns,omitempty"` + // IP Network only. + // If you want to associate a static private IP Address, + // specify that here within the range of the supplied IPNetwork attribute. + // Optional + IPAddress string `json:"ip,omitempty"` + // IP Network only. + // The name of the IP Network you want to add the instance to. + // Required + IPNetwork string `json:"ipnetwork,omitempty"` + // IP Network only. + // Set interface as default gateway for all traffic + // Optional + IsDefaultGateway bool `json:"is_default_gateway,omitempty"` + // IP Network only. + // The hexadecimal MAC Address of the interface + // Optional + MACAddress string `json:"address,omitempty"` + // Shared Network only. + // The type of NIC used. Must be set to 'e1000' + // Required + Model NICModel `json:"model,omitempty"` + // IP Network and Shared Network + // The name servers that are sent through DHCP as option 6. 
+ // You can specify a maximum of eight name server IP addresses per interface. + // Optional + NameServers []string `json:"name_servers,omitempty"` + // The names of an IP Reservation to associate in an IP Network (Optional) + // Indicates whether a temporary or permanent public IP Address should be assigned + // in a Shared Network (Required) + Nat []string `json:"nat,omitempty"` + // IP Network and Shared Network + // The search domains that should be sent through DHCP as option 119. + // You can enter a maximum of eight search domain zones per interface. + // Optional + SearchDomains []string `json:"search_domains,omitempty"` + // Shared Network only. + // The security lists that you want to add the instance to + // Required + SecLists []string `json:"seclists,omitempty"` + // IP Network Only + // The name of the vNIC + // Optional + Vnic string `json:"vnic,omitempty"` + // IP Network only. + // The names of the vNICSets you want to add the interface to. + // Optional + VnicSets []string `json:"vnicsets,omitempty"` +} + +// LaunchPlan defines a launch plan, used to launch instances with the supplied InstanceSpec(s) +type LaunchPlanInput struct { + // Describes an array of instances which should be launched + Instances []CreateInstanceInput `json:"instances"` + // Time to wait for instance boot + Timeout time.Duration `json:"-"` +} + +type LaunchPlanResponse struct { + // An array of instances which have been launched + Instances []InstanceInfo `json:"instances"` +} + +// LaunchInstance creates and submits a LaunchPlan to launch a new instance. +func (c *InstancesClient) CreateInstance(input *CreateInstanceInput) (*InstanceInfo, error) { + qualifiedSSHKeys := []string{} + for _, key := range input.SSHKeys { + qualifiedSSHKeys = append(qualifiedSSHKeys, c.getQualifiedName(key)) + } + + input.SSHKeys = qualifiedSSHKeys + + qualifiedStorageAttachments := []StorageAttachmentInput{} + for _, attachment := range input.Storage { + qualifiedStorageAttachments = append(qualifiedStorageAttachments, StorageAttachmentInput{ + Index: attachment.Index, + Volume: c.getQualifiedName(attachment.Volume), + }) + } + input.Storage = qualifiedStorageAttachments + + input.Networking = c.qualifyNetworking(input.Networking) + + input.Name = fmt.Sprintf(CMP_QUALIFIED_NAME, c.getUserName(), input.Name) + + plan := LaunchPlanInput{ + Instances: []CreateInstanceInput{*input}, + Timeout: input.Timeout, + } + + var ( + instanceInfo *InstanceInfo + instanceError error + ) + for i := 0; i < *c.ComputeClient.client.MaxRetries; i++ { + c.client.DebugLogString(fmt.Sprintf("(Iteration: %d of %d) Creating instance with name %s\n Plan: %+v", i, *c.ComputeClient.client.MaxRetries, input.Name, plan)) + + instanceInfo, instanceError = c.startInstance(input.Name, plan) + if instanceError == nil { + c.client.DebugLogString(fmt.Sprintf("(Iteration: %d of %d) Finished creating instance with name %s\n Info: %+v", i, *c.ComputeClient.client.MaxRetries, input.Name, instanceInfo)) + return instanceInfo, nil + } + } + return nil, instanceError +} + +func (c *InstancesClient) startInstance(name string, plan LaunchPlanInput) (*InstanceInfo, error) { + var responseBody LaunchPlanResponse + + if err := c.createResource(&plan, &responseBody); err != nil { + return nil, err + } + + if len(responseBody.Instances) == 0 { + return nil, fmt.Errorf("No instance information returned: %#v", responseBody) + } + + // Call wait for instance ready now, as creating the instance is an eventually consistent operation + getInput := &GetInstanceInput{ + 
Name: name, + ID: responseBody.Instances[0].ID, + } + + //timeout := WaitForInstanceReadyTimeout + if plan.Timeout == 0 { + plan.Timeout = WaitForInstanceReadyTimeout + } + + // Wait for instance to be ready and return the result + // Don't have to unqualify any objects, as the GetInstance method will handle that + instanceInfo, instanceError := c.WaitForInstanceRunning(getInput, plan.Timeout) + // If the instance enters an error state we need to delete the instance and retry + if instanceError != nil { + deleteInput := &DeleteInstanceInput{ + Name: name, + ID: responseBody.Instances[0].ID, + } + err := c.DeleteInstance(deleteInput) + if err != nil { + return nil, fmt.Errorf("Error deleting instance %s: %s", name, err) + } + return nil, instanceError + } + return instanceInfo, nil +} + +// Both of these fields are required. If they're not provided, things go wrong in +// incredibly amazing ways. +type GetInstanceInput struct { + // The Unqualified Name of this Instance + Name string + // The Unqualified ID of this Instance + ID string +} + +func (g *GetInstanceInput) String() string { + return fmt.Sprintf(CMP_QUALIFIED_NAME, g.Name, g.ID) +} + +// GetInstance retrieves information about an instance. +func (c *InstancesClient) GetInstance(input *GetInstanceInput) (*InstanceInfo, error) { + if input.ID == "" || input.Name == "" { + return nil, errors.New("Both instance name and ID need to be specified") + } + + var responseBody InstanceInfo + if err := c.getResource(input.String(), &responseBody); err != nil { + return nil, err + } + + if responseBody.Name == "" { + return nil, fmt.Errorf("Empty response body when requesting instance %s", input.Name) + } + + // The returned 'Name' attribute is the fully qualified instance name + "/" + ID + // Split these out to accurately populate the fields + nID := strings.Split(c.getUnqualifiedName(responseBody.Name), "/") + responseBody.Name = nID[0] + responseBody.ID = nID[1] + + c.unqualify(&responseBody.VCableID) + + // Unqualify SSH Key names + sshKeyNames := []string{} + for _, sshKeyRef := range responseBody.SSHKeys { + sshKeyNames = append(sshKeyNames, c.getUnqualifiedName(sshKeyRef)) + } + responseBody.SSHKeys = sshKeyNames + + var networkingErr error + responseBody.Networking, networkingErr = c.unqualifyNetworking(responseBody.Networking) + if networkingErr != nil { + return nil, networkingErr + } + responseBody.Storage = c.unqualifyStorage(responseBody.Storage) + + return &responseBody, nil +} + +type InstancesInfo struct { + Instances []InstanceInfo `json:"result"` +} + +type GetInstanceIdInput struct { + // Name of the instance you want to get + Name string +} + +// GetInstanceFromName loops through all the instances and finds the instance for the given name +// This is needed for orchestration since it doesn't return the id for the instance it creates. 
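+// A minimal sketch (names are illustrative), assuming an *InstancesClient and an
+// instance launched by an orchestration under the name "my-instance":
+//
+//	info, err := instancesClient.GetInstanceFromName(&GetInstanceIdInput{Name: "my-instance"})
+//	if err == nil {
+//		fmt.Printf("resolved instance ID: %s\n", info.ID)
+//	}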
+func (c *InstancesClient) GetInstanceFromName(input *GetInstanceIdInput) (*InstanceInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var instancesInfo InstancesInfo + if err := c.getResource(fmt.Sprintf("%s/", c.getUserName()), &instancesInfo); err != nil { + return nil, err + } + + for _, i := range instancesInfo.Instances { + if strings.Contains(i.Name, input.Name) { + if i.Name == "" { + return nil, fmt.Errorf("Empty response body when requesting instance %s", input.Name) + } + + // The returned 'Name' attribute is the fully qualified instance name + "/" + ID + // Split these out to accurately populate the fields + nID := strings.Split(c.getUnqualifiedName(i.Name), "/") + i.Name = nID[0] + i.ID = nID[1] + + c.unqualify(&i.VCableID) + + // Unqualify SSH Key names + sshKeyNames := []string{} + for _, sshKeyRef := range i.SSHKeys { + sshKeyNames = append(sshKeyNames, c.getUnqualifiedName(sshKeyRef)) + } + i.SSHKeys = sshKeyNames + + var networkingErr error + i.Networking, networkingErr = c.unqualifyNetworking(i.Networking) + if networkingErr != nil { + return nil, networkingErr + } + i.Storage = c.unqualifyStorage(i.Storage) + + return &i, nil + } + } + + return nil, fmt.Errorf("Unable to find instance: %q", input.Name) +} + +type UpdateInstanceInput struct { + // Name of this instance, generated by the server. + // Required + Name string `json:"name"` + // The desired state of the opc instance. Can only be `running` or `shutdown` + // Omits if empty. + // Optional + DesiredState InstanceDesiredState `json:"desired_state,omitempty"` + // The ID of the instance + // Required + ID string `json:"-"` + // A list of tags to be supplied to the instance + // Optional + Tags []string `json:"tags,omitempty"` + // Time to wait for instance to be ready, or shutdown depending on desired state + Timeout time.Duration `json:"-"` +} + +func (g *UpdateInstanceInput) String() string { + return fmt.Sprintf(CMP_QUALIFIED_NAME, g.Name, g.ID) +} + +func (c *InstancesClient) UpdateInstance(input *UpdateInstanceInput) (*InstanceInfo, error) { + if input.Name == "" || input.ID == "" { + return nil, errors.New("Both instance name and ID need to be specified") + } + + input.Name = fmt.Sprintf(CMP_QUALIFIED_NAME, c.getUserName(), input.Name) + + var responseBody InstanceInfo + if err := c.updateResource(input.String(), input, &responseBody); err != nil { + return nil, err + } + + getInput := &GetInstanceInput{ + Name: input.Name, + ID: input.ID, + } + + if input.Timeout == 0 { + input.Timeout = WaitForInstanceReadyTimeout + } + + // Wait for the correct instance action depending on the current desired state. + // If the instance is already running, and the desired state is to be "running", the + // wait loop will only execute a single time to verify the instance state. Otherwise + // we wait until the correct action has finalized, either a shutdown or restart, catching + // any intermittent errors during the process. + if responseBody.DesiredState == InstanceDesiredRunning { + return c.WaitForInstanceRunning(getInput, input.Timeout) + } else { + return c.WaitForInstanceShutdown(getInput, input.Timeout) + } +} + +type DeleteInstanceInput struct { + // The Unqualified Name of this Instance + Name string + // The Unqualified ID of this Instance + ID string + // Time to wait for instance to be deleted + Timeout time.Duration +} + +func (d *DeleteInstanceInput) String() string { + return fmt.Sprintf(CMP_QUALIFIED_NAME, d.Name, d.ID) +} + +// DeleteInstance deletes an instance. 
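+// After issuing the DELETE it blocks in WaitForInstanceDeleted until the instance
+// is gone, falling back to WaitForInstanceDeleteTimeout when input.Timeout is zero.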
+func (c *InstancesClient) DeleteInstance(input *DeleteInstanceInput) error { + // Call to delete the instance + if err := c.deleteResource(input.String()); err != nil { + return err + } + + if input.Timeout == 0 { + input.Timeout = WaitForInstanceDeleteTimeout + } + + // Wait for instance to be deleted + return c.WaitForInstanceDeleted(input, input.Timeout) +} + +// WaitForInstanceRunning waits for an instance to be completely initialized and available. +func (c *InstancesClient) WaitForInstanceRunning(input *GetInstanceInput, timeout time.Duration) (*InstanceInfo, error) { + var info *InstanceInfo + var getErr error + err := c.client.WaitFor("instance to be ready", timeout, func() (bool, error) { + info, getErr = c.GetInstance(input) + if getErr != nil { + return false, getErr + } + c.client.DebugLogString(fmt.Sprintf("Instance name is %v, Instance info is %+v", info.Name, info)) + switch s := info.State; s { + case InstanceError: + return false, fmt.Errorf("Error initializing instance: %s", info.ErrorReason) + case InstanceRunning: // Target State + c.client.DebugLogString("Instance Running") + return true, nil + case InstanceQueued: + c.client.DebugLogString("Instance Queuing") + return false, nil + case InstanceInitializing: + c.client.DebugLogString("Instance Initializing") + return false, nil + case InstancePreparing: + c.client.DebugLogString("Instance Preparing") + return false, nil + case InstanceStarting: + c.client.DebugLogString("Instance Starting") + return false, nil + default: + c.client.DebugLogString(fmt.Sprintf("Unknown instance state: %s, waiting", s)) + return false, nil + } + }) + return info, err +} + +// WaitForInstanceShutdown waits for an instance to be shutdown +func (c *InstancesClient) WaitForInstanceShutdown(input *GetInstanceInput, timeout time.Duration) (*InstanceInfo, error) { + var info *InstanceInfo + var getErr error + err := c.client.WaitFor("instance to be shutdown", timeout, func() (bool, error) { + info, getErr = c.GetInstance(input) + if getErr != nil { + return false, getErr + } + switch s := info.State; s { + case InstanceError: + return false, fmt.Errorf("Error initializing instance: %s", info.ErrorReason) + case InstanceRunning: + c.client.DebugLogString("Instance Running") + return false, nil + case InstanceQueued: + c.client.DebugLogString("Instance Queuing") + return false, nil + case InstanceInitializing: + c.client.DebugLogString("Instance Initializing") + return false, nil + case InstancePreparing: + c.client.DebugLogString("Instance Preparing") + return false, nil + case InstanceStarting: + c.client.DebugLogString("Instance Starting") + return false, nil + case InstanceShutdown: // Target State + c.client.DebugLogString("Instance Shutdown") + return true, nil + default: + c.client.DebugLogString(fmt.Sprintf("Unknown instance state: %s, waiting", s)) + return false, nil + } + }) + return info, err +} + +// WaitForInstanceDeleted waits for an instance to be fully deleted. 
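A minimal sketch of the delete-and-wait behavior shown above: `DeleteInstance` issues the delete and then blocks in `WaitForInstanceDeleted` until the API reports the instance gone or the timeout elapses. The helper name `deleteAndWait` and the ten-minute timeout are illustrative choices; a configured `*compute.ComputeClient` is assumed.

```go
package main

import (
	"log"
	"time"

	"github.com/hashicorp/go-oracle-terraform/compute"
)

// deleteAndWait is an illustrative helper (not part of the vendored package).
func deleteAndWait(client *compute.ComputeClient, name, id string) {
	instances := client.Instances()

	err := instances.DeleteInstance(&compute.DeleteInstanceInput{
		Name:    name,
		ID:      id,
		Timeout: 10 * time.Minute, // a zero Timeout falls back to WaitForInstanceDeleteTimeout
	})
	if err != nil {
		log.Fatalf("deleting instance %s: %s", name, err)
	}
}
```

The same pattern applies to the other waiters: callers that need to block on a state transition pass a `GetInstanceInput` plus a timeout to `WaitForInstanceRunning` or `WaitForInstanceShutdown` and get the final `InstanceInfo` back.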
+func (c *InstancesClient) WaitForInstanceDeleted(input *DeleteInstanceInput, timeout time.Duration) error { + return c.client.WaitFor("instance to be deleted", timeout, func() (bool, error) { + var info InstanceInfo + if err := c.getResource(input.String(), &info); err != nil { + if client.WasNotFoundError(err) { + // Instance could not be found, thus deleted + return true, nil + } + // Some other error occurred trying to get instance, exit + return false, err + } + switch s := info.State; s { + case InstanceError: + return false, fmt.Errorf("Error stopping instance: %s", info.ErrorReason) + case InstanceStopping: + c.client.DebugLogString("Instance stopping") + return false, nil + default: + c.client.DebugLogString(fmt.Sprintf("Unknown instance state: %s, waiting", s)) + return false, nil + } + }) +} + +func (c *InstancesClient) qualifyNetworking(info map[string]NetworkingInfo) map[string]NetworkingInfo { + qualifiedNetworks := map[string]NetworkingInfo{} + for k, v := range info { + qfd := v + sharedNetwork := false + if v.IPNetwork != "" { + // Network interface is for an IP Network + qfd.IPNetwork = c.getQualifiedName(v.IPNetwork) + sharedNetwork = true + } + if v.Vnic != "" { + qfd.Vnic = c.getQualifiedName(v.Vnic) + } + if v.Nat != nil { + qfd.Nat = c.qualifyNat(v.Nat, sharedNetwork) + } + if v.VnicSets != nil { + qfd.VnicSets = c.getQualifiedList(v.VnicSets) + } + if v.SecLists != nil { + // Network interface is for the shared network + secLists := []string{} + for _, v := range v.SecLists { + secLists = append(secLists, c.getQualifiedName(v)) + } + qfd.SecLists = secLists + } + + qualifiedNetworks[k] = qfd + } + return qualifiedNetworks +} + +func (c *InstancesClient) unqualifyNetworking(info map[string]NetworkingInfo) (map[string]NetworkingInfo, error) { + // Unqualify ip network + var err error + unqualifiedNetworks := map[string]NetworkingInfo{} + for k, v := range info { + unq := v + if v.IPNetwork != "" { + unq.IPNetwork = c.getUnqualifiedName(v.IPNetwork) + } + if v.Vnic != "" { + unq.Vnic = c.getUnqualifiedName(v.Vnic) + } + if v.Nat != nil { + unq.Nat, err = c.unqualifyNat(v.Nat) + if err != nil { + return nil, err + } + } + if v.VnicSets != nil { + unq.VnicSets = c.getUnqualifiedList(v.VnicSets) + } + if v.SecLists != nil { + secLists := []string{} + for _, v := range v.SecLists { + secLists = append(secLists, c.getUnqualifiedName(v)) + } + v.SecLists = secLists + } + unqualifiedNetworks[k] = unq + } + return unqualifiedNetworks, nil +} + +func (c *InstancesClient) qualifyNat(nat []string, shared bool) []string { + qualifiedNats := []string{} + for _, v := range nat { + if strings.HasPrefix(v, "ippool:/oracle") { + qualifiedNats = append(qualifiedNats, v) + continue + } + prefix := ReservationPrefix + if shared { + prefix = ReservationIPPrefix + } + qualifiedNats = append(qualifiedNats, fmt.Sprintf("%s:%s", prefix, c.getQualifiedName(v))) + } + return qualifiedNats +} + +func (c *InstancesClient) unqualifyNat(nat []string) ([]string, error) { + unQualifiedNats := []string{} + for _, v := range nat { + if strings.HasPrefix(v, "ippool:/oracle") { + unQualifiedNats = append(unQualifiedNats, v) + continue + } + n := strings.Split(v, ":") + if len(n) < 1 { + return nil, fmt.Errorf("Error unqualifying NAT: %s", v) + } + u := n[1] + unQualifiedNats = append(unQualifiedNats, c.getUnqualifiedName(u)) + } + return unQualifiedNats, nil +} + +func (c *InstancesClient) unqualifyStorage(attachments []StorageAttachment) []StorageAttachment { + unqAttachments := []StorageAttachment{} + 
for _, v := range attachments { + if v.StorageVolumeName != "" { + v.StorageVolumeName = c.getUnqualifiedName(v.StorageVolumeName) + } + unqAttachments = append(unqAttachments, v) + } + + return unqAttachments +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_associations.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_associations.go new file mode 100644 index 000000000..6c1167f12 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_associations.go @@ -0,0 +1,152 @@ +package compute + +const ( + IPAddressAssociationDescription = "ip address association" + IPAddressAssociationContainerPath = "/network/v1/ipassociation/" + IPAddressAssociationResourcePath = "/network/v1/ipassociation" +) + +type IPAddressAssociationsClient struct { + ResourceClient +} + +// IPAddressAssociations() returns an IPAddressAssociationsClient that can be used to access the +// necessary CRUD functions for IP Address Associations. +func (c *ComputeClient) IPAddressAssociations() *IPAddressAssociationsClient { + return &IPAddressAssociationsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: IPAddressAssociationDescription, + ContainerPath: IPAddressAssociationContainerPath, + ResourceRootPath: IPAddressAssociationResourcePath, + }, + } +} + +// IPAddressAssociationInfo contains the exported fields necessary to hold all the information about an +// IP Address Association +type IPAddressAssociationInfo struct { + // The name of the NAT IP address reservation. + IPAddressReservation string `json:"ipAddressReservation"` + // Name of the virtual NIC associated with this NAT IP reservation. + Vnic string `json:"vnic"` + // The name of the IP Address Association + Name string `json:"name"` + // Description of the IP Address Association + Description string `json:"description"` + // Slice of tags associated with the IP Address Association + Tags []string `json:"tags"` + // Uniform Resource Identifier for the IP Address Association + Uri string `json:"uri"` +} + +type CreateIPAddressAssociationInput struct { + // The name of the IP Address Association to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // The name of the NAT IP address reservation. + // Optional + IPAddressReservation string `json:"ipAddressReservation,omitempty"` + + // Name of the virtual NIC associated with this NAT IP reservation. + // Optional + Vnic string `json:"vnic,omitempty"` + + // Description of the IPAddressAssociation + // Optional + Description string `json:"description"` + + // String slice of tags to apply to the IP Address Association object + // Optional + Tags []string `json:"tags"` +} + +// Create a new IP Address Association from an IPAddressAssociationsClient and an input struct. 
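A minimal sketch of the IP address association client introduced above; the create call is documented just above and defined immediately below. The resource names are placeholders for an existing NAT IP reservation and vNIC, and a configured `*compute.ComputeClient` is assumed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-oracle-terraform/compute"
)

// associateNatIP is an illustrative helper (not part of the vendored package);
// the reservation and vNIC names are placeholders.
func associateNatIP(client *compute.ComputeClient) {
	assoc := client.IPAddressAssociations()

	created, err := assoc.CreateIPAddressAssociation(&compute.CreateIPAddressAssociationInput{
		Name:                 "example-association",
		IPAddressReservation: "example-reservation",
		Vnic:                 "example-vnic",
		Description:          "attach a NAT IP reservation to a vNIC",
	})
	if err != nil {
		log.Fatalf("creating IP address association: %s", err)
	}
	fmt.Println("created association:", created.Name)

	// Clean up the association by name.
	if err := assoc.DeleteIPAddressAssociation(&compute.DeleteIPAddressAssociationInput{
		Name: created.Name,
	}); err != nil {
		log.Fatalf("deleting IP address association: %s", err)
	}
}
```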
+// Returns a populated Info struct for the IP Address Association, and any errors +func (c *IPAddressAssociationsClient) CreateIPAddressAssociation(input *CreateIPAddressAssociationInput) (*IPAddressAssociationInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.IPAddressReservation = c.getQualifiedName(input.IPAddressReservation) + input.Vnic = c.getQualifiedName(input.Vnic) + + var ipInfo IPAddressAssociationInfo + if err := c.createResource(&input, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type GetIPAddressAssociationInput struct { + // The name of the IP Address Association to query for. Case-sensitive + // Required + Name string `json:"name"` +} + +// Returns a populated IPAddressAssociationInfo struct from an input struct +func (c *IPAddressAssociationsClient) GetIPAddressAssociation(input *GetIPAddressAssociationInput) (*IPAddressAssociationInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo IPAddressAssociationInfo + if err := c.getResource(input.Name, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +// UpdateIPAddressAssociationInput defines what to update in a ip address association +type UpdateIPAddressAssociationInput struct { + // The name of the IP Address Association to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // The name of the NAT IP address reservation. + // Optional + IPAddressReservation string `json:"ipAddressReservation,omitempty"` + + // Name of the virtual NIC associated with this NAT IP reservation. + // Optional + Vnic string `json:"vnic,omitempty"` + + // Description of the IPAddressAssociation + // Optional + Description string `json:"description"` + + // String slice of tags to apply to the IP Address Association object + // Optional + Tags []string `json:"tags"` +} + +// UpdateIPAddressAssociation update the ip address association +func (c *IPAddressAssociationsClient) UpdateIPAddressAssociation(updateInput *UpdateIPAddressAssociationInput) (*IPAddressAssociationInfo, error) { + updateInput.Name = c.getQualifiedName(updateInput.Name) + updateInput.IPAddressReservation = c.getQualifiedName(updateInput.IPAddressReservation) + updateInput.Vnic = c.getQualifiedName(updateInput.Vnic) + var ipInfo IPAddressAssociationInfo + if err := c.updateResource(updateInput.Name, updateInput, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type DeleteIPAddressAssociationInput struct { + // The name of the IP Address Association to query for. 
Case-sensitive + // Required + Name string `json:"name"` +} + +func (c *IPAddressAssociationsClient) DeleteIPAddressAssociation(input *DeleteIPAddressAssociationInput) error { + return c.deleteResource(input.Name) +} + +// Unqualifies any qualified fields in the IPAddressAssociationInfo struct +func (c *IPAddressAssociationsClient) success(info *IPAddressAssociationInfo) (*IPAddressAssociationInfo, error) { + c.unqualify(&info.Name) + c.unqualify(&info.Vnic) + c.unqualify(&info.IPAddressReservation) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_prefix_set.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_prefix_set.go new file mode 100644 index 000000000..3f1503c08 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_prefix_set.go @@ -0,0 +1,135 @@ +package compute + +const ( + IPAddressPrefixSetDescription = "ip address prefix set" + IPAddressPrefixSetContainerPath = "/network/v1/ipaddressprefixset/" + IPAddressPrefixSetResourcePath = "/network/v1/ipaddressprefixset" +) + +type IPAddressPrefixSetsClient struct { + ResourceClient +} + +// IPAddressPrefixSets() returns an IPAddressPrefixSetsClient that can be used to access the +// necessary CRUD functions for IP Address Prefix Sets. +func (c *ComputeClient) IPAddressPrefixSets() *IPAddressPrefixSetsClient { + return &IPAddressPrefixSetsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: IPAddressPrefixSetDescription, + ContainerPath: IPAddressPrefixSetContainerPath, + ResourceRootPath: IPAddressPrefixSetResourcePath, + }, + } +} + +// IPAddressPrefixSetInfo contains the exported fields necessary to hold all the information about an +// IP Address Prefix Set +type IPAddressPrefixSetInfo struct { + // The name of the IP Address Prefix Set + Name string `json:"name"` + // Description of the IP Address Prefix Set + Description string `json:"description"` + // List of CIDR IPv4 prefixes assigned in the virtual network. + IPAddressPrefixes []string `json:"ipAddressPrefixes"` + // Slice of tags associated with the IP Address Prefix Set + Tags []string `json:"tags"` + // Uniform Resource Identifier for the IP Address Prefix Set + Uri string `json:"uri"` +} + +type CreateIPAddressPrefixSetInput struct { + // The name of the IP Address Prefix Set to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Description of the IPAddressPrefixSet + // Optional + Description string `json:"description"` + + // List of CIDR IPv4 prefixes assigned in the virtual network. + // Optional + IPAddressPrefixes []string `json:"ipAddressPrefixes"` + + // String slice of tags to apply to the IP Address Prefix Set object + // Optional + Tags []string `json:"tags"` +} + +// Create a new IP Address Prefix Set from an IPAddressPrefixSetsClient and an input struct. +// Returns a populated Info struct for the IP Address Prefix Set, and any errors +func (c *IPAddressPrefixSetsClient) CreateIPAddressPrefixSet(input *CreateIPAddressPrefixSetInput) (*IPAddressPrefixSetInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo IPAddressPrefixSetInfo + if err := c.createResource(&input, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type GetIPAddressPrefixSetInput struct { + // The name of the IP Address Prefix Set to query for. 
Case-sensitive + // Required + Name string `json:"name"` +} + +// Returns a populated IPAddressPrefixSetInfo struct from an input struct +func (c *IPAddressPrefixSetsClient) GetIPAddressPrefixSet(input *GetIPAddressPrefixSetInput) (*IPAddressPrefixSetInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo IPAddressPrefixSetInfo + if err := c.getResource(input.Name, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +// UpdateIPAddressPrefixSetInput defines what to update in a ip address prefix set +type UpdateIPAddressPrefixSetInput struct { + // The name of the IP Address Prefix Set to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Description of the IPAddressPrefixSet + // Optional + Description string `json:"description"` + + // List of CIDR IPv4 prefixes assigned in the virtual network. + IPAddressPrefixes []string `json:"ipAddressPrefixes"` + + // String slice of tags to apply to the IP Address Prefix Set object + // Optional + Tags []string `json:"tags"` +} + +// UpdateIPAddressPrefixSet update the ip address prefix set +func (c *IPAddressPrefixSetsClient) UpdateIPAddressPrefixSet(updateInput *UpdateIPAddressPrefixSetInput) (*IPAddressPrefixSetInfo, error) { + updateInput.Name = c.getQualifiedName(updateInput.Name) + var ipInfo IPAddressPrefixSetInfo + if err := c.updateResource(updateInput.Name, updateInput, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type DeleteIPAddressPrefixSetInput struct { + // The name of the IP Address Prefix Set to query for. Case-sensitive + // Required + Name string `json:"name"` +} + +func (c *IPAddressPrefixSetsClient) DeleteIPAddressPrefixSet(input *DeleteIPAddressPrefixSetInput) error { + return c.deleteResource(input.Name) +} + +// Unqualifies any qualified fields in the IPAddressPrefixSetInfo struct +func (c *IPAddressPrefixSetsClient) success(info *IPAddressPrefixSetInfo) (*IPAddressPrefixSetInfo, error) { + c.unqualify(&info.Name) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_reservations.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_reservations.go new file mode 100644 index 000000000..a1175711e --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_address_reservations.go @@ -0,0 +1,190 @@ +package compute + +import ( + "fmt" + "path/filepath" +) + +// IPAddressReservationsClient is a client to manage ip address reservation resources +type IPAddressReservationsClient struct { + *ResourceClient +} + +const ( + IPAddressReservationDescription = "IP Address Reservation" + IPAddressReservationContainerPath = "/network/v1/ipreservation/" + IPAddressReservationResourcePath = "/network/v1/ipreservation" + IPAddressReservationQualifier = "/oracle/public" +) + +// IPAddressReservations returns an IPAddressReservationsClient to manage IP address reservation +// resources +func (c *ComputeClient) IPAddressReservations() *IPAddressReservationsClient { + return &IPAddressReservationsClient{ + ResourceClient: &ResourceClient{ + ComputeClient: c, + ResourceDescription: IPAddressReservationDescription, + ContainerPath: IPAddressReservationContainerPath, + ResourceRootPath: IPAddressReservationResourcePath, + }, + } +} + +// IPAddressReservation describes an IP Address reservation +type IPAddressReservation struct { + // Description of the IP Address 
Reservation + Description string `json:"description"` + + // Reserved NAT IPv4 address from the IP Address Pool + IPAddress string `json:"ipAddress"` + + // Name of the IP Address pool to reserve the NAT IP from + IPAddressPool string `json:"ipAddressPool"` + + // Name of the reservation + Name string `json:"name"` + + // Tags associated with the object + Tags []string `json:"tags"` + + // Uniform Resource Identified for the reservation + Uri string `json:"uri"` +} + +const ( + PublicIPAddressPool = "public-ippool" + PrivateIPAddressPool = "cloud-ippool" +) + +// CreateIPAddressReservationInput defines input parameters to create an ip address reservation +type CreateIPAddressReservationInput struct { + // Description of the IP Address Reservation + // Optional + Description string `json:"description"` + + // IP Address pool from which to reserve an IP Address. + // Can be one of the following: + // + // 'public-ippool' - When you attach an IP Address from this pool to an instance, you enable + // access between the public Internet and the instance + // 'cloud-ippool' - When you attach an IP Address from this pool to an instance, the instance + // can communicate privately with other Oracle Cloud Services + // Optional + IPAddressPool string `json:"ipAddressPool"` + + // The name of the reservation to create + // Required + Name string `json:"name"` + + // Tags to associate with the IP Reservation + // Optional + Tags []string `json:"tags"` +} + +// Takes an input struct, creates an IP Address reservation, and returns the info struct and any errors +func (c *IPAddressReservationsClient) CreateIPAddressReservation(input *CreateIPAddressReservationInput) (*IPAddressReservation, error) { + var ipAddrRes IPAddressReservation + // Qualify supplied name + input.Name = c.getQualifiedName(input.Name) + // Qualify supplied address pool if not nil + if input.IPAddressPool != "" { + input.IPAddressPool = c.qualifyIPAddressPool(input.IPAddressPool) + } + + if err := c.createResource(input, &ipAddrRes); err != nil { + return nil, err + } + + return c.success(&ipAddrRes) +} + +// Parameters to retrieve information on an ip address reservation +type GetIPAddressReservationInput struct { + // Name of the IP Reservation + // Required + Name string `json:"name"` +} + +// Returns an IP Address Reservation and any errors +func (c *IPAddressReservationsClient) GetIPAddressReservation(input *GetIPAddressReservationInput) (*IPAddressReservation, error) { + var ipAddrRes IPAddressReservation + + input.Name = c.getQualifiedName(input.Name) + if err := c.getResource(input.Name, &ipAddrRes); err != nil { + return nil, err + } + + return c.success(&ipAddrRes) +} + +// Parameters to update an IP Address reservation +type UpdateIPAddressReservationInput struct { + // Description of the IP Address Reservation + // Optional + Description string `json:"description"` + + // IP Address pool from which to reserve an IP Address. 
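A minimal sketch of reserving a NAT IP from the public pool using the IP address reservation client defined above. The pool name is passed unqualified and the client prepends the `/oracle/public/` qualifier itself; the helper name and tags are illustrative, and a configured `*compute.ComputeClient` is assumed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-oracle-terraform/compute"
)

// reservePublicIP is an illustrative helper (not part of the vendored package).
func reservePublicIP(client *compute.ComputeClient) {
	reservations := client.IPAddressReservations()

	res, err := reservations.CreateIPAddressReservation(&compute.CreateIPAddressReservationInput{
		Name:          "example-nat-ip",
		Description:   "NAT IP for an IP network instance",
		IPAddressPool: compute.PublicIPAddressPool, // "public-ippool"
		Tags:          []string{"example"},
	})
	if err != nil {
		log.Fatalf("creating IP address reservation: %s", err)
	}
	fmt.Printf("reserved %s from pool %s\n", res.IPAddress, res.IPAddressPool)
}
```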
+ // Can be one of the following: + // + // 'public-ippool' - When you attach an IP Address from this pool to an instance, you enable + // access between the public Internet and the instance + // 'cloud-ippool' - When you attach an IP Address from this pool to an instance, the instance + // can communicate privately with other Oracle Cloud Services + // Optional + IPAddressPool string `json:"ipAddressPool"` + + // The name of the reservation to create + // Required + Name string `json:"name"` + + // Tags to associate with the IP Reservation + // Optional + Tags []string `json:"tags"` +} + +func (c *IPAddressReservationsClient) UpdateIPAddressReservation(input *UpdateIPAddressReservationInput) (*IPAddressReservation, error) { + var ipAddrRes IPAddressReservation + + // Qualify supplied name + input.Name = c.getQualifiedName(input.Name) + // Qualify supplied address pool if not nil + if input.IPAddressPool != "" { + input.IPAddressPool = c.qualifyIPAddressPool(input.IPAddressPool) + } + + if err := c.updateResource(input.Name, input, &ipAddrRes); err != nil { + return nil, err + } + + return c.success(&ipAddrRes) +} + +// Parameters to delete an IP Address Reservation +type DeleteIPAddressReservationInput struct { + // The name of the reservation to delete + Name string `json:"name"` +} + +func (c *IPAddressReservationsClient) DeleteIPAddressReservation(input *DeleteIPAddressReservationInput) error { + input.Name = c.getQualifiedName(input.Name) + return c.deleteResource(input.Name) +} + +func (c *IPAddressReservationsClient) success(result *IPAddressReservation) (*IPAddressReservation, error) { + c.unqualify(&result.Name) + if result.IPAddressPool != "" { + result.IPAddressPool = c.unqualifyIPAddressPool(result.IPAddressPool) + } + + return result, nil +} + +func (c *IPAddressReservationsClient) qualifyIPAddressPool(input string) string { + // Add '/oracle/public/' + return fmt.Sprintf("%s/%s", IPAddressReservationQualifier, input) +} + +func (c *IPAddressReservationsClient) unqualifyIPAddressPool(input string) string { + // Remove '/oracle/public/' + return filepath.Base(input) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_associations.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_associations.go new file mode 100644 index 000000000..09e194985 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_associations.go @@ -0,0 +1,118 @@ +package compute + +import ( + "fmt" + "strings" +) + +// IPAssociationsClient is a client for the IP Association functions of the Compute API. +type IPAssociationsClient struct { + *ResourceClient +} + +// IPAssociations obtains a IPAssociationsClient which can be used to access to the +// IP Association functions of the Compute API +func (c *ComputeClient) IPAssociations() *IPAssociationsClient { + return &IPAssociationsClient{ + ResourceClient: &ResourceClient{ + ComputeClient: c, + ResourceDescription: "ip association", + ContainerPath: "/ip/association/", + ResourceRootPath: "/ip/association", + }} +} + +// IPAssociationInfo describes an existing IP association. +type IPAssociationInfo struct { + // TODO: it'd probably make sense to expose the `ip` field here too? + + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` + + // The three-part name of the IP reservation object in the format (/Compute-identity_domain/user/object). 
+ // An IP reservation is a public IP address which is attached to an Oracle Compute Cloud Service instance that requires access to or from the Internet. + Reservation string `json:"reservation"` + + // The type of IP Address to associate with this instance + // for a Dynamic IP address specify `ippool:/oracle/public/ippool`. + // for a Static IP address specify the three part name of the existing IP reservation + ParentPool string `json:"parentpool"` + + // Uniform Resource Identifier for the IP Association + URI string `json:"uri"` + + // The three-part name of a vcable ID of an instance that is associated with the IP reservation. + VCable string `json:"vcable"` +} + +type CreateIPAssociationInput struct { + // The type of IP Address to associate with this instance + // for a Dynamic IP address specify `ippool:/oracle/public/ippool`. + // for a Static IP address specify the three part name of the existing IP reservation + // Required + ParentPool string `json:"parentpool"` + + // The three-part name of the vcable ID of the instance that you want to associate with an IP address. The three-part name is in the format: /Compute-identity_domain/user/object. + // Required + VCable string `json:"vcable"` +} + +// CreateIPAssociation creates a new IP association with the supplied vcable and parentpool. +func (c *IPAssociationsClient) CreateIPAssociation(input *CreateIPAssociationInput) (*IPAssociationInfo, error) { + input.VCable = c.getQualifiedName(input.VCable) + input.ParentPool = c.getQualifiedParentPoolName(input.ParentPool) + var assocInfo IPAssociationInfo + if err := c.createResource(input, &assocInfo); err != nil { + return nil, err + } + + return c.success(&assocInfo) +} + +type GetIPAssociationInput struct { + // The three-part name of the IP Association + // Required. + Name string `json:"name"` +} + +// GetIPAssociation retrieves the IP association with the given name. +func (c *IPAssociationsClient) GetIPAssociation(input *GetIPAssociationInput) (*IPAssociationInfo, error) { + var assocInfo IPAssociationInfo + if err := c.getResource(input.Name, &assocInfo); err != nil { + return nil, err + } + + return c.success(&assocInfo) +} + +type DeleteIPAssociationInput struct { + // The three-part name of the IP Association + // Required. + Name string `json:"name"` +} + +// DeleteIPAssociation deletes the IP association with the given name. 
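A minimal sketch of the shared-network IP association API described above, associating a dynamic public IP with an instance's vcable. The vcable value is a placeholder supplied by the caller, and a configured `*compute.ComputeClient` is assumed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-oracle-terraform/compute"
)

// associateDynamicIP is an illustrative helper (not part of the vendored package);
// vcable is the three-part vcable name of an existing instance.
func associateDynamicIP(client *compute.ComputeClient, vcable string) {
	assoc := client.IPAssociations()

	// For a dynamic address the parent pool is the literal pool reference;
	// for a static address, the existing IP reservation is used instead
	// (see the ParentPool doc comment above).
	info, err := assoc.CreateIPAssociation(&compute.CreateIPAssociationInput{
		ParentPool: "ippool:/oracle/public/ippool",
		VCable:     vcable,
	})
	if err != nil {
		log.Fatalf("creating IP association: %s", err)
	}
	fmt.Printf("associated %s via pool %s\n", info.Name, info.ParentPool)
}
```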
+func (c *IPAssociationsClient) DeleteIPAssociation(input *DeleteIPAssociationInput) error { + return c.deleteResource(input.Name) +} + +func (c *IPAssociationsClient) getQualifiedParentPoolName(parentpool string) string { + parts := strings.Split(parentpool, ":") + pooltype := parts[0] + name := parts[1] + return fmt.Sprintf("%s:%s", pooltype, c.getQualifiedName(name)) +} + +func (c *IPAssociationsClient) unqualifyParentPoolName(parentpool *string) { + parts := strings.Split(*parentpool, ":") + pooltype := parts[0] + name := parts[1] + *parentpool = fmt.Sprintf("%s:%s", pooltype, c.getUnqualifiedName(name)) +} + +// Unqualifies identifiers +func (c *IPAssociationsClient) success(assocInfo *IPAssociationInfo) (*IPAssociationInfo, error) { + c.unqualify(&assocInfo.Name, &assocInfo.VCable) + c.unqualifyParentPoolName(&assocInfo.ParentPool) + return assocInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_network_exchange.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_network_exchange.go new file mode 100644 index 000000000..1df33f296 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_network_exchange.go @@ -0,0 +1,99 @@ +package compute + +const ( + IPNetworkExchangeDescription = "ip network exchange" + IPNetworkExchangeContainerPath = "/network/v1/ipnetworkexchange/" + IPNetworkExchangeResourcePath = "/network/v1/ipnetworkexchange" +) + +type IPNetworkExchangesClient struct { + ResourceClient +} + +// IPNetworkExchanges() returns an IPNetworkExchangesClient that can be used to access the +// necessary CRUD functions for IP Network Exchanges. +func (c *ComputeClient) IPNetworkExchanges() *IPNetworkExchangesClient { + return &IPNetworkExchangesClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: IPNetworkExchangeDescription, + ContainerPath: IPNetworkExchangeContainerPath, + ResourceRootPath: IPNetworkExchangeResourcePath, + }, + } +} + +// IPNetworkExchangeInfo contains the exported fields necessary to hold all the information about an +// IP Network Exchange +type IPNetworkExchangeInfo struct { + // The name of the IP Network Exchange + Name string `json:"name"` + // Description of the IP Network Exchange + Description string `json:"description"` + // Slice of tags associated with the IP Network Exchange + Tags []string `json:"tags"` + // Uniform Resource Identifier for the IP Network Exchange + Uri string `json:"uri"` +} + +type CreateIPNetworkExchangeInput struct { + // The name of the IP Network Exchange to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Description of the IPNetworkExchange + // Optional + Description string `json:"description"` + + // String slice of tags to apply to the IP Network Exchange object + // Optional + Tags []string `json:"tags"` +} + +// Create a new IP Network Exchange from an IPNetworkExchangesClient and an input struct. +// Returns a populated Info struct for the IP Network Exchange, and any errors +func (c *IPNetworkExchangesClient) CreateIPNetworkExchange(input *CreateIPNetworkExchangeInput) (*IPNetworkExchangeInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo IPNetworkExchangeInfo + if err := c.createResource(&input, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type GetIPNetworkExchangeInput struct { + // The name of the IP Network Exchange to query for. 
Case-sensitive + // Required + Name string `json:"name"` +} + +// Returns a populated IPNetworkExchangeInfo struct from an input struct +func (c *IPNetworkExchangesClient) GetIPNetworkExchange(input *GetIPNetworkExchangeInput) (*IPNetworkExchangeInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo IPNetworkExchangeInfo + if err := c.getResource(input.Name, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type DeleteIPNetworkExchangeInput struct { + // The name of the IP Network Exchange to query for. Case-sensitive + // Required + Name string `json:"name"` +} + +func (c *IPNetworkExchangesClient) DeleteIPNetworkExchange(input *DeleteIPNetworkExchangeInput) error { + return c.deleteResource(input.Name) +} + +// Unqualifies any qualified fields in the IPNetworkExchangeInfo struct +func (c *IPNetworkExchangesClient) success(info *IPNetworkExchangeInfo) (*IPNetworkExchangeInfo, error) { + c.unqualify(&info.Name) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_networks.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_networks.go new file mode 100644 index 000000000..bfc324845 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_networks.go @@ -0,0 +1,186 @@ +package compute + +const ( + IPNetworkDescription = "ip network" + IPNetworkContainerPath = "/network/v1/ipnetwork/" + IPNetworkResourcePath = "/network/v1/ipnetwork" +) + +type IPNetworksClient struct { + ResourceClient +} + +// IPNetworks() returns an IPNetworksClient that can be used to access the +// necessary CRUD functions for IP Networks. +func (c *ComputeClient) IPNetworks() *IPNetworksClient { + return &IPNetworksClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: IPNetworkDescription, + ContainerPath: IPNetworkContainerPath, + ResourceRootPath: IPNetworkResourcePath, + }, + } +} + +// IPNetworkInfo contains the exported fields necessary to hold all the information about an +// IP Network +type IPNetworkInfo struct { + // The name of the IP Network + Name string `json:"name"` + // The CIDR IPv4 prefix associated with the IP Network + IPAddressPrefix string `json:"ipAddressPrefix"` + // Name of the IP Network Exchange associated with the IP Network + IPNetworkExchange string `json:"ipNetworkExchange,omitempty"` + // Description of the IP Network + Description string `json:"description"` + // Whether public internet access was enabled using NAPT for VNICs without any public IP reservation + PublicNaptEnabled bool `json:"publicNaptEnabledFlag"` + // Slice of tags associated with the IP Network + Tags []string `json:"tags"` + // Uniform Resource Identifier for the IP Network + Uri string `json:"uri"` +} + +type CreateIPNetworkInput struct { + // The name of the IP Network to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Specify the size of the IP Subnet. It is a range of IPv4 addresses assigned in the virtual + // network, in CIDR address prefix format. + // While specifying the IP address prefix take care of the following points: + // + //* These IP addresses aren't part of the common pool of Oracle-provided IP addresses used by the shared network. 
+ // + //* There's no conflict with the range of IP addresses used in another IP network, the IP addresses used your on-premises network, or with the range of private IP addresses used in the shared network. If IP networks with overlapping IP subnets are linked to an IP exchange, packets going to and from those IP networks are dropped. + // + //* The upper limit of the CIDR block size for an IP network is /16. + // + //Note: The first IP address of any IP network is reserved for the default gateway, the DHCP server, and the DNS server of that IP network. + // Required + IPAddressPrefix string `json:"ipAddressPrefix"` + + //Specify the IP network exchange to which the IP network belongs. + //You can add an IP network to only one IP network exchange, but an IP network exchange + //can include multiple IP networks. An IP network exchange enables access between IP networks + //that have non-overlapping addresses, so that instances on these networks can exchange packets + //with each other without NAT. + // Optional + IPNetworkExchange string `json:"ipNetworkExchange,omitempty"` + + // Description of the IPNetwork + // Optional + Description string `json:"description"` + + // Enable public internet access using NAPT for VNICs without any public IP reservation + // Optional + PublicNaptEnabled bool `json:"publicNaptEnabledFlag"` + + // String slice of tags to apply to the IP Network object + // Optional + Tags []string `json:"tags"` +} + +// Create a new IP Network from an IPNetworksClient and an input struct. +// Returns a populated Info struct for the IP Network, and any errors +func (c *IPNetworksClient) CreateIPNetwork(input *CreateIPNetworkInput) (*IPNetworkInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.IPNetworkExchange = c.getQualifiedName(input.IPNetworkExchange) + + var ipInfo IPNetworkInfo + if err := c.createResource(&input, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type GetIPNetworkInput struct { + // The name of the IP Network to query for. Case-sensitive + // Required + Name string `json:"name"` +} + +// Returns a populated IPNetworkInfo struct from an input struct +func (c *IPNetworksClient) GetIPNetwork(input *GetIPNetworkInput) (*IPNetworkInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo IPNetworkInfo + if err := c.getResource(input.Name, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type UpdateIPNetworkInput struct { + // The name of the IP Network to update. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Specify the size of the IP Subnet. It is a range of IPv4 addresses assigned in the virtual + // network, in CIDR address prefix format. + // While specifying the IP address prefix take care of the following points: + // + //* These IP addresses aren't part of the common pool of Oracle-provided IP addresses used by the shared network. + // + //* There's no conflict with the range of IP addresses used in another IP network, the IP addresses used your on-premises network, or with the range of private IP addresses used in the shared network. If IP networks with overlapping IP subnets are linked to an IP exchange, packets going to and from those IP networks are dropped. + // + //* The upper limit of the CIDR block size for an IP network is /16. 
+ // + //Note: The first IP address of any IP network is reserved for the default gateway, the DHCP server, and the DNS server of that IP network. + // Required + IPAddressPrefix string `json:"ipAddressPrefix"` + + //Specify the IP network exchange to which the IP network belongs. + //You can add an IP network to only one IP network exchange, but an IP network exchange + //can include multiple IP networks. An IP network exchange enables access between IP networks + //that have non-overlapping addresses, so that instances on these networks can exchange packets + //with each other without NAT. + // Optional + IPNetworkExchange string `json:"ipNetworkExchange,omitempty"` + + // Description of the IPNetwork + // Optional + Description string `json:"description"` + + // Enable public internet access using NAPT for VNICs without any public IP reservation + // Optional + PublicNaptEnabled bool `json:"publicNaptEnabledFlag"` + + // String slice of tags to apply to the IP Network object + // Optional + Tags []string `json:"tags"` +} + +func (c *IPNetworksClient) UpdateIPNetwork(input *UpdateIPNetworkInput) (*IPNetworkInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.IPNetworkExchange = c.getQualifiedName(input.IPNetworkExchange) + + var ipInfo IPNetworkInfo + if err := c.updateResource(input.Name, &input, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type DeleteIPNetworkInput struct { + // The name of the IP Network to query for. Case-sensitive + // Required + Name string `json:"name"` +} + +func (c *IPNetworksClient) DeleteIPNetwork(input *DeleteIPNetworkInput) error { + return c.deleteResource(input.Name) +} + +// Unqualifies any qualified fields in the IPNetworkInfo struct +func (c *IPNetworksClient) success(info *IPNetworkInfo) (*IPNetworkInfo, error) { + c.unqualify(&info.Name) + c.unqualify(&info.IPNetworkExchange) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_reservations.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_reservations.go new file mode 100644 index 000000000..42f9f1741 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ip_reservations.go @@ -0,0 +1,147 @@ +package compute + +// IPReservationsClient is a client for the IP Reservations functions of the Compute API. +type IPReservationsClient struct { + *ResourceClient +} + +const ( + IPReservationDesc = "ip reservation" + IPReservationContainerPath = "/ip/reservation/" + IPReservataionResourcePath = "/ip/reservation" +) + +// IPReservations obtains an IPReservationsClient which can be used to access to the +// IP Reservations functions of the Compute API +func (c *ComputeClient) IPReservations() *IPReservationsClient { + return &IPReservationsClient{ + ResourceClient: &ResourceClient{ + ComputeClient: c, + ResourceDescription: IPReservationDesc, + ContainerPath: IPReservationContainerPath, + ResourceRootPath: IPReservataionResourcePath, + }} +} + +type IPReservationPool string + +const ( + PublicReservationPool IPReservationPool = "/oracle/public/ippool" +) + +// IPReservationInput describes an existing IP reservation. +type IPReservation struct { + // Shows the default account for your identity domain. + Account string `json:"account"` + // Public IP address. + IP string `json:"ip"` + // The three-part name of the IP Reservation (/Compute-identity_domain/user/object). 
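A minimal sketch tying together the IP network exchange and IP network clients added above: an exchange is created first, then an IP network is attached to it so that non-overlapping networks on the same exchange can route to each other without NAT. All names and the CIDR prefix are placeholders, and a configured `*compute.ComputeClient` is assumed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-oracle-terraform/compute"
)

// createNetworkOnExchange is an illustrative helper (not part of the vendored package).
func createNetworkOnExchange(client *compute.ComputeClient) {
	exchanges := client.IPNetworkExchanges()
	networks := client.IPNetworks()

	// The exchange only needs a name; networks join it at creation time.
	exchange, err := exchanges.CreateIPNetworkExchange(&compute.CreateIPNetworkExchangeInput{
		Name:        "example-exchange",
		Description: "routes non-overlapping IP networks without NAT",
	})
	if err != nil {
		log.Fatalf("creating IP network exchange: %s", err)
	}

	network, err := networks.CreateIPNetwork(&compute.CreateIPNetworkInput{
		Name:              "example-network",
		IPAddressPrefix:   "192.168.1.0/24", // CIDR block no larger than /16
		IPNetworkExchange: exchange.Name,
		PublicNaptEnabled: false,
		Description:       "private subnet for example instances",
	})
	if err != nil {
		log.Fatalf("creating IP network: %s", err)
	}
	fmt.Printf("created %s (%s) on exchange %s\n",
		network.Name, network.IPAddressPrefix, network.IPNetworkExchange)
}
```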
+ Name string `json:"name"` + // Pool of public IP addresses + ParentPool IPReservationPool `json:"parentpool"` + // Is the IP Reservation Persistent (i.e. static) or not (i.e. Dynamic)? + Permanent bool `json:"permanent"` + // A comma-separated list of strings which helps you to identify IP reservation. + Tags []string `json:"tags"` + // Uniform Resource Identifier + Uri string `json:"uri"` + // Is the IP reservation associated with an instance? + Used bool `json:"used"` +} + +// CreateIPReservationInput defines an IP reservation to be created. +type CreateIPReservationInput struct { + // The name of the object + // If you don't specify a name for this object, then the name is generated automatically. + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. + // Object names are case-sensitive. + // Optional + Name string `json:"name"` + // Pool of public IP addresses. This must be set to `ippool` + // Required + ParentPool IPReservationPool `json:"parentpool"` + // Is the IP Reservation Persistent (i.e. static) or not (i.e. Dynamic)? + // Required + Permanent bool `json:"permanent"` + // A comma-separated list of strings which helps you to identify IP reservations. + // Optional + Tags []string `json:"tags"` +} + +// CreateIPReservation creates a new IP reservation with the given parentpool, tags and permanent flag. +func (c *IPReservationsClient) CreateIPReservation(input *CreateIPReservationInput) (*IPReservation, error) { + var ipInput IPReservation + + input.Name = c.getQualifiedName(input.Name) + if err := c.createResource(input, &ipInput); err != nil { + return nil, err + } + + return c.success(&ipInput) +} + +// GetIPReservationInput defines an IP Reservation to get +type GetIPReservationInput struct { + // The name of the IP Reservation + // Required + Name string +} + +// GetIPReservation retrieves the IP reservation with the given name. +func (c *IPReservationsClient) GetIPReservation(input *GetIPReservationInput) (*IPReservation, error) { + var ipInput IPReservation + + input.Name = c.getQualifiedName(input.Name) + if err := c.getResource(input.Name, &ipInput); err != nil { + return nil, err + } + + return c.success(&ipInput) +} + +// UpdateIPReservationInput defines an IP Reservation to be updated +type UpdateIPReservationInput struct { + // The name of the object + // If you don't specify a name for this object, then the name is generated automatically. + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. + // Object names are case-sensitive. + // Required + Name string `json:"name"` + // Pool of public IP addresses. + // Required + ParentPool IPReservationPool `json:"parentpool"` + // Is the IP Reservation Persistent (i.e. static) or not (i.e. Dynamic)? + // Required + Permanent bool `json:"permanent"` + // A comma-separated list of strings which helps you to identify IP reservations. + // Optional + Tags []string `json:"tags"` +} + +// UpdateIPReservation updates the IP reservation. 
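A minimal sketch of the shared-network IP reservation client defined above, reserving a permanent (static) public address. The reservation name is a placeholder and a configured `*compute.ComputeClient` is assumed.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-oracle-terraform/compute"
)

// reserveSharedNetworkIP is an illustrative helper (not part of the vendored package).
func reserveSharedNetworkIP(client *compute.ComputeClient) {
	reservations := client.IPReservations()

	res, err := reservations.CreateIPReservation(&compute.CreateIPReservationInput{
		Name:       "example-reservation",
		ParentPool: compute.PublicReservationPool, // "/oracle/public/ippool"
		Permanent:  true,                          // static rather than dynamic
	})
	if err != nil {
		log.Fatalf("creating IP reservation: %s", err)
	}
	fmt.Printf("reserved %s (in use: %t)\n", res.IP, res.Used)
}
```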
+func (c *IPReservationsClient) UpdateIPReservation(input *UpdateIPReservationInput) (*IPReservation, error) { + var updateOutput IPReservation + input.Name = c.getQualifiedName(input.Name) + if err := c.updateResource(input.Name, input, &updateOutput); err != nil { + return nil, err + } + return c.success(&updateOutput) +} + +// DeleteIPReservationInput defines an IP Reservation to delete +type DeleteIPReservationInput struct { + // The name of the IP Reservation + // Required + Name string +} + +// DeleteIPReservation deletes the IP reservation with the given name. +func (c *IPReservationsClient) DeleteIPReservation(input *DeleteIPReservationInput) error { + input.Name = c.getQualifiedName(input.Name) + return c.deleteResource(input.Name) +} + +func (c *IPReservationsClient) success(result *IPReservation) (*IPReservation, error) { + c.unqualify(&result.Name) + return result, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/machine_images.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/machine_images.go new file mode 100644 index 000000000..f2c4b4a15 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/machine_images.go @@ -0,0 +1,143 @@ +package compute + +// MachineImagesClient is a client for the MachineImage functions of the Compute API. +type MachineImagesClient struct { + ResourceClient +} + +// MachineImages obtains an MachineImagesClient which can be used to access to the +// MachineImage functions of the Compute API +func (c *ComputeClient) MachineImages() *MachineImagesClient { + return &MachineImagesClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "MachineImage", + ContainerPath: "/machineimage/", + ResourceRootPath: "/machineimage", + }} +} + +// MahineImage describes an existing Machine Image. +type MachineImage struct { + // account of the associated Object Storage Classic instance + Account string `json:"account"` + + // Dictionary of attributes to be made available to the instance + Attributes map[string]interface{} `json:"attributes"` + + // Last time when this image was audited + Audited string `json:"audited"` + + // Describing the image + Description string `json:"description"` + + // Description of the state of the machine image if there is an error + ErrorReason string `json:"error_reason"` + + // dictionary of hypervisor-specific attributes + Hypervisor map[string]interface{} `json:"hypervisor"` + + // The format of the image + ImageFormat string `json:"image_format"` + + // name of the machine image file uploaded to Object Storage Classic + File string `json:"file"` + + // name of the machine image + Name string `json:"name"` + + // Indicates that the image file is available in Object Storage Classic + NoUpload bool `json:"no_upload"` + + // The OS platform of the image + Platform string `json:"platform"` + + // Size values of the image file + Sizes map[string]interface{} `json:"sizes"` + + // The state of the uploaded machine image + State string `json:"state"` + + // Uniform Resource Identifier + URI string `json:"uri"` +} + +// CreateMachineImageInput defines an Image List to be created. 
+type CreateMachineImageInput struct { + // account of the associated Object Storage Classic instance + Account string `json:"account"` + + // Dictionary of attributes to be made available to the instance + Attributes map[string]interface{} `json:"attributes,omitempty"` + + // Describing the image + Description string `json:"description,omitempty"` + + // name of the machine image file uploaded to Object Storage Classic + File string `json:"file,omitempty"` + + // name of the machine image + Name string `json:"name"` + + // Indicates that the image file is available in Object Storage Classic + NoUpload bool `json:"no_upload"` + + // Size values of the image file + Sizes map[string]interface{} `json:"sizes"` +} + +// CreateMachineImage creates a new Machine Image with the given parameters. +func (c *MachineImagesClient) CreateMachineImage(createInput *CreateMachineImageInput) (*MachineImage, error) { + var machineImage MachineImage + + // If `sizes` is not set then is mst be defaulted to {"total": 0} + if createInput.Sizes == nil { + createInput.Sizes = map[string]interface{}{"total": 0} + } + + // `no_upload` must always be true + createInput.NoUpload = true + + createInput.Name = c.getQualifiedName(createInput.Name) + if err := c.createResource(createInput, &machineImage); err != nil { + return nil, err + } + + return c.success(&machineImage) +} + +// DeleteMachineImageInput describes the MachineImage to delete +type DeleteMachineImageInput struct { + // The name of the MachineImage + Name string `json:"name"` +} + +// DeleteMachineImage deletes the MachineImage with the given name. +func (c *MachineImagesClient) DeleteMachineImage(deleteInput *DeleteMachineImageInput) error { + return c.deleteResource(deleteInput.Name) +} + +// GetMachineList describes the MachineImage to get +type GetMachineImageInput struct { + // account of the associated Object Storage Classic instance + Account string `json:"account"` + // The name of the Machine Image + Name string `json:"name"` +} + +// GetMachineImage retrieves the MachineImage with the given name. +func (c *MachineImagesClient) GetMachineImage(getInput *GetMachineImageInput) (*MachineImage, error) { + getInput.Name = c.getQualifiedName(getInput.Name) + + var machineImage MachineImage + if err := c.getResource(getInput.Name, &machineImage); err != nil { + return nil, err + } + + return c.success(&machineImage) +} + +func (c *MachineImagesClient) success(result *MachineImage) (*MachineImage, error) { + c.unqualify(&result.Name) + return result, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/orchestration.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/orchestration.go new file mode 100644 index 000000000..93d9e2342 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/orchestration.go @@ -0,0 +1,468 @@ +package compute + +import ( + "fmt" + "time" + + "github.com/hashicorp/go-oracle-terraform/client" +) + +const WaitForOrchestrationActiveTimeout = time.Duration(3600 * time.Second) +const WaitForOrchestrationDeleteTimeout = time.Duration(3600 * time.Second) + +// OrchestrationsClient is a client for the Orchestration functions of the Compute API. 
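A minimal sketch of registering a machine image with the client shown above. The account string and tarball name are placeholders for an image file already uploaded to Object Storage Classic, and a configured `*compute.ComputeClient` is assumed. As the create call above documents, `Sizes` defaults to `{"total": 0}` and `NoUpload` is forced to `true`, so neither needs to be set by the caller.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-oracle-terraform/compute"
)

// registerMachineImage is an illustrative helper (not part of the vendored package);
// account identifies the associated Object Storage Classic instance.
func registerMachineImage(client *compute.ComputeClient, account string) {
	images := client.MachineImages()

	img, err := images.CreateMachineImage(&compute.CreateMachineImageInput{
		Account: account,
		Name:    "example-image",
		File:    "example-image.tar.gz", // placeholder object name
	})
	if err != nil {
		log.Fatalf("creating machine image: %s", err)
	}
	fmt.Printf("machine image %s is %s\n", img.Name, img.State)
}
```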
+type OrchestrationsClient struct { + ResourceClient +} + +// Orchestrations obtains an OrchestrationsClient which can be used to access to the +// Orchestration functions of the Compute API +func (c *ComputeClient) Orchestrations() *OrchestrationsClient { + return &OrchestrationsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "Orchestration", + ContainerPath: "/platform/v1/orchestration/", + ResourceRootPath: "/platform/v1/orchestration", + }} +} + +type OrchestrationDesiredState string + +const ( + // * active: Creates all the orchestration objects defined in the orchestration. + OrchestrationDesiredStateActive OrchestrationDesiredState = "active" + // * inactive: Adds the orchestration to Oracle Compute Cloud Service, but does not create any of the orchestration + OrchestrationDesiredStateInactive OrchestrationDesiredState = "inactive" + // * suspended: Suspends all orchestration objects defined in the orchestration + OrchestrationDesiredStateSuspend OrchestrationDesiredState = "suspend" +) + +type OrchestrationStatus string + +const ( + OrchestrationStatusActive OrchestrationStatus = "active" + OrchestrationStatusInactive OrchestrationStatus = "inactive" + OrchestrationStatusSuspend OrchestrationStatus = "suspend" + OrchestrationStatusActivating OrchestrationStatus = "activating" + OrchestrationStatusDeleting OrchestrationStatus = "deleting" + OrchestrationStatusError OrchestrationStatus = "terminal_error" + OrchestrationStatusStopping OrchestrationStatus = "stopping" + OrchestrationStatusSuspending OrchestrationStatus = "suspending" + OrchestrationStatusStarting OrchestrationStatus = "starting" + OrchestrationStatusDeactivating OrchestrationStatus = "deactivating" + OrchestrationStatusSuspended OrchestrationStatus = "suspended" +) + +type OrchestrationType string + +const ( + OrchestrationTypeInstance OrchestrationType = "Instance" +) + +type OrchestrationRelationshipType string + +const ( + OrchestrationRelationshipTypeDepends OrchestrationRelationshipType = "depends" +) + +// OrchestrationInfo describes an existing Orchestration. +type Orchestration struct { + // The default Oracle Compute Cloud Service account, such as /Compute-acme/default. + Account string `json:"account"` + // Description of this orchestration + Description string `json:"description"` + // The desired_state specified in the orchestration JSON file. A unique identifier for this orchestration. + DesiredState OrchestrationDesiredState `json:"desired_state"` + // Unique identifier of this orchestration + ID string `json:"id"` + // The three-part name of the Orchestration (/Compute-identity_domain/user/object). + Name string `json:"name"` + // List of orchestration objects + Objects []Object `json:"objects"` + // Current status of this orchestration + Status OrchestrationStatus `json:"status"` + // Strings that describe the orchestration and help you identify it. + Tags []string `json:"tags"` + // Time the orchestration was last audited + TimeAudited string `json:"time_audited"` + // The time when the orchestration was added to Oracle Compute Cloud Service. + TimeCreated string `json:"time_created"` + // The time when the orchestration was last updated in Oracle Compute Cloud Service. + TimeUpdated string `json:"time_updated"` + // Unique Resource Identifier + URI string `json:"uri"` + // Name of the user who added this orchestration or made the most recent update to this orchestration. + User string `json:"user"` + // Version of this orchestration. 
It is automatically generated by the server. + Version int `json:"version"` +} + +// CreateOrchestrationInput defines an Orchestration to be created. +type CreateOrchestrationInput struct { + // The default Oracle Compute Cloud Service account, such as /Compute-acme/default. + // Optional + Account string `json:"account,omitempty"` + // Description of this orchestration + // Optional + Description string `json:"description,omitempty"` + // Specify the desired state of this orchestration: active, inactive, or suspend. + // You can manage the state of the orchestration objects by changing the desired state of the orchestration. + // * active: Creates all the orchestration objects defined in the orchestration. + // * inactive: Adds the orchestration to Oracle Compute Cloud Service, but does not create any of the orchestration + // objects defined in the orchestration. + // Required + DesiredState OrchestrationDesiredState `json:"desired_state"` + // The three-part name of the Orchestration (/Compute-identity_domain/user/object). + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` + // The list of objects in the orchestration. An object is the primary building block of an orchestration. + // An orchestration can contain up to 100 objects. + // Required + Objects []Object `json:"objects"` + // Strings that describe the orchestration and help you identify it. + Tags []string `json:"tags,omitempty"` + // Version of this orchestration. It is automatically generated by the server. + Version int `json:"version,omitempty"` + // Time to wait for an orchestration to be ready + Timeout time.Duration `json:"-"` +} + +type Object struct { + // The default Oracle Compute Cloud Service account, such as /Compute-acme/default. + // Optional + Account string `json:"account,omitempty"` + // Description of this orchestration + // Optional + Description string `json:"description,omitempty"` + // The desired state of the object + // Optional + DesiredState OrchestrationDesiredState `json:"desired_state,omitempty"` + // Dictionary containing the current state of the object + Health Health `json:"health,omitempty"` + // A text string describing the object. Labels can't include spaces. In an orchestration, the label for + // each object must be unique. Maximum length is 256 characters. + // Required + Label string `json:"label"` + // The four-part name of the object (/Compute-identity_domain/user/orchestration/object). If you don't specify a name + // for this object, the name is generated automatically. Object names can contain only alphanumeric characters, hyphens, + // underscores, and periods. Object names are case-sensitive. When you specify the object name, ensure that an object of + // the same type and with the same name doesn't already exist. If such a object already exists, then another + // object of the same type and with the same name won't be created and the existing object won't be updated. + // Optional + Name string `json:"name,omitempty"` + // The three-part name (/Compute-identity_domain/user/object) of the orchestration to which the object belongs. + // Required + Orchestration string `json:"orchestration"` + // Specifies whether the object should persist when the orchestration is suspended. Specify one of the following: + // * true: The object persists when the orchestration is suspended. + // * false: The object is deleted when the orchestration is suspended. 
+ // By default, persistent is set to false. It is recommended that you specify true for storage
+ // volumes and other critical objects. Persistence applies only when you're suspending an orchestration.
+ // When you terminate an orchestration, all the objects defined in it are deleted.
+ // Optional
+ Persistent bool `json:"persistent,omitempty"`
+ // The relationship between the objects that are created by this orchestration. The
+ // only supported relationship is depends, indicating that the specified target objects must be created first.
+ // Note that when recovering from a failure, the orchestration doesn't consider object relationships.
+ // Orchestrations v2 use object references to recover interdependent objects to a healthy state. See Object
+ // References and Relationships in Using Oracle Compute Cloud Service (IaaS).
+ Relationships []Relationship `json:"relationships,omitempty"`
+ // The template attribute defines the properties or characteristics of the Oracle Compute Cloud Service object
+ // that you want to create, as specified by the type attribute.
+ // The fields in the template section vary depending on the specified type. See Orchestration v2 Attributes
+ // Specific to Each Object Type in Using Oracle Compute Cloud Service (IaaS) to determine the parameters that are
+ // specific to each object type that you want to create.
+ // For example, if you want to create a storage volume, the type would be StorageVolume, and the template would include
+ // size and bootable. If you want to create an instance, the type would be Instance, and the template would include
+ // instance-specific attributes, such as imagelist and shape.
+ // Required
+ Template interface{} `json:"template"`
+ // Specify one of the following object types that you want to create.
+ // The only allowed type is Instance
+ // Required
+ Type OrchestrationType `json:"type"`
+ // Version of this object, generated by the server
+ // Optional
+ Version int `json:"version,omitempty"`
+}
+
+type Health struct {
+ // The status of the object
+ Status OrchestrationStatus `json:"status,omitempty"`
+ // What caused the status of the object
+ Cause string `json:"cause,omitempty"`
+ // The specific details for what happened to the object
+ Detail string `json:"detail,omitempty"`
+ // Any errors associated with creation of the object
+ Error string `json:"error,omitempty"`
+}
+
+type Relationship struct {
+ // The type of Relationship
+ // The only type is depends
+ // Required
+ Type OrchestrationRelationshipType `json:"type"`
+ // What objects the relationship depends on
+ // Required
+ Targets []string `json:"targets"`
+}
+
+// CreateOrchestration creates a new Orchestration from the given CreateOrchestrationInput.
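+//
+// Illustrative usage (a sketch only, not part of the upstream documentation):
+// it assumes a configured *ComputeClient named computeClient and elides the
+// instance template fields. The call blocks until the orchestration reaches
+// its desired state (or the input Timeout elapses); on failure it attempts to
+// delete the partially created orchestration before returning the error.
+//
+//    orc, err := computeClient.Orchestrations().CreateOrchestration(&CreateOrchestrationInput{
+//        Name:         "example-orchestration",
+//        DesiredState: OrchestrationDesiredStateActive,
+//        Objects: []Object{{
+//            Label:         "example-instance",
+//            Orchestration: "example-orchestration",
+//            Type:          OrchestrationTypeInstance,
+//            Template:      &CreateInstanceInput{}, // instance attributes elided in this sketch
+//        }},
+//    })
+//    if err != nil {
+//        // handle the error
+//    }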
+func (c *OrchestrationsClient) CreateOrchestration(input *CreateOrchestrationInput) (*Orchestration, error) { + var createdOrchestration Orchestration + + input.Name = c.getQualifiedName(input.Name) + for _, i := range input.Objects { + i.Orchestration = c.getQualifiedName(i.Orchestration) + if i.Type == OrchestrationTypeInstance { + instanceClient := c.ComputeClient.Instances() + instanceInput := i.Template.(*CreateInstanceInput) + instanceInput.Name = c.getQualifiedName(instanceInput.Name) + + qualifiedSSHKeys := []string{} + for _, key := range instanceInput.SSHKeys { + qualifiedSSHKeys = append(qualifiedSSHKeys, c.getQualifiedName(key)) + } + + instanceInput.SSHKeys = qualifiedSSHKeys + + qualifiedStorageAttachments := []StorageAttachmentInput{} + for _, attachment := range instanceInput.Storage { + qualifiedStorageAttachments = append(qualifiedStorageAttachments, StorageAttachmentInput{ + Index: attachment.Index, + Volume: c.getQualifiedName(attachment.Volume), + }) + } + instanceInput.Storage = qualifiedStorageAttachments + + instanceInput.Networking = instanceClient.qualifyNetworking(instanceInput.Networking) + + } + } + + if err := c.createResource(&input, &createdOrchestration); err != nil { + return nil, err + } + + // Call wait for orchestration ready now, as creating the orchestration is an eventually consistent operation + getInput := &GetOrchestrationInput{ + Name: createdOrchestration.Name, + } + + if input.Timeout == 0 { + input.Timeout = WaitForOrchestrationActiveTimeout + } + + // Wait for orchestration to be ready and return the result + // Don't have to unqualify any objects, as the GetOrchestration method will handle that + orchestrationInfo, orchestrationError := c.WaitForOrchestrationState(getInput, input.Timeout) + if orchestrationError != nil { + deleteInput := &DeleteOrchestrationInput{ + Name: createdOrchestration.Name, + } + err := c.DeleteOrchestration(deleteInput) + if err != nil { + return nil, fmt.Errorf("Error deleting orchestration %s: %s", getInput.Name, err) + } + return nil, fmt.Errorf("Error creating orchestration %s: %s", getInput.Name, orchestrationError) + } + + return &orchestrationInfo, nil +} + +// GetOrchestrationInput describes the Orchestration to get +type GetOrchestrationInput struct { + // The three-part name of the Orchestration (/Compute-identity_domain/user/object). + Name string `json:"name"` +} + +// GetOrchestration retrieves the Orchestration with the given name. +func (c *OrchestrationsClient) GetOrchestration(input *GetOrchestrationInput) (*Orchestration, error) { + var orchestrationInfo Orchestration + if err := c.getResource(input.Name, &orchestrationInfo); err != nil { + return nil, err + } + + return c.success(&orchestrationInfo) +} + +// UpdateOrchestrationInput defines an Orchestration to be updated +type UpdateOrchestrationInput struct { + // The default Oracle Compute Cloud Service account, such as /Compute-acme/default. + // Optional + Account string `json:"account,omitempty"` + // Description of this orchestration + // Optional + Description string `json:"description,omitempty"` + // Specify the desired state of this orchestration: active, inactive, or suspend. + // You can manage the state of the orchestration objects by changing the desired state of the orchestration. + // * active: Creates all the orchestration objects defined in the orchestration. + // * inactive: Adds the orchestration to Oracle Compute Cloud Service, but does not create any of the orchestration + // objects defined in the orchestration. 
+ // Required + DesiredState OrchestrationDesiredState `json:"desired_state"` + // The three-part name of the Orchestration (/Compute-identity_domain/user/object). + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` + // The list of objects in the orchestration. An object is the primary building block of an orchestration. + // An orchestration can contain up to 100 objects. + // Required + Objects []Object `json:"objects"` + // Strings that describe the orchestration and help you identify it. + Tags []string `json:"tags,omitempty"` + // Version of this orchestration. It is automatically generated by the server. + Version int `json:"version,omitempty"` + // Time to wait for an orchestration to be ready + Timeout time.Duration `json:"-"` +} + +// UpdateOrchestration updates the orchestration. +func (c *OrchestrationsClient) UpdateOrchestration(input *UpdateOrchestrationInput) (*Orchestration, error) { + var updatedOrchestration Orchestration + input.Name = c.getQualifiedName(input.Name) + for _, i := range input.Objects { + i.Orchestration = c.getQualifiedName(i.Orchestration) + if i.Type == OrchestrationTypeInstance { + instanceInput := i.Template.(map[string]interface{}) + instanceInput["name"] = c.getQualifiedName(instanceInput["name"].(string)) + } + } + + if err := c.updateResource(input.Name, input, &updatedOrchestration); err != nil { + return nil, err + } + + // Call wait for orchestration ready now, as creating the orchestration is an eventually consistent operation + getInput := &GetOrchestrationInput{ + Name: updatedOrchestration.Name, + } + + if input.Timeout == 0 { + input.Timeout = WaitForOrchestrationActiveTimeout + } + + // Wait for orchestration to be ready and return the result + // Don't have to unqualify any objects, as the GetOrchestration method will handle that + orchestrationInfo, orchestrationError := c.WaitForOrchestrationState(getInput, input.Timeout) + if orchestrationError != nil { + return nil, orchestrationError + } + + return &orchestrationInfo, nil +} + +// DeleteOrchestrationInput describes the Orchestration to delete +type DeleteOrchestrationInput struct { + // The three-part name of the Orchestration (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` + // Timeout for delete request + Timeout time.Duration `json:"-"` +} + +// DeleteOrchestration deletes the Orchestration with the given name. +func (c *OrchestrationsClient) DeleteOrchestration(input *DeleteOrchestrationInput) error { + if err := c.deleteOrchestration(input.Name); err != nil { + return err + } + + if input.Timeout == 0 { + input.Timeout = WaitForOrchestrationDeleteTimeout + } + + return c.WaitForOrchestrationDeleted(input, input.Timeout) +} + +func (c *OrchestrationsClient) success(info *Orchestration) (*Orchestration, error) { + c.unqualify(&info.Name) + for _, i := range info.Objects { + c.unqualify(&i.Orchestration) + if OrchestrationType(i.Type) == OrchestrationTypeInstance { + instanceInput := i.Template.(map[string]interface{}) + instanceInput["name"] = c.getUnqualifiedName(instanceInput["name"].(string)) + } + } + + return info, nil +} + +// WaitForOrchestrationActive waits for an orchestration to be completely initialized and available. 
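+//
+// A minimal sketch of direct use (CreateOrchestration and UpdateOrchestration
+// already call this internally, so most callers never need it); it assumes an
+// *OrchestrationsClient named orchestrationClient:
+//
+//    info, err := orchestrationClient.WaitForOrchestrationState(
+//        &GetOrchestrationInput{Name: "example-orchestration"},
+//        WaitForOrchestrationActiveTimeout)
+//    if err == nil && info.Status == OrchestrationStatus(info.DesiredState) {
+//        // the orchestration has reached its desired state
+//    }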
+func (c *OrchestrationsClient) WaitForOrchestrationState(input *GetOrchestrationInput, timeout time.Duration) (Orchestration, error) { + var info *Orchestration + var getErr error + err := c.client.WaitFor("orchestration to be ready", timeout, func() (bool, error) { + info, getErr = c.GetOrchestration(input) + if getErr != nil { + return false, getErr + } + c.client.DebugLogString(fmt.Sprintf("Orchestration name is %v, Orchestration info is %+v", info.Name, info)) + switch s := info.Status; s { + case OrchestrationStatusError: + // We need to check and see if an object the orchestration is trying to create is giving us an error instead of just the orchestration as a whole. + for _, object := range info.Objects { + if object.Health.Status == OrchestrationStatusError { + return false, fmt.Errorf("Error creating instance %s: %+v", object.Name, object.Health) + } + } + return false, fmt.Errorf("Error initializing orchestration: %+v", info) + case OrchestrationStatus(info.DesiredState): + c.client.DebugLogString(fmt.Sprintf("Orchestration %s", info.DesiredState)) + return true, nil + case OrchestrationStatusActivating: + c.client.DebugLogString("Orchestration activating") + return false, nil + case OrchestrationStatusStopping: + c.client.DebugLogString("Orchestration stopping") + return false, nil + case OrchestrationStatusSuspending: + c.client.DebugLogString("Orchestration suspending") + return false, nil + case OrchestrationStatusDeactivating: + c.client.DebugLogString("Orchestration deactivating") + return false, nil + case OrchestrationStatusSuspended: + c.client.DebugLogString("Orchestration suspended") + if info.DesiredState == OrchestrationDesiredStateSuspend { + return true, nil + } else { + return false, nil + } + default: + return false, fmt.Errorf("Unknown orchestration state: %s, erroring", s) + } + }) + return *info, err +} + +// WaitForOrchestrationDeleted waits for an orchestration to be fully deleted. 
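+//
+// DeleteOrchestration calls this after issuing the delete, so direct use is
+// rarely needed; a hypothetical direct call (orchestrationClient being an
+// assumed, pre-built *OrchestrationsClient) would look like:
+//
+//    err := orchestrationClient.WaitForOrchestrationDeleted(
+//        &DeleteOrchestrationInput{Name: "example-orchestration"},
+//        WaitForOrchestrationDeleteTimeout)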
+func (c *OrchestrationsClient) WaitForOrchestrationDeleted(input *DeleteOrchestrationInput, timeout time.Duration) error { + return c.client.WaitFor("orchestration to be deleted", timeout, func() (bool, error) { + var info Orchestration + if err := c.getResource(input.Name, &info); err != nil { + if client.WasNotFoundError(err) { + // Orchestration could not be found, thus deleted + return true, nil + } + // Some other error occurred trying to get Orchestration, exit + return false, err + } + switch s := info.Status; s { + case OrchestrationStatusError: + return false, fmt.Errorf("Error stopping orchestration: %+v", info) + case OrchestrationStatusStopping: + c.client.DebugLogString("Orchestration stopping") + return false, nil + case OrchestrationStatusDeleting: + c.client.DebugLogString("Orchestration deleting") + return false, nil + case OrchestrationStatusActive: + c.client.DebugLogString("Orchestration active") + return false, nil + default: + return false, fmt.Errorf("Unknown orchestration state: %s, erroring", s) + } + }) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/routes.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/routes.go new file mode 100644 index 000000000..f46beaecf --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/routes.go @@ -0,0 +1,153 @@ +package compute + +const ( + RoutesDescription = "IP Network Route" + RoutesContainerPath = "/network/v1/route/" + RoutesResourcePath = "/network/v1/route" +) + +type RoutesClient struct { + ResourceClient +} + +func (c *ComputeClient) Routes() *RoutesClient { + return &RoutesClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: RoutesDescription, + ContainerPath: RoutesContainerPath, + ResourceRootPath: RoutesResourcePath, + }, + } +} + +type RouteInfo struct { + // Admin distance associated with this route + AdminDistance int `json:"adminDistance"` + // Description of the route + Description string `json:"description"` + // CIDR IPv4 Prefix associated with this route + IPAddressPrefix string `json:"ipAddressPrefix"` + // Name of the route + Name string `json:"name"` + // Name of the VNIC set associated with the route + NextHopVnicSet string `json:"nextHopVnicSet"` + // Slice of Tags associated with the route + Tags []string `json:"tags,omitempty"` + // Uniform resource identifier associated with the route + Uri string `json:"uri"` +} + +type CreateRouteInput struct { + // Specify 0,1, or 2 as the route's administrative distance. + // If you do not specify a value, the default value is 0. + // The same prefix can be used in multiple routes. In this case, packets are routed over all the matching + // routes with the lowest administrative distance. + // In the case multiple routes with the same lowest administrative distance match, + // routing occurs over all these routes using ECMP. + // Optional + AdminDistance int `json:"adminDistance"` + // Description of the route + // Optional + Description string `json:"description"` + // The IPv4 address prefix in CIDR format, of the external network (external to the vNIC set) + // from which you want to route traffic + // Required + IPAddressPrefix string `json:"ipAddressPrefix"` + // Name of the route. + // Names can only contain alphanumeric, underscore, dash, and period characters. Case-sensitive + // Required + Name string `json:"name"` + // Name of the virtual NIC set to route matching packets to. 
+ // Routed flows are load-balanced among all the virtual NICs in the virtual NIC set + // Required + NextHopVnicSet string `json:"nextHopVnicSet"` + // Slice of tags to be associated with the route + // Optional + Tags []string `json:"tags,omitempty"` +} + +func (c *RoutesClient) CreateRoute(input *CreateRouteInput) (*RouteInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.NextHopVnicSet = c.getQualifiedName(input.NextHopVnicSet) + + var routeInfo RouteInfo + if err := c.createResource(&input, &routeInfo); err != nil { + return nil, err + } + + return c.success(&routeInfo) +} + +type GetRouteInput struct { + // Name of the Route to query for. Case-sensitive + // Required + Name string `json:"name"` +} + +func (c *RoutesClient) GetRoute(input *GetRouteInput) (*RouteInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var routeInfo RouteInfo + if err := c.getResource(input.Name, &routeInfo); err != nil { + return nil, err + } + return c.success(&routeInfo) +} + +type UpdateRouteInput struct { + // Specify 0,1, or 2 as the route's administrative distance. + // If you do not specify a value, the default value is 0. + // The same prefix can be used in multiple routes. In this case, packets are routed over all the matching + // routes with the lowest administrative distance. + // In the case multiple routes with the same lowest administrative distance match, + // routing occurs over all these routes using ECMP. + // Optional + AdminDistance int `json:"adminDistance"` + // Description of the route + // Optional + Description string `json:"description"` + // The IPv4 address prefix in CIDR format, of the external network (external to the vNIC set) + // from which you want to route traffic + // Required + IPAddressPrefix string `json:"ipAddressPrefix"` + // Name of the route. + // Names can only contain alphanumeric, underscore, dash, and period characters. Case-sensitive + // Required + Name string `json:"name"` + // Name of the virtual NIC set to route matching packets to. + // Routed flows are load-balanced among all the virtual NICs in the virtual NIC set + // Required + NextHopVnicSet string `json:"nextHopVnicSet"` + // Slice of tags to be associated with the route + // Optional + Tags []string `json:"tags"` +} + +func (c *RoutesClient) UpdateRoute(input *UpdateRouteInput) (*RouteInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.NextHopVnicSet = c.getQualifiedName(input.NextHopVnicSet) + + var routeInfo RouteInfo + if err := c.updateResource(input.Name, &input, &routeInfo); err != nil { + return nil, err + } + + return c.success(&routeInfo) +} + +type DeleteRouteInput struct { + // Name of the Route to delete. Case-sensitive + // Required + Name string `json:"name"` +} + +func (c *RoutesClient) DeleteRoute(input *DeleteRouteInput) error { + return c.deleteResource(input.Name) +} + +func (c *RoutesClient) success(info *RouteInfo) (*RouteInfo, error) { + c.unqualify(&info.Name) + c.unqualify(&info.NextHopVnicSet) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/sec_rules.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/sec_rules.go new file mode 100644 index 000000000..b45df047d --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/sec_rules.go @@ -0,0 +1,193 @@ +package compute + +// SecRulesClient is a client for the Sec Rules functions of the Compute API. 
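+//
+// As an illustrative sketch (not from the upstream docs), a rule permitting
+// traffic from a security IP list to a security list might be created roughly
+// like this, assuming a configured *ComputeClient named computeClient and a
+// user-defined security application named "my-app" (all names here are
+// placeholders); note the seclist:/seciplist: prefixes described on
+// CreateSecRuleInput below:
+//
+//    rule, err := computeClient.SecRules().CreateSecRule(&CreateSecRuleInput{
+//        Name:            "allow-my-app",
+//        Action:          "PERMIT",
+//        Application:     "my-app",
+//        SourceList:      "seciplist:allowed-ips",
+//        DestinationList: "seclist:webservers",
+//    })
+//    if err != nil {
+//        // handle the error
+//    }
+//    _ = rule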
+type SecRulesClient struct { + ResourceClient +} + +// SecRules obtains a SecRulesClient which can be used to access to the +// Sec Rules functions of the Compute API +func (c *ComputeClient) SecRules() *SecRulesClient { + return &SecRulesClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "security ip list", + ContainerPath: "/secrule/", + ResourceRootPath: "/secrule", + }} +} + +// SecRuleInfo describes an existing sec rule. +type SecRuleInfo struct { + // Set this parameter to PERMIT. + Action string `json:"action"` + // The name of the security application + Application string `json:"application"` + // A description of the sec rule + Description string `json:"description"` + // Indicates whether the security rule is enabled + Disabled bool `json:"disabled"` + // The name of the destination security list or security IP list. + DestinationList string `json:"dst_list"` + // The name of the sec rule + Name string `json:"name"` + // The name of the source security list or security IP list. + SourceList string `json:"src_list"` + // Uniform Resource Identifier for the sec rule + URI string `json:"uri"` +} + +// CreateSecRuleInput defines a sec rule to be created. +type CreateSecRuleInput struct { + // Set this parameter to PERMIT. + // Required + Action string `json:"action"` + + // The name of the security application for user-defined or predefined security applications. + // Required + Application string `json:"application"` + + // Description of the IP Network + // Optional + Description string `json:"description"` + + // Indicates whether the sec rule is enabled (set to false) or disabled (true). + // The default setting is false. + // Optional + Disabled bool `json:"disabled"` + + // The name of the destination security list or security IP list. + // + // You must use the prefix seclist: or seciplist: to identify the list type. + // + // You can specify a security IP list as the destination in a secrule, + // provided src_list is a security list that has DENY as its outbound policy. + // + // You cannot specify any of the security IP lists in the /oracle/public container + // as a destination in a secrule. + // Required + DestinationList string `json:"dst_list"` + + // The name of the Sec Rule to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // The name of the source security list or security IP list. + // + // You must use the prefix seclist: or seciplist: to identify the list type. + // + // Required + SourceList string `json:"src_list"` +} + +// CreateSecRule creates a new sec rule. +func (c *SecRulesClient) CreateSecRule(createInput *CreateSecRuleInput) (*SecRuleInfo, error) { + createInput.Name = c.getQualifiedName(createInput.Name) + createInput.SourceList = c.getQualifiedListName(createInput.SourceList) + createInput.DestinationList = c.getQualifiedListName(createInput.DestinationList) + createInput.Application = c.getQualifiedName(createInput.Application) + + var ruleInfo SecRuleInfo + if err := c.createResource(createInput, &ruleInfo); err != nil { + return nil, err + } + + return c.success(&ruleInfo) +} + +// GetSecRuleInput describes the Sec Rule to get +type GetSecRuleInput struct { + // The name of the Sec Rule to query for + // Required + Name string `json:"name"` +} + +// GetSecRule retrieves the sec rule with the given name. 
+func (c *SecRulesClient) GetSecRule(getInput *GetSecRuleInput) (*SecRuleInfo, error) {
+ var ruleInfo SecRuleInfo
+ if err := c.getResource(getInput.Name, &ruleInfo); err != nil {
+ return nil, err
+ }
+
+ return c.success(&ruleInfo)
+}
+
+// UpdateSecRuleInput describes a security rule to update
+type UpdateSecRuleInput struct {
+ // Set this parameter to PERMIT.
+ // Required
+ Action string `json:"action"`
+
+ // The name of the security application for user-defined or predefined security applications.
+ // Required
+ Application string `json:"application"`
+
+ // Description of the sec rule
+ // Optional
+ Description string `json:"description"`
+
+ // Indicates whether the sec rule is disabled. Set to true to disable the sec rule.
+ // The default setting is false (the rule is enabled).
+ // Optional
+ Disabled bool `json:"disabled"`
+
+ // The name of the destination security list or security IP list.
+ //
+ // You must use the prefix seclist: or seciplist: to identify the list type.
+ //
+ // You can specify a security IP list as the destination in a secrule,
+ // provided src_list is a security list that has DENY as its outbound policy.
+ //
+ // You cannot specify any of the security IP lists in the /oracle/public container
+ // as a destination in a secrule.
+ // Required
+ DestinationList string `json:"dst_list"`
+
+ // The name of the Sec Rule to update. Object names can only contain alphanumeric,
+ // underscore, dash, and period characters. Names are case-sensitive.
+ // Required
+ Name string `json:"name"`
+
+ // The name of the source security list or security IP list.
+ //
+ // You must use the prefix seclist: or seciplist: to identify the list type.
+ //
+ // Required
+ SourceList string `json:"src_list"`
+}
+
+// UpdateSecRule modifies the properties of the sec rule with the given name.
+func (c *SecRulesClient) UpdateSecRule(updateInput *UpdateSecRuleInput) (*SecRuleInfo, error) {
+ updateInput.Name = c.getQualifiedName(updateInput.Name)
+ updateInput.SourceList = c.getQualifiedListName(updateInput.SourceList)
+ updateInput.DestinationList = c.getQualifiedListName(updateInput.DestinationList)
+ updateInput.Application = c.getQualifiedName(updateInput.Application)
+
+ var ruleInfo SecRuleInfo
+ if err := c.updateResource(updateInput.Name, updateInput, &ruleInfo); err != nil {
+ return nil, err
+ }
+
+ return c.success(&ruleInfo)
+}
+
+// DeleteSecRuleInput describes the sec rule to delete
+type DeleteSecRuleInput struct {
+ // The name of the Sec Rule to delete.
+ // Required
+ Name string `json:"name"`
+}
+
+// DeleteSecRule deletes the sec rule with the given name.
+func (c *SecRulesClient) DeleteSecRule(deleteInput *DeleteSecRuleInput) error { + return c.deleteResource(deleteInput.Name) +} + +func (c *SecRulesClient) success(ruleInfo *SecRuleInfo) (*SecRuleInfo, error) { + ruleInfo.Name = c.getUnqualifiedName(ruleInfo.Name) + ruleInfo.SourceList = c.unqualifyListName(ruleInfo.SourceList) + ruleInfo.DestinationList = c.unqualifyListName(ruleInfo.DestinationList) + ruleInfo.Application = c.getUnqualifiedName(ruleInfo.Application) + return ruleInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_applications.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_applications.go new file mode 100644 index 000000000..c473ecb83 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_applications.go @@ -0,0 +1,150 @@ +package compute + +// SecurityApplicationsClient is a client for the Security Application functions of the Compute API. +type SecurityApplicationsClient struct { + ResourceClient +} + +// SecurityApplications obtains a SecurityApplicationsClient which can be used to access to the +// Security Application functions of the Compute API +func (c *ComputeClient) SecurityApplications() *SecurityApplicationsClient { + return &SecurityApplicationsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "security application", + ContainerPath: "/secapplication/", + ResourceRootPath: "/secapplication", + }} +} + +// SecurityApplicationInfo describes an existing security application. +type SecurityApplicationInfo struct { + // A description of the security application. + Description string `json:"description"` + // The TCP or UDP destination port number. This can be a port range, such as 5900-5999 for TCP. + DPort string `json:"dport"` + // The ICMP code. + ICMPCode SecurityApplicationICMPCode `json:"icmpcode"` + // The ICMP type. + ICMPType SecurityApplicationICMPType `json:"icmptype"` + // The three-part name of the Security Application (/Compute-identity_domain/user/object). + Name string `json:"name"` + // The protocol to use. 
+ Protocol SecurityApplicationProtocol `json:"protocol"` + // The Uniform Resource Identifier + URI string `json:"uri"` +} + +type SecurityApplicationProtocol string + +const ( + All SecurityApplicationProtocol = "all" + AH SecurityApplicationProtocol = "ah" + ESP SecurityApplicationProtocol = "esp" + ICMP SecurityApplicationProtocol = "icmp" + ICMPV6 SecurityApplicationProtocol = "icmpv6" + IGMP SecurityApplicationProtocol = "igmp" + IPIP SecurityApplicationProtocol = "ipip" + GRE SecurityApplicationProtocol = "gre" + MPLSIP SecurityApplicationProtocol = "mplsip" + OSPF SecurityApplicationProtocol = "ospf" + PIM SecurityApplicationProtocol = "pim" + RDP SecurityApplicationProtocol = "rdp" + SCTP SecurityApplicationProtocol = "sctp" + TCP SecurityApplicationProtocol = "tcp" + UDP SecurityApplicationProtocol = "udp" +) + +type SecurityApplicationICMPCode string + +const ( + Admin SecurityApplicationICMPCode = "admin" + Df SecurityApplicationICMPCode = "df" + Host SecurityApplicationICMPCode = "host" + Network SecurityApplicationICMPCode = "network" + Port SecurityApplicationICMPCode = "port" + Protocol SecurityApplicationICMPCode = "protocol" +) + +type SecurityApplicationICMPType string + +const ( + Echo SecurityApplicationICMPType = "echo" + Reply SecurityApplicationICMPType = "reply" + TTL SecurityApplicationICMPType = "ttl" + TraceRoute SecurityApplicationICMPType = "traceroute" + Unreachable SecurityApplicationICMPType = "unreachable" +) + +func (c *SecurityApplicationsClient) success(result *SecurityApplicationInfo) (*SecurityApplicationInfo, error) { + c.unqualify(&result.Name) + return result, nil +} + +// CreateSecurityApplicationInput describes the Security Application to create +type CreateSecurityApplicationInput struct { + // A description of the security application. + // Optional + Description string `json:"description"` + // The TCP or UDP destination port number. + // You can also specify a port range, such as 5900-5999 for TCP. + // This parameter isn't relevant to the icmp protocol. + // Required if the Protocol is TCP or UDP + DPort string `json:"dport"` + // The ICMP code. This parameter is relevant only if you specify ICMP as the protocol. + // If you specify icmp as the protocol and don't specify icmptype or icmpcode, then all ICMP packets are matched. + // Optional + ICMPCode SecurityApplicationICMPCode `json:"icmpcode,omitempty"` + // This parameter is relevant only if you specify ICMP as the protocol. + // If you specify icmp as the protocol and don't specify icmptype or icmpcode, then all ICMP packets are matched. + // Optional + ICMPType SecurityApplicationICMPType `json:"icmptype,omitempty"` + // The three-part name of the Security Application (/Compute-identity_domain/user/object). + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` + // The protocol to use. + // Required + Protocol SecurityApplicationProtocol `json:"protocol"` +} + +// CreateSecurityApplication creates a new security application. 
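+//
+// For example (an illustrative sketch; computeClient is an assumed,
+// pre-configured *ComputeClient), a TCP application covering a port range
+// could be created with:
+//
+//    app, err := computeClient.SecurityApplications().CreateSecurityApplication(
+//        &CreateSecurityApplicationInput{
+//            Name:     "vnc-range",
+//            Protocol: TCP,
+//            DPort:    "5900-5999",
+//        })
+//    if err != nil {
+//        // handle the error
+//    }
+//    _ = app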
+func (c *SecurityApplicationsClient) CreateSecurityApplication(input *CreateSecurityApplicationInput) (*SecurityApplicationInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var appInfo SecurityApplicationInfo + if err := c.createResource(&input, &appInfo); err != nil { + return nil, err + } + + return c.success(&appInfo) +} + +// GetSecurityApplicationInput describes the Security Application to obtain +type GetSecurityApplicationInput struct { + // The three-part name of the Security Application (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// GetSecurityApplication retrieves the security application with the given name. +func (c *SecurityApplicationsClient) GetSecurityApplication(input *GetSecurityApplicationInput) (*SecurityApplicationInfo, error) { + var appInfo SecurityApplicationInfo + if err := c.getResource(input.Name, &appInfo); err != nil { + return nil, err + } + + return c.success(&appInfo) +} + +// DeleteSecurityApplicationInput describes the Security Application to delete +type DeleteSecurityApplicationInput struct { + // The three-part name of the Security Application (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// DeleteSecurityApplication deletes the security application with the given name. +func (c *SecurityApplicationsClient) DeleteSecurityApplication(input *DeleteSecurityApplicationInput) error { + return c.deleteResource(input.Name) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_associations.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_associations.go new file mode 100644 index 000000000..0b806f029 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_associations.go @@ -0,0 +1,95 @@ +package compute + +// SecurityAssociationsClient is a client for the Security Association functions of the Compute API. +type SecurityAssociationsClient struct { + ResourceClient +} + +// SecurityAssociations obtains a SecurityAssociationsClient which can be used to access to the +// Security Association functions of the Compute API +func (c *ComputeClient) SecurityAssociations() *SecurityAssociationsClient { + return &SecurityAssociationsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "security association", + ContainerPath: "/secassociation/", + ResourceRootPath: "/secassociation", + }} +} + +// SecurityAssociationInfo describes an existing security association. +type SecurityAssociationInfo struct { + // The three-part name of the Security Association (/Compute-identity_domain/user/object). + Name string `json:"name"` + // The name of the Security List that you want to associate with the instance. + SecList string `json:"seclist"` + // vCable of the instance that you want to associate with the security list. + VCable string `json:"vcable"` + // Uniform Resource Identifier + URI string `json:"uri"` +} + +// CreateSecurityAssociationInput defines a security association to be created. +type CreateSecurityAssociationInput struct { + // The three-part name of the Security Association (/Compute-identity_domain/user/object). + // If you don't specify a name for this object, then the name is generated automatically. + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Optional + Name string `json:"name"` + // The name of the Security list that you want to associate with the instance. 
+ // Required + SecList string `json:"seclist"` + // The name of the vCable of the instance that you want to associate with the security list. + // Required + VCable string `json:"vcable"` +} + +// CreateSecurityAssociation creates a security association between the given VCable and security list. +func (c *SecurityAssociationsClient) CreateSecurityAssociation(createInput *CreateSecurityAssociationInput) (*SecurityAssociationInfo, error) { + if createInput.Name != "" { + createInput.Name = c.getQualifiedName(createInput.Name) + } + createInput.VCable = c.getQualifiedName(createInput.VCable) + createInput.SecList = c.getQualifiedName(createInput.SecList) + + var assocInfo SecurityAssociationInfo + if err := c.createResource(&createInput, &assocInfo); err != nil { + return nil, err + } + + return c.success(&assocInfo) +} + +// GetSecurityAssociationInput describes the security association to get +type GetSecurityAssociationInput struct { + // The three-part name of the Security Association (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// GetSecurityAssociation retrieves the security association with the given name. +func (c *SecurityAssociationsClient) GetSecurityAssociation(getInput *GetSecurityAssociationInput) (*SecurityAssociationInfo, error) { + var assocInfo SecurityAssociationInfo + if err := c.getResource(getInput.Name, &assocInfo); err != nil { + return nil, err + } + + return c.success(&assocInfo) +} + +// DeleteSecurityAssociationInput describes the security association to delete +type DeleteSecurityAssociationInput struct { + // The three-part name of the Security Association (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// DeleteSecurityAssociation deletes the security association with the given name. +func (c *SecurityAssociationsClient) DeleteSecurityAssociation(deleteInput *DeleteSecurityAssociationInput) error { + return c.deleteResource(deleteInput.Name) +} + +func (c *SecurityAssociationsClient) success(assocInfo *SecurityAssociationInfo) (*SecurityAssociationInfo, error) { + c.unqualify(&assocInfo.Name, &assocInfo.SecList, &assocInfo.VCable) + return assocInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_ip_lists.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_ip_lists.go new file mode 100644 index 000000000..28b313374 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_ip_lists.go @@ -0,0 +1,113 @@ +package compute + +// SecurityIPListsClient is a client for the Security IP List functions of the Compute API. +type SecurityIPListsClient struct { + ResourceClient +} + +// SecurityIPLists obtains a SecurityIPListsClient which can be used to access to the +// Security IP List functions of the Compute API +func (c *ComputeClient) SecurityIPLists() *SecurityIPListsClient { + return &SecurityIPListsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "security ip list", + ContainerPath: "/seciplist/", + ResourceRootPath: "/seciplist", + }} +} + +// SecurityIPListInfo describes an existing security IP list. +type SecurityIPListInfo struct { + // A description of the security IP list. + Description string `json:"description"` + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` + // A comma-separated list of the subnets (in CIDR format) or IPv4 addresses for which you want to create this security IP list. 
+ SecIPEntries []string `json:"secipentries"` + // Uniform Resource Identifier + URI string `json:"uri"` +} + +// CreateSecurityIPListInput defines a security IP list to be created. +type CreateSecurityIPListInput struct { + // A description of the security IP list. + // Optional + Description string `json:"description"` + // The three-part name of the object (/Compute-identity_domain/user/object). + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` + // A comma-separated list of the subnets (in CIDR format) or IPv4 addresses for which you want to create this security IP list. + // Required + SecIPEntries []string `json:"secipentries"` +} + +// CreateSecurityIPList creates a security IP list with the given name and entries. +func (c *SecurityIPListsClient) CreateSecurityIPList(createInput *CreateSecurityIPListInput) (*SecurityIPListInfo, error) { + createInput.Name = c.getQualifiedName(createInput.Name) + var listInfo SecurityIPListInfo + if err := c.createResource(createInput, &listInfo); err != nil { + return nil, err + } + + return c.success(&listInfo) +} + +// GetSecurityIPListInput describes the Security IP List to obtain +type GetSecurityIPListInput struct { + // The three-part name of the object (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// GetSecurityIPList gets the security IP list with the given name. +func (c *SecurityIPListsClient) GetSecurityIPList(getInput *GetSecurityIPListInput) (*SecurityIPListInfo, error) { + var listInfo SecurityIPListInfo + if err := c.getResource(getInput.Name, &listInfo); err != nil { + return nil, err + } + + return c.success(&listInfo) +} + +// UpdateSecurityIPListInput describes the security ip list to update +type UpdateSecurityIPListInput struct { + // A description of the security IP list. + // Optional + Description string `json:"description"` + // The three-part name of the object (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` + // A comma-separated list of the subnets (in CIDR format) or IPv4 addresses for which you want to create this security IP list. + // Required + SecIPEntries []string `json:"secipentries"` +} + +// UpdateSecurityIPList modifies the entries in the security IP list with the given name. +func (c *SecurityIPListsClient) UpdateSecurityIPList(updateInput *UpdateSecurityIPListInput) (*SecurityIPListInfo, error) { + updateInput.Name = c.getQualifiedName(updateInput.Name) + var listInfo SecurityIPListInfo + if err := c.updateResource(updateInput.Name, updateInput, &listInfo); err != nil { + return nil, err + } + + return c.success(&listInfo) +} + +// DeleteSecurityIPListInput describes the security ip list to delete. +type DeleteSecurityIPListInput struct { + // The three-part name of the object (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// DeleteSecurityIPList deletes the security IP list with the given name. 
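+//
+// A minimal sketch (computeClient is an assumed, pre-configured *ComputeClient):
+//
+//    err := computeClient.SecurityIPLists().DeleteSecurityIPList(
+//        &DeleteSecurityIPListInput{Name: "allowed-ips"})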
+func (c *SecurityIPListsClient) DeleteSecurityIPList(deleteInput *DeleteSecurityIPListInput) error { + return c.deleteResource(deleteInput.Name) +} + +func (c *SecurityIPListsClient) success(listInfo *SecurityIPListInfo) (*SecurityIPListInfo, error) { + c.unqualify(&listInfo.Name) + return listInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_lists.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_lists.go new file mode 100644 index 000000000..dec906e8c --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_lists.go @@ -0,0 +1,131 @@ +package compute + +// SecurityListsClient is a client for the Security List functions of the Compute API. +type SecurityListsClient struct { + ResourceClient +} + +// SecurityLists obtains a SecurityListsClient which can be used to access to the +// Security List functions of the Compute API +func (c *ComputeClient) SecurityLists() *SecurityListsClient { + return &SecurityListsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "security list", + ContainerPath: "/seclist/", + ResourceRootPath: "/seclist", + }} +} + +type SecurityListPolicy string + +const ( + SecurityListPolicyDeny SecurityListPolicy = "deny" + SecurityListPolicyReject SecurityListPolicy = "reject" + SecurityListPolicyPermit SecurityListPolicy = "permit" +) + +// SecurityListInfo describes an existing security list. +type SecurityListInfo struct { + // Shows the default account for your identity domain. + Account string `json:"account"` + // A description of the security list. + Description string `json:"description"` + // The three-part name of the security list (/Compute-identity_domain/user/object). + Name string `json:"name"` + // The policy for outbound traffic from the security list. + OutboundCIDRPolicy SecurityListPolicy `json:"outbound_cidr_policy"` + // The policy for inbound traffic to the security list + Policy SecurityListPolicy `json:"policy"` + // Uniform Resource Identifier + URI string `json:"uri"` +} + +// CreateSecurityListInput defines a security list to be created. +type CreateSecurityListInput struct { + // A description of the security list. + // Optional + Description string `json:"description"` + // The three-part name of the Security List (/Compute-identity_domain/user/object). + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` + // The policy for outbound traffic from the security list. + // Optional (defaults to `permit`) + OutboundCIDRPolicy SecurityListPolicy `json:"outbound_cidr_policy"` + // The policy for inbound traffic to the security list. + // Optional (defaults to `deny`) + Policy SecurityListPolicy `json:"policy"` +} + +// CreateSecurityList creates a new security list with the given name, policy and outbound CIDR policy. +func (c *SecurityListsClient) CreateSecurityList(createInput *CreateSecurityListInput) (*SecurityListInfo, error) { + createInput.Name = c.getQualifiedName(createInput.Name) + var listInfo SecurityListInfo + if err := c.createResource(createInput, &listInfo); err != nil { + return nil, err + } + + return c.success(&listInfo) +} + +// GetSecurityListInput describes the security list you want to get +type GetSecurityListInput struct { + // The three-part name of the Security List (/Compute-identity_domain/user/object). 
+ // Required + Name string `json:"name"` +} + +// GetSecurityList retrieves the security list with the given name. +func (c *SecurityListsClient) GetSecurityList(getInput *GetSecurityListInput) (*SecurityListInfo, error) { + var listInfo SecurityListInfo + if err := c.getResource(getInput.Name, &listInfo); err != nil { + return nil, err + } + + return c.success(&listInfo) +} + +// UpdateSecurityListInput defines what to update in a security list +type UpdateSecurityListInput struct { + // A description of the security list. + // Optional + Description string `json:"description"` + // The three-part name of the Security List (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` + // The policy for outbound traffic from the security list. + // Optional (defaults to `permit`) + OutboundCIDRPolicy SecurityListPolicy `json:"outbound_cidr_policy"` + // The policy for inbound traffic to the security list. + // Optional (defaults to `deny`) + Policy SecurityListPolicy `json:"policy"` +} + +// UpdateSecurityList updates the policy and outbound CIDR pol +func (c *SecurityListsClient) UpdateSecurityList(updateInput *UpdateSecurityListInput) (*SecurityListInfo, error) { + updateInput.Name = c.getQualifiedName(updateInput.Name) + var listInfo SecurityListInfo + if err := c.updateResource(updateInput.Name, updateInput, &listInfo); err != nil { + return nil, err + } + + return c.success(&listInfo) +} + +// DeleteSecurityListInput describes the security list to destroy +type DeleteSecurityListInput struct { + // The three-part name of the Security List (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// DeleteSecurityList deletes the security list with the given name. +func (c *SecurityListsClient) DeleteSecurityList(deleteInput *DeleteSecurityListInput) error { + return c.deleteResource(deleteInput.Name) +} + +func (c *SecurityListsClient) success(listInfo *SecurityListInfo) (*SecurityListInfo, error) { + c.unqualify(&listInfo.Name) + return listInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_protocols.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_protocols.go new file mode 100644 index 000000000..406be9cad --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_protocols.go @@ -0,0 +1,187 @@ +package compute + +const ( + SecurityProtocolDescription = "security protocol" + SecurityProtocolContainerPath = "/network/v1/secprotocol/" + SecurityProtocolResourcePath = "/network/v1/secprotocol" +) + +type SecurityProtocolsClient struct { + ResourceClient +} + +// SecurityProtocols() returns an SecurityProtocolsClient that can be used to access the +// necessary CRUD functions for Security Protocols. +func (c *ComputeClient) SecurityProtocols() *SecurityProtocolsClient { + return &SecurityProtocolsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: SecurityProtocolDescription, + ContainerPath: SecurityProtocolContainerPath, + ResourceRootPath: SecurityProtocolResourcePath, + }, + } +} + +// SecurityProtocolInfo contains the exported fields necessary to hold all the information about an +// Security Protocol +type SecurityProtocolInfo struct { + // List of port numbers or port range strings to match the packet's destination port. + DstPortSet []string `json:"dstPortSet"` + // Protocol used in the data portion of the IP datagram. 
+ IPProtocol string `json:"ipProtocol"` + // List of port numbers or port range strings to match the packet's source port. + SrcPortSet []string `json:"srcPortSet"` + // The name of the Security Protocol + Name string `json:"name"` + // Description of the Security Protocol + Description string `json:"description"` + // Slice of tags associated with the Security Protocol + Tags []string `json:"tags"` + // Uniform Resource Identifier for the Security Protocol + Uri string `json:"uri"` +} + +type CreateSecurityProtocolInput struct { + // The name of the Security Protocol to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Description of the SecurityProtocol + // Optional + Description string `json:"description"` + + // Enter a list of port numbers or port range strings. + //Traffic is enabled by a security rule when a packet's destination port matches the + // ports specified here. + // For TCP, SCTP, and UDP, each port is a destination transport port, between 0 and 65535, + // inclusive. For ICMP, each port is an ICMP type, between 0 and 255, inclusive. + // If no destination ports are specified, all destination ports or ICMP types are allowed. + // Optional + DstPortSet []string `json:"dstPortSet"` + + // The protocol used in the data portion of the IP datagram. + // Specify one of the permitted values or enter a number in the range 0–254 to + // represent the protocol that you want to specify. See Assigned Internet Protocol Numbers. + // Permitted values are: tcp, udp, icmp, igmp, ipip, rdp, esp, ah, gre, icmpv6, ospf, pim, sctp, + // mplsip, all. + // Traffic is enabled by a security rule when the protocol in the packet matches the + // protocol specified here. If no protocol is specified, all protocols are allowed. + // Optional + IPProtocol string `json:"ipProtocol"` + + // Enter a list of port numbers or port range strings. + // Traffic is enabled by a security rule when a packet's source port matches the + // ports specified here. + // For TCP, SCTP, and UDP, each port is a source transport port, + // between 0 and 65535, inclusive. + // For ICMP, each port is an ICMP type, between 0 and 255, inclusive. + // If no source ports are specified, all source ports or ICMP types are allowed. + // Optional + SrcPortSet []string `json:"srcPortSet"` + + // String slice of tags to apply to the Security Protocol object + // Optional + Tags []string `json:"tags"` +} + +// Create a new Security Protocol from an SecurityProtocolsClient and an input struct. +// Returns a populated Info struct for the Security Protocol, and any errors +func (c *SecurityProtocolsClient) CreateSecurityProtocol(input *CreateSecurityProtocolInput) (*SecurityProtocolInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo SecurityProtocolInfo + if err := c.createResource(&input, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type GetSecurityProtocolInput struct { + // The name of the Security Protocol to query for. 
Case-sensitive + // Required + Name string `json:"name"` +} + +// Returns a populated SecurityProtocolInfo struct from an input struct +func (c *SecurityProtocolsClient) GetSecurityProtocol(input *GetSecurityProtocolInput) (*SecurityProtocolInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var ipInfo SecurityProtocolInfo + if err := c.getResource(input.Name, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +// UpdateSecurityProtocolInput defines what to update in a security protocol +type UpdateSecurityProtocolInput struct { + // The name of the Security Protocol to create. Object names can only contain alphanumeric, + // underscore, dash, and period characters. Names are case-sensitive. + // Required + Name string `json:"name"` + + // Description of the SecurityProtocol + // Optional + Description string `json:"description"` + + // Enter a list of port numbers or port range strings. + //Traffic is enabled by a security rule when a packet's destination port matches the + // ports specified here. + // For TCP, SCTP, and UDP, each port is a destination transport port, between 0 and 65535, + // inclusive. For ICMP, each port is an ICMP type, between 0 and 255, inclusive. + // If no destination ports are specified, all destination ports or ICMP types are allowed. + DstPortSet []string `json:"dstPortSet"` + + // The protocol used in the data portion of the IP datagram. + // Specify one of the permitted values or enter a number in the range 0–254 to + // represent the protocol that you want to specify. See Assigned Internet Protocol Numbers. + // Permitted values are: tcp, udp, icmp, igmp, ipip, rdp, esp, ah, gre, icmpv6, ospf, pim, sctp, + // mplsip, all. + // Traffic is enabled by a security rule when the protocol in the packet matches the + // protocol specified here. If no protocol is specified, all protocols are allowed. + IPProtocol string `json:"ipProtocol"` + + // Enter a list of port numbers or port range strings. + // Traffic is enabled by a security rule when a packet's source port matches the + // ports specified here. + // For TCP, SCTP, and UDP, each port is a source transport port, + // between 0 and 65535, inclusive. + // For ICMP, each port is an ICMP type, between 0 and 255, inclusive. + // If no source ports are specified, all source ports or ICMP types are allowed. + SrcPortSet []string `json:"srcPortSet"` + + // String slice of tags to apply to the Security Protocol object + // Optional + Tags []string `json:"tags"` +} + +// UpdateSecurityProtocol update the security protocol +func (c *SecurityProtocolsClient) UpdateSecurityProtocol(updateInput *UpdateSecurityProtocolInput) (*SecurityProtocolInfo, error) { + updateInput.Name = c.getQualifiedName(updateInput.Name) + var ipInfo SecurityProtocolInfo + if err := c.updateResource(updateInput.Name, updateInput, &ipInfo); err != nil { + return nil, err + } + + return c.success(&ipInfo) +} + +type DeleteSecurityProtocolInput struct { + // The name of the Security Protocol to query for. 
Case-sensitive + // Required + Name string `json:"name"` +} + +func (c *SecurityProtocolsClient) DeleteSecurityProtocol(input *DeleteSecurityProtocolInput) error { + return c.deleteResource(input.Name) +} + +// Unqualifies any qualified fields in the SecurityProtocolInfo struct +func (c *SecurityProtocolsClient) success(info *SecurityProtocolInfo) (*SecurityProtocolInfo, error) { + c.unqualify(&info.Name) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_rules.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_rules.go new file mode 100644 index 000000000..1773ea8b5 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/security_rules.go @@ -0,0 +1,266 @@ +package compute + +const ( + SecurityRuleDescription = "security rules" + SecurityRuleContainerPath = "/network/v1/secrule/" + SecurityRuleResourcePath = "/network/v1/secrule" +) + +type SecurityRuleClient struct { + ResourceClient +} + +// SecurityRules() returns an SecurityRulesClient that can be used to access the +// necessary CRUD functions for Security Rules. +func (c *ComputeClient) SecurityRules() *SecurityRuleClient { + return &SecurityRuleClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: SecurityRuleDescription, + ContainerPath: SecurityRuleContainerPath, + ResourceRootPath: SecurityRuleResourcePath, + }, + } +} + +// SecurityRuleInfo contains the exported fields necessary to hold all the information about a +// Security Rule +type SecurityRuleInfo struct { + // Name of the ACL that contains this rule. + ACL string `json:"acl"` + // Description of the Security Rule + Description string `json:"description"` + // List of IP address prefix set names to match the packet's destination IP address. + DstIpAddressPrefixSets []string `json:"dstIpAddressPrefixSets"` + // Name of virtual NIC set containing the packet's destination virtual NIC. + DstVnicSet string `json:"dstVnicSet"` + // Allows the security rule to be disabled. + Enabled bool `json:"enabledFlag"` + // Direction of the flow; Can be "egress" or "ingress". + FlowDirection string `json:"FlowDirection"` + // The name of the Security Rule + Name string `json:"name"` + // List of security protocol names to match the packet's protocol and port. + SecProtocols []string `json:"secProtocols"` + // List of multipart names of IP address prefix set to match the packet's source IP address. + SrcIpAddressPrefixSets []string `json:"srcIpAddressPrefixSets"` + // Name of virtual NIC set containing the packet's source virtual NIC. + SrcVnicSet string `json:"srcVnicSet"` + // Slice of tags associated with the Security Rule + Tags []string `json:"tags"` + // Uniform Resource Identifier for the Security Rule + Uri string `json:"uri"` +} + +type CreateSecurityRuleInput struct { + //Select the name of the access control list (ACL) that you want to add this + // security rule to. Security rules are applied to vNIC sets by using ACLs. + // Optional + ACL string `json:"acl,omitempty"` + + // Description of the Security Rule + // Optional + Description string `json:"description"` + + // A list of IP address prefix sets to which you want to permit traffic. + // Only packets to IP addresses in the specified IP address prefix sets are permitted. + // When no destination IP address prefix sets are specified, traffic to any + // IP address is permitted. 
+ // Optional + DstIpAddressPrefixSets []string `json:"dstIpAddressPrefixSets"` + + // The vNICset to which you want to permit traffic. Only packets to vNICs in the + // specified vNICset are permitted. When no destination vNICset is specified, traffic + // to any vNIC is permitted. + // Optional + DstVnicSet string `json:"dstVnicSet,omitempty"` + + // Allows the security rule to be enabled or disabled. This parameter is set to + // true by default. Specify false to disable the security rule. + // Optional + Enabled bool `json:"enabledFlag"` + + // Specify the direction of flow of traffic, which is relative to the instances, + // for this security rule. Allowed values are ingress or egress. + // An ingress packet is a packet received by a virtual NIC, for example from + // another virtual NIC or from the public Internet. + // An egress packet is a packet sent by a virtual NIC, for example to another + // virtual NIC or to the public Internet. + // Required + FlowDirection string `json:"flowDirection"` + + // The name of the Security Rule + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. + // Object names are case-sensitive. When you specify the object name, ensure that an object + // of the same type and with the same name doesn't already exist. + // If such an object already exists, another object of the same type and with the same name won't + // be created and the existing object won't be updated. + // Required + Name string `json:"name"` + + // A list of security protocols for which you want to permit traffic. Only packets that + // match the specified protocols and ports are permitted. When no security protocols are + // specified, traffic using any protocol over any port is permitted. + // Optional + SecProtocols []string `json:"secProtocols"` + + // A list of IP address prefix sets from which you want to permit traffic. Only packets + // from IP addresses in the specified IP address prefix sets are permitted. When no source + // IP address prefix sets are specified, traffic from any IP address is permitted. + // Optional + SrcIpAddressPrefixSets []string `json:"srcIpAddressPrefixSets"` + + // The vNICset from which you want to permit traffic. Only packets from vNICs in the + // specified vNICset are permitted. When no source vNICset is specified, traffic from any + // vNIC is permitted. + // Optional + SrcVnicSet string `json:"srcVnicSet,omitempty"` + + // Strings that you can use to tag the security rule. + // Optional + Tags []string `json:"tags"` +} + +// Create a new Security Rule from an SecurityRuleClient and an input struct. +// Returns a populated Info struct for the Security Rule, and any errors +func (c *SecurityRuleClient) CreateSecurityRule(input *CreateSecurityRuleInput) (*SecurityRuleInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.ACL = c.getQualifiedName(input.ACL) + input.SrcVnicSet = c.getQualifiedName(input.SrcVnicSet) + input.DstVnicSet = c.getQualifiedName(input.DstVnicSet) + input.SrcIpAddressPrefixSets = c.getQualifiedList(input.SrcIpAddressPrefixSets) + input.DstIpAddressPrefixSets = c.getQualifiedList(input.DstIpAddressPrefixSets) + input.SecProtocols = c.getQualifiedList(input.SecProtocols) + + var securityRuleInfo SecurityRuleInfo + if err := c.createResource(&input, &securityRuleInfo); err != nil { + return nil, err + } + + return c.success(&securityRuleInfo) +} + +type GetSecurityRuleInput struct { + // The name of the Security Rule to query for. 
Case-sensitive + // Required + Name string `json:"name"` +} + +// Returns a populated SecurityRuleInfo struct from an input struct +func (c *SecurityRuleClient) GetSecurityRule(input *GetSecurityRuleInput) (*SecurityRuleInfo, error) { + input.Name = c.getQualifiedName(input.Name) + + var securityRuleInfo SecurityRuleInfo + if err := c.getResource(input.Name, &securityRuleInfo); err != nil { + return nil, err + } + + return c.success(&securityRuleInfo) +} + +// UpdateSecurityRuleInput describes a secruity rule to update +type UpdateSecurityRuleInput struct { + //Select the name of the access control list (ACL) that you want to add this + // security rule to. Security rules are applied to vNIC sets by using ACLs. + // Optional + ACL string `json:"acl,omitempty"` + + // Description of the Security Rule + // Optional + Description string `json:"description"` + + // A list of IP address prefix sets to which you want to permit traffic. + // Only packets to IP addresses in the specified IP address prefix sets are permitted. + // When no destination IP address prefix sets are specified, traffic to any + // IP address is permitted. + // Optional + DstIpAddressPrefixSets []string `json:"dstIpAddressPrefixSets"` + + // The vNICset to which you want to permit traffic. Only packets to vNICs in the + // specified vNICset are permitted. When no destination vNICset is specified, traffic + // to any vNIC is permitted. + // Optional + DstVnicSet string `json:"dstVnicSet,omitempty"` + + // Allows the security rule to be enabled or disabled. This parameter is set to + // true by default. Specify false to disable the security rule. + // Optional + Enabled bool `json:"enabledFlag"` + + // Specify the direction of flow of traffic, which is relative to the instances, + // for this security rule. Allowed values are ingress or egress. + // An ingress packet is a packet received by a virtual NIC, for example from + // another virtual NIC or from the public Internet. + // An egress packet is a packet sent by a virtual NIC, for example to another + // virtual NIC or to the public Internet. + // Required + FlowDirection string `json:"flowDirection"` + + // The name of the Security Rule + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. + // Object names are case-sensitive. When you specify the object name, ensure that an object + // of the same type and with the same name doesn't already exist. + // If such an object already exists, another object of the same type and with the same name won't + // be created and the existing object won't be updated. + // Required + Name string `json:"name"` + + // A list of security protocols for which you want to permit traffic. Only packets that + // match the specified protocols and ports are permitted. When no security protocols are + // specified, traffic using any protocol over any port is permitted. + // Optional + SecProtocols []string `json:"secProtocols"` + + // A list of IP address prefix sets from which you want to permit traffic. Only packets + // from IP addresses in the specified IP address prefix sets are permitted. When no source + // IP address prefix sets are specified, traffic from any IP address is permitted. + // Optional + SrcIpAddressPrefixSets []string `json:"srcIpAddressPrefixSets"` + + // The vNICset from which you want to permit traffic. Only packets from vNICs in the + // specified vNICset are permitted. When no source vNICset is specified, traffic from any + // vNIC is permitted. 
+	// Optional
+	SrcVnicSet string `json:"srcVnicSet,omitempty"`
+
+	// Strings that you can use to tag the security rule.
+	// Optional
+	Tags []string `json:"tags"`
+}
+
+// UpdateSecurityRule modifies the properties of the security rule with the given name.
+func (c *SecurityRuleClient) UpdateSecurityRule(updateInput *UpdateSecurityRuleInput) (*SecurityRuleInfo, error) {
+	updateInput.Name = c.getQualifiedName(updateInput.Name)
+	updateInput.ACL = c.getQualifiedName(updateInput.ACL)
+	updateInput.SrcVnicSet = c.getQualifiedName(updateInput.SrcVnicSet)
+	updateInput.DstVnicSet = c.getQualifiedName(updateInput.DstVnicSet)
+	updateInput.SrcIpAddressPrefixSets = c.getQualifiedList(updateInput.SrcIpAddressPrefixSets)
+	updateInput.DstIpAddressPrefixSets = c.getQualifiedList(updateInput.DstIpAddressPrefixSets)
+	updateInput.SecProtocols = c.getQualifiedList(updateInput.SecProtocols)
+
+	var securityRuleInfo SecurityRuleInfo
+	if err := c.updateResource(updateInput.Name, updateInput, &securityRuleInfo); err != nil {
+		return nil, err
+	}
+
+	return c.success(&securityRuleInfo)
+}
+
+type DeleteSecurityRuleInput struct {
+	// The name of the Security Rule to delete. Case-sensitive
+	// Required
+	Name string `json:"name"`
+}
+
+func (c *SecurityRuleClient) DeleteSecurityRule(input *DeleteSecurityRuleInput) error {
+	return c.deleteResource(input.Name)
+}
+
+// Unqualifies any qualified fields in the SecurityRuleInfo struct
+func (c *SecurityRuleClient) success(info *SecurityRuleInfo) (*SecurityRuleInfo, error) {
+	c.unqualify(&info.Name, &info.ACL, &info.SrcVnicSet, &info.DstVnicSet)
+	info.SrcIpAddressPrefixSets = c.getUnqualifiedList(info.SrcIpAddressPrefixSets)
+	info.DstIpAddressPrefixSets = c.getUnqualifiedList(info.DstIpAddressPrefixSets)
+	info.SecProtocols = c.getUnqualifiedList(info.SecProtocols)
+	return info, nil
+}
diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/snapshots.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/snapshots.go
new file mode 100644
index 000000000..98a24ac9d
--- /dev/null
+++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/snapshots.go
@@ -0,0 +1,242 @@
+package compute
+
+import (
+	"fmt"
+	"time"
+)
+
+const WaitForSnapshotCompleteTimeout = time.Duration(600 * time.Second)
+
+// SnapshotsClient is a client for the Snapshot functions of the Compute API.
+type SnapshotsClient struct {
+	ResourceClient
+}
+
+// Snapshots obtains a SnapshotsClient which can be used to access the
+// Snapshot functions of the Compute API
+func (c *ComputeClient) Snapshots() *SnapshotsClient {
+	return &SnapshotsClient{
+		ResourceClient: ResourceClient{
+			ComputeClient:       c,
+			ResourceDescription: "Snapshot",
+			ContainerPath:       "/snapshot/",
+			ResourceRootPath:    "/snapshot",
+		}}
+}
+
+type SnapshotState string
+
+const (
+	SnapshotActive   SnapshotState = "active"
+	SnapshotComplete SnapshotState = "complete"
+	SnapshotQueued   SnapshotState = "queued"
+	SnapshotError    SnapshotState = "error"
+)
+
+type SnapshotDelay string
+
+const (
+	SnapshotDelayShutdown SnapshotDelay = "shutdown"
+)
+
+// Snapshot describes an existing instance snapshot.
+type Snapshot struct {
+	// Shows the default account for your identity domain.
+	Account string `json:"account"`
+	// Timestamp when this request was created.
+	CreationTime string `json:"creation_time"`
+	// The delay setting for this request; when set to "shutdown", the snapshot of the instance is not taken immediately.
+	Delay SnapshotDelay `json:"delay"`
+	// A description of the reason this request entered "error" state.
+ ErrorReason string `json:"error_reason"` + // Name of the instance + Instance string `json:"instance"` + // Name of the machine image generated from the instance snapshot request. + MachineImage string `json:"machineimage"` + // Name of the instance snapshot request. + Name string `json:"name"` + // Not used + Quota string `json:"quota"` + // The state of the request. + State SnapshotState `json:"state"` + // Uniform Resource Identifier + URI string `json:"uri"` +} + +// CreateSnapshotInput defines an Snapshot to be created. +type CreateSnapshotInput struct { + // The name of the account that contains the credentials and access details of + // Oracle Storage Cloud Service. The machine image file is uploaded to the Oracle + // Storage Cloud Service account that you specify. + // Optional + Account string `json:"account,omitempty"` + // Use this option when you want to preserve the custom changes you have made + // to an instance before deleting the instance. The only permitted value is shutdown. + // Snapshot of the instance is not taken immediately. It creates a machine image which + // preserves the changes you have made to the instance, and then the instance is deleted. + // Note: This option has no effect if you shutdown the instance from inside it. Any pending + // snapshot request on that instance goes into error state. You must delete the instance + // (DELETE /instance/{name}). + // Optional + Delay SnapshotDelay `json:"delay,omitempty"` + // Name of the instance that you want to clone. + // Required + Instance string `json:"instance"` + // Specify the name of the machine image created by the snapshot request. + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. + // Object names are case-sensitive. + // If you don't specify a name for this object, then the name is generated automatically. + // Optional + MachineImage string `json:"machineimage,omitempty"` + // Time to wait for snapshot to be completed + Timeout time.Duration `json:"-"` +} + +// CreateSnapshot creates a new Snapshot +func (c *SnapshotsClient) CreateSnapshot(input *CreateSnapshotInput) (*Snapshot, error) { + input.Account = c.getQualifiedACMEName(input.Account) + input.Instance = c.getQualifiedName(input.Instance) + input.MachineImage = c.getQualifiedName(input.MachineImage) + + var snapshotInfo Snapshot + if err := c.createResource(&input, &snapshotInfo); err != nil { + return nil, err + } + + // Call wait for snapshot complete now, as creating the snashot is an eventually consistent operation + getInput := &GetSnapshotInput{ + Name: snapshotInfo.Name, + } + + if input.Timeout == 0 { + input.Timeout = WaitForSnapshotCompleteTimeout + } + + // Wait for snapshot to be complete and return the result + return c.WaitForSnapshotComplete(getInput, input.Timeout) +} + +// GetSnapshotInput describes the snapshot to get +type GetSnapshotInput struct { + // The name of the Snapshot + // Required + Name string `json:"name"` +} + +// GetSnapshot retrieves the Snapshot with the given name. 
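+//
+// A minimal usage sketch (illustrative only; the *ComputeClient variable and
+// the snapshot name below are assumptions, not values defined by this package):
+//
+//	snapshots := computeClient.Snapshots()
+//	snap, err := snapshots.GetSnapshot(&GetSnapshotInput{Name: "example-snapshot"})
+//	if err != nil {
+//		// handle the error
+//	}
+//	_ = snap.State // e.g. SnapshotComplete once the request has finished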
+func (c *SnapshotsClient) GetSnapshot(getInput *GetSnapshotInput) (*Snapshot, error) { + getInput.Name = c.getQualifiedName(getInput.Name) + var snapshotInfo Snapshot + if err := c.getResource(getInput.Name, &snapshotInfo); err != nil { + return nil, err + } + + return c.success(&snapshotInfo) +} + +// DeleteSnapshotInput describes the snapshot to delete +type DeleteSnapshotInput struct { + // The name of the Snapshot + // Required + Snapshot string + // The name of the machine image + // Required + MachineImage string + // Time to wait for snapshot to be deleted + Timeout time.Duration +} + +// DeleteSnapshot deletes the Snapshot with the given name. +// A machine image gets created with the associated snapshot and needs to be deleted as well. +func (c *SnapshotsClient) DeleteSnapshot(machineImagesClient *MachineImagesClient, input *DeleteSnapshotInput) error { + // Wait for snapshot complete in case delay is active and the corresponding instance needs to be deleted first + getInput := &GetSnapshotInput{ + Name: input.Snapshot, + } + + if input.Timeout == 0 { + input.Timeout = WaitForSnapshotCompleteTimeout + } + + if _, err := c.WaitForSnapshotComplete(getInput, input.Timeout); err != nil { + return fmt.Errorf("Could not delete snapshot: %s", err) + } + + if err := c.deleteResource(input.Snapshot); err != nil { + return fmt.Errorf("Could not delete snapshot: %s", err) + } + + deleteMachineImageRequest := &DeleteMachineImageInput{ + Name: input.MachineImage, + } + if err := machineImagesClient.DeleteMachineImage(deleteMachineImageRequest); err != nil { + return fmt.Errorf("Could not delete machine image associated with snapshot: %s", err) + } + + return nil +} + +// DeleteSnapshotResourceOnly deletes the Snapshot with the given name. +// The machine image that gets created with the associated snapshot is not +// deleted by this method. +func (c *SnapshotsClient) DeleteSnapshotResourceOnly(input *DeleteSnapshotInput) error { + // Wait for snapshot complete in case delay is active and the corresponding + // instance needs to be deleted first + getInput := &GetSnapshotInput{ + Name: input.Snapshot, + } + + if input.Timeout == 0 { + input.Timeout = WaitForSnapshotCompleteTimeout + } + + if _, err := c.WaitForSnapshotComplete(getInput, input.Timeout); err != nil { + return fmt.Errorf("Could not delete snapshot: %s", err) + } + + if err := c.deleteResource(input.Snapshot); err != nil { + return fmt.Errorf("Could not delete snapshot: %s", err) + } + + return nil +} + +// WaitForSnapshotComplete waits for an snapshot to be completely initialized and available. 
+func (c *SnapshotsClient) WaitForSnapshotComplete(input *GetSnapshotInput, timeout time.Duration) (*Snapshot, error) { + var info *Snapshot + var getErr error + err := c.client.WaitFor("snapshot to be complete", timeout, func() (bool, error) { + info, getErr = c.GetSnapshot(input) + if getErr != nil { + return false, getErr + } + switch s := info.State; s { + case SnapshotError: + return false, fmt.Errorf("Error initializing snapshot: %s", info.ErrorReason) + case SnapshotComplete: + c.client.DebugLogString("Snapshot Complete") + return true, nil + case SnapshotQueued: + c.client.DebugLogString("Snapshot Queuing") + return false, nil + case SnapshotActive: + c.client.DebugLogString("Snapshot Active") + if info.Delay == SnapshotDelayShutdown { + return true, nil + } + return false, nil + default: + c.client.DebugLogString(fmt.Sprintf("Unknown snapshot state: %s, waiting", s)) + return false, nil + } + }) + return info, err +} + +func (c *SnapshotsClient) success(snapshotInfo *Snapshot) (*Snapshot, error) { + c.unqualify(&snapshotInfo.Account) + c.unqualify(&snapshotInfo.Instance) + c.unqualify(&snapshotInfo.MachineImage) + c.unqualify(&snapshotInfo.Name) + return snapshotInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/ssh_keys.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ssh_keys.go new file mode 100644 index 000000000..821fd2216 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/ssh_keys.go @@ -0,0 +1,124 @@ +package compute + +// SSHKeysClient is a client for the SSH key functions of the Compute API. +type SSHKeysClient struct { + ResourceClient +} + +// SSHKeys obtains an SSHKeysClient which can be used to access to the +// SSH key functions of the Compute API +func (c *ComputeClient) SSHKeys() *SSHKeysClient { + return &SSHKeysClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "SSH key", + ContainerPath: "/sshkey/", + ResourceRootPath: "/sshkey", + }} +} + +// SSHKeyInfo describes an existing SSH key. +type SSHKey struct { + // Indicates whether the key is enabled (true) or disabled. + Enabled bool `json:"enabled"` + // The SSH public key value. + Key string `json:"key"` + // The three-part name of the SSH Key (/Compute-identity_domain/user/object). + Name string `json:"name"` + // Unique Resource Identifier + URI string `json:"uri"` +} + +// CreateSSHKeyInput defines an SSH key to be created. +type CreateSSHKeyInput struct { + // The three-part name of the SSH Key (/Compute-identity_domain/user/object). + // Object names can contain only alphanumeric characters, hyphens, underscores, and periods. Object names are case-sensitive. + // Required + Name string `json:"name"` + // The SSH public key value. + // Required + Key string `json:"key"` + // Indicates whether the key must be enabled (default) or disabled. Note that disabled keys cannot be associated with instances. + // To explicitly enable the key, specify true. To disable the key, specify false. + // Optional + Enabled bool `json:"enabled"` +} + +// CreateSSHKey creates a new SSH key with the given name, key and enabled flag. 
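+//
+// A minimal usage sketch (illustrative only; "computeClient" and the key
+// material below are assumptions, not values defined by this package):
+//
+//	sshKeys := computeClient.SSHKeys()
+//	key, err := sshKeys.CreateSSHKey(&CreateSSHKeyInput{
+//		Name:    "example-key",
+//		Key:     "ssh-rsa AAAA... user@example.com",
+//		Enabled: true,
+//	})
+//	if err != nil {
+//		// handle the error
+//	}
+//	_ = key.Name // unqualified name of the newly created key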
+func (c *SSHKeysClient) CreateSSHKey(createInput *CreateSSHKeyInput) (*SSHKey, error) { + var keyInfo SSHKey + // We have to update after create to get the full ssh key into opc + updateSSHKeyInput := UpdateSSHKeyInput{ + Name: createInput.Name, + Key: createInput.Key, + Enabled: createInput.Enabled, + } + + createInput.Name = c.getQualifiedName(createInput.Name) + if err := c.createResource(&createInput, &keyInfo); err != nil { + return nil, err + } + + _, err := c.UpdateSSHKey(&updateSSHKeyInput) + if err != nil { + return nil, err + } + + return c.success(&keyInfo) +} + +// GetSSHKeyInput describes the ssh key to get +type GetSSHKeyInput struct { + // The three-part name of the SSH Key (/Compute-identity_domain/user/object). + Name string `json:"name"` +} + +// GetSSHKey retrieves the SSH key with the given name. +func (c *SSHKeysClient) GetSSHKey(getInput *GetSSHKeyInput) (*SSHKey, error) { + var keyInfo SSHKey + if err := c.getResource(getInput.Name, &keyInfo); err != nil { + return nil, err + } + + return c.success(&keyInfo) +} + +// UpdateSSHKeyInput defines an SSH key to be updated +type UpdateSSHKeyInput struct { + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` + // The SSH public key value. + // Required + Key string `json:"key"` + // Indicates whether the key must be enabled (default) or disabled. Note that disabled keys cannot be associated with instances. + // To explicitly enable the key, specify true. To disable the key, specify false. + // Optional + // TODO/NOTE: isn't this required? + Enabled bool `json:"enabled"` +} + +// UpdateSSHKey updates the key and enabled flag of the SSH key with the given name. +func (c *SSHKeysClient) UpdateSSHKey(updateInput *UpdateSSHKeyInput) (*SSHKey, error) { + var keyInfo SSHKey + updateInput.Name = c.getQualifiedName(updateInput.Name) + if err := c.updateResource(updateInput.Name, updateInput, &keyInfo); err != nil { + return nil, err + } + return c.success(&keyInfo) +} + +// DeleteKeyInput describes the ssh key to delete +type DeleteSSHKeyInput struct { + // The three-part name of the SSH Key (/Compute-identity_domain/user/object). + Name string `json:"name"` +} + +// DeleteSSHKey deletes the SSH key with the given name. +func (c *SSHKeysClient) DeleteSSHKey(deleteInput *DeleteSSHKeyInput) error { + return c.deleteResource(deleteInput.Name) +} + +func (c *SSHKeysClient) success(keyInfo *SSHKey) (*SSHKey, error) { + c.unqualify(&keyInfo.Name) + return keyInfo, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volume_attachments.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volume_attachments.go new file mode 100644 index 000000000..7fe9a13ef --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volume_attachments.go @@ -0,0 +1,179 @@ +package compute + +import ( + "time" + + "github.com/hashicorp/go-oracle-terraform/client" +) + +const WaitForVolumeAttachmentDeleteTimeout = time.Duration(30 * time.Second) +const WaitForVolumeAttachmentReadyTimeout = time.Duration(30 * time.Second) + +// StorageAttachmentsClient is a client for the Storage Attachment functions of the Compute API. 
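+//
+// A hedged usage sketch for attaching a storage volume to an instance
+// (the client variable and object names are illustrative assumptions):
+//
+//	attachments := computeClient.StorageAttachments()
+//	info, err := attachments.CreateStorageAttachment(&CreateStorageAttachmentInput{
+//		Index:             1, // exposed to the instance as /dev/xvdb
+//		InstanceName:      "example-instance",
+//		StorageVolumeName: "example-volume",
+//	})
+//	if err != nil {
+//		// handle the error
+//	}
+//	_ = info.State // Attached once the attachment wait loop succeeds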
+type StorageAttachmentsClient struct { + ResourceClient +} + +// StorageAttachments obtains a StorageAttachmentsClient which can be used to access to the +// Storage Attachment functions of the Compute API +func (c *ComputeClient) StorageAttachments() *StorageAttachmentsClient { + return &StorageAttachmentsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "storage volume attachment", + ContainerPath: "/storage/attachment/", + ResourceRootPath: "/storage/attachment", + }} +} + +type StorageAttachmentState string + +const ( + Attaching StorageAttachmentState = "attaching" + Attached StorageAttachmentState = "attached" + Detaching StorageAttachmentState = "detaching" + Unavailable StorageAttachmentState = "unavailable" + Unknown StorageAttachmentState = "unknown" +) + +// StorageAttachmentInfo describes an existing storage attachment. +type StorageAttachmentInfo struct { + // Name of this attachment, generated by the server. + Name string `json:"name"` + + // Index number for the volume. The allowed range is 1-10 + // An attachment with index 1 is exposed to the instance as /dev/xvdb, an attachment with index 2 is exposed as /dev/xvdc, and so on. + Index int `json:"index"` + + // Multipart name of the instance attached to the storage volume. + InstanceName string `json:"instance_name"` + + // Multipart name of the volume attached to the instance. + StorageVolumeName string `json:"storage_volume_name"` + + // The State of the Storage Attachment + State StorageAttachmentState `json:"state"` +} + +func (c *StorageAttachmentsClient) success(attachmentInfo *StorageAttachmentInfo) (*StorageAttachmentInfo, error) { + c.unqualify(&attachmentInfo.Name, &attachmentInfo.InstanceName, &attachmentInfo.StorageVolumeName) + return attachmentInfo, nil +} + +type CreateStorageAttachmentInput struct { + // Index number for the volume. The allowed range is 1-10 + // An attachment with index 1 is exposed to the instance as /dev/xvdb, an attachment with index 2 is exposed as /dev/xvdc, and so on. + // Required. + Index int `json:"index"` + + // Multipart name of the instance to which you want to attach the volume. + // Required. + InstanceName string `json:"instance_name"` + + // Multipart name of the volume that you want to attach. + // Required. + StorageVolumeName string `json:"storage_volume_name"` + + // Time to wait for storage volume attachment + Timeout time.Duration `json:"-"` +} + +// CreateStorageAttachment creates a storage attachment attaching the given volume to the given instance at the given index. +func (c *StorageAttachmentsClient) CreateStorageAttachment(input *CreateStorageAttachmentInput) (*StorageAttachmentInfo, error) { + input.InstanceName = c.getQualifiedName(input.InstanceName) + input.StorageVolumeName = c.getQualifiedName(input.StorageVolumeName) + + var attachmentInfo *StorageAttachmentInfo + if err := c.createResource(&input, &attachmentInfo); err != nil { + return nil, err + } + + if input.Timeout == 0 { + input.Timeout = WaitForVolumeAttachmentReadyTimeout + } + + return c.waitForStorageAttachmentToFullyAttach(attachmentInfo.Name, input.Timeout) +} + +// DeleteStorageAttachmentInput represents the body of an API request to delete a Storage Attachment. +type DeleteStorageAttachmentInput struct { + // The three-part name of the Storage Attachment (/Compute-identity_domain/user/object). 
+ // Required + Name string `json:"name"` + + // Time to wait for storage volume snapshot + Timeout time.Duration `json:"-"` +} + +// DeleteStorageAttachment deletes the storage attachment with the given name. +func (c *StorageAttachmentsClient) DeleteStorageAttachment(input *DeleteStorageAttachmentInput) error { + if err := c.deleteResource(input.Name); err != nil { + return err + } + + if input.Timeout == 0 { + input.Timeout = WaitForVolumeAttachmentDeleteTimeout + } + + return c.waitForStorageAttachmentToBeDeleted(input.Name, input.Timeout) +} + +// GetStorageAttachmentInput represents the body of an API request to obtain a Storage Attachment. +type GetStorageAttachmentInput struct { + // The three-part name of the Storage Attachment (/Compute-identity_domain/user/object). + // Required + Name string `json:"name"` +} + +// GetStorageAttachment retrieves the storage attachment with the given name. +func (c *StorageAttachmentsClient) GetStorageAttachment(input *GetStorageAttachmentInput) (*StorageAttachmentInfo, error) { + var attachmentInfo *StorageAttachmentInfo + if err := c.getResource(input.Name, &attachmentInfo); err != nil { + return nil, err + } + + return c.success(attachmentInfo) +} + +// waitForStorageAttachmentToFullyAttach waits for the storage attachment with the given name to be fully attached, or times out. +func (c *StorageAttachmentsClient) waitForStorageAttachmentToFullyAttach(name string, timeout time.Duration) (*StorageAttachmentInfo, error) { + var waitResult *StorageAttachmentInfo + + err := c.client.WaitFor("storage attachment to be attached", timeout, func() (bool, error) { + input := &GetStorageAttachmentInput{ + Name: name, + } + info, err := c.GetStorageAttachment(input) + if err != nil { + return false, err + } + + if info != nil { + if info.State == Attached { + waitResult = info + return true, nil + } + } + + return false, nil + }) + + return waitResult, err +} + +// waitForStorageAttachmentToBeDeleted waits for the storage attachment with the given name to be fully deleted, or times out. +func (c *StorageAttachmentsClient) waitForStorageAttachmentToBeDeleted(name string, timeout time.Duration) error { + return c.client.WaitFor("storage attachment to be deleted", timeout, func() (bool, error) { + input := &GetStorageAttachmentInput{ + Name: name, + } + _, err := c.GetStorageAttachment(input) + if err != nil { + if client.WasNotFoundError(err) { + return true, nil + } + return false, err + } + return false, nil + }) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volume_snapshots.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volume_snapshots.go new file mode 100644 index 000000000..aa7c4a7a2 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volume_snapshots.go @@ -0,0 +1,251 @@ +package compute + +import ( + "fmt" + "strings" + "time" + + "github.com/hashicorp/go-oracle-terraform/client" +) + +const ( + StorageVolumeSnapshotDescription = "storage volume snapshot" + StorageVolumeSnapshotContainerPath = "/storage/snapshot/" + StorageVolumeSnapshotResourcePath = "/storage/snapshot" + + WaitForSnapshotCreateTimeout = time.Duration(2400 * time.Second) + WaitForSnapshotDeleteTimeout = time.Duration(1500 * time.Second) + + // Collocated Snapshot Property + SnapshotPropertyCollocated = "/oracle/private/storage/snapshot/collocated" +) + +// StorageVolumeSnapshotClient is a client for the Storage Volume Snapshot functions of the Compute API. 
+type StorageVolumeSnapshotClient struct { + ResourceClient +} + +func (c *ComputeClient) StorageVolumeSnapshots() *StorageVolumeSnapshotClient { + return &StorageVolumeSnapshotClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: StorageVolumeSnapshotDescription, + ContainerPath: StorageVolumeSnapshotContainerPath, + ResourceRootPath: StorageVolumeSnapshotResourcePath, + }, + } +} + +// StorageVolumeSnapshotInfo represents the information retrieved from the service about a storage volume snapshot +type StorageVolumeSnapshotInfo struct { + // Account to use for snapshots + Account string `json:"account"` + + // Description of the snapshot + Description string `json:"description"` + + // The name of the machine image that's used in the boot volume from which this snapshot is taken + MachineImageName string `json:"machineimage_name"` + + // Name of the snapshot + Name string `json:"name"` + + // String indicating whether the parent volume is bootable or not + ParentVolumeBootable string `json:"parent_volume_bootable"` + + // Platform the snapshot is compatible with + Platform string `json:"platform"` + + // String determining whether the snapshot is remote or collocated + Property string `json:"property"` + + // The size of the snapshot in GB + Size string `json:"size"` + + // The ID of the snapshot. Generated by the server + SnapshotID string `json:"snapshot_id"` + + // The timestamp of the storage snapshot + SnapshotTimestamp string `json:"snapshot_timestamp"` + + // Timestamp for when the operation started + StartTimestamp string `json:"start_timestamp"` + + // Status of the snapshot + Status string `json:"status"` + + // Status Detail of the storage snapshot + StatusDetail string `json:"status_detail"` + + // Indicates the time that the current view of the storage volume snapshot was generated. + StatusTimestamp string `json:"status_timestamp"` + + // Array of tags for the snapshot + Tags []string `json:"tags,omitempty"` + + // Uniform Resource Identifier + URI string `json:"uri"` + + // Name of the parent storage volume for the snapshot + Volume string `json:"volume"` +} + +// CreateStorageVolumeSnapshotInput represents the body of an API request to create a new storage volume snapshot +type CreateStorageVolumeSnapshotInput struct { + // Description of the snapshot + // Optional + Description string `json:"description,omitempty"` + + // Name of the snapshot + // Optional, will be generated if not specified + Name string `json:"name,omitempty"` + + // Whether or not the parent volume is bootable + // Optional + ParentVolumeBootable string `json:"parent_volume_bootable,omitempty"` + + // Whether collocated or remote + // Optional, will be remote if unspecified + Property string `json:"property,omitempty"` + + // Array of tags for the snapshot + // Optional + Tags []string `json:"tags,omitempty"` + + // Name of the volume to create the snapshot from + // Required + Volume string `json:"volume"` + + // Timeout to wait for snapshot to be completed. 
Will use default if unspecified + Timeout time.Duration `json:"-"` +} + +// CreateStorageVolumeSnapshot creates a snapshot based on the supplied information struct +func (c *StorageVolumeSnapshotClient) CreateStorageVolumeSnapshot(input *CreateStorageVolumeSnapshotInput) (*StorageVolumeSnapshotInfo, error) { + if input.Name != "" { + input.Name = c.getQualifiedName(input.Name) + } + input.Volume = c.getQualifiedName(input.Volume) + + var storageSnapshotInfo StorageVolumeSnapshotInfo + if err := c.createResource(&input, &storageSnapshotInfo); err != nil { + return nil, err + } + + if input.Timeout == 0 { + input.Timeout = WaitForSnapshotCreateTimeout + } + + // The name of the snapshot could have been generated. Use the response name as input + return c.waitForStorageSnapshotAvailable(storageSnapshotInfo.Name, input.Timeout) +} + +// GetStorageVolumeSnapshotInput represents the body of an API request to get information on a storage volume snapshot +type GetStorageVolumeSnapshotInput struct { + // Name of the snapshot + Name string `json:"name"` +} + +// GetStorageVolumeSnapshot makes an API request to populate information on a storage volume snapshot +func (c *StorageVolumeSnapshotClient) GetStorageVolumeSnapshot(input *GetStorageVolumeSnapshotInput) (*StorageVolumeSnapshotInfo, error) { + var storageSnapshot StorageVolumeSnapshotInfo + input.Name = c.getQualifiedName(input.Name) + if err := c.getResource(input.Name, &storageSnapshot); err != nil { + if client.WasNotFoundError(err) { + return nil, nil + } + + return nil, err + } + return c.success(&storageSnapshot) +} + +// DeleteStorageVolumeSnapshotInput represents the body of an API request to delete a storage volume snapshot +type DeleteStorageVolumeSnapshotInput struct { + // Name of the snapshot to delete + Name string `json:"name"` + + // Timeout to wait for deletion, will use default if unspecified + Timeout time.Duration `json:"-"` +} + +// DeleteStoragevolumeSnapshot makes an API request to delete a storage volume snapshot +func (c *StorageVolumeSnapshotClient) DeleteStorageVolumeSnapshot(input *DeleteStorageVolumeSnapshotInput) error { + input.Name = c.getQualifiedName(input.Name) + + if err := c.deleteResource(input.Name); err != nil { + return err + } + + if input.Timeout == 0 { + input.Timeout = WaitForSnapshotDeleteTimeout + } + + return c.waitForStorageSnapshotDeleted(input.Name, input.Timeout) +} + +func (c *StorageVolumeSnapshotClient) success(result *StorageVolumeSnapshotInfo) (*StorageVolumeSnapshotInfo, error) { + c.unqualify(&result.Name) + c.unqualify(&result.Volume) + + sizeInGigaBytes, err := sizeInGigaBytes(result.Size) + if err != nil { + return nil, err + } + result.Size = sizeInGigaBytes + + return result, nil +} + +// Waits for a storage snapshot to become available +func (c *StorageVolumeSnapshotClient) waitForStorageSnapshotAvailable(name string, timeout time.Duration) (*StorageVolumeSnapshotInfo, error) { + var result *StorageVolumeSnapshotInfo + + err := c.client.WaitFor( + fmt.Sprintf("storage volume snapshot %s to become available", c.getQualifiedName(name)), + timeout, + func() (bool, error) { + req := &GetStorageVolumeSnapshotInput{ + Name: name, + } + res, err := c.GetStorageVolumeSnapshot(req) + if err != nil { + return false, err + } + + if res != nil { + result = res + if strings.ToLower(result.Status) == "completed" { + return true, nil + } else if strings.ToLower(result.Status) == "error" { + return false, fmt.Errorf("Snapshot '%s' failed to create successfully. 
Status: %s Status Detail: %s", result.Name, result.Status, result.StatusDetail) + } + } + + return false, nil + }) + + return result, err +} + +// Waits for a storage snapshot to be deleted +func (c *StorageVolumeSnapshotClient) waitForStorageSnapshotDeleted(name string, timeout time.Duration) error { + return c.client.WaitFor( + fmt.Sprintf("storage volume snapshot %s to be deleted", c.getQualifiedName(name)), + timeout, + func() (bool, error) { + req := &GetStorageVolumeSnapshotInput{ + Name: name, + } + res, err := c.GetStorageVolumeSnapshot(req) + if res == nil { + return true, nil + } + + if err != nil { + return false, err + } + + return res == nil, nil + }) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volumes.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volumes.go new file mode 100644 index 000000000..c479638de --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/storage_volumes.go @@ -0,0 +1,389 @@ +package compute + +import ( + "fmt" + "strconv" + "strings" + "time" + + "github.com/hashicorp/go-oracle-terraform/client" +) + +const WaitForVolumeReadyTimeout = time.Duration(600 * time.Second) +const WaitForVolumeDeleteTimeout = time.Duration(600 * time.Second) + +// StorageVolumeClient is a client for the Storage Volume functions of the Compute API. +type StorageVolumeClient struct { + ResourceClient +} + +// StorageVolumes obtains a StorageVolumeClient which can be used to access to the +// Storage Volume functions of the Compute API +func (c *ComputeClient) StorageVolumes() *StorageVolumeClient { + return &StorageVolumeClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "storage volume", + ContainerPath: "/storage/volume/", + ResourceRootPath: "/storage/volume", + }} + +} + +type StorageVolumeKind string + +const ( + StorageVolumeKindDefault StorageVolumeKind = "/oracle/public/storage/default" + StorageVolumeKindLatency StorageVolumeKind = "/oracle/public/storage/latency" + StorageVolumeKindSSD StorageVolumeKind = "/oracle/public/storage/ssd/gpl" +) + +// StorageVolumeInfo represents information retrieved from the service about a Storage Volume. +type StorageVolumeInfo struct { + // Shows the default account for your identity domain. + Account string `json:"account,omitempty"` + + // true indicates that the storage volume can also be used as a boot disk for an instance. + // If you set the value to true, then you must specify values for the `ImageList` and `ImageListEntry` fields. + Bootable bool `json:"bootable,omitempty"` + + // The description of the storage volume. + Description string `json:"description,omitempty"` + + // The hypervisor that this volume is compatible with. + Hypervisor string `json:"hypervisor,omitempty"` + + // Name of machine image to extract onto this volume when created. This information is provided only for bootable storage volumes. + ImageList string `json:"imagelist,omitempty"` + + // Specific imagelist entry version to extract. + ImageListEntry int `json:"imagelist_entry,omitempty"` + + // Three-part name of the machine image. This information is available if the volume is a bootable storage volume. + MachineImage string `json:"machineimage_name,omitempty"` + + // All volumes are managed volumes. Default value is true. + Managed bool `json:"managed,omitempty"` + + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` + + // The OS platform this volume is compatible with. 
+ Platform string `json:"platform,omitempty"` + + // The storage-pool property: /oracle/public/storage/latency or /oracle/public/storage/default. + Properties []string `json:"properties,omitempty"` + + // Boolean field indicating whether this volume can be attached as readonly. If set to False the volume will be attached as read-write. + ReadOnly bool `json:"readonly,omitempty"` + + // The size of this storage volume in GB. + Size string `json:"size"` + + // Name of the parent snapshot from which the storage volume is restored or cloned. + Snapshot string `json:"snapshot,omitempty"` + + // Account of the parent snapshot from which the storage volume is restored. + SnapshotAccount string `json:"snapshot_account,omitempty"` + + // Id of the parent snapshot from which the storage volume is restored or cloned. + SnapshotID string `json:"snapshot_id,omitempty"` + + // TODO: this should become a Constant, if/when we have the values + // The current state of the storage volume. + Status string `json:"status,omitempty"` + + // Details about the latest state of the storage volume. + StatusDetail string `json:"status_detail,omitempty"` + + // It indicates the time that the current view of the storage volume was generated. + StatusTimestamp string `json:"status_timestamp,omitempty"` + + // The storage pool from which this volume is allocated. + StoragePool string `json:"storage_pool,omitempty"` + + // Comma-separated strings that tag the storage volume. + Tags []string `json:"tags,omitempty"` + + // Uniform Resource Identifier + URI string `json:"uri,omitempty"` +} + +func (c *StorageVolumeClient) getStorageVolumePath(name string) string { + return c.getObjectPath("/storage/volume", name) +} + +// CreateStorageVolumeInput represents the body of an API request to create a new Storage Volume. +type CreateStorageVolumeInput struct { + // true indicates that the storage volume can also be used as a boot disk for an instance. + // If you set the value to true, then you must specify values for the `ImageList` and `ImageListEntry` fields. + Bootable bool `json:"bootable,omitempty"` + + // The description of the storage volume. + Description string `json:"description,omitempty"` + + // Name of machine image to extract onto this volume when created. This information is provided only for bootable storage volumes. + ImageList string `json:"imagelist,omitempty"` + + // Specific imagelist entry version to extract. + ImageListEntry int `json:"imagelist_entry,omitempty"` + + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` + + // The storage-pool property: /oracle/public/storage/latency or /oracle/public/storage/default. + Properties []string `json:"properties,omitempty"` + + // The size of this storage volume in GB. + Size string `json:"size"` + + // Name of the parent snapshot from which the storage volume is restored or cloned. + Snapshot string `json:"snapshot,omitempty"` + + // Account of the parent snapshot from which the storage volume is restored. + SnapshotAccount string `json:"snapshot_account,omitempty"` + + // Id of the parent snapshot from which the storage volume is restored or cloned. + SnapshotID string `json:"snapshot_id,omitempty"` + + // Comma-separated strings that tag the storage volume. + Tags []string `json:"tags,omitempty"` + + // Timeout to wait for storage volume creation. + Timeout time.Duration `json:"-"` +} + +// CreateStorageVolume uses the given CreateStorageVolumeInput to create a new Storage Volume. 
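+//
+// A minimal usage sketch (illustrative; the client variable, volume name, and
+// size below are assumptions, not values defined by this package):
+//
+//	volumes := computeClient.StorageVolumes()
+//	volume, err := volumes.CreateStorageVolume(&CreateStorageVolumeInput{
+//		Name:       "example-volume",
+//		Size:       "10", // treated as GB and converted to bytes before the API call
+//		Properties: []string{string(StorageVolumeKindDefault)},
+//	})
+//	if err != nil {
+//		// handle the error
+//	}
+//	_ = volume.Size // reported back in GB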
+func (c *StorageVolumeClient) CreateStorageVolume(input *CreateStorageVolumeInput) (*StorageVolumeInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.ImageList = c.getQualifiedName(input.ImageList) + + sizeInBytes, err := sizeInBytes(input.Size) + if err != nil { + return nil, err + } + input.Size = sizeInBytes + + var storageInfo StorageVolumeInfo + if err := c.createResource(&input, &storageInfo); err != nil { + return nil, err + } + + // Should never be nil, as we set this in the provider; but protect against it anyways. + if input.Timeout == 0 { + input.Timeout = WaitForVolumeReadyTimeout + } + + volume, err := c.waitForStorageVolumeToBecomeAvailable(input.Name, input.Timeout) + if err != nil { + if volume != nil { + deleteInput := &DeleteStorageVolumeInput{ + Name: volume.Name, + } + + if err := c.DeleteStorageVolume(deleteInput); err != nil { + return nil, err + } + } + } + return volume, err +} + +// DeleteStorageVolumeInput represents the body of an API request to delete a Storage Volume. +type DeleteStorageVolumeInput struct { + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` + + // Timeout to wait for storage volume deletion + Timeout time.Duration `json:"-"` +} + +// DeleteStorageVolume deletes the specified storage volume. +func (c *StorageVolumeClient) DeleteStorageVolume(input *DeleteStorageVolumeInput) error { + if err := c.deleteResource(input.Name); err != nil { + return err + } + + // Should never be nil, but protect against it anyways + if input.Timeout == 0 { + input.Timeout = WaitForVolumeDeleteTimeout + } + + return c.waitForStorageVolumeToBeDeleted(input.Name, input.Timeout) +} + +// GetStorageVolumeInput represents the body of an API request to obtain a Storage Volume. +type GetStorageVolumeInput struct { + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` +} + +func (c *StorageVolumeClient) success(result *StorageVolumeInfo) (*StorageVolumeInfo, error) { + c.unqualify(&result.ImageList) + c.unqualify(&result.Name) + c.unqualify(&result.Snapshot) + + sizeInMegaBytes, err := sizeInGigaBytes(result.Size) + if err != nil { + return nil, err + } + result.Size = sizeInMegaBytes + + return result, nil +} + +// GetStorageVolume gets Storage Volume information for the specified storage volume. +func (c *StorageVolumeClient) GetStorageVolume(input *GetStorageVolumeInput) (*StorageVolumeInfo, error) { + var storageVolume StorageVolumeInfo + if err := c.getResource(input.Name, &storageVolume); err != nil { + if client.WasNotFoundError(err) { + return nil, nil + } + + return nil, err + } + + return c.success(&storageVolume) +} + +// UpdateStorageVolumeInput represents the body of an API request to update a Storage Volume. +type UpdateStorageVolumeInput struct { + // The description of the storage volume. + Description string `json:"description,omitempty"` + + // Name of machine image to extract onto this volume when created. This information is provided only for bootable storage volumes. + ImageList string `json:"imagelist,omitempty"` + + // Specific imagelist entry version to extract. + ImageListEntry int `json:"imagelist_entry,omitempty"` + + // The three-part name of the object (/Compute-identity_domain/user/object). + Name string `json:"name"` + + // The storage-pool property: /oracle/public/storage/latency or /oracle/public/storage/default. + Properties []string `json:"properties,omitempty"` + + // The size of this storage volume in GB. 
+ Size string `json:"size"` + + // Name of the parent snapshot from which the storage volume is restored or cloned. + Snapshot string `json:"snapshot,omitempty"` + + // Account of the parent snapshot from which the storage volume is restored. + SnapshotAccount string `json:"snapshot_account,omitempty"` + + // Id of the parent snapshot from which the storage volume is restored or cloned. + SnapshotID string `json:"snapshot_id,omitempty"` + + // Comma-separated strings that tag the storage volume. + Tags []string `json:"tags,omitempty"` + + // Time to wait for storage volume ready + Timeout time.Duration `json:"-"` +} + +// UpdateStorageVolume updates the specified storage volume, optionally modifying size, description and tags. +func (c *StorageVolumeClient) UpdateStorageVolume(input *UpdateStorageVolumeInput) (*StorageVolumeInfo, error) { + input.Name = c.getQualifiedName(input.Name) + input.ImageList = c.getQualifiedName(input.ImageList) + + sizeInBytes, err := sizeInBytes(input.Size) + if err != nil { + return nil, err + } + input.Size = sizeInBytes + + path := c.getStorageVolumePath(input.Name) + _, err = c.executeRequest("PUT", path, input) + if err != nil { + return nil, err + } + + if input.Timeout == 0 { + input.Timeout = WaitForVolumeReadyTimeout + } + + volumeInfo, err := c.waitForStorageVolumeToBecomeAvailable(input.Name, input.Timeout) + if err != nil { + return nil, err + } + + return volumeInfo, nil +} + +// waitForStorageVolumeToBecomeAvailable waits until a new Storage Volume is available (i.e. has finished initialising or updating). +func (c *StorageVolumeClient) waitForStorageVolumeToBecomeAvailable(name string, timeout time.Duration) (*StorageVolumeInfo, error) { + var waitResult *StorageVolumeInfo + + err := c.client.WaitFor( + fmt.Sprintf("storage volume %s to become available", c.getQualifiedName(name)), + timeout, + func() (bool, error) { + getRequest := &GetStorageVolumeInput{ + Name: name, + } + result, err := c.GetStorageVolume(getRequest) + + if err != nil { + return false, err + } + + if result != nil { + waitResult = result + if strings.ToLower(waitResult.Status) == "online" { + return true, nil + } + if strings.ToLower(waitResult.Status) == "error" { + return false, fmt.Errorf("Error Creating Storage Volume: %s", waitResult.StatusDetail) + } + } + + return false, nil + }) + + return waitResult, err +} + +// waitForStorageVolumeToBeDeleted waits until the specified storage volume has been deleted. 
+func (c *StorageVolumeClient) waitForStorageVolumeToBeDeleted(name string, timeout time.Duration) error { + return c.client.WaitFor( + fmt.Sprintf("storage volume %s to be deleted", c.getQualifiedName(name)), + timeout, + func() (bool, error) { + getRequest := &GetStorageVolumeInput{ + Name: name, + } + result, err := c.GetStorageVolume(getRequest) + if result == nil { + return true, nil + } + + if err != nil { + return false, err + } + + return result == nil, nil + }) +} + +func sizeInGigaBytes(input string) (string, error) { + sizeInBytes, err := strconv.Atoi(input) + if err != nil { + return "", err + } + sizeInKB := sizeInBytes / 1024 + sizeInMB := sizeInKB / 1024 + sizeInGb := sizeInMB / 1024 + return strconv.Itoa(sizeInGb), nil +} + +func sizeInBytes(input string) (string, error) { + sizeInGB, err := strconv.Atoi(input) + if err != nil { + return "", err + } + sizeInMB := sizeInGB * 1024 + sizeInKB := sizeInMB * 1024 + sizeInBytes := sizeInKB * 1024 + return strconv.Itoa(sizeInBytes), nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/test_utils.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/test_utils.go new file mode 100644 index 000000000..5667e097e --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/test_utils.go @@ -0,0 +1,119 @@ +package compute + +import ( + "bytes" + "encoding/json" + "log" + "net/http" + "net/http/httptest" + "net/url" + "os" + "testing" + "time" + + "github.com/hashicorp/go-oracle-terraform/opc" +) + +const ( + _ClientTestUser = "test-user" + _ClientTestDomain = "test-domain" +) + +func newAuthenticatingServer(handler func(w http.ResponseWriter, r *http.Request)) *httptest.Server { + return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + if os.Getenv("ORACLE_LOG") != "" { + log.Printf("[DEBUG] Received request: %s, %s\n", r.Method, r.URL) + } + + if r.URL.Path == "/authenticate/" { + http.SetCookie(w, &http.Cookie{Name: "testAuthCookie", Value: "cookie value"}) + // w.WriteHeader(200) + } else { + handler(w, r) + } + })) +} + +func getTestClient(c *opc.Config) (*ComputeClient, error) { + // Build up config with default values if omitted + if c.APIEndpoint == nil { + if os.Getenv("OPC_ENDPOINT") == "" { + panic("OPC_ENDPOINT not set in environment") + } + endpoint, err := url.Parse(os.Getenv("OPC_ENDPOINT")) + if err != nil { + return nil, err + } + c.APIEndpoint = endpoint + } + + if c.IdentityDomain == nil { + domain := os.Getenv("OPC_IDENTITY_DOMAIN") + c.IdentityDomain = &domain + } + + if c.Username == nil { + username := os.Getenv("OPC_USERNAME") + c.Username = &username + } + + if c.Password == nil { + password := os.Getenv("OPC_PASSWORD") + c.Password = &password + } + + if c.HTTPClient == nil { + c.HTTPClient = &http.Client{ + Transport: &http.Transport{ + Proxy: http.ProxyFromEnvironment, + TLSHandshakeTimeout: 120 * time.Second}, + } + } + + return NewComputeClient(c) +} + +func getBlankTestClient() (*ComputeClient, *httptest.Server, error) { + server := newAuthenticatingServer(func(w http.ResponseWriter, r *http.Request) { + }) + + endpoint, err := url.Parse(server.URL) + if err != nil { + server.Close() + return nil, nil, err + } + + client, err := getTestClient(&opc.Config{ + IdentityDomain: opc.String(_ClientTestDomain), + Username: opc.String(_ClientTestUser), + APIEndpoint: endpoint, + }) + if err != nil { + server.Close() + return nil, nil, err + } + return client, server, nil +} + +// Returns a stub client with default values, and a custom 
API Endpoint +func getStubClient(endpoint *url.URL) (*ComputeClient, error) { + domain := "test" + username := "test" + password := "test" + config := &opc.Config{ + IdentityDomain: &domain, + Username: &username, + Password: &password, + APIEndpoint: endpoint, + } + return getTestClient(config) +} + +func unmarshalRequestBody(t *testing.T, r *http.Request, target interface{}) { + buf := new(bytes.Buffer) + buf.ReadFrom(r.Body) + err := json.Unmarshal(buf.Bytes(), target) + if err != nil { + t.Fatalf("Error marshalling request: %s", err) + } +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/virtual_nic.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/virtual_nic.go new file mode 100644 index 000000000..44509d7c9 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/virtual_nic.go @@ -0,0 +1,52 @@ +package compute + +type VirtNICsClient struct { + ResourceClient +} + +func (c *ComputeClient) VirtNICs() *VirtNICsClient { + return &VirtNICsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "Virtual NIC", + ContainerPath: "/network/v1/vnic/", + ResourceRootPath: "/network/v1/vnic", + }, + } +} + +type VirtualNIC struct { + // Description of the object. + Description string `json:"description"` + // MAC address of this VNIC. + MACAddress string `json:"macAddress"` + // The three-part name (/Compute-identity_domain/user/object) of the Virtual NIC. + Name string `json:"name"` + // Tags associated with the object. + Tags []string `json:"tags"` + // True if the VNIC is of type "transit". + TransitFlag bool `json:"transitFlag"` + // Uniform Resource Identifier + Uri string `json:"uri"` +} + +// Can only GET a virtual NIC, not update, create, or delete +type GetVirtualNICInput struct { + // The three-part name (/Compute-identity_domain/user/object) of the Virtual NIC. + // Required + Name string `json:"name"` +} + +func (c *VirtNICsClient) GetVirtualNIC(input *GetVirtualNICInput) (*VirtualNIC, error) { + var virtNIC VirtualNIC + input.Name = c.getQualifiedName(input.Name) + if err := c.getResource(input.Name, &virtNIC); err != nil { + return nil, err + } + return c.success(&virtNIC) +} + +func (c *VirtNICsClient) success(info *VirtualNIC) (*VirtualNIC, error) { + c.unqualify(&info.Name) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/compute/virtual_nic_sets.go b/vendor/github.com/hashicorp/go-oracle-terraform/compute/virtual_nic_sets.go new file mode 100644 index 000000000..85762e52f --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/compute/virtual_nic_sets.go @@ -0,0 +1,154 @@ +package compute + +type VirtNICSetsClient struct { + ResourceClient +} + +func (c *ComputeClient) VirtNICSets() *VirtNICSetsClient { + return &VirtNICSetsClient{ + ResourceClient: ResourceClient{ + ComputeClient: c, + ResourceDescription: "Virtual NIC Set", + ContainerPath: "/network/v1/vnicset/", + ResourceRootPath: "/network/v1/vnicset", + }, + } +} + +// Describes an existing virtual nic set +type VirtualNICSet struct { + // List of ACLs applied to the VNICs in the set. + AppliedACLs []string `json:"appliedAcls"` + // Description of the VNIC Set. + Description string `json:"description"` + // Name of the VNIC set. + Name string `json:"name"` + // The three-part name (/Compute-identity_domain/user/object) of the virtual NIC set. + Tags []string `json:"tags"` + // Uniform Resource Identifier + Uri string `json:"uri"` + // List of VNICs associated with this VNIC set. 
+ VirtualNICs []string `json:"vnics"` +} + +type CreateVirtualNICSetInput struct { + // List of ACLs applied to the VNICs in the set. + // Optional + AppliedACLs []string `json:"appliedAcls"` + // Description of the object. + // Optional + Description string `json:"description"` + // The three-part name (/Compute-identity_domain/user/object) of the virtual NIC set. + // Object names can contain only alphanumeric, underscore (_), dash (-), and period (.) characters. Object names are case-sensitive. + // Required + Name string `json:"name"` + // Tags associated with this VNIC set. + // Optional + Tags []string `json:"tags"` + // List of VNICs associated with this VNIC set. + // Optional + VirtualNICs []string `json:"vnics"` +} + +func (c *VirtNICSetsClient) CreateVirtualNICSet(input *CreateVirtualNICSetInput) (*VirtualNICSet, error) { + input.Name = c.getQualifiedName(input.Name) + input.AppliedACLs = c.getQualifiedAcls(input.AppliedACLs) + qualifiedNics := c.getQualifiedList(input.VirtualNICs) + if len(qualifiedNics) != 0 { + input.VirtualNICs = qualifiedNics + } + + var virtNicSet VirtualNICSet + if err := c.createResource(input, &virtNicSet); err != nil { + return nil, err + } + + return c.success(&virtNicSet) +} + +type GetVirtualNICSetInput struct { + // The three-part name (/Compute-identity_domain/user/object) of the virtual NIC set. + // Required + Name string `json:"name"` +} + +func (c *VirtNICSetsClient) GetVirtualNICSet(input *GetVirtualNICSetInput) (*VirtualNICSet, error) { + var virtNicSet VirtualNICSet + // Qualify Name + input.Name = c.getQualifiedName(input.Name) + if err := c.getResource(input.Name, &virtNicSet); err != nil { + return nil, err + } + + return c.success(&virtNicSet) +} + +type UpdateVirtualNICSetInput struct { + // List of ACLs applied to the VNICs in the set. + // Optional + AppliedACLs []string `json:"appliedAcls"` + // Description of the object. + // Optional + Description string `json:"description"` + // The three-part name (/Compute-identity_domain/user/object) of the virtual NIC set. + // Object names can contain only alphanumeric, underscore (_), dash (-), and period (.) characters. Object names are case-sensitive. + // Required + Name string `json:"name"` + // Tags associated with this VNIC set. + // Optional + Tags []string `json:"tags"` + // List of VNICs associated with this VNIC set. + // Optional + VirtualNICs []string `json:"vnics"` +} + +func (c *VirtNICSetsClient) UpdateVirtualNICSet(input *UpdateVirtualNICSetInput) (*VirtualNICSet, error) { + input.Name = c.getQualifiedName(input.Name) + input.AppliedACLs = c.getQualifiedAcls(input.AppliedACLs) + // Qualify VirtualNICs + qualifiedVNICs := c.getQualifiedList(input.VirtualNICs) + if len(qualifiedVNICs) != 0 { + input.VirtualNICs = qualifiedVNICs + } + + var virtNICSet VirtualNICSet + if err := c.updateResource(input.Name, input, &virtNICSet); err != nil { + return nil, err + } + + return c.success(&virtNICSet) +} + +type DeleteVirtualNICSetInput struct { + // The name of the virtual NIC set. 
+ // Required + Name string `json:"name"` +} + +func (c *VirtNICSetsClient) DeleteVirtualNICSet(input *DeleteVirtualNICSetInput) error { + input.Name = c.getQualifiedName(input.Name) + return c.deleteResource(input.Name) +} + +func (c *VirtNICSetsClient) getQualifiedAcls(acls []string) []string { + qualifiedAcls := []string{} + for _, acl := range acls { + qualifiedAcls = append(qualifiedAcls, c.getQualifiedName(acl)) + } + return qualifiedAcls +} + +func (c *VirtNICSetsClient) unqualifyAcls(acls []string) []string { + unqualifiedAcls := []string{} + for _, acl := range acls { + unqualifiedAcls = append(unqualifiedAcls, c.getUnqualifiedName(acl)) + } + return unqualifiedAcls +} + +func (c *VirtNICSetsClient) success(info *VirtualNICSet) (*VirtualNICSet, error) { + c.unqualify(&info.Name) + info.AppliedACLs = c.unqualifyAcls(info.AppliedACLs) + info.VirtualNICs = c.getUnqualifiedList(info.VirtualNICs) + return info, nil +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/opc/config.go b/vendor/github.com/hashicorp/go-oracle-terraform/opc/config.go new file mode 100644 index 000000000..5c144a66d --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/opc/config.go @@ -0,0 +1,22 @@ +package opc + +import ( + "net/http" + "net/url" +) + +type Config struct { + Username *string + Password *string + IdentityDomain *string + APIEndpoint *url.URL + MaxRetries *int + LogLevel LogLevelType + Logger Logger + HTTPClient *http.Client + UserAgent *string +} + +func NewConfig() *Config { + return &Config{} +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/opc/convert.go b/vendor/github.com/hashicorp/go-oracle-terraform/opc/convert.go new file mode 100644 index 000000000..52fa08902 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/opc/convert.go @@ -0,0 +1,9 @@ +package opc + +func String(v string) *string { + return &v +} + +func Int(v int) *int { + return &v +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/opc/errors.go b/vendor/github.com/hashicorp/go-oracle-terraform/opc/errors.go new file mode 100644 index 000000000..6b12c10d9 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/opc/errors.go @@ -0,0 +1,12 @@ +package opc + +import "fmt" + +type OracleError struct { + StatusCode int + Message string +} + +func (e OracleError) Error() string { + return fmt.Sprintf("%d: %s", e.StatusCode, e.Message) +} diff --git a/vendor/github.com/hashicorp/go-oracle-terraform/opc/logger.go b/vendor/github.com/hashicorp/go-oracle-terraform/opc/logger.go new file mode 100644 index 000000000..f9714a7a8 --- /dev/null +++ b/vendor/github.com/hashicorp/go-oracle-terraform/opc/logger.go @@ -0,0 +1,70 @@ +package opc + +import ( + "io" + "io/ioutil" + "log" + "os" +) + +const ( + LogOff LogLevelType = 0 + LogDebug LogLevelType = 1 +) + +type LogLevelType uint + +// Logger interface. Should be satisfied by Terraform's logger as well as the Default logger +type Logger interface { + Log(...interface{}) +} + +type LoggerFunc func(...interface{}) + +func (f LoggerFunc) Log(args ...interface{}) { + f(args...) 
+}
+
+// Returns a default logger if one isn't specified during configuration
+func NewDefaultLogger() Logger {
+	logWriter, err := LogOutput()
+	if err != nil {
+		log.Fatalf("Error setting up log writer: %s", err)
+	}
+	return &defaultLogger{
+		logger: log.New(logWriter, "", log.LstdFlags),
+	}
+}
+
+// Default logger to satisfy the logger interface
+type defaultLogger struct {
+	logger *log.Logger
+}
+
+func (l defaultLogger) Log(args ...interface{}) {
+	l.logger.Println(args...)
+}
+
+func LogOutput() (logOutput io.Writer, err error) {
+	// Default to nil
+	logOutput = ioutil.Discard
+
+	logLevel := LogLevel()
+	if logLevel == LogOff {
+		return
+	}
+
+	// Logging is on, set output to STDERR
+	logOutput = os.Stderr
+	return
+}
+
+// Gets current Log Level from the ORACLE_LOG env var
+func LogLevel() LogLevelType {
+	envLevel := os.Getenv("ORACLE_LOG")
+	if envLevel == "" {
+		return LogOff
+	} else {
+		return LogDebug
+	}
+}
diff --git a/vendor/github.com/joyent/triton-go/CHANGELOG.md b/vendor/github.com/joyent/triton-go/CHANGELOG.md
new file mode 100644
index 000000000..7b2df2e8a
--- /dev/null
+++ b/vendor/github.com/joyent/triton-go/CHANGELOG.md
@@ -0,0 +1,64 @@
+## Unreleased
+
+- Add support for managing volumes in Triton [#100]
+- identity/policies: Add support for managing policies in Triton [#86]
+- Addition of triton-go errors package to expose unwrapping of internal errors
+- Migration from hashicorp/errwrap to pkg/errors
+- Using path.Join() for URL structures rather than fmt.Sprintf()
+
+## 0.5.2 (December 28)
+
+- Standardise the API SSH Signers input casing and naming
+
+## 0.5.1 (December 28)
+
+- Include leading '/' when working with SSH Agent signers
+
+## 0.5.0 (December 28)
+
+- Add support for RBAC in triton-go [#82]
+This is a breaking change. No longer do we pass individual parameters to the SSH Signer funcs, but we now pass an input Struct. This will guard against additional parameter changes in the future.
+We also now add support for using `SDC_*` and `TRITON_*` env vars when working with the Default agent signer
+
+## 0.4.2 (December 22)
+
+- Fixing a panic when the user loses network connectivity when making a GET request to instance [#81]
+
+## 0.4.1 (December 15)
+
+- Clean up the handling of directory sanitization. Use abs paths everywhere [#79]
+
+## 0.4.0 (December 15)
+
+- Fix an issue where Manta HEAD requests do not return an error resp body [#77]
+- Add support for recursively creating child directories [#78]
+
+## 0.3.0 (December 14)
+
+- Introduce CloudAPI's ListRulesMachines under networking
+- Enable HTTP KeepAlives by default in the client. 15s idle timeout, 2x
+  connections per host, total of 10x connections per client.
+- Expose an optional Headers attribute to clients to allow them to customize
+  HTTP headers when making Object requests.
+- Fix a bug in Directory ListIndex [#69](https://github.com/joyent/issues/69)
+- Inputs to Object inputs have been relaxed to `io.Reader` (formerly a
+  `io.ReadSeeker`) [#73](https://github.com/joyent/issues/73).
+- Add support for ForceDelete of all children of a directory [#71](https://github.com/joyent/issues/71) +- storage: Introduce `Objects.GetInfo` and `Objects.IsDir` using HEAD requests [#74](https://github.com/joyent/triton-go/issues/74) + +## 0.2.1 (November 8) + +- Fixing a bug where CreateUser and UpdateUser didn't return the UserID + +## 0.2.0 (November 7) + +- Introduce CloudAPI's Ping under compute +- Introduce CloudAPI's RebootMachine under compute instances +- Introduce CloudAPI's ListUsers, GetUser, CreateUser, UpdateUser and DeleteUser under identity package +- Introduce CloudAPI's ListMachineSnapshots, GetMachineSnapshot, CreateSnapshot, DeleteMachineSnapshot and StartMachineFromSnapshot under compute package +- tools: Introduce unit testing and scripts for linting, etc. +- bug: Fix the `compute.ListMachineRules` endpoint + +## 0.1.0 (November 2) + +- Initial release of a versioned SDK diff --git a/vendor/github.com/joyent/triton-go/GNUmakefile b/vendor/github.com/joyent/triton-go/GNUmakefile new file mode 100644 index 000000000..f996b9951 --- /dev/null +++ b/vendor/github.com/joyent/triton-go/GNUmakefile @@ -0,0 +1,46 @@ +TEST?=$$(go list ./... |grep -Ev 'vendor|examples|testutils') +GOFMT_FILES?=$$(find . -name '*.go' |grep -v vendor) + +default: vet errcheck test + +tools:: ## Download and install all dev/code tools + @echo "==> Installing dev tools" + go get -u github.com/golang/dep/cmd/dep + go get -u github.com/golang/lint/golint + go get -u github.com/kisielk/errcheck + @echo "==> Installing test package dependencies" + go test -i $(TEST) || exit 1 + +test:: ## Run unit tests + @echo "==> Running unit test with coverage" + @./scripts/go-test-with-coverage.sh + +testacc:: ## Run acceptance tests + @echo "==> Running acceptance tests" + TRITON_TEST=1 go test $(TEST) -v $(TESTARGS) -run -timeout 120m + +vet:: ## Check for unwanted code constructs + @echo "go vet ." + @go vet $$(go list ./... | grep -v vendor/) ; if [ $$? -eq 1 ]; then \ + echo ""; \ + echo "Vet found suspicious constructs. Please check the reported constructs"; \ + echo "and fix them if necessary before submitting the code for review."; \ + exit 1; \ + fi + +lint:: ## Lint and vet code by common Go standards + @bash $(CURDIR)/scripts/lint.sh + +fmt:: ## Format as canonical Go code + gofmt -w $(GOFMT_FILES) + +fmtcheck:: ## Check if code format is canonical Go + @bash $(CURDIR)/scripts/gofmtcheck.sh + +errcheck:: ## Check for unhandled errors + @bash $(CURDIR)/scripts/errcheck.sh + +.PHONY: help +help:: ## Display this help message + @echo "GNU make(1) targets:" + @grep -E '^[a-zA-Z_.-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-15s\033[0m %s\n", $$1, $$2}' diff --git a/vendor/github.com/joyent/triton-go/Gopkg.lock b/vendor/github.com/joyent/triton-go/Gopkg.lock new file mode 100644 index 000000000..858574604 --- /dev/null +++ b/vendor/github.com/joyent/triton-go/Gopkg.lock @@ -0,0 +1,33 @@ +# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'. 
+ + +[[projects]] + branch = "master" + name = "github.com/abdullin/seq" + packages = ["."] + revision = "d5467c17e7afe8d8f08f556c6c811a50c3feb28d" + +[[projects]] + branch = "master" + name = "github.com/pkg/errors" + packages = ["."] + revision = "e881fd58d78e04cf6d0de1217f8707c8cc2249bc" + +[[projects]] + branch = "master" + name = "github.com/sean-/seed" + packages = ["."] + revision = "e2103e2c35297fb7e17febb81e49b312087a2372" + +[[projects]] + branch = "master" + name = "golang.org/x/crypto" + packages = ["curve25519","ed25519","ed25519/internal/edwards25519","ssh","ssh/agent"] + revision = "0fcca4842a8d74bfddc2c96a073bd2a4d2a7a2e8" + +[solve-meta] + analyzer-name = "dep" + analyzer-version = 1 + inputs-digest = "f7efd974ae38e2ee077c4d2698df74128a04797460b5f9c833853ddfaa86a0a0" + solver-name = "gps-cdcl" + solver-version = 1 diff --git a/vendor/github.com/joyent/triton-go/Gopkg.toml b/vendor/github.com/joyent/triton-go/Gopkg.toml new file mode 100644 index 000000000..3d3f322e6 --- /dev/null +++ b/vendor/github.com/joyent/triton-go/Gopkg.toml @@ -0,0 +1,38 @@ + +# Gopkg.toml example +# +# Refer to https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md +# for detailed Gopkg.toml documentation. +# +# required = ["github.com/user/thing/cmd/thing"] +# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"] +# +# [[constraint]] +# name = "github.com/user/project" +# version = "1.0.0" +# +# [[constraint]] +# name = "github.com/user/project2" +# branch = "dev" +# source = "github.com/myfork/project2" +# +# [[override]] +# name = "github.com/x/y" +# version = "2.4.0" + + +[[constraint]] + branch = "master" + name = "github.com/abdullin/seq" + +[[constraint]] + branch = "master" + name = "github.com/sean-/seed" + +[[constraint]] + branch = "master" + name = "golang.org/x/crypto" + +[[constraint]] + branch = "master" + name = "github.com/pkg/errors" diff --git a/vendor/github.com/joyent/triton-go/README.md b/vendor/github.com/joyent/triton-go/README.md index 1089c72da..5dbd5de0f 100644 --- a/vendor/github.com/joyent/triton-go/README.md +++ b/vendor/github.com/joyent/triton-go/README.md @@ -3,6 +3,8 @@ `triton-go` is an idiomatic library exposing a client SDK for Go applications using Joyent's Triton Compute and Storage (Manta) APIs. +[![Build Status](https://travis-ci.org/joyent/triton-go.svg?branch=master)](https://travis-ci.org/joyent/triton-go) [![Go Report Card](https://goreportcard.com/badge/github.com/joyent/triton-go)](https://goreportcard.com/report/github.com/joyent/triton-go) + ## Usage Triton uses [HTTP Signature][4] to sign the Date header in each HTTP request @@ -13,11 +15,17 @@ using a key stored with the local SSH Agent (using an [`SSHAgentSigner`][6]. To construct a Signer, use the `New*` range of methods in the `authentication` package. In the case of `authentication.NewSSHAgentSigner`, the parameters are the fingerprint of the key with which to sign, and the account name (normally -stored in the `SDC_ACCOUNT` environment variable). For example: +stored in the `TRITON_ACCOUNT` environment variable). There is also support for +passing in a username, this will allow you to use an account other than the main +Triton account. 
For example: -``` -const fingerprint := "a4:c6:f3:75:80:27:e0:03:a9:98:79:ef:c5:0a:06:11" -sshKeySigner, err := authentication.NewSSHAgentSigner(fingerprint, "AccountName") +```go +input := authentication.SSHAgentSignerInput{ + KeyID: "a4:c6:f3:75:80:27:e0:03:a9:98:79:ef:c5:0a:06:11", + AccountName: "AccountName", + Username: "Username", +} +sshKeySigner, err := authentication.NewSSHAgentSigner(input) if err != nil { log.Fatalf("NewSSHAgentSigner: %s", err) } @@ -34,17 +42,18 @@ their own seperate client. In order to initialize a package client, simply pass the global `triton.ClientConfig` struct into the client's constructor function. ```go - config := &triton.ClientConfig{ - TritonURL: os.Getenv("SDC_URL"), - MantaURL: os.Getenv("MANTA_URL"), - AccountName: accountName, - Signers: []authentication.Signer{sshKeySigner}, - } +config := &triton.ClientConfig{ + TritonURL: os.Getenv("TRITON_URL"), + MantaURL: os.Getenv("MANTA_URL"), + AccountName: accountName, + Username: os.Getenv("TRITON_USER"), + Signers: []authentication.Signer{sshKeySigner}, +} - c, err := compute.NewClient(config) - if err != nil { - log.Fatalf("compute.NewClient: %s", err) - } +c, err := compute.NewClient(config) +if err != nil { + log.Fatalf("compute.NewClient: %s", err) +} ``` Constructing `compute.Client` returns an interface which exposes `compute` API @@ -55,10 +64,10 @@ The same `triton.ClientConfig` will initialize the Manta `storage` client as well... ```go - c, err := storage.NewClient(config) - if err != nil { - log.Fatalf("storage.NewClient: %s", err) - } +c, err := storage.NewClient(config) +if err != nil { + log.Fatalf("storage.NewClient: %s", err) +} ``` ## Error Handling @@ -79,13 +88,14 @@ set: - `TRITON_TEST` - must be set to any value in order to indicate desire to create resources -- `SDC_URL` - the base endpoint for the Triton API -- `SDC_ACCOUNT` - the account name for the Triton API -- `SDC_KEY_ID` - the fingerprint of the SSH key identifying the key +- `TRITON_URL` - the base endpoint for the Triton API +- `TRITON_ACCOUNT` - the account name for the Triton API +- `TRITON_KEY_ID` - the fingerprint of the SSH key identifying the key -Additionally, you may set `SDC_KEY_MATERIAL` to the contents of an unencrypted +Additionally, you may set `TRITON_KEY_MATERIAL` to the contents of an unencrypted private key. If this is set, the PrivateKeySigner (see above) will be used - if -not the SSHAgentSigner will be used. +not the SSHAgentSigner will be used. You can also set `TRITON_USER` to run the tests +against an account other than the main Triton account ### Example Run @@ -94,9 +104,9 @@ The verbose output has been removed for brevity here. ``` $ HTTP_PROXY=http://localhost:8888 \ TRITON_TEST=1 \ - SDC_URL=https://us-sw-1.api.joyent.com \ - SDC_ACCOUNT=AccountName \ - SDC_KEY_ID=a4:c6:f3:75:80:27:e0:03:a9:98:79:ef:c5:0a:06:11 \ + TRITON_URL=https://us-sw-1.api.joyent.com \ + TRITON_ACCOUNT=AccountName \ + TRITON_KEY_ID=a4:c6:f3:75:80:27:e0:03:a9:98:79:ef:c5:0a:06:11 \ go test -v -run "TestAccKey" === RUN TestAccKey_Create --- PASS: TestAccKey_Create (12.46s) @@ -116,7 +126,7 @@ referencing your SSH key file use by your active `triton` CLI profile. 
```sh $ eval "$(triton env us-sw-1)" -$ SDC_KEY_FILE=~/.ssh/triton-id_rsa go run examples/compute/instances.go +$ TRITON_KEY_FILE=~/.ssh/triton-id_rsa go run examples/compute/instances.go ``` The following is a complete example of how to initialize the `compute` package @@ -142,15 +152,21 @@ import ( ) func main() { - keyID := os.Getenv("SDC_KEY_ID") - accountName := os.Getenv("SDC_ACCOUNT") - keyMaterial := os.Getenv("SDC_KEY_MATERIAL") + keyID := os.Getenv("TRITON_KEY_ID") + accountName := os.Getenv("TRITON_ACCOUNT") + keyMaterial := os.Getenv("TRITON_KEY_MATERIAL") + userName := os.Getenv("TRITON_USER") var signer authentication.Signer var err error if keyMaterial == "" { - signer, err = authentication.NewSSHAgentSigner(keyID, accountName) + input := authentication.SSHAgentSignerInput{ + KeyID: keyID, + AccountName: accountName, + Username: userName, + } + signer, err = authentication.NewSSHAgentSigner(input) if err != nil { log.Fatalf("Error Creating SSH Agent Signer: {{err}}", err) } @@ -178,15 +194,22 @@ func main() { keyBytes = []byte(keyMaterial) } - signer, err = authentication.NewPrivateKeySigner(keyID, []byte(keyMaterial), accountName) + input := authentication.PrivateKeySignerInput{ + KeyID: keyID, + PrivateKeyMaterial: keyBytes, + AccountName: accountName, + Username: userName, + } + signer, err = authentication.NewPrivateKeySigner(input) if err != nil { log.Fatalf("Error Creating SSH Private Key Signer: {{err}}", err) } } config := &triton.ClientConfig{ - TritonURL: os.Getenv("SDC_URL"), + TritonURL: os.Getenv("TRITON_URL"), AccountName: accountName, + Username: userName, Signers: []authentication.Signer{signer}, } diff --git a/vendor/github.com/joyent/triton-go/authentication/dummy.go b/vendor/github.com/joyent/triton-go/authentication/dummy.go new file mode 100644 index 000000000..797e0047b --- /dev/null +++ b/vendor/github.com/joyent/triton-go/authentication/dummy.go @@ -0,0 +1,80 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + +package authentication + +// DON'T USE THIS OUTSIDE TESTING ~ This key was only created to use for +// internal unit testing. It should never be used for acceptance testing either. +// +// This is just a randomly generated key pair. 
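+// The Signer field below is filled in by this package's init function with a
+// TestSigner stub, so unit tests can pass Dummy.Signer wherever an
+// authentication.Signer is expected without touching a real key or SSH agent.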
+var Dummy = struct { + Fingerprint string + PrivateKey []byte + PublicKey []byte + Signer Signer +}{ + "9f:d6:65:fc:d6:60:dc:d0:4e:db:2d:75:f7:92:8c:31", + []byte(`-----BEGIN RSA PRIVATE KEY----- +MIIJKAIBAAKCAgEAui9lNjCJahHeFSFC6HXi/CNX588C/L2gJUx65bnNphVC98hW +1wzoRvPXHx5aWnb7lEbpNhP6B0UoCBDTaPgt9hHfD/oNQ+6HT1QpDIGfZmXI91/t +cjGVSBbxN7WaYt/HsPrGjbalwvQPChN53sMVmFkMTEDR5G3zOBOAGrOimlCT80wI +2S5Xg0spd8jjKM5I1swDR0xtuDWnHTR1Ohin+pEQIE6glLTfYq7oQx6nmMXXBNmk ++SaPD1FAyjkF/81im2EHXBygNEwraVrDcAxK2mKlU2XMJiogQKNYWlm3UkbNB6WP +Le12+Ka02rmIVsSqIpc/ZCBraAlCaSWlYCkU+vJ2hH/+ypy5bXNlbaTiWZK+vuI7 +PC87T50yLNeXVuNZAynzDpBCvsjiiHrB/ZFRfVfF6PviV8CV+m7GTzfAwJhVeSbl +rR6nts16K0HTD48v57DU0b0t5VOvC7cWPShs+afdSL3Z8ReL5EWMgU1wfvtycRKe +hiDVGj3Ms2cf83RIANr387G+1LcTQYP7JJuB7Svy5j+R6+HjI0cgu4EMUPdWfCNG +GyrlxwJNtPmUSfasH1xUKpqr7dC+0sN4/gfJw75WTAYrATkPzexoYNaMsGDfhuoh +kYa3Tn2q1g3kqhsX/R0Fd5d8d5qc137qcRCxiZYz9f3bVkXQbhYmO9da3KsCAwEA +AQKCAgAeEAURqOinPddUJhi9nDtYZwSMo3piAORY4W5+pW+1P32esLSE6MqgmkLD +/YytSsT4fjKtzq/yeJIsKztXmasiLmSMGd4Gd/9VKcuu/0cTq5+1gcG/TI5EI6Az +VJlnGacOxo9E1pcRUYMUJ2zoMSvNe6NmtJivf6lkBpIKvbKlpBkfkclj9/2db4d0 +lfVH43cTZ8Gnw4l70v320z+Sb+S/qqil7swy9rmTH5bVL5/0JQ3A9LuUl0tGN+J0 +RJzZXvprCFG958leaGYiDsu7zeBQPtlfC/LYvriSd02O2SmmmVQFxg/GZK9vGsvc +/VQsXnjyOOW9bxaop8YXYELBsiB21ipTHzOwoqHT8wFnjgU9Y/7iZIv7YbZKQsCS +DrwdlZ/Yw90wiif+ldYryIVinWfytt6ERv4Dgezc98+1XPi1Z/WB74/lIaDXFl3M +3ypjtvLYbKew2IkIjeAwjvZJg/QpC/50RrrPtVDgeAI1Ni01ikixUhMYsHJ1kRih +0tqLvLqSPoHmr6luFlaoKdc2eBqb+8U6K/TrXhKtT7BeUFiSbvnVfdbrH9r+AY/2 +zYtG6llzkE5DH8ZR3Qp+dx7QEDtvYhGftWhx9uasd79AN7CuGYnL54YFLKGRrWKN +ylysqfUyOQYiitdWdNCw9PP2vGRx5JAsMMSy+ft18jjTJvNQ0QKCAQEA28M11EE6 +MpnHxfyP00Dl1+3wl2lRyNXZnZ4hgkk1f83EJGpoB2amiMTF8P1qJb7US1fXtf7l +gkJMMk6t6iccexV1/NBh/7tDZHH/v4HPirFTXQFizflaghD8dEADy9DY4BpQYFRe +8zGsv4/4U0txCXkUIfKcENt/FtXv2T9blJT6cDV0yTx9IAyd4Kor7Ly2FIYroSME +uqnOQt5PwB+2qkE+9hdg4xBhFs9sW5dvyBvQvlBfX/xOmMw2ygH6vsaJlNfZ5VPa +EP/wFP/qHyhDlCfbHdL6qF2//wUoM2QM9RgBdZNhcKU7zWuf7Ev199tmlLC5O14J +PkQxUGftMfmWxQKCAQEA2OLKD8dwOzpwGJiPQdBmGpwCamfcCY4nDwqEaCu4vY1R +OJR+rpYdC2hgl5PTXWH7qzJVdT/ZAz2xUQOgB1hD3Ltk7DQ+EZIA8+vJdaicQOme +vfpMPNDxCEX9ee0AXAmAC3aET82B4cMFnjXjl1WXLLTowF/Jp/hMorm6tl2m15A2 +oTyWlB/i/W/cxHl2HFWK7o8uCNoKpKJjheNYn+emEcH1bkwrk8sxQ78cBNmqe/gk +MLgu8qfXQ0LLKIL7wqmIUHeUpkepOod8uXcTmmN2X9saCIwFKx4Jal5hh5v5cy0G +MkyZcUIhhnmzr7lXbepauE5V2Sj5Qp040AfRVjZcrwKCAQANe8OwuzPL6P2F20Ij +zwaLIhEx6QdYkC5i6lHaAY3jwoc3SMQLODQdjh0q9RFvMW8rFD+q7fG89T5hk8w9 +4ppvvthXY52vqBixcAEmCdvnAYxA15XtV1BDTLGAnHDfL3gu/85QqryMpU6ZDkdJ +LQbJcwFWN+F1c1Iv335w0N9YlW9sNQtuUWTH8544K5i4VLfDOJwyrchbf5GlLqir +/AYkGg634KVUKSwbzywxzm/QUkyTcLD5Xayg2V6/NDHjRKEqXbgDxwpJIrrjPvRp +ZvoGfA+Im+o/LElcZz+ZL5lP7GIiiaFf3PN3XhQY1mxIAdEgbFthFhrxFBQGf+ng +uBSVAoIBAHl12K8pg8LHoUtE9MVoziWMxRWOAH4ha+JSg4BLK/SLlbbYAnIHg1CG +LcH1eWNMokJnt9An54KXJBw4qYAzgB23nHdjcncoivwPSg1oVclMjCfcaqGMac+2 +UpPblF32vAyvXL3MWzZxn03Q5Bo2Rqk0zzwc6LP2rARdeyDyJaOHEfEOG03s5ZQE +91/YnbqUdW/QI3m1kkxM3Ot4PIOgmTJMqwQQCD+GhZppBmn49C7k8m+OVkxyjm0O +lPOlFxUXGE3oCgltDGrIwaKj+wh1Ny/LZjLvJ13UPnWhUYE+al6EEnpMx4nT/S5w +LZ71bu8RVajtxcoN1jnmDpECL8vWOeUCggEBAIEuKoY7pVHfs5gr5dXfQeVZEtqy +LnSdsd37/aqQZRlUpVmBrPNl1JBLiEVhk2SL3XJIDU4Er7f0idhtYLY3eE7wqZ4d +38Iaj5tv3zBc/wb1bImPgOgXCH7QrrbW7uTiYMLScuUbMR4uSpfubLaV8Zc9WHT8 +kTJ2pKKtA1GPJ4V7HCIxuTjD2iyOK1CRkaqSC+5VUuq5gHf92CEstv9AIvvy5cWg +gnfBQoS89m3aO035henSfRFKVJkHaEoasj8hB3pwl9FGZUJp1c2JxiKzONqZhyGa +6tcIAM3od0QtAfDJ89tWJ5D31W8KNNysobFSQxZ62WgLUUtXrkN1LGodxGQ= +-----END RSA PRIVATE KEY-----`), + []byte(`ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQC6L2U2MIlqEd4VIULodeL8I1fnzwL8vaAlTHrluc2mFUL3yFbXDOhG89cfHlpadvuURuk2E/oHRSgIENNo+C32Ed8P+g1D7odPVCkMgZ9mZcj3X+1yMZVIFvE3tZpi38ew+saNtqXC9A8KE3newxWYWQxMQNHkbfM4E4Aas6KaUJPzTAjZLleDSyl3yOMozkjWzANHTG24NacdNHU6GKf6kRAgTqCUtN9iruhDHqeYxdcE2aT5Jo8PUUDKOQX/zWKbYQdcHKA0TCtpWsNwDEraYqVTZcwmKiBAo1haWbdSRs0HpY8t7Xb4prTauYhWxKoilz9kIGtoCUJpJaVgKRT68naEf/7KnLltc2VtpOJZkr6+4js8LztPnTIs15dW41kDKfMOkEK+yOKIesH9kVF9V8Xo++JXwJX6bsZPN8DAmFV5JuWtHqe2zXorQdMPjy/nsNTRvS3lU68LtxY9KGz5p91IvdnxF4vkRYyBTXB++3JxEp6GINUaPcyzZx/zdEgA2vfzsb7UtxNBg/skm4HtK/LmP5Hr4eMjRyC7gQxQ91Z8I0YbKuXHAk20+ZRJ9qwfXFQqmqvt0L7Sw3j+B8nDvlZMBisBOQ/N7Ghg1oywYN+G6iGRhrdOfarWDeSqGxf9HQV3l3x3mpzXfupxELGJljP1/dtWRdBuFiY711rcqw== test-dummy-20171002140848`), + nil, +} + +func init() { + testSigner, _ := NewTestSigner() + Dummy.Signer = testSigner +} diff --git a/vendor/github.com/joyent/triton-go/authentication/ecdsa_signature.go b/vendor/github.com/joyent/triton-go/authentication/ecdsa_signature.go index 8aaba97a5..7fb5a5ab8 100644 --- a/vendor/github.com/joyent/triton-go/authentication/ecdsa_signature.go +++ b/vendor/github.com/joyent/triton-go/authentication/ecdsa_signature.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package authentication import ( @@ -6,7 +14,7 @@ import ( "fmt" "math/big" - "github.com/hashicorp/errwrap" + "github.com/pkg/errors" "golang.org/x/crypto/ssh" ) @@ -44,7 +52,7 @@ func newECDSASignature(signatureBlob []byte) (*ecdsaSignature, error) { } if err := ssh.Unmarshal(signatureBlob, &ecSig); err != nil { - return nil, errwrap.Wrapf("Error unmarshaling signature: {{err}}", err) + return nil, errors.Wrap(err, "unable to unmarshall signature") } rValue := ecSig.R.Bytes() diff --git a/vendor/github.com/joyent/triton-go/authentication/private_key_signer.go b/vendor/github.com/joyent/triton-go/authentication/private_key_signer.go index 43bc286f0..6e5a67331 100644 --- a/vendor/github.com/joyent/triton-go/authentication/private_key_signer.go +++ b/vendor/github.com/joyent/triton-go/authentication/private_key_signer.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. 
+// + package authentication import ( @@ -7,11 +15,11 @@ import ( "crypto/x509" "encoding/base64" "encoding/pem" - "errors" "fmt" + "path" "strings" - "github.com/hashicorp/errwrap" + "github.com/pkg/errors" "golang.org/x/crypto/ssh" ) @@ -20,27 +28,35 @@ type PrivateKeySigner struct { keyFingerprint string algorithm string accountName string + userName string hashFunc crypto.Hash privateKey *rsa.PrivateKey } -func NewPrivateKeySigner(keyFingerprint string, privateKeyMaterial []byte, accountName string) (*PrivateKeySigner, error) { - keyFingerprintMD5 := strings.Replace(keyFingerprint, ":", "", -1) +type PrivateKeySignerInput struct { + KeyID string + PrivateKeyMaterial []byte + AccountName string + Username string +} - block, _ := pem.Decode(privateKeyMaterial) +func NewPrivateKeySigner(input PrivateKeySignerInput) (*PrivateKeySigner, error) { + keyFingerprintMD5 := strings.Replace(input.KeyID, ":", "", -1) + + block, _ := pem.Decode(input.PrivateKeyMaterial) if block == nil { return nil, errors.New("Error PEM-decoding private key material: nil block received") } rsakey, err := x509.ParsePKCS1PrivateKey(block.Bytes) if err != nil { - return nil, errwrap.Wrapf("Error parsing private key: {{err}}", err) + return nil, errors.Wrap(err, "unable to parse private key") } sshPublicKey, err := ssh.NewPublicKey(rsakey.Public()) if err != nil { - return nil, errwrap.Wrapf("Error parsing SSH key from private key: {{err}}", err) + return nil, errors.Wrap(err, "unable to parse SSH key from private key") } matchKeyFingerprint := formatPublicKeyFingerprint(sshPublicKey, false) @@ -51,13 +67,17 @@ func NewPrivateKeySigner(keyFingerprint string, privateKeyMaterial []byte, accou signer := &PrivateKeySigner{ formattedKeyFingerprint: displayKeyFingerprint, - keyFingerprint: keyFingerprint, - accountName: accountName, + keyFingerprint: input.KeyID, + accountName: input.AccountName, hashFunc: crypto.SHA1, privateKey: rsakey, } + if input.Username != "" { + signer.userName = input.Username + } + _, algorithm, err := signer.SignRaw("HelloWorld") if err != nil { return nil, fmt.Errorf("Cannot sign using ssh agent: %s", err) @@ -76,11 +96,17 @@ func (s *PrivateKeySigner) Sign(dateHeader string) (string, error) { signed, err := rsa.SignPKCS1v15(rand.Reader, s.privateKey, s.hashFunc, digest) if err != nil { - return "", errwrap.Wrapf("Error signing date header: {{err}}", err) + return "", errors.Wrap(err, "unable to sign date header") } signedBase64 := base64.StdEncoding.EncodeToString(signed) - keyID := fmt.Sprintf("/%s/keys/%s", s.accountName, s.formattedKeyFingerprint) + var keyID string + if s.userName != "" { + + keyID = path.Join("/", s.accountName, "users", s.userName, "keys", s.formattedKeyFingerprint) + } else { + keyID = path.Join("/", s.accountName, "keys", s.formattedKeyFingerprint) + } return fmt.Sprintf(authorizationHeaderFormat, keyID, "rsa-sha1", headerName, signedBase64), nil } @@ -91,7 +117,7 @@ func (s *PrivateKeySigner) SignRaw(toSign string) (string, string, error) { signed, err := rsa.SignPKCS1v15(rand.Reader, s.privateKey, s.hashFunc, digest) if err != nil { - return "", "", errwrap.Wrapf("Error signing date header: {{err}}", err) + return "", "", errors.Wrap(err, "unable to sign date header") } signedBase64 := base64.StdEncoding.EncodeToString(signed) return signedBase64, "rsa-sha1", nil diff --git a/vendor/github.com/joyent/triton-go/authentication/rsa_signature.go b/vendor/github.com/joyent/triton-go/authentication/rsa_signature.go index 8d513f6c4..81b735eb1 100644 --- 
a/vendor/github.com/joyent/triton-go/authentication/rsa_signature.go +++ b/vendor/github.com/joyent/triton-go/authentication/rsa_signature.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package authentication import ( diff --git a/vendor/github.com/joyent/triton-go/authentication/signature.go b/vendor/github.com/joyent/triton-go/authentication/signature.go index e6a52df30..8cc18d964 100644 --- a/vendor/github.com/joyent/triton-go/authentication/signature.go +++ b/vendor/github.com/joyent/triton-go/authentication/signature.go @@ -1,8 +1,16 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package authentication import ( - "regexp" "fmt" + "regexp" ) type httpAuthSignature interface { diff --git a/vendor/github.com/joyent/triton-go/authentication/signer.go b/vendor/github.com/joyent/triton-go/authentication/signer.go index 6e3d31dd7..f1e114194 100644 --- a/vendor/github.com/joyent/triton-go/authentication/signer.go +++ b/vendor/github.com/joyent/triton-go/authentication/signer.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package authentication const authorizationHeaderFormat = `Signature keyId="%s",algorithm="%s",headers="%s",signature="%s"` diff --git a/vendor/github.com/joyent/triton-go/authentication/ssh_agent_signer.go b/vendor/github.com/joyent/triton-go/authentication/ssh_agent_signer.go index ea84c5070..304e7fb3b 100644 --- a/vendor/github.com/joyent/triton-go/authentication/ssh_agent_signer.go +++ b/vendor/github.com/joyent/triton-go/authentication/ssh_agent_signer.go @@ -1,50 +1,98 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. 
+// + package authentication import ( "crypto/md5" "crypto/sha256" "encoding/base64" - "errors" "fmt" "net" "os" + "path" "strings" - "github.com/hashicorp/errwrap" + pkgerrors "github.com/pkg/errors" "golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh/agent" ) +var ( + ErrUnsetEnvVar = pkgerrors.New("environment variable SSH_AUTH_SOCK not set") +) + type SSHAgentSigner struct { formattedKeyFingerprint string keyFingerprint string algorithm string accountName string + userName string keyIdentifier string agent agent.Agent key ssh.PublicKey } -func NewSSHAgentSigner(keyFingerprint, accountName string) (*SSHAgentSigner, error) { - sshAgentAddress := os.Getenv("SSH_AUTH_SOCK") - if sshAgentAddress == "" { - return nil, errors.New("SSH_AUTH_SOCK is not set") +type SSHAgentSignerInput struct { + KeyID string + AccountName string + Username string +} + +func NewSSHAgentSigner(input SSHAgentSignerInput) (*SSHAgentSigner, error) { + sshAgentAddress, agentOk := os.LookupEnv("SSH_AUTH_SOCK") + if !agentOk { + return nil, ErrUnsetEnvVar } conn, err := net.Dial("unix", sshAgentAddress) if err != nil { - return nil, errwrap.Wrapf("Error dialing SSH agent: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to dial SSH agent") } ag := agent.NewClient(conn) - keys, err := ag.List() - if err != nil { - return nil, errwrap.Wrapf("Error listing keys in SSH Agent: %s", err) + signer := &SSHAgentSigner{ + keyFingerprint: input.KeyID, + accountName: input.AccountName, + agent: ag, } - keyFingerprintStripped := strings.TrimPrefix(keyFingerprint, "MD5:") + matchingKey, err := signer.MatchKey() + if err != nil { + return nil, err + } + signer.key = matchingKey + signer.formattedKeyFingerprint = formatPublicKeyFingerprint(signer.key, true) + if input.Username != "" { + signer.userName = input.Username + signer.keyIdentifier = path.Join("/", signer.accountName, "users", input.Username, "keys", signer.formattedKeyFingerprint) + } else { + signer.keyIdentifier = path.Join("/", signer.accountName, "keys", signer.formattedKeyFingerprint) + } + + _, algorithm, err := signer.SignRaw("HelloWorld") + if err != nil { + return nil, fmt.Errorf("Cannot sign using ssh agent: %s", err) + } + signer.algorithm = algorithm + + return signer, nil +} + +func (s *SSHAgentSigner) MatchKey() (ssh.PublicKey, error) { + keys, err := s.agent.List() + if err != nil { + return nil, pkgerrors.Wrap(err, "unable to list keys in SSH Agent") + } + + keyFingerprintStripped := strings.TrimPrefix(s.keyFingerprint, "MD5:") keyFingerprintStripped = strings.TrimPrefix(keyFingerprintStripped, "SHA256:") keyFingerprintStripped = strings.Replace(keyFingerprintStripped, ":", "", -1) @@ -64,27 +112,10 @@ func NewSSHAgentSigner(keyFingerprint, accountName string) (*SSHAgentSigner, err } if matchingKey == nil { - return nil, fmt.Errorf("No key in the SSH Agent matches fingerprint: %s", keyFingerprint) + return nil, fmt.Errorf("No key in the SSH Agent matches fingerprint: %s", s.keyFingerprint) } - formattedKeyFingerprint := formatPublicKeyFingerprint(matchingKey, true) - - signer := &SSHAgentSigner{ - formattedKeyFingerprint: formattedKeyFingerprint, - keyFingerprint: keyFingerprint, - accountName: accountName, - agent: ag, - key: matchingKey, - keyIdentifier: fmt.Sprintf("/%s/keys/%s", accountName, formattedKeyFingerprint), - } - - _, algorithm, err := signer.SignRaw("HelloWorld") - if err != nil { - return nil, fmt.Errorf("Cannot sign using ssh agent: %s", err) - } - signer.algorithm = algorithm - - return signer, nil + return matchingKey, nil } func (s 
*SSHAgentSigner) Sign(dateHeader string) (string, error) { @@ -92,12 +123,12 @@ func (s *SSHAgentSigner) Sign(dateHeader string) (string, error) { signature, err := s.agent.Sign(s.key, []byte(fmt.Sprintf("%s: %s", headerName, dateHeader))) if err != nil { - return "", errwrap.Wrapf("Error signing date header: {{err}}", err) + return "", pkgerrors.Wrap(err, "unable to sign date header") } keyFormat, err := keyFormatToKeyType(signature.Format) if err != nil { - return "", errwrap.Wrapf("Error reading signature: {{err}}", err) + return "", pkgerrors.Wrap(err, "unable to format signature") } var authSignature httpAuthSignature @@ -105,12 +136,12 @@ func (s *SSHAgentSigner) Sign(dateHeader string) (string, error) { case "rsa": authSignature, err = newRSASignature(signature.Blob) if err != nil { - return "", errwrap.Wrapf("Error reading signature: {{err}}", err) + return "", pkgerrors.Wrap(err, "unable to read RSA signature") } case "ecdsa": authSignature, err = newECDSASignature(signature.Blob) if err != nil { - return "", errwrap.Wrapf("Error reading signature: {{err}}", err) + return "", pkgerrors.Wrap(err, "unable to read ECDSA signature") } default: return "", fmt.Errorf("Unsupported algorithm from SSH agent: %s", signature.Format) @@ -123,12 +154,12 @@ func (s *SSHAgentSigner) Sign(dateHeader string) (string, error) { func (s *SSHAgentSigner) SignRaw(toSign string) (string, string, error) { signature, err := s.agent.Sign(s.key, []byte(toSign)) if err != nil { - return "", "", errwrap.Wrapf("Error signing string: {{err}}", err) + return "", "", pkgerrors.Wrap(err, "unable to sign string") } keyFormat, err := keyFormatToKeyType(signature.Format) if err != nil { - return "", "", errwrap.Wrapf("Error reading signature: {{err}}", err) + return "", "", pkgerrors.Wrap(err, "unable to format key") } var authSignature httpAuthSignature @@ -136,12 +167,12 @@ func (s *SSHAgentSigner) SignRaw(toSign string) (string, string, error) { case "rsa": authSignature, err = newRSASignature(signature.Blob) if err != nil { - return "", "", errwrap.Wrapf("Error reading signature: {{err}}", err) + return "", "", pkgerrors.Wrap(err, "unable to read RSA signature") } case "ecdsa": authSignature, err = newECDSASignature(signature.Blob) if err != nil { - return "", "", errwrap.Wrapf("Error reading signature: {{err}}", err) + return "", "", pkgerrors.Wrap(err, "unable to read ECDSA signature") } default: return "", "", fmt.Errorf("Unsupported algorithm from SSH agent: %s", signature.Format) diff --git a/vendor/github.com/joyent/triton-go/authentication/test_signer.go b/vendor/github.com/joyent/triton-go/authentication/test_signer.go new file mode 100644 index 000000000..4ccfd3847 --- /dev/null +++ b/vendor/github.com/joyent/triton-go/authentication/test_signer.go @@ -0,0 +1,35 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + +package authentication + +// TestSigner represents an authentication key signer which we can use for +// testing purposes only. This will largely be a stub to send through client +// unit tests. 
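+// All of its methods return zero values, so requests signed with a TestSigner
+// carry an empty Authorization header; it is only intended for wiring up
+// clients in tests that never reach a real endpoint.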
+type TestSigner struct{} + +// NewTestSigner constructs a new instance of test signer +func NewTestSigner() (Signer, error) { + return &TestSigner{}, nil +} + +func (s *TestSigner) DefaultAlgorithm() string { + return "" +} + +func (s *TestSigner) KeyFingerprint() string { + return "" +} + +func (s *TestSigner) Sign(dateHeader string) (string, error) { + return "", nil +} + +func (s *TestSigner) SignRaw(toSign string) (string, string, error) { + return "", "", nil +} diff --git a/vendor/github.com/joyent/triton-go/authentication/util.go b/vendor/github.com/joyent/triton-go/authentication/util.go index 7c298b68c..95918439f 100644 --- a/vendor/github.com/joyent/triton-go/authentication/util.go +++ b/vendor/github.com/joyent/triton-go/authentication/util.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package authentication import ( diff --git a/vendor/github.com/joyent/triton-go/client/client.go b/vendor/github.com/joyent/triton-go/client/client.go index b01f86baf..0d8bc8931 100644 --- a/vendor/github.com/joyent/triton-go/client/client.go +++ b/vendor/github.com/joyent/triton-go/client/client.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package client import ( @@ -5,7 +13,7 @@ import ( "context" "crypto/tls" "encoding/json" - "errors" + "fmt" "io" "net" "net/http" @@ -13,13 +21,22 @@ import ( "os" "time" - "github.com/hashicorp/errwrap" + "github.com/joyent/triton-go" "github.com/joyent/triton-go/authentication" + "github.com/joyent/triton-go/errors" + pkgerrors "github.com/pkg/errors" ) const nilContext = "nil context" -var MissingKeyIdError = errors.New("Default SSH agent authentication requires SDC_KEY_ID") +var ( + ErrDefaultAuth = pkgerrors.New("default SSH agent authentication requires SDC_KEY_ID / TRITON_KEY_ID and SSH_AUTH_SOCK") + ErrAccountName = pkgerrors.New("missing account name") + ErrMissingURL = pkgerrors.New("missing API URL") + + InvalidTritonURL = "invalid format of Triton URL" + InvalidMantaURL = "invalid format of Manta URL" +) // Client represents a connection to the Triton Compute or Object Storage APIs. type Client struct { @@ -28,7 +45,7 @@ type Client struct { TritonURL url.URL MantaURL url.URL AccountName string - Endpoint string + Username string } // New is used to construct a Client in order to make API @@ -37,61 +54,93 @@ type Client struct { // At least one signer must be provided - example signers include // authentication.PrivateKeySigner and authentication.SSHAgentSigner. 
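+// New rejects an empty account name and the case where both the Triton and
+// Manta URLs are missing. When no signers are supplied it falls back to
+// DefaultAuth, which builds an SSH agent signer from the TRITON_KEY_ID (or
+// legacy SDC_KEY_ID) environment variable.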
func New(tritonURL string, mantaURL string, accountName string, signers ...authentication.Signer) (*Client, error) { + if accountName == "" { + return nil, ErrAccountName + } + + if tritonURL == "" && mantaURL == "" { + return nil, ErrMissingURL + } + cloudURL, err := url.Parse(tritonURL) if err != nil { - return nil, errwrap.Wrapf("invalid endpoint URL: {{err}}", err) + return nil, pkgerrors.Wrapf(err, InvalidTritonURL) } storageURL, err := url.Parse(mantaURL) if err != nil { - return nil, errwrap.Wrapf("invalid manta URL: {{err}}", err) + return nil, pkgerrors.Wrapf(err, InvalidMantaURL) } - if accountName == "" { - return nil, errors.New("account name can not be empty") - } - - httpClient := &http.Client{ - Transport: httpTransport(false), - CheckRedirect: doNotFollowRedirects, - } - - newClient := &Client{ - HTTPClient: httpClient, - Authorizers: signers, - TritonURL: *cloudURL, - MantaURL: *storageURL, - AccountName: accountName, - // TODO(justinwr): Deprecated? - // Endpoint: tritonURL, - } - - var authorizers []authentication.Signer + authorizers := make([]authentication.Signer, 0) for _, key := range signers { if key != nil { authorizers = append(authorizers, key) } } + newClient := &Client{ + HTTPClient: &http.Client{ + Transport: httpTransport(false), + CheckRedirect: doNotFollowRedirects, + }, + Authorizers: authorizers, + TritonURL: *cloudURL, + MantaURL: *storageURL, + AccountName: accountName, + } + // Default to constructing an SSHAgentSigner if there are no other signers - // passed into NewClient and there's an SDC_KEY_ID value available in the - // user environ. - if len(authorizers) == 0 { - keyID := os.Getenv("SDC_KEY_ID") - if len(keyID) != 0 { - keySigner, err := authentication.NewSSHAgentSigner(keyID, accountName) - if err != nil { - return nil, errwrap.Wrapf("Problem initializing NewSSHAgentSigner: {{err}}", err) - } - newClient.Authorizers = append(authorizers, keySigner) - } else { - return nil, MissingKeyIdError + // passed into NewClient and there's an TRITON_KEY_ID and SSH_AUTH_SOCK + // available in the user's environ(7). + if len(newClient.Authorizers) == 0 { + if err := newClient.DefaultAuth(); err != nil { + return nil, err } } return newClient, nil } +var envPrefixes = []string{"TRITON", "SDC"} + +// GetTritonEnv looks up environment variables using the preferred "TRITON" +// prefix, but falls back to the SDC prefix. For example, looking up "USER" +// will search for "TRITON_USER" followed by "SDC_USER". If the environment +// variable is not set, an empty string is returned. GetTritonEnv() is used to +// aid in the transition and deprecation of the SDC_* environment variables. +func GetTritonEnv(name string) string { + for _, prefix := range envPrefixes { + if val, found := os.LookupEnv(prefix + "_" + name); found { + return val + } + } + + return "" +} + +// initDefaultAuth provides a default key signer for a client. This should only +// be used internally if the client has no other key signer for authenticating +// with Triton. We first look for both `SDC_KEY_ID` and `SSH_AUTH_SOCK` in the +// user's environ(7). If so we default to the SSH agent key signer. 
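+// The client's Username field, when set, is forwarded to the signer so that a
+// sub-user key (under /<account>/users/<username>/keys) can be used instead of
+// a key on the main account.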
+func (c *Client) DefaultAuth() error { + tritonKeyId := GetTritonEnv("KEY_ID") + if tritonKeyId != "" { + input := authentication.SSHAgentSignerInput{ + KeyID: tritonKeyId, + AccountName: c.AccountName, + Username: c.Username, + } + defaultSigner, err := authentication.NewSSHAgentSigner(input) + if err != nil { + return pkgerrors.Wrapf(err, "unable to initialize NewSSHAgentSigner") + } + c.Authorizers = append(c.Authorizers, defaultSigner) + } + + return ErrDefaultAuth +} + // InsecureSkipTLSVerify turns off TLS verification for the client connection. This // allows connection to an endpoint with a certificate which was signed by a non- // trusted CA, such as self-signed certificates. This can be useful when connecting @@ -112,8 +161,8 @@ func httpTransport(insecureSkipTLSVerify bool) *http.Transport { KeepAlive: 30 * time.Second, }).Dial, TLSHandshakeTimeout: 10 * time.Second, - DisableKeepAlives: true, - MaxIdleConnsPerHost: -1, + MaxIdleConns: 10, + IdleConnTimeout: 15 * time.Second, TLSClientConfig: &tls.Config{ InsecureSkipVerify: insecureSkipTLSVerify, }, @@ -124,19 +173,20 @@ func doNotFollowRedirects(*http.Request, []*http.Request) error { return http.ErrUseLastResponse } -// TODO(justinwr): Deprecated? -// func (c *Client) FormatURL(path string) string { -// return fmt.Sprintf("%s%s", c.Endpoint, path) -// } - -func (c *Client) DecodeError(statusCode int, body io.Reader) error { - err := &TritonError{ - StatusCode: statusCode, +func (c *Client) DecodeError(resp *http.Response, requestMethod string) error { + err := &errors.APIError{ + StatusCode: resp.StatusCode, } - errorDecoder := json.NewDecoder(body) - if err := errorDecoder.Decode(err); err != nil { - return errwrap.Wrapf("Error decoding error response: {{err}}", err) + if requestMethod != http.MethodHead && resp.Body != nil { + errorDecoder := json.NewDecoder(resp.Body) + if err := errorDecoder.Decode(err); err != nil { + return pkgerrors.Wrapf(err, "unable to decode error response") + } + } + + if err.Message == "" { + err.Message = fmt.Sprintf("HTTP response returned status code %d", err.StatusCode) } return err @@ -158,7 +208,7 @@ func (c *Client) ExecuteRequestURIParams(ctx context.Context, inputs RequestInpu body := inputs.Body query := inputs.Query - var requestBody io.ReadSeeker + var requestBody io.Reader if body != nil { marshaled, err := json.MarshalIndent(body, "", " ") if err != nil { @@ -175,7 +225,7 @@ func (c *Client) ExecuteRequestURIParams(ctx context.Context, inputs RequestInpu req, err := http.NewRequest(method, endpoint.String(), requestBody) if err != nil { - return nil, errwrap.Wrapf("Error constructing HTTP request: {{err}}", err) + return nil, pkgerrors.Wrapf(err, "unable to construct HTTP request") } dateHeader := time.Now().UTC().Format(time.RFC1123) @@ -185,12 +235,12 @@ func (c *Client) ExecuteRequestURIParams(ctx context.Context, inputs RequestInpu // outside that constructor). 
authHeader, err := c.Authorizers[0].Sign(dateHeader) if err != nil { - return nil, errwrap.Wrapf("Error signing HTTP request: {{err}}", err) + return nil, pkgerrors.Wrapf(err, "unable to sign HTTP request") } req.Header.Set("Authorization", authHeader) req.Header.Set("Accept", "application/json") - req.Header.Set("Accept-Version", "8") - req.Header.Set("User-Agent", "triton-go Client API") + req.Header.Set("Accept-Version", triton.CloudAPIMajorVersion) + req.Header.Set("User-Agent", triton.UserAgent()) if body != nil { req.Header.Set("Content-Type", "application/json") @@ -198,14 +248,16 @@ func (c *Client) ExecuteRequestURIParams(ctx context.Context, inputs RequestInpu resp, err := c.HTTPClient.Do(req.WithContext(ctx)) if err != nil { - return nil, errwrap.Wrapf("Error executing HTTP request: {{err}}", err) + return nil, pkgerrors.Wrapf(err, "unable to execute HTTP request") } + // We will only return a response from the API it is in the HTTP StatusCode 2xx range + // StatusMultipleChoices is StatusCode 300 if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusMultipleChoices { return resp.Body, nil } - return nil, c.DecodeError(resp.StatusCode, resp.Body) + return nil, c.DecodeError(resp, req.Method) } func (c *Client) ExecuteRequest(ctx context.Context, inputs RequestInput) (io.ReadCloser, error) { @@ -217,7 +269,7 @@ func (c *Client) ExecuteRequestRaw(ctx context.Context, inputs RequestInput) (*h path := inputs.Path body := inputs.Body - var requestBody io.ReadSeeker + var requestBody io.Reader if body != nil { marshaled, err := json.MarshalIndent(body, "", " ") if err != nil { @@ -231,7 +283,7 @@ func (c *Client) ExecuteRequestRaw(ctx context.Context, inputs RequestInput) (*h req, err := http.NewRequest(method, endpoint.String(), requestBody) if err != nil { - return nil, errwrap.Wrapf("Error constructing HTTP request: {{err}}", err) + return nil, pkgerrors.Wrapf(err, "unable to construct HTTP request") } dateHeader := time.Now().UTC().Format(time.RFC1123) @@ -241,12 +293,12 @@ func (c *Client) ExecuteRequestRaw(ctx context.Context, inputs RequestInput) (*h // outside that constructor). 
authHeader, err := c.Authorizers[0].Sign(dateHeader) if err != nil { - return nil, errwrap.Wrapf("Error signing HTTP request: {{err}}", err) + return nil, pkgerrors.Wrapf(err, "unable to sign HTTP request") } req.Header.Set("Authorization", authHeader) req.Header.Set("Accept", "application/json") - req.Header.Set("Accept-Version", "8") - req.Header.Set("User-Agent", "triton-go c API") + req.Header.Set("Accept-Version", triton.CloudAPIMajorVersion) + req.Header.Set("User-Agent", triton.UserAgent()) if body != nil { req.Header.Set("Content-Type", "application/json") @@ -254,10 +306,16 @@ func (c *Client) ExecuteRequestRaw(ctx context.Context, inputs RequestInput) (*h resp, err := c.HTTPClient.Do(req.WithContext(ctx)) if err != nil { - return nil, errwrap.Wrapf("Error executing HTTP request: {{err}}", err) + return nil, pkgerrors.Wrapf(err, "unable to execute HTTP request") } - return resp, nil + // We will only return a response from the API it is in the HTTP StatusCode 2xx range + // StatusMultipleChoices is StatusCode 300 + if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusMultipleChoices { + return resp, nil + } + + return nil, c.DecodeError(resp, req.Method) } func (c *Client) ExecuteRequestStorage(ctx context.Context, inputs RequestInput) (io.ReadCloser, http.Header, error) { @@ -270,7 +328,7 @@ func (c *Client) ExecuteRequestStorage(ctx context.Context, inputs RequestInput) endpoint := c.MantaURL endpoint.Path = path - var requestBody io.ReadSeeker + var requestBody io.Reader if body != nil { marshaled, err := json.MarshalIndent(body, "", " ") if err != nil { @@ -281,7 +339,7 @@ func (c *Client) ExecuteRequestStorage(ctx context.Context, inputs RequestInput) req, err := http.NewRequest(method, endpoint.String(), requestBody) if err != nil { - return nil, nil, errwrap.Wrapf("Error constructing HTTP request: {{err}}", err) + return nil, nil, pkgerrors.Wrapf(err, "unable to construct HTTP request") } if body != nil && (headers == nil || headers.Get("Content-Type") == "") { @@ -300,11 +358,11 @@ func (c *Client) ExecuteRequestStorage(ctx context.Context, inputs RequestInput) authHeader, err := c.Authorizers[0].Sign(dateHeader) if err != nil { - return nil, nil, errwrap.Wrapf("Error signing HTTP request: {{err}}", err) + return nil, nil, pkgerrors.Wrapf(err, "unable to sign HTTP request") } req.Header.Set("Authorization", authHeader) req.Header.Set("Accept", "*/*") - req.Header.Set("User-Agent", "manta-go client API") + req.Header.Set("User-Agent", triton.UserAgent()) if query != nil { req.URL.RawQuery = query.Encode() @@ -312,22 +370,16 @@ func (c *Client) ExecuteRequestStorage(ctx context.Context, inputs RequestInput) resp, err := c.HTTPClient.Do(req.WithContext(ctx)) if err != nil { - return nil, nil, errwrap.Wrapf("Error executing HTTP request: {{err}}", err) + return nil, nil, pkgerrors.Wrapf(err, "unable to execute HTTP request") } + // We will only return a response from the API it is in the HTTP StatusCode 2xx range + // StatusMultipleChoices is StatusCode 300 if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusMultipleChoices { return resp.Body, resp.Header, nil } - mantaError := &MantaError{ - StatusCode: resp.StatusCode, - } - - errorDecoder := json.NewDecoder(resp.Body) - if err := errorDecoder.Decode(mantaError); err != nil { - return nil, nil, errwrap.Wrapf("Error decoding error response: {{err}}", err) - } - return nil, nil, mantaError + return nil, nil, c.DecodeError(resp, req.Method) } type RequestNoEncodeInput struct { @@ -335,7 +387,7 @@ 
type RequestNoEncodeInput struct { Path string Query *url.Values Headers *http.Header - Body io.ReadSeeker + Body io.Reader } func (c *Client) ExecuteRequestNoEncode(ctx context.Context, inputs RequestNoEncodeInput) (io.ReadCloser, http.Header, error) { @@ -350,7 +402,7 @@ func (c *Client) ExecuteRequestNoEncode(ctx context.Context, inputs RequestNoEnc req, err := http.NewRequest(method, endpoint.String(), body) if err != nil { - return nil, nil, errwrap.Wrapf("Error constructing HTTP request: {{err}}", err) + return nil, nil, pkgerrors.Wrapf(err, "unable to construct HTTP request") } if headers != nil { @@ -366,11 +418,12 @@ func (c *Client) ExecuteRequestNoEncode(ctx context.Context, inputs RequestNoEnc authHeader, err := c.Authorizers[0].Sign(dateHeader) if err != nil { - return nil, nil, errwrap.Wrapf("Error signing HTTP request: {{err}}", err) + return nil, nil, pkgerrors.Wrapf(err, "unable to sign HTTP request") } req.Header.Set("Authorization", authHeader) req.Header.Set("Accept", "*/*") - req.Header.Set("User-Agent", "manta-go client API") + req.Header.Set("Accept-Version", triton.CloudAPIMajorVersion) + req.Header.Set("User-Agent", triton.UserAgent()) if query != nil { req.URL.RawQuery = query.Encode() @@ -378,20 +431,14 @@ func (c *Client) ExecuteRequestNoEncode(ctx context.Context, inputs RequestNoEnc resp, err := c.HTTPClient.Do(req.WithContext(ctx)) if err != nil { - return nil, nil, errwrap.Wrapf("Error executing HTTP request: {{err}}", err) + return nil, nil, pkgerrors.Wrapf(err, "unable to execute HTTP request") } + // We will only return a response from the API it is in the HTTP StatusCode 2xx range + // StatusMultipleChoices is StatusCode 300 if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusMultipleChoices { return resp.Body, resp.Header, nil } - mantaError := &MantaError{ - StatusCode: resp.StatusCode, - } - - errorDecoder := json.NewDecoder(resp.Body) - if err := errorDecoder.Decode(mantaError); err != nil { - return nil, nil, errwrap.Wrapf("Error decoding error response: {{err}}", err) - } - return nil, nil, mantaError + return nil, nil, c.DecodeError(resp, req.Method) } diff --git a/vendor/github.com/joyent/triton-go/client/errors.go b/vendor/github.com/joyent/triton-go/client/errors.go deleted file mode 100644 index 1fc64a095..000000000 --- a/vendor/github.com/joyent/triton-go/client/errors.go +++ /dev/null @@ -1,190 +0,0 @@ -package client - -import ( - "fmt" - - "github.com/hashicorp/errwrap" -) - -// ClientError represents an error code and message along with the status code -// of the HTTP request which resulted in the error message. -type ClientError struct { - StatusCode int - Code string - Message string -} - -// Error implements interface Error on the TritonError type. -func (e ClientError) Error() string { - return fmt.Sprintf("%s: %s", e.Code, e.Message) -} - -// MantaError represents an error code and message along with -// the status code of the HTTP request which resulted in the error -// message. Error codes used by the Manta API are listed at -// https://apidocs.joyent.com/manta/api.html#errors -type MantaError struct { - StatusCode int - Code string `json:"code"` - Message string `json:"message"` -} - -// Error implements interface Error on the MantaError type. -func (e MantaError) Error() string { - return fmt.Sprintf("%s: %s", e.Code, e.Message) -} - -// TritonError represents an error code and message along with -// the status code of the HTTP request which resulted in the error -// message. 
Error codes used by the Triton API are listed at -// https://apidocs.joyent.com/cloudapi/#cloudapi-http-responses -type TritonError struct { - StatusCode int - Code string `json:"code"` - Message string `json:"message"` -} - -// Error implements interface Error on the TritonError type. -func (e TritonError) Error() string { - return fmt.Sprintf("%s: %s", e.Code, e.Message) -} - -func IsAuthSchemeError(err error) bool { - return isSpecificError(err, "AuthScheme") -} -func IsAuthorizationError(err error) bool { - return isSpecificError(err, "Authorization") -} -func IsBadRequestError(err error) bool { - return isSpecificError(err, "BadRequest") -} -func IsChecksumError(err error) bool { - return isSpecificError(err, "Checksum") -} -func IsConcurrentRequestError(err error) bool { - return isSpecificError(err, "ConcurrentRequest") -} -func IsContentLengthError(err error) bool { - return isSpecificError(err, "ContentLength") -} -func IsContentMD5MismatchError(err error) bool { - return isSpecificError(err, "ContentMD5Mismatch") -} -func IsEntityExistsError(err error) bool { - return isSpecificError(err, "EntityExists") -} -func IsInvalidArgumentError(err error) bool { - return isSpecificError(err, "InvalidArgument") -} -func IsInvalidAuthTokenError(err error) bool { - return isSpecificError(err, "InvalidAuthToken") -} -func IsInvalidCredentialsError(err error) bool { - return isSpecificError(err, "InvalidCredentials") -} -func IsInvalidDurabilityLevelError(err error) bool { - return isSpecificError(err, "InvalidDurabilityLevel") -} -func IsInvalidKeyIdError(err error) bool { - return isSpecificError(err, "InvalidKeyId") -} -func IsInvalidJobError(err error) bool { - return isSpecificError(err, "InvalidJob") -} -func IsInvalidLinkError(err error) bool { - return isSpecificError(err, "InvalidLink") -} -func IsInvalidLimitError(err error) bool { - return isSpecificError(err, "InvalidLimit") -} -func IsInvalidSignatureError(err error) bool { - return isSpecificError(err, "InvalidSignature") -} -func IsInvalidUpdateError(err error) bool { - return isSpecificError(err, "InvalidUpdate") -} -func IsDirectoryDoesNotExistError(err error) bool { - return isSpecificError(err, "DirectoryDoesNotExist") -} -func IsDirectoryExistsError(err error) bool { - return isSpecificError(err, "DirectoryExists") -} -func IsDirectoryNotEmptyError(err error) bool { - return isSpecificError(err, "DirectoryNotEmpty") -} -func IsDirectoryOperationError(err error) bool { - return isSpecificError(err, "DirectoryOperation") -} -func IsInternalError(err error) bool { - return isSpecificError(err, "Internal") -} -func IsJobNotFoundError(err error) bool { - return isSpecificError(err, "JobNotFound") -} -func IsJobStateError(err error) bool { - return isSpecificError(err, "JobState") -} -func IsKeyDoesNotExistError(err error) bool { - return isSpecificError(err, "KeyDoesNotExist") -} -func IsNotAcceptableError(err error) bool { - return isSpecificError(err, "NotAcceptable") -} -func IsNotEnoughSpaceError(err error) bool { - return isSpecificError(err, "NotEnoughSpace") -} -func IsLinkNotFoundError(err error) bool { - return isSpecificError(err, "LinkNotFound") -} -func IsLinkNotObjectError(err error) bool { - return isSpecificError(err, "LinkNotObject") -} -func IsLinkRequiredError(err error) bool { - return isSpecificError(err, "LinkRequired") -} -func IsParentNotDirectoryError(err error) bool { - return isSpecificError(err, "ParentNotDirectory") -} -func IsPreconditionFailedError(err error) bool { - return isSpecificError(err, 
"PreconditionFailed") -} -func IsPreSignedRequestError(err error) bool { - return isSpecificError(err, "PreSignedRequest") -} -func IsRequestEntityTooLargeError(err error) bool { - return isSpecificError(err, "RequestEntityTooLarge") -} -func IsResourceNotFoundError(err error) bool { - return isSpecificError(err, "ResourceNotFound") -} -func IsRootDirectoryError(err error) bool { - return isSpecificError(err, "RootDirectory") -} -func IsServiceUnavailableError(err error) bool { - return isSpecificError(err, "ServiceUnavailable") -} -func IsSSLRequiredError(err error) bool { - return isSpecificError(err, "SSLRequired") -} -func IsUploadTimeoutError(err error) bool { - return isSpecificError(err, "UploadTimeout") -} -func IsUserDoesNotExistError(err error) bool { - return isSpecificError(err, "UserDoesNotExist") -} - -// isSpecificError checks whether the error represented by err wraps -// an underlying MantaError with code errorCode. -func isSpecificError(err error, errorCode string) bool { - tritonErrorInterface := errwrap.GetType(err.(error), &MantaError{}) - if tritonErrorInterface == nil { - return false - } - - tritonErr := tritonErrorInterface.(*MantaError) - if tritonErr.Code == errorCode { - return true - } - - return false -} diff --git a/vendor/github.com/joyent/triton-go/compute/client.go b/vendor/github.com/joyent/triton-go/compute/client.go index 8ce726cb1..c70daef8c 100644 --- a/vendor/github.com/joyent/triton-go/compute/client.go +++ b/vendor/github.com/joyent/triton-go/compute/client.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package compute import ( @@ -38,7 +46,7 @@ func (c *ComputeClient) Images() *ImagesClient { return &ImagesClient{c.Client} } -// Machines returns a Compute client used for accessing functions pertaining to +// Machine returns a Compute client used for accessing functions pertaining to // machine functionality in the Triton API. func (c *ComputeClient) Instances() *InstancesClient { return &InstancesClient{c.Client} @@ -55,3 +63,15 @@ func (c *ComputeClient) Packages() *PackagesClient { func (c *ComputeClient) Services() *ServicesClient { return &ServicesClient{c.Client} } + +// Snapshots returns a Compute client used for accessing functions pertaining to +// Snapshots functionality in the Triton API. +func (c *ComputeClient) Snapshots() *SnapshotsClient { + return &SnapshotsClient{c.Client} +} + +// Snapshots returns a Compute client used for accessing functions pertaining to +// Snapshots functionality in the Triton API. +func (c *ComputeClient) Volumes() *VolumesClient { + return &VolumesClient{c.Client} +} diff --git a/vendor/github.com/joyent/triton-go/compute/datacenters.go b/vendor/github.com/joyent/triton-go/compute/datacenters.go index 7acaf20a1..9f22cffbb 100644 --- a/vendor/github.com/joyent/triton-go/compute/datacenters.go +++ b/vendor/github.com/joyent/triton-go/compute/datacenters.go @@ -1,16 +1,24 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. 
+// + package compute import ( + "context" "encoding/json" - "errors" "fmt" "net/http" + "path" "sort" - "context" - - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/joyent/triton-go/errors" + pkgerrors "github.com/pkg/errors" ) type DataCentersClient struct { @@ -25,23 +33,24 @@ type DataCenter struct { type ListDataCentersInput struct{} func (c *DataCentersClient) List(ctx context.Context, _ *ListDataCentersInput) ([]*DataCenter, error) { - path := fmt.Sprintf("/%s/datacenters", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "datacenters") + reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing List request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to list data centers") } var intermediate map[string]string decoder := json.NewDecoder(respReader) if err = decoder.Decode(&intermediate); err != nil { - return nil, errwrap.Wrapf("Error decoding List response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode list data centers response") } keys := make([]string, len(intermediate)) @@ -70,28 +79,23 @@ type GetDataCenterInput struct { } func (c *DataCentersClient) Get(ctx context.Context, input *GetDataCenterInput) (*DataCenter, error) { - path := fmt.Sprintf("/%s/datacenters/%s", c.client.AccountName, input.Name) - reqInputs := client.RequestInput{ - Method: http.MethodGet, - Path: path, - } - resp, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + dcs, err := c.List(ctx, &ListDataCentersInput{}) if err != nil { - return nil, errwrap.Wrapf("Error executing Get request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to get data center") } - if resp.StatusCode != http.StatusFound { - return nil, fmt.Errorf("Error executing Get request: expected status code 302, got %s", - resp.StatusCode) + for _, dc := range dcs { + if dc.Name == input.Name { + return &DataCenter{ + Name: input.Name, + URL: dc.URL, + }, nil + } } - location := resp.Header.Get("Location") - if location == "" { - return nil, errors.New("Error decoding Get response: no Location header") + return nil, &errors.APIError{ + StatusCode: http.StatusNotFound, + Code: "ResourceNotFound", + Message: fmt.Sprintf("data center %q not found", input.Name), } - - return &DataCenter{ - Name: input.Name, - URL: location, - }, nil } diff --git a/vendor/github.com/joyent/triton-go/compute/errors.go b/vendor/github.com/joyent/triton-go/compute/errors.go deleted file mode 100644 index ae2a4bf5c..000000000 --- a/vendor/github.com/joyent/triton-go/compute/errors.go +++ /dev/null @@ -1,115 +0,0 @@ -package compute - -import ( - "github.com/hashicorp/errwrap" - "github.com/joyent/triton-go/client" -) - -// IsBadRequest tests whether err wraps a client.TritonError with -// code BadRequest -func IsBadRequest(err error) bool { - return isSpecificError(err, "BadRequest") -} - -// IsInternalError tests whether err wraps a client.TritonError with -// code InternalError -func IsInternalError(err error) bool { - return isSpecificError(err, "InternalError") -} - -// IsInUseError tests whether err wraps a client.TritonError with -// code InUseError -func IsInUseError(err error) bool { - return isSpecificError(err, "InUseError") -} - -// IsInvalidArgument tests whether err wraps a client.TritonError with -// code InvalidArgument -func IsInvalidArgument(err 
error) bool { - return isSpecificError(err, "InvalidArgument") -} - -// IsInvalidCredentials tests whether err wraps a client.TritonError with -// code InvalidCredentials -func IsInvalidCredentials(err error) bool { - return isSpecificError(err, "InvalidCredentials") -} - -// IsInvalidHeader tests whether err wraps a client.TritonError with -// code InvalidHeader -func IsInvalidHeader(err error) bool { - return isSpecificError(err, "InvalidHeader") -} - -// IsInvalidVersion tests whether err wraps a client.TritonError with -// code InvalidVersion -func IsInvalidVersion(err error) bool { - return isSpecificError(err, "InvalidVersion") -} - -// IsMissingParameter tests whether err wraps a client.TritonError with -// code MissingParameter -func IsMissingParameter(err error) bool { - return isSpecificError(err, "MissingParameter") -} - -// IsNotAuthorized tests whether err wraps a client.TritonError with -// code NotAuthorized -func IsNotAuthorized(err error) bool { - return isSpecificError(err, "NotAuthorized") -} - -// IsRequestThrottled tests whether err wraps a client.TritonError with -// code RequestThrottled -func IsRequestThrottled(err error) bool { - return isSpecificError(err, "RequestThrottled") -} - -// IsRequestTooLarge tests whether err wraps a client.TritonError with -// code RequestTooLarge -func IsRequestTooLarge(err error) bool { - return isSpecificError(err, "RequestTooLarge") -} - -// IsRequestMoved tests whether err wraps a client.TritonError with -// code RequestMoved -func IsRequestMoved(err error) bool { - return isSpecificError(err, "RequestMoved") -} - -// IsResourceFound tests whether err wraps a client.TritonError with code ResourceFound -func IsResourceFound(err error) bool { - return isSpecificError(err, "ResourceFound") -} - -// IsResourceNotFound tests whether err wraps a client.TritonError with -// code ResourceNotFound -func IsResourceNotFound(err error) bool { - return isSpecificError(err, "ResourceNotFound") -} - -// IsUnknownError tests whether err wraps a client.TritonError with -// code UnknownError -func IsUnknownError(err error) bool { - return isSpecificError(err, "UnknownError") -} - -// isSpecificError checks whether the error represented by err wraps -// an underlying client.TritonError with code errorCode. -func isSpecificError(err error, errorCode string) bool { - if err == nil { - return false - } - - tritonErrorInterface := errwrap.GetType(err.(error), &client.TritonError{}) - if tritonErrorInterface == nil { - return false - } - - tritonErr := tritonErrorInterface.(*client.TritonError) - if tritonErr.Code == errorCode { - return true - } - - return false -} diff --git a/vendor/github.com/joyent/triton-go/compute/images.go b/vendor/github.com/joyent/triton-go/compute/images.go index b60f05e53..8dcae0a89 100644 --- a/vendor/github.com/joyent/triton-go/compute/images.go +++ b/vendor/github.com/joyent/triton-go/compute/images.go @@ -1,15 +1,23 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. 
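// Editor's sketch (not part of the vendored diff): with compute/errors.go removed
// above, callers classify failures through the shared triton-go/errors package
// instead of the old per-package helpers, and the reworked DataCentersClient.Get
// now reports a missing data center as an APIError with code "ResourceNotFound"
// rather than relying on a 302 Location header. Package and function names below
// are illustrative only.
package example

import (
	"context"
	"fmt"

	"github.com/joyent/triton-go/compute"
	tritonerrors "github.com/joyent/triton-go/errors"
)

// describeDataCenter treats a missing data center as a non-fatal condition.
func describeDataCenter(ctx context.Context, dc *compute.DataCentersClient, name string) error {
	found, err := dc.Get(ctx, &compute.GetDataCenterInput{Name: name})
	if tritonerrors.IsResourceNotFoundError(err) {
		fmt.Printf("data center %q does not exist\n", name)
		return nil
	}
	if err != nil {
		return err
	}
	fmt.Printf("data center %q lives at %s\n", found.Name, found.URL)
	return nil
}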
+// + package compute import ( "context" "encoding/json" - "fmt" "net/http" "net/url" + "path" "time" - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/pkg/errors" ) type ImagesClient struct { @@ -39,7 +47,6 @@ type Image struct { Tags map[string]string `json:"tags"` EULA string `json:"eula"` ACL []string `json:"acl"` - Error client.TritonError `json:"error"` } type ListImagesInput struct { @@ -53,7 +60,7 @@ type ListImagesInput struct { } func (c *ImagesClient) List(ctx context.Context, input *ListImagesInput) ([]*Image, error) { - path := fmt.Sprintf("/%s/images", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "images") query := &url.Values{} if input.Name != "" { @@ -80,7 +87,7 @@ func (c *ImagesClient) List(ctx context.Context, input *ListImagesInput) ([]*Ima reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, Query: query, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -88,13 +95,13 @@ func (c *ImagesClient) List(ctx context.Context, input *ListImagesInput) ([]*Ima defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing List request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list images") } var result []*Image decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding List response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list images response") } return result, nil @@ -105,23 +112,23 @@ type GetImageInput struct { } func (c *ImagesClient) Get(ctx context.Context, input *GetImageInput) (*Image, error) { - path := fmt.Sprintf("/%s/images/%s", c.client.AccountName, input.ImageID) + fullPath := path.Join("/", c.client.AccountName, "images", input.ImageID) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing Get request: {{err}}", err) + return nil, errors.Wrap(err, "unable to get image") } var result *Image decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding Get response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode get image response") } return result, nil @@ -132,17 +139,17 @@ type DeleteImageInput struct { } func (c *ImagesClient) Delete(ctx context.Context, input *DeleteImageInput) error { - path := fmt.Sprintf("/%s/images/%s", c.client.AccountName, input.ImageID) + fullPath := path.Join("/", c.client.AccountName, "images", input.ImageID) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing Delete request: {{err}}", err) + return errors.Wrap(err, "unable to delete image") } return nil @@ -160,14 +167,13 @@ type MantaLocation struct { } func (c *ImagesClient) Export(ctx context.Context, input *ExportImageInput) (*MantaLocation, error) { - path := fmt.Sprintf("/%s/images/%s", c.client.AccountName, input.ImageID) + fullPath := path.Join("/", c.client.AccountName, "images", input.ImageID) query := &url.Values{} query.Set("action", "export") - query.Set("manta_path", input.MantaPath) reqInputs := client.RequestInput{ - 
Method: http.MethodGet, - Path: path, + Method: http.MethodPost, + Path: fullPath, Query: query, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -175,13 +181,13 @@ func (c *ImagesClient) Export(ctx context.Context, input *ExportImageInput) (*Ma defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing Get request: {{err}}", err) + return nil, errors.Wrap(err, "unable to export image") } var result *MantaLocation decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding Get response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode export image response") } return result, nil @@ -199,10 +205,10 @@ type CreateImageFromMachineInput struct { } func (c *ImagesClient) CreateFromMachine(ctx context.Context, input *CreateImageFromMachineInput) (*Image, error) { - path := fmt.Sprintf("/%s/images", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "images") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -210,13 +216,13 @@ func (c *ImagesClient) CreateFromMachine(ctx context.Context, input *CreateImage defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing CreateFromMachine request: {{err}}", err) + return nil, errors.Wrap(err, "unable to create machine from image") } var result *Image decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding CreateFromMachine response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode create machine from image response") } return result, nil @@ -224,7 +230,7 @@ func (c *ImagesClient) CreateFromMachine(ctx context.Context, input *CreateImage type UpdateImageInput struct { ImageID string `json:"-"` - Name string `json:"name"` + Name string `json:"name,omitempty"` Version string `json:"version,omitempty"` Description string `json:"description,omitempty"` HomePage string `json:"homepage,omitempty"` @@ -234,13 +240,13 @@ type UpdateImageInput struct { } func (c *ImagesClient) Update(ctx context.Context, input *UpdateImageInput) (*Image, error) { - path := fmt.Sprintf("/%s/images/%s", c.client.AccountName, input.ImageID) + fullPath := path.Join("/", c.client.AccountName, "images", input.ImageID) query := &url.Values{} query.Set("action", "update") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Query: query, Body: input, } @@ -249,13 +255,13 @@ func (c *ImagesClient) Update(ctx context.Context, input *UpdateImageInput) (*Im defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing Update request: {{err}}", err) + return nil, errors.Wrap(err, "unable to update image") } var result *Image decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding Update response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode update image response") } return result, nil diff --git a/vendor/github.com/joyent/triton-go/compute/instances.go b/vendor/github.com/joyent/triton-go/compute/instances.go index 337e6a482..eae65804e 100644 --- a/vendor/github.com/joyent/triton-go/compute/instances.go +++ b/vendor/github.com/joyent/triton-go/compute/instances.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. 
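// Editor's sketch (not part of the vendored diff): basic usage of the ImagesClient
// touched above. With "name" now tagged omitempty on UpdateImageInput, an update
// can send a single field (here Description) without also submitting an empty
// name. Package and function names below are illustrative only.
package example

import (
	"context"

	"github.com/joyent/triton-go/compute"
)

func annotateImage(ctx context.Context, images *compute.ImagesClient, imageID, note string) (*compute.Image, error) {
	// Confirm the image exists before attempting the update.
	if _, err := images.Get(ctx, &compute.GetImageInput{ImageID: imageID}); err != nil {
		return nil, err
	}
	// Only Description is populated; Name stays empty and is dropped from the
	// request body by the new omitempty tag.
	return images.Update(ctx, &compute.UpdateImageInput{
		ImageID:     imageID,
		Description: note,
	})
}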
+// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package compute import ( @@ -7,11 +15,13 @@ import ( "io/ioutil" "net/http" "net/url" + "path" "strings" "time" - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/joyent/triton-go/errors" + pkgerrors "github.com/pkg/errors" ) type InstancesClient struct { @@ -33,6 +43,13 @@ type InstanceCNS struct { Services []string } +type InstanceVolume struct { + Name string `json:"name,omitempty"` + Type string `json:"type,omitempty"` + Mode string `json:"mode,omitempty"` + Mountpoint string `json:"mountpoint,omitempty"` +} + type Instance struct { ID string `json:"id"` Name string `json:"name"` @@ -88,38 +105,40 @@ func (gmi *GetInstanceInput) Validate() error { func (c *InstancesClient) Get(ctx context.Context, input *GetInstanceInput) (*Instance, error) { if err := input.Validate(); err != nil { - return nil, errwrap.Wrapf("unable to get machine: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to get machine") } - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) - if response != nil { + if response == nil { + return nil, pkgerrors.Wrap(err, "unable to get machine") + } + if response.Body != nil { defer response.Body.Close() } - if response == nil || response.StatusCode == http.StatusNotFound || response.StatusCode == http.StatusGone { - return nil, &client.TritonError{ + if response.StatusCode == http.StatusNotFound || response.StatusCode == http.StatusGone { + return nil, &errors.APIError{ StatusCode: response.StatusCode, Code: "ResourceNotFound", } } if err != nil { - return nil, errwrap.Wrapf("Error executing Get request: {{err}}", - c.client.DecodeError(response.StatusCode, response.Body)) + return nil, pkgerrors.Wrap(err, "unable to get machine") } var result *_Instance decoder := json.NewDecoder(response.Body) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding Get response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode get machine response") } native, err := result.toNative() if err != nil { - return nil, errwrap.Wrapf("unable to convert API response for instances to native type: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode get machine response") } return native, nil @@ -141,7 +160,7 @@ type ListInstancesInput struct { } func (c *InstancesClient) List(ctx context.Context, input *ListInstancesInput) ([]*Instance, error) { - path := fmt.Sprintf("/%s/machines", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "machines") query := &url.Values{} if input.Brand != "" { @@ -177,25 +196,25 @@ func (c *InstancesClient) List(ctx context.Context, input *ListInstancesInput) ( reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, Query: query, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if err != nil { - return nil, errwrap.Wrapf("Error executing List request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to list machines") } var results []*_Instance decoder := json.NewDecoder(respReader) if err = decoder.Decode(&results); err != nil { - 
return nil, errwrap.Wrapf("Error decoding List response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode list machines response") } machines := make([]*Instance, 0, len(results)) for _, machineAPI := range results { native, err := machineAPI.toNative() if err != nil { - return nil, errwrap.Wrapf("unable to convert API response for instances to native type: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode list machines response") } machines = append(machines, native) } @@ -216,6 +235,7 @@ type CreateInstanceInput struct { Tags map[string]string FirewallEnabled bool CNS InstanceCNS + Volumes []InstanceVolume } func (input *CreateInstanceInput) toAPI() (map[string]interface{}, error) { @@ -240,6 +260,10 @@ func (input *CreateInstanceInput) toAPI() (map[string]interface{}, error) { result["networks"] = input.Networks } + if len(input.Volumes) > 0 { + result["volumes"] = input.Volumes + } + // validate that affinity and locality are not included together hasAffinity := len(input.Affinity) > 0 hasLocality := len(input.LocalityNear) > 0 || len(input.LocalityFar) > 0 @@ -283,15 +307,15 @@ func (input *CreateInstanceInput) toAPI() (map[string]interface{}, error) { } func (c *InstancesClient) Create(ctx context.Context, input *CreateInstanceInput) (*Instance, error) { - path := fmt.Sprintf("/%s/machines", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "machines") body, err := input.toAPI() if err != nil { - return nil, errwrap.Wrapf("Error preparing Create request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to prepare for machine creation") } reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: body, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -299,13 +323,13 @@ func (c *InstancesClient) Create(ctx context.Context, input *CreateInstanceInput defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing Create request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to create machine") } var result *Instance decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding Create response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode create machine response") } return result, nil @@ -316,14 +340,14 @@ type DeleteInstanceInput struct { } func (c *InstancesClient) Delete(ctx context.Context, input *DeleteInstanceInput) error { - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) if response == nil { - return fmt.Errorf("Delete request has empty response") + return pkgerrors.Wrap(err, "unable to delete machine") } if response.Body != nil { defer response.Body.Close() @@ -332,8 +356,7 @@ func (c *InstancesClient) Delete(ctx context.Context, input *DeleteInstanceInput return nil } if err != nil { - return errwrap.Wrapf("Error executing Delete request: {{err}}", - c.client.DecodeError(response.StatusCode, response.Body)) + return pkgerrors.Wrap(err, "unable to decode delete machine response") } return nil @@ -344,12 +367,15 @@ type DeleteTagsInput struct { } func (c *InstancesClient) DeleteTags(ctx context.Context, input *DeleteTagsInput) error { - path := fmt.Sprintf("/%s/machines/%s/tags", 
c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "tags") reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return pkgerrors.Wrap(err, "unable to delete tags from machine") + } if response == nil { return fmt.Errorf("DeleteTags request has empty response") } @@ -359,10 +385,6 @@ func (c *InstancesClient) DeleteTags(ctx context.Context, input *DeleteTagsInput if response.StatusCode == http.StatusNotFound { return nil } - if err != nil { - return errwrap.Wrapf("Error executing DeleteTags request: {{err}}", - c.client.DecodeError(response.StatusCode, response.Body)) - } return nil } @@ -373,12 +395,15 @@ type DeleteTagInput struct { } func (c *InstancesClient) DeleteTag(ctx context.Context, input *DeleteTagInput) error { - path := fmt.Sprintf("/%s/machines/%s/tags/%s", c.client.AccountName, input.ID, input.Key) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "tags", input.Key) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return pkgerrors.Wrap(err, "unable to delete tag from machine") + } if response == nil { return fmt.Errorf("DeleteTag request has empty response") } @@ -388,10 +413,6 @@ func (c *InstancesClient) DeleteTag(ctx context.Context, input *DeleteTagInput) if response.StatusCode == http.StatusNotFound { return nil } - if err != nil { - return errwrap.Wrapf("Error executing DeleteTag request: {{err}}", - c.client.DecodeError(response.StatusCode, response.Body)) - } return nil } @@ -402,7 +423,7 @@ type RenameInstanceInput struct { } func (c *InstancesClient) Rename(ctx context.Context, input *RenameInstanceInput) error { - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID) params := &url.Values{} params.Set("action", "rename") @@ -410,7 +431,7 @@ func (c *InstancesClient) Rename(ctx context.Context, input *RenameInstanceInput reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Query: params, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -418,7 +439,7 @@ func (c *InstancesClient) Rename(ctx context.Context, input *RenameInstanceInput defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing Rename request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to rename machine") } return nil @@ -442,10 +463,10 @@ func (input ReplaceTagsInput) toAPI() map[string]interface{} { } func (c *InstancesClient) ReplaceTags(ctx context.Context, input *ReplaceTagsInput) error { - path := fmt.Sprintf("/%s/machines/%s/tags", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "tags") reqInputs := client.RequestInput{ Method: http.MethodPut, - Path: path, + Path: fullPath, Body: input.toAPI(), } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -453,7 +474,7 @@ func (c *InstancesClient) ReplaceTags(ctx context.Context, input *ReplaceTagsInp defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing ReplaceTags request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to replace machine tags") } return nil @@ -465,10 +486,10 @@ type AddTagsInput struct { } func (c *InstancesClient) AddTags(ctx 
context.Context, input *AddTagsInput) error { - path := fmt.Sprintf("/%s/machines/%s/tags", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "tags") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input.Tags, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -476,7 +497,7 @@ func (c *InstancesClient) AddTags(ctx context.Context, input *AddTagsInput) erro defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing AddTags request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to add tags to machine") } return nil @@ -488,23 +509,23 @@ type GetTagInput struct { } func (c *InstancesClient) GetTag(ctx context.Context, input *GetTagInput) (string, error) { - path := fmt.Sprintf("/%s/machines/%s/tags/%s", c.client.AccountName, input.ID, input.Key) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "tags", input.Key) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if err != nil { + return "", pkgerrors.Wrap(err, "unable to get tag") + } if respReader != nil { defer respReader.Close() } - if err != nil { - return "", errwrap.Wrapf("Error executing GetTag request: {{err}}", err) - } var result string decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return "", errwrap.Wrapf("Error decoding GetTag response: {{err}}", err) + return "", pkgerrors.Wrap(err, "unable to decode get tag response") } return result, nil @@ -515,23 +536,23 @@ type ListTagsInput struct { } func (c *InstancesClient) ListTags(ctx context.Context, input *ListTagsInput) (map[string]interface{}, error) { - path := fmt.Sprintf("/%s/machines/%s/tags", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "tags") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListTags request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to list machine tags") } var result map[string]interface{} decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListTags response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable decode list machine tags response") } _, tags := tagsExtractMeta(result) @@ -549,30 +570,28 @@ func (c *InstancesClient) GetMetadata(ctx context.Context, input *GetMetadataInp return "", fmt.Errorf("Missing metadata Key from input: %s", input.Key) } - path := fmt.Sprintf("/%s/machines/%s/metadata/%s", c.client.AccountName, input.ID, input.Key) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "metadata", input.Key) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return "", pkgerrors.Wrap(err, "unable to get machine metadata") + } if response != nil { defer response.Body.Close() } if response.StatusCode == http.StatusNotFound || response.StatusCode == http.StatusGone { - return "", &client.TritonError{ + return "", &errors.APIError{ StatusCode: response.StatusCode, Code: "ResourceNotFound", } } - if err != nil { - return "", 
errwrap.Wrapf("Error executing Get request: {{err}}", - c.client.DecodeError(response.StatusCode, response.Body)) - } body, err := ioutil.ReadAll(response.Body) if err != nil { - return "", errwrap.Wrapf("Error unwrapping request body: {{err}}", - c.client.DecodeError(response.StatusCode, response.Body)) + return "", pkgerrors.Wrap(err, "unable to decode get machine metadata response") } return fmt.Sprintf("%s", body), nil @@ -584,7 +603,7 @@ type ListMetadataInput struct { } func (c *InstancesClient) ListMetadata(ctx context.Context, input *ListMetadataInput) (map[string]string, error) { - path := fmt.Sprintf("/%s/machines/%s/metadata", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "metadata") query := &url.Values{} if input.Credentials { @@ -593,7 +612,7 @@ func (c *InstancesClient) ListMetadata(ctx context.Context, input *ListMetadataI reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, Query: query, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -601,13 +620,13 @@ func (c *InstancesClient) ListMetadata(ctx context.Context, input *ListMetadataI defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListMetadata request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to list machine metadata") } var result map[string]string decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListMetadata response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode list machine metadata response") } return result, nil @@ -619,10 +638,10 @@ type UpdateMetadataInput struct { } func (c *InstancesClient) UpdateMetadata(ctx context.Context, input *UpdateMetadataInput) (map[string]string, error) { - path := fmt.Sprintf("/%s/machines/%s/metadata", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "metadata") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input.Metadata, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -630,13 +649,13 @@ func (c *InstancesClient) UpdateMetadata(ctx context.Context, input *UpdateMetad defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing UpdateMetadata request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to update machine metadata") } var result map[string]string decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding UpdateMetadata response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode update machine metadata response") } return result, nil @@ -653,18 +672,18 @@ func (c *InstancesClient) DeleteMetadata(ctx context.Context, input *DeleteMetad return fmt.Errorf("Missing metadata Key from input: %s", input.Key) } - path := fmt.Sprintf("/%s/machines/%s/metadata/%s", c.client.AccountName, input.ID, input.Key) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "metadata", input.Key) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if err != nil { + return pkgerrors.Wrap(err, "unable to delete machine metadata") + } if respReader != nil { defer respReader.Close() } - if err != nil { - return errwrap.Wrapf("Error executing DeleteMetadata request: {{err}}", 
err) - } return nil } @@ -675,18 +694,18 @@ type DeleteAllMetadataInput struct { // DeleteAllMetadata deletes all metadata keys from this instance func (c *InstancesClient) DeleteAllMetadata(ctx context.Context, input *DeleteAllMetadataInput) error { - path := fmt.Sprintf("/%s/machines/%s/metadata", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID, "metadata") reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if err != nil { + return pkgerrors.Wrap(err, "unable to delete all machine metadata") + } if respReader != nil { defer respReader.Close() } - if err != nil { - return errwrap.Wrapf("Error executing DeleteAllMetadata request: {{err}}", err) - } return nil } @@ -697,7 +716,7 @@ type ResizeInstanceInput struct { } func (c *InstancesClient) Resize(ctx context.Context, input *ResizeInstanceInput) error { - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID) params := &url.Values{} params.Set("action", "resize") @@ -705,7 +724,7 @@ func (c *InstancesClient) Resize(ctx context.Context, input *ResizeInstanceInput reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Query: params, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -713,7 +732,7 @@ func (c *InstancesClient) Resize(ctx context.Context, input *ResizeInstanceInput defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing Resize request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to resize machine") } return nil @@ -724,14 +743,14 @@ type EnableFirewallInput struct { } func (c *InstancesClient) EnableFirewall(ctx context.Context, input *EnableFirewallInput) error { - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID) params := &url.Values{} params.Set("action", "enable_firewall") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Query: params, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -739,7 +758,7 @@ func (c *InstancesClient) EnableFirewall(ctx context.Context, input *EnableFirew defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing EnableFirewall request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to enable machine firewall") } return nil @@ -750,14 +769,14 @@ type DisableFirewallInput struct { } func (c *InstancesClient) DisableFirewall(ctx context.Context, input *DisableFirewallInput) error { - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.ID) params := &url.Values{} params.Set("action", "disable_firewall") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Query: params, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -765,7 +784,7 @@ func (c *InstancesClient) DisableFirewall(ctx context.Context, input *DisableFir defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing DisableFirewall request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to disable machine firewall") } return nil @@ -776,23 +795,23 @@ type ListNICsInput struct { } func (c *InstancesClient) ListNICs(ctx context.Context, input 
*ListNICsInput) ([]*NIC, error) { - path := fmt.Sprintf("/%s/machines/%s/nics", c.client.AccountName, input.InstanceID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.InstanceID, "nics") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListNICs request: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to list machine NICs") } var result []*NIC decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListNICs response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode list machine NICs response") } return result, nil @@ -805,30 +824,30 @@ type GetNICInput struct { func (c *InstancesClient) GetNIC(ctx context.Context, input *GetNICInput) (*NIC, error) { mac := strings.Replace(input.MAC, ":", "", -1) - path := fmt.Sprintf("/%s/machines/%s/nics/%s", c.client.AccountName, input.InstanceID, mac) + fullPath := path.Join("/", c.client.AccountName, "machines", input.InstanceID, "nics", mac) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return nil, pkgerrors.Wrap(err, "unable to get machine NIC") + } if response != nil { defer response.Body.Close() } switch response.StatusCode { case http.StatusNotFound: - return nil, &client.TritonError{ + return nil, &errors.APIError{ StatusCode: response.StatusCode, Code: "ResourceNotFound", } } - if err != nil { - return nil, errwrap.Wrapf("Error executing GetNIC request: {{err}}", err) - } var result *NIC decoder := json.NewDecoder(response.Body) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListNICs response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode get machine NIC response") } return result, nil @@ -845,32 +864,32 @@ type AddNICInput struct { // until its state is set to "running". Only one NIC per network may exist. // Warning: this operation causes the instance to restart. func (c *InstancesClient) AddNIC(ctx context.Context, input *AddNICInput) (*NIC, error) { - path := fmt.Sprintf("/%s/machines/%s/nics", c.client.AccountName, input.InstanceID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.InstanceID, "nics") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return nil, pkgerrors.Wrap(err, "unable to add NIC to machine") + } if response != nil { defer response.Body.Close() } switch response.StatusCode { case http.StatusFound: - return nil, &client.TritonError{ + return nil, &errors.APIError{ StatusCode: response.StatusCode, Code: "ResourceFound", Message: response.Header.Get("Location"), } } - if err != nil { - return nil, errwrap.Wrapf("Error executing AddNIC request: {{err}}", err) - } var result *NIC decoder := json.NewDecoder(response.Body) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding AddNIC response: {{err}}", err) + return nil, pkgerrors.Wrap(err, "unable to decode add NIC to machine response") } return result, nil @@ -887,25 +906,28 @@ type RemoveNICInput struct { // machine to restart. 
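// Editor's sketch (not part of the vendored diff): GetNIC above now surfaces a
// missing NIC as an APIError with code "ResourceNotFound", so callers can test
// for that condition instead of string-matching error text; the RemoveNIC method
// that follows behaves the same way. Package and function names below are
// illustrative only.
package example

import (
	"context"

	"github.com/joyent/triton-go/compute"
	tritonerrors "github.com/joyent/triton-go/errors"
)

// findNIC returns the NIC with the given MAC, or (nil, nil) when the instance
// has no such NIC.
func findNIC(ctx context.Context, instances *compute.InstancesClient, instanceID, mac string) (*compute.NIC, error) {
	nic, err := instances.GetNIC(ctx, &compute.GetNICInput{
		InstanceID: instanceID,
		MAC:        mac,
	})
	if tritonerrors.IsResourceNotFoundError(err) {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	return nic, nil
}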
func (c *InstancesClient) RemoveNIC(ctx context.Context, input *RemoveNICInput) error { mac := strings.Replace(input.MAC, ":", "", -1) - path := fmt.Sprintf("/%s/machines/%s/nics/%s", c.client.AccountName, input.InstanceID, mac) + fullPath := path.Join("/", c.client.AccountName, "machines", input.InstanceID, "nics", mac) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } response, err := c.client.ExecuteRequestRaw(ctx, reqInputs) - if response != nil { + if err != nil { + return pkgerrors.Wrap(err, "unable to remove NIC from machine") + } + if response == nil { + return pkgerrors.Wrap(err, "unable to remove NIC from machine") + } + if response.Body != nil { defer response.Body.Close() } switch response.StatusCode { case http.StatusNotFound: - return &client.TritonError{ + return &errors.APIError{ StatusCode: response.StatusCode, Code: "ResourceNotFound", } } - if err != nil { - return errwrap.Wrapf("Error executing RemoveNIC request: {{err}}", err) - } return nil } @@ -915,14 +937,14 @@ type StopInstanceInput struct { } func (c *InstancesClient) Stop(ctx context.Context, input *StopInstanceInput) error { - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.InstanceID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.InstanceID) params := &url.Values{} params.Set("action", "stop") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Query: params, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -930,7 +952,7 @@ func (c *InstancesClient) Stop(ctx context.Context, input *StopInstanceInput) er defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing Stop request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to stop machine") } return nil @@ -941,14 +963,14 @@ type StartInstanceInput struct { } func (c *InstancesClient) Start(ctx context.Context, input *StartInstanceInput) error { - path := fmt.Sprintf("/%s/machines/%s", c.client.AccountName, input.InstanceID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.InstanceID) params := &url.Values{} params.Set("action", "start") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Query: params, } respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) @@ -956,7 +978,33 @@ func (c *InstancesClient) Start(ctx context.Context, input *StartInstanceInput) defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing Start request: {{err}}", err) + return pkgerrors.Wrap(err, "unable to start machine") + } + + return nil +} + +type RebootInstanceInput struct { + InstanceID string +} + +func (c *InstancesClient) Reboot(ctx context.Context, input *RebootInstanceInput) error { + fullPath := path.Join("/", c.client.AccountName, "machines", input.InstanceID) + + params := &url.Values{} + params.Set("action", "reboot") + + reqInputs := client.RequestInput{ + Method: http.MethodPost, + Path: fullPath, + Query: params, + } + respReader, err := c.client.ExecuteRequestURIParams(ctx, reqInputs) + if respReader != nil { + defer respReader.Close() + } + if err != nil { + return pkgerrors.Wrap(err, "unable to reboot machine") } return nil diff --git a/vendor/github.com/joyent/triton-go/compute/packages.go b/vendor/github.com/joyent/triton-go/compute/packages.go index f18407e45..e3bfb3cbe 100644 --- a/vendor/github.com/joyent/triton-go/compute/packages.go +++ 
b/vendor/github.com/joyent/triton-go/compute/packages.go @@ -1,13 +1,21 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package compute import ( "context" "encoding/json" - "fmt" "net/http" + "path" - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/pkg/errors" ) type PackagesClient struct { @@ -40,10 +48,10 @@ type ListPackagesInput struct { } func (c *PackagesClient) List(ctx context.Context, input *ListPackagesInput) ([]*Package, error) { - path := fmt.Sprintf("/%s/packages", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "packages") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -51,13 +59,13 @@ func (c *PackagesClient) List(ctx context.Context, input *ListPackagesInput) ([] defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing List request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list packages") } var result []*Package decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding List response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list packages response") } return result, nil @@ -68,23 +76,23 @@ type GetPackageInput struct { } func (c *PackagesClient) Get(ctx context.Context, input *GetPackageInput) (*Package, error) { - path := fmt.Sprintf("/%s/packages/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "packages", input.ID) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing Get request: {{err}}", err) + return nil, errors.Wrap(err, "unable to get package") } var result *Package decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding Get response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode get package response") } return result, nil diff --git a/vendor/github.com/joyent/triton-go/compute/ping.go b/vendor/github.com/joyent/triton-go/compute/ping.go new file mode 100644 index 000000000..f346e5c26 --- /dev/null +++ b/vendor/github.com/joyent/triton-go/compute/ping.go @@ -0,0 +1,58 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + +package compute + +import ( + "context" + "encoding/json" + "net/http" + + "github.com/joyent/triton-go/client" + pkgerrors "github.com/pkg/errors" +) + +const pingEndpoint = "/--ping" + +type CloudAPI struct { + Versions []string `json:"versions"` +} + +type PingOutput struct { + Ping string `json:"ping"` + CloudAPI CloudAPI `json:"cloudapi"` +} + +// Ping sends a request to the '/--ping' endpoint and returns a `pong` as well +// as a list of API version numbers your instance of CloudAPI is presenting. 
+func (c *ComputeClient) Ping(ctx context.Context) (*PingOutput, error) { + reqInputs := client.RequestInput{ + Method: http.MethodGet, + Path: pingEndpoint, + } + response, err := c.Client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return nil, pkgerrors.Wrap(err, "unable to ping") + } + if response == nil { + return nil, pkgerrors.Wrap(err, "unable to ping") + } + if response.Body != nil { + defer response.Body.Close() + } + + var result *PingOutput + decoder := json.NewDecoder(response.Body) + if err = decoder.Decode(&result); err != nil { + if err != nil { + return nil, pkgerrors.Wrap(err, "unable to decode ping response") + } + } + + return result, nil +} diff --git a/vendor/github.com/joyent/triton-go/compute/services.go b/vendor/github.com/joyent/triton-go/compute/services.go index af1b017e0..3597da9b1 100644 --- a/vendor/github.com/joyent/triton-go/compute/services.go +++ b/vendor/github.com/joyent/triton-go/compute/services.go @@ -1,14 +1,22 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package compute import ( "context" "encoding/json" - "fmt" "net/http" + "path" "sort" - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/pkg/errors" ) type ServicesClient struct { @@ -23,23 +31,23 @@ type Service struct { type ListServicesInput struct{} func (c *ServicesClient) List(ctx context.Context, _ *ListServicesInput) ([]*Service, error) { - path := fmt.Sprintf("/%s/services", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "services") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing List request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list services") } var intermediate map[string]string decoder := json.NewDecoder(respReader) if err = decoder.Decode(&intermediate); err != nil { - return nil, errwrap.Wrapf("Error decoding List response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list services response") } keys := make([]string, len(intermediate)) diff --git a/vendor/github.com/joyent/triton-go/compute/snapshots.go b/vendor/github.com/joyent/triton-go/compute/snapshots.go new file mode 100644 index 000000000..f30d394ba --- /dev/null +++ b/vendor/github.com/joyent/triton-go/compute/snapshots.go @@ -0,0 +1,164 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. 
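// Editor's sketch (not part of the vendored diff): exercising the new
// ComputeClient.Ping helper added in ping.go above. A healthy endpoint answers
// "pong" and advertises the CloudAPI versions it speaks. Package and function
// names below are illustrative only.
package example

import (
	"context"
	"fmt"

	"github.com/joyent/triton-go/compute"
)

func checkCloudAPI(ctx context.Context, c *compute.ComputeClient) error {
	out, err := c.Ping(ctx)
	if err != nil {
		return err
	}
	if out.Ping != "pong" {
		return fmt.Errorf("unexpected ping response %q", out.Ping)
	}
	fmt.Printf("CloudAPI versions: %v\n", out.CloudAPI.Versions)
	return nil
}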
+// + +package compute + +import ( + "context" + "encoding/json" + "net/http" + "path" + "time" + + "github.com/joyent/triton-go/client" + "github.com/pkg/errors" +) + +type SnapshotsClient struct { + client *client.Client +} + +type Snapshot struct { + Name string + State string + Created time.Time + Updated time.Time +} + +type ListSnapshotsInput struct { + MachineID string +} + +func (c *SnapshotsClient) List(ctx context.Context, input *ListSnapshotsInput) ([]*Snapshot, error) { + fullPath := path.Join("/", c.client.AccountName, "machines", input.MachineID, "snapshots") + reqInputs := client.RequestInput{ + Method: http.MethodGet, + Path: fullPath, + } + respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if respReader != nil { + defer respReader.Close() + } + if err != nil { + return nil, errors.Wrap(err, "unable to list snapshots") + } + + var result []*Snapshot + decoder := json.NewDecoder(respReader) + if err = decoder.Decode(&result); err != nil { + return nil, errors.Wrap(err, "unable to decode list snapshots response") + } + + return result, nil +} + +type GetSnapshotInput struct { + MachineID string + Name string +} + +func (c *SnapshotsClient) Get(ctx context.Context, input *GetSnapshotInput) (*Snapshot, error) { + fullPath := path.Join("/", c.client.AccountName, "machines", input.MachineID, "snapshots", input.Name) + reqInputs := client.RequestInput{ + Method: http.MethodGet, + Path: fullPath, + } + respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if respReader != nil { + defer respReader.Close() + } + if err != nil { + return nil, errors.Wrap(err, "unable to get snapshot") + } + + var result *Snapshot + decoder := json.NewDecoder(respReader) + if err = decoder.Decode(&result); err != nil { + return nil, errors.Wrap(err, "unable to decode get snapshot response") + } + + return result, nil +} + +type DeleteSnapshotInput struct { + MachineID string + Name string +} + +func (c *SnapshotsClient) Delete(ctx context.Context, input *DeleteSnapshotInput) error { + fullPath := path.Join("/", c.client.AccountName, "machines", input.MachineID, "snapshots", input.Name) + reqInputs := client.RequestInput{ + Method: http.MethodDelete, + Path: fullPath, + } + respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if respReader != nil { + defer respReader.Close() + } + if err != nil { + return errors.Wrap(err, "unable to delete snapshot") + } + + return nil +} + +type StartMachineFromSnapshotInput struct { + MachineID string + Name string +} + +func (c *SnapshotsClient) StartMachine(ctx context.Context, input *StartMachineFromSnapshotInput) error { + fullPath := path.Join("/", c.client.AccountName, "machines", input.MachineID, "snapshots", input.Name) + reqInputs := client.RequestInput{ + Method: http.MethodPost, + Path: fullPath, + } + respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if respReader != nil { + defer respReader.Close() + } + if err != nil { + return errors.Wrap(err, "unable to start machine") + } + + return nil +} + +type CreateSnapshotInput struct { + MachineID string + Name string +} + +func (c *SnapshotsClient) Create(ctx context.Context, input *CreateSnapshotInput) (*Snapshot, error) { + fullPath := path.Join("/", c.client.AccountName, "machines", input.MachineID, "snapshots") + + data := make(map[string]interface{}) + data["name"] = input.Name + + reqInputs := client.RequestInput{ + Method: http.MethodPost, + Path: fullPath, + Body: data, + } + + respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if respReader != nil { + 
defer respReader.Close() + } + if err != nil { + return nil, errors.Wrap(err, "unable to create snapshot") + } + + var result *Snapshot + decoder := json.NewDecoder(respReader) + if err = decoder.Decode(&result); err != nil { + return nil, errors.Wrap(err, "unable to decode create snapshot response") + } + + return result, nil +} diff --git a/vendor/github.com/joyent/triton-go/compute/volumes.go b/vendor/github.com/joyent/triton-go/compute/volumes.go new file mode 100644 index 000000000..af858061f --- /dev/null +++ b/vendor/github.com/joyent/triton-go/compute/volumes.go @@ -0,0 +1,217 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + +package compute + +import ( + "context" + "encoding/json" + "net/http" + "net/url" + "path" + + "github.com/joyent/triton-go/client" + "github.com/pkg/errors" +) + +type VolumesClient struct { + client *client.Client +} + +type Volume struct { + ID string `json:"id"` + Name string `json:"name"` + Owner string `json:"owner_uuid"` + Type string `json:"type"` + FileSystemPath string `json:"filesystem_path"` + Size int64 `json:"size"` + State string `json:"state"` + Networks []string `json:"networks"` + Refs []string `json:"refs"` +} + +type ListVolumesInput struct { + Name string + Size string + State string + Type string +} + +func (c *VolumesClient) List(ctx context.Context, input *ListVolumesInput) ([]*Volume, error) { + fullPath := path.Join("/", c.client.AccountName, "volumes") + + query := &url.Values{} + if input.Name != "" { + query.Set("name", input.Name) + } + if input.Size != "" { + query.Set("size", input.Size) + } + if input.State != "" { + query.Set("state", input.State) + } + if input.Type != "" { + query.Set("type", input.Type) + } + + reqInputs := client.RequestInput{ + Method: http.MethodGet, + Path: fullPath, + Query: query, + } + resp, err := c.client.ExecuteRequest(ctx, reqInputs) + if resp != nil { + defer resp.Close() + } + if err != nil { + return nil, errors.Wrap(err, "unable to list volumes") + } + + var result []*Volume + decoder := json.NewDecoder(resp) + if err = decoder.Decode(&result); err != nil { + return nil, errors.Wrap(err, "unable to decode list volumes response") + } + + return result, nil +} + +type CreateVolumeInput struct { + Name string + Size int64 + Networks []string + Type string +} + +func (input *CreateVolumeInput) toAPI() map[string]interface{} { + result := make(map[string]interface{}, 0) + + if input.Name != "" { + result["name"] = input.Name + } + + if input.Size != 0 { + result["size"] = input.Size + } + + if input.Type != "" { + result["type"] = input.Type + } + + if len(input.Networks) > 0 { + result["networks"] = input.Networks + } + + return result +} + +func (c *VolumesClient) Create(ctx context.Context, input *CreateVolumeInput) (*Volume, error) { + fullPath := path.Join("/", c.client.AccountName, "volumes") + + reqInputs := client.RequestInput{ + Method: http.MethodPost, + Path: fullPath, + Body: input.toAPI(), + } + resp, err := c.client.ExecuteRequest(ctx, reqInputs) + if err != nil { + return nil, errors.Wrap(err, "unable to create volume") + } + if resp != nil { + defer resp.Close() + } + + var result *Volume + decoder := json.NewDecoder(resp) + if err = decoder.Decode(&result); err != nil { + return nil, errors.Wrap(err, "unable to decode create volume response") + } + + 
return result, nil +} + +type DeleteVolumeInput struct { + ID string +} + +func (c *VolumesClient) Delete(ctx context.Context, input *DeleteVolumeInput) error { + fullPath := path.Join("/", c.client.AccountName, "volumes", input.ID) + reqInputs := client.RequestInput{ + Method: http.MethodDelete, + Path: fullPath, + } + resp, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return errors.Wrap(err, "unable to delete volume") + } + if resp == nil { + return errors.Wrap(err, "unable to delete volume") + } + if resp.Body != nil { + defer resp.Body.Close() + } + if resp.StatusCode == http.StatusNotFound || resp.StatusCode == http.StatusGone { + return nil + } + + return nil +} + +type GetVolumeInput struct { + ID string +} + +func (c *VolumesClient) Get(ctx context.Context, input *GetVolumeInput) (*Volume, error) { + fullPath := path.Join("/", c.client.AccountName, "volumes", input.ID) + reqInputs := client.RequestInput{ + Method: http.MethodGet, + Path: fullPath, + } + resp, err := c.client.ExecuteRequestRaw(ctx, reqInputs) + if err != nil { + return nil, errors.Wrap(err, "unable to get volume") + } + if resp == nil { + return nil, errors.Wrap(err, "unable to get volume") + } + if resp.Body != nil { + defer resp.Body.Close() + } + + var result *Volume + decoder := json.NewDecoder(resp.Body) + if err = decoder.Decode(&result); err != nil { + return nil, errors.Wrap(err, "unable to decode get volume volume") + } + + return result, nil +} + +type UpdateVolumeInput struct { + ID string `json:"-"` + Name string `json:"name"` +} + +func (c *VolumesClient) Update(ctx context.Context, input *UpdateVolumeInput) error { + fullPath := path.Join("/", c.client.AccountName, "volumes", input.ID) + + reqInputs := client.RequestInput{ + Method: http.MethodPost, + Path: fullPath, + Body: input, + } + resp, err := c.client.ExecuteRequest(ctx, reqInputs) + if err != nil { + return errors.Wrap(err, "unable to update volume") + } + if resp != nil { + defer resp.Close() + } + + return nil +} diff --git a/vendor/github.com/joyent/triton-go/errors/errors.go b/vendor/github.com/joyent/triton-go/errors/errors.go new file mode 100644 index 000000000..8f2719738 --- /dev/null +++ b/vendor/github.com/joyent/triton-go/errors/errors.go @@ -0,0 +1,297 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + +package errors + +import ( + "fmt" + "net/http" + "strings" + + "github.com/pkg/errors" +) + +// APIError represents an error code and message along with +// the status code of the HTTP request which resulted in the error +// message. Error codes used by the Triton API are listed at +// https://apidocs.joyent.com/cloudapi/#cloudapi-http-responses +// Error codes used by the Manta API are listed at +// https://apidocs.joyent.com/manta/api.html#errors +type APIError struct { + StatusCode int + Code string `json:"code"` + Message string `json:"message"` +} + +// Error implements interface Error on the APIError type. 
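// Editor's sketch (not part of the vendored diff): a small polling helper for the
// new VolumesClient above. The set of volume state strings is not shown in this
// diff, so the desired state is taken as a parameter rather than assumed; package
// and function names are illustrative only.
package example

import (
	"context"
	"time"

	"github.com/joyent/triton-go/compute"
)

// waitForVolumeState polls Get until the volume reaches the requested state or
// the context is cancelled.
func waitForVolumeState(ctx context.Context, volumes *compute.VolumesClient, volumeID, want string) (*compute.Volume, error) {
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()

	for {
		vol, err := volumes.Get(ctx, &compute.GetVolumeInput{ID: volumeID})
		if err != nil {
			return nil, err
		}
		if vol.State == want {
			return vol, nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-ticker.C:
		}
	}
}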
+func (e APIError) Error() string { + return strings.Trim(fmt.Sprintf("%+q", e.Code), `"`) + ": " + strings.Trim(fmt.Sprintf("%+q", e.Message), `"`) +} + +// ClientError represents an error code and message returned +// when connecting to the triton-go client +type ClientError struct { + StatusCode int + Code string `json:"code"` + Message string `json:"message"` +} + +// Error implements interface Error on the ClientError type. +func (e ClientError) Error() string { + return strings.Trim(fmt.Sprintf("%+q", e.Code), `"`) + ": " + strings.Trim(fmt.Sprintf("%+q", e.Message), `"`) +} + +func IsAuthSchemeError(err error) bool { + return IsSpecificError(err, "AuthScheme") +} + +func IsAuthorizationError(err error) bool { + return IsSpecificError(err, "Authorization") +} + +func IsBadRequestError(err error) bool { + return IsSpecificError(err, "BadRequest") +} + +func IsChecksumError(err error) bool { + return IsSpecificError(err, "Checksum") +} + +func IsConcurrentRequestError(err error) bool { + return IsSpecificError(err, "ConcurrentRequest") +} + +func IsContentLengthError(err error) bool { + return IsSpecificError(err, "ContentLength") +} + +func IsContentMD5MismatchError(err error) bool { + return IsSpecificError(err, "ContentMD5Mismatch") +} + +func IsEntityExistsError(err error) bool { + return IsSpecificError(err, "EntityExists") +} + +func IsInvalidArgumentError(err error) bool { + return IsSpecificError(err, "InvalidArgument") +} + +func IsInvalidAuthTokenError(err error) bool { + return IsSpecificError(err, "InvalidAuthToken") +} + +func IsInvalidCredentialsError(err error) bool { + return IsSpecificError(err, "InvalidCredentials") +} + +func IsInvalidDurabilityLevelError(err error) bool { + return IsSpecificError(err, "InvalidDurabilityLevel") +} + +func IsInvalidKeyIdError(err error) bool { + return IsSpecificError(err, "InvalidKeyId") +} + +func IsInvalidJobError(err error) bool { + return IsSpecificError(err, "InvalidJob") +} + +func IsInvalidLinkError(err error) bool { + return IsSpecificError(err, "InvalidLink") +} + +func IsInvalidLimitError(err error) bool { + return IsSpecificError(err, "InvalidLimit") +} + +func IsInvalidSignatureError(err error) bool { + return IsSpecificError(err, "InvalidSignature") +} + +func IsInvalidUpdateError(err error) bool { + return IsSpecificError(err, "InvalidUpdate") +} + +func IsDirectoryDoesNotExistError(err error) bool { + return IsSpecificError(err, "DirectoryDoesNotExist") +} + +func IsDirectoryExistsError(err error) bool { + return IsSpecificError(err, "DirectoryExists") +} + +func IsDirectoryNotEmptyError(err error) bool { + return IsSpecificError(err, "DirectoryNotEmpty") +} + +func IsDirectoryOperationError(err error) bool { + return IsSpecificError(err, "DirectoryOperation") +} + +func IsInternalError(err error) bool { + return IsSpecificError(err, "Internal") +} + +func IsJobNotFoundError(err error) bool { + return IsSpecificError(err, "JobNotFound") +} + +func IsJobStateError(err error) bool { + return IsSpecificError(err, "JobState") +} + +func IsKeyDoesNotExistError(err error) bool { + return IsSpecificError(err, "KeyDoesNotExist") +} + +func IsNotAcceptableError(err error) bool { + return IsSpecificError(err, "NotAcceptable") +} + +func IsNotEnoughSpaceError(err error) bool { + return IsSpecificError(err, "NotEnoughSpace") +} + +func IsLinkNotFoundError(err error) bool { + return IsSpecificError(err, "LinkNotFound") +} + +func IsLinkNotObjectError(err error) bool { + return IsSpecificError(err, "LinkNotObject") +} + +func 
IsLinkRequiredError(err error) bool { + return IsSpecificError(err, "LinkRequired") +} + +func IsParentNotDirectoryError(err error) bool { + return IsSpecificError(err, "ParentNotDirectory") +} + +func IsPreconditionFailedError(err error) bool { + return IsSpecificError(err, "PreconditionFailed") +} + +func IsPreSignedRequestError(err error) bool { + return IsSpecificError(err, "PreSignedRequest") +} + +func IsRequestEntityTooLargeError(err error) bool { + return IsSpecificError(err, "RequestEntityTooLarge") +} + +func IsResourceNotFoundError(err error) bool { + return IsSpecificError(err, "ResourceNotFound") +} + +func IsRootDirectoryError(err error) bool { + return IsSpecificError(err, "RootDirectory") +} + +func IsServiceUnavailableError(err error) bool { + return IsSpecificError(err, "ServiceUnavailable") +} + +func IsSSLRequiredError(err error) bool { + return IsSpecificError(err, "SSLRequired") +} + +func IsUploadTimeoutError(err error) bool { + return IsSpecificError(err, "UploadTimeout") +} + +func IsUserDoesNotExistError(err error) bool { + return IsSpecificError(err, "UserDoesNotExist") +} + +func IsBadRequest(err error) bool { + return IsSpecificError(err, "BadRequest") +} + +func IsInUseError(err error) bool { + return IsSpecificError(err, "InUseError") +} + +func IsInvalidArgument(err error) bool { + return IsSpecificError(err, "InvalidArgument") +} + +func IsInvalidCredentials(err error) bool { + return IsSpecificError(err, "InvalidCredentials") +} + +func IsInvalidHeader(err error) bool { + return IsSpecificError(err, "InvalidHeader") +} + +func IsInvalidVersion(err error) bool { + return IsSpecificError(err, "InvalidVersion") +} + +func IsMissingParameter(err error) bool { + return IsSpecificError(err, "MissingParameter") +} + +func IsNotAuthorized(err error) bool { + return IsSpecificError(err, "NotAuthorized") +} + +func IsRequestThrottled(err error) bool { + return IsSpecificError(err, "RequestThrottled") +} + +func IsRequestTooLarge(err error) bool { + return IsSpecificError(err, "RequestTooLarge") +} + +func IsRequestMoved(err error) bool { + return IsSpecificError(err, "RequestMoved") +} + +func IsResourceFound(err error) bool { + return IsSpecificError(err, "ResourceFound") +} + +func IsResourceNotFound(err error) bool { + return IsSpecificError(err, "ResourceNotFound") +} + +func IsUnknownError(err error) bool { + return IsSpecificError(err, "UnknownError") +} + +func IsEmptyResponse(err error) bool { + return IsSpecificError(err, "EmptyResponse") +} + +func IsStatusNotFoundCode(err error) bool { + return IsSpecificStatusCode(err, http.StatusNotFound) +} + +func IsSpecificError(myError error, errorCode string) bool { + switch err := errors.Cause(myError).(type) { + case *APIError: + if err.Code == errorCode { + return true + } + } + + return false +} + +func IsSpecificStatusCode(myError error, statusCode int) bool { + switch err := errors.Cause(myError).(type) { + case *APIError: + if err.StatusCode == statusCode { + return true + } + } + + return false +} diff --git a/vendor/github.com/joyent/triton-go/network/client.go b/vendor/github.com/joyent/triton-go/network/client.go index dbc25a84c..097af171c 100644 --- a/vendor/github.com/joyent/triton-go/network/client.go +++ b/vendor/github.com/joyent/triton-go/network/client.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. 
If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package network import ( diff --git a/vendor/github.com/joyent/triton-go/network/fabrics.go b/vendor/github.com/joyent/triton-go/network/fabrics.go index 20f1d2fe6..ff83ee1fa 100644 --- a/vendor/github.com/joyent/triton-go/network/fabrics.go +++ b/vendor/github.com/joyent/triton-go/network/fabrics.go @@ -1,14 +1,22 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package network import ( - "encoding/json" - "fmt" - "net/http" - "context" + "context" + "encoding/json" + "net/http" + "path" + "strconv" - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/pkg/errors" ) type FabricsClient struct { @@ -24,23 +32,23 @@ type FabricVLAN struct { type ListVLANsInput struct{} func (c *FabricsClient) ListVLANs(ctx context.Context, _ *ListVLANsInput) ([]*FabricVLAN, error) { - path := fmt.Sprintf("/%s/fabrics/default/vlans", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListVLANs request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list VLANs") } var result []*FabricVLAN decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListVLANs response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list VLANs response") } return result, nil @@ -53,10 +61,10 @@ type CreateVLANInput struct { } func (c *FabricsClient) CreateVLAN(ctx context.Context, input *CreateVLANInput) (*FabricVLAN, error) { - path := fmt.Sprintf("/%s/fabrics/default/vlans", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -64,13 +72,13 @@ func (c *FabricsClient) CreateVLAN(ctx context.Context, input *CreateVLANInput) defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing CreateVLAN request: {{err}}", err) + return nil, errors.Wrap(err, "unable to create VLAN") } var result *FabricVLAN decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding CreateVLAN response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode create VLAN response") } return result, nil @@ -83,10 +91,10 @@ type UpdateVLANInput struct { } func (c *FabricsClient) UpdateVLAN(ctx context.Context, input *UpdateVLANInput) (*FabricVLAN, error) { - path := fmt.Sprintf("/%s/fabrics/default/vlans/%d", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans", strconv.Itoa(input.ID)) reqInputs := client.RequestInput{ Method: http.MethodPut, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -94,13 +102,13 @@ func (c *FabricsClient) UpdateVLAN(ctx 
context.Context, input *UpdateVLANInput) defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing UpdateVLAN request: {{err}}", err) + return nil, errors.Wrap(err, "unable to update VLAN") } var result *FabricVLAN decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding UpdateVLAN response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode update VLAN response") } return result, nil @@ -111,23 +119,23 @@ type GetVLANInput struct { } func (c *FabricsClient) GetVLAN(ctx context.Context, input *GetVLANInput) (*FabricVLAN, error) { - path := fmt.Sprintf("/%s/fabrics/default/vlans/%d", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans", strconv.Itoa(input.ID)) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing GetVLAN request: {{err}}", err) + return nil, errors.Wrap(err, "unable to get VLAN") } var result *FabricVLAN decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding GetVLAN response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode get VLAN response") } return result, nil @@ -138,17 +146,17 @@ type DeleteVLANInput struct { } func (c *FabricsClient) DeleteVLAN(ctx context.Context, input *DeleteVLANInput) error { - path := fmt.Sprintf("/%s/fabrics/default/vlans/%d", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans", strconv.Itoa(input.ID)) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing DeleteVLAN request: {{err}}", err) + return errors.Wrap(err, "unable to delete VLAN") } return nil @@ -159,23 +167,23 @@ type ListFabricsInput struct { } func (c *FabricsClient) List(ctx context.Context, input *ListFabricsInput) ([]*Network, error) { - path := fmt.Sprintf("/%s/fabrics/default/vlans/%d/networks", c.client.AccountName, input.FabricVLANID) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans", strconv.Itoa(input.FabricVLANID), "networks") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListFabrics request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list fabrics") } var result []*Network decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListFabrics response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list fabrics response") } return result, nil @@ -195,10 +203,10 @@ type CreateFabricInput struct { } func (c *FabricsClient) Create(ctx context.Context, input *CreateFabricInput) (*Network, error) { - path := fmt.Sprintf("/%s/fabrics/default/vlans/%d/networks", c.client.AccountName, input.FabricVLANID) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans", 
strconv.Itoa(input.FabricVLANID), "networks") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -206,13 +214,13 @@ func (c *FabricsClient) Create(ctx context.Context, input *CreateFabricInput) (* defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing CreateFabric request: {{err}}", err) + return nil, errors.Wrap(err, "unable to create fabric") } var result *Network decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding CreateFabric response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode create fabric response") } return result, nil @@ -224,23 +232,23 @@ type GetFabricInput struct { } func (c *FabricsClient) Get(ctx context.Context, input *GetFabricInput) (*Network, error) { - path := fmt.Sprintf("/%s/fabrics/default/vlans/%d/networks/%s", c.client.AccountName, input.FabricVLANID, input.NetworkID) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans", strconv.Itoa(input.FabricVLANID), "networks", input.NetworkID) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing GetFabric request: {{err}}", err) + return nil, errors.Wrap(err, "unable to get fabric") } var result *Network decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding GetFabric response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode get fabric response") } return result, nil @@ -252,17 +260,17 @@ type DeleteFabricInput struct { } func (c *FabricsClient) Delete(ctx context.Context, input *DeleteFabricInput) error { - path := fmt.Sprintf("/%s/fabrics/default/vlans/%d/networks/%s", c.client.AccountName, input.FabricVLANID, input.NetworkID) + fullPath := path.Join("/", c.client.AccountName, "fabrics", "default", "vlans", strconv.Itoa(input.FabricVLANID), "networks", input.NetworkID) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing DeleteFabric request: {{err}}", err) + return errors.Wrap(err, "unable to delete fabric") } return nil diff --git a/vendor/github.com/joyent/triton-go/network/firewall.go b/vendor/github.com/joyent/triton-go/network/firewall.go index 60054702a..4a6565ea5 100644 --- a/vendor/github.com/joyent/triton-go/network/firewall.go +++ b/vendor/github.com/joyent/triton-go/network/firewall.go @@ -1,13 +1,22 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. 
+// + package network import ( "context" "encoding/json" - "fmt" "net/http" + "path" + "time" - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/pkg/errors" ) type FirewallClient struct { @@ -35,23 +44,23 @@ type FirewallRule struct { type ListRulesInput struct{} func (c *FirewallClient) ListRules(ctx context.Context, _ *ListRulesInput) ([]*FirewallRule, error) { - path := fmt.Sprintf("/%s/fwrules", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "fwrules") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListRules request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list firewall rules") } var result []*FirewallRule decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListRules response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list firewall rules response") } return result, nil @@ -62,23 +71,23 @@ type GetRuleInput struct { } func (c *FirewallClient) GetRule(ctx context.Context, input *GetRuleInput) (*FirewallRule, error) { - path := fmt.Sprintf("/%s/fwrules/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fwrules", input.ID) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing GetRule request: {{err}}", err) + return nil, errors.Wrap(err, "unable to get firewall rule") } var result *FirewallRule decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding GetRule response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode get firewall rule response") } return result, nil @@ -91,10 +100,10 @@ type CreateRuleInput struct { } func (c *FirewallClient) CreateRule(ctx context.Context, input *CreateRuleInput) (*FirewallRule, error) { - path := fmt.Sprintf("/%s/fwrules", c.client.AccountName) + fullPath := path.Join("/", c.client.AccountName, "fwrules") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -102,13 +111,13 @@ func (c *FirewallClient) CreateRule(ctx context.Context, input *CreateRuleInput) defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing CreateRule request: {{err}}", err) + return nil, errors.Wrap(err, "unable to create firewall rule") } var result *FirewallRule decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding CreateRule response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode create firewall rule response") } return result, nil @@ -122,10 +131,10 @@ type UpdateRuleInput struct { } func (c *FirewallClient) UpdateRule(ctx context.Context, input *UpdateRuleInput) (*FirewallRule, error) { - path := fmt.Sprintf("/%s/fwrules/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fwrules", input.ID) reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } 
respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -133,13 +142,13 @@ func (c *FirewallClient) UpdateRule(ctx context.Context, input *UpdateRuleInput) defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing UpdateRule request: {{err}}", err) + return nil, errors.Wrap(err, "unable to update firewall rule") } var result *FirewallRule decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding UpdateRule response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode update firewall rule response") } return result, nil @@ -150,10 +159,10 @@ type EnableRuleInput struct { } func (c *FirewallClient) EnableRule(ctx context.Context, input *EnableRuleInput) (*FirewallRule, error) { - path := fmt.Sprintf("/%s/fwrules/%s/enable", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fwrules", input.ID, "enable") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -161,13 +170,13 @@ func (c *FirewallClient) EnableRule(ctx context.Context, input *EnableRuleInput) defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing EnableRule request: {{err}}", err) + return nil, errors.Wrap(err, "unable to enable firewall rule") } var result *FirewallRule decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding EnableRule response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode enable firewall rule response") } return result, nil @@ -178,10 +187,10 @@ type DisableRuleInput struct { } func (c *FirewallClient) DisableRule(ctx context.Context, input *DisableRuleInput) (*FirewallRule, error) { - path := fmt.Sprintf("/%s/fwrules/%s/disable", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fwrules", input.ID, "disable") reqInputs := client.RequestInput{ Method: http.MethodPost, - Path: path, + Path: fullPath, Body: input, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) @@ -189,13 +198,13 @@ func (c *FirewallClient) DisableRule(ctx context.Context, input *DisableRuleInpu defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing DisableRule request: {{err}}", err) + return nil, errors.Wrap(err, "unable to disable firewall rule") } var result *FirewallRule decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding DisableRule response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode disable firewall rule response") } return result, nil @@ -206,17 +215,17 @@ type DeleteRuleInput struct { } func (c *FirewallClient) DeleteRule(ctx context.Context, input *DeleteRuleInput) error { - path := fmt.Sprintf("/%s/fwrules/%s", c.client.AccountName, input.ID) + fullPath := path.Join("/", c.client.AccountName, "fwrules", input.ID) reqInputs := client.RequestInput{ Method: http.MethodDelete, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return errwrap.Wrapf("Error executing DeleteRule request: {{err}}", err) + return errors.Wrap(err, "unable to delete firewall rule") } return nil @@ -227,23 +236,72 @@ type ListMachineRulesInput struct { } func (c *FirewallClient) 
ListMachineRules(ctx context.Context, input *ListMachineRulesInput) ([]*FirewallRule, error) { - path := fmt.Sprintf("/%s/machines/%s/firewallrules", c.client.AccountName, input.MachineID) + fullPath := path.Join("/", c.client.AccountName, "machines", input.MachineID, "fwrules") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListMachineRules request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list machine firewall rules") } var result []*FirewallRule decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListRules response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list machine firewall rules response") + } + + return result, nil +} + +type ListRuleMachinesInput struct { + ID string +} + +type Machine struct { + ID string `json:"id"` + Name string `json:"name"` + Type string `json:"type"` + Brand string `json:"brand"` + State string `json:"state"` + Image string `json:"image"` + Memory int `json:"memory"` + Disk int `json:"disk"` + Metadata map[string]string `json:"metadata"` + Tags map[string]interface{} `json:"tags"` + Created time.Time `json:"created"` + Updated time.Time `json:"updated"` + Docker bool `json:"docker"` + IPs []string `json:"ips"` + Networks []string `json:"networks"` + PrimaryIP string `json:"primaryIp"` + FirewallEnabled bool `json:"firewall_enabled"` + ComputeNode string `json:"compute_node"` + Package string `json:"package"` +} + +func (c *FirewallClient) ListRuleMachines(ctx context.Context, input *ListRuleMachinesInput) ([]*Machine, error) { + fullPath := path.Join("/", c.client.AccountName, "fwrules", input.ID, "machines") + reqInputs := client.RequestInput{ + Method: http.MethodGet, + Path: fullPath, + } + respReader, err := c.client.ExecuteRequest(ctx, reqInputs) + if respReader != nil { + defer respReader.Close() + } + if err != nil { + return nil, errors.Wrap(err, "unable to list firewall rule machines") + } + + var result []*Machine + decoder := json.NewDecoder(respReader) + if err = decoder.Decode(&result); err != nil { + return nil, errors.Wrap(err, "unable to decode list firewall rule machines response") } return result, nil diff --git a/vendor/github.com/joyent/triton-go/network/network.go b/vendor/github.com/joyent/triton-go/network/network.go index d853e0402..3ad43fd54 100644 --- a/vendor/github.com/joyent/triton-go/network/network.go +++ b/vendor/github.com/joyent/triton-go/network/network.go @@ -1,13 +1,21 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. 
+// + package network import ( "context" "encoding/json" - "fmt" "net/http" + "path" - "github.com/hashicorp/errwrap" "github.com/joyent/triton-go/client" + "github.com/pkg/errors" ) type Network struct { @@ -28,23 +36,23 @@ type Network struct { type ListInput struct{} func (c *NetworkClient) List(ctx context.Context, _ *ListInput) ([]*Network, error) { - path := fmt.Sprintf("/%s/networks", c.Client.AccountName) + fullPath := path.Join("/", c.Client.AccountName, "networks") reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.Client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing ListNetworks request: {{err}}", err) + return nil, errors.Wrap(err, "unable to list networks") } var result []*Network decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding ListNetworks response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode list networks response") } return result, nil @@ -55,23 +63,23 @@ type GetInput struct { } func (c *NetworkClient) Get(ctx context.Context, input *GetInput) (*Network, error) { - path := fmt.Sprintf("/%s/networks/%s", c.Client.AccountName, input.ID) + fullPath := path.Join("/", c.Client.AccountName, "networks", input.ID) reqInputs := client.RequestInput{ Method: http.MethodGet, - Path: path, + Path: fullPath, } respReader, err := c.Client.ExecuteRequest(ctx, reqInputs) if respReader != nil { defer respReader.Close() } if err != nil { - return nil, errwrap.Wrapf("Error executing GetNetwork request: {{err}}", err) + return nil, errors.Wrap(err, "unable to get network") } var result *Network decoder := json.NewDecoder(respReader) if err = decoder.Decode(&result); err != nil { - return nil, errwrap.Wrapf("Error decoding GetNetwork response: {{err}}", err) + return nil, errors.Wrap(err, "unable to decode get network response") } return result, nil diff --git a/vendor/github.com/joyent/triton-go/triton.go b/vendor/github.com/joyent/triton-go/triton.go index b5bacd255..5d74258d4 100644 --- a/vendor/github.com/joyent/triton-go/triton.go +++ b/vendor/github.com/joyent/triton-go/triton.go @@ -1,3 +1,11 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + package triton import ( @@ -14,5 +22,6 @@ type ClientConfig struct { TritonURL string MantaURL string AccountName string + Username string Signers []authentication.Signer } diff --git a/vendor/github.com/joyent/triton-go/version.go b/vendor/github.com/joyent/triton-go/version.go new file mode 100644 index 000000000..0c1bc8680 --- /dev/null +++ b/vendor/github.com/joyent/triton-go/version.go @@ -0,0 +1,32 @@ +// +// Copyright (c) 2018, Joyent, Inc. All rights reserved. +// +// This Source Code Form is subject to the terms of the Mozilla Public +// License, v. 2.0. If a copy of the MPL was not distributed with this +// file, You can obtain one at http://mozilla.org/MPL/2.0/. +// + +package triton + +import ( + "fmt" + "runtime" +) + +// The main version number of the current released Triton-go SDK. +const Version = "0.9.0" + +// A pre-release marker for the version. If this is "" (empty string) +// then it means that it is a final release. 
Otherwise, this is a pre-release +// such as "dev" (in development), "beta", "rc1", etc. +var Prerelease = "" + +func UserAgent() string { + if Prerelease != "" { + return fmt.Sprintf("triton-go/%s-%s (%s-%s; %s)", Version, Prerelease, runtime.GOARCH, runtime.GOOS, runtime.Version()) + } else { + return fmt.Sprintf("triton-go/%s (%s-%s; %s)", Version, runtime.GOARCH, runtime.GOOS, runtime.Version()) + } +} + +const CloudAPIMajorVersion = "8" diff --git a/vendor/github.com/marstr/guid/LICENSE.txt b/vendor/github.com/marstr/guid/LICENSE.txt new file mode 100644 index 000000000..e18a0841a --- /dev/null +++ b/vendor/github.com/marstr/guid/LICENSE.txt @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2016 Martin Strobel + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/vendor/github.com/marstr/guid/README.md b/vendor/github.com/marstr/guid/README.md new file mode 100644 index 000000000..355fad16d --- /dev/null +++ b/vendor/github.com/marstr/guid/README.md @@ -0,0 +1,27 @@ +[![Build Status](https://travis-ci.org/marstr/guid.svg?branch=master)](https://travis-ci.org/marstr/guid) +[![GoDoc](https://godoc.org/github.com/marstr/guid?status.svg)](https://godoc.org/github.com/marstr/guid) +[![Go Report Card](https://goreportcard.com/badge/github.com/marstr/guid)](https://goreportcard.com/report/github.com/marstr/guid) + +# Guid +Globally unique identifiers offer a quick means of generating non-colliding values across a distributed system. For this implementation, [RFC 4122](http://ietf.org/rfc/rfc4122.txt) governs the desired behavior. + +## What's in a name? +You have likely already noticed that the RFC and some implementations refer to these structures as UUIDs (Universally Unique Identifiers), whereas this project is annotated as GUIDs (Globally Unique Identifiers). The name Guid was selected to make clear this project's ties to the [.NET struct Guid.](https://msdn.microsoft.com/en-us/library/system.guid(v=vs.110).aspx) The most obvious relationship is the desire to have the same format specifiers available in this library's Format and Parse methods as .NET would have in its ToString and Parse methods. + +# Installation
- Ensure you have the [Go Programming Language](https://golang.org/) installed on your system. +- Run the command: `go get -u github.com/marstr/guid` + +# Contribution +Contributions are welcome! Feel free to send Pull Requests. Continuous Integration will ensure that you have conformed to Go conventions. Please remember to add tests for your changes.
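As a quick orientation to the exported API in the `guid.go` file vendored below, a minimal usage sketch (using only `NewGUID`, `String`, `Stringf`, `FormatB`, and `Parse` as defined in that file) might look like this:

```go
package main

import (
	"fmt"

	"github.com/marstr/guid"
)

func main() {
	// NewGUID produces a random (version 4) GUID.
	id := guid.NewGUID()

	fmt.Println(id.String())              // default "D" format: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
	fmt.Println(id.Stringf(guid.FormatB)) // "B" format: the same value wrapped in braces

	// Parse recognizes any of the known formats and round-trips the value.
	parsed, err := guid.Parse(id.Stringf(guid.FormatB))
	if err != nil {
		panic(err)
	}
	fmt.Println(parsed == id) // true
}
```

JSON marshaling (added in v1.1.0, per the release notes below) serializes through the same default "D" format.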
+ +# Versioning +This library will adhere to the +[Semantic Versioning 2.0.0](http://semver.org/spec/v2.0.0.html) specification. It may be worth noting this should allow for tools like [glide](https://glide.readthedocs.io/en/latest/) to pull in this library with ease. + +The Release Notes portion of this file will be updated to reflect the most recent major/minor updates, with the option to tag particular bug-fixes as well. Updates to the Release Notes for patches should be additive, whereas major/minor updates should replace the previous version. If one desires to see the release notes for an older version, check out that version of the code and open this file. + +# Release Notes 1.1.* + +## v1.1.0 +Adding support for JSON marshaling and unmarshaling. diff --git a/vendor/github.com/marstr/guid/guid.go b/vendor/github.com/marstr/guid/guid.go new file mode 100644 index 000000000..51b038b75 --- /dev/null +++ b/vendor/github.com/marstr/guid/guid.go @@ -0,0 +1,301 @@ +package guid + +import ( + "bytes" + "crypto/rand" + "errors" + "fmt" + "net" + "strings" + "sync" + "time" +) + +// GUID is a unique identifier designed to virtually guarantee non-conflict between values generated +// across a distributed system. +type GUID struct { + timeHighAndVersion uint16 + timeMid uint16 + timeLow uint32 + clockSeqHighAndReserved uint8 + clockSeqLow uint8 + node [6]byte +} + +// Format enumerates the values that are supported by Parse and Format +type Format string + +// These constants define the possible string formats available via this implementation of Guid. +const ( + FormatB Format = "B" // {00000000-0000-0000-0000-000000000000} + FormatD Format = "D" // 00000000-0000-0000-0000-000000000000 + FormatN Format = "N" // 00000000000000000000000000000000 + FormatP Format = "P" // (00000000-0000-0000-0000-000000000000) + FormatX Format = "X" // {0x00000000,0x0000,0x0000,{0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00}} + FormatDefault Format = FormatD +) + +// CreationStrategy enumerates the values that are supported for populating the bits of a new Guid. +type CreationStrategy string + +// These constants define the possible creation strategies available via this implementation of Guid. +const ( + CreationStrategyVersion1 CreationStrategy = "version1" + CreationStrategyVersion2 CreationStrategy = "version2" + CreationStrategyVersion3 CreationStrategy = "version3" + CreationStrategyVersion4 CreationStrategy = "version4" + CreationStrategyVersion5 CreationStrategy = "version5" +) + +var emptyGUID GUID + +// NewGUID generates and returns a new globally unique identifier +func NewGUID() GUID { + result, err := version4() + if err != nil { + panic(err) //Version 4 (pseudo-random GUID) doesn't use anything that could fail. + } + return result +} + +var knownStrategies = map[CreationStrategy]func() (GUID, error){ + CreationStrategyVersion1: version1, + CreationStrategyVersion4: version4, +} + +// NewGUIDs generates and returns a new globally unique identifier that conforms to the given strategy. +func NewGUIDs(strategy CreationStrategy) (GUID, error) { + if creator, present := knownStrategies[strategy]; present { + result, err := creator() + return result, err + } + return emptyGUID, errors.New("Unsupported CreationStrategy") +} + +// Empty returns a copy of the default and empty GUID. 
+func Empty() GUID { + return emptyGUID +} + +var knownFormats = map[Format]string{ + FormatN: "%08x%04x%04x%02x%02x%02x%02x%02x%02x%02x%02x", + FormatD: "%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x", + FormatB: "{%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}", + FormatP: "(%08x-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x)", + FormatX: "{0x%08x,0x%04x,0x%04x,{0x%02x,0x%02x,0x%02x,0x%02x,0x%02x,0x%02x,0x%02x,0x%02x}}", +} + +// MarshalJSON writes a GUID as a JSON string. +func (guid GUID) MarshalJSON() (marshaled []byte, err error) { + buf := bytes.Buffer{} + + _, err = buf.WriteRune('"') + buf.WriteString(guid.String()) + buf.WriteRune('"') + + marshaled = buf.Bytes() + return +} + +// Parse instantiates a GUID from a text representation of the same GUID. +// This is the inverse of function family String() +func Parse(value string) (GUID, error) { + var guid GUID + for _, fullFormat := range knownFormats { + parity, err := fmt.Sscanf( + value, + fullFormat, + &guid.timeLow, + &guid.timeMid, + &guid.timeHighAndVersion, + &guid.clockSeqHighAndReserved, + &guid.clockSeqLow, + &guid.node[0], + &guid.node[1], + &guid.node[2], + &guid.node[3], + &guid.node[4], + &guid.node[5]) + if parity == 11 && err == nil { + return guid, err + } + } + return emptyGUID, fmt.Errorf("\"%s\" is not in a recognized format", value) +} + +// String returns a text representation of a GUID in the default format. +func (guid GUID) String() string { + return guid.Stringf(FormatDefault) +} + +// Stringf returns a text representation of a GUID that conforms to the specified format. +// If an unrecognized format is provided, the empty string is returned. +func (guid GUID) Stringf(format Format) string { + if format == "" { + format = FormatDefault + } + fullFormat, present := knownFormats[format] + if !present { + return "" + } + return fmt.Sprintf( + fullFormat, + guid.timeLow, + guid.timeMid, + guid.timeHighAndVersion, + guid.clockSeqHighAndReserved, + guid.clockSeqLow, + guid.node[0], + guid.node[1], + guid.node[2], + guid.node[3], + guid.node[4], + guid.node[5]) +} + +// UnmarshalJSON parses a GUID from a JSON string token. +func (guid *GUID) UnmarshalJSON(marshaled []byte) (err error) { + if len(marshaled) < 2 { + err = errors.New("JSON GUID must be surrounded by quotes") + return + } + stripped := marshaled[1 : len(marshaled)-1] + *guid, err = Parse(string(stripped)) + return +} + +// Version reads a GUID to parse which mechanism of generating GUIDS was employed. +// Values returned here are documented in rfc4122.txt. 
+func (guid GUID) Version() uint { + return uint(guid.timeHighAndVersion >> 12) +} + +var unixToGregorianOffset = time.Date(1970, 01, 01, 0, 0, 00, 0, time.UTC).Sub(time.Date(1582, 10, 15, 0, 0, 0, 0, time.UTC)) + +// getRFC4122Time returns a 60-bit count of 100-nanosecond intervals since 00:00:00.00 October 15th, 1582 +func getRFC4122Time() int64 { + currentTime := time.Now().UTC().Add(unixToGregorianOffset).UnixNano() + currentTime /= 100 + return currentTime & 0x0FFFFFFFFFFFFFFF +} + +var clockSeqVal uint16 +var clockSeqKey sync.Mutex + +func getClockSequence() (uint16, error) { + clockSeqKey.Lock() + defer clockSeqKey.Unlock() + + if 0 == clockSeqVal { + var temp [2]byte + if parity, err := rand.Read(temp[:]); !(2 == parity && nil == err) { + return 0, err + } + clockSeqVal = uint16(temp[0])<<8 | uint16(temp[1]) + } + clockSeqVal++ + return clockSeqVal, nil +} + +func getMACAddress() (mac [6]byte, err error) { + var hostNICs []net.Interface + + hostNICs, err = net.Interfaces() + if err != nil { + return + } + + for _, nic := range hostNICs { + var parity int + + parity, err = fmt.Sscanf( + strings.ToLower(nic.HardwareAddr.String()), + "%02x:%02x:%02x:%02x:%02x:%02x", + &mac[0], + &mac[1], + &mac[2], + &mac[3], + &mac[4], + &mac[5]) + + if parity == len(mac) { + return + } + } + + err = fmt.Errorf("No suitable address found") + + return +} + +func version1() (result GUID, err error) { + var localMAC [6]byte + var clockSeq uint16 + + currentTime := getRFC4122Time() + + result.timeLow = uint32(currentTime) + result.timeMid = uint16(currentTime >> 32) + result.timeHighAndVersion = uint16(currentTime >> 48) + if err = result.setVersion(1); err != nil { + return emptyGUID, err + } + + if localMAC, err = getMACAddress(); nil != err { + if parity, err := rand.Read(localMAC[:]); !(len(localMAC) == parity && err == nil) { + return emptyGUID, err + } + localMAC[0] |= 0x1 + } + copy(result.node[:], localMAC[:]) + + if clockSeq, err = getClockSequence(); nil != err { + return emptyGUID, err + } + + result.clockSeqLow = uint8(clockSeq) + result.clockSeqHighAndReserved = uint8(clockSeq >> 8) + + result.setReservedBits() + + return +} + +func version4() (GUID, error) { + var retval GUID + var bits [10]byte + + if parity, err := rand.Read(bits[:]); !(len(bits) == parity && err == nil) { + return emptyGUID, err + } + retval.timeHighAndVersion |= uint16(bits[0]) | uint16(bits[1])<<8 + retval.timeMid |= uint16(bits[2]) | uint16(bits[3])<<8 + retval.timeLow |= uint32(bits[4]) | uint32(bits[5])<<8 | uint32(bits[6])<<16 | uint32(bits[7])<<24 + retval.clockSeqHighAndReserved = uint8(bits[8]) + retval.clockSeqLow = uint8(bits[9]) + + //Randomly set clock-sequence, reserved, and node + if written, err := rand.Read(retval.node[:]); !(nil == err && written == len(retval.node)) { + retval = emptyGUID + return retval, err + } + + if err := retval.setVersion(4); nil != err { + return emptyGUID, err + } + retval.setReservedBits() + + return retval, nil +} + +func (guid *GUID) setVersion(version uint16) error { + if version > 5 || version == 0 { + return fmt.Errorf("While setting GUID version, unsupported version: %d", version) + } + guid.timeHighAndVersion = (guid.timeHighAndVersion & 0x0fff) | version<<12 + return nil +} + +func (guid *GUID) setReservedBits() { + guid.clockSeqHighAndReserved = (guid.clockSeqHighAndReserved & 0x3f) | 0x80 +} diff --git a/vendor/github.com/masterzen/winrm/client.go b/vendor/github.com/masterzen/winrm/client.go index 732dd61cb..c19515194 100644 --- 
a/vendor/github.com/masterzen/winrm/client.go +++ b/vendor/github.com/masterzen/winrm/client.go @@ -152,10 +152,20 @@ func (c *Client) RunWithString(command string, stdin string) (string, string, in } var outWriter, errWriter bytes.Buffer - go io.Copy(&outWriter, cmd.Stdout) - go io.Copy(&errWriter, cmd.Stderr) + var wg sync.WaitGroup + wg.Add(2) + go func() { + defer wg.Done() + io.Copy(&outWriter, cmd.Stdout) + }() + + go func() { + defer wg.Done() + io.Copy(&errWriter, cmd.Stderr) + }() cmd.Wait() + wg.Wait() return outWriter.String(), errWriter.String(), cmd.ExitCode(), cmd.err } @@ -176,11 +186,24 @@ func (c Client) RunWithInput(command string, stdout, stderr io.Writer, stdin io. return 1, err } - go io.Copy(cmd.Stdin, stdin) - go io.Copy(stdout, cmd.Stdout) - go io.Copy(stderr, cmd.Stderr) + var wg sync.WaitGroup + wg.Add(3) + + go func() { + defer wg.Done() + io.Copy(cmd.Stdin, stdin) + }() + go func() { + defer wg.Done() + io.Copy(stdout, cmd.Stdout) + }() + go func() { + defer wg.Done() + io.Copy(stderr, cmd.Stderr) + }() cmd.Wait() + wg.Wait() return cmd.ExitCode(), cmd.err diff --git a/vendor/github.com/mattn/go-runewidth/LICENSE b/vendor/github.com/mattn/go-runewidth/LICENSE new file mode 100644 index 000000000..91b5cef30 --- /dev/null +++ b/vendor/github.com/mattn/go-runewidth/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2016 Yasuhiro Matsumoto + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/mattn/go-runewidth/README.mkd b/vendor/github.com/mattn/go-runewidth/README.mkd new file mode 100644 index 000000000..66663a94b --- /dev/null +++ b/vendor/github.com/mattn/go-runewidth/README.mkd @@ -0,0 +1,27 @@ +go-runewidth +============ + +[![Build Status](https://travis-ci.org/mattn/go-runewidth.png?branch=master)](https://travis-ci.org/mattn/go-runewidth) +[![Coverage Status](https://coveralls.io/repos/mattn/go-runewidth/badge.png?branch=HEAD)](https://coveralls.io/r/mattn/go-runewidth?branch=HEAD) +[![GoDoc](https://godoc.org/github.com/mattn/go-runewidth?status.svg)](http://godoc.org/github.com/mattn/go-runewidth) +[![Go Report Card](https://goreportcard.com/badge/github.com/mattn/go-runewidth)](https://goreportcard.com/report/github.com/mattn/go-runewidth) + +Provides functions to get fixed width of the character or string. 
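
In practice this means that plain ASCII runes count as one terminal column while CJK runes count as two; a minimal sketch using only `StringWidth` (the same function shown in the Usage section below) might look like:

```go
package main

import (
	"fmt"

	"github.com/mattn/go-runewidth"
)

func main() {
	fmt.Println(runewidth.StringWidth("abc")) // 3: ASCII runes are single width
	fmt.Println(runewidth.StringWidth("世界")) // 4: CJK ideographs are double width
}
```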
+ +Usage +----- + +```go +runewidth.StringWidth("つのだ☆HIRO") == 12 +``` + + +Author +------ + +Yasuhiro Matsumoto + +License +------- + +under the MIT License: http://mattn.mit-license.org/2013 diff --git a/vendor/github.com/mattn/go-runewidth/runewidth.go b/vendor/github.com/mattn/go-runewidth/runewidth.go new file mode 100644 index 000000000..a16f1af0e --- /dev/null +++ b/vendor/github.com/mattn/go-runewidth/runewidth.go @@ -0,0 +1,1224 @@ +package runewidth + +var ( + // EastAsianWidth will be set true if the current locale is CJK + EastAsianWidth = IsEastAsian() + + // DefaultCondition is a condition in current locale + DefaultCondition = &Condition{EastAsianWidth} +) + +type interval struct { + first rune + last rune +} + +type table []interval + +func inTables(r rune, ts ...table) bool { + for _, t := range ts { + if inTable(r, t) { + return true + } + } + return false +} + +func inTable(r rune, t table) bool { + // func (t table) IncludesRune(r rune) bool { + if r < t[0].first { + return false + } + + bot := 0 + top := len(t) - 1 + for top >= bot { + mid := (bot + top) / 2 + + switch { + case t[mid].last < r: + bot = mid + 1 + case t[mid].first > r: + top = mid - 1 + default: + return true + } + } + + return false +} + +var private = table{ + {0x00E000, 0x00F8FF}, {0x0F0000, 0x0FFFFD}, {0x100000, 0x10FFFD}, +} + +var nonprint = table{ + {0x0000, 0x001F}, {0x007F, 0x009F}, {0x00AD, 0x00AD}, + {0x070F, 0x070F}, {0x180B, 0x180E}, {0x200B, 0x200F}, + {0x2028, 0x2029}, + {0x202A, 0x202E}, {0x206A, 0x206F}, {0xD800, 0xDFFF}, + {0xFEFF, 0xFEFF}, {0xFFF9, 0xFFFB}, {0xFFFE, 0xFFFF}, +} + +var combining = table{ + {0x0300, 0x036F}, {0x0483, 0x0489}, {0x0591, 0x05BD}, + {0x05BF, 0x05BF}, {0x05C1, 0x05C2}, {0x05C4, 0x05C5}, + {0x05C7, 0x05C7}, {0x0610, 0x061A}, {0x064B, 0x065F}, + {0x0670, 0x0670}, {0x06D6, 0x06DC}, {0x06DF, 0x06E4}, + {0x06E7, 0x06E8}, {0x06EA, 0x06ED}, {0x0711, 0x0711}, + {0x0730, 0x074A}, {0x07A6, 0x07B0}, {0x07EB, 0x07F3}, + {0x0816, 0x0819}, {0x081B, 0x0823}, {0x0825, 0x0827}, + {0x0829, 0x082D}, {0x0859, 0x085B}, {0x08D4, 0x08E1}, + {0x08E3, 0x0903}, {0x093A, 0x093C}, {0x093E, 0x094F}, + {0x0951, 0x0957}, {0x0962, 0x0963}, {0x0981, 0x0983}, + {0x09BC, 0x09BC}, {0x09BE, 0x09C4}, {0x09C7, 0x09C8}, + {0x09CB, 0x09CD}, {0x09D7, 0x09D7}, {0x09E2, 0x09E3}, + {0x0A01, 0x0A03}, {0x0A3C, 0x0A3C}, {0x0A3E, 0x0A42}, + {0x0A47, 0x0A48}, {0x0A4B, 0x0A4D}, {0x0A51, 0x0A51}, + {0x0A70, 0x0A71}, {0x0A75, 0x0A75}, {0x0A81, 0x0A83}, + {0x0ABC, 0x0ABC}, {0x0ABE, 0x0AC5}, {0x0AC7, 0x0AC9}, + {0x0ACB, 0x0ACD}, {0x0AE2, 0x0AE3}, {0x0B01, 0x0B03}, + {0x0B3C, 0x0B3C}, {0x0B3E, 0x0B44}, {0x0B47, 0x0B48}, + {0x0B4B, 0x0B4D}, {0x0B56, 0x0B57}, {0x0B62, 0x0B63}, + {0x0B82, 0x0B82}, {0x0BBE, 0x0BC2}, {0x0BC6, 0x0BC8}, + {0x0BCA, 0x0BCD}, {0x0BD7, 0x0BD7}, {0x0C00, 0x0C03}, + {0x0C3E, 0x0C44}, {0x0C46, 0x0C48}, {0x0C4A, 0x0C4D}, + {0x0C55, 0x0C56}, {0x0C62, 0x0C63}, {0x0C81, 0x0C83}, + {0x0CBC, 0x0CBC}, {0x0CBE, 0x0CC4}, {0x0CC6, 0x0CC8}, + {0x0CCA, 0x0CCD}, {0x0CD5, 0x0CD6}, {0x0CE2, 0x0CE3}, + {0x0D01, 0x0D03}, {0x0D3E, 0x0D44}, {0x0D46, 0x0D48}, + {0x0D4A, 0x0D4D}, {0x0D57, 0x0D57}, {0x0D62, 0x0D63}, + {0x0D82, 0x0D83}, {0x0DCA, 0x0DCA}, {0x0DCF, 0x0DD4}, + {0x0DD6, 0x0DD6}, {0x0DD8, 0x0DDF}, {0x0DF2, 0x0DF3}, + {0x0E31, 0x0E31}, {0x0E34, 0x0E3A}, {0x0E47, 0x0E4E}, + {0x0EB1, 0x0EB1}, {0x0EB4, 0x0EB9}, {0x0EBB, 0x0EBC}, + {0x0EC8, 0x0ECD}, {0x0F18, 0x0F19}, {0x0F35, 0x0F35}, + {0x0F37, 0x0F37}, {0x0F39, 0x0F39}, {0x0F3E, 0x0F3F}, + {0x0F71, 0x0F84}, {0x0F86, 0x0F87}, {0x0F8D, 0x0F97}, + {0x0F99, 
0x0FBC}, {0x0FC6, 0x0FC6}, {0x102B, 0x103E}, + {0x1056, 0x1059}, {0x105E, 0x1060}, {0x1062, 0x1064}, + {0x1067, 0x106D}, {0x1071, 0x1074}, {0x1082, 0x108D}, + {0x108F, 0x108F}, {0x109A, 0x109D}, {0x135D, 0x135F}, + {0x1712, 0x1714}, {0x1732, 0x1734}, {0x1752, 0x1753}, + {0x1772, 0x1773}, {0x17B4, 0x17D3}, {0x17DD, 0x17DD}, + {0x180B, 0x180D}, {0x1885, 0x1886}, {0x18A9, 0x18A9}, + {0x1920, 0x192B}, {0x1930, 0x193B}, {0x1A17, 0x1A1B}, + {0x1A55, 0x1A5E}, {0x1A60, 0x1A7C}, {0x1A7F, 0x1A7F}, + {0x1AB0, 0x1ABE}, {0x1B00, 0x1B04}, {0x1B34, 0x1B44}, + {0x1B6B, 0x1B73}, {0x1B80, 0x1B82}, {0x1BA1, 0x1BAD}, + {0x1BE6, 0x1BF3}, {0x1C24, 0x1C37}, {0x1CD0, 0x1CD2}, + {0x1CD4, 0x1CE8}, {0x1CED, 0x1CED}, {0x1CF2, 0x1CF4}, + {0x1CF8, 0x1CF9}, {0x1DC0, 0x1DF5}, {0x1DFB, 0x1DFF}, + {0x20D0, 0x20F0}, {0x2CEF, 0x2CF1}, {0x2D7F, 0x2D7F}, + {0x2DE0, 0x2DFF}, {0x302A, 0x302F}, {0x3099, 0x309A}, + {0xA66F, 0xA672}, {0xA674, 0xA67D}, {0xA69E, 0xA69F}, + {0xA6F0, 0xA6F1}, {0xA802, 0xA802}, {0xA806, 0xA806}, + {0xA80B, 0xA80B}, {0xA823, 0xA827}, {0xA880, 0xA881}, + {0xA8B4, 0xA8C5}, {0xA8E0, 0xA8F1}, {0xA926, 0xA92D}, + {0xA947, 0xA953}, {0xA980, 0xA983}, {0xA9B3, 0xA9C0}, + {0xA9E5, 0xA9E5}, {0xAA29, 0xAA36}, {0xAA43, 0xAA43}, + {0xAA4C, 0xAA4D}, {0xAA7B, 0xAA7D}, {0xAAB0, 0xAAB0}, + {0xAAB2, 0xAAB4}, {0xAAB7, 0xAAB8}, {0xAABE, 0xAABF}, + {0xAAC1, 0xAAC1}, {0xAAEB, 0xAAEF}, {0xAAF5, 0xAAF6}, + {0xABE3, 0xABEA}, {0xABEC, 0xABED}, {0xFB1E, 0xFB1E}, + {0xFE00, 0xFE0F}, {0xFE20, 0xFE2F}, {0x101FD, 0x101FD}, + {0x102E0, 0x102E0}, {0x10376, 0x1037A}, {0x10A01, 0x10A03}, + {0x10A05, 0x10A06}, {0x10A0C, 0x10A0F}, {0x10A38, 0x10A3A}, + {0x10A3F, 0x10A3F}, {0x10AE5, 0x10AE6}, {0x11000, 0x11002}, + {0x11038, 0x11046}, {0x1107F, 0x11082}, {0x110B0, 0x110BA}, + {0x11100, 0x11102}, {0x11127, 0x11134}, {0x11173, 0x11173}, + {0x11180, 0x11182}, {0x111B3, 0x111C0}, {0x111CA, 0x111CC}, + {0x1122C, 0x11237}, {0x1123E, 0x1123E}, {0x112DF, 0x112EA}, + {0x11300, 0x11303}, {0x1133C, 0x1133C}, {0x1133E, 0x11344}, + {0x11347, 0x11348}, {0x1134B, 0x1134D}, {0x11357, 0x11357}, + {0x11362, 0x11363}, {0x11366, 0x1136C}, {0x11370, 0x11374}, + {0x11435, 0x11446}, {0x114B0, 0x114C3}, {0x115AF, 0x115B5}, + {0x115B8, 0x115C0}, {0x115DC, 0x115DD}, {0x11630, 0x11640}, + {0x116AB, 0x116B7}, {0x1171D, 0x1172B}, {0x11C2F, 0x11C36}, + {0x11C38, 0x11C3F}, {0x11C92, 0x11CA7}, {0x11CA9, 0x11CB6}, + {0x16AF0, 0x16AF4}, {0x16B30, 0x16B36}, {0x16F51, 0x16F7E}, + {0x16F8F, 0x16F92}, {0x1BC9D, 0x1BC9E}, {0x1D165, 0x1D169}, + {0x1D16D, 0x1D172}, {0x1D17B, 0x1D182}, {0x1D185, 0x1D18B}, + {0x1D1AA, 0x1D1AD}, {0x1D242, 0x1D244}, {0x1DA00, 0x1DA36}, + {0x1DA3B, 0x1DA6C}, {0x1DA75, 0x1DA75}, {0x1DA84, 0x1DA84}, + {0x1DA9B, 0x1DA9F}, {0x1DAA1, 0x1DAAF}, {0x1E000, 0x1E006}, + {0x1E008, 0x1E018}, {0x1E01B, 0x1E021}, {0x1E023, 0x1E024}, + {0x1E026, 0x1E02A}, {0x1E8D0, 0x1E8D6}, {0x1E944, 0x1E94A}, + {0xE0100, 0xE01EF}, +} + +var doublewidth = table{ + {0x1100, 0x115F}, {0x231A, 0x231B}, {0x2329, 0x232A}, + {0x23E9, 0x23EC}, {0x23F0, 0x23F0}, {0x23F3, 0x23F3}, + {0x25FD, 0x25FE}, {0x2614, 0x2615}, {0x2648, 0x2653}, + {0x267F, 0x267F}, {0x2693, 0x2693}, {0x26A1, 0x26A1}, + {0x26AA, 0x26AB}, {0x26BD, 0x26BE}, {0x26C4, 0x26C5}, + {0x26CE, 0x26CE}, {0x26D4, 0x26D4}, {0x26EA, 0x26EA}, + {0x26F2, 0x26F3}, {0x26F5, 0x26F5}, {0x26FA, 0x26FA}, + {0x26FD, 0x26FD}, {0x2705, 0x2705}, {0x270A, 0x270B}, + {0x2728, 0x2728}, {0x274C, 0x274C}, {0x274E, 0x274E}, + {0x2753, 0x2755}, {0x2757, 0x2757}, {0x2795, 0x2797}, + {0x27B0, 0x27B0}, {0x27BF, 0x27BF}, {0x2B1B, 0x2B1C}, + {0x2B50, 
0x2B50}, {0x2B55, 0x2B55}, {0x2E80, 0x2E99}, + {0x2E9B, 0x2EF3}, {0x2F00, 0x2FD5}, {0x2FF0, 0x2FFB}, + {0x3000, 0x303E}, {0x3041, 0x3096}, {0x3099, 0x30FF}, + {0x3105, 0x312D}, {0x3131, 0x318E}, {0x3190, 0x31BA}, + {0x31C0, 0x31E3}, {0x31F0, 0x321E}, {0x3220, 0x3247}, + {0x3250, 0x32FE}, {0x3300, 0x4DBF}, {0x4E00, 0xA48C}, + {0xA490, 0xA4C6}, {0xA960, 0xA97C}, {0xAC00, 0xD7A3}, + {0xF900, 0xFAFF}, {0xFE10, 0xFE19}, {0xFE30, 0xFE52}, + {0xFE54, 0xFE66}, {0xFE68, 0xFE6B}, {0xFF01, 0xFF60}, + {0xFFE0, 0xFFE6}, {0x16FE0, 0x16FE0}, {0x17000, 0x187EC}, + {0x18800, 0x18AF2}, {0x1B000, 0x1B001}, {0x1F004, 0x1F004}, + {0x1F0CF, 0x1F0CF}, {0x1F18E, 0x1F18E}, {0x1F191, 0x1F19A}, + {0x1F200, 0x1F202}, {0x1F210, 0x1F23B}, {0x1F240, 0x1F248}, + {0x1F250, 0x1F251}, {0x1F300, 0x1F320}, {0x1F32D, 0x1F335}, + {0x1F337, 0x1F37C}, {0x1F37E, 0x1F393}, {0x1F3A0, 0x1F3CA}, + {0x1F3CF, 0x1F3D3}, {0x1F3E0, 0x1F3F0}, {0x1F3F4, 0x1F3F4}, + {0x1F3F8, 0x1F43E}, {0x1F440, 0x1F440}, {0x1F442, 0x1F4FC}, + {0x1F4FF, 0x1F53D}, {0x1F54B, 0x1F54E}, {0x1F550, 0x1F567}, + {0x1F57A, 0x1F57A}, {0x1F595, 0x1F596}, {0x1F5A4, 0x1F5A4}, + {0x1F5FB, 0x1F64F}, {0x1F680, 0x1F6C5}, {0x1F6CC, 0x1F6CC}, + {0x1F6D0, 0x1F6D2}, {0x1F6EB, 0x1F6EC}, {0x1F6F4, 0x1F6F6}, + {0x1F910, 0x1F91E}, {0x1F920, 0x1F927}, {0x1F930, 0x1F930}, + {0x1F933, 0x1F93E}, {0x1F940, 0x1F94B}, {0x1F950, 0x1F95E}, + {0x1F980, 0x1F991}, {0x1F9C0, 0x1F9C0}, {0x20000, 0x2FFFD}, + {0x30000, 0x3FFFD}, +} + +var ambiguous = table{ + {0x00A1, 0x00A1}, {0x00A4, 0x00A4}, {0x00A7, 0x00A8}, + {0x00AA, 0x00AA}, {0x00AD, 0x00AE}, {0x00B0, 0x00B4}, + {0x00B6, 0x00BA}, {0x00BC, 0x00BF}, {0x00C6, 0x00C6}, + {0x00D0, 0x00D0}, {0x00D7, 0x00D8}, {0x00DE, 0x00E1}, + {0x00E6, 0x00E6}, {0x00E8, 0x00EA}, {0x00EC, 0x00ED}, + {0x00F0, 0x00F0}, {0x00F2, 0x00F3}, {0x00F7, 0x00FA}, + {0x00FC, 0x00FC}, {0x00FE, 0x00FE}, {0x0101, 0x0101}, + {0x0111, 0x0111}, {0x0113, 0x0113}, {0x011B, 0x011B}, + {0x0126, 0x0127}, {0x012B, 0x012B}, {0x0131, 0x0133}, + {0x0138, 0x0138}, {0x013F, 0x0142}, {0x0144, 0x0144}, + {0x0148, 0x014B}, {0x014D, 0x014D}, {0x0152, 0x0153}, + {0x0166, 0x0167}, {0x016B, 0x016B}, {0x01CE, 0x01CE}, + {0x01D0, 0x01D0}, {0x01D2, 0x01D2}, {0x01D4, 0x01D4}, + {0x01D6, 0x01D6}, {0x01D8, 0x01D8}, {0x01DA, 0x01DA}, + {0x01DC, 0x01DC}, {0x0251, 0x0251}, {0x0261, 0x0261}, + {0x02C4, 0x02C4}, {0x02C7, 0x02C7}, {0x02C9, 0x02CB}, + {0x02CD, 0x02CD}, {0x02D0, 0x02D0}, {0x02D8, 0x02DB}, + {0x02DD, 0x02DD}, {0x02DF, 0x02DF}, {0x0300, 0x036F}, + {0x0391, 0x03A1}, {0x03A3, 0x03A9}, {0x03B1, 0x03C1}, + {0x03C3, 0x03C9}, {0x0401, 0x0401}, {0x0410, 0x044F}, + {0x0451, 0x0451}, {0x2010, 0x2010}, {0x2013, 0x2016}, + {0x2018, 0x2019}, {0x201C, 0x201D}, {0x2020, 0x2022}, + {0x2024, 0x2027}, {0x2030, 0x2030}, {0x2032, 0x2033}, + {0x2035, 0x2035}, {0x203B, 0x203B}, {0x203E, 0x203E}, + {0x2074, 0x2074}, {0x207F, 0x207F}, {0x2081, 0x2084}, + {0x20AC, 0x20AC}, {0x2103, 0x2103}, {0x2105, 0x2105}, + {0x2109, 0x2109}, {0x2113, 0x2113}, {0x2116, 0x2116}, + {0x2121, 0x2122}, {0x2126, 0x2126}, {0x212B, 0x212B}, + {0x2153, 0x2154}, {0x215B, 0x215E}, {0x2160, 0x216B}, + {0x2170, 0x2179}, {0x2189, 0x2189}, {0x2190, 0x2199}, + {0x21B8, 0x21B9}, {0x21D2, 0x21D2}, {0x21D4, 0x21D4}, + {0x21E7, 0x21E7}, {0x2200, 0x2200}, {0x2202, 0x2203}, + {0x2207, 0x2208}, {0x220B, 0x220B}, {0x220F, 0x220F}, + {0x2211, 0x2211}, {0x2215, 0x2215}, {0x221A, 0x221A}, + {0x221D, 0x2220}, {0x2223, 0x2223}, {0x2225, 0x2225}, + {0x2227, 0x222C}, {0x222E, 0x222E}, {0x2234, 0x2237}, + {0x223C, 0x223D}, {0x2248, 0x2248}, {0x224C, 0x224C}, + {0x2252, 
0x2252}, {0x2260, 0x2261}, {0x2264, 0x2267}, + {0x226A, 0x226B}, {0x226E, 0x226F}, {0x2282, 0x2283}, + {0x2286, 0x2287}, {0x2295, 0x2295}, {0x2299, 0x2299}, + {0x22A5, 0x22A5}, {0x22BF, 0x22BF}, {0x2312, 0x2312}, + {0x2460, 0x24E9}, {0x24EB, 0x254B}, {0x2550, 0x2573}, + {0x2580, 0x258F}, {0x2592, 0x2595}, {0x25A0, 0x25A1}, + {0x25A3, 0x25A9}, {0x25B2, 0x25B3}, {0x25B6, 0x25B7}, + {0x25BC, 0x25BD}, {0x25C0, 0x25C1}, {0x25C6, 0x25C8}, + {0x25CB, 0x25CB}, {0x25CE, 0x25D1}, {0x25E2, 0x25E5}, + {0x25EF, 0x25EF}, {0x2605, 0x2606}, {0x2609, 0x2609}, + {0x260E, 0x260F}, {0x261C, 0x261C}, {0x261E, 0x261E}, + {0x2640, 0x2640}, {0x2642, 0x2642}, {0x2660, 0x2661}, + {0x2663, 0x2665}, {0x2667, 0x266A}, {0x266C, 0x266D}, + {0x266F, 0x266F}, {0x269E, 0x269F}, {0x26BF, 0x26BF}, + {0x26C6, 0x26CD}, {0x26CF, 0x26D3}, {0x26D5, 0x26E1}, + {0x26E3, 0x26E3}, {0x26E8, 0x26E9}, {0x26EB, 0x26F1}, + {0x26F4, 0x26F4}, {0x26F6, 0x26F9}, {0x26FB, 0x26FC}, + {0x26FE, 0x26FF}, {0x273D, 0x273D}, {0x2776, 0x277F}, + {0x2B56, 0x2B59}, {0x3248, 0x324F}, {0xE000, 0xF8FF}, + {0xFE00, 0xFE0F}, {0xFFFD, 0xFFFD}, {0x1F100, 0x1F10A}, + {0x1F110, 0x1F12D}, {0x1F130, 0x1F169}, {0x1F170, 0x1F18D}, + {0x1F18F, 0x1F190}, {0x1F19B, 0x1F1AC}, {0xE0100, 0xE01EF}, + {0xF0000, 0xFFFFD}, {0x100000, 0x10FFFD}, +} + +var emoji = table{ + {0x1F1E6, 0x1F1FF}, {0x1F321, 0x1F321}, {0x1F324, 0x1F32C}, + {0x1F336, 0x1F336}, {0x1F37D, 0x1F37D}, {0x1F396, 0x1F397}, + {0x1F399, 0x1F39B}, {0x1F39E, 0x1F39F}, {0x1F3CB, 0x1F3CE}, + {0x1F3D4, 0x1F3DF}, {0x1F3F3, 0x1F3F5}, {0x1F3F7, 0x1F3F7}, + {0x1F43F, 0x1F43F}, {0x1F441, 0x1F441}, {0x1F4FD, 0x1F4FD}, + {0x1F549, 0x1F54A}, {0x1F56F, 0x1F570}, {0x1F573, 0x1F579}, + {0x1F587, 0x1F587}, {0x1F58A, 0x1F58D}, {0x1F590, 0x1F590}, + {0x1F5A5, 0x1F5A5}, {0x1F5A8, 0x1F5A8}, {0x1F5B1, 0x1F5B2}, + {0x1F5BC, 0x1F5BC}, {0x1F5C2, 0x1F5C4}, {0x1F5D1, 0x1F5D3}, + {0x1F5DC, 0x1F5DE}, {0x1F5E1, 0x1F5E1}, {0x1F5E3, 0x1F5E3}, + {0x1F5E8, 0x1F5E8}, {0x1F5EF, 0x1F5EF}, {0x1F5F3, 0x1F5F3}, + {0x1F5FA, 0x1F5FA}, {0x1F6CB, 0x1F6CF}, {0x1F6E0, 0x1F6E5}, + {0x1F6E9, 0x1F6E9}, {0x1F6F0, 0x1F6F0}, {0x1F6F3, 0x1F6F3}, +} + +var notassigned = table{ + {0x0378, 0x0379}, {0x0380, 0x0383}, {0x038B, 0x038B}, + {0x038D, 0x038D}, {0x03A2, 0x03A2}, {0x0530, 0x0530}, + {0x0557, 0x0558}, {0x0560, 0x0560}, {0x0588, 0x0588}, + {0x058B, 0x058C}, {0x0590, 0x0590}, {0x05C8, 0x05CF}, + {0x05EB, 0x05EF}, {0x05F5, 0x05FF}, {0x061D, 0x061D}, + {0x070E, 0x070E}, {0x074B, 0x074C}, {0x07B2, 0x07BF}, + {0x07FB, 0x07FF}, {0x082E, 0x082F}, {0x083F, 0x083F}, + {0x085C, 0x085D}, {0x085F, 0x089F}, {0x08B5, 0x08B5}, + {0x08BE, 0x08D3}, {0x0984, 0x0984}, {0x098D, 0x098E}, + {0x0991, 0x0992}, {0x09A9, 0x09A9}, {0x09B1, 0x09B1}, + {0x09B3, 0x09B5}, {0x09BA, 0x09BB}, {0x09C5, 0x09C6}, + {0x09C9, 0x09CA}, {0x09CF, 0x09D6}, {0x09D8, 0x09DB}, + {0x09DE, 0x09DE}, {0x09E4, 0x09E5}, {0x09FC, 0x0A00}, + {0x0A04, 0x0A04}, {0x0A0B, 0x0A0E}, {0x0A11, 0x0A12}, + {0x0A29, 0x0A29}, {0x0A31, 0x0A31}, {0x0A34, 0x0A34}, + {0x0A37, 0x0A37}, {0x0A3A, 0x0A3B}, {0x0A3D, 0x0A3D}, + {0x0A43, 0x0A46}, {0x0A49, 0x0A4A}, {0x0A4E, 0x0A50}, + {0x0A52, 0x0A58}, {0x0A5D, 0x0A5D}, {0x0A5F, 0x0A65}, + {0x0A76, 0x0A80}, {0x0A84, 0x0A84}, {0x0A8E, 0x0A8E}, + {0x0A92, 0x0A92}, {0x0AA9, 0x0AA9}, {0x0AB1, 0x0AB1}, + {0x0AB4, 0x0AB4}, {0x0ABA, 0x0ABB}, {0x0AC6, 0x0AC6}, + {0x0ACA, 0x0ACA}, {0x0ACE, 0x0ACF}, {0x0AD1, 0x0ADF}, + {0x0AE4, 0x0AE5}, {0x0AF2, 0x0AF8}, {0x0AFA, 0x0B00}, + {0x0B04, 0x0B04}, {0x0B0D, 0x0B0E}, {0x0B11, 0x0B12}, + {0x0B29, 0x0B29}, {0x0B31, 0x0B31}, {0x0B34, 0x0B34}, + {0x0B3A, 
0x0B3B}, {0x0B45, 0x0B46}, {0x0B49, 0x0B4A}, + {0x0B4E, 0x0B55}, {0x0B58, 0x0B5B}, {0x0B5E, 0x0B5E}, + {0x0B64, 0x0B65}, {0x0B78, 0x0B81}, {0x0B84, 0x0B84}, + {0x0B8B, 0x0B8D}, {0x0B91, 0x0B91}, {0x0B96, 0x0B98}, + {0x0B9B, 0x0B9B}, {0x0B9D, 0x0B9D}, {0x0BA0, 0x0BA2}, + {0x0BA5, 0x0BA7}, {0x0BAB, 0x0BAD}, {0x0BBA, 0x0BBD}, + {0x0BC3, 0x0BC5}, {0x0BC9, 0x0BC9}, {0x0BCE, 0x0BCF}, + {0x0BD1, 0x0BD6}, {0x0BD8, 0x0BE5}, {0x0BFB, 0x0BFF}, + {0x0C04, 0x0C04}, {0x0C0D, 0x0C0D}, {0x0C11, 0x0C11}, + {0x0C29, 0x0C29}, {0x0C3A, 0x0C3C}, {0x0C45, 0x0C45}, + {0x0C49, 0x0C49}, {0x0C4E, 0x0C54}, {0x0C57, 0x0C57}, + {0x0C5B, 0x0C5F}, {0x0C64, 0x0C65}, {0x0C70, 0x0C77}, + {0x0C84, 0x0C84}, {0x0C8D, 0x0C8D}, {0x0C91, 0x0C91}, + {0x0CA9, 0x0CA9}, {0x0CB4, 0x0CB4}, {0x0CBA, 0x0CBB}, + {0x0CC5, 0x0CC5}, {0x0CC9, 0x0CC9}, {0x0CCE, 0x0CD4}, + {0x0CD7, 0x0CDD}, {0x0CDF, 0x0CDF}, {0x0CE4, 0x0CE5}, + {0x0CF0, 0x0CF0}, {0x0CF3, 0x0D00}, {0x0D04, 0x0D04}, + {0x0D0D, 0x0D0D}, {0x0D11, 0x0D11}, {0x0D3B, 0x0D3C}, + {0x0D45, 0x0D45}, {0x0D49, 0x0D49}, {0x0D50, 0x0D53}, + {0x0D64, 0x0D65}, {0x0D80, 0x0D81}, {0x0D84, 0x0D84}, + {0x0D97, 0x0D99}, {0x0DB2, 0x0DB2}, {0x0DBC, 0x0DBC}, + {0x0DBE, 0x0DBF}, {0x0DC7, 0x0DC9}, {0x0DCB, 0x0DCE}, + {0x0DD5, 0x0DD5}, {0x0DD7, 0x0DD7}, {0x0DE0, 0x0DE5}, + {0x0DF0, 0x0DF1}, {0x0DF5, 0x0E00}, {0x0E3B, 0x0E3E}, + {0x0E5C, 0x0E80}, {0x0E83, 0x0E83}, {0x0E85, 0x0E86}, + {0x0E89, 0x0E89}, {0x0E8B, 0x0E8C}, {0x0E8E, 0x0E93}, + {0x0E98, 0x0E98}, {0x0EA0, 0x0EA0}, {0x0EA4, 0x0EA4}, + {0x0EA6, 0x0EA6}, {0x0EA8, 0x0EA9}, {0x0EAC, 0x0EAC}, + {0x0EBA, 0x0EBA}, {0x0EBE, 0x0EBF}, {0x0EC5, 0x0EC5}, + {0x0EC7, 0x0EC7}, {0x0ECE, 0x0ECF}, {0x0EDA, 0x0EDB}, + {0x0EE0, 0x0EFF}, {0x0F48, 0x0F48}, {0x0F6D, 0x0F70}, + {0x0F98, 0x0F98}, {0x0FBD, 0x0FBD}, {0x0FCD, 0x0FCD}, + {0x0FDB, 0x0FFF}, {0x10C6, 0x10C6}, {0x10C8, 0x10CC}, + {0x10CE, 0x10CF}, {0x1249, 0x1249}, {0x124E, 0x124F}, + {0x1257, 0x1257}, {0x1259, 0x1259}, {0x125E, 0x125F}, + {0x1289, 0x1289}, {0x128E, 0x128F}, {0x12B1, 0x12B1}, + {0x12B6, 0x12B7}, {0x12BF, 0x12BF}, {0x12C1, 0x12C1}, + {0x12C6, 0x12C7}, {0x12D7, 0x12D7}, {0x1311, 0x1311}, + {0x1316, 0x1317}, {0x135B, 0x135C}, {0x137D, 0x137F}, + {0x139A, 0x139F}, {0x13F6, 0x13F7}, {0x13FE, 0x13FF}, + {0x169D, 0x169F}, {0x16F9, 0x16FF}, {0x170D, 0x170D}, + {0x1715, 0x171F}, {0x1737, 0x173F}, {0x1754, 0x175F}, + {0x176D, 0x176D}, {0x1771, 0x1771}, {0x1774, 0x177F}, + {0x17DE, 0x17DF}, {0x17EA, 0x17EF}, {0x17FA, 0x17FF}, + {0x180F, 0x180F}, {0x181A, 0x181F}, {0x1878, 0x187F}, + {0x18AB, 0x18AF}, {0x18F6, 0x18FF}, {0x191F, 0x191F}, + {0x192C, 0x192F}, {0x193C, 0x193F}, {0x1941, 0x1943}, + {0x196E, 0x196F}, {0x1975, 0x197F}, {0x19AC, 0x19AF}, + {0x19CA, 0x19CF}, {0x19DB, 0x19DD}, {0x1A1C, 0x1A1D}, + {0x1A5F, 0x1A5F}, {0x1A7D, 0x1A7E}, {0x1A8A, 0x1A8F}, + {0x1A9A, 0x1A9F}, {0x1AAE, 0x1AAF}, {0x1ABF, 0x1AFF}, + {0x1B4C, 0x1B4F}, {0x1B7D, 0x1B7F}, {0x1BF4, 0x1BFB}, + {0x1C38, 0x1C3A}, {0x1C4A, 0x1C4C}, {0x1C89, 0x1CBF}, + {0x1CC8, 0x1CCF}, {0x1CF7, 0x1CF7}, {0x1CFA, 0x1CFF}, + {0x1DF6, 0x1DFA}, {0x1F16, 0x1F17}, {0x1F1E, 0x1F1F}, + {0x1F46, 0x1F47}, {0x1F4E, 0x1F4F}, {0x1F58, 0x1F58}, + {0x1F5A, 0x1F5A}, {0x1F5C, 0x1F5C}, {0x1F5E, 0x1F5E}, + {0x1F7E, 0x1F7F}, {0x1FB5, 0x1FB5}, {0x1FC5, 0x1FC5}, + {0x1FD4, 0x1FD5}, {0x1FDC, 0x1FDC}, {0x1FF0, 0x1FF1}, + {0x1FF5, 0x1FF5}, {0x1FFF, 0x1FFF}, {0x2065, 0x2065}, + {0x2072, 0x2073}, {0x208F, 0x208F}, {0x209D, 0x209F}, + {0x20BF, 0x20CF}, {0x20F1, 0x20FF}, {0x218C, 0x218F}, + {0x23FF, 0x23FF}, {0x2427, 0x243F}, {0x244B, 0x245F}, + {0x2B74, 0x2B75}, {0x2B96, 0x2B97}, 
{0x2BBA, 0x2BBC}, + {0x2BC9, 0x2BC9}, {0x2BD2, 0x2BEB}, {0x2BF0, 0x2BFF}, + {0x2C2F, 0x2C2F}, {0x2C5F, 0x2C5F}, {0x2CF4, 0x2CF8}, + {0x2D26, 0x2D26}, {0x2D28, 0x2D2C}, {0x2D2E, 0x2D2F}, + {0x2D68, 0x2D6E}, {0x2D71, 0x2D7E}, {0x2D97, 0x2D9F}, + {0x2DA7, 0x2DA7}, {0x2DAF, 0x2DAF}, {0x2DB7, 0x2DB7}, + {0x2DBF, 0x2DBF}, {0x2DC7, 0x2DC7}, {0x2DCF, 0x2DCF}, + {0x2DD7, 0x2DD7}, {0x2DDF, 0x2DDF}, {0x2E45, 0x2E7F}, + {0x2E9A, 0x2E9A}, {0x2EF4, 0x2EFF}, {0x2FD6, 0x2FEF}, + {0x2FFC, 0x2FFF}, {0x3040, 0x3040}, {0x3097, 0x3098}, + {0x3100, 0x3104}, {0x312E, 0x3130}, {0x318F, 0x318F}, + {0x31BB, 0x31BF}, {0x31E4, 0x31EF}, {0x321F, 0x321F}, + {0x32FF, 0x32FF}, {0x4DB6, 0x4DBF}, {0x9FD6, 0x9FFF}, + {0xA48D, 0xA48F}, {0xA4C7, 0xA4CF}, {0xA62C, 0xA63F}, + {0xA6F8, 0xA6FF}, {0xA7AF, 0xA7AF}, {0xA7B8, 0xA7F6}, + {0xA82C, 0xA82F}, {0xA83A, 0xA83F}, {0xA878, 0xA87F}, + {0xA8C6, 0xA8CD}, {0xA8DA, 0xA8DF}, {0xA8FE, 0xA8FF}, + {0xA954, 0xA95E}, {0xA97D, 0xA97F}, {0xA9CE, 0xA9CE}, + {0xA9DA, 0xA9DD}, {0xA9FF, 0xA9FF}, {0xAA37, 0xAA3F}, + {0xAA4E, 0xAA4F}, {0xAA5A, 0xAA5B}, {0xAAC3, 0xAADA}, + {0xAAF7, 0xAB00}, {0xAB07, 0xAB08}, {0xAB0F, 0xAB10}, + {0xAB17, 0xAB1F}, {0xAB27, 0xAB27}, {0xAB2F, 0xAB2F}, + {0xAB66, 0xAB6F}, {0xABEE, 0xABEF}, {0xABFA, 0xABFF}, + {0xD7A4, 0xD7AF}, {0xD7C7, 0xD7CA}, {0xD7FC, 0xD7FF}, + {0xFA6E, 0xFA6F}, {0xFADA, 0xFAFF}, {0xFB07, 0xFB12}, + {0xFB18, 0xFB1C}, {0xFB37, 0xFB37}, {0xFB3D, 0xFB3D}, + {0xFB3F, 0xFB3F}, {0xFB42, 0xFB42}, {0xFB45, 0xFB45}, + {0xFBC2, 0xFBD2}, {0xFD40, 0xFD4F}, {0xFD90, 0xFD91}, + {0xFDC8, 0xFDEF}, {0xFDFE, 0xFDFF}, {0xFE1A, 0xFE1F}, + {0xFE53, 0xFE53}, {0xFE67, 0xFE67}, {0xFE6C, 0xFE6F}, + {0xFE75, 0xFE75}, {0xFEFD, 0xFEFE}, {0xFF00, 0xFF00}, + {0xFFBF, 0xFFC1}, {0xFFC8, 0xFFC9}, {0xFFD0, 0xFFD1}, + {0xFFD8, 0xFFD9}, {0xFFDD, 0xFFDF}, {0xFFE7, 0xFFE7}, + {0xFFEF, 0xFFF8}, {0xFFFE, 0xFFFF}, {0x1000C, 0x1000C}, + {0x10027, 0x10027}, {0x1003B, 0x1003B}, {0x1003E, 0x1003E}, + {0x1004E, 0x1004F}, {0x1005E, 0x1007F}, {0x100FB, 0x100FF}, + {0x10103, 0x10106}, {0x10134, 0x10136}, {0x1018F, 0x1018F}, + {0x1019C, 0x1019F}, {0x101A1, 0x101CF}, {0x101FE, 0x1027F}, + {0x1029D, 0x1029F}, {0x102D1, 0x102DF}, {0x102FC, 0x102FF}, + {0x10324, 0x1032F}, {0x1034B, 0x1034F}, {0x1037B, 0x1037F}, + {0x1039E, 0x1039E}, {0x103C4, 0x103C7}, {0x103D6, 0x103FF}, + {0x1049E, 0x1049F}, {0x104AA, 0x104AF}, {0x104D4, 0x104D7}, + {0x104FC, 0x104FF}, {0x10528, 0x1052F}, {0x10564, 0x1056E}, + {0x10570, 0x105FF}, {0x10737, 0x1073F}, {0x10756, 0x1075F}, + {0x10768, 0x107FF}, {0x10806, 0x10807}, {0x10809, 0x10809}, + {0x10836, 0x10836}, {0x10839, 0x1083B}, {0x1083D, 0x1083E}, + {0x10856, 0x10856}, {0x1089F, 0x108A6}, {0x108B0, 0x108DF}, + {0x108F3, 0x108F3}, {0x108F6, 0x108FA}, {0x1091C, 0x1091E}, + {0x1093A, 0x1093E}, {0x10940, 0x1097F}, {0x109B8, 0x109BB}, + {0x109D0, 0x109D1}, {0x10A04, 0x10A04}, {0x10A07, 0x10A0B}, + {0x10A14, 0x10A14}, {0x10A18, 0x10A18}, {0x10A34, 0x10A37}, + {0x10A3B, 0x10A3E}, {0x10A48, 0x10A4F}, {0x10A59, 0x10A5F}, + {0x10AA0, 0x10ABF}, {0x10AE7, 0x10AEA}, {0x10AF7, 0x10AFF}, + {0x10B36, 0x10B38}, {0x10B56, 0x10B57}, {0x10B73, 0x10B77}, + {0x10B92, 0x10B98}, {0x10B9D, 0x10BA8}, {0x10BB0, 0x10BFF}, + {0x10C49, 0x10C7F}, {0x10CB3, 0x10CBF}, {0x10CF3, 0x10CF9}, + {0x10D00, 0x10E5F}, {0x10E7F, 0x10FFF}, {0x1104E, 0x11051}, + {0x11070, 0x1107E}, {0x110C2, 0x110CF}, {0x110E9, 0x110EF}, + {0x110FA, 0x110FF}, {0x11135, 0x11135}, {0x11144, 0x1114F}, + {0x11177, 0x1117F}, {0x111CE, 0x111CF}, {0x111E0, 0x111E0}, + {0x111F5, 0x111FF}, {0x11212, 0x11212}, {0x1123F, 0x1127F}, + {0x11287, 
0x11287}, {0x11289, 0x11289}, {0x1128E, 0x1128E}, + {0x1129E, 0x1129E}, {0x112AA, 0x112AF}, {0x112EB, 0x112EF}, + {0x112FA, 0x112FF}, {0x11304, 0x11304}, {0x1130D, 0x1130E}, + {0x11311, 0x11312}, {0x11329, 0x11329}, {0x11331, 0x11331}, + {0x11334, 0x11334}, {0x1133A, 0x1133B}, {0x11345, 0x11346}, + {0x11349, 0x1134A}, {0x1134E, 0x1134F}, {0x11351, 0x11356}, + {0x11358, 0x1135C}, {0x11364, 0x11365}, {0x1136D, 0x1136F}, + {0x11375, 0x113FF}, {0x1145A, 0x1145A}, {0x1145C, 0x1145C}, + {0x1145E, 0x1147F}, {0x114C8, 0x114CF}, {0x114DA, 0x1157F}, + {0x115B6, 0x115B7}, {0x115DE, 0x115FF}, {0x11645, 0x1164F}, + {0x1165A, 0x1165F}, {0x1166D, 0x1167F}, {0x116B8, 0x116BF}, + {0x116CA, 0x116FF}, {0x1171A, 0x1171C}, {0x1172C, 0x1172F}, + {0x11740, 0x1189F}, {0x118F3, 0x118FE}, {0x11900, 0x11ABF}, + {0x11AF9, 0x11BFF}, {0x11C09, 0x11C09}, {0x11C37, 0x11C37}, + {0x11C46, 0x11C4F}, {0x11C6D, 0x11C6F}, {0x11C90, 0x11C91}, + {0x11CA8, 0x11CA8}, {0x11CB7, 0x11FFF}, {0x1239A, 0x123FF}, + {0x1246F, 0x1246F}, {0x12475, 0x1247F}, {0x12544, 0x12FFF}, + {0x1342F, 0x143FF}, {0x14647, 0x167FF}, {0x16A39, 0x16A3F}, + {0x16A5F, 0x16A5F}, {0x16A6A, 0x16A6D}, {0x16A70, 0x16ACF}, + {0x16AEE, 0x16AEF}, {0x16AF6, 0x16AFF}, {0x16B46, 0x16B4F}, + {0x16B5A, 0x16B5A}, {0x16B62, 0x16B62}, {0x16B78, 0x16B7C}, + {0x16B90, 0x16EFF}, {0x16F45, 0x16F4F}, {0x16F7F, 0x16F8E}, + {0x16FA0, 0x16FDF}, {0x16FE1, 0x16FFF}, {0x187ED, 0x187FF}, + {0x18AF3, 0x1AFFF}, {0x1B002, 0x1BBFF}, {0x1BC6B, 0x1BC6F}, + {0x1BC7D, 0x1BC7F}, {0x1BC89, 0x1BC8F}, {0x1BC9A, 0x1BC9B}, + {0x1BCA4, 0x1CFFF}, {0x1D0F6, 0x1D0FF}, {0x1D127, 0x1D128}, + {0x1D1E9, 0x1D1FF}, {0x1D246, 0x1D2FF}, {0x1D357, 0x1D35F}, + {0x1D372, 0x1D3FF}, {0x1D455, 0x1D455}, {0x1D49D, 0x1D49D}, + {0x1D4A0, 0x1D4A1}, {0x1D4A3, 0x1D4A4}, {0x1D4A7, 0x1D4A8}, + {0x1D4AD, 0x1D4AD}, {0x1D4BA, 0x1D4BA}, {0x1D4BC, 0x1D4BC}, + {0x1D4C4, 0x1D4C4}, {0x1D506, 0x1D506}, {0x1D50B, 0x1D50C}, + {0x1D515, 0x1D515}, {0x1D51D, 0x1D51D}, {0x1D53A, 0x1D53A}, + {0x1D53F, 0x1D53F}, {0x1D545, 0x1D545}, {0x1D547, 0x1D549}, + {0x1D551, 0x1D551}, {0x1D6A6, 0x1D6A7}, {0x1D7CC, 0x1D7CD}, + {0x1DA8C, 0x1DA9A}, {0x1DAA0, 0x1DAA0}, {0x1DAB0, 0x1DFFF}, + {0x1E007, 0x1E007}, {0x1E019, 0x1E01A}, {0x1E022, 0x1E022}, + {0x1E025, 0x1E025}, {0x1E02B, 0x1E7FF}, {0x1E8C5, 0x1E8C6}, + {0x1E8D7, 0x1E8FF}, {0x1E94B, 0x1E94F}, {0x1E95A, 0x1E95D}, + {0x1E960, 0x1EDFF}, {0x1EE04, 0x1EE04}, {0x1EE20, 0x1EE20}, + {0x1EE23, 0x1EE23}, {0x1EE25, 0x1EE26}, {0x1EE28, 0x1EE28}, + {0x1EE33, 0x1EE33}, {0x1EE38, 0x1EE38}, {0x1EE3A, 0x1EE3A}, + {0x1EE3C, 0x1EE41}, {0x1EE43, 0x1EE46}, {0x1EE48, 0x1EE48}, + {0x1EE4A, 0x1EE4A}, {0x1EE4C, 0x1EE4C}, {0x1EE50, 0x1EE50}, + {0x1EE53, 0x1EE53}, {0x1EE55, 0x1EE56}, {0x1EE58, 0x1EE58}, + {0x1EE5A, 0x1EE5A}, {0x1EE5C, 0x1EE5C}, {0x1EE5E, 0x1EE5E}, + {0x1EE60, 0x1EE60}, {0x1EE63, 0x1EE63}, {0x1EE65, 0x1EE66}, + {0x1EE6B, 0x1EE6B}, {0x1EE73, 0x1EE73}, {0x1EE78, 0x1EE78}, + {0x1EE7D, 0x1EE7D}, {0x1EE7F, 0x1EE7F}, {0x1EE8A, 0x1EE8A}, + {0x1EE9C, 0x1EEA0}, {0x1EEA4, 0x1EEA4}, {0x1EEAA, 0x1EEAA}, + {0x1EEBC, 0x1EEEF}, {0x1EEF2, 0x1EFFF}, {0x1F02C, 0x1F02F}, + {0x1F094, 0x1F09F}, {0x1F0AF, 0x1F0B0}, {0x1F0C0, 0x1F0C0}, + {0x1F0D0, 0x1F0D0}, {0x1F0F6, 0x1F0FF}, {0x1F10D, 0x1F10F}, + {0x1F12F, 0x1F12F}, {0x1F16C, 0x1F16F}, {0x1F1AD, 0x1F1E5}, + {0x1F203, 0x1F20F}, {0x1F23C, 0x1F23F}, {0x1F249, 0x1F24F}, + {0x1F252, 0x1F2FF}, {0x1F6D3, 0x1F6DF}, {0x1F6ED, 0x1F6EF}, + {0x1F6F7, 0x1F6FF}, {0x1F774, 0x1F77F}, {0x1F7D5, 0x1F7FF}, + {0x1F80C, 0x1F80F}, {0x1F848, 0x1F84F}, {0x1F85A, 0x1F85F}, + {0x1F888, 0x1F88F}, {0x1F8AE, 
0x1F90F}, {0x1F91F, 0x1F91F}, + {0x1F928, 0x1F92F}, {0x1F931, 0x1F932}, {0x1F93F, 0x1F93F}, + {0x1F94C, 0x1F94F}, {0x1F95F, 0x1F97F}, {0x1F992, 0x1F9BF}, + {0x1F9C1, 0x1FFFF}, {0x2A6D7, 0x2A6FF}, {0x2B735, 0x2B73F}, + {0x2B81E, 0x2B81F}, {0x2CEA2, 0x2F7FF}, {0x2FA1E, 0xE0000}, + {0xE0002, 0xE001F}, {0xE0080, 0xE00FF}, {0xE01F0, 0xEFFFF}, + {0xFFFFE, 0xFFFFF}, +} + +var neutral = table{ + {0x0000, 0x001F}, {0x007F, 0x007F}, {0x0080, 0x009F}, + {0x00A0, 0x00A0}, {0x00A9, 0x00A9}, {0x00AB, 0x00AB}, + {0x00B5, 0x00B5}, {0x00BB, 0x00BB}, {0x00C0, 0x00C5}, + {0x00C7, 0x00CF}, {0x00D1, 0x00D6}, {0x00D9, 0x00DD}, + {0x00E2, 0x00E5}, {0x00E7, 0x00E7}, {0x00EB, 0x00EB}, + {0x00EE, 0x00EF}, {0x00F1, 0x00F1}, {0x00F4, 0x00F6}, + {0x00FB, 0x00FB}, {0x00FD, 0x00FD}, {0x00FF, 0x00FF}, + {0x0100, 0x0100}, {0x0102, 0x0110}, {0x0112, 0x0112}, + {0x0114, 0x011A}, {0x011C, 0x0125}, {0x0128, 0x012A}, + {0x012C, 0x0130}, {0x0134, 0x0137}, {0x0139, 0x013E}, + {0x0143, 0x0143}, {0x0145, 0x0147}, {0x014C, 0x014C}, + {0x014E, 0x0151}, {0x0154, 0x0165}, {0x0168, 0x016A}, + {0x016C, 0x017F}, {0x0180, 0x01BA}, {0x01BB, 0x01BB}, + {0x01BC, 0x01BF}, {0x01C0, 0x01C3}, {0x01C4, 0x01CD}, + {0x01CF, 0x01CF}, {0x01D1, 0x01D1}, {0x01D3, 0x01D3}, + {0x01D5, 0x01D5}, {0x01D7, 0x01D7}, {0x01D9, 0x01D9}, + {0x01DB, 0x01DB}, {0x01DD, 0x024F}, {0x0250, 0x0250}, + {0x0252, 0x0260}, {0x0262, 0x0293}, {0x0294, 0x0294}, + {0x0295, 0x02AF}, {0x02B0, 0x02C1}, {0x02C2, 0x02C3}, + {0x02C5, 0x02C5}, {0x02C6, 0x02C6}, {0x02C8, 0x02C8}, + {0x02CC, 0x02CC}, {0x02CE, 0x02CF}, {0x02D1, 0x02D1}, + {0x02D2, 0x02D7}, {0x02DC, 0x02DC}, {0x02DE, 0x02DE}, + {0x02E0, 0x02E4}, {0x02E5, 0x02EB}, {0x02EC, 0x02EC}, + {0x02ED, 0x02ED}, {0x02EE, 0x02EE}, {0x02EF, 0x02FF}, + {0x0370, 0x0373}, {0x0374, 0x0374}, {0x0375, 0x0375}, + {0x0376, 0x0377}, {0x037A, 0x037A}, {0x037B, 0x037D}, + {0x037E, 0x037E}, {0x037F, 0x037F}, {0x0384, 0x0385}, + {0x0386, 0x0386}, {0x0387, 0x0387}, {0x0388, 0x038A}, + {0x038C, 0x038C}, {0x038E, 0x0390}, {0x03AA, 0x03B0}, + {0x03C2, 0x03C2}, {0x03CA, 0x03F5}, {0x03F6, 0x03F6}, + {0x03F7, 0x03FF}, {0x0400, 0x0400}, {0x0402, 0x040F}, + {0x0450, 0x0450}, {0x0452, 0x0481}, {0x0482, 0x0482}, + {0x0483, 0x0487}, {0x0488, 0x0489}, {0x048A, 0x04FF}, + {0x0500, 0x052F}, {0x0531, 0x0556}, {0x0559, 0x0559}, + {0x055A, 0x055F}, {0x0561, 0x0587}, {0x0589, 0x0589}, + {0x058A, 0x058A}, {0x058D, 0x058E}, {0x058F, 0x058F}, + {0x0591, 0x05BD}, {0x05BE, 0x05BE}, {0x05BF, 0x05BF}, + {0x05C0, 0x05C0}, {0x05C1, 0x05C2}, {0x05C3, 0x05C3}, + {0x05C4, 0x05C5}, {0x05C6, 0x05C6}, {0x05C7, 0x05C7}, + {0x05D0, 0x05EA}, {0x05F0, 0x05F2}, {0x05F3, 0x05F4}, + {0x0600, 0x0605}, {0x0606, 0x0608}, {0x0609, 0x060A}, + {0x060B, 0x060B}, {0x060C, 0x060D}, {0x060E, 0x060F}, + {0x0610, 0x061A}, {0x061B, 0x061B}, {0x061C, 0x061C}, + {0x061E, 0x061F}, {0x0620, 0x063F}, {0x0640, 0x0640}, + {0x0641, 0x064A}, {0x064B, 0x065F}, {0x0660, 0x0669}, + {0x066A, 0x066D}, {0x066E, 0x066F}, {0x0670, 0x0670}, + {0x0671, 0x06D3}, {0x06D4, 0x06D4}, {0x06D5, 0x06D5}, + {0x06D6, 0x06DC}, {0x06DD, 0x06DD}, {0x06DE, 0x06DE}, + {0x06DF, 0x06E4}, {0x06E5, 0x06E6}, {0x06E7, 0x06E8}, + {0x06E9, 0x06E9}, {0x06EA, 0x06ED}, {0x06EE, 0x06EF}, + {0x06F0, 0x06F9}, {0x06FA, 0x06FC}, {0x06FD, 0x06FE}, + {0x06FF, 0x06FF}, {0x0700, 0x070D}, {0x070F, 0x070F}, + {0x0710, 0x0710}, {0x0711, 0x0711}, {0x0712, 0x072F}, + {0x0730, 0x074A}, {0x074D, 0x074F}, {0x0750, 0x077F}, + {0x0780, 0x07A5}, {0x07A6, 0x07B0}, {0x07B1, 0x07B1}, + {0x07C0, 0x07C9}, {0x07CA, 0x07EA}, {0x07EB, 0x07F3}, + {0x07F4, 0x07F5}, {0x07F6, 
0x07F6}, {0x07F7, 0x07F9}, + {0x07FA, 0x07FA}, {0x0800, 0x0815}, {0x0816, 0x0819}, + {0x081A, 0x081A}, {0x081B, 0x0823}, {0x0824, 0x0824}, + {0x0825, 0x0827}, {0x0828, 0x0828}, {0x0829, 0x082D}, + {0x0830, 0x083E}, {0x0840, 0x0858}, {0x0859, 0x085B}, + {0x085E, 0x085E}, {0x08A0, 0x08B4}, {0x08B6, 0x08BD}, + {0x08D4, 0x08E1}, {0x08E2, 0x08E2}, {0x08E3, 0x08FF}, + {0x0900, 0x0902}, {0x0903, 0x0903}, {0x0904, 0x0939}, + {0x093A, 0x093A}, {0x093B, 0x093B}, {0x093C, 0x093C}, + {0x093D, 0x093D}, {0x093E, 0x0940}, {0x0941, 0x0948}, + {0x0949, 0x094C}, {0x094D, 0x094D}, {0x094E, 0x094F}, + {0x0950, 0x0950}, {0x0951, 0x0957}, {0x0958, 0x0961}, + {0x0962, 0x0963}, {0x0964, 0x0965}, {0x0966, 0x096F}, + {0x0970, 0x0970}, {0x0971, 0x0971}, {0x0972, 0x097F}, + {0x0980, 0x0980}, {0x0981, 0x0981}, {0x0982, 0x0983}, + {0x0985, 0x098C}, {0x098F, 0x0990}, {0x0993, 0x09A8}, + {0x09AA, 0x09B0}, {0x09B2, 0x09B2}, {0x09B6, 0x09B9}, + {0x09BC, 0x09BC}, {0x09BD, 0x09BD}, {0x09BE, 0x09C0}, + {0x09C1, 0x09C4}, {0x09C7, 0x09C8}, {0x09CB, 0x09CC}, + {0x09CD, 0x09CD}, {0x09CE, 0x09CE}, {0x09D7, 0x09D7}, + {0x09DC, 0x09DD}, {0x09DF, 0x09E1}, {0x09E2, 0x09E3}, + {0x09E6, 0x09EF}, {0x09F0, 0x09F1}, {0x09F2, 0x09F3}, + {0x09F4, 0x09F9}, {0x09FA, 0x09FA}, {0x09FB, 0x09FB}, + {0x0A01, 0x0A02}, {0x0A03, 0x0A03}, {0x0A05, 0x0A0A}, + {0x0A0F, 0x0A10}, {0x0A13, 0x0A28}, {0x0A2A, 0x0A30}, + {0x0A32, 0x0A33}, {0x0A35, 0x0A36}, {0x0A38, 0x0A39}, + {0x0A3C, 0x0A3C}, {0x0A3E, 0x0A40}, {0x0A41, 0x0A42}, + {0x0A47, 0x0A48}, {0x0A4B, 0x0A4D}, {0x0A51, 0x0A51}, + {0x0A59, 0x0A5C}, {0x0A5E, 0x0A5E}, {0x0A66, 0x0A6F}, + {0x0A70, 0x0A71}, {0x0A72, 0x0A74}, {0x0A75, 0x0A75}, + {0x0A81, 0x0A82}, {0x0A83, 0x0A83}, {0x0A85, 0x0A8D}, + {0x0A8F, 0x0A91}, {0x0A93, 0x0AA8}, {0x0AAA, 0x0AB0}, + {0x0AB2, 0x0AB3}, {0x0AB5, 0x0AB9}, {0x0ABC, 0x0ABC}, + {0x0ABD, 0x0ABD}, {0x0ABE, 0x0AC0}, {0x0AC1, 0x0AC5}, + {0x0AC7, 0x0AC8}, {0x0AC9, 0x0AC9}, {0x0ACB, 0x0ACC}, + {0x0ACD, 0x0ACD}, {0x0AD0, 0x0AD0}, {0x0AE0, 0x0AE1}, + {0x0AE2, 0x0AE3}, {0x0AE6, 0x0AEF}, {0x0AF0, 0x0AF0}, + {0x0AF1, 0x0AF1}, {0x0AF9, 0x0AF9}, {0x0B01, 0x0B01}, + {0x0B02, 0x0B03}, {0x0B05, 0x0B0C}, {0x0B0F, 0x0B10}, + {0x0B13, 0x0B28}, {0x0B2A, 0x0B30}, {0x0B32, 0x0B33}, + {0x0B35, 0x0B39}, {0x0B3C, 0x0B3C}, {0x0B3D, 0x0B3D}, + {0x0B3E, 0x0B3E}, {0x0B3F, 0x0B3F}, {0x0B40, 0x0B40}, + {0x0B41, 0x0B44}, {0x0B47, 0x0B48}, {0x0B4B, 0x0B4C}, + {0x0B4D, 0x0B4D}, {0x0B56, 0x0B56}, {0x0B57, 0x0B57}, + {0x0B5C, 0x0B5D}, {0x0B5F, 0x0B61}, {0x0B62, 0x0B63}, + {0x0B66, 0x0B6F}, {0x0B70, 0x0B70}, {0x0B71, 0x0B71}, + {0x0B72, 0x0B77}, {0x0B82, 0x0B82}, {0x0B83, 0x0B83}, + {0x0B85, 0x0B8A}, {0x0B8E, 0x0B90}, {0x0B92, 0x0B95}, + {0x0B99, 0x0B9A}, {0x0B9C, 0x0B9C}, {0x0B9E, 0x0B9F}, + {0x0BA3, 0x0BA4}, {0x0BA8, 0x0BAA}, {0x0BAE, 0x0BB9}, + {0x0BBE, 0x0BBF}, {0x0BC0, 0x0BC0}, {0x0BC1, 0x0BC2}, + {0x0BC6, 0x0BC8}, {0x0BCA, 0x0BCC}, {0x0BCD, 0x0BCD}, + {0x0BD0, 0x0BD0}, {0x0BD7, 0x0BD7}, {0x0BE6, 0x0BEF}, + {0x0BF0, 0x0BF2}, {0x0BF3, 0x0BF8}, {0x0BF9, 0x0BF9}, + {0x0BFA, 0x0BFA}, {0x0C00, 0x0C00}, {0x0C01, 0x0C03}, + {0x0C05, 0x0C0C}, {0x0C0E, 0x0C10}, {0x0C12, 0x0C28}, + {0x0C2A, 0x0C39}, {0x0C3D, 0x0C3D}, {0x0C3E, 0x0C40}, + {0x0C41, 0x0C44}, {0x0C46, 0x0C48}, {0x0C4A, 0x0C4D}, + {0x0C55, 0x0C56}, {0x0C58, 0x0C5A}, {0x0C60, 0x0C61}, + {0x0C62, 0x0C63}, {0x0C66, 0x0C6F}, {0x0C78, 0x0C7E}, + {0x0C7F, 0x0C7F}, {0x0C80, 0x0C80}, {0x0C81, 0x0C81}, + {0x0C82, 0x0C83}, {0x0C85, 0x0C8C}, {0x0C8E, 0x0C90}, + {0x0C92, 0x0CA8}, {0x0CAA, 0x0CB3}, {0x0CB5, 0x0CB9}, + {0x0CBC, 0x0CBC}, {0x0CBD, 0x0CBD}, {0x0CBE, 0x0CBE}, 
+ {0x0CBF, 0x0CBF}, {0x0CC0, 0x0CC4}, {0x0CC6, 0x0CC6}, + {0x0CC7, 0x0CC8}, {0x0CCA, 0x0CCB}, {0x0CCC, 0x0CCD}, + {0x0CD5, 0x0CD6}, {0x0CDE, 0x0CDE}, {0x0CE0, 0x0CE1}, + {0x0CE2, 0x0CE3}, {0x0CE6, 0x0CEF}, {0x0CF1, 0x0CF2}, + {0x0D01, 0x0D01}, {0x0D02, 0x0D03}, {0x0D05, 0x0D0C}, + {0x0D0E, 0x0D10}, {0x0D12, 0x0D3A}, {0x0D3D, 0x0D3D}, + {0x0D3E, 0x0D40}, {0x0D41, 0x0D44}, {0x0D46, 0x0D48}, + {0x0D4A, 0x0D4C}, {0x0D4D, 0x0D4D}, {0x0D4E, 0x0D4E}, + {0x0D4F, 0x0D4F}, {0x0D54, 0x0D56}, {0x0D57, 0x0D57}, + {0x0D58, 0x0D5E}, {0x0D5F, 0x0D61}, {0x0D62, 0x0D63}, + {0x0D66, 0x0D6F}, {0x0D70, 0x0D78}, {0x0D79, 0x0D79}, + {0x0D7A, 0x0D7F}, {0x0D82, 0x0D83}, {0x0D85, 0x0D96}, + {0x0D9A, 0x0DB1}, {0x0DB3, 0x0DBB}, {0x0DBD, 0x0DBD}, + {0x0DC0, 0x0DC6}, {0x0DCA, 0x0DCA}, {0x0DCF, 0x0DD1}, + {0x0DD2, 0x0DD4}, {0x0DD6, 0x0DD6}, {0x0DD8, 0x0DDF}, + {0x0DE6, 0x0DEF}, {0x0DF2, 0x0DF3}, {0x0DF4, 0x0DF4}, + {0x0E01, 0x0E30}, {0x0E31, 0x0E31}, {0x0E32, 0x0E33}, + {0x0E34, 0x0E3A}, {0x0E3F, 0x0E3F}, {0x0E40, 0x0E45}, + {0x0E46, 0x0E46}, {0x0E47, 0x0E4E}, {0x0E4F, 0x0E4F}, + {0x0E50, 0x0E59}, {0x0E5A, 0x0E5B}, {0x0E81, 0x0E82}, + {0x0E84, 0x0E84}, {0x0E87, 0x0E88}, {0x0E8A, 0x0E8A}, + {0x0E8D, 0x0E8D}, {0x0E94, 0x0E97}, {0x0E99, 0x0E9F}, + {0x0EA1, 0x0EA3}, {0x0EA5, 0x0EA5}, {0x0EA7, 0x0EA7}, + {0x0EAA, 0x0EAB}, {0x0EAD, 0x0EB0}, {0x0EB1, 0x0EB1}, + {0x0EB2, 0x0EB3}, {0x0EB4, 0x0EB9}, {0x0EBB, 0x0EBC}, + {0x0EBD, 0x0EBD}, {0x0EC0, 0x0EC4}, {0x0EC6, 0x0EC6}, + {0x0EC8, 0x0ECD}, {0x0ED0, 0x0ED9}, {0x0EDC, 0x0EDF}, + {0x0F00, 0x0F00}, {0x0F01, 0x0F03}, {0x0F04, 0x0F12}, + {0x0F13, 0x0F13}, {0x0F14, 0x0F14}, {0x0F15, 0x0F17}, + {0x0F18, 0x0F19}, {0x0F1A, 0x0F1F}, {0x0F20, 0x0F29}, + {0x0F2A, 0x0F33}, {0x0F34, 0x0F34}, {0x0F35, 0x0F35}, + {0x0F36, 0x0F36}, {0x0F37, 0x0F37}, {0x0F38, 0x0F38}, + {0x0F39, 0x0F39}, {0x0F3A, 0x0F3A}, {0x0F3B, 0x0F3B}, + {0x0F3C, 0x0F3C}, {0x0F3D, 0x0F3D}, {0x0F3E, 0x0F3F}, + {0x0F40, 0x0F47}, {0x0F49, 0x0F6C}, {0x0F71, 0x0F7E}, + {0x0F7F, 0x0F7F}, {0x0F80, 0x0F84}, {0x0F85, 0x0F85}, + {0x0F86, 0x0F87}, {0x0F88, 0x0F8C}, {0x0F8D, 0x0F97}, + {0x0F99, 0x0FBC}, {0x0FBE, 0x0FC5}, {0x0FC6, 0x0FC6}, + {0x0FC7, 0x0FCC}, {0x0FCE, 0x0FCF}, {0x0FD0, 0x0FD4}, + {0x0FD5, 0x0FD8}, {0x0FD9, 0x0FDA}, {0x1000, 0x102A}, + {0x102B, 0x102C}, {0x102D, 0x1030}, {0x1031, 0x1031}, + {0x1032, 0x1037}, {0x1038, 0x1038}, {0x1039, 0x103A}, + {0x103B, 0x103C}, {0x103D, 0x103E}, {0x103F, 0x103F}, + {0x1040, 0x1049}, {0x104A, 0x104F}, {0x1050, 0x1055}, + {0x1056, 0x1057}, {0x1058, 0x1059}, {0x105A, 0x105D}, + {0x105E, 0x1060}, {0x1061, 0x1061}, {0x1062, 0x1064}, + {0x1065, 0x1066}, {0x1067, 0x106D}, {0x106E, 0x1070}, + {0x1071, 0x1074}, {0x1075, 0x1081}, {0x1082, 0x1082}, + {0x1083, 0x1084}, {0x1085, 0x1086}, {0x1087, 0x108C}, + {0x108D, 0x108D}, {0x108E, 0x108E}, {0x108F, 0x108F}, + {0x1090, 0x1099}, {0x109A, 0x109C}, {0x109D, 0x109D}, + {0x109E, 0x109F}, {0x10A0, 0x10C5}, {0x10C7, 0x10C7}, + {0x10CD, 0x10CD}, {0x10D0, 0x10FA}, {0x10FB, 0x10FB}, + {0x10FC, 0x10FC}, {0x10FD, 0x10FF}, {0x1160, 0x11FF}, + {0x1200, 0x1248}, {0x124A, 0x124D}, {0x1250, 0x1256}, + {0x1258, 0x1258}, {0x125A, 0x125D}, {0x1260, 0x1288}, + {0x128A, 0x128D}, {0x1290, 0x12B0}, {0x12B2, 0x12B5}, + {0x12B8, 0x12BE}, {0x12C0, 0x12C0}, {0x12C2, 0x12C5}, + {0x12C8, 0x12D6}, {0x12D8, 0x1310}, {0x1312, 0x1315}, + {0x1318, 0x135A}, {0x135D, 0x135F}, {0x1360, 0x1368}, + {0x1369, 0x137C}, {0x1380, 0x138F}, {0x1390, 0x1399}, + {0x13A0, 0x13F5}, {0x13F8, 0x13FD}, {0x1400, 0x1400}, + {0x1401, 0x166C}, {0x166D, 0x166E}, {0x166F, 0x167F}, + {0x1680, 0x1680}, 
{0x1681, 0x169A}, {0x169B, 0x169B}, + {0x169C, 0x169C}, {0x16A0, 0x16EA}, {0x16EB, 0x16ED}, + {0x16EE, 0x16F0}, {0x16F1, 0x16F8}, {0x1700, 0x170C}, + {0x170E, 0x1711}, {0x1712, 0x1714}, {0x1720, 0x1731}, + {0x1732, 0x1734}, {0x1735, 0x1736}, {0x1740, 0x1751}, + {0x1752, 0x1753}, {0x1760, 0x176C}, {0x176E, 0x1770}, + {0x1772, 0x1773}, {0x1780, 0x17B3}, {0x17B4, 0x17B5}, + {0x17B6, 0x17B6}, {0x17B7, 0x17BD}, {0x17BE, 0x17C5}, + {0x17C6, 0x17C6}, {0x17C7, 0x17C8}, {0x17C9, 0x17D3}, + {0x17D4, 0x17D6}, {0x17D7, 0x17D7}, {0x17D8, 0x17DA}, + {0x17DB, 0x17DB}, {0x17DC, 0x17DC}, {0x17DD, 0x17DD}, + {0x17E0, 0x17E9}, {0x17F0, 0x17F9}, {0x1800, 0x1805}, + {0x1806, 0x1806}, {0x1807, 0x180A}, {0x180B, 0x180D}, + {0x180E, 0x180E}, {0x1810, 0x1819}, {0x1820, 0x1842}, + {0x1843, 0x1843}, {0x1844, 0x1877}, {0x1880, 0x1884}, + {0x1885, 0x1886}, {0x1887, 0x18A8}, {0x18A9, 0x18A9}, + {0x18AA, 0x18AA}, {0x18B0, 0x18F5}, {0x1900, 0x191E}, + {0x1920, 0x1922}, {0x1923, 0x1926}, {0x1927, 0x1928}, + {0x1929, 0x192B}, {0x1930, 0x1931}, {0x1932, 0x1932}, + {0x1933, 0x1938}, {0x1939, 0x193B}, {0x1940, 0x1940}, + {0x1944, 0x1945}, {0x1946, 0x194F}, {0x1950, 0x196D}, + {0x1970, 0x1974}, {0x1980, 0x19AB}, {0x19B0, 0x19C9}, + {0x19D0, 0x19D9}, {0x19DA, 0x19DA}, {0x19DE, 0x19DF}, + {0x19E0, 0x19FF}, {0x1A00, 0x1A16}, {0x1A17, 0x1A18}, + {0x1A19, 0x1A1A}, {0x1A1B, 0x1A1B}, {0x1A1E, 0x1A1F}, + {0x1A20, 0x1A54}, {0x1A55, 0x1A55}, {0x1A56, 0x1A56}, + {0x1A57, 0x1A57}, {0x1A58, 0x1A5E}, {0x1A60, 0x1A60}, + {0x1A61, 0x1A61}, {0x1A62, 0x1A62}, {0x1A63, 0x1A64}, + {0x1A65, 0x1A6C}, {0x1A6D, 0x1A72}, {0x1A73, 0x1A7C}, + {0x1A7F, 0x1A7F}, {0x1A80, 0x1A89}, {0x1A90, 0x1A99}, + {0x1AA0, 0x1AA6}, {0x1AA7, 0x1AA7}, {0x1AA8, 0x1AAD}, + {0x1AB0, 0x1ABD}, {0x1ABE, 0x1ABE}, {0x1B00, 0x1B03}, + {0x1B04, 0x1B04}, {0x1B05, 0x1B33}, {0x1B34, 0x1B34}, + {0x1B35, 0x1B35}, {0x1B36, 0x1B3A}, {0x1B3B, 0x1B3B}, + {0x1B3C, 0x1B3C}, {0x1B3D, 0x1B41}, {0x1B42, 0x1B42}, + {0x1B43, 0x1B44}, {0x1B45, 0x1B4B}, {0x1B50, 0x1B59}, + {0x1B5A, 0x1B60}, {0x1B61, 0x1B6A}, {0x1B6B, 0x1B73}, + {0x1B74, 0x1B7C}, {0x1B80, 0x1B81}, {0x1B82, 0x1B82}, + {0x1B83, 0x1BA0}, {0x1BA1, 0x1BA1}, {0x1BA2, 0x1BA5}, + {0x1BA6, 0x1BA7}, {0x1BA8, 0x1BA9}, {0x1BAA, 0x1BAA}, + {0x1BAB, 0x1BAD}, {0x1BAE, 0x1BAF}, {0x1BB0, 0x1BB9}, + {0x1BBA, 0x1BBF}, {0x1BC0, 0x1BE5}, {0x1BE6, 0x1BE6}, + {0x1BE7, 0x1BE7}, {0x1BE8, 0x1BE9}, {0x1BEA, 0x1BEC}, + {0x1BED, 0x1BED}, {0x1BEE, 0x1BEE}, {0x1BEF, 0x1BF1}, + {0x1BF2, 0x1BF3}, {0x1BFC, 0x1BFF}, {0x1C00, 0x1C23}, + {0x1C24, 0x1C2B}, {0x1C2C, 0x1C33}, {0x1C34, 0x1C35}, + {0x1C36, 0x1C37}, {0x1C3B, 0x1C3F}, {0x1C40, 0x1C49}, + {0x1C4D, 0x1C4F}, {0x1C50, 0x1C59}, {0x1C5A, 0x1C77}, + {0x1C78, 0x1C7D}, {0x1C7E, 0x1C7F}, {0x1C80, 0x1C88}, + {0x1CC0, 0x1CC7}, {0x1CD0, 0x1CD2}, {0x1CD3, 0x1CD3}, + {0x1CD4, 0x1CE0}, {0x1CE1, 0x1CE1}, {0x1CE2, 0x1CE8}, + {0x1CE9, 0x1CEC}, {0x1CED, 0x1CED}, {0x1CEE, 0x1CF1}, + {0x1CF2, 0x1CF3}, {0x1CF4, 0x1CF4}, {0x1CF5, 0x1CF6}, + {0x1CF8, 0x1CF9}, {0x1D00, 0x1D2B}, {0x1D2C, 0x1D6A}, + {0x1D6B, 0x1D77}, {0x1D78, 0x1D78}, {0x1D79, 0x1D7F}, + {0x1D80, 0x1D9A}, {0x1D9B, 0x1DBF}, {0x1DC0, 0x1DF5}, + {0x1DFB, 0x1DFF}, {0x1E00, 0x1EFF}, {0x1F00, 0x1F15}, + {0x1F18, 0x1F1D}, {0x1F20, 0x1F45}, {0x1F48, 0x1F4D}, + {0x1F50, 0x1F57}, {0x1F59, 0x1F59}, {0x1F5B, 0x1F5B}, + {0x1F5D, 0x1F5D}, {0x1F5F, 0x1F7D}, {0x1F80, 0x1FB4}, + {0x1FB6, 0x1FBC}, {0x1FBD, 0x1FBD}, {0x1FBE, 0x1FBE}, + {0x1FBF, 0x1FC1}, {0x1FC2, 0x1FC4}, {0x1FC6, 0x1FCC}, + {0x1FCD, 0x1FCF}, {0x1FD0, 0x1FD3}, {0x1FD6, 0x1FDB}, + {0x1FDD, 0x1FDF}, {0x1FE0, 0x1FEC}, {0x1FED, 
0x1FEF}, + {0x1FF2, 0x1FF4}, {0x1FF6, 0x1FFC}, {0x1FFD, 0x1FFE}, + {0x2000, 0x200A}, {0x200B, 0x200F}, {0x2011, 0x2012}, + {0x2017, 0x2017}, {0x201A, 0x201A}, {0x201B, 0x201B}, + {0x201E, 0x201E}, {0x201F, 0x201F}, {0x2023, 0x2023}, + {0x2028, 0x2028}, {0x2029, 0x2029}, {0x202A, 0x202E}, + {0x202F, 0x202F}, {0x2031, 0x2031}, {0x2034, 0x2034}, + {0x2036, 0x2038}, {0x2039, 0x2039}, {0x203A, 0x203A}, + {0x203C, 0x203D}, {0x203F, 0x2040}, {0x2041, 0x2043}, + {0x2044, 0x2044}, {0x2045, 0x2045}, {0x2046, 0x2046}, + {0x2047, 0x2051}, {0x2052, 0x2052}, {0x2053, 0x2053}, + {0x2054, 0x2054}, {0x2055, 0x205E}, {0x205F, 0x205F}, + {0x2060, 0x2064}, {0x2066, 0x206F}, {0x2070, 0x2070}, + {0x2071, 0x2071}, {0x2075, 0x2079}, {0x207A, 0x207C}, + {0x207D, 0x207D}, {0x207E, 0x207E}, {0x2080, 0x2080}, + {0x2085, 0x2089}, {0x208A, 0x208C}, {0x208D, 0x208D}, + {0x208E, 0x208E}, {0x2090, 0x209C}, {0x20A0, 0x20A8}, + {0x20AA, 0x20AB}, {0x20AD, 0x20BE}, {0x20D0, 0x20DC}, + {0x20DD, 0x20E0}, {0x20E1, 0x20E1}, {0x20E2, 0x20E4}, + {0x20E5, 0x20F0}, {0x2100, 0x2101}, {0x2102, 0x2102}, + {0x2104, 0x2104}, {0x2106, 0x2106}, {0x2107, 0x2107}, + {0x2108, 0x2108}, {0x210A, 0x2112}, {0x2114, 0x2114}, + {0x2115, 0x2115}, {0x2117, 0x2117}, {0x2118, 0x2118}, + {0x2119, 0x211D}, {0x211E, 0x2120}, {0x2123, 0x2123}, + {0x2124, 0x2124}, {0x2125, 0x2125}, {0x2127, 0x2127}, + {0x2128, 0x2128}, {0x2129, 0x2129}, {0x212A, 0x212A}, + {0x212C, 0x212D}, {0x212E, 0x212E}, {0x212F, 0x2134}, + {0x2135, 0x2138}, {0x2139, 0x2139}, {0x213A, 0x213B}, + {0x213C, 0x213F}, {0x2140, 0x2144}, {0x2145, 0x2149}, + {0x214A, 0x214A}, {0x214B, 0x214B}, {0x214C, 0x214D}, + {0x214E, 0x214E}, {0x214F, 0x214F}, {0x2150, 0x2152}, + {0x2155, 0x215A}, {0x215F, 0x215F}, {0x216C, 0x216F}, + {0x217A, 0x2182}, {0x2183, 0x2184}, {0x2185, 0x2188}, + {0x218A, 0x218B}, {0x219A, 0x219B}, {0x219C, 0x219F}, + {0x21A0, 0x21A0}, {0x21A1, 0x21A2}, {0x21A3, 0x21A3}, + {0x21A4, 0x21A5}, {0x21A6, 0x21A6}, {0x21A7, 0x21AD}, + {0x21AE, 0x21AE}, {0x21AF, 0x21B7}, {0x21BA, 0x21CD}, + {0x21CE, 0x21CF}, {0x21D0, 0x21D1}, {0x21D3, 0x21D3}, + {0x21D5, 0x21E6}, {0x21E8, 0x21F3}, {0x21F4, 0x21FF}, + {0x2201, 0x2201}, {0x2204, 0x2206}, {0x2209, 0x220A}, + {0x220C, 0x220E}, {0x2210, 0x2210}, {0x2212, 0x2214}, + {0x2216, 0x2219}, {0x221B, 0x221C}, {0x2221, 0x2222}, + {0x2224, 0x2224}, {0x2226, 0x2226}, {0x222D, 0x222D}, + {0x222F, 0x2233}, {0x2238, 0x223B}, {0x223E, 0x2247}, + {0x2249, 0x224B}, {0x224D, 0x2251}, {0x2253, 0x225F}, + {0x2262, 0x2263}, {0x2268, 0x2269}, {0x226C, 0x226D}, + {0x2270, 0x2281}, {0x2284, 0x2285}, {0x2288, 0x2294}, + {0x2296, 0x2298}, {0x229A, 0x22A4}, {0x22A6, 0x22BE}, + {0x22C0, 0x22FF}, {0x2300, 0x2307}, {0x2308, 0x2308}, + {0x2309, 0x2309}, {0x230A, 0x230A}, {0x230B, 0x230B}, + {0x230C, 0x2311}, {0x2313, 0x2319}, {0x231C, 0x231F}, + {0x2320, 0x2321}, {0x2322, 0x2328}, {0x232B, 0x237B}, + {0x237C, 0x237C}, {0x237D, 0x239A}, {0x239B, 0x23B3}, + {0x23B4, 0x23DB}, {0x23DC, 0x23E1}, {0x23E2, 0x23E8}, + {0x23ED, 0x23EF}, {0x23F1, 0x23F2}, {0x23F4, 0x23FE}, + {0x2400, 0x2426}, {0x2440, 0x244A}, {0x24EA, 0x24EA}, + {0x254C, 0x254F}, {0x2574, 0x257F}, {0x2590, 0x2591}, + {0x2596, 0x259F}, {0x25A2, 0x25A2}, {0x25AA, 0x25B1}, + {0x25B4, 0x25B5}, {0x25B8, 0x25BB}, {0x25BE, 0x25BF}, + {0x25C2, 0x25C5}, {0x25C9, 0x25CA}, {0x25CC, 0x25CD}, + {0x25D2, 0x25E1}, {0x25E6, 0x25EE}, {0x25F0, 0x25F7}, + {0x25F8, 0x25FC}, {0x25FF, 0x25FF}, {0x2600, 0x2604}, + {0x2607, 0x2608}, {0x260A, 0x260D}, {0x2610, 0x2613}, + {0x2616, 0x261B}, {0x261D, 0x261D}, {0x261F, 0x263F}, + {0x2641, 
0x2641}, {0x2643, 0x2647}, {0x2654, 0x265F}, + {0x2662, 0x2662}, {0x2666, 0x2666}, {0x266B, 0x266B}, + {0x266E, 0x266E}, {0x2670, 0x267E}, {0x2680, 0x2692}, + {0x2694, 0x269D}, {0x26A0, 0x26A0}, {0x26A2, 0x26A9}, + {0x26AC, 0x26BC}, {0x26C0, 0x26C3}, {0x26E2, 0x26E2}, + {0x26E4, 0x26E7}, {0x2700, 0x2704}, {0x2706, 0x2709}, + {0x270C, 0x2727}, {0x2729, 0x273C}, {0x273E, 0x274B}, + {0x274D, 0x274D}, {0x274F, 0x2752}, {0x2756, 0x2756}, + {0x2758, 0x2767}, {0x2768, 0x2768}, {0x2769, 0x2769}, + {0x276A, 0x276A}, {0x276B, 0x276B}, {0x276C, 0x276C}, + {0x276D, 0x276D}, {0x276E, 0x276E}, {0x276F, 0x276F}, + {0x2770, 0x2770}, {0x2771, 0x2771}, {0x2772, 0x2772}, + {0x2773, 0x2773}, {0x2774, 0x2774}, {0x2775, 0x2775}, + {0x2780, 0x2793}, {0x2794, 0x2794}, {0x2798, 0x27AF}, + {0x27B1, 0x27BE}, {0x27C0, 0x27C4}, {0x27C5, 0x27C5}, + {0x27C6, 0x27C6}, {0x27C7, 0x27E5}, {0x27EE, 0x27EE}, + {0x27EF, 0x27EF}, {0x27F0, 0x27FF}, {0x2800, 0x28FF}, + {0x2900, 0x297F}, {0x2980, 0x2982}, {0x2983, 0x2983}, + {0x2984, 0x2984}, {0x2987, 0x2987}, {0x2988, 0x2988}, + {0x2989, 0x2989}, {0x298A, 0x298A}, {0x298B, 0x298B}, + {0x298C, 0x298C}, {0x298D, 0x298D}, {0x298E, 0x298E}, + {0x298F, 0x298F}, {0x2990, 0x2990}, {0x2991, 0x2991}, + {0x2992, 0x2992}, {0x2993, 0x2993}, {0x2994, 0x2994}, + {0x2995, 0x2995}, {0x2996, 0x2996}, {0x2997, 0x2997}, + {0x2998, 0x2998}, {0x2999, 0x29D7}, {0x29D8, 0x29D8}, + {0x29D9, 0x29D9}, {0x29DA, 0x29DA}, {0x29DB, 0x29DB}, + {0x29DC, 0x29FB}, {0x29FC, 0x29FC}, {0x29FD, 0x29FD}, + {0x29FE, 0x29FF}, {0x2A00, 0x2AFF}, {0x2B00, 0x2B1A}, + {0x2B1D, 0x2B2F}, {0x2B30, 0x2B44}, {0x2B45, 0x2B46}, + {0x2B47, 0x2B4C}, {0x2B4D, 0x2B4F}, {0x2B51, 0x2B54}, + {0x2B5A, 0x2B73}, {0x2B76, 0x2B95}, {0x2B98, 0x2BB9}, + {0x2BBD, 0x2BC8}, {0x2BCA, 0x2BD1}, {0x2BEC, 0x2BEF}, + {0x2C00, 0x2C2E}, {0x2C30, 0x2C5E}, {0x2C60, 0x2C7B}, + {0x2C7C, 0x2C7D}, {0x2C7E, 0x2C7F}, {0x2C80, 0x2CE4}, + {0x2CE5, 0x2CEA}, {0x2CEB, 0x2CEE}, {0x2CEF, 0x2CF1}, + {0x2CF2, 0x2CF3}, {0x2CF9, 0x2CFC}, {0x2CFD, 0x2CFD}, + {0x2CFE, 0x2CFF}, {0x2D00, 0x2D25}, {0x2D27, 0x2D27}, + {0x2D2D, 0x2D2D}, {0x2D30, 0x2D67}, {0x2D6F, 0x2D6F}, + {0x2D70, 0x2D70}, {0x2D7F, 0x2D7F}, {0x2D80, 0x2D96}, + {0x2DA0, 0x2DA6}, {0x2DA8, 0x2DAE}, {0x2DB0, 0x2DB6}, + {0x2DB8, 0x2DBE}, {0x2DC0, 0x2DC6}, {0x2DC8, 0x2DCE}, + {0x2DD0, 0x2DD6}, {0x2DD8, 0x2DDE}, {0x2DE0, 0x2DFF}, + {0x2E00, 0x2E01}, {0x2E02, 0x2E02}, {0x2E03, 0x2E03}, + {0x2E04, 0x2E04}, {0x2E05, 0x2E05}, {0x2E06, 0x2E08}, + {0x2E09, 0x2E09}, {0x2E0A, 0x2E0A}, {0x2E0B, 0x2E0B}, + {0x2E0C, 0x2E0C}, {0x2E0D, 0x2E0D}, {0x2E0E, 0x2E16}, + {0x2E17, 0x2E17}, {0x2E18, 0x2E19}, {0x2E1A, 0x2E1A}, + {0x2E1B, 0x2E1B}, {0x2E1C, 0x2E1C}, {0x2E1D, 0x2E1D}, + {0x2E1E, 0x2E1F}, {0x2E20, 0x2E20}, {0x2E21, 0x2E21}, + {0x2E22, 0x2E22}, {0x2E23, 0x2E23}, {0x2E24, 0x2E24}, + {0x2E25, 0x2E25}, {0x2E26, 0x2E26}, {0x2E27, 0x2E27}, + {0x2E28, 0x2E28}, {0x2E29, 0x2E29}, {0x2E2A, 0x2E2E}, + {0x2E2F, 0x2E2F}, {0x2E30, 0x2E39}, {0x2E3A, 0x2E3B}, + {0x2E3C, 0x2E3F}, {0x2E40, 0x2E40}, {0x2E41, 0x2E41}, + {0x2E42, 0x2E42}, {0x2E43, 0x2E44}, {0x303F, 0x303F}, + {0x4DC0, 0x4DFF}, {0xA4D0, 0xA4F7}, {0xA4F8, 0xA4FD}, + {0xA4FE, 0xA4FF}, {0xA500, 0xA60B}, {0xA60C, 0xA60C}, + {0xA60D, 0xA60F}, {0xA610, 0xA61F}, {0xA620, 0xA629}, + {0xA62A, 0xA62B}, {0xA640, 0xA66D}, {0xA66E, 0xA66E}, + {0xA66F, 0xA66F}, {0xA670, 0xA672}, {0xA673, 0xA673}, + {0xA674, 0xA67D}, {0xA67E, 0xA67E}, {0xA67F, 0xA67F}, + {0xA680, 0xA69B}, {0xA69C, 0xA69D}, {0xA69E, 0xA69F}, + {0xA6A0, 0xA6E5}, {0xA6E6, 0xA6EF}, {0xA6F0, 0xA6F1}, + {0xA6F2, 0xA6F7}, {0xA700, 0xA716}, 
{0xA717, 0xA71F}, + {0xA720, 0xA721}, {0xA722, 0xA76F}, {0xA770, 0xA770}, + {0xA771, 0xA787}, {0xA788, 0xA788}, {0xA789, 0xA78A}, + {0xA78B, 0xA78E}, {0xA78F, 0xA78F}, {0xA790, 0xA7AE}, + {0xA7B0, 0xA7B7}, {0xA7F7, 0xA7F7}, {0xA7F8, 0xA7F9}, + {0xA7FA, 0xA7FA}, {0xA7FB, 0xA7FF}, {0xA800, 0xA801}, + {0xA802, 0xA802}, {0xA803, 0xA805}, {0xA806, 0xA806}, + {0xA807, 0xA80A}, {0xA80B, 0xA80B}, {0xA80C, 0xA822}, + {0xA823, 0xA824}, {0xA825, 0xA826}, {0xA827, 0xA827}, + {0xA828, 0xA82B}, {0xA830, 0xA835}, {0xA836, 0xA837}, + {0xA838, 0xA838}, {0xA839, 0xA839}, {0xA840, 0xA873}, + {0xA874, 0xA877}, {0xA880, 0xA881}, {0xA882, 0xA8B3}, + {0xA8B4, 0xA8C3}, {0xA8C4, 0xA8C5}, {0xA8CE, 0xA8CF}, + {0xA8D0, 0xA8D9}, {0xA8E0, 0xA8F1}, {0xA8F2, 0xA8F7}, + {0xA8F8, 0xA8FA}, {0xA8FB, 0xA8FB}, {0xA8FC, 0xA8FC}, + {0xA8FD, 0xA8FD}, {0xA900, 0xA909}, {0xA90A, 0xA925}, + {0xA926, 0xA92D}, {0xA92E, 0xA92F}, {0xA930, 0xA946}, + {0xA947, 0xA951}, {0xA952, 0xA953}, {0xA95F, 0xA95F}, + {0xA980, 0xA982}, {0xA983, 0xA983}, {0xA984, 0xA9B2}, + {0xA9B3, 0xA9B3}, {0xA9B4, 0xA9B5}, {0xA9B6, 0xA9B9}, + {0xA9BA, 0xA9BB}, {0xA9BC, 0xA9BC}, {0xA9BD, 0xA9C0}, + {0xA9C1, 0xA9CD}, {0xA9CF, 0xA9CF}, {0xA9D0, 0xA9D9}, + {0xA9DE, 0xA9DF}, {0xA9E0, 0xA9E4}, {0xA9E5, 0xA9E5}, + {0xA9E6, 0xA9E6}, {0xA9E7, 0xA9EF}, {0xA9F0, 0xA9F9}, + {0xA9FA, 0xA9FE}, {0xAA00, 0xAA28}, {0xAA29, 0xAA2E}, + {0xAA2F, 0xAA30}, {0xAA31, 0xAA32}, {0xAA33, 0xAA34}, + {0xAA35, 0xAA36}, {0xAA40, 0xAA42}, {0xAA43, 0xAA43}, + {0xAA44, 0xAA4B}, {0xAA4C, 0xAA4C}, {0xAA4D, 0xAA4D}, + {0xAA50, 0xAA59}, {0xAA5C, 0xAA5F}, {0xAA60, 0xAA6F}, + {0xAA70, 0xAA70}, {0xAA71, 0xAA76}, {0xAA77, 0xAA79}, + {0xAA7A, 0xAA7A}, {0xAA7B, 0xAA7B}, {0xAA7C, 0xAA7C}, + {0xAA7D, 0xAA7D}, {0xAA7E, 0xAA7F}, {0xAA80, 0xAAAF}, + {0xAAB0, 0xAAB0}, {0xAAB1, 0xAAB1}, {0xAAB2, 0xAAB4}, + {0xAAB5, 0xAAB6}, {0xAAB7, 0xAAB8}, {0xAAB9, 0xAABD}, + {0xAABE, 0xAABF}, {0xAAC0, 0xAAC0}, {0xAAC1, 0xAAC1}, + {0xAAC2, 0xAAC2}, {0xAADB, 0xAADC}, {0xAADD, 0xAADD}, + {0xAADE, 0xAADF}, {0xAAE0, 0xAAEA}, {0xAAEB, 0xAAEB}, + {0xAAEC, 0xAAED}, {0xAAEE, 0xAAEF}, {0xAAF0, 0xAAF1}, + {0xAAF2, 0xAAF2}, {0xAAF3, 0xAAF4}, {0xAAF5, 0xAAF5}, + {0xAAF6, 0xAAF6}, {0xAB01, 0xAB06}, {0xAB09, 0xAB0E}, + {0xAB11, 0xAB16}, {0xAB20, 0xAB26}, {0xAB28, 0xAB2E}, + {0xAB30, 0xAB5A}, {0xAB5B, 0xAB5B}, {0xAB5C, 0xAB5F}, + {0xAB60, 0xAB65}, {0xAB70, 0xABBF}, {0xABC0, 0xABE2}, + {0xABE3, 0xABE4}, {0xABE5, 0xABE5}, {0xABE6, 0xABE7}, + {0xABE8, 0xABE8}, {0xABE9, 0xABEA}, {0xABEB, 0xABEB}, + {0xABEC, 0xABEC}, {0xABED, 0xABED}, {0xABF0, 0xABF9}, + {0xD7B0, 0xD7C6}, {0xD7CB, 0xD7FB}, {0xD800, 0xDB7F}, + {0xDB80, 0xDBFF}, {0xDC00, 0xDFFF}, {0xFB00, 0xFB06}, + {0xFB13, 0xFB17}, {0xFB1D, 0xFB1D}, {0xFB1E, 0xFB1E}, + {0xFB1F, 0xFB28}, {0xFB29, 0xFB29}, {0xFB2A, 0xFB36}, + {0xFB38, 0xFB3C}, {0xFB3E, 0xFB3E}, {0xFB40, 0xFB41}, + {0xFB43, 0xFB44}, {0xFB46, 0xFB4F}, {0xFB50, 0xFBB1}, + {0xFBB2, 0xFBC1}, {0xFBD3, 0xFD3D}, {0xFD3E, 0xFD3E}, + {0xFD3F, 0xFD3F}, {0xFD50, 0xFD8F}, {0xFD92, 0xFDC7}, + {0xFDF0, 0xFDFB}, {0xFDFC, 0xFDFC}, {0xFDFD, 0xFDFD}, + {0xFE20, 0xFE2F}, {0xFE70, 0xFE74}, {0xFE76, 0xFEFC}, + {0xFEFF, 0xFEFF}, {0xFFF9, 0xFFFB}, {0xFFFC, 0xFFFC}, + {0x10000, 0x1000B}, {0x1000D, 0x10026}, {0x10028, 0x1003A}, + {0x1003C, 0x1003D}, {0x1003F, 0x1004D}, {0x10050, 0x1005D}, + {0x10080, 0x100FA}, {0x10100, 0x10102}, {0x10107, 0x10133}, + {0x10137, 0x1013F}, {0x10140, 0x10174}, {0x10175, 0x10178}, + {0x10179, 0x10189}, {0x1018A, 0x1018B}, {0x1018C, 0x1018E}, + {0x10190, 0x1019B}, {0x101A0, 0x101A0}, {0x101D0, 0x101FC}, + {0x101FD, 0x101FD}, 
{0x10280, 0x1029C}, {0x102A0, 0x102D0}, + {0x102E0, 0x102E0}, {0x102E1, 0x102FB}, {0x10300, 0x1031F}, + {0x10320, 0x10323}, {0x10330, 0x10340}, {0x10341, 0x10341}, + {0x10342, 0x10349}, {0x1034A, 0x1034A}, {0x10350, 0x10375}, + {0x10376, 0x1037A}, {0x10380, 0x1039D}, {0x1039F, 0x1039F}, + {0x103A0, 0x103C3}, {0x103C8, 0x103CF}, {0x103D0, 0x103D0}, + {0x103D1, 0x103D5}, {0x10400, 0x1044F}, {0x10450, 0x1047F}, + {0x10480, 0x1049D}, {0x104A0, 0x104A9}, {0x104B0, 0x104D3}, + {0x104D8, 0x104FB}, {0x10500, 0x10527}, {0x10530, 0x10563}, + {0x1056F, 0x1056F}, {0x10600, 0x10736}, {0x10740, 0x10755}, + {0x10760, 0x10767}, {0x10800, 0x10805}, {0x10808, 0x10808}, + {0x1080A, 0x10835}, {0x10837, 0x10838}, {0x1083C, 0x1083C}, + {0x1083F, 0x1083F}, {0x10840, 0x10855}, {0x10857, 0x10857}, + {0x10858, 0x1085F}, {0x10860, 0x10876}, {0x10877, 0x10878}, + {0x10879, 0x1087F}, {0x10880, 0x1089E}, {0x108A7, 0x108AF}, + {0x108E0, 0x108F2}, {0x108F4, 0x108F5}, {0x108FB, 0x108FF}, + {0x10900, 0x10915}, {0x10916, 0x1091B}, {0x1091F, 0x1091F}, + {0x10920, 0x10939}, {0x1093F, 0x1093F}, {0x10980, 0x1099F}, + {0x109A0, 0x109B7}, {0x109BC, 0x109BD}, {0x109BE, 0x109BF}, + {0x109C0, 0x109CF}, {0x109D2, 0x109FF}, {0x10A00, 0x10A00}, + {0x10A01, 0x10A03}, {0x10A05, 0x10A06}, {0x10A0C, 0x10A0F}, + {0x10A10, 0x10A13}, {0x10A15, 0x10A17}, {0x10A19, 0x10A33}, + {0x10A38, 0x10A3A}, {0x10A3F, 0x10A3F}, {0x10A40, 0x10A47}, + {0x10A50, 0x10A58}, {0x10A60, 0x10A7C}, {0x10A7D, 0x10A7E}, + {0x10A7F, 0x10A7F}, {0x10A80, 0x10A9C}, {0x10A9D, 0x10A9F}, + {0x10AC0, 0x10AC7}, {0x10AC8, 0x10AC8}, {0x10AC9, 0x10AE4}, + {0x10AE5, 0x10AE6}, {0x10AEB, 0x10AEF}, {0x10AF0, 0x10AF6}, + {0x10B00, 0x10B35}, {0x10B39, 0x10B3F}, {0x10B40, 0x10B55}, + {0x10B58, 0x10B5F}, {0x10B60, 0x10B72}, {0x10B78, 0x10B7F}, + {0x10B80, 0x10B91}, {0x10B99, 0x10B9C}, {0x10BA9, 0x10BAF}, + {0x10C00, 0x10C48}, {0x10C80, 0x10CB2}, {0x10CC0, 0x10CF2}, + {0x10CFA, 0x10CFF}, {0x10E60, 0x10E7E}, {0x11000, 0x11000}, + {0x11001, 0x11001}, {0x11002, 0x11002}, {0x11003, 0x11037}, + {0x11038, 0x11046}, {0x11047, 0x1104D}, {0x11052, 0x11065}, + {0x11066, 0x1106F}, {0x1107F, 0x1107F}, {0x11080, 0x11081}, + {0x11082, 0x11082}, {0x11083, 0x110AF}, {0x110B0, 0x110B2}, + {0x110B3, 0x110B6}, {0x110B7, 0x110B8}, {0x110B9, 0x110BA}, + {0x110BB, 0x110BC}, {0x110BD, 0x110BD}, {0x110BE, 0x110C1}, + {0x110D0, 0x110E8}, {0x110F0, 0x110F9}, {0x11100, 0x11102}, + {0x11103, 0x11126}, {0x11127, 0x1112B}, {0x1112C, 0x1112C}, + {0x1112D, 0x11134}, {0x11136, 0x1113F}, {0x11140, 0x11143}, + {0x11150, 0x11172}, {0x11173, 0x11173}, {0x11174, 0x11175}, + {0x11176, 0x11176}, {0x11180, 0x11181}, {0x11182, 0x11182}, + {0x11183, 0x111B2}, {0x111B3, 0x111B5}, {0x111B6, 0x111BE}, + {0x111BF, 0x111C0}, {0x111C1, 0x111C4}, {0x111C5, 0x111C9}, + {0x111CA, 0x111CC}, {0x111CD, 0x111CD}, {0x111D0, 0x111D9}, + {0x111DA, 0x111DA}, {0x111DB, 0x111DB}, {0x111DC, 0x111DC}, + {0x111DD, 0x111DF}, {0x111E1, 0x111F4}, {0x11200, 0x11211}, + {0x11213, 0x1122B}, {0x1122C, 0x1122E}, {0x1122F, 0x11231}, + {0x11232, 0x11233}, {0x11234, 0x11234}, {0x11235, 0x11235}, + {0x11236, 0x11237}, {0x11238, 0x1123D}, {0x1123E, 0x1123E}, + {0x11280, 0x11286}, {0x11288, 0x11288}, {0x1128A, 0x1128D}, + {0x1128F, 0x1129D}, {0x1129F, 0x112A8}, {0x112A9, 0x112A9}, + {0x112B0, 0x112DE}, {0x112DF, 0x112DF}, {0x112E0, 0x112E2}, + {0x112E3, 0x112EA}, {0x112F0, 0x112F9}, {0x11300, 0x11301}, + {0x11302, 0x11303}, {0x11305, 0x1130C}, {0x1130F, 0x11310}, + {0x11313, 0x11328}, {0x1132A, 0x11330}, {0x11332, 0x11333}, + {0x11335, 0x11339}, {0x1133C, 0x1133C}, 
{0x1133D, 0x1133D}, + {0x1133E, 0x1133F}, {0x11340, 0x11340}, {0x11341, 0x11344}, + {0x11347, 0x11348}, {0x1134B, 0x1134D}, {0x11350, 0x11350}, + {0x11357, 0x11357}, {0x1135D, 0x11361}, {0x11362, 0x11363}, + {0x11366, 0x1136C}, {0x11370, 0x11374}, {0x11400, 0x11434}, + {0x11435, 0x11437}, {0x11438, 0x1143F}, {0x11440, 0x11441}, + {0x11442, 0x11444}, {0x11445, 0x11445}, {0x11446, 0x11446}, + {0x11447, 0x1144A}, {0x1144B, 0x1144F}, {0x11450, 0x11459}, + {0x1145B, 0x1145B}, {0x1145D, 0x1145D}, {0x11480, 0x114AF}, + {0x114B0, 0x114B2}, {0x114B3, 0x114B8}, {0x114B9, 0x114B9}, + {0x114BA, 0x114BA}, {0x114BB, 0x114BE}, {0x114BF, 0x114C0}, + {0x114C1, 0x114C1}, {0x114C2, 0x114C3}, {0x114C4, 0x114C5}, + {0x114C6, 0x114C6}, {0x114C7, 0x114C7}, {0x114D0, 0x114D9}, + {0x11580, 0x115AE}, {0x115AF, 0x115B1}, {0x115B2, 0x115B5}, + {0x115B8, 0x115BB}, {0x115BC, 0x115BD}, {0x115BE, 0x115BE}, + {0x115BF, 0x115C0}, {0x115C1, 0x115D7}, {0x115D8, 0x115DB}, + {0x115DC, 0x115DD}, {0x11600, 0x1162F}, {0x11630, 0x11632}, + {0x11633, 0x1163A}, {0x1163B, 0x1163C}, {0x1163D, 0x1163D}, + {0x1163E, 0x1163E}, {0x1163F, 0x11640}, {0x11641, 0x11643}, + {0x11644, 0x11644}, {0x11650, 0x11659}, {0x11660, 0x1166C}, + {0x11680, 0x116AA}, {0x116AB, 0x116AB}, {0x116AC, 0x116AC}, + {0x116AD, 0x116AD}, {0x116AE, 0x116AF}, {0x116B0, 0x116B5}, + {0x116B6, 0x116B6}, {0x116B7, 0x116B7}, {0x116C0, 0x116C9}, + {0x11700, 0x11719}, {0x1171D, 0x1171F}, {0x11720, 0x11721}, + {0x11722, 0x11725}, {0x11726, 0x11726}, {0x11727, 0x1172B}, + {0x11730, 0x11739}, {0x1173A, 0x1173B}, {0x1173C, 0x1173E}, + {0x1173F, 0x1173F}, {0x118A0, 0x118DF}, {0x118E0, 0x118E9}, + {0x118EA, 0x118F2}, {0x118FF, 0x118FF}, {0x11AC0, 0x11AF8}, + {0x11C00, 0x11C08}, {0x11C0A, 0x11C2E}, {0x11C2F, 0x11C2F}, + {0x11C30, 0x11C36}, {0x11C38, 0x11C3D}, {0x11C3E, 0x11C3E}, + {0x11C3F, 0x11C3F}, {0x11C40, 0x11C40}, {0x11C41, 0x11C45}, + {0x11C50, 0x11C59}, {0x11C5A, 0x11C6C}, {0x11C70, 0x11C71}, + {0x11C72, 0x11C8F}, {0x11C92, 0x11CA7}, {0x11CA9, 0x11CA9}, + {0x11CAA, 0x11CB0}, {0x11CB1, 0x11CB1}, {0x11CB2, 0x11CB3}, + {0x11CB4, 0x11CB4}, {0x11CB5, 0x11CB6}, {0x12000, 0x12399}, + {0x12400, 0x1246E}, {0x12470, 0x12474}, {0x12480, 0x12543}, + {0x13000, 0x1342E}, {0x14400, 0x14646}, {0x16800, 0x16A38}, + {0x16A40, 0x16A5E}, {0x16A60, 0x16A69}, {0x16A6E, 0x16A6F}, + {0x16AD0, 0x16AED}, {0x16AF0, 0x16AF4}, {0x16AF5, 0x16AF5}, + {0x16B00, 0x16B2F}, {0x16B30, 0x16B36}, {0x16B37, 0x16B3B}, + {0x16B3C, 0x16B3F}, {0x16B40, 0x16B43}, {0x16B44, 0x16B44}, + {0x16B45, 0x16B45}, {0x16B50, 0x16B59}, {0x16B5B, 0x16B61}, + {0x16B63, 0x16B77}, {0x16B7D, 0x16B8F}, {0x16F00, 0x16F44}, + {0x16F50, 0x16F50}, {0x16F51, 0x16F7E}, {0x16F8F, 0x16F92}, + {0x16F93, 0x16F9F}, {0x1BC00, 0x1BC6A}, {0x1BC70, 0x1BC7C}, + {0x1BC80, 0x1BC88}, {0x1BC90, 0x1BC99}, {0x1BC9C, 0x1BC9C}, + {0x1BC9D, 0x1BC9E}, {0x1BC9F, 0x1BC9F}, {0x1BCA0, 0x1BCA3}, + {0x1D000, 0x1D0F5}, {0x1D100, 0x1D126}, {0x1D129, 0x1D164}, + {0x1D165, 0x1D166}, {0x1D167, 0x1D169}, {0x1D16A, 0x1D16C}, + {0x1D16D, 0x1D172}, {0x1D173, 0x1D17A}, {0x1D17B, 0x1D182}, + {0x1D183, 0x1D184}, {0x1D185, 0x1D18B}, {0x1D18C, 0x1D1A9}, + {0x1D1AA, 0x1D1AD}, {0x1D1AE, 0x1D1E8}, {0x1D200, 0x1D241}, + {0x1D242, 0x1D244}, {0x1D245, 0x1D245}, {0x1D300, 0x1D356}, + {0x1D360, 0x1D371}, {0x1D400, 0x1D454}, {0x1D456, 0x1D49C}, + {0x1D49E, 0x1D49F}, {0x1D4A2, 0x1D4A2}, {0x1D4A5, 0x1D4A6}, + {0x1D4A9, 0x1D4AC}, {0x1D4AE, 0x1D4B9}, {0x1D4BB, 0x1D4BB}, + {0x1D4BD, 0x1D4C3}, {0x1D4C5, 0x1D505}, {0x1D507, 0x1D50A}, + {0x1D50D, 0x1D514}, {0x1D516, 0x1D51C}, {0x1D51E, 0x1D539}, 
+ {0x1D53B, 0x1D53E}, {0x1D540, 0x1D544}, {0x1D546, 0x1D546}, + {0x1D54A, 0x1D550}, {0x1D552, 0x1D6A5}, {0x1D6A8, 0x1D6C0}, + {0x1D6C1, 0x1D6C1}, {0x1D6C2, 0x1D6DA}, {0x1D6DB, 0x1D6DB}, + {0x1D6DC, 0x1D6FA}, {0x1D6FB, 0x1D6FB}, {0x1D6FC, 0x1D714}, + {0x1D715, 0x1D715}, {0x1D716, 0x1D734}, {0x1D735, 0x1D735}, + {0x1D736, 0x1D74E}, {0x1D74F, 0x1D74F}, {0x1D750, 0x1D76E}, + {0x1D76F, 0x1D76F}, {0x1D770, 0x1D788}, {0x1D789, 0x1D789}, + {0x1D78A, 0x1D7A8}, {0x1D7A9, 0x1D7A9}, {0x1D7AA, 0x1D7C2}, + {0x1D7C3, 0x1D7C3}, {0x1D7C4, 0x1D7CB}, {0x1D7CE, 0x1D7FF}, + {0x1D800, 0x1D9FF}, {0x1DA00, 0x1DA36}, {0x1DA37, 0x1DA3A}, + {0x1DA3B, 0x1DA6C}, {0x1DA6D, 0x1DA74}, {0x1DA75, 0x1DA75}, + {0x1DA76, 0x1DA83}, {0x1DA84, 0x1DA84}, {0x1DA85, 0x1DA86}, + {0x1DA87, 0x1DA8B}, {0x1DA9B, 0x1DA9F}, {0x1DAA1, 0x1DAAF}, + {0x1E000, 0x1E006}, {0x1E008, 0x1E018}, {0x1E01B, 0x1E021}, + {0x1E023, 0x1E024}, {0x1E026, 0x1E02A}, {0x1E800, 0x1E8C4}, + {0x1E8C7, 0x1E8CF}, {0x1E8D0, 0x1E8D6}, {0x1E900, 0x1E943}, + {0x1E944, 0x1E94A}, {0x1E950, 0x1E959}, {0x1E95E, 0x1E95F}, + {0x1EE00, 0x1EE03}, {0x1EE05, 0x1EE1F}, {0x1EE21, 0x1EE22}, + {0x1EE24, 0x1EE24}, {0x1EE27, 0x1EE27}, {0x1EE29, 0x1EE32}, + {0x1EE34, 0x1EE37}, {0x1EE39, 0x1EE39}, {0x1EE3B, 0x1EE3B}, + {0x1EE42, 0x1EE42}, {0x1EE47, 0x1EE47}, {0x1EE49, 0x1EE49}, + {0x1EE4B, 0x1EE4B}, {0x1EE4D, 0x1EE4F}, {0x1EE51, 0x1EE52}, + {0x1EE54, 0x1EE54}, {0x1EE57, 0x1EE57}, {0x1EE59, 0x1EE59}, + {0x1EE5B, 0x1EE5B}, {0x1EE5D, 0x1EE5D}, {0x1EE5F, 0x1EE5F}, + {0x1EE61, 0x1EE62}, {0x1EE64, 0x1EE64}, {0x1EE67, 0x1EE6A}, + {0x1EE6C, 0x1EE72}, {0x1EE74, 0x1EE77}, {0x1EE79, 0x1EE7C}, + {0x1EE7E, 0x1EE7E}, {0x1EE80, 0x1EE89}, {0x1EE8B, 0x1EE9B}, + {0x1EEA1, 0x1EEA3}, {0x1EEA5, 0x1EEA9}, {0x1EEAB, 0x1EEBB}, + {0x1EEF0, 0x1EEF1}, {0x1F000, 0x1F003}, {0x1F005, 0x1F02B}, + {0x1F030, 0x1F093}, {0x1F0A0, 0x1F0AE}, {0x1F0B1, 0x1F0BF}, + {0x1F0C1, 0x1F0CE}, {0x1F0D1, 0x1F0F5}, {0x1F10B, 0x1F10C}, + {0x1F12E, 0x1F12E}, {0x1F16A, 0x1F16B}, {0x1F1E6, 0x1F1FF}, + {0x1F321, 0x1F32C}, {0x1F336, 0x1F336}, {0x1F37D, 0x1F37D}, + {0x1F394, 0x1F39F}, {0x1F3CB, 0x1F3CE}, {0x1F3D4, 0x1F3DF}, + {0x1F3F1, 0x1F3F3}, {0x1F3F5, 0x1F3F7}, {0x1F43F, 0x1F43F}, + {0x1F441, 0x1F441}, {0x1F4FD, 0x1F4FE}, {0x1F53E, 0x1F54A}, + {0x1F54F, 0x1F54F}, {0x1F568, 0x1F579}, {0x1F57B, 0x1F594}, + {0x1F597, 0x1F5A3}, {0x1F5A5, 0x1F5FA}, {0x1F650, 0x1F67F}, + {0x1F6C6, 0x1F6CB}, {0x1F6CD, 0x1F6CF}, {0x1F6E0, 0x1F6EA}, + {0x1F6F0, 0x1F6F3}, {0x1F700, 0x1F773}, {0x1F780, 0x1F7D4}, + {0x1F800, 0x1F80B}, {0x1F810, 0x1F847}, {0x1F850, 0x1F859}, + {0x1F860, 0x1F887}, {0x1F890, 0x1F8AD}, {0xE0001, 0xE0001}, + {0xE0020, 0xE007F}, +} + +// Condition have flag EastAsianWidth whether the current locale is CJK or not. +type Condition struct { + EastAsianWidth bool +} + +// NewCondition return new instance of Condition which is current locale. +func NewCondition() *Condition { + return &Condition{EastAsianWidth} +} + +// RuneWidth returns the number of cells in r. 
+// See http://www.unicode.org/reports/tr11/
+func (c *Condition) RuneWidth(r rune) int {
+	switch {
+	case r < 0 || r > 0x10FFFF ||
+		inTables(r, nonprint, combining, notassigned):
+		return 0
+	case (c.EastAsianWidth && IsAmbiguousWidth(r)) ||
+		inTables(r, doublewidth, emoji):
+		return 2
+	default:
+		return 1
+	}
+}
+
+// StringWidth returns the number of cells that s occupies when displayed.
+func (c *Condition) StringWidth(s string) (width int) {
+	for _, r := range []rune(s) {
+		width += c.RuneWidth(r)
+	}
+	return width
+}
+
+// Truncate returns s truncated to at most w cells, with tail appended.
+func (c *Condition) Truncate(s string, w int, tail string) string {
+	if c.StringWidth(s) <= w {
+		return s
+	}
+	r := []rune(s)
+	tw := c.StringWidth(tail)
+	w -= tw
+	width := 0
+	i := 0
+	for ; i < len(r); i++ {
+		cw := c.RuneWidth(r[i])
+		if width+cw > w {
+			break
+		}
+		width += cw
+	}
+	return string(r[0:i]) + tail
+}
+
+// Wrap returns s hard-wrapped to lines of at most w cells.
+func (c *Condition) Wrap(s string, w int) string {
+	width := 0
+	out := ""
+	for _, r := range []rune(s) {
+		cw := c.RuneWidth(r)
+		if r == '\n' {
+			out += string(r)
+			width = 0
+			continue
+		} else if width+cw > w {
+			out += "\n"
+			width = 0
+			out += string(r)
+			width += cw
+			continue
+		}
+		out += string(r)
+		width += cw
+	}
+	return out
+}
+
+// FillLeft returns s left-padded with spaces to a width of w cells.
+func (c *Condition) FillLeft(s string, w int) string {
+	width := c.StringWidth(s)
+	count := w - width
+	if count > 0 {
+		b := make([]byte, count)
+		for i := range b {
+			b[i] = ' '
+		}
+		return string(b) + s
+	}
+	return s
+}
+
+// FillRight returns s right-padded with spaces to a width of w cells.
+func (c *Condition) FillRight(s string, w int) string {
+	width := c.StringWidth(s)
+	count := w - width
+	if count > 0 {
+		b := make([]byte, count)
+		for i := range b {
+			b[i] = ' '
+		}
+		return s + string(b)
+	}
+	return s
+}
+
+// RuneWidth returns the number of cells in r.
+// See http://www.unicode.org/reports/tr11/
+func RuneWidth(r rune) int {
+	return DefaultCondition.RuneWidth(r)
+}
+
+// IsAmbiguousWidth returns whether r has ambiguous width.
+func IsAmbiguousWidth(r rune) bool {
+	return inTables(r, private, ambiguous)
+}
+
+// IsNeutralWidth returns whether r has neutral width.
+func IsNeutralWidth(r rune) bool {
+	return inTable(r, neutral)
+}
+
+// StringWidth returns the number of cells that s occupies when displayed.
+func StringWidth(s string) (width int) {
+	return DefaultCondition.StringWidth(s)
+}
+
+// Truncate returns s truncated to at most w cells, with tail appended.
+func Truncate(s string, w int, tail string) string {
+	return DefaultCondition.Truncate(s, w, tail)
+}
+
+// Wrap returns s hard-wrapped to lines of at most w cells.
+func Wrap(s string, w int) string {
+	return DefaultCondition.Wrap(s, w)
+}
+
+// FillLeft returns s left-padded with spaces to a width of w cells.
+func FillLeft(s string, w int) string {
+	return DefaultCondition.FillLeft(s, w)
+}
+
+// FillRight returns s right-padded with spaces to a width of w cells.
+func FillRight(s string, w int) string {
+	return DefaultCondition.FillRight(s, w)
+}
diff --git a/vendor/github.com/mattn/go-runewidth/runewidth_js.go b/vendor/github.com/mattn/go-runewidth/runewidth_js.go
new file mode 100644
index 000000000..0ce32c5e7
--- /dev/null
+++ b/vendor/github.com/mattn/go-runewidth/runewidth_js.go
@@ -0,0 +1,8 @@
+// +build js
+
+package runewidth
+
+func IsEastAsian() bool {
+	// TODO: Implement this for the web. Detect east asian in a compatible way, and return true.
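+	// One possible approach (illustrative sketch only, not implemented here and
+	// not part of upstream go-runewidth): read the browser locale, e.g.
+	// navigator.language via the gopherjs "js" package, and treat values
+	// beginning with "ja", "ko" or "zh" as East Asian, mirroring the POSIX
+	// locale check in runewidth_posix.go below.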
+ return false +} diff --git a/vendor/github.com/mattn/go-runewidth/runewidth_posix.go b/vendor/github.com/mattn/go-runewidth/runewidth_posix.go new file mode 100644 index 000000000..c579e9a31 --- /dev/null +++ b/vendor/github.com/mattn/go-runewidth/runewidth_posix.go @@ -0,0 +1,77 @@ +// +build !windows,!js + +package runewidth + +import ( + "os" + "regexp" + "strings" +) + +var reLoc = regexp.MustCompile(`^[a-z][a-z][a-z]?(?:_[A-Z][A-Z])?\.(.+)`) + +var mblenTable = map[string]int{ + "utf-8": 6, + "utf8": 6, + "jis": 8, + "eucjp": 3, + "euckr": 2, + "euccn": 2, + "sjis": 2, + "cp932": 2, + "cp51932": 2, + "cp936": 2, + "cp949": 2, + "cp950": 2, + "big5": 2, + "gbk": 2, + "gb2312": 2, +} + +func isEastAsian(locale string) bool { + charset := strings.ToLower(locale) + r := reLoc.FindStringSubmatch(locale) + if len(r) == 2 { + charset = strings.ToLower(r[1]) + } + + if strings.HasSuffix(charset, "@cjk_narrow") { + return false + } + + for pos, b := range []byte(charset) { + if b == '@' { + charset = charset[:pos] + break + } + } + max := 1 + if m, ok := mblenTable[charset]; ok { + max = m + } + if max > 1 && (charset[0] != 'u' || + strings.HasPrefix(locale, "ja") || + strings.HasPrefix(locale, "ko") || + strings.HasPrefix(locale, "zh")) { + return true + } + return false +} + +// IsEastAsian return true if the current locale is CJK +func IsEastAsian() bool { + locale := os.Getenv("LC_CTYPE") + if locale == "" { + locale = os.Getenv("LANG") + } + + // ignore C locale + if locale == "POSIX" || locale == "C" { + return false + } + if len(locale) > 1 && locale[0] == 'C' && (locale[1] == '.' || locale[1] == '-') { + return false + } + + return isEastAsian(locale) +} diff --git a/vendor/github.com/mattn/go-runewidth/runewidth_windows.go b/vendor/github.com/mattn/go-runewidth/runewidth_windows.go new file mode 100644 index 000000000..0258876b9 --- /dev/null +++ b/vendor/github.com/mattn/go-runewidth/runewidth_windows.go @@ -0,0 +1,25 @@ +package runewidth + +import ( + "syscall" +) + +var ( + kernel32 = syscall.NewLazyDLL("kernel32") + procGetConsoleOutputCP = kernel32.NewProc("GetConsoleOutputCP") +) + +// IsEastAsian return true if the current locale is CJK +func IsEastAsian() bool { + r1, _, _ := procGetConsoleOutputCP.Call() + if r1 == 0 { + return false + } + + switch int(r1) { + case 932, 51932, 936, 949, 950: + return true + } + + return false +} diff --git a/vendor/github.com/mitchellh/cli/README.md b/vendor/github.com/mitchellh/cli/README.md index dd211cf0e..8f02cdd0a 100644 --- a/vendor/github.com/mitchellh/cli/README.md +++ b/vendor/github.com/mitchellh/cli/README.md @@ -18,6 +18,9 @@ cli is the library that powers the CLI for * Optional support for default subcommands so `cli` does something other than error. +* Support for shell autocompletion of subcommands, flags, and arguments + with callbacks in Go. You don't need to write any shell code. + * Automatic help generation for listing subcommands * Automatic help flag recognition of `-h`, `--help`, etc. diff --git a/vendor/github.com/mitchellh/cli/autocomplete.go b/vendor/github.com/mitchellh/cli/autocomplete.go new file mode 100644 index 000000000..3bec6258f --- /dev/null +++ b/vendor/github.com/mitchellh/cli/autocomplete.go @@ -0,0 +1,43 @@ +package cli + +import ( + "github.com/posener/complete/cmd/install" +) + +// autocompleteInstaller is an interface to be implemented to perform the +// autocomplete installation and uninstallation with a CLI. 
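+//
+// In practice the non-test implementation below simply delegates to the
+// github.com/posener/complete/cmd/install package, which (roughly speaking)
+// adds or removes a completion directive for the named binary in the user's
+// shell startup file; the exact mechanics live in that package.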
+// +// This interface is not exported because it only exists for unit tests +// to be able to test that the installation is called properly. +type autocompleteInstaller interface { + Install(string) error + Uninstall(string) error +} + +// realAutocompleteInstaller uses the real install package to do the +// install/uninstall. +type realAutocompleteInstaller struct{} + +func (i *realAutocompleteInstaller) Install(cmd string) error { + return install.Install(cmd) +} + +func (i *realAutocompleteInstaller) Uninstall(cmd string) error { + return install.Uninstall(cmd) +} + +// mockAutocompleteInstaller is used for tests to record the install/uninstall. +type mockAutocompleteInstaller struct { + InstallCalled bool + UninstallCalled bool +} + +func (i *mockAutocompleteInstaller) Install(cmd string) error { + i.InstallCalled = true + return nil +} + +func (i *mockAutocompleteInstaller) Uninstall(cmd string) error { + i.UninstallCalled = true + return nil +} diff --git a/vendor/github.com/mitchellh/cli/cli.go b/vendor/github.com/mitchellh/cli/cli.go index 4a69d176d..61206d6aa 100644 --- a/vendor/github.com/mitchellh/cli/cli.go +++ b/vendor/github.com/mitchellh/cli/cli.go @@ -11,6 +11,7 @@ import ( "text/template" "github.com/armon/go-radix" + "github.com/posener/complete" ) // CLI contains the state necessary to run subcommands and parse the @@ -58,14 +59,56 @@ type CLI struct { // For example, if the key is "foo bar", then to access it our CLI // must be accessed with "./cli foo bar". See the docs for CLI for // notes on how this changes some other behavior of the CLI as well. + // + // The factory should be as cheap as possible, ideally only allocating + // a struct. The factory may be called multiple times in the course + // of a command execution and certain events such as help require the + // instantiation of all commands. Expensive initialization should be + // deferred to function calls within the interface implementation. Commands map[string]CommandFactory + // HiddenCommands is a list of commands that are "hidden". Hidden + // commands are not given to the help function callback and do not + // show up in autocomplete. The values in the slice should be equivalent + // to the keys in the command map. + HiddenCommands []string + // Name defines the name of the CLI. Name string // Version of the CLI. Version string + // Autocomplete enables or disables subcommand auto-completion support. + // This is enabled by default when NewCLI is called. Otherwise, this + // must enabled explicitly. + // + // Autocomplete requires the "Name" option to be set on CLI. This name + // should be set exactly to the binary name that is autocompleted. + // + // Autocompletion is supported via the github.com/posener/complete + // library. This library supports both bash and zsh. To add support + // for other shells, please see that library. + // + // AutocompleteInstall and AutocompleteUninstall are the global flag + // names for installing and uninstalling the autocompletion handlers + // for the user's shell. The flag should omit the hyphen(s) in front of + // the value. Both single and double hyphens will automatically be supported + // for the flag name. These default to `autocomplete-install` and + // `autocomplete-uninstall` respectively. + // + // AutocompleteNoDefaultFlags is a boolean which controls if the default auto- + // complete flags like -help and -version are added to the output. + // + // AutocompleteGlobalFlags are a mapping of global flags for + // autocompletion. 
The help and version flags are automatically added. + Autocomplete bool + AutocompleteInstall string + AutocompleteUninstall string + AutocompleteNoDefaultFlags bool + AutocompleteGlobalFlags complete.Flags + autocompleteInstaller autocompleteInstaller // For tests + // HelpFunc and HelpWriter are used to output help information, if // requested. // @@ -78,23 +121,33 @@ type CLI struct { HelpFunc HelpFunc HelpWriter io.Writer + //--------------------------------------------------------------- + // Internal fields set automatically + once sync.Once + autocomplete *complete.Complete commandTree *radix.Tree commandNested bool - isHelp bool + commandHidden map[string]struct{} subcommand string subcommandArgs []string topFlags []string - isVersion bool + // These are true when special global flags are set. We can/should + // probably use a bitset for this one day. + isHelp bool + isVersion bool + isAutocompleteInstall bool + isAutocompleteUninstall bool } // NewClI returns a new CLI instance with sensible defaults. func NewCLI(app, version string) *CLI { return &CLI{ - Name: app, - Version: version, - HelpFunc: BasicHelpFunc(app), + Name: app, + Version: version, + HelpFunc: BasicHelpFunc(app), + Autocomplete: true, } } @@ -117,6 +170,14 @@ func (c *CLI) IsVersion() bool { func (c *CLI) Run() (int, error) { c.once.Do(c.init) + // If this is a autocompletion request, satisfy it. This must be called + // first before anything else since its possible to be autocompleting + // -help or -version or other flags and we want to show completions + // and not actually write the help or version. + if c.Autocomplete && c.autocomplete.Complete() { + return 0, nil + } + // Just show the version and exit if instructed. if c.IsVersion() && c.Version != "" { c.HelpWriter.Write([]byte(c.Version + "\n")) @@ -125,16 +186,50 @@ func (c *CLI) Run() (int, error) { // Just print the help when only '-h' or '--help' is passed. if c.IsHelp() && c.Subcommand() == "" { - c.HelpWriter.Write([]byte(c.HelpFunc(c.Commands) + "\n")) + c.HelpWriter.Write([]byte(c.HelpFunc(c.helpCommands(c.Subcommand())) + "\n")) return 0, nil } + // If we're attempting to install or uninstall autocomplete then handle + if c.Autocomplete { + // Autocomplete requires the "Name" to be set so that we know what + // command to setup the autocomplete on. + if c.Name == "" { + return 1, fmt.Errorf( + "internal error: CLI.Name must be specified for autocomplete to work") + } + + // If both install and uninstall flags are specified, then error + if c.isAutocompleteInstall && c.isAutocompleteUninstall { + return 1, fmt.Errorf( + "Either the autocomplete install or uninstall flag may " + + "be specified, but not both.") + } + + // If the install flag is specified, perform the install or uninstall + if c.isAutocompleteInstall { + if err := c.autocompleteInstaller.Install(c.Name); err != nil { + return 1, err + } + + return 0, nil + } + + if c.isAutocompleteUninstall { + if err := c.autocompleteInstaller.Uninstall(c.Name); err != nil { + return 1, err + } + + return 0, nil + } + } + // Attempt to get the factory function for creating the command // implementation. If the command is invalid or blank, it is an error. 
raw, ok := c.commandTree.Get(c.Subcommand()) if !ok { c.HelpWriter.Write([]byte(c.HelpFunc(c.helpCommands(c.subcommandParent())) + "\n")) - return 1, nil + return 127, nil } command, err := raw.(CommandFactory)() @@ -216,6 +311,14 @@ func (c *CLI) init() { c.HelpWriter = os.Stderr } + // Build our hidden commands + if len(c.HiddenCommands) > 0 { + c.commandHidden = make(map[string]struct{}) + for _, h := range c.HiddenCommands { + c.commandHidden[h] = struct{}{} + } + } + // Build our command tree c.commandTree = radix.New() c.commandNested = false @@ -268,10 +371,113 @@ func (c *CLI) init() { } } + // Setup autocomplete if we have it enabled. We have to do this after + // the command tree is setup so we can use the radix tree to easily find + // all subcommands. + if c.Autocomplete { + c.initAutocomplete() + } + // Process the args c.processArgs() } +func (c *CLI) initAutocomplete() { + if c.AutocompleteInstall == "" { + c.AutocompleteInstall = defaultAutocompleteInstall + } + + if c.AutocompleteUninstall == "" { + c.AutocompleteUninstall = defaultAutocompleteUninstall + } + + if c.autocompleteInstaller == nil { + c.autocompleteInstaller = &realAutocompleteInstaller{} + } + + // Build the root command + cmd := c.initAutocompleteSub("") + + // For the root, we add the global flags to the "Flags". This way + // they don't show up on every command. + if !c.AutocompleteNoDefaultFlags { + cmd.Flags = map[string]complete.Predictor{ + "-" + c.AutocompleteInstall: complete.PredictNothing, + "-" + c.AutocompleteUninstall: complete.PredictNothing, + "-help": complete.PredictNothing, + "-version": complete.PredictNothing, + } + } + cmd.GlobalFlags = c.AutocompleteGlobalFlags + + c.autocomplete = complete.New(c.Name, cmd) +} + +// initAutocompleteSub creates the complete.Command for a subcommand with +// the given prefix. This will continue recursively for all subcommands. +// The prefix "" (empty string) can be used for the root command. +func (c *CLI) initAutocompleteSub(prefix string) complete.Command { + var cmd complete.Command + walkFn := func(k string, raw interface{}) bool { + // Keep track of the full key so that we can nest further if necessary + fullKey := k + + if len(prefix) > 0 { + // If we have a prefix, trim the prefix + 1 (for the space) + // Example: turns "sub one" to "one" with prefix "sub" + k = k[len(prefix)+1:] + } + + if idx := strings.Index(k, " "); idx >= 0 { + // If there is a space, we trim up to the space. This turns + // "sub sub2 sub3" into "sub". The prefix trim above will + // trim our current depth properly. + k = k[:idx] + } + + if _, ok := cmd.Sub[k]; ok { + // If we already tracked this subcommand then ignore + return false + } + + // If the command is hidden, don't record it at all + if _, ok := c.commandHidden[fullKey]; ok { + return false + } + + if cmd.Sub == nil { + cmd.Sub = complete.Commands(make(map[string]complete.Command)) + } + subCmd := c.initAutocompleteSub(fullKey) + + // Instantiate the command so that we can check if the command is + // a CommandAutocomplete implementation. If there is an error + // creating the command, we just ignore it since that will be caught + // later. + impl, err := raw.(CommandFactory)() + if err != nil { + impl = nil + } + + // Check if it implements ComandAutocomplete. 
If so, setup the autocomplete + if c, ok := impl.(CommandAutocomplete); ok { + subCmd.Args = c.AutocompleteArgs() + subCmd.Flags = c.AutocompleteFlags() + } + + cmd.Sub[k] = subCmd + return false + } + + walkPrefix := prefix + if walkPrefix != "" { + walkPrefix += " " + } + + c.commandTree.WalkPrefix(walkPrefix, walkFn) + return cmd +} + func (c *CLI) commandHelp(command Command) { // Get the template to use tpl := strings.TrimSpace(defaultHelpTemplate) @@ -386,6 +592,11 @@ func (c *CLI) helpCommands(prefix string) map[string]CommandFactory { panic("not found: " + k) } + // If this is a hidden command, don't show it + if _, ok := c.commandHidden[k]; ok { + continue + } + result[k] = raw.(CommandFactory) } @@ -404,6 +615,19 @@ func (c *CLI) processArgs() { continue } + // Check for autocomplete flags + if c.Autocomplete { + if arg == "-"+c.AutocompleteInstall || arg == "--"+c.AutocompleteInstall { + c.isAutocompleteInstall = true + continue + } + + if arg == "-"+c.AutocompleteUninstall || arg == "--"+c.AutocompleteUninstall { + c.isAutocompleteUninstall = true + continue + } + } + if c.subcommand == "" { // Check for version flags if not in a subcommand. if arg == "-v" || arg == "-version" || arg == "--version" { @@ -456,6 +680,11 @@ func (c *CLI) processArgs() { } } +// defaultAutocompleteInstall and defaultAutocompleteUninstall are the +// default values for the autocomplete install and uninstall flags. +const defaultAutocompleteInstall = "autocomplete-install" +const defaultAutocompleteUninstall = "autocomplete-uninstall" + const defaultHelpTemplate = ` {{.Help}}{{if gt (len .Subcommands) 0}} diff --git a/vendor/github.com/mitchellh/cli/command.go b/vendor/github.com/mitchellh/cli/command.go index b4924eb00..bed11faf5 100644 --- a/vendor/github.com/mitchellh/cli/command.go +++ b/vendor/github.com/mitchellh/cli/command.go @@ -1,5 +1,9 @@ package cli +import ( + "github.com/posener/complete" +) + const ( // RunResultHelp is a value that can be returned from Run to signal // to the CLI to render the help output. @@ -26,6 +30,22 @@ type Command interface { Synopsis() string } +// CommandAutocomplete is an extension of Command that enables fine-grained +// autocompletion. Subcommand autocompletion will work even if this interface +// is not implemented. By implementing this interface, more advanced +// autocompletion is enabled. +type CommandAutocomplete interface { + // AutocompleteArgs returns the argument predictor for this command. + // If argument completion is not supported, this should return + // complete.PredictNothing. + AutocompleteArgs() complete.Predictor + + // AutocompleteFlags returns a mapping of supported flags and autocomplete + // options for this command. The map key for the Flags map should be the + // complete flag such as "-foo" or "--foo". + AutocompleteFlags() complete.Flags +} + // CommandHelpTemplate is an extension of Command that also has a function // for returning a template for the help rather than the help itself. In // this scenario, both Help and HelpTemplate should be implemented. diff --git a/vendor/github.com/mitchellh/cli/command_mock.go b/vendor/github.com/mitchellh/cli/command_mock.go index 6371e573b..7a584b7e9 100644 --- a/vendor/github.com/mitchellh/cli/command_mock.go +++ b/vendor/github.com/mitchellh/cli/command_mock.go @@ -1,5 +1,9 @@ package cli +import ( + "github.com/posener/complete" +) + // MockCommand is an implementation of Command that can be used for tests. 
// It is publicly exported from this package in case you want to use it // externally. @@ -29,6 +33,23 @@ func (c *MockCommand) Synopsis() string { return c.SynopsisText } +// MockCommandAutocomplete is an implementation of CommandAutocomplete. +type MockCommandAutocomplete struct { + MockCommand + + // Settable + AutocompleteArgsValue complete.Predictor + AutocompleteFlagsValue complete.Flags +} + +func (c *MockCommandAutocomplete) AutocompleteArgs() complete.Predictor { + return c.AutocompleteArgsValue +} + +func (c *MockCommandAutocomplete) AutocompleteFlags() complete.Flags { + return c.AutocompleteFlagsValue +} + // MockCommandHelpTemplate is an implementation of CommandHelpTemplate. type MockCommandHelpTemplate struct { MockCommand diff --git a/vendor/github.com/mitchellh/cli/ui_mock.go b/vendor/github.com/mitchellh/cli/ui_mock.go index c46772855..0bfe0a191 100644 --- a/vendor/github.com/mitchellh/cli/ui_mock.go +++ b/vendor/github.com/mitchellh/cli/ui_mock.go @@ -7,12 +7,25 @@ import ( "sync" ) -// MockUi is a mock UI that is used for tests and is exported publicly for -// use in external tests if needed as well. +// NewMockUi returns a fully initialized MockUi instance +// which is safe for concurrent use. +func NewMockUi() *MockUi { + m := new(MockUi) + m.once.Do(m.init) + return m +} + +// MockUi is a mock UI that is used for tests and is exported publicly +// for use in external tests if needed as well. Do not instantite this +// directly since the buffers will be initialized on the first write. If +// there is no write then you will get a nil panic. Please use the +// NewMockUi() constructor function instead. You can fix your code with +// +// sed -i -e 's/new(cli.MockUi)/cli.NewMockUi()/g' *_test.go type MockUi struct { InputReader io.Reader - ErrorWriter *bytes.Buffer - OutputWriter *bytes.Buffer + ErrorWriter *syncBuffer + OutputWriter *syncBuffer once sync.Once } @@ -59,6 +72,40 @@ func (u *MockUi) Warn(message string) { } func (u *MockUi) init() { - u.ErrorWriter = new(bytes.Buffer) - u.OutputWriter = new(bytes.Buffer) + u.ErrorWriter = new(syncBuffer) + u.OutputWriter = new(syncBuffer) +} + +type syncBuffer struct { + sync.RWMutex + b bytes.Buffer +} + +func (b *syncBuffer) Write(data []byte) (int, error) { + b.Lock() + defer b.Unlock() + return b.b.Write(data) +} + +func (b *syncBuffer) Read(data []byte) (int, error) { + b.RLock() + defer b.RUnlock() + return b.b.Read(data) +} + +func (b *syncBuffer) Reset() { + b.Lock() + b.b.Reset() + b.Unlock() +} + +func (b *syncBuffer) String() string { + return string(b.Bytes()) +} + +func (b *syncBuffer) Bytes() []byte { + b.RLock() + data := b.b.Bytes() + b.RUnlock() + return data } diff --git a/vendor/github.com/mitchellh/go-fs/fat/boot_sector.go b/vendor/github.com/mitchellh/go-fs/fat/boot_sector.go index 689a6b75c..f62df9283 100644 --- a/vendor/github.com/mitchellh/go-fs/fat/boot_sector.go +++ b/vendor/github.com/mitchellh/go-fs/fat/boot_sector.go @@ -4,8 +4,9 @@ import ( "encoding/binary" "errors" "fmt" - "github.com/mitchellh/go-fs" "unicode" + + "github.com/mitchellh/go-fs" ) type MediaType uint8 @@ -99,7 +100,7 @@ func (b *BootSectorCommon) Bytes() ([]byte, error) { for i, r := range b.OEMName { if r > unicode.MaxASCII { - return nil, fmt.Errorf("'%s' in OEM name not a valid ASCII char. Must be ASCII.", r) + return nil, fmt.Errorf("%#U in OEM name not a valid ASCII char. 
Must be ASCII.", r) } sector[0x3+i] = byte(r) @@ -245,7 +246,7 @@ func (b *BootSectorFat16) Bytes() ([]byte, error) { for i, r := range b.VolumeLabel { if r > unicode.MaxASCII { - return nil, fmt.Errorf("'%s' in VolumeLabel not a valid ASCII char. Must be ASCII.", r) + return nil, fmt.Errorf("%#U in VolumeLabel not a valid ASCII char. Must be ASCII.", r) } sector[43+i] = byte(r) @@ -258,7 +259,7 @@ func (b *BootSectorFat16) Bytes() ([]byte, error) { for i, r := range b.FileSystemTypeLabel { if r > unicode.MaxASCII { - return nil, fmt.Errorf("'%s' in FileSystemTypeLabel not a valid ASCII char. Must be ASCII.", r) + return nil, fmt.Errorf("%#U in FileSystemTypeLabel not a valid ASCII char. Must be ASCII.", r) } sector[54+i] = byte(r) @@ -324,7 +325,7 @@ func (b *BootSectorFat32) Bytes() ([]byte, error) { for i, r := range b.VolumeLabel { if r > unicode.MaxASCII { - return nil, fmt.Errorf("'%s' in VolumeLabel not a valid ASCII char. Must be ASCII.", r) + return nil, fmt.Errorf("%#U in VolumeLabel not a valid ASCII char. Must be ASCII.", r) } sector[71+i] = byte(r) @@ -337,7 +338,7 @@ func (b *BootSectorFat32) Bytes() ([]byte, error) { for i, r := range b.FileSystemTypeLabel { if r > unicode.MaxASCII { - return nil, fmt.Errorf("'%s' in FileSystemTypeLabel not a valid ASCII char. Must be ASCII.", r) + return nil, fmt.Errorf("%#U in FileSystemTypeLabel not a valid ASCII char. Must be ASCII.", r) } sector[82+i] = byte(r) diff --git a/vendor/github.com/mitchellh/go-fs/fat/cluster_chain.go b/vendor/github.com/mitchellh/go-fs/fat/cluster_chain.go index a1f9161d7..344f12e7c 100644 --- a/vendor/github.com/mitchellh/go-fs/fat/cluster_chain.go +++ b/vendor/github.com/mitchellh/go-fs/fat/cluster_chain.go @@ -1,9 +1,10 @@ package fat import ( - "github.com/mitchellh/go-fs" "io" "math" + + "github.com/mitchellh/go-fs" ) type ClusterChain struct { diff --git a/vendor/github.com/mitchellh/go-fs/fat/directory.go b/vendor/github.com/mitchellh/go-fs/fat/directory.go index ff62319fd..2dec26ad2 100644 --- a/vendor/github.com/mitchellh/go-fs/fat/directory.go +++ b/vendor/github.com/mitchellh/go-fs/fat/directory.go @@ -2,9 +2,10 @@ package fat import ( "fmt" - "github.com/mitchellh/go-fs" "strings" "time" + + "github.com/mitchellh/go-fs" ) // Directory implements fs.Directory and is used to interface with @@ -16,9 +17,9 @@ type Directory struct { } // DirectoryEntry implements fs.DirectoryEntry and represents a single -// file/folder within a directory in a FAT filesystem. Note that the -// underlying directory entry data structures on the disk may be more -// than one to accomodate for long filenames. +// file/folder within a directory in a FAT filesystem. Note that there may be +// more than one underlying directory entry data structure on the disk to +// account for long filenames. type DirectoryEntry struct { dir *Directory lfnEntries []*DirectoryClusterEntry diff --git a/vendor/github.com/mitchellh/go-fs/fat/directory_cluster.go b/vendor/github.com/mitchellh/go-fs/fat/directory_cluster.go index b822e82db..691710636 100644 --- a/vendor/github.com/mitchellh/go-fs/fat/directory_cluster.go +++ b/vendor/github.com/mitchellh/go-fs/fat/directory_cluster.go @@ -5,11 +5,12 @@ import ( "encoding/binary" "errors" "fmt" - "github.com/mitchellh/go-fs" "math" "strings" "time" "unicode/utf16" + + "github.com/mitchellh/go-fs" ) type DirectoryAttr uint8 @@ -126,7 +127,7 @@ func NewDirectoryCluster(start uint32, parent uint32, t time.Time) *DirectoryClu // Create the "." and ".." 
entries cluster.entries = []*DirectoryClusterEntry{ - &DirectoryClusterEntry{ + { accessTime: t, attr: AttrDirectory, cluster: start, @@ -134,7 +135,7 @@ func NewDirectoryCluster(start uint32, parent uint32, t time.Time) *DirectoryClu name: ".", writeTime: t, }, - &DirectoryClusterEntry{ + { accessTime: t, attr: AttrDirectory, cluster: parent, diff --git a/vendor/github.com/mitchellh/go-fs/fat/fat.go b/vendor/github.com/mitchellh/go-fs/fat/fat.go index 8597986f6..3b6ceb16a 100644 --- a/vendor/github.com/mitchellh/go-fs/fat/fat.go +++ b/vendor/github.com/mitchellh/go-fs/fat/fat.go @@ -3,8 +3,9 @@ package fat import ( "errors" "fmt" - "github.com/mitchellh/go-fs" "math" + + "github.com/mitchellh/go-fs" ) // The first cluster that can really hold user data is always 2 diff --git a/vendor/github.com/mitchellh/go-fs/fat/short_name.go b/vendor/github.com/mitchellh/go-fs/fat/short_name.go index df03a2aaf..c91c161f5 100644 --- a/vendor/github.com/mitchellh/go-fs/fat/short_name.go +++ b/vendor/github.com/mitchellh/go-fs/fat/short_name.go @@ -30,11 +30,11 @@ func generateShortName(longName string, used []string) (string, error) { if dotIdx == -1 { dotIdx = len(longName) } else { - ext = longName[dotIdx+1 : len(longName)] + ext = longName[dotIdx+1:] } ext = cleanShortString(ext) - ext = ext[0:len(ext)] + ext = ext[0:] rawName := longName[0:dotIdx] name := cleanShortString(rawName) simpleName := fmt.Sprintf("%s.%s", name, ext) @@ -42,7 +42,7 @@ func generateShortName(longName string, used []string) (string, error) { simpleName = simpleName[0 : len(simpleName)-1] } - doSuffix := name != rawName || len(name) > 8 + doSuffix := name != rawName || len(name) > 8 || len(ext) > 3 if !doSuffix { for _, usedSingle := range used { if strings.ToUpper(usedSingle) == simpleName { @@ -53,6 +53,9 @@ func generateShortName(longName string, used []string) (string, error) { } if doSuffix { + if len(ext) > 3 { + ext = ext[:3] + } found := false for i := 1; i < 99999; i++ { serial := fmt.Sprintf("~%d", i) diff --git a/vendor/github.com/mitchellh/go-fs/fat/super_floppy.go b/vendor/github.com/mitchellh/go-fs/fat/super_floppy.go index 16ad2caeb..ceaf4926a 100644 --- a/vendor/github.com/mitchellh/go-fs/fat/super_floppy.go +++ b/vendor/github.com/mitchellh/go-fs/fat/super_floppy.go @@ -3,8 +3,9 @@ package fat import ( "errors" "fmt" - "github.com/mitchellh/go-fs" "time" + + "github.com/mitchellh/go-fs" ) // SuperFloppyConfig is the configuration for various properties of diff --git a/vendor/github.com/mitchellh/mapstructure/README.md b/vendor/github.com/mitchellh/mapstructure/README.md index 659d6885f..7ecc785e4 100644 --- a/vendor/github.com/mitchellh/mapstructure/README.md +++ b/vendor/github.com/mitchellh/mapstructure/README.md @@ -1,4 +1,4 @@ -# mapstructure +# mapstructure [![Godoc](https://godoc.org/github.com/mitchell/mapstructure?status.svg)](https://godoc.org/github.com/mitchell/mapstructure) mapstructure is a Go library for decoding generic map values to structures and vice versa, while providing helpful error handling. 
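For context on the mapstructure changes vendored below, here is a minimal usage sketch. It is not part of the vendored patch; the `Target` struct, its fields, and the input map are invented for illustration. It exercises the `WeaklyTypedInput` behavior documented in the updated `DecoderConfig` comments, where a single value such as `"4"` can be lifted into `[]int{4}` when the target field is an int slice.

```go
package main

import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// Target is a hypothetical destination struct used only for this example.
type Target struct {
	Name  string `mapstructure:"name"`
	Ports []int  `mapstructure:"ports"`
}

func main() {
	// Loosely typed input, as produced by decoding JSON or a template.
	input := map[string]interface{}{
		"name":  "web",
		"ports": "4", // a single string value, not a slice
	}

	var out Target
	decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		WeaklyTypedInput: true, // enable the weak conversions listed in the DecoderConfig docs
		Result:           &out,
	})
	if err != nil {
		panic(err)
	}
	if err := decoder.Decode(input); err != nil {
		panic(err)
	}

	// With weak typing, "4" is lifted into a one-element slice and parsed,
	// so out.Ports ends up as []int{4}.
	fmt.Printf("%+v\n", out)
}
```

For strict decoding without these conversions, the package-level `mapstructure.Decode(input, &out)` helper can be used instead.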
diff --git a/vendor/github.com/mitchellh/mapstructure/decode_hooks.go b/vendor/github.com/mitchellh/mapstructure/decode_hooks.go index aa91f76ce..afcfd5eed 100644 --- a/vendor/github.com/mitchellh/mapstructure/decode_hooks.go +++ b/vendor/github.com/mitchellh/mapstructure/decode_hooks.go @@ -38,12 +38,6 @@ func DecodeHookExec( raw DecodeHookFunc, from reflect.Type, to reflect.Type, data interface{}) (interface{}, error) { - // Build our arguments that reflect expects - argVals := make([]reflect.Value, 3) - argVals[0] = reflect.ValueOf(from) - argVals[1] = reflect.ValueOf(to) - argVals[2] = reflect.ValueOf(data) - switch f := typedDecodeHook(raw).(type) { case DecodeHookFuncType: return f(from, to, data) @@ -72,7 +66,10 @@ func ComposeDecodeHookFunc(fs ...DecodeHookFunc) DecodeHookFunc { } // Modify the from kind to be correct with the new data - f = reflect.ValueOf(data).Type() + f = nil + if val := reflect.ValueOf(data); val.IsValid() { + f = val.Type() + } } return data, nil @@ -118,6 +115,11 @@ func StringToTimeDurationHookFunc() DecodeHookFunc { } } +// WeaklyTypedHook is a DecodeHookFunc which adds support for weak typing to +// the decoder. +// +// Note that this is significantly different from the WeaklyTypedInput option +// of the DecoderConfig. func WeaklyTypedHook( f reflect.Kind, t reflect.Kind, @@ -129,9 +131,8 @@ func WeaklyTypedHook( case reflect.Bool: if dataVal.Bool() { return "1", nil - } else { - return "0", nil } + return "0", nil case reflect.Float32: return strconv.FormatFloat(dataVal.Float(), 'f', -1, 64), nil case reflect.Int: diff --git a/vendor/github.com/mitchellh/mapstructure/mapstructure.go b/vendor/github.com/mitchellh/mapstructure/mapstructure.go index 40be5116d..39ec1e943 100644 --- a/vendor/github.com/mitchellh/mapstructure/mapstructure.go +++ b/vendor/github.com/mitchellh/mapstructure/mapstructure.go @@ -1,5 +1,5 @@ -// The mapstructure package exposes functionality to convert an -// abitrary map[string]interface{} into a native Go structure. +// Package mapstructure exposes functionality to convert an arbitrary +// map[string]interface{} into a native Go structure. // // The Go structure can be arbitrarily complex, containing slices, // other structs, etc. and the decoder will properly decode nested @@ -8,6 +8,7 @@ package mapstructure import ( + "encoding/json" "errors" "fmt" "reflect" @@ -31,7 +32,12 @@ import ( // both. type DecodeHookFunc interface{} +// DecodeHookFuncType is a DecodeHookFunc which has complete information about +// the source and target types. type DecodeHookFuncType func(reflect.Type, reflect.Type, interface{}) (interface{}, error) + +// DecodeHookFuncKind is a DecodeHookFunc which knows only the Kinds of the +// source and target types. type DecodeHookFuncKind func(reflect.Kind, reflect.Kind, interface{}) (interface{}, error) // DecoderConfig is the configuration that is used to create a new decoder @@ -67,6 +73,10 @@ type DecoderConfig struct { // FALSE, false, False. Anything else is an error) // - empty array = empty map and vice versa // - negative numbers to overflowed uint values (base 10) + // - slice of maps to a merged map + // - single values are converted to slices if required. Each + // element is weakly decoded. For example: "4" can become []int{4} + // if the target type is an int slice. 
// WeaklyTypedInput bool @@ -200,7 +210,7 @@ func (d *Decoder) decode(name string, data interface{}, val reflect.Value) error d.config.DecodeHook, dataVal.Type(), val.Type(), data) if err != nil { - return err + return fmt.Errorf("error decoding '%s': %s", name, err) } } @@ -227,6 +237,10 @@ func (d *Decoder) decode(name string, data interface{}, val reflect.Value) error err = d.decodePtr(name, data, val) case reflect.Slice: err = d.decodeSlice(name, data, val) + case reflect.Array: + err = d.decodeArray(name, data, val) + case reflect.Func: + err = d.decodeFunc(name, data, val) default: // If we reached this point then we weren't able to decode it return fmt.Errorf("%s: unsupported type: %s", name, dataKind) @@ -245,6 +259,10 @@ func (d *Decoder) decode(name string, data interface{}, val reflect.Value) error // value to "data" of that type. func (d *Decoder) decodeBasic(name string, data interface{}, val reflect.Value) error { dataVal := reflect.ValueOf(data) + if !dataVal.IsValid() { + dataVal = reflect.Zero(val.Type()) + } + dataValType := dataVal.Type() if !dataValType.AssignableTo(val.Type()) { return fmt.Errorf( @@ -276,12 +294,22 @@ func (d *Decoder) decodeString(name string, data interface{}, val reflect.Value) val.SetString(strconv.FormatUint(dataVal.Uint(), 10)) case dataKind == reflect.Float32 && d.config.WeaklyTypedInput: val.SetString(strconv.FormatFloat(dataVal.Float(), 'f', -1, 64)) - case dataKind == reflect.Slice && d.config.WeaklyTypedInput: + case dataKind == reflect.Slice && d.config.WeaklyTypedInput, + dataKind == reflect.Array && d.config.WeaklyTypedInput: dataType := dataVal.Type() elemKind := dataType.Elem().Kind() - switch { - case elemKind == reflect.Uint8: - val.SetString(string(dataVal.Interface().([]uint8))) + switch elemKind { + case reflect.Uint8: + var uints []uint8 + if dataKind == reflect.Array { + uints = make([]uint8, dataVal.Len(), dataVal.Len()) + for i := range uints { + uints[i] = dataVal.Index(i).Interface().(uint8) + } + } else { + uints = dataVal.Interface().([]uint8) + } + val.SetString(string(uints)) default: converted = false } @@ -301,6 +329,7 @@ func (d *Decoder) decodeString(name string, data interface{}, val reflect.Value) func (d *Decoder) decodeInt(name string, data interface{}, val reflect.Value) error { dataVal := reflect.ValueOf(data) dataKind := getKind(dataVal) + dataType := dataVal.Type() switch { case dataKind == reflect.Int: @@ -322,6 +351,14 @@ func (d *Decoder) decodeInt(name string, data interface{}, val reflect.Value) er } else { return fmt.Errorf("cannot parse '%s' as int: %s", name, err) } + case dataType.PkgPath() == "encoding/json" && dataType.Name() == "Number": + jn := data.(json.Number) + i, err := jn.Int64() + if err != nil { + return fmt.Errorf( + "error decoding json.Number into %s: %s", name, err) + } + val.SetInt(i) default: return fmt.Errorf( "'%s' expected type '%s', got unconvertible type '%s'", @@ -408,6 +445,7 @@ func (d *Decoder) decodeBool(name string, data interface{}, val reflect.Value) e func (d *Decoder) decodeFloat(name string, data interface{}, val reflect.Value) error { dataVal := reflect.ValueOf(data) dataKind := getKind(dataVal) + dataType := dataVal.Type() switch { case dataKind == reflect.Int: @@ -415,7 +453,7 @@ func (d *Decoder) decodeFloat(name string, data interface{}, val reflect.Value) case dataKind == reflect.Uint: val.SetFloat(float64(dataVal.Uint())) case dataKind == reflect.Float32: - val.SetFloat(float64(dataVal.Float())) + val.SetFloat(dataVal.Float()) case dataKind == reflect.Bool && 
d.config.WeaklyTypedInput: if dataVal.Bool() { val.SetFloat(1) @@ -429,6 +467,14 @@ func (d *Decoder) decodeFloat(name string, data interface{}, val reflect.Value) } else { return fmt.Errorf("cannot parse '%s' as float: %s", name, err) } + case dataType.PkgPath() == "encoding/json" && dataType.Name() == "Number": + jn := data.(json.Number) + i, err := jn.Float64() + if err != nil { + return fmt.Errorf( + "error decoding json.Number into %s: %s", name, err) + } + val.SetFloat(i) default: return fmt.Errorf( "'%s' expected type '%s', got unconvertible type '%s'", @@ -456,15 +502,30 @@ func (d *Decoder) decodeMap(name string, data interface{}, val reflect.Value) er // Check input type dataVal := reflect.Indirect(reflect.ValueOf(data)) if dataVal.Kind() != reflect.Map { - // Accept empty array/slice instead of an empty map in weakly typed mode - if d.config.WeaklyTypedInput && - (dataVal.Kind() == reflect.Slice || dataVal.Kind() == reflect.Array) && - dataVal.Len() == 0 { - val.Set(valMap) - return nil - } else { - return fmt.Errorf("'%s' expected a map, got '%s'", name, dataVal.Kind()) + // In weak mode, we accept a slice of maps as an input... + if d.config.WeaklyTypedInput { + switch dataVal.Kind() { + case reflect.Array, reflect.Slice: + // Special case for BC reasons (covered by tests) + if dataVal.Len() == 0 { + val.Set(valMap) + return nil + } + + for i := 0; i < dataVal.Len(); i++ { + err := d.decode( + fmt.Sprintf("%s[%d]", name, i), + dataVal.Index(i).Interface(), val) + if err != nil { + return err + } + } + + return nil + } } + + return fmt.Errorf("'%s' expected a map, got '%s'", name, dataVal.Kind()) } // Accumulate errors @@ -507,7 +568,12 @@ func (d *Decoder) decodePtr(name string, data interface{}, val reflect.Value) er // into that. Then set the value of the pointer to this type. valType := val.Type() valElemType := valType.Elem() - realVal := reflect.New(valElemType) + + realVal := val + if realVal.IsNil() || d.config.ZeroFields { + realVal = reflect.New(valElemType) + } + if err := d.decode(name, data, reflect.Indirect(realVal)); err != nil { return err } @@ -516,6 +582,19 @@ func (d *Decoder) decodePtr(name string, data interface{}, val reflect.Value) er return nil } +func (d *Decoder) decodeFunc(name string, data interface{}, val reflect.Value) error { + // Create an element of the concrete (non pointer) type and decode + // into that. Then set the value of the pointer to this type. 
+ dataVal := reflect.Indirect(reflect.ValueOf(data)) + if val.Type() != dataVal.Type() { + return fmt.Errorf( + "'%s' expected type '%s', got unconvertible type '%s'", + name, val.Type(), dataVal.Type()) + } + val.Set(dataVal) + return nil +} + func (d *Decoder) decodeSlice(name string, data interface{}, val reflect.Value) error { dataVal := reflect.Indirect(reflect.ValueOf(data)) dataValKind := dataVal.Kind() @@ -523,26 +602,44 @@ func (d *Decoder) decodeSlice(name string, data interface{}, val reflect.Value) valElemType := valType.Elem() sliceType := reflect.SliceOf(valElemType) - // Check input type - if dataValKind != reflect.Array && dataValKind != reflect.Slice { - // Accept empty map instead of array/slice in weakly typed mode - if d.config.WeaklyTypedInput && dataVal.Kind() == reflect.Map && dataVal.Len() == 0 { - val.Set(reflect.MakeSlice(sliceType, 0, 0)) - return nil - } else { + valSlice := val + if valSlice.IsNil() || d.config.ZeroFields { + // Check input type + if dataValKind != reflect.Array && dataValKind != reflect.Slice { + if d.config.WeaklyTypedInput { + switch { + // Empty maps turn into empty slices + case dataValKind == reflect.Map: + if dataVal.Len() == 0 { + val.Set(reflect.MakeSlice(sliceType, 0, 0)) + return nil + } + + // All other types we try to convert to the slice type + // and "lift" it into it. i.e. a string becomes a string slice. + default: + // Just re-try this function with data as a slice. + return d.decodeSlice(name, []interface{}{data}, val) + } + } + return fmt.Errorf( "'%s': source data must be an array or slice, got %s", name, dataValKind) - } - } - // Make a new slice to hold our result, same size as the original data. - valSlice := reflect.MakeSlice(sliceType, dataVal.Len(), dataVal.Len()) + } + + // Make a new slice to hold our result, same size as the original data. + valSlice = reflect.MakeSlice(sliceType, dataVal.Len(), dataVal.Len()) + } // Accumulate any errors errors := make([]string, 0) for i := 0; i < dataVal.Len(); i++ { currentData := dataVal.Index(i).Interface() + for valSlice.Len() <= i { + valSlice = reflect.Append(valSlice, reflect.Zero(valElemType)) + } currentField := valSlice.Index(i) fieldName := fmt.Sprintf("%s[%d]", name, i) @@ -562,6 +659,73 @@ func (d *Decoder) decodeSlice(name string, data interface{}, val reflect.Value) return nil } +func (d *Decoder) decodeArray(name string, data interface{}, val reflect.Value) error { + dataVal := reflect.Indirect(reflect.ValueOf(data)) + dataValKind := dataVal.Kind() + valType := val.Type() + valElemType := valType.Elem() + arrayType := reflect.ArrayOf(valType.Len(), valElemType) + + valArray := val + + if valArray.Interface() == reflect.Zero(valArray.Type()).Interface() || d.config.ZeroFields { + // Check input type + if dataValKind != reflect.Array && dataValKind != reflect.Slice { + if d.config.WeaklyTypedInput { + switch { + // Empty maps turn into empty arrays + case dataValKind == reflect.Map: + if dataVal.Len() == 0 { + val.Set(reflect.Zero(arrayType)) + return nil + } + + // All other types we try to convert to the array type + // and "lift" it into it. i.e. a string becomes a string array. + default: + // Just re-try this function with data as a slice. 
+ return d.decodeArray(name, []interface{}{data}, val) + } + } + + return fmt.Errorf( + "'%s': source data must be an array or slice, got %s", name, dataValKind) + + } + if dataVal.Len() > arrayType.Len() { + return fmt.Errorf( + "'%s': expected source data to have length less or equal to %d, got %d", name, arrayType.Len(), dataVal.Len()) + + } + + // Make a new array to hold our result, same size as the original data. + valArray = reflect.New(arrayType).Elem() + } + + // Accumulate any errors + errors := make([]string, 0) + + for i := 0; i < dataVal.Len(); i++ { + currentData := dataVal.Index(i).Interface() + currentField := valArray.Index(i) + + fieldName := fmt.Sprintf("%s[%d]", name, i) + if err := d.decode(fieldName, currentData, currentField); err != nil { + errors = appendErrors(errors, err) + } + } + + // Finally, set the value to the array we built up + val.Set(valArray) + + // If there were errors, we return those + if len(errors) > 0 { + return &Error{errors} + } + + return nil +} + func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) error { dataVal := reflect.Indirect(reflect.ValueOf(data)) @@ -601,23 +765,20 @@ func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) // Compile the list of all the fields that we're going to be decoding // from all the structs. - fields := make(map[*reflect.StructField]reflect.Value) + type field struct { + field reflect.StructField + val reflect.Value + } + fields := []field{} for len(structs) > 0 { structVal := structs[0] structs = structs[1:] structType := structVal.Type() + for i := 0; i < structType.NumField(); i++ { fieldType := structType.Field(i) - - if fieldType.Anonymous { - fieldKind := fieldType.Type.Kind() - if fieldKind != reflect.Struct { - errors = appendErrors(errors, - fmt.Errorf("%s: unsupported type: %s", fieldType.Name, fieldKind)) - continue - } - } + fieldKind := fieldType.Type.Kind() // If "squash" is specified in the tag, we squash the field down. squash := false @@ -630,19 +791,26 @@ func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) } if squash { - structs = append(structs, val.FieldByName(fieldType.Name)) + if fieldKind != reflect.Struct { + errors = appendErrors(errors, + fmt.Errorf("%s: unsupported type for squash: %s", fieldType.Name, fieldKind)) + } else { + structs = append(structs, structVal.FieldByName(fieldType.Name)) + } continue } // Normal struct field, store it away - fields[&fieldType] = structVal.Field(i) + fields = append(fields, field{fieldType, structVal.Field(i)}) } } - for fieldType, field := range fields { - fieldName := fieldType.Name + // for fieldType, field := range fields { + for _, f := range fields { + field, fieldValue := f.field, f.val + fieldName := field.Name - tagValue := fieldType.Tag.Get(d.config.TagName) + tagValue := field.Tag.Get(d.config.TagName) tagValue = strings.SplitN(tagValue, ",", 2)[0] if tagValue != "" { fieldName = tagValue @@ -653,7 +821,7 @@ func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) if !rawMapVal.IsValid() { // Do a slower search by iterating over each key and // doing case-insensitive search. 
- for dataValKey, _ := range dataValKeys { + for dataValKey := range dataValKeys { mK, ok := dataValKey.Interface().(string) if !ok { // Not a string key @@ -677,14 +845,14 @@ func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) // Delete the key we're using from the unused map so we stop tracking delete(dataValKeysUnused, rawMapKey.Interface()) - if !field.IsValid() { + if !fieldValue.IsValid() { // This should never happen panic("field is not valid") } // If we can't set the field, then it is unexported or something, // and we just continue onwards. - if !field.CanSet() { + if !fieldValue.CanSet() { continue } @@ -694,14 +862,14 @@ func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) fieldName = fmt.Sprintf("%s.%s", name, fieldName) } - if err := d.decode(fieldName, rawMapVal.Interface(), field); err != nil { + if err := d.decode(fieldName, rawMapVal.Interface(), fieldValue); err != nil { errors = appendErrors(errors, err) } } if d.config.ErrorUnused && len(dataValKeysUnused) > 0 { keys := make([]string, 0, len(dataValKeysUnused)) - for rawKey, _ := range dataValKeysUnused { + for rawKey := range dataValKeysUnused { keys = append(keys, rawKey.(string)) } sort.Strings(keys) @@ -716,7 +884,7 @@ func (d *Decoder) decodeStruct(name string, data interface{}, val reflect.Value) // Add the unused keys to the list of unused keys if we're tracking metadata if d.config.Metadata != nil { - for rawKey, _ := range dataValKeysUnused { + for rawKey := range dataValKeysUnused { key := rawKey.(string) if name != "" { key = fmt.Sprintf("%s.%s", name, key) diff --git a/vendor/github.com/moul/anonuuid/LICENSE b/vendor/github.com/moul/anonuuid/LICENSE new file mode 100644 index 000000000..492e2c629 --- /dev/null +++ b/vendor/github.com/moul/anonuuid/LICENSE @@ -0,0 +1,22 @@ +The MIT License (MIT) + +Copyright (c) 2015 Manfred Touron + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + diff --git a/vendor/github.com/moul/anonuuid/Makefile b/vendor/github.com/moul/anonuuid/Makefile new file mode 100644 index 000000000..4e2f3bb44 --- /dev/null +++ b/vendor/github.com/moul/anonuuid/Makefile @@ -0,0 +1,73 @@ +# Project-specific variables +BINARIES ?= anonuuid +CONVEY_PORT ?= 9042 + + +# Common variables +SOURCES := $(shell find . -name "*.go") +COMMANDS := $(shell go list ./... | grep -v /vendor/ | grep /cmd/) +PACKAGES := $(shell go list ./... 
| grep -v /vendor/ | grep -v /cmd/) +GOENV ?= GO15VENDOREXPERIMENT=1 +GO ?= $(GOENV) go +GODEP ?= $(GOENV) godep +USER ?= $(shell whoami) + + +all: build + + +.PHONY: build +build: $(BINARIES) + + +$(BINARIES): $(SOURCES) + $(GO) build -o $@ ./cmd/$@ + + +.PHONY: test +test: + $(GO) get -t . + $(GO) test -v . + + +.PHONY: godep-save +godep-save: + $(GODEP) save $(PACKAGES) $(COMMANDS) + + +.PHONY: clean +clean: + rm -f $(BINARIES) + + +.PHONY: re +re: clean all + + +.PHONY: convey +convey: + $(GO) get github.com/smartystreets/goconvey + goconvey -cover -port=$(CONVEY_PORT) -workDir="$(realpath .)" -depth=1 + + +.PHONY: cover +cover: profile.out + + +profile.out: $(SOURCES) + rm -f $@ + $(GO) test -covermode=count -coverpkg=. -coverprofile=$@ . + + +.PHONY: docker-build +docker-build: + go get github.com/laher/goxc + rm -rf contrib/docker/linux_386 + for binary in $(BINARIES); do \ + goxc -bc="linux,386" -d . -pv contrib/docker -n $$binary xc; \ + mv contrib/docker/linux_386/$$binary contrib/docker/entrypoint; \ + docker build -t $(USER)/$$binary contrib/docker; \ + docker run -it --rm $(USER)/$$binary || true; \ + docker inspect --type=image --format="{{ .Id }}" moul/$$binary || true; \ + echo "Now you can run 'docker push $(USER)/$$binary'"; \ + done diff --git a/vendor/github.com/moul/anonuuid/README.md b/vendor/github.com/moul/anonuuid/README.md new file mode 100644 index 000000000..f5c9a57eb --- /dev/null +++ b/vendor/github.com/moul/anonuuid/README.md @@ -0,0 +1,170 @@ +# AnonUUID + +[![Build Status](https://travis-ci.org/moul/anonuuid.svg)](https://travis-ci.org/moul/anonuuid) +[![GoDoc](https://godoc.org/github.com/moul/anonuuid?status.svg)](https://godoc.org/github.com/moul/anonuuid) +[![Coverage Status](https://coveralls.io/repos/moul/anonuuid/badge.svg?branch=master&service=github)](https://coveralls.io/github/moul/anonuuid?branch=master) + +:wrench: Anonymize UUIDs outputs (written in Golang) + +![AnonUUID Logo](https://raw.githubusercontent.com/moul/anonuuid/master/assets/anonuuid.png) + +**anonuuid** anonymize an input string by replacing all UUIDs by an anonymized +new one. + +The fake UUIDs are cached, so if AnonUUID encounter the same real UUIDs multiple +times, the translation will be the same. + +## Usage + +```console +$ anonuuid --help +NAME: + anonuuid - Anonymize UUIDs outputs + +USAGE: + anonuuid [global options] command [command options] [arguments...] + +VERSION: + 1.0.0-dev + +AUTHOR(S): + Manfred Touron + +COMMANDS: + help, h Shows a list of commands or help for one command + +GLOBAL OPTIONS: + --hexspeak Generate hexspeak style fake UUIDs + --random, -r Generate random fake UUIDs + --keep-beginning Keep first part of the UUID unchanged + --keep-end Keep last part of the UUID unchanged + --prefix, -p Prefix generated UUIDs + --suffix Suffix generated UUIDs + --help, -h show help + --version, -v print the version + ``` + +## Example + +Replace all UUIDs and cache the correspondance. 
+ +```command +$ anonuuid git:(master) ✗ cat < 32 { + part = part[:32] + } + uuid := part[:8] + "-" + part[8:12] + "-1" + part[13:16] + "-" + part[16:20] + "-" + part[20:32] + + err := IsUUID(uuid) + if err != nil { + return "", err + } + + return uuid, nil +} + +// GenerateRandomUUID returns an UUID based on random strings +func GenerateRandomUUID(length int) (string, error) { + var letters = []rune("abcdef0123456789") + + b := make([]rune, length) + for i := range b { + b[i] = letters[rand.Intn(len(letters))] + } + return FormatUUID(string(b)) +} + +// GenerateHexspeakUUID returns an UUID formatted string containing hexspeak words +func GenerateHexspeakUUID(i int) (string, error) { + if i < 0 { + i = -i + } + hexspeaks := []string{ + "0ff1ce", + "31337", + "4b1d", + "badc0de", + "badcafe", + "badf00d", + "deadbabe", + "deadbeef", + "deadc0de", + "deadfeed", + "fee1bad", + } + return FormatUUID(hexspeaks[i%len(hexspeaks)]) +} + +// GenerateLenUUID returns an UUID formatted string based on an index number +func GenerateLenUUID(i int) (string, error) { + if i < 0 { + i = 2<<29 + i + } + return FormatUUID(fmt.Sprintf("%x", i)) +} diff --git a/vendor/github.com/moul/gotty-client/Dockerfile b/vendor/github.com/moul/gotty-client/Dockerfile new file mode 100644 index 000000000..e598be675 --- /dev/null +++ b/vendor/github.com/moul/gotty-client/Dockerfile @@ -0,0 +1,11 @@ +# build +FROM golang:1.9 as builder +RUN apt update && apt -y install jq +COPY . /go/src/github.com/moul/gotty-client +WORKDIR /go/src/github.com/moul/gotty-client +RUN make install + +# minimal runtime +FROM scratch +COPY --from=builder /go/bin/gotty-client /bin/gotty-client +ENTRYPOINT ["/bin/gotty-client"] diff --git a/vendor/github.com/moul/gotty-client/LICENSE b/vendor/github.com/moul/gotty-client/LICENSE new file mode 100644 index 000000000..492e2c629 --- /dev/null +++ b/vendor/github.com/moul/gotty-client/LICENSE @@ -0,0 +1,22 @@ +The MIT License (MIT) + +Copyright (c) 2015 Manfred Touron + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. + diff --git a/vendor/github.com/moul/gotty-client/LICENSE.apache b/vendor/github.com/moul/gotty-client/LICENSE.apache new file mode 100644 index 000000000..6fe259dd5 --- /dev/null +++ b/vendor/github.com/moul/gotty-client/LICENSE.apache @@ -0,0 +1,192 @@ + + Apache License + Version 2.0, January 2004 + https://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. 
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + Copyright 2013-2017 Docker, Inc. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + diff --git a/vendor/github.com/moul/gotty-client/Makefile b/vendor/github.com/moul/gotty-client/Makefile new file mode 100644 index 000000000..e8ff3b579 --- /dev/null +++ b/vendor/github.com/moul/gotty-client/Makefile @@ -0,0 +1,73 @@ +# Project-specific variables +BINARIES ?= gotty-client +GOTTY_URL := http://localhost:8081/ +VERSION := $(shell cat .goxc.json | jq -c .PackageVersion | sed 's/"//g') + +CONVEY_PORT ?= 9042 + + +# Common variables +SOURCES := $(shell find . -type f -name "*.go") +COMMANDS := $(shell go list ./... | grep -v /vendor/ | grep /cmd/) +PACKAGES := $(shell go list ./... 
| grep -v /vendor/ | grep -v /cmd/) +GOENV ?= GO15VENDOREXPERIMENT=1 +GO ?= $(GOENV) go +GODEP ?= $(GOENV) godep +USER ?= $(shell whoami) + + +all: build + + +.PHONY: build +build: $(BINARIES) + + +.PHONY: install +install: + $(GO) install ./cmd/gotty-client + + +$(BINARIES): $(SOURCES) + $(GO) build -i -o $@ ./cmd/$@ + + +.PHONY: test +test: +# $(GO) get -t . + $(GO) test -i -v . + $(GO) test -v . + + +.PHONY: godep-save +godep-save: + $(GODEP) save $(PACKAGES) $(COMMANDS) + + +.PHONY: clean +clean: + rm -f $(BINARIES) + + +.PHONY: re +re: clean all + + +.PHONY: convey +convey: + $(GO) get github.com/smartystreets/goconvey + goconvey -cover -port=$(CONVEY_PORT) -workDir="$(realpath .)" -depth=1 + + +.PHONY: cover +cover: profile.out + + +profile.out: $(SOURCES) + rm -f $@ + $(GO) test -covermode=count -coverpkg=. -coverprofile=$@ . + + +.PHONY: docker-build +docker-build: + docker build -t moul/gotty-client . diff --git a/vendor/github.com/moul/gotty-client/README.md b/vendor/github.com/moul/gotty-client/README.md new file mode 100644 index 000000000..91d3be331 --- /dev/null +++ b/vendor/github.com/moul/gotty-client/README.md @@ -0,0 +1,205 @@ +# gotty-client +:wrench: Terminal client for [GoTTY](https://github.com/yudai/gotty). + +![](https://raw.githubusercontent.com/moul/gotty-client/master/resources/gotty-client.png) + +[![Build Status](https://travis-ci.org/moul/gotty-client.svg?branch=master)](https://travis-ci.org/moul/gotty-client) +[![GoDoc](https://godoc.org/github.com/moul/gotty-client?status.svg)](https://godoc.org/github.com/moul/gotty-client) +[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fmoul%2Fgotty-client.svg?type=shield)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fmoul%2Fgotty-client?ref=badge_shield) + +```ruby + ┌─────────────────┐ + ┌──────▶│ /bin/bash │ + │ └─────────────────┘ + ┌──────────────┐ ┌──────────┐ + │ │ │ Gotty │ +┌───────┐ ┌──▶│ Browser │───────┐ │ │ +│ │ │ │ │ │ │ │ +│ │ │ └──────────────┘ │ │ │ ┌─────────────────┐ +│ Bob │───┤ websockets─▶│ │─▶│ emacs /var/www │ +│ │ │ ╔═ ══ ══ ══ ══ ╗ │ │ │ └─────────────────┘ +│ │ │ ║ ║ │ │ │ +└───────┘ └──▶║ gotty-client ───────┘ │ │ + ║ │ │ + ╚═ ══ ══ ══ ══ ╝ └──────────┘ + │ ┌─────────────────┐ + └──────▶│ tmux attach │ + └─────────────────┘ +``` + +## Example + +Server side ([GoTTY](https://github.com/yudai/gotty)) + +```console +$ gotty -p 9191 sh -c 'while true; do date; sleep 1; done' +2015/08/24 18:54:31 Server is starting with command: sh -c while true; do date; sleep 1; done +2015/08/24 18:54:31 URL: http://[::1]:9191/ +2015/08/24 18:54:34 GET /ws +2015/08/24 18:54:34 New client connected: 127.0.0.1:61811 +2015/08/24 18:54:34 Command is running for client 127.0.0.1:61811 with PID 64834 +2015/08/24 18:54:39 Command exited for: 127.0.0.1:61811 +2015/08/24 18:54:39 Connection closed: 127.0.0.1:61811 +... 
+``` + +**Client side** + +```console +$ gotty-client http://localhost:9191/ +INFO[0000] New title: GoTTY - sh -c while true; do date; sleep 1; done (jean-michel-van-damme.local) +WARN[0000] Unhandled protocol message: json pref: 2{} +Mon Aug 24 18:54:34 CEST 2015 +Mon Aug 24 18:54:35 CEST 2015 +Mon Aug 24 18:54:36 CEST 2015 +Mon Aug 24 18:54:37 CEST 2015 +Mon Aug 24 18:54:38 CEST 2015 +^C +``` + +## Usage + +```console +$ gotty-client -h +NAME: + gotty-client - GoTTY client for your terminal + +USAGE: + gotty-client [global options] command [command options] GOTTY_URL + +VERSION: + 1.3.0+ + +AUTHOR(S): + Manfred Touron + +COMMANDS: + help, h Shows a list of commands or help for one command + +GLOBAL OPTIONS: + --debug, -D Enable debug mode [$GOTTY_CLIENT_DEBUG] + --help, -h show help + --version, -v print the version +``` + +## Install + +Install latest version using Golang (recommended) + +```console +$ go get github.com/moul/gotty-client/cmd/gotty-client +``` + +--- + +Install latest version using Homebrew (Mac OS X) + +```console +$ brew install https://raw.githubusercontent.com/moul/gotty-client/master/contrib/homebrew/gotty-client.rb --HEAD +``` + +or the latest released version + +```console +$ brew install https://raw.githubusercontent.com/moul/gotty-client/master/contrib/homebrew/gotty-client.rb +``` + +## Changelog + +### master (unreleased) + +* Add `--detach-keys` option ([#52](https://github.com/moul/gotty-client/issues/52)) + +[full commits list](https://github.com/moul/gotty-client/compare/v1.6.1...master) + +### [v1.6.1](https://github.com/moul/gotty-client/releases/tag/v1.6.1) (2017-01-19) + +* Do not exit on EOF ([#45](https://github.com/moul/gotty-client/pull/45)) ([@gurjeet](https://github.com/gurjeet)) + +[full commits list](https://github.com/moul/gotty-client/compare/v1.6.0...v1.6.1) + +### [v1.6.0](https://github.com/moul/gotty-client/releases/tag/v1.6.0) (2016-05-23) + +* Support of `--use-proxy-from-env` (Add Proxy support) ([#36](https://github.com/moul/gotty-client/pull/36)) ([@byung2](https://github.com/byung2)) +* Add debug mode ([#18](https://github.com/moul/gotty-client/issues/18)) +* Fix argument passing ([#16](https://github.com/moul/gotty-client/issues/16)) +* Remove warnings + golang fixes and refactoring ([@QuentinPerez](https://github.com/QuentinPerez)) + +[full commits list](https://github.com/moul/gotty-client/compare/v1.5.0...v1.6.0) + +### [v1.5.0](https://github.com/moul/gotty-client/releases/tag/v1.5.0) (2016-02-18) + +* Add autocomplete support ([#19](https://github.com/moul/gotty-client/issues/19)) +* Switch from `Party` to `Godep` +* Fix terminal data being interpreted as format string ([#34](https://github.com/moul/gotty-client/pull/34)) ([@mickael9](https://github.com/mickael9)) + +[full commits list](https://github.com/moul/gotty-client/compare/v1.4.0...v1.5.0) + +### [v1.4.0](https://github.com/moul/gotty-client/releases/tag/v1.4.0) (2015-12-09) + +* Remove solaris,plan9,nacl for `.goxc.json` +* Add an error if the go version is lower than 1.5 +* Flexible parsing of the input URL +* Add tests +* Support of `--skip-tls-verify` + +[full commits list](https://github.com/moul/gotty-client/compare/v1.3.0...v1.4.0) + +### [v1.3.0](https://github.com/moul/gotty-client/releases/tag/v1.3.0) (2015-10-27) + +* Fix `connected` state when using `Connect()` + `Loop()` methods +* Add `ExitLoop` which allow to exit from `Loop` function + +[full commits list](https://github.com/moul/gotty-client/compare/v1.2.0...v1.3.0) + +### 
[v1.2.0](https://github.com/moul/gotty-client/releases/tag/v1.2.0) (2015-10-23) + +* Removed an annoying warning when exiting connection ([#22](https://github.com/moul/gotty-client/issues/22)) ([@QuentinPerez](https://github.com/QuentinPerez)) +* Add the ability to configure alternative stdout ([#21](https://github.com/moul/gotty-client/issues/21)) ([@QuentinPerez](https://github.com/QuentinPerez)) +* Refactored the goroutine system with select, improve speed and stability ([@QuentinPerez](https://github.com/QuentinPerez)) +* Add debug mode (`--debug`/`-D`) ([#18](https://github.com/moul/gotty-client/issues/18)) +* Improve error message when connecting by checking HTTP status code +* Fix arguments passing ([#16](https://github.com/moul/gotty-client/issues/16)) +* Dropped support for golang<1.5 +* Small fixes + +[full commits list](https://github.com/moul/gotty-client/compare/v1.1.0...v1.2.0) + +### [v1.1.0](https://github.com/moul/gotty-client/releases/tag/v1.1.0) (2015-10-10) + +* Handling arguments + using mutexes (thanks to [@QuentinPerez](https://github.com/QuentinPerez)) +* Add logo ([#9](https://github.com/moul/gotty-client/issues/9)) +* Using codegansta/cli for CLI parsing ([#3](https://github.com/moul/gotty-client/issues/3)) +* Fix panic when running on older GoTTY server ([#13](https://github.com/moul/gotty-client/issues/13)) +* Add 'homebrew support' ([#1](https://github.com/moul/gotty-client/issues/1)) +* Add Changelog ([#5](https://github.com/moul/gotty-client/issues/5)) +* Add GOXC configuration to build binaries for multiple architectures ([#2](https://github.com/moul/gotty-client/issues/2)) + +[full commits list](https://github.com/moul/gotty-client/compare/v1.0.1...v1.1.0) + +### [v1.0.1](https://github.com/moul/gotty-client/releases/tag/v1.0.1) (2015-09-27) + +* Using party to manage dependencies + +[full commits list](https://github.com/moul/gotty-client/compare/v1.0.0...v1.0.1) + +### [v1.0.0](https://github.com/moul/gotty-client/releases/tag/v1.0.0) (2015-09-27) + +Compatible with [GoTTY](https://github.com/yudai/gotty) version: [v0.0.10](https://github.com/yudai/gotty/releases/tag/v0.0.10) + +#### Features + +* Support **basic-auth** +* Support **terminal-(re)size** +* Support **write** +* Support **title** +* Support **custom URI** + +[full commits list](https://github.com/moul/gotty-client/compare/cf0c1146c7ce20fe0bd65764c13253bc575cd43a...v1.0.0) + +## License + +MIT + + +[![FOSSA Status](https://app.fossa.io/api/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fmoul%2Fgotty-client.svg?type=large)](https://app.fossa.io/projects/git%2Bhttps%3A%2F%2Fgithub.com%2Fmoul%2Fgotty-client?ref=badge_large) diff --git a/vendor/github.com/moul/gotty-client/arch.go b/vendor/github.com/moul/gotty-client/arch.go new file mode 100644 index 000000000..77a4deaec --- /dev/null +++ b/vendor/github.com/moul/gotty-client/arch.go @@ -0,0 +1,32 @@ +// +build !windows + +package gottyclient + +import ( + "encoding/json" + "fmt" + "golang.org/x/sys/unix" + "os" + "os/signal" + "syscall" +) + +func notifySignalSIGWINCH(c chan<- os.Signal) { + signal.Notify(c, syscall.SIGWINCH) +} + +func resetSignalSIGWINCH() { + signal.Reset(syscall.SIGWINCH) +} + +func syscallTIOCGWINSZ() ([]byte, error) { + ws, err := unix.IoctlGetWinsize(0, 0) + if err != nil { + return nil, fmt.Errorf("ioctl error: %v", err) + } + b, err := json.Marshal(ws) + if err != nil { + return nil, fmt.Errorf("json.Marshal error: %v", err) + } + return b, err +} diff --git a/vendor/github.com/moul/gotty-client/arch_windows.go 
b/vendor/github.com/moul/gotty-client/arch_windows.go new file mode 100644 index 000000000..53aaffc5f --- /dev/null +++ b/vendor/github.com/moul/gotty-client/arch_windows.go @@ -0,0 +1,16 @@ +package gottyclient + +import ( + "errors" + "os" +) + +func notifySignalSIGWINCH(c chan<- os.Signal) { +} + +func resetSignalSIGWINCH() { +} + +func syscallTIOCGWINSZ() ([]byte, error) { + return nil, errors.New("SIGWINCH isn't supported on this ARCH") +} diff --git a/vendor/github.com/moul/gotty-client/gotty-client.go b/vendor/github.com/moul/gotty-client/gotty-client.go new file mode 100644 index 000000000..58f4a8547 --- /dev/null +++ b/vendor/github.com/moul/gotty-client/gotty-client.go @@ -0,0 +1,473 @@ +package gottyclient + +import ( + "crypto/tls" + "encoding/base64" + "encoding/json" + "fmt" + "io" + "io/ioutil" + "net/http" + "net/url" + "os" + "regexp" + "strings" + "sync" + "time" + + "github.com/creack/goselect" + "github.com/gorilla/websocket" + "github.com/sirupsen/logrus" + "golang.org/x/crypto/ssh/terminal" +) + +// GetAuthTokenURL transforms a GoTTY http URL to its AuthToken file URL +func GetAuthTokenURL(httpURL string) (*url.URL, *http.Header, error) { + header := http.Header{} + target, err := url.Parse(httpURL) + if err != nil { + return nil, nil, err + } + + target.Path = strings.TrimLeft(target.Path+"auth_token.js", "/") + + if target.User != nil { + header.Add("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte(target.User.String()))) + target.User = nil + } + + return target, &header, nil +} + +// GetURLQuery returns url.query +func GetURLQuery(rawurl string) (url.Values, error) { + target, err := url.Parse(rawurl) + if err != nil { + return nil, err + } + return target.Query(), nil +} + +// GetWebsocketURL transforms a GoTTY http URL to its WebSocket URL +func GetWebsocketURL(httpURL string) (*url.URL, *http.Header, error) { + header := http.Header{} + target, err := url.Parse(httpURL) + if err != nil { + return nil, nil, err + } + + if target.Scheme == "https" { + target.Scheme = "wss" + } else { + target.Scheme = "ws" + } + + target.Path = strings.TrimLeft(target.Path+"ws", "/") + + if target.User != nil { + header.Add("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte(target.User.String()))) + target.User = nil + } + + return target, &header, nil +} + +type Client struct { + Dialer *websocket.Dialer + Conn *websocket.Conn + URL string + WriteMutex *sync.Mutex + Output io.Writer + poison chan bool + SkipTLSVerify bool + UseProxyFromEnv bool + Connected bool + EscapeKeys []byte +} + +type querySingleType struct { + AuthToken string `json:"AuthToken"` + Arguments string `json:"Arguments"` +} + +func (c *Client) write(data []byte) error { + c.WriteMutex.Lock() + defer c.WriteMutex.Unlock() + return c.Conn.WriteMessage(websocket.TextMessage, data) +} + +// GetAuthToken retrieves an Auth Token from dynamic auth_token.js file +func (c *Client) GetAuthToken() (string, error) { + target, header, err := GetAuthTokenURL(c.URL) + if err != nil { + return "", err + } + + logrus.Debugf("Fetching auth token auth-token: %q", target.String()) + req, err := http.NewRequest("GET", target.String(), nil) + req.Header = *header + tr := &http.Transport{} + if c.SkipTLSVerify { + conf := &tls.Config{InsecureSkipVerify: true} + tr.TLSClientConfig = conf + } + if c.UseProxyFromEnv { + tr.Proxy = http.ProxyFromEnvironment + } + client := &http.Client{Transport: tr} + resp, err := client.Do(req) + if err != nil { + return "", err + } + + switch resp.StatusCode { + case 
200: + // Everything is OK + default: + return "", fmt.Errorf("unknown status code: %d (%s)", resp.StatusCode, http.StatusText(resp.StatusCode)) + } + + defer resp.Body.Close() + body, err := ioutil.ReadAll(resp.Body) + if err != nil { + return "", err + } + + re := regexp.MustCompile("var gotty_auth_token = '(.*)'") + output := re.FindStringSubmatch(string(body)) + if len(output) == 0 { + return "", fmt.Errorf("Cannot fetch GoTTY auth-token, please upgrade your GoTTY server.") + } + + return output[1], nil +} + +// Connect tries to dial a websocket server +func (c *Client) Connect() error { + // Retrieve AuthToken + authToken, err := c.GetAuthToken() + if err != nil { + return err + } + logrus.Debugf("Auth-token: %q", authToken) + + // Open WebSocket connection + target, header, err := GetWebsocketURL(c.URL) + if err != nil { + return err + } + logrus.Debugf("Connecting to websocket: %q", target.String()) + if c.SkipTLSVerify { + c.Dialer.TLSClientConfig = &tls.Config{InsecureSkipVerify: true} + } + if c.UseProxyFromEnv { + c.Dialer.Proxy = http.ProxyFromEnvironment + } + conn, _, err := c.Dialer.Dial(target.String(), *header) + if err != nil { + return err + } + c.Conn = conn + c.Connected = true + + // Pass arguments and auth-token + query, err := GetURLQuery(c.URL) + if err != nil { + return err + } + querySingle := querySingleType{ + Arguments: "?" + query.Encode(), + AuthToken: authToken, + } + json, err := json.Marshal(querySingle) + if err != nil { + logrus.Errorf("Failed to parse init message %v", err) + return err + } + // Send Json + logrus.Debugf("Sending arguments and auth-token") + err = c.write(json) + if err != nil { + return err + } + + go c.pingLoop() + + return nil +} + +func (c *Client) pingLoop() { + for { + logrus.Debugf("Sending ping") + c.write([]byte("1")) + time.Sleep(30 * time.Second) + } +} + +// Close will nicely close the dialer +func (c *Client) Close() { + c.Conn.Close() +} + +// ExitLoop will kill all goroutines launched by c.Loop() +// ExitLoop() -> wait Loop() -> Close() +func (c *Client) ExitLoop() { + fname := "ExitLoop" + openPoison(fname, c.poison) +} + +// Loop will look indefinitely for new messages +func (c *Client) Loop() error { + if !c.Connected { + err := c.Connect() + if err != nil { + return err + } + } + + wg := &sync.WaitGroup{} + + wg.Add(1) + go c.termsizeLoop(wg) + + wg.Add(1) + go c.readLoop(wg) + + wg.Add(1) + go c.writeLoop(wg) + + /* Wait for all of the above goroutines to finish */ + wg.Wait() + + logrus.Debug("Client.Loop() exiting") + return nil +} + +type winsize struct { + Rows uint16 `json:"rows"` + Columns uint16 `json:"columns"` + // unused + x uint16 + y uint16 +} + +type posionReason int + +const ( + committedSuicide = iota + killed +) + +func openPoison(fname string, poison chan bool) posionReason { + logrus.Debug(fname + " suicide") + + /* + * The close() may raise panic if multiple goroutines commit suicide at the + * same time. Prevent that panic from bubbling up. 
+ */ + defer func() { + if r := recover(); r != nil { + logrus.Debug("Prevented panic() of simultaneous suicides", r) + } + }() + + /* Signal others to die */ + close(poison) + return committedSuicide +} + +func die(fname string, poison chan bool) posionReason { + logrus.Debug(fname + " died") + + wasOpen := <-poison + if wasOpen { + logrus.Error("ERROR: The channel was open when it wasn't suppoed to be") + } + + return killed +} + +func (c *Client) termsizeLoop(wg *sync.WaitGroup) posionReason { + + defer wg.Done() + fname := "termsizeLoop" + + ch := make(chan os.Signal, 1) + notifySignalSIGWINCH(ch) + defer resetSignalSIGWINCH() + + for { + if b, err := syscallTIOCGWINSZ(); err != nil { + logrus.Warn(err) + } else { + if err = c.write(append([]byte("2"), b...)); err != nil { + logrus.Warnf("ws.WriteMessage failed: %v", err) + } + } + select { + case <-c.poison: + /* Somebody poisoned the well; die */ + return die(fname, c.poison) + case <-ch: + } + } +} + +type exposeFd interface { + Fd() uintptr +} + +func (c *Client) writeLoop(wg *sync.WaitGroup) posionReason { + defer wg.Done() + fname := "writeLoop" + + buff := make([]byte, 128) + oldState, err := terminal.MakeRaw(0) + if err == nil { + defer terminal.Restore(0, oldState) + } + + rdfs := &goselect.FDSet{} + reader := io.ReadCloser(os.Stdin) + + pr := NewEscapeProxy(reader, c.EscapeKeys) + defer reader.Close() + + for { + select { + case <-c.poison: + /* Somebody poisoned the well; die */ + return die(fname, c.poison) + default: + } + + rdfs.Zero() + rdfs.Set(reader.(exposeFd).Fd()) + err := goselect.Select(1, rdfs, nil, nil, 50*time.Millisecond) + if err != nil { + return openPoison(fname, c.poison) + } + if rdfs.IsSet(reader.(exposeFd).Fd()) { + size, err := pr.Read(buff) + + if err != nil { + if err == io.EOF { + // Send EOF to GoTTY + + // Send 'Input' marker, as defined in GoTTY::client_context.go, + // followed by EOT (a translation of Ctrl-D for terminals) + err = c.write(append([]byte("0"), byte(4))) + + if err != nil { + return openPoison(fname, c.poison) + } + continue + } else { + return openPoison(fname, c.poison) + } + } + + if size <= 0 { + continue + } + + data := buff[:size] + err = c.write(append([]byte("0"), data...)) + if err != nil { + return openPoison(fname, c.poison) + } + } + } +} + +func (c *Client) readLoop(wg *sync.WaitGroup) posionReason { + + defer wg.Done() + fname := "readLoop" + + type MessageNonBlocking struct { + Data []byte + Err error + } + msgChan := make(chan MessageNonBlocking) + + for { + go func() { + _, data, err := c.Conn.ReadMessage() + msgChan <- MessageNonBlocking{Data: data, Err: err} + }() + + select { + case <-c.poison: + /* Somebody poisoned the well; die */ + return die(fname, c.poison) + case msg := <-msgChan: + if msg.Err != nil { + + if _, ok := msg.Err.(*websocket.CloseError); !ok { + logrus.Warnf("c.Conn.ReadMessage: %v", msg.Err) + } + return openPoison(fname, c.poison) + } + if len(msg.Data) == 0 { + + logrus.Warnf("An error has occured") + return openPoison(fname, c.poison) + } + switch msg.Data[0] { + case '0': // data + buf, err := base64.StdEncoding.DecodeString(string(msg.Data[1:])) + if err != nil { + logrus.Warnf("Invalid base64 content: %q", msg.Data[1:]) + break + } + c.Output.Write(buf) + case '1': // pong + case '2': // new title + newTitle := string(msg.Data[1:]) + fmt.Fprintf(c.Output, "\033]0;%s\007", newTitle) + case '3': // json prefs + logrus.Debugf("Unhandled protocol message: json pref: %s", string(msg.Data[1:])) + case '4': // autoreconnect + 
logrus.Debugf("Unhandled protocol message: autoreconnect: %s", string(msg.Data)) + default: + logrus.Warnf("Unhandled protocol message: %s", string(msg.Data)) + } + } + } +} + +// SetOutput changes the output stream +func (c *Client) SetOutput(w io.Writer) { + c.Output = w +} + +// ParseURL parses an URL which may be incomplete and tries to standardize it +func ParseURL(input string) (string, error) { + parsed, err := url.Parse(input) + if err != nil { + return "", err + } + switch parsed.Scheme { + case "http", "https": + // everything is ok + default: + return ParseURL(fmt.Sprintf("http://%s", input)) + } + return parsed.String(), nil +} + +// NewClient returns a GoTTY client object +func NewClient(inputURL string) (*Client, error) { + url, err := ParseURL(inputURL) + if err != nil { + return nil, err + } + return &Client{ + Dialer: &websocket.Dialer{}, + URL: url, + WriteMutex: &sync.Mutex{}, + Output: os.Stdout, + poison: make(chan bool), + }, nil +} diff --git a/vendor/github.com/moul/gotty-client/proxy.go b/vendor/github.com/moul/gotty-client/proxy.go new file mode 100644 index 000000000..e42143ca8 --- /dev/null +++ b/vendor/github.com/moul/gotty-client/proxy.go @@ -0,0 +1,90 @@ +// Copyright 2013-2017 Docker, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); you may not +// use this file except in compliance with the License. You may obtain a copy of +// the License at http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +// License for the specific language governing permissions and limitations under +// the License. + +// CHANGES: +// - update package + + +package gottyclient + +import ( + "io" +) + +// EscapeError is special error which returned by a TTY proxy reader's Read() +// method in case its detach escape sequence is read. +type EscapeError struct{} + +func (EscapeError) Error() string { + return "read escape sequence" +} + +// escapeProxy is used only for attaches with a TTY. It is used to proxy +// stdin keypresses from the underlying reader and look for the passed in +// escape key sequence to signal a detach. +type escapeProxy struct { + escapeKeys []byte + escapeKeyPos int + r io.Reader +} + +// NewEscapeProxy returns a new TTY proxy reader which wraps the given reader +// and detects when the specified escape keys are read, in which case the Read +// method will return an error of type EscapeError. +func NewEscapeProxy(r io.Reader, escapeKeys []byte) io.Reader { + return &escapeProxy{ + escapeKeys: escapeKeys, + r: r, + } +} + +func (r *escapeProxy) Read(buf []byte) (int, error) { + nr, err := r.r.Read(buf) + + preserve := func() { + // this preserves the original key presses in the passed in buffer + nr += r.escapeKeyPos + preserve := make([]byte, 0, r.escapeKeyPos+len(buf)) + preserve = append(preserve, r.escapeKeys[:r.escapeKeyPos]...) + preserve = append(preserve, buf...) + r.escapeKeyPos = 0 + copy(buf[0:nr], preserve) + } + + if nr != 1 || err != nil { + if r.escapeKeyPos > 0 { + preserve() + } + return nr, err + } + + if buf[0] != r.escapeKeys[r.escapeKeyPos] { + if r.escapeKeyPos > 0 { + preserve() + } + return nr, nil + } + + if r.escapeKeyPos == len(r.escapeKeys)-1 { + return 0, EscapeError{} + } + + // Looks like we've got an escape key, but we need to match again on the next + // read. 
+ // Store the current escape key we found so we can look for the next one on + // the next read. + // Since this is an escape key, make sure we don't let the caller read it + // If later on we find that this is not the escape sequence, we'll add the + // keys back + r.escapeKeyPos++ + return nr - r.escapeKeyPos, nil +} diff --git a/vendor/github.com/olekukonko/tablewriter/LICENCE.md b/vendor/github.com/olekukonko/tablewriter/LICENCE.md new file mode 100644 index 000000000..1fd848425 --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/LICENCE.md @@ -0,0 +1,19 @@ +Copyright (C) 2014 by Oleku Konko + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. \ No newline at end of file diff --git a/vendor/github.com/olekukonko/tablewriter/README.md b/vendor/github.com/olekukonko/tablewriter/README.md new file mode 100644 index 000000000..ae12208be --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/README.md @@ -0,0 +1,279 @@ +ASCII Table Writer +========= + +[![Build Status](https://travis-ci.org/olekukonko/tablewriter.png?branch=master)](https://travis-ci.org/olekukonko/tablewriter) +[![Total views](https://img.shields.io/sourcegraph/rrc/github.com/olekukonko/tablewriter.svg)](https://sourcegraph.com/github.com/olekukonko/tablewriter) +[![Godoc](https://godoc.org/github.com/olekukonko/tablewriter?status.svg)](https://godoc.org/github.com/olekukonko/tablewriter) + +Generate ASCII table on the fly ... 
Installation is simple as + + go get github.com/olekukonko/tablewriter + + +#### Features +- Automatic Padding +- Support Multiple Lines +- Supports Alignment +- Support Custom Separators +- Automatic Alignment of numbers & percentage +- Write directly to http , file etc via `io.Writer` +- Read directly from CSV file +- Optional row line via `SetRowLine` +- Normalise table header +- Make CSV Headers optional +- Enable or disable table border +- Set custom footer support +- Optional identical cells merging +- Set custom caption + + +#### Example 1 - Basic +```go +data := [][]string{ + []string{"A", "The Good", "500"}, + []string{"B", "The Very very Bad Man", "288"}, + []string{"C", "The Ugly", "120"}, + []string{"D", "The Gopher", "800"}, +} + +table := tablewriter.NewWriter(os.Stdout) +table.SetHeader([]string{"Name", "Sign", "Rating"}) + +for _, v := range data { + table.Append(v) +} +table.Render() // Send output +``` + +##### Output 1 +``` ++------+-----------------------+--------+ +| NAME | SIGN | RATING | ++------+-----------------------+--------+ +| A | The Good | 500 | +| B | The Very very Bad Man | 288 | +| C | The Ugly | 120 | +| D | The Gopher | 800 | ++------+-----------------------+--------+ +``` + +#### Example 2 - Without Border / Footer / Bulk Append +```go +data := [][]string{ + []string{"1/1/2014", "Domain name", "2233", "$10.98"}, + []string{"1/1/2014", "January Hosting", "2233", "$54.95"}, + []string{"1/4/2014", "February Hosting", "2233", "$51.00"}, + []string{"1/4/2014", "February Extra Bandwidth", "2233", "$30.00"}, +} + +table := tablewriter.NewWriter(os.Stdout) +table.SetHeader([]string{"Date", "Description", "CV2", "Amount"}) +table.SetFooter([]string{"", "", "Total", "$146.93"}) // Add Footer +table.SetBorder(false) // Set Border to false +table.AppendBulk(data) // Add Bulk Data +table.Render() +``` + +##### Output 2 +``` + + DATE | DESCRIPTION | CV2 | AMOUNT ++----------+--------------------------+-------+---------+ + 1/1/2014 | Domain name | 2233 | $10.98 + 1/1/2014 | January Hosting | 2233 | $54.95 + 1/4/2014 | February Hosting | 2233 | $51.00 + 1/4/2014 | February Extra Bandwidth | 2233 | $30.00 ++----------+--------------------------+-------+---------+ + TOTAL | $146 93 + +-------+---------+ + +``` + + +#### Example 3 - CSV +```go +table, _ := tablewriter.NewCSV(os.Stdout, "test_info.csv", true) +table.SetAlignment(tablewriter.ALIGN_LEFT) // Set Alignment +table.Render() +``` + +##### Output 3 +``` ++----------+--------------+------+-----+---------+----------------+ +| FIELD | TYPE | NULL | KEY | DEFAULT | EXTRA | ++----------+--------------+------+-----+---------+----------------+ +| user_id | smallint(5) | NO | PRI | NULL | auto_increment | +| username | varchar(10) | NO | | NULL | | +| password | varchar(100) | NO | | NULL | | ++----------+--------------+------+-----+---------+----------------+ +``` + +#### Example 4 - Custom Separator +```go +table, _ := tablewriter.NewCSV(os.Stdout, "test.csv", true) +table.SetRowLine(true) // Enable row line + +// Change table lines +table.SetCenterSeparator("*") +table.SetColumnSeparator("‡") +table.SetRowSeparator("-") + +table.SetAlignment(tablewriter.ALIGN_LEFT) +table.Render() +``` + +##### Output 4 +``` +*------------*-----------*---------* +╪ FIRST NAME ╪ LAST NAME ╪ SSN ╪ +*------------*-----------*---------* +╪ John ╪ Barry ╪ 123456 ╪ +*------------*-----------*---------* +╪ Kathy ╪ Smith ╪ 687987 ╪ +*------------*-----------*---------* +╪ Bob ╪ McCornick ╪ 3979870 ╪ +*------------*-----------*---------* +``` + 
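+#### Rendering to any io.Writer (sketch)
+
+The examples here all print to `os.Stdout`, but `NewWriter` accepts any
+`io.Writer` (see the feature list above). As a minimal sketch, not taken from
+the upstream README, the same table can be rendered into a `bytes.Buffer` and
+reused as a plain string:
+
+```go
+package main
+
+import (
+    "bytes"
+    "fmt"
+
+    "github.com/olekukonko/tablewriter"
+)
+
+func main() {
+    buf := &bytes.Buffer{}              // any io.Writer works here
+    table := tablewriter.NewWriter(buf) // render into the buffer instead of os.Stdout
+    table.SetHeader([]string{"Name", "Sign", "Rating"})
+    table.Append([]string{"A", "The Good", "500"})
+    table.Render()
+
+    fmt.Print(buf.String()) // the rendered table as a string
+}
+```
+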
+#### Example 5 - Markdown Format +```go +data := [][]string{ + []string{"1/1/2014", "Domain name", "2233", "$10.98"}, + []string{"1/1/2014", "January Hosting", "2233", "$54.95"}, + []string{"1/4/2014", "February Hosting", "2233", "$51.00"}, + []string{"1/4/2014", "February Extra Bandwidth", "2233", "$30.00"}, +} + +table := tablewriter.NewWriter(os.Stdout) +table.SetHeader([]string{"Date", "Description", "CV2", "Amount"}) +table.SetBorders(tablewriter.Border{Left: true, Top: false, Right: true, Bottom: false}) +table.SetCenterSeparator("|") +table.AppendBulk(data) // Add Bulk Data +table.Render() +``` + +##### Output 5 +``` +| DATE | DESCRIPTION | CV2 | AMOUNT | +|----------|--------------------------|------|--------| +| 1/1/2014 | Domain name | 2233 | $10.98 | +| 1/1/2014 | January Hosting | 2233 | $54.95 | +| 1/4/2014 | February Hosting | 2233 | $51.00 | +| 1/4/2014 | February Extra Bandwidth | 2233 | $30.00 | +``` + +#### Example 6 - Identical cells merging +```go +data := [][]string{ + []string{"1/1/2014", "Domain name", "1234", "$10.98"}, + []string{"1/1/2014", "January Hosting", "2345", "$54.95"}, + []string{"1/4/2014", "February Hosting", "3456", "$51.00"}, + []string{"1/4/2014", "February Extra Bandwidth", "4567", "$30.00"}, +} + +table := tablewriter.NewWriter(os.Stdout) +table.SetHeader([]string{"Date", "Description", "CV2", "Amount"}) +table.SetFooter([]string{"", "", "Total", "$146.93"}) +table.SetAutoMergeCells(true) +table.SetRowLine(true) +table.AppendBulk(data) +table.Render() +``` + +##### Output 6 +``` ++----------+--------------------------+-------+---------+ +| DATE | DESCRIPTION | CV2 | AMOUNT | ++----------+--------------------------+-------+---------+ +| 1/1/2014 | Domain name | 1234 | $10.98 | ++ +--------------------------+-------+---------+ +| | January Hosting | 2345 | $54.95 | ++----------+--------------------------+-------+---------+ +| 1/4/2014 | February Hosting | 3456 | $51.00 | ++ +--------------------------+-------+---------+ +| | February Extra Bandwidth | 4567 | $30.00 | ++----------+--------------------------+-------+---------+ +| TOTAL | $146 93 | ++----------+--------------------------+-------+---------+ +``` + + +#### Table with color +```go +data := [][]string{ + []string{"1/1/2014", "Domain name", "2233", "$10.98"}, + []string{"1/1/2014", "January Hosting", "2233", "$54.95"}, + []string{"1/4/2014", "February Hosting", "2233", "$51.00"}, + []string{"1/4/2014", "February Extra Bandwidth", "2233", "$30.00"}, +} + +table := tablewriter.NewWriter(os.Stdout) +table.SetHeader([]string{"Date", "Description", "CV2", "Amount"}) +table.SetFooter([]string{"", "", "Total", "$146.93"}) // Add Footer +table.SetBorder(false) // Set Border to false + +table.SetHeaderColor(tablewriter.Colors{tablewriter.Bold, tablewriter.BgGreenColor}, + tablewriter.Colors{tablewriter.FgHiRedColor, tablewriter.Bold, tablewriter.BgBlackColor}, + tablewriter.Colors{tablewriter.BgRedColor, tablewriter.FgWhiteColor}, + tablewriter.Colors{tablewriter.BgCyanColor, tablewriter.FgWhiteColor}) + +table.SetColumnColor(tablewriter.Colors{tablewriter.Bold, tablewriter.FgHiBlackColor}, + tablewriter.Colors{tablewriter.Bold, tablewriter.FgHiRedColor}, + tablewriter.Colors{tablewriter.Bold, tablewriter.FgHiBlackColor}, + tablewriter.Colors{tablewriter.Bold, tablewriter.FgBlackColor}) + +table.SetFooterColor(tablewriter.Colors{}, tablewriter.Colors{}, + tablewriter.Colors{tablewriter.Bold}, + tablewriter.Colors{tablewriter.FgHiRedColor}) + +table.AppendBulk(data) +table.Render() +``` + +#### Table 
with color Output +![Table with Color](https://cloud.githubusercontent.com/assets/6460392/21101956/bbc7b356-c0a1-11e6-9f36-dba694746efc.png) + +#### Example 6 - Set table caption +```go +data := [][]string{ + []string{"A", "The Good", "500"}, + []string{"B", "The Very very Bad Man", "288"}, + []string{"C", "The Ugly", "120"}, + []string{"D", "The Gopher", "800"}, +} + +table := tablewriter.NewWriter(os.Stdout) +table.SetHeader([]string{"Name", "Sign", "Rating"}) +table.SetCaption(true, "Movie ratings.") + +for _, v := range data { + table.Append(v) +} +table.Render() // Send output +``` + +Note: Caption text will wrap with total width of rendered table. + +##### Output 6 +``` ++------+-----------------------+--------+ +| NAME | SIGN | RATING | ++------+-----------------------+--------+ +| A | The Good | 500 | +| B | The Very very Bad Man | 288 | +| C | The Ugly | 120 | +| D | The Gopher | 800 | ++------+-----------------------+--------+ +Movie ratings. +``` + +#### TODO +- ~~Import Directly from CSV~~ - `done` +- ~~Support for `SetFooter`~~ - `done` +- ~~Support for `SetBorder`~~ - `done` +- ~~Support table with uneven rows~~ - `done` +- Support custom alignment +- General Improvement & Optimisation +- `NewHTML` Parse table from HTML + + diff --git a/vendor/github.com/olekukonko/tablewriter/csv.go b/vendor/github.com/olekukonko/tablewriter/csv.go new file mode 100644 index 000000000..98878303b --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/csv.go @@ -0,0 +1,52 @@ +// Copyright 2014 Oleku Konko All rights reserved. +// Use of this source code is governed by a MIT +// license that can be found in the LICENSE file. + +// This module is a Table Writer API for the Go Programming Language. +// The protocols were written in pure Go and works on windows and unix systems + +package tablewriter + +import ( + "encoding/csv" + "io" + "os" +) + +// Start A new table by importing from a CSV file +// Takes io.Writer and csv File name +func NewCSV(writer io.Writer, fileName string, hasHeader bool) (*Table, error) { + file, err := os.Open(fileName) + if err != nil { + return &Table{}, err + } + defer file.Close() + csvReader := csv.NewReader(file) + t, err := NewCSVReader(writer, csvReader, hasHeader) + return t, err +} + +// Start a New Table Writer with csv.Reader +// This enables customisation such as reader.Comma = ';' +// See http://golang.org/src/pkg/encoding/csv/reader.go?s=3213:3671#L94 +func NewCSVReader(writer io.Writer, csvReader *csv.Reader, hasHeader bool) (*Table, error) { + t := NewWriter(writer) + if hasHeader { + // Read the first row + headers, err := csvReader.Read() + if err != nil { + return &Table{}, err + } + t.SetHeader(headers) + } + for { + record, err := csvReader.Read() + if err == io.EOF { + break + } else if err != nil { + return &Table{}, err + } + t.Append(record) + } + return t, nil +} diff --git a/vendor/github.com/olekukonko/tablewriter/table.go b/vendor/github.com/olekukonko/tablewriter/table.go new file mode 100644 index 000000000..8b62c628c --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/table.go @@ -0,0 +1,798 @@ +// Copyright 2014 Oleku Konko All rights reserved. +// Use of this source code is governed by a MIT +// license that can be found in the LICENSE file. + +// This module is a Table Writer API for the Go Programming Language. 
+// The protocols were written in pure Go and works on windows and unix systems + +// Create & Generate text based table +package tablewriter + +import ( + "bytes" + "fmt" + "io" + "regexp" + "strings" +) + +const ( + MAX_ROW_WIDTH = 30 +) + +const ( + CENTER = "+" + ROW = "-" + COLUMN = "|" + SPACE = " " + NEWLINE = "\n" +) + +const ( + ALIGN_DEFAULT = iota + ALIGN_CENTER + ALIGN_RIGHT + ALIGN_LEFT +) + +var ( + decimal = regexp.MustCompile(`^-*\d*\.?\d*$`) + percent = regexp.MustCompile(`^-*\d*\.?\d*$%$`) +) + +type Border struct { + Left bool + Right bool + Top bool + Bottom bool +} + +type Table struct { + out io.Writer + rows [][]string + lines [][][]string + cs map[int]int + rs map[int]int + headers []string + footers []string + caption bool + captionText string + autoFmt bool + autoWrap bool + mW int + pCenter string + pRow string + pColumn string + tColumn int + tRow int + hAlign int + fAlign int + align int + newLine string + rowLine bool + autoMergeCells bool + hdrLine bool + borders Border + colSize int + headerParams []string + columnsParams []string + footerParams []string + columnsAlign []int +} + +// Start New Table +// Take io.Writer Directly +func NewWriter(writer io.Writer) *Table { + t := &Table{ + out: writer, + rows: [][]string{}, + lines: [][][]string{}, + cs: make(map[int]int), + rs: make(map[int]int), + headers: []string{}, + footers: []string{}, + caption: false, + captionText: "Table caption.", + autoFmt: true, + autoWrap: true, + mW: MAX_ROW_WIDTH, + pCenter: CENTER, + pRow: ROW, + pColumn: COLUMN, + tColumn: -1, + tRow: -1, + hAlign: ALIGN_DEFAULT, + fAlign: ALIGN_DEFAULT, + align: ALIGN_DEFAULT, + newLine: NEWLINE, + rowLine: false, + hdrLine: true, + borders: Border{Left: true, Right: true, Bottom: true, Top: true}, + colSize: -1, + headerParams: []string{}, + columnsParams: []string{}, + footerParams: []string{}, + columnsAlign: []int{}} + return t +} + +// Render table output +func (t *Table) Render() { + if t.borders.Top { + t.printLine(true) + } + t.printHeading() + if t.autoMergeCells { + t.printRowsMergeCells() + } else { + t.printRows() + } + if !t.rowLine && t.borders.Bottom { + t.printLine(true) + } + t.printFooter() + + if t.caption { + t.printCaption() + } +} + +// Set table header +func (t *Table) SetHeader(keys []string) { + t.colSize = len(keys) + for i, v := range keys { + t.parseDimension(v, i, -1) + t.headers = append(t.headers, v) + } +} + +// Set table Footer +func (t *Table) SetFooter(keys []string) { + //t.colSize = len(keys) + for i, v := range keys { + t.parseDimension(v, i, -1) + t.footers = append(t.footers, v) + } +} + +// Set table Caption +func (t *Table) SetCaption(caption bool, captionText ...string) { + t.caption = caption + if len(captionText) == 1 { + t.captionText = captionText[0] + } +} + +// Turn header autoformatting on/off. Default is on (true). +func (t *Table) SetAutoFormatHeaders(auto bool) { + t.autoFmt = auto +} + +// Turn automatic multiline text adjustment on/off. Default is on (true). 
+func (t *Table) SetAutoWrapText(auto bool) { + t.autoWrap = auto +} + +// Set the Default column width +func (t *Table) SetColWidth(width int) { + t.mW = width +} + +// Set the minimal width for a column +func (t *Table) SetColMinWidth(column int, width int) { + t.cs[column] = width +} + +// Set the Column Separator +func (t *Table) SetColumnSeparator(sep string) { + t.pColumn = sep +} + +// Set the Row Separator +func (t *Table) SetRowSeparator(sep string) { + t.pRow = sep +} + +// Set the center Separator +func (t *Table) SetCenterSeparator(sep string) { + t.pCenter = sep +} + +// Set Header Alignment +func (t *Table) SetHeaderAlignment(hAlign int) { + t.hAlign = hAlign +} + +// Set Footer Alignment +func (t *Table) SetFooterAlignment(fAlign int) { + t.fAlign = fAlign +} + +// Set Table Alignment +func (t *Table) SetAlignment(align int) { + t.align = align +} + +func (t *Table) SetColumnAlignment(keys []int) { + for _, v := range keys { + switch v { + case ALIGN_CENTER: + break + case ALIGN_LEFT: + break + case ALIGN_RIGHT: + break + default: + v = ALIGN_DEFAULT + } + t.columnsAlign = append(t.columnsAlign, v) + } +} + +// Set New Line +func (t *Table) SetNewLine(nl string) { + t.newLine = nl +} + +// Set Header Line +// This would enable / disable a line after the header +func (t *Table) SetHeaderLine(line bool) { + t.hdrLine = line +} + +// Set Row Line +// This would enable / disable a line on each row of the table +func (t *Table) SetRowLine(line bool) { + t.rowLine = line +} + +// Set Auto Merge Cells +// This would enable / disable the merge of cells with identical values +func (t *Table) SetAutoMergeCells(auto bool) { + t.autoMergeCells = auto +} + +// Set Table Border +// This would enable / disable line around the table +func (t *Table) SetBorder(border bool) { + t.SetBorders(Border{border, border, border, border}) +} + +func (t *Table) SetBorders(border Border) { + t.borders = border +} + +// Append row to table +func (t *Table) Append(row []string) { + rowSize := len(t.headers) + if rowSize > t.colSize { + t.colSize = rowSize + } + + n := len(t.lines) + line := [][]string{} + for i, v := range row { + + // Detect string width + // Detect String height + // Break strings into words + out := t.parseDimension(v, i, n) + + // Append broken words + line = append(line, out) + } + t.lines = append(t.lines, line) +} + +// Allow Support for Bulk Append +// Eliminates repeated for loops +func (t *Table) AppendBulk(rows [][]string) { + for _, row := range rows { + t.Append(row) + } +} + +// NumLines to get the number of lines +func (t *Table) NumLines() int { + return len(t.lines) +} + +// Clear rows +func (t *Table) ClearRows() { + t.lines = [][][]string{} +} + +// Clear footer +func (t *Table) ClearFooter() { + t.footers = []string{} +} + +// Print line based on row width +func (t *Table) printLine(nl bool) { + fmt.Fprint(t.out, t.pCenter) + for i := 0; i < len(t.cs); i++ { + v := t.cs[i] + fmt.Fprintf(t.out, "%s%s%s%s", + t.pRow, + strings.Repeat(string(t.pRow), v), + t.pRow, + t.pCenter) + } + if nl { + fmt.Fprint(t.out, t.newLine) + } +} + +// Print line based on row width with our without cell separator +func (t *Table) printLineOptionalCellSeparators(nl bool, displayCellSeparator []bool) { + fmt.Fprint(t.out, t.pCenter) + for i := 0; i < len(t.cs); i++ { + v := t.cs[i] + if i > len(displayCellSeparator) || displayCellSeparator[i] { + // Display the cell separator + fmt.Fprintf(t.out, "%s%s%s%s", + t.pRow, + strings.Repeat(string(t.pRow), v), + t.pRow, + t.pCenter) + } else { + // 
Don't display the cell separator for this cell + fmt.Fprintf(t.out, "%s%s", + strings.Repeat(" ", v+2), + t.pCenter) + } + } + if nl { + fmt.Fprint(t.out, t.newLine) + } +} + +// Return the PadRight function if align is left, PadLeft if align is right, +// and Pad by default +func pad(align int) func(string, string, int) string { + padFunc := Pad + switch align { + case ALIGN_LEFT: + padFunc = PadRight + case ALIGN_RIGHT: + padFunc = PadLeft + } + return padFunc +} + +// Print heading information +func (t *Table) printHeading() { + // Check if headers is available + if len(t.headers) < 1 { + return + } + + // Check if border is set + // Replace with space if not set + fmt.Fprint(t.out, ConditionString(t.borders.Left, t.pColumn, SPACE)) + + // Identify last column + end := len(t.cs) - 1 + + // Get pad function + padFunc := pad(t.hAlign) + + // Checking for ANSI escape sequences for header + is_esc_seq := false + if len(t.headerParams) > 0 { + is_esc_seq = true + } + + // Print Heading column + for i := 0; i <= end; i++ { + v := t.cs[i] + h := "" + if i < len(t.headers) { + h = t.headers[i] + } + if t.autoFmt { + h = Title(h) + } + pad := ConditionString((i == end && !t.borders.Left), SPACE, t.pColumn) + + if is_esc_seq { + fmt.Fprintf(t.out, " %s %s", + format(padFunc(h, SPACE, v), + t.headerParams[i]), pad) + } else { + fmt.Fprintf(t.out, " %s %s", + padFunc(h, SPACE, v), + pad) + } + + } + // Next line + fmt.Fprint(t.out, t.newLine) + if t.hdrLine { + t.printLine(true) + } +} + +// Print heading information +func (t *Table) printFooter() { + // Check if headers is available + if len(t.footers) < 1 { + return + } + + // Only print line if border is not set + if !t.borders.Bottom { + t.printLine(true) + } + // Check if border is set + // Replace with space if not set + fmt.Fprint(t.out, ConditionString(t.borders.Bottom, t.pColumn, SPACE)) + + // Identify last column + end := len(t.cs) - 1 + + // Get pad function + padFunc := pad(t.fAlign) + + // Checking for ANSI escape sequences for header + is_esc_seq := false + if len(t.footerParams) > 0 { + is_esc_seq = true + } + + // Print Heading column + for i := 0; i <= end; i++ { + v := t.cs[i] + f := t.footers[i] + if t.autoFmt { + f = Title(f) + } + pad := ConditionString((i == end && !t.borders.Top), SPACE, t.pColumn) + + if len(t.footers[i]) == 0 { + pad = SPACE + } + + if is_esc_seq { + fmt.Fprintf(t.out, " %s %s", + format(padFunc(f, SPACE, v), + t.footerParams[i]), pad) + } else { + fmt.Fprintf(t.out, " %s %s", + padFunc(f, SPACE, v), + pad) + } + + //fmt.Fprintf(t.out, " %s %s", + // padFunc(f, SPACE, v), + // pad) + } + // Next line + fmt.Fprint(t.out, t.newLine) + //t.printLine(true) + + hasPrinted := false + + for i := 0; i <= end; i++ { + v := t.cs[i] + pad := t.pRow + center := t.pCenter + length := len(t.footers[i]) + + if length > 0 { + hasPrinted = true + } + + // Set center to be space if length is 0 + if length == 0 && !t.borders.Right { + center = SPACE + } + + // Print first junction + if i == 0 { + fmt.Fprint(t.out, center) + } + + // Pad With space of length is 0 + if length == 0 { + pad = SPACE + } + // Ignore left space of it has printed before + if hasPrinted || t.borders.Left { + pad = t.pRow + center = t.pCenter + } + + // Change Center start position + if center == SPACE { + if i < end && len(t.footers[i+1]) != 0 { + center = t.pCenter + } + } + + // Print the footer + fmt.Fprintf(t.out, "%s%s%s%s", + pad, + strings.Repeat(string(pad), v), + pad, + center) + + } + + fmt.Fprint(t.out, t.newLine) +} + +// Print caption 
text +func (t Table) printCaption() { + width := t.getTableWidth() + paragraph, _ := WrapString(t.captionText, width) + for linecount := 0; linecount < len(paragraph); linecount++ { + fmt.Fprintln(t.out, paragraph[linecount]) + } +} + +// Calculate the total number of characters in a row +func (t Table) getTableWidth() int { + var chars int + for _, v := range t.cs { + chars += v + } + + // Add chars, spaces, seperators to calculate the total width of the table. + // ncols := t.colSize + // spaces := ncols * 2 + // seps := ncols + 1 + + return (chars + (3 * t.colSize) + 2) +} + +func (t Table) printRows() { + for i, lines := range t.lines { + t.printRow(lines, i) + } +} + +func (t *Table) fillAlignment(num int) { + if len(t.columnsAlign) < num { + t.columnsAlign = make([]int, num) + for i := range t.columnsAlign { + t.columnsAlign[i] = t.align + } + } +} + +// Print Row Information +// Adjust column alignment based on type + +func (t *Table) printRow(columns [][]string, colKey int) { + // Get Maximum Height + max := t.rs[colKey] + total := len(columns) + + // TODO Fix uneven col size + // if total < t.colSize { + // for n := t.colSize - total; n < t.colSize ; n++ { + // columns = append(columns, []string{SPACE}) + // t.cs[n] = t.mW + // } + //} + + // Pad Each Height + // pads := []int{} + pads := []int{} + + // Checking for ANSI escape sequences for columns + is_esc_seq := false + if len(t.columnsParams) > 0 { + is_esc_seq = true + } + t.fillAlignment(total) + + for i, line := range columns { + length := len(line) + pad := max - length + pads = append(pads, pad) + for n := 0; n < pad; n++ { + columns[i] = append(columns[i], " ") + } + } + //fmt.Println(max, "\n") + for x := 0; x < max; x++ { + for y := 0; y < total; y++ { + + // Check if border is set + fmt.Fprint(t.out, ConditionString((!t.borders.Left && y == 0), SPACE, t.pColumn)) + + fmt.Fprintf(t.out, SPACE) + str := columns[y][x] + + // Embedding escape sequence with column value + if is_esc_seq { + str = format(str, t.columnsParams[y]) + } + + // This would print alignment + // Default alignment would use multiple configuration + switch t.columnsAlign[y] { + case ALIGN_CENTER: // + fmt.Fprintf(t.out, "%s", Pad(str, SPACE, t.cs[y])) + case ALIGN_RIGHT: + fmt.Fprintf(t.out, "%s", PadLeft(str, SPACE, t.cs[y])) + case ALIGN_LEFT: + fmt.Fprintf(t.out, "%s", PadRight(str, SPACE, t.cs[y])) + default: + if decimal.MatchString(strings.TrimSpace(str)) || percent.MatchString(strings.TrimSpace(str)) { + fmt.Fprintf(t.out, "%s", PadLeft(str, SPACE, t.cs[y])) + } else { + fmt.Fprintf(t.out, "%s", PadRight(str, SPACE, t.cs[y])) + + // TODO Custom alignment per column + //if max == 1 || pads[y] > 0 { + // fmt.Fprintf(t.out, "%s", Pad(str, SPACE, t.cs[y])) + //} else { + // fmt.Fprintf(t.out, "%s", PadRight(str, SPACE, t.cs[y])) + //} + + } + } + fmt.Fprintf(t.out, SPACE) + } + // Check if border is set + // Replace with space if not set + fmt.Fprint(t.out, ConditionString(t.borders.Left, t.pColumn, SPACE)) + fmt.Fprint(t.out, t.newLine) + } + + if t.rowLine { + t.printLine(true) + } +} + +// Print the rows of the table and merge the cells that are identical +func (t *Table) printRowsMergeCells() { + var previousLine []string + var displayCellBorder []bool + var tmpWriter bytes.Buffer + for i, lines := range t.lines { + // We store the display of the current line in a tmp writer, as we need to know which border needs to be print above + previousLine, displayCellBorder = t.printRowMergeCells(&tmpWriter, lines, i, previousLine) + if i > 0 { //We 
don't need to print borders above first line + if t.rowLine { + t.printLineOptionalCellSeparators(true, displayCellBorder) + } + } + tmpWriter.WriteTo(t.out) + } + //Print the end of the table + if t.rowLine { + t.printLine(true) + } +} + +// Print Row Information to a writer and merge identical cells. +// Adjust column alignment based on type + +func (t *Table) printRowMergeCells(writer io.Writer, columns [][]string, colKey int, previousLine []string) ([]string, []bool) { + // Get Maximum Height + max := t.rs[colKey] + total := len(columns) + + // Pad Each Height + pads := []int{} + + for i, line := range columns { + length := len(line) + pad := max - length + pads = append(pads, pad) + for n := 0; n < pad; n++ { + columns[i] = append(columns[i], " ") + } + } + + var displayCellBorder []bool + t.fillAlignment(total) + for x := 0; x < max; x++ { + for y := 0; y < total; y++ { + + // Check if border is set + fmt.Fprint(writer, ConditionString((!t.borders.Left && y == 0), SPACE, t.pColumn)) + + fmt.Fprintf(writer, SPACE) + + str := columns[y][x] + + if t.autoMergeCells { + //Store the full line to merge mutli-lines cells + fullLine := strings.Join(columns[y], " ") + if len(previousLine) > y && fullLine == previousLine[y] && fullLine != "" { + // If this cell is identical to the one above but not empty, we don't display the border and keep the cell empty. + displayCellBorder = append(displayCellBorder, false) + str = "" + } else { + // First line or different content, keep the content and print the cell border + displayCellBorder = append(displayCellBorder, true) + } + } + + // This would print alignment + // Default alignment would use multiple configuration + switch t.columnsAlign[y] { + case ALIGN_CENTER: // + fmt.Fprintf(writer, "%s", Pad(str, SPACE, t.cs[y])) + case ALIGN_RIGHT: + fmt.Fprintf(writer, "%s", PadLeft(str, SPACE, t.cs[y])) + case ALIGN_LEFT: + fmt.Fprintf(writer, "%s", PadRight(str, SPACE, t.cs[y])) + default: + if decimal.MatchString(strings.TrimSpace(str)) || percent.MatchString(strings.TrimSpace(str)) { + fmt.Fprintf(writer, "%s", PadLeft(str, SPACE, t.cs[y])) + } else { + fmt.Fprintf(writer, "%s", PadRight(str, SPACE, t.cs[y])) + } + } + fmt.Fprintf(writer, SPACE) + } + // Check if border is set + // Replace with space if not set + fmt.Fprint(writer, ConditionString(t.borders.Left, t.pColumn, SPACE)) + fmt.Fprint(writer, t.newLine) + } + + //The new previous line is the current one + previousLine = make([]string, total) + for y := 0; y < total; y++ { + previousLine[y] = strings.Join(columns[y], " ") //Store the full line for multi-lines cells + } + //Returns the newly added line and wether or not a border should be displayed above. 
+ return previousLine, displayCellBorder +} + +func (t *Table) parseDimension(str string, colKey, rowKey int) []string { + var ( + raw []string + max int + ) + w := DisplayWidth(str) + // Calculate Width + // Check if with is grater than maximum width + if w > t.mW { + w = t.mW + } + + // Check if width exists + v, ok := t.cs[colKey] + if !ok || v < w || v == 0 { + t.cs[colKey] = w + } + + if rowKey == -1 { + return raw + } + // Calculate Height + if t.autoWrap { + raw, _ = WrapString(str, t.cs[colKey]) + } else { + raw = getLines(str) + } + + for _, line := range raw { + if w := DisplayWidth(line); w > max { + max = w + } + } + + // Make sure the with is the same length as maximum word + // Important for cases where the width is smaller than maxu word + if max > t.cs[colKey] { + t.cs[colKey] = max + } + + h := len(raw) + v, ok = t.rs[rowKey] + + if !ok || v < h || v == 0 { + t.rs[rowKey] = h + } + //fmt.Printf("Raw %+v %d\n", raw, len(raw)) + return raw +} diff --git a/vendor/github.com/olekukonko/tablewriter/table_with_color.go b/vendor/github.com/olekukonko/tablewriter/table_with_color.go new file mode 100644 index 000000000..5a4a53ec2 --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/table_with_color.go @@ -0,0 +1,134 @@ +package tablewriter + +import ( + "fmt" + "strconv" + "strings" +) + +const ESC = "\033" +const SEP = ";" + +const ( + BgBlackColor int = iota + 40 + BgRedColor + BgGreenColor + BgYellowColor + BgBlueColor + BgMagentaColor + BgCyanColor + BgWhiteColor +) + +const ( + FgBlackColor int = iota + 30 + FgRedColor + FgGreenColor + FgYellowColor + FgBlueColor + FgMagentaColor + FgCyanColor + FgWhiteColor +) + +const ( + BgHiBlackColor int = iota + 100 + BgHiRedColor + BgHiGreenColor + BgHiYellowColor + BgHiBlueColor + BgHiMagentaColor + BgHiCyanColor + BgHiWhiteColor +) + +const ( + FgHiBlackColor int = iota + 90 + FgHiRedColor + FgHiGreenColor + FgHiYellowColor + FgHiBlueColor + FgHiMagentaColor + FgHiCyanColor + FgHiWhiteColor +) + +const ( + Normal = 0 + Bold = 1 + UnderlineSingle = 4 + Italic +) + +type Colors []int + +func startFormat(seq string) string { + return fmt.Sprintf("%s[%sm", ESC, seq) +} + +func stopFormat() string { + return fmt.Sprintf("%s[%dm", ESC, Normal) +} + +// Making the SGR (Select Graphic Rendition) sequence. 
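+// As an illustrative note (not part of the upstream source): for instance,
+// makeSequence([]int{Bold, FgRedColor}) returns "1;31", and format("text",
+// []int{Bold, FgRedColor}) wraps it as "\033[1;31mtext\033[0m", which ANSI
+// terminals render as bold red text.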
+func makeSequence(codes []int) string { + codesInString := []string{} + for _, code := range codes { + codesInString = append(codesInString, strconv.Itoa(code)) + } + return strings.Join(codesInString, SEP) +} + +// Adding ANSI escape sequences before and after string +func format(s string, codes interface{}) string { + var seq string + + switch v := codes.(type) { + + case string: + seq = v + case []int: + seq = makeSequence(v) + default: + return s + } + + if len(seq) == 0 { + return s + } + return startFormat(seq) + s + stopFormat() +} + +// Adding header colors (ANSI codes) +func (t *Table) SetHeaderColor(colors ...Colors) { + if t.colSize != len(colors) { + panic("Number of header colors must be equal to number of headers.") + } + for i := 0; i < len(colors); i++ { + t.headerParams = append(t.headerParams, makeSequence(colors[i])) + } +} + +// Adding column colors (ANSI codes) +func (t *Table) SetColumnColor(colors ...Colors) { + if t.colSize != len(colors) { + panic("Number of column colors must be equal to number of headers.") + } + for i := 0; i < len(colors); i++ { + t.columnsParams = append(t.columnsParams, makeSequence(colors[i])) + } +} + +// Adding column colors (ANSI codes) +func (t *Table) SetFooterColor(colors ...Colors) { + if len(t.footers) != len(colors) { + panic("Number of footer colors must be equal to number of footer.") + } + for i := 0; i < len(colors); i++ { + t.footerParams = append(t.footerParams, makeSequence(colors[i])) + } +} + +func Color(colors ...int) []int { + return colors +} diff --git a/vendor/github.com/olekukonko/tablewriter/test.csv b/vendor/github.com/olekukonko/tablewriter/test.csv new file mode 100644 index 000000000..1609327e9 --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/test.csv @@ -0,0 +1,4 @@ +first_name,last_name,ssn +John,Barry,123456 +Kathy,Smith,687987 +Bob,McCornick,3979870 \ No newline at end of file diff --git a/vendor/github.com/olekukonko/tablewriter/test_info.csv b/vendor/github.com/olekukonko/tablewriter/test_info.csv new file mode 100644 index 000000000..e4c40e983 --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/test_info.csv @@ -0,0 +1,4 @@ +Field,Type,Null,Key,Default,Extra +user_id,smallint(5),NO,PRI,NULL,auto_increment +username,varchar(10),NO,,NULL, +password,varchar(100),NO,,NULL, \ No newline at end of file diff --git a/vendor/github.com/olekukonko/tablewriter/util.go b/vendor/github.com/olekukonko/tablewriter/util.go new file mode 100644 index 000000000..2deefbc52 --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/util.go @@ -0,0 +1,72 @@ +// Copyright 2014 Oleku Konko All rights reserved. +// Use of this source code is governed by a MIT +// license that can be found in the LICENSE file. + +// This module is a Table Writer API for the Go Programming Language. +// The protocols were written in pure Go and works on windows and unix systems + +package tablewriter + +import ( + "math" + "regexp" + "strings" + + "github.com/mattn/go-runewidth" +) + +var ansi = regexp.MustCompile("\033\\[(?:[0-9]{1,3}(?:;[0-9]{1,3})*)?[m|K]") + +func DisplayWidth(str string) int { + return runewidth.StringWidth(ansi.ReplaceAllLiteralString(str, "")) +} + +// Simple Condition for string +// Returns value based on condition +func ConditionString(cond bool, valid, inValid string) string { + if cond { + return valid + } + return inValid +} + +// Format Table Header +// Replace _ , . 
and spaces +func Title(name string) string { + name = strings.Replace(name, "_", " ", -1) + name = strings.Replace(name, ".", " ", -1) + name = strings.TrimSpace(name) + return strings.ToUpper(name) +} + +// Pad String +// Attempts to play string in the center +func Pad(s, pad string, width int) string { + gap := width - DisplayWidth(s) + if gap > 0 { + gapLeft := int(math.Ceil(float64(gap / 2))) + gapRight := gap - gapLeft + return strings.Repeat(string(pad), gapLeft) + s + strings.Repeat(string(pad), gapRight) + } + return s +} + +// Pad String Right position +// This would pace string at the left side fo the screen +func PadRight(s, pad string, width int) string { + gap := width - DisplayWidth(s) + if gap > 0 { + return s + strings.Repeat(string(pad), gap) + } + return s +} + +// Pad String Left position +// This would pace string at the right side fo the screen +func PadLeft(s, pad string, width int) string { + gap := width - DisplayWidth(s) + if gap > 0 { + return strings.Repeat(string(pad), gap) + s + } + return s +} diff --git a/vendor/github.com/olekukonko/tablewriter/wrap.go b/vendor/github.com/olekukonko/tablewriter/wrap.go new file mode 100644 index 000000000..9ef69e904 --- /dev/null +++ b/vendor/github.com/olekukonko/tablewriter/wrap.go @@ -0,0 +1,104 @@ +// Copyright 2014 Oleku Konko All rights reserved. +// Use of this source code is governed by a MIT +// license that can be found in the LICENSE file. + +// This module is a Table Writer API for the Go Programming Language. +// The protocols were written in pure Go and works on windows and unix systems + +package tablewriter + +import ( + "math" + "strings" + + "github.com/mattn/go-runewidth" +) + +var ( + nl = "\n" + sp = " " +) + +const defaultPenalty = 1e5 + +// Wrap wraps s into a paragraph of lines of length lim, with minimal +// raggedness. +func WrapString(s string, lim int) ([]string, int) { + words := strings.Split(strings.Replace(s, nl, sp, -1), sp) + var lines []string + max := 0 + for _, v := range words { + max = runewidth.StringWidth(v) + if max > lim { + lim = max + } + } + for _, line := range WrapWords(words, 1, lim, defaultPenalty) { + lines = append(lines, strings.Join(line, sp)) + } + return lines, lim +} + +// WrapWords is the low-level line-breaking algorithm, useful if you need more +// control over the details of the text wrapping process. For most uses, +// WrapString will be sufficient and more convenient. +// +// WrapWords splits a list of words into lines with minimal "raggedness", +// treating each rune as one unit, accounting for spc units between adjacent +// words on each line, and attempting to limit lines to lim units. Raggedness +// is the total error over all lines, where error is the square of the +// difference of the length of the line and lim. Too-long lines (which only +// happen when a single word is longer than lim units) have pen penalty units +// added to the error. 
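+//
+// As an illustrative note (not part of the upstream source): with words =
+// ["aaa", "bb", "cc", "ddddd"], spc = 1, lim = 6 and a large pen, the
+// minimal-raggedness split is ["aaa"], ["bb", "cc"], ["ddddd"] with total
+// error (6-3)^2 + (6-5)^2 = 10 (the final line is not charged), whereas the
+// greedy split ["aaa", "bb"], ["cc"], ["ddddd"] would cost (6-2)^2 = 16.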
+func WrapWords(words []string, spc, lim, pen int) [][]string {
+	n := len(words)
+
+	length := make([][]int, n)
+	for i := 0; i < n; i++ {
+		length[i] = make([]int, n)
+		length[i][i] = runewidth.StringWidth(words[i])
+		for j := i + 1; j < n; j++ {
+			length[i][j] = length[i][j-1] + spc + runewidth.StringWidth(words[j])
+		}
+	}
+	nbrk := make([]int, n)
+	cost := make([]int, n)
+	for i := range cost {
+		cost[i] = math.MaxInt32
+	}
+	for i := n - 1; i >= 0; i-- {
+		if length[i][n-1] <= lim {
+			cost[i] = 0
+			nbrk[i] = n
+		} else {
+			for j := i + 1; j < n; j++ {
+				d := lim - length[i][j-1]
+				c := d*d + cost[j]
+				if length[i][j-1] > lim {
+					c += pen // too-long lines get a worse penalty
+				}
+				if c < cost[i] {
+					cost[i] = c
+					nbrk[i] = j
+				}
+			}
+		}
+	}
+	var lines [][]string
+	i := 0
+	for i < n {
+		lines = append(lines, words[i:nbrk[i]])
+		i = nbrk[i]
+	}
+	return lines
+}
+
+// getLines decomposes a multiline string into a slice of strings.
+func getLines(s string) []string {
+	var lines []string
+
+	for _, line := range strings.Split(s, nl) {
+		lines = append(lines, line)
+	}
+	return lines
+}
diff --git a/vendor/github.com/oracle/oci-go-sdk/CHANGELOG.md b/vendor/github.com/oracle/oci-go-sdk/CHANGELOG.md
new file mode 100644
index 000000000..e1a29e257
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/CHANGELOG.md
@@ -0,0 +1,15 @@
+# CHANGELOG
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](http://keepachangelog.com/)
+
+
+## 1.0.0 - 2018-02-28 Initial Release
+### Added
+- Support for Audit service
+- Support for Core Services (Networking, Compute, Block Volume)
+- Support for Database service
+- Support for IAM service
+- Support for Load Balancing service
+- Support for Object Storage service
diff --git a/vendor/github.com/oracle/oci-go-sdk/CONTRIBUTING.md b/vendor/github.com/oracle/oci-go-sdk/CONTRIBUTING.md
new file mode 100644
index 000000000..31e4b6c60
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/CONTRIBUTING.md
@@ -0,0 +1,24 @@
+# Contributing to the Oracle Cloud Infrastructure Go SDK
+
+*Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.*
+
+Pull requests can be made under
+[The Oracle Contributor Agreement](https://www.oracle.com/technetwork/community/oca-486395.html)
+(OCA).
+
+For pull requests to be accepted, the bottom of
+your commit message must have the following line using your name and
+e-mail address as it appears in the OCA Signatories list.
+
+```
+Signed-off-by: Your Name
+```
+
+This can be automatically added to pull requests by committing with:
+
+```
+git commit --signoff
+```
+
+Only pull requests from committers that can be verified as having
+signed the OCA can be accepted.
\ No newline at end of file
diff --git a/vendor/github.com/oracle/oci-go-sdk/LICENSE.txt b/vendor/github.com/oracle/oci-go-sdk/LICENSE.txt
new file mode 100644
index 000000000..63bc0c882
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/LICENSE.txt
@@ -0,0 +1,82 @@
+Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
+
+This software is dual-licensed to you under the Universal Permissive License (UPL) and Apache License 2.0.  See below for license terms.  You may choose either license, or both.
+ ____________________________
+The Universal Permissive License (UPL), Version 1.0
+Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
+ +Subject to the condition set forth below, permission is hereby granted to any person obtaining a copy of this software, associated documentation and/or data (collectively the "Software"), free of charge and under any and all copyright rights in the Software, and any and all patent rights owned or freely licensable by each licensor hereunder covering either (i) the unmodified Software as contributed to or provided by such licensor, or (ii) the Larger Works (as defined below), to deal in both + +(a) the Software, and +(b) any piece of software and/or hardware listed in the lrgrwrks.txt file if one is included with the Software (each a "Larger Work" to which the Software is contributed by such licensors), + +without restriction, including without limitation the rights to copy, create derivative works of, display, perform, and distribute the Software and make, use, sell, offer for sale, import, export, have made, and have sold the Software and the Larger Work(s), and to sublicense the foregoing rights on either these or other terms. + +This license is subject to the following condition: + +The above copyright notice and either this complete permission notice or at a minimum a reference to the UPL must be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +The Apache Software License, Version 2.0 +Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +Licensed under the Apache License, Version 2.0 (the "License"); You may not use this product except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. A copy of the license is also reproduced below. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. + +Apache License + +Version 2.0, January 2004 + +http://www.apache.org/licenses/ +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION +1. Definitions. +"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. +"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. +"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. +"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. 
+"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. +"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. +"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). +"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. +"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. +2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. +3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. +4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: +You must give any other recipients of the Work or Derivative Works a copy of this License; and +You must cause any modified files to carry prominent notices stating that You changed the files; and +You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and +If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. + +You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. +5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. +6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. +7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. +8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. +9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. +END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/github.com/oracle/oci-go-sdk/Makefile b/vendor/github.com/oracle/oci-go-sdk/Makefile new file mode 100644 index 000000000..d69dfd999 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/Makefile @@ -0,0 +1,55 @@ +DOC_SERVER_URL=https:\/\/docs.us-phoenix-1.oraclecloud.com + +GEN_TARGETS = identity core objectstorage loadbalancer database audit +NON_GEN_TARGETS = common common/auth +TARGETS = $(NON_GEN_TARGETS) $(GEN_TARGETS) + +TARGETS_WITH_TESTS = common common/auth +TARGETS_BUILD = $(patsubst %,build-%, $(TARGETS)) +TARGETS_CLEAN = $(patsubst %,clean-%, $(GEN_TARGETS)) +TARGETS_LINT = $(patsubst %,lint-%, $(TARGETS)) +TARGETS_TEST = $(patsubst %,test-%, $(TARGETS_WITH_TESTS)) +TARGETS_RELEASE= $(patsubst %,release-%, $(TARGETS)) +GOLINT=$(GOPATH)/bin/golint +LINT_FLAGS=-min_confidence 0.9 -set_exit_status + + +.PHONY: $(TARGETS_BUILD) $(TARGET_TEST) + +build: lint $(TARGETS_BUILD) + +test: build $(TARGETS_TEST) + +lint: $(TARGETS_LINT) + +clean: $(TARGETS_CLEAN) + +$(TARGETS_LINT): lint-%:% + @echo "linting and formatting: $<" + @(cd $< && gofmt -s -w .) + @if [ \( $< = common \) -o \( $< = common/auth \) ]; then\ + (cd $< && $(GOLINT) -set_exit_status .);\ + else\ + (cd $< && $(GOLINT) $(LINT_FLAGS) .);\ + fi + +$(TARGETS_BUILD): build-%:% + @echo "building: $<" + @(cd $< && find . -name '*_integ_test.go' | xargs -I{} mv {} ../integtest) + @(cd $< && go build -v) + +$(TARGETS_TEST): test-%:% + @(cd $< && go test -v) + +$(TARGETS_CLEAN): clean-%:% + @echo "cleaning $<" + @-rm -rf $< + +pre-doc: + @echo "Rendering doc server to ${DOC_SERVER_URL}" + find . -name \*.go |xargs sed -i '' 's/{{DOC_SERVER_URL}}/${DOC_SERVER_URL}/g' + +gen-version: + go generate -x + +release: gen-version build pre-doc diff --git a/vendor/github.com/oracle/oci-go-sdk/README.md b/vendor/github.com/oracle/oci-go-sdk/README.md new file mode 100644 index 000000000..4065a1529 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/README.md @@ -0,0 +1,153 @@ +# Oracle Cloud Infrastructure Golang SDK +[![wercker status](https://app.wercker.com/status/09bc4818e7b1d70b04285331a9bdbc41/s/master "wercker status")](https://app.wercker.com/project/byKey/09bc4818e7b1d70b04285331a9bdbc41) + +This is the Go SDK for Oracle Cloud Infrastructure. This project is open source and maintained by Oracle Corp. +The home page for the project is [here](https://godoc.org/github.com/oracle/oci-go-sdk/). +>***WARNING:***: To avoid automatically consuming breaking changes if we have to rev the major version of the Go SDK, +please consider using the [Go dependency management tool](https://github.com/golang/dep), or vendoring the SDK. +This will allow you to pin to a specific version of the Go SDK in your project, letting you control how and when you move to the next major version. + +## Dependencies +- Install [Go programming language](https://golang.org/dl/). +- Install [GNU Make](https://www.gnu.org/software/make/), using the package manager or binary distribution tool appropriate for your platform. + + + +## Installing +Use the following command to install this SDK: + +``` +go get -u github.com/oracle/oci-go-sdk +``` +Alternatively you can git clone this repo. + +## Working with the Go SDK +To start working with the Go SDK, you import the service package, create a client, and then use that client to make calls. + +### Configuring +Before using the SDK, set up a config file with the required credentials. 
See [SDK and Tool Configuration](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm) for instructions. + +Once a config file has been setup, call `common.DefaultConfigProvider()` function as follows: + + ```go + // Import necessary packages + import ( + "github.com/oracle/oci-go-sdk/common" + "github.com/oracle/oci-go-sdk/identity" // Identity or any other service you wish to make requests to +) + + //... + +configProvider := common.DefaultConfigProvider() +``` + + Or, to configure the SDK programmatically instead, implement the `ConfigurationProvider` interface shown below: + ```go +// ConfigurationProvider wraps information about the account owner +type ConfigurationProvider interface { + KeyProvider + TenancyOCID() (string, error) + UserOCID() (string, error) + KeyFingerprint() (string, error) + Region() (string, error) +} +``` + +### Making a request +To make a request to an OCI service, create a client for the service and then use the client to call a function from the service. + +- *Creating a client*: All packages provide a function to create clients, using the naming convention `NewClientWithConfigurationProvider`, +such as `NewVirtualNetworkClientWithConfigurationProvider` or `NewIdentityClientWithConfigurationProvider`. To create a new client, +pass a struct that conforms to the `ConfigurationProvider` interface, or use the `DefaultConfigProvider()` function in the common package. + +For example: +```go +config := common.DefaultConfigProvider() +client, err := identity.NewIdentityClientWithConfigurationProvider(config) +if err != nil { + panic(err) +} +``` + +- *Making calls*: After successfully creating a client, requests can now be made to the service. Generally all functions associated with an operation +accept [`context.Context`](https://golang.org/pkg/context/) and a struct that wraps all input parameters. The functions then return a response struct +that contains the desired data, and an error struct that describes the error if an error occurs. + +For example: +```go +id := "your_group_id" +response, err := client.GetGroup(context.Background(), identity.GetGroupRequest{GroupId:&id}) +if err != nil { + //Something happened + panic(err) +} +//Process the data in response struct +fmt.Println("Group's name is:", response.Name) +``` + +## Organization of the SDK +The `oci-go-sdk` contains the following: +- **Service packages**: All packages except `common` and any other package found inside `cmd`. These packages represent +the Oracle Cloud Infrastructure services supported by the Go SDK. Each package represents a service. +These packages include methods to interact with the service, structs that model +input and output parameters, and a client struct that acts as receiver for the above methods. + +- **Common package**: Found in the `common` directory. The common package provides supporting functions and structs used by service packages. +Includes HTTP request/response (de)serialization, request signing, JSON parsing, pointer to reference and other helper functions. Most of the functions +in this package are meant to be used by the service packages. + +- **cmd**: Internal tools used by the `oci-go-sdk`. + +## Examples +Examples can be found [here](https://github.com/oracle/oci-go-sdk/tree/master/example) + +## Documentation +Full documentation can be found [on the godocs site](https://godoc.org/github.com/oracle/oci-go-sdk/). + +## Help +* The [Issues](https://github.com/oracle/oci-go-sdk/issues) page of this GitHub repository. 
+* [Stack Overflow](https://stackoverflow.com/), use the [oracle-cloud-infrastructure](https://stackoverflow.com/questions/tagged/oracle-cloud-infrastructure) and [oci-go-sdk](https://stackoverflow.com/questions/tagged/oci-go-sdk) tags in your post. +* [Developer Tools](https://community.oracle.com/community/cloud_computing/bare-metal/content?filterID=contentstatus%5Bpublished%5D~category%5Bdeveloper-tools%5D&filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D) of the Oracle Cloud forums. +* [My Oracle Support](https://support.oracle.com). + + +## Contributing +`oci-go-sdk` is an open source project. See [CONTRIBUTING](/CONTRIBUTING.md) for details. + +Oracle gratefully acknowledges the contributions to oci-go-sdk that have been made by the community. + + +## License +Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +This SDK and sample is dual licensed under the Universal Permissive License 1.0 and the Apache License 2.0. + +See [LICENSE](/LICENSE.txt) for more details. + +## Changes +See [CHANGELOG](/CHANGELOG.md). + +## Known Issues +You can find information on any known issues with the SDK here and under the [Issues](https://github.com/oracle/oci-go-sdk/issues) tab of this project's GitHub repository. + +## Building and testing +### Dev dependencies +- Install [Testify](https://github.com/stretchr/testify) with the command: +```sh +go get github.com/stretchr/testify +``` +- Install [go lint](https://github.com/golang/lint) with the command: +``` +go get -u github.com/golang/lint/golint +``` +### Build +Building is provided by the make file at the root of the project. To build the project execute. + +``` +make build +``` + +To run the tests: +``` +make test +``` diff --git a/vendor/github.com/oracle/oci-go-sdk/audit/audit_client.go b/vendor/github.com/oracle/oci-go-sdk/audit/audit_client.go new file mode 100644 index 000000000..ce0ad0c55 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/audit/audit_client.go @@ -0,0 +1,113 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Audit API +// +// API for the Audit Service. You can use this API for queries, but not bulk-export operations. +// + +package audit + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//AuditClient a client for Audit +type AuditClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +// NewAuditClientWithConfigurationProvider Creates a new default Audit client with the given configuration provider. +// the configuration provider will be used for the default signer as well as reading the region +func NewAuditClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client AuditClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + + client = AuditClient{BaseClient: baseClient} + client.BasePath = "20160918" + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. 
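+// The region name is interpolated into the service host template, so, as an
+// illustrative example, client.SetRegion("us-phoenix-1") points the client at
+// the Phoenix audit endpoint.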
+func (client *AuditClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "audit", region) +} + +// SetConfigurationProvider sets the configuration provider including the region, returns an error if is not valid +func (client *AuditClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or null if none set +func (client *AuditClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// GetConfiguration Get the configuration +func (client AuditClient) GetConfiguration(ctx context.Context, request GetConfigurationRequest) (response GetConfigurationResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/configuration", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListEvents Returns all audit events for the specified compartment that were processed within the specified time range. +func (client AuditClient) ListEvents(ctx context.Context, request ListEventsRequest) (response ListEventsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/auditEvents", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateConfiguration Update the configuration +func (client AuditClient) UpdateConfiguration(ctx context.Context, request UpdateConfigurationRequest) (response UpdateConfigurationResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/configuration", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/audit/audit_event.go b/vendor/github.com/oracle/oci-go-sdk/audit/audit_event.go new file mode 100644 index 000000000..bc0012c6d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/audit/audit_event.go @@ -0,0 +1,78 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Audit API +// +// API for the Audit Service. You can use this API for queries, but not bulk-export operations. +// + +package audit + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// AuditEvent The representation of AuditEvent +type AuditEvent struct { + + // The OCID of the tenant. + TenantId *string `mandatory:"false" json:"tenantId"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // The GUID of the event. 
+ EventId *string `mandatory:"false" json:"eventId"` + + // The source of the event. + EventSource *string `mandatory:"false" json:"eventSource"` + + // The type of the event. + EventType *string `mandatory:"false" json:"eventType"` + + // The time the event occurred, expressed in RFC 3339 (https://tools.ietf.org/html/rfc3339) timestamp format. + EventTime *common.SDKTime `mandatory:"false" json:"eventTime"` + + // The OCID of the user whose action triggered the event. + PrincipalId *string `mandatory:"false" json:"principalId"` + + // The credential ID of the user. This value is extracted from the HTTP 'Authorization' request header. It consists of the tenantId, userId, and user fingerprint, all delimited by a slash (/). + CredentialId *string `mandatory:"false" json:"credentialId"` + + // The HTTP method of the request. + RequestAction *string `mandatory:"false" json:"requestAction"` + + // The opc-request-id of the request. + RequestId *string `mandatory:"false" json:"requestId"` + + // The user agent of the client that made the request. + RequestAgent *string `mandatory:"false" json:"requestAgent"` + + // The HTTP header fields and values in the request. + RequestHeaders map[string][]string `mandatory:"false" json:"requestHeaders"` + + // The IP address of the source of the request. + RequestOrigin *string `mandatory:"false" json:"requestOrigin"` + + // The query parameter fields and values for the request. + RequestParameters map[string][]string `mandatory:"false" json:"requestParameters"` + + // The resource targeted by the request. + RequestResource *string `mandatory:"false" json:"requestResource"` + + // The headers of the response. + ResponseHeaders map[string][]string `mandatory:"false" json:"responseHeaders"` + + // The status code of the response. + ResponseStatus *string `mandatory:"false" json:"responseStatus"` + + // The time of the response to the audited request, expressed in RFC 3339 (https://tools.ietf.org/html/rfc3339) timestamp format. + ResponseTime *common.SDKTime `mandatory:"false" json:"responseTime"` + + // Metadata of interest from the response payload. For example, the OCID of a resource. + ResponsePayload map[string]interface{} `mandatory:"false" json:"responsePayload"` +} + +func (m AuditEvent) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/audit/configuration.go b/vendor/github.com/oracle/oci-go-sdk/audit/configuration.go new file mode 100644 index 000000000..35d0c9b07 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/audit/configuration.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Audit API +// +// API for the Audit Service. You can use this API for queries, but not bulk-export operations. 
+// + +package audit + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Configuration The representation of Configuration +type Configuration struct { + + // The retention period days + RetentionPeriodDays *int `mandatory:"false" json:"retentionPeriodDays"` +} + +func (m Configuration) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/audit/get_configuration_request_response.go b/vendor/github.com/oracle/oci-go-sdk/audit/get_configuration_request_response.go new file mode 100644 index 000000000..684d790b0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/audit/get_configuration_request_response.go @@ -0,0 +1,34 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package audit + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetConfigurationRequest wrapper for the GetConfiguration operation +type GetConfigurationRequest struct { + + // ID of the root compartment (tenancy) + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` +} + +func (request GetConfigurationRequest) String() string { + return common.PointerString(request) +} + +// GetConfigurationResponse wrapper for the GetConfiguration operation +type GetConfigurationResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Configuration instance + Configuration `presentIn:"body"` +} + +func (response GetConfigurationResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/audit/list_events_request_response.go b/vendor/github.com/oracle/oci-go-sdk/audit/list_events_request_response.go new file mode 100644 index 000000000..e8c6b8d6b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/audit/list_events_request_response.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package audit + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListEventsRequest wrapper for the ListEvents operation +type ListEventsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // Returns events that were processed at or after this start date and time, expressed in RFC 3339 (https://tools.ietf.org/html/rfc3339) timestamp format. + // For example, a start value of `2017-01-15T11:30:00Z` will retrieve a list of all events processed since 30 minutes after the 11th hour of January 15, 2017, in Coordinated Universal Time (UTC). + // You can specify a value with granularity to the minute. Seconds (and milliseconds, if included) must be set to `0`. + StartTime *common.SDKTime `mandatory:"true" contributesTo:"query" name:"startTime"` + + // Returns events that were processed before this end date and time, expressed in RFC 3339 (https://tools.ietf.org/html/rfc3339) timestamp format. For example, a start value of `2017-01-01T00:00:00Z` and an end value of `2017-01-02T00:00:00Z` will retrieve a list of all events processed on January 1, 2017. + // Similarly, a start value of `2017-01-01T00:00:00Z` and an end value of `2017-02-01T00:00:00Z` will result in a list of all events processed between January 1, 2017 and January 31, 2017. + // You can specify a value with granularity to the minute. Seconds (and milliseconds, if included) must be set to `0`. 
+ EndTime *common.SDKTime `mandatory:"true" contributesTo:"query" name:"endTime"` + + // The value of the `opc-next-page` response header from the previous list query. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // Unique Oracle-assigned identifier for the request. + // If you need to contact Oracle about a particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request ListEventsRequest) String() string { + return common.PointerString(request) +} + +// ListEventsResponse wrapper for the ListEvents operation +type ListEventsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []AuditEvent instance + Items []AuditEvent `presentIn:"body"` + + // For pagination of a list of audit events. When this header appears in the response, + // it means you received a partial list and there are more results. + // Include this value as the `page` parameter for the subsequent ListEvents request to get the next batch of events. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListEventsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/audit/update_configuration_details.go b/vendor/github.com/oracle/oci-go-sdk/audit/update_configuration_details.go new file mode 100644 index 000000000..a1129293b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/audit/update_configuration_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Audit API +// +// API for the Audit Service. You can use this API for queries, but not bulk-export operations. +// + +package audit + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateConfigurationDetails The representation of UpdateConfigurationDetails +type UpdateConfigurationDetails struct { + + // The retention period days + RetentionPeriodDays *int `mandatory:"false" json:"retentionPeriodDays"` +} + +func (m UpdateConfigurationDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/audit/update_configuration_request_response.go b/vendor/github.com/oracle/oci-go-sdk/audit/update_configuration_request_response.go new file mode 100644 index 000000000..2d709a8a8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/audit/update_configuration_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package audit + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateConfigurationRequest wrapper for the UpdateConfiguration operation +type UpdateConfigurationRequest struct { + + // ID of the root compartment (tenancy) + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The configuration properties + UpdateConfigurationDetails `contributesTo:"body"` +} + +func (request UpdateConfigurationRequest) String() string { + return common.PointerString(request) +} + +// UpdateConfigurationResponse wrapper for the UpdateConfiguration operation +type UpdateConfigurationResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response UpdateConfigurationResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/cmd/genver/version_template.go b/vendor/github.com/oracle/oci-go-sdk/cmd/genver/version_template.go new file mode 100644 index 000000000..5c684fd5e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/cmd/genver/version_template.go @@ -0,0 +1,42 @@ +package main + + +const versionTemplate = ` +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated by go generate; DO NOT EDIT + +package common + +import ( + "bytes" + "fmt" + "sync" +) + +const ( + major = "{{.Major}}" + minor = "{{.Minor}}" + patch = "{{.Patch}}" + tag = "{{.Tag}}" +) + +var once sync.Once +var version string + +// Version returns semantic version of the sdk +func Version() string { + once.Do(func() { + ver := fmt.Sprintf("%s.%s.%s", major, minor, patch) + verBuilder := bytes.NewBufferString(ver) + if tag != "" && tag != "-" { + _, err := verBuilder.WriteString(tag) + if err == nil { + verBuilder = bytes.NewBufferString(ver) + } + } + version = verBuilder.String() + }) + return version +} +` + diff --git a/vendor/github.com/oracle/oci-go-sdk/common/auth/certificate_retriever.go b/vendor/github.com/oracle/oci-go-sdk/common/auth/certificate_retriever.go new file mode 100644 index 000000000..e2e04ee7a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/auth/certificate_retriever.go @@ -0,0 +1,154 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package auth + +import ( + "bytes" + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "sync" +) + +// x509CertificateRetriever provides an X509 certificate with the RSA private key +type x509CertificateRetriever interface { + Refresh() error + CertificatePemRaw() []byte + Certificate() *x509.Certificate + PrivateKeyPemRaw() []byte + PrivateKey() *rsa.PrivateKey +} + +// urlBasedX509CertificateRetriever retrieves PEM-encoded X509 certificates from the given URLs. 
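+// In this package the URLs are expected to be the instance-metadata identity
+// endpoints (for example http://169.254.169.254/opc/v1/identity/cert.pem); see
+// instance_principal_key_provider.go for the concrete values.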
+type urlBasedX509CertificateRetriever struct { + certURL string + privateKeyURL string + passphrase string + certificatePemRaw []byte + certificate *x509.Certificate + privateKeyPemRaw []byte + privateKey *rsa.PrivateKey + mux sync.Mutex +} + +func newURLBasedX509CertificateRetriever(certURL, privateKeyURL, passphrase string) x509CertificateRetriever { + return &urlBasedX509CertificateRetriever{ + certURL: certURL, + privateKeyURL: privateKeyURL, + passphrase: passphrase, + mux: sync.Mutex{}, + } +} + +// Refresh() is failure atomic, i.e., CertificatePemRaw(), Certificate(), PrivateKeyPemRaw(), and PrivateKey() would +// return their previous values if Refresh() fails. +func (r *urlBasedX509CertificateRetriever) Refresh() error { + common.Debugln("Refreshing certificate") + + r.mux.Lock() + defer r.mux.Unlock() + + var err error + + var certificatePemRaw []byte + var certificate *x509.Certificate + if certificatePemRaw, certificate, err = r.renewCertificate(r.certURL); err != nil { + return fmt.Errorf("failed to renew certificate: %s", err.Error()) + } + + var privateKeyPemRaw []byte + var privateKey *rsa.PrivateKey + if r.privateKeyURL != "" { + if privateKeyPemRaw, privateKey, err = r.renewPrivateKey(r.privateKeyURL, r.passphrase); err != nil { + return fmt.Errorf("failed to renew private key: %s", err.Error()) + } + } + + r.certificatePemRaw = certificatePemRaw + r.certificate = certificate + r.privateKeyPemRaw = privateKeyPemRaw + r.privateKey = privateKey + return nil +} + +func (r *urlBasedX509CertificateRetriever) renewCertificate(url string) (certificatePemRaw []byte, certificate *x509.Certificate, err error) { + var body bytes.Buffer + if body, err = httpGet(url); err != nil { + return nil, nil, fmt.Errorf("failed to get certificate from %s: %s", url, err.Error()) + } + + certificatePemRaw = body.Bytes() + var block *pem.Block + block, _ = pem.Decode(certificatePemRaw) + if certificate, err = x509.ParseCertificate(block.Bytes); err != nil { + return nil, nil, fmt.Errorf("failed to parse the new certificate: %s", err.Error()) + } + + return certificatePemRaw, certificate, nil +} + +func (r *urlBasedX509CertificateRetriever) renewPrivateKey(url, passphrase string) (privateKeyPemRaw []byte, privateKey *rsa.PrivateKey, err error) { + var body bytes.Buffer + if body, err = httpGet(url); err != nil { + return nil, nil, fmt.Errorf("failed to get private key from %s: %s", url, err.Error()) + } + + privateKeyPemRaw = body.Bytes() + if privateKey, err = common.PrivateKeyFromBytes(privateKeyPemRaw, &passphrase); err != nil { + return nil, nil, fmt.Errorf("failed to parse the new private key: %s", err.Error()) + } + + return privateKeyPemRaw, privateKey, nil +} + +func (r *urlBasedX509CertificateRetriever) CertificatePemRaw() []byte { + r.mux.Lock() + defer r.mux.Unlock() + + if r.certificatePemRaw == nil { + return nil + } + + c := make([]byte, len(r.certificatePemRaw)) + copy(c, r.certificatePemRaw) + return c +} + +func (r *urlBasedX509CertificateRetriever) Certificate() *x509.Certificate { + r.mux.Lock() + defer r.mux.Unlock() + + if r.certificate == nil { + return nil + } + + c := *r.certificate + return &c +} + +func (r *urlBasedX509CertificateRetriever) PrivateKeyPemRaw() []byte { + r.mux.Lock() + defer r.mux.Unlock() + + if r.privateKeyPemRaw == nil { + return nil + } + + c := make([]byte, len(r.privateKeyPemRaw)) + copy(c, r.privateKeyPemRaw) + return c +} + +func (r *urlBasedX509CertificateRetriever) PrivateKey() *rsa.PrivateKey { + r.mux.Lock() + defer r.mux.Unlock() + + if 
r.privateKey == nil { + return nil + } + + c := *r.privateKey + return &c +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/auth/configuration.go b/vendor/github.com/oracle/oci-go-sdk/common/auth/configuration.go new file mode 100644 index 000000000..b0f455e9b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/auth/configuration.go @@ -0,0 +1,61 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package auth + +import ( + "crypto/rsa" + "fmt" + "github.com/oracle/oci-go-sdk/common" +) + +type instancePrincipalConfigurationProvider struct { + keyProvider *instancePrincipalKeyProvider + region *common.Region +} + +//InstancePrincipalConfigurationProvider returns a configuration for instance principals +func InstancePrincipalConfigurationProvider() (common.ConfigurationProvider, error) { + var err error + var keyProvider *instancePrincipalKeyProvider + if keyProvider, err = newInstancePrincipalKeyProvider(); err != nil { + return nil, fmt.Errorf("failed to create a new key provider for instance principal: %s", err.Error()) + } + return instancePrincipalConfigurationProvider{keyProvider: keyProvider, region: nil}, nil +} + +//InstancePrincipalConfigurationProviderForRegion returns a configuration for instance principals with a given region +func InstancePrincipalConfigurationProviderForRegion(region common.Region) (common.ConfigurationProvider, error) { + var err error + var keyProvider *instancePrincipalKeyProvider + if keyProvider, err = newInstancePrincipalKeyProvider(); err != nil { + return nil, fmt.Errorf("failed to create a new key provider for instance principal: %s", err.Error()) + } + return instancePrincipalConfigurationProvider{keyProvider: keyProvider, region: ®ion}, nil +} + +func (p instancePrincipalConfigurationProvider) PrivateRSAKey() (*rsa.PrivateKey, error) { + return p.keyProvider.PrivateRSAKey() +} + +func (p instancePrincipalConfigurationProvider) KeyID() (string, error) { + return p.keyProvider.KeyID() +} + +func (p instancePrincipalConfigurationProvider) TenancyOCID() (string, error) { + return "", nil +} + +func (p instancePrincipalConfigurationProvider) UserOCID() (string, error) { + return "", nil +} + +func (p instancePrincipalConfigurationProvider) KeyFingerprint() (string, error) { + return "", nil +} + +func (p instancePrincipalConfigurationProvider) Region() (string, error) { + if p.region == nil { + return string(p.keyProvider.RegionForFederationClient()), nil + } + return string(*p.region), nil +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/auth/federation_client.go b/vendor/github.com/oracle/oci-go-sdk/common/auth/federation_client.go new file mode 100644 index 000000000..f15aff82d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/auth/federation_client.go @@ -0,0 +1,288 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +// Package auth provides supporting functions and structs for authentication +package auth + +import ( + "context" + "crypto/rand" + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" + "strings" + "sync" +) + +// federationClient is a client to retrieve the security token for an instance principal necessary to sign a request. +// It also provides the private key whose corresponding public key is used to retrieve the security token. 
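+//
+// As an illustrative summary (not upstream documentation): the concrete
+// x509FederationClient below refreshes the instance certificates, generates a
+// fresh session keypair, exchanges them with the Auth service's v1/x509
+// endpoint for a token, and caches that token until its JWT "exp" claim passes.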
+type federationClient interface { + PrivateKey() (*rsa.PrivateKey, error) + SecurityToken() (string, error) +} + +// x509FederationClient retrieves a security token from Auth service. +type x509FederationClient struct { + tenancyID string + sessionKeySupplier sessionKeySupplier + leafCertificateRetriever x509CertificateRetriever + intermediateCertificateRetrievers []x509CertificateRetriever + securityToken securityToken + authClient *common.BaseClient + mux sync.Mutex +} + +func newX509FederationClient(region common.Region, tenancyID string, leafCertificateRetriever x509CertificateRetriever, intermediateCertificateRetrievers []x509CertificateRetriever) federationClient { + client := &x509FederationClient{ + tenancyID: tenancyID, + leafCertificateRetriever: leafCertificateRetriever, + intermediateCertificateRetrievers: intermediateCertificateRetrievers, + } + client.sessionKeySupplier = newSessionKeySupplier() + client.authClient = newAuthClient(region, client) + return client +} + +var ( + genericHeaders = []string{"date", "(request-target)"} // "host" is not needed for the federation endpoint. Don't ask me why. + bodyHeaders = []string{"content-length", "content-type", "x-content-sha256"} +) + +func newAuthClient(region common.Region, provider common.KeyProvider) *common.BaseClient { + signer := common.RequestSigner(provider, genericHeaders, bodyHeaders) + client := common.DefaultBaseClientWithSigner(signer) + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "auth", string(region)) + client.BasePath = "v1/x509" + return &client +} + +// For authClient to sign requests to X509 Federation Endpoint +func (c *x509FederationClient) KeyID() (string, error) { + tenancy := c.tenancyID + fingerprint := fingerprint(c.leafCertificateRetriever.Certificate()) + return fmt.Sprintf("%s/fed-x509/%s", tenancy, fingerprint), nil +} + +// For authClient to sign requests to X509 Federation Endpoint +func (c *x509FederationClient) PrivateRSAKey() (*rsa.PrivateKey, error) { + return c.leafCertificateRetriever.PrivateKey(), nil +} + +func (c *x509FederationClient) PrivateKey() (*rsa.PrivateKey, error) { + c.mux.Lock() + defer c.mux.Unlock() + + if err := c.renewSecurityTokenIfNotValid(); err != nil { + return nil, err + } + return c.sessionKeySupplier.PrivateKey(), nil +} + +func (c *x509FederationClient) SecurityToken() (token string, err error) { + c.mux.Lock() + defer c.mux.Unlock() + + if err = c.renewSecurityTokenIfNotValid(); err != nil { + return "", err + } + return c.securityToken.String(), nil +} + +func (c *x509FederationClient) renewSecurityTokenIfNotValid() (err error) { + if c.securityToken == nil || !c.securityToken.Valid() { + if err = c.renewSecurityToken(); err != nil { + return fmt.Errorf("failed to renew security token: %s", err.Error()) + } + } + return nil +} + +func (c *x509FederationClient) renewSecurityToken() (err error) { + if err = c.sessionKeySupplier.Refresh(); err != nil { + return fmt.Errorf("failed to refresh session key: %s", err.Error()) + } + + if err = c.leafCertificateRetriever.Refresh(); err != nil { + return fmt.Errorf("failed to refresh leaf certificate: %s", err.Error()) + } + + updatedTenancyID := extractTenancyIDFromCertificate(c.leafCertificateRetriever.Certificate()) + if c.tenancyID != updatedTenancyID { + err = fmt.Errorf("unexpected update of tenancy OCID in the leaf certificate. 
Previous tenancy: %s, Updated: %s", c.tenancyID, updatedTenancyID) + return + } + + for _, retriever := range c.intermediateCertificateRetrievers { + if err = retriever.Refresh(); err != nil { + return fmt.Errorf("failed to refresh intermediate certificate: %s", err.Error()) + } + } + + if c.securityToken, err = c.getSecurityToken(); err != nil { + return fmt.Errorf("failed to get security token: %s", err.Error()) + } + + return nil +} + +func (c *x509FederationClient) getSecurityToken() (securityToken, error) { + request := c.makeX509FederationRequest() + + var err error + var httpRequest http.Request + if httpRequest, err = common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "", request); err != nil { + return nil, fmt.Errorf("failed to make http request: %s", err.Error()) + } + + var httpResponse *http.Response + defer common.CloseBodyIfValid(httpResponse) + if httpResponse, err = c.authClient.Call(context.Background(), &httpRequest); err != nil { + return nil, fmt.Errorf("failed to call: %s", err.Error()) + } + + response := x509FederationResponse{} + if err = common.UnmarshalResponse(httpResponse, &response); err != nil { + return nil, fmt.Errorf("failed to unmarshal the response: %s", err.Error()) + } + + return newInstancePrincipalToken(response.Token.Token) +} + +type x509FederationRequest struct { + X509FederationDetails `contributesTo:"body"` +} + +// X509FederationDetails x509 federation details +type X509FederationDetails struct { + Certificate string `mandatory:"true" json:"certificate,omitempty"` + PublicKey string `mandatory:"true" json:"publicKey,omitempty"` + IntermediateCertificates []string `mandatory:"false" json:"intermediateCertificates,omitempty"` +} + +type x509FederationResponse struct { + Token `presentIn:"body"` +} + +// Token token +type Token struct { + Token string `mandatory:"true" json:"token,omitempty"` +} + +func (c *x509FederationClient) makeX509FederationRequest() *x509FederationRequest { + certificate := c.sanitizeCertificateString(string(c.leafCertificateRetriever.CertificatePemRaw())) + publicKey := c.sanitizeCertificateString(string(c.sessionKeySupplier.PublicKeyPemRaw())) + var intermediateCertificates []string + for _, retriever := range c.intermediateCertificateRetrievers { + intermediateCertificates = append(intermediateCertificates, c.sanitizeCertificateString(string(retriever.CertificatePemRaw()))) + } + + details := X509FederationDetails{ + Certificate: certificate, + PublicKey: publicKey, + IntermediateCertificates: intermediateCertificates, + } + return &x509FederationRequest{details} +} + +func (c *x509FederationClient) sanitizeCertificateString(certString string) string { + certString = strings.Replace(certString, "-----BEGIN CERTIFICATE-----", "", -1) + certString = strings.Replace(certString, "-----END CERTIFICATE-----", "", -1) + certString = strings.Replace(certString, "-----BEGIN PUBLIC KEY-----", "", -1) + certString = strings.Replace(certString, "-----END PUBLIC KEY-----", "", -1) + certString = strings.Replace(certString, "\n", "", -1) + return certString +} + +// sessionKeySupplier provides an RSA keypair which can be re-generated by calling Refresh(). +type sessionKeySupplier interface { + Refresh() error + PrivateKey() *rsa.PrivateKey + PublicKeyPemRaw() []byte +} + +// inMemorySessionKeySupplier implements sessionKeySupplier to vend an RSA keypair. +// Refresh() generates a new RSA keypair with a random source, and keeps it in memory. +// +// inMemorySessionKeySupplier is not thread-safe. 
+type inMemorySessionKeySupplier struct { + keySize int + privateKey *rsa.PrivateKey + publicKeyPemRaw []byte +} + +// newSessionKeySupplier creates and returns a sessionKeySupplier instance which generates key pairs of size 2048. +func newSessionKeySupplier() sessionKeySupplier { + return &inMemorySessionKeySupplier{keySize: 2048} +} + +// Refresh() is failure atomic, i.e., PrivateKey() and PublicKeyPemRaw() would return their previous values +// if Refresh() fails. +func (s *inMemorySessionKeySupplier) Refresh() (err error) { + common.Debugln("Refreshing session key") + + var privateKey *rsa.PrivateKey + privateKey, err = rsa.GenerateKey(rand.Reader, s.keySize) + if err != nil { + return fmt.Errorf("failed to generate a new keypair: %s", err) + } + + var publicKeyAsnBytes []byte + if publicKeyAsnBytes, err = x509.MarshalPKIXPublicKey(privateKey.Public()); err != nil { + return fmt.Errorf("failed to marshal the public part of the new keypair: %s", err.Error()) + } + publicKeyPemRaw := pem.EncodeToMemory(&pem.Block{ + Type: "PUBLIC KEY", + Bytes: publicKeyAsnBytes, + }) + + s.privateKey = privateKey + s.publicKeyPemRaw = publicKeyPemRaw + return nil +} + +func (s *inMemorySessionKeySupplier) PrivateKey() *rsa.PrivateKey { + if s.privateKey == nil { + return nil + } + + c := *s.privateKey + return &c +} + +func (s *inMemorySessionKeySupplier) PublicKeyPemRaw() []byte { + if s.publicKeyPemRaw == nil { + return nil + } + + c := make([]byte, len(s.publicKeyPemRaw)) + copy(c, s.publicKeyPemRaw) + return c +} + +type securityToken interface { + fmt.Stringer + Valid() bool +} + +type instancePrincipalToken struct { + tokenString string + jwtToken *jwtToken +} + +func newInstancePrincipalToken(tokenString string) (newToken securityToken, err error) { + var jwtToken *jwtToken + if jwtToken, err = parseJwt(tokenString); err != nil { + return nil, fmt.Errorf("failed to parse the token string \"%s\": %s", tokenString, err.Error()) + } + return &instancePrincipalToken{tokenString, jwtToken}, nil +} + +func (t *instancePrincipalToken) String() string { + return t.tokenString +} + +func (t *instancePrincipalToken) Valid() bool { + return !t.jwtToken.expired() +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/auth/instance_principal_key_provider.go b/vendor/github.com/oracle/oci-go-sdk/common/auth/instance_principal_key_provider.go new file mode 100644 index 000000000..ecf14b072 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/auth/instance_principal_key_provider.go @@ -0,0 +1,95 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package auth + +import ( + "bytes" + "crypto/rsa" + "fmt" + "github.com/oracle/oci-go-sdk/common" +) + +const ( + regionURL = `http://169.254.169.254/opc/v1/instance/region` + leafCertificateURL = `http://169.254.169.254/opc/v1/identity/cert.pem` + leafCertificateKeyURL = `http://169.254.169.254/opc/v1/identity/key.pem` + leafCertificateKeyPassphrase = `` // No passphrase for the private key for Compute instances + intermediateCertificateURL = `http://169.254.169.254/opc/v1/identity/intermediate.pem` + intermediateCertificateKeyURL = `` + intermediateCertificateKeyPassphrase = `` // No passphrase for the private key for Compute instances +) + +// instancePrincipalKeyProvider implements KeyProvider to provide a key ID and its corresponding private key +// for an instance principal by getting a security token via x509FederationClient. 
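+//
+// Illustrative note (not upstream documentation): application code normally
+// does not use this type directly; it is wrapped by
+// auth.InstancePrincipalConfigurationProvider(), whose result can be passed to
+// a service client constructor such as audit.NewAuditClientWithConfigurationProvider.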
+// +// The region name of the endpoint for x509FederationClient is obtained from the metadata service on the compute +// instance. +type instancePrincipalKeyProvider struct { + regionForFederationClient common.Region + federationClient federationClient +} + +// newInstancePrincipalKeyProvider creates and returns an instancePrincipalKeyProvider instance based on +// x509FederationClient. +// +// NOTE: There is a race condition between PrivateRSAKey() and KeyID(). These two pieces are tightly coupled; KeyID +// includes a security token obtained from Auth service by giving a public key which is paired with PrivateRSAKey. +// The x509FederationClient caches the security token in memory until it is expired. Thus, even if a client obtains a +// KeyID that is not expired at the moment, the PrivateRSAKey that the client acquires at a next moment could be +// invalid because the KeyID could be already expired. +func newInstancePrincipalKeyProvider() (provider *instancePrincipalKeyProvider, err error) { + var region common.Region + if region, err = getRegionForFederationClient(regionURL); err != nil { + err = fmt.Errorf("failed to get the region name from %s: %s", regionURL, err.Error()) + common.Logln(err) + return nil, err + } + + leafCertificateRetriever := newURLBasedX509CertificateRetriever( + leafCertificateURL, leafCertificateKeyURL, leafCertificateKeyPassphrase) + intermediateCertificateRetrievers := []x509CertificateRetriever{ + newURLBasedX509CertificateRetriever( + intermediateCertificateURL, intermediateCertificateKeyURL, intermediateCertificateKeyPassphrase), + } + + if err = leafCertificateRetriever.Refresh(); err != nil { + err = fmt.Errorf("failed to refresh the leaf certificate: %s", err.Error()) + return nil, err + } + tenancyID := extractTenancyIDFromCertificate(leafCertificateRetriever.Certificate()) + + federationClient := newX509FederationClient( + region, tenancyID, leafCertificateRetriever, intermediateCertificateRetrievers) + + provider = &instancePrincipalKeyProvider{regionForFederationClient: region, federationClient: federationClient} + return +} + +func getRegionForFederationClient(url string) (r common.Region, err error) { + var body bytes.Buffer + if body, err = httpGet(url); err != nil { + return + } + return common.StringToRegion(body.String()), nil +} + +func (p *instancePrincipalKeyProvider) RegionForFederationClient() common.Region { + return p.regionForFederationClient +} + +func (p *instancePrincipalKeyProvider) PrivateRSAKey() (privateKey *rsa.PrivateKey, err error) { + if privateKey, err = p.federationClient.PrivateKey(); err != nil { + err = fmt.Errorf("failed to get private key: %s", err.Error()) + return nil, err + } + return privateKey, nil +} + +func (p *instancePrincipalKeyProvider) KeyID() (string, error) { + var securityToken string + var err error + if securityToken, err = p.federationClient.SecurityToken(); err != nil { + return "", fmt.Errorf("failed to get security token: %s", err.Error()) + } + return fmt.Sprintf("ST$%s", securityToken), nil +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/auth/jwt.go b/vendor/github.com/oracle/oci-go-sdk/common/auth/jwt.go new file mode 100644 index 000000000..b199a4d68 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/auth/jwt.go @@ -0,0 +1,61 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. 
+ +package auth + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "fmt" + "strings" + "time" +) + +type jwtToken struct { + raw string + header map[string]interface{} + payload map[string]interface{} +} + +func (t *jwtToken) expired() bool { + exp := int64(t.payload["exp"].(float64)) + return exp <= time.Now().Unix() +} + +func parseJwt(tokenString string) (*jwtToken, error) { + parts := strings.Split(tokenString, ".") + if len(parts) != 3 { + return nil, fmt.Errorf("the given token string contains an invalid number of parts") + } + + token := &jwtToken{raw: tokenString} + var err error + + // Parse Header part + var headerBytes []byte + if headerBytes, err = decodePart(parts[0]); err != nil { + return nil, fmt.Errorf("failed to decode the header bytes: %s", err.Error()) + } + if err = json.Unmarshal(headerBytes, &token.header); err != nil { + return nil, err + } + + // Parse Payload part + var payloadBytes []byte + if payloadBytes, err = decodePart(parts[1]); err != nil { + return nil, fmt.Errorf("failed to decode the payload bytes: %s", err.Error()) + } + decoder := json.NewDecoder(bytes.NewBuffer(payloadBytes)) + if err = decoder.Decode(&token.payload); err != nil { + return nil, fmt.Errorf("failed to decode the payload json: %s", err.Error()) + } + + return token, nil +} + +func decodePart(partString string) ([]byte, error) { + if l := len(partString) % 4; 0 < l { + partString += strings.Repeat("=", 4-l) + } + return base64.URLEncoding.DecodeString(partString) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/auth/utils.go b/vendor/github.com/oracle/oci-go-sdk/common/auth/utils.go new file mode 100644 index 000000000..f84490c7d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/auth/utils.go @@ -0,0 +1,64 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package auth + +import ( + "bytes" + "crypto/sha1" + "crypto/x509" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" + "net/http/httputil" + "strings" +) + +// httpGet makes a simple HTTP GET request to the given URL, expecting only "200 OK" status code. +// This is basically for the Instance Metadata Service. 
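Before moving on to the metadata helpers, here is a self-contained sketch of the token handling above: it builds a toy unsigned JWT, re-applies the base64url padding the way decodePart does, and checks the exp claim. The claim names other than exp and the expiry window are made up, and the local decodePart is a re-implementation for the sketch.

```
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// decodePart pads the base64url segment to a multiple of four characters
// before decoding, matching the padding logic in the parser above.
func decodePart(s string) ([]byte, error) {
	if l := len(s) % 4; l > 0 {
		s += strings.Repeat("=", 4-l)
	}
	return base64.URLEncoding.DecodeString(s)
}

func main() {
	// Build a toy, unsigned JWT: header.payload.signature. The signature is a
	// placeholder, since the parser above never verifies it.
	header, _ := json.Marshal(map[string]interface{}{"alg": "none", "typ": "JWT"})
	payload, _ := json.Marshal(map[string]interface{}{
		"sub": "example",
		"exp": time.Now().Add(20 * time.Minute).Unix(),
	})
	enc := base64.RawURLEncoding.EncodeToString
	token := enc(header) + "." + enc(payload) + "." + enc([]byte("sig"))

	parts := strings.Split(token, ".")
	claimBytes, err := decodePart(parts[1])
	if err != nil {
		panic(err)
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(claimBytes, &claims); err != nil {
		panic(err)
	}
	exp := int64(claims["exp"].(float64)) // JSON numbers decode as float64
	fmt.Println("expired:", exp <= time.Now().Unix())
}
```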
+func httpGet(url string) (body bytes.Buffer, err error) { + var response *http.Response + if response, err = http.Get(url); err != nil { + return + } + + common.IfDebug(func() { + if dump, e := httputil.DumpResponse(response, true); e == nil { + common.Logf("Dump Response %v", string(dump)) + } else { + common.Debugln(e) + } + }) + + defer response.Body.Close() + if _, err = body.ReadFrom(response.Body); err != nil { + return + } + + if response.StatusCode != http.StatusOK { + err = fmt.Errorf("HTTP Get failed: URL: %s, Status: %s, Message: %s", + url, response.Status, body.String()) + return + } + + return +} + +func extractTenancyIDFromCertificate(cert *x509.Certificate) string { + for _, nameAttr := range cert.Subject.Names { + value := nameAttr.Value.(string) + if strings.HasPrefix(value, "opc-tenant:") { + return value[len("opc-tenant:"):] + } + } + return "" +} + +func fingerprint(certificate *x509.Certificate) string { + fingerprint := sha1.Sum(certificate.Raw) + return colonSeparatedString(fingerprint) +} + +func colonSeparatedString(fingerprint [sha1.Size]byte) string { + spaceSeparated := fmt.Sprintf("% x", fingerprint) + return strings.Replace(spaceSeparated, " ", ":", -1) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/client.go b/vendor/github.com/oracle/oci-go-sdk/common/client.go new file mode 100644 index 000000000..d67e99777 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/client.go @@ -0,0 +1,253 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +// Package common provides supporting functions and structs used by service packages +package common + +import ( + "context" + "fmt" + "net/http" + "net/http/httputil" + "net/url" + "os" + "os/user" + "path" + "runtime" + "strings" + "time" +) + +const ( + // DefaultHostURLTemplate The default url template for service hosts + DefaultHostURLTemplate = "%s.%s.oraclecloud.com" + defaultScheme = "https" + defaultSDKMarker = "Oracle-GoSDK" + defaultUserAgentTemplate = "%s/%s (%s/%s; go/%s)" //SDK/SDKVersion (OS/OSVersion; Lang/LangVersion) + defaultTimeout = time.Second * 30 + defaultConfigFileName = "config" + defaultConfigDirName = ".oci" + secondaryConfigDirName = ".oraclebmc" + maxBodyLenForDebug = 1024 * 1000 +) + +// RequestInterceptor function used to customize the request before calling the underlying service +type RequestInterceptor func(*http.Request) error + +// HTTPRequestDispatcher wraps the execution of a http request, it is generally implemented by +// http.Client.Do, but can be customized for testing +type HTTPRequestDispatcher interface { + Do(req *http.Request) (*http.Response, error) +} + +// BaseClient struct implements all basic operations to call oci web services. 
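As an aside, the colon-separated fingerprint produced by fingerprint and colonSeparatedString above boils down to a single Sprintf verb. The sketch below applies it to stand-in bytes rather than a real certificate's DER, which is the only assumption made here.

```
package main

import (
	"crypto/sha1"
	"fmt"
	"strings"
)

func main() {
	// Stand-in for certificate.Raw (the DER bytes of a real certificate).
	der := []byte("not a real certificate, just illustrative bytes")

	sum := sha1.Sum(der)

	// "% x" renders the byte array as space-separated hex pairs; swapping the
	// spaces for colons yields the familiar ab:cd:ef... fingerprint form.
	spaceSeparated := fmt.Sprintf("% x", sum)
	fmt.Println(strings.Replace(spaceSeparated, " ", ":", -1))
}
```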
+type BaseClient struct { + //HTTPClient performs the http network operations + HTTPClient HTTPRequestDispatcher + + //Signer performs auth operation + Signer HTTPRequestSigner + + //A request interceptor can be used to customize the request before signing and dispatching + Interceptor RequestInterceptor + + //The host of the service + Host string + + //The user agent + UserAgent string + + //Base path for all operations of this client + BasePath string +} + +func defaultUserAgent() string { + userAgent := fmt.Sprintf(defaultUserAgentTemplate, defaultSDKMarker, Version(), runtime.GOOS, runtime.GOARCH, runtime.Version()) + return userAgent +} + +func newBaseClient(signer HTTPRequestSigner, dispatcher HTTPRequestDispatcher) BaseClient { + return BaseClient{ + UserAgent: defaultUserAgent(), + Interceptor: nil, + Signer: signer, + HTTPClient: dispatcher, + } +} + +func defaultHTTPDispatcher() http.Client { + httpClient := http.Client{ + Timeout: defaultTimeout, + } + return httpClient +} + +func defaultBaseClient(provider KeyProvider) BaseClient { + dispatcher := defaultHTTPDispatcher() + signer := DefaultRequestSigner(provider) + return newBaseClient(signer, &dispatcher) +} + +//DefaultBaseClientWithSigner creates a default base client with a given signer +func DefaultBaseClientWithSigner(signer HTTPRequestSigner) BaseClient { + dispatcher := defaultHTTPDispatcher() + return newBaseClient(signer, &dispatcher) +} + +// NewClientWithConfig Create a new client with a configuration provider, the configuration provider +// will be used for the default signer as well as reading the region +// This function does not check for valid regions to implement forward compatibility +func NewClientWithConfig(configProvider ConfigurationProvider) (client BaseClient, err error) { + var ok bool + if ok, err = IsConfigurationProviderValid(configProvider); !ok { + err = fmt.Errorf("can not create client, bad configuration: %s", err.Error()) + return + } + + client = defaultBaseClient(configProvider) + return +} + +func getHomeFolder() string { + current, e := user.Current() + if e != nil { + //Give up and try to return something sensible + home := os.Getenv("HOME") + if home == "" { + home = os.Getenv("USERPROFILE") + } + return home + } + return current.HomeDir +} + +// DefaultConfigProvider returns the default config provider. The default config provider +// will look for configurations in 3 places: file in $HOME/.oci/config, HOME/.obmcs/config and +// variables names starting with the string TF_VAR. If the same configuration is found in multiple +// places the provider will prefer the first one. 
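The reason HTTPClient is typed as the small HTTPRequestDispatcher interface rather than *http.Client is testability: anything with a matching Do method can stand in for the real client. A hedged sketch follows; cannedDispatcher and the URL are hypothetical test fixtures, not part of the SDK.

```
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

// HTTPRequestDispatcher mirrors the interface above.
type HTTPRequestDispatcher interface {
	Do(req *http.Request) (*http.Response, error)
}

// cannedDispatcher returns a fixed response without touching the network.
type cannedDispatcher struct{ body string }

func (d cannedDispatcher) Do(req *http.Request) (*http.Response, error) {
	return &http.Response{
		StatusCode: http.StatusOK,
		Body:       ioutil.NopCloser(strings.NewReader(d.body)),
		Header:     make(http.Header),
		Request:    req,
	}, nil
}

func call(dispatcher HTTPRequestDispatcher) (string, error) {
	req, err := http.NewRequest(http.MethodGet, "https://example.invalid/v1/ping", nil)
	if err != nil {
		return "", err
	}
	resp, err := dispatcher.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := ioutil.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	out, err := call(cannedDispatcher{body: `{"ok":true}`})
	fmt.Println(out, err)
}
```

In production the same call path runs against a plain http.Client, since *http.Client satisfies Do with the same signature.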
+func DefaultConfigProvider() ConfigurationProvider { + homeFolder := getHomeFolder() + defaultConfigFile := path.Join(homeFolder, defaultConfigDirName, defaultConfigFileName) + secondaryConfigFile := path.Join(homeFolder, secondaryConfigDirName, defaultConfigFileName) + + defaultFileProvider, _ := ConfigurationProviderFromFile(defaultConfigFile, "") + secondaryFileProvider, _ := ConfigurationProviderFromFile(secondaryConfigFile, "") + environmentProvider := environmentConfigurationProvider{EnvironmentVariablePrefix: "TF_VAR"} + + provider, _ := ComposingConfigurationProvider([]ConfigurationProvider{defaultFileProvider, secondaryFileProvider, environmentProvider}) + Debugf("Configuration provided by: %s", provider) + return provider +} + +func (client *BaseClient) prepareRequest(request *http.Request) (err error) { + if client.UserAgent == "" { + return fmt.Errorf("user agent can not be blank") + } + + if request.Header == nil { + request.Header = http.Header{} + } + request.Header.Set("User-Agent", client.UserAgent) + + if !strings.Contains(client.Host, "http") && + !strings.Contains(client.Host, "https") { + client.Host = fmt.Sprintf("%s://%s", defaultScheme, client.Host) + } + + clientURL, err := url.Parse(client.Host) + if err != nil { + return fmt.Errorf("host is invalid. %s", err.Error()) + } + request.URL.Host = clientURL.Host + request.URL.Scheme = clientURL.Scheme + currentPath := request.URL.Path + request.URL.Path = path.Clean(fmt.Sprintf("/%s/%s", client.BasePath, currentPath)) + return +} + +func (client BaseClient) intercept(request *http.Request) (err error) { + if client.Interceptor != nil { + err = client.Interceptor(request) + } + return +} + +func checkForSuccessfulResponse(res *http.Response) error { + familyStatusCode := res.StatusCode / 100 + if familyStatusCode == 4 || familyStatusCode == 5 { + return newServiceFailureFromResponse(res) + } + return nil + +} + +//Call executes the underlying http requrest with the given context +func (client BaseClient) Call(ctx context.Context, request *http.Request) (response *http.Response, err error) { + Debugln("Atempting to call downstream service") + request = request.WithContext(ctx) + + err = client.prepareRequest(request) + if err != nil { + return + } + + //Intercept + err = client.intercept(request) + if err != nil { + return + } + + //Sign the request + err = client.Signer.Sign(request) + if err != nil { + return + } + + IfDebug(func() { + dumpBody := true + if request.ContentLength > maxBodyLenForDebug { + Logln("not dumping body too big") + dumpBody = false + } + if dump, e := httputil.DumpRequest(request, dumpBody); e == nil { + Logf("Dump Request %v", string(dump)) + } else { + Debugln(e) + } + }) + + //Execute the http request + response, err = client.HTTPClient.Do(request) + + IfDebug(func() { + if err != nil { + Logln(err) + return + } + + dumpBody := true + if response.ContentLength > maxBodyLenForDebug { + Logln("not dumping body too big") + dumpBody = false + } + + if dump, e := httputil.DumpResponse(response, dumpBody); e == nil { + Logf("Dump Response %v", string(dump)) + } else { + Debugln(e) + } + }) + + if err != nil { + return + } + + err = checkForSuccessfulResponse(response) + return +} + +//CloseBodyIfValid closes the body of an http response if the response and the body are valid +func CloseBodyIfValid(httpResponse *http.Response) { + if httpResponse != nil && httpResponse.Body != nil { + httpResponse.Body.Close() + } +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/common.go 
b/vendor/github.com/oracle/oci-go-sdk/common/common.go new file mode 100644 index 000000000..9164562b4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/common.go @@ -0,0 +1,39 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package common + +import ( + "strings" +) + +//Region type for regions +type Region string + +const ( + //RegionSEA region SEA + RegionSEA Region = "sea" + //RegionPHX region PHX + RegionPHX Region = "us-phoenix-1" + //RegionIAD region IAD + RegionIAD Region = "us-ashburn-1" + //RegionFRA region FRA + RegionFRA Region = "eu-frankfurt-1" +) + +//StringToRegion convert a string to Region type +func StringToRegion(stringRegion string) (r Region) { + switch strings.ToLower(stringRegion) { + case "sea": + r = RegionSEA + case "phx", "us-phoenix-1": + r = RegionPHX + case "iad", "us-ashburn-1": + r = RegionIAD + case "fra", "eu-frankfurt-1": + r = RegionFRA + default: + r = Region(stringRegion) + Debugf("region named: %s, is not recognized", stringRegion) + } + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/configuration.go b/vendor/github.com/oracle/oci-go-sdk/common/configuration.go new file mode 100644 index 000000000..d2292bb9d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/configuration.go @@ -0,0 +1,487 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package common + +import ( + "crypto/rsa" + "errors" + "fmt" + "io/ioutil" + "os" + "regexp" + "strings" +) + +// ConfigurationProvider wraps information about the account owner +type ConfigurationProvider interface { + KeyProvider + TenancyOCID() (string, error) + UserOCID() (string, error) + KeyFingerprint() (string, error) + Region() (string, error) +} + +// IsConfigurationProviderValid Tests all parts of the configuration provider do not return an error +func IsConfigurationProviderValid(conf ConfigurationProvider) (ok bool, err error) { + baseFn := []func() (string, error){conf.TenancyOCID, conf.UserOCID, conf.KeyFingerprint, conf.Region, conf.KeyID} + for _, fn := range baseFn { + _, err = fn() + ok = err == nil + if err != nil { + return + } + } + + _, err = conf.PrivateRSAKey() + ok = err == nil + if err != nil { + return + } + return true, nil +} + +// rawConfigurationProvider allows a user to simply construct a configuration provider from raw values. 
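A short usage sketch of the ConfigurationProvider contract, using the raw provider constructor defined just below and the vendored import path shown in this diff. The tenancy and user OCIDs, fingerprint, and private key are placeholders, so validation is expected to fail on the key, which is exactly what IsConfigurationProviderValid is meant to catch.

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/common"
)

func main() {
	// Placeholder values; real OCIDs, fingerprints and keys come from the tenancy.
	privateKeyPEM := "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----"

	provider := common.NewRawConfigurationProvider(
		"ocid1.tenancy.oc1..example",
		"ocid1.user.oc1..example",
		"us-phoenix-1",
		"aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99",
		privateKeyPEM,
		nil, // no passphrase
	)

	// IsConfigurationProviderValid exercises every getter, including the private
	// key, so the placeholder key above is rejected here by design.
	if ok, err := common.IsConfigurationProviderValid(provider); !ok {
		fmt.Println("configuration rejected:", err)
	}
}
```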
+type rawConfigurationProvider struct { + tenancy string + user string + region string + fingerprint string + privateKey string + privateKeyPassphrase *string +} + +// NewRawConfigurationProvider will create a rawConfigurationProvider +func NewRawConfigurationProvider(tenancy, user, region, fingerprint, privateKey string, privateKeyPassphrase *string) ConfigurationProvider { + return rawConfigurationProvider{tenancy, user, region, fingerprint, privateKey, privateKeyPassphrase} +} + +func (p rawConfigurationProvider) PrivateRSAKey() (key *rsa.PrivateKey, err error) { + return PrivateKeyFromBytes([]byte(p.privateKey), p.privateKeyPassphrase) +} + +func (p rawConfigurationProvider) KeyID() (keyID string, err error) { + tenancy, err := p.TenancyOCID() + if err != nil { + return + } + + user, err := p.UserOCID() + if err != nil { + return + } + + fingerprint, err := p.KeyFingerprint() + if err != nil { + return + } + + return fmt.Sprintf("%s/%s/%s", tenancy, user, fingerprint), nil +} + +func (p rawConfigurationProvider) TenancyOCID() (string, error) { + return p.tenancy, nil +} + +func (p rawConfigurationProvider) UserOCID() (string, error) { + return p.user, nil +} + +func (p rawConfigurationProvider) KeyFingerprint() (string, error) { + return p.fingerprint, nil +} + +func (p rawConfigurationProvider) Region() (string, error) { + return p.region, nil +} + +// environmentConfigurationProvider reads configuration from environment variables +type environmentConfigurationProvider struct { // TODO: Support Instance Principal + PrivateKeyPassword string + EnvironmentVariablePrefix string +} + +// ConfigurationProviderEnvironmentVariables creates a ConfigurationProvider from a uniform set of environment variables starting with a prefix +// The env variables should look like: [prefix]_private_key_path, [prefix]_tenancy_ocid, [prefix]_user_ocid, [prefix]_fingerprint +// [prefix]_region +func ConfigurationProviderEnvironmentVariables(environmentVariablePrefix, privateKeyPassword string) ConfigurationProvider { + return environmentConfigurationProvider{EnvironmentVariablePrefix: environmentVariablePrefix, + PrivateKeyPassword: privateKeyPassword} +} + +func (p environmentConfigurationProvider) String() string { + return fmt.Sprintf("Configuration provided by environment variables prefixed with: %s", p.EnvironmentVariablePrefix) +} + +func (p environmentConfigurationProvider) PrivateRSAKey() (key *rsa.PrivateKey, err error) { + environmentVariable := fmt.Sprintf("%s_%s", p.EnvironmentVariablePrefix, "private_key_path") + var ok bool + var value string + if value, ok = os.LookupEnv(environmentVariable); !ok { + return nil, fmt.Errorf("can not read PrivateKey from env variable: %s", environmentVariable) + } + + pemFileContent, err := ioutil.ReadFile(value) + if err != nil { + Debugln("Can not read PrivateKey location from environment variable: " + environmentVariable) + return + } + + key, err = PrivateKeyFromBytes(pemFileContent, &p.PrivateKeyPassword) + return +} + +func (p environmentConfigurationProvider) KeyID() (keyID string, err error) { + ocid, err := p.TenancyOCID() + if err != nil { + return + } + + userocid, err := p.UserOCID() + if err != nil { + return + } + + fingerprint, err := p.KeyFingerprint() + if err != nil { + return + } + + return fmt.Sprintf("%s/%s/%s", ocid, userocid, fingerprint), nil +} + +func (p environmentConfigurationProvider) TenancyOCID() (value string, err error) { + environmentVariable := fmt.Sprintf("%s_%s", p.EnvironmentVariablePrefix, "tenancy_ocid") + var ok bool + if 
value, ok = os.LookupEnv(environmentVariable); !ok { + err = fmt.Errorf("can not read Tenancy from environment variable %s", environmentVariable) + } + return +} + +func (p environmentConfigurationProvider) UserOCID() (value string, err error) { + environmentVariable := fmt.Sprintf("%s_%s", p.EnvironmentVariablePrefix, "user_ocid") + var ok bool + if value, ok = os.LookupEnv(environmentVariable); !ok { + err = fmt.Errorf("can not read user id from environment variable %s", environmentVariable) + } + return +} + +func (p environmentConfigurationProvider) KeyFingerprint() (value string, err error) { + environmentVariable := fmt.Sprintf("%s_%s", p.EnvironmentVariablePrefix, "fingerprint") + var ok bool + if value, ok = os.LookupEnv(environmentVariable); !ok { + err = fmt.Errorf("can not read fingerprint from environment variable %s", environmentVariable) + } + return +} + +func (p environmentConfigurationProvider) Region() (value string, err error) { + environmentVariable := fmt.Sprintf("%s_%s", p.EnvironmentVariablePrefix, "region") + var ok bool + if value, ok = os.LookupEnv(environmentVariable); !ok { + err = fmt.Errorf("can not read region from environment variable %s", environmentVariable) + } + return +} + +// fileConfigurationProvider. reads configuration information from a file +type fileConfigurationProvider struct { // TODO: Support Instance Principal + //The path to the configuration file + ConfigPath string + + //The password for the private key + PrivateKeyPassword string + + //The profile for the configuration + Profile string + + //ConfigFileInfo + FileInfo *configFileInfo +} + +// ConfigurationProviderFromFile creates a configuration provider from a configuration file +// by reading the "DEFAULT" profile +func ConfigurationProviderFromFile(configFilePath, privateKeyPassword string) (ConfigurationProvider, error) { + if configFilePath == "" { + return nil, fmt.Errorf("config file path can not be empty") + } + + return fileConfigurationProvider{ + ConfigPath: configFilePath, + PrivateKeyPassword: privateKeyPassword, + Profile: "DEFAULT"}, nil +} + +// ConfigurationProviderFromFileWithProfile creates a configuration provider from a configuration file +// and the given profile +func ConfigurationProviderFromFileWithProfile(configFilePath, profile, privateKeyPassword string) (ConfigurationProvider, error) { + if configFilePath == "" { + return nil, fmt.Errorf("config file path can not be empty") + } + + return fileConfigurationProvider{ + ConfigPath: configFilePath, + PrivateKeyPassword: privateKeyPassword, + Profile: profile}, nil +} + +type configFileInfo struct { + UserOcid, Fingerprint, KeyFilePath, TenancyOcid, Region string + PresentConfiguration byte +} + +const ( + hasTenancy = 1 << iota + hasUser + hasFingerprint + hasRegion + hasKeyFile + none +) + +var profileRegex = regexp.MustCompile(`^\[(.*)\]`) + +func parseConfigFile(data []byte, profile string) (info *configFileInfo, err error) { + + if len(data) == 0 { + return nil, fmt.Errorf("configuration file content is empty") + } + + content := string(data) + splitContent := strings.Split(content, "\n") + + //Look for profile + for i, line := range splitContent { + if match := profileRegex.FindStringSubmatch(line); match != nil && len(match) > 1 && match[1] == profile { + start := i + 1 + return parseConfigAtLine(start, splitContent) + } + } + + return nil, fmt.Errorf("configuration file did not contain profile: %s", profile) +} + +func parseConfigAtLine(start int, content []string) (info *configFileInfo, err error) { + var 
configurationPresent byte + info = &configFileInfo{} + for i := start; i < len(content); i++ { + line := content[i] + if profileRegex.MatchString(line) { + break + } + + if !strings.Contains(line, "=") { + continue + } + + splits := strings.Split(line, "=") + switch key, value := strings.TrimSpace(splits[0]), strings.TrimSpace(splits[1]); strings.ToLower(key) { + case "user": + configurationPresent = configurationPresent | hasUser + info.UserOcid = value + case "fingerprint": + configurationPresent = configurationPresent | hasFingerprint + info.Fingerprint = value + case "key_file": + configurationPresent = configurationPresent | hasKeyFile + info.KeyFilePath = value + case "tenancy": + configurationPresent = configurationPresent | hasTenancy + info.TenancyOcid = value + case "region": + configurationPresent = configurationPresent | hasRegion + info.Region = value + } + } + info.PresentConfiguration = configurationPresent + return + +} + +func openConfigFile(configFilePath string) (data []byte, err error) { + data, err = ioutil.ReadFile(configFilePath) + if err != nil { + err = fmt.Errorf("can not read config file: %s due to: %s", configFilePath, err.Error()) + } + + return +} + +func (p fileConfigurationProvider) String() string { + return fmt.Sprintf("Configuration provided by file: %s", p.ConfigPath) +} + +func (p fileConfigurationProvider) readAndParseConfigFile() (info *configFileInfo, err error) { + if p.FileInfo != nil { + return p.FileInfo, nil + } + + if p.ConfigPath == "" { + return nil, fmt.Errorf("configuration path can not be empty") + } + + data, err := openConfigFile(p.ConfigPath) + if err != nil { + err = fmt.Errorf("error while parsing config file: %s. Due to: %s", p.ConfigPath, err.Error()) + return + } + + p.FileInfo, err = parseConfigFile(data, p.Profile) + return p.FileInfo, err +} + +func presentOrError(value string, expectedConf, presentConf byte, confMissing string) (string, error) { + if presentConf&expectedConf == expectedConf { + return value, nil + } + return "", errors.New(confMissing + " configuration is missing from file") +} + +func (p fileConfigurationProvider) TenancyOCID() (value string, err error) { + info, err := p.readAndParseConfigFile() + if err != nil { + err = fmt.Errorf("can not read tenancy configuration due to: %s", err.Error()) + return + } + + value, err = presentOrError(info.TenancyOcid, hasTenancy, info.PresentConfiguration, "tenancy") + return +} + +func (p fileConfigurationProvider) UserOCID() (value string, err error) { + info, err := p.readAndParseConfigFile() + if err != nil { + err = fmt.Errorf("can not read tenancy configuration due to: %s", err.Error()) + return + } + + value, err = presentOrError(info.UserOcid, hasUser, info.PresentConfiguration, "user") + return +} + +func (p fileConfigurationProvider) KeyFingerprint() (value string, err error) { + info, err := p.readAndParseConfigFile() + if err != nil { + err = fmt.Errorf("can not read tenancy configuration due to: %s", err.Error()) + return + } + value, err = presentOrError(info.Fingerprint, hasFingerprint, info.PresentConfiguration, "fingerprint") + return +} + +func (p fileConfigurationProvider) KeyID() (keyID string, err error) { + info, err := p.readAndParseConfigFile() + if err != nil { + err = fmt.Errorf("can not read tenancy configuration due to: %s", err.Error()) + return + } + + return fmt.Sprintf("%s/%s/%s", info.TenancyOcid, info.UserOcid, info.Fingerprint), nil +} + +func (p fileConfigurationProvider) PrivateRSAKey() (key *rsa.PrivateKey, err error) { + info, err := 
p.readAndParseConfigFile() + if err != nil { + err = fmt.Errorf("can not read tenancy configuration due to: %s", err.Error()) + return + } + + filePath, err := presentOrError(info.KeyFilePath, hasKeyFile, info.PresentConfiguration, "key file path") + if err != nil { + return + } + pemFileContent, err := ioutil.ReadFile(filePath) + if err != nil { + err = fmt.Errorf("can not read PrivateKey from configuration file due to: %s", err.Error()) + return + } + + key, err = PrivateKeyFromBytes(pemFileContent, &p.PrivateKeyPassword) + return +} + +func (p fileConfigurationProvider) Region() (value string, err error) { + info, err := p.readAndParseConfigFile() + if err != nil { + err = fmt.Errorf("can not read region configuration due to: %s", err.Error()) + return + } + + value, err = presentOrError(info.Region, hasRegion, info.PresentConfiguration, "region") + return +} + +// A configuration provider that look for information in multiple configuration providers +type composingConfigurationProvider struct { + Providers []ConfigurationProvider +} + +// ComposingConfigurationProvider creates a composing configuration provider with the given slice of configuration providers +// A composing provider will return the configuration of the first provider that has the required property +// if no provider has the property it will return an error. +func ComposingConfigurationProvider(providers []ConfigurationProvider) (ConfigurationProvider, error) { + if len(providers) == 0 { + return nil, fmt.Errorf("providers can not be an empty slice") + } + return composingConfigurationProvider{Providers: providers}, nil +} + +func (c composingConfigurationProvider) TenancyOCID() (string, error) { + for _, p := range c.Providers { + val, err := p.TenancyOCID() + if err == nil { + return val, nil + } + } + return "", fmt.Errorf("did not find a proper configuration for tenancy") +} + +func (c composingConfigurationProvider) UserOCID() (string, error) { + for _, p := range c.Providers { + val, err := p.UserOCID() + if err == nil { + return val, nil + } + } + return "", fmt.Errorf("did not find a proper configuration for user") +} + +func (c composingConfigurationProvider) KeyFingerprint() (string, error) { + for _, p := range c.Providers { + val, err := p.KeyFingerprint() + if err == nil { + return val, nil + } + } + return "", fmt.Errorf("did not find a proper configuration for keyFingerprint") +} +func (c composingConfigurationProvider) Region() (string, error) { + for _, p := range c.Providers { + val, err := p.Region() + if err == nil { + return val, nil + } + } + return "", fmt.Errorf("did not find a proper configuration for region") +} + +func (c composingConfigurationProvider) KeyID() (string, error) { + for _, p := range c.Providers { + val, err := p.KeyID() + if err == nil { + return val, nil + } + } + return "", fmt.Errorf("did not find a proper configuration for key id") +} + +func (c composingConfigurationProvider) PrivateRSAKey() (*rsa.PrivateKey, error) { + for _, p := range c.Providers { + val, err := p.PrivateRSAKey() + if err == nil { + return val, nil + } + } + return nil, fmt.Errorf("did not find a proper configuration for private key") +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/errors.go b/vendor/github.com/oracle/oci-go-sdk/common/errors.go new file mode 100644 index 000000000..290bd08bf --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/errors.go @@ -0,0 +1,80 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. 
+ +package common + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "net/http" +) + +// ServiceError models all potential errors generated the service call +type ServiceError interface { + // The http status code of the error + GetHTTPStatusCode() int + + // The human-readable error string as sent by the service + GetMessage() string + + // A short error code that defines the error, meant for programmatic parsing. + // See https://docs.us-phoenix-1.oraclecloud.com/Content/API/References/apierrors.htm + GetCode() string +} + +type servicefailure struct { + StatusCode int + Code string `json:"code,omitempty"` + Message string `json:"message,omitempty"` +} + +func newServiceFailureFromResponse(response *http.Response) error { + var err error + + //If there is an error consume the body, entirely + body, err := ioutil.ReadAll(response.Body) + if err != nil { + return servicefailure{ + StatusCode: response.StatusCode, + Code: "BadErrorResponse", + Message: fmt.Sprintf("The body of the response was not readable, due to :%s", err.Error()), + } + } + + se := servicefailure{StatusCode: response.StatusCode} + err = json.Unmarshal(body, &se) + if err != nil { + Debugf("Error response could not be parsed due to: %s", err.Error()) + return servicefailure{ + StatusCode: response.StatusCode, + Code: "BadErrorResponse", + Message: fmt.Sprintf("Error while parsing failure from response"), + } + } + return se +} + +func (se servicefailure) Error() string { + return fmt.Sprintf("Service error:%s. %s. http status code: %d", + se.Code, se.Message, se.StatusCode) +} + +func (se servicefailure) GetHTTPStatusCode() int { + return se.StatusCode + +} + +func (se servicefailure) GetMessage() string { + return se.Message +} + +func (se servicefailure) GetCode() string { + return se.Code +} + +// IsServiceError returns false if the error is not service side, otherwise true +// additionally it returns an interface representing the ServiceError +func IsServiceError(err error) (failure ServiceError, ok bool) { + failure, ok = err.(servicefailure) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/helpers.go b/vendor/github.com/oracle/oci-go-sdk/common/helpers.go new file mode 100644 index 000000000..554436944 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/helpers.go @@ -0,0 +1,168 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package common + +import ( + "crypto/rsa" + "crypto/x509" + "encoding/pem" + "fmt" + "reflect" + "strings" + "time" +) + +// String returns a pointer to the provided string +func String(value string) *string { + return &value +} + +// Int returns a pointer to the provided int +func Int(value int) *int { + return &value +} + +// Uint returns a pointer to the provided uint +func Uint(value uint) *uint { + return &value +} + +//Float32 returns a pointer to the provided float32 +func Float32(value float32) *float32 { + return &value +} + +//Float64 returns a pointer to the provided float64 +func Float64(value float64) *float64 { + return &value +} + +//Bool returns a pointer to the provided bool +func Bool(value bool) *bool { + return &value +} + +//PointerString prints the values of pointers in a struct +//Producing a human friendly string for an struct with pointers. 
+//useful when debugging the values of a struct +func PointerString(datastruct interface{}) (representation string) { + val := reflect.ValueOf(datastruct) + typ := reflect.TypeOf(datastruct) + all := make([]string, 2) + all = append(all, "{") + for i := 0; i < typ.NumField(); i++ { + sf := typ.Field(i) + + //unexported + if sf.PkgPath != "" && !sf.Anonymous { + continue + } + + sv := val.Field(i) + stringValue := "" + if isNil(sv) { + stringValue = fmt.Sprintf("%s=", sf.Name) + } else { + if sv.Type().Kind() == reflect.Ptr { + sv = sv.Elem() + } + stringValue = fmt.Sprintf("%s=%v", sf.Name, sv) + } + all = append(all, stringValue) + } + all = append(all, "}") + representation = strings.TrimSpace(strings.Join(all, " ")) + return +} + +// SDKTime a time struct, which renders to/from json using RFC339 +type SDKTime struct { + time.Time +} + +func sdkTimeFromTime(t time.Time) SDKTime { + return SDKTime{t} +} + +func now() *SDKTime { + t := SDKTime{time.Now()} + return &t +} + +var timeType = reflect.TypeOf(SDKTime{}) +var timeTypePtr = reflect.TypeOf(&SDKTime{}) + +const sdkTimeFormat = time.RFC3339 + +const rfc1123OptionalLeadingDigitsInDay = "Mon, _2 Jan 2006 15:04:05 MST" + +func formatTime(t SDKTime) string { + return t.Format(sdkTimeFormat) +} + +func tryParsingTimeWithValidFormatsForHeaders(data []byte, headerName string) (t time.Time, err error) { + header := strings.ToLower(headerName) + switch header { + case "lastmodified", "date": + t, err = tryParsing(data, time.RFC3339, time.RFC1123, rfc1123OptionalLeadingDigitsInDay, time.RFC850, time.ANSIC) + return + default: //By default we parse with RFC3339 + t, err = time.Parse(sdkTimeFormat, string(data)) + return + } +} + +func tryParsing(data []byte, layouts ...string) (tm time.Time, err error) { + datestring := string(data) + for _, l := range layouts { + tm, err = time.Parse(l, datestring) + if err == nil { + return + } + } + err = fmt.Errorf("Could not parse time: %s with formats: %s", datestring, layouts[:]) + return +} + +// UnmarshalJSON unmarshals from json +func (t *SDKTime) UnmarshalJSON(data []byte) (e error) { + s := string(data) + if s == "null" { + t.Time = time.Time{} + } else { + //Try parsing with RFC3339 + t.Time, e = time.Parse(`"`+sdkTimeFormat+`"`, string(data)) + } + return +} + +// MarshalJSON marshals to JSON +func (t *SDKTime) MarshalJSON() (buff []byte, e error) { + s := t.Format(sdkTimeFormat) + buff = []byte(`"` + s + `"`) + return +} + +// PrivateKeyFromBytes is a helper function that will produce a RSA private +// key from bytes. +func PrivateKeyFromBytes(pemData []byte, password *string) (key *rsa.PrivateKey, e error) { + if pemBlock, _ := pem.Decode(pemData); pemBlock != nil { + decrypted := pemBlock.Bytes + if x509.IsEncryptedPEMBlock(pemBlock) { + if password == nil { + e = fmt.Errorf("private_key_password is required for encrypted private keys") + return + } + if decrypted, e = x509.DecryptPEMBlock(pemBlock, []byte(*password)); e != nil { + return + } + } + + key, e = x509.ParsePKCS1PrivateKey(decrypted) + + } else { + e = fmt.Errorf("PEM data was not found in buffer") + return + } + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/http.go b/vendor/github.com/oracle/oci-go-sdk/common/http.go new file mode 100644 index 000000000..5e75ed4b3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/http.go @@ -0,0 +1,863 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. 
+ +package common + +import ( + "bytes" + "encoding/json" + "fmt" + "io" + "io/ioutil" + "net/http" + "net/url" + "path" + "reflect" + "regexp" + "strconv" + "strings" + "time" +) + +/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// +/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// +//Request Marshaling +//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// +//////////////////////////////////////////////////////////////////////////////////////////////////////////////// + +func isNil(v reflect.Value) bool { + return v.Kind() == reflect.Ptr && v.IsNil() +} + +// Returns the string representation of a reflect.Value +// Only transforms primitive values +func toStringValue(v reflect.Value, field reflect.StructField) (string, error) { + if v.Kind() == reflect.Ptr { + if v.IsNil() { + return "", fmt.Errorf("can not marshal a nil pointer") + } + v = v.Elem() + } + + if v.Type() == timeType { + t := v.Interface().(SDKTime) + return formatTime(t), nil + } + + switch v.Kind() { + case reflect.Bool: + return strconv.FormatBool(v.Bool()), nil + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return strconv.FormatInt(v.Int(), 10), nil + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return strconv.FormatUint(v.Uint(), 10), nil + case reflect.String: + return v.String(), nil + case reflect.Float32: + return strconv.FormatFloat(v.Float(), 'f', 6, 32), nil + case reflect.Float64: + return strconv.FormatFloat(v.Float(), 'f', 6, 64), nil + default: + return "", fmt.Errorf("marshaling structure to a http.Request does not support field named: %s of type: %v", + field.Name, v.Type().String()) + } +} + +func addBinaryBody(request *http.Request, value reflect.Value) (e error) { + readCloser, ok := value.Interface().(io.ReadCloser) + if !ok { + e = fmt.Errorf("body of the request needs to be an io.ReadCloser interface. 
Can not marshal body of binary request") + return + } + + request.Body = readCloser + + //Set the default content type to application/octet-stream if not set + if request.Header.Get("Content-Type") == "" { + request.Header.Set("Content-Type", "application/octet-stream") + } + return nil +} + +// getTaggedNilFieldNameOrError, evaluates if a field with json and non mandatory tags is nil +// returns the json tag name, or an error if the tags are incorrectly present +func getTaggedNilFieldNameOrError(field reflect.StructField, fieldValue reflect.Value) (bool, string, error) { + currentTag := field.Tag + jsonTag := currentTag.Get("json") + + if jsonTag == "" { + return false, "", fmt.Errorf("json tag is not valid for field %s", field.Name) + } + + partsJSONTag := strings.Split(jsonTag, ",") + nameJSONField := partsJSONTag[0] + + if _, ok := currentTag.Lookup("mandatory"); !ok { + //No mandatory field set, no-op + return false, nameJSONField, nil + } + isMandatory, err := strconv.ParseBool(currentTag.Get("mandatory")) + if err != nil { + return false, "", fmt.Errorf("mandatory tag is not valid for field %s", field.Name) + } + + // If the field is marked as mandatory, no-op + if isMandatory { + return false, nameJSONField, nil + } + + Debugf("Adjusting tag: mandatory is false and json tag is valid on field: %s", field.Name) + + // If the field can not be nil, then no-op + if !isNillableType(&fieldValue) { + Debugf("WARNING json field is tagged with mandatory flags, but the type can not be nil, field name: %s", field.Name) + return false, nameJSONField, nil + } + + // If field value is nil, tag it as omitEmpty + return fieldValue.IsNil(), nameJSONField, nil + +} + +// isNillableType returns true if the filed can be nil +func isNillableType(value *reflect.Value) bool { + k := value.Kind() + switch k { + case reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr, reflect.Interface, reflect.Slice: + return true + } + return false +} + +// omitNilFieldsInJSON, removes json keys whose struct value is nil, and the field is tag with the json and +// mandatory:false tags +func omitNilFieldsInJSON(data interface{}, value reflect.Value) (interface{}, error) { + switch value.Kind() { + case reflect.Struct: + jsonMap := data.(map[string]interface{}) + fieldType := value.Type() + for i := 0; i < fieldType.NumField(); i++ { + currentField := fieldType.Field(i) + //unexported skip + if currentField.PkgPath != "" { + continue + } + + //Does not have json tag, no-op + if _, ok := currentField.Tag.Lookup("json"); !ok { + continue + } + + currentFieldValue := value.Field(i) + ok, jsonFieldName, err := getTaggedNilFieldNameOrError(currentField, currentFieldValue) + if err != nil { + return nil, fmt.Errorf("can not omit nil fields for field: %s, due to: %s", + currentField.Name, err.Error()) + } + + //Delete the struct field from the json representation + if ok { + delete(jsonMap, jsonFieldName) + continue + } + + if currentFieldValue.Type() == timeType || currentFieldValue.Type() == timeTypePtr { + continue + } + // does it need to be adjusted? 
+ var adjustedValue interface{} + adjustedValue, err = omitNilFieldsInJSON(jsonMap[jsonFieldName], currentFieldValue) + if err != nil { + return nil, fmt.Errorf("can not omit nil fields for field: %s, due to: %s", + currentField.Name, err.Error()) + } + jsonMap[jsonFieldName] = adjustedValue + } + return jsonMap, nil + case reflect.Slice, reflect.Array: + jsonList := data.([]interface{}) + newList := make([]interface{}, len(jsonList)) + var err error + for i, val := range jsonList { + newList[i], err = omitNilFieldsInJSON(val, value.Index(i)) + if err != nil { + return nil, err + } + } + return newList, nil + case reflect.Map: + jsonMap := data.(map[string]interface{}) + newMap := make(map[string]interface{}, len(jsonMap)) + var err error + for key, val := range jsonMap { + newMap[key], err = omitNilFieldsInJSON(val, value.MapIndex(reflect.ValueOf(key))) + if err != nil { + return nil, err + } + } + return newMap, nil + case reflect.Ptr, reflect.Interface: + valPtr := value.Elem() + return omitNilFieldsInJSON(data, valPtr) + default: + //Otherwise no-op + return data, nil + } +} + +// removeNilFieldsInJSONWithTaggedStruct remove struct fields tagged with json and mandatory false +// that are nil +func removeNilFieldsInJSONWithTaggedStruct(rawJSON []byte, value reflect.Value) ([]byte, error) { + rawMap := make(map[string]interface{}) + json.Unmarshal(rawJSON, &rawMap) + fixedMap, err := omitNilFieldsInJSON(rawMap, value) + if err != nil { + return nil, err + } + return json.Marshal(fixedMap) +} + +func addToBody(request *http.Request, value reflect.Value, field reflect.StructField) (e error) { + Debugln("Marshaling to body from field:", field.Name) + if request.Body != nil { + Logln("The body of the request is already set. Structure: ", field.Name, " will overwrite it") + } + tag := field.Tag + encoding := tag.Get("encoding") + + if encoding == "binary" { + return addBinaryBody(request, value) + } + + rawJSON, e := json.Marshal(value.Interface()) + if e != nil { + return + } + marshaled, e := removeNilFieldsInJSONWithTaggedStruct(rawJSON, value) + if e != nil { + return + } + Debugf("Marshaled body is: %s", string(marshaled)) + bodyBytes := bytes.NewReader(marshaled) + request.ContentLength = int64(bodyBytes.Len()) + request.Header.Set("Content-Length", strconv.FormatInt(request.ContentLength, 10)) + request.Header.Set("Content-Type", "application/json") + request.Body = ioutil.NopCloser(bodyBytes) + request.GetBody = func() (io.ReadCloser, error) { + return ioutil.NopCloser(bodyBytes), nil + } + return +} + +func addToQuery(request *http.Request, value reflect.Value, field reflect.StructField) (e error) { + Debugln("Marshaling to query from field:", field.Name) + if request.URL == nil { + request.URL = &url.URL{} + } + query := request.URL.Query() + var queryParameterValue, queryParameterName string + + if queryParameterName = field.Tag.Get("name"); queryParameterName == "" { + return fmt.Errorf("marshaling request to a query requires the 'name' tag for field: %s ", field.Name) + } + + mandatory, _ := strconv.ParseBool(strings.ToLower(field.Tag.Get("mandatory"))) + + //If mandatory and nil. Error out + if mandatory && isNil(value) { + return fmt.Errorf("marshaling request to a header requires not nil pointer for field: %s", field.Name) + } + + //if not mandatory and nil. Omit + if !mandatory && isNil(value) { + Debugf("Query parameter value is not mandatory and is nil pointer in field: %s. 
Skipping header", field.Name) + return + } + + if queryParameterValue, e = toStringValue(value, field); e != nil { + return + } + + //check for tag "omitEmpty", this is done to accomodate unset fields that do not + //support an empty string: enums in query params + if omitEmpty, present := field.Tag.Lookup("omitEmpty"); present { + omitEmptyBool, _ := strconv.ParseBool(strings.ToLower(omitEmpty)) + if queryParameterValue != "" || !omitEmptyBool { + query.Set(queryParameterName, queryParameterValue) + } else { + Debugf("Omitting %s, is empty and omitEmpty tag is set", field.Name) + } + } else { + query.Set(queryParameterName, queryParameterValue) + } + + request.URL.RawQuery = query.Encode() + return +} + +// Adds to the path of the url in the order they appear in the structure +func addToPath(request *http.Request, value reflect.Value, field reflect.StructField) (e error) { + var additionalURLPathPart string + if additionalURLPathPart, e = toStringValue(value, field); e != nil { + return fmt.Errorf("can not marshal to path in request for field %s. Due to %s", field.Name, e.Error()) + } + + if request.URL == nil { + request.URL = &url.URL{} + request.URL.Path = "" + } + var currentURLPath = request.URL.Path + + var templatedPathRegex, _ = regexp.Compile(".*{.+}.*") + if !templatedPathRegex.MatchString(currentURLPath) { + Debugln("Marshaling request to path by appending field:", field.Name) + allPath := []string{currentURLPath, additionalURLPathPart} + newPath := strings.Join(allPath, "/") + request.URL.Path = path.Clean(newPath) + } else { + var fieldName string + if fieldName = field.Tag.Get("name"); fieldName == "" { + e = fmt.Errorf("marshaling request to path name and template requires a 'name' tag for field: %s", field.Name) + return + } + urlTemplate := currentURLPath + Debugln("Marshaling to path from field:", field.Name, "in template:", urlTemplate) + request.URL.Path = path.Clean(strings.Replace(urlTemplate, "{"+fieldName+"}", additionalURLPathPart, -1)) + } + return +} + +func setWellKnownHeaders(request *http.Request, headerName, headerValue string) (e error) { + switch strings.ToLower(headerName) { + case "content-length": + var len int + len, e = strconv.Atoi(headerValue) + if e != nil { + return + } + request.ContentLength = int64(len) + } + return nil +} + +func addToHeader(request *http.Request, value reflect.Value, field reflect.StructField) (e error) { + Debugln("Marshaling to header from field:", field.Name) + if request.Header == nil { + request.Header = http.Header{} + } + + var headerName, headerValue string + if headerName = field.Tag.Get("name"); headerName == "" { + return fmt.Errorf("marshaling request to a header requires the 'name' tag for field: %s", field.Name) + } + + mandatory, _ := strconv.ParseBool(strings.ToLower(field.Tag.Get("mandatory"))) + //If mandatory and nil. Error out + if mandatory && isNil(value) { + return fmt.Errorf("marshaling request to a header requires not nil pointer for field: %s", field.Name) + } + + //if not mandatory and nil. Omit + if !mandatory && isNil(value) { + Debugf("Header value is not mandatory and is nil pointer in field: %s. 
Skipping header", field.Name) + return + } + + //Otherwise get value and set header + if headerValue, e = toStringValue(value, field); e != nil { + return + } + + if e = setWellKnownHeaders(request, headerName, headerValue); e != nil { + return + } + + request.Header.Set(headerName, headerValue) + return +} + +// Header collection is a map of string to string that gets rendered as individual headers with a given prefix +func addToHeaderCollection(request *http.Request, value reflect.Value, field reflect.StructField) (e error) { + Debugln("Marshaling to header-collection from field:", field.Name) + if request.Header == nil { + request.Header = http.Header{} + } + + var headerPrefix string + if headerPrefix = field.Tag.Get("prefix"); headerPrefix == "" { + return fmt.Errorf("marshaling request to a header requires the 'prefix' tag for field: %s", field.Name) + } + + mandatory, _ := strconv.ParseBool(strings.ToLower(field.Tag.Get("mandatory"))) + //If mandatory and nil. Error out + if mandatory && isNil(value) { + return fmt.Errorf("marshaling request to a header requires not nil pointer for field: %s", field.Name) + } + + //if not mandatory and nil. Omit + if !mandatory && isNil(value) { + Debugf("Header value is not mandatory and is nil pointer in field: %s. Skipping header", field.Name) + return + } + + //cast to map + headerValues, ok := value.Interface().(map[string]string) + if !ok { + e = fmt.Errorf("header fields need to be of type map[string]string") + return + } + + for k, v := range headerValues { + headerName := fmt.Sprintf("%s%s", headerPrefix, k) + request.Header.Set(headerName, v) + } + return +} + +// Makes sure the incoming structure is able to be marshalled +// to a request +func checkForValidRequestStruct(s interface{}) (*reflect.Value, error) { + val := reflect.ValueOf(s) + for val.Kind() == reflect.Ptr { + if val.IsNil() { + return nil, fmt.Errorf("can not marshal to request a pointer to structure") + } + val = val.Elem() + } + + if s == nil { + return nil, fmt.Errorf("can not marshal to request a nil structure") + } + + if val.Kind() != reflect.Struct { + return nil, fmt.Errorf("can not marshal to request, expects struct input. Got %v", val.Kind()) + } + + return &val, nil +} + +// Populates the parts of a request by reading tags in the passed structure +// nested structs are followed recursively depth-first. +func structToRequestPart(request *http.Request, val reflect.Value) (err error) { + typ := val.Type() + for i := 0; i < typ.NumField(); i++ { + if err != nil { + return + } + + sf := typ.Field(i) + //unexported + if sf.PkgPath != "" && !sf.Anonymous { + continue + } + + sv := val.Field(i) + tag := sf.Tag.Get("contributesTo") + switch tag { + case "header": + err = addToHeader(request, sv, sf) + case "header-collection": + err = addToHeaderCollection(request, sv, sf) + case "path": + err = addToPath(request, sv, sf) + case "query": + err = addToQuery(request, sv, sf) + case "body": + err = addToBody(request, sv, sf) + case "": + Debugln(sf.Name, "does not contain contributes tag. Skipping.") + default: + err = fmt.Errorf("can not marshal field: %s. 
It needs to contain valid contributesTo tag", sf.Name) + } + } + + //If headers are and the content type was not set, we default to application/json + if request.Header != nil && request.Header.Get("Content-Type") == "" { + request.Header.Set("Content-Type", "application/json") + } + + return +} + +// HTTPRequestMarshaller marshals a structure to an http request using tag values in the struct +// The marshaller tag should like the following +// type A struct { +// ANumber string `contributesTo="query" name="number"` +// TheBody `contributesTo="body"` +// } +// where the contributesTo tag can be: header, path, query, body +// and the 'name' tag is the name of the value used in the http request(not applicable for path) +// If path is specified as part of the tag, the values are appened to the url path +// in the order they appear in the structure +// The current implementation only supports primitive types, except for the body tag, which needs a struct type. +// The body of a request will be marshaled using the tags of the structure +func HTTPRequestMarshaller(requestStruct interface{}, httpRequest *http.Request) (err error) { + var val *reflect.Value + if val, err = checkForValidRequestStruct(requestStruct); err != nil { + return + } + + Debugln("Marshaling to Request:", val.Type().Name()) + err = structToRequestPart(httpRequest, *val) + return +} + +// MakeDefaultHTTPRequest creates the basic http request with the necessary headers set +func MakeDefaultHTTPRequest(method, path string) (httpRequest http.Request) { + httpRequest = http.Request{ + Proto: "HTTP/1.1", + ProtoMajor: 1, + ProtoMinor: 1, + Header: make(http.Header), + URL: &url.URL{}, + } + + httpRequest.Header.Set("Content-Length", "0") + httpRequest.Header.Set("Date", time.Now().UTC().Format(http.TimeFormat)) + httpRequest.Header.Set("Opc-Client-Info", strings.Join([]string{defaultSDKMarker, Version()}, "/")) + httpRequest.Header.Set("Accept", "*/*") + httpRequest.Method = method + httpRequest.URL.Path = path + return +} + +// MakeDefaultHTTPRequestWithTaggedStruct creates an http request from an struct with tagged fields, see HTTPRequestMarshaller +// for more information +func MakeDefaultHTTPRequestWithTaggedStruct(method, path string, requestStruct interface{}) (httpRequest http.Request, err error) { + httpRequest = MakeDefaultHTTPRequest(method, path) + err = HTTPRequestMarshaller(requestStruct, &httpRequest) + return +} + +/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// +/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// +//Request UnMarshaling +//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// +//////////////////////////////////////////////////////////////////////////////////////////////////////////////// + +// Makes sure the incoming structure is able to be unmarshaled +// to a request +func checkForValidResponseStruct(s interface{}) (*reflect.Value, error) { + val := reflect.ValueOf(s) + for val.Kind() == reflect.Ptr { + if val.IsNil() { + return nil, fmt.Errorf("can not unmarshal to response a pointer to nil structure") + } + val = val.Elem() + } + + if s == nil { + return nil, fmt.Errorf("can not unmarshal to response a nil structure") + } + + if val.Kind() != reflect.Struct { + return nil, fmt.Errorf("can not unmarshal to response, expects struct input. 
Got %v", val.Kind()) + } + + return &val, nil +} + +func intSizeFromKind(kind reflect.Kind) int { + switch kind { + case reflect.Int8, reflect.Uint8: + return 8 + case reflect.Int16, reflect.Uint16: + return 16 + case reflect.Int32, reflect.Uint32: + return 32 + case reflect.Int64, reflect.Uint64: + return 64 + case reflect.Int, reflect.Uint: + return strconv.IntSize + default: + Debugln("The type is not valid: %v. Returing int size for arch", kind.String()) + return strconv.IntSize + } + +} + +func analyzeValue(stringValue string, kind reflect.Kind, field reflect.StructField) (val reflect.Value, valPointer reflect.Value, err error) { + switch kind { + case timeType.Kind(): + var t time.Time + t, err = tryParsingTimeWithValidFormatsForHeaders([]byte(stringValue), field.Name) + if err != nil { + return + } + sdkTime := sdkTimeFromTime(t) + val = reflect.ValueOf(sdkTime) + valPointer = reflect.ValueOf(&sdkTime) + return + case reflect.Bool: + var bVal bool + if bVal, err = strconv.ParseBool(stringValue); err != nil { + return + } + val = reflect.ValueOf(bVal) + valPointer = reflect.ValueOf(&bVal) + return + case reflect.Int: + size := intSizeFromKind(kind) + var iVal int64 + if iVal, err = strconv.ParseInt(stringValue, 10, size); err != nil { + return + } + var iiVal int + iiVal = int(iVal) + val = reflect.ValueOf(iiVal) + valPointer = reflect.ValueOf(&iiVal) + return + case reflect.Int64: + size := intSizeFromKind(kind) + var iVal int64 + if iVal, err = strconv.ParseInt(stringValue, 10, size); err != nil { + return + } + val = reflect.ValueOf(iVal) + valPointer = reflect.ValueOf(&iVal) + return + case reflect.Uint: + size := intSizeFromKind(kind) + var iVal uint64 + if iVal, err = strconv.ParseUint(stringValue, 10, size); err != nil { + return + } + var uiVal uint + uiVal = uint(iVal) + val = reflect.ValueOf(uiVal) + valPointer = reflect.ValueOf(&uiVal) + return + case reflect.String: + val = reflect.ValueOf(stringValue) + valPointer = reflect.ValueOf(&stringValue) + case reflect.Float32: + var fVal float64 + if fVal, err = strconv.ParseFloat(stringValue, 32); err != nil { + return + } + var ffVal float32 + ffVal = float32(fVal) + val = reflect.ValueOf(ffVal) + valPointer = reflect.ValueOf(&ffVal) + return + case reflect.Float64: + var fVal float64 + if fVal, err = strconv.ParseFloat(stringValue, 64); err != nil { + return + } + val = reflect.ValueOf(fVal) + valPointer = reflect.ValueOf(&fVal) + return + default: + err = fmt.Errorf("value for kind: %s not supported", kind) + } + return +} + +// Sets the field of a struct, with the appropiate value of the string +// Only sets basic types +func fromStringValue(newValue string, val *reflect.Value, field reflect.StructField) (err error) { + + if !val.CanSet() { + err = fmt.Errorf("can not set field name: %s of type: %v", field.Name, val.Type().String()) + return + } + + kind := val.Kind() + isPointer := false + if val.Kind() == reflect.Ptr { + isPointer = true + kind = field.Type.Elem().Kind() + } + + value, valPtr, err := analyzeValue(newValue, kind, field) + if err != nil { + return + } + if !isPointer { + val.Set(value) + } else { + val.Set(valPtr) + } + return +} + +// PolymorphicJSONUnmarshaler is the interface to unmarshal polymorphic json payloads +type PolymorphicJSONUnmarshaler interface { + UnmarshalPolymorphicJSON(data []byte) (interface{}, error) +} + +func valueFromPolymorphicJSON(content []byte, unmarshaler PolymorphicJSONUnmarshaler) (val interface{}, err error) { + err = json.Unmarshal(content, unmarshaler) + if err != nil { + 
return + } + val, err = unmarshaler.UnmarshalPolymorphicJSON(content) + return +} + +func valueFromJSONBody(response *http.Response, value *reflect.Value, unmarshaler PolymorphicJSONUnmarshaler) (val interface{}, err error) { + //Consumes the body, consider implementing it + //without body consumption + var content []byte + content, err = ioutil.ReadAll(response.Body) + if err != nil { + return + } + + if unmarshaler != nil { + val, err = valueFromPolymorphicJSON(content, unmarshaler) + return + } + + val = reflect.New(value.Type()).Interface() + err = json.Unmarshal(content, &val) + return +} + +func addFromBody(response *http.Response, value *reflect.Value, field reflect.StructField, unmarshaler PolymorphicJSONUnmarshaler) (err error) { + Debugln("Unmarshaling from body to field:", field.Name) + if response.Body == nil { + Debugln("Unmarshaling body skipped due to nil body content for field: ", field.Name) + return nil + } + + tag := field.Tag + encoding := tag.Get("encoding") + var iVal interface{} + switch encoding { + case "binary": + value.Set(reflect.ValueOf(response.Body)) + return + case "plain-text": + //Expects UTF-8 + byteArr, e := ioutil.ReadAll(response.Body) + if e != nil { + return e + } + str := string(byteArr) + value.Set(reflect.ValueOf(&str)) + return + default: //If the encoding is not set. we'll decode with json + iVal, err = valueFromJSONBody(response, value, unmarshaler) + if err != nil { + return + } + + newVal := reflect.ValueOf(iVal) + if newVal.Kind() == reflect.Ptr { + newVal = newVal.Elem() + } + value.Set(newVal) + return + } +} + +func addFromHeader(response *http.Response, value *reflect.Value, field reflect.StructField) (err error) { + Debugln("Unmarshaling from header to field:", field.Name) + var headerName string + if headerName = field.Tag.Get("name"); headerName == "" { + return fmt.Errorf("unmarshaling response to a header requires the 'name' tag for field: %s", field.Name) + } + + headerValue := response.Header.Get(headerName) + if headerValue == "" { + Debugf("Unmarshalling did not find header with name:%s", headerName) + return nil + } + + if err = fromStringValue(headerValue, value, field); err != nil { + return fmt.Errorf("unmarshaling response to a header failed for field %s, due to %s", field.Name, + err.Error()) + } + return +} + +func addFromHeaderCollection(response *http.Response, value *reflect.Value, field reflect.StructField) error { + Debugln("Unmarshaling from header-collection to field:", field.Name) + var headerPrefix string + if headerPrefix = field.Tag.Get("prefix"); headerPrefix == "" { + return fmt.Errorf("Unmarshaling response to a header-collection requires the 'prefix' tag for field: %s", field.Name) + } + + mapCollection := make(map[string]string) + for name, value := range response.Header { + nameLowerCase := strings.ToLower(name) + if strings.HasPrefix(nameLowerCase, headerPrefix) { + headerNoPrefix := strings.TrimPrefix(nameLowerCase, headerPrefix) + mapCollection[headerNoPrefix] = value[0] + } + } + + Debugln("Marshalled header collection is:", mapCollection) + value.Set(reflect.ValueOf(mapCollection)) + return nil +} + +// Populates a struct from parts of a request by reading tags of the struct +func responseToStruct(response *http.Response, val *reflect.Value, unmarshaler PolymorphicJSONUnmarshaler) (err error) { + typ := val.Type() + for i := 0; i < typ.NumField(); i++ { + if err != nil { + return + } + + sf := typ.Field(i) + + //unexported + if sf.PkgPath != "" { + continue + } + + sv := val.Field(i) + tag := 
sf.Tag.Get("presentIn") + switch tag { + case "header": + err = addFromHeader(response, &sv, sf) + case "header-collection": + err = addFromHeaderCollection(response, &sv, sf) + case "body": + err = addFromBody(response, &sv, sf, unmarshaler) + case "": + Debugln(sf.Name, "does not contain presentIn tag. Skipping") + default: + err = fmt.Errorf("can not unmarshal field: %s. It needs to contain valid presentIn tag", sf.Name) + } + } + return +} + +// UnmarshalResponse hydrates the fields of a struct with the values of a http response, guided +// by the field tags. The directive tag is "presentIn" and it can be either +// - "header": Will look for the header tagged as "name" in the headers of the struct and set it value to that +// - "body": It will try to marshal the body from a json string to a struct tagged with 'presentIn: "body"'. +// Further this method will consume the body it should be safe to close it after this function +// Notice the current implementation only supports native types:int, strings, floats, bool as the field types +func UnmarshalResponse(httpResponse *http.Response, responseStruct interface{}) (err error) { + + var val *reflect.Value + if val, err = checkForValidResponseStruct(responseStruct); err != nil { + return + } + + if err = responseToStruct(httpResponse, val, nil); err != nil { + return + } + + return nil +} + +// UnmarshalResponseWithPolymorphicBody similar to UnmarshalResponse but assumes the body of the response +// contains polymorphic json. This function will use the unmarshaler argument to unmarshal json content +func UnmarshalResponseWithPolymorphicBody(httpResponse *http.Response, responseStruct interface{}, unmarshaler PolymorphicJSONUnmarshaler) (err error) { + + var val *reflect.Value + if val, err = checkForValidResponseStruct(responseStruct); err != nil { + return + } + + if err = responseToStruct(httpResponse, val, unmarshaler); err != nil { + return + } + + return nil +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/http_signer.go b/vendor/github.com/oracle/oci-go-sdk/common/http_signer.go new file mode 100644 index 000000000..3e82931aa --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/http_signer.go @@ -0,0 +1,230 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. 
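+//
+// A minimal usage sketch for the signer defined in this file, assuming
+// "keyProvider" is any value implementing KeyProvider and "url" is the target
+// OCI endpoint (both names are hypothetical placeholders, not part of the SDK):
+//
+//   signer := common.DefaultRequestSigner(keyProvider)
+//   req, _ := http.NewRequest(http.MethodGet, url, nil)
+//   if err := signer.Sign(req); err != nil {
+//       // the Authorization header could not be computed
+//   }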
+ +package common + +import ( + "bytes" + "crypto" + "crypto/rand" + "crypto/rsa" + "crypto/sha256" + "encoding/base64" + "fmt" + "io" + "io/ioutil" + "net/http" + "strings" +) + +// HTTPRequestSigner the interface to sign a request +type HTTPRequestSigner interface { + Sign(r *http.Request) error +} + +// KeyProvider interface that wraps information about the key's account owner +type KeyProvider interface { + PrivateRSAKey() (*rsa.PrivateKey, error) + KeyID() (string, error) +} + +const signerVersion = "1" + +// SignerBodyHashPredicate a function that allows to disable/enable body hashing +// of requests and headers associated with body content +type SignerBodyHashPredicate func(r *http.Request) bool + +// ociRequestSigner implements the http-signatures-draft spec +// as described in https://tools.ietf.org/html/draft-cavage-http-signatures-08 +type ociRequestSigner struct { + KeyProvider KeyProvider + GenericHeaders []string + BodyHeaders []string + ShouldHashBody SignerBodyHashPredicate +} + +var ( + defaultGenericHeaders = []string{"date", "(request-target)", "host"} + defaultBodyHeaders = []string{"content-length", "content-type", "x-content-sha256"} + defaultBodyHashPredicate = func(r *http.Request) bool { + return r.Method == http.MethodPost || r.Method == http.MethodPut + } +) + +// DefaultRequestSigner creates a signer with default parameters. +func DefaultRequestSigner(provider KeyProvider) HTTPRequestSigner { + return RequestSigner(provider, defaultGenericHeaders, defaultBodyHeaders) +} + +// RequestSigner creates a signer that utilizes the specified headers for signing +// and the default predicate for using the body of the request as part of the signature +func RequestSigner(provider KeyProvider, genericHeaders, bodyHeaders []string) HTTPRequestSigner { + return ociRequestSigner{ + KeyProvider: provider, + GenericHeaders: genericHeaders, + BodyHeaders: bodyHeaders, + ShouldHashBody: defaultBodyHashPredicate} +} + +// RequestSignerWithBodyHashingPredicate creates a signer that utilizes the specified headers for signing, as well as a predicate for using +// the body of the request and bodyHeaders parameter as part of the signature +func RequestSignerWithBodyHashingPredicate(provider KeyProvider, genericHeaders, bodyHeaders []string, shouldHashBody SignerBodyHashPredicate) HTTPRequestSigner { + return ociRequestSigner{ + KeyProvider: provider, + GenericHeaders: genericHeaders, + BodyHeaders: bodyHeaders, + ShouldHashBody: shouldHashBody} +} + +func (signer ociRequestSigner) getSigningHeaders(r *http.Request) []string { + var result []string + result = append(result, signer.GenericHeaders...) + + if signer.ShouldHashBody(r) { + result = append(result, signer.BodyHeaders...) 
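+		// The body-related headers (content-length, content-type and
+		// x-content-sha256 with the defaults) are added to the signing set only
+		// when ShouldHashBody reports that this request's body is hashed
+		// (POST and PUT under the default predicate).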
+ } + + return result +} + +func (signer ociRequestSigner) getSigningString(request *http.Request) string { + signingHeaders := signer.getSigningHeaders(request) + signingParts := make([]string, len(signingHeaders)) + for i, part := range signingHeaders { + var value string + switch part { + case "(request-target)": + value = getRequestTarget(request) + case "host": + value = request.URL.Host + if len(value) == 0 { + value = request.Host + } + default: + value = request.Header.Get(part) + } + signingParts[i] = fmt.Sprintf("%s: %s", part, value) + } + + signingString := strings.Join(signingParts, "\n") + return signingString + +} + +func getRequestTarget(request *http.Request) string { + lowercaseMethod := strings.ToLower(request.Method) + return fmt.Sprintf("%s %s", lowercaseMethod, request.URL.RequestURI()) +} + +func calculateHashOfBody(request *http.Request) (err error) { + var hash string + if request.ContentLength > 0 { + hash, err = GetBodyHash(request) + if err != nil { + return + } + } else { + hash = hashAndEncode([]byte("")) + } + request.Header.Set("X-Content-Sha256", hash) + return +} + +// drainBody reads all of b to memory and then returns two equivalent +// ReadClosers yielding the same bytes. +// +// It returns an error if the initial slurp of all bytes fails. It does not attempt +// to make the returned ReadClosers have identical error-matching behavior. +func drainBody(b io.ReadCloser) (r1, r2 io.ReadCloser, err error) { + if b == http.NoBody { + // No copying needed. Preserve the magic sentinel meaning of NoBody. + return http.NoBody, http.NoBody, nil + } + var buf bytes.Buffer + if _, err = buf.ReadFrom(b); err != nil { + return nil, b, err + } + if err = b.Close(); err != nil { + return nil, b, err + } + return ioutil.NopCloser(&buf), ioutil.NopCloser(bytes.NewReader(buf.Bytes())), nil +} + +func hashAndEncode(data []byte) string { + hashedContent := sha256.Sum256(data) + hash := base64.StdEncoding.EncodeToString(hashedContent[:]) + return hash +} + +// GetBodyHash creates a base64 string from the hash of body the request +func GetBodyHash(request *http.Request) (hashString string, err error) { + if request.Body == nil { + return "", fmt.Errorf("can not read body of request while calculating body hash, nil body?") + } + + var data []byte + bReader := request.Body + bReader, request.Body, err = drainBody(request.Body) + if err != nil { + return "", fmt.Errorf("can not read body of request while calculating body hash: %s", err.Error()) + } + + data, err = ioutil.ReadAll(bReader) + if err != nil { + return "", fmt.Errorf("can not read body of request while calculating body hash: %s", err.Error()) + } + hashString = hashAndEncode(data) + return +} + +func (signer ociRequestSigner) computeSignature(request *http.Request) (signature string, err error) { + signingString := signer.getSigningString(request) + hasher := sha256.New() + hasher.Write([]byte(signingString)) + hashed := hasher.Sum(nil) + + privateKey, err := signer.KeyProvider.PrivateRSAKey() + if err != nil { + return + } + + var unencodedSig []byte + unencodedSig, e := rsa.SignPKCS1v15(rand.Reader, privateKey, crypto.SHA256, hashed) + if e != nil { + err = fmt.Errorf("can not compute signature while signing the request %s: ", e.Error()) + return + } + + signature = base64.StdEncoding.EncodeToString(unencodedSig) + return +} + +// Sign signs the http request, by inspecting the necessary headers. 
Once signed +// the request will have the proper 'Authorization' header set, otherwise +// and error is returned +func (signer ociRequestSigner) Sign(request *http.Request) (err error) { + if signer.ShouldHashBody(request) { + err = calculateHashOfBody(request) + if err != nil { + return + } + } + + var signature string + if signature, err = signer.computeSignature(request); err != nil { + return + } + + signingHeaders := strings.Join(signer.getSigningHeaders(request), " ") + + var keyID string + if keyID, err = signer.KeyProvider.KeyID(); err != nil { + return + } + + authValue := fmt.Sprintf("Signature version=\"%s\",headers=\"%s\",keyId=\"%s\",algorithm=\"rsa-sha256\",signature=\"%s\"", + signerVersion, signingHeaders, keyID, signature) + + request.Header.Set("Authorization", authValue) + + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/log.go b/vendor/github.com/oracle/oci-go-sdk/common/log.go new file mode 100644 index 000000000..328485fdd --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/log.go @@ -0,0 +1,67 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. + +package common + +import ( + "fmt" + "io" + "io/ioutil" + "log" + "os" + "sync" +) + +// Simple logging proxy to distinguish control for logging messages +// Debug logging is turned on/off by the presence of the environment variable "OCI_GO_SDK_DEBUG" +var debugLog = log.New(os.Stderr, "DEBUG ", log.Ldate|log.Ltime|log.Lshortfile) +var mainLog = log.New(os.Stderr, "", log.Ldate|log.Ltime|log.Lshortfile) +var isDebugLogEnabled bool +var checkDebug sync.Once + +func getOutputForEnv() (writer io.Writer) { + checkDebug.Do(func() { + isDebugLogEnabled = *new(bool) + _, isDebugLogEnabled = os.LookupEnv("OCI_GO_SDK_DEBUG") + }) + + writer = ioutil.Discard + if isDebugLogEnabled { + writer = os.Stderr + } + return +} + +// Debugf logs v with the provided format if debug mode is set +func Debugf(format string, v ...interface{}) { + debugLog.SetOutput(getOutputForEnv()) + debugLog.Output(3, fmt.Sprintf(format, v...)) +} + +// Debug logs v if debug mode is set +func Debug(v ...interface{}) { + debugLog.SetOutput(getOutputForEnv()) + debugLog.Output(3, fmt.Sprint(v...)) +} + +// Debugln logs v appending a new line if debug mode is set +func Debugln(v ...interface{}) { + debugLog.SetOutput(getOutputForEnv()) + debugLog.Output(3, fmt.Sprintln(v...)) +} + +// IfDebug executes closure if debug is enabled +func IfDebug(fn func()) { + if isDebugLogEnabled { + fn() + } +} + +// Logln logs v appending a new line at the end +func Logln(v ...interface{}) { + mainLog.Output(3, fmt.Sprintln(v...)) +} + +// Logf logs v with the provided format +func Logf(format string, v ...interface{}) { + mainLog.Output(3, fmt.Sprintf(format, v...)) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/common/version.go b/vendor/github.com/oracle/oci-go-sdk/common/version.go new file mode 100644 index 000000000..be610d170 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/common/version.go @@ -0,0 +1,36 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. 
+// Code generated by go generate; DO NOT EDIT + +package common + +import ( + "bytes" + "fmt" + "sync" +) + +const ( + major = "1" + minor = "0" + patch = "0" + tag = "" +) + +var once sync.Once +var version string + +// Version returns semantic version of the sdk +func Version() string { + once.Do(func() { + ver := fmt.Sprintf("%s.%s.%s", major, minor, patch) + verBuilder := bytes.NewBufferString(ver) + if tag != "" && tag != "-" { + _, err := verBuilder.WriteString(tag) + if err == nil { + verBuilder = bytes.NewBufferString(ver) + } + } + version = verBuilder.String() + }) + return version +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/attach_boot_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/attach_boot_volume_details.go new file mode 100644 index 000000000..7dc403a4c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/attach_boot_volume_details.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// AttachBootVolumeDetails The representation of AttachBootVolumeDetails +type AttachBootVolumeDetails struct { + + // The OCID of the boot volume. + BootVolumeId *string `mandatory:"true" json:"bootVolumeId"` + + // The OCID of the instance. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // A user-friendly name. Does not have to be unique, and it cannot be changed. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m AttachBootVolumeDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/attach_boot_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/attach_boot_volume_request_response.go new file mode 100644 index 000000000..f52a01930 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/attach_boot_volume_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// AttachBootVolumeRequest wrapper for the AttachBootVolume operation +type AttachBootVolumeRequest struct { + + // Attach boot volume request + AttachBootVolumeDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request AttachBootVolumeRequest) String() string { + return common.PointerString(request) +} + +// AttachBootVolumeResponse wrapper for the AttachBootVolume operation +type AttachBootVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The BootVolumeAttachment instance + BootVolumeAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. 
+ Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response AttachBootVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/attach_i_scsi_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/attach_i_scsi_volume_details.go new file mode 100644 index 000000000..827108d2f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/attach_i_scsi_volume_details.go @@ -0,0 +1,63 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// AttachIScsiVolumeDetails The representation of AttachIScsiVolumeDetails +type AttachIScsiVolumeDetails struct { + + // The OCID of the instance. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // The OCID of the volume. + VolumeId *string `mandatory:"true" json:"volumeId"` + + // A user-friendly name. Does not have to be unique, and it cannot be changed. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Whether to use CHAP authentication for the volume attachment. Defaults to false. + UseChap *bool `mandatory:"false" json:"useChap"` +} + +//GetDisplayName returns DisplayName +func (m AttachIScsiVolumeDetails) GetDisplayName() *string { + return m.DisplayName +} + +//GetInstanceId returns InstanceId +func (m AttachIScsiVolumeDetails) GetInstanceId() *string { + return m.InstanceId +} + +//GetVolumeId returns VolumeId +func (m AttachIScsiVolumeDetails) GetVolumeId() *string { + return m.VolumeId +} + +func (m AttachIScsiVolumeDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m AttachIScsiVolumeDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeAttachIScsiVolumeDetails AttachIScsiVolumeDetails + s := struct { + DiscriminatorParam string `json:"type"` + MarshalTypeAttachIScsiVolumeDetails + }{ + "iscsi", + (MarshalTypeAttachIScsiVolumeDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/attach_vnic_details.go b/vendor/github.com/oracle/oci-go-sdk/core/attach_vnic_details.go new file mode 100644 index 000000000..a6e978940 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/attach_vnic_details.go @@ -0,0 +1,37 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// AttachVnicDetails The representation of AttachVnicDetails +type AttachVnicDetails struct { + + // Details for creating a new VNIC. + CreateVnicDetails *CreateVnicDetails `mandatory:"true" json:"createVnicDetails"` + + // The OCID of the instance. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // A user-friendly name for the attachment. Does not have to be unique, and it cannot be changed. 
+ DisplayName *string `mandatory:"false" json:"displayName"` + + // Which physical network interface card (NIC) the VNIC will use. Defaults to 0. + // Certain bare metal instance shapes have two active physical NICs (0 and 1). If + // you add a secondary VNIC to one of these instances, you can specify which NIC + // the VNIC will use. For more information, see + // Virtual Network Interface Cards (VNICs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVNICs.htm). + NicIndex *int `mandatory:"false" json:"nicIndex"` +} + +func (m AttachVnicDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/attach_vnic_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/attach_vnic_request_response.go new file mode 100644 index 000000000..3f04c241a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/attach_vnic_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// AttachVnicRequest wrapper for the AttachVnic operation +type AttachVnicRequest struct { + + // Attach VNIC details. + AttachVnicDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request AttachVnicRequest) String() string { + return common.PointerString(request) +} + +// AttachVnicResponse wrapper for the AttachVnic operation +type AttachVnicResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VnicAttachment instance + VnicAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response AttachVnicResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/attach_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/attach_volume_details.go new file mode 100644 index 000000000..414fc9c3a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/attach_volume_details.go @@ -0,0 +1,86 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// AttachVolumeDetails The representation of AttachVolumeDetails +type AttachVolumeDetails interface { + + // The OCID of the instance. + GetInstanceId() *string + + // The OCID of the volume. + GetVolumeId() *string + + // A user-friendly name. Does not have to be unique, and it cannot be changed. Avoid entering confidential information. 
+ GetDisplayName() *string +} + +type attachvolumedetails struct { + JsonData []byte + InstanceId *string `mandatory:"true" json:"instanceId"` + VolumeId *string `mandatory:"true" json:"volumeId"` + DisplayName *string `mandatory:"false" json:"displayName"` + Type string `json:"type"` +} + +// UnmarshalJSON unmarshals json +func (m *attachvolumedetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalerattachvolumedetails attachvolumedetails + s := struct { + Model Unmarshalerattachvolumedetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.InstanceId = s.Model.InstanceId + m.VolumeId = s.Model.VolumeId + m.DisplayName = s.Model.DisplayName + m.Type = s.Model.Type + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *attachvolumedetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.Type { + case "iscsi": + mm := AttachIScsiVolumeDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +//GetInstanceId returns InstanceId +func (m attachvolumedetails) GetInstanceId() *string { + return m.InstanceId +} + +//GetVolumeId returns VolumeId +func (m attachvolumedetails) GetVolumeId() *string { + return m.VolumeId +} + +//GetDisplayName returns DisplayName +func (m attachvolumedetails) GetDisplayName() *string { + return m.DisplayName +} + +func (m attachvolumedetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/attach_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/attach_volume_request_response.go new file mode 100644 index 000000000..d2f7076f8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/attach_volume_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// AttachVolumeRequest wrapper for the AttachVolume operation +type AttachVolumeRequest struct { + + // Attach volume request + AttachVolumeDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request AttachVolumeRequest) String() string { + return common.PointerString(request) +} + +// AttachVolumeResponse wrapper for the AttachVolume operation +type AttachVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VolumeAttachment instance + VolumeAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response AttachVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/boot_volume.go b/vendor/github.com/oracle/oci-go-sdk/core/boot_volume.go new file mode 100644 index 000000000..1f57aca45 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/boot_volume.go @@ -0,0 +1,86 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BootVolume A detachable boot volume device that contains the image used to boot an Compute instance. For more information, see +// Overview of Boot Volumes (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Concepts/bootvolumes.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type BootVolume struct { + + // The Availability Domain of the boot volume. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment that contains the boot volume. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The boot volume's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The current state of a boot volume. + LifecycleState BootVolumeLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The size of the volume in MBs. The value must be a multiple of 1024. + // This field is deprecated. Please use sizeInGBs. + SizeInMBs *int `mandatory:"true" json:"sizeInMBs"` + + // The date and time the boot volume was created. Format defined by RFC3339. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The image OCID used to create the boot volume. + ImageId *string `mandatory:"false" json:"imageId"` + + // The size of the boot volume in GBs. 
+ SizeInGBs *int `mandatory:"false" json:"sizeInGBs"` +} + +func (m BootVolume) String() string { + return common.PointerString(m) +} + +// BootVolumeLifecycleStateEnum Enum with underlying type: string +type BootVolumeLifecycleStateEnum string + +// Set of constants representing the allowable values for BootVolumeLifecycleState +const ( + BootVolumeLifecycleStateProvisioning BootVolumeLifecycleStateEnum = "PROVISIONING" + BootVolumeLifecycleStateRestoring BootVolumeLifecycleStateEnum = "RESTORING" + BootVolumeLifecycleStateAvailable BootVolumeLifecycleStateEnum = "AVAILABLE" + BootVolumeLifecycleStateTerminating BootVolumeLifecycleStateEnum = "TERMINATING" + BootVolumeLifecycleStateTerminated BootVolumeLifecycleStateEnum = "TERMINATED" + BootVolumeLifecycleStateFaulty BootVolumeLifecycleStateEnum = "FAULTY" +) + +var mappingBootVolumeLifecycleState = map[string]BootVolumeLifecycleStateEnum{ + "PROVISIONING": BootVolumeLifecycleStateProvisioning, + "RESTORING": BootVolumeLifecycleStateRestoring, + "AVAILABLE": BootVolumeLifecycleStateAvailable, + "TERMINATING": BootVolumeLifecycleStateTerminating, + "TERMINATED": BootVolumeLifecycleStateTerminated, + "FAULTY": BootVolumeLifecycleStateFaulty, +} + +// GetBootVolumeLifecycleStateEnumValues Enumerates the set of values for BootVolumeLifecycleState +func GetBootVolumeLifecycleStateEnumValues() []BootVolumeLifecycleStateEnum { + values := make([]BootVolumeLifecycleStateEnum, 0) + for _, v := range mappingBootVolumeLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/boot_volume_attachment.go b/vendor/github.com/oracle/oci-go-sdk/core/boot_volume_attachment.go new file mode 100644 index 000000000..2c24059fc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/boot_volume_attachment.go @@ -0,0 +1,76 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BootVolumeAttachment Represents an attachment between a boot volume and an instance. +type BootVolumeAttachment struct { + + // The Availability Domain of an instance. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the boot volume. + BootVolumeId *string `mandatory:"true" json:"bootVolumeId"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the boot volume attachment. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the instance the boot volume is attached to. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // The current state of the boot volume attachment. + LifecycleState BootVolumeAttachmentLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The date and time the boot volume was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // A user-friendly name. Does not have to be unique, and it cannot be changed. + // Avoid entering confidential information. 
+ // Example: `My boot volume` + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m BootVolumeAttachment) String() string { + return common.PointerString(m) +} + +// BootVolumeAttachmentLifecycleStateEnum Enum with underlying type: string +type BootVolumeAttachmentLifecycleStateEnum string + +// Set of constants representing the allowable values for BootVolumeAttachmentLifecycleState +const ( + BootVolumeAttachmentLifecycleStateAttaching BootVolumeAttachmentLifecycleStateEnum = "ATTACHING" + BootVolumeAttachmentLifecycleStateAttached BootVolumeAttachmentLifecycleStateEnum = "ATTACHED" + BootVolumeAttachmentLifecycleStateDetaching BootVolumeAttachmentLifecycleStateEnum = "DETACHING" + BootVolumeAttachmentLifecycleStateDetached BootVolumeAttachmentLifecycleStateEnum = "DETACHED" +) + +var mappingBootVolumeAttachmentLifecycleState = map[string]BootVolumeAttachmentLifecycleStateEnum{ + "ATTACHING": BootVolumeAttachmentLifecycleStateAttaching, + "ATTACHED": BootVolumeAttachmentLifecycleStateAttached, + "DETACHING": BootVolumeAttachmentLifecycleStateDetaching, + "DETACHED": BootVolumeAttachmentLifecycleStateDetached, +} + +// GetBootVolumeAttachmentLifecycleStateEnumValues Enumerates the set of values for BootVolumeAttachmentLifecycleState +func GetBootVolumeAttachmentLifecycleStateEnumValues() []BootVolumeAttachmentLifecycleStateEnum { + values := make([]BootVolumeAttachmentLifecycleStateEnum, 0) + for _, v := range mappingBootVolumeAttachmentLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/bulk_add_virtual_circuit_public_prefixes_details.go b/vendor/github.com/oracle/oci-go-sdk/core/bulk_add_virtual_circuit_public_prefixes_details.go new file mode 100644 index 000000000..bf8afbbf2 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/bulk_add_virtual_circuit_public_prefixes_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BulkAddVirtualCircuitPublicPrefixesDetails The representation of BulkAddVirtualCircuitPublicPrefixesDetails +type BulkAddVirtualCircuitPublicPrefixesDetails struct { + + // The public IP prefixes (CIDRs) to add to the public virtual circuit. + PublicPrefixes []CreateVirtualCircuitPublicPrefixDetails `mandatory:"true" json:"publicPrefixes"` +} + +func (m BulkAddVirtualCircuitPublicPrefixesDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/bulk_add_virtual_circuit_public_prefixes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/bulk_add_virtual_circuit_public_prefixes_request_response.go new file mode 100644 index 000000000..749adc891 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/bulk_add_virtual_circuit_public_prefixes_request_response.go @@ -0,0 +1,34 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// BulkAddVirtualCircuitPublicPrefixesRequest wrapper for the BulkAddVirtualCircuitPublicPrefixes operation +type BulkAddVirtualCircuitPublicPrefixesRequest struct { + + // The OCID of the virtual circuit. 
+ VirtualCircuitId *string `mandatory:"true" contributesTo:"path" name:"virtualCircuitId"` + + // Request with publix prefixes to be added to the virtual circuit + BulkAddVirtualCircuitPublicPrefixesDetails `contributesTo:"body"` +} + +func (request BulkAddVirtualCircuitPublicPrefixesRequest) String() string { + return common.PointerString(request) +} + +// BulkAddVirtualCircuitPublicPrefixesResponse wrapper for the BulkAddVirtualCircuitPublicPrefixes operation +type BulkAddVirtualCircuitPublicPrefixesResponse struct { + + // The underlying http response + RawResponse *http.Response +} + +func (response BulkAddVirtualCircuitPublicPrefixesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/bulk_delete_virtual_circuit_public_prefixes_details.go b/vendor/github.com/oracle/oci-go-sdk/core/bulk_delete_virtual_circuit_public_prefixes_details.go new file mode 100644 index 000000000..031eea71b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/bulk_delete_virtual_circuit_public_prefixes_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BulkDeleteVirtualCircuitPublicPrefixesDetails The representation of BulkDeleteVirtualCircuitPublicPrefixesDetails +type BulkDeleteVirtualCircuitPublicPrefixesDetails struct { + + // The public IP prefixes (CIDRs) to remove from the public virtual circuit. + PublicPrefixes []DeleteVirtualCircuitPublicPrefixDetails `mandatory:"true" json:"publicPrefixes"` +} + +func (m BulkDeleteVirtualCircuitPublicPrefixesDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/bulk_delete_virtual_circuit_public_prefixes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/bulk_delete_virtual_circuit_public_prefixes_request_response.go new file mode 100644 index 000000000..a57afa64e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/bulk_delete_virtual_circuit_public_prefixes_request_response.go @@ -0,0 +1,34 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// BulkDeleteVirtualCircuitPublicPrefixesRequest wrapper for the BulkDeleteVirtualCircuitPublicPrefixes operation +type BulkDeleteVirtualCircuitPublicPrefixesRequest struct { + + // The OCID of the virtual circuit. 
+ VirtualCircuitId *string `mandatory:"true" contributesTo:"path" name:"virtualCircuitId"` + + // Request with publix prefixes to be deleted from the virtual circuit + BulkDeleteVirtualCircuitPublicPrefixesDetails `contributesTo:"body"` +} + +func (request BulkDeleteVirtualCircuitPublicPrefixesRequest) String() string { + return common.PointerString(request) +} + +// BulkDeleteVirtualCircuitPublicPrefixesResponse wrapper for the BulkDeleteVirtualCircuitPublicPrefixes operation +type BulkDeleteVirtualCircuitPublicPrefixesResponse struct { + + // The underlying http response + RawResponse *http.Response +} + +func (response BulkDeleteVirtualCircuitPublicPrefixesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/capture_console_history_details.go b/vendor/github.com/oracle/oci-go-sdk/core/capture_console_history_details.go new file mode 100644 index 000000000..9fb153ec6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/capture_console_history_details.go @@ -0,0 +1,27 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CaptureConsoleHistoryDetails The representation of CaptureConsoleHistoryDetails +type CaptureConsoleHistoryDetails struct { + + // The OCID of the instance to get the console history from. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CaptureConsoleHistoryDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/capture_console_history_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/capture_console_history_request_response.go new file mode 100644 index 000000000..8e2919b70 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/capture_console_history_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CaptureConsoleHistoryRequest wrapper for the CaptureConsoleHistory operation +type CaptureConsoleHistoryRequest struct { + + // Console history details + CaptureConsoleHistoryDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CaptureConsoleHistoryRequest) String() string { + return common.PointerString(request) +} + +// CaptureConsoleHistoryResponse wrapper for the CaptureConsoleHistory operation +type CaptureConsoleHistoryResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The ConsoleHistory instance + ConsoleHistory `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. 
+ Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CaptureConsoleHistoryResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/connect_local_peering_gateways_details.go b/vendor/github.com/oracle/oci-go-sdk/core/connect_local_peering_gateways_details.go new file mode 100644 index 000000000..6a2d06c2a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/connect_local_peering_gateways_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// ConnectLocalPeeringGatewaysDetails Information about the other local peering gateway (LPG). +type ConnectLocalPeeringGatewaysDetails struct { + + // The OCID of the LPG you want to peer with. + PeerId *string `mandatory:"true" json:"peerId"` +} + +func (m ConnectLocalPeeringGatewaysDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/connect_local_peering_gateways_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/connect_local_peering_gateways_request_response.go new file mode 100644 index 000000000..084abbe5d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/connect_local_peering_gateways_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ConnectLocalPeeringGatewaysRequest wrapper for the ConnectLocalPeeringGateways operation +type ConnectLocalPeeringGatewaysRequest struct { + + // The OCID of the local peering gateway. + LocalPeeringGatewayId *string `mandatory:"true" contributesTo:"path" name:"localPeeringGatewayId"` + + // Details regarding the local peering gateway to connect. + ConnectLocalPeeringGatewaysDetails `contributesTo:"body"` +} + +func (request ConnectLocalPeeringGatewaysRequest) String() string { + return common.PointerString(request) +} + +// ConnectLocalPeeringGatewaysResponse wrapper for the ConnectLocalPeeringGateways operation +type ConnectLocalPeeringGatewaysResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ConnectLocalPeeringGatewaysResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/console_history.go b/vendor/github.com/oracle/oci-go-sdk/core/console_history.go new file mode 100644 index 000000000..27b9d760a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/console_history.go @@ -0,0 +1,75 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// ConsoleHistory An instance's serial console data. It includes configuration messages that occur when the +// instance boots, such as kernel and BIOS messages, and is useful for checking the status of +// the instance or diagnosing problems. The console data is minimally formatted ASCII text. +type ConsoleHistory struct { + + // The Availability Domain of an instance. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the console history metadata object. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the instance this console history was fetched from. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // The current state of the console history. + LifecycleState ConsoleHistoryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The date and time the history was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + // Example: `My console history metadata` + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m ConsoleHistory) String() string { + return common.PointerString(m) +} + +// ConsoleHistoryLifecycleStateEnum Enum with underlying type: string +type ConsoleHistoryLifecycleStateEnum string + +// Set of constants representing the allowable values for ConsoleHistoryLifecycleState +const ( + ConsoleHistoryLifecycleStateRequested ConsoleHistoryLifecycleStateEnum = "REQUESTED" + ConsoleHistoryLifecycleStateGettingHistory ConsoleHistoryLifecycleStateEnum = "GETTING-HISTORY" + ConsoleHistoryLifecycleStateSucceeded ConsoleHistoryLifecycleStateEnum = "SUCCEEDED" + ConsoleHistoryLifecycleStateFailed ConsoleHistoryLifecycleStateEnum = "FAILED" +) + +var mappingConsoleHistoryLifecycleState = map[string]ConsoleHistoryLifecycleStateEnum{ + "REQUESTED": ConsoleHistoryLifecycleStateRequested, + "GETTING-HISTORY": ConsoleHistoryLifecycleStateGettingHistory, + "SUCCEEDED": ConsoleHistoryLifecycleStateSucceeded, + "FAILED": ConsoleHistoryLifecycleStateFailed, +} + +// GetConsoleHistoryLifecycleStateEnumValues Enumerates the set of values for ConsoleHistoryLifecycleState +func GetConsoleHistoryLifecycleStateEnumValues() []ConsoleHistoryLifecycleStateEnum { + values := make([]ConsoleHistoryLifecycleStateEnum, 0) + for _, v := range mappingConsoleHistoryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/core_blockstorage_client.go b/vendor/github.com/oracle/oci-go-sdk/core/core_blockstorage_client.go new file mode 100644 index 000000000..4b49fcaff --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/core_blockstorage_client.go @@ -0,0 +1,334 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
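+//
+// A minimal construction sketch, assuming "provider" is any valid
+// common.ConfigurationProvider obtained elsewhere (the variable name is
+// illustrative only):
+//
+//   client, err := core.NewBlockstorageClientWithConfigurationProvider(provider)
+//   if err != nil {
+//       // the configuration is incomplete or invalid
+//   }
+//   client.SetRegion("us-phoenix-1") // optional; overrides the region read from the provider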
+// + +package core + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//BlockstorageClient a client for Blockstorage +type BlockstorageClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +// NewBlockstorageClientWithConfigurationProvider Creates a new default Blockstorage client with the given configuration provider. +// the configuration provider will be used for the default signer as well as reading the region +func NewBlockstorageClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client BlockstorageClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + + client = BlockstorageClient{BaseClient: baseClient} + client.BasePath = "20160918" + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. +func (client *BlockstorageClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "iaas", region) +} + +// SetConfigurationProvider sets the configuration provider including the region, returns an error if is not valid +func (client *BlockstorageClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or null if none set +func (client *BlockstorageClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// CreateVolume Creates a new volume in the specified compartment. Volumes can be created in sizes ranging from +// 50 GB (51200 MB) to 16 TB (16777216 MB), in 1 GB (1024 MB) increments. By default, volumes are 1 TB (1048576 MB). +// For general information about block volumes, see +// Overview of Block Volume Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Concepts/overview.htm). +// A volume and instance can be in separate compartments but must be in the same Availability Domain. +// For information about access control and compartments, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). For information about +// Availability Domains, see Regions and Availability Domains (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm). +// To get a list of Availability Domains, use the `ListAvailabilityDomains` operation +// in the Identity and Access Management Service API. +// You may optionally specify a *display name* for the volume, which is simply a friendly name or +// description. It does not have to be unique, and you can change it. Avoid entering confidential information. +func (client BlockstorageClient) CreateVolume(ctx context.Context, request CreateVolumeRequest) (response CreateVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/volumes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateVolumeBackup Creates a new backup of the specified volume. 
For general information about volume backups, +// see Overview of Block Volume Service Backups (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Concepts/blockvolumebackups.htm) +// When the request is received, the backup object is in a REQUEST_RECEIVED state. +// When the data is imaged, it goes into a CREATING state. +// After the backup is fully uploaded to the cloud, it goes into an AVAILABLE state. +func (client BlockstorageClient) CreateVolumeBackup(ctx context.Context, request CreateVolumeBackupRequest) (response CreateVolumeBackupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/volumeBackups", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteBootVolume Deletes the specified boot volume. The volume cannot have an active connection to an instance. +// To disconnect the boot volume from a connected instance, see +// Disconnecting From a Boot Volume (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Tasks/deletingbootvolume.htm). +// **Warning:** All data on the boot volume will be permanently lost when the boot volume is deleted. +func (client BlockstorageClient) DeleteBootVolume(ctx context.Context, request DeleteBootVolumeRequest) (response DeleteBootVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/bootVolumes/{bootVolumeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteVolume Deletes the specified volume. The volume cannot have an active connection to an instance. +// To disconnect the volume from a connected instance, see +// Disconnecting From a Volume (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Tasks/disconnectingfromavolume.htm). +// **Warning:** All data on the volume will be permanently lost when the volume is deleted. +func (client BlockstorageClient) DeleteVolume(ctx context.Context, request DeleteVolumeRequest) (response DeleteVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/volumes/{volumeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteVolumeBackup Deletes a volume backup. +func (client BlockstorageClient) DeleteVolumeBackup(ctx context.Context, request DeleteVolumeBackupRequest) (response DeleteVolumeBackupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/volumeBackups/{volumeBackupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetBootVolume Gets information for the specified boot volume. 
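+// Like the other operations on this client it takes a context and a request
+// wrapper and returns a response wrapper; the raw *http.Response is always
+// available on the response. A minimal call sketch (construction of the
+// request value is elided; its fields live in the corresponding
+// *_request_response.go file):
+//
+//   resp, err := client.GetBootVolume(context.Background(), request)
+//   if err != nil {
+//       // transport or service error
+//   }
+//   _ = resp.RawResponse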
+func (client BlockstorageClient) GetBootVolume(ctx context.Context, request GetBootVolumeRequest) (response GetBootVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/bootVolumes/{bootVolumeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetVolume Gets information for the specified volume. +func (client BlockstorageClient) GetVolume(ctx context.Context, request GetVolumeRequest) (response GetVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/volumes/{volumeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetVolumeBackup Gets information for the specified volume backup. +func (client BlockstorageClient) GetVolumeBackup(ctx context.Context, request GetVolumeBackupRequest) (response GetVolumeBackupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/volumeBackups/{volumeBackupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListBootVolumes Lists the boot volumes in the specified compartment and Availability Domain. +func (client BlockstorageClient) ListBootVolumes(ctx context.Context, request ListBootVolumesRequest) (response ListBootVolumesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/bootVolumes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListVolumeBackups Lists the volume backups in the specified compartment. You can filter the results by volume. +func (client BlockstorageClient) ListVolumeBackups(ctx context.Context, request ListVolumeBackupsRequest) (response ListVolumeBackupsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/volumeBackups", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListVolumes Lists the volumes in the specified compartment and Availability Domain. 
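+// An illustrative sketch of listing volumes (the CompartmentId and AvailabilityDomain
+// request fields, the Items response field, and the common.String helper are
+// assumptions about this SDK version):
+//
+//	resp, err := client.ListVolumes(ctx, core.ListVolumesRequest{
+//		CompartmentId:      common.String(compartmentID),
+//		AvailabilityDomain: common.String("Uocm:PHX-AD-1"),
+//	})
+//	if err != nil {
+//		return err
+//	}
+//	for _, volume := range resp.Items {
+//		fmt.Println(volume)
+//	}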
+func (client BlockstorageClient) ListVolumes(ctx context.Context, request ListVolumesRequest) (response ListVolumesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/volumes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateBootVolume Updates the specified boot volume's display name. +func (client BlockstorageClient) UpdateBootVolume(ctx context.Context, request UpdateBootVolumeRequest) (response UpdateBootVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/bootVolumes/{bootVolumeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateVolume Updates the specified volume's display name. +// Avoid entering confidential information. +func (client BlockstorageClient) UpdateVolume(ctx context.Context, request UpdateVolumeRequest) (response UpdateVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/volumes/{volumeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateVolumeBackup Updates the display name for the specified volume backup. +// Avoid entering confidential information. +func (client BlockstorageClient) UpdateVolumeBackup(ctx context.Context, request UpdateVolumeBackupRequest) (response UpdateVolumeBackupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/volumeBackups/{volumeBackupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/core_compute_client.go b/vendor/github.com/oracle/oci-go-sdk/core/core_compute_client.go new file mode 100644 index 000000000..34b17d7b9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/core_compute_client.go @@ -0,0 +1,833 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//ComputeClient a client for Compute +type ComputeClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +// NewComputeClientWithConfigurationProvider Creates a new default Compute client with the given configuration provider. 
+// the configuration provider will be used for the default signer as well as reading the region +func NewComputeClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client ComputeClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + + client = ComputeClient{BaseClient: baseClient} + client.BasePath = "20160918" + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. +func (client *ComputeClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "iaas", region) +} + +// SetConfigurationProvider sets the configuration provider including the region, returns an error if is not valid +func (client *ComputeClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or null if none set +func (client *ComputeClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// AttachBootVolume Attaches the specified boot volume to the specified instance. +func (client ComputeClient) AttachBootVolume(ctx context.Context, request AttachBootVolumeRequest) (response AttachBootVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/bootVolumeAttachments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// AttachVnic Creates a secondary VNIC and attaches it to the specified instance. +// For more information about secondary VNICs, see +// Virtual Network Interface Cards (VNICs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVNICs.htm). +func (client ComputeClient) AttachVnic(ctx context.Context, request AttachVnicRequest) (response AttachVnicResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/vnicAttachments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// AttachVolume Attaches the specified storage volume to the specified instance. +func (client ComputeClient) AttachVolume(ctx context.Context, request AttachVolumeRequest) (response AttachVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/volumeAttachments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponseWithPolymorphicBody(httpResponse, &response, &volumeattachment{}) + return +} + +// CaptureConsoleHistory Captures the most recent serial console data (up to a megabyte) for the +// specified instance. 
+// The `CaptureConsoleHistory` operation works with the other console history operations +// as described below. +// 1. Use `CaptureConsoleHistory` to request the capture of up to a megabyte of the +// most recent console history. This call returns a `ConsoleHistory` +// object. The object will have a state of REQUESTED. +// 2. Wait for the capture operation to succeed by polling `GetConsoleHistory` with +// the identifier of the console history metadata. The state of the +// `ConsoleHistory` object will go from REQUESTED to GETTING-HISTORY and +// then SUCCEEDED (or FAILED). +// 3. Use `GetConsoleHistoryContent` to get the actual console history data (not the +// metadata). +// 4. Optionally, use `DeleteConsoleHistory` to delete the console history metadata +// and the console history data. +func (client ComputeClient) CaptureConsoleHistory(ctx context.Context, request CaptureConsoleHistoryRequest) (response CaptureConsoleHistoryResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/instanceConsoleHistories/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateImage Creates a boot disk image for the specified instance or imports an exported image from the Oracle Cloud Infrastructure Object Storage service. +// When creating a new image, you must provide the OCID of the instance you want to use as the basis for the image, and +// the OCID of the compartment containing that instance. For more information about images, +// see Managing Custom Images (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Tasks/managingcustomimages.htm). +// When importing an exported image from Object Storage, you specify the source information +// in ImageSourceDetails. +// When importing an image based on the namespace, bucket name, and object name, +// use ImageSourceViaObjectStorageTupleDetails. +// When importing an image based on the Object Storage URL, use +// ImageSourceViaObjectStorageUriDetails. +// See Object Storage URLs (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Tasks/imageimportexport.htm#URLs) and pre-authenticated requests (https://docs.us-phoenix-1.oraclecloud.com/Content/Object/Tasks/managingaccess.htm#pre-auth) +// for constructing URLs for image import/export. +// For more information about importing exported images, see +// Image Import/Export (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Tasks/imageimportexport.htm). +// You may optionally specify a *display name* for the image, which is simply a friendly name or description. +// It does not have to be unique, and you can change it. See UpdateImage. +// Avoid entering confidential information. +func (client ComputeClient) CreateImage(ctx context.Context, request CreateImageRequest) (response CreateImageResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/images/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateInstanceConsoleConnection Creates a new console connection to the specified instance. 
+// Once the console connection has been created and is available, +// you connect to the console using SSH. +// For more information about console access, see Accessing the Console (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/References/serialconsole.htm). +func (client ComputeClient) CreateInstanceConsoleConnection(ctx context.Context, request CreateInstanceConsoleConnectionRequest) (response CreateInstanceConsoleConnectionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/instanceConsoleConnections", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteConsoleHistory Deletes the specified console history metadata and the console history data. +func (client ComputeClient) DeleteConsoleHistory(ctx context.Context, request DeleteConsoleHistoryRequest) (response DeleteConsoleHistoryResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/instanceConsoleHistories/{instanceConsoleHistoryId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteImage Deletes an image. +func (client ComputeClient) DeleteImage(ctx context.Context, request DeleteImageRequest) (response DeleteImageResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/images/{imageId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteInstanceConsoleConnection Deletes the specified instance console connection. +func (client ComputeClient) DeleteInstanceConsoleConnection(ctx context.Context, request DeleteInstanceConsoleConnectionRequest) (response DeleteInstanceConsoleConnectionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/instanceConsoleConnections/{instanceConsoleConnectionId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DetachBootVolume Detaches a boot volume from an instance. You must specify the OCID of the boot volume attachment. +// This is an asynchronous operation. The attachment's `lifecycleState` will change to DETACHING temporarily +// until the attachment is completely removed. 
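+// An illustrative sketch (the BootVolumeAttachmentId request field is an assumption
+// about this SDK version). Because the detach is asynchronous, callers typically poll
+// GetBootVolumeAttachment afterwards until the attachment is fully removed:
+//
+//	_, err := client.DetachBootVolume(ctx, core.DetachBootVolumeRequest{
+//		BootVolumeAttachmentId: common.String(attachmentID),
+//	})
+//	if err != nil {
+//		return err
+//	}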
+func (client ComputeClient) DetachBootVolume(ctx context.Context, request DetachBootVolumeRequest) (response DetachBootVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/bootVolumeAttachments/{bootVolumeAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DetachVnic Detaches and deletes the specified secondary VNIC. +// This operation cannot be used on the instance's primary VNIC. +// When you terminate an instance, all attached VNICs (primary +// and secondary) are automatically detached and deleted. +// **Important:** If the VNIC has a +// PrivateIp that is the +// target of a route rule (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm#privateip), +// deleting the VNIC causes that route rule to blackhole and the traffic +// will be dropped. +func (client ComputeClient) DetachVnic(ctx context.Context, request DetachVnicRequest) (response DetachVnicResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/vnicAttachments/{vnicAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DetachVolume Detaches a storage volume from an instance. You must specify the OCID of the volume attachment. +// This is an asynchronous operation. The attachment's `lifecycleState` will change to DETACHING temporarily +// until the attachment is completely removed. +func (client ComputeClient) DetachVolume(ctx context.Context, request DetachVolumeRequest) (response DetachVolumeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/volumeAttachments/{volumeAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ExportImage Exports the specified image to the Oracle Cloud Infrastructure Object Storage service. You can use the Object Storage URL, +// or the namespace, bucket name, and object name when specifying the location to export to. +// For more information about exporting images, see Image Import/Export (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Tasks/imageimportexport.htm). +// To perform an image export, you need write access to the Object Storage bucket for the image, +// see Let Users Write Objects to Object Storage Buckets (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/commonpolicies.htm#Let4). +// See Object Storage URLs (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Tasks/imageimportexport.htm#URLs) and pre-authenticated requests (https://docs.us-phoenix-1.oraclecloud.com/Content/Object/Tasks/managingaccess.htm#pre-auth) +// for constructing URLs for image import/export. 
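+// An illustrative sketch of exporting through a pre-authenticated Object Storage URL
+// (the ImageId and ExportImageDetails request fields and the
+// ExportImageViaObjectStorageUriDetails type with its DestinationUri field are
+// assumptions about this SDK version):
+//
+//	_, err := client.ExportImage(ctx, core.ExportImageRequest{
+//		ImageId: common.String(imageID),
+//		ExportImageDetails: core.ExportImageViaObjectStorageUriDetails{
+//			DestinationUri: common.String(preauthenticatedRequestURL),
+//		},
+//	})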
+func (client ComputeClient) ExportImage(ctx context.Context, request ExportImageRequest) (response ExportImageResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/images/{imageId}/actions/export", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetBootVolumeAttachment Gets information about the specified boot volume attachment. +func (client ComputeClient) GetBootVolumeAttachment(ctx context.Context, request GetBootVolumeAttachmentRequest) (response GetBootVolumeAttachmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/bootVolumeAttachments/{bootVolumeAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetConsoleHistory Shows the metadata for the specified console history. +// See CaptureConsoleHistory +// for details about using the console history operations. +func (client ComputeClient) GetConsoleHistory(ctx context.Context, request GetConsoleHistoryRequest) (response GetConsoleHistoryResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instanceConsoleHistories/{instanceConsoleHistoryId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetConsoleHistoryContent Gets the actual console history data (not the metadata). +// See CaptureConsoleHistory +// for details about using the console history operations. +func (client ComputeClient) GetConsoleHistoryContent(ctx context.Context, request GetConsoleHistoryContentRequest) (response GetConsoleHistoryContentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instanceConsoleHistories/{instanceConsoleHistoryId}/data", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetImage Gets the specified image. +func (client ComputeClient) GetImage(ctx context.Context, request GetImageRequest) (response GetImageResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/images/{imageId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetInstance Gets information about the specified instance. 
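+// An illustrative sketch of polling an instance until it is running (the InstanceId
+// request field, the LifecycleState field promoted from the embedded Instance, and the
+// InstanceLifecycleStateRunning constant are assumptions about this SDK version):
+//
+//	for {
+//		resp, err := client.GetInstance(ctx, core.GetInstanceRequest{
+//			InstanceId: common.String(instanceID),
+//		})
+//		if err != nil {
+//			return err
+//		}
+//		if resp.LifecycleState == core.InstanceLifecycleStateRunning {
+//			break
+//		}
+//		time.Sleep(5 * time.Second)
+//	}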
+func (client ComputeClient) GetInstance(ctx context.Context, request GetInstanceRequest) (response GetInstanceResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instances/{instanceId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetInstanceConsoleConnection Gets the specified instance console connection's information. +func (client ComputeClient) GetInstanceConsoleConnection(ctx context.Context, request GetInstanceConsoleConnectionRequest) (response GetInstanceConsoleConnectionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instanceConsoleConnections/{instanceConsoleConnectionId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetVnicAttachment Gets the information for the specified VNIC attachment. +func (client ComputeClient) GetVnicAttachment(ctx context.Context, request GetVnicAttachmentRequest) (response GetVnicAttachmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/vnicAttachments/{vnicAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetVolumeAttachment Gets information about the specified volume attachment. +func (client ComputeClient) GetVolumeAttachment(ctx context.Context, request GetVolumeAttachmentRequest) (response GetVolumeAttachmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/volumeAttachments/{volumeAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponseWithPolymorphicBody(httpResponse, &response, &volumeattachment{}) + return +} + +// GetWindowsInstanceInitialCredentials Gets the generated credentials for the instance. Only works for Windows instances. The returned credentials +// are only valid for the initial login. +func (client ComputeClient) GetWindowsInstanceInitialCredentials(ctx context.Context, request GetWindowsInstanceInitialCredentialsRequest) (response GetWindowsInstanceInitialCredentialsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instances/{instanceId}/initialCredentials", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// InstanceAction Performs one of the power actions (start, stop, softreset, or reset) +// on the specified instance. 
+// **start** - power on +// **stop** - power off +// **softreset** - ACPI shutdown and power on +// **reset** - power off and power on +// Note that the **stop** state has no effect on the resources you consume. +// Billing continues for instances that you stop, and related resources continue +// to apply against any relevant quotas. You must terminate an instance +// (TerminateInstance) +// to remove its resources from billing and quotas. +func (client ComputeClient) InstanceAction(ctx context.Context, request InstanceActionRequest) (response InstanceActionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/instances/{instanceId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// LaunchInstance Creates a new instance in the specified compartment and the specified Availability Domain. +// For general information about instances, see +// Overview of the Compute Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Concepts/computeoverview.htm). +// For information about access control and compartments, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about Availability Domains, see +// Regions and Availability Domains (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm). +// To get a list of Availability Domains, use the `ListAvailabilityDomains` operation +// in the Identity and Access Management Service API. +// All Oracle Cloud Infrastructure resources, including instances, get an Oracle-assigned, +// unique ID called an Oracle Cloud Identifier (OCID). +// When you create a resource, you can find its OCID in the response. You can +// also retrieve a resource's OCID by using a List API operation +// on that resource type, or by viewing the resource in the Console. +// When you launch an instance, it is automatically attached to a virtual +// network interface card (VNIC), called the *primary VNIC*. The VNIC +// has a private IP address from the subnet's CIDR. You can either assign a +// private IP address of your choice or let Oracle automatically assign one. +// You can choose whether the instance has a public IP address. To retrieve the +// addresses, use the ListVnicAttachments +// operation to get the VNIC ID for the instance, and then call +// GetVnic with the VNIC ID. +// You can later add secondary VNICs to an instance. For more information, see +// Virtual Network Interface Cards (VNICs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVNICs.htm). +func (client ComputeClient) LaunchInstance(ctx context.Context, request LaunchInstanceRequest) (response LaunchInstanceResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/instances/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListBootVolumeAttachments Lists the boot volume attachments in the specified compartment. 
You can filter the +// list by specifying an instance OCID, boot volume OCID, or both. +func (client ComputeClient) ListBootVolumeAttachments(ctx context.Context, request ListBootVolumeAttachmentsRequest) (response ListBootVolumeAttachmentsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/bootVolumeAttachments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListConsoleHistories Lists the console history metadata for the specified compartment or instance. +func (client ComputeClient) ListConsoleHistories(ctx context.Context, request ListConsoleHistoriesRequest) (response ListConsoleHistoriesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instanceConsoleHistories/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListImages Lists the available images in the specified compartment. +// If you specify a value for the `sortBy` parameter, Oracle-provided images appear first in the list, followed by custom images. +// For more +// information about images, see +// Managing Custom Images (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Tasks/managingcustomimages.htm). +func (client ComputeClient) ListImages(ctx context.Context, request ListImagesRequest) (response ListImagesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/images/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListInstanceConsoleConnections Lists the console connections for the specified compartment or instance. +// For more information about console access, see Accessing the Instance Console (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/References/serialconsole.htm). +func (client ComputeClient) ListInstanceConsoleConnections(ctx context.Context, request ListInstanceConsoleConnectionsRequest) (response ListInstanceConsoleConnectionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instanceConsoleConnections", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListInstances Lists the instances in the specified compartment and the specified Availability Domain. +// You can filter the results by specifying an instance name (the list will include all the identically-named +// instances in the compartment). 
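+// An illustrative sketch of paging through instances in a compartment (the
+// CompartmentId, DisplayName, and Page request fields and the Items and OpcNextPage
+// response fields are assumptions about this SDK version):
+//
+//	request := core.ListInstancesRequest{
+//		CompartmentId: common.String(compartmentID),
+//		DisplayName:   common.String("example-instance"),
+//	}
+//	for {
+//		resp, err := client.ListInstances(ctx, request)
+//		if err != nil {
+//			return err
+//		}
+//		for _, instance := range resp.Items {
+//			fmt.Println(instance)
+//		}
+//		if resp.OpcNextPage == nil {
+//			break
+//		}
+//		request.Page = resp.OpcNextPage
+//	}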
+func (client ComputeClient) ListInstances(ctx context.Context, request ListInstancesRequest) (response ListInstancesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/instances/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListShapes Lists the shapes that can be used to launch an instance within the specified compartment. You can +// filter the list by compatibility with a specific image. +func (client ComputeClient) ListShapes(ctx context.Context, request ListShapesRequest) (response ListShapesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/shapes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListVnicAttachments Lists the VNIC attachments in the specified compartment. A VNIC attachment +// resides in the same compartment as the attached instance. The list can be +// filtered by instance, VNIC, or Availability Domain. +func (client ComputeClient) ListVnicAttachments(ctx context.Context, request ListVnicAttachmentsRequest) (response ListVnicAttachmentsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/vnicAttachments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +//listvolumeattachment allows to unmarshal list of polymorphic VolumeAttachment +type listvolumeattachment []volumeattachment + +//UnmarshalPolymorphicJSON unmarshals polymorphic json list of items +func (m *listvolumeattachment) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + res := make([]VolumeAttachment, len(*m)) + for i, v := range *m { + nn, err := v.UnmarshalPolymorphicJSON(v.JsonData) + if err != nil { + return nil, err + } + res[i] = nn.(VolumeAttachment) + } + return res, nil +} + +// ListVolumeAttachments Lists the volume attachments in the specified compartment. You can filter the +// list by specifying an instance OCID, volume OCID, or both. +// Currently, the only supported volume attachment type is IScsiVolumeAttachment. +func (client ComputeClient) ListVolumeAttachments(ctx context.Context, request ListVolumeAttachmentsRequest) (response ListVolumeAttachmentsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/volumeAttachments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponseWithPolymorphicBody(httpResponse, &response, &listvolumeattachment{}) + return +} + +// TerminateInstance Terminates the specified instance. Any attached VNICs and volumes are automatically detached +// when the instance terminates. 
+// To preserve the boot volume associated with the instance, specify `true` for `PreserveBootVolumeQueryParam`. +// To delete the boot volume when the instance is deleted, specify `false` or do not specify a value for `PreserveBootVolumeQueryParam`. +// This is an asynchronous operation. The instance's `lifecycleState` will change to TERMINATING temporarily +// until the instance is completely removed. +func (client ComputeClient) TerminateInstance(ctx context.Context, request TerminateInstanceRequest) (response TerminateInstanceResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/instances/{instanceId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateConsoleHistory Updates the specified console history metadata. +func (client ComputeClient) UpdateConsoleHistory(ctx context.Context, request UpdateConsoleHistoryRequest) (response UpdateConsoleHistoryResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/instanceConsoleHistories/{instanceConsoleHistoryId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateImage Updates the display name of the image. Avoid entering confidential information. +func (client ComputeClient) UpdateImage(ctx context.Context, request UpdateImageRequest) (response UpdateImageResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/images/{imageId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateInstance Updates the display name of the specified instance. Avoid entering confidential information. +// The OCID of the instance remains the same. +func (client ComputeClient) UpdateInstance(ctx context.Context, request UpdateInstanceRequest) (response UpdateInstanceResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/instances/{instanceId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/core_virtualnetwork_client.go b/vendor/github.com/oracle/oci-go-sdk/core/core_virtualnetwork_client.go new file mode 100644 index 000000000..2888effff --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/core_virtualnetwork_client.go @@ -0,0 +1,2009 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//VirtualNetworkClient a client for VirtualNetwork +type VirtualNetworkClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +// NewVirtualNetworkClientWithConfigurationProvider Creates a new default VirtualNetwork client with the given configuration provider. +// the configuration provider will be used for the default signer as well as reading the region +func NewVirtualNetworkClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client VirtualNetworkClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + + client = VirtualNetworkClient{BaseClient: baseClient} + client.BasePath = "20160918" + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. +func (client *VirtualNetworkClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "iaas", region) +} + +// SetConfigurationProvider sets the configuration provider including the region, returns an error if is not valid +func (client *VirtualNetworkClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or null if none set +func (client *VirtualNetworkClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// BulkAddVirtualCircuitPublicPrefixes Adds one or more customer public IP prefixes to the specified public virtual circuit. +// Use this operation (and not UpdateVirtualCircuit) +// to add prefixes to the virtual circuit. Oracle must verify the customer's ownership +// of each prefix before traffic for that prefix will flow across the virtual circuit. +func (client VirtualNetworkClient) BulkAddVirtualCircuitPublicPrefixes(ctx context.Context, request BulkAddVirtualCircuitPublicPrefixesRequest) (err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/virtualCircuits/{virtualCircuitId}/actions/bulkAddPublicPrefixes", request) + if err != nil { + return + } + + _, err = client.Call(ctx, &httpRequest) + return +} + +// BulkDeleteVirtualCircuitPublicPrefixes Removes one or more customer public IP prefixes from the specified public virtual circuit. +// Use this operation (and not UpdateVirtualCircuit) +// to remove prefixes from the virtual circuit. When the virtual circuit's state switches +// back to PROVISIONED, Oracle stops advertising the specified prefixes across the connection. +func (client VirtualNetworkClient) BulkDeleteVirtualCircuitPublicPrefixes(ctx context.Context, request BulkDeleteVirtualCircuitPublicPrefixesRequest) (err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/virtualCircuits/{virtualCircuitId}/actions/bulkDeletePublicPrefixes", request) + if err != nil { + return + } + + _, err = client.Call(ctx, &httpRequest) + return +} + +// ConnectLocalPeeringGateways Connects this local peering gateway (LPG) to another one in the same region. 
+// This operation must be called by the VCN administrator who is designated as +// the *requestor* in the peering relationship. The *acceptor* must implement +// an Identity and Access Management (IAM) policy that gives the requestor permission +// to connect to LPGs in the acceptor's compartment. Without that permission, this +// operation will fail. For more information, see +// VCN Peering (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/VCNpeering.htm). +func (client VirtualNetworkClient) ConnectLocalPeeringGateways(ctx context.Context, request ConnectLocalPeeringGatewaysRequest) (response ConnectLocalPeeringGatewaysResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/localPeeringGateways/{localPeeringGatewayId}/actions/connect", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateCpe Creates a new virtual Customer-Premises Equipment (CPE) object in the specified compartment. For +// more information, see IPSec VPNs (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingIPsec.htm). +// For the purposes of access control, you must provide the OCID of the compartment where you want +// the CPE to reside. Notice that the CPE doesn't have to be in the same compartment as the IPSec +// connection or other Networking Service components. If you're not sure which compartment to +// use, put the CPE in the same compartment as the DRG. For more information about +// compartments and access control, see Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about OCIDs, see Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You must provide the public IP address of your on-premises router. See +// Configuring Your On-Premises Router for an IPSec VPN (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/configuringCPE.htm). +// You may optionally specify a *display name* for the CPE, otherwise a default is provided. It does not have to +// be unique, and you can change it. Avoid entering confidential information. +func (client VirtualNetworkClient) CreateCpe(ctx context.Context, request CreateCpeRequest) (response CreateCpeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/cpes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateCrossConnect Creates a new cross-connect. Oracle recommends you create each cross-connect in a +// CrossConnectGroup so you can use link aggregation +// with the connection. +// After creating the `CrossConnect` object, you need to go the FastConnect location +// and request to have the physical cable installed. For more information, see +// FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +// For the purposes of access control, you must provide the OCID of the +// compartment where you want the cross-connect to reside. 
If you're +// not sure which compartment to use, put the cross-connect in the +// same compartment with your VCN. For more information about +// compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the cross-connect. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +func (client VirtualNetworkClient) CreateCrossConnect(ctx context.Context, request CreateCrossConnectRequest) (response CreateCrossConnectResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/crossConnects", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateCrossConnectGroup Creates a new cross-connect group to use with Oracle Cloud Infrastructure +// FastConnect. For more information, see +// FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +// For the purposes of access control, you must provide the OCID of the +// compartment where you want the cross-connect group to reside. If you're +// not sure which compartment to use, put the cross-connect group in the +// same compartment with your VCN. For more information about +// compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the cross-connect group. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +func (client VirtualNetworkClient) CreateCrossConnectGroup(ctx context.Context, request CreateCrossConnectGroupRequest) (response CreateCrossConnectGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/crossConnectGroups", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateDhcpOptions Creates a new set of DHCP options for the specified VCN. For more information, see +// DhcpOptions. +// For the purposes of access control, you must provide the OCID of the compartment where you want the set of +// DHCP options to reside. Notice that the set of options doesn't have to be in the same compartment as the VCN, +// subnets, or other Networking Service components. If you're not sure which compartment to use, put the set +// of DHCP options in the same compartment as the VCN. For more information about compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). 
For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the set of DHCP options, otherwise a default is provided. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +func (client VirtualNetworkClient) CreateDhcpOptions(ctx context.Context, request CreateDhcpOptionsRequest) (response CreateDhcpOptionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/dhcps", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateDrg Creates a new Dynamic Routing Gateway (DRG) in the specified compartment. For more information, +// see Dynamic Routing Gateways (DRGs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingDRGs.htm). +// For the purposes of access control, you must provide the OCID of the compartment where you want +// the DRG to reside. Notice that the DRG doesn't have to be in the same compartment as the VCN, +// the DRG attachment, or other Networking Service components. If you're not sure which compartment +// to use, put the DRG in the same compartment as the VCN. For more information about compartments +// and access control, see Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about OCIDs, see Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the DRG, otherwise a default is provided. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +func (client VirtualNetworkClient) CreateDrg(ctx context.Context, request CreateDrgRequest) (response CreateDrgResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/drgs", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateDrgAttachment Attaches the specified DRG to the specified VCN. A VCN can be attached to only one DRG at a time, +// and vice versa. The response includes a `DrgAttachment` object with its own OCID. For more +// information about DRGs, see +// Dynamic Routing Gateways (DRGs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingDRGs.htm). +// You may optionally specify a *display name* for the attachment, otherwise a default is provided. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +// For the purposes of access control, the DRG attachment is automatically placed into the same compartment +// as the VCN. For more information about compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). 
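+// An illustrative sketch (the CreateDrgAttachmentDetails type embedded in the request
+// and its DrgId and VcnId fields are assumptions about this SDK version):
+//
+//	resp, err := client.CreateDrgAttachment(ctx, core.CreateDrgAttachmentRequest{
+//		CreateDrgAttachmentDetails: core.CreateDrgAttachmentDetails{
+//			DrgId: common.String(drgID),
+//			VcnId: common.String(vcnID),
+//		},
+//	})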
+func (client VirtualNetworkClient) CreateDrgAttachment(ctx context.Context, request CreateDrgAttachmentRequest) (response CreateDrgAttachmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/drgAttachments", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateIPSecConnection Creates a new IPSec connection between the specified DRG and CPE. For more information, see +// IPSec VPNs (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingIPsec.htm). +// In the request, you must include at least one static route to the CPE object (you're allowed a maximum +// of 10). For example: 10.0.8.0/16. +// For the purposes of access control, you must provide the OCID of the compartment where you want the +// IPSec connection to reside. Notice that the IPSec connection doesn't have to be in the same compartment +// as the DRG, CPE, or other Networking Service components. If you're not sure which compartment to +// use, put the IPSec connection in the same compartment as the DRG. For more information about +// compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about OCIDs, see Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the IPSec connection, otherwise a default is provided. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +// After creating the IPSec connection, you need to configure your on-premises router +// with tunnel-specific information returned by +// GetIPSecConnectionDeviceConfig. +// For each tunnel, that operation gives you the IP address of Oracle's VPN headend and the shared secret +// (that is, the pre-shared key). For more information, see +// Configuring Your On-Premises Router for an IPSec VPN (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/configuringCPE.htm). +// To get the status of the tunnels (whether they're up or down), use +// GetIPSecConnectionDeviceStatus. +func (client VirtualNetworkClient) CreateIPSecConnection(ctx context.Context, request CreateIPSecConnectionRequest) (response CreateIPSecConnectionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/ipsecConnections", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateInternetGateway Creates a new Internet Gateway for the specified VCN. For more information, see +// Connectivity to the Internet (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingIGs.htm). +// For the purposes of access control, you must provide the OCID of the compartment where you want the Internet +// Gateway to reside. Notice that the Internet Gateway doesn't have to be in the same compartment as the VCN or +// other Networking Service components. 
If you're not sure which compartment to use, put the Internet +// Gateway in the same compartment with the VCN. For more information about compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the Internet Gateway, otherwise a default is provided. It +// does not have to be unique, and you can change it. Avoid entering confidential information. +// For traffic to flow between a subnet and an Internet Gateway, you must create a route rule accordingly in +// the subnet's route table (for example, 0.0.0.0/0 > Internet Gateway). See +// UpdateRouteTable. +// You must specify whether the Internet Gateway is enabled when you create it. If it's disabled, that means no +// traffic will flow to/from the internet even if there's a route rule that enables that traffic. You can later +// use UpdateInternetGateway to easily disable/enable +// the gateway without changing the route rule. +func (client VirtualNetworkClient) CreateInternetGateway(ctx context.Context, request CreateInternetGatewayRequest) (response CreateInternetGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/internetGateways", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateLocalPeeringGateway Creates a new local peering gateway (LPG) for the specified VCN. +func (client VirtualNetworkClient) CreateLocalPeeringGateway(ctx context.Context, request CreateLocalPeeringGatewayRequest) (response CreateLocalPeeringGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/localPeeringGateways", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreatePrivateIp Creates a secondary private IP for the specified VNIC. +// For more information about secondary private IPs, see +// IP Addresses (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingIPaddresses.htm). +func (client VirtualNetworkClient) CreatePrivateIp(ctx context.Context, request CreatePrivateIpRequest) (response CreatePrivateIpResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/privateIps", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateRouteTable Creates a new route table for the specified VCN. In the request you must also include at least one route +// rule for the new route table. For information on the number of rules you can have in a route table, see +// Service Limits (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/servicelimits.htm). 
For general information about route +// tables in your VCN and the types of targets you can use in route rules, +// see Route Tables (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm). +// For the purposes of access control, you must provide the OCID of the compartment where you want the route +// table to reside. Notice that the route table doesn't have to be in the same compartment as the VCN, subnets, +// or other Networking Service components. If you're not sure which compartment to use, put the route +// table in the same compartment as the VCN. For more information about compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the route table, otherwise a default is provided. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +func (client VirtualNetworkClient) CreateRouteTable(ctx context.Context, request CreateRouteTableRequest) (response CreateRouteTableResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/routeTables", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateSecurityList Creates a new security list for the specified VCN. For more information +// about security lists, see Security Lists (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/securitylists.htm). +// For information on the number of rules you can have in a security list, see +// Service Limits (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/servicelimits.htm). +// For the purposes of access control, you must provide the OCID of the compartment where you want the security +// list to reside. Notice that the security list doesn't have to be in the same compartment as the VCN, subnets, +// or other Networking Service components. If you're not sure which compartment to use, put the security +// list in the same compartment as the VCN. For more information about compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the security list, otherwise a default is provided. +// It does not have to be unique, and you can change it. Avoid entering confidential information. 
+func (client VirtualNetworkClient) CreateSecurityList(ctx context.Context, request CreateSecurityListRequest) (response CreateSecurityListResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/securityLists", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateSubnet Creates a new subnet in the specified VCN. You can't change the size of the subnet after creation, +// so it's important to think about the size of subnets you need before creating them. +// For more information, see VCNs and Subnets (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVCNs.htm). +// For information on the number of subnets you can have in a VCN, see +// Service Limits (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/servicelimits.htm). +// For the purposes of access control, you must provide the OCID of the compartment where you want the subnet +// to reside. Notice that the subnet doesn't have to be in the same compartment as the VCN, route tables, or +// other Networking Service components. If you're not sure which compartment to use, put the subnet in +// the same compartment as the VCN. For more information about compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). For information about OCIDs, +// see Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally associate a route table with the subnet. If you don't, the subnet will use the +// VCN's default route table. For more information about route tables, see +// Route Tables (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm). +// You may optionally associate a security list with the subnet. If you don't, the subnet will use the +// VCN's default security list. For more information about security lists, see +// Security Lists (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/securitylists.htm). +// You may optionally associate a set of DHCP options with the subnet. If you don't, the subnet will use the +// VCN's default set. For more information about DHCP options, see +// DHCP Options (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingDHCP.htm). +// You may optionally specify a *display name* for the subnet, otherwise a default is provided. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +// You can also add a DNS label for the subnet, which is required if you want the Internet and +// VCN Resolver to resolve hostnames for instances in the subnet. For more information, see +// DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). 
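+// Example: a minimal sketch of creating a subnet (illustrative only; the
+// CreateSubnetDetails field names shown here are assumed from the request model
+// defined elsewhere in this package; route table, security lists, DHCP options,
+// and DNS label are optional and omitted, so the VCN defaults apply):
+//
+//	resp, err := client.CreateSubnet(ctx, CreateSubnetRequest{
+//		CreateSubnetDetails: CreateSubnetDetails{
+//			CompartmentId:      common.String("ocid1.compartment.oc1..exampleuniqueID"),
+//			VcnId:              common.String("ocid1.vcn.oc1.phx.exampleuniqueID"),
+//			AvailabilityDomain: common.String("Uocm:PHX-AD-1"),
+//			CidrBlock:          common.String("10.0.1.0/24"),
+//			DisplayName:        common.String("example-subnet"),
+//		},
+//	})
+//	if err != nil {
+//		// handle the error
+//	}
+//	_ = resp // the returned Subnet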
+func (client VirtualNetworkClient) CreateSubnet(ctx context.Context, request CreateSubnetRequest) (response CreateSubnetResponse, err error) {
+	httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/subnets", request)
+	if err != nil {
+		return
+	}
+
+	httpResponse, err := client.Call(ctx, &httpRequest)
+	defer common.CloseBodyIfValid(httpResponse)
+	response.RawResponse = httpResponse
+	if err != nil {
+		return
+	}
+
+	err = common.UnmarshalResponse(httpResponse, &response)
+	return
+}
+
+// CreateVcn Creates a new Virtual Cloud Network (VCN). For more information, see
+// VCNs and Subnets (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVCNs.htm).
+// For the VCN you must specify a single, contiguous IPv4 CIDR block. Oracle recommends using one of the
+// private IP address ranges specified in RFC 1918 (https://tools.ietf.org/html/rfc1918) (10.0.0.0/8,
+// 172.16.0.0/12, and 192.168.0.0/16). Example: 172.16.0.0/16. The CIDR block can range from /16 to /30, and it
+// must not overlap with your on-premises network. You can't change the size of the VCN after creation.
+// For the purposes of access control, you must provide the OCID of the compartment where you want the VCN to
+// reside. Consult an Oracle Cloud Infrastructure administrator in your organization if you're not sure which
+// compartment to use. Notice that the VCN doesn't have to be in the same compartment as the subnets or other
+// Networking Service components. For more information about compartments and access control, see
+// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). For information about OCIDs, see
+// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm).
+// You may optionally specify a *display name* for the VCN, otherwise a default is provided. It does not have to
+// be unique, and you can change it. Avoid entering confidential information.
+// You can also add a DNS label for the VCN, which is required if you want the instances to use the
+// Internet and VCN Resolver option for DNS in the VCN. For more information, see
+// DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm).
+// The VCN automatically comes with a default route table, default security list, and default set of DHCP options.
+// The OCID for each is returned in the response. You can't delete these default objects, but you can change their
+// contents (that is, change the route rules, security list rules, and so on).
+// The VCN and subnets you create are not accessible until you attach an Internet Gateway or set up an IPSec VPN
+// or FastConnect. For more information, see
+// Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm).
+func (client VirtualNetworkClient) CreateVcn(ctx context.Context, request CreateVcnRequest) (response CreateVcnResponse, err error) {
+	httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/vcns", request)
+	if err != nil {
+		return
+	}
+
+	httpResponse, err := client.Call(ctx, &httpRequest)
+	defer common.CloseBodyIfValid(httpResponse)
+	response.RawResponse = httpResponse
+	if err != nil {
+		return
+	}
+
+	err = common.UnmarshalResponse(httpResponse, &response)
+	return
+}
+
+// CreateVirtualCircuit Creates a new virtual circuit to use with Oracle Cloud
+// Infrastructure FastConnect.
For more information, see +// FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +// For the purposes of access control, you must provide the OCID of the +// compartment where you want the virtual circuit to reside. If you're +// not sure which compartment to use, put the virtual circuit in the +// same compartment with the DRG it's using. For more information about +// compartments and access control, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You may optionally specify a *display name* for the virtual circuit. +// It does not have to be unique, and you can change it. Avoid entering confidential information. +// **Important:** When creating a virtual circuit, you specify a DRG for +// the traffic to flow through. Make sure you attach the DRG to your +// VCN and confirm the VCN's routing sends traffic to the DRG. Otherwise +// traffic will not flow. For more information, see +// Route Tables (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm). +func (client VirtualNetworkClient) CreateVirtualCircuit(ctx context.Context, request CreateVirtualCircuitRequest) (response CreateVirtualCircuitResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/virtualCircuits", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteCpe Deletes the specified CPE object. The CPE must not be connected to a DRG. This is an asynchronous +// operation. The CPE's `lifecycleState` will change to TERMINATING temporarily until the CPE is completely +// removed. +func (client VirtualNetworkClient) DeleteCpe(ctx context.Context, request DeleteCpeRequest) (response DeleteCpeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/cpes/{cpeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteCrossConnect Deletes the specified cross-connect. It must not be mapped to a +// VirtualCircuit. +func (client VirtualNetworkClient) DeleteCrossConnect(ctx context.Context, request DeleteCrossConnectRequest) (response DeleteCrossConnectResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/crossConnects/{crossConnectId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteCrossConnectGroup Deletes the specified cross-connect group. It must not contain any +// cross-connects, and it cannot be mapped to a +// VirtualCircuit. 
+func (client VirtualNetworkClient) DeleteCrossConnectGroup(ctx context.Context, request DeleteCrossConnectGroupRequest) (response DeleteCrossConnectGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/crossConnectGroups/{crossConnectGroupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteDhcpOptions Deletes the specified set of DHCP options, but only if it's not associated with a subnet. You can't delete a +// VCN's default set of DHCP options. +// This is an asynchronous operation. The state of the set of options will switch to TERMINATING temporarily +// until the set is completely removed. +func (client VirtualNetworkClient) DeleteDhcpOptions(ctx context.Context, request DeleteDhcpOptionsRequest) (response DeleteDhcpOptionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/dhcps/{dhcpId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteDrg Deletes the specified DRG. The DRG must not be attached to a VCN or be connected to your on-premise +// network. Also, there must not be a route table that lists the DRG as a target. This is an asynchronous +// operation. The DRG's `lifecycleState` will change to TERMINATING temporarily until the DRG is completely +// removed. +func (client VirtualNetworkClient) DeleteDrg(ctx context.Context, request DeleteDrgRequest) (response DeleteDrgResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/drgs/{drgId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteDrgAttachment Detaches a DRG from a VCN by deleting the corresponding `DrgAttachment`. This is an asynchronous +// operation. The attachment's `lifecycleState` will change to DETACHING temporarily until the attachment +// is completely removed. +func (client VirtualNetworkClient) DeleteDrgAttachment(ctx context.Context, request DeleteDrgAttachmentRequest) (response DeleteDrgAttachmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/drgAttachments/{drgAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteIPSecConnection Deletes the specified IPSec connection. If your goal is to disable the IPSec VPN between your VCN and +// on-premises network, it's easiest to simply detach the DRG but keep all the IPSec VPN components intact. 
+// If you were to delete all the components and then later need to create an IPSec VPN again, you would +// need to configure your on-premises router again with the new information returned from +// CreateIPSecConnection. +// This is an asynchronous operation. The connection's `lifecycleState` will change to TERMINATING temporarily +// until the connection is completely removed. +func (client VirtualNetworkClient) DeleteIPSecConnection(ctx context.Context, request DeleteIPSecConnectionRequest) (response DeleteIPSecConnectionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/ipsecConnections/{ipscId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteInternetGateway Deletes the specified Internet Gateway. The Internet Gateway does not have to be disabled, but +// there must not be a route table that lists it as a target. +// This is an asynchronous operation. The gateway's `lifecycleState` will change to TERMINATING temporarily +// until the gateway is completely removed. +func (client VirtualNetworkClient) DeleteInternetGateway(ctx context.Context, request DeleteInternetGatewayRequest) (response DeleteInternetGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/internetGateways/{igId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteLocalPeeringGateway Deletes the specified local peering gateway (LPG). +// This is an asynchronous operation; the local peering gateway's `lifecycleState` changes to TERMINATING temporarily +// until the local peering gateway is completely removed. +func (client VirtualNetworkClient) DeleteLocalPeeringGateway(ctx context.Context, request DeleteLocalPeeringGatewayRequest) (response DeleteLocalPeeringGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/localPeeringGateways/{localPeeringGatewayId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeletePrivateIp Unassigns and deletes the specified private IP. You must +// specify the object's OCID. The private IP address is returned to +// the subnet's pool of available addresses. +// This operation cannot be used with primary private IPs, which are +// automatically unassigned and deleted when the VNIC is terminated. +// **Important:** If a secondary private IP is the +// target of a route rule (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm#privateip), +// unassigning it from the VNIC causes that route rule to blackhole and the traffic +// will be dropped. 
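+// Example: a minimal sketch of unassigning and deleting a secondary private IP
+// (illustrative only; the PrivateIpId field is assumed from the request model defined
+// elsewhere in this package):
+//
+//	_, err := client.DeletePrivateIp(ctx, DeletePrivateIpRequest{
+//		PrivateIpId: common.String("ocid1.privateip.oc1.phx.exampleuniqueID"),
+//	})
+//	if err != nil {
+//		// handle the error
+//	}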
+func (client VirtualNetworkClient) DeletePrivateIp(ctx context.Context, request DeletePrivateIpRequest) (response DeletePrivateIpResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/privateIps/{privateIpId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteRouteTable Deletes the specified route table, but only if it's not associated with a subnet. You can't delete a +// VCN's default route table. +// This is an asynchronous operation. The route table's `lifecycleState` will change to TERMINATING temporarily +// until the route table is completely removed. +func (client VirtualNetworkClient) DeleteRouteTable(ctx context.Context, request DeleteRouteTableRequest) (response DeleteRouteTableResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/routeTables/{rtId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteSecurityList Deletes the specified security list, but only if it's not associated with a subnet. You can't delete +// a VCN's default security list. +// This is an asynchronous operation. The security list's `lifecycleState` will change to TERMINATING temporarily +// until the security list is completely removed. +func (client VirtualNetworkClient) DeleteSecurityList(ctx context.Context, request DeleteSecurityListRequest) (response DeleteSecurityListResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/securityLists/{securityListId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteSubnet Deletes the specified subnet, but only if there are no instances in the subnet. This is an asynchronous +// operation. The subnet's `lifecycleState` will change to TERMINATING temporarily. If there are any +// instances in the subnet, the state will instead change back to AVAILABLE. +func (client VirtualNetworkClient) DeleteSubnet(ctx context.Context, request DeleteSubnetRequest) (response DeleteSubnetResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/subnets/{subnetId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteVcn Deletes the specified VCN. The VCN must be empty and have no attached gateways. This is an asynchronous +// operation. The VCN's `lifecycleState` will change to TERMINATING temporarily until the VCN is completely +// removed. 
+func (client VirtualNetworkClient) DeleteVcn(ctx context.Context, request DeleteVcnRequest) (response DeleteVcnResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/vcns/{vcnId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteVirtualCircuit Deletes the specified virtual circuit. +// **Important:** If you're using FastConnect via a provider, +// make sure to also terminate the connection with +// the provider, or else the provider may continue to bill you. +func (client VirtualNetworkClient) DeleteVirtualCircuit(ctx context.Context, request DeleteVirtualCircuitRequest) (response DeleteVirtualCircuitResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/virtualCircuits/{virtualCircuitId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetCpe Gets the specified CPE's information. +func (client VirtualNetworkClient) GetCpe(ctx context.Context, request GetCpeRequest) (response GetCpeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/cpes/{cpeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetCrossConnect Gets the specified cross-connect's information. +func (client VirtualNetworkClient) GetCrossConnect(ctx context.Context, request GetCrossConnectRequest) (response GetCrossConnectResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnects/{crossConnectId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetCrossConnectGroup Gets the specified cross-connect group's information. +func (client VirtualNetworkClient) GetCrossConnectGroup(ctx context.Context, request GetCrossConnectGroupRequest) (response GetCrossConnectGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnectGroups/{crossConnectGroupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetCrossConnectLetterOfAuthority Gets the Letter of Authority for the specified cross-connect. 
+func (client VirtualNetworkClient) GetCrossConnectLetterOfAuthority(ctx context.Context, request GetCrossConnectLetterOfAuthorityRequest) (response GetCrossConnectLetterOfAuthorityResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnects/{crossConnectId}/letterOfAuthority", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetCrossConnectStatus Gets the status of the specified cross-connect. +func (client VirtualNetworkClient) GetCrossConnectStatus(ctx context.Context, request GetCrossConnectStatusRequest) (response GetCrossConnectStatusResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnects/{crossConnectId}/status", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDhcpOptions Gets the specified set of DHCP options. +func (client VirtualNetworkClient) GetDhcpOptions(ctx context.Context, request GetDhcpOptionsRequest) (response GetDhcpOptionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dhcps/{dhcpId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDrg Gets the specified DRG's information. +func (client VirtualNetworkClient) GetDrg(ctx context.Context, request GetDrgRequest) (response GetDrgResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/drgs/{drgId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDrgAttachment Gets the information for the specified `DrgAttachment`. +func (client VirtualNetworkClient) GetDrgAttachment(ctx context.Context, request GetDrgAttachmentRequest) (response GetDrgAttachmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/drgAttachments/{drgAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetFastConnectProviderService Gets the specified provider service. +// For more information, see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). 
+func (client VirtualNetworkClient) GetFastConnectProviderService(ctx context.Context, request GetFastConnectProviderServiceRequest) (response GetFastConnectProviderServiceResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/fastConnectProviderServices/{providerServiceId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetIPSecConnection Gets the specified IPSec connection's basic information, including the static routes for the +// on-premises router. If you want the status of the connection (whether it's up or down), use +// GetIPSecConnectionDeviceStatus. +func (client VirtualNetworkClient) GetIPSecConnection(ctx context.Context, request GetIPSecConnectionRequest) (response GetIPSecConnectionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/ipsecConnections/{ipscId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetIPSecConnectionDeviceConfig Gets the configuration information for the specified IPSec connection. For each tunnel, the +// response includes the IP address of Oracle's VPN headend and the shared secret. +func (client VirtualNetworkClient) GetIPSecConnectionDeviceConfig(ctx context.Context, request GetIPSecConnectionDeviceConfigRequest) (response GetIPSecConnectionDeviceConfigResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/ipsecConnections/{ipscId}/deviceConfig", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetIPSecConnectionDeviceStatus Gets the status of the specified IPSec connection (whether it's up or down). +func (client VirtualNetworkClient) GetIPSecConnectionDeviceStatus(ctx context.Context, request GetIPSecConnectionDeviceStatusRequest) (response GetIPSecConnectionDeviceStatusResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/ipsecConnections/{ipscId}/deviceStatus", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetInternetGateway Gets the specified Internet Gateway's information. 
+func (client VirtualNetworkClient) GetInternetGateway(ctx context.Context, request GetInternetGatewayRequest) (response GetInternetGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/internetGateways/{igId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetLocalPeeringGateway Gets the specified local peering gateway's information. +func (client VirtualNetworkClient) GetLocalPeeringGateway(ctx context.Context, request GetLocalPeeringGatewayRequest) (response GetLocalPeeringGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/localPeeringGateways/{localPeeringGatewayId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetPrivateIp Gets the specified private IP. You must specify the object's OCID. +// Alternatively, you can get the object by using +// ListPrivateIps +// with the private IP address (for example, 10.0.3.3) and subnet OCID. +func (client VirtualNetworkClient) GetPrivateIp(ctx context.Context, request GetPrivateIpRequest) (response GetPrivateIpResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/privateIps/{privateIpId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetRouteTable Gets the specified route table's information. +func (client VirtualNetworkClient) GetRouteTable(ctx context.Context, request GetRouteTableRequest) (response GetRouteTableResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/routeTables/{rtId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetSecurityList Gets the specified security list's information. +func (client VirtualNetworkClient) GetSecurityList(ctx context.Context, request GetSecurityListRequest) (response GetSecurityListResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/securityLists/{securityListId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetSubnet Gets the specified subnet's information. 
+func (client VirtualNetworkClient) GetSubnet(ctx context.Context, request GetSubnetRequest) (response GetSubnetResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/subnets/{subnetId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetVcn Gets the specified VCN's information. +func (client VirtualNetworkClient) GetVcn(ctx context.Context, request GetVcnRequest) (response GetVcnResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/vcns/{vcnId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetVirtualCircuit Gets the specified virtual circuit's information. +func (client VirtualNetworkClient) GetVirtualCircuit(ctx context.Context, request GetVirtualCircuitRequest) (response GetVirtualCircuitResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/virtualCircuits/{virtualCircuitId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetVnic Gets the information for the specified virtual network interface card (VNIC). +// You can get the VNIC OCID from the +// ListVnicAttachments +// operation. +func (client VirtualNetworkClient) GetVnic(ctx context.Context, request GetVnicRequest) (response GetVnicResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/vnics/{vnicId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCpes Lists the Customer-Premises Equipment objects (CPEs) in the specified compartment. +func (client VirtualNetworkClient) ListCpes(ctx context.Context, request ListCpesRequest) (response ListCpesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/cpes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCrossConnectGroups Lists the cross-connect groups in the specified compartment. 
+func (client VirtualNetworkClient) ListCrossConnectGroups(ctx context.Context, request ListCrossConnectGroupsRequest) (response ListCrossConnectGroupsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnectGroups", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCrossConnectLocations Lists the available FastConnect locations for cross-connect installation. You need +// this information so you can specify your desired location when you create a cross-connect. +func (client VirtualNetworkClient) ListCrossConnectLocations(ctx context.Context, request ListCrossConnectLocationsRequest) (response ListCrossConnectLocationsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnectLocations", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCrossConnects Lists the cross-connects in the specified compartment. You can filter the list +// by specifying the OCID of a cross-connect group. +func (client VirtualNetworkClient) ListCrossConnects(ctx context.Context, request ListCrossConnectsRequest) (response ListCrossConnectsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnects", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCrossconnectPortSpeedShapes Lists the available port speeds for cross-connects. You need this information +// so you can specify your desired port speed (that is, shape) when you create a +// cross-connect. +func (client VirtualNetworkClient) ListCrossconnectPortSpeedShapes(ctx context.Context, request ListCrossconnectPortSpeedShapesRequest) (response ListCrossconnectPortSpeedShapesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/crossConnectPortSpeedShapes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDhcpOptions Lists the sets of DHCP options in the specified VCN and specified compartment. +// The response includes the default set of options that automatically comes with each VCN, +// plus any other sets you've created. 
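+// Example: a minimal sketch of listing the DHCP option sets in a VCN (illustrative
+// only; the CompartmentId and VcnId filter fields and the Items slice on the response
+// are assumed from the models defined elsewhere in this package):
+//
+//	resp, err := client.ListDhcpOptions(ctx, ListDhcpOptionsRequest{
+//		CompartmentId: common.String("ocid1.compartment.oc1..exampleuniqueID"),
+//		VcnId:         common.String("ocid1.vcn.oc1.phx.exampleuniqueID"),
+//	})
+//	if err != nil {
+//		// handle the error
+//	}
+//	for _, opts := range resp.Items {
+//		_ = opts // each set of DHCP options, including the VCN's default set
+//	}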
+func (client VirtualNetworkClient) ListDhcpOptions(ctx context.Context, request ListDhcpOptionsRequest) (response ListDhcpOptionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dhcps", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDrgAttachments Lists the `DrgAttachment` objects for the specified compartment. You can filter the +// results by VCN or DRG. +func (client VirtualNetworkClient) ListDrgAttachments(ctx context.Context, request ListDrgAttachmentsRequest) (response ListDrgAttachmentsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/drgAttachments", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDrgs Lists the DRGs in the specified compartment. +func (client VirtualNetworkClient) ListDrgs(ctx context.Context, request ListDrgsRequest) (response ListDrgsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/drgs", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListFastConnectProviderServices Lists the service offerings from supported providers. You need this +// information so you can specify your desired provider and service +// offering when you create a virtual circuit. +// For the compartment ID, provide the OCID of your tenancy (the root compartment). +// For more information, see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +func (client VirtualNetworkClient) ListFastConnectProviderServices(ctx context.Context, request ListFastConnectProviderServicesRequest) (response ListFastConnectProviderServicesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/fastConnectProviderServices", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListFastConnectProviderVirtualCircuitBandwidthShapes Gets the list of available virtual circuit bandwidth levels for a provider. +// You need this information so you can specify your desired bandwidth level (shape) when you create a virtual circuit. +// For more information about virtual circuits, see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). 
+func (client VirtualNetworkClient) ListFastConnectProviderVirtualCircuitBandwidthShapes(ctx context.Context, request ListFastConnectProviderVirtualCircuitBandwidthShapesRequest) (response ListFastConnectProviderVirtualCircuitBandwidthShapesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/fastConnectProviderServices/{providerServiceId}/virtualCircuitBandwidthShapes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListIPSecConnections Lists the IPSec connections for the specified compartment. You can filter the +// results by DRG or CPE. +func (client VirtualNetworkClient) ListIPSecConnections(ctx context.Context, request ListIPSecConnectionsRequest) (response ListIPSecConnectionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/ipsecConnections", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListInternetGateways Lists the Internet Gateways in the specified VCN and the specified compartment. +func (client VirtualNetworkClient) ListInternetGateways(ctx context.Context, request ListInternetGatewaysRequest) (response ListInternetGatewaysResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/internetGateways", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListLocalPeeringGateways Lists the local peering gateways (LPGs) for the specified VCN and compartment +// (the LPG's compartment). +func (client VirtualNetworkClient) ListLocalPeeringGateways(ctx context.Context, request ListLocalPeeringGatewaysRequest) (response ListLocalPeeringGatewaysResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/localPeeringGateways", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListPrivateIps Lists the PrivateIp objects based +// on one of these filters: +// - Subnet OCID. +// - VNIC OCID. +// - Both private IP address and subnet OCID: This lets +// you get a `privateIP` object based on its private IP +// address (for example, 10.0.3.3) and not its OCID. For comparison, +// GetPrivateIp +// requires the OCID. +// If you're listing all the private IPs associated with a given subnet +// or VNIC, the response includes both primary and secondary private IPs. 
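+// Example: a minimal sketch of the address-based lookup described above (illustrative
+// only; the IpAddress and SubnetId filter fields and the Items slice on the response
+// are assumed from the models defined elsewhere in this package):
+//
+//	resp, err := client.ListPrivateIps(ctx, ListPrivateIpsRequest{
+//		IpAddress: common.String("10.0.3.3"),
+//		SubnetId:  common.String("ocid1.subnet.oc1.phx.exampleuniqueID"),
+//	})
+//	if err != nil {
+//		// handle the error
+//	}
+//	for _, ip := range resp.Items {
+//		_ = ip // the PrivateIp object matching 10.0.3.3 in the given subnet
+//	}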
+func (client VirtualNetworkClient) ListPrivateIps(ctx context.Context, request ListPrivateIpsRequest) (response ListPrivateIpsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/privateIps", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListRouteTables Lists the route tables in the specified VCN and specified compartment. The response +// includes the default route table that automatically comes with each VCN, plus any route tables +// you've created. +func (client VirtualNetworkClient) ListRouteTables(ctx context.Context, request ListRouteTablesRequest) (response ListRouteTablesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/routeTables", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListSecurityLists Lists the security lists in the specified VCN and compartment. +func (client VirtualNetworkClient) ListSecurityLists(ctx context.Context, request ListSecurityListsRequest) (response ListSecurityListsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/securityLists", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListSubnets Lists the subnets in the specified VCN and the specified compartment. +func (client VirtualNetworkClient) ListSubnets(ctx context.Context, request ListSubnetsRequest) (response ListSubnetsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/subnets", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListVcns Lists the Virtual Cloud Networks (VCNs) in the specified compartment. +func (client VirtualNetworkClient) ListVcns(ctx context.Context, request ListVcnsRequest) (response ListVcnsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/vcns", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListVirtualCircuitBandwidthShapes The deprecated operation lists available bandwidth levels for virtual circuits. For the compartment ID, provide the OCID of your tenancy (the root compartment). 
+func (client VirtualNetworkClient) ListVirtualCircuitBandwidthShapes(ctx context.Context, request ListVirtualCircuitBandwidthShapesRequest) (response ListVirtualCircuitBandwidthShapesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/virtualCircuitBandwidthShapes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListVirtualCircuitPublicPrefixes Lists the public IP prefixes and their details for the specified +// public virtual circuit. +func (client VirtualNetworkClient) ListVirtualCircuitPublicPrefixes(ctx context.Context, request ListVirtualCircuitPublicPrefixesRequest) (response ListVirtualCircuitPublicPrefixesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/virtualCircuits/{virtualCircuitId}/publicPrefixes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListVirtualCircuits Lists the virtual circuits in the specified compartment. +func (client VirtualNetworkClient) ListVirtualCircuits(ctx context.Context, request ListVirtualCircuitsRequest) (response ListVirtualCircuitsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/virtualCircuits", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateCpe Updates the specified CPE's display name. +// Avoid entering confidential information. +func (client VirtualNetworkClient) UpdateCpe(ctx context.Context, request UpdateCpeRequest) (response UpdateCpeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/cpes/{cpeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateCrossConnect Updates the specified cross-connect. +func (client VirtualNetworkClient) UpdateCrossConnect(ctx context.Context, request UpdateCrossConnectRequest) (response UpdateCrossConnectResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/crossConnects/{crossConnectId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateCrossConnectGroup Updates the specified cross-connect group's display name. +// Avoid entering confidential information. 
+func (client VirtualNetworkClient) UpdateCrossConnectGroup(ctx context.Context, request UpdateCrossConnectGroupRequest) (response UpdateCrossConnectGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/crossConnectGroups/{crossConnectGroupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateDhcpOptions Updates the specified set of DHCP options. You can update the display name or the options +// themselves. Avoid entering confidential information. +// Note that the `options` object you provide replaces the entire existing set of options. +func (client VirtualNetworkClient) UpdateDhcpOptions(ctx context.Context, request UpdateDhcpOptionsRequest) (response UpdateDhcpOptionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/dhcps/{dhcpId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateDrg Updates the specified DRG's display name. Avoid entering confidential information. +func (client VirtualNetworkClient) UpdateDrg(ctx context.Context, request UpdateDrgRequest) (response UpdateDrgResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/drgs/{drgId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateDrgAttachment Updates the display name for the specified `DrgAttachment`. +// Avoid entering confidential information. +func (client VirtualNetworkClient) UpdateDrgAttachment(ctx context.Context, request UpdateDrgAttachmentRequest) (response UpdateDrgAttachmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/drgAttachments/{drgAttachmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateIPSecConnection Updates the display name for the specified IPSec connection. +// Avoid entering confidential information. +func (client VirtualNetworkClient) UpdateIPSecConnection(ctx context.Context, request UpdateIPSecConnectionRequest) (response UpdateIPSecConnectionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/ipsecConnections/{ipscId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateInternetGateway Updates the specified Internet Gateway. You can disable/enable it, or change its display name. 
+// Avoid entering confidential information. +// If the gateway is disabled, that means no traffic will flow to/from the internet even if there's +// a route rule that enables that traffic. +func (client VirtualNetworkClient) UpdateInternetGateway(ctx context.Context, request UpdateInternetGatewayRequest) (response UpdateInternetGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/internetGateways/{igId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateLocalPeeringGateway Updates the specified local peering gateway (LPG). +func (client VirtualNetworkClient) UpdateLocalPeeringGateway(ctx context.Context, request UpdateLocalPeeringGatewayRequest) (response UpdateLocalPeeringGatewayResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/localPeeringGateways/{localPeeringGatewayId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdatePrivateIp Updates the specified private IP. You must specify the object's OCID. +// Use this operation if you want to: +// - Move a secondary private IP to a different VNIC in the same subnet. +// - Change the display name for a secondary private IP. +// - Change the hostname for a secondary private IP. +// This operation cannot be used with primary private IPs. +// To update the hostname for the primary IP on a VNIC, use +// UpdateVnic. +func (client VirtualNetworkClient) UpdatePrivateIp(ctx context.Context, request UpdatePrivateIpRequest) (response UpdatePrivateIpResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/privateIps/{privateIpId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateRouteTable Updates the specified route table's display name or route rules. +// Avoid entering confidential information. +// Note that the `routeRules` object you provide replaces the entire existing set of rules. +func (client VirtualNetworkClient) UpdateRouteTable(ctx context.Context, request UpdateRouteTableRequest) (response UpdateRouteTableResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/routeTables/{rtId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateSecurityList Updates the specified security list's display name or rules. +// Avoid entering confidential information. +// Note that the `egressSecurityRules` or `ingressSecurityRules` objects you provide replace the entire +// existing objects. 
+func (client VirtualNetworkClient) UpdateSecurityList(ctx context.Context, request UpdateSecurityListRequest) (response UpdateSecurityListResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/securityLists/{securityListId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateSubnet Updates the specified subnet's display name. Avoid entering confidential information. +func (client VirtualNetworkClient) UpdateSubnet(ctx context.Context, request UpdateSubnetRequest) (response UpdateSubnetResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/subnets/{subnetId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateVcn Updates the specified VCN's display name. +// Avoid entering confidential information. +func (client VirtualNetworkClient) UpdateVcn(ctx context.Context, request UpdateVcnRequest) (response UpdateVcnResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/vcns/{vcnId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateVirtualCircuit Updates the specified virtual circuit. This can be called by +// either the customer who owns the virtual circuit, or the +// provider (when provisioning or de-provisioning the virtual +// circuit from their end). The documentation for +// UpdateVirtualCircuitDetails +// indicates who can update each property of the virtual circuit. +// **Important:** If the virtual circuit is working and in the +// PROVISIONED state, updating any of the network-related properties +// (such as the DRG being used, the BGP ASN, and so on) will cause the virtual +// circuit's state to switch to PROVISIONING and the related BGP +// session to go down. After Oracle re-provisions the virtual circuit, +// its state will return to PROVISIONED. Make sure you confirm that +// the associated BGP session is back up. For more information +// about the various states and how to test connectivity, see +// FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +// To change the list of public IP prefixes for a public virtual circuit, +// use BulkAddVirtualCircuitPublicPrefixes +// and +// BulkDeleteVirtualCircuitPublicPrefixes. +// Updating the list of prefixes does NOT cause the BGP session to go down. However, +// Oracle must verify the customer's ownership of each added prefix before +// traffic for that prefix will flow across the virtual circuit. 
+func (client VirtualNetworkClient) UpdateVirtualCircuit(ctx context.Context, request UpdateVirtualCircuitRequest) (response UpdateVirtualCircuitResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/virtualCircuits/{virtualCircuitId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateVnic Updates the specified VNIC. +func (client VirtualNetworkClient) UpdateVnic(ctx context.Context, request UpdateVnicRequest) (response UpdateVnicResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/vnics/{vnicId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/cpe.go b/vendor/github.com/oracle/oci-go-sdk/core/cpe.go new file mode 100644 index 000000000..9759c8cbf --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/cpe.go @@ -0,0 +1,45 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Cpe An object you create when setting up an IPSec VPN between your on-premises network +// and VCN. The `Cpe` is a virtual representation of your Customer-Premises Equipment, +// which is the actual router on-premises at your site at your end of the IPSec VPN connection. +// For more information, +// see Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Cpe struct { + + // The OCID of the compartment containing the CPE. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The CPE's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The public IP address of the on-premises router. + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The date and time the CPE was created, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m Cpe) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_cpe_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_cpe_details.go new file mode 100644 index 000000000..0dff34734 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_cpe_details.go @@ -0,0 +1,31 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateCpeDetails The representation of CreateCpeDetails +type CreateCpeDetails struct { + + // The OCID of the compartment to contain the CPE. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The public IP address of the on-premises router. + // Example: `143.19.23.16` + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateCpeDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_cpe_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_cpe_request_response.go new file mode 100644 index 000000000..c4a5314a8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_cpe_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateCpeRequest wrapper for the CreateCpe operation +type CreateCpeRequest struct { + + // Details for creating a CPE. + CreateCpeDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateCpeRequest) String() string { + return common.PointerString(request) +} + +// CreateCpeResponse wrapper for the CreateCpe operation +type CreateCpeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Cpe instance + Cpe `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateCpeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_details.go new file mode 100644 index 000000000..6a17ea194 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_details.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateCrossConnectDetails The representation of CreateCrossConnectDetails +type CreateCrossConnectDetails struct { + + // The OCID of the compartment to contain the cross-connect. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name of the FastConnect location where this cross-connect will be installed. + // To get a list of the available locations, see + // ListCrossConnectLocations. + // Example: `CyrusOne, Chandler, AZ` + LocationName *string `mandatory:"true" json:"locationName"` + + // The port speed for this cross-connect. To get a list of the available port speeds, see + // ListCrossconnectPortSpeedShapes. + // Example: `10 Gbps` + PortSpeedShapeName *string `mandatory:"true" json:"portSpeedShapeName"` + + // The OCID of the cross-connect group to put this cross-connect in. + CrossConnectGroupId *string `mandatory:"false" json:"crossConnectGroupId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // If you already have an existing cross-connect or cross-connect group at this FastConnect + // location, and you want this new cross-connect to be on a different router (for the + // purposes of redundancy), provide the OCID of that existing cross-connect or + // cross-connect group. + FarCrossConnectOrCrossConnectGroupId *string `mandatory:"false" json:"farCrossConnectOrCrossConnectGroupId"` + + // If you already have an existing cross-connect or cross-connect group at this FastConnect + // location, and you want this new cross-connect to be on the same router, provide the + // OCID of that existing cross-connect or cross-connect group. + NearCrossConnectOrCrossConnectGroupId *string `mandatory:"false" json:"nearCrossConnectOrCrossConnectGroupId"` +} + +func (m CreateCrossConnectDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_group_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_group_details.go new file mode 100644 index 000000000..c1232cfca --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_group_details.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateCrossConnectGroupDetails The representation of CreateCrossConnectGroupDetails +type CreateCrossConnectGroupDetails struct { + + // The OCID of the compartment to contain the cross-connect group. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateCrossConnectGroupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_group_request_response.go new file mode 100644 index 000000000..33cdec7e9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_group_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateCrossConnectGroupRequest wrapper for the CreateCrossConnectGroup operation +type CreateCrossConnectGroupRequest struct { + + // Details to create a CrossConnectGroup + CreateCrossConnectGroupDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateCrossConnectGroupRequest) String() string { + return common.PointerString(request) +} + +// CreateCrossConnectGroupResponse wrapper for the CreateCrossConnectGroup operation +type CreateCrossConnectGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CrossConnectGroup instance + CrossConnectGroup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateCrossConnectGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_request_response.go new file mode 100644 index 000000000..b9bf4a46e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_cross_connect_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateCrossConnectRequest wrapper for the CreateCrossConnect operation +type CreateCrossConnectRequest struct { + + // Details to create a CrossConnect + CreateCrossConnectDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateCrossConnectRequest) String() string { + return common.PointerString(request) +} + +// CreateCrossConnectResponse wrapper for the CreateCrossConnect operation +type CreateCrossConnectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CrossConnect instance + CrossConnect `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateCrossConnectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_dhcp_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_dhcp_details.go new file mode 100644 index 000000000..a29660b37 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_dhcp_details.go @@ -0,0 +1,61 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDhcpDetails The representation of CreateDhcpDetails +type CreateDhcpDetails struct { + + // The OCID of the compartment to contain the set of DHCP options. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A set of DHCP options. + Options []DhcpOption `mandatory:"true" json:"options"` + + // The OCID of the VCN the set of DHCP options belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. 
+ DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateDhcpDetails) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *CreateDhcpDetails) UnmarshalJSON(data []byte) (e error) { + model := struct { + DisplayName *string `json:"displayName"` + CompartmentId *string `json:"compartmentId"` + Options []dhcpoption `json:"options"` + VcnId *string `json:"vcnId"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.DisplayName = model.DisplayName + m.CompartmentId = model.CompartmentId + m.Options = make([]DhcpOption, len(model.Options)) + for i, n := range model.Options { + nn, err := n.UnmarshalPolymorphicJSON(n.JsonData) + if err != nil { + return err + } + m.Options[i] = nn + } + m.VcnId = model.VcnId + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_dhcp_options_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_dhcp_options_request_response.go new file mode 100644 index 000000000..6fed1770d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_dhcp_options_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateDhcpOptionsRequest wrapper for the CreateDhcpOptions operation +type CreateDhcpOptionsRequest struct { + + // Request object for creating a new set of DHCP options. + CreateDhcpDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateDhcpOptionsRequest) String() string { + return common.PointerString(request) +} + +// CreateDhcpOptionsResponse wrapper for the CreateDhcpOptions operation +type CreateDhcpOptionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DhcpOptions instance + DhcpOptions `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateDhcpOptionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_drg_attachment_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_attachment_details.go new file mode 100644 index 000000000..d76431fc9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_attachment_details.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
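The custom `UnmarshalJSON` on `CreateDhcpDetails` above exists because `options` is polymorphic: each entry is first decoded into an intermediate `dhcpoption` wrapper and then resolved to a concrete type. On the request side a caller simply supplies concrete option values. The sketch below assumes the `DhcpDnsOption` and `DhcpSearchDomainOption` types, their enum constants, and the client's `CreateDhcpOptions` method from elsewhere in the vendored SDK; the OCIDs and names are placeholders.

```go
package example

import (
	"context"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/core"
)

// createDhcpOptions is a hypothetical helper showing the request-side shape of the
// polymorphic `options` list; option types and enum constants are assumed from the
// rest of the SDK, and compartmentID/vcnID are placeholder OCIDs.
func createDhcpOptions(ctx context.Context, client core.VirtualNetworkClient, compartmentID, vcnID string) {
	details := core.CreateDhcpDetails{
		CompartmentId: common.String(compartmentID),
		VcnId:         common.String(vcnID),
		DisplayName:   common.String("example-dhcp-options"),
		Options: []core.DhcpOption{
			// Resolve hostnames through the VCN resolver plus internet DNS.
			core.DhcpDnsOption{ServerType: core.DhcpDnsOptionServerTypeVcnlocalplusinternet},
			// Append a custom search domain for instances in the VCN.
			core.DhcpSearchDomainOption{SearchDomainNames: []string{"example.oraclevcn.com"}},
		},
	}

	resp, err := client.CreateDhcpOptions(ctx, core.CreateDhcpOptionsRequest{
		CreateDhcpDetails: details,
		OpcRetryToken:     common.String("unique-retry-token"), // optional idempotency token
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created DHCP options %s", *resp.DhcpOptions.Id)
}
```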
+// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDrgAttachmentDetails The representation of CreateDrgAttachmentDetails +type CreateDrgAttachmentDetails struct { + + // The OCID of the DRG. + DrgId *string `mandatory:"true" json:"drgId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateDrgAttachmentDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_drg_attachment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_attachment_request_response.go new file mode 100644 index 000000000..4e50c9bd6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_attachment_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateDrgAttachmentRequest wrapper for the CreateDrgAttachment operation +type CreateDrgAttachmentRequest struct { + + // Details for creating a `DrgAttachment`. + CreateDrgAttachmentDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateDrgAttachmentRequest) String() string { + return common.PointerString(request) +} + +// CreateDrgAttachmentResponse wrapper for the CreateDrgAttachment operation +type CreateDrgAttachmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DrgAttachment instance + DrgAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateDrgAttachmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_drg_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_details.go new file mode 100644 index 000000000..a4a7d6f65 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_details.go @@ -0,0 +1,27 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDrgDetails The representation of CreateDrgDetails +type CreateDrgDetails struct { + + // The OCID of the compartment to contain the DRG. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. 
Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateDrgDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_drg_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_request_response.go new file mode 100644 index 000000000..a1fa6bae0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_drg_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateDrgRequest wrapper for the CreateDrg operation +type CreateDrgRequest struct { + + // Details for creating a DRG. + CreateDrgDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateDrgRequest) String() string { + return common.PointerString(request) +} + +// CreateDrgResponse wrapper for the CreateDrg operation +type CreateDrgResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Drg instance + Drg `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateDrgResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_i_p_sec_connection_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_i_p_sec_connection_request_response.go new file mode 100644 index 000000000..877bcc9c8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_i_p_sec_connection_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateIPSecConnectionRequest wrapper for the CreateIPSecConnection operation +type CreateIPSecConnectionRequest struct { + + // Details for creating an `IPSecConnection`. + CreateIpSecConnectionDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). 
+ OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateIPSecConnectionRequest) String() string { + return common.PointerString(request) +} + +// CreateIPSecConnectionResponse wrapper for the CreateIPSecConnection operation +type CreateIPSecConnectionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IpSecConnection instance + IpSecConnection `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateIPSecConnectionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_image_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_image_details.go new file mode 100644 index 000000000..6a482ea01 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_image_details.go @@ -0,0 +1,61 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateImageDetails Either instanceId or imageSourceDetails must be provided in addition to other required parameters. +type CreateImageDetails struct { + + // The OCID of the compartment containing the instance you want to use as the basis for the image. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name for the image. It does not have to be unique, and it's changeable. + // Avoid entering confidential information. + // You cannot use an Oracle-provided image name as a custom image name. + // Example: `My Oracle Linux image` + DisplayName *string `mandatory:"false" json:"displayName"` + + // Details for creating an image through import + ImageSourceDetails ImageSourceDetails `mandatory:"false" json:"imageSourceDetails"` + + // The OCID of the instance you want to use as the basis for the image. 
+ InstanceId *string `mandatory:"false" json:"instanceId"` +} + +func (m CreateImageDetails) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *CreateImageDetails) UnmarshalJSON(data []byte) (e error) { + model := struct { + DisplayName *string `json:"displayName"` + ImageSourceDetails imagesourcedetails `json:"imageSourceDetails"` + InstanceId *string `json:"instanceId"` + CompartmentId *string `json:"compartmentId"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.DisplayName = model.DisplayName + nn, e := model.ImageSourceDetails.UnmarshalPolymorphicJSON(model.ImageSourceDetails.JsonData) + if e != nil { + return + } + m.ImageSourceDetails = nn + m.InstanceId = model.InstanceId + m.CompartmentId = model.CompartmentId + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_image_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_image_request_response.go new file mode 100644 index 000000000..0481d6924 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_image_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateImageRequest wrapper for the CreateImage operation +type CreateImageRequest struct { + + // Image creation details + CreateImageDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateImageRequest) String() string { + return common.PointerString(request) +} + +// CreateImageResponse wrapper for the CreateImage operation +type CreateImageResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Image instance + Image `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateImageResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_instance_console_connection_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_instance_console_connection_details.go new file mode 100644 index 000000000..3d3225bf7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_instance_console_connection_details.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateInstanceConsoleConnectionDetails The details for creating a instance console connection. 
+// The instance console connection is created in the same compartment as the instance. +type CreateInstanceConsoleConnectionDetails struct { + + // The OCID of the instance to create the console connection to. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // The SSH public key used to authenticate the console connection. + PublicKey *string `mandatory:"true" json:"publicKey"` +} + +func (m CreateInstanceConsoleConnectionDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_instance_console_connection_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_instance_console_connection_request_response.go new file mode 100644 index 000000000..456bd9b78 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_instance_console_connection_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateInstanceConsoleConnectionRequest wrapper for the CreateInstanceConsoleConnection operation +type CreateInstanceConsoleConnectionRequest struct { + + // Request object for creating an InstanceConsoleConnection + CreateInstanceConsoleConnectionDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateInstanceConsoleConnectionRequest) String() string { + return common.PointerString(request) +} + +// CreateInstanceConsoleConnectionResponse wrapper for the CreateInstanceConsoleConnection operation +type CreateInstanceConsoleConnectionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The InstanceConsoleConnection instance + InstanceConsoleConnection `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateInstanceConsoleConnectionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_internet_gateway_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_internet_gateway_details.go new file mode 100644 index 000000000..df49670c6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_internet_gateway_details.go @@ -0,0 +1,33 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateInternetGatewayDetails The representation of CreateInternetGatewayDetails +type CreateInternetGatewayDetails struct { + + // The OCID of the compartment to contain the Internet Gateway. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // Whether the gateway is enabled upon creation. + IsEnabled *bool `mandatory:"true" json:"isEnabled"` + + // The OCID of the VCN the Internet Gateway is attached to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateInternetGatewayDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_internet_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_internet_gateway_request_response.go new file mode 100644 index 000000000..767747486 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_internet_gateway_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateInternetGatewayRequest wrapper for the CreateInternetGateway operation +type CreateInternetGatewayRequest struct { + + // Details for creating a new Internet Gateway. + CreateInternetGatewayDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateInternetGatewayRequest) String() string { + return common.PointerString(request) +} + +// CreateInternetGatewayResponse wrapper for the CreateInternetGateway operation +type CreateInternetGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The InternetGateway instance + InternetGateway `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateInternetGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_ip_sec_connection_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_ip_sec_connection_details.go new file mode 100644 index 000000000..31cf23fa8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_ip_sec_connection_details.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateIpSecConnectionDetails The representation of CreateIpSecConnectionDetails +type CreateIpSecConnectionDetails struct { + + // The OCID of the compartment to contain the IPSec connection. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the CPE. + CpeId *string `mandatory:"true" json:"cpeId"` + + // The OCID of the DRG. + DrgId *string `mandatory:"true" json:"drgId"` + + // Static routes to the CPE. At least one route must be included. The CIDR must not be a + // multicast address or class E address. + // Example: `10.0.1.0/24` + StaticRoutes []string `mandatory:"true" json:"staticRoutes"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateIpSecConnectionDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_local_peering_gateway_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_local_peering_gateway_details.go new file mode 100644 index 000000000..6e0b45439 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_local_peering_gateway_details.go @@ -0,0 +1,31 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateLocalPeeringGatewayDetails The representation of CreateLocalPeeringGatewayDetails +type CreateLocalPeeringGatewayDetails struct { + + // The OCID of the compartment containing the local peering gateway (LPG). + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the VCN the LPG belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid + // entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateLocalPeeringGatewayDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_local_peering_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_local_peering_gateway_request_response.go new file mode 100644 index 000000000..4f72bdb5c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_local_peering_gateway_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateLocalPeeringGatewayRequest wrapper for the CreateLocalPeeringGateway operation +type CreateLocalPeeringGatewayRequest struct { + + // Details for creating a new local peering gateway. + CreateLocalPeeringGatewayDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). 
+ OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateLocalPeeringGatewayRequest) String() string { + return common.PointerString(request) +} + +// CreateLocalPeeringGatewayResponse wrapper for the CreateLocalPeeringGateway operation +type CreateLocalPeeringGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The LocalPeeringGateway instance + LocalPeeringGateway `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateLocalPeeringGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_private_ip_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_private_ip_details.go new file mode 100644 index 000000000..b9da8cc51 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_private_ip_details.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreatePrivateIpDetails The representation of CreatePrivateIpDetails +type CreatePrivateIpDetails struct { + + // The OCID of the VNIC to assign the private IP to. The VNIC and private IP + // must be in the same subnet. + VnicId *string `mandatory:"true" json:"vnicId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid + // entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The hostname for the private IP. Used for DNS. The value + // is the hostname portion of the private IP's fully qualified domain name (FQDN) + // (for example, `bminstance-1` in FQDN `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be unique across all VNICs in the subnet and comply with + // RFC 952 (https://tools.ietf.org/html/rfc952) and + // RFC 1123 (https://tools.ietf.org/html/rfc1123). + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `bminstance-1` + HostnameLabel *string `mandatory:"false" json:"hostnameLabel"` + + // A private IP address of your choice. Must be an available IP address within + // the subnet's CIDR. If you don't specify a value, Oracle automatically + // assigns a private IP address from the subnet. + // Example: `10.0.3.3` + IpAddress *string `mandatory:"false" json:"ipAddress"` +} + +func (m CreatePrivateIpDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_private_ip_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_private_ip_request_response.go new file mode 100644 index 000000000..941d3a3ed --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_private_ip_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreatePrivateIpRequest wrapper for the CreatePrivateIp operation +type CreatePrivateIpRequest struct { + + // Create private IP details. + CreatePrivateIpDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreatePrivateIpRequest) String() string { + return common.PointerString(request) +} + +// CreatePrivateIpResponse wrapper for the CreatePrivateIp operation +type CreatePrivateIpResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The PrivateIp instance + PrivateIp `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreatePrivateIpResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_route_table_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_route_table_details.go new file mode 100644 index 000000000..8f3005acd --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_route_table_details.go @@ -0,0 +1,33 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateRouteTableDetails The representation of CreateRouteTableDetails +type CreateRouteTableDetails struct { + + // The OCID of the compartment to contain the route table. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The collection of rules used for routing destination IPs to network devices. + RouteRules []RouteRule `mandatory:"true" json:"routeRules"` + + // The OCID of the VCN the route table belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateRouteTableDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_route_table_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_route_table_request_response.go new file mode 100644 index 000000000..7e70b4de8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_route_table_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateRouteTableRequest wrapper for the CreateRouteTable operation +type CreateRouteTableRequest struct { + + // Details for creating a new route table. + CreateRouteTableDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateRouteTableRequest) String() string { + return common.PointerString(request) +} + +// CreateRouteTableResponse wrapper for the CreateRouteTable operation +type CreateRouteTableResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The RouteTable instance + RouteTable `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateRouteTableResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_security_list_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_security_list_details.go new file mode 100644 index 000000000..06d4c4ec9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_security_list_details.go @@ -0,0 +1,36 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateSecurityListDetails The representation of CreateSecurityListDetails +type CreateSecurityListDetails struct { + + // The OCID of the compartment to contain the security list. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // Rules for allowing egress IP packets. + EgressSecurityRules []EgressSecurityRule `mandatory:"true" json:"egressSecurityRules"` + + // Rules for allowing ingress IP packets. + IngressSecurityRules []IngressSecurityRule `mandatory:"true" json:"ingressSecurityRules"` + + // The OCID of the VCN the security list belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateSecurityListDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_security_list_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_security_list_request_response.go new file mode 100644 index 000000000..e652c644b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_security_list_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateSecurityListRequest wrapper for the CreateSecurityList operation +type CreateSecurityListRequest struct { + + // Details regarding the security list to create. + CreateSecurityListDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateSecurityListRequest) String() string { + return common.PointerString(request) +} + +// CreateSecurityListResponse wrapper for the CreateSecurityList operation +type CreateSecurityListResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The SecurityList instance + SecurityList `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateSecurityListResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_subnet_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_subnet_details.go new file mode 100644 index 000000000..bc50b2add --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_subnet_details.go @@ -0,0 +1,76 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateSubnetDetails The representation of CreateSubnetDetails +type CreateSubnetDetails struct { + + // The Availability Domain to contain the subnet. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The CIDR IP address range of the subnet. + // Example: `172.16.1.0/24` + CidrBlock *string `mandatory:"true" json:"cidrBlock"` + + // The OCID of the compartment to contain the subnet. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the VCN to contain the subnet. + VcnId *string `mandatory:"true" json:"vcnId"` + + // The OCID of the set of DHCP options the subnet will use. If you don't + // provide a value, the subnet will use the VCN's default set of DHCP options. + DhcpOptionsId *string `mandatory:"false" json:"dhcpOptionsId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // A DNS label for the subnet, used in conjunction with the VNIC's hostname and + // VCN's DNS label to form a fully qualified domain name (FQDN) for each VNIC + // within this subnet (for example, `bminstance-1.subnet123.vcn1.oraclevcn.com`). 
+ // Must be an alphanumeric string that begins with a letter and is unique within the VCN. + // The value cannot be changed. + // This value must be set if you want to use the Internet and VCN Resolver to resolve the + // hostnames of instances in the subnet. It can only be set if the VCN itself + // was created with a DNS label. + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `subnet123` + DnsLabel *string `mandatory:"false" json:"dnsLabel"` + + // Whether VNICs within this subnet can have public IP addresses. + // Defaults to false, which means VNICs created in this subnet will + // automatically be assigned public IP addresses unless specified + // otherwise during instance launch or VNIC creation (with the + // `assignPublicIp` flag in CreateVnicDetails). + // If `prohibitPublicIpOnVnic` is set to true, VNICs created in this + // subnet cannot have public IP addresses (that is, it's a private + // subnet). + // Example: `true` + ProhibitPublicIpOnVnic *bool `mandatory:"false" json:"prohibitPublicIpOnVnic"` + + // The OCID of the route table the subnet will use. If you don't provide a value, + // the subnet will use the VCN's default route table. + RouteTableId *string `mandatory:"false" json:"routeTableId"` + + // OCIDs for the security lists to associate with the subnet. If you don't + // provide a value, the VCN's default security list will be associated with + // the subnet. Remember that security lists are associated at the subnet + // level, but the rules are applied to the individual VNICs in the subnet. + SecurityListIds []string `mandatory:"false" json:"securityListIds"` +} + +func (m CreateSubnetDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_subnet_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_subnet_request_response.go new file mode 100644 index 000000000..8d5818ec7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_subnet_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateSubnetRequest wrapper for the CreateSubnet operation +type CreateSubnetRequest struct { + + // Details for creating a subnet. + CreateSubnetDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateSubnetRequest) String() string { + return common.PointerString(request) +} + +// CreateSubnetResponse wrapper for the CreateSubnet operation +type CreateSubnetResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Subnet instance + Subnet `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateSubnetResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_vcn_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_vcn_details.go new file mode 100644 index 000000000..70eb2c240 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_vcn_details.go @@ -0,0 +1,45 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateVcnDetails The representation of CreateVcnDetails +type CreateVcnDetails struct { + + // The CIDR IP address block of the VCN. + // Example: `172.16.0.0/16` + CidrBlock *string `mandatory:"true" json:"cidrBlock"` + + // The OCID of the compartment to contain the VCN. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // A DNS label for the VCN, used in conjunction with the VNIC's hostname and + // subnet's DNS label to form a fully qualified domain name (FQDN) for each VNIC + // within this subnet (for example, `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Not required to be unique, but it's a best practice to set unique DNS labels + // for VCNs in your tenancy. Must be an alphanumeric string that begins with a letter. + // The value cannot be changed. + // You must set this value if you want instances to be able to use hostnames to + // resolve other instances in the VCN. Otherwise the Internet and VCN Resolver + // will not work. + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `vcn1` + DnsLabel *string `mandatory:"false" json:"dnsLabel"` +} + +func (m CreateVcnDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_vcn_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_vcn_request_response.go new file mode 100644 index 000000000..99cd907af --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_vcn_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateVcnRequest wrapper for the CreateVcn operation +type CreateVcnRequest struct { + + // Details for creating a new VCN. + CreateVcnDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). 
+ OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateVcnRequest) String() string { + return common.PointerString(request) +} + +// CreateVcnResponse wrapper for the CreateVcn operation +type CreateVcnResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Vcn instance + Vcn `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateVcnResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_details.go new file mode 100644 index 000000000..f663f9854 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_details.go @@ -0,0 +1,98 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateVirtualCircuitDetails The representation of CreateVirtualCircuitDetails +type CreateVirtualCircuitDetails struct { + + // The OCID of the compartment to contain the virtual circuit. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The type of IP addresses used in this virtual circuit. PRIVATE + // means RFC 1918 (https://tools.ietf.org/html/rfc1918) addresses + // (10.0.0.0/8, 172.16/12, and 192.168/16). Only PRIVATE is supported. + Type CreateVirtualCircuitDetailsTypeEnum `mandatory:"true" json:"type"` + + // The provisioned data rate of the connection. To get a list of the + // available bandwidth levels (that is, shapes), see + // ListFastConnectProviderVirtualCircuitBandwidthShapes. + // Example: `10 Gbps` + BandwidthShapeName *string `mandatory:"false" json:"bandwidthShapeName"` + + // Create a `CrossConnectMapping` for each cross-connect or cross-connect + // group this virtual circuit will run on. + CrossConnectMappings []CrossConnectMapping `mandatory:"false" json:"crossConnectMappings"` + + // Your BGP ASN (either public or private). Provide this value only if + // there's a BGP session that goes from your edge router to Oracle. + // Otherwise, leave this empty or null. + CustomerBgpAsn *int `mandatory:"false" json:"customerBgpAsn"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // For private virtual circuits only. The OCID of the Drg + // that this virtual circuit uses. + GatewayId *string `mandatory:"false" json:"gatewayId"` + + // Deprecated. Instead use `providerServiceId`. + // To get a list of the provider names, see + // ListFastConnectProviderServices. + ProviderName *string `mandatory:"false" json:"providerName"` + + // The OCID of the service offered by the provider (if you're connecting + // via a provider). To get a list of the available service offerings, see + // ListFastConnectProviderServices. + ProviderServiceId *string `mandatory:"false" json:"providerServiceId"` + + // Deprecated. 
Instead use `providerServiceId`. + // To get a list of the provider names, see + // ListFastConnectProviderServices. + ProviderServiceName *string `mandatory:"false" json:"providerServiceName"` + + // For a public virtual circuit. The public IP prefixes (CIDRs) the customer wants to + // advertise across the connection. + PublicPrefixes []CreateVirtualCircuitPublicPrefixDetails `mandatory:"false" json:"publicPrefixes"` + + // The Oracle Cloud Infrastructure region where this virtual + // circuit is located. + // Example: `phx` + Region *string `mandatory:"false" json:"region"` +} + +func (m CreateVirtualCircuitDetails) String() string { + return common.PointerString(m) +} + +// CreateVirtualCircuitDetailsTypeEnum Enum with underlying type: string +type CreateVirtualCircuitDetailsTypeEnum string + +// Set of constants representing the allowable values for CreateVirtualCircuitDetailsType +const ( + CreateVirtualCircuitDetailsTypePublic CreateVirtualCircuitDetailsTypeEnum = "PUBLIC" + CreateVirtualCircuitDetailsTypePrivate CreateVirtualCircuitDetailsTypeEnum = "PRIVATE" +) + +var mappingCreateVirtualCircuitDetailsType = map[string]CreateVirtualCircuitDetailsTypeEnum{ + "PUBLIC": CreateVirtualCircuitDetailsTypePublic, + "PRIVATE": CreateVirtualCircuitDetailsTypePrivate, +} + +// GetCreateVirtualCircuitDetailsTypeEnumValues Enumerates the set of values for CreateVirtualCircuitDetailsType +func GetCreateVirtualCircuitDetailsTypeEnumValues() []CreateVirtualCircuitDetailsTypeEnum { + values := make([]CreateVirtualCircuitDetailsTypeEnum, 0) + for _, v := range mappingCreateVirtualCircuitDetailsType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_public_prefix_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_public_prefix_details.go new file mode 100644 index 000000000..10e13c8f5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_public_prefix_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateVirtualCircuitPublicPrefixDetails The representation of CreateVirtualCircuitPublicPrefixDetails +type CreateVirtualCircuitPublicPrefixDetails struct { + + // An individual public IP prefix (CIDR) to add to the public virtual circuit. + // Must be /24 or less specific. + CidrBlock *string `mandatory:"true" json:"cidrBlock"` +} + +func (m CreateVirtualCircuitPublicPrefixDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_request_response.go new file mode 100644 index 000000000..629fc3e8a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_virtual_circuit_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateVirtualCircuitRequest wrapper for the CreateVirtualCircuit operation +type CreateVirtualCircuitRequest struct { + + // Details to create a VirtualCircuit. 
+ CreateVirtualCircuitDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateVirtualCircuitRequest) String() string { + return common.PointerString(request) +} + +// CreateVirtualCircuitResponse wrapper for the CreateVirtualCircuit operation +type CreateVirtualCircuitResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VirtualCircuit instance + VirtualCircuit `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateVirtualCircuitResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_vnic_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_vnic_details.go new file mode 100644 index 000000000..c92cd1a5c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_vnic_details.go @@ -0,0 +1,85 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateVnicDetails Contains properties for a VNIC. You use this object when creating the +// primary VNIC during instance launch or when creating a secondary VNIC. +// For more information about VNICs, see +// Virtual Network Interface Cards (VNICs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVNICs.htm). +type CreateVnicDetails struct { + + // The OCID of the subnet to create the VNIC in. When launching an instance, + // use this `subnetId` instead of the deprecated `subnetId` in + // LaunchInstanceDetails. + // At least one of them is required; if you provide both, the values must match. + SubnetId *string `mandatory:"true" json:"subnetId"` + + // Whether the VNIC should be assigned a public IP address. Defaults to whether + // the subnet is public or private. If not set and the VNIC is being created + // in a private subnet (that is, where `prohibitPublicIpOnVnic` = true in the + // Subnet), then no public IP address is assigned. + // If not set and the subnet is public (`prohibitPublicIpOnVnic` = false), then + // a public IP address is assigned. If set to true and + // `prohibitPublicIpOnVnic` = true, an error is returned. + // **Note:** This public IP address is associated with the primary private IP + // on the VNIC. Secondary private IPs cannot have public IP + // addresses associated with them. For more information, see + // IP Addresses (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingIPaddresses.htm). 
+ // Example: `false` + AssignPublicIp *bool `mandatory:"false" json:"assignPublicIp"` + + // A user-friendly name for the VNIC. Does not have to be unique. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The hostname for the VNIC's primary private IP. Used for DNS. The value is the hostname + // portion of the primary private IP's fully qualified domain name (FQDN) + // (for example, `bminstance-1` in FQDN `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be unique across all VNICs in the subnet and comply with + // RFC 952 (https://tools.ietf.org/html/rfc952) and + // RFC 1123 (https://tools.ietf.org/html/rfc1123). + // The value appears in the Vnic object and also the + // PrivateIp object returned by + // ListPrivateIps and + // GetPrivateIp. + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // When launching an instance, use this `hostnameLabel` instead + // of the deprecated `hostnameLabel` in + // LaunchInstanceDetails. + // If you provide both, the values must match. + // Example: `bminstance-1` + HostnameLabel *string `mandatory:"false" json:"hostnameLabel"` + + // A private IP address of your choice to assign to the VNIC. Must be an + // available IP address within the subnet's CIDR. If you don't specify a + // value, Oracle automatically assigns a private IP address from the subnet. + // This is the VNIC's *primary* private IP address. The value appears in + // the Vnic object and also the + // PrivateIp object returned by + // ListPrivateIps and + // GetPrivateIp. + // Example: `10.0.3.3` + PrivateIp *string `mandatory:"false" json:"privateIp"` + + // Whether the source/destination check is disabled on the VNIC. + // Defaults to `false`, which means the check is performed. For information + // about why you would skip the source/destination check, see + // Using a Private IP as a Route Target (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm#privateip). + // Example: `true` + SkipSourceDestCheck *bool `mandatory:"false" json:"skipSourceDestCheck"` +} + +func (m CreateVnicDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_volume_backup_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_backup_details.go new file mode 100644 index 000000000..8a1907cb0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_backup_details.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateVolumeBackupDetails The representation of CreateVolumeBackupDetails +type CreateVolumeBackupDetails struct { + + // The OCID of the volume that needs to be backed up. + VolumeId *string `mandatory:"true" json:"volumeId"` + + // A user-friendly name for the volume backup. Does not have to be unique and it's changeable. + // Avoid entering confidential information. 
+ DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateVolumeBackupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_volume_backup_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_backup_request_response.go new file mode 100644 index 000000000..44e47b735 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_backup_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateVolumeBackupRequest wrapper for the CreateVolumeBackup operation +type CreateVolumeBackupRequest struct { + + // Request to create a new backup of given volume. + CreateVolumeBackupDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateVolumeBackupRequest) String() string { + return common.PointerString(request) +} + +// CreateVolumeBackupResponse wrapper for the CreateVolumeBackup operation +type CreateVolumeBackupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VolumeBackup instance + VolumeBackup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateVolumeBackupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_details.go new file mode 100644 index 000000000..492c815fb --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_details.go @@ -0,0 +1,80 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateVolumeDetails The representation of CreateVolumeDetails +type CreateVolumeDetails struct { + + // The Availability Domain of the volume. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment that contains the volume. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The size of the volume in GBs. + SizeInGBs *int `mandatory:"false" json:"sizeInGBs"` + + // The size of the volume in MBs. 
The value must be a multiple of 1024. + // This field is deprecated. Use sizeInGBs instead. + SizeInMBs *int `mandatory:"false" json:"sizeInMBs"` + + // Specifies the volume source details for a new Block volume. The volume source is either another Block volume in the same Availability Domain or a Block volume backup. + // This is an optional field. If not specified or set to null, the new Block volume will be empty. + // When specified, the new Block volume will contain data from the source volume or backup. + SourceDetails VolumeSourceDetails `mandatory:"false" json:"sourceDetails"` + + // The OCID of the volume backup from which the data should be restored on the newly created volume. + // This field is deprecated. Use the sourceDetails field instead to specify the + // backup for the volume. + VolumeBackupId *string `mandatory:"false" json:"volumeBackupId"` +} + +func (m CreateVolumeDetails) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *CreateVolumeDetails) UnmarshalJSON(data []byte) (e error) { + model := struct { + DisplayName *string `json:"displayName"` + SizeInGBs *int `json:"sizeInGBs"` + SizeInMBs *int `json:"sizeInMBs"` + SourceDetails volumesourcedetails `json:"sourceDetails"` + VolumeBackupId *string `json:"volumeBackupId"` + AvailabilityDomain *string `json:"availabilityDomain"` + CompartmentId *string `json:"compartmentId"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.DisplayName = model.DisplayName + m.SizeInGBs = model.SizeInGBs + m.SizeInMBs = model.SizeInMBs + nn, e := model.SourceDetails.UnmarshalPolymorphicJSON(model.SourceDetails.JsonData) + if e != nil { + return + } + m.SourceDetails = nn + m.VolumeBackupId = model.VolumeBackupId + m.AvailabilityDomain = model.AvailabilityDomain + m.CompartmentId = model.CompartmentId + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/create_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_request_response.go new file mode 100644 index 000000000..d505805f4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/create_volume_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateVolumeRequest wrapper for the CreateVolume operation +type CreateVolumeRequest struct { + + // Request to create a new volume. + CreateVolumeDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateVolumeRequest) String() string { + return common.PointerString(request) +} + +// CreateVolumeResponse wrapper for the CreateVolume operation +type CreateVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Volume instance + Volume `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. 
+ Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/cross_connect.go b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect.go new file mode 100644 index 000000000..67f43bf74 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect.go @@ -0,0 +1,94 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CrossConnect For use with Oracle Cloud Infrastructure FastConnect. A cross-connect represents a +// physical connection between an existing network and Oracle. Customers who are colocated +// with Oracle in a FastConnect location create and use cross-connects. For more +// information, see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +// Oracle recommends you create each cross-connect in a +// CrossConnectGroup so you can use link aggregation +// with the connection. +// **Note:** If you're a provider who is setting up a physical connection to Oracle so customers +// can use FastConnect over the connection, be aware that your connection is modeled the +// same way as a colocated customer's (with `CrossConnect` and `CrossConnectGroup` objects, and so on). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type CrossConnect struct { + + // The OCID of the compartment containing the cross-connect group. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // The OCID of the cross-connect group this cross-connect belongs to (if any). + CrossConnectGroupId *string `mandatory:"false" json:"crossConnectGroupId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The cross-connect's Oracle ID (OCID). + Id *string `mandatory:"false" json:"id"` + + // The cross-connect's current state. + LifecycleState CrossConnectLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The name of the FastConnect location where this cross-connect is installed. + LocationName *string `mandatory:"false" json:"locationName"` + + // A string identifying the meet-me room port for this cross-connect. + PortName *string `mandatory:"false" json:"portName"` + + // The port speed for this cross-connect. + // Example: `10 Gbps` + PortSpeedShapeName *string `mandatory:"false" json:"portSpeedShapeName"` + + // The date and time the cross-connect was created, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m CrossConnect) String() string { + return common.PointerString(m) +} + +// CrossConnectLifecycleStateEnum Enum with underlying type: string +type CrossConnectLifecycleStateEnum string + +// Set of constants representing the allowable values for CrossConnectLifecycleState +const ( + CrossConnectLifecycleStatePendingCustomer CrossConnectLifecycleStateEnum = "PENDING_CUSTOMER" + CrossConnectLifecycleStateProvisioning CrossConnectLifecycleStateEnum = "PROVISIONING" + CrossConnectLifecycleStateProvisioned CrossConnectLifecycleStateEnum = "PROVISIONED" + CrossConnectLifecycleStateInactive CrossConnectLifecycleStateEnum = "INACTIVE" + CrossConnectLifecycleStateTerminating CrossConnectLifecycleStateEnum = "TERMINATING" + CrossConnectLifecycleStateTerminated CrossConnectLifecycleStateEnum = "TERMINATED" +) + +var mappingCrossConnectLifecycleState = map[string]CrossConnectLifecycleStateEnum{ + "PENDING_CUSTOMER": CrossConnectLifecycleStatePendingCustomer, + "PROVISIONING": CrossConnectLifecycleStateProvisioning, + "PROVISIONED": CrossConnectLifecycleStateProvisioned, + "INACTIVE": CrossConnectLifecycleStateInactive, + "TERMINATING": CrossConnectLifecycleStateTerminating, + "TERMINATED": CrossConnectLifecycleStateTerminated, +} + +// GetCrossConnectLifecycleStateEnumValues Enumerates the set of values for CrossConnectLifecycleState +func GetCrossConnectLifecycleStateEnumValues() []CrossConnectLifecycleStateEnum { + values := make([]CrossConnectLifecycleStateEnum, 0) + for _, v := range mappingCrossConnectLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_group.go b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_group.go new file mode 100644 index 000000000..2193bf7e7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_group.go @@ -0,0 +1,77 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CrossConnectGroup For use with Oracle Cloud Infrastructure FastConnect. A cross-connect group +// is a link aggregation group (LAG), which can contain one or more +// CrossConnect. Customers who are colocated with +// Oracle in a FastConnect location create and use cross-connect groups. For more +// information, see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +// **Note:** If you're a provider who is setting up a physical connection to Oracle so customers +// can use FastConnect over the connection, be aware that your connection is modeled the +// same way as a colocated customer's (with `CrossConnect` and `CrossConnectGroup` objects, and so on). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type CrossConnectGroup struct { + + // The OCID of the compartment containing the cross-connect group. 
+ CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // The display name of A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The cross-connect group's Oracle ID (OCID). + Id *string `mandatory:"false" json:"id"` + + // The cross-connect group's current state. + LifecycleState CrossConnectGroupLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The date and time the cross-connect group was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m CrossConnectGroup) String() string { + return common.PointerString(m) +} + +// CrossConnectGroupLifecycleStateEnum Enum with underlying type: string +type CrossConnectGroupLifecycleStateEnum string + +// Set of constants representing the allowable values for CrossConnectGroupLifecycleState +const ( + CrossConnectGroupLifecycleStateProvisioning CrossConnectGroupLifecycleStateEnum = "PROVISIONING" + CrossConnectGroupLifecycleStateProvisioned CrossConnectGroupLifecycleStateEnum = "PROVISIONED" + CrossConnectGroupLifecycleStateInactive CrossConnectGroupLifecycleStateEnum = "INACTIVE" + CrossConnectGroupLifecycleStateTerminating CrossConnectGroupLifecycleStateEnum = "TERMINATING" + CrossConnectGroupLifecycleStateTerminated CrossConnectGroupLifecycleStateEnum = "TERMINATED" +) + +var mappingCrossConnectGroupLifecycleState = map[string]CrossConnectGroupLifecycleStateEnum{ + "PROVISIONING": CrossConnectGroupLifecycleStateProvisioning, + "PROVISIONED": CrossConnectGroupLifecycleStateProvisioned, + "INACTIVE": CrossConnectGroupLifecycleStateInactive, + "TERMINATING": CrossConnectGroupLifecycleStateTerminating, + "TERMINATED": CrossConnectGroupLifecycleStateTerminated, +} + +// GetCrossConnectGroupLifecycleStateEnumValues Enumerates the set of values for CrossConnectGroupLifecycleState +func GetCrossConnectGroupLifecycleStateEnumValues() []CrossConnectGroupLifecycleStateEnum { + values := make([]CrossConnectGroupLifecycleStateEnum, 0) + for _, v := range mappingCrossConnectGroupLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_location.go b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_location.go new file mode 100644 index 000000000..3a291556b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_location.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CrossConnectLocation An individual FastConnect location. +type CrossConnectLocation struct { + + // A description of the location. + Description *string `mandatory:"true" json:"description"` + + // The name of the location. 
+ // Example: `CyrusOne, Chandler, AZ` + Name *string `mandatory:"true" json:"name"` +} + +func (m CrossConnectLocation) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_mapping.go b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_mapping.go new file mode 100644 index 000000000..43ec7d68e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_mapping.go @@ -0,0 +1,76 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CrossConnectMapping For use with Oracle Cloud Infrastructure FastConnect. Each +// VirtualCircuit runs on one or +// more cross-connects or cross-connect groups. A `CrossConnectMapping` +// contains the properties for an individual cross-connect or cross-connect group +// associated with a given virtual circuit. +// The mapping includes information about the cross-connect or +// cross-connect group, the VLAN, and the BGP peering session. +// If you're a customer who is colocated with Oracle, that means you own both +// the virtual circuit and the physical connection it runs on (cross-connect or +// cross-connect group), so you specify all the information in the mapping. There's +// one exception: for a public virtual circuit, Oracle specifies the BGP IP +// addresses. +// If you're a provider, then you own the physical connection that the customer's +// virtual circuit runs on, so you contribute information about the cross-connect +// or cross-connect group and VLAN. +// Who specifies the BGP peering information in the case of customer connection via +// provider? If the BGP session goes from Oracle to the provider's edge router, then +// the provider also specifies the BGP peering information. If the BGP session instead +// goes from Oracle to the customer's edge router, then the customer specifies the BGP +// peering information. There's one exception: for a public virtual circuit, Oracle +// specifies the BGP IP addresses. +type CrossConnectMapping struct { + + // The key for BGP MD5 authentication. Only applicable if your system + // requires MD5 authentication. If empty or not set (null), that + // means you don't use BGP MD5 authentication. + BgpMd5AuthKey *string `mandatory:"false" json:"bgpMd5AuthKey"` + + // The OCID of the cross-connect or cross-connect group for this mapping. + // Specified by the owner of the cross-connect or cross-connect group (the + // customer if the customer is colocated with Oracle, or the provider if the + // customer is connecting via provider). + CrossConnectOrCrossConnectGroupId *string `mandatory:"false" json:"crossConnectOrCrossConnectGroupId"` + + // The BGP IP address for the router on the other end of the BGP session from + // Oracle. Specified by the owner of that router. If the session goes from Oracle + // to a customer, this is the BGP IP address of the customer's edge router. If the + // session goes from Oracle to a provider, this is the BGP IP address of the + // provider's edge router. Must use a /30 or /31 subnet mask. + // There's one exception: for a public virtual circuit, Oracle specifies the BGP IP addresses. + // Example: `10.0.0.18/31` + CustomerBgpPeeringIp *string `mandatory:"false" json:"customerBgpPeeringIp"` + + // The IP address for Oracle's end of the BGP session. 
Must use a /30 or /31 + // subnet mask. If the session goes from Oracle to a customer's edge router, + // the customer specifies this information. If the session goes from Oracle to + // a provider's edge router, the provider specifies this. + // There's one exception: for a public virtual circuit, Oracle specifies the BGP IP addresses. + // Example: `10.0.0.19/31` + OracleBgpPeeringIp *string `mandatory:"false" json:"oracleBgpPeeringIp"` + + // The number of the specific VLAN (on the cross-connect or cross-connect group) + // that is assigned to this virtual circuit. Specified by the owner of the cross-connect + // or cross-connect group (the customer if the customer is colocated with Oracle, or + // the provider if the customer is connecting via provider). + // Example: `200` + Vlan *int `mandatory:"false" json:"vlan"` +} + +func (m CrossConnectMapping) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_port_speed_shape.go b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_port_speed_shape.go new file mode 100644 index 000000000..3e854fa23 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_port_speed_shape.go @@ -0,0 +1,29 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CrossConnectPortSpeedShape An individual port speed level for cross-connects. +type CrossConnectPortSpeedShape struct { + + // The name of the port speed shape. + // Example: `10 Gbps` + Name *string `mandatory:"true" json:"name"` + + // The port speed in Gbps. + // Example: `10` + PortSpeedInGbps *int `mandatory:"true" json:"portSpeedInGbps"` +} + +func (m CrossConnectPortSpeedShape) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_status.go b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_status.go new file mode 100644 index 000000000..608ab9c80 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/cross_connect_status.go @@ -0,0 +1,91 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CrossConnectStatus The status of the cross-connect. +type CrossConnectStatus struct { + + // The OCID of the cross-connect. + CrossConnectId *string `mandatory:"true" json:"crossConnectId"` + + // Whether Oracle's side of the interface is up or down. + InterfaceState CrossConnectStatusInterfaceStateEnum `mandatory:"false" json:"interfaceState,omitempty"` + + // The light level of the cross-connect (in dBm). + // Example: `14.0` + LightLevelIndBm *float32 `mandatory:"false" json:"lightLevelIndBm"` + + // Status indicator corresponding to the light level. 
+ // * **NO_LIGHT:** No measurable light + // * **LOW_WARN:** There's measurable light but it's too low + // * **HIGH_WARN:** Light level is too high + // * **BAD:** There's measurable light but the signal-to-noise ratio is bad + // * **GOOD:** Good light level + LightLevelIndicator CrossConnectStatusLightLevelIndicatorEnum `mandatory:"false" json:"lightLevelIndicator,omitempty"` +} + +func (m CrossConnectStatus) String() string { + return common.PointerString(m) +} + +// CrossConnectStatusInterfaceStateEnum Enum with underlying type: string +type CrossConnectStatusInterfaceStateEnum string + +// Set of constants representing the allowable values for CrossConnectStatusInterfaceState +const ( + CrossConnectStatusInterfaceStateUp CrossConnectStatusInterfaceStateEnum = "UP" + CrossConnectStatusInterfaceStateDown CrossConnectStatusInterfaceStateEnum = "DOWN" +) + +var mappingCrossConnectStatusInterfaceState = map[string]CrossConnectStatusInterfaceStateEnum{ + "UP": CrossConnectStatusInterfaceStateUp, + "DOWN": CrossConnectStatusInterfaceStateDown, +} + +// GetCrossConnectStatusInterfaceStateEnumValues Enumerates the set of values for CrossConnectStatusInterfaceState +func GetCrossConnectStatusInterfaceStateEnumValues() []CrossConnectStatusInterfaceStateEnum { + values := make([]CrossConnectStatusInterfaceStateEnum, 0) + for _, v := range mappingCrossConnectStatusInterfaceState { + values = append(values, v) + } + return values +} + +// CrossConnectStatusLightLevelIndicatorEnum Enum with underlying type: string +type CrossConnectStatusLightLevelIndicatorEnum string + +// Set of constants representing the allowable values for CrossConnectStatusLightLevelIndicator +const ( + CrossConnectStatusLightLevelIndicatorNoLight CrossConnectStatusLightLevelIndicatorEnum = "NO_LIGHT" + CrossConnectStatusLightLevelIndicatorLowWarn CrossConnectStatusLightLevelIndicatorEnum = "LOW_WARN" + CrossConnectStatusLightLevelIndicatorHighWarn CrossConnectStatusLightLevelIndicatorEnum = "HIGH_WARN" + CrossConnectStatusLightLevelIndicatorBad CrossConnectStatusLightLevelIndicatorEnum = "BAD" + CrossConnectStatusLightLevelIndicatorGood CrossConnectStatusLightLevelIndicatorEnum = "GOOD" +) + +var mappingCrossConnectStatusLightLevelIndicator = map[string]CrossConnectStatusLightLevelIndicatorEnum{ + "NO_LIGHT": CrossConnectStatusLightLevelIndicatorNoLight, + "LOW_WARN": CrossConnectStatusLightLevelIndicatorLowWarn, + "HIGH_WARN": CrossConnectStatusLightLevelIndicatorHighWarn, + "BAD": CrossConnectStatusLightLevelIndicatorBad, + "GOOD": CrossConnectStatusLightLevelIndicatorGood, +} + +// GetCrossConnectStatusLightLevelIndicatorEnumValues Enumerates the set of values for CrossConnectStatusLightLevelIndicator +func GetCrossConnectStatusLightLevelIndicatorEnumValues() []CrossConnectStatusLightLevelIndicatorEnum { + values := make([]CrossConnectStatusLightLevelIndicatorEnum, 0) + for _, v := range mappingCrossConnectStatusLightLevelIndicator { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_boot_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_boot_volume_request_response.go new file mode 100644 index 000000000..1f5d59d9a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_boot_volume_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteBootVolumeRequest wrapper for the DeleteBootVolume operation +type DeleteBootVolumeRequest struct { + + // The OCID of the boot volume. + BootVolumeId *string `mandatory:"true" contributesTo:"path" name:"bootVolumeId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteBootVolumeRequest) String() string { + return common.PointerString(request) +} + +// DeleteBootVolumeResponse wrapper for the DeleteBootVolume operation +type DeleteBootVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteBootVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_console_history_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_console_history_request_response.go new file mode 100644 index 000000000..e964e9bf4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_console_history_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteConsoleHistoryRequest wrapper for the DeleteConsoleHistory operation +type DeleteConsoleHistoryRequest struct { + + // The OCID of the console history. + InstanceConsoleHistoryId *string `mandatory:"true" contributesTo:"path" name:"instanceConsoleHistoryId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteConsoleHistoryRequest) String() string { + return common.PointerString(request) +} + +// DeleteConsoleHistoryResponse wrapper for the DeleteConsoleHistory operation +type DeleteConsoleHistoryResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteConsoleHistoryResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_cpe_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_cpe_request_response.go new file mode 100644 index 000000000..6a398d922 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_cpe_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. 
All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteCpeRequest wrapper for the DeleteCpe operation +type DeleteCpeRequest struct { + + // The OCID of the CPE. + CpeId *string `mandatory:"true" contributesTo:"path" name:"cpeId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteCpeRequest) String() string { + return common.PointerString(request) +} + +// DeleteCpeResponse wrapper for the DeleteCpe operation +type DeleteCpeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteCpeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_cross_connect_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_cross_connect_group_request_response.go new file mode 100644 index 000000000..fbfaf81e1 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_cross_connect_group_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteCrossConnectGroupRequest wrapper for the DeleteCrossConnectGroup operation +type DeleteCrossConnectGroupRequest struct { + + // The OCID of the cross-connect group. + CrossConnectGroupId *string `mandatory:"true" contributesTo:"path" name:"crossConnectGroupId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteCrossConnectGroupRequest) String() string { + return common.PointerString(request) +} + +// DeleteCrossConnectGroupResponse wrapper for the DeleteCrossConnectGroup operation +type DeleteCrossConnectGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteCrossConnectGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_cross_connect_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_cross_connect_request_response.go new file mode 100644 index 000000000..f3641dfd2 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_cross_connect_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteCrossConnectRequest wrapper for the DeleteCrossConnect operation +type DeleteCrossConnectRequest struct { + + // The OCID of the cross-connect. + CrossConnectId *string `mandatory:"true" contributesTo:"path" name:"crossConnectId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteCrossConnectRequest) String() string { + return common.PointerString(request) +} + +// DeleteCrossConnectResponse wrapper for the DeleteCrossConnect operation +type DeleteCrossConnectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteCrossConnectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_dhcp_options_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_dhcp_options_request_response.go new file mode 100644 index 000000000..e89249eec --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_dhcp_options_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteDhcpOptionsRequest wrapper for the DeleteDhcpOptions operation +type DeleteDhcpOptionsRequest struct { + + // The OCID for the set of DHCP options. + DhcpId *string `mandatory:"true" contributesTo:"path" name:"dhcpId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteDhcpOptionsRequest) String() string { + return common.PointerString(request) +} + +// DeleteDhcpOptionsResponse wrapper for the DeleteDhcpOptions operation +type DeleteDhcpOptionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteDhcpOptionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_drg_attachment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_drg_attachment_request_response.go new file mode 100644 index 000000000..a921056e5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_drg_attachment_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteDrgAttachmentRequest wrapper for the DeleteDrgAttachment operation +type DeleteDrgAttachmentRequest struct { + + // The OCID of the DRG attachment. + DrgAttachmentId *string `mandatory:"true" contributesTo:"path" name:"drgAttachmentId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteDrgAttachmentRequest) String() string { + return common.PointerString(request) +} + +// DeleteDrgAttachmentResponse wrapper for the DeleteDrgAttachment operation +type DeleteDrgAttachmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteDrgAttachmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_drg_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_drg_request_response.go new file mode 100644 index 000000000..225db2c44 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_drg_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteDrgRequest wrapper for the DeleteDrg operation +type DeleteDrgRequest struct { + + // The OCID of the DRG. + DrgId *string `mandatory:"true" contributesTo:"path" name:"drgId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteDrgRequest) String() string { + return common.PointerString(request) +} + +// DeleteDrgResponse wrapper for the DeleteDrg operation +type DeleteDrgResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteDrgResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_i_p_sec_connection_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_i_p_sec_connection_request_response.go new file mode 100644 index 000000000..47a19acff --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_i_p_sec_connection_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteIPSecConnectionRequest wrapper for the DeleteIPSecConnection operation +type DeleteIPSecConnectionRequest struct { + + // The OCID of the IPSec connection. + IpscId *string `mandatory:"true" contributesTo:"path" name:"ipscId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteIPSecConnectionRequest) String() string { + return common.PointerString(request) +} + +// DeleteIPSecConnectionResponse wrapper for the DeleteIPSecConnection operation +type DeleteIPSecConnectionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteIPSecConnectionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_image_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_image_request_response.go new file mode 100644 index 000000000..f7c0a2b7f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_image_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteImageRequest wrapper for the DeleteImage operation +type DeleteImageRequest struct { + + // The OCID of the image. + ImageId *string `mandatory:"true" contributesTo:"path" name:"imageId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteImageRequest) String() string { + return common.PointerString(request) +} + +// DeleteImageResponse wrapper for the DeleteImage operation +type DeleteImageResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteImageResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_instance_console_connection_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_instance_console_connection_request_response.go new file mode 100644 index 000000000..126d52c84 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_instance_console_connection_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteInstanceConsoleConnectionRequest wrapper for the DeleteInstanceConsoleConnection operation +type DeleteInstanceConsoleConnectionRequest struct { + + // The OCID of the intance console connection + InstanceConsoleConnectionId *string `mandatory:"true" contributesTo:"path" name:"instanceConsoleConnectionId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteInstanceConsoleConnectionRequest) String() string { + return common.PointerString(request) +} + +// DeleteInstanceConsoleConnectionResponse wrapper for the DeleteInstanceConsoleConnection operation +type DeleteInstanceConsoleConnectionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteInstanceConsoleConnectionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_internet_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_internet_gateway_request_response.go new file mode 100644 index 000000000..e1c2fcf27 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_internet_gateway_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteInternetGatewayRequest wrapper for the DeleteInternetGateway operation +type DeleteInternetGatewayRequest struct { + + // The OCID of the Internet Gateway. + IgId *string `mandatory:"true" contributesTo:"path" name:"igId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteInternetGatewayRequest) String() string { + return common.PointerString(request) +} + +// DeleteInternetGatewayResponse wrapper for the DeleteInternetGateway operation +type DeleteInternetGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteInternetGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_local_peering_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_local_peering_gateway_request_response.go new file mode 100644 index 000000000..312cd7485 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_local_peering_gateway_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteLocalPeeringGatewayRequest wrapper for the DeleteLocalPeeringGateway operation +type DeleteLocalPeeringGatewayRequest struct { + + // The OCID of the local peering gateway. + LocalPeeringGatewayId *string `mandatory:"true" contributesTo:"path" name:"localPeeringGatewayId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteLocalPeeringGatewayRequest) String() string { + return common.PointerString(request) +} + +// DeleteLocalPeeringGatewayResponse wrapper for the DeleteLocalPeeringGateway operation +type DeleteLocalPeeringGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteLocalPeeringGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_private_ip_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_private_ip_request_response.go new file mode 100644 index 000000000..6994c1320 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_private_ip_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeletePrivateIpRequest wrapper for the DeletePrivateIp operation +type DeletePrivateIpRequest struct { + + // The private IP's OCID. + PrivateIpId *string `mandatory:"true" contributesTo:"path" name:"privateIpId"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeletePrivateIpRequest) String() string { + return common.PointerString(request) +} + +// DeletePrivateIpResponse wrapper for the DeletePrivateIp operation +type DeletePrivateIpResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeletePrivateIpResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_route_table_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_route_table_request_response.go new file mode 100644 index 000000000..79cf5fee0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_route_table_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteRouteTableRequest wrapper for the DeleteRouteTable operation +type DeleteRouteTableRequest struct { + + // The OCID of the route table. + RtId *string `mandatory:"true" contributesTo:"path" name:"rtId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteRouteTableRequest) String() string { + return common.PointerString(request) +} + +// DeleteRouteTableResponse wrapper for the DeleteRouteTable operation +type DeleteRouteTableResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteRouteTableResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_security_list_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_security_list_request_response.go new file mode 100644 index 000000000..6fccea27e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_security_list_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteSecurityListRequest wrapper for the DeleteSecurityList operation +type DeleteSecurityListRequest struct { + + // The OCID of the security list. + SecurityListId *string `mandatory:"true" contributesTo:"path" name:"securityListId"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteSecurityListRequest) String() string { + return common.PointerString(request) +} + +// DeleteSecurityListResponse wrapper for the DeleteSecurityList operation +type DeleteSecurityListResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteSecurityListResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_subnet_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_subnet_request_response.go new file mode 100644 index 000000000..cd198d893 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_subnet_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteSubnetRequest wrapper for the DeleteSubnet operation +type DeleteSubnetRequest struct { + + // The OCID of the subnet. + SubnetId *string `mandatory:"true" contributesTo:"path" name:"subnetId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteSubnetRequest) String() string { + return common.PointerString(request) +} + +// DeleteSubnetResponse wrapper for the DeleteSubnet operation +type DeleteSubnetResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteSubnetResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_vcn_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_vcn_request_response.go new file mode 100644 index 000000000..9e30cccc4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_vcn_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteVcnRequest wrapper for the DeleteVcn operation +type DeleteVcnRequest struct { + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"path" name:"vcnId"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteVcnRequest) String() string { + return common.PointerString(request) +} + +// DeleteVcnResponse wrapper for the DeleteVcn operation +type DeleteVcnResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteVcnResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_virtual_circuit_public_prefix_details.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_virtual_circuit_public_prefix_details.go new file mode 100644 index 000000000..cea9aca23 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_virtual_circuit_public_prefix_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DeleteVirtualCircuitPublicPrefixDetails The representation of DeleteVirtualCircuitPublicPrefixDetails +type DeleteVirtualCircuitPublicPrefixDetails struct { + + // An individual public IP prefix (CIDR) to remove from the public virtual circuit. + CidrBlock *string `mandatory:"true" json:"cidrBlock"` +} + +func (m DeleteVirtualCircuitPublicPrefixDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_virtual_circuit_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_virtual_circuit_request_response.go new file mode 100644 index 000000000..55caa0d27 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_virtual_circuit_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteVirtualCircuitRequest wrapper for the DeleteVirtualCircuit operation +type DeleteVirtualCircuitRequest struct { + + // The OCID of the virtual circuit. + VirtualCircuitId *string `mandatory:"true" contributesTo:"path" name:"virtualCircuitId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteVirtualCircuitRequest) String() string { + return common.PointerString(request) +} + +// DeleteVirtualCircuitResponse wrapper for the DeleteVirtualCircuit operation +type DeleteVirtualCircuitResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteVirtualCircuitResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_volume_backup_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_volume_backup_request_response.go new file mode 100644 index 000000000..4c1e30354 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_volume_backup_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteVolumeBackupRequest wrapper for the DeleteVolumeBackup operation +type DeleteVolumeBackupRequest struct { + + // The OCID of the volume backup. + VolumeBackupId *string `mandatory:"true" contributesTo:"path" name:"volumeBackupId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteVolumeBackupRequest) String() string { + return common.PointerString(request) +} + +// DeleteVolumeBackupResponse wrapper for the DeleteVolumeBackup operation +type DeleteVolumeBackupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteVolumeBackupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/delete_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/delete_volume_request_response.go new file mode 100644 index 000000000..e9bfd8a60 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/delete_volume_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteVolumeRequest wrapper for the DeleteVolume operation +type DeleteVolumeRequest struct { + + // The OCID of the volume. + VolumeId *string `mandatory:"true" contributesTo:"path" name:"volumeId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteVolumeRequest) String() string { + return common.PointerString(request) +} + +// DeleteVolumeResponse wrapper for the DeleteVolume operation +type DeleteVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/detach_boot_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/detach_boot_volume_request_response.go new file mode 100644 index 000000000..5820949e9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/detach_boot_volume_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DetachBootVolumeRequest wrapper for the DetachBootVolume operation +type DetachBootVolumeRequest struct { + + // The OCID of the boot volume attachment. + BootVolumeAttachmentId *string `mandatory:"true" contributesTo:"path" name:"bootVolumeAttachmentId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DetachBootVolumeRequest) String() string { + return common.PointerString(request) +} + +// DetachBootVolumeResponse wrapper for the DetachBootVolume operation +type DetachBootVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DetachBootVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/detach_vnic_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/detach_vnic_request_response.go new file mode 100644 index 000000000..7f4bf5e93 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/detach_vnic_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DetachVnicRequest wrapper for the DetachVnic operation +type DetachVnicRequest struct { + + // The OCID of the VNIC attachment. + VnicAttachmentId *string `mandatory:"true" contributesTo:"path" name:"vnicAttachmentId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DetachVnicRequest) String() string { + return common.PointerString(request) +} + +// DetachVnicResponse wrapper for the DetachVnic operation +type DetachVnicResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DetachVnicResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/detach_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/detach_volume_request_response.go new file mode 100644 index 000000000..59188b434 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/detach_volume_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DetachVolumeRequest wrapper for the DetachVolume operation +type DetachVolumeRequest struct { + + // The OCID of the volume attachment. + VolumeAttachmentId *string `mandatory:"true" contributesTo:"path" name:"volumeAttachmentId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DetachVolumeRequest) String() string { + return common.PointerString(request) +} + +// DetachVolumeResponse wrapper for the DetachVolume operation +type DetachVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DetachVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/dhcp_dns_option.go b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_dns_option.go new file mode 100644 index 000000000..51237ca28 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_dns_option.go @@ -0,0 +1,81 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// DhcpDnsOption DHCP option for specifying how DNS (hostname resolution) is handled in the subnets in the VCN. +// For more information, see +// DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). +type DhcpDnsOption struct { + + // If you set `serverType` to `CustomDnsServer`, specify the IP address + // of at least one DNS server of your choice (three maximum). + CustomDnsServers []string `mandatory:"false" json:"customDnsServers"` + + // - **VcnLocal:** Reserved for future use. 
+ // - **VcnLocalPlusInternet:** Also referred to as "Internet and VCN Resolver". + // Instances can resolve internet hostnames (no Internet Gateway is required), + // and can resolve hostnames of instances in the VCN. This is the default + // value in the default set of DHCP options in the VCN. For the Internet and + // VCN Resolver to work across the VCN, there must also be a DNS label set for + // the VCN, a DNS label set for each subnet, and a hostname for each instance. + // The Internet and VCN Resolver also enables reverse DNS lookup, which lets + // you determine the hostname corresponding to the private IP address. For more + // information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // - **CustomDnsServer:** Instances use a DNS server of your choice (three maximum). + ServerType DhcpDnsOptionServerTypeEnum `mandatory:"true" json:"serverType"` +} + +func (m DhcpDnsOption) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m DhcpDnsOption) MarshalJSON() (buff []byte, e error) { + type MarshalTypeDhcpDnsOption DhcpDnsOption + s := struct { + DiscriminatorParam string `json:"type"` + MarshalTypeDhcpDnsOption + }{ + "DomainNameServer", + (MarshalTypeDhcpDnsOption)(m), + } + + return json.Marshal(&s) +} + +// DhcpDnsOptionServerTypeEnum Enum with underlying type: string +type DhcpDnsOptionServerTypeEnum string + +// Set of constants representing the allowable values for DhcpDnsOptionServerType +const ( + DhcpDnsOptionServerTypeVcnlocal DhcpDnsOptionServerTypeEnum = "VcnLocal" + DhcpDnsOptionServerTypeVcnlocalplusinternet DhcpDnsOptionServerTypeEnum = "VcnLocalPlusInternet" + DhcpDnsOptionServerTypeCustomdnsserver DhcpDnsOptionServerTypeEnum = "CustomDnsServer" +) + +var mappingDhcpDnsOptionServerType = map[string]DhcpDnsOptionServerTypeEnum{ + "VcnLocal": DhcpDnsOptionServerTypeVcnlocal, + "VcnLocalPlusInternet": DhcpDnsOptionServerTypeVcnlocalplusinternet, + "CustomDnsServer": DhcpDnsOptionServerTypeCustomdnsserver, +} + +// GetDhcpDnsOptionServerTypeEnumValues Enumerates the set of values for DhcpDnsOptionServerType +func GetDhcpDnsOptionServerTypeEnumValues() []DhcpDnsOptionServerTypeEnum { + values := make([]DhcpDnsOptionServerTypeEnum, 0) + for _, v := range mappingDhcpDnsOptionServerType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/dhcp_option.go b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_option.go new file mode 100644 index 000000000..742f3411b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_option.go @@ -0,0 +1,64 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// DhcpOption A single DHCP option according to RFC 1533 (https://tools.ietf.org/html/rfc1533). +// The two options available to use are DhcpDnsOption +// and DhcpSearchDomainOption. For more +// information, see DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm) +// and DHCP Options (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingDHCP.htm). 
+type DhcpOption interface { +} + +type dhcpoption struct { + JsonData []byte + Type string `json:"type"` +} + +// UnmarshalJSON unmarshals json +func (m *dhcpoption) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalerdhcpoption dhcpoption + s := struct { + Model Unmarshalerdhcpoption + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.Type = s.Model.Type + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *dhcpoption) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.Type { + case "DomainNameServer": + mm := DhcpDnsOption{} + err = json.Unmarshal(data, &mm) + return mm, err + case "SearchDomain": + mm := DhcpSearchDomainOption{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +func (m dhcpoption) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/dhcp_options.go b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_options.go new file mode 100644 index 000000000..f96edd995 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_options.go @@ -0,0 +1,115 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// DhcpOptions A set of DHCP options. Used by the VCN to automatically provide configuration +// information to the instances when they boot up. There are two options you can set: +// - DhcpDnsOption: Lets you specify how DNS (hostname resolution) is +// handled in the subnets in your VCN. +// - DhcpSearchDomainOption: Lets you specify +// a search domain name to use for DNS queries. +// For more information, see DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm) +// and DHCP Options (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingDHCP.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DhcpOptions struct { + + // The OCID of the compartment containing the set of DHCP options. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // Oracle ID (OCID) for the set of DHCP options. + Id *string `mandatory:"true" json:"id"` + + // The current state of the set of DHCP options. + LifecycleState DhcpOptionsLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The collection of individual DHCP options. + Options []DhcpOption `mandatory:"true" json:"options"` + + // Date and time the set of DHCP options was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The OCID of the VCN the set of DHCP options belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. 
+ DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m DhcpOptions) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *DhcpOptions) UnmarshalJSON(data []byte) (e error) { + model := struct { + DisplayName *string `json:"displayName"` + CompartmentId *string `json:"compartmentId"` + Id *string `json:"id"` + LifecycleState DhcpOptionsLifecycleStateEnum `json:"lifecycleState"` + Options []dhcpoption `json:"options"` + TimeCreated *common.SDKTime `json:"timeCreated"` + VcnId *string `json:"vcnId"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.DisplayName = model.DisplayName + m.CompartmentId = model.CompartmentId + m.Id = model.Id + m.LifecycleState = model.LifecycleState + m.Options = make([]DhcpOption, len(model.Options)) + for i, n := range model.Options { + nn, err := n.UnmarshalPolymorphicJSON(n.JsonData) + if err != nil { + return err + } + m.Options[i] = nn + } + m.TimeCreated = model.TimeCreated + m.VcnId = model.VcnId + return +} + +// DhcpOptionsLifecycleStateEnum Enum with underlying type: string +type DhcpOptionsLifecycleStateEnum string + +// Set of constants representing the allowable values for DhcpOptionsLifecycleState +const ( + DhcpOptionsLifecycleStateProvisioning DhcpOptionsLifecycleStateEnum = "PROVISIONING" + DhcpOptionsLifecycleStateAvailable DhcpOptionsLifecycleStateEnum = "AVAILABLE" + DhcpOptionsLifecycleStateTerminating DhcpOptionsLifecycleStateEnum = "TERMINATING" + DhcpOptionsLifecycleStateTerminated DhcpOptionsLifecycleStateEnum = "TERMINATED" +) + +var mappingDhcpOptionsLifecycleState = map[string]DhcpOptionsLifecycleStateEnum{ + "PROVISIONING": DhcpOptionsLifecycleStateProvisioning, + "AVAILABLE": DhcpOptionsLifecycleStateAvailable, + "TERMINATING": DhcpOptionsLifecycleStateTerminating, + "TERMINATED": DhcpOptionsLifecycleStateTerminated, +} + +// GetDhcpOptionsLifecycleStateEnumValues Enumerates the set of values for DhcpOptionsLifecycleState +func GetDhcpOptionsLifecycleStateEnumValues() []DhcpOptionsLifecycleStateEnum { + values := make([]DhcpOptionsLifecycleStateEnum, 0) + for _, v := range mappingDhcpOptionsLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/dhcp_search_domain_option.go b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_search_domain_option.go new file mode 100644 index 000000000..58968eb53 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/dhcp_search_domain_option.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// DhcpSearchDomainOption DHCP option for specifying a search domain name for DNS queries. For more information, see +// DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). +type DhcpSearchDomainOption struct { + + // A single search domain name according to RFC 952 (https://tools.ietf.org/html/rfc952) + // and RFC 1123 (https://tools.ietf.org/html/rfc1123). During a DNS query, + // the OS will append this search domain name to the value being queried. 
+ // If you set DhcpDnsOption to `VcnLocalPlusInternet`, + // and you assign a DNS label to the VCN during creation, the search domain name in the + // VCN's default set of DHCP options is automatically set to the VCN domain + // (for example, `vcn1.oraclevcn.com`). + // If you don't want to use a search domain name, omit this option from the + // set of DHCP options. Do not include this option with an empty list + // of search domain names, or with an empty string as the value for any search + // domain name. + SearchDomainNames []string `mandatory:"true" json:"searchDomainNames"` +} + +func (m DhcpSearchDomainOption) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m DhcpSearchDomainOption) MarshalJSON() (buff []byte, e error) { + type MarshalTypeDhcpSearchDomainOption DhcpSearchDomainOption + s := struct { + DiscriminatorParam string `json:"type"` + MarshalTypeDhcpSearchDomainOption + }{ + "SearchDomain", + (MarshalTypeDhcpSearchDomainOption)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/drg.go b/vendor/github.com/oracle/oci-go-sdk/core/drg.go new file mode 100644 index 000000000..b87673bba --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/drg.go @@ -0,0 +1,72 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Drg A Dynamic Routing Gateway (DRG), which is a virtual router that provides a path for private +// network traffic between your VCN and your existing network. You use it with other Networking +// Service components to create an IPSec VPN or a connection that uses +// Oracle Cloud Infrastructure FastConnect. For more information, see +// Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Drg struct { + + // The OCID of the compartment containing the DRG. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The DRG's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The DRG's current state. + LifecycleState DrgLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The date and time the DRG was created, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m Drg) String() string { + return common.PointerString(m) +} + +// DrgLifecycleStateEnum Enum with underlying type: string +type DrgLifecycleStateEnum string + +// Set of constants representing the allowable values for DrgLifecycleState +const ( + DrgLifecycleStateProvisioning DrgLifecycleStateEnum = "PROVISIONING" + DrgLifecycleStateAvailable DrgLifecycleStateEnum = "AVAILABLE" + DrgLifecycleStateTerminating DrgLifecycleStateEnum = "TERMINATING" + DrgLifecycleStateTerminated DrgLifecycleStateEnum = "TERMINATED" +) + +var mappingDrgLifecycleState = map[string]DrgLifecycleStateEnum{ + "PROVISIONING": DrgLifecycleStateProvisioning, + "AVAILABLE": DrgLifecycleStateAvailable, + "TERMINATING": DrgLifecycleStateTerminating, + "TERMINATED": DrgLifecycleStateTerminated, +} + +// GetDrgLifecycleStateEnumValues Enumerates the set of values for DrgLifecycleState +func GetDrgLifecycleStateEnumValues() []DrgLifecycleStateEnum { + values := make([]DrgLifecycleStateEnum, 0) + for _, v := range mappingDrgLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/drg_attachment.go b/vendor/github.com/oracle/oci-go-sdk/core/drg_attachment.go new file mode 100644 index 000000000..108763457 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/drg_attachment.go @@ -0,0 +1,72 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DrgAttachment A link between a DRG and VCN. For more information, see +// Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm). +type DrgAttachment struct { + + // The OCID of the compartment containing the DRG attachment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the DRG. + DrgId *string `mandatory:"true" json:"drgId"` + + // The DRG attachment's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The DRG attachment's current state. + LifecycleState DrgAttachmentLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The date and time the DRG attachment was created, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m DrgAttachment) String() string { + return common.PointerString(m) +} + +// DrgAttachmentLifecycleStateEnum Enum with underlying type: string +type DrgAttachmentLifecycleStateEnum string + +// Set of constants representing the allowable values for DrgAttachmentLifecycleState +const ( + DrgAttachmentLifecycleStateAttaching DrgAttachmentLifecycleStateEnum = "ATTACHING" + DrgAttachmentLifecycleStateAttached DrgAttachmentLifecycleStateEnum = "ATTACHED" + DrgAttachmentLifecycleStateDetaching DrgAttachmentLifecycleStateEnum = "DETACHING" + DrgAttachmentLifecycleStateDetached DrgAttachmentLifecycleStateEnum = "DETACHED" +) + +var mappingDrgAttachmentLifecycleState = map[string]DrgAttachmentLifecycleStateEnum{ + "ATTACHING": DrgAttachmentLifecycleStateAttaching, + "ATTACHED": DrgAttachmentLifecycleStateAttached, + "DETACHING": DrgAttachmentLifecycleStateDetaching, + "DETACHED": DrgAttachmentLifecycleStateDetached, +} + +// GetDrgAttachmentLifecycleStateEnumValues Enumerates the set of values for DrgAttachmentLifecycleState +func GetDrgAttachmentLifecycleStateEnumValues() []DrgAttachmentLifecycleStateEnum { + values := make([]DrgAttachmentLifecycleStateEnum, 0) + for _, v := range mappingDrgAttachmentLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/egress_security_rule.go b/vendor/github.com/oracle/oci-go-sdk/core/egress_security_rule.go new file mode 100644 index 000000000..313470a2f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/egress_security_rule.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// EgressSecurityRule A rule for allowing outbound IP packets. +type EgressSecurityRule struct { + + // The destination CIDR block for the egress rule. This is the range of IP addresses that a + // packet originating from the instance can go to. + Destination *string `mandatory:"true" json:"destination"` + + // The transport protocol. Specify either `all` or an IPv4 protocol number as + // defined in + // Protocol Numbers (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). + // Options are supported only for ICMP ("1"), TCP ("6"), and UDP ("17"). + Protocol *string `mandatory:"true" json:"protocol"` + + // Optional and valid only for ICMP. Use to specify a particular ICMP type and code + // as defined in + // ICMP Parameters (http://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml). + // If you specify ICMP as the protocol but omit this object, then all ICMP types and + // codes are allowed. If you do provide this object, the type is required and the code is optional. + // To enable MTU negotiation for ingress internet traffic, make sure to allow type 3 ("Destination + // Unreachable") code 4 ("Fragmentation Needed and Don't Fragment was Set"). If you need to specify + // multiple codes for a single type, create a separate security list rule for each. + IcmpOptions *IcmpOptions `mandatory:"false" json:"icmpOptions"` + + // A stateless rule allows traffic in one direction. Remember to add a corresponding + // stateless rule in the other direction if you need to support bidirectional traffic. 
For + // example, if egress traffic allows TCP destination port 80, there should be an ingress + // rule to allow TCP source port 80. Defaults to false, which means the rule is stateful + // and a corresponding rule is not necessary for bidirectional traffic. + IsStateless *bool `mandatory:"false" json:"isStateless"` + + // Optional and valid only for TCP. Use to specify particular destination ports for TCP rules. + // If you specify TCP as the protocol but omit this object, then all destination ports are allowed. + TcpOptions *TcpOptions `mandatory:"false" json:"tcpOptions"` + + // Optional and valid only for UDP. Use to specify particular destination ports for UDP rules. + // If you specify UDP as the protocol but omit this object, then all destination ports are allowed. + UdpOptions *UdpOptions `mandatory:"false" json:"udpOptions"` +} + +func (m EgressSecurityRule) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/export_image_details.go b/vendor/github.com/oracle/oci-go-sdk/core/export_image_details.go new file mode 100644 index 000000000..f066a58a7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/export_image_details.go @@ -0,0 +1,66 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// ExportImageDetails The destination details for the image export. +// Set `destinationType` to `objectStorageTuple` +// and use ExportImageViaObjectStorageTupleDetails +// when specifying the namespace, bucket name, and object name. +// Set `destinationType` to `objectStorageUri` and +// use ExportImageViaObjectStorageUriDetails +// when specifying the Object Storage URL. +type ExportImageDetails interface { +} + +type exportimagedetails struct { + JsonData []byte + DestinationType string `json:"destinationType"` +} + +// UnmarshalJSON unmarshals json +func (m *exportimagedetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalerexportimagedetails exportimagedetails + s := struct { + Model Unmarshalerexportimagedetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.DestinationType = s.Model.DestinationType + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *exportimagedetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.DestinationType { + case "objectStorageUri": + mm := ExportImageViaObjectStorageUriDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + case "objectStorageTuple": + mm := ExportImageViaObjectStorageTupleDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +func (m exportimagedetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/export_image_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/export_image_request_response.go new file mode 100644 index 000000000..4ecb1a3ce --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/export_image_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ExportImageRequest wrapper for the ExportImage operation +type ExportImageRequest struct { + + // The OCID of the image. + ImageId *string `mandatory:"true" contributesTo:"path" name:"imageId"` + + // Details for the image export. + ExportImageDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request ExportImageRequest) String() string { + return common.PointerString(request) +} + +// ExportImageResponse wrapper for the ExportImage operation +type ExportImageResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Image instance + Image `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ExportImageResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/export_image_via_object_storage_tuple_details.go b/vendor/github.com/oracle/oci-go-sdk/core/export_image_via_object_storage_tuple_details.go new file mode 100644 index 000000000..39d08873f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/export_image_via_object_storage_tuple_details.go @@ -0,0 +1,45 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// ExportImageViaObjectStorageTupleDetails The representation of ExportImageViaObjectStorageTupleDetails +type ExportImageViaObjectStorageTupleDetails struct { + + // The Object Storage bucket to export the image to. + BucketName *string `mandatory:"false" json:"bucketName"` + + // The Object Storage namespace to export the image to. + NamespaceName *string `mandatory:"false" json:"namespaceName"` + + // The Object Storage object name for the exported image. 
+ ObjectName *string `mandatory:"false" json:"objectName"` +} + +func (m ExportImageViaObjectStorageTupleDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m ExportImageViaObjectStorageTupleDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeExportImageViaObjectStorageTupleDetails ExportImageViaObjectStorageTupleDetails + s := struct { + DiscriminatorParam string `json:"destinationType"` + MarshalTypeExportImageViaObjectStorageTupleDetails + }{ + "objectStorageTuple", + (MarshalTypeExportImageViaObjectStorageTupleDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/export_image_via_object_storage_uri_details.go b/vendor/github.com/oracle/oci-go-sdk/core/export_image_via_object_storage_uri_details.go new file mode 100644 index 000000000..5a44a21a6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/export_image_via_object_storage_uri_details.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// ExportImageViaObjectStorageUriDetails The representation of ExportImageViaObjectStorageUriDetails +type ExportImageViaObjectStorageUriDetails struct { + + // The Object Storage URL to export the image to. See Object Storage URLs (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Tasks/imageimportexport.htm#URLs) + // and pre-authenticated requests (https://docs.us-phoenix-1.oraclecloud.com/Content/Object/Tasks/managingaccess.htm#pre-auth) for constructing URLs for image import/export. + DestinationUri *string `mandatory:"true" json:"destinationUri"` +} + +func (m ExportImageViaObjectStorageUriDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m ExportImageViaObjectStorageUriDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeExportImageViaObjectStorageUriDetails ExportImageViaObjectStorageUriDetails + s := struct { + DiscriminatorParam string `json:"destinationType"` + MarshalTypeExportImageViaObjectStorageUriDetails + }{ + "objectStorageUri", + (MarshalTypeExportImageViaObjectStorageUriDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/fast_connect_provider_service.go b/vendor/github.com/oracle/oci-go-sdk/core/fast_connect_provider_service.go new file mode 100644 index 000000000..1b8dcca1d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/fast_connect_provider_service.go @@ -0,0 +1,142 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// FastConnectProviderService A service offering from a supported provider. For more information, +// see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +type FastConnectProviderService struct { + + // The OCID of the service offered by the provider. + Id *string `mandatory:"true" json:"id"` + + // Private peering BGP management. 
+ PrivatePeeringBgpManagement FastConnectProviderServicePrivatePeeringBgpManagementEnum `mandatory:"true" json:"privatePeeringBgpManagement"` + + // The name of the provider. + ProviderName *string `mandatory:"true" json:"providerName"` + + // The name of the service offered by the provider. + ProviderServiceName *string `mandatory:"true" json:"providerServiceName"` + + // Public peering BGP management. + PublicPeeringBgpManagement FastConnectProviderServicePublicPeeringBgpManagementEnum `mandatory:"true" json:"publicPeeringBgpManagement"` + + // Provider service type. + Type FastConnectProviderServiceTypeEnum `mandatory:"true" json:"type"` + + // A description of the service offered by the provider. + Description *string `mandatory:"false" json:"description"` + + // An array of virtual circuit types supported by this service. + SupportedVirtualCircuitTypes []FastConnectProviderServiceSupportedVirtualCircuitTypesEnum `mandatory:"false" json:"supportedVirtualCircuitTypes,omitempty"` +} + +func (m FastConnectProviderService) String() string { + return common.PointerString(m) +} + +// FastConnectProviderServicePrivatePeeringBgpManagementEnum Enum with underlying type: string +type FastConnectProviderServicePrivatePeeringBgpManagementEnum string + +// Set of constants representing the allowable values for FastConnectProviderServicePrivatePeeringBgpManagement +const ( + FastConnectProviderServicePrivatePeeringBgpManagementCustomerManaged FastConnectProviderServicePrivatePeeringBgpManagementEnum = "CUSTOMER_MANAGED" + FastConnectProviderServicePrivatePeeringBgpManagementProviderManaged FastConnectProviderServicePrivatePeeringBgpManagementEnum = "PROVIDER_MANAGED" + FastConnectProviderServicePrivatePeeringBgpManagementOracleManaged FastConnectProviderServicePrivatePeeringBgpManagementEnum = "ORACLE_MANAGED" +) + +var mappingFastConnectProviderServicePrivatePeeringBgpManagement = map[string]FastConnectProviderServicePrivatePeeringBgpManagementEnum{ + "CUSTOMER_MANAGED": FastConnectProviderServicePrivatePeeringBgpManagementCustomerManaged, + "PROVIDER_MANAGED": FastConnectProviderServicePrivatePeeringBgpManagementProviderManaged, + "ORACLE_MANAGED": FastConnectProviderServicePrivatePeeringBgpManagementOracleManaged, +} + +// GetFastConnectProviderServicePrivatePeeringBgpManagementEnumValues Enumerates the set of values for FastConnectProviderServicePrivatePeeringBgpManagement +func GetFastConnectProviderServicePrivatePeeringBgpManagementEnumValues() []FastConnectProviderServicePrivatePeeringBgpManagementEnum { + values := make([]FastConnectProviderServicePrivatePeeringBgpManagementEnum, 0) + for _, v := range mappingFastConnectProviderServicePrivatePeeringBgpManagement { + values = append(values, v) + } + return values +} + +// FastConnectProviderServicePublicPeeringBgpManagementEnum Enum with underlying type: string +type FastConnectProviderServicePublicPeeringBgpManagementEnum string + +// Set of constants representing the allowable values for FastConnectProviderServicePublicPeeringBgpManagement +const ( + FastConnectProviderServicePublicPeeringBgpManagementCustomerManaged FastConnectProviderServicePublicPeeringBgpManagementEnum = "CUSTOMER_MANAGED" + FastConnectProviderServicePublicPeeringBgpManagementProviderManaged FastConnectProviderServicePublicPeeringBgpManagementEnum = "PROVIDER_MANAGED" + FastConnectProviderServicePublicPeeringBgpManagementOracleManaged FastConnectProviderServicePublicPeeringBgpManagementEnum = "ORACLE_MANAGED" +) + +var 
mappingFastConnectProviderServicePublicPeeringBgpManagement = map[string]FastConnectProviderServicePublicPeeringBgpManagementEnum{ + "CUSTOMER_MANAGED": FastConnectProviderServicePublicPeeringBgpManagementCustomerManaged, + "PROVIDER_MANAGED": FastConnectProviderServicePublicPeeringBgpManagementProviderManaged, + "ORACLE_MANAGED": FastConnectProviderServicePublicPeeringBgpManagementOracleManaged, +} + +// GetFastConnectProviderServicePublicPeeringBgpManagementEnumValues Enumerates the set of values for FastConnectProviderServicePublicPeeringBgpManagement +func GetFastConnectProviderServicePublicPeeringBgpManagementEnumValues() []FastConnectProviderServicePublicPeeringBgpManagementEnum { + values := make([]FastConnectProviderServicePublicPeeringBgpManagementEnum, 0) + for _, v := range mappingFastConnectProviderServicePublicPeeringBgpManagement { + values = append(values, v) + } + return values +} + +// FastConnectProviderServiceSupportedVirtualCircuitTypesEnum Enum with underlying type: string +type FastConnectProviderServiceSupportedVirtualCircuitTypesEnum string + +// Set of constants representing the allowable values for FastConnectProviderServiceSupportedVirtualCircuitTypes +const ( + FastConnectProviderServiceSupportedVirtualCircuitTypesPublic FastConnectProviderServiceSupportedVirtualCircuitTypesEnum = "PUBLIC" + FastConnectProviderServiceSupportedVirtualCircuitTypesPrivate FastConnectProviderServiceSupportedVirtualCircuitTypesEnum = "PRIVATE" +) + +var mappingFastConnectProviderServiceSupportedVirtualCircuitTypes = map[string]FastConnectProviderServiceSupportedVirtualCircuitTypesEnum{ + "PUBLIC": FastConnectProviderServiceSupportedVirtualCircuitTypesPublic, + "PRIVATE": FastConnectProviderServiceSupportedVirtualCircuitTypesPrivate, +} + +// GetFastConnectProviderServiceSupportedVirtualCircuitTypesEnumValues Enumerates the set of values for FastConnectProviderServiceSupportedVirtualCircuitTypes +func GetFastConnectProviderServiceSupportedVirtualCircuitTypesEnumValues() []FastConnectProviderServiceSupportedVirtualCircuitTypesEnum { + values := make([]FastConnectProviderServiceSupportedVirtualCircuitTypesEnum, 0) + for _, v := range mappingFastConnectProviderServiceSupportedVirtualCircuitTypes { + values = append(values, v) + } + return values +} + +// FastConnectProviderServiceTypeEnum Enum with underlying type: string +type FastConnectProviderServiceTypeEnum string + +// Set of constants representing the allowable values for FastConnectProviderServiceType +const ( + FastConnectProviderServiceTypeLayer2 FastConnectProviderServiceTypeEnum = "LAYER2" + FastConnectProviderServiceTypeLayer3 FastConnectProviderServiceTypeEnum = "LAYER3" +) + +var mappingFastConnectProviderServiceType = map[string]FastConnectProviderServiceTypeEnum{ + "LAYER2": FastConnectProviderServiceTypeLayer2, + "LAYER3": FastConnectProviderServiceTypeLayer3, +} + +// GetFastConnectProviderServiceTypeEnumValues Enumerates the set of values for FastConnectProviderServiceType +func GetFastConnectProviderServiceTypeEnumValues() []FastConnectProviderServiceTypeEnum { + values := make([]FastConnectProviderServiceTypeEnum, 0) + for _, v := range mappingFastConnectProviderServiceType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_boot_volume_attachment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_boot_volume_attachment_request_response.go new file mode 100644 index 000000000..e14e9201b --- /dev/null +++ 
b/vendor/github.com/oracle/oci-go-sdk/core/get_boot_volume_attachment_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBootVolumeAttachmentRequest wrapper for the GetBootVolumeAttachment operation +type GetBootVolumeAttachmentRequest struct { + + // The OCID of the boot volume attachment. + BootVolumeAttachmentId *string `mandatory:"true" contributesTo:"path" name:"bootVolumeAttachmentId"` +} + +func (request GetBootVolumeAttachmentRequest) String() string { + return common.PointerString(request) +} + +// GetBootVolumeAttachmentResponse wrapper for the GetBootVolumeAttachment operation +type GetBootVolumeAttachmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The BootVolumeAttachment instance + BootVolumeAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetBootVolumeAttachmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_boot_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_boot_volume_request_response.go new file mode 100644 index 000000000..95c3d0909 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_boot_volume_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBootVolumeRequest wrapper for the GetBootVolume operation +type GetBootVolumeRequest struct { + + // The OCID of the boot volume. + BootVolumeId *string `mandatory:"true" contributesTo:"path" name:"bootVolumeId"` +} + +func (request GetBootVolumeRequest) String() string { + return common.PointerString(request) +} + +// GetBootVolumeResponse wrapper for the GetBootVolume operation +type GetBootVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The BootVolume instance + BootVolume `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetBootVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_console_history_content_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_console_history_content_request_response.go new file mode 100644 index 000000000..4fc8e50ba --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_console_history_content_request_response.go @@ -0,0 +1,47 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetConsoleHistoryContentRequest wrapper for the GetConsoleHistoryContent operation +type GetConsoleHistoryContentRequest struct { + + // The OCID of the console history. + InstanceConsoleHistoryId *string `mandatory:"true" contributesTo:"path" name:"instanceConsoleHistoryId"` + + // Offset of the snapshot data to retrieve. + Offset *int `mandatory:"false" contributesTo:"query" name:"offset"` + + // Length of the snapshot data to retrieve. + Length *int `mandatory:"false" contributesTo:"query" name:"length"` +} + +func (request GetConsoleHistoryContentRequest) String() string { + return common.PointerString(request) +} + +// GetConsoleHistoryContentResponse wrapper for the GetConsoleHistoryContent operation +type GetConsoleHistoryContentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The string instance + Value *string `presentIn:"body" encoding:"plain-text"` + + // The number of bytes remaining in the snapshot. + OpcBytesRemaining *int `presentIn:"header" name:"opc-bytes-remaining"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetConsoleHistoryContentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_console_history_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_console_history_request_response.go new file mode 100644 index 000000000..b5aa9eb38 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_console_history_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetConsoleHistoryRequest wrapper for the GetConsoleHistory operation +type GetConsoleHistoryRequest struct { + + // The OCID of the console history. + InstanceConsoleHistoryId *string `mandatory:"true" contributesTo:"path" name:"instanceConsoleHistoryId"` +} + +func (request GetConsoleHistoryRequest) String() string { + return common.PointerString(request) +} + +// GetConsoleHistoryResponse wrapper for the GetConsoleHistory operation +type GetConsoleHistoryResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The ConsoleHistory instance + ConsoleHistory `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetConsoleHistoryResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_cpe_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_cpe_request_response.go new file mode 100644 index 000000000..0ed76a7f9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_cpe_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetCpeRequest wrapper for the GetCpe operation +type GetCpeRequest struct { + + // The OCID of the CPE. + CpeId *string `mandatory:"true" contributesTo:"path" name:"cpeId"` +} + +func (request GetCpeRequest) String() string { + return common.PointerString(request) +} + +// GetCpeResponse wrapper for the GetCpe operation +type GetCpeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Cpe instance + Cpe `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetCpeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_group_request_response.go new file mode 100644 index 000000000..fd3d8803e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_group_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetCrossConnectGroupRequest wrapper for the GetCrossConnectGroup operation +type GetCrossConnectGroupRequest struct { + + // The OCID of the cross-connect group. + CrossConnectGroupId *string `mandatory:"true" contributesTo:"path" name:"crossConnectGroupId"` +} + +func (request GetCrossConnectGroupRequest) String() string { + return common.PointerString(request) +} + +// GetCrossConnectGroupResponse wrapper for the GetCrossConnectGroup operation +type GetCrossConnectGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CrossConnectGroup instance + CrossConnectGroup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetCrossConnectGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_letter_of_authority_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_letter_of_authority_request_response.go new file mode 100644 index 000000000..674ea1f19 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_letter_of_authority_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetCrossConnectLetterOfAuthorityRequest wrapper for the GetCrossConnectLetterOfAuthority operation +type GetCrossConnectLetterOfAuthorityRequest struct { + + // The OCID of the cross-connect. 
+ CrossConnectId *string `mandatory:"true" contributesTo:"path" name:"crossConnectId"` +} + +func (request GetCrossConnectLetterOfAuthorityRequest) String() string { + return common.PointerString(request) +} + +// GetCrossConnectLetterOfAuthorityResponse wrapper for the GetCrossConnectLetterOfAuthority operation +type GetCrossConnectLetterOfAuthorityResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The LetterOfAuthority instance + LetterOfAuthority `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetCrossConnectLetterOfAuthorityResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_request_response.go new file mode 100644 index 000000000..b505c6ad5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetCrossConnectRequest wrapper for the GetCrossConnect operation +type GetCrossConnectRequest struct { + + // The OCID of the cross-connect. + CrossConnectId *string `mandatory:"true" contributesTo:"path" name:"crossConnectId"` +} + +func (request GetCrossConnectRequest) String() string { + return common.PointerString(request) +} + +// GetCrossConnectResponse wrapper for the GetCrossConnect operation +type GetCrossConnectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CrossConnect instance + CrossConnect `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetCrossConnectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_status_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_status_request_response.go new file mode 100644 index 000000000..cf826242e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_cross_connect_status_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetCrossConnectStatusRequest wrapper for the GetCrossConnectStatus operation +type GetCrossConnectStatusRequest struct { + + // The OCID of the cross-connect. 
+ CrossConnectId *string `mandatory:"true" contributesTo:"path" name:"crossConnectId"` +} + +func (request GetCrossConnectStatusRequest) String() string { + return common.PointerString(request) +} + +// GetCrossConnectStatusResponse wrapper for the GetCrossConnectStatus operation +type GetCrossConnectStatusResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CrossConnectStatus instance + CrossConnectStatus `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetCrossConnectStatusResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_dhcp_options_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_dhcp_options_request_response.go new file mode 100644 index 000000000..72f46761c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_dhcp_options_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDhcpOptionsRequest wrapper for the GetDhcpOptions operation +type GetDhcpOptionsRequest struct { + + // The OCID for the set of DHCP options. + DhcpId *string `mandatory:"true" contributesTo:"path" name:"dhcpId"` +} + +func (request GetDhcpOptionsRequest) String() string { + return common.PointerString(request) +} + +// GetDhcpOptionsResponse wrapper for the GetDhcpOptions operation +type GetDhcpOptionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DhcpOptions instance + DhcpOptions `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDhcpOptionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_drg_attachment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_drg_attachment_request_response.go new file mode 100644 index 000000000..b485721ae --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_drg_attachment_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDrgAttachmentRequest wrapper for the GetDrgAttachment operation +type GetDrgAttachmentRequest struct { + + // The OCID of the DRG attachment. + DrgAttachmentId *string `mandatory:"true" contributesTo:"path" name:"drgAttachmentId"` +} + +func (request GetDrgAttachmentRequest) String() string { + return common.PointerString(request) +} + +// GetDrgAttachmentResponse wrapper for the GetDrgAttachment operation +type GetDrgAttachmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DrgAttachment instance + DrgAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. 
+ Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDrgAttachmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_drg_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_drg_request_response.go new file mode 100644 index 000000000..4488accfc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_drg_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDrgRequest wrapper for the GetDrg operation +type GetDrgRequest struct { + + // The OCID of the DRG. + DrgId *string `mandatory:"true" contributesTo:"path" name:"drgId"` +} + +func (request GetDrgRequest) String() string { + return common.PointerString(request) +} + +// GetDrgResponse wrapper for the GetDrg operation +type GetDrgResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Drg instance + Drg `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDrgResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_fast_connect_provider_service_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_fast_connect_provider_service_request_response.go new file mode 100644 index 000000000..28b352403 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_fast_connect_provider_service_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetFastConnectProviderServiceRequest wrapper for the GetFastConnectProviderService operation +type GetFastConnectProviderServiceRequest struct { + + // The OCID of the provider service. + ProviderServiceId *string `mandatory:"true" contributesTo:"path" name:"providerServiceId"` +} + +func (request GetFastConnectProviderServiceRequest) String() string { + return common.PointerString(request) +} + +// GetFastConnectProviderServiceResponse wrapper for the GetFastConnectProviderService operation +type GetFastConnectProviderServiceResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The FastConnectProviderService instance + FastConnectProviderService `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetFastConnectProviderServiceResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_device_config_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_device_config_request_response.go new file mode 100644 index 000000000..a41b69aed --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_device_config_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetIPSecConnectionDeviceConfigRequest wrapper for the GetIPSecConnectionDeviceConfig operation +type GetIPSecConnectionDeviceConfigRequest struct { + + // The OCID of the IPSec connection. + IpscId *string `mandatory:"true" contributesTo:"path" name:"ipscId"` +} + +func (request GetIPSecConnectionDeviceConfigRequest) String() string { + return common.PointerString(request) +} + +// GetIPSecConnectionDeviceConfigResponse wrapper for the GetIPSecConnectionDeviceConfig operation +type GetIPSecConnectionDeviceConfigResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IpSecConnectionDeviceConfig instance + IpSecConnectionDeviceConfig `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetIPSecConnectionDeviceConfigResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_device_status_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_device_status_request_response.go new file mode 100644 index 000000000..c3fbde11d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_device_status_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetIPSecConnectionDeviceStatusRequest wrapper for the GetIPSecConnectionDeviceStatus operation +type GetIPSecConnectionDeviceStatusRequest struct { + + // The OCID of the IPSec connection. + IpscId *string `mandatory:"true" contributesTo:"path" name:"ipscId"` +} + +func (request GetIPSecConnectionDeviceStatusRequest) String() string { + return common.PointerString(request) +} + +// GetIPSecConnectionDeviceStatusResponse wrapper for the GetIPSecConnectionDeviceStatus operation +type GetIPSecConnectionDeviceStatusResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IpSecConnectionDeviceStatus instance + IpSecConnectionDeviceStatus `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetIPSecConnectionDeviceStatusResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_request_response.go new file mode 100644 index 000000000..1ba8d86d0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_i_p_sec_connection_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetIPSecConnectionRequest wrapper for the GetIPSecConnection operation +type GetIPSecConnectionRequest struct { + + // The OCID of the IPSec connection. + IpscId *string `mandatory:"true" contributesTo:"path" name:"ipscId"` +} + +func (request GetIPSecConnectionRequest) String() string { + return common.PointerString(request) +} + +// GetIPSecConnectionResponse wrapper for the GetIPSecConnection operation +type GetIPSecConnectionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IpSecConnection instance + IpSecConnection `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetIPSecConnectionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_image_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_image_request_response.go new file mode 100644 index 000000000..9c64a2429 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_image_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetImageRequest wrapper for the GetImage operation +type GetImageRequest struct { + + // The OCID of the image. + ImageId *string `mandatory:"true" contributesTo:"path" name:"imageId"` +} + +func (request GetImageRequest) String() string { + return common.PointerString(request) +} + +// GetImageResponse wrapper for the GetImage operation +type GetImageResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Image instance + Image `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetImageResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_instance_console_connection_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_instance_console_connection_request_response.go new file mode 100644 index 000000000..c90d9c464 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_instance_console_connection_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetInstanceConsoleConnectionRequest wrapper for the GetInstanceConsoleConnection operation +type GetInstanceConsoleConnectionRequest struct { + + // The OCID of the intance console connection + InstanceConsoleConnectionId *string `mandatory:"true" contributesTo:"path" name:"instanceConsoleConnectionId"` +} + +func (request GetInstanceConsoleConnectionRequest) String() string { + return common.PointerString(request) +} + +// GetInstanceConsoleConnectionResponse wrapper for the GetInstanceConsoleConnection operation +type GetInstanceConsoleConnectionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The InstanceConsoleConnection instance + InstanceConsoleConnection `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetInstanceConsoleConnectionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_instance_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_instance_request_response.go new file mode 100644 index 000000000..8619c5ade --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_instance_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetInstanceRequest wrapper for the GetInstance operation +type GetInstanceRequest struct { + + // The OCID of the instance. + InstanceId *string `mandatory:"true" contributesTo:"path" name:"instanceId"` +} + +func (request GetInstanceRequest) String() string { + return common.PointerString(request) +} + +// GetInstanceResponse wrapper for the GetInstance operation +type GetInstanceResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Instance instance + Instance `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetInstanceResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_internet_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_internet_gateway_request_response.go new file mode 100644 index 000000000..61423dffb --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_internet_gateway_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetInternetGatewayRequest wrapper for the GetInternetGateway operation +type GetInternetGatewayRequest struct { + + // The OCID of the Internet Gateway. + IgId *string `mandatory:"true" contributesTo:"path" name:"igId"` +} + +func (request GetInternetGatewayRequest) String() string { + return common.PointerString(request) +} + +// GetInternetGatewayResponse wrapper for the GetInternetGateway operation +type GetInternetGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The InternetGateway instance + InternetGateway `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetInternetGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_local_peering_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_local_peering_gateway_request_response.go new file mode 100644 index 000000000..3a04a7c17 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_local_peering_gateway_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetLocalPeeringGatewayRequest wrapper for the GetLocalPeeringGateway operation +type GetLocalPeeringGatewayRequest struct { + + // The OCID of the local peering gateway. + LocalPeeringGatewayId *string `mandatory:"true" contributesTo:"path" name:"localPeeringGatewayId"` +} + +func (request GetLocalPeeringGatewayRequest) String() string { + return common.PointerString(request) +} + +// GetLocalPeeringGatewayResponse wrapper for the GetLocalPeeringGateway operation +type GetLocalPeeringGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The LocalPeeringGateway instance + LocalPeeringGateway `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetLocalPeeringGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_private_ip_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_private_ip_request_response.go new file mode 100644 index 000000000..778c771bf --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_private_ip_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetPrivateIpRequest wrapper for the GetPrivateIp operation +type GetPrivateIpRequest struct { + + // The private IP's OCID. + PrivateIpId *string `mandatory:"true" contributesTo:"path" name:"privateIpId"` +} + +func (request GetPrivateIpRequest) String() string { + return common.PointerString(request) +} + +// GetPrivateIpResponse wrapper for the GetPrivateIp operation +type GetPrivateIpResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The PrivateIp instance + PrivateIp `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetPrivateIpResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_route_table_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_route_table_request_response.go new file mode 100644 index 000000000..f74bee61b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_route_table_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetRouteTableRequest wrapper for the GetRouteTable operation +type GetRouteTableRequest struct { + + // The OCID of the route table. + RtId *string `mandatory:"true" contributesTo:"path" name:"rtId"` +} + +func (request GetRouteTableRequest) String() string { + return common.PointerString(request) +} + +// GetRouteTableResponse wrapper for the GetRouteTable operation +type GetRouteTableResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The RouteTable instance + RouteTable `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetRouteTableResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_security_list_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_security_list_request_response.go new file mode 100644 index 000000000..a62a52d89 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_security_list_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetSecurityListRequest wrapper for the GetSecurityList operation +type GetSecurityListRequest struct { + + // The OCID of the security list. + SecurityListId *string `mandatory:"true" contributesTo:"path" name:"securityListId"` +} + +func (request GetSecurityListRequest) String() string { + return common.PointerString(request) +} + +// GetSecurityListResponse wrapper for the GetSecurityList operation +type GetSecurityListResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The SecurityList instance + SecurityList `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetSecurityListResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_subnet_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_subnet_request_response.go new file mode 100644 index 000000000..a8ffa4c10 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_subnet_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetSubnetRequest wrapper for the GetSubnet operation +type GetSubnetRequest struct { + + // The OCID of the subnet. + SubnetId *string `mandatory:"true" contributesTo:"path" name:"subnetId"` +} + +func (request GetSubnetRequest) String() string { + return common.PointerString(request) +} + +// GetSubnetResponse wrapper for the GetSubnet operation +type GetSubnetResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Subnet instance + Subnet `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetSubnetResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_vcn_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_vcn_request_response.go new file mode 100644 index 000000000..5696f7c3f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_vcn_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. 
+// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetVcnRequest wrapper for the GetVcn operation +type GetVcnRequest struct { + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"path" name:"vcnId"` +} + +func (request GetVcnRequest) String() string { + return common.PointerString(request) +} + +// GetVcnResponse wrapper for the GetVcn operation +type GetVcnResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Vcn instance + Vcn `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetVcnResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_virtual_circuit_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_virtual_circuit_request_response.go new file mode 100644 index 000000000..85ac11259 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_virtual_circuit_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetVirtualCircuitRequest wrapper for the GetVirtualCircuit operation +type GetVirtualCircuitRequest struct { + + // The OCID of the virtual circuit. + VirtualCircuitId *string `mandatory:"true" contributesTo:"path" name:"virtualCircuitId"` +} + +func (request GetVirtualCircuitRequest) String() string { + return common.PointerString(request) +} + +// GetVirtualCircuitResponse wrapper for the GetVirtualCircuit operation +type GetVirtualCircuitResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VirtualCircuit instance + VirtualCircuit `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetVirtualCircuitResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_vnic_attachment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_vnic_attachment_request_response.go new file mode 100644 index 000000000..3c0e62d0e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_vnic_attachment_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetVnicAttachmentRequest wrapper for the GetVnicAttachment operation +type GetVnicAttachmentRequest struct { + + // The OCID of the VNIC attachment. 
+ VnicAttachmentId *string `mandatory:"true" contributesTo:"path" name:"vnicAttachmentId"` +} + +func (request GetVnicAttachmentRequest) String() string { + return common.PointerString(request) +} + +// GetVnicAttachmentResponse wrapper for the GetVnicAttachment operation +type GetVnicAttachmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VnicAttachment instance + VnicAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetVnicAttachmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_vnic_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_vnic_request_response.go new file mode 100644 index 000000000..6f52d96c5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_vnic_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetVnicRequest wrapper for the GetVnic operation +type GetVnicRequest struct { + + // The OCID of the VNIC. + VnicId *string `mandatory:"true" contributesTo:"path" name:"vnicId"` +} + +func (request GetVnicRequest) String() string { + return common.PointerString(request) +} + +// GetVnicResponse wrapper for the GetVnic operation +type GetVnicResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Vnic instance + Vnic `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetVnicResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_volume_attachment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_volume_attachment_request_response.go new file mode 100644 index 000000000..e9492439f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_volume_attachment_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetVolumeAttachmentRequest wrapper for the GetVolumeAttachment operation +type GetVolumeAttachmentRequest struct { + + // The OCID of the volume attachment. + VolumeAttachmentId *string `mandatory:"true" contributesTo:"path" name:"volumeAttachmentId"` +} + +func (request GetVolumeAttachmentRequest) String() string { + return common.PointerString(request) +} + +// GetVolumeAttachmentResponse wrapper for the GetVolumeAttachment operation +type GetVolumeAttachmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VolumeAttachment instance + VolumeAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. 
+ Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetVolumeAttachmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_volume_backup_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_volume_backup_request_response.go new file mode 100644 index 000000000..6ca44c977 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_volume_backup_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetVolumeBackupRequest wrapper for the GetVolumeBackup operation +type GetVolumeBackupRequest struct { + + // The OCID of the volume backup. + VolumeBackupId *string `mandatory:"true" contributesTo:"path" name:"volumeBackupId"` +} + +func (request GetVolumeBackupRequest) String() string { + return common.PointerString(request) +} + +// GetVolumeBackupResponse wrapper for the GetVolumeBackup operation +type GetVolumeBackupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VolumeBackup instance + VolumeBackup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetVolumeBackupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_volume_request_response.go new file mode 100644 index 000000000..57e71a0f2 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_volume_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetVolumeRequest wrapper for the GetVolume operation +type GetVolumeRequest struct { + + // The OCID of the volume. + VolumeId *string `mandatory:"true" contributesTo:"path" name:"volumeId"` +} + +func (request GetVolumeRequest) String() string { + return common.PointerString(request) +} + +// GetVolumeResponse wrapper for the GetVolume operation +type GetVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Volume instance + Volume `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/get_windows_instance_initial_credentials_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/get_windows_instance_initial_credentials_request_response.go new file mode 100644 index 000000000..98d138e2b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/get_windows_instance_initial_credentials_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetWindowsInstanceInitialCredentialsRequest wrapper for the GetWindowsInstanceInitialCredentials operation +type GetWindowsInstanceInitialCredentialsRequest struct { + + // The OCID of the instance. + InstanceId *string `mandatory:"true" contributesTo:"path" name:"instanceId"` +} + +func (request GetWindowsInstanceInitialCredentialsRequest) String() string { + return common.PointerString(request) +} + +// GetWindowsInstanceInitialCredentialsResponse wrapper for the GetWindowsInstanceInitialCredentials operation +type GetWindowsInstanceInitialCredentialsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The InstanceCredentials instance + InstanceCredentials `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetWindowsInstanceInitialCredentialsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/i_scsi_volume_attachment.go b/vendor/github.com/oracle/oci-go-sdk/core/i_scsi_volume_attachment.go new file mode 100644 index 000000000..107ab34f3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/i_scsi_volume_attachment.go @@ -0,0 +1,125 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// IScsiVolumeAttachment An ISCSI volume attachment. +type IScsiVolumeAttachment struct { + + // The Availability Domain of an instance. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the volume attachment. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the instance the volume is attached to. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // The date and time the volume was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The OCID of the volume. + VolumeId *string `mandatory:"true" json:"volumeId"` + + // The volume's iSCSI IP address. + // Example: `169.254.0.2` + Ipv4 *string `mandatory:"true" json:"ipv4"` + + // The target volume's iSCSI Qualified Name in the format defined by RFC 3720. 
+ // Example: `iqn.2015-12.us.oracle.com:456b0391-17b8-4122-bbf1-f85fc0bb97d9` + Iqn *string `mandatory:"true" json:"iqn"` + + // The volume's iSCSI port. + // Example: `3260` + Port *int `mandatory:"true" json:"port"` + + // A user-friendly name. Does not have to be unique, and it cannot be changed. + // Avoid entering confidential information. + // Example: `My volume attachment` + DisplayName *string `mandatory:"false" json:"displayName"` + + // The Challenge-Handshake-Authentication-Protocol (CHAP) secret valid for the associated CHAP user name. + // (Also called the "CHAP password".) + // Example: `d6866c0d-298b-48ba-95af-309b4faux45e` + ChapSecret *string `mandatory:"false" json:"chapSecret"` + + // The volume's system-generated Challenge-Handshake-Authentication-Protocol (CHAP) user name. + // Example: `ocid1.volume.oc1.phx.abyhqljrgvttnlx73nmrwfaux7kcvzfs3s66izvxf2h4lgvyndsdsnoiwr5q` + ChapUsername *string `mandatory:"false" json:"chapUsername"` + + // The current state of the volume attachment. + LifecycleState VolumeAttachmentLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` +} + +//GetAvailabilityDomain returns AvailabilityDomain +func (m IScsiVolumeAttachment) GetAvailabilityDomain() *string { + return m.AvailabilityDomain +} + +//GetCompartmentId returns CompartmentId +func (m IScsiVolumeAttachment) GetCompartmentId() *string { + return m.CompartmentId +} + +//GetDisplayName returns DisplayName +func (m IScsiVolumeAttachment) GetDisplayName() *string { + return m.DisplayName +} + +//GetId returns Id +func (m IScsiVolumeAttachment) GetId() *string { + return m.Id +} + +//GetInstanceId returns InstanceId +func (m IScsiVolumeAttachment) GetInstanceId() *string { + return m.InstanceId +} + +//GetLifecycleState returns LifecycleState +func (m IScsiVolumeAttachment) GetLifecycleState() VolumeAttachmentLifecycleStateEnum { + return m.LifecycleState +} + +//GetTimeCreated returns TimeCreated +func (m IScsiVolumeAttachment) GetTimeCreated() *common.SDKTime { + return m.TimeCreated +} + +//GetVolumeId returns VolumeId +func (m IScsiVolumeAttachment) GetVolumeId() *string { + return m.VolumeId +} + +func (m IScsiVolumeAttachment) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m IScsiVolumeAttachment) MarshalJSON() (buff []byte, e error) { + type MarshalTypeIScsiVolumeAttachment IScsiVolumeAttachment + s := struct { + DiscriminatorParam string `json:"attachmentType"` + MarshalTypeIScsiVolumeAttachment + }{ + "iscsi", + (MarshalTypeIScsiVolumeAttachment)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/icmp_options.go b/vendor/github.com/oracle/oci-go-sdk/core/icmp_options.go new file mode 100644 index 000000000..79b841e78 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/icmp_options.go @@ -0,0 +1,33 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// IcmpOptions Optional object to specify a particular ICMP type and code. If you specify ICMP as the protocol +// but do not provide this object, then all ICMP types and codes are allowed. If you do provide +// this object, the type is required and the code is optional. 
+// See ICMP Parameters (http://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml) +// for allowed values. To enable MTU negotiation for ingress internet traffic, make sure to allow +// type 3 ("Destination Unreachable") code 4 ("Fragmentation Needed and Don't Fragment was Set"). +// If you need to specify multiple codes for a single type, create a separate security list rule for each. +type IcmpOptions struct { + + // The ICMP type. + Type *int `mandatory:"true" json:"type"` + + // The ICMP code (optional). + Code *int `mandatory:"false" json:"code"` +} + +func (m IcmpOptions) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/image.go b/vendor/github.com/oracle/oci-go-sdk/core/image.go new file mode 100644 index 000000000..250a1af1c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/image.go @@ -0,0 +1,90 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Image A boot disk image for launching an instance. For more information, see +// Overview of the Compute Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Concepts/computeoverview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Image struct { + + // The OCID of the compartment containing the instance you want to use as the basis for the image. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // Whether instances launched with this image can be used to create new images. + // For example, you cannot create an image of an Oracle Database instance. + // Example: `true` + CreateImageAllowed *bool `mandatory:"true" json:"createImageAllowed"` + + // The OCID of the image. + Id *string `mandatory:"true" json:"id"` + + LifecycleState ImageLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The image's operating system. + // Example: `Oracle Linux` + OperatingSystem *string `mandatory:"true" json:"operatingSystem"` + + // The image's operating system version. + // Example: `7.2` + OperatingSystemVersion *string `mandatory:"true" json:"operatingSystemVersion"` + + // The date and time the image was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The OCID of the image originally used to launch the instance. + BaseImageId *string `mandatory:"false" json:"baseImageId"` + + // A user-friendly name for the image. It does not have to be unique, and it's changeable. + // Avoid entering confidential information. + // You cannot use an Oracle-provided image name as a custom image name. 
+ // Example: `My custom Oracle Linux image` + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m Image) String() string { + return common.PointerString(m) +} + +// ImageLifecycleStateEnum Enum with underlying type: string +type ImageLifecycleStateEnum string + +// Set of constants representing the allowable values for ImageLifecycleState +const ( + ImageLifecycleStateProvisioning ImageLifecycleStateEnum = "PROVISIONING" + ImageLifecycleStateImporting ImageLifecycleStateEnum = "IMPORTING" + ImageLifecycleStateAvailable ImageLifecycleStateEnum = "AVAILABLE" + ImageLifecycleStateExporting ImageLifecycleStateEnum = "EXPORTING" + ImageLifecycleStateDisabled ImageLifecycleStateEnum = "DISABLED" + ImageLifecycleStateDeleted ImageLifecycleStateEnum = "DELETED" +) + +var mappingImageLifecycleState = map[string]ImageLifecycleStateEnum{ + "PROVISIONING": ImageLifecycleStateProvisioning, + "IMPORTING": ImageLifecycleStateImporting, + "AVAILABLE": ImageLifecycleStateAvailable, + "EXPORTING": ImageLifecycleStateExporting, + "DISABLED": ImageLifecycleStateDisabled, + "DELETED": ImageLifecycleStateDeleted, +} + +// GetImageLifecycleStateEnumValues Enumerates the set of values for ImageLifecycleState +func GetImageLifecycleStateEnumValues() []ImageLifecycleStateEnum { + values := make([]ImageLifecycleStateEnum, 0) + for _, v := range mappingImageLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/image_source_details.go b/vendor/github.com/oracle/oci-go-sdk/core/image_source_details.go new file mode 100644 index 000000000..ae8d32acb --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/image_source_details.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// ImageSourceDetails The representation of ImageSourceDetails +type ImageSourceDetails interface { +} + +type imagesourcedetails struct { + JsonData []byte + SourceType string `json:"sourceType"` +} + +// UnmarshalJSON unmarshals json +func (m *imagesourcedetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalerimagesourcedetails imagesourcedetails + s := struct { + Model Unmarshalerimagesourcedetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.SourceType = s.Model.SourceType + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *imagesourcedetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.SourceType { + case "objectStorageTuple": + mm := ImageSourceViaObjectStorageTupleDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + case "objectStorageUri": + mm := ImageSourceViaObjectStorageUriDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +func (m imagesourcedetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/image_source_via_object_storage_tuple_details.go b/vendor/github.com/oracle/oci-go-sdk/core/image_source_via_object_storage_tuple_details.go new file mode 100644 index 000000000..61eddb488 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/image_source_via_object_storage_tuple_details.go @@ -0,0 +1,45 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// ImageSourceViaObjectStorageTupleDetails The representation of ImageSourceViaObjectStorageTupleDetails +type ImageSourceViaObjectStorageTupleDetails struct { + + // The Object Storage bucket for the image. + BucketName *string `mandatory:"true" json:"bucketName"` + + // The Object Storage namespace for the image. + NamespaceName *string `mandatory:"true" json:"namespaceName"` + + // The Object Storage name for the image. + ObjectName *string `mandatory:"true" json:"objectName"` +} + +func (m ImageSourceViaObjectStorageTupleDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m ImageSourceViaObjectStorageTupleDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeImageSourceViaObjectStorageTupleDetails ImageSourceViaObjectStorageTupleDetails + s := struct { + DiscriminatorParam string `json:"sourceType"` + MarshalTypeImageSourceViaObjectStorageTupleDetails + }{ + "objectStorageTuple", + (MarshalTypeImageSourceViaObjectStorageTupleDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/image_source_via_object_storage_uri_details.go b/vendor/github.com/oracle/oci-go-sdk/core/image_source_via_object_storage_uri_details.go new file mode 100644 index 000000000..e65376d4d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/image_source_via_object_storage_uri_details.go @@ -0,0 +1,39 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// ImageSourceViaObjectStorageUriDetails The representation of ImageSourceViaObjectStorageUriDetails +type ImageSourceViaObjectStorageUriDetails struct { + + // The Object Storage URL for the image. + SourceUri *string `mandatory:"true" json:"sourceUri"` +} + +func (m ImageSourceViaObjectStorageUriDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m ImageSourceViaObjectStorageUriDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeImageSourceViaObjectStorageUriDetails ImageSourceViaObjectStorageUriDetails + s := struct { + DiscriminatorParam string `json:"sourceType"` + MarshalTypeImageSourceViaObjectStorageUriDetails + }{ + "objectStorageUri", + (MarshalTypeImageSourceViaObjectStorageUriDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/ingress_security_rule.go b/vendor/github.com/oracle/oci-go-sdk/core/ingress_security_rule.go new file mode 100644 index 000000000..fbd7d7561 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/ingress_security_rule.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// IngressSecurityRule A rule for allowing inbound IP packets. +type IngressSecurityRule struct { + + // The transport protocol. Specify either `all` or an IPv4 protocol number as + // defined in + // Protocol Numbers (http://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). + // Options are supported only for ICMP ("1"), TCP ("6"), and UDP ("17"). + Protocol *string `mandatory:"true" json:"protocol"` + + // The source CIDR block for the ingress rule. This is the range of IP addresses that a + // packet coming into the instance can come from. + Source *string `mandatory:"true" json:"source"` + + // Optional and valid only for ICMP. Use to specify a particular ICMP type and code + // as defined in + // ICMP Parameters (http://www.iana.org/assignments/icmp-parameters/icmp-parameters.xhtml). + // If you specify ICMP as the protocol but omit this object, then all ICMP types and + // codes are allowed. If you do provide this object, the type is required and the code is optional. + // To enable MTU negotiation for ingress internet traffic, make sure to allow type 3 ("Destination + // Unreachable") code 4 ("Fragmentation Needed and Don't Fragment was Set"). If you need to specify + // multiple codes for a single type, create a separate security list rule for each. + IcmpOptions *IcmpOptions `mandatory:"false" json:"icmpOptions"` + + // A stateless rule allows traffic in one direction. Remember to add a corresponding + // stateless rule in the other direction if you need to support bidirectional traffic. For + // example, if ingress traffic allows TCP destination port 80, there should be an egress + // rule to allow TCP source port 80. Defaults to false, which means the rule is stateful + // and a corresponding rule is not necessary for bidirectional traffic. + IsStateless *bool `mandatory:"false" json:"isStateless"` + + // Optional and valid only for TCP. 
Use to specify particular destination ports for TCP rules. + // If you specify TCP as the protocol but omit this object, then all destination ports are allowed. + TcpOptions *TcpOptions `mandatory:"false" json:"tcpOptions"` + + // Optional and valid only for UDP. Use to specify particular destination ports for UDP rules. + // If you specify UDP as the protocol but omit this object, then all destination ports are allowed. + UdpOptions *UdpOptions `mandatory:"false" json:"udpOptions"` +} + +func (m IngressSecurityRule) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/instance.go b/vendor/github.com/oracle/oci-go-sdk/core/instance.go new file mode 100644 index 000000000..afad7986f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/instance.go @@ -0,0 +1,170 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// Instance A compute host. The image used to launch the instance determines its operating system and other +// software. The shape specified during the launch process determines the number of CPUs and memory +// allocated to the instance. For more information, see +// Overview of the Compute Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Concepts/computeoverview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Instance struct { + + // The Availability Domain the instance is running in. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment that contains the instance. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the instance. + Id *string `mandatory:"true" json:"id"` + + // The current state of the instance. + LifecycleState InstanceLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The region that contains the Availability Domain the instance is running in. + // Example: `phx` + Region *string `mandatory:"true" json:"region"` + + // The shape of the instance. The shape determines the number of CPUs and the amount of memory + // allocated to the instance. You can enumerate all available shapes by calling + // ListShapes. + Shape *string `mandatory:"true" json:"shape"` + + // The date and time the instance was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + // Example: `My bare metal instance` + DisplayName *string `mandatory:"false" json:"displayName"` + + // Additional metadata key/value pairs that you provide. They serve a similar purpose and functionality from fields in the 'metadata' object. + // They are distinguished from 'metadata' fields in that these can be nested JSON objects (whereas 'metadata' fields are string/string maps only). 
+ // If you don't need nested metadata values, it is strongly advised to avoid using this object and use the Metadata object instead. + ExtendedMetadata map[string]interface{} `mandatory:"false" json:"extendedMetadata"` + + // Deprecated. Use `sourceDetails` instead. + ImageId *string `mandatory:"false" json:"imageId"` + + // When a bare metal or virtual machine + // instance boots, the iPXE firmware that runs on the instance is + // configured to run an iPXE script to continue the boot process. + // If you want more control over the boot process, you can provide + // your own custom iPXE script that will run when the instance boots; + // however, you should be aware that the same iPXE script will run + // every time an instance boots; not only after the initial + // LaunchInstance call. + // The default iPXE script connects to the instance's local boot + // volume over iSCSI and performs a network boot. If you use a custom iPXE + // script and want to network-boot from the instance's local boot volume + // over iSCSI the same way as the default iPXE script, you should use the + // following iSCSI IP address: 169.254.0.2, and boot volume IQN: + // iqn.2015-02.oracle.boot. + // For more information about the Bring Your Own Image feature of + // Oracle Cloud Infrastructure, see + // Bring Your Own Image (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/References/bringyourownimage.htm). + // For more information about iPXE, see http://ipxe.org. + IpxeScript *string `mandatory:"false" json:"ipxeScript"` + + // Custom metadata that you provide. + Metadata map[string]string `mandatory:"false" json:"metadata"` + + // Details for creating an instance + SourceDetails InstanceSourceDetails `mandatory:"false" json:"sourceDetails"` +} + +func (m Instance) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *Instance) UnmarshalJSON(data []byte) (e error) { + model := struct { + DisplayName *string `json:"displayName"` + ExtendedMetadata map[string]interface{} `json:"extendedMetadata"` + ImageId *string `json:"imageId"` + IpxeScript *string `json:"ipxeScript"` + Metadata map[string]string `json:"metadata"` + SourceDetails instancesourcedetails `json:"sourceDetails"` + AvailabilityDomain *string `json:"availabilityDomain"` + CompartmentId *string `json:"compartmentId"` + Id *string `json:"id"` + LifecycleState InstanceLifecycleStateEnum `json:"lifecycleState"` + Region *string `json:"region"` + Shape *string `json:"shape"` + TimeCreated *common.SDKTime `json:"timeCreated"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.DisplayName = model.DisplayName + m.ExtendedMetadata = model.ExtendedMetadata + m.ImageId = model.ImageId + m.IpxeScript = model.IpxeScript + m.Metadata = model.Metadata + nn, e := model.SourceDetails.UnmarshalPolymorphicJSON(model.SourceDetails.JsonData) + if e != nil { + return + } + m.SourceDetails = nn + m.AvailabilityDomain = model.AvailabilityDomain + m.CompartmentId = model.CompartmentId + m.Id = model.Id + m.LifecycleState = model.LifecycleState + m.Region = model.Region + m.Shape = model.Shape + m.TimeCreated = model.TimeCreated + return +} + +// InstanceLifecycleStateEnum Enum with underlying type: string +type InstanceLifecycleStateEnum string + +// Set of constants representing the allowable values for InstanceLifecycleState +const ( + InstanceLifecycleStateProvisioning InstanceLifecycleStateEnum = "PROVISIONING" + InstanceLifecycleStateRunning InstanceLifecycleStateEnum = "RUNNING" + 
InstanceLifecycleStateStarting InstanceLifecycleStateEnum = "STARTING" + InstanceLifecycleStateStopping InstanceLifecycleStateEnum = "STOPPING" + InstanceLifecycleStateStopped InstanceLifecycleStateEnum = "STOPPED" + InstanceLifecycleStateCreatingImage InstanceLifecycleStateEnum = "CREATING_IMAGE" + InstanceLifecycleStateTerminating InstanceLifecycleStateEnum = "TERMINATING" + InstanceLifecycleStateTerminated InstanceLifecycleStateEnum = "TERMINATED" +) + +var mappingInstanceLifecycleState = map[string]InstanceLifecycleStateEnum{ + "PROVISIONING": InstanceLifecycleStateProvisioning, + "RUNNING": InstanceLifecycleStateRunning, + "STARTING": InstanceLifecycleStateStarting, + "STOPPING": InstanceLifecycleStateStopping, + "STOPPED": InstanceLifecycleStateStopped, + "CREATING_IMAGE": InstanceLifecycleStateCreatingImage, + "TERMINATING": InstanceLifecycleStateTerminating, + "TERMINATED": InstanceLifecycleStateTerminated, +} + +// GetInstanceLifecycleStateEnumValues Enumerates the set of values for InstanceLifecycleState +func GetInstanceLifecycleStateEnumValues() []InstanceLifecycleStateEnum { + values := make([]InstanceLifecycleStateEnum, 0) + for _, v := range mappingInstanceLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/instance_action_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/instance_action_request_response.go new file mode 100644 index 000000000..37faf1966 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/instance_action_request_response.go @@ -0,0 +1,83 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// InstanceActionRequest wrapper for the InstanceAction operation +type InstanceActionRequest struct { + + // The OCID of the instance. + InstanceId *string `mandatory:"true" contributesTo:"path" name:"instanceId"` + + // The action to perform on the instance. + Action InstanceActionActionEnum `mandatory:"true" contributesTo:"query" name:"action" omitEmpty:"true"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request InstanceActionRequest) String() string { + return common.PointerString(request) +} + +// InstanceActionResponse wrapper for the InstanceAction operation +type InstanceActionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Instance instance + Instance `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. 
+ Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response InstanceActionResponse) String() string { + return common.PointerString(response) +} + +// InstanceActionActionEnum Enum with underlying type: string +type InstanceActionActionEnum string + +// Set of constants representing the allowable values for InstanceActionAction +const ( + InstanceActionActionStop InstanceActionActionEnum = "STOP" + InstanceActionActionStart InstanceActionActionEnum = "START" + InstanceActionActionSoftreset InstanceActionActionEnum = "SOFTRESET" + InstanceActionActionReset InstanceActionActionEnum = "RESET" +) + +var mappingInstanceActionAction = map[string]InstanceActionActionEnum{ + "STOP": InstanceActionActionStop, + "START": InstanceActionActionStart, + "SOFTRESET": InstanceActionActionSoftreset, + "RESET": InstanceActionActionReset, +} + +// GetInstanceActionActionEnumValues Enumerates the set of values for InstanceActionAction +func GetInstanceActionActionEnumValues() []InstanceActionActionEnum { + values := make([]InstanceActionActionEnum, 0) + for _, v := range mappingInstanceActionAction { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/instance_console_connection.go b/vendor/github.com/oracle/oci-go-sdk/core/instance_console_connection.go new file mode 100644 index 000000000..3dd2f61a4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/instance_console_connection.go @@ -0,0 +1,71 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// InstanceConsoleConnection The `InstanceConsoleConnection` API provides you with console access to virtual machine (VM) instances, +// enabling you to troubleshoot malfunctioning instances remotely. +// For more information about console access, see +// Accessing the Console (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/References/serialconsole.htm). +type InstanceConsoleConnection struct { + + // The OCID of the compartment to contain the console connection. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // The SSH connection string for the console connection. + ConnectionString *string `mandatory:"false" json:"connectionString"` + + // The SSH public key fingerprint for the console connection. + Fingerprint *string `mandatory:"false" json:"fingerprint"` + + // The OCID of the console connection. + Id *string `mandatory:"false" json:"id"` + + // The OCID of the instance the console connection connects to. + InstanceId *string `mandatory:"false" json:"instanceId"` + + // The current state of the console connection. 
+ LifecycleState InstanceConsoleConnectionLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` +} + +func (m InstanceConsoleConnection) String() string { + return common.PointerString(m) +} + +// InstanceConsoleConnectionLifecycleStateEnum Enum with underlying type: string +type InstanceConsoleConnectionLifecycleStateEnum string + +// Set of constants representing the allowable values for InstanceConsoleConnectionLifecycleState +const ( + InstanceConsoleConnectionLifecycleStateActive InstanceConsoleConnectionLifecycleStateEnum = "ACTIVE" + InstanceConsoleConnectionLifecycleStateCreating InstanceConsoleConnectionLifecycleStateEnum = "CREATING" + InstanceConsoleConnectionLifecycleStateDeleted InstanceConsoleConnectionLifecycleStateEnum = "DELETED" + InstanceConsoleConnectionLifecycleStateDeleting InstanceConsoleConnectionLifecycleStateEnum = "DELETING" + InstanceConsoleConnectionLifecycleStateFailed InstanceConsoleConnectionLifecycleStateEnum = "FAILED" +) + +var mappingInstanceConsoleConnectionLifecycleState = map[string]InstanceConsoleConnectionLifecycleStateEnum{ + "ACTIVE": InstanceConsoleConnectionLifecycleStateActive, + "CREATING": InstanceConsoleConnectionLifecycleStateCreating, + "DELETED": InstanceConsoleConnectionLifecycleStateDeleted, + "DELETING": InstanceConsoleConnectionLifecycleStateDeleting, + "FAILED": InstanceConsoleConnectionLifecycleStateFailed, +} + +// GetInstanceConsoleConnectionLifecycleStateEnumValues Enumerates the set of values for InstanceConsoleConnectionLifecycleState +func GetInstanceConsoleConnectionLifecycleStateEnumValues() []InstanceConsoleConnectionLifecycleStateEnum { + values := make([]InstanceConsoleConnectionLifecycleStateEnum, 0) + for _, v := range mappingInstanceConsoleConnectionLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/instance_credentials.go b/vendor/github.com/oracle/oci-go-sdk/core/instance_credentials.go new file mode 100644 index 000000000..62f78e935 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/instance_credentials.go @@ -0,0 +1,27 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// InstanceCredentials The credentials for a particular instance. +type InstanceCredentials struct { + + // The password for the username. + Password *string `mandatory:"true" json:"password"` + + // The username. + Username *string `mandatory:"true" json:"username"` +} + +func (m InstanceCredentials) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/instance_source_details.go b/vendor/github.com/oracle/oci-go-sdk/core/instance_source_details.go new file mode 100644 index 000000000..c9dffbd34 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/instance_source_details.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// InstanceSourceDetails The representation of InstanceSourceDetails +type InstanceSourceDetails interface { +} + +type instancesourcedetails struct { + JsonData []byte + SourceType string `json:"sourceType"` +} + +// UnmarshalJSON unmarshals json +func (m *instancesourcedetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalerinstancesourcedetails instancesourcedetails + s := struct { + Model Unmarshalerinstancesourcedetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.SourceType = s.Model.SourceType + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *instancesourcedetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.SourceType { + case "image": + mm := InstanceSourceViaImageDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + case "bootVolume": + mm := InstanceSourceViaBootVolumeDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +func (m instancesourcedetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/instance_source_via_boot_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/instance_source_via_boot_volume_details.go new file mode 100644 index 000000000..9dc4b2326 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/instance_source_via_boot_volume_details.go @@ -0,0 +1,39 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// InstanceSourceViaBootVolumeDetails The representation of InstanceSourceViaBootVolumeDetails +type InstanceSourceViaBootVolumeDetails struct { + + // The OCID of the boot volume used to boot the instance. + BootVolumeId *string `mandatory:"true" json:"bootVolumeId"` +} + +func (m InstanceSourceViaBootVolumeDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m InstanceSourceViaBootVolumeDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeInstanceSourceViaBootVolumeDetails InstanceSourceViaBootVolumeDetails + s := struct { + DiscriminatorParam string `json:"sourceType"` + MarshalTypeInstanceSourceViaBootVolumeDetails + }{ + "bootVolume", + (MarshalTypeInstanceSourceViaBootVolumeDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/instance_source_via_image_details.go b/vendor/github.com/oracle/oci-go-sdk/core/instance_source_via_image_details.go new file mode 100644 index 000000000..3327d9ecf --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/instance_source_via_image_details.go @@ -0,0 +1,39 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// InstanceSourceViaImageDetails The representation of InstanceSourceViaImageDetails +type InstanceSourceViaImageDetails struct { + + // The OCID of the image used to boot the instance. 
+ ImageId *string `mandatory:"true" json:"imageId"` +} + +func (m InstanceSourceViaImageDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m InstanceSourceViaImageDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeInstanceSourceViaImageDetails InstanceSourceViaImageDetails + s := struct { + DiscriminatorParam string `json:"sourceType"` + MarshalTypeInstanceSourceViaImageDetails + }{ + "image", + (MarshalTypeInstanceSourceViaImageDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/internet_gateway.go b/vendor/github.com/oracle/oci-go-sdk/core/internet_gateway.go new file mode 100644 index 000000000..a9b8f1c0b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/internet_gateway.go @@ -0,0 +1,77 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// InternetGateway Represents a router that connects the edge of a VCN with the Internet. For an example scenario +// that uses an Internet Gateway, see +// Typical Networking Service Scenarios (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm#scenarios). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type InternetGateway struct { + + // The OCID of the compartment containing the Internet Gateway. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The Internet Gateway's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The Internet Gateway's current state. + LifecycleState InternetGatewayLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the VCN the Internet Gateway belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Whether the gateway is enabled. When the gateway is disabled, traffic is not + // routed to/from the Internet, regardless of route rules. + IsEnabled *bool `mandatory:"false" json:"isEnabled"` + + // The date and time the Internet Gateway was created, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m InternetGateway) String() string { + return common.PointerString(m) +} + +// InternetGatewayLifecycleStateEnum Enum with underlying type: string +type InternetGatewayLifecycleStateEnum string + +// Set of constants representing the allowable values for InternetGatewayLifecycleState +const ( + InternetGatewayLifecycleStateProvisioning InternetGatewayLifecycleStateEnum = "PROVISIONING" + InternetGatewayLifecycleStateAvailable InternetGatewayLifecycleStateEnum = "AVAILABLE" + InternetGatewayLifecycleStateTerminating InternetGatewayLifecycleStateEnum = "TERMINATING" + InternetGatewayLifecycleStateTerminated InternetGatewayLifecycleStateEnum = "TERMINATED" +) + +var mappingInternetGatewayLifecycleState = map[string]InternetGatewayLifecycleStateEnum{ + "PROVISIONING": InternetGatewayLifecycleStateProvisioning, + "AVAILABLE": InternetGatewayLifecycleStateAvailable, + "TERMINATING": InternetGatewayLifecycleStateTerminating, + "TERMINATED": InternetGatewayLifecycleStateTerminated, +} + +// GetInternetGatewayLifecycleStateEnumValues Enumerates the set of values for InternetGatewayLifecycleState +func GetInternetGatewayLifecycleStateEnumValues() []InternetGatewayLifecycleStateEnum { + values := make([]InternetGatewayLifecycleStateEnum, 0) + for _, v := range mappingInternetGatewayLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection.go b/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection.go new file mode 100644 index 000000000..c57e74559 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection.go @@ -0,0 +1,82 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// IpSecConnection A connection between a DRG and CPE. This connection consists of multiple IPSec +// tunnels. Creating this connection is one of the steps required when setting up +// an IPSec VPN. For more information, see +// Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type IpSecConnection struct { + + // The OCID of the compartment containing the IPSec connection. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the CPE. + CpeId *string `mandatory:"true" json:"cpeId"` + + // The OCID of the DRG. + DrgId *string `mandatory:"true" json:"drgId"` + + // The IPSec connection's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The IPSec connection's current state. + LifecycleState IpSecConnectionLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // Static routes to the CPE. At least one route must be included. The CIDR must not be a + // multicast address or class E address. + // Example: `10.0.1.0/24` + StaticRoutes []string `mandatory:"true" json:"staticRoutes"` + + // A user-friendly name. 
Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The date and time the IPSec connection was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m IpSecConnection) String() string { + return common.PointerString(m) +} + +// IpSecConnectionLifecycleStateEnum Enum with underlying type: string +type IpSecConnectionLifecycleStateEnum string + +// Set of constants representing the allowable values for IpSecConnectionLifecycleState +const ( + IpSecConnectionLifecycleStateProvisioning IpSecConnectionLifecycleStateEnum = "PROVISIONING" + IpSecConnectionLifecycleStateAvailable IpSecConnectionLifecycleStateEnum = "AVAILABLE" + IpSecConnectionLifecycleStateTerminating IpSecConnectionLifecycleStateEnum = "TERMINATING" + IpSecConnectionLifecycleStateTerminated IpSecConnectionLifecycleStateEnum = "TERMINATED" +) + +var mappingIpSecConnectionLifecycleState = map[string]IpSecConnectionLifecycleStateEnum{ + "PROVISIONING": IpSecConnectionLifecycleStateProvisioning, + "AVAILABLE": IpSecConnectionLifecycleStateAvailable, + "TERMINATING": IpSecConnectionLifecycleStateTerminating, + "TERMINATED": IpSecConnectionLifecycleStateTerminated, +} + +// GetIpSecConnectionLifecycleStateEnumValues Enumerates the set of values for IpSecConnectionLifecycleState +func GetIpSecConnectionLifecycleStateEnumValues() []IpSecConnectionLifecycleStateEnum { + values := make([]IpSecConnectionLifecycleStateEnum, 0) + for _, v := range mappingIpSecConnectionLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection_device_config.go b/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection_device_config.go new file mode 100644 index 000000000..5572af637 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection_device_config.go @@ -0,0 +1,33 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// IpSecConnectionDeviceConfig Information about the IPSecConnection device configuration. +type IpSecConnectionDeviceConfig struct { + + // The OCID of the compartment containing the IPSec connection. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The IPSec connection's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The date and time the IPSec connection was created. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // Two TunnelConfig objects. + Tunnels []TunnelConfig `mandatory:"false" json:"tunnels"` +} + +func (m IpSecConnectionDeviceConfig) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection_device_status.go b/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection_device_status.go new file mode 100644 index 000000000..5f5a4d3c9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/ip_sec_connection_device_status.go @@ -0,0 +1,34 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// IpSecConnectionDeviceStatus Status of the IPSec connection. +type IpSecConnectionDeviceStatus struct { + + // The OCID of the compartment containing the IPSec connection. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The IPSec connection's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The date and time the IPSec connection was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // Two TunnelStatus objects. + Tunnels []TunnelStatus `mandatory:"false" json:"tunnels"` +} + +func (m IpSecConnectionDeviceStatus) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/launch_instance_details.go b/vendor/github.com/oracle/oci-go-sdk/core/launch_instance_details.go new file mode 100644 index 000000000..c40c778ca --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/launch_instance_details.go @@ -0,0 +1,172 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// LaunchInstanceDetails Instance launch details. +// Use the sourceDetails parameter to specify whether a boot volume should be used for a new instance launch. +type LaunchInstanceDetails struct { + + // The Availability Domain of the instance. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The shape of an instance. The shape determines the number of CPUs, amount of memory, + // and other resources allocated to the instance. + // You can enumerate all available shapes by calling ListShapes. + Shape *string `mandatory:"true" json:"shape"` + + // Details for the primary VNIC, which is automatically created and attached when + // the instance is launched. + CreateVnicDetails *CreateVnicDetails `mandatory:"false" json:"createVnicDetails"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + // Example: `My bare metal instance` + DisplayName *string `mandatory:"false" json:"displayName"` + + // Additional metadata key/value pairs that you provide. They serve a similar purpose and functionality from fields in the 'metadata' object. + // They are distinguished from 'metadata' fields in that these can be nested JSON objects (whereas 'metadata' fields are string/string maps only). + // If you don't need nested metadata values, it is strongly advised to avoid using this object and use the Metadata object instead. + ExtendedMetadata map[string]interface{} `mandatory:"false" json:"extendedMetadata"` + + // Deprecated. Instead Use `hostnameLabel` in + // CreateVnicDetails. + // If you provide both, the values must match. + HostnameLabel *string `mandatory:"false" json:"hostnameLabel"` + + // Deprecated. Use `sourceDetails` with InstanceSourceViaImageDetails + // source type instead. If you specify values for both, the values must match. 
+ ImageId *string `mandatory:"false" json:"imageId"` + + // This is an advanced option. + // When a bare metal or virtual machine + // instance boots, the iPXE firmware that runs on the instance is + // configured to run an iPXE script to continue the boot process. + // If you want more control over the boot process, you can provide + // your own custom iPXE script that will run when the instance boots; + // however, you should be aware that the same iPXE script will run + // every time an instance boots; not only after the initial + // LaunchInstance call. + // The default iPXE script connects to the instance's local boot + // volume over iSCSI and performs a network boot. If you use a custom iPXE + // script and want to network-boot from the instance's local boot volume + // over iSCSI the same way as the default iPXE script, you should use the + // following iSCSI IP address: 169.254.0.2, and boot volume IQN: + // iqn.2015-02.oracle.boot. + // For more information about the Bring Your Own Image feature of + // Oracle Cloud Infrastructure, see + // Bring Your Own Image (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/References/bringyourownimage.htm). + // For more information about iPXE, see http://ipxe.org. + IpxeScript *string `mandatory:"false" json:"ipxeScript"` + + // Custom metadata key/value pairs that you provide, such as the SSH public key + // required to connect to the instance. + // A metadata service runs on every launched instance. The service is an HTTP + // endpoint listening on 169.254.169.254. You can use the service to: + // * Provide information to Cloud-Init (https://cloudinit.readthedocs.org/en/latest/) + // to be used for various system initialization tasks. + // * Get information about the instance, including the custom metadata that you + // provide when you launch the instance. + // **Providing Cloud-Init Metadata** + // You can use the following metadata key names to provide information to + // Cloud-Init: + // **"ssh_authorized_keys"** - Provide one or more public SSH keys to be + // included in the `~/.ssh/authorized_keys` file for the default user on the + // instance. Use a newline character to separate multiple keys. The SSH + // keys must be in the format necessary for the `authorized_keys` file, as shown + // in the example below. + // **"user_data"** - Provide your own base64-encoded data to be used by + // Cloud-Init to run custom scripts or provide custom Cloud-Init configuration. For + // information about how to take advantage of user data, see the + // Cloud-Init Documentation (http://cloudinit.readthedocs.org/en/latest/topics/format.html). + // **Note:** Cloud-Init does not pull this data from the `http://169.254.169.254/opc/v1/instance/metadata/` + // path. When the instance launches and either of these keys are provided, the key values are formatted as + // OpenStack metadata and copied to the following locations, which are recognized by Cloud-Init: + // `http://169.254.169.254/openstack/latest/meta_data.json` - This JSON blob + // contains, among other things, the SSH keys that you provided for + // **"ssh_authorized_keys"**. + // `http://169.254.169.254/openstack/latest/user_data` - Contains the + // base64-decoded data that you provided for **"user_data"**. 
+ // **Metadata Example** + // "metadata" : { + // "quake_bot_level" : "Severe", + // "ssh_authorized_keys" : "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCZ06fccNTQfq+xubFlJ5ZR3kt+uzspdH9tXL+lAejSM1NXM+CFZev7MIxfEjas06y80ZBZ7DUTQO0GxJPeD8NCOb1VorF8M4xuLwrmzRtkoZzU16umt4y1W0Q4ifdp3IiiU0U8/WxczSXcUVZOLqkz5dc6oMHdMVpkimietWzGZ4LBBsH/LjEVY7E0V+a0sNchlVDIZcm7ErReBLcdTGDq0uLBiuChyl6RUkX1PNhusquTGwK7zc8OBXkRuubn5UKXhI3Ul9Nyk4XESkVWIGNKmw8mSpoJSjR8P9ZjRmcZVo8S+x4KVPMZKQEor== ryan.smith@company.com + // ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAzJSAtwEPoB3Jmr58IXrDGzLuDYkWAYg8AsLYlo6JZvKpjY1xednIcfEVQJm4T2DhVmdWhRrwQ8DmayVZvBkLt+zs2LdoAJEVimKwXcJFD/7wtH8Lnk17HiglbbbNXsemjDY0hea4JUE5CfvkIdZBITuMrfqSmA4n3VNoorXYdvtTMoGG8fxMub46RPtuxtqi9bG9Zqenordkg5FJt2mVNfQRqf83CWojcOkklUWq4CjyxaeLf5i9gv1fRoBo4QhiA8I6NCSppO8GnoV/6Ox6TNoh9BiifqGKC9VGYuC89RvUajRBTZSK2TK4DPfaT+2R+slPsFrwiT/oPEhhEK1S5Q== rsa-key-20160227", + // "user_data" : "SWYgeW91IGNhbiBzZWUgdGhpcywgdGhlbiBpdCB3b3JrZWQgbWF5YmUuCg==" + // } + // **Getting Metadata on the Instance** + // To get information about your instance, connect to the instance using SSH and issue any of the + // following GET requests: + // curl http://169.254.169.254/opc/v1/instance/ + // curl http://169.254.169.254/opc/v1/instance/metadata/ + // curl http://169.254.169.254/opc/v1/instance/metadata/ + // You'll get back a response that includes all the instance information; only the metadata information; or + // the metadata information for the specified key name, respectively. + Metadata map[string]string `mandatory:"false" json:"metadata"` + + // Details for creating an instance. + SourceDetails InstanceSourceDetails `mandatory:"false" json:"sourceDetails"` + + // Deprecated. Instead use `subnetId` in + // CreateVnicDetails. + // At least one of them is required; if you provide both, the values must match. 
+ SubnetId *string `mandatory:"false" json:"subnetId"` +} + +func (m LaunchInstanceDetails) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *LaunchInstanceDetails) UnmarshalJSON(data []byte) (e error) { + model := struct { + CreateVnicDetails *CreateVnicDetails `json:"createVnicDetails"` + DisplayName *string `json:"displayName"` + ExtendedMetadata map[string]interface{} `json:"extendedMetadata"` + HostnameLabel *string `json:"hostnameLabel"` + ImageId *string `json:"imageId"` + IpxeScript *string `json:"ipxeScript"` + Metadata map[string]string `json:"metadata"` + SourceDetails instancesourcedetails `json:"sourceDetails"` + SubnetId *string `json:"subnetId"` + AvailabilityDomain *string `json:"availabilityDomain"` + CompartmentId *string `json:"compartmentId"` + Shape *string `json:"shape"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.CreateVnicDetails = model.CreateVnicDetails + m.DisplayName = model.DisplayName + m.ExtendedMetadata = model.ExtendedMetadata + m.HostnameLabel = model.HostnameLabel + m.ImageId = model.ImageId + m.IpxeScript = model.IpxeScript + m.Metadata = model.Metadata + nn, e := model.SourceDetails.UnmarshalPolymorphicJSON(model.SourceDetails.JsonData) + if e != nil { + return + } + m.SourceDetails = nn + m.SubnetId = model.SubnetId + m.AvailabilityDomain = model.AvailabilityDomain + m.CompartmentId = model.CompartmentId + m.Shape = model.Shape + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/launch_instance_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/launch_instance_request_response.go new file mode 100644 index 000000000..dfccbc48b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/launch_instance_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// LaunchInstanceRequest wrapper for the LaunchInstance operation +type LaunchInstanceRequest struct { + + // Instance details + LaunchInstanceDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request LaunchInstanceRequest) String() string { + return common.PointerString(request) +} + +// LaunchInstanceResponse wrapper for the LaunchInstance operation +type LaunchInstanceResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Instance instance + Instance `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response LaunchInstanceResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/letter_of_authority.go b/vendor/github.com/oracle/oci-go-sdk/core/letter_of_authority.go new file mode 100644 index 000000000..3b24abade --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/letter_of_authority.go @@ -0,0 +1,67 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LetterOfAuthority The Letter of Authority for the cross-connect. You must submit this letter when +// requesting cabling for the cross-connect at the FastConnect location. +type LetterOfAuthority struct { + + // The name of the entity authorized by this Letter of Authority. + AuthorizedEntityName *string `mandatory:"false" json:"authorizedEntityName"` + + // The type of cross-connect fiber, termination, and optical specification. + CircuitType LetterOfAuthorityCircuitTypeEnum `mandatory:"false" json:"circuitType,omitempty"` + + // The OCID of the cross-connect. + CrossConnectId *string `mandatory:"false" json:"crossConnectId"` + + // The address of the FastConnect location. + FacilityLocation *string `mandatory:"false" json:"facilityLocation"` + + // The meet-me room port for this cross-connect. + PortName *string `mandatory:"false" json:"portName"` + + // The date and time when the Letter of Authority expires, in the format defined by RFC3339. + TimeExpires *common.SDKTime `mandatory:"false" json:"timeExpires"` + + // The date and time the Letter of Authority was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeIssued *common.SDKTime `mandatory:"false" json:"timeIssued"` +} + +func (m LetterOfAuthority) String() string { + return common.PointerString(m) +} + +// LetterOfAuthorityCircuitTypeEnum Enum with underlying type: string +type LetterOfAuthorityCircuitTypeEnum string + +// Set of constants representing the allowable values for LetterOfAuthorityCircuitType +const ( + LetterOfAuthorityCircuitTypeLc LetterOfAuthorityCircuitTypeEnum = "Single_mode_LC" + LetterOfAuthorityCircuitTypeSc LetterOfAuthorityCircuitTypeEnum = "Single_mode_SC" +) + +var mappingLetterOfAuthorityCircuitType = map[string]LetterOfAuthorityCircuitTypeEnum{ + "Single_mode_LC": LetterOfAuthorityCircuitTypeLc, + "Single_mode_SC": LetterOfAuthorityCircuitTypeSc, +} + +// GetLetterOfAuthorityCircuitTypeEnumValues Enumerates the set of values for LetterOfAuthorityCircuitType +func GetLetterOfAuthorityCircuitTypeEnumValues() []LetterOfAuthorityCircuitTypeEnum { + values := make([]LetterOfAuthorityCircuitTypeEnum, 0) + for _, v := range mappingLetterOfAuthorityCircuitType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_boot_volume_attachments_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_boot_volume_attachments_request_response.go new file mode 100644 index 000000000..51669f7e1 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_boot_volume_attachments_request_response.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListBootVolumeAttachmentsRequest wrapper for the ListBootVolumeAttachments operation +type ListBootVolumeAttachmentsRequest struct { + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" contributesTo:"query" name:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The OCID of the instance. + InstanceId *string `mandatory:"false" contributesTo:"query" name:"instanceId"` + + // The OCID of the boot volume. + BootVolumeId *string `mandatory:"false" contributesTo:"query" name:"bootVolumeId"` +} + +func (request ListBootVolumeAttachmentsRequest) String() string { + return common.PointerString(request) +} + +// ListBootVolumeAttachmentsResponse wrapper for the ListBootVolumeAttachments operation +type ListBootVolumeAttachmentsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []BootVolumeAttachment instance + Items []BootVolumeAttachment `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListBootVolumeAttachmentsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_boot_volumes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_boot_volumes_request_response.go new file mode 100644 index 000000000..20370d315 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_boot_volumes_request_response.go @@ -0,0 +1,54 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListBootVolumesRequest wrapper for the ListBootVolumes operation +type ListBootVolumesRequest struct { + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" contributesTo:"query" name:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. 
+ Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListBootVolumesRequest) String() string { + return common.PointerString(request) +} + +// ListBootVolumesResponse wrapper for the ListBootVolumes operation +type ListBootVolumesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []BootVolume instance + Items []BootVolume `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListBootVolumesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_console_histories_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_console_histories_request_response.go new file mode 100644 index 000000000..e47d04333 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_console_histories_request_response.go @@ -0,0 +1,119 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListConsoleHistoriesRequest wrapper for the ListConsoleHistories operation +type ListConsoleHistoriesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"false" contributesTo:"query" name:"availabilityDomain"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The OCID of the instance. + InstanceId *string `mandatory:"false" contributesTo:"query" name:"instanceId"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListConsoleHistoriesSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListConsoleHistoriesSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. 
+ LifecycleState ConsoleHistoryLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListConsoleHistoriesRequest) String() string { + return common.PointerString(request) +} + +// ListConsoleHistoriesResponse wrapper for the ListConsoleHistories operation +type ListConsoleHistoriesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []ConsoleHistory instance + Items []ConsoleHistory `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListConsoleHistoriesResponse) String() string { + return common.PointerString(response) +} + +// ListConsoleHistoriesSortByEnum Enum with underlying type: string +type ListConsoleHistoriesSortByEnum string + +// Set of constants representing the allowable values for ListConsoleHistoriesSortBy +const ( + ListConsoleHistoriesSortByTimecreated ListConsoleHistoriesSortByEnum = "TIMECREATED" + ListConsoleHistoriesSortByDisplayname ListConsoleHistoriesSortByEnum = "DISPLAYNAME" +) + +var mappingListConsoleHistoriesSortBy = map[string]ListConsoleHistoriesSortByEnum{ + "TIMECREATED": ListConsoleHistoriesSortByTimecreated, + "DISPLAYNAME": ListConsoleHistoriesSortByDisplayname, +} + +// GetListConsoleHistoriesSortByEnumValues Enumerates the set of values for ListConsoleHistoriesSortBy +func GetListConsoleHistoriesSortByEnumValues() []ListConsoleHistoriesSortByEnum { + values := make([]ListConsoleHistoriesSortByEnum, 0) + for _, v := range mappingListConsoleHistoriesSortBy { + values = append(values, v) + } + return values +} + +// ListConsoleHistoriesSortOrderEnum Enum with underlying type: string +type ListConsoleHistoriesSortOrderEnum string + +// Set of constants representing the allowable values for ListConsoleHistoriesSortOrder +const ( + ListConsoleHistoriesSortOrderAsc ListConsoleHistoriesSortOrderEnum = "ASC" + ListConsoleHistoriesSortOrderDesc ListConsoleHistoriesSortOrderEnum = "DESC" +) + +var mappingListConsoleHistoriesSortOrder = map[string]ListConsoleHistoriesSortOrderEnum{ + "ASC": ListConsoleHistoriesSortOrderAsc, + "DESC": ListConsoleHistoriesSortOrderDesc, +} + +// GetListConsoleHistoriesSortOrderEnumValues Enumerates the set of values for ListConsoleHistoriesSortOrder +func GetListConsoleHistoriesSortOrderEnumValues() []ListConsoleHistoriesSortOrderEnum { + values := make([]ListConsoleHistoriesSortOrderEnum, 0) + for _, v := range mappingListConsoleHistoriesSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_cpes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_cpes_request_response.go new file mode 100644 index 000000000..5a0ce9fb3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_cpes_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCpesRequest wrapper for the ListCpes operation +type ListCpesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListCpesRequest) String() string { + return common.PointerString(request) +} + +// ListCpesResponse wrapper for the ListCpes operation +type ListCpesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Cpe instance + Items []Cpe `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListCpesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connect_groups_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connect_groups_request_response.go new file mode 100644 index 000000000..221747c68 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connect_groups_request_response.go @@ -0,0 +1,115 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCrossConnectGroupsRequest wrapper for the ListCrossConnectGroups operation +type ListCrossConnectGroupsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. 
+ SortBy ListCrossConnectGroupsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListCrossConnectGroupsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to return only resources that match the specified lifecycle state. The value is case insensitive. + LifecycleState CrossConnectGroupLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListCrossConnectGroupsRequest) String() string { + return common.PointerString(request) +} + +// ListCrossConnectGroupsResponse wrapper for the ListCrossConnectGroups operation +type ListCrossConnectGroupsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []CrossConnectGroup instance + Items []CrossConnectGroup `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListCrossConnectGroupsResponse) String() string { + return common.PointerString(response) +} + +// ListCrossConnectGroupsSortByEnum Enum with underlying type: string +type ListCrossConnectGroupsSortByEnum string + +// Set of constants representing the allowable values for ListCrossConnectGroupsSortBy +const ( + ListCrossConnectGroupsSortByTimecreated ListCrossConnectGroupsSortByEnum = "TIMECREATED" + ListCrossConnectGroupsSortByDisplayname ListCrossConnectGroupsSortByEnum = "DISPLAYNAME" +) + +var mappingListCrossConnectGroupsSortBy = map[string]ListCrossConnectGroupsSortByEnum{ + "TIMECREATED": ListCrossConnectGroupsSortByTimecreated, + "DISPLAYNAME": ListCrossConnectGroupsSortByDisplayname, +} + +// GetListCrossConnectGroupsSortByEnumValues Enumerates the set of values for ListCrossConnectGroupsSortBy +func GetListCrossConnectGroupsSortByEnumValues() []ListCrossConnectGroupsSortByEnum { + values := make([]ListCrossConnectGroupsSortByEnum, 0) + for _, v := range mappingListCrossConnectGroupsSortBy { + values = append(values, v) + } + return values +} + +// ListCrossConnectGroupsSortOrderEnum Enum with underlying type: string +type ListCrossConnectGroupsSortOrderEnum string + +// Set of constants representing the allowable values for ListCrossConnectGroupsSortOrder +const ( + ListCrossConnectGroupsSortOrderAsc ListCrossConnectGroupsSortOrderEnum = "ASC" + ListCrossConnectGroupsSortOrderDesc ListCrossConnectGroupsSortOrderEnum = "DESC" +) + +var mappingListCrossConnectGroupsSortOrder = map[string]ListCrossConnectGroupsSortOrderEnum{ + "ASC": ListCrossConnectGroupsSortOrderAsc, + "DESC": ListCrossConnectGroupsSortOrderDesc, +} + +// GetListCrossConnectGroupsSortOrderEnumValues Enumerates the set of values for ListCrossConnectGroupsSortOrder +func GetListCrossConnectGroupsSortOrderEnumValues() []ListCrossConnectGroupsSortOrderEnum { + values := make([]ListCrossConnectGroupsSortOrderEnum, 0) + for _, v := range 
mappingListCrossConnectGroupsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connect_locations_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connect_locations_request_response.go new file mode 100644 index 000000000..4200b32ac --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connect_locations_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCrossConnectLocationsRequest wrapper for the ListCrossConnectLocations operation +type ListCrossConnectLocationsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListCrossConnectLocationsRequest) String() string { + return common.PointerString(request) +} + +// ListCrossConnectLocationsResponse wrapper for the ListCrossConnectLocations operation +type ListCrossConnectLocationsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []CrossConnectLocation instance + Items []CrossConnectLocation `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListCrossConnectLocationsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connects_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connects_request_response.go new file mode 100644 index 000000000..e10f72d55 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_cross_connects_request_response.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCrossConnectsRequest wrapper for the ListCrossConnects operation +type ListCrossConnectsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the cross-connect group. + CrossConnectGroupId *string `mandatory:"false" contributesTo:"query" name:"crossConnectGroupId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. 
+ Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListCrossConnectsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListCrossConnectsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to return only resources that match the specified lifecycle state. The value is case insensitive. + LifecycleState CrossConnectLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListCrossConnectsRequest) String() string { + return common.PointerString(request) +} + +// ListCrossConnectsResponse wrapper for the ListCrossConnects operation +type ListCrossConnectsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []CrossConnect instance + Items []CrossConnect `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListCrossConnectsResponse) String() string { + return common.PointerString(response) +} + +// ListCrossConnectsSortByEnum Enum with underlying type: string +type ListCrossConnectsSortByEnum string + +// Set of constants representing the allowable values for ListCrossConnectsSortBy +const ( + ListCrossConnectsSortByTimecreated ListCrossConnectsSortByEnum = "TIMECREATED" + ListCrossConnectsSortByDisplayname ListCrossConnectsSortByEnum = "DISPLAYNAME" +) + +var mappingListCrossConnectsSortBy = map[string]ListCrossConnectsSortByEnum{ + "TIMECREATED": ListCrossConnectsSortByTimecreated, + "DISPLAYNAME": ListCrossConnectsSortByDisplayname, +} + +// GetListCrossConnectsSortByEnumValues Enumerates the set of values for ListCrossConnectsSortBy +func GetListCrossConnectsSortByEnumValues() []ListCrossConnectsSortByEnum { + values := make([]ListCrossConnectsSortByEnum, 0) + for _, v := range mappingListCrossConnectsSortBy { + values = append(values, v) + } + return values +} + +// ListCrossConnectsSortOrderEnum Enum with underlying type: string +type ListCrossConnectsSortOrderEnum string + +// Set of constants representing the allowable values for ListCrossConnectsSortOrder +const ( + ListCrossConnectsSortOrderAsc ListCrossConnectsSortOrderEnum = "ASC" + ListCrossConnectsSortOrderDesc ListCrossConnectsSortOrderEnum = "DESC" +) + +var mappingListCrossConnectsSortOrder = map[string]ListCrossConnectsSortOrderEnum{ + "ASC": ListCrossConnectsSortOrderAsc, + "DESC": ListCrossConnectsSortOrderDesc, +} + +// GetListCrossConnectsSortOrderEnumValues Enumerates the set of values for ListCrossConnectsSortOrder +func GetListCrossConnectsSortOrderEnumValues() []ListCrossConnectsSortOrderEnum { + values := make([]ListCrossConnectsSortOrderEnum, 0) + for _, v := range mappingListCrossConnectsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_crossconnect_port_speed_shapes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_crossconnect_port_speed_shapes_request_response.go new file mode 100644 index 000000000..b60655348 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_crossconnect_port_speed_shapes_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCrossconnectPortSpeedShapesRequest wrapper for the ListCrossconnectPortSpeedShapes operation +type ListCrossconnectPortSpeedShapesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. 
+ Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListCrossconnectPortSpeedShapesRequest) String() string { + return common.PointerString(request) +} + +// ListCrossconnectPortSpeedShapesResponse wrapper for the ListCrossconnectPortSpeedShapes operation +type ListCrossconnectPortSpeedShapesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []CrossConnectPortSpeedShape instance + Items []CrossConnectPortSpeedShape `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListCrossconnectPortSpeedShapesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_dhcp_options_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_dhcp_options_request_response.go new file mode 100644 index 000000000..f06466fc6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_dhcp_options_request_response.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDhcpOptionsRequest wrapper for the ListDhcpOptions operation +type ListDhcpOptionsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"query" name:"vcnId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListDhcpOptionsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. 
+ SortOrder ListDhcpOptionsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState DhcpOptionsLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListDhcpOptionsRequest) String() string { + return common.PointerString(request) +} + +// ListDhcpOptionsResponse wrapper for the ListDhcpOptions operation +type ListDhcpOptionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DhcpOptions instance + Items []DhcpOptions `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDhcpOptionsResponse) String() string { + return common.PointerString(response) +} + +// ListDhcpOptionsSortByEnum Enum with underlying type: string +type ListDhcpOptionsSortByEnum string + +// Set of constants representing the allowable values for ListDhcpOptionsSortBy +const ( + ListDhcpOptionsSortByTimecreated ListDhcpOptionsSortByEnum = "TIMECREATED" + ListDhcpOptionsSortByDisplayname ListDhcpOptionsSortByEnum = "DISPLAYNAME" +) + +var mappingListDhcpOptionsSortBy = map[string]ListDhcpOptionsSortByEnum{ + "TIMECREATED": ListDhcpOptionsSortByTimecreated, + "DISPLAYNAME": ListDhcpOptionsSortByDisplayname, +} + +// GetListDhcpOptionsSortByEnumValues Enumerates the set of values for ListDhcpOptionsSortBy +func GetListDhcpOptionsSortByEnumValues() []ListDhcpOptionsSortByEnum { + values := make([]ListDhcpOptionsSortByEnum, 0) + for _, v := range mappingListDhcpOptionsSortBy { + values = append(values, v) + } + return values +} + +// ListDhcpOptionsSortOrderEnum Enum with underlying type: string +type ListDhcpOptionsSortOrderEnum string + +// Set of constants representing the allowable values for ListDhcpOptionsSortOrder +const ( + ListDhcpOptionsSortOrderAsc ListDhcpOptionsSortOrderEnum = "ASC" + ListDhcpOptionsSortOrderDesc ListDhcpOptionsSortOrderEnum = "DESC" +) + +var mappingListDhcpOptionsSortOrder = map[string]ListDhcpOptionsSortOrderEnum{ + "ASC": ListDhcpOptionsSortOrderAsc, + "DESC": ListDhcpOptionsSortOrderDesc, +} + +// GetListDhcpOptionsSortOrderEnumValues Enumerates the set of values for ListDhcpOptionsSortOrder +func GetListDhcpOptionsSortOrderEnumValues() []ListDhcpOptionsSortOrderEnum { + values := make([]ListDhcpOptionsSortOrderEnum, 0) + for _, v := range mappingListDhcpOptionsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_drg_attachments_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_drg_attachments_request_response.go new file mode 100644 index 000000000..d526209d5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_drg_attachments_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDrgAttachmentsRequest wrapper for the ListDrgAttachments operation +type ListDrgAttachmentsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"false" contributesTo:"query" name:"vcnId"` + + // The OCID of the DRG. + DrgId *string `mandatory:"false" contributesTo:"query" name:"drgId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDrgAttachmentsRequest) String() string { + return common.PointerString(request) +} + +// ListDrgAttachmentsResponse wrapper for the ListDrgAttachments operation +type ListDrgAttachmentsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DrgAttachment instance + Items []DrgAttachment `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDrgAttachmentsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_drgs_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_drgs_request_response.go new file mode 100644 index 000000000..6577f9c2f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_drgs_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDrgsRequest wrapper for the ListDrgs operation +type ListDrgsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDrgsRequest) String() string { + return common.PointerString(request) +} + +// ListDrgsResponse wrapper for the ListDrgs operation +type ListDrgsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Drg instance + Items []Drg `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. 
+ OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDrgsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_fast_connect_provider_services_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_fast_connect_provider_services_request_response.go new file mode 100644 index 000000000..27614fd85 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_fast_connect_provider_services_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListFastConnectProviderServicesRequest wrapper for the ListFastConnectProviderServices operation +type ListFastConnectProviderServicesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListFastConnectProviderServicesRequest) String() string { + return common.PointerString(request) +} + +// ListFastConnectProviderServicesResponse wrapper for the ListFastConnectProviderServices operation +type ListFastConnectProviderServicesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []FastConnectProviderService instance + Items []FastConnectProviderService `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListFastConnectProviderServicesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_fast_connect_provider_virtual_circuit_bandwidth_shapes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_fast_connect_provider_virtual_circuit_bandwidth_shapes_request_response.go new file mode 100644 index 000000000..6197b9b54 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_fast_connect_provider_virtual_circuit_bandwidth_shapes_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListFastConnectProviderVirtualCircuitBandwidthShapesRequest wrapper for the ListFastConnectProviderVirtualCircuitBandwidthShapes operation +type ListFastConnectProviderVirtualCircuitBandwidthShapesRequest struct { + + // The OCID of the provider service. + ProviderServiceId *string `mandatory:"true" contributesTo:"path" name:"providerServiceId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListFastConnectProviderVirtualCircuitBandwidthShapesRequest) String() string { + return common.PointerString(request) +} + +// ListFastConnectProviderVirtualCircuitBandwidthShapesResponse wrapper for the ListFastConnectProviderVirtualCircuitBandwidthShapes operation +type ListFastConnectProviderVirtualCircuitBandwidthShapesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []VirtualCircuitBandwidthShape instance + Items []VirtualCircuitBandwidthShape `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListFastConnectProviderVirtualCircuitBandwidthShapesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_i_p_sec_connections_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_i_p_sec_connections_request_response.go new file mode 100644 index 000000000..c1b346754 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_i_p_sec_connections_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListIPSecConnectionsRequest wrapper for the ListIPSecConnections operation +type ListIPSecConnectionsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the DRG. + DrgId *string `mandatory:"false" contributesTo:"query" name:"drgId"` + + // The OCID of the CPE. + CpeId *string `mandatory:"false" contributesTo:"query" name:"cpeId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. 
+ Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListIPSecConnectionsRequest) String() string { + return common.PointerString(request) +} + +// ListIPSecConnectionsResponse wrapper for the ListIPSecConnections operation +type ListIPSecConnectionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []IpSecConnection instance + Items []IpSecConnection `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListIPSecConnectionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_images_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_images_request_response.go new file mode 100644 index 000000000..5ac886860 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_images_request_response.go @@ -0,0 +1,123 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListImagesRequest wrapper for the ListImages operation +type ListImagesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The image's operating system. + // Example: `Oracle Linux` + OperatingSystem *string `mandatory:"false" contributesTo:"query" name:"operatingSystem"` + + // The image's operating system version. + // Example: `7.2` + OperatingSystemVersion *string `mandatory:"false" contributesTo:"query" name:"operatingSystemVersion"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListImagesSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. 
+ SortOrder ListImagesSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState ImageLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListImagesRequest) String() string { + return common.PointerString(request) +} + +// ListImagesResponse wrapper for the ListImages operation +type ListImagesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Image instance + Items []Image `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListImagesResponse) String() string { + return common.PointerString(response) +} + +// ListImagesSortByEnum Enum with underlying type: string +type ListImagesSortByEnum string + +// Set of constants representing the allowable values for ListImagesSortBy +const ( + ListImagesSortByTimecreated ListImagesSortByEnum = "TIMECREATED" + ListImagesSortByDisplayname ListImagesSortByEnum = "DISPLAYNAME" +) + +var mappingListImagesSortBy = map[string]ListImagesSortByEnum{ + "TIMECREATED": ListImagesSortByTimecreated, + "DISPLAYNAME": ListImagesSortByDisplayname, +} + +// GetListImagesSortByEnumValues Enumerates the set of values for ListImagesSortBy +func GetListImagesSortByEnumValues() []ListImagesSortByEnum { + values := make([]ListImagesSortByEnum, 0) + for _, v := range mappingListImagesSortBy { + values = append(values, v) + } + return values +} + +// ListImagesSortOrderEnum Enum with underlying type: string +type ListImagesSortOrderEnum string + +// Set of constants representing the allowable values for ListImagesSortOrder +const ( + ListImagesSortOrderAsc ListImagesSortOrderEnum = "ASC" + ListImagesSortOrderDesc ListImagesSortOrderEnum = "DESC" +) + +var mappingListImagesSortOrder = map[string]ListImagesSortOrderEnum{ + "ASC": ListImagesSortOrderAsc, + "DESC": ListImagesSortOrderDesc, +} + +// GetListImagesSortOrderEnumValues Enumerates the set of values for ListImagesSortOrder +func GetListImagesSortOrderEnumValues() []ListImagesSortOrderEnum { + values := make([]ListImagesSortOrderEnum, 0) + for _, v := range mappingListImagesSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_instance_console_connections_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_instance_console_connections_request_response.go new file mode 100644 index 000000000..012b2a77b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_instance_console_connections_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListInstanceConsoleConnectionsRequest wrapper for the ListInstanceConsoleConnections operation +type ListInstanceConsoleConnectionsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the instance. + InstanceId *string `mandatory:"false" contributesTo:"query" name:"instanceId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListInstanceConsoleConnectionsRequest) String() string { + return common.PointerString(request) +} + +// ListInstanceConsoleConnectionsResponse wrapper for the ListInstanceConsoleConnections operation +type ListInstanceConsoleConnectionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []InstanceConsoleConnection instance + Items []InstanceConsoleConnection `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListInstanceConsoleConnectionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_instances_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_instances_request_response.go new file mode 100644 index 000000000..f687890c4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_instances_request_response.go @@ -0,0 +1,119 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListInstancesRequest wrapper for the ListInstances operation +type ListInstancesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"false" contributesTo:"query" name:"availabilityDomain"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. 
+ // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListInstancesSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListInstancesSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState InstanceLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListInstancesRequest) String() string { + return common.PointerString(request) +} + +// ListInstancesResponse wrapper for the ListInstances operation +type ListInstancesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Instance instance + Items []Instance `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListInstancesResponse) String() string { + return common.PointerString(response) +} + +// ListInstancesSortByEnum Enum with underlying type: string +type ListInstancesSortByEnum string + +// Set of constants representing the allowable values for ListInstancesSortBy +const ( + ListInstancesSortByTimecreated ListInstancesSortByEnum = "TIMECREATED" + ListInstancesSortByDisplayname ListInstancesSortByEnum = "DISPLAYNAME" +) + +var mappingListInstancesSortBy = map[string]ListInstancesSortByEnum{ + "TIMECREATED": ListInstancesSortByTimecreated, + "DISPLAYNAME": ListInstancesSortByDisplayname, +} + +// GetListInstancesSortByEnumValues Enumerates the set of values for ListInstancesSortBy +func GetListInstancesSortByEnumValues() []ListInstancesSortByEnum { + values := make([]ListInstancesSortByEnum, 0) + for _, v := range mappingListInstancesSortBy { + values = append(values, v) + } + return values +} + +// ListInstancesSortOrderEnum Enum with underlying type: string +type ListInstancesSortOrderEnum string + +// Set of constants representing the allowable values for ListInstancesSortOrder +const ( + ListInstancesSortOrderAsc ListInstancesSortOrderEnum = "ASC" + ListInstancesSortOrderDesc ListInstancesSortOrderEnum = "DESC" +) + +var mappingListInstancesSortOrder = map[string]ListInstancesSortOrderEnum{ + "ASC": ListInstancesSortOrderAsc, + "DESC": ListInstancesSortOrderDesc, +} + +// GetListInstancesSortOrderEnumValues Enumerates the set of values for ListInstancesSortOrder +func GetListInstancesSortOrderEnumValues() []ListInstancesSortOrderEnum { + values := make([]ListInstancesSortOrderEnum, 0) + for _, v := range mappingListInstancesSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_internet_gateways_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_internet_gateways_request_response.go new file mode 100644 index 000000000..b37985a4d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_internet_gateways_request_response.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListInternetGatewaysRequest wrapper for the ListInternetGateways operation +type ListInternetGatewaysRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"query" name:"vcnId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. 
+ // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListInternetGatewaysSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListInternetGatewaysSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState InternetGatewayLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListInternetGatewaysRequest) String() string { + return common.PointerString(request) +} + +// ListInternetGatewaysResponse wrapper for the ListInternetGateways operation +type ListInternetGatewaysResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []InternetGateway instance + Items []InternetGateway `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListInternetGatewaysResponse) String() string { + return common.PointerString(response) +} + +// ListInternetGatewaysSortByEnum Enum with underlying type: string +type ListInternetGatewaysSortByEnum string + +// Set of constants representing the allowable values for ListInternetGatewaysSortBy +const ( + ListInternetGatewaysSortByTimecreated ListInternetGatewaysSortByEnum = "TIMECREATED" + ListInternetGatewaysSortByDisplayname ListInternetGatewaysSortByEnum = "DISPLAYNAME" +) + +var mappingListInternetGatewaysSortBy = map[string]ListInternetGatewaysSortByEnum{ + "TIMECREATED": ListInternetGatewaysSortByTimecreated, + "DISPLAYNAME": ListInternetGatewaysSortByDisplayname, +} + +// GetListInternetGatewaysSortByEnumValues Enumerates the set of values for ListInternetGatewaysSortBy +func GetListInternetGatewaysSortByEnumValues() []ListInternetGatewaysSortByEnum { + values := make([]ListInternetGatewaysSortByEnum, 0) + for _, v := range mappingListInternetGatewaysSortBy { + values = append(values, v) + } + return values +} + +// ListInternetGatewaysSortOrderEnum Enum with underlying type: string +type ListInternetGatewaysSortOrderEnum string + +// Set of constants representing the allowable values for ListInternetGatewaysSortOrder +const ( + ListInternetGatewaysSortOrderAsc ListInternetGatewaysSortOrderEnum = "ASC" + ListInternetGatewaysSortOrderDesc ListInternetGatewaysSortOrderEnum = "DESC" +) + +var mappingListInternetGatewaysSortOrder = map[string]ListInternetGatewaysSortOrderEnum{ + "ASC": ListInternetGatewaysSortOrderAsc, + "DESC": ListInternetGatewaysSortOrderDesc, +} + +// GetListInternetGatewaysSortOrderEnumValues Enumerates the set of values for ListInternetGatewaysSortOrder +func GetListInternetGatewaysSortOrderEnumValues() []ListInternetGatewaysSortOrderEnum { + values := make([]ListInternetGatewaysSortOrderEnum, 0) + for _, v := range mappingListInternetGatewaysSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_local_peering_gateways_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_local_peering_gateways_request_response.go new file mode 100644 index 000000000..071ec59af --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_local_peering_gateways_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListLocalPeeringGatewaysRequest wrapper for the ListLocalPeeringGateways operation +type ListLocalPeeringGatewaysRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"query" name:"vcnId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. 
+ Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListLocalPeeringGatewaysRequest) String() string { + return common.PointerString(request) +} + +// ListLocalPeeringGatewaysResponse wrapper for the ListLocalPeeringGateways operation +type ListLocalPeeringGatewaysResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []LocalPeeringGateway instance + Items []LocalPeeringGateway `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListLocalPeeringGatewaysResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_private_ips_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_private_ips_request_response.go new file mode 100644 index 000000000..7054edfc3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_private_ips_request_response.go @@ -0,0 +1,57 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListPrivateIpsRequest wrapper for the ListPrivateIps operation +type ListPrivateIpsRequest struct { + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The private IP address of the `privateIp` object. + // Example: `10.0.3.3` + IpAddress *string `mandatory:"false" contributesTo:"query" name:"ipAddress"` + + // The OCID of the subnet. + SubnetId *string `mandatory:"false" contributesTo:"query" name:"subnetId"` + + // The OCID of the VNIC. + VnicId *string `mandatory:"false" contributesTo:"query" name:"vnicId"` +} + +func (request ListPrivateIpsRequest) String() string { + return common.PointerString(request) +} + +// ListPrivateIpsResponse wrapper for the ListPrivateIps operation +type ListPrivateIpsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []PrivateIp instance + Items []PrivateIp `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListPrivateIpsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_route_tables_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_route_tables_request_response.go new file mode 100644 index 000000000..88cc414c2 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_route_tables_request_response.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListRouteTablesRequest wrapper for the ListRouteTables operation +type ListRouteTablesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"query" name:"vcnId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListRouteTablesSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListRouteTablesSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState RouteTableLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListRouteTablesRequest) String() string { + return common.PointerString(request) +} + +// ListRouteTablesResponse wrapper for the ListRouteTables operation +type ListRouteTablesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []RouteTable instance + Items []RouteTable `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListRouteTablesResponse) String() string { + return common.PointerString(response) +} + +// ListRouteTablesSortByEnum Enum with underlying type: string +type ListRouteTablesSortByEnum string + +// Set of constants representing the allowable values for ListRouteTablesSortBy +const ( + ListRouteTablesSortByTimecreated ListRouteTablesSortByEnum = "TIMECREATED" + ListRouteTablesSortByDisplayname ListRouteTablesSortByEnum = "DISPLAYNAME" +) + +var mappingListRouteTablesSortBy = map[string]ListRouteTablesSortByEnum{ + "TIMECREATED": ListRouteTablesSortByTimecreated, + "DISPLAYNAME": ListRouteTablesSortByDisplayname, +} + +// GetListRouteTablesSortByEnumValues Enumerates the set of values for ListRouteTablesSortBy +func GetListRouteTablesSortByEnumValues() []ListRouteTablesSortByEnum { + values := make([]ListRouteTablesSortByEnum, 0) + for _, v := range mappingListRouteTablesSortBy { + values = append(values, v) + } + return values +} + +// ListRouteTablesSortOrderEnum Enum with underlying type: string +type ListRouteTablesSortOrderEnum string + +// Set of constants representing the allowable values for ListRouteTablesSortOrder +const ( + ListRouteTablesSortOrderAsc ListRouteTablesSortOrderEnum = "ASC" + ListRouteTablesSortOrderDesc ListRouteTablesSortOrderEnum = "DESC" +) + +var mappingListRouteTablesSortOrder = map[string]ListRouteTablesSortOrderEnum{ + "ASC": ListRouteTablesSortOrderAsc, + "DESC": ListRouteTablesSortOrderDesc, +} + +// GetListRouteTablesSortOrderEnumValues Enumerates the set of values for ListRouteTablesSortOrder +func GetListRouteTablesSortOrderEnumValues() []ListRouteTablesSortOrderEnum { + values := make([]ListRouteTablesSortOrderEnum, 0) + for _, v := range mappingListRouteTablesSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_security_lists_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_security_lists_request_response.go new file mode 100644 index 000000000..883452ce7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_security_lists_request_response.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListSecurityListsRequest wrapper for the ListSecurityLists operation +type ListSecurityListsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"query" name:"vcnId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. 
+ // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListSecurityListsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListSecurityListsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState SecurityListLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListSecurityListsRequest) String() string { + return common.PointerString(request) +} + +// ListSecurityListsResponse wrapper for the ListSecurityLists operation +type ListSecurityListsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []SecurityList instance + Items []SecurityList `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListSecurityListsResponse) String() string { + return common.PointerString(response) +} + +// ListSecurityListsSortByEnum Enum with underlying type: string +type ListSecurityListsSortByEnum string + +// Set of constants representing the allowable values for ListSecurityListsSortBy +const ( + ListSecurityListsSortByTimecreated ListSecurityListsSortByEnum = "TIMECREATED" + ListSecurityListsSortByDisplayname ListSecurityListsSortByEnum = "DISPLAYNAME" +) + +var mappingListSecurityListsSortBy = map[string]ListSecurityListsSortByEnum{ + "TIMECREATED": ListSecurityListsSortByTimecreated, + "DISPLAYNAME": ListSecurityListsSortByDisplayname, +} + +// GetListSecurityListsSortByEnumValues Enumerates the set of values for ListSecurityListsSortBy +func GetListSecurityListsSortByEnumValues() []ListSecurityListsSortByEnum { + values := make([]ListSecurityListsSortByEnum, 0) + for _, v := range mappingListSecurityListsSortBy { + values = append(values, v) + } + return values +} + +// ListSecurityListsSortOrderEnum Enum with underlying type: string +type ListSecurityListsSortOrderEnum string + +// Set of constants representing the allowable values for ListSecurityListsSortOrder +const ( + ListSecurityListsSortOrderAsc ListSecurityListsSortOrderEnum = "ASC" + ListSecurityListsSortOrderDesc ListSecurityListsSortOrderEnum = "DESC" +) + +var mappingListSecurityListsSortOrder = map[string]ListSecurityListsSortOrderEnum{ + "ASC": ListSecurityListsSortOrderAsc, + "DESC": ListSecurityListsSortOrderDesc, +} + +// GetListSecurityListsSortOrderEnumValues Enumerates the set of values for ListSecurityListsSortOrder +func GetListSecurityListsSortOrderEnumValues() []ListSecurityListsSortOrderEnum { + values := make([]ListSecurityListsSortOrderEnum, 0) + for _, v := range mappingListSecurityListsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_shapes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_shapes_request_response.go new file mode 100644 index 000000000..71c7a2dec --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_shapes_request_response.go @@ -0,0 +1,57 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListShapesRequest wrapper for the ListShapes operation +type ListShapesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"false" contributesTo:"query" name:"availabilityDomain"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The OCID of an image. 
+ ImageId *string `mandatory:"false" contributesTo:"query" name:"imageId"` +} + +func (request ListShapesRequest) String() string { + return common.PointerString(request) +} + +// ListShapesResponse wrapper for the ListShapes operation +type ListShapesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Shape instance + Items []Shape `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListShapesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_subnets_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_subnets_request_response.go new file mode 100644 index 000000000..a74e9e134 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_subnets_request_response.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListSubnetsRequest wrapper for the ListSubnets operation +type ListSubnetsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"query" name:"vcnId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListSubnetsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListSubnetsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. 
+ LifecycleState SubnetLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListSubnetsRequest) String() string { + return common.PointerString(request) +} + +// ListSubnetsResponse wrapper for the ListSubnets operation +type ListSubnetsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Subnet instance + Items []Subnet `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListSubnetsResponse) String() string { + return common.PointerString(response) +} + +// ListSubnetsSortByEnum Enum with underlying type: string +type ListSubnetsSortByEnum string + +// Set of constants representing the allowable values for ListSubnetsSortBy +const ( + ListSubnetsSortByTimecreated ListSubnetsSortByEnum = "TIMECREATED" + ListSubnetsSortByDisplayname ListSubnetsSortByEnum = "DISPLAYNAME" +) + +var mappingListSubnetsSortBy = map[string]ListSubnetsSortByEnum{ + "TIMECREATED": ListSubnetsSortByTimecreated, + "DISPLAYNAME": ListSubnetsSortByDisplayname, +} + +// GetListSubnetsSortByEnumValues Enumerates the set of values for ListSubnetsSortBy +func GetListSubnetsSortByEnumValues() []ListSubnetsSortByEnum { + values := make([]ListSubnetsSortByEnum, 0) + for _, v := range mappingListSubnetsSortBy { + values = append(values, v) + } + return values +} + +// ListSubnetsSortOrderEnum Enum with underlying type: string +type ListSubnetsSortOrderEnum string + +// Set of constants representing the allowable values for ListSubnetsSortOrder +const ( + ListSubnetsSortOrderAsc ListSubnetsSortOrderEnum = "ASC" + ListSubnetsSortOrderDesc ListSubnetsSortOrderEnum = "DESC" +) + +var mappingListSubnetsSortOrder = map[string]ListSubnetsSortOrderEnum{ + "ASC": ListSubnetsSortOrderAsc, + "DESC": ListSubnetsSortOrderDesc, +} + +// GetListSubnetsSortOrderEnumValues Enumerates the set of values for ListSubnetsSortOrder +func GetListSubnetsSortOrderEnumValues() []ListSubnetsSortOrderEnum { + values := make([]ListSubnetsSortOrderEnum, 0) + for _, v := range mappingListSubnetsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_vcns_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_vcns_request_response.go new file mode 100644 index 000000000..b1e5b0edd --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_vcns_request_response.go @@ -0,0 +1,115 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVcnsRequest wrapper for the ListVcns operation +type ListVcnsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. 
+ // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListVcnsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListVcnsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState VcnLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListVcnsRequest) String() string { + return common.PointerString(request) +} + +// ListVcnsResponse wrapper for the ListVcns operation +type ListVcnsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Vcn instance + Items []Vcn `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVcnsResponse) String() string { + return common.PointerString(response) +} + +// ListVcnsSortByEnum Enum with underlying type: string +type ListVcnsSortByEnum string + +// Set of constants representing the allowable values for ListVcnsSortBy +const ( + ListVcnsSortByTimecreated ListVcnsSortByEnum = "TIMECREATED" + ListVcnsSortByDisplayname ListVcnsSortByEnum = "DISPLAYNAME" +) + +var mappingListVcnsSortBy = map[string]ListVcnsSortByEnum{ + "TIMECREATED": ListVcnsSortByTimecreated, + "DISPLAYNAME": ListVcnsSortByDisplayname, +} + +// GetListVcnsSortByEnumValues Enumerates the set of values for ListVcnsSortBy +func GetListVcnsSortByEnumValues() []ListVcnsSortByEnum { + values := make([]ListVcnsSortByEnum, 0) + for _, v := range mappingListVcnsSortBy { + values = append(values, v) + } + return values +} + +// ListVcnsSortOrderEnum Enum with underlying type: string +type ListVcnsSortOrderEnum string + +// Set of constants representing the allowable values for ListVcnsSortOrder +const ( + ListVcnsSortOrderAsc ListVcnsSortOrderEnum = "ASC" + ListVcnsSortOrderDesc ListVcnsSortOrderEnum = "DESC" +) + +var mappingListVcnsSortOrder = map[string]ListVcnsSortOrderEnum{ + "ASC": ListVcnsSortOrderAsc, + "DESC": ListVcnsSortOrderDesc, +} + +// GetListVcnsSortOrderEnumValues Enumerates the set of values for ListVcnsSortOrder +func GetListVcnsSortOrderEnumValues() []ListVcnsSortOrderEnum { + values := make([]ListVcnsSortOrderEnum, 0) + for _, v := range mappingListVcnsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuit_bandwidth_shapes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuit_bandwidth_shapes_request_response.go new file mode 100644 index 000000000..3d0ac6253 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuit_bandwidth_shapes_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVirtualCircuitBandwidthShapesRequest wrapper for the ListVirtualCircuitBandwidthShapes operation +type ListVirtualCircuitBandwidthShapesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListVirtualCircuitBandwidthShapesRequest) String() string { + return common.PointerString(request) +} + +// ListVirtualCircuitBandwidthShapesResponse wrapper for the ListVirtualCircuitBandwidthShapes operation +type ListVirtualCircuitBandwidthShapesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []VirtualCircuitBandwidthShape instance + Items []VirtualCircuitBandwidthShape `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. 
Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVirtualCircuitBandwidthShapesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuit_public_prefixes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuit_public_prefixes_request_response.go new file mode 100644 index 000000000..0e885eb36 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuit_public_prefixes_request_response.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVirtualCircuitPublicPrefixesRequest wrapper for the ListVirtualCircuitPublicPrefixes operation +type ListVirtualCircuitPublicPrefixesRequest struct { + + // The OCID of the virtual circuit. + VirtualCircuitId *string `mandatory:"true" contributesTo:"path" name:"virtualCircuitId"` + + // A filter to only return resources that match the given verification state. + // The state value is case-insensitive. + VerificationState VirtualCircuitPublicPrefixVerificationStateEnum `mandatory:"false" contributesTo:"query" name:"verificationState" omitEmpty:"true"` +} + +func (request ListVirtualCircuitPublicPrefixesRequest) String() string { + return common.PointerString(request) +} + +// ListVirtualCircuitPublicPrefixesResponse wrapper for the ListVirtualCircuitPublicPrefixes operation +type ListVirtualCircuitPublicPrefixesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []VirtualCircuitPublicPrefix instance + Items []VirtualCircuitPublicPrefix `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVirtualCircuitPublicPrefixesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuits_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuits_request_response.go new file mode 100644 index 000000000..65dc99414 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_virtual_circuits_request_response.go @@ -0,0 +1,115 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVirtualCircuitsRequest wrapper for the ListVirtualCircuits operation +type ListVirtualCircuitsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. 
+ Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListVirtualCircuitsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListVirtualCircuitsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to return only resources that match the specified lifecycle state. The value is case insensitive. + LifecycleState VirtualCircuitLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListVirtualCircuitsRequest) String() string { + return common.PointerString(request) +} + +// ListVirtualCircuitsResponse wrapper for the ListVirtualCircuits operation +type ListVirtualCircuitsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []VirtualCircuit instance + Items []VirtualCircuit `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVirtualCircuitsResponse) String() string { + return common.PointerString(response) +} + +// ListVirtualCircuitsSortByEnum Enum with underlying type: string +type ListVirtualCircuitsSortByEnum string + +// Set of constants representing the allowable values for ListVirtualCircuitsSortBy +const ( + ListVirtualCircuitsSortByTimecreated ListVirtualCircuitsSortByEnum = "TIMECREATED" + ListVirtualCircuitsSortByDisplayname ListVirtualCircuitsSortByEnum = "DISPLAYNAME" +) + +var mappingListVirtualCircuitsSortBy = map[string]ListVirtualCircuitsSortByEnum{ + "TIMECREATED": ListVirtualCircuitsSortByTimecreated, + "DISPLAYNAME": ListVirtualCircuitsSortByDisplayname, +} + +// GetListVirtualCircuitsSortByEnumValues Enumerates the set of values for ListVirtualCircuitsSortBy +func GetListVirtualCircuitsSortByEnumValues() []ListVirtualCircuitsSortByEnum { + values := make([]ListVirtualCircuitsSortByEnum, 0) + for _, v := range mappingListVirtualCircuitsSortBy { + values = append(values, v) + } + return values +} + +// ListVirtualCircuitsSortOrderEnum Enum with underlying type: string +type ListVirtualCircuitsSortOrderEnum string + +// Set of constants representing the allowable values for ListVirtualCircuitsSortOrder +const ( + ListVirtualCircuitsSortOrderAsc ListVirtualCircuitsSortOrderEnum = "ASC" + ListVirtualCircuitsSortOrderDesc ListVirtualCircuitsSortOrderEnum = "DESC" +) + +var mappingListVirtualCircuitsSortOrder = map[string]ListVirtualCircuitsSortOrderEnum{ + "ASC": ListVirtualCircuitsSortOrderAsc, + "DESC": ListVirtualCircuitsSortOrderDesc, +} + +// GetListVirtualCircuitsSortOrderEnumValues Enumerates the set of values for ListVirtualCircuitsSortOrder +func GetListVirtualCircuitsSortOrderEnumValues() []ListVirtualCircuitsSortOrderEnum { + values := make([]ListVirtualCircuitsSortOrderEnum, 0) + for _, v := range mappingListVirtualCircuitsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_vnic_attachments_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_vnic_attachments_request_response.go new file mode 100644 index 000000000..6746850b4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_vnic_attachments_request_response.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVnicAttachmentsRequest wrapper for the ListVnicAttachments operation +type ListVnicAttachmentsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"false" contributesTo:"query" name:"availabilityDomain"` + + // The OCID of the instance. + InstanceId *string `mandatory:"false" contributesTo:"query" name:"instanceId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The OCID of the VNIC. 
+ VnicId *string `mandatory:"false" contributesTo:"query" name:"vnicId"` +} + +func (request ListVnicAttachmentsRequest) String() string { + return common.PointerString(request) +} + +// ListVnicAttachmentsResponse wrapper for the ListVnicAttachments operation +type ListVnicAttachmentsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []VnicAttachment instance + Items []VnicAttachment `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVnicAttachmentsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_volume_attachments_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_volume_attachments_request_response.go new file mode 100644 index 000000000..5bc7ffdf4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_volume_attachments_request_response.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVolumeAttachmentsRequest wrapper for the ListVolumeAttachments operation +type ListVolumeAttachmentsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"false" contributesTo:"query" name:"availabilityDomain"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The OCID of the instance. + InstanceId *string `mandatory:"false" contributesTo:"query" name:"instanceId"` + + // The OCID of the volume. + VolumeId *string `mandatory:"false" contributesTo:"query" name:"volumeId"` +} + +func (request ListVolumeAttachmentsRequest) String() string { + return common.PointerString(request) +} + +// ListVolumeAttachmentsResponse wrapper for the ListVolumeAttachments operation +type ListVolumeAttachmentsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []VolumeAttachment instance + Items []VolumeAttachment `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVolumeAttachmentsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_volume_backups_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_volume_backups_request_response.go new file mode 100644 index 000000000..19bdff08c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_volume_backups_request_response.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVolumeBackupsRequest wrapper for the ListVolumeBackups operation +type ListVolumeBackupsRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the volume. + VolumeId *string `mandatory:"false" contributesTo:"query" name:"volumeId"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListVolumeBackupsSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListVolumeBackupsSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState VolumeBackupLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListVolumeBackupsRequest) String() string { + return common.PointerString(request) +} + +// ListVolumeBackupsResponse wrapper for the ListVolumeBackups operation +type ListVolumeBackupsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []VolumeBackup instance + Items []VolumeBackup `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVolumeBackupsResponse) String() string { + return common.PointerString(response) +} + +// ListVolumeBackupsSortByEnum Enum with underlying type: string +type ListVolumeBackupsSortByEnum string + +// Set of constants representing the allowable values for ListVolumeBackupsSortBy +const ( + ListVolumeBackupsSortByTimecreated ListVolumeBackupsSortByEnum = "TIMECREATED" + ListVolumeBackupsSortByDisplayname ListVolumeBackupsSortByEnum = "DISPLAYNAME" +) + +var mappingListVolumeBackupsSortBy = map[string]ListVolumeBackupsSortByEnum{ + "TIMECREATED": ListVolumeBackupsSortByTimecreated, + "DISPLAYNAME": ListVolumeBackupsSortByDisplayname, +} + +// GetListVolumeBackupsSortByEnumValues Enumerates the set of values for ListVolumeBackupsSortBy +func GetListVolumeBackupsSortByEnumValues() []ListVolumeBackupsSortByEnum { + values := make([]ListVolumeBackupsSortByEnum, 0) + for _, v := range mappingListVolumeBackupsSortBy { + values = append(values, v) + } + return values +} + +// ListVolumeBackupsSortOrderEnum Enum with underlying type: string +type ListVolumeBackupsSortOrderEnum string + +// Set of constants representing the allowable values for ListVolumeBackupsSortOrder +const ( + ListVolumeBackupsSortOrderAsc ListVolumeBackupsSortOrderEnum = "ASC" + ListVolumeBackupsSortOrderDesc ListVolumeBackupsSortOrderEnum = "DESC" +) + +var mappingListVolumeBackupsSortOrder = map[string]ListVolumeBackupsSortOrderEnum{ + "ASC": ListVolumeBackupsSortOrderAsc, + "DESC": ListVolumeBackupsSortOrderDesc, +} + +// GetListVolumeBackupsSortOrderEnumValues Enumerates the set of values for ListVolumeBackupsSortOrder +func GetListVolumeBackupsSortOrderEnumValues() []ListVolumeBackupsSortOrderEnum { + values := make([]ListVolumeBackupsSortOrderEnum, 0) + for _, v := range mappingListVolumeBackupsSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/list_volumes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/list_volumes_request_response.go new file mode 100644 index 000000000..20bb4c553 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/list_volumes_request_response.go @@ -0,0 +1,119 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListVolumesRequest wrapper for the ListVolumes operation +type ListVolumesRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The name of the Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"false" contributesTo:"query" name:"availabilityDomain"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // A filter to return only resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // The field to sort by. You can provide one sort order (`sortOrder`). 
Default order for + // TIMECREATED is descending. Default order for DISPLAYNAME is ascending. The DISPLAYNAME + // sort order is case sensitive. + // **Note:** In general, some "List" operations (for example, `ListInstances`) let you + // optionally filter by Availability Domain if the scope of the resource type is within a + // single Availability Domain. If you call one of these "List" operations without specifying + // an Availability Domain, the resources are grouped by Availability Domain, then sorted. + SortBy ListVolumesSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either ascending (`ASC`) or descending (`DESC`). The DISPLAYNAME sort order + // is case sensitive. + SortOrder ListVolumesSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given lifecycle state. The state value is case-insensitive. + LifecycleState VolumeLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListVolumesRequest) String() string { + return common.PointerString(request) +} + +// ListVolumesResponse wrapper for the ListVolumes operation +type ListVolumesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Volume instance + Items []Volume `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListVolumesResponse) String() string { + return common.PointerString(response) +} + +// ListVolumesSortByEnum Enum with underlying type: string +type ListVolumesSortByEnum string + +// Set of constants representing the allowable values for ListVolumesSortBy +const ( + ListVolumesSortByTimecreated ListVolumesSortByEnum = "TIMECREATED" + ListVolumesSortByDisplayname ListVolumesSortByEnum = "DISPLAYNAME" +) + +var mappingListVolumesSortBy = map[string]ListVolumesSortByEnum{ + "TIMECREATED": ListVolumesSortByTimecreated, + "DISPLAYNAME": ListVolumesSortByDisplayname, +} + +// GetListVolumesSortByEnumValues Enumerates the set of values for ListVolumesSortBy +func GetListVolumesSortByEnumValues() []ListVolumesSortByEnum { + values := make([]ListVolumesSortByEnum, 0) + for _, v := range mappingListVolumesSortBy { + values = append(values, v) + } + return values +} + +// ListVolumesSortOrderEnum Enum with underlying type: string +type ListVolumesSortOrderEnum string + +// Set of constants representing the allowable values for ListVolumesSortOrder +const ( + ListVolumesSortOrderAsc ListVolumesSortOrderEnum = "ASC" + ListVolumesSortOrderDesc ListVolumesSortOrderEnum = "DESC" +) + +var mappingListVolumesSortOrder = map[string]ListVolumesSortOrderEnum{ + "ASC": ListVolumesSortOrderAsc, + "DESC": ListVolumesSortOrderDesc, +} + +// GetListVolumesSortOrderEnumValues Enumerates the set of values for ListVolumesSortOrder +func GetListVolumesSortOrderEnumValues() []ListVolumesSortOrderEnum { + values := make([]ListVolumesSortOrderEnum, 0) + for _, v := range mappingListVolumesSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/local_peering_gateway.go b/vendor/github.com/oracle/oci-go-sdk/core/local_peering_gateway.go new file mode 100644 index 000000000..61242c667 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/local_peering_gateway.go @@ -0,0 +1,123 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LocalPeeringGateway A local peering gateway (LPG) is an object on a VCN that lets that VCN peer +// with another VCN in the same region. *Peering* means that the two VCNs can +// communicate using private IP addresses, but without the traffic traversing the +// internet or routing through your on-premises network. For more information, +// see VCN Peering (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/VCNpeering.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type LocalPeeringGateway struct { + + // The OCID of the compartment containing the LPG. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid + // entering confidential information. + DisplayName *string `mandatory:"true" json:"displayName"` + + // The LPG's Oracle ID (OCID). 
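The `page` query parameter and `opc-next-page` response header described in these List wrappers define the pagination contract, and the SortBy/SortOrder enums above supply the allowed ordering values. A minimal sketch of how a caller might drain all pages of `ListVolumes` follows; the `volumeLister` interface and `listAllVolumes` name are stand-ins for whichever generated client exposes this method (the concrete client type is defined elsewhere in the SDK), and the pointer helpers come from the SDK's `common` package.

```
package example

import (
	"context"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/core"
)

// volumeLister stands in for the generated client; only the method signature matters here.
type volumeLister interface {
	ListVolumes(ctx context.Context, request core.ListVolumesRequest) (core.ListVolumesResponse, error)
}

// listAllVolumes drains every page of a ListVolumes call by feeding the opc-next-page
// response header back into the `page` query parameter until the header is absent.
func listAllVolumes(ctx context.Context, client volumeLister, compartmentID string) ([]core.Volume, error) {
	request := core.ListVolumesRequest{
		CompartmentId: common.String(compartmentID),
		Limit:         common.Int(500),
		SortBy:        core.ListVolumesSortByTimecreated,
		SortOrder:     core.ListVolumesSortOrderDesc,
	}
	var volumes []core.Volume
	for {
		response, err := client.ListVolumes(ctx, request)
		if err != nil {
			return nil, err
		}
		volumes = append(volumes, response.Items...)
		if response.OpcNextPage == nil {
			return volumes, nil // last page: no opc-next-page header was returned
		}
		request.Page = response.OpcNextPage
	}
}
```

The same loop shape applies to the other List operations in this package, since they all share the `page`/`opc-next-page` pair.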
+ Id *string `mandatory:"true" json:"id"` + + // Whether the VCN at the other end of the peering is in a different tenancy. + // Example: `false` + IsCrossTenancyPeering *bool `mandatory:"true" json:"isCrossTenancyPeering"` + + // The LPG's current lifecycle state. + LifecycleState LocalPeeringGatewayLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // Whether the LPG is peered with another LPG. `NEW` means the LPG has not yet been + // peered. `PENDING` means the peering is being established. `REVOKED` means the + // LPG at the other end of the peering has been deleted. + PeeringStatus LocalPeeringGatewayPeeringStatusEnum `mandatory:"true" json:"peeringStatus"` + + // The date and time the LPG was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The OCID of the VCN the LPG belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // The range of IP addresses available on the VCN at the other + // end of the peering from this LPG. The value is `null` if the LPG is not peered. + // You can use this as the destination CIDR for a route rule to route a subnet's + // traffic to this LPG. + // Example: `192.168.0.0/16` + PeerAdvertisedCidr *string `mandatory:"false" json:"peerAdvertisedCidr"` + + // Additional information regarding the peering status, if applicable. + PeeringStatusDetails *string `mandatory:"false" json:"peeringStatusDetails"` +} + +func (m LocalPeeringGateway) String() string { + return common.PointerString(m) +} + +// LocalPeeringGatewayLifecycleStateEnum Enum with underlying type: string +type LocalPeeringGatewayLifecycleStateEnum string + +// Set of constants representing the allowable values for LocalPeeringGatewayLifecycleState +const ( + LocalPeeringGatewayLifecycleStateProvisioning LocalPeeringGatewayLifecycleStateEnum = "PROVISIONING" + LocalPeeringGatewayLifecycleStateAvailable LocalPeeringGatewayLifecycleStateEnum = "AVAILABLE" + LocalPeeringGatewayLifecycleStateTerminating LocalPeeringGatewayLifecycleStateEnum = "TERMINATING" + LocalPeeringGatewayLifecycleStateTerminated LocalPeeringGatewayLifecycleStateEnum = "TERMINATED" +) + +var mappingLocalPeeringGatewayLifecycleState = map[string]LocalPeeringGatewayLifecycleStateEnum{ + "PROVISIONING": LocalPeeringGatewayLifecycleStateProvisioning, + "AVAILABLE": LocalPeeringGatewayLifecycleStateAvailable, + "TERMINATING": LocalPeeringGatewayLifecycleStateTerminating, + "TERMINATED": LocalPeeringGatewayLifecycleStateTerminated, +} + +// GetLocalPeeringGatewayLifecycleStateEnumValues Enumerates the set of values for LocalPeeringGatewayLifecycleState +func GetLocalPeeringGatewayLifecycleStateEnumValues() []LocalPeeringGatewayLifecycleStateEnum { + values := make([]LocalPeeringGatewayLifecycleStateEnum, 0) + for _, v := range mappingLocalPeeringGatewayLifecycleState { + values = append(values, v) + } + return values +} + +// LocalPeeringGatewayPeeringStatusEnum Enum with underlying type: string +type LocalPeeringGatewayPeeringStatusEnum string + +// Set of constants representing the allowable values for LocalPeeringGatewayPeeringStatus +const ( + LocalPeeringGatewayPeeringStatusInvalid LocalPeeringGatewayPeeringStatusEnum = "INVALID" + LocalPeeringGatewayPeeringStatusNew LocalPeeringGatewayPeeringStatusEnum = "NEW" + LocalPeeringGatewayPeeringStatusPeered LocalPeeringGatewayPeeringStatusEnum = "PEERED" + LocalPeeringGatewayPeeringStatusPending LocalPeeringGatewayPeeringStatusEnum = "PENDING" + 
LocalPeeringGatewayPeeringStatusRevoked LocalPeeringGatewayPeeringStatusEnum = "REVOKED" +) + +var mappingLocalPeeringGatewayPeeringStatus = map[string]LocalPeeringGatewayPeeringStatusEnum{ + "INVALID": LocalPeeringGatewayPeeringStatusInvalid, + "NEW": LocalPeeringGatewayPeeringStatusNew, + "PEERED": LocalPeeringGatewayPeeringStatusPeered, + "PENDING": LocalPeeringGatewayPeeringStatusPending, + "REVOKED": LocalPeeringGatewayPeeringStatusRevoked, +} + +// GetLocalPeeringGatewayPeeringStatusEnumValues Enumerates the set of values for LocalPeeringGatewayPeeringStatus +func GetLocalPeeringGatewayPeeringStatusEnumValues() []LocalPeeringGatewayPeeringStatusEnum { + values := make([]LocalPeeringGatewayPeeringStatusEnum, 0) + for _, v := range mappingLocalPeeringGatewayPeeringStatus { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/port_range.go b/vendor/github.com/oracle/oci-go-sdk/core/port_range.go new file mode 100644 index 000000000..f01332769 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/port_range.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PortRange The representation of PortRange +type PortRange struct { + + // The maximum port number. Must not be lower than the minimum port number. To specify + // a single port number, set both the min and max to the same value. + Max *int `mandatory:"true" json:"max"` + + // The minimum port number. Must not be greater than the maximum port number. + Min *int `mandatory:"true" json:"min"` +} + +func (m PortRange) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/private_ip.go b/vendor/github.com/oracle/oci-go-sdk/core/private_ip.go new file mode 100644 index 000000000..7ae7b37a0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/private_ip.go @@ -0,0 +1,89 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PrivateIp A *private IP* is a conceptual term that refers to a private IP address and related properties. +// The `privateIp` object is the API representation of a private IP. +// Each instance has a *primary private IP* that is automatically created and +// assigned to the primary VNIC during instance launch. If you add a secondary +// VNIC to the instance, it also automatically gets a primary private IP. You +// can't remove a primary private IP from its VNIC. The primary private IP is +// automatically deleted when the VNIC is terminated. +// You can add *secondary private IPs* to a VNIC after it's created. For more +// information, see the `privateIp` operations and also +// IP Addresses (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingIPaddresses.htm). +// **Note:** Only +// ListPrivateIps and +// GetPrivateIp work with +// *primary* private IPs. To create and update primary private IPs, you instead +// work with instance and VNIC operations. 
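The `peeringStatus` and `peerAdvertisedCidr` fields of `LocalPeeringGateway` interact: the advertised CIDR is only meaningful once the status reaches `PEERED` and is `null` otherwise. A small sketch of guarding on that, using only the types and constants defined in this file (the function name is illustrative):

```
package example

import (
	"github.com/oracle/oci-go-sdk/core"
)

// peerAdvertisedRoute returns the CIDR advertised by the VCN at the other end of the
// peering, but only once the LPG reports PEERED; before that, PeerAdvertisedCidr is nil.
func peerAdvertisedRoute(lpg core.LocalPeeringGateway) (string, bool) {
	if lpg.PeeringStatus != core.LocalPeeringGatewayPeeringStatusPeered || lpg.PeerAdvertisedCidr == nil {
		return "", false
	}
	return *lpg.PeerAdvertisedCidr, true
}
```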
For example, a primary private IP's +// properties come from the values you specify in +// CreateVnicDetails when calling either +// LaunchInstance or +// AttachVnic. To update the hostname +// for a primary private IP, you use UpdateVnic. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type PrivateIp struct { + + // The private IP's Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"false" json:"availabilityDomain"` + + // The OCID of the compartment containing the private IP. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid + // entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The hostname for the private IP. Used for DNS. The value is the hostname + // portion of the private IP's fully qualified domain name (FQDN) + // (for example, `bminstance-1` in FQDN `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be unique across all VNICs in the subnet and comply with + // RFC 952 (https://tools.ietf.org/html/rfc952) and + // RFC 1123 (https://tools.ietf.org/html/rfc1123). + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `bminstance-1` + HostnameLabel *string `mandatory:"false" json:"hostnameLabel"` + + // The private IP's Oracle ID (OCID). + Id *string `mandatory:"false" json:"id"` + + // The private IP address of the `privateIp` object. The address is within the CIDR + // of the VNIC's subnet. + // Example: `10.0.3.3` + IpAddress *string `mandatory:"false" json:"ipAddress"` + + // Whether this private IP is the primary one on the VNIC. Primary private IPs + // are unassigned and deleted automatically when the VNIC is terminated. + // Example: `true` + IsPrimary *bool `mandatory:"false" json:"isPrimary"` + + // The OCID of the subnet the VNIC is in. + SubnetId *string `mandatory:"false" json:"subnetId"` + + // The date and time the private IP was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The OCID of the VNIC the private IP is assigned to. The VNIC and private IP + // must be in the same subnet. + VnicId *string `mandatory:"false" json:"vnicId"` +} + +func (m PrivateIp) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/route_rule.go b/vendor/github.com/oracle/oci-go-sdk/core/route_rule.go new file mode 100644 index 000000000..11b0e1b2a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/route_rule.go @@ -0,0 +1,32 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// RouteRule A mapping between a destination IP address range and a virtual device to route matching +// packets to (a target). +type RouteRule struct { + + // A destination IP address range in CIDR notation. 
Matching packets will + // be routed to the indicated network entity (the target). + // Example: `0.0.0.0/0` + CidrBlock *string `mandatory:"true" json:"cidrBlock"` + + // The OCID for the route rule's target. For information about the type of + // targets you can specify, see + // Route Tables (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm). + NetworkEntityId *string `mandatory:"true" json:"networkEntityId"` +} + +func (m RouteRule) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/route_table.go b/vendor/github.com/oracle/oci-go-sdk/core/route_table.go new file mode 100644 index 000000000..f255d131c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/route_table.go @@ -0,0 +1,76 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// RouteTable A collection of `RouteRule` objects, which are used to route packets +// based on destination IP to a particular network entity. For more information, see +// Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type RouteTable struct { + + // The OCID of the compartment containing the route table. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The route table's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The route table's current state. + LifecycleState RouteTableLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The collection of rules for routing destination IPs to network devices. + RouteRules []RouteRule `mandatory:"true" json:"routeRules"` + + // The OCID of the VCN the route table list belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The date and time the route table was created, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m RouteTable) String() string { + return common.PointerString(m) +} + +// RouteTableLifecycleStateEnum Enum with underlying type: string +type RouteTableLifecycleStateEnum string + +// Set of constants representing the allowable values for RouteTableLifecycleState +const ( + RouteTableLifecycleStateProvisioning RouteTableLifecycleStateEnum = "PROVISIONING" + RouteTableLifecycleStateAvailable RouteTableLifecycleStateEnum = "AVAILABLE" + RouteTableLifecycleStateTerminating RouteTableLifecycleStateEnum = "TERMINATING" + RouteTableLifecycleStateTerminated RouteTableLifecycleStateEnum = "TERMINATED" +) + +var mappingRouteTableLifecycleState = map[string]RouteTableLifecycleStateEnum{ + "PROVISIONING": RouteTableLifecycleStateProvisioning, + "AVAILABLE": RouteTableLifecycleStateAvailable, + "TERMINATING": RouteTableLifecycleStateTerminating, + "TERMINATED": RouteTableLifecycleStateTerminated, +} + +// GetRouteTableLifecycleStateEnumValues Enumerates the set of values for RouteTableLifecycleState +func GetRouteTableLifecycleStateEnumValues() []RouteTableLifecycleStateEnum { + values := make([]RouteTableLifecycleStateEnum, 0) + for _, v := range mappingRouteTableLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/security_list.go b/vendor/github.com/oracle/oci-go-sdk/core/security_list.go new file mode 100644 index 000000000..d55bcd206 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/security_list.go @@ -0,0 +1,84 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// SecurityList A set of virtual firewall rules for your VCN. Security lists are configured at the subnet +// level, but the rules are applied to the ingress and egress traffic for the individual instances +// in the subnet. The rules can be stateful or stateless. For more information, see +// Security Lists (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/securitylists.htm). +// **Important:** Oracle Cloud Infrastructure Compute service images automatically include firewall rules (for example, +// Linux iptables, Windows firewall). If there are issues with some type of access to an instance, +// make sure both the security lists associated with the instance's subnet and the instance's +// firewall rules are set correctly. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type SecurityList struct { + + // The OCID of the compartment containing the security list. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"true" json:"displayName"` + + // Rules for allowing egress IP packets. + EgressSecurityRules []EgressSecurityRule `mandatory:"true" json:"egressSecurityRules"` + + // The security list's Oracle Cloud ID (OCID). 
+ Id *string `mandatory:"true" json:"id"` + + // Rules for allowing ingress IP packets. + IngressSecurityRules []IngressSecurityRule `mandatory:"true" json:"ingressSecurityRules"` + + // The security list's current state. + LifecycleState SecurityListLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The date and time the security list was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The OCID of the VCN the security list belongs to. + VcnId *string `mandatory:"true" json:"vcnId"` +} + +func (m SecurityList) String() string { + return common.PointerString(m) +} + +// SecurityListLifecycleStateEnum Enum with underlying type: string +type SecurityListLifecycleStateEnum string + +// Set of constants representing the allowable values for SecurityListLifecycleState +const ( + SecurityListLifecycleStateProvisioning SecurityListLifecycleStateEnum = "PROVISIONING" + SecurityListLifecycleStateAvailable SecurityListLifecycleStateEnum = "AVAILABLE" + SecurityListLifecycleStateTerminating SecurityListLifecycleStateEnum = "TERMINATING" + SecurityListLifecycleStateTerminated SecurityListLifecycleStateEnum = "TERMINATED" +) + +var mappingSecurityListLifecycleState = map[string]SecurityListLifecycleStateEnum{ + "PROVISIONING": SecurityListLifecycleStateProvisioning, + "AVAILABLE": SecurityListLifecycleStateAvailable, + "TERMINATING": SecurityListLifecycleStateTerminating, + "TERMINATED": SecurityListLifecycleStateTerminated, +} + +// GetSecurityListLifecycleStateEnumValues Enumerates the set of values for SecurityListLifecycleState +func GetSecurityListLifecycleStateEnumValues() []SecurityListLifecycleStateEnum { + values := make([]SecurityListLifecycleStateEnum, 0) + for _, v := range mappingSecurityListLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/shape.go b/vendor/github.com/oracle/oci-go-sdk/core/shape.go new file mode 100644 index 000000000..7cb572cff --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/shape.go @@ -0,0 +1,26 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Shape A compute instance shape that can be used in LaunchInstance. +// For more information, see Overview of the Compute Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Compute/Concepts/computeoverview.htm). +type Shape struct { + + // The name of the shape. You can enumerate all available shapes by calling + // ListShapes. + Shape *string `mandatory:"true" json:"shape"` +} + +func (m Shape) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/subnet.go b/vendor/github.com/oracle/oci-go-sdk/core/subnet.go new file mode 100644 index 000000000..d765cefce --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/subnet.go @@ -0,0 +1,131 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Subnet A logical subdivision of a VCN. 
Each subnet exists in a single Availability Domain and +// consists of a contiguous range of IP addresses that do not overlap with +// other subnets in the VCN. Example: 172.16.1.0/24. For more information, see +// Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm) and +// VCNs and Subnets (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVCNs.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Subnet struct { + + // The subnet's Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The subnet's CIDR block. + // Example: `172.16.1.0/24` + CidrBlock *string `mandatory:"true" json:"cidrBlock"` + + // The OCID of the compartment containing the subnet. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The subnet's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The subnet's current state. + LifecycleState SubnetLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the route table the subnet is using. + RouteTableId *string `mandatory:"true" json:"routeTableId"` + + // The OCID of the VCN the subnet is in. + VcnId *string `mandatory:"true" json:"vcnId"` + + // The IP address of the virtual router. + // Example: `10.0.14.1` + VirtualRouterIp *string `mandatory:"true" json:"virtualRouterIp"` + + // The MAC address of the virtual router. + // Example: `00:00:17:B6:4D:DD` + VirtualRouterMac *string `mandatory:"true" json:"virtualRouterMac"` + + // The OCID of the set of DHCP options associated with the subnet. + DhcpOptionsId *string `mandatory:"false" json:"dhcpOptionsId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // A DNS label for the subnet, used in conjunction with the VNIC's hostname and + // VCN's DNS label to form a fully qualified domain name (FQDN) for each VNIC + // within this subnet (for example, `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be an alphanumeric string that begins with a letter and is unique within the VCN. + // The value cannot be changed. + // The absence of this parameter means the Internet and VCN Resolver + // will not resolve hostnames of instances in this subnet. + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `subnet123` + DnsLabel *string `mandatory:"false" json:"dnsLabel"` + + // Whether VNICs within this subnet can have public IP addresses. + // Defaults to false, which means VNICs created in this subnet will + // automatically be assigned public IP addresses unless specified + // otherwise during instance launch or VNIC creation (with the + // `assignPublicIp` flag in + // CreateVnicDetails). + // If `prohibitPublicIpOnVnic` is set to true, VNICs created in this + // subnet cannot have public IP addresses (that is, it's a private + // subnet). 
+ // Example: `true` + ProhibitPublicIpOnVnic *bool `mandatory:"false" json:"prohibitPublicIpOnVnic"` + + // OCIDs for the security lists to use for VNICs in this subnet. + SecurityListIds []string `mandatory:"false" json:"securityListIds"` + + // The subnet's domain name, which consists of the subnet's DNS label, + // the VCN's DNS label, and the `oraclevcn.com` domain. + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `subnet123.vcn1.oraclevcn.com` + SubnetDomainName *string `mandatory:"false" json:"subnetDomainName"` + + // The date and time the subnet was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m Subnet) String() string { + return common.PointerString(m) +} + +// SubnetLifecycleStateEnum Enum with underlying type: string +type SubnetLifecycleStateEnum string + +// Set of constants representing the allowable values for SubnetLifecycleState +const ( + SubnetLifecycleStateProvisioning SubnetLifecycleStateEnum = "PROVISIONING" + SubnetLifecycleStateAvailable SubnetLifecycleStateEnum = "AVAILABLE" + SubnetLifecycleStateTerminating SubnetLifecycleStateEnum = "TERMINATING" + SubnetLifecycleStateTerminated SubnetLifecycleStateEnum = "TERMINATED" +) + +var mappingSubnetLifecycleState = map[string]SubnetLifecycleStateEnum{ + "PROVISIONING": SubnetLifecycleStateProvisioning, + "AVAILABLE": SubnetLifecycleStateAvailable, + "TERMINATING": SubnetLifecycleStateTerminating, + "TERMINATED": SubnetLifecycleStateTerminated, +} + +// GetSubnetLifecycleStateEnumValues Enumerates the set of values for SubnetLifecycleState +func GetSubnetLifecycleStateEnumValues() []SubnetLifecycleStateEnum { + values := make([]SubnetLifecycleStateEnum, 0) + for _, v := range mappingSubnetLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/tcp_options.go b/vendor/github.com/oracle/oci-go-sdk/core/tcp_options.go new file mode 100644 index 000000000..4037afbe8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/tcp_options.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// TcpOptions Optional object to specify ports for a TCP rule. If you specify TCP as the +// protocol but omit this object, then all ports are allowed. +type TcpOptions struct { + + // An inclusive range of allowed destination ports. Use the same number for the min and max + // to indicate a single port. Defaults to all ports if not specified. + DestinationPortRange *PortRange `mandatory:"false" json:"destinationPortRange"` + + // An inclusive range of allowed source ports. Use the same number for the min and max to + // indicate a single port. Defaults to all ports if not specified. 
+ SourcePortRange *PortRange `mandatory:"false" json:"sourcePortRange"` +} + +func (m TcpOptions) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/terminate_instance_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/terminate_instance_request_response.go new file mode 100644 index 000000000..085b4c5d8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/terminate_instance_request_response.go @@ -0,0 +1,44 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// TerminateInstanceRequest wrapper for the TerminateInstance operation +type TerminateInstanceRequest struct { + + // The OCID of the instance. + InstanceId *string `mandatory:"true" contributesTo:"path" name:"instanceId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // Specifies whether to delete or preserve the boot volume when terminating an instance. + // The default value is false. + PreserveBootVolume *bool `mandatory:"false" contributesTo:"query" name:"preserveBootVolume"` +} + +func (request TerminateInstanceRequest) String() string { + return common.PointerString(request) +} + +// TerminateInstanceResponse wrapper for the TerminateInstance operation +type TerminateInstanceResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response TerminateInstanceResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/tunnel_config.go b/vendor/github.com/oracle/oci-go-sdk/core/tunnel_config.go new file mode 100644 index 000000000..fe12d0997 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/tunnel_config.go @@ -0,0 +1,33 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// TunnelConfig Specific connection details for an IPSec tunnel. +type TunnelConfig struct { + + // The IP address of Oracle's VPN headend. + // Example: `129.146.17.50` + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // The shared secret of the IPSec tunnel. + // Example: `vFG2IF6TWq4UToUiLSRDoJEUs6j1c.p8G.dVQxiMfMO0yXMLi.lZTbYIWhGu4V8o` + SharedSecret *string `mandatory:"true" json:"sharedSecret"` + + // The date and time the IPSec connection was created, in the format defined by RFC3339. 
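`PortRange` and `TcpOptions` above encode the "set min and max to the same value for a single port" convention. As a sketch, a TCP rule option restricted to one destination port (say, SSH on 22) could be built as below; the security rule that would carry it is defined elsewhere in the SDK, and `common.Int` is the SDK's pointer helper.

```
package example

import (
	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/core"
)

// singlePortTCP narrows a TCP rule to one destination port using the
// min == max convention described for PortRange.
func singlePortTCP(port int) core.TcpOptions {
	return core.TcpOptions{
		DestinationPortRange: &core.PortRange{
			Min: common.Int(port),
			Max: common.Int(port),
		},
		// SourcePortRange left nil: all source ports are allowed.
	}
}
```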
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m TunnelConfig) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/tunnel_status.go b/vendor/github.com/oracle/oci-go-sdk/core/tunnel_status.go new file mode 100644 index 000000000..c2655abc8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/tunnel_status.go @@ -0,0 +1,61 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// TunnelStatus Specific connection details for an IPSec tunnel. +type TunnelStatus struct { + + // The IP address of Oracle's VPN headend. + // Example: `129.146.17.50` + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // The tunnel's current state. + LifecycleState TunnelStatusLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The date and time the IPSec connection was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // When the state of the tunnel last changed, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeStateModified *common.SDKTime `mandatory:"false" json:"timeStateModified"` +} + +func (m TunnelStatus) String() string { + return common.PointerString(m) +} + +// TunnelStatusLifecycleStateEnum Enum with underlying type: string +type TunnelStatusLifecycleStateEnum string + +// Set of constants representing the allowable values for TunnelStatusLifecycleState +const ( + TunnelStatusLifecycleStateUp TunnelStatusLifecycleStateEnum = "UP" + TunnelStatusLifecycleStateDown TunnelStatusLifecycleStateEnum = "DOWN" + TunnelStatusLifecycleStateDownForMaintenance TunnelStatusLifecycleStateEnum = "DOWN_FOR_MAINTENANCE" +) + +var mappingTunnelStatusLifecycleState = map[string]TunnelStatusLifecycleStateEnum{ + "UP": TunnelStatusLifecycleStateUp, + "DOWN": TunnelStatusLifecycleStateDown, + "DOWN_FOR_MAINTENANCE": TunnelStatusLifecycleStateDownForMaintenance, +} + +// GetTunnelStatusLifecycleStateEnumValues Enumerates the set of values for TunnelStatusLifecycleState +func GetTunnelStatusLifecycleStateEnumValues() []TunnelStatusLifecycleStateEnum { + values := make([]TunnelStatusLifecycleStateEnum, 0) + for _, v := range mappingTunnelStatusLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/udp_options.go b/vendor/github.com/oracle/oci-go-sdk/core/udp_options.go new file mode 100644 index 000000000..c68886ee2 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/udp_options.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UdpOptions Optional object to specify ports for a UDP rule. If you specify UDP as the +// protocol but omit this object, then all ports are allowed. +type UdpOptions struct { + + // An inclusive range of allowed destination ports. Use the same number for the min and max + // to indicate a single port. 
Defaults to all ports if not specified. + DestinationPortRange *PortRange `mandatory:"false" json:"destinationPortRange"` + + // An inclusive range of allowed source ports. Use the same number for the min and max to + // indicate a single port. Defaults to all ports if not specified. + SourcePortRange *PortRange `mandatory:"false" json:"sourcePortRange"` +} + +func (m UdpOptions) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_boot_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_boot_volume_details.go new file mode 100644 index 000000000..ead6727c7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_boot_volume_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateBootVolumeDetails The representation of UpdateBootVolumeDetails +type UpdateBootVolumeDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateBootVolumeDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_boot_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_boot_volume_request_response.go new file mode 100644 index 000000000..08ea32ad4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_boot_volume_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateBootVolumeRequest wrapper for the UpdateBootVolume operation +type UpdateBootVolumeRequest struct { + + // The OCID of the boot volume. + BootVolumeId *string `mandatory:"true" contributesTo:"path" name:"bootVolumeId"` + + // Update boot volume's display name. + UpdateBootVolumeDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateBootVolumeRequest) String() string { + return common.PointerString(request) +} + +// UpdateBootVolumeResponse wrapper for the UpdateBootVolume operation +type UpdateBootVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The BootVolume instance + BootVolume `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateBootVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_console_history_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_console_history_details.go new file mode 100644 index 000000000..cfecb4ead --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_console_history_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateConsoleHistoryDetails The representation of UpdateConsoleHistoryDetails +type UpdateConsoleHistoryDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateConsoleHistoryDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_console_history_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_console_history_request_response.go new file mode 100644 index 000000000..c9bc64732 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_console_history_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateConsoleHistoryRequest wrapper for the UpdateConsoleHistory operation +type UpdateConsoleHistoryRequest struct { + + // The OCID of the console history. + InstanceConsoleHistoryId *string `mandatory:"true" contributesTo:"path" name:"instanceConsoleHistoryId"` + + // Update instance fields + UpdateConsoleHistoryDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateConsoleHistoryRequest) String() string { + return common.PointerString(request) +} + +// UpdateConsoleHistoryResponse wrapper for the UpdateConsoleHistory operation +type UpdateConsoleHistoryResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The ConsoleHistory instance + ConsoleHistory `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
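The `if-match` request header and `etag` response header shared by these update wrappers are the optimistic-concurrency mechanism: the update is applied only if the caller's etag still matches the resource's current etag, otherwise the service rejects it. A minimal sketch of threading an etag through `UpdateBootVolume` follows; the `bootVolumeUpdater` interface and `renameBootVolume` name are stand-ins (the concrete client, and the GET call that produced the etag, live elsewhere in the SDK).

```
package example

import (
	"context"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/core"
)

// bootVolumeUpdater stands in for the generated client; only the method signature matters here.
type bootVolumeUpdater interface {
	UpdateBootVolume(ctx context.Context, request core.UpdateBootVolumeRequest) (core.UpdateBootVolumeResponse, error)
}

// renameBootVolume changes the display name only if the supplied etag still matches the
// resource's current etag, so a concurrent change is not silently overwritten.
// It returns the new etag for use in subsequent conditional updates.
func renameBootVolume(ctx context.Context, client bootVolumeUpdater, bootVolumeID, etag, newName string) (*string, error) {
	response, err := client.UpdateBootVolume(ctx, core.UpdateBootVolumeRequest{
		BootVolumeId: common.String(bootVolumeID),
		UpdateBootVolumeDetails: core.UpdateBootVolumeDetails{
			DisplayName: common.String(newName),
		},
		IfMatch: common.String(etag), // etag captured from a previous GET or POST of this boot volume
	})
	if err != nil {
		return nil, err // a stale etag surfaces as an error from the service
	}
	return response.Etag, nil
}
```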
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateConsoleHistoryResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_cpe_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_cpe_details.go new file mode 100644 index 000000000..bd0daa6ef --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_cpe_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateCpeDetails The representation of UpdateCpeDetails +type UpdateCpeDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateCpeDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_cpe_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_cpe_request_response.go new file mode 100644 index 000000000..a87170834 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_cpe_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateCpeRequest wrapper for the UpdateCpe operation +type UpdateCpeRequest struct { + + // The OCID of the CPE. + CpeId *string `mandatory:"true" contributesTo:"path" name:"cpeId"` + + // Details object for updating a CPE. + UpdateCpeDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateCpeRequest) String() string { + return common.PointerString(request) +} + +// UpdateCpeResponse wrapper for the UpdateCpe operation +type UpdateCpeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Cpe instance + Cpe `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateCpeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_details.go new file mode 100644 index 000000000..4cfa19004 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_details.go @@ -0,0 +1,31 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateCrossConnectDetails Update a CrossConnect +type UpdateCrossConnectDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Set to true to activate the cross-connect. You activate it after the physical cabling + // is complete, and you've confirmed the cross-connect's light levels are good and your side + // of the interface is up. Activation indicates to Oracle that the physical connection is ready. + // Example: `true` + IsActive *bool `mandatory:"false" json:"isActive"` +} + +func (m UpdateCrossConnectDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_group_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_group_details.go new file mode 100644 index 000000000..97a424d29 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_group_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateCrossConnectGroupDetails The representation of UpdateCrossConnectGroupDetails +type UpdateCrossConnectGroupDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateCrossConnectGroupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_group_request_response.go new file mode 100644 index 000000000..23164788c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_group_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateCrossConnectGroupRequest wrapper for the UpdateCrossConnectGroup operation +type UpdateCrossConnectGroupRequest struct { + + // The OCID of the cross-connect group. + CrossConnectGroupId *string `mandatory:"true" contributesTo:"path" name:"crossConnectGroupId"` + + // Update CrossConnectGroup fields + UpdateCrossConnectGroupDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateCrossConnectGroupRequest) String() string { + return common.PointerString(request) +} + +// UpdateCrossConnectGroupResponse wrapper for the UpdateCrossConnectGroup operation +type UpdateCrossConnectGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CrossConnectGroup instance + CrossConnectGroup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateCrossConnectGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_request_response.go new file mode 100644 index 000000000..8af219a51 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_cross_connect_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateCrossConnectRequest wrapper for the UpdateCrossConnect operation +type UpdateCrossConnectRequest struct { + + // The OCID of the cross-connect. + CrossConnectId *string `mandatory:"true" contributesTo:"path" name:"crossConnectId"` + + // Update CrossConnect fields. + UpdateCrossConnectDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateCrossConnectRequest) String() string { + return common.PointerString(request) +} + +// UpdateCrossConnectResponse wrapper for the UpdateCrossConnect operation +type UpdateCrossConnectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CrossConnect instance + CrossConnect `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateCrossConnectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_dhcp_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_dhcp_details.go new file mode 100644 index 000000000..b2e221a13 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_dhcp_details.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateDhcpDetails The representation of UpdateDhcpDetails +type UpdateDhcpDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + Options []DhcpOption `mandatory:"false" json:"options"` +} + +func (m UpdateDhcpDetails) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *UpdateDhcpDetails) UnmarshalJSON(data []byte) (e error) { + model := struct { + DisplayName *string `json:"displayName"` + Options []dhcpoption `json:"options"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.DisplayName = model.DisplayName + m.Options = make([]DhcpOption, len(model.Options)) + for i, n := range model.Options { + nn, err := n.UnmarshalPolymorphicJSON(n.JsonData) + if err != nil { + return err + } + m.Options[i] = nn + } + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_dhcp_options_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_dhcp_options_request_response.go new file mode 100644 index 000000000..c8b6e542d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_dhcp_options_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateDhcpOptionsRequest wrapper for the UpdateDhcpOptions operation +type UpdateDhcpOptionsRequest struct { + + // The OCID for the set of DHCP options. + DhcpId *string `mandatory:"true" contributesTo:"path" name:"dhcpId"` + + // Request object for updating a set of DHCP options. + UpdateDhcpDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateDhcpOptionsRequest) String() string { + return common.PointerString(request) +} + +// UpdateDhcpOptionsResponse wrapper for the UpdateDhcpOptions operation +type UpdateDhcpOptionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DhcpOptions instance + DhcpOptions `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateDhcpOptionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_drg_attachment_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_attachment_details.go new file mode 100644 index 000000000..876f22c9d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_attachment_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateDrgAttachmentDetails The representation of UpdateDrgAttachmentDetails +type UpdateDrgAttachmentDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateDrgAttachmentDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_drg_attachment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_attachment_request_response.go new file mode 100644 index 000000000..c0580c97c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_attachment_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateDrgAttachmentRequest wrapper for the UpdateDrgAttachment operation +type UpdateDrgAttachmentRequest struct { + + // The OCID of the DRG attachment. + DrgAttachmentId *string `mandatory:"true" contributesTo:"path" name:"drgAttachmentId"` + + // Details object for updating a `DrgAttachment`. + UpdateDrgAttachmentDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateDrgAttachmentRequest) String() string { + return common.PointerString(request) +} + +// UpdateDrgAttachmentResponse wrapper for the UpdateDrgAttachment operation +type UpdateDrgAttachmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DrgAttachment instance + DrgAttachment `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateDrgAttachmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_drg_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_details.go new file mode 100644 index 000000000..2e64240b5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateDrgDetails The representation of UpdateDrgDetails +type UpdateDrgDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. 
+ DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateDrgDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_drg_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_request_response.go new file mode 100644 index 000000000..0aba53e12 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_drg_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateDrgRequest wrapper for the UpdateDrg operation +type UpdateDrgRequest struct { + + // The OCID of the DRG. + DrgId *string `mandatory:"true" contributesTo:"path" name:"drgId"` + + // Details object for updating a DRG. + UpdateDrgDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateDrgRequest) String() string { + return common.PointerString(request) +} + +// UpdateDrgResponse wrapper for the UpdateDrg operation +type UpdateDrgResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Drg instance + Drg `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateDrgResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_i_p_sec_connection_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_i_p_sec_connection_request_response.go new file mode 100644 index 000000000..ad809bea8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_i_p_sec_connection_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateIPSecConnectionRequest wrapper for the UpdateIPSecConnection operation +type UpdateIPSecConnectionRequest struct { + + // The OCID of the IPSec connection. + IpscId *string `mandatory:"true" contributesTo:"path" name:"ipscId"` + + // Details object for updating a IPSec connection. + UpdateIpSecConnectionDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateIPSecConnectionRequest) String() string { + return common.PointerString(request) +} + +// UpdateIPSecConnectionResponse wrapper for the UpdateIPSecConnection operation +type UpdateIPSecConnectionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IpSecConnection instance + IpSecConnection `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateIPSecConnectionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_image_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_image_details.go new file mode 100644 index 000000000..f3c35d5fa --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_image_details.go @@ -0,0 +1,26 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateImageDetails The representation of UpdateImageDetails +type UpdateImageDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + // Example: `My custom Oracle Linux image` + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateImageDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_image_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_image_request_response.go new file mode 100644 index 000000000..6757b1d1a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_image_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateImageRequest wrapper for the UpdateImage operation +type UpdateImageRequest struct { + + // The OCID of the image. + ImageId *string `mandatory:"true" contributesTo:"path" name:"imageId"` + + // Updates the image display name field. Avoid entering confidential information. + UpdateImageDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. 
The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateImageRequest) String() string { + return common.PointerString(request) +} + +// UpdateImageResponse wrapper for the UpdateImage operation +type UpdateImageResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Image instance + Image `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateImageResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_instance_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_instance_details.go new file mode 100644 index 000000000..e18c8c402 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_instance_details.go @@ -0,0 +1,26 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateInstanceDetails The representation of UpdateInstanceDetails +type UpdateInstanceDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + // Example: `My bare metal instance` + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateInstanceDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_instance_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_instance_request_response.go new file mode 100644 index 000000000..3705804cd --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_instance_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateInstanceRequest wrapper for the UpdateInstance operation +type UpdateInstanceRequest struct { + + // The OCID of the instance. + InstanceId *string `mandatory:"true" contributesTo:"path" name:"instanceId"` + + // Update instance fields + UpdateInstanceDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. 
The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateInstanceRequest) String() string { + return common.PointerString(request) +} + +// UpdateInstanceResponse wrapper for the UpdateInstance operation +type UpdateInstanceResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Instance instance + Instance `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateInstanceResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_internet_gateway_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_internet_gateway_details.go new file mode 100644 index 000000000..9410eeef1 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_internet_gateway_details.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateInternetGatewayDetails The representation of UpdateInternetGatewayDetails +type UpdateInternetGatewayDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Whether the gateway is enabled. + IsEnabled *bool `mandatory:"false" json:"isEnabled"` +} + +func (m UpdateInternetGatewayDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_internet_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_internet_gateway_request_response.go new file mode 100644 index 000000000..a81ca3093 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_internet_gateway_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateInternetGatewayRequest wrapper for the UpdateInternetGateway operation +type UpdateInternetGatewayRequest struct { + + // The OCID of the Internet Gateway. + IgId *string `mandatory:"true" contributesTo:"path" name:"igId"` + + // Details for updating the Internet Gateway. + UpdateInternetGatewayDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateInternetGatewayRequest) String() string { + return common.PointerString(request) +} + +// UpdateInternetGatewayResponse wrapper for the UpdateInternetGateway operation +type UpdateInternetGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The InternetGateway instance + InternetGateway `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateInternetGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_ip_sec_connection_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_ip_sec_connection_details.go new file mode 100644 index 000000000..b6834690b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_ip_sec_connection_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateIpSecConnectionDetails The representation of UpdateIpSecConnectionDetails +type UpdateIpSecConnectionDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateIpSecConnectionDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_local_peering_gateway_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_local_peering_gateway_details.go new file mode 100644 index 000000000..7b9d10083 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_local_peering_gateway_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateLocalPeeringGatewayDetails The representation of UpdateLocalPeeringGatewayDetails +type UpdateLocalPeeringGatewayDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid + // entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateLocalPeeringGatewayDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_local_peering_gateway_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_local_peering_gateway_request_response.go new file mode 100644 index 000000000..093c48638 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_local_peering_gateway_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateLocalPeeringGatewayRequest wrapper for the UpdateLocalPeeringGateway operation +type UpdateLocalPeeringGatewayRequest struct { + + // The OCID of the local peering gateway. + LocalPeeringGatewayId *string `mandatory:"true" contributesTo:"path" name:"localPeeringGatewayId"` + + // Details object for updating a local peering gateway. + UpdateLocalPeeringGatewayDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateLocalPeeringGatewayRequest) String() string { + return common.PointerString(request) +} + +// UpdateLocalPeeringGatewayResponse wrapper for the UpdateLocalPeeringGateway operation +type UpdateLocalPeeringGatewayResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The LocalPeeringGateway instance + LocalPeeringGateway `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateLocalPeeringGatewayResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_private_ip_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_private_ip_details.go new file mode 100644 index 000000000..96c7a2a74 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_private_ip_details.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdatePrivateIpDetails The representation of UpdatePrivateIpDetails +type UpdatePrivateIpDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. Avoid + // entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The hostname for the private IP. Used for DNS. The value + // is the hostname portion of the private IP's fully qualified domain name (FQDN) + // (for example, `bminstance-1` in FQDN `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be unique across all VNICs in the subnet and comply with + // RFC 952 (https://tools.ietf.org/html/rfc952) and + // RFC 1123 (https://tools.ietf.org/html/rfc1123). + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `bminstance-1` + HostnameLabel *string `mandatory:"false" json:"hostnameLabel"` + + // The OCID of the VNIC to reassign the private IP to. The VNIC must + // be in the same subnet as the current VNIC. 
+ VnicId *string `mandatory:"false" json:"vnicId"` +} + +func (m UpdatePrivateIpDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_private_ip_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_private_ip_request_response.go new file mode 100644 index 000000000..ae74cf5c7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_private_ip_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdatePrivateIpRequest wrapper for the UpdatePrivateIp operation +type UpdatePrivateIpRequest struct { + + // The private IP's OCID. + PrivateIpId *string `mandatory:"true" contributesTo:"path" name:"privateIpId"` + + // Private IP details. + UpdatePrivateIpDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdatePrivateIpRequest) String() string { + return common.PointerString(request) +} + +// UpdatePrivateIpResponse wrapper for the UpdatePrivateIp operation +type UpdatePrivateIpResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The PrivateIp instance + PrivateIp `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdatePrivateIpResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_route_table_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_route_table_details.go new file mode 100644 index 000000000..547a1b2b9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_route_table_details.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateRouteTableDetails The representation of UpdateRouteTableDetails +type UpdateRouteTableDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The collection of rules used for routing destination IPs to network devices. 
+ RouteRules []RouteRule `mandatory:"false" json:"routeRules"` +} + +func (m UpdateRouteTableDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_route_table_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_route_table_request_response.go new file mode 100644 index 000000000..5323576a9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_route_table_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateRouteTableRequest wrapper for the UpdateRouteTable operation +type UpdateRouteTableRequest struct { + + // The OCID of the route table. + RtId *string `mandatory:"true" contributesTo:"path" name:"rtId"` + + // Details object for updating a route table. + UpdateRouteTableDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateRouteTableRequest) String() string { + return common.PointerString(request) +} + +// UpdateRouteTableResponse wrapper for the UpdateRouteTable operation +type UpdateRouteTableResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The RouteTable instance + RouteTable `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateRouteTableResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_security_list_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_security_list_details.go new file mode 100644 index 000000000..d23a3f75c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_security_list_details.go @@ -0,0 +1,31 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateSecurityListDetails The representation of UpdateSecurityListDetails +type UpdateSecurityListDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Rules for allowing egress IP packets. + EgressSecurityRules []EgressSecurityRule `mandatory:"false" json:"egressSecurityRules"` + + // Rules for allowing ingress IP packets. 
+ IngressSecurityRules []IngressSecurityRule `mandatory:"false" json:"ingressSecurityRules"` +} + +func (m UpdateSecurityListDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_security_list_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_security_list_request_response.go new file mode 100644 index 000000000..d9e7b468d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_security_list_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateSecurityListRequest wrapper for the UpdateSecurityList operation +type UpdateSecurityListRequest struct { + + // The OCID of the security list. + SecurityListId *string `mandatory:"true" contributesTo:"path" name:"securityListId"` + + // Updated details for the security list. + UpdateSecurityListDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateSecurityListRequest) String() string { + return common.PointerString(request) +} + +// UpdateSecurityListResponse wrapper for the UpdateSecurityList operation +type UpdateSecurityListResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The SecurityList instance + SecurityList `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateSecurityListResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_subnet_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_subnet_details.go new file mode 100644 index 000000000..e15eaa013 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_subnet_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateSubnetDetails The representation of UpdateSubnetDetails +type UpdateSubnetDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. 
+ DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateSubnetDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_subnet_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_subnet_request_response.go new file mode 100644 index 000000000..1dec9a727 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_subnet_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateSubnetRequest wrapper for the UpdateSubnet operation +type UpdateSubnetRequest struct { + + // The OCID of the subnet. + SubnetId *string `mandatory:"true" contributesTo:"path" name:"subnetId"` + + // Details object for updating a subnet. + UpdateSubnetDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateSubnetRequest) String() string { + return common.PointerString(request) +} + +// UpdateSubnetResponse wrapper for the UpdateSubnet operation +type UpdateSubnetResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Subnet instance + Subnet `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateSubnetResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_vcn_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_vcn_details.go new file mode 100644 index 000000000..e4dca15cd --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_vcn_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateVcnDetails The representation of UpdateVcnDetails +type UpdateVcnDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateVcnDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_vcn_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_vcn_request_response.go new file mode 100644 index 000000000..37e483eb0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_vcn_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateVcnRequest wrapper for the UpdateVcn operation +type UpdateVcnRequest struct { + + // The OCID of the VCN. + VcnId *string `mandatory:"true" contributesTo:"path" name:"vcnId"` + + // Details object for updating a VCN. + UpdateVcnDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateVcnRequest) String() string { + return common.PointerString(request) +} + +// UpdateVcnResponse wrapper for the UpdateVcn operation +type UpdateVcnResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Vcn instance + Vcn `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateVcnResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_virtual_circuit_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_virtual_circuit_details.go new file mode 100644 index 000000000..cbfaf178f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_virtual_circuit_details.go @@ -0,0 +1,90 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateVirtualCircuitDetails The representation of UpdateVirtualCircuitDetails +type UpdateVirtualCircuitDetails struct { + + // The provisioned data rate of the connection. To get a list of the + // available bandwidth levels (that is, shapes), see + // ListFastConnectProviderVirtualCircuitBandwidthShapes. + // To be updated only by the customer who owns the virtual circuit. + BandwidthShapeName *string `mandatory:"false" json:"bandwidthShapeName"` + + // An array of mappings, each containing properties for a cross-connect or + // cross-connect group associated with this virtual circuit. + // The customer and provider can update different properties in the mapping + // depending on the situation. See the description of the + // CrossConnectMapping. + CrossConnectMappings []CrossConnectMapping `mandatory:"false" json:"crossConnectMappings"` + + // The BGP ASN of the network at the other end of the BGP + // session from Oracle. + // If the BGP session is from the customer's edge router to Oracle, the + // required value is the customer's ASN, and it can be updated only + // by the customer. + // If the BGP session is from the provider's edge router to Oracle, the + // required value is the provider's ASN, and it can be updated only + // by the provider. + CustomerBgpAsn *int `mandatory:"false" json:"customerBgpAsn"` + + // A user-friendly name. Does not have to be unique. + // Avoid entering confidential information. 
+ // To be updated only by the customer who owns the virtual circuit. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The OCID of the Drg + // that this private virtual circuit uses. + // To be updated only by the customer who owns the virtual circuit. + GatewayId *string `mandatory:"false" json:"gatewayId"` + + // The provider's state in relation to this virtual circuit. Relevant only + // if the customer is using FastConnect via a provider. ACTIVE + // means the provider has provisioned the virtual circuit from their + // end. INACTIVE means the provider has not yet provisioned the virtual + // circuit, or has de-provisioned it. + // To be updated only by the provider. + ProviderState UpdateVirtualCircuitDetailsProviderStateEnum `mandatory:"false" json:"providerState,omitempty"` + + // Provider-supplied reference information about this virtual circuit. + // Relevant only if the customer is using FastConnect via a provider. + // To be updated only by the provider. + ReferenceComment *string `mandatory:"false" json:"referenceComment"` +} + +func (m UpdateVirtualCircuitDetails) String() string { + return common.PointerString(m) +} + +// UpdateVirtualCircuitDetailsProviderStateEnum Enum with underlying type: string +type UpdateVirtualCircuitDetailsProviderStateEnum string + +// Set of constants representing the allowable values for UpdateVirtualCircuitDetailsProviderState +const ( + UpdateVirtualCircuitDetailsProviderStateActive UpdateVirtualCircuitDetailsProviderStateEnum = "ACTIVE" + UpdateVirtualCircuitDetailsProviderStateInactive UpdateVirtualCircuitDetailsProviderStateEnum = "INACTIVE" +) + +var mappingUpdateVirtualCircuitDetailsProviderState = map[string]UpdateVirtualCircuitDetailsProviderStateEnum{ + "ACTIVE": UpdateVirtualCircuitDetailsProviderStateActive, + "INACTIVE": UpdateVirtualCircuitDetailsProviderStateInactive, +} + +// GetUpdateVirtualCircuitDetailsProviderStateEnumValues Enumerates the set of values for UpdateVirtualCircuitDetailsProviderState +func GetUpdateVirtualCircuitDetailsProviderStateEnumValues() []UpdateVirtualCircuitDetailsProviderStateEnum { + values := make([]UpdateVirtualCircuitDetailsProviderStateEnum, 0) + for _, v := range mappingUpdateVirtualCircuitDetailsProviderState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_virtual_circuit_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_virtual_circuit_request_response.go new file mode 100644 index 000000000..7f2a8df15 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_virtual_circuit_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateVirtualCircuitRequest wrapper for the UpdateVirtualCircuit operation +type UpdateVirtualCircuitRequest struct { + + // The OCID of the virtual circuit. + VirtualCircuitId *string `mandatory:"true" contributesTo:"path" name:"virtualCircuitId"` + + // Update VirtualCircuit fields. + UpdateVirtualCircuitDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateVirtualCircuitRequest) String() string { + return common.PointerString(request) +} + +// UpdateVirtualCircuitResponse wrapper for the UpdateVirtualCircuit operation +type UpdateVirtualCircuitResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VirtualCircuit instance + VirtualCircuit `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateVirtualCircuitResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_vnic_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_vnic_details.go new file mode 100644 index 000000000..6eb74fde7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_vnic_details.go @@ -0,0 +1,45 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateVnicDetails The representation of UpdateVnicDetails +type UpdateVnicDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The hostname for the VNIC's primary private IP. Used for DNS. The value is the hostname + // portion of the primary private IP's fully qualified domain name (FQDN) + // (for example, `bminstance-1` in FQDN `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be unique across all VNICs in the subnet and comply with + // RFC 952 (https://tools.ietf.org/html/rfc952) and + // RFC 1123 (https://tools.ietf.org/html/rfc1123). + // The value appears in the Vnic object and also the + // PrivateIp object returned by + // ListPrivateIps and + // GetPrivateIp. + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + HostnameLabel *string `mandatory:"false" json:"hostnameLabel"` + + // Whether the source/destination check is disabled on the VNIC. + // Defaults to `false`, which means the check is performed. For information + // about why you would skip the source/destination check, see + // Using a Private IP as a Route Target (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm#privateip). + // Example: `true` + SkipSourceDestCheck *bool `mandatory:"false" json:"skipSourceDestCheck"` +} + +func (m UpdateVnicDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_vnic_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_vnic_request_response.go new file mode 100644 index 000000000..9bacd4eb5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_vnic_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateVnicRequest wrapper for the UpdateVnic operation +type UpdateVnicRequest struct { + + // The OCID of the VNIC. + VnicId *string `mandatory:"true" contributesTo:"path" name:"vnicId"` + + // Details object for updating a VNIC. + UpdateVnicDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateVnicRequest) String() string { + return common.PointerString(request) +} + +// UpdateVnicResponse wrapper for the UpdateVnic operation +type UpdateVnicResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Vnic instance + Vnic `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateVnicResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_volume_backup_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_backup_details.go new file mode 100644 index 000000000..05be76e2b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_backup_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateVolumeBackupDetails The representation of UpdateVolumeBackupDetails +type UpdateVolumeBackupDetails struct { + + // A friendly user-specified name for the volume backup. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateVolumeBackupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_volume_backup_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_backup_request_response.go new file mode 100644 index 000000000..e98d05109 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_backup_request_response.go @@ -0,0 +1,45 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateVolumeBackupRequest wrapper for the UpdateVolumeBackup operation +type UpdateVolumeBackupRequest struct { + + // The OCID of the volume backup. + VolumeBackupId *string `mandatory:"true" contributesTo:"path" name:"volumeBackupId"` + + // Update volume backup fields + UpdateVolumeBackupDetails `contributesTo:"body"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateVolumeBackupRequest) String() string { + return common.PointerString(request) +} + +// UpdateVolumeBackupResponse wrapper for the UpdateVolumeBackup operation +type UpdateVolumeBackupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The VolumeBackup instance + VolumeBackup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateVolumeBackupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_details.go new file mode 100644 index 000000000..e0b642bd7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_details.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateVolumeDetails The representation of UpdateVolumeDetails +type UpdateVolumeDetails struct { + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateVolumeDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/update_volume_request_response.go b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_request_response.go new file mode 100644 index 000000000..b8b044f97 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/update_volume_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateVolumeRequest wrapper for the UpdateVolume operation +type UpdateVolumeRequest struct { + + // The OCID of the volume. + VolumeId *string `mandatory:"true" contributesTo:"path" name:"volumeId"` + + // Update volume's display name. Avoid entering confidential information. + UpdateVolumeDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateVolumeRequest) String() string { + return common.PointerString(request) +} + +// UpdateVolumeResponse wrapper for the UpdateVolume operation +type UpdateVolumeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Volume instance + Volume `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. 
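The `IfMatch`/`Etag` pair implements optimistic concurrency: the etag returned by an earlier GET or update is echoed back as `if-match`, and the update succeeds only if the resource has not changed since. A hedged sketch of the pattern with placeholder OCID and etag values; in real use the request would presumably be passed to the corresponding service client's `UpdateVolume` operation, which is not part of this section of the diff:

```go
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/core"
)

func main() {
	// Etag as returned by a previous response for this volume
	// (placeholder value for illustration).
	previousEtag := "4a1b2c3d"

	req := core.UpdateVolumeRequest{
		VolumeId: common.String("ocid1.volume.oc1..exampleuniqueID"),
		UpdateVolumeDetails: core.UpdateVolumeDetails{
			DisplayName: common.String("renamed-volume"),
		},
		// The update is applied only if the volume's current etag still matches.
		IfMatch: common.String(previousEtag),
	}
	fmt.Println(req)
}
```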
+ Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateVolumeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/vcn.go b/vendor/github.com/oracle/oci-go-sdk/core/vcn.go new file mode 100644 index 000000000..8077a531d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/vcn.go @@ -0,0 +1,101 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Vcn A Virtual Cloud Network (VCN). For more information, see +// Overview of the Networking Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Vcn struct { + + // The CIDR IP address block of the VCN. + // Example: `172.16.0.0/16` + CidrBlock *string `mandatory:"true" json:"cidrBlock"` + + // The OCID of the compartment containing the VCN. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The VCN's Oracle ID (OCID). + Id *string `mandatory:"true" json:"id"` + + // The VCN's current state. + LifecycleState VcnLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID for the VCN's default set of DHCP options. + DefaultDhcpOptionsId *string `mandatory:"false" json:"defaultDhcpOptionsId"` + + // The OCID for the VCN's default route table. + DefaultRouteTableId *string `mandatory:"false" json:"defaultRouteTableId"` + + // The OCID for the VCN's default security list. + DefaultSecurityListId *string `mandatory:"false" json:"defaultSecurityListId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // A DNS label for the VCN, used in conjunction with the VNIC's hostname and + // subnet's DNS label to form a fully qualified domain name (FQDN) for each VNIC + // within this subnet (for example, `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be an alphanumeric string that begins with a letter. + // The value cannot be changed. + // The absence of this parameter means the Internet and VCN Resolver will + // not work for this VCN. + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `vcn1` + DnsLabel *string `mandatory:"false" json:"dnsLabel"` + + // The date and time the VCN was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The VCN's domain name, which consists of the VCN's DNS label, and the + // `oraclevcn.com` domain. 
+ // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `vcn1.oraclevcn.com` + VcnDomainName *string `mandatory:"false" json:"vcnDomainName"` +} + +func (m Vcn) String() string { + return common.PointerString(m) +} + +// VcnLifecycleStateEnum Enum with underlying type: string +type VcnLifecycleStateEnum string + +// Set of constants representing the allowable values for VcnLifecycleState +const ( + VcnLifecycleStateProvisioning VcnLifecycleStateEnum = "PROVISIONING" + VcnLifecycleStateAvailable VcnLifecycleStateEnum = "AVAILABLE" + VcnLifecycleStateTerminating VcnLifecycleStateEnum = "TERMINATING" + VcnLifecycleStateTerminated VcnLifecycleStateEnum = "TERMINATED" +) + +var mappingVcnLifecycleState = map[string]VcnLifecycleStateEnum{ + "PROVISIONING": VcnLifecycleStateProvisioning, + "AVAILABLE": VcnLifecycleStateAvailable, + "TERMINATING": VcnLifecycleStateTerminating, + "TERMINATED": VcnLifecycleStateTerminated, +} + +// GetVcnLifecycleStateEnumValues Enumerates the set of values for VcnLifecycleState +func GetVcnLifecycleStateEnumValues() []VcnLifecycleStateEnum { + values := make([]VcnLifecycleStateEnum, 0) + for _, v := range mappingVcnLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit.go b/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit.go new file mode 100644 index 000000000..492c84bf4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit.go @@ -0,0 +1,273 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// VirtualCircuit For use with Oracle Cloud Infrastructure FastConnect. +// A virtual circuit is an isolated network path that runs over one or more physical +// network connections to provide a single, logical connection between the edge router +// on the customer's existing network and Oracle Cloud Infrastructure. *Private* +// virtual circuits support private peering, and *public* virtual circuits support +// public peering. For more information, see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +// Each virtual circuit is made up of information shared between a customer, Oracle, +// and a provider (if the customer is using FastConnect via a provider). Who fills in +// a given property of a virtual circuit depends on whether the BGP session related to +// that virtual circuit goes from the customer's edge router to Oracle, or from the provider's +// edge router to Oracle. Also, in the case where the customer is using a provider, values +// for some of the properties may not be present immediately, but may get filled in as the +// provider and Oracle each do their part to provision the virtual circuit. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type VirtualCircuit struct { + + // The provisioned data rate of the connection. 
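Each generated enum ships with typed constants, a string-to-value mapping, and a `Get...EnumValues` helper. A short sketch of how the `Vcn` lifecycle constants might be used; the `isVcnReady` helper and the literal `Vcn` value are illustrative, not part of the SDK:

```go
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/core"
)

// isVcnReady is an illustrative helper (not part of the SDK) that compares a
// VCN's lifecycle state against the generated AVAILABLE constant.
func isVcnReady(v core.Vcn) bool {
	return v.LifecycleState == core.VcnLifecycleStateAvailable
}

func main() {
	// All allowable lifecycle states, as enumerated by the generated helper.
	fmt.Println(core.GetVcnLifecycleStateEnumValues())

	// Placeholder value; a real Vcn would come from an API response.
	v := core.Vcn{LifecycleState: core.VcnLifecycleStateProvisioning}
	fmt.Println("ready:", isVcnReady(v))
}
```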
+ BandwidthShapeName *string `mandatory:"false" json:"bandwidthShapeName"` + + // BGP management option. + BgpManagement VirtualCircuitBgpManagementEnum `mandatory:"false" json:"bgpManagement,omitempty"` + + // The state of the BGP session associated with the virtual circuit. + BgpSessionState VirtualCircuitBgpSessionStateEnum `mandatory:"false" json:"bgpSessionState,omitempty"` + + // The OCID of the compartment containing the virtual circuit. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // An array of mappings, each containing properties for a + // cross-connect or cross-connect group that is associated with this + // virtual circuit. + CrossConnectMappings []CrossConnectMapping `mandatory:"false" json:"crossConnectMappings"` + + // The BGP ASN of the network at the other end of the BGP + // session from Oracle. If the session is between the customer's + // edge router and Oracle, the value is the customer's ASN. If the BGP + // session is between the provider's edge router and Oracle, the value + // is the provider's ASN. + CustomerBgpAsn *int `mandatory:"false" json:"customerBgpAsn"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The OCID of the customer's Drg + // that this virtual circuit uses. Applicable only to private virtual circuits. + GatewayId *string `mandatory:"false" json:"gatewayId"` + + // The virtual circuit's Oracle ID (OCID). + Id *string `mandatory:"false" json:"id"` + + // The virtual circuit's current state. For information about + // the different states, see + // FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). + LifecycleState VirtualCircuitLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The Oracle BGP ASN. + OracleBgpAsn *int `mandatory:"false" json:"oracleBgpAsn"` + + // Deprecated. Instead use `providerServiceId`. + ProviderName *string `mandatory:"false" json:"providerName"` + + // The OCID of the service offered by the provider (if the customer is connecting via a provider). + ProviderServiceId *string `mandatory:"false" json:"providerServiceId"` + + // Deprecated. Instead use `providerServiceId`. + ProviderServiceName *string `mandatory:"false" json:"providerServiceName"` + + // The provider's state in relation to this virtual circuit (if the + // customer is connecting via a provider). ACTIVE means + // the provider has provisioned the virtual circuit from their end. + // INACTIVE means the provider has not yet provisioned the virtual + // circuit, or has de-provisioned it. + ProviderState VirtualCircuitProviderStateEnum `mandatory:"false" json:"providerState,omitempty"` + + // For a public virtual circuit. The public IP prefixes (CIDRs) the customer wants to + // advertise across the connection. Each prefix must be /24 or less specific. + PublicPrefixes []string `mandatory:"false" json:"publicPrefixes"` + + // Provider-supplied reference information about this virtual circuit + // (if the customer is connecting via a provider). + ReferenceComment *string `mandatory:"false" json:"referenceComment"` + + // The Oracle Cloud Infrastructure region where this virtual + // circuit is located. + Region *string `mandatory:"false" json:"region"` + + // Provider service type. 
+ ServiceType VirtualCircuitServiceTypeEnum `mandatory:"false" json:"serviceType,omitempty"` + + // The date and time the virtual circuit was created, + // in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // Whether the virtual circuit supports private or public peering. For more information, + // see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). + Type VirtualCircuitTypeEnum `mandatory:"false" json:"type,omitempty"` +} + +func (m VirtualCircuit) String() string { + return common.PointerString(m) +} + +// VirtualCircuitBgpManagementEnum Enum with underlying type: string +type VirtualCircuitBgpManagementEnum string + +// Set of constants representing the allowable values for VirtualCircuitBgpManagement +const ( + VirtualCircuitBgpManagementCustomerManaged VirtualCircuitBgpManagementEnum = "CUSTOMER_MANAGED" + VirtualCircuitBgpManagementProviderManaged VirtualCircuitBgpManagementEnum = "PROVIDER_MANAGED" + VirtualCircuitBgpManagementOracleManaged VirtualCircuitBgpManagementEnum = "ORACLE_MANAGED" +) + +var mappingVirtualCircuitBgpManagement = map[string]VirtualCircuitBgpManagementEnum{ + "CUSTOMER_MANAGED": VirtualCircuitBgpManagementCustomerManaged, + "PROVIDER_MANAGED": VirtualCircuitBgpManagementProviderManaged, + "ORACLE_MANAGED": VirtualCircuitBgpManagementOracleManaged, +} + +// GetVirtualCircuitBgpManagementEnumValues Enumerates the set of values for VirtualCircuitBgpManagement +func GetVirtualCircuitBgpManagementEnumValues() []VirtualCircuitBgpManagementEnum { + values := make([]VirtualCircuitBgpManagementEnum, 0) + for _, v := range mappingVirtualCircuitBgpManagement { + values = append(values, v) + } + return values +} + +// VirtualCircuitBgpSessionStateEnum Enum with underlying type: string +type VirtualCircuitBgpSessionStateEnum string + +// Set of constants representing the allowable values for VirtualCircuitBgpSessionState +const ( + VirtualCircuitBgpSessionStateUp VirtualCircuitBgpSessionStateEnum = "UP" + VirtualCircuitBgpSessionStateDown VirtualCircuitBgpSessionStateEnum = "DOWN" +) + +var mappingVirtualCircuitBgpSessionState = map[string]VirtualCircuitBgpSessionStateEnum{ + "UP": VirtualCircuitBgpSessionStateUp, + "DOWN": VirtualCircuitBgpSessionStateDown, +} + +// GetVirtualCircuitBgpSessionStateEnumValues Enumerates the set of values for VirtualCircuitBgpSessionState +func GetVirtualCircuitBgpSessionStateEnumValues() []VirtualCircuitBgpSessionStateEnum { + values := make([]VirtualCircuitBgpSessionStateEnum, 0) + for _, v := range mappingVirtualCircuitBgpSessionState { + values = append(values, v) + } + return values +} + +// VirtualCircuitLifecycleStateEnum Enum with underlying type: string +type VirtualCircuitLifecycleStateEnum string + +// Set of constants representing the allowable values for VirtualCircuitLifecycleState +const ( + VirtualCircuitLifecycleStatePendingProvider VirtualCircuitLifecycleStateEnum = "PENDING_PROVIDER" + VirtualCircuitLifecycleStateVerifying VirtualCircuitLifecycleStateEnum = "VERIFYING" + VirtualCircuitLifecycleStateProvisioning VirtualCircuitLifecycleStateEnum = "PROVISIONING" + VirtualCircuitLifecycleStateProvisioned VirtualCircuitLifecycleStateEnum = "PROVISIONED" + VirtualCircuitLifecycleStateFailed VirtualCircuitLifecycleStateEnum = "FAILED" + VirtualCircuitLifecycleStateInactive VirtualCircuitLifecycleStateEnum = "INACTIVE" + VirtualCircuitLifecycleStateTerminating 
VirtualCircuitLifecycleStateEnum = "TERMINATING" + VirtualCircuitLifecycleStateTerminated VirtualCircuitLifecycleStateEnum = "TERMINATED" +) + +var mappingVirtualCircuitLifecycleState = map[string]VirtualCircuitLifecycleStateEnum{ + "PENDING_PROVIDER": VirtualCircuitLifecycleStatePendingProvider, + "VERIFYING": VirtualCircuitLifecycleStateVerifying, + "PROVISIONING": VirtualCircuitLifecycleStateProvisioning, + "PROVISIONED": VirtualCircuitLifecycleStateProvisioned, + "FAILED": VirtualCircuitLifecycleStateFailed, + "INACTIVE": VirtualCircuitLifecycleStateInactive, + "TERMINATING": VirtualCircuitLifecycleStateTerminating, + "TERMINATED": VirtualCircuitLifecycleStateTerminated, +} + +// GetVirtualCircuitLifecycleStateEnumValues Enumerates the set of values for VirtualCircuitLifecycleState +func GetVirtualCircuitLifecycleStateEnumValues() []VirtualCircuitLifecycleStateEnum { + values := make([]VirtualCircuitLifecycleStateEnum, 0) + for _, v := range mappingVirtualCircuitLifecycleState { + values = append(values, v) + } + return values +} + +// VirtualCircuitProviderStateEnum Enum with underlying type: string +type VirtualCircuitProviderStateEnum string + +// Set of constants representing the allowable values for VirtualCircuitProviderState +const ( + VirtualCircuitProviderStateActive VirtualCircuitProviderStateEnum = "ACTIVE" + VirtualCircuitProviderStateInactive VirtualCircuitProviderStateEnum = "INACTIVE" +) + +var mappingVirtualCircuitProviderState = map[string]VirtualCircuitProviderStateEnum{ + "ACTIVE": VirtualCircuitProviderStateActive, + "INACTIVE": VirtualCircuitProviderStateInactive, +} + +// GetVirtualCircuitProviderStateEnumValues Enumerates the set of values for VirtualCircuitProviderState +func GetVirtualCircuitProviderStateEnumValues() []VirtualCircuitProviderStateEnum { + values := make([]VirtualCircuitProviderStateEnum, 0) + for _, v := range mappingVirtualCircuitProviderState { + values = append(values, v) + } + return values +} + +// VirtualCircuitServiceTypeEnum Enum with underlying type: string +type VirtualCircuitServiceTypeEnum string + +// Set of constants representing the allowable values for VirtualCircuitServiceType +const ( + VirtualCircuitServiceTypeColocated VirtualCircuitServiceTypeEnum = "COLOCATED" + VirtualCircuitServiceTypeLayer2 VirtualCircuitServiceTypeEnum = "LAYER2" + VirtualCircuitServiceTypeLayer3 VirtualCircuitServiceTypeEnum = "LAYER3" +) + +var mappingVirtualCircuitServiceType = map[string]VirtualCircuitServiceTypeEnum{ + "COLOCATED": VirtualCircuitServiceTypeColocated, + "LAYER2": VirtualCircuitServiceTypeLayer2, + "LAYER3": VirtualCircuitServiceTypeLayer3, +} + +// GetVirtualCircuitServiceTypeEnumValues Enumerates the set of values for VirtualCircuitServiceType +func GetVirtualCircuitServiceTypeEnumValues() []VirtualCircuitServiceTypeEnum { + values := make([]VirtualCircuitServiceTypeEnum, 0) + for _, v := range mappingVirtualCircuitServiceType { + values = append(values, v) + } + return values +} + +// VirtualCircuitTypeEnum Enum with underlying type: string +type VirtualCircuitTypeEnum string + +// Set of constants representing the allowable values for VirtualCircuitType +const ( + VirtualCircuitTypePublic VirtualCircuitTypeEnum = "PUBLIC" + VirtualCircuitTypePrivate VirtualCircuitTypeEnum = "PRIVATE" +) + +var mappingVirtualCircuitType = map[string]VirtualCircuitTypeEnum{ + "PUBLIC": VirtualCircuitTypePublic, + "PRIVATE": VirtualCircuitTypePrivate, +} + +// GetVirtualCircuitTypeEnumValues Enumerates the set of values for VirtualCircuitType +func 
GetVirtualCircuitTypeEnumValues() []VirtualCircuitTypeEnum { + values := make([]VirtualCircuitTypeEnum, 0) + for _, v := range mappingVirtualCircuitType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit_bandwidth_shape.go b/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit_bandwidth_shape.go new file mode 100644 index 000000000..e237448d5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit_bandwidth_shape.go @@ -0,0 +1,29 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// VirtualCircuitBandwidthShape An individual bandwidth level for virtual circuits. +type VirtualCircuitBandwidthShape struct { + + // The name of the bandwidth shape. + // Example: `10 Gbps` + Name *string `mandatory:"true" json:"name"` + + // The bandwidth in Mbps. + // Example: `10000` + BandwidthInMbps *int `mandatory:"false" json:"bandwidthInMbps"` +} + +func (m VirtualCircuitBandwidthShape) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit_public_prefix.go b/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit_public_prefix.go new file mode 100644 index 000000000..62c1f41f3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/virtual_circuit_public_prefix.go @@ -0,0 +1,58 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// VirtualCircuitPublicPrefix A public IP prefix and its details. With a public virtual circuit, the customer +// specifies the customer-owned public IP prefixes to advertise across the connection. +// For more information, see FastConnect Overview (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/fastconnect.htm). +type VirtualCircuitPublicPrefix struct { + + // Public IP prefix (CIDR) that the customer specified. + CidrBlock *string `mandatory:"true" json:"cidrBlock"` + + // Oracle must verify that the customer owns the public IP prefix before traffic + // for that prefix can flow across the virtual circuit. Verification can take a + // few business days. `IN_PROGRESS` means Oracle is verifying the prefix. `COMPLETED` + // means verification succeeded. `FAILED` means verification failed and traffic for + // this prefix will not flow across the connection.
+ VerificationState VirtualCircuitPublicPrefixVerificationStateEnum `mandatory:"true" json:"verificationState"` +} + +func (m VirtualCircuitPublicPrefix) String() string { + return common.PointerString(m) +} + +// VirtualCircuitPublicPrefixVerificationStateEnum Enum with underlying type: string +type VirtualCircuitPublicPrefixVerificationStateEnum string + +// Set of constants representing the allowable values for VirtualCircuitPublicPrefixVerificationState +const ( + VirtualCircuitPublicPrefixVerificationStateInProgress VirtualCircuitPublicPrefixVerificationStateEnum = "IN_PROGRESS" + VirtualCircuitPublicPrefixVerificationStateCompleted VirtualCircuitPublicPrefixVerificationStateEnum = "COMPLETED" + VirtualCircuitPublicPrefixVerificationStateFailed VirtualCircuitPublicPrefixVerificationStateEnum = "FAILED" +) + +var mappingVirtualCircuitPublicPrefixVerificationState = map[string]VirtualCircuitPublicPrefixVerificationStateEnum{ + "IN_PROGRESS": VirtualCircuitPublicPrefixVerificationStateInProgress, + "COMPLETED": VirtualCircuitPublicPrefixVerificationStateCompleted, + "FAILED": VirtualCircuitPublicPrefixVerificationStateFailed, +} + +// GetVirtualCircuitPublicPrefixVerificationStateEnumValues Enumerates the set of values for VirtualCircuitPublicPrefixVerificationState +func GetVirtualCircuitPublicPrefixVerificationStateEnumValues() []VirtualCircuitPublicPrefixVerificationStateEnum { + values := make([]VirtualCircuitPublicPrefixVerificationStateEnum, 0) + for _, v := range mappingVirtualCircuitPublicPrefixVerificationState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/vnic.go b/vendor/github.com/oracle/oci-go-sdk/core/vnic.go new file mode 100644 index 000000000..d7eb9d053 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/vnic.go @@ -0,0 +1,118 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Vnic A virtual network interface card. Each VNIC resides in a subnet in a VCN. +// An instance attaches to a VNIC to obtain a network connection into the VCN +// through that subnet. Each instance has a *primary VNIC* that is automatically +// created and attached during launch. You can add *secondary VNICs* to an +// instance after it's launched. For more information, see +// Virtual Network Interface Cards (VNICs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVNICs.htm). +// Each VNIC has a *primary private IP* that is automatically assigned during launch. +// You can add *secondary private IPs* to a VNIC after it's created. For more +// information, see CreatePrivateIp and +// IP Addresses (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingIPaddresses.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Vnic struct { + + // The VNIC's Availability Domain. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment containing the VNIC. 
+ CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the VNIC. + Id *string `mandatory:"true" json:"id"` + + // The current state of the VNIC. + LifecycleState VnicLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The private IP address of the primary `privateIp` object on the VNIC. + // The address is within the CIDR of the VNIC's subnet. + // Example: `10.0.3.3` + PrivateIp *string `mandatory:"true" json:"privateIp"` + + // The OCID of the subnet the VNIC is in. + SubnetId *string `mandatory:"true" json:"subnetId"` + + // The date and time the VNIC was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // A user-friendly name. Does not have to be unique. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The hostname for the VNIC's primary private IP. Used for DNS. The value is the hostname + // portion of the primary private IP's fully qualified domain name (FQDN) + // (for example, `bminstance-1` in FQDN `bminstance-1.subnet123.vcn1.oraclevcn.com`). + // Must be unique across all VNICs in the subnet and comply with + // RFC 952 (https://tools.ietf.org/html/rfc952) and + // RFC 1123 (https://tools.ietf.org/html/rfc1123). + // For more information, see + // DNS in Your Virtual Cloud Network (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/dns.htm). + // Example: `bminstance-1` + HostnameLabel *string `mandatory:"false" json:"hostnameLabel"` + + // Whether the VNIC is the primary VNIC (the VNIC that is automatically created + // and attached during instance launch). + IsPrimary *bool `mandatory:"false" json:"isPrimary"` + + // The MAC address of the VNIC. + // Example: `00:00:17:B6:4D:DD` + MacAddress *string `mandatory:"false" json:"macAddress"` + + // The public IP address of the VNIC, if one is assigned. + PublicIp *string `mandatory:"false" json:"publicIp"` + + // Whether the source/destination check is disabled on the VNIC. + // Defaults to `false`, which means the check is performed. For information + // about why you would skip the source/destination check, see + // Using a Private IP as a Route Target (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingroutetables.htm#privateip). 
+ // Example: `true` + SkipSourceDestCheck *bool `mandatory:"false" json:"skipSourceDestCheck"` +} + +func (m Vnic) String() string { + return common.PointerString(m) +} + +// VnicLifecycleStateEnum Enum with underlying type: string +type VnicLifecycleStateEnum string + +// Set of constants representing the allowable values for VnicLifecycleState +const ( + VnicLifecycleStateProvisioning VnicLifecycleStateEnum = "PROVISIONING" + VnicLifecycleStateAvailable VnicLifecycleStateEnum = "AVAILABLE" + VnicLifecycleStateTerminating VnicLifecycleStateEnum = "TERMINATING" + VnicLifecycleStateTerminated VnicLifecycleStateEnum = "TERMINATED" +) + +var mappingVnicLifecycleState = map[string]VnicLifecycleStateEnum{ + "PROVISIONING": VnicLifecycleStateProvisioning, + "AVAILABLE": VnicLifecycleStateAvailable, + "TERMINATING": VnicLifecycleStateTerminating, + "TERMINATED": VnicLifecycleStateTerminated, +} + +// GetVnicLifecycleStateEnumValues Enumerates the set of values for VnicLifecycleState +func GetVnicLifecycleStateEnumValues() []VnicLifecycleStateEnum { + values := make([]VnicLifecycleStateEnum, 0) + for _, v := range mappingVnicLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/vnic_attachment.go b/vendor/github.com/oracle/oci-go-sdk/core/vnic_attachment.go new file mode 100644 index 000000000..f08db683f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/vnic_attachment.go @@ -0,0 +1,92 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// VnicAttachment Represents an attachment between a VNIC and an instance. For more information, see +// Virtual Network Interface Cards (VNICs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVNICs.htm). +type VnicAttachment struct { + + // The Availability Domain of the instance. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment the VNIC attachment is in, which is the same + // compartment the instance is in. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the VNIC attachment. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the instance. + InstanceId *string `mandatory:"true" json:"instanceId"` + + // The current state of the VNIC attachment. + LifecycleState VnicAttachmentLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the VNIC's subnet. + SubnetId *string `mandatory:"true" json:"subnetId"` + + // The date and time the VNIC attachment was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // A user-friendly name. Does not have to be unique. + // Avoid entering confidential information. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Which physical network interface card (NIC) the VNIC uses. + // Certain bare metal instance shapes have two active physical NICs (0 and 1). If + // you add a secondary VNIC to one of these instances, you can specify which NIC + // the VNIC will use. 
For more information, see + // Virtual Network Interface Cards (VNICs) (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Tasks/managingVNICs.htm). + NicIndex *int `mandatory:"false" json:"nicIndex"` + + // The Oracle-assigned VLAN tag of the attached VNIC. Available after the + // attachment process is complete. + // Example: `0` + VlanTag *int `mandatory:"false" json:"vlanTag"` + + // The OCID of the VNIC. Available after the attachment process is complete. + VnicId *string `mandatory:"false" json:"vnicId"` +} + +func (m VnicAttachment) String() string { + return common.PointerString(m) +} + +// VnicAttachmentLifecycleStateEnum Enum with underlying type: string +type VnicAttachmentLifecycleStateEnum string + +// Set of constants representing the allowable values for VnicAttachmentLifecycleState +const ( + VnicAttachmentLifecycleStateAttaching VnicAttachmentLifecycleStateEnum = "ATTACHING" + VnicAttachmentLifecycleStateAttached VnicAttachmentLifecycleStateEnum = "ATTACHED" + VnicAttachmentLifecycleStateDetaching VnicAttachmentLifecycleStateEnum = "DETACHING" + VnicAttachmentLifecycleStateDetached VnicAttachmentLifecycleStateEnum = "DETACHED" +) + +var mappingVnicAttachmentLifecycleState = map[string]VnicAttachmentLifecycleStateEnum{ + "ATTACHING": VnicAttachmentLifecycleStateAttaching, + "ATTACHED": VnicAttachmentLifecycleStateAttached, + "DETACHING": VnicAttachmentLifecycleStateDetaching, + "DETACHED": VnicAttachmentLifecycleStateDetached, +} + +// GetVnicAttachmentLifecycleStateEnumValues Enumerates the set of values for VnicAttachmentLifecycleState +func GetVnicAttachmentLifecycleStateEnumValues() []VnicAttachmentLifecycleStateEnum { + values := make([]VnicAttachmentLifecycleStateEnum, 0) + for _, v := range mappingVnicAttachmentLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/volume.go b/vendor/github.com/oracle/oci-go-sdk/core/volume.go new file mode 100644 index 000000000..85c7394c5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/volume.go @@ -0,0 +1,127 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// Volume A detachable block volume device that allows you to dynamically expand +// the storage capacity of an instance. For more information, see +// Overview of Cloud Volume Storage (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Volume struct { + + // The Availability Domain of the volume. + // Example: `Uocm:PHX-AD-1` + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment that contains the volume. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. Does not have to be unique, and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"true" json:"displayName"` + + // The OCID of the volume. 
+ Id *string `mandatory:"true" json:"id"` + + // The current state of a volume. + LifecycleState VolumeLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The size of the volume in MBs. This field is deprecated. Use sizeInGBs instead. + SizeInMBs *int `mandatory:"true" json:"sizeInMBs"` + + // The date and time the volume was created. Format defined by RFC3339. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // Specifies whether the cloned volume's data has finished copying from the source volume or backup. + IsHydrated *bool `mandatory:"false" json:"isHydrated"` + + // The size of the volume in GBs. + SizeInGBs *int `mandatory:"false" json:"sizeInGBs"` + + // The volume source, either an existing volume in the same Availability Domain or a volume backup. + // If null, an empty volume is created. + SourceDetails VolumeSourceDetails `mandatory:"false" json:"sourceDetails"` +} + +func (m Volume) String() string { + return common.PointerString(m) +} + +// UnmarshalJSON unmarshals from json +func (m *Volume) UnmarshalJSON(data []byte) (e error) { + model := struct { + IsHydrated *bool `json:"isHydrated"` + SizeInGBs *int `json:"sizeInGBs"` + SourceDetails volumesourcedetails `json:"sourceDetails"` + AvailabilityDomain *string `json:"availabilityDomain"` + CompartmentId *string `json:"compartmentId"` + DisplayName *string `json:"displayName"` + Id *string `json:"id"` + LifecycleState VolumeLifecycleStateEnum `json:"lifecycleState"` + SizeInMBs *int `json:"sizeInMBs"` + TimeCreated *common.SDKTime `json:"timeCreated"` + }{} + + e = json.Unmarshal(data, &model) + if e != nil { + return + } + m.IsHydrated = model.IsHydrated + m.SizeInGBs = model.SizeInGBs + nn, e := model.SourceDetails.UnmarshalPolymorphicJSON(model.SourceDetails.JsonData) + if e != nil { + return + } + m.SourceDetails = nn + m.AvailabilityDomain = model.AvailabilityDomain + m.CompartmentId = model.CompartmentId + m.DisplayName = model.DisplayName + m.Id = model.Id + m.LifecycleState = model.LifecycleState + m.SizeInMBs = model.SizeInMBs + m.TimeCreated = model.TimeCreated + return +} + +// VolumeLifecycleStateEnum Enum with underlying type: string +type VolumeLifecycleStateEnum string + +// Set of constants representing the allowable values for VolumeLifecycleState +const ( + VolumeLifecycleStateProvisioning VolumeLifecycleStateEnum = "PROVISIONING" + VolumeLifecycleStateRestoring VolumeLifecycleStateEnum = "RESTORING" + VolumeLifecycleStateAvailable VolumeLifecycleStateEnum = "AVAILABLE" + VolumeLifecycleStateTerminating VolumeLifecycleStateEnum = "TERMINATING" + VolumeLifecycleStateTerminated VolumeLifecycleStateEnum = "TERMINATED" + VolumeLifecycleStateFaulty VolumeLifecycleStateEnum = "FAULTY" +) + +var mappingVolumeLifecycleState = map[string]VolumeLifecycleStateEnum{ + "PROVISIONING": VolumeLifecycleStateProvisioning, + "RESTORING": VolumeLifecycleStateRestoring, + "AVAILABLE": VolumeLifecycleStateAvailable, + "TERMINATING": VolumeLifecycleStateTerminating, + "TERMINATED": VolumeLifecycleStateTerminated, + "FAULTY": VolumeLifecycleStateFaulty, +} + +// GetVolumeLifecycleStateEnumValues Enumerates the set of values for VolumeLifecycleState +func GetVolumeLifecycleStateEnumValues() []VolumeLifecycleStateEnum { + values := make([]VolumeLifecycleStateEnum, 0) + for _, v := range mappingVolumeLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/volume_attachment.go 
b/vendor/github.com/oracle/oci-go-sdk/core/volume_attachment.go new file mode 100644 index 000000000..d8942c4e9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/volume_attachment.go @@ -0,0 +1,171 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// VolumeAttachment A base object for all types of attachments between a storage volume and an instance. +// For specific details about iSCSI attachments, see +// IScsiVolumeAttachment. +// For general information about volume attachments, see +// Overview of Block Volume Storage (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Concepts/overview.htm). +type VolumeAttachment interface { + + // The Availability Domain of an instance. + // Example: `Uocm:PHX-AD-1` + GetAvailabilityDomain() *string + + // The OCID of the compartment. + GetCompartmentId() *string + + // The OCID of the volume attachment. + GetId() *string + + // The OCID of the instance the volume is attached to. + GetInstanceId() *string + + // The current state of the volume attachment. + GetLifecycleState() VolumeAttachmentLifecycleStateEnum + + // The date and time the volume was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + GetTimeCreated() *common.SDKTime + + // The OCID of the volume. + GetVolumeId() *string + + // A user-friendly name. Does not have to be unique, and it cannot be changed. + // Avoid entering confidential information. + // Example: `My volume attachment` + GetDisplayName() *string +} + +type volumeattachment struct { + JsonData []byte + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + CompartmentId *string `mandatory:"true" json:"compartmentId"` + Id *string `mandatory:"true" json:"id"` + InstanceId *string `mandatory:"true" json:"instanceId"` + LifecycleState VolumeAttachmentLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + VolumeId *string `mandatory:"true" json:"volumeId"` + DisplayName *string `mandatory:"false" json:"displayName"` + AttachmentType string `json:"attachmentType"` +} + +// UnmarshalJSON unmarshals json +func (m *volumeattachment) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalervolumeattachment volumeattachment + s := struct { + Model Unmarshalervolumeattachment + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.AvailabilityDomain = s.Model.AvailabilityDomain + m.CompartmentId = s.Model.CompartmentId + m.Id = s.Model.Id + m.InstanceId = s.Model.InstanceId + m.LifecycleState = s.Model.LifecycleState + m.TimeCreated = s.Model.TimeCreated + m.VolumeId = s.Model.VolumeId + m.DisplayName = s.Model.DisplayName + m.AttachmentType = s.Model.AttachmentType + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *volumeattachment) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.AttachmentType { + case "iscsi": + mm := IScsiVolumeAttachment{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +//GetAvailabilityDomain returns AvailabilityDomain +func (m volumeattachment) GetAvailabilityDomain() *string { + return m.AvailabilityDomain +} + +//GetCompartmentId returns 
CompartmentId +func (m volumeattachment) GetCompartmentId() *string { + return m.CompartmentId +} + +//GetId returns Id +func (m volumeattachment) GetId() *string { + return m.Id +} + +//GetInstanceId returns InstanceId +func (m volumeattachment) GetInstanceId() *string { + return m.InstanceId +} + +//GetLifecycleState returns LifecycleState +func (m volumeattachment) GetLifecycleState() VolumeAttachmentLifecycleStateEnum { + return m.LifecycleState +} + +//GetTimeCreated returns TimeCreated +func (m volumeattachment) GetTimeCreated() *common.SDKTime { + return m.TimeCreated +} + +//GetVolumeId returns VolumeId +func (m volumeattachment) GetVolumeId() *string { + return m.VolumeId +} + +//GetDisplayName returns DisplayName +func (m volumeattachment) GetDisplayName() *string { + return m.DisplayName +} + +func (m volumeattachment) String() string { + return common.PointerString(m) +} + +// VolumeAttachmentLifecycleStateEnum Enum with underlying type: string +type VolumeAttachmentLifecycleStateEnum string + +// Set of constants representing the allowable values for VolumeAttachmentLifecycleState +const ( + VolumeAttachmentLifecycleStateAttaching VolumeAttachmentLifecycleStateEnum = "ATTACHING" + VolumeAttachmentLifecycleStateAttached VolumeAttachmentLifecycleStateEnum = "ATTACHED" + VolumeAttachmentLifecycleStateDetaching VolumeAttachmentLifecycleStateEnum = "DETACHING" + VolumeAttachmentLifecycleStateDetached VolumeAttachmentLifecycleStateEnum = "DETACHED" +) + +var mappingVolumeAttachmentLifecycleState = map[string]VolumeAttachmentLifecycleStateEnum{ + "ATTACHING": VolumeAttachmentLifecycleStateAttaching, + "ATTACHED": VolumeAttachmentLifecycleStateAttached, + "DETACHING": VolumeAttachmentLifecycleStateDetaching, + "DETACHED": VolumeAttachmentLifecycleStateDetached, +} + +// GetVolumeAttachmentLifecycleStateEnumValues Enumerates the set of values for VolumeAttachmentLifecycleState +func GetVolumeAttachmentLifecycleStateEnumValues() []VolumeAttachmentLifecycleStateEnum { + values := make([]VolumeAttachmentLifecycleStateEnum, 0) + for _, v := range mappingVolumeAttachmentLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/volume_backup.go b/vendor/github.com/oracle/oci-go-sdk/core/volume_backup.go new file mode 100644 index 000000000..6fd52b878 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/volume_backup.go @@ -0,0 +1,96 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// VolumeBackup A point-in-time copy of a volume that can then be used to create a new block volume +// or recover a block volume. For more information, see +// Overview of Cloud Volume Storage (https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type VolumeBackup struct { + + // The OCID of the compartment that contains the volume backup. 
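`VolumeAttachment` is exposed as an interface so that attachment-type-specific structs (such as the iSCSI variant selected by the `attachmentType` discriminator above) can be handled uniformly through the getters. A small illustrative helper, not part of the SDK, that works against any concrete attachment type:

```go
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/core"
)

// describeAttachment is a hypothetical helper: it relies only on the
// VolumeAttachment interface, so it works for iSCSI (and any other)
// attachment types returned by the API.
func describeAttachment(a core.VolumeAttachment) string {
	return fmt.Sprintf("volume %s attached to instance %s (%s)",
		*a.GetVolumeId(), *a.GetInstanceId(), a.GetLifecycleState())
}

func main() {
	// A concrete VolumeAttachment would normally come from a Compute
	// client response; none is constructed here because the iSCSI struct
	// is defined elsewhere in the SDK.
	fmt.Println("describeAttachment accepts any core.VolumeAttachment implementation")
}
```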
+ CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name for the volume backup. Does not have to be unique and it's changeable. + // Avoid entering confidential information. + DisplayName *string `mandatory:"true" json:"displayName"` + + // The OCID of the volume backup. + Id *string `mandatory:"true" json:"id"` + + // The current state of a volume backup. + LifecycleState VolumeBackupLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The date and time the volume backup was created. This is the time the actual point-in-time image + // of the volume data was taken. Format defined by RFC3339. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The size of the volume, in GBs. + SizeInGBs *int `mandatory:"false" json:"sizeInGBs"` + + // The size of the volume in MBs. The value must be a multiple of 1024. + // This field is deprecated. Please use sizeInGBs. + SizeInMBs *int `mandatory:"false" json:"sizeInMBs"` + + // The date and time the request to create the volume backup was received. Format defined by RFC3339. + TimeRequestReceived *common.SDKTime `mandatory:"false" json:"timeRequestReceived"` + + // The size used by the backup, in GBs. It is typically smaller than sizeInGBs, depending on the space + // consumed on the volume and whether the backup is full or incremental. + UniqueSizeInGBs *int `mandatory:"false" json:"uniqueSizeInGBs"` + + // The size used by the backup, in MBs. It is typically smaller than sizeInMBs, depending on the space + // consumed on the volume and whether the backup is full or incremental. + // This field is deprecated. Please use uniqueSizeInGBs. + UniqueSizeInMbs *int `mandatory:"false" json:"uniqueSizeInMbs"` + + // The OCID of the volume. + VolumeId *string `mandatory:"false" json:"volumeId"` +} + +func (m VolumeBackup) String() string { + return common.PointerString(m) +} + +// VolumeBackupLifecycleStateEnum Enum with underlying type: string +type VolumeBackupLifecycleStateEnum string + +// Set of constants representing the allowable values for VolumeBackupLifecycleState +const ( + VolumeBackupLifecycleStateCreating VolumeBackupLifecycleStateEnum = "CREATING" + VolumeBackupLifecycleStateAvailable VolumeBackupLifecycleStateEnum = "AVAILABLE" + VolumeBackupLifecycleStateTerminating VolumeBackupLifecycleStateEnum = "TERMINATING" + VolumeBackupLifecycleStateTerminated VolumeBackupLifecycleStateEnum = "TERMINATED" + VolumeBackupLifecycleStateFaulty VolumeBackupLifecycleStateEnum = "FAULTY" + VolumeBackupLifecycleStateRequestReceived VolumeBackupLifecycleStateEnum = "REQUEST_RECEIVED" +) + +var mappingVolumeBackupLifecycleState = map[string]VolumeBackupLifecycleStateEnum{ + "CREATING": VolumeBackupLifecycleStateCreating, + "AVAILABLE": VolumeBackupLifecycleStateAvailable, + "TERMINATING": VolumeBackupLifecycleStateTerminating, + "TERMINATED": VolumeBackupLifecycleStateTerminated, + "FAULTY": VolumeBackupLifecycleStateFaulty, + "REQUEST_RECEIVED": VolumeBackupLifecycleStateRequestReceived, +} + +// GetVolumeBackupLifecycleStateEnumValues Enumerates the set of values for VolumeBackupLifecycleState +func GetVolumeBackupLifecycleStateEnumValues() []VolumeBackupLifecycleStateEnum { + values := make([]VolumeBackupLifecycleStateEnum, 0) + for _, v := range mappingVolumeBackupLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/volume_source_details.go b/vendor/github.com/oracle/oci-go-sdk/core/volume_source_details.go new file mode 
100644 index 000000000..336c21e8b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/volume_source_details.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// VolumeSourceDetails The representation of VolumeSourceDetails +type VolumeSourceDetails interface { +} + +type volumesourcedetails struct { + JsonData []byte + Type string `json:"type"` +} + +// UnmarshalJSON unmarshals json +func (m *volumesourcedetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalervolumesourcedetails volumesourcedetails + s := struct { + Model Unmarshalervolumesourcedetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.Type = s.Model.Type + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *volumesourcedetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.Type { + case "volume": + mm := VolumeSourceFromVolumeDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + case "volumeBackup": + mm := VolumeSourceFromVolumeBackupDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +func (m volumesourcedetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/volume_source_from_volume_backup_details.go b/vendor/github.com/oracle/oci-go-sdk/core/volume_source_from_volume_backup_details.go new file mode 100644 index 000000000..5c6d57af3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/volume_source_from_volume_backup_details.go @@ -0,0 +1,39 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. +// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// VolumeSourceFromVolumeBackupDetails Specifies the volume backup. +type VolumeSourceFromVolumeBackupDetails struct { + + // The OCID of the volume backup. + Id *string `mandatory:"true" json:"id"` +} + +func (m VolumeSourceFromVolumeBackupDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m VolumeSourceFromVolumeBackupDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeVolumeSourceFromVolumeBackupDetails VolumeSourceFromVolumeBackupDetails + s := struct { + DiscriminatorParam string `json:"type"` + MarshalTypeVolumeSourceFromVolumeBackupDetails + }{ + "volumeBackup", + (MarshalTypeVolumeSourceFromVolumeBackupDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/core/volume_source_from_volume_details.go b/vendor/github.com/oracle/oci-go-sdk/core/volume_source_from_volume_details.go new file mode 100644 index 000000000..bb1ed162a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/core/volume_source_from_volume_details.go @@ -0,0 +1,39 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Core Services API +// +// APIs for Networking Service, Compute Service, and Block Volume Service. 
+// + +package core + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// VolumeSourceFromVolumeDetails Specifies the source volume. +type VolumeSourceFromVolumeDetails struct { + + // The OCID of the volume. + Id *string `mandatory:"true" json:"id"` +} + +func (m VolumeSourceFromVolumeDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m VolumeSourceFromVolumeDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeVolumeSourceFromVolumeDetails VolumeSourceFromVolumeDetails + s := struct { + DiscriminatorParam string `json:"type"` + MarshalTypeVolumeSourceFromVolumeDetails + }{ + "volume", + (MarshalTypeVolumeSourceFromVolumeDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/backup.go b/vendor/github.com/oracle/oci-go-sdk/database/backup.go new file mode 100644 index 000000000..f898387ef --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/backup.go @@ -0,0 +1,106 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Backup A database backup +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Backup struct { + + // The name of the Availability Domain that the backup is located in. + AvailabilityDomain *string `mandatory:"false" json:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // The OCID of the database. + DatabaseId *string `mandatory:"false" json:"databaseId"` + + // The user-friendly name for the backup. It does not have to be unique. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The OCID of the backup. + Id *string `mandatory:"false" json:"id"` + + // Additional information about the current lifecycleState. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The current state of the backup. + LifecycleState BackupLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The date and time the backup was completed. + TimeEnded *common.SDKTime `mandatory:"false" json:"timeEnded"` + + // The date and time the backup starts. + TimeStarted *common.SDKTime `mandatory:"false" json:"timeStarted"` + + // The type of backup. 
+ Type BackupTypeEnum `mandatory:"false" json:"type,omitempty"` +} + +func (m Backup) String() string { + return common.PointerString(m) +} + +// BackupLifecycleStateEnum Enum with underlying type: string +type BackupLifecycleStateEnum string + +// Set of constants representing the allowable values for BackupLifecycleState +const ( + BackupLifecycleStateCreating BackupLifecycleStateEnum = "CREATING" + BackupLifecycleStateActive BackupLifecycleStateEnum = "ACTIVE" + BackupLifecycleStateDeleting BackupLifecycleStateEnum = "DELETING" + BackupLifecycleStateDeleted BackupLifecycleStateEnum = "DELETED" + BackupLifecycleStateFailed BackupLifecycleStateEnum = "FAILED" + BackupLifecycleStateRestoring BackupLifecycleStateEnum = "RESTORING" +) + +var mappingBackupLifecycleState = map[string]BackupLifecycleStateEnum{ + "CREATING": BackupLifecycleStateCreating, + "ACTIVE": BackupLifecycleStateActive, + "DELETING": BackupLifecycleStateDeleting, + "DELETED": BackupLifecycleStateDeleted, + "FAILED": BackupLifecycleStateFailed, + "RESTORING": BackupLifecycleStateRestoring, +} + +// GetBackupLifecycleStateEnumValues Enumerates the set of values for BackupLifecycleState +func GetBackupLifecycleStateEnumValues() []BackupLifecycleStateEnum { + values := make([]BackupLifecycleStateEnum, 0) + for _, v := range mappingBackupLifecycleState { + values = append(values, v) + } + return values +} + +// BackupTypeEnum Enum with underlying type: string +type BackupTypeEnum string + +// Set of constants representing the allowable values for BackupType +const ( + BackupTypeIncremental BackupTypeEnum = "INCREMENTAL" + BackupTypeFull BackupTypeEnum = "FULL" +) + +var mappingBackupType = map[string]BackupTypeEnum{ + "INCREMENTAL": BackupTypeIncremental, + "FULL": BackupTypeFull, +} + +// GetBackupTypeEnumValues Enumerates the set of values for BackupType +func GetBackupTypeEnumValues() []BackupTypeEnum { + values := make([]BackupTypeEnum, 0) + for _, v := range mappingBackupType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/backup_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/backup_summary.go new file mode 100644 index 000000000..5e12d1c60 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/backup_summary.go @@ -0,0 +1,106 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BackupSummary A database backup +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type BackupSummary struct { + + // The name of the Availability Domain that the backup is located in. + AvailabilityDomain *string `mandatory:"false" json:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // The OCID of the database. + DatabaseId *string `mandatory:"false" json:"databaseId"` + + // The user-friendly name for the backup. It does not have to be unique. + DisplayName *string `mandatory:"false" json:"displayName"` + + // The OCID of the backup. 
+ Id *string `mandatory:"false" json:"id"` + + // Additional information about the current lifecycleState. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The current state of the backup. + LifecycleState BackupSummaryLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The date and time the backup was completed. + TimeEnded *common.SDKTime `mandatory:"false" json:"timeEnded"` + + // The date and time the backup starts. + TimeStarted *common.SDKTime `mandatory:"false" json:"timeStarted"` + + // The type of backup. + Type BackupSummaryTypeEnum `mandatory:"false" json:"type,omitempty"` +} + +func (m BackupSummary) String() string { + return common.PointerString(m) +} + +// BackupSummaryLifecycleStateEnum Enum with underlying type: string +type BackupSummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for BackupSummaryLifecycleState +const ( + BackupSummaryLifecycleStateCreating BackupSummaryLifecycleStateEnum = "CREATING" + BackupSummaryLifecycleStateActive BackupSummaryLifecycleStateEnum = "ACTIVE" + BackupSummaryLifecycleStateDeleting BackupSummaryLifecycleStateEnum = "DELETING" + BackupSummaryLifecycleStateDeleted BackupSummaryLifecycleStateEnum = "DELETED" + BackupSummaryLifecycleStateFailed BackupSummaryLifecycleStateEnum = "FAILED" + BackupSummaryLifecycleStateRestoring BackupSummaryLifecycleStateEnum = "RESTORING" +) + +var mappingBackupSummaryLifecycleState = map[string]BackupSummaryLifecycleStateEnum{ + "CREATING": BackupSummaryLifecycleStateCreating, + "ACTIVE": BackupSummaryLifecycleStateActive, + "DELETING": BackupSummaryLifecycleStateDeleting, + "DELETED": BackupSummaryLifecycleStateDeleted, + "FAILED": BackupSummaryLifecycleStateFailed, + "RESTORING": BackupSummaryLifecycleStateRestoring, +} + +// GetBackupSummaryLifecycleStateEnumValues Enumerates the set of values for BackupSummaryLifecycleState +func GetBackupSummaryLifecycleStateEnumValues() []BackupSummaryLifecycleStateEnum { + values := make([]BackupSummaryLifecycleStateEnum, 0) + for _, v := range mappingBackupSummaryLifecycleState { + values = append(values, v) + } + return values +} + +// BackupSummaryTypeEnum Enum with underlying type: string +type BackupSummaryTypeEnum string + +// Set of constants representing the allowable values for BackupSummaryType +const ( + BackupSummaryTypeIncremental BackupSummaryTypeEnum = "INCREMENTAL" + BackupSummaryTypeFull BackupSummaryTypeEnum = "FULL" +) + +var mappingBackupSummaryType = map[string]BackupSummaryTypeEnum{ + "INCREMENTAL": BackupSummaryTypeIncremental, + "FULL": BackupSummaryTypeFull, +} + +// GetBackupSummaryTypeEnumValues Enumerates the set of values for BackupSummaryType +func GetBackupSummaryTypeEnumValues() []BackupSummaryTypeEnum { + values := make([]BackupSummaryTypeEnum, 0) + for _, v := range mappingBackupSummaryType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_backup_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_backup_details.go new file mode 100644 index 000000000..6faa74c49 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_backup_details.go @@ -0,0 +1,27 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. 
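The generated `Get...EnumValues` helpers above make it easy to check a raw lifecycle-state string against the allowed set. A minimal sketch, assuming only the `database` package from this vendor drop (the `raw` value is a placeholder):

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/database"
)

func main() {
	// A lifecycle state as it might appear in an API response (placeholder value).
	raw := "ACTIVE"

	// GetBackupLifecycleStateEnumValues enumerates every allowed BackupLifecycleState constant.
	known := false
	for _, v := range database.GetBackupLifecycleStateEnumValues() {
		if string(v) == raw {
			known = true
			break
		}
	}
	fmt.Printf("%q is a known backup lifecycle state: %v\n", raw, known)
}
```

Because the helper ranges over a map, the order of the returned slice is not stable; it suits membership checks rather than ordered display. The `CreateBackupDetails` model continues below.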
+// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateBackupDetails The representation of CreateBackupDetails +type CreateBackupDetails struct { + + // The OCID of the database. + DatabaseId *string `mandatory:"true" json:"databaseId"` + + // The user-friendly name for the backup. It does not have to be unique. + DisplayName *string `mandatory:"true" json:"displayName"` +} + +func (m CreateBackupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_backup_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/create_backup_request_response.go new file mode 100644 index 000000000..9cd01036e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_backup_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateBackupRequest wrapper for the CreateBackup operation +type CreateBackupRequest struct { + + // Request to create a new database backup. + CreateBackupDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateBackupRequest) String() string { + return common.PointerString(request) +} + +// CreateBackupResponse wrapper for the CreateBackup operation +type CreateBackupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Backup instance + Backup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateBackupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_details.go new file mode 100644 index 000000000..dc92eb654 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_details.go @@ -0,0 +1,158 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDataGuardAssociationDetails The configuration details for creating a Data Guard association between databases. +// **NOTE:** +// "ExistingDbSystem" is the only supported `creationType` value. Therefore, all +// CreateDataGuardAssociation +// requests must include the `peerDbSystemId` parameter found in the +// CreateDataGuardAssociationToExistingDbSystemDetails +// object. 
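With `CreateBackupDetails` and the `CreateBackupRequest` wrapper above, an on-demand backup request is assembled by populating the body struct and, optionally, the retry-token header. A minimal sketch using only types from this vendor drop (the `strPtr` helper, OCID, and token are illustrative placeholders):

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/database"
)

// strPtr is a local convenience helper for this sketch; the SDK models use *string fields.
func strPtr(s string) *string { return &s }

func main() {
	req := database.CreateBackupRequest{
		// The embedded CreateBackupDetails contributes the request body.
		CreateBackupDetails: database.CreateBackupDetails{
			DatabaseId:  strPtr("ocid1.database.oc1..example"), // placeholder OCID
			DisplayName: strPtr("nightly-backup"),
		},
		// Optional idempotency token sent as the opc-retry-token header.
		OpcRetryToken: strPtr("example-retry-token"), // placeholder token
	}

	// String() uses common.PointerString to dereference pointer fields for readable logging.
	fmt.Println(req.String())
}
```

In practice this wrapper would be handed to the Database client's CreateBackup operation (the generated client is not shown in this excerpt), which returns the `CreateBackupResponse` above. The Data Guard association types continue below.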
+type CreateDataGuardAssociationDetails interface { + + // A strong password for the `SYS`, `SYSTEM`, and `PDB Admin` users to apply during standby creation. + // The password must contain no fewer than nine characters and include: + // * At least two uppercase characters. + // * At least two lowercase characters. + // * At least two numeric characters. + // * At least two special characters. Valid special characters include "_", "#", and "-" only. + // **The password MUST be the same as the primary admin password.** + GetDatabaseAdminPassword() *string + + // The protection mode to set up between the primary and standby databases. For more information, see + // Oracle Data Guard Protection Modes (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-protection-modes.htm#SBYDB02000) + // in the Oracle Data Guard documentation. + // **IMPORTANT** - The only protection mode currently supported by the Database Service is MAXIMUM_PERFORMANCE. + GetProtectionMode() CreateDataGuardAssociationDetailsProtectionModeEnum + + // The redo transport type to use for this Data Guard association. Valid values depend on the specified `protectionMode`: + // * MAXIMUM_AVAILABILITY - SYNC or FASTSYNC + // * MAXIMUM_PERFORMANCE - ASYNC + // * MAXIMUM_PROTECTION - SYNC + // For more information, see + // Redo Transport Services (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-redo-transport-services.htm#SBYDB00400) + // in the Oracle Data Guard documentation. + // **IMPORTANT** - The only transport type currently supported by the Database Service is ASYNC. + GetTransportType() CreateDataGuardAssociationDetailsTransportTypeEnum +} + +type createdataguardassociationdetails struct { + JsonData []byte + DatabaseAdminPassword *string `mandatory:"true" json:"databaseAdminPassword"` + ProtectionMode CreateDataGuardAssociationDetailsProtectionModeEnum `mandatory:"true" json:"protectionMode"` + TransportType CreateDataGuardAssociationDetailsTransportTypeEnum `mandatory:"true" json:"transportType"` + CreationType string `json:"creationType"` +} + +// UnmarshalJSON unmarshals json +func (m *createdataguardassociationdetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalercreatedataguardassociationdetails createdataguardassociationdetails + s := struct { + Model Unmarshalercreatedataguardassociationdetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.DatabaseAdminPassword = s.Model.DatabaseAdminPassword + m.ProtectionMode = s.Model.ProtectionMode + m.TransportType = s.Model.TransportType + m.CreationType = s.Model.CreationType + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *createdataguardassociationdetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.CreationType { + case "ExistingDbSystem": + mm := CreateDataGuardAssociationToExistingDbSystemDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +//GetDatabaseAdminPassword returns DatabaseAdminPassword +func (m createdataguardassociationdetails) GetDatabaseAdminPassword() *string { + return m.DatabaseAdminPassword +} + +//GetProtectionMode returns ProtectionMode +func (m createdataguardassociationdetails) GetProtectionMode() CreateDataGuardAssociationDetailsProtectionModeEnum { + return m.ProtectionMode +} + +//GetTransportType returns TransportType +func (m createdataguardassociationdetails) GetTransportType() CreateDataGuardAssociationDetailsTransportTypeEnum { + 
return m.TransportType +} + +func (m createdataguardassociationdetails) String() string { + return common.PointerString(m) +} + +// CreateDataGuardAssociationDetailsProtectionModeEnum Enum with underlying type: string +type CreateDataGuardAssociationDetailsProtectionModeEnum string + +// Set of constants representing the allowable values for CreateDataGuardAssociationDetailsProtectionMode +const ( + CreateDataGuardAssociationDetailsProtectionModeAvailability CreateDataGuardAssociationDetailsProtectionModeEnum = "MAXIMUM_AVAILABILITY" + CreateDataGuardAssociationDetailsProtectionModePerformance CreateDataGuardAssociationDetailsProtectionModeEnum = "MAXIMUM_PERFORMANCE" + CreateDataGuardAssociationDetailsProtectionModeProtection CreateDataGuardAssociationDetailsProtectionModeEnum = "MAXIMUM_PROTECTION" +) + +var mappingCreateDataGuardAssociationDetailsProtectionMode = map[string]CreateDataGuardAssociationDetailsProtectionModeEnum{ + "MAXIMUM_AVAILABILITY": CreateDataGuardAssociationDetailsProtectionModeAvailability, + "MAXIMUM_PERFORMANCE": CreateDataGuardAssociationDetailsProtectionModePerformance, + "MAXIMUM_PROTECTION": CreateDataGuardAssociationDetailsProtectionModeProtection, +} + +// GetCreateDataGuardAssociationDetailsProtectionModeEnumValues Enumerates the set of values for CreateDataGuardAssociationDetailsProtectionMode +func GetCreateDataGuardAssociationDetailsProtectionModeEnumValues() []CreateDataGuardAssociationDetailsProtectionModeEnum { + values := make([]CreateDataGuardAssociationDetailsProtectionModeEnum, 0) + for _, v := range mappingCreateDataGuardAssociationDetailsProtectionMode { + values = append(values, v) + } + return values +} + +// CreateDataGuardAssociationDetailsTransportTypeEnum Enum with underlying type: string +type CreateDataGuardAssociationDetailsTransportTypeEnum string + +// Set of constants representing the allowable values for CreateDataGuardAssociationDetailsTransportType +const ( + CreateDataGuardAssociationDetailsTransportTypeSync CreateDataGuardAssociationDetailsTransportTypeEnum = "SYNC" + CreateDataGuardAssociationDetailsTransportTypeAsync CreateDataGuardAssociationDetailsTransportTypeEnum = "ASYNC" + CreateDataGuardAssociationDetailsTransportTypeFastsync CreateDataGuardAssociationDetailsTransportTypeEnum = "FASTSYNC" +) + +var mappingCreateDataGuardAssociationDetailsTransportType = map[string]CreateDataGuardAssociationDetailsTransportTypeEnum{ + "SYNC": CreateDataGuardAssociationDetailsTransportTypeSync, + "ASYNC": CreateDataGuardAssociationDetailsTransportTypeAsync, + "FASTSYNC": CreateDataGuardAssociationDetailsTransportTypeFastsync, +} + +// GetCreateDataGuardAssociationDetailsTransportTypeEnumValues Enumerates the set of values for CreateDataGuardAssociationDetailsTransportType +func GetCreateDataGuardAssociationDetailsTransportTypeEnumValues() []CreateDataGuardAssociationDetailsTransportTypeEnum { + values := make([]CreateDataGuardAssociationDetailsTransportTypeEnum, 0) + for _, v := range mappingCreateDataGuardAssociationDetailsTransportType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_request_response.go new file mode 100644 index 000000000..b65888e01 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_request_response.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. 
All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateDataGuardAssociationRequest wrapper for the CreateDataGuardAssociation operation +type CreateDataGuardAssociationRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` + + // A request to create a Data Guard association. + CreateDataGuardAssociationDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateDataGuardAssociationRequest) String() string { + return common.PointerString(request) +} + +// CreateDataGuardAssociationResponse wrapper for the CreateDataGuardAssociation operation +type CreateDataGuardAssociationResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DataGuardAssociation instance + DataGuardAssociation `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateDataGuardAssociationResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_to_existing_db_system_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_to_existing_db_system_details.go new file mode 100644 index 000000000..0badd6d50 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_data_guard_association_to_existing_db_system_details.go @@ -0,0 +1,79 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDataGuardAssociationToExistingDbSystemDetails The configuration details for creating a Data Guard association to an existing database. +type CreateDataGuardAssociationToExistingDbSystemDetails struct { + + // A strong password for the `SYS`, `SYSTEM`, and `PDB Admin` users to apply during standby creation. + // The password must contain no fewer than nine characters and include: + // * At least two uppercase characters. + // * At least two lowercase characters. + // * At least two numeric characters. + // * At least two special characters. Valid special characters include "_", "#", and "-" only. 
+ // **The password MUST be the same as the primary admin password.** + DatabaseAdminPassword *string `mandatory:"true" json:"databaseAdminPassword"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the DB System to create the standby database on. + PeerDbSystemId *string `mandatory:"false" json:"peerDbSystemId"` + + // The protection mode to set up between the primary and standby databases. For more information, see + // Oracle Data Guard Protection Modes (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-protection-modes.htm#SBYDB02000) + // in the Oracle Data Guard documentation. + // **IMPORTANT** - The only protection mode currently supported by the Database Service is MAXIMUM_PERFORMANCE. + ProtectionMode CreateDataGuardAssociationDetailsProtectionModeEnum `mandatory:"true" json:"protectionMode"` + + // The redo transport type to use for this Data Guard association. Valid values depend on the specified `protectionMode`: + // * MAXIMUM_AVAILABILITY - SYNC or FASTSYNC + // * MAXIMUM_PERFORMANCE - ASYNC + // * MAXIMUM_PROTECTION - SYNC + // For more information, see + // Redo Transport Services (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-redo-transport-services.htm#SBYDB00400) + // in the Oracle Data Guard documentation. + // **IMPORTANT** - The only transport type currently supported by the Database Service is ASYNC. + TransportType CreateDataGuardAssociationDetailsTransportTypeEnum `mandatory:"true" json:"transportType"` +} + +//GetDatabaseAdminPassword returns DatabaseAdminPassword +func (m CreateDataGuardAssociationToExistingDbSystemDetails) GetDatabaseAdminPassword() *string { + return m.DatabaseAdminPassword +} + +//GetProtectionMode returns ProtectionMode +func (m CreateDataGuardAssociationToExistingDbSystemDetails) GetProtectionMode() CreateDataGuardAssociationDetailsProtectionModeEnum { + return m.ProtectionMode +} + +//GetTransportType returns TransportType +func (m CreateDataGuardAssociationToExistingDbSystemDetails) GetTransportType() CreateDataGuardAssociationDetailsTransportTypeEnum { + return m.TransportType +} + +func (m CreateDataGuardAssociationToExistingDbSystemDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m CreateDataGuardAssociationToExistingDbSystemDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeCreateDataGuardAssociationToExistingDbSystemDetails CreateDataGuardAssociationToExistingDbSystemDetails + s := struct { + DiscriminatorParam string `json:"creationType"` + MarshalTypeCreateDataGuardAssociationToExistingDbSystemDetails + }{ + "ExistingDbSystem", + (MarshalTypeCreateDataGuardAssociationToExistingDbSystemDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_database_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_database_details.go new file mode 100644 index 000000000..0565459d5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_database_details.go @@ -0,0 +1,66 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDatabaseDetails The representation of CreateDatabaseDetails +type CreateDatabaseDetails struct { + + // A strong password for SYS, SYSTEM, and PDB Admin. 
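As noted above, `ExistingDbSystem` is the only supported `creationType`, and its `MarshalJSON` adds that discriminator to the body, so a caller only fills in the documented fields. A minimal sketch using only types from this vendor drop (the `strPtr` helper, OCIDs, and password are illustrative placeholders):

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/database"
)

// strPtr is a local convenience helper for this sketch; the SDK models use *string fields.
func strPtr(s string) *string { return &s }

func main() {
	// The ExistingDbSystem details satisfy the CreateDataGuardAssociationDetails interface;
	// MarshalJSON injects "creationType":"ExistingDbSystem" when the body is serialized.
	details := database.CreateDataGuardAssociationToExistingDbSystemDetails{
		DatabaseAdminPassword: strPtr("BEstrO0ng_#12"),                   // placeholder; must equal the primary admin password
		PeerDbSystemId:        strPtr("ocid1.dbsystem.oc1..examplepeer"), // placeholder OCID
		ProtectionMode:        database.CreateDataGuardAssociationDetailsProtectionModePerformance, // only supported mode
		TransportType:         database.CreateDataGuardAssociationDetailsTransportTypeAsync,        // only supported transport
	}

	req := database.CreateDataGuardAssociationRequest{
		DatabaseId:                        strPtr("ocid1.database.oc1..exampleprimary"), // placeholder OCID (path parameter)
		CreateDataGuardAssociationDetails: details,
	}
	fmt.Println(req.String())
}
```

The `CreateDatabaseDetails` model continues below with the same admin-password rules used here.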
The password must be at least nine characters and contain at least two uppercase, two lowercase, two numbers, and two special characters. The special characters must be _, \#, or -. + AdminPassword *string `mandatory:"true" json:"adminPassword"` + + // The database name. It must begin with an alphabetic character and can contain a maximum of eight alphanumeric characters. Special characters are not permitted. + DbName *string `mandatory:"true" json:"dbName"` + + // The character set for the database. The default is AL32UTF8. Allowed values are: + // AL32UTF8, AR8ADOS710, AR8ADOS720, AR8APTEC715, AR8ARABICMACS, AR8ASMO8X, AR8ISO8859P6, AR8MSWIN1256, AR8MUSSAD768, AR8NAFITHA711, AR8NAFITHA721, AR8SAKHR706, AR8SAKHR707, AZ8ISO8859P9E, BG8MSWIN, BG8PC437S, BLT8CP921, BLT8ISO8859P13, BLT8MSWIN1257, BLT8PC775, BN8BSCII, CDN8PC863, CEL8ISO8859P14, CL8ISO8859P5, CL8ISOIR111, CL8KOI8R, CL8KOI8U, CL8MACCYRILLICS, CL8MSWIN1251, EE8ISO8859P2, EE8MACCES, EE8MACCROATIANS, EE8MSWIN1250, EE8PC852, EL8DEC, EL8ISO8859P7, EL8MACGREEKS, EL8MSWIN1253, EL8PC437S, EL8PC851, EL8PC869, ET8MSWIN923, HU8ABMOD, HU8CWI2, IN8ISCII, IS8PC861, IW8ISO8859P8, IW8MACHEBREWS, IW8MSWIN1255, IW8PC1507, JA16EUC, JA16EUCTILDE, JA16SJIS, JA16SJISTILDE, JA16VMS, KO16KSCCS, KO16MSWIN949, LA8ISO6937, LA8PASSPORT, LT8MSWIN921, LT8PC772, LT8PC774, LV8PC1117, LV8PC8LR, LV8RST104090, N8PC865, NE8ISO8859P10, NEE8ISO8859P4, RU8BESTA, RU8PC855, RU8PC866, SE8ISO8859P3, TH8MACTHAIS, TH8TISASCII, TR8DEC, TR8MACTURKISHS, TR8MSWIN1254, TR8PC857, US7ASCII, US8PC437, UTF8, VN8MSWIN1258, VN8VN3, WE8DEC, WE8DG, WE8ISO8859P15, WE8ISO8859P9, WE8MACROMAN8S, WE8MSWIN1252, WE8NCR4970, WE8NEXTSTEP, WE8PC850, WE8PC858, WE8PC860, WE8ROMAN8, ZHS16CGB231280, ZHS16GBK, ZHT16BIG5, ZHT16CCDC, ZHT16DBT, ZHT16HKSCS, ZHT16MSWIN950, ZHT32EUC, ZHT32SOPS, ZHT32TRIS + CharacterSet *string `mandatory:"false" json:"characterSet"` + + DbBackupConfig *DbBackupConfig `mandatory:"false" json:"dbBackupConfig"` + + // Database workload type. + DbWorkload CreateDatabaseDetailsDbWorkloadEnum `mandatory:"false" json:"dbWorkload,omitempty"` + + // National character set for the database. The default is AL16UTF16. Allowed values are: + // AL16UTF16 or UTF8. + NcharacterSet *string `mandatory:"false" json:"ncharacterSet"` + + // Pluggable database name. It must begin with an alphabetic character and can contain a maximum of eight alphanumeric characters. Special characters are not permitted. Pluggable database should not be same as database name. 
+ PdbName *string `mandatory:"false" json:"pdbName"` +} + +func (m CreateDatabaseDetails) String() string { + return common.PointerString(m) +} + +// CreateDatabaseDetailsDbWorkloadEnum Enum with underlying type: string +type CreateDatabaseDetailsDbWorkloadEnum string + +// Set of constants representing the allowable values for CreateDatabaseDetailsDbWorkload +const ( + CreateDatabaseDetailsDbWorkloadOltp CreateDatabaseDetailsDbWorkloadEnum = "OLTP" + CreateDatabaseDetailsDbWorkloadDss CreateDatabaseDetailsDbWorkloadEnum = "DSS" +) + +var mappingCreateDatabaseDetailsDbWorkload = map[string]CreateDatabaseDetailsDbWorkloadEnum{ + "OLTP": CreateDatabaseDetailsDbWorkloadOltp, + "DSS": CreateDatabaseDetailsDbWorkloadDss, +} + +// GetCreateDatabaseDetailsDbWorkloadEnumValues Enumerates the set of values for CreateDatabaseDetailsDbWorkload +func GetCreateDatabaseDetailsDbWorkloadEnumValues() []CreateDatabaseDetailsDbWorkloadEnum { + values := make([]CreateDatabaseDetailsDbWorkloadEnum, 0) + for _, v := range mappingCreateDatabaseDetailsDbWorkload { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_database_from_backup_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_database_from_backup_details.go new file mode 100644 index 000000000..9c25aadc4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_database_from_backup_details.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDatabaseFromBackupDetails The representation of CreateDatabaseFromBackupDetails +type CreateDatabaseFromBackupDetails struct { + + // A strong password for SYS, SYSTEM, PDB Admin and TDE Wallet. The password must be at least nine characters and contain at least two uppercase, two lowercase, two numbers, and two special characters. The special characters must be _, \#, or -. + AdminPassword *string `mandatory:"true" json:"adminPassword"` + + // The backup OCID. + BackupId *string `mandatory:"true" json:"backupId"` + + // The password to open the TDE wallet. + BackupTDEPassword *string `mandatory:"true" json:"backupTDEPassword"` +} + +func (m CreateDatabaseFromBackupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_details.go new file mode 100644 index 000000000..93eb4c928 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_details.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDbHomeDetails The representation of CreateDbHomeDetails +type CreateDbHomeDetails struct { + Database *CreateDatabaseDetails `mandatory:"true" json:"database"` + + // A valid Oracle database version. To get a list of supported versions, use the ListDbVersions operation. + DbVersion *string `mandatory:"true" json:"dbVersion"` + + // The user-provided name of the database home. 
+ DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m CreateDbHomeDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_request_response.go new file mode 100644 index 000000000..6df0ab66c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateDbHomeRequest wrapper for the CreateDbHome operation +type CreateDbHomeRequest struct { + + // Request to create a new DB Home. + CreateDbHomeWithDbSystemIdBase `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateDbHomeRequest) String() string { + return common.PointerString(request) +} + +// CreateDbHomeResponse wrapper for the CreateDbHome operation +type CreateDbHomeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbHome instance + DbHome `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateDbHomeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_base.go b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_base.go new file mode 100644 index 000000000..670c00026 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_base.go @@ -0,0 +1,80 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDbHomeWithDbSystemIdBase The representation of CreateDbHomeWithDbSystemIdBase +type CreateDbHomeWithDbSystemIdBase interface { + + // The OCID of the DB System. + GetDbSystemId() *string + + // The user-provided name of the database home. 
+ GetDisplayName() *string +} + +type createdbhomewithdbsystemidbase struct { + JsonData []byte + DbSystemId *string `mandatory:"true" json:"dbSystemId"` + DisplayName *string `mandatory:"false" json:"displayName"` + Source string `json:"source"` +} + +// UnmarshalJSON unmarshals json +func (m *createdbhomewithdbsystemidbase) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalercreatedbhomewithdbsystemidbase createdbhomewithdbsystemidbase + s := struct { + Model Unmarshalercreatedbhomewithdbsystemidbase + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.DbSystemId = s.Model.DbSystemId + m.DisplayName = s.Model.DisplayName + m.Source = s.Model.Source + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *createdbhomewithdbsystemidbase) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.Source { + case "DB_BACKUP": + mm := CreateDbHomeWithDbSystemIdFromBackupDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + case "NONE": + mm := CreateDbHomeWithDbSystemIdDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +//GetDbSystemId returns DbSystemId +func (m createdbhomewithdbsystemidbase) GetDbSystemId() *string { + return m.DbSystemId +} + +//GetDisplayName returns DisplayName +func (m createdbhomewithdbsystemidbase) GetDisplayName() *string { + return m.DisplayName +} + +func (m createdbhomewithdbsystemidbase) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_details.go new file mode 100644 index 000000000..92795f765 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_details.go @@ -0,0 +1,57 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDbHomeWithDbSystemIdDetails The representation of CreateDbHomeWithDbSystemIdDetails +type CreateDbHomeWithDbSystemIdDetails struct { + + // The OCID of the DB System. + DbSystemId *string `mandatory:"true" json:"dbSystemId"` + + Database *CreateDatabaseDetails `mandatory:"true" json:"database"` + + // A valid Oracle database version. To get a list of supported versions, use the ListDbVersions operation. + DbVersion *string `mandatory:"true" json:"dbVersion"` + + // The user-provided name of the database home. 
+ DisplayName *string `mandatory:"false" json:"displayName"` +} + +//GetDbSystemId returns DbSystemId +func (m CreateDbHomeWithDbSystemIdDetails) GetDbSystemId() *string { + return m.DbSystemId +} + +//GetDisplayName returns DisplayName +func (m CreateDbHomeWithDbSystemIdDetails) GetDisplayName() *string { + return m.DisplayName +} + +func (m CreateDbHomeWithDbSystemIdDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m CreateDbHomeWithDbSystemIdDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeCreateDbHomeWithDbSystemIdDetails CreateDbHomeWithDbSystemIdDetails + s := struct { + DiscriminatorParam string `json:"source"` + MarshalTypeCreateDbHomeWithDbSystemIdDetails + }{ + "NONE", + (MarshalTypeCreateDbHomeWithDbSystemIdDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_from_backup_details.go b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_from_backup_details.go new file mode 100644 index 000000000..98f9fc305 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/create_db_home_with_db_system_id_from_backup_details.go @@ -0,0 +1,54 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateDbHomeWithDbSystemIdFromBackupDetails The representation of CreateDbHomeWithDbSystemIdFromBackupDetails +type CreateDbHomeWithDbSystemIdFromBackupDetails struct { + + // The OCID of the DB System. + DbSystemId *string `mandatory:"true" json:"dbSystemId"` + + Database *CreateDatabaseFromBackupDetails `mandatory:"true" json:"database"` + + // The user-provided name of the database home. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +//GetDbSystemId returns DbSystemId +func (m CreateDbHomeWithDbSystemIdFromBackupDetails) GetDbSystemId() *string { + return m.DbSystemId +} + +//GetDisplayName returns DisplayName +func (m CreateDbHomeWithDbSystemIdFromBackupDetails) GetDisplayName() *string { + return m.DisplayName +} + +func (m CreateDbHomeWithDbSystemIdFromBackupDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m CreateDbHomeWithDbSystemIdFromBackupDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeCreateDbHomeWithDbSystemIdFromBackupDetails CreateDbHomeWithDbSystemIdFromBackupDetails + s := struct { + DiscriminatorParam string `json:"source"` + MarshalTypeCreateDbHomeWithDbSystemIdFromBackupDetails + }{ + "DB_BACKUP", + (MarshalTypeCreateDbHomeWithDbSystemIdFromBackupDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/data_guard_association.go b/vendor/github.com/oracle/oci-go-sdk/database/data_guard_association.go new file mode 100644 index 000000000..d572feb42 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/data_guard_association.go @@ -0,0 +1,211 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DataGuardAssociation The properties that define a Data Guard association. 
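Restoring into a new DB Home uses the `DB_BACKUP` variant of the polymorphic `CreateDbHomeWithDbSystemIdBase` shown above, with `CreateDatabaseFromBackupDetails` as its payload. A minimal sketch using only types from this vendor drop (the `strPtr` helper, OCIDs, and passwords are illustrative placeholders):

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/database"
)

// strPtr is a local convenience helper for this sketch; the SDK models use *string fields.
func strPtr(s string) *string { return &s }

func main() {
	req := database.CreateDbHomeRequest{
		// The DB_BACKUP variant satisfies CreateDbHomeWithDbSystemIdBase; its MarshalJSON
		// adds "source":"DB_BACKUP" to the request body.
		CreateDbHomeWithDbSystemIdBase: database.CreateDbHomeWithDbSystemIdFromBackupDetails{
			DbSystemId: strPtr("ocid1.dbsystem.oc1..example"), // placeholder OCID
			Database: &database.CreateDatabaseFromBackupDetails{
				AdminPassword:     strPtr("BEstrO0ng_#12"),               // placeholder admin password
				BackupId:          strPtr("ocid1.dbbackup.oc1..example"), // placeholder OCID
				BackupTDEPassword: strPtr("BEstrO0ng_#12"),               // placeholder TDE wallet password
			},
			DisplayName: strPtr("restored-db-home"),
		},
	}
	fmt.Println(req.String())
}
```

The `DataGuardAssociation` model definition continues below.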
+// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an +// administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +// For information about endpoints and signing API requests, see +// About the API (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm). For information about available SDKs and tools, see +// SDKS and Other Tools (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdks.htm). +type DataGuardAssociation struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the reporting database. + DatabaseId *string `mandatory:"true" json:"databaseId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the Data Guard association. + Id *string `mandatory:"true" json:"id"` + + // The current state of the Data Guard association. + LifecycleState DataGuardAssociationLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the DB System containing the associated + // peer database. + PeerDbSystemId *string `mandatory:"true" json:"peerDbSystemId"` + + // The role of the peer database in this Data Guard association. + PeerRole DataGuardAssociationPeerRoleEnum `mandatory:"true" json:"peerRole"` + + // The protection mode of this Data Guard association. For more information, see + // Oracle Data Guard Protection Modes (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-protection-modes.htm#SBYDB02000) + // in the Oracle Data Guard documentation. + ProtectionMode DataGuardAssociationProtectionModeEnum `mandatory:"true" json:"protectionMode"` + + // The role of the reporting database in this Data Guard association. + Role DataGuardAssociationRoleEnum `mandatory:"true" json:"role"` + + // The lag time between updates to the primary database and application of the redo data on the standby database, + // as computed by the reporting database. + // Example: `9 seconds` + ApplyLag *string `mandatory:"false" json:"applyLag"` + + // The rate at which redo logs are synced between the associated databases. + // Example: `180 Mb per second` + ApplyRate *string `mandatory:"false" json:"applyRate"` + + // Additional information about the current lifecycleState, if available. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the peer database's Data Guard association. + PeerDataGuardAssociationId *string `mandatory:"false" json:"peerDataGuardAssociationId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the associated peer database. + PeerDatabaseId *string `mandatory:"false" json:"peerDatabaseId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the database home containing the associated peer database. + PeerDbHomeId *string `mandatory:"false" json:"peerDbHomeId"` + + // The date and time the Data Guard Association was created. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The redo transport type used by this Data Guard association. 
For more information, see + // Redo Transport Services (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-redo-transport-services.htm#SBYDB00400) + // in the Oracle Data Guard documentation. + TransportType DataGuardAssociationTransportTypeEnum `mandatory:"false" json:"transportType,omitempty"` +} + +func (m DataGuardAssociation) String() string { + return common.PointerString(m) +} + +// DataGuardAssociationLifecycleStateEnum Enum with underlying type: string +type DataGuardAssociationLifecycleStateEnum string + +// Set of constants representing the allowable values for DataGuardAssociationLifecycleState +const ( + DataGuardAssociationLifecycleStateProvisioning DataGuardAssociationLifecycleStateEnum = "PROVISIONING" + DataGuardAssociationLifecycleStateAvailable DataGuardAssociationLifecycleStateEnum = "AVAILABLE" + DataGuardAssociationLifecycleStateUpdating DataGuardAssociationLifecycleStateEnum = "UPDATING" + DataGuardAssociationLifecycleStateTerminating DataGuardAssociationLifecycleStateEnum = "TERMINATING" + DataGuardAssociationLifecycleStateTerminated DataGuardAssociationLifecycleStateEnum = "TERMINATED" + DataGuardAssociationLifecycleStateFailed DataGuardAssociationLifecycleStateEnum = "FAILED" +) + +var mappingDataGuardAssociationLifecycleState = map[string]DataGuardAssociationLifecycleStateEnum{ + "PROVISIONING": DataGuardAssociationLifecycleStateProvisioning, + "AVAILABLE": DataGuardAssociationLifecycleStateAvailable, + "UPDATING": DataGuardAssociationLifecycleStateUpdating, + "TERMINATING": DataGuardAssociationLifecycleStateTerminating, + "TERMINATED": DataGuardAssociationLifecycleStateTerminated, + "FAILED": DataGuardAssociationLifecycleStateFailed, +} + +// GetDataGuardAssociationLifecycleStateEnumValues Enumerates the set of values for DataGuardAssociationLifecycleState +func GetDataGuardAssociationLifecycleStateEnumValues() []DataGuardAssociationLifecycleStateEnum { + values := make([]DataGuardAssociationLifecycleStateEnum, 0) + for _, v := range mappingDataGuardAssociationLifecycleState { + values = append(values, v) + } + return values +} + +// DataGuardAssociationPeerRoleEnum Enum with underlying type: string +type DataGuardAssociationPeerRoleEnum string + +// Set of constants representing the allowable values for DataGuardAssociationPeerRole +const ( + DataGuardAssociationPeerRolePrimary DataGuardAssociationPeerRoleEnum = "PRIMARY" + DataGuardAssociationPeerRoleStandby DataGuardAssociationPeerRoleEnum = "STANDBY" + DataGuardAssociationPeerRoleDisabledStandby DataGuardAssociationPeerRoleEnum = "DISABLED_STANDBY" +) + +var mappingDataGuardAssociationPeerRole = map[string]DataGuardAssociationPeerRoleEnum{ + "PRIMARY": DataGuardAssociationPeerRolePrimary, + "STANDBY": DataGuardAssociationPeerRoleStandby, + "DISABLED_STANDBY": DataGuardAssociationPeerRoleDisabledStandby, +} + +// GetDataGuardAssociationPeerRoleEnumValues Enumerates the set of values for DataGuardAssociationPeerRole +func GetDataGuardAssociationPeerRoleEnumValues() []DataGuardAssociationPeerRoleEnum { + values := make([]DataGuardAssociationPeerRoleEnum, 0) + for _, v := range mappingDataGuardAssociationPeerRole { + values = append(values, v) + } + return values +} + +// DataGuardAssociationProtectionModeEnum Enum with underlying type: string +type DataGuardAssociationProtectionModeEnum string + +// Set of constants representing the allowable values for DataGuardAssociationProtectionMode +const ( + DataGuardAssociationProtectionModeAvailability DataGuardAssociationProtectionModeEnum = 
"MAXIMUM_AVAILABILITY" + DataGuardAssociationProtectionModePerformance DataGuardAssociationProtectionModeEnum = "MAXIMUM_PERFORMANCE" + DataGuardAssociationProtectionModeProtection DataGuardAssociationProtectionModeEnum = "MAXIMUM_PROTECTION" +) + +var mappingDataGuardAssociationProtectionMode = map[string]DataGuardAssociationProtectionModeEnum{ + "MAXIMUM_AVAILABILITY": DataGuardAssociationProtectionModeAvailability, + "MAXIMUM_PERFORMANCE": DataGuardAssociationProtectionModePerformance, + "MAXIMUM_PROTECTION": DataGuardAssociationProtectionModeProtection, +} + +// GetDataGuardAssociationProtectionModeEnumValues Enumerates the set of values for DataGuardAssociationProtectionMode +func GetDataGuardAssociationProtectionModeEnumValues() []DataGuardAssociationProtectionModeEnum { + values := make([]DataGuardAssociationProtectionModeEnum, 0) + for _, v := range mappingDataGuardAssociationProtectionMode { + values = append(values, v) + } + return values +} + +// DataGuardAssociationRoleEnum Enum with underlying type: string +type DataGuardAssociationRoleEnum string + +// Set of constants representing the allowable values for DataGuardAssociationRole +const ( + DataGuardAssociationRolePrimary DataGuardAssociationRoleEnum = "PRIMARY" + DataGuardAssociationRoleStandby DataGuardAssociationRoleEnum = "STANDBY" + DataGuardAssociationRoleDisabledStandby DataGuardAssociationRoleEnum = "DISABLED_STANDBY" +) + +var mappingDataGuardAssociationRole = map[string]DataGuardAssociationRoleEnum{ + "PRIMARY": DataGuardAssociationRolePrimary, + "STANDBY": DataGuardAssociationRoleStandby, + "DISABLED_STANDBY": DataGuardAssociationRoleDisabledStandby, +} + +// GetDataGuardAssociationRoleEnumValues Enumerates the set of values for DataGuardAssociationRole +func GetDataGuardAssociationRoleEnumValues() []DataGuardAssociationRoleEnum { + values := make([]DataGuardAssociationRoleEnum, 0) + for _, v := range mappingDataGuardAssociationRole { + values = append(values, v) + } + return values +} + +// DataGuardAssociationTransportTypeEnum Enum with underlying type: string +type DataGuardAssociationTransportTypeEnum string + +// Set of constants representing the allowable values for DataGuardAssociationTransportType +const ( + DataGuardAssociationTransportTypeSync DataGuardAssociationTransportTypeEnum = "SYNC" + DataGuardAssociationTransportTypeAsync DataGuardAssociationTransportTypeEnum = "ASYNC" + DataGuardAssociationTransportTypeFastsync DataGuardAssociationTransportTypeEnum = "FASTSYNC" +) + +var mappingDataGuardAssociationTransportType = map[string]DataGuardAssociationTransportTypeEnum{ + "SYNC": DataGuardAssociationTransportTypeSync, + "ASYNC": DataGuardAssociationTransportTypeAsync, + "FASTSYNC": DataGuardAssociationTransportTypeFastsync, +} + +// GetDataGuardAssociationTransportTypeEnumValues Enumerates the set of values for DataGuardAssociationTransportType +func GetDataGuardAssociationTransportTypeEnumValues() []DataGuardAssociationTransportTypeEnum { + values := make([]DataGuardAssociationTransportTypeEnum, 0) + for _, v := range mappingDataGuardAssociationTransportType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/data_guard_association_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/data_guard_association_summary.go new file mode 100644 index 000000000..ebfc69c62 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/data_guard_association_summary.go @@ -0,0 +1,211 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. 
All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DataGuardAssociationSummary The properties that define a Data Guard association. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an +// administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +// For information about endpoints and signing API requests, see +// About the API (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm). For information about available SDKs and tools, see +// SDKS and Other Tools (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdks.htm). +type DataGuardAssociationSummary struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the reporting database. + DatabaseId *string `mandatory:"true" json:"databaseId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the Data Guard association. + Id *string `mandatory:"true" json:"id"` + + // The current state of the Data Guard association. + LifecycleState DataGuardAssociationSummaryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the DB System containing the associated + // peer database. + PeerDbSystemId *string `mandatory:"true" json:"peerDbSystemId"` + + // The role of the peer database in this Data Guard association. + PeerRole DataGuardAssociationSummaryPeerRoleEnum `mandatory:"true" json:"peerRole"` + + // The protection mode of this Data Guard association. For more information, see + // Oracle Data Guard Protection Modes (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-protection-modes.htm#SBYDB02000) + // in the Oracle Data Guard documentation. + ProtectionMode DataGuardAssociationSummaryProtectionModeEnum `mandatory:"true" json:"protectionMode"` + + // The role of the reporting database in this Data Guard association. + Role DataGuardAssociationSummaryRoleEnum `mandatory:"true" json:"role"` + + // The lag time between updates to the primary database and application of the redo data on the standby database, + // as computed by the reporting database. + // Example: `9 seconds` + ApplyLag *string `mandatory:"false" json:"applyLag"` + + // The rate at which redo logs are synced between the associated databases. + // Example: `180 Mb per second` + ApplyRate *string `mandatory:"false" json:"applyRate"` + + // Additional information about the current lifecycleState, if available. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the peer database's Data Guard association. + PeerDataGuardAssociationId *string `mandatory:"false" json:"peerDataGuardAssociationId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the associated peer database. 
+ PeerDatabaseId *string `mandatory:"false" json:"peerDatabaseId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the database home containing the associated peer database. + PeerDbHomeId *string `mandatory:"false" json:"peerDbHomeId"` + + // The date and time the Data Guard Association was created. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The redo transport type used by this Data Guard association. For more information, see + // Redo Transport Services (http://docs.oracle.com/database/122/SBYDB/oracle-data-guard-redo-transport-services.htm#SBYDB00400) + // in the Oracle Data Guard documentation. + TransportType DataGuardAssociationSummaryTransportTypeEnum `mandatory:"false" json:"transportType,omitempty"` +} + +func (m DataGuardAssociationSummary) String() string { + return common.PointerString(m) +} + +// DataGuardAssociationSummaryLifecycleStateEnum Enum with underlying type: string +type DataGuardAssociationSummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for DataGuardAssociationSummaryLifecycleState +const ( + DataGuardAssociationSummaryLifecycleStateProvisioning DataGuardAssociationSummaryLifecycleStateEnum = "PROVISIONING" + DataGuardAssociationSummaryLifecycleStateAvailable DataGuardAssociationSummaryLifecycleStateEnum = "AVAILABLE" + DataGuardAssociationSummaryLifecycleStateUpdating DataGuardAssociationSummaryLifecycleStateEnum = "UPDATING" + DataGuardAssociationSummaryLifecycleStateTerminating DataGuardAssociationSummaryLifecycleStateEnum = "TERMINATING" + DataGuardAssociationSummaryLifecycleStateTerminated DataGuardAssociationSummaryLifecycleStateEnum = "TERMINATED" + DataGuardAssociationSummaryLifecycleStateFailed DataGuardAssociationSummaryLifecycleStateEnum = "FAILED" +) + +var mappingDataGuardAssociationSummaryLifecycleState = map[string]DataGuardAssociationSummaryLifecycleStateEnum{ + "PROVISIONING": DataGuardAssociationSummaryLifecycleStateProvisioning, + "AVAILABLE": DataGuardAssociationSummaryLifecycleStateAvailable, + "UPDATING": DataGuardAssociationSummaryLifecycleStateUpdating, + "TERMINATING": DataGuardAssociationSummaryLifecycleStateTerminating, + "TERMINATED": DataGuardAssociationSummaryLifecycleStateTerminated, + "FAILED": DataGuardAssociationSummaryLifecycleStateFailed, +} + +// GetDataGuardAssociationSummaryLifecycleStateEnumValues Enumerates the set of values for DataGuardAssociationSummaryLifecycleState +func GetDataGuardAssociationSummaryLifecycleStateEnumValues() []DataGuardAssociationSummaryLifecycleStateEnum { + values := make([]DataGuardAssociationSummaryLifecycleStateEnum, 0) + for _, v := range mappingDataGuardAssociationSummaryLifecycleState { + values = append(values, v) + } + return values +} + +// DataGuardAssociationSummaryPeerRoleEnum Enum with underlying type: string +type DataGuardAssociationSummaryPeerRoleEnum string + +// Set of constants representing the allowable values for DataGuardAssociationSummaryPeerRole +const ( + DataGuardAssociationSummaryPeerRolePrimary DataGuardAssociationSummaryPeerRoleEnum = "PRIMARY" + DataGuardAssociationSummaryPeerRoleStandby DataGuardAssociationSummaryPeerRoleEnum = "STANDBY" + DataGuardAssociationSummaryPeerRoleDisabledStandby DataGuardAssociationSummaryPeerRoleEnum = "DISABLED_STANDBY" +) + +var mappingDataGuardAssociationSummaryPeerRole = map[string]DataGuardAssociationSummaryPeerRoleEnum{ + "PRIMARY": DataGuardAssociationSummaryPeerRolePrimary, + "STANDBY": 
DataGuardAssociationSummaryPeerRoleStandby, + "DISABLED_STANDBY": DataGuardAssociationSummaryPeerRoleDisabledStandby, +} + +// GetDataGuardAssociationSummaryPeerRoleEnumValues Enumerates the set of values for DataGuardAssociationSummaryPeerRole +func GetDataGuardAssociationSummaryPeerRoleEnumValues() []DataGuardAssociationSummaryPeerRoleEnum { + values := make([]DataGuardAssociationSummaryPeerRoleEnum, 0) + for _, v := range mappingDataGuardAssociationSummaryPeerRole { + values = append(values, v) + } + return values +} + +// DataGuardAssociationSummaryProtectionModeEnum Enum with underlying type: string +type DataGuardAssociationSummaryProtectionModeEnum string + +// Set of constants representing the allowable values for DataGuardAssociationSummaryProtectionMode +const ( + DataGuardAssociationSummaryProtectionModeAvailability DataGuardAssociationSummaryProtectionModeEnum = "MAXIMUM_AVAILABILITY" + DataGuardAssociationSummaryProtectionModePerformance DataGuardAssociationSummaryProtectionModeEnum = "MAXIMUM_PERFORMANCE" + DataGuardAssociationSummaryProtectionModeProtection DataGuardAssociationSummaryProtectionModeEnum = "MAXIMUM_PROTECTION" +) + +var mappingDataGuardAssociationSummaryProtectionMode = map[string]DataGuardAssociationSummaryProtectionModeEnum{ + "MAXIMUM_AVAILABILITY": DataGuardAssociationSummaryProtectionModeAvailability, + "MAXIMUM_PERFORMANCE": DataGuardAssociationSummaryProtectionModePerformance, + "MAXIMUM_PROTECTION": DataGuardAssociationSummaryProtectionModeProtection, +} + +// GetDataGuardAssociationSummaryProtectionModeEnumValues Enumerates the set of values for DataGuardAssociationSummaryProtectionMode +func GetDataGuardAssociationSummaryProtectionModeEnumValues() []DataGuardAssociationSummaryProtectionModeEnum { + values := make([]DataGuardAssociationSummaryProtectionModeEnum, 0) + for _, v := range mappingDataGuardAssociationSummaryProtectionMode { + values = append(values, v) + } + return values +} + +// DataGuardAssociationSummaryRoleEnum Enum with underlying type: string +type DataGuardAssociationSummaryRoleEnum string + +// Set of constants representing the allowable values for DataGuardAssociationSummaryRole +const ( + DataGuardAssociationSummaryRolePrimary DataGuardAssociationSummaryRoleEnum = "PRIMARY" + DataGuardAssociationSummaryRoleStandby DataGuardAssociationSummaryRoleEnum = "STANDBY" + DataGuardAssociationSummaryRoleDisabledStandby DataGuardAssociationSummaryRoleEnum = "DISABLED_STANDBY" +) + +var mappingDataGuardAssociationSummaryRole = map[string]DataGuardAssociationSummaryRoleEnum{ + "PRIMARY": DataGuardAssociationSummaryRolePrimary, + "STANDBY": DataGuardAssociationSummaryRoleStandby, + "DISABLED_STANDBY": DataGuardAssociationSummaryRoleDisabledStandby, +} + +// GetDataGuardAssociationSummaryRoleEnumValues Enumerates the set of values for DataGuardAssociationSummaryRole +func GetDataGuardAssociationSummaryRoleEnumValues() []DataGuardAssociationSummaryRoleEnum { + values := make([]DataGuardAssociationSummaryRoleEnum, 0) + for _, v := range mappingDataGuardAssociationSummaryRole { + values = append(values, v) + } + return values +} + +// DataGuardAssociationSummaryTransportTypeEnum Enum with underlying type: string +type DataGuardAssociationSummaryTransportTypeEnum string + +// Set of constants representing the allowable values for DataGuardAssociationSummaryTransportType +const ( + DataGuardAssociationSummaryTransportTypeSync DataGuardAssociationSummaryTransportTypeEnum = "SYNC" + DataGuardAssociationSummaryTransportTypeAsync 
DataGuardAssociationSummaryTransportTypeEnum = "ASYNC" + DataGuardAssociationSummaryTransportTypeFastsync DataGuardAssociationSummaryTransportTypeEnum = "FASTSYNC" +) + +var mappingDataGuardAssociationSummaryTransportType = map[string]DataGuardAssociationSummaryTransportTypeEnum{ + "SYNC": DataGuardAssociationSummaryTransportTypeSync, + "ASYNC": DataGuardAssociationSummaryTransportTypeAsync, + "FASTSYNC": DataGuardAssociationSummaryTransportTypeFastsync, +} + +// GetDataGuardAssociationSummaryTransportTypeEnumValues Enumerates the set of values for DataGuardAssociationSummaryTransportType +func GetDataGuardAssociationSummaryTransportTypeEnumValues() []DataGuardAssociationSummaryTransportTypeEnum { + values := make([]DataGuardAssociationSummaryTransportTypeEnum, 0) + for _, v := range mappingDataGuardAssociationSummaryTransportType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/database.go b/vendor/github.com/oracle/oci-go-sdk/database/database.go new file mode 100644 index 000000000..be33ef1a8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/database.go @@ -0,0 +1,95 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Database An Oracle database on a DB System. For more information, see Managing Oracle Databases (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Database struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The database name. + DbName *string `mandatory:"true" json:"dbName"` + + // A system-generated name for the database to ensure uniqueness within an Oracle Data Guard group (a primary database and its standby databases). The unique name cannot be changed. + DbUniqueName *string `mandatory:"true" json:"dbUniqueName"` + + // The OCID of the database. + Id *string `mandatory:"true" json:"id"` + + // The current state of the database. + LifecycleState DatabaseLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The character set for the database. + CharacterSet *string `mandatory:"false" json:"characterSet"` + + DbBackupConfig *DbBackupConfig `mandatory:"false" json:"dbBackupConfig"` + + // The OCID of the database home. + DbHomeId *string `mandatory:"false" json:"dbHomeId"` + + // Database workload type. + DbWorkload *string `mandatory:"false" json:"dbWorkload"` + + // Additional information about the current lifecycleState. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The national character set for the database. + NcharacterSet *string `mandatory:"false" json:"ncharacterSet"` + + // Pluggable database name. It must begin with an alphabetic character and can contain a maximum of eight alphanumeric characters. Special characters are not permitted. Pluggable database should not be same as database name. + PdbName *string `mandatory:"false" json:"pdbName"` + + // The date and time the database was created. 
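Each enum in these generated files follows the same pattern: a string-backed type, an unexported `mapping...` map, and an exported `Get...EnumValues` function. A minimal, hypothetical sketch of how a caller might lean on that pattern to validate a raw lifecycle-state string (only the exported helper is assumed here; the mapping map is unexported and not reachable from outside the package):

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/database"
)

// isKnownDataGuardLifecycleState reports whether raw matches one of the
// allowable values enumerated by the generated helper. Order of the values
// is not guaranteed because they come from a map, but membership is stable.
func isKnownDataGuardLifecycleState(raw string) bool {
	for _, v := range database.GetDataGuardAssociationSummaryLifecycleStateEnumValues() {
		if string(v) == raw {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnownDataGuardLifecycleState("AVAILABLE")) // true
	fmt.Println(isKnownDataGuardLifecycleState("PAUSED"))    // false
}
```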
+ TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m Database) String() string { + return common.PointerString(m) +} + +// DatabaseLifecycleStateEnum Enum with underlying type: string +type DatabaseLifecycleStateEnum string + +// Set of constants representing the allowable values for DatabaseLifecycleState +const ( + DatabaseLifecycleStateProvisioning DatabaseLifecycleStateEnum = "PROVISIONING" + DatabaseLifecycleStateAvailable DatabaseLifecycleStateEnum = "AVAILABLE" + DatabaseLifecycleStateUpdating DatabaseLifecycleStateEnum = "UPDATING" + DatabaseLifecycleStateBackupInProgress DatabaseLifecycleStateEnum = "BACKUP_IN_PROGRESS" + DatabaseLifecycleStateTerminating DatabaseLifecycleStateEnum = "TERMINATING" + DatabaseLifecycleStateTerminated DatabaseLifecycleStateEnum = "TERMINATED" + DatabaseLifecycleStateRestoreFailed DatabaseLifecycleStateEnum = "RESTORE_FAILED" + DatabaseLifecycleStateFailed DatabaseLifecycleStateEnum = "FAILED" +) + +var mappingDatabaseLifecycleState = map[string]DatabaseLifecycleStateEnum{ + "PROVISIONING": DatabaseLifecycleStateProvisioning, + "AVAILABLE": DatabaseLifecycleStateAvailable, + "UPDATING": DatabaseLifecycleStateUpdating, + "BACKUP_IN_PROGRESS": DatabaseLifecycleStateBackupInProgress, + "TERMINATING": DatabaseLifecycleStateTerminating, + "TERMINATED": DatabaseLifecycleStateTerminated, + "RESTORE_FAILED": DatabaseLifecycleStateRestoreFailed, + "FAILED": DatabaseLifecycleStateFailed, +} + +// GetDatabaseLifecycleStateEnumValues Enumerates the set of values for DatabaseLifecycleState +func GetDatabaseLifecycleStateEnumValues() []DatabaseLifecycleStateEnum { + values := make([]DatabaseLifecycleStateEnum, 0) + for _, v := range mappingDatabaseLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/database_client.go b/vendor/github.com/oracle/oci-go-sdk/database/database_client.go new file mode 100644 index 000000000..d666cc5f0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/database_client.go @@ -0,0 +1,753 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//DatabaseClient a client for Database +type DatabaseClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +// NewDatabaseClientWithConfigurationProvider Creates a new default Database client with the given configuration provider. +// the configuration provider will be used for the default signer as well as reading the region +func NewDatabaseClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client DatabaseClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + + client = DatabaseClient{BaseClient: baseClient} + client.BasePath = "20160918" + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. 
+func (client *DatabaseClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "database", region) +} + +// setConfigurationProvider sets the configuration provider including the region, and returns an error if it is not valid +func (client *DatabaseClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or nil if none set +func (client *DatabaseClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// CreateBackup Creates a new backup in the specified database based on the request parameters you provide. If you previously used RMAN or dbcli to configure backups and then you switch to using the Console or the API for backups, a new backup configuration is created and associated with your database. This means that you can no longer rely on your previously configured unmanaged backups to work. +func (client DatabaseClient) CreateBackup(ctx context.Context, request CreateBackupRequest) (response CreateBackupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/backups", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateDataGuardAssociation Creates a new Data Guard association. A Data Guard association represents the replication relationship between the +// specified database and a peer database. For more information, see Using Oracle Data Guard (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Tasks/usingdataguard.htm). +// All Oracle Cloud Infrastructure resources, including Data Guard associations, get an Oracle-assigned, unique ID +// called an Oracle Cloud Identifier (OCID). When you create a resource, you can find its OCID in the response. +// You can also retrieve a resource's OCID by using a List API operation on that resource type, or by viewing the +// resource in the Console. For more information, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +func (client DatabaseClient) CreateDataGuardAssociation(ctx context.Context, request CreateDataGuardAssociationRequest) (response CreateDataGuardAssociationResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/databases/{databaseId}/dataGuardAssociations", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateDbHome Creates a new DB Home in the specified DB System based on the request parameters you provide.
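Every operation on this client runs through the same MakeDefaultHTTPRequestWithTaggedStruct / Call / UnmarshalResponse pipeline, so end-to-end use is mostly a matter of building a client from a configuration provider and passing a tagged request struct. A rough sketch under stated assumptions: `common.DefaultConfigProvider` and `common.String` are helpers assumed to exist in the SDK's `common` package, and `GetDatabaseRequest.DatabaseId` plus the embedded `Database` on the response are inferred from the request/response-wrapper pattern visible later in this diff (for example `DbNodeActionRequest` and `DbNodeActionResponse`); treat those names as illustrative rather than authoritative.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/database"
)

func main() {
	// Build the client from a configuration provider (assumed helper:
	// common.DefaultConfigProvider reads the standard ~/.oci/config file).
	client, err := database.NewDatabaseClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	// Fetch a single database by OCID. DatabaseId is assumed to be the
	// request field backing the {databaseId} path parameter.
	resp, err := client.GetDatabase(context.Background(), database.GetDatabaseRequest{
		DatabaseId: common.String("ocid1.database.oc1..exampleuniqueID"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// The response is assumed to embed the Database model, as the other
	// response wrappers in this package do, so its fields are promoted.
	fmt.Println(resp.LifecycleState)
}
```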
+func (client DatabaseClient) CreateDbHome(ctx context.Context, request CreateDbHomeRequest) (response CreateDbHomeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/dbHomes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DbNodeAction Performs an action, such as one of the power actions (start, stop, softreset, or reset), on the specified DB Node. +// **start** - power on +// **stop** - power off +// **softreset** - ACPI shutdown and power on +// **reset** - power off and power on +// Note that the **stop** state has no effect on the resources you consume. +// Billing continues for DB Nodes that you stop, and related resources continue +// to apply against any relevant quotas. You must terminate the DB System +// (TerminateDbSystem) +// to remove its resources from billing and quotas. +func (client DatabaseClient) DbNodeAction(ctx context.Context, request DbNodeActionRequest) (response DbNodeActionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/dbNodes/{dbNodeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteBackup Deletes a full backup. You cannot delete automatic backups using this API. +func (client DatabaseClient) DeleteBackup(ctx context.Context, request DeleteBackupRequest) (response DeleteBackupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/backups/{backupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteDbHome Deletes a DB Home. The DB Home and its database data are local to the DB System and will be lost when it is deleted. Oracle recommends that you back up any data in the DB System prior to deleting it. +func (client DatabaseClient) DeleteDbHome(ctx context.Context, request DeleteDbHomeRequest) (response DeleteDbHomeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/dbHomes/{dbHomeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// FailoverDataGuardAssociation Performs a failover to transition the standby database identified by the `databaseId` parameter into the +// specified Data Guard association's primary role after the existing primary database fails or becomes unreachable. +// A failover might result in data loss depending on the protection mode in effect at the time of the primary +// database failure. 
+func (client DatabaseClient) FailoverDataGuardAssociation(ctx context.Context, request FailoverDataGuardAssociationRequest) (response FailoverDataGuardAssociationResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}/actions/failover", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetBackup Gets information about the specified backup. +func (client DatabaseClient) GetBackup(ctx context.Context, request GetBackupRequest) (response GetBackupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/backups/{backupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDataGuardAssociation Gets the specified Data Guard association's configuration information. +func (client DatabaseClient) GetDataGuardAssociation(ctx context.Context, request GetDataGuardAssociationRequest) (response GetDataGuardAssociationResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDatabase Gets information about a specific database. +func (client DatabaseClient) GetDatabase(ctx context.Context, request GetDatabaseRequest) (response GetDatabaseResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/databases/{databaseId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDbHome Gets information about the specified database home. +func (client DatabaseClient) GetDbHome(ctx context.Context, request GetDbHomeRequest) (response GetDbHomeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbHomes/{dbHomeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDbHomePatch Gets information about a specified patch package. 
+func (client DatabaseClient) GetDbHomePatch(ctx context.Context, request GetDbHomePatchRequest) (response GetDbHomePatchResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbHomes/{dbHomeId}/patches/{patchId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDbHomePatchHistoryEntry Gets the patch history details for the specified patchHistoryEntryId +func (client DatabaseClient) GetDbHomePatchHistoryEntry(ctx context.Context, request GetDbHomePatchHistoryEntryRequest) (response GetDbHomePatchHistoryEntryResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbHomes/{dbHomeId}/patchHistoryEntries/{patchHistoryEntryId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDbNode Gets information about the specified database node. +func (client DatabaseClient) GetDbNode(ctx context.Context, request GetDbNodeRequest) (response GetDbNodeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbNodes/{dbNodeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDbSystem Gets information about the specified DB System. +func (client DatabaseClient) GetDbSystem(ctx context.Context, request GetDbSystemRequest) (response GetDbSystemResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbSystems/{dbSystemId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDbSystemPatch Gets information about a specified patch package. +func (client DatabaseClient) GetDbSystemPatch(ctx context.Context, request GetDbSystemPatchRequest) (response GetDbSystemPatchResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbSystems/{dbSystemId}/patches/{patchId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetDbSystemPatchHistoryEntry Gets the patch history details for the specified patchHistoryEntryId. 
+func (client DatabaseClient) GetDbSystemPatchHistoryEntry(ctx context.Context, request GetDbSystemPatchHistoryEntryRequest) (response GetDbSystemPatchHistoryEntryResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbSystems/{dbSystemId}/patchHistoryEntries/{patchHistoryEntryId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// LaunchDbSystem Launches a new DB System in the specified compartment and Availability Domain. You'll specify a single Oracle +// Database Edition that applies to all the databases on that DB System. The selected edition cannot be changed. +// An initial database is created on the DB System based on the request parameters you provide and some default +// options. For more information, +// see Default Options for the Initial Database (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Tasks/launchingDB.htm#Default_Options_for_the_Initial_Database). +// The DB System will include a command line interface (CLI) that you can use to create additional databases and +// manage existing databases. For more information, see the +// Oracle Database CLI Reference (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/References/odacli.htm#Oracle_Database_CLI_Reference). +func (client DatabaseClient) LaunchDbSystem(ctx context.Context, request LaunchDbSystemRequest) (response LaunchDbSystemResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/dbSystems", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListBackups Gets a list of backups based on the databaseId or compartmentId specified. Either one of the query parameters must be provided. +func (client DatabaseClient) ListBackups(ctx context.Context, request ListBackupsRequest) (response ListBackupsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/backups", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDataGuardAssociations Lists all Data Guard associations for the specified database. +func (client DatabaseClient) ListDataGuardAssociations(ctx context.Context, request ListDataGuardAssociationsRequest) (response ListDataGuardAssociationsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/databases/{databaseId}/dataGuardAssociations", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDatabases Gets a list of the databases in the specified database home. 
+func (client DatabaseClient) ListDatabases(ctx context.Context, request ListDatabasesRequest) (response ListDatabasesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/databases", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbHomePatchHistoryEntries Gets history of the actions taken for patches for the specified database home. +func (client DatabaseClient) ListDbHomePatchHistoryEntries(ctx context.Context, request ListDbHomePatchHistoryEntriesRequest) (response ListDbHomePatchHistoryEntriesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbHomes/{dbHomeId}/patchHistoryEntries", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbHomePatches Lists patches applicable to the requested database home. +func (client DatabaseClient) ListDbHomePatches(ctx context.Context, request ListDbHomePatchesRequest) (response ListDbHomePatchesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbHomes/{dbHomeId}/patches", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbHomes Gets a list of database homes in the specified DB System and compartment. A database home is a directory where Oracle database software is installed. +func (client DatabaseClient) ListDbHomes(ctx context.Context, request ListDbHomesRequest) (response ListDbHomesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbHomes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbNodes Gets a list of database nodes in the specified DB System and compartment. A database node is a server running database software. +func (client DatabaseClient) ListDbNodes(ctx context.Context, request ListDbNodesRequest) (response ListDbNodesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbNodes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbSystemPatchHistoryEntries Gets the history of the patch actions performed on the specified DB System. 
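ListDatabases, like the other List operations here, takes a request struct whose tagged fields become query parameters. A hedged sketch, assuming `CompartmentId` and `DbHomeId` request fields and an `Items` slice of `DatabaseSummary` on the response; those names are inferred from the doc comment and the summary model later in this diff, not confirmed by this hunk.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/database"
)

func main() {
	client, err := database.NewDatabaseClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	// List the databases in one database home. Field names are assumed.
	resp, err := client.ListDatabases(context.Background(), database.ListDatabasesRequest{
		CompartmentId: common.String("ocid1.compartment.oc1..exampleuniqueID"),
		DbHomeId:      common.String("ocid1.dbhome.oc1..exampleuniqueID"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Print the name and lifecycle state of each database summary.
	for _, db := range resp.Items {
		fmt.Printf("%s\t%s\n", *db.DbName, db.LifecycleState)
	}
}
```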
+func (client DatabaseClient) ListDbSystemPatchHistoryEntries(ctx context.Context, request ListDbSystemPatchHistoryEntriesRequest) (response ListDbSystemPatchHistoryEntriesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbSystems/{dbSystemId}/patchHistoryEntries", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbSystemPatches Lists the patches applicable to the requested DB System. +func (client DatabaseClient) ListDbSystemPatches(ctx context.Context, request ListDbSystemPatchesRequest) (response ListDbSystemPatchesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbSystems/{dbSystemId}/patches", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbSystemShapes Gets a list of the shapes that can be used to launch a new DB System. The shape determines resources to allocate to the DB system - CPU cores and memory for VM shapes; CPU cores, memory and storage for non-VM (or bare metal) shapes. +func (client DatabaseClient) ListDbSystemShapes(ctx context.Context, request ListDbSystemShapesRequest) (response ListDbSystemShapesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbSystemShapes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbSystems Gets a list of the DB Systems in the specified compartment. +// +func (client DatabaseClient) ListDbSystems(ctx context.Context, request ListDbSystemsRequest) (response ListDbSystemsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbSystems", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListDbVersions Gets a list of supported Oracle database versions. +func (client DatabaseClient) ListDbVersions(ctx context.Context, request ListDbVersionsRequest) (response ListDbVersionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/dbVersions", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ReinstateDataGuardAssociation Reinstates the database identified by the `databaseId` parameter into the standby role in a Data Guard association. 
+func (client DatabaseClient) ReinstateDataGuardAssociation(ctx context.Context, request ReinstateDataGuardAssociationRequest) (response ReinstateDataGuardAssociationResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}/actions/reinstate", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// RestoreDatabase Restore a Database based on the request parameters you provide. +func (client DatabaseClient) RestoreDatabase(ctx context.Context, request RestoreDatabaseRequest) (response RestoreDatabaseResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/databases/{databaseId}/actions/restore", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// SwitchoverDataGuardAssociation Performs a switchover to transition the primary database of a Data Guard association into a standby role. The +// standby database associated with the `dataGuardAssociationId` assumes the primary database role. +// A switchover guarantees no data loss. +func (client DatabaseClient) SwitchoverDataGuardAssociation(ctx context.Context, request SwitchoverDataGuardAssociationRequest) (response SwitchoverDataGuardAssociationResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/databases/{databaseId}/dataGuardAssociations/{dataGuardAssociationId}/actions/switchover", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// TerminateDbSystem Terminates a DB System and permanently deletes it and any databases running on it, and any storage volumes attached to it. The database data is local to the DB System and will be lost when the system is terminated. Oracle recommends that you back up any data in the DB System prior to terminating it. +func (client DatabaseClient) TerminateDbSystem(ctx context.Context, request TerminateDbSystemRequest) (response TerminateDbSystemResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/dbSystems/{dbSystemId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateDatabase Update a Database based on the request parameters you provide. 
+func (client DatabaseClient) UpdateDatabase(ctx context.Context, request UpdateDatabaseRequest) (response UpdateDatabaseResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/databases/{databaseId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateDbHome Patches the specified dbHome. +func (client DatabaseClient) UpdateDbHome(ctx context.Context, request UpdateDbHomeRequest) (response UpdateDbHomeResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/dbHomes/{dbHomeId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateDbSystem Updates the properties of a DB System, such as the CPU core count. +func (client DatabaseClient) UpdateDbSystem(ctx context.Context, request UpdateDbSystemRequest) (response UpdateDbSystemResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/dbSystems/{dbSystemId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/database_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/database_summary.go new file mode 100644 index 000000000..6191d850f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/database_summary.go @@ -0,0 +1,95 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DatabaseSummary An Oracle database on a DB System. For more information, see Managing Oracle Databases (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DatabaseSummary struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The database name. + DbName *string `mandatory:"true" json:"dbName"` + + // A system-generated name for the database to ensure uniqueness within an Oracle Data Guard group (a primary database and its standby databases). The unique name cannot be changed. + DbUniqueName *string `mandatory:"true" json:"dbUniqueName"` + + // The OCID of the database. + Id *string `mandatory:"true" json:"id"` + + // The current state of the database. + LifecycleState DatabaseSummaryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The character set for the database. 
+ CharacterSet *string `mandatory:"false" json:"characterSet"` + + DbBackupConfig *DbBackupConfig `mandatory:"false" json:"dbBackupConfig"` + + // The OCID of the database home. + DbHomeId *string `mandatory:"false" json:"dbHomeId"` + + // Database workload type. + DbWorkload *string `mandatory:"false" json:"dbWorkload"` + + // Additional information about the current lifecycleState. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The national character set for the database. + NcharacterSet *string `mandatory:"false" json:"ncharacterSet"` + + // Pluggable database name. It must begin with an alphabetic character and can contain a maximum of eight alphanumeric characters. Special characters are not permitted. Pluggable database should not be same as database name. + PdbName *string `mandatory:"false" json:"pdbName"` + + // The date and time the database was created. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m DatabaseSummary) String() string { + return common.PointerString(m) +} + +// DatabaseSummaryLifecycleStateEnum Enum with underlying type: string +type DatabaseSummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for DatabaseSummaryLifecycleState +const ( + DatabaseSummaryLifecycleStateProvisioning DatabaseSummaryLifecycleStateEnum = "PROVISIONING" + DatabaseSummaryLifecycleStateAvailable DatabaseSummaryLifecycleStateEnum = "AVAILABLE" + DatabaseSummaryLifecycleStateUpdating DatabaseSummaryLifecycleStateEnum = "UPDATING" + DatabaseSummaryLifecycleStateBackupInProgress DatabaseSummaryLifecycleStateEnum = "BACKUP_IN_PROGRESS" + DatabaseSummaryLifecycleStateTerminating DatabaseSummaryLifecycleStateEnum = "TERMINATING" + DatabaseSummaryLifecycleStateTerminated DatabaseSummaryLifecycleStateEnum = "TERMINATED" + DatabaseSummaryLifecycleStateRestoreFailed DatabaseSummaryLifecycleStateEnum = "RESTORE_FAILED" + DatabaseSummaryLifecycleStateFailed DatabaseSummaryLifecycleStateEnum = "FAILED" +) + +var mappingDatabaseSummaryLifecycleState = map[string]DatabaseSummaryLifecycleStateEnum{ + "PROVISIONING": DatabaseSummaryLifecycleStateProvisioning, + "AVAILABLE": DatabaseSummaryLifecycleStateAvailable, + "UPDATING": DatabaseSummaryLifecycleStateUpdating, + "BACKUP_IN_PROGRESS": DatabaseSummaryLifecycleStateBackupInProgress, + "TERMINATING": DatabaseSummaryLifecycleStateTerminating, + "TERMINATED": DatabaseSummaryLifecycleStateTerminated, + "RESTORE_FAILED": DatabaseSummaryLifecycleStateRestoreFailed, + "FAILED": DatabaseSummaryLifecycleStateFailed, +} + +// GetDatabaseSummaryLifecycleStateEnumValues Enumerates the set of values for DatabaseSummaryLifecycleState +func GetDatabaseSummaryLifecycleStateEnumValues() []DatabaseSummaryLifecycleStateEnum { + values := make([]DatabaseSummaryLifecycleStateEnum, 0) + for _, v := range mappingDatabaseSummaryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_backup_config.go b/vendor/github.com/oracle/oci-go-sdk/database/db_backup_config.go new file mode 100644 index 000000000..193a4a81c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_backup_config.go @@ -0,0 +1,25 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. 
+// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbBackupConfig Backup Options +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DbBackupConfig struct { + + // If set to true, configures automatic backups. If you previously used RMAN or dbcli to configure backups and then you switch to using the Console or the API for backups, a new backup configuration is created and associated with your database. This means that you can no longer rely on your previously configured unmanaged backups to work. + AutoBackupEnabled *bool `mandatory:"false" json:"autoBackupEnabled"` +} + +func (m DbBackupConfig) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_home.go b/vendor/github.com/oracle/oci-go-sdk/database/db_home.go new file mode 100644 index 000000000..dc768b7cc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_home.go @@ -0,0 +1,82 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbHome A directory where Oracle database software is installed. Each DB System can have multiple database homes, +// and each database home can have multiple databases within it. All the databases within a single database home +// must be the same database version, but different database homes can run different versions. For more information, +// see Managing Oracle Databases (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an +// administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DbHome struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The Oracle database version. + DbVersion *string `mandatory:"true" json:"dbVersion"` + + // The user-provided name for the database home. It does not need to be unique. + DisplayName *string `mandatory:"true" json:"displayName"` + + // The OCID of the database home. + Id *string `mandatory:"true" json:"id"` + + // The current state of the database home. + LifecycleState DbHomeLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the DB System. + DbSystemId *string `mandatory:"false" json:"dbSystemId"` + + // The OCID of the last patch history. This is updated as soon as a patch operation is started. + LastPatchHistoryEntryId *string `mandatory:"false" json:"lastPatchHistoryEntryId"` + + // The date and time the database home was created. 
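Every optional field in these models is a pointer tagged `mandatory:"false"`, so callers have to nil-check before dereferencing. A small illustrative sketch around `DbBackupConfig`; the `db` value below is a hypothetical stand-in for a `Database` returned by the client.

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/database"
)

// autoBackupEnabled safely reads the optional backup configuration from a
// Database model: both DbBackupConfig and AutoBackupEnabled may be nil.
func autoBackupEnabled(db database.Database) bool {
	if db.DbBackupConfig == nil || db.DbBackupConfig.AutoBackupEnabled == nil {
		return false
	}
	return *db.DbBackupConfig.AutoBackupEnabled
}

func main() {
	enabled := true
	db := database.Database{
		DbBackupConfig: &database.DbBackupConfig{AutoBackupEnabled: &enabled},
	}
	fmt.Println(autoBackupEnabled(db))                  // true
	fmt.Println(autoBackupEnabled(database.Database{})) // false
}
```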
+ TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m DbHome) String() string { + return common.PointerString(m) +} + +// DbHomeLifecycleStateEnum Enum with underlying type: string +type DbHomeLifecycleStateEnum string + +// Set of constants representing the allowable values for DbHomeLifecycleState +const ( + DbHomeLifecycleStateProvisioning DbHomeLifecycleStateEnum = "PROVISIONING" + DbHomeLifecycleStateAvailable DbHomeLifecycleStateEnum = "AVAILABLE" + DbHomeLifecycleStateUpdating DbHomeLifecycleStateEnum = "UPDATING" + DbHomeLifecycleStateTerminating DbHomeLifecycleStateEnum = "TERMINATING" + DbHomeLifecycleStateTerminated DbHomeLifecycleStateEnum = "TERMINATED" + DbHomeLifecycleStateFailed DbHomeLifecycleStateEnum = "FAILED" +) + +var mappingDbHomeLifecycleState = map[string]DbHomeLifecycleStateEnum{ + "PROVISIONING": DbHomeLifecycleStateProvisioning, + "AVAILABLE": DbHomeLifecycleStateAvailable, + "UPDATING": DbHomeLifecycleStateUpdating, + "TERMINATING": DbHomeLifecycleStateTerminating, + "TERMINATED": DbHomeLifecycleStateTerminated, + "FAILED": DbHomeLifecycleStateFailed, +} + +// GetDbHomeLifecycleStateEnumValues Enumerates the set of values for DbHomeLifecycleState +func GetDbHomeLifecycleStateEnumValues() []DbHomeLifecycleStateEnum { + values := make([]DbHomeLifecycleStateEnum, 0) + for _, v := range mappingDbHomeLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_home_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/db_home_summary.go new file mode 100644 index 000000000..b71dec836 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_home_summary.go @@ -0,0 +1,82 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbHomeSummary A directory where Oracle database software is installed. Each DB System can have multiple database homes, +// and each database home can have multiple databases within it. All the databases within a single database home +// must be the same database version, but different database homes can run different versions. For more information, +// see Managing Oracle Databases (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/overview.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an +// administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DbHomeSummary struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The Oracle database version. + DbVersion *string `mandatory:"true" json:"dbVersion"` + + // The user-provided name for the database home. It does not need to be unique. + DisplayName *string `mandatory:"true" json:"displayName"` + + // The OCID of the database home. + Id *string `mandatory:"true" json:"id"` + + // The current state of the database home. + LifecycleState DbHomeSummaryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the DB System. + DbSystemId *string `mandatory:"false" json:"dbSystemId"` + + // The OCID of the last patch history. 
This is updated as soon as a patch operation is started. + LastPatchHistoryEntryId *string `mandatory:"false" json:"lastPatchHistoryEntryId"` + + // The date and time the database home was created. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m DbHomeSummary) String() string { + return common.PointerString(m) +} + +// DbHomeSummaryLifecycleStateEnum Enum with underlying type: string +type DbHomeSummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for DbHomeSummaryLifecycleState +const ( + DbHomeSummaryLifecycleStateProvisioning DbHomeSummaryLifecycleStateEnum = "PROVISIONING" + DbHomeSummaryLifecycleStateAvailable DbHomeSummaryLifecycleStateEnum = "AVAILABLE" + DbHomeSummaryLifecycleStateUpdating DbHomeSummaryLifecycleStateEnum = "UPDATING" + DbHomeSummaryLifecycleStateTerminating DbHomeSummaryLifecycleStateEnum = "TERMINATING" + DbHomeSummaryLifecycleStateTerminated DbHomeSummaryLifecycleStateEnum = "TERMINATED" + DbHomeSummaryLifecycleStateFailed DbHomeSummaryLifecycleStateEnum = "FAILED" +) + +var mappingDbHomeSummaryLifecycleState = map[string]DbHomeSummaryLifecycleStateEnum{ + "PROVISIONING": DbHomeSummaryLifecycleStateProvisioning, + "AVAILABLE": DbHomeSummaryLifecycleStateAvailable, + "UPDATING": DbHomeSummaryLifecycleStateUpdating, + "TERMINATING": DbHomeSummaryLifecycleStateTerminating, + "TERMINATED": DbHomeSummaryLifecycleStateTerminated, + "FAILED": DbHomeSummaryLifecycleStateFailed, +} + +// GetDbHomeSummaryLifecycleStateEnumValues Enumerates the set of values for DbHomeSummaryLifecycleState +func GetDbHomeSummaryLifecycleStateEnumValues() []DbHomeSummaryLifecycleStateEnum { + values := make([]DbHomeSummaryLifecycleStateEnum, 0) + for _, v := range mappingDbHomeSummaryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_node.go b/vendor/github.com/oracle/oci-go-sdk/database/db_node.go new file mode 100644 index 000000000..3e052d092 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_node.go @@ -0,0 +1,83 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbNode A server where Oracle database software is running. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DbNode struct { + + // The OCID of the DB System. + DbSystemId *string `mandatory:"true" json:"dbSystemId"` + + // The OCID of the DB Node. + Id *string `mandatory:"true" json:"id"` + + // The current state of the database node. + LifecycleState DbNodeLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The date and time that the DB Node was created. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The OCID of the VNIC. + VnicId *string `mandatory:"true" json:"vnicId"` + + // The OCID of the backup VNIC. + BackupVnicId *string `mandatory:"false" json:"backupVnicId"` + + // The host name for the DB Node. 
+ Hostname *string `mandatory:"false" json:"hostname"` + + // Storage size, in GBs, of the software volume that is allocated to the DB system. This is applicable only for VM-based DBs. + SoftwareStorageSizeInGB *int `mandatory:"false" json:"softwareStorageSizeInGB"` +} + +func (m DbNode) String() string { + return common.PointerString(m) +} + +// DbNodeLifecycleStateEnum Enum with underlying type: string +type DbNodeLifecycleStateEnum string + +// Set of constants representing the allowable values for DbNodeLifecycleState +const ( + DbNodeLifecycleStateProvisioning DbNodeLifecycleStateEnum = "PROVISIONING" + DbNodeLifecycleStateAvailable DbNodeLifecycleStateEnum = "AVAILABLE" + DbNodeLifecycleStateUpdating DbNodeLifecycleStateEnum = "UPDATING" + DbNodeLifecycleStateStopping DbNodeLifecycleStateEnum = "STOPPING" + DbNodeLifecycleStateStopped DbNodeLifecycleStateEnum = "STOPPED" + DbNodeLifecycleStateStarting DbNodeLifecycleStateEnum = "STARTING" + DbNodeLifecycleStateTerminating DbNodeLifecycleStateEnum = "TERMINATING" + DbNodeLifecycleStateTerminated DbNodeLifecycleStateEnum = "TERMINATED" + DbNodeLifecycleStateFailed DbNodeLifecycleStateEnum = "FAILED" +) + +var mappingDbNodeLifecycleState = map[string]DbNodeLifecycleStateEnum{ + "PROVISIONING": DbNodeLifecycleStateProvisioning, + "AVAILABLE": DbNodeLifecycleStateAvailable, + "UPDATING": DbNodeLifecycleStateUpdating, + "STOPPING": DbNodeLifecycleStateStopping, + "STOPPED": DbNodeLifecycleStateStopped, + "STARTING": DbNodeLifecycleStateStarting, + "TERMINATING": DbNodeLifecycleStateTerminating, + "TERMINATED": DbNodeLifecycleStateTerminated, + "FAILED": DbNodeLifecycleStateFailed, +} + +// GetDbNodeLifecycleStateEnumValues Enumerates the set of values for DbNodeLifecycleState +func GetDbNodeLifecycleStateEnumValues() []DbNodeLifecycleStateEnum { + values := make([]DbNodeLifecycleStateEnum, 0) + for _, v := range mappingDbNodeLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_node_action_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/db_node_action_request_response.go new file mode 100644 index 000000000..208e50489 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_node_action_request_response.go @@ -0,0 +1,83 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DbNodeActionRequest wrapper for the DbNodeAction operation +type DbNodeActionRequest struct { + + // The database node OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbNodeId *string `mandatory:"true" contributesTo:"path" name:"dbNodeId"` + + // The action to perform on the DB Node. + Action DbNodeActionActionEnum `mandatory:"true" contributesTo:"query" name:"action" omitEmpty:"true"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DbNodeActionRequest) String() string { + return common.PointerString(request) +} + +// DbNodeActionResponse wrapper for the DbNodeAction operation +type DbNodeActionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbNode instance + DbNode `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DbNodeActionResponse) String() string { + return common.PointerString(response) +} + +// DbNodeActionActionEnum Enum with underlying type: string +type DbNodeActionActionEnum string + +// Set of constants representing the allowable values for DbNodeActionAction +const ( + DbNodeActionActionStop DbNodeActionActionEnum = "STOP" + DbNodeActionActionStart DbNodeActionActionEnum = "START" + DbNodeActionActionSoftreset DbNodeActionActionEnum = "SOFTRESET" + DbNodeActionActionReset DbNodeActionActionEnum = "RESET" +) + +var mappingDbNodeActionAction = map[string]DbNodeActionActionEnum{ + "STOP": DbNodeActionActionStop, + "START": DbNodeActionActionStart, + "SOFTRESET": DbNodeActionActionSoftreset, + "RESET": DbNodeActionActionReset, +} + +// GetDbNodeActionActionEnumValues Enumerates the set of values for DbNodeActionAction +func GetDbNodeActionActionEnumValues() []DbNodeActionActionEnum { + values := make([]DbNodeActionActionEnum, 0) + for _, v := range mappingDbNodeActionAction { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_node_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/db_node_summary.go new file mode 100644 index 000000000..47e4ea0dc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_node_summary.go @@ -0,0 +1,83 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbNodeSummary A server where Oracle database software is running. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DbNodeSummary struct { + + // The OCID of the DB System. + DbSystemId *string `mandatory:"true" json:"dbSystemId"` + + // The OCID of the DB Node. + Id *string `mandatory:"true" json:"id"` + + // The current state of the database node. + LifecycleState DbNodeSummaryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The date and time that the DB Node was created. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The OCID of the VNIC. 
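The DbNodeActionRequest wrapper above carries the node OCID as a path parameter, the action as a query parameter, an optional opc-retry-token for safe retries, and an optional if-match header for optimistic concurrency. A rough usage sketch follows, assuming the package's DatabaseClient constructor and DbNodeAction method, which live elsewhere in this SDK and are not among the files shown here; the OCID and retry token are placeholders:

```
package main

import (
	"context"
	"fmt"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/database"
)

func main() {
	// The client constructor and DbNodeAction method are assumed from the rest
	// of this SDK; they are not part of this excerpt of the diff.
	client, err := database.NewDatabaseClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		panic(err)
	}

	req := database.DbNodeActionRequest{
		DbNodeId:      common.String("ocid1.dbnode.oc1..example"), // placeholder OCID
		Action:        database.DbNodeActionActionStop,
		OpcRetryToken: common.String("example-retry-token"), // makes the action safe to retry
	}

	resp, err := client.DbNodeAction(context.Background(), req)
	if err != nil {
		panic(err)
	}
	// The response embeds the updated DbNode, so its fields are promoted.
	fmt.Println("node state:", resp.LifecycleState)
}
```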
+ VnicId *string `mandatory:"true" json:"vnicId"` + + // The OCID of the backup VNIC. + BackupVnicId *string `mandatory:"false" json:"backupVnicId"` + + // The host name for the DB Node. + Hostname *string `mandatory:"false" json:"hostname"` + + // Storage size, in GBs, of the software volume that is allocated to the DB system. This is applicable only for VM-based DBs. + SoftwareStorageSizeInGB *int `mandatory:"false" json:"softwareStorageSizeInGB"` +} + +func (m DbNodeSummary) String() string { + return common.PointerString(m) +} + +// DbNodeSummaryLifecycleStateEnum Enum with underlying type: string +type DbNodeSummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for DbNodeSummaryLifecycleState +const ( + DbNodeSummaryLifecycleStateProvisioning DbNodeSummaryLifecycleStateEnum = "PROVISIONING" + DbNodeSummaryLifecycleStateAvailable DbNodeSummaryLifecycleStateEnum = "AVAILABLE" + DbNodeSummaryLifecycleStateUpdating DbNodeSummaryLifecycleStateEnum = "UPDATING" + DbNodeSummaryLifecycleStateStopping DbNodeSummaryLifecycleStateEnum = "STOPPING" + DbNodeSummaryLifecycleStateStopped DbNodeSummaryLifecycleStateEnum = "STOPPED" + DbNodeSummaryLifecycleStateStarting DbNodeSummaryLifecycleStateEnum = "STARTING" + DbNodeSummaryLifecycleStateTerminating DbNodeSummaryLifecycleStateEnum = "TERMINATING" + DbNodeSummaryLifecycleStateTerminated DbNodeSummaryLifecycleStateEnum = "TERMINATED" + DbNodeSummaryLifecycleStateFailed DbNodeSummaryLifecycleStateEnum = "FAILED" +) + +var mappingDbNodeSummaryLifecycleState = map[string]DbNodeSummaryLifecycleStateEnum{ + "PROVISIONING": DbNodeSummaryLifecycleStateProvisioning, + "AVAILABLE": DbNodeSummaryLifecycleStateAvailable, + "UPDATING": DbNodeSummaryLifecycleStateUpdating, + "STOPPING": DbNodeSummaryLifecycleStateStopping, + "STOPPED": DbNodeSummaryLifecycleStateStopped, + "STARTING": DbNodeSummaryLifecycleStateStarting, + "TERMINATING": DbNodeSummaryLifecycleStateTerminating, + "TERMINATED": DbNodeSummaryLifecycleStateTerminated, + "FAILED": DbNodeSummaryLifecycleStateFailed, +} + +// GetDbNodeSummaryLifecycleStateEnumValues Enumerates the set of values for DbNodeSummaryLifecycleState +func GetDbNodeSummaryLifecycleStateEnumValues() []DbNodeSummaryLifecycleStateEnum { + values := make([]DbNodeSummaryLifecycleStateEnum, 0) + for _, v := range mappingDbNodeSummaryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_system.go b/vendor/github.com/oracle/oci-go-sdk/database/db_system.go new file mode 100644 index 000000000..04085ad17 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_system.go @@ -0,0 +1,236 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbSystem The Database Service supports several types of DB Systems, ranging in size, price, and performance. For details about each type of system, see: +// - Exadata DB Systems (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/exaoverview.htm) +// - Bare Metal or VM DB Systems (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/overview.htm) +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. 
If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +// +// For information about access control and compartments, see +// Overview of the Identity Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about Availability Domains, see +// Regions and Availability Domains (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm). +// To get a list of Availability Domains, use the `ListAvailabilityDomains` operation +// in the Identity Service API. +type DbSystem struct { + + // The name of the Availability Domain that the DB System is located in. + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The number of CPU cores enabled on the DB System. + CpuCoreCount *int `mandatory:"true" json:"cpuCoreCount"` + + // The Oracle Database Edition that applies to all the databases on the DB System. + DatabaseEdition DbSystemDatabaseEditionEnum `mandatory:"true" json:"databaseEdition"` + + // The user-friendly name for the DB System. It does not have to be unique. + DisplayName *string `mandatory:"true" json:"displayName"` + + // The domain name for the DB System. + Domain *string `mandatory:"true" json:"domain"` + + // The host name for the DB Node. + Hostname *string `mandatory:"true" json:"hostname"` + + // The OCID of the DB System. + Id *string `mandatory:"true" json:"id"` + + // The current state of the DB System. + LifecycleState DbSystemLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The shape of the DB System. The shape determines resources to allocate to the DB system - CPU cores and memory for VM shapes; CPU cores, memory and storage for non-VM (or bare metal) shapes. + Shape *string `mandatory:"true" json:"shape"` + + // The public key portion of one or more key pairs used for SSH access to the DB System. + SshPublicKeys []string `mandatory:"true" json:"sshPublicKeys"` + + // The OCID of the subnet the DB System is associated with. + // **Subnet Restrictions:** + // - For single node and 2-node (RAC) DB Systems, do not use a subnet that overlaps with 192.168.16.16/28 + // - For Exadata and VM-based RAC DB Systems, do not use a subnet that overlaps with 192.168.128.0/20 + // These subnets are used by the Oracle Clusterware private interconnect on the database instance. + // Specifying an overlapping subnet will cause the private interconnect to malfunction. + // This restriction applies to both the client subnet and backup subnet. + SubnetId *string `mandatory:"true" json:"subnetId"` + + // The OCID of the backup network subnet the DB System is associated with. Applicable only to Exadata. + // **Subnet Restriction:** See above subnetId's 'Subnet Restriction'. + BackupSubnetId *string `mandatory:"false" json:"backupSubnetId"` + + // Cluster name for Exadata and 2-node RAC DB Systems. The cluster name must begin with an alphabetic character, and may contain hyphens (-). Underscores (_) are not permitted. The cluster name can be no longer than 11 characters and is not case sensitive. + ClusterName *string `mandatory:"false" json:"clusterName"` + + // The percentage assigned to DATA storage (user data and database files).
+ // The remaining percentage is assigned to RECO storage (database redo logs, archive logs, and recovery manager backups). Accepted values are 40 and 80. + DataStoragePercentage *int `mandatory:"false" json:"dataStoragePercentage"` + + // Data storage size, in GBs, that is currently available to the DB system. This is applicable only for VM-based DBs. + DataStorageSizeInGBs *int `mandatory:"false" json:"dataStorageSizeInGBs"` + + // The type of redundancy configured for the DB System. + // Normal is 2-way redundancy. + // High is 3-way redundancy. + DiskRedundancy DbSystemDiskRedundancyEnum `mandatory:"false" json:"diskRedundancy,omitempty"` + + // The OCID of the last patch history. This is updated as soon as a patch operation is started. + LastPatchHistoryEntryId *string `mandatory:"false" json:"lastPatchHistoryEntryId"` + + // The Oracle license model that applies to all the databases on the DB System. The default is LICENSE_INCLUDED. + LicenseModel DbSystemLicenseModelEnum `mandatory:"false" json:"licenseModel,omitempty"` + + // Additional information about the current lifecycleState. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The port number configured for the listener on the DB System. + ListenerPort *int `mandatory:"false" json:"listenerPort"` + + // Number of nodes in this DB system. For RAC DBs, this will be greater than 1. + NodeCount *int `mandatory:"false" json:"nodeCount"` + + // RECO/REDO storage size, in GBs, that is currently allocated to the DB system. This is applicable only for VM-based DBs. + RecoStorageSizeInGB *int `mandatory:"false" json:"recoStorageSizeInGB"` + + // The OCID of the DNS record for the SCAN IP addresses that are associated with the DB System. + ScanDnsRecordId *string `mandatory:"false" json:"scanDnsRecordId"` + + // The OCID of the Single Client Access Name (SCAN) IP addresses associated with the DB System. + // SCAN IP addresses are typically used for load balancing and are not assigned to any interface. + // Clusterware directs the requests to the appropriate nodes in the cluster. + // - For a single-node DB System, this list is empty. + ScanIpIds []string `mandatory:"false" json:"scanIpIds"` + + // The date and time the DB System was created. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The version of the DB System. + Version *string `mandatory:"false" json:"version"` + + // The OCID of the virtual IP (VIP) addresses associated with the DB System. + // The Cluster Ready Services (CRS) creates and maintains one VIP address for each node in the DB System to + // enable failover. If one node fails, the VIP is reassigned to another active node in the cluster. + // - For a single-node DB System, this list is empty. 
+ VipIds []string `mandatory:"false" json:"vipIds"` +} + +func (m DbSystem) String() string { + return common.PointerString(m) +} + +// DbSystemDatabaseEditionEnum Enum with underlying type: string +type DbSystemDatabaseEditionEnum string + +// Set of constants representing the allowable values for DbSystemDatabaseEdition +const ( + DbSystemDatabaseEditionStandardEdition DbSystemDatabaseEditionEnum = "STANDARD_EDITION" + DbSystemDatabaseEditionEnterpriseEdition DbSystemDatabaseEditionEnum = "ENTERPRISE_EDITION" + DbSystemDatabaseEditionEnterpriseEditionExtremePerformance DbSystemDatabaseEditionEnum = "ENTERPRISE_EDITION_EXTREME_PERFORMANCE" + DbSystemDatabaseEditionEnterpriseEditionHighPerformance DbSystemDatabaseEditionEnum = "ENTERPRISE_EDITION_HIGH_PERFORMANCE" +) + +var mappingDbSystemDatabaseEdition = map[string]DbSystemDatabaseEditionEnum{ + "STANDARD_EDITION": DbSystemDatabaseEditionStandardEdition, + "ENTERPRISE_EDITION": DbSystemDatabaseEditionEnterpriseEdition, + "ENTERPRISE_EDITION_EXTREME_PERFORMANCE": DbSystemDatabaseEditionEnterpriseEditionExtremePerformance, + "ENTERPRISE_EDITION_HIGH_PERFORMANCE": DbSystemDatabaseEditionEnterpriseEditionHighPerformance, +} + +// GetDbSystemDatabaseEditionEnumValues Enumerates the set of values for DbSystemDatabaseEdition +func GetDbSystemDatabaseEditionEnumValues() []DbSystemDatabaseEditionEnum { + values := make([]DbSystemDatabaseEditionEnum, 0) + for _, v := range mappingDbSystemDatabaseEdition { + values = append(values, v) + } + return values +} + +// DbSystemDiskRedundancyEnum Enum with underlying type: string +type DbSystemDiskRedundancyEnum string + +// Set of constants representing the allowable values for DbSystemDiskRedundancy +const ( + DbSystemDiskRedundancyHigh DbSystemDiskRedundancyEnum = "HIGH" + DbSystemDiskRedundancyNormal DbSystemDiskRedundancyEnum = "NORMAL" +) + +var mappingDbSystemDiskRedundancy = map[string]DbSystemDiskRedundancyEnum{ + "HIGH": DbSystemDiskRedundancyHigh, + "NORMAL": DbSystemDiskRedundancyNormal, +} + +// GetDbSystemDiskRedundancyEnumValues Enumerates the set of values for DbSystemDiskRedundancy +func GetDbSystemDiskRedundancyEnumValues() []DbSystemDiskRedundancyEnum { + values := make([]DbSystemDiskRedundancyEnum, 0) + for _, v := range mappingDbSystemDiskRedundancy { + values = append(values, v) + } + return values +} + +// DbSystemLicenseModelEnum Enum with underlying type: string +type DbSystemLicenseModelEnum string + +// Set of constants representing the allowable values for DbSystemLicenseModel +const ( + DbSystemLicenseModelLicenseIncluded DbSystemLicenseModelEnum = "LICENSE_INCLUDED" + DbSystemLicenseModelBringYourOwnLicense DbSystemLicenseModelEnum = "BRING_YOUR_OWN_LICENSE" +) + +var mappingDbSystemLicenseModel = map[string]DbSystemLicenseModelEnum{ + "LICENSE_INCLUDED": DbSystemLicenseModelLicenseIncluded, + "BRING_YOUR_OWN_LICENSE": DbSystemLicenseModelBringYourOwnLicense, +} + +// GetDbSystemLicenseModelEnumValues Enumerates the set of values for DbSystemLicenseModel +func GetDbSystemLicenseModelEnumValues() []DbSystemLicenseModelEnum { + values := make([]DbSystemLicenseModelEnum, 0) + for _, v := range mappingDbSystemLicenseModel { + values = append(values, v) + } + return values +} + +// DbSystemLifecycleStateEnum Enum with underlying type: string +type DbSystemLifecycleStateEnum string + +// Set of constants representing the allowable values for DbSystemLifecycleState +const ( + DbSystemLifecycleStateProvisioning DbSystemLifecycleStateEnum = "PROVISIONING" + 
DbSystemLifecycleStateAvailable DbSystemLifecycleStateEnum = "AVAILABLE" + DbSystemLifecycleStateUpdating DbSystemLifecycleStateEnum = "UPDATING" + DbSystemLifecycleStateTerminating DbSystemLifecycleStateEnum = "TERMINATING" + DbSystemLifecycleStateTerminated DbSystemLifecycleStateEnum = "TERMINATED" + DbSystemLifecycleStateFailed DbSystemLifecycleStateEnum = "FAILED" +) + +var mappingDbSystemLifecycleState = map[string]DbSystemLifecycleStateEnum{ + "PROVISIONING": DbSystemLifecycleStateProvisioning, + "AVAILABLE": DbSystemLifecycleStateAvailable, + "UPDATING": DbSystemLifecycleStateUpdating, + "TERMINATING": DbSystemLifecycleStateTerminating, + "TERMINATED": DbSystemLifecycleStateTerminated, + "FAILED": DbSystemLifecycleStateFailed, +} + +// GetDbSystemLifecycleStateEnumValues Enumerates the set of values for DbSystemLifecycleState +func GetDbSystemLifecycleStateEnumValues() []DbSystemLifecycleStateEnum { + values := make([]DbSystemLifecycleStateEnum, 0) + for _, v := range mappingDbSystemLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_system_shape_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/db_system_shape_summary.go new file mode 100644 index 000000000..831411c16 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_system_shape_summary.go @@ -0,0 +1,34 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbSystemShapeSummary The shape of the DB System. The shape determines resources to allocate to the DB system - CPU cores and memory for VM shapes; CPU cores, memory and storage for non-VM (or bare metal) shapes. +// For a description of shapes, see DB System Launch Options (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/References/launchoptions.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. +// If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DbSystemShapeSummary struct { + + // The maximum number of CPU cores that can be enabled on the DB System. + AvailableCoreCount *int `mandatory:"true" json:"availableCoreCount"` + + // The name of the shape used for the DB System. + Name *string `mandatory:"true" json:"name"` + + // Deprecated. Use `name` instead of `shape`. + Shape *string `mandatory:"false" json:"shape"` +} + +func (m DbSystemShapeSummary) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_system_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/db_system_summary.go new file mode 100644 index 000000000..7f8ba31a3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_system_summary.go @@ -0,0 +1,236 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbSystemSummary The Database Service supports several types of DB Systems, ranging in size, price, and performance. 
For details about each type of system, see: +// - Exadata DB Systems (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/exaoverview.htm) +// - Bare Metal or VM DB Systems (https://docs.us-phoenix-1.oraclecloud.com/Content/Database/Concepts/overview.htm) +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +// +// For information about access control and compartments, see +// Overview of the Identity Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// For information about Availability Domains, see +// Regions and Availability Domains (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm). +// To get a list of Availability Domains, use the `ListAvailabilityDomains` operation +// in the Identity Service API. +type DbSystemSummary struct { + + // The name of the Availability Domain that the DB System is located in. + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The number of CPU cores enabled on the DB System. + CpuCoreCount *int `mandatory:"true" json:"cpuCoreCount"` + + // The Oracle Database Edition that applies to all the databases on the DB System. + DatabaseEdition DbSystemSummaryDatabaseEditionEnum `mandatory:"true" json:"databaseEdition"` + + // The user-friendly name for the DB System. It does not have to be unique. + DisplayName *string `mandatory:"true" json:"displayName"` + + // The domain name for the DB System. + Domain *string `mandatory:"true" json:"domain"` + + // The host name for the DB Node. + Hostname *string `mandatory:"true" json:"hostname"` + + // The OCID of the DB System. + Id *string `mandatory:"true" json:"id"` + + // The current state of the DB System. + LifecycleState DbSystemSummaryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The shape of the DB System. The shape determines resources to allocate to the DB system - CPU cores and memory for VM shapes; CPU cores, memory and storage for non-VM (or bare metal) shapes. + Shape *string `mandatory:"true" json:"shape"` + + // The public key portion of one or more key pairs used for SSH access to the DB System. + SshPublicKeys []string `mandatory:"true" json:"sshPublicKeys"` + + // The OCID of the subnet the DB System is associated with. + // **Subnet Restrictions:** + // - For single node and 2-node (RAC) DB Systems, do not use a subnet that overlaps with 192.168.16.16/28 + // - For Exadata and VM-based RAC DB Systems, do not use a subnet that overlaps with 192.168.128.0/20 + // These subnets are used by the Oracle Clusterware private interconnect on the database instance. + // Specifying an overlapping subnet will cause the private interconnect to malfunction. + // This restriction applies to both the client subnet and backup subnet. + SubnetId *string `mandatory:"true" json:"subnetId"` + + // The OCID of the backup network subnet the DB System is associated with. Applicable only to Exadata. + // **Subnet Restriction:** See above subnetId's 'Subnet Restriction'.
+ BackupSubnetId *string `mandatory:"false" json:"backupSubnetId"` + + // Cluster name for Exadata and 2-node RAC DB Systems. The cluster name must begin with an alphabetic character, and may contain hyphens (-). Underscores (_) are not permitted. The cluster name can be no longer than 11 characters and is not case sensitive. + ClusterName *string `mandatory:"false" json:"clusterName"` + + // The percentage assigned to DATA storage (user data and database files). + // The remaining percentage is assigned to RECO storage (database redo logs, archive logs, and recovery manager backups). Accepted values are 40 and 80. + DataStoragePercentage *int `mandatory:"false" json:"dataStoragePercentage"` + + // Data storage size, in GBs, that is currently available to the DB system. This is applicable only for VM-based DBs. + DataStorageSizeInGBs *int `mandatory:"false" json:"dataStorageSizeInGBs"` + + // The type of redundancy configured for the DB System. + // Normal is 2-way redundancy. + // High is 3-way redundancy. + DiskRedundancy DbSystemSummaryDiskRedundancyEnum `mandatory:"false" json:"diskRedundancy,omitempty"` + + // The OCID of the last patch history. This is updated as soon as a patch operation is started. + LastPatchHistoryEntryId *string `mandatory:"false" json:"lastPatchHistoryEntryId"` + + // The Oracle license model that applies to all the databases on the DB System. The default is LICENSE_INCLUDED. + LicenseModel DbSystemSummaryLicenseModelEnum `mandatory:"false" json:"licenseModel,omitempty"` + + // Additional information about the current lifecycleState. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The port number configured for the listener on the DB System. + ListenerPort *int `mandatory:"false" json:"listenerPort"` + + // Number of nodes in this DB system. For RAC DBs, this will be greater than 1. + NodeCount *int `mandatory:"false" json:"nodeCount"` + + // RECO/REDO storage size, in GBs, that is currently allocated to the DB system. This is applicable only for VM-based DBs. + RecoStorageSizeInGB *int `mandatory:"false" json:"recoStorageSizeInGB"` + + // The OCID of the DNS record for the SCAN IP addresses that are associated with the DB System. + ScanDnsRecordId *string `mandatory:"false" json:"scanDnsRecordId"` + + // The OCID of the Single Client Access Name (SCAN) IP addresses associated with the DB System. + // SCAN IP addresses are typically used for load balancing and are not assigned to any interface. + // Clusterware directs the requests to the appropriate nodes in the cluster. + // - For a single-node DB System, this list is empty. + ScanIpIds []string `mandatory:"false" json:"scanIpIds"` + + // The date and time the DB System was created. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The version of the DB System. + Version *string `mandatory:"false" json:"version"` + + // The OCID of the virtual IP (VIP) addresses associated with the DB System. + // The Cluster Ready Services (CRS) creates and maintains one VIP address for each node in the DB System to + // enable failover. If one node fails, the VIP is reassigned to another active node in the cluster. + // - For a single-node DB System, this list is empty.
+ VipIds []string `mandatory:"false" json:"vipIds"` +} + +func (m DbSystemSummary) String() string { + return common.PointerString(m) +} + +// DbSystemSummaryDatabaseEditionEnum Enum with underlying type: string +type DbSystemSummaryDatabaseEditionEnum string + +// Set of constants representing the allowable values for DbSystemSummaryDatabaseEdition +const ( + DbSystemSummaryDatabaseEditionStandardEdition DbSystemSummaryDatabaseEditionEnum = "STANDARD_EDITION" + DbSystemSummaryDatabaseEditionEnterpriseEdition DbSystemSummaryDatabaseEditionEnum = "ENTERPRISE_EDITION" + DbSystemSummaryDatabaseEditionEnterpriseEditionExtremePerformance DbSystemSummaryDatabaseEditionEnum = "ENTERPRISE_EDITION_EXTREME_PERFORMANCE" + DbSystemSummaryDatabaseEditionEnterpriseEditionHighPerformance DbSystemSummaryDatabaseEditionEnum = "ENTERPRISE_EDITION_HIGH_PERFORMANCE" +) + +var mappingDbSystemSummaryDatabaseEdition = map[string]DbSystemSummaryDatabaseEditionEnum{ + "STANDARD_EDITION": DbSystemSummaryDatabaseEditionStandardEdition, + "ENTERPRISE_EDITION": DbSystemSummaryDatabaseEditionEnterpriseEdition, + "ENTERPRISE_EDITION_EXTREME_PERFORMANCE": DbSystemSummaryDatabaseEditionEnterpriseEditionExtremePerformance, + "ENTERPRISE_EDITION_HIGH_PERFORMANCE": DbSystemSummaryDatabaseEditionEnterpriseEditionHighPerformance, +} + +// GetDbSystemSummaryDatabaseEditionEnumValues Enumerates the set of values for DbSystemSummaryDatabaseEdition +func GetDbSystemSummaryDatabaseEditionEnumValues() []DbSystemSummaryDatabaseEditionEnum { + values := make([]DbSystemSummaryDatabaseEditionEnum, 0) + for _, v := range mappingDbSystemSummaryDatabaseEdition { + values = append(values, v) + } + return values +} + +// DbSystemSummaryDiskRedundancyEnum Enum with underlying type: string +type DbSystemSummaryDiskRedundancyEnum string + +// Set of constants representing the allowable values for DbSystemSummaryDiskRedundancy +const ( + DbSystemSummaryDiskRedundancyHigh DbSystemSummaryDiskRedundancyEnum = "HIGH" + DbSystemSummaryDiskRedundancyNormal DbSystemSummaryDiskRedundancyEnum = "NORMAL" +) + +var mappingDbSystemSummaryDiskRedundancy = map[string]DbSystemSummaryDiskRedundancyEnum{ + "HIGH": DbSystemSummaryDiskRedundancyHigh, + "NORMAL": DbSystemSummaryDiskRedundancyNormal, +} + +// GetDbSystemSummaryDiskRedundancyEnumValues Enumerates the set of values for DbSystemSummaryDiskRedundancy +func GetDbSystemSummaryDiskRedundancyEnumValues() []DbSystemSummaryDiskRedundancyEnum { + values := make([]DbSystemSummaryDiskRedundancyEnum, 0) + for _, v := range mappingDbSystemSummaryDiskRedundancy { + values = append(values, v) + } + return values +} + +// DbSystemSummaryLicenseModelEnum Enum with underlying type: string +type DbSystemSummaryLicenseModelEnum string + +// Set of constants representing the allowable values for DbSystemSummaryLicenseModel +const ( + DbSystemSummaryLicenseModelLicenseIncluded DbSystemSummaryLicenseModelEnum = "LICENSE_INCLUDED" + DbSystemSummaryLicenseModelBringYourOwnLicense DbSystemSummaryLicenseModelEnum = "BRING_YOUR_OWN_LICENSE" +) + +var mappingDbSystemSummaryLicenseModel = map[string]DbSystemSummaryLicenseModelEnum{ + "LICENSE_INCLUDED": DbSystemSummaryLicenseModelLicenseIncluded, + "BRING_YOUR_OWN_LICENSE": DbSystemSummaryLicenseModelBringYourOwnLicense, +} + +// GetDbSystemSummaryLicenseModelEnumValues Enumerates the set of values for DbSystemSummaryLicenseModel +func GetDbSystemSummaryLicenseModelEnumValues() []DbSystemSummaryLicenseModelEnum { + values := make([]DbSystemSummaryLicenseModelEnum, 0) + for _, v := 
range mappingDbSystemSummaryLicenseModel { + values = append(values, v) + } + return values +} + +// DbSystemSummaryLifecycleStateEnum Enum with underlying type: string +type DbSystemSummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for DbSystemSummaryLifecycleState +const ( + DbSystemSummaryLifecycleStateProvisioning DbSystemSummaryLifecycleStateEnum = "PROVISIONING" + DbSystemSummaryLifecycleStateAvailable DbSystemSummaryLifecycleStateEnum = "AVAILABLE" + DbSystemSummaryLifecycleStateUpdating DbSystemSummaryLifecycleStateEnum = "UPDATING" + DbSystemSummaryLifecycleStateTerminating DbSystemSummaryLifecycleStateEnum = "TERMINATING" + DbSystemSummaryLifecycleStateTerminated DbSystemSummaryLifecycleStateEnum = "TERMINATED" + DbSystemSummaryLifecycleStateFailed DbSystemSummaryLifecycleStateEnum = "FAILED" +) + +var mappingDbSystemSummaryLifecycleState = map[string]DbSystemSummaryLifecycleStateEnum{ + "PROVISIONING": DbSystemSummaryLifecycleStateProvisioning, + "AVAILABLE": DbSystemSummaryLifecycleStateAvailable, + "UPDATING": DbSystemSummaryLifecycleStateUpdating, + "TERMINATING": DbSystemSummaryLifecycleStateTerminating, + "TERMINATED": DbSystemSummaryLifecycleStateTerminated, + "FAILED": DbSystemSummaryLifecycleStateFailed, +} + +// GetDbSystemSummaryLifecycleStateEnumValues Enumerates the set of values for DbSystemSummaryLifecycleState +func GetDbSystemSummaryLifecycleStateEnumValues() []DbSystemSummaryLifecycleStateEnum { + values := make([]DbSystemSummaryLifecycleStateEnum, 0) + for _, v := range mappingDbSystemSummaryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/db_version_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/db_version_summary.go new file mode 100644 index 000000000..5920cb4b9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/db_version_summary.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// DbVersionSummary The Oracle database software version. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, talk to an administrator. If you're an administrator who needs to write policies to give users access, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type DbVersionSummary struct { + + // A valid Oracle database version. + Version *string `mandatory:"true" json:"version"` + + // True if this version of the Oracle database software supports pluggable dbs. + SupportsPdb *bool `mandatory:"false" json:"supportsPdb"` +} + +func (m DbVersionSummary) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/delete_backup_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/delete_backup_request_response.go new file mode 100644 index 000000000..ff0446bbb --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/delete_backup_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteBackupRequest wrapper for the DeleteBackup operation +type DeleteBackupRequest struct { + + // The backup OCID. + BackupId *string `mandatory:"true" contributesTo:"path" name:"backupId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteBackupRequest) String() string { + return common.PointerString(request) +} + +// DeleteBackupResponse wrapper for the DeleteBackup operation +type DeleteBackupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteBackupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/delete_db_home_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/delete_db_home_request_response.go new file mode 100644 index 000000000..d50d313f0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/delete_db_home_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteDbHomeRequest wrapper for the DeleteDbHome operation +type DeleteDbHomeRequest struct { + + // The database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbHomeId *string `mandatory:"true" contributesTo:"path" name:"dbHomeId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // Whether to perform a final backup of the database or not. Default is false. If you previously used RMAN or dbcli to configure backups and then you switch to using the Console or the API for backups, a new backup configuration is created and associated with your database. This means that you can no longer rely on your previously configured unmanaged backups to work. + PerformFinalBackup *bool `mandatory:"false" contributesTo:"query" name:"performFinalBackup"` +} + +func (request DeleteDbHomeRequest) String() string { + return common.PointerString(request) +} + +// DeleteDbHomeResponse wrapper for the DeleteDbHome operation +type DeleteDbHomeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteDbHomeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/failover_data_guard_association_details.go b/vendor/github.com/oracle/oci-go-sdk/database/failover_data_guard_association_details.go new file mode 100644 index 000000000..40e8bc76b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/failover_data_guard_association_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// FailoverDataGuardAssociationDetails The Data Guard association failover parameters. +type FailoverDataGuardAssociationDetails struct { + + // The DB System administrator password. + DatabaseAdminPassword *string `mandatory:"true" json:"databaseAdminPassword"` +} + +func (m FailoverDataGuardAssociationDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/failover_data_guard_association_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/failover_data_guard_association_request_response.go new file mode 100644 index 000000000..265c73f00 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/failover_data_guard_association_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// FailoverDataGuardAssociationRequest wrapper for the FailoverDataGuardAssociation operation +type FailoverDataGuardAssociationRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` + + // The Data Guard association's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DataGuardAssociationId *string `mandatory:"true" contributesTo:"path" name:"dataGuardAssociationId"` + + // A request to perform a failover, transitioning a standby database into a primary database. + FailoverDataGuardAssociationDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request FailoverDataGuardAssociationRequest) String() string { + return common.PointerString(request) +} + +// FailoverDataGuardAssociationResponse wrapper for the FailoverDataGuardAssociation operation +type FailoverDataGuardAssociationResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DataGuardAssociation instance + DataGuardAssociation `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response FailoverDataGuardAssociationResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_backup_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_backup_request_response.go new file mode 100644 index 000000000..8a7ce1835 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_backup_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBackupRequest wrapper for the GetBackup operation +type GetBackupRequest struct { + + // The backup OCID. + BackupId *string `mandatory:"true" contributesTo:"path" name:"backupId"` +} + +func (request GetBackupRequest) String() string { + return common.PointerString(request) +} + +// GetBackupResponse wrapper for the GetBackup operation +type GetBackupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Backup instance + Backup `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetBackupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_data_guard_association_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_data_guard_association_request_response.go new file mode 100644 index 000000000..2b2f55456 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_data_guard_association_request_response.go @@ -0,0 +1,44 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDataGuardAssociationRequest wrapper for the GetDataGuardAssociation operation +type GetDataGuardAssociationRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` + + // The Data Guard association's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DataGuardAssociationId *string `mandatory:"true" contributesTo:"path" name:"dataGuardAssociationId"` +} + +func (request GetDataGuardAssociationRequest) String() string { + return common.PointerString(request) +} + +// GetDataGuardAssociationResponse wrapper for the GetDataGuardAssociation operation +type GetDataGuardAssociationResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DataGuardAssociation instance + DataGuardAssociation `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDataGuardAssociationResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_database_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_database_request_response.go new file mode 100644 index 000000000..8b3a5701b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_database_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDatabaseRequest wrapper for the GetDatabase operation +type GetDatabaseRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` +} + +func (request GetDatabaseRequest) String() string { + return common.PointerString(request) +} + +// GetDatabaseResponse wrapper for the GetDatabase operation +type GetDatabaseResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Database instance + Database `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDatabaseResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_patch_history_entry_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_patch_history_entry_request_response.go new file mode 100644 index 000000000..c129a3119 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_patch_history_entry_request_response.go @@ -0,0 +1,44 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDbHomePatchHistoryEntryRequest wrapper for the GetDbHomePatchHistoryEntry operation +type GetDbHomePatchHistoryEntryRequest struct { + + // The database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbHomeId *string `mandatory:"true" contributesTo:"path" name:"dbHomeId"` + + // The OCID of the patch history entry. + PatchHistoryEntryId *string `mandatory:"true" contributesTo:"path" name:"patchHistoryEntryId"` +} + +func (request GetDbHomePatchHistoryEntryRequest) String() string { + return common.PointerString(request) +} + +// GetDbHomePatchHistoryEntryResponse wrapper for the GetDbHomePatchHistoryEntry operation +type GetDbHomePatchHistoryEntryResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The PatchHistoryEntry instance + PatchHistoryEntry `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDbHomePatchHistoryEntryResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_patch_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_patch_request_response.go new file mode 100644 index 000000000..74b389688 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_patch_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDbHomePatchRequest wrapper for the GetDbHomePatch operation +type GetDbHomePatchRequest struct { + + // The database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbHomeId *string `mandatory:"true" contributesTo:"path" name:"dbHomeId"` + + // The OCID of the patch. + PatchId *string `mandatory:"true" contributesTo:"path" name:"patchId"` +} + +func (request GetDbHomePatchRequest) String() string { + return common.PointerString(request) +} + +// GetDbHomePatchResponse wrapper for the GetDbHomePatch operation +type GetDbHomePatchResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Patch instance + Patch `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDbHomePatchResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_request_response.go new file mode 100644 index 000000000..795c4f4bc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_db_home_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDbHomeRequest wrapper for the GetDbHome operation +type GetDbHomeRequest struct { + + // The database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbHomeId *string `mandatory:"true" contributesTo:"path" name:"dbHomeId"` +} + +func (request GetDbHomeRequest) String() string { + return common.PointerString(request) +} + +// GetDbHomeResponse wrapper for the GetDbHome operation +type GetDbHomeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbHome instance + DbHome `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDbHomeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_db_node_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_db_node_request_response.go new file mode 100644 index 000000000..3f5e1fc2a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_db_node_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDbNodeRequest wrapper for the GetDbNode operation +type GetDbNodeRequest struct { + + // The database node OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbNodeId *string `mandatory:"true" contributesTo:"path" name:"dbNodeId"` +} + +func (request GetDbNodeRequest) String() string { + return common.PointerString(request) +} + +// GetDbNodeResponse wrapper for the GetDbNode operation +type GetDbNodeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbNode instance + DbNode `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDbNodeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_patch_history_entry_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_patch_history_entry_request_response.go new file mode 100644 index 000000000..4891a2df8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_patch_history_entry_request_response.go @@ -0,0 +1,44 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDbSystemPatchHistoryEntryRequest wrapper for the GetDbSystemPatchHistoryEntry operation +type GetDbSystemPatchHistoryEntryRequest struct { + + // The DB System OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbSystemId *string `mandatory:"true" contributesTo:"path" name:"dbSystemId"` + + // The OCID of the patch history entry. + PatchHistoryEntryId *string `mandatory:"true" contributesTo:"path" name:"patchHistoryEntryId"` +} + +func (request GetDbSystemPatchHistoryEntryRequest) String() string { + return common.PointerString(request) +} + +// GetDbSystemPatchHistoryEntryResponse wrapper for the GetDbSystemPatchHistoryEntry operation +type GetDbSystemPatchHistoryEntryResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The PatchHistoryEntry instance + PatchHistoryEntry `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDbSystemPatchHistoryEntryResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_patch_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_patch_request_response.go new file mode 100644 index 000000000..916b6aafc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_patch_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDbSystemPatchRequest wrapper for the GetDbSystemPatch operation +type GetDbSystemPatchRequest struct { + + // The DB System OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbSystemId *string `mandatory:"true" contributesTo:"path" name:"dbSystemId"` + + // The OCID of the patch. + PatchId *string `mandatory:"true" contributesTo:"path" name:"patchId"` +} + +func (request GetDbSystemPatchRequest) String() string { + return common.PointerString(request) +} + +// GetDbSystemPatchResponse wrapper for the GetDbSystemPatch operation +type GetDbSystemPatchResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Patch instance + Patch `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDbSystemPatchResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_request_response.go new file mode 100644 index 000000000..9a1dc670f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/get_db_system_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetDbSystemRequest wrapper for the GetDbSystem operation +type GetDbSystemRequest struct { + + // The DB System OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbSystemId *string `mandatory:"true" contributesTo:"path" name:"dbSystemId"` +} + +func (request GetDbSystemRequest) String() string { + return common.PointerString(request) +} + +// GetDbSystemResponse wrapper for the GetDbSystem operation +type GetDbSystemResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbSystem instance + DbSystem `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetDbSystemResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/launch_db_system_details.go b/vendor/github.com/oracle/oci-go-sdk/database/launch_db_system_details.go new file mode 100644 index 000000000..2ba66b6f7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/launch_db_system_details.go @@ -0,0 +1,171 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LaunchDbSystemDetails The representation of LaunchDbSystemDetails +type LaunchDbSystemDetails struct { + + // The Availability Domain where the DB System is located. + AvailabilityDomain *string `mandatory:"true" json:"availabilityDomain"` + + // The Oracle Cloud ID (OCID) of the compartment the DB System belongs in. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The number of CPU cores to enable. The valid values depend on the specified shape: + // - BM.DenseIO1.36 and BM.HighIO1.36 - Specify a multiple of 2, from 2 to 36. + // - BM.RACLocalStorage1.72 - Specify a multiple of 4, from 4 to 72. + // - Exadata.Quarter1.84 - Specify a multiple of 2, from 22 to 84. + // - Exadata.Half1.168 - Specify a multiple of 4, from 44 to 168. + // - Exadata.Full1.336 - Specify a multiple of 8, from 88 to 336. + // For VM DB systems, the core count is inferred from the specific VM shape chosen, so this parameter is not used. + CpuCoreCount *int `mandatory:"true" json:"cpuCoreCount"` + + // The Oracle Database Edition that applies to all the databases on the DB System. + // Exadata DB Systems and 2-node RAC DB Systems require ENTERPRISE_EDITION_EXTREME_PERFORMANCE. + DatabaseEdition LaunchDbSystemDetailsDatabaseEditionEnum `mandatory:"true" json:"databaseEdition"` + + DbHome *CreateDbHomeDetails `mandatory:"true" json:"dbHome"` + + // The host name for the DB System. The host name must begin with an alphabetic character and + // can contain a maximum of 30 alphanumeric characters, including hyphens (-). + // The maximum length of the combined hostname and domain is 63 characters. + // **Note:** The hostname must be unique within the subnet. If it is not unique, + // the DB System will fail to provision. + Hostname *string `mandatory:"true" json:"hostname"` + + // The shape of the DB System. The shape determines resources allocated to the DB System - CPU cores and memory for VM shapes; CPU cores, memory and storage for non-VM (or bare metal) shapes. To get a list of shapes, use the ListDbSystemShapes operation. + Shape *string `mandatory:"true" json:"shape"` + + // The public key portion of the key pair to use for SSH access to the DB System. Multiple public keys can be provided. The length of the combined keys cannot exceed 10,000 characters. + SshPublicKeys []string `mandatory:"true" json:"sshPublicKeys"` + + // The OCID of the subnet the DB System is associated with. + // **Subnet Restrictions:** + // - For single node and 2-node (RAC) DB Systems, do not use a subnet that overlaps with 192.168.16.16/28 + // - For Exadata and VM-based RAC DB Systems, do not use a subnet that overlaps with 192.168.128.0/20 + // These subnets are used by the Oracle Clusterware private interconnect on the database instance. 
+ // Specifying an overlapping subnet will cause the private interconnect to malfunction. + // This restriction applies to both the client subnet and backup subnet. + SubnetId *string `mandatory:"true" json:"subnetId"` + + // The OCID of the backup network subnet the DB System is associated with. Applicable only to Exadata. + // **Subnet Restrictions:** See above subnetId's **Subnet Restriction**. + BackupSubnetId *string `mandatory:"false" json:"backupSubnetId"` + + // Cluster name for Exadata and 2-node RAC DB Systems. The cluster name must begin with an alphabetic character, and may contain hyphens (-). Underscores (_) are not permitted. The cluster name can be no longer than 11 characters and is not case sensitive. + ClusterName *string `mandatory:"false" json:"clusterName"` + + // The percentage assigned to DATA storage (user data and database files). + // The remaining percentage is assigned to RECO storage (database redo logs, archive logs, and recovery manager backups). + // Specify 80 or 40. The default is 80 percent assigned to DATA storage. This is not applicable for VM based DB systems. + DataStoragePercentage *int `mandatory:"false" json:"dataStoragePercentage"` + + // The type of redundancy configured for the DB System. + // Normal is 2-way redundancy, recommended for test and development systems. + // High is 3-way redundancy, recommended for production systems. + DiskRedundancy LaunchDbSystemDetailsDiskRedundancyEnum `mandatory:"false" json:"diskRedundancy,omitempty"` + + // The user-friendly name for the DB System. It does not have to be unique. + DisplayName *string `mandatory:"false" json:"displayName"` + + // A domain name used for the DB System. If the Oracle-provided Internet and VCN + // Resolver is enabled for the specified subnet, the domain name for the subnet is used + // (don't provide one). Otherwise, provide a valid DNS domain name. Hyphens (-) are not permitted. + Domain *string `mandatory:"false" json:"domain"` + + // Size, in GBs, of the initial data volume that will be created and attached to VM-shape based DB system. This storage can later be scaled up if needed. Note that the total storage size attached will be more than what is requested, to account for REDO/RECO space and software volume. + InitialDataStorageSizeInGB *int `mandatory:"false" json:"initialDataStorageSizeInGB"` + + // The Oracle license model that applies to all the databases on the DB System. The default is LICENSE_INCLUDED. + LicenseModel LaunchDbSystemDetailsLicenseModelEnum `mandatory:"false" json:"licenseModel,omitempty"` + + // Number of nodes to launch for a VM-shape based RAC DB system.
+ NodeCount *int `mandatory:"false" json:"nodeCount"` +} + +func (m LaunchDbSystemDetails) String() string { + return common.PointerString(m) +} + +// LaunchDbSystemDetailsDatabaseEditionEnum Enum with underlying type: string +type LaunchDbSystemDetailsDatabaseEditionEnum string + +// Set of constants representing the allowable values for LaunchDbSystemDetailsDatabaseEdition +const ( + LaunchDbSystemDetailsDatabaseEditionStandardEdition LaunchDbSystemDetailsDatabaseEditionEnum = "STANDARD_EDITION" + LaunchDbSystemDetailsDatabaseEditionEnterpriseEdition LaunchDbSystemDetailsDatabaseEditionEnum = "ENTERPRISE_EDITION" + LaunchDbSystemDetailsDatabaseEditionEnterpriseEditionExtremePerformance LaunchDbSystemDetailsDatabaseEditionEnum = "ENTERPRISE_EDITION_EXTREME_PERFORMANCE" + LaunchDbSystemDetailsDatabaseEditionEnterpriseEditionHighPerformance LaunchDbSystemDetailsDatabaseEditionEnum = "ENTERPRISE_EDITION_HIGH_PERFORMANCE" +) + +var mappingLaunchDbSystemDetailsDatabaseEdition = map[string]LaunchDbSystemDetailsDatabaseEditionEnum{ + "STANDARD_EDITION": LaunchDbSystemDetailsDatabaseEditionStandardEdition, + "ENTERPRISE_EDITION": LaunchDbSystemDetailsDatabaseEditionEnterpriseEdition, + "ENTERPRISE_EDITION_EXTREME_PERFORMANCE": LaunchDbSystemDetailsDatabaseEditionEnterpriseEditionExtremePerformance, + "ENTERPRISE_EDITION_HIGH_PERFORMANCE": LaunchDbSystemDetailsDatabaseEditionEnterpriseEditionHighPerformance, +} + +// GetLaunchDbSystemDetailsDatabaseEditionEnumValues Enumerates the set of values for LaunchDbSystemDetailsDatabaseEdition +func GetLaunchDbSystemDetailsDatabaseEditionEnumValues() []LaunchDbSystemDetailsDatabaseEditionEnum { + values := make([]LaunchDbSystemDetailsDatabaseEditionEnum, 0) + for _, v := range mappingLaunchDbSystemDetailsDatabaseEdition { + values = append(values, v) + } + return values +} + +// LaunchDbSystemDetailsDiskRedundancyEnum Enum with underlying type: string +type LaunchDbSystemDetailsDiskRedundancyEnum string + +// Set of constants representing the allowable values for LaunchDbSystemDetailsDiskRedundancy +const ( + LaunchDbSystemDetailsDiskRedundancyHigh LaunchDbSystemDetailsDiskRedundancyEnum = "HIGH" + LaunchDbSystemDetailsDiskRedundancyNormal LaunchDbSystemDetailsDiskRedundancyEnum = "NORMAL" +) + +var mappingLaunchDbSystemDetailsDiskRedundancy = map[string]LaunchDbSystemDetailsDiskRedundancyEnum{ + "HIGH": LaunchDbSystemDetailsDiskRedundancyHigh, + "NORMAL": LaunchDbSystemDetailsDiskRedundancyNormal, +} + +// GetLaunchDbSystemDetailsDiskRedundancyEnumValues Enumerates the set of values for LaunchDbSystemDetailsDiskRedundancy +func GetLaunchDbSystemDetailsDiskRedundancyEnumValues() []LaunchDbSystemDetailsDiskRedundancyEnum { + values := make([]LaunchDbSystemDetailsDiskRedundancyEnum, 0) + for _, v := range mappingLaunchDbSystemDetailsDiskRedundancy { + values = append(values, v) + } + return values +} + +// LaunchDbSystemDetailsLicenseModelEnum Enum with underlying type: string +type LaunchDbSystemDetailsLicenseModelEnum string + +// Set of constants representing the allowable values for LaunchDbSystemDetailsLicenseModel +const ( + LaunchDbSystemDetailsLicenseModelLicenseIncluded LaunchDbSystemDetailsLicenseModelEnum = "LICENSE_INCLUDED" + LaunchDbSystemDetailsLicenseModelBringYourOwnLicense LaunchDbSystemDetailsLicenseModelEnum = "BRING_YOUR_OWN_LICENSE" +) + +var mappingLaunchDbSystemDetailsLicenseModel = map[string]LaunchDbSystemDetailsLicenseModelEnum{ + "LICENSE_INCLUDED": LaunchDbSystemDetailsLicenseModelLicenseIncluded, + "BRING_YOUR_OWN_LICENSE": 
LaunchDbSystemDetailsLicenseModelBringYourOwnLicense, +} + +// GetLaunchDbSystemDetailsLicenseModelEnumValues Enumerates the set of values for LaunchDbSystemDetailsLicenseModel +func GetLaunchDbSystemDetailsLicenseModelEnumValues() []LaunchDbSystemDetailsLicenseModelEnum { + values := make([]LaunchDbSystemDetailsLicenseModelEnum, 0) + for _, v := range mappingLaunchDbSystemDetailsLicenseModel { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/launch_db_system_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/launch_db_system_request_response.go new file mode 100644 index 000000000..d1d6b45e6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/launch_db_system_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// LaunchDbSystemRequest wrapper for the LaunchDbSystem operation +type LaunchDbSystemRequest struct { + + // Request to launch a DB System. + LaunchDbSystemDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (for example, if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request LaunchDbSystemRequest) String() string { + return common.PointerString(request) +} + +// LaunchDbSystemResponse wrapper for the LaunchDbSystem operation +type LaunchDbSystemResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbSystem instance + DbSystem `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response LaunchDbSystemResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_backups_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_backups_request_response.go new file mode 100644 index 000000000..6bb089016 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_backups_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListBackupsRequest wrapper for the ListBackups operation +type ListBackupsRequest struct { + + // The OCID of the database. + DatabaseId *string `mandatory:"false" contributesTo:"query" name:"databaseId"` + + // The compartment OCID. + CompartmentId *string `mandatory:"false" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. 
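The LaunchDbSystemDetails payload and the LaunchDbSystemRequest/LaunchDbSystemResponse wrappers above compose into a single launch call. The following is a hypothetical sketch, not part of this change: the OCIDs, shape, hostname, and SSH key are placeholders, and it assumes the vendored client type database.DatabaseClient exposes a LaunchDbSystem method matching these wrappers.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/database"
)

func main() {
	// Assumes credentials come from the standard OCI config file (~/.oci/config).
	client, err := database.NewDatabaseClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder values; real OCIDs, shapes, and keys come from your tenancy.
	compartment := "ocid1.compartment.oc1..example"
	availabilityDomain := "Uocm:PHX-AD-1"
	shape := "VM.Standard1.2"
	hostname := "example-db"
	subnet := "ocid1.subnet.oc1.phx.example"
	cpuCores := 2
	sshKey := "ssh-rsa AAAA... user@host"

	req := database.LaunchDbSystemRequest{
		LaunchDbSystemDetails: database.LaunchDbSystemDetails{
			AvailabilityDomain: &availabilityDomain,
			CompartmentId:      &compartment,
			CpuCoreCount:       &cpuCores,
			DatabaseEdition:    database.LaunchDbSystemDetailsDatabaseEditionEnterpriseEdition,
			DbHome:             &database.CreateDbHomeDetails{ /* database name, version, etc. */ },
			Hostname:           &hostname,
			Shape:              &shape,
			SshPublicKeys:      []string{sshKey},
			SubnetId:           &subnet,
		},
		// OpcRetryToken (optional) makes a timed-out launch safe to retry without
		// creating a second DB System.
	}

	resp, err := client.LaunchDbSystem(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.String())
}
```

On success the response embeds the new DbSystem in its body along with an etag header that later conditional updates can use.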
+ Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListBackupsRequest) String() string { + return common.PointerString(request) +} + +// ListBackupsResponse wrapper for the ListBackups operation +type ListBackupsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []BackupSummary instance + Items []BackupSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListBackupsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_data_guard_associations_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_data_guard_associations_request_response.go new file mode 100644 index 000000000..7f2f123d7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_data_guard_associations_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDataGuardAssociationsRequest wrapper for the ListDataGuardAssociations operation +type ListDataGuardAssociationsRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDataGuardAssociationsRequest) String() string { + return common.PointerString(request) +} + +// ListDataGuardAssociationsResponse wrapper for the ListDataGuardAssociations operation +type ListDataGuardAssociationsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DataGuardAssociationSummary instance + Items []DataGuardAssociationSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDataGuardAssociationsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_databases_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_databases_request_response.go new file mode 100644 index 000000000..73e357018 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_databases_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDatabasesRequest wrapper for the ListDatabases operation +type ListDatabasesRequest struct { + + // The compartment OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // A database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbHomeId *string `mandatory:"true" contributesTo:"query" name:"dbHomeId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDatabasesRequest) String() string { + return common.PointerString(request) +} + +// ListDatabasesResponse wrapper for the ListDatabases operation +type ListDatabasesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DatabaseSummary instance + Items []DatabaseSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDatabasesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_home_patch_history_entries_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_home_patch_history_entries_request_response.go new file mode 100644 index 000000000..841a4a809 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_home_patch_history_entries_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbHomePatchHistoryEntriesRequest wrapper for the ListDbHomePatchHistoryEntries operation +type ListDbHomePatchHistoryEntriesRequest struct { + + // The database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). 
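Every List wrapper in this package repeats the same pagination contract: pass Limit and Page on the request, then keep following the opc-next-page response header until it is no longer returned. A minimal sketch of that loop using the ListDatabases wrapper above, assuming the vendored database.DatabaseClient exposes a matching ListDatabases method:

```
package dbexample

import (
	"context"

	"github.com/oracle/oci-go-sdk/database"
)

// listAllDatabases pages through ListDatabases until opc-next-page stops being returned.
// compartmentID and dbHomeID are placeholder OCIDs supplied by the caller.
func listAllDatabases(ctx context.Context, client database.DatabaseClient, compartmentID, dbHomeID string) ([]database.DatabaseSummary, error) {
	limit := 50
	req := database.ListDatabasesRequest{
		CompartmentId: &compartmentID,
		DbHomeId:      &dbHomeID,
		Limit:         &limit,
	}

	var all []database.DatabaseSummary
	for {
		resp, err := client.ListDatabases(ctx, req)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Items...)

		// The opc-next-page header is only set when more items remain; feed it
		// back as the page token for the next request.
		if resp.OpcNextPage == nil {
			return all, nil
		}
		req.Page = resp.OpcNextPage
	}
}
```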
+ DbHomeId *string `mandatory:"true" contributesTo:"path" name:"dbHomeId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbHomePatchHistoryEntriesRequest) String() string { + return common.PointerString(request) +} + +// ListDbHomePatchHistoryEntriesResponse wrapper for the ListDbHomePatchHistoryEntries operation +type ListDbHomePatchHistoryEntriesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []PatchHistoryEntrySummary instance + Items []PatchHistoryEntrySummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbHomePatchHistoryEntriesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_home_patches_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_home_patches_request_response.go new file mode 100644 index 000000000..5dd89d1a6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_home_patches_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbHomePatchesRequest wrapper for the ListDbHomePatches operation +type ListDbHomePatchesRequest struct { + + // The database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbHomeId *string `mandatory:"true" contributesTo:"path" name:"dbHomeId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbHomePatchesRequest) String() string { + return common.PointerString(request) +} + +// ListDbHomePatchesResponse wrapper for the ListDbHomePatches operation +type ListDbHomePatchesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []PatchSummary instance + Items []PatchSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbHomePatchesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_homes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_homes_request_response.go new file mode 100644 index 000000000..363b60972 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_homes_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbHomesRequest wrapper for the ListDbHomes operation +type ListDbHomesRequest struct { + + // The compartment OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the DB System. + DbSystemId *string `mandatory:"true" contributesTo:"query" name:"dbSystemId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbHomesRequest) String() string { + return common.PointerString(request) +} + +// ListDbHomesResponse wrapper for the ListDbHomes operation +type ListDbHomesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DbHomeSummary instance + Items []DbHomeSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbHomesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_nodes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_nodes_request_response.go new file mode 100644 index 000000000..ae02ae583 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_nodes_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbNodesRequest wrapper for the ListDbNodes operation +type ListDbNodesRequest struct { + + // The compartment OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). 
+ CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the DB System. + DbSystemId *string `mandatory:"true" contributesTo:"query" name:"dbSystemId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbNodesRequest) String() string { + return common.PointerString(request) +} + +// ListDbNodesResponse wrapper for the ListDbNodes operation +type ListDbNodesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DbNodeSummary instance + Items []DbNodeSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbNodesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_patch_history_entries_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_patch_history_entries_request_response.go new file mode 100644 index 000000000..2a4c1aba1 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_patch_history_entries_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbSystemPatchHistoryEntriesRequest wrapper for the ListDbSystemPatchHistoryEntries operation +type ListDbSystemPatchHistoryEntriesRequest struct { + + // The DB System OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbSystemId *string `mandatory:"true" contributesTo:"path" name:"dbSystemId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbSystemPatchHistoryEntriesRequest) String() string { + return common.PointerString(request) +} + +// ListDbSystemPatchHistoryEntriesResponse wrapper for the ListDbSystemPatchHistoryEntries operation +type ListDbSystemPatchHistoryEntriesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []PatchHistoryEntrySummary instance + Items []PatchHistoryEntrySummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. 
For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbSystemPatchHistoryEntriesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_patches_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_patches_request_response.go new file mode 100644 index 000000000..46ae8fc10 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_patches_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbSystemPatchesRequest wrapper for the ListDbSystemPatches operation +type ListDbSystemPatchesRequest struct { + + // The DB System OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbSystemId *string `mandatory:"true" contributesTo:"path" name:"dbSystemId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbSystemPatchesRequest) String() string { + return common.PointerString(request) +} + +// ListDbSystemPatchesResponse wrapper for the ListDbSystemPatches operation +type ListDbSystemPatchesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []PatchSummary instance + Items []PatchSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbSystemPatchesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_shapes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_shapes_request_response.go new file mode 100644 index 000000000..8f3d8b470 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_system_shapes_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbSystemShapesRequest wrapper for the ListDbSystemShapes operation +type ListDbSystemShapesRequest struct { + + // The name of the Availability Domain. + AvailabilityDomain *string `mandatory:"true" contributesTo:"query" name:"availabilityDomain"` + + // The compartment OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbSystemShapesRequest) String() string { + return common.PointerString(request) +} + +// ListDbSystemShapesResponse wrapper for the ListDbSystemShapes operation +type ListDbSystemShapesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DbSystemShapeSummary instance + Items []DbSystemShapeSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbSystemShapesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_systems_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_systems_request_response.go new file mode 100644 index 000000000..96a4693e5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_systems_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbSystemsRequest wrapper for the ListDbSystems operation +type ListDbSystemsRequest struct { + + // The compartment OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListDbSystemsRequest) String() string { + return common.PointerString(request) +} + +// ListDbSystemsResponse wrapper for the ListDbSystems operation +type ListDbSystemsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DbSystemSummary instance + Items []DbSystemSummary `presentIn:"body"` + + // For pagination of a list of items. 
When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbSystemsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/list_db_versions_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/list_db_versions_request_response.go new file mode 100644 index 000000000..e6a77b7bb --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/list_db_versions_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListDbVersionsRequest wrapper for the ListDbVersions operation +type ListDbVersionsRequest struct { + + // The compartment OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The pagination token to continue listing from. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // If provided, filters the results to the set of database versions which are supported for the given shape. + DbSystemShape *string `mandatory:"false" contributesTo:"query" name:"dbSystemShape"` +} + +func (request ListDbVersionsRequest) String() string { + return common.PointerString(request) +} + +// ListDbVersionsResponse wrapper for the ListDbVersions operation +type ListDbVersionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []DbVersionSummary instance + Items []DbVersionSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then there are additional items still to get. Include this value as the `page` parameter for the + // subsequent GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#List_Pagination). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListDbVersionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/patch.go b/vendor/github.com/oracle/oci-go-sdk/database/patch.go new file mode 100644 index 000000000..7ed8a3398 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/patch.go @@ -0,0 +1,122 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Patch A Patch for a DB System or DB Home. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Patch struct { + + // The text describing this patch package. + Description *string `mandatory:"true" json:"description"` + + // The OCID of the patch. + Id *string `mandatory:"true" json:"id"` + + // The date and time that the patch was released. + TimeReleased *common.SDKTime `mandatory:"true" json:"timeReleased"` + + // The version of this patch package. + Version *string `mandatory:"true" json:"version"` + + // Actions that can possibly be performed using this patch. + AvailableActions []PatchAvailableActionsEnum `mandatory:"false" json:"availableActions,omitempty"` + + // Action that is currently being performed or was completed last. + LastAction PatchLastActionEnum `mandatory:"false" json:"lastAction,omitempty"` + + // A descriptive text associated with the lifecycleState. + // Typically can contain additional displayable text. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The current state of the patch as a result of lastAction. + LifecycleState PatchLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` +} + +func (m Patch) String() string { + return common.PointerString(m) +} + +// PatchAvailableActionsEnum Enum with underlying type: string +type PatchAvailableActionsEnum string + +// Set of constants representing the allowable values for PatchAvailableActions +const ( + PatchAvailableActionsApply PatchAvailableActionsEnum = "APPLY" + PatchAvailableActionsPrecheck PatchAvailableActionsEnum = "PRECHECK" +) + +var mappingPatchAvailableActions = map[string]PatchAvailableActionsEnum{ + "APPLY": PatchAvailableActionsApply, + "PRECHECK": PatchAvailableActionsPrecheck, +} + +// GetPatchAvailableActionsEnumValues Enumerates the set of values for PatchAvailableActions +func GetPatchAvailableActionsEnumValues() []PatchAvailableActionsEnum { + values := make([]PatchAvailableActionsEnum, 0) + for _, v := range mappingPatchAvailableActions { + values = append(values, v) + } + return values +} + +// PatchLastActionEnum Enum with underlying type: string +type PatchLastActionEnum string + +// Set of constants representing the allowable values for PatchLastAction +const ( + PatchLastActionApply PatchLastActionEnum = "APPLY" + PatchLastActionPrecheck PatchLastActionEnum = "PRECHECK" +) + +var mappingPatchLastAction = map[string]PatchLastActionEnum{ + "APPLY": PatchLastActionApply, + "PRECHECK": PatchLastActionPrecheck, +} + +// GetPatchLastActionEnumValues Enumerates the set of values for PatchLastAction +func GetPatchLastActionEnumValues() []PatchLastActionEnum { + values := make([]PatchLastActionEnum, 0) + for _, v := range mappingPatchLastAction { + values = append(values, v) + } + return values +} + +// PatchLifecycleStateEnum Enum with underlying type: string +type PatchLifecycleStateEnum string + +// Set of constants representing the allowable values for PatchLifecycleState +const ( + PatchLifecycleStateAvailable PatchLifecycleStateEnum = "AVAILABLE" + PatchLifecycleStateSuccess PatchLifecycleStateEnum = 
"SUCCESS" + PatchLifecycleStateInProgress PatchLifecycleStateEnum = "IN_PROGRESS" + PatchLifecycleStateFailed PatchLifecycleStateEnum = "FAILED" +) + +var mappingPatchLifecycleState = map[string]PatchLifecycleStateEnum{ + "AVAILABLE": PatchLifecycleStateAvailable, + "SUCCESS": PatchLifecycleStateSuccess, + "IN_PROGRESS": PatchLifecycleStateInProgress, + "FAILED": PatchLifecycleStateFailed, +} + +// GetPatchLifecycleStateEnumValues Enumerates the set of values for PatchLifecycleState +func GetPatchLifecycleStateEnumValues() []PatchLifecycleStateEnum { + values := make([]PatchLifecycleStateEnum, 0) + for _, v := range mappingPatchLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/patch_details.go b/vendor/github.com/oracle/oci-go-sdk/database/patch_details.go new file mode 100644 index 000000000..1dab6a67a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/patch_details.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PatchDetails The details about what actions to perform and using what patch to the specified target. +// This is part of an update request that is applied to a version field on the target such +// as DB System, database home, etc. +type PatchDetails struct { + + // The action to perform on the patch. + Action PatchDetailsActionEnum `mandatory:"false" json:"action,omitempty"` + + // The OCID of the patch. + PatchId *string `mandatory:"false" json:"patchId"` +} + +func (m PatchDetails) String() string { + return common.PointerString(m) +} + +// PatchDetailsActionEnum Enum with underlying type: string +type PatchDetailsActionEnum string + +// Set of constants representing the allowable values for PatchDetailsAction +const ( + PatchDetailsActionApply PatchDetailsActionEnum = "APPLY" + PatchDetailsActionPrecheck PatchDetailsActionEnum = "PRECHECK" +) + +var mappingPatchDetailsAction = map[string]PatchDetailsActionEnum{ + "APPLY": PatchDetailsActionApply, + "PRECHECK": PatchDetailsActionPrecheck, +} + +// GetPatchDetailsActionEnumValues Enumerates the set of values for PatchDetailsAction +func GetPatchDetailsActionEnumValues() []PatchDetailsActionEnum { + values := make([]PatchDetailsActionEnum, 0) + for _, v := range mappingPatchDetailsAction { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/patch_history_entry.go b/vendor/github.com/oracle/oci-go-sdk/database/patch_history_entry.go new file mode 100644 index 000000000..21fa02767 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/patch_history_entry.go @@ -0,0 +1,91 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PatchHistoryEntry The record of a patch action on a specified target. +type PatchHistoryEntry struct { + + // The OCID of the patch history entry. + Id *string `mandatory:"true" json:"id"` + + // The current state of the action. + LifecycleState PatchHistoryEntryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the patch. 
+ PatchId *string `mandatory:"true" json:"patchId"` + + // The date and time when the patch action started. + TimeStarted *common.SDKTime `mandatory:"true" json:"timeStarted"` + + // The action being performed or was completed. + Action PatchHistoryEntryActionEnum `mandatory:"false" json:"action,omitempty"` + + // A descriptive text associated with the lifecycleState. + // Typically contains additional displayable text. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The date and time when the patch action completed. + TimeEnded *common.SDKTime `mandatory:"false" json:"timeEnded"` +} + +func (m PatchHistoryEntry) String() string { + return common.PointerString(m) +} + +// PatchHistoryEntryActionEnum Enum with underlying type: string +type PatchHistoryEntryActionEnum string + +// Set of constants representing the allowable values for PatchHistoryEntryAction +const ( + PatchHistoryEntryActionApply PatchHistoryEntryActionEnum = "APPLY" + PatchHistoryEntryActionPrecheck PatchHistoryEntryActionEnum = "PRECHECK" +) + +var mappingPatchHistoryEntryAction = map[string]PatchHistoryEntryActionEnum{ + "APPLY": PatchHistoryEntryActionApply, + "PRECHECK": PatchHistoryEntryActionPrecheck, +} + +// GetPatchHistoryEntryActionEnumValues Enumerates the set of values for PatchHistoryEntryAction +func GetPatchHistoryEntryActionEnumValues() []PatchHistoryEntryActionEnum { + values := make([]PatchHistoryEntryActionEnum, 0) + for _, v := range mappingPatchHistoryEntryAction { + values = append(values, v) + } + return values +} + +// PatchHistoryEntryLifecycleStateEnum Enum with underlying type: string +type PatchHistoryEntryLifecycleStateEnum string + +// Set of constants representing the allowable values for PatchHistoryEntryLifecycleState +const ( + PatchHistoryEntryLifecycleStateInProgress PatchHistoryEntryLifecycleStateEnum = "IN_PROGRESS" + PatchHistoryEntryLifecycleStateSucceeded PatchHistoryEntryLifecycleStateEnum = "SUCCEEDED" + PatchHistoryEntryLifecycleStateFailed PatchHistoryEntryLifecycleStateEnum = "FAILED" +) + +var mappingPatchHistoryEntryLifecycleState = map[string]PatchHistoryEntryLifecycleStateEnum{ + "IN_PROGRESS": PatchHistoryEntryLifecycleStateInProgress, + "SUCCEEDED": PatchHistoryEntryLifecycleStateSucceeded, + "FAILED": PatchHistoryEntryLifecycleStateFailed, +} + +// GetPatchHistoryEntryLifecycleStateEnumValues Enumerates the set of values for PatchHistoryEntryLifecycleState +func GetPatchHistoryEntryLifecycleStateEnumValues() []PatchHistoryEntryLifecycleStateEnum { + values := make([]PatchHistoryEntryLifecycleStateEnum, 0) + for _, v := range mappingPatchHistoryEntryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/patch_history_entry_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/patch_history_entry_summary.go new file mode 100644 index 000000000..ddb964f93 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/patch_history_entry_summary.go @@ -0,0 +1,91 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PatchHistoryEntrySummary The record of a patch action on a specified target. +type PatchHistoryEntrySummary struct { + + // The OCID of the patch history entry. + Id *string `mandatory:"true" json:"id"` + + // The current state of the action. 
+ LifecycleState PatchHistoryEntrySummaryLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID of the patch. + PatchId *string `mandatory:"true" json:"patchId"` + + // The date and time when the patch action started. + TimeStarted *common.SDKTime `mandatory:"true" json:"timeStarted"` + + // The action being performed or was completed. + Action PatchHistoryEntrySummaryActionEnum `mandatory:"false" json:"action,omitempty"` + + // A descriptive text associated with the lifecycleState. + // Typically contains additional displayable text. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The date and time when the patch action completed. + TimeEnded *common.SDKTime `mandatory:"false" json:"timeEnded"` +} + +func (m PatchHistoryEntrySummary) String() string { + return common.PointerString(m) +} + +// PatchHistoryEntrySummaryActionEnum Enum with underlying type: string +type PatchHistoryEntrySummaryActionEnum string + +// Set of constants representing the allowable values for PatchHistoryEntrySummaryAction +const ( + PatchHistoryEntrySummaryActionApply PatchHistoryEntrySummaryActionEnum = "APPLY" + PatchHistoryEntrySummaryActionPrecheck PatchHistoryEntrySummaryActionEnum = "PRECHECK" +) + +var mappingPatchHistoryEntrySummaryAction = map[string]PatchHistoryEntrySummaryActionEnum{ + "APPLY": PatchHistoryEntrySummaryActionApply, + "PRECHECK": PatchHistoryEntrySummaryActionPrecheck, +} + +// GetPatchHistoryEntrySummaryActionEnumValues Enumerates the set of values for PatchHistoryEntrySummaryAction +func GetPatchHistoryEntrySummaryActionEnumValues() []PatchHistoryEntrySummaryActionEnum { + values := make([]PatchHistoryEntrySummaryActionEnum, 0) + for _, v := range mappingPatchHistoryEntrySummaryAction { + values = append(values, v) + } + return values +} + +// PatchHistoryEntrySummaryLifecycleStateEnum Enum with underlying type: string +type PatchHistoryEntrySummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for PatchHistoryEntrySummaryLifecycleState +const ( + PatchHistoryEntrySummaryLifecycleStateInProgress PatchHistoryEntrySummaryLifecycleStateEnum = "IN_PROGRESS" + PatchHistoryEntrySummaryLifecycleStateSucceeded PatchHistoryEntrySummaryLifecycleStateEnum = "SUCCEEDED" + PatchHistoryEntrySummaryLifecycleStateFailed PatchHistoryEntrySummaryLifecycleStateEnum = "FAILED" +) + +var mappingPatchHistoryEntrySummaryLifecycleState = map[string]PatchHistoryEntrySummaryLifecycleStateEnum{ + "IN_PROGRESS": PatchHistoryEntrySummaryLifecycleStateInProgress, + "SUCCEEDED": PatchHistoryEntrySummaryLifecycleStateSucceeded, + "FAILED": PatchHistoryEntrySummaryLifecycleStateFailed, +} + +// GetPatchHistoryEntrySummaryLifecycleStateEnumValues Enumerates the set of values for PatchHistoryEntrySummaryLifecycleState +func GetPatchHistoryEntrySummaryLifecycleStateEnumValues() []PatchHistoryEntrySummaryLifecycleStateEnum { + values := make([]PatchHistoryEntrySummaryLifecycleStateEnum, 0) + for _, v := range mappingPatchHistoryEntrySummaryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/patch_summary.go b/vendor/github.com/oracle/oci-go-sdk/database/patch_summary.go new file mode 100644 index 000000000..e803c53fb --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/patch_summary.go @@ -0,0 +1,122 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PatchSummary A Patch for a DB System or DB Home. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type PatchSummary struct { + + // The text describing this patch package. + Description *string `mandatory:"true" json:"description"` + + // The OCID of the patch. + Id *string `mandatory:"true" json:"id"` + + // The date and time that the patch was released. + TimeReleased *common.SDKTime `mandatory:"true" json:"timeReleased"` + + // The version of this patch package. + Version *string `mandatory:"true" json:"version"` + + // Actions that can possibly be performed using this patch. + AvailableActions []PatchSummaryAvailableActionsEnum `mandatory:"false" json:"availableActions,omitempty"` + + // Action that is currently being performed or was completed last. + LastAction PatchSummaryLastActionEnum `mandatory:"false" json:"lastAction,omitempty"` + + // A descriptive text associated with the lifecycleState. + // Typically can contain additional displayable text. + LifecycleDetails *string `mandatory:"false" json:"lifecycleDetails"` + + // The current state of the patch as a result of lastAction. + LifecycleState PatchSummaryLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` +} + +func (m PatchSummary) String() string { + return common.PointerString(m) +} + +// PatchSummaryAvailableActionsEnum Enum with underlying type: string +type PatchSummaryAvailableActionsEnum string + +// Set of constants representing the allowable values for PatchSummaryAvailableActions +const ( + PatchSummaryAvailableActionsApply PatchSummaryAvailableActionsEnum = "APPLY" + PatchSummaryAvailableActionsPrecheck PatchSummaryAvailableActionsEnum = "PRECHECK" +) + +var mappingPatchSummaryAvailableActions = map[string]PatchSummaryAvailableActionsEnum{ + "APPLY": PatchSummaryAvailableActionsApply, + "PRECHECK": PatchSummaryAvailableActionsPrecheck, +} + +// GetPatchSummaryAvailableActionsEnumValues Enumerates the set of values for PatchSummaryAvailableActions +func GetPatchSummaryAvailableActionsEnumValues() []PatchSummaryAvailableActionsEnum { + values := make([]PatchSummaryAvailableActionsEnum, 0) + for _, v := range mappingPatchSummaryAvailableActions { + values = append(values, v) + } + return values +} + +// PatchSummaryLastActionEnum Enum with underlying type: string +type PatchSummaryLastActionEnum string + +// Set of constants representing the allowable values for PatchSummaryLastAction +const ( + PatchSummaryLastActionApply PatchSummaryLastActionEnum = "APPLY" + PatchSummaryLastActionPrecheck PatchSummaryLastActionEnum = "PRECHECK" +) + +var mappingPatchSummaryLastAction = map[string]PatchSummaryLastActionEnum{ + "APPLY": PatchSummaryLastActionApply, + "PRECHECK": PatchSummaryLastActionPrecheck, +} + +// GetPatchSummaryLastActionEnumValues Enumerates the set of values for PatchSummaryLastAction +func GetPatchSummaryLastActionEnumValues() []PatchSummaryLastActionEnum { + values := make([]PatchSummaryLastActionEnum, 0) + for _, v := range mappingPatchSummaryLastAction { + values = append(values, v) + } + return values +} + +// PatchSummaryLifecycleStateEnum Enum 
with underlying type: string +type PatchSummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for PatchSummaryLifecycleState +const ( + PatchSummaryLifecycleStateAvailable PatchSummaryLifecycleStateEnum = "AVAILABLE" + PatchSummaryLifecycleStateSuccess PatchSummaryLifecycleStateEnum = "SUCCESS" + PatchSummaryLifecycleStateInProgress PatchSummaryLifecycleStateEnum = "IN_PROGRESS" + PatchSummaryLifecycleStateFailed PatchSummaryLifecycleStateEnum = "FAILED" +) + +var mappingPatchSummaryLifecycleState = map[string]PatchSummaryLifecycleStateEnum{ + "AVAILABLE": PatchSummaryLifecycleStateAvailable, + "SUCCESS": PatchSummaryLifecycleStateSuccess, + "IN_PROGRESS": PatchSummaryLifecycleStateInProgress, + "FAILED": PatchSummaryLifecycleStateFailed, +} + +// GetPatchSummaryLifecycleStateEnumValues Enumerates the set of values for PatchSummaryLifecycleState +func GetPatchSummaryLifecycleStateEnumValues() []PatchSummaryLifecycleStateEnum { + values := make([]PatchSummaryLifecycleStateEnum, 0) + for _, v := range mappingPatchSummaryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/reinstate_data_guard_association_details.go b/vendor/github.com/oracle/oci-go-sdk/database/reinstate_data_guard_association_details.go new file mode 100644 index 000000000..b348fafb9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/reinstate_data_guard_association_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// ReinstateDataGuardAssociationDetails The Data Guard association reinstate parameters. +type ReinstateDataGuardAssociationDetails struct { + + // The DB System administrator password. + DatabaseAdminPassword *string `mandatory:"true" json:"databaseAdminPassword"` +} + +func (m ReinstateDataGuardAssociationDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/reinstate_data_guard_association_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/reinstate_data_guard_association_request_response.go new file mode 100644 index 000000000..994fdb3c2 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/reinstate_data_guard_association_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ReinstateDataGuardAssociationRequest wrapper for the ReinstateDataGuardAssociation operation +type ReinstateDataGuardAssociationRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` + + // The Data Guard association's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DataGuardAssociationId *string `mandatory:"true" contributesTo:"path" name:"dataGuardAssociationId"` + + // A request to reinstate a database in a standby role. + ReinstateDataGuardAssociationDetails `contributesTo:"body"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request ReinstateDataGuardAssociationRequest) String() string { + return common.PointerString(request) +} + +// ReinstateDataGuardAssociationResponse wrapper for the ReinstateDataGuardAssociation operation +type ReinstateDataGuardAssociationResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DataGuardAssociation instance + DataGuardAssociation `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ReinstateDataGuardAssociationResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/restore_database_details.go b/vendor/github.com/oracle/oci-go-sdk/database/restore_database_details.go new file mode 100644 index 000000000..4322a35ad --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/restore_database_details.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// RestoreDatabaseDetails The representation of RestoreDatabaseDetails +type RestoreDatabaseDetails struct { + + // Restores using the backup with the System Change Number (SCN) specified. + DatabaseSCN *string `mandatory:"false" json:"databaseSCN"` + + // Restores to the last known good state with the least possible data loss. + Latest *bool `mandatory:"false" json:"latest"` + + // Restores to the timestamp specified. + Timestamp *common.SDKTime `mandatory:"false" json:"timestamp"` +} + +func (m RestoreDatabaseDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/restore_database_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/restore_database_request_response.go new file mode 100644 index 000000000..19873c923 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/restore_database_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// RestoreDatabaseRequest wrapper for the RestoreDatabase operation +type RestoreDatabaseRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` + + // Request to perform database restore. + RestoreDatabaseDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. 
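The `if-match` comment repeated on these request wrappers is the SDK's optimistic-concurrency pattern: read the resource, capture the `Etag` response header, and send it back on the next mutating call. A minimal sketch of that flow, assuming the `GetDbSystem` operation and the standard `database.NewDatabaseClientWithConfigurationProvider` constructor from elsewhere in this SDK, with a placeholder OCID:

```
package main

import (
	"context"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/database"
)

func main() {
	// Assumed constructor; it reads the default OCI config (~/.oci/config).
	client, err := database.NewDatabaseClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	dbSystemID := common.String("ocid1.dbsystem.oc1..example") // placeholder OCID

	// GET the resource first; the response carries its current etag.
	getResp, err := client.GetDbSystem(context.Background(), database.GetDbSystemRequest{
		DbSystemId: dbSystemID,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Send the etag back as `if-match`; the update is rejected if anything
	// else modified the DB System after the GET above.
	_, err = client.UpdateDbSystem(context.Background(), database.UpdateDbSystemRequest{
		DbSystemId: dbSystemID,
		UpdateDbSystemDetails: database.UpdateDbSystemDetails{
			CpuCoreCount: common.Int(4),
		},
		IfMatch: getResp.Etag,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

The same `IfMatch` field appears on every update, delete, and action request wrapper added in this diff.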
The resource
+	// will be updated or deleted only if the etag you provide matches the resource's current etag value.
+	IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"`
+}
+
+func (request RestoreDatabaseRequest) String() string {
+	return common.PointerString(request)
+}
+
+// RestoreDatabaseResponse wrapper for the RestoreDatabase operation
+type RestoreDatabaseResponse struct {
+
+	// The underlying http response
+	RawResponse *http.Response
+
+	// The Database instance
+	Database `presentIn:"body"`
+
+	// For optimistic concurrency control. See `if-match`.
+	Etag *string `presentIn:"header" name:"etag"`
+
+	// Unique Oracle-assigned identifier for the request. If you need to contact Oracle about
+	// a particular request, please provide the request ID.
+	OpcRequestId *string `presentIn:"header" name:"opc-request-id"`
+}
+
+func (response RestoreDatabaseResponse) String() string {
+	return common.PointerString(response)
+}
diff --git a/vendor/github.com/oracle/oci-go-sdk/database/switchover_data_guard_association_details.go b/vendor/github.com/oracle/oci-go-sdk/database/switchover_data_guard_association_details.go
new file mode 100644
index 000000000..087705923
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/database/switchover_data_guard_association_details.go
@@ -0,0 +1,24 @@
+// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
+// Code generated. DO NOT EDIT.
+
+// Database Service API
+//
+// The API for the Database Service.
+//
+
+package database
+
+import (
+	"github.com/oracle/oci-go-sdk/common"
+)
+
+// SwitchoverDataGuardAssociationDetails The Data Guard association switchover parameters.
+type SwitchoverDataGuardAssociationDetails struct {
+
+	// The DB System administrator password.
+	DatabaseAdminPassword *string `mandatory:"true" json:"databaseAdminPassword"`
+}
+
+func (m SwitchoverDataGuardAssociationDetails) String() string {
+	return common.PointerString(m)
+}
diff --git a/vendor/github.com/oracle/oci-go-sdk/database/switchover_data_guard_association_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/switchover_data_guard_association_request_response.go
new file mode 100644
index 000000000..0e3024715
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/database/switchover_data_guard_association_request_response.go
@@ -0,0 +1,52 @@
+// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
+// Code generated. DO NOT EDIT.
+
+package database
+
+import (
+	"github.com/oracle/oci-go-sdk/common"
+	"net/http"
+)
+
+// SwitchoverDataGuardAssociationRequest wrapper for the SwitchoverDataGuardAssociation operation
+type SwitchoverDataGuardAssociationRequest struct {
+
+	// The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm).
+	DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"`
+
+	// The Data Guard association's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm).
+	DataGuardAssociationId *string `mandatory:"true" contributesTo:"path" name:"dataGuardAssociationId"`
+
+	// Request to switch over a primary to a standby.
+	SwitchoverDataGuardAssociationDetails `contributesTo:"body"`
+
+	// For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match`
+	// parameter to the value of the etag from a previous GET or POST response for that resource.
The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request SwitchoverDataGuardAssociationRequest) String() string { + return common.PointerString(request) +} + +// SwitchoverDataGuardAssociationResponse wrapper for the SwitchoverDataGuardAssociation operation +type SwitchoverDataGuardAssociationResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DataGuardAssociation instance + DataGuardAssociation `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response SwitchoverDataGuardAssociationResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/terminate_db_system_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/terminate_db_system_request_response.go new file mode 100644 index 000000000..991b68b0f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/terminate_db_system_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// TerminateDbSystemRequest wrapper for the TerminateDbSystem operation +type TerminateDbSystemRequest struct { + + // The DB System OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbSystemId *string `mandatory:"true" contributesTo:"path" name:"dbSystemId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request TerminateDbSystemRequest) String() string { + return common.PointerString(request) +} + +// TerminateDbSystemResponse wrapper for the TerminateDbSystem operation +type TerminateDbSystemResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response TerminateDbSystemResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/update_database_details.go b/vendor/github.com/oracle/oci-go-sdk/database/update_database_details.go new file mode 100644 index 000000000..2d436d6d6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/update_database_details.go @@ -0,0 +1,22 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. 
+// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateDatabaseDetails The representation of UpdateDatabaseDetails +type UpdateDatabaseDetails struct { + DbBackupConfig *DbBackupConfig `mandatory:"false" json:"dbBackupConfig"` +} + +func (m UpdateDatabaseDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/update_database_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/update_database_request_response.go new file mode 100644 index 000000000..c43867603 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/update_database_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateDatabaseRequest wrapper for the UpdateDatabase operation +type UpdateDatabaseRequest struct { + + // The database OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DatabaseId *string `mandatory:"true" contributesTo:"path" name:"databaseId"` + + // Request to perform database update. + UpdateDatabaseDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateDatabaseRequest) String() string { + return common.PointerString(request) +} + +// UpdateDatabaseResponse wrapper for the UpdateDatabase operation +type UpdateDatabaseResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Database instance + Database `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateDatabaseResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/update_db_home_details.go b/vendor/github.com/oracle/oci-go-sdk/database/update_db_home_details.go new file mode 100644 index 000000000..741a7e36f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/update_db_home_details.go @@ -0,0 +1,22 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateDbHomeDetails Describes the modification parameters for the DB Home. 
+type UpdateDbHomeDetails struct { + DbVersion *PatchDetails `mandatory:"false" json:"dbVersion"` +} + +func (m UpdateDbHomeDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/update_db_home_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/update_db_home_request_response.go new file mode 100644 index 000000000..b02cd0fb8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/update_db_home_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateDbHomeRequest wrapper for the UpdateDbHome operation +type UpdateDbHomeRequest struct { + + // The database home OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbHomeId *string `mandatory:"true" contributesTo:"path" name:"dbHomeId"` + + // Request to update the properties of a DB Home. + UpdateDbHomeDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateDbHomeRequest) String() string { + return common.PointerString(request) +} + +// UpdateDbHomeResponse wrapper for the UpdateDbHome operation +type UpdateDbHomeResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbHome instance + DbHome `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateDbHomeResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/update_db_system_details.go b/vendor/github.com/oracle/oci-go-sdk/database/update_db_system_details.go new file mode 100644 index 000000000..eb03d5d10 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/update_db_system_details.go @@ -0,0 +1,32 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Database Service API +// +// The API for the Database Service. +// + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateDbSystemDetails Describes the modification parameters for the DB System. +type UpdateDbSystemDetails struct { + + // The number of CPU Cores to be set on the DB System. Applicable only for non-VM based DB systems. + CpuCoreCount *int `mandatory:"false" json:"cpuCoreCount"` + + // Size, in GBs, to which the currently attached storage needs to be scaled up to for VM based DB system. This must be greater than current storage size. Note that the total storage size attached will be more than what is requested, to account for REDO/RECO space and software volume. 
+ DataStorageSizeInGBs *int `mandatory:"false" json:"dataStorageSizeInGBs"` + + // The public key portion of the key pair to use for SSH access to the DB System. Multiple public keys can be provided. The length of the combined keys cannot exceed 10,000 characters. + SshPublicKeys []string `mandatory:"false" json:"sshPublicKeys"` + + Version *PatchDetails `mandatory:"false" json:"version"` +} + +func (m UpdateDbSystemDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/database/update_db_system_request_response.go b/vendor/github.com/oracle/oci-go-sdk/database/update_db_system_request_response.go new file mode 100644 index 000000000..d9cd0493e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/database/update_db_system_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package database + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateDbSystemRequest wrapper for the UpdateDbSystem operation +type UpdateDbSystemRequest struct { + + // The DB System OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + DbSystemId *string `mandatory:"true" contributesTo:"path" name:"dbSystemId"` + + // Request to update the properties of a DB System. + UpdateDbSystemDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateDbSystemRequest) String() string { + return common.PointerString(request) +} + +// UpdateDbSystemResponse wrapper for the UpdateDbSystem operation +type UpdateDbSystemResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The DbSystem instance + DbSystem `presentIn:"body"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response UpdateDbSystemResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/example/helpers/args.go b/vendor/github.com/oracle/oci-go-sdk/example/helpers/args.go new file mode 100644 index 000000000..4f8b123f0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/example/helpers/args.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. 
+//
+// Helper methods for OCI GOSDK Samples
+//
+
+package helpers
+
+import (
+	"os"
+
+	"github.com/oracle/oci-go-sdk/common"
+
+	"github.com/subosito/gotenv"
+)
+
+var (
+	availabilityDomain string
+	compartmentID      string
+	rootCompartmentID  string
+)
+
+// ParseAgrs parses shared variables from environment variables; other samples should define their own
+// variables and call this function to initialize the shared variables
+func ParseAgrs() {
+	err := gotenv.Load(".env.sample")
+	LogIfError(err)
+
+	availabilityDomain = os.Getenv("OCI_AVAILABILITY_DOMAIN")
+	compartmentID = os.Getenv("OCI_COMPARTMENT_ID")
+	rootCompartmentID = os.Getenv("OCI_ROOT_COMPARTMENT_ID")
+}
+
+// AvailabilityDomain returns the availability domain defined in the .env.sample file
+func AvailabilityDomain() *string {
+	return common.String(availabilityDomain)
+}
+
+// CompartmentID returns the compartment ID defined in the .env.sample file
+func CompartmentID() *string {
+	return common.String(compartmentID)
+}
+
+// RootCompartmentID returns the root compartment ID defined in the .env.sample file
+func RootCompartmentID() *string {
+	return common.String(rootCompartmentID)
+}
diff --git a/vendor/github.com/oracle/oci-go-sdk/example/helpers/helper.go b/vendor/github.com/oracle/oci-go-sdk/example/helpers/helper.go
new file mode 100644
index 000000000..71317387d
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/example/helpers/helper.go
@@ -0,0 +1,111 @@
+// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
+//
+// Helper methods for OCI GOSDK Samples
+//
+
+package helpers
+
+import (
+	"fmt"
+	"log"
+	"math/rand"
+	"reflect"
+	"strings"
+	"time"
+)
+
+// LogIfError is equivalent to Println() followed by a call to os.Exit(1) if the error is not nil
+func LogIfError(err error) {
+	if err != nil {
+		log.Fatalln(err.Error())
+	}
+}
+
+// RetryUntilTrueOrError retries a function until the predicate is true or it reaches a timeout.
+// The operation is retried at the given frequency
+func RetryUntilTrueOrError(operation func() (interface{}, error), predicate func(interface{}) (bool, error), frequency, timeout <-chan time.Time) error {
+	for {
+		select {
+		case <-timeout:
+			return fmt.Errorf("timeout reached")
+		case <-frequency:
+			result, err := operation()
+			if err != nil {
+				return err
+			}
+
+			isTrue, err := predicate(result)
+			if err != nil {
+				return err
+			}
+
+			if isTrue {
+				return nil
+			}
+		}
+	}
+}
+
+// FindLifecycleFieldValue finds the lifecycle field value inside the struct using reflection
+func FindLifecycleFieldValue(request interface{}) (string, error) {
+	val := reflect.ValueOf(request)
+	if val.Kind() == reflect.Ptr {
+		if val.IsNil() {
+			return "", fmt.Errorf("cannot unmarshal a nil pointer into a response structure")
+		}
+		val = val.Elem()
+	}
+
+	var err error
+	typ := val.Type()
+	for i := 0; i < typ.NumField(); i++ {
+		if err != nil {
+			return "", err
+		}
+
+		sf := typ.Field(i)
+
+		// skip unexported fields
+		if sf.PkgPath != "" {
+			continue
+		}
+
+		sv := val.Field(i)
+
+		if sv.Kind() == reflect.Struct {
+			lif, err := FindLifecycleFieldValue(sv.Interface())
+			if err == nil {
+				return lif, nil
+			}
+		}
+		if !strings.Contains(strings.ToLower(sf.Name), "lifecyclestate") {
+			continue
+		}
+		return sv.String(), nil
+	}
+	return "", fmt.Errorf("request does not have a lifecycle field")
+}
+
+// CheckLifecycleState returns a function that checks whether a struct has the given lifecycle state
+func CheckLifecycleState(lifecycleState string) func(interface{}) (bool, error) {
+	return func(request interface{}) (bool, error) {
+		fieldLifecycle, err := FindLifecycleFieldValue(request)
+		if err != nil {
+			return false, err
+		}
+		isEqual := fieldLifecycle == lifecycleState
+		log.Printf("Current lifecycle state is: %s, waiting for it to become: %s", fieldLifecycle, lifecycleState)
+		return isEqual, nil
+	}
+}
+
+// GetRandomString returns a random string of length n
+func GetRandomString(n int) string {
+	letters := []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
+
+	b := make([]rune, n)
+	for i := range b {
+		b[i] = letters[rand.Intn(len(letters))]
+	}
+	return string(b)
+}
diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/add_user_to_group_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/add_user_to_group_details.go
new file mode 100644
index 000000000..03b811f24
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/identity/add_user_to_group_details.go
@@ -0,0 +1,27 @@
+// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
+// Code generated. DO NOT EDIT.
+
+// Identity and Access Management Service API
+//
+// APIs for managing users, groups, compartments, and policies.
+//
+
+package identity
+
+import (
+	"github.com/oracle/oci-go-sdk/common"
+)
+
+// AddUserToGroupDetails The representation of AddUserToGroupDetails
+type AddUserToGroupDetails struct {
+
+	// The OCID of the user.
+	UserId *string `mandatory:"true" json:"userId"`
+
+	// The OCID of the group.
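The `helpers` package above is what the SDK samples use to wait on long-running operations: `RetryUntilTrueOrError` polls on a frequency channel until a predicate passes or a timeout channel fires, and `CheckLifecycleState` builds that predicate by reflecting on any struct that carries a `LifecycleState`-named field. A minimal, self-contained sketch; the `fetchResource` closure and `fakeResource` type are stand-ins for a real SDK Get call and its response:

```
package main

import (
	"log"
	"time"

	"github.com/oracle/oci-go-sdk/example/helpers"
)

// fakeResource stands in for an SDK response; FindLifecycleFieldValue only
// needs some exported field whose name contains "LifecycleState".
type fakeResource struct {
	LifecycleState string
}

func main() {
	// Loads OCI_COMPARTMENT_ID and friends; expects a .env.sample file to exist.
	helpers.ParseAgrs()
	log.Printf("working in compartment %s", *helpers.CompartmentID())

	// Hypothetical fetch; a real sample would issue an SDK Get request here.
	fetchResource := func() (interface{}, error) {
		return fakeResource{LifecycleState: "ACTIVE"}, nil
	}

	// Poll every 10 seconds, give up after 5 minutes.
	err := helpers.RetryUntilTrueOrError(
		fetchResource,
		helpers.CheckLifecycleState("ACTIVE"),
		time.Tick(10*time.Second),
		time.After(5*time.Minute),
	)
	helpers.LogIfError(err)
}
```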
+ GroupId *string `mandatory:"true" json:"groupId"` +} + +func (m AddUserToGroupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/add_user_to_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/add_user_to_group_request_response.go new file mode 100644 index 000000000..34b121dd0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/add_user_to_group_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// AddUserToGroupRequest wrapper for the AddUserToGroup operation +type AddUserToGroupRequest struct { + + // Request object for adding a user to a group. + AddUserToGroupDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request AddUserToGroupRequest) String() string { + return common.PointerString(request) +} + +// AddUserToGroupResponse wrapper for the AddUserToGroup operation +type AddUserToGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The UserGroupMembership instance + UserGroupMembership `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response AddUserToGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/api_key.go b/vendor/github.com/oracle/oci-go-sdk/identity/api_key.go new file mode 100644 index 000000000..d5b9eed1d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/api_key.go @@ -0,0 +1,80 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// ApiKey A PEM-format RSA credential for securing requests to the Oracle Cloud Infrastructure REST API. Also known +// as an *API signing key*. Specifically, this is the public key from the key pair. The private key remains with +// the user calling the API. For information about generating a key pair +// in the required PEM format, see Required Keys and OCIDs (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm). +// **Important:** This is **not** the SSH key for accessing compute instances. +// Each user can have a maximum of three API signing keys. +// For more information about user credentials, see User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/usercredentials.htm). 
+type ApiKey struct { + + // An Oracle-assigned identifier for the key, in this format: + // TENANCY_OCID/USER_OCID/KEY_FINGERPRINT. + KeyId *string `mandatory:"false" json:"keyId"` + + // The key's value. + KeyValue *string `mandatory:"false" json:"keyValue"` + + // The key's fingerprint (e.g., 12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef). + Fingerprint *string `mandatory:"false" json:"fingerprint"` + + // The OCID of the user the key belongs to. + UserId *string `mandatory:"false" json:"userId"` + + // Date and time the `ApiKey` object was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The API key's current state. After creating an `ApiKey` object, make sure its `lifecycleState` changes from + // CREATING to ACTIVE before using it. + LifecycleState ApiKeyLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The detailed status of INACTIVE lifecycleState. + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m ApiKey) String() string { + return common.PointerString(m) +} + +// ApiKeyLifecycleStateEnum Enum with underlying type: string +type ApiKeyLifecycleStateEnum string + +// Set of constants representing the allowable values for ApiKeyLifecycleState +const ( + ApiKeyLifecycleStateCreating ApiKeyLifecycleStateEnum = "CREATING" + ApiKeyLifecycleStateActive ApiKeyLifecycleStateEnum = "ACTIVE" + ApiKeyLifecycleStateInactive ApiKeyLifecycleStateEnum = "INACTIVE" + ApiKeyLifecycleStateDeleting ApiKeyLifecycleStateEnum = "DELETING" + ApiKeyLifecycleStateDeleted ApiKeyLifecycleStateEnum = "DELETED" +) + +var mappingApiKeyLifecycleState = map[string]ApiKeyLifecycleStateEnum{ + "CREATING": ApiKeyLifecycleStateCreating, + "ACTIVE": ApiKeyLifecycleStateActive, + "INACTIVE": ApiKeyLifecycleStateInactive, + "DELETING": ApiKeyLifecycleStateDeleting, + "DELETED": ApiKeyLifecycleStateDeleted, +} + +// GetApiKeyLifecycleStateEnumValues Enumerates the set of values for ApiKeyLifecycleState +func GetApiKeyLifecycleStateEnumValues() []ApiKeyLifecycleStateEnum { + values := make([]ApiKeyLifecycleStateEnum, 0) + for _, v := range mappingApiKeyLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/availability_domain.go b/vendor/github.com/oracle/oci-go-sdk/identity/availability_domain.go new file mode 100644 index 000000000..7d5d03b95 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/availability_domain.go @@ -0,0 +1,29 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// AvailabilityDomain One or more isolated, fault-tolerant Oracle data centers that host cloud resources such as instances, volumes, +// and subnets. A region contains several Availability Domains. For more information, see +// Regions and Availability Domains (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm). +type AvailabilityDomain struct { + + // The name of the Availability Domain. + Name *string `mandatory:"false" json:"name"` + + // The OCID of the tenancy. 
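The `AvailabilityDomain` model here is normally obtained through the identity client's list operation rather than constructed directly. A hedged sketch of that call, assuming `identity.NewIdentityClientWithConfigurationProvider` and the `ListAvailabilityDomains` request/response wrappers defined elsewhere in this package, with a placeholder tenancy OCID:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

func main() {
	client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	// The compartment here is usually the tenancy (root compartment) OCID.
	resp, err := client.ListAvailabilityDomains(context.Background(), identity.ListAvailabilityDomainsRequest{
		CompartmentId: common.String("ocid1.tenancy.oc1..example"), // placeholder OCID
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, ad := range resp.Items {
		fmt.Println(*ad.Name)
	}
}
```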
+ CompartmentId *string `mandatory:"false" json:"compartmentId"` +} + +func (m AvailabilityDomain) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/compartment.go b/vendor/github.com/oracle/oci-go-sdk/identity/compartment.go new file mode 100644 index 000000000..941523d22 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/compartment.go @@ -0,0 +1,88 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Compartment A collection of related resources. Compartments are a fundamental component of Oracle Cloud Infrastructure +// for organizing and isolating your cloud resources. You use them to clearly separate resources for the purposes +// of measuring usage and billing, access (through the use of IAM Service policies), and isolation (separating the +// resources for one project or business unit from another). A common approach is to create a compartment for each +// major part of your organization. For more information, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm) and also +// Setting Up Your Tenancy (https://docs.us-phoenix-1.oraclecloud.com/Content/GSG/Concepts/settinguptenancy.htm). +// +// To place a resource in a compartment, simply specify the compartment ID in the "Create" request object when +// initially creating the resource. For example, to launch an instance into a particular compartment, specify +// that compartment's OCID in the `LaunchInstance` request. You can't move an existing resource from one +// compartment to another. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Compartment struct { + + // The OCID of the compartment. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the tenancy containing the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the compartment during creation. The name must be unique across all + // compartments in the tenancy. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the compartment. Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` + + // Date and time the compartment was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The compartment's current state. After creating a compartment, make sure its `lifecycleState` changes from + // CREATING to ACTIVE before using it. + LifecycleState CompartmentLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The detailed status of INACTIVE lifecycleState. 
+ InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m Compartment) String() string { + return common.PointerString(m) +} + +// CompartmentLifecycleStateEnum Enum with underlying type: string +type CompartmentLifecycleStateEnum string + +// Set of constants representing the allowable values for CompartmentLifecycleState +const ( + CompartmentLifecycleStateCreating CompartmentLifecycleStateEnum = "CREATING" + CompartmentLifecycleStateActive CompartmentLifecycleStateEnum = "ACTIVE" + CompartmentLifecycleStateInactive CompartmentLifecycleStateEnum = "INACTIVE" + CompartmentLifecycleStateDeleting CompartmentLifecycleStateEnum = "DELETING" + CompartmentLifecycleStateDeleted CompartmentLifecycleStateEnum = "DELETED" +) + +var mappingCompartmentLifecycleState = map[string]CompartmentLifecycleStateEnum{ + "CREATING": CompartmentLifecycleStateCreating, + "ACTIVE": CompartmentLifecycleStateActive, + "INACTIVE": CompartmentLifecycleStateInactive, + "DELETING": CompartmentLifecycleStateDeleting, + "DELETED": CompartmentLifecycleStateDeleted, +} + +// GetCompartmentLifecycleStateEnumValues Enumerates the set of values for CompartmentLifecycleState +func GetCompartmentLifecycleStateEnumValues() []CompartmentLifecycleStateEnum { + values := make([]CompartmentLifecycleStateEnum, 0) + for _, v := range mappingCompartmentLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_api_key_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_api_key_details.go new file mode 100644 index 000000000..b9d6157ab --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_api_key_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateApiKeyDetails The representation of CreateApiKeyDetails +type CreateApiKeyDetails struct { + + // The public key. Must be an RSA key in PEM format. + Key *string `mandatory:"true" json:"key"` +} + +func (m CreateApiKeyDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_compartment_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_compartment_details.go new file mode 100644 index 000000000..fb4813a61 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_compartment_details.go @@ -0,0 +1,31 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateCompartmentDetails The representation of CreateCompartmentDetails +type CreateCompartmentDetails struct { + + // The OCID of the tenancy containing the compartment. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the compartment during creation. The name must be unique across all compartments + // in the tenancy. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the compartment during creation. Does not have to be unique, and it's changeable. 
+ Description *string `mandatory:"true" json:"description"` +} + +func (m CreateCompartmentDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_compartment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_compartment_request_response.go new file mode 100644 index 000000000..5ae960ed6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_compartment_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateCompartmentRequest wrapper for the CreateCompartment operation +type CreateCompartmentRequest struct { + + // Request object for creating a new compartment. + CreateCompartmentDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateCompartmentRequest) String() string { + return common.PointerString(request) +} + +// CreateCompartmentResponse wrapper for the CreateCompartment operation +type CreateCompartmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Compartment instance + Compartment `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateCompartmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_customer_secret_key_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_customer_secret_key_details.go new file mode 100644 index 000000000..f61527dc9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_customer_secret_key_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateCustomerSecretKeyDetails The representation of CreateCustomerSecretKeyDetails +type CreateCustomerSecretKeyDetails struct { + + // The name you assign to the secret key during creation. Does not have to be unique, and it's changeable. 
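`CreateCompartmentDetails` and the `CreateCompartmentRequest` wrapper above combine in the usual create pattern for this package: embed the details struct in the request, then read the model and etag off the response. A minimal sketch, assuming the identity client constructor from this SDK and a placeholder tenancy OCID:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

func main() {
	client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	resp, err := client.CreateCompartment(context.Background(), identity.CreateCompartmentRequest{
		CreateCompartmentDetails: identity.CreateCompartmentDetails{
			CompartmentId: common.String("ocid1.tenancy.oc1..example"), // placeholder tenancy OCID
			Name:          common.String("example-compartment"),
			Description:   common.String("Example compartment created from the Go SDK"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// A new compartment starts in CREATING; callers normally wait for ACTIVE
	// (see the Compartment lifecycle constants above) before using it.
	fmt.Println(*resp.Compartment.Id, resp.Compartment.LifecycleState)
}
```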
+ DisplayName *string `mandatory:"true" json:"displayName"` +} + +func (m CreateCustomerSecretKeyDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_customer_secret_key_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_customer_secret_key_request_response.go new file mode 100644 index 000000000..db24ca7a8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_customer_secret_key_request_response.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateCustomerSecretKeyRequest wrapper for the CreateCustomerSecretKey operation +type CreateCustomerSecretKeyRequest struct { + + // Request object for creating a new secret key. + CreateCustomerSecretKeyDetails `contributesTo:"body"` + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateCustomerSecretKeyRequest) String() string { + return common.PointerString(request) +} + +// CreateCustomerSecretKeyResponse wrapper for the CreateCustomerSecretKey operation +type CreateCustomerSecretKeyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CustomerSecretKey instance + CustomerSecretKey `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateCustomerSecretKeyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_group_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_group_details.go new file mode 100644 index 000000000..98a265dc9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_group_details.go @@ -0,0 +1,31 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateGroupDetails The representation of CreateGroupDetails +type CreateGroupDetails struct { + + // The OCID of the tenancy containing the group. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the group during creation. The name must be unique across all groups + // in the tenancy and cannot be changed. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the group during creation. Does not have to be unique, and it's changeable. 
+ Description *string `mandatory:"true" json:"description"` +} + +func (m CreateGroupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_group_request_response.go new file mode 100644 index 000000000..882880cb9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_group_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateGroupRequest wrapper for the CreateGroup operation +type CreateGroupRequest struct { + + // Request object for creating a new group. + CreateGroupDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateGroupRequest) String() string { + return common.PointerString(request) +} + +// CreateGroupResponse wrapper for the CreateGroup operation +type CreateGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Group instance + Group `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_identity_provider_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_identity_provider_details.go new file mode 100644 index 000000000..35d7f594a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_identity_provider_details.go @@ -0,0 +1,125 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateIdentityProviderDetails The representation of CreateIdentityProviderDetails +type CreateIdentityProviderDetails interface { + + // The OCID of your tenancy. + GetCompartmentId() *string + + // The name you assign to the `IdentityProvider` during creation. + // The name must be unique across all `IdentityProvider` objects in the + // tenancy and cannot be changed. + GetName() *string + + // The description you assign to the `IdentityProvider` during creation. + // Does not have to be unique, and it's changeable. + GetDescription() *string + + // The identity provider service or product. + // Supported identity providers are Oracle Identity Cloud Service (IDCS) and Microsoft + // Active Directory Federation Services (ADFS). 
+ // Example: `IDCS` + GetProductType() CreateIdentityProviderDetailsProductTypeEnum +} + +type createidentityproviderdetails struct { + JsonData []byte + CompartmentId *string `mandatory:"true" json:"compartmentId"` + Name *string `mandatory:"true" json:"name"` + Description *string `mandatory:"true" json:"description"` + ProductType CreateIdentityProviderDetailsProductTypeEnum `mandatory:"true" json:"productType"` + Protocol string `json:"protocol"` +} + +// UnmarshalJSON unmarshals json +func (m *createidentityproviderdetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalercreateidentityproviderdetails createidentityproviderdetails + s := struct { + Model Unmarshalercreateidentityproviderdetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.CompartmentId = s.Model.CompartmentId + m.Name = s.Model.Name + m.Description = s.Model.Description + m.ProductType = s.Model.ProductType + m.Protocol = s.Model.Protocol + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *createidentityproviderdetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.Protocol { + case "SAML2": + mm := CreateSaml2IdentityProviderDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +//GetCompartmentId returns CompartmentId +func (m createidentityproviderdetails) GetCompartmentId() *string { + return m.CompartmentId +} + +//GetName returns Name +func (m createidentityproviderdetails) GetName() *string { + return m.Name +} + +//GetDescription returns Description +func (m createidentityproviderdetails) GetDescription() *string { + return m.Description +} + +//GetProductType returns ProductType +func (m createidentityproviderdetails) GetProductType() CreateIdentityProviderDetailsProductTypeEnum { + return m.ProductType +} + +func (m createidentityproviderdetails) String() string { + return common.PointerString(m) +} + +// CreateIdentityProviderDetailsProductTypeEnum Enum with underlying type: string +type CreateIdentityProviderDetailsProductTypeEnum string + +// Set of constants representing the allowable values for CreateIdentityProviderDetailsProductType +const ( + CreateIdentityProviderDetailsProductTypeIdcs CreateIdentityProviderDetailsProductTypeEnum = "IDCS" + CreateIdentityProviderDetailsProductTypeAdfs CreateIdentityProviderDetailsProductTypeEnum = "ADFS" +) + +var mappingCreateIdentityProviderDetailsProductType = map[string]CreateIdentityProviderDetailsProductTypeEnum{ + "IDCS": CreateIdentityProviderDetailsProductTypeIdcs, + "ADFS": CreateIdentityProviderDetailsProductTypeAdfs, +} + +// GetCreateIdentityProviderDetailsProductTypeEnumValues Enumerates the set of values for CreateIdentityProviderDetailsProductType +func GetCreateIdentityProviderDetailsProductTypeEnumValues() []CreateIdentityProviderDetailsProductTypeEnum { + values := make([]CreateIdentityProviderDetailsProductTypeEnum, 0) + for _, v := range mappingCreateIdentityProviderDetailsProductType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_identity_provider_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_identity_provider_request_response.go new file mode 100644 index 000000000..a3ac552dd --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_identity_provider_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. 
All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateIdentityProviderRequest wrapper for the CreateIdentityProvider operation +type CreateIdentityProviderRequest struct { + + // Request object for creating a new SAML2 identity provider. + CreateIdentityProviderDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateIdentityProviderRequest) String() string { + return common.PointerString(request) +} + +// CreateIdentityProviderResponse wrapper for the CreateIdentityProvider operation +type CreateIdentityProviderResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IdentityProvider instance + IdentityProvider `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateIdentityProviderResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_idp_group_mapping_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_idp_group_mapping_details.go new file mode 100644 index 000000000..a6cb91192 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_idp_group_mapping_details.go @@ -0,0 +1,28 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateIdpGroupMappingDetails The representation of CreateIdpGroupMappingDetails +type CreateIdpGroupMappingDetails struct { + + // The name of the IdP group you want to map. + IdpGroupName *string `mandatory:"true" json:"idpGroupName"` + + // The OCID of the IAM Service Group + // you want to map to the IdP group. + GroupId *string `mandatory:"true" json:"groupId"` +} + +func (m CreateIdpGroupMappingDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_idp_group_mapping_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_idp_group_mapping_request_response.go new file mode 100644 index 000000000..376d07bc3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_idp_group_mapping_request_response.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateIdpGroupMappingRequest wrapper for the CreateIdpGroupMapping operation +type CreateIdpGroupMappingRequest struct { + + // Add a mapping from an SAML2.0 identity provider group to a BMC group. + CreateIdpGroupMappingDetails `contributesTo:"body"` + + // The OCID of the identity provider. + IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateIdpGroupMappingRequest) String() string { + return common.PointerString(request) +} + +// CreateIdpGroupMappingResponse wrapper for the CreateIdpGroupMapping operation +type CreateIdpGroupMappingResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IdpGroupMapping instance + IdpGroupMapping `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateIdpGroupMappingResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_or_reset_u_i_password_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_or_reset_u_i_password_request_response.go new file mode 100644 index 000000000..8d6de9aa8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_or_reset_u_i_password_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateOrResetUIPasswordRequest wrapper for the CreateOrResetUIPassword operation +type CreateOrResetUIPasswordRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateOrResetUIPasswordRequest) String() string { + return common.PointerString(request) +} + +// CreateOrResetUIPasswordResponse wrapper for the CreateOrResetUIPassword operation +type CreateOrResetUIPasswordResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The UiPassword instance + UiPassword `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateOrResetUIPasswordResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_policy_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_policy_details.go new file mode 100644 index 000000000..b25f8b649 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_policy_details.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreatePolicyDetails The representation of CreatePolicyDetails +type CreatePolicyDetails struct { + + // The OCID of the compartment containing the policy (either the tenancy or another compartment). + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the policy during creation. The name must be unique across all policies + // in the tenancy and cannot be changed. + Name *string `mandatory:"true" json:"name"` + + // An array of policy statements written in the policy language. See + // How Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policies.htm) and + // Common Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/commonpolicies.htm). + Statements []string `mandatory:"true" json:"statements"` + + // The description you assign to the policy during creation. Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` + + // The version of the policy. If null or set to an empty string, when a request comes in for authorization, the + // policy will be evaluated according to the current behavior of the services at that moment. If set to a particular + // date (YYYY-MM-DD), the policy will be evaluated according to the behavior of the services on that date. + VersionDate *common.SDKTime `mandatory:"false" json:"versionDate"` +} + +func (m CreatePolicyDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_policy_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_policy_request_response.go new file mode 100644 index 000000000..b6d7e0028 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_policy_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreatePolicyRequest wrapper for the CreatePolicy operation +type CreatePolicyRequest struct { + + // Request object for creating a new policy. + CreatePolicyDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. 
Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreatePolicyRequest) String() string { + return common.PointerString(request) +} + +// CreatePolicyResponse wrapper for the CreatePolicy operation +type CreatePolicyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Policy instance + Policy `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreatePolicyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_region_subscription_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_region_subscription_details.go new file mode 100644 index 000000000..78e866f7c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_region_subscription_details.go @@ -0,0 +1,29 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateRegionSubscriptionDetails The representation of CreateRegionSubscriptionDetails +type CreateRegionSubscriptionDetails struct { + + // The regions's key. + // Allowed values are: + // - `PHX` + // - `IAD` + // - `FRA` + // Example: `PHX` + RegionKey *string `mandatory:"true" json:"regionKey"` +} + +func (m CreateRegionSubscriptionDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_region_subscription_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_region_subscription_request_response.go new file mode 100644 index 000000000..66dc199d1 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_region_subscription_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateRegionSubscriptionRequest wrapper for the CreateRegionSubscription operation +type CreateRegionSubscriptionRequest struct { + + // Request object for activate a new region. + CreateRegionSubscriptionDetails `contributesTo:"body"` + + // The OCID of the tenancy. + TenancyId *string `mandatory:"true" contributesTo:"path" name:"tenancyId"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). 
+ OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateRegionSubscriptionRequest) String() string { + return common.PointerString(request) +} + +// CreateRegionSubscriptionResponse wrapper for the CreateRegionSubscription operation +type CreateRegionSubscriptionResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The RegionSubscription instance + RegionSubscription `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreateRegionSubscriptionResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_saml2_identity_provider_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_saml2_identity_provider_details.go new file mode 100644 index 000000000..f4eba6eee --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_saml2_identity_provider_details.go @@ -0,0 +1,81 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// CreateSaml2IdentityProviderDetails The representation of CreateSaml2IdentityProviderDetails +type CreateSaml2IdentityProviderDetails struct { + + // The OCID of your tenancy. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the `IdentityProvider` during creation. + // The name must be unique across all `IdentityProvider` objects in the + // tenancy and cannot be changed. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the `IdentityProvider` during creation. + // Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` + + // The URL for retrieving the identity provider's metadata, + // which contains information required for federating. + MetadataUrl *string `mandatory:"true" json:"metadataUrl"` + + // The XML that contains the information required for federating. + Metadata *string `mandatory:"true" json:"metadata"` + + // The identity provider service or product. + // Supported identity providers are Oracle Identity Cloud Service (IDCS) and Microsoft + // Active Directory Federation Services (ADFS). 
+ // Example: `IDCS` + ProductType CreateIdentityProviderDetailsProductTypeEnum `mandatory:"true" json:"productType"` +} + +//GetCompartmentId returns CompartmentId +func (m CreateSaml2IdentityProviderDetails) GetCompartmentId() *string { + return m.CompartmentId +} + +//GetName returns Name +func (m CreateSaml2IdentityProviderDetails) GetName() *string { + return m.Name +} + +//GetDescription returns Description +func (m CreateSaml2IdentityProviderDetails) GetDescription() *string { + return m.Description +} + +//GetProductType returns ProductType +func (m CreateSaml2IdentityProviderDetails) GetProductType() CreateIdentityProviderDetailsProductTypeEnum { + return m.ProductType +} + +func (m CreateSaml2IdentityProviderDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m CreateSaml2IdentityProviderDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeCreateSaml2IdentityProviderDetails CreateSaml2IdentityProviderDetails + s := struct { + DiscriminatorParam string `json:"protocol"` + MarshalTypeCreateSaml2IdentityProviderDetails + }{ + "SAML2", + (MarshalTypeCreateSaml2IdentityProviderDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_swift_password_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_swift_password_details.go new file mode 100644 index 000000000..0182d9dc5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_swift_password_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateSwiftPasswordDetails The representation of CreateSwiftPasswordDetails +type CreateSwiftPasswordDetails struct { + + // The description you assign to the Swift password during creation. Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` +} + +func (m CreateSwiftPasswordDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_swift_password_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_swift_password_request_response.go new file mode 100644 index 000000000..696261df5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_swift_password_request_response.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateSwiftPasswordRequest wrapper for the CreateSwiftPassword operation +type CreateSwiftPasswordRequest struct { + + // Request object for creating a new swift password. + CreateSwiftPasswordDetails `contributesTo:"body"` + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. 
Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateSwiftPasswordRequest) String() string { + return common.PointerString(request) +} + +// CreateSwiftPasswordResponse wrapper for the CreateSwiftPassword operation +type CreateSwiftPasswordResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The SwiftPassword instance + SwiftPassword `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateSwiftPasswordResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_user_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_user_details.go new file mode 100644 index 000000000..37ddd5857 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_user_details.go @@ -0,0 +1,31 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateUserDetails The representation of CreateUserDetails +type CreateUserDetails struct { + + // The OCID of the tenancy containing the user. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the user during creation. This is the user's login for the Console. + // The name must be unique across all users in the tenancy and cannot be changed. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the user during creation. Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` +} + +func (m CreateUserDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/create_user_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/create_user_request_response.go new file mode 100644 index 000000000..137de9446 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/create_user_request_response.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateUserRequest wrapper for the CreateUser operation +type CreateUserRequest struct { + + // Request object for creating a new user. + CreateUserDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. 
Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateUserRequest) String() string { + return common.PointerString(request) +} + +// CreateUserResponse wrapper for the CreateUser operation +type CreateUserResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The User instance + User `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response CreateUserResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/customer_secret_key.go b/vendor/github.com/oracle/oci-go-sdk/identity/customer_secret_key.go new file mode 100644 index 000000000..f3137bf6e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/customer_secret_key.go @@ -0,0 +1,82 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CustomerSecretKey A `CustomerSecretKey` is an Oracle-provided key for using the Object Storage Service's +// Amazon S3 compatible API (https://docs.us-phoenix-1.oraclecloud.com/Content/Object/Tasks/s3compatibleapi.htm). +// A user can have up to two secret keys at a time. +// **Note:** The secret key is always an Oracle-generated string; you can't change it to a string of your choice. +// For more information, see Managing User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Tasks/managingcredentials.htm). +type CustomerSecretKey struct { + + // The secret key. + Key *string `mandatory:"false" json:"key"` + + // The OCID of the secret key. + Id *string `mandatory:"false" json:"id"` + + // The OCID of the user the password belongs to. + UserId *string `mandatory:"false" json:"userId"` + + // The display name you assign to the secret key. Does not have to be unique, and it's changeable. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Date and time the `CustomerSecretKey` object was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // Date and time when this password will expire, in the format defined by RFC3339. + // Null if it never expires. + // Example: `2016-08-25T21:10:29.600Z` + TimeExpires *common.SDKTime `mandatory:"false" json:"timeExpires"` + + // The secret key's current state. After creating a secret key, make sure its `lifecycleState` changes from + // CREATING to ACTIVE before using it. + LifecycleState CustomerSecretKeyLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The detailed status of INACTIVE lifecycleState. 
+ InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m CustomerSecretKey) String() string { + return common.PointerString(m) +} + +// CustomerSecretKeyLifecycleStateEnum Enum with underlying type: string +type CustomerSecretKeyLifecycleStateEnum string + +// Set of constants representing the allowable values for CustomerSecretKeyLifecycleState +const ( + CustomerSecretKeyLifecycleStateCreating CustomerSecretKeyLifecycleStateEnum = "CREATING" + CustomerSecretKeyLifecycleStateActive CustomerSecretKeyLifecycleStateEnum = "ACTIVE" + CustomerSecretKeyLifecycleStateInactive CustomerSecretKeyLifecycleStateEnum = "INACTIVE" + CustomerSecretKeyLifecycleStateDeleting CustomerSecretKeyLifecycleStateEnum = "DELETING" + CustomerSecretKeyLifecycleStateDeleted CustomerSecretKeyLifecycleStateEnum = "DELETED" +) + +var mappingCustomerSecretKeyLifecycleState = map[string]CustomerSecretKeyLifecycleStateEnum{ + "CREATING": CustomerSecretKeyLifecycleStateCreating, + "ACTIVE": CustomerSecretKeyLifecycleStateActive, + "INACTIVE": CustomerSecretKeyLifecycleStateInactive, + "DELETING": CustomerSecretKeyLifecycleStateDeleting, + "DELETED": CustomerSecretKeyLifecycleStateDeleted, +} + +// GetCustomerSecretKeyLifecycleStateEnumValues Enumerates the set of values for CustomerSecretKeyLifecycleState +func GetCustomerSecretKeyLifecycleStateEnumValues() []CustomerSecretKeyLifecycleStateEnum { + values := make([]CustomerSecretKeyLifecycleStateEnum, 0) + for _, v := range mappingCustomerSecretKeyLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/customer_secret_key_summary.go b/vendor/github.com/oracle/oci-go-sdk/identity/customer_secret_key_summary.go new file mode 100644 index 000000000..bc59eed42 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/customer_secret_key_summary.go @@ -0,0 +1,76 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CustomerSecretKeySummary As the name suggests, a `CustomerSecretKeySummary` object contains information about a `CustomerSecretKey`. +// A `CustomerSecretKey` is an Oracle-provided key for using the Object Storage Service's Amazon S3 compatible API. +type CustomerSecretKeySummary struct { + + // The OCID of the secret key. + Id *string `mandatory:"false" json:"id"` + + // The OCID of the user the password belongs to. + UserId *string `mandatory:"false" json:"userId"` + + // The displayName you assign to the secret key. Does not have to be unique, and it's changeable. + DisplayName *string `mandatory:"false" json:"displayName"` + + // Date and time the `CustomerSecretKey` object was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // Date and time when this password will expire, in the format defined by RFC3339. + // Null if it never expires. + // Example: `2016-08-25T21:10:29.600Z` + TimeExpires *common.SDKTime `mandatory:"false" json:"timeExpires"` + + // The secret key's current state. After creating a secret key, make sure its `lifecycleState` changes from + // CREATING to ACTIVE before using it. 
+ LifecycleState CustomerSecretKeySummaryLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The detailed status of INACTIVE lifecycleState. + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m CustomerSecretKeySummary) String() string { + return common.PointerString(m) +} + +// CustomerSecretKeySummaryLifecycleStateEnum Enum with underlying type: string +type CustomerSecretKeySummaryLifecycleStateEnum string + +// Set of constants representing the allowable values for CustomerSecretKeySummaryLifecycleState +const ( + CustomerSecretKeySummaryLifecycleStateCreating CustomerSecretKeySummaryLifecycleStateEnum = "CREATING" + CustomerSecretKeySummaryLifecycleStateActive CustomerSecretKeySummaryLifecycleStateEnum = "ACTIVE" + CustomerSecretKeySummaryLifecycleStateInactive CustomerSecretKeySummaryLifecycleStateEnum = "INACTIVE" + CustomerSecretKeySummaryLifecycleStateDeleting CustomerSecretKeySummaryLifecycleStateEnum = "DELETING" + CustomerSecretKeySummaryLifecycleStateDeleted CustomerSecretKeySummaryLifecycleStateEnum = "DELETED" +) + +var mappingCustomerSecretKeySummaryLifecycleState = map[string]CustomerSecretKeySummaryLifecycleStateEnum{ + "CREATING": CustomerSecretKeySummaryLifecycleStateCreating, + "ACTIVE": CustomerSecretKeySummaryLifecycleStateActive, + "INACTIVE": CustomerSecretKeySummaryLifecycleStateInactive, + "DELETING": CustomerSecretKeySummaryLifecycleStateDeleting, + "DELETED": CustomerSecretKeySummaryLifecycleStateDeleted, +} + +// GetCustomerSecretKeySummaryLifecycleStateEnumValues Enumerates the set of values for CustomerSecretKeySummaryLifecycleState +func GetCustomerSecretKeySummaryLifecycleStateEnumValues() []CustomerSecretKeySummaryLifecycleStateEnum { + values := make([]CustomerSecretKeySummaryLifecycleStateEnum, 0) + for _, v := range mappingCustomerSecretKeySummaryLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_api_key_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_api_key_request_response.go new file mode 100644 index 000000000..e2beffe0a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_api_key_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteApiKeyRequest wrapper for the DeleteApiKey operation +type DeleteApiKeyRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // The key's fingerprint. + Fingerprint *string `mandatory:"true" contributesTo:"path" name:"fingerprint"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteApiKeyRequest) String() string { + return common.PointerString(request) +} + +// DeleteApiKeyResponse wrapper for the DeleteApiKey operation +type DeleteApiKeyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteApiKeyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_customer_secret_key_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_customer_secret_key_request_response.go new file mode 100644 index 000000000..02855f42b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_customer_secret_key_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteCustomerSecretKeyRequest wrapper for the DeleteCustomerSecretKey operation +type DeleteCustomerSecretKeyRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // The OCID of the secret key. + CustomerSecretKeyId *string `mandatory:"true" contributesTo:"path" name:"customerSecretKeyId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteCustomerSecretKeyRequest) String() string { + return common.PointerString(request) +} + +// DeleteCustomerSecretKeyResponse wrapper for the DeleteCustomerSecretKey operation +type DeleteCustomerSecretKeyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteCustomerSecretKeyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_group_request_response.go new file mode 100644 index 000000000..9b63b8bf4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_group_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteGroupRequest wrapper for the DeleteGroup operation +type DeleteGroupRequest struct { + + // The OCID of the group. + GroupId *string `mandatory:"true" contributesTo:"path" name:"groupId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteGroupRequest) String() string { + return common.PointerString(request) +} + +// DeleteGroupResponse wrapper for the DeleteGroup operation +type DeleteGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_identity_provider_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_identity_provider_request_response.go new file mode 100644 index 000000000..e0d82c0fa --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_identity_provider_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteIdentityProviderRequest wrapper for the DeleteIdentityProvider operation +type DeleteIdentityProviderRequest struct { + + // The OCID of the identity provider. + IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteIdentityProviderRequest) String() string { + return common.PointerString(request) +} + +// DeleteIdentityProviderResponse wrapper for the DeleteIdentityProvider operation +type DeleteIdentityProviderResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteIdentityProviderResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_idp_group_mapping_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_idp_group_mapping_request_response.go new file mode 100644 index 000000000..02d2b8c50 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_idp_group_mapping_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteIdpGroupMappingRequest wrapper for the DeleteIdpGroupMapping operation +type DeleteIdpGroupMappingRequest struct { + + // The OCID of the identity provider. + IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` + + // The OCID of the group mapping. + MappingId *string `mandatory:"true" contributesTo:"path" name:"mappingId"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteIdpGroupMappingRequest) String() string { + return common.PointerString(request) +} + +// DeleteIdpGroupMappingResponse wrapper for the DeleteIdpGroupMapping operation +type DeleteIdpGroupMappingResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteIdpGroupMappingResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_policy_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_policy_request_response.go new file mode 100644 index 000000000..905f97563 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_policy_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeletePolicyRequest wrapper for the DeletePolicy operation +type DeletePolicyRequest struct { + + // The OCID of the policy. + PolicyId *string `mandatory:"true" contributesTo:"path" name:"policyId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeletePolicyRequest) String() string { + return common.PointerString(request) +} + +// DeletePolicyResponse wrapper for the DeletePolicy operation +type DeletePolicyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeletePolicyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_swift_password_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_swift_password_request_response.go new file mode 100644 index 000000000..fac41a4f3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_swift_password_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteSwiftPasswordRequest wrapper for the DeleteSwiftPassword operation +type DeleteSwiftPasswordRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // The OCID of the Swift password. 
+ SwiftPasswordId *string `mandatory:"true" contributesTo:"path" name:"swiftPasswordId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteSwiftPasswordRequest) String() string { + return common.PointerString(request) +} + +// DeleteSwiftPasswordResponse wrapper for the DeleteSwiftPassword operation +type DeleteSwiftPasswordResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteSwiftPasswordResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/delete_user_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/delete_user_request_response.go new file mode 100644 index 000000000..5dffac8f5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/delete_user_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteUserRequest wrapper for the DeleteUser operation +type DeleteUserRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request DeleteUserRequest) String() string { + return common.PointerString(request) +} + +// DeleteUserResponse wrapper for the DeleteUser operation +type DeleteUserResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteUserResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/fault_domain.go b/vendor/github.com/oracle/oci-go-sdk/identity/fault_domain.go new file mode 100644 index 000000000..4f596c95b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/fault_domain.go @@ -0,0 +1,32 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. 
+// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// FaultDomain A Fault Domain is a logical grouping of hardware and infrastructure within an Availability Domain that can become +// unavailable in its entirety either due to hardware failure such as Top-of-rack (TOR) switch failure or due to +// planned software maintenance such as security updates that reboot your instances. +type FaultDomain struct { + + // The name of the Fault Domain. + Name *string `mandatory:"false" json:"name"` + + // The OCID of the of the compartment. Currently only tenancy (root) compartment can be provided. + CompartmentId *string `mandatory:"false" json:"compartmentId"` + + // The name of the availabilityDomain where the Fault Domain belongs. + AvailabilityDomain *string `mandatory:"false" json:"availabilityDomain"` +} + +func (m FaultDomain) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_compartment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_compartment_request_response.go new file mode 100644 index 000000000..f85794587 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_compartment_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetCompartmentRequest wrapper for the GetCompartment operation +type GetCompartmentRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"path" name:"compartmentId"` +} + +func (request GetCompartmentRequest) String() string { + return common.PointerString(request) +} + +// GetCompartmentResponse wrapper for the GetCompartment operation +type GetCompartmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Compartment instance + Compartment `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response GetCompartmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_group_request_response.go new file mode 100644 index 000000000..90d97badc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_group_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetGroupRequest wrapper for the GetGroup operation +type GetGroupRequest struct { + + // The OCID of the group. + GroupId *string `mandatory:"true" contributesTo:"path" name:"groupId"` +} + +func (request GetGroupRequest) String() string { + return common.PointerString(request) +} + +// GetGroupResponse wrapper for the GetGroup operation +type GetGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Group instance + Group `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response GetGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_identity_provider_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_identity_provider_request_response.go new file mode 100644 index 000000000..503c1687a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_identity_provider_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetIdentityProviderRequest wrapper for the GetIdentityProvider operation +type GetIdentityProviderRequest struct { + + // The OCID of the identity provider. + IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` +} + +func (request GetIdentityProviderRequest) String() string { + return common.PointerString(request) +} + +// GetIdentityProviderResponse wrapper for the GetIdentityProvider operation +type GetIdentityProviderResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IdentityProvider instance + IdentityProvider `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response GetIdentityProviderResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_idp_group_mapping_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_idp_group_mapping_request_response.go new file mode 100644 index 000000000..b589e1e9f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_idp_group_mapping_request_response.go @@ -0,0 +1,44 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetIdpGroupMappingRequest wrapper for the GetIdpGroupMapping operation +type GetIdpGroupMappingRequest struct { + + // The OCID of the identity provider. + IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` + + // The OCID of the group mapping. + MappingId *string `mandatory:"true" contributesTo:"path" name:"mappingId"` +} + +func (request GetIdpGroupMappingRequest) String() string { + return common.PointerString(request) +} + +// GetIdpGroupMappingResponse wrapper for the GetIdpGroupMapping operation +type GetIdpGroupMappingResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IdpGroupMapping instance + IdpGroupMapping `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. 
See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response GetIdpGroupMappingResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_policy_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_policy_request_response.go new file mode 100644 index 000000000..3bff5478c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_policy_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetPolicyRequest wrapper for the GetPolicy operation +type GetPolicyRequest struct { + + // The OCID of the policy. + PolicyId *string `mandatory:"true" contributesTo:"path" name:"policyId"` +} + +func (request GetPolicyRequest) String() string { + return common.PointerString(request) +} + +// GetPolicyResponse wrapper for the GetPolicy operation +type GetPolicyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Policy instance + Policy `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response GetPolicyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_tenancy_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_tenancy_request_response.go new file mode 100644 index 000000000..8ad0f641d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_tenancy_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetTenancyRequest wrapper for the GetTenancy operation +type GetTenancyRequest struct { + + // The OCID of the tenancy. + TenancyId *string `mandatory:"true" contributesTo:"path" name:"tenancyId"` +} + +func (request GetTenancyRequest) String() string { + return common.PointerString(request) +} + +// GetTenancyResponse wrapper for the GetTenancy operation +type GetTenancyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Tenancy instance + Tenancy `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetTenancyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_user_group_membership_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_user_group_membership_request_response.go new file mode 100644 index 000000000..492240dbf --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_user_group_membership_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
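The `Get*` wrappers above return an `Etag` header that pairs with the `IfMatch` field on the corresponding update and delete requests for optimistic concurrency. The sketch below is illustrative only and not part of the vendored SDK: it assumes the `IdentityClient` defined later in this diff exposes `GetPolicy` and `DeletePolicy` methods matching these request/response pairs, and the policy OCID is a placeholder.

```
package main

import (
	"context"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

func main() {
	client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	policyID := "ocid1.policy.oc1..example" // placeholder OCID

	// Read the policy first; the response carries its current etag.
	getResp, err := client.GetPolicy(ctx, identity.GetPolicyRequest{PolicyId: &policyID})
	if err != nil {
		log.Fatal(err)
	}

	// Delete only if the policy is still at the version just read: the etag
	// goes into the if-match header, so a concurrent change makes the call
	// fail instead of removing a version the caller never saw.
	_, err = client.DeletePolicy(ctx, identity.DeletePolicyRequest{
		PolicyId: &policyID,
		IfMatch:  getResp.Etag,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```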
+ +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetUserGroupMembershipRequest wrapper for the GetUserGroupMembership operation +type GetUserGroupMembershipRequest struct { + + // The OCID of the userGroupMembership. + UserGroupMembershipId *string `mandatory:"true" contributesTo:"path" name:"userGroupMembershipId"` +} + +func (request GetUserGroupMembershipRequest) String() string { + return common.PointerString(request) +} + +// GetUserGroupMembershipResponse wrapper for the GetUserGroupMembership operation +type GetUserGroupMembershipResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The UserGroupMembership instance + UserGroupMembership `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response GetUserGroupMembershipResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/get_user_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/get_user_request_response.go new file mode 100644 index 000000000..980955dad --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/get_user_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetUserRequest wrapper for the GetUser operation +type GetUserRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` +} + +func (request GetUserRequest) String() string { + return common.PointerString(request) +} + +// GetUserResponse wrapper for the GetUser operation +type GetUserResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The User instance + User `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response GetUserResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/group.go b/vendor/github.com/oracle/oci-go-sdk/identity/group.go new file mode 100644 index 000000000..8029ba950 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/group.go @@ -0,0 +1,84 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Group A collection of users who all need the same type of access to a particular set of resources or compartment. +// For conceptual information about groups and other IAM Service components, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). 
+// If you're federating with an identity provider (IdP), you need to create mappings between the groups +// defined in the IdP and groups you define in the IAM service. For more information, see +// Identity Providers and Federation (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/federation.htm). Also see +// IdentityProvider and +// IdpGroupMapping. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Group struct { + + // The OCID of the group. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the tenancy containing the group. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the group during creation. The name must be unique across all groups in + // the tenancy and cannot be changed. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the group. Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` + + // Date and time the group was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The group's current state. After creating a group, make sure its `lifecycleState` changes from CREATING to + // ACTIVE before using it. + LifecycleState GroupLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The detailed status of INACTIVE lifecycleState. + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m Group) String() string { + return common.PointerString(m) +} + +// GroupLifecycleStateEnum Enum with underlying type: string +type GroupLifecycleStateEnum string + +// Set of constants representing the allowable values for GroupLifecycleState +const ( + GroupLifecycleStateCreating GroupLifecycleStateEnum = "CREATING" + GroupLifecycleStateActive GroupLifecycleStateEnum = "ACTIVE" + GroupLifecycleStateInactive GroupLifecycleStateEnum = "INACTIVE" + GroupLifecycleStateDeleting GroupLifecycleStateEnum = "DELETING" + GroupLifecycleStateDeleted GroupLifecycleStateEnum = "DELETED" +) + +var mappingGroupLifecycleState = map[string]GroupLifecycleStateEnum{ + "CREATING": GroupLifecycleStateCreating, + "ACTIVE": GroupLifecycleStateActive, + "INACTIVE": GroupLifecycleStateInactive, + "DELETING": GroupLifecycleStateDeleting, + "DELETED": GroupLifecycleStateDeleted, +} + +// GetGroupLifecycleStateEnumValues Enumerates the set of values for GroupLifecycleState +func GetGroupLifecycleStateEnumValues() []GroupLifecycleStateEnum { + values := make([]GroupLifecycleStateEnum, 0) + for _, v := range mappingGroupLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/identity_client.go b/vendor/github.com/oracle/oci-go-sdk/identity/identity_client.go new file mode 100644 index 000000000..8bf0c90c5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/identity_client.go @@ -0,0 +1,1167 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. 
+// + +package identity + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//IdentityClient a client for Identity +type IdentityClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +// NewIdentityClientWithConfigurationProvider Creates a new default Identity client with the given configuration provider. +// the configuration provider will be used for the default signer as well as reading the region +func NewIdentityClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client IdentityClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + + client = IdentityClient{BaseClient: baseClient} + client.BasePath = "20160918" + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. +func (client *IdentityClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "identity", region) +} + +// SetConfigurationProvider sets the configuration provider including the region, returns an error if is not valid +func (client *IdentityClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or null if none set +func (client *IdentityClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// AddUserToGroup Adds the specified user to the specified group and returns a `UserGroupMembership` object with its own OCID. +// After you send your request, the new object's `lifecycleState` will temporarily be CREATING. Before using the +// object, first make sure its `lifecycleState` has changed to ACTIVE. +func (client IdentityClient) AddUserToGroup(ctx context.Context, request AddUserToGroupRequest) (response AddUserToGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/userGroupMemberships/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateCompartment Creates a new compartment in your tenancy. +// **Important:** Compartments cannot be deleted. +// You must specify your tenancy's OCID as the compartment ID in the request object. Remember that the tenancy +// is simply the root compartment. For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You must also specify a *name* for the compartment, which must be unique across all compartments in +// your tenancy. You can use this name or the OCID when writing policies that apply +// to the compartment. For more information about policies, see +// How Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policies.htm). +// You must also specify a *description* for the compartment (although it can be an empty string). It does +// not have to be unique, and you can change it anytime with +// UpdateCompartment. 
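The client constructor above takes a common.ConfigurationProvider, and every operation accepts a context plus a typed request struct. A minimal sketch of how calling code might exercise it, assuming the SDK's common.DefaultConfigProvider helper (which reads ~/.oci/config and is not shown in this diff) and a placeholder tenancy OCID:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

func main() {
	// Assumption: common.DefaultConfigProvider reads the standard ~/.oci/config
	// file; it also supplies the region used when the client is constructed.
	client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder OCID, for illustration only.
	tenancyID := "ocid1.tenancy.oc1..exampleuniqueid"

	resp, err := client.GetTenancy(context.Background(), identity.GetTenancyRequest{
		TenancyId: &tenancyID,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Tenancy)
}
```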
+// After you send your request, the new object's `lifecycleState` will temporarily be CREATING. Before using the +// object, first make sure its `lifecycleState` has changed to ACTIVE. +func (client IdentityClient) CreateCompartment(ctx context.Context, request CreateCompartmentRequest) (response CreateCompartmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/compartments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateCustomerSecretKey Creates a new secret key for the specified user. Secret keys are used for authentication with the Object Storage Service's Amazon S3 +// compatible API. For information, see +// Managing User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Tasks/managingcredentials.htm). +// You must specify a *description* for the secret key (although it can be an empty string). It does not +// have to be unique, and you can change it anytime with +// UpdateCustomerSecretKey. +// Every user has permission to create a secret key for *their own user ID*. An administrator in your organization +// does not need to write a policy to give users this ability. To compare, administrators who have permission to the +// tenancy can use this operation to create a secret key for any user, including themselves. +func (client IdentityClient) CreateCustomerSecretKey(ctx context.Context, request CreateCustomerSecretKeyRequest) (response CreateCustomerSecretKeyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/users/{userId}/customerSecretKeys/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateGroup Creates a new group in your tenancy. +// You must specify your tenancy's OCID as the compartment ID in the request object (remember that the tenancy +// is simply the root compartment). Notice that IAM resources (users, groups, compartments, and some policies) +// reside within the tenancy itself, unlike cloud resources such as compute instances, which typically +// reside within compartments inside the tenancy. For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You must also specify a *name* for the group, which must be unique across all groups in your tenancy and +// cannot be changed. You can use this name or the OCID when writing policies that apply to the group. For more +// information about policies, see How Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policies.htm). +// You must also specify a *description* for the group (although it can be an empty string). It does not +// have to be unique, and you can change it anytime with UpdateGroup. +// After you send your request, the new object's `lifecycleState` will temporarily be CREATING. Before using the +// object, first make sure its `lifecycleState` has changed to ACTIVE. +// After creating the group, you need to put users in it and write policies for it. 
+// See AddUserToGroup and +// CreatePolicy. +func (client IdentityClient) CreateGroup(ctx context.Context, request CreateGroupRequest) (response CreateGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/groups/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateIdentityProvider Creates a new identity provider in your tenancy. For more information, see +// Identity Providers and Federation (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/federation.htm). +// You must specify your tenancy's OCID as the compartment ID in the request object. +// Remember that the tenancy is simply the root compartment. For information about +// OCIDs, see Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You must also specify a *name* for the `IdentityProvider`, which must be unique +// across all `IdentityProvider` objects in your tenancy and cannot be changed. +// You must also specify a *description* for the `IdentityProvider` (although +// it can be an empty string). It does not have to be unique, and you can change +// it anytime with +// UpdateIdentityProvider. +// After you send your request, the new object's `lifecycleState` will temporarily +// be CREATING. Before using the object, first make sure its `lifecycleState` has +// changed to ACTIVE. +func (client IdentityClient) CreateIdentityProvider(ctx context.Context, request CreateIdentityProviderRequest) (response CreateIdentityProviderResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/identityProviders/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponseWithPolymorphicBody(httpResponse, &response, &identityprovider{}) + return +} + +// CreateIdpGroupMapping Creates a single mapping between an IdP group and an IAM Service +// Group. +func (client IdentityClient) CreateIdpGroupMapping(ctx context.Context, request CreateIdpGroupMappingRequest) (response CreateIdpGroupMappingResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/identityProviders/{identityProviderId}/groupMappings/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateOrResetUIPassword Creates a new Console one-time password for the specified user. For more information about user +// credentials, see User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/usercredentials.htm). +// Use this operation after creating a new user, or if a user forgets their password. The new one-time +// password is returned to you in the response, and you must securely deliver it to the user. They'll +// be prompted to change this password the next time they sign in to the Console. 
If they don't change +// it within 7 days, the password will expire and you'll need to create a new one-time password for the +// user. +// **Note:** The user's Console login is the unique name you specified when you created the user +// (see CreateUser). +func (client IdentityClient) CreateOrResetUIPassword(ctx context.Context, request CreateOrResetUIPasswordRequest) (response CreateOrResetUIPasswordResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/users/{userId}/uiPassword", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreatePolicy Creates a new policy in the specified compartment (either the tenancy or another of your compartments). +// If you're new to policies, see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +// You must specify a *name* for the policy, which must be unique across all policies in your tenancy +// and cannot be changed. +// You must also specify a *description* for the policy (although it can be an empty string). It does not +// have to be unique, and you can change it anytime with UpdatePolicy. +// You must specify one or more policy statements in the statements array. For information about writing +// policies, see How Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policies.htm) and +// Common Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/commonpolicies.htm). +// After you send your request, the new object's `lifecycleState` will temporarily be CREATING. Before using the +// object, first make sure its `lifecycleState` has changed to ACTIVE. +// New policies take effect typically within 10 seconds. +func (client IdentityClient) CreatePolicy(ctx context.Context, request CreatePolicyRequest) (response CreatePolicyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/policies/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateRegionSubscription Creates a subscription to a region for a tenancy. +func (client IdentityClient) CreateRegionSubscription(ctx context.Context, request CreateRegionSubscriptionRequest) (response CreateRegionSubscriptionResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/tenancies/{tenancyId}/regionSubscriptions", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateSwiftPassword Creates a new Swift password for the specified user. For information about what Swift passwords are for, see +// Managing User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Tasks/managingcredentials.htm). +// You must specify a *description* for the Swift password (although it can be an empty string). 
It does not +// have to be unique, and you can change it anytime with +// UpdateSwiftPassword. +// Every user has permission to create a Swift password for *their own user ID*. An administrator in your organization +// does not need to write a policy to give users this ability. To compare, administrators who have permission to the +// tenancy can use this operation to create a Swift password for any user, including themselves. +func (client IdentityClient) CreateSwiftPassword(ctx context.Context, request CreateSwiftPasswordRequest) (response CreateSwiftPasswordResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/users/{userId}/swiftPasswords/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateUser Creates a new user in your tenancy. For conceptual information about users, your tenancy, and other +// IAM Service components, see Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// You must specify your tenancy's OCID as the compartment ID in the request object (remember that the +// tenancy is simply the root compartment). Notice that IAM resources (users, groups, compartments, and +// some policies) reside within the tenancy itself, unlike cloud resources such as compute instances, +// which typically reside within compartments inside the tenancy. For information about OCIDs, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// You must also specify a *name* for the user, which must be unique across all users in your tenancy +// and cannot be changed. Allowed characters: No spaces. Only letters, numerals, hyphens, periods, +// underscores, +, and @. If you specify a name that's already in use, you'll get a 409 error. +// This name will be the user's login to the Console. You might want to pick a +// name that your company's own identity system (e.g., Active Directory, LDAP, etc.) already uses. +// If you delete a user and then create a new user with the same name, they'll be considered different +// users because they have different OCIDs. +// You must also specify a *description* for the user (although it can be an empty string). +// It does not have to be unique, and you can change it anytime with +// UpdateUser. You can use the field to provide the user's +// full name, a description, a nickname, or other information to generally identify the user. +// After you send your request, the new object's `lifecycleState` will temporarily be CREATING. Before +// using the object, first make sure its `lifecycleState` has changed to ACTIVE. +// A new user has no permissions until you place the user in one or more groups (see +// AddUserToGroup). If the user needs to +// access the Console, you need to provide the user a password (see +// CreateOrResetUIPassword). +// If the user needs to access the Oracle Cloud Infrastructure REST API, you need to upload a +// public API signing key for that user (see +// Required Keys and OCIDs (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm) and also +// UploadApiKey). +// **Important:** Make sure to inform the new user which compartment(s) they have access to. 
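The create operations above share the same contract: the new object's `lifecycleState` starts as CREATING, and callers should wait for ACTIVE before using it. A sketch of that polling pattern for groups, written against a caller-supplied fetch function so it relies only on the Group type and lifecycle constants added in this diff; the poll interval and error handling are illustrative assumptions:

```
package identitywait

import (
	"context"
	"errors"
	"time"

	"github.com/oracle/oci-go-sdk/identity"
)

// WaitForGroupActive polls fetch until the group reports an ACTIVE
// lifecycleState, the group enters an unexpected state, or ctx is cancelled.
func WaitForGroupActive(ctx context.Context, fetch func(context.Context) (identity.Group, error)) (identity.Group, error) {
	ticker := time.NewTicker(2 * time.Second) // illustrative poll interval
	defer ticker.Stop()

	for {
		g, err := fetch(ctx)
		if err != nil {
			return identity.Group{}, err
		}

		switch g.LifecycleState {
		case identity.GroupLifecycleStateActive:
			return g, nil
		case identity.GroupLifecycleStateCreating:
			// Still provisioning; wait for the next tick.
		default:
			return identity.Group{}, errors.New("group entered unexpected state " + string(g.LifecycleState))
		}

		select {
		case <-ctx.Done():
			return identity.Group{}, ctx.Err()
		case <-ticker.C:
		}
	}
}
```

In practice the fetch closure would wrap client.GetGroup with the new group's OCID, and the same pattern applies to users, policies, and identity providers, which expose analogous lifecycle enums.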
+func (client IdentityClient) CreateUser(ctx context.Context, request CreateUserRequest) (response CreateUserResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/users/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteApiKey Deletes the specified API signing key for the specified user. +// Every user has permission to use this operation to delete a key for *their own user ID*. An +// administrator in your organization does not need to write a policy to give users this ability. +// To compare, administrators who have permission to the tenancy can use this operation to delete +// a key for any user, including themselves. +func (client IdentityClient) DeleteApiKey(ctx context.Context, request DeleteApiKeyRequest) (response DeleteApiKeyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/users/{userId}/apiKeys/{fingerprint}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteCustomerSecretKey Deletes the specified secret key for the specified user. +func (client IdentityClient) DeleteCustomerSecretKey(ctx context.Context, request DeleteCustomerSecretKeyRequest) (response DeleteCustomerSecretKeyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/users/{userId}/customerSecretKeys/{customerSecretKeyId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteGroup Deletes the specified group. The group must be empty. +func (client IdentityClient) DeleteGroup(ctx context.Context, request DeleteGroupRequest) (response DeleteGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/groups/{groupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteIdentityProvider Deletes the specified identity provider. The identity provider must not have +// any group mappings (see IdpGroupMapping). 
+func (client IdentityClient) DeleteIdentityProvider(ctx context.Context, request DeleteIdentityProviderRequest) (response DeleteIdentityProviderResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/identityProviders/{identityProviderId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteIdpGroupMapping Deletes the specified group mapping. +func (client IdentityClient) DeleteIdpGroupMapping(ctx context.Context, request DeleteIdpGroupMappingRequest) (response DeleteIdpGroupMappingResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/identityProviders/{identityProviderId}/groupMappings/{mappingId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeletePolicy Deletes the specified policy. The deletion takes effect typically within 10 seconds. +func (client IdentityClient) DeletePolicy(ctx context.Context, request DeletePolicyRequest) (response DeletePolicyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/policies/{policyId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteSwiftPassword Deletes the specified Swift password for the specified user. +func (client IdentityClient) DeleteSwiftPassword(ctx context.Context, request DeleteSwiftPasswordRequest) (response DeleteSwiftPasswordResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/users/{userId}/swiftPasswords/{swiftPasswordId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteUser Deletes the specified user. The user must not be in any groups. +func (client IdentityClient) DeleteUser(ctx context.Context, request DeleteUserRequest) (response DeleteUserResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/users/{userId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetCompartment Gets the specified compartment's information. +// This operation does not return a list of all the resources inside the compartment. There is no single +// API operation that does that. Compartments can contain multiple types of resources (instances, block +// storage volumes, etc.). 
To find out what's in a compartment, you must call the "List" operation for +// each resource type and specify the compartment's OCID as a query parameter in the request. For example, +// call the ListInstances operation in the Cloud Compute +// Service or the ListVolumes operation in Cloud Block Storage. +func (client IdentityClient) GetCompartment(ctx context.Context, request GetCompartmentRequest) (response GetCompartmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/compartments/{compartmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetGroup Gets the specified group's information. +// This operation does not return a list of all the users in the group. To do that, use +// ListUserGroupMemberships and +// provide the group's OCID as a query parameter in the request. +func (client IdentityClient) GetGroup(ctx context.Context, request GetGroupRequest) (response GetGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/groups/{groupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetIdentityProvider Gets the specified identity provider's information. +func (client IdentityClient) GetIdentityProvider(ctx context.Context, request GetIdentityProviderRequest) (response GetIdentityProviderResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/identityProviders/{identityProviderId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponseWithPolymorphicBody(httpResponse, &response, &identityprovider{}) + return +} + +// GetIdpGroupMapping Gets the specified group mapping. +func (client IdentityClient) GetIdpGroupMapping(ctx context.Context, request GetIdpGroupMappingRequest) (response GetIdpGroupMappingResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/identityProviders/{identityProviderId}/groupMappings/{mappingId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetPolicy Gets the specified policy's information. 
+func (client IdentityClient) GetPolicy(ctx context.Context, request GetPolicyRequest) (response GetPolicyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/policies/{policyId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetTenancy Get the specified tenancy's information. +func (client IdentityClient) GetTenancy(ctx context.Context, request GetTenancyRequest) (response GetTenancyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/tenancies/{tenancyId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetUser Gets the specified user's information. +func (client IdentityClient) GetUser(ctx context.Context, request GetUserRequest) (response GetUserResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/users/{userId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetUserGroupMembership Gets the specified UserGroupMembership's information. +func (client IdentityClient) GetUserGroupMembership(ctx context.Context, request GetUserGroupMembershipRequest) (response GetUserGroupMembershipResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/userGroupMemberships/{userGroupMembershipId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListApiKeys Lists the API signing keys for the specified user. A user can have a maximum of three keys. +// Every user has permission to use this API call for *their own user ID*. An administrator in your +// organization does not need to write a policy to give users this ability. +func (client IdentityClient) ListApiKeys(ctx context.Context, request ListApiKeysRequest) (response ListApiKeysResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/users/{userId}/apiKeys/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListAvailabilityDomains Lists the Availability Domains in your tenancy. Specify the OCID of either the tenancy or another +// of your compartments as the value for the compartment ID (remember that the tenancy is simply the root compartment). +// See Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five). 
+func (client IdentityClient) ListAvailabilityDomains(ctx context.Context, request ListAvailabilityDomainsRequest) (response ListAvailabilityDomainsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/availabilityDomains/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCompartments Lists the compartments in your tenancy. You must specify your tenancy's OCID as the value +// for the compartment ID (remember that the tenancy is simply the root compartment). +// See Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five). +func (client IdentityClient) ListCompartments(ctx context.Context, request ListCompartmentsRequest) (response ListCompartmentsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/compartments/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCustomerSecretKeys Lists the secret keys for the specified user. The returned object contains the secret key's OCID, but not +// the secret key itself. The actual secret key is returned only upon creation. +func (client IdentityClient) ListCustomerSecretKeys(ctx context.Context, request ListCustomerSecretKeysRequest) (response ListCustomerSecretKeysResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/users/{userId}/customerSecretKeys/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListFaultDomains Lists the Fault Domains in your tenancy. Specify the OCID of either the tenancy or another +// of your compartments as the value for the compartment ID (remember that the tenancy is simply the root compartment). +// See Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five). +func (client IdentityClient) ListFaultDomains(ctx context.Context, request ListFaultDomainsRequest) (response ListFaultDomainsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/faultDomains/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListGroups Lists the groups in your tenancy. You must specify your tenancy's OCID as the value for +// the compartment ID (remember that the tenancy is simply the root compartment). +// See Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five). 
+func (client IdentityClient) ListGroups(ctx context.Context, request ListGroupsRequest) (response ListGroupsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/groups/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +//listidentityprovider allows to unmarshal list of polymorphic IdentityProvider +type listidentityprovider []identityprovider + +//UnmarshalPolymorphicJSON unmarshals polymorphic json list of items +func (m *listidentityprovider) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + res := make([]IdentityProvider, len(*m)) + for i, v := range *m { + nn, err := v.UnmarshalPolymorphicJSON(v.JsonData) + if err != nil { + return nil, err + } + res[i] = nn.(IdentityProvider) + } + return res, nil +} + +// ListIdentityProviders Lists all the identity providers in your tenancy. You must specify the identity provider type (e.g., `SAML2` for +// identity providers using the SAML2.0 protocol). You must specify your tenancy's OCID as the value for the +// compartment ID (remember that the tenancy is simply the root compartment). +// See Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five). +func (client IdentityClient) ListIdentityProviders(ctx context.Context, request ListIdentityProvidersRequest) (response ListIdentityProvidersResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/identityProviders/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponseWithPolymorphicBody(httpResponse, &response, &listidentityprovider{}) + return +} + +// ListIdpGroupMappings Lists the group mappings for the specified identity provider. +func (client IdentityClient) ListIdpGroupMappings(ctx context.Context, request ListIdpGroupMappingsRequest) (response ListIdpGroupMappingsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/identityProviders/{identityProviderId}/groupMappings/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListPolicies Lists the policies in the specified compartment (either the tenancy or another of your compartments). +// See Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five). +// To determine which policies apply to a particular group or compartment, you must view the individual +// statements inside all your policies. There isn't a way to automatically obtain that information via the API. 
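GetIdentityProvider and ListIdentityProviders deserialize their bodies with UnmarshalResponseWithPolymorphicBody, so callers receive values satisfying the IdentityProvider interface (defined in identity_provider.go later in this diff), with Saml2IdentityProvider as the concrete type for the SAML2 protocol. A small illustrative sketch of inspecting such a value through the interface getters and a type switch; the helper name and output format are assumptions:

```
package identityinspect

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/identity"
)

// DescribeProvider prints the common fields exposed by the IdentityProvider
// interface and notes when the concrete type is the SAML 2.0 implementation.
func DescribeProvider(p identity.IdentityProvider) {
	fmt.Printf("name=%s product=%s state=%s\n",
		*p.GetName(), *p.GetProductType(), p.GetLifecycleState())

	switch p.(type) {
	case identity.Saml2IdentityProvider:
		// The SAML2 case produced by identityprovider.UnmarshalPolymorphicJSON.
		fmt.Println("provider uses the SAML 2.0 protocol")
	default:
		fmt.Println("provider uses an unrecognized protocol")
	}
}
```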
+func (client IdentityClient) ListPolicies(ctx context.Context, request ListPoliciesRequest) (response ListPoliciesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/policies/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListRegionSubscriptions Lists the region subscriptions for the specified tenancy. +func (client IdentityClient) ListRegionSubscriptions(ctx context.Context, request ListRegionSubscriptionsRequest) (response ListRegionSubscriptionsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/tenancies/{tenancyId}/regionSubscriptions", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListRegions Lists all the regions offered by Oracle Cloud Infrastructure. +func (client IdentityClient) ListRegions(ctx context.Context) (response ListRegionsResponse, err error) { + httpRequest := common.MakeDefaultHTTPRequest(http.MethodGet, "/regions") + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListSwiftPasswords Lists the Swift passwords for the specified user. The returned object contains the password's OCID, but not +// the password itself. The actual password is returned only upon creation. +func (client IdentityClient) ListSwiftPasswords(ctx context.Context, request ListSwiftPasswordsRequest) (response ListSwiftPasswordsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/users/{userId}/swiftPasswords/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListUserGroupMemberships Lists the `UserGroupMembership` objects in your tenancy. You must specify your tenancy's OCID +// as the value for the compartment ID +// (see Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five)). +// You must also then filter the list in one of these ways: +// - You can limit the results to just the memberships for a given user by specifying a `userId`. +// - Similarly, you can limit the results to just the memberships for a given group by specifying a `groupId`. +// - You can set both the `userId` and `groupId` to determine if the specified user is in the specified group. +// If the answer is no, the response is an empty list. 
+func (client IdentityClient) ListUserGroupMemberships(ctx context.Context, request ListUserGroupMembershipsRequest) (response ListUserGroupMembershipsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/userGroupMemberships/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListUsers Lists the users in your tenancy. You must specify your tenancy's OCID as the value for the +// compartment ID (remember that the tenancy is simply the root compartment). +// See Where to Get the Tenancy's OCID and User's OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm#five). +func (client IdentityClient) ListUsers(ctx context.Context, request ListUsersRequest) (response ListUsersResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/users/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// RemoveUserFromGroup Removes a user from a group by deleting the corresponding `UserGroupMembership`. +func (client IdentityClient) RemoveUserFromGroup(ctx context.Context, request RemoveUserFromGroupRequest) (response RemoveUserFromGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/userGroupMemberships/{userGroupMembershipId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateCompartment Updates the specified compartment's description or name. You can't update the root compartment. +func (client IdentityClient) UpdateCompartment(ctx context.Context, request UpdateCompartmentRequest) (response UpdateCompartmentResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/compartments/{compartmentId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateCustomerSecretKey Updates the specified secret key's description. +func (client IdentityClient) UpdateCustomerSecretKey(ctx context.Context, request UpdateCustomerSecretKeyRequest) (response UpdateCustomerSecretKeyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/users/{userId}/customerSecretKeys/{customerSecretKeyId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateGroup Updates the specified group. 
+func (client IdentityClient) UpdateGroup(ctx context.Context, request UpdateGroupRequest) (response UpdateGroupResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/groups/{groupId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateIdentityProvider Updates the specified identity provider. +func (client IdentityClient) UpdateIdentityProvider(ctx context.Context, request UpdateIdentityProviderRequest) (response UpdateIdentityProviderResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/identityProviders/{identityProviderId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponseWithPolymorphicBody(httpResponse, &response, &identityprovider{}) + return +} + +// UpdateIdpGroupMapping Updates the specified group mapping. +func (client IdentityClient) UpdateIdpGroupMapping(ctx context.Context, request UpdateIdpGroupMappingRequest) (response UpdateIdpGroupMappingResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/identityProviders/{identityProviderId}/groupMappings/{mappingId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdatePolicy Updates the specified policy. You can update the description or the policy statements themselves. +// Policy changes take effect typically within 10 seconds. +func (client IdentityClient) UpdatePolicy(ctx context.Context, request UpdatePolicyRequest) (response UpdatePolicyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/policies/{policyId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateSwiftPassword Updates the specified Swift password's description. +func (client IdentityClient) UpdateSwiftPassword(ctx context.Context, request UpdateSwiftPasswordRequest) (response UpdateSwiftPasswordResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/users/{userId}/swiftPasswords/{swiftPasswordId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateUser Updates the description of the specified user. 
+func (client IdentityClient) UpdateUser(ctx context.Context, request UpdateUserRequest) (response UpdateUserResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/users/{userId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateUserState Updates the state of the specified user. +func (client IdentityClient) UpdateUserState(ctx context.Context, request UpdateUserStateRequest) (response UpdateUserStateResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/users/{userId}/state/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UploadApiKey Uploads an API signing key for the specified user. +// Every user has permission to use this operation to upload a key for *their own user ID*. An +// administrator in your organization does not need to write a policy to give users this ability. +// To compare, administrators who have permission to the tenancy can use this operation to upload a +// key for any user, including themselves. +// **Important:** Even though you have permission to upload an API key, you might not yet +// have permission to do much else. If you try calling an operation unrelated to your own credential +// management (e.g., `ListUsers`, `LaunchInstance`) and receive an "unauthorized" error, +// check with an administrator to confirm which IAM Service group(s) you're in and what access +// you have. Also confirm you're working in the correct compartment. +// After you send your request, the new object's `lifecycleState` will temporarily be CREATING. Before using +// the object, first make sure its `lifecycleState` has changed to ACTIVE. +func (client IdentityClient) UploadApiKey(ctx context.Context, request UploadApiKeyRequest) (response UploadApiKeyResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/users/{userId}/apiKeys/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/identity_provider.go b/vendor/github.com/oracle/oci-go-sdk/identity/identity_provider.go new file mode 100644 index 000000000..170df8879 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/identity_provider.go @@ -0,0 +1,185 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// IdentityProvider The resulting base object when you add an identity provider to your tenancy. A +// Saml2IdentityProvider +// is a specific type of `IdentityProvider` that supports the SAML 2.0 protocol. 
Each +// `IdentityProvider` object has its own OCID. For more information, see +// Identity Providers and Federation (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/federation.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type IdentityProvider interface { + + // The OCID of the `IdentityProvider`. + GetId() *string + + // The OCID of the tenancy containing the `IdentityProvider`. + GetCompartmentId() *string + + // The name you assign to the `IdentityProvider` during creation. The name + // must be unique across all `IdentityProvider` objects in the tenancy and + // cannot be changed. This is the name federated users see when choosing + // which identity provider to use when signing in to the Oracle Cloud Infrastructure + // Console. + GetName() *string + + // The description you assign to the `IdentityProvider` during creation. Does + // not have to be unique, and it's changeable. + GetDescription() *string + + // The identity provider service or product. + // Supported identity providers are Oracle Identity Cloud Service (IDCS) and Microsoft + // Active Directory Federation Services (ADFS). + // Allowed values are: + // - `ADFS` + // - `IDCS` + // Example: `IDCS` + GetProductType() *string + + // Date and time the `IdentityProvider` was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + GetTimeCreated() *common.SDKTime + + // The current state. After creating an `IdentityProvider`, make sure its + // `lifecycleState` changes from CREATING to ACTIVE before using it. + GetLifecycleState() IdentityProviderLifecycleStateEnum + + // The detailed status of INACTIVE lifecycleState. 
+ GetInactiveStatus() *int +} + +type identityprovider struct { + JsonData []byte + Id *string `mandatory:"true" json:"id"` + CompartmentId *string `mandatory:"true" json:"compartmentId"` + Name *string `mandatory:"true" json:"name"` + Description *string `mandatory:"true" json:"description"` + ProductType *string `mandatory:"true" json:"productType"` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + LifecycleState IdentityProviderLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` + Protocol string `json:"protocol"` +} + +// UnmarshalJSON unmarshals json +func (m *identityprovider) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshaleridentityprovider identityprovider + s := struct { + Model Unmarshaleridentityprovider + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.Id = s.Model.Id + m.CompartmentId = s.Model.CompartmentId + m.Name = s.Model.Name + m.Description = s.Model.Description + m.ProductType = s.Model.ProductType + m.TimeCreated = s.Model.TimeCreated + m.LifecycleState = s.Model.LifecycleState + m.InactiveStatus = s.Model.InactiveStatus + m.Protocol = s.Model.Protocol + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *identityprovider) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.Protocol { + case "SAML2": + mm := Saml2IdentityProvider{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +//GetId returns Id +func (m identityprovider) GetId() *string { + return m.Id +} + +//GetCompartmentId returns CompartmentId +func (m identityprovider) GetCompartmentId() *string { + return m.CompartmentId +} + +//GetName returns Name +func (m identityprovider) GetName() *string { + return m.Name +} + +//GetDescription returns Description +func (m identityprovider) GetDescription() *string { + return m.Description +} + +//GetProductType returns ProductType +func (m identityprovider) GetProductType() *string { + return m.ProductType +} + +//GetTimeCreated returns TimeCreated +func (m identityprovider) GetTimeCreated() *common.SDKTime { + return m.TimeCreated +} + +//GetLifecycleState returns LifecycleState +func (m identityprovider) GetLifecycleState() IdentityProviderLifecycleStateEnum { + return m.LifecycleState +} + +//GetInactiveStatus returns InactiveStatus +func (m identityprovider) GetInactiveStatus() *int { + return m.InactiveStatus +} + +func (m identityprovider) String() string { + return common.PointerString(m) +} + +// IdentityProviderLifecycleStateEnum Enum with underlying type: string +type IdentityProviderLifecycleStateEnum string + +// Set of constants representing the allowable values for IdentityProviderLifecycleState +const ( + IdentityProviderLifecycleStateCreating IdentityProviderLifecycleStateEnum = "CREATING" + IdentityProviderLifecycleStateActive IdentityProviderLifecycleStateEnum = "ACTIVE" + IdentityProviderLifecycleStateInactive IdentityProviderLifecycleStateEnum = "INACTIVE" + IdentityProviderLifecycleStateDeleting IdentityProviderLifecycleStateEnum = "DELETING" + IdentityProviderLifecycleStateDeleted IdentityProviderLifecycleStateEnum = "DELETED" +) + +var mappingIdentityProviderLifecycleState = map[string]IdentityProviderLifecycleStateEnum{ + "CREATING": IdentityProviderLifecycleStateCreating, + "ACTIVE": IdentityProviderLifecycleStateActive, + "INACTIVE": IdentityProviderLifecycleStateInactive, + "DELETING": 
IdentityProviderLifecycleStateDeleting, + "DELETED": IdentityProviderLifecycleStateDeleted, +} + +// GetIdentityProviderLifecycleStateEnumValues Enumerates the set of values for IdentityProviderLifecycleState +func GetIdentityProviderLifecycleStateEnumValues() []IdentityProviderLifecycleStateEnum { + values := make([]IdentityProviderLifecycleStateEnum, 0) + for _, v := range mappingIdentityProviderLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/idp_group_mapping.go b/vendor/github.com/oracle/oci-go-sdk/identity/idp_group_mapping.go new file mode 100644 index 000000000..cf9c0100b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/idp_group_mapping.go @@ -0,0 +1,84 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// IdpGroupMapping A mapping between a single group defined by the identity provider (IdP) you're federating with +// and a single IAM Service Group in Oracle Cloud Infrastructure. +// For more information about group mappings and what they're for, see +// Identity Providers and Federation (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/federation.htm). +// A given IdP group can be mapped to zero, one, or multiple IAM Service groups, and vice versa. +// But each `IdPGroupMapping` object is between only a single IdP group and IAM Service group. +// Each `IdPGroupMapping` object has its own OCID. +// **Note:** Any users who are in more than 50 IdP groups cannot be authenticated to use the Oracle +// Cloud Infrastructure Console. +type IdpGroupMapping struct { + + // The OCID of the `IdpGroupMapping`. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the `IdentityProvider` this mapping belongs to. + IdpId *string `mandatory:"true" json:"idpId"` + + // The name of the IdP group that is mapped to the IAM Service group. + IdpGroupName *string `mandatory:"true" json:"idpGroupName"` + + // The OCID of the IAM Service group that is mapped to the IdP group. + GroupId *string `mandatory:"true" json:"groupId"` + + // The OCID of the tenancy containing the `IdentityProvider`. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // Date and time the mapping was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The mapping's current state. After creating a mapping object, make sure its `lifecycleState` changes + // from CREATING to ACTIVE before using it. + LifecycleState IdpGroupMappingLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The detailed status of INACTIVE lifecycleState. 
+ InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m IdpGroupMapping) String() string { + return common.PointerString(m) +} + +// IdpGroupMappingLifecycleStateEnum Enum with underlying type: string +type IdpGroupMappingLifecycleStateEnum string + +// Set of constants representing the allowable values for IdpGroupMappingLifecycleState +const ( + IdpGroupMappingLifecycleStateCreating IdpGroupMappingLifecycleStateEnum = "CREATING" + IdpGroupMappingLifecycleStateActive IdpGroupMappingLifecycleStateEnum = "ACTIVE" + IdpGroupMappingLifecycleStateInactive IdpGroupMappingLifecycleStateEnum = "INACTIVE" + IdpGroupMappingLifecycleStateDeleting IdpGroupMappingLifecycleStateEnum = "DELETING" + IdpGroupMappingLifecycleStateDeleted IdpGroupMappingLifecycleStateEnum = "DELETED" +) + +var mappingIdpGroupMappingLifecycleState = map[string]IdpGroupMappingLifecycleStateEnum{ + "CREATING": IdpGroupMappingLifecycleStateCreating, + "ACTIVE": IdpGroupMappingLifecycleStateActive, + "INACTIVE": IdpGroupMappingLifecycleStateInactive, + "DELETING": IdpGroupMappingLifecycleStateDeleting, + "DELETED": IdpGroupMappingLifecycleStateDeleted, +} + +// GetIdpGroupMappingLifecycleStateEnumValues Enumerates the set of values for IdpGroupMappingLifecycleState +func GetIdpGroupMappingLifecycleStateEnumValues() []IdpGroupMappingLifecycleStateEnum { + values := make([]IdpGroupMappingLifecycleStateEnum, 0) + for _, v := range mappingIdpGroupMappingLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_api_keys_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_api_keys_request_response.go new file mode 100644 index 000000000..62ce550f5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_api_keys_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListApiKeysRequest wrapper for the ListApiKeys operation +type ListApiKeysRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` +} + +func (request ListApiKeysRequest) String() string { + return common.PointerString(request) +} + +// ListApiKeysResponse wrapper for the ListApiKeys operation +type ListApiKeysResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []ApiKey instance + Items []ApiKey `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. 
+ OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListApiKeysResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_availability_domains_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_availability_domains_request_response.go new file mode 100644 index 000000000..407c2a36a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_availability_domains_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListAvailabilityDomainsRequest wrapper for the ListAvailabilityDomains operation +type ListAvailabilityDomainsRequest struct { + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` +} + +func (request ListAvailabilityDomainsRequest) String() string { + return common.PointerString(request) +} + +// ListAvailabilityDomainsResponse wrapper for the ListAvailabilityDomains operation +type ListAvailabilityDomainsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []AvailabilityDomain instance + Items []AvailabilityDomain `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListAvailabilityDomainsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_compartments_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_compartments_request_response.go new file mode 100644 index 000000000..6d4338d3d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_compartments_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCompartmentsRequest wrapper for the ListCompartments operation +type ListCompartmentsRequest struct { + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The maximum number of items to return in a paginated "List" call. 
+ Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` +} + +func (request ListCompartmentsRequest) String() string { + return common.PointerString(request) +} + +// ListCompartmentsResponse wrapper for the ListCompartments operation +type ListCompartmentsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Compartment instance + Items []Compartment `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListCompartmentsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_customer_secret_keys_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_customer_secret_keys_request_response.go new file mode 100644 index 000000000..a1de6bb33 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_customer_secret_keys_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCustomerSecretKeysRequest wrapper for the ListCustomerSecretKeys operation +type ListCustomerSecretKeysRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` +} + +func (request ListCustomerSecretKeysRequest) String() string { + return common.PointerString(request) +} + +// ListCustomerSecretKeysResponse wrapper for the ListCustomerSecretKeys operation +type ListCustomerSecretKeysResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []CustomerSecretKeySummary instance + Items []CustomerSecretKeySummary `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListCustomerSecretKeysResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_fault_domains_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_fault_domains_request_response.go new file mode 100644 index 000000000..58e8cf4ed --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_fault_domains_request_response.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListFaultDomainsRequest wrapper for the ListFaultDomains operation +type ListFaultDomainsRequest struct { + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The name of the availabilityDomain. + AvailabilityDomain *string `mandatory:"true" contributesTo:"query" name:"availabilityDomain"` +} + +func (request ListFaultDomainsRequest) String() string { + return common.PointerString(request) +} + +// ListFaultDomainsResponse wrapper for the ListFaultDomains operation +type ListFaultDomainsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []FaultDomain instance + Items []FaultDomain `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListFaultDomainsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_groups_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_groups_request_response.go new file mode 100644 index 000000000..b466a566f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_groups_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListGroupsRequest wrapper for the ListGroups operation +type ListGroupsRequest struct { + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The maximum number of items to return in a paginated "List" call. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` +} + +func (request ListGroupsRequest) String() string { + return common.PointerString(request) +} + +// ListGroupsResponse wrapper for the ListGroups operation +type ListGroupsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Group instance + Items []Group `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items.
+ OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListGroupsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_identity_providers_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_identity_providers_request_response.go new file mode 100644 index 000000000..abebf6877 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_identity_providers_request_response.go @@ -0,0 +1,73 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListIdentityProvidersRequest wrapper for the ListIdentityProviders operation +type ListIdentityProvidersRequest struct { + + // The protocol used for federation. + Protocol ListIdentityProvidersProtocolEnum `mandatory:"true" contributesTo:"query" name:"protocol" omitEmpty:"true"` + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The maximum number of items to return in a paginated "List" call. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` +} + +func (request ListIdentityProvidersRequest) String() string { + return common.PointerString(request) +} + +// ListIdentityProvidersResponse wrapper for the ListIdentityProviders operation +type ListIdentityProvidersResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []IdentityProvider instance + Items []IdentityProvider `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. 
+ OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListIdentityProvidersResponse) String() string { + return common.PointerString(response) +} + +// ListIdentityProvidersProtocolEnum Enum with underlying type: string +type ListIdentityProvidersProtocolEnum string + +// Set of constants representing the allowable values for ListIdentityProvidersProtocol +const ( + ListIdentityProvidersProtocolSaml2 ListIdentityProvidersProtocolEnum = "SAML2" +) + +var mappingListIdentityProvidersProtocol = map[string]ListIdentityProvidersProtocolEnum{ + "SAML2": ListIdentityProvidersProtocolSaml2, +} + +// GetListIdentityProvidersProtocolEnumValues Enumerates the set of values for ListIdentityProvidersProtocol +func GetListIdentityProvidersProtocolEnumValues() []ListIdentityProvidersProtocolEnum { + values := make([]ListIdentityProvidersProtocolEnum, 0) + for _, v := range mappingListIdentityProvidersProtocol { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_idp_group_mappings_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_idp_group_mappings_request_response.go new file mode 100644 index 000000000..09323f659 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_idp_group_mappings_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListIdpGroupMappingsRequest wrapper for the ListIdpGroupMappings operation +type ListIdpGroupMappingsRequest struct { + + // The OCID of the identity provider. + IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The maximum number of items to return in a paginated "List" call. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` +} + +func (request ListIdpGroupMappingsRequest) String() string { + return common.PointerString(request) +} + +// ListIdpGroupMappingsResponse wrapper for the ListIdpGroupMappings operation +type ListIdpGroupMappingsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []IdpGroupMapping instance + Items []IdpGroupMapping `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. 
+ OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListIdpGroupMappingsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_policies_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_policies_request_response.go new file mode 100644 index 000000000..6b6461d28 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_policies_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListPoliciesRequest wrapper for the ListPolicies operation +type ListPoliciesRequest struct { + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The maximum number of items to return in a paginated "List" call. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` +} + +func (request ListPoliciesRequest) String() string { + return common.PointerString(request) +} + +// ListPoliciesResponse wrapper for the ListPolicies operation +type ListPoliciesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Policy instance + Items []Policy `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListPoliciesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_region_subscriptions_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_region_subscriptions_request_response.go new file mode 100644 index 000000000..a10917633 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_region_subscriptions_request_response.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListRegionSubscriptionsRequest wrapper for the ListRegionSubscriptions operation +type ListRegionSubscriptionsRequest struct { + + // The OCID of the tenancy. 
+ TenancyId *string `mandatory:"true" contributesTo:"path" name:"tenancyId"` +} + +func (request ListRegionSubscriptionsRequest) String() string { + return common.PointerString(request) +} + +// ListRegionSubscriptionsResponse wrapper for the ListRegionSubscriptions operation +type ListRegionSubscriptionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []RegionSubscription instance + Items []RegionSubscription `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListRegionSubscriptionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_regions_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_regions_request_response.go new file mode 100644 index 000000000..528b6b863 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_regions_request_response.go @@ -0,0 +1,35 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListRegionsRequest wrapper for the ListRegions operation +type ListRegionsRequest struct { +} + +func (request ListRegionsRequest) String() string { + return common.PointerString(request) +} + +// ListRegionsResponse wrapper for the ListRegions operation +type ListRegionsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Region instance + Items []Region `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListRegionsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_swift_passwords_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_swift_passwords_request_response.go new file mode 100644 index 000000000..86d737c0f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_swift_passwords_request_response.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListSwiftPasswordsRequest wrapper for the ListSwiftPasswords operation +type ListSwiftPasswordsRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` +} + +func (request ListSwiftPasswordsRequest) String() string { + return common.PointerString(request) +} + +// ListSwiftPasswordsResponse wrapper for the ListSwiftPasswords operation +type ListSwiftPasswordsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []SwiftPassword instance + Items []SwiftPassword `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. 
When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListSwiftPasswordsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_user_group_memberships_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_user_group_memberships_request_response.go new file mode 100644 index 000000000..3f6d406b8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_user_group_memberships_request_response.go @@ -0,0 +1,55 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListUserGroupMembershipsRequest wrapper for the ListUserGroupMemberships operation +type ListUserGroupMembershipsRequest struct { + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The OCID of the user. + UserId *string `mandatory:"false" contributesTo:"query" name:"userId"` + + // The OCID of the group. + GroupId *string `mandatory:"false" contributesTo:"query" name:"groupId"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The maximum number of items to return in a paginated "List" call. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` +} + +func (request ListUserGroupMembershipsRequest) String() string { + return common.PointerString(request) +} + +// ListUserGroupMembershipsResponse wrapper for the ListUserGroupMemberships operation +type ListUserGroupMembershipsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []UserGroupMembership instance + Items []UserGroupMembership `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListUserGroupMembershipsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/list_users_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/list_users_request_response.go new file mode 100644 index 000000000..905d7e8c5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/list_users_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListUsersRequest wrapper for the ListUsers operation +type ListUsersRequest struct { + + // The OCID of the compartment (remember that the tenancy is simply the root compartment). + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The value of the `opc-next-page` response header from the previous "List" call. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The maximum number of items to return in a paginated "List" call. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` +} + +func (request ListUsersRequest) String() string { + return common.PointerString(request) +} + +// ListUsersResponse wrapper for the ListUsers operation +type ListUsersResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []User instance + Items []User `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListUsersResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/policy.go b/vendor/github.com/oracle/oci-go-sdk/identity/policy.go new file mode 100644 index 000000000..39eacc77a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/policy.go @@ -0,0 +1,91 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Policy A document that specifies the type of access a group has to the resources in a compartment. For information about +// policies and other IAM Service components, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). If you're new to policies, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +// The word "policy" is used by people in different ways: +// * An individual statement written in the policy language +// * A collection of statements in a single, named "policy" document (which has an Oracle Cloud ID (OCID) assigned to it) +// * The overall body of policies your organization uses to control access to resources +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. +type Policy struct { + + // The OCID of the policy. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the compartment containing the policy (either the tenancy or another compartment). + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the policy during creation. The name must be unique across all policies + // in the tenancy and cannot be changed. 
+ Name *string `mandatory:"true" json:"name"` + + // An array of one or more policy statements written in the policy language. + Statements []string `mandatory:"true" json:"statements"` + + // The description you assign to the policy. Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` + + // Date and time the policy was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The policy's current state. After creating a policy, make sure its `lifecycleState` changes from CREATING to + // ACTIVE before using it. + LifecycleState PolicyLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The detailed status of INACTIVE lifecycleState. + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` + + // The version of the policy. If null or set to an empty string, when a request comes in for authorization, the + // policy will be evaluated according to the current behavior of the services at that moment. If set to a particular + // date (YYYY-MM-DD), the policy will be evaluated according to the behavior of the services on that date. + VersionDate *common.SDKTime `mandatory:"false" json:"versionDate"` +} + +func (m Policy) String() string { + return common.PointerString(m) +} + +// PolicyLifecycleStateEnum Enum with underlying type: string +type PolicyLifecycleStateEnum string + +// Set of constants representing the allowable values for PolicyLifecycleState +const ( + PolicyLifecycleStateCreating PolicyLifecycleStateEnum = "CREATING" + PolicyLifecycleStateActive PolicyLifecycleStateEnum = "ACTIVE" + PolicyLifecycleStateInactive PolicyLifecycleStateEnum = "INACTIVE" + PolicyLifecycleStateDeleting PolicyLifecycleStateEnum = "DELETING" + PolicyLifecycleStateDeleted PolicyLifecycleStateEnum = "DELETED" +) + +var mappingPolicyLifecycleState = map[string]PolicyLifecycleStateEnum{ + "CREATING": PolicyLifecycleStateCreating, + "ACTIVE": PolicyLifecycleStateActive, + "INACTIVE": PolicyLifecycleStateInactive, + "DELETING": PolicyLifecycleStateDeleting, + "DELETED": PolicyLifecycleStateDeleted, +} + +// GetPolicyLifecycleStateEnumValues Enumerates the set of values for PolicyLifecycleState +func GetPolicyLifecycleStateEnumValues() []PolicyLifecycleStateEnum { + values := make([]PolicyLifecycleStateEnum, 0) + for _, v := range mappingPolicyLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/region.go b/vendor/github.com/oracle/oci-go-sdk/identity/region.go new file mode 100644 index 000000000..ba73a4453 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/region.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Region A localized geographic area, such as Phoenix, AZ. Oracle Cloud Infrastructure is hosted in regions and Availability +// Domains. A region is composed of several Availability Domains. An Availability Domain is one or more data centers +// located within a region. For more information, see Regions and Availability Domains (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm). 
+// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Region struct { + + // The key of the region. + // Allowed values are: + // - `PHX` + // - `IAD` + // - `FRA` + Key *string `mandatory:"false" json:"key"` + + // The name of the region. + // Allowed values are: + // - `us-phoenix-1` + // - `us-ashburn-1` + // - `eu-frankfurt-1` + Name *string `mandatory:"false" json:"name"` +} + +func (m Region) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/region_subscription.go b/vendor/github.com/oracle/oci-go-sdk/identity/region_subscription.go new file mode 100644 index 000000000..dfa9f26b6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/region_subscription.go @@ -0,0 +1,68 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// RegionSubscription An object that represents your tenancy's access to a particular region (i.e., a subscription), the status of that +// access, and whether that region is the home region. For more information, see Managing Regions (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Tasks/managingregions.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type RegionSubscription struct { + + // The region's key. + // Allowed values are: + // - `PHX` + // - `IAD` + // - `FRA` + RegionKey *string `mandatory:"true" json:"regionKey"` + + // The region's name. + // Allowed values are: + // - `us-phoenix-1` + // - `us-ashburn-1` + // - `eu-frankfurt-1` + RegionName *string `mandatory:"true" json:"regionName"` + + // The region subscription status. + Status RegionSubscriptionStatusEnum `mandatory:"true" json:"status"` + + // Indicates if the region is the home region or not.
+ IsHomeRegion *bool `mandatory:"true" json:"isHomeRegion"` +} + +func (m RegionSubscription) String() string { + return common.PointerString(m) +} + +// RegionSubscriptionStatusEnum Enum with underlying type: string +type RegionSubscriptionStatusEnum string + +// Set of constants representing the allowable values for RegionSubscriptionStatus +const ( + RegionSubscriptionStatusReady RegionSubscriptionStatusEnum = "READY" + RegionSubscriptionStatusInProgress RegionSubscriptionStatusEnum = "IN_PROGRESS" +) + +var mappingRegionSubscriptionStatus = map[string]RegionSubscriptionStatusEnum{ + "READY": RegionSubscriptionStatusReady, + "IN_PROGRESS": RegionSubscriptionStatusInProgress, +} + +// GetRegionSubscriptionStatusEnumValues Enumerates the set of values for RegionSubscriptionStatus +func GetRegionSubscriptionStatusEnumValues() []RegionSubscriptionStatusEnum { + values := make([]RegionSubscriptionStatusEnum, 0) + for _, v := range mappingRegionSubscriptionStatus { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/remove_user_from_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/remove_user_from_group_request_response.go new file mode 100644 index 000000000..afed269d0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/remove_user_from_group_request_response.go @@ -0,0 +1,40 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// RemoveUserFromGroupRequest wrapper for the RemoveUserFromGroup operation +type RemoveUserFromGroupRequest struct { + + // The OCID of the userGroupMembership. + UserGroupMembershipId *string `mandatory:"true" contributesTo:"path" name:"userGroupMembershipId"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request RemoveUserFromGroupRequest) String() string { + return common.PointerString(request) +} + +// RemoveUserFromGroupResponse wrapper for the RemoveUserFromGroup operation +type RemoveUserFromGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response RemoveUserFromGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/saml2_identity_provider.go b/vendor/github.com/oracle/oci-go-sdk/identity/saml2_identity_provider.go new file mode 100644 index 000000000..0ef2511b3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/saml2_identity_provider.go @@ -0,0 +1,127 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. 
+// + +package identity + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// Saml2IdentityProvider A special type of IdentityProvider that +// supports the SAML 2.0 protocol. For more information, see +// Identity Providers and Federation (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/federation.htm). +type Saml2IdentityProvider struct { + + // The OCID of the `IdentityProvider`. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the tenancy containing the `IdentityProvider`. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the `IdentityProvider` during creation. The name + // must be unique across all `IdentityProvider` objects in the tenancy and + // cannot be changed. This is the name federated users see when choosing + // which identity provider to use when signing in to the Oracle Cloud Infrastructure + // Console. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the `IdentityProvider` during creation. Does + // not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` + + // The identity provider service or product. + // Supported identity providers are Oracle Identity Cloud Service (IDCS) and Microsoft + // Active Directory Federation Services (ADFS). + // Allowed values are: + // - `ADFS` + // - `IDCS` + // Example: `IDCS` + ProductType *string `mandatory:"true" json:"productType"` + + // Date and time the `IdentityProvider` was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The URL for retrieving the identity provider's metadata, which + // contains information required for federating. + MetadataUrl *string `mandatory:"true" json:"metadataUrl"` + + // The identity provider's signing certificate used by the IAM Service + // to validate the SAML2 token. + SigningCertificate *string `mandatory:"true" json:"signingCertificate"` + + // The URL to redirect federated users to for authentication with the + // identity provider. + RedirectUrl *string `mandatory:"true" json:"redirectUrl"` + + // The detailed status of INACTIVE lifecycleState. + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` + + // The current state. After creating an `IdentityProvider`, make sure its + // `lifecycleState` changes from CREATING to ACTIVE before using it. 
+ LifecycleState IdentityProviderLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` +} + +//GetId returns Id +func (m Saml2IdentityProvider) GetId() *string { + return m.Id +} + +//GetCompartmentId returns CompartmentId +func (m Saml2IdentityProvider) GetCompartmentId() *string { + return m.CompartmentId +} + +//GetName returns Name +func (m Saml2IdentityProvider) GetName() *string { + return m.Name +} + +//GetDescription returns Description +func (m Saml2IdentityProvider) GetDescription() *string { + return m.Description +} + +//GetProductType returns ProductType +func (m Saml2IdentityProvider) GetProductType() *string { + return m.ProductType +} + +//GetTimeCreated returns TimeCreated +func (m Saml2IdentityProvider) GetTimeCreated() *common.SDKTime { + return m.TimeCreated +} + +//GetLifecycleState returns LifecycleState +func (m Saml2IdentityProvider) GetLifecycleState() IdentityProviderLifecycleStateEnum { + return m.LifecycleState +} + +//GetInactiveStatus returns InactiveStatus +func (m Saml2IdentityProvider) GetInactiveStatus() *int { + return m.InactiveStatus +} + +func (m Saml2IdentityProvider) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m Saml2IdentityProvider) MarshalJSON() (buff []byte, e error) { + type MarshalTypeSaml2IdentityProvider Saml2IdentityProvider + s := struct { + DiscriminatorParam string `json:"protocol"` + MarshalTypeSaml2IdentityProvider + }{ + "SAML2", + (MarshalTypeSaml2IdentityProvider)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/swift_password.go b/vendor/github.com/oracle/oci-go-sdk/identity/swift_password.go new file mode 100644 index 000000000..89a37c63c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/swift_password.go @@ -0,0 +1,83 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// SwiftPassword Swift is the OpenStack object storage service. A `SwiftPassword` is an Oracle-provided password for using a +// Swift client with the Oracle Cloud Infrastructure Object Storage Service. This password is associated with +// the user's Console login. Swift passwords never expire. A user can have up to two Swift passwords at a time. +// **Note:** The password is always an Oracle-generated string; you can't change it to a string of your choice. +// For more information, see Managing User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Tasks/managingcredentials.htm). +type SwiftPassword struct { + + // The Swift password. The value is available only in the response for `CreateSwiftPassword`, and not + // for `ListSwiftPasswords` or `UpdateSwiftPassword`. + Password *string `mandatory:"false" json:"password"` + + // The OCID of the Swift password. + Id *string `mandatory:"false" json:"id"` + + // The OCID of the user the password belongs to. + UserId *string `mandatory:"false" json:"userId"` + + // The description you assign to the Swift password. Does not have to be unique, and it's changeable. + Description *string `mandatory:"false" json:"description"` + + // Date and time the `SwiftPassword` object was created, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // Date and time when this password will expire, in the format defined by RFC3339. + // Null if it never expires. + // Example: `2016-08-25T21:10:29.600Z` + ExpiresOn *common.SDKTime `mandatory:"false" json:"expiresOn"` + + // The password's current state. After creating a password, make sure its `lifecycleState` changes from + // CREATING to ACTIVE before using it. + LifecycleState SwiftPasswordLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The detailed status of INACTIVE lifecycleState. + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m SwiftPassword) String() string { + return common.PointerString(m) +} + +// SwiftPasswordLifecycleStateEnum Enum with underlying type: string +type SwiftPasswordLifecycleStateEnum string + +// Set of constants representing the allowable values for SwiftPasswordLifecycleState +const ( + SwiftPasswordLifecycleStateCreating SwiftPasswordLifecycleStateEnum = "CREATING" + SwiftPasswordLifecycleStateActive SwiftPasswordLifecycleStateEnum = "ACTIVE" + SwiftPasswordLifecycleStateInactive SwiftPasswordLifecycleStateEnum = "INACTIVE" + SwiftPasswordLifecycleStateDeleting SwiftPasswordLifecycleStateEnum = "DELETING" + SwiftPasswordLifecycleStateDeleted SwiftPasswordLifecycleStateEnum = "DELETED" +) + +var mappingSwiftPasswordLifecycleState = map[string]SwiftPasswordLifecycleStateEnum{ + "CREATING": SwiftPasswordLifecycleStateCreating, + "ACTIVE": SwiftPasswordLifecycleStateActive, + "INACTIVE": SwiftPasswordLifecycleStateInactive, + "DELETING": SwiftPasswordLifecycleStateDeleting, + "DELETED": SwiftPasswordLifecycleStateDeleted, +} + +// GetSwiftPasswordLifecycleStateEnumValues Enumerates the set of values for SwiftPasswordLifecycleState +func GetSwiftPasswordLifecycleStateEnumValues() []SwiftPasswordLifecycleStateEnum { + values := make([]SwiftPasswordLifecycleStateEnum, 0) + for _, v := range mappingSwiftPasswordLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/tenancy.go b/vendor/github.com/oracle/oci-go-sdk/identity/tenancy.go new file mode 100644 index 000000000..676b91b9a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/tenancy.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Tenancy The root compartment that contains all of your organization's compartments and other +// Oracle Cloud Infrastructure cloud resources. When you sign up for Oracle Cloud Infrastructure, +// Oracle creates a tenancy for your company, which is a secure and isolated partition +// where you can create, organize, and administer your cloud resources. +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Tenancy struct { + + // The OCID of the tenancy. + Id *string `mandatory:"false" json:"id"` + + // The name of the tenancy. 
+ Name *string `mandatory:"false" json:"name"` + + // The description of the tenancy. + Description *string `mandatory:"false" json:"description"` + + // The region key for the tenancy's home region. + // Allowed values are: + // - `IAD` + // - `PHX` + // - `FRA` + HomeRegionKey *string `mandatory:"false" json:"homeRegionKey"` +} + +func (m Tenancy) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/ui_password.go b/vendor/github.com/oracle/oci-go-sdk/identity/ui_password.go new file mode 100644 index 000000000..d0ff89997 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/ui_password.go @@ -0,0 +1,69 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UiPassword A text password that enables a user to sign in to the Console, the user interface for interacting with Oracle +// Cloud Infrastructure. +// For more information about user credentials, see User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/usercredentials.htm). +type UiPassword struct { + + // The user's password for the Console. + Password *string `mandatory:"false" json:"password"` + + // The OCID of the user. + UserId *string `mandatory:"false" json:"userId"` + + // Date and time the password was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` + + // The password's current state. After creating a password, make sure its `lifecycleState` changes from + // CREATING to ACTIVE before using it. + LifecycleState UiPasswordLifecycleStateEnum `mandatory:"false" json:"lifecycleState,omitempty"` + + // The detailed status of INACTIVE lifecycleState. 
+ InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m UiPassword) String() string { + return common.PointerString(m) +} + +// UiPasswordLifecycleStateEnum Enum with underlying type: string +type UiPasswordLifecycleStateEnum string + +// Set of constants representing the allowable values for UiPasswordLifecycleState +const ( + UiPasswordLifecycleStateCreating UiPasswordLifecycleStateEnum = "CREATING" + UiPasswordLifecycleStateActive UiPasswordLifecycleStateEnum = "ACTIVE" + UiPasswordLifecycleStateInactive UiPasswordLifecycleStateEnum = "INACTIVE" + UiPasswordLifecycleStateDeleting UiPasswordLifecycleStateEnum = "DELETING" + UiPasswordLifecycleStateDeleted UiPasswordLifecycleStateEnum = "DELETED" +) + +var mappingUiPasswordLifecycleState = map[string]UiPasswordLifecycleStateEnum{ + "CREATING": UiPasswordLifecycleStateCreating, + "ACTIVE": UiPasswordLifecycleStateActive, + "INACTIVE": UiPasswordLifecycleStateInactive, + "DELETING": UiPasswordLifecycleStateDeleting, + "DELETED": UiPasswordLifecycleStateDeleted, +} + +// GetUiPasswordLifecycleStateEnumValues Enumerates the set of values for UiPasswordLifecycleState +func GetUiPasswordLifecycleStateEnumValues() []UiPasswordLifecycleStateEnum { + values := make([]UiPasswordLifecycleStateEnum, 0) + for _, v := range mappingUiPasswordLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_compartment_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_compartment_details.go new file mode 100644 index 000000000..0e064808c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_compartment_details.go @@ -0,0 +1,27 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateCompartmentDetails The representation of UpdateCompartmentDetails +type UpdateCompartmentDetails struct { + + // The description you assign to the compartment. Does not have to be unique, and it's changeable. + Description *string `mandatory:"false" json:"description"` + + // The new name you assign to the compartment. The name must be unique across all compartments in the tenancy. + Name *string `mandatory:"false" json:"name"` +} + +func (m UpdateCompartmentDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_compartment_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_compartment_request_response.go new file mode 100644 index 000000000..217bc6877 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_compartment_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateCompartmentRequest wrapper for the UpdateCompartment operation +type UpdateCompartmentRequest struct { + + // The OCID of the compartment. + CompartmentId *string `mandatory:"true" contributesTo:"path" name:"compartmentId"` + + // Request object for updating a compartment. + UpdateCompartmentDetails `contributesTo:"body"` + + // For optimistic concurrency control. 
In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateCompartmentRequest) String() string { + return common.PointerString(request) +} + +// UpdateCompartmentResponse wrapper for the UpdateCompartment operation +type UpdateCompartmentResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Compartment instance + Compartment `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateCompartmentResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_customer_secret_key_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_customer_secret_key_details.go new file mode 100644 index 000000000..397e18396 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_customer_secret_key_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateCustomerSecretKeyDetails The representation of UpdateCustomerSecretKeyDetails +type UpdateCustomerSecretKeyDetails struct { + + // The description you assign to the secret key. Does not have to be unique, and it's changeable. + DisplayName *string `mandatory:"false" json:"displayName"` +} + +func (m UpdateCustomerSecretKeyDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_customer_secret_key_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_customer_secret_key_request_response.go new file mode 100644 index 000000000..e722571e7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_customer_secret_key_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateCustomerSecretKeyRequest wrapper for the UpdateCustomerSecretKey operation +type UpdateCustomerSecretKeyRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // The OCID of the secret key. + CustomerSecretKeyId *string `mandatory:"true" contributesTo:"path" name:"customerSecretKeyId"` + + // Request object for updating a secret key. + UpdateCustomerSecretKeyDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. 
The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateCustomerSecretKeyRequest) String() string { + return common.PointerString(request) +} + +// UpdateCustomerSecretKeyResponse wrapper for the UpdateCustomerSecretKey operation +type UpdateCustomerSecretKeyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The CustomerSecretKeySummary instance + CustomerSecretKeySummary `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateCustomerSecretKeyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_group_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_group_details.go new file mode 100644 index 000000000..35a082a90 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_group_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateGroupDetails The representation of UpdateGroupDetails +type UpdateGroupDetails struct { + + // The description you assign to the group. Does not have to be unique, and it's changeable. + Description *string `mandatory:"false" json:"description"` +} + +func (m UpdateGroupDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_group_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_group_request_response.go new file mode 100644 index 000000000..75840e1c0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_group_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateGroupRequest wrapper for the UpdateGroup operation +type UpdateGroupRequest struct { + + // The OCID of the group. + GroupId *string `mandatory:"true" contributesTo:"path" name:"groupId"` + + // Request object for updating a group. + UpdateGroupDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. 
+ IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateGroupRequest) String() string { + return common.PointerString(request) +} + +// UpdateGroupResponse wrapper for the UpdateGroup operation +type UpdateGroupResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Group instance + Group `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateGroupResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_identity_provider_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_identity_provider_details.go new file mode 100644 index 000000000..976c40edf --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_identity_provider_details.go @@ -0,0 +1,67 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateIdentityProviderDetails The representation of UpdateIdentityProviderDetails +type UpdateIdentityProviderDetails interface { + + // The description you assign to the `IdentityProvider`. Does not have to + // be unique, and it's changeable. + GetDescription() *string +} + +type updateidentityproviderdetails struct { + JsonData []byte + Description *string `mandatory:"false" json:"description"` + Protocol string `json:"protocol"` +} + +// UnmarshalJSON unmarshals json +func (m *updateidentityproviderdetails) UnmarshalJSON(data []byte) error { + m.JsonData = data + type Unmarshalerupdateidentityproviderdetails updateidentityproviderdetails + s := struct { + Model Unmarshalerupdateidentityproviderdetails + }{} + err := json.Unmarshal(data, &s.Model) + if err != nil { + return err + } + m.Description = s.Model.Description + m.Protocol = s.Model.Protocol + + return err +} + +// UnmarshalPolymorphicJSON unmarshals polymorphic json +func (m *updateidentityproviderdetails) UnmarshalPolymorphicJSON(data []byte) (interface{}, error) { + var err error + switch m.Protocol { + case "SAML2": + mm := UpdateSaml2IdentityProviderDetails{} + err = json.Unmarshal(data, &mm) + return mm, err + default: + return m, nil + } +} + +//GetDescription returns Description +func (m updateidentityproviderdetails) GetDescription() *string { + return m.Description +} + +func (m updateidentityproviderdetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_identity_provider_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_identity_provider_request_response.go new file mode 100644 index 000000000..e092b59f4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_identity_provider_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateIdentityProviderRequest wrapper for the UpdateIdentityProvider operation +type UpdateIdentityProviderRequest struct { + + // The OCID of the identity provider. + IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` + + // Request object for updating a identity provider. + UpdateIdentityProviderDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateIdentityProviderRequest) String() string { + return common.PointerString(request) +} + +// UpdateIdentityProviderResponse wrapper for the UpdateIdentityProvider operation +type UpdateIdentityProviderResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IdentityProvider instance + IdentityProvider `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateIdentityProviderResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_idp_group_mapping_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_idp_group_mapping_details.go new file mode 100644 index 000000000..bf7716f61 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_idp_group_mapping_details.go @@ -0,0 +1,27 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateIdpGroupMappingDetails The representation of UpdateIdpGroupMappingDetails +type UpdateIdpGroupMappingDetails struct { + + // The idp group name. + IdpGroupName *string `mandatory:"false" json:"idpGroupName"` + + // The OCID of the group. + GroupId *string `mandatory:"false" json:"groupId"` +} + +func (m UpdateIdpGroupMappingDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_idp_group_mapping_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_idp_group_mapping_request_response.go new file mode 100644 index 000000000..be9dbd3df --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_idp_group_mapping_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateIdpGroupMappingRequest wrapper for the UpdateIdpGroupMapping operation +type UpdateIdpGroupMappingRequest struct { + + // The OCID of the identity provider. 
+ IdentityProviderId *string `mandatory:"true" contributesTo:"path" name:"identityProviderId"` + + // The OCID of the group mapping. + MappingId *string `mandatory:"true" contributesTo:"path" name:"mappingId"` + + // Request object for updating an identity provider group mapping + UpdateIdpGroupMappingDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateIdpGroupMappingRequest) String() string { + return common.PointerString(request) +} + +// UpdateIdpGroupMappingResponse wrapper for the UpdateIdpGroupMapping operation +type UpdateIdpGroupMappingResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The IdpGroupMapping instance + IdpGroupMapping `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateIdpGroupMappingResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_policy_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_policy_details.go new file mode 100644 index 000000000..3fd45b64e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_policy_details.go @@ -0,0 +1,34 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdatePolicyDetails The representation of UpdatePolicyDetails +type UpdatePolicyDetails struct { + + // The description you assign to the policy. Does not have to be unique, and it's changeable. + Description *string `mandatory:"false" json:"description"` + + // An array of policy statements written in the policy language. See + // How Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policies.htm) and + // Common Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/commonpolicies.htm). + Statements []string `mandatory:"false" json:"statements"` + + // The version of the policy. If null or set to an empty string, when a request comes in for authorization, the + // policy will be evaluated according to the current behavior of the services at that moment. If set to a particular + // date (YYYY-MM-DD), the policy will be evaluated according to the behavior of the services on that date. 
+ VersionDate *common.SDKTime `mandatory:"false" json:"versionDate"` +} + +func (m UpdatePolicyDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_policy_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_policy_request_response.go new file mode 100644 index 000000000..198afbecd --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_policy_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdatePolicyRequest wrapper for the UpdatePolicy operation +type UpdatePolicyRequest struct { + + // The OCID of the policy. + PolicyId *string `mandatory:"true" contributesTo:"path" name:"policyId"` + + // Request object for updating a policy. + UpdatePolicyDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdatePolicyRequest) String() string { + return common.PointerString(request) +} + +// UpdatePolicyResponse wrapper for the UpdatePolicy operation +type UpdatePolicyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Policy instance + Policy `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdatePolicyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_saml2_identity_provider_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_saml2_identity_provider_details.go new file mode 100644 index 000000000..f83217f2b --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_saml2_identity_provider_details.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "encoding/json" + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateSaml2IdentityProviderDetails The representation of UpdateSaml2IdentityProviderDetails +type UpdateSaml2IdentityProviderDetails struct { + + // The description you assign to the `IdentityProvider`. Does not have to + // be unique, and it's changeable. + Description *string `mandatory:"false" json:"description"` + + // The URL for retrieving the identity provider's metadata, + // which contains information required for federating. + MetadataUrl *string `mandatory:"false" json:"metadataUrl"` + + // The XML that contains the information required for federating. 
+ Metadata *string `mandatory:"false" json:"metadata"` +} + +//GetDescription returns Description +func (m UpdateSaml2IdentityProviderDetails) GetDescription() *string { + return m.Description +} + +func (m UpdateSaml2IdentityProviderDetails) String() string { + return common.PointerString(m) +} + +// MarshalJSON marshals to json representation +func (m UpdateSaml2IdentityProviderDetails) MarshalJSON() (buff []byte, e error) { + type MarshalTypeUpdateSaml2IdentityProviderDetails UpdateSaml2IdentityProviderDetails + s := struct { + DiscriminatorParam string `json:"protocol"` + MarshalTypeUpdateSaml2IdentityProviderDetails + }{ + "SAML2", + (MarshalTypeUpdateSaml2IdentityProviderDetails)(m), + } + + return json.Marshal(&s) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_state_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_state_details.go new file mode 100644 index 000000000..3c920d786 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_state_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateStateDetails The representation of UpdateStateDetails +type UpdateStateDetails struct { + + // Update state to blocked or unblocked. Only "false" is supported (for changing the state to unblocked). + Blocked *bool `mandatory:"false" json:"blocked"` +} + +func (m UpdateStateDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_swift_password_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_swift_password_details.go new file mode 100644 index 000000000..7f987bf70 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_swift_password_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateSwiftPasswordDetails The representation of UpdateSwiftPasswordDetails +type UpdateSwiftPasswordDetails struct { + + // The description you assign to the Swift password. Does not have to be unique, and it's changeable. + Description *string `mandatory:"false" json:"description"` +} + +func (m UpdateSwiftPasswordDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_swift_password_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_swift_password_request_response.go new file mode 100644 index 000000000..0ec212ff8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_swift_password_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateSwiftPasswordRequest wrapper for the UpdateSwiftPassword operation +type UpdateSwiftPasswordRequest struct { + + // The OCID of the user. 
+ UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // The OCID of the Swift password. + SwiftPasswordId *string `mandatory:"true" contributesTo:"path" name:"swiftPasswordId"` + + // Request object for updating a Swift password. + UpdateSwiftPasswordDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateSwiftPasswordRequest) String() string { + return common.PointerString(request) +} + +// UpdateSwiftPasswordResponse wrapper for the UpdateSwiftPassword operation +type UpdateSwiftPasswordResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The SwiftPassword instance + SwiftPassword `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateSwiftPasswordResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_user_details.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_user_details.go new file mode 100644 index 000000000..07b2ce2a4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_user_details.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateUserDetails The representation of UpdateUserDetails +type UpdateUserDetails struct { + + // The description you assign to the user. Does not have to be unique, and it's changeable. + Description *string `mandatory:"false" json:"description"` +} + +func (m UpdateUserDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_user_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_user_request_response.go new file mode 100644 index 000000000..41565565e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_user_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateUserRequest wrapper for the UpdateUser operation +type UpdateUserRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // Request object for updating a user. + UpdateUserDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. 
The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateUserRequest) String() string { + return common.PointerString(request) +} + +// UpdateUserResponse wrapper for the UpdateUser operation +type UpdateUserResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The User instance + User `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateUserResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/update_user_state_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/update_user_state_request_response.go new file mode 100644 index 000000000..f1e2c7514 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/update_user_state_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateUserStateRequest wrapper for the UpdateUserState operation +type UpdateUserStateRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // Request object for updating a user state. + UpdateStateDetails `contributesTo:"body"` + + // For optimistic concurrency control. In the PUT or DELETE call for a resource, set the `if-match` + // parameter to the value of the etag from a previous GET or POST response for that resource. The resource + // will be updated or deleted only if the etag you provide matches the resource's current etag value. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` +} + +func (request UpdateUserStateRequest) String() string { + return common.PointerString(request) +} + +// UpdateUserStateResponse wrapper for the UpdateUserState operation +type UpdateUserStateResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The User instance + User `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateUserStateResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/upload_api_key_request_response.go b/vendor/github.com/oracle/oci-go-sdk/identity/upload_api_key_request_response.go new file mode 100644 index 000000000..4a971b526 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/upload_api_key_request_response.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UploadApiKeyRequest wrapper for the UploadApiKey operation +type UploadApiKeyRequest struct { + + // The OCID of the user. + UserId *string `mandatory:"true" contributesTo:"path" name:"userId"` + + // Request object for uploading an API key for a user. + CreateApiKeyDetails `contributesTo:"body"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request UploadApiKeyRequest) String() string { + return common.PointerString(request) +} + +// UploadApiKeyResponse wrapper for the UploadApiKey operation +type UploadApiKeyResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The ApiKey instance + ApiKey `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For optimistic concurrency control. See `if-match`. + Etag *string `presentIn:"header" name:"etag"` +} + +func (response UploadApiKeyResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/user.go b/vendor/github.com/oracle/oci-go-sdk/identity/user.go new file mode 100644 index 000000000..945e672e4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/user.go @@ -0,0 +1,91 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// User An individual employee or system that needs to manage or use your company's Oracle Cloud Infrastructure +// resources. Users might need to launch instances, manage remote disks, work with your cloud network, etc. Users +// have one or more IAM Service credentials (ApiKey, +// UIPassword, and SwiftPassword). +// For more information, see User Credentials (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usercredentials.htm)). End users of your +// application are not typically IAM Service users. For conceptual information about users and other IAM Service +// components, see Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// These users are created directly within the Oracle Cloud Infrastructure system, via the IAM service. +// They are different from *federated users*, who authenticate themselves to the Oracle Cloud Infrastructure +// Console via an identity provider. For more information, see +// Identity Providers and Federation (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/federation.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. 
If you're an administrator who needs to write policies to give users access, +// see Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type User struct { + + // The OCID of the user. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the tenancy containing the user. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The name you assign to the user during creation. This is the user's login for the Console. + // The name must be unique across all users in the tenancy and cannot be changed. + Name *string `mandatory:"true" json:"name"` + + // The description you assign to the user. Does not have to be unique, and it's changeable. + Description *string `mandatory:"true" json:"description"` + + // Date and time the user was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The user's current state. After creating a user, make sure its `lifecycleState` changes from CREATING to + // ACTIVE before using it. + LifecycleState UserLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // Returned only if the user's `lifecycleState` is INACTIVE. A 16-bit value showing the reason why the user + // is inactive: + // - bit 0: SUSPENDED (reserved for future use) + // - bit 1: DISABLED (reserved for future use) + // - bit 2: BLOCKED (the user has exceeded the maximum number of failed login attempts for the Console) + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m User) String() string { + return common.PointerString(m) +} + +// UserLifecycleStateEnum Enum with underlying type: string +type UserLifecycleStateEnum string + +// Set of constants representing the allowable values for UserLifecycleState +const ( + UserLifecycleStateCreating UserLifecycleStateEnum = "CREATING" + UserLifecycleStateActive UserLifecycleStateEnum = "ACTIVE" + UserLifecycleStateInactive UserLifecycleStateEnum = "INACTIVE" + UserLifecycleStateDeleting UserLifecycleStateEnum = "DELETING" + UserLifecycleStateDeleted UserLifecycleStateEnum = "DELETED" +) + +var mappingUserLifecycleState = map[string]UserLifecycleStateEnum{ + "CREATING": UserLifecycleStateCreating, + "ACTIVE": UserLifecycleStateActive, + "INACTIVE": UserLifecycleStateInactive, + "DELETING": UserLifecycleStateDeleting, + "DELETED": UserLifecycleStateDeleted, +} + +// GetUserLifecycleStateEnumValues Enumerates the set of values for UserLifecycleState +func GetUserLifecycleStateEnumValues() []UserLifecycleStateEnum { + values := make([]UserLifecycleStateEnum, 0) + for _, v := range mappingUserLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/identity/user_group_membership.go b/vendor/github.com/oracle/oci-go-sdk/identity/user_group_membership.go new file mode 100644 index 000000000..f2b1a3aee --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/identity/user_group_membership.go @@ -0,0 +1,74 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Identity and Access Management Service API +// +// APIs for managing users, groups, compartments, and policies. +// + +package identity + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UserGroupMembership An object that represents the membership of a user in a group. 
When you add a user to a group, the result is a +// `UserGroupMembership` with its own OCID. To remove a user from a group, you delete the `UserGroupMembership` object. +type UserGroupMembership struct { + + // The OCID of the membership. + Id *string `mandatory:"true" json:"id"` + + // The OCID of the tenancy containing the user, group, and membership object. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the group. + GroupId *string `mandatory:"true" json:"groupId"` + + // The OCID of the user. + UserId *string `mandatory:"true" json:"userId"` + + // Date and time the membership was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The membership's current state. After creating a membership object, make sure its `lifecycleState` changes + // from CREATING to ACTIVE before using it. + LifecycleState UserGroupMembershipLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The detailed status of INACTIVE lifecycleState. + InactiveStatus *int `mandatory:"false" json:"inactiveStatus"` +} + +func (m UserGroupMembership) String() string { + return common.PointerString(m) +} + +// UserGroupMembershipLifecycleStateEnum Enum with underlying type: string +type UserGroupMembershipLifecycleStateEnum string + +// Set of constants representing the allowable values for UserGroupMembershipLifecycleState +const ( + UserGroupMembershipLifecycleStateCreating UserGroupMembershipLifecycleStateEnum = "CREATING" + UserGroupMembershipLifecycleStateActive UserGroupMembershipLifecycleStateEnum = "ACTIVE" + UserGroupMembershipLifecycleStateInactive UserGroupMembershipLifecycleStateEnum = "INACTIVE" + UserGroupMembershipLifecycleStateDeleting UserGroupMembershipLifecycleStateEnum = "DELETING" + UserGroupMembershipLifecycleStateDeleted UserGroupMembershipLifecycleStateEnum = "DELETED" +) + +var mappingUserGroupMembershipLifecycleState = map[string]UserGroupMembershipLifecycleStateEnum{ + "CREATING": UserGroupMembershipLifecycleStateCreating, + "ACTIVE": UserGroupMembershipLifecycleStateActive, + "INACTIVE": UserGroupMembershipLifecycleStateInactive, + "DELETING": UserGroupMembershipLifecycleStateDeleting, + "DELETED": UserGroupMembershipLifecycleStateDeleted, +} + +// GetUserGroupMembershipLifecycleStateEnumValues Enumerates the set of values for UserGroupMembershipLifecycleState +func GetUserGroupMembershipLifecycleStateEnumValues() []UserGroupMembershipLifecycleStateEnum { + values := make([]UserGroupMembershipLifecycleStateEnum, 0) + for _, v := range mappingUserGroupMembershipLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend.go new file mode 100644 index 000000000..ecca968c3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend.go @@ -0,0 +1,57 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Backend The configuration of a backend server that is a member of a load balancer backend set. +// For more information, see Managing Backend Servers (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Tasks/managingbackendservers.htm). 
+type Backend struct { + + // Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress + // traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy. + // Example: `true` + Backup *bool `mandatory:"true" json:"backup"` + + // Whether the load balancer should drain this server. Servers marked "drain" receive no new + // incoming traffic. + // Example: `true` + Drain *bool `mandatory:"true" json:"drain"` + + // The IP address of the backend server. + // Example: `10.10.10.4` + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // A read-only field showing the IP address and port that uniquely identify this backend server in the backend set. + // Example: `10.10.10.4:8080` + Name *string `mandatory:"true" json:"name"` + + // Whether the load balancer should treat this server as offline. Offline servers receive no incoming + // traffic. + // Example: `true` + Offline *bool `mandatory:"true" json:"offline"` + + // The communication port for the backend server. + // Example: `8080` + Port *int `mandatory:"true" json:"port"` + + // The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger + // proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections + // as a server weighted '1'. + // For more information on load balancing policies, see + // How Load Balancing Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Reference/lbpolicies.htm). + // Example: `3` + Weight *int `mandatory:"true" json:"weight"` +} + +func (m Backend) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_details.go new file mode 100644 index 000000000..a65e538e3 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_details.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BackendDetails The load balancing configuration details of a backend server. +type BackendDetails struct { + + // The IP address of the backend server. + // Example: `10.10.10.4` + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // The communication port for the backend server. + // Example: `8080` + Port *int `mandatory:"true" json:"port"` + + // Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress + // traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy. + // Example: `true` + Backup *bool `mandatory:"false" json:"backup"` + + // Whether the load balancer should drain this server. Servers marked "drain" receive no new + // incoming traffic. + // Example: `true` + Drain *bool `mandatory:"false" json:"drain"` + + // Whether the load balancer should treat this server as offline. Offline servers receive no incoming + // traffic. + // Example: `true` + Offline *bool `mandatory:"false" json:"offline"` + + // The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger + // proportion of incoming traffic. 
For example, a server weighted '3' receives 3 times the number of new connections + // as a server weighted '1'. + // For more information on load balancing policies, see + // How Load Balancing Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Reference/lbpolicies.htm). + // Example: `3` + Weight *int `mandatory:"false" json:"weight"` +} + +func (m BackendDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_health.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_health.go new file mode 100644 index 000000000..ca8cbc42c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_health.go @@ -0,0 +1,58 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BackendHealth The health status of the specified backend server as reported by the primary and standby load balancers. +type BackendHealth struct { + + // A list of the most recent health check results returned for the specified backend server. + HealthCheckResults []HealthCheckResult `mandatory:"true" json:"healthCheckResults"` + + // The general health status of the specified backend server as reported by the primary and standby load balancers. + // * **OK:** Both health checks returned `OK`. + // * **WARNING:** One health check returned `OK` and one did not. + // * **CRITICAL:** Neither health check returned `OK`. + // * **UNKNOWN:** One or both health checks returned `UNKNOWN`, or the system was unable to retrieve metrics at this time. + Status BackendHealthStatusEnum `mandatory:"true" json:"status"` +} + +func (m BackendHealth) String() string { + return common.PointerString(m) +} + +// BackendHealthStatusEnum Enum with underlying type: string +type BackendHealthStatusEnum string + +// Set of constants representing the allowable values for BackendHealthStatus +const ( + BackendHealthStatusOk BackendHealthStatusEnum = "OK" + BackendHealthStatusWarning BackendHealthStatusEnum = "WARNING" + BackendHealthStatusCritical BackendHealthStatusEnum = "CRITICAL" + BackendHealthStatusUnknown BackendHealthStatusEnum = "UNKNOWN" +) + +var mappingBackendHealthStatus = map[string]BackendHealthStatusEnum{ + "OK": BackendHealthStatusOk, + "WARNING": BackendHealthStatusWarning, + "CRITICAL": BackendHealthStatusCritical, + "UNKNOWN": BackendHealthStatusUnknown, +} + +// GetBackendHealthStatusEnumValues Enumerates the set of values for BackendHealthStatus +func GetBackendHealthStatusEnumValues() []BackendHealthStatusEnum { + values := make([]BackendHealthStatusEnum, 0) + for _, v := range mappingBackendHealthStatus { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set.go new file mode 100644 index 000000000..c47097c7a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BackendSet The configuration of a load balancer backend set. 
+// For more information on backend set configuration, see +// Managing Backend Sets (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/tasks/managingbackendsets.htm). +type BackendSet struct { + Backends []Backend `mandatory:"true" json:"backends"` + + HealthChecker *HealthChecker `mandatory:"true" json:"healthChecker"` + + // A friendly name for the backend set. It must be unique and it cannot be changed. + // Valid backend set names include only alphanumeric characters, dashes, and underscores. Backend set names cannot + // contain spaces. Avoid entering confidential information. + // Example: `My_backend_set` + Name *string `mandatory:"true" json:"name"` + + // The load balancer policy for the backend set. To get a list of available policies, use the + // ListPolicies operation. + // Example: `LEAST_CONNECTIONS` + Policy *string `mandatory:"true" json:"policy"` + + SessionPersistenceConfiguration *SessionPersistenceConfigurationDetails `mandatory:"false" json:"sessionPersistenceConfiguration"` + + SslConfiguration *SslConfiguration `mandatory:"false" json:"sslConfiguration"` +} + +func (m BackendSet) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set_details.go new file mode 100644 index 000000000..dad4d6591 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set_details.go @@ -0,0 +1,35 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BackendSetDetails The configuration details for a load balancer backend set. +// For more information on backend set configuration, see +// Managing Backend Sets (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/tasks/managingbackendsets.htm). +type BackendSetDetails struct { + HealthChecker *HealthCheckerDetails `mandatory:"true" json:"healthChecker"` + + // The load balancer policy for the backend set. To get a list of available policies, use the + // ListPolicies operation. + // Example: `LEAST_CONNECTIONS` + Policy *string `mandatory:"true" json:"policy"` + + Backends []BackendDetails `mandatory:"false" json:"backends"` + + SessionPersistenceConfiguration *SessionPersistenceConfigurationDetails `mandatory:"false" json:"sessionPersistenceConfiguration"` + + SslConfiguration *SslConfigurationDetails `mandatory:"false" json:"sslConfiguration"` +} + +func (m BackendSetDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set_health.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set_health.go new file mode 100644 index 000000000..1ef062996 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/backend_set_health.go @@ -0,0 +1,78 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BackendSetHealth The health status details for a backend set. +// This object does not explicitly enumerate backend servers with a status of `OK`. However, they are included in the +// `totalBackendCount` sum. 
+type BackendSetHealth struct { + + // A list of backend servers that are currently in the `CRITICAL` health state. The list identifies each backend server by + // IP address and port. + // Example: `1.1.1.1:80` + CriticalStateBackendNames []string `mandatory:"true" json:"criticalStateBackendNames"` + + // Overall health status of the backend set. + // * **OK:** All backend servers in the backend set return a status of `OK`. + // * **WARNING:** Half or more of the backend set's backend servers return a status of `OK` and at least one backend + // server returns a status of `WARNING`, `CRITICAL`, or `UNKNOWN`. + // * **CRITICAL:** Fewer than half of the backend set's backend servers return a status of `OK`. + // * **UNKNOWN:** More than half of the backend set's backend servers return a status of `UNKNOWN`, the system was + // unable to retrieve metrics, or the backend set does not have a listener attached. + Status BackendSetHealthStatusEnum `mandatory:"true" json:"status"` + + // The total number of backend servers in this backend set. + // Example: `5` + TotalBackendCount *int `mandatory:"true" json:"totalBackendCount"` + + // A list of backend servers that are currently in the `UNKNOWN` health state. The list identifies each backend server by + // IP address and port. + // Example: `1.1.1.5:80` + UnknownStateBackendNames []string `mandatory:"true" json:"unknownStateBackendNames"` + + // A list of backend servers that are currently in the `WARNING` health state. The list identifies each backend server by + // IP address and port. + // Example: `1.1.1.7:42` + WarningStateBackendNames []string `mandatory:"true" json:"warningStateBackendNames"` +} + +func (m BackendSetHealth) String() string { + return common.PointerString(m) +} + +// BackendSetHealthStatusEnum Enum with underlying type: string +type BackendSetHealthStatusEnum string + +// Set of constants representing the allowable values for BackendSetHealthStatus +const ( + BackendSetHealthStatusOk BackendSetHealthStatusEnum = "OK" + BackendSetHealthStatusWarning BackendSetHealthStatusEnum = "WARNING" + BackendSetHealthStatusCritical BackendSetHealthStatusEnum = "CRITICAL" + BackendSetHealthStatusUnknown BackendSetHealthStatusEnum = "UNKNOWN" +) + +var mappingBackendSetHealthStatus = map[string]BackendSetHealthStatusEnum{ + "OK": BackendSetHealthStatusOk, + "WARNING": BackendSetHealthStatusWarning, + "CRITICAL": BackendSetHealthStatusCritical, + "UNKNOWN": BackendSetHealthStatusUnknown, +} + +// GetBackendSetHealthStatusEnumValues Enumerates the set of values for BackendSetHealthStatus +func GetBackendSetHealthStatusEnumValues() []BackendSetHealthStatusEnum { + values := make([]BackendSetHealthStatusEnum, 0) + for _, v := range mappingBackendSetHealthStatus { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/certificate.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/certificate.go new file mode 100644 index 000000000..e2d038de7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/certificate.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Certificate The configuration details of a listener certificate bundle. 
+// For more information on SSL certficate configuration, see +// Managing SSL Certificates (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Tasks/managingcertificates.htm). +type Certificate struct { + + // The Certificate Authority certificate, or any interim certificate, that you received from your SSL certificate provider. + // Example: + // -----BEGIN CERTIFICATE----- + // MIIEczCCA1ugAwIBAgIBADANBgkqhkiG9w0BAQQFAD..AkGA1UEBhMCR0Ix + // EzARBgNVBAgTClNvbWUtU3RhdGUxFDASBgNVBAoTC0..0EgTHRkMTcwNQYD + // VQQLEy5DbGFzcyAxIFB1YmxpYyBQcmltYXJ5IENlcn..XRpb24gQXV0aG9y + // aXR5MRQwEgYDVQQDEwtCZXN0IENBIEx0ZDAeFw0wMD..TUwMTZaFw0wMTAy + // ... + // -----END CERTIFICATE----- + CaCertificate *string `mandatory:"true" json:"caCertificate"` + + // A friendly name for the certificate bundle. It must be unique and it cannot be changed. + // Valid certificate bundle names include only alphanumeric characters, dashes, and underscores. + // Certificate bundle names cannot contain spaces. Avoid entering confidential information. + // Example: `My_certificate_bundle` + CertificateName *string `mandatory:"true" json:"certificateName"` + + // The public certificate, in PEM format, that you received from your SSL certificate provider. + // Example: + // -----BEGIN CERTIFICATE----- + // MIIC2jCCAkMCAg38MA0GCSqGSIb3DQEBBQUAMIGbMQswCQYDVQQGEwJKUDEOMAwG + // A1UECBMFVG9reW8xEDAOBgNVBAcTB0NodW8ta3UxETAPBgNVBAoTCEZyYW5rNERE + // MRgwFgYDVQQLEw9XZWJDZXJ0IFN1cHBvcnQxGDAWBgNVBAMTD0ZyYW5rNEREIFdl + // YiBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBmcmFuazRkZC5jb20wHhcNMTIw + // ... + // -----END CERTIFICATE----- + PublicCertificate *string `mandatory:"true" json:"publicCertificate"` +} + +func (m Certificate) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/certificate_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/certificate_details.go new file mode 100644 index 000000000..d63af94d9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/certificate_details.go @@ -0,0 +1,66 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CertificateDetails The configuration details for a listener certificate bundle. +// For more information on SSL certficate configuration, see +// Managing SSL Certificates (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Tasks/managingcertificates.htm). +type CertificateDetails struct { + + // A friendly name for the certificate bundle. It must be unique and it cannot be changed. + // Valid certificate bundle names include only alphanumeric characters, dashes, and underscores. + // Certificate bundle names cannot contain spaces. Avoid entering confidential information. + // Example: `My_certificate_bundle` + CertificateName *string `mandatory:"true" json:"certificateName"` + + // The Certificate Authority certificate, or any interim certificate, that you received from your SSL certificate provider. + // Example: + // -----BEGIN CERTIFICATE----- + // MIIEczCCA1ugAwIBAgIBADANBgkqhkiG9w0BAQQFAD..AkGA1UEBhMCR0Ix + // EzARBgNVBAgTClNvbWUtU3RhdGUxFDASBgNVBAoTC0..0EgTHRkMTcwNQYD + // VQQLEy5DbGFzcyAxIFB1YmxpYyBQcmltYXJ5IENlcn..XRpb24gQXV0aG9y + // aXR5MRQwEgYDVQQDEwtCZXN0IENBIEx0ZDAeFw0wMD..TUwMTZaFw0wMTAy + // ... 
+ // -----END CERTIFICATE----- + CaCertificate *string `mandatory:"false" json:"caCertificate"` + + // A passphrase for encrypted private keys. This is needed only if you created your certificate with a passphrase. + // Example: `Mysecretunlockingcode42!1!` + Passphrase *string `mandatory:"false" json:"passphrase"` + + // The SSL private key for your certificate, in PEM format. + // Example: + // -----BEGIN RSA PRIVATE KEY----- + // jO1O1v2ftXMsawM90tnXwc6xhOAT1gDBC9S8DKeca..JZNUgYYwNS0dP2UK + // tmyN+XqVcAKw4HqVmChXy5b5msu8eIq3uc2NqNVtR..2ksSLukP8pxXcHyb + // +sEwvM4uf8qbnHAqwnOnP9+KV9vds6BaH1eRA4CHz..n+NVZlzBsTxTlS16 + // /Umr7wJzVrMqK5sDiSu4WuaaBdqMGfL5hLsTjcBFD..Da2iyQmSKuVD4lIZ + // ... + // -----END RSA PRIVATE KEY----- + PrivateKey *string `mandatory:"false" json:"privateKey"` + + // The public certificate, in PEM format, that you received from your SSL certificate provider. + // Example: + // -----BEGIN CERTIFICATE----- + // MIIC2jCCAkMCAg38MA0GCSqGSIb3DQEBBQUAMIGbMQswCQYDVQQGEwJKUDEOMAwG + // A1UECBMFVG9reW8xEDAOBgNVBAcTB0NodW8ta3UxETAPBgNVBAoTCEZyYW5rNERE + // MRgwFgYDVQQLEw9XZWJDZXJ0IFN1cHBvcnQxGDAWBgNVBAMTD0ZyYW5rNEREIFdl + // YiBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBmcmFuazRkZC5jb20wHhcNMTIw + // ... + // -----END CERTIFICATE----- + PublicCertificate *string `mandatory:"false" json:"publicCertificate"` +} + +func (m CertificateDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_details.go new file mode 100644 index 000000000..b47a042c0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_details.go @@ -0,0 +1,54 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateBackendDetails The configuration details for creating a backend server in a backend set. +// For more information on backend server configuration, see +// Managing Backend Servers (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/tasks/managingbackendservers.htm). +type CreateBackendDetails struct { + + // The IP address of the backend server. + // Example: `10.10.10.4` + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // The communication port for the backend server. + // Example: `8080` + Port *int `mandatory:"true" json:"port"` + + // Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress + // traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy. + // Example: `true` + Backup *bool `mandatory:"false" json:"backup"` + + // Whether the load balancer should drain this server. Servers marked "drain" receive no new + // incoming traffic. + // Example: `true` + Drain *bool `mandatory:"false" json:"drain"` + + // Whether the load balancer should treat this server as offline. Offline servers receive no incoming + // traffic. + // Example: `true` + Offline *bool `mandatory:"false" json:"offline"` + + // The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger + // proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections + // as a server weighted '1'. 
+ // For more information on load balancing policies, see + // How Load Balancing Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Reference/lbpolicies.htm). + // Example: `3` + Weight *int `mandatory:"false" json:"weight"` +} + +func (m CreateBackendDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_request_response.go new file mode 100644 index 000000000..3d14273ce --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateBackendRequest wrapper for the CreateBackend operation +type CreateBackendRequest struct { + + // The details to add a backend server to a backend set. + CreateBackendDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set to add the backend server to. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateBackendRequest) String() string { + return common.PointerString(request) +} + +// CreateBackendResponse wrapper for the CreateBackend operation +type CreateBackendResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response CreateBackendResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_set_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_set_details.go new file mode 100644 index 000000000..2bf3acb40 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_set_details.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateBackendSetDetails The configuration details for creating a backend set in a load balancer. +// For more information on backend set configuration, see +// Managing Backend Sets (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/tasks/managingbackendsets.htm). +type CreateBackendSetDetails struct { + HealthChecker *HealthCheckerDetails `mandatory:"true" json:"healthChecker"` + + // A friendly name for the backend set. It must be unique and it cannot be changed. + // Valid backend set names include only alphanumeric characters, dashes, and underscores. Backend set names cannot + // contain spaces. Avoid entering confidential information. + // Example: `My_backend_set` + Name *string `mandatory:"true" json:"name"` + + // The load balancer policy for the backend set. To get a list of available policies, use the + // ListPolicies operation. + // Example: `LEAST_CONNECTIONS` + Policy *string `mandatory:"true" json:"policy"` + + Backends []BackendDetails `mandatory:"false" json:"backends"` + + SessionPersistenceConfiguration *SessionPersistenceConfigurationDetails `mandatory:"false" json:"sessionPersistenceConfiguration"` + + SslConfiguration *SslConfigurationDetails `mandatory:"false" json:"sslConfiguration"` +} + +func (m CreateBackendSetDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_set_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_set_request_response.go new file mode 100644 index 000000000..7617987a4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_backend_set_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateBackendSetRequest wrapper for the CreateBackendSet operation +type CreateBackendSetRequest struct { + + // The details for adding a backend set. + CreateBackendSetDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer on which to add a backend set. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). 
+	OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"`
+}
+
+func (request CreateBackendSetRequest) String() string {
+	return common.PointerString(request)
+}
+
+// CreateBackendSetResponse wrapper for the CreateBackendSet operation
+type CreateBackendSetResponse struct {
+
+	// The underlying http response
+	RawResponse *http.Response
+
+	// Unique Oracle-assigned identifier for the request. If you need to contact Oracle about
+	// a particular request, please provide the request ID.
+	OpcRequestId *string `presentIn:"header" name:"opc-request-id"`
+
+	// The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request.
+	OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"`
+}
+
+func (response CreateBackendSetResponse) String() string {
+	return common.PointerString(response)
+}
diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_certificate_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_certificate_details.go
new file mode 100644
index 000000000..c7f4789bf
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_certificate_details.go
@@ -0,0 +1,66 @@
+// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved.
+// Code generated. DO NOT EDIT.
+
+// Load Balancing Service API
+//
+// API for the Load Balancing Service
+//
+
+package loadbalancer
+
+import (
+	"github.com/oracle/oci-go-sdk/common"
+)
+
+// CreateCertificateDetails The configuration details for adding a certificate bundle to a listener.
+// For more information on SSL certificate configuration, see
+// Managing SSL Certificates (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Tasks/managingcertificates.htm).
+type CreateCertificateDetails struct {
+
+	// A friendly name for the certificate bundle. It must be unique and it cannot be changed.
+	// Valid certificate bundle names include only alphanumeric characters, dashes, and underscores.
+	// Certificate bundle names cannot contain spaces. Avoid entering confidential information.
+	// Example: `My_certificate_bundle`
+	CertificateName *string `mandatory:"true" json:"certificateName"`
+
+	// The Certificate Authority certificate, or any interim certificate, that you received from your SSL certificate provider.
+	// Example:
+	// -----BEGIN CERTIFICATE-----
+	// MIIEczCCA1ugAwIBAgIBADANBgkqhkiG9w0BAQQFAD..AkGA1UEBhMCR0Ix
+	// EzARBgNVBAgTClNvbWUtU3RhdGUxFDASBgNVBAoTC0..0EgTHRkMTcwNQYD
+	// VQQLEy5DbGFzcyAxIFB1YmxpYyBQcmltYXJ5IENlcn..XRpb24gQXV0aG9y
+	// aXR5MRQwEgYDVQQDEwtCZXN0IENBIEx0ZDAeFw0wMD..TUwMTZaFw0wMTAy
+	// ...
+	// -----END CERTIFICATE-----
+	CaCertificate *string `mandatory:"false" json:"caCertificate"`
+
+	// A passphrase for encrypted private keys. This is needed only if you created your certificate with a passphrase.
+	// Example: `Mysecretunlockingcode42!1!`
+	Passphrase *string `mandatory:"false" json:"passphrase"`
+
+	// The SSL private key for your certificate, in PEM format.
+	// Example:
+	// -----BEGIN RSA PRIVATE KEY-----
+	// jO1O1v2ftXMsawM90tnXwc6xhOAT1gDBC9S8DKeca..JZNUgYYwNS0dP2UK
+	// tmyN+XqVcAKw4HqVmChXy5b5msu8eIq3uc2NqNVtR..2ksSLukP8pxXcHyb
+	// +sEwvM4uf8qbnHAqwnOnP9+KV9vds6BaH1eRA4CHz..n+NVZlzBsTxTlS16
+	// /Umr7wJzVrMqK5sDiSu4WuaaBdqMGfL5hLsTjcBFD..Da2iyQmSKuVD4lIZ
+	// ...
+ // -----END RSA PRIVATE KEY----- + PrivateKey *string `mandatory:"false" json:"privateKey"` + + // The public certificate, in PEM format, that you received from your SSL certificate provider. + // Example: + // -----BEGIN CERTIFICATE----- + // MIIC2jCCAkMCAg38MA0GCSqGSIb3DQEBBQUAMIGbMQswCQYDVQQGEwJKUDEOMAwG + // A1UECBMFVG9reW8xEDAOBgNVBAcTB0NodW8ta3UxETAPBgNVBAoTCEZyYW5rNERE + // MRgwFgYDVQQLEw9XZWJDZXJ0IFN1cHBvcnQxGDAWBgNVBAMTD0ZyYW5rNEREIFdl + // YiBDQTEjMCEGCSqGSIb3DQEJARYUc3VwcG9ydEBmcmFuazRkZC5jb20wHhcNMTIw + // ... + // -----END CERTIFICATE----- + PublicCertificate *string `mandatory:"false" json:"publicCertificate"` +} + +func (m CreateCertificateDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_certificate_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_certificate_request_response.go new file mode 100644 index 000000000..f92bd347e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_certificate_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateCertificateRequest wrapper for the CreateCertificate operation +type CreateCertificateRequest struct { + + // The details of the certificate to add. + CreateCertificateDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer on which to add the certificate. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateCertificateRequest) String() string { + return common.PointerString(request) +} + +// CreateCertificateResponse wrapper for the CreateCertificate operation +type CreateCertificateResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. 
+ OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response CreateCertificateResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_listener_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_listener_details.go new file mode 100644 index 000000000..58e7b30d6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_listener_details.go @@ -0,0 +1,43 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateListenerDetails The configuration details for adding a listener to a backend set. +// For more information on listener configuration, see +// Managing Load Balancer Listeners (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/tasks/managinglisteners.htm). +type CreateListenerDetails struct { + + // The name of the associated backend set. + DefaultBackendSetName *string `mandatory:"true" json:"defaultBackendSetName"` + + // A friendly name for the listener. It must be unique and it cannot be changed. + // Avoid entering confidential information. + // Example: `My listener` + Name *string `mandatory:"true" json:"name"` + + // The communication port for the listener. + // Example: `80` + Port *int `mandatory:"true" json:"port"` + + // The protocol on which the listener accepts connection requests. + // To get a list of valid protocols, use the ListProtocols + // operation. + // Example: `HTTP` + Protocol *string `mandatory:"true" json:"protocol"` + + SslConfiguration *SslConfigurationDetails `mandatory:"false" json:"sslConfiguration"` +} + +func (m CreateListenerDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_listener_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_listener_request_response.go new file mode 100644 index 000000000..9c9c9f570 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_listener_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateListenerRequest wrapper for the CreateListener operation +type CreateListenerRequest struct { + + // Details to add a listener. + CreateListenerDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer on which to add a listener. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. 
Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateListenerRequest) String() string { + return common.PointerString(request) +} + +// CreateListenerResponse wrapper for the CreateListener operation +type CreateListenerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response CreateListenerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_load_balancer_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_load_balancer_details.go new file mode 100644 index 000000000..5b08bf52a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_load_balancer_details.go @@ -0,0 +1,57 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateLoadBalancerDetails The configuration details for creating a load balancer. +type CreateLoadBalancerDetails struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the compartment in which to create the load balancer. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. It does not have to be unique, and it is changeable. + // Avoid entering confidential information. + // Example: `My load balancer` + DisplayName *string `mandatory:"true" json:"displayName"` + + // A template that determines the total pre-provisioned bandwidth (ingress plus egress). + // To get a list of available shapes, use the ListShapes + // operation. + // Example: `100Mbps` + ShapeName *string `mandatory:"true" json:"shapeName"` + + // An array of subnet OCIDs (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). + SubnetIds []string `mandatory:"true" json:"subnetIds"` + + BackendSets map[string]BackendSetDetails `mandatory:"false" json:"backendSets"` + + Certificates map[string]CertificateDetails `mandatory:"false" json:"certificates"` + + // Whether the load balancer has a VCN-local (private) IP address. + // If "true", the service assigns a private IP address to the load balancer. The load balancer requires only one subnet + // to host both the primary and secondary load balancers. The private IP address is local to the subnet. The load balancer + // is accessible only from within the VCN that contains the associated subnet, or as further restricted by your security + // list rules. The load balancer can route traffic to any backend server that is reachable from the VCN. 
+ // For a private load balancer, both the primary and secondary load balancer hosts are within the same Availability Domain. + // If "false", the service assigns a public IP address to the load balancer. A load balancer with a public IP address + // requires two subnets, each in a different Availability Domain. One subnet hosts the primary load balancer and the other + // hosts the secondary (standby) load balancer. A public load balancer is accessible from the internet, depending on your + // VCN's security list rules (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/securitylists.htm). + // Example: `false` + IsPrivate *bool `mandatory:"false" json:"isPrivate"` + + Listeners map[string]ListenerDetails `mandatory:"false" json:"listeners"` +} + +func (m CreateLoadBalancerDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_load_balancer_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_load_balancer_request_response.go new file mode 100644 index 000000000..0ff053e4e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/create_load_balancer_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateLoadBalancerRequest wrapper for the CreateLoadBalancer operation +type CreateLoadBalancerRequest struct { + + // The configuration details for creating a load balancer. + CreateLoadBalancerDetails `contributesTo:"body"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request CreateLoadBalancerRequest) String() string { + return common.PointerString(request) +} + +// CreateLoadBalancerResponse wrapper for the CreateLoadBalancer operation +type CreateLoadBalancerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. 
+ OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response CreateLoadBalancerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_backend_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_backend_request_response.go new file mode 100644 index 000000000..acc746c90 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_backend_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteBackendRequest wrapper for the DeleteBackend operation +type DeleteBackendRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and server. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set associated with the backend server. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The IP address and port of the backend server to remove. + // Example: `1.1.1.7:42` + BackendName *string `mandatory:"true" contributesTo:"path" name:"backendName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request DeleteBackendRequest) String() string { + return common.PointerString(request) +} + +// DeleteBackendResponse wrapper for the DeleteBackend operation +type DeleteBackendResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response DeleteBackendResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_backend_set_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_backend_set_request_response.go new file mode 100644 index 000000000..6a73967a2 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_backend_set_request_response.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteBackendSetRequest wrapper for the DeleteBackendSet operation +type DeleteBackendSetRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set to delete. 
+ // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request DeleteBackendSetRequest) String() string { + return common.PointerString(request) +} + +// DeleteBackendSetResponse wrapper for the DeleteBackendSet operation +type DeleteBackendSetResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response DeleteBackendSetResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_certificate_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_certificate_request_response.go new file mode 100644 index 000000000..db86b0637 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_certificate_request_response.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteCertificateRequest wrapper for the DeleteCertificate operation +type DeleteCertificateRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the certificate to be deleted. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the certificate to delete. + // Example: `My_certificate_bundle` + CertificateName *string `mandatory:"true" contributesTo:"path" name:"certificateName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request DeleteCertificateRequest) String() string { + return common.PointerString(request) +} + +// DeleteCertificateResponse wrapper for the DeleteCertificate operation +type DeleteCertificateResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. 
+ OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response DeleteCertificateResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_listener_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_listener_request_response.go new file mode 100644 index 000000000..b5496e029 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_listener_request_response.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteListenerRequest wrapper for the DeleteListener operation +type DeleteListenerRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the listener to delete. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the listener to delete. + // Example: `My listener` + ListenerName *string `mandatory:"true" contributesTo:"path" name:"listenerName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request DeleteListenerRequest) String() string { + return common.PointerString(request) +} + +// DeleteListenerResponse wrapper for the DeleteListener operation +type DeleteListenerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response DeleteListenerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_load_balancer_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_load_balancer_request_response.go new file mode 100644 index 000000000..3e39b143c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/delete_load_balancer_request_response.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteLoadBalancerRequest wrapper for the DeleteLoadBalancer operation +type DeleteLoadBalancerRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer to delete. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. 
+ OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request DeleteLoadBalancerRequest) String() string { + return common.PointerString(request) +} + +// DeleteLoadBalancerResponse wrapper for the DeleteLoadBalancer operation +type DeleteLoadBalancerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response DeleteLoadBalancerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_health_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_health_request_response.go new file mode 100644 index 000000000..6216d7aa9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_health_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBackendHealthRequest wrapper for the GetBackendHealth operation +type GetBackendHealthRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend server health status to be retrieved. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set associated with the backend server to retrieve the health status for. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The IP address and port of the backend server to retrieve the health status for. + // Example: `1.1.1.7:42` + BackendName *string `mandatory:"true" contributesTo:"path" name:"backendName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetBackendHealthRequest) String() string { + return common.PointerString(request) +} + +// GetBackendHealthResponse wrapper for the GetBackendHealth operation +type GetBackendHealthResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The BackendHealth instance + BackendHealth `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetBackendHealthResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_request_response.go new file mode 100644 index 000000000..b795f0a4f --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_request_response.go @@ -0,0 +1,50 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBackendRequest wrapper for the GetBackend operation +type GetBackendRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and server. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set that includes the backend server. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The IP address and port of the backend server to retrieve. + // Example: `1.1.1.7:42` + BackendName *string `mandatory:"true" contributesTo:"path" name:"backendName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetBackendRequest) String() string { + return common.PointerString(request) +} + +// GetBackendResponse wrapper for the GetBackend operation +type GetBackendResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Backend instance + Backend `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetBackendResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_set_health_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_set_health_request_response.go new file mode 100644 index 000000000..9c1952f17 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_set_health_request_response.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBackendSetHealthRequest wrapper for the GetBackendSetHealth operation +type GetBackendSetHealthRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set health status to be retrieved. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set to retrieve the health status for. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetBackendSetHealthRequest) String() string { + return common.PointerString(request) +} + +// GetBackendSetHealthResponse wrapper for the GetBackendSetHealth operation +type GetBackendSetHealthResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The BackendSetHealth instance + BackendSetHealth `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetBackendSetHealthResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_set_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_set_request_response.go new file mode 100644 index 000000000..19b461368 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_backend_set_request_response.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBackendSetRequest wrapper for the GetBackendSet operation +type GetBackendSetRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the specified load balancer. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set to retrieve. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetBackendSetRequest) String() string { + return common.PointerString(request) +} + +// GetBackendSetResponse wrapper for the GetBackendSet operation +type GetBackendSetResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The BackendSet instance + BackendSet `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetBackendSetResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_health_checker_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_health_checker_request_response.go new file mode 100644 index 000000000..70cfdc6f6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_health_checker_request_response.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetHealthCheckerRequest wrapper for the GetHealthChecker operation +type GetHealthCheckerRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the health check policy to be retrieved. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set associated with the health check policy to be retrieved. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetHealthCheckerRequest) String() string { + return common.PointerString(request) +} + +// GetHealthCheckerResponse wrapper for the GetHealthChecker operation +type GetHealthCheckerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The HealthChecker instance + HealthChecker `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetHealthCheckerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_load_balancer_health_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_load_balancer_health_request_response.go new file mode 100644 index 000000000..03783ea3a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_load_balancer_health_request_response.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetLoadBalancerHealthRequest wrapper for the GetLoadBalancerHealth operation +type GetLoadBalancerHealthRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer to return health status for. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetLoadBalancerHealthRequest) String() string { + return common.PointerString(request) +} + +// GetLoadBalancerHealthResponse wrapper for the GetLoadBalancerHealth operation +type GetLoadBalancerHealthResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The LoadBalancerHealth instance + LoadBalancerHealth `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetLoadBalancerHealthResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_load_balancer_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_load_balancer_request_response.go new file mode 100644 index 000000000..4f6347794 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_load_balancer_request_response.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetLoadBalancerRequest wrapper for the GetLoadBalancer operation +type GetLoadBalancerRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer to retrieve. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetLoadBalancerRequest) String() string { + return common.PointerString(request) +} + +// GetLoadBalancerResponse wrapper for the GetLoadBalancer operation +type GetLoadBalancerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The LoadBalancer instance + LoadBalancer `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetLoadBalancerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_work_request_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_work_request_request_response.go new file mode 100644 index 000000000..5159bf9c7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/get_work_request_request_response.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetWorkRequestRequest wrapper for the GetWorkRequest operation +type GetWorkRequestRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request to retrieve. + WorkRequestId *string `mandatory:"true" contributesTo:"path" name:"workRequestId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request GetWorkRequestRequest) String() string { + return common.PointerString(request) +} + +// GetWorkRequestResponse wrapper for the GetWorkRequest operation +type GetWorkRequestResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The WorkRequest instance + WorkRequest `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetWorkRequestResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_check_result.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_check_result.go new file mode 100644 index 000000000..8c7a04152 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_check_result.go @@ -0,0 +1,71 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// HealthCheckResult Information about a single backend server health check result reported by a load balancer. +type HealthCheckResult struct { + + // The result of the most recent health check. + HealthCheckStatus HealthCheckResultHealthCheckStatusEnum `mandatory:"true" json:"healthCheckStatus"` + + // The IP address of the health check status report provider. This identifier helps you differentiate same-subnet + // (private) load balancers that report health check status. + // Example: `10.2.0.1` + SourceIpAddress *string `mandatory:"true" json:"sourceIpAddress"` + + // The OCID of the subnet hosting the load balancer that reported this health check status. + SubnetId *string `mandatory:"true" json:"subnetId"` + + // The date and time the data was retrieved, in the format defined by RFC3339. + // Example: `2017-06-02T18:28:11+00:00` + Timestamp *common.SDKTime `mandatory:"true" json:"timestamp"` +} + +func (m HealthCheckResult) String() string { + return common.PointerString(m) +} + +// HealthCheckResultHealthCheckStatusEnum Enum with underlying type: string +type HealthCheckResultHealthCheckStatusEnum string + +// Set of constants representing the allowable values for HealthCheckResultHealthCheckStatus +const ( + HealthCheckResultHealthCheckStatusOk HealthCheckResultHealthCheckStatusEnum = "OK" + HealthCheckResultHealthCheckStatusInvalidStatusCode HealthCheckResultHealthCheckStatusEnum = "INVALID_STATUS_CODE" + HealthCheckResultHealthCheckStatusTimedOut HealthCheckResultHealthCheckStatusEnum = "TIMED_OUT" + HealthCheckResultHealthCheckStatusRegexMismatch HealthCheckResultHealthCheckStatusEnum = "REGEX_MISMATCH" + HealthCheckResultHealthCheckStatusConnectFailed HealthCheckResultHealthCheckStatusEnum = "CONNECT_FAILED" + HealthCheckResultHealthCheckStatusIoError HealthCheckResultHealthCheckStatusEnum = "IO_ERROR" + HealthCheckResultHealthCheckStatusOffline HealthCheckResultHealthCheckStatusEnum = "OFFLINE" + HealthCheckResultHealthCheckStatusUnknown HealthCheckResultHealthCheckStatusEnum = "UNKNOWN" +) + +var mappingHealthCheckResultHealthCheckStatus = map[string]HealthCheckResultHealthCheckStatusEnum{ + "OK": HealthCheckResultHealthCheckStatusOk, + "INVALID_STATUS_CODE": HealthCheckResultHealthCheckStatusInvalidStatusCode, + "TIMED_OUT": HealthCheckResultHealthCheckStatusTimedOut, + "REGEX_MISMATCH": HealthCheckResultHealthCheckStatusRegexMismatch, + "CONNECT_FAILED": HealthCheckResultHealthCheckStatusConnectFailed, + "IO_ERROR": HealthCheckResultHealthCheckStatusIoError, + "OFFLINE": HealthCheckResultHealthCheckStatusOffline, + "UNKNOWN": HealthCheckResultHealthCheckStatusUnknown, +} + +// GetHealthCheckResultHealthCheckStatusEnumValues 
Enumerates the set of values for HealthCheckResultHealthCheckStatus +func GetHealthCheckResultHealthCheckStatusEnumValues() []HealthCheckResultHealthCheckStatusEnum { + values := make([]HealthCheckResultHealthCheckStatusEnum, 0) + for _, v := range mappingHealthCheckResultHealthCheckStatus { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_checker.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_checker.go new file mode 100644 index 000000000..5f5150076 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_checker.go @@ -0,0 +1,57 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// HealthChecker The health check policy configuration. +// For more information, see Editing Health Check Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Tasks/editinghealthcheck.htm). +type HealthChecker struct { + + // The backend server port against which to run the health check. If the port is not specified, the load balancer uses the + // port information from the `Backend` object. + // Example: `8080` + Port *int `mandatory:"true" json:"port"` + + // The protocol the health check must use; either HTTP or TCP. + // Example: `HTTP` + Protocol *string `mandatory:"true" json:"protocol"` + + // A regular expression for parsing the response body from the backend server. + // Example: `^(500|40[1348])$` + ResponseBodyRegex *string `mandatory:"true" json:"responseBodyRegex"` + + // The status code a healthy backend server should return. If you configure the health check policy to use the HTTP protocol, + // you can use common HTTP status codes such as "200". + // Example: `200` + ReturnCode *int `mandatory:"true" json:"returnCode"` + + // The interval between health checks, in milliseconds. The default is 10000 (10 seconds). + // Example: `30000` + IntervalInMillis *int `mandatory:"false" json:"intervalInMillis"` + + // The number of retries to attempt before a backend server is considered "unhealthy". Defaults to 3. + // Example: `3` + Retries *int `mandatory:"false" json:"retries"` + + // The maximum time, in milliseconds, to wait for a reply to a health check. A health check is successful only if a reply + // returns within this timeout period. Defaults to 3000 (3 seconds). + // Example: `6000` + TimeoutInMillis *int `mandatory:"false" json:"timeoutInMillis"` + + // The path against which to run the health check. + // Example: `/healthcheck` + UrlPath *string `mandatory:"false" json:"urlPath"` +} + +func (m HealthChecker) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_checker_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_checker_details.go new file mode 100644 index 000000000..29f9a90c8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/health_checker_details.go @@ -0,0 +1,55 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// HealthCheckerDetails The health check policy's configuration details. 
+type HealthCheckerDetails struct { + + // The protocol the health check must use; either HTTP or TCP. + // Example: `HTTP` + Protocol *string `mandatory:"true" json:"protocol"` + + // The interval between health checks, in milliseconds. + // Example: `30000` + IntervalInMillis *int `mandatory:"false" json:"intervalInMillis"` + + // The backend server port against which to run the health check. If the port is not specified, the load balancer uses the + // port information from the `Backend` object. + // Example: `8080` + Port *int `mandatory:"false" json:"port"` + + // A regular expression for parsing the response body from the backend server. + // Example: `^(500|40[1348])$` + ResponseBodyRegex *string `mandatory:"false" json:"responseBodyRegex"` + + // The number of retries to attempt before a backend server is considered "unhealthy". + // Example: `3` + Retries *int `mandatory:"false" json:"retries"` + + // The status code a healthy backend server should return. + // Example: `200` + ReturnCode *int `mandatory:"false" json:"returnCode"` + + // The maximum time, in milliseconds, to wait for a reply to a health check. A health check is successful only if a reply + // returns within this timeout period. + // Example: `6000` + TimeoutInMillis *int `mandatory:"false" json:"timeoutInMillis"` + + // The path against which to run the health check. + // Example: `/healthcheck` + UrlPath *string `mandatory:"false" json:"urlPath"` +} + +func (m HealthCheckerDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ip_address.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ip_address.go new file mode 100644 index 000000000..04892809e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ip_address.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// IpAddress A load balancer IP address. +type IpAddress struct { + + // An IP address. + // Example: `128.148.10.20` + IpAddress *string `mandatory:"true" json:"ipAddress"` + + // Whether the IP address is public or private. + // If "true", the IP address is public and accessible from the internet. + // If "false", the IP address is private and accessible only from within the associated VCN. + IsPublic *bool `mandatory:"false" json:"isPublic"` +} + +func (m IpAddress) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_backend_sets_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_backend_sets_request_response.go new file mode 100644 index 000000000..240a49811 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_backend_sets_request_response.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListBackendSetsRequest wrapper for the ListBackendSets operation +type ListBackendSetsRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend sets to retrieve. 
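Referring back to the `HealthCheckerDetails` model above, here is a small sketch (not part of the vendored file) of populating it for an HTTP check. It assumes the SDK's `common.String`/`common.Int` pointer helpers are available; the values are illustrative defaults taken from the field comments.

```
package main

import (
	"fmt"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/loadbalancer"
)

func main() {
	// An HTTP health check against /healthcheck on port 8080, expecting a 200.
	// common.String and common.Int are assumed to be the SDK's pointer helpers.
	hc := loadbalancer.HealthCheckerDetails{
		Protocol:          common.String("HTTP"),
		UrlPath:           common.String("/healthcheck"),
		Port:              common.Int(8080),
		ReturnCode:        common.Int(200),
		IntervalInMillis:  common.Int(10000),
		TimeoutInMillis:   common.Int(3000),
		Retries:           common.Int(3),
		ResponseBodyRegex: common.String(".*"),
	}

	// The generated String method prints the dereferenced field values.
	fmt.Println(hc)
}
```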
+ LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request ListBackendSetsRequest) String() string { + return common.PointerString(request) +} + +// ListBackendSetsResponse wrapper for the ListBackendSets operation +type ListBackendSetsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []BackendSet instance + Items []BackendSet `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListBackendSetsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_backends_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_backends_request_response.go new file mode 100644 index 000000000..dd68c498a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_backends_request_response.go @@ -0,0 +1,46 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListBackendsRequest wrapper for the ListBackends operation +type ListBackendsRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and servers. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set associated with the backend servers. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request ListBackendsRequest) String() string { + return common.PointerString(request) +} + +// ListBackendsResponse wrapper for the ListBackends operation +type ListBackendsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Backend instance + Items []Backend `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListBackendsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_certificates_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_certificates_request_response.go new file mode 100644 index 000000000..b5e62bc07 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_certificates_request_response.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListCertificatesRequest wrapper for the ListCertificates operation +type ListCertificatesRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the certificates to be listed. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` +} + +func (request ListCertificatesRequest) String() string { + return common.PointerString(request) +} + +// ListCertificatesResponse wrapper for the ListCertificates operation +type ListCertificatesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []Certificate instance + Items []Certificate `presentIn:"body"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListCertificatesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_load_balancer_healths_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_load_balancer_healths_request_response.go new file mode 100644 index 000000000..959153da0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_load_balancer_healths_request_response.go @@ -0,0 +1,55 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListLoadBalancerHealthsRequest wrapper for the ListLoadBalancerHealths operation +type ListLoadBalancerHealthsRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the compartment containing the load balancers to return health status information for. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + // Example: `3` + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListLoadBalancerHealthsRequest) String() string { + return common.PointerString(request) +} + +// ListLoadBalancerHealthsResponse wrapper for the ListLoadBalancerHealths operation +type ListLoadBalancerHealthsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []LoadBalancerHealthSummary instance + Items []LoadBalancerHealthSummary `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. 
Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListLoadBalancerHealthsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_load_balancers_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_load_balancers_request_response.go new file mode 100644 index 000000000..608697685 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_load_balancers_request_response.go @@ -0,0 +1,117 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListLoadBalancersRequest wrapper for the ListLoadBalancers operation +type ListLoadBalancersRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the compartment containing the load balancers to list. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + // Example: `3` + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The level of detail to return for each result. Can be `full` or `simple`. + // Example: `full` + Detail *string `mandatory:"false" contributesTo:"query" name:"detail"` + + // The field to sort by. Only one sort order may be provided. Time created is default ordered as descending. Display name is default ordered as ascending. + SortBy ListLoadBalancersSortByEnum `mandatory:"false" contributesTo:"query" name:"sortBy" omitEmpty:"true"` + + // The sort order to use, either 'asc' or 'desc' + SortOrder ListLoadBalancersSortOrderEnum `mandatory:"false" contributesTo:"query" name:"sortOrder" omitEmpty:"true"` + + // A filter to only return resources that match the given display name exactly. + DisplayName *string `mandatory:"false" contributesTo:"query" name:"displayName"` + + // A filter to only return resources that match the given lifecycle state. + LifecycleState LoadBalancerLifecycleStateEnum `mandatory:"false" contributesTo:"query" name:"lifecycleState" omitEmpty:"true"` +} + +func (request ListLoadBalancersRequest) String() string { + return common.PointerString(request) +} + +// ListLoadBalancersResponse wrapper for the ListLoadBalancers operation +type ListLoadBalancersResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []LoadBalancer instance + Items []LoadBalancer `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. 
Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListLoadBalancersResponse) String() string { + return common.PointerString(response) +} + +// ListLoadBalancersSortByEnum Enum with underlying type: string +type ListLoadBalancersSortByEnum string + +// Set of constants representing the allowable values for ListLoadBalancersSortBy +const ( + ListLoadBalancersSortByTimecreated ListLoadBalancersSortByEnum = "TIMECREATED" + ListLoadBalancersSortByDisplayname ListLoadBalancersSortByEnum = "DISPLAYNAME" +) + +var mappingListLoadBalancersSortBy = map[string]ListLoadBalancersSortByEnum{ + "TIMECREATED": ListLoadBalancersSortByTimecreated, + "DISPLAYNAME": ListLoadBalancersSortByDisplayname, +} + +// GetListLoadBalancersSortByEnumValues Enumerates the set of values for ListLoadBalancersSortBy +func GetListLoadBalancersSortByEnumValues() []ListLoadBalancersSortByEnum { + values := make([]ListLoadBalancersSortByEnum, 0) + for _, v := range mappingListLoadBalancersSortBy { + values = append(values, v) + } + return values +} + +// ListLoadBalancersSortOrderEnum Enum with underlying type: string +type ListLoadBalancersSortOrderEnum string + +// Set of constants representing the allowable values for ListLoadBalancersSortOrder +const ( + ListLoadBalancersSortOrderAsc ListLoadBalancersSortOrderEnum = "ASC" + ListLoadBalancersSortOrderDesc ListLoadBalancersSortOrderEnum = "DESC" +) + +var mappingListLoadBalancersSortOrder = map[string]ListLoadBalancersSortOrderEnum{ + "ASC": ListLoadBalancersSortOrderAsc, + "DESC": ListLoadBalancersSortOrderDesc, +} + +// GetListLoadBalancersSortOrderEnumValues Enumerates the set of values for ListLoadBalancersSortOrder +func GetListLoadBalancersSortOrderEnumValues() []ListLoadBalancersSortOrderEnum { + values := make([]ListLoadBalancersSortOrderEnum, 0) + for _, v := range mappingListLoadBalancersSortOrder { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_policies_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_policies_request_response.go new file mode 100644 index 000000000..197f32e5a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_policies_request_response.go @@ -0,0 +1,55 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListPoliciesRequest wrapper for the ListPolicies operation +type ListPoliciesRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the compartment containing the load balancer policies to list. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // The maximum number of items to return in a paginated "List" call. 
+ // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + // Example: `3` + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListPoliciesRequest) String() string { + return common.PointerString(request) +} + +// ListPoliciesResponse wrapper for the ListPolicies operation +type ListPoliciesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []LoadBalancerPolicy instance + Items []LoadBalancerPolicy `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListPoliciesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_protocols_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_protocols_request_response.go new file mode 100644 index 000000000..d6e9c4e30 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_protocols_request_response.go @@ -0,0 +1,55 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListProtocolsRequest wrapper for the ListProtocols operation +type ListProtocolsRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the compartment containing the load balancer protocols to list. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + // Example: `3` + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListProtocolsRequest) String() string { + return common.PointerString(request) +} + +// ListProtocolsResponse wrapper for the ListProtocols operation +type ListProtocolsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []LoadBalancerProtocol instance + Items []LoadBalancerProtocol `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. 
If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListProtocolsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_shapes_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_shapes_request_response.go new file mode 100644 index 000000000..0db13f617 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_shapes_request_response.go @@ -0,0 +1,55 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListShapesRequest wrapper for the ListShapes operation +type ListShapesRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the compartment containing the load balancer shapes to list. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + // Example: `3` + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListShapesRequest) String() string { + return common.PointerString(request) +} + +// ListShapesResponse wrapper for the ListShapes operation +type ListShapesResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []LoadBalancerShape instance + Items []LoadBalancerShape `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListShapesResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_work_requests_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_work_requests_request_response.go new file mode 100644 index 000000000..13018c150 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/list_work_requests_request_response.go @@ -0,0 +1,55 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListWorkRequestsRequest wrapper for the ListWorkRequests operation +type ListWorkRequestsRequest struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the work requests to retrieve. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // The maximum number of items to return in a paginated "List" call. + // Example: `500` + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The value of the `opc-next-page` response header from the previous "List" call. + // Example: `3` + Page *string `mandatory:"false" contributesTo:"query" name:"page"` +} + +func (request ListWorkRequestsRequest) String() string { + return common.PointerString(request) +} + +// ListWorkRequestsResponse wrapper for the ListWorkRequests operation +type ListWorkRequestsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []WorkRequest instance + Items []WorkRequest `presentIn:"body"` + + // For pagination of a list of items. When paging through a list, if this header appears in the response, + // then a partial list might have been returned. Include this value as the `page` parameter for the + // subsequent GET request to get the next batch of items. + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListWorkRequestsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/listener.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/listener.go new file mode 100644 index 000000000..31110fd5e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/listener.go @@ -0,0 +1,42 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Listener The listener's configuration. +// For more information on backend set configuration, see +// Managing Load Balancer Listeners (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/tasks/managinglisteners.htm). +type Listener struct { + + // The name of the associated backend set. + DefaultBackendSetName *string `mandatory:"true" json:"defaultBackendSetName"` + + // A friendly name for the listener. It must be unique and it cannot be changed. + // Example: `My listener` + Name *string `mandatory:"true" json:"name"` + + // The communication port for the listener. + // Example: `80` + Port *int `mandatory:"true" json:"port"` + + // The protocol on which the listener accepts connection requests. + // To get a list of valid protocols, use the ListProtocols + // operation. 
+ // Example: `HTTP` + Protocol *string `mandatory:"true" json:"protocol"` + + SslConfiguration *SslConfiguration `mandatory:"false" json:"sslConfiguration"` +} + +func (m Listener) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/listener_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/listener_details.go new file mode 100644 index 000000000..15b4e5d9a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/listener_details.go @@ -0,0 +1,36 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// ListenerDetails The listener's configuration details. +type ListenerDetails struct { + + // The name of the associated backend set. + DefaultBackendSetName *string `mandatory:"true" json:"defaultBackendSetName"` + + // The communication port for the listener. + // Example: `80` + Port *int `mandatory:"true" json:"port"` + + // The protocol on which the listener accepts connection requests. + // To get a list of valid protocols, use the ListProtocols + // operation. + // Example: `HTTP` + Protocol *string `mandatory:"true" json:"protocol"` + + SslConfiguration *SslConfigurationDetails `mandatory:"false" json:"sslConfiguration"` +} + +func (m ListenerDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer.go new file mode 100644 index 000000000..bf621e4a9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer.go @@ -0,0 +1,104 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LoadBalancer The properties that define a load balancer. For more information, see +// Managing a Load Balancer (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Tasks/managingloadbalancer.htm). +// To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +// For information about endpoints and signing API requests, see +// About the API (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm). For information about available SDKs and tools, see +// SDKS and Other Tools (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdks.htm). +type LoadBalancer struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the compartment containing the load balancer. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // A user-friendly name. It does not have to be unique, and it is changeable. + // Example: `My load balancer` + DisplayName *string `mandatory:"true" json:"displayName"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer. 
+ Id *string `mandatory:"true" json:"id"` + + // The current state of the load balancer. + LifecycleState LoadBalancerLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // A template that determines the total pre-provisioned bandwidth (ingress plus egress). + // To get a list of available shapes, use the ListShapes + // operation. + // Example: `100Mbps` + ShapeName *string `mandatory:"true" json:"shapeName"` + + // The date and time the load balancer was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + BackendSets map[string]BackendSet `mandatory:"false" json:"backendSets"` + + Certificates map[string]Certificate `mandatory:"false" json:"certificates"` + + // An array of IP addresses. + IpAddresses []IpAddress `mandatory:"false" json:"ipAddresses"` + + // Whether the load balancer has a VCN-local (private) IP address. + // If "true", the service assigns a private IP address to the load balancer. The load balancer requires only one subnet + // to host both the primary and secondary load balancers. The private IP address is local to the subnet. The load balancer + // is accessible only from within the VCN that contains the associated subnet, or as further restricted by your security + // list rules. The load balancer can route traffic to any backend server that is reachable from the VCN. + // For a private load balancer, both the primary and secondary load balancer hosts are within the same Availability Domain. + // If "false", the service assigns a public IP address to the load balancer. A load balancer with a public IP address + // requires two subnets, each in a different Availability Domain. One subnet hosts the primary load balancer and the other + // hosts the secondary (standby) load balancer. A public load balancer is accessible from the internet, depending on your + // VCN's security list rules (https://docs.us-phoenix-1.oraclecloud.com/Content/Network/Concepts/securitylists.htm). + IsPrivate *bool `mandatory:"false" json:"isPrivate"` + + Listeners map[string]Listener `mandatory:"false" json:"listeners"` + + // An array of subnet OCIDs (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). 
+ SubnetIds []string `mandatory:"false" json:"subnetIds"` +} + +func (m LoadBalancer) String() string { + return common.PointerString(m) +} + +// LoadBalancerLifecycleStateEnum Enum with underlying type: string +type LoadBalancerLifecycleStateEnum string + +// Set of constants representing the allowable values for LoadBalancerLifecycleState +const ( + LoadBalancerLifecycleStateCreating LoadBalancerLifecycleStateEnum = "CREATING" + LoadBalancerLifecycleStateFailed LoadBalancerLifecycleStateEnum = "FAILED" + LoadBalancerLifecycleStateActive LoadBalancerLifecycleStateEnum = "ACTIVE" + LoadBalancerLifecycleStateDeleting LoadBalancerLifecycleStateEnum = "DELETING" + LoadBalancerLifecycleStateDeleted LoadBalancerLifecycleStateEnum = "DELETED" +) + +var mappingLoadBalancerLifecycleState = map[string]LoadBalancerLifecycleStateEnum{ + "CREATING": LoadBalancerLifecycleStateCreating, + "FAILED": LoadBalancerLifecycleStateFailed, + "ACTIVE": LoadBalancerLifecycleStateActive, + "DELETING": LoadBalancerLifecycleStateDeleting, + "DELETED": LoadBalancerLifecycleStateDeleted, +} + +// GetLoadBalancerLifecycleStateEnumValues Enumerates the set of values for LoadBalancerLifecycleState +func GetLoadBalancerLifecycleStateEnumValues() []LoadBalancerLifecycleStateEnum { + values := make([]LoadBalancerLifecycleStateEnum, 0) + for _, v := range mappingLoadBalancerLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_health.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_health.go new file mode 100644 index 000000000..ab7de4a9d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_health.go @@ -0,0 +1,82 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LoadBalancerHealth The health status details for the specified load balancer. +// This object does not explicitly enumerate backend sets with a status of `OK`. However, they are included in the +// `totalBackendSetCount` sum. +type LoadBalancerHealth struct { + + // A list of backend sets that are currently in the `CRITICAL` health state. The list identifies each backend set by the + // friendly name you assigned when you created it. + // Example: `My_backend_set` + CriticalStateBackendSetNames []string `mandatory:"true" json:"criticalStateBackendSetNames"` + + // The overall health status of the load balancer. + // * **OK:** All backend sets associated with the load balancer return a status of `OK`. + // * **WARNING:** At least one of the backend sets associated with the load balancer returns a status of `WARNING`, + // no backend sets return a status of `CRITICAL`, and the load balancer life cycle state is `ACTIVE`. + // * **CRITICAL:** One or more of the backend sets associated with the load balancer return a status of `CRITICAL`. + // * **UNKNOWN:** If any one of the following conditions is true: + // * The load balancer life cycle state is not `ACTIVE`. + // * No backend sets are defined for the load balancer. + // * More than half of the backend sets associated with the load balancer return a status of `UNKNOWN`, none of the backend + // sets return a status of `WARNING` or `CRITICAL`, and the load balancer life cycle state is `ACTIVE`. 
+ // * The system could not retrieve metrics for any reason. + Status LoadBalancerHealthStatusEnum `mandatory:"true" json:"status"` + + // The total number of backend sets associated with this load balancer. + // Example: `4` + TotalBackendSetCount *int `mandatory:"true" json:"totalBackendSetCount"` + + // A list of backend sets that are currently in the `UNKNOWN` health state. The list identifies each backend set by the + // friendly name you assigned when you created it. + // Example: `Backend_set2` + UnknownStateBackendSetNames []string `mandatory:"true" json:"unknownStateBackendSetNames"` + + // A list of backend sets that are currently in the `WARNING` health state. The list identifies each backend set by the + // friendly name you assigned when you created it. + // Example: `Backend_set3` + WarningStateBackendSetNames []string `mandatory:"true" json:"warningStateBackendSetNames"` +} + +func (m LoadBalancerHealth) String() string { + return common.PointerString(m) +} + +// LoadBalancerHealthStatusEnum Enum with underlying type: string +type LoadBalancerHealthStatusEnum string + +// Set of constants representing the allowable values for LoadBalancerHealthStatus +const ( + LoadBalancerHealthStatusOk LoadBalancerHealthStatusEnum = "OK" + LoadBalancerHealthStatusWarning LoadBalancerHealthStatusEnum = "WARNING" + LoadBalancerHealthStatusCritical LoadBalancerHealthStatusEnum = "CRITICAL" + LoadBalancerHealthStatusUnknown LoadBalancerHealthStatusEnum = "UNKNOWN" +) + +var mappingLoadBalancerHealthStatus = map[string]LoadBalancerHealthStatusEnum{ + "OK": LoadBalancerHealthStatusOk, + "WARNING": LoadBalancerHealthStatusWarning, + "CRITICAL": LoadBalancerHealthStatusCritical, + "UNKNOWN": LoadBalancerHealthStatusUnknown, +} + +// GetLoadBalancerHealthStatusEnumValues Enumerates the set of values for LoadBalancerHealthStatus +func GetLoadBalancerHealthStatusEnumValues() []LoadBalancerHealthStatusEnum { + values := make([]LoadBalancerHealthStatusEnum, 0) + for _, v := range mappingLoadBalancerHealthStatus { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_health_summary.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_health_summary.go new file mode 100644 index 000000000..0ff51e894 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_health_summary.go @@ -0,0 +1,64 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LoadBalancerHealthSummary A health status summary for the specified load balancer. +type LoadBalancerHealthSummary struct { + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer the health status is associated with. + LoadBalancerId *string `mandatory:"true" json:"loadBalancerId"` + + // The overall health status of the load balancer. + // * **OK:** All backend sets associated with the load balancer return a status of `OK`. + // * **WARNING:** At least one of the backend sets associated with the load balancer returns a status of `WARNING`, + // no backend sets return a status of `CRITICAL`, and the load balancer life cycle state is `ACTIVE`. + // * **CRITICAL:** One or more of the backend sets associated with the load balancer return a status of `CRITICAL`. 
+ // * **UNKNOWN:** If any one of the following conditions is true: + // * The load balancer life cycle state is not `ACTIVE`. + // * No backend sets are defined for the load balancer. + // * More than half of the backend sets associated with the load balancer return a status of `UNKNOWN`, none of the backend + // sets return a status of `WARNING` or `CRITICAL`, and the load balancer life cycle state is `ACTIVE`. + // * The system could not retrieve metrics for any reason. + Status LoadBalancerHealthSummaryStatusEnum `mandatory:"true" json:"status"` +} + +func (m LoadBalancerHealthSummary) String() string { + return common.PointerString(m) +} + +// LoadBalancerHealthSummaryStatusEnum Enum with underlying type: string +type LoadBalancerHealthSummaryStatusEnum string + +// Set of constants representing the allowable values for LoadBalancerHealthSummaryStatus +const ( + LoadBalancerHealthSummaryStatusOk LoadBalancerHealthSummaryStatusEnum = "OK" + LoadBalancerHealthSummaryStatusWarning LoadBalancerHealthSummaryStatusEnum = "WARNING" + LoadBalancerHealthSummaryStatusCritical LoadBalancerHealthSummaryStatusEnum = "CRITICAL" + LoadBalancerHealthSummaryStatusUnknown LoadBalancerHealthSummaryStatusEnum = "UNKNOWN" +) + +var mappingLoadBalancerHealthSummaryStatus = map[string]LoadBalancerHealthSummaryStatusEnum{ + "OK": LoadBalancerHealthSummaryStatusOk, + "WARNING": LoadBalancerHealthSummaryStatusWarning, + "CRITICAL": LoadBalancerHealthSummaryStatusCritical, + "UNKNOWN": LoadBalancerHealthSummaryStatusUnknown, +} + +// GetLoadBalancerHealthSummaryStatusEnumValues Enumerates the set of values for LoadBalancerHealthSummaryStatus +func GetLoadBalancerHealthSummaryStatusEnumValues() []LoadBalancerHealthSummaryStatusEnum { + values := make([]LoadBalancerHealthSummaryStatusEnum, 0) + for _, v := range mappingLoadBalancerHealthSummaryStatus { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_policy.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_policy.go new file mode 100644 index 000000000..610a970b8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_policy.go @@ -0,0 +1,26 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LoadBalancerPolicy A policy that determines how traffic is distributed among backend servers. +// For more information on load balancing policies, see +// How Load Balancing Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Reference/lbpolicies.htm). +type LoadBalancerPolicy struct { + + // The name of the load balancing policy. + Name *string `mandatory:"true" json:"name"` +} + +func (m LoadBalancerPolicy) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_protocol.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_protocol.go new file mode 100644 index 000000000..91be70bb7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_protocol.go @@ -0,0 +1,24 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LoadBalancerProtocol The protocol that defines the type of traffic accepted by a listener. +type LoadBalancerProtocol struct { + + // The name of the protocol. + Name *string `mandatory:"true" json:"name"` +} + +func (m LoadBalancerProtocol) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_shape.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_shape.go new file mode 100644 index 000000000..de82ebb47 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/load_balancer_shape.go @@ -0,0 +1,27 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// LoadBalancerShape A shape is a template that determines the total pre-provisioned bandwidth (ingress plus egress) for the +// load balancer. +// Note that the pre-provisioned maximum capacity applies to aggregated connections, not to a single client +// attempting to use the full bandwidth. +type LoadBalancerShape struct { + + // The name of the shape. + Name *string `mandatory:"true" json:"name"` +} + +func (m LoadBalancerShape) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/loadbalancer_client.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/loadbalancer_client.go new file mode 100644 index 000000000..cb408565e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/loadbalancer_client.go @@ -0,0 +1,656 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//LoadBalancerClient a client for LoadBalancer +type LoadBalancerClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +// NewLoadBalancerClientWithConfigurationProvider Creates a new default LoadBalancer client with the given configuration provider. +// the configuration provider will be used for the default signer as well as reading the region +func NewLoadBalancerClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client LoadBalancerClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + + client = LoadBalancerClient{BaseClient: baseClient} + client.BasePath = "20170115" + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. 
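A short sketch of constructing the client defined above and overriding its region (not part of the generated client). It assumes the SDK's default file-based configuration provider, `common.DefaultConfigProvider`, which reads the standard `~/.oci/config` file; any `common.ConfigurationProvider` would work.

```
package main

import (
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/loadbalancer"
)

func main() {
	// Build the client from a configuration provider (assumed default one here).
	client, err := loadbalancer.NewLoadBalancerClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	// The provider's region is used by default; SetRegion overrides it for this client.
	client.SetRegion("us-phoenix-1")
}
```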
+func (client *LoadBalancerClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "iaas", region) +} + +// SetConfigurationProvider sets the configuration provider including the region, returns an error if is not valid +func (client *LoadBalancerClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or null if none set +func (client *LoadBalancerClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// CreateBackend Adds a backend server to a backend set. +func (client LoadBalancerClient) CreateBackend(ctx context.Context, request CreateBackendRequest) (response CreateBackendResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateBackendSet Adds a backend set to a load balancer. +func (client LoadBalancerClient) CreateBackendSet(ctx context.Context, request CreateBackendSetRequest) (response CreateBackendSetResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/loadBalancers/{loadBalancerId}/backendSets", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateCertificate Creates an asynchronous request to add an SSL certificate. +func (client LoadBalancerClient) CreateCertificate(ctx context.Context, request CreateCertificateRequest) (response CreateCertificateResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/loadBalancers/{loadBalancerId}/certificates", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateListener Adds a listener to a load balancer. +func (client LoadBalancerClient) CreateListener(ctx context.Context, request CreateListenerRequest) (response CreateListenerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/loadBalancers/{loadBalancerId}/listeners", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateLoadBalancer Creates a new load balancer in the specified compartment. 
For general information about load balancers, +// see Overview of the Load Balancing Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Concepts/balanceoverview.htm). +// For the purposes of access control, you must provide the OCID of the compartment where you want +// the load balancer to reside. Notice that the load balancer doesn't have to be in the same compartment as the VCN +// or backend set. If you're not sure which compartment to use, put the load balancer in the same compartment as the VCN. +// For information about access control and compartments, see +// Overview of the IAM Service (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/overview.htm). +// You must specify a display name for the load balancer. It does not have to be unique, and you can change it. +// For information about Availability Domains, see +// Regions and Availability Domains (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm). +// To get a list of Availability Domains, use the `ListAvailabilityDomains` operation +// in the Identity and Access Management Service API. +// All Oracle Cloud Infrastructure resources, including load balancers, get an Oracle-assigned, +// unique ID called an Oracle Cloud Identifier (OCID). When you create a resource, you can find its OCID +// in the response. You can also retrieve a resource's OCID by using a List API operation on that resource type, +// or by viewing the resource in the Console. Fore more information, see +// Resource Identifiers (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm). +// After you send your request, the new object's state will temporarily be PROVISIONING. Before using the +// object, first make sure its state has changed to RUNNING. +// When you create a load balancer, the system assigns an IP address. +// To get the IP address, use the GetLoadBalancer operation. +func (client LoadBalancerClient) CreateLoadBalancer(ctx context.Context, request CreateLoadBalancerRequest) (response CreateLoadBalancerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/loadBalancers", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteBackend Removes a backend server from a given load balancer and backend set. +func (client LoadBalancerClient) DeleteBackend(ctx context.Context, request DeleteBackendRequest) (response DeleteBackendResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends/{backendName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteBackendSet Deletes the specified backend set. Note that deleting a backend set removes its backend servers from the load balancer. +// Before you can delete a backend set, you must remove it from any active listeners. 
+func (client LoadBalancerClient) DeleteBackendSet(ctx context.Context, request DeleteBackendSetRequest) (response DeleteBackendSetResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteCertificate Deletes an SSL certificate from a load balancer. +func (client LoadBalancerClient) DeleteCertificate(ctx context.Context, request DeleteCertificateRequest) (response DeleteCertificateResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/loadBalancers/{loadBalancerId}/certificates/{certificateName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteListener Deletes a listener from a load balancer. +func (client LoadBalancerClient) DeleteListener(ctx context.Context, request DeleteListenerRequest) (response DeleteListenerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/loadBalancers/{loadBalancerId}/listeners/{listenerName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// DeleteLoadBalancer Stops a load balancer and removes it from service. +func (client LoadBalancerClient) DeleteLoadBalancer(ctx context.Context, request DeleteLoadBalancerRequest) (response DeleteLoadBalancerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/loadBalancers/{loadBalancerId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetBackend Gets the specified backend server's configuration information. +func (client LoadBalancerClient) GetBackend(ctx context.Context, request GetBackendRequest) (response GetBackendResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends/{backendName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetBackendHealth Gets the current health status of the specified backend server. 
+func (client LoadBalancerClient) GetBackendHealth(ctx context.Context, request GetBackendHealthRequest) (response GetBackendHealthResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends/{backendName}/health", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetBackendSet Gets the specified backend set's configuration information. +func (client LoadBalancerClient) GetBackendSet(ctx context.Context, request GetBackendSetRequest) (response GetBackendSetResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetBackendSetHealth Gets the health status for the specified backend set. +func (client LoadBalancerClient) GetBackendSetHealth(ctx context.Context, request GetBackendSetHealthRequest) (response GetBackendSetHealthResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/health", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetHealthChecker Gets the health check policy information for a given load balancer and backend set. +func (client LoadBalancerClient) GetHealthChecker(ctx context.Context, request GetHealthCheckerRequest) (response GetHealthCheckerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/healthChecker", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetLoadBalancer Gets the specified load balancer's configuration information. +func (client LoadBalancerClient) GetLoadBalancer(ctx context.Context, request GetLoadBalancerRequest) (response GetLoadBalancerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetLoadBalancerHealth Gets the health status for the specified load balancer. 
+func (client LoadBalancerClient) GetLoadBalancerHealth(ctx context.Context, request GetLoadBalancerHealthRequest) (response GetLoadBalancerHealthResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/health", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// GetWorkRequest Gets the details of a work request. +func (client LoadBalancerClient) GetWorkRequest(ctx context.Context, request GetWorkRequestRequest) (response GetWorkRequestResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancerWorkRequests/{workRequestId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListBackendSets Lists all backend sets associated with a given load balancer. +func (client LoadBalancerClient) ListBackendSets(ctx context.Context, request ListBackendSetsRequest) (response ListBackendSetsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/backendSets", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListBackends Lists the backend servers for a given load balancer and backend set. +func (client LoadBalancerClient) ListBackends(ctx context.Context, request ListBackendsRequest) (response ListBackendsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListCertificates Lists all SSL certificates associated with a given load balancer. +func (client LoadBalancerClient) ListCertificates(ctx context.Context, request ListCertificatesRequest) (response ListCertificatesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/certificates", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListLoadBalancerHealths Lists the summary health statuses for all load balancers in the specified compartment. 
+func (client LoadBalancerClient) ListLoadBalancerHealths(ctx context.Context, request ListLoadBalancerHealthsRequest) (response ListLoadBalancerHealthsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancerHealths", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListLoadBalancers Lists all load balancers in the specified compartment. +func (client LoadBalancerClient) ListLoadBalancers(ctx context.Context, request ListLoadBalancersRequest) (response ListLoadBalancersResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListPolicies Lists the available load balancer policies. +func (client LoadBalancerClient) ListPolicies(ctx context.Context, request ListPoliciesRequest) (response ListPoliciesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancerPolicies", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListProtocols Lists all supported traffic protocols. +func (client LoadBalancerClient) ListProtocols(ctx context.Context, request ListProtocolsRequest) (response ListProtocolsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancerProtocols", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListShapes Lists the valid load balancer shapes. +func (client LoadBalancerClient) ListShapes(ctx context.Context, request ListShapesRequest) (response ListShapesResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancerShapes", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListWorkRequests Lists the work requests for a given load balancer. 
+func (client LoadBalancerClient) ListWorkRequests(ctx context.Context, request ListWorkRequestsRequest) (response ListWorkRequestsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/loadBalancers/{loadBalancerId}/workRequests", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateBackend Updates the configuration of a backend server within the specified backend set. +func (client LoadBalancerClient) UpdateBackend(ctx context.Context, request UpdateBackendRequest) (response UpdateBackendResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/backends/{backendName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateBackendSet Updates a backend set. +func (client LoadBalancerClient) UpdateBackendSet(ctx context.Context, request UpdateBackendSetRequest) (response UpdateBackendSetResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateHealthChecker Updates the health check policy for a given load balancer and backend set. +func (client LoadBalancerClient) UpdateHealthChecker(ctx context.Context, request UpdateHealthCheckerRequest) (response UpdateHealthCheckerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/loadBalancers/{loadBalancerId}/backendSets/{backendSetName}/healthChecker", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateListener Updates a listener for a given load balancer. +func (client LoadBalancerClient) UpdateListener(ctx context.Context, request UpdateListenerRequest) (response UpdateListenerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/loadBalancers/{loadBalancerId}/listeners/{listenerName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateLoadBalancer Updates a load balancer's configuration. 
+func (client LoadBalancerClient) UpdateLoadBalancer(ctx context.Context, request UpdateLoadBalancerRequest) (response UpdateLoadBalancerResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/loadBalancers/{loadBalancerId}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/session_persistence_configuration_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/session_persistence_configuration_details.go new file mode 100644 index 000000000..a83e81d99 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/session_persistence_configuration_details.go @@ -0,0 +1,37 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// SessionPersistenceConfigurationDetails The configuration details for implementing session persistence. Session persistence enables the Load Balancing +// Service to direct any number of requests that originate from a single logical client to a single backend web server. +// For more information, see Session Persistence (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Reference/sessionpersistence.htm). +// To disable session persistence on a running load balancer, use the +// UpdateBackendSet operation and specify "null" for the +// `SessionPersistenceConfigurationDetails` object. +// Example: `SessionPersistenceConfigurationDetails: null` +type SessionPersistenceConfigurationDetails struct { + + // The name of the cookie used to detect a session initiated by the backend server. Use '*' to specify + // that any cookie set by the backend causes the session to persist. + // Example: `myCookieName` + CookieName *string `mandatory:"true" json:"cookieName"` + + // Whether the load balancer is prevented from directing traffic from a persistent session client to + // a different backend server if the original server is unavailable. Defaults to false. + // Example: `true` + DisableFallback *bool `mandatory:"false" json:"disableFallback"` +} + +func (m SessionPersistenceConfigurationDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ssl_configuration.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ssl_configuration.go new file mode 100644 index 000000000..5f9c102dc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ssl_configuration.go @@ -0,0 +1,36 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// SslConfiguration A listener's SSL handling configuration. +// To use SSL, a listener must be associated with a Certificate. +type SslConfiguration struct { + + // A friendly name for the certificate bundle. It must be unique and it cannot be changed. + // Valid certificate bundle names include only alphanumeric characters, dashes, and underscores. 
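As an illustrative aside (again outside the vendored files), the "null" that disables session persistence maps naturally onto Go's pointer semantics: the field is simply left nil. A sketch using the UpdateBackendSetDetails type that appears later in this diff, assuming a configured client, the `common` package's pointer helpers, and example values for the policy and cookie name:

```
package main

import (
	"context"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/loadbalancer"
)

// setSessionPersistence rewrites a backend set, either attaching a cookie-based
// session persistence block or leaving it out. Because the SDK models optional
// JSON fields as pointers, the "null" described above is a nil pointer here.
// The caller supplies the backend set's current backends and health checker,
// since UpdateBackendSetDetails marks both as mandatory.
func setSessionPersistence(ctx context.Context, client loadbalancer.LoadBalancerClient,
	lbID, backendSetName string, backends []loadbalancer.BackendDetails,
	healthChecker *loadbalancer.HealthCheckerDetails, enable bool) error {

	details := loadbalancer.UpdateBackendSetDetails{
		Backends:      backends,
		HealthChecker: healthChecker,
		Policy:        common.String("ROUND_ROBIN"), // example policy; see ListPolicies
	}
	if enable {
		details.SessionPersistenceConfiguration = &loadbalancer.SessionPersistenceConfigurationDetails{
			CookieName:      common.String("*"), // persist on any cookie set by the backend
			DisableFallback: common.Bool(false),
		}
	}

	_, err := client.UpdateBackendSet(ctx, loadbalancer.UpdateBackendSetRequest{
		LoadBalancerId:          common.String(lbID),
		BackendSetName:          common.String(backendSetName),
		UpdateBackendSetDetails: details,
	})
	return err
}
```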
+ // Certificate bundle names cannot contain spaces. Avoid entering confidential information. + // Example: `My_certificate_bundle` + CertificateName *string `mandatory:"true" json:"certificateName"` + + // The maximum depth for peer certificate chain verification. + // Example: `3` + VerifyDepth *int `mandatory:"true" json:"verifyDepth"` + + // Whether the load balancer listener should verify peer certificates. + // Example: `true` + VerifyPeerCertificate *bool `mandatory:"true" json:"verifyPeerCertificate"` +} + +func (m SslConfiguration) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ssl_configuration_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ssl_configuration_details.go new file mode 100644 index 000000000..7b083fcf7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/ssl_configuration_details.go @@ -0,0 +1,35 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// SslConfigurationDetails The load balancer's SSL handling configuration details. +type SslConfigurationDetails struct { + + // A friendly name for the certificate bundle. It must be unique and it cannot be changed. + // Valid certificate bundle names include only alphanumeric characters, dashes, and underscores. + // Certificate bundle names cannot contain spaces. Avoid entering confidential information. + // Example: `My_certificate_bundle` + CertificateName *string `mandatory:"true" json:"certificateName"` + + // The maximum depth for peer certificate chain verification. + // Example: `3` + VerifyDepth *int `mandatory:"false" json:"verifyDepth"` + + // Whether the load balancer listener should verify peer certificates. + // Example: `true` + VerifyPeerCertificate *bool `mandatory:"false" json:"verifyPeerCertificate"` +} + +func (m SslConfigurationDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_details.go new file mode 100644 index 000000000..56efca6d9 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_details.go @@ -0,0 +1,44 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateBackendDetails The configuration details for updating a backend server. +type UpdateBackendDetails struct { + + // Whether the load balancer should treat this server as a backup unit. If `true`, the load balancer forwards no ingress + // traffic to this backend server unless all other backend servers not marked as "backup" fail the health check policy. + // Example: `true` + Backup *bool `mandatory:"true" json:"backup"` + + // Whether the load balancer should drain this server. Servers marked "drain" receive no new + // incoming traffic. + // Example: `true` + Drain *bool `mandatory:"true" json:"drain"` + + // Whether the load balancer should treat this server as offline. Offline servers receive no incoming + // traffic. 
+ // Example: `true` + Offline *bool `mandatory:"true" json:"offline"` + + // The load balancing policy weight assigned to the server. Backend servers with a higher weight receive a larger + // proportion of incoming traffic. For example, a server weighted '3' receives 3 times the number of new connections + // as a server weighted '1'. + // For more information on load balancing policies, see + // How Load Balancing Policies Work (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Reference/lbpolicies.htm). + // Example: `3` + Weight *int `mandatory:"true" json:"weight"` +} + +func (m UpdateBackendDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_request_response.go new file mode 100644 index 000000000..0d9915a19 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_request_response.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateBackendRequest wrapper for the UpdateBackend operation +type UpdateBackendRequest struct { + + // Details for updating a backend server. + UpdateBackendDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set and server. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set associated with the backend server. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The IP address and port of the backend server to update. + // Example: `1.1.1.7:42` + BackendName *string `mandatory:"true" contributesTo:"path" name:"backendName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request UpdateBackendRequest) String() string { + return common.PointerString(request) +} + +// UpdateBackendResponse wrapper for the UpdateBackend operation +type UpdateBackendResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. 
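Because every UpdateBackendDetails field is mandatory, an update must restate the backend's full configuration even when only one flag changes. A hedged sketch, assuming a configured client and the `common` package's pointer helpers, of draining one backend ahead of maintenance:

```
package main

import (
	"context"
	"fmt"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/loadbalancer"
)

// drainBackend marks one server as "drain" so it stops receiving new
// connections, e.g. ahead of maintenance. All four UpdateBackendDetails
// fields are mandatory, so the current weight and flags must be restated.
func drainBackend(ctx context.Context, client loadbalancer.LoadBalancerClient,
	lbID, backendSetName, backendName string) error {

	resp, err := client.UpdateBackend(ctx, loadbalancer.UpdateBackendRequest{
		LoadBalancerId: common.String(lbID),
		BackendSetName: common.String(backendSetName),
		BackendName:    common.String(backendName), // e.g. "10.0.0.7:8080"
		UpdateBackendDetails: loadbalancer.UpdateBackendDetails{
			Backup:  common.Bool(false),
			Drain:   common.Bool(true),
			Offline: common.Bool(false),
			Weight:  common.Int(1),
		},
		// Retrying with the same token after a timeout will not repeat the action.
		OpcRetryToken: common.String("drain-" + backendName),
	})
	if err != nil {
		return err
	}
	if resp.OpcWorkRequestId != nil {
		fmt.Println("work request:", *resp.OpcWorkRequestId)
	}
	return nil
}
```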
+ OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response UpdateBackendResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_set_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_set_details.go new file mode 100644 index 000000000..4da009e97 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_set_details.go @@ -0,0 +1,35 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateBackendSetDetails The configuration details for updating a load balancer backend set. +// For more information on backend set configuration, see +// Managing Backend Sets (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/tasks/managingbackendsets.htm). +type UpdateBackendSetDetails struct { + Backends []BackendDetails `mandatory:"true" json:"backends"` + + HealthChecker *HealthCheckerDetails `mandatory:"true" json:"healthChecker"` + + // The load balancer policy for the backend set. To get a list of available policies, use the + // ListPolicies operation. + // Example: `LEAST_CONNECTIONS` + Policy *string `mandatory:"true" json:"policy"` + + SessionPersistenceConfiguration *SessionPersistenceConfigurationDetails `mandatory:"false" json:"sessionPersistenceConfiguration"` + + SslConfiguration *SslConfigurationDetails `mandatory:"false" json:"sslConfiguration"` +} + +func (m UpdateBackendSetDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_set_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_set_request_response.go new file mode 100644 index 000000000..e099e1db4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_backend_set_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateBackendSetRequest wrapper for the UpdateBackendSet operation +type UpdateBackendSetRequest struct { + + // The details to update a backend set. + UpdateBackendSetDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the backend set. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set to update. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. 
Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request UpdateBackendSetRequest) String() string { + return common.PointerString(request) +} + +// UpdateBackendSetResponse wrapper for the UpdateBackendSet operation +type UpdateBackendSetResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response UpdateBackendSetResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_health_checker_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_health_checker_details.go new file mode 100644 index 000000000..5e2e8cda8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_health_checker_details.go @@ -0,0 +1,54 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateHealthCheckerDetails The health checker's configuration details. +type UpdateHealthCheckerDetails struct { + + // The interval between health checks, in milliseconds. + // Example: `30000` + IntervalInMillis *int `mandatory:"true" json:"intervalInMillis"` + + // The backend server port against which to run the health check. + // Example: `8080` + Port *int `mandatory:"true" json:"port"` + + // The protocol the health check must use; either HTTP or TCP. + // Example: `HTTP` + Protocol *string `mandatory:"true" json:"protocol"` + + // A regular expression for parsing the response body from the backend server. + // Example: `^(500|40[1348])$` + ResponseBodyRegex *string `mandatory:"true" json:"responseBodyRegex"` + + // The number of retries to attempt before a backend server is considered "unhealthy". + // Example: `3` + Retries *int `mandatory:"true" json:"retries"` + + // The status code a healthy backend server should return. + // Example: `200` + ReturnCode *int `mandatory:"true" json:"returnCode"` + + // The maximum time, in milliseconds, to wait for a reply to a health check. A health check is successful only if a reply + // returns within this timeout period. + // Example: `6000` + TimeoutInMillis *int `mandatory:"true" json:"timeoutInMillis"` + + // The path against which to run the health check. 
+ // Example: `/healthcheck` + UrlPath *string `mandatory:"false" json:"urlPath"` +} + +func (m UpdateHealthCheckerDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_health_checker_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_health_checker_request_response.go new file mode 100644 index 000000000..54ddccbbe --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_health_checker_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateHealthCheckerRequest wrapper for the UpdateHealthChecker operation +type UpdateHealthCheckerRequest struct { + + // The health check policy configuration details. + UpdateHealthCheckerDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the health check policy to be updated. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the backend set associated with the health check policy to be retrieved. + // Example: `My_backend_set` + BackendSetName *string `mandatory:"true" contributesTo:"path" name:"backendSetName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request UpdateHealthCheckerRequest) String() string { + return common.PointerString(request) +} + +// UpdateHealthCheckerResponse wrapper for the UpdateHealthChecker operation +type UpdateHealthCheckerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response UpdateHealthCheckerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_listener_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_listener_details.go new file mode 100644 index 000000000..c935eac54 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_listener_details.go @@ -0,0 +1,36 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
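UpdateHealthCheckerDetails likewise requires every field except UrlPath, so a health-check change resends the whole policy. A sketch under the same assumptions as above (configured client, `common` pointer helpers), with example values only:

```
package main

import (
	"context"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/loadbalancer"
)

// tightenHealthCheck replaces the health check policy of a backend set.
// Every field of UpdateHealthCheckerDetails except UrlPath is mandatory,
// so the full policy has to be supplied, not just the values being changed.
func tightenHealthCheck(ctx context.Context, client loadbalancer.LoadBalancerClient,
	lbID, backendSetName string) error {

	_, err := client.UpdateHealthChecker(ctx, loadbalancer.UpdateHealthCheckerRequest{
		LoadBalancerId: common.String(lbID),
		BackendSetName: common.String(backendSetName),
		UpdateHealthCheckerDetails: loadbalancer.UpdateHealthCheckerDetails{
			Protocol:          common.String("HTTP"),
			Port:              common.Int(8080),
			UrlPath:           common.String("/healthcheck"),
			ReturnCode:        common.Int(200),
			Retries:           common.Int(3),
			IntervalInMillis:  common.Int(10000),
			TimeoutInMillis:   common.Int(3000),
			ResponseBodyRegex: common.String(".*"),
		},
	})
	return err
}
```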
+ +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateListenerDetails The configuration details for updating a listener. +type UpdateListenerDetails struct { + + // The name of the associated backend set. + DefaultBackendSetName *string `mandatory:"true" json:"defaultBackendSetName"` + + // The communication port for the listener. + // Example: `80` + Port *int `mandatory:"true" json:"port"` + + // The protocol on which the listener accepts connection requests. + // To get a list of valid protocols, use the ListProtocols + // operation. + // Example: `HTTP` + Protocol *string `mandatory:"true" json:"protocol"` + + SslConfiguration *SslConfigurationDetails `mandatory:"false" json:"sslConfiguration"` +} + +func (m UpdateListenerDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_listener_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_listener_request_response.go new file mode 100644 index 000000000..b07a7a250 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_listener_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateListenerRequest wrapper for the UpdateListener operation +type UpdateListenerRequest struct { + + // Details to update a listener. + UpdateListenerDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer associated with the listener to update. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The name of the listener to update. + // Example: `My listener` + ListenerName *string `mandatory:"true" contributesTo:"path" name:"listenerName"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request UpdateListenerRequest) String() string { + return common.PointerString(request) +} + +// UpdateListenerResponse wrapper for the UpdateListener operation +type UpdateListenerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. 
+ OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response UpdateListenerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_load_balancer_details.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_load_balancer_details.go new file mode 100644 index 000000000..8a66761f4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_load_balancer_details.go @@ -0,0 +1,26 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateLoadBalancerDetails Configuration details to update a load balancer. +type UpdateLoadBalancerDetails struct { + + // The user-friendly display name for the load balancer. It does not have to be unique, and it is changeable. + // Avoid entering confidential information. + // Example: `My load balancer` + DisplayName *string `mandatory:"true" json:"displayName"` +} + +func (m UpdateLoadBalancerDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_load_balancer_request_response.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_load_balancer_request_response.go new file mode 100644 index 000000000..15101ba1c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/update_load_balancer_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateLoadBalancerRequest wrapper for the UpdateLoadBalancer operation +type UpdateLoadBalancerRequest struct { + + // The details for updating a load balancer's configuration. + UpdateLoadBalancerDetails `contributesTo:"body"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer to update. + LoadBalancerId *string `mandatory:"true" contributesTo:"path" name:"loadBalancerId"` + + // The unique Oracle-assigned identifier for the request. If you need to contact Oracle about a + // particular request, please provide the request ID. + OpcRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-request-id"` + + // A token that uniquely identifies a request so it can be retried in case of a timeout or + // server error without risk of executing that same action again. Retry tokens expire after 24 + // hours, but can be invalidated before then due to conflicting operations (e.g., if a resource + // has been deleted and purged from the system, then a retry of the original creation request + // may be rejected). + OpcRetryToken *string `mandatory:"false" contributesTo:"header" name:"opc-retry-token"` +} + +func (request UpdateLoadBalancerRequest) String() string { + return common.PointerString(request) +} + +// UpdateLoadBalancerResponse wrapper for the UpdateLoadBalancer operation +type UpdateLoadBalancerResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about + // a particular request, please provide the request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + OpcWorkRequestId *string `presentIn:"header" name:"opc-work-request-id"` +} + +func (response UpdateLoadBalancerResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/work_request.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/work_request.go new file mode 100644 index 000000000..0a8a007f4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/work_request.go @@ -0,0 +1,82 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// WorkRequest Many of the API requests you use to create and configure load balancing do not take effect immediately. +// In these cases, the request spawns an asynchronous work flow to fulfill the request. WorkRequest objects provide visibility +// for in-progress work flows. +// For more information about work requests, see Viewing the State of a Work Request (https://docs.us-phoenix-1.oraclecloud.com/Content/Balance/Tasks/viewingworkrequest.htm). +type WorkRequest struct { + ErrorDetails []WorkRequestError `mandatory:"true" json:"errorDetails"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the work request. + Id *string `mandatory:"true" json:"id"` + + // The current state of the work request. + LifecycleState WorkRequestLifecycleStateEnum `mandatory:"true" json:"lifecycleState"` + + // The OCID (https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/identifiers.htm) of the load balancer with which the work request + // is associated. + LoadBalancerId *string `mandatory:"true" json:"loadBalancerId"` + + // A collection of data, related to the load balancer provisioning process, that helps with debugging in the event of failure. + // Possible data elements include: + // - workflow name + // - event ID + // - work request ID + // - load balancer ID + // - workflow completion message + Message *string `mandatory:"true" json:"message"` + + // The date and time the work request was created, in the format defined by RFC3339. + // Example: `2016-08-25T21:10:29.600Z` + TimeAccepted *common.SDKTime `mandatory:"true" json:"timeAccepted"` + + // The type of action the work request represents. + Type *string `mandatory:"true" json:"type"` + + // The date and time the work request was completed, in the format defined by RFC3339. 
+ // Example: `2016-08-25T21:10:29.600Z` + TimeFinished *common.SDKTime `mandatory:"false" json:"timeFinished"` +} + +func (m WorkRequest) String() string { + return common.PointerString(m) +} + +// WorkRequestLifecycleStateEnum Enum with underlying type: string +type WorkRequestLifecycleStateEnum string + +// Set of constants representing the allowable values for WorkRequestLifecycleState +const ( + WorkRequestLifecycleStateAccepted WorkRequestLifecycleStateEnum = "ACCEPTED" + WorkRequestLifecycleStateInProgress WorkRequestLifecycleStateEnum = "IN_PROGRESS" + WorkRequestLifecycleStateFailed WorkRequestLifecycleStateEnum = "FAILED" + WorkRequestLifecycleStateSucceeded WorkRequestLifecycleStateEnum = "SUCCEEDED" +) + +var mappingWorkRequestLifecycleState = map[string]WorkRequestLifecycleStateEnum{ + "ACCEPTED": WorkRequestLifecycleStateAccepted, + "IN_PROGRESS": WorkRequestLifecycleStateInProgress, + "FAILED": WorkRequestLifecycleStateFailed, + "SUCCEEDED": WorkRequestLifecycleStateSucceeded, +} + +// GetWorkRequestLifecycleStateEnumValues Enumerates the set of values for WorkRequestLifecycleState +func GetWorkRequestLifecycleStateEnumValues() []WorkRequestLifecycleStateEnum { + values := make([]WorkRequestLifecycleStateEnum, 0) + for _, v := range mappingWorkRequestLifecycleState { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/loadbalancer/work_request_error.go b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/work_request_error.go new file mode 100644 index 000000000..6b755f4b1 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/loadbalancer/work_request_error.go @@ -0,0 +1,48 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Load Balancing Service API +// +// API for the Load Balancing Service +// + +package loadbalancer + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// WorkRequestError An object returned in the event of a work request error. +type WorkRequestError struct { + ErrorCode WorkRequestErrorErrorCodeEnum `mandatory:"true" json:"errorCode"` + + // A human-readable error string. 
+ Message *string `mandatory:"true" json:"message"` +} + +func (m WorkRequestError) String() string { + return common.PointerString(m) +} + +// WorkRequestErrorErrorCodeEnum Enum with underlying type: string +type WorkRequestErrorErrorCodeEnum string + +// Set of constants representing the allowable values for WorkRequestErrorErrorCode +const ( + WorkRequestErrorErrorCodeBadInput WorkRequestErrorErrorCodeEnum = "BAD_INPUT" + WorkRequestErrorErrorCodeInternalError WorkRequestErrorErrorCodeEnum = "INTERNAL_ERROR" +) + +var mappingWorkRequestErrorErrorCode = map[string]WorkRequestErrorErrorCodeEnum{ + "BAD_INPUT": WorkRequestErrorErrorCodeBadInput, + "INTERNAL_ERROR": WorkRequestErrorErrorCodeInternalError, +} + +// GetWorkRequestErrorErrorCodeEnumValues Enumerates the set of values for WorkRequestErrorErrorCode +func GetWorkRequestErrorErrorCodeEnumValues() []WorkRequestErrorErrorCodeEnum { + values := make([]WorkRequestErrorErrorCodeEnum, 0) + for _, v := range mappingWorkRequestErrorErrorCode { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/abort_multipart_upload_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/abort_multipart_upload_request_response.go new file mode 100644 index 000000000..8858e1066 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/abort_multipart_upload_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// AbortMultipartUploadRequest wrapper for the AbortMultipartUpload operation +type AbortMultipartUploadRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. + // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The upload ID for a multipart upload. + UploadId *string `mandatory:"true" contributesTo:"query" name:"uploadId"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request AbortMultipartUploadRequest) String() string { + return common.PointerString(request) +} + +// AbortMultipartUploadResponse wrapper for the AbortMultipartUpload operation +type AbortMultipartUploadResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response AbortMultipartUploadResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/bucket.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/bucket.go new file mode 100644 index 000000000..81462ba1a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/bucket.go @@ -0,0 +1,73 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// Bucket To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type Bucket struct { + + // The namespace in which the bucket lives. + Namespace *string `mandatory:"true" json:"namespace"` + + // The name of the bucket. + Name *string `mandatory:"true" json:"name"` + + // The compartment ID in which the bucket is authorized. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // Arbitrary string keys and values for user-defined metadata. + Metadata map[string]string `mandatory:"true" json:"metadata"` + + // The OCID of the user who created the bucket. + CreatedBy *string `mandatory:"true" json:"createdBy"` + + // The date and time at which the bucket was created. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The entity tag for the bucket. + Etag *string `mandatory:"true" json:"etag"` + + // The type of public access available on this bucket. Allows authenticated caller to access the bucket or + // contents of this bucket. By default a bucket is set to NoPublicAccess. It is treated as NoPublicAccess + // when this value is not specified. When the type is NoPublicAccess the bucket does not allow any public access. + // When the type is ObjectRead the bucket allows public access to the GetObject, HeadObject, ListObjects. 
+ PublicAccessType BucketPublicAccessTypeEnum `mandatory:"false" json:"publicAccessType,omitempty"` +} + +func (m Bucket) String() string { + return common.PointerString(m) +} + +// BucketPublicAccessTypeEnum Enum with underlying type: string +type BucketPublicAccessTypeEnum string + +// Set of constants representing the allowable values for BucketPublicAccessType +const ( + BucketPublicAccessTypeNopublicaccess BucketPublicAccessTypeEnum = "NoPublicAccess" + BucketPublicAccessTypeObjectread BucketPublicAccessTypeEnum = "ObjectRead" +) + +var mappingBucketPublicAccessType = map[string]BucketPublicAccessTypeEnum{ + "NoPublicAccess": BucketPublicAccessTypeNopublicaccess, + "ObjectRead": BucketPublicAccessTypeObjectread, +} + +// GetBucketPublicAccessTypeEnumValues Enumerates the set of values for BucketPublicAccessType +func GetBucketPublicAccessTypeEnumValues() []BucketPublicAccessTypeEnum { + values := make([]BucketPublicAccessTypeEnum, 0) + for _, v := range mappingBucketPublicAccessType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/bucket_summary.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/bucket_summary.go new file mode 100644 index 000000000..028177a52 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/bucket_summary.go @@ -0,0 +1,41 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// BucketSummary To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type BucketSummary struct { + + // The namespace in which the bucket lives. + Namespace *string `mandatory:"true" json:"namespace"` + + // The name of the bucket. + Name *string `mandatory:"true" json:"name"` + + // The compartment ID in which the bucket is authorized. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // The OCID of the user who created the bucket. + CreatedBy *string `mandatory:"true" json:"createdBy"` + + // The date and time at which the bucket was created. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // The entity tag for the bucket. + Etag *string `mandatory:"true" json:"etag"` +} + +func (m BucketSummary) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_details.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_details.go new file mode 100644 index 000000000..c285f7383 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_details.go @@ -0,0 +1,30 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CommitMultipartUploadDetails To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. 
If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type CommitMultipartUploadDetails struct { + + // The part numbers and ETags for the parts to be committed. + PartsToCommit []CommitMultipartUploadPartDetails `mandatory:"true" json:"partsToCommit"` + + // The part numbers for the parts to be excluded from the completed object. + // Each part created for this upload must be in either partsToExclude or partsToCommit, but cannot be in both. + PartsToExclude []int `mandatory:"false" json:"partsToExclude"` +} + +func (m CommitMultipartUploadDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_part_details.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_part_details.go new file mode 100644 index 000000000..a4d053b37 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_part_details.go @@ -0,0 +1,29 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CommitMultipartUploadPartDetails To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type CommitMultipartUploadPartDetails struct { + + // The part number for this part. + PartNum *int `mandatory:"true" json:"partNum"` + + // The ETag returned when this part was uploaded. + Etag *string `mandatory:"true" json:"etag"` +} + +func (m CommitMultipartUploadPartDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_request_response.go new file mode 100644 index 000000000..101834254 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/commit_multipart_upload_request_response.go @@ -0,0 +1,76 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CommitMultipartUploadRequest wrapper for the CommitMultipartUpload operation +type CommitMultipartUploadRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. + // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The upload ID for a multipart upload. + UploadId *string `mandatory:"true" contributesTo:"query" name:"uploadId"` + + // The part numbers and ETags for the parts you want to commit. + CommitMultipartUploadDetails `contributesTo:"body"` + + // The entity tag to match. 
For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. + IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request CommitMultipartUploadRequest) String() string { + return common.PointerString(request) +} + +// CommitMultipartUploadResponse wrapper for the CommitMultipartUpload operation +type CommitMultipartUploadResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // Base-64 representation of the multipart object hash. + // The multipart object hash is calculated by taking the MD5 hashes of the parts passed to this call, + // concatenating the binary representation of those hashes in order of their part numbers, + // and then calculating the MD5 hash of the concatenated values. + OpcMultipartMd5 *string `presentIn:"header" name:"opc-multipart-md5"` + + // The entity tag for the object. + ETag *string `presentIn:"header" name:"etag"` + + // The time the object was last modified, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.29. + LastModified *common.SDKTime `presentIn:"header" name:"last-modified"` +} + +func (response CommitMultipartUploadResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_bucket_details.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_bucket_details.go new file mode 100644 index 000000000..b6be2982c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_bucket_details.go @@ -0,0 +1,62 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateBucketDetails To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type CreateBucketDetails struct { + + // The name of the bucket. Valid characters are uppercase or lowercase letters, + // numbers, and dashes. Bucket names must be unique within the namespace. 
+ Name *string `mandatory:"true" json:"name"` + + // The ID of the compartment in which to create the bucket. + CompartmentId *string `mandatory:"true" json:"compartmentId"` + + // Arbitrary string, up to 4KB, of keys and values for user-defined metadata. + Metadata map[string]string `mandatory:"false" json:"metadata"` + + // The type of public access available on this bucket. Allows authenticated caller to access the bucket or + // contents of this bucket. By default a bucket is set to NoPublicAccess. It is treated as NoPublicAccess + // when this value is not specified. When the type is NoPublicAccess the bucket does not allow any public access. + // When the type is ObjectRead the bucket allows public access to the GetObject, HeadObject, ListObjects. + PublicAccessType CreateBucketDetailsPublicAccessTypeEnum `mandatory:"false" json:"publicAccessType,omitempty"` +} + +func (m CreateBucketDetails) String() string { + return common.PointerString(m) +} + +// CreateBucketDetailsPublicAccessTypeEnum Enum with underlying type: string +type CreateBucketDetailsPublicAccessTypeEnum string + +// Set of constants representing the allowable values for CreateBucketDetailsPublicAccessType +const ( + CreateBucketDetailsPublicAccessTypeNopublicaccess CreateBucketDetailsPublicAccessTypeEnum = "NoPublicAccess" + CreateBucketDetailsPublicAccessTypeObjectread CreateBucketDetailsPublicAccessTypeEnum = "ObjectRead" +) + +var mappingCreateBucketDetailsPublicAccessType = map[string]CreateBucketDetailsPublicAccessTypeEnum{ + "NoPublicAccess": CreateBucketDetailsPublicAccessTypeNopublicaccess, + "ObjectRead": CreateBucketDetailsPublicAccessTypeObjectread, +} + +// GetCreateBucketDetailsPublicAccessTypeEnumValues Enumerates the set of values for CreateBucketDetailsPublicAccessType +func GetCreateBucketDetailsPublicAccessTypeEnumValues() []CreateBucketDetailsPublicAccessTypeEnum { + values := make([]CreateBucketDetailsPublicAccessTypeEnum, 0) + for _, v := range mappingCreateBucketDetailsPublicAccessType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_bucket_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_bucket_request_response.go new file mode 100644 index 000000000..00a7aa13c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_bucket_request_response.go @@ -0,0 +1,53 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateBucketRequest wrapper for the CreateBucket operation +type CreateBucketRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // Request object for creating a bucket. + CreateBucketDetails `contributesTo:"body"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request CreateBucketRequest) String() string { + return common.PointerString(request) +} + +// CreateBucketResponse wrapper for the CreateBucket operation +type CreateBucketResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Bucket instance + Bucket `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. 
+ OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The entity tag for the bucket that was created. + ETag *string `presentIn:"header" name:"etag"` + + // The full path to the bucket that was created. + Location *string `presentIn:"header" name:"location"` +} + +func (response CreateBucketResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_multipart_upload_details.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_multipart_upload_details.go new file mode 100644 index 000000000..f21262d54 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_multipart_upload_details.go @@ -0,0 +1,39 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreateMultipartUploadDetails To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type CreateMultipartUploadDetails struct { + + // the name of the object to which this multi-part upload is targetted. + Object *string `mandatory:"true" json:"object"` + + // the content type of the object to upload. + ContentType *string `mandatory:"false" json:"contentType"` + + // the content language of the object to upload. + ContentLanguage *string `mandatory:"false" json:"contentLanguage"` + + // the content encoding of the object to upload. + ContentEncoding *string `mandatory:"false" json:"contentEncoding"` + + // Arbitrary string keys and values for the user-defined metadata for the object. + // Keys must be in "opc-meta-*" format. + Metadata map[string]string `mandatory:"false" json:"metadata"` +} + +func (m CreateMultipartUploadDetails) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_multipart_upload_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_multipart_upload_request_response.go new file mode 100644 index 000000000..ca790ade0 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_multipart_upload_request_response.go @@ -0,0 +1,63 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreateMultipartUploadRequest wrapper for the CreateMultipartUpload operation +type CreateMultipartUploadRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // Request object for creating a multi-part upload. 
+ CreateMultipartUploadDetails `contributesTo:"body"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. + IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request CreateMultipartUploadRequest) String() string { + return common.PointerString(request) +} + +// CreateMultipartUploadResponse wrapper for the CreateMultipartUpload operation +type CreateMultipartUploadResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The MultipartUpload instance + MultipartUpload `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The full path to the new upload. + Location *string `presentIn:"header" name:"location"` +} + +func (response CreateMultipartUploadResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_preauthenticated_request_details.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_preauthenticated_request_details.go new file mode 100644 index 000000000..498dc6dd4 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_preauthenticated_request_details.go @@ -0,0 +1,61 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// CreatePreauthenticatedRequestDetails The representation of CreatePreauthenticatedRequestDetails +type CreatePreauthenticatedRequestDetails struct { + + // user specified name for pre-authenticated request. Helpful for management purposes. + Name *string `mandatory:"true" json:"name"` + + // the operation that can be performed on this resource e.g PUT or GET. + AccessType CreatePreauthenticatedRequestDetailsAccessTypeEnum `mandatory:"true" json:"accessType"` + + // The expiration date after which the pre-authenticated request will no longer be valid per spec + // RFC 3339 (https://tools.ietf.org/rfc/rfc3339) + TimeExpires *common.SDKTime `mandatory:"true" json:"timeExpires"` + + // Name of object that is being granted access to by the pre-authenticated request. 
This can be null and that would mean that the pre-authenticated request is granting access to the entire bucket + ObjectName *string `mandatory:"false" json:"objectName"` +} + +func (m CreatePreauthenticatedRequestDetails) String() string { + return common.PointerString(m) +} + +// CreatePreauthenticatedRequestDetailsAccessTypeEnum Enum with underlying type: string +type CreatePreauthenticatedRequestDetailsAccessTypeEnum string + +// Set of constants representing the allowable values for CreatePreauthenticatedRequestDetailsAccessType +const ( + CreatePreauthenticatedRequestDetailsAccessTypeObjectread CreatePreauthenticatedRequestDetailsAccessTypeEnum = "ObjectRead" + CreatePreauthenticatedRequestDetailsAccessTypeObjectwrite CreatePreauthenticatedRequestDetailsAccessTypeEnum = "ObjectWrite" + CreatePreauthenticatedRequestDetailsAccessTypeObjectreadwrite CreatePreauthenticatedRequestDetailsAccessTypeEnum = "ObjectReadWrite" + CreatePreauthenticatedRequestDetailsAccessTypeAnyobjectwrite CreatePreauthenticatedRequestDetailsAccessTypeEnum = "AnyObjectWrite" +) + +var mappingCreatePreauthenticatedRequestDetailsAccessType = map[string]CreatePreauthenticatedRequestDetailsAccessTypeEnum{ + "ObjectRead": CreatePreauthenticatedRequestDetailsAccessTypeObjectread, + "ObjectWrite": CreatePreauthenticatedRequestDetailsAccessTypeObjectwrite, + "ObjectReadWrite": CreatePreauthenticatedRequestDetailsAccessTypeObjectreadwrite, + "AnyObjectWrite": CreatePreauthenticatedRequestDetailsAccessTypeAnyobjectwrite, +} + +// GetCreatePreauthenticatedRequestDetailsAccessTypeEnumValues Enumerates the set of values for CreatePreauthenticatedRequestDetailsAccessType +func GetCreatePreauthenticatedRequestDetailsAccessTypeEnumValues() []CreatePreauthenticatedRequestDetailsAccessTypeEnum { + values := make([]CreatePreauthenticatedRequestDetailsAccessTypeEnum, 0) + for _, v := range mappingCreatePreauthenticatedRequestDetailsAccessType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_preauthenticated_request_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_preauthenticated_request_request_response.go new file mode 100644 index 000000000..f0580b6d1 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/create_preauthenticated_request_request_response.go @@ -0,0 +1,51 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// CreatePreauthenticatedRequestRequest wrapper for the CreatePreauthenticatedRequest operation +type CreatePreauthenticatedRequestRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // details for creating the pre-authenticated request. + CreatePreauthenticatedRequestDetails `contributesTo:"body"` + + // The client request ID for tracing. 
+ OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request CreatePreauthenticatedRequestRequest) String() string { + return common.PointerString(request) +} + +// CreatePreauthenticatedRequestResponse wrapper for the CreatePreauthenticatedRequest operation +type CreatePreauthenticatedRequestResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The PreauthenticatedRequest instance + PreauthenticatedRequest `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response CreatePreauthenticatedRequestResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_bucket_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_bucket_request_response.go new file mode 100644 index 000000000..71385a147 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_bucket_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteBucketRequest wrapper for the DeleteBucket operation +type DeleteBucketRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request DeleteBucketRequest) String() string { + return common.PointerString(request) +} + +// DeleteBucketResponse wrapper for the DeleteBucket operation +type DeleteBucketResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeleteBucketResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_object_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_object_request_response.go new file mode 100644 index 000000000..ae33f3a61 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_object_request_response.go @@ -0,0 +1,56 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeleteObjectRequest wrapper for the DeleteObject operation +type DeleteObjectRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. + // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request DeleteObjectRequest) String() string { + return common.PointerString(request) +} + +// DeleteObjectResponse wrapper for the DeleteObject operation +type DeleteObjectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The time the object was deleted, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.29. + LastModified *common.SDKTime `presentIn:"header" name:"last-modified"` +} + +func (response DeleteObjectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_preauthenticated_request_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_preauthenticated_request_request_response.go new file mode 100644 index 000000000..675d24927 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/delete_preauthenticated_request_request_response.go @@ -0,0 +1,49 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// DeletePreauthenticatedRequestRequest wrapper for the DeletePreauthenticatedRequest operation +type DeletePreauthenticatedRequestRequest struct { + + // The top-level namespace used for the request. 
+ NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The unique identifier for the pre-authenticated request (PAR). This can be used to manage the PAR + // such as GET or DELETE the PAR + ParId *string `mandatory:"true" contributesTo:"path" name:"parId"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request DeletePreauthenticatedRequestRequest) String() string { + return common.PointerString(request) +} + +// DeletePreauthenticatedRequestResponse wrapper for the DeletePreauthenticatedRequest operation +type DeletePreauthenticatedRequestResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response DeletePreauthenticatedRequestResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_bucket_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_bucket_request_response.go new file mode 100644 index 000000000..4284a4a56 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_bucket_request_response.go @@ -0,0 +1,66 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetBucketRequest wrapper for the GetBucket operation +type GetBucketRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. + IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"` + + // The client request ID for tracing. 
+ OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request GetBucketRequest) String() string { + return common.PointerString(request) +} + +// GetBucketResponse wrapper for the GetBucket operation +type GetBucketResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Bucket instance + Bucket `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The current entity tag for the bucket. + ETag *string `presentIn:"header" name:"etag"` + + // Flag to indicate whether or not the object was modified. If this is true, + // the getter for the object itself will return null. Callers should check this + // if they specified one of the request params that might result in a conditional + // response (like 'if-match'/'if-none-match'). + IsNotModified bool +} + +func (response GetBucketResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_namespace_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_namespace_request_response.go new file mode 100644 index 000000000..043ae9c99 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_namespace_request_response.go @@ -0,0 +1,34 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetNamespaceRequest wrapper for the GetNamespace operation +type GetNamespaceRequest struct { + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request GetNamespaceRequest) String() string { + return common.PointerString(request) +} + +// GetNamespaceResponse wrapper for the GetNamespace operation +type GetNamespaceResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The string instance + Value *string `presentIn:"body"` +} + +func (response GetNamespaceResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_object_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_object_request_response.go new file mode 100644 index 000000000..f75e36b74 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_object_request_response.go @@ -0,0 +1,107 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "io" + "net/http" +) + +// GetObjectRequest wrapper for the GetObject operation +type GetObjectRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. 
+ // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. + IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` + + // Optional byte range to fetch, as described in RFC 7233 (https://tools.ietf.org/rfc/rfc7233), section 2.1. + // Note, only a single range of bytes is supported. + Range *string `mandatory:"false" contributesTo:"header" name:"range"` +} + +func (request GetObjectRequest) String() string { + return common.PointerString(request) +} + +// GetObjectResponse wrapper for the GetObject operation +type GetObjectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The io.ReadCloser instance + Content io.ReadCloser `presentIn:"body" encoding:"binary"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The entity tag for the object. + ETag *string `presentIn:"header" name:"etag"` + + // The user-defined metadata for the object. + OpcMeta map[string]string `presentIn:"header-collection" prefix:"opc-meta-"` + + // The object size in bytes. + ContentLength *int `presentIn:"header" name:"content-length"` + + // Content-Range header for range requests, per RFC 7233 (https://tools.ietf.org/rfc/rfc7233), section 4.2. + ContentRange *string `presentIn:"header" name:"content-range"` + + // Content-MD5 header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.15. + // Unavailable for objects uploaded using multipart upload. + ContentMd5 *string `presentIn:"header" name:"content-md5"` + + // Only applicable to objects uploaded using multipart upload. + // Base-64 representation of the multipart object hash. + // The multipart object hash is calculated by taking the MD5 hashes of the parts, + // concatenating the binary representation of those hashes in order of their part numbers, + // and then calculating the MD5 hash of the concatenated values. + OpcMultipartMd5 *string `presentIn:"header" name:"opc-multipart-md5"` + + // Content-Type header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.17. + ContentType *string `presentIn:"header" name:"content-type"` + + // Content-Language header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.12. + ContentLanguage *string `presentIn:"header" name:"content-language"` + + // Content-Encoding header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.11. 
+ ContentEncoding *string `presentIn:"header" name:"content-encoding"` + + // The object modification time, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.29. + LastModified *common.SDKTime `presentIn:"header" name:"last-modified"` + + // Flag to indicate whether or not the object was modified. If this is true, + // the getter for the object itself will return null. Callers should check this + // if they specified one of the request params that might result in a conditional + // response (like 'if-match'/'if-none-match'). + IsNotModified bool +} + +func (response GetObjectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_preauthenticated_request_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_preauthenticated_request_request_response.go new file mode 100644 index 000000000..0e41fff80 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/get_preauthenticated_request_request_response.go @@ -0,0 +1,52 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// GetPreauthenticatedRequestRequest wrapper for the GetPreauthenticatedRequest operation +type GetPreauthenticatedRequestRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The unique identifier for the pre-authenticated request (PAR). This can be used to manage the PAR + // such as GET or DELETE the PAR + ParId *string `mandatory:"true" contributesTo:"path" name:"parId"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request GetPreauthenticatedRequestRequest) String() string { + return common.PointerString(request) +} + +// GetPreauthenticatedRequestResponse wrapper for the GetPreauthenticatedRequest operation +type GetPreauthenticatedRequestResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The PreauthenticatedRequestSummary instance + PreauthenticatedRequestSummary `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response GetPreauthenticatedRequestResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/head_bucket_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/head_bucket_request_response.go new file mode 100644 index 000000000..cca2efc72 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/head_bucket_request_response.go @@ -0,0 +1,63 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// HeadBucketRequest wrapper for the HeadBucket operation +type HeadBucketRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. + IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request HeadBucketRequest) String() string { + return common.PointerString(request) +} + +// HeadBucketResponse wrapper for the HeadBucket operation +type HeadBucketResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The current entity tag for the bucket. + ETag *string `presentIn:"header" name:"etag"` + + // Flag to indicate whether or not the object was modified. If this is true, + // the getter for the object itself will return null. Callers should check this + // if they specified one of the request params that might result in a conditional + // response (like 'if-match'/'if-none-match'). + IsNotModified bool +} + +func (response HeadBucketResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/head_object_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/head_object_request_response.go new file mode 100644 index 000000000..b058a2d01 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/head_object_request_response.go @@ -0,0 +1,96 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// HeadObjectRequest wrapper for the HeadObject operation +type HeadObjectRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. 
+ // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. + IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request HeadObjectRequest) String() string { + return common.PointerString(request) +} + +// HeadObjectResponse wrapper for the HeadObject operation +type HeadObjectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The entity tag for the object. + ETag *string `presentIn:"header" name:"etag"` + + // The user-defined metadata for the object. + OpcMeta map[string]string `presentIn:"header-collection" prefix:"opc-meta-"` + + // The object size in bytes. + ContentLength *int `presentIn:"header" name:"content-length"` + + // Content-MD5 header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.15. + // Unavailable for objects uploaded using multipart upload. + ContentMd5 *string `presentIn:"header" name:"content-md5"` + + // Only applicable to objects uploaded using multipart upload. + // Base-64 representation of the multipart object hash. + // The multipart object hash is calculated by taking the MD5 hashes of the parts, + // concatenating the binary representation of those hashes in order of their part numbers, + // and then calculating the MD5 hash of the concatenated values. + OpcMultipartMd5 *string `presentIn:"header" name:"opc-multipart-md5"` + + // Content-Type header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.17. + ContentType *string `presentIn:"header" name:"content-type"` + + // Content-Language header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.12. + ContentLanguage *string `presentIn:"header" name:"content-language"` + + // Content-Encoding header, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.11. + ContentEncoding *string `presentIn:"header" name:"content-encoding"` + + // The object modification time, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.29. + LastModified *common.SDKTime `presentIn:"header" name:"last-modified"` + + // Flag to indicate whether or not the object was modified. If this is true, + // the getter for the object itself will return null. 
Callers should check this + // if they specified one of the request params that might result in a conditional + // response (like 'if-match'/'if-none-match'). + IsNotModified bool +} + +func (response HeadObjectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_buckets_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_buckets_request_response.go new file mode 100644 index 000000000..09474ba93 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_buckets_request_response.go @@ -0,0 +1,59 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListBucketsRequest wrapper for the ListBuckets operation +type ListBucketsRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The ID of the compartment in which to create the bucket. + CompartmentId *string `mandatory:"true" contributesTo:"query" name:"compartmentId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The page at which to start retrieving results. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request ListBucketsRequest) String() string { + return common.PointerString(request) +} + +// ListBucketsResponse wrapper for the ListBuckets operation +type ListBucketsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []BucketSummary instance + Items []BucketSummary `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of `Bucket`s. If this header appears in the response, then this + // is a partial list of buckets. Include this value as the `page` parameter in a subsequent + // GET request to get the next batch of buckets. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#nine). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListBucketsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_multipart_upload_parts_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_multipart_upload_parts_request_response.go new file mode 100644 index 000000000..367ea7af7 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_multipart_upload_parts_request_response.go @@ -0,0 +1,67 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. 
+ +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListMultipartUploadPartsRequest wrapper for the ListMultipartUploadParts operation +type ListMultipartUploadPartsRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. + // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The upload ID for a multipart upload. + UploadId *string `mandatory:"true" contributesTo:"query" name:"uploadId"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The page at which to start retrieving results. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request ListMultipartUploadPartsRequest) String() string { + return common.PointerString(request) +} + +// ListMultipartUploadPartsResponse wrapper for the ListMultipartUploadParts operation +type ListMultipartUploadPartsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []MultipartUploadPartSummary instance + Items []MultipartUploadPartSummary `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of `MultipartUploadPartSummary`s. If this header appears in the response, + // then this is a partial list of object parts. Include this value as the `page` parameter in a subsequent + // GET request to get the next batch of object parts. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListMultipartUploadPartsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_multipart_uploads_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_multipart_uploads_request_response.go new file mode 100644 index 000000000..259e0630e --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_multipart_uploads_request_response.go @@ -0,0 +1,60 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListMultipartUploadsRequest wrapper for the ListMultipartUploads operation +type ListMultipartUploadsRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. 
+ // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The page at which to start retrieving results. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request ListMultipartUploadsRequest) String() string { + return common.PointerString(request) +} + +// ListMultipartUploadsResponse wrapper for the ListMultipartUploads operation +type ListMultipartUploadsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []MultipartUpload instance + Items []MultipartUpload `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of `MultipartUpload`s. If this header appears in the response, then + // this is a partial list of multipart uploads. Include this value as the `page` parameter in a subsequent + // GET request. For information about pagination, see + // List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm). + OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListMultipartUploadsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_objects.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_objects.go new file mode 100644 index 000000000..aa216e61d --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_objects.go @@ -0,0 +1,33 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// ListObjects To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type ListObjects struct { + + // An array of object summaries. + Objects []ObjectSummary `mandatory:"true" json:"objects"` + + // Prefixes that are common to the results returned by the request if the request specified a delimiter. + Prefixes []string `mandatory:"false" json:"prefixes"` + + // The name of the object to use in the 'startWith' parameter to obtain the next page of + // a truncated ListObjects response. 
+ NextStartWith *string `mandatory:"false" json:"nextStartWith"` +} + +func (m ListObjects) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_objects_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_objects_request_response.go new file mode 100644 index 000000000..0cca1b1c6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_objects_request_response.go @@ -0,0 +1,73 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListObjectsRequest wrapper for the ListObjects operation +type ListObjectsRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The string to use for matching against the start of object names in a list query. + Prefix *string `mandatory:"false" contributesTo:"query" name:"prefix"` + + // Object names returned by a list query must be greater or equal to this parameter. + Start *string `mandatory:"false" contributesTo:"query" name:"start"` + + // Object names returned by a list query must be strictly less than this parameter. + End *string `mandatory:"false" contributesTo:"query" name:"end"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // When this parameter is set, only objects whose names do not contain the delimiter character + // (after an optionally specified prefix) are returned. Scanned objects whose names contain the + // delimiter have part of their name up to the last occurrence of the delimiter (after the optional + // prefix) returned as a set of prefixes. Note that only '/' is a supported delimiter character at + // this time. + Delimiter *string `mandatory:"false" contributesTo:"query" name:"delimiter"` + + // Object summary in list of objects includes the 'name' field. This parameter can also include 'size' + // (object size in bytes), 'md5', and 'timeCreated' (object creation date and time) fields. + // Value of this parameter should be a comma-separated, case-insensitive list of those field names. + // For example 'name,timeCreated,md5'. + Fields *string `mandatory:"false" contributesTo:"query" name:"fields" omitEmpty:"true"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request ListObjectsRequest) String() string { + return common.PointerString(request) +} + +// ListObjectsResponse wrapper for the ListObjects operation +type ListObjectsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The ListObjects instance + ListObjects `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. 
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"` +} + +func (response ListObjectsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_preauthenticated_requests_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_preauthenticated_requests_request_response.go new file mode 100644 index 000000000..4adcbe26a --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/list_preauthenticated_requests_request_response.go @@ -0,0 +1,63 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// ListPreauthenticatedRequestsRequest wrapper for the ListPreauthenticatedRequests operation +type ListPreauthenticatedRequestsRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // Pre-authenticated requests returned by the list must have object names starting with prefix + ObjectNamePrefix *string `mandatory:"false" contributesTo:"query" name:"objectNamePrefix"` + + // The maximum number of items to return. + Limit *int `mandatory:"false" contributesTo:"query" name:"limit"` + + // The page at which to start retrieving results. + Page *string `mandatory:"false" contributesTo:"query" name:"page"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request ListPreauthenticatedRequestsRequest) String() string { + return common.PointerString(request) +} + +// ListPreauthenticatedRequestsResponse wrapper for the ListPreauthenticatedRequests operation +type ListPreauthenticatedRequestsResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The []PreauthenticatedRequestSummary instance + Items []PreauthenticatedRequestSummary `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // For pagination of a list of pre-authenticated requests, if this header appears in the response, + // then this is a partial list. Include this value as the `page` parameter in a subsequent + // GET request to get the next batch of pre-authenticated requests. + // For information about pagination, see List Pagination (https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/usingapi.htm#nine). 
+ OpcNextPage *string `presentIn:"header" name:"opc-next-page"` +} + +func (response ListPreauthenticatedRequestsResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/multipart_upload.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/multipart_upload.go new file mode 100644 index 000000000..00cad6055 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/multipart_upload.go @@ -0,0 +1,38 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// MultipartUpload To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type MultipartUpload struct { + + // The namespace in which the in-progress multipart upload is stored. + Namespace *string `mandatory:"true" json:"namespace"` + + // The bucket in which the in-progress multipart upload is stored. + Bucket *string `mandatory:"true" json:"bucket"` + + // The object name of the in-progress multipart upload. + Object *string `mandatory:"true" json:"object"` + + // The unique identifier for the in-progress multipart upload. + UploadId *string `mandatory:"true" json:"uploadId"` + + // The date and time when the upload was created. + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` +} + +func (m MultipartUpload) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/multipart_upload_part_summary.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/multipart_upload_part_summary.go new file mode 100644 index 000000000..05f9aff44 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/multipart_upload_part_summary.go @@ -0,0 +1,35 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// MultipartUploadPartSummary To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type MultipartUploadPartSummary struct { + + // the current entity tag for the part. + Etag *string `mandatory:"true" json:"etag"` + + // the MD5 hash of the bytes of the part. + Md5 *string `mandatory:"true" json:"md5"` + + // the size of the part in bytes. + Size *int `mandatory:"true" json:"size"` + + // the part number for this part. 
+ PartNumber *int `mandatory:"true" json:"partNumber"` +} + +func (m MultipartUploadPartSummary) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/object_summary.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/object_summary.go new file mode 100644 index 000000000..c1cf92ba8 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/object_summary.go @@ -0,0 +1,35 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// ObjectSummary To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +type ObjectSummary struct { + + // The name of the object. + Name *string `mandatory:"true" json:"name"` + + // Size of the object in bytes. + Size *int `mandatory:"false" json:"size"` + + // Base64-encoded MD5 hash of the object data. + Md5 *string `mandatory:"false" json:"md5"` + + // Date and time of object creation. + TimeCreated *common.SDKTime `mandatory:"false" json:"timeCreated"` +} + +func (m ObjectSummary) String() string { + return common.PointerString(m) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/objectstorage_client.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/objectstorage_client.go new file mode 100644 index 000000000..eaf8bdfb6 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/objectstorage_client.go @@ -0,0 +1,478 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "context" + "fmt" + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +//ObjectStorageClient a client for ObjectStorage +type ObjectStorageClient struct { + common.BaseClient + config *common.ConfigurationProvider +} + +func buildSigner(configProvider common.ConfigurationProvider) common.HTTPRequestSigner { + objStorageHeaders := []string{"date", "(request-target)", "host"} + defaultBodyHeaders := []string{"content-length", "content-type", "x-content-sha256"} + shouldHashBody := func(r *http.Request) bool { + return r.Method == http.MethodPost + } + signer := common.RequestSignerWithBodyHashingPredicate(configProvider, objStorageHeaders, defaultBodyHeaders, shouldHashBody) + return signer +} + +// NewObjectStorageClientWithConfigurationProvider Creates a new default ObjectStorage client with the given configuration provider. +// the configuration provider will be used for the default signer as well as reading the region +func NewObjectStorageClientWithConfigurationProvider(configProvider common.ConfigurationProvider) (client ObjectStorageClient, err error) { + baseClient, err := common.NewClientWithConfig(configProvider) + if err != nil { + return + } + baseClient.Signer = buildSigner(configProvider) + + client = ObjectStorageClient{BaseClient: baseClient} + err = client.setConfigurationProvider(configProvider) + return +} + +// SetRegion overrides the region of this client. 
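+// A hedged sketch of constructing a client and overriding its region (added for
+// illustration only; the region string below is an example value, not one taken from
+// this package):
+//
+//	provider := common.DefaultConfigProvider()
+//	client, err := NewObjectStorageClientWithConfigurationProvider(provider)
+//	if err != nil {
+//		// handle an invalid or incomplete configuration
+//	}
+//	// Optionally point the client at a region other than the configured one:
+//	client.SetRegion("us-ashburn-1")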
+func (client *ObjectStorageClient) SetRegion(region string) { + client.Host = fmt.Sprintf(common.DefaultHostURLTemplate, "objectstorage", region) +} + +// SetConfigurationProvider sets the configuration provider including the region, returns an error if is not valid +func (client *ObjectStorageClient) setConfigurationProvider(configProvider common.ConfigurationProvider) error { + if ok, err := common.IsConfigurationProviderValid(configProvider); !ok { + return err + } + + // Error has been checked already + region, _ := configProvider.Region() + client.config = &configProvider + client.SetRegion(region) + return nil +} + +// ConfigurationProvider the ConfigurationProvider used in this client, or null if none set +func (client *ObjectStorageClient) ConfigurationProvider() *common.ConfigurationProvider { + return client.config +} + +// AbortMultipartUpload Aborts an in-progress multipart upload and deletes all parts that have been uploaded. +func (client ObjectStorageClient) AbortMultipartUpload(ctx context.Context, request AbortMultipartUploadRequest) (response AbortMultipartUploadResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/n/{namespaceName}/b/{bucketName}/u/{objectName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CommitMultipartUpload Commits a multipart upload, which involves checking part numbers and ETags of the parts, to create an aggregate object. +func (client ObjectStorageClient) CommitMultipartUpload(ctx context.Context, request CommitMultipartUploadRequest) (response CommitMultipartUploadResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/n/{namespaceName}/b/{bucketName}/u/{objectName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateBucket Creates a bucket in the given namespace with a bucket name and optional user-defined metadata. +// To use this and other API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +func (client ObjectStorageClient) CreateBucket(ctx context.Context, request CreateBucketRequest) (response CreateBucketResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/n/{namespaceName}/b/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// CreateMultipartUpload Starts a new multipart upload to a specific object in the given bucket in the given namespace. 
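+// A hedged sketch of the overall multipart flow, for orientation only. The request and
+// response shapes for CreateMultipartUpload and CommitMultipartUpload are defined in other
+// files, so the variables below (createReq, uploadPartReq, commitReq, abortReq) are
+// placeholders, not fields documented here:
+//
+//	createResp, err := client.CreateMultipartUpload(ctx, createReq) // yields the upload ID
+//	partResp, err := client.UploadPart(ctx, uploadPartReq)          // repeat per part; keep each returned ETag
+//	commitResp, err := client.CommitMultipartUpload(ctx, commitReq) // commits using part numbers and ETags
+//	// or client.AbortMultipartUpload(ctx, abortReq) to discard the parts uploaded so far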
+func (client ObjectStorageClient) CreateMultipartUpload(ctx context.Context, request CreateMultipartUploadRequest) (response CreateMultipartUploadResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/n/{namespaceName}/b/{bucketName}/u", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// CreatePreauthenticatedRequest Creates a pre-authenticated request specific to the bucket.
+func (client ObjectStorageClient) CreatePreauthenticatedRequest(ctx context.Context, request CreatePreauthenticatedRequestRequest) (response CreatePreauthenticatedRequestResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/n/{namespaceName}/b/{bucketName}/p/", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// DeleteBucket Deletes a bucket if it is already empty. If the bucket is not empty, use DeleteObject first.
+func (client ObjectStorageClient) DeleteBucket(ctx context.Context, request DeleteBucketRequest) (response DeleteBucketResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/n/{namespaceName}/b/{bucketName}/", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// DeleteObject Deletes an object.
+func (client ObjectStorageClient) DeleteObject(ctx context.Context, request DeleteObjectRequest) (response DeleteObjectResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/n/{namespaceName}/b/{bucketName}/o/{objectName}", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// DeletePreauthenticatedRequest Deletes the bucket-level pre-authenticated request.
+func (client ObjectStorageClient) DeletePreauthenticatedRequest(ctx context.Context, request DeletePreauthenticatedRequestRequest) (response DeletePreauthenticatedRequestResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodDelete, "/n/{namespaceName}/b/{bucketName}/p/{parId}", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// GetBucket Gets the current representation of the given bucket in the given namespace.
+func (client ObjectStorageClient) GetBucket(ctx context.Context, request GetBucketRequest) (response GetBucketResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/{bucketName}/", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// GetNamespace Gets the name of the namespace for the user making the request. An account name must be unique, must start with a
+// letter, and can have up to 15 lowercase letters and numbers. You cannot use spaces or special characters.
+func (client ObjectStorageClient) GetNamespace(ctx context.Context, request GetNamespaceRequest) (response GetNamespaceResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// GetObject Gets the metadata and body of an object.
+func (client ObjectStorageClient) GetObject(ctx context.Context, request GetObjectRequest) (response GetObjectResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/{bucketName}/o/{objectName}", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// GetPreauthenticatedRequest Gets the bucket-level pre-authenticated request.
+func (client ObjectStorageClient) GetPreauthenticatedRequest(ctx context.Context, request GetPreauthenticatedRequestRequest) (response GetPreauthenticatedRequestResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/{bucketName}/p/{parId}", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// HeadBucket Efficiently checks if a bucket exists and gets the current ETag for the bucket.
+func (client ObjectStorageClient) HeadBucket(ctx context.Context, request HeadBucketRequest) (response HeadBucketResponse, err error) {
+ httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodHead, "/n/{namespaceName}/b/{bucketName}/", request)
+ if err != nil {
+ return
+ }
+
+ httpResponse, err := client.Call(ctx, &httpRequest)
+ defer common.CloseBodyIfValid(httpResponse)
+ response.RawResponse = httpResponse
+ if err != nil {
+ return
+ }
+
+ err = common.UnmarshalResponse(httpResponse, &response)
+ return
+}
+
+// HeadObject Gets the user-defined metadata and entity tag for an object.
+func (client ObjectStorageClient) HeadObject(ctx context.Context, request HeadObjectRequest) (response HeadObjectResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodHead, "/n/{namespaceName}/b/{bucketName}/o/{objectName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListBuckets Gets a list of all `BucketSummary`s in a compartment. A `BucketSummary` contains only summary fields for the bucket +// and does not contain fields like the user-defined metadata. +// To use this and other API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +func (client ObjectStorageClient) ListBuckets(ctx context.Context, request ListBucketsRequest) (response ListBucketsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListMultipartUploadParts Lists the parts of an in-progress multipart upload. +func (client ObjectStorageClient) ListMultipartUploadParts(ctx context.Context, request ListMultipartUploadPartsRequest) (response ListMultipartUploadPartsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/{bucketName}/u/{objectName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListMultipartUploads Lists all in-progress multipart uploads for the given bucket in the given namespace. +func (client ObjectStorageClient) ListMultipartUploads(ctx context.Context, request ListMultipartUploadsRequest) (response ListMultipartUploadsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/{bucketName}/u", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListObjects Lists the objects in a bucket. +// To use this and other API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). 
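+// A hedged usage sketch (illustration only; the namespace and bucket names are assumptions,
+// and error handling is elided). It pages through results by feeding NextStartWith back into
+// the request's Start field, as described on the ListObjects model above:
+//
+//	req := ListObjectsRequest{
+//		NamespaceName: common.String("my-namespace"),
+//		BucketName:    common.String("my-bucket"),
+//	}
+//	for {
+//		resp, err := client.ListObjects(context.Background(), req)
+//		if err != nil {
+//			break
+//		}
+//		for _, obj := range resp.Objects {
+//			fmt.Println(*obj.Name)
+//		}
+//		if resp.NextStartWith == nil {
+//			break // no more pages
+//		}
+//		req.Start = resp.NextStartWith
+//	}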
+func (client ObjectStorageClient) ListObjects(ctx context.Context, request ListObjectsRequest) (response ListObjectsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/{bucketName}/o", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// ListPreauthenticatedRequests List pre-authenticated requests for the bucket +func (client ObjectStorageClient) ListPreauthenticatedRequests(ctx context.Context, request ListPreauthenticatedRequestsRequest) (response ListPreauthenticatedRequestsResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodGet, "/n/{namespaceName}/b/{bucketName}/p/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// PutObject Creates a new object or overwrites an existing one. +// To use this and other API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). +func (client ObjectStorageClient) PutObject(ctx context.Context, request PutObjectRequest) (response PutObjectResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/n/{namespaceName}/b/{bucketName}/o/{objectName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UpdateBucket Performs a partial or full update of a bucket's user-defined metadata. +func (client ObjectStorageClient) UpdateBucket(ctx context.Context, request UpdateBucketRequest) (response UpdateBucketResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPost, "/n/{namespaceName}/b/{bucketName}/", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} + +// UploadPart Uploads a single part of a multipart upload. 
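+// A hedged sketch of uploading one part (names and sizes are illustrative; the upload ID is
+// assumed to come from an earlier CreateMultipartUpload call):
+//
+//	part := []byte("part payload")
+//	req := UploadPartRequest{
+//		NamespaceName:  common.String("my-namespace"),
+//		BucketName:     common.String("my-bucket"),
+//		ObjectName:     common.String("my-object"),
+//		UploadId:       uploadID,
+//		UploadPartNum:  common.Int(1),
+//		ContentLength:  common.Int(len(part)),
+//		UploadPartBody: ioutil.NopCloser(bytes.NewReader(part)),
+//	}
+//	resp, err := client.UploadPart(context.Background(), req)
+//	// resp.ETag identifies this part when the upload is later committed.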
+func (client ObjectStorageClient) UploadPart(ctx context.Context, request UploadPartRequest) (response UploadPartResponse, err error) { + httpRequest, err := common.MakeDefaultHTTPRequestWithTaggedStruct(http.MethodPut, "/n/{namespaceName}/b/{bucketName}/u/{objectName}", request) + if err != nil { + return + } + + httpResponse, err := client.Call(ctx, &httpRequest) + defer common.CloseBodyIfValid(httpResponse) + response.RawResponse = httpResponse + if err != nil { + return + } + + err = common.UnmarshalResponse(httpResponse, &response) + return +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/preauthenticated_request.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/preauthenticated_request.go new file mode 100644 index 000000000..54e4d2430 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/preauthenticated_request.go @@ -0,0 +1,71 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PreauthenticatedRequest The representation of PreauthenticatedRequest +type PreauthenticatedRequest struct { + + // the unique identifier to use when directly addressing the pre-authenticated request + Id *string `mandatory:"true" json:"id"` + + // the user supplied name of the pre-authenticated request. + Name *string `mandatory:"true" json:"name"` + + // the uri to embed in the url when using the pre-authenticated request. + AccessUri *string `mandatory:"true" json:"accessUri"` + + // the operation that can be performed on this resource e.g PUT or GET. + AccessType PreauthenticatedRequestAccessTypeEnum `mandatory:"true" json:"accessType"` + + // the expiration date after which the pre authenticated request will no longer be valid as per spec + // RFC 3339 (https://tools.ietf.org/rfc/rfc3339) + TimeExpires *common.SDKTime `mandatory:"true" json:"timeExpires"` + + // the date when the pre-authenticated request was created as per spec + // RFC 3339 (https://tools.ietf.org/rfc/rfc3339) + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // Name of object that is being granted access to by the pre-authenticated request. 
This can be null and that would mean that the pre-authenticated request is granting access to the entire bucket + ObjectName *string `mandatory:"false" json:"objectName"` +} + +func (m PreauthenticatedRequest) String() string { + return common.PointerString(m) +} + +// PreauthenticatedRequestAccessTypeEnum Enum with underlying type: string +type PreauthenticatedRequestAccessTypeEnum string + +// Set of constants representing the allowable values for PreauthenticatedRequestAccessType +const ( + PreauthenticatedRequestAccessTypeObjectread PreauthenticatedRequestAccessTypeEnum = "ObjectRead" + PreauthenticatedRequestAccessTypeObjectwrite PreauthenticatedRequestAccessTypeEnum = "ObjectWrite" + PreauthenticatedRequestAccessTypeObjectreadwrite PreauthenticatedRequestAccessTypeEnum = "ObjectReadWrite" + PreauthenticatedRequestAccessTypeAnyobjectwrite PreauthenticatedRequestAccessTypeEnum = "AnyObjectWrite" +) + +var mappingPreauthenticatedRequestAccessType = map[string]PreauthenticatedRequestAccessTypeEnum{ + "ObjectRead": PreauthenticatedRequestAccessTypeObjectread, + "ObjectWrite": PreauthenticatedRequestAccessTypeObjectwrite, + "ObjectReadWrite": PreauthenticatedRequestAccessTypeObjectreadwrite, + "AnyObjectWrite": PreauthenticatedRequestAccessTypeAnyobjectwrite, +} + +// GetPreauthenticatedRequestAccessTypeEnumValues Enumerates the set of values for PreauthenticatedRequestAccessType +func GetPreauthenticatedRequestAccessTypeEnumValues() []PreauthenticatedRequestAccessTypeEnum { + values := make([]PreauthenticatedRequestAccessTypeEnum, 0) + for _, v := range mappingPreauthenticatedRequestAccessType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/preauthenticated_request_summary.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/preauthenticated_request_summary.go new file mode 100644 index 000000000..f293237bc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/preauthenticated_request_summary.go @@ -0,0 +1,68 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// PreauthenticatedRequestSummary The representation of PreauthenticatedRequestSummary +type PreauthenticatedRequestSummary struct { + + // the unique identifier to use when directly addressing the pre-authenticated request + Id *string `mandatory:"true" json:"id"` + + // the user supplied name of the pre-authenticated request + Name *string `mandatory:"true" json:"name"` + + // the operation that can be performed on this resource e.g PUT or GET. + AccessType PreauthenticatedRequestSummaryAccessTypeEnum `mandatory:"true" json:"accessType"` + + // the expiration date after which the pre authenticated request will no longer be valid as per spec + // RFC 3339 (https://tools.ietf.org/rfc/rfc3339) + TimeExpires *common.SDKTime `mandatory:"true" json:"timeExpires"` + + // the date when the pre-authenticated request was created as per spec + // RFC 3339 (https://tools.ietf.org/rfc/rfc3339) + TimeCreated *common.SDKTime `mandatory:"true" json:"timeCreated"` + + // Name of object that is being granted access to by the pre-authenticated request. 
This can be null and that would mean that the pre-authenticated request is granting access to the entire bucket + ObjectName *string `mandatory:"false" json:"objectName"` +} + +func (m PreauthenticatedRequestSummary) String() string { + return common.PointerString(m) +} + +// PreauthenticatedRequestSummaryAccessTypeEnum Enum with underlying type: string +type PreauthenticatedRequestSummaryAccessTypeEnum string + +// Set of constants representing the allowable values for PreauthenticatedRequestSummaryAccessType +const ( + PreauthenticatedRequestSummaryAccessTypeObjectread PreauthenticatedRequestSummaryAccessTypeEnum = "ObjectRead" + PreauthenticatedRequestSummaryAccessTypeObjectwrite PreauthenticatedRequestSummaryAccessTypeEnum = "ObjectWrite" + PreauthenticatedRequestSummaryAccessTypeObjectreadwrite PreauthenticatedRequestSummaryAccessTypeEnum = "ObjectReadWrite" + PreauthenticatedRequestSummaryAccessTypeAnyobjectwrite PreauthenticatedRequestSummaryAccessTypeEnum = "AnyObjectWrite" +) + +var mappingPreauthenticatedRequestSummaryAccessType = map[string]PreauthenticatedRequestSummaryAccessTypeEnum{ + "ObjectRead": PreauthenticatedRequestSummaryAccessTypeObjectread, + "ObjectWrite": PreauthenticatedRequestSummaryAccessTypeObjectwrite, + "ObjectReadWrite": PreauthenticatedRequestSummaryAccessTypeObjectreadwrite, + "AnyObjectWrite": PreauthenticatedRequestSummaryAccessTypeAnyobjectwrite, +} + +// GetPreauthenticatedRequestSummaryAccessTypeEnumValues Enumerates the set of values for PreauthenticatedRequestSummaryAccessType +func GetPreauthenticatedRequestSummaryAccessTypeEnumValues() []PreauthenticatedRequestSummaryAccessTypeEnum { + values := make([]PreauthenticatedRequestSummaryAccessTypeEnum, 0) + for _, v := range mappingPreauthenticatedRequestSummaryAccessType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/put_object_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/put_object_request_response.go new file mode 100644 index 000000000..3652da5fc --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/put_object_request_response.go @@ -0,0 +1,92 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "io" + "net/http" +) + +// PutObjectRequest wrapper for the PutObject operation +type PutObjectRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. + // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The content length of the body. + ContentLength *int `mandatory:"true" contributesTo:"header" name:"Content-Length"` + + // The object to upload to the object store. + PutObjectBody io.ReadCloser `mandatory:"true" contributesTo:"body" encoding:"binary"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. 
The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. + IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` + + // 100-continue + Expect *string `mandatory:"false" contributesTo:"header" name:"Expect"` + + // The base-64 encoded MD5 hash of the body. + ContentMD5 *string `mandatory:"false" contributesTo:"header" name:"Content-MD5"` + + // The content type of the object. Defaults to 'application/octet-stream' if not overridden during the PutObject call. + ContentType *string `mandatory:"false" contributesTo:"header" name:"Content-Type"` + + // The content language of the object. + ContentLanguage *string `mandatory:"false" contributesTo:"header" name:"Content-Language"` + + // The content encoding of the object. + ContentEncoding *string `mandatory:"false" contributesTo:"header" name:"Content-Encoding"` + + // Optional user-defined metadata key and value. + OpcMeta map[string]string `mandatory:"false" contributesTo:"header-collection" prefix:"opc-meta-"` +} + +func (request PutObjectRequest) String() string { + return common.PointerString(request) +} + +// PutObjectResponse wrapper for the PutObject operation +type PutObjectResponse struct { + + // The underlying http response + RawResponse *http.Response + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The base-64 encoded MD5 hash of the request body as computed by the server. + OpcContentMd5 *string `presentIn:"header" name:"opc-content-md5"` + + // The entity tag for the object. + ETag *string `presentIn:"header" name:"etag"` + + // The time the object was modified, as described in RFC 2616 (https://tools.ietf.org/rfc/rfc2616), section 14.29. + LastModified *common.SDKTime `presentIn:"header" name:"last-modified"` +} + +func (response PutObjectResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/update_bucket_details.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/update_bucket_details.go new file mode 100644 index 000000000..a3cf083a5 --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/update_bucket_details.go @@ -0,0 +1,61 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +// Object Storage Service API +// +// APIs for managing buckets and objects. +// + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" +) + +// UpdateBucketDetails To use any of the API operations, you must be authorized in an IAM policy. If you're not authorized, +// talk to an administrator. If you're an administrator who needs to write policies to give users access, see +// Getting Started with Policies (https://docs.us-phoenix-1.oraclecloud.com/Content/Identity/Concepts/policygetstarted.htm). 
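+//
+// A hedged sketch of a metadata-only update via ObjectStorageClient.UpdateBucket (names are
+// illustrative; per the UpdateBucket operation above, this can be a partial update):
+//
+//	req := UpdateBucketRequest{
+//		NamespaceName: common.String("my-namespace"),
+//		BucketName:    common.String("my-bucket"),
+//		UpdateBucketDetails: UpdateBucketDetails{
+//			Metadata: map[string]string{"team": "build"},
+//		},
+//	}
+//	resp, err := client.UpdateBucket(context.Background(), req)
+//	// resp.ETag reflects the updated bucket.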
+type UpdateBucketDetails struct { + + // The namespace in which the bucket lives. + Namespace *string `mandatory:"false" json:"namespace"` + + // The name of the bucket. + Name *string `mandatory:"false" json:"name"` + + // Arbitrary string, up to 4KB, of keys and values for user-defined metadata. + Metadata map[string]string `mandatory:"false" json:"metadata"` + + // The type of public access available on this bucket. Allows authenticated caller to access the bucket or + // contents of this bucket. By default a bucket is set to NoPublicAccess. It is treated as NoPublicAccess + // when this value is not specified. When the type is NoPublicAccess the bucket does not allow any public access. + // When the type is ObjectRead the bucket allows public access to the GetObject, HeadObject, ListObjects. + PublicAccessType UpdateBucketDetailsPublicAccessTypeEnum `mandatory:"false" json:"publicAccessType,omitempty"` +} + +func (m UpdateBucketDetails) String() string { + return common.PointerString(m) +} + +// UpdateBucketDetailsPublicAccessTypeEnum Enum with underlying type: string +type UpdateBucketDetailsPublicAccessTypeEnum string + +// Set of constants representing the allowable values for UpdateBucketDetailsPublicAccessType +const ( + UpdateBucketDetailsPublicAccessTypeNopublicaccess UpdateBucketDetailsPublicAccessTypeEnum = "NoPublicAccess" + UpdateBucketDetailsPublicAccessTypeObjectread UpdateBucketDetailsPublicAccessTypeEnum = "ObjectRead" +) + +var mappingUpdateBucketDetailsPublicAccessType = map[string]UpdateBucketDetailsPublicAccessTypeEnum{ + "NoPublicAccess": UpdateBucketDetailsPublicAccessTypeNopublicaccess, + "ObjectRead": UpdateBucketDetailsPublicAccessTypeObjectread, +} + +// GetUpdateBucketDetailsPublicAccessTypeEnumValues Enumerates the set of values for UpdateBucketDetailsPublicAccessType +func GetUpdateBucketDetailsPublicAccessTypeEnumValues() []UpdateBucketDetailsPublicAccessTypeEnum { + values := make([]UpdateBucketDetailsPublicAccessTypeEnum, 0) + for _, v := range mappingUpdateBucketDetailsPublicAccessType { + values = append(values, v) + } + return values +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/update_bucket_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/update_bucket_request_response.go new file mode 100644 index 000000000..a352d64fe --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/update_bucket_request_response.go @@ -0,0 +1,58 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "net/http" +) + +// UpdateBucketRequest wrapper for the UpdateBucket operation +type UpdateBucketRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // Request object for updating a bucket. + UpdateBucketDetails `contributesTo:"body"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The client request ID for tracing. 
+ OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` +} + +func (request UpdateBucketRequest) String() string { + return common.PointerString(request) +} + +// UpdateBucketResponse wrapper for the UpdateBucket operation +type UpdateBucketResponse struct { + + // The underlying http response + RawResponse *http.Response + + // The Bucket instance + Bucket `presentIn:"body"` + + // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging. + OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"` + + // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular + // request, please provide this request ID. + OpcRequestId *string `presentIn:"header" name:"opc-request-id"` + + // The entity tag for the updated bucket. + ETag *string `presentIn:"header" name:"etag"` +} + +func (response UpdateBucketResponse) String() string { + return common.PointerString(response) +} diff --git a/vendor/github.com/oracle/oci-go-sdk/objectstorage/upload_part_request_response.go b/vendor/github.com/oracle/oci-go-sdk/objectstorage/upload_part_request_response.go new file mode 100644 index 000000000..9a48fa05c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/objectstorage/upload_part_request_response.go @@ -0,0 +1,83 @@ +// Copyright (c) 2016, 2018, Oracle and/or its affiliates. All rights reserved. +// Code generated. DO NOT EDIT. + +package objectstorage + +import ( + "github.com/oracle/oci-go-sdk/common" + "io" + "net/http" +) + +// UploadPartRequest wrapper for the UploadPart operation +type UploadPartRequest struct { + + // The top-level namespace used for the request. + NamespaceName *string `mandatory:"true" contributesTo:"path" name:"namespaceName"` + + // The name of the bucket. + // Example: `my-new-bucket1` + BucketName *string `mandatory:"true" contributesTo:"path" name:"bucketName"` + + // The name of the object. + // Example: `test/object1.log` + ObjectName *string `mandatory:"true" contributesTo:"path" name:"objectName"` + + // The upload ID for a multipart upload. + UploadId *string `mandatory:"true" contributesTo:"query" name:"uploadId"` + + // The part number that identifies the object part currently being uploaded. + UploadPartNum *int `mandatory:"true" contributesTo:"query" name:"uploadPartNum"` + + // The content length of the body. + ContentLength *int `mandatory:"true" contributesTo:"header" name:"Content-Length"` + + // The part being uploaded to the Object Storage Service. + UploadPartBody io.ReadCloser `mandatory:"true" contributesTo:"body" encoding:"binary"` + + // The client request ID for tracing. + OpcClientRequestId *string `mandatory:"false" contributesTo:"header" name:"opc-client-request-id"` + + // The entity tag to match. For creating and committing a multipart upload to an object, this is the entity tag of the target object. + // For uploading a part, this is the entity tag of the target part. + IfMatch *string `mandatory:"false" contributesTo:"header" name:"if-match"` + + // The entity tag to avoid matching. The only valid value is ‘*’, which indicates that the request should fail if the object already exists. + // For creating and committing a multipart upload, this is the entity tag of the target object. For uploading a part, this is the entity tag + // of the target part. 
+ IfNoneMatch *string `mandatory:"false" contributesTo:"header" name:"if-none-match"`
+
+ // 100-continue
+ Expect *string `mandatory:"false" contributesTo:"header" name:"Expect"`
+
+ // The base-64 encoded MD5 hash of the body.
+ ContentMD5 *string `mandatory:"false" contributesTo:"header" name:"Content-MD5"`
+}
+
+func (request UploadPartRequest) String() string {
+ return common.PointerString(request)
+}
+
+// UploadPartResponse wrapper for the UploadPart operation
+type UploadPartResponse struct {
+
+ // The underlying http response
+ RawResponse *http.Response
+
+ // Echoes back the value passed in the opc-client-request-id header, for use by clients when debugging.
+ OpcClientRequestId *string `presentIn:"header" name:"opc-client-request-id"`
+
+ // Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular
+ // request, please provide this request ID.
+ OpcRequestId *string `presentIn:"header" name:"opc-request-id"`
+
+ // The base64-encoded MD5 hash of the request body, as computed by the server.
+ OpcContentMd5 *string `presentIn:"header" name:"opc-content-md5"`
+
+ // The entity tag for the object.
+ ETag *string `presentIn:"header" name:"etag"`
+}
+
+func (response UploadPartResponse) String() string {
+ return common.PointerString(response)
+}
diff --git a/vendor/github.com/oracle/oci-go-sdk/oci.go b/vendor/github.com/oracle/oci-go-sdk/oci.go
new file mode 100644
index 000000000..47fcef9c6
--- /dev/null
+++ b/vendor/github.com/oracle/oci-go-sdk/oci.go
@@ -0,0 +1,232 @@
+
+/*
+This is the official Go SDK for Oracle Cloud Infrastructure
+
+Installation
+
+Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions.
+
+Configuration
+
+Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.
+
+Quickstart
+
+The following example shows how to get started with the SDK. The example below creates an identityClient
+struct with the default configuration. It then utilizes the identityClient to list availability domains and prints
+them out to stdout.
+
+ import (
+ "context"
+ "fmt"
+
+ "github.com/oracle/oci-go-sdk/common"
+ "github.com/oracle/oci-go-sdk/identity"
+ )
+
+ func main() {
+ c, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
+ if err != nil {
+ fmt.Println("Error:", err)
+ return
+ }
+
+ // The OCID of the tenancy containing the compartment.
+ tenancyID, err := common.DefaultConfigProvider().TenancyOCID()
+ if err != nil {
+ fmt.Println("Error:", err)
+ return
+ }
+
+ request := identity.ListAvailabilityDomainsRequest{
+ CompartmentId: &tenancyID,
+ }
+
+ r, err := c.ListAvailabilityDomains(context.Background(), request)
+ if err != nil {
+ fmt.Println("Error:", err)
+ return
+ }
+
+ fmt.Printf("List of available domains: %v", r.Items)
+ return
+ }
+
+More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example
+
+Optional fields in the SDK
+
+Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests.
+In the case of enum-type fields, the SDK will omit fields whose value is an empty string.
+
+Helper functions
+
+The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides
+functions that return a pointer for a given value.
For example:
+
+ // Given the struct
+ type CreateVcnDetails struct {
+
+ // Example: `172.16.0.0/16`
+ CidrBlock *string `mandatory:"true" json:"cidrBlock"`
+
+ CompartmentId *string `mandatory:"true" json:"compartmentId"`
+
+ DisplayName *string `mandatory:"false" json:"displayName"`
+
+ }
+
+ // We can use the helper functions to build the struct
+ details := core.CreateVcnDetails{
+ CidrBlock: common.String("172.16.0.0/16"),
+ CompartmentId: common.String("someOcid"),
+ DisplayName: common.String("myVcn"),
+ }
+
+
+Signing custom requests
+
+The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here:
+https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go.
+
+The example below shows how to create a default signer.
+
+ client := http.Client{}
+ var request http.Request
+ request = ... // some custom request
+
+ // Set the Date header
+ request.Header.Set("Date", time.Now().UTC().Format(http.TimeFormat))
+
+ // And a provider of cryptographic keys
+ provider := common.DefaultConfigProvider()
+
+ // Build the signer
+ signer := common.DefaultSigner(provider)
+
+ // Sign the request
+ signer.Sign(&request)
+
+ // Execute the request
+ client.Do(&request)
+
+
+
+The signer also allows more granular control on the headers used for signing. For example:
+
+ client := http.Client{}
+ var request http.Request
+ request = ... // some custom request
+
+ // Set the Date header
+ request.Header.Set("Date", time.Now().UTC().Format(http.TimeFormat))
+
+ // Mandatory headers to be used in the sign process
+ defaultGenericHeaders := []string{"date", "(request-target)", "host"}
+
+ // Optional headers
+ optionalHeaders := []string{"content-length", "content-type", "x-content-sha256"}
+
+ // A predicate that specifies when to use the optional signing headers
+ optionalHeadersPredicate := func(r *http.Request) bool {
+ return r.Method == http.MethodPost
+ }
+
+ // And a provider of cryptographic keys
+ provider := common.DefaultConfigProvider()
+
+ // Build the signer
+ signer := common.RequestSigner(provider, defaultGenericHeaders, optionalHeaders, optionalHeadersPredicate)
+
+ // Sign the request
+ signer.Sign(&request)
+
+ // Execute the request
+ client.Do(&request)
+
+For more information on the signing algorithm refer to: https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/signingrequests.htm
+
+Polymorphic json requests and responses
+
+Some operations accept or return polymorphic json objects. The SDK models such objects as interfaces. Further, the SDK provides
+structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies
+such interface. For example:
+
+ c, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
+ if err != nil {
+ panic(err)
+ }
+
+ // The CreateIdentityProviderRequest takes a CreateIdentityProviderDetails interface as input
+ rCreate := identity.CreateIdentityProviderRequest{}
+
+ // The CreateSaml2IdentityProviderDetails struct implements the CreateIdentityProviderDetails interface
+ details := identity.CreateSaml2IdentityProviderDetails{}
+ details.CompartmentId = common.String(getTenancyID())
+ details.Name = common.String("someName")
+ //...
more setup if needed
+ // Use the above struct
+ rCreate.CreateIdentityProviderDetails = details
+
+ // Make the call
+ rspCreate, createErr := c.CreateIdentityProvider(context.Background(), rCreate)
+
+In the case of a polymorphic response, you can type assert the interface to the expected type. For example:
+
+ rRead := identity.GetIdentityProviderRequest{}
+ rRead.IdentityProviderId = common.String("aValidId")
+ response, err := c.GetIdentityProvider(context.Background(), rRead)
+
+ provider := response.IdentityProvider.(identity.Saml2IdentityProvider)
+
+An example of polymorphic json request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63
+
+
+Pagination
+
+When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again,
+passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call.
+When there is no more data, the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L86
+
+Logging and Debugging
+
+The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http
+requests, responses and potential errors when (un)marshalling requests and responses.
+
+To expose debugging logs, set the environment variable "OCI_GO_SDK_DEBUG" to "1", or some other non-empty string.
+
+
+Forward Compatibility
+
+Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums
+for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string.
+Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value.
+
+When individual services return a polymorphic json response not available as a concrete struct, the SDK will return an implementation that only satisfies
+the interface modeling the polymorphic json response.
+
+
+Contributions
+
+Got a fix for a bug, or a new feature you'd like to contribute?
The SDK is open source and accepting pull requests on GitHub +https://github.com/oracle/oci-go-sdk + +License + +Licensing information available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt + +Notifications + +To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom + +Questions or Feedback + +Please refer to this link: https://github.com/oracle/oci-go-sdk#help + + + + + */ +package oci + +//go:generate go run cmd/genver/main.go cmd/genver/version_template.go --output common/version.go diff --git a/vendor/github.com/oracle/oci-go-sdk/wercker.yml b/vendor/github.com/oracle/oci-go-sdk/wercker.yml new file mode 100644 index 000000000..58255f22c --- /dev/null +++ b/vendor/github.com/oracle/oci-go-sdk/wercker.yml @@ -0,0 +1,26 @@ +box: + id: golang + ports: + - "5000" + +build: + steps: + - script: + name: golint-prepare + code: | + go get github.com/golang/lint/golint + go install github.com/golang/lint/golint + + - script: + name: symlinking src code + code: | + mkdir -p $GOPATH/src/github.com/oracle + ln -s $WERCKER_SOURCE_DIR $GOPATH/src/github.com/oracle/oci-go-sdk + ls -al $GOPATH/src/github.com/oracle/oci-go-sdk + + - script: + name: make-build + code: | + make build + + diff --git a/vendor/github.com/packer-community/winrmcp/winrmcp/cp.go b/vendor/github.com/packer-community/winrmcp/winrmcp/cp.go index 2890e55ef..4d089c13d 100644 --- a/vendor/github.com/packer-community/winrmcp/winrmcp/cp.go +++ b/vendor/github.com/packer-community/winrmcp/winrmcp/cp.go @@ -176,7 +176,12 @@ func cleanupContent(client *winrm.Client, filePath string) error { } defer shell.Close() - script := fmt.Sprintf(`Remove-Item %s -ErrorAction SilentlyContinue`, filePath) + script := fmt.Sprintf(` + $tmp_file_path = [System.IO.Path]::GetFullPath("%s") + if (Test-Path $tmp_file_path) { + Remove-Item $tmp_file_path -ErrorAction SilentlyContinue + } + `, filePath) cmd, err := shell.Execute(winrm.Powershell(script)) if err != nil { diff --git a/vendor/github.com/pkg/errors/LICENSE b/vendor/github.com/pkg/errors/LICENSE new file mode 100644 index 000000000..835ba3e75 --- /dev/null +++ b/vendor/github.com/pkg/errors/LICENSE @@ -0,0 +1,23 @@ +Copyright (c) 2015, Dave Cheney +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +* Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/pkg/errors/README.md b/vendor/github.com/pkg/errors/README.md new file mode 100644 index 000000000..6483ba2af --- /dev/null +++ b/vendor/github.com/pkg/errors/README.md @@ -0,0 +1,52 @@ +# errors [![Travis-CI](https://travis-ci.org/pkg/errors.svg)](https://travis-ci.org/pkg/errors) [![AppVeyor](https://ci.appveyor.com/api/projects/status/b98mptawhudj53ep/branch/master?svg=true)](https://ci.appveyor.com/project/davecheney/errors/branch/master) [![GoDoc](https://godoc.org/github.com/pkg/errors?status.svg)](http://godoc.org/github.com/pkg/errors) [![Report card](https://goreportcard.com/badge/github.com/pkg/errors)](https://goreportcard.com/report/github.com/pkg/errors) [![Sourcegraph](https://sourcegraph.com/github.com/pkg/errors/-/badge.svg)](https://sourcegraph.com/github.com/pkg/errors?badge) + +Package errors provides simple error handling primitives. + +`go get github.com/pkg/errors` + +The traditional error handling idiom in Go is roughly akin to +```go +if err != nil { + return err +} +``` +which applied recursively up the call stack results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error. + +## Adding context to an error + +The errors.Wrap function returns a new error that adds context to the original error. For example +```go +_, err := ioutil.ReadAll(r) +if err != nil { + return errors.Wrap(err, "read failed") +} +``` +## Retrieving the cause of an error + +Using `errors.Wrap` constructs a stack of errors, adding context to the preceding error. Depending on the nature of the error it may be necessary to reverse the operation of errors.Wrap to retrieve the original error for inspection. Any error value which implements this interface can be inspected by `errors.Cause`. +```go +type causer interface { + Cause() error +} +``` +`errors.Cause` will recursively retrieve the topmost error which does not implement `causer`, which is assumed to be the original cause. For example: +```go +switch err := errors.Cause(err).(type) { +case *MyError: + // handle specifically +default: + // unknown error +} +``` + +[Read the package documentation for more information](https://godoc.org/github.com/pkg/errors). + +## Contributing + +We welcome pull requests, bug fixes and issue reports. With that said, the bar for adding new symbols to this package is intentionally set high. + +Before proposing a change, please discuss your change by raising an issue. 
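Putting the wrapping and cause-inspection ideas above together, here is a minimal end-to-end sketch (the `readConfig` helper and the `*os.PathError` assertion are illustrative assumptions, not part of this package):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"github.com/pkg/errors"
)

// readConfig wraps any read failure with context about what was being attempted.
func readConfig(path string) ([]byte, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, errors.Wrapf(err, "reading config %s", path)
	}
	return data, nil
}

func main() {
	_, err := readConfig("/nonexistent/app.conf")
	if err != nil {
		// The wrapped error carries the added context plus the original message.
		fmt.Println(err)

		// errors.Cause unwraps back to the original error for inspection.
		if pathErr, ok := errors.Cause(err).(*os.PathError); ok {
			fmt.Println("failed op:", pathErr.Op)
		}
	}
}
```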
+ +## License + +BSD-2-Clause diff --git a/vendor/github.com/pkg/errors/appveyor.yml b/vendor/github.com/pkg/errors/appveyor.yml new file mode 100644 index 000000000..a932eade0 --- /dev/null +++ b/vendor/github.com/pkg/errors/appveyor.yml @@ -0,0 +1,32 @@ +version: build-{build}.{branch} + +clone_folder: C:\gopath\src\github.com\pkg\errors +shallow_clone: true # for startup speed + +environment: + GOPATH: C:\gopath + +platform: + - x64 + +# http://www.appveyor.com/docs/installed-software +install: + # some helpful output for debugging builds + - go version + - go env + # pre-installed MinGW at C:\MinGW is 32bit only + # but MSYS2 at C:\msys64 has mingw64 + - set PATH=C:\msys64\mingw64\bin;%PATH% + - gcc --version + - g++ --version + +build_script: + - go install -v ./... + +test_script: + - set PATH=C:\gopath\bin;%PATH% + - go test -v ./... + +#artifacts: +# - path: '%GOPATH%\bin\*.exe' +deploy: off diff --git a/vendor/github.com/pkg/errors/errors.go b/vendor/github.com/pkg/errors/errors.go new file mode 100644 index 000000000..842ee8045 --- /dev/null +++ b/vendor/github.com/pkg/errors/errors.go @@ -0,0 +1,269 @@ +// Package errors provides simple error handling primitives. +// +// The traditional error handling idiom in Go is roughly akin to +// +// if err != nil { +// return err +// } +// +// which applied recursively up the call stack results in error reports +// without context or debugging information. The errors package allows +// programmers to add context to the failure path in their code in a way +// that does not destroy the original value of the error. +// +// Adding context to an error +// +// The errors.Wrap function returns a new error that adds context to the +// original error by recording a stack trace at the point Wrap is called, +// and the supplied message. For example +// +// _, err := ioutil.ReadAll(r) +// if err != nil { +// return errors.Wrap(err, "read failed") +// } +// +// If additional control is required the errors.WithStack and errors.WithMessage +// functions destructure errors.Wrap into its component operations of annotating +// an error with a stack trace and an a message, respectively. +// +// Retrieving the cause of an error +// +// Using errors.Wrap constructs a stack of errors, adding context to the +// preceding error. Depending on the nature of the error it may be necessary +// to reverse the operation of errors.Wrap to retrieve the original error +// for inspection. Any error value which implements this interface +// +// type causer interface { +// Cause() error +// } +// +// can be inspected by errors.Cause. errors.Cause will recursively retrieve +// the topmost error which does not implement causer, which is assumed to be +// the original cause. For example: +// +// switch err := errors.Cause(err).(type) { +// case *MyError: +// // handle specifically +// default: +// // unknown error +// } +// +// causer interface is not exported by this package, but is considered a part +// of stable public API. +// +// Formatted printing of errors +// +// All error values returned from this package implement fmt.Formatter and can +// be formatted by the fmt package. The following verbs are supported +// +// %s print the error. If the error has a Cause it will be +// printed recursively +// %v see %s +// %+v extended format. Each Frame of the error's StackTrace will +// be printed in detail. +// +// Retrieving the stack trace of an error or wrapper +// +// New, Errorf, Wrap, and Wrapf record a stack trace at the point they are +// invoked. 
This information can be retrieved with the following interface. +// +// type stackTracer interface { +// StackTrace() errors.StackTrace +// } +// +// Where errors.StackTrace is defined as +// +// type StackTrace []Frame +// +// The Frame type represents a call site in the stack trace. Frame supports +// the fmt.Formatter interface that can be used for printing information about +// the stack trace of this error. For example: +// +// if err, ok := err.(stackTracer); ok { +// for _, f := range err.StackTrace() { +// fmt.Printf("%+s:%d", f) +// } +// } +// +// stackTracer interface is not exported by this package, but is considered a part +// of stable public API. +// +// See the documentation for Frame.Format for more details. +package errors + +import ( + "fmt" + "io" +) + +// New returns an error with the supplied message. +// New also records the stack trace at the point it was called. +func New(message string) error { + return &fundamental{ + msg: message, + stack: callers(), + } +} + +// Errorf formats according to a format specifier and returns the string +// as a value that satisfies error. +// Errorf also records the stack trace at the point it was called. +func Errorf(format string, args ...interface{}) error { + return &fundamental{ + msg: fmt.Sprintf(format, args...), + stack: callers(), + } +} + +// fundamental is an error that has a message and a stack, but no caller. +type fundamental struct { + msg string + *stack +} + +func (f *fundamental) Error() string { return f.msg } + +func (f *fundamental) Format(s fmt.State, verb rune) { + switch verb { + case 'v': + if s.Flag('+') { + io.WriteString(s, f.msg) + f.stack.Format(s, verb) + return + } + fallthrough + case 's': + io.WriteString(s, f.msg) + case 'q': + fmt.Fprintf(s, "%q", f.msg) + } +} + +// WithStack annotates err with a stack trace at the point WithStack was called. +// If err is nil, WithStack returns nil. +func WithStack(err error) error { + if err == nil { + return nil + } + return &withStack{ + err, + callers(), + } +} + +type withStack struct { + error + *stack +} + +func (w *withStack) Cause() error { return w.error } + +func (w *withStack) Format(s fmt.State, verb rune) { + switch verb { + case 'v': + if s.Flag('+') { + fmt.Fprintf(s, "%+v", w.Cause()) + w.stack.Format(s, verb) + return + } + fallthrough + case 's': + io.WriteString(s, w.Error()) + case 'q': + fmt.Fprintf(s, "%q", w.Error()) + } +} + +// Wrap returns an error annotating err with a stack trace +// at the point Wrap is called, and the supplied message. +// If err is nil, Wrap returns nil. +func Wrap(err error, message string) error { + if err == nil { + return nil + } + err = &withMessage{ + cause: err, + msg: message, + } + return &withStack{ + err, + callers(), + } +} + +// Wrapf returns an error annotating err with a stack trace +// at the point Wrapf is call, and the format specifier. +// If err is nil, Wrapf returns nil. +func Wrapf(err error, format string, args ...interface{}) error { + if err == nil { + return nil + } + err = &withMessage{ + cause: err, + msg: fmt.Sprintf(format, args...), + } + return &withStack{ + err, + callers(), + } +} + +// WithMessage annotates err with a new message. +// If err is nil, WithMessage returns nil. 
+func WithMessage(err error, message string) error { + if err == nil { + return nil + } + return &withMessage{ + cause: err, + msg: message, + } +} + +type withMessage struct { + cause error + msg string +} + +func (w *withMessage) Error() string { return w.msg + ": " + w.cause.Error() } +func (w *withMessage) Cause() error { return w.cause } + +func (w *withMessage) Format(s fmt.State, verb rune) { + switch verb { + case 'v': + if s.Flag('+') { + fmt.Fprintf(s, "%+v\n", w.Cause()) + io.WriteString(s, w.msg) + return + } + fallthrough + case 's', 'q': + io.WriteString(s, w.Error()) + } +} + +// Cause returns the underlying cause of the error, if possible. +// An error value has a cause if it implements the following +// interface: +// +// type causer interface { +// Cause() error +// } +// +// If the error does not implement Cause, the original error will +// be returned. If the error is nil, nil will be returned without further +// investigation. +func Cause(err error) error { + type causer interface { + Cause() error + } + + for err != nil { + cause, ok := err.(causer) + if !ok { + break + } + err = cause.Cause() + } + return err +} diff --git a/vendor/github.com/pkg/errors/stack.go b/vendor/github.com/pkg/errors/stack.go new file mode 100644 index 000000000..b485761a7 --- /dev/null +++ b/vendor/github.com/pkg/errors/stack.go @@ -0,0 +1,187 @@ +package errors + +import ( + "fmt" + "io" + "path" + "runtime" + "strings" +) + +// Frame represents a program counter inside a stack frame. +type Frame uintptr + +// pc returns the program counter for this frame; +// multiple frames may have the same PC value. +func (f Frame) pc() uintptr { return uintptr(f) - 1 } + +// file returns the full path to the file that contains the +// function for this Frame's pc. +func (f Frame) file() string { + fn := runtime.FuncForPC(f.pc()) + if fn == nil { + return "unknown" + } + file, _ := fn.FileLine(f.pc()) + return file +} + +// line returns the line number of source code of the +// function for this Frame's pc. +func (f Frame) line() int { + fn := runtime.FuncForPC(f.pc()) + if fn == nil { + return 0 + } + _, line := fn.FileLine(f.pc()) + return line +} + +// Format formats the frame according to the fmt.Formatter interface. +// +// %s source file +// %d source line +// %n function name +// %v equivalent to %s:%d +// +// Format accepts flags that alter the printing of some verbs, as follows: +// +// %+s function name and path of source file relative to the compile time +// GOPATH separated by \n\t (\n\t) +// %+v equivalent to %+s:%d +func (f Frame) Format(s fmt.State, verb rune) { + switch verb { + case 's': + switch { + case s.Flag('+'): + pc := f.pc() + fn := runtime.FuncForPC(pc) + if fn == nil { + io.WriteString(s, "unknown") + } else { + file, _ := fn.FileLine(pc) + fmt.Fprintf(s, "%s\n\t%s", fn.Name(), file) + } + default: + io.WriteString(s, path.Base(f.file())) + } + case 'd': + fmt.Fprintf(s, "%d", f.line()) + case 'n': + name := runtime.FuncForPC(f.pc()).Name() + io.WriteString(s, funcname(name)) + case 'v': + f.Format(s, 's') + io.WriteString(s, ":") + f.Format(s, 'd') + } +} + +// StackTrace is stack of Frames from innermost (newest) to outermost (oldest). +type StackTrace []Frame + +// Format formats the stack of Frames according to the fmt.Formatter interface. 
+// +// %s lists source files for each Frame in the stack +// %v lists the source file and line number for each Frame in the stack +// +// Format accepts flags that alter the printing of some verbs, as follows: +// +// %+v Prints filename, function, and line number for each Frame in the stack. +func (st StackTrace) Format(s fmt.State, verb rune) { + switch verb { + case 'v': + switch { + case s.Flag('+'): + for _, f := range st { + fmt.Fprintf(s, "\n%+v", f) + } + case s.Flag('#'): + fmt.Fprintf(s, "%#v", []Frame(st)) + default: + fmt.Fprintf(s, "%v", []Frame(st)) + } + case 's': + fmt.Fprintf(s, "%s", []Frame(st)) + } +} + +// stack represents a stack of program counters. +type stack []uintptr + +func (s *stack) Format(st fmt.State, verb rune) { + switch verb { + case 'v': + switch { + case st.Flag('+'): + for _, pc := range *s { + f := Frame(pc) + fmt.Fprintf(st, "\n%+v", f) + } + } + } +} + +func (s *stack) StackTrace() StackTrace { + f := make([]Frame, len(*s)) + for i := 0; i < len(f); i++ { + f[i] = Frame((*s)[i]) + } + return f +} + +func callers() *stack { + const depth = 32 + var pcs [depth]uintptr + n := runtime.Callers(3, pcs[:]) + var st stack = pcs[0:n] + return &st +} + +// funcname removes the path prefix component of a function's name reported by func.Name(). +func funcname(name string) string { + i := strings.LastIndex(name, "/") + name = name[i+1:] + i = strings.Index(name, ".") + return name[i+1:] +} + +func trimGOPATH(name, file string) string { + // Here we want to get the source file path relative to the compile time + // GOPATH. As of Go 1.6.x there is no direct way to know the compiled + // GOPATH at runtime, but we can infer the number of path segments in the + // GOPATH. We note that fn.Name() returns the function name qualified by + // the import path, which does not include the GOPATH. Thus we can trim + // segments from the beginning of the file path until the number of path + // separators remaining is one more than the number of path separators in + // the function name. For example, given: + // + // GOPATH /home/user + // file /home/user/src/pkg/sub/file.go + // fn.Name() pkg/sub.Type.Method + // + // We want to produce: + // + // pkg/sub/file.go + // + // From this we can easily see that fn.Name() has one less path separator + // than our desired output. We count separators from the end of the file + // path until it finds two more than in the function name and then move + // one character forward to preserve the initial path segment without a + // leading separator. 
+ const sep = "/" + goal := strings.Count(name, sep) + 2 + i := len(file) + for n := 0; n < goal; n++ { + i = strings.LastIndex(file[:i], sep) + if i == -1 { + // not enough separators found, set i so that the slice expression + // below leaves file unmodified + i = -len(sep) + break + } + } + // get back to 0 or trim the leading separator + file = file[i+len(sep):] + return file +} diff --git a/vendor/github.com/posener/complete/LICENSE.txt b/vendor/github.com/posener/complete/LICENSE.txt new file mode 100644 index 000000000..16249b4a1 --- /dev/null +++ b/vendor/github.com/posener/complete/LICENSE.txt @@ -0,0 +1,21 @@ +The MIT License + +Copyright (c) 2017 Eyal Posener + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. \ No newline at end of file diff --git a/vendor/github.com/posener/complete/args.go b/vendor/github.com/posener/complete/args.go new file mode 100644 index 000000000..73c356d76 --- /dev/null +++ b/vendor/github.com/posener/complete/args.go @@ -0,0 +1,75 @@ +package complete + +import ( + "os" + "path/filepath" +) + +// Args describes command line arguments +type Args struct { + // All lists of all arguments in command line (not including the command itself) + All []string + // Completed lists of all completed arguments in command line, + // If the last one is still being typed - no space after it, + // it won't appear in this list of arguments. + Completed []string + // Last argument in command line, the one being typed, if the last + // character in the command line is a space, this argument will be empty, + // otherwise this would be the last word. + Last string + // LastCompleted is the last argument that was fully typed. + // If the last character in the command line is space, this would be the + // last word, otherwise, it would be the word before that. + LastCompleted string +} + +// Directory gives the directory of the current written +// last argument if it represents a file name being written. +// in case that it is not, we fall back to the current directory. 
+func (a Args) Directory() string { + if info, err := os.Stat(a.Last); err == nil && info.IsDir() { + return fixPathForm(a.Last, a.Last) + } + dir := filepath.Dir(a.Last) + if info, err := os.Stat(dir); err != nil || !info.IsDir() { + return "./" + } + return fixPathForm(a.Last, dir) +} + +func newArgs(line []string) Args { + completed := removeLast(line[1:]) + return Args{ + All: line[1:], + Completed: completed, + Last: last(line), + LastCompleted: last(completed), + } +} + +func (a Args) from(i int) Args { + if i > len(a.All) { + i = len(a.All) + } + a.All = a.All[i:] + + if i > len(a.Completed) { + i = len(a.Completed) + } + a.Completed = a.Completed[i:] + return a +} + +func removeLast(a []string) []string { + if len(a) > 0 { + return a[:len(a)-1] + } + return a +} + +func last(args []string) (last string) { + if len(args) > 0 { + last = args[len(args)-1] + } + return +} diff --git a/vendor/github.com/posener/complete/cmd/cmd.go b/vendor/github.com/posener/complete/cmd/cmd.go new file mode 100644 index 000000000..7137dee17 --- /dev/null +++ b/vendor/github.com/posener/complete/cmd/cmd.go @@ -0,0 +1,128 @@ +// Package cmd used for command line options for the complete tool +package cmd + +import ( + "errors" + "flag" + "fmt" + "os" + "strings" + + "github.com/posener/complete/cmd/install" +) + +// CLI for command line +type CLI struct { + Name string + InstallName string + UninstallName string + + install bool + uninstall bool + yes bool +} + +const ( + defaultInstallName = "install" + defaultUninstallName = "uninstall" +) + +// Run is used when running complete in command line mode. +// this is used when the complete is not completing words, but to +// install it or uninstall it. +func (f *CLI) Run() bool { + err := f.validate() + if err != nil { + os.Stderr.WriteString(err.Error() + "\n") + os.Exit(1) + } + + switch { + case f.install: + f.prompt() + err = install.Install(f.Name) + case f.uninstall: + f.prompt() + err = install.Uninstall(f.Name) + default: + // non of the action flags matched, + // returning false should make the real program execute + return false + } + + if err != nil { + fmt.Printf("%s failed! %s\n", f.action(), err) + os.Exit(3) + } + fmt.Println("Done!") + return true +} + +// prompt use for approval +// exit if approval was not given +func (f *CLI) prompt() { + defer fmt.Println(f.action() + "ing...") + if f.yes { + return + } + fmt.Printf("%s completion for %s? ", f.action(), f.Name) + var answer string + fmt.Scanln(&answer) + + switch strings.ToLower(answer) { + case "y", "yes": + return + default: + fmt.Println("Cancelling...") + os.Exit(1) + } +} + +// AddFlags adds the CLI flags to the flag set. +// If flags is nil, the default command line flags will be taken. +// Pass non-empty strings as installName and uninstallName to override the default +// flag names. 
+func (f *CLI) AddFlags(flags *flag.FlagSet) { + if flags == nil { + flags = flag.CommandLine + } + + if f.InstallName == "" { + f.InstallName = defaultInstallName + } + if f.UninstallName == "" { + f.UninstallName = defaultUninstallName + } + + if flags.Lookup(f.InstallName) == nil { + flags.BoolVar(&f.install, f.InstallName, false, + fmt.Sprintf("Install completion for %s command", f.Name)) + } + if flags.Lookup(f.UninstallName) == nil { + flags.BoolVar(&f.uninstall, f.UninstallName, false, + fmt.Sprintf("Uninstall completion for %s command", f.Name)) + } + if flags.Lookup("y") == nil { + flags.BoolVar(&f.yes, "y", false, "Don't prompt user for typing 'yes'") + } +} + +// validate the CLI +func (f *CLI) validate() error { + if f.install && f.uninstall { + return errors.New("Install and uninstall are mutually exclusive") + } + return nil +} + +// action name according to the CLI values. +func (f *CLI) action() string { + switch { + case f.install: + return "Install" + case f.uninstall: + return "Uninstall" + default: + return "unknown" + } +} diff --git a/vendor/github.com/posener/complete/cmd/install/bash.go b/vendor/github.com/posener/complete/cmd/install/bash.go new file mode 100644 index 000000000..a287f9986 --- /dev/null +++ b/vendor/github.com/posener/complete/cmd/install/bash.go @@ -0,0 +1,32 @@ +package install + +import "fmt" + +// (un)install in bash +// basically adds/remove from .bashrc: +// +// complete -C +type bash struct { + rc string +} + +func (b bash) Install(cmd, bin string) error { + completeCmd := b.cmd(cmd, bin) + if lineInFile(b.rc, completeCmd) { + return fmt.Errorf("already installed in %s", b.rc) + } + return appendToFile(b.rc, completeCmd) +} + +func (b bash) Uninstall(cmd, bin string) error { + completeCmd := b.cmd(cmd, bin) + if !lineInFile(b.rc, completeCmd) { + return fmt.Errorf("does not installed in %s", b.rc) + } + + return removeFromFile(b.rc, completeCmd) +} + +func (bash) cmd(cmd, bin string) string { + return fmt.Sprintf("complete -C %s %s", bin, cmd) +} diff --git a/vendor/github.com/posener/complete/cmd/install/install.go b/vendor/github.com/posener/complete/cmd/install/install.go new file mode 100644 index 000000000..644b40576 --- /dev/null +++ b/vendor/github.com/posener/complete/cmd/install/install.go @@ -0,0 +1,92 @@ +package install + +import ( + "errors" + "os" + "os/user" + "path/filepath" + + "github.com/hashicorp/go-multierror" +) + +type installer interface { + Install(cmd, bin string) error + Uninstall(cmd, bin string) error +} + +// Install complete command given: +// cmd: is the command name +func Install(cmd string) error { + is := installers() + if len(is) == 0 { + return errors.New("Did not find any shells to install") + } + bin, err := getBinaryPath() + if err != nil { + return err + } + + for _, i := range is { + errI := i.Install(cmd, bin) + if errI != nil { + err = multierror.Append(err, errI) + } + } + + return err +} + +// Uninstall complete command given: +// cmd: is the command name +func Uninstall(cmd string) error { + is := installers() + if len(is) == 0 { + return errors.New("Did not find any shells to uninstall") + } + bin, err := getBinaryPath() + if err != nil { + return err + } + + for _, i := range is { + errI := i.Uninstall(cmd, bin) + if errI != nil { + multierror.Append(err, errI) + } + } + + return err +} + +func installers() (i []installer) { + for _, rc := range [...]string{".bashrc", ".bash_profile"} { + if f := rcFile(rc); f != "" { + i = append(i, bash{f}) + break + } + } + if f := rcFile(".zshrc"); f != "" 
{ + i = append(i, zsh{f}) + } + return +} + +func getBinaryPath() (string, error) { + bin, err := os.Executable() + if err != nil { + return "", err + } + return filepath.Abs(bin) +} + +func rcFile(name string) string { + u, err := user.Current() + if err != nil { + return "" + } + path := filepath.Join(u.HomeDir, name) + if _, err := os.Stat(path); err != nil { + return "" + } + return path +} diff --git a/vendor/github.com/posener/complete/cmd/install/utils.go b/vendor/github.com/posener/complete/cmd/install/utils.go new file mode 100644 index 000000000..2c8b44cab --- /dev/null +++ b/vendor/github.com/posener/complete/cmd/install/utils.go @@ -0,0 +1,118 @@ +package install + +import ( + "bufio" + "fmt" + "io" + "io/ioutil" + "os" +) + +func lineInFile(name string, lookFor string) bool { + f, err := os.Open(name) + if err != nil { + return false + } + defer f.Close() + r := bufio.NewReader(f) + prefix := []byte{} + for { + line, isPrefix, err := r.ReadLine() + if err == io.EOF { + return false + } + if err != nil { + return false + } + if isPrefix { + prefix = append(prefix, line...) + continue + } + line = append(prefix, line...) + if string(line) == lookFor { + return true + } + prefix = prefix[:0] + } +} + +func appendToFile(name string, content string) error { + f, err := os.OpenFile(name, os.O_RDWR|os.O_APPEND, 0) + if err != nil { + return err + } + defer f.Close() + _, err = f.WriteString(fmt.Sprintf("\n%s\n", content)) + return err +} + +func removeFromFile(name string, content string) error { + backup := name + ".bck" + err := copyFile(name, backup) + if err != nil { + return err + } + temp, err := removeContentToTempFile(name, content) + if err != nil { + return err + } + + err = copyFile(temp, name) + if err != nil { + return err + } + + return os.Remove(backup) +} + +func removeContentToTempFile(name, content string) (string, error) { + rf, err := os.Open(name) + if err != nil { + return "", err + } + defer rf.Close() + wf, err := ioutil.TempFile("/tmp", "complete-") + if err != nil { + return "", err + } + defer wf.Close() + + r := bufio.NewReader(rf) + prefix := []byte{} + for { + line, isPrefix, err := r.ReadLine() + if err == io.EOF { + break + } + if err != nil { + return "", err + } + if isPrefix { + prefix = append(prefix, line...) + continue + } + line = append(prefix, line...) + str := string(line) + if str == content { + continue + } + wf.WriteString(str + "\n") + prefix = prefix[:0] + } + return wf.Name(), nil +} + +func copyFile(src string, dst string) error { + in, err := os.Open(src) + if err != nil { + return err + } + defer in.Close() + out, err := os.Create(dst) + if err != nil { + return err + } + defer out.Close() + _, err = io.Copy(out, in) + return err +} diff --git a/vendor/github.com/posener/complete/cmd/install/zsh.go b/vendor/github.com/posener/complete/cmd/install/zsh.go new file mode 100644 index 000000000..a625f53cf --- /dev/null +++ b/vendor/github.com/posener/complete/cmd/install/zsh.go @@ -0,0 +1,39 @@ +package install + +import "fmt" + +// (un)install in zsh +// basically adds/remove from .zshrc: +// +// autoload -U +X bashcompinit && bashcompinit" +// complete -C
    +type zsh struct { + rc string +} + +func (z zsh) Install(cmd, bin string) error { + completeCmd := z.cmd(cmd, bin) + if lineInFile(z.rc, completeCmd) { + return fmt.Errorf("already installed in %s", z.rc) + } + + bashCompInit := "autoload -U +X bashcompinit && bashcompinit" + if !lineInFile(z.rc, bashCompInit) { + completeCmd = bashCompInit + "\n" + completeCmd + } + + return appendToFile(z.rc, completeCmd) +} + +func (z zsh) Uninstall(cmd, bin string) error { + completeCmd := z.cmd(cmd, bin) + if !lineInFile(z.rc, completeCmd) { + return fmt.Errorf("does not installed in %s", z.rc) + } + + return removeFromFile(z.rc, completeCmd) +} + +func (zsh) cmd(cmd, bin string) string { + return fmt.Sprintf("complete -o nospace -C %s %s", bin, cmd) +} diff --git a/vendor/github.com/posener/complete/command.go b/vendor/github.com/posener/complete/command.go new file mode 100644 index 000000000..6de48e960 --- /dev/null +++ b/vendor/github.com/posener/complete/command.go @@ -0,0 +1,118 @@ +package complete + +import "github.com/posener/complete/match" + +// Command represents a command line +// It holds the data that enables auto completion of command line +// Command can also be a sub command. +type Command struct { + // Sub is map of sub commands of the current command + // The key refer to the sub command name, and the value is it's + // Command descriptive struct. + Sub Commands + + // Flags is a map of flags that the command accepts. + // The key is the flag name, and the value is it's predictions. + Flags Flags + + // GlobalFlags is a map of flags that the command accepts. + // Global flags that can appear also after a sub command. + GlobalFlags Flags + + // Args are extra arguments that the command accepts, those who are + // given without any flag before. + Args Predictor +} + +// Predict returns all possible predictions for args according to the command struct +func (c *Command) Predict(a Args) (predictions []string) { + predictions, _ = c.predict(a) + return +} + +// Commands is the type of Sub member, it maps a command name to a command struct +type Commands map[string]Command + +// Predict completion of sub command names names according to command line arguments +func (c Commands) Predict(a Args) (prediction []string) { + for sub := range c { + if match.Prefix(sub, a.Last) { + prediction = append(prediction, sub) + } + } + return +} + +// Flags is the type Flags of the Flags member, it maps a flag name to the flag predictions. +type Flags map[string]Predictor + +// Predict completion of flags names according to command line arguments +func (f Flags) Predict(a Args) (prediction []string) { + for flag := range f { + // If the flag starts with a hyphen, we avoid emitting the prediction + // unless the last typed arg contains a hyphen as well. + flagHyphenStart := len(flag) != 0 && flag[0] == '-' + lastHyphenStart := len(a.Last) != 0 && a.Last[0] == '-' + if flagHyphenStart && !lastHyphenStart { + continue + } + + if match.Prefix(flag, a.Last) { + prediction = append(prediction, flag) + } + } + return +} + +// predict options +// only is set to true if no more options are allowed to be returned +// those are in cases of special flag that has specific completion arguments, +// and other flags or sub commands can't come after it. 
+func (c *Command) predict(a Args) (options []string, only bool) { + + // search sub commands for predictions first + subCommandFound := false + for i, arg := range a.Completed { + if cmd, ok := c.Sub[arg]; ok { + subCommandFound = true + + // recursive call for sub command + options, only = cmd.predict(a.from(i)) + if only { + return + } + + // We matched so stop searching. Continuing to search can accidentally + // match a subcommand with current set of commands, see issue #46. + break + } + } + + // if last completed word is a global flag that we need to complete + if predictor, ok := c.GlobalFlags[a.LastCompleted]; ok && predictor != nil { + Log("Predicting according to global flag %s", a.LastCompleted) + return predictor.Predict(a), true + } + + options = append(options, c.GlobalFlags.Predict(a)...) + + // if a sub command was entered, we won't add the parent command + // completions and we return here. + if subCommandFound { + return + } + + // if last completed word is a command flag that we need to complete + if predictor, ok := c.Flags[a.LastCompleted]; ok && predictor != nil { + Log("Predicting according to flag %s", a.LastCompleted) + return predictor.Predict(a), true + } + + options = append(options, c.Sub.Predict(a)...) + options = append(options, c.Flags.Predict(a)...) + if c.Args != nil { + options = append(options, c.Args.Predict(a)...) + } + + return +} diff --git a/vendor/github.com/posener/complete/complete.go b/vendor/github.com/posener/complete/complete.go new file mode 100644 index 000000000..1df66170b --- /dev/null +++ b/vendor/github.com/posener/complete/complete.go @@ -0,0 +1,86 @@ +// Package complete provides a tool for bash writing bash completion in go. +// +// Writing bash completion scripts is a hard work. This package provides an easy way +// to create bash completion scripts for any command, and also an easy way to install/uninstall +// the completion of the command. +package complete + +import ( + "flag" + "fmt" + "os" + "strings" + + "github.com/posener/complete/cmd" +) + +const ( + envComplete = "COMP_LINE" + envDebug = "COMP_DEBUG" +) + +// Complete structs define completion for a command with CLI options +type Complete struct { + Command Command + cmd.CLI +} + +// New creates a new complete command. +// name is the name of command we want to auto complete. +// IMPORTANT: it must be the same name - if the auto complete +// completes the 'go' command, name must be equal to "go". +// command is the struct of the command completion. +func New(name string, command Command) *Complete { + return &Complete{ + Command: command, + CLI: cmd.CLI{Name: name}, + } +} + +// Run runs the completion and add installation flags beforehand. +// The flags are added to the main flag CommandLine variable. +func (c *Complete) Run() bool { + c.AddFlags(nil) + flag.Parse() + return c.Complete() +} + +// Complete a command from completion line in environment variable, +// and print out the complete options. +// returns success if the completion ran or if the cli matched +// any of the given flags, false otherwise +// For installation: it assumes that flags were added and parsed before +// it was called. 
+func (c *Complete) Complete() bool { + line, ok := getLine() + if !ok { + // make sure flags parsed, + // in case they were not added in the main program + return c.CLI.Run() + } + Log("Completing line: %s", line) + + a := newArgs(line) + + options := c.Command.Predict(a) + + Log("Completion: %s", options) + output(options) + return true +} + +func getLine() ([]string, bool) { + line := os.Getenv(envComplete) + if line == "" { + return nil, false + } + return strings.Split(line, " "), true +} + +func output(options []string) { + Log("") + // stdout of program defines the complete options + for _, option := range options { + fmt.Println(option) + } +} diff --git a/vendor/github.com/posener/complete/log.go b/vendor/github.com/posener/complete/log.go new file mode 100644 index 000000000..797a80ced --- /dev/null +++ b/vendor/github.com/posener/complete/log.go @@ -0,0 +1,23 @@ +package complete + +import ( + "io" + "io/ioutil" + "log" + "os" +) + +// Log is used for debugging purposes +// since complete is running on tab completion, it is nice to +// have logs to the stderr (when writing your own completer) +// to write logs, set the COMP_DEBUG environment variable and +// use complete.Log in the complete program +var Log = getLogger() + +func getLogger() func(format string, args ...interface{}) { + var logfile io.Writer = ioutil.Discard + if os.Getenv(envDebug) != "" { + logfile = os.Stderr + } + return log.New(logfile, "complete ", log.Flags()).Printf +} diff --git a/vendor/github.com/posener/complete/match/file.go b/vendor/github.com/posener/complete/match/file.go new file mode 100644 index 000000000..051171e8a --- /dev/null +++ b/vendor/github.com/posener/complete/match/file.go @@ -0,0 +1,19 @@ +package match + +import "strings" + +// File returns true if prefix can match the file +func File(file, prefix string) bool { + // special case for current directory completion + if file == "./" && (prefix == "." || prefix == "") { + return true + } + if prefix == "." && strings.HasPrefix(file, ".") { + return true + } + + file = strings.TrimPrefix(file, "./") + prefix = strings.TrimPrefix(prefix, "./") + + return strings.HasPrefix(file, prefix) +} diff --git a/vendor/github.com/posener/complete/match/match.go b/vendor/github.com/posener/complete/match/match.go new file mode 100644 index 000000000..812fcac96 --- /dev/null +++ b/vendor/github.com/posener/complete/match/match.go @@ -0,0 +1,6 @@ +package match + +// Match matches two strings +// it is used for comparing a term to the last typed +// word, the prefix, and see if it is a possible auto complete option. 
+type Match func(term, prefix string) bool diff --git a/vendor/github.com/posener/complete/match/prefix.go b/vendor/github.com/posener/complete/match/prefix.go new file mode 100644 index 000000000..9a01ba63a --- /dev/null +++ b/vendor/github.com/posener/complete/match/prefix.go @@ -0,0 +1,9 @@ +package match + +import "strings" + +// Prefix is a simple Matcher, if the word is it's prefix, there is a match +// Match returns true if a has the prefix as prefix +func Prefix(long, prefix string) bool { + return strings.HasPrefix(long, prefix) +} diff --git a/vendor/github.com/posener/complete/metalinter.json b/vendor/github.com/posener/complete/metalinter.json new file mode 100644 index 000000000..799c1d03f --- /dev/null +++ b/vendor/github.com/posener/complete/metalinter.json @@ -0,0 +1,21 @@ +{ + "Vendor": true, + "DisableAll": true, + "Enable": [ + "gofmt", + "goimports", + "interfacer", + "goconst", + "misspell", + "unconvert", + "gosimple", + "golint", + "structcheck", + "deadcode", + "vet" + ], + "Exclude": [ + "initTests is unused" + ], + "Deadline": "2m" +} diff --git a/vendor/github.com/posener/complete/predict.go b/vendor/github.com/posener/complete/predict.go new file mode 100644 index 000000000..820706325 --- /dev/null +++ b/vendor/github.com/posener/complete/predict.go @@ -0,0 +1,41 @@ +package complete + +// Predictor implements a predict method, in which given +// command line arguments returns a list of options it predicts. +type Predictor interface { + Predict(Args) []string +} + +// PredictOr unions two predicate functions, so that the result predicate +// returns the union of their predication +func PredictOr(predictors ...Predictor) Predictor { + return PredictFunc(func(a Args) (prediction []string) { + for _, p := range predictors { + if p == nil { + continue + } + prediction = append(prediction, p.Predict(a)...) + } + return + }) +} + +// PredictFunc determines what terms can follow a command or a flag +// It is used for auto completion, given last - the last word in the already +// in the command line, what words can complete it. +type PredictFunc func(Args) []string + +// Predict invokes the predict function and implements the Predictor interface +func (p PredictFunc) Predict(a Args) []string { + if p == nil { + return nil + } + return p(a) +} + +// PredictNothing does not expect anything after. +var PredictNothing Predictor + +// PredictAnything expects something, but nothing particular, such as a number +// or arbitrary name. +var PredictAnything = PredictFunc(func(Args) []string { return nil }) diff --git a/vendor/github.com/posener/complete/predict_files.go b/vendor/github.com/posener/complete/predict_files.go new file mode 100644 index 000000000..c8adf7e80 --- /dev/null +++ b/vendor/github.com/posener/complete/predict_files.go @@ -0,0 +1,108 @@ +package complete + +import ( + "io/ioutil" + "os" + "path/filepath" + "strings" + + "github.com/posener/complete/match" +) + +// PredictDirs will search for directories in the given started to be typed +// path, if no path was started to be typed, it will complete to directories +// in the current working directory. +func PredictDirs(pattern string) Predictor { + return files(pattern, false) +} + +// PredictFiles will search for files matching the given pattern in the started to +// be typed path, if no path was started to be typed, it will complete to files that +// match the pattern in the current working directory. +// To match any file, use "*" as pattern. To match go files use "*.go", and so on. 
+func PredictFiles(pattern string) Predictor { + return files(pattern, true) +} + +func files(pattern string, allowFiles bool) PredictFunc { + + // search for files according to arguments, + // if only one directory has matched the result, search recursively into + // this directory to give more results. + return func(a Args) (prediction []string) { + prediction = predictFiles(a, pattern, allowFiles) + + // if the number of prediction is not 1, we either have many results or + // have no results, so we return it. + if len(prediction) != 1 { + return + } + + // only try deeper, if the one item is a directory + if stat, err := os.Stat(prediction[0]); err != nil || !stat.IsDir() { + return + } + + a.Last = prediction[0] + return predictFiles(a, pattern, allowFiles) + } +} + +func predictFiles(a Args, pattern string, allowFiles bool) []string { + if strings.HasSuffix(a.Last, "/..") { + return nil + } + + dir := a.Directory() + files := listFiles(dir, pattern, allowFiles) + + // add dir if match + files = append(files, dir) + + return PredictFilesSet(files).Predict(a) +} + +// PredictFilesSet predict according to file rules to a given set of file names +func PredictFilesSet(files []string) PredictFunc { + return func(a Args) (prediction []string) { + // add all matching files to prediction + for _, f := range files { + f = fixPathForm(a.Last, f) + + // test matching of file to the argument + if match.File(f, a.Last) { + prediction = append(prediction, f) + } + } + return + } +} + +func listFiles(dir, pattern string, allowFiles bool) []string { + // set of all file names + m := map[string]bool{} + + // list files + if files, err := filepath.Glob(filepath.Join(dir, pattern)); err == nil { + for _, f := range files { + if stat, err := os.Stat(f); err != nil || stat.IsDir() || allowFiles { + m[f] = true + } + } + } + + // list directories + if dirs, err := ioutil.ReadDir(dir); err == nil { + for _, d := range dirs { + if d.IsDir() { + m[filepath.Join(dir, d.Name())] = true + } + } + } + + list := make([]string, 0, len(m)) + for k := range m { + list = append(list, k) + } + return list +} diff --git a/vendor/github.com/posener/complete/predict_set.go b/vendor/github.com/posener/complete/predict_set.go new file mode 100644 index 000000000..8fc59d714 --- /dev/null +++ b/vendor/github.com/posener/complete/predict_set.go @@ -0,0 +1,19 @@ +package complete + +import "github.com/posener/complete/match" + +// PredictSet expects specific set of terms, given in the options argument. 
+func PredictSet(options ...string) Predictor { + return predictSet(options) +} + +type predictSet []string + +func (p predictSet) Predict(a Args) (prediction []string) { + for _, m := range p { + if match.Prefix(m, a.Last) { + prediction = append(prediction, m) + } + } + return +} diff --git a/vendor/github.com/posener/complete/readme.md b/vendor/github.com/posener/complete/readme.md new file mode 100644 index 000000000..74077e357 --- /dev/null +++ b/vendor/github.com/posener/complete/readme.md @@ -0,0 +1,116 @@ +# complete + +[![Build Status](https://travis-ci.org/posener/complete.svg?branch=master)](https://travis-ci.org/posener/complete) +[![codecov](https://codecov.io/gh/posener/complete/branch/master/graph/badge.svg)](https://codecov.io/gh/posener/complete) +[![GoDoc](https://godoc.org/github.com/posener/complete?status.svg)](http://godoc.org/github.com/posener/complete) +[![Go Report Card](https://goreportcard.com/badge/github.com/posener/complete)](https://goreportcard.com/report/github.com/posener/complete) + +A tool for bash writing bash completion in go. + +Writing bash completion scripts is a hard work. This package provides an easy way +to create bash completion scripts for any command, and also an easy way to install/uninstall +the completion of the command. + +## go command bash completion + +In [gocomplete](./gocomplete) there is an example for bash completion for the `go` command line. + +This is an example that uses the `complete` package on the `go` command - the `complete` package +can also be used to implement any completions, see [Usage](#usage). + +### Install + +1. Type in your shell: +``` +go get -u github.com/posener/complete/gocomplete +gocomplete -install +``` + +2. Restart your shell + +Uninstall by `gocomplete -uninstall` + +### Features + +- Complete `go` command, including sub commands and all flags. +- Complete packages names or `.go` files when necessary. +- Complete test names after `-run` flag. + +## complete package + +Supported shells: + +- [x] bash +- [x] zsh + +### Usage + +Assuming you have program called `run` and you want to have bash completion +for it, meaning, if you type `run` then space, then press the `Tab` key, +the shell will suggest relevant complete options. + +In that case, we will create a program called `runcomplete`, a go program, +with a `func main()` and so, that will make the completion of the `run` +program. Once the `runcomplete` will be in a binary form, we could +`runcomplete -install` and that will add to our shell all the bash completion +options for `run`. + +So here it is: + +```go +import "github.com/posener/complete" + +func main() { + + // create a Command object, that represents the command we want + // to complete. + run := complete.Command{ + + // Sub defines a list of sub commands of the program, + // this is recursive, since every command is of type command also. + Sub: complete.Commands{ + + // add a build sub command + "build": complete.Command { + + // define flags of the build sub command + Flags: complete.Flags{ + // build sub command has a flag '-cpus', which + // expects number of cpus after it. in that case + // anything could complete this flag. + "-cpus": complete.PredictAnything, + }, + }, + }, + + // define flags of the 'run' main command + Flags: complete.Flags{ + // a flag -o, which expects a file ending with .out after + // it, the tab completion will auto complete for files matching + // the given pattern. 
+ "-o": complete.PredictFiles("*.out"), + }, + + // define global flags of the 'run' main command + // those will show up also when a sub command was entered in the + // command line + GlobalFlags: complete.Flags{ + + // a flag '-h' which does not expects anything after it + "-h": complete.PredictNothing, + }, + } + + // run the command completion, as part of the main() function. + // this triggers the autocompletion when needed. + // name must be exactly as the binary that we want to complete. + complete.New("run", run).Run() +} +``` + +### Self completing program + +In case that the program that we want to complete is written in go we +can make it self completing. + +Here is an [example](./example/self/main.go) diff --git a/vendor/github.com/posener/complete/test.sh b/vendor/github.com/posener/complete/test.sh new file mode 100755 index 000000000..56bfcf15d --- /dev/null +++ b/vendor/github.com/posener/complete/test.sh @@ -0,0 +1,12 @@ +#!/usr/bin/env bash + +set -e +echo "" > coverage.txt + +for d in $(go list ./... | grep -v vendor); do + go test -v -race -coverprofile=profile.out -covermode=atomic $d + if [ -f profile.out ]; then + cat profile.out >> coverage.txt + rm profile.out + fi +done \ No newline at end of file diff --git a/vendor/github.com/posener/complete/utils.go b/vendor/github.com/posener/complete/utils.go new file mode 100644 index 000000000..58b8b7927 --- /dev/null +++ b/vendor/github.com/posener/complete/utils.go @@ -0,0 +1,46 @@ +package complete + +import ( + "os" + "path/filepath" + "strings" +) + +// fixPathForm changes a file name to a relative name +func fixPathForm(last string, file string) string { + // get wording directory for relative name + workDir, err := os.Getwd() + if err != nil { + return file + } + + abs, err := filepath.Abs(file) + if err != nil { + return file + } + + // if last is absolute, return path as absolute + if filepath.IsAbs(last) { + return fixDirPath(abs) + } + + rel, err := filepath.Rel(workDir, abs) + if err != nil { + return file + } + + // fix ./ prefix of path + if rel != "." && strings.HasPrefix(last, ".") { + rel = "./" + rel + } + + return fixDirPath(rel) +} + +func fixDirPath(path string) string { + info, err := os.Stat(path) + if err == nil && info.IsDir() && !strings.HasSuffix(path, "/") { + path += "/" + } + return path +} diff --git a/vendor/github.com/renstrom/fuzzysearch/LICENSE b/vendor/github.com/renstrom/fuzzysearch/LICENSE new file mode 100644 index 000000000..9cc753370 --- /dev/null +++ b/vendor/github.com/renstrom/fuzzysearch/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2015 Peter Renström + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/renstrom/fuzzysearch/fuzzy/fuzzy.go b/vendor/github.com/renstrom/fuzzysearch/fuzzy/fuzzy.go new file mode 100644 index 000000000..63277d51e --- /dev/null +++ b/vendor/github.com/renstrom/fuzzysearch/fuzzy/fuzzy.go @@ -0,0 +1,167 @@ +// Fuzzy searching allows for flexibly matching a string with partial input, +// useful for filtering data very quickly based on lightweight user input. +package fuzzy + +import ( + "unicode" + "unicode/utf8" +) + +var noop = func(r rune) rune { return r } + +// Match returns true if source matches target using a fuzzy-searching +// algorithm. Note that it doesn't implement Levenshtein distance (see +// RankMatch instead), but rather a simplified version where there's no +// approximation. The method will return true only if each character in the +// source can be found in the target and occurs after the preceding matches. +func Match(source, target string) bool { + return match(source, target, noop) +} + +// MatchFold is a case-insensitive version of Match. +func MatchFold(source, target string) bool { + return match(source, target, unicode.ToLower) +} + +func match(source, target string, fn func(rune) rune) bool { + lenDiff := len(target) - len(source) + + if lenDiff < 0 { + return false + } + + if lenDiff == 0 && source == target { + return true + } + +Outer: + for _, r1 := range source { + for i, r2 := range target { + if fn(r1) == fn(r2) { + target = target[i+utf8.RuneLen(r2):] + continue Outer + } + } + return false + } + + return true +} + +// Find will return a list of strings in targets that fuzzy matches source. +func Find(source string, targets []string) []string { + return find(source, targets, noop) +} + +// FindFold is a case-insensitive version of Find. +func FindFold(source string, targets []string) []string { + return find(source, targets, unicode.ToLower) +} + +func find(source string, targets []string, fn func(rune) rune) []string { + var matches []string + + for _, target := range targets { + if match(source, target, fn) { + matches = append(matches, target) + } + } + + return matches +} + +// RankMatch is similar to Match except it will measure the Levenshtein +// distance between the source and the target and return its result. If there +// was no match, it will return -1. +// Given the requirements of match, RankMatch only needs to perform a subset of +// the Levenshtein calculation, only deletions need be considered, required +// additions and substitutions would fail the match test. +func RankMatch(source, target string) int { + return rank(source, target, noop) +} + +// RankMatchFold is a case-insensitive version of RankMatch. 
+func RankMatchFold(source, target string) int { + return rank(source, target, unicode.ToLower) +} + +func rank(source, target string, fn func(rune) rune) int { + lenDiff := len(target) - len(source) + + if lenDiff < 0 { + return -1 + } + + if lenDiff == 0 && source == target { + return 0 + } + + runeDiff := 0 + +Outer: + for _, r1 := range source { + for i, r2 := range target { + if fn(r1) == fn(r2) { + target = target[i+utf8.RuneLen(r2):] + continue Outer + } else { + runeDiff++ + } + } + return -1 + } + + // Count up remaining char + for len(target) > 0 { + target = target[utf8.RuneLen(rune(target[0])):] + runeDiff++ + } + + return runeDiff +} + +// RankFind is similar to Find, except it will also rank all matches using +// Levenshtein distance. +func RankFind(source string, targets []string) Ranks { + var r Ranks + for _, target := range find(source, targets, noop) { + distance := LevenshteinDistance(source, target) + r = append(r, Rank{source, target, distance}) + } + return r +} + +// RankFindFold is a case-insensitive version of RankFind. +func RankFindFold(source string, targets []string) Ranks { + var r Ranks + for _, target := range find(source, targets, unicode.ToLower) { + distance := LevenshteinDistance(source, target) + r = append(r, Rank{source, target, distance}) + } + return r +} + +type Rank struct { + // Source is used as the source for matching. + Source string + + // Target is the word matched against. + Target string + + // Distance is the Levenshtein distance between Source and Target. + Distance int +} + +type Ranks []Rank + +func (r Ranks) Len() int { + return len(r) +} + +func (r Ranks) Swap(i, j int) { + r[i], r[j] = r[j], r[i] +} + +func (r Ranks) Less(i, j int) bool { + return r[i].Distance < r[j].Distance +} diff --git a/vendor/github.com/renstrom/fuzzysearch/fuzzy/levenshtein.go b/vendor/github.com/renstrom/fuzzysearch/fuzzy/levenshtein.go new file mode 100644 index 000000000..237923d34 --- /dev/null +++ b/vendor/github.com/renstrom/fuzzysearch/fuzzy/levenshtein.go @@ -0,0 +1,43 @@ +package fuzzy + +// LevenshteinDistance measures the difference between two strings. +// The Levenshtein distance between two words is the minimum number of +// single-character edits (i.e. insertions, deletions or substitutions) +// required to change one word into the other. 
+// +// This implemention is optimized to use O(min(m,n)) space and is based on the +// optimized C version found here: +// http://en.wikibooks.org/wiki/Algorithm_implementation/Strings/Levenshtein_distance#C +func LevenshteinDistance(s, t string) int { + r1, r2 := []rune(s), []rune(t) + column := make([]int, len(r1)+1) + + for y := 1; y <= len(r1); y++ { + column[y] = y + } + + for x := 1; x <= len(r2); x++ { + column[0] = x + + for y, lastDiag := 1, x-1; y <= len(r1); y++ { + oldDiag := column[y] + cost := 0 + if r1[y-1] != r2[x-1] { + cost = 1 + } + column[y] = min(column[y]+1, column[y-1]+1, lastDiag+cost) + lastDiag = oldDiag + } + } + + return column[len(r1)] +} + +func min(a, b, c int) int { + if a < b && a < c { + return a + } else if b < c { + return b + } + return c +} diff --git a/vendor/github.com/scaleway/scaleway-cli/LICENSE.md b/vendor/github.com/scaleway/scaleway-cli/LICENSE.md new file mode 100644 index 000000000..7503a16ca --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/LICENSE.md @@ -0,0 +1,22 @@ +The MIT License +=============== + +Copyright (c) **2014-2016 Scaleway ([@scaleway](https://twitter.com/scaleway))** + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/api/README.md b/vendor/github.com/scaleway/scaleway-cli/pkg/api/README.md new file mode 100644 index 000000000..62dade6b9 --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/api/README.md @@ -0,0 +1,3 @@ +# Scaleway's API + +## Deprecated in favor of https://github.com/scaleway/go-scaleway diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/api/api.go b/vendor/github.com/scaleway/scaleway-cli/pkg/api/api.go new file mode 100644 index 000000000..7c60d6624 --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/api/api.go @@ -0,0 +1,2840 @@ +// Copyright (C) 2015 Scaleway. All rights reserved. +// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE.md file. 
+ +// Interact with Scaleway API + +// Package api contains client and functions to interact with Scaleway API +package api + +import ( + "bytes" + "crypto/tls" + "encoding/json" + "errors" + "fmt" + "io" + "io/ioutil" + "net" + "net/http" + "net/http/httputil" + "net/url" + "os" + "sort" + "strconv" + "strings" + "text/tabwriter" + "text/template" + "time" + + "golang.org/x/sync/errgroup" +) + +// Default values +var ( + AccountAPI = "https://account.scaleway.com/" + MetadataAPI = "http://169.254.42.42/" + MarketplaceAPI = "https://api-marketplace.scaleway.com" + ComputeAPIPar1 = "https://cp-par1.scaleway.com/" + ComputeAPIAms1 = "https://cp-ams1.scaleway.com" + + URLPublicDNS = ".pub.cloud.scaleway.com" + URLPrivateDNS = ".priv.cloud.scaleway.com" +) + +func init() { + if url := os.Getenv("SCW_ACCOUNT_API"); url != "" { + AccountAPI = url + } + if url := os.Getenv("SCW_METADATA_API"); url != "" { + MetadataAPI = url + } + if url := os.Getenv("SCW_MARKETPLACE_API"); url != "" { + MarketplaceAPI = url + } + if url := os.Getenv("SCW_COMPUTE_PAR1_API"); url != "" { + ComputeAPIPar1 = url + } + if url := os.Getenv("SCW_COMPUTE_AMS1_API"); url != "" { + ComputeAPIAms1 = url + } +} + +const ( + perPage = 50 +) + +// ScalewayAPI is the interface used to communicate with the Scaleway API +type ScalewayAPI struct { + // Organization is the identifier of the Scaleway organization + Organization string + + // Token is the authentication token for the Scaleway organization + Token string + + // Password is the authentication password + password string + + userAgent string + + // Cache is used to quickly resolve identifiers from names + Cache *ScalewayCache + + client *http.Client + verbose bool + computeAPI string + + Region string + // + Logger +} + +// ScalewayAPIError represents a Scaleway API Error +type ScalewayAPIError struct { + // Message is a human-friendly error message + APIMessage string `json:"message,omitempty"` + + // Type is a string code that defines the kind of error + Type string `json:"type,omitempty"` + + // Fields contains detail about validation error + Fields map[string][]string `json:"fields,omitempty"` + + // StatusCode is the HTTP status code received + StatusCode int `json:"-"` + + // Message + Message string `json:"-"` +} + +// Error returns a string representing the error +func (e ScalewayAPIError) Error() string { + var b bytes.Buffer + + fmt.Fprintf(&b, "StatusCode: %v, ", e.StatusCode) + fmt.Fprintf(&b, "Type: %v, ", e.Type) + fmt.Fprintf(&b, "APIMessage: \x1b[31m%v\x1b[0m", e.APIMessage) + if len(e.Fields) > 0 { + fmt.Fprintf(&b, ", Details: %v", e.Fields) + } + return b.String() +} + +// HideAPICredentials removes API credentials from a string +func (s *ScalewayAPI) HideAPICredentials(input string) string { + output := input + if s.Token != "" { + output = strings.Replace(output, s.Token, "00000000-0000-4000-8000-000000000000", -1) + } + if s.Organization != "" { + output = strings.Replace(output, s.Organization, "00000000-0000-5000-9000-000000000000", -1) + } + if s.password != "" { + output = strings.Replace(output, s.password, "XX-XX-XX-XX", -1) + } + return output +} + +// ScalewayIPAddress represents a Scaleway IP address +type ScalewayIPAddress struct { + // Identifier is a unique identifier for the IP address + Identifier string `json:"id,omitempty"` + + // IP is an IPv4 address + IP string `json:"address,omitempty"` + + // Dynamic is a flag that defines an IP that change on each reboot + Dynamic *bool `json:"dynamic,omitempty"` +} + +// ScalewayVolume 
represents a Scaleway Volume +type ScalewayVolume struct { + // Identifier is a unique identifier for the volume + Identifier string `json:"id,omitempty"` + + // Size is the allocated size of the volume + Size uint64 `json:"size,omitempty"` + + // CreationDate is the creation date of the volume + CreationDate string `json:"creation_date,omitempty"` + + // ModificationDate is the date of the last modification of the volume + ModificationDate string `json:"modification_date,omitempty"` + + // Organization is the organization owning the volume + Organization string `json:"organization,omitempty"` + + // Name is the name of the volume + Name string `json:"name,omitempty"` + + // Server is the server using this image + Server *struct { + Identifier string `json:"id,omitempty"` + Name string `json:"name,omitempty"` + } `json:"server,omitempty"` + + // VolumeType is a Scaleway identifier for the kind of volume (default: l_ssd) + VolumeType string `json:"volume_type,omitempty"` + + // ExportURI represents the url used by initrd/scripts to attach the volume + ExportURI string `json:"export_uri,omitempty"` +} + +// ScalewayOneVolume represents the response of a GET /volumes/UUID API call +type ScalewayOneVolume struct { + Volume ScalewayVolume `json:"volume,omitempty"` +} + +// ScalewayVolumes represents a group of Scaleway volumes +type ScalewayVolumes struct { + // Volumes holds scaleway volumes of the response + Volumes []ScalewayVolume `json:"volumes,omitempty"` +} + +// ScalewayVolumeDefinition represents a Scaleway volume definition +type ScalewayVolumeDefinition struct { + // Name is the user-defined name of the volume + Name string `json:"name"` + + // Image is the image used by the volume + Size uint64 `json:"size"` + + // Bootscript is the bootscript used by the volume + Type string `json:"volume_type"` + + // Organization is the owner of the volume + Organization string `json:"organization"` +} + +// ScalewayVolumePutDefinition represents a Scaleway volume with nullable fields (for PUT) +type ScalewayVolumePutDefinition struct { + Identifier *string `json:"id,omitempty"` + Size *uint64 `json:"size,omitempty"` + CreationDate *string `json:"creation_date,omitempty"` + ModificationDate *string `json:"modification_date,omitempty"` + Organization *string `json:"organization,omitempty"` + Name *string `json:"name,omitempty"` + Server struct { + Identifier *string `json:"id,omitempty"` + Name *string `json:"name,omitempty"` + } `json:"server,omitempty"` + VolumeType *string `json:"volume_type,omitempty"` + ExportURI *string `json:"export_uri,omitempty"` +} + +// ScalewayImage represents a Scaleway Image +type ScalewayImage struct { + // Identifier is a unique identifier for the image + Identifier string `json:"id,omitempty"` + + // Name is a user-defined name for the image + Name string `json:"name,omitempty"` + + // CreationDate is the creation date of the image + CreationDate string `json:"creation_date,omitempty"` + + // ModificationDate is the date of the last modification of the image + ModificationDate string `json:"modification_date,omitempty"` + + // RootVolume is the root volume bound to the image + RootVolume ScalewayVolume `json:"root_volume,omitempty"` + + // Public is true for public images and false for user images + Public bool `json:"public,omitempty"` + + // Bootscript is the bootscript bound to the image + DefaultBootscript *ScalewayBootscript `json:"default_bootscript,omitempty"` + + // Organization is the owner of the image + Organization string 
`json:"organization,omitempty"` + + // Arch is the architecture target of the image + Arch string `json:"arch,omitempty"` + + // FIXME: extra_volumes +} + +// ScalewayImageIdentifier represents a Scaleway Image Identifier +type ScalewayImageIdentifier struct { + Identifier string + Arch string + Region string + Owner string +} + +// ScalewayOneImage represents the response of a GET /images/UUID API call +type ScalewayOneImage struct { + Image ScalewayImage `json:"image,omitempty"` +} + +// ScalewayImages represents a group of Scaleway images +type ScalewayImages struct { + // Images holds scaleway images of the response + Images []ScalewayImage `json:"images,omitempty"` +} + +// ProductNetworkInterface gives interval and external allowed bandwidth +type ProductNetworkInterface struct { + InternalBandwidth uint64 `json:"internal_bandwidth,omitempty"` + InternetBandwidth uint64 `json:"internet_bandwidth,omitempty"` +} + +// ProductNetwork lists all the network interfaces +type ProductNetwork struct { + Interfaces []ProductNetworkInterface `json:"interfaces,omitempty"` + TotalInternalBandwidth uint64 `json:"sum_internal_bandwidth,omitempty"` + TotalInternetBandwidth uint64 `json:"sum_internet_bandwidth,omitempty"` + IPv6_Support bool `json:"ipv6_support,omitempty"` +} + +// ProductVolumeConstraint contains any volume constraint that the offer has +type ProductVolumeConstraint struct { + MinSize uint64 `json:"min_size,omitempty"` + MaxSize uint64 `json:"max_size,omitempty"` +} + +// ProductServerOffer represents a specific offer +type ProductServer struct { + Arch string `json:"arch,omitempty"` + Ncpus uint64 `json:"ncpus,omitempty"` + Ram uint64 `json:"ram,omitempty"` + Baremetal bool `json:"baremetal,omitempty"` + VolumesConstraint ProductVolumeConstraint `json:"volumes_constraint,omitempty"` + AltNames []string `json:"alt_names,omitempty"` + Network ProductNetwork `json:"network,omitempty"` +} + +// Products holds a map of all Scaleway servers +type ScalewayProductsServers struct { + Servers map[string]ProductServer `json:"servers"` +} + +// ScalewaySnapshot represents a Scaleway Snapshot +type ScalewaySnapshot struct { + // Identifier is a unique identifier for the snapshot + Identifier string `json:"id,omitempty"` + + // Name is a user-defined name for the snapshot + Name string `json:"name,omitempty"` + + // CreationDate is the creation date of the snapshot + CreationDate string `json:"creation_date,omitempty"` + + // ModificationDate is the date of the last modification of the snapshot + ModificationDate string `json:"modification_date,omitempty"` + + // Size is the allocated size of the volume + Size uint64 `json:"size,omitempty"` + + // Organization is the owner of the snapshot + Organization string `json:"organization"` + + // State is the current state of the snapshot + State string `json:"state"` + + // VolumeType is the kind of volume behind the snapshot + VolumeType string `json:"volume_type"` + + // BaseVolume is the volume from which the snapshot inherits + BaseVolume ScalewayVolume `json:"base_volume,omitempty"` +} + +// ScalewayOneSnapshot represents the response of a GET /snapshots/UUID API call +type ScalewayOneSnapshot struct { + Snapshot ScalewaySnapshot `json:"snapshot,omitempty"` +} + +// ScalewaySnapshots represents a group of Scaleway snapshots +type ScalewaySnapshots struct { + // Snapshots holds scaleway snapshots of the response + Snapshots []ScalewaySnapshot `json:"snapshots,omitempty"` +} + +// ScalewayBootscript represents a Scaleway Bootscript +type 
ScalewayBootscript struct { + Bootcmdargs string `json:"bootcmdargs,omitempty"` + Dtb string `json:"dtb,omitempty"` + Initrd string `json:"initrd,omitempty"` + Kernel string `json:"kernel,omitempty"` + + // Arch is the architecture target of the bootscript + Arch string `json:"architecture,omitempty"` + + // Identifier is a unique identifier for the bootscript + Identifier string `json:"id,omitempty"` + + // Organization is the owner of the bootscript + Organization string `json:"organization,omitempty"` + + // Name is a user-defined name for the bootscript + Title string `json:"title,omitempty"` + + // Public is true for public bootscripts and false for user bootscripts + Public bool `json:"public,omitempty"` + + Default bool `json:"default,omitempty"` +} + +// ScalewayOneBootscript represents the response of a GET /bootscripts/UUID API call +type ScalewayOneBootscript struct { + Bootscript ScalewayBootscript `json:"bootscript,omitempty"` +} + +// ScalewayBootscripts represents a group of Scaleway bootscripts +type ScalewayBootscripts struct { + // Bootscripts holds Scaleway bootscripts of the response + Bootscripts []ScalewayBootscript `json:"bootscripts,omitempty"` +} + +// ScalewayTask represents a Scaleway Task +type ScalewayTask struct { + // Identifier is a unique identifier for the task + Identifier string `json:"id,omitempty"` + + // StartDate is the start date of the task + StartDate string `json:"started_at,omitempty"` + + // TerminationDate is the termination date of the task + TerminationDate string `json:"terminated_at,omitempty"` + + HrefFrom string `json:"href_from,omitempty"` + + Description string `json:"description,omitempty"` + + Status string `json:"status,omitempty"` + + Progress int `json:"progress,omitempty"` +} + +// ScalewayOneTask represents the response of a GET /tasks/UUID API call +type ScalewayOneTask struct { + Task ScalewayTask `json:"task,omitempty"` +} + +// ScalewayTasks represents a group of Scaleway tasks +type ScalewayTasks struct { + // Tasks holds scaleway tasks of the response + Tasks []ScalewayTask `json:"tasks,omitempty"` +} + +// ScalewaySecurityGroupRule definition +type ScalewaySecurityGroupRule struct { + Direction string `json:"direction"` + Protocol string `json:"protocol"` + IPRange string `json:"ip_range"` + DestPortFrom int `json:"dest_port_from,omitempty"` + Action string `json:"action"` + Position int `json:"position"` + DestPortTo string `json:"dest_port_to"` + Editable bool `json:"editable"` + ID string `json:"id"` +} + +// ScalewayGetSecurityGroupRules represents the response of a GET /security_group/{groupID}/rules +type ScalewayGetSecurityGroupRules struct { + Rules []ScalewaySecurityGroupRule `json:"rules"` +} + +// ScalewayGetSecurityGroupRule represents the response of a GET /security_group/{groupID}/rules/{ruleID} +type ScalewayGetSecurityGroupRule struct { + Rules ScalewaySecurityGroupRule `json:"rule"` +} + +// ScalewayNewSecurityGroupRule definition POST/PUT request /security_group/{groupID} +type ScalewayNewSecurityGroupRule struct { + Action string `json:"action"` + Direction string `json:"direction"` + IPRange string `json:"ip_range"` + Protocol string `json:"protocol"` + DestPortFrom int `json:"dest_port_from,omitempty"` +} + +// ScalewaySecurityGroups definition +type ScalewaySecurityGroups struct { + Description string `json:"description"` + ID string `json:"id"` + Organization string `json:"organization"` + Name string `json:"name"` + Servers []ScalewaySecurityGroup `json:"servers"` + EnableDefaultSecurity bool 
`json:"enable_default_security"` + OrganizationDefault bool `json:"organization_default"` +} + +// ScalewayGetSecurityGroups represents the response of a GET /security_groups/ +type ScalewayGetSecurityGroups struct { + SecurityGroups []ScalewaySecurityGroups `json:"security_groups"` +} + +// ScalewayGetSecurityGroup represents the response of a GET /security_groups/{groupID} +type ScalewayGetSecurityGroup struct { + SecurityGroups ScalewaySecurityGroups `json:"security_group"` +} + +// ScalewayIPDefinition represents the IP's fields +type ScalewayIPDefinition struct { + Organization string `json:"organization"` + Reverse *string `json:"reverse"` + ID string `json:"id"` + Server *struct { + Identifier string `json:"id,omitempty"` + Name string `json:"name,omitempty"` + } `json:"server"` + Address string `json:"address"` +} + +// ScalewayGetIPS represents the response of a GET /ips/ +type ScalewayGetIPS struct { + IPS []ScalewayIPDefinition `json:"ips"` +} + +// ScalewayGetIP represents the response of a GET /ips/{id_ip} +type ScalewayGetIP struct { + IP ScalewayIPDefinition `json:"ip"` +} + +// ScalewaySecurityGroup represents a Scaleway security group +type ScalewaySecurityGroup struct { + // Identifier is a unique identifier for the security group + Identifier string `json:"id,omitempty"` + + // Name is the user-defined name of the security group + Name string `json:"name,omitempty"` +} + +// ScalewayNewSecurityGroup definition POST request /security_groups +type ScalewayNewSecurityGroup struct { + Organization string `json:"organization"` + Name string `json:"name"` + Description string `json:"description"` +} + +// ScalewayUpdateSecurityGroup definition PUT request /security_groups +type ScalewayUpdateSecurityGroup struct { + Organization string `json:"organization"` + Name string `json:"name"` + Description string `json:"description"` + OrganizationDefault bool `json:"organization_default"` +} + +// ScalewayServer represents a Scaleway server +type ScalewayServer struct { + // Arch is the architecture target of the server + Arch string `json:"arch,omitempty"` + + // Identifier is a unique identifier for the server + Identifier string `json:"id,omitempty"` + + // Name is the user-defined name of the server + Name string `json:"name,omitempty"` + + // CreationDate is the creation date of the server + CreationDate string `json:"creation_date,omitempty"` + + // ModificationDate is the date of the last modification of the server + ModificationDate string `json:"modification_date,omitempty"` + + // Image is the image used by the server + Image ScalewayImage `json:"image,omitempty"` + + // DynamicIPRequired is a flag that defines a server with a dynamic ip address attached + DynamicIPRequired *bool `json:"dynamic_ip_required,omitempty"` + + // PublicIP is the public IP address bound to the server + PublicAddress ScalewayIPAddress `json:"public_ip,omitempty"` + + // State is the current status of the server + State string `json:"state,omitempty"` + + // StateDetail is the detailed status of the server + StateDetail string `json:"state_detail,omitempty"` + + // PrivateIP represents the private IPV4 attached to the server (changes on each boot) + PrivateIP string `json:"private_ip,omitempty"` + + // Bootscript is the unique identifier of the selected bootscript + Bootscript *ScalewayBootscript `json:"bootscript,omitempty"` + + // Hostname represents the ServerName in a format compatible with unix's hostname + Hostname string `json:"hostname,omitempty"` + + // Tags represents user-defined tags + 
Tags []string `json:"tags,omitempty"` + + // Volumes are the attached volumes + Volumes map[string]ScalewayVolume `json:"volumes,omitempty"` + + // SecurityGroup is the selected security group object + SecurityGroup ScalewaySecurityGroup `json:"security_group,omitempty"` + + // Organization is the owner of the server + Organization string `json:"organization,omitempty"` + + // CommercialType is the commercial type of the server (i.e: C1, C2[SML], VC1S) + CommercialType string `json:"commercial_type,omitempty"` + + // Location of the server + Location struct { + Platform string `json:"platform_id,omitempty"` + Chassis string `json:"chassis_id,omitempty"` + Cluster string `json:"cluster_id,omitempty"` + Hypervisor string `json:"hypervisor_id,omitempty"` + Blade string `json:"blade_id,omitempty"` + Node string `json:"node_id,omitempty"` + ZoneID string `json:"zone_id,omitempty"` + } `json:"location,omitempty"` + + IPV6 *ScalewayIPV6Definition `json:"ipv6,omitempty"` + + EnableIPV6 bool `json:"enable_ipv6,omitempty"` + + // This fields are not returned by the API, we generate it + DNSPublic string `json:"dns_public,omitempty"` + DNSPrivate string `json:"dns_private,omitempty"` +} + +// ScalewayIPV6Definition represents a Scaleway ipv6 +type ScalewayIPV6Definition struct { + Netmask string `json:"netmask"` + Gateway string `json:"gateway"` + Address string `json:"address"` +} + +// ScalewayServerPatchDefinition represents a Scaleway server with nullable fields (for PATCH) +type ScalewayServerPatchDefinition struct { + Arch *string `json:"arch,omitempty"` + Name *string `json:"name,omitempty"` + CreationDate *string `json:"creation_date,omitempty"` + ModificationDate *string `json:"modification_date,omitempty"` + Image *ScalewayImage `json:"image,omitempty"` + DynamicIPRequired *bool `json:"dynamic_ip_required,omitempty"` + PublicAddress *ScalewayIPAddress `json:"public_ip,omitempty"` + State *string `json:"state,omitempty"` + StateDetail *string `json:"state_detail,omitempty"` + PrivateIP *string `json:"private_ip,omitempty"` + Bootscript *string `json:"bootscript,omitempty"` + Hostname *string `json:"hostname,omitempty"` + Volumes *map[string]ScalewayVolume `json:"volumes,omitempty"` + SecurityGroup *ScalewaySecurityGroup `json:"security_group,omitempty"` + Organization *string `json:"organization,omitempty"` + Tags *[]string `json:"tags,omitempty"` + IPV6 *ScalewayIPV6Definition `json:"ipv6,omitempty"` + EnableIPV6 *bool `json:"enable_ipv6,omitempty"` +} + +// ScalewayServerDefinition represents a Scaleway server with image definition +type ScalewayServerDefinition struct { + // Name is the user-defined name of the server + Name string `json:"name"` + + // Image is the image used by the server + Image *string `json:"image,omitempty"` + + // Volumes are the attached volumes + Volumes map[string]string `json:"volumes,omitempty"` + + // DynamicIPRequired is a flag that defines a server with a dynamic ip address attached + DynamicIPRequired *bool `json:"dynamic_ip_required,omitempty"` + + // Bootscript is the bootscript used by the server + Bootscript *string `json:"bootscript"` + + // Tags are the metadata tags attached to the server + Tags []string `json:"tags,omitempty"` + + // Organization is the owner of the server + Organization string `json:"organization"` + + // CommercialType is the commercial type of the server (i.e: C1, C2[SML], VC1S) + CommercialType string `json:"commercial_type"` + + PublicIP string `json:"public_ip,omitempty"` + + EnableIPV6 bool `json:"enable_ipv6,omitempty"` + + 
SecurityGroup string `json:"security_group,omitempty"` +} + +// ScalewayOneServer represents the response of a GET /servers/UUID API call +type ScalewayOneServer struct { + Server ScalewayServer `json:"server,omitempty"` +} + +// ScalewayServers represents a group of Scaleway servers +type ScalewayServers struct { + // Servers holds scaleway servers of the response + Servers []ScalewayServer `json:"servers,omitempty"` +} + +// ScalewayServerAction represents an action to perform on a Scaleway server +type ScalewayServerAction struct { + // Action is the name of the action to trigger + Action string `json:"action,omitempty"` +} + +// ScalewaySnapshotDefinition represents a Scaleway snapshot definition +type ScalewaySnapshotDefinition struct { + VolumeIDentifier string `json:"volume_id"` + Name string `json:"name,omitempty"` + Organization string `json:"organization"` +} + +// ScalewayImageDefinition represents a Scaleway image definition +type ScalewayImageDefinition struct { + SnapshotIDentifier string `json:"root_volume"` + Name string `json:"name,omitempty"` + Organization string `json:"organization"` + Arch string `json:"arch"` + DefaultBootscript *string `json:"default_bootscript,omitempty"` +} + +// ScalewayRoleDefinition represents a Scaleway Token UserId Role +type ScalewayRoleDefinition struct { + Organization ScalewayOrganizationDefinition `json:"organization,omitempty"` + Role string `json:"role,omitempty"` +} + +// ScalewayTokenDefinition represents a Scaleway Token +type ScalewayTokenDefinition struct { + UserID string `json:"user_id"` + Description string `json:"description,omitempty"` + Roles ScalewayRoleDefinition `json:"roles"` + Expires string `json:"expires"` + InheritsUsersPerms bool `json:"inherits_user_perms"` + ID string `json:"id"` +} + +// ScalewayTokensDefinition represents a Scaleway Tokens +type ScalewayTokensDefinition struct { + Token ScalewayTokenDefinition `json:"token"` +} + +// ScalewayGetTokens represents a list of Scaleway Tokens +type ScalewayGetTokens struct { + Tokens []ScalewayTokenDefinition `json:"tokens"` +} + +// ScalewayContainerData represents a Scaleway container data (S3) +type ScalewayContainerData struct { + LastModified string `json:"last_modified"` + Name string `json:"name"` + Size string `json:"size"` +} + +// ScalewayGetContainerDatas represents a list of Scaleway containers data (S3) +type ScalewayGetContainerDatas struct { + Container []ScalewayContainerData `json:"container"` +} + +// ScalewayContainer represents a Scaleway container (S3) +type ScalewayContainer struct { + ScalewayOrganizationDefinition `json:"organization"` + Name string `json:"name"` + Size string `json:"size"` +} + +// ScalewayGetContainers represents a list of Scaleway containers (S3) +type ScalewayGetContainers struct { + Containers []ScalewayContainer `json:"containers"` +} + +// ScalewayConnectResponse represents the answer from POST /tokens +type ScalewayConnectResponse struct { + Token ScalewayTokenDefinition `json:"token"` +} + +// ScalewayConnect represents the data to connect +type ScalewayConnect struct { + Email string `json:"email"` + Password string `json:"password"` + Description string `json:"description"` + Expires bool `json:"expires"` +} + +// ScalewayConnectInterface is the interface implemented by ScalewayConnect, +// ScalewayConnectByOTP and ScalewayConnectByBackupCode +type ScalewayConnectInterface interface { + GetPassword() string +} + +func (s *ScalewayConnect) GetPassword() string { + return s.Password +} + +type ScalewayConnectByOTP 
struct { + ScalewayConnect + TwoFAToken string `json:"2FA_token"` +} + +type ScalewayConnectByBackupCode struct { + ScalewayConnect + TwoFABackupCode string `json:"2FA_backup_code"` +} + +// ScalewayOrganizationDefinition represents a Scaleway Organization +type ScalewayOrganizationDefinition struct { + ID string `json:"id"` + Name string `json:"name"` + Users []ScalewayUserDefinition `json:"users"` +} + +// ScalewayOrganizationsDefinition represents a Scaleway Organizations +type ScalewayOrganizationsDefinition struct { + Organizations []ScalewayOrganizationDefinition `json:"organizations"` +} + +// ScalewayUserDefinition represents a Scaleway User +type ScalewayUserDefinition struct { + Email string `json:"email"` + Firstname string `json:"firstname"` + Fullname string `json:"fullname"` + ID string `json:"id"` + Lastname string `json:"lastname"` + Organizations []ScalewayOrganizationDefinition `json:"organizations"` + Roles []ScalewayRoleDefinition `json:"roles"` + SSHPublicKeys []ScalewayKeyDefinition `json:"ssh_public_keys"` +} + +// ScalewayUsersDefinition represents the response of a GET /user +type ScalewayUsersDefinition struct { + User ScalewayUserDefinition `json:"user"` +} + +// ScalewayKeyDefinition represents a key +type ScalewayKeyDefinition struct { + Key string `json:"key"` + Fingerprint string `json:"fingerprint,omitempty"` +} + +// ScalewayUserPatchSSHKeyDefinition represents a User Patch +type ScalewayUserPatchSSHKeyDefinition struct { + SSHPublicKeys []ScalewayKeyDefinition `json:"ssh_public_keys"` +} + +// ScalewayDashboardResp represents a dashboard received from the API +type ScalewayDashboardResp struct { + Dashboard ScalewayDashboard +} + +// ScalewayDashboard represents a dashboard +type ScalewayDashboard struct { + VolumesCount int `json:"volumes_count"` + RunningServersCount int `json:"running_servers_count"` + ImagesCount int `json:"images_count"` + SnapshotsCount int `json:"snapshots_count"` + ServersCount int `json:"servers_count"` + IPsCount int `json:"ips_count"` +} + +// ScalewayPermissions represents the response of GET /permissions +type ScalewayPermissions map[string]ScalewayPermCategory + +// ScalewayPermCategory represents ScalewayPermissions's fields +type ScalewayPermCategory map[string][]string + +// ScalewayPermissionDefinition represents the permissions +type ScalewayPermissionDefinition struct { + Permissions ScalewayPermissions `json:"permissions"` +} + +// ScalewayUserdatas represents the response of a GET /user_data +type ScalewayUserdatas struct { + UserData []string `json:"user_data"` +} + +// ScalewayQuota represents a map of quota (name, value) +type ScalewayQuota map[string]int + +// ScalewayGetQuotas represents the response of GET /organizations/{orga_id}/quotas +type ScalewayGetQuotas struct { + Quotas ScalewayQuota `json:"quotas"` +} + +// ScalewayUserdata represents []byte +type ScalewayUserdata []byte + +// FuncMap used for json inspection +var FuncMap = template.FuncMap{ + "json": func(v interface{}) string { + a, _ := json.Marshal(v) + return string(a) + }, +} + +// MarketLocalImageDefinition represents localImage of marketplace version +type MarketLocalImageDefinition struct { + Arch string `json:"arch"` + ID string `json:"id"` + Zone string `json:"zone"` +} + +// MarketLocalImages represents an array of local images +type MarketLocalImages struct { + LocalImages []MarketLocalImageDefinition `json:"local_images"` +} + +// MarketLocalImage represents local image +type MarketLocalImage struct { + LocalImages 
MarketLocalImageDefinition `json:"local_image"` +} + +// MarketVersionDefinition represents version of marketplace image +type MarketVersionDefinition struct { + CreationDate string `json:"creation_date"` + ID string `json:"id"` + Image struct { + ID string `json:"id"` + Name string `json:"name"` + } `json:"image"` + ModificationDate string `json:"modification_date"` + Name string `json:"name"` + MarketLocalImages +} + +// MarketVersions represents an array of marketplace image versions +type MarketVersions struct { + Versions []MarketVersionDefinition `json:"versions"` +} + +// MarketVersion represents version of marketplace image +type MarketVersion struct { + Version MarketVersionDefinition `json:"version"` +} + +// MarketImage represents MarketPlace image +type MarketImage struct { + Categories []string `json:"categories"` + CreationDate string `json:"creation_date"` + CurrentPublicVersion string `json:"current_public_version"` + Description string `json:"description"` + ID string `json:"id"` + Logo string `json:"logo"` + ModificationDate string `json:"modification_date"` + Name string `json:"name"` + Organization struct { + ID string `json:"id"` + Name string `json:"name"` + } `json:"organization"` + Public bool `json:"-"` + MarketVersions +} + +// MarketImages represents MarketPlace images +type MarketImages struct { + Images []MarketImage `json:"images"` +} + +// NewScalewayAPI creates a ready-to-use ScalewayAPI client +func NewScalewayAPI(organization, token, userAgent, region string, options ...func(*ScalewayAPI)) (*ScalewayAPI, error) { + s := &ScalewayAPI{ + // exposed + Organization: organization, + Token: token, + Logger: NewDefaultLogger(), + + // internal + client: &http.Client{}, + verbose: os.Getenv("SCW_VERBOSE_API") != "", + password: "", + userAgent: userAgent, + } + for _, option := range options { + option(s) + } + cache, err := NewScalewayCache(func() { s.Logger.Debugf("Writing cache file to disk") }) + if err != nil { + return nil, err + } + s.Cache = cache + if os.Getenv("SCW_TLSVERIFY") == "0" { + s.client.Transport = &http.Transport{ + TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, + } + } + switch region { + case "par1", "": + s.computeAPI = ComputeAPIPar1 + case "ams1": + s.computeAPI = ComputeAPIAms1 + default: + return nil, fmt.Errorf("%s isn't a valid region", region) + } + s.Region = region + if url := os.Getenv("SCW_COMPUTE_API"); url != "" { + s.computeAPI = url + } + return s, nil +} + +// ClearCache clears the cache +func (s *ScalewayAPI) ClearCache() { + s.Cache.Clear() +} + +// Sync flushes out the cache to the disk +func (s *ScalewayAPI) Sync() { + s.Cache.Save() +} + +func (s *ScalewayAPI) response(method, uri string, content io.Reader) (resp *http.Response, err error) { + var ( + req *http.Request + ) + + req, err = http.NewRequest(method, uri, content) + if err != nil { + err = fmt.Errorf("response %s %s", method, uri) + return + } + req.Header.Set("X-Auth-Token", s.Token) + req.Header.Set("Content-Type", "application/json") + req.Header.Set("User-Agent", s.userAgent) + s.LogHTTP(req) + if s.verbose { + dump, _ := httputil.DumpRequest(req, true) + s.Debugf("%v", string(dump)) + } else { + s.Debugf("[%s]: %v", method, uri) + } + resp, err = s.client.Do(req) + return +} + +// GetResponsePaginate fetchs all resources and returns an http.Response object for the requested resource +func (s *ScalewayAPI) GetResponsePaginate(apiURL, resource string, values url.Values) (*http.Response, error) { + resp, err := s.response("HEAD", 
fmt.Sprintf("%s/%s?%s", strings.TrimRight(apiURL, "/"), resource, values.Encode()), nil) + if err != nil { + return nil, err + } + + count := resp.Header.Get("X-Total-Count") + var maxElem int + if count == "" { + maxElem = 0 + } else { + maxElem, err = strconv.Atoi(count) + if err != nil { + return nil, err + } + } + + get := maxElem / perPage + if (float32(maxElem) / perPage) > float32(get) { + get++ + } + + if get <= 1 { // If there is 0 or 1 page of result, the response is not paginated + if len(values) == 0 { + return s.response("GET", fmt.Sprintf("%s/%s", strings.TrimRight(apiURL, "/"), resource), nil) + } + return s.response("GET", fmt.Sprintf("%s/%s?%s", strings.TrimRight(apiURL, "/"), resource, values.Encode()), nil) + } + + fetchAll := !(values.Get("per_page") != "" || values.Get("page") != "") + if fetchAll { + var g errgroup.Group + + ch := make(chan *http.Response, get) + for i := 1; i <= get; i++ { + i := i // closure tricks + g.Go(func() (err error) { + var resp *http.Response + + val := url.Values{} + val.Set("per_page", fmt.Sprintf("%v", perPage)) + val.Set("page", fmt.Sprintf("%v", i)) + resp, err = s.response("GET", fmt.Sprintf("%s/%s?%s", strings.TrimRight(apiURL, "/"), resource, val.Encode()), nil) + ch <- resp + return + }) + } + if err = g.Wait(); err != nil { + return nil, err + } + newBody := make(map[string][]json.RawMessage) + body := make(map[string][]json.RawMessage) + key := "" + for i := 0; i < get; i++ { + res := <-ch + if res.StatusCode != http.StatusOK { + return res, nil + } + content, err := ioutil.ReadAll(res.Body) + res.Body.Close() + if err != nil { + return nil, err + } + if err := json.Unmarshal(content, &body); err != nil { + return nil, err + } + + if i == 0 { + resp = res + for k := range body { + key = k + break + } + } + newBody[key] = append(newBody[key], body[key]...) 
+ } + payload := new(bytes.Buffer) + if err := json.NewEncoder(payload).Encode(newBody); err != nil { + return nil, err + } + resp.Body = ioutil.NopCloser(payload) + } else { + resp, err = s.response("GET", fmt.Sprintf("%s/%s?%s", strings.TrimRight(apiURL, "/"), resource, values.Encode()), nil) + } + return resp, err +} + +// PostResponse returns an http.Response object for the updated resource +func (s *ScalewayAPI) PostResponse(apiURL, resource string, data interface{}) (*http.Response, error) { + payload := new(bytes.Buffer) + if err := json.NewEncoder(payload).Encode(data); err != nil { + return nil, err + } + return s.response("POST", fmt.Sprintf("%s/%s", strings.TrimRight(apiURL, "/"), resource), payload) +} + +// PatchResponse returns an http.Response object for the updated resource +func (s *ScalewayAPI) PatchResponse(apiURL, resource string, data interface{}) (*http.Response, error) { + payload := new(bytes.Buffer) + if err := json.NewEncoder(payload).Encode(data); err != nil { + return nil, err + } + return s.response("PATCH", fmt.Sprintf("%s/%s", strings.TrimRight(apiURL, "/"), resource), payload) +} + +// PutResponse returns an http.Response object for the updated resource +func (s *ScalewayAPI) PutResponse(apiURL, resource string, data interface{}) (*http.Response, error) { + payload := new(bytes.Buffer) + if err := json.NewEncoder(payload).Encode(data); err != nil { + return nil, err + } + return s.response("PUT", fmt.Sprintf("%s/%s", strings.TrimRight(apiURL, "/"), resource), payload) +} + +// DeleteResponse returns an http.Response object for the deleted resource +func (s *ScalewayAPI) DeleteResponse(apiURL, resource string) (*http.Response, error) { + return s.response("DELETE", fmt.Sprintf("%s/%s", strings.TrimRight(apiURL, "/"), resource), nil) +} + +// handleHTTPError checks the statusCode and displays the error +func (s *ScalewayAPI) handleHTTPError(goodStatusCode []int, resp *http.Response) ([]byte, error) { + body, err := ioutil.ReadAll(resp.Body) + if err != nil { + return nil, err + } + if s.verbose { + resp.Body = ioutil.NopCloser(bytes.NewBuffer(body)) + dump, err := httputil.DumpResponse(resp, true) + if err == nil { + var js bytes.Buffer + + err = json.Indent(&js, body, "", " ") + if err != nil { + s.Debugf("[Response]: [%v]\n%v", resp.StatusCode, string(dump)) + } else { + s.Debugf("[Response]: [%v]\n%v", resp.StatusCode, js.String()) + } + } + } else { + s.Debugf("[Response]: [%v]\n%v", resp.StatusCode, string(body)) + } + + if resp.StatusCode >= http.StatusInternalServerError { + return nil, errors.New(string(body)) + } + good := false + for _, code := range goodStatusCode { + if code == resp.StatusCode { + good = true + } + } + if !good { + var scwError ScalewayAPIError + + if err := json.Unmarshal(body, &scwError); err != nil { + return nil, err + } + scwError.StatusCode = resp.StatusCode + s.Debugf("%s", scwError.Error()) + return nil, scwError + } + return body, nil +} + +func (s *ScalewayAPI) fetchServers(api string, query url.Values, out chan<- ScalewayServers) func() error { + return func() error { + resp, err := s.GetResponsePaginate(api, "servers", query) + if err != nil { + return err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return err + } + var servers ScalewayServers + + if err = json.Unmarshal(body, &servers); err != nil { + return err + } + out <- servers + return nil + } +} + +// GetServers gets the list of servers from the ScalewayAPI +func (s *ScalewayAPI) GetServers(all 
bool, limit int) (*[]ScalewayServer, error) { + query := url.Values{} + if !all { + query.Set("state", "running") + } + if limit > 0 { + // FIXME: wait for the API to be ready + // query.Set("per_page", strconv.Itoa(limit)) + panic("Not implemented yet") + } + if all && limit == 0 { + s.Cache.ClearServers() + } + var ( + g errgroup.Group + apis = []string{ + ComputeAPIPar1, + ComputeAPIAms1, + } + ) + + serverChan := make(chan ScalewayServers, 2) + for _, api := range apis { + g.Go(s.fetchServers(api, query, serverChan)) + } + + if err := g.Wait(); err != nil { + return nil, err + } + close(serverChan) + var servers ScalewayServers + + for server := range serverChan { + servers.Servers = append(servers.Servers, server.Servers...) + } + + for i, server := range servers.Servers { + servers.Servers[i].DNSPublic = server.Identifier + URLPublicDNS + servers.Servers[i].DNSPrivate = server.Identifier + URLPrivateDNS + s.Cache.InsertServer(server.Identifier, server.Location.ZoneID, server.Arch, server.Organization, server.Name) + } + return &servers.Servers, nil +} + +// ScalewaySortServers represents a wrapper to sort by CreationDate the servers +type ScalewaySortServers []ScalewayServer + +func (s ScalewaySortServers) Len() int { + return len(s) +} + +func (s ScalewaySortServers) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s ScalewaySortServers) Less(i, j int) bool { + date1, _ := time.Parse("2006-01-02T15:04:05.000000+00:00", s[i].CreationDate) + date2, _ := time.Parse("2006-01-02T15:04:05.000000+00:00", s[j].CreationDate) + return date2.Before(date1) +} + +// GetServer gets a server from the ScalewayAPI +func (s *ScalewayAPI) GetServer(serverID string) (*ScalewayServer, error) { + if serverID == "" { + return nil, fmt.Errorf("cannot get server without serverID") + } + resp, err := s.GetResponsePaginate(s.computeAPI, "servers/"+serverID, url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + + var oneServer ScalewayOneServer + + if err = json.Unmarshal(body, &oneServer); err != nil { + return nil, err + } + // FIXME arch, owner, title + oneServer.Server.DNSPublic = oneServer.Server.Identifier + URLPublicDNS + oneServer.Server.DNSPrivate = oneServer.Server.Identifier + URLPrivateDNS + s.Cache.InsertServer(oneServer.Server.Identifier, oneServer.Server.Location.ZoneID, oneServer.Server.Arch, oneServer.Server.Organization, oneServer.Server.Name) + return &oneServer.Server, nil +} + +// PostServerAction posts an action on a server +func (s *ScalewayAPI) PostServerAction(serverID, action string) error { + data := ScalewayServerAction{ + Action: action, + } + resp, err := s.PostResponse(s.computeAPI, fmt.Sprintf("servers/%s/action", serverID), data) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusAccepted}, resp) + return err +} + +// DeleteServer deletes a server +func (s *ScalewayAPI) DeleteServer(serverID string) error { + defer s.Cache.RemoveServer(serverID) + resp, err := s.DeleteResponse(s.computeAPI, fmt.Sprintf("servers/%s", serverID)) + if err != nil { + return err + } + defer resp.Body.Close() + + if _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp); err != nil { + return err + } + return nil +} + +// PostServer creates a new server +func (s *ScalewayAPI) PostServer(definition ScalewayServerDefinition) (string, error) { + definition.Organization = s.Organization + + resp, err := 
s.PostResponse(s.computeAPI, "servers", definition) + if err != nil { + return "", err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusCreated}, resp) + if err != nil { + return "", err + } + var server ScalewayOneServer + + if err = json.Unmarshal(body, &server); err != nil { + return "", err + } + // FIXME arch, owner, title + s.Cache.InsertServer(server.Server.Identifier, server.Server.Location.ZoneID, server.Server.Arch, server.Server.Organization, server.Server.Name) + return server.Server.Identifier, nil +} + +// PatchUserSSHKey updates a user +func (s *ScalewayAPI) PatchUserSSHKey(UserID string, definition ScalewayUserPatchSSHKeyDefinition) error { + resp, err := s.PatchResponse(AccountAPI, fmt.Sprintf("users/%s", UserID), definition) + if err != nil { + return err + } + + defer resp.Body.Close() + + if _, err := s.handleHTTPError([]int{http.StatusOK}, resp); err != nil { + return err + } + return nil +} + +// PatchServer updates a server +func (s *ScalewayAPI) PatchServer(serverID string, definition ScalewayServerPatchDefinition) error { + resp, err := s.PatchResponse(s.computeAPI, fmt.Sprintf("servers/%s", serverID), definition) + if err != nil { + return err + } + defer resp.Body.Close() + + if _, err := s.handleHTTPError([]int{http.StatusOK}, resp); err != nil { + return err + } + return nil +} + +// PostSnapshot creates a new snapshot +func (s *ScalewayAPI) PostSnapshot(volumeID string, name string) (string, error) { + definition := ScalewaySnapshotDefinition{ + VolumeIDentifier: volumeID, + Name: name, + Organization: s.Organization, + } + resp, err := s.PostResponse(s.computeAPI, "snapshots", definition) + if err != nil { + return "", err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusCreated}, resp) + if err != nil { + return "", err + } + var snapshot ScalewayOneSnapshot + + if err = json.Unmarshal(body, &snapshot); err != nil { + return "", err + } + // FIXME arch, owner, title + s.Cache.InsertSnapshot(snapshot.Snapshot.Identifier, "", "", snapshot.Snapshot.Organization, snapshot.Snapshot.Name) + return snapshot.Snapshot.Identifier, nil +} + +// PostImage creates a new image +func (s *ScalewayAPI) PostImage(volumeID string, name string, bootscript string, arch string) (string, error) { + definition := ScalewayImageDefinition{ + SnapshotIDentifier: volumeID, + Name: name, + Organization: s.Organization, + Arch: arch, + } + if bootscript != "" { + definition.DefaultBootscript = &bootscript + } + + resp, err := s.PostResponse(s.computeAPI, "images", definition) + if err != nil { + return "", err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusCreated}, resp) + if err != nil { + return "", err + } + var image ScalewayOneImage + + if err = json.Unmarshal(body, &image); err != nil { + return "", err + } + // FIXME region, arch, owner, title + s.Cache.InsertImage(image.Image.Identifier, "", image.Image.Arch, image.Image.Organization, image.Image.Name, "") + return image.Image.Identifier, nil +} + +// PostVolume creates a new volume +func (s *ScalewayAPI) PostVolume(definition ScalewayVolumeDefinition) (string, error) { + definition.Organization = s.Organization + if definition.Type == "" { + definition.Type = "l_ssd" + } + + resp, err := s.PostResponse(s.computeAPI, "volumes", definition) + if err != nil { + return "", err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusCreated}, resp) + if err != nil { + return "", err + } + var volume 
ScalewayOneVolume + + if err = json.Unmarshal(body, &volume); err != nil { + return "", err + } + // FIXME: s.Cache.InsertVolume(volume.Volume.Identifier, volume.Volume.Name) + return volume.Volume.Identifier, nil +} + +// PutVolume updates a volume +func (s *ScalewayAPI) PutVolume(volumeID string, definition ScalewayVolumePutDefinition) error { + resp, err := s.PutResponse(s.computeAPI, fmt.Sprintf("volumes/%s", volumeID), definition) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusOK}, resp) + return err +} + +// ResolveServer attempts to find a matching Identifier for the input string +func (s *ScalewayAPI) ResolveServer(needle string) (ScalewayResolverResults, error) { + servers, err := s.Cache.LookUpServers(needle, true) + if err != nil { + return servers, err + } + if len(servers) == 0 { + if _, err = s.GetServers(true, 0); err != nil { + return nil, err + } + servers, err = s.Cache.LookUpServers(needle, true) + } + return servers, err +} + +// ResolveVolume attempts to find a matching Identifier for the input string +func (s *ScalewayAPI) ResolveVolume(needle string) (ScalewayResolverResults, error) { + volumes, err := s.Cache.LookUpVolumes(needle, true) + if err != nil { + return volumes, err + } + if len(volumes) == 0 { + if _, err = s.GetVolumes(); err != nil { + return nil, err + } + volumes, err = s.Cache.LookUpVolumes(needle, true) + } + return volumes, err +} + +// ResolveSnapshot attempts to find a matching Identifier for the input string +func (s *ScalewayAPI) ResolveSnapshot(needle string) (ScalewayResolverResults, error) { + snapshots, err := s.Cache.LookUpSnapshots(needle, true) + if err != nil { + return snapshots, err + } + if len(snapshots) == 0 { + if _, err = s.GetSnapshots(); err != nil { + return nil, err + } + snapshots, err = s.Cache.LookUpSnapshots(needle, true) + } + return snapshots, err +} + +// ResolveImage attempts to find a matching Identifier for the input string +func (s *ScalewayAPI) ResolveImage(needle string) (ScalewayResolverResults, error) { + images, err := s.Cache.LookUpImages(needle, true) + if err != nil { + return images, err + } + if len(images) == 0 { + if _, err = s.GetImages(); err != nil { + return nil, err + } + images, err = s.Cache.LookUpImages(needle, true) + } + return images, err +} + +// ResolveBootscript attempts to find a matching Identifier for the input string +func (s *ScalewayAPI) ResolveBootscript(needle string) (ScalewayResolverResults, error) { + bootscripts, err := s.Cache.LookUpBootscripts(needle, true) + if err != nil { + return bootscripts, err + } + if len(bootscripts) == 0 { + if _, err = s.GetBootscripts(); err != nil { + return nil, err + } + bootscripts, err = s.Cache.LookUpBootscripts(needle, true) + } + return bootscripts, err +} + +// GetImages gets the list of images from the ScalewayAPI +func (s *ScalewayAPI) GetImages() (*[]MarketImage, error) { + images, err := s.GetMarketPlaceImages("") + if err != nil { + return nil, err + } + s.Cache.ClearImages() + for i, image := range images.Images { + if image.CurrentPublicVersion != "" { + for _, version := range image.Versions { + if version.ID == image.CurrentPublicVersion { + for _, localImage := range version.LocalImages { + images.Images[i].Public = true + s.Cache.InsertImage(localImage.ID, localImage.Zone, localImage.Arch, image.Organization.ID, image.Name, image.CurrentPublicVersion) + } + } + } + } + } + values := url.Values{} + values.Set("organization", s.Organization) + resp, err := 
s.GetResponsePaginate(s.computeAPI, "images", values) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var OrgaImages ScalewayImages + + if err = json.Unmarshal(body, &OrgaImages); err != nil { + return nil, err + } + + for _, orgaImage := range OrgaImages.Images { + images.Images = append(images.Images, MarketImage{ + Categories: []string{"MyImages"}, + CreationDate: orgaImage.CreationDate, + CurrentPublicVersion: orgaImage.Identifier, + ModificationDate: orgaImage.ModificationDate, + Name: orgaImage.Name, + Public: false, + MarketVersions: MarketVersions{ + Versions: []MarketVersionDefinition{ + { + CreationDate: orgaImage.CreationDate, + ID: orgaImage.Identifier, + ModificationDate: orgaImage.ModificationDate, + MarketLocalImages: MarketLocalImages{ + LocalImages: []MarketLocalImageDefinition{ + { + Arch: orgaImage.Arch, + ID: orgaImage.Identifier, + // TODO: fecth images from ams1 and par1 + Zone: s.Region, + }, + }, + }, + }, + }, + }, + }) + s.Cache.InsertImage(orgaImage.Identifier, s.Region, orgaImage.Arch, orgaImage.Organization, orgaImage.Name, "") + } + return &images.Images, nil +} + +// GetImage gets an image from the ScalewayAPI +func (s *ScalewayAPI) GetImage(imageID string) (*ScalewayImage, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "images/"+imageID, url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var oneImage ScalewayOneImage + + if err = json.Unmarshal(body, &oneImage); err != nil { + return nil, err + } + // FIXME owner, title + s.Cache.InsertImage(oneImage.Image.Identifier, s.Region, oneImage.Image.Arch, oneImage.Image.Organization, oneImage.Image.Name, "") + return &oneImage.Image, nil +} + +// DeleteImage deletes a image +func (s *ScalewayAPI) DeleteImage(imageID string) error { + defer s.Cache.RemoveImage(imageID) + resp, err := s.DeleteResponse(s.computeAPI, fmt.Sprintf("images/%s", imageID)) + if err != nil { + return err + } + defer resp.Body.Close() + + if _, err := s.handleHTTPError([]int{http.StatusNoContent}, resp); err != nil { + return err + } + return nil +} + +// DeleteSnapshot deletes a snapshot +func (s *ScalewayAPI) DeleteSnapshot(snapshotID string) error { + defer s.Cache.RemoveSnapshot(snapshotID) + resp, err := s.DeleteResponse(s.computeAPI, fmt.Sprintf("snapshots/%s", snapshotID)) + if err != nil { + return err + } + defer resp.Body.Close() + + if _, err := s.handleHTTPError([]int{http.StatusNoContent}, resp); err != nil { + return err + } + return nil +} + +// DeleteVolume deletes a volume +func (s *ScalewayAPI) DeleteVolume(volumeID string) error { + defer s.Cache.RemoveVolume(volumeID) + resp, err := s.DeleteResponse(s.computeAPI, fmt.Sprintf("volumes/%s", volumeID)) + if err != nil { + return err + } + defer resp.Body.Close() + + if _, err := s.handleHTTPError([]int{http.StatusNoContent}, resp); err != nil { + return err + } + return nil +} + +// GetSnapshots gets the list of snapshots from the ScalewayAPI +func (s *ScalewayAPI) GetSnapshots() (*[]ScalewaySnapshot, error) { + query := url.Values{} + s.Cache.ClearSnapshots() + + resp, err := s.GetResponsePaginate(s.computeAPI, "snapshots", query) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var 
snapshots ScalewaySnapshots + + if err = json.Unmarshal(body, &snapshots); err != nil { + return nil, err + } + for _, snapshot := range snapshots.Snapshots { + // FIXME region, arch, owner, title + s.Cache.InsertSnapshot(snapshot.Identifier, "", "", snapshot.Organization, snapshot.Name) + } + return &snapshots.Snapshots, nil +} + +// GetSnapshot gets a snapshot from the ScalewayAPI +func (s *ScalewayAPI) GetSnapshot(snapshotID string) (*ScalewaySnapshot, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "snapshots/"+snapshotID, url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var oneSnapshot ScalewayOneSnapshot + + if err = json.Unmarshal(body, &oneSnapshot); err != nil { + return nil, err + } + // FIXME region, arch, owner, title + s.Cache.InsertSnapshot(oneSnapshot.Snapshot.Identifier, "", "", oneSnapshot.Snapshot.Organization, oneSnapshot.Snapshot.Name) + return &oneSnapshot.Snapshot, nil +} + +// GetVolumes gets the list of volumes from the ScalewayAPI +func (s *ScalewayAPI) GetVolumes() (*[]ScalewayVolume, error) { + query := url.Values{} + s.Cache.ClearVolumes() + + resp, err := s.GetResponsePaginate(s.computeAPI, "volumes", query) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + + var volumes ScalewayVolumes + + if err = json.Unmarshal(body, &volumes); err != nil { + return nil, err + } + for _, volume := range volumes.Volumes { + // FIXME region, arch, owner, title + s.Cache.InsertVolume(volume.Identifier, "", "", volume.Organization, volume.Name) + } + return &volumes.Volumes, nil +} + +// GetVolume gets a volume from the ScalewayAPI +func (s *ScalewayAPI) GetVolume(volumeID string) (*ScalewayVolume, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "volumes/"+volumeID, url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var oneVolume ScalewayOneVolume + + if err = json.Unmarshal(body, &oneVolume); err != nil { + return nil, err + } + // FIXME region, arch, owner, title + s.Cache.InsertVolume(oneVolume.Volume.Identifier, "", "", oneVolume.Volume.Organization, oneVolume.Volume.Name) + return &oneVolume.Volume, nil +} + +// GetBootscripts gets the list of bootscripts from the ScalewayAPI +func (s *ScalewayAPI) GetBootscripts() (*[]ScalewayBootscript, error) { + query := url.Values{} + + s.Cache.ClearBootscripts() + resp, err := s.GetResponsePaginate(s.computeAPI, "bootscripts", query) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var bootscripts ScalewayBootscripts + + if err = json.Unmarshal(body, &bootscripts); err != nil { + return nil, err + } + for _, bootscript := range bootscripts.Bootscripts { + // FIXME region, arch, owner, title + s.Cache.InsertBootscript(bootscript.Identifier, "", bootscript.Arch, bootscript.Organization, bootscript.Title) + } + return &bootscripts.Bootscripts, nil +} + +// GetBootscript gets a bootscript from the ScalewayAPI +func (s *ScalewayAPI) GetBootscript(bootscriptID string) (*ScalewayBootscript, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "bootscripts/"+bootscriptID, url.Values{}) + if err != nil { + 
return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var oneBootscript ScalewayOneBootscript + + if err = json.Unmarshal(body, &oneBootscript); err != nil { + return nil, err + } + // FIXME region, arch, owner, title + s.Cache.InsertBootscript(oneBootscript.Bootscript.Identifier, "", oneBootscript.Bootscript.Arch, oneBootscript.Bootscript.Organization, oneBootscript.Bootscript.Title) + return &oneBootscript.Bootscript, nil +} + +// GetUserdatas gets list of userdata for a server +func (s *ScalewayAPI) GetUserdatas(serverID string, metadata bool) (*ScalewayUserdatas, error) { + var uri, endpoint string + + endpoint = s.computeAPI + if metadata { + uri = "/user_data" + endpoint = MetadataAPI + } else { + uri = fmt.Sprintf("servers/%s/user_data", serverID) + } + + resp, err := s.GetResponsePaginate(endpoint, uri, url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var userdatas ScalewayUserdatas + + if err = json.Unmarshal(body, &userdatas); err != nil { + return nil, err + } + return &userdatas, nil +} + +func (s *ScalewayUserdata) String() string { + return string(*s) +} + +// GetUserdata gets a specific userdata for a server +func (s *ScalewayAPI) GetUserdata(serverID, key string, metadata bool) (*ScalewayUserdata, error) { + var uri, endpoint string + + endpoint = s.computeAPI + if metadata { + uri = fmt.Sprintf("/user_data/%s", key) + endpoint = MetadataAPI + } else { + uri = fmt.Sprintf("servers/%s/user_data/%s", serverID, key) + } + + var err error + resp, err := s.GetResponsePaginate(endpoint, uri, url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + if resp.StatusCode != http.StatusOK { + return nil, fmt.Errorf("no such user_data %q (%d)", key, resp.StatusCode) + } + var data ScalewayUserdata + data, err = ioutil.ReadAll(resp.Body) + return &data, err +} + +// PatchUserdata sets a user data +func (s *ScalewayAPI) PatchUserdata(serverID, key string, value []byte, metadata bool) error { + var resource, endpoint string + + endpoint = s.computeAPI + if metadata { + resource = fmt.Sprintf("/user_data/%s", key) + endpoint = MetadataAPI + } else { + resource = fmt.Sprintf("servers/%s/user_data/%s", serverID, key) + } + + uri := fmt.Sprintf("%s/%s", strings.TrimRight(endpoint, "/"), resource) + payload := new(bytes.Buffer) + payload.Write(value) + + req, err := http.NewRequest("PATCH", uri, payload) + if err != nil { + return err + } + + req.Header.Set("X-Auth-Token", s.Token) + req.Header.Set("Content-Type", "text/plain") + req.Header.Set("User-Agent", s.userAgent) + + s.LogHTTP(req) + + resp, err := s.client.Do(req) + if err != nil { + return err + } + defer resp.Body.Close() + + if resp.StatusCode == http.StatusNoContent { + return nil + } + + return fmt.Errorf("cannot set user_data (%d)", resp.StatusCode) +} + +// DeleteUserdata deletes a server user_data +func (s *ScalewayAPI) DeleteUserdata(serverID, key string, metadata bool) error { + var url, endpoint string + + endpoint = s.computeAPI + if metadata { + url = fmt.Sprintf("/user_data/%s", key) + endpoint = MetadataAPI + } else { + url = fmt.Sprintf("servers/%s/user_data/%s", serverID, key) + } + + resp, err := s.DeleteResponse(endpoint, url) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp) + return 
err +} + +// GetTasks get the list of tasks from the ScalewayAPI +func (s *ScalewayAPI) GetTasks() (*[]ScalewayTask, error) { + query := url.Values{} + resp, err := s.GetResponsePaginate(s.computeAPI, "tasks", query) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var tasks ScalewayTasks + + if err = json.Unmarshal(body, &tasks); err != nil { + return nil, err + } + return &tasks.Tasks, nil +} + +// CheckCredentials performs a dummy check to ensure we can contact the API +func (s *ScalewayAPI) CheckCredentials() error { + query := url.Values{} + + resp, err := s.GetResponsePaginate(AccountAPI, "tokens", query) + if err != nil { + return err + } + defer resp.Body.Close() + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return err + } + found := false + var tokens ScalewayGetTokens + + if err = json.Unmarshal(body, &tokens); err != nil { + return err + } + for _, token := range tokens.Tokens { + if token.ID == s.Token { + found = true + break + } + } + if !found { + return fmt.Errorf("Invalid token %v", s.Token) + } + return nil +} + +// GetUserID returns the userID +func (s *ScalewayAPI) GetUserID() (string, error) { + resp, err := s.GetResponsePaginate(AccountAPI, fmt.Sprintf("tokens/%s", s.Token), url.Values{}) + if err != nil { + return "", err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return "", err + } + var token ScalewayTokensDefinition + + if err = json.Unmarshal(body, &token); err != nil { + return "", err + } + return token.Token.UserID, nil +} + +// GetOrganization returns Organization +func (s *ScalewayAPI) GetOrganization() (*ScalewayOrganizationsDefinition, error) { + resp, err := s.GetResponsePaginate(AccountAPI, "organizations", url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var data ScalewayOrganizationsDefinition + + if err = json.Unmarshal(body, &data); err != nil { + return nil, err + } + return &data, nil +} + +// GetUser returns the user +func (s *ScalewayAPI) GetUser() (*ScalewayUserDefinition, error) { + userID, err := s.GetUserID() + if err != nil { + return nil, err + } + resp, err := s.GetResponsePaginate(AccountAPI, fmt.Sprintf("users/%s", userID), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var user ScalewayUsersDefinition + + if err = json.Unmarshal(body, &user); err != nil { + return nil, err + } + return &user.User, nil +} + +// GetPermissions returns the permissions +func (s *ScalewayAPI) GetPermissions() (*ScalewayPermissionDefinition, error) { + resp, err := s.GetResponsePaginate(AccountAPI, fmt.Sprintf("tokens/%s/permissions", s.Token), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var permissions ScalewayPermissionDefinition + + if err = json.Unmarshal(body, &permissions); err != nil { + return nil, err + } + return &permissions, nil +} + +// GetDashboard returns the dashboard +func (s *ScalewayAPI) GetDashboard() (*ScalewayDashboard, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "dashboard", url.Values{}) + 
if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var dashboard ScalewayDashboardResp + + if err = json.Unmarshal(body, &dashboard); err != nil { + return nil, err + } + return &dashboard.Dashboard, nil +} + +// GetServerID returns exactly one server matching +func (s *ScalewayAPI) GetServerID(needle string) (string, error) { + // Parses optional type prefix, i.e: "server:name" -> "name" + _, needle = parseNeedle(needle) + + servers, err := s.ResolveServer(needle) + if err != nil { + return "", fmt.Errorf("Unable to resolve server %s: %s", needle, err) + } + if len(servers) == 1 { + return servers[0].Identifier, nil + } + if len(servers) == 0 { + return "", fmt.Errorf("No such server: %s", needle) + } + return "", showResolverResults(needle, servers) +} + +func showResolverResults(needle string, results ScalewayResolverResults) error { + w := tabwriter.NewWriter(os.Stderr, 20, 1, 3, ' ', 0) + defer w.Flush() + sort.Sort(results) + fmt.Fprintf(w, " IMAGEID\tFROM\tNAME\tZONE\tARCH\n") + for _, result := range results { + if result.Arch == "" { + result.Arch = "n/a" + } + fmt.Fprintf(w, "- %s\t%s\t%s\t%s\t%s\n", result.TruncIdentifier(), result.CodeName(), result.Name, result.Region, result.Arch) + } + return fmt.Errorf("Too many candidates for %s (%d)", needle, len(results)) +} + +// GetVolumeID returns exactly one volume matching +func (s *ScalewayAPI) GetVolumeID(needle string) (string, error) { + // Parses optional type prefix, i.e: "volume:name" -> "name" + _, needle = parseNeedle(needle) + + volumes, err := s.ResolveVolume(needle) + if err != nil { + return "", fmt.Errorf("Unable to resolve volume %s: %s", needle, err) + } + if len(volumes) == 1 { + return volumes[0].Identifier, nil + } + if len(volumes) == 0 { + return "", fmt.Errorf("No such volume: %s", needle) + } + return "", showResolverResults(needle, volumes) +} + +// GetSnapshotID returns exactly one snapshot matching +func (s *ScalewayAPI) GetSnapshotID(needle string) (string, error) { + // Parses optional type prefix, i.e: "snapshot:name" -> "name" + _, needle = parseNeedle(needle) + + snapshots, err := s.ResolveSnapshot(needle) + if err != nil { + return "", fmt.Errorf("Unable to resolve snapshot %s: %s", needle, err) + } + if len(snapshots) == 1 { + return snapshots[0].Identifier, nil + } + if len(snapshots) == 0 { + return "", fmt.Errorf("No such snapshot: %s", needle) + } + return "", showResolverResults(needle, snapshots) +} + +// FilterImagesByArch removes entry that doesn't match with architecture +func FilterImagesByArch(res ScalewayResolverResults, arch string) (ret ScalewayResolverResults) { + if arch == "*" { + return res + } + for _, result := range res { + if result.Arch == arch { + ret = append(ret, result) + } + } + return +} + +// FilterImagesByRegion removes entry that doesn't match with region +func FilterImagesByRegion(res ScalewayResolverResults, region string) (ret ScalewayResolverResults) { + if region == "*" { + return res + } + for _, result := range res { + if result.Region == region { + ret = append(ret, result) + } + } + return +} + +// GetImageID returns exactly one image matching +func (s *ScalewayAPI) GetImageID(needle, arch string) (*ScalewayImageIdentifier, error) { + // Parses optional type prefix, i.e: "image:name" -> "name" + _, needle = parseNeedle(needle) + + images, err := s.ResolveImage(needle) + if err != nil { + return nil, fmt.Errorf("Unable to resolve image %s: %s", 
needle, err) + } + images = FilterImagesByArch(images, arch) + images = FilterImagesByRegion(images, s.Region) + if len(images) == 1 { + return &ScalewayImageIdentifier{ + Identifier: images[0].Identifier, + Arch: images[0].Arch, + // FIXME region, owner hardcoded + Region: images[0].Region, + Owner: "", + }, nil + } + if len(images) == 0 { + return nil, fmt.Errorf("No such image (zone %s, arch %s) : %s", s.Region, arch, needle) + } + return nil, showResolverResults(needle, images) +} + +// GetSecurityGroups returns a ScalewaySecurityGroups +func (s *ScalewayAPI) GetSecurityGroups() (*ScalewayGetSecurityGroups, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "security_groups", url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var securityGroups ScalewayGetSecurityGroups + + if err = json.Unmarshal(body, &securityGroups); err != nil { + return nil, err + } + return &securityGroups, nil +} + +// GetSecurityGroupRules returns a ScalewaySecurityGroupRules +func (s *ScalewayAPI) GetSecurityGroupRules(groupID string) (*ScalewayGetSecurityGroupRules, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, fmt.Sprintf("security_groups/%s/rules", groupID), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var securityGroupRules ScalewayGetSecurityGroupRules + + if err = json.Unmarshal(body, &securityGroupRules); err != nil { + return nil, err + } + return &securityGroupRules, nil +} + +// GetASecurityGroupRule returns a ScalewaySecurityGroupRule +func (s *ScalewayAPI) GetASecurityGroupRule(groupID string, rulesID string) (*ScalewayGetSecurityGroupRule, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, fmt.Sprintf("security_groups/%s/rules/%s", groupID, rulesID), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var securityGroupRules ScalewayGetSecurityGroupRule + + if err = json.Unmarshal(body, &securityGroupRules); err != nil { + return nil, err + } + return &securityGroupRules, nil +} + +// GetASecurityGroup returns a ScalewaySecurityGroup +func (s *ScalewayAPI) GetASecurityGroup(groupsID string) (*ScalewayGetSecurityGroup, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, fmt.Sprintf("security_groups/%s", groupsID), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var securityGroups ScalewayGetSecurityGroup + + if err = json.Unmarshal(body, &securityGroups); err != nil { + return nil, err + } + return &securityGroups, nil +} + +// PostSecurityGroup posts a group on a server +func (s *ScalewayAPI) PostSecurityGroup(group ScalewayNewSecurityGroup) error { + resp, err := s.PostResponse(s.computeAPI, "security_groups", group) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusCreated}, resp) + return err +} + +// PostSecurityGroupRule posts a rule on a server +func (s *ScalewayAPI) PostSecurityGroupRule(SecurityGroupID string, rules ScalewayNewSecurityGroupRule) error { + resp, err := s.PostResponse(s.computeAPI, fmt.Sprintf("security_groups/%s/rules", SecurityGroupID), 
rules) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusCreated}, resp) + return err +} + +// DeleteSecurityGroup deletes a SecurityGroup +func (s *ScalewayAPI) DeleteSecurityGroup(securityGroupID string) error { + resp, err := s.DeleteResponse(s.computeAPI, fmt.Sprintf("security_groups/%s", securityGroupID)) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp) + return err +} + +// PutSecurityGroup updates a SecurityGroup +func (s *ScalewayAPI) PutSecurityGroup(group ScalewayUpdateSecurityGroup, securityGroupID string) error { + resp, err := s.PutResponse(s.computeAPI, fmt.Sprintf("security_groups/%s", securityGroupID), group) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusOK}, resp) + return err +} + +// PutSecurityGroupRule updates a SecurityGroupRule +func (s *ScalewayAPI) PutSecurityGroupRule(rules ScalewayNewSecurityGroupRule, securityGroupID, RuleID string) error { + resp, err := s.PutResponse(s.computeAPI, fmt.Sprintf("security_groups/%s/rules/%s", securityGroupID, RuleID), rules) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusOK}, resp) + return err +} + +// DeleteSecurityGroupRule deletes a SecurityGroupRule +func (s *ScalewayAPI) DeleteSecurityGroupRule(SecurityGroupID, RuleID string) error { + resp, err := s.DeleteResponse(s.computeAPI, fmt.Sprintf("security_groups/%s/rules/%s", SecurityGroupID, RuleID)) + if err != nil { + return err + } + defer resp.Body.Close() + + _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp) + return err +} + +// GetContainers returns a ScalewayGetContainers +func (s *ScalewayAPI) GetContainers() (*ScalewayGetContainers, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "containers", url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var containers ScalewayGetContainers + + if err = json.Unmarshal(body, &containers); err != nil { + return nil, err + } + return &containers, nil +} + +// GetContainerDatas returns a ScalewayGetContainerDatas +func (s *ScalewayAPI) GetContainerDatas(container string) (*ScalewayGetContainerDatas, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, fmt.Sprintf("containers/%s", container), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var datas ScalewayGetContainerDatas + + if err = json.Unmarshal(body, &datas); err != nil { + return nil, err + } + return &datas, nil +} + +// GetIPS returns a ScalewayGetIPS +func (s *ScalewayAPI) GetIPS() (*ScalewayGetIPS, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "ips", url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var ips ScalewayGetIPS + + if err = json.Unmarshal(body, &ips); err != nil { + return nil, err + } + return &ips, nil +} + +// NewIP returns a new IP +func (s *ScalewayAPI) NewIP() (*ScalewayGetIP, error) { + var orga struct { + Organization string `json:"organization"` + } + orga.Organization = s.Organization + resp, err := s.PostResponse(s.computeAPI, "ips", orga) + if err 
!= nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ body, err := s.handleHTTPError([]int{http.StatusCreated}, resp)
+ if err != nil {
+ return nil, err
+ }
+ var ip ScalewayGetIP
+
+ if err = json.Unmarshal(body, &ip); err != nil {
+ return nil, err
+ }
+ return &ip, nil
+}
+
+// AttachIP attachs an IP to a server
+func (s *ScalewayAPI) AttachIP(ipID, serverID string) error {
+ var update struct {
+ Address string `json:"address"`
+ ID string `json:"id"`
+ Reverse *string `json:"reverse"`
+ Organization string `json:"organization"`
+ Server string `json:"server"`
+ }
+
+ ip, err := s.GetIP(ipID)
+ if err != nil {
+ return err
+ }
+ update.Address = ip.IP.Address
+ update.ID = ip.IP.ID
+ update.Organization = ip.IP.Organization
+ update.Server = serverID
+ resp, err := s.PutResponse(s.computeAPI, fmt.Sprintf("ips/%s", ipID), update)
+ if err != nil {
+ return err
+ }
+ _, err = s.handleHTTPError([]int{http.StatusOK}, resp)
+ return err
+}
+
+// DetachIP detaches an IP from a server
+func (s *ScalewayAPI) DetachIP(ipID string) error {
+ ip, err := s.GetIP(ipID)
+ if err != nil {
+ return err
+ }
+ ip.IP.Server = nil
+ resp, err := s.PutResponse(s.computeAPI, fmt.Sprintf("ips/%s", ipID), ip.IP)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ _, err = s.handleHTTPError([]int{http.StatusOK}, resp)
+ return err
+}
+
+// DeleteIP deletes an IP
+func (s *ScalewayAPI) DeleteIP(ipID string) error {
+ resp, err := s.DeleteResponse(s.computeAPI, fmt.Sprintf("ips/%s", ipID))
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp)
+ return err
+}
+
+// GetIP returns a ScalewayGetIP
+func (s *ScalewayAPI) GetIP(ipID string) (*ScalewayGetIP, error) {
+ resp, err := s.GetResponsePaginate(s.computeAPI, fmt.Sprintf("ips/%s", ipID), url.Values{})
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ body, err := s.handleHTTPError([]int{http.StatusOK}, resp)
+ if err != nil {
+ return nil, err
+ }
+ var ip ScalewayGetIP
+
+ if err = json.Unmarshal(body, &ip); err != nil {
+ return nil, err
+ }
+ return &ip, nil
+}
+
+// GetQuotas returns a ScalewayGetQuotas
+func (s *ScalewayAPI) GetQuotas() (*ScalewayGetQuotas, error) {
+ resp, err := s.GetResponsePaginate(AccountAPI, fmt.Sprintf("organizations/%s/quotas", s.Organization), url.Values{})
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ body, err := s.handleHTTPError([]int{http.StatusOK}, resp)
+ if err != nil {
+ return nil, err
+ }
+ var quotas ScalewayGetQuotas
+
+ if err = json.Unmarshal(body, &quotas); err != nil {
+ return nil, err
+ }
+ return &quotas, nil
+}
+
+// GetBootscriptID returns exactly one bootscript matching
+func (s *ScalewayAPI) GetBootscriptID(needle, arch string) (string, error) {
+ // Parses optional type prefix, i.e: "bootscript:name" -> "name"
+ _, needle = parseNeedle(needle)
+
+ bootscripts, err := s.ResolveBootscript(needle)
+ if err != nil {
+ return "", fmt.Errorf("Unable to resolve bootscript %s: %s", needle, err)
+ }
+ bootscripts.FilterByArch(arch)
+ if len(bootscripts) == 1 {
+ return bootscripts[0].Identifier, nil
+ }
+ if len(bootscripts) == 0 {
+ return "", fmt.Errorf("No such bootscript: %s", needle)
+ }
+ return "", showResolverResults(needle, bootscripts)
+}
+
+func rootNetDial(network, addr string) (net.Conn, error) {
+ dialer := net.Dialer{
+ Timeout: 10 * time.Second,
+ KeepAlive: 10 * time.Second,
+ }
+
+ // bruteforce privileged ports
+ var localAddr net.Addr
+ var err error
+ for port := 1; port
<= 1024; port++ { + localAddr, err = net.ResolveTCPAddr("tcp", fmt.Sprintf(":%d", port)) + + // this should never happen + if err != nil { + return nil, err + } + + dialer.LocalAddr = localAddr + + conn, err := dialer.Dial(network, addr) + + // if err is nil, dialer.Dial succeed, so let's go + // else, err != nil, but we don't care + if err == nil { + return conn, nil + } + } + // if here, all privileged ports were tried without success + return nil, fmt.Errorf("bind: permission denied, are you root ?") +} + +// SetPassword register the password +func (s *ScalewayAPI) SetPassword(password string) { + s.password = password +} + +// GetMarketPlaceImages returns images from marketplace +func (s *ScalewayAPI) GetMarketPlaceImages(uuidImage string) (*MarketImages, error) { + resp, err := s.GetResponsePaginate(MarketplaceAPI, fmt.Sprintf("images/%s", uuidImage), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var ret MarketImages + + if uuidImage != "" { + ret.Images = make([]MarketImage, 1) + + var img MarketImage + + if err = json.Unmarshal(body, &img); err != nil { + return nil, err + } + ret.Images[0] = img + } else { + if err = json.Unmarshal(body, &ret); err != nil { + return nil, err + } + } + return &ret, nil +} + +// GetMarketPlaceImageVersions returns image version +func (s *ScalewayAPI) GetMarketPlaceImageVersions(uuidImage, uuidVersion string) (*MarketVersions, error) { + resp, err := s.GetResponsePaginate(MarketplaceAPI, fmt.Sprintf("images/%v/versions/%s", uuidImage, uuidVersion), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var ret MarketVersions + + if uuidImage != "" { + var version MarketVersion + ret.Versions = make([]MarketVersionDefinition, 1) + + if err = json.Unmarshal(body, &version); err != nil { + return nil, err + } + ret.Versions[0] = version.Version + } else { + if err = json.Unmarshal(body, &ret); err != nil { + return nil, err + } + } + return &ret, nil +} + +// GetMarketPlaceImageCurrentVersion return the image current version +func (s *ScalewayAPI) GetMarketPlaceImageCurrentVersion(uuidImage string) (*MarketVersion, error) { + resp, err := s.GetResponsePaginate(MarketplaceAPI, fmt.Sprintf("images/%v/versions/current", uuidImage), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var ret MarketVersion + + if err = json.Unmarshal(body, &ret); err != nil { + return nil, err + } + return &ret, nil +} + +// GetMarketPlaceLocalImages returns images from local region +func (s *ScalewayAPI) GetMarketPlaceLocalImages(uuidImage, uuidVersion, uuidLocalImage string) (*MarketLocalImages, error) { + resp, err := s.GetResponsePaginate(MarketplaceAPI, fmt.Sprintf("images/%v/versions/%s/local_images/%s", uuidImage, uuidVersion, uuidLocalImage), url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + var ret MarketLocalImages + if uuidLocalImage != "" { + var localImage MarketLocalImage + ret.LocalImages = make([]MarketLocalImageDefinition, 1) + + if err = json.Unmarshal(body, &localImage); err != nil { + return nil, err + } + ret.LocalImages[0] = 
localImage.LocalImages + } else { + if err = json.Unmarshal(body, &ret); err != nil { + return nil, err + } + } + return &ret, nil +} + +// PostMarketPlaceImage adds new image +func (s *ScalewayAPI) PostMarketPlaceImage(images MarketImage) error { + resp, err := s.PostResponse(MarketplaceAPI, "images/", images) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusAccepted}, resp) + return err +} + +// PostMarketPlaceImageVersion adds new image version +func (s *ScalewayAPI) PostMarketPlaceImageVersion(uuidImage string, version MarketVersion) error { + resp, err := s.PostResponse(MarketplaceAPI, fmt.Sprintf("images/%v/versions", uuidImage), version) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusAccepted}, resp) + return err +} + +// PostMarketPlaceLocalImage adds new local image +func (s *ScalewayAPI) PostMarketPlaceLocalImage(uuidImage, uuidVersion, uuidLocalImage string, local MarketLocalImage) error { + resp, err := s.PostResponse(MarketplaceAPI, fmt.Sprintf("images/%v/versions/%s/local_images/%v", uuidImage, uuidVersion, uuidLocalImage), local) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusAccepted}, resp) + return err +} + +// PutMarketPlaceImage updates image +func (s *ScalewayAPI) PutMarketPlaceImage(uudiImage string, images MarketImage) error { + resp, err := s.PutResponse(MarketplaceAPI, fmt.Sprintf("images/%v", uudiImage), images) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusOK}, resp) + return err +} + +// PutMarketPlaceImageVersion updates image version +func (s *ScalewayAPI) PutMarketPlaceImageVersion(uuidImage, uuidVersion string, version MarketVersion) error { + resp, err := s.PutResponse(MarketplaceAPI, fmt.Sprintf("images/%v/versions/%v", uuidImage, uuidVersion), version) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusOK}, resp) + return err +} + +// PutMarketPlaceLocalImage updates local image +func (s *ScalewayAPI) PutMarketPlaceLocalImage(uuidImage, uuidVersion, uuidLocalImage string, local MarketLocalImage) error { + resp, err := s.PostResponse(MarketplaceAPI, fmt.Sprintf("images/%v/versions/%s/local_images/%v", uuidImage, uuidVersion, uuidLocalImage), local) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusOK}, resp) + return err +} + +// DeleteMarketPlaceImage deletes image +func (s *ScalewayAPI) DeleteMarketPlaceImage(uudImage string) error { + resp, err := s.DeleteResponse(MarketplaceAPI, fmt.Sprintf("images/%v", uudImage)) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp) + return err +} + +// DeleteMarketPlaceImageVersion delete image version +func (s *ScalewayAPI) DeleteMarketPlaceImageVersion(uuidImage, uuidVersion string) error { + resp, err := s.DeleteResponse(MarketplaceAPI, fmt.Sprintf("images/%v/versions/%v", uuidImage, uuidVersion)) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp) + return err +} + +// DeleteMarketPlaceLocalImage deletes local image +func (s *ScalewayAPI) DeleteMarketPlaceLocalImage(uuidImage, uuidVersion, uuidLocalImage string) error { + resp, err := s.DeleteResponse(MarketplaceAPI, fmt.Sprintf("images/%v/versions/%s/local_images/%v", 
uuidImage, uuidVersion, uuidLocalImage)) + if err != nil { + return err + } + defer resp.Body.Close() + _, err = s.handleHTTPError([]int{http.StatusNoContent}, resp) + return err +} + +// ResolveTTYUrl return an URL to get a tty +func (s *ScalewayAPI) ResolveTTYUrl() string { + switch s.Region { + case "par1", "": + return "https://tty-par1.scaleway.com/v2/" + case "ams1": + return "https://tty-ams1.scaleway.com" + } + return "" +} + +// GetProductServers Fetches all the server type and their constraints from the Products API +func (s *ScalewayAPI) GetProductsServers() (*ScalewayProductsServers, error) { + resp, err := s.GetResponsePaginate(s.computeAPI, "products/servers", url.Values{}) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + body, err := s.handleHTTPError([]int{http.StatusOK}, resp) + if err != nil { + return nil, err + } + + var productServers ScalewayProductsServers + if err = json.Unmarshal(body, &productServers); err != nil { + return nil, err + } + + return &productServers, nil +} diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/api/cache.go b/vendor/github.com/scaleway/scaleway-cli/pkg/api/cache.go new file mode 100644 index 000000000..0a57796e9 --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/api/cache.go @@ -0,0 +1,816 @@ +// Copyright (C) 2015 Scaleway. All rights reserved. +// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE.md file. + +package api + +import ( + "encoding/json" + "fmt" + "io/ioutil" + "os" + "path/filepath" + "regexp" + "strings" + "sync" + + "github.com/moul/anonuuid" + "github.com/renstrom/fuzzysearch/fuzzy" +) + +const ( + // CacheRegion permits to access at the region field + CacheRegion = iota + // CacheArch permits to access at the arch field + CacheArch + // CacheOwner permits to access at the owner field + CacheOwner + // CacheTitle permits to access at the title field + CacheTitle + // CacheMarketPlaceUUID is used to determine the UUID of local images + CacheMarketPlaceUUID + // CacheMaxfield is used to determine the size of array + CacheMaxfield +) + +// ScalewayCache is used not to query the API to resolve full identifiers +type ScalewayCache struct { + // Images contains names of Scaleway images indexed by identifier + Images map[string][CacheMaxfield]string `json:"images"` + + // Snapshots contains names of Scaleway snapshots indexed by identifier + Snapshots map[string][CacheMaxfield]string `json:"snapshots"` + + // Volumes contains names of Scaleway volumes indexed by identifier + Volumes map[string][CacheMaxfield]string `json:"volumes"` + + // Bootscripts contains names of Scaleway bootscripts indexed by identifier + Bootscripts map[string][CacheMaxfield]string `json:"bootscripts"` + + // Servers contains names of Scaleway servers indexed by identifier + Servers map[string][CacheMaxfield]string `json:"servers"` + + // Path is the path to the cache file + Path string `json:"-"` + + // Modified tells if the cache needs to be overwritten or not + Modified bool `json:"-"` + + // Lock allows ScalewayCache to be used concurrently + Lock sync.Mutex `json:"-"` + + hookSave func() +} + +const ( + // IdentifierUnknown is used when we don't know explicitly the type key of the object (used for nil comparison) + IdentifierUnknown = 1 << iota + // IdentifierServer is the type key of cached server objects + IdentifierServer + // IdentifierImage is the type key of cached image objects + IdentifierImage + // IdentifierSnapshot is the type key of cached snapshot 
objects + IdentifierSnapshot + // IdentifierBootscript is the type key of cached bootscript objects + IdentifierBootscript + // IdentifierVolume is the type key of cached volume objects + IdentifierVolume +) + +// ScalewayResolverResult is a structure containing human-readable information +// about resolver results. This structure is used to display the user choices. +type ScalewayResolverResult struct { + Identifier string + Type int + Name string + Arch string + Needle string + RankMatch int + Region string +} + +// ScalewayResolverResults is a list of `ScalewayResolverResult` +type ScalewayResolverResults []ScalewayResolverResult + +// NewScalewayResolverResult returns a new ScalewayResolverResult +func NewScalewayResolverResult(Identifier, Name, Arch, Region string, Type int) (ScalewayResolverResult, error) { + if err := anonuuid.IsUUID(Identifier); err != nil { + return ScalewayResolverResult{}, err + } + return ScalewayResolverResult{ + Identifier: Identifier, + Type: Type, + Name: Name, + Arch: Arch, + Region: Region, + }, nil +} + +func (s ScalewayResolverResults) Len() int { + return len(s) +} + +func (s ScalewayResolverResults) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s ScalewayResolverResults) Less(i, j int) bool { + return s[i].RankMatch < s[j].RankMatch +} + +// TruncIdentifier returns first 8 characters of an Identifier (UUID) +func (s *ScalewayResolverResult) TruncIdentifier() string { + return s.Identifier[:8] +} + +func identifierTypeName(kind int) string { + switch kind { + case IdentifierServer: + return "Server" + case IdentifierImage: + return "Image" + case IdentifierSnapshot: + return "Snapshot" + case IdentifierVolume: + return "Volume" + case IdentifierBootscript: + return "Bootscript" + } + return "" +} + +// CodeName returns a full resource name with typed prefix +func (s *ScalewayResolverResult) CodeName() string { + name := strings.ToLower(s.Name) + name = regexp.MustCompile(`[^a-z0-9-]`).ReplaceAllString(name, "-") + name = regexp.MustCompile(`--+`).ReplaceAllString(name, "-") + name = strings.Trim(name, "-") + + return fmt.Sprintf("%s:%s", strings.ToLower(identifierTypeName(s.Type)), name) +} + +// FilterByArch deletes the elements which not match with arch +func (s *ScalewayResolverResults) FilterByArch(arch string) { +REDO: + for i := range *s { + if (*s)[i].Arch != arch { + (*s)[i] = (*s)[len(*s)-1] + *s = (*s)[:len(*s)-1] + goto REDO + } + } +} + +// NewScalewayCache loads a per-user cache +func NewScalewayCache(hookSave func()) (*ScalewayCache, error) { + var cache ScalewayCache + + cache.hookSave = hookSave + homeDir := os.Getenv("HOME") // *nix + if homeDir == "" { // Windows + homeDir = os.Getenv("USERPROFILE") + } + if homeDir == "" { + homeDir = "/tmp" + } + cachePath := filepath.Join(homeDir, ".scw-cache.db") + cache.Path = cachePath + _, err := os.Stat(cachePath) + if os.IsNotExist(err) { + cache.Clear() + return &cache, nil + } else if err != nil { + return nil, err + } + file, err := ioutil.ReadFile(cachePath) + if err != nil { + return nil, err + } + err = json.Unmarshal(file, &cache) + if err != nil { + // fix compatibility with older version + if err = os.Remove(cachePath); err != nil { + return nil, err + } + cache.Clear() + return &cache, nil + } + if cache.Images == nil { + cache.Images = make(map[string][CacheMaxfield]string) + } + if cache.Snapshots == nil { + cache.Snapshots = make(map[string][CacheMaxfield]string) + } + if cache.Volumes == nil { + cache.Volumes = make(map[string][CacheMaxfield]string) + } + if 
cache.Servers == nil { + cache.Servers = make(map[string][CacheMaxfield]string) + } + if cache.Bootscripts == nil { + cache.Bootscripts = make(map[string][CacheMaxfield]string) + } + return &cache, nil +} + +// Clear removes all information from the cache +func (c *ScalewayCache) Clear() { + c.Images = make(map[string][CacheMaxfield]string) + c.Snapshots = make(map[string][CacheMaxfield]string) + c.Volumes = make(map[string][CacheMaxfield]string) + c.Bootscripts = make(map[string][CacheMaxfield]string) + c.Servers = make(map[string][CacheMaxfield]string) + c.Modified = true +} + +// Flush flushes the cache database +func (c *ScalewayCache) Flush() error { + return os.Remove(c.Path) +} + +// Save atomically overwrites the current cache database +func (c *ScalewayCache) Save() error { + c.Lock.Lock() + defer c.Lock.Unlock() + + c.hookSave() + if c.Modified { + file, err := ioutil.TempFile(filepath.Dir(c.Path), filepath.Base(c.Path)) + if err != nil { + return err + } + + if err := json.NewEncoder(file).Encode(c); err != nil { + file.Close() + os.Remove(file.Name()) + return err + } + + file.Close() + if err := os.Rename(file.Name(), c.Path); err != nil { + os.Remove(file.Name()) + return err + } + } + return nil +} + +// ComputeRankMatch fills `ScalewayResolverResult.RankMatch` with its `fuzzy` score +func (s *ScalewayResolverResult) ComputeRankMatch(needle string) { + s.Needle = needle + s.RankMatch = fuzzy.RankMatch(needle, s.Name) +} + +// LookUpImages attempts to return identifiers matching a pattern +func (c *ScalewayCache) LookUpImages(needle string, acceptUUID bool) (ScalewayResolverResults, error) { + c.Lock.Lock() + defer c.Lock.Unlock() + + var res ScalewayResolverResults + var exactMatches ScalewayResolverResults + + if acceptUUID && anonuuid.IsUUID(needle) == nil { + if fields, ok := c.Images[needle]; ok { + entry, err := NewScalewayResolverResult(needle, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierImage) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + needle = regexp.MustCompile(`^user/`).ReplaceAllString(needle, "") + // FIXME: if 'user/' is in needle, only watch for a user image + nameRegex := regexp.MustCompile(`(?i)` + regexp.MustCompile(`[_-]`).ReplaceAllString(needle, ".*")) + for identifier, fields := range c.Images { + if fields[CacheTitle] == needle { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierImage) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + exactMatches = append(exactMatches, entry) + } + if strings.HasPrefix(identifier, needle) || nameRegex.MatchString(fields[CacheTitle]) { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierImage) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } else if strings.HasPrefix(fields[CacheMarketPlaceUUID], needle) || nameRegex.MatchString(fields[CacheMarketPlaceUUID]) { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierImage) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + if len(exactMatches) == 1 { + return exactMatches, nil + } + + return removeDuplicatesResults(res), nil +} + +// 
LookUpSnapshots attempts to return identifiers matching a pattern +func (c *ScalewayCache) LookUpSnapshots(needle string, acceptUUID bool) (ScalewayResolverResults, error) { + c.Lock.Lock() + defer c.Lock.Unlock() + + var res ScalewayResolverResults + var exactMatches ScalewayResolverResults + + if acceptUUID && anonuuid.IsUUID(needle) == nil { + if fields, ok := c.Snapshots[needle]; ok { + entry, err := NewScalewayResolverResult(needle, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierSnapshot) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + needle = regexp.MustCompile(`^user/`).ReplaceAllString(needle, "") + nameRegex := regexp.MustCompile(`(?i)` + regexp.MustCompile(`[_-]`).ReplaceAllString(needle, ".*")) + for identifier, fields := range c.Snapshots { + if fields[CacheTitle] == needle { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierSnapshot) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + exactMatches = append(exactMatches, entry) + } + if strings.HasPrefix(identifier, needle) || nameRegex.MatchString(fields[CacheTitle]) { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierSnapshot) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + if len(exactMatches) == 1 { + return exactMatches, nil + } + + return removeDuplicatesResults(res), nil +} + +// LookUpVolumes attempts to return identifiers matching a pattern +func (c *ScalewayCache) LookUpVolumes(needle string, acceptUUID bool) (ScalewayResolverResults, error) { + c.Lock.Lock() + defer c.Lock.Unlock() + + var res ScalewayResolverResults + var exactMatches ScalewayResolverResults + + if acceptUUID && anonuuid.IsUUID(needle) == nil { + if fields, ok := c.Volumes[needle]; ok { + entry, err := NewScalewayResolverResult(needle, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierVolume) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + nameRegex := regexp.MustCompile(`(?i)` + regexp.MustCompile(`[_-]`).ReplaceAllString(needle, ".*")) + for identifier, fields := range c.Volumes { + if fields[CacheTitle] == needle { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierVolume) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + exactMatches = append(exactMatches, entry) + } + if strings.HasPrefix(identifier, needle) || nameRegex.MatchString(fields[CacheTitle]) { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierVolume) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + if len(exactMatches) == 1 { + return exactMatches, nil + } + + return removeDuplicatesResults(res), nil +} + +// LookUpBootscripts attempts to return identifiers matching a pattern +func (c *ScalewayCache) LookUpBootscripts(needle string, acceptUUID bool) (ScalewayResolverResults, error) { + c.Lock.Lock() + defer c.Lock.Unlock() + + var res ScalewayResolverResults + var exactMatches ScalewayResolverResults + + if acceptUUID && 
anonuuid.IsUUID(needle) == nil { + if fields, ok := c.Bootscripts[needle]; ok { + entry, err := NewScalewayResolverResult(needle, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierBootscript) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + nameRegex := regexp.MustCompile(`(?i)` + regexp.MustCompile(`[_-]`).ReplaceAllString(needle, ".*")) + for identifier, fields := range c.Bootscripts { + if fields[CacheTitle] == needle { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierBootscript) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + exactMatches = append(exactMatches, entry) + } + if strings.HasPrefix(identifier, needle) || nameRegex.MatchString(fields[CacheTitle]) { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierBootscript) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + if len(exactMatches) == 1 { + return exactMatches, nil + } + + return removeDuplicatesResults(res), nil +} + +// LookUpServers attempts to return identifiers matching a pattern +func (c *ScalewayCache) LookUpServers(needle string, acceptUUID bool) (ScalewayResolverResults, error) { + c.Lock.Lock() + defer c.Lock.Unlock() + + var res ScalewayResolverResults + var exactMatches ScalewayResolverResults + + if acceptUUID && anonuuid.IsUUID(needle) == nil { + if fields, ok := c.Servers[needle]; ok { + entry, err := NewScalewayResolverResult(needle, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierServer) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + nameRegex := regexp.MustCompile(`(?i)` + regexp.MustCompile(`[_-]`).ReplaceAllString(needle, ".*")) + for identifier, fields := range c.Servers { + if fields[CacheTitle] == needle { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierServer) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + exactMatches = append(exactMatches, entry) + } + if strings.HasPrefix(identifier, needle) || nameRegex.MatchString(fields[CacheTitle]) { + entry, err := NewScalewayResolverResult(identifier, fields[CacheTitle], fields[CacheArch], fields[CacheRegion], IdentifierServer) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + res = append(res, entry) + } + } + + if len(exactMatches) == 1 { + return exactMatches, nil + } + + return removeDuplicatesResults(res), nil +} + +// removeDuplicatesResults transforms an array into a unique array +func removeDuplicatesResults(elements ScalewayResolverResults) ScalewayResolverResults { + encountered := map[string]ScalewayResolverResult{} + + // Create a map of all unique elements. + for v := range elements { + encountered[elements[v].Identifier] = elements[v] + } + + // Place all keys from the map into a slice. 
+ results := ScalewayResolverResults{} + for _, result := range encountered { + results = append(results, result) + } + return results +} + +// parseNeedle parses a user needle and try to extract a forced object type +// i.e: +// - server:blah-blah -> kind=server, needle=blah-blah +// - blah-blah -> kind="", needle=blah-blah +// - not-existing-type:blah-blah +func parseNeedle(input string) (identifierType int, needle string) { + parts := strings.Split(input, ":") + if len(parts) == 2 { + switch parts[0] { + case "server": + return IdentifierServer, parts[1] + case "image": + return IdentifierImage, parts[1] + case "snapshot": + return IdentifierSnapshot, parts[1] + case "bootscript": + return IdentifierBootscript, parts[1] + case "volume": + return IdentifierVolume, parts[1] + } + } + return IdentifierUnknown, input +} + +// LookUpIdentifiers attempts to return identifiers matching a pattern +func (c *ScalewayCache) LookUpIdentifiers(needle string) (ScalewayResolverResults, error) { + results := ScalewayResolverResults{} + + identifierType, needle := parseNeedle(needle) + + if identifierType&(IdentifierUnknown|IdentifierServer) > 0 { + servers, err := c.LookUpServers(needle, false) + if err != nil { + return ScalewayResolverResults{}, err + } + for _, result := range servers { + entry, err := NewScalewayResolverResult(result.Identifier, result.Name, result.Arch, result.Region, IdentifierServer) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + results = append(results, entry) + } + } + + if identifierType&(IdentifierUnknown|IdentifierImage) > 0 { + images, err := c.LookUpImages(needle, false) + if err != nil { + return ScalewayResolverResults{}, err + } + for _, result := range images { + entry, err := NewScalewayResolverResult(result.Identifier, result.Name, result.Arch, result.Region, IdentifierImage) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + results = append(results, entry) + } + } + + if identifierType&(IdentifierUnknown|IdentifierSnapshot) > 0 { + snapshots, err := c.LookUpSnapshots(needle, false) + if err != nil { + return ScalewayResolverResults{}, err + } + for _, result := range snapshots { + entry, err := NewScalewayResolverResult(result.Identifier, result.Name, result.Arch, result.Region, IdentifierSnapshot) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + results = append(results, entry) + } + } + + if identifierType&(IdentifierUnknown|IdentifierVolume) > 0 { + volumes, err := c.LookUpVolumes(needle, false) + if err != nil { + return ScalewayResolverResults{}, err + } + for _, result := range volumes { + entry, err := NewScalewayResolverResult(result.Identifier, result.Name, result.Arch, result.Region, IdentifierVolume) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + results = append(results, entry) + } + } + + if identifierType&(IdentifierUnknown|IdentifierBootscript) > 0 { + bootscripts, err := c.LookUpBootscripts(needle, false) + if err != nil { + return ScalewayResolverResults{}, err + } + for _, result := range bootscripts { + entry, err := NewScalewayResolverResult(result.Identifier, result.Name, result.Arch, result.Region, IdentifierBootscript) + if err != nil { + return ScalewayResolverResults{}, err + } + entry.ComputeRankMatch(needle) + results = append(results, entry) + } + } + return results, nil +} + +// InsertServer registers a server in the cache +func (c 
*ScalewayCache) InsertServer(identifier, region, arch, owner, name string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + fields, exists := c.Servers[identifier] + if !exists || fields[CacheTitle] != name { + c.Servers[identifier] = [CacheMaxfield]string{region, arch, owner, name} + c.Modified = true + } +} + +// RemoveServer removes a server from the cache +func (c *ScalewayCache) RemoveServer(identifier string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + delete(c.Servers, identifier) + c.Modified = true +} + +// ClearServers removes all servers from the cache +func (c *ScalewayCache) ClearServers() { + c.Lock.Lock() + defer c.Lock.Unlock() + + c.Servers = make(map[string][CacheMaxfield]string) + c.Modified = true +} + +// InsertImage registers an image in the cache +func (c *ScalewayCache) InsertImage(identifier, region, arch, owner, name, marketPlaceUUID string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + fields, exists := c.Images[identifier] + if !exists || fields[CacheTitle] != name { + c.Images[identifier] = [CacheMaxfield]string{region, arch, owner, name, marketPlaceUUID} + c.Modified = true + } +} + +// RemoveImage removes a server from the cache +func (c *ScalewayCache) RemoveImage(identifier string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + delete(c.Images, identifier) + c.Modified = true +} + +// ClearImages removes all images from the cache +func (c *ScalewayCache) ClearImages() { + c.Lock.Lock() + defer c.Lock.Unlock() + + c.Images = make(map[string][CacheMaxfield]string) + c.Modified = true +} + +// InsertSnapshot registers an snapshot in the cache +func (c *ScalewayCache) InsertSnapshot(identifier, region, arch, owner, name string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + fields, exists := c.Snapshots[identifier] + if !exists || fields[CacheTitle] != name { + c.Snapshots[identifier] = [CacheMaxfield]string{region, arch, owner, name} + c.Modified = true + } +} + +// RemoveSnapshot removes a server from the cache +func (c *ScalewayCache) RemoveSnapshot(identifier string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + delete(c.Snapshots, identifier) + c.Modified = true +} + +// ClearSnapshots removes all snapshots from the cache +func (c *ScalewayCache) ClearSnapshots() { + c.Lock.Lock() + defer c.Lock.Unlock() + + c.Snapshots = make(map[string][CacheMaxfield]string) + c.Modified = true +} + +// InsertVolume registers an volume in the cache +func (c *ScalewayCache) InsertVolume(identifier, region, arch, owner, name string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + fields, exists := c.Volumes[identifier] + if !exists || fields[CacheTitle] != name { + c.Volumes[identifier] = [CacheMaxfield]string{region, arch, owner, name} + c.Modified = true + } +} + +// RemoveVolume removes a server from the cache +func (c *ScalewayCache) RemoveVolume(identifier string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + delete(c.Volumes, identifier) + c.Modified = true +} + +// ClearVolumes removes all volumes from the cache +func (c *ScalewayCache) ClearVolumes() { + c.Lock.Lock() + defer c.Lock.Unlock() + + c.Volumes = make(map[string][CacheMaxfield]string) + c.Modified = true +} + +// InsertBootscript registers an bootscript in the cache +func (c *ScalewayCache) InsertBootscript(identifier, region, arch, owner, name string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + fields, exists := c.Bootscripts[identifier] + if !exists || fields[CacheTitle] != name { + c.Bootscripts[identifier] = [CacheMaxfield]string{region, arch, owner, name} + c.Modified = true + } +} + +// 
RemoveBootscript removes a bootscript from the cache +func (c *ScalewayCache) RemoveBootscript(identifier string) { + c.Lock.Lock() + defer c.Lock.Unlock() + + delete(c.Bootscripts, identifier) + c.Modified = true +} + +// ClearBootscripts removes all bootscripts from the cache +func (c *ScalewayCache) ClearBootscripts() { + c.Lock.Lock() + defer c.Lock.Unlock() + + c.Bootscripts = make(map[string][CacheMaxfield]string) + c.Modified = true +} + +// GetNbServers returns the number of servers in the cache +func (c *ScalewayCache) GetNbServers() int { + c.Lock.Lock() + defer c.Lock.Unlock() + + return len(c.Servers) +} + +// GetNbImages returns the number of images in the cache +func (c *ScalewayCache) GetNbImages() int { + c.Lock.Lock() + defer c.Lock.Unlock() + + return len(c.Images) +} + +// GetNbSnapshots returns the number of snapshots in the cache +func (c *ScalewayCache) GetNbSnapshots() int { + c.Lock.Lock() + defer c.Lock.Unlock() + + return len(c.Snapshots) +} + +// GetNbVolumes returns the number of volumes in the cache +func (c *ScalewayCache) GetNbVolumes() int { + c.Lock.Lock() + defer c.Lock.Unlock() + + return len(c.Volumes) +} + +// GetNbBootscripts returns the number of bootscripts in the cache +func (c *ScalewayCache) GetNbBootscripts() int { + c.Lock.Lock() + defer c.Lock.Unlock() + + return len(c.Bootscripts) +} diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/api/helpers.go b/vendor/github.com/scaleway/scaleway-cli/pkg/api/helpers.go new file mode 100644 index 000000000..dcd50d738 --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/api/helpers.go @@ -0,0 +1,719 @@ +// Copyright (C) 2015 Scaleway. All rights reserved. +// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE.md file. + +package api + +import ( + "errors" + "fmt" + "math" + "os" + "sort" + "strings" + "sync" + "time" + + "github.com/docker/docker/pkg/namesgenerator" + "github.com/dustin/go-humanize" + "github.com/moul/anonuuid" + "github.com/scaleway/scaleway-cli/pkg/utils" + "github.com/sirupsen/logrus" + log "github.com/sirupsen/logrus" +) + +// ScalewayResolvedIdentifier represents a list of matching identifier for a specifier pattern +type ScalewayResolvedIdentifier struct { + // Identifiers holds matching identifiers + Identifiers ScalewayResolverResults + + // Needle is the criteria used to lookup identifiers + Needle string +} + +// ScalewayImageInterface is an interface to multiple Scaleway items +type ScalewayImageInterface struct { + CreationDate time.Time + Identifier string + Name string + Tag string + VirtualSize uint64 + Public bool + Type string + Organization string + Archs []string + Region []string +} + +// ResolveGateway tries to resolve a server public ip address, else returns the input string, i.e. 
IPv4, hostname +func ResolveGateway(api *ScalewayAPI, gateway string) (string, error) { + if gateway == "" { + return "", nil + } + + // Parses optional type prefix, i.e: "server:name" -> "name" + _, gateway = parseNeedle(gateway) + + servers, err := api.ResolveServer(gateway) + if err != nil { + return "", err + } + + if len(servers) == 0 { + return gateway, nil + } + + if len(servers) > 1 { + return "", showResolverResults(gateway, servers) + } + + // if len(servers) == 1 { + server, err := api.GetServer(servers[0].Identifier) + if err != nil { + return "", err + } + return server.PublicAddress.IP, nil +} + +// CreateVolumeFromHumanSize creates a volume on the API with a human readable size +func CreateVolumeFromHumanSize(api *ScalewayAPI, size string) (*string, error) { + bytes, err := humanize.ParseBytes(size) + if err != nil { + return nil, err + } + + var newVolume ScalewayVolumeDefinition + newVolume.Name = size + newVolume.Size = bytes + newVolume.Type = "l_ssd" + + volumeID, err := api.PostVolume(newVolume) + if err != nil { + return nil, err + } + + return &volumeID, nil +} + +// VolumesFromSize returns a string of standard sized volumes from a given size +func VolumesFromSize(size uint64) string { + const DefaultVolumeSize float64 = 50000000000 + StdVolumeSizes := []struct { + kind string + capacity float64 + }{ + {"150G", 150000000000}, + {"100G", 100000000000}, + {"50G", 50000000000}, + } + + RequiredSize := float64(size) - DefaultVolumeSize + Volumes := "" + for _, v := range StdVolumeSizes { + q := RequiredSize / v.capacity + r := math.Mod(RequiredSize, v.capacity) + RequiredSize = r + + if q > 0 { + Volumes += strings.Repeat(v.kind+" ", int(q)) + } + if r == 0 { + break + } + } + + return strings.TrimSpace(Volumes) +} + +// fillIdentifierCache fills the cache by fetching from the API +func fillIdentifierCache(api *ScalewayAPI, identifierType int) { + log.Debugf("Filling the cache") + var wg sync.WaitGroup + wg.Add(5) + go func() { + if identifierType&(IdentifierUnknown|IdentifierServer) > 0 { + api.GetServers(true, 0) + } + wg.Done() + }() + go func() { + if identifierType&(IdentifierUnknown|IdentifierImage) > 0 { + api.GetImages() + } + wg.Done() + }() + go func() { + if identifierType&(IdentifierUnknown|IdentifierSnapshot) > 0 { + api.GetSnapshots() + } + wg.Done() + }() + go func() { + if identifierType&(IdentifierUnknown|IdentifierVolume) > 0 { + api.GetVolumes() + } + wg.Done() + }() + go func() { + if identifierType&(IdentifierUnknown|IdentifierBootscript) > 0 { + api.GetBootscripts() + } + wg.Done() + }() + wg.Wait() +} + +// GetIdentifier returns a an identifier if the resolved needles only match one element, else, it exists the program +func GetIdentifier(api *ScalewayAPI, needle string) (*ScalewayResolverResult, error) { + idents, err := ResolveIdentifier(api, needle) + if err != nil { + return nil, err + } + + if len(idents) == 1 { + return &idents[0], nil + } + if len(idents) == 0 { + return nil, fmt.Errorf("No such identifier: %s", needle) + } + + sort.Sort(idents) + for _, identifier := range idents { + // FIXME: also print the name + fmt.Fprintf(os.Stderr, "- %s\n", identifier.Identifier) + } + return nil, fmt.Errorf("Too many candidates for %s (%d)", needle, len(idents)) +} + +// ResolveIdentifier resolves needle provided by the user +func ResolveIdentifier(api *ScalewayAPI, needle string) (ScalewayResolverResults, error) { + idents, err := api.Cache.LookUpIdentifiers(needle) + if err != nil { + return idents, err + } + if len(idents) > 0 { + return idents, 
nil + } + + identifierType, _ := parseNeedle(needle) + fillIdentifierCache(api, identifierType) + + return api.Cache.LookUpIdentifiers(needle) +} + +// ResolveIdentifiers resolves needles provided by the user +func ResolveIdentifiers(api *ScalewayAPI, needles []string, out chan ScalewayResolvedIdentifier) { + // first attempt, only lookup from the cache + var unresolved []string + for _, needle := range needles { + idents, err := api.Cache.LookUpIdentifiers(needle) + if err != nil { + api.Logger.Fatalf("%s", err) + } + if len(idents) == 0 { + unresolved = append(unresolved, needle) + } else { + out <- ScalewayResolvedIdentifier{ + Identifiers: idents, + Needle: needle, + } + } + } + // fill the cache by fetching from the API and resolve missing identifiers + if len(unresolved) > 0 { + // compute identifierType: + // if identifierType is the same for every unresolved needle, + // we use it directly, else, we choose IdentifierUnknown to + // fulfill every types of cache + identifierType, _ := parseNeedle(unresolved[0]) + for _, needle := range unresolved { + newIdentifierType, _ := parseNeedle(needle) + if identifierType != newIdentifierType { + identifierType = IdentifierUnknown + break + } + } + + // fill all the cache + fillIdentifierCache(api, identifierType) + + // lookup again in the cache + for _, needle := range unresolved { + idents, err := api.Cache.LookUpIdentifiers(needle) + if err != nil { + api.Logger.Fatalf("%s", err) + } + out <- ScalewayResolvedIdentifier{ + Identifiers: idents, + Needle: needle, + } + } + } + + close(out) +} + +// InspectIdentifierResult is returned by `InspectIdentifiers` and contains the inspected `Object` with its `Type` +type InspectIdentifierResult struct { + Type int + Object interface{} +} + +// InspectIdentifiers inspects identifiers concurrently +func InspectIdentifiers(api *ScalewayAPI, ci chan ScalewayResolvedIdentifier, cj chan InspectIdentifierResult, arch string) { + var wg sync.WaitGroup + for { + idents, ok := <-ci + if !ok { + break + } + idents.Identifiers = FilterImagesByArch(idents.Identifiers, arch) + idents.Identifiers = FilterImagesByRegion(idents.Identifiers, api.Region) + if len(idents.Identifiers) != 1 { + if len(idents.Identifiers) == 0 { + log.Errorf("Unable to resolve identifier %s", idents.Needle) + } else { + logrus.Fatal(showResolverResults(idents.Needle, idents.Identifiers)) + } + } else { + ident := idents.Identifiers[0] + wg.Add(1) + go func() { + var obj interface{} + var err error + + switch ident.Type { + case IdentifierServer: + obj, err = api.GetServer(ident.Identifier) + case IdentifierImage: + obj, err = api.GetImage(ident.Identifier) + case IdentifierSnapshot: + obj, err = api.GetSnapshot(ident.Identifier) + case IdentifierVolume: + obj, err = api.GetVolume(ident.Identifier) + case IdentifierBootscript: + obj, err = api.GetBootscript(ident.Identifier) + } + if err == nil && obj != nil { + cj <- InspectIdentifierResult{ + Type: ident.Type, + Object: obj, + } + } + wg.Done() + }() + } + } + wg.Wait() + close(cj) +} + +// ConfigCreateServer represents the options sent to CreateServer and defining a server +type ConfigCreateServer struct { + ImageName string + Name string + Bootscript string + Env string + AdditionalVolumes string + IP string + CommercialType string + DynamicIPRequired bool + EnableIPV6 bool +} + +// Return offer from any of the product name or alternate names +func OfferNameFromName(name string, products *ScalewayProductsServers) (*ProductServer, error) { + offer, ok := products.Servers[name] + if ok 
{ + return &offer, nil + } + + for _, v := range products.Servers { + for _, alt := range v.AltNames { + alt := strings.ToUpper(alt) + if alt == name { + return &v, nil + } + } + } + + return nil, fmt.Errorf("Unknow commercial type: %v", name) +} + +// CreateServer creates a server using API based on typical server fields +func CreateServer(api *ScalewayAPI, c *ConfigCreateServer) (string, error) { + commercialType := os.Getenv("SCW_COMMERCIAL_TYPE") + if commercialType == "" { + commercialType = c.CommercialType + } + if len(commercialType) < 2 { + return "", errors.New("Invalid commercial type") + } + + if c.Name == "" { + c.Name = strings.Replace(namesgenerator.GetRandomName(0), "_", "-", -1) + } + + var server ScalewayServerDefinition + + server.CommercialType = strings.ToUpper(commercialType) + server.Volumes = make(map[string]string) + server.DynamicIPRequired = &c.DynamicIPRequired + server.EnableIPV6 = c.EnableIPV6 + if commercialType == "" { + return "", errors.New("You need to specify a commercial-type") + } + if c.IP != "" { + if anonuuid.IsUUID(c.IP) == nil { + server.PublicIP = c.IP + } else { + ips, err := api.GetIPS() + if err != nil { + return "", err + } + for _, ip := range ips.IPS { + if ip.Address == c.IP { + server.PublicIP = ip.ID + break + } + } + if server.PublicIP == "" { + return "", fmt.Errorf("IP address %v not found", c.IP) + } + } + } + server.Tags = []string{} + if c.Env != "" { + server.Tags = strings.Split(c.Env, " ") + } + + products, err := api.GetProductsServers() + if err != nil { + return "", fmt.Errorf("Unable to fetch products list from the Scaleway API: %v", err) + } + offer, err := OfferNameFromName(server.CommercialType, products) + if err != nil { + return "", fmt.Errorf("Unknow commercial type %v: %v", server.CommercialType, err) + } + if offer.VolumesConstraint.MinSize > 0 && c.AdditionalVolumes == "" { + c.AdditionalVolumes = VolumesFromSize(offer.VolumesConstraint.MinSize) + log.Debugf("%s needs at least %s. 
Automatically creates the following volumes: %s", + server.CommercialType, humanize.Bytes(offer.VolumesConstraint.MinSize), c.AdditionalVolumes) + } + if c.AdditionalVolumes != "" { + volumes := strings.Split(c.AdditionalVolumes, " ") + for i := range volumes { + volumeID, err := CreateVolumeFromHumanSize(api, volumes[i]) + if err != nil { + return "", err + } + + volumeIDx := fmt.Sprintf("%d", i+1) + server.Volumes[volumeIDx] = *volumeID + } + } + + arch := os.Getenv("SCW_TARGET_ARCH") + if arch == "" { + arch = offer.Arch + } + imageIdentifier := &ScalewayImageIdentifier{ + Arch: arch, + } + server.Name = c.Name + inheritingVolume := false + _, err = humanize.ParseBytes(c.ImageName) + if err == nil { + // Create a new root volume + volumeID, errCreateVol := CreateVolumeFromHumanSize(api, c.ImageName) + if errCreateVol != nil { + return "", errCreateVol + } + server.Volumes["0"] = *volumeID + } else { + // Use an existing image + inheritingVolume = true + if anonuuid.IsUUID(c.ImageName) == nil { + server.Image = &c.ImageName + } else { + imageIdentifier, err = api.GetImageID(c.ImageName, arch) + if err != nil { + return "", err + } + if imageIdentifier.Identifier != "" { + server.Image = &imageIdentifier.Identifier + } else { + snapshotID, errGetSnapID := api.GetSnapshotID(c.ImageName) + if errGetSnapID != nil { + return "", errGetSnapID + } + snapshot, errGetSnap := api.GetSnapshot(snapshotID) + if errGetSnap != nil { + return "", errGetSnap + } + if snapshot.BaseVolume.Identifier == "" { + return "", fmt.Errorf("snapshot %v does not have base volume", snapshot.Name) + } + server.Volumes["0"] = snapshot.BaseVolume.Identifier + } + } + } + + if c.Bootscript != "" { + bootscript := "" + + if anonuuid.IsUUID(c.Bootscript) == nil { + bootscript = c.Bootscript + } else { + var errGetBootScript error + + bootscript, errGetBootScript = api.GetBootscriptID(c.Bootscript, imageIdentifier.Arch) + if errGetBootScript != nil { + return "", errGetBootScript + } + } + server.Bootscript = &bootscript + } + serverID, err := api.PostServer(server) + if err != nil { + return "", err + } + + // For inherited volumes, we prefix the name with server hostname + if inheritingVolume { + createdServer, err := api.GetServer(serverID) + if err != nil { + return "", err + } + currentVolume := createdServer.Volumes["0"] + + var volumePayload ScalewayVolumePutDefinition + newName := fmt.Sprintf("%s-%s", createdServer.Hostname, currentVolume.Name) + volumePayload.Name = &newName + volumePayload.CreationDate = &currentVolume.CreationDate + volumePayload.Organization = &currentVolume.Organization + volumePayload.Server.Identifier = &currentVolume.Server.Identifier + volumePayload.Server.Name = &currentVolume.Server.Name + volumePayload.Identifier = &currentVolume.Identifier + volumePayload.Size = &currentVolume.Size + volumePayload.ModificationDate = &currentVolume.ModificationDate + volumePayload.ExportURI = &currentVolume.ExportURI + volumePayload.VolumeType = &currentVolume.VolumeType + + err = api.PutVolume(currentVolume.Identifier, volumePayload) + if err != nil { + return "", err + } + } + + return serverID, nil +} + +// WaitForServerState asks API in a loop until a server matches a wanted state +func WaitForServerState(api *ScalewayAPI, serverID string, targetState string) (*ScalewayServer, error) { + var server *ScalewayServer + var err error + + var currentState string + + for { + server, err = api.GetServer(serverID) + if err != nil { + return nil, err + } + if currentState != server.State { + log.Infof("Server changed state to '%s'", server.State) + currentState 
= server.State + } + if server.State == targetState { + break + } + time.Sleep(1 * time.Second) + } + + return server, nil +} + +// WaitForServerReady wait for a server state to be running, then wait for the SSH port to be available +func WaitForServerReady(api *ScalewayAPI, serverID, gateway string) (*ScalewayServer, error) { + promise := make(chan bool) + var server *ScalewayServer + var err error + var currentState string + + go func() { + defer close(promise) + + for { + server, err = api.GetServer(serverID) + if err != nil { + promise <- false + return + } + if currentState != server.State { + log.Infof("Server changed state to '%s'", server.State) + currentState = server.State + } + if server.State == "running" { + break + } + if server.State == "stopped" { + err = fmt.Errorf("The server has been stopped") + promise <- false + return + } + time.Sleep(1 * time.Second) + } + + if gateway == "" { + ip := server.PublicAddress.IP + if ip == "" && server.EnableIPV6 { + ip = fmt.Sprintf("[%s]", server.IPV6.Address) + } + dest := fmt.Sprintf("%s:22", ip) + log.Debugf("Waiting for server SSH port %s", dest) + err = utils.WaitForTCPPortOpen(dest) + if err != nil { + promise <- false + return + } + } else { + dest := fmt.Sprintf("%s:22", gateway) + log.Debugf("Waiting for server SSH port %s", dest) + err = utils.WaitForTCPPortOpen(dest) + if err != nil { + promise <- false + return + } + log.Debugf("Check for SSH port through the gateway: %s", server.PrivateIP) + timeout := time.Tick(120 * time.Second) + for { + select { + case <-timeout: + err = fmt.Errorf("Timeout: unable to ping %s", server.PrivateIP) + goto OUT + default: + if utils.SSHExec("", server.PrivateIP, "root", 22, []string{ + "nc", + "-z", + "-w", + "1", + server.PrivateIP, + "22", + }, false, gateway, false) == nil { + goto OUT + } + time.Sleep(2 * time.Second) + } + } + OUT: + if err != nil { + logrus.Info(err) + err = nil + } + } + promise <- true + }() + + loop := 0 + for { + select { + case done := <-promise: + utils.LogQuiet("\r \r") + if !done { + return nil, err + } + return server, nil + case <-time.After(time.Millisecond * 100): + utils.LogQuiet(fmt.Sprintf("\r%c\r", "-\\|/"[loop%4])) + loop = loop + 1 + if loop == 5 { + loop = 0 + } + } + } +} + +// WaitForServerStopped wait for a server state to be stopped +func WaitForServerStopped(api *ScalewayAPI, serverID string) (*ScalewayServer, error) { + server, err := WaitForServerState(api, serverID, "stopped") + if err != nil { + return nil, err + } + return server, nil +} + +// ByCreationDate sorts images by CreationDate field +type ByCreationDate []ScalewayImageInterface + +func (a ByCreationDate) Len() int { return len(a) } +func (a ByCreationDate) Swap(i, j int) { a[i], a[j] = a[j], a[i] } +func (a ByCreationDate) Less(i, j int) bool { return a[j].CreationDate.Before(a[i].CreationDate) } + +// StartServer start a server based on its needle, can optionaly block while server is booting +func StartServer(api *ScalewayAPI, needle string, wait bool) error { + server, err := api.GetServerID(needle) + if err != nil { + return err + } + + if err = api.PostServerAction(server, "poweron"); err != nil { + return err + } + + if wait { + _, err = WaitForServerReady(api, server, "") + if err != nil { + return fmt.Errorf("failed to wait for server %s to be ready, %v", needle, err) + } + } + return nil +} + +// StartServerOnce wraps StartServer for golang channel +func StartServerOnce(api *ScalewayAPI, needle string, wait bool, successChan chan string, errChan chan error) { + err := 
StartServer(api, needle, wait) + + if err != nil { + errChan <- err + return + } + successChan <- needle +} + +// DeleteServerForce tries to delete a server using multiple ways +func (a *ScalewayAPI) DeleteServerForce(serverID string) error { + // FIXME: also delete attached volumes and ip address + // FIXME: call delete and stop -t in parallel to speed up process + err := a.DeleteServer(serverID) + if err == nil { + logrus.Infof("Server '%s' successfully deleted", serverID) + return nil + } + + err = a.PostServerAction(serverID, "terminate") + if err == nil { + logrus.Infof("Server '%s' successfully terminated", serverID) + return nil + } + + // FIXME: retry in a loop until timeout or Control+C + logrus.Errorf("Failed to delete server %s", serverID) + logrus.Errorf("Try to run 'scw rm -f %s' later", serverID) + return err +} + +// GetSSHFingerprintFromServer returns an array which containts ssh-host-fingerprints +func (a *ScalewayAPI) GetSSHFingerprintFromServer(serverID string) []string { + ret := []string{} + + if value, err := a.GetUserdata(serverID, "ssh-host-fingerprints", false); err == nil { + PublicKeys := strings.Split(string(*value), "\n") + for i := range PublicKeys { + if fingerprint, err := utils.SSHGetFingerprint([]byte(PublicKeys[i])); err == nil { + ret = append(ret, fingerprint) + } + } + } + return ret +} diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/api/logger.go b/vendor/github.com/scaleway/scaleway-cli/pkg/api/logger.go new file mode 100644 index 000000000..58ad93716 --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/api/logger.go @@ -0,0 +1,77 @@ +// Copyright (C) 2015 Scaleway. All rights reserved. +// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE.md file. 
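The server-lifecycle helpers above (`CreateServer`, `StartServer`, `WaitForServerReady`, `DeleteServerForce`, `GetSSHFingerprintFromServer`) are meant to be chained by a caller. The following is a minimal, non-authoritative sketch of that call sequence; it is not part of this diff, the image name, server name, and commercial type are placeholders, and it assumes `scw` is an already-configured `*api.ScalewayAPI`:

```go
// Sketch only: not part of this change. Placeholders throughout.
package scalewayexample

import (
	"log"

	"github.com/scaleway/scaleway-cli/pkg/api"
)

// BuildOnce creates a throwaway server, powers it on, waits for SSH,
// prints the host key fingerprints, and force-deletes it afterwards.
func BuildOnce(scw *api.ScalewayAPI) error {
	cfg := &api.ConfigCreateServer{
		ImageName:         "ubuntu-xenial", // placeholder image needle
		Name:              "packer-demo",   // placeholder server name
		CommercialType:    "VC1S",          // placeholder commercial type
		DynamicIPRequired: true,
	}

	serverID, err := api.CreateServer(scw, cfg)
	if err != nil {
		return err
	}
	// Best-effort cleanup: DeleteServerForce tries delete, then terminate.
	defer func() {
		if derr := scw.DeleteServerForce(serverID); derr != nil {
			log.Printf("cleanup failed: %v", derr)
		}
	}()

	// wait=true powers the server on and blocks until it reports "running"
	// and its SSH port answers (no gateway in this sketch).
	if err := api.StartServer(scw, serverID, true); err != nil {
		return err
	}

	// The API also exposes the server's SSH host key fingerprints.
	for _, fp := range scw.GetSSHFingerprintFromServer(serverID) {
		log.Printf("host key: %s", fp)
	}
	return nil
}
```

Note that `CreateServer` only registers the server; powering it on and waiting for SSH is handled by `StartServer` with `wait` set to true, which in turn calls `WaitForServerReady`.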
+ +package api + +import ( + "fmt" + "log" + "net/http" + "os" +) + +// Logger handles logging concerns for the Scaleway API SDK +type Logger interface { + LogHTTP(*http.Request) + Fatalf(format string, v ...interface{}) + Debugf(format string, v ...interface{}) + Infof(format string, v ...interface{}) + Warnf(format string, v ...interface{}) +} + +// NewDefaultLogger returns a logger which is configured for stdout +func NewDefaultLogger() Logger { + return &defaultLogger{ + Logger: log.New(os.Stdout, "", log.LstdFlags), + } +} + +type defaultLogger struct { + *log.Logger +} + +func (l *defaultLogger) LogHTTP(r *http.Request) { + l.Printf("%s %s\n", r.Method, r.URL.RawPath) +} + +func (l *defaultLogger) Fatalf(format string, v ...interface{}) { + l.Printf("[FATAL] %s\n", fmt.Sprintf(format, v)) + os.Exit(1) +} + +func (l *defaultLogger) Debugf(format string, v ...interface{}) { + l.Printf("[DEBUG] %s\n", fmt.Sprintf(format, v)) +} + +func (l *defaultLogger) Infof(format string, v ...interface{}) { + l.Printf("[INFO ] %s\n", fmt.Sprintf(format, v)) +} + +func (l *defaultLogger) Warnf(format string, v ...interface{}) { + l.Printf("[WARN ] %s\n", fmt.Sprintf(format, v)) +} + +type disableLogger struct { +} + +// NewDisableLogger returns a logger which is configured to do nothing +func NewDisableLogger() Logger { + return &disableLogger{} +} + +func (d *disableLogger) LogHTTP(r *http.Request) { +} + +func (d *disableLogger) Fatalf(format string, v ...interface{}) { + panic(fmt.Sprintf(format, v)) +} + +func (d *disableLogger) Debugf(format string, v ...interface{}) { +} + +func (d *disableLogger) Infof(format string, v ...interface{}) { +} + +func (d *disableLogger) Warnf(format string, v ...interface{}) { +} diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/sshcommand/sshcommand.go b/vendor/github.com/scaleway/scaleway-cli/pkg/sshcommand/sshcommand.go new file mode 100644 index 000000000..676b2f976 --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/sshcommand/sshcommand.go @@ -0,0 +1,124 @@ +package sshcommand + +import ( + "fmt" + "runtime" + "strings" +) + +// Command contains settings to build a ssh command +type Command struct { + Host string + User string + Port int + SSHOptions []string + Gateway *Command + Command []string + Debug bool + NoEscapeCommand bool + SkipHostKeyChecking bool + Quiet bool + AllocateTTY bool + EnableSSHKeyForwarding bool + + isGateway bool +} + +// New returns a minimal Command +func New(host string) *Command { + return &Command{ + Host: host, + } +} + +func (c *Command) applyDefaults() { + if strings.Contains(c.Host, "@") { + parts := strings.Split(c.Host, "@") + c.User = parts[0] + c.Host = parts[1] + } + + if c.Port == 0 { + c.Port = 22 + } + + if c.isGateway { + c.SSHOptions = []string{"-W", "%h:%p"} + } +} + +// Slice returns an execve compatible slice of arguments +func (c *Command) Slice() []string { + c.applyDefaults() + + slice := []string{} + + slice = append(slice, "ssh") + + if c.EnableSSHKeyForwarding { + slice = append(slice, "-A") + } + + if c.Quiet { + slice = append(slice, "-q") + } + + if c.SkipHostKeyChecking { + slice = append(slice, "-o", "UserKnownHostsFile=/dev/null", "-o", "StrictHostKeyChecking=no") + } + + if len(c.SSHOptions) > 0 { + slice = append(slice, c.SSHOptions...) 
+ } + + if c.Gateway != nil { + c.Gateway.isGateway = true + slice = append(slice, "-o", "ProxyCommand="+c.Gateway.String()) + } + + if c.User != "" { + slice = append(slice, "-l", c.User) + } + + slice = append(slice, c.Host) + + if c.AllocateTTY { + slice = append(slice, "-t", "-t") + } + + slice = append(slice, "-p", fmt.Sprintf("%d", c.Port)) + if len(c.Command) > 0 { + slice = append(slice, "--", "/bin/sh", "-e") + if c.Debug { + slice = append(slice, "-x") + } + slice = append(slice, "-c") + + var escapedCommand []string + if c.NoEscapeCommand { + escapedCommand = c.Command + } else { + escapedCommand = []string{} + for _, part := range c.Command { + escapedCommand = append(escapedCommand, fmt.Sprintf("%q", part)) + } + } + slice = append(slice, fmt.Sprintf("%q", strings.Join(escapedCommand, " "))) + } + if runtime.GOOS == "windows" { + slice[len(slice)-1] = slice[len(slice)-1] + " " // Why ? + } + return slice +} + +// String returns a copy-pasteable command, useful for debugging +func (c *Command) String() string { + slice := c.Slice() + for i := range slice { + quoted := fmt.Sprintf("%q", slice[i]) + if strings.Contains(slice[i], " ") || len(quoted) != len(slice[i])+2 { + slice[i] = quoted + } + } + return strings.Join(slice, " ") +} diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/utils/quiet.go b/vendor/github.com/scaleway/scaleway-cli/pkg/utils/quiet.go new file mode 100644 index 000000000..775918d8d --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/utils/quiet.go @@ -0,0 +1,30 @@ +// Copyright (C) 2015 Scaleway. All rights reserved. +// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE.md file. + +// Package utils contains logquiet +package utils + +import ( + "fmt" + "os" +) + +// LogQuietStruct is a struct to store information about quiet state +type LogQuietStruct struct { + quiet bool +} + +var instanceQuiet LogQuietStruct + +// Quiet enable or disable quiet +func Quiet(option bool) { + instanceQuiet.quiet = option +} + +// LogQuiet Displays info if quiet is activated +func LogQuiet(str string) { + if !instanceQuiet.quiet { + fmt.Fprintf(os.Stderr, "%s", str) + } +} diff --git a/vendor/github.com/scaleway/scaleway-cli/pkg/utils/utils.go b/vendor/github.com/scaleway/scaleway-cli/pkg/utils/utils.go new file mode 100644 index 000000000..126d9c834 --- /dev/null +++ b/vendor/github.com/scaleway/scaleway-cli/pkg/utils/utils.go @@ -0,0 +1,253 @@ +// Copyright (C) 2015 Scaleway. All rights reserved. +// Use of this source code is governed by a MIT-style +// license that can be found in the LICENSE.md file. 
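`sshcommand.Command` assembles the full `ssh` argument vector (quoting the remote command and adding `-o ProxyCommand=...` when a `Gateway` is set) rather than concatenating a shell string. Below is a small illustrative sketch of how it might be used; the hosts and the remote command are made up and this is not code from this change:

```go
// Sketch only: not part of this change. Hosts and command are placeholders.
package sshexample

import (
	"fmt"
	"os/exec"

	"github.com/scaleway/scaleway-cli/pkg/sshcommand"
)

// RunUname executes `uname -a` on a private host through an SSH gateway.
func RunUname() error {
	cmd := &sshcommand.Command{
		Host:                "10.1.2.3", // placeholder private IP
		User:                "root",
		Port:                22,
		SkipHostKeyChecking: true,
		Command:             []string{"uname", "-a"},
		// A gateway Command is rendered as -o ProxyCommand=... by Slice().
		Gateway: sshcommand.New("root@gateway.example.com"), // placeholder
	}

	fmt.Println(cmd.String()) // copy-pasteable form, useful for debugging

	args := cmd.Slice() // execve-style slice; args[0] is "ssh"
	return exec.Command(args[0], args[1:]...).Run()
}
```

This mirrors how `SSHExec` in the utils package drives the same type: it builds the arguments with `NewSSHExecCmd` and then hands `Slice()[1:]` to `exec.Command("ssh", ...)`.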
+ +// scw helpers + +// Package utils contains helpers +package utils + +import ( + "crypto/md5" + "errors" + "fmt" + "io" + "net" + "os" + "os/exec" + "path" + "path/filepath" + "reflect" + "regexp" + "strings" + "time" + + "golang.org/x/crypto/ssh" + + "github.com/mattn/go-isatty" + "github.com/moul/gotty-client" + "github.com/scaleway/scaleway-cli/pkg/sshcommand" + "github.com/sirupsen/logrus" + log "github.com/sirupsen/logrus" +) + +// SpawnRedirection is used to redirects the fluxes +type SpawnRedirection struct { + Stdin io.Reader + Stdout io.Writer + Stderr io.Writer +} + +// SSHExec executes a command over SSH and redirects file-descriptors +func SSHExec(publicIPAddress, privateIPAddress, user string, port int, command []string, checkConnection bool, gateway string, enableSSHKeyForwarding bool) error { + gatewayUser := "root" + gatewayIPAddress := gateway + if strings.Contains(gateway, "@") { + parts := strings.Split(gatewayIPAddress, "@") + if len(parts) != 2 { + return fmt.Errorf("gateway: must be like root@IP") + } + gatewayUser = parts[0] + gatewayIPAddress = parts[1] + gateway = gatewayUser + "@" + gatewayIPAddress + } + + if publicIPAddress == "" && gatewayIPAddress == "" { + return errors.New("server does not have public IP") + } + if privateIPAddress == "" && gatewayIPAddress != "" { + return errors.New("server does not have private IP") + } + + if checkConnection { + useGateway := gatewayIPAddress != "" + if useGateway && !IsTCPPortOpen(fmt.Sprintf("%s:22", gatewayIPAddress)) { + return errors.New("gateway is not available, try again later") + } + if !useGateway && !IsTCPPortOpen(fmt.Sprintf("%s:%d", publicIPAddress, port)) { + return errors.New("server is not ready, try again later") + } + } + + sshCommand := NewSSHExecCmd(publicIPAddress, privateIPAddress, user, port, isatty.IsTerminal(os.Stdin.Fd()), command, gateway, enableSSHKeyForwarding) + + log.Debugf("Executing: %s", sshCommand) + + spawn := exec.Command("ssh", sshCommand.Slice()[1:]...) + spawn.Stdout = os.Stdout + spawn.Stdin = os.Stdin + spawn.Stderr = os.Stderr + return spawn.Run() +} + +// NewSSHExecCmd computes execve compatible arguments to run a command via ssh +func NewSSHExecCmd(publicIPAddress, privateIPAddress, user string, port int, allocateTTY bool, command []string, gatewayIPAddress string, enableSSHKeyForwarding bool) *sshcommand.Command { + quiet := os.Getenv("DEBUG") != "1" + secureExec := os.Getenv("SCW_SECURE_EXEC") == "1" + sshCommand := &sshcommand.Command{ + AllocateTTY: allocateTTY, + Command: command, + Host: publicIPAddress, + Quiet: quiet, + SkipHostKeyChecking: !secureExec, + User: user, + NoEscapeCommand: true, + Port: port, + EnableSSHKeyForwarding: enableSSHKeyForwarding, + } + if gatewayIPAddress != "" { + sshCommand.Host = privateIPAddress + sshCommand.Gateway = &sshcommand.Command{ + Host: gatewayIPAddress, + SkipHostKeyChecking: !secureExec, + AllocateTTY: allocateTTY, + Quiet: quiet, + User: user, + Port: port, + } + } + + return sshCommand +} + +// GeneratingAnSSHKey generates an SSH key +func GeneratingAnSSHKey(cfg SpawnRedirection, path string, name string) (string, error) { + args := []string{ + "-t", + "rsa", + "-b", + "4096", + "-f", + filepath.Join(path, name), + "-N", + "", + "-C", + "", + } + log.Infof("Executing commands %v", args) + spawn := exec.Command("ssh-keygen", args...) 
+ spawn.Stdout = cfg.Stdout + spawn.Stdin = cfg.Stdin + spawn.Stderr = cfg.Stderr + return args[5], spawn.Run() +} + +// WaitForTCPPortOpen calls IsTCPPortOpen in a loop +func WaitForTCPPortOpen(dest string) error { + for { + if IsTCPPortOpen(dest) { + break + } + time.Sleep(1 * time.Second) + } + return nil +} + +// IsTCPPortOpen returns true if a TCP communication with "host:port" can be initialized +func IsTCPPortOpen(dest string) bool { + conn, err := net.DialTimeout("tcp", dest, time.Duration(2000)*time.Millisecond) + if err == nil { + defer conn.Close() + } + return err == nil +} + +// TruncIf ensures the input string does not exceed max size if cond is met +func TruncIf(str string, max int, cond bool) string { + if cond && len(str) > max { + return str[:max] + } + return str +} + +// Wordify convert complex name to a single word without special shell characters +func Wordify(str string) string { + str = regexp.MustCompile(`[^a-zA-Z0-9-]`).ReplaceAllString(str, "_") + str = regexp.MustCompile(`__+`).ReplaceAllString(str, "_") + str = strings.Trim(str, "_") + return str +} + +// PathToTARPathparts returns the two parts of a unix path +func PathToTARPathparts(fullPath string) (string, string) { + fullPath = strings.TrimRight(fullPath, "/") + return path.Dir(fullPath), path.Base(fullPath) +} + +// RemoveDuplicates transforms an array into a unique array +func RemoveDuplicates(elements []string) []string { + encountered := map[string]bool{} + + // Create a map of all unique elements. + for v := range elements { + encountered[elements[v]] = true + } + + // Place all keys from the map into a slice. + result := []string{} + for key := range encountered { + result = append(result, key) + } + return result +} + +// AttachToSerial tries to connect to server serial using 'gotty-client' and fallback with a help message +func AttachToSerial(serverID, apiToken, url string) (*gottyclient.Client, chan bool, error) { + gottyURL := os.Getenv("SCW_GOTTY_URL") + if gottyURL == "" { + gottyURL = url + } + URL := fmt.Sprintf("%s?arg=%s&arg=%s", gottyURL, apiToken, serverID) + + logrus.Debug("Connection to ", URL) + gottycli, err := gottyclient.NewClient(URL) + if err != nil { + return nil, nil, err + } + + if os.Getenv("SCW_TLSVERIFY") == "0" { + gottycli.SkipTLSVerify = true + } + + gottycli.UseProxyFromEnv = true + + if err = gottycli.Connect(); err != nil { + return nil, nil, err + } + done := make(chan bool) + + fmt.Println("You are connected, type 'Ctrl+q' to quit.") + go func() { + gottycli.Loop() + gottycli.Close() + done <- true + }() + return gottycli, done, nil +} + +func rfc4716hex(data []byte) string { + fingerprint := "" + + for i := 0; i < len(data); i++ { + fingerprint = fmt.Sprintf("%s%0.2x", fingerprint, data[i]) + if i != len(data)-1 { + fingerprint = fingerprint + ":" + } + } + return fingerprint +} + +// SSHGetFingerprint returns the fingerprint of an SSH key +func SSHGetFingerprint(key []byte) (string, error) { + publicKey, comment, _, _, err := ssh.ParseAuthorizedKey(key) + if err != nil { + return "", err + } + switch reflect.TypeOf(publicKey).String() { + case "*ssh.rsaPublicKey", "*ssh.dsaPublicKey", "*ssh.ecdsaPublicKey": + md5sum := md5.Sum(publicKey.Marshal()) + return publicKey.Type() + " " + rfc4716hex(md5sum[:]) + " " + comment, nil + default: + return "", errors.New("Can't handle this key") + } +} diff --git a/vendor/github.com/sirupsen/logrus/CHANGELOG.md b/vendor/github.com/sirupsen/logrus/CHANGELOG.md new file mode 100644 index 000000000..1bd1deb29 --- /dev/null +++ 
b/vendor/github.com/sirupsen/logrus/CHANGELOG.md @@ -0,0 +1,123 @@ +# 1.0.5 + +* Fix hooks race (#707) +* Fix panic deadlock (#695) + +# 1.0.4 + +* Fix race when adding hooks (#612) +* Fix terminal check in AppEngine (#635) + +# 1.0.3 + +* Replace example files with testable examples + +# 1.0.2 + +* bug: quote non-string values in text formatter (#583) +* Make (*Logger) SetLevel a public method + +# 1.0.1 + +* bug: fix escaping in text formatter (#575) + +# 1.0.0 + +* Officially changed name to lower-case +* bug: colors on Windows 10 (#541) +* bug: fix race in accessing level (#512) + +# 0.11.5 + +* feature: add writer and writerlevel to entry (#372) + +# 0.11.4 + +* bug: fix undefined variable on solaris (#493) + +# 0.11.3 + +* formatter: configure quoting of empty values (#484) +* formatter: configure quoting character (default is `"`) (#484) +* bug: fix not importing io correctly in non-linux environments (#481) + +# 0.11.2 + +* bug: fix windows terminal detection (#476) + +# 0.11.1 + +* bug: fix tty detection with custom out (#471) + +# 0.11.0 + +* performance: Use bufferpool to allocate (#370) +* terminal: terminal detection for app-engine (#343) +* feature: exit handler (#375) + +# 0.10.0 + +* feature: Add a test hook (#180) +* feature: `ParseLevel` is now case-insensitive (#326) +* feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308) +* performance: avoid re-allocations on `WithFields` (#335) + +# 0.9.0 + +* logrus/text_formatter: don't emit empty msg +* logrus/hooks/airbrake: move out of main repository +* logrus/hooks/sentry: move out of main repository +* logrus/hooks/papertrail: move out of main repository +* logrus/hooks/bugsnag: move out of main repository +* logrus/core: run tests with `-race` +* logrus/core: detect TTY based on `stderr` +* logrus/core: support `WithError` on logger +* logrus/core: Solaris support + +# 0.8.7 + +* logrus/core: fix possible race (#216) +* logrus/doc: small typo fixes and doc improvements + + +# 0.8.6 + +* hooks/raven: allow passing an initialized client + +# 0.8.5 + +* logrus/core: revert #208 + +# 0.8.4 + +* formatter/text: fix data race (#218) + +# 0.8.3 + +* logrus/core: fix entry log level (#208) +* logrus/core: improve performance of text formatter by 40% +* logrus/core: expose `LevelHooks` type +* logrus/core: add support for DragonflyBSD and NetBSD +* formatter/text: print structs more verbosely + +# 0.8.2 + +* logrus: fix more Fatal family functions + +# 0.8.1 + +* logrus: fix not exiting on `Fatalf` and `Fatalln` + +# 0.8.0 + +* logrus: defaults to stderr instead of stdout +* hooks/sentry: add special field for `*http.Request` +* formatter/text: ignore Windows for colors + +# 0.7.3 + +* formatter/\*: allow configuration of timestamp layout + +# 0.7.2 + +* formatter/text: Add configuration option for time format (#158) diff --git a/vendor/github.com/sirupsen/logrus/LICENSE b/vendor/github.com/sirupsen/logrus/LICENSE new file mode 100644 index 000000000..f090cb42f --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/LICENSE @@ -0,0 +1,21 @@ +The MIT License (MIT) + +Copyright (c) 2014 Simon Eskildsen + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the 
following conditions: + +The above copyright notice and this permission notice shall be included in +all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +THE SOFTWARE. diff --git a/vendor/github.com/sirupsen/logrus/README.md b/vendor/github.com/sirupsen/logrus/README.md new file mode 100644 index 000000000..9751da1bf --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/README.md @@ -0,0 +1,512 @@ +# Logrus :walrus: [![Build Status](https://travis-ci.org/sirupsen/logrus.svg?branch=master)](https://travis-ci.org/sirupsen/logrus) [![GoDoc](https://godoc.org/github.com/sirupsen/logrus?status.svg)](https://godoc.org/github.com/sirupsen/logrus) + +Logrus is a structured logger for Go (golang), completely API compatible with +the standard library logger. + +**Seeing weird case-sensitive problems?** It's in the past been possible to +import Logrus as both upper- and lower-case. Due to the Go package environment, +this caused issues in the community and we needed a standard. Some environments +experienced problems with the upper-case variant, so the lower-case was decided. +Everything using `logrus` will need to use the lower-case: +`github.com/sirupsen/logrus`. Any package that isn't, should be changed. + +To fix Glide, see [these +comments](https://github.com/sirupsen/logrus/issues/553#issuecomment-306591437). +For an in-depth explanation of the casing issue, see [this +comment](https://github.com/sirupsen/logrus/issues/570#issuecomment-313933276). + +**Are you interested in assisting in maintaining Logrus?** Currently I have a +lot of obligations, and I am unable to provide Logrus with the maintainership it +needs. If you'd like to help, please reach out to me at `simon at author's +username dot com`. 
+ +Nicely color-coded in development (when a TTY is attached, otherwise just +plain text): + +![Colored](http://i.imgur.com/PY7qMwd.png) + +With `log.SetFormatter(&log.JSONFormatter{})`, for easy parsing by logstash +or Splunk: + +```json +{"animal":"walrus","level":"info","msg":"A group of walrus emerges from the +ocean","size":10,"time":"2014-03-10 19:57:38.562264131 -0400 EDT"} + +{"level":"warning","msg":"The group's number increased tremendously!", +"number":122,"omg":true,"time":"2014-03-10 19:57:38.562471297 -0400 EDT"} + +{"animal":"walrus","level":"info","msg":"A giant walrus appears!", +"size":10,"time":"2014-03-10 19:57:38.562500591 -0400 EDT"} + +{"animal":"walrus","level":"info","msg":"Tremendously sized cow enters the ocean.", +"size":9,"time":"2014-03-10 19:57:38.562527896 -0400 EDT"} + +{"level":"fatal","msg":"The ice breaks!","number":100,"omg":true, +"time":"2014-03-10 19:57:38.562543128 -0400 EDT"} +``` + +With the default `log.SetFormatter(&log.TextFormatter{})` when a TTY is not +attached, the output is compatible with the +[logfmt](http://godoc.org/github.com/kr/logfmt) format: + +```text +time="2015-03-26T01:27:38-04:00" level=debug msg="Started observing beach" animal=walrus number=8 +time="2015-03-26T01:27:38-04:00" level=info msg="A group of walrus emerges from the ocean" animal=walrus size=10 +time="2015-03-26T01:27:38-04:00" level=warning msg="The group's number increased tremendously!" number=122 omg=true +time="2015-03-26T01:27:38-04:00" level=debug msg="Temperature changes" temperature=-4 +time="2015-03-26T01:27:38-04:00" level=panic msg="It's over 9000!" animal=orca size=9009 +time="2015-03-26T01:27:38-04:00" level=fatal msg="The ice breaks!" err=&{0x2082280c0 map[animal:orca size:9009] 2015-03-26 01:27:38.441574009 -0400 EDT panic It's over 9000!} number=100 omg=true +exit status 1 +``` + +#### Case-sensitivity + +The organization's name was changed to lower-case--and this will not be changed +back. If you are getting import conflicts due to case sensitivity, please use +the lower-case import: `github.com/sirupsen/logrus`. + +#### Example + +The simplest way to use Logrus is simply the package-level exported logger: + +```go +package main + +import ( + log "github.com/sirupsen/logrus" +) + +func main() { + log.WithFields(log.Fields{ + "animal": "walrus", + }).Info("A walrus appears") +} +``` + +Note that it's completely api-compatible with the stdlib logger, so you can +replace your `log` imports everywhere with `log "github.com/sirupsen/logrus"` +and you'll now have the flexibility of Logrus. You can customize it all you +want: + +```go +package main + +import ( + "os" + log "github.com/sirupsen/logrus" +) + +func init() { + // Log as JSON instead of the default ASCII formatter. + log.SetFormatter(&log.JSONFormatter{}) + + // Output to stdout instead of the default stderr + // Can be any io.Writer, see below for File example + log.SetOutput(os.Stdout) + + // Only log the warning severity or above. 
+ log.SetLevel(log.WarnLevel) +} + +func main() { + log.WithFields(log.Fields{ + "animal": "walrus", + "size": 10, + }).Info("A group of walrus emerges from the ocean") + + log.WithFields(log.Fields{ + "omg": true, + "number": 122, + }).Warn("The group's number increased tremendously!") + + log.WithFields(log.Fields{ + "omg": true, + "number": 100, + }).Fatal("The ice breaks!") + + // A common pattern is to re-use fields between logging statements by re-using + // the logrus.Entry returned from WithFields() + contextLogger := log.WithFields(log.Fields{ + "common": "this is a common field", + "other": "I also should be logged always", + }) + + contextLogger.Info("I'll be logged with common and other field") + contextLogger.Info("Me too") +} +``` + +For more advanced usage such as logging to multiple locations from the same +application, you can also create an instance of the `logrus` Logger: + +```go +package main + +import ( + "os" + "github.com/sirupsen/logrus" +) + +// Create a new instance of the logger. You can have any number of instances. +var log = logrus.New() + +func main() { + // The API for setting attributes is a little different than the package level + // exported logger. See Godoc. + log.Out = os.Stdout + + // You could set this to any `io.Writer` such as a file + // file, err := os.OpenFile("logrus.log", os.O_CREATE|os.O_WRONLY, 0666) + // if err == nil { + // log.Out = file + // } else { + // log.Info("Failed to log to file, using default stderr") + // } + + log.WithFields(logrus.Fields{ + "animal": "walrus", + "size": 10, + }).Info("A group of walrus emerges from the ocean") +} +``` + +#### Fields + +Logrus encourages careful, structured logging through logging fields instead of +long, unparseable error messages. For example, instead of: `log.Fatalf("Failed +to send event %s to topic %s with key %d")`, you should log the much more +discoverable: + +```go +log.WithFields(log.Fields{ + "event": event, + "topic": topic, + "key": key, +}).Fatal("Failed to send event") +``` + +We've found this API forces you to think about logging in a way that produces +much more useful logging messages. We've been in countless situations where just +a single added field to a log statement that was already there would've saved us +hours. The `WithFields` call is optional. + +In general, with Logrus using any of the `printf`-family functions should be +seen as a hint you should add a field, however, you can still use the +`printf`-family functions with Logrus. + +#### Default Fields + +Often it's helpful to have fields _always_ attached to log statements in an +application or parts of one. For example, you may want to always log the +`request_id` and `user_ip` in the context of a request. Instead of writing +`log.WithFields(log.Fields{"request_id": request_id, "user_ip": user_ip})` on +every line, you can create a `logrus.Entry` to pass around instead: + +```go +requestLogger := log.WithFields(log.Fields{"request_id": request_id, "user_ip": user_ip}) +requestLogger.Info("something happened on that request") # will log request_id and user_ip +requestLogger.Warn("something not great happened") +``` + +#### Hooks + +You can add hooks for logging levels. For example to send errors to an exception +tracking service on `Error`, `Fatal` and `Panic`, info to StatsD or log to +multiple places simultaneously, e.g. syslog. + +Logrus comes with [built-in hooks](hooks/). 
Add those, or your custom hook, in +`init`: + +```go +import ( + log "github.com/sirupsen/logrus" + "gopkg.in/gemnasium/logrus-airbrake-hook.v2" // the package is named "airbrake" + logrus_syslog "github.com/sirupsen/logrus/hooks/syslog" + "log/syslog" +) + +func init() { + + // Use the Airbrake hook to report errors that have Error severity or above to + // an exception tracker. You can create custom hooks, see the Hooks section. + log.AddHook(airbrake.NewHook(123, "xyz", "production")) + + hook, err := logrus_syslog.NewSyslogHook("udp", "localhost:514", syslog.LOG_INFO, "") + if err != nil { + log.Error("Unable to connect to local syslog daemon") + } else { + log.AddHook(hook) + } +} +``` +Note: Syslog hook also support connecting to local syslog (Ex. "/dev/log" or "/var/run/syslog" or "/var/run/log"). For the detail, please check the [syslog hook README](hooks/syslog/README.md). + +| Hook | Description | +| ----- | ----------- | +| [Airbrake "legacy"](https://github.com/gemnasium/logrus-airbrake-legacy-hook) | Send errors to an exception tracking service compatible with the Airbrake API V2. Uses [`airbrake-go`](https://github.com/tobi/airbrake-go) behind the scenes. | +| [Airbrake](https://github.com/gemnasium/logrus-airbrake-hook) | Send errors to the Airbrake API V3. Uses the official [`gobrake`](https://github.com/airbrake/gobrake) behind the scenes. | +| [Amazon Kinesis](https://github.com/evalphobia/logrus_kinesis) | Hook for logging to [Amazon Kinesis](https://aws.amazon.com/kinesis/) | +| [Amqp-Hook](https://github.com/vladoatanasov/logrus_amqp) | Hook for logging to Amqp broker (Like RabbitMQ) | +| [Application Insights](https://github.com/jjcollinge/logrus-appinsights) | Hook for logging to [Application Insights](https://azure.microsoft.com/en-us/services/application-insights/) +| [AzureTableHook](https://github.com/kpfaulkner/azuretablehook/) | Hook for logging to Azure Table Storage| +| [Bugsnag](https://github.com/Shopify/logrus-bugsnag/blob/master/bugsnag.go) | Send errors to the Bugsnag exception tracking service. | +| [ClickHouse](https://github.com/oxgrouby/logrus-clickhouse-hook) | Send logs to [ClickHouse](https://clickhouse.yandex/) | +| [DeferPanic](https://github.com/deferpanic/dp-logrus) | Hook for logging to DeferPanic | +| [Discordrus](https://github.com/kz/discordrus) | Hook for logging to [Discord](https://discordapp.com/) | +| [ElasticSearch](https://github.com/sohlich/elogrus) | Hook for logging to ElasticSearch| +| [Firehose](https://github.com/beaubrewer/logrus_firehose) | Hook for logging to [Amazon Firehose](https://aws.amazon.com/kinesis/firehose/) +| [Fluentd](https://github.com/evalphobia/logrus_fluent) | Hook for logging to fluentd | +| [Go-Slack](https://github.com/multiplay/go-slack) | Hook for logging to [Slack](https://slack.com) | +| [Graylog](https://github.com/gemnasium/logrus-graylog-hook) | Hook for logging to [Graylog](http://graylog2.org/) | +| [Hiprus](https://github.com/nubo/hiprus) | Send errors to a channel in hipchat. 
| +| [Honeybadger](https://github.com/agonzalezro/logrus_honeybadger) | Hook for sending exceptions to Honeybadger | +| [InfluxDB](https://github.com/Abramovic/logrus_influxdb) | Hook for logging to influxdb | +| [Influxus](http://github.com/vlad-doru/influxus) | Hook for concurrently logging to [InfluxDB](http://influxdata.com/) | +| [Journalhook](https://github.com/wercker/journalhook) | Hook for logging to `systemd-journald` | +| [KafkaLogrus](https://github.com/tracer0tong/kafkalogrus) | Hook for logging to Kafka | +| [Kafka REST Proxy](https://github.com/Nordstrom/logrus-kafka-rest-proxy) | Hook for logging to [Kafka REST Proxy](https://docs.confluent.io/current/kafka-rest/docs) | +| [LFShook](https://github.com/rifflock/lfshook) | Hook for logging to the local filesystem | +| [Logbeat](https://github.com/macandmia/logbeat) | Hook for logging to [Opbeat](https://opbeat.com/) | +| [Logentries](https://github.com/jcftang/logentriesrus) | Hook for logging to [Logentries](https://logentries.com/) | +| [Logentrus](https://github.com/puddingfactory/logentrus) | Hook for logging to [Logentries](https://logentries.com/) | +| [Logmatic.io](https://github.com/logmatic/logmatic-go) | Hook for logging to [Logmatic.io](http://logmatic.io/) | +| [Logrusly](https://github.com/sebest/logrusly) | Send logs to [Loggly](https://www.loggly.com/) | +| [Logstash](https://github.com/bshuster-repo/logrus-logstash-hook) | Hook for logging to [Logstash](https://www.elastic.co/products/logstash) | +| [Mail](https://github.com/zbindenren/logrus_mail) | Hook for sending exceptions via mail | +| [Mattermost](https://github.com/shuLhan/mattermost-integration/tree/master/hooks/logrus) | Hook for logging to [Mattermost](https://mattermost.com/) | +| [Mongodb](https://github.com/weekface/mgorus) | Hook for logging to mongodb | +| [NATS-Hook](https://github.com/rybit/nats_logrus_hook) | Hook for logging to [NATS](https://nats.io) | +| [Octokit](https://github.com/dorajistyle/logrus-octokit-hook) | Hook for logging to github via octokit | +| [Papertrail](https://github.com/polds/logrus-papertrail-hook) | Send errors to the [Papertrail](https://papertrailapp.com) hosted logging service via UDP. | +| [PostgreSQL](https://github.com/gemnasium/logrus-postgresql-hook) | Send logs to [PostgreSQL](http://postgresql.org) | +| [Promrus](https://github.com/weaveworks/promrus) | Expose number of log messages as [Prometheus](https://prometheus.io/) metrics | +| [Pushover](https://github.com/toorop/logrus_pushover) | Send error via [Pushover](https://pushover.net) | +| [Raygun](https://github.com/squirkle/logrus-raygun-hook) | Hook for logging to [Raygun.io](http://raygun.io/) | +| [Redis-Hook](https://github.com/rogierlommers/logrus-redis-hook) | Hook for logging to a ELK stack (through Redis) | +| [Rollrus](https://github.com/heroku/rollrus) | Hook for sending errors to rollbar | +| [Scribe](https://github.com/sagar8192/logrus-scribe-hook) | Hook for logging to [Scribe](https://github.com/facebookarchive/scribe)| +| [Sentry](https://github.com/evalphobia/logrus_sentry) | Send errors to the Sentry error logging and aggregation service. | +| [Slackrus](https://github.com/johntdyer/slackrus) | Hook for Slack chat. 
| +| [Stackdriver](https://github.com/knq/sdhook) | Hook for logging to [Google Stackdriver](https://cloud.google.com/logging/) | +| [Sumorus](https://github.com/doublefree/sumorus) | Hook for logging to [SumoLogic](https://www.sumologic.com/)| +| [Syslog](https://github.com/sirupsen/logrus/blob/master/hooks/syslog/syslog.go) | Send errors to remote syslog server. Uses standard library `log/syslog` behind the scenes. | +| [Syslog TLS](https://github.com/shinji62/logrus-syslog-ng) | Send errors to remote syslog server with TLS support. | +| [Telegram](https://github.com/rossmcdonald/telegram_hook) | Hook for logging errors to [Telegram](https://telegram.org/) | +| [TraceView](https://github.com/evalphobia/logrus_appneta) | Hook for logging to [AppNeta TraceView](https://www.appneta.com/products/traceview/) | +| [Typetalk](https://github.com/dragon3/logrus-typetalk-hook) | Hook for logging to [Typetalk](https://www.typetalk.in/) | +| [logz.io](https://github.com/ripcurld00d/logrus-logzio-hook) | Hook for logging to [logz.io](https://logz.io), a Log as a Service using Logstash | +| [SQS-Hook](https://github.com/tsarpaul/logrus_sqs) | Hook for logging to [Amazon Simple Queue Service (SQS)](https://aws.amazon.com/sqs/) | + +#### Level logging + +Logrus has six logging levels: Debug, Info, Warning, Error, Fatal and Panic. + +```go +log.Debug("Useful debugging information.") +log.Info("Something noteworthy happened!") +log.Warn("You should probably take a look at this.") +log.Error("Something failed but I'm not quitting.") +// Calls os.Exit(1) after logging +log.Fatal("Bye.") +// Calls panic() after logging +log.Panic("I'm bailing.") +``` + +You can set the logging level on a `Logger`, then it will only log entries with +that severity or anything above it: + +```go +// Will log anything that is info or above (warn, error, fatal, panic). Default. +log.SetLevel(log.InfoLevel) +``` + +It may be useful to set `log.Level = logrus.DebugLevel` in a debug or verbose +environment if your application has that. + +#### Entries + +Besides the fields added with `WithField` or `WithFields` some fields are +automatically added to all logging events: + +1. `time`. The timestamp when the entry was created. +2. `msg`. The logging message passed to `{Info,Warn,Error,Fatal,Panic}` after + the `AddFields` call. E.g. `Failed to send event.` +3. `level`. The logging level. E.g. `info`. + +#### Environments + +Logrus has no notion of environment. + +If you wish for hooks and formatters to only be used in specific environments, +you should handle that yourself. For example, if your application has a global +variable `Environment`, which is a string representation of the environment you +could do: + +```go +import ( + log "github.com/sirupsen/logrus" +) + +init() { + // do something here to set environment depending on an environment variable + // or command-line flag + if Environment == "production" { + log.SetFormatter(&log.JSONFormatter{}) + } else { + // The TextFormatter is default, you don't actually have to do this. + log.SetFormatter(&log.TextFormatter{}) + } +} +``` + +This configuration is how `logrus` was intended to be used, but JSON in +production is mostly only useful if you do log aggregation with tools like +Splunk or Logstash. + +#### Formatters + +The built-in logging formatters are: + +* `logrus.TextFormatter`. Logs the event in colors if stdout is a tty, otherwise + without colors. + * *Note:* to force colored output when there is no TTY, set the `ForceColors` + field to `true`. 
To force no colored output even if there is a TTY set the + `DisableColors` field to `true`. For Windows, see + [github.com/mattn/go-colorable](https://github.com/mattn/go-colorable). + * All options are listed in the [generated docs](https://godoc.org/github.com/sirupsen/logrus#TextFormatter). +* `logrus.JSONFormatter`. Logs fields as JSON. + * All options are listed in the [generated docs](https://godoc.org/github.com/sirupsen/logrus#JSONFormatter). + +Third party logging formatters: + +* [`FluentdFormatter`](https://github.com/joonix/log). Formats entries that can be parsed by Kubernetes and Google Container Engine. +* [`logstash`](https://github.com/bshuster-repo/logrus-logstash-hook). Logs fields as [Logstash](http://logstash.net) Events. +* [`prefixed`](https://github.com/x-cray/logrus-prefixed-formatter). Displays log entry source along with alternative layout. +* [`zalgo`](https://github.com/aybabtme/logzalgo). Invoking the P͉̫o̳̼̊w̖͈̰͎e̬͔̭͂r͚̼̹̲ ̫͓͉̳͈ō̠͕͖̚f̝͍̠ ͕̲̞͖͑Z̖̫̤̫ͪa͉̬͈̗l͖͎g̳̥o̰̥̅!̣͔̲̻͊̄ ̙̘̦̹̦. + +You can define your formatter by implementing the `Formatter` interface, +requiring a `Format` method. `Format` takes an `*Entry`. `entry.Data` is a +`Fields` type (`map[string]interface{}`) with all your fields as well as the +default ones (see Entries section above): + +```go +type MyJSONFormatter struct { +} + +log.SetFormatter(new(MyJSONFormatter)) + +func (f *MyJSONFormatter) Format(entry *Entry) ([]byte, error) { + // Note this doesn't include Time, Level and Message which are available on + // the Entry. Consult `godoc` on information about those fields or read the + // source of the official loggers. + serialized, err := json.Marshal(entry.Data) + if err != nil { + return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err) + } + return append(serialized, '\n'), nil +} +``` + +#### Logger as an `io.Writer` + +Logrus can be transformed into an `io.Writer`. That writer is the end of an `io.Pipe` and it is your responsibility to close it. + +```go +w := logger.Writer() +defer w.Close() + +srv := http.Server{ + // create a stdlib log.Logger that writes to + // logrus.Logger. + ErrorLog: log.New(w, "", 0), +} +``` + +Each line written to that writer will be printed the usual way, using formatters +and hooks. The level for those entries is `info`. + +This means that we can override the standard library logger easily: + +```go +logger := logrus.New() +logger.Formatter = &logrus.JSONFormatter{} + +// Use logrus for standard log output +// Note that `log` here references stdlib's log +// Not logrus imported under the name `log`. +log.SetOutput(logger.Writer()) +``` + +#### Rotation + +Log rotation is not provided with Logrus. Log rotation should be done by an +external program (like `logrotate(8)`) that can compress and delete old log +entries. It should not be a feature of the application-level logger. + +#### Tools + +| Tool | Description | +| ---- | ----------- | +|[Logrus Mate](https://github.com/gogap/logrus_mate)|Logrus mate is a tool for Logrus to manage loggers, you can initial logger's level, hook and formatter by config file, the logger will generated with different config at different environment.| +|[Logrus Viper Helper](https://github.com/heirko/go-contrib/tree/master/logrusHelper)|An Helper around Logrus to wrap with spf13/Viper to load configuration with fangs! And to simplify Logrus configuration use some behavior of [Logrus Mate](https://github.com/gogap/logrus_mate). 
[sample](https://github.com/heirko/iris-contrib/blob/master/middleware/logrus-logger/example) | + +#### Testing + +Logrus has a built in facility for asserting the presence of log messages. This is implemented through the `test` hook and provides: + +* decorators for existing logger (`test.NewLocal` and `test.NewGlobal`) which basically just add the `test` hook +* a test logger (`test.NewNullLogger`) that just records log messages (and does not output any): + +```go +import( + "github.com/sirupsen/logrus" + "github.com/sirupsen/logrus/hooks/test" + "github.com/stretchr/testify/assert" + "testing" +) + +func TestSomething(t*testing.T){ + logger, hook := test.NewNullLogger() + logger.Error("Helloerror") + + assert.Equal(t, 1, len(hook.Entries)) + assert.Equal(t, logrus.ErrorLevel, hook.LastEntry().Level) + assert.Equal(t, "Helloerror", hook.LastEntry().Message) + + hook.Reset() + assert.Nil(t, hook.LastEntry()) +} +``` + +#### Fatal handlers + +Logrus can register one or more functions that will be called when any `fatal` +level message is logged. The registered handlers will be executed before +logrus performs a `os.Exit(1)`. This behavior may be helpful if callers need +to gracefully shutdown. Unlike a `panic("Something went wrong...")` call which can be intercepted with a deferred `recover` a call to `os.Exit(1)` can not be intercepted. + +``` +... +handler := func() { + // gracefully shutdown something... +} +logrus.RegisterExitHandler(handler) +... +``` + +#### Thread safety + +By default Logger is protected by mutex for concurrent writes, this mutex is invoked when calling hooks and writing logs. +If you are sure such locking is not needed, you can call logger.SetNoLock() to disable the locking. + +Situation when locking is not needed includes: + +* You have no hooks registered, or hooks calling is already thread-safe. + +* Writing to logger.Out is already thread-safe, for example: + + 1) logger.Out is protected by locks. + + 2) logger.Out is a os.File handler opened with `O_APPEND` flag, and every write is smaller than 4k. (This allow multi-thread/multi-process writing) + + (Refer to http://www.notthewizard.com/2014/06/17/are-files-appends-really-atomic/) diff --git a/vendor/github.com/sirupsen/logrus/alt_exit.go b/vendor/github.com/sirupsen/logrus/alt_exit.go new file mode 100644 index 000000000..8af90637a --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/alt_exit.go @@ -0,0 +1,64 @@ +package logrus + +// The following code was sourced and modified from the +// https://github.com/tebeka/atexit package governed by the following license: +// +// Copyright (c) 2012 Miki Tebeka . +// +// Permission is hereby granted, free of charge, to any person obtaining a copy of +// this software and associated documentation files (the "Software"), to deal in +// the Software without restriction, including without limitation the rights to +// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of +// the Software, and to permit persons to whom the Software is furnished to do so, +// subject to the following conditions: +// +// The above copyright notice and this permission notice shall be included in all +// copies or substantial portions of the Software. +// +// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS +// FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR +// COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER +// IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +// CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +import ( + "fmt" + "os" +) + +var handlers = []func(){} + +func runHandler(handler func()) { + defer func() { + if err := recover(); err != nil { + fmt.Fprintln(os.Stderr, "Error: Logrus exit handler error:", err) + } + }() + + handler() +} + +func runHandlers() { + for _, handler := range handlers { + runHandler(handler) + } +} + +// Exit runs all the Logrus atexit handlers and then terminates the program using os.Exit(code) +func Exit(code int) { + runHandlers() + os.Exit(code) +} + +// RegisterExitHandler adds a Logrus Exit handler, call logrus.Exit to invoke +// all handlers. The handlers will also be invoked when any Fatal log entry is +// made. +// +// This method is useful when a caller wishes to use logrus to log a fatal +// message but also needs to gracefully shutdown. An example usecase could be +// closing database connections, or sending a alert that the application is +// closing. +func RegisterExitHandler(handler func()) { + handlers = append(handlers, handler) +} diff --git a/vendor/github.com/sirupsen/logrus/appveyor.yml b/vendor/github.com/sirupsen/logrus/appveyor.yml new file mode 100644 index 000000000..96c2ce15f --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/appveyor.yml @@ -0,0 +1,14 @@ +version: "{build}" +platform: x64 +clone_folder: c:\gopath\src\github.com\sirupsen\logrus +environment: + GOPATH: c:\gopath +branches: + only: + - master +install: + - set PATH=%GOPATH%\bin;c:\go\bin;%PATH% + - go version +build_script: + - go get -t + - go test diff --git a/vendor/github.com/sirupsen/logrus/doc.go b/vendor/github.com/sirupsen/logrus/doc.go new file mode 100644 index 000000000..da67aba06 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/doc.go @@ -0,0 +1,26 @@ +/* +Package logrus is a structured logger for Go, completely API compatible with the standard library logger. + + +The simplest way to use Logrus is simply the package-level exported logger: + + package main + + import ( + log "github.com/sirupsen/logrus" + ) + + func main() { + log.WithFields(log.Fields{ + "animal": "walrus", + "number": 1, + "size": 10, + }).Info("A walrus appears") + } + +Output: + time="2015-09-07T08:48:33Z" level=info msg="A walrus appears" animal=walrus number=1 size=10 + +For a full guide visit https://github.com/sirupsen/logrus +*/ +package logrus diff --git a/vendor/github.com/sirupsen/logrus/entry.go b/vendor/github.com/sirupsen/logrus/entry.go new file mode 100644 index 000000000..778f4c9f0 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/entry.go @@ -0,0 +1,288 @@ +package logrus + +import ( + "bytes" + "fmt" + "os" + "sync" + "time" +) + +var bufferPool *sync.Pool + +func init() { + bufferPool = &sync.Pool{ + New: func() interface{} { + return new(bytes.Buffer) + }, + } +} + +// Defines the key when adding errors using WithError. +var ErrorKey = "error" + +// An entry is the final or intermediate Logrus logging entry. It contains all +// the fields passed with WithField{,s}. It's finally logged when Debug, Info, +// Warn, Error, Fatal or Panic is called on it. These objects can be reused and +// passed around as much as you wish to avoid field duplication. +type Entry struct { + Logger *Logger + + // Contains all the fields set by the user. 
+ Data Fields + + // Time at which the log entry was created + Time time.Time + + // Level the log entry was logged at: Debug, Info, Warn, Error, Fatal or Panic + // This field will be set on entry firing and the value will be equal to the one in Logger struct field. + Level Level + + // Message passed to Debug, Info, Warn, Error, Fatal or Panic + Message string + + // When formatter is called in entry.log(), an Buffer may be set to entry + Buffer *bytes.Buffer +} + +func NewEntry(logger *Logger) *Entry { + return &Entry{ + Logger: logger, + // Default is three fields, give a little extra room + Data: make(Fields, 5), + } +} + +// Returns the string representation from the reader and ultimately the +// formatter. +func (entry *Entry) String() (string, error) { + serialized, err := entry.Logger.Formatter.Format(entry) + if err != nil { + return "", err + } + str := string(serialized) + return str, nil +} + +// Add an error as single field (using the key defined in ErrorKey) to the Entry. +func (entry *Entry) WithError(err error) *Entry { + return entry.WithField(ErrorKey, err) +} + +// Add a single field to the Entry. +func (entry *Entry) WithField(key string, value interface{}) *Entry { + return entry.WithFields(Fields{key: value}) +} + +// Add a map of fields to the Entry. +func (entry *Entry) WithFields(fields Fields) *Entry { + data := make(Fields, len(entry.Data)+len(fields)) + for k, v := range entry.Data { + data[k] = v + } + for k, v := range fields { + data[k] = v + } + return &Entry{Logger: entry.Logger, Data: data} +} + +// This function is not declared with a pointer value because otherwise +// race conditions will occur when using multiple goroutines +func (entry Entry) log(level Level, msg string) { + var buffer *bytes.Buffer + entry.Time = time.Now() + entry.Level = level + entry.Message = msg + + entry.fireHooks() + + buffer = bufferPool.Get().(*bytes.Buffer) + buffer.Reset() + defer bufferPool.Put(buffer) + entry.Buffer = buffer + + entry.write() + + entry.Buffer = nil + + // To avoid Entry#log() returning a value that only would make sense for + // panic() to use in Entry#Panic(), we avoid the allocation by checking + // directly here. + if level <= PanicLevel { + panic(&entry) + } +} + +// This function is not declared with a pointer value because otherwise +// race conditions will occur when using multiple goroutines +func (entry Entry) fireHooks() { + entry.Logger.mu.Lock() + defer entry.Logger.mu.Unlock() + err := entry.Logger.Hooks.Fire(entry.Level, &entry) + if err != nil { + fmt.Fprintf(os.Stderr, "Failed to fire hook: %v\n", err) + } +} + +func (entry *Entry) write() { + serialized, err := entry.Logger.Formatter.Format(entry) + entry.Logger.mu.Lock() + defer entry.Logger.mu.Unlock() + if err != nil { + fmt.Fprintf(os.Stderr, "Failed to obtain reader, %v\n", err) + } else { + _, err = entry.Logger.Out.Write(serialized) + if err != nil { + fmt.Fprintf(os.Stderr, "Failed to write to log, %v\n", err) + } + } +} + +func (entry *Entry) Debug(args ...interface{}) { + if entry.Logger.level() >= DebugLevel { + entry.log(DebugLevel, fmt.Sprint(args...)) + } +} + +func (entry *Entry) Print(args ...interface{}) { + entry.Info(args...) 
+} + +func (entry *Entry) Info(args ...interface{}) { + if entry.Logger.level() >= InfoLevel { + entry.log(InfoLevel, fmt.Sprint(args...)) + } +} + +func (entry *Entry) Warn(args ...interface{}) { + if entry.Logger.level() >= WarnLevel { + entry.log(WarnLevel, fmt.Sprint(args...)) + } +} + +func (entry *Entry) Warning(args ...interface{}) { + entry.Warn(args...) +} + +func (entry *Entry) Error(args ...interface{}) { + if entry.Logger.level() >= ErrorLevel { + entry.log(ErrorLevel, fmt.Sprint(args...)) + } +} + +func (entry *Entry) Fatal(args ...interface{}) { + if entry.Logger.level() >= FatalLevel { + entry.log(FatalLevel, fmt.Sprint(args...)) + } + Exit(1) +} + +func (entry *Entry) Panic(args ...interface{}) { + if entry.Logger.level() >= PanicLevel { + entry.log(PanicLevel, fmt.Sprint(args...)) + } + panic(fmt.Sprint(args...)) +} + +// Entry Printf family functions + +func (entry *Entry) Debugf(format string, args ...interface{}) { + if entry.Logger.level() >= DebugLevel { + entry.Debug(fmt.Sprintf(format, args...)) + } +} + +func (entry *Entry) Infof(format string, args ...interface{}) { + if entry.Logger.level() >= InfoLevel { + entry.Info(fmt.Sprintf(format, args...)) + } +} + +func (entry *Entry) Printf(format string, args ...interface{}) { + entry.Infof(format, args...) +} + +func (entry *Entry) Warnf(format string, args ...interface{}) { + if entry.Logger.level() >= WarnLevel { + entry.Warn(fmt.Sprintf(format, args...)) + } +} + +func (entry *Entry) Warningf(format string, args ...interface{}) { + entry.Warnf(format, args...) +} + +func (entry *Entry) Errorf(format string, args ...interface{}) { + if entry.Logger.level() >= ErrorLevel { + entry.Error(fmt.Sprintf(format, args...)) + } +} + +func (entry *Entry) Fatalf(format string, args ...interface{}) { + if entry.Logger.level() >= FatalLevel { + entry.Fatal(fmt.Sprintf(format, args...)) + } + Exit(1) +} + +func (entry *Entry) Panicf(format string, args ...interface{}) { + if entry.Logger.level() >= PanicLevel { + entry.Panic(fmt.Sprintf(format, args...)) + } +} + +// Entry Println family functions + +func (entry *Entry) Debugln(args ...interface{}) { + if entry.Logger.level() >= DebugLevel { + entry.Debug(entry.sprintlnn(args...)) + } +} + +func (entry *Entry) Infoln(args ...interface{}) { + if entry.Logger.level() >= InfoLevel { + entry.Info(entry.sprintlnn(args...)) + } +} + +func (entry *Entry) Println(args ...interface{}) { + entry.Infoln(args...) +} + +func (entry *Entry) Warnln(args ...interface{}) { + if entry.Logger.level() >= WarnLevel { + entry.Warn(entry.sprintlnn(args...)) + } +} + +func (entry *Entry) Warningln(args ...interface{}) { + entry.Warnln(args...) +} + +func (entry *Entry) Errorln(args ...interface{}) { + if entry.Logger.level() >= ErrorLevel { + entry.Error(entry.sprintlnn(args...)) + } +} + +func (entry *Entry) Fatalln(args ...interface{}) { + if entry.Logger.level() >= FatalLevel { + entry.Fatal(entry.sprintlnn(args...)) + } + Exit(1) +} + +func (entry *Entry) Panicln(args ...interface{}) { + if entry.Logger.level() >= PanicLevel { + entry.Panic(entry.sprintlnn(args...)) + } +} + +// Sprintlnn => Sprint no newline. This is to get the behavior of how +// fmt.Sprintln where spaces are always added between operands, regardless of +// their type. Instead of vendoring the Sprintln implementation to spare a +// string allocation, we do the simplest thing. +func (entry *Entry) sprintlnn(args ...interface{}) string { + msg := fmt.Sprintln(args...) 
+ return msg[:len(msg)-1] +} diff --git a/vendor/github.com/sirupsen/logrus/exported.go b/vendor/github.com/sirupsen/logrus/exported.go new file mode 100644 index 000000000..013183eda --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/exported.go @@ -0,0 +1,193 @@ +package logrus + +import ( + "io" +) + +var ( + // std is the name of the standard logger in stdlib `log` + std = New() +) + +func StandardLogger() *Logger { + return std +} + +// SetOutput sets the standard logger output. +func SetOutput(out io.Writer) { + std.mu.Lock() + defer std.mu.Unlock() + std.Out = out +} + +// SetFormatter sets the standard logger formatter. +func SetFormatter(formatter Formatter) { + std.mu.Lock() + defer std.mu.Unlock() + std.Formatter = formatter +} + +// SetLevel sets the standard logger level. +func SetLevel(level Level) { + std.mu.Lock() + defer std.mu.Unlock() + std.SetLevel(level) +} + +// GetLevel returns the standard logger level. +func GetLevel() Level { + std.mu.Lock() + defer std.mu.Unlock() + return std.level() +} + +// AddHook adds a hook to the standard logger hooks. +func AddHook(hook Hook) { + std.mu.Lock() + defer std.mu.Unlock() + std.Hooks.Add(hook) +} + +// WithError creates an entry from the standard logger and adds an error to it, using the value defined in ErrorKey as key. +func WithError(err error) *Entry { + return std.WithField(ErrorKey, err) +} + +// WithField creates an entry from the standard logger and adds a field to +// it. If you want multiple fields, use `WithFields`. +// +// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal +// or Panic on the Entry it returns. +func WithField(key string, value interface{}) *Entry { + return std.WithField(key, value) +} + +// WithFields creates an entry from the standard logger and adds multiple +// fields to it. This is simply a helper for `WithField`, invoking it +// once for each field. +// +// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal +// or Panic on the Entry it returns. +func WithFields(fields Fields) *Entry { + return std.WithFields(fields) +} + +// Debug logs a message at level Debug on the standard logger. +func Debug(args ...interface{}) { + std.Debug(args...) +} + +// Print logs a message at level Info on the standard logger. +func Print(args ...interface{}) { + std.Print(args...) +} + +// Info logs a message at level Info on the standard logger. +func Info(args ...interface{}) { + std.Info(args...) +} + +// Warn logs a message at level Warn on the standard logger. +func Warn(args ...interface{}) { + std.Warn(args...) +} + +// Warning logs a message at level Warn on the standard logger. +func Warning(args ...interface{}) { + std.Warning(args...) +} + +// Error logs a message at level Error on the standard logger. +func Error(args ...interface{}) { + std.Error(args...) +} + +// Panic logs a message at level Panic on the standard logger. +func Panic(args ...interface{}) { + std.Panic(args...) +} + +// Fatal logs a message at level Fatal on the standard logger. +func Fatal(args ...interface{}) { + std.Fatal(args...) +} + +// Debugf logs a message at level Debug on the standard logger. +func Debugf(format string, args ...interface{}) { + std.Debugf(format, args...) +} + +// Printf logs a message at level Info on the standard logger. +func Printf(format string, args ...interface{}) { + std.Printf(format, args...) +} + +// Infof logs a message at level Info on the standard logger. +func Infof(format string, args ...interface{}) { + std.Infof(format, args...) 
+} + +// Warnf logs a message at level Warn on the standard logger. +func Warnf(format string, args ...interface{}) { + std.Warnf(format, args...) +} + +// Warningf logs a message at level Warn on the standard logger. +func Warningf(format string, args ...interface{}) { + std.Warningf(format, args...) +} + +// Errorf logs a message at level Error on the standard logger. +func Errorf(format string, args ...interface{}) { + std.Errorf(format, args...) +} + +// Panicf logs a message at level Panic on the standard logger. +func Panicf(format string, args ...interface{}) { + std.Panicf(format, args...) +} + +// Fatalf logs a message at level Fatal on the standard logger. +func Fatalf(format string, args ...interface{}) { + std.Fatalf(format, args...) +} + +// Debugln logs a message at level Debug on the standard logger. +func Debugln(args ...interface{}) { + std.Debugln(args...) +} + +// Println logs a message at level Info on the standard logger. +func Println(args ...interface{}) { + std.Println(args...) +} + +// Infoln logs a message at level Info on the standard logger. +func Infoln(args ...interface{}) { + std.Infoln(args...) +} + +// Warnln logs a message at level Warn on the standard logger. +func Warnln(args ...interface{}) { + std.Warnln(args...) +} + +// Warningln logs a message at level Warn on the standard logger. +func Warningln(args ...interface{}) { + std.Warningln(args...) +} + +// Errorln logs a message at level Error on the standard logger. +func Errorln(args ...interface{}) { + std.Errorln(args...) +} + +// Panicln logs a message at level Panic on the standard logger. +func Panicln(args ...interface{}) { + std.Panicln(args...) +} + +// Fatalln logs a message at level Fatal on the standard logger. +func Fatalln(args ...interface{}) { + std.Fatalln(args...) +} diff --git a/vendor/github.com/sirupsen/logrus/formatter.go b/vendor/github.com/sirupsen/logrus/formatter.go new file mode 100644 index 000000000..b183ff5b1 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/formatter.go @@ -0,0 +1,45 @@ +package logrus + +import "time" + +const defaultTimestampFormat = time.RFC3339 + +// The Formatter interface is used to implement a custom Formatter. It takes an +// `Entry`. It exposes all the fields, including the default ones: +// +// * `entry.Data["msg"]`. The message passed from Info, Warn, Error .. +// * `entry.Data["time"]`. The timestamp. +// * `entry.Data["level"]. The level the entry was logged at. +// +// Any additional fields added with `WithField` or `WithFields` are also in +// `entry.Data`. Format is expected to return an array of bytes which are then +// logged to `logger.Out`. +type Formatter interface { + Format(*Entry) ([]byte, error) +} + +// This is to not silently overwrite `time`, `msg` and `level` fields when +// dumping it. If this code wasn't there doing: +// +// logrus.WithField("level", 1).Info("hello") +// +// Would just silently drop the user provided level. Instead with this code +// it'll logged as: +// +// {"level": "info", "fields.level": 1, "msg": "hello", "time": "..."} +// +// It's not exported because it's still using Data in an opinionated way. It's to +// avoid code duplication between the two default formatters. 
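A minimal sketch of a custom formatter built against the `Formatter` interface above (the `KeyValueFormatter` name and the demo program are illustrative only, not part of the vendored package):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/sirupsen/logrus"
)

// KeyValueFormatter is a hypothetical Formatter that renders each entry as
// "LEVEL message key=value ..." on a single line. Field order is whatever map
// iteration yields; the built-in TextFormatter sorts keys instead.
type KeyValueFormatter struct{}

// Format implements logrus.Formatter.
func (f *KeyValueFormatter) Format(entry *logrus.Entry) ([]byte, error) {
	var b bytes.Buffer
	fmt.Fprintf(&b, "%s %s", entry.Level.String(), entry.Message)
	for k, v := range entry.Data {
		fmt.Fprintf(&b, " %s=%v", k, v)
	}
	b.WriteByte('\n')
	return b.Bytes(), nil
}

func main() {
	logrus.SetFormatter(&KeyValueFormatter{})
	logrus.WithField("animal", "walrus").Info("A walrus appears")
}
```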
+func prefixFieldClashes(data Fields) { + if t, ok := data["time"]; ok { + data["fields.time"] = t + } + + if m, ok := data["msg"]; ok { + data["fields.msg"] = m + } + + if l, ok := data["level"]; ok { + data["fields.level"] = l + } +} diff --git a/vendor/github.com/sirupsen/logrus/hooks.go b/vendor/github.com/sirupsen/logrus/hooks.go new file mode 100644 index 000000000..3f151cdc3 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/hooks.go @@ -0,0 +1,34 @@ +package logrus + +// A hook to be fired when logging on the logging levels returned from +// `Levels()` on your implementation of the interface. Note that this is not +// fired in a goroutine or a channel with workers, you should handle such +// functionality yourself if your call is non-blocking and you don't wish for +// the logging calls for levels returned from `Levels()` to block. +type Hook interface { + Levels() []Level + Fire(*Entry) error +} + +// Internal type for storing the hooks on a logger instance. +type LevelHooks map[Level][]Hook + +// Add a hook to an instance of logger. This is called with +// `log.Hooks.Add(new(MyHook))` where `MyHook` implements the `Hook` interface. +func (hooks LevelHooks) Add(hook Hook) { + for _, level := range hook.Levels() { + hooks[level] = append(hooks[level], hook) + } +} + +// Fire all the hooks for the passed level. Used by `entry.log` to fire +// appropriate hooks for a log entry. +func (hooks LevelHooks) Fire(level Level, entry *Entry) error { + for _, hook := range hooks[level] { + if err := hook.Fire(entry); err != nil { + return err + } + } + + return nil +} diff --git a/vendor/github.com/sirupsen/logrus/json_formatter.go b/vendor/github.com/sirupsen/logrus/json_formatter.go new file mode 100644 index 000000000..fb01c1b10 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/json_formatter.go @@ -0,0 +1,79 @@ +package logrus + +import ( + "encoding/json" + "fmt" +) + +type fieldKey string + +// FieldMap allows customization of the key names for default fields. +type FieldMap map[fieldKey]string + +// Default key names for the default fields +const ( + FieldKeyMsg = "msg" + FieldKeyLevel = "level" + FieldKeyTime = "time" +) + +func (f FieldMap) resolve(key fieldKey) string { + if k, ok := f[key]; ok { + return k + } + + return string(key) +} + +// JSONFormatter formats logs into parsable json +type JSONFormatter struct { + // TimestampFormat sets the format used for marshaling timestamps. + TimestampFormat string + + // DisableTimestamp allows disabling automatic timestamps in output + DisableTimestamp bool + + // FieldMap allows users to customize the names of keys for default fields. 
+ // As an example: + // formatter := &JSONFormatter{ + // FieldMap: FieldMap{ + // FieldKeyTime: "@timestamp", + // FieldKeyLevel: "@level", + // FieldKeyMsg: "@message", + // }, + // } + FieldMap FieldMap +} + +// Format renders a single log entry +func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) { + data := make(Fields, len(entry.Data)+3) + for k, v := range entry.Data { + switch v := v.(type) { + case error: + // Otherwise errors are ignored by `encoding/json` + // https://github.com/sirupsen/logrus/issues/137 + data[k] = v.Error() + default: + data[k] = v + } + } + prefixFieldClashes(data) + + timestampFormat := f.TimestampFormat + if timestampFormat == "" { + timestampFormat = defaultTimestampFormat + } + + if !f.DisableTimestamp { + data[f.FieldMap.resolve(FieldKeyTime)] = entry.Time.Format(timestampFormat) + } + data[f.FieldMap.resolve(FieldKeyMsg)] = entry.Message + data[f.FieldMap.resolve(FieldKeyLevel)] = entry.Level.String() + + serialized, err := json.Marshal(data) + if err != nil { + return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err) + } + return append(serialized, '\n'), nil +} diff --git a/vendor/github.com/sirupsen/logrus/logger.go b/vendor/github.com/sirupsen/logrus/logger.go new file mode 100644 index 000000000..fdaf8a653 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/logger.go @@ -0,0 +1,323 @@ +package logrus + +import ( + "io" + "os" + "sync" + "sync/atomic" +) + +type Logger struct { + // The logs are `io.Copy`'d to this in a mutex. It's common to set this to a + // file, or leave it default which is `os.Stderr`. You can also set this to + // something more adventorous, such as logging to Kafka. + Out io.Writer + // Hooks for the logger instance. These allow firing events based on logging + // levels and log entries. For example, to send errors to an error tracking + // service, log to StatsD or dump the core on fatal errors. + Hooks LevelHooks + // All log entries pass through the formatter before logged to Out. The + // included formatters are `TextFormatter` and `JSONFormatter` for which + // TextFormatter is the default. In development (when a TTY is attached) it + // logs with colors, but to a file it wouldn't. You can easily implement your + // own that implements the `Formatter` interface, see the `README` or included + // formatters for examples. + Formatter Formatter + // The logging level the logger should log at. This is typically (and defaults + // to) `logrus.Info`, which allows Info(), Warn(), Error() and Fatal() to be + // logged. + Level Level + // Used to sync writing to the log. Locking is enabled by Default + mu MutexWrap + // Reusable empty entry + entryPool sync.Pool +} + +type MutexWrap struct { + lock sync.Mutex + disabled bool +} + +func (mw *MutexWrap) Lock() { + if !mw.disabled { + mw.lock.Lock() + } +} + +func (mw *MutexWrap) Unlock() { + if !mw.disabled { + mw.lock.Unlock() + } +} + +func (mw *MutexWrap) Disable() { + mw.disabled = true +} + +// Creates a new logger. Configuration should be set by changing `Formatter`, +// `Out` and `Hooks` directly on the default logger instance. You can also just +// instantiate your own: +// +// var log = &Logger{ +// Out: os.Stderr, +// Formatter: new(JSONFormatter), +// Hooks: make(LevelHooks), +// Level: logrus.DebugLevel, +// } +// +// It's recommended to make this a global instance called `log`. 
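The `Hook` interface and `LevelHooks` registry above are the mechanism logrus uses to fan entries out to external sinks. A minimal sketch of wiring a hook into the standard logger, assuming a made-up `ErrorCounterHook`:

```go
package main

import (
	"fmt"
	"sync/atomic"

	"github.com/sirupsen/logrus"
)

// ErrorCounterHook is a hypothetical hook that counts every entry logged at
// ErrorLevel or above.
type ErrorCounterHook struct {
	count uint64
}

// Levels tells logrus which levels this hook fires for.
func (h *ErrorCounterHook) Levels() []logrus.Level {
	return []logrus.Level{logrus.ErrorLevel, logrus.FatalLevel, logrus.PanicLevel}
}

// Fire receives the entry; a returned error is reported to stderr by logrus.
func (h *ErrorCounterHook) Fire(entry *logrus.Entry) error {
	atomic.AddUint64(&h.count, 1)
	return nil
}

func main() {
	hook := &ErrorCounterHook{}
	logrus.AddHook(hook)
	logrus.Error("something went wrong")          // fires the hook
	fmt.Println(atomic.LoadUint64(&hook.count)) // 1
}
```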
+func New() *Logger { + return &Logger{ + Out: os.Stderr, + Formatter: new(TextFormatter), + Hooks: make(LevelHooks), + Level: InfoLevel, + } +} + +func (logger *Logger) newEntry() *Entry { + entry, ok := logger.entryPool.Get().(*Entry) + if ok { + return entry + } + return NewEntry(logger) +} + +func (logger *Logger) releaseEntry(entry *Entry) { + logger.entryPool.Put(entry) +} + +// Adds a field to the log entry, note that it doesn't log until you call +// Debug, Print, Info, Warn, Fatal or Panic. It only creates a log entry. +// If you want multiple fields, use `WithFields`. +func (logger *Logger) WithField(key string, value interface{}) *Entry { + entry := logger.newEntry() + defer logger.releaseEntry(entry) + return entry.WithField(key, value) +} + +// Adds a struct of fields to the log entry. All it does is call `WithField` for +// each `Field`. +func (logger *Logger) WithFields(fields Fields) *Entry { + entry := logger.newEntry() + defer logger.releaseEntry(entry) + return entry.WithFields(fields) +} + +// Add an error as single field to the log entry. All it does is call +// `WithError` for the given `error`. +func (logger *Logger) WithError(err error) *Entry { + entry := logger.newEntry() + defer logger.releaseEntry(entry) + return entry.WithError(err) +} + +func (logger *Logger) Debugf(format string, args ...interface{}) { + if logger.level() >= DebugLevel { + entry := logger.newEntry() + entry.Debugf(format, args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Infof(format string, args ...interface{}) { + if logger.level() >= InfoLevel { + entry := logger.newEntry() + entry.Infof(format, args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Printf(format string, args ...interface{}) { + entry := logger.newEntry() + entry.Printf(format, args...) + logger.releaseEntry(entry) +} + +func (logger *Logger) Warnf(format string, args ...interface{}) { + if logger.level() >= WarnLevel { + entry := logger.newEntry() + entry.Warnf(format, args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Warningf(format string, args ...interface{}) { + if logger.level() >= WarnLevel { + entry := logger.newEntry() + entry.Warnf(format, args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Errorf(format string, args ...interface{}) { + if logger.level() >= ErrorLevel { + entry := logger.newEntry() + entry.Errorf(format, args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Fatalf(format string, args ...interface{}) { + if logger.level() >= FatalLevel { + entry := logger.newEntry() + entry.Fatalf(format, args...) + logger.releaseEntry(entry) + } + Exit(1) +} + +func (logger *Logger) Panicf(format string, args ...interface{}) { + if logger.level() >= PanicLevel { + entry := logger.newEntry() + entry.Panicf(format, args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Debug(args ...interface{}) { + if logger.level() >= DebugLevel { + entry := logger.newEntry() + entry.Debug(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Info(args ...interface{}) { + if logger.level() >= InfoLevel { + entry := logger.newEntry() + entry.Info(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Print(args ...interface{}) { + entry := logger.newEntry() + entry.Info(args...) + logger.releaseEntry(entry) +} + +func (logger *Logger) Warn(args ...interface{}) { + if logger.level() >= WarnLevel { + entry := logger.newEntry() + entry.Warn(args...) 
+ logger.releaseEntry(entry) + } +} + +func (logger *Logger) Warning(args ...interface{}) { + if logger.level() >= WarnLevel { + entry := logger.newEntry() + entry.Warn(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Error(args ...interface{}) { + if logger.level() >= ErrorLevel { + entry := logger.newEntry() + entry.Error(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Fatal(args ...interface{}) { + if logger.level() >= FatalLevel { + entry := logger.newEntry() + entry.Fatal(args...) + logger.releaseEntry(entry) + } + Exit(1) +} + +func (logger *Logger) Panic(args ...interface{}) { + if logger.level() >= PanicLevel { + entry := logger.newEntry() + entry.Panic(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Debugln(args ...interface{}) { + if logger.level() >= DebugLevel { + entry := logger.newEntry() + entry.Debugln(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Infoln(args ...interface{}) { + if logger.level() >= InfoLevel { + entry := logger.newEntry() + entry.Infoln(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Println(args ...interface{}) { + entry := logger.newEntry() + entry.Println(args...) + logger.releaseEntry(entry) +} + +func (logger *Logger) Warnln(args ...interface{}) { + if logger.level() >= WarnLevel { + entry := logger.newEntry() + entry.Warnln(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Warningln(args ...interface{}) { + if logger.level() >= WarnLevel { + entry := logger.newEntry() + entry.Warnln(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Errorln(args ...interface{}) { + if logger.level() >= ErrorLevel { + entry := logger.newEntry() + entry.Errorln(args...) + logger.releaseEntry(entry) + } +} + +func (logger *Logger) Fatalln(args ...interface{}) { + if logger.level() >= FatalLevel { + entry := logger.newEntry() + entry.Fatalln(args...) + logger.releaseEntry(entry) + } + Exit(1) +} + +func (logger *Logger) Panicln(args ...interface{}) { + if logger.level() >= PanicLevel { + entry := logger.newEntry() + entry.Panicln(args...) + logger.releaseEntry(entry) + } +} + +//When file is opened with appending mode, it's safe to +//write concurrently to a file (within 4k message on Linux). +//In these cases user can choose to disable the lock. +func (logger *Logger) SetNoLock() { + logger.mu.Disable() +} + +func (logger *Logger) level() Level { + return Level(atomic.LoadUint32((*uint32)(&logger.Level))) +} + +func (logger *Logger) SetLevel(level Level) { + atomic.StoreUint32((*uint32)(&logger.Level), uint32(level)) +} + +func (logger *Logger) AddHook(hook Hook) { + logger.mu.Lock() + defer logger.mu.Unlock() + logger.Hooks.Add(hook) +} diff --git a/vendor/github.com/sirupsen/logrus/logrus.go b/vendor/github.com/sirupsen/logrus/logrus.go new file mode 100644 index 000000000..dd3899974 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/logrus.go @@ -0,0 +1,143 @@ +package logrus + +import ( + "fmt" + "log" + "strings" +) + +// Fields type, used to pass to `WithFields`. +type Fields map[string]interface{} + +// Level type +type Level uint32 + +// Convert the Level to a string. E.g. PanicLevel becomes "panic". 
+func (level Level) String() string { + switch level { + case DebugLevel: + return "debug" + case InfoLevel: + return "info" + case WarnLevel: + return "warning" + case ErrorLevel: + return "error" + case FatalLevel: + return "fatal" + case PanicLevel: + return "panic" + } + + return "unknown" +} + +// ParseLevel takes a string level and returns the Logrus log level constant. +func ParseLevel(lvl string) (Level, error) { + switch strings.ToLower(lvl) { + case "panic": + return PanicLevel, nil + case "fatal": + return FatalLevel, nil + case "error": + return ErrorLevel, nil + case "warn", "warning": + return WarnLevel, nil + case "info": + return InfoLevel, nil + case "debug": + return DebugLevel, nil + } + + var l Level + return l, fmt.Errorf("not a valid logrus Level: %q", lvl) +} + +// A constant exposing all logging levels +var AllLevels = []Level{ + PanicLevel, + FatalLevel, + ErrorLevel, + WarnLevel, + InfoLevel, + DebugLevel, +} + +// These are the different logging levels. You can set the logging level to log +// on your instance of logger, obtained with `logrus.New()`. +const ( + // PanicLevel level, highest level of severity. Logs and then calls panic with the + // message passed to Debug, Info, ... + PanicLevel Level = iota + // FatalLevel level. Logs and then calls `os.Exit(1)`. It will exit even if the + // logging level is set to Panic. + FatalLevel + // ErrorLevel level. Logs. Used for errors that should definitely be noted. + // Commonly used for hooks to send errors to an error tracking service. + ErrorLevel + // WarnLevel level. Non-critical entries that deserve eyes. + WarnLevel + // InfoLevel level. General operational entries about what's going on inside the + // application. + InfoLevel + // DebugLevel level. Usually only enabled when debugging. Very verbose logging. + DebugLevel +) + +// Won't compile if StdLogger can't be realized by a log.Logger +var ( + _ StdLogger = &log.Logger{} + _ StdLogger = &Entry{} + _ StdLogger = &Logger{} +) + +// StdLogger is what your logrus-enabled library should take, that way +// it'll accept a stdlib logger and a logrus logger. There's no standard +// interface, this is the closest we get, unfortunately. 
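`StdLogger`, declared just below, is the small surface shared by the standard library logger and logrus. A sketch of library code written against it (the `doWork` helper is invented for illustration):

```go
package main

import (
	"log"
	"os"

	"github.com/sirupsen/logrus"
)

// doWork is a hypothetical library function: it depends only on the small
// StdLogger interface, so callers can pass a stdlib or a logrus logger.
func doWork(l logrus.StdLogger) {
	l.Println("starting work")
}

func main() {
	doWork(log.New(os.Stderr, "", log.LstdFlags)) // *log.Logger
	doWork(logrus.New())                          // *logrus.Logger
	doWork(logrus.WithField("job", "demo"))       // *logrus.Entry
}
```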
+type StdLogger interface { + Print(...interface{}) + Printf(string, ...interface{}) + Println(...interface{}) + + Fatal(...interface{}) + Fatalf(string, ...interface{}) + Fatalln(...interface{}) + + Panic(...interface{}) + Panicf(string, ...interface{}) + Panicln(...interface{}) +} + +// The FieldLogger interface generalizes the Entry and Logger types +type FieldLogger interface { + WithField(key string, value interface{}) *Entry + WithFields(fields Fields) *Entry + WithError(err error) *Entry + + Debugf(format string, args ...interface{}) + Infof(format string, args ...interface{}) + Printf(format string, args ...interface{}) + Warnf(format string, args ...interface{}) + Warningf(format string, args ...interface{}) + Errorf(format string, args ...interface{}) + Fatalf(format string, args ...interface{}) + Panicf(format string, args ...interface{}) + + Debug(args ...interface{}) + Info(args ...interface{}) + Print(args ...interface{}) + Warn(args ...interface{}) + Warning(args ...interface{}) + Error(args ...interface{}) + Fatal(args ...interface{}) + Panic(args ...interface{}) + + Debugln(args ...interface{}) + Infoln(args ...interface{}) + Println(args ...interface{}) + Warnln(args ...interface{}) + Warningln(args ...interface{}) + Errorln(args ...interface{}) + Fatalln(args ...interface{}) + Panicln(args ...interface{}) +} diff --git a/vendor/github.com/sirupsen/logrus/terminal_bsd.go b/vendor/github.com/sirupsen/logrus/terminal_bsd.go new file mode 100644 index 000000000..4880d13d2 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/terminal_bsd.go @@ -0,0 +1,10 @@ +// +build darwin freebsd openbsd netbsd dragonfly +// +build !appengine,!gopherjs + +package logrus + +import "golang.org/x/sys/unix" + +const ioctlReadTermios = unix.TIOCGETA + +type Termios unix.Termios diff --git a/vendor/github.com/sirupsen/logrus/terminal_check_notappengine.go b/vendor/github.com/sirupsen/logrus/terminal_check_notappengine.go new file mode 100644 index 000000000..067047a12 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/terminal_check_notappengine.go @@ -0,0 +1,19 @@ +// +build !appengine,!gopherjs + +package logrus + +import ( + "io" + "os" + + "golang.org/x/crypto/ssh/terminal" +) + +func checkIfTerminal(w io.Writer) bool { + switch v := w.(type) { + case *os.File: + return terminal.IsTerminal(int(v.Fd())) + default: + return false + } +} diff --git a/vendor/github.com/sirupsen/logrus/terminal_linux.go b/vendor/github.com/sirupsen/logrus/terminal_linux.go new file mode 100644 index 000000000..f29a0097c --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/terminal_linux.go @@ -0,0 +1,14 @@ +// Based on ssh/terminal: +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// +build !appengine,!gopherjs + +package logrus + +import "golang.org/x/sys/unix" + +const ioctlReadTermios = unix.TCGETS + +type Termios unix.Termios diff --git a/vendor/github.com/sirupsen/logrus/text_formatter.go b/vendor/github.com/sirupsen/logrus/text_formatter.go new file mode 100644 index 000000000..61b21caea --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/text_formatter.go @@ -0,0 +1,178 @@ +package logrus + +import ( + "bytes" + "fmt" + "sort" + "strings" + "sync" + "time" +) + +const ( + nocolor = 0 + red = 31 + green = 32 + yellow = 33 + blue = 36 + gray = 37 +) + +var ( + baseTimestamp time.Time +) + +func init() { + baseTimestamp = time.Now() +} + +// TextFormatter formats logs into text +type TextFormatter struct { + // Set to true to bypass checking for a TTY before outputting colors. + ForceColors bool + + // Force disabling colors. + DisableColors bool + + // Disable timestamp logging. useful when output is redirected to logging + // system that already adds timestamps. + DisableTimestamp bool + + // Enable logging the full timestamp when a TTY is attached instead of just + // the time passed since beginning of execution. + FullTimestamp bool + + // TimestampFormat to use for display when a full timestamp is printed + TimestampFormat string + + // The fields are sorted by default for a consistent output. For applications + // that log extremely frequently and don't use the JSON formatter this may not + // be desired. + DisableSorting bool + + // QuoteEmptyFields will wrap empty fields in quotes if true + QuoteEmptyFields bool + + // Whether the logger's out is to a terminal + isTerminal bool + + sync.Once +} + +func (f *TextFormatter) init(entry *Entry) { + if entry.Logger != nil { + f.isTerminal = checkIfTerminal(entry.Logger.Out) + } +} + +// Format renders a single log entry +func (f *TextFormatter) Format(entry *Entry) ([]byte, error) { + var b *bytes.Buffer + keys := make([]string, 0, len(entry.Data)) + for k := range entry.Data { + keys = append(keys, k) + } + + if !f.DisableSorting { + sort.Strings(keys) + } + if entry.Buffer != nil { + b = entry.Buffer + } else { + b = &bytes.Buffer{} + } + + prefixFieldClashes(entry.Data) + + f.Do(func() { f.init(entry) }) + + isColored := (f.ForceColors || f.isTerminal) && !f.DisableColors + + timestampFormat := f.TimestampFormat + if timestampFormat == "" { + timestampFormat = defaultTimestampFormat + } + if isColored { + f.printColored(b, entry, keys, timestampFormat) + } else { + if !f.DisableTimestamp { + f.appendKeyValue(b, "time", entry.Time.Format(timestampFormat)) + } + f.appendKeyValue(b, "level", entry.Level.String()) + if entry.Message != "" { + f.appendKeyValue(b, "msg", entry.Message) + } + for _, key := range keys { + f.appendKeyValue(b, key, entry.Data[key]) + } + } + + b.WriteByte('\n') + return b.Bytes(), nil +} + +func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []string, timestampFormat string) { + var levelColor int + switch entry.Level { + case DebugLevel: + levelColor = gray + case WarnLevel: + levelColor = yellow + case ErrorLevel, FatalLevel, PanicLevel: + levelColor = red + default: + levelColor = blue + } + + levelText := strings.ToUpper(entry.Level.String())[0:4] + + if f.DisableTimestamp { + fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m %-44s ", levelColor, levelText, entry.Message) + } else if !f.FullTimestamp { + fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%04d] %-44s ", levelColor, levelText, int(entry.Time.Sub(baseTimestamp)/time.Second), entry.Message) + } else { + fmt.Fprintf(b, 
"\x1b[%dm%s\x1b[0m[%s] %-44s ", levelColor, levelText, entry.Time.Format(timestampFormat), entry.Message) + } + for _, k := range keys { + v := entry.Data[k] + fmt.Fprintf(b, " \x1b[%dm%s\x1b[0m=", levelColor, k) + f.appendValue(b, v) + } +} + +func (f *TextFormatter) needsQuoting(text string) bool { + if f.QuoteEmptyFields && len(text) == 0 { + return true + } + for _, ch := range text { + if !((ch >= 'a' && ch <= 'z') || + (ch >= 'A' && ch <= 'Z') || + (ch >= '0' && ch <= '9') || + ch == '-' || ch == '.' || ch == '_' || ch == '/' || ch == '@' || ch == '^' || ch == '+') { + return true + } + } + return false +} + +func (f *TextFormatter) appendKeyValue(b *bytes.Buffer, key string, value interface{}) { + if b.Len() > 0 { + b.WriteByte(' ') + } + b.WriteString(key) + b.WriteByte('=') + f.appendValue(b, value) +} + +func (f *TextFormatter) appendValue(b *bytes.Buffer, value interface{}) { + stringVal, ok := value.(string) + if !ok { + stringVal = fmt.Sprint(value) + } + + if !f.needsQuoting(stringVal) { + b.WriteString(stringVal) + } else { + b.WriteString(fmt.Sprintf("%q", stringVal)) + } +} diff --git a/vendor/github.com/sirupsen/logrus/writer.go b/vendor/github.com/sirupsen/logrus/writer.go new file mode 100644 index 000000000..7bdebedc6 --- /dev/null +++ b/vendor/github.com/sirupsen/logrus/writer.go @@ -0,0 +1,62 @@ +package logrus + +import ( + "bufio" + "io" + "runtime" +) + +func (logger *Logger) Writer() *io.PipeWriter { + return logger.WriterLevel(InfoLevel) +} + +func (logger *Logger) WriterLevel(level Level) *io.PipeWriter { + return NewEntry(logger).WriterLevel(level) +} + +func (entry *Entry) Writer() *io.PipeWriter { + return entry.WriterLevel(InfoLevel) +} + +func (entry *Entry) WriterLevel(level Level) *io.PipeWriter { + reader, writer := io.Pipe() + + var printFunc func(args ...interface{}) + + switch level { + case DebugLevel: + printFunc = entry.Debug + case InfoLevel: + printFunc = entry.Info + case WarnLevel: + printFunc = entry.Warn + case ErrorLevel: + printFunc = entry.Error + case FatalLevel: + printFunc = entry.Fatal + case PanicLevel: + printFunc = entry.Panic + default: + printFunc = entry.Print + } + + go entry.writerScanner(reader, printFunc) + runtime.SetFinalizer(writer, writerFinalizer) + + return writer +} + +func (entry *Entry) writerScanner(reader *io.PipeReader, printFunc func(args ...interface{})) { + scanner := bufio.NewScanner(reader) + for scanner.Scan() { + printFunc(scanner.Text()) + } + if err := scanner.Err(); err != nil { + entry.Errorf("Error while reading from Writer: %s", err) + } + reader.Close() +} + +func writerFinalizer(writer *io.PipeWriter) { + writer.Close() +} diff --git a/vendor/github.com/stretchr/objx/Gopkg.lock b/vendor/github.com/stretchr/objx/Gopkg.lock new file mode 100644 index 000000000..1f5739c91 --- /dev/null +++ b/vendor/github.com/stretchr/objx/Gopkg.lock @@ -0,0 +1,27 @@ +# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'. 
+ + +[[projects]] + name = "github.com/davecgh/go-spew" + packages = ["spew"] + revision = "346938d642f2ec3594ed81d874461961cd0faa76" + version = "v1.1.0" + +[[projects]] + name = "github.com/pmezard/go-difflib" + packages = ["difflib"] + revision = "792786c7400a136282c1664665ae0a8db921c6c2" + version = "v1.0.0" + +[[projects]] + name = "github.com/stretchr/testify" + packages = ["assert"] + revision = "b91bfb9ebec76498946beb6af7c0230c7cc7ba6c" + version = "v1.2.0" + +[solve-meta] + analyzer-name = "dep" + analyzer-version = 1 + inputs-digest = "50e2495ec1af6e2f7ffb2f3551e4300d30357d7c7fe38ff6056469fa9cfb3673" + solver-name = "gps-cdcl" + solver-version = 1 diff --git a/vendor/github.com/stretchr/objx/Gopkg.toml b/vendor/github.com/stretchr/objx/Gopkg.toml new file mode 100644 index 000000000..f87e18eb5 --- /dev/null +++ b/vendor/github.com/stretchr/objx/Gopkg.toml @@ -0,0 +1,3 @@ +[[constraint]] + name = "github.com/stretchr/testify" + version = "~1.2.0" diff --git a/vendor/github.com/stretchr/objx/LICENSE b/vendor/github.com/stretchr/objx/LICENSE new file mode 100644 index 000000000..44d4d9d5a --- /dev/null +++ b/vendor/github.com/stretchr/objx/LICENSE @@ -0,0 +1,22 @@ +The MIT License + +Copyright (c) 2014 Stretchr, Inc. +Copyright (c) 2017-2018 objx contributors + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/stretchr/objx/README.md b/vendor/github.com/stretchr/objx/README.md new file mode 100644 index 000000000..4e2400eb1 --- /dev/null +++ b/vendor/github.com/stretchr/objx/README.md @@ -0,0 +1,78 @@ +# Objx +[![Build Status](https://travis-ci.org/stretchr/objx.svg?branch=master)](https://travis-ci.org/stretchr/objx) +[![Go Report Card](https://goreportcard.com/badge/github.com/stretchr/objx)](https://goreportcard.com/report/github.com/stretchr/objx) +[![Sourcegraph](https://sourcegraph.com/github.com/stretchr/objx/-/badge.svg)](https://sourcegraph.com/github.com/stretchr/objx) +[![GoDoc](https://godoc.org/github.com/stretchr/objx?status.svg)](https://godoc.org/github.com/stretchr/objx) + +Objx - Go package for dealing with maps, slices, JSON and other data. 
+ +Get started: + +- Install Objx with [one line of code](#installation), or [update it with another](#staying-up-to-date) +- Check out the API Documentation http://godoc.org/github.com/stretchr/objx + +## Overview +Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes a powerful `Get` method (among others) that allows you to easily and quickly get access to data within the map, without having to worry too much about type assertions, missing data, default values etc. + +### Pattern +Objx uses a preditable pattern to make access data from within `map[string]interface{}` easy. Call one of the `objx.` functions to create your `objx.Map` to get going: + + m, err := objx.FromJSON(json) + +NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong, the rest will be optimistic and try to figure things out without panicking. + +Use `Get` to access the value you're interested in. You can use dot and array +notation too: + + m.Get("places[0].latlng") + +Once you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type. + + if m.Get("code").IsStr() { // Your code... } + +Or you can just assume the type, and use one of the strong type methods to extract the real value: + + m.Get("code").Int() + +If there's no value there (or if it's the wrong type) then a default value will be returned, or you can be explicit about the default value. + + Get("code").Int(-1) + +If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, manipulating and selecting that data. You can find out more by exploring the index below. + +### Reading data +A simple example of how to use Objx: + + // Use MustFromJSON to make an objx.Map from some JSON + m := objx.MustFromJSON(`{"name": "Mat", "age": 30}`) + + // Get the details + name := m.Get("name").Str() + age := m.Get("age").Int() + + // Get their nickname (or use their name if they don't have one) + nickname := m.Get("nickname").Str(name) + +### Ranging +Since `objx.Map` is a `map[string]interface{}` you can treat it as such. For example, to `range` the data, do what you would expect: + + m := objx.MustFromJSON(json) + for key, value := range m { + // Your code... + } + +## Installation +To install Objx, use go get: + + go get github.com/stretchr/objx + +### Staying up to date +To update Objx to the latest version, run: + + go get -u github.com/stretchr/objx + +### Supported go versions +We support the lastest two major Go versions, which are 1.8 and 1.9 at the moment. + +## Contributing +Please feel free to submit issues, fork the repository and send pull requests! diff --git a/vendor/github.com/stretchr/objx/Taskfile.yml b/vendor/github.com/stretchr/objx/Taskfile.yml new file mode 100644 index 000000000..403b5f06e --- /dev/null +++ b/vendor/github.com/stretchr/objx/Taskfile.yml @@ -0,0 +1,26 @@ +default: + deps: [test] + +dl-deps: + desc: Downloads cli dependencies + cmds: + - go get -u github.com/golang/lint/golint + - go get -u github.com/golang/dep/cmd/dep + +update-deps: + desc: Updates dependencies + cmds: + - dep ensure + - dep ensure -update + - dep prune + +lint: + desc: Runs golint + cmds: + - golint $(ls *.go | grep -v "doc.go") + silent: true + +test: + desc: Runs go tests + cmds: + - go test -race . 
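The accessors file that follows implements the dot/array selector syntax described in the README above. A self-contained sketch of that surface (the JSON document and values are made up for illustration):

```go
package main

import (
	"fmt"

	"github.com/stretchr/objx"
)

func main() {
	m := objx.MustFromJSON(`{"books": [{"title": "Go", "chapters": [{"title": "Intro"}]}]}`)

	// Dot and array notation, as implemented by access() in accessors.go.
	fmt.Println(m.Get("books[0].chapters[0].title").Str()) // "Intro"

	// Missing values fall back to the supplied default instead of panicking.
	fmt.Println(m.Get("books[0].pages").Int(-1)) // -1

	// Set walks the same selector path and assigns in place.
	m.Set("books[0].title", "Learning Go")
	fmt.Println(m.Get("books[0].title").Str()) // "Learning Go"
}
```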
diff --git a/vendor/github.com/stretchr/objx/accessors.go b/vendor/github.com/stretchr/objx/accessors.go new file mode 100644 index 000000000..d95be0ca9 --- /dev/null +++ b/vendor/github.com/stretchr/objx/accessors.go @@ -0,0 +1,171 @@ +package objx + +import ( + "fmt" + "regexp" + "strconv" + "strings" +) + +// arrayAccesRegexString is the regex used to extract the array number +// from the access path +const arrayAccesRegexString = `^(.+)\[([0-9]+)\]$` + +// arrayAccesRegex is the compiled arrayAccesRegexString +var arrayAccesRegex = regexp.MustCompile(arrayAccesRegexString) + +// Get gets the value using the specified selector and +// returns it inside a new Obj object. +// +// If it cannot find the value, Get will return a nil +// value inside an instance of Obj. +// +// Get can only operate directly on map[string]interface{} and []interface. +// +// Example +// +// To access the title of the third chapter of the second book, do: +// +// o.Get("books[1].chapters[2].title") +func (m Map) Get(selector string) *Value { + rawObj := access(m, selector, nil, false, false) + return &Value{data: rawObj} +} + +// Set sets the value using the specified selector and +// returns the object on which Set was called. +// +// Set can only operate directly on map[string]interface{} and []interface +// +// Example +// +// To set the title of the third chapter of the second book, do: +// +// o.Set("books[1].chapters[2].title","Time to Go") +func (m Map) Set(selector string, value interface{}) Map { + access(m, selector, value, true, false) + return m +} + +// access accesses the object using the selector and performs the +// appropriate action. +func access(current, selector, value interface{}, isSet, panics bool) interface{} { + + switch selector.(type) { + case int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64: + + if array, ok := current.([]interface{}); ok { + index := intFromInterface(selector) + + if index >= len(array) { + if panics { + panic(fmt.Sprintf("objx: Index %d is out of range. Slice only contains %d items.", index, len(array))) + } + return nil + } + + return array[index] + } + + return nil + + case string: + + selStr := selector.(string) + selSegs := strings.SplitN(selStr, PathSeparator, 2) + thisSel := selSegs[0] + index := -1 + var err error + + if strings.Contains(thisSel, "[") { + arrayMatches := arrayAccesRegex.FindStringSubmatch(thisSel) + + if len(arrayMatches) > 0 { + // Get the key into the map + thisSel = arrayMatches[1] + + // Get the index into the array at the key + index, err = strconv.Atoi(arrayMatches[2]) + + if err != nil { + // This should never happen. If it does, something has gone + // seriously wrong. Panic. + panic("objx: Array index is not an integer. Must use array[int].") + } + } + } + + if curMap, ok := current.(Map); ok { + current = map[string]interface{}(curMap) + } + + // get the object in question + switch current.(type) { + case map[string]interface{}: + curMSI := current.(map[string]interface{}) + if len(selSegs) <= 1 && isSet { + curMSI[thisSel] = value + return nil + } + current = curMSI[thisSel] + default: + current = nil + } + + if current == nil && panics { + panic(fmt.Sprintf("objx: '%v' invalid on object.", selector)) + } + + // do we need to access the item of an array? + if index > -1 { + if array, ok := current.([]interface{}); ok { + if index < len(array) { + current = array[index] + } else { + if panics { + panic(fmt.Sprintf("objx: Index %d is out of range. 
Slice only contains %d items.", index, len(array))) + } + current = nil + } + } + } + + if len(selSegs) > 1 { + current = access(current, selSegs[1], value, isSet, panics) + } + + } + return current +} + +// intFromInterface converts an interface object to the largest +// representation of an unsigned integer using a type switch and +// assertions +func intFromInterface(selector interface{}) int { + var value int + switch selector.(type) { + case int: + value = selector.(int) + case int8: + value = int(selector.(int8)) + case int16: + value = int(selector.(int16)) + case int32: + value = int(selector.(int32)) + case int64: + value = int(selector.(int64)) + case uint: + value = int(selector.(uint)) + case uint8: + value = int(selector.(uint8)) + case uint16: + value = int(selector.(uint16)) + case uint32: + value = int(selector.(uint32)) + case uint64: + value = int(selector.(uint64)) + default: + panic("objx: array access argument is not an integer type (this should never happen)") + } + return value +} diff --git a/vendor/github.com/stretchr/objx/constants.go b/vendor/github.com/stretchr/objx/constants.go new file mode 100644 index 000000000..f9eb42a25 --- /dev/null +++ b/vendor/github.com/stretchr/objx/constants.go @@ -0,0 +1,13 @@ +package objx + +const ( + // PathSeparator is the character used to separate the elements + // of the keypath. + // + // For example, `location.address.city` + PathSeparator string = "." + + // SignatureSeparator is the character that is used to + // separate the Base64 string from the security signature. + SignatureSeparator = "_" +) diff --git a/vendor/github.com/stretchr/objx/conversions.go b/vendor/github.com/stretchr/objx/conversions.go new file mode 100644 index 000000000..5e020f310 --- /dev/null +++ b/vendor/github.com/stretchr/objx/conversions.go @@ -0,0 +1,108 @@ +package objx + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "net/url" +) + +// JSON converts the contained object to a JSON string +// representation +func (m Map) JSON() (string, error) { + result, err := json.Marshal(m) + if err != nil { + err = errors.New("objx: JSON encode failed with: " + err.Error()) + } + return string(result), err +} + +// MustJSON converts the contained object to a JSON string +// representation and panics if there is an error +func (m Map) MustJSON() string { + result, err := m.JSON() + if err != nil { + panic(err.Error()) + } + return result +} + +// Base64 converts the contained object to a Base64 string +// representation of the JSON string representation +func (m Map) Base64() (string, error) { + var buf bytes.Buffer + + jsonData, err := m.JSON() + if err != nil { + return "", err + } + + encoder := base64.NewEncoder(base64.StdEncoding, &buf) + _, err = encoder.Write([]byte(jsonData)) + if err != nil { + return "", err + } + _ = encoder.Close() + + return buf.String(), nil +} + +// MustBase64 converts the contained object to a Base64 string +// representation of the JSON string representation and panics +// if there is an error +func (m Map) MustBase64() string { + result, err := m.Base64() + if err != nil { + panic(err.Error()) + } + return result +} + +// SignedBase64 converts the contained object to a Base64 string +// representation of the JSON string representation and signs it +// using the provided key. 
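A short sketch of the conversion helpers above (JSON, Base64, and the signed variant built on `HashWithKey` from security.go); the map contents and the `"my-secret-key"` value are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/stretchr/objx"
)

func main() {
	m := objx.Map{"name": "Mat", "age": 30}

	// JSON / MustJSON round-trip the map through encoding/json.
	fmt.Println(m.MustJSON()) // {"age":30,"name":"Mat"}

	// Base64 wraps the JSON form; SignedBase64 appends a SHA-1 signature
	// (HashWithKey) after the SignatureSeparator "_".
	signed := m.MustSignedBase64("my-secret-key")

	// FromSignedBase64 verifies the signature before decoding.
	back, err := objx.FromSignedBase64(signed, "my-secret-key")
	fmt.Println(back.Get("name").Str(), err) // "Mat" <nil>
}
```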
+func (m Map) SignedBase64(key string) (string, error) { + base64, err := m.Base64() + if err != nil { + return "", err + } + + sig := HashWithKey(base64, key) + return base64 + SignatureSeparator + sig, nil +} + +// MustSignedBase64 converts the contained object to a Base64 string +// representation of the JSON string representation and signs it +// using the provided key and panics if there is an error +func (m Map) MustSignedBase64(key string) string { + result, err := m.SignedBase64(key) + if err != nil { + panic(err.Error()) + } + return result +} + +/* + URL Query + ------------------------------------------------ +*/ + +// URLValues creates a url.Values object from an Obj. This +// function requires that the wrapped object be a map[string]interface{} +func (m Map) URLValues() url.Values { + vals := make(url.Values) + for k, v := range m { + //TODO: can this be done without sprintf? + vals.Set(k, fmt.Sprintf("%v", v)) + } + return vals +} + +// URLQuery gets an encoded URL query representing the given +// Obj. This function requires that the wrapped object be a +// map[string]interface{} +func (m Map) URLQuery() (string, error) { + return m.URLValues().Encode(), nil +} diff --git a/vendor/github.com/stretchr/objx/doc.go b/vendor/github.com/stretchr/objx/doc.go new file mode 100644 index 000000000..6d6af1a83 --- /dev/null +++ b/vendor/github.com/stretchr/objx/doc.go @@ -0,0 +1,66 @@ +/* +Objx - Go package for dealing with maps, slices, JSON and other data. + +Overview + +Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes +a powerful `Get` method (among others) that allows you to easily and quickly get +access to data within the map, without having to worry too much about type assertions, +missing data, default values etc. + +Pattern + +Objx uses a preditable pattern to make access data from within `map[string]interface{}` easy. +Call one of the `objx.` functions to create your `objx.Map` to get going: + + m, err := objx.FromJSON(json) + +NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong, +the rest will be optimistic and try to figure things out without panicking. + +Use `Get` to access the value you're interested in. You can use dot and array +notation too: + + m.Get("places[0].latlng") + +Once you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type. + + if m.Get("code").IsStr() { // Your code... } + +Or you can just assume the type, and use one of the strong type methods to extract the real value: + + m.Get("code").Int() + +If there's no value there (or if it's the wrong type) then a default value will be returned, +or you can be explicit about the default value. + + Get("code").Int(-1) + +If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, +manipulating and selecting that data. You can find out more by exploring the index below. + +Reading data + +A simple example of how to use Objx: + + // Use MustFromJSON to make an objx.Map from some JSON + m := objx.MustFromJSON(`{"name": "Mat", "age": 30}`) + + // Get the details + name := m.Get("name").Str() + age := m.Get("age").Int() + + // Get their nickname (or use their name if they don't have one) + nickname := m.Get("nickname").Str(name) + +Ranging + +Since `objx.Map` is a `map[string]interface{}` you can treat it as such. +For example, to `range` the data, do what you would expect: + + m := objx.MustFromJSON(json) + for key, value := range m { + // Your code... 
+ } +*/ +package objx diff --git a/vendor/github.com/stretchr/objx/map.go b/vendor/github.com/stretchr/objx/map.go new file mode 100644 index 000000000..7e9389a20 --- /dev/null +++ b/vendor/github.com/stretchr/objx/map.go @@ -0,0 +1,193 @@ +package objx + +import ( + "encoding/base64" + "encoding/json" + "errors" + "io/ioutil" + "net/url" + "strings" +) + +// MSIConvertable is an interface that defines methods for converting your +// custom types to a map[string]interface{} representation. +type MSIConvertable interface { + // MSI gets a map[string]interface{} (msi) representing the + // object. + MSI() map[string]interface{} +} + +// Map provides extended functionality for working with +// untyped data, in particular map[string]interface (msi). +type Map map[string]interface{} + +// Value returns the internal value instance +func (m Map) Value() *Value { + return &Value{data: m} +} + +// Nil represents a nil Map. +var Nil = New(nil) + +// New creates a new Map containing the map[string]interface{} in the data argument. +// If the data argument is not a map[string]interface, New attempts to call the +// MSI() method on the MSIConvertable interface to create one. +func New(data interface{}) Map { + if _, ok := data.(map[string]interface{}); !ok { + if converter, ok := data.(MSIConvertable); ok { + data = converter.MSI() + } else { + return nil + } + } + return Map(data.(map[string]interface{})) +} + +// MSI creates a map[string]interface{} and puts it inside a new Map. +// +// The arguments follow a key, value pattern. +// +// Panics +// +// Panics if any key argument is non-string or if there are an odd number of arguments. +// +// Example +// +// To easily create Maps: +// +// m := objx.MSI("name", "Mat", "age", 29, "subobj", objx.MSI("active", true)) +// +// // creates an Map equivalent to +// m := objx.New(map[string]interface{}{"name": "Mat", "age": 29, "subobj": map[string]interface{}{"active": true}}) +func MSI(keyAndValuePairs ...interface{}) Map { + newMap := make(map[string]interface{}) + keyAndValuePairsLen := len(keyAndValuePairs) + if keyAndValuePairsLen%2 != 0 { + panic("objx: MSI must have an even number of arguments following the 'key, value' pattern.") + } + + for i := 0; i < keyAndValuePairsLen; i = i + 2 { + key := keyAndValuePairs[i] + value := keyAndValuePairs[i+1] + + // make sure the key is a string + keyString, keyStringOK := key.(string) + if !keyStringOK { + panic("objx: MSI must follow 'string, interface{}' pattern. " + keyString + " is not a valid key.") + } + newMap[keyString] = value + } + return New(newMap) +} + +// ****** Conversion Constructors + +// MustFromJSON creates a new Map containing the data specified in the +// jsonString. +// +// Panics if the JSON is invalid. +func MustFromJSON(jsonString string) Map { + o, err := FromJSON(jsonString) + if err != nil { + panic("objx: MustFromJSON failed with error: " + err.Error()) + } + return o +} + +// FromJSON creates a new Map containing the data specified in the +// jsonString. +// +// Returns an error if the JSON is invalid. +func FromJSON(jsonString string) (Map, error) { + var data interface{} + err := json.Unmarshal([]byte(jsonString), &data) + if err != nil { + return Nil, err + } + return New(data), nil +} + +// FromBase64 creates a new Obj containing the data specified +// in the Base64 string. 
+// +// The string is an encoded JSON string returned by Base64 +func FromBase64(base64String string) (Map, error) { + decoder := base64.NewDecoder(base64.StdEncoding, strings.NewReader(base64String)) + decoded, err := ioutil.ReadAll(decoder) + if err != nil { + return nil, err + } + return FromJSON(string(decoded)) +} + +// MustFromBase64 creates a new Obj containing the data specified +// in the Base64 string and panics if there is an error. +// +// The string is an encoded JSON string returned by Base64 +func MustFromBase64(base64String string) Map { + result, err := FromBase64(base64String) + if err != nil { + panic("objx: MustFromBase64 failed with error: " + err.Error()) + } + return result +} + +// FromSignedBase64 creates a new Obj containing the data specified +// in the Base64 string. +// +// The string is an encoded JSON string returned by SignedBase64 +func FromSignedBase64(base64String, key string) (Map, error) { + parts := strings.Split(base64String, SignatureSeparator) + if len(parts) != 2 { + return nil, errors.New("objx: Signed base64 string is malformed") + } + + sig := HashWithKey(parts[0], key) + if parts[1] != sig { + return nil, errors.New("objx: Signature for base64 data does not match") + } + return FromBase64(parts[0]) +} + +// MustFromSignedBase64 creates a new Obj containing the data specified +// in the Base64 string and panics if there is an error. +// +// The string is an encoded JSON string returned by Base64 +func MustFromSignedBase64(base64String, key string) Map { + result, err := FromSignedBase64(base64String, key) + if err != nil { + panic("objx: MustFromSignedBase64 failed with error: " + err.Error()) + } + return result +} + +// FromURLQuery generates a new Obj by parsing the specified +// query. +// +// For queries with multiple values, the first value is selected. +func FromURLQuery(query string) (Map, error) { + vals, err := url.ParseQuery(query) + if err != nil { + return nil, err + } + + m := make(map[string]interface{}) + for k, vals := range vals { + m[k] = vals[0] + } + return New(m), nil +} + +// MustFromURLQuery generates a new Obj by parsing the specified +// query. +// +// For queries with multiple values, the first value is selected. +// +// Panics if it encounters an error +func MustFromURLQuery(query string) Map { + o, err := FromURLQuery(query) + if err != nil { + panic("objx: MustFromURLQuery failed with error: " + err.Error()) + } + return o +} diff --git a/vendor/github.com/stretchr/objx/mutations.go b/vendor/github.com/stretchr/objx/mutations.go new file mode 100644 index 000000000..e7b8eb794 --- /dev/null +++ b/vendor/github.com/stretchr/objx/mutations.go @@ -0,0 +1,74 @@ +package objx + +// Exclude returns a new Map with the keys in the specified []string +// excluded. +func (m Map) Exclude(exclude []string) Map { + excluded := make(Map) + for k, v := range m { + var shouldInclude = true + for _, toExclude := range exclude { + if k == toExclude { + shouldInclude = false + break + } + } + if shouldInclude { + excluded[k] = v + } + } + return excluded +} + +// Copy creates a shallow copy of the Obj. +func (m Map) Copy() Map { + copied := make(map[string]interface{}) + for k, v := range m { + copied[k] = v + } + return New(copied) +} + +// Merge blends the specified map with a copy of this map and returns the result. +// +// Keys that appear in both will be selected from the specified map. 
+// This method requires that the wrapped object be a map[string]interface{} +func (m Map) Merge(merge Map) Map { + return m.Copy().MergeHere(merge) +} + +// MergeHere blends the specified map with this map and returns the current map. +// +// Keys that appear in both will be selected from the specified map. The original map +// will be modified. This method requires that +// the wrapped object be a map[string]interface{} +func (m Map) MergeHere(merge Map) Map { + for k, v := range merge { + m[k] = v + } + return m +} + +// Transform builds a new Obj giving the transformer a chance +// to change the keys and values as it goes. This method requires that +// the wrapped object be a map[string]interface{} +func (m Map) Transform(transformer func(key string, value interface{}) (string, interface{})) Map { + newMap := make(map[string]interface{}) + for k, v := range m { + modifiedKey, modifiedVal := transformer(k, v) + newMap[modifiedKey] = modifiedVal + } + return New(newMap) +} + +// TransformKeys builds a new map using the specified key mapping. +// +// Unspecified keys will be unaltered. +// This method requires that the wrapped object be a map[string]interface{} +func (m Map) TransformKeys(mapping map[string]string) Map { + return m.Transform(func(key string, value interface{}) (string, interface{}) { + if newKey, ok := mapping[key]; ok { + return newKey, value + } + return key, value + }) +} diff --git a/vendor/github.com/stretchr/objx/security.go b/vendor/github.com/stretchr/objx/security.go new file mode 100644 index 000000000..e052ff890 --- /dev/null +++ b/vendor/github.com/stretchr/objx/security.go @@ -0,0 +1,17 @@ +package objx + +import ( + "crypto/sha1" + "encoding/hex" +) + +// HashWithKey hashes the specified string using the security +// key. +func HashWithKey(data, key string) string { + hash := sha1.New() + _, err := hash.Write([]byte(data + ":" + key)) + if err != nil { + return "" + } + return hex.EncodeToString(hash.Sum(nil)) +} diff --git a/vendor/github.com/stretchr/objx/tests.go b/vendor/github.com/stretchr/objx/tests.go new file mode 100644 index 000000000..d9e0b479a --- /dev/null +++ b/vendor/github.com/stretchr/objx/tests.go @@ -0,0 +1,17 @@ +package objx + +// Has gets whether there is something at the specified selector +// or not. +// +// If m is nil, Has will always return false. +func (m Map) Has(selector string) bool { + if m == nil { + return false + } + return !m.Get(selector).IsNil() +} + +// IsNil gets whether the data is nil or not. +func (v *Value) IsNil() bool { + return v == nil || v.data == nil +} diff --git a/vendor/github.com/stretchr/objx/type_specific_codegen.go b/vendor/github.com/stretchr/objx/type_specific_codegen.go new file mode 100644 index 000000000..202a91f8c --- /dev/null +++ b/vendor/github.com/stretchr/objx/type_specific_codegen.go @@ -0,0 +1,2501 @@ +package objx + +/* + Inter (interface{} and []interface{}) +*/ + +// Inter gets the value as a interface{}, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Inter(optionalDefault ...interface{}) interface{} { + if s, ok := v.data.(interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInter gets the value as a interface{}. +// +// Panics if the object is not a interface{}. 
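mutations.go, security.go, and tests.go above are small, but a short sketch of how they combine with the map.go constructors may help; everything called below appears in this hunk, and the values are illustrative only.

```
package main

import (
	"fmt"
	"strings"

	"github.com/stretchr/objx"
)

func main() {
	// MSI builds a Map from key/value pairs; New wraps an existing msi.
	base := objx.MSI("name", "packer", "debug", false)
	extra := objx.New(map[string]interface{}{"debug": true, "color": "auto"})

	// Merge works on a copy, MergeHere mutates the receiver; in both cases
	// keys from the argument win on conflict.
	merged := base.Merge(extra)
	fmt.Println(merged["debug"], base["debug"]) // true false

	// Transform can rewrite keys and values; TransformKeys only renames keys.
	upper := merged.Transform(func(k string, v interface{}) (string, interface{}) {
		return strings.ToUpper(k), v
	})
	fmt.Println(upper.Has("NAME"), upper.Has("name")) // true false

	// Exclude returns a copy without the listed keys.
	fmt.Println(merged.Exclude([]string{"color"}).Has("color")) // false
}
```

Merge leaves the receiver untouched because it runs MergeHere on Copy's shallow clone; MergeHere is the in-place variant.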
+func (v *Value) MustInter() interface{} { + return v.data.(interface{}) +} + +// InterSlice gets the value as a []interface{}, returns the optionalDefault +// value or nil if the value is not a []interface{}. +func (v *Value) InterSlice(optionalDefault ...[]interface{}) []interface{} { + if s, ok := v.data.([]interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInterSlice gets the value as a []interface{}. +// +// Panics if the object is not a []interface{}. +func (v *Value) MustInterSlice() []interface{} { + return v.data.([]interface{}) +} + +// IsInter gets whether the object contained is a interface{} or not. +func (v *Value) IsInter() bool { + _, ok := v.data.(interface{}) + return ok +} + +// IsInterSlice gets whether the object contained is a []interface{} or not. +func (v *Value) IsInterSlice() bool { + _, ok := v.data.([]interface{}) + return ok +} + +// EachInter calls the specified callback for each object +// in the []interface{}. +// +// Panics if the object is the wrong type. +func (v *Value) EachInter(callback func(int, interface{}) bool) *Value { + for index, val := range v.MustInterSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInter uses the specified decider function to select items +// from the []interface{}. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInter(decider func(int, interface{}) bool) *Value { + var selected []interface{} + v.EachInter(func(index int, val interface{}) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInter uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]interface{}. +func (v *Value) GroupInter(grouper func(int, interface{}) string) *Value { + groups := make(map[string][]interface{}) + v.EachInter(func(index int, val interface{}) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]interface{}, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInter uses the specified function to replace each interface{}s +// by iterating each item. The data in the returned result will be a +// []interface{} containing the replaced items. +func (v *Value) ReplaceInter(replacer func(int, interface{}) interface{}) *Value { + arr := v.MustInterSlice() + replaced := make([]interface{}, len(arr)) + v.EachInter(func(index int, val interface{}) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInter uses the specified collector function to collect a value +// for each of the interface{}s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInter(collector func(int, interface{}) interface{}) *Value { + arr := v.MustInterSlice() + collected := make([]interface{}, len(arr)) + v.EachInter(func(index int, val interface{}) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + MSI (map[string]interface{} and []map[string]interface{}) +*/ + +// MSI gets the value as a map[string]interface{}, returns the optionalDefault +// value or a system default object if the value is the wrong type. 
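The Inter family above fixes the template that the rest of type_specific_codegen.go stamps out for every concrete type: a loose accessor with an optional default, a panicking Must variant, Is checks, and Each/Where/Group/Replace/Collect helpers over the slice form. A sketch of the slice helpers on decoded JSON follows, illustrative only; `Get` comes from the package's accessors file, which is outside this hunk, and is assumed to return a `*Value` for the selector, as its use in `Has` above suggests.

```
package main

import (
	"fmt"
	"strings"

	"github.com/stretchr/objx"
)

func main() {
	// JSON arrays unmarshal to []interface{}, which is what the *Inter helpers expect.
	m := objx.MustFromJSON(`{"tags":["go","hcl","vmware"]}`)
	tags := m.Get("tags") // assumed: Get resolves the selector to a *Value

	fmt.Println(tags.IsInterSlice())    // true
	fmt.Println(len(tags.InterSlice())) // 3

	// EachInter stops early if the callback returns false.
	tags.EachInter(func(i int, v interface{}) bool {
		fmt.Println(i, v)
		return true
	})

	// CollectInter maps every element; the result wraps a []interface{}.
	lengths := tags.CollectInter(func(i int, v interface{}) interface{} {
		return len(v.(string))
	})
	fmt.Println(lengths.MustInterSlice()) // [2 3 6]

	// GroupInter buckets elements by the grouper's return value; the result
	// wraps a map[string][]interface{}.
	tags.GroupInter(func(i int, v interface{}) string {
		return strings.ToUpper(v.(string)[:1])
	})
}
```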
+func (v *Value) MSI(optionalDefault ...map[string]interface{}) map[string]interface{} { + if s, ok := v.data.(map[string]interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustMSI gets the value as a map[string]interface{}. +// +// Panics if the object is not a map[string]interface{}. +func (v *Value) MustMSI() map[string]interface{} { + return v.data.(map[string]interface{}) +} + +// MSISlice gets the value as a []map[string]interface{}, returns the optionalDefault +// value or nil if the value is not a []map[string]interface{}. +func (v *Value) MSISlice(optionalDefault ...[]map[string]interface{}) []map[string]interface{} { + if s, ok := v.data.([]map[string]interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustMSISlice gets the value as a []map[string]interface{}. +// +// Panics if the object is not a []map[string]interface{}. +func (v *Value) MustMSISlice() []map[string]interface{} { + return v.data.([]map[string]interface{}) +} + +// IsMSI gets whether the object contained is a map[string]interface{} or not. +func (v *Value) IsMSI() bool { + _, ok := v.data.(map[string]interface{}) + return ok +} + +// IsMSISlice gets whether the object contained is a []map[string]interface{} or not. +func (v *Value) IsMSISlice() bool { + _, ok := v.data.([]map[string]interface{}) + return ok +} + +// EachMSI calls the specified callback for each object +// in the []map[string]interface{}. +// +// Panics if the object is the wrong type. +func (v *Value) EachMSI(callback func(int, map[string]interface{}) bool) *Value { + for index, val := range v.MustMSISlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereMSI uses the specified decider function to select items +// from the []map[string]interface{}. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereMSI(decider func(int, map[string]interface{}) bool) *Value { + var selected []map[string]interface{} + v.EachMSI(func(index int, val map[string]interface{}) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupMSI uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]map[string]interface{}. +func (v *Value) GroupMSI(grouper func(int, map[string]interface{}) string) *Value { + groups := make(map[string][]map[string]interface{}) + v.EachMSI(func(index int, val map[string]interface{}) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]map[string]interface{}, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceMSI uses the specified function to replace each map[string]interface{}s +// by iterating each item. The data in the returned result will be a +// []map[string]interface{} containing the replaced items. 
+func (v *Value) ReplaceMSI(replacer func(int, map[string]interface{}) map[string]interface{}) *Value { + arr := v.MustMSISlice() + replaced := make([]map[string]interface{}, len(arr)) + v.EachMSI(func(index int, val map[string]interface{}) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectMSI uses the specified collector function to collect a value +// for each of the map[string]interface{}s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectMSI(collector func(int, map[string]interface{}) interface{}) *Value { + arr := v.MustMSISlice() + collected := make([]interface{}, len(arr)) + v.EachMSI(func(index int, val map[string]interface{}) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + ObjxMap ((Map) and [](Map)) +*/ + +// ObjxMap gets the value as a (Map), returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) ObjxMap(optionalDefault ...(Map)) Map { + if s, ok := v.data.((Map)); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return New(nil) +} + +// MustObjxMap gets the value as a (Map). +// +// Panics if the object is not a (Map). +func (v *Value) MustObjxMap() Map { + return v.data.((Map)) +} + +// ObjxMapSlice gets the value as a [](Map), returns the optionalDefault +// value or nil if the value is not a [](Map). +func (v *Value) ObjxMapSlice(optionalDefault ...[](Map)) [](Map) { + if s, ok := v.data.([](Map)); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustObjxMapSlice gets the value as a [](Map). +// +// Panics if the object is not a [](Map). +func (v *Value) MustObjxMapSlice() [](Map) { + return v.data.([](Map)) +} + +// IsObjxMap gets whether the object contained is a (Map) or not. +func (v *Value) IsObjxMap() bool { + _, ok := v.data.((Map)) + return ok +} + +// IsObjxMapSlice gets whether the object contained is a [](Map) or not. +func (v *Value) IsObjxMapSlice() bool { + _, ok := v.data.([](Map)) + return ok +} + +// EachObjxMap calls the specified callback for each object +// in the [](Map). +// +// Panics if the object is the wrong type. +func (v *Value) EachObjxMap(callback func(int, Map) bool) *Value { + for index, val := range v.MustObjxMapSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereObjxMap uses the specified decider function to select items +// from the [](Map). The object contained in the result will contain +// only the selected items. +func (v *Value) WhereObjxMap(decider func(int, Map) bool) *Value { + var selected [](Map) + v.EachObjxMap(func(index int, val Map) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupObjxMap uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][](Map). 
+func (v *Value) GroupObjxMap(grouper func(int, Map) string) *Value { + groups := make(map[string][](Map)) + v.EachObjxMap(func(index int, val Map) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([](Map), 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceObjxMap uses the specified function to replace each (Map)s +// by iterating each item. The data in the returned result will be a +// [](Map) containing the replaced items. +func (v *Value) ReplaceObjxMap(replacer func(int, Map) Map) *Value { + arr := v.MustObjxMapSlice() + replaced := make([](Map), len(arr)) + v.EachObjxMap(func(index int, val Map) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectObjxMap uses the specified collector function to collect a value +// for each of the (Map)s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectObjxMap(collector func(int, Map) interface{}) *Value { + arr := v.MustObjxMapSlice() + collected := make([]interface{}, len(arr)) + v.EachObjxMap(func(index int, val Map) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Bool (bool and []bool) +*/ + +// Bool gets the value as a bool, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Bool(optionalDefault ...bool) bool { + if s, ok := v.data.(bool); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return false +} + +// MustBool gets the value as a bool. +// +// Panics if the object is not a bool. +func (v *Value) MustBool() bool { + return v.data.(bool) +} + +// BoolSlice gets the value as a []bool, returns the optionalDefault +// value or nil if the value is not a []bool. +func (v *Value) BoolSlice(optionalDefault ...[]bool) []bool { + if s, ok := v.data.([]bool); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustBoolSlice gets the value as a []bool. +// +// Panics if the object is not a []bool. +func (v *Value) MustBoolSlice() []bool { + return v.data.([]bool) +} + +// IsBool gets whether the object contained is a bool or not. +func (v *Value) IsBool() bool { + _, ok := v.data.(bool) + return ok +} + +// IsBoolSlice gets whether the object contained is a []bool or not. +func (v *Value) IsBoolSlice() bool { + _, ok := v.data.([]bool) + return ok +} + +// EachBool calls the specified callback for each object +// in the []bool. +// +// Panics if the object is the wrong type. +func (v *Value) EachBool(callback func(int, bool) bool) *Value { + for index, val := range v.MustBoolSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereBool uses the specified decider function to select items +// from the []bool. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereBool(decider func(int, bool) bool) *Value { + var selected []bool + v.EachBool(func(index int, val bool) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupBool uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]bool. 
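One subtlety in the MSI and ObjxMap sections above: the type assertions distinguish the named `Map` type from a plain `map[string]interface{}`, so a `*Value` wrapping one will not satisfy the accessors for the other. A short sketch, illustrative only, with `Get` assumed as before.

```
package main

import (
	"fmt"

	"github.com/stretchr/objx"
)

func main() {
	// A Value wrapping the named Map type matches the ObjxMap accessors...
	v := objx.MSI("a", 1).Value()
	fmt.Println(v.IsObjxMap(), v.IsMSI()) // true false

	// ...while nested data decoded from JSON is a plain map[string]interface{}
	// and matches the MSI accessors instead.
	child := objx.MustFromJSON(`{"child":{"a":1}}`).Get("child")
	fmt.Println(child.IsMSI(), child.IsObjxMap()) // true false
}
```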
+func (v *Value) GroupBool(grouper func(int, bool) string) *Value { + groups := make(map[string][]bool) + v.EachBool(func(index int, val bool) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]bool, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceBool uses the specified function to replace each bools +// by iterating each item. The data in the returned result will be a +// []bool containing the replaced items. +func (v *Value) ReplaceBool(replacer func(int, bool) bool) *Value { + arr := v.MustBoolSlice() + replaced := make([]bool, len(arr)) + v.EachBool(func(index int, val bool) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectBool uses the specified collector function to collect a value +// for each of the bools in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectBool(collector func(int, bool) interface{}) *Value { + arr := v.MustBoolSlice() + collected := make([]interface{}, len(arr)) + v.EachBool(func(index int, val bool) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Str (string and []string) +*/ + +// Str gets the value as a string, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Str(optionalDefault ...string) string { + if s, ok := v.data.(string); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return "" +} + +// MustStr gets the value as a string. +// +// Panics if the object is not a string. +func (v *Value) MustStr() string { + return v.data.(string) +} + +// StrSlice gets the value as a []string, returns the optionalDefault +// value or nil if the value is not a []string. +func (v *Value) StrSlice(optionalDefault ...[]string) []string { + if s, ok := v.data.([]string); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustStrSlice gets the value as a []string. +// +// Panics if the object is not a []string. +func (v *Value) MustStrSlice() []string { + return v.data.([]string) +} + +// IsStr gets whether the object contained is a string or not. +func (v *Value) IsStr() bool { + _, ok := v.data.(string) + return ok +} + +// IsStrSlice gets whether the object contained is a []string or not. +func (v *Value) IsStrSlice() bool { + _, ok := v.data.([]string) + return ok +} + +// EachStr calls the specified callback for each object +// in the []string. +// +// Panics if the object is the wrong type. +func (v *Value) EachStr(callback func(int, string) bool) *Value { + for index, val := range v.MustStrSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereStr uses the specified decider function to select items +// from the []string. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereStr(decider func(int, string) bool) *Value { + var selected []string + v.EachStr(func(index int, val string) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupStr uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]string. 
+func (v *Value) GroupStr(grouper func(int, string) string) *Value { + groups := make(map[string][]string) + v.EachStr(func(index int, val string) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]string, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceStr uses the specified function to replace each strings +// by iterating each item. The data in the returned result will be a +// []string containing the replaced items. +func (v *Value) ReplaceStr(replacer func(int, string) string) *Value { + arr := v.MustStrSlice() + replaced := make([]string, len(arr)) + v.EachStr(func(index int, val string) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectStr uses the specified collector function to collect a value +// for each of the strings in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectStr(collector func(int, string) interface{}) *Value { + arr := v.MustStrSlice() + collected := make([]interface{}, len(arr)) + v.EachStr(func(index int, val string) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int (int and []int) +*/ + +// Int gets the value as a int, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int(optionalDefault ...int) int { + if s, ok := v.data.(int); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt gets the value as a int. +// +// Panics if the object is not a int. +func (v *Value) MustInt() int { + return v.data.(int) +} + +// IntSlice gets the value as a []int, returns the optionalDefault +// value or nil if the value is not a []int. +func (v *Value) IntSlice(optionalDefault ...[]int) []int { + if s, ok := v.data.([]int); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustIntSlice gets the value as a []int. +// +// Panics if the object is not a []int. +func (v *Value) MustIntSlice() []int { + return v.data.([]int) +} + +// IsInt gets whether the object contained is a int or not. +func (v *Value) IsInt() bool { + _, ok := v.data.(int) + return ok +} + +// IsIntSlice gets whether the object contained is a []int or not. +func (v *Value) IsIntSlice() bool { + _, ok := v.data.([]int) + return ok +} + +// EachInt calls the specified callback for each object +// in the []int. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt(callback func(int, int) bool) *Value { + for index, val := range v.MustIntSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt uses the specified decider function to select items +// from the []int. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt(decider func(int, int) bool) *Value { + var selected []int + v.EachInt(func(index int, val int) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int. 
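The Str and Int sections above, and every numeric, unsigned, and floating-point section that follows, repeat the same generated pattern, so one sketch of the concrete accessors covers them all; illustrative only, with `Get` again assumed to resolve a selector to a `*Value`.

```
package main

import (
	"fmt"

	"github.com/stretchr/objx"
)

func main() {
	m := objx.MSI("name", "Mat", "age", 29, "admin", true)

	// Loose accessors return the zero value (or the optional default) on a
	// missing key or type mismatch; the Must* variants panic instead.
	fmt.Println(m.Get("name").Str())         // Mat
	fmt.Println(m.Get("age").Int())          // 29
	fmt.Println(m.Get("admin").Bool())       // true
	fmt.Println(m.Get("missing").Str("n/a")) // n/a (default used)
	fmt.Println(m.Get("age").IsStr())        // false
	// m.Get("age").MustStr() would panic: the stored value is an int.
}
```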
+func (v *Value) GroupInt(grouper func(int, int) string) *Value { + groups := make(map[string][]int) + v.EachInt(func(index int, val int) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt uses the specified function to replace each ints +// by iterating each item. The data in the returned result will be a +// []int containing the replaced items. +func (v *Value) ReplaceInt(replacer func(int, int) int) *Value { + arr := v.MustIntSlice() + replaced := make([]int, len(arr)) + v.EachInt(func(index int, val int) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt uses the specified collector function to collect a value +// for each of the ints in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt(collector func(int, int) interface{}) *Value { + arr := v.MustIntSlice() + collected := make([]interface{}, len(arr)) + v.EachInt(func(index int, val int) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int8 (int8 and []int8) +*/ + +// Int8 gets the value as a int8, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int8(optionalDefault ...int8) int8 { + if s, ok := v.data.(int8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt8 gets the value as a int8. +// +// Panics if the object is not a int8. +func (v *Value) MustInt8() int8 { + return v.data.(int8) +} + +// Int8Slice gets the value as a []int8, returns the optionalDefault +// value or nil if the value is not a []int8. +func (v *Value) Int8Slice(optionalDefault ...[]int8) []int8 { + if s, ok := v.data.([]int8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt8Slice gets the value as a []int8. +// +// Panics if the object is not a []int8. +func (v *Value) MustInt8Slice() []int8 { + return v.data.([]int8) +} + +// IsInt8 gets whether the object contained is a int8 or not. +func (v *Value) IsInt8() bool { + _, ok := v.data.(int8) + return ok +} + +// IsInt8Slice gets whether the object contained is a []int8 or not. +func (v *Value) IsInt8Slice() bool { + _, ok := v.data.([]int8) + return ok +} + +// EachInt8 calls the specified callback for each object +// in the []int8. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt8(callback func(int, int8) bool) *Value { + for index, val := range v.MustInt8Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt8 uses the specified decider function to select items +// from the []int8. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt8(decider func(int, int8) bool) *Value { + var selected []int8 + v.EachInt8(func(index int, val int8) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt8 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int8. 
+func (v *Value) GroupInt8(grouper func(int, int8) string) *Value { + groups := make(map[string][]int8) + v.EachInt8(func(index int, val int8) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int8, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt8 uses the specified function to replace each int8s +// by iterating each item. The data in the returned result will be a +// []int8 containing the replaced items. +func (v *Value) ReplaceInt8(replacer func(int, int8) int8) *Value { + arr := v.MustInt8Slice() + replaced := make([]int8, len(arr)) + v.EachInt8(func(index int, val int8) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt8 uses the specified collector function to collect a value +// for each of the int8s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt8(collector func(int, int8) interface{}) *Value { + arr := v.MustInt8Slice() + collected := make([]interface{}, len(arr)) + v.EachInt8(func(index int, val int8) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int16 (int16 and []int16) +*/ + +// Int16 gets the value as a int16, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int16(optionalDefault ...int16) int16 { + if s, ok := v.data.(int16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt16 gets the value as a int16. +// +// Panics if the object is not a int16. +func (v *Value) MustInt16() int16 { + return v.data.(int16) +} + +// Int16Slice gets the value as a []int16, returns the optionalDefault +// value or nil if the value is not a []int16. +func (v *Value) Int16Slice(optionalDefault ...[]int16) []int16 { + if s, ok := v.data.([]int16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt16Slice gets the value as a []int16. +// +// Panics if the object is not a []int16. +func (v *Value) MustInt16Slice() []int16 { + return v.data.([]int16) +} + +// IsInt16 gets whether the object contained is a int16 or not. +func (v *Value) IsInt16() bool { + _, ok := v.data.(int16) + return ok +} + +// IsInt16Slice gets whether the object contained is a []int16 or not. +func (v *Value) IsInt16Slice() bool { + _, ok := v.data.([]int16) + return ok +} + +// EachInt16 calls the specified callback for each object +// in the []int16. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt16(callback func(int, int16) bool) *Value { + for index, val := range v.MustInt16Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt16 uses the specified decider function to select items +// from the []int16. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt16(decider func(int, int16) bool) *Value { + var selected []int16 + v.EachInt16(func(index int, val int16) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt16 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int16. 
+func (v *Value) GroupInt16(grouper func(int, int16) string) *Value { + groups := make(map[string][]int16) + v.EachInt16(func(index int, val int16) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int16, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt16 uses the specified function to replace each int16s +// by iterating each item. The data in the returned result will be a +// []int16 containing the replaced items. +func (v *Value) ReplaceInt16(replacer func(int, int16) int16) *Value { + arr := v.MustInt16Slice() + replaced := make([]int16, len(arr)) + v.EachInt16(func(index int, val int16) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt16 uses the specified collector function to collect a value +// for each of the int16s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt16(collector func(int, int16) interface{}) *Value { + arr := v.MustInt16Slice() + collected := make([]interface{}, len(arr)) + v.EachInt16(func(index int, val int16) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int32 (int32 and []int32) +*/ + +// Int32 gets the value as a int32, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int32(optionalDefault ...int32) int32 { + if s, ok := v.data.(int32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt32 gets the value as a int32. +// +// Panics if the object is not a int32. +func (v *Value) MustInt32() int32 { + return v.data.(int32) +} + +// Int32Slice gets the value as a []int32, returns the optionalDefault +// value or nil if the value is not a []int32. +func (v *Value) Int32Slice(optionalDefault ...[]int32) []int32 { + if s, ok := v.data.([]int32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt32Slice gets the value as a []int32. +// +// Panics if the object is not a []int32. +func (v *Value) MustInt32Slice() []int32 { + return v.data.([]int32) +} + +// IsInt32 gets whether the object contained is a int32 or not. +func (v *Value) IsInt32() bool { + _, ok := v.data.(int32) + return ok +} + +// IsInt32Slice gets whether the object contained is a []int32 or not. +func (v *Value) IsInt32Slice() bool { + _, ok := v.data.([]int32) + return ok +} + +// EachInt32 calls the specified callback for each object +// in the []int32. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt32(callback func(int, int32) bool) *Value { + for index, val := range v.MustInt32Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt32 uses the specified decider function to select items +// from the []int32. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt32(decider func(int, int32) bool) *Value { + var selected []int32 + v.EachInt32(func(index int, val int32) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt32 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int32. 
+func (v *Value) GroupInt32(grouper func(int, int32) string) *Value { + groups := make(map[string][]int32) + v.EachInt32(func(index int, val int32) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int32, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt32 uses the specified function to replace each int32s +// by iterating each item. The data in the returned result will be a +// []int32 containing the replaced items. +func (v *Value) ReplaceInt32(replacer func(int, int32) int32) *Value { + arr := v.MustInt32Slice() + replaced := make([]int32, len(arr)) + v.EachInt32(func(index int, val int32) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt32 uses the specified collector function to collect a value +// for each of the int32s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt32(collector func(int, int32) interface{}) *Value { + arr := v.MustInt32Slice() + collected := make([]interface{}, len(arr)) + v.EachInt32(func(index int, val int32) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int64 (int64 and []int64) +*/ + +// Int64 gets the value as a int64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int64(optionalDefault ...int64) int64 { + if s, ok := v.data.(int64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt64 gets the value as a int64. +// +// Panics if the object is not a int64. +func (v *Value) MustInt64() int64 { + return v.data.(int64) +} + +// Int64Slice gets the value as a []int64, returns the optionalDefault +// value or nil if the value is not a []int64. +func (v *Value) Int64Slice(optionalDefault ...[]int64) []int64 { + if s, ok := v.data.([]int64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt64Slice gets the value as a []int64. +// +// Panics if the object is not a []int64. +func (v *Value) MustInt64Slice() []int64 { + return v.data.([]int64) +} + +// IsInt64 gets whether the object contained is a int64 or not. +func (v *Value) IsInt64() bool { + _, ok := v.data.(int64) + return ok +} + +// IsInt64Slice gets whether the object contained is a []int64 or not. +func (v *Value) IsInt64Slice() bool { + _, ok := v.data.([]int64) + return ok +} + +// EachInt64 calls the specified callback for each object +// in the []int64. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt64(callback func(int, int64) bool) *Value { + for index, val := range v.MustInt64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt64 uses the specified decider function to select items +// from the []int64. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt64(decider func(int, int64) bool) *Value { + var selected []int64 + v.EachInt64(func(index int, val int64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt64 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int64. 
+func (v *Value) GroupInt64(grouper func(int, int64) string) *Value { + groups := make(map[string][]int64) + v.EachInt64(func(index int, val int64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt64 uses the specified function to replace each int64s +// by iterating each item. The data in the returned result will be a +// []int64 containing the replaced items. +func (v *Value) ReplaceInt64(replacer func(int, int64) int64) *Value { + arr := v.MustInt64Slice() + replaced := make([]int64, len(arr)) + v.EachInt64(func(index int, val int64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt64 uses the specified collector function to collect a value +// for each of the int64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt64(collector func(int, int64) interface{}) *Value { + arr := v.MustInt64Slice() + collected := make([]interface{}, len(arr)) + v.EachInt64(func(index int, val int64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint (uint and []uint) +*/ + +// Uint gets the value as a uint, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint(optionalDefault ...uint) uint { + if s, ok := v.data.(uint); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint gets the value as a uint. +// +// Panics if the object is not a uint. +func (v *Value) MustUint() uint { + return v.data.(uint) +} + +// UintSlice gets the value as a []uint, returns the optionalDefault +// value or nil if the value is not a []uint. +func (v *Value) UintSlice(optionalDefault ...[]uint) []uint { + if s, ok := v.data.([]uint); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUintSlice gets the value as a []uint. +// +// Panics if the object is not a []uint. +func (v *Value) MustUintSlice() []uint { + return v.data.([]uint) +} + +// IsUint gets whether the object contained is a uint or not. +func (v *Value) IsUint() bool { + _, ok := v.data.(uint) + return ok +} + +// IsUintSlice gets whether the object contained is a []uint or not. +func (v *Value) IsUintSlice() bool { + _, ok := v.data.([]uint) + return ok +} + +// EachUint calls the specified callback for each object +// in the []uint. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint(callback func(int, uint) bool) *Value { + for index, val := range v.MustUintSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint uses the specified decider function to select items +// from the []uint. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint(decider func(int, uint) bool) *Value { + var selected []uint + v.EachUint(func(index int, val uint) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint. 
+func (v *Value) GroupUint(grouper func(int, uint) string) *Value { + groups := make(map[string][]uint) + v.EachUint(func(index int, val uint) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint uses the specified function to replace each uints +// by iterating each item. The data in the returned result will be a +// []uint containing the replaced items. +func (v *Value) ReplaceUint(replacer func(int, uint) uint) *Value { + arr := v.MustUintSlice() + replaced := make([]uint, len(arr)) + v.EachUint(func(index int, val uint) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint uses the specified collector function to collect a value +// for each of the uints in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint(collector func(int, uint) interface{}) *Value { + arr := v.MustUintSlice() + collected := make([]interface{}, len(arr)) + v.EachUint(func(index int, val uint) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint8 (uint8 and []uint8) +*/ + +// Uint8 gets the value as a uint8, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint8(optionalDefault ...uint8) uint8 { + if s, ok := v.data.(uint8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint8 gets the value as a uint8. +// +// Panics if the object is not a uint8. +func (v *Value) MustUint8() uint8 { + return v.data.(uint8) +} + +// Uint8Slice gets the value as a []uint8, returns the optionalDefault +// value or nil if the value is not a []uint8. +func (v *Value) Uint8Slice(optionalDefault ...[]uint8) []uint8 { + if s, ok := v.data.([]uint8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint8Slice gets the value as a []uint8. +// +// Panics if the object is not a []uint8. +func (v *Value) MustUint8Slice() []uint8 { + return v.data.([]uint8) +} + +// IsUint8 gets whether the object contained is a uint8 or not. +func (v *Value) IsUint8() bool { + _, ok := v.data.(uint8) + return ok +} + +// IsUint8Slice gets whether the object contained is a []uint8 or not. +func (v *Value) IsUint8Slice() bool { + _, ok := v.data.([]uint8) + return ok +} + +// EachUint8 calls the specified callback for each object +// in the []uint8. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint8(callback func(int, uint8) bool) *Value { + for index, val := range v.MustUint8Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint8 uses the specified decider function to select items +// from the []uint8. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint8(decider func(int, uint8) bool) *Value { + var selected []uint8 + v.EachUint8(func(index int, val uint8) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint8 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint8. 
+func (v *Value) GroupUint8(grouper func(int, uint8) string) *Value { + groups := make(map[string][]uint8) + v.EachUint8(func(index int, val uint8) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint8, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint8 uses the specified function to replace each uint8s +// by iterating each item. The data in the returned result will be a +// []uint8 containing the replaced items. +func (v *Value) ReplaceUint8(replacer func(int, uint8) uint8) *Value { + arr := v.MustUint8Slice() + replaced := make([]uint8, len(arr)) + v.EachUint8(func(index int, val uint8) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint8 uses the specified collector function to collect a value +// for each of the uint8s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint8(collector func(int, uint8) interface{}) *Value { + arr := v.MustUint8Slice() + collected := make([]interface{}, len(arr)) + v.EachUint8(func(index int, val uint8) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint16 (uint16 and []uint16) +*/ + +// Uint16 gets the value as a uint16, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint16(optionalDefault ...uint16) uint16 { + if s, ok := v.data.(uint16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint16 gets the value as a uint16. +// +// Panics if the object is not a uint16. +func (v *Value) MustUint16() uint16 { + return v.data.(uint16) +} + +// Uint16Slice gets the value as a []uint16, returns the optionalDefault +// value or nil if the value is not a []uint16. +func (v *Value) Uint16Slice(optionalDefault ...[]uint16) []uint16 { + if s, ok := v.data.([]uint16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint16Slice gets the value as a []uint16. +// +// Panics if the object is not a []uint16. +func (v *Value) MustUint16Slice() []uint16 { + return v.data.([]uint16) +} + +// IsUint16 gets whether the object contained is a uint16 or not. +func (v *Value) IsUint16() bool { + _, ok := v.data.(uint16) + return ok +} + +// IsUint16Slice gets whether the object contained is a []uint16 or not. +func (v *Value) IsUint16Slice() bool { + _, ok := v.data.([]uint16) + return ok +} + +// EachUint16 calls the specified callback for each object +// in the []uint16. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint16(callback func(int, uint16) bool) *Value { + for index, val := range v.MustUint16Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint16 uses the specified decider function to select items +// from the []uint16. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint16(decider func(int, uint16) bool) *Value { + var selected []uint16 + v.EachUint16(func(index int, val uint16) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint16 uses the specified grouper function to group the items +// keyed by the return of the grouper. 
The object contained in the +// result will contain a map[string][]uint16. +func (v *Value) GroupUint16(grouper func(int, uint16) string) *Value { + groups := make(map[string][]uint16) + v.EachUint16(func(index int, val uint16) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint16, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint16 uses the specified function to replace each uint16s +// by iterating each item. The data in the returned result will be a +// []uint16 containing the replaced items. +func (v *Value) ReplaceUint16(replacer func(int, uint16) uint16) *Value { + arr := v.MustUint16Slice() + replaced := make([]uint16, len(arr)) + v.EachUint16(func(index int, val uint16) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint16 uses the specified collector function to collect a value +// for each of the uint16s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint16(collector func(int, uint16) interface{}) *Value { + arr := v.MustUint16Slice() + collected := make([]interface{}, len(arr)) + v.EachUint16(func(index int, val uint16) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint32 (uint32 and []uint32) +*/ + +// Uint32 gets the value as a uint32, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint32(optionalDefault ...uint32) uint32 { + if s, ok := v.data.(uint32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint32 gets the value as a uint32. +// +// Panics if the object is not a uint32. +func (v *Value) MustUint32() uint32 { + return v.data.(uint32) +} + +// Uint32Slice gets the value as a []uint32, returns the optionalDefault +// value or nil if the value is not a []uint32. +func (v *Value) Uint32Slice(optionalDefault ...[]uint32) []uint32 { + if s, ok := v.data.([]uint32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint32Slice gets the value as a []uint32. +// +// Panics if the object is not a []uint32. +func (v *Value) MustUint32Slice() []uint32 { + return v.data.([]uint32) +} + +// IsUint32 gets whether the object contained is a uint32 or not. +func (v *Value) IsUint32() bool { + _, ok := v.data.(uint32) + return ok +} + +// IsUint32Slice gets whether the object contained is a []uint32 or not. +func (v *Value) IsUint32Slice() bool { + _, ok := v.data.([]uint32) + return ok +} + +// EachUint32 calls the specified callback for each object +// in the []uint32. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint32(callback func(int, uint32) bool) *Value { + for index, val := range v.MustUint32Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint32 uses the specified decider function to select items +// from the []uint32. The object contained in the result will contain +// only the selected items. 
+func (v *Value) WhereUint32(decider func(int, uint32) bool) *Value { + var selected []uint32 + v.EachUint32(func(index int, val uint32) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint32 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint32. +func (v *Value) GroupUint32(grouper func(int, uint32) string) *Value { + groups := make(map[string][]uint32) + v.EachUint32(func(index int, val uint32) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint32, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint32 uses the specified function to replace each uint32s +// by iterating each item. The data in the returned result will be a +// []uint32 containing the replaced items. +func (v *Value) ReplaceUint32(replacer func(int, uint32) uint32) *Value { + arr := v.MustUint32Slice() + replaced := make([]uint32, len(arr)) + v.EachUint32(func(index int, val uint32) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint32 uses the specified collector function to collect a value +// for each of the uint32s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint32(collector func(int, uint32) interface{}) *Value { + arr := v.MustUint32Slice() + collected := make([]interface{}, len(arr)) + v.EachUint32(func(index int, val uint32) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint64 (uint64 and []uint64) +*/ + +// Uint64 gets the value as a uint64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint64(optionalDefault ...uint64) uint64 { + if s, ok := v.data.(uint64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint64 gets the value as a uint64. +// +// Panics if the object is not a uint64. +func (v *Value) MustUint64() uint64 { + return v.data.(uint64) +} + +// Uint64Slice gets the value as a []uint64, returns the optionalDefault +// value or nil if the value is not a []uint64. +func (v *Value) Uint64Slice(optionalDefault ...[]uint64) []uint64 { + if s, ok := v.data.([]uint64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint64Slice gets the value as a []uint64. +// +// Panics if the object is not a []uint64. +func (v *Value) MustUint64Slice() []uint64 { + return v.data.([]uint64) +} + +// IsUint64 gets whether the object contained is a uint64 or not. +func (v *Value) IsUint64() bool { + _, ok := v.data.(uint64) + return ok +} + +// IsUint64Slice gets whether the object contained is a []uint64 or not. +func (v *Value) IsUint64Slice() bool { + _, ok := v.data.([]uint64) + return ok +} + +// EachUint64 calls the specified callback for each object +// in the []uint64. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint64(callback func(int, uint64) bool) *Value { + for index, val := range v.MustUint64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint64 uses the specified decider function to select items +// from the []uint64. 
The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint64(decider func(int, uint64) bool) *Value { + var selected []uint64 + v.EachUint64(func(index int, val uint64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint64 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint64. +func (v *Value) GroupUint64(grouper func(int, uint64) string) *Value { + groups := make(map[string][]uint64) + v.EachUint64(func(index int, val uint64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint64 uses the specified function to replace each uint64s +// by iterating each item. The data in the returned result will be a +// []uint64 containing the replaced items. +func (v *Value) ReplaceUint64(replacer func(int, uint64) uint64) *Value { + arr := v.MustUint64Slice() + replaced := make([]uint64, len(arr)) + v.EachUint64(func(index int, val uint64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint64 uses the specified collector function to collect a value +// for each of the uint64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint64(collector func(int, uint64) interface{}) *Value { + arr := v.MustUint64Slice() + collected := make([]interface{}, len(arr)) + v.EachUint64(func(index int, val uint64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uintptr (uintptr and []uintptr) +*/ + +// Uintptr gets the value as a uintptr, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uintptr(optionalDefault ...uintptr) uintptr { + if s, ok := v.data.(uintptr); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUintptr gets the value as a uintptr. +// +// Panics if the object is not a uintptr. +func (v *Value) MustUintptr() uintptr { + return v.data.(uintptr) +} + +// UintptrSlice gets the value as a []uintptr, returns the optionalDefault +// value or nil if the value is not a []uintptr. +func (v *Value) UintptrSlice(optionalDefault ...[]uintptr) []uintptr { + if s, ok := v.data.([]uintptr); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUintptrSlice gets the value as a []uintptr. +// +// Panics if the object is not a []uintptr. +func (v *Value) MustUintptrSlice() []uintptr { + return v.data.([]uintptr) +} + +// IsUintptr gets whether the object contained is a uintptr or not. +func (v *Value) IsUintptr() bool { + _, ok := v.data.(uintptr) + return ok +} + +// IsUintptrSlice gets whether the object contained is a []uintptr or not. +func (v *Value) IsUintptrSlice() bool { + _, ok := v.data.([]uintptr) + return ok +} + +// EachUintptr calls the specified callback for each object +// in the []uintptr. +// +// Panics if the object is the wrong type. 
+func (v *Value) EachUintptr(callback func(int, uintptr) bool) *Value { + for index, val := range v.MustUintptrSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUintptr uses the specified decider function to select items +// from the []uintptr. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUintptr(decider func(int, uintptr) bool) *Value { + var selected []uintptr + v.EachUintptr(func(index int, val uintptr) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUintptr uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uintptr. +func (v *Value) GroupUintptr(grouper func(int, uintptr) string) *Value { + groups := make(map[string][]uintptr) + v.EachUintptr(func(index int, val uintptr) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uintptr, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUintptr uses the specified function to replace each uintptrs +// by iterating each item. The data in the returned result will be a +// []uintptr containing the replaced items. +func (v *Value) ReplaceUintptr(replacer func(int, uintptr) uintptr) *Value { + arr := v.MustUintptrSlice() + replaced := make([]uintptr, len(arr)) + v.EachUintptr(func(index int, val uintptr) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUintptr uses the specified collector function to collect a value +// for each of the uintptrs in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUintptr(collector func(int, uintptr) interface{}) *Value { + arr := v.MustUintptrSlice() + collected := make([]interface{}, len(arr)) + v.EachUintptr(func(index int, val uintptr) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Float32 (float32 and []float32) +*/ + +// Float32 gets the value as a float32, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Float32(optionalDefault ...float32) float32 { + if s, ok := v.data.(float32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustFloat32 gets the value as a float32. +// +// Panics if the object is not a float32. +func (v *Value) MustFloat32() float32 { + return v.data.(float32) +} + +// Float32Slice gets the value as a []float32, returns the optionalDefault +// value or nil if the value is not a []float32. +func (v *Value) Float32Slice(optionalDefault ...[]float32) []float32 { + if s, ok := v.data.([]float32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustFloat32Slice gets the value as a []float32. +// +// Panics if the object is not a []float32. +func (v *Value) MustFloat32Slice() []float32 { + return v.data.([]float32) +} + +// IsFloat32 gets whether the object contained is a float32 or not. +func (v *Value) IsFloat32() bool { + _, ok := v.data.(float32) + return ok +} + +// IsFloat32Slice gets whether the object contained is a []float32 or not. 
+func (v *Value) IsFloat32Slice() bool { + _, ok := v.data.([]float32) + return ok +} + +// EachFloat32 calls the specified callback for each object +// in the []float32. +// +// Panics if the object is the wrong type. +func (v *Value) EachFloat32(callback func(int, float32) bool) *Value { + for index, val := range v.MustFloat32Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereFloat32 uses the specified decider function to select items +// from the []float32. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereFloat32(decider func(int, float32) bool) *Value { + var selected []float32 + v.EachFloat32(func(index int, val float32) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupFloat32 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]float32. +func (v *Value) GroupFloat32(grouper func(int, float32) string) *Value { + groups := make(map[string][]float32) + v.EachFloat32(func(index int, val float32) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]float32, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceFloat32 uses the specified function to replace each float32s +// by iterating each item. The data in the returned result will be a +// []float32 containing the replaced items. +func (v *Value) ReplaceFloat32(replacer func(int, float32) float32) *Value { + arr := v.MustFloat32Slice() + replaced := make([]float32, len(arr)) + v.EachFloat32(func(index int, val float32) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectFloat32 uses the specified collector function to collect a value +// for each of the float32s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectFloat32(collector func(int, float32) interface{}) *Value { + arr := v.MustFloat32Slice() + collected := make([]interface{}, len(arr)) + v.EachFloat32(func(index int, val float32) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Float64 (float64 and []float64) +*/ + +// Float64 gets the value as a float64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Float64(optionalDefault ...float64) float64 { + if s, ok := v.data.(float64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustFloat64 gets the value as a float64. +// +// Panics if the object is not a float64. +func (v *Value) MustFloat64() float64 { + return v.data.(float64) +} + +// Float64Slice gets the value as a []float64, returns the optionalDefault +// value or nil if the value is not a []float64. +func (v *Value) Float64Slice(optionalDefault ...[]float64) []float64 { + if s, ok := v.data.([]float64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustFloat64Slice gets the value as a []float64. +// +// Panics if the object is not a []float64. +func (v *Value) MustFloat64Slice() []float64 { + return v.data.([]float64) +} + +// IsFloat64 gets whether the object contained is a float64 or not. 
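// Illustrative sketch, not part of the vendored objx source: iterating and
// grouping a []float64 with the generated helpers in this block (EachFloat64
// and GroupFloat64 follow below). objx.Map and Map.Get are assumed from the
// rest of the package. Note that the Where* helpers, as generated here, append
// the values for which the decider returns false.
//
//	m := objx.Map{"scores": []float64{1.5, 2.0, 3.25, 4.0}}
//	v := m.Get("scores")
//
//	sum := 0.0
//	v.EachFloat64(func(i int, f float64) bool {
//		sum += f
//		return true // returning false stops the iteration early
//	})
//	// sum == 10.75
//
//	byIndex := v.GroupFloat64(func(i int, f float64) string {
//		if i%2 == 0 {
//			return "even-index"
//		}
//		return "odd-index"
//	}).Data().(map[string][]float64)
//	// byIndex["even-index"] == []float64{1.5, 3.25}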
+func (v *Value) IsFloat64() bool { + _, ok := v.data.(float64) + return ok +} + +// IsFloat64Slice gets whether the object contained is a []float64 or not. +func (v *Value) IsFloat64Slice() bool { + _, ok := v.data.([]float64) + return ok +} + +// EachFloat64 calls the specified callback for each object +// in the []float64. +// +// Panics if the object is the wrong type. +func (v *Value) EachFloat64(callback func(int, float64) bool) *Value { + for index, val := range v.MustFloat64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereFloat64 uses the specified decider function to select items +// from the []float64. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereFloat64(decider func(int, float64) bool) *Value { + var selected []float64 + v.EachFloat64(func(index int, val float64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupFloat64 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]float64. +func (v *Value) GroupFloat64(grouper func(int, float64) string) *Value { + groups := make(map[string][]float64) + v.EachFloat64(func(index int, val float64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]float64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceFloat64 uses the specified function to replace each float64s +// by iterating each item. The data in the returned result will be a +// []float64 containing the replaced items. +func (v *Value) ReplaceFloat64(replacer func(int, float64) float64) *Value { + arr := v.MustFloat64Slice() + replaced := make([]float64, len(arr)) + v.EachFloat64(func(index int, val float64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectFloat64 uses the specified collector function to collect a value +// for each of the float64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectFloat64(collector func(int, float64) interface{}) *Value { + arr := v.MustFloat64Slice() + collected := make([]interface{}, len(arr)) + v.EachFloat64(func(index int, val float64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Complex64 (complex64 and []complex64) +*/ + +// Complex64 gets the value as a complex64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Complex64(optionalDefault ...complex64) complex64 { + if s, ok := v.data.(complex64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustComplex64 gets the value as a complex64. +// +// Panics if the object is not a complex64. +func (v *Value) MustComplex64() complex64 { + return v.data.(complex64) +} + +// Complex64Slice gets the value as a []complex64, returns the optionalDefault +// value or nil if the value is not a []complex64. +func (v *Value) Complex64Slice(optionalDefault ...[]complex64) []complex64 { + if s, ok := v.data.([]complex64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustComplex64Slice gets the value as a []complex64. 
+// +// Panics if the object is not a []complex64. +func (v *Value) MustComplex64Slice() []complex64 { + return v.data.([]complex64) +} + +// IsComplex64 gets whether the object contained is a complex64 or not. +func (v *Value) IsComplex64() bool { + _, ok := v.data.(complex64) + return ok +} + +// IsComplex64Slice gets whether the object contained is a []complex64 or not. +func (v *Value) IsComplex64Slice() bool { + _, ok := v.data.([]complex64) + return ok +} + +// EachComplex64 calls the specified callback for each object +// in the []complex64. +// +// Panics if the object is the wrong type. +func (v *Value) EachComplex64(callback func(int, complex64) bool) *Value { + for index, val := range v.MustComplex64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereComplex64 uses the specified decider function to select items +// from the []complex64. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereComplex64(decider func(int, complex64) bool) *Value { + var selected []complex64 + v.EachComplex64(func(index int, val complex64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupComplex64 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]complex64. +func (v *Value) GroupComplex64(grouper func(int, complex64) string) *Value { + groups := make(map[string][]complex64) + v.EachComplex64(func(index int, val complex64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]complex64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceComplex64 uses the specified function to replace each complex64s +// by iterating each item. The data in the returned result will be a +// []complex64 containing the replaced items. +func (v *Value) ReplaceComplex64(replacer func(int, complex64) complex64) *Value { + arr := v.MustComplex64Slice() + replaced := make([]complex64, len(arr)) + v.EachComplex64(func(index int, val complex64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectComplex64 uses the specified collector function to collect a value +// for each of the complex64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectComplex64(collector func(int, complex64) interface{}) *Value { + arr := v.MustComplex64Slice() + collected := make([]interface{}, len(arr)) + v.EachComplex64(func(index int, val complex64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Complex128 (complex128 and []complex128) +*/ + +// Complex128 gets the value as a complex128, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Complex128(optionalDefault ...complex128) complex128 { + if s, ok := v.data.(complex128); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustComplex128 gets the value as a complex128. +// +// Panics if the object is not a complex128. 
+func (v *Value) MustComplex128() complex128 { + return v.data.(complex128) +} + +// Complex128Slice gets the value as a []complex128, returns the optionalDefault +// value or nil if the value is not a []complex128. +func (v *Value) Complex128Slice(optionalDefault ...[]complex128) []complex128 { + if s, ok := v.data.([]complex128); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustComplex128Slice gets the value as a []complex128. +// +// Panics if the object is not a []complex128. +func (v *Value) MustComplex128Slice() []complex128 { + return v.data.([]complex128) +} + +// IsComplex128 gets whether the object contained is a complex128 or not. +func (v *Value) IsComplex128() bool { + _, ok := v.data.(complex128) + return ok +} + +// IsComplex128Slice gets whether the object contained is a []complex128 or not. +func (v *Value) IsComplex128Slice() bool { + _, ok := v.data.([]complex128) + return ok +} + +// EachComplex128 calls the specified callback for each object +// in the []complex128. +// +// Panics if the object is the wrong type. +func (v *Value) EachComplex128(callback func(int, complex128) bool) *Value { + for index, val := range v.MustComplex128Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereComplex128 uses the specified decider function to select items +// from the []complex128. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereComplex128(decider func(int, complex128) bool) *Value { + var selected []complex128 + v.EachComplex128(func(index int, val complex128) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupComplex128 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]complex128. +func (v *Value) GroupComplex128(grouper func(int, complex128) string) *Value { + groups := make(map[string][]complex128) + v.EachComplex128(func(index int, val complex128) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]complex128, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceComplex128 uses the specified function to replace each complex128s +// by iterating each item. The data in the returned result will be a +// []complex128 containing the replaced items. +func (v *Value) ReplaceComplex128(replacer func(int, complex128) complex128) *Value { + arr := v.MustComplex128Slice() + replaced := make([]complex128, len(arr)) + v.EachComplex128(func(index int, val complex128) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectComplex128 uses the specified collector function to collect a value +// for each of the complex128s in the slice. The data returned will be a +// []interface{}. 
+func (v *Value) CollectComplex128(collector func(int, complex128) interface{}) *Value { + arr := v.MustComplex128Slice() + collected := make([]interface{}, len(arr)) + v.EachComplex128(func(index int, val complex128) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} diff --git a/vendor/github.com/stretchr/objx/value.go b/vendor/github.com/stretchr/objx/value.go new file mode 100644 index 000000000..956a2211d --- /dev/null +++ b/vendor/github.com/stretchr/objx/value.go @@ -0,0 +1,56 @@ +package objx + +import ( + "fmt" + "strconv" +) + +// Value provides methods for extracting interface{} data in various +// types. +type Value struct { + // data contains the raw data being managed by this Value + data interface{} +} + +// Data returns the raw data contained by this Value +func (v *Value) Data() interface{} { + return v.data +} + +// String returns the value always as a string +func (v *Value) String() string { + switch { + case v.IsStr(): + return v.Str() + case v.IsBool(): + return strconv.FormatBool(v.Bool()) + case v.IsFloat32(): + return strconv.FormatFloat(float64(v.Float32()), 'f', -1, 32) + case v.IsFloat64(): + return strconv.FormatFloat(v.Float64(), 'f', -1, 64) + case v.IsInt(): + return strconv.FormatInt(int64(v.Int()), 10) + case v.IsInt(): + return strconv.FormatInt(int64(v.Int()), 10) + case v.IsInt8(): + return strconv.FormatInt(int64(v.Int8()), 10) + case v.IsInt16(): + return strconv.FormatInt(int64(v.Int16()), 10) + case v.IsInt32(): + return strconv.FormatInt(int64(v.Int32()), 10) + case v.IsInt64(): + return strconv.FormatInt(v.Int64(), 10) + case v.IsUint(): + return strconv.FormatUint(uint64(v.Uint()), 10) + case v.IsUint8(): + return strconv.FormatUint(uint64(v.Uint8()), 10) + case v.IsUint16(): + return strconv.FormatUint(uint64(v.Uint16()), 10) + case v.IsUint32(): + return strconv.FormatUint(uint64(v.Uint32()), 10) + case v.IsUint64(): + return strconv.FormatUint(v.Uint64(), 10) + } + + return fmt.Sprintf("%#v", v.Data()) +} diff --git a/vendor/github.com/stretchr/testify/assert/assertion_format.go b/vendor/github.com/stretchr/testify/assert/assertion_format.go new file mode 100644 index 000000000..ae06a54e2 --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/assertion_format.go @@ -0,0 +1,349 @@ +/* +* CODE GENERATED AUTOMATICALLY WITH github.com/stretchr/testify/_codegen +* THIS FILE MUST NOT BE EDITED BY HAND + */ + +package assert + +import ( + http "net/http" + url "net/url" + time "time" +) + +// Conditionf uses a Comparison to assert a complex condition. +func Conditionf(t TestingT, comp Comparison, msg string, args ...interface{}) bool { + return Condition(t, comp, append([]interface{}{msg}, args...)...) +} + +// Containsf asserts that the specified string, list(array, slice...) or map contains the +// specified substring or element. +// +// assert.Containsf(t, "Hello World", "World", "error message %s", "formatted") +// assert.Containsf(t, ["Hello", "World"], "World", "error message %s", "formatted") +// assert.Containsf(t, {"Hello": "World"}, "Hello", "error message %s", "formatted") +func Containsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool { + return Contains(t, s, contains, append([]interface{}{msg}, args...)...) +} + +// DirExistsf checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. 
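// Illustrative sketch, not part of the vendored sources: the String method in
// objx's value.go above formats primitive values with strconv and falls back
// to fmt.Sprintf("%#v", ...) for anything else. objx.Map and Map.Get are
// assumed from the rest of the package.
//
//	m := objx.Map{"port": 8080, "ratio": 0.5, "tags": []string{"a", "b"}}
//
//	m.Get("port").String()  // "8080"  (int -> strconv.FormatInt)
//	m.Get("ratio").String() // "0.5"   (float64 -> strconv.FormatFloat)
//	m.Get("tags").String()  // `[]string{"a", "b"}` via the %#v fallback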
+func DirExistsf(t TestingT, path string, msg string, args ...interface{}) bool { + return DirExists(t, path, append([]interface{}{msg}, args...)...) +} + +// ElementsMatchf asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. +// +// assert.ElementsMatchf(t, [1, 3, 2, 3], [1, 3, 3, 2], "error message %s", "formatted") +func ElementsMatchf(t TestingT, listA interface{}, listB interface{}, msg string, args ...interface{}) bool { + return ElementsMatch(t, listA, listB, append([]interface{}{msg}, args...)...) +} + +// Emptyf asserts that the specified object is empty. I.e. nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// assert.Emptyf(t, obj, "error message %s", "formatted") +func Emptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + return Empty(t, object, append([]interface{}{msg}, args...)...) +} + +// Equalf asserts that two objects are equal. +// +// assert.Equalf(t, 123, 123, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. +func Equalf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return Equal(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// EqualErrorf asserts that a function returned an error (i.e. not `nil`) +// and that it is equal to the provided error. +// +// actualObj, err := SomeFunction() +// assert.EqualErrorf(t, err, expectedErrorString, "error message %s", "formatted") +func EqualErrorf(t TestingT, theError error, errString string, msg string, args ...interface{}) bool { + return EqualError(t, theError, errString, append([]interface{}{msg}, args...)...) +} + +// EqualValuesf asserts that two objects are equal or convertable to the same types +// and equal. +// +// assert.EqualValuesf(t, uint32(123, "error message %s", "formatted"), int32(123)) +func EqualValuesf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return EqualValues(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// Errorf asserts that a function returned an error (i.e. not `nil`). +// +// actualObj, err := SomeFunction() +// if assert.Errorf(t, err, "error message %s", "formatted") { +// assert.Equal(t, expectedErrorf, err) +// } +func Errorf(t TestingT, err error, msg string, args ...interface{}) bool { + return Error(t, err, append([]interface{}{msg}, args...)...) +} + +// Exactlyf asserts that two objects are equal in value and type. +// +// assert.Exactlyf(t, int32(123, "error message %s", "formatted"), int64(123)) +func Exactlyf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return Exactly(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// Failf reports a failure through +func Failf(t TestingT, failureMessage string, msg string, args ...interface{}) bool { + return Fail(t, failureMessage, append([]interface{}{msg}, args...)...) +} + +// FailNowf fails test +func FailNowf(t TestingT, failureMessage string, msg string, args ...interface{}) bool { + return FailNow(t, failureMessage, append([]interface{}{msg}, args...)...) +} + +// Falsef asserts that the specified value is false. 
+// +// assert.Falsef(t, myBool, "error message %s", "formatted") +func Falsef(t TestingT, value bool, msg string, args ...interface{}) bool { + return False(t, value, append([]interface{}{msg}, args...)...) +} + +// FileExistsf checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func FileExistsf(t TestingT, path string, msg string, args ...interface{}) bool { + return FileExists(t, path, append([]interface{}{msg}, args...)...) +} + +// HTTPBodyContainsf asserts that a specified handler returns a +// body that contains a string. +// +// assert.HTTPBodyContainsf(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPBodyContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + return HTTPBodyContains(t, handler, method, url, values, str, append([]interface{}{msg}, args...)...) +} + +// HTTPBodyNotContainsf asserts that a specified handler returns a +// body that does not contain a string. +// +// assert.HTTPBodyNotContainsf(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPBodyNotContainsf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + return HTTPBodyNotContains(t, handler, method, url, values, str, append([]interface{}{msg}, args...)...) +} + +// HTTPErrorf asserts that a specified handler returns an error status code. +// +// assert.HTTPErrorf(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). +func HTTPErrorf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + return HTTPError(t, handler, method, url, values, append([]interface{}{msg}, args...)...) +} + +// HTTPRedirectf asserts that a specified handler returns a redirect status code. +// +// assert.HTTPRedirectf(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). +func HTTPRedirectf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + return HTTPRedirect(t, handler, method, url, values, append([]interface{}{msg}, args...)...) +} + +// HTTPSuccessf asserts that a specified handler returns a success status code. +// +// assert.HTTPSuccessf(t, myHandler, "POST", "http://www.google.com", nil, "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func HTTPSuccessf(t TestingT, handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + return HTTPSuccess(t, handler, method, url, values, append([]interface{}{msg}, args...)...) +} + +// Implementsf asserts that an object is implemented by the specified interface. 
+// +// assert.Implementsf(t, (*MyInterface, "error message %s", "formatted")(nil), new(MyObject)) +func Implementsf(t TestingT, interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool { + return Implements(t, interfaceObject, object, append([]interface{}{msg}, args...)...) +} + +// InDeltaf asserts that the two numerals are within delta of each other. +// +// assert.InDeltaf(t, math.Pi, (22 / 7.0, "error message %s", "formatted"), 0.01) +func InDeltaf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + return InDelta(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// InDeltaMapValuesf is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func InDeltaMapValuesf(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + return InDeltaMapValues(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// InDeltaSlicef is the same as InDelta, except it compares two slices. +func InDeltaSlicef(t TestingT, expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + return InDeltaSlice(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// InEpsilonf asserts that expected and actual have a relative error less than epsilon +func InEpsilonf(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + return InEpsilon(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...) +} + +// InEpsilonSlicef is the same as InEpsilon, except it compares each value from two slices. +func InEpsilonSlicef(t TestingT, expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + return InEpsilonSlice(t, expected, actual, epsilon, append([]interface{}{msg}, args...)...) +} + +// IsTypef asserts that the specified objects are of the same type. +func IsTypef(t TestingT, expectedType interface{}, object interface{}, msg string, args ...interface{}) bool { + return IsType(t, expectedType, object, append([]interface{}{msg}, args...)...) +} + +// JSONEqf asserts that two JSON strings are equivalent. +// +// assert.JSONEqf(t, `{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`, "error message %s", "formatted") +func JSONEqf(t TestingT, expected string, actual string, msg string, args ...interface{}) bool { + return JSONEq(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// Lenf asserts that the specified object has specific length. +// Lenf also fails if the object has a type that len() not accept. +// +// assert.Lenf(t, mySlice, 3, "error message %s", "formatted") +func Lenf(t TestingT, object interface{}, length int, msg string, args ...interface{}) bool { + return Len(t, object, length, append([]interface{}{msg}, args...)...) +} + +// Nilf asserts that the specified object is nil. +// +// assert.Nilf(t, err, "error message %s", "formatted") +func Nilf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + return Nil(t, object, append([]interface{}{msg}, args...)...) +} + +// NoErrorf asserts that a function returned no error (i.e. `nil`). 
+// +// actualObj, err := SomeFunction() +// if assert.NoErrorf(t, err, "error message %s", "formatted") { +// assert.Equal(t, expectedObj, actualObj) +// } +func NoErrorf(t TestingT, err error, msg string, args ...interface{}) bool { + return NoError(t, err, append([]interface{}{msg}, args...)...) +} + +// NotContainsf asserts that the specified string, list(array, slice...) or map does NOT contain the +// specified substring or element. +// +// assert.NotContainsf(t, "Hello World", "Earth", "error message %s", "formatted") +// assert.NotContainsf(t, ["Hello", "World"], "Earth", "error message %s", "formatted") +// assert.NotContainsf(t, {"Hello": "World"}, "Earth", "error message %s", "formatted") +func NotContainsf(t TestingT, s interface{}, contains interface{}, msg string, args ...interface{}) bool { + return NotContains(t, s, contains, append([]interface{}{msg}, args...)...) +} + +// NotEmptyf asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// if assert.NotEmptyf(t, obj, "error message %s", "formatted") { +// assert.Equal(t, "two", obj[1]) +// } +func NotEmptyf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + return NotEmpty(t, object, append([]interface{}{msg}, args...)...) +} + +// NotEqualf asserts that the specified values are NOT equal. +// +// assert.NotEqualf(t, obj1, obj2, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). +func NotEqualf(t TestingT, expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return NotEqual(t, expected, actual, append([]interface{}{msg}, args...)...) +} + +// NotNilf asserts that the specified object is not nil. +// +// assert.NotNilf(t, err, "error message %s", "formatted") +func NotNilf(t TestingT, object interface{}, msg string, args ...interface{}) bool { + return NotNil(t, object, append([]interface{}{msg}, args...)...) +} + +// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic. +// +// assert.NotPanicsf(t, func(){ RemainCalm() }, "error message %s", "formatted") +func NotPanicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool { + return NotPanics(t, f, append([]interface{}{msg}, args...)...) +} + +// NotRegexpf asserts that a specified regexp does not match a string. +// +// assert.NotRegexpf(t, regexp.MustCompile("starts", "error message %s", "formatted"), "it's starting") +// assert.NotRegexpf(t, "^start", "it's not starting", "error message %s", "formatted") +func NotRegexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool { + return NotRegexp(t, rx, str, append([]interface{}{msg}, args...)...) +} + +// NotSubsetf asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). +// +// assert.NotSubsetf(t, [1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]", "error message %s", "formatted") +func NotSubsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool { + return NotSubset(t, list, subset, append([]interface{}{msg}, args...)...) +} + +// NotZerof asserts that i is not the zero value for its type. +func NotZerof(t TestingT, i interface{}, msg string, args ...interface{}) bool { + return NotZero(t, i, append([]interface{}{msg}, args...)...) 
+} + +// Panicsf asserts that the code inside the specified PanicTestFunc panics. +// +// assert.Panicsf(t, func(){ GoCrazy() }, "error message %s", "formatted") +func Panicsf(t TestingT, f PanicTestFunc, msg string, args ...interface{}) bool { + return Panics(t, f, append([]interface{}{msg}, args...)...) +} + +// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// assert.PanicsWithValuef(t, "crazy error", func(){ GoCrazy() }, "error message %s", "formatted") +func PanicsWithValuef(t TestingT, expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool { + return PanicsWithValue(t, expected, f, append([]interface{}{msg}, args...)...) +} + +// Regexpf asserts that a specified regexp matches a string. +// +// assert.Regexpf(t, regexp.MustCompile("start", "error message %s", "formatted"), "it's starting") +// assert.Regexpf(t, "start...$", "it's not starting", "error message %s", "formatted") +func Regexpf(t TestingT, rx interface{}, str interface{}, msg string, args ...interface{}) bool { + return Regexp(t, rx, str, append([]interface{}{msg}, args...)...) +} + +// Subsetf asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). +// +// assert.Subsetf(t, [1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]", "error message %s", "formatted") +func Subsetf(t TestingT, list interface{}, subset interface{}, msg string, args ...interface{}) bool { + return Subset(t, list, subset, append([]interface{}{msg}, args...)...) +} + +// Truef asserts that the specified value is true. +// +// assert.Truef(t, myBool, "error message %s", "formatted") +func Truef(t TestingT, value bool, msg string, args ...interface{}) bool { + return True(t, value, append([]interface{}{msg}, args...)...) +} + +// WithinDurationf asserts that the two times are within duration delta of each other. +// +// assert.WithinDurationf(t, time.Now(), time.Now(), 10*time.Second, "error message %s", "formatted") +func WithinDurationf(t TestingT, expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool { + return WithinDuration(t, expected, actual, delta, append([]interface{}{msg}, args...)...) +} + +// Zerof asserts that i is the zero value for its type. +func Zerof(t TestingT, i interface{}, msg string, args ...interface{}) bool { + return Zero(t, i, append([]interface{}{msg}, args...)...) +} diff --git a/vendor/github.com/stretchr/testify/assert/assertion_format.go.tmpl b/vendor/github.com/stretchr/testify/assert/assertion_format.go.tmpl new file mode 100644 index 000000000..c5cc66f43 --- /dev/null +++ b/vendor/github.com/stretchr/testify/assert/assertion_format.go.tmpl @@ -0,0 +1,4 @@ +{{.CommentFormat}} +func {{.DocInfo.Name}}f(t TestingT, {{.ParamsFormat}}) bool { + return {{.DocInfo.Name}}(t, {{.ForwardedParamsFormat}}) +} diff --git a/vendor/github.com/stretchr/testify/assert/assertion_forward.go b/vendor/github.com/stretchr/testify/assert/assertion_forward.go index 29b71d176..ffa5428f3 100644 --- a/vendor/github.com/stretchr/testify/assert/assertion_forward.go +++ b/vendor/github.com/stretchr/testify/assert/assertion_forward.go @@ -16,33 +16,82 @@ func (a *Assertions) Condition(comp Comparison, msgAndArgs ...interface{}) bool return Condition(a.t, comp, msgAndArgs...) } +// Conditionf uses a Comparison to assert a complex condition. 
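// Illustrative sketch, not part of the vendored testify source: every
// generated "...f" helper in assertion_format.go simply prepends its msg to
// args and forwards to the base assertion, so the two calls below behave
// identically. want, got and key are placeholders.
//
//	assert.Equalf(t, want, got, "unexpected value for key %q", key)
//	assert.Equal(t, want, got, "unexpected value for key %q", key)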
+func (a *Assertions) Conditionf(comp Comparison, msg string, args ...interface{}) bool { + return Conditionf(a.t, comp, msg, args...) +} + // Contains asserts that the specified string, list(array, slice...) or map contains the // specified substring or element. // -// a.Contains("Hello World", "World", "But 'Hello World' does contain 'World'") -// a.Contains(["Hello", "World"], "World", "But ["Hello", "World"] does contain 'World'") -// a.Contains({"Hello": "World"}, "Hello", "But {'Hello': 'World'} does contain 'Hello'") -// -// Returns whether the assertion was successful (true) or not (false). +// a.Contains("Hello World", "World") +// a.Contains(["Hello", "World"], "World") +// a.Contains({"Hello": "World"}, "Hello") func (a *Assertions) Contains(s interface{}, contains interface{}, msgAndArgs ...interface{}) bool { return Contains(a.t, s, contains, msgAndArgs...) } +// Containsf asserts that the specified string, list(array, slice...) or map contains the +// specified substring or element. +// +// a.Containsf("Hello World", "World", "error message %s", "formatted") +// a.Containsf(["Hello", "World"], "World", "error message %s", "formatted") +// a.Containsf({"Hello": "World"}, "Hello", "error message %s", "formatted") +func (a *Assertions) Containsf(s interface{}, contains interface{}, msg string, args ...interface{}) bool { + return Containsf(a.t, s, contains, msg, args...) +} + +// DirExists checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. +func (a *Assertions) DirExists(path string, msgAndArgs ...interface{}) bool { + return DirExists(a.t, path, msgAndArgs...) +} + +// DirExistsf checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. +func (a *Assertions) DirExistsf(path string, msg string, args ...interface{}) bool { + return DirExistsf(a.t, path, msg, args...) +} + +// ElementsMatch asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. +// +// a.ElementsMatch([1, 3, 2, 3], [1, 3, 3, 2]) +func (a *Assertions) ElementsMatch(listA interface{}, listB interface{}, msgAndArgs ...interface{}) bool { + return ElementsMatch(a.t, listA, listB, msgAndArgs...) +} + +// ElementsMatchf asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. +// +// a.ElementsMatchf([1, 3, 2, 3], [1, 3, 3, 2], "error message %s", "formatted") +func (a *Assertions) ElementsMatchf(listA interface{}, listB interface{}, msg string, args ...interface{}) bool { + return ElementsMatchf(a.t, listA, listB, msg, args...) +} + // Empty asserts that the specified object is empty. I.e. nil, "", false, 0 or either // a slice or a channel with len == 0. // // a.Empty(obj) -// -// Returns whether the assertion was successful (true) or not (false). func (a *Assertions) Empty(object interface{}, msgAndArgs ...interface{}) bool { return Empty(a.t, object, msgAndArgs...) } +// Emptyf asserts that the specified object is empty. I.e. nil, "", false, 0 or either +// a slice or a channel with len == 0. 
+// +// a.Emptyf(obj, "error message %s", "formatted") +func (a *Assertions) Emptyf(object interface{}, msg string, args ...interface{}) bool { + return Emptyf(a.t, object, msg, args...) +} + // Equal asserts that two objects are equal. // -// a.Equal(123, 123, "123 and 123 should be equal") +// a.Equal(123, 123) // -// Returns whether the assertion was successful (true) or not (false). +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. func (a *Assertions) Equal(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { return Equal(a.t, expected, actual, msgAndArgs...) } @@ -51,44 +100,81 @@ func (a *Assertions) Equal(expected interface{}, actual interface{}, msgAndArgs // and that it is equal to the provided error. // // actualObj, err := SomeFunction() -// a.EqualError(err, expectedErrorString, "An error was expected") -// -// Returns whether the assertion was successful (true) or not (false). +// a.EqualError(err, expectedErrorString) func (a *Assertions) EqualError(theError error, errString string, msgAndArgs ...interface{}) bool { return EqualError(a.t, theError, errString, msgAndArgs...) } +// EqualErrorf asserts that a function returned an error (i.e. not `nil`) +// and that it is equal to the provided error. +// +// actualObj, err := SomeFunction() +// a.EqualErrorf(err, expectedErrorString, "error message %s", "formatted") +func (a *Assertions) EqualErrorf(theError error, errString string, msg string, args ...interface{}) bool { + return EqualErrorf(a.t, theError, errString, msg, args...) +} + // EqualValues asserts that two objects are equal or convertable to the same types // and equal. // -// a.EqualValues(uint32(123), int32(123), "123 and 123 should be equal") -// -// Returns whether the assertion was successful (true) or not (false). +// a.EqualValues(uint32(123), int32(123)) func (a *Assertions) EqualValues(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { return EqualValues(a.t, expected, actual, msgAndArgs...) } +// EqualValuesf asserts that two objects are equal or convertable to the same types +// and equal. +// +// a.EqualValuesf(uint32(123, "error message %s", "formatted"), int32(123)) +func (a *Assertions) EqualValuesf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return EqualValuesf(a.t, expected, actual, msg, args...) +} + +// Equalf asserts that two objects are equal. +// +// a.Equalf(123, 123, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. +func (a *Assertions) Equalf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return Equalf(a.t, expected, actual, msg, args...) +} + // Error asserts that a function returned an error (i.e. not `nil`). // // actualObj, err := SomeFunction() -// if a.Error(err, "An error was expected") { -// assert.Equal(t, err, expectedError) +// if a.Error(err) { +// assert.Equal(t, expectedError, err) // } -// -// Returns whether the assertion was successful (true) or not (false). func (a *Assertions) Error(err error, msgAndArgs ...interface{}) bool { return Error(a.t, err, msgAndArgs...) } -// Exactly asserts that two objects are equal is value and type. 
+// Errorf asserts that a function returned an error (i.e. not `nil`). // -// a.Exactly(int32(123), int64(123), "123 and 123 should NOT be equal") +// actualObj, err := SomeFunction() +// if a.Errorf(err, "error message %s", "formatted") { +// assert.Equal(t, expectedErrorf, err) +// } +func (a *Assertions) Errorf(err error, msg string, args ...interface{}) bool { + return Errorf(a.t, err, msg, args...) +} + +// Exactly asserts that two objects are equal in value and type. // -// Returns whether the assertion was successful (true) or not (false). +// a.Exactly(int32(123), int64(123)) func (a *Assertions) Exactly(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { return Exactly(a.t, expected, actual, msgAndArgs...) } +// Exactlyf asserts that two objects are equal in value and type. +// +// a.Exactlyf(int32(123, "error message %s", "formatted"), int64(123)) +func (a *Assertions) Exactlyf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return Exactlyf(a.t, expected, actual, msg, args...) +} + // Fail reports a failure through func (a *Assertions) Fail(failureMessage string, msgAndArgs ...interface{}) bool { return Fail(a.t, failureMessage, msgAndArgs...) @@ -99,23 +185,58 @@ func (a *Assertions) FailNow(failureMessage string, msgAndArgs ...interface{}) b return FailNow(a.t, failureMessage, msgAndArgs...) } +// FailNowf fails test +func (a *Assertions) FailNowf(failureMessage string, msg string, args ...interface{}) bool { + return FailNowf(a.t, failureMessage, msg, args...) +} + +// Failf reports a failure through +func (a *Assertions) Failf(failureMessage string, msg string, args ...interface{}) bool { + return Failf(a.t, failureMessage, msg, args...) +} + // False asserts that the specified value is false. // -// a.False(myBool, "myBool should be false") -// -// Returns whether the assertion was successful (true) or not (false). +// a.False(myBool) func (a *Assertions) False(value bool, msgAndArgs ...interface{}) bool { return False(a.t, value, msgAndArgs...) } +// Falsef asserts that the specified value is false. +// +// a.Falsef(myBool, "error message %s", "formatted") +func (a *Assertions) Falsef(value bool, msg string, args ...interface{}) bool { + return Falsef(a.t, value, msg, args...) +} + +// FileExists checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func (a *Assertions) FileExists(path string, msgAndArgs ...interface{}) bool { + return FileExists(a.t, path, msgAndArgs...) +} + +// FileExistsf checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func (a *Assertions) FileExistsf(path string, msg string, args ...interface{}) bool { + return FileExistsf(a.t, path, msg, args...) +} + // HTTPBodyContains asserts that a specified handler returns a // body that contains a string. // // a.HTTPBodyContains(myHandler, "www.google.com", nil, "I'm Feeling Lucky") // // Returns whether the assertion was successful (true) or not (false). 
-func (a *Assertions) HTTPBodyContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}) bool { - return HTTPBodyContains(a.t, handler, method, url, values, str) +func (a *Assertions) HTTPBodyContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { + return HTTPBodyContains(a.t, handler, method, url, values, str, msgAndArgs...) +} + +// HTTPBodyContainsf asserts that a specified handler returns a +// body that contains a string. +// +// a.HTTPBodyContainsf(myHandler, "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPBodyContainsf(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + return HTTPBodyContainsf(a.t, handler, method, url, values, str, msg, args...) } // HTTPBodyNotContains asserts that a specified handler returns a @@ -124,8 +245,18 @@ func (a *Assertions) HTTPBodyContains(handler http.HandlerFunc, method string, u // a.HTTPBodyNotContains(myHandler, "www.google.com", nil, "I'm Feeling Lucky") // // Returns whether the assertion was successful (true) or not (false). -func (a *Assertions) HTTPBodyNotContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}) bool { - return HTTPBodyNotContains(a.t, handler, method, url, values, str) +func (a *Assertions) HTTPBodyNotContains(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { + return HTTPBodyNotContains(a.t, handler, method, url, values, str, msgAndArgs...) +} + +// HTTPBodyNotContainsf asserts that a specified handler returns a +// body that does not contain a string. +// +// a.HTTPBodyNotContainsf(myHandler, "www.google.com", nil, "I'm Feeling Lucky", "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPBodyNotContainsf(handler http.HandlerFunc, method string, url string, values url.Values, str interface{}, msg string, args ...interface{}) bool { + return HTTPBodyNotContainsf(a.t, handler, method, url, values, str, msg, args...) } // HTTPError asserts that a specified handler returns an error status code. @@ -133,8 +264,17 @@ func (a *Assertions) HTTPBodyNotContains(handler http.HandlerFunc, method string // a.HTTPError(myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} // // Returns whether the assertion was successful (true) or not (false). -func (a *Assertions) HTTPError(handler http.HandlerFunc, method string, url string, values url.Values) bool { - return HTTPError(a.t, handler, method, url, values) +func (a *Assertions) HTTPError(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool { + return HTTPError(a.t, handler, method, url, values, msgAndArgs...) +} + +// HTTPErrorf asserts that a specified handler returns an error status code. +// +// a.HTTPErrorf(myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). +func (a *Assertions) HTTPErrorf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + return HTTPErrorf(a.t, handler, method, url, values, msg, args...) 
} // HTTPRedirect asserts that a specified handler returns a redirect status code. @@ -142,8 +282,17 @@ func (a *Assertions) HTTPError(handler http.HandlerFunc, method string, url stri // a.HTTPRedirect(myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} // // Returns whether the assertion was successful (true) or not (false). -func (a *Assertions) HTTPRedirect(handler http.HandlerFunc, method string, url string, values url.Values) bool { - return HTTPRedirect(a.t, handler, method, url, values) +func (a *Assertions) HTTPRedirect(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool { + return HTTPRedirect(a.t, handler, method, url, values, msgAndArgs...) +} + +// HTTPRedirectf asserts that a specified handler returns a redirect status code. +// +// a.HTTPRedirectf(myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} +// +// Returns whether the assertion was successful (true, "error message %s", "formatted") or not (false). +func (a *Assertions) HTTPRedirectf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + return HTTPRedirectf(a.t, handler, method, url, values, msg, args...) } // HTTPSuccess asserts that a specified handler returns a success status code. @@ -151,34 +300,68 @@ func (a *Assertions) HTTPRedirect(handler http.HandlerFunc, method string, url s // a.HTTPSuccess(myHandler, "POST", "http://www.google.com", nil) // // Returns whether the assertion was successful (true) or not (false). -func (a *Assertions) HTTPSuccess(handler http.HandlerFunc, method string, url string, values url.Values) bool { - return HTTPSuccess(a.t, handler, method, url, values) +func (a *Assertions) HTTPSuccess(handler http.HandlerFunc, method string, url string, values url.Values, msgAndArgs ...interface{}) bool { + return HTTPSuccess(a.t, handler, method, url, values, msgAndArgs...) +} + +// HTTPSuccessf asserts that a specified handler returns a success status code. +// +// a.HTTPSuccessf(myHandler, "POST", "http://www.google.com", nil, "error message %s", "formatted") +// +// Returns whether the assertion was successful (true) or not (false). +func (a *Assertions) HTTPSuccessf(handler http.HandlerFunc, method string, url string, values url.Values, msg string, args ...interface{}) bool { + return HTTPSuccessf(a.t, handler, method, url, values, msg, args...) } // Implements asserts that an object is implemented by the specified interface. // -// a.Implements((*MyInterface)(nil), new(MyObject), "MyObject") +// a.Implements((*MyInterface)(nil), new(MyObject)) func (a *Assertions) Implements(interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool { return Implements(a.t, interfaceObject, object, msgAndArgs...) } +// Implementsf asserts that an object is implemented by the specified interface. +// +// a.Implementsf((*MyInterface, "error message %s", "formatted")(nil), new(MyObject)) +func (a *Assertions) Implementsf(interfaceObject interface{}, object interface{}, msg string, args ...interface{}) bool { + return Implementsf(a.t, interfaceObject, object, msg, args...) +} + // InDelta asserts that the two numerals are within delta of each other. // // a.InDelta(math.Pi, (22 / 7.0), 0.01) -// -// Returns whether the assertion was successful (true) or not (false). func (a *Assertions) InDelta(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { return InDelta(a.t, expected, actual, delta, msgAndArgs...) 
} +// InDeltaMapValues is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func (a *Assertions) InDeltaMapValues(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + return InDeltaMapValues(a.t, expected, actual, delta, msgAndArgs...) +} + +// InDeltaMapValuesf is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func (a *Assertions) InDeltaMapValuesf(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + return InDeltaMapValuesf(a.t, expected, actual, delta, msg, args...) +} + // InDeltaSlice is the same as InDelta, except it compares two slices. func (a *Assertions) InDeltaSlice(expected interface{}, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { return InDeltaSlice(a.t, expected, actual, delta, msgAndArgs...) } -// InEpsilon asserts that expected and actual have a relative error less than epsilon +// InDeltaSlicef is the same as InDelta, except it compares two slices. +func (a *Assertions) InDeltaSlicef(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + return InDeltaSlicef(a.t, expected, actual, delta, msg, args...) +} + +// InDeltaf asserts that the two numerals are within delta of each other. // -// Returns whether the assertion was successful (true) or not (false). +// a.InDeltaf(math.Pi, (22 / 7.0, "error message %s", "formatted"), 0.01) +func (a *Assertions) InDeltaf(expected interface{}, actual interface{}, delta float64, msg string, args ...interface{}) bool { + return InDeltaf(a.t, expected, actual, delta, msg, args...) +} + +// InEpsilon asserts that expected and actual have a relative error less than epsilon func (a *Assertions) InEpsilon(expected interface{}, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool { return InEpsilon(a.t, expected, actual, epsilon, msgAndArgs...) } @@ -188,159 +371,316 @@ func (a *Assertions) InEpsilonSlice(expected interface{}, actual interface{}, ep return InEpsilonSlice(a.t, expected, actual, epsilon, msgAndArgs...) } +// InEpsilonSlicef is the same as InEpsilon, except it compares each value from two slices. +func (a *Assertions) InEpsilonSlicef(expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + return InEpsilonSlicef(a.t, expected, actual, epsilon, msg, args...) +} + +// InEpsilonf asserts that expected and actual have a relative error less than epsilon +func (a *Assertions) InEpsilonf(expected interface{}, actual interface{}, epsilon float64, msg string, args ...interface{}) bool { + return InEpsilonf(a.t, expected, actual, epsilon, msg, args...) +} + // IsType asserts that the specified objects are of the same type. func (a *Assertions) IsType(expectedType interface{}, object interface{}, msgAndArgs ...interface{}) bool { return IsType(a.t, expectedType, object, msgAndArgs...) } +// IsTypef asserts that the specified objects are of the same type. +func (a *Assertions) IsTypef(expectedType interface{}, object interface{}, msg string, args ...interface{}) bool { + return IsTypef(a.t, expectedType, object, msg, args...) +} + // JSONEq asserts that two JSON strings are equivalent. // // a.JSONEq(`{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`) -// -// Returns whether the assertion was successful (true) or not (false). 
func (a *Assertions) JSONEq(expected string, actual string, msgAndArgs ...interface{}) bool { return JSONEq(a.t, expected, actual, msgAndArgs...) } +// JSONEqf asserts that two JSON strings are equivalent. +// +// a.JSONEqf(`{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`, "error message %s", "formatted") +func (a *Assertions) JSONEqf(expected string, actual string, msg string, args ...interface{}) bool { + return JSONEqf(a.t, expected, actual, msg, args...) +} + // Len asserts that the specified object has specific length. // Len also fails if the object has a type that len() not accept. // -// a.Len(mySlice, 3, "The size of slice is not 3") -// -// Returns whether the assertion was successful (true) or not (false). +// a.Len(mySlice, 3) func (a *Assertions) Len(object interface{}, length int, msgAndArgs ...interface{}) bool { return Len(a.t, object, length, msgAndArgs...) } +// Lenf asserts that the specified object has specific length. +// Lenf also fails if the object has a type that len() not accept. +// +// a.Lenf(mySlice, 3, "error message %s", "formatted") +func (a *Assertions) Lenf(object interface{}, length int, msg string, args ...interface{}) bool { + return Lenf(a.t, object, length, msg, args...) +} + // Nil asserts that the specified object is nil. // -// a.Nil(err, "err should be nothing") -// -// Returns whether the assertion was successful (true) or not (false). +// a.Nil(err) func (a *Assertions) Nil(object interface{}, msgAndArgs ...interface{}) bool { return Nil(a.t, object, msgAndArgs...) } +// Nilf asserts that the specified object is nil. +// +// a.Nilf(err, "error message %s", "formatted") +func (a *Assertions) Nilf(object interface{}, msg string, args ...interface{}) bool { + return Nilf(a.t, object, msg, args...) +} + // NoError asserts that a function returned no error (i.e. `nil`). // // actualObj, err := SomeFunction() // if a.NoError(err) { -// assert.Equal(t, actualObj, expectedObj) +// assert.Equal(t, expectedObj, actualObj) // } -// -// Returns whether the assertion was successful (true) or not (false). func (a *Assertions) NoError(err error, msgAndArgs ...interface{}) bool { return NoError(a.t, err, msgAndArgs...) } +// NoErrorf asserts that a function returned no error (i.e. `nil`). +// +// actualObj, err := SomeFunction() +// if a.NoErrorf(err, "error message %s", "formatted") { +// assert.Equal(t, expectedObj, actualObj) +// } +func (a *Assertions) NoErrorf(err error, msg string, args ...interface{}) bool { + return NoErrorf(a.t, err, msg, args...) +} + // NotContains asserts that the specified string, list(array, slice...) or map does NOT contain the // specified substring or element. // -// a.NotContains("Hello World", "Earth", "But 'Hello World' does NOT contain 'Earth'") -// a.NotContains(["Hello", "World"], "Earth", "But ['Hello', 'World'] does NOT contain 'Earth'") -// a.NotContains({"Hello": "World"}, "Earth", "But {'Hello': 'World'} does NOT contain 'Earth'") -// -// Returns whether the assertion was successful (true) or not (false). +// a.NotContains("Hello World", "Earth") +// a.NotContains(["Hello", "World"], "Earth") +// a.NotContains({"Hello": "World"}, "Earth") func (a *Assertions) NotContains(s interface{}, contains interface{}, msgAndArgs ...interface{}) bool { return NotContains(a.t, s, contains, msgAndArgs...) } +// NotContainsf asserts that the specified string, list(array, slice...) or map does NOT contain the +// specified substring or element. 
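The forwarders above come in two flavours: the originals take an optional `msgAndArgs ...interface{}` trailer (a message plus its format arguments), while the new `...f` variants take an explicit printf-style format string. A minimal sketch of both styles; the test name and slice are made up for illustration:

```
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestMessageStyles is a hypothetical test showing the two ways of
// attaching a custom failure message to a forwarded assertion.
func TestMessageStyles(t *testing.T) {
	a := assert.New(t)
	items := []string{"alpha", "beta", "gamma"}

	// Variadic msgAndArgs: the first element is the message, any further
	// elements are its format arguments.
	a.Len(items, 3, "expected %d items", 3)
	a.NotNil(items, "items should have been initialised")

	// The f variants take the format string as an explicit parameter.
	a.Lenf(items, 3, "expected %d items, got %d", 3, len(items))
	a.NoErrorf(nil, "setup for %q should not fail", "items")
}
```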
+// +// a.NotContainsf("Hello World", "Earth", "error message %s", "formatted") +// a.NotContainsf(["Hello", "World"], "Earth", "error message %s", "formatted") +// a.NotContainsf({"Hello": "World"}, "Earth", "error message %s", "formatted") +func (a *Assertions) NotContainsf(s interface{}, contains interface{}, msg string, args ...interface{}) bool { + return NotContainsf(a.t, s, contains, msg, args...) +} + // NotEmpty asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either // a slice or a channel with len == 0. // // if a.NotEmpty(obj) { // assert.Equal(t, "two", obj[1]) // } -// -// Returns whether the assertion was successful (true) or not (false). func (a *Assertions) NotEmpty(object interface{}, msgAndArgs ...interface{}) bool { return NotEmpty(a.t, object, msgAndArgs...) } +// NotEmptyf asserts that the specified object is NOT empty. I.e. not nil, "", false, 0 or either +// a slice or a channel with len == 0. +// +// if a.NotEmptyf(obj, "error message %s", "formatted") { +// assert.Equal(t, "two", obj[1]) +// } +func (a *Assertions) NotEmptyf(object interface{}, msg string, args ...interface{}) bool { + return NotEmptyf(a.t, object, msg, args...) +} + // NotEqual asserts that the specified values are NOT equal. // -// a.NotEqual(obj1, obj2, "two objects shouldn't be equal") +// a.NotEqual(obj1, obj2) // -// Returns whether the assertion was successful (true) or not (false). +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). func (a *Assertions) NotEqual(expected interface{}, actual interface{}, msgAndArgs ...interface{}) bool { return NotEqual(a.t, expected, actual, msgAndArgs...) } +// NotEqualf asserts that the specified values are NOT equal. +// +// a.NotEqualf(obj1, obj2, "error message %s", "formatted") +// +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). +func (a *Assertions) NotEqualf(expected interface{}, actual interface{}, msg string, args ...interface{}) bool { + return NotEqualf(a.t, expected, actual, msg, args...) +} + // NotNil asserts that the specified object is not nil. // -// a.NotNil(err, "err should be something") -// -// Returns whether the assertion was successful (true) or not (false). +// a.NotNil(err) func (a *Assertions) NotNil(object interface{}, msgAndArgs ...interface{}) bool { return NotNil(a.t, object, msgAndArgs...) } +// NotNilf asserts that the specified object is not nil. +// +// a.NotNilf(err, "error message %s", "formatted") +func (a *Assertions) NotNilf(object interface{}, msg string, args ...interface{}) bool { + return NotNilf(a.t, object, msg, args...) +} + // NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic. // -// a.NotPanics(func(){ -// RemainCalm() -// }, "Calling RemainCalm() should NOT panic") -// -// Returns whether the assertion was successful (true) or not (false). +// a.NotPanics(func(){ RemainCalm() }) func (a *Assertions) NotPanics(f PanicTestFunc, msgAndArgs ...interface{}) bool { return NotPanics(a.t, f, msgAndArgs...) } +// NotPanicsf asserts that the code inside the specified PanicTestFunc does NOT panic. +// +// a.NotPanicsf(func(){ RemainCalm() }, "error message %s", "formatted") +func (a *Assertions) NotPanicsf(f PanicTestFunc, msg string, args ...interface{}) bool { + return NotPanicsf(a.t, f, msg, args...) +} + // NotRegexp asserts that a specified regexp does not match a string. 
// // a.NotRegexp(regexp.MustCompile("starts"), "it's starting") // a.NotRegexp("^start", "it's not starting") -// -// Returns whether the assertion was successful (true) or not (false). func (a *Assertions) NotRegexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { return NotRegexp(a.t, rx, str, msgAndArgs...) } -// NotZero asserts that i is not the zero value for its type and returns the truth. +// NotRegexpf asserts that a specified regexp does not match a string. +// +// a.NotRegexpf(regexp.MustCompile("starts", "error message %s", "formatted"), "it's starting") +// a.NotRegexpf("^start", "it's not starting", "error message %s", "formatted") +func (a *Assertions) NotRegexpf(rx interface{}, str interface{}, msg string, args ...interface{}) bool { + return NotRegexpf(a.t, rx, str, msg, args...) +} + +// NotSubset asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). +// +// a.NotSubset([1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]") +func (a *Assertions) NotSubset(list interface{}, subset interface{}, msgAndArgs ...interface{}) bool { + return NotSubset(a.t, list, subset, msgAndArgs...) +} + +// NotSubsetf asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). +// +// a.NotSubsetf([1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]", "error message %s", "formatted") +func (a *Assertions) NotSubsetf(list interface{}, subset interface{}, msg string, args ...interface{}) bool { + return NotSubsetf(a.t, list, subset, msg, args...) +} + +// NotZero asserts that i is not the zero value for its type. func (a *Assertions) NotZero(i interface{}, msgAndArgs ...interface{}) bool { return NotZero(a.t, i, msgAndArgs...) } +// NotZerof asserts that i is not the zero value for its type. +func (a *Assertions) NotZerof(i interface{}, msg string, args ...interface{}) bool { + return NotZerof(a.t, i, msg, args...) +} + // Panics asserts that the code inside the specified PanicTestFunc panics. // -// a.Panics(func(){ -// GoCrazy() -// }, "Calling GoCrazy() should panic") -// -// Returns whether the assertion was successful (true) or not (false). +// a.Panics(func(){ GoCrazy() }) func (a *Assertions) Panics(f PanicTestFunc, msgAndArgs ...interface{}) bool { return Panics(a.t, f, msgAndArgs...) } +// PanicsWithValue asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// a.PanicsWithValue("crazy error", func(){ GoCrazy() }) +func (a *Assertions) PanicsWithValue(expected interface{}, f PanicTestFunc, msgAndArgs ...interface{}) bool { + return PanicsWithValue(a.t, expected, f, msgAndArgs...) +} + +// PanicsWithValuef asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// a.PanicsWithValuef("crazy error", func(){ GoCrazy() }, "error message %s", "formatted") +func (a *Assertions) PanicsWithValuef(expected interface{}, f PanicTestFunc, msg string, args ...interface{}) bool { + return PanicsWithValuef(a.t, expected, f, msg, args...) +} + +// Panicsf asserts that the code inside the specified PanicTestFunc panics. +// +// a.Panicsf(func(){ GoCrazy() }, "error message %s", "formatted") +func (a *Assertions) Panicsf(f PanicTestFunc, msg string, args ...interface{}) bool { + return Panicsf(a.t, f, msg, args...) 
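`PanicsWithValue`, added alongside the `f` variants, checks not only that the function panics but that the recovered value equals the expected one. A short sketch, with a `divide` helper invented for the example:

```
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// divide panics with a descriptive value when the divisor is zero.
func divide(a, b int) int {
	if b == 0 {
		panic("division by zero")
	}
	return a / b
}

func TestDividePanics(t *testing.T) {
	a := assert.New(t)

	// Passes only if divide panics AND the recovered value is exactly
	// "division by zero" (compared with ==, per PanicsWithValue above).
	a.PanicsWithValue("division by zero", func() { divide(1, 0) })

	// Plain Panics does not inspect the recovered value.
	a.Panics(func() { divide(1, 0) })

	a.NotPanics(func() { divide(4, 2) })
}
```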
+} + // Regexp asserts that a specified regexp matches a string. // // a.Regexp(regexp.MustCompile("start"), "it's starting") // a.Regexp("start...$", "it's not starting") -// -// Returns whether the assertion was successful (true) or not (false). func (a *Assertions) Regexp(rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { return Regexp(a.t, rx, str, msgAndArgs...) } +// Regexpf asserts that a specified regexp matches a string. +// +// a.Regexpf(regexp.MustCompile("start", "error message %s", "formatted"), "it's starting") +// a.Regexpf("start...$", "it's not starting", "error message %s", "formatted") +func (a *Assertions) Regexpf(rx interface{}, str interface{}, msg string, args ...interface{}) bool { + return Regexpf(a.t, rx, str, msg, args...) +} + +// Subset asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). +// +// a.Subset([1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]") +func (a *Assertions) Subset(list interface{}, subset interface{}, msgAndArgs ...interface{}) bool { + return Subset(a.t, list, subset, msgAndArgs...) +} + +// Subsetf asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). +// +// a.Subsetf([1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]", "error message %s", "formatted") +func (a *Assertions) Subsetf(list interface{}, subset interface{}, msg string, args ...interface{}) bool { + return Subsetf(a.t, list, subset, msg, args...) +} + // True asserts that the specified value is true. // -// a.True(myBool, "myBool should be true") -// -// Returns whether the assertion was successful (true) or not (false). +// a.True(myBool) func (a *Assertions) True(value bool, msgAndArgs ...interface{}) bool { return True(a.t, value, msgAndArgs...) } +// Truef asserts that the specified value is true. +// +// a.Truef(myBool, "error message %s", "formatted") +func (a *Assertions) Truef(value bool, msg string, args ...interface{}) bool { + return Truef(a.t, value, msg, args...) +} + // WithinDuration asserts that the two times are within duration delta of each other. // -// a.WithinDuration(time.Now(), time.Now(), 10*time.Second, "The difference should not be more than 10s") -// -// Returns whether the assertion was successful (true) or not (false). +// a.WithinDuration(time.Now(), time.Now(), 10*time.Second) func (a *Assertions) WithinDuration(expected time.Time, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool { return WithinDuration(a.t, expected, actual, delta, msgAndArgs...) } -// Zero asserts that i is the zero value for its type and returns the truth. +// WithinDurationf asserts that the two times are within duration delta of each other. +// +// a.WithinDurationf(time.Now(), time.Now(), 10*time.Second, "error message %s", "formatted") +func (a *Assertions) WithinDurationf(expected time.Time, actual time.Time, delta time.Duration, msg string, args ...interface{}) bool { + return WithinDurationf(a.t, expected, actual, delta, msg, args...) +} + +// Zero asserts that i is the zero value for its type. func (a *Assertions) Zero(i interface{}, msgAndArgs ...interface{}) bool { return Zero(a.t, i, msgAndArgs...) } + +// Zerof asserts that i is the zero value for its type. +func (a *Assertions) Zerof(i interface{}, msg string, args ...interface{}) bool { + return Zerof(a.t, i, msg, args...) 
+} diff --git a/vendor/github.com/stretchr/testify/assert/assertions.go b/vendor/github.com/stretchr/testify/assert/assertions.go index 835084ffc..47bda7786 100644 --- a/vendor/github.com/stretchr/testify/assert/assertions.go +++ b/vendor/github.com/stretchr/testify/assert/assertions.go @@ -4,8 +4,10 @@ import ( "bufio" "bytes" "encoding/json" + "errors" "fmt" "math" + "os" "reflect" "regexp" "runtime" @@ -18,9 +20,7 @@ import ( "github.com/pmezard/go-difflib/difflib" ) -func init() { - spew.Config.SortKeys = true -} +//go:generate go run ../_codegen/main.go -output-package=assert -template=assertion_format.go.tmpl // TestingT is an interface wrapper around *testing.T type TestingT interface { @@ -42,7 +42,15 @@ func ObjectsAreEqual(expected, actual interface{}) bool { if expected == nil || actual == nil { return expected == actual } - + if exp, ok := expected.([]byte); ok { + act, ok := actual.([]byte) + if !ok { + return false + } else if exp == nil || act == nil { + return exp == nil && act == nil + } + return bytes.Equal(exp, act) + } return reflect.DeepEqual(expected, actual) } @@ -112,10 +120,12 @@ func CallerInfo() []string { } parts := strings.Split(file, "/") - dir := parts[len(parts)-2] file = parts[len(parts)-1] - if (dir != "assert" && dir != "mock" && dir != "require") || file == "mock_test.go" { - callers = append(callers, fmt.Sprintf("%s:%d", file, line)) + if len(parts) > 1 { + dir := parts[len(parts)-2] + if (dir != "assert" && dir != "mock" && dir != "require") || file == "mock_test.go" { + callers = append(callers, fmt.Sprintf("%s:%d", file, line)) + } } // Drop the package @@ -157,7 +167,7 @@ func getWhitespaceString() string { parts := strings.Split(file, "/") file = parts[len(parts)-1] - return strings.Repeat(" ", len(fmt.Sprintf("%s:%d: ", file, line))) + return strings.Repeat(" ", len(fmt.Sprintf("%s:%d: ", file, line))) } @@ -174,22 +184,18 @@ func messageFromMsgAndArgs(msgAndArgs ...interface{}) string { return "" } -// Indents all lines of the message by appending a number of tabs to each line, in an output format compatible with Go's -// test printing (see inner comment for specifics) -func indentMessageLines(message string, tabs int) string { +// Aligns the provided message so that all lines after the first line start at the same location as the first line. +// Assumes that the first line starts at the correct location (after carriage return, tab, label, spacer and tab). +// The longestLabelLen parameter specifies the length of the longest label in the output (required becaues this is the +// basis on which the alignment occurs). +func indentMessageLines(message string, longestLabelLen int) string { outBuf := new(bytes.Buffer) for i, scanner := 0, bufio.NewScanner(strings.NewReader(message)); scanner.Scan(); i++ { + // no need to align first line because it starts at the correct location (after the label) if i != 0 { - outBuf.WriteRune('\n') - } - for ii := 0; ii < tabs; ii++ { - outBuf.WriteRune('\t') - // Bizarrely, all lines except the first need one fewer tabs prepended, so deliberately advance the counter - // by 1 prematurely. 
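The `ObjectsAreEqual` change above short-circuits `[]byte` comparisons through `bytes.Equal` rather than `reflect.DeepEqual`, and only ever compares a `[]byte` against another `[]byte`. A rough sketch of the resulting behaviour (the function is exported, so it can be called directly):

```
package main

import (
	"fmt"

	"github.com/stretchr/testify/assert"
)

func main() {
	// Identical contents: the new fast path answers via bytes.Equal.
	fmt.Println(assert.ObjectsAreEqual([]byte("abc"), []byte("abc"))) // true

	// A []byte is only ever compared against another []byte; a string
	// with the same characters is simply "not equal".
	fmt.Println(assert.ObjectsAreEqual([]byte("abc"), "abc")) // false

	// Two nil byte slices are equal, but nil versus empty is not, because
	// the nil check runs before bytes.Equal.
	fmt.Println(assert.ObjectsAreEqual([]byte(nil), []byte(nil))) // true
	fmt.Println(assert.ObjectsAreEqual([]byte(nil), []byte{}))    // false
}
```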
- if ii == 0 && i > 0 { - ii++ - } + // append alignLen+1 spaces to align with "{{longestLabel}}:" before adding tab + outBuf.WriteString("\n\r\t" + strings.Repeat(" ", longestLabelLen+1) + "\t") } outBuf.WriteString(scanner.Text()) } @@ -221,42 +227,70 @@ func FailNow(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool // Fail reports a failure through func Fail(t TestingT, failureMessage string, msgAndArgs ...interface{}) bool { + content := []labeledContent{ + {"Error Trace", strings.Join(CallerInfo(), "\n\r\t\t\t")}, + {"Error", failureMessage}, + } + + // Add test name if the Go version supports it + if n, ok := t.(interface { + Name() string + }); ok { + content = append(content, labeledContent{"Test", n.Name()}) + } message := messageFromMsgAndArgs(msgAndArgs...) - - errorTrace := strings.Join(CallerInfo(), "\n\r\t\t\t") if len(message) > 0 { - t.Errorf("\r%s\r\tError Trace:\t%s\n"+ - "\r\tError:%s\n"+ - "\r\tMessages:\t%s\n\r", - getWhitespaceString(), - errorTrace, - indentMessageLines(failureMessage, 2), - message) - } else { - t.Errorf("\r%s\r\tError Trace:\t%s\n"+ - "\r\tError:%s\n\r", - getWhitespaceString(), - errorTrace, - indentMessageLines(failureMessage, 2)) + content = append(content, labeledContent{"Messages", message}) } + t.Errorf("%s", "\r"+getWhitespaceString()+labeledOutput(content...)) + return false } +type labeledContent struct { + label string + content string +} + +// labeledOutput returns a string consisting of the provided labeledContent. Each labeled output is appended in the following manner: +// +// \r\t{{label}}:{{align_spaces}}\t{{content}}\n +// +// The initial carriage return is required to undo/erase any padding added by testing.T.Errorf. The "\t{{label}}:" is for the label. +// If a label is shorter than the longest label provided, padding spaces are added to make all the labels match in length. Once this +// alignment is achieved, "\t{{content}}\n" is added for the output. +// +// If the content of the labeledOutput contains line breaks, the subsequent lines are aligned so that they start at the same location as the first line. +func labeledOutput(content ...labeledContent) string { + longestLabel := 0 + for _, v := range content { + if len(v.label) > longestLabel { + longestLabel = len(v.label) + } + } + var output string + for _, v := range content { + output += "\r\t" + v.label + ":" + strings.Repeat(" ", longestLabel-len(v.label)) + "\t" + indentMessageLines(v.content, longestLabel) + "\n" + } + return output +} + // Implements asserts that an object is implemented by the specified interface. // -// assert.Implements(t, (*MyInterface)(nil), new(MyObject), "MyObject") +// assert.Implements(t, (*MyInterface)(nil), new(MyObject)) func Implements(t TestingT, interfaceObject interface{}, object interface{}, msgAndArgs ...interface{}) bool { - interfaceType := reflect.TypeOf(interfaceObject).Elem() + if object == nil { + return Fail(t, fmt.Sprintf("Cannot check if nil implements %v", interfaceType), msgAndArgs...) + } if !reflect.TypeOf(object).Implements(interfaceType) { return Fail(t, fmt.Sprintf("%T must implement %v", object, interfaceType), msgAndArgs...) } return true - } // IsType asserts that the specified objects are of the same type. @@ -271,17 +305,23 @@ func IsType(t TestingT, expectedType interface{}, object interface{}, msgAndArgs // Equal asserts that two objects are equal. 
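`Fail` now assembles its report from `labeledContent` entries, and `labeledOutput` pads every label to the longest one so the content columns line up. A simplified stand-alone sketch of that alignment rule (the carriage returns, caller trace and multi-line re-indentation of the real code are omitted):

```
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// labeled is a simplified stand-in for testify's internal labeledContent.
type labeled struct {
	label, content string
}

// render mirrors the alignment rule used by labeledOutput above: every
// label is padded to the longest label so the tab-separated content
// columns line up.
func render(items ...labeled) string {
	longest := 0
	for _, v := range items {
		if len(v.label) > longest {
			longest = len(v.label)
		}
	}
	buf := new(bytes.Buffer)
	for _, v := range items {
		pad := strings.Repeat(" ", longest-len(v.label))
		fmt.Fprintf(buf, "\t%s:%s\t%s\n", v.label, pad, v.content)
	}
	return buf.String()
}

func main() {
	fmt.Print(render(
		labeled{"Error Trace", "example_test.go:42"},
		labeled{"Error", "Not equal: expected 1, actual 2"},
		labeled{"Messages", "checking the counter"},
	))
}
```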
// -// assert.Equal(t, 123, 123, "123 and 123 should be equal") +// assert.Equal(t, 123, 123) // -// Returns whether the assertion was successful (true) or not (false). +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). Function equality +// cannot be determined and will always fail. func Equal(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { + if err := validateEqualArgs(expected, actual); err != nil { + return Fail(t, fmt.Sprintf("Invalid operation: %#v == %#v (%s)", + expected, actual, err), msgAndArgs...) + } if !ObjectsAreEqual(expected, actual) { diff := diff(expected, actual) expected, actual = formatUnequalValues(expected, actual) return Fail(t, fmt.Sprintf("Not equal: \n"+ "expected: %s\n"+ - "received: %s%s", expected, actual, diff), msgAndArgs...) + "actual : %s%s", expected, actual, diff), msgAndArgs...) } return true @@ -295,37 +335,19 @@ func Equal(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) // with the type name, and the value will be enclosed in parenthesis similar // to a type conversion in the Go grammar. func formatUnequalValues(expected, actual interface{}) (e string, a string) { - aType := reflect.TypeOf(expected) - bType := reflect.TypeOf(actual) - - if aType != bType && isNumericType(aType) && isNumericType(bType) { - return fmt.Sprintf("%v(%#v)", aType, expected), - fmt.Sprintf("%v(%#v)", bType, actual) + if reflect.TypeOf(expected) != reflect.TypeOf(actual) { + return fmt.Sprintf("%T(%#v)", expected, expected), + fmt.Sprintf("%T(%#v)", actual, actual) } return fmt.Sprintf("%#v", expected), fmt.Sprintf("%#v", actual) } -func isNumericType(t reflect.Type) bool { - switch t.Kind() { - case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: - return true - case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: - return true - case reflect.Float32, reflect.Float64: - return true - } - - return false -} - // EqualValues asserts that two objects are equal or convertable to the same types // and equal. // -// assert.EqualValues(t, uint32(123), int32(123), "123 and 123 should be equal") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.EqualValues(t, uint32(123), int32(123)) func EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { if !ObjectsAreEqualValues(expected, actual) { @@ -333,18 +355,16 @@ func EqualValues(t TestingT, expected, actual interface{}, msgAndArgs ...interfa expected, actual = formatUnequalValues(expected, actual) return Fail(t, fmt.Sprintf("Not equal: \n"+ "expected: %s\n"+ - "received: %s%s", expected, actual, diff), msgAndArgs...) + "actual : %s%s", expected, actual, diff), msgAndArgs...) } return true } -// Exactly asserts that two objects are equal is value and type. +// Exactly asserts that two objects are equal in value and type. // -// assert.Exactly(t, int32(123), int64(123), "123 and 123 should NOT be equal") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.Exactly(t, int32(123), int64(123)) func Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { aType := reflect.TypeOf(expected) @@ -360,9 +380,7 @@ func Exactly(t TestingT, expected, actual interface{}, msgAndArgs ...interface{} // NotNil asserts that the specified object is not nil. 
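`Equal` and `NotEqual` now reject `func` arguments up front via `validateEqualArgs`, and `formatUnequalValues` qualifies each side with its type whenever the two types differ. A sketch of what that means for callers:

```
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestEqualBehaviour(t *testing.T) {
	// Plain values compare as before.
	assert.Equal(t, 123, 123)

	// This would now fail immediately with "Invalid operation: ...
	// (cannot take func type as argument)" instead of comparing funcs:
	// assert.Equal(t, func() {}, func() {})

	// When the types differ, the failure output shows each side as
	// type(value), e.g. int64(123) vs int(123), so EqualValues is the
	// right assertion for cross-type comparisons.
	assert.EqualValues(t, int64(123), 123)
}
```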
// -// assert.NotNil(t, err, "err should be something") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.NotNil(t, err) func NotNil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { if !isNil(object) { return true @@ -387,9 +405,7 @@ func isNil(object interface{}) bool { // Nil asserts that the specified object is nil. // -// assert.Nil(t, err, "err should be nothing") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.Nil(t, err) func Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { if isNil(object) { return true @@ -397,74 +413,38 @@ func Nil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { return Fail(t, fmt.Sprintf("Expected nil, but got: %#v", object), msgAndArgs...) } -var numericZeros = []interface{}{ - int(0), - int8(0), - int16(0), - int32(0), - int64(0), - uint(0), - uint8(0), - uint16(0), - uint32(0), - uint64(0), - float32(0), - float64(0), -} - // isEmpty gets whether the specified object is considered empty or not. func isEmpty(object interface{}) bool { + // get nil case out of the way if object == nil { return true - } else if object == "" { - return true - } else if object == false { - return true - } - - for _, v := range numericZeros { - if object == v { - return true - } } objValue := reflect.ValueOf(object) switch objValue.Kind() { - case reflect.Map: - fallthrough - case reflect.Slice, reflect.Chan: - { - return (objValue.Len() == 0) - } - case reflect.Struct: - switch object.(type) { - case time.Time: - return object.(time.Time).IsZero() - } + // collection types are empty when they have no element + case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice: + return objValue.Len() == 0 + // pointers are empty if nil or if the value they point to is empty case reflect.Ptr: - { - if objValue.IsNil() { - return true - } - switch object.(type) { - case *time.Time: - return object.(*time.Time).IsZero() - default: - return false - } + if objValue.IsNil() { + return true } + deref := objValue.Elem().Interface() + return isEmpty(deref) + // for all other types, compare against the zero value + default: + zero := reflect.Zero(objValue.Type()) + return reflect.DeepEqual(object, zero.Interface()) } - return false } // Empty asserts that the specified object is empty. I.e. nil, "", false, 0 or either // a slice or a channel with len == 0. // // assert.Empty(t, obj) -// -// Returns whether the assertion was successful (true) or not (false). func Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { pass := isEmpty(object) @@ -482,8 +462,6 @@ func Empty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { // if assert.NotEmpty(t, obj) { // assert.Equal(t, "two", obj[1]) // } -// -// Returns whether the assertion was successful (true) or not (false). func NotEmpty(t TestingT, object interface{}, msgAndArgs ...interface{}) bool { pass := !isEmpty(object) @@ -510,9 +488,7 @@ func getLen(x interface{}) (ok bool, length int) { // Len asserts that the specified object has specific length. // Len also fails if the object has a type that len() not accept. // -// assert.Len(t, mySlice, 3, "The size of slice is not 3") -// -// Returns whether the assertion was successful (true) or not (false). 
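The rewritten `isEmpty` drops the hard-coded list of numeric zeros: collections are empty when their length is zero, pointers when nil or pointing at an empty value, and everything else when it equals its type's zero value. A few illustrative cases through `assert.Empty` (the `config` struct is made up for the example):

```
package example

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

// config is a throwaway struct used only to illustrate zero-value checks.
type config struct {
	Name    string
	Retries int
}

func TestEmptySemantics(t *testing.T) {
	// Collections: empty when len == 0.
	assert.Empty(t, []int{})
	assert.Empty(t, map[string]int(nil))

	// Scalars and structs: compared against their zero value.
	assert.Empty(t, 0)
	assert.Empty(t, "")
	assert.Empty(t, config{})
	assert.Empty(t, time.Time{}) // the zero time is its type's zero value

	// Pointers: nil, or pointing at an empty value.
	var cfg *config
	assert.Empty(t, cfg)
	assert.Empty(t, &config{})

	assert.NotEmpty(t, config{Name: "x"})
}
```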
+// assert.Len(t, mySlice, 3) func Len(t TestingT, object interface{}, length int, msgAndArgs ...interface{}) bool { ok, l := getLen(object) if !ok { @@ -527,9 +503,7 @@ func Len(t TestingT, object interface{}, length int, msgAndArgs ...interface{}) // True asserts that the specified value is true. // -// assert.True(t, myBool, "myBool should be true") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.True(t, myBool) func True(t TestingT, value bool, msgAndArgs ...interface{}) bool { if value != true { @@ -542,9 +516,7 @@ func True(t TestingT, value bool, msgAndArgs ...interface{}) bool { // False asserts that the specified value is false. // -// assert.False(t, myBool, "myBool should be false") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.False(t, myBool) func False(t TestingT, value bool, msgAndArgs ...interface{}) bool { if value != false { @@ -557,10 +529,15 @@ func False(t TestingT, value bool, msgAndArgs ...interface{}) bool { // NotEqual asserts that the specified values are NOT equal. // -// assert.NotEqual(t, obj1, obj2, "two objects shouldn't be equal") +// assert.NotEqual(t, obj1, obj2) // -// Returns whether the assertion was successful (true) or not (false). +// Pointer variable equality is determined based on the equality of the +// referenced values (as opposed to the memory addresses). func NotEqual(t TestingT, expected, actual interface{}, msgAndArgs ...interface{}) bool { + if err := validateEqualArgs(expected, actual); err != nil { + return Fail(t, fmt.Sprintf("Invalid operation: %#v != %#v (%s)", + expected, actual, err), msgAndArgs...) + } if ObjectsAreEqual(expected, actual) { return Fail(t, fmt.Sprintf("Should not be: %#v\n", actual), msgAndArgs...) @@ -611,11 +588,9 @@ func includeElement(list interface{}, element interface{}) (ok, found bool) { // Contains asserts that the specified string, list(array, slice...) or map contains the // specified substring or element. // -// assert.Contains(t, "Hello World", "World", "But 'Hello World' does contain 'World'") -// assert.Contains(t, ["Hello", "World"], "World", "But ["Hello", "World"] does contain 'World'") -// assert.Contains(t, {"Hello": "World"}, "Hello", "But {'Hello': 'World'} does contain 'Hello'") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.Contains(t, "Hello World", "World") +// assert.Contains(t, ["Hello", "World"], "World") +// assert.Contains(t, {"Hello": "World"}, "Hello") func Contains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool { ok, found := includeElement(s, contains) @@ -633,11 +608,9 @@ func Contains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bo // NotContains asserts that the specified string, list(array, slice...) or map does NOT contain the // specified substring or element. // -// assert.NotContains(t, "Hello World", "Earth", "But 'Hello World' does NOT contain 'Earth'") -// assert.NotContains(t, ["Hello", "World"], "Earth", "But ['Hello', 'World'] does NOT contain 'Earth'") -// assert.NotContains(t, {"Hello": "World"}, "Earth", "But {'Hello': 'World'} does NOT contain 'Earth'") -// -// Returns whether the assertion was successful (true) or not (false). 
+// assert.NotContains(t, "Hello World", "Earth") +// assert.NotContains(t, ["Hello", "World"], "Earth") +// assert.NotContains(t, {"Hello": "World"}, "Earth") func NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) bool { ok, found := includeElement(s, contains) @@ -652,6 +625,142 @@ func NotContains(t TestingT, s, contains interface{}, msgAndArgs ...interface{}) } +// Subset asserts that the specified list(array, slice...) contains all +// elements given in the specified subset(array, slice...). +// +// assert.Subset(t, [1, 2, 3], [1, 2], "But [1, 2, 3] does contain [1, 2]") +func Subset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) { + if subset == nil { + return true // we consider nil to be equal to the nil set + } + + subsetValue := reflect.ValueOf(subset) + defer func() { + if e := recover(); e != nil { + ok = false + } + }() + + listKind := reflect.TypeOf(list).Kind() + subsetKind := reflect.TypeOf(subset).Kind() + + if listKind != reflect.Array && listKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", list, listKind), msgAndArgs...) + } + + if subsetKind != reflect.Array && subsetKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", subset, subsetKind), msgAndArgs...) + } + + for i := 0; i < subsetValue.Len(); i++ { + element := subsetValue.Index(i).Interface() + ok, found := includeElement(list, element) + if !ok { + return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", list), msgAndArgs...) + } + if !found { + return Fail(t, fmt.Sprintf("\"%s\" does not contain \"%s\"", list, element), msgAndArgs...) + } + } + + return true +} + +// NotSubset asserts that the specified list(array, slice...) contains not all +// elements given in the specified subset(array, slice...). +// +// assert.NotSubset(t, [1, 3, 4], [1, 2], "But [1, 3, 4] does not contain [1, 2]") +func NotSubset(t TestingT, list, subset interface{}, msgAndArgs ...interface{}) (ok bool) { + if subset == nil { + return Fail(t, fmt.Sprintf("nil is the empty set which is a subset of every set"), msgAndArgs...) + } + + subsetValue := reflect.ValueOf(subset) + defer func() { + if e := recover(); e != nil { + ok = false + } + }() + + listKind := reflect.TypeOf(list).Kind() + subsetKind := reflect.TypeOf(subset).Kind() + + if listKind != reflect.Array && listKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", list, listKind), msgAndArgs...) + } + + if subsetKind != reflect.Array && subsetKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", subset, subsetKind), msgAndArgs...) + } + + for i := 0; i < subsetValue.Len(); i++ { + element := subsetValue.Index(i).Interface() + ok, found := includeElement(list, element) + if !ok { + return Fail(t, fmt.Sprintf("\"%s\" could not be applied builtin len()", list), msgAndArgs...) + } + if !found { + return true + } + } + + return Fail(t, fmt.Sprintf("%q is a subset of %q", subset, list), msgAndArgs...) +} + +// ElementsMatch asserts that the specified listA(array, slice...) is equal to specified +// listB(array, slice...) ignoring the order of the elements. If there are duplicate elements, +// the number of appearances of each of them in both lists should match. 
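`Subset` and `NotSubset` treat the second argument as a set of elements that must, or must not, all be present in the first. A short sketch:

```
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestSubsets(t *testing.T) {
	// Every element of the second slice is contained in the first.
	assert.Subset(t, []int{1, 2, 3}, []int{1, 2})

	// At least one element of the second slice is missing from the first.
	assert.NotSubset(t, []int{1, 3, 4}, []int{1, 2})

	// Per the implementation above, a nil subset is considered a subset
	// of everything, so Subset passes (and NotSubset would fail) for nil.
	assert.Subset(t, []int{1, 2, 3}, nil)
}
```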
+// +// assert.ElementsMatch(t, [1, 3, 2, 3], [1, 3, 3, 2]) +func ElementsMatch(t TestingT, listA, listB interface{}, msgAndArgs ...interface{}) (ok bool) { + if isEmpty(listA) && isEmpty(listB) { + return true + } + + aKind := reflect.TypeOf(listA).Kind() + bKind := reflect.TypeOf(listB).Kind() + + if aKind != reflect.Array && aKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", listA, aKind), msgAndArgs...) + } + + if bKind != reflect.Array && bKind != reflect.Slice { + return Fail(t, fmt.Sprintf("%q has an unsupported type %s", listB, bKind), msgAndArgs...) + } + + aValue := reflect.ValueOf(listA) + bValue := reflect.ValueOf(listB) + + aLen := aValue.Len() + bLen := bValue.Len() + + if aLen != bLen { + return Fail(t, fmt.Sprintf("lengths don't match: %d != %d", aLen, bLen), msgAndArgs...) + } + + // Mark indexes in bValue that we already used + visited := make([]bool, bLen) + for i := 0; i < aLen; i++ { + element := aValue.Index(i).Interface() + found := false + for j := 0; j < bLen; j++ { + if visited[j] { + continue + } + if ObjectsAreEqual(bValue.Index(j).Interface(), element) { + visited[j] = true + found = true + break + } + } + if !found { + return Fail(t, fmt.Sprintf("element %s appears more times in %s than in %s", element, aValue, bValue), msgAndArgs...) + } + } + + return true +} + // Condition uses a Comparison to assert a complex condition. func Condition(t TestingT, comp Comparison, msgAndArgs ...interface{}) bool { result := comp() @@ -689,11 +798,7 @@ func didPanic(f PanicTestFunc) (bool, interface{}) { // Panics asserts that the code inside the specified PanicTestFunc panics. // -// assert.Panics(t, func(){ -// GoCrazy() -// }, "Calling GoCrazy() should panic") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.Panics(t, func(){ GoCrazy() }) func Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool { if funcDidPanic, panicValue := didPanic(f); !funcDidPanic { @@ -703,13 +808,26 @@ func Panics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool { return true } +// PanicsWithValue asserts that the code inside the specified PanicTestFunc panics, and that +// the recovered panic value equals the expected panic value. +// +// assert.PanicsWithValue(t, "crazy error", func(){ GoCrazy() }) +func PanicsWithValue(t TestingT, expected interface{}, f PanicTestFunc, msgAndArgs ...interface{}) bool { + + funcDidPanic, panicValue := didPanic(f) + if !funcDidPanic { + return Fail(t, fmt.Sprintf("func %#v should panic\n\r\tPanic value:\t%v", f, panicValue), msgAndArgs...) + } + if panicValue != expected { + return Fail(t, fmt.Sprintf("func %#v should panic with value:\t%v\n\r\tPanic value:\t%v", f, expected, panicValue), msgAndArgs...) + } + + return true +} + // NotPanics asserts that the code inside the specified PanicTestFunc does NOT panic. // -// assert.NotPanics(t, func(){ -// RemainCalm() -// }, "Calling RemainCalm() should NOT panic") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.NotPanics(t, func(){ RemainCalm() }) func NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool { if funcDidPanic, panicValue := didPanic(f); funcDidPanic { @@ -721,9 +839,7 @@ func NotPanics(t TestingT, f PanicTestFunc, msgAndArgs ...interface{}) bool { // WithinDuration asserts that the two times are within duration delta of each other. 
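`ElementsMatch` compares two lists while ignoring order; duplicates have to appear the same number of times on both sides, since each element of the first list consumes one not-yet-visited match in the second. A small sketch:

```
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestElementsMatch(t *testing.T) {
	// Same multiset, different order: passes.
	assert.ElementsMatch(t, []int{1, 3, 2, 3}, []int{1, 3, 3, 2})

	// Lengths differ, so this would fail; duplicates are counted.
	// assert.ElementsMatch(t, []int{1, 1, 2}, []int{1, 2})

	// Two empty (or nil) lists are considered matching.
	assert.ElementsMatch(t, []string{}, nil)
}
```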
// -// assert.WithinDuration(t, time.Now(), time.Now(), 10*time.Second, "The difference should not be more than 10s") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.WithinDuration(t, time.Now(), time.Now(), 10*time.Second) func WithinDuration(t TestingT, expected, actual time.Time, delta time.Duration, msgAndArgs ...interface{}) bool { dt := expected.Sub(actual) @@ -761,6 +877,8 @@ func toFloat(x interface{}) (float64, bool) { xf = float64(xn) case float64: xf = float64(xn) + case time.Duration: + xf = float64(xn) default: xok = false } @@ -771,8 +889,6 @@ func toFloat(x interface{}) (float64, bool) { // InDelta asserts that the two numerals are within delta of each other. // // assert.InDelta(t, math.Pi, (22 / 7.0), 0.01) -// -// Returns whether the assertion was successful (true) or not (false). func InDelta(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { af, aok := toFloat(expected) @@ -783,7 +899,7 @@ func InDelta(t TestingT, expected, actual interface{}, delta float64, msgAndArgs } if math.IsNaN(af) { - return Fail(t, fmt.Sprintf("Actual must not be NaN"), msgAndArgs...) + return Fail(t, fmt.Sprintf("Expected must not be NaN"), msgAndArgs...) } if math.IsNaN(bf) { @@ -810,7 +926,7 @@ func InDeltaSlice(t TestingT, expected, actual interface{}, delta float64, msgAn expectedSlice := reflect.ValueOf(expected) for i := 0; i < actualSlice.Len(); i++ { - result := InDelta(t, actualSlice.Index(i).Interface(), expectedSlice.Index(i).Interface(), delta) + result := InDelta(t, actualSlice.Index(i).Interface(), expectedSlice.Index(i).Interface(), delta, msgAndArgs...) if !result { return result } @@ -819,6 +935,47 @@ func InDeltaSlice(t TestingT, expected, actual interface{}, delta float64, msgAn return true } +// InDeltaMapValues is the same as InDelta, but it compares all values between two maps. Both maps must have exactly the same keys. +func InDeltaMapValues(t TestingT, expected, actual interface{}, delta float64, msgAndArgs ...interface{}) bool { + if expected == nil || actual == nil || + reflect.TypeOf(actual).Kind() != reflect.Map || + reflect.TypeOf(expected).Kind() != reflect.Map { + return Fail(t, "Arguments must be maps", msgAndArgs...) + } + + expectedMap := reflect.ValueOf(expected) + actualMap := reflect.ValueOf(actual) + + if expectedMap.Len() != actualMap.Len() { + return Fail(t, "Arguments must have the same number of keys", msgAndArgs...) + } + + for _, k := range expectedMap.MapKeys() { + ev := expectedMap.MapIndex(k) + av := actualMap.MapIndex(k) + + if !ev.IsValid() { + return Fail(t, fmt.Sprintf("missing key %q in expected map", k), msgAndArgs...) + } + + if !av.IsValid() { + return Fail(t, fmt.Sprintf("missing key %q in actual map", k), msgAndArgs...) + } + + if !InDelta( + t, + ev.Interface(), + av.Interface(), + delta, + msgAndArgs..., + ) { + return false + } + } + + return true +} + func calcRelativeError(expected, actual interface{}) (float64, error) { af, aok := toFloat(expected) if !aok { @@ -829,15 +986,13 @@ func calcRelativeError(expected, actual interface{}) (float64, error) { } bf, bok := toFloat(actual) if !bok { - return 0, fmt.Errorf("expected value %q cannot be converted to float", actual) + return 0, fmt.Errorf("actual value %q cannot be converted to float", actual) } return math.Abs(af-bf) / math.Abs(af), nil } // InEpsilon asserts that expected and actual have a relative error less than epsilon -// -// Returns whether the assertion was successful (true) or not (false). 
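`InDeltaMapValues` applies the `InDelta` tolerance key by key to two maps that must share exactly the same keys, and `toFloat` now also accepts `time.Duration`, so durations can be fed to `InDelta` directly. A sketch:

```
package example

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestDeltas(t *testing.T) {
	// Every value is compared with the same absolute tolerance; both maps
	// must have exactly the same keys.
	expected := map[string]float64{"cpu": 0.50, "mem": 0.25}
	actual := map[string]float64{"cpu": 0.51, "mem": 0.24}
	assert.InDeltaMapValues(t, expected, actual, 0.02)

	// time.Duration values are converted through toFloat, so they compare
	// as plain numbers of nanoseconds.
	assert.InDelta(t, 100*time.Millisecond, 101*time.Millisecond,
		float64(5*time.Millisecond))
}
```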
func InEpsilon(t TestingT, expected, actual interface{}, epsilon float64, msgAndArgs ...interface{}) bool { actualEpsilon, err := calcRelativeError(expected, actual) if err != nil { @@ -845,7 +1000,7 @@ func InEpsilon(t TestingT, expected, actual interface{}, epsilon float64, msgAnd } if actualEpsilon > epsilon { return Fail(t, fmt.Sprintf("Relative error is too high: %#v (expected)\n"+ - " < %#v (actual)", actualEpsilon, epsilon), msgAndArgs...) + " < %#v (actual)", epsilon, actualEpsilon), msgAndArgs...) } return true @@ -880,10 +1035,8 @@ func InEpsilonSlice(t TestingT, expected, actual interface{}, epsilon float64, m // // actualObj, err := SomeFunction() // if assert.NoError(t, err) { -// assert.Equal(t, actualObj, expectedObj) +// assert.Equal(t, expectedObj, actualObj) // } -// -// Returns whether the assertion was successful (true) or not (false). func NoError(t TestingT, err error, msgAndArgs ...interface{}) bool { if err != nil { return Fail(t, fmt.Sprintf("Received unexpected error:\n%+v", err), msgAndArgs...) @@ -895,11 +1048,9 @@ func NoError(t TestingT, err error, msgAndArgs ...interface{}) bool { // Error asserts that a function returned an error (i.e. not `nil`). // // actualObj, err := SomeFunction() -// if assert.Error(t, err, "An error was expected") { -// assert.Equal(t, err, expectedError) +// if assert.Error(t, err) { +// assert.Equal(t, expectedError, err) // } -// -// Returns whether the assertion was successful (true) or not (false). func Error(t TestingT, err error, msgAndArgs ...interface{}) bool { if err == nil { @@ -913,9 +1064,7 @@ func Error(t TestingT, err error, msgAndArgs ...interface{}) bool { // and that it is equal to the provided error. // // actualObj, err := SomeFunction() -// assert.EqualError(t, err, expectedErrorString, "An error was expected") -// -// Returns whether the assertion was successful (true) or not (false). +// assert.EqualError(t, err, expectedErrorString) func EqualError(t TestingT, theError error, errString string, msgAndArgs ...interface{}) bool { if !Error(t, theError, msgAndArgs...) { return false @@ -926,7 +1075,7 @@ func EqualError(t TestingT, theError error, errString string, msgAndArgs ...inte if expected != actual { return Fail(t, fmt.Sprintf("Error message not equal:\n"+ "expected: %q\n"+ - "received: %q", expected, actual), msgAndArgs...) + "actual : %q", expected, actual), msgAndArgs...) } return true } @@ -949,8 +1098,6 @@ func matchRegexp(rx interface{}, str interface{}) bool { // // assert.Regexp(t, regexp.MustCompile("start"), "it's starting") // assert.Regexp(t, "start...$", "it's not starting") -// -// Returns whether the assertion was successful (true) or not (false). func Regexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { match := matchRegexp(rx, str) @@ -966,8 +1113,6 @@ func Regexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface // // assert.NotRegexp(t, regexp.MustCompile("starts"), "it's starting") // assert.NotRegexp(t, "^start", "it's not starting") -// -// Returns whether the assertion was successful (true) or not (false). func NotRegexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interface{}) bool { match := matchRegexp(rx, str) @@ -979,7 +1124,7 @@ func NotRegexp(t TestingT, rx interface{}, str interface{}, msgAndArgs ...interf } -// Zero asserts that i is the zero value for its type and returns the truth. +// Zero asserts that i is the zero value for its type. 
func Zero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool { if i != nil && !reflect.DeepEqual(i, reflect.Zero(reflect.TypeOf(i)).Interface()) { return Fail(t, fmt.Sprintf("Should be zero, but was %v", i), msgAndArgs...) @@ -987,7 +1132,7 @@ func Zero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool { return true } -// NotZero asserts that i is not the zero value for its type and returns the truth. +// NotZero asserts that i is not the zero value for its type. func NotZero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool { if i == nil || reflect.DeepEqual(i, reflect.Zero(reflect.TypeOf(i)).Interface()) { return Fail(t, fmt.Sprintf("Should not be zero, but was %v", i), msgAndArgs...) @@ -995,11 +1140,39 @@ func NotZero(t TestingT, i interface{}, msgAndArgs ...interface{}) bool { return true } +// FileExists checks whether a file exists in the given path. It also fails if the path points to a directory or there is an error when trying to check the file. +func FileExists(t TestingT, path string, msgAndArgs ...interface{}) bool { + info, err := os.Lstat(path) + if err != nil { + if os.IsNotExist(err) { + return Fail(t, fmt.Sprintf("unable to find file %q", path), msgAndArgs...) + } + return Fail(t, fmt.Sprintf("error when running os.Lstat(%q): %s", path, err), msgAndArgs...) + } + if info.IsDir() { + return Fail(t, fmt.Sprintf("%q is a directory", path), msgAndArgs...) + } + return true +} + +// DirExists checks whether a directory exists in the given path. It also fails if the path is a file rather a directory or there is an error checking whether it exists. +func DirExists(t TestingT, path string, msgAndArgs ...interface{}) bool { + info, err := os.Lstat(path) + if err != nil { + if os.IsNotExist(err) { + return Fail(t, fmt.Sprintf("unable to find file %q", path), msgAndArgs...) + } + return Fail(t, fmt.Sprintf("error when running os.Lstat(%q): %s", path, err), msgAndArgs...) + } + if !info.IsDir() { + return Fail(t, fmt.Sprintf("%q is a file", path), msgAndArgs...) + } + return true +} + // JSONEq asserts that two JSON strings are equivalent. // // assert.JSONEq(t, `{"hello": "world", "foo": "bar"}`, `{"foo": "bar", "hello": "world"}`) -// -// Returns whether the assertion was successful (true) or not (false). func JSONEq(t TestingT, expected string, actual string, msgAndArgs ...interface{}) bool { var expectedJSONAsInterface, actualJSONAsInterface interface{} @@ -1043,8 +1216,8 @@ func diff(expected interface{}, actual interface{}) string { return "" } - e := spew.Sdump(expected) - a := spew.Sdump(actual) + e := spewConfig.Sdump(expected) + a := spewConfig.Sdump(actual) diff, _ := difflib.GetUnifiedDiffString(difflib.UnifiedDiff{ A: difflib.SplitLines(e), @@ -1058,3 +1231,26 @@ func diff(expected interface{}, actual interface{}) string { return "\n\nDiff:\n" + diff } + +// validateEqualArgs checks whether provided arguments can be safely used in the +// Equal/NotEqual functions. 
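`FileExists` and `DirExists` are new: both stat the path with `os.Lstat`, and each fails when the path is of the other kind. A sketch that creates its own scratch paths (the temp-dir layout is just one way to set this up):

```
package example

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestFileAndDirExists(t *testing.T) {
	// Create a scratch directory and a file inside it.
	dir, err := ioutil.TempDir("", "assert-example")
	assert.NoError(t, err)
	defer os.RemoveAll(dir)

	file := filepath.Join(dir, "data.txt")
	assert.NoError(t, ioutil.WriteFile(file, []byte("hi"), 0600))

	// DirExists passes for the directory, FileExists for the regular file.
	assert.DirExists(t, dir)
	assert.FileExists(t, file)

	// Swapping them would fail: FileExists rejects directories and
	// DirExists rejects files, per the implementations above.
}
```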
+func validateEqualArgs(expected, actual interface{}) error { + if isFunction(expected) || isFunction(actual) { + return errors.New("cannot take func type as argument") + } + return nil +} + +func isFunction(arg interface{}) bool { + if arg == nil { + return false + } + return reflect.TypeOf(arg).Kind() == reflect.Func +} + +var spewConfig = spew.ConfigState{ + Indent: " ", + DisablePointerAddresses: true, + DisableCapacities: true, + SortKeys: true, +} diff --git a/vendor/github.com/stretchr/testify/assert/forward_assertions.go b/vendor/github.com/stretchr/testify/assert/forward_assertions.go index b867e95ea..9ad56851d 100644 --- a/vendor/github.com/stretchr/testify/assert/forward_assertions.go +++ b/vendor/github.com/stretchr/testify/assert/forward_assertions.go @@ -13,4 +13,4 @@ func New(t TestingT) *Assertions { } } -//go:generate go run ../_codegen/main.go -output-package=assert -template=assertion_forward.go.tmpl +//go:generate go run ../_codegen/main.go -output-package=assert -template=assertion_forward.go.tmpl -include-format-funcs diff --git a/vendor/github.com/stretchr/testify/assert/http_assertions.go b/vendor/github.com/stretchr/testify/assert/http_assertions.go index fa7ab89b1..3101e78dd 100644 --- a/vendor/github.com/stretchr/testify/assert/http_assertions.go +++ b/vendor/github.com/stretchr/testify/assert/http_assertions.go @@ -8,16 +8,16 @@ import ( "strings" ) -// httpCode is a helper that returns HTTP code of the response. It returns -1 -// if building a new request fails. -func httpCode(handler http.HandlerFunc, method, url string, values url.Values) int { +// httpCode is a helper that returns HTTP code of the response. It returns -1 and +// an error if building a new request fails. +func httpCode(handler http.HandlerFunc, method, url string, values url.Values) (int, error) { w := httptest.NewRecorder() req, err := http.NewRequest(method, url+"?"+values.Encode(), nil) if err != nil { - return -1 + return -1, err } handler(w, req) - return w.Code + return w.Code, nil } // HTTPSuccess asserts that a specified handler returns a success status code. @@ -25,12 +25,19 @@ func httpCode(handler http.HandlerFunc, method, url string, values url.Values) i // assert.HTTPSuccess(t, myHandler, "POST", "http://www.google.com", nil) // // Returns whether the assertion was successful (true) or not (false). -func HTTPSuccess(t TestingT, handler http.HandlerFunc, method, url string, values url.Values) bool { - code := httpCode(handler, method, url, values) - if code == -1 { +func HTTPSuccess(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool { + code, err := httpCode(handler, method, url, values) + if err != nil { + Fail(t, fmt.Sprintf("Failed to build test request, got error: %s", err)) return false } - return code >= http.StatusOK && code <= http.StatusPartialContent + + isSuccessCode := code >= http.StatusOK && code <= http.StatusPartialContent + if !isSuccessCode { + Fail(t, fmt.Sprintf("Expected HTTP success status code for %q but received %d", url+"?"+values.Encode(), code)) + } + + return isSuccessCode } // HTTPRedirect asserts that a specified handler returns a redirect status code. @@ -38,12 +45,19 @@ func HTTPSuccess(t TestingT, handler http.HandlerFunc, method, url string, value // assert.HTTPRedirect(t, myHandler, "GET", "/a/b/c", url.Values{"a": []string{"b", "c"}} // // Returns whether the assertion was successful (true) or not (false). 
-func HTTPRedirect(t TestingT, handler http.HandlerFunc, method, url string, values url.Values) bool { - code := httpCode(handler, method, url, values) - if code == -1 { +func HTTPRedirect(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool { + code, err := httpCode(handler, method, url, values) + if err != nil { + Fail(t, fmt.Sprintf("Failed to build test request, got error: %s", err)) return false } - return code >= http.StatusMultipleChoices && code <= http.StatusTemporaryRedirect + + isRedirectCode := code >= http.StatusMultipleChoices && code <= http.StatusTemporaryRedirect + if !isRedirectCode { + Fail(t, fmt.Sprintf("Expected HTTP redirect status code for %q but received %d", url+"?"+values.Encode(), code)) + } + + return isRedirectCode } // HTTPError asserts that a specified handler returns an error status code. @@ -51,12 +65,19 @@ func HTTPRedirect(t TestingT, handler http.HandlerFunc, method, url string, valu // assert.HTTPError(t, myHandler, "POST", "/a/b/c", url.Values{"a": []string{"b", "c"}} // // Returns whether the assertion was successful (true) or not (false). -func HTTPError(t TestingT, handler http.HandlerFunc, method, url string, values url.Values) bool { - code := httpCode(handler, method, url, values) - if code == -1 { +func HTTPError(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, msgAndArgs ...interface{}) bool { + code, err := httpCode(handler, method, url, values) + if err != nil { + Fail(t, fmt.Sprintf("Failed to build test request, got error: %s", err)) return false } - return code >= http.StatusBadRequest + + isErrorCode := code >= http.StatusBadRequest + if !isErrorCode { + Fail(t, fmt.Sprintf("Expected HTTP error status code for %q but received %d", url+"?"+values.Encode(), code)) + } + + return isErrorCode } // HTTPBody is a helper that returns HTTP body of the response. It returns @@ -77,7 +98,7 @@ func HTTPBody(handler http.HandlerFunc, method, url string, values url.Values) s // assert.HTTPBodyContains(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky") // // Returns whether the assertion was successful (true) or not (false). -func HTTPBodyContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}) bool { +func HTTPBodyContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { body := HTTPBody(handler, method, url, values) contains := strings.Contains(body, fmt.Sprint(str)) @@ -94,7 +115,7 @@ func HTTPBodyContains(t TestingT, handler http.HandlerFunc, method, url string, // assert.HTTPBodyNotContains(t, myHandler, "www.google.com", nil, "I'm Feeling Lucky") // // Returns whether the assertion was successful (true) or not (false). -func HTTPBodyNotContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}) bool { +func HTTPBodyNotContains(t TestingT, handler http.HandlerFunc, method, url string, values url.Values, str interface{}, msgAndArgs ...interface{}) bool { body := HTTPBody(handler, method, url, values) contains := strings.Contains(body, fmt.Sprint(str)) diff --git a/vendor/golang.org/x/crypto/ssh/terminal/terminal.go b/vendor/golang.org/x/crypto/ssh/terminal/terminal.go new file mode 100644 index 000000000..9a887598f --- /dev/null +++ b/vendor/golang.org/x/crypto/ssh/terminal/terminal.go @@ -0,0 +1,951 @@ +// Copyright 2011 The Go Authors. All rights reserved. 
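With these changes the HTTP helpers no longer return a bare `false` when the request cannot be built or the status class is wrong; they call `Fail` with the URL and the received code, and accept optional `msgAndArgs`. A sketch with a throwaway in-process handler:

```
package example

import (
	"net/http"
	"net/url"
	"testing"

	"github.com/stretchr/testify/assert"
)

// healthHandler is a stand-in handler used only for this example.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	if r.URL.Query().Get("fail") == "1" {
		http.Error(w, "boom", http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func TestHealthEndpoint(t *testing.T) {
	// 2xx response: HTTPSuccess passes; on failure it now reports the
	// URL and the status code it actually received.
	assert.HTTPSuccess(t, healthHandler, "GET", "/health", nil)

	// Forcing an error response: HTTPError passes for status >= 400.
	assert.HTTPError(t, healthHandler, "GET", "/health",
		url.Values{"fail": []string{"1"}},
		"expected the handler to report an error")
}
```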
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package terminal + +import ( + "bytes" + "io" + "sync" + "unicode/utf8" +) + +// EscapeCodes contains escape sequences that can be written to the terminal in +// order to achieve different styles of text. +type EscapeCodes struct { + // Foreground colors + Black, Red, Green, Yellow, Blue, Magenta, Cyan, White []byte + + // Reset all attributes + Reset []byte +} + +var vt100EscapeCodes = EscapeCodes{ + Black: []byte{keyEscape, '[', '3', '0', 'm'}, + Red: []byte{keyEscape, '[', '3', '1', 'm'}, + Green: []byte{keyEscape, '[', '3', '2', 'm'}, + Yellow: []byte{keyEscape, '[', '3', '3', 'm'}, + Blue: []byte{keyEscape, '[', '3', '4', 'm'}, + Magenta: []byte{keyEscape, '[', '3', '5', 'm'}, + Cyan: []byte{keyEscape, '[', '3', '6', 'm'}, + White: []byte{keyEscape, '[', '3', '7', 'm'}, + + Reset: []byte{keyEscape, '[', '0', 'm'}, +} + +// Terminal contains the state for running a VT100 terminal that is capable of +// reading lines of input. +type Terminal struct { + // AutoCompleteCallback, if non-null, is called for each keypress with + // the full input line and the current position of the cursor (in + // bytes, as an index into |line|). If it returns ok=false, the key + // press is processed normally. Otherwise it returns a replacement line + // and the new cursor position. + AutoCompleteCallback func(line string, pos int, key rune) (newLine string, newPos int, ok bool) + + // Escape contains a pointer to the escape codes for this terminal. + // It's always a valid pointer, although the escape codes themselves + // may be empty if the terminal doesn't support them. + Escape *EscapeCodes + + // lock protects the terminal and the state in this object from + // concurrent processing of a key press and a Write() call. + lock sync.Mutex + + c io.ReadWriter + prompt []rune + + // line is the current line being entered. + line []rune + // pos is the logical position of the cursor in line + pos int + // echo is true if local echo is enabled + echo bool + // pasteActive is true iff there is a bracketed paste operation in + // progress. + pasteActive bool + + // cursorX contains the current X value of the cursor where the left + // edge is 0. cursorY contains the row number where the first row of + // the current line is 0. + cursorX, cursorY int + // maxLine is the greatest value of cursorY so far. + maxLine int + + termWidth, termHeight int + + // outBuf contains the terminal data to be sent. + outBuf []byte + // remainder contains the remainder of any partial key sequences after + // a read. It aliases into inBuf. + remainder []byte + inBuf [256]byte + + // history contains previously entered commands so that they can be + // accessed with the up and down keys. + history stRingBuffer + // historyIndex stores the currently accessed history entry, where zero + // means the immediately previous entry. + historyIndex int + // When navigating up and down the history it's possible to return to + // the incomplete, initial line. That value is stored in + // historyPending. + historyPending string +} + +// NewTerminal runs a VT100 terminal on the given ReadWriter. If the ReadWriter is +// a local terminal, that terminal must first have been put into raw mode. +// prompt is a string that is written at the start of each input line (i.e. +// "> "). 
+func NewTerminal(c io.ReadWriter, prompt string) *Terminal { + return &Terminal{ + Escape: &vt100EscapeCodes, + c: c, + prompt: []rune(prompt), + termWidth: 80, + termHeight: 24, + echo: true, + historyIndex: -1, + } +} + +const ( + keyCtrlD = 4 + keyCtrlU = 21 + keyEnter = '\r' + keyEscape = 27 + keyBackspace = 127 + keyUnknown = 0xd800 /* UTF-16 surrogate area */ + iota + keyUp + keyDown + keyLeft + keyRight + keyAltLeft + keyAltRight + keyHome + keyEnd + keyDeleteWord + keyDeleteLine + keyClearScreen + keyPasteStart + keyPasteEnd +) + +var ( + crlf = []byte{'\r', '\n'} + pasteStart = []byte{keyEscape, '[', '2', '0', '0', '~'} + pasteEnd = []byte{keyEscape, '[', '2', '0', '1', '~'} +) + +// bytesToKey tries to parse a key sequence from b. If successful, it returns +// the key and the remainder of the input. Otherwise it returns utf8.RuneError. +func bytesToKey(b []byte, pasteActive bool) (rune, []byte) { + if len(b) == 0 { + return utf8.RuneError, nil + } + + if !pasteActive { + switch b[0] { + case 1: // ^A + return keyHome, b[1:] + case 5: // ^E + return keyEnd, b[1:] + case 8: // ^H + return keyBackspace, b[1:] + case 11: // ^K + return keyDeleteLine, b[1:] + case 12: // ^L + return keyClearScreen, b[1:] + case 23: // ^W + return keyDeleteWord, b[1:] + } + } + + if b[0] != keyEscape { + if !utf8.FullRune(b) { + return utf8.RuneError, b + } + r, l := utf8.DecodeRune(b) + return r, b[l:] + } + + if !pasteActive && len(b) >= 3 && b[0] == keyEscape && b[1] == '[' { + switch b[2] { + case 'A': + return keyUp, b[3:] + case 'B': + return keyDown, b[3:] + case 'C': + return keyRight, b[3:] + case 'D': + return keyLeft, b[3:] + case 'H': + return keyHome, b[3:] + case 'F': + return keyEnd, b[3:] + } + } + + if !pasteActive && len(b) >= 6 && b[0] == keyEscape && b[1] == '[' && b[2] == '1' && b[3] == ';' && b[4] == '3' { + switch b[5] { + case 'C': + return keyAltRight, b[6:] + case 'D': + return keyAltLeft, b[6:] + } + } + + if !pasteActive && len(b) >= 6 && bytes.Equal(b[:6], pasteStart) { + return keyPasteStart, b[6:] + } + + if pasteActive && len(b) >= 6 && bytes.Equal(b[:6], pasteEnd) { + return keyPasteEnd, b[6:] + } + + // If we get here then we have a key that we don't recognise, or a + // partial sequence. It's not clear how one should find the end of a + // sequence without knowing them all, but it seems that [a-zA-Z~] only + // appears at the end of a sequence. + for i, c := range b[0:] { + if c >= 'a' && c <= 'z' || c >= 'A' && c <= 'Z' || c == '~' { + return keyUnknown, b[i+1:] + } + } + + return utf8.RuneError, b +} + +// queue appends data to the end of t.outBuf +func (t *Terminal) queue(data []rune) { + t.outBuf = append(t.outBuf, []byte(string(data))...) +} + +var eraseUnderCursor = []rune{' ', keyEscape, '[', 'D'} +var space = []rune{' '} + +func isPrintable(key rune) bool { + isInSurrogateArea := key >= 0xd800 && key <= 0xdbff + return key >= 32 && !isInSurrogateArea +} + +// moveCursorToPos appends data to t.outBuf which will move the cursor to the +// given, logical position in the text. 
+func (t *Terminal) moveCursorToPos(pos int) { + if !t.echo { + return + } + + x := visualLength(t.prompt) + pos + y := x / t.termWidth + x = x % t.termWidth + + up := 0 + if y < t.cursorY { + up = t.cursorY - y + } + + down := 0 + if y > t.cursorY { + down = y - t.cursorY + } + + left := 0 + if x < t.cursorX { + left = t.cursorX - x + } + + right := 0 + if x > t.cursorX { + right = x - t.cursorX + } + + t.cursorX = x + t.cursorY = y + t.move(up, down, left, right) +} + +func (t *Terminal) move(up, down, left, right int) { + movement := make([]rune, 3*(up+down+left+right)) + m := movement + for i := 0; i < up; i++ { + m[0] = keyEscape + m[1] = '[' + m[2] = 'A' + m = m[3:] + } + for i := 0; i < down; i++ { + m[0] = keyEscape + m[1] = '[' + m[2] = 'B' + m = m[3:] + } + for i := 0; i < left; i++ { + m[0] = keyEscape + m[1] = '[' + m[2] = 'D' + m = m[3:] + } + for i := 0; i < right; i++ { + m[0] = keyEscape + m[1] = '[' + m[2] = 'C' + m = m[3:] + } + + t.queue(movement) +} + +func (t *Terminal) clearLineToRight() { + op := []rune{keyEscape, '[', 'K'} + t.queue(op) +} + +const maxLineLength = 4096 + +func (t *Terminal) setLine(newLine []rune, newPos int) { + if t.echo { + t.moveCursorToPos(0) + t.writeLine(newLine) + for i := len(newLine); i < len(t.line); i++ { + t.writeLine(space) + } + t.moveCursorToPos(newPos) + } + t.line = newLine + t.pos = newPos +} + +func (t *Terminal) advanceCursor(places int) { + t.cursorX += places + t.cursorY += t.cursorX / t.termWidth + if t.cursorY > t.maxLine { + t.maxLine = t.cursorY + } + t.cursorX = t.cursorX % t.termWidth + + if places > 0 && t.cursorX == 0 { + // Normally terminals will advance the current position + // when writing a character. But that doesn't happen + // for the last character in a line. However, when + // writing a character (except a new line) that causes + // a line wrap, the position will be advanced two + // places. + // + // So, if we are stopping at the end of a line, we + // need to write a newline so that our cursor can be + // advanced to the next line. + t.outBuf = append(t.outBuf, '\r', '\n') + } +} + +func (t *Terminal) eraseNPreviousChars(n int) { + if n == 0 { + return + } + + if t.pos < n { + n = t.pos + } + t.pos -= n + t.moveCursorToPos(t.pos) + + copy(t.line[t.pos:], t.line[n+t.pos:]) + t.line = t.line[:len(t.line)-n] + if t.echo { + t.writeLine(t.line[t.pos:]) + for i := 0; i < n; i++ { + t.queue(space) + } + t.advanceCursor(n) + t.moveCursorToPos(t.pos) + } +} + +// countToLeftWord returns then number of characters from the cursor to the +// start of the previous word. +func (t *Terminal) countToLeftWord() int { + if t.pos == 0 { + return 0 + } + + pos := t.pos - 1 + for pos > 0 { + if t.line[pos] != ' ' { + break + } + pos-- + } + for pos > 0 { + if t.line[pos] == ' ' { + pos++ + break + } + pos-- + } + + return t.pos - pos +} + +// countToRightWord returns then number of characters from the cursor to the +// start of the next word. +func (t *Terminal) countToRightWord() int { + pos := t.pos + for pos < len(t.line) { + if t.line[pos] == ' ' { + break + } + pos++ + } + for pos < len(t.line) { + if t.line[pos] != ' ' { + break + } + pos++ + } + return pos - t.pos +} + +// visualLength returns the number of visible glyphs in s. 
+func visualLength(runes []rune) int { + inEscapeSeq := false + length := 0 + + for _, r := range runes { + switch { + case inEscapeSeq: + if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') { + inEscapeSeq = false + } + case r == '\x1b': + inEscapeSeq = true + default: + length++ + } + } + + return length +} + +// handleKey processes the given key and, optionally, returns a line of text +// that the user has entered. +func (t *Terminal) handleKey(key rune) (line string, ok bool) { + if t.pasteActive && key != keyEnter { + t.addKeyToLine(key) + return + } + + switch key { + case keyBackspace: + if t.pos == 0 { + return + } + t.eraseNPreviousChars(1) + case keyAltLeft: + // move left by a word. + t.pos -= t.countToLeftWord() + t.moveCursorToPos(t.pos) + case keyAltRight: + // move right by a word. + t.pos += t.countToRightWord() + t.moveCursorToPos(t.pos) + case keyLeft: + if t.pos == 0 { + return + } + t.pos-- + t.moveCursorToPos(t.pos) + case keyRight: + if t.pos == len(t.line) { + return + } + t.pos++ + t.moveCursorToPos(t.pos) + case keyHome: + if t.pos == 0 { + return + } + t.pos = 0 + t.moveCursorToPos(t.pos) + case keyEnd: + if t.pos == len(t.line) { + return + } + t.pos = len(t.line) + t.moveCursorToPos(t.pos) + case keyUp: + entry, ok := t.history.NthPreviousEntry(t.historyIndex + 1) + if !ok { + return "", false + } + if t.historyIndex == -1 { + t.historyPending = string(t.line) + } + t.historyIndex++ + runes := []rune(entry) + t.setLine(runes, len(runes)) + case keyDown: + switch t.historyIndex { + case -1: + return + case 0: + runes := []rune(t.historyPending) + t.setLine(runes, len(runes)) + t.historyIndex-- + default: + entry, ok := t.history.NthPreviousEntry(t.historyIndex - 1) + if ok { + t.historyIndex-- + runes := []rune(entry) + t.setLine(runes, len(runes)) + } + } + case keyEnter: + t.moveCursorToPos(len(t.line)) + t.queue([]rune("\r\n")) + line = string(t.line) + ok = true + t.line = t.line[:0] + t.pos = 0 + t.cursorX = 0 + t.cursorY = 0 + t.maxLine = 0 + case keyDeleteWord: + // Delete zero or more spaces and then one or more characters. + t.eraseNPreviousChars(t.countToLeftWord()) + case keyDeleteLine: + // Delete everything from the current cursor position to the + // end of line. + for i := t.pos; i < len(t.line); i++ { + t.queue(space) + t.advanceCursor(1) + } + t.line = t.line[:t.pos] + t.moveCursorToPos(t.pos) + case keyCtrlD: + // Erase the character under the current position. + // The EOF case when the line is empty is handled in + // readLine(). + if t.pos < len(t.line) { + t.pos++ + t.eraseNPreviousChars(1) + } + case keyCtrlU: + t.eraseNPreviousChars(t.pos) + case keyClearScreen: + // Erases the screen and moves the cursor to the home position. + t.queue([]rune("\x1b[2J\x1b[H")) + t.queue(t.prompt) + t.cursorX, t.cursorY = 0, 0 + t.advanceCursor(visualLength(t.prompt)) + t.setLine(t.line, t.pos) + default: + if t.AutoCompleteCallback != nil { + prefix := string(t.line[:t.pos]) + suffix := string(t.line[t.pos:]) + + t.lock.Unlock() + newLine, newPos, completeOk := t.AutoCompleteCallback(prefix+suffix, len(prefix), key) + t.lock.Lock() + + if completeOk { + t.setLine([]rune(newLine), utf8.RuneCount([]byte(newLine)[:newPos])) + return + } + } + if !isPrintable(key) { + return + } + if len(t.line) == maxLineLength { + return + } + t.addKeyToLine(key) + } + return +} + +// addKeyToLine inserts the given key at the current position in the current +// line. 
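(Editor's note, illustrative only: the default case in handleKey above is where the Terminal's AutoCompleteCallback runs, with the lock released for the duration of the call; addKeyToLine, defined just below, handles the key when the callback declines. A toy Tab-completion callback might look like the following sketch; the command list and installCompleter helper are hypothetical.)

```
import (
	"strings"

	"golang.org/x/crypto/ssh/terminal"
)

// installCompleter wires a toy Tab-completion callback onto t. Per the field
// documentation, pos and the returned position are byte indexes into the line.
func installCompleter(t *terminal.Terminal) {
	commands := []string{"help", "hello", "halt"}
	t.AutoCompleteCallback = func(line string, pos int, key rune) (string, int, bool) {
		if key != '\t' {
			return "", 0, false // not Tab: let normal key handling proceed
		}
		for _, c := range commands {
			if strings.HasPrefix(c, line[:pos]) {
				// Replace the typed prefix with the full command and put the
				// cursor at the end of the completion.
				return c + line[pos:], len(c), true
			}
		}
		return "", 0, false // no match: fall through to normal key handling
	}
}
```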
+func (t *Terminal) addKeyToLine(key rune) { + if len(t.line) == cap(t.line) { + newLine := make([]rune, len(t.line), 2*(1+len(t.line))) + copy(newLine, t.line) + t.line = newLine + } + t.line = t.line[:len(t.line)+1] + copy(t.line[t.pos+1:], t.line[t.pos:]) + t.line[t.pos] = key + if t.echo { + t.writeLine(t.line[t.pos:]) + } + t.pos++ + t.moveCursorToPos(t.pos) +} + +func (t *Terminal) writeLine(line []rune) { + for len(line) != 0 { + remainingOnLine := t.termWidth - t.cursorX + todo := len(line) + if todo > remainingOnLine { + todo = remainingOnLine + } + t.queue(line[:todo]) + t.advanceCursor(visualLength(line[:todo])) + line = line[todo:] + } +} + +// writeWithCRLF writes buf to w but replaces all occurrences of \n with \r\n. +func writeWithCRLF(w io.Writer, buf []byte) (n int, err error) { + for len(buf) > 0 { + i := bytes.IndexByte(buf, '\n') + todo := len(buf) + if i >= 0 { + todo = i + } + + var nn int + nn, err = w.Write(buf[:todo]) + n += nn + if err != nil { + return n, err + } + buf = buf[todo:] + + if i >= 0 { + if _, err = w.Write(crlf); err != nil { + return n, err + } + n++ + buf = buf[1:] + } + } + + return n, nil +} + +func (t *Terminal) Write(buf []byte) (n int, err error) { + t.lock.Lock() + defer t.lock.Unlock() + + if t.cursorX == 0 && t.cursorY == 0 { + // This is the easy case: there's nothing on the screen that we + // have to move out of the way. + return writeWithCRLF(t.c, buf) + } + + // We have a prompt and possibly user input on the screen. We + // have to clear it first. + t.move(0 /* up */, 0 /* down */, t.cursorX /* left */, 0 /* right */) + t.cursorX = 0 + t.clearLineToRight() + + for t.cursorY > 0 { + t.move(1 /* up */, 0, 0, 0) + t.cursorY-- + t.clearLineToRight() + } + + if _, err = t.c.Write(t.outBuf); err != nil { + return + } + t.outBuf = t.outBuf[:0] + + if n, err = writeWithCRLF(t.c, buf); err != nil { + return + } + + t.writeLine(t.prompt) + if t.echo { + t.writeLine(t.line) + } + + t.moveCursorToPos(t.pos) + + if _, err = t.c.Write(t.outBuf); err != nil { + return + } + t.outBuf = t.outBuf[:0] + return +} + +// ReadPassword temporarily changes the prompt and reads a password, without +// echo, from the terminal. +func (t *Terminal) ReadPassword(prompt string) (line string, err error) { + t.lock.Lock() + defer t.lock.Unlock() + + oldPrompt := t.prompt + t.prompt = []rune(prompt) + t.echo = false + + line, err = t.readLine() + + t.prompt = oldPrompt + t.echo = true + + return +} + +// ReadLine returns a line of input from the terminal. 
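(Editor's note, a sketch rather than vendored code: because Write above takes the same lock as key handling and clears and repaints the prompt and pending input, other goroutines can safely log through the Terminal while the ReadLine method defined immediately below is blocked. The runWithHeartbeat helper is hypothetical.)

```
import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh/terminal"
)

// runWithHeartbeat assumes t is already attached to a raw-mode io.ReadWriter,
// as in the earlier NewTerminal sketch.
func runWithHeartbeat(t *terminal.Terminal) error {
	go func() {
		for range time.Tick(5 * time.Second) {
			// Safe while ReadLine is blocked: Write clears the pending prompt
			// and input, emits the message, then repaints them.
			fmt.Fprintln(t, "heartbeat: still running")
		}
	}()

	for {
		line, err := t.ReadLine()
		if err != nil {
			return err // io.EOF once Ctrl-D is pressed on an empty line
		}
		fmt.Fprintf(t, "got: %s\n", line)
	}
}
```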
+func (t *Terminal) ReadLine() (line string, err error) { + t.lock.Lock() + defer t.lock.Unlock() + + return t.readLine() +} + +func (t *Terminal) readLine() (line string, err error) { + // t.lock must be held at this point + + if t.cursorX == 0 && t.cursorY == 0 { + t.writeLine(t.prompt) + t.c.Write(t.outBuf) + t.outBuf = t.outBuf[:0] + } + + lineIsPasted := t.pasteActive + + for { + rest := t.remainder + lineOk := false + for !lineOk { + var key rune + key, rest = bytesToKey(rest, t.pasteActive) + if key == utf8.RuneError { + break + } + if !t.pasteActive { + if key == keyCtrlD { + if len(t.line) == 0 { + return "", io.EOF + } + } + if key == keyPasteStart { + t.pasteActive = true + if len(t.line) == 0 { + lineIsPasted = true + } + continue + } + } else if key == keyPasteEnd { + t.pasteActive = false + continue + } + if !t.pasteActive { + lineIsPasted = false + } + line, lineOk = t.handleKey(key) + } + if len(rest) > 0 { + n := copy(t.inBuf[:], rest) + t.remainder = t.inBuf[:n] + } else { + t.remainder = nil + } + t.c.Write(t.outBuf) + t.outBuf = t.outBuf[:0] + if lineOk { + if t.echo { + t.historyIndex = -1 + t.history.Add(line) + } + if lineIsPasted { + err = ErrPasteIndicator + } + return + } + + // t.remainder is a slice at the beginning of t.inBuf + // containing a partial key sequence + readBuf := t.inBuf[len(t.remainder):] + var n int + + t.lock.Unlock() + n, err = t.c.Read(readBuf) + t.lock.Lock() + + if err != nil { + return + } + + t.remainder = t.inBuf[:n+len(t.remainder)] + } +} + +// SetPrompt sets the prompt to be used when reading subsequent lines. +func (t *Terminal) SetPrompt(prompt string) { + t.lock.Lock() + defer t.lock.Unlock() + + t.prompt = []rune(prompt) +} + +func (t *Terminal) clearAndRepaintLinePlusNPrevious(numPrevLines int) { + // Move cursor to column zero at the start of the line. + t.move(t.cursorY, 0, t.cursorX, 0) + t.cursorX, t.cursorY = 0, 0 + t.clearLineToRight() + for t.cursorY < numPrevLines { + // Move down a line + t.move(0, 1, 0, 0) + t.cursorY++ + t.clearLineToRight() + } + // Move back to beginning. + t.move(t.cursorY, 0, 0, 0) + t.cursorX, t.cursorY = 0, 0 + + t.queue(t.prompt) + t.advanceCursor(visualLength(t.prompt)) + t.writeLine(t.line) + t.moveCursorToPos(t.pos) +} + +func (t *Terminal) SetSize(width, height int) error { + t.lock.Lock() + defer t.lock.Unlock() + + if width == 0 { + width = 1 + } + + oldWidth := t.termWidth + t.termWidth, t.termHeight = width, height + + switch { + case width == oldWidth: + // If the width didn't change then nothing else needs to be + // done. + return nil + case len(t.line) == 0 && t.cursorX == 0 && t.cursorY == 0: + // If there is nothing on current line and no prompt printed, + // just do nothing + return nil + case width < oldWidth: + // Some terminals (e.g. xterm) will truncate lines that were + // too long when shinking. Others, (e.g. gnome-terminal) will + // attempt to wrap them. For the former, repainting t.maxLine + // works great, but that behaviour goes badly wrong in the case + // of the latter because they have doubled every full line. + + // We assume that we are working on a terminal that wraps lines + // and adjust the cursor position based on every previous line + // wrapping and turning into two. This causes the prompt on + // xterms to move upwards, which isn't great, but it avoids a + // huge mess with gnome-terminal. 
+ if t.cursorX >= t.termWidth { + t.cursorX = t.termWidth - 1 + } + t.cursorY *= 2 + t.clearAndRepaintLinePlusNPrevious(t.maxLine * 2) + case width > oldWidth: + // If the terminal expands then our position calculations will + // be wrong in the future because we think the cursor is + // |t.pos| chars into the string, but there will be a gap at + // the end of any wrapped line. + // + // But the position will actually be correct until we move, so + // we can move back to the beginning and repaint everything. + t.clearAndRepaintLinePlusNPrevious(t.maxLine) + } + + _, err := t.c.Write(t.outBuf) + t.outBuf = t.outBuf[:0] + return err +} + +type pasteIndicatorError struct{} + +func (pasteIndicatorError) Error() string { + return "terminal: ErrPasteIndicator not correctly handled" +} + +// ErrPasteIndicator may be returned from ReadLine as the error, in addition +// to valid line data. It indicates that bracketed paste mode is enabled and +// that the returned line consists only of pasted data. Programs may wish to +// interpret pasted data more literally than typed data. +var ErrPasteIndicator = pasteIndicatorError{} + +// SetBracketedPasteMode requests that the terminal bracket paste operations +// with markers. Not all terminals support this but, if it is supported, then +// enabling this mode will stop any autocomplete callback from running due to +// pastes. Additionally, any lines that are completely pasted will be returned +// from ReadLine with the error set to ErrPasteIndicator. +func (t *Terminal) SetBracketedPasteMode(on bool) { + if on { + io.WriteString(t.c, "\x1b[?2004h") + } else { + io.WriteString(t.c, "\x1b[?2004l") + } +} + +// stRingBuffer is a ring buffer of strings. +type stRingBuffer struct { + // entries contains max elements. + entries []string + max int + // head contains the index of the element most recently added to the ring. + head int + // size contains the number of elements in the ring. + size int +} + +func (s *stRingBuffer) Add(a string) { + if s.entries == nil { + const defaultNumEntries = 100 + s.entries = make([]string, defaultNumEntries) + s.max = defaultNumEntries + } + + s.head = (s.head + 1) % s.max + s.entries[s.head] = a + if s.size < s.max { + s.size++ + } +} + +// NthPreviousEntry returns the value passed to the nth previous call to Add. +// If n is zero then the immediately prior value is returned, if one, then the +// next most recent, and so on. If such an element doesn't exist then ok is +// false. +func (s *stRingBuffer) NthPreviousEntry(n int) (value string, ok bool) { + if n >= s.size { + return "", false + } + index := s.head - n + if index < 0 { + index += s.max + } + return s.entries[index], true +} + +// readPasswordLine reads from reader until it finds \n or io.EOF. +// The slice returned does not include the \n. +// readPasswordLine also ignores any \r it finds. 
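(Editor's note, illustrative only: with SetBracketedPasteMode enabled as above, ReadLine can hand back a valid line together with ErrPasteIndicator, so callers should treat that error as a success case. The readLoop helper and process callback below are hypothetical; readPasswordLine, defined just after this sketch, is the unexported helper behind the fd-based ReadPassword variants and is unrelated to it.)

```
import "golang.org/x/crypto/ssh/terminal"

// readLoop assumes t is attached to a raw-mode io.ReadWriter, as in the
// earlier sketches.
func readLoop(t *terminal.Terminal, process func(string)) error {
	t.SetBracketedPasteMode(true)
	defer t.SetBracketedPasteMode(false)

	for {
		line, err := t.ReadLine()
		if err == terminal.ErrPasteIndicator {
			// The line is still valid; it was pasted wholesale, so a caller
			// might skip history expansion or other "typed input" behaviour.
			err = nil
		}
		if err != nil {
			return err // e.g. io.EOF
		}
		process(line)
	}
}
```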
+func readPasswordLine(reader io.Reader) ([]byte, error) { + var buf [1]byte + var ret []byte + + for { + n, err := reader.Read(buf[:]) + if n > 0 { + switch buf[0] { + case '\n': + return ret, nil + case '\r': + // remove \r from passwords on Windows + default: + ret = append(ret, buf[0]) + } + continue + } + if err != nil { + if err == io.EOF && len(ret) > 0 { + return ret, nil + } + return ret, err + } + } +} diff --git a/vendor/golang.org/x/crypto/ssh/terminal/util.go b/vendor/golang.org/x/crypto/ssh/terminal/util.go new file mode 100644 index 000000000..731c89a28 --- /dev/null +++ b/vendor/golang.org/x/crypto/ssh/terminal/util.go @@ -0,0 +1,114 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build darwin dragonfly freebsd linux,!appengine netbsd openbsd + +// Package terminal provides support functions for dealing with terminals, as +// commonly found on UNIX systems. +// +// Putting a terminal into raw mode is the most common requirement: +// +// oldState, err := terminal.MakeRaw(0) +// if err != nil { +// panic(err) +// } +// defer terminal.Restore(0, oldState) +package terminal // import "golang.org/x/crypto/ssh/terminal" + +import ( + "golang.org/x/sys/unix" +) + +// State contains the state of a terminal. +type State struct { + termios unix.Termios +} + +// IsTerminal returns true if the given file descriptor is a terminal. +func IsTerminal(fd int) bool { + _, err := unix.IoctlGetTermios(fd, ioctlReadTermios) + return err == nil +} + +// MakeRaw put the terminal connected to the given file descriptor into raw +// mode and returns the previous state of the terminal so that it can be +// restored. +func MakeRaw(fd int) (*State, error) { + termios, err := unix.IoctlGetTermios(fd, ioctlReadTermios) + if err != nil { + return nil, err + } + + oldState := State{termios: *termios} + + // This attempts to replicate the behaviour documented for cfmakeraw in + // the termios(3) manpage. + termios.Iflag &^= unix.IGNBRK | unix.BRKINT | unix.PARMRK | unix.ISTRIP | unix.INLCR | unix.IGNCR | unix.ICRNL | unix.IXON + termios.Oflag &^= unix.OPOST + termios.Lflag &^= unix.ECHO | unix.ECHONL | unix.ICANON | unix.ISIG | unix.IEXTEN + termios.Cflag &^= unix.CSIZE | unix.PARENB + termios.Cflag |= unix.CS8 + termios.Cc[unix.VMIN] = 1 + termios.Cc[unix.VTIME] = 0 + if err := unix.IoctlSetTermios(fd, ioctlWriteTermios, termios); err != nil { + return nil, err + } + + return &oldState, nil +} + +// GetState returns the current state of a terminal which may be useful to +// restore the terminal after a signal. +func GetState(fd int) (*State, error) { + termios, err := unix.IoctlGetTermios(fd, ioctlReadTermios) + if err != nil { + return nil, err + } + + return &State{termios: *termios}, nil +} + +// Restore restores the terminal connected to the given file descriptor to a +// previous state. +func Restore(fd int, state *State) error { + return unix.IoctlSetTermios(fd, ioctlWriteTermios, &state.termios) +} + +// GetSize returns the dimensions of the given terminal. +func GetSize(fd int) (width, height int, err error) { + ws, err := unix.IoctlGetWinsize(fd, unix.TIOCGWINSZ) + if err != nil { + return -1, -1, err + } + return int(ws.Col), int(ws.Row), nil +} + +// passwordReader is an io.Reader that reads from a specific file descriptor. 
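(Editor's note, a usage sketch: for a bare password prompt most callers skip the Terminal line editor entirely and use IsTerminal together with the fd-based ReadPassword defined just below via this passwordReader helper.)

```
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh/terminal"
)

func main() {
	fd := int(os.Stdin.Fd())
	if !terminal.IsTerminal(fd) {
		fmt.Fprintln(os.Stderr, "stdin is not a terminal; refusing to prompt")
		os.Exit(1)
	}

	fmt.Print("Password: ")
	pw, err := terminal.ReadPassword(fd) // echo stays off while typing
	fmt.Println()                        // the user's Enter was not echoed
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes\n", len(pw))
}
```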
+type passwordReader int + +func (r passwordReader) Read(buf []byte) (int, error) { + return unix.Read(int(r), buf) +} + +// ReadPassword reads a line of input from a terminal without local echo. This +// is commonly used for inputting passwords and other sensitive data. The slice +// returned does not include the \n. +func ReadPassword(fd int) ([]byte, error) { + termios, err := unix.IoctlGetTermios(fd, ioctlReadTermios) + if err != nil { + return nil, err + } + + newState := *termios + newState.Lflag &^= unix.ECHO + newState.Lflag |= unix.ICANON | unix.ISIG + newState.Iflag |= unix.ICRNL + if err := unix.IoctlSetTermios(fd, ioctlWriteTermios, &newState); err != nil { + return nil, err + } + + defer unix.IoctlSetTermios(fd, ioctlWriteTermios, termios) + + return readPasswordLine(passwordReader(fd)) +} diff --git a/vendor/golang.org/x/crypto/ssh/terminal/util_bsd.go b/vendor/golang.org/x/crypto/ssh/terminal/util_bsd.go new file mode 100644 index 000000000..cb23a5904 --- /dev/null +++ b/vendor/golang.org/x/crypto/ssh/terminal/util_bsd.go @@ -0,0 +1,12 @@ +// Copyright 2013 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build darwin dragonfly freebsd netbsd openbsd + +package terminal + +import "golang.org/x/sys/unix" + +const ioctlReadTermios = unix.TIOCGETA +const ioctlWriteTermios = unix.TIOCSETA diff --git a/vendor/golang.org/x/sys/unix/syscall_no_getwd.go b/vendor/golang.org/x/crypto/ssh/terminal/util_linux.go similarity index 53% rename from vendor/golang.org/x/sys/unix/syscall_no_getwd.go rename to vendor/golang.org/x/crypto/ssh/terminal/util_linux.go index 530792ea9..5fadfe8a1 100644 --- a/vendor/golang.org/x/sys/unix/syscall_no_getwd.go +++ b/vendor/golang.org/x/crypto/ssh/terminal/util_linux.go @@ -2,10 +2,9 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -// +build dragonfly freebsd netbsd openbsd +package terminal -package unix +import "golang.org/x/sys/unix" -const ImplementsGetwd = false - -func Getwd() (string, error) { return "", ENOTSUP } +const ioctlReadTermios = unix.TCGETS +const ioctlWriteTermios = unix.TCSETS diff --git a/vendor/golang.org/x/crypto/ssh/terminal/util_plan9.go b/vendor/golang.org/x/crypto/ssh/terminal/util_plan9.go new file mode 100644 index 000000000..799f049f0 --- /dev/null +++ b/vendor/golang.org/x/crypto/ssh/terminal/util_plan9.go @@ -0,0 +1,58 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package terminal provides support functions for dealing with terminals, as +// commonly found on UNIX systems. +// +// Putting a terminal into raw mode is the most common requirement: +// +// oldState, err := terminal.MakeRaw(0) +// if err != nil { +// panic(err) +// } +// defer terminal.Restore(0, oldState) +package terminal + +import ( + "fmt" + "runtime" +) + +type State struct{} + +// IsTerminal returns true if the given file descriptor is a terminal. +func IsTerminal(fd int) bool { + return false +} + +// MakeRaw put the terminal connected to the given file descriptor into raw +// mode and returns the previous state of the terminal so that it can be +// restored. 
+func MakeRaw(fd int) (*State, error) { + return nil, fmt.Errorf("terminal: MakeRaw not implemented on %s/%s", runtime.GOOS, runtime.GOARCH) +} + +// GetState returns the current state of a terminal which may be useful to +// restore the terminal after a signal. +func GetState(fd int) (*State, error) { + return nil, fmt.Errorf("terminal: GetState not implemented on %s/%s", runtime.GOOS, runtime.GOARCH) +} + +// Restore restores the terminal connected to the given file descriptor to a +// previous state. +func Restore(fd int, state *State) error { + return fmt.Errorf("terminal: Restore not implemented on %s/%s", runtime.GOOS, runtime.GOARCH) +} + +// GetSize returns the dimensions of the given terminal. +func GetSize(fd int) (width, height int, err error) { + return 0, 0, fmt.Errorf("terminal: GetSize not implemented on %s/%s", runtime.GOOS, runtime.GOARCH) +} + +// ReadPassword reads a line of input from a terminal without local echo. This +// is commonly used for inputting passwords and other sensitive data. The slice +// returned does not include the \n. +func ReadPassword(fd int) ([]byte, error) { + return nil, fmt.Errorf("terminal: ReadPassword not implemented on %s/%s", runtime.GOOS, runtime.GOARCH) +} diff --git a/vendor/golang.org/x/crypto/ssh/terminal/util_solaris.go b/vendor/golang.org/x/crypto/ssh/terminal/util_solaris.go new file mode 100644 index 000000000..9e41b9f43 --- /dev/null +++ b/vendor/golang.org/x/crypto/ssh/terminal/util_solaris.go @@ -0,0 +1,124 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build solaris + +package terminal // import "golang.org/x/crypto/ssh/terminal" + +import ( + "golang.org/x/sys/unix" + "io" + "syscall" +) + +// State contains the state of a terminal. +type State struct { + termios unix.Termios +} + +// IsTerminal returns true if the given file descriptor is a terminal. +func IsTerminal(fd int) bool { + _, err := unix.IoctlGetTermio(fd, unix.TCGETA) + return err == nil +} + +// ReadPassword reads a line of input from a terminal without local echo. This +// is commonly used for inputting passwords and other sensitive data. The slice +// returned does not include the \n. +func ReadPassword(fd int) ([]byte, error) { + // see also: http://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libast/common/uwin/getpass.c + val, err := unix.IoctlGetTermios(fd, unix.TCGETS) + if err != nil { + return nil, err + } + oldState := *val + + newState := oldState + newState.Lflag &^= syscall.ECHO + newState.Lflag |= syscall.ICANON | syscall.ISIG + newState.Iflag |= syscall.ICRNL + err = unix.IoctlSetTermios(fd, unix.TCSETS, &newState) + if err != nil { + return nil, err + } + + defer unix.IoctlSetTermios(fd, unix.TCSETS, &oldState) + + var buf [16]byte + var ret []byte + for { + n, err := syscall.Read(fd, buf[:]) + if err != nil { + return nil, err + } + if n == 0 { + if len(ret) == 0 { + return nil, io.EOF + } + break + } + if buf[n-1] == '\n' { + n-- + } + ret = append(ret, buf[:n]...) + if n < len(buf) { + break + } + } + + return ret, nil +} + +// MakeRaw puts the terminal connected to the given file descriptor into raw +// mode and returns the previous state of the terminal so that it can be +// restored. 
+// see http://cr.illumos.org/~webrev/andy_js/1060/ +func MakeRaw(fd int) (*State, error) { + termios, err := unix.IoctlGetTermios(fd, unix.TCGETS) + if err != nil { + return nil, err + } + + oldState := State{termios: *termios} + + termios.Iflag &^= unix.IGNBRK | unix.BRKINT | unix.PARMRK | unix.ISTRIP | unix.INLCR | unix.IGNCR | unix.ICRNL | unix.IXON + termios.Oflag &^= unix.OPOST + termios.Lflag &^= unix.ECHO | unix.ECHONL | unix.ICANON | unix.ISIG | unix.IEXTEN + termios.Cflag &^= unix.CSIZE | unix.PARENB + termios.Cflag |= unix.CS8 + termios.Cc[unix.VMIN] = 1 + termios.Cc[unix.VTIME] = 0 + + if err := unix.IoctlSetTermios(fd, unix.TCSETS, termios); err != nil { + return nil, err + } + + return &oldState, nil +} + +// Restore restores the terminal connected to the given file descriptor to a +// previous state. +func Restore(fd int, oldState *State) error { + return unix.IoctlSetTermios(fd, unix.TCSETS, &oldState.termios) +} + +// GetState returns the current state of a terminal which may be useful to +// restore the terminal after a signal. +func GetState(fd int) (*State, error) { + termios, err := unix.IoctlGetTermios(fd, unix.TCGETS) + if err != nil { + return nil, err + } + + return &State{termios: *termios}, nil +} + +// GetSize returns the dimensions of the given terminal. +func GetSize(fd int) (width, height int, err error) { + ws, err := unix.IoctlGetWinsize(fd, unix.TIOCGWINSZ) + if err != nil { + return 0, 0, err + } + return int(ws.Col), int(ws.Row), nil +} diff --git a/vendor/golang.org/x/crypto/ssh/terminal/util_windows.go b/vendor/golang.org/x/crypto/ssh/terminal/util_windows.go new file mode 100644 index 000000000..8618955df --- /dev/null +++ b/vendor/golang.org/x/crypto/ssh/terminal/util_windows.go @@ -0,0 +1,103 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build windows + +// Package terminal provides support functions for dealing with terminals, as +// commonly found on UNIX systems. +// +// Putting a terminal into raw mode is the most common requirement: +// +// oldState, err := terminal.MakeRaw(0) +// if err != nil { +// panic(err) +// } +// defer terminal.Restore(0, oldState) +package terminal + +import ( + "os" + + "golang.org/x/sys/windows" +) + +type State struct { + mode uint32 +} + +// IsTerminal returns true if the given file descriptor is a terminal. +func IsTerminal(fd int) bool { + var st uint32 + err := windows.GetConsoleMode(windows.Handle(fd), &st) + return err == nil +} + +// MakeRaw put the terminal connected to the given file descriptor into raw +// mode and returns the previous state of the terminal so that it can be +// restored. +func MakeRaw(fd int) (*State, error) { + var st uint32 + if err := windows.GetConsoleMode(windows.Handle(fd), &st); err != nil { + return nil, err + } + raw := st &^ (windows.ENABLE_ECHO_INPUT | windows.ENABLE_PROCESSED_INPUT | windows.ENABLE_LINE_INPUT | windows.ENABLE_PROCESSED_OUTPUT) + if err := windows.SetConsoleMode(windows.Handle(fd), raw); err != nil { + return nil, err + } + return &State{st}, nil +} + +// GetState returns the current state of a terminal which may be useful to +// restore the terminal after a signal. 
+func GetState(fd int) (*State, error) { + var st uint32 + if err := windows.GetConsoleMode(windows.Handle(fd), &st); err != nil { + return nil, err + } + return &State{st}, nil +} + +// Restore restores the terminal connected to the given file descriptor to a +// previous state. +func Restore(fd int, state *State) error { + return windows.SetConsoleMode(windows.Handle(fd), state.mode) +} + +// GetSize returns the dimensions of the given terminal. +func GetSize(fd int) (width, height int, err error) { + var info windows.ConsoleScreenBufferInfo + if err := windows.GetConsoleScreenBufferInfo(windows.Handle(fd), &info); err != nil { + return 0, 0, err + } + return int(info.Size.X), int(info.Size.Y), nil +} + +// ReadPassword reads a line of input from a terminal without local echo. This +// is commonly used for inputting passwords and other sensitive data. The slice +// returned does not include the \n. +func ReadPassword(fd int) ([]byte, error) { + var st uint32 + if err := windows.GetConsoleMode(windows.Handle(fd), &st); err != nil { + return nil, err + } + old := st + + st &^= (windows.ENABLE_ECHO_INPUT) + st |= (windows.ENABLE_PROCESSED_INPUT | windows.ENABLE_LINE_INPUT | windows.ENABLE_PROCESSED_OUTPUT) + if err := windows.SetConsoleMode(windows.Handle(fd), st); err != nil { + return nil, err + } + + defer windows.SetConsoleMode(windows.Handle(fd), old) + + var h windows.Handle + p, _ := windows.GetCurrentProcess() + if err := windows.DuplicateHandle(p, windows.Handle(fd), p, &h, 0, false, windows.DUPLICATE_SAME_ACCESS); err != nil { + return nil, err + } + + f := os.NewFile(uintptr(h), "stdin") + defer f.Close() + return readPasswordLine(f) +} diff --git a/vendor/golang.org/x/net/context/context.go b/vendor/golang.org/x/net/context/context.go index 77b64d0c6..a3c021d3f 100644 --- a/vendor/golang.org/x/net/context/context.go +++ b/vendor/golang.org/x/net/context/context.go @@ -5,9 +5,11 @@ // Package context defines the Context type, which carries deadlines, // cancelation signals, and other request-scoped values across API boundaries // and between processes. +// As of Go 1.7 this package is available in the standard library under the +// name context. https://golang.org/pkg/context. // // Incoming requests to a server should create a Context, and outgoing calls to -// servers should accept a Context. The chain of function calls between must +// servers should accept a Context. The chain of function calls between must // propagate the Context, optionally replacing it with a modified copy created // using WithDeadline, WithTimeout, WithCancel, or WithValue. // @@ -16,14 +18,14 @@ // propagation: // // Do not store Contexts inside a struct type; instead, pass a Context -// explicitly to each function that needs it. The Context should be the first +// explicitly to each function that needs it. The Context should be the first // parameter, typically named ctx: // // func DoSomething(ctx context.Context, arg Arg) error { // // ... use ctx ... // } // -// Do not pass a nil Context, even if a function permits it. Pass context.TODO +// Do not pass a nil Context, even if a function permits it. Pass context.TODO // if you are unsure about which Context to use. // // Use context Values only for request-scoped data that transits processes and @@ -36,159 +38,15 @@ // Contexts. package context // import "golang.org/x/net/context" -import ( - "errors" - "fmt" - "sync" - "time" -) - -// A Context carries a deadline, a cancelation signal, and other values across -// API boundaries. 
-// -// Context's methods may be called by multiple goroutines simultaneously. -type Context interface { - // Deadline returns the time when work done on behalf of this context - // should be canceled. Deadline returns ok==false when no deadline is - // set. Successive calls to Deadline return the same results. - Deadline() (deadline time.Time, ok bool) - - // Done returns a channel that's closed when work done on behalf of this - // context should be canceled. Done may return nil if this context can - // never be canceled. Successive calls to Done return the same value. - // - // WithCancel arranges for Done to be closed when cancel is called; - // WithDeadline arranges for Done to be closed when the deadline - // expires; WithTimeout arranges for Done to be closed when the timeout - // elapses. - // - // Done is provided for use in select statements: - // - // // Stream generates values with DoSomething and sends them to out - // // until DoSomething returns an error or ctx.Done is closed. - // func Stream(ctx context.Context, out <-chan Value) error { - // for { - // v, err := DoSomething(ctx) - // if err != nil { - // return err - // } - // select { - // case <-ctx.Done(): - // return ctx.Err() - // case out <- v: - // } - // } - // } - // - // See http://blog.golang.org/pipelines for more examples of how to use - // a Done channel for cancelation. - Done() <-chan struct{} - - // Err returns a non-nil error value after Done is closed. Err returns - // Canceled if the context was canceled or DeadlineExceeded if the - // context's deadline passed. No other values for Err are defined. - // After Done is closed, successive calls to Err return the same value. - Err() error - - // Value returns the value associated with this context for key, or nil - // if no value is associated with key. Successive calls to Value with - // the same key returns the same result. - // - // Use context values only for request-scoped data that transits - // processes and API boundaries, not for passing optional parameters to - // functions. - // - // A key identifies a specific value in a Context. Functions that wish - // to store values in Context typically allocate a key in a global - // variable then use that key as the argument to context.WithValue and - // Context.Value. A key can be any type that supports equality; - // packages should define keys as an unexported type to avoid - // collisions. - // - // Packages that define a Context key should provide type-safe accessors - // for the values stores using that key: - // - // // Package user defines a User type that's stored in Contexts. - // package user - // - // import "golang.org/x/net/context" - // - // // User is the type of value stored in the Contexts. - // type User struct {...} - // - // // key is an unexported type for keys defined in this package. - // // This prevents collisions with keys defined in other packages. - // type key int - // - // // userKey is the key for user.User values in Contexts. It is - // // unexported; clients use user.NewContext and user.FromContext - // // instead of using this key directly. - // var userKey key = 0 - // - // // NewContext returns a new Context that carries value u. - // func NewContext(ctx context.Context, u *User) context.Context { - // return context.WithValue(ctx, userKey, u) - // } - // - // // FromContext returns the User value stored in ctx, if any. 
- // func FromContext(ctx context.Context) (*User, bool) { - // u, ok := ctx.Value(userKey).(*User) - // return u, ok - // } - Value(key interface{}) interface{} -} - -// Canceled is the error returned by Context.Err when the context is canceled. -var Canceled = errors.New("context canceled") - -// DeadlineExceeded is the error returned by Context.Err when the context's -// deadline passes. -var DeadlineExceeded = errors.New("context deadline exceeded") - -// An emptyCtx is never canceled, has no values, and has no deadline. It is not -// struct{}, since vars of this type must have distinct addresses. -type emptyCtx int - -func (*emptyCtx) Deadline() (deadline time.Time, ok bool) { - return -} - -func (*emptyCtx) Done() <-chan struct{} { - return nil -} - -func (*emptyCtx) Err() error { - return nil -} - -func (*emptyCtx) Value(key interface{}) interface{} { - return nil -} - -func (e *emptyCtx) String() string { - switch e { - case background: - return "context.Background" - case todo: - return "context.TODO" - } - return "unknown empty Context" -} - -var ( - background = new(emptyCtx) - todo = new(emptyCtx) -) - // Background returns a non-nil, empty Context. It is never canceled, has no -// values, and has no deadline. It is typically used by the main function, +// values, and has no deadline. It is typically used by the main function, // initialization, and tests, and as the top-level Context for incoming // requests. func Background() Context { return background } -// TODO returns a non-nil, empty Context. Code should use context.TODO when +// TODO returns a non-nil, empty Context. Code should use context.TODO when // it's unclear which Context to use or it is not yet available (because the // surrounding function has not yet been extended to accept a Context // parameter). TODO is recognized by static analysis tools that determine @@ -196,252 +54,3 @@ func Background() Context { func TODO() Context { return todo } - -// A CancelFunc tells an operation to abandon its work. -// A CancelFunc does not wait for the work to stop. -// After the first call, subsequent calls to a CancelFunc do nothing. -type CancelFunc func() - -// WithCancel returns a copy of parent with a new Done channel. The returned -// context's Done channel is closed when the returned cancel function is called -// or when the parent context's Done channel is closed, whichever happens first. -// -// Canceling this context releases resources associated with it, so code should -// call cancel as soon as the operations running in this Context complete. -func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { - c := newCancelCtx(parent) - propagateCancel(parent, &c) - return &c, func() { c.cancel(true, Canceled) } -} - -// newCancelCtx returns an initialized cancelCtx. -func newCancelCtx(parent Context) cancelCtx { - return cancelCtx{ - Context: parent, - done: make(chan struct{}), - } -} - -// propagateCancel arranges for child to be canceled when parent is. 
-func propagateCancel(parent Context, child canceler) { - if parent.Done() == nil { - return // parent is never canceled - } - if p, ok := parentCancelCtx(parent); ok { - p.mu.Lock() - if p.err != nil { - // parent has already been canceled - child.cancel(false, p.err) - } else { - if p.children == nil { - p.children = make(map[canceler]bool) - } - p.children[child] = true - } - p.mu.Unlock() - } else { - go func() { - select { - case <-parent.Done(): - child.cancel(false, parent.Err()) - case <-child.Done(): - } - }() - } -} - -// parentCancelCtx follows a chain of parent references until it finds a -// *cancelCtx. This function understands how each of the concrete types in this -// package represents its parent. -func parentCancelCtx(parent Context) (*cancelCtx, bool) { - for { - switch c := parent.(type) { - case *cancelCtx: - return c, true - case *timerCtx: - return &c.cancelCtx, true - case *valueCtx: - parent = c.Context - default: - return nil, false - } - } -} - -// removeChild removes a context from its parent. -func removeChild(parent Context, child canceler) { - p, ok := parentCancelCtx(parent) - if !ok { - return - } - p.mu.Lock() - if p.children != nil { - delete(p.children, child) - } - p.mu.Unlock() -} - -// A canceler is a context type that can be canceled directly. The -// implementations are *cancelCtx and *timerCtx. -type canceler interface { - cancel(removeFromParent bool, err error) - Done() <-chan struct{} -} - -// A cancelCtx can be canceled. When canceled, it also cancels any children -// that implement canceler. -type cancelCtx struct { - Context - - done chan struct{} // closed by the first cancel call. - - mu sync.Mutex - children map[canceler]bool // set to nil by the first cancel call - err error // set to non-nil by the first cancel call -} - -func (c *cancelCtx) Done() <-chan struct{} { - return c.done -} - -func (c *cancelCtx) Err() error { - c.mu.Lock() - defer c.mu.Unlock() - return c.err -} - -func (c *cancelCtx) String() string { - return fmt.Sprintf("%v.WithCancel", c.Context) -} - -// cancel closes c.done, cancels each of c's children, and, if -// removeFromParent is true, removes c from its parent's children. -func (c *cancelCtx) cancel(removeFromParent bool, err error) { - if err == nil { - panic("context: internal error: missing cancel error") - } - c.mu.Lock() - if c.err != nil { - c.mu.Unlock() - return // already canceled - } - c.err = err - close(c.done) - for child := range c.children { - // NOTE: acquiring the child's lock while holding parent's lock. - child.cancel(false, err) - } - c.children = nil - c.mu.Unlock() - - if removeFromParent { - removeChild(c.Context, c) - } -} - -// WithDeadline returns a copy of the parent context with the deadline adjusted -// to be no later than d. If the parent's deadline is already earlier than d, -// WithDeadline(parent, d) is semantically equivalent to parent. The returned -// context's Done channel is closed when the deadline expires, when the returned -// cancel function is called, or when the parent context's Done channel is -// closed, whichever happens first. -// -// Canceling this context releases resources associated with it, so code should -// call cancel as soon as the operations running in this Context complete. -func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { - if cur, ok := parent.Deadline(); ok && cur.Before(deadline) { - // The current deadline is already sooner than the new one. 
- return WithCancel(parent) - } - c := &timerCtx{ - cancelCtx: newCancelCtx(parent), - deadline: deadline, - } - propagateCancel(parent, c) - d := deadline.Sub(time.Now()) - if d <= 0 { - c.cancel(true, DeadlineExceeded) // deadline has already passed - return c, func() { c.cancel(true, Canceled) } - } - c.mu.Lock() - defer c.mu.Unlock() - if c.err == nil { - c.timer = time.AfterFunc(d, func() { - c.cancel(true, DeadlineExceeded) - }) - } - return c, func() { c.cancel(true, Canceled) } -} - -// A timerCtx carries a timer and a deadline. It embeds a cancelCtx to -// implement Done and Err. It implements cancel by stopping its timer then -// delegating to cancelCtx.cancel. -type timerCtx struct { - cancelCtx - timer *time.Timer // Under cancelCtx.mu. - - deadline time.Time -} - -func (c *timerCtx) Deadline() (deadline time.Time, ok bool) { - return c.deadline, true -} - -func (c *timerCtx) String() string { - return fmt.Sprintf("%v.WithDeadline(%s [%s])", c.cancelCtx.Context, c.deadline, c.deadline.Sub(time.Now())) -} - -func (c *timerCtx) cancel(removeFromParent bool, err error) { - c.cancelCtx.cancel(false, err) - if removeFromParent { - // Remove this timerCtx from its parent cancelCtx's children. - removeChild(c.cancelCtx.Context, c) - } - c.mu.Lock() - if c.timer != nil { - c.timer.Stop() - c.timer = nil - } - c.mu.Unlock() -} - -// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). -// -// Canceling this context releases resources associated with it, so code should -// call cancel as soon as the operations running in this Context complete: -// -// func slowOperationWithTimeout(ctx context.Context) (Result, error) { -// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) -// defer cancel() // releases resources if slowOperation completes before timeout elapses -// return slowOperation(ctx) -// } -func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { - return WithDeadline(parent, time.Now().Add(timeout)) -} - -// WithValue returns a copy of parent in which the value associated with key is -// val. -// -// Use context Values only for request-scoped data that transits processes and -// APIs, not for passing optional parameters to functions. -func WithValue(parent Context, key interface{}, val interface{}) Context { - return &valueCtx{parent, key, val} -} - -// A valueCtx carries a key-value pair. It implements Value for that key and -// delegates all other calls to the embedded Context. -type valueCtx struct { - Context - key, val interface{} -} - -func (c *valueCtx) String() string { - return fmt.Sprintf("%v.WithValue(%#v, %#v)", c.Context, c.key, c.val) -} - -func (c *valueCtx) Value(key interface{}) interface{} { - if c.key == key { - return c.val - } - return c.Context.Value(key) -} diff --git a/vendor/golang.org/x/net/context/ctxhttp/cancelreq.go b/vendor/golang.org/x/net/context/ctxhttp/cancelreq.go deleted file mode 100644 index e3170e333..000000000 --- a/vendor/golang.org/x/net/context/ctxhttp/cancelreq.go +++ /dev/null @@ -1,19 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build go1.5 - -package ctxhttp - -import "net/http" - -func canceler(client *http.Client, req *http.Request) func() { - // TODO(djd): Respect any existing value of req.Cancel. 
- ch := make(chan struct{}) - req.Cancel = ch - - return func() { - close(ch) - } -} diff --git a/vendor/golang.org/x/net/context/ctxhttp/cancelreq_go14.go b/vendor/golang.org/x/net/context/ctxhttp/cancelreq_go14.go deleted file mode 100644 index 56bcbadb8..000000000 --- a/vendor/golang.org/x/net/context/ctxhttp/cancelreq_go14.go +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright 2015 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build !go1.5 - -package ctxhttp - -import "net/http" - -type requestCanceler interface { - CancelRequest(*http.Request) -} - -func canceler(client *http.Client, req *http.Request) func() { - rc, ok := client.Transport.(requestCanceler) - if !ok { - return func() {} - } - return func() { - rc.CancelRequest(req) - } -} diff --git a/vendor/golang.org/x/net/context/ctxhttp/ctxhttp.go b/vendor/golang.org/x/net/context/ctxhttp/ctxhttp.go index 62620d4eb..606cf1f97 100644 --- a/vendor/golang.org/x/net/context/ctxhttp/ctxhttp.go +++ b/vendor/golang.org/x/net/context/ctxhttp/ctxhttp.go @@ -1,7 +1,9 @@ -// Copyright 2015 The Go Authors. All rights reserved. +// Copyright 2016 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. +// +build go1.7 + // Package ctxhttp provides helper functions for performing context-aware HTTP requests. package ctxhttp // import "golang.org/x/net/context/ctxhttp" @@ -14,71 +16,28 @@ import ( "golang.org/x/net/context" ) -func nop() {} - -var ( - testHookContextDoneBeforeHeaders = nop - testHookDoReturned = nop - testHookDidBodyClose = nop -) - -// Do sends an HTTP request with the provided http.Client and returns an HTTP response. +// Do sends an HTTP request with the provided http.Client and returns +// an HTTP response. +// // If the client is nil, http.DefaultClient is used. -// If the context is canceled or times out, ctx.Err() will be returned. +// +// The provided ctx must be non-nil. If it is canceled or times out, +// ctx.Err() will be returned. func Do(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error) { if client == nil { client = http.DefaultClient } - - // Request cancelation changed in Go 1.5, see cancelreq.go and cancelreq_go14.go. - cancel := canceler(client, req) - - type responseAndError struct { - resp *http.Response - err error - } - result := make(chan responseAndError, 1) - - go func() { - resp, err := client.Do(req) - testHookDoReturned() - result <- responseAndError{resp, err} - }() - - var resp *http.Response - - select { - case <-ctx.Done(): - testHookContextDoneBeforeHeaders() - cancel() - // Clean up after the goroutine calling client.Do: - go func() { - if r := <-result; r.resp != nil { - testHookDidBodyClose() - r.resp.Body.Close() - } - }() - return nil, ctx.Err() - case r := <-result: - var err error - resp, err = r.resp, r.err - if err != nil { - return resp, err - } - } - - c := make(chan struct{}) - go func() { + resp, err := client.Do(req.WithContext(ctx)) + // If we got an error, and the context has been canceled, + // the context's error is probably more useful. + if err != nil { select { case <-ctx.Done(): - cancel() - case <-c: - // The response's Body is closed. + err = ctx.Err() + default: } - }() - resp.Body = ¬ifyingReader{resp.Body, c} - - return resp, nil + } + return resp, err } // Get issues a GET request via the Do function. 
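(Editor's note, a usage sketch for the go1.7+ build of this package, not part of the diff: the rewritten Do above defers to req.WithContext, so a caller typically pairs these helpers with a deadline from the context package. The URL here is a placeholder.)

```
package main

import (
	"fmt"
	"io/ioutil"
	"time"

	"golang.org/x/net/context"
	"golang.org/x/net/context/ctxhttp"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel() // release the timer if the request finishes early

	// A nil client means http.DefaultClient.
	resp, err := ctxhttp.Get(ctx, nil, "https://example.com/")
	if err != nil {
		// When the context caused the failure, ctx.Err() (e.g.
		// context.DeadlineExceeded) is what comes back.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("fetched %d bytes\n", len(body))
}
```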
@@ -113,28 +72,3 @@ func Post(ctx context.Context, client *http.Client, url string, bodyType string, func PostForm(ctx context.Context, client *http.Client, url string, data url.Values) (*http.Response, error) { return Post(ctx, client, url, "application/x-www-form-urlencoded", strings.NewReader(data.Encode())) } - -// notifyingReader is an io.ReadCloser that closes the notify channel after -// Close is called or a Read fails on the underlying ReadCloser. -type notifyingReader struct { - io.ReadCloser - notify chan<- struct{} -} - -func (r *notifyingReader) Read(p []byte) (int, error) { - n, err := r.ReadCloser.Read(p) - if err != nil && r.notify != nil { - close(r.notify) - r.notify = nil - } - return n, err -} - -func (r *notifyingReader) Close() error { - err := r.ReadCloser.Close() - if r.notify != nil { - close(r.notify) - r.notify = nil - } - return err -} diff --git a/vendor/golang.org/x/net/context/ctxhttp/ctxhttp_pre17.go b/vendor/golang.org/x/net/context/ctxhttp/ctxhttp_pre17.go new file mode 100644 index 000000000..926870cc2 --- /dev/null +++ b/vendor/golang.org/x/net/context/ctxhttp/ctxhttp_pre17.go @@ -0,0 +1,147 @@ +// Copyright 2015 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.7 + +package ctxhttp // import "golang.org/x/net/context/ctxhttp" + +import ( + "io" + "net/http" + "net/url" + "strings" + + "golang.org/x/net/context" +) + +func nop() {} + +var ( + testHookContextDoneBeforeHeaders = nop + testHookDoReturned = nop + testHookDidBodyClose = nop +) + +// Do sends an HTTP request with the provided http.Client and returns an HTTP response. +// If the client is nil, http.DefaultClient is used. +// If the context is canceled or times out, ctx.Err() will be returned. +func Do(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error) { + if client == nil { + client = http.DefaultClient + } + + // TODO(djd): Respect any existing value of req.Cancel. + cancel := make(chan struct{}) + req.Cancel = cancel + + type responseAndError struct { + resp *http.Response + err error + } + result := make(chan responseAndError, 1) + + // Make local copies of test hooks closed over by goroutines below. + // Prevents data races in tests. + testHookDoReturned := testHookDoReturned + testHookDidBodyClose := testHookDidBodyClose + + go func() { + resp, err := client.Do(req) + testHookDoReturned() + result <- responseAndError{resp, err} + }() + + var resp *http.Response + + select { + case <-ctx.Done(): + testHookContextDoneBeforeHeaders() + close(cancel) + // Clean up after the goroutine calling client.Do: + go func() { + if r := <-result; r.resp != nil { + testHookDidBodyClose() + r.resp.Body.Close() + } + }() + return nil, ctx.Err() + case r := <-result: + var err error + resp, err = r.resp, r.err + if err != nil { + return resp, err + } + } + + c := make(chan struct{}) + go func() { + select { + case <-ctx.Done(): + close(cancel) + case <-c: + // The response's Body is closed. + } + }() + resp.Body = ¬ifyingReader{resp.Body, c} + + return resp, nil +} + +// Get issues a GET request via the Do function. +func Get(ctx context.Context, client *http.Client, url string) (*http.Response, error) { + req, err := http.NewRequest("GET", url, nil) + if err != nil { + return nil, err + } + return Do(ctx, client, req) +} + +// Head issues a HEAD request via the Do function. 
+func Head(ctx context.Context, client *http.Client, url string) (*http.Response, error) { + req, err := http.NewRequest("HEAD", url, nil) + if err != nil { + return nil, err + } + return Do(ctx, client, req) +} + +// Post issues a POST request via the Do function. +func Post(ctx context.Context, client *http.Client, url string, bodyType string, body io.Reader) (*http.Response, error) { + req, err := http.NewRequest("POST", url, body) + if err != nil { + return nil, err + } + req.Header.Set("Content-Type", bodyType) + return Do(ctx, client, req) +} + +// PostForm issues a POST request via the Do function. +func PostForm(ctx context.Context, client *http.Client, url string, data url.Values) (*http.Response, error) { + return Post(ctx, client, url, "application/x-www-form-urlencoded", strings.NewReader(data.Encode())) +} + +// notifyingReader is an io.ReadCloser that closes the notify channel after +// Close is called or a Read fails on the underlying ReadCloser. +type notifyingReader struct { + io.ReadCloser + notify chan<- struct{} +} + +func (r *notifyingReader) Read(p []byte) (int, error) { + n, err := r.ReadCloser.Read(p) + if err != nil && r.notify != nil { + close(r.notify) + r.notify = nil + } + return n, err +} + +func (r *notifyingReader) Close() error { + err := r.ReadCloser.Close() + if r.notify != nil { + close(r.notify) + r.notify = nil + } + return err +} diff --git a/vendor/golang.org/x/net/context/go17.go b/vendor/golang.org/x/net/context/go17.go new file mode 100644 index 000000000..d20f52b7d --- /dev/null +++ b/vendor/golang.org/x/net/context/go17.go @@ -0,0 +1,72 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.7 + +package context + +import ( + "context" // standard library's context, as of Go 1.7 + "time" +) + +var ( + todo = context.TODO() + background = context.Background() +) + +// Canceled is the error returned by Context.Err when the context is canceled. +var Canceled = context.Canceled + +// DeadlineExceeded is the error returned by Context.Err when the context's +// deadline passes. +var DeadlineExceeded = context.DeadlineExceeded + +// WithCancel returns a copy of parent with a new Done channel. The returned +// context's Done channel is closed when the returned cancel function is called +// or when the parent context's Done channel is closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. +func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { + ctx, f := context.WithCancel(parent) + return ctx, CancelFunc(f) +} + +// WithDeadline returns a copy of the parent context with the deadline adjusted +// to be no later than d. If the parent's deadline is already earlier than d, +// WithDeadline(parent, d) is semantically equivalent to parent. The returned +// context's Done channel is closed when the deadline expires, when the returned +// cancel function is called, or when the parent context's Done channel is +// closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. 
+func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { + ctx, f := context.WithDeadline(parent, deadline) + return ctx, CancelFunc(f) +} + +// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete: +// +// func slowOperationWithTimeout(ctx context.Context) (Result, error) { +// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) +// defer cancel() // releases resources if slowOperation completes before timeout elapses +// return slowOperation(ctx) +// } +func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { + return WithDeadline(parent, time.Now().Add(timeout)) +} + +// WithValue returns a copy of parent in which the value associated with key is +// val. +// +// Use context Values only for request-scoped data that transits processes and +// APIs, not for passing optional parameters to functions. +func WithValue(parent Context, key interface{}, val interface{}) Context { + return context.WithValue(parent, key, val) +} diff --git a/vendor/golang.org/x/net/context/go19.go b/vendor/golang.org/x/net/context/go19.go new file mode 100644 index 000000000..d88bd1db1 --- /dev/null +++ b/vendor/golang.org/x/net/context/go19.go @@ -0,0 +1,20 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.9 + +package context + +import "context" // standard library's context, as of Go 1.7 + +// A Context carries a deadline, a cancelation signal, and other values across +// API boundaries. +// +// Context's methods may be called by multiple goroutines simultaneously. +type Context = context.Context + +// A CancelFunc tells an operation to abandon its work. +// A CancelFunc does not wait for the work to stop. +// After the first call, subsequent calls to a CancelFunc do nothing. +type CancelFunc = context.CancelFunc diff --git a/vendor/golang.org/x/net/context/pre_go17.go b/vendor/golang.org/x/net/context/pre_go17.go new file mode 100644 index 000000000..0f35592df --- /dev/null +++ b/vendor/golang.org/x/net/context/pre_go17.go @@ -0,0 +1,300 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.7 + +package context + +import ( + "errors" + "fmt" + "sync" + "time" +) + +// An emptyCtx is never canceled, has no values, and has no deadline. It is not +// struct{}, since vars of this type must have distinct addresses. +type emptyCtx int + +func (*emptyCtx) Deadline() (deadline time.Time, ok bool) { + return +} + +func (*emptyCtx) Done() <-chan struct{} { + return nil +} + +func (*emptyCtx) Err() error { + return nil +} + +func (*emptyCtx) Value(key interface{}) interface{} { + return nil +} + +func (e *emptyCtx) String() string { + switch e { + case background: + return "context.Background" + case todo: + return "context.TODO" + } + return "unknown empty Context" +} + +var ( + background = new(emptyCtx) + todo = new(emptyCtx) +) + +// Canceled is the error returned by Context.Err when the context is canceled. +var Canceled = errors.New("context canceled") + +// DeadlineExceeded is the error returned by Context.Err when the context's +// deadline passes. 
+var DeadlineExceeded = errors.New("context deadline exceeded") + +// WithCancel returns a copy of parent with a new Done channel. The returned +// context's Done channel is closed when the returned cancel function is called +// or when the parent context's Done channel is closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. +func WithCancel(parent Context) (ctx Context, cancel CancelFunc) { + c := newCancelCtx(parent) + propagateCancel(parent, c) + return c, func() { c.cancel(true, Canceled) } +} + +// newCancelCtx returns an initialized cancelCtx. +func newCancelCtx(parent Context) *cancelCtx { + return &cancelCtx{ + Context: parent, + done: make(chan struct{}), + } +} + +// propagateCancel arranges for child to be canceled when parent is. +func propagateCancel(parent Context, child canceler) { + if parent.Done() == nil { + return // parent is never canceled + } + if p, ok := parentCancelCtx(parent); ok { + p.mu.Lock() + if p.err != nil { + // parent has already been canceled + child.cancel(false, p.err) + } else { + if p.children == nil { + p.children = make(map[canceler]bool) + } + p.children[child] = true + } + p.mu.Unlock() + } else { + go func() { + select { + case <-parent.Done(): + child.cancel(false, parent.Err()) + case <-child.Done(): + } + }() + } +} + +// parentCancelCtx follows a chain of parent references until it finds a +// *cancelCtx. This function understands how each of the concrete types in this +// package represents its parent. +func parentCancelCtx(parent Context) (*cancelCtx, bool) { + for { + switch c := parent.(type) { + case *cancelCtx: + return c, true + case *timerCtx: + return c.cancelCtx, true + case *valueCtx: + parent = c.Context + default: + return nil, false + } + } +} + +// removeChild removes a context from its parent. +func removeChild(parent Context, child canceler) { + p, ok := parentCancelCtx(parent) + if !ok { + return + } + p.mu.Lock() + if p.children != nil { + delete(p.children, child) + } + p.mu.Unlock() +} + +// A canceler is a context type that can be canceled directly. The +// implementations are *cancelCtx and *timerCtx. +type canceler interface { + cancel(removeFromParent bool, err error) + Done() <-chan struct{} +} + +// A cancelCtx can be canceled. When canceled, it also cancels any children +// that implement canceler. +type cancelCtx struct { + Context + + done chan struct{} // closed by the first cancel call. + + mu sync.Mutex + children map[canceler]bool // set to nil by the first cancel call + err error // set to non-nil by the first cancel call +} + +func (c *cancelCtx) Done() <-chan struct{} { + return c.done +} + +func (c *cancelCtx) Err() error { + c.mu.Lock() + defer c.mu.Unlock() + return c.err +} + +func (c *cancelCtx) String() string { + return fmt.Sprintf("%v.WithCancel", c.Context) +} + +// cancel closes c.done, cancels each of c's children, and, if +// removeFromParent is true, removes c from its parent's children. +func (c *cancelCtx) cancel(removeFromParent bool, err error) { + if err == nil { + panic("context: internal error: missing cancel error") + } + c.mu.Lock() + if c.err != nil { + c.mu.Unlock() + return // already canceled + } + c.err = err + close(c.done) + for child := range c.children { + // NOTE: acquiring the child's lock while holding parent's lock. 
+ child.cancel(false, err) + } + c.children = nil + c.mu.Unlock() + + if removeFromParent { + removeChild(c.Context, c) + } +} + +// WithDeadline returns a copy of the parent context with the deadline adjusted +// to be no later than d. If the parent's deadline is already earlier than d, +// WithDeadline(parent, d) is semantically equivalent to parent. The returned +// context's Done channel is closed when the deadline expires, when the returned +// cancel function is called, or when the parent context's Done channel is +// closed, whichever happens first. +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete. +func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) { + if cur, ok := parent.Deadline(); ok && cur.Before(deadline) { + // The current deadline is already sooner than the new one. + return WithCancel(parent) + } + c := &timerCtx{ + cancelCtx: newCancelCtx(parent), + deadline: deadline, + } + propagateCancel(parent, c) + d := deadline.Sub(time.Now()) + if d <= 0 { + c.cancel(true, DeadlineExceeded) // deadline has already passed + return c, func() { c.cancel(true, Canceled) } + } + c.mu.Lock() + defer c.mu.Unlock() + if c.err == nil { + c.timer = time.AfterFunc(d, func() { + c.cancel(true, DeadlineExceeded) + }) + } + return c, func() { c.cancel(true, Canceled) } +} + +// A timerCtx carries a timer and a deadline. It embeds a cancelCtx to +// implement Done and Err. It implements cancel by stopping its timer then +// delegating to cancelCtx.cancel. +type timerCtx struct { + *cancelCtx + timer *time.Timer // Under cancelCtx.mu. + + deadline time.Time +} + +func (c *timerCtx) Deadline() (deadline time.Time, ok bool) { + return c.deadline, true +} + +func (c *timerCtx) String() string { + return fmt.Sprintf("%v.WithDeadline(%s [%s])", c.cancelCtx.Context, c.deadline, c.deadline.Sub(time.Now())) +} + +func (c *timerCtx) cancel(removeFromParent bool, err error) { + c.cancelCtx.cancel(false, err) + if removeFromParent { + // Remove this timerCtx from its parent cancelCtx's children. + removeChild(c.cancelCtx.Context, c) + } + c.mu.Lock() + if c.timer != nil { + c.timer.Stop() + c.timer = nil + } + c.mu.Unlock() +} + +// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)). +// +// Canceling this context releases resources associated with it, so code should +// call cancel as soon as the operations running in this Context complete: +// +// func slowOperationWithTimeout(ctx context.Context) (Result, error) { +// ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond) +// defer cancel() // releases resources if slowOperation completes before timeout elapses +// return slowOperation(ctx) +// } +func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) { + return WithDeadline(parent, time.Now().Add(timeout)) +} + +// WithValue returns a copy of parent in which the value associated with key is +// val. +// +// Use context Values only for request-scoped data that transits processes and +// APIs, not for passing optional parameters to functions. +func WithValue(parent Context, key interface{}, val interface{}) Context { + return &valueCtx{parent, key, val} +} + +// A valueCtx carries a key-value pair. It implements Value for that key and +// delegates all other calls to the embedded Context. 
+type valueCtx struct { + Context + key, val interface{} +} + +func (c *valueCtx) String() string { + return fmt.Sprintf("%v.WithValue(%#v, %#v)", c.Context, c.key, c.val) +} + +func (c *valueCtx) Value(key interface{}) interface{} { + if c.key == key { + return c.val + } + return c.Context.Value(key) +} diff --git a/vendor/golang.org/x/net/context/pre_go19.go b/vendor/golang.org/x/net/context/pre_go19.go new file mode 100644 index 000000000..b105f80be --- /dev/null +++ b/vendor/golang.org/x/net/context/pre_go19.go @@ -0,0 +1,109 @@ +// Copyright 2014 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.9 + +package context + +import "time" + +// A Context carries a deadline, a cancelation signal, and other values across +// API boundaries. +// +// Context's methods may be called by multiple goroutines simultaneously. +type Context interface { + // Deadline returns the time when work done on behalf of this context + // should be canceled. Deadline returns ok==false when no deadline is + // set. Successive calls to Deadline return the same results. + Deadline() (deadline time.Time, ok bool) + + // Done returns a channel that's closed when work done on behalf of this + // context should be canceled. Done may return nil if this context can + // never be canceled. Successive calls to Done return the same value. + // + // WithCancel arranges for Done to be closed when cancel is called; + // WithDeadline arranges for Done to be closed when the deadline + // expires; WithTimeout arranges for Done to be closed when the timeout + // elapses. + // + // Done is provided for use in select statements: + // + // // Stream generates values with DoSomething and sends them to out + // // until DoSomething returns an error or ctx.Done is closed. + // func Stream(ctx context.Context, out chan<- Value) error { + // for { + // v, err := DoSomething(ctx) + // if err != nil { + // return err + // } + // select { + // case <-ctx.Done(): + // return ctx.Err() + // case out <- v: + // } + // } + // } + // + // See http://blog.golang.org/pipelines for more examples of how to use + // a Done channel for cancelation. + Done() <-chan struct{} + + // Err returns a non-nil error value after Done is closed. Err returns + // Canceled if the context was canceled or DeadlineExceeded if the + // context's deadline passed. No other values for Err are defined. + // After Done is closed, successive calls to Err return the same value. + Err() error + + // Value returns the value associated with this context for key, or nil + // if no value is associated with key. Successive calls to Value with + // the same key returns the same result. + // + // Use context values only for request-scoped data that transits + // processes and API boundaries, not for passing optional parameters to + // functions. + // + // A key identifies a specific value in a Context. Functions that wish + // to store values in Context typically allocate a key in a global + // variable then use that key as the argument to context.WithValue and + // Context.Value. A key can be any type that supports equality; + // packages should define keys as an unexported type to avoid + // collisions. + // + // Packages that define a Context key should provide type-safe accessors + // for the values stores using that key: + // + // // Package user defines a User type that's stored in Contexts. 
+ // package user + // + // import "golang.org/x/net/context" + // + // // User is the type of value stored in the Contexts. + // type User struct {...} + // + // // key is an unexported type for keys defined in this package. + // // This prevents collisions with keys defined in other packages. + // type key int + // + // // userKey is the key for user.User values in Contexts. It is + // // unexported; clients use user.NewContext and user.FromContext + // // instead of using this key directly. + // var userKey key = 0 + // + // // NewContext returns a new Context that carries value u. + // func NewContext(ctx context.Context, u *User) context.Context { + // return context.WithValue(ctx, userKey, u) + // } + // + // // FromContext returns the User value stored in ctx, if any. + // func FromContext(ctx context.Context) (*User, bool) { + // u, ok := ctx.Value(userKey).(*User) + // return u, ok + // } + Value(key interface{}) interface{} +} + +// A CancelFunc tells an operation to abandon its work. +// A CancelFunc does not wait for the work to stop. +// After the first call, subsequent calls to a CancelFunc do nothing. +type CancelFunc func() diff --git a/vendor/golang.org/x/sync/LICENSE b/vendor/golang.org/x/sync/LICENSE new file mode 100644 index 000000000..6a66aea5e --- /dev/null +++ b/vendor/golang.org/x/sync/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2009 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/golang.org/x/sync/PATENTS b/vendor/golang.org/x/sync/PATENTS new file mode 100644 index 000000000..733099041 --- /dev/null +++ b/vendor/golang.org/x/sync/PATENTS @@ -0,0 +1,22 @@ +Additional IP Rights Grant (Patents) + +"This implementation" means the copyrightable works distributed by +Google as part of the Go project. 
+ +Google hereby grants to You a perpetual, worldwide, non-exclusive, +no-charge, royalty-free, irrevocable (except as stated in this section) +patent license to make, have made, use, offer to sell, sell, import, +transfer and otherwise run, modify and propagate the contents of this +implementation of Go, where such license applies only to those patent +claims, both currently owned or controlled by Google and acquired in +the future, licensable by Google that are necessarily infringed by this +implementation of Go. This grant does not include claims that would be +infringed only as a consequence of further modification of this +implementation. If you or your agent or exclusive licensee institute or +order or agree to the institution of patent litigation against any +entity (including a cross-claim or counterclaim in a lawsuit) alleging +that this implementation of Go or any code incorporated within this +implementation of Go constitutes direct or contributory patent +infringement, or inducement of patent infringement, then any patent +rights granted to you under this License for this implementation of Go +shall terminate as of the date such litigation is filed. diff --git a/vendor/golang.org/x/sync/errgroup/errgroup.go b/vendor/golang.org/x/sync/errgroup/errgroup.go new file mode 100644 index 000000000..533438d91 --- /dev/null +++ b/vendor/golang.org/x/sync/errgroup/errgroup.go @@ -0,0 +1,67 @@ +// Copyright 2016 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Package errgroup provides synchronization, error propagation, and Context +// cancelation for groups of goroutines working on subtasks of a common task. +package errgroup + +import ( + "sync" + + "golang.org/x/net/context" +) + +// A Group is a collection of goroutines working on subtasks that are part of +// the same overall task. +// +// A zero Group is valid and does not cancel on error. +type Group struct { + cancel func() + + wg sync.WaitGroup + + errOnce sync.Once + err error +} + +// WithContext returns a new Group and an associated Context derived from ctx. +// +// The derived Context is canceled the first time a function passed to Go +// returns a non-nil error or the first time Wait returns, whichever occurs +// first. +func WithContext(ctx context.Context) (*Group, context.Context) { + ctx, cancel := context.WithCancel(ctx) + return &Group{cancel: cancel}, ctx +} + +// Wait blocks until all function calls from the Go method have returned, then +// returns the first non-nil error (if any) from them. +func (g *Group) Wait() error { + g.wg.Wait() + if g.cancel != nil { + g.cancel() + } + return g.err +} + +// Go calls the given function in a new goroutine. +// +// The first call to return a non-nil error cancels the group; its error will be +// returned by Wait. +func (g *Group) Go(f func() error) { + g.wg.Add(1) + + go func() { + defer g.wg.Done() + + if err := f(); err != nil { + g.errOnce.Do(func() { + g.err = err + if g.cancel != nil { + g.cancel() + } + }) + } + }() +} diff --git a/vendor/golang.org/x/sys/unix/affinity_linux.go b/vendor/golang.org/x/sys/unix/affinity_linux.go new file mode 100644 index 000000000..72afe3338 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/affinity_linux.go @@ -0,0 +1,124 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
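// --- Editor's note: illustrative sketch, not part of the vendored diff ---
// How the errgroup package vendored above is typically used: the first
// non-nil error returned from a Go callback cancels the derived context and
// is reported by Wait. The URLs below are hypothetical.
package main

import (
	"fmt"

	"golang.org/x/net/context"
	"golang.org/x/net/context/ctxhttp"
	"golang.org/x/sync/errgroup"
)

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	for _, url := range []string{"https://example.com/a", "https://example.com/b"} {
		url := url // capture the loop variable for the goroutine
		g.Go(func() error {
			resp, err := ctxhttp.Get(ctx, nil, url)
			if err != nil {
				return err // the first failure cancels ctx for the whole group
			}
			return resp.Body.Close()
		})
	}
	// Wait blocks until every goroutine has returned.
	if err := g.Wait(); err != nil {
		fmt.Println("fetch failed:", err)
	}
}
// --- end editor's note ---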
+ +// CPU affinity functions + +package unix + +import ( + "unsafe" +) + +const cpuSetSize = _CPU_SETSIZE / _NCPUBITS + +// CPUSet represents a CPU affinity mask. +type CPUSet [cpuSetSize]cpuMask + +func schedAffinity(trap uintptr, pid int, set *CPUSet) error { + _, _, e := RawSyscall(trap, uintptr(pid), uintptr(unsafe.Sizeof(*set)), uintptr(unsafe.Pointer(set))) + if e != 0 { + return errnoErr(e) + } + return nil +} + +// SchedGetaffinity gets the CPU affinity mask of the thread specified by pid. +// If pid is 0 the calling thread is used. +func SchedGetaffinity(pid int, set *CPUSet) error { + return schedAffinity(SYS_SCHED_GETAFFINITY, pid, set) +} + +// SchedSetaffinity sets the CPU affinity mask of the thread specified by pid. +// If pid is 0 the calling thread is used. +func SchedSetaffinity(pid int, set *CPUSet) error { + return schedAffinity(SYS_SCHED_SETAFFINITY, pid, set) +} + +// Zero clears the set s, so that it contains no CPUs. +func (s *CPUSet) Zero() { + for i := range s { + s[i] = 0 + } +} + +func cpuBitsIndex(cpu int) int { + return cpu / _NCPUBITS +} + +func cpuBitsMask(cpu int) cpuMask { + return cpuMask(1 << (uint(cpu) % _NCPUBITS)) +} + +// Set adds cpu to the set s. +func (s *CPUSet) Set(cpu int) { + i := cpuBitsIndex(cpu) + if i < len(s) { + s[i] |= cpuBitsMask(cpu) + } +} + +// Clear removes cpu from the set s. +func (s *CPUSet) Clear(cpu int) { + i := cpuBitsIndex(cpu) + if i < len(s) { + s[i] &^= cpuBitsMask(cpu) + } +} + +// IsSet reports whether cpu is in the set s. +func (s *CPUSet) IsSet(cpu int) bool { + i := cpuBitsIndex(cpu) + if i < len(s) { + return s[i]&cpuBitsMask(cpu) != 0 + } + return false +} + +// Count returns the number of CPUs in the set s. +func (s *CPUSet) Count() int { + c := 0 + for _, b := range s { + c += onesCount64(uint64(b)) + } + return c +} + +// onesCount64 is a copy of Go 1.9's math/bits.OnesCount64. +// Once this package can require Go 1.9, we can delete this +// and update the caller to use bits.OnesCount64. +func onesCount64(x uint64) int { + const m0 = 0x5555555555555555 // 01010101 ... + const m1 = 0x3333333333333333 // 00110011 ... + const m2 = 0x0f0f0f0f0f0f0f0f // 00001111 ... + const m3 = 0x00ff00ff00ff00ff // etc. + const m4 = 0x0000ffff0000ffff + + // Implementation: Parallel summing of adjacent bits. + // See "Hacker's Delight", Chap. 5: Counting Bits. + // The following pattern shows the general approach: + // + // x = x>>1&(m0&m) + x&(m0&m) + // x = x>>2&(m1&m) + x&(m1&m) + // x = x>>4&(m2&m) + x&(m2&m) + // x = x>>8&(m3&m) + x&(m3&m) + // x = x>>16&(m4&m) + x&(m4&m) + // x = x>>32&(m5&m) + x&(m5&m) + // return int(x) + // + // Masking (& operations) can be left away when there's no + // danger that a field's sum will carry over into the next + // field: Since the result cannot be > 64, 8 bits is enough + // and we can ignore the masks for the shifts by 8 and up. + // Per "Hacker's Delight", the first line can be simplified + // more, but it saves at best one instruction, so we leave + // it alone for clarity. 
+ const m = 1<<64 - 1 + x = x>>1&(m0&m) + x&(m0&m) + x = x>>2&(m1&m) + x&(m1&m) + x = (x>>4 + x) & (m2 & m) + x += x >> 8 + x += x >> 16 + x += x >> 32 + return int(x) & (1<<7 - 1) +} diff --git a/vendor/golang.org/x/sys/unix/asm_linux_386.s b/vendor/golang.org/x/sys/unix/asm_linux_386.s index 4db290932..448bebbb5 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_386.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_386.s @@ -10,21 +10,51 @@ // System calls for 386, Linux // +// See ../runtime/sys_linux_386.s for the reason why we always use int 0x80 +// instead of the glibc-specific "CALL 0x10(GS)". +#define INVOKE_SYSCALL INT $0x80 + // Just jump to package syscall's implementation for all these functions. // The runtime may know about them. -TEXT ·Syscall(SB),NOSPLIT,$0-28 +TEXT ·Syscall(SB),NOSPLIT,$0-28 JMP syscall·Syscall(SB) -TEXT ·Syscall6(SB),NOSPLIT,$0-40 +TEXT ·Syscall6(SB),NOSPLIT,$0-40 JMP syscall·Syscall6(SB) +TEXT ·SyscallNoError(SB),NOSPLIT,$0-24 + CALL runtime·entersyscall(SB) + MOVL trap+0(FP), AX // syscall entry + MOVL a1+4(FP), BX + MOVL a2+8(FP), CX + MOVL a3+12(FP), DX + MOVL $0, SI + MOVL $0, DI + INVOKE_SYSCALL + MOVL AX, r1+16(FP) + MOVL DX, r2+20(FP) + CALL runtime·exitsyscall(SB) + RET + TEXT ·RawSyscall(SB),NOSPLIT,$0-28 JMP syscall·RawSyscall(SB) -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 +TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 JMP syscall·RawSyscall6(SB) +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-24 + MOVL trap+0(FP), AX // syscall entry + MOVL a1+4(FP), BX + MOVL a2+8(FP), CX + MOVL a3+12(FP), DX + MOVL $0, SI + MOVL $0, DI + INVOKE_SYSCALL + MOVL AX, r1+16(FP) + MOVL DX, r2+20(FP) + RET + TEXT ·socketcall(SB),NOSPLIT,$0-36 JMP syscall·socketcall(SB) diff --git a/vendor/golang.org/x/sys/unix/asm_linux_amd64.s b/vendor/golang.org/x/sys/unix/asm_linux_amd64.s index 44e25c62f..c6468a958 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_amd64.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_amd64.s @@ -13,17 +13,45 @@ // Just jump to package syscall's implementation for all these functions. // The runtime may know about them. -TEXT ·Syscall(SB),NOSPLIT,$0-56 +TEXT ·Syscall(SB),NOSPLIT,$0-56 JMP syscall·Syscall(SB) TEXT ·Syscall6(SB),NOSPLIT,$0-80 JMP syscall·Syscall6(SB) +TEXT ·SyscallNoError(SB),NOSPLIT,$0-48 + CALL runtime·entersyscall(SB) + MOVQ a1+8(FP), DI + MOVQ a2+16(FP), SI + MOVQ a3+24(FP), DX + MOVQ $0, R10 + MOVQ $0, R8 + MOVQ $0, R9 + MOVQ trap+0(FP), AX // syscall entry + SYSCALL + MOVQ AX, r1+32(FP) + MOVQ DX, r2+40(FP) + CALL runtime·exitsyscall(SB) + RET + TEXT ·RawSyscall(SB),NOSPLIT,$0-56 JMP syscall·RawSyscall(SB) TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 JMP syscall·RawSyscall6(SB) +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-48 + MOVQ a1+8(FP), DI + MOVQ a2+16(FP), SI + MOVQ a3+24(FP), DX + MOVQ $0, R10 + MOVQ $0, R8 + MOVQ $0, R9 + MOVQ trap+0(FP), AX // syscall entry + SYSCALL + MOVQ AX, r1+32(FP) + MOVQ DX, r2+40(FP) + RET + TEXT ·gettimeofday(SB),NOSPLIT,$0-16 JMP syscall·gettimeofday(SB) diff --git a/vendor/golang.org/x/sys/unix/asm_linux_arm.s b/vendor/golang.org/x/sys/unix/asm_linux_arm.s index cf0b57465..cf0f3575c 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_arm.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_arm.s @@ -13,17 +13,44 @@ // Just jump to package syscall's implementation for all these functions. // The runtime may know about them. 
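// --- Editor's note: illustrative Go sketch, not part of the vendored diff ---
// The CPUSet / SchedSetaffinity API added in affinity_linux.go above pins a
// thread to a subset of CPUs; pid 0 means the calling thread, so the goroutine
// is locked to its OS thread first. Linux-only.
package main

import (
	"fmt"
	"runtime"

	"golang.org/x/sys/unix"
)

func main() {
	runtime.LockOSThread() // affinity is per OS thread
	defer runtime.UnlockOSThread()

	var set unix.CPUSet
	set.Zero()
	set.Set(0) // allow CPU 0 only
	if err := unix.SchedSetaffinity(0, &set); err != nil {
		fmt.Println("SchedSetaffinity:", err)
		return
	}

	var got unix.CPUSet
	if err := unix.SchedGetaffinity(0, &got); err == nil {
		fmt.Println("CPUs in mask:", got.Count(), "CPU 0 set:", got.IsSet(0))
	}
}
// --- end editor's note ---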
-TEXT ·Syscall(SB),NOSPLIT,$0-28 +TEXT ·Syscall(SB),NOSPLIT,$0-28 B syscall·Syscall(SB) -TEXT ·Syscall6(SB),NOSPLIT,$0-40 +TEXT ·Syscall6(SB),NOSPLIT,$0-40 B syscall·Syscall6(SB) +TEXT ·SyscallNoError(SB),NOSPLIT,$0-24 + BL runtime·entersyscall(SB) + MOVW trap+0(FP), R7 + MOVW a1+4(FP), R0 + MOVW a2+8(FP), R1 + MOVW a3+12(FP), R2 + MOVW $0, R3 + MOVW $0, R4 + MOVW $0, R5 + SWI $0 + MOVW R0, r1+16(FP) + MOVW $0, R0 + MOVW R0, r2+20(FP) + BL runtime·exitsyscall(SB) + RET + TEXT ·RawSyscall(SB),NOSPLIT,$0-28 B syscall·RawSyscall(SB) -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 +TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 B syscall·RawSyscall6(SB) -TEXT ·seek(SB),NOSPLIT,$0-32 +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-24 + MOVW trap+0(FP), R7 // syscall entry + MOVW a1+4(FP), R0 + MOVW a2+8(FP), R1 + MOVW a3+12(FP), R2 + SWI $0 + MOVW R0, r1+16(FP) + MOVW $0, R0 + MOVW R0, r2+20(FP) + RET + +TEXT ·seek(SB),NOSPLIT,$0-28 B syscall·seek(SB) diff --git a/vendor/golang.org/x/sys/unix/asm_linux_arm64.s b/vendor/golang.org/x/sys/unix/asm_linux_arm64.s index 4be9bfede..afe6fdf6b 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_arm64.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_arm64.s @@ -11,14 +11,42 @@ // Just jump to package syscall's implementation for all these functions. // The runtime may know about them. -TEXT ·Syscall(SB),NOSPLIT,$0-56 +TEXT ·Syscall(SB),NOSPLIT,$0-56 B syscall·Syscall(SB) TEXT ·Syscall6(SB),NOSPLIT,$0-80 B syscall·Syscall6(SB) +TEXT ·SyscallNoError(SB),NOSPLIT,$0-48 + BL runtime·entersyscall(SB) + MOVD a1+8(FP), R0 + MOVD a2+16(FP), R1 + MOVD a3+24(FP), R2 + MOVD $0, R3 + MOVD $0, R4 + MOVD $0, R5 + MOVD trap+0(FP), R8 // syscall entry + SVC + MOVD R0, r1+32(FP) // r1 + MOVD R1, r2+40(FP) // r2 + BL runtime·exitsyscall(SB) + RET + TEXT ·RawSyscall(SB),NOSPLIT,$0-56 B syscall·RawSyscall(SB) TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 B syscall·RawSyscall6(SB) + +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-48 + MOVD a1+8(FP), R0 + MOVD a2+16(FP), R1 + MOVD a3+24(FP), R2 + MOVD $0, R3 + MOVD $0, R4 + MOVD $0, R5 + MOVD trap+0(FP), R8 // syscall entry + SVC + MOVD R0, r1+32(FP) + MOVD R1, r2+40(FP) + RET diff --git a/vendor/golang.org/x/sys/unix/asm_linux_mips64x.s b/vendor/golang.org/x/sys/unix/asm_linux_mips64x.s index 724e580c4..ab9d63831 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_mips64x.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_mips64x.s @@ -15,14 +15,42 @@ // Just jump to package syscall's implementation for all these functions. // The runtime may know about them. 
-TEXT ·Syscall(SB),NOSPLIT,$0-56 +TEXT ·Syscall(SB),NOSPLIT,$0-56 JMP syscall·Syscall(SB) -TEXT ·Syscall6(SB),NOSPLIT,$0-80 +TEXT ·Syscall6(SB),NOSPLIT,$0-80 JMP syscall·Syscall6(SB) -TEXT ·RawSyscall(SB),NOSPLIT,$0-56 +TEXT ·SyscallNoError(SB),NOSPLIT,$0-48 + JAL runtime·entersyscall(SB) + MOVV a1+8(FP), R4 + MOVV a2+16(FP), R5 + MOVV a3+24(FP), R6 + MOVV R0, R7 + MOVV R0, R8 + MOVV R0, R9 + MOVV trap+0(FP), R2 // syscall entry + SYSCALL + MOVV R2, r1+32(FP) + MOVV R3, r2+40(FP) + JAL runtime·exitsyscall(SB) + RET + +TEXT ·RawSyscall(SB),NOSPLIT,$0-56 JMP syscall·RawSyscall(SB) -TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 +TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 JMP syscall·RawSyscall6(SB) + +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-48 + MOVV a1+8(FP), R4 + MOVV a2+16(FP), R5 + MOVV a3+24(FP), R6 + MOVV R0, R7 + MOVV R0, R8 + MOVV R0, R9 + MOVV trap+0(FP), R2 // syscall entry + SYSCALL + MOVV R2, r1+32(FP) + MOVV R3, r2+40(FP) + RET diff --git a/vendor/golang.org/x/sys/unix/asm_linux_mipsx.s b/vendor/golang.org/x/sys/unix/asm_linux_mipsx.s index 2ea425755..99e539904 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_mipsx.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_mipsx.s @@ -15,17 +15,40 @@ // Just jump to package syscall's implementation for all these functions. // The runtime may know about them. -TEXT ·Syscall(SB),NOSPLIT,$0-28 +TEXT ·Syscall(SB),NOSPLIT,$0-28 JMP syscall·Syscall(SB) -TEXT ·Syscall6(SB),NOSPLIT,$0-40 +TEXT ·Syscall6(SB),NOSPLIT,$0-40 JMP syscall·Syscall6(SB) -TEXT ·Syscall9(SB),NOSPLIT,$0-52 +TEXT ·Syscall9(SB),NOSPLIT,$0-52 JMP syscall·Syscall9(SB) -TEXT ·RawSyscall(SB),NOSPLIT,$0-28 +TEXT ·SyscallNoError(SB),NOSPLIT,$0-24 + JAL runtime·entersyscall(SB) + MOVW a1+4(FP), R4 + MOVW a2+8(FP), R5 + MOVW a3+12(FP), R6 + MOVW R0, R7 + MOVW trap+0(FP), R2 // syscall entry + SYSCALL + MOVW R2, r1+16(FP) // r1 + MOVW R3, r2+20(FP) // r2 + JAL runtime·exitsyscall(SB) + RET + +TEXT ·RawSyscall(SB),NOSPLIT,$0-28 JMP syscall·RawSyscall(SB) -TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 +TEXT ·RawSyscall6(SB),NOSPLIT,$0-40 JMP syscall·RawSyscall6(SB) + +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-24 + MOVW a1+4(FP), R4 + MOVW a2+8(FP), R5 + MOVW a3+12(FP), R6 + MOVW trap+0(FP), R2 // syscall entry + SYSCALL + MOVW R2, r1+16(FP) + MOVW R3, r2+20(FP) + RET diff --git a/vendor/golang.org/x/sys/unix/asm_linux_ppc64x.s b/vendor/golang.org/x/sys/unix/asm_linux_ppc64x.s index 8d231feb4..649e58714 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_ppc64x.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_ppc64x.s @@ -15,14 +15,42 @@ // Just jump to package syscall's implementation for all these functions. // The runtime may know about them. 
-TEXT ·Syscall(SB),NOSPLIT,$0-56 +TEXT ·Syscall(SB),NOSPLIT,$0-56 BR syscall·Syscall(SB) TEXT ·Syscall6(SB),NOSPLIT,$0-80 BR syscall·Syscall6(SB) +TEXT ·SyscallNoError(SB),NOSPLIT,$0-48 + BL runtime·entersyscall(SB) + MOVD a1+8(FP), R3 + MOVD a2+16(FP), R4 + MOVD a3+24(FP), R5 + MOVD R0, R6 + MOVD R0, R7 + MOVD R0, R8 + MOVD trap+0(FP), R9 // syscall entry + SYSCALL R9 + MOVD R3, r1+32(FP) + MOVD R4, r2+40(FP) + BL runtime·exitsyscall(SB) + RET + TEXT ·RawSyscall(SB),NOSPLIT,$0-56 BR syscall·RawSyscall(SB) TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 BR syscall·RawSyscall6(SB) + +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-48 + MOVD a1+8(FP), R3 + MOVD a2+16(FP), R4 + MOVD a3+24(FP), R5 + MOVD R0, R6 + MOVD R0, R7 + MOVD R0, R8 + MOVD trap+0(FP), R9 // syscall entry + SYSCALL R9 + MOVD R3, r1+32(FP) + MOVD R4, r2+40(FP) + RET diff --git a/vendor/golang.org/x/sys/unix/asm_linux_s390x.s b/vendor/golang.org/x/sys/unix/asm_linux_s390x.s index 11889859f..a5a863c6b 100644 --- a/vendor/golang.org/x/sys/unix/asm_linux_s390x.s +++ b/vendor/golang.org/x/sys/unix/asm_linux_s390x.s @@ -21,8 +21,36 @@ TEXT ·Syscall(SB),NOSPLIT,$0-56 TEXT ·Syscall6(SB),NOSPLIT,$0-80 BR syscall·Syscall6(SB) +TEXT ·SyscallNoError(SB),NOSPLIT,$0-48 + BL runtime·entersyscall(SB) + MOVD a1+8(FP), R2 + MOVD a2+16(FP), R3 + MOVD a3+24(FP), R4 + MOVD $0, R5 + MOVD $0, R6 + MOVD $0, R7 + MOVD trap+0(FP), R1 // syscall entry + SYSCALL + MOVD R2, r1+32(FP) + MOVD R3, r2+40(FP) + BL runtime·exitsyscall(SB) + RET + TEXT ·RawSyscall(SB),NOSPLIT,$0-56 BR syscall·RawSyscall(SB) TEXT ·RawSyscall6(SB),NOSPLIT,$0-80 BR syscall·RawSyscall6(SB) + +TEXT ·RawSyscallNoError(SB),NOSPLIT,$0-48 + MOVD a1+8(FP), R2 + MOVD a2+16(FP), R3 + MOVD a3+24(FP), R4 + MOVD $0, R5 + MOVD $0, R6 + MOVD $0, R7 + MOVD trap+0(FP), R1 // syscall entry + SYSCALL + MOVD R2, r1+32(FP) + MOVD R3, r2+40(FP) + RET diff --git a/vendor/golang.org/x/sys/unix/dev_darwin.go b/vendor/golang.org/x/sys/unix/dev_darwin.go new file mode 100644 index 000000000..8d1dc0fa3 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/dev_darwin.go @@ -0,0 +1,24 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Functions to access/create device major and minor numbers matching the +// encoding used in Darwin's sys/types.h header. + +package unix + +// Major returns the major component of a Darwin device number. +func Major(dev uint64) uint32 { + return uint32((dev >> 24) & 0xff) +} + +// Minor returns the minor component of a Darwin device number. +func Minor(dev uint64) uint32 { + return uint32(dev & 0xffffff) +} + +// Mkdev returns a Darwin device number generated from the given major and minor +// components. +func Mkdev(major, minor uint32) uint64 { + return (uint64(major) << 24) | uint64(minor) +} diff --git a/vendor/golang.org/x/sys/unix/dev_dragonfly.go b/vendor/golang.org/x/sys/unix/dev_dragonfly.go new file mode 100644 index 000000000..8502f202c --- /dev/null +++ b/vendor/golang.org/x/sys/unix/dev_dragonfly.go @@ -0,0 +1,30 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Functions to access/create device major and minor numbers matching the +// encoding used in Dragonfly's sys/types.h header. 
+// +// The information below is extracted and adapted from sys/types.h: +// +// Minor gives a cookie instead of an index since in order to avoid changing the +// meanings of bits 0-15 or wasting time and space shifting bits 16-31 for +// devices that don't use them. + +package unix + +// Major returns the major component of a DragonFlyBSD device number. +func Major(dev uint64) uint32 { + return uint32((dev >> 8) & 0xff) +} + +// Minor returns the minor component of a DragonFlyBSD device number. +func Minor(dev uint64) uint32 { + return uint32(dev & 0xffff00ff) +} + +// Mkdev returns a DragonFlyBSD device number generated from the given major and +// minor components. +func Mkdev(major, minor uint32) uint64 { + return (uint64(major) << 8) | uint64(minor) +} diff --git a/vendor/golang.org/x/sys/unix/dev_freebsd.go b/vendor/golang.org/x/sys/unix/dev_freebsd.go new file mode 100644 index 000000000..eba3b4bd3 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/dev_freebsd.go @@ -0,0 +1,30 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Functions to access/create device major and minor numbers matching the +// encoding used in FreeBSD's sys/types.h header. +// +// The information below is extracted and adapted from sys/types.h: +// +// Minor gives a cookie instead of an index since in order to avoid changing the +// meanings of bits 0-15 or wasting time and space shifting bits 16-31 for +// devices that don't use them. + +package unix + +// Major returns the major component of a FreeBSD device number. +func Major(dev uint64) uint32 { + return uint32((dev >> 8) & 0xff) +} + +// Minor returns the minor component of a FreeBSD device number. +func Minor(dev uint64) uint32 { + return uint32(dev & 0xffff00ff) +} + +// Mkdev returns a FreeBSD device number generated from the given major and +// minor components. +func Mkdev(major, minor uint32) uint64 { + return (uint64(major) << 8) | uint64(minor) +} diff --git a/vendor/golang.org/x/sys/unix/dev_linux.go b/vendor/golang.org/x/sys/unix/dev_linux.go index c902c39e8..d165d6f30 100644 --- a/vendor/golang.org/x/sys/unix/dev_linux.go +++ b/vendor/golang.org/x/sys/unix/dev_linux.go @@ -34,9 +34,9 @@ func Minor(dev uint64) uint32 { // Mkdev returns a Linux device number generated from the given major and minor // components. func Mkdev(major, minor uint32) uint64 { - dev := uint64((major & 0x00000fff) << 8) - dev |= uint64((major & 0xfffff000) << 32) - dev |= uint64((minor & 0x000000ff) << 0) - dev |= uint64((minor & 0xffffff00) << 12) + dev := (uint64(major) & 0x00000fff) << 8 + dev |= (uint64(major) & 0xfffff000) << 32 + dev |= (uint64(minor) & 0x000000ff) << 0 + dev |= (uint64(minor) & 0xffffff00) << 12 return dev } diff --git a/vendor/golang.org/x/sys/unix/dev_netbsd.go b/vendor/golang.org/x/sys/unix/dev_netbsd.go new file mode 100644 index 000000000..b4a203d0c --- /dev/null +++ b/vendor/golang.org/x/sys/unix/dev_netbsd.go @@ -0,0 +1,29 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Functions to access/create device major and minor numbers matching the +// encoding used in NetBSD's sys/types.h header. + +package unix + +// Major returns the major component of a NetBSD device number. 
+func Major(dev uint64) uint32 { + return uint32((dev & 0x000fff00) >> 8) +} + +// Minor returns the minor component of a NetBSD device number. +func Minor(dev uint64) uint32 { + minor := uint32((dev & 0x000000ff) >> 0) + minor |= uint32((dev & 0xfff00000) >> 12) + return minor +} + +// Mkdev returns a NetBSD device number generated from the given major and minor +// components. +func Mkdev(major, minor uint32) uint64 { + dev := (uint64(major) << 8) & 0x000fff00 + dev |= (uint64(minor) << 12) & 0xfff00000 + dev |= (uint64(minor) << 0) & 0x000000ff + return dev +} diff --git a/vendor/golang.org/x/sys/unix/dev_openbsd.go b/vendor/golang.org/x/sys/unix/dev_openbsd.go new file mode 100644 index 000000000..f3430c42f --- /dev/null +++ b/vendor/golang.org/x/sys/unix/dev_openbsd.go @@ -0,0 +1,29 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Functions to access/create device major and minor numbers matching the +// encoding used in OpenBSD's sys/types.h header. + +package unix + +// Major returns the major component of an OpenBSD device number. +func Major(dev uint64) uint32 { + return uint32((dev & 0x0000ff00) >> 8) +} + +// Minor returns the minor component of an OpenBSD device number. +func Minor(dev uint64) uint32 { + minor := uint32((dev & 0x000000ff) >> 0) + minor |= uint32((dev & 0xffff0000) >> 8) + return minor +} + +// Mkdev returns an OpenBSD device number generated from the given major and minor +// components. +func Mkdev(major, minor uint32) uint64 { + dev := (uint64(major) << 8) & 0x0000ff00 + dev |= (uint64(minor) << 8) & 0xffff0000 + dev |= (uint64(minor) << 0) & 0x000000ff + return dev +} diff --git a/vendor/golang.org/x/sys/unix/dirent.go b/vendor/golang.org/x/sys/unix/dirent.go index bd475812b..95fd35317 100644 --- a/vendor/golang.org/x/sys/unix/dirent.go +++ b/vendor/golang.org/x/sys/unix/dirent.go @@ -6,97 +6,12 @@ package unix -import "unsafe" - -// readInt returns the size-bytes unsigned integer in native byte order at offset off. 
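// --- Editor's note: illustrative sketch, not part of the vendored diff ---
// The dev_linux.go change above converts to uint64 before shifting; the old
// code shifted the 32-bit inputs first, so Linux major numbers wider than
// 12 bits lost their upper bits. A round-trip check with arbitrary values:
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	major, minor := uint32(0x12345), uint32(0x89abc) // 17-bit major, 20-bit minor
	dev := unix.Mkdev(major, minor)
	// With the fix, Major and Minor recover the original components.
	fmt.Printf("dev=%#x major=%#x minor=%#x\n", dev, unix.Major(dev), unix.Minor(dev))
}
// --- end editor's note ---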
-func readInt(b []byte, off, size uintptr) (u uint64, ok bool) { - if len(b) < int(off+size) { - return 0, false - } - if isBigEndian { - return readIntBE(b[off:], size), true - } - return readIntLE(b[off:], size), true -} - -func readIntBE(b []byte, size uintptr) uint64 { - switch size { - case 1: - return uint64(b[0]) - case 2: - _ = b[1] // bounds check hint to compiler; see golang.org/issue/14808 - return uint64(b[1]) | uint64(b[0])<<8 - case 4: - _ = b[3] // bounds check hint to compiler; see golang.org/issue/14808 - return uint64(b[3]) | uint64(b[2])<<8 | uint64(b[1])<<16 | uint64(b[0])<<24 - case 8: - _ = b[7] // bounds check hint to compiler; see golang.org/issue/14808 - return uint64(b[7]) | uint64(b[6])<<8 | uint64(b[5])<<16 | uint64(b[4])<<24 | - uint64(b[3])<<32 | uint64(b[2])<<40 | uint64(b[1])<<48 | uint64(b[0])<<56 - default: - panic("syscall: readInt with unsupported size") - } -} - -func readIntLE(b []byte, size uintptr) uint64 { - switch size { - case 1: - return uint64(b[0]) - case 2: - _ = b[1] // bounds check hint to compiler; see golang.org/issue/14808 - return uint64(b[0]) | uint64(b[1])<<8 - case 4: - _ = b[3] // bounds check hint to compiler; see golang.org/issue/14808 - return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 - case 8: - _ = b[7] // bounds check hint to compiler; see golang.org/issue/14808 - return uint64(b[0]) | uint64(b[1])<<8 | uint64(b[2])<<16 | uint64(b[3])<<24 | - uint64(b[4])<<32 | uint64(b[5])<<40 | uint64(b[6])<<48 | uint64(b[7])<<56 - default: - panic("syscall: readInt with unsupported size") - } -} +import "syscall" // ParseDirent parses up to max directory entries in buf, // appending the names to names. It returns the number of // bytes consumed from buf, the number of entries added // to names, and the new names slice. func ParseDirent(buf []byte, max int, names []string) (consumed int, count int, newnames []string) { - origlen := len(buf) - count = 0 - for max != 0 && len(buf) > 0 { - reclen, ok := direntReclen(buf) - if !ok || reclen > uint64(len(buf)) { - return origlen, count, names - } - rec := buf[:reclen] - buf = buf[reclen:] - ino, ok := direntIno(rec) - if !ok { - break - } - if ino == 0 { // File absent in directory. - continue - } - const namoff = uint64(unsafe.Offsetof(Dirent{}.Name)) - namlen, ok := direntNamlen(rec) - if !ok || namoff+namlen > uint64(len(rec)) { - break - } - name := rec[namoff : namoff+namlen] - for i, c := range name { - if c == 0 { - name = name[:i] - break - } - } - // Check for useless names before allocating a string. - if string(name) == "." || string(name) == ".." { - continue - } - max-- - count++ - names = append(names, string(name)) - } - return origlen - len(buf), count, names + return syscall.ParseDirent(buf, max, names) } diff --git a/vendor/golang.org/x/sys/unix/env_unix.go b/vendor/golang.org/x/sys/unix/env_unix.go index 45e281a04..706b3cd1d 100644 --- a/vendor/golang.org/x/sys/unix/env_unix.go +++ b/vendor/golang.org/x/sys/unix/env_unix.go @@ -1,4 +1,4 @@ -// Copyright 2010 The Go Authors. All rights reserved. +// Copyright 2010 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
@@ -25,3 +25,7 @@ func Clearenv() { func Environ() []string { return syscall.Environ() } + +func Unsetenv(key string) error { + return syscall.Unsetenv(key) +} diff --git a/vendor/golang.org/x/sys/unix/env_unset.go b/vendor/golang.org/x/sys/unix/env_unset.go deleted file mode 100644 index 922226255..000000000 --- a/vendor/golang.org/x/sys/unix/env_unset.go +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build go1.4 - -package unix - -import "syscall" - -func Unsetenv(key string) error { - // This was added in Go 1.4. - return syscall.Unsetenv(key) -} diff --git a/vendor/golang.org/x/sys/unix/file_unix.go b/vendor/golang.org/x/sys/unix/file_unix.go deleted file mode 100644 index 47f6a83f2..000000000 --- a/vendor/golang.org/x/sys/unix/file_unix.go +++ /dev/null @@ -1,27 +0,0 @@ -// Copyright 2017 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -package unix - -import ( - "os" - "syscall" -) - -// FIXME: unexported function from os -// syscallMode returns the syscall-specific mode bits from Go's portable mode bits. -func syscallMode(i os.FileMode) (o uint32) { - o |= uint32(i.Perm()) - if i&os.ModeSetuid != 0 { - o |= syscall.S_ISUID - } - if i&os.ModeSetgid != 0 { - o |= syscall.S_ISGID - } - if i&os.ModeSticky != 0 { - o |= syscall.S_ISVTX - } - // No mapping for Go's ModeTemporary (plan9 only). - return -} diff --git a/vendor/golang.org/x/sys/unix/gccgo.go b/vendor/golang.org/x/sys/unix/gccgo.go index 94c823212..50062e3c7 100644 --- a/vendor/golang.org/x/sys/unix/gccgo.go +++ b/vendor/golang.org/x/sys/unix/gccgo.go @@ -1,4 +1,4 @@ -// Copyright 2015 The Go Authors. All rights reserved. +// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. @@ -8,12 +8,22 @@ package unix import "syscall" -// We can't use the gc-syntax .s files for gccgo. On the plus side +// We can't use the gc-syntax .s files for gccgo. On the plus side // much of the functionality can be written directly in Go. 
+//extern gccgoRealSyscallNoError +func realSyscallNoError(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r uintptr) + //extern gccgoRealSyscall func realSyscall(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r, errno uintptr) +func SyscallNoError(trap, a1, a2, a3 uintptr) (r1, r2 uintptr) { + syscall.Entersyscall() + r := realSyscallNoError(trap, a1, a2, a3, 0, 0, 0, 0, 0, 0) + syscall.Exitsyscall() + return r, 0 +} + func Syscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) { syscall.Entersyscall() r, errno := realSyscall(trap, a1, a2, a3, 0, 0, 0, 0, 0, 0) @@ -35,6 +45,11 @@ func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, return r, 0, syscall.Errno(errno) } +func RawSyscallNoError(trap, a1, a2, a3 uintptr) (r1, r2 uintptr) { + r := realSyscallNoError(trap, a1, a2, a3, 0, 0, 0, 0, 0, 0) + return r, 0 +} + func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2 uintptr, err syscall.Errno) { r, errno := realSyscall(trap, a1, a2, a3, 0, 0, 0, 0, 0, 0) return r, 0, syscall.Errno(errno) diff --git a/vendor/golang.org/x/sys/unix/gccgo_c.c b/vendor/golang.org/x/sys/unix/gccgo_c.c index 07f6be039..24e96b119 100644 --- a/vendor/golang.org/x/sys/unix/gccgo_c.c +++ b/vendor/golang.org/x/sys/unix/gccgo_c.c @@ -1,4 +1,4 @@ -// Copyright 2015 The Go Authors. All rights reserved. +// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. @@ -31,6 +31,12 @@ gccgoRealSyscall(uintptr_t trap, uintptr_t a1, uintptr_t a2, uintptr_t a3, uintp return r; } +uintptr_t +gccgoRealSyscallNoError(uintptr_t trap, uintptr_t a1, uintptr_t a2, uintptr_t a3, uintptr_t a4, uintptr_t a5, uintptr_t a6, uintptr_t a7, uintptr_t a8, uintptr_t a9) +{ + return syscall(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9); +} + // Define the use function in C so that it is not inlined. extern void use(void *) __asm__ (GOSYM_PREFIX GOPKGPATH ".use") __attribute__((noinline)); diff --git a/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go b/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go index bffe1a77d..251a977a8 100644 --- a/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/gccgo_linux_amd64.go @@ -1,4 +1,4 @@ -// Copyright 2015 The Go Authors. All rights reserved. +// Copyright 2015 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. diff --git a/vendor/golang.org/x/sys/unix/gccgo_linux_sparc64.go b/vendor/golang.org/x/sys/unix/gccgo_linux_sparc64.go deleted file mode 100644 index 56332692c..000000000 --- a/vendor/golang.org/x/sys/unix/gccgo_linux_sparc64.go +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright 2016 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. 
- -// +build gccgo,linux,sparc64 - -package unix - -import "syscall" - -//extern sysconf -func realSysconf(name int) int64 - -func sysconf(name int) (n int64, err syscall.Errno) { - r := realSysconf(name) - if r < 0 { - return 0, syscall.GetErrno() - } - return r, 0 -} diff --git a/vendor/golang.org/x/sys/unix/mkall.sh b/vendor/golang.org/x/sys/unix/mkall.sh index 09b3c7ab6..1715122bd 100755 --- a/vendor/golang.org/x/sys/unix/mkall.sh +++ b/vendor/golang.org/x/sys/unix/mkall.sh @@ -80,12 +80,6 @@ darwin_arm64) mksysnum="./mksysnum_darwin.pl $(xcrun --show-sdk-path --sdk iphoneos)/usr/include/sys/syscall.h" mktypes="GOARCH=$GOARCH go tool cgo -godefs" ;; -dragonfly_386) - mkerrors="$mkerrors -m32" - mksyscall="./mksyscall.pl -l32 -dragonfly" - mksysnum="curl -s 'http://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/HEAD:/sys/kern/syscalls.master' | ./mksysnum_dragonfly.pl" - mktypes="GOARCH=$GOARCH go tool cgo -godefs" - ;; dragonfly_amd64) mkerrors="$mkerrors -m64" mksyscall="./mksyscall.pl -dragonfly" @@ -108,7 +102,7 @@ freebsd_arm) mksyscall="./mksyscall.pl -l32 -arm" mksysnum="curl -s 'http://svn.freebsd.org/base/stable/10/sys/kern/syscalls.master' | ./mksysnum_freebsd.pl" # Let the type of C char be signed for making the bare syscall - # API consistent across over platforms. + # API consistent across platforms. mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" ;; linux_sparc64) @@ -130,11 +124,18 @@ netbsd_amd64) mksysnum="curl -s 'http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_netbsd.pl" mktypes="GOARCH=$GOARCH go tool cgo -godefs" ;; +netbsd_arm) + mkerrors="$mkerrors" + mksyscall="./mksyscall.pl -l32 -netbsd -arm" + mksysnum="curl -s 'http://cvsweb.netbsd.org/bsdweb.cgi/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_netbsd.pl" + # Let the type of C char be signed for making the bare syscall + # API consistent across platforms. + mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" + ;; openbsd_386) mkerrors="$mkerrors -m32" mksyscall="./mksyscall.pl -l32 -openbsd" mksysctl="./mksysctl_openbsd.pl" - zsysctl="zsysctl_openbsd.go" mksysnum="curl -s 'http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_openbsd.pl" mktypes="GOARCH=$GOARCH go tool cgo -godefs" ;; @@ -142,10 +143,18 @@ openbsd_amd64) mkerrors="$mkerrors -m64" mksyscall="./mksyscall.pl -openbsd" mksysctl="./mksysctl_openbsd.pl" - zsysctl="zsysctl_openbsd.go" mksysnum="curl -s 'http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_openbsd.pl" mktypes="GOARCH=$GOARCH go tool cgo -godefs" ;; +openbsd_arm) + mkerrors="$mkerrors" + mksyscall="./mksyscall.pl -l32 -openbsd -arm" + mksysctl="./mksysctl_openbsd.pl" + mksysnum="curl -s 'http://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/src/sys/kern/syscalls.master' | ./mksysnum_openbsd.pl" + # Let the type of C char be signed for making the bare syscall + # API consistent across platforms. 
+ mktypes="GOARCH=$GOARCH go tool cgo -godefs -- -fsigned-char" + ;; solaris_amd64) mksyscall="./mksyscall_solaris.pl" mkerrors="$mkerrors -m64" diff --git a/vendor/golang.org/x/sys/unix/mkerrors.sh b/vendor/golang.org/x/sys/unix/mkerrors.sh index 08dd77518..3b5e2c07b 100755 --- a/vendor/golang.org/x/sys/unix/mkerrors.sh +++ b/vendor/golang.org/x/sys/unix/mkerrors.sh @@ -17,8 +17,8 @@ if test -z "$GOARCH" -o -z "$GOOS"; then fi # Check that we are using the new build system if we should -if [[ "$GOOS" -eq "linux" ]] && [[ "$GOARCH" != "sparc64" ]]; then - if [[ "$GOLANG_SYS_BUILD" -ne "docker" ]]; then +if [[ "$GOOS" = "linux" ]] && [[ "$GOARCH" != "sparc64" ]]; then + if [[ "$GOLANG_SYS_BUILD" != "docker" ]]; then echo 1>&2 "In the new build system, mkerrors should not be called directly." echo 1>&2 "See README.md" exit 1 @@ -27,7 +27,7 @@ fi CC=${CC:-cc} -if [[ "$GOOS" -eq "solaris" ]]; then +if [[ "$GOOS" = "solaris" ]]; then # Assumes GNU versions of utilities in PATH. export PATH=/usr/gnu/bin:$PATH fi @@ -38,6 +38,8 @@ includes_Darwin=' #define _DARWIN_C_SOURCE #define KERNEL #define _DARWIN_USE_64_BIT_INODE +#include +#include #include #include #include @@ -46,6 +48,7 @@ includes_Darwin=' #include #include #include +#include #include #include #include @@ -84,6 +87,7 @@ includes_FreeBSD=' #include #include #include +#include #include #include #include @@ -181,6 +185,10 @@ struct ltchars { #include #include #include +#include +#include +#include +#include #include #include @@ -281,6 +289,7 @@ includes_SunOS=' #include #include #include +#include #include #include #include @@ -345,6 +354,7 @@ ccflags="$@" $2 !~ /^EXPR_/ && $2 ~ /^E[A-Z0-9_]+$/ || $2 ~ /^B[0-9_]+$/ || + $2 ~ /^(OLD|NEW)DEV$/ || $2 == "BOTHER" || $2 ~ /^CI?BAUD(EX)?$/ || $2 == "IBSHIFT" || @@ -354,6 +364,7 @@ ccflags="$@" $2 ~ /^IGN/ || $2 ~ /^IX(ON|ANY|OFF)$/ || $2 ~ /^IN(LCR|PCK)$/ || + $2 !~ "X86_CR3_PCID_NOFLUSH" && $2 ~ /(^FLU?SH)|(FLU?SH$)/ || $2 ~ /^C(LOCAL|READ|MSPAR|RTSCTS)$/ || $2 == "BRKINT" || @@ -378,7 +389,9 @@ ccflags="$@" $2 == "SOMAXCONN" || $2 == "NAME_MAX" || $2 == "IFNAMSIZ" || - $2 ~ /^CTL_(MAXNAME|NET|QUERY)$/ || + $2 ~ /^CTL_(HW|KERN|MAXNAME|NET|QUERY)$/ || + $2 ~ /^KERN_(HOSTNAME|OS(RELEASE|TYPE)|VERSION)$/ || + $2 ~ /^HW_MACHINE$/ || $2 ~ /^SYSCTL_VERS/ || $2 ~ /^(MS|MNT|UMOUNT)_/ || $2 ~ /^TUN(SET|GET|ATTACH|DETACH)/ || @@ -413,7 +426,16 @@ ccflags="$@" $2 ~ /^SECCOMP_MODE_/ || $2 ~ /^SPLICE_/ || $2 ~ /^(VM|VMADDR)_/ || + $2 ~ /^IOCTL_VM_SOCKETS_/ || + $2 ~ /^(TASKSTATS|TS)_/ || + $2 ~ /^CGROUPSTATS_/ || + $2 ~ /^GENL_/ || + $2 ~ /^STATX_/ || + $2 ~ /^UTIME_/ || $2 ~ /^XATTR_(CREATE|REPLACE)/ || + $2 ~ /^ATTR_(BIT_MAP_COUNT|(CMN|VOL|FILE)_)/ || + $2 ~ /^FSOPT_/ || + $2 ~ /^WDIOC_/ || $2 !~ "WMESGLEN" && $2 ~ /^W[A-Z0-9]+$/ || $2 ~ /^BLK[A-Z]*(GET$|SET$|BUF$|PART$|SIZE)/ {printf("\t%s = C.%s\n", $2, $2)} diff --git a/vendor/golang.org/x/sys/unix/mksyscall.pl b/vendor/golang.org/x/sys/unix/mksyscall.pl index fb929b4ce..1f6b926f8 100755 --- a/vendor/golang.org/x/sys/unix/mksyscall.pl +++ b/vendor/golang.org/x/sys/unix/mksyscall.pl @@ -210,7 +210,15 @@ while(<>) { # Determine which form to use; pad args with zeros. 
my $asm = "Syscall"; if ($nonblock) { - $asm = "RawSyscall"; + if ($errvar eq "" && $ENV{'GOOS'} eq "linux") { + $asm = "RawSyscallNoError"; + } else { + $asm = "RawSyscall"; + } + } else { + if ($errvar eq "" && $ENV{'GOOS'} eq "linux") { + $asm = "SyscallNoError"; + } } if(@args <= 3) { while(@args < 3) { @@ -284,7 +292,12 @@ while(<>) { if ($ret[0] eq "_" && $ret[1] eq "_" && $ret[2] eq "_") { $text .= "\t$call\n"; } else { - $text .= "\t$ret[0], $ret[1], $ret[2] := $call\n"; + if ($errvar eq "" && $ENV{'GOOS'} eq "linux") { + # raw syscall without error on Linux, see golang.org/issue/22924 + $text .= "\t$ret[0], $ret[1] := $call\n"; + } else { + $text .= "\t$ret[0], $ret[1], $ret[2] := $call\n"; + } } $text .= $body; diff --git a/vendor/golang.org/x/sys/unix/mksysnum_netbsd.pl b/vendor/golang.org/x/sys/unix/mksysnum_netbsd.pl index d31f2c48f..85988b140 100755 --- a/vendor/golang.org/x/sys/unix/mksysnum_netbsd.pl +++ b/vendor/golang.org/x/sys/unix/mksysnum_netbsd.pl @@ -47,7 +47,7 @@ while(<>){ $name = "$7_$11" if $11 ne ''; $name =~ y/a-z/A-Z/; - if($compat eq '' || $compat eq '30' || $compat eq '50') { + if($compat eq '' || $compat eq '13' || $compat eq '30' || $compat eq '50') { print " $name = $num; // $proto\n"; } } diff --git a/vendor/golang.org/x/sys/unix/pagesize_unix.go b/vendor/golang.org/x/sys/unix/pagesize_unix.go new file mode 100644 index 000000000..83c85e019 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/pagesize_unix.go @@ -0,0 +1,15 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build darwin dragonfly freebsd linux netbsd openbsd solaris + +// For Unix, get the pagesize from the runtime. + +package unix + +import "syscall" + +func Getpagesize() int { + return syscall.Getpagesize() +} diff --git a/vendor/golang.org/x/sys/unix/race.go b/vendor/golang.org/x/sys/unix/race.go index 3c7627eb5..61712b51c 100644 --- a/vendor/golang.org/x/sys/unix/race.go +++ b/vendor/golang.org/x/sys/unix/race.go @@ -1,4 +1,4 @@ -// Copyright 2012 The Go Authors. All rights reserved. +// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. diff --git a/vendor/golang.org/x/sys/unix/race0.go b/vendor/golang.org/x/sys/unix/race0.go index f8678e0d2..dd0820431 100644 --- a/vendor/golang.org/x/sys/unix/race0.go +++ b/vendor/golang.org/x/sys/unix/race0.go @@ -1,4 +1,4 @@ -// Copyright 2012 The Go Authors. All rights reserved. +// Copyright 2012 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. diff --git a/vendor/golang.org/x/sys/unix/sockcmsg_linux.go b/vendor/golang.org/x/sys/unix/sockcmsg_linux.go index d9ff4731a..6079eb4ac 100644 --- a/vendor/golang.org/x/sys/unix/sockcmsg_linux.go +++ b/vendor/golang.org/x/sys/unix/sockcmsg_linux.go @@ -1,4 +1,4 @@ -// Copyright 2011 The Go Authors. All rights reserved. +// Copyright 2011 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
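// --- Editor's note: illustrative fragment, not part of the vendored diff ---
// The mksyscall.pl change above makes generated Linux wrappers for syscalls
// declared without an error result call SyscallNoError/RawSyscallNoError and
// drop the errno slot (see golang.org/issue/22924). The generated zsyscall
// files are not part of this hunk, so the fragment below, written as it would
// appear inside package unix (which defines SYS_GETUID and RawSyscallNoError),
// is only a sketch of the emitted shape.
func Getuid() (uid int) {
	r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0)
	uid = int(r0)
	return
}
// --- end editor's note ---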
diff --git a/vendor/golang.org/x/sys/unix/syscall.go b/vendor/golang.org/x/sys/unix/syscall.go index 85e35020e..ef35fce80 100644 --- a/vendor/golang.org/x/sys/unix/syscall.go +++ b/vendor/golang.org/x/sys/unix/syscall.go @@ -5,30 +5,33 @@ // +build darwin dragonfly freebsd linux netbsd openbsd solaris // Package unix contains an interface to the low-level operating system -// primitives. OS details vary depending on the underlying system, and +// primitives. OS details vary depending on the underlying system, and // by default, godoc will display OS-specific documentation for the current -// system. If you want godoc to display OS documentation for another -// system, set $GOOS and $GOARCH to the desired system. For example, if +// system. If you want godoc to display OS documentation for another +// system, set $GOOS and $GOARCH to the desired system. For example, if // you want to view documentation for freebsd/arm on linux/amd64, set $GOOS // to freebsd and $GOARCH to arm. +// // The primary use of this package is inside other packages that provide a more // portable interface to the system, such as "os", "time" and "net". Use // those packages rather than this one if you can. +// // For details of the functions and data types in this package consult // the manuals for the appropriate operating system. +// // These calls return err == nil to indicate success; otherwise // err represents an operating system error describing the failure and // holds a value of type syscall.Errno. package unix // import "golang.org/x/sys/unix" +import "strings" + // ByteSliceFromString returns a NUL-terminated slice of bytes // containing the text of s. If s contains a NUL byte at any // location, it returns (nil, EINVAL). func ByteSliceFromString(s string) ([]byte, error) { - for i := 0; i < len(s); i++ { - if s[i] == 0 { - return nil, EINVAL - } + if strings.IndexByte(s, 0) != -1 { + return nil, EINVAL } a := make([]byte, len(s)+1) copy(a, s) @@ -49,21 +52,3 @@ func BytePtrFromString(s string) (*byte, error) { // Single-word zero for use when we need a valid pointer to 0 bytes. // See mkunix.pl. var _zero uintptr - -func (ts *Timespec) Unix() (sec int64, nsec int64) { - return int64(ts.Sec), int64(ts.Nsec) -} - -func (tv *Timeval) Unix() (sec int64, nsec int64) { - return int64(tv.Sec), int64(tv.Usec) * 1000 -} - -func (ts *Timespec) Nano() int64 { - return int64(ts.Sec)*1e9 + int64(ts.Nsec) -} - -func (tv *Timeval) Nano() int64 { - return int64(tv.Sec)*1e9 + int64(tv.Usec)*1000 -} - -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } diff --git a/vendor/golang.org/x/sys/unix/syscall_bsd.go b/vendor/golang.org/x/sys/unix/syscall_bsd.go index ccb29c75c..d3903edeb 100644 --- a/vendor/golang.org/x/sys/unix/syscall_bsd.go +++ b/vendor/golang.org/x/sys/unix/syscall_bsd.go @@ -34,7 +34,7 @@ func Getgroups() (gids []int, err error) { return nil, nil } - // Sanity check group count. Max is 16 on BSD. + // Sanity check group count. Max is 16 on BSD. if n < 0 || n > 1000 { return nil, EINVAL } @@ -352,6 +352,18 @@ func GetsockoptICMPv6Filter(fd, level, opt int) (*ICMPv6Filter, error) { return &value, err } +// GetsockoptString returns the string value of the socket option opt for the +// socket associated with fd at the given socket level. 
+func GetsockoptString(fd, level, opt int) (string, error) { + buf := make([]byte, 256) + vallen := _Socklen(len(buf)) + err := getsockopt(fd, level, opt, unsafe.Pointer(&buf[0]), &vallen) + if err != nil { + return "", err + } + return string(buf[:vallen-1]), nil +} + //sys recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Socklen) (n int, err error) //sys sendto(s int, buf []byte, flags int, to unsafe.Pointer, addrlen _Socklen) (err error) //sys recvmsg(s int, msg *Msghdr, flags int) (n int, err error) @@ -561,13 +573,24 @@ func Utimes(path string, tv []Timeval) error { func UtimesNano(path string, ts []Timespec) error { if ts == nil { + err := utimensat(AT_FDCWD, path, nil, 0) + if err != ENOSYS { + return err + } return utimes(path, nil) } - // TODO: The BSDs can do utimensat with SYS_UTIMENSAT but it - // isn't supported by darwin so this uses utimes instead if len(ts) != 2 { return EINVAL } + // Darwin setattrlist can set nanosecond timestamps + err := setattrlistTimes(path, ts, 0) + if err != ENOSYS { + return err + } + err = utimensat(AT_FDCWD, path, (*[2]Timespec)(unsafe.Pointer(&ts[0])), 0) + if err != ENOSYS { + return err + } // Not as efficient as it could be because Timespec and // Timeval have different types in the different OSes tv := [2]Timeval{ @@ -577,6 +600,20 @@ func UtimesNano(path string, ts []Timespec) error { return utimes(path, (*[2]Timeval)(unsafe.Pointer(&tv[0]))) } +func UtimesNanoAt(dirfd int, path string, ts []Timespec, flags int) error { + if ts == nil { + return utimensat(dirfd, path, nil, flags) + } + if len(ts) != 2 { + return EINVAL + } + err := setattrlistTimes(path, ts, flags) + if err != ENOSYS { + return err + } + return utimensat(dirfd, path, (*[2]Timespec)(unsafe.Pointer(&ts[0])), flags) +} + //sys futimes(fd int, timeval *[2]Timeval) (err error) func Futimes(fd int, tv []Timeval) error { @@ -591,12 +628,18 @@ func Futimes(fd int, tv []Timeval) error { //sys fcntl(fd int, cmd int, arg int) (val int, err error) +//sys poll(fds *PollFd, nfds int, timeout int) (n int, err error) + +func Poll(fds []PollFd, timeout int) (n int, err error) { + if len(fds) == 0 { + return poll(nil, 0, timeout) + } + return poll(&fds[0], len(fds), timeout) +} + // TODO: wrap // Acct(name nil-string) (err error) // Gethostuuid(uuid *byte, timeout *Timespec) (err error) -// Madvise(addr *byte, len int, behav int) (err error) -// Mprotect(addr *byte, len int, prot int) (err error) -// Msync(addr *byte, len int, flags int) (err error) // Ptrace(req int, pid int, addr uintptr, data int) (ret uintptr, err error) var mapper = &mmapper{ @@ -612,3 +655,11 @@ func Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, e func Munmap(b []byte) (err error) { return mapper.Munmap(b) } + +//sys Madvise(b []byte, behav int) (err error) +//sys Mlock(b []byte) (err error) +//sys Mlockall(flags int) (err error) +//sys Mprotect(b []byte, prot int) (err error) +//sys Msync(b []byte, flags int) (err error) +//sys Munlock(b []byte) (err error) +//sys Munlockall() (err error) diff --git a/vendor/golang.org/x/sys/unix/syscall_darwin.go b/vendor/golang.org/x/sys/unix/syscall_darwin.go index 26b83602b..006e21f5d 100644 --- a/vendor/golang.org/x/sys/unix/syscall_darwin.go +++ b/vendor/golang.org/x/sys/unix/syscall_darwin.go @@ -36,6 +36,7 @@ func Getwd() (string, error) { return "", ENOTSUP } +// SockaddrDatalink implements the Sockaddr interface for AF_LINK type sockets. 
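The UtimesNano/UtimesNanoAt logic above prefers the most precise mechanism available (Darwin's setattrlist, then utimensat) and only falls back to microsecond-resolution utimes when those report ENOSYS. A minimal usage sketch of the exported UtimesNanoAt API; `example.txt` is a placeholder path and the error handling is illustrative:

```
package main

import (
	"log"
	"time"

	"golang.org/x/sys/unix"
)

func main() {
	// Set access and modification times with nanosecond precision.
	// Order is atime first, then mtime, mirroring utimensat(2).
	now := time.Now()
	ts := []unix.Timespec{
		unix.NsecToTimespec(now.UnixNano()), // atime
		unix.NsecToTimespec(now.UnixNano()), // mtime
	}
	// AT_SYMLINK_NOFOLLOW updates the link itself rather than its target
	// when the path is a symlink.
	if err := unix.UtimesNanoAt(unix.AT_FDCWD, "example.txt", ts, unix.AT_SYMLINK_NOFOLLOW); err != nil {
		log.Fatal(err)
	}
}
```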
type SockaddrDatalink struct { Len uint8 Family uint8 @@ -54,7 +55,7 @@ func nametomib(name string) (mib []_C_int, err error) { // NOTE(rsc): It seems strange to set the buffer to have // size CTL_MAXNAME+2 but use only CTL_MAXNAME - // as the size. I don't know why the +2 is here, but the + // as the size. I don't know why the +2 is here, but the // kernel uses +2 for its own implementation of this function. // I am scared that if we don't include the +2 here, the kernel // will silently write 2 words farther than we specify @@ -76,18 +77,6 @@ func nametomib(name string) (mib []_C_int, err error) { return buf[0 : n/siz], nil } -func direntIno(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Ino), unsafe.Sizeof(Dirent{}.Ino)) -} - -func direntReclen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Reclen), unsafe.Sizeof(Dirent{}.Reclen)) -} - -func direntNamlen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Namlen), unsafe.Sizeof(Dirent{}.Namlen)) -} - //sys ptrace(request int, pid int, addr uintptr, data uintptr) (err error) func PtraceAttach(pid int) (err error) { return ptrace(PT_ATTACH, pid, 0, 0) } func PtraceDetach(pid int) (err error) { return ptrace(PT_DETACH, pid, 0, 0) } @@ -187,6 +176,42 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { return } +func setattrlistTimes(path string, times []Timespec, flags int) error { + _p0, err := BytePtrFromString(path) + if err != nil { + return err + } + + var attrList attrList + attrList.bitmapCount = ATTR_BIT_MAP_COUNT + attrList.CommonAttr = ATTR_CMN_MODTIME | ATTR_CMN_ACCTIME + + // order is mtime, atime: the opposite of Chtimes + attributes := [2]Timespec{times[1], times[0]} + options := 0 + if flags&AT_SYMLINK_NOFOLLOW != 0 { + options |= FSOPT_NOFOLLOW + } + _, _, e1 := Syscall6( + SYS_SETATTRLIST, + uintptr(unsafe.Pointer(_p0)), + uintptr(unsafe.Pointer(&attrList)), + uintptr(unsafe.Pointer(&attributes)), + uintptr(unsafe.Sizeof(attributes)), + uintptr(options), + 0, + ) + if e1 != 0 { + return e1 + } + return nil +} + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) error { + // Darwin doesn't support SYS_UTIMENSAT + return ENOSYS +} + /* * Wrapped */ @@ -234,6 +259,52 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { return &value, err } +func Uname(uname *Utsname) error { + mib := []_C_int{CTL_KERN, KERN_OSTYPE} + n := unsafe.Sizeof(uname.Sysname) + if err := sysctl(mib, &uname.Sysname[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_HOSTNAME} + n = unsafe.Sizeof(uname.Nodename) + if err := sysctl(mib, &uname.Nodename[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_OSRELEASE} + n = unsafe.Sizeof(uname.Release) + if err := sysctl(mib, &uname.Release[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_VERSION} + n = unsafe.Sizeof(uname.Version) + if err := sysctl(mib, &uname.Version[0], &n, nil, 0); err != nil { + return err + } + + // The version might have newlines or tabs in it, convert them to + // spaces. 
+ for i, b := range uname.Version { + if b == '\n' || b == '\t' { + if i == len(uname.Version)-1 { + uname.Version[i] = 0 + } else { + uname.Version[i] = ' ' + } + } + } + + mib = []_C_int{CTL_HW, HW_MACHINE} + n = unsafe.Sizeof(uname.Machine) + if err := sysctl(mib, &uname.Machine[0], &n, nil, 0); err != nil { + return err + } + + return nil +} + /* * Exposed directly */ @@ -259,6 +330,7 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { //sys Flock(fd int, how int) (err error) //sys Fpathconf(fd int, name int) (val int, err error) //sys Fstat(fd int, stat *Stat_t) (err error) = SYS_FSTAT64 +//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) = SYS_FSTATAT64 //sys Fstatfs(fd int, stat *Statfs_t) (err error) = SYS_FSTATFS64 //sys Fsync(fd int) (err error) //sys Ftruncate(fd int, length int64) (err error) @@ -287,12 +359,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { //sys Mkdirat(dirfd int, path string, mode uint32) (err error) //sys Mkfifo(path string, mode uint32) (err error) //sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Msync(b []byte, flags int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) //sys Open(path string, mode int, perm uint32) (fd int, err error) //sys Openat(dirfd int, path string, mode int, perm uint32) (fd int, err error) //sys Pathconf(path string, name int) (val int, err error) @@ -369,9 +435,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { // Add_profil // Kdebug_trace // Sigreturn -// Mmap -// Mlock -// Munlock // Atsocket // Kqueue_from_portset_np // Kqueue_portset @@ -381,7 +444,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { // Searchfs // Delete // Copyfile -// Poll // Watchevent // Waitevent // Modwatch @@ -464,8 +526,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { // Lio_listio // __pthread_cond_wait // Iopolicysys -// Mlockall -// Munlockall // __pthread_kill // __pthread_sigmask // __sigwait diff --git a/vendor/golang.org/x/sys/unix/syscall_darwin_386.go b/vendor/golang.org/x/sys/unix/syscall_darwin_386.go index c172a3da5..b3ac109a2 100644 --- a/vendor/golang.org/x/sys/unix/syscall_darwin_386.go +++ b/vendor/golang.org/x/sys/unix/syscall_darwin_386.go @@ -11,27 +11,18 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: int32(sec), Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int32(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: int32(sec), Usec: int32(usec)} } //sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error) func Gettimeofday(tv *Timeval) (err error) { // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back + // but is otherwise unused. The answers come back // in the two registers. 
sec, usec, err := gettimeofday(tv) tv.Sec = int32(sec) diff --git a/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go b/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go index c6c99c13a..75219444a 100644 --- a/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_darwin_amd64.go @@ -11,27 +11,18 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } //sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error) func Gettimeofday(tv *Timeval) (err error) { // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back + // but is otherwise unused. The answers come back // in the two registers. sec, usec, err := gettimeofday(tv) tv.Sec = sec diff --git a/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go b/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go index d286cf408..faae207a4 100644 --- a/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go +++ b/vendor/golang.org/x/sys/unix/syscall_darwin_arm.go @@ -9,27 +9,18 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: int32(sec), Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int32(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: int32(sec), Usec: int32(usec)} } //sysnb gettimeofday(tp *Timeval) (sec int32, usec int32, err error) func Gettimeofday(tv *Timeval) (err error) { // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back + // but is otherwise unused. The answers come back // in the two registers. sec, usec, err := gettimeofday(tv) tv.Sec = int32(sec) @@ -69,3 +60,7 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e } func Syscall9(num, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) // sic + +// SYS___SYSCTL is used by syscall_bsd.go for all BSDs, but in modern versions +// of darwin/arm the syscall is called sysctl instead of __sysctl. 
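The sysctl-backed Uname added for Darwin above (and for DragonFly and FreeBSD further down) fills fixed-size, NUL-padded arrays rather than returning Go strings. A small sketch of reading those fields back, assuming the `[...]byte`-style Utsname layout used on Darwin and the BSDs; field sizes and element types differ on other GOOS values:

```
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/sys/unix"
)

// utsString trims a fixed-size Utsname field at its first NUL byte.
func utsString(f []byte) string {
	if i := bytes.IndexByte(f, 0); i >= 0 {
		return string(f[:i])
	}
	return string(f)
}

func main() {
	var uts unix.Utsname
	if err := unix.Uname(&uts); err != nil {
		panic(err)
	}
	fmt.Println(utsString(uts.Sysname[:]), utsString(uts.Release[:]), utsString(uts.Machine[:]))
}
```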
+const SYS___SYSCTL = SYS_SYSCTL diff --git a/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go b/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go index c33905cdc..d6d962801 100644 --- a/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go +++ b/vendor/golang.org/x/sys/unix/syscall_darwin_arm64.go @@ -11,27 +11,18 @@ import ( "unsafe" ) -func Getpagesize() int { return 16384 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } //sysnb gettimeofday(tp *Timeval) (sec int64, usec int32, err error) func Gettimeofday(tv *Timeval) (err error) { // The tv passed to gettimeofday must be non-nil - // but is otherwise unused. The answers come back + // but is otherwise unused. The answers come back // in the two registers. sec, usec, err := gettimeofday(tv) tv.Sec = sec diff --git a/vendor/golang.org/x/sys/unix/syscall_dragonfly.go b/vendor/golang.org/x/sys/unix/syscall_dragonfly.go index 7e0210fc9..b5072de28 100644 --- a/vendor/golang.org/x/sys/unix/syscall_dragonfly.go +++ b/vendor/golang.org/x/sys/unix/syscall_dragonfly.go @@ -14,6 +14,7 @@ package unix import "unsafe" +// SockaddrDatalink implements the Sockaddr interface for AF_LINK type sockets. type SockaddrDatalink struct { Len uint8 Family uint8 @@ -56,22 +57,6 @@ func nametomib(name string) (mib []_C_int, err error) { return buf[0 : n/siz], nil } -func direntIno(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Fileno), unsafe.Sizeof(Dirent{}.Fileno)) -} - -func direntReclen(buf []byte) (uint64, bool) { - namlen, ok := direntNamlen(buf) - if !ok { - return 0, false - } - return (16 + namlen + 1 + 7) &^ 7, true -} - -func direntNamlen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Namlen), unsafe.Sizeof(Dirent{}.Namlen)) -} - //sysnb pipe() (r int, w int, err error) func Pipe(p []int) (err error) { @@ -110,6 +95,23 @@ func Accept4(fd, flags int) (nfd int, sa Sockaddr, err error) { return } +const ImplementsGetwd = true + +//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD + +func Getwd() (string, error) { + var buf [PathMax]byte + _, err := Getcwd(buf[0:]) + if err != nil { + return "", err + } + n := clen(buf[:]) + if n < 1 { + return "", EINVAL + } + return string(buf[:n]), nil +} + func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { var _p0 unsafe.Pointer var bufsize uintptr @@ -125,6 +127,113 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { return } +func setattrlistTimes(path string, times []Timespec, flags int) error { + // used on Darwin for UtimesNano + return ENOSYS +} + +//sys ioctl(fd int, req uint, arg uintptr) (err error) + +// ioctl itself should not be exposed directly, but additional get/set +// functions for specific types are permissible. + +// IoctlSetInt performs an ioctl operation which sets an integer value +// on fd, using the specified request number. 
+func IoctlSetInt(fd int, req uint, value int) error { + return ioctl(fd, req, uintptr(value)) +} + +func IoctlSetWinsize(fd int, req uint, value *Winsize) error { + return ioctl(fd, req, uintptr(unsafe.Pointer(value))) +} + +func IoctlSetTermios(fd int, req uint, value *Termios) error { + return ioctl(fd, req, uintptr(unsafe.Pointer(value))) +} + +// IoctlGetInt performs an ioctl operation which gets an integer value +// from fd, using the specified request number. +func IoctlGetInt(fd int, req uint) (int, error) { + var value int + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return value, err +} + +func IoctlGetWinsize(fd int, req uint) (*Winsize, error) { + var value Winsize + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return &value, err +} + +func IoctlGetTermios(fd int, req uint) (*Termios, error) { + var value Termios + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return &value, err +} + +func sysctlUname(mib []_C_int, old *byte, oldlen *uintptr) error { + err := sysctl(mib, old, oldlen, nil, 0) + if err != nil { + // Utsname members on Dragonfly are only 32 bytes and + // the syscall returns ENOMEM in case the actual value + // is longer. + if err == ENOMEM { + err = nil + } + } + return err +} + +func Uname(uname *Utsname) error { + mib := []_C_int{CTL_KERN, KERN_OSTYPE} + n := unsafe.Sizeof(uname.Sysname) + if err := sysctlUname(mib, &uname.Sysname[0], &n); err != nil { + return err + } + uname.Sysname[unsafe.Sizeof(uname.Sysname)-1] = 0 + + mib = []_C_int{CTL_KERN, KERN_HOSTNAME} + n = unsafe.Sizeof(uname.Nodename) + if err := sysctlUname(mib, &uname.Nodename[0], &n); err != nil { + return err + } + uname.Nodename[unsafe.Sizeof(uname.Nodename)-1] = 0 + + mib = []_C_int{CTL_KERN, KERN_OSRELEASE} + n = unsafe.Sizeof(uname.Release) + if err := sysctlUname(mib, &uname.Release[0], &n); err != nil { + return err + } + uname.Release[unsafe.Sizeof(uname.Release)-1] = 0 + + mib = []_C_int{CTL_KERN, KERN_VERSION} + n = unsafe.Sizeof(uname.Version) + if err := sysctlUname(mib, &uname.Version[0], &n); err != nil { + return err + } + + // The version might have newlines or tabs in it, convert them to + // spaces. 
+ for i, b := range uname.Version { + if b == '\n' || b == '\t' { + if i == len(uname.Version)-1 { + uname.Version[i] = 0 + } else { + uname.Version[i] = ' ' + } + } + } + + mib = []_C_int{CTL_HW, HW_MACHINE} + n = unsafe.Sizeof(uname.Machine) + if err := sysctlUname(mib, &uname.Machine[0], &n); err != nil { + return err + } + uname.Machine[unsafe.Sizeof(uname.Machine)-1] = 0 + + return nil +} + /* * Exposed directly */ @@ -142,10 +251,12 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { //sys Fchdir(fd int) (err error) //sys Fchflags(fd int, flags int) (err error) //sys Fchmod(fd int, mode uint32) (err error) +//sys Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) //sys Fchown(fd int, uid int, gid int) (err error) //sys Flock(fd int, how int) (err error) //sys Fpathconf(fd int, name int) (val int, err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) //sys Fstatfs(fd int, stat *Statfs_t) (err error) //sys Fsync(fd int) (err error) //sys Ftruncate(fd int, length int64) (err error) @@ -174,11 +285,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { //sys Mkdir(path string, mode uint32) (err error) //sys Mkfifo(path string, mode uint32) (err error) //sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) //sys Nanosleep(time *Timespec, leftover *Timespec) (err error) //sys Open(path string, mode int, perm uint32) (fd int, err error) //sys Pathconf(path string, name int) (val int, err error) @@ -218,6 +324,7 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { //sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ //sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE //sys accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int, err error) +//sys utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) /* * Unimplemented @@ -229,7 +336,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // Getlogin // Sigpending // Sigaltstack -// Ioctl // Reboot // Execve // Vfork @@ -252,9 +358,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // Add_profil // Kdebug_trace // Sigreturn -// Mmap -// Mlock -// Munlock // Atsocket // Kqueue_from_portset_np // Kqueue_portset @@ -264,7 +367,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // Searchfs // Delete // Copyfile -// Poll // Watchevent // Waitevent // Modwatch @@ -347,8 +449,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // Lio_listio // __pthread_cond_wait // Iopolicysys -// Mlockall -// Munlockall // __pthread_kill // __pthread_sigmask // __sigwait @@ -401,7 +501,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // Sendmsg_nocancel // Recvfrom_nocancel // Accept_nocancel -// Msync_nocancel // Fcntl_nocancel // Select_nocancel // Fsync_nocancel @@ -413,7 +512,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // Pread_nocancel // Pwrite_nocancel // Waitid_nocancel -// Poll_nocancel // Msgsnd_nocancel // Msgrcv_nocancel // Sem_wait_nocancel diff --git a/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go b/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go index da7cb7982..9babb31ea 100644 --- 
a/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_dragonfly_amd64.go @@ -11,21 +11,12 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_freebsd.go b/vendor/golang.org/x/sys/unix/syscall_freebsd.go index e93356b4f..ba9df4ac1 100644 --- a/vendor/golang.org/x/sys/unix/syscall_freebsd.go +++ b/vendor/golang.org/x/sys/unix/syscall_freebsd.go @@ -12,8 +12,12 @@ package unix -import "unsafe" +import ( + "strings" + "unsafe" +) +// SockaddrDatalink implements the Sockaddr interface for AF_LINK type sockets. type SockaddrDatalink struct { Len uint8 Family uint8 @@ -32,7 +36,7 @@ func nametomib(name string) (mib []_C_int, err error) { // NOTE(rsc): It seems strange to set the buffer to have // size CTL_MAXNAME+2 but use only CTL_MAXNAME - // as the size. I don't know why the +2 is here, but the + // as the size. I don't know why the +2 is here, but the // kernel uses +2 for its own implementation of this function. // I am scared that if we don't include the +2 here, the kernel // will silently write 2 words farther than we specify @@ -54,18 +58,6 @@ func nametomib(name string) (mib []_C_int, err error) { return buf[0 : n/siz], nil } -func direntIno(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Fileno), unsafe.Sizeof(Dirent{}.Fileno)) -} - -func direntReclen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Reclen), unsafe.Sizeof(Dirent{}.Reclen)) -} - -func direntNamlen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Namlen), unsafe.Sizeof(Dirent{}.Namlen)) -} - //sysnb pipe() (r int, w int, err error) func Pipe(p []int) (err error) { @@ -105,6 +97,23 @@ func Accept4(fd, flags int) (nfd int, sa Sockaddr, err error) { return } +const ImplementsGetwd = true + +//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD + +func Getwd() (string, error) { + var buf [PathMax]byte + _, err := Getcwd(buf[0:]) + if err != nil { + return "", err + } + n := clen(buf[:]) + if n < 1 { + return "", EINVAL + } + return string(buf[:n]), nil +} + func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { var _p0 unsafe.Pointer var bufsize uintptr @@ -120,17 +129,15 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { return } +func setattrlistTimes(path string, times []Timespec, flags int) error { + // used on Darwin for UtimesNano + return ENOSYS +} + // Derive extattr namespace and attribute name func xattrnamespace(fullattr string) (ns int, attr string, err error) { - s := -1 - for idx, val := range fullattr { - if val == '.' 
{ - s = idx - break - } - } - + s := strings.IndexByte(fullattr, '.') if s == -1 { return -1, "", ENOATTR } @@ -271,7 +278,6 @@ func Listxattr(file string, dest []byte) (sz int, err error) { // FreeBSD won't allow you to list xattrs from multiple namespaces s := 0 - var e error for _, nsid := range [...]int{EXTATTR_NAMESPACE_USER, EXTATTR_NAMESPACE_SYSTEM} { stmp, e := ExtattrListFile(file, nsid, uintptr(d), destsiz) @@ -283,7 +289,6 @@ func Listxattr(file string, dest []byte) (sz int, err error) { * we don't have read permissions on, so don't ignore those errors */ if e != nil && e == EPERM && nsid != EXTATTR_NAMESPACE_USER { - e = nil continue } else if e != nil { return s, e @@ -297,7 +302,7 @@ func Listxattr(file string, dest []byte) (sz int, err error) { d = initxattrdest(dest, s) } - return s, e + return s, nil } func Flistxattr(fd int, dest []byte) (sz int, err error) { @@ -305,11 +310,9 @@ func Flistxattr(fd int, dest []byte) (sz int, err error) { destsiz := len(dest) s := 0 - var e error for _, nsid := range [...]int{EXTATTR_NAMESPACE_USER, EXTATTR_NAMESPACE_SYSTEM} { stmp, e := ExtattrListFd(fd, nsid, uintptr(d), destsiz) if e != nil && e == EPERM && nsid != EXTATTR_NAMESPACE_USER { - e = nil continue } else if e != nil { return s, e @@ -323,7 +326,7 @@ func Flistxattr(fd int, dest []byte) (sz int, err error) { d = initxattrdest(dest, s) } - return s, e + return s, nil } func Llistxattr(link string, dest []byte) (sz int, err error) { @@ -331,11 +334,9 @@ func Llistxattr(link string, dest []byte) (sz int, err error) { destsiz := len(dest) s := 0 - var e error for _, nsid := range [...]int{EXTATTR_NAMESPACE_USER, EXTATTR_NAMESPACE_SYSTEM} { stmp, e := ExtattrListLink(link, nsid, uintptr(d), destsiz) if e != nil && e == EPERM && nsid != EXTATTR_NAMESPACE_USER { - e = nil continue } else if e != nil { return s, e @@ -349,7 +350,7 @@ func Llistxattr(link string, dest []byte) (sz int, err error) { d = initxattrdest(dest, s) } - return s, e + return s, nil } //sys ioctl(fd int, req uint, arg uintptr) (err error) @@ -391,6 +392,52 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { return &value, err } +func Uname(uname *Utsname) error { + mib := []_C_int{CTL_KERN, KERN_OSTYPE} + n := unsafe.Sizeof(uname.Sysname) + if err := sysctl(mib, &uname.Sysname[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_HOSTNAME} + n = unsafe.Sizeof(uname.Nodename) + if err := sysctl(mib, &uname.Nodename[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_OSRELEASE} + n = unsafe.Sizeof(uname.Release) + if err := sysctl(mib, &uname.Release[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_VERSION} + n = unsafe.Sizeof(uname.Version) + if err := sysctl(mib, &uname.Version[0], &n, nil, 0); err != nil { + return err + } + + // The version might have newlines or tabs in it, convert them to + // spaces. 
+ for i, b := range uname.Version { + if b == '\n' || b == '\t' { + if i == len(uname.Version)-1 { + uname.Version[i] = 0 + } else { + uname.Version[i] = ' ' + } + } + } + + mib = []_C_int{CTL_HW, HW_MACHINE} + n = unsafe.Sizeof(uname.Machine) + if err := sysctl(mib, &uname.Machine[0], &n, nil, 0); err != nil { + return err + } + + return nil +} + /* * Exposed directly */ @@ -431,9 +478,11 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { //sys Flock(fd int, how int) (err error) //sys Fpathconf(fd int, name int) (val int, err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) //sys Fstatfs(fd int, stat *Statfs_t) (err error) //sys Fsync(fd int) (err error) //sys Ftruncate(fd int, length int64) (err error) +//sys Getdents(fd int, buf []byte) (n int, err error) //sys Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) //sys Getdtablesize() (size int) //sysnb Getegid() (egid int) @@ -461,11 +510,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { //sys Mkdirat(dirfd int, path string, mode uint32) (err error) //sys Mkfifo(path string, mode uint32) (err error) //sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) //sys Nanosleep(time *Timespec, leftover *Timespec) (err error) //sys Open(path string, mode int, perm uint32) (fd int, err error) //sys Openat(fdat int, path string, mode int, perm uint32) (fd int, err error) @@ -512,6 +556,7 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { //sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ //sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE //sys accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int, err error) +//sys utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) /* * Unimplemented @@ -545,9 +590,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { // Add_profil // Kdebug_trace // Sigreturn -// Mmap -// Mlock -// Munlock // Atsocket // Kqueue_from_portset_np // Kqueue_portset @@ -557,7 +599,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { // Searchfs // Delete // Copyfile -// Poll // Watchevent // Waitevent // Modwatch @@ -640,8 +681,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { // Lio_listio // __pthread_cond_wait // Iopolicysys -// Mlockall -// Munlockall // __pthread_kill // __pthread_sigmask // __sigwait @@ -694,7 +733,6 @@ func IoctlGetTermios(fd int, req uint) (*Termios, error) { // Sendmsg_nocancel // Recvfrom_nocancel // Accept_nocancel -// Msync_nocancel // Fcntl_nocancel // Select_nocancel // Fsync_nocancel diff --git a/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go b/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go index 6a0cd804d..21e03958c 100644 --- a/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go +++ b/vendor/golang.org/x/sys/unix/syscall_freebsd_386.go @@ -11,21 +11,12 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: int32(sec), Nsec: int32(nsec)} } -func NsecToTimeval(nsec 
int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int32(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: int32(sec), Usec: int32(usec)} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go b/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go index e142540ef..9c945a657 100644 --- a/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_freebsd_amd64.go @@ -11,21 +11,12 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go b/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go index 5504cb125..5cd6243f2 100644 --- a/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go +++ b/vendor/golang.org/x/sys/unix/syscall_freebsd_arm.go @@ -11,21 +11,12 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return ts.Sec*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = nsec / 1e9 - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux.go b/vendor/golang.org/x/sys/unix/syscall_linux.go index bb9022571..76cf81f57 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux.go @@ -255,7 +255,7 @@ func Getgroups() (gids []int, err error) { return nil, nil } - // Sanity check group count. Max is 1<<16 on Linux. + // Sanity check group count. Max is 1<<16 on Linux. if n < 0 || n > 1<<20 { return nil, EINVAL } @@ -290,8 +290,8 @@ type WaitStatus uint32 // 0x7F (stopped), or a signal number that caused an exit. // The 0x80 bit is whether there was a core dump. // An extra number (exit code, signal causing a stop) -// is in the high bits. At least that's the idea. -// There are various irregularities. For example, the +// is in the high bits. At least that's the idea. +// There are various irregularities. For example, the // "continued" status is 0xFFFF, distinguishing itself // from stopped via the core dump bit. @@ -413,6 +413,7 @@ func (sa *SockaddrUnix) sockaddr() (unsafe.Pointer, _Socklen, error) { return unsafe.Pointer(&sa.raw), sl, nil } +// SockaddrLinklayer implements the Sockaddr interface for AF_PACKET type sockets. 
type SockaddrLinklayer struct { Protocol uint16 Ifindex int @@ -439,6 +440,7 @@ func (sa *SockaddrLinklayer) sockaddr() (unsafe.Pointer, _Socklen, error) { return unsafe.Pointer(&sa.raw), SizeofSockaddrLinklayer, nil } +// SockaddrNetlink implements the Sockaddr interface for AF_NETLINK type sockets. type SockaddrNetlink struct { Family uint16 Pad uint16 @@ -455,6 +457,8 @@ func (sa *SockaddrNetlink) sockaddr() (unsafe.Pointer, _Socklen, error) { return unsafe.Pointer(&sa.raw), SizeofSockaddrNetlink, nil } +// SockaddrHCI implements the Sockaddr interface for AF_BLUETOOTH type sockets +// using the HCI protocol. type SockaddrHCI struct { Dev uint16 Channel uint16 @@ -468,6 +472,31 @@ func (sa *SockaddrHCI) sockaddr() (unsafe.Pointer, _Socklen, error) { return unsafe.Pointer(&sa.raw), SizeofSockaddrHCI, nil } +// SockaddrL2 implements the Sockaddr interface for AF_BLUETOOTH type sockets +// using the L2CAP protocol. +type SockaddrL2 struct { + PSM uint16 + CID uint16 + Addr [6]uint8 + AddrType uint8 + raw RawSockaddrL2 +} + +func (sa *SockaddrL2) sockaddr() (unsafe.Pointer, _Socklen, error) { + sa.raw.Family = AF_BLUETOOTH + psm := (*[2]byte)(unsafe.Pointer(&sa.raw.Psm)) + psm[0] = byte(sa.PSM) + psm[1] = byte(sa.PSM >> 8) + for i := 0; i < len(sa.Addr); i++ { + sa.raw.Bdaddr[i] = sa.Addr[len(sa.Addr)-1-i] + } + cid := (*[2]byte)(unsafe.Pointer(&sa.raw.Cid)) + cid[0] = byte(sa.CID) + cid[1] = byte(sa.CID >> 8) + sa.raw.Bdaddr_type = sa.AddrType + return unsafe.Pointer(&sa.raw), SizeofSockaddrL2, nil +} + // SockaddrCAN implements the Sockaddr interface for AF_CAN type sockets. // The RxID and TxID fields are used for transport protocol addressing in // (CAN_TP16, CAN_TP20, CAN_MCNET, and CAN_ISOTP), they can be left with @@ -808,6 +837,24 @@ func GetsockoptTCPInfo(fd, level, opt int) (*TCPInfo, error) { return &value, err } +// GetsockoptString returns the string value of the socket option opt for the +// socket associated with fd at the given socket level. 
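The new SockaddrL2 type above gives AF_BLUETOOTH sockets an L2CAP address alongside the existing HCI one; its sockaddr() method byte-swaps the PSM/CID into little-endian and reverses the device address into the kernel's bdaddr order. A hedged sketch of connecting with it on Linux; the PSM and the device address are placeholders:

```
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Open a Bluetooth L2CAP socket (Linux only).
	fd, err := unix.Socket(unix.AF_BLUETOOTH, unix.SOCK_SEQPACKET, unix.BTPROTO_L2CAP)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)

	// Addr is given in the human-readable byte order; the sockaddr() method
	// shown above reverses it into the kernel's bdaddr layout.
	sa := &unix.SockaddrL2{
		PSM:  0x1001,
		Addr: [6]uint8{0x00, 0x11, 0x22, 0x33, 0x44, 0x55},
	}
	if err := unix.Connect(fd, sa); err != nil {
		log.Fatal(err)
	}
}
```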
+func GetsockoptString(fd, level, opt int) (string, error) { + buf := make([]byte, 256) + vallen := _Socklen(len(buf)) + err := getsockopt(fd, level, opt, unsafe.Pointer(&buf[0]), &vallen) + if err != nil { + if err == ERANGE { + buf = make([]byte, vallen) + err = getsockopt(fd, level, opt, unsafe.Pointer(&buf[0]), &vallen) + } + if err != nil { + return "", err + } + } + return string(buf[:vallen-1]), nil +} + func SetsockoptIPMreqn(fd, level, opt int, mreq *IPMreqn) (err error) { return setsockopt(fd, level, opt, unsafe.Pointer(mreq), unsafe.Sizeof(*mreq)) } @@ -926,17 +973,22 @@ func Recvmsg(fd int, p, oob []byte, flags int) (n, oobn int, recvflags int, from msg.Namelen = uint32(SizeofSockaddrAny) var iov Iovec if len(p) > 0 { - iov.Base = (*byte)(unsafe.Pointer(&p[0])) + iov.Base = &p[0] iov.SetLen(len(p)) } var dummy byte if len(oob) > 0 { + var sockType int + sockType, err = GetsockoptInt(fd, SOL_SOCKET, SO_TYPE) + if err != nil { + return + } // receive at least one normal byte - if len(p) == 0 { + if sockType != SOCK_DGRAM && len(p) == 0 { iov.Base = &dummy iov.SetLen(1) } - msg.Control = (*byte)(unsafe.Pointer(&oob[0])) + msg.Control = &oob[0] msg.SetControllen(len(oob)) } msg.Iov = &iov @@ -969,21 +1021,26 @@ func SendmsgN(fd int, p, oob []byte, to Sockaddr, flags int) (n int, err error) } } var msg Msghdr - msg.Name = (*byte)(unsafe.Pointer(ptr)) + msg.Name = (*byte)(ptr) msg.Namelen = uint32(salen) var iov Iovec if len(p) > 0 { - iov.Base = (*byte)(unsafe.Pointer(&p[0])) + iov.Base = &p[0] iov.SetLen(len(p)) } var dummy byte if len(oob) > 0 { + var sockType int + sockType, err = GetsockoptInt(fd, SOL_SOCKET, SO_TYPE) + if err != nil { + return 0, err + } // send at least one normal byte - if len(p) == 0 { + if sockType != SOCK_DGRAM && len(p) == 0 { iov.Base = &dummy iov.SetLen(1) } - msg.Control = (*byte)(unsafe.Pointer(&oob[0])) + msg.Control = &oob[0] msg.SetControllen(len(oob)) } msg.Iov = &iov @@ -1013,7 +1070,7 @@ func ptracePeek(req int, pid int, addr uintptr, out []byte) (count int, err erro var buf [sizeofPtr]byte - // Leading edge. PEEKTEXT/PEEKDATA don't require aligned + // Leading edge. 
PEEKTEXT/PEEKDATA don't require aligned // access (PEEKUSER warns that it might), but if we don't // align our reads, we might straddle an unmapped page // boundary and not get the bytes leading up to the page @@ -1115,6 +1172,10 @@ func PtracePokeData(pid int, addr uintptr, data []byte) (count int, err error) { return ptracePoke(PTRACE_POKEDATA, PTRACE_PEEKDATA, pid, addr, data) } +func PtracePokeUser(pid int, addr uintptr, data []byte) (count int, err error) { + return ptracePoke(PTRACE_POKEUSR, PTRACE_PEEKUSR, pid, addr, data) +} + func PtraceGetRegs(pid int, regsout *PtraceRegs) (err error) { return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) } @@ -1158,22 +1219,6 @@ func ReadDirent(fd int, buf []byte) (n int, err error) { return Getdents(fd, buf) } -func direntIno(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Ino), unsafe.Sizeof(Dirent{}.Ino)) -} - -func direntReclen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Reclen), unsafe.Sizeof(Dirent{}.Reclen)) -} - -func direntNamlen(buf []byte) (uint64, bool) { - reclen, ok := direntReclen(buf) - if !ok { - return 0, false - } - return reclen - uint64(unsafe.Offsetof(Dirent{}.Name)), true -} - //sys mount(source string, target string, fstype string, flags uintptr, data *byte) (err error) func Mount(source string, target string, fstype string, flags uintptr, data string) (err error) { @@ -1252,6 +1297,7 @@ func Getpgrp() (pid int) { //sys PivotRoot(newroot string, putold string) (err error) = SYS_PIVOT_ROOT //sysnb prlimit(pid int, resource int, newlimit *Rlimit, old *Rlimit) (err error) = SYS_PRLIMIT64 //sys Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) (err error) +//sys Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) = SYS_PSELECT6 //sys read(fd int, p []byte) (n int, err error) //sys Removexattr(path string, attr string) (err error) //sys Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) @@ -1278,6 +1324,7 @@ func Setgid(uid int) (err error) { //sys Setpriority(which int, who int, prio int) (err error) //sys Setxattr(path string, attr string, data []byte, flags int) (err error) +//sys Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) //sys Sync() //sys Syncfs(fd int) (err error) //sysnb Sysinfo(info *Sysinfo_t) (err error) @@ -1314,9 +1361,9 @@ func Munmap(b []byte) (err error) { //sys Madvise(b []byte, advice int) (err error) //sys Mprotect(b []byte, prot int) (err error) //sys Mlock(b []byte) (err error) -//sys Munlock(b []byte) (err error) //sys Mlockall(flags int) (err error) //sys Msync(b []byte, flags int) (err error) +//sys Munlock(b []byte) (err error) //sys Munlockall() (err error) // Vmsplice splices user pages from a slice of Iovecs into a pipe specified by fd, @@ -1384,7 +1431,6 @@ func Vmsplice(fd int, iovs []Iovec, flags int) (int, error) { // ModifyLdt // Mount // MovePages -// Mprotect // MqGetsetattr // MqNotify // MqOpen @@ -1396,7 +1442,6 @@ func Vmsplice(fd int, iovs []Iovec, flags int) (int, error) { // Msgget // Msgrcv // Msgsnd -// Newfstatat // Nfsservctl // Personality // Pselect6 @@ -1417,11 +1462,9 @@ func Vmsplice(fd int, iovs []Iovec, flags int) (int, error) { // RtSigtimedwait // SchedGetPriorityMax // SchedGetPriorityMin -// SchedGetaffinity // SchedGetparam // SchedGetscheduler // SchedRrGetInterval -// SchedSetaffinity // SchedSetparam // SchedYield // Security diff --git 
a/vendor/golang.org/x/sys/unix/syscall_linux_386.go b/vendor/golang.org/x/sys/unix/syscall_linux_386.go index 2b881b979..bb8e4fbd8 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_386.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_386.go @@ -14,21 +14,12 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: int32(sec), Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = int32(nsec / 1e9) - tv.Usec = int32(nsec % 1e9 / 1e3) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: int32(sec), Usec: int32(usec)} } //sysnb pipe(p *[2]_C_int) (err error) @@ -63,6 +54,7 @@ func Pipe2(p []int, flags int) (err error) { //sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64_64 //sys Fchown(fd int, uid int, gid int) (err error) = SYS_FCHOWN32 //sys Fstat(fd int, stat *Stat_t) (err error) = SYS_FSTAT64 +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_FSTATAT64 //sys Ftruncate(fd int, length int64) (err error) = SYS_FTRUNCATE64 //sysnb Getegid() (egid int) = SYS_GETEGID32 //sysnb Geteuid() (euid int) = SYS_GETEUID32 @@ -185,9 +177,9 @@ func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { // On x86 Linux, all the socket calls go through an extra indirection, // I think because the 5-register system call interface can't handle -// the 6-argument calls like sendto and recvfrom. Instead the +// the 6-argument calls like sendto and recvfrom. Instead the // arguments to the underlying system call are the number below -// and a pointer to an array of uintptr. We hide the pointer in the +// and a pointer to an array of uintptr. We hide the pointer in the // socketcall assembly to avoid allocation on every system call. 
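As a concrete use of the GetsockoptString helper added to syscall_linux.go above (it retries with a larger buffer on ERANGE and trims the trailing NUL), the sketch below reads one string-valued option; TCP_CONGESTION is just a convenient example of such an option:

```
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	fd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)

	// TCP_CONGESTION reports the congestion-control algorithm as a
	// NUL-terminated string (e.g. "cubic" on many Linux systems).
	name, err := unix.GetsockoptString(fd, unix.IPPROTO_TCP, unix.TCP_CONGESTION)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("congestion control:", name)
}
```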
const ( diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go b/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go index 9516a3fd7..53d38a534 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_amd64.go @@ -11,6 +11,7 @@ package unix //sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64 //sys Fchown(fd int, uid int, gid int) (err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_NEWFSTATAT //sys Fstatfs(fd int, buf *Statfs_t) (err error) //sys Ftruncate(fd int, length int64) (err error) //sysnb Getegid() (egid int) @@ -69,8 +70,6 @@ func Gettimeofday(tv *Timeval) (err error) { return nil } -func Getpagesize() int { return 4096 } - func Time(t *Time_t) (tt Time_t, err error) { var tv Timeval errno := gettimeofday(&tv) @@ -85,19 +84,12 @@ func Time(t *Time_t) (tt Time_t, err error) { //sys Utime(path string, buf *Utimbuf) (err error) -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } //sysnb pipe(p *[2]_C_int) (err error) diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_arm.go b/vendor/golang.org/x/sys/unix/syscall_linux_arm.go index 71d870228..c59f8588f 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_arm.go @@ -11,21 +11,12 @@ import ( "unsafe" ) -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: int32(sec), Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = int32(nsec / 1e9) - tv.Usec = int32(nsec % 1e9 / 1e3) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: int32(sec), Usec: int32(usec)} } func Pipe(p []int) (err error) { @@ -86,6 +77,7 @@ func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { //sys Dup2(oldfd int, newfd int) (err error) //sys Fchown(fd int, uid int, gid int) (err error) = SYS_FCHOWN32 //sys Fstat(fd int, stat *Stat_t) (err error) = SYS_FSTAT64 +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_FSTATAT64 //sysnb Getegid() (egid int) = SYS_GETEGID32 //sysnb Geteuid() (euid int) = SYS_GETEUID32 //sysnb Getgid() (gid int) = SYS_GETGID32 diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go b/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go index 4a136396c..a1e8a609b 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_arm64.go @@ -7,6 +7,7 @@ package unix //sys EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) = SYS_EPOLL_PWAIT +//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64 //sys Fchown(fd int, uid int, gid int) (err 
error) //sys Fstat(fd int, stat *Stat_t) (err error) //sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) @@ -21,7 +22,15 @@ package unix //sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 //sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 //sys Seek(fd int, offset int64, whence int) (off int64, err error) = SYS_LSEEK -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) = SYS_PSELECT6 + +func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { + var ts *Timespec + if timeout != nil { + ts = &Timespec{Sec: timeout.Sec, Nsec: timeout.Usec * 1000} + } + return Pselect(nfd, r, w, e, ts, nil) +} + //sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) //sys Setfsgid(gid int) (err error) //sys Setfsuid(uid int) (err error) @@ -66,23 +75,14 @@ func Lstat(path string, stat *Stat_t) (err error) { //sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) //sys mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) -func Getpagesize() int { return 65536 } - //sysnb Gettimeofday(tv *Timeval) (err error) -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } func Time(t *Time_t) (Time_t, error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_gc.go b/vendor/golang.org/x/sys/unix/syscall_linux_gc.go new file mode 100644 index 000000000..c26e6ec23 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/syscall_linux_gc.go @@ -0,0 +1,14 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build linux,!gccgo + +package unix + +// SyscallNoError may be used instead of Syscall for syscalls that don't fail. +func SyscallNoError(trap, a1, a2, a3 uintptr) (r1, r2 uintptr) + +// RawSyscallNoError may be used instead of RawSyscall for syscalls that don't +// fail. +func RawSyscallNoError(trap, a1, a2, a3 uintptr) (r1, r2 uintptr) diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_gccgo.go b/vendor/golang.org/x/sys/unix/syscall_linux_gccgo.go new file mode 100644 index 000000000..df9c12371 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/syscall_linux_gccgo.go @@ -0,0 +1,21 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
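On linux/arm64 (and the mips64 ports) there is no plain select syscall, so the Select wrapper above converts its Timeval timeout into a Timespec and calls Pselect instead. A small usage sketch: with all three fd sets nil, select(2) degenerates into a portable sub-second sleep, which exercises exactly that conversion path:

```
package main

import (
	"fmt"
	"time"

	"golang.org/x/sys/unix"
)

func main() {
	// 500ms timeout; NsecToTimeval handles the per-arch Timeval field sizes.
	tv := unix.NsecToTimeval(int64(500 * time.Millisecond))
	if _, err := unix.Select(0, nil, nil, nil, &tv); err != nil {
		fmt.Println("select:", err)
		return
	}
	fmt.Println("slept ~500ms")
}
```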
+ +// +build linux +// +build gccgo +// +build 386 arm + +package unix + +import ( + "syscall" + "unsafe" +) + +func seek(fd int, offset int64, whence int) (newoffset int64, err syscall.Errno) { + offsetLow := uint32(offset & 0xffffffff) + offsetHigh := uint32((offset >> 32) & 0xffffffff) + _, _, err = Syscall6(SYS__LLSEEK, uintptr(fd), uintptr(offsetHigh), uintptr(offsetLow), uintptr(unsafe.Pointer(&newoffset)), uintptr(whence), 0) + return newoffset, err +} diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go b/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go index 73318e5c6..090ed404a 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_mips64x.go @@ -9,7 +9,9 @@ package unix //sys Dup2(oldfd int, newfd int) (err error) //sys EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) +//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64 //sys Fchown(fd int, uid int, gid int) (err error) +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_NEWFSTATAT //sys Fstatfs(fd int, buf *Statfs_t) (err error) //sys Ftruncate(fd int, length int64) (err error) //sysnb Getegid() (egid int) @@ -23,7 +25,15 @@ package unix //sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 //sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 //sys Seek(fd int, offset int64, whence int) (off int64, err error) = SYS_LSEEK -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) = SYS_PSELECT6 + +func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { + var ts *Timespec + if timeout != nil { + ts = &Timespec{Sec: timeout.Sec, Nsec: timeout.Usec * 1000} + } + return Pselect(nfd, r, w, e, ts, nil) +} + //sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) //sys Setfsgid(gid int) (err error) //sys Setfsuid(uid int) (err error) @@ -55,8 +65,6 @@ package unix //sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) //sys mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) -func Getpagesize() int { return 65536 } - //sysnb Gettimeofday(tv *Timeval) (err error) func Time(t *Time_t) (tt Time_t, err error) { @@ -73,19 +81,12 @@ func Time(t *Time_t) (tt Time_t, err error) { //sys Utime(path string, buf *Utimbuf) (err error) -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } func Pipe(p []int) (err error) { diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go b/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go index b83d93fdf..3d5817f66 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_mipsx.go @@ -15,6 +15,7 @@ import ( func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) //sys Dup2(oldfd int, newfd int) (err error) +//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = 
SYS_FADVISE64 //sys Fchown(fd int, uid int, gid int) (err error) //sys Ftruncate(fd int, length int64) (err error) = SYS_FTRUNCATE64 //sysnb Getegid() (egid int) @@ -65,6 +66,7 @@ func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, //sys Lstat(path string, stat *Stat_t) (err error) = SYS_LSTAT64 //sys Fstat(fd int, stat *Stat_t) (err error) = SYS_FSTAT64 +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_FSTATAT64 //sys Stat(path string, stat *Stat_t) (err error) = SYS_STAT64 //sys Utime(path string, buf *Utimbuf) (err error) @@ -99,19 +101,12 @@ func Seek(fd int, offset int64, whence int) (off int64, err error) { return } -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int32(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: int32(sec), Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = int32(nsec / 1e9) - tv.Usec = int32(nsec % 1e9 / 1e3) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: int32(sec), Usec: int32(usec)} } //sysnb pipe2(p *[2]_C_int, flags int) (err error) @@ -235,5 +230,3 @@ func Poll(fds []PollFd, timeout int) (n int, err error) { } return poll(&fds[0], len(fds), timeout) } - -func Getpagesize() int { return 4096 } diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go b/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go index 60770f627..6fb8733d6 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_ppc64x.go @@ -9,8 +9,10 @@ package unix //sys EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) //sys Dup2(oldfd int, newfd int) (err error) +//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64 //sys Fchown(fd int, uid int, gid int) (err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_NEWFSTATAT //sys Fstatfs(fd int, buf *Statfs_t) (err error) //sys Ftruncate(fd int, length int64) (err error) //sysnb Getegid() (egid int) @@ -28,7 +30,7 @@ package unix //sys Pread(fd int, p []byte, offset int64) (n int, err error) = SYS_PREAD64 //sys Pwrite(fd int, p []byte, offset int64) (n int, err error) = SYS_PWRITE64 //sys Seek(fd int, offset int64, whence int) (off int64, err error) = SYS_LSEEK -//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) +//sys Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) = SYS__NEWSELECT //sys sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) //sys Setfsgid(gid int) (err error) //sys Setfsuid(uid int) (err error) @@ -61,26 +63,17 @@ package unix //sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) //sys mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) -func Getpagesize() int { return 65536 } - //sysnb Gettimeofday(tv *Timeval) (err error) //sysnb Time(t *Time_t) (tt Time_t, err error) //sys Utime(path string, buf *Utimbuf) (err error) -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) 
Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } func (r *PtraceRegs) PC() uint64 { return r.Nip } diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go b/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go index 1708a4bbf..c0d86e722 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_s390x.go @@ -15,6 +15,7 @@ import ( //sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64 //sys Fchown(fd int, uid int, gid int) (err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_NEWFSTATAT //sys Fstatfs(fd int, buf *Statfs_t) (err error) //sys Ftruncate(fd int, length int64) (err error) //sysnb Getegid() (egid int) @@ -46,8 +47,6 @@ import ( //sysnb getgroups(n int, list *_Gid_t) (nn int, err error) //sysnb setgroups(n int, list *_Gid_t) (err error) -func Getpagesize() int { return 4096 } - //sysnb Gettimeofday(tv *Timeval) (err error) func Time(t *Time_t) (tt Time_t, err error) { @@ -64,19 +63,12 @@ func Time(t *Time_t) (tt Time_t, err error) { //sys Utime(path string, buf *Utimbuf) (err error) -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = nsec % 1e9 / 1e3 - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } //sysnb pipe2(p *[2]_C_int, flags int) (err error) diff --git a/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go b/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go index 20b7454d7..78c1e0df1 100644 --- a/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go +++ b/vendor/golang.org/x/sys/unix/syscall_linux_sparc64.go @@ -6,15 +6,12 @@ package unix -import ( - "sync/atomic" - "syscall" -) - //sys EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) +//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_FADVISE64 //sys Dup2(oldfd int, newfd int) (err error) //sys Fchown(fd int, uid int, gid int) (err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) = SYS_FSTATAT64 //sys Fstatfs(fd int, buf *Statfs_t) (err error) //sys Ftruncate(fd int, length int64) (err error) //sysnb Getegid() (egid int) @@ -63,21 +60,6 @@ import ( //sys sendmsg(s int, msg *Msghdr, flags int) (n int, err error) //sys mmap(addr uintptr, length uintptr, prot int, flags int, fd int, offset int64) (xaddr uintptr, err error) -func sysconf(name int) (n int64, err syscall.Errno) - -// pageSize caches the value of Getpagesize, since it can't change -// once the system is booted. 
-var pageSize int64 // accessed atomically - -func Getpagesize() int { - n := atomic.LoadInt64(&pageSize) - if n == 0 { - n, _ = sysconf(_SC_PAGESIZE) - atomic.StoreInt64(&pageSize, n) - } - return int(n) -} - func Ioperm(from int, num int, on int) (err error) { return ENOSYS } @@ -102,19 +84,12 @@ func Time(t *Time_t) (tt Time_t, err error) { //sys Utime(path string, buf *Utimbuf) (err error) -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Sec = nsec / 1e9 - tv.Usec = int32(nsec % 1e9 / 1e3) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } func (r *PtraceRegs) PC() uint64 { return r.Tpc } diff --git a/vendor/golang.org/x/sys/unix/syscall_netbsd.go b/vendor/golang.org/x/sys/unix/syscall_netbsd.go index 01f6a48c8..e1a3baa23 100644 --- a/vendor/golang.org/x/sys/unix/syscall_netbsd.go +++ b/vendor/golang.org/x/sys/unix/syscall_netbsd.go @@ -17,6 +17,7 @@ import ( "unsafe" ) +// SockaddrDatalink implements the Sockaddr interface for AF_LINK type sockets. type SockaddrDatalink struct { Len uint8 Family uint8 @@ -55,7 +56,6 @@ func sysctlNodes(mib []_C_int) (nodes []Sysctlnode, err error) { } func nametomib(name string) (mib []_C_int, err error) { - // Split name into components. var parts []string last := 0 @@ -93,18 +93,6 @@ func nametomib(name string) (mib []_C_int, err error) { return mib, nil } -func direntIno(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Fileno), unsafe.Sizeof(Dirent{}.Fileno)) -} - -func direntReclen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Reclen), unsafe.Sizeof(Dirent{}.Reclen)) -} - -func direntNamlen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Namlen), unsafe.Sizeof(Dirent{}.Namlen)) -} - //sysnb pipe() (fd1 int, fd2 int, err error) func Pipe(p []int) (err error) { if len(p) != 2 { @@ -119,11 +107,118 @@ func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { return getdents(fd, buf) } +const ImplementsGetwd = true + +//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD + +func Getwd() (string, error) { + var buf [PathMax]byte + _, err := Getcwd(buf[0:]) + if err != nil { + return "", err + } + n := clen(buf[:]) + if n < 1 { + return "", EINVAL + } + return string(buf[:n]), nil +} + // TODO func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { return -1, ENOSYS } +func setattrlistTimes(path string, times []Timespec, flags int) error { + // used on Darwin for UtimesNano + return ENOSYS +} + +//sys ioctl(fd int, req uint, arg uintptr) (err error) + +// ioctl itself should not be exposed directly, but additional get/set +// functions for specific types are permissible. + +// IoctlSetInt performs an ioctl operation which sets an integer value +// on fd, using the specified request number. 
+func IoctlSetInt(fd int, req uint, value int) error { + return ioctl(fd, req, uintptr(value)) +} + +func IoctlSetWinsize(fd int, req uint, value *Winsize) error { + return ioctl(fd, req, uintptr(unsafe.Pointer(value))) +} + +func IoctlSetTermios(fd int, req uint, value *Termios) error { + return ioctl(fd, req, uintptr(unsafe.Pointer(value))) +} + +// IoctlGetInt performs an ioctl operation which gets an integer value +// from fd, using the specified request number. +func IoctlGetInt(fd int, req uint) (int, error) { + var value int + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return value, err +} + +func IoctlGetWinsize(fd int, req uint) (*Winsize, error) { + var value Winsize + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return &value, err +} + +func IoctlGetTermios(fd int, req uint) (*Termios, error) { + var value Termios + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return &value, err +} + +func Uname(uname *Utsname) error { + mib := []_C_int{CTL_KERN, KERN_OSTYPE} + n := unsafe.Sizeof(uname.Sysname) + if err := sysctl(mib, &uname.Sysname[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_HOSTNAME} + n = unsafe.Sizeof(uname.Nodename) + if err := sysctl(mib, &uname.Nodename[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_OSRELEASE} + n = unsafe.Sizeof(uname.Release) + if err := sysctl(mib, &uname.Release[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_VERSION} + n = unsafe.Sizeof(uname.Version) + if err := sysctl(mib, &uname.Version[0], &n, nil, 0); err != nil { + return err + } + + // The version might have newlines or tabs in it, convert them to + // spaces. + for i, b := range uname.Version { + if b == '\n' || b == '\t' { + if i == len(uname.Version)-1 { + uname.Version[i] = 0 + } else { + uname.Version[i] = ' ' + } + } + } + + mib = []_C_int{CTL_HW, HW_MACHINE} + n = unsafe.Sizeof(uname.Machine) + if err := sysctl(mib, &uname.Machine[0], &n, nil, 0); err != nil { + return err + } + + return nil +} + /* * Exposed directly */ @@ -138,13 +233,16 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e //sys Dup(fd int) (nfd int, err error) //sys Dup2(from int, to int) (err error) //sys Exit(code int) +//sys Fadvise(fd int, offset int64, length int64, advice int) (err error) = SYS_POSIX_FADVISE //sys Fchdir(fd int) (err error) //sys Fchflags(fd int, flags int) (err error) //sys Fchmod(fd int, mode uint32) (err error) +//sys Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) //sys Fchown(fd int, uid int, gid int) (err error) //sys Flock(fd int, how int) (err error) //sys Fpathconf(fd int, name int) (val int, err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) //sys Fsync(fd int) (err error) //sys Ftruncate(fd int, length int64) (err error) //sysnb Getegid() (egid int) @@ -170,11 +268,6 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e //sys Mkdir(path string, mode uint32) (err error) //sys Mkfifo(path string, mode uint32) (err error) //sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) //sys Nanosleep(time *Timespec, leftover *Timespec) (err error) //sys Open(path string, mode int, perm uint32) (fd int, err 
error) //sys Pathconf(path string, name int) (val int, err error) @@ -210,6 +303,7 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e //sys munmap(addr uintptr, length uintptr) (err error) //sys readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ //sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE +//sys utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) /* * Unimplemented @@ -229,7 +323,6 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e // __msync13 // __ntp_gettime30 // __posix_chown -// __posix_fadvise50 // __posix_fchown // __posix_lchown // __posix_rename @@ -388,7 +481,6 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e // getitimer // getvfsstat // getxattr -// ioctl // ktrace // lchflags // lchmod @@ -426,7 +518,6 @@ func sendfile(outfd int, infd int, offset *int64, count int) (written int, err e // ntp_adjtime // pmc_control // pmc_get_info -// poll // pollts // preadv // profil diff --git a/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go b/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go index afaca0983..24f74e58c 100644 --- a/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go +++ b/vendor/golang.org/x/sys/unix/syscall_netbsd_386.go @@ -6,21 +6,12 @@ package unix -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go b/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go index a6ff04ce5..6878bf7ff 100644 --- a/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_netbsd_amd64.go @@ -6,21 +6,12 @@ package unix -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int64(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go b/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go index 68a6969b2..dbbfcf71d 100644 --- a/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/syscall_netbsd_arm.go @@ -6,21 +6,12 @@ package unix -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return 
Timespec{Sec: sec, Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_openbsd.go b/vendor/golang.org/x/sys/unix/syscall_openbsd.go index c0d2b6c80..387e1cfcf 100644 --- a/vendor/golang.org/x/sys/unix/syscall_openbsd.go +++ b/vendor/golang.org/x/sys/unix/syscall_openbsd.go @@ -13,10 +13,12 @@ package unix import ( + "sort" "syscall" "unsafe" ) +// SockaddrDatalink implements the Sockaddr interface for AF_LINK type sockets. type SockaddrDatalink struct { Len uint8 Family uint8 @@ -32,39 +34,15 @@ type SockaddrDatalink struct { func Syscall9(trap, a1, a2, a3, a4, a5, a6, a7, a8, a9 uintptr) (r1, r2 uintptr, err syscall.Errno) func nametomib(name string) (mib []_C_int, err error) { - - // Perform lookup via a binary search - left := 0 - right := len(sysctlMib) - 1 - for { - idx := left + (right-left)/2 - switch { - case name == sysctlMib[idx].ctlname: - return sysctlMib[idx].ctloid, nil - case name > sysctlMib[idx].ctlname: - left = idx + 1 - default: - right = idx - 1 - } - if left > right { - break - } + i := sort.Search(len(sysctlMib), func(i int) bool { + return sysctlMib[i].ctlname >= name + }) + if i < len(sysctlMib) && sysctlMib[i].ctlname == name { + return sysctlMib[i].ctloid, nil } return nil, EINVAL } -func direntIno(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Fileno), unsafe.Sizeof(Dirent{}.Fileno)) -} - -func direntReclen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Reclen), unsafe.Sizeof(Dirent{}.Reclen)) -} - -func direntNamlen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Namlen), unsafe.Sizeof(Dirent{}.Namlen)) -} - //sysnb pipe(p *[2]_C_int) (err error) func Pipe(p []int) (err error) { if len(p) != 2 { @@ -82,6 +60,23 @@ func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { return getdents(fd, buf) } +const ImplementsGetwd = true + +//sys Getcwd(buf []byte) (n int, err error) = SYS___GETCWD + +func Getwd() (string, error) { + var buf [PathMax]byte + _, err := Getcwd(buf[0:]) + if err != nil { + return "", err + } + n := clen(buf[:]) + if n < 1 { + return "", EINVAL + } + return string(buf[:n]), nil +} + // TODO func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { return -1, ENOSYS @@ -102,6 +97,96 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { return } +func setattrlistTimes(path string, times []Timespec, flags int) error { + // used on Darwin for UtimesNano + return ENOSYS +} + +//sys ioctl(fd int, req uint, arg uintptr) (err error) + +// ioctl itself should not be exposed directly, but additional get/set +// functions for specific types are permissible. + +// IoctlSetInt performs an ioctl operation which sets an integer value +// on fd, using the specified request number. 
+func IoctlSetInt(fd int, req uint, value int) error { + return ioctl(fd, req, uintptr(value)) +} + +func IoctlSetWinsize(fd int, req uint, value *Winsize) error { + return ioctl(fd, req, uintptr(unsafe.Pointer(value))) +} + +func IoctlSetTermios(fd int, req uint, value *Termios) error { + return ioctl(fd, req, uintptr(unsafe.Pointer(value))) +} + +// IoctlGetInt performs an ioctl operation which gets an integer value +// from fd, using the specified request number. +func IoctlGetInt(fd int, req uint) (int, error) { + var value int + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return value, err +} + +func IoctlGetWinsize(fd int, req uint) (*Winsize, error) { + var value Winsize + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return &value, err +} + +func IoctlGetTermios(fd int, req uint) (*Termios, error) { + var value Termios + err := ioctl(fd, req, uintptr(unsafe.Pointer(&value))) + return &value, err +} + +func Uname(uname *Utsname) error { + mib := []_C_int{CTL_KERN, KERN_OSTYPE} + n := unsafe.Sizeof(uname.Sysname) + if err := sysctl(mib, &uname.Sysname[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_HOSTNAME} + n = unsafe.Sizeof(uname.Nodename) + if err := sysctl(mib, &uname.Nodename[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_OSRELEASE} + n = unsafe.Sizeof(uname.Release) + if err := sysctl(mib, &uname.Release[0], &n, nil, 0); err != nil { + return err + } + + mib = []_C_int{CTL_KERN, KERN_VERSION} + n = unsafe.Sizeof(uname.Version) + if err := sysctl(mib, &uname.Version[0], &n, nil, 0); err != nil { + return err + } + + // The version might have newlines or tabs in it, convert them to + // spaces. + for i, b := range uname.Version { + if b == '\n' || b == '\t' { + if i == len(uname.Version)-1 { + uname.Version[i] = 0 + } else { + uname.Version[i] = ' ' + } + } + } + + mib = []_C_int{CTL_HW, HW_MACHINE} + n = unsafe.Sizeof(uname.Machine) + if err := sysctl(mib, &uname.Machine[0], &n, nil, 0); err != nil { + return err + } + + return nil +} + /* * Exposed directly */ @@ -119,10 +204,12 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { //sys Fchdir(fd int) (err error) //sys Fchflags(fd int, flags int) (err error) //sys Fchmod(fd int, mode uint32) (err error) +//sys Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) //sys Fchown(fd int, uid int, gid int) (err error) //sys Flock(fd int, how int) (err error) //sys Fpathconf(fd int, name int) (val int, err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) //sys Fstatfs(fd int, stat *Statfs_t) (err error) //sys Fsync(fd int) (err error) //sys Ftruncate(fd int, length int64) (err error) @@ -149,11 +236,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { //sys Mkdir(path string, mode uint32) (err error) //sys Mkfifo(path string, mode uint32) (err error) //sys Mknod(path string, mode uint32, dev int) (err error) -//sys Mlock(b []byte) (err error) -//sys Mlockall(flags int) (err error) -//sys Mprotect(b []byte, prot int) (err error) -//sys Munlock(b []byte) (err error) -//sys Munlockall() (err error) //sys Nanosleep(time *Timespec, leftover *Timespec) (err error) //sys Open(path string, mode int, perm uint32) (fd int, err error) //sys Pathconf(path string, name int) (val int, err error) @@ -193,6 +275,7 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { //sys munmap(addr uintptr, length uintptr) (err error) //sys 
readlen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_READ //sys writelen(fd int, buf *byte, nbuf int) (n int, err error) = SYS_WRITE +//sys utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) /* * Unimplemented @@ -226,7 +309,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // getresuid // getrtable // getthrid -// ioctl // ktrace // lfs_bmapv // lfs_markv @@ -247,7 +329,6 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // nfssvc // nnpfspioctl // openat -// poll // preadv // profil // pwritev @@ -282,6 +363,5 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { // thrsleep // thrwakeup // unlinkat -// utimensat // vfork // writev diff --git a/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go b/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go index a66ddc59c..994964a91 100644 --- a/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go +++ b/vendor/golang.org/x/sys/unix/syscall_openbsd_386.go @@ -6,21 +6,12 @@ package unix -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go b/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go index 0776c1faf..649e67fcc 100644 --- a/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_openbsd_amd64.go @@ -6,21 +6,12 @@ package unix -func Getpagesize() int { return 4096 } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = nsec / 1e9 - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } func SetKevent(k *Kevent_t, fd, mode, flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go b/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go index 14ddaf3f3..59844f504 100644 --- a/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/syscall_openbsd_arm.go @@ -6,23 +6,12 @@ package unix -import "syscall" - -func Getpagesize() int { return syscall.Getpagesize() } - -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = int64(nsec / 1e9) - ts.Nsec = int32(nsec % 1e9) - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: int32(nsec)} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = int32(nsec % 1e9 / 1e3) - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: int32(usec)} } func SetKevent(k *Kevent_t, fd, mode, 
flags int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_solaris.go b/vendor/golang.org/x/sys/unix/syscall_solaris.go index 0d4e5c4e6..f4d2a3451 100644 --- a/vendor/golang.org/x/sys/unix/syscall_solaris.go +++ b/vendor/golang.org/x/sys/unix/syscall_solaris.go @@ -13,7 +13,6 @@ package unix import ( - "sync/atomic" "syscall" "unsafe" ) @@ -24,6 +23,7 @@ type syscallFunc uintptr func rawSysvicall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) func sysvicall6(trap, nargs, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2 uintptr, err syscall.Errno) +// SockaddrDatalink implements the Sockaddr interface for AF_LINK type sockets. type SockaddrDatalink struct { Family uint16 Index uint16 @@ -35,31 +35,6 @@ type SockaddrDatalink struct { raw RawSockaddrDatalink } -func clen(n []byte) int { - for i := 0; i < len(n); i++ { - if n[i] == 0 { - return i - } - } - return len(n) -} - -func direntIno(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Ino), unsafe.Sizeof(Dirent{}.Ino)) -} - -func direntReclen(buf []byte) (uint64, bool) { - return readInt(buf, unsafe.Offsetof(Dirent{}.Reclen), unsafe.Sizeof(Dirent{}.Reclen)) -} - -func direntNamlen(buf []byte) (uint64, bool) { - reclen, ok := direntReclen(buf) - if !ok { - return 0, false - } - return reclen - uint64(unsafe.Offsetof(Dirent{}.Name)), true -} - //sysnb pipe(p *[2]_C_int) (n int, err error) func Pipe(p []int) (err error) { @@ -140,6 +115,18 @@ func Getsockname(fd int) (sa Sockaddr, err error) { return anyToSockaddr(&rsa) } +// GetsockoptString returns the string value of the socket option opt for the +// socket associated with fd at the given socket level. +func GetsockoptString(fd, level, opt int) (string, error) { + buf := make([]byte, 256) + vallen := _Socklen(len(buf)) + err := getsockopt(fd, level, opt, unsafe.Pointer(&buf[0]), &vallen) + if err != nil { + return "", err + } + return string(buf[:vallen-1]), nil +} + const ImplementsGetwd = true //sys Getcwd(buf []byte) (n int, err error) @@ -167,7 +154,7 @@ func Getwd() (wd string, err error) { func Getgroups() (gids []int, err error) { n, err := getgroups(0, nil) - // Check for error and sanity check group count. Newer versions of + // Check for error and sanity check group count. Newer versions of // Solaris allow up to 1024 (NGROUPS_MAX). if n < 0 || n > 1024 { if err != nil { @@ -351,7 +338,7 @@ func Futimesat(dirfd int, path string, tv []Timeval) error { } // Solaris doesn't have an futimes function because it allows NULL to be -// specified as the path for futimesat. However, Go doesn't like +// specified as the path for futimesat. However, Go doesn't like // NULL-style string interfaces, so this simple wrapper is provided. 
func Futimes(fd int, tv []Timeval) error { if tv == nil { @@ -515,6 +502,24 @@ func Acct(path string) (err error) { return acct(pathp) } +//sys __makedev(version int, major uint, minor uint) (val uint64) + +func Mkdev(major, minor uint32) uint64 { + return __makedev(NEWDEV, uint(major), uint(minor)) +} + +//sys __major(version int, dev uint64) (val uint) + +func Major(dev uint64) uint32 { + return uint32(__major(NEWDEV, dev)) +} + +//sys __minor(version int, dev uint64) (val uint) + +func Minor(dev uint64) uint32 { + return uint32(__minor(NEWDEV, dev)) +} + /* * Expose the ioctl function */ @@ -561,6 +566,15 @@ func IoctlGetTermio(fd int, req uint) (*Termio, error) { return &value, err } +//sys poll(fds *PollFd, nfds int, timeout int) (n int, err error) + +func Poll(fds []PollFd, timeout int) (n int, err error) { + if len(fds) == 0 { + return poll(nil, 0, timeout) + } + return poll(&fds[0], len(fds), timeout) +} + /* * Exposed directly */ @@ -581,9 +595,10 @@ func IoctlGetTermio(fd int, req uint) (*Termio, error) { //sys Fchown(fd int, uid int, gid int) (err error) //sys Fchownat(dirfd int, path string, uid int, gid int, flags int) (err error) //sys Fdatasync(fd int) (err error) -//sys Flock(fd int, how int) (err error) +//sys Flock(fd int, how int) (err error) //sys Fpathconf(fd int, name int) (val int, err error) //sys Fstat(fd int, stat *Stat_t) (err error) +//sys Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) //sys Fstatvfs(fd int, vfsstat *Statvfs_t) (err error) //sys Getdents(fd int, buf []byte, basep *uintptr) (n int, err error) //sysnb Getgid() (gid int) @@ -613,6 +628,7 @@ func IoctlGetTermio(fd int, req uint) (*Termio, error) { //sys Mlock(b []byte) (err error) //sys Mlockall(flags int) (err error) //sys Mprotect(b []byte, prot int) (err error) +//sys Msync(b []byte, flags int) (err error) //sys Munlock(b []byte) (err error) //sys Munlockall() (err error) //sys Nanosleep(time *Timespec, leftover *Timespec) (err error) @@ -628,6 +644,7 @@ func IoctlGetTermio(fd int, req uint) (*Termio, error) { //sys Renameat(olddirfd int, oldpath string, newdirfd int, newpath string) (err error) //sys Rmdir(path string) (err error) //sys Seek(fd int, offset int64, whence int) (newoffset int64, err error) = lseek +//sys Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) //sysnb Setegid(egid int) (err error) //sysnb Seteuid(euid int) (err error) //sysnb Setgid(gid int) (err error) @@ -699,18 +716,3 @@ func Mmap(fd int, offset int64, length int, prot int, flags int) (data []byte, e func Munmap(b []byte) (err error) { return mapper.Munmap(b) } - -//sys sysconf(name int) (n int64, err error) - -// pageSize caches the value of Getpagesize, since it can't change -// once the system is booted. 
-var pageSize int64 // accessed atomically - -func Getpagesize() int { - n := atomic.LoadInt64(&pageSize) - if n == 0 { - n, _ = sysconf(_SC_PAGESIZE) - atomic.StoreInt64(&pageSize, n) - } - return int(n) -} diff --git a/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go b/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go index 5aff62c3b..9d4e7a678 100644 --- a/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go +++ b/vendor/golang.org/x/sys/unix/syscall_solaris_amd64.go @@ -6,19 +6,12 @@ package unix -func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } - -func NsecToTimespec(nsec int64) (ts Timespec) { - ts.Sec = nsec / 1e9 - ts.Nsec = nsec % 1e9 - return +func setTimespec(sec, nsec int64) Timespec { + return Timespec{Sec: sec, Nsec: nsec} } -func NsecToTimeval(nsec int64) (tv Timeval) { - nsec += 999 // round up to microsecond - tv.Usec = nsec % 1e9 / 1e3 - tv.Sec = int64(nsec / 1e9) - return +func setTimeval(sec, usec int64) Timeval { + return Timeval{Sec: sec, Usec: usec} } func (iov *Iovec) SetLen(length int) { diff --git a/vendor/golang.org/x/sys/unix/syscall_unix.go b/vendor/golang.org/x/sys/unix/syscall_unix.go index 3ed8a91f5..8c66ae518 100644 --- a/vendor/golang.org/x/sys/unix/syscall_unix.go +++ b/vendor/golang.org/x/sys/unix/syscall_unix.go @@ -7,6 +7,7 @@ package unix import ( + "bytes" "runtime" "sync" "syscall" @@ -50,6 +51,15 @@ func errnoErr(e syscall.Errno) error { return e } +// clen returns the index of the first NULL byte in n or len(n) if n contains no NULL byte. +func clen(n []byte) int { + i := bytes.IndexByte(n, 0) + if i == -1 { + i = len(n) + } + return i +} + // Mmap manager, for use by operating system-specific implementations. type mmapper struct { @@ -138,16 +148,19 @@ func Write(fd int, p []byte) (n int, err error) { // creation of IPv6 sockets to return EAFNOSUPPORT. var SocketDisableIPv6 bool +// Sockaddr represents a socket address. type Sockaddr interface { sockaddr() (ptr unsafe.Pointer, len _Socklen, err error) // lowercase; only we can define Sockaddrs } +// SockaddrInet4 implements the Sockaddr interface for AF_INET type sockets. type SockaddrInet4 struct { Port int Addr [4]byte raw RawSockaddrInet4 } +// SockaddrInet6 implements the Sockaddr interface for AF_INET6 type sockets. type SockaddrInet6 struct { Port int ZoneId uint32 @@ -155,6 +168,7 @@ type SockaddrInet6 struct { raw RawSockaddrInet6 } +// SockaddrUnix implements the Sockaddr interface for AF_UNIX type sockets. type SockaddrUnix struct { Name string raw RawSockaddrUnix @@ -192,6 +206,20 @@ func GetsockoptInt(fd, level, opt int) (value int, err error) { return int(n), err } +func GetsockoptLinger(fd, level, opt int) (*Linger, error) { + var linger Linger + vallen := _Socklen(SizeofLinger) + err := getsockopt(fd, level, opt, unsafe.Pointer(&linger), &vallen) + return &linger, err +} + +func GetsockoptTimeval(fd, level, opt int) (*Timeval, error) { + var tv Timeval + vallen := _Socklen(unsafe.Sizeof(tv)) + err := getsockopt(fd, level, opt, unsafe.Pointer(&tv), &vallen) + return &tv, err +} + func Recvfrom(fd int, p []byte, flags int) (n int, from Sockaddr, err error) { var rsa RawSockaddrAny var len _Socklen = SizeofSockaddrAny @@ -291,3 +319,12 @@ func SetNonblock(fd int, nonblocking bool) (err error) { _, err = fcntl(fd, F_SETFL, flag) return err } + +// Exec calls execve(2), which replaces the calling executable in the process +// tree. 
argv0 should be the full path to an executable ("/bin/ls") and the +// executable name should also be the first argument in argv (["ls", "-l"]). +// envv are the environment variables that should be passed to the new +// process (["USER=go", "PWD=/tmp"]). +func Exec(argv0 string, argv []string, envv []string) error { + return syscall.Exec(argv0, argv, envv) +} diff --git a/vendor/golang.org/x/sys/unix/timestruct.go b/vendor/golang.org/x/sys/unix/timestruct.go new file mode 100644 index 000000000..47b9011ee --- /dev/null +++ b/vendor/golang.org/x/sys/unix/timestruct.go @@ -0,0 +1,82 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build darwin dragonfly freebsd linux netbsd openbsd solaris + +package unix + +import "time" + +// TimespecToNsec converts a Timespec value into a number of +// nanoseconds since the Unix epoch. +func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } + +// NsecToTimespec takes a number of nanoseconds since the Unix epoch +// and returns the corresponding Timespec value. +func NsecToTimespec(nsec int64) Timespec { + sec := nsec / 1e9 + nsec = nsec % 1e9 + if nsec < 0 { + nsec += 1e9 + sec-- + } + return setTimespec(sec, nsec) +} + +// TimeToTimespec converts t into a Timespec. +// On some 32-bit systems the range of valid Timespec values are smaller +// than that of time.Time values. So if t is out of the valid range of +// Timespec, it returns a zero Timespec and ERANGE. +func TimeToTimespec(t time.Time) (Timespec, error) { + sec := t.Unix() + nsec := int64(t.Nanosecond()) + ts := setTimespec(sec, nsec) + + // Currently all targets have either int32 or int64 for Timespec.Sec. + // If there were a new target with floating point type for it, we have + // to consider the rounding error. + if int64(ts.Sec) != sec { + return Timespec{}, ERANGE + } + return ts, nil +} + +// TimevalToNsec converts a Timeval value into a number of nanoseconds +// since the Unix epoch. +func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } + +// NsecToTimeval takes a number of nanoseconds since the Unix epoch +// and returns the corresponding Timeval value. +func NsecToTimeval(nsec int64) Timeval { + nsec += 999 // round up to microsecond + usec := nsec % 1e9 / 1e3 + sec := nsec / 1e9 + if usec < 0 { + usec += 1e6 + sec-- + } + return setTimeval(sec, usec) +} + +// Unix returns ts as the number of seconds and nanoseconds elapsed since the +// Unix epoch. +func (ts *Timespec) Unix() (sec int64, nsec int64) { + return int64(ts.Sec), int64(ts.Nsec) +} + +// Unix returns tv as the number of seconds and nanoseconds elapsed since the +// Unix epoch. +func (tv *Timeval) Unix() (sec int64, nsec int64) { + return int64(tv.Sec), int64(tv.Usec) * 1000 +} + +// Nano returns ts as the number of nanoseconds elapsed since the Unix epoch. +func (ts *Timespec) Nano() int64 { + return int64(ts.Sec)*1e9 + int64(ts.Nsec) +} + +// Nano returns tv as the number of nanoseconds elapsed since the Unix epoch. 
+func (tv *Timeval) Nano() int64 { + return int64(tv.Sec)*1e9 + int64(tv.Usec)*1000 +} diff --git a/vendor/golang.org/x/sys/unix/zerrors_darwin_386.go b/vendor/golang.org/x/sys/unix/zerrors_darwin_386.go index 1c68758b6..dcba88424 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_darwin_386.go +++ b/vendor/golang.org/x/sys/unix/zerrors_darwin_386.go @@ -49,6 +49,86 @@ const ( AF_UNSPEC = 0x0 AF_UTUN = 0x26 ALTWERASE = 0x200 + ATTR_BIT_MAP_COUNT = 0x5 + ATTR_CMN_ACCESSMASK = 0x20000 + ATTR_CMN_ACCTIME = 0x1000 + ATTR_CMN_ADDEDTIME = 0x10000000 + ATTR_CMN_BKUPTIME = 0x2000 + ATTR_CMN_CHGTIME = 0x800 + ATTR_CMN_CRTIME = 0x200 + ATTR_CMN_DATA_PROTECT_FLAGS = 0x40000000 + ATTR_CMN_DEVID = 0x2 + ATTR_CMN_DOCUMENT_ID = 0x100000 + ATTR_CMN_ERROR = 0x20000000 + ATTR_CMN_EXTENDED_SECURITY = 0x400000 + ATTR_CMN_FILEID = 0x2000000 + ATTR_CMN_FLAGS = 0x40000 + ATTR_CMN_FNDRINFO = 0x4000 + ATTR_CMN_FSID = 0x4 + ATTR_CMN_FULLPATH = 0x8000000 + ATTR_CMN_GEN_COUNT = 0x80000 + ATTR_CMN_GRPID = 0x10000 + ATTR_CMN_GRPUUID = 0x1000000 + ATTR_CMN_MODTIME = 0x400 + ATTR_CMN_NAME = 0x1 + ATTR_CMN_NAMEDATTRCOUNT = 0x80000 + ATTR_CMN_NAMEDATTRLIST = 0x100000 + ATTR_CMN_OBJID = 0x20 + ATTR_CMN_OBJPERMANENTID = 0x40 + ATTR_CMN_OBJTAG = 0x10 + ATTR_CMN_OBJTYPE = 0x8 + ATTR_CMN_OWNERID = 0x8000 + ATTR_CMN_PARENTID = 0x4000000 + ATTR_CMN_PAROBJID = 0x80 + ATTR_CMN_RETURNED_ATTRS = 0x80000000 + ATTR_CMN_SCRIPT = 0x100 + ATTR_CMN_SETMASK = 0x41c7ff00 + ATTR_CMN_USERACCESS = 0x200000 + ATTR_CMN_UUID = 0x800000 + ATTR_CMN_VALIDMASK = 0xffffffff + ATTR_CMN_VOLSETMASK = 0x6700 + ATTR_FILE_ALLOCSIZE = 0x4 + ATTR_FILE_CLUMPSIZE = 0x10 + ATTR_FILE_DATAALLOCSIZE = 0x400 + ATTR_FILE_DATAEXTENTS = 0x800 + ATTR_FILE_DATALENGTH = 0x200 + ATTR_FILE_DEVTYPE = 0x20 + ATTR_FILE_FILETYPE = 0x40 + ATTR_FILE_FORKCOUNT = 0x80 + ATTR_FILE_FORKLIST = 0x100 + ATTR_FILE_IOBLOCKSIZE = 0x8 + ATTR_FILE_LINKCOUNT = 0x1 + ATTR_FILE_RSRCALLOCSIZE = 0x2000 + ATTR_FILE_RSRCEXTENTS = 0x4000 + ATTR_FILE_RSRCLENGTH = 0x1000 + ATTR_FILE_SETMASK = 0x20 + ATTR_FILE_TOTALSIZE = 0x2 + ATTR_FILE_VALIDMASK = 0x37ff + ATTR_VOL_ALLOCATIONCLUMP = 0x40 + ATTR_VOL_ATTRIBUTES = 0x40000000 + ATTR_VOL_CAPABILITIES = 0x20000 + ATTR_VOL_DIRCOUNT = 0x400 + ATTR_VOL_ENCODINGSUSED = 0x10000 + ATTR_VOL_FILECOUNT = 0x200 + ATTR_VOL_FSTYPE = 0x1 + ATTR_VOL_INFO = 0x80000000 + ATTR_VOL_IOBLOCKSIZE = 0x80 + ATTR_VOL_MAXOBJCOUNT = 0x800 + ATTR_VOL_MINALLOCATION = 0x20 + ATTR_VOL_MOUNTEDDEVICE = 0x8000 + ATTR_VOL_MOUNTFLAGS = 0x4000 + ATTR_VOL_MOUNTPOINT = 0x1000 + ATTR_VOL_NAME = 0x2000 + ATTR_VOL_OBJCOUNT = 0x100 + ATTR_VOL_QUOTA_SIZE = 0x10000000 + ATTR_VOL_RESERVED_SIZE = 0x20000000 + ATTR_VOL_SETMASK = 0x80002000 + ATTR_VOL_SIGNATURE = 0x2 + ATTR_VOL_SIZE = 0x4 + ATTR_VOL_SPACEAVAIL = 0x10 + ATTR_VOL_SPACEFREE = 0x8 + ATTR_VOL_UUID = 0x40000 + ATTR_VOL_VALIDMASK = 0xf007ffff B0 = 0x0 B110 = 0x6e B115200 = 0x1c200 @@ -169,6 +249,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -390,6 +472,11 @@ const ( FF1 = 0x4000 FFDLY = 0x4000 FLUSHO = 0x800000 + FSOPT_ATTR_CMN_EXTENDED = 0x20 + FSOPT_NOFOLLOW = 0x1 + FSOPT_NOINMEMUPDATE = 0x2 + FSOPT_PACK_INVAL_ATTRS = 0x8 + FSOPT_REPORT_FULLSIZE = 0x4 F_ADDFILESIGS = 0x3d F_ADDFILESIGS_FOR_DYLD_SIM = 0x53 F_ADDFILESIGS_RETURN = 0x61 @@ -425,6 +512,7 @@ const ( F_PATHPKG_CHECK = 0x34 F_PEOFPOSMODE = 0x3 F_PREALLOCATE = 0x2a + F_PUNCHHOLE = 0x63 F_RDADVISE = 0x2c F_RDAHEAD = 0x2d F_RDLCK = 0x1 @@ -441,10 +529,12 @@ const ( F_SINGLE_WRITER = 0x4c F_THAW_FS 
= 0x36 F_TRANSCODEKEY = 0x4b + F_TRIM_ACTIVE_FILE = 0x64 F_UNLCK = 0x2 F_VOLPOSMODE = 0x4 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -681,6 +771,7 @@ const ( IPV6_FAITH = 0x1d IPV6_FLOWINFO_MASK = 0xffffff0f IPV6_FLOWLABEL_MASK = 0xffff0f00 + IPV6_FLOW_ECN_MASK = 0x300 IPV6_FRAGTTL = 0x3c IPV6_FW_ADD = 0x1e IPV6_FW_DEL = 0x1f @@ -771,6 +862,7 @@ const ( IP_RECVOPTS = 0x5 IP_RECVPKTINFO = 0x1a IP_RECVRETOPTS = 0x6 + IP_RECVTOS = 0x1b IP_RECVTTL = 0x18 IP_RETOPTS = 0x8 IP_RF = 0x8000 @@ -789,6 +881,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 diff --git a/vendor/golang.org/x/sys/unix/zerrors_darwin_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_darwin_amd64.go index 48f63d4f0..1a51c963c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_darwin_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_darwin_amd64.go @@ -49,6 +49,86 @@ const ( AF_UNSPEC = 0x0 AF_UTUN = 0x26 ALTWERASE = 0x200 + ATTR_BIT_MAP_COUNT = 0x5 + ATTR_CMN_ACCESSMASK = 0x20000 + ATTR_CMN_ACCTIME = 0x1000 + ATTR_CMN_ADDEDTIME = 0x10000000 + ATTR_CMN_BKUPTIME = 0x2000 + ATTR_CMN_CHGTIME = 0x800 + ATTR_CMN_CRTIME = 0x200 + ATTR_CMN_DATA_PROTECT_FLAGS = 0x40000000 + ATTR_CMN_DEVID = 0x2 + ATTR_CMN_DOCUMENT_ID = 0x100000 + ATTR_CMN_ERROR = 0x20000000 + ATTR_CMN_EXTENDED_SECURITY = 0x400000 + ATTR_CMN_FILEID = 0x2000000 + ATTR_CMN_FLAGS = 0x40000 + ATTR_CMN_FNDRINFO = 0x4000 + ATTR_CMN_FSID = 0x4 + ATTR_CMN_FULLPATH = 0x8000000 + ATTR_CMN_GEN_COUNT = 0x80000 + ATTR_CMN_GRPID = 0x10000 + ATTR_CMN_GRPUUID = 0x1000000 + ATTR_CMN_MODTIME = 0x400 + ATTR_CMN_NAME = 0x1 + ATTR_CMN_NAMEDATTRCOUNT = 0x80000 + ATTR_CMN_NAMEDATTRLIST = 0x100000 + ATTR_CMN_OBJID = 0x20 + ATTR_CMN_OBJPERMANENTID = 0x40 + ATTR_CMN_OBJTAG = 0x10 + ATTR_CMN_OBJTYPE = 0x8 + ATTR_CMN_OWNERID = 0x8000 + ATTR_CMN_PARENTID = 0x4000000 + ATTR_CMN_PAROBJID = 0x80 + ATTR_CMN_RETURNED_ATTRS = 0x80000000 + ATTR_CMN_SCRIPT = 0x100 + ATTR_CMN_SETMASK = 0x41c7ff00 + ATTR_CMN_USERACCESS = 0x200000 + ATTR_CMN_UUID = 0x800000 + ATTR_CMN_VALIDMASK = 0xffffffff + ATTR_CMN_VOLSETMASK = 0x6700 + ATTR_FILE_ALLOCSIZE = 0x4 + ATTR_FILE_CLUMPSIZE = 0x10 + ATTR_FILE_DATAALLOCSIZE = 0x400 + ATTR_FILE_DATAEXTENTS = 0x800 + ATTR_FILE_DATALENGTH = 0x200 + ATTR_FILE_DEVTYPE = 0x20 + ATTR_FILE_FILETYPE = 0x40 + ATTR_FILE_FORKCOUNT = 0x80 + ATTR_FILE_FORKLIST = 0x100 + ATTR_FILE_IOBLOCKSIZE = 0x8 + ATTR_FILE_LINKCOUNT = 0x1 + ATTR_FILE_RSRCALLOCSIZE = 0x2000 + ATTR_FILE_RSRCEXTENTS = 0x4000 + ATTR_FILE_RSRCLENGTH = 0x1000 + ATTR_FILE_SETMASK = 0x20 + ATTR_FILE_TOTALSIZE = 0x2 + ATTR_FILE_VALIDMASK = 0x37ff + ATTR_VOL_ALLOCATIONCLUMP = 0x40 + ATTR_VOL_ATTRIBUTES = 0x40000000 + ATTR_VOL_CAPABILITIES = 0x20000 + ATTR_VOL_DIRCOUNT = 0x400 + ATTR_VOL_ENCODINGSUSED = 0x10000 + ATTR_VOL_FILECOUNT = 0x200 + ATTR_VOL_FSTYPE = 0x1 + ATTR_VOL_INFO = 0x80000000 + ATTR_VOL_IOBLOCKSIZE = 0x80 + ATTR_VOL_MAXOBJCOUNT = 0x800 + ATTR_VOL_MINALLOCATION = 0x20 + ATTR_VOL_MOUNTEDDEVICE = 0x8000 + ATTR_VOL_MOUNTFLAGS = 0x4000 + ATTR_VOL_MOUNTPOINT = 0x1000 + ATTR_VOL_NAME = 0x2000 + ATTR_VOL_OBJCOUNT = 0x100 + ATTR_VOL_QUOTA_SIZE = 0x10000000 + ATTR_VOL_RESERVED_SIZE = 0x20000000 + ATTR_VOL_SETMASK = 0x80002000 + ATTR_VOL_SIGNATURE = 0x2 + ATTR_VOL_SIZE = 0x4 + ATTR_VOL_SPACEAVAIL = 0x10 + ATTR_VOL_SPACEFREE = 0x8 + ATTR_VOL_UUID = 0x40000 + ATTR_VOL_VALIDMASK = 0xf007ffff B0 = 0x0 B110 = 0x6e B115200 = 0x1c200 @@ -169,6 
+249,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -390,6 +472,11 @@ const ( FF1 = 0x4000 FFDLY = 0x4000 FLUSHO = 0x800000 + FSOPT_ATTR_CMN_EXTENDED = 0x20 + FSOPT_NOFOLLOW = 0x1 + FSOPT_NOINMEMUPDATE = 0x2 + FSOPT_PACK_INVAL_ATTRS = 0x8 + FSOPT_REPORT_FULLSIZE = 0x4 F_ADDFILESIGS = 0x3d F_ADDFILESIGS_FOR_DYLD_SIM = 0x53 F_ADDFILESIGS_RETURN = 0x61 @@ -425,6 +512,7 @@ const ( F_PATHPKG_CHECK = 0x34 F_PEOFPOSMODE = 0x3 F_PREALLOCATE = 0x2a + F_PUNCHHOLE = 0x63 F_RDADVISE = 0x2c F_RDAHEAD = 0x2d F_RDLCK = 0x1 @@ -441,10 +529,12 @@ const ( F_SINGLE_WRITER = 0x4c F_THAW_FS = 0x36 F_TRANSCODEKEY = 0x4b + F_TRIM_ACTIVE_FILE = 0x64 F_UNLCK = 0x2 F_VOLPOSMODE = 0x4 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -681,6 +771,7 @@ const ( IPV6_FAITH = 0x1d IPV6_FLOWINFO_MASK = 0xffffff0f IPV6_FLOWLABEL_MASK = 0xffff0f00 + IPV6_FLOW_ECN_MASK = 0x300 IPV6_FRAGTTL = 0x3c IPV6_FW_ADD = 0x1e IPV6_FW_DEL = 0x1f @@ -771,6 +862,7 @@ const ( IP_RECVOPTS = 0x5 IP_RECVPKTINFO = 0x1a IP_RECVRETOPTS = 0x6 + IP_RECVTOS = 0x1b IP_RECVTTL = 0x18 IP_RETOPTS = 0x8 IP_RF = 0x8000 @@ -789,6 +881,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 diff --git a/vendor/golang.org/x/sys/unix/zerrors_darwin_arm.go b/vendor/golang.org/x/sys/unix/zerrors_darwin_arm.go index 24cb522d9..fa135b17c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_darwin_arm.go +++ b/vendor/golang.org/x/sys/unix/zerrors_darwin_arm.go @@ -49,6 +49,86 @@ const ( AF_UNSPEC = 0x0 AF_UTUN = 0x26 ALTWERASE = 0x200 + ATTR_BIT_MAP_COUNT = 0x5 + ATTR_CMN_ACCESSMASK = 0x20000 + ATTR_CMN_ACCTIME = 0x1000 + ATTR_CMN_ADDEDTIME = 0x10000000 + ATTR_CMN_BKUPTIME = 0x2000 + ATTR_CMN_CHGTIME = 0x800 + ATTR_CMN_CRTIME = 0x200 + ATTR_CMN_DATA_PROTECT_FLAGS = 0x40000000 + ATTR_CMN_DEVID = 0x2 + ATTR_CMN_DOCUMENT_ID = 0x100000 + ATTR_CMN_ERROR = 0x20000000 + ATTR_CMN_EXTENDED_SECURITY = 0x400000 + ATTR_CMN_FILEID = 0x2000000 + ATTR_CMN_FLAGS = 0x40000 + ATTR_CMN_FNDRINFO = 0x4000 + ATTR_CMN_FSID = 0x4 + ATTR_CMN_FULLPATH = 0x8000000 + ATTR_CMN_GEN_COUNT = 0x80000 + ATTR_CMN_GRPID = 0x10000 + ATTR_CMN_GRPUUID = 0x1000000 + ATTR_CMN_MODTIME = 0x400 + ATTR_CMN_NAME = 0x1 + ATTR_CMN_NAMEDATTRCOUNT = 0x80000 + ATTR_CMN_NAMEDATTRLIST = 0x100000 + ATTR_CMN_OBJID = 0x20 + ATTR_CMN_OBJPERMANENTID = 0x40 + ATTR_CMN_OBJTAG = 0x10 + ATTR_CMN_OBJTYPE = 0x8 + ATTR_CMN_OWNERID = 0x8000 + ATTR_CMN_PARENTID = 0x4000000 + ATTR_CMN_PAROBJID = 0x80 + ATTR_CMN_RETURNED_ATTRS = 0x80000000 + ATTR_CMN_SCRIPT = 0x100 + ATTR_CMN_SETMASK = 0x41c7ff00 + ATTR_CMN_USERACCESS = 0x200000 + ATTR_CMN_UUID = 0x800000 + ATTR_CMN_VALIDMASK = 0xffffffff + ATTR_CMN_VOLSETMASK = 0x6700 + ATTR_FILE_ALLOCSIZE = 0x4 + ATTR_FILE_CLUMPSIZE = 0x10 + ATTR_FILE_DATAALLOCSIZE = 0x400 + ATTR_FILE_DATAEXTENTS = 0x800 + ATTR_FILE_DATALENGTH = 0x200 + ATTR_FILE_DEVTYPE = 0x20 + ATTR_FILE_FILETYPE = 0x40 + ATTR_FILE_FORKCOUNT = 0x80 + ATTR_FILE_FORKLIST = 0x100 + ATTR_FILE_IOBLOCKSIZE = 0x8 + ATTR_FILE_LINKCOUNT = 0x1 + ATTR_FILE_RSRCALLOCSIZE = 0x2000 + ATTR_FILE_RSRCEXTENTS = 0x4000 + ATTR_FILE_RSRCLENGTH = 0x1000 + ATTR_FILE_SETMASK = 0x20 + ATTR_FILE_TOTALSIZE = 0x2 + ATTR_FILE_VALIDMASK = 0x37ff + ATTR_VOL_ALLOCATIONCLUMP = 0x40 + ATTR_VOL_ATTRIBUTES = 0x40000000 + ATTR_VOL_CAPABILITIES = 0x20000 + ATTR_VOL_DIRCOUNT = 0x400 + 
ATTR_VOL_ENCODINGSUSED = 0x10000 + ATTR_VOL_FILECOUNT = 0x200 + ATTR_VOL_FSTYPE = 0x1 + ATTR_VOL_INFO = 0x80000000 + ATTR_VOL_IOBLOCKSIZE = 0x80 + ATTR_VOL_MAXOBJCOUNT = 0x800 + ATTR_VOL_MINALLOCATION = 0x20 + ATTR_VOL_MOUNTEDDEVICE = 0x8000 + ATTR_VOL_MOUNTFLAGS = 0x4000 + ATTR_VOL_MOUNTPOINT = 0x1000 + ATTR_VOL_NAME = 0x2000 + ATTR_VOL_OBJCOUNT = 0x100 + ATTR_VOL_QUOTA_SIZE = 0x10000000 + ATTR_VOL_RESERVED_SIZE = 0x20000000 + ATTR_VOL_SETMASK = 0x80002000 + ATTR_VOL_SIGNATURE = 0x2 + ATTR_VOL_SIZE = 0x4 + ATTR_VOL_SPACEAVAIL = 0x10 + ATTR_VOL_SPACEFREE = 0x8 + ATTR_VOL_UUID = 0x40000 + ATTR_VOL_VALIDMASK = 0xf007ffff B0 = 0x0 B110 = 0x6e B115200 = 0x1c200 @@ -169,6 +249,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -390,6 +472,11 @@ const ( FF1 = 0x4000 FFDLY = 0x4000 FLUSHO = 0x800000 + FSOPT_ATTR_CMN_EXTENDED = 0x20 + FSOPT_NOFOLLOW = 0x1 + FSOPT_NOINMEMUPDATE = 0x2 + FSOPT_PACK_INVAL_ATTRS = 0x8 + FSOPT_REPORT_FULLSIZE = 0x4 F_ADDFILESIGS = 0x3d F_ADDFILESIGS_FOR_DYLD_SIM = 0x53 F_ADDFILESIGS_RETURN = 0x61 @@ -425,6 +512,7 @@ const ( F_PATHPKG_CHECK = 0x34 F_PEOFPOSMODE = 0x3 F_PREALLOCATE = 0x2a + F_PUNCHHOLE = 0x63 F_RDADVISE = 0x2c F_RDAHEAD = 0x2d F_RDLCK = 0x1 @@ -441,10 +529,12 @@ const ( F_SINGLE_WRITER = 0x4c F_THAW_FS = 0x36 F_TRANSCODEKEY = 0x4b + F_TRIM_ACTIVE_FILE = 0x64 F_UNLCK = 0x2 F_VOLPOSMODE = 0x4 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -681,6 +771,7 @@ const ( IPV6_FAITH = 0x1d IPV6_FLOWINFO_MASK = 0xffffff0f IPV6_FLOWLABEL_MASK = 0xffff0f00 + IPV6_FLOW_ECN_MASK = 0x300 IPV6_FRAGTTL = 0x3c IPV6_FW_ADD = 0x1e IPV6_FW_DEL = 0x1f @@ -771,6 +862,7 @@ const ( IP_RECVOPTS = 0x5 IP_RECVPKTINFO = 0x1a IP_RECVRETOPTS = 0x6 + IP_RECVTOS = 0x1b IP_RECVTTL = 0x18 IP_RETOPTS = 0x8 IP_RF = 0x8000 @@ -789,6 +881,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 diff --git a/vendor/golang.org/x/sys/unix/zerrors_darwin_arm64.go b/vendor/golang.org/x/sys/unix/zerrors_darwin_arm64.go index cc8cc5b57..6419c65e1 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_darwin_arm64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_darwin_arm64.go @@ -49,6 +49,86 @@ const ( AF_UNSPEC = 0x0 AF_UTUN = 0x26 ALTWERASE = 0x200 + ATTR_BIT_MAP_COUNT = 0x5 + ATTR_CMN_ACCESSMASK = 0x20000 + ATTR_CMN_ACCTIME = 0x1000 + ATTR_CMN_ADDEDTIME = 0x10000000 + ATTR_CMN_BKUPTIME = 0x2000 + ATTR_CMN_CHGTIME = 0x800 + ATTR_CMN_CRTIME = 0x200 + ATTR_CMN_DATA_PROTECT_FLAGS = 0x40000000 + ATTR_CMN_DEVID = 0x2 + ATTR_CMN_DOCUMENT_ID = 0x100000 + ATTR_CMN_ERROR = 0x20000000 + ATTR_CMN_EXTENDED_SECURITY = 0x400000 + ATTR_CMN_FILEID = 0x2000000 + ATTR_CMN_FLAGS = 0x40000 + ATTR_CMN_FNDRINFO = 0x4000 + ATTR_CMN_FSID = 0x4 + ATTR_CMN_FULLPATH = 0x8000000 + ATTR_CMN_GEN_COUNT = 0x80000 + ATTR_CMN_GRPID = 0x10000 + ATTR_CMN_GRPUUID = 0x1000000 + ATTR_CMN_MODTIME = 0x400 + ATTR_CMN_NAME = 0x1 + ATTR_CMN_NAMEDATTRCOUNT = 0x80000 + ATTR_CMN_NAMEDATTRLIST = 0x100000 + ATTR_CMN_OBJID = 0x20 + ATTR_CMN_OBJPERMANENTID = 0x40 + ATTR_CMN_OBJTAG = 0x10 + ATTR_CMN_OBJTYPE = 0x8 + ATTR_CMN_OWNERID = 0x8000 + ATTR_CMN_PARENTID = 0x4000000 + ATTR_CMN_PAROBJID = 0x80 + ATTR_CMN_RETURNED_ATTRS = 0x80000000 + ATTR_CMN_SCRIPT = 0x100 + ATTR_CMN_SETMASK = 0x41c7ff00 + ATTR_CMN_USERACCESS = 0x200000 + ATTR_CMN_UUID = 0x800000 + ATTR_CMN_VALIDMASK = 0xffffffff + 
ATTR_CMN_VOLSETMASK = 0x6700 + ATTR_FILE_ALLOCSIZE = 0x4 + ATTR_FILE_CLUMPSIZE = 0x10 + ATTR_FILE_DATAALLOCSIZE = 0x400 + ATTR_FILE_DATAEXTENTS = 0x800 + ATTR_FILE_DATALENGTH = 0x200 + ATTR_FILE_DEVTYPE = 0x20 + ATTR_FILE_FILETYPE = 0x40 + ATTR_FILE_FORKCOUNT = 0x80 + ATTR_FILE_FORKLIST = 0x100 + ATTR_FILE_IOBLOCKSIZE = 0x8 + ATTR_FILE_LINKCOUNT = 0x1 + ATTR_FILE_RSRCALLOCSIZE = 0x2000 + ATTR_FILE_RSRCEXTENTS = 0x4000 + ATTR_FILE_RSRCLENGTH = 0x1000 + ATTR_FILE_SETMASK = 0x20 + ATTR_FILE_TOTALSIZE = 0x2 + ATTR_FILE_VALIDMASK = 0x37ff + ATTR_VOL_ALLOCATIONCLUMP = 0x40 + ATTR_VOL_ATTRIBUTES = 0x40000000 + ATTR_VOL_CAPABILITIES = 0x20000 + ATTR_VOL_DIRCOUNT = 0x400 + ATTR_VOL_ENCODINGSUSED = 0x10000 + ATTR_VOL_FILECOUNT = 0x200 + ATTR_VOL_FSTYPE = 0x1 + ATTR_VOL_INFO = 0x80000000 + ATTR_VOL_IOBLOCKSIZE = 0x80 + ATTR_VOL_MAXOBJCOUNT = 0x800 + ATTR_VOL_MINALLOCATION = 0x20 + ATTR_VOL_MOUNTEDDEVICE = 0x8000 + ATTR_VOL_MOUNTFLAGS = 0x4000 + ATTR_VOL_MOUNTPOINT = 0x1000 + ATTR_VOL_NAME = 0x2000 + ATTR_VOL_OBJCOUNT = 0x100 + ATTR_VOL_QUOTA_SIZE = 0x10000000 + ATTR_VOL_RESERVED_SIZE = 0x20000000 + ATTR_VOL_SETMASK = 0x80002000 + ATTR_VOL_SIGNATURE = 0x2 + ATTR_VOL_SIZE = 0x4 + ATTR_VOL_SPACEAVAIL = 0x10 + ATTR_VOL_SPACEFREE = 0x8 + ATTR_VOL_UUID = 0x40000 + ATTR_VOL_VALIDMASK = 0xf007ffff B0 = 0x0 B110 = 0x6e B115200 = 0x1c200 @@ -169,6 +249,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -390,6 +472,11 @@ const ( FF1 = 0x4000 FFDLY = 0x4000 FLUSHO = 0x800000 + FSOPT_ATTR_CMN_EXTENDED = 0x20 + FSOPT_NOFOLLOW = 0x1 + FSOPT_NOINMEMUPDATE = 0x2 + FSOPT_PACK_INVAL_ATTRS = 0x8 + FSOPT_REPORT_FULLSIZE = 0x4 F_ADDFILESIGS = 0x3d F_ADDFILESIGS_FOR_DYLD_SIM = 0x53 F_ADDFILESIGS_RETURN = 0x61 @@ -425,6 +512,7 @@ const ( F_PATHPKG_CHECK = 0x34 F_PEOFPOSMODE = 0x3 F_PREALLOCATE = 0x2a + F_PUNCHHOLE = 0x63 F_RDADVISE = 0x2c F_RDAHEAD = 0x2d F_RDLCK = 0x1 @@ -441,10 +529,12 @@ const ( F_SINGLE_WRITER = 0x4c F_THAW_FS = 0x36 F_TRANSCODEKEY = 0x4b + F_TRIM_ACTIVE_FILE = 0x64 F_UNLCK = 0x2 F_VOLPOSMODE = 0x4 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -681,6 +771,7 @@ const ( IPV6_FAITH = 0x1d IPV6_FLOWINFO_MASK = 0xffffff0f IPV6_FLOWLABEL_MASK = 0xffff0f00 + IPV6_FLOW_ECN_MASK = 0x300 IPV6_FRAGTTL = 0x3c IPV6_FW_ADD = 0x1e IPV6_FW_DEL = 0x1f @@ -771,6 +862,7 @@ const ( IP_RECVOPTS = 0x5 IP_RECVPKTINFO = 0x1a IP_RECVRETOPTS = 0x6 + IP_RECVTOS = 0x1b IP_RECVTTL = 0x18 IP_RETOPTS = 0x8 IP_RF = 0x8000 @@ -789,6 +881,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 diff --git a/vendor/golang.org/x/sys/unix/zerrors_dragonfly_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_dragonfly_amd64.go index 8f40598bb..474441b80 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_dragonfly_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_dragonfly_amd64.go @@ -168,6 +168,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -353,6 +355,7 @@ const ( F_UNLCK = 0x2 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -835,6 +838,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 @@ -973,7 +980,10 @@ 
const ( RLIMIT_CPU = 0x0 RLIMIT_DATA = 0x2 RLIMIT_FSIZE = 0x1 + RLIMIT_MEMLOCK = 0x6 RLIMIT_NOFILE = 0x8 + RLIMIT_NPROC = 0x7 + RLIMIT_RSS = 0x5 RLIMIT_STACK = 0x3 RLIM_INFINITY = 0x7fffffffffffffff RTAX_AUTHOR = 0x6 diff --git a/vendor/golang.org/x/sys/unix/zerrors_freebsd_386.go b/vendor/golang.org/x/sys/unix/zerrors_freebsd_386.go index 1d3eec44d..a8b05878e 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_freebsd_386.go +++ b/vendor/golang.org/x/sys/unix/zerrors_freebsd_386.go @@ -351,6 +351,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0x18 CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -608,6 +610,7 @@ const ( F_UNLCKSYS = 0x4 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -944,6 +947,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 @@ -981,6 +988,49 @@ const ( MAP_STACK = 0x400 MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 + MNT_ACLS = 0x8000000 + MNT_ASYNC = 0x40 + MNT_AUTOMOUNTED = 0x200000000 + MNT_BYFSID = 0x8000000 + MNT_CMDFLAGS = 0xd0f0000 + MNT_DEFEXPORTED = 0x200 + MNT_DELEXPORT = 0x20000 + MNT_EXKERB = 0x800 + MNT_EXPORTANON = 0x400 + MNT_EXPORTED = 0x100 + MNT_EXPUBLIC = 0x20000000 + MNT_EXRDONLY = 0x80 + MNT_FORCE = 0x80000 + MNT_GJOURNAL = 0x2000000 + MNT_IGNORE = 0x800000 + MNT_LAZY = 0x3 + MNT_LOCAL = 0x1000 + MNT_MULTILABEL = 0x4000000 + MNT_NFS4ACLS = 0x10 + MNT_NOATIME = 0x10000000 + MNT_NOCLUSTERR = 0x40000000 + MNT_NOCLUSTERW = 0x80000000 + MNT_NOEXEC = 0x4 + MNT_NONBUSY = 0x4000000 + MNT_NOSUID = 0x8 + MNT_NOSYMFOLLOW = 0x400000 + MNT_NOWAIT = 0x2 + MNT_QUOTA = 0x2000 + MNT_RDONLY = 0x1 + MNT_RELOAD = 0x40000 + MNT_ROOTFS = 0x4000 + MNT_SNAPSHOT = 0x1000000 + MNT_SOFTDEP = 0x200000 + MNT_SUIDDIR = 0x100000 + MNT_SUJ = 0x100000000 + MNT_SUSPEND = 0x4 + MNT_SYNCHRONOUS = 0x2 + MNT_UNION = 0x20 + MNT_UPDATE = 0x10000 + MNT_UPDATEMASK = 0x2d8d0807e + MNT_USER = 0x8000 + MNT_VISFLAGMASK = 0x3fef0ffff + MNT_WAIT = 0x1 MSG_CMSG_CLOEXEC = 0x40000 MSG_COMPAT = 0x8000 MSG_CTRUNC = 0x20 diff --git a/vendor/golang.org/x/sys/unix/zerrors_freebsd_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_freebsd_amd64.go index ac094f9cf..cf5f01260 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_freebsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_freebsd_amd64.go @@ -351,6 +351,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0x18 CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -608,6 +610,7 @@ const ( F_UNLCKSYS = 0x4 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -944,6 +947,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 @@ -982,6 +989,49 @@ const ( MAP_STACK = 0x400 MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 + MNT_ACLS = 0x8000000 + MNT_ASYNC = 0x40 + MNT_AUTOMOUNTED = 0x200000000 + MNT_BYFSID = 0x8000000 + MNT_CMDFLAGS = 0xd0f0000 + MNT_DEFEXPORTED = 0x200 + MNT_DELEXPORT = 0x20000 + MNT_EXKERB = 0x800 + MNT_EXPORTANON = 0x400 + MNT_EXPORTED = 0x100 + MNT_EXPUBLIC = 0x20000000 + MNT_EXRDONLY = 0x80 + MNT_FORCE = 0x80000 + MNT_GJOURNAL = 0x2000000 + MNT_IGNORE = 0x800000 + MNT_LAZY = 0x3 + MNT_LOCAL = 0x1000 + MNT_MULTILABEL = 0x4000000 + MNT_NFS4ACLS = 0x10 + MNT_NOATIME = 0x10000000 + MNT_NOCLUSTERR = 0x40000000 + MNT_NOCLUSTERW = 0x80000000 + 
MNT_NOEXEC = 0x4 + MNT_NONBUSY = 0x4000000 + MNT_NOSUID = 0x8 + MNT_NOSYMFOLLOW = 0x400000 + MNT_NOWAIT = 0x2 + MNT_QUOTA = 0x2000 + MNT_RDONLY = 0x1 + MNT_RELOAD = 0x40000 + MNT_ROOTFS = 0x4000 + MNT_SNAPSHOT = 0x1000000 + MNT_SOFTDEP = 0x200000 + MNT_SUIDDIR = 0x100000 + MNT_SUJ = 0x100000000 + MNT_SUSPEND = 0x4 + MNT_SYNCHRONOUS = 0x2 + MNT_UNION = 0x20 + MNT_UPDATE = 0x10000 + MNT_UPDATEMASK = 0x2d8d0807e + MNT_USER = 0x8000 + MNT_VISFLAGMASK = 0x3fef0ffff + MNT_WAIT = 0x1 MSG_CMSG_CLOEXEC = 0x40000 MSG_COMPAT = 0x8000 MSG_CTRUNC = 0x20 diff --git a/vendor/golang.org/x/sys/unix/zerrors_freebsd_arm.go b/vendor/golang.org/x/sys/unix/zerrors_freebsd_arm.go index c5c6f13e5..9bbb90ad8 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_freebsd_arm.go +++ b/vendor/golang.org/x/sys/unix/zerrors_freebsd_arm.go @@ -351,6 +351,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0x18 CTL_NET = 0x4 DLT_A429 = 0xb8 @@ -615,6 +617,7 @@ const ( F_UNLCKSYS = 0x4 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -951,6 +954,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 @@ -989,6 +996,49 @@ const ( MAP_STACK = 0x400 MCL_CURRENT = 0x1 MCL_FUTURE = 0x2 + MNT_ACLS = 0x8000000 + MNT_ASYNC = 0x40 + MNT_AUTOMOUNTED = 0x200000000 + MNT_BYFSID = 0x8000000 + MNT_CMDFLAGS = 0xd0f0000 + MNT_DEFEXPORTED = 0x200 + MNT_DELEXPORT = 0x20000 + MNT_EXKERB = 0x800 + MNT_EXPORTANON = 0x400 + MNT_EXPORTED = 0x100 + MNT_EXPUBLIC = 0x20000000 + MNT_EXRDONLY = 0x80 + MNT_FORCE = 0x80000 + MNT_GJOURNAL = 0x2000000 + MNT_IGNORE = 0x800000 + MNT_LAZY = 0x3 + MNT_LOCAL = 0x1000 + MNT_MULTILABEL = 0x4000000 + MNT_NFS4ACLS = 0x10 + MNT_NOATIME = 0x10000000 + MNT_NOCLUSTERR = 0x40000000 + MNT_NOCLUSTERW = 0x80000000 + MNT_NOEXEC = 0x4 + MNT_NONBUSY = 0x4000000 + MNT_NOSUID = 0x8 + MNT_NOSYMFOLLOW = 0x400000 + MNT_NOWAIT = 0x2 + MNT_QUOTA = 0x2000 + MNT_RDONLY = 0x1 + MNT_RELOAD = 0x40000 + MNT_ROOTFS = 0x4000 + MNT_SNAPSHOT = 0x1000000 + MNT_SOFTDEP = 0x200000 + MNT_SUIDDIR = 0x100000 + MNT_SUJ = 0x100000000 + MNT_SUSPEND = 0x4 + MNT_SYNCHRONOUS = 0x2 + MNT_UNION = 0x20 + MNT_UPDATE = 0x10000 + MNT_UPDATEMASK = 0x2d8d0807e + MNT_USER = 0x8000 + MNT_VISFLAGMASK = 0x3fef0ffff + MNT_WAIT = 0x1 MSG_CMSG_CLOEXEC = 0x40000 MSG_COMPAT = 0x8000 MSG_CTRUNC = 0x20 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_386.go b/vendor/golang.org/x/sys/unix/zerrors_linux_386.go index a6b3b5f14..fa0637408 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_386.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_386.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f 
+ ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x1000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x7b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 
IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_32BIT = 0x40 MAP_ANON = 0x20 MAP_ANONYMOUS = 0x20 @@ -870,6 +918,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -902,6 +951,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -916,6 +966,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -934,6 +985,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -954,8 +1006,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -964,6 +1018,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1012,6 +1067,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1161,6 +1217,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1243,10 +1304,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1257,7 +1319,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1268,7 +1330,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1312,6 +1374,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1320,6 +1383,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1341,10 +1405,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 
+ RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1359,8 +1424,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1371,6 +1436,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1401,6 +1467,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1526,6 +1593,7 @@ const ( SOL_SOCKET = 0x1 SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1e @@ -1539,6 +1607,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1547,11 +1616,13 @@ const ( SO_ERROR = 0x4 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x9 SO_LINGER = 0xd SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0xa @@ -1559,6 +1630,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x11 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1f SO_PRIORITY = 0xc @@ -1590,10 +1662,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1626,6 +1720,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x540b TCGETA = 0x5405 TCGETS = 0x5401 @@ -1649,6 +1749,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1658,6 +1759,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1678,6 +1781,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x2 @@ -1708,6 +1812,7 @@ const ( TIOCGPKT = 0x80045438 TIOCGPTLCK = 0x80045439 TIOCGPTN = 0x80045430 + TIOCGPTPEER = 0x5441 TIOCGRS485 = 0x542e TIOCGSERIAL = 0x541e TIOCGSID = 0x5429 @@ -1765,6 +1870,7 @@ const ( TIOCSWINSZ = 0x5414 TIOCVHANGUP = 0x5437 TOSTOP = 0x100 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x400854d5 TUNDETACHFILTER = 0x400854d6 TUNGETFEATURES = 0x800454cf @@ -1790,6 +1896,8 @@ const ( 
TUNSETVNETHDRSZ = 0x400454d8 TUNSETVNETLE = 0x400454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x4 VEOL = 0xb @@ -1819,6 +1927,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x80045702 + WDIOC_GETPRETIMEOUT = 0x80045709 + WDIOC_GETSTATUS = 0x80045701 + WDIOC_GETSUPPORT = 0x80285700 + WDIOC_GETTEMP = 0x80045703 + WDIOC_GETTIMELEFT = 0x8004570a + WDIOC_GETTIMEOUT = 0x80045707 + WDIOC_KEEPALIVE = 0x80045705 + WDIOC_SETOPTIONS = 0x80045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 @@ -1999,7 +2118,6 @@ const ( SIGTSTP = syscall.Signal(0x14) SIGTTIN = syscall.Signal(0x15) SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) SIGURG = syscall.Signal(0x17) SIGUSR1 = syscall.Signal(0xa) SIGUSR2 = syscall.Signal(0xc) diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go index 4ffc8d29c..eb2a22f65 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x1000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + 
GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x7b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_32BIT = 0x40 MAP_ANON = 0x20 MAP_ANONYMOUS = 0x20 @@ -870,6 +918,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -902,6 +951,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -916,6 +966,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -934,6 +985,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -954,8 +1006,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 
NLM_F_DUMP_FILTERED = 0x20 @@ -964,6 +1018,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1012,6 +1067,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1153,7 +1209,7 @@ const ( PR_SET_NO_NEW_PRIVS = 0x26 PR_SET_PDEATHSIG = 0x1 PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 + PR_SET_PTRACER_ANY = 0xffffffffffffffff PR_SET_SECCOMP = 0x16 PR_SET_SECUREBITS = 0x1c PR_SET_THP_DISABLE = 0x29 @@ -1161,6 +1217,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1244,10 +1305,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1258,7 +1320,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1269,7 +1331,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1313,6 +1375,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1321,6 +1384,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1342,10 +1406,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1360,8 +1425,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1372,6 +1437,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1402,6 +1468,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1527,6 +1594,7 @@ const ( SOL_SOCKET = 0x1 SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1e @@ -1540,6 +1608,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1548,11 +1617,13 @@ const ( SO_ERROR = 0x4 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x9 SO_LINGER = 0xd SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 
0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0xa @@ -1560,6 +1631,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x11 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1f SO_PRIORITY = 0xc @@ -1591,10 +1663,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1627,6 +1721,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x540b TCGETA = 0x5405 TCGETS = 0x5401 @@ -1650,6 +1750,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1659,6 +1760,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1679,6 +1782,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x2 @@ -1709,6 +1813,7 @@ const ( TIOCGPKT = 0x80045438 TIOCGPTLCK = 0x80045439 TIOCGPTN = 0x80045430 + TIOCGPTPEER = 0x5441 TIOCGRS485 = 0x542e TIOCGSERIAL = 0x541e TIOCGSID = 0x5429 @@ -1766,6 +1871,7 @@ const ( TIOCSWINSZ = 0x5414 TIOCVHANGUP = 0x5437 TOSTOP = 0x100 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x401054d5 TUNDETACHFILTER = 0x401054d6 TUNGETFEATURES = 0x800454cf @@ -1791,6 +1897,8 @@ const ( TUNSETVNETHDRSZ = 0x400454d8 TUNSETVNETLE = 0x400454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x4 VEOL = 0xb @@ -1820,6 +1928,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x80045702 + WDIOC_GETPRETIMEOUT = 0x80045709 + WDIOC_GETSTATUS = 0x80045701 + WDIOC_GETSUPPORT = 0x80285700 + WDIOC_GETTEMP = 0x80045703 + WDIOC_GETTIMELEFT = 0x8004570a + WDIOC_GETTIMEOUT = 0x80045707 + WDIOC_KEEPALIVE = 0x80045705 + WDIOC_SETOPTIONS = 0x80045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 @@ -2000,7 +2119,6 @@ const ( SIGTSTP = syscall.Signal(0x14) SIGTTIN = syscall.Signal(0x15) SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) SIGURG = syscall.Signal(0x17) SIGUSR1 = syscall.Signal(0xa) SIGUSR2 = syscall.Signal(0xc) diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go b/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go index f4b178ef1..37d212ca4 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd 
AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x1000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x7b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a 
IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x20 MAP_ANONYMOUS = 0x20 MAP_DENYWRITE = 0x800 @@ -869,6 +917,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -901,6 +950,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -915,6 +965,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -933,6 +984,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -953,8 +1005,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -963,6 +1017,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1011,6 +1066,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1160,6 +1216,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1181,6 +1242,9 @@ const ( PTRACE_EVENT_VFORK_DONE = 0x5 PTRACE_GETCRUNCHREGS = 0x19 PTRACE_GETEVENTMSG = 0x4201 + PTRACE_GETFDPIC = 0x1f + PTRACE_GETFDPIC_EXEC = 0x0 + PTRACE_GETFDPIC_INTERP = 0x1 PTRACE_GETFPREGS = 0xe PTRACE_GETHBPREGS = 0x1d PTRACE_GETREGS = 0xc @@ -1248,10 +1312,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc 
RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1262,7 +1327,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1273,7 +1338,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1317,6 +1382,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1325,6 +1391,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1346,10 +1413,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1364,8 +1432,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1376,6 +1444,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1406,6 +1475,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1531,6 +1601,7 @@ const ( SOL_SOCKET = 0x1 SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1e @@ -1544,6 +1615,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1552,11 +1624,13 @@ const ( SO_ERROR = 0x4 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x9 SO_LINGER = 0xd SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0xa @@ -1564,6 +1638,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x11 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1f SO_PRIORITY = 0xc @@ -1595,10 +1670,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1631,6 +1728,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x540b TCGETA = 0x5405 TCGETS = 
0x5401 @@ -1654,6 +1757,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1663,6 +1767,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1683,6 +1789,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x2 @@ -1713,6 +1820,7 @@ const ( TIOCGPKT = 0x80045438 TIOCGPTLCK = 0x80045439 TIOCGPTN = 0x80045430 + TIOCGPTPEER = 0x5441 TIOCGRS485 = 0x542e TIOCGSERIAL = 0x541e TIOCGSID = 0x5429 @@ -1770,6 +1878,7 @@ const ( TIOCSWINSZ = 0x5414 TIOCVHANGUP = 0x5437 TOSTOP = 0x100 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x400854d5 TUNDETACHFILTER = 0x400854d6 TUNGETFEATURES = 0x800454cf @@ -1795,6 +1904,8 @@ const ( TUNSETVNETHDRSZ = 0x400454d8 TUNSETVNETLE = 0x400454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x4 VEOL = 0xb @@ -1824,6 +1935,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x80045702 + WDIOC_GETPRETIMEOUT = 0x80045709 + WDIOC_GETSTATUS = 0x80045701 + WDIOC_GETSUPPORT = 0x80285700 + WDIOC_GETTEMP = 0x80045703 + WDIOC_GETTIMELEFT = 0x8004570a + WDIOC_GETTIMEOUT = 0x80045707 + WDIOC_KEEPALIVE = 0x80045705 + WDIOC_SETOPTIONS = 0x80045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 @@ -2004,7 +2126,6 @@ const ( SIGTSTP = syscall.Signal(0x14) SIGTTIN = syscall.Signal(0x15) SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) SIGURG = syscall.Signal(0x17) SIGUSR1 = syscall.Signal(0xa) SIGUSR2 = syscall.Signal(0xc) diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go index 495f13b61..51d84a35c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -389,13 +392,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -406,11 +412,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -441,6 +449,7 @@ const ( EXTA = 0xe EXTB = 0xf EXTPROC = 0x10000 + EXTRA_MAGIC = 0x45585401 
FALLOC_FL_COLLAPSE_RANGE = 0x8 FALLOC_FL_INSERT_RANGE = 0x20 FALLOC_FL_KEEP_SIZE = 0x1 @@ -454,6 +463,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x1000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -472,6 +483,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -484,6 +496,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -491,6 +506,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -502,12 +521,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -544,6 +578,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -606,6 +642,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x7b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -645,8 +682,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -659,12 +698,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -675,8 +716,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -690,7 +733,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -733,6 +778,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -770,6 +816,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -817,6 +864,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE 
= 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -825,6 +873,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x20 MAP_ANONYMOUS = 0x20 MAP_DENYWRITE = 0x800 @@ -870,6 +919,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -902,6 +952,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -916,6 +967,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -934,6 +986,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -954,8 +1007,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -964,6 +1019,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1012,6 +1068,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1153,7 +1210,7 @@ const ( PR_SET_NO_NEW_PRIVS = 0x26 PR_SET_PDEATHSIG = 0x1 PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 + PR_SET_PTRACER_ANY = 0xffffffffffffffff PR_SET_SECCOMP = 0x16 PR_SET_SECUREBITS = 0x1c PR_SET_THP_DISABLE = 0x29 @@ -1161,6 +1218,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1233,10 +1295,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1247,7 +1310,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1258,7 +1321,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1302,6 +1365,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1310,6 +1374,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1331,10 +1396,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1349,8 +1415,8 @@ const ( 
RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1361,6 +1427,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1391,6 +1458,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1516,6 +1584,7 @@ const ( SOL_SOCKET = 0x1 SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1e @@ -1529,6 +1598,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1537,11 +1607,13 @@ const ( SO_ERROR = 0x4 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x9 SO_LINGER = 0xd SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0xa @@ -1549,6 +1621,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x11 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1f SO_PRIORITY = 0xc @@ -1580,10 +1653,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1616,6 +1711,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x540b TCGETA = 0x5405 TCGETS = 0x5401 @@ -1639,6 +1740,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1648,6 +1750,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1668,6 +1772,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x2 @@ -1698,6 +1803,7 @@ const ( TIOCGPKT = 0x80045438 TIOCGPTLCK = 0x80045439 TIOCGPTN = 0x80045430 + TIOCGPTPEER = 0x5441 TIOCGRS485 = 0x542e TIOCGSERIAL = 0x541e TIOCGSID = 0x5429 @@ -1755,6 +1861,7 @@ const ( TIOCSWINSZ = 0x5414 TIOCVHANGUP = 0x5437 TOSTOP = 0x100 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x401054d5 TUNDETACHFILTER = 0x401054d6 TUNGETFEATURES = 0x800454cf @@ -1780,6 +1887,8 @@ const ( TUNSETVNETHDRSZ = 0x400454d8 TUNSETVNETLE = 0x400454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 
0x3ffffffe VDISCARD = 0xd VEOF = 0x4 VEOL = 0xb @@ -1809,6 +1918,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x80045702 + WDIOC_GETPRETIMEOUT = 0x80045709 + WDIOC_GETSTATUS = 0x80045701 + WDIOC_GETSUPPORT = 0x80285700 + WDIOC_GETTEMP = 0x80045703 + WDIOC_GETTIMELEFT = 0x8004570a + WDIOC_GETTIMEOUT = 0x80045707 + WDIOC_KEEPALIVE = 0x80045705 + WDIOC_SETOPTIONS = 0x80045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 @@ -1989,7 +2109,6 @@ const ( SIGTSTP = syscall.Signal(0x14) SIGTTIN = syscall.Signal(0x15) SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) SIGURG = syscall.Signal(0x17) SIGUSR1 = syscall.Signal(0xa) SIGUSR2 = syscall.Signal(0xc) diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go index 59651e415..8aec95d6c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x2000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL 
= 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x200007b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x800 MAP_ANONYMOUS = 0x800 MAP_DENYWRITE = 0x2000 @@ -870,6 +918,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -902,6 +951,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -916,6 +966,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -934,6 +985,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -954,8 +1006,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -964,6 +1018,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + 
NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1012,6 +1067,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1161,6 +1217,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1245,10 +1306,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1259,7 +1321,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1270,7 +1332,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1314,6 +1376,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1322,6 +1385,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1343,10 +1407,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1361,8 +1426,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1373,6 +1438,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1403,6 +1469,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1528,6 +1595,7 @@ const ( SOL_SOCKET = 0xffff SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1009 @@ -1541,6 +1609,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1549,11 +1618,13 @@ const ( SO_ERROR = 0x1007 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x8 SO_LINGER = 0x80 SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0x100 @@ -1561,6 +1632,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x12 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1e SO_PRIORITY = 0xc @@ -1593,10 +1665,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c 
SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1629,6 +1723,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x5407 TCGETA = 0x5401 TCGETS = 0x540d @@ -1651,6 +1751,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1660,6 +1761,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1680,6 +1783,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x5410 @@ -1709,6 +1813,7 @@ const ( TIOCGPKT = 0x40045438 TIOCGPTLCK = 0x40045439 TIOCGPTN = 0x40045430 + TIOCGPTPEER = 0x20005441 TIOCGRS485 = 0x4020542e TIOCGSERIAL = 0x5484 TIOCGSID = 0x7416 @@ -1769,6 +1874,7 @@ const ( TIOCSWINSZ = 0x80087467 TIOCVHANGUP = 0x5437 TOSTOP = 0x8000 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x800854d5 TUNDETACHFILTER = 0x800854d6 TUNGETFEATURES = 0x400454cf @@ -1794,6 +1900,8 @@ const ( TUNSETVNETHDRSZ = 0x800454d8 TUNSETVNETLE = 0x800454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x10 VEOL = 0x11 @@ -1824,6 +1932,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x40045702 + WDIOC_GETPRETIMEOUT = 0x40045709 + WDIOC_GETSTATUS = 0x40045701 + WDIOC_GETSUPPORT = 0x40285700 + WDIOC_GETTEMP = 0x40045703 + WDIOC_GETTIMELEFT = 0x4004570a + WDIOC_GETTIMEOUT = 0x40045707 + WDIOC_KEEPALIVE = 0x40045705 + WDIOC_SETOPTIONS = 0x40045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go index a09bf9b18..423f48ae0 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 
ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x2000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x200007b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 
IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x800 MAP_ANONYMOUS = 0x800 MAP_DENYWRITE = 0x2000 @@ -870,6 +918,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -902,6 +951,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -916,6 +966,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -934,6 +985,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -954,8 +1006,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -964,6 +1018,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1012,6 +1067,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1153,7 +1209,7 @@ const ( PR_SET_NO_NEW_PRIVS = 0x26 PR_SET_PDEATHSIG = 0x1 PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 + PR_SET_PTRACER_ANY = 0xffffffffffffffff PR_SET_SECCOMP = 0x16 PR_SET_SECUREBITS = 0x1c PR_SET_THP_DISABLE = 0x29 @@ -1161,6 +1217,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1245,10 +1306,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1259,7 +1321,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1270,7 +1332,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1314,6 +1376,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 
RTM_DELROUTE = 0x19 @@ -1322,6 +1385,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1343,10 +1407,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1361,8 +1426,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1373,6 +1438,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1403,6 +1469,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1528,6 +1595,7 @@ const ( SOL_SOCKET = 0xffff SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1009 @@ -1541,6 +1609,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1549,11 +1618,13 @@ const ( SO_ERROR = 0x1007 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x8 SO_LINGER = 0x80 SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0x100 @@ -1561,6 +1632,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x12 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1e SO_PRIORITY = 0xc @@ -1593,10 +1665,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1629,6 +1723,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x5407 TCGETA = 0x5401 TCGETS = 0x540d @@ -1651,6 +1751,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1660,6 +1761,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1680,6 +1783,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa 
TCSAFLUSH = 0x5410 @@ -1709,6 +1813,7 @@ const ( TIOCGPKT = 0x40045438 TIOCGPTLCK = 0x40045439 TIOCGPTN = 0x40045430 + TIOCGPTPEER = 0x20005441 TIOCGRS485 = 0x4020542e TIOCGSERIAL = 0x5484 TIOCGSID = 0x7416 @@ -1769,6 +1874,7 @@ const ( TIOCSWINSZ = 0x80087467 TIOCVHANGUP = 0x5437 TOSTOP = 0x8000 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x801054d5 TUNDETACHFILTER = 0x801054d6 TUNGETFEATURES = 0x400454cf @@ -1794,6 +1900,8 @@ const ( TUNSETVNETHDRSZ = 0x800454d8 TUNSETVNETLE = 0x800454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x10 VEOL = 0x11 @@ -1824,6 +1932,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x40045702 + WDIOC_GETPRETIMEOUT = 0x40045709 + WDIOC_GETSTATUS = 0x40045701 + WDIOC_GETSUPPORT = 0x40285700 + WDIOC_GETTEMP = 0x40045703 + WDIOC_GETTIMELEFT = 0x4004570a + WDIOC_GETTIMEOUT = 0x40045707 + WDIOC_KEEPALIVE = 0x40045705 + WDIOC_SETOPTIONS = 0x40045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go index 72a0083c4..5e4060701 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x2000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( 
F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x200007b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x800 MAP_ANONYMOUS = 0x800 MAP_DENYWRITE = 0x2000 @@ -870,6 +918,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -902,6 +951,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -916,6 +966,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -934,6 +985,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ 
-954,8 +1006,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -964,6 +1018,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1012,6 +1067,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1153,7 +1209,7 @@ const ( PR_SET_NO_NEW_PRIVS = 0x26 PR_SET_PDEATHSIG = 0x1 PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 + PR_SET_PTRACER_ANY = 0xffffffffffffffff PR_SET_SECCOMP = 0x16 PR_SET_SECUREBITS = 0x1c PR_SET_THP_DISABLE = 0x29 @@ -1161,6 +1217,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1245,10 +1306,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1259,7 +1321,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1270,7 +1332,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1314,6 +1376,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1322,6 +1385,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1343,10 +1407,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1361,8 +1426,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1373,6 +1438,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1403,6 +1469,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1528,6 +1595,7 @@ const ( SOL_SOCKET = 0xffff SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1009 @@ -1541,6 +1609,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1549,11 +1618,13 @@ const ( 
SO_ERROR = 0x1007 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x8 SO_LINGER = 0x80 SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0x100 @@ -1561,6 +1632,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x12 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1e SO_PRIORITY = 0xc @@ -1593,10 +1665,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1629,6 +1723,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x5407 TCGETA = 0x5401 TCGETS = 0x540d @@ -1651,6 +1751,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1660,6 +1761,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1680,6 +1783,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x5410 @@ -1709,6 +1813,7 @@ const ( TIOCGPKT = 0x40045438 TIOCGPTLCK = 0x40045439 TIOCGPTN = 0x40045430 + TIOCGPTPEER = 0x20005441 TIOCGRS485 = 0x4020542e TIOCGSERIAL = 0x5484 TIOCGSID = 0x7416 @@ -1769,6 +1874,7 @@ const ( TIOCSWINSZ = 0x80087467 TIOCVHANGUP = 0x5437 TOSTOP = 0x8000 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x801054d5 TUNDETACHFILTER = 0x801054d6 TUNGETFEATURES = 0x400454cf @@ -1794,6 +1900,8 @@ const ( TUNSETVNETHDRSZ = 0x800454d8 TUNSETVNETLE = 0x800454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x10 VEOL = 0x11 @@ -1824,6 +1932,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x40045702 + WDIOC_GETPRETIMEOUT = 0x40045709 + WDIOC_GETSTATUS = 0x40045701 + WDIOC_GETSUPPORT = 0x40285700 + WDIOC_GETTEMP = 0x40045703 + WDIOC_GETTIMELEFT = 0x4004570a + WDIOC_GETTIMEOUT = 0x40045707 + WDIOC_KEEPALIVE = 0x40045705 + WDIOC_SETOPTIONS = 0x40045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go b/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go index 84c0e3cc1..b9b9d6374 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 
0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x2000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x200007b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a 
IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x800 MAP_ANONYMOUS = 0x800 MAP_DENYWRITE = 0x2000 @@ -870,6 +918,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -902,6 +951,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -916,6 +966,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -934,6 +985,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -954,8 +1006,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -964,6 +1018,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1012,6 +1067,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1161,6 +1217,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1245,10 +1306,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1259,7 +1321,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1270,7 +1332,7 @@ const ( RTAX_UNSPEC = 0x0 
RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1314,6 +1376,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1322,6 +1385,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1343,10 +1407,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1361,8 +1426,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1373,6 +1438,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1403,6 +1469,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1528,6 +1595,7 @@ const ( SOL_SOCKET = 0xffff SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1009 @@ -1541,6 +1609,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1549,11 +1618,13 @@ const ( SO_ERROR = 0x1007 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x8 SO_LINGER = 0x80 SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0x100 @@ -1561,6 +1632,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x12 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1e SO_PRIORITY = 0xc @@ -1593,10 +1665,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1629,6 +1723,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x5407 TCGETA = 0x5401 TCGETS = 0x540d @@ -1651,6 +1751,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1660,6 +1761,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + 
TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1680,6 +1783,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x5410 @@ -1709,6 +1813,7 @@ const ( TIOCGPKT = 0x40045438 TIOCGPTLCK = 0x40045439 TIOCGPTN = 0x40045430 + TIOCGPTPEER = 0x20005441 TIOCGRS485 = 0x4020542e TIOCGSERIAL = 0x5484 TIOCGSID = 0x7416 @@ -1769,6 +1874,7 @@ const ( TIOCSWINSZ = 0x80087467 TIOCVHANGUP = 0x5437 TOSTOP = 0x8000 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x800854d5 TUNDETACHFILTER = 0x800854d6 TUNGETFEATURES = 0x400454cf @@ -1794,6 +1900,8 @@ const ( TUNSETVNETHDRSZ = 0x800454d8 TUNSETVNETLE = 0x800454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x10 VEOL = 0x11 @@ -1824,6 +1932,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x40045702 + WDIOC_GETPRETIMEOUT = 0x40045709 + WDIOC_GETSTATUS = 0x40045701 + WDIOC_GETSUPPORT = 0x40285700 + WDIOC_GETTEMP = 0x40045703 + WDIOC_GETTIMELEFT = 0x4004570a + WDIOC_GETTIMEOUT = 0x40045707 + WDIOC_KEEPALIVE = 0x40045705 + WDIOC_SETOPTIONS = 0x40045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go index 8e4606e06..509418ef2 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x17 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x4000 FFDLY = 0x4000 FLUSHO = 0x800000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 
0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x4000 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x200007b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x20 MAP_ANONYMOUS = 0x20 MAP_DENYWRITE = 0x800 @@ -869,6 +917,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -901,6 +950,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -915,6 +965,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 
0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -933,6 +984,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -955,8 +1007,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -965,6 +1019,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1013,6 +1068,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1155,7 +1211,7 @@ const ( PR_SET_NO_NEW_PRIVS = 0x26 PR_SET_PDEATHSIG = 0x1 PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 + PR_SET_PTRACER_ANY = 0xffffffffffffffff PR_SET_SECCOMP = 0x16 PR_SET_SECUREBITS = 0x1c PR_SET_THP_DISABLE = 0x29 @@ -1163,6 +1219,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1301,10 +1362,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1315,7 +1377,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1326,7 +1388,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1370,6 +1432,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1378,6 +1441,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1399,10 +1463,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1417,8 +1482,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1429,6 +1494,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1459,6 +1525,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1584,6 +1651,7 @@ const ( SOL_SOCKET = 0x1 SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 
0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1e @@ -1597,6 +1665,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1605,11 +1674,13 @@ const ( SO_ERROR = 0x4 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x9 SO_LINGER = 0xd SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0xa @@ -1617,6 +1688,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x15 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1f SO_PRIORITY = 0xc @@ -1648,10 +1720,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1684,6 +1778,12 @@ const ( TAB2 = 0x800 TAB3 = 0xc00 TABDLY = 0xc00 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x2000741f TCGETA = 0x40147417 TCGETS = 0x402c7413 @@ -1705,6 +1805,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1714,6 +1815,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1734,6 +1837,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x2 @@ -1761,6 +1865,7 @@ const ( TIOCGPKT = 0x40045438 TIOCGPTLCK = 0x40045439 TIOCGPTN = 0x40045430 + TIOCGPTPEER = 0x20005441 TIOCGRS485 = 0x542e TIOCGSERIAL = 0x541e TIOCGSID = 0x5429 @@ -1827,6 +1932,7 @@ const ( TIOCSWINSZ = 0x80087467 TIOCVHANGUP = 0x5437 TOSTOP = 0x400000 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x801054d5 TUNDETACHFILTER = 0x801054d6 TUNGETFEATURES = 0x400454cf @@ -1852,6 +1958,8 @@ const ( TUNSETVNETHDRSZ = 0x800454d8 TUNSETVNETLE = 0x800454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0x10 VEOF = 0x4 VEOL = 0x6 @@ -1881,6 +1989,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x40045702 + WDIOC_GETPRETIMEOUT = 0x40045709 + WDIOC_GETSTATUS = 0x40045701 + WDIOC_GETSUPPORT = 0x40285700 + WDIOC_GETTEMP = 0x40045703 + WDIOC_GETTIMELEFT = 0x4004570a + WDIOC_GETTIMEOUT = 0x40045707 + WDIOC_KEEPALIVE = 0x40045705 + WDIOC_SETOPTIONS = 0x40045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 @@ -2061,7 +2180,6 @@ const ( SIGTSTP = syscall.Signal(0x14) SIGTTIN = syscall.Signal(0x15) SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = 
syscall.Signal(0x1f) SIGURG = syscall.Signal(0x17) SIGUSR1 = syscall.Signal(0xa) SIGUSR2 = syscall.Signal(0xc) diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go index 16ed19311..26afbb8de 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x17 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x4000 FFDLY = 0x4000 FLUSHO = 0x800000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x4000 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x200007b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( 
IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ -769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x20 MAP_ANONYMOUS = 0x20 MAP_DENYWRITE = 0x800 @@ -869,6 +917,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -901,6 +950,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -915,6 +965,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -933,6 +984,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -955,8 +1007,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -965,6 +1019,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1013,6 +1068,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1155,7 +1211,7 @@ const ( PR_SET_NO_NEW_PRIVS = 0x26 PR_SET_PDEATHSIG = 0x1 PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 + PR_SET_PTRACER_ANY = 0xffffffffffffffff PR_SET_SECCOMP = 0x16 PR_SET_SECUREBITS = 0x1c PR_SET_THP_DISABLE = 0x29 @@ -1163,6 +1219,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 
0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1301,10 +1362,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1315,7 +1377,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1326,7 +1388,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1370,6 +1432,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1378,6 +1441,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ -1399,10 +1463,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1417,8 +1482,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1429,6 +1494,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1459,6 +1525,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1584,6 +1651,7 @@ const ( SOL_SOCKET = 0x1 SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1e @@ -1597,6 +1665,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1605,11 +1674,13 @@ const ( SO_ERROR = 0x4 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x9 SO_LINGER = 0xd SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0xa @@ -1617,6 +1688,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x15 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1f SO_PRIORITY = 0xc @@ -1648,10 +1720,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 
0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1684,6 +1778,12 @@ const ( TAB2 = 0x800 TAB3 = 0xc00 TABDLY = 0xc00 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x2000741f TCGETA = 0x40147417 TCGETS = 0x402c7413 @@ -1705,6 +1805,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1714,6 +1815,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1734,6 +1837,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x2 @@ -1761,6 +1865,7 @@ const ( TIOCGPKT = 0x40045438 TIOCGPTLCK = 0x40045439 TIOCGPTN = 0x40045430 + TIOCGPTPEER = 0x20005441 TIOCGRS485 = 0x542e TIOCGSERIAL = 0x541e TIOCGSID = 0x5429 @@ -1827,6 +1932,7 @@ const ( TIOCSWINSZ = 0x80087467 TIOCVHANGUP = 0x5437 TOSTOP = 0x400000 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x801054d5 TUNDETACHFILTER = 0x801054d6 TUNGETFEATURES = 0x400454cf @@ -1852,6 +1958,8 @@ const ( TUNSETVNETHDRSZ = 0x800454d8 TUNSETVNETLE = 0x800454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0x10 VEOF = 0x4 VEOL = 0x6 @@ -1881,6 +1989,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x40045702 + WDIOC_GETPRETIMEOUT = 0x40045709 + WDIOC_GETSTATUS = 0x40045701 + WDIOC_GETSUPPORT = 0x40285700 + WDIOC_GETTEMP = 0x40045703 + WDIOC_GETTIMELEFT = 0x4004570a + WDIOC_GETTIMEOUT = 0x40045707 + WDIOC_KEEPALIVE = 0x40045705 + WDIOC_SETOPTIONS = 0x40045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 @@ -2061,7 +2180,6 @@ const ( SIGTSTP = syscall.Signal(0x14) SIGTTIN = syscall.Signal(0x15) SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) SIGURG = syscall.Signal(0x17) SIGUSR1 = syscall.Signal(0xa) SIGUSR2 = syscall.Signal(0xc) diff --git a/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go b/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go index bd385f809..eeb9941de 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go @@ -36,7 +36,7 @@ const ( AF_KEY = 0xf AF_LLC = 0x1a AF_LOCAL = 0x1 - AF_MAX = 0x2b + AF_MAX = 0x2c AF_MPLS = 0x1c AF_NETBEUI = 0xd AF_NETLINK = 0x10 @@ -51,6 +51,7 @@ const ( AF_ROUTE = 0x10 AF_RXRPC = 0x21 AF_SECURITY = 0xe + AF_SMC = 0x2b AF_SNA = 0x16 AF_TIPC = 0x1e AF_UNIX = 0x1 @@ -120,6 +121,7 @@ const ( ARPHRD_PPP = 0x200 ARPHRD_PRONET = 0x4 ARPHRD_RAWHDLC = 0x206 + ARPHRD_RAWIP = 0x207 ARPHRD_ROSE = 0x10e ARPHRD_RSRVD = 0x104 ARPHRD_SIT = 0x308 @@ -129,6 +131,7 @@ const ( ARPHRD_TUNNEL = 0x300 ARPHRD_TUNNEL6 = 0x301 ARPHRD_VOID = 0xffff + ARPHRD_VSOCKMON = 0x33a ARPHRD_X25 = 0x10f B0 = 0x0 B1000000 = 0x1008 @@ -388,13 +391,16 @@ const ( ETH_P_DSA = 0x1b ETH_P_ECONET = 0x18 ETH_P_EDSA = 0xdada + ETH_P_ERSPAN = 0x88be ETH_P_FCOE = 0x8906 ETH_P_FIP = 0x8914 ETH_P_HDLC = 0x19 ETH_P_HSR = 0x892f + ETH_P_IBOE = 0x8915 ETH_P_IEEE802154 = 0xf6 ETH_P_IEEEPUP = 0xa00 
ETH_P_IEEEPUPAT = 0xa01 + ETH_P_IFE = 0xed3e ETH_P_IP = 0x800 ETH_P_IPV6 = 0x86dd ETH_P_IPX = 0x8137 @@ -405,11 +411,13 @@ const ( ETH_P_LOOP = 0x60 ETH_P_LOOPBACK = 0x9000 ETH_P_MACSEC = 0x88e5 + ETH_P_MAP = 0xf9 ETH_P_MOBITEX = 0x15 ETH_P_MPLS_MC = 0x8848 ETH_P_MPLS_UC = 0x8847 ETH_P_MVRP = 0x88f5 ETH_P_NCSI = 0x88f8 + ETH_P_NSH = 0x894f ETH_P_PAE = 0x888e ETH_P_PAUSE = 0x8808 ETH_P_PHONET = 0xf5 @@ -453,6 +461,8 @@ const ( FF1 = 0x8000 FFDLY = 0x8000 FLUSHO = 0x1000 + FS_ENCRYPTION_MODE_AES_128_CBC = 0x5 + FS_ENCRYPTION_MODE_AES_128_CTS = 0x6 FS_ENCRYPTION_MODE_AES_256_CBC = 0x3 FS_ENCRYPTION_MODE_AES_256_CTS = 0x4 FS_ENCRYPTION_MODE_AES_256_GCM = 0x2 @@ -471,6 +481,7 @@ const ( FS_POLICY_FLAGS_PAD_8 = 0x1 FS_POLICY_FLAGS_PAD_MASK = 0x3 FS_POLICY_FLAGS_VALID = 0x3 + F_ADD_SEALS = 0x409 F_DUPFD = 0x0 F_DUPFD_CLOEXEC = 0x406 F_EXLCK = 0x4 @@ -483,6 +494,9 @@ const ( F_GETOWN_EX = 0x10 F_GETPIPE_SZ = 0x408 F_GETSIG = 0xb + F_GET_FILE_RW_HINT = 0x40d + F_GET_RW_HINT = 0x40b + F_GET_SEALS = 0x40a F_LOCK = 0x1 F_NOTIFY = 0x402 F_OFD_GETLK = 0x24 @@ -490,6 +504,10 @@ const ( F_OFD_SETLKW = 0x26 F_OK = 0x0 F_RDLCK = 0x0 + F_SEAL_GROW = 0x4 + F_SEAL_SEAL = 0x1 + F_SEAL_SHRINK = 0x2 + F_SEAL_WRITE = 0x8 F_SETFD = 0x2 F_SETFL = 0x4 F_SETLEASE = 0x400 @@ -501,12 +519,27 @@ const ( F_SETOWN_EX = 0xf F_SETPIPE_SZ = 0x407 F_SETSIG = 0xa + F_SET_FILE_RW_HINT = 0x40e + F_SET_RW_HINT = 0x40c F_SHLCK = 0x8 F_TEST = 0x3 F_TLOCK = 0x2 F_ULOCK = 0x0 F_UNLCK = 0x2 F_WRLCK = 0x1 + GENL_ADMIN_PERM = 0x1 + GENL_CMD_CAP_DO = 0x2 + GENL_CMD_CAP_DUMP = 0x4 + GENL_CMD_CAP_HASPOL = 0x8 + GENL_HDRLEN = 0x4 + GENL_ID_CTRL = 0x10 + GENL_ID_PMCRAID = 0x12 + GENL_ID_VFS_DQUOT = 0x11 + GENL_MAX_ID = 0x3ff + GENL_MIN_ID = 0x10 + GENL_NAMSIZ = 0x10 + GENL_START_ALLOC = 0x13 + GENL_UNS_ADMIN_PERM = 0x10 GRND_NONBLOCK = 0x1 GRND_RANDOM = 0x2 HUPCL = 0x400 @@ -543,6 +576,8 @@ const ( IFF_MASTER = 0x400 IFF_MULTICAST = 0x1000 IFF_MULTI_QUEUE = 0x100 + IFF_NAPI = 0x10 + IFF_NAPI_FRAGS = 0x20 IFF_NOARP = 0x80 IFF_NOFILTER = 0x1000 IFF_NOTRAILERS = 0x20 @@ -605,6 +640,7 @@ const ( IN_OPEN = 0x20 IN_Q_OVERFLOW = 0x4000 IN_UNMOUNT = 0x2000 + IOCTL_VM_SOCKETS_GET_LOCAL_CID = 0x7b9 IPPROTO_AH = 0x33 IPPROTO_BEETPH = 0x5e IPPROTO_COMP = 0x6c @@ -644,8 +680,10 @@ const ( IPV6_2292PKTOPTIONS = 0x6 IPV6_2292RTHDR = 0x5 IPV6_ADDRFORM = 0x1 + IPV6_ADDR_PREFERENCES = 0x48 IPV6_ADD_MEMBERSHIP = 0x14 IPV6_AUTHHDR = 0xa + IPV6_AUTOFLOWLABEL = 0x46 IPV6_CHECKSUM = 0x7 IPV6_DONTFRAG = 0x3e IPV6_DROP_MEMBERSHIP = 0x15 @@ -658,12 +696,14 @@ const ( IPV6_JOIN_GROUP = 0x14 IPV6_LEAVE_ANYCAST = 0x1c IPV6_LEAVE_GROUP = 0x15 + IPV6_MINHOPCOUNT = 0x49 IPV6_MTU = 0x18 IPV6_MTU_DISCOVER = 0x17 IPV6_MULTICAST_HOPS = 0x12 IPV6_MULTICAST_IF = 0x11 IPV6_MULTICAST_LOOP = 0x13 IPV6_NEXTHOP = 0x9 + IPV6_ORIGDSTADDR = 0x4a IPV6_PATHMTU = 0x3d IPV6_PKTINFO = 0x32 IPV6_PMTUDISC_DO = 0x2 @@ -674,8 +714,10 @@ const ( IPV6_PMTUDISC_WANT = 0x1 IPV6_RECVDSTOPTS = 0x3a IPV6_RECVERR = 0x19 + IPV6_RECVFRAGSIZE = 0x4d IPV6_RECVHOPLIMIT = 0x33 IPV6_RECVHOPOPTS = 0x35 + IPV6_RECVORIGDSTADDR = 0x4a IPV6_RECVPATHMTU = 0x3c IPV6_RECVPKTINFO = 0x31 IPV6_RECVRTHDR = 0x38 @@ -689,7 +731,9 @@ const ( IPV6_RXDSTOPTS = 0x3b IPV6_RXHOPOPTS = 0x36 IPV6_TCLASS = 0x43 + IPV6_TRANSPARENT = 0x4b IPV6_UNICAST_HOPS = 0x10 + IPV6_UNICAST_IF = 0x4c IPV6_V6ONLY = 0x1a IPV6_XFRM_POLICY = 0x23 IP_ADD_MEMBERSHIP = 0x23 @@ -732,6 +776,7 @@ const ( IP_PMTUDISC_PROBE = 0x3 IP_PMTUDISC_WANT = 0x1 IP_RECVERR = 0xb + IP_RECVFRAGSIZE = 0x19 IP_RECVOPTS = 0x6 IP_RECVORIGDSTADDR = 0x14 IP_RECVRETOPTS = 0x7 @@ 
-769,6 +814,7 @@ const ( KEYCTL_NEGATE = 0xd KEYCTL_READ = 0xb KEYCTL_REJECT = 0x13 + KEYCTL_RESTRICT_KEYRING = 0x1d KEYCTL_REVOKE = 0x3 KEYCTL_SEARCH = 0xa KEYCTL_SESSION_TO_PARENT = 0x12 @@ -816,6 +862,7 @@ const ( MADV_FREE = 0x8 MADV_HUGEPAGE = 0xe MADV_HWPOISON = 0x64 + MADV_KEEPONFORK = 0x13 MADV_MERGEABLE = 0xc MADV_NOHUGEPAGE = 0xf MADV_NORMAL = 0x0 @@ -824,6 +871,7 @@ const ( MADV_SEQUENTIAL = 0x2 MADV_UNMERGEABLE = 0xd MADV_WILLNEED = 0x3 + MADV_WIPEONFORK = 0x12 MAP_ANON = 0x20 MAP_ANONYMOUS = 0x20 MAP_DENYWRITE = 0x800 @@ -869,6 +917,7 @@ const ( MSG_TRYHARD = 0x4 MSG_WAITALL = 0x100 MSG_WAITFORONE = 0x10000 + MSG_ZEROCOPY = 0x4000000 MS_ACTIVE = 0x40000000 MS_ASYNC = 0x1 MS_BIND = 0x1000 @@ -901,6 +950,7 @@ const ( MS_SILENT = 0x8000 MS_SLAVE = 0x80000 MS_STRICTATIME = 0x1000000 + MS_SUBMOUNT = 0x4000000 MS_SYNC = 0x4 MS_SYNCHRONOUS = 0x10 MS_UNBINDABLE = 0x20000 @@ -915,6 +965,7 @@ const ( NETLINK_DNRTMSG = 0xe NETLINK_DROP_MEMBERSHIP = 0x2 NETLINK_ECRYPTFS = 0x13 + NETLINK_EXT_ACK = 0xb NETLINK_FIB_LOOKUP = 0xa NETLINK_FIREWALL = 0x3 NETLINK_GENERIC = 0x10 @@ -933,6 +984,7 @@ const ( NETLINK_RX_RING = 0x6 NETLINK_SCSITRANSPORT = 0x12 NETLINK_SELINUX = 0x7 + NETLINK_SMC = 0x16 NETLINK_SOCK_DIAG = 0x4 NETLINK_TX_RING = 0x7 NETLINK_UNUSED = 0x1 @@ -953,8 +1005,10 @@ const ( NLMSG_NOOP = 0x1 NLMSG_OVERRUN = 0x4 NLM_F_ACK = 0x4 + NLM_F_ACK_TLVS = 0x200 NLM_F_APPEND = 0x800 NLM_F_ATOMIC = 0x400 + NLM_F_CAPPED = 0x100 NLM_F_CREATE = 0x400 NLM_F_DUMP = 0x300 NLM_F_DUMP_FILTERED = 0x20 @@ -963,6 +1017,7 @@ const ( NLM_F_EXCL = 0x200 NLM_F_MATCH = 0x200 NLM_F_MULTI = 0x2 + NLM_F_NONREC = 0x100 NLM_F_REPLACE = 0x100 NLM_F_REQUEST = 0x1 NLM_F_ROOT = 0x100 @@ -1011,6 +1066,7 @@ const ( PACKET_FANOUT_EBPF = 0x7 PACKET_FANOUT_FLAG_DEFRAG = 0x8000 PACKET_FANOUT_FLAG_ROLLOVER = 0x1000 + PACKET_FANOUT_FLAG_UNIQUEID = 0x2000 PACKET_FANOUT_HASH = 0x0 PACKET_FANOUT_LB = 0x1 PACKET_FANOUT_QM = 0x5 @@ -1152,7 +1208,7 @@ const ( PR_SET_NO_NEW_PRIVS = 0x26 PR_SET_PDEATHSIG = 0x1 PR_SET_PTRACER = 0x59616d61 - PR_SET_PTRACER_ANY = -0x1 + PR_SET_PTRACER_ANY = 0xffffffffffffffff PR_SET_SECCOMP = 0x16 PR_SET_SECUREBITS = 0x1c PR_SET_THP_DISABLE = 0x29 @@ -1160,6 +1216,11 @@ const ( PR_SET_TIMING = 0xe PR_SET_TSC = 0x1a PR_SET_UNALIGN = 0x6 + PR_SVE_GET_VL = 0x33 + PR_SVE_SET_VL = 0x32 + PR_SVE_SET_VL_ONEXEC = 0x40000 + PR_SVE_VL_INHERIT = 0x20000 + PR_SVE_VL_LEN_MASK = 0xffff PR_TASK_PERF_EVENTS_DISABLE = 0x1f PR_TASK_PERF_EVENTS_ENABLE = 0x20 PR_TIMING_STATISTICAL = 0x0 @@ -1305,10 +1366,11 @@ const ( RLIMIT_RTTIME = 0xf RLIMIT_SIGPENDING = 0xb RLIMIT_STACK = 0x3 - RLIM_INFINITY = -0x1 + RLIM_INFINITY = 0xffffffffffffffff RTAX_ADVMSS = 0x8 RTAX_CC_ALGO = 0x10 RTAX_CWND = 0x7 + RTAX_FASTOPEN_NO_COOKIE = 0x11 RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 @@ -1319,7 +1381,7 @@ const ( RTAX_INITCWND = 0xb RTAX_INITRWND = 0xe RTAX_LOCK = 0x1 - RTAX_MAX = 0x10 + RTAX_MAX = 0x11 RTAX_MTU = 0x2 RTAX_QUICKACK = 0xf RTAX_REORDERING = 0x9 @@ -1330,7 +1392,7 @@ const ( RTAX_UNSPEC = 0x0 RTAX_WINDOW = 0x3 RTA_ALIGNTO = 0x4 - RTA_MAX = 0x19 + RTA_MAX = 0x1a RTCF_DIRECTSRC = 0x4000000 RTCF_DOREDIRECT = 0x1000000 RTCF_LOG = 0x2000000 @@ -1374,6 +1436,7 @@ const ( RTM_DELLINK = 0x11 RTM_DELMDB = 0x55 RTM_DELNEIGH = 0x1d + RTM_DELNETCONF = 0x51 RTM_DELNSID = 0x59 RTM_DELQDISC = 0x25 RTM_DELROUTE = 0x19 @@ -1382,6 +1445,7 @@ const ( RTM_DELTFILTER = 0x2d RTM_F_CLONED = 0x200 RTM_F_EQUALIZE = 0x400 + RTM_F_FIB_MATCH = 0x2000 RTM_F_LOOKUP_TABLE = 0x1000 RTM_F_NOTIFY = 0x100 RTM_F_PREFIX = 0x800 @@ 
-1403,10 +1467,11 @@ const ( RTM_GETSTATS = 0x5e RTM_GETTCLASS = 0x2a RTM_GETTFILTER = 0x2e - RTM_MAX = 0x5f + RTM_MAX = 0x63 RTM_NEWACTION = 0x30 RTM_NEWADDR = 0x14 RTM_NEWADDRLABEL = 0x48 + RTM_NEWCACHEREPORT = 0x60 RTM_NEWLINK = 0x10 RTM_NEWMDB = 0x54 RTM_NEWNDUSEROPT = 0x44 @@ -1421,8 +1486,8 @@ const ( RTM_NEWSTATS = 0x5c RTM_NEWTCLASS = 0x28 RTM_NEWTFILTER = 0x2c - RTM_NR_FAMILIES = 0x14 - RTM_NR_MSGTYPES = 0x50 + RTM_NR_FAMILIES = 0x15 + RTM_NR_MSGTYPES = 0x54 RTM_SETDCB = 0x4f RTM_SETLINK = 0x13 RTM_SETNEIGHTBL = 0x43 @@ -1433,6 +1498,7 @@ const ( RTNH_F_OFFLOAD = 0x8 RTNH_F_ONLINK = 0x4 RTNH_F_PERVASIVE = 0x2 + RTNH_F_UNRESOLVED = 0x20 RTN_MAX = 0xb RTPROT_BABEL = 0x2a RTPROT_BIRD = 0xc @@ -1463,6 +1529,7 @@ const ( SCM_TIMESTAMP = 0x1d SCM_TIMESTAMPING = 0x25 SCM_TIMESTAMPING_OPT_STATS = 0x36 + SCM_TIMESTAMPING_PKTINFO = 0x3a SCM_TIMESTAMPNS = 0x23 SCM_WIFI_STATUS = 0x29 SECCOMP_MODE_DISABLED = 0x0 @@ -1588,6 +1655,7 @@ const ( SOL_SOCKET = 0x1 SOL_TCP = 0x6 SOL_TIPC = 0x10f + SOL_TLS = 0x11a SOL_X25 = 0x106 SOMAXCONN = 0x80 SO_ACCEPTCONN = 0x1e @@ -1601,6 +1669,7 @@ const ( SO_BSDCOMPAT = 0xe SO_BUSY_POLL = 0x2e SO_CNX_ADVICE = 0x35 + SO_COOKIE = 0x39 SO_DEBUG = 0x1 SO_DETACH_BPF = 0x1b SO_DETACH_FILTER = 0x1b @@ -1609,11 +1678,13 @@ const ( SO_ERROR = 0x4 SO_GET_FILTER = 0x1a SO_INCOMING_CPU = 0x31 + SO_INCOMING_NAPI_ID = 0x38 SO_KEEPALIVE = 0x9 SO_LINGER = 0xd SO_LOCK_FILTER = 0x2c SO_MARK = 0x24 SO_MAX_PACING_RATE = 0x2f + SO_MEMINFO = 0x37 SO_NOFCS = 0x2b SO_NO_CHECK = 0xb SO_OOBINLINE = 0xa @@ -1621,6 +1692,7 @@ const ( SO_PASSSEC = 0x22 SO_PEEK_OFF = 0x2a SO_PEERCRED = 0x11 + SO_PEERGROUPS = 0x3b SO_PEERNAME = 0x1c SO_PEERSEC = 0x1f SO_PRIORITY = 0xc @@ -1652,10 +1724,32 @@ const ( SO_VM_SOCKETS_PEER_HOST_VM_ID = 0x3 SO_VM_SOCKETS_TRUSTED = 0x5 SO_WIFI_STATUS = 0x29 + SO_ZEROCOPY = 0x3c SPLICE_F_GIFT = 0x8 SPLICE_F_MORE = 0x4 SPLICE_F_MOVE = 0x1 SPLICE_F_NONBLOCK = 0x2 + STATX_ALL = 0xfff + STATX_ATIME = 0x20 + STATX_ATTR_APPEND = 0x20 + STATX_ATTR_AUTOMOUNT = 0x1000 + STATX_ATTR_COMPRESSED = 0x4 + STATX_ATTR_ENCRYPTED = 0x800 + STATX_ATTR_IMMUTABLE = 0x10 + STATX_ATTR_NODUMP = 0x40 + STATX_BASIC_STATS = 0x7ff + STATX_BLOCKS = 0x400 + STATX_BTIME = 0x800 + STATX_CTIME = 0x80 + STATX_GID = 0x10 + STATX_INO = 0x100 + STATX_MODE = 0x2 + STATX_MTIME = 0x40 + STATX_NLINK = 0x4 + STATX_SIZE = 0x200 + STATX_TYPE = 0x1 + STATX_UID = 0x8 + STATX__RESERVED = 0x80000000 S_BLKSIZE = 0x200 S_IEXEC = 0x40 S_IFBLK = 0x6000 @@ -1688,6 +1782,12 @@ const ( TAB2 = 0x1000 TAB3 = 0x1800 TABDLY = 0x1800 + TASKSTATS_CMD_ATTR_MAX = 0x4 + TASKSTATS_CMD_MAX = 0x2 + TASKSTATS_GENL_NAME = "TASKSTATS" + TASKSTATS_GENL_VERSION = 0x1 + TASKSTATS_TYPE_MAX = 0x6 + TASKSTATS_VERSION = 0x8 TCFLSH = 0x540b TCGETA = 0x5405 TCGETS = 0x5401 @@ -1711,6 +1811,7 @@ const ( TCP_CORK = 0x3 TCP_DEFER_ACCEPT = 0x9 TCP_FASTOPEN = 0x17 + TCP_FASTOPEN_CONNECT = 0x1e TCP_INFO = 0xb TCP_KEEPCNT = 0x6 TCP_KEEPIDLE = 0x4 @@ -1720,6 +1821,8 @@ const ( TCP_MAXWIN = 0xffff TCP_MAX_WINSHIFT = 0xe TCP_MD5SIG = 0xe + TCP_MD5SIG_EXT = 0x20 + TCP_MD5SIG_FLAG_PREFIX = 0x1 TCP_MD5SIG_MAXKEYLEN = 0x50 TCP_MSS = 0x200 TCP_MSS_DEFAULT = 0x218 @@ -1740,6 +1843,7 @@ const ( TCP_THIN_DUPACK = 0x11 TCP_THIN_LINEAR_TIMEOUTS = 0x10 TCP_TIMESTAMP = 0x18 + TCP_ULP = 0x1f TCP_USER_TIMEOUT = 0x12 TCP_WINDOW_CLAMP = 0xa TCSAFLUSH = 0x2 @@ -1770,6 +1874,7 @@ const ( TIOCGPKT = 0x80045438 TIOCGPTLCK = 0x80045439 TIOCGPTN = 0x80045430 + TIOCGPTPEER = 0x5441 TIOCGRS485 = 0x542e TIOCGSERIAL = 0x541e TIOCGSID = 0x5429 @@ -1827,6 +1932,7 @@ const ( 
TIOCSWINSZ = 0x5414 TIOCVHANGUP = 0x5437 TOSTOP = 0x100 + TS_COMM_LEN = 0x20 TUNATTACHFILTER = 0x401054d5 TUNDETACHFILTER = 0x401054d6 TUNGETFEATURES = 0x800454cf @@ -1852,6 +1958,8 @@ const ( TUNSETVNETHDRSZ = 0x400454d8 TUNSETVNETLE = 0x400454dc UMOUNT_NOFOLLOW = 0x8 + UTIME_NOW = 0x3fffffff + UTIME_OMIT = 0x3ffffffe VDISCARD = 0xd VEOF = 0x4 VEOL = 0xb @@ -1881,6 +1989,17 @@ const ( WALL = 0x40000000 WCLONE = 0x80000000 WCONTINUED = 0x8 + WDIOC_GETBOOTSTATUS = 0x80045702 + WDIOC_GETPRETIMEOUT = 0x80045709 + WDIOC_GETSTATUS = 0x80045701 + WDIOC_GETSUPPORT = 0x80285700 + WDIOC_GETTEMP = 0x80045703 + WDIOC_GETTIMELEFT = 0x8004570a + WDIOC_GETTIMEOUT = 0x80045707 + WDIOC_KEEPALIVE = 0x80045705 + WDIOC_SETOPTIONS = 0x80045704 + WDIOC_SETPRETIMEOUT = 0xc0045708 + WDIOC_SETTIMEOUT = 0xc0045706 WEXITED = 0x4 WNOHANG = 0x1 WNOTHREAD = 0x20000000 @@ -2061,7 +2180,6 @@ const ( SIGTSTP = syscall.Signal(0x14) SIGTTIN = syscall.Signal(0x15) SIGTTOU = syscall.Signal(0x16) - SIGUNUSED = syscall.Signal(0x1f) SIGURG = syscall.Signal(0x17) SIGUSR1 = syscall.Signal(0xa) SIGUSR2 = syscall.Signal(0xc) diff --git a/vendor/golang.org/x/sys/unix/zerrors_netbsd_386.go b/vendor/golang.org/x/sys/unix/zerrors_netbsd_386.go index b4338d5f2..1612b6609 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_netbsd_386.go +++ b/vendor/golang.org/x/sys/unix/zerrors_netbsd_386.go @@ -1,5 +1,5 @@ // mkerrors.sh -m32 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. // +build 386,netbsd @@ -169,6 +169,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 CTL_QUERY = -0x2 @@ -581,6 +583,7 @@ const ( F_UNLCK = 0x2 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -970,6 +973,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 diff --git a/vendor/golang.org/x/sys/unix/zerrors_netbsd_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_netbsd_amd64.go index 4994437b6..c994ab610 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_netbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_netbsd_amd64.go @@ -1,5 +1,5 @@ // mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. // +build amd64,netbsd @@ -169,6 +169,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 CTL_QUERY = -0x2 @@ -571,6 +573,7 @@ const ( F_UNLCK = 0x2 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -960,6 +963,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 diff --git a/vendor/golang.org/x/sys/unix/zerrors_netbsd_arm.go b/vendor/golang.org/x/sys/unix/zerrors_netbsd_arm.go index ac85ca645..a8f9efede 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_netbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/zerrors_netbsd_arm.go @@ -1,5 +1,5 @@ // mkerrors.sh -marm -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build arm,netbsd @@ -161,6 +161,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 CTL_QUERY = -0x2 @@ -563,6 +565,7 @@ const ( F_UNLCK = 0x2 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -952,6 +955,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LOCK_EX = 0x2 LOCK_NB = 0x4 LOCK_SH = 0x1 @@ -1006,6 +1013,9 @@ const ( MSG_TRUNC = 0x10 MSG_USERFLAGS = 0xffffff MSG_WAITALL = 0x40 + MS_ASYNC = 0x1 + MS_INVALIDATE = 0x2 + MS_SYNC = 0x4 NAME_MAX = 0x1ff NET_RT_DUMP = 0x1 NET_RT_FLAGS = 0x2 diff --git a/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go b/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go index 3322e998d..04e4f3319 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go +++ b/vendor/golang.org/x/sys/unix/zerrors_openbsd_386.go @@ -1,5 +1,5 @@ // mkerrors.sh -m32 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. // +build 386,openbsd @@ -157,6 +157,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DIOCOSFPFLUSH = 0x2000444e @@ -442,6 +444,7 @@ const ( F_UNLCK = 0x2 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -860,6 +863,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LCNT_OVERLOAD_FLUSH = 0x6 LOCK_EX = 0x2 LOCK_NB = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go index 1758ecca9..c80ff9812 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_openbsd_amd64.go @@ -1,5 +1,5 @@ // mkerrors.sh -m64 -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. // +build amd64,openbsd @@ -157,6 +157,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DIOCOSFPFLUSH = 0x2000444e @@ -442,6 +444,7 @@ const ( F_UNLCK = 0x2 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -860,6 +863,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LCNT_OVERLOAD_FLUSH = 0x6 LOCK_EX = 0x2 LOCK_NB = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go b/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go index 3ed0b2602..4c320495c 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/zerrors_openbsd_arm.go @@ -1,5 +1,5 @@ // mkerrors.sh -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. 
// Created by cgo -godefs - DO NOT EDIT // cgo -godefs -- _const.go @@ -157,6 +157,8 @@ const ( CSTOP = 0x13 CSTOPB = 0x400 CSUSP = 0x1a + CTL_HW = 0x6 + CTL_KERN = 0x1 CTL_MAXNAME = 0xc CTL_NET = 0x4 DIOCOSFPFLUSH = 0x2000444e @@ -441,6 +443,7 @@ const ( F_UNLCK = 0x2 F_WRLCK = 0x3 HUPCL = 0x4000 + HW_MACHINE = 0x1 ICANON = 0x100 ICMP6_FILTER = 0x12 ICRNL = 0x100 @@ -859,6 +862,10 @@ const ( IXANY = 0x800 IXOFF = 0x400 IXON = 0x200 + KERN_HOSTNAME = 0xa + KERN_OSRELEASE = 0x2 + KERN_OSTYPE = 0x1 + KERN_VERSION = 0x4 LCNT_OVERLOAD_FLUSH = 0x6 LOCK_EX = 0x2 LOCK_NB = 0x4 diff --git a/vendor/golang.org/x/sys/unix/zerrors_solaris_amd64.go b/vendor/golang.org/x/sys/unix/zerrors_solaris_amd64.go index 81e83d78f..09eedb009 100644 --- a/vendor/golang.org/x/sys/unix/zerrors_solaris_amd64.go +++ b/vendor/golang.org/x/sys/unix/zerrors_solaris_amd64.go @@ -664,6 +664,8 @@ const ( MS_OLDSYNC = 0x0 MS_SYNC = 0x4 M_FLUSH = 0x86 + NAME_MAX = 0xff + NEWDEV = 0x1 NL0 = 0x0 NL1 = 0x100 NLDLY = 0x100 @@ -672,6 +674,9 @@ const ( OFDEL = 0x80 OFILL = 0x40 OLCUC = 0x2 + OLDDEV = 0x0 + ONBITSMAJOR = 0x7 + ONBITSMINOR = 0x8 ONLCR = 0x4 ONLRET = 0x20 ONOCR = 0x10 @@ -1105,6 +1110,7 @@ const ( VEOL = 0x5 VEOL2 = 0x6 VERASE = 0x2 + VERASE2 = 0x11 VINTR = 0x0 VKILL = 0x3 VLNEXT = 0xf diff --git a/vendor/golang.org/x/sys/unix/zptrace386_linux.go b/vendor/golang.org/x/sys/unix/zptrace386_linux.go new file mode 100644 index 000000000..2d21c49e1 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/zptrace386_linux.go @@ -0,0 +1,80 @@ +// Code generated by linux/mkall.go generatePtracePair(386, amd64). DO NOT EDIT. + +// +build linux +// +build 386 amd64 + +package unix + +import "unsafe" + +// PtraceRegs386 is the registers used by 386 binaries. +type PtraceRegs386 struct { + Ebx int32 + Ecx int32 + Edx int32 + Esi int32 + Edi int32 + Ebp int32 + Eax int32 + Xds int32 + Xes int32 + Xfs int32 + Xgs int32 + Orig_eax int32 + Eip int32 + Xcs int32 + Eflags int32 + Esp int32 + Xss int32 +} + +// PtraceGetRegs386 fetches the registers used by 386 binaries. +func PtraceGetRegs386(pid int, regsout *PtraceRegs386) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegs386 sets the registers used by 386 binaries. +func PtraceSetRegs386(pid int, regs *PtraceRegs386) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} + +// PtraceRegsAmd64 is the registers used by amd64 binaries. +type PtraceRegsAmd64 struct { + R15 uint64 + R14 uint64 + R13 uint64 + R12 uint64 + Rbp uint64 + Rbx uint64 + R11 uint64 + R10 uint64 + R9 uint64 + R8 uint64 + Rax uint64 + Rcx uint64 + Rdx uint64 + Rsi uint64 + Rdi uint64 + Orig_rax uint64 + Rip uint64 + Cs uint64 + Eflags uint64 + Rsp uint64 + Ss uint64 + Fs_base uint64 + Gs_base uint64 + Ds uint64 + Es uint64 + Fs uint64 + Gs uint64 +} + +// PtraceGetRegsAmd64 fetches the registers used by amd64 binaries. +func PtraceGetRegsAmd64(pid int, regsout *PtraceRegsAmd64) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegsAmd64 sets the registers used by amd64 binaries. 
+func PtraceSetRegsAmd64(pid int, regs *PtraceRegsAmd64) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} diff --git a/vendor/golang.org/x/sys/unix/zptracearm_linux.go b/vendor/golang.org/x/sys/unix/zptracearm_linux.go new file mode 100644 index 000000000..faf23bbed --- /dev/null +++ b/vendor/golang.org/x/sys/unix/zptracearm_linux.go @@ -0,0 +1,41 @@ +// Code generated by linux/mkall.go generatePtracePair(arm, arm64). DO NOT EDIT. + +// +build linux +// +build arm arm64 + +package unix + +import "unsafe" + +// PtraceRegsArm is the registers used by arm binaries. +type PtraceRegsArm struct { + Uregs [18]uint32 +} + +// PtraceGetRegsArm fetches the registers used by arm binaries. +func PtraceGetRegsArm(pid int, regsout *PtraceRegsArm) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegsArm sets the registers used by arm binaries. +func PtraceSetRegsArm(pid int, regs *PtraceRegsArm) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} + +// PtraceRegsArm64 is the registers used by arm64 binaries. +type PtraceRegsArm64 struct { + Regs [31]uint64 + Sp uint64 + Pc uint64 + Pstate uint64 +} + +// PtraceGetRegsArm64 fetches the registers used by arm64 binaries. +func PtraceGetRegsArm64(pid int, regsout *PtraceRegsArm64) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegsArm64 sets the registers used by arm64 binaries. +func PtraceSetRegsArm64(pid int, regs *PtraceRegsArm64) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} diff --git a/vendor/golang.org/x/sys/unix/zptracemips_linux.go b/vendor/golang.org/x/sys/unix/zptracemips_linux.go new file mode 100644 index 000000000..c431131e6 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/zptracemips_linux.go @@ -0,0 +1,50 @@ +// Code generated by linux/mkall.go generatePtracePair(mips, mips64). DO NOT EDIT. + +// +build linux +// +build mips mips64 + +package unix + +import "unsafe" + +// PtraceRegsMips is the registers used by mips binaries. +type PtraceRegsMips struct { + Regs [32]uint64 + Lo uint64 + Hi uint64 + Epc uint64 + Badvaddr uint64 + Status uint64 + Cause uint64 +} + +// PtraceGetRegsMips fetches the registers used by mips binaries. +func PtraceGetRegsMips(pid int, regsout *PtraceRegsMips) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegsMips sets the registers used by mips binaries. +func PtraceSetRegsMips(pid int, regs *PtraceRegsMips) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} + +// PtraceRegsMips64 is the registers used by mips64 binaries. +type PtraceRegsMips64 struct { + Regs [32]uint64 + Lo uint64 + Hi uint64 + Epc uint64 + Badvaddr uint64 + Status uint64 + Cause uint64 +} + +// PtraceGetRegsMips64 fetches the registers used by mips64 binaries. +func PtraceGetRegsMips64(pid int, regsout *PtraceRegsMips64) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegsMips64 sets the registers used by mips64 binaries. 
+func PtraceSetRegsMips64(pid int, regs *PtraceRegsMips64) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} diff --git a/vendor/golang.org/x/sys/unix/zptracemipsle_linux.go b/vendor/golang.org/x/sys/unix/zptracemipsle_linux.go new file mode 100644 index 000000000..dc3d6d373 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/zptracemipsle_linux.go @@ -0,0 +1,50 @@ +// Code generated by linux/mkall.go generatePtracePair(mipsle, mips64le). DO NOT EDIT. + +// +build linux +// +build mipsle mips64le + +package unix + +import "unsafe" + +// PtraceRegsMipsle is the registers used by mipsle binaries. +type PtraceRegsMipsle struct { + Regs [32]uint64 + Lo uint64 + Hi uint64 + Epc uint64 + Badvaddr uint64 + Status uint64 + Cause uint64 +} + +// PtraceGetRegsMipsle fetches the registers used by mipsle binaries. +func PtraceGetRegsMipsle(pid int, regsout *PtraceRegsMipsle) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegsMipsle sets the registers used by mipsle binaries. +func PtraceSetRegsMipsle(pid int, regs *PtraceRegsMipsle) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} + +// PtraceRegsMips64le is the registers used by mips64le binaries. +type PtraceRegsMips64le struct { + Regs [32]uint64 + Lo uint64 + Hi uint64 + Epc uint64 + Badvaddr uint64 + Status uint64 + Cause uint64 +} + +// PtraceGetRegsMips64le fetches the registers used by mips64le binaries. +func PtraceGetRegsMips64le(pid int, regsout *PtraceRegsMips64le) error { + return ptrace(PTRACE_GETREGS, pid, 0, uintptr(unsafe.Pointer(regsout))) +} + +// PtraceSetRegsMips64le sets the registers used by mips64le binaries. +func PtraceSetRegsMips64le(pid int, regs *PtraceRegsMips64le) error { + return ptrace(PTRACE_SETREGS, pid, 0, uintptr(unsafe.Pointer(regs))) +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go b/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go index 92708acc3..4c9f72756 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_darwin_386.go @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err 
error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) if e1 != 0 { @@ -582,6 +693,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -905,90 +1031,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Msync(b []byte, flags int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Open(path string, mode int, perm uint32) (fd int, err error) { var _p0 *byte _p0, err = BytePtrFromString(path) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go index 44fc14f8a..256237773 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_darwin_amd64.go @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = 
errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) if e1 != 0 { @@ -582,6 +693,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -905,90 +1031,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Msync(b []byte, flags int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Open(path string, mode int, perm uint32) (fd int, err error) { var _p0 *byte _p0, err = BytePtrFromString(path) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go b/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go index 6e5bd7532..4ae787e49 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm.go @@ -221,7 +221,7 @@ func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) } else { _p0 = unsafe.Pointer(&_zero) } - _, _, e1 := Syscall6(SYS_SYSCTL, 
uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) + _, _, e1 := Syscall6(SYS___SYSCTL, uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) if e1 != 0 { err = errnoErr(e1) } @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) if e1 != 0 { @@ -582,6 +693,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, 
uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -905,74 +1031,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Open(path string, mode int, perm uint32) (fd int, err error) { var _p0 *byte _p0, err = BytePtrFromString(path) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go b/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go index 7005b8dd4..14ed6886c 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_darwin_arm64.go @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + 
+func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ptrace(request int, pid int, addr uintptr, data uintptr) (err error) { _, _, e1 := Syscall6(SYS_PTRACE, uintptr(request), uintptr(pid), uintptr(addr), uintptr(data), 0, 0) if e1 != 0 { @@ -582,6 +693,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS64, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -905,74 +1031,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 
{ - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Open(path string, mode int, perm uint32) (fd int, err error) { var _p0 *byte _p0, err = BytePtrFromString(path) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go index eafceb8e8..91f36e9ec 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_dragonfly_amd64.go @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func 
pipe() (r int, w int, err error) { r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) r = int(r0) @@ -312,6 +423,33 @@ func extpwrite(fd int, p []byte, flags int, offset int64) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ioctl(fd int, req uint, arg uintptr) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -480,6 +618,21 @@ func Fchmod(fd int, mode uint32) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -521,6 +674,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -829,74 +997,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { 
- _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1391,3 +1491,18 @@ func accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go b/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go index 24de0a06f..a86434a7b 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_freebsd_386.go @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + 
+func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe() (r int, w int, err error) { r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) r = int(r0) @@ -278,6 +389,23 @@ func pipe() (r int, w int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { @@ -796,6 +924,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -826,6 +969,23 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getdents(fd int, buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { var _p0 unsafe.Pointer if len(buf) > 0 { @@ -1139,74 +1299,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, 
uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1828,3 +1920,18 @@ func accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go index 0b61521d2..040e2f760 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_freebsd_amd64.go @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + 
if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe() (r int, w int, err error) { r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) r = int(r0) @@ -278,6 +389,23 @@ func pipe() (r int, w int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { @@ -796,6 +924,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -826,6 +969,23 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getdents(fd int, buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { var _p0 unsafe.Pointer if len(buf) > 0 { @@ -1139,74 +1299,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - 
} - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1828,3 +1920,18 @@ func accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go b/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go index bf9a8c9b2..cddc5e86b 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_freebsd_arm.go @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + 
} + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe() (r int, w int, err error) { r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) r = int(r0) @@ -278,6 +389,23 @@ func pipe() (r int, w int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) if e1 != 0 { @@ -796,6 +924,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -826,6 +969,23 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getdents(fd int, buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS_GETDENTS, uintptr(fd), uintptr(_p0), uintptr(len(buf))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Getdirentries(fd int, buf []byte, basep *uintptr) (n int, err error) { var _p0 unsafe.Pointer if len(buf) > 0 { @@ -1139,74 +1299,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, 
uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1828,3 +1920,18 @@ func accept4(fd int, rsa *RawSockaddrAny, addrlen *_Socklen, flags int) (nfd int } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_386.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_386.go index 38c1bbdf9..ef9602c1e 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_386.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_386.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func 
Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1558,6 +1584,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Ftruncate(fd int, length int64) (err error) { _, _, e1 := Syscall(SYS_FTRUNCATE64, uintptr(fd), uintptr(length), uintptr(length>>32)) if e1 != 0 { @@ -1569,7 +1610,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() 
(egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID32, 0, 0, 0) egid = int(r0) return } @@ -1577,7 +1618,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID32, 0, 0, 0) euid = int(r0) return } @@ -1585,7 +1626,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID32, 0, 0, 0) gid = int(r0) return } @@ -1593,7 +1634,7 @@ func Getgid() (gid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID32, 0, 0, 0) uid = int(r0) return } diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go index dc8fe0a84..63054b358 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE 
IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1555,6 +1581,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_NEWFSTATAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, buf *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) if e1 != 0 { @@ -1576,7 +1617,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1584,7 +1625,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1592,7 +1633,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1610,7 +1651,7 @@ func Getrlimit(resource int, rlim *Rlimit) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_arm.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_arm.go index 
4d2804278..8b10ee144 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_arm.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ 
func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1717,8 +1743,23 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID32, 0, 0, 0) egid = int(r0) return } @@ -1726,7 +1767,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID32, 0, 0, 0) euid = int(r0) return } @@ -1734,7 +1775,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID32, 0, 0, 0) gid = int(r0) return } @@ -1742,7 +1783,7 @@ func Getgid() (gid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID32, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID32, 0, 0, 0) uid = int(r0) return } diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_arm64.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_arm64.go index 20ad4b6c9..8c9e26a0a 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_arm64.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1515,6 +1541,16 @@ func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall6(SYS_FADVISE64, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -1571,7 +1607,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) 
+ r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1579,7 +1615,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1587,7 +1623,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1605,7 +1641,7 @@ func Getrlimit(resource int, rlim *Rlimit) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } @@ -1667,17 +1703,6 @@ func Seek(fd int, offset int64, whence int) (off int64, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { r0, _, e1 := Syscall6(SYS_SENDFILE, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) written = int(r0) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_mips.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_mips.go index 9f194dc4a..8dc2b58f5 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_mips.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_mips.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = 
errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1508,6 +1534,16 @@ func Dup2(oldfd int, newfd int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall9(SYS_FADVISE64, uintptr(fd), 0, uintptr(offset>>32), uintptr(offset), uintptr(length>>32), uintptr(length), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -1529,7 +1565,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1537,7 +1573,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1545,7 +1581,7 @@ func 
Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1553,7 +1589,7 @@ func Getgid() (gid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } @@ -2003,6 +2039,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Stat(path string, stat *Stat_t) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64.go index 4fde3ef08..e8beef850 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + 
} + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1525,6 +1551,16 @@ func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall6(SYS_FADVISE64, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -1535,6 +1571,21 @@ func Fchown(fd int, uid int, gid int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_NEWFSTATAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, buf *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) if e1 != 0 { @@ -1556,7 +1607,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1564,7 +1615,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func 
Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1572,7 +1623,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1590,7 +1641,7 @@ func Getrlimit(resource int, rlim *Rlimit) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } @@ -1677,17 +1728,6 @@ func Seek(fd int, offset int64, whence int) (off int64, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { r0, _, e1 := Syscall6(SYS_SENDFILE, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) written = int(r0) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64le.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64le.go index f6463423c..899e4403a 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64le.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_mips64le.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) 
> 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1525,6 +1551,16 @@ func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall6(SYS_FADVISE64, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -1535,6 +1571,21 @@ func Fchown(fd int, uid int, gid int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_NEWFSTATAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, buf *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) if e1 != 0 { @@ -1556,7 +1607,7 @@ func Ftruncate(fd int, length int64) (err error) { // 
THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1564,7 +1615,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1572,7 +1623,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1590,7 +1641,7 @@ func Getrlimit(resource int, rlim *Rlimit) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } @@ -1677,17 +1728,6 @@ func Seek(fd int, offset int64, whence int) (off int64, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) - n = int(r0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func sendfile(outfd int, infd int, offset *int64, count int) (written int, err error) { r0, _, e1 := Syscall6(SYS_SENDFILE, uintptr(outfd), uintptr(infd), uintptr(unsafe.Pointer(offset)), uintptr(count), 0, 0) written = int(r0) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_mipsle.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_mipsle.go index 964591e5e..7a477cbde 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_mipsle.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_mipsle.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), 
uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1508,6 +1534,16 @@ func Dup2(oldfd int, newfd int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall9(SYS_FADVISE64, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), uintptr(length), uintptr(length>>32), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -1529,7 +1565,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1537,7 +1573,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() 
(euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1545,7 +1581,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1553,7 +1589,7 @@ func Getgid() (gid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } @@ -2003,6 +2039,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT64, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Stat(path string, stat *Stat_t) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64.go index 204ab1ae3..9dc4c7d6d 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func 
Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1525,6 +1551,16 @@ func Dup2(oldfd int, newfd int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall6(SYS_FADVISE64, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -1545,6 +1581,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_NEWFSTATAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, buf *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) if e1 != 0 { @@ -1566,7 +1617,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } 
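The hunks in this part of the diff are machine-generated syscall stubs (note the repeated "DO NOT EDIT" markers), so the vendored files themselves are not meant to be hand-edited. For orientation only, here is a minimal sketch of how application code might consume the wrappers this update regenerates — the identity getters that now go through RawSyscallNoError, and the newly added Fadvise. It assumes a Linux target and that the FADV_SEQUENTIAL constant from golang.org/x/sys/unix is available; it is illustrative and not part of the vendored diff.

```
// Hypothetical usage sketch, not part of the vendored diff.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Getuid/Getgid cannot fail, so the regenerated stubs call
	// RawSyscallNoError and the public wrappers just return ints.
	fmt.Println("uid:", unix.Getuid(), "gid:", unix.Getgid())

	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	// Fadvise is one of the wrappers added by this update; a non-zero
	// errno from the kernel surfaces here as a Go error.
	if err := unix.Fadvise(int(f.Fd()), 0, 0, unix.FADV_SEQUENTIAL); err != nil {
		fmt.Fprintln(os.Stderr, "fadvise:", err)
	}
}
```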
@@ -1574,7 +1625,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1582,7 +1633,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1600,7 +1651,7 @@ func Getrlimit(resource int, rlim *Rlimit) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } @@ -1734,7 +1785,7 @@ func Seek(fd int, offset int64, whence int) (off int64, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) + r0, _, e1 := Syscall6(SYS__NEWSELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64le.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64le.go index a8a2b0b0a..f0d1ee125 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64le.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_ppc64le.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ 
func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1525,6 +1551,16 @@ func Dup2(oldfd int, newfd int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall6(SYS_FADVISE64, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -1545,6 +1581,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_NEWFSTATAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, buf *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) if e1 != 0 { @@ -1566,7 +1617,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT 
func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1574,7 +1625,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1582,7 +1633,7 @@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1600,7 +1651,7 @@ func Getrlimit(resource int, rlim *Rlimit) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } @@ -1734,7 +1785,7 @@ func Seek(fd int, offset int64, whence int) (off int64, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Select(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (n int, err error) { - r0, _, e1 := Syscall6(SYS_SELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) + r0, _, e1 := Syscall6(SYS__NEWSELECT, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) n = int(r0) if e1 != 0 { err = errnoErr(e1) diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_s390x.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_s390x.go index b6ff9e392..c443baf63 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_s390x.go @@ -538,7 +538,7 @@ func Eventfd(initval uint, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Exit(code int) { - Syscall(SYS_EXIT_GROUP, uintptr(code), 0, 0) + SyscallNoError(SYS_EXIT_GROUP, uintptr(code), 0, 0) return } @@ -674,7 +674,7 @@ func Getpgid(pid int) (pgid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getpid() (pid int) { - r0, _, _ := RawSyscall(SYS_GETPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPID, 0, 0, 0) pid = int(r0) return } @@ -682,7 +682,7 @@ func Getpid() (pid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getppid() (ppid int) { - r0, _, _ := RawSyscall(SYS_GETPPID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETPPID, 0, 0, 0) ppid = int(r0) return } @@ -739,7 +739,7 @@ func Getsid(pid int) (sid int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Gettid() (tid int) { - r0, _, _ := RawSyscall(SYS_GETTID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETTID, 0, 0, 0) tid = int(r0) return } @@ -1035,6 +1035,17 @@ func Prctl(option int, arg2 uintptr, arg3 uintptr, arg4 uintptr, arg5 uintptr) ( // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Pselect(nfd int, r *FdSet, w *FdSet, e *FdSet, timeout *Timespec, sigmask *Sigset_t) (n int, err error) { + r0, _, e1 := Syscall6(SYS_PSELECT6, uintptr(nfd), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), uintptr(unsafe.Pointer(sigmask))) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT + func read(fd int, p []byte) (n int, err error) { var _p0 unsafe.Pointer if len(p) > 0 { @@ -1227,8 +1238,23 @@ func Setxattr(path string, attr string, data []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Statx(dirfd int, path string, flags int, mask int, stat *Statx_t) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_STATX, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(flags), uintptr(mask), uintptr(unsafe.Pointer(stat)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Sync() { - Syscall(SYS_SYNC, 0, 0, 0) + SyscallNoError(SYS_SYNC, 0, 0, 0) return } @@ -1287,7 +1313,7 @@ func Times(tms *Tms) (ticks uintptr, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Umask(mask int) (oldmask int) { - r0, _, _ := RawSyscall(SYS_UMASK, uintptr(mask), 0, 0) + r0, _ := RawSyscallNoError(SYS_UMASK, uintptr(mask), 0, 0) oldmask = int(r0) return } @@ -1446,22 +1472,6 @@ func Mlock(b []byte) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Mlockall(flags int) (err error) { _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) if e1 != 0 { @@ -1488,6 +1498,22 @@ func Msync(b []byte, flags int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Munlockall() (err error) { _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) if e1 != 0 { @@ -1555,6 +1581,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(dirfd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_NEWFSTATAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, buf *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(buf)), 0) if e1 != 0 { @@ -1576,7 +1617,7 @@ func Ftruncate(fd int, length int64) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getegid() (egid int) { - r0, _, _ := RawSyscall(SYS_GETEGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEGID, 0, 0, 0) egid = int(r0) return } @@ -1584,7 +1625,7 @@ func Getegid() (egid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Geteuid() (euid int) { - r0, _, _ := RawSyscall(SYS_GETEUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETEUID, 0, 0, 0) euid = int(r0) return } @@ -1592,7 +1633,7 
@@ func Geteuid() (euid int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getgid() (gid int) { - r0, _, _ := RawSyscall(SYS_GETGID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETGID, 0, 0, 0) gid = int(r0) return } @@ -1610,7 +1651,7 @@ func Getrlimit(resource int, rlim *Rlimit) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func Getuid() (uid int) { - r0, _, _ := RawSyscall(SYS_GETUID, 0, 0, 0) + r0, _ := RawSyscallNoError(SYS_GETUID, 0, 0, 0) uid = int(r0) return } diff --git a/vendor/golang.org/x/sys/unix/zsyscall_linux_sparc64.go b/vendor/golang.org/x/sys/unix/zsyscall_linux_sparc64.go index 2dd98434e..c01b3b6ba 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_linux_sparc64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_linux_sparc64.go @@ -1222,6 +1222,16 @@ func EpollWait(epfd int, events []EpollEvent, msec int) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall6(SYS_FADVISE64, uintptr(fd), uintptr(offset), uintptr(length), uintptr(advice), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Dup2(oldfd int, newfd int) (err error) { _, _, e1 := Syscall(SYS_DUP2, uintptr(oldfd), uintptr(newfd), 0) if e1 != 0 { diff --git a/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go b/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go index 3182345ec..fb4b96278 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_netbsd_386.go @@ -1,5 +1,5 @@ // mksyscall.pl -l32 -netbsd -tags netbsd,386 syscall_bsd.go syscall_netbsd.go syscall_netbsd_386.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build netbsd,386 @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe() (fd1 int, fd2 int, err error) { r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) fd1 = int(r0) @@ -295,6 +406,33 @@ func getdents(fd int, buf []byte) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ioctl(fd int, req uint, arg uintptr) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; 
DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -433,6 +571,16 @@ func Exit(code int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall9(SYS_POSIX_FADVISE, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), 0, uintptr(length), uintptr(length>>32), uintptr(advice), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchdir(fd int) (err error) { _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) if e1 != 0 { @@ -463,6 +611,21 @@ func Fchmod(fd int, mode uint32) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -504,6 +667,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fsync(fd int) (err error) { _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) if e1 != 0 { @@ -777,74 +955,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - 
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1297,3 +1407,18 @@ func writelen(fd int, buf *byte, nbuf int) (n int, err error) { } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go index 74ba8189a..beac82ef8 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_netbsd_amd64.go @@ -1,5 +1,5 @@ // mksyscall.pl -netbsd -tags netbsd,amd64 syscall_bsd.go syscall_netbsd.go syscall_netbsd_amd64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. // +build netbsd,amd64 @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + 
} else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe() (fd1 int, fd2 int, err error) { r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) fd1 = int(r0) @@ -295,6 +406,33 @@ func getdents(fd int, buf []byte) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ioctl(fd int, req uint, arg uintptr) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -433,6 +571,16 @@ func Exit(code int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall6(SYS_POSIX_FADVISE, uintptr(fd), 0, uintptr(offset), 0, uintptr(length), uintptr(advice)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchdir(fd int) (err error) { _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) if e1 != 0 { @@ -463,6 +611,21 @@ func Fchmod(fd int, mode uint32) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -504,6 +667,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fsync(fd int) (err error) { _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) if e1 != 0 { @@ -777,74 +955,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 
:= Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1297,3 +1407,18 @@ func writelen(fd int, buf *byte, nbuf int) (n int, err error) { } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go b/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go index 1f346e2f5..7bd5f60b0 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_netbsd_arm.go @@ -1,5 +1,5 @@ -// mksyscall.pl -l32 -arm -tags netbsd,arm syscall_bsd.go syscall_netbsd.go syscall_netbsd_arm.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// mksyscall.pl -l32 -netbsd -arm -tags netbsd,arm syscall_bsd.go syscall_netbsd.go syscall_netbsd_arm.go +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build netbsd,arm @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe() (fd1 int, fd2 int, err error) { r0, r1, e1 := RawSyscall(SYS_PIPE, 0, 0, 0) fd1 = int(r0) @@ -295,6 +406,33 @@ func getdents(fd int, buf []byte) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ioctl(fd int, req uint, arg uintptr) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; 
DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -433,6 +571,16 @@ func Exit(code int) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fadvise(fd int, offset int64, length int64, advice int) (err error) { + _, _, e1 := Syscall9(SYS_POSIX_FADVISE, uintptr(fd), 0, uintptr(offset), uintptr(offset>>32), 0, uintptr(length), uintptr(length>>32), uintptr(advice), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchdir(fd int) (err error) { _, _, e1 := Syscall(SYS_FCHDIR, uintptr(fd), 0, 0) if e1 != 0 { @@ -463,6 +611,21 @@ func Fchmod(fd int, mode uint32) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -504,6 +667,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fsync(fd int) (err error) { _, _, e1 := Syscall(SYS_FSYNC, uintptr(fd), 0, 0) if e1 != 0 { @@ -777,74 +955,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - 
-// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1297,3 +1407,18 @@ func writelen(fd int, buf *byte, nbuf int) (n int, err error) { } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go b/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go index ca3e81392..5c09c0758 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go @@ -1,5 +1,5 @@ // mksyscall.pl -l32 -openbsd -tags openbsd,386 syscall_bsd.go syscall_openbsd.go syscall_openbsd_386.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. // +build openbsd,386 @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + 
} else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe(p *[2]_C_int) (err error) { _, _, e1 := RawSyscall(SYS_PIPE, uintptr(unsafe.Pointer(p)), 0, 0) if e1 != 0 { @@ -293,6 +404,33 @@ func getdents(fd int, buf []byte) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ioctl(fd int, req uint, arg uintptr) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -461,6 +599,21 @@ func Fchmod(fd int, mode uint32) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -502,6 +655,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -785,74 +953,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - 
var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1355,3 +1455,18 @@ func writelen(fd int, buf *byte, nbuf int) (n int, err error) { } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go index bf63d552e..54ccc935d 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go @@ -1,5 +1,5 @@ // mksyscall.pl -openbsd -tags openbsd,amd64 syscall_bsd.go syscall_openbsd.go syscall_openbsd_amd64.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build openbsd,amd64 @@ -266,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe(p *[2]_C_int) (err error) { _, _, e1 := RawSyscall(SYS_PIPE, uintptr(unsafe.Pointer(p)), 0, 0) if e1 != 0 { @@ -293,6 +404,33 @@ func getdents(fd int, buf []byte) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ioctl(fd int, req uint, arg uintptr) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE 
COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -461,6 +599,21 @@ func Fchmod(fd int, mode uint32) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -502,6 +655,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -785,74 +953,6 @@ func Mknod(path string, mode uint32, dev int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func Mlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mlockall(flags int) (err error) { - _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Mprotect(b []byte, prot int) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlock(b []byte) (err error) { - var _p0 unsafe.Pointer - if len(b) > 0 { - _p0 = unsafe.Pointer(&b[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - -func Munlockall() (err error) { - _, _, e1 := Syscall(SYS_MUNLOCKALL, 0, 0, 0) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func Nanosleep(time *Timespec, leftover *Timespec) (err error) { _, _, e1 := Syscall(SYS_NANOSLEEP, uintptr(unsafe.Pointer(time)), uintptr(unsafe.Pointer(leftover)), 0) if e1 != 0 { @@ -1355,3 +1455,18 @@ func writelen(fd int, buf *byte, nbuf int) (n int, err error) { } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times *[2]Timespec, flags int) 
(err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go b/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go index 9cabf577c..59258b0a4 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go @@ -1,5 +1,5 @@ // mksyscall.pl -l32 -openbsd -arm -tags openbsd,arm syscall_bsd.go syscall_openbsd.go syscall_openbsd_arm.go -// MACHINE GENERATED BY THE COMMAND ABOVE; DO NOT EDIT +// Code generated by the command above; see README.md. DO NOT EDIT. // +build openbsd,arm @@ -10,6 +10,8 @@ import ( "unsafe" ) +var _ syscall.Errno + // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT func getgroups(ngid int, gid *_Gid_t) (n int, err error) { @@ -264,6 +266,117 @@ func fcntl(fd int, cmd int, arg int) (val int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := Syscall(SYS_POLL, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout)) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Madvise(b []byte, behav int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MADVISE, uintptr(_p0), uintptr(len(b)), uintptr(behav)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mlockall(flags int) (err error) { + _, _, e1 := Syscall(SYS_MLOCKALL, uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Mprotect(b []byte, prot int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MPROTECT, uintptr(_p0), uintptr(len(b)), uintptr(prot)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Msync(b []byte, flags int) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MSYNC, uintptr(_p0), uintptr(len(b)), uintptr(flags)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlock(b []byte) (err error) { + var _p0 unsafe.Pointer + if len(b) > 0 { + _p0 = unsafe.Pointer(&b[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := Syscall(SYS_MUNLOCK, uintptr(_p0), uintptr(len(b)), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Munlockall() (err error) { + _, _, e1 := 
Syscall(SYS_MUNLOCKALL, 0, 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func pipe(p *[2]_C_int) (err error) { _, _, e1 := RawSyscall(SYS_PIPE, uintptr(unsafe.Pointer(p)), 0, 0) if e1 != 0 { @@ -291,6 +404,33 @@ func getdents(fd int, buf []byte) (n int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Getcwd(buf []byte) (n int, err error) { + var _p0 unsafe.Pointer + if len(buf) > 0 { + _p0 = unsafe.Pointer(&buf[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + r0, _, e1 := Syscall(SYS___GETCWD, uintptr(_p0), uintptr(len(buf)), 0) + n = int(r0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func ioctl(fd int, req uint, arg uintptr) (err error) { + _, _, e1 := Syscall(SYS_IOCTL, uintptr(fd), uintptr(req), uintptr(arg)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -459,6 +599,21 @@ func Fchmod(fd int, mode uint32) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fchmodat(dirfd int, path string, mode uint32, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FCHMODAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(mode), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fchown(fd int, uid int, gid int) (err error) { _, _, e1 := Syscall(SYS_FCHOWN, uintptr(fd), uintptr(uid), uintptr(gid)) if e1 != 0 { @@ -500,6 +655,21 @@ func Fstat(fd int, stat *Stat_t) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_FSTATAT, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Fstatfs(fd int, stat *Statfs_t) (err error) { _, _, e1 := Syscall(SYS_FSTATFS, uintptr(fd), uintptr(unsafe.Pointer(stat)), 0) if e1 != 0 { @@ -1054,6 +1224,26 @@ func Setreuid(ruid int, euid int) (err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func Setresgid(rgid int, egid int, sgid int) (err error) { + _, _, e1 := RawSyscall(SYS_SETRESGID, uintptr(rgid), uintptr(egid), uintptr(sgid)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func Setresuid(ruid int, euid int, suid int) (err error) { + _, _, e1 := RawSyscall(SYS_SETRESUID, uintptr(ruid), uintptr(euid), uintptr(suid)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Setrlimit(which int, lim *Rlimit) (err error) { _, _, e1 := RawSyscall(SYS_SETRLIMIT, uintptr(which), uintptr(unsafe.Pointer(lim)), 0) if e1 != 0 { @@ -1265,3 +1455,18 @@ func writelen(fd int, buf *byte, nbuf int) (n int, err error) { } return } + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func utimensat(dirfd int, path string, times 
*[2]Timespec, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := Syscall6(SYS_UTIMENSAT, uintptr(dirfd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(times)), uintptr(flags), 0, 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} diff --git a/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go b/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go index 4287133d0..5e88619c4 100644 --- a/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsyscall_solaris_amd64.go @@ -25,7 +25,11 @@ import ( //go:cgo_import_dynamic libc___xnet_recvmsg __xnet_recvmsg "libsocket.so" //go:cgo_import_dynamic libc___xnet_sendmsg __xnet_sendmsg "libsocket.so" //go:cgo_import_dynamic libc_acct acct "libc.so" +//go:cgo_import_dynamic libc___makedev __makedev "libc.so" +//go:cgo_import_dynamic libc___major __major "libc.so" +//go:cgo_import_dynamic libc___minor __minor "libc.so" //go:cgo_import_dynamic libc_ioctl ioctl "libc.so" +//go:cgo_import_dynamic libc_poll poll "libc.so" //go:cgo_import_dynamic libc_access access "libc.so" //go:cgo_import_dynamic libc_adjtime adjtime "libc.so" //go:cgo_import_dynamic libc_chdir chdir "libc.so" @@ -46,6 +50,7 @@ import ( //go:cgo_import_dynamic libc_flock flock "libc.so" //go:cgo_import_dynamic libc_fpathconf fpathconf "libc.so" //go:cgo_import_dynamic libc_fstat fstat "libc.so" +//go:cgo_import_dynamic libc_fstatat fstatat "libc.so" //go:cgo_import_dynamic libc_fstatvfs fstatvfs "libc.so" //go:cgo_import_dynamic libc_getdents getdents "libc.so" //go:cgo_import_dynamic libc_getgid getgid "libc.so" @@ -75,6 +80,7 @@ import ( //go:cgo_import_dynamic libc_mlock mlock "libc.so" //go:cgo_import_dynamic libc_mlockall mlockall "libc.so" //go:cgo_import_dynamic libc_mprotect mprotect "libc.so" +//go:cgo_import_dynamic libc_msync msync "libc.so" //go:cgo_import_dynamic libc_munlock munlock "libc.so" //go:cgo_import_dynamic libc_munlockall munlockall "libc.so" //go:cgo_import_dynamic libc_nanosleep nanosleep "libc.so" @@ -90,6 +96,7 @@ import ( //go:cgo_import_dynamic libc_renameat renameat "libc.so" //go:cgo_import_dynamic libc_rmdir rmdir "libc.so" //go:cgo_import_dynamic libc_lseek lseek "libc.so" +//go:cgo_import_dynamic libc_select select "libc.so" //go:cgo_import_dynamic libc_setegid setegid "libc.so" //go:cgo_import_dynamic libc_seteuid seteuid "libc.so" //go:cgo_import_dynamic libc_setgid setgid "libc.so" @@ -129,7 +136,6 @@ import ( //go:cgo_import_dynamic libc_getpeername getpeername "libsocket.so" //go:cgo_import_dynamic libc_setsockopt setsockopt "libsocket.so" //go:cgo_import_dynamic libc_recvfrom recvfrom "libsocket.so" -//go:cgo_import_dynamic libc_sysconf sysconf "libc.so" //go:linkname procpipe libc_pipe //go:linkname procgetsockname libc_getsockname @@ -146,7 +152,11 @@ import ( //go:linkname proc__xnet_recvmsg libc___xnet_recvmsg //go:linkname proc__xnet_sendmsg libc___xnet_sendmsg //go:linkname procacct libc_acct +//go:linkname proc__makedev libc___makedev +//go:linkname proc__major libc___major +//go:linkname proc__minor libc___minor //go:linkname procioctl libc_ioctl +//go:linkname procpoll libc_poll //go:linkname procAccess libc_access //go:linkname procAdjtime libc_adjtime //go:linkname procChdir libc_chdir @@ -167,6 +177,7 @@ import ( //go:linkname procFlock libc_flock //go:linkname procFpathconf libc_fpathconf //go:linkname procFstat libc_fstat +//go:linkname procFstatat libc_fstatat //go:linkname procFstatvfs libc_fstatvfs //go:linkname 
procGetdents libc_getdents //go:linkname procGetgid libc_getgid @@ -196,6 +207,7 @@ import ( //go:linkname procMlock libc_mlock //go:linkname procMlockall libc_mlockall //go:linkname procMprotect libc_mprotect +//go:linkname procMsync libc_msync //go:linkname procMunlock libc_munlock //go:linkname procMunlockall libc_munlockall //go:linkname procNanosleep libc_nanosleep @@ -211,6 +223,7 @@ import ( //go:linkname procRenameat libc_renameat //go:linkname procRmdir libc_rmdir //go:linkname proclseek libc_lseek +//go:linkname procSelect libc_select //go:linkname procSetegid libc_setegid //go:linkname procSeteuid libc_seteuid //go:linkname procSetgid libc_setgid @@ -250,7 +263,6 @@ import ( //go:linkname procgetpeername libc_getpeername //go:linkname procsetsockopt libc_setsockopt //go:linkname procrecvfrom libc_recvfrom -//go:linkname procsysconf libc_sysconf var ( procpipe, @@ -268,7 +280,11 @@ var ( proc__xnet_recvmsg, proc__xnet_sendmsg, procacct, + proc__makedev, + proc__major, + proc__minor, procioctl, + procpoll, procAccess, procAdjtime, procChdir, @@ -289,6 +305,7 @@ var ( procFlock, procFpathconf, procFstat, + procFstatat, procFstatvfs, procGetdents, procGetgid, @@ -318,6 +335,7 @@ var ( procMlock, procMlockall, procMprotect, + procMsync, procMunlock, procMunlockall, procNanosleep, @@ -333,6 +351,7 @@ var ( procRenameat, procRmdir, proclseek, + procSelect, procSetegid, procSeteuid, procSetgid, @@ -371,8 +390,7 @@ var ( proc__xnet_getsockopt, procgetpeername, procsetsockopt, - procrecvfrom, - procsysconf syscallFunc + procrecvfrom syscallFunc ) func pipe(p *[2]_C_int) (n int, err error) { @@ -522,6 +540,24 @@ func acct(path *byte) (err error) { return } +func __makedev(version int, major uint, minor uint) (val uint64) { + r0, _, _ := sysvicall6(uintptr(unsafe.Pointer(&proc__makedev)), 3, uintptr(version), uintptr(major), uintptr(minor), 0, 0, 0) + val = uint64(r0) + return +} + +func __major(version int, dev uint64) (val uint) { + r0, _, _ := sysvicall6(uintptr(unsafe.Pointer(&proc__major)), 2, uintptr(version), uintptr(dev), 0, 0, 0, 0) + val = uint(r0) + return +} + +func __minor(version int, dev uint64) (val uint) { + r0, _, _ := sysvicall6(uintptr(unsafe.Pointer(&proc__minor)), 2, uintptr(version), uintptr(dev), 0, 0, 0, 0) + val = uint(r0) + return +} + func ioctl(fd int, req uint, arg uintptr) (err error) { _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procioctl)), 3, uintptr(fd), uintptr(req), uintptr(arg), 0, 0, 0) if e1 != 0 { @@ -530,6 +566,15 @@ func ioctl(fd int, req uint, arg uintptr) (err error) { return } +func poll(fds *PollFd, nfds int, timeout int) (n int, err error) { + r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procpoll)), 3, uintptr(unsafe.Pointer(fds)), uintptr(nfds), uintptr(timeout), 0, 0, 0) + n = int(r0) + if e1 != 0 { + err = e1 + } + return +} + func Access(path string, mode uint32) (err error) { var _p0 *byte _p0, err = BytePtrFromString(path) @@ -730,6 +775,19 @@ func Fstat(fd int, stat *Stat_t) (err error) { return } +func Fstatat(fd int, path string, stat *Stat_t, flags int) (err error) { + var _p0 *byte + _p0, err = BytePtrFromString(path) + if err != nil { + return + } + _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procFstatat)), 4, uintptr(fd), uintptr(unsafe.Pointer(_p0)), uintptr(unsafe.Pointer(stat)), uintptr(flags), 0, 0) + if e1 != 0 { + err = e1 + } + return +} + func Fstatvfs(fd int, vfsstat *Statvfs_t) (err error) { _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procFstatvfs)), 2, uintptr(fd), uintptr(unsafe.Pointer(vfsstat)), 0, 0, 0, 
0) if e1 != 0 { @@ -1020,6 +1078,18 @@ func Mprotect(b []byte, prot int) (err error) { return } +func Msync(b []byte, flags int) (err error) { + var _p0 *byte + if len(b) > 0 { + _p0 = &b[0] + } + _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procMsync)), 3, uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(flags), 0, 0, 0) + if e1 != 0 { + err = e1 + } + return +} + func Munlock(b []byte) (err error) { var _p0 *byte if len(b) > 0 { @@ -1213,6 +1283,14 @@ func Seek(fd int, offset int64, whence int) (newoffset int64, err error) { return } +func Select(n int, r *FdSet, w *FdSet, e *FdSet, timeout *Timeval) (err error) { + _, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procSelect)), 5, uintptr(n), uintptr(unsafe.Pointer(r)), uintptr(unsafe.Pointer(w)), uintptr(unsafe.Pointer(e)), uintptr(unsafe.Pointer(timeout)), 0) + if e1 != 0 { + err = e1 + } + return +} + func Setegid(egid int) (err error) { _, _, e1 := rawSysvicall6(uintptr(unsafe.Pointer(&procSetegid)), 1, uintptr(egid), 0, 0, 0, 0, 0) if e1 != 0 { @@ -1589,12 +1667,3 @@ func recvfrom(fd int, p []byte, flags int, from *RawSockaddrAny, fromlen *_Sockl } return } - -func sysconf(name int) (n int64, err error) { - r0, _, e1 := sysvicall6(uintptr(unsafe.Pointer(&procsysconf)), 1, uintptr(name), 0, 0, 0, 0, 0) - n = int64(r0) - if e1 != 0 { - err = e1 - } - return -} diff --git a/vendor/golang.org/x/sys/unix/zsysctl_openbsd.go b/vendor/golang.org/x/sys/unix/zsysctl_openbsd_386.go similarity index 100% rename from vendor/golang.org/x/sys/unix/zsysctl_openbsd.go rename to vendor/golang.org/x/sys/unix/zsysctl_openbsd_386.go diff --git a/vendor/golang.org/x/sys/unix/zsysctl_openbsd_amd64.go b/vendor/golang.org/x/sys/unix/zsysctl_openbsd_amd64.go new file mode 100644 index 000000000..83bb935b9 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/zsysctl_openbsd_amd64.go @@ -0,0 +1,270 @@ +// mksysctl_openbsd.pl +// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT + +package unix + +type mibentry struct { + ctlname string + ctloid []_C_int +} + +var sysctlMib = []mibentry{ + {"ddb.console", []_C_int{9, 6}}, + {"ddb.log", []_C_int{9, 7}}, + {"ddb.max_line", []_C_int{9, 3}}, + {"ddb.max_width", []_C_int{9, 2}}, + {"ddb.panic", []_C_int{9, 5}}, + {"ddb.radix", []_C_int{9, 1}}, + {"ddb.tab_stop_width", []_C_int{9, 4}}, + {"ddb.trigger", []_C_int{9, 8}}, + {"fs.posix.setuid", []_C_int{3, 1, 1}}, + {"hw.allowpowerdown", []_C_int{6, 22}}, + {"hw.byteorder", []_C_int{6, 4}}, + {"hw.cpuspeed", []_C_int{6, 12}}, + {"hw.diskcount", []_C_int{6, 10}}, + {"hw.disknames", []_C_int{6, 8}}, + {"hw.diskstats", []_C_int{6, 9}}, + {"hw.machine", []_C_int{6, 1}}, + {"hw.model", []_C_int{6, 2}}, + {"hw.ncpu", []_C_int{6, 3}}, + {"hw.ncpufound", []_C_int{6, 21}}, + {"hw.pagesize", []_C_int{6, 7}}, + {"hw.physmem", []_C_int{6, 19}}, + {"hw.product", []_C_int{6, 15}}, + {"hw.serialno", []_C_int{6, 17}}, + {"hw.setperf", []_C_int{6, 13}}, + {"hw.usermem", []_C_int{6, 20}}, + {"hw.uuid", []_C_int{6, 18}}, + {"hw.vendor", []_C_int{6, 14}}, + {"hw.version", []_C_int{6, 16}}, + {"kern.arandom", []_C_int{1, 37}}, + {"kern.argmax", []_C_int{1, 8}}, + {"kern.boottime", []_C_int{1, 21}}, + {"kern.bufcachepercent", []_C_int{1, 72}}, + {"kern.ccpu", []_C_int{1, 45}}, + {"kern.clockrate", []_C_int{1, 12}}, + {"kern.consdev", []_C_int{1, 75}}, + {"kern.cp_time", []_C_int{1, 40}}, + {"kern.cp_time2", []_C_int{1, 71}}, + {"kern.cryptodevallowsoft", []_C_int{1, 53}}, + {"kern.domainname", []_C_int{1, 22}}, + {"kern.file", []_C_int{1, 73}}, + {"kern.forkstat", []_C_int{1, 42}}, + 
{"kern.fscale", []_C_int{1, 46}}, + {"kern.fsync", []_C_int{1, 33}}, + {"kern.hostid", []_C_int{1, 11}}, + {"kern.hostname", []_C_int{1, 10}}, + {"kern.intrcnt.nintrcnt", []_C_int{1, 63, 1}}, + {"kern.job_control", []_C_int{1, 19}}, + {"kern.malloc.buckets", []_C_int{1, 39, 1}}, + {"kern.malloc.kmemnames", []_C_int{1, 39, 3}}, + {"kern.maxclusters", []_C_int{1, 67}}, + {"kern.maxfiles", []_C_int{1, 7}}, + {"kern.maxlocksperuid", []_C_int{1, 70}}, + {"kern.maxpartitions", []_C_int{1, 23}}, + {"kern.maxproc", []_C_int{1, 6}}, + {"kern.maxthread", []_C_int{1, 25}}, + {"kern.maxvnodes", []_C_int{1, 5}}, + {"kern.mbstat", []_C_int{1, 59}}, + {"kern.msgbuf", []_C_int{1, 48}}, + {"kern.msgbufsize", []_C_int{1, 38}}, + {"kern.nchstats", []_C_int{1, 41}}, + {"kern.netlivelocks", []_C_int{1, 76}}, + {"kern.nfiles", []_C_int{1, 56}}, + {"kern.ngroups", []_C_int{1, 18}}, + {"kern.nosuidcoredump", []_C_int{1, 32}}, + {"kern.nprocs", []_C_int{1, 47}}, + {"kern.nselcoll", []_C_int{1, 43}}, + {"kern.nthreads", []_C_int{1, 26}}, + {"kern.numvnodes", []_C_int{1, 58}}, + {"kern.osrelease", []_C_int{1, 2}}, + {"kern.osrevision", []_C_int{1, 3}}, + {"kern.ostype", []_C_int{1, 1}}, + {"kern.osversion", []_C_int{1, 27}}, + {"kern.pool_debug", []_C_int{1, 77}}, + {"kern.posix1version", []_C_int{1, 17}}, + {"kern.proc", []_C_int{1, 66}}, + {"kern.random", []_C_int{1, 31}}, + {"kern.rawpartition", []_C_int{1, 24}}, + {"kern.saved_ids", []_C_int{1, 20}}, + {"kern.securelevel", []_C_int{1, 9}}, + {"kern.seminfo", []_C_int{1, 61}}, + {"kern.shminfo", []_C_int{1, 62}}, + {"kern.somaxconn", []_C_int{1, 28}}, + {"kern.sominconn", []_C_int{1, 29}}, + {"kern.splassert", []_C_int{1, 54}}, + {"kern.stackgap_random", []_C_int{1, 50}}, + {"kern.sysvipc_info", []_C_int{1, 51}}, + {"kern.sysvmsg", []_C_int{1, 34}}, + {"kern.sysvsem", []_C_int{1, 35}}, + {"kern.sysvshm", []_C_int{1, 36}}, + {"kern.timecounter.choice", []_C_int{1, 69, 4}}, + {"kern.timecounter.hardware", []_C_int{1, 69, 3}}, + {"kern.timecounter.tick", []_C_int{1, 69, 1}}, + {"kern.timecounter.timestepwarnings", []_C_int{1, 69, 2}}, + {"kern.tty.maxptys", []_C_int{1, 44, 6}}, + {"kern.tty.nptys", []_C_int{1, 44, 7}}, + {"kern.tty.tk_cancc", []_C_int{1, 44, 4}}, + {"kern.tty.tk_nin", []_C_int{1, 44, 1}}, + {"kern.tty.tk_nout", []_C_int{1, 44, 2}}, + {"kern.tty.tk_rawcc", []_C_int{1, 44, 3}}, + {"kern.tty.ttyinfo", []_C_int{1, 44, 5}}, + {"kern.ttycount", []_C_int{1, 57}}, + {"kern.userasymcrypto", []_C_int{1, 60}}, + {"kern.usercrypto", []_C_int{1, 52}}, + {"kern.usermount", []_C_int{1, 30}}, + {"kern.version", []_C_int{1, 4}}, + {"kern.vnode", []_C_int{1, 13}}, + {"kern.watchdog.auto", []_C_int{1, 64, 2}}, + {"kern.watchdog.period", []_C_int{1, 64, 1}}, + {"net.bpf.bufsize", []_C_int{4, 31, 1}}, + {"net.bpf.maxbufsize", []_C_int{4, 31, 2}}, + {"net.inet.ah.enable", []_C_int{4, 2, 51, 1}}, + {"net.inet.ah.stats", []_C_int{4, 2, 51, 2}}, + {"net.inet.carp.allow", []_C_int{4, 2, 112, 1}}, + {"net.inet.carp.log", []_C_int{4, 2, 112, 3}}, + {"net.inet.carp.preempt", []_C_int{4, 2, 112, 2}}, + {"net.inet.carp.stats", []_C_int{4, 2, 112, 4}}, + {"net.inet.divert.recvspace", []_C_int{4, 2, 258, 1}}, + {"net.inet.divert.sendspace", []_C_int{4, 2, 258, 2}}, + {"net.inet.divert.stats", []_C_int{4, 2, 258, 3}}, + {"net.inet.esp.enable", []_C_int{4, 2, 50, 1}}, + {"net.inet.esp.stats", []_C_int{4, 2, 50, 4}}, + {"net.inet.esp.udpencap", []_C_int{4, 2, 50, 2}}, + {"net.inet.esp.udpencap_port", []_C_int{4, 2, 50, 3}}, + {"net.inet.etherip.allow", []_C_int{4, 2, 97, 1}}, + 
{"net.inet.etherip.stats", []_C_int{4, 2, 97, 2}}, + {"net.inet.gre.allow", []_C_int{4, 2, 47, 1}}, + {"net.inet.gre.wccp", []_C_int{4, 2, 47, 2}}, + {"net.inet.icmp.bmcastecho", []_C_int{4, 2, 1, 2}}, + {"net.inet.icmp.errppslimit", []_C_int{4, 2, 1, 3}}, + {"net.inet.icmp.maskrepl", []_C_int{4, 2, 1, 1}}, + {"net.inet.icmp.rediraccept", []_C_int{4, 2, 1, 4}}, + {"net.inet.icmp.redirtimeout", []_C_int{4, 2, 1, 5}}, + {"net.inet.icmp.stats", []_C_int{4, 2, 1, 7}}, + {"net.inet.icmp.tstamprepl", []_C_int{4, 2, 1, 6}}, + {"net.inet.igmp.stats", []_C_int{4, 2, 2, 1}}, + {"net.inet.ip.arpqueued", []_C_int{4, 2, 0, 36}}, + {"net.inet.ip.encdebug", []_C_int{4, 2, 0, 12}}, + {"net.inet.ip.forwarding", []_C_int{4, 2, 0, 1}}, + {"net.inet.ip.ifq.congestion", []_C_int{4, 2, 0, 30, 4}}, + {"net.inet.ip.ifq.drops", []_C_int{4, 2, 0, 30, 3}}, + {"net.inet.ip.ifq.len", []_C_int{4, 2, 0, 30, 1}}, + {"net.inet.ip.ifq.maxlen", []_C_int{4, 2, 0, 30, 2}}, + {"net.inet.ip.maxqueue", []_C_int{4, 2, 0, 11}}, + {"net.inet.ip.mforwarding", []_C_int{4, 2, 0, 31}}, + {"net.inet.ip.mrtproto", []_C_int{4, 2, 0, 34}}, + {"net.inet.ip.mrtstats", []_C_int{4, 2, 0, 35}}, + {"net.inet.ip.mtu", []_C_int{4, 2, 0, 4}}, + {"net.inet.ip.mtudisc", []_C_int{4, 2, 0, 27}}, + {"net.inet.ip.mtudisctimeout", []_C_int{4, 2, 0, 28}}, + {"net.inet.ip.multipath", []_C_int{4, 2, 0, 32}}, + {"net.inet.ip.portfirst", []_C_int{4, 2, 0, 7}}, + {"net.inet.ip.porthifirst", []_C_int{4, 2, 0, 9}}, + {"net.inet.ip.porthilast", []_C_int{4, 2, 0, 10}}, + {"net.inet.ip.portlast", []_C_int{4, 2, 0, 8}}, + {"net.inet.ip.redirect", []_C_int{4, 2, 0, 2}}, + {"net.inet.ip.sourceroute", []_C_int{4, 2, 0, 5}}, + {"net.inet.ip.stats", []_C_int{4, 2, 0, 33}}, + {"net.inet.ip.ttl", []_C_int{4, 2, 0, 3}}, + {"net.inet.ipcomp.enable", []_C_int{4, 2, 108, 1}}, + {"net.inet.ipcomp.stats", []_C_int{4, 2, 108, 2}}, + {"net.inet.ipip.allow", []_C_int{4, 2, 4, 1}}, + {"net.inet.ipip.stats", []_C_int{4, 2, 4, 2}}, + {"net.inet.mobileip.allow", []_C_int{4, 2, 55, 1}}, + {"net.inet.pfsync.stats", []_C_int{4, 2, 240, 1}}, + {"net.inet.pim.stats", []_C_int{4, 2, 103, 1}}, + {"net.inet.tcp.ackonpush", []_C_int{4, 2, 6, 13}}, + {"net.inet.tcp.always_keepalive", []_C_int{4, 2, 6, 22}}, + {"net.inet.tcp.baddynamic", []_C_int{4, 2, 6, 6}}, + {"net.inet.tcp.drop", []_C_int{4, 2, 6, 19}}, + {"net.inet.tcp.ecn", []_C_int{4, 2, 6, 14}}, + {"net.inet.tcp.ident", []_C_int{4, 2, 6, 9}}, + {"net.inet.tcp.keepidle", []_C_int{4, 2, 6, 3}}, + {"net.inet.tcp.keepinittime", []_C_int{4, 2, 6, 2}}, + {"net.inet.tcp.keepintvl", []_C_int{4, 2, 6, 4}}, + {"net.inet.tcp.mssdflt", []_C_int{4, 2, 6, 11}}, + {"net.inet.tcp.reasslimit", []_C_int{4, 2, 6, 18}}, + {"net.inet.tcp.rfc1323", []_C_int{4, 2, 6, 1}}, + {"net.inet.tcp.rfc3390", []_C_int{4, 2, 6, 17}}, + {"net.inet.tcp.rstppslimit", []_C_int{4, 2, 6, 12}}, + {"net.inet.tcp.sack", []_C_int{4, 2, 6, 10}}, + {"net.inet.tcp.sackholelimit", []_C_int{4, 2, 6, 20}}, + {"net.inet.tcp.slowhz", []_C_int{4, 2, 6, 5}}, + {"net.inet.tcp.stats", []_C_int{4, 2, 6, 21}}, + {"net.inet.tcp.synbucketlimit", []_C_int{4, 2, 6, 16}}, + {"net.inet.tcp.syncachelimit", []_C_int{4, 2, 6, 15}}, + {"net.inet.udp.baddynamic", []_C_int{4, 2, 17, 2}}, + {"net.inet.udp.checksum", []_C_int{4, 2, 17, 1}}, + {"net.inet.udp.recvspace", []_C_int{4, 2, 17, 3}}, + {"net.inet.udp.sendspace", []_C_int{4, 2, 17, 4}}, + {"net.inet.udp.stats", []_C_int{4, 2, 17, 5}}, + {"net.inet6.divert.recvspace", []_C_int{4, 24, 86, 1}}, + {"net.inet6.divert.sendspace", []_C_int{4, 24, 86, 2}}, + 
{"net.inet6.divert.stats", []_C_int{4, 24, 86, 3}}, + {"net.inet6.icmp6.errppslimit", []_C_int{4, 24, 30, 14}}, + {"net.inet6.icmp6.mtudisc_hiwat", []_C_int{4, 24, 30, 16}}, + {"net.inet6.icmp6.mtudisc_lowat", []_C_int{4, 24, 30, 17}}, + {"net.inet6.icmp6.nd6_debug", []_C_int{4, 24, 30, 18}}, + {"net.inet6.icmp6.nd6_delay", []_C_int{4, 24, 30, 8}}, + {"net.inet6.icmp6.nd6_maxnudhint", []_C_int{4, 24, 30, 15}}, + {"net.inet6.icmp6.nd6_mmaxtries", []_C_int{4, 24, 30, 10}}, + {"net.inet6.icmp6.nd6_prune", []_C_int{4, 24, 30, 6}}, + {"net.inet6.icmp6.nd6_umaxtries", []_C_int{4, 24, 30, 9}}, + {"net.inet6.icmp6.nd6_useloopback", []_C_int{4, 24, 30, 11}}, + {"net.inet6.icmp6.nodeinfo", []_C_int{4, 24, 30, 13}}, + {"net.inet6.icmp6.rediraccept", []_C_int{4, 24, 30, 2}}, + {"net.inet6.icmp6.redirtimeout", []_C_int{4, 24, 30, 3}}, + {"net.inet6.ip6.accept_rtadv", []_C_int{4, 24, 17, 12}}, + {"net.inet6.ip6.auto_flowlabel", []_C_int{4, 24, 17, 17}}, + {"net.inet6.ip6.dad_count", []_C_int{4, 24, 17, 16}}, + {"net.inet6.ip6.dad_pending", []_C_int{4, 24, 17, 49}}, + {"net.inet6.ip6.defmcasthlim", []_C_int{4, 24, 17, 18}}, + {"net.inet6.ip6.forwarding", []_C_int{4, 24, 17, 1}}, + {"net.inet6.ip6.forwsrcrt", []_C_int{4, 24, 17, 5}}, + {"net.inet6.ip6.hdrnestlimit", []_C_int{4, 24, 17, 15}}, + {"net.inet6.ip6.hlim", []_C_int{4, 24, 17, 3}}, + {"net.inet6.ip6.log_interval", []_C_int{4, 24, 17, 14}}, + {"net.inet6.ip6.maxdynroutes", []_C_int{4, 24, 17, 48}}, + {"net.inet6.ip6.maxfragpackets", []_C_int{4, 24, 17, 9}}, + {"net.inet6.ip6.maxfrags", []_C_int{4, 24, 17, 41}}, + {"net.inet6.ip6.maxifdefrouters", []_C_int{4, 24, 17, 47}}, + {"net.inet6.ip6.maxifprefixes", []_C_int{4, 24, 17, 46}}, + {"net.inet6.ip6.mforwarding", []_C_int{4, 24, 17, 42}}, + {"net.inet6.ip6.mrtproto", []_C_int{4, 24, 17, 8}}, + {"net.inet6.ip6.mtudisctimeout", []_C_int{4, 24, 17, 50}}, + {"net.inet6.ip6.multicast_mtudisc", []_C_int{4, 24, 17, 44}}, + {"net.inet6.ip6.multipath", []_C_int{4, 24, 17, 43}}, + {"net.inet6.ip6.neighborgcthresh", []_C_int{4, 24, 17, 45}}, + {"net.inet6.ip6.redirect", []_C_int{4, 24, 17, 2}}, + {"net.inet6.ip6.rr_prune", []_C_int{4, 24, 17, 22}}, + {"net.inet6.ip6.sourcecheck", []_C_int{4, 24, 17, 10}}, + {"net.inet6.ip6.sourcecheck_logint", []_C_int{4, 24, 17, 11}}, + {"net.inet6.ip6.use_deprecated", []_C_int{4, 24, 17, 21}}, + {"net.inet6.ip6.v6only", []_C_int{4, 24, 17, 24}}, + {"net.key.sadb_dump", []_C_int{4, 30, 1}}, + {"net.key.spd_dump", []_C_int{4, 30, 2}}, + {"net.mpls.ifq.congestion", []_C_int{4, 33, 3, 4}}, + {"net.mpls.ifq.drops", []_C_int{4, 33, 3, 3}}, + {"net.mpls.ifq.len", []_C_int{4, 33, 3, 1}}, + {"net.mpls.ifq.maxlen", []_C_int{4, 33, 3, 2}}, + {"net.mpls.mapttl_ip", []_C_int{4, 33, 5}}, + {"net.mpls.mapttl_ip6", []_C_int{4, 33, 6}}, + {"net.mpls.maxloop_inkernel", []_C_int{4, 33, 4}}, + {"net.mpls.ttl", []_C_int{4, 33, 2}}, + {"net.pflow.stats", []_C_int{4, 34, 1}}, + {"net.pipex.enable", []_C_int{4, 35, 1}}, + {"vm.anonmin", []_C_int{2, 7}}, + {"vm.loadavg", []_C_int{2, 2}}, + {"vm.maxslp", []_C_int{2, 10}}, + {"vm.nkmempages", []_C_int{2, 6}}, + {"vm.psstrings", []_C_int{2, 3}}, + {"vm.swapencrypt.enable", []_C_int{2, 5, 0}}, + {"vm.swapencrypt.keyscreated", []_C_int{2, 5, 1}}, + {"vm.swapencrypt.keysdeleted", []_C_int{2, 5, 2}}, + {"vm.uspace", []_C_int{2, 11}}, + {"vm.uvmexp", []_C_int{2, 4}}, + {"vm.vmmeter", []_C_int{2, 1}}, + {"vm.vnodemin", []_C_int{2, 9}}, + {"vm.vtextmin", []_C_int{2, 8}}, +} diff --git a/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm.go 
b/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm.go new file mode 100644 index 000000000..83bb935b9 --- /dev/null +++ b/vendor/golang.org/x/sys/unix/zsysctl_openbsd_arm.go @@ -0,0 +1,270 @@ +// mksysctl_openbsd.pl +// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT + +package unix + +type mibentry struct { + ctlname string + ctloid []_C_int +} + +var sysctlMib = []mibentry{ + {"ddb.console", []_C_int{9, 6}}, + {"ddb.log", []_C_int{9, 7}}, + {"ddb.max_line", []_C_int{9, 3}}, + {"ddb.max_width", []_C_int{9, 2}}, + {"ddb.panic", []_C_int{9, 5}}, + {"ddb.radix", []_C_int{9, 1}}, + {"ddb.tab_stop_width", []_C_int{9, 4}}, + {"ddb.trigger", []_C_int{9, 8}}, + {"fs.posix.setuid", []_C_int{3, 1, 1}}, + {"hw.allowpowerdown", []_C_int{6, 22}}, + {"hw.byteorder", []_C_int{6, 4}}, + {"hw.cpuspeed", []_C_int{6, 12}}, + {"hw.diskcount", []_C_int{6, 10}}, + {"hw.disknames", []_C_int{6, 8}}, + {"hw.diskstats", []_C_int{6, 9}}, + {"hw.machine", []_C_int{6, 1}}, + {"hw.model", []_C_int{6, 2}}, + {"hw.ncpu", []_C_int{6, 3}}, + {"hw.ncpufound", []_C_int{6, 21}}, + {"hw.pagesize", []_C_int{6, 7}}, + {"hw.physmem", []_C_int{6, 19}}, + {"hw.product", []_C_int{6, 15}}, + {"hw.serialno", []_C_int{6, 17}}, + {"hw.setperf", []_C_int{6, 13}}, + {"hw.usermem", []_C_int{6, 20}}, + {"hw.uuid", []_C_int{6, 18}}, + {"hw.vendor", []_C_int{6, 14}}, + {"hw.version", []_C_int{6, 16}}, + {"kern.arandom", []_C_int{1, 37}}, + {"kern.argmax", []_C_int{1, 8}}, + {"kern.boottime", []_C_int{1, 21}}, + {"kern.bufcachepercent", []_C_int{1, 72}}, + {"kern.ccpu", []_C_int{1, 45}}, + {"kern.clockrate", []_C_int{1, 12}}, + {"kern.consdev", []_C_int{1, 75}}, + {"kern.cp_time", []_C_int{1, 40}}, + {"kern.cp_time2", []_C_int{1, 71}}, + {"kern.cryptodevallowsoft", []_C_int{1, 53}}, + {"kern.domainname", []_C_int{1, 22}}, + {"kern.file", []_C_int{1, 73}}, + {"kern.forkstat", []_C_int{1, 42}}, + {"kern.fscale", []_C_int{1, 46}}, + {"kern.fsync", []_C_int{1, 33}}, + {"kern.hostid", []_C_int{1, 11}}, + {"kern.hostname", []_C_int{1, 10}}, + {"kern.intrcnt.nintrcnt", []_C_int{1, 63, 1}}, + {"kern.job_control", []_C_int{1, 19}}, + {"kern.malloc.buckets", []_C_int{1, 39, 1}}, + {"kern.malloc.kmemnames", []_C_int{1, 39, 3}}, + {"kern.maxclusters", []_C_int{1, 67}}, + {"kern.maxfiles", []_C_int{1, 7}}, + {"kern.maxlocksperuid", []_C_int{1, 70}}, + {"kern.maxpartitions", []_C_int{1, 23}}, + {"kern.maxproc", []_C_int{1, 6}}, + {"kern.maxthread", []_C_int{1, 25}}, + {"kern.maxvnodes", []_C_int{1, 5}}, + {"kern.mbstat", []_C_int{1, 59}}, + {"kern.msgbuf", []_C_int{1, 48}}, + {"kern.msgbufsize", []_C_int{1, 38}}, + {"kern.nchstats", []_C_int{1, 41}}, + {"kern.netlivelocks", []_C_int{1, 76}}, + {"kern.nfiles", []_C_int{1, 56}}, + {"kern.ngroups", []_C_int{1, 18}}, + {"kern.nosuidcoredump", []_C_int{1, 32}}, + {"kern.nprocs", []_C_int{1, 47}}, + {"kern.nselcoll", []_C_int{1, 43}}, + {"kern.nthreads", []_C_int{1, 26}}, + {"kern.numvnodes", []_C_int{1, 58}}, + {"kern.osrelease", []_C_int{1, 2}}, + {"kern.osrevision", []_C_int{1, 3}}, + {"kern.ostype", []_C_int{1, 1}}, + {"kern.osversion", []_C_int{1, 27}}, + {"kern.pool_debug", []_C_int{1, 77}}, + {"kern.posix1version", []_C_int{1, 17}}, + {"kern.proc", []_C_int{1, 66}}, + {"kern.random", []_C_int{1, 31}}, + {"kern.rawpartition", []_C_int{1, 24}}, + {"kern.saved_ids", []_C_int{1, 20}}, + {"kern.securelevel", []_C_int{1, 9}}, + {"kern.seminfo", []_C_int{1, 61}}, + {"kern.shminfo", []_C_int{1, 62}}, + {"kern.somaxconn", []_C_int{1, 28}}, + {"kern.sominconn", []_C_int{1, 29}}, + {"kern.splassert", []_C_int{1, 
54}}, + {"kern.stackgap_random", []_C_int{1, 50}}, + {"kern.sysvipc_info", []_C_int{1, 51}}, + {"kern.sysvmsg", []_C_int{1, 34}}, + {"kern.sysvsem", []_C_int{1, 35}}, + {"kern.sysvshm", []_C_int{1, 36}}, + {"kern.timecounter.choice", []_C_int{1, 69, 4}}, + {"kern.timecounter.hardware", []_C_int{1, 69, 3}}, + {"kern.timecounter.tick", []_C_int{1, 69, 1}}, + {"kern.timecounter.timestepwarnings", []_C_int{1, 69, 2}}, + {"kern.tty.maxptys", []_C_int{1, 44, 6}}, + {"kern.tty.nptys", []_C_int{1, 44, 7}}, + {"kern.tty.tk_cancc", []_C_int{1, 44, 4}}, + {"kern.tty.tk_nin", []_C_int{1, 44, 1}}, + {"kern.tty.tk_nout", []_C_int{1, 44, 2}}, + {"kern.tty.tk_rawcc", []_C_int{1, 44, 3}}, + {"kern.tty.ttyinfo", []_C_int{1, 44, 5}}, + {"kern.ttycount", []_C_int{1, 57}}, + {"kern.userasymcrypto", []_C_int{1, 60}}, + {"kern.usercrypto", []_C_int{1, 52}}, + {"kern.usermount", []_C_int{1, 30}}, + {"kern.version", []_C_int{1, 4}}, + {"kern.vnode", []_C_int{1, 13}}, + {"kern.watchdog.auto", []_C_int{1, 64, 2}}, + {"kern.watchdog.period", []_C_int{1, 64, 1}}, + {"net.bpf.bufsize", []_C_int{4, 31, 1}}, + {"net.bpf.maxbufsize", []_C_int{4, 31, 2}}, + {"net.inet.ah.enable", []_C_int{4, 2, 51, 1}}, + {"net.inet.ah.stats", []_C_int{4, 2, 51, 2}}, + {"net.inet.carp.allow", []_C_int{4, 2, 112, 1}}, + {"net.inet.carp.log", []_C_int{4, 2, 112, 3}}, + {"net.inet.carp.preempt", []_C_int{4, 2, 112, 2}}, + {"net.inet.carp.stats", []_C_int{4, 2, 112, 4}}, + {"net.inet.divert.recvspace", []_C_int{4, 2, 258, 1}}, + {"net.inet.divert.sendspace", []_C_int{4, 2, 258, 2}}, + {"net.inet.divert.stats", []_C_int{4, 2, 258, 3}}, + {"net.inet.esp.enable", []_C_int{4, 2, 50, 1}}, + {"net.inet.esp.stats", []_C_int{4, 2, 50, 4}}, + {"net.inet.esp.udpencap", []_C_int{4, 2, 50, 2}}, + {"net.inet.esp.udpencap_port", []_C_int{4, 2, 50, 3}}, + {"net.inet.etherip.allow", []_C_int{4, 2, 97, 1}}, + {"net.inet.etherip.stats", []_C_int{4, 2, 97, 2}}, + {"net.inet.gre.allow", []_C_int{4, 2, 47, 1}}, + {"net.inet.gre.wccp", []_C_int{4, 2, 47, 2}}, + {"net.inet.icmp.bmcastecho", []_C_int{4, 2, 1, 2}}, + {"net.inet.icmp.errppslimit", []_C_int{4, 2, 1, 3}}, + {"net.inet.icmp.maskrepl", []_C_int{4, 2, 1, 1}}, + {"net.inet.icmp.rediraccept", []_C_int{4, 2, 1, 4}}, + {"net.inet.icmp.redirtimeout", []_C_int{4, 2, 1, 5}}, + {"net.inet.icmp.stats", []_C_int{4, 2, 1, 7}}, + {"net.inet.icmp.tstamprepl", []_C_int{4, 2, 1, 6}}, + {"net.inet.igmp.stats", []_C_int{4, 2, 2, 1}}, + {"net.inet.ip.arpqueued", []_C_int{4, 2, 0, 36}}, + {"net.inet.ip.encdebug", []_C_int{4, 2, 0, 12}}, + {"net.inet.ip.forwarding", []_C_int{4, 2, 0, 1}}, + {"net.inet.ip.ifq.congestion", []_C_int{4, 2, 0, 30, 4}}, + {"net.inet.ip.ifq.drops", []_C_int{4, 2, 0, 30, 3}}, + {"net.inet.ip.ifq.len", []_C_int{4, 2, 0, 30, 1}}, + {"net.inet.ip.ifq.maxlen", []_C_int{4, 2, 0, 30, 2}}, + {"net.inet.ip.maxqueue", []_C_int{4, 2, 0, 11}}, + {"net.inet.ip.mforwarding", []_C_int{4, 2, 0, 31}}, + {"net.inet.ip.mrtproto", []_C_int{4, 2, 0, 34}}, + {"net.inet.ip.mrtstats", []_C_int{4, 2, 0, 35}}, + {"net.inet.ip.mtu", []_C_int{4, 2, 0, 4}}, + {"net.inet.ip.mtudisc", []_C_int{4, 2, 0, 27}}, + {"net.inet.ip.mtudisctimeout", []_C_int{4, 2, 0, 28}}, + {"net.inet.ip.multipath", []_C_int{4, 2, 0, 32}}, + {"net.inet.ip.portfirst", []_C_int{4, 2, 0, 7}}, + {"net.inet.ip.porthifirst", []_C_int{4, 2, 0, 9}}, + {"net.inet.ip.porthilast", []_C_int{4, 2, 0, 10}}, + {"net.inet.ip.portlast", []_C_int{4, 2, 0, 8}}, + {"net.inet.ip.redirect", []_C_int{4, 2, 0, 2}}, + {"net.inet.ip.sourceroute", []_C_int{4, 2, 0, 5}}, + 
{"net.inet.ip.stats", []_C_int{4, 2, 0, 33}}, + {"net.inet.ip.ttl", []_C_int{4, 2, 0, 3}}, + {"net.inet.ipcomp.enable", []_C_int{4, 2, 108, 1}}, + {"net.inet.ipcomp.stats", []_C_int{4, 2, 108, 2}}, + {"net.inet.ipip.allow", []_C_int{4, 2, 4, 1}}, + {"net.inet.ipip.stats", []_C_int{4, 2, 4, 2}}, + {"net.inet.mobileip.allow", []_C_int{4, 2, 55, 1}}, + {"net.inet.pfsync.stats", []_C_int{4, 2, 240, 1}}, + {"net.inet.pim.stats", []_C_int{4, 2, 103, 1}}, + {"net.inet.tcp.ackonpush", []_C_int{4, 2, 6, 13}}, + {"net.inet.tcp.always_keepalive", []_C_int{4, 2, 6, 22}}, + {"net.inet.tcp.baddynamic", []_C_int{4, 2, 6, 6}}, + {"net.inet.tcp.drop", []_C_int{4, 2, 6, 19}}, + {"net.inet.tcp.ecn", []_C_int{4, 2, 6, 14}}, + {"net.inet.tcp.ident", []_C_int{4, 2, 6, 9}}, + {"net.inet.tcp.keepidle", []_C_int{4, 2, 6, 3}}, + {"net.inet.tcp.keepinittime", []_C_int{4, 2, 6, 2}}, + {"net.inet.tcp.keepintvl", []_C_int{4, 2, 6, 4}}, + {"net.inet.tcp.mssdflt", []_C_int{4, 2, 6, 11}}, + {"net.inet.tcp.reasslimit", []_C_int{4, 2, 6, 18}}, + {"net.inet.tcp.rfc1323", []_C_int{4, 2, 6, 1}}, + {"net.inet.tcp.rfc3390", []_C_int{4, 2, 6, 17}}, + {"net.inet.tcp.rstppslimit", []_C_int{4, 2, 6, 12}}, + {"net.inet.tcp.sack", []_C_int{4, 2, 6, 10}}, + {"net.inet.tcp.sackholelimit", []_C_int{4, 2, 6, 20}}, + {"net.inet.tcp.slowhz", []_C_int{4, 2, 6, 5}}, + {"net.inet.tcp.stats", []_C_int{4, 2, 6, 21}}, + {"net.inet.tcp.synbucketlimit", []_C_int{4, 2, 6, 16}}, + {"net.inet.tcp.syncachelimit", []_C_int{4, 2, 6, 15}}, + {"net.inet.udp.baddynamic", []_C_int{4, 2, 17, 2}}, + {"net.inet.udp.checksum", []_C_int{4, 2, 17, 1}}, + {"net.inet.udp.recvspace", []_C_int{4, 2, 17, 3}}, + {"net.inet.udp.sendspace", []_C_int{4, 2, 17, 4}}, + {"net.inet.udp.stats", []_C_int{4, 2, 17, 5}}, + {"net.inet6.divert.recvspace", []_C_int{4, 24, 86, 1}}, + {"net.inet6.divert.sendspace", []_C_int{4, 24, 86, 2}}, + {"net.inet6.divert.stats", []_C_int{4, 24, 86, 3}}, + {"net.inet6.icmp6.errppslimit", []_C_int{4, 24, 30, 14}}, + {"net.inet6.icmp6.mtudisc_hiwat", []_C_int{4, 24, 30, 16}}, + {"net.inet6.icmp6.mtudisc_lowat", []_C_int{4, 24, 30, 17}}, + {"net.inet6.icmp6.nd6_debug", []_C_int{4, 24, 30, 18}}, + {"net.inet6.icmp6.nd6_delay", []_C_int{4, 24, 30, 8}}, + {"net.inet6.icmp6.nd6_maxnudhint", []_C_int{4, 24, 30, 15}}, + {"net.inet6.icmp6.nd6_mmaxtries", []_C_int{4, 24, 30, 10}}, + {"net.inet6.icmp6.nd6_prune", []_C_int{4, 24, 30, 6}}, + {"net.inet6.icmp6.nd6_umaxtries", []_C_int{4, 24, 30, 9}}, + {"net.inet6.icmp6.nd6_useloopback", []_C_int{4, 24, 30, 11}}, + {"net.inet6.icmp6.nodeinfo", []_C_int{4, 24, 30, 13}}, + {"net.inet6.icmp6.rediraccept", []_C_int{4, 24, 30, 2}}, + {"net.inet6.icmp6.redirtimeout", []_C_int{4, 24, 30, 3}}, + {"net.inet6.ip6.accept_rtadv", []_C_int{4, 24, 17, 12}}, + {"net.inet6.ip6.auto_flowlabel", []_C_int{4, 24, 17, 17}}, + {"net.inet6.ip6.dad_count", []_C_int{4, 24, 17, 16}}, + {"net.inet6.ip6.dad_pending", []_C_int{4, 24, 17, 49}}, + {"net.inet6.ip6.defmcasthlim", []_C_int{4, 24, 17, 18}}, + {"net.inet6.ip6.forwarding", []_C_int{4, 24, 17, 1}}, + {"net.inet6.ip6.forwsrcrt", []_C_int{4, 24, 17, 5}}, + {"net.inet6.ip6.hdrnestlimit", []_C_int{4, 24, 17, 15}}, + {"net.inet6.ip6.hlim", []_C_int{4, 24, 17, 3}}, + {"net.inet6.ip6.log_interval", []_C_int{4, 24, 17, 14}}, + {"net.inet6.ip6.maxdynroutes", []_C_int{4, 24, 17, 48}}, + {"net.inet6.ip6.maxfragpackets", []_C_int{4, 24, 17, 9}}, + {"net.inet6.ip6.maxfrags", []_C_int{4, 24, 17, 41}}, + {"net.inet6.ip6.maxifdefrouters", []_C_int{4, 24, 17, 47}}, + {"net.inet6.ip6.maxifprefixes", 
[]_C_int{4, 24, 17, 46}}, + {"net.inet6.ip6.mforwarding", []_C_int{4, 24, 17, 42}}, + {"net.inet6.ip6.mrtproto", []_C_int{4, 24, 17, 8}}, + {"net.inet6.ip6.mtudisctimeout", []_C_int{4, 24, 17, 50}}, + {"net.inet6.ip6.multicast_mtudisc", []_C_int{4, 24, 17, 44}}, + {"net.inet6.ip6.multipath", []_C_int{4, 24, 17, 43}}, + {"net.inet6.ip6.neighborgcthresh", []_C_int{4, 24, 17, 45}}, + {"net.inet6.ip6.redirect", []_C_int{4, 24, 17, 2}}, + {"net.inet6.ip6.rr_prune", []_C_int{4, 24, 17, 22}}, + {"net.inet6.ip6.sourcecheck", []_C_int{4, 24, 17, 10}}, + {"net.inet6.ip6.sourcecheck_logint", []_C_int{4, 24, 17, 11}}, + {"net.inet6.ip6.use_deprecated", []_C_int{4, 24, 17, 21}}, + {"net.inet6.ip6.v6only", []_C_int{4, 24, 17, 24}}, + {"net.key.sadb_dump", []_C_int{4, 30, 1}}, + {"net.key.spd_dump", []_C_int{4, 30, 2}}, + {"net.mpls.ifq.congestion", []_C_int{4, 33, 3, 4}}, + {"net.mpls.ifq.drops", []_C_int{4, 33, 3, 3}}, + {"net.mpls.ifq.len", []_C_int{4, 33, 3, 1}}, + {"net.mpls.ifq.maxlen", []_C_int{4, 33, 3, 2}}, + {"net.mpls.mapttl_ip", []_C_int{4, 33, 5}}, + {"net.mpls.mapttl_ip6", []_C_int{4, 33, 6}}, + {"net.mpls.maxloop_inkernel", []_C_int{4, 33, 4}}, + {"net.mpls.ttl", []_C_int{4, 33, 2}}, + {"net.pflow.stats", []_C_int{4, 34, 1}}, + {"net.pipex.enable", []_C_int{4, 35, 1}}, + {"vm.anonmin", []_C_int{2, 7}}, + {"vm.loadavg", []_C_int{2, 2}}, + {"vm.maxslp", []_C_int{2, 10}}, + {"vm.nkmempages", []_C_int{2, 6}}, + {"vm.psstrings", []_C_int{2, 3}}, + {"vm.swapencrypt.enable", []_C_int{2, 5, 0}}, + {"vm.swapencrypt.keyscreated", []_C_int{2, 5, 1}}, + {"vm.swapencrypt.keysdeleted", []_C_int{2, 5, 2}}, + {"vm.uspace", []_C_int{2, 11}}, + {"vm.uvmexp", []_C_int{2, 4}}, + {"vm.vmmeter", []_C_int{2, 1}}, + {"vm.vnodemin", []_C_int{2, 9}}, + {"vm.vtextmin", []_C_int{2, 8}}, +} diff --git a/vendor/golang.org/x/sys/unix/zsysnum_darwin_386.go b/vendor/golang.org/x/sys/unix/zsysnum_darwin_386.go index 2786773ba..d1d36da3f 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_darwin_386.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_darwin_386.go @@ -1,5 +1,5 @@ -// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/sys/syscall.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT +// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/sys/syscall.h +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build 386,darwin @@ -121,13 +121,15 @@ const ( SYS_CSOPS = 169 SYS_CSOPS_AUDITTOKEN = 170 SYS_WAITID = 173 + SYS_KDEBUG_TYPEFILTER = 177 + SYS_KDEBUG_TRACE_STRING = 178 SYS_KDEBUG_TRACE64 = 179 SYS_KDEBUG_TRACE = 180 SYS_SETGID = 181 SYS_SETEGID = 182 SYS_SETEUID = 183 SYS_SIGRETURN = 184 - SYS_CHUD = 185 + SYS_THREAD_SELFCOUNTS = 186 SYS_FDATASYNC = 187 SYS_STAT = 188 SYS_FSTAT = 189 @@ -278,7 +280,6 @@ const ( SYS_KQUEUE = 362 SYS_KEVENT = 363 SYS_LCHOWN = 364 - SYS_STACK_SNAPSHOT = 365 SYS_BSDTHREAD_REGISTER = 366 SYS_WORKQ_OPEN = 367 SYS_WORKQ_KERNRETURN = 368 @@ -287,6 +288,8 @@ const ( SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL = 371 SYS_THREAD_SELFID = 372 SYS_LEDGER = 373 + SYS_KEVENT_QOS = 374 + SYS_KEVENT_ID = 375 SYS___MAC_EXECVE = 380 SYS___MAC_SYSCALL = 381 SYS___MAC_GET_FILE = 382 @@ -298,11 +301,8 @@ const ( SYS___MAC_GET_FD = 388 SYS___MAC_SET_FD = 389 SYS___MAC_GET_PID = 390 - SYS___MAC_GET_LCID = 391 - SYS___MAC_GET_LCTX = 392 - SYS___MAC_SET_LCTX = 393 - SYS_SETLCID = 394 - SYS_GETLCID = 395 + SYS_PSELECT = 394 + SYS_PSELECT_NOCANCEL = 395 SYS_READ_NOCANCEL = 396 SYS_WRITE_NOCANCEL = 397 SYS_OPEN_NOCANCEL = 398 @@ -351,6 +351,7 @@ const ( SYS_GUARDED_CLOSE_NP = 442 SYS_GUARDED_KQUEUE_NP = 443 SYS_CHANGE_FDGUARD_NP = 444 + SYS_USRCTL = 445 SYS_PROC_RLIMIT_CONTROL = 446 SYS_CONNECTX = 447 SYS_DISCONNECTX = 448 @@ -367,6 +368,7 @@ const ( SYS_COALITION_INFO = 459 SYS_NECP_MATCH_POLICY = 460 SYS_GETATTRLISTBULK = 461 + SYS_CLONEFILEAT = 462 SYS_OPENAT = 463 SYS_OPENAT_NOCANCEL = 464 SYS_RENAMEAT = 465 @@ -392,7 +394,43 @@ const ( SYS_GUARDED_WRITE_NP = 485 SYS_GUARDED_PWRITE_NP = 486 SYS_GUARDED_WRITEV_NP = 487 - SYS_RENAME_EXT = 488 + SYS_RENAMEATX_NP = 488 SYS_MREMAP_ENCRYPTED = 489 - SYS_MAXSYSCALL = 490 + SYS_NETAGENT_TRIGGER = 490 + SYS_STACK_SNAPSHOT_WITH_CONFIG = 491 + SYS_MICROSTACKSHOT = 492 + SYS_GRAB_PGO_DATA = 493 + SYS_PERSONA = 494 + SYS_WORK_INTERVAL_CTL = 499 + SYS_GETENTROPY = 500 + SYS_NECP_OPEN = 501 + SYS_NECP_CLIENT_ACTION = 502 + SYS___NEXUS_OPEN = 503 + SYS___NEXUS_REGISTER = 504 + SYS___NEXUS_DEREGISTER = 505 + SYS___NEXUS_CREATE = 506 + SYS___NEXUS_DESTROY = 507 + SYS___NEXUS_GET_OPT = 508 + SYS___NEXUS_SET_OPT = 509 + SYS___CHANNEL_OPEN = 510 + SYS___CHANNEL_GET_INFO = 511 + SYS___CHANNEL_SYNC = 512 + SYS___CHANNEL_GET_OPT = 513 + SYS___CHANNEL_SET_OPT = 514 + SYS_ULOCK_WAIT = 515 + SYS_ULOCK_WAKE = 516 + SYS_FCLONEFILEAT = 517 + SYS_FS_SNAPSHOT = 518 + SYS_TERMINATE_WITH_PAYLOAD = 520 + SYS_ABORT_WITH_PAYLOAD = 521 + SYS_NECP_SESSION_OPEN = 522 + SYS_NECP_SESSION_ACTION = 523 + SYS_SETATTRLISTAT = 524 + SYS_NET_QOS_GUIDELINE = 525 + SYS_FMOUNT = 526 + SYS_NTP_ADJTIME = 527 + SYS_NTP_GETTIME = 528 + SYS_OS_FAULT_WITH_PAYLOAD = 529 + SYS_MAXSYSCALL = 530 + SYS_INVALID = 63 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_darwin_amd64.go b/vendor/golang.org/x/sys/unix/zsysnum_darwin_amd64.go index 09de240c8..e35de4145 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_darwin_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_darwin_amd64.go @@ -1,5 +1,5 @@ -// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/sys/syscall.h -// MACHINE GENERATED BY THE ABOVE COMMAND; DO NOT EDIT +// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/sys/syscall.h +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build amd64,darwin @@ -121,13 +121,15 @@ const ( SYS_CSOPS = 169 SYS_CSOPS_AUDITTOKEN = 170 SYS_WAITID = 173 + SYS_KDEBUG_TYPEFILTER = 177 + SYS_KDEBUG_TRACE_STRING = 178 SYS_KDEBUG_TRACE64 = 179 SYS_KDEBUG_TRACE = 180 SYS_SETGID = 181 SYS_SETEGID = 182 SYS_SETEUID = 183 SYS_SIGRETURN = 184 - SYS_CHUD = 185 + SYS_THREAD_SELFCOUNTS = 186 SYS_FDATASYNC = 187 SYS_STAT = 188 SYS_FSTAT = 189 @@ -278,7 +280,6 @@ const ( SYS_KQUEUE = 362 SYS_KEVENT = 363 SYS_LCHOWN = 364 - SYS_STACK_SNAPSHOT = 365 SYS_BSDTHREAD_REGISTER = 366 SYS_WORKQ_OPEN = 367 SYS_WORKQ_KERNRETURN = 368 @@ -287,6 +288,8 @@ const ( SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL = 371 SYS_THREAD_SELFID = 372 SYS_LEDGER = 373 + SYS_KEVENT_QOS = 374 + SYS_KEVENT_ID = 375 SYS___MAC_EXECVE = 380 SYS___MAC_SYSCALL = 381 SYS___MAC_GET_FILE = 382 @@ -298,11 +301,8 @@ const ( SYS___MAC_GET_FD = 388 SYS___MAC_SET_FD = 389 SYS___MAC_GET_PID = 390 - SYS___MAC_GET_LCID = 391 - SYS___MAC_GET_LCTX = 392 - SYS___MAC_SET_LCTX = 393 - SYS_SETLCID = 394 - SYS_GETLCID = 395 + SYS_PSELECT = 394 + SYS_PSELECT_NOCANCEL = 395 SYS_READ_NOCANCEL = 396 SYS_WRITE_NOCANCEL = 397 SYS_OPEN_NOCANCEL = 398 @@ -351,6 +351,7 @@ const ( SYS_GUARDED_CLOSE_NP = 442 SYS_GUARDED_KQUEUE_NP = 443 SYS_CHANGE_FDGUARD_NP = 444 + SYS_USRCTL = 445 SYS_PROC_RLIMIT_CONTROL = 446 SYS_CONNECTX = 447 SYS_DISCONNECTX = 448 @@ -367,6 +368,7 @@ const ( SYS_COALITION_INFO = 459 SYS_NECP_MATCH_POLICY = 460 SYS_GETATTRLISTBULK = 461 + SYS_CLONEFILEAT = 462 SYS_OPENAT = 463 SYS_OPENAT_NOCANCEL = 464 SYS_RENAMEAT = 465 @@ -392,7 +394,43 @@ const ( SYS_GUARDED_WRITE_NP = 485 SYS_GUARDED_PWRITE_NP = 486 SYS_GUARDED_WRITEV_NP = 487 - SYS_RENAME_EXT = 488 + SYS_RENAMEATX_NP = 488 SYS_MREMAP_ENCRYPTED = 489 - SYS_MAXSYSCALL = 490 + SYS_NETAGENT_TRIGGER = 490 + SYS_STACK_SNAPSHOT_WITH_CONFIG = 491 + SYS_MICROSTACKSHOT = 492 + SYS_GRAB_PGO_DATA = 493 + SYS_PERSONA = 494 + SYS_WORK_INTERVAL_CTL = 499 + SYS_GETENTROPY = 500 + SYS_NECP_OPEN = 501 + SYS_NECP_CLIENT_ACTION = 502 + SYS___NEXUS_OPEN = 503 + SYS___NEXUS_REGISTER = 504 + SYS___NEXUS_DEREGISTER = 505 + SYS___NEXUS_CREATE = 506 + SYS___NEXUS_DESTROY = 507 + SYS___NEXUS_GET_OPT = 508 + SYS___NEXUS_SET_OPT = 509 + SYS___CHANNEL_OPEN = 510 + SYS___CHANNEL_GET_INFO = 511 + SYS___CHANNEL_SYNC = 512 + SYS___CHANNEL_GET_OPT = 513 + SYS___CHANNEL_SET_OPT = 514 + SYS_ULOCK_WAIT = 515 + SYS_ULOCK_WAKE = 516 + SYS_FCLONEFILEAT = 517 + SYS_FS_SNAPSHOT = 518 + SYS_TERMINATE_WITH_PAYLOAD = 520 + SYS_ABORT_WITH_PAYLOAD = 521 + SYS_NECP_SESSION_OPEN = 522 + SYS_NECP_SESSION_ACTION = 523 + SYS_SETATTRLISTAT = 524 + SYS_NET_QOS_GUIDELINE = 525 + SYS_FMOUNT = 526 + SYS_NTP_ADJTIME = 527 + SYS_NTP_GETTIME = 528 + SYS_OS_FAULT_WITH_PAYLOAD = 529 + SYS_MAXSYSCALL = 530 + SYS_INVALID = 63 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm.go b/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm.go index 41cb6ed39..f2df27db2 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm.go @@ -1,4 +1,4 @@ -// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS10.2.sdk/usr/include/sys/syscall.h +// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS11.1.sdk/usr/include/sys/syscall.h // Code generated by the command above; see README.md. DO NOT EDIT. 
// +build arm,darwin @@ -129,6 +129,7 @@ const ( SYS_SETEGID = 182 SYS_SETEUID = 183 SYS_SIGRETURN = 184 + SYS_THREAD_SELFCOUNTS = 186 SYS_FDATASYNC = 187 SYS_STAT = 188 SYS_FSTAT = 189 @@ -288,6 +289,7 @@ const ( SYS_THREAD_SELFID = 372 SYS_LEDGER = 373 SYS_KEVENT_QOS = 374 + SYS_KEVENT_ID = 375 SYS___MAC_EXECVE = 380 SYS___MAC_SYSCALL = 381 SYS___MAC_GET_FILE = 382 @@ -421,6 +423,14 @@ const ( SYS_FS_SNAPSHOT = 518 SYS_TERMINATE_WITH_PAYLOAD = 520 SYS_ABORT_WITH_PAYLOAD = 521 - SYS_MAXSYSCALL = 522 + SYS_NECP_SESSION_OPEN = 522 + SYS_NECP_SESSION_ACTION = 523 + SYS_SETATTRLISTAT = 524 + SYS_NET_QOS_GUIDELINE = 525 + SYS_FMOUNT = 526 + SYS_NTP_ADJTIME = 527 + SYS_NTP_GETTIME = 528 + SYS_OS_FAULT_WITH_PAYLOAD = 529 + SYS_MAXSYSCALL = 530 SYS_INVALID = 63 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm64.go b/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm64.go index 075816c34..969463023 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm64.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_darwin_arm64.go @@ -1,4 +1,4 @@ -// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS10.2.sdk/usr/include/sys/syscall.h +// mksysnum_darwin.pl /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS11.1.sdk/usr/include/sys/syscall.h // Code generated by the command above; see README.md. DO NOT EDIT. // +build arm64,darwin @@ -129,6 +129,7 @@ const ( SYS_SETEGID = 182 SYS_SETEUID = 183 SYS_SIGRETURN = 184 + SYS_THREAD_SELFCOUNTS = 186 SYS_FDATASYNC = 187 SYS_STAT = 188 SYS_FSTAT = 189 @@ -288,6 +289,7 @@ const ( SYS_THREAD_SELFID = 372 SYS_LEDGER = 373 SYS_KEVENT_QOS = 374 + SYS_KEVENT_ID = 375 SYS___MAC_EXECVE = 380 SYS___MAC_SYSCALL = 381 SYS___MAC_GET_FILE = 382 @@ -421,6 +423,14 @@ const ( SYS_FS_SNAPSHOT = 518 SYS_TERMINATE_WITH_PAYLOAD = 520 SYS_ABORT_WITH_PAYLOAD = 521 - SYS_MAXSYSCALL = 522 + SYS_NECP_SESSION_OPEN = 522 + SYS_NECP_SESSION_ACTION = 523 + SYS_SETATTRLISTAT = 524 + SYS_NET_QOS_GUIDELINE = 525 + SYS_FMOUNT = 526 + SYS_NTP_ADJTIME = 527 + SYS_NTP_GETTIME = 528 + SYS_OS_FAULT_WITH_PAYLOAD = 529 + SYS_MAXSYSCALL = 530 SYS_INVALID = 63 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go index cef4fed02..95ab12903 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go @@ -385,4 +385,6 @@ const ( SYS_PKEY_MPROTECT = 380 SYS_PKEY_ALLOC = 381 SYS_PKEY_FREE = 382 + SYS_STATX = 383 + SYS_ARCH_PRCTL = 384 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go index 49bfa1270..c5dabf2e4 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go @@ -338,4 +338,5 @@ const ( SYS_PKEY_MPROTECT = 329 SYS_PKEY_ALLOC = 330 SYS_PKEY_FREE = 331 + SYS_STATX = 332 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go index 97b182ef5..ab7fa5fd3 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go @@ -358,4 +358,5 @@ const ( SYS_PKEY_MPROTECT = 394 SYS_PKEY_ALLOC = 395 SYS_PKEY_FREE = 396 + SYS_STATX = 397 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go index 640784357..b1c6b4bd3 100644 --- 
a/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go @@ -282,4 +282,5 @@ const ( SYS_PKEY_MPROTECT = 288 SYS_PKEY_ALLOC = 289 SYS_PKEY_FREE = 290 + SYS_STATX = 291 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go index 939567c09..2e9aa7a3e 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go @@ -371,4 +371,5 @@ const ( SYS_PKEY_MPROTECT = 4363 SYS_PKEY_ALLOC = 4364 SYS_PKEY_FREE = 4365 + SYS_STATX = 4366 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go index 09db95969..92827635a 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go @@ -331,4 +331,5 @@ const ( SYS_PKEY_MPROTECT = 5323 SYS_PKEY_ALLOC = 5324 SYS_PKEY_FREE = 5325 + SYS_STATX = 5326 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go index d1b872a09..45bd3fd6c 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go @@ -331,4 +331,5 @@ const ( SYS_PKEY_MPROTECT = 5323 SYS_PKEY_ALLOC = 5324 SYS_PKEY_FREE = 5325 + SYS_STATX = 5326 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go index 82ba20f28..62ccac4b7 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go @@ -371,4 +371,5 @@ const ( SYS_PKEY_MPROTECT = 4363 SYS_PKEY_ALLOC = 4364 SYS_PKEY_FREE = 4365 + SYS_STATX = 4366 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go index 8944448ae..dfe5dab67 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go @@ -366,4 +366,5 @@ const ( SYS_PREADV2 = 380 SYS_PWRITEV2 = 381 SYS_KEXEC_FILE_LOAD = 382 + SYS_STATX = 383 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go index 90a039be4..eca97f738 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go @@ -366,4 +366,5 @@ const ( SYS_PREADV2 = 380 SYS_PWRITEV2 = 381 SYS_KEXEC_FILE_LOAD = 382 + SYS_STATX = 383 ) diff --git a/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go b/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go index aab0cdb18..8bf50c8d4 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go @@ -306,6 +306,9 @@ const ( SYS_COPY_FILE_RANGE = 375 SYS_PREADV2 = 376 SYS_PWRITEV2 = 377 + SYS_S390_GUARDED_STORAGE = 378 + SYS_STATX = 379 + SYS_S390_STHYI = 380 SYS_SELECT = 142 SYS_GETRLIMIT = 191 SYS_LCHOWN = 198 diff --git a/vendor/golang.org/x/sys/unix/zsysnum_netbsd_386.go b/vendor/golang.org/x/sys/unix/zsysnum_netbsd_386.go index f60d8f988..8afda9c45 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_netbsd_386.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_netbsd_386.go @@ -134,6 +134,7 @@ const ( SYS_MINHERIT = 273 // { int|sys||minherit(void *addr, size_t len, int inherit); } SYS_LCHMOD = 274 // { int|sys||lchmod(const char *path, mode_t mode); } SYS_LCHOWN = 275 // { int|sys||lchown(const char *path, uid_t uid, 
gid_t gid); } + SYS_MSYNC = 277 // { int|sys|13|msync(void *addr, size_t len, int flags); } SYS___POSIX_CHOWN = 283 // { int|sys||__posix_chown(const char *path, uid_t uid, gid_t gid); } SYS___POSIX_FCHOWN = 284 // { int|sys||__posix_fchown(int fd, uid_t uid, gid_t gid); } SYS___POSIX_LCHOWN = 285 // { int|sys||__posix_lchown(const char *path, uid_t uid, gid_t gid); } diff --git a/vendor/golang.org/x/sys/unix/zsysnum_netbsd_amd64.go b/vendor/golang.org/x/sys/unix/zsysnum_netbsd_amd64.go index 48a91d464..aea8dbec4 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_netbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_netbsd_amd64.go @@ -134,6 +134,7 @@ const ( SYS_MINHERIT = 273 // { int|sys||minherit(void *addr, size_t len, int inherit); } SYS_LCHMOD = 274 // { int|sys||lchmod(const char *path, mode_t mode); } SYS_LCHOWN = 275 // { int|sys||lchown(const char *path, uid_t uid, gid_t gid); } + SYS_MSYNC = 277 // { int|sys|13|msync(void *addr, size_t len, int flags); } SYS___POSIX_CHOWN = 283 // { int|sys||__posix_chown(const char *path, uid_t uid, gid_t gid); } SYS___POSIX_FCHOWN = 284 // { int|sys||__posix_fchown(int fd, uid_t uid, gid_t gid); } SYS___POSIX_LCHOWN = 285 // { int|sys||__posix_lchown(const char *path, uid_t uid, gid_t gid); } diff --git a/vendor/golang.org/x/sys/unix/zsysnum_netbsd_arm.go b/vendor/golang.org/x/sys/unix/zsysnum_netbsd_arm.go index 612ba662c..c6158a7ef 100644 --- a/vendor/golang.org/x/sys/unix/zsysnum_netbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/zsysnum_netbsd_arm.go @@ -134,6 +134,7 @@ const ( SYS_MINHERIT = 273 // { int|sys||minherit(void *addr, size_t len, int inherit); } SYS_LCHMOD = 274 // { int|sys||lchmod(const char *path, mode_t mode); } SYS_LCHOWN = 275 // { int|sys||lchown(const char *path, uid_t uid, gid_t gid); } + SYS_MSYNC = 277 // { int|sys|13|msync(void *addr, size_t len, int flags); } SYS___POSIX_CHOWN = 283 // { int|sys||__posix_chown(const char *path, uid_t uid, gid_t gid); } SYS___POSIX_FCHOWN = 284 // { int|sys||__posix_fchown(int fd, uid_t uid, gid_t gid); } SYS___POSIX_LCHOWN = 285 // { int|sys||__posix_lchown(const char *path, uid_t uid, gid_t gid); } diff --git a/vendor/golang.org/x/sys/unix/zsysnum_solaris_amd64.go b/vendor/golang.org/x/sys/unix/zsysnum_solaris_amd64.go deleted file mode 100644 index c70865985..000000000 --- a/vendor/golang.org/x/sys/unix/zsysnum_solaris_amd64.go +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright 2014 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -// +build amd64,solaris - -package unix - -// TODO(aram): remove these before Go 1.3. 
-const ( - SYS_EXECVE = 59 - SYS_FCNTL = 62 -) diff --git a/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go b/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go index e61d78a54..327af5fba 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go +++ b/vendor/golang.org/x/sys/unix/ztypes_darwin_386.go @@ -136,13 +136,13 @@ type Fsid struct { } type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte + Ino uint64 + Seekoff uint64 + Reclen uint16 + Namlen uint16 + Type uint8 + Name [1024]int8 + _ [3]byte } type RawSockaddrInet4 struct { @@ -295,14 +295,14 @@ const ( ) type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Data IfData } type IfData struct { @@ -338,51 +338,51 @@ type IfData struct { } type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Metric int32 } type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte } type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Refcount int32 } type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + _ [2]byte + Flags int32 + Addrs int32 + Pid int32 + Seq int32 + Errno int32 + Use int32 + Inits uint32 + Rmx RtMetrics } type RtMetrics struct { @@ -430,11 +430,11 @@ type BpfInsn struct { } type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte + Tstamp Timeval + Caplen uint32 + Datalen uint32 + Hdrlen uint16 + _ [2]byte } type Termios struct { @@ -460,3 +460,30 @@ const ( AT_SYMLINK_FOLLOW = 0x40 AT_SYMLINK_NOFOLLOW = 0x20 ) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go index 2619155ff..116e6e075 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_darwin_amd64.go @@ -26,9 +26,9 @@ type Timespec struct { } type Timeval struct { - Sec int64 - Usec int32 - Pad_cgo_0 [4]byte + Sec int64 + Usec int32 + _ [4]byte } type Timeval32 struct { @@ -70,7 +70,7 @@ type Stat_t struct { Uid uint32 Gid uint32 Rdev int32 - Pad_cgo_0 [4]byte + _ [4]byte Atimespec Timespec Mtimespec Timespec Ctimespec Timespec @@ -120,9 +120,9 @@ type Fstore_t struct { } type 
Radvisory_t struct { - Offset int64 - Count int32 - Pad_cgo_0 [4]byte + Offset int64 + Count int32 + _ [4]byte } type Fbootstraptransfer_t struct { @@ -132,9 +132,9 @@ type Fbootstraptransfer_t struct { } type Log2phys_t struct { - Flags uint32 - Pad_cgo_0 [8]byte - Pad_cgo_1 [8]byte + Flags uint32 + _ [8]byte + _ [8]byte } type Fsid struct { @@ -142,13 +142,13 @@ type Fsid struct { } type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte + Ino uint64 + Seekoff uint64 + Reclen uint16 + Namlen uint16 + Type uint8 + Name [1024]int8 + _ [3]byte } type RawSockaddrInet4 struct { @@ -221,10 +221,10 @@ type IPv6Mreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen int32 - Pad_cgo_1 [4]byte + _ [4]byte Control *byte Controllen uint32 Flags int32 @@ -303,14 +303,14 @@ const ( ) type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Data IfData } type IfData struct { @@ -346,51 +346,51 @@ type IfData struct { } type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Metric int32 } type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte } type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Refcount int32 } type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + _ [2]byte + Flags int32 + Addrs int32 + Pid int32 + Seq int32 + Errno int32 + Use int32 + Inits uint32 + Rmx RtMetrics } type RtMetrics struct { @@ -426,9 +426,9 @@ type BpfStat struct { } type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn + Len uint32 + _ [4]byte + Insns *BpfInsn } type BpfInsn struct { @@ -439,22 +439,22 @@ type BpfInsn struct { } type BpfHdr struct { - Tstamp Timeval32 - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte + Tstamp Timeval32 + Caplen uint32 + Datalen uint32 + Hdrlen uint16 + _ [2]byte } type Termios struct { - Iflag uint64 - Oflag uint64 - Cflag uint64 - Lflag uint64 - Cc [20]uint8 - Pad_cgo_0 [4]byte - Ispeed uint64 - Ospeed uint64 + Iflag uint64 + Oflag uint64 + Cflag uint64 + Lflag uint64 + Cc [20]uint8 + _ [4]byte + Ispeed uint64 + Ospeed uint64 } type Winsize struct { @@ -470,3 +470,30 @@ const ( AT_SYMLINK_FOLLOW = 0x40 AT_SYMLINK_NOFOLLOW = 0x20 ) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte 
+ Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go b/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go index 4dca0d4db..2750ad760 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go +++ b/vendor/golang.org/x/sys/unix/ztypes_darwin_arm.go @@ -137,13 +137,13 @@ type Fsid struct { } type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte + Ino uint64 + Seekoff uint64 + Reclen uint16 + Namlen uint16 + Type uint8 + Name [1024]int8 + _ [3]byte } type RawSockaddrInet4 struct { @@ -296,14 +296,14 @@ const ( ) type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Data IfData } type IfData struct { @@ -339,51 +339,51 @@ type IfData struct { } type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Metric int32 } type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte } type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Refcount int32 } type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + _ [2]byte + Flags int32 + Addrs int32 + Pid int32 + Seq int32 + Errno int32 + Use int32 + Inits uint32 + Rmx RtMetrics } type RtMetrics struct { @@ -431,11 +431,11 @@ type BpfInsn struct { } type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte + Tstamp Timeval + Caplen uint32 + Datalen uint32 + Hdrlen uint16 + _ [2]byte } type Termios struct { @@ -461,3 +461,30 @@ const ( AT_SYMLINK_FOLLOW = 0x40 AT_SYMLINK_NOFOLLOW = 0x20 ) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go b/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go index f2881fd14..8cead0996 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_darwin_arm64.go @@ -1,6 +1,7 @@ +// cgo -godefs types_darwin.go | go run mkpost.go +// Code generated by the command above; see README.md. DO NOT EDIT. 
+ // +build arm64,darwin -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_darwin.go package unix @@ -25,9 +26,9 @@ type Timespec struct { } type Timeval struct { - Sec int64 - Usec int32 - Pad_cgo_0 [4]byte + Sec int64 + Usec int32 + _ [4]byte } type Timeval32 struct { @@ -69,7 +70,7 @@ type Stat_t struct { Uid uint32 Gid uint32 Rdev int32 - Pad_cgo_0 [4]byte + _ [4]byte Atimespec Timespec Mtimespec Timespec Ctimespec Timespec @@ -119,9 +120,9 @@ type Fstore_t struct { } type Radvisory_t struct { - Offset int64 - Count int32 - Pad_cgo_0 [4]byte + Offset int64 + Count int32 + _ [4]byte } type Fbootstraptransfer_t struct { @@ -131,9 +132,9 @@ type Fbootstraptransfer_t struct { } type Log2phys_t struct { - Flags uint32 - Pad_cgo_0 [8]byte - Pad_cgo_1 [8]byte + Flags uint32 + _ [8]byte + _ [8]byte } type Fsid struct { @@ -141,13 +142,13 @@ type Fsid struct { } type Dirent struct { - Ino uint64 - Seekoff uint64 - Reclen uint16 - Namlen uint16 - Type uint8 - Name [1024]int8 - Pad_cgo_0 [3]byte + Ino uint64 + Seekoff uint64 + Reclen uint16 + Namlen uint16 + Type uint8 + Name [1024]int8 + _ [3]byte } type RawSockaddrInet4 struct { @@ -220,10 +221,10 @@ type IPv6Mreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen int32 - Pad_cgo_1 [4]byte + _ [4]byte Control *byte Controllen uint32 Flags int32 @@ -302,14 +303,14 @@ const ( ) type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Data IfData } type IfData struct { @@ -345,51 +346,51 @@ type IfData struct { } type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Metric int32 } type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte } type IfmaMsghdr2 struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Refcount int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Refcount int32 } type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + _ [2]byte + Flags int32 + Addrs int32 + Pid int32 + Seq int32 + Errno int32 + Use int32 + Inits uint32 + Rmx RtMetrics } type RtMetrics struct { @@ -425,9 +426,9 @@ type BpfStat struct { } type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn + Len uint32 + _ [4]byte + Insns *BpfInsn } type BpfInsn struct { @@ -438,22 +439,22 @@ type BpfInsn struct { } type BpfHdr struct { - Tstamp Timeval32 - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte + Tstamp Timeval32 + Caplen uint32 + Datalen uint32 + Hdrlen uint16 + _ [2]byte } type Termios struct { - Iflag uint64 - Oflag uint64 - Cflag uint64 - Lflag uint64 - Cc [20]uint8 - Pad_cgo_0 [4]byte - Ispeed uint64 - Ospeed uint64 + Iflag uint64 + Oflag uint64 + Cflag uint64 + Lflag 
uint64 + Cc [20]uint8 + _ [4]byte + Ispeed uint64 + Ospeed uint64 } type Winsize struct { @@ -469,3 +470,30 @@ const ( AT_SYMLINK_FOLLOW = 0x40 AT_SYMLINK_NOFOLLOW = 0x20 ) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go index e585c893a..315a553bd 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_dragonfly_amd64.go @@ -108,7 +108,7 @@ type Statfs_t struct { Owner uint32 Type int32 Flags int32 - Pad_cgo_0 [4]byte + _ [4]byte Syncwrites int64 Asyncwrites int64 Fstypename [16]int8 @@ -118,7 +118,7 @@ type Statfs_t struct { Spares1 int16 Mntfromname [80]int8 Spares2 int16 - Pad_cgo_1 [4]byte + _ [4]byte Spare [2]int64 } @@ -143,6 +143,10 @@ type Fsid struct { Val [2]int32 } +const ( + PathMax = 0x400 +) + type RawSockaddrInet4 struct { Len uint8 Family uint8 @@ -215,10 +219,10 @@ type IPv6Mreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen int32 - Pad_cgo_1 [4]byte + _ [4]byte Control *byte Controllen uint32 Flags int32 @@ -290,14 +294,14 @@ const ( ) type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Data IfData } type IfData struct { @@ -307,7 +311,7 @@ type IfData struct { Hdrlen uint8 Recvquota uint8 Xmitquota uint8 - Pad_cgo_0 [2]byte + _ [2]byte Mtu uint64 Metric uint64 Link_state uint64 @@ -329,24 +333,24 @@ type IfData struct { } type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Metric int32 } type IfmaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte } type IfAnnounceMsghdr struct { @@ -359,19 +363,19 @@ type IfAnnounceMsghdr struct { } type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint64 - Rmx RtMetrics + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + _ [2]byte + Flags int32 + Addrs int32 + Pid int32 + Seq int32 + Errno int32 + Use int32 + Inits uint64 + Rmx RtMetrics } type RtMetrics struct { @@ -387,7 +391,7 @@ type RtMetrics struct { Hopcount uint64 Mssopt uint16 Pad uint16 - Pad_cgo_0 [4]byte + _ [4]byte Msl uint64 Iwmaxsegs uint64 Iwcapsegs uint64 @@ -412,9 +416,9 @@ type BpfStat struct { } type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn + Len uint32 + _ [4]byte + Insns *BpfInsn } type BpfInsn struct { @@ -425,11 +429,11 @@ type BpfInsn struct { } type BpfHdr struct { - Tstamp Timeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [6]byte + Tstamp 
Timeval + Caplen uint32 + Datalen uint32 + Hdrlen uint16 + _ [6]byte } type Termios struct { @@ -441,3 +445,42 @@ type Termios struct { Ispeed uint32 Ospeed uint32 } + +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +const ( + AT_FDCWD = 0xfffafdcd + AT_SYMLINK_NOFOLLOW = 0x1 +) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [32]byte + Nodename [32]byte + Release [32]byte + Version [32]byte + Machine [32]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go b/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go index 5b28bcbba..878a21ad6 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go +++ b/vendor/golang.org/x/sys/unix/ztypes_freebsd_386.go @@ -140,6 +140,10 @@ type Fsid struct { Val [2]int32 } +const ( + PathMax = 0x400 +) + const ( FADV_NORMAL = 0x0 FADV_RANDOM = 0x1 @@ -516,6 +520,34 @@ const ( AT_SYMLINK_NOFOLLOW = 0x200 ) +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLINIGNEOF = 0x2000 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + type CapRights struct { Rights [2]uint64 } + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go index c65d89e49..8408af125 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_freebsd_amd64.go @@ -140,6 +140,10 @@ type Fsid struct { Val [2]int32 } +const ( + PathMax = 0x400 +) + const ( FADV_NORMAL = 0x0 FADV_RANDOM = 0x1 @@ -519,6 +523,34 @@ const ( AT_SYMLINK_NOFOLLOW = 0x200 ) +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLINIGNEOF = 0x2000 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + type CapRights struct { Rights [2]uint64 } + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go b/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go index 42c0a502c..4b2d9a483 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go +++ b/vendor/golang.org/x/sys/unix/ztypes_freebsd_arm.go @@ -142,6 +142,10 @@ type Fsid struct { Val [2]int32 } +const ( + PathMax = 0x400 +) + const ( FADV_NORMAL = 0x0 FADV_RANDOM = 0x1 @@ -519,6 +523,34 @@ const ( AT_SYMLINK_NOFOLLOW = 0x200 ) +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLINIGNEOF = 0x2000 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + type CapRights struct { Rights [2]uint64 } + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_386.go 
b/vendor/golang.org/x/sys/unix/ztypes_linux_386.go index 0dcebb50b..f9a99358a 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_386.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_386.go @@ -52,7 +52,7 @@ type Timex struct { Errcnt int32 Stbcnt int32 Tai int32 - Pad_cgo_0 [44]byte + _ [44]byte } type Time_t int32 @@ -98,7 +98,7 @@ type _Gid_t uint32 type Stat_t struct { Dev uint64 X__pad1 uint16 - Pad_cgo_0 [2]byte + _ [2]byte X__st_ino uint32 Mode uint32 Nlink uint32 @@ -106,7 +106,7 @@ type Stat_t struct { Gid uint32 Rdev uint64 X__pad2 uint16 - Pad_cgo_1 [2]byte + _ [2]byte Size int64 Blksize int32 Blocks int64 @@ -131,13 +131,43 @@ type Statfs_t struct { Spare [4]int32 } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [1]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]int8 + _ [1]byte } type Fsid struct { @@ -224,11 +254,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -341,7 +380,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -376,6 +415,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -396,97 +436,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - 
RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -565,9 +631,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [2]byte - Filter *SockFilter + Len uint16 + _ [2]byte + Filter *SockFilter } type InotifyEvent struct { @@ -621,12 +687,12 @@ type Sysinfo_t struct { } type Utsname struct { - Sysname [65]int8 - 
Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { @@ -643,8 +709,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -673,8 +746,6 @@ const RNDGETENTCNT = 0x80045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -692,3 +763,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint32 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x20 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + 
BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go index d70e54348..4df7088d4 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_amd64.go @@ -33,13 +33,13 @@ type Timeval struct { type Timex struct { Modes uint32 - Pad_cgo_0 [4]byte + _ [4]byte Offset int64 Freq int64 Maxerror int64 Esterror int64 Status int32 - Pad_cgo_1 [4]byte + _ [4]byte Constant int64 Precision int64 Tolerance int64 @@ -48,14 +48,14 @@ type Timex struct { Ppsfreq int64 Jitter int64 Shift int32 - Pad_cgo_2 [4]byte + _ [4]byte Stabil int64 Jitcnt int64 Calcnt int64 Errcnt int64 Stbcnt int64 Tai int32 - Pad_cgo_3 [44]byte + _ [44]byte } type Time_t int64 @@ -131,13 +131,43 @@ type Statfs_t struct { Spare [4]int64 } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]int8 + _ [5]byte } type Fsid struct { @@ -145,13 +175,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -226,11 +256,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -297,13 +336,13 @@ type PacketMreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen uint64 Control *byte Controllen uint64 Flags int32 - Pad_cgo_1 [4]byte + _ [4]byte } type Cmsghdr struct { @@ -345,7 +384,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -380,6 +419,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -400,97 +440,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd 
- RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + 
RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -569,9 +635,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter + Len uint16 + _ [6]byte + Filter *SockFilter } type InotifyEvent struct { @@ -628,30 +694,30 @@ type Sysinfo_t struct { Freeswap uint64 Procs uint16 Pad uint16 - Pad_cgo_0 [4]byte + _ [4]byte Totalhigh uint64 Freehigh uint64 Unit uint32 X_f [0]int8 - Pad_cgo_1 [4]byte + _ [4]byte } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_1 [4]byte + Tfree int32 + _ [4]byte + Tinode uint64 + Fname [6]int8 + Fpack [6]int8 + _ [4]byte } type EpollEvent struct { @@ -661,8 +727,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -691,8 +764,6 @@ const RNDGETENTCNT = 0x80045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -710,3 +781,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + 
CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint64 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x40 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go b/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go index 497f56319..a181469ca 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_arm.go @@ -52,7 +52,7 @@ type Timex struct { Errcnt int32 Stbcnt int32 Tai int32 - Pad_cgo_0 [44]byte + _ [44]byte } type Time_t int32 @@ -98,7 +98,7 @@ type _Gid_t uint32 type Stat_t struct { Dev uint64 X__pad1 uint16 - Pad_cgo_0 [2]byte + _ [2]byte X__st_ino uint32 Mode uint32 Nlink uint32 @@ -106,10 +106,10 @@ type Stat_t struct { Gid uint32 Rdev uint64 X__pad2 uint16 - Pad_cgo_1 [6]byte + _ [6]byte Size int64 Blksize int32 - Pad_cgo_2 [4]byte + _ [4]byte Blocks int64 Atim Timespec Mtim Timespec @@ -118,28 +118,58 @@ type Stat_t struct { } type Statfs_t struct { - Type int32 - Bsize int32 - Blocks uint64 - Bfree uint64 - Bavail uint64 - Files uint64 - Ffree uint64 - Fsid Fsid - Namelen int32 - Frsize int32 - Flags int32 - Spare [4]int32 - Pad_cgo_0 [4]byte + Type int32 + Bsize int32 + Blocks uint64 + Bfree uint64 + Bavail uint64 + Files uint64 + Ffree uint64 + Fsid Fsid + Namelen int32 + Frsize int32 + Flags int32 + Spare [4]int32 + _ [4]byte +} + +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 } type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]uint8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]uint8 + _ [5]byte } type Fsid struct { @@ -147,13 +177,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -228,11 +258,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex 
int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -345,7 +384,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -380,6 +419,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -400,97 +440,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + 
IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -569,9 +635,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [2]byte - Filter *SockFilter + Len uint16 + _ [2]byte + Filter *SockFilter } type InotifyEvent struct { @@ -609,12 +675,12 @@ type Sysinfo_t struct { } type Utsname struct { - Sysname [65]uint8 - Nodename [65]uint8 - Release [65]uint8 - Version [65]uint8 - Machine [65]uint8 - Domainname [65]uint8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { @@ -632,8 +698,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -662,8 +735,6 @@ const RNDGETENTCNT = 0x80045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -681,3 +752,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]uint8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + 
Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint32 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x20 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go index f0bdaede6..cff50c3d3 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_arm64.go @@ -33,13 +33,13 @@ type Timeval struct { type Timex struct { Modes uint32 - Pad_cgo_0 [4]byte + _ [4]byte Offset int64 Freq int64 Maxerror int64 Esterror int64 Status int32 - Pad_cgo_1 [4]byte + _ [4]byte Constant int64 Precision int64 Tolerance int64 @@ -48,14 +48,14 @@ type Timex struct { Ppsfreq int64 Jitter int64 Shift int32 - Pad_cgo_2 [4]byte + _ [4]byte Stabil int64 Jitcnt int64 Calcnt int64 Errcnt int64 Stbcnt int64 Tai int32 - Pad_cgo_3 [44]byte + _ [44]byte } type Time_t int64 @@ -132,13 +132,43 @@ type Statfs_t struct { Spare [4]int64 } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]int8 + _ [5]byte } type Fsid struct { @@ -146,13 +176,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - 
Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -227,11 +257,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -298,13 +337,13 @@ type PacketMreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen uint64 Control *byte Controllen uint64 Flags int32 - Pad_cgo_1 [4]byte + _ [4]byte } type Cmsghdr struct { @@ -346,7 +385,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -381,6 +420,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -401,97 +441,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER 
= 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -570,9 +636,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter + Len uint16 + _ [6]byte + Filter *SockFilter } type InotifyEvent struct { @@ -606,30 +672,30 @@ type Sysinfo_t struct { Freeswap uint64 Procs uint16 Pad uint16 - Pad_cgo_0 [4]byte + _ [4]byte Totalhigh uint64 Freehigh uint64 Unit uint32 X_f [0]int8 - Pad_cgo_1 [4]byte + _ [4]byte } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_1 [4]byte + Tfree int32 + _ [4]byte + Tinode uint64 + Fname [6]int8 + Fpack [6]int8 + _ [4]byte } type EpollEvent struct { @@ -640,8 +706,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -670,8 +743,6 @@ const 
RNDGETENTCNT = 0x80045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -689,3 +760,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint64 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x40 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go b/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go index 850a68cb2..87d0a8b1b 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mips.go @@ -52,7 +52,7 @@ type Timex struct { Errcnt int32 Stbcnt int32 Tai int32 - Pad_cgo_0 [44]byte + _ [44]byte } type Time_t int32 @@ -116,29 +116,59 @@ type Stat_t struct { } type Statfs_t struct { - Type int32 - Bsize int32 - Frsize int32 - Pad_cgo_0 [4]byte - Blocks uint64 - Bfree uint64 - Files uint64 - Ffree uint64 - Bavail uint64 
- Fsid Fsid - Namelen int32 - Flags int32 - Spare [5]int32 - Pad_cgo_1 [4]byte + Type int32 + Bsize int32 + Frsize int32 + _ [4]byte + Blocks uint64 + Bfree uint64 + Files uint64 + Ffree uint64 + Bavail uint64 + Fsid Fsid + Namelen int32 + Flags int32 + Spare [5]int32 + _ [4]byte +} + +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 } type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]int8 + _ [5]byte } type Fsid struct { @@ -146,13 +176,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -227,11 +257,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -344,7 +383,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -379,6 +418,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -399,97 +439,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR 
= 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -568,9 +634,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [2]byte - Filter *SockFilter + Len uint16 + _ [2]byte + Filter *SockFilter } type InotifyEvent struct { @@ -614,12 +680,12 @@ type Sysinfo_t struct { } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename 
[65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { @@ -637,8 +703,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -667,8 +740,6 @@ const RNDGETENTCNT = 0x40045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -686,3 +757,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint32 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x20 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go 
b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go index 92aac5d93..cf4e2bd29 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64.go @@ -33,13 +33,13 @@ type Timeval struct { type Timex struct { Modes uint32 - Pad_cgo_0 [4]byte + _ [4]byte Offset int64 Freq int64 Maxerror int64 Esterror int64 Status int32 - Pad_cgo_1 [4]byte + _ [4]byte Constant int64 Precision int64 Tolerance int64 @@ -48,14 +48,14 @@ type Timex struct { Ppsfreq int64 Jitter int64 Shift int32 - Pad_cgo_2 [4]byte + _ [4]byte Stabil int64 Jitcnt int64 Calcnt int64 Errcnt int64 Stbcnt int64 Tai int32 - Pad_cgo_3 [44]byte + _ [44]byte } type Time_t int64 @@ -132,13 +132,43 @@ type Statfs_t struct { Spare [5]int64 } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]int8 + _ [5]byte } type Fsid struct { @@ -146,13 +176,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -227,11 +257,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -298,13 +337,13 @@ type PacketMreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen uint64 Control *byte Controllen uint64 Flags int32 - Pad_cgo_1 [4]byte + _ [4]byte } type Cmsghdr struct { @@ -346,7 +385,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -381,6 +420,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -401,97 +441,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - 
RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO 
= 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -570,9 +636,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter + Len uint16 + _ [6]byte + Filter *SockFilter } type InotifyEvent struct { @@ -609,30 +675,30 @@ type Sysinfo_t struct { Freeswap uint64 Procs uint16 Pad uint16 - Pad_cgo_0 [4]byte + _ [4]byte Totalhigh uint64 Freehigh uint64 Unit uint32 X_f [0]int8 - Pad_cgo_1 [4]byte + _ [4]byte } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_1 [4]byte + Tfree int32 + _ [4]byte + Tinode uint64 + Fname [6]int8 + Fpack [6]int8 + _ [4]byte } type EpollEvent struct { @@ -642,8 +708,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -672,8 +745,6 @@ const RNDGETENTCNT = 0x40045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -691,3 +762,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + 
CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint64 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x40 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go index 623f58127..b8da482ca 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mips64le.go @@ -33,13 +33,13 @@ type Timeval struct { type Timex struct { Modes uint32 - Pad_cgo_0 [4]byte + _ [4]byte Offset int64 Freq int64 Maxerror int64 Esterror int64 Status int32 - Pad_cgo_1 [4]byte + _ [4]byte Constant int64 Precision int64 Tolerance int64 @@ -48,14 +48,14 @@ type Timex struct { Ppsfreq int64 Jitter int64 Shift int32 - Pad_cgo_2 [4]byte + _ [4]byte Stabil int64 Jitcnt int64 Calcnt int64 Errcnt int64 Stbcnt int64 Tai int32 - Pad_cgo_3 [44]byte + _ [44]byte } type Time_t int64 @@ -132,13 +132,43 @@ type Statfs_t struct { Spare [5]int64 } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]int8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]int8 + _ [5]byte } type Fsid struct { @@ -146,13 +176,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -227,11 +257,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -298,13 +337,13 @@ type PacketMreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen uint64 Control *byte Controllen uint64 Flags int32 - Pad_cgo_1 [4]byte + _ [4]byte } type Cmsghdr struct { @@ -346,7 +385,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options 
uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -381,6 +420,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -401,97 +441,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + 
IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -570,9 +636,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter + Len uint16 + _ [6]byte + Filter *SockFilter } type InotifyEvent struct { @@ -609,30 +675,30 @@ type Sysinfo_t struct { Freeswap uint64 Procs uint16 Pad uint16 - Pad_cgo_0 [4]byte + _ [4]byte Totalhigh uint64 Freehigh uint64 Unit uint32 X_f [0]int8 - Pad_cgo_1 [4]byte + _ [4]byte } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_1 [4]byte + Tfree int32 + _ [4]byte + Tinode uint64 + Fname [6]int8 + Fpack [6]int8 + _ [4]byte } type EpollEvent struct { @@ -642,8 +708,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -672,8 +745,6 @@ const RNDGETENTCNT = 0x40045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -691,3 +762,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + 
Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint64 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x40 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go b/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go index 56598a1bf..7106b512d 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_mipsle.go @@ -52,7 +52,7 @@ type Timex struct { Errcnt int32 Stbcnt int32 Tai int32 - Pad_cgo_0 [44]byte + _ [44]byte } type Time_t int32 @@ -116,29 +116,59 @@ type Stat_t struct { } type Statfs_t struct { - Type int32 - Bsize int32 - Frsize int32 - Pad_cgo_0 [4]byte - Blocks uint64 - Bfree uint64 - Files uint64 - Ffree uint64 - Bavail uint64 - Fsid Fsid - Namelen int32 - Flags int32 - Spare [5]int32 - Pad_cgo_1 [4]byte + Type int32 + Bsize int32 + Frsize int32 + _ [4]byte + Blocks uint64 + Bfree uint64 + Files uint64 + Ffree uint64 + Bavail uint64 + Fsid Fsid + Namelen int32 + Flags int32 + Spare [5]int32 + _ [4]byte +} + +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 } type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name 
[256]int8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]int8 + _ [5]byte } type Fsid struct { @@ -146,13 +176,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -227,11 +257,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -344,7 +383,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -379,6 +418,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -399,97 +439,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + 
IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -568,9 +634,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [2]byte - Filter *SockFilter + Len uint16 + _ [2]byte + Filter *SockFilter } type InotifyEvent struct { @@ -614,12 +680,12 @@ type Sysinfo_t struct { } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { @@ -637,8 +703,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -667,8 +740,6 @@ const RNDGETENTCNT = 0x40045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -686,3 +757,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total 
uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint32 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x20 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go index acc7c819d..319071c8b 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64.go @@ -33,13 +33,13 @@ type Timeval struct { type Timex struct { Modes uint32 - Pad_cgo_0 [4]byte + _ [4]byte Offset int64 Freq int64 Maxerror int64 Esterror int64 Status int32 - Pad_cgo_1 [4]byte + _ [4]byte Constant int64 Precision int64 Tolerance int64 @@ -48,14 +48,14 @@ type Timex struct { Ppsfreq int64 Jitter int64 Shift int32 - Pad_cgo_2 [4]byte + _ [4]byte Stabil int64 Jitcnt int64 Calcnt int64 Errcnt int64 Stbcnt int64 Tai int32 - Pad_cgo_3 [44]byte + _ [44]byte } type Time_t int64 @@ -133,13 +133,43 @@ type Statfs_t struct { Spare [4]int64 } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + 
Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]uint8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]uint8 + _ [5]byte } type Fsid struct { @@ -147,13 +177,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -228,11 +258,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -299,13 +338,13 @@ type PacketMreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen uint64 Control *byte Controllen uint64 Flags int32 - Pad_cgo_1 [4]byte + _ [4]byte } type Cmsghdr struct { @@ -347,7 +386,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -382,6 +421,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -402,97 +442,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 
0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -571,9 +637,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter + Len uint16 + _ [6]byte + Filter *SockFilter } type InotifyEvent struct { @@ -616,30 +682,30 @@ type Sysinfo_t struct { Freeswap uint64 Procs uint16 Pad uint16 - Pad_cgo_0 [4]byte + _ [4]byte Totalhigh uint64 Freehigh uint64 Unit uint32 X_f [0]uint8 - Pad_cgo_1 [4]byte + _ [4]byte } type Utsname struct { - Sysname [65]uint8 - Nodename [65]uint8 - Release [65]uint8 - Version [65]uint8 - Machine [65]uint8 - Domainname [65]uint8 + Sysname [65]byte + 
Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]uint8 - Fpack [6]uint8 - Pad_cgo_1 [4]byte + Tfree int32 + _ [4]byte + Tinode uint64 + Fname [6]uint8 + Fpack [6]uint8 + _ [4]byte } type EpollEvent struct { @@ -650,8 +716,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -680,8 +753,6 @@ const RNDGETENTCNT = 0x40045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -699,3 +770,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]uint8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint64 + +const ( + _CPU_SETSIZE = 
0x400 + _NCPUBITS = 0x40 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go index b348885c8..ef00ed498 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_ppc64le.go @@ -33,13 +33,13 @@ type Timeval struct { type Timex struct { Modes uint32 - Pad_cgo_0 [4]byte + _ [4]byte Offset int64 Freq int64 Maxerror int64 Esterror int64 Status int32 - Pad_cgo_1 [4]byte + _ [4]byte Constant int64 Precision int64 Tolerance int64 @@ -48,14 +48,14 @@ type Timex struct { Ppsfreq int64 Jitter int64 Shift int32 - Pad_cgo_2 [4]byte + _ [4]byte Stabil int64 Jitcnt int64 Calcnt int64 Errcnt int64 Stbcnt int64 Tai int32 - Pad_cgo_3 [44]byte + _ [44]byte } type Time_t int64 @@ -133,13 +133,43 @@ type Statfs_t struct { Spare [4]int64 } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + X__reserved int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Type uint8 - Name [256]uint8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Type uint8 + Name [256]uint8 + _ [5]byte } type Fsid struct { @@ -147,13 +177,13 @@ type Fsid struct { } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Pid int32 - Pad_cgo_1 [4]byte + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Pid int32 + _ [4]byte } type FscryptPolicy struct { @@ -228,11 +258,20 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { - Family uint16 - Pad_cgo_0 [2]byte - Ifindex int32 - Addr [8]byte + Family uint16 + _ [2]byte + Ifindex int32 + Addr [8]byte } type RawSockaddrALG struct { @@ -299,13 +338,13 @@ type PacketMreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen uint64 Control *byte Controllen uint64 Flags int32 - Pad_cgo_1 [4]byte + _ [4]byte } type Cmsghdr struct { @@ -347,7 +386,7 @@ type TCPInfo struct { Probes uint8 Backoff uint8 Options uint8 - Pad_cgo_0 [2]byte + _ [2]byte Rto uint32 Ato uint32 Snd_mss uint32 @@ -382,6 +421,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -402,97 +442,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS 
= 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + 
RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -571,9 +637,9 @@ type SockFilter struct { } type SockFprog struct { - Len uint16 - Pad_cgo_0 [6]byte - Filter *SockFilter + Len uint16 + _ [6]byte + Filter *SockFilter } type InotifyEvent struct { @@ -616,30 +682,30 @@ type Sysinfo_t struct { Freeswap uint64 Procs uint16 Pad uint16 - Pad_cgo_0 [4]byte + _ [4]byte Totalhigh uint64 Freehigh uint64 Unit uint32 X_f [0]uint8 - Pad_cgo_1 [4]byte + _ [4]byte } type Utsname struct { - Sysname [65]uint8 - Nodename [65]uint8 - Release [65]uint8 - Version [65]uint8 - Machine [65]uint8 - Domainname [65]uint8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { - Tfree int32 - Pad_cgo_0 [4]byte - Tinode uint64 - Fname [6]uint8 - Fpack [6]uint8 - Pad_cgo_1 [4]byte + Tfree int32 + _ [4]byte + Tinode uint64 + Fname [6]uint8 + Fpack [6]uint8 + _ [4]byte } type EpollEvent struct { @@ -650,8 +716,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -680,8 +753,6 @@ const RNDGETENTCNT = 0x40045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -699,3 +770,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]uint8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + 
Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 0x2 +) + +type cpuMask uint64 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x40 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go b/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go index a706e2f8c..e9ee49706 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_s390x.go @@ -132,6 +132,36 @@ type Statfs_t struct { _ [4]byte } +type StatxTimestamp struct { + Sec int64 + Nsec uint32 + _ int32 +} + +type Statx_t struct { + Mask uint32 + Blksize uint32 + Attributes uint64 + Nlink uint32 + Uid uint32 + Gid uint32 + Mode uint16 + _ [1]uint16 + Ino uint64 + Size uint64 + Blocks uint64 + Attributes_mask uint64 + Atime StatxTimestamp + Btime StatxTimestamp + Ctime StatxTimestamp + Mtime StatxTimestamp + Rdev_major uint32 + Rdev_minor uint32 + Dev_major uint32 + Dev_minor uint32 + _ [14]uint64 +} + type Dirent struct { Ino uint64 Off int64 @@ -227,6 +257,15 @@ type RawSockaddrHCI struct { Channel uint16 } +type RawSockaddrL2 struct { + Family uint16 + Psm uint16 + Bdaddr [6]uint8 + Cid uint16 + Bdaddr_type uint8 + _ [1]byte +} + type RawSockaddrCAN struct { Family uint16 _ [2]byte @@ -381,6 +420,7 @@ const ( SizeofSockaddrLinklayer = 0x14 SizeofSockaddrNetlink = 0xc SizeofSockaddrHCI = 0x6 + SizeofSockaddrL2 = 0xe SizeofSockaddrCAN = 0x10 SizeofSockaddrALG = 0x58 SizeofSockaddrVM = 0x10 @@ -401,97 +441,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2b - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 
0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + 
SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -642,12 +708,12 @@ type Sysinfo_t struct { } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { @@ -667,8 +733,15 @@ type EpollEvent struct { } const ( - AT_FDCWD = -0x64 - AT_REMOVEDIR = 0x200 + AT_EMPTY_PATH = 0x1000 + AT_FDCWD = -0x64 + AT_NO_AUTOMOUNT = 0x800 + AT_REMOVEDIR = 0x200 + + AT_STATX_SYNC_AS_STAT = 0x0 + AT_STATX_FORCE_SYNC = 0x2000 + AT_STATX_DONT_SYNC = 0x4000 + AT_SYMLINK_FOLLOW = 0x400 AT_SYMLINK_NOFOLLOW = 0x100 ) @@ -697,8 +770,6 @@ const RNDGETENTCNT = 0x80045200 const PERF_IOC_FLAG_GROUP = 0x1 -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 @@ -716,3 +787,135 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type Taskstats struct { + Version uint16 + _ [2]byte + Ac_exitcode uint32 + Ac_flag uint8 + Ac_nice uint8 + _ [6]byte + Cpu_count uint64 + Cpu_delay_total uint64 + Blkio_count uint64 + Blkio_delay_total uint64 + Swapin_count uint64 + Swapin_delay_total uint64 + Cpu_run_real_total uint64 + Cpu_run_virtual_total uint64 + Ac_comm [32]int8 + Ac_sched uint8 + Ac_pad [3]uint8 + _ [4]byte + Ac_uid uint32 + Ac_gid uint32 + Ac_pid uint32 + Ac_ppid uint32 + Ac_btime uint32 + _ [4]byte + Ac_etime uint64 + Ac_utime uint64 + Ac_stime uint64 + Ac_minflt uint64 + Ac_majflt uint64 + Coremem uint64 + Virtmem uint64 + Hiwater_rss uint64 + Hiwater_vm uint64 + Read_char uint64 + Write_char uint64 + Read_syscalls uint64 + Write_syscalls uint64 + Read_bytes uint64 + Write_bytes uint64 + Cancelled_write_bytes uint64 + Nvcsw uint64 + Nivcsw uint64 + Ac_utimescaled uint64 + Ac_stimescaled uint64 + Cpu_scaled_run_real_total uint64 + Freepages_count uint64 + Freepages_delay_total uint64 +} + +const ( + TASKSTATS_CMD_UNSPEC = 0x0 + TASKSTATS_CMD_GET = 0x1 + TASKSTATS_CMD_NEW = 0x2 + TASKSTATS_TYPE_UNSPEC = 0x0 + TASKSTATS_TYPE_PID = 0x1 + TASKSTATS_TYPE_TGID = 0x2 + TASKSTATS_TYPE_STATS = 0x3 + TASKSTATS_TYPE_AGGR_PID = 0x4 + TASKSTATS_TYPE_AGGR_TGID = 0x5 + TASKSTATS_TYPE_NULL = 0x6 + TASKSTATS_CMD_ATTR_UNSPEC = 0x0 + TASKSTATS_CMD_ATTR_PID = 0x1 + TASKSTATS_CMD_ATTR_TGID = 0x2 + TASKSTATS_CMD_ATTR_REGISTER_CPUMASK = 0x3 + TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK = 0x4 +) + +type CGroupStats struct { + Sleeping uint64 + Running uint64 + Stopped uint64 + Uninterruptible uint64 + Io_wait uint64 +} + +const ( + CGROUPSTATS_CMD_UNSPEC = 0x3 + CGROUPSTATS_CMD_GET = 0x4 + CGROUPSTATS_CMD_NEW = 0x5 + CGROUPSTATS_TYPE_UNSPEC = 0x0 + CGROUPSTATS_TYPE_CGROUP_STATS = 0x1 + CGROUPSTATS_CMD_ATTR_UNSPEC = 0x0 + CGROUPSTATS_CMD_ATTR_FD = 0x1 +) + +type Genlmsghdr struct { + Cmd uint8 + Version uint8 + Reserved uint16 +} + +const ( + CTRL_CMD_UNSPEC = 0x0 + CTRL_CMD_NEWFAMILY = 0x1 + CTRL_CMD_DELFAMILY = 0x2 + CTRL_CMD_GETFAMILY = 0x3 + CTRL_CMD_NEWOPS = 0x4 + CTRL_CMD_DELOPS = 0x5 + CTRL_CMD_GETOPS = 0x6 + CTRL_CMD_NEWMCAST_GRP = 0x7 + CTRL_CMD_DELMCAST_GRP = 0x8 + CTRL_CMD_GETMCAST_GRP = 0x9 + CTRL_ATTR_UNSPEC = 0x0 + CTRL_ATTR_FAMILY_ID = 0x1 + CTRL_ATTR_FAMILY_NAME = 0x2 + CTRL_ATTR_VERSION = 0x3 + CTRL_ATTR_HDRSIZE = 0x4 + CTRL_ATTR_MAXATTR = 0x5 + CTRL_ATTR_OPS = 0x6 + CTRL_ATTR_MCAST_GROUPS = 0x7 + CTRL_ATTR_OP_UNSPEC = 0x0 + CTRL_ATTR_OP_ID = 0x1 + CTRL_ATTR_OP_FLAGS = 0x2 + CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 + CTRL_ATTR_MCAST_GRP_NAME = 0x1 + CTRL_ATTR_MCAST_GRP_ID = 
0x2 +) + +type cpuMask uint64 + +const ( + _CPU_SETSIZE = 0x400 + _NCPUBITS = 0x40 +) + +const ( + BDADDR_BREDR = 0x0 + BDADDR_LE_PUBLIC = 0x1 + BDADDR_LE_RANDOM = 0x2 +) diff --git a/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go b/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go index 22bdab961..8e7384b89 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_linux_sparc64.go @@ -376,97 +376,123 @@ const ( ) const ( - IFA_UNSPEC = 0x0 - IFA_ADDRESS = 0x1 - IFA_LOCAL = 0x2 - IFA_LABEL = 0x3 - IFA_BROADCAST = 0x4 - IFA_ANYCAST = 0x5 - IFA_CACHEINFO = 0x6 - IFA_MULTICAST = 0x7 - IFLA_UNSPEC = 0x0 - IFLA_ADDRESS = 0x1 - IFLA_BROADCAST = 0x2 - IFLA_IFNAME = 0x3 - IFLA_MTU = 0x4 - IFLA_LINK = 0x5 - IFLA_QDISC = 0x6 - IFLA_STATS = 0x7 - IFLA_COST = 0x8 - IFLA_PRIORITY = 0x9 - IFLA_MASTER = 0xa - IFLA_WIRELESS = 0xb - IFLA_PROTINFO = 0xc - IFLA_TXQLEN = 0xd - IFLA_MAP = 0xe - IFLA_WEIGHT = 0xf - IFLA_OPERSTATE = 0x10 - IFLA_LINKMODE = 0x11 - IFLA_LINKINFO = 0x12 - IFLA_NET_NS_PID = 0x13 - IFLA_IFALIAS = 0x14 - IFLA_MAX = 0x2a - RT_SCOPE_UNIVERSE = 0x0 - RT_SCOPE_SITE = 0xc8 - RT_SCOPE_LINK = 0xfd - RT_SCOPE_HOST = 0xfe - RT_SCOPE_NOWHERE = 0xff - RT_TABLE_UNSPEC = 0x0 - RT_TABLE_COMPAT = 0xfc - RT_TABLE_DEFAULT = 0xfd - RT_TABLE_MAIN = 0xfe - RT_TABLE_LOCAL = 0xff - RT_TABLE_MAX = 0xffffffff - RTA_UNSPEC = 0x0 - RTA_DST = 0x1 - RTA_SRC = 0x2 - RTA_IIF = 0x3 - RTA_OIF = 0x4 - RTA_GATEWAY = 0x5 - RTA_PRIORITY = 0x6 - RTA_PREFSRC = 0x7 - RTA_METRICS = 0x8 - RTA_MULTIPATH = 0x9 - RTA_FLOW = 0xb - RTA_CACHEINFO = 0xc - RTA_TABLE = 0xf - RTN_UNSPEC = 0x0 - RTN_UNICAST = 0x1 - RTN_LOCAL = 0x2 - RTN_BROADCAST = 0x3 - RTN_ANYCAST = 0x4 - RTN_MULTICAST = 0x5 - RTN_BLACKHOLE = 0x6 - RTN_UNREACHABLE = 0x7 - RTN_PROHIBIT = 0x8 - RTN_THROW = 0x9 - RTN_NAT = 0xa - RTN_XRESOLVE = 0xb - RTNLGRP_NONE = 0x0 - RTNLGRP_LINK = 0x1 - RTNLGRP_NOTIFY = 0x2 - RTNLGRP_NEIGH = 0x3 - RTNLGRP_TC = 0x4 - RTNLGRP_IPV4_IFADDR = 0x5 - RTNLGRP_IPV4_MROUTE = 0x6 - RTNLGRP_IPV4_ROUTE = 0x7 - RTNLGRP_IPV4_RULE = 0x8 - RTNLGRP_IPV6_IFADDR = 0x9 - RTNLGRP_IPV6_MROUTE = 0xa - RTNLGRP_IPV6_ROUTE = 0xb - RTNLGRP_IPV6_IFINFO = 0xc - RTNLGRP_IPV6_PREFIX = 0x12 - RTNLGRP_IPV6_RULE = 0x13 - RTNLGRP_ND_USEROPT = 0x14 - SizeofNlMsghdr = 0x10 - SizeofNlMsgerr = 0x14 - SizeofRtGenmsg = 0x1 - SizeofNlAttr = 0x4 - SizeofRtAttr = 0x4 - SizeofIfInfomsg = 0x10 - SizeofIfAddrmsg = 0x8 - SizeofRtMsg = 0xc - SizeofRtNexthop = 0x8 + IFA_UNSPEC = 0x0 + IFA_ADDRESS = 0x1 + IFA_LOCAL = 0x2 + IFA_LABEL = 0x3 + IFA_BROADCAST = 0x4 + IFA_ANYCAST = 0x5 + IFA_CACHEINFO = 0x6 + IFA_MULTICAST = 0x7 + IFLA_UNSPEC = 0x0 + IFLA_ADDRESS = 0x1 + IFLA_BROADCAST = 0x2 + IFLA_IFNAME = 0x3 + IFLA_MTU = 0x4 + IFLA_LINK = 0x5 + IFLA_QDISC = 0x6 + IFLA_STATS = 0x7 + IFLA_COST = 0x8 + IFLA_PRIORITY = 0x9 + IFLA_MASTER = 0xa + IFLA_WIRELESS = 0xb + IFLA_PROTINFO = 0xc + IFLA_TXQLEN = 0xd + IFLA_MAP = 0xe + IFLA_WEIGHT = 0xf + IFLA_OPERSTATE = 0x10 + IFLA_LINKMODE = 0x11 + IFLA_LINKINFO = 0x12 + IFLA_NET_NS_PID = 0x13 + IFLA_IFALIAS = 0x14 + IFLA_NUM_VF = 0x15 + IFLA_VFINFO_LIST = 0x16 + IFLA_STATS64 = 0x17 + IFLA_VF_PORTS = 0x18 + IFLA_PORT_SELF = 0x19 + IFLA_AF_SPEC = 0x1a + IFLA_GROUP = 0x1b + IFLA_NET_NS_FD = 0x1c + IFLA_EXT_MASK = 0x1d + IFLA_PROMISCUITY = 0x1e + IFLA_NUM_TX_QUEUES = 0x1f + IFLA_NUM_RX_QUEUES = 0x20 + IFLA_CARRIER = 0x21 + IFLA_PHYS_PORT_ID = 0x22 + IFLA_CARRIER_CHANGES = 0x23 + IFLA_PHYS_SWITCH_ID = 0x24 + IFLA_LINK_NETNSID = 0x25 + IFLA_PHYS_PORT_NAME = 0x26 + IFLA_PROTO_DOWN = 0x27 + 
IFLA_GSO_MAX_SEGS = 0x28 + IFLA_GSO_MAX_SIZE = 0x29 + IFLA_PAD = 0x2a + IFLA_XDP = 0x2b + IFLA_EVENT = 0x2c + IFLA_NEW_NETNSID = 0x2d + IFLA_IF_NETNSID = 0x2e + IFLA_MAX = 0x2e + RT_SCOPE_UNIVERSE = 0x0 + RT_SCOPE_SITE = 0xc8 + RT_SCOPE_LINK = 0xfd + RT_SCOPE_HOST = 0xfe + RT_SCOPE_NOWHERE = 0xff + RT_TABLE_UNSPEC = 0x0 + RT_TABLE_COMPAT = 0xfc + RT_TABLE_DEFAULT = 0xfd + RT_TABLE_MAIN = 0xfe + RT_TABLE_LOCAL = 0xff + RT_TABLE_MAX = 0xffffffff + RTA_UNSPEC = 0x0 + RTA_DST = 0x1 + RTA_SRC = 0x2 + RTA_IIF = 0x3 + RTA_OIF = 0x4 + RTA_GATEWAY = 0x5 + RTA_PRIORITY = 0x6 + RTA_PREFSRC = 0x7 + RTA_METRICS = 0x8 + RTA_MULTIPATH = 0x9 + RTA_FLOW = 0xb + RTA_CACHEINFO = 0xc + RTA_TABLE = 0xf + RTN_UNSPEC = 0x0 + RTN_UNICAST = 0x1 + RTN_LOCAL = 0x2 + RTN_BROADCAST = 0x3 + RTN_ANYCAST = 0x4 + RTN_MULTICAST = 0x5 + RTN_BLACKHOLE = 0x6 + RTN_UNREACHABLE = 0x7 + RTN_PROHIBIT = 0x8 + RTN_THROW = 0x9 + RTN_NAT = 0xa + RTN_XRESOLVE = 0xb + RTNLGRP_NONE = 0x0 + RTNLGRP_LINK = 0x1 + RTNLGRP_NOTIFY = 0x2 + RTNLGRP_NEIGH = 0x3 + RTNLGRP_TC = 0x4 + RTNLGRP_IPV4_IFADDR = 0x5 + RTNLGRP_IPV4_MROUTE = 0x6 + RTNLGRP_IPV4_ROUTE = 0x7 + RTNLGRP_IPV4_RULE = 0x8 + RTNLGRP_IPV6_IFADDR = 0x9 + RTNLGRP_IPV6_MROUTE = 0xa + RTNLGRP_IPV6_ROUTE = 0xb + RTNLGRP_IPV6_IFINFO = 0xc + RTNLGRP_IPV6_PREFIX = 0x12 + RTNLGRP_IPV6_RULE = 0x13 + RTNLGRP_ND_USEROPT = 0x14 + SizeofNlMsghdr = 0x10 + SizeofNlMsgerr = 0x14 + SizeofRtGenmsg = 0x1 + SizeofNlAttr = 0x4 + SizeofRtAttr = 0x4 + SizeofIfInfomsg = 0x10 + SizeofIfAddrmsg = 0x8 + SizeofRtMsg = 0xc + SizeofRtNexthop = 0x8 ) type NlMsghdr struct { @@ -601,12 +627,12 @@ type Sysinfo_t struct { } type Utsname struct { - Sysname [65]int8 - Nodename [65]int8 - Release [65]int8 - Version [65]int8 - Machine [65]int8 - Domainname [65]int8 + Sysname [65]byte + Nodename [65]byte + Release [65]byte + Version [65]byte + Machine [65]byte + Domainname [65]byte } type Ustat_t struct { @@ -652,8 +678,6 @@ type Sigset_t struct { X__val [16]uint64 } -const _SC_PAGESIZE = 0x1e - type Termios struct { Iflag uint32 Oflag uint32 diff --git a/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go b/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go index caf755fb8..4b86fb2b3 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go +++ b/vendor/golang.org/x/sys/unix/ztypes_netbsd_386.go @@ -1,5 +1,5 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_netbsd.go +// cgo -godefs types_netbsd.go | go run mkpost.go +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build 386,netbsd @@ -99,6 +99,19 @@ type Fsid struct { X__fsid_val [2]int32 } +const ( + PathMax = 0x400 +) + +const ( + FADV_NORMAL = 0x0 + FADV_RANDOM = 0x1 + FADV_SEQUENTIAL = 0x2 + FADV_WILLNEED = 0x3 + FADV_DONTNEED = 0x4 + FADV_NOREUSE = 0x5 +) + type RawSockaddrInet4 struct { Len uint8 Family uint8 @@ -382,6 +395,37 @@ type Termios struct { Ospeed int32 } +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +const ( + AT_FDCWD = -0x64 + AT_SYMLINK_NOFOLLOW = 0x200 +) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + type Sysctlnode struct { Flags uint32 Num int32 @@ -394,3 +438,11 @@ type Sysctlnode struct { X_sysctl_parent [8]byte X_sysctl_desc [8]byte } + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go index 91b4a5305..9048a509d 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_netbsd_amd64.go @@ -1,5 +1,5 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_netbsd.go +// cgo -godefs types_netbsd.go | go run mkpost.go +// Code generated by the command above; see README.md. DO NOT EDIT. // +build amd64,netbsd @@ -103,6 +103,19 @@ type Fsid struct { X__fsid_val [2]int32 } +const ( + PathMax = 0x400 +) + +const ( + FADV_NORMAL = 0x0 + FADV_RANDOM = 0x1 + FADV_SEQUENTIAL = 0x2 + FADV_WILLNEED = 0x3 + FADV_DONTNEED = 0x4 + FADV_NOREUSE = 0x5 +) + type RawSockaddrInet4 struct { Len uint8 Family uint8 @@ -389,6 +402,37 @@ type Termios struct { Ospeed int32 } +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +const ( + AT_FDCWD = -0x64 + AT_SYMLINK_NOFOLLOW = 0x200 +) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + type Sysctlnode struct { Flags uint32 Num int32 @@ -401,3 +445,11 @@ type Sysctlnode struct { X_sysctl_parent [8]byte X_sysctl_desc [8]byte } + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go b/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go index c0758f9d3..00525e7b0 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/ztypes_netbsd_arm.go @@ -1,5 +1,5 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_netbsd.go +// cgo -godefs types_netbsd.go | go run mkpost.go +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build arm,netbsd @@ -104,6 +104,19 @@ type Fsid struct { X__fsid_val [2]int32 } +const ( + PathMax = 0x400 +) + +const ( + FADV_NORMAL = 0x0 + FADV_RANDOM = 0x1 + FADV_SEQUENTIAL = 0x2 + FADV_WILLNEED = 0x3 + FADV_DONTNEED = 0x4 + FADV_NOREUSE = 0x5 +) + type RawSockaddrInet4 struct { Len uint8 Family uint8 @@ -387,6 +400,37 @@ type Termios struct { Ospeed int32 } +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +const ( + AT_FDCWD = -0x64 + AT_SYMLINK_NOFOLLOW = 0x200 +) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + type Sysctlnode struct { Flags uint32 Num int32 @@ -399,3 +443,11 @@ type Sysctlnode struct { X_sysctl_parent [8]byte X_sysctl_desc [8]byte } + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go index 860a46979..d5a2d75da 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_386.go @@ -1,5 +1,5 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_openbsd.go +// cgo -godefs types_openbsd.go | go run mkpost.go +// Code generated by the command above; see README.md. DO NOT EDIT. // +build 386,openbsd @@ -140,6 +140,10 @@ type Fsid struct { Val [2]int32 } +const ( + PathMax = 0x400 +) + type RawSockaddrInet4 struct { Len uint8 Family uint8 @@ -439,3 +443,42 @@ type Termios struct { Ispeed int32 Ospeed int32 } + +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +const ( + AT_FDCWD = -0x64 + AT_SYMLINK_NOFOLLOW = 0x2 +) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go index 23c52727f..d53141085 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_amd64.go @@ -1,5 +1,5 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_openbsd.go +// cgo -godefs types_openbsd.go | go run mkpost.go +// Code generated by the command above; see README.md. DO NOT EDIT. 
// +build amd64,openbsd @@ -142,6 +142,10 @@ type Fsid struct { Val [2]int32 } +const ( + PathMax = 0x400 +) + type RawSockaddrInet4 struct { Len uint8 Family uint8 @@ -446,3 +450,42 @@ type Termios struct { Ispeed int32 Ospeed int32 } + +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +const ( + AT_FDCWD = -0x64 + AT_SYMLINK_NOFOLLOW = 0x2 +) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go b/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go index f960f6c0d..e35b13b6f 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go +++ b/vendor/golang.org/x/sys/unix/ztypes_openbsd_arm.go @@ -1,5 +1,5 @@ -// Created by cgo -godefs - DO NOT EDIT -// cgo -godefs types_openbsd.go +// cgo -godefs types_openbsd.go | go run mkpost.go +// Code generated by the command above; see README.md. DO NOT EDIT. // +build arm,openbsd @@ -140,6 +140,10 @@ type Fsid struct { Val [2]int32 } +const ( + PathMax = 0x400 +) + type RawSockaddrInet4 struct { Len uint8 Family uint8 @@ -432,3 +436,42 @@ type Termios struct { Ispeed int32 Ospeed int32 } + +type Winsize struct { + Row uint16 + Col uint16 + Xpixel uint16 + Ypixel uint16 +} + +const ( + AT_FDCWD = -0x64 + AT_SYMLINK_NOFOLLOW = 0x2 +) + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) + +type Utsname struct { + Sysname [256]byte + Nodename [256]byte + Release [256]byte + Version [256]byte + Machine [256]byte +} diff --git a/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go b/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go index 92336f9f9..2248598d0 100644 --- a/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go +++ b/vendor/golang.org/x/sys/unix/ztypes_solaris_amd64.go @@ -93,40 +93,40 @@ const ( ) type Stat_t struct { - Dev uint64 - Ino uint64 - Mode uint32 - Nlink uint32 - Uid uint32 - Gid uint32 - Rdev uint64 - Size int64 - Atim Timespec - Mtim Timespec - Ctim Timespec - Blksize int32 - Pad_cgo_0 [4]byte - Blocks int64 - Fstype [16]int8 + Dev uint64 + Ino uint64 + Mode uint32 + Nlink uint32 + Uid uint32 + Gid uint32 + Rdev uint64 + Size int64 + Atim Timespec + Mtim Timespec + Ctim Timespec + Blksize int32 + _ [4]byte + Blocks int64 + Fstype [16]int8 } type Flock_t struct { - Type int16 - Whence int16 - Pad_cgo_0 [4]byte - Start int64 - Len int64 - Sysid int32 - Pid int32 - Pad [4]int64 + Type int16 + Whence int16 + _ [4]byte + Start int64 + Len int64 + Sysid int32 + Pid int32 + Pad [4]int64 } type Dirent struct { - Ino uint64 - Off int64 - Reclen uint16 - Name [1]int8 - Pad_cgo_0 [5]byte + Ino uint64 + Off int64 + Reclen uint16 + Name [1]int8 + _ [5]byte } type _Fsblkcnt_t uint64 @@ -213,13 +213,13 @@ type IPv6Mreq struct { type Msghdr struct { Name *byte Namelen uint32 - Pad_cgo_0 [4]byte + _ [4]byte Iov *Iovec Iovlen int32 - Pad_cgo_1 [4]byte + _ [4]byte Accrights *int8 Accrightslen int32 - Pad_cgo_2 [4]byte + _ [4]byte } type Cmsghdr struct { @@ -263,19 +263,19 @@ type FdSet struct { } type Utsname 
struct { - Sysname [257]int8 - Nodename [257]int8 - Release [257]int8 - Version [257]int8 - Machine [257]int8 + Sysname [257]byte + Nodename [257]byte + Release [257]byte + Version [257]byte + Machine [257]byte } type Ustat_t struct { - Tfree int64 - Tinode uint64 - Fname [6]int8 - Fpack [6]int8 - Pad_cgo_0 [4]byte + Tfree int64 + Tinode uint64 + Fname [6]int8 + Fpack [6]int8 + _ [4]byte } const ( @@ -295,21 +295,21 @@ const ( ) type IfMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Data IfData + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Data IfData } type IfData struct { Type uint8 Addrlen uint8 Hdrlen uint8 - Pad_cgo_0 [1]byte + _ [1]byte Mtu uint32 Metric uint32 Baudrate uint32 @@ -328,30 +328,30 @@ type IfData struct { } type IfaMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Addrs int32 - Flags int32 - Index uint16 - Pad_cgo_0 [2]byte - Metric int32 + Msglen uint16 + Version uint8 + Type uint8 + Addrs int32 + Flags int32 + Index uint16 + _ [2]byte + Metric int32 } type RtMsghdr struct { - Msglen uint16 - Version uint8 - Type uint8 - Index uint16 - Pad_cgo_0 [2]byte - Flags int32 - Addrs int32 - Pid int32 - Seq int32 - Errno int32 - Use int32 - Inits uint32 - Rmx RtMetrics + Msglen uint16 + Version uint8 + Type uint8 + Index uint16 + _ [2]byte + Flags int32 + Addrs int32 + Pid int32 + Seq int32 + Errno int32 + Use int32 + Inits uint32 + Rmx RtMetrics } type RtMetrics struct { @@ -388,9 +388,9 @@ type BpfStat struct { } type BpfProgram struct { - Len uint32 - Pad_cgo_0 [4]byte - Insns *BpfInsn + Len uint32 + _ [4]byte + Insns *BpfInsn } type BpfInsn struct { @@ -406,32 +406,30 @@ type BpfTimeval struct { } type BpfHdr struct { - Tstamp BpfTimeval - Caplen uint32 - Datalen uint32 - Hdrlen uint16 - Pad_cgo_0 [2]byte + Tstamp BpfTimeval + Caplen uint32 + Datalen uint32 + Hdrlen uint16 + _ [2]byte } -const _SC_PAGESIZE = 0xb - type Termios struct { - Iflag uint32 - Oflag uint32 - Cflag uint32 - Lflag uint32 - Cc [19]uint8 - Pad_cgo_0 [1]byte + Iflag uint32 + Oflag uint32 + Cflag uint32 + Lflag uint32 + Cc [19]uint8 + _ [1]byte } type Termio struct { - Iflag uint16 - Oflag uint16 - Cflag uint16 - Lflag uint16 - Line int8 - Cc [8]uint8 - Pad_cgo_0 [1]byte + Iflag uint16 + Oflag uint16 + Cflag uint16 + Lflag uint16 + Line int8 + Cc [8]uint8 + _ [1]byte } type Winsize struct { @@ -440,3 +438,22 @@ type Winsize struct { Xpixel uint16 Ypixel uint16 } + +type PollFd struct { + Fd int32 + Events int16 + Revents int16 +} + +const ( + POLLERR = 0x8 + POLLHUP = 0x10 + POLLIN = 0x1 + POLLNVAL = 0x20 + POLLOUT = 0x4 + POLLPRI = 0x2 + POLLRDBAND = 0x80 + POLLRDNORM = 0x40 + POLLWRBAND = 0x100 + POLLWRNORM = 0x4 +) diff --git a/vendor/golang.org/x/sys/windows/asm_windows_386.s b/vendor/golang.org/x/sys/windows/asm_windows_386.s new file mode 100644 index 000000000..1c20dd2f8 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/asm_windows_386.s @@ -0,0 +1,13 @@ +// Copyright 2009 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// +// System calls for 386, Windows are implemented in runtime/syscall_windows.goc +// + +TEXT ·getprocaddress(SB), 7, $0-8 + JMP syscall·getprocaddress(SB) + +TEXT ·loadlibrary(SB), 7, $0-4 + JMP syscall·loadlibrary(SB) diff --git a/vendor/golang.org/x/sys/windows/asm_windows_amd64.s b/vendor/golang.org/x/sys/windows/asm_windows_amd64.s new file mode 100644 index 000000000..4d025ab55 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/asm_windows_amd64.s @@ -0,0 +1,13 @@ +// Copyright 2009 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +// System calls for amd64, Windows are implemented in runtime/syscall_windows.goc +// + +TEXT ·getprocaddress(SB), 7, $0-32 + JMP syscall·getprocaddress(SB) + +TEXT ·loadlibrary(SB), 7, $0-8 + JMP syscall·loadlibrary(SB) diff --git a/vendor/golang.org/x/sys/windows/dll_windows.go b/vendor/golang.org/x/sys/windows/dll_windows.go new file mode 100644 index 000000000..e92c05b21 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/dll_windows.go @@ -0,0 +1,378 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package windows + +import ( + "sync" + "sync/atomic" + "syscall" + "unsafe" +) + +// DLLError describes reasons for DLL load failures. +type DLLError struct { + Err error + ObjName string + Msg string +} + +func (e *DLLError) Error() string { return e.Msg } + +// Implemented in runtime/syscall_windows.goc; we provide jumps to them in our assembly file. +func loadlibrary(filename *uint16) (handle uintptr, err syscall.Errno) +func getprocaddress(handle uintptr, procname *uint8) (proc uintptr, err syscall.Errno) + +// A DLL implements access to a single DLL. +type DLL struct { + Name string + Handle Handle +} + +// LoadDLL loads DLL file into memory. +// +// Warning: using LoadDLL without an absolute path name is subject to +// DLL preloading attacks. To safely load a system DLL, use LazyDLL +// with System set to true, or use LoadLibraryEx directly. +func LoadDLL(name string) (dll *DLL, err error) { + namep, err := UTF16PtrFromString(name) + if err != nil { + return nil, err + } + h, e := loadlibrary(namep) + if e != 0 { + return nil, &DLLError{ + Err: e, + ObjName: name, + Msg: "Failed to load " + name + ": " + e.Error(), + } + } + d := &DLL{ + Name: name, + Handle: Handle(h), + } + return d, nil +} + +// MustLoadDLL is like LoadDLL but panics if load operation failes. +func MustLoadDLL(name string) *DLL { + d, e := LoadDLL(name) + if e != nil { + panic(e) + } + return d +} + +// FindProc searches DLL d for procedure named name and returns *Proc +// if found. It returns an error if search fails. +func (d *DLL) FindProc(name string) (proc *Proc, err error) { + namep, err := BytePtrFromString(name) + if err != nil { + return nil, err + } + a, e := getprocaddress(uintptr(d.Handle), namep) + if e != 0 { + return nil, &DLLError{ + Err: e, + ObjName: name, + Msg: "Failed to find " + name + " procedure in " + d.Name + ": " + e.Error(), + } + } + p := &Proc{ + Dll: d, + Name: name, + addr: a, + } + return p, nil +} + +// MustFindProc is like FindProc but panics if search fails. +func (d *DLL) MustFindProc(name string) *Proc { + p, e := d.FindProc(name) + if e != nil { + panic(e) + } + return p +} + +// Release unloads DLL d from memory. 
+func (d *DLL) Release() (err error) { + return FreeLibrary(d.Handle) +} + +// A Proc implements access to a procedure inside a DLL. +type Proc struct { + Dll *DLL + Name string + addr uintptr +} + +// Addr returns the address of the procedure represented by p. +// The return value can be passed to Syscall to run the procedure. +func (p *Proc) Addr() uintptr { + return p.addr +} + +//go:uintptrescapes + +// Call executes procedure p with arguments a. It will panic, if more than 15 arguments +// are supplied. +// +// The returned error is always non-nil, constructed from the result of GetLastError. +// Callers must inspect the primary return value to decide whether an error occurred +// (according to the semantics of the specific function being called) before consulting +// the error. The error will be guaranteed to contain windows.Errno. +func (p *Proc) Call(a ...uintptr) (r1, r2 uintptr, lastErr error) { + switch len(a) { + case 0: + return syscall.Syscall(p.Addr(), uintptr(len(a)), 0, 0, 0) + case 1: + return syscall.Syscall(p.Addr(), uintptr(len(a)), a[0], 0, 0) + case 2: + return syscall.Syscall(p.Addr(), uintptr(len(a)), a[0], a[1], 0) + case 3: + return syscall.Syscall(p.Addr(), uintptr(len(a)), a[0], a[1], a[2]) + case 4: + return syscall.Syscall6(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], 0, 0) + case 5: + return syscall.Syscall6(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], 0) + case 6: + return syscall.Syscall6(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5]) + case 7: + return syscall.Syscall9(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], 0, 0) + case 8: + return syscall.Syscall9(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], 0) + case 9: + return syscall.Syscall9(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8]) + case 10: + return syscall.Syscall12(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9], 0, 0) + case 11: + return syscall.Syscall12(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9], a[10], 0) + case 12: + return syscall.Syscall12(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9], a[10], a[11]) + case 13: + return syscall.Syscall15(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9], a[10], a[11], a[12], 0, 0) + case 14: + return syscall.Syscall15(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9], a[10], a[11], a[12], a[13], 0) + case 15: + return syscall.Syscall15(p.Addr(), uintptr(len(a)), a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9], a[10], a[11], a[12], a[13], a[14]) + default: + panic("Call " + p.Name + " with too many arguments " + itoa(len(a)) + ".") + } +} + +// A LazyDLL implements access to a single DLL. +// It will delay the load of the DLL until the first +// call to its Handle method or to one of its +// LazyProc's Addr method. +type LazyDLL struct { + Name string + + // System determines whether the DLL must be loaded from the + // Windows System directory, bypassing the normal DLL search + // path. + System bool + + mu sync.Mutex + dll *DLL // non nil once DLL is loaded +} + +// Load loads DLL file d.Name into memory. It returns an error if fails. +// Load will not try to load DLL, if it is already loaded into memory. 
+func (d *LazyDLL) Load() error { + // Non-racy version of: + // if d.dll != nil { + if atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&d.dll))) != nil { + return nil + } + d.mu.Lock() + defer d.mu.Unlock() + if d.dll != nil { + return nil + } + + // kernel32.dll is special, since it's where LoadLibraryEx comes from. + // The kernel already special-cases its name, so it's always + // loaded from system32. + var dll *DLL + var err error + if d.Name == "kernel32.dll" { + dll, err = LoadDLL(d.Name) + } else { + dll, err = loadLibraryEx(d.Name, d.System) + } + if err != nil { + return err + } + + // Non-racy version of: + // d.dll = dll + atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&d.dll)), unsafe.Pointer(dll)) + return nil +} + +// mustLoad is like Load but panics if search fails. +func (d *LazyDLL) mustLoad() { + e := d.Load() + if e != nil { + panic(e) + } +} + +// Handle returns d's module handle. +func (d *LazyDLL) Handle() uintptr { + d.mustLoad() + return uintptr(d.dll.Handle) +} + +// NewProc returns a LazyProc for accessing the named procedure in the DLL d. +func (d *LazyDLL) NewProc(name string) *LazyProc { + return &LazyProc{l: d, Name: name} +} + +// NewLazyDLL creates new LazyDLL associated with DLL file. +func NewLazyDLL(name string) *LazyDLL { + return &LazyDLL{Name: name} +} + +// NewLazySystemDLL is like NewLazyDLL, but will only +// search Windows System directory for the DLL if name is +// a base name (like "advapi32.dll"). +func NewLazySystemDLL(name string) *LazyDLL { + return &LazyDLL{Name: name, System: true} +} + +// A LazyProc implements access to a procedure inside a LazyDLL. +// It delays the lookup until the Addr method is called. +type LazyProc struct { + Name string + + mu sync.Mutex + l *LazyDLL + proc *Proc +} + +// Find searches DLL for procedure named p.Name. It returns +// an error if search fails. Find will not search procedure, +// if it is already found and loaded into memory. +func (p *LazyProc) Find() error { + // Non-racy version of: + // if p.proc == nil { + if atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&p.proc))) == nil { + p.mu.Lock() + defer p.mu.Unlock() + if p.proc == nil { + e := p.l.Load() + if e != nil { + return e + } + proc, e := p.l.dll.FindProc(p.Name) + if e != nil { + return e + } + // Non-racy version of: + // p.proc = proc + atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&p.proc)), unsafe.Pointer(proc)) + } + } + return nil +} + +// mustFind is like Find but panics if search fails. +func (p *LazyProc) mustFind() { + e := p.Find() + if e != nil { + panic(e) + } +} + +// Addr returns the address of the procedure represented by p. +// The return value can be passed to Syscall to run the procedure. +// It will panic if the procedure cannot be found. +func (p *LazyProc) Addr() uintptr { + p.mustFind() + return p.proc.Addr() +} + +//go:uintptrescapes + +// Call executes procedure p with arguments a. It will panic, if more than 15 arguments +// are supplied. It will also panic if the procedure cannot be found. +// +// The returned error is always non-nil, constructed from the result of GetLastError. +// Callers must inspect the primary return value to decide whether an error occurred +// (according to the semantics of the specific function being called) before consulting +// the error. The error will be guaranteed to contain windows.Errno. +func (p *LazyProc) Call(a ...uintptr) (r1, r2 uintptr, lastErr error) { + p.mustFind() + return p.proc.Call(a...) 
+} + +var canDoSearchSystem32Once struct { + sync.Once + v bool +} + +func initCanDoSearchSystem32() { + // https://msdn.microsoft.com/en-us/library/ms684179(v=vs.85).aspx says: + // "Windows 7, Windows Server 2008 R2, Windows Vista, and Windows + // Server 2008: The LOAD_LIBRARY_SEARCH_* flags are available on + // systems that have KB2533623 installed. To determine whether the + // flags are available, use GetProcAddress to get the address of the + // AddDllDirectory, RemoveDllDirectory, or SetDefaultDllDirectories + // function. If GetProcAddress succeeds, the LOAD_LIBRARY_SEARCH_* + // flags can be used with LoadLibraryEx." + canDoSearchSystem32Once.v = (modkernel32.NewProc("AddDllDirectory").Find() == nil) +} + +func canDoSearchSystem32() bool { + canDoSearchSystem32Once.Do(initCanDoSearchSystem32) + return canDoSearchSystem32Once.v +} + +func isBaseName(name string) bool { + for _, c := range name { + if c == ':' || c == '/' || c == '\\' { + return false + } + } + return true +} + +// loadLibraryEx wraps the Windows LoadLibraryEx function. +// +// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms684179(v=vs.85).aspx +// +// If name is not an absolute path, LoadLibraryEx searches for the DLL +// in a variety of automatic locations unless constrained by flags. +// See: https://msdn.microsoft.com/en-us/library/ff919712%28VS.85%29.aspx +func loadLibraryEx(name string, system bool) (*DLL, error) { + loadDLL := name + var flags uintptr + if system { + if canDoSearchSystem32() { + const LOAD_LIBRARY_SEARCH_SYSTEM32 = 0x00000800 + flags = LOAD_LIBRARY_SEARCH_SYSTEM32 + } else if isBaseName(name) { + // WindowsXP or unpatched Windows machine + // trying to load "foo.dll" out of the system + // folder, but LoadLibraryEx doesn't support + // that yet on their system, so emulate it. + windir, _ := Getenv("WINDIR") // old var; apparently works on XP + if windir == "" { + return nil, errString("%WINDIR% not defined") + } + loadDLL = windir + "\\System32\\" + name + } + } + h, err := LoadLibraryEx(loadDLL, 0, flags) + if err != nil { + return nil, err + } + return &DLL{Name: name, Handle: h}, nil +} + +type errString string + +func (s errString) Error() string { return string(s) } diff --git a/vendor/golang.org/x/sys/windows/env_windows.go b/vendor/golang.org/x/sys/windows/env_windows.go new file mode 100644 index 000000000..bdc71e241 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/env_windows.go @@ -0,0 +1,29 @@ +// Copyright 2010 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Windows environment variables. + +package windows + +import "syscall" + +func Getenv(key string) (value string, found bool) { + return syscall.Getenv(key) +} + +func Setenv(key, value string) error { + return syscall.Setenv(key, value) +} + +func Clearenv() { + syscall.Clearenv() +} + +func Environ() []string { + return syscall.Environ() +} + +func Unsetenv(key string) error { + return syscall.Unsetenv(key) +} diff --git a/vendor/golang.org/x/sys/windows/eventlog.go b/vendor/golang.org/x/sys/windows/eventlog.go new file mode 100644 index 000000000..40af946e1 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/eventlog.go @@ -0,0 +1,20 @@ +// Copyright 2012 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// +build windows + +package windows + +const ( + EVENTLOG_SUCCESS = 0 + EVENTLOG_ERROR_TYPE = 1 + EVENTLOG_WARNING_TYPE = 2 + EVENTLOG_INFORMATION_TYPE = 4 + EVENTLOG_AUDIT_SUCCESS = 8 + EVENTLOG_AUDIT_FAILURE = 16 +) + +//sys RegisterEventSource(uncServerName *uint16, sourceName *uint16) (handle Handle, err error) [failretval==0] = advapi32.RegisterEventSourceW +//sys DeregisterEventSource(handle Handle) (err error) = advapi32.DeregisterEventSource +//sys ReportEvent(log Handle, etype uint16, category uint16, eventId uint32, usrSId uintptr, numStrings uint16, dataSize uint32, strings **uint16, rawData *byte) (err error) = advapi32.ReportEventW diff --git a/vendor/golang.org/x/sys/windows/exec_windows.go b/vendor/golang.org/x/sys/windows/exec_windows.go new file mode 100644 index 000000000..3606c3a8b --- /dev/null +++ b/vendor/golang.org/x/sys/windows/exec_windows.go @@ -0,0 +1,97 @@ +// Copyright 2009 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Fork, exec, wait, etc. + +package windows + +// EscapeArg rewrites command line argument s as prescribed +// in http://msdn.microsoft.com/en-us/library/ms880421. +// This function returns "" (2 double quotes) if s is empty. +// Alternatively, these transformations are done: +// - every back slash (\) is doubled, but only if immediately +// followed by double quote ("); +// - every double quote (") is escaped by back slash (\); +// - finally, s is wrapped with double quotes (arg -> "arg"), +// but only if there is space or tab inside s. +func EscapeArg(s string) string { + if len(s) == 0 { + return "\"\"" + } + n := len(s) + hasSpace := false + for i := 0; i < len(s); i++ { + switch s[i] { + case '"', '\\': + n++ + case ' ', '\t': + hasSpace = true + } + } + if hasSpace { + n += 2 + } + if n == len(s) { + return s + } + + qs := make([]byte, n) + j := 0 + if hasSpace { + qs[j] = '"' + j++ + } + slashes := 0 + for i := 0; i < len(s); i++ { + switch s[i] { + default: + slashes = 0 + qs[j] = s[i] + case '\\': + slashes++ + qs[j] = s[i] + case '"': + for ; slashes > 0; slashes-- { + qs[j] = '\\' + j++ + } + qs[j] = '\\' + j++ + qs[j] = s[i] + } + j++ + } + if hasSpace { + for ; slashes > 0; slashes-- { + qs[j] = '\\' + j++ + } + qs[j] = '"' + j++ + } + return string(qs[:j]) +} + +func CloseOnExec(fd Handle) { + SetHandleInformation(Handle(fd), HANDLE_FLAG_INHERIT, 0) +} + +// FullPath retrieves the full path of the specified file. +func FullPath(name string) (path string, err error) { + p, err := UTF16PtrFromString(name) + if err != nil { + return "", err + } + n := uint32(100) + for { + buf := make([]uint16, n) + n, err = GetFullPathName(p, uint32(len(buf)), &buf[0], nil) + if err != nil { + return "", err + } + if n <= uint32(len(buf)) { + return UTF16ToString(buf[:n]), nil + } + } +} diff --git a/vendor/golang.org/x/sys/windows/memory_windows.go b/vendor/golang.org/x/sys/windows/memory_windows.go new file mode 100644 index 000000000..f80a4204f --- /dev/null +++ b/vendor/golang.org/x/sys/windows/memory_windows.go @@ -0,0 +1,26 @@ +// Copyright 2017 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package windows + +const ( + MEM_COMMIT = 0x00001000 + MEM_RESERVE = 0x00002000 + MEM_DECOMMIT = 0x00004000 + MEM_RELEASE = 0x00008000 + MEM_RESET = 0x00080000 + MEM_TOP_DOWN = 0x00100000 + MEM_WRITE_WATCH = 0x00200000 + MEM_PHYSICAL = 0x00400000 + MEM_RESET_UNDO = 0x01000000 + MEM_LARGE_PAGES = 0x20000000 + + PAGE_NOACCESS = 0x01 + PAGE_READONLY = 0x02 + PAGE_READWRITE = 0x04 + PAGE_WRITECOPY = 0x08 + PAGE_EXECUTE_READ = 0x20 + PAGE_EXECUTE_READWRITE = 0x40 + PAGE_EXECUTE_WRITECOPY = 0x80 +) diff --git a/vendor/golang.org/x/sys/windows/mksyscall.go b/vendor/golang.org/x/sys/windows/mksyscall.go new file mode 100644 index 000000000..fb7db0ef8 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/mksyscall.go @@ -0,0 +1,7 @@ +// Copyright 2009 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package windows + +//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zsyscall_windows.go eventlog.go service.go syscall_windows.go security_windows.go diff --git a/vendor/golang.org/x/sys/windows/race.go b/vendor/golang.org/x/sys/windows/race.go new file mode 100644 index 000000000..a74e3e24b --- /dev/null +++ b/vendor/golang.org/x/sys/windows/race.go @@ -0,0 +1,30 @@ +// Copyright 2012 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build windows,race + +package windows + +import ( + "runtime" + "unsafe" +) + +const raceenabled = true + +func raceAcquire(addr unsafe.Pointer) { + runtime.RaceAcquire(addr) +} + +func raceReleaseMerge(addr unsafe.Pointer) { + runtime.RaceReleaseMerge(addr) +} + +func raceReadRange(addr unsafe.Pointer, len int) { + runtime.RaceReadRange(addr, len) +} + +func raceWriteRange(addr unsafe.Pointer, len int) { + runtime.RaceWriteRange(addr, len) +} diff --git a/vendor/golang.org/x/sys/windows/race0.go b/vendor/golang.org/x/sys/windows/race0.go new file mode 100644 index 000000000..e44a3cbf6 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/race0.go @@ -0,0 +1,25 @@ +// Copyright 2012 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build windows,!race + +package windows + +import ( + "unsafe" +) + +const raceenabled = false + +func raceAcquire(addr unsafe.Pointer) { +} + +func raceReleaseMerge(addr unsafe.Pointer) { +} + +func raceReadRange(addr unsafe.Pointer, len int) { +} + +func raceWriteRange(addr unsafe.Pointer, len int) { +} diff --git a/vendor/golang.org/x/sys/windows/security_windows.go b/vendor/golang.org/x/sys/windows/security_windows.go new file mode 100644 index 000000000..f1ec5dc4e --- /dev/null +++ b/vendor/golang.org/x/sys/windows/security_windows.go @@ -0,0 +1,476 @@ +// Copyright 2012 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package windows + +import ( + "syscall" + "unsafe" +) + +const ( + STANDARD_RIGHTS_REQUIRED = 0xf0000 + STANDARD_RIGHTS_READ = 0x20000 + STANDARD_RIGHTS_WRITE = 0x20000 + STANDARD_RIGHTS_EXECUTE = 0x20000 + STANDARD_RIGHTS_ALL = 0x1F0000 +) + +const ( + NameUnknown = 0 + NameFullyQualifiedDN = 1 + NameSamCompatible = 2 + NameDisplay = 3 + NameUniqueId = 6 + NameCanonical = 7 + NameUserPrincipal = 8 + NameCanonicalEx = 9 + NameServicePrincipal = 10 + NameDnsDomain = 12 +) + +// This function returns 1 byte BOOLEAN rather than the 4 byte BOOL. +// http://blogs.msdn.com/b/drnick/archive/2007/12/19/windows-and-upn-format-credentials.aspx +//sys TranslateName(accName *uint16, accNameFormat uint32, desiredNameFormat uint32, translatedName *uint16, nSize *uint32) (err error) [failretval&0xff==0] = secur32.TranslateNameW +//sys GetUserNameEx(nameFormat uint32, nameBuffre *uint16, nSize *uint32) (err error) [failretval&0xff==0] = secur32.GetUserNameExW + +// TranslateAccountName converts a directory service +// object name from one format to another. +func TranslateAccountName(username string, from, to uint32, initSize int) (string, error) { + u, e := UTF16PtrFromString(username) + if e != nil { + return "", e + } + n := uint32(50) + for { + b := make([]uint16, n) + e = TranslateName(u, from, to, &b[0], &n) + if e == nil { + return UTF16ToString(b[:n]), nil + } + if e != ERROR_INSUFFICIENT_BUFFER { + return "", e + } + if n <= uint32(len(b)) { + return "", e + } + } +} + +const ( + // do not reorder + NetSetupUnknownStatus = iota + NetSetupUnjoined + NetSetupWorkgroupName + NetSetupDomainName +) + +type UserInfo10 struct { + Name *uint16 + Comment *uint16 + UsrComment *uint16 + FullName *uint16 +} + +//sys NetUserGetInfo(serverName *uint16, userName *uint16, level uint32, buf **byte) (neterr error) = netapi32.NetUserGetInfo +//sys NetGetJoinInformation(server *uint16, name **uint16, bufType *uint32) (neterr error) = netapi32.NetGetJoinInformation +//sys NetApiBufferFree(buf *byte) (neterr error) = netapi32.NetApiBufferFree + +const ( + // do not reorder + SidTypeUser = 1 + iota + SidTypeGroup + SidTypeDomain + SidTypeAlias + SidTypeWellKnownGroup + SidTypeDeletedAccount + SidTypeInvalid + SidTypeUnknown + SidTypeComputer + SidTypeLabel +) + +type SidIdentifierAuthority struct { + Value [6]byte +} + +var ( + SECURITY_NULL_SID_AUTHORITY = SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 0}} + SECURITY_WORLD_SID_AUTHORITY = SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 1}} + SECURITY_LOCAL_SID_AUTHORITY = SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 2}} + SECURITY_CREATOR_SID_AUTHORITY = SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 3}} + SECURITY_NON_UNIQUE_AUTHORITY = SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 4}} + SECURITY_NT_AUTHORITY = SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 5}} + SECURITY_MANDATORY_LABEL_AUTHORITY = SidIdentifierAuthority{[6]byte{0, 0, 0, 0, 0, 16}} +) + +const ( + SECURITY_NULL_RID = 0 + SECURITY_WORLD_RID = 0 + SECURITY_LOCAL_RID = 0 + SECURITY_CREATOR_OWNER_RID = 0 + SECURITY_CREATOR_GROUP_RID = 1 + SECURITY_DIALUP_RID = 1 + SECURITY_NETWORK_RID = 2 + SECURITY_BATCH_RID = 3 + SECURITY_INTERACTIVE_RID = 4 + SECURITY_LOGON_IDS_RID = 5 + SECURITY_SERVICE_RID = 6 + SECURITY_LOCAL_SYSTEM_RID = 18 + SECURITY_BUILTIN_DOMAIN_RID = 32 + SECURITY_PRINCIPAL_SELF_RID = 10 + SECURITY_CREATOR_OWNER_SERVER_RID = 0x2 + SECURITY_CREATOR_GROUP_SERVER_RID = 0x3 + SECURITY_LOGON_IDS_RID_COUNT = 0x3 + SECURITY_ANONYMOUS_LOGON_RID = 0x7 + SECURITY_PROXY_RID = 0x8 + 
SECURITY_ENTERPRISE_CONTROLLERS_RID = 0x9 + SECURITY_SERVER_LOGON_RID = SECURITY_ENTERPRISE_CONTROLLERS_RID + SECURITY_AUTHENTICATED_USER_RID = 0xb + SECURITY_RESTRICTED_CODE_RID = 0xc + SECURITY_NT_NON_UNIQUE_RID = 0x15 +) + +// Predefined domain-relative RIDs for local groups. +// See https://msdn.microsoft.com/en-us/library/windows/desktop/aa379649(v=vs.85).aspx +const ( + DOMAIN_ALIAS_RID_ADMINS = 0x220 + DOMAIN_ALIAS_RID_USERS = 0x221 + DOMAIN_ALIAS_RID_GUESTS = 0x222 + DOMAIN_ALIAS_RID_POWER_USERS = 0x223 + DOMAIN_ALIAS_RID_ACCOUNT_OPS = 0x224 + DOMAIN_ALIAS_RID_SYSTEM_OPS = 0x225 + DOMAIN_ALIAS_RID_PRINT_OPS = 0x226 + DOMAIN_ALIAS_RID_BACKUP_OPS = 0x227 + DOMAIN_ALIAS_RID_REPLICATOR = 0x228 + DOMAIN_ALIAS_RID_RAS_SERVERS = 0x229 + DOMAIN_ALIAS_RID_PREW2KCOMPACCESS = 0x22a + DOMAIN_ALIAS_RID_REMOTE_DESKTOP_USERS = 0x22b + DOMAIN_ALIAS_RID_NETWORK_CONFIGURATION_OPS = 0x22c + DOMAIN_ALIAS_RID_INCOMING_FOREST_TRUST_BUILDERS = 0x22d + DOMAIN_ALIAS_RID_MONITORING_USERS = 0X22e + DOMAIN_ALIAS_RID_LOGGING_USERS = 0x22f + DOMAIN_ALIAS_RID_AUTHORIZATIONACCESS = 0x230 + DOMAIN_ALIAS_RID_TS_LICENSE_SERVERS = 0x231 + DOMAIN_ALIAS_RID_DCOM_USERS = 0x232 + DOMAIN_ALIAS_RID_IUSERS = 0x238 + DOMAIN_ALIAS_RID_CRYPTO_OPERATORS = 0x239 + DOMAIN_ALIAS_RID_CACHEABLE_PRINCIPALS_GROUP = 0x23b + DOMAIN_ALIAS_RID_NON_CACHEABLE_PRINCIPALS_GROUP = 0x23c + DOMAIN_ALIAS_RID_EVENT_LOG_READERS_GROUP = 0x23d + DOMAIN_ALIAS_RID_CERTSVC_DCOM_ACCESS_GROUP = 0x23e +) + +//sys LookupAccountSid(systemName *uint16, sid *SID, name *uint16, nameLen *uint32, refdDomainName *uint16, refdDomainNameLen *uint32, use *uint32) (err error) = advapi32.LookupAccountSidW +//sys LookupAccountName(systemName *uint16, accountName *uint16, sid *SID, sidLen *uint32, refdDomainName *uint16, refdDomainNameLen *uint32, use *uint32) (err error) = advapi32.LookupAccountNameW +//sys ConvertSidToStringSid(sid *SID, stringSid **uint16) (err error) = advapi32.ConvertSidToStringSidW +//sys ConvertStringSidToSid(stringSid *uint16, sid **SID) (err error) = advapi32.ConvertStringSidToSidW +//sys GetLengthSid(sid *SID) (len uint32) = advapi32.GetLengthSid +//sys CopySid(destSidLen uint32, destSid *SID, srcSid *SID) (err error) = advapi32.CopySid +//sys AllocateAndInitializeSid(identAuth *SidIdentifierAuthority, subAuth byte, subAuth0 uint32, subAuth1 uint32, subAuth2 uint32, subAuth3 uint32, subAuth4 uint32, subAuth5 uint32, subAuth6 uint32, subAuth7 uint32, sid **SID) (err error) = advapi32.AllocateAndInitializeSid +//sys FreeSid(sid *SID) (err error) [failretval!=0] = advapi32.FreeSid +//sys EqualSid(sid1 *SID, sid2 *SID) (isEqual bool) = advapi32.EqualSid + +// The security identifier (SID) structure is a variable-length +// structure used to uniquely identify users or groups. +type SID struct{} + +// StringToSid converts a string-format security identifier +// sid into a valid, functional sid. +func StringToSid(s string) (*SID, error) { + var sid *SID + p, e := UTF16PtrFromString(s) + if e != nil { + return nil, e + } + e = ConvertStringSidToSid(p, &sid) + if e != nil { + return nil, e + } + defer LocalFree((Handle)(unsafe.Pointer(sid))) + return sid.Copy() +} + +// LookupSID retrieves a security identifier sid for the account +// and the name of the domain on which the account was found. +// System specify target computer to search. 
+func LookupSID(system, account string) (sid *SID, domain string, accType uint32, err error) { + if len(account) == 0 { + return nil, "", 0, syscall.EINVAL + } + acc, e := UTF16PtrFromString(account) + if e != nil { + return nil, "", 0, e + } + var sys *uint16 + if len(system) > 0 { + sys, e = UTF16PtrFromString(system) + if e != nil { + return nil, "", 0, e + } + } + n := uint32(50) + dn := uint32(50) + for { + b := make([]byte, n) + db := make([]uint16, dn) + sid = (*SID)(unsafe.Pointer(&b[0])) + e = LookupAccountName(sys, acc, sid, &n, &db[0], &dn, &accType) + if e == nil { + return sid, UTF16ToString(db), accType, nil + } + if e != ERROR_INSUFFICIENT_BUFFER { + return nil, "", 0, e + } + if n <= uint32(len(b)) { + return nil, "", 0, e + } + } +} + +// String converts sid to a string format +// suitable for display, storage, or transmission. +func (sid *SID) String() (string, error) { + var s *uint16 + e := ConvertSidToStringSid(sid, &s) + if e != nil { + return "", e + } + defer LocalFree((Handle)(unsafe.Pointer(s))) + return UTF16ToString((*[256]uint16)(unsafe.Pointer(s))[:]), nil +} + +// Len returns the length, in bytes, of a valid security identifier sid. +func (sid *SID) Len() int { + return int(GetLengthSid(sid)) +} + +// Copy creates a duplicate of security identifier sid. +func (sid *SID) Copy() (*SID, error) { + b := make([]byte, sid.Len()) + sid2 := (*SID)(unsafe.Pointer(&b[0])) + e := CopySid(uint32(len(b)), sid2, sid) + if e != nil { + return nil, e + } + return sid2, nil +} + +// LookupAccount retrieves the name of the account for this sid +// and the name of the first domain on which this sid is found. +// System specify target computer to search for. +func (sid *SID) LookupAccount(system string) (account, domain string, accType uint32, err error) { + var sys *uint16 + if len(system) > 0 { + sys, err = UTF16PtrFromString(system) + if err != nil { + return "", "", 0, err + } + } + n := uint32(50) + dn := uint32(50) + for { + b := make([]uint16, n) + db := make([]uint16, dn) + e := LookupAccountSid(sys, sid, &b[0], &n, &db[0], &dn, &accType) + if e == nil { + return UTF16ToString(b), UTF16ToString(db), accType, nil + } + if e != ERROR_INSUFFICIENT_BUFFER { + return "", "", 0, e + } + if n <= uint32(len(b)) { + return "", "", 0, e + } + } +} + +const ( + // do not reorder + TOKEN_ASSIGN_PRIMARY = 1 << iota + TOKEN_DUPLICATE + TOKEN_IMPERSONATE + TOKEN_QUERY + TOKEN_QUERY_SOURCE + TOKEN_ADJUST_PRIVILEGES + TOKEN_ADJUST_GROUPS + TOKEN_ADJUST_DEFAULT + + TOKEN_ALL_ACCESS = STANDARD_RIGHTS_REQUIRED | + TOKEN_ASSIGN_PRIMARY | + TOKEN_DUPLICATE | + TOKEN_IMPERSONATE | + TOKEN_QUERY | + TOKEN_QUERY_SOURCE | + TOKEN_ADJUST_PRIVILEGES | + TOKEN_ADJUST_GROUPS | + TOKEN_ADJUST_DEFAULT + TOKEN_READ = STANDARD_RIGHTS_READ | TOKEN_QUERY + TOKEN_WRITE = STANDARD_RIGHTS_WRITE | + TOKEN_ADJUST_PRIVILEGES | + TOKEN_ADJUST_GROUPS | + TOKEN_ADJUST_DEFAULT + TOKEN_EXECUTE = STANDARD_RIGHTS_EXECUTE +) + +const ( + // do not reorder + TokenUser = 1 + iota + TokenGroups + TokenPrivileges + TokenOwner + TokenPrimaryGroup + TokenDefaultDacl + TokenSource + TokenType + TokenImpersonationLevel + TokenStatistics + TokenRestrictedSids + TokenSessionId + TokenGroupsAndPrivileges + TokenSessionReference + TokenSandBoxInert + TokenAuditPolicy + TokenOrigin + TokenElevationType + TokenLinkedToken + TokenElevation + TokenHasRestrictions + TokenAccessInformation + TokenVirtualizationAllowed + TokenVirtualizationEnabled + TokenIntegrityLevel + TokenUIAccess + TokenMandatoryPolicy + TokenLogonSid + 
MaxTokenInfoClass +) + +type SIDAndAttributes struct { + Sid *SID + Attributes uint32 +} + +type Tokenuser struct { + User SIDAndAttributes +} + +type Tokenprimarygroup struct { + PrimaryGroup *SID +} + +type Tokengroups struct { + GroupCount uint32 + Groups [1]SIDAndAttributes +} + +// Authorization Functions +//sys checkTokenMembership(tokenHandle Token, sidToCheck *SID, isMember *int32) (err error) = advapi32.CheckTokenMembership +//sys OpenProcessToken(h Handle, access uint32, token *Token) (err error) = advapi32.OpenProcessToken +//sys GetTokenInformation(t Token, infoClass uint32, info *byte, infoLen uint32, returnedLen *uint32) (err error) = advapi32.GetTokenInformation +//sys GetUserProfileDirectory(t Token, dir *uint16, dirLen *uint32) (err error) = userenv.GetUserProfileDirectoryW + +// An access token contains the security information for a logon session. +// The system creates an access token when a user logs on, and every +// process executed on behalf of the user has a copy of the token. +// The token identifies the user, the user's groups, and the user's +// privileges. The system uses the token to control access to securable +// objects and to control the ability of the user to perform various +// system-related operations on the local computer. +type Token Handle + +// OpenCurrentProcessToken opens the access token +// associated with current process. +func OpenCurrentProcessToken() (Token, error) { + p, e := GetCurrentProcess() + if e != nil { + return 0, e + } + var t Token + e = OpenProcessToken(p, TOKEN_QUERY, &t) + if e != nil { + return 0, e + } + return t, nil +} + +// Close releases access to access token. +func (t Token) Close() error { + return CloseHandle(Handle(t)) +} + +// getInfo retrieves a specified type of information about an access token. +func (t Token) getInfo(class uint32, initSize int) (unsafe.Pointer, error) { + n := uint32(initSize) + for { + b := make([]byte, n) + e := GetTokenInformation(t, class, &b[0], uint32(len(b)), &n) + if e == nil { + return unsafe.Pointer(&b[0]), nil + } + if e != ERROR_INSUFFICIENT_BUFFER { + return nil, e + } + if n <= uint32(len(b)) { + return nil, e + } + } +} + +// GetTokenUser retrieves access token t user account information. +func (t Token) GetTokenUser() (*Tokenuser, error) { + i, e := t.getInfo(TokenUser, 50) + if e != nil { + return nil, e + } + return (*Tokenuser)(i), nil +} + +// GetTokenGroups retrieves group accounts associated with access token t. +func (t Token) GetTokenGroups() (*Tokengroups, error) { + i, e := t.getInfo(TokenGroups, 50) + if e != nil { + return nil, e + } + return (*Tokengroups)(i), nil +} + +// GetTokenPrimaryGroup retrieves access token t primary group information. +// A pointer to a SID structure representing a group that will become +// the primary group of any objects created by a process using this access token. +func (t Token) GetTokenPrimaryGroup() (*Tokenprimarygroup, error) { + i, e := t.getInfo(TokenPrimaryGroup, 50) + if e != nil { + return nil, e + } + return (*Tokenprimarygroup)(i), nil +} + +// GetUserProfileDirectory retrieves path to the +// root directory of the access token t user's profile. 
+func (t Token) GetUserProfileDirectory() (string, error) { + n := uint32(100) + for { + b := make([]uint16, n) + e := GetUserProfileDirectory(t, &b[0], &n) + if e == nil { + return UTF16ToString(b), nil + } + if e != ERROR_INSUFFICIENT_BUFFER { + return "", e + } + if n <= uint32(len(b)) { + return "", e + } + } +} + +// IsMember reports whether the access token t is a member of the provided SID. +func (t Token) IsMember(sid *SID) (bool, error) { + var b int32 + if e := checkTokenMembership(t, sid, &b); e != nil { + return false, e + } + return b != 0, nil +} diff --git a/vendor/golang.org/x/sys/windows/service.go b/vendor/golang.org/x/sys/windows/service.go new file mode 100644 index 000000000..a500dd7df --- /dev/null +++ b/vendor/golang.org/x/sys/windows/service.go @@ -0,0 +1,164 @@ +// Copyright 2012 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build windows + +package windows + +const ( + SC_MANAGER_CONNECT = 1 + SC_MANAGER_CREATE_SERVICE = 2 + SC_MANAGER_ENUMERATE_SERVICE = 4 + SC_MANAGER_LOCK = 8 + SC_MANAGER_QUERY_LOCK_STATUS = 16 + SC_MANAGER_MODIFY_BOOT_CONFIG = 32 + SC_MANAGER_ALL_ACCESS = 0xf003f +) + +//sys OpenSCManager(machineName *uint16, databaseName *uint16, access uint32) (handle Handle, err error) [failretval==0] = advapi32.OpenSCManagerW + +const ( + SERVICE_KERNEL_DRIVER = 1 + SERVICE_FILE_SYSTEM_DRIVER = 2 + SERVICE_ADAPTER = 4 + SERVICE_RECOGNIZER_DRIVER = 8 + SERVICE_WIN32_OWN_PROCESS = 16 + SERVICE_WIN32_SHARE_PROCESS = 32 + SERVICE_WIN32 = SERVICE_WIN32_OWN_PROCESS | SERVICE_WIN32_SHARE_PROCESS + SERVICE_INTERACTIVE_PROCESS = 256 + SERVICE_DRIVER = SERVICE_KERNEL_DRIVER | SERVICE_FILE_SYSTEM_DRIVER | SERVICE_RECOGNIZER_DRIVER + SERVICE_TYPE_ALL = SERVICE_WIN32 | SERVICE_ADAPTER | SERVICE_DRIVER | SERVICE_INTERACTIVE_PROCESS + + SERVICE_BOOT_START = 0 + SERVICE_SYSTEM_START = 1 + SERVICE_AUTO_START = 2 + SERVICE_DEMAND_START = 3 + SERVICE_DISABLED = 4 + + SERVICE_ERROR_IGNORE = 0 + SERVICE_ERROR_NORMAL = 1 + SERVICE_ERROR_SEVERE = 2 + SERVICE_ERROR_CRITICAL = 3 + + SC_STATUS_PROCESS_INFO = 0 + + SERVICE_STOPPED = 1 + SERVICE_START_PENDING = 2 + SERVICE_STOP_PENDING = 3 + SERVICE_RUNNING = 4 + SERVICE_CONTINUE_PENDING = 5 + SERVICE_PAUSE_PENDING = 6 + SERVICE_PAUSED = 7 + SERVICE_NO_CHANGE = 0xffffffff + + SERVICE_ACCEPT_STOP = 1 + SERVICE_ACCEPT_PAUSE_CONTINUE = 2 + SERVICE_ACCEPT_SHUTDOWN = 4 + SERVICE_ACCEPT_PARAMCHANGE = 8 + SERVICE_ACCEPT_NETBINDCHANGE = 16 + SERVICE_ACCEPT_HARDWAREPROFILECHANGE = 32 + SERVICE_ACCEPT_POWEREVENT = 64 + SERVICE_ACCEPT_SESSIONCHANGE = 128 + + SERVICE_CONTROL_STOP = 1 + SERVICE_CONTROL_PAUSE = 2 + SERVICE_CONTROL_CONTINUE = 3 + SERVICE_CONTROL_INTERROGATE = 4 + SERVICE_CONTROL_SHUTDOWN = 5 + SERVICE_CONTROL_PARAMCHANGE = 6 + SERVICE_CONTROL_NETBINDADD = 7 + SERVICE_CONTROL_NETBINDREMOVE = 8 + SERVICE_CONTROL_NETBINDENABLE = 9 + SERVICE_CONTROL_NETBINDDISABLE = 10 + SERVICE_CONTROL_DEVICEEVENT = 11 + SERVICE_CONTROL_HARDWAREPROFILECHANGE = 12 + SERVICE_CONTROL_POWEREVENT = 13 + SERVICE_CONTROL_SESSIONCHANGE = 14 + + SERVICE_ACTIVE = 1 + SERVICE_INACTIVE = 2 + SERVICE_STATE_ALL = 3 + + SERVICE_QUERY_CONFIG = 1 + SERVICE_CHANGE_CONFIG = 2 + SERVICE_QUERY_STATUS = 4 + SERVICE_ENUMERATE_DEPENDENTS = 8 + SERVICE_START = 16 + SERVICE_STOP = 32 + SERVICE_PAUSE_CONTINUE = 64 + SERVICE_INTERROGATE = 128 + SERVICE_USER_DEFINED_CONTROL = 256 + SERVICE_ALL_ACCESS = STANDARD_RIGHTS_REQUIRED | SERVICE_QUERY_CONFIG | SERVICE_CHANGE_CONFIG | 
SERVICE_QUERY_STATUS | SERVICE_ENUMERATE_DEPENDENTS | SERVICE_START | SERVICE_STOP | SERVICE_PAUSE_CONTINUE | SERVICE_INTERROGATE | SERVICE_USER_DEFINED_CONTROL + SERVICE_RUNS_IN_SYSTEM_PROCESS = 1 + SERVICE_CONFIG_DESCRIPTION = 1 + SERVICE_CONFIG_FAILURE_ACTIONS = 2 + + NO_ERROR = 0 + + SC_ENUM_PROCESS_INFO = 0 +) + +type SERVICE_STATUS struct { + ServiceType uint32 + CurrentState uint32 + ControlsAccepted uint32 + Win32ExitCode uint32 + ServiceSpecificExitCode uint32 + CheckPoint uint32 + WaitHint uint32 +} + +type SERVICE_TABLE_ENTRY struct { + ServiceName *uint16 + ServiceProc uintptr +} + +type QUERY_SERVICE_CONFIG struct { + ServiceType uint32 + StartType uint32 + ErrorControl uint32 + BinaryPathName *uint16 + LoadOrderGroup *uint16 + TagId uint32 + Dependencies *uint16 + ServiceStartName *uint16 + DisplayName *uint16 +} + +type SERVICE_DESCRIPTION struct { + Description *uint16 +} + +type SERVICE_STATUS_PROCESS struct { + ServiceType uint32 + CurrentState uint32 + ControlsAccepted uint32 + Win32ExitCode uint32 + ServiceSpecificExitCode uint32 + CheckPoint uint32 + WaitHint uint32 + ProcessId uint32 + ServiceFlags uint32 +} + +type ENUM_SERVICE_STATUS_PROCESS struct { + ServiceName *uint16 + DisplayName *uint16 + ServiceStatusProcess SERVICE_STATUS_PROCESS +} + +//sys CloseServiceHandle(handle Handle) (err error) = advapi32.CloseServiceHandle +//sys CreateService(mgr Handle, serviceName *uint16, displayName *uint16, access uint32, srvType uint32, startType uint32, errCtl uint32, pathName *uint16, loadOrderGroup *uint16, tagId *uint32, dependencies *uint16, serviceStartName *uint16, password *uint16) (handle Handle, err error) [failretval==0] = advapi32.CreateServiceW +//sys OpenService(mgr Handle, serviceName *uint16, access uint32) (handle Handle, err error) [failretval==0] = advapi32.OpenServiceW +//sys DeleteService(service Handle) (err error) = advapi32.DeleteService +//sys StartService(service Handle, numArgs uint32, argVectors **uint16) (err error) = advapi32.StartServiceW +//sys QueryServiceStatus(service Handle, status *SERVICE_STATUS) (err error) = advapi32.QueryServiceStatus +//sys ControlService(service Handle, control uint32, status *SERVICE_STATUS) (err error) = advapi32.ControlService +//sys StartServiceCtrlDispatcher(serviceTable *SERVICE_TABLE_ENTRY) (err error) = advapi32.StartServiceCtrlDispatcherW +//sys SetServiceStatus(service Handle, serviceStatus *SERVICE_STATUS) (err error) = advapi32.SetServiceStatus +//sys ChangeServiceConfig(service Handle, serviceType uint32, startType uint32, errorControl uint32, binaryPathName *uint16, loadOrderGroup *uint16, tagId *uint32, dependencies *uint16, serviceStartName *uint16, password *uint16, displayName *uint16) (err error) = advapi32.ChangeServiceConfigW +//sys QueryServiceConfig(service Handle, serviceConfig *QUERY_SERVICE_CONFIG, bufSize uint32, bytesNeeded *uint32) (err error) = advapi32.QueryServiceConfigW +//sys ChangeServiceConfig2(service Handle, infoLevel uint32, info *byte) (err error) = advapi32.ChangeServiceConfig2W +//sys QueryServiceConfig2(service Handle, infoLevel uint32, buff *byte, buffSize uint32, bytesNeeded *uint32) (err error) = advapi32.QueryServiceConfig2W +//sys EnumServicesStatusEx(mgr Handle, infoLevel uint32, serviceType uint32, serviceState uint32, services *byte, bufSize uint32, bytesNeeded *uint32, servicesReturned *uint32, resumeHandle *uint32, groupName *uint16) (err error) = advapi32.EnumServicesStatusExW diff --git a/vendor/golang.org/x/sys/windows/str.go 
b/vendor/golang.org/x/sys/windows/str.go new file mode 100644 index 000000000..917cc2aae --- /dev/null +++ b/vendor/golang.org/x/sys/windows/str.go @@ -0,0 +1,22 @@ +// Copyright 2009 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build windows + +package windows + +func itoa(val int) string { // do it here rather than with fmt to avoid dependency + if val < 0 { + return "-" + itoa(-val) + } + var buf [32]byte // big enough for int64 + i := len(buf) - 1 + for val >= 10 { + buf[i] = byte(val%10 + '0') + i-- + val /= 10 + } + buf[i] = byte(val + '0') + return string(buf[i:]) +} diff --git a/vendor/golang.org/x/sys/windows/syscall.go b/vendor/golang.org/x/sys/windows/syscall.go new file mode 100644 index 000000000..af828a91b --- /dev/null +++ b/vendor/golang.org/x/sys/windows/syscall.go @@ -0,0 +1,74 @@ +// Copyright 2009 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build windows + +// Package windows contains an interface to the low-level operating system +// primitives. OS details vary depending on the underlying system, and +// by default, godoc will display the OS-specific documentation for the current +// system. If you want godoc to display syscall documentation for another +// system, set $GOOS and $GOARCH to the desired system. For example, if +// you want to view documentation for freebsd/arm on linux/amd64, set $GOOS +// to freebsd and $GOARCH to arm. +// +// The primary use of this package is inside other packages that provide a more +// portable interface to the system, such as "os", "time" and "net". Use +// those packages rather than this one if you can. +// +// For details of the functions and data types in this package consult +// the manuals for the appropriate operating system. +// +// These calls return err == nil to indicate success; otherwise +// err represents an operating system error describing the failure and +// holds a value of type syscall.Errno. +package windows // import "golang.org/x/sys/windows" + +import ( + "syscall" +) + +// ByteSliceFromString returns a NUL-terminated slice of bytes +// containing the text of s. If s contains a NUL byte at any +// location, it returns (nil, syscall.EINVAL). +func ByteSliceFromString(s string) ([]byte, error) { + for i := 0; i < len(s); i++ { + if s[i] == 0 { + return nil, syscall.EINVAL + } + } + a := make([]byte, len(s)+1) + copy(a, s) + return a, nil +} + +// BytePtrFromString returns a pointer to a NUL-terminated array of +// bytes containing the text of s. If s contains a NUL byte at any +// location, it returns (nil, syscall.EINVAL). +func BytePtrFromString(s string) (*byte, error) { + a, err := ByteSliceFromString(s) + if err != nil { + return nil, err + } + return &a[0], nil +} + +// Single-word zero for use when we need a valid pointer to 0 bytes. +// See mksyscall.pl. 
+var _zero uintptr + +func (ts *Timespec) Unix() (sec int64, nsec int64) { + return int64(ts.Sec), int64(ts.Nsec) +} + +func (tv *Timeval) Unix() (sec int64, nsec int64) { + return int64(tv.Sec), int64(tv.Usec) * 1000 +} + +func (ts *Timespec) Nano() int64 { + return int64(ts.Sec)*1e9 + int64(ts.Nsec) +} + +func (tv *Timeval) Nano() int64 { + return int64(tv.Sec)*1e9 + int64(tv.Usec)*1000 +} diff --git a/vendor/golang.org/x/sys/windows/syscall_windows.go b/vendor/golang.org/x/sys/windows/syscall_windows.go new file mode 100644 index 000000000..1e9f4bb4a --- /dev/null +++ b/vendor/golang.org/x/sys/windows/syscall_windows.go @@ -0,0 +1,1153 @@ +// Copyright 2009 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Windows system calls. + +package windows + +import ( + errorspkg "errors" + "sync" + "syscall" + "unicode/utf16" + "unsafe" +) + +type Handle uintptr + +const ( + InvalidHandle = ^Handle(0) + + // Flags for DefineDosDevice. + DDD_EXACT_MATCH_ON_REMOVE = 0x00000004 + DDD_NO_BROADCAST_SYSTEM = 0x00000008 + DDD_RAW_TARGET_PATH = 0x00000001 + DDD_REMOVE_DEFINITION = 0x00000002 + + // Return values for GetDriveType. + DRIVE_UNKNOWN = 0 + DRIVE_NO_ROOT_DIR = 1 + DRIVE_REMOVABLE = 2 + DRIVE_FIXED = 3 + DRIVE_REMOTE = 4 + DRIVE_CDROM = 5 + DRIVE_RAMDISK = 6 + + // File system flags from GetVolumeInformation and GetVolumeInformationByHandle. + FILE_CASE_SENSITIVE_SEARCH = 0x00000001 + FILE_CASE_PRESERVED_NAMES = 0x00000002 + FILE_FILE_COMPRESSION = 0x00000010 + FILE_DAX_VOLUME = 0x20000000 + FILE_NAMED_STREAMS = 0x00040000 + FILE_PERSISTENT_ACLS = 0x00000008 + FILE_READ_ONLY_VOLUME = 0x00080000 + FILE_SEQUENTIAL_WRITE_ONCE = 0x00100000 + FILE_SUPPORTS_ENCRYPTION = 0x00020000 + FILE_SUPPORTS_EXTENDED_ATTRIBUTES = 0x00800000 + FILE_SUPPORTS_HARD_LINKS = 0x00400000 + FILE_SUPPORTS_OBJECT_IDS = 0x00010000 + FILE_SUPPORTS_OPEN_BY_FILE_ID = 0x01000000 + FILE_SUPPORTS_REPARSE_POINTS = 0x00000080 + FILE_SUPPORTS_SPARSE_FILES = 0x00000040 + FILE_SUPPORTS_TRANSACTIONS = 0x00200000 + FILE_SUPPORTS_USN_JOURNAL = 0x02000000 + FILE_UNICODE_ON_DISK = 0x00000004 + FILE_VOLUME_IS_COMPRESSED = 0x00008000 + FILE_VOLUME_QUOTAS = 0x00000020 +) + +// StringToUTF16 is deprecated. Use UTF16FromString instead. +// If s contains a NUL byte this function panics instead of +// returning an error. +func StringToUTF16(s string) []uint16 { + a, err := UTF16FromString(s) + if err != nil { + panic("windows: string with NUL passed to StringToUTF16") + } + return a +} + +// UTF16FromString returns the UTF-16 encoding of the UTF-8 string +// s, with a terminating NUL added. If s contains a NUL byte at any +// location, it returns (nil, syscall.EINVAL). +func UTF16FromString(s string) ([]uint16, error) { + for i := 0; i < len(s); i++ { + if s[i] == 0 { + return nil, syscall.EINVAL + } + } + return utf16.Encode([]rune(s + "\x00")), nil +} + +// UTF16ToString returns the UTF-8 encoding of the UTF-16 sequence s, +// with a terminating NUL removed. +func UTF16ToString(s []uint16) string { + for i, v := range s { + if v == 0 { + s = s[0:i] + break + } + } + return string(utf16.Decode(s)) +} + +// StringToUTF16Ptr is deprecated. Use UTF16PtrFromString instead. +// If s contains a NUL byte this function panics instead of +// returning an error. +func StringToUTF16Ptr(s string) *uint16 { return &StringToUTF16(s)[0] } + +// UTF16PtrFromString returns pointer to the UTF-16 encoding of +// the UTF-8 string s, with a terminating NUL added. 
If s +// contains a NUL byte at any location, it returns (nil, syscall.EINVAL). +func UTF16PtrFromString(s string) (*uint16, error) { + a, err := UTF16FromString(s) + if err != nil { + return nil, err + } + return &a[0], nil +} + +func Getpagesize() int { return 4096 } + +// NewCallback converts a Go function to a function pointer conforming to the stdcall calling convention. +// This is useful when interoperating with Windows code requiring callbacks. +func NewCallback(fn interface{}) uintptr { + return syscall.NewCallback(fn) +} + +// NewCallbackCDecl converts a Go function to a function pointer conforming to the cdecl calling convention. +// This is useful when interoperating with Windows code requiring callbacks. +func NewCallbackCDecl(fn interface{}) uintptr { + return syscall.NewCallbackCDecl(fn) +} + +// windows api calls + +//sys GetLastError() (lasterr error) +//sys LoadLibrary(libname string) (handle Handle, err error) = LoadLibraryW +//sys LoadLibraryEx(libname string, zero Handle, flags uintptr) (handle Handle, err error) = LoadLibraryExW +//sys FreeLibrary(handle Handle) (err error) +//sys GetProcAddress(module Handle, procname string) (proc uintptr, err error) +//sys GetVersion() (ver uint32, err error) +//sys FormatMessage(flags uint32, msgsrc uintptr, msgid uint32, langid uint32, buf []uint16, args *byte) (n uint32, err error) = FormatMessageW +//sys ExitProcess(exitcode uint32) +//sys CreateFile(name *uint16, access uint32, mode uint32, sa *SecurityAttributes, createmode uint32, attrs uint32, templatefile int32) (handle Handle, err error) [failretval==InvalidHandle] = CreateFileW +//sys ReadFile(handle Handle, buf []byte, done *uint32, overlapped *Overlapped) (err error) +//sys WriteFile(handle Handle, buf []byte, done *uint32, overlapped *Overlapped) (err error) +//sys SetFilePointer(handle Handle, lowoffset int32, highoffsetptr *int32, whence uint32) (newlowoffset uint32, err error) [failretval==0xffffffff] +//sys CloseHandle(handle Handle) (err error) +//sys GetStdHandle(stdhandle uint32) (handle Handle, err error) [failretval==InvalidHandle] +//sys SetStdHandle(stdhandle uint32, handle Handle) (err error) +//sys findFirstFile1(name *uint16, data *win32finddata1) (handle Handle, err error) [failretval==InvalidHandle] = FindFirstFileW +//sys findNextFile1(handle Handle, data *win32finddata1) (err error) = FindNextFileW +//sys FindClose(handle Handle) (err error) +//sys GetFileInformationByHandle(handle Handle, data *ByHandleFileInformation) (err error) +//sys GetCurrentDirectory(buflen uint32, buf *uint16) (n uint32, err error) = GetCurrentDirectoryW +//sys SetCurrentDirectory(path *uint16) (err error) = SetCurrentDirectoryW +//sys CreateDirectory(path *uint16, sa *SecurityAttributes) (err error) = CreateDirectoryW +//sys RemoveDirectory(path *uint16) (err error) = RemoveDirectoryW +//sys DeleteFile(path *uint16) (err error) = DeleteFileW +//sys MoveFile(from *uint16, to *uint16) (err error) = MoveFileW +//sys MoveFileEx(from *uint16, to *uint16, flags uint32) (err error) = MoveFileExW +//sys GetComputerName(buf *uint16, n *uint32) (err error) = GetComputerNameW +//sys GetComputerNameEx(nametype uint32, buf *uint16, n *uint32) (err error) = GetComputerNameExW +//sys SetEndOfFile(handle Handle) (err error) +//sys GetSystemTimeAsFileTime(time *Filetime) +//sys GetSystemTimePreciseAsFileTime(time *Filetime) +//sys GetTimeZoneInformation(tzi *Timezoneinformation) (rc uint32, err error) [failretval==0xffffffff] +//sys CreateIoCompletionPort(filehandle Handle, cphandle Handle, 
key uint32, threadcnt uint32) (handle Handle, err error) +//sys GetQueuedCompletionStatus(cphandle Handle, qty *uint32, key *uint32, overlapped **Overlapped, timeout uint32) (err error) +//sys PostQueuedCompletionStatus(cphandle Handle, qty uint32, key uint32, overlapped *Overlapped) (err error) +//sys CancelIo(s Handle) (err error) +//sys CancelIoEx(s Handle, o *Overlapped) (err error) +//sys CreateProcess(appName *uint16, commandLine *uint16, procSecurity *SecurityAttributes, threadSecurity *SecurityAttributes, inheritHandles bool, creationFlags uint32, env *uint16, currentDir *uint16, startupInfo *StartupInfo, outProcInfo *ProcessInformation) (err error) = CreateProcessW +//sys OpenProcess(da uint32, inheritHandle bool, pid uint32) (handle Handle, err error) +//sys TerminateProcess(handle Handle, exitcode uint32) (err error) +//sys GetExitCodeProcess(handle Handle, exitcode *uint32) (err error) +//sys GetStartupInfo(startupInfo *StartupInfo) (err error) = GetStartupInfoW +//sys GetCurrentProcess() (pseudoHandle Handle, err error) +//sys GetProcessTimes(handle Handle, creationTime *Filetime, exitTime *Filetime, kernelTime *Filetime, userTime *Filetime) (err error) +//sys DuplicateHandle(hSourceProcessHandle Handle, hSourceHandle Handle, hTargetProcessHandle Handle, lpTargetHandle *Handle, dwDesiredAccess uint32, bInheritHandle bool, dwOptions uint32) (err error) +//sys WaitForSingleObject(handle Handle, waitMilliseconds uint32) (event uint32, err error) [failretval==0xffffffff] +//sys GetTempPath(buflen uint32, buf *uint16) (n uint32, err error) = GetTempPathW +//sys CreatePipe(readhandle *Handle, writehandle *Handle, sa *SecurityAttributes, size uint32) (err error) +//sys GetFileType(filehandle Handle) (n uint32, err error) +//sys CryptAcquireContext(provhandle *Handle, container *uint16, provider *uint16, provtype uint32, flags uint32) (err error) = advapi32.CryptAcquireContextW +//sys CryptReleaseContext(provhandle Handle, flags uint32) (err error) = advapi32.CryptReleaseContext +//sys CryptGenRandom(provhandle Handle, buflen uint32, buf *byte) (err error) = advapi32.CryptGenRandom +//sys GetEnvironmentStrings() (envs *uint16, err error) [failretval==nil] = kernel32.GetEnvironmentStringsW +//sys FreeEnvironmentStrings(envs *uint16) (err error) = kernel32.FreeEnvironmentStringsW +//sys GetEnvironmentVariable(name *uint16, buffer *uint16, size uint32) (n uint32, err error) = kernel32.GetEnvironmentVariableW +//sys SetEnvironmentVariable(name *uint16, value *uint16) (err error) = kernel32.SetEnvironmentVariableW +//sys SetFileTime(handle Handle, ctime *Filetime, atime *Filetime, wtime *Filetime) (err error) +//sys GetFileAttributes(name *uint16) (attrs uint32, err error) [failretval==INVALID_FILE_ATTRIBUTES] = kernel32.GetFileAttributesW +//sys SetFileAttributes(name *uint16, attrs uint32) (err error) = kernel32.SetFileAttributesW +//sys GetFileAttributesEx(name *uint16, level uint32, info *byte) (err error) = kernel32.GetFileAttributesExW +//sys GetCommandLine() (cmd *uint16) = kernel32.GetCommandLineW +//sys CommandLineToArgv(cmd *uint16, argc *int32) (argv *[8192]*[8192]uint16, err error) [failretval==nil] = shell32.CommandLineToArgvW +//sys LocalFree(hmem Handle) (handle Handle, err error) [failretval!=0] +//sys SetHandleInformation(handle Handle, mask uint32, flags uint32) (err error) +//sys FlushFileBuffers(handle Handle) (err error) +//sys GetFullPathName(path *uint16, buflen uint32, buf *uint16, fname **uint16) (n uint32, err error) = kernel32.GetFullPathNameW +//sys 
GetLongPathName(path *uint16, buf *uint16, buflen uint32) (n uint32, err error) = kernel32.GetLongPathNameW +//sys GetShortPathName(longpath *uint16, shortpath *uint16, buflen uint32) (n uint32, err error) = kernel32.GetShortPathNameW +//sys CreateFileMapping(fhandle Handle, sa *SecurityAttributes, prot uint32, maxSizeHigh uint32, maxSizeLow uint32, name *uint16) (handle Handle, err error) = kernel32.CreateFileMappingW +//sys MapViewOfFile(handle Handle, access uint32, offsetHigh uint32, offsetLow uint32, length uintptr) (addr uintptr, err error) +//sys UnmapViewOfFile(addr uintptr) (err error) +//sys FlushViewOfFile(addr uintptr, length uintptr) (err error) +//sys VirtualLock(addr uintptr, length uintptr) (err error) +//sys VirtualUnlock(addr uintptr, length uintptr) (err error) +//sys VirtualAlloc(address uintptr, size uintptr, alloctype uint32, protect uint32) (value uintptr, err error) = kernel32.VirtualAlloc +//sys VirtualFree(address uintptr, size uintptr, freetype uint32) (err error) = kernel32.VirtualFree +//sys VirtualProtect(address uintptr, size uintptr, newprotect uint32, oldprotect *uint32) (err error) = kernel32.VirtualProtect +//sys TransmitFile(s Handle, handle Handle, bytesToWrite uint32, bytsPerSend uint32, overlapped *Overlapped, transmitFileBuf *TransmitFileBuffers, flags uint32) (err error) = mswsock.TransmitFile +//sys ReadDirectoryChanges(handle Handle, buf *byte, buflen uint32, watchSubTree bool, mask uint32, retlen *uint32, overlapped *Overlapped, completionRoutine uintptr) (err error) = kernel32.ReadDirectoryChangesW +//sys CertOpenSystemStore(hprov Handle, name *uint16) (store Handle, err error) = crypt32.CertOpenSystemStoreW +//sys CertOpenStore(storeProvider uintptr, msgAndCertEncodingType uint32, cryptProv uintptr, flags uint32, para uintptr) (handle Handle, err error) [failretval==InvalidHandle] = crypt32.CertOpenStore +//sys CertEnumCertificatesInStore(store Handle, prevContext *CertContext) (context *CertContext, err error) [failretval==nil] = crypt32.CertEnumCertificatesInStore +//sys CertAddCertificateContextToStore(store Handle, certContext *CertContext, addDisposition uint32, storeContext **CertContext) (err error) = crypt32.CertAddCertificateContextToStore +//sys CertCloseStore(store Handle, flags uint32) (err error) = crypt32.CertCloseStore +//sys CertGetCertificateChain(engine Handle, leaf *CertContext, time *Filetime, additionalStore Handle, para *CertChainPara, flags uint32, reserved uintptr, chainCtx **CertChainContext) (err error) = crypt32.CertGetCertificateChain +//sys CertFreeCertificateChain(ctx *CertChainContext) = crypt32.CertFreeCertificateChain +//sys CertCreateCertificateContext(certEncodingType uint32, certEncoded *byte, encodedLen uint32) (context *CertContext, err error) [failretval==nil] = crypt32.CertCreateCertificateContext +//sys CertFreeCertificateContext(ctx *CertContext) (err error) = crypt32.CertFreeCertificateContext +//sys CertVerifyCertificateChainPolicy(policyOID uintptr, chain *CertChainContext, para *CertChainPolicyPara, status *CertChainPolicyStatus) (err error) = crypt32.CertVerifyCertificateChainPolicy +//sys RegOpenKeyEx(key Handle, subkey *uint16, options uint32, desiredAccess uint32, result *Handle) (regerrno error) = advapi32.RegOpenKeyExW +//sys RegCloseKey(key Handle) (regerrno error) = advapi32.RegCloseKey +//sys RegQueryInfoKey(key Handle, class *uint16, classLen *uint32, reserved *uint32, subkeysLen *uint32, maxSubkeyLen *uint32, maxClassLen *uint32, valuesLen *uint32, maxValueNameLen *uint32, maxValueLen 
*uint32, saLen *uint32, lastWriteTime *Filetime) (regerrno error) = advapi32.RegQueryInfoKeyW +//sys RegEnumKeyEx(key Handle, index uint32, name *uint16, nameLen *uint32, reserved *uint32, class *uint16, classLen *uint32, lastWriteTime *Filetime) (regerrno error) = advapi32.RegEnumKeyExW +//sys RegQueryValueEx(key Handle, name *uint16, reserved *uint32, valtype *uint32, buf *byte, buflen *uint32) (regerrno error) = advapi32.RegQueryValueExW +//sys getCurrentProcessId() (pid uint32) = kernel32.GetCurrentProcessId +//sys GetConsoleMode(console Handle, mode *uint32) (err error) = kernel32.GetConsoleMode +//sys SetConsoleMode(console Handle, mode uint32) (err error) = kernel32.SetConsoleMode +//sys GetConsoleScreenBufferInfo(console Handle, info *ConsoleScreenBufferInfo) (err error) = kernel32.GetConsoleScreenBufferInfo +//sys WriteConsole(console Handle, buf *uint16, towrite uint32, written *uint32, reserved *byte) (err error) = kernel32.WriteConsoleW +//sys ReadConsole(console Handle, buf *uint16, toread uint32, read *uint32, inputControl *byte) (err error) = kernel32.ReadConsoleW +//sys CreateToolhelp32Snapshot(flags uint32, processId uint32) (handle Handle, err error) [failretval==InvalidHandle] = kernel32.CreateToolhelp32Snapshot +//sys Process32First(snapshot Handle, procEntry *ProcessEntry32) (err error) = kernel32.Process32FirstW +//sys Process32Next(snapshot Handle, procEntry *ProcessEntry32) (err error) = kernel32.Process32NextW +//sys DeviceIoControl(handle Handle, ioControlCode uint32, inBuffer *byte, inBufferSize uint32, outBuffer *byte, outBufferSize uint32, bytesReturned *uint32, overlapped *Overlapped) (err error) +// This function returns 1 byte BOOLEAN rather than the 4 byte BOOL. +//sys CreateSymbolicLink(symlinkfilename *uint16, targetfilename *uint16, flags uint32) (err error) [failretval&0xff==0] = CreateSymbolicLinkW +//sys CreateHardLink(filename *uint16, existingfilename *uint16, reserved uintptr) (err error) [failretval&0xff==0] = CreateHardLinkW +//sys GetCurrentThreadId() (id uint32) +//sys CreateEvent(eventAttrs *SecurityAttributes, manualReset uint32, initialState uint32, name *uint16) (handle Handle, err error) = kernel32.CreateEventW +//sys CreateEventEx(eventAttrs *SecurityAttributes, name *uint16, flags uint32, desiredAccess uint32) (handle Handle, err error) = kernel32.CreateEventExW +//sys OpenEvent(desiredAccess uint32, inheritHandle bool, name *uint16) (handle Handle, err error) = kernel32.OpenEventW +//sys SetEvent(event Handle) (err error) = kernel32.SetEvent +//sys ResetEvent(event Handle) (err error) = kernel32.ResetEvent +//sys PulseEvent(event Handle) (err error) = kernel32.PulseEvent + +// Volume Management Functions +//sys DefineDosDevice(flags uint32, deviceName *uint16, targetPath *uint16) (err error) = DefineDosDeviceW +//sys DeleteVolumeMountPoint(volumeMountPoint *uint16) (err error) = DeleteVolumeMountPointW +//sys FindFirstVolume(volumeName *uint16, bufferLength uint32) (handle Handle, err error) [failretval==InvalidHandle] = FindFirstVolumeW +//sys FindFirstVolumeMountPoint(rootPathName *uint16, volumeMountPoint *uint16, bufferLength uint32) (handle Handle, err error) [failretval==InvalidHandle] = FindFirstVolumeMountPointW +//sys FindNextVolume(findVolume Handle, volumeName *uint16, bufferLength uint32) (err error) = FindNextVolumeW +//sys FindNextVolumeMountPoint(findVolumeMountPoint Handle, volumeMountPoint *uint16, bufferLength uint32) (err error) = FindNextVolumeMountPointW +//sys FindVolumeClose(findVolume Handle) (err error) +//sys 
FindVolumeMountPointClose(findVolumeMountPoint Handle) (err error) +//sys GetDriveType(rootPathName *uint16) (driveType uint32) = GetDriveTypeW +//sys GetLogicalDrives() (drivesBitMask uint32, err error) [failretval==0] +//sys GetLogicalDriveStrings(bufferLength uint32, buffer *uint16) (n uint32, err error) [failretval==0] = GetLogicalDriveStringsW +//sys GetVolumeInformation(rootPathName *uint16, volumeNameBuffer *uint16, volumeNameSize uint32, volumeNameSerialNumber *uint32, maximumComponentLength *uint32, fileSystemFlags *uint32, fileSystemNameBuffer *uint16, fileSystemNameSize uint32) (err error) = GetVolumeInformationW +//sys GetVolumeInformationByHandle(file Handle, volumeNameBuffer *uint16, volumeNameSize uint32, volumeNameSerialNumber *uint32, maximumComponentLength *uint32, fileSystemFlags *uint32, fileSystemNameBuffer *uint16, fileSystemNameSize uint32) (err error) = GetVolumeInformationByHandleW +//sys GetVolumeNameForVolumeMountPoint(volumeMountPoint *uint16, volumeName *uint16, bufferlength uint32) (err error) = GetVolumeNameForVolumeMountPointW +//sys GetVolumePathName(fileName *uint16, volumePathName *uint16, bufferLength uint32) (err error) = GetVolumePathNameW +//sys GetVolumePathNamesForVolumeName(volumeName *uint16, volumePathNames *uint16, bufferLength uint32, returnLength *uint32) (err error) = GetVolumePathNamesForVolumeNameW +//sys QueryDosDevice(deviceName *uint16, targetPath *uint16, max uint32) (n uint32, err error) [failretval==0] = QueryDosDeviceW +//sys SetVolumeLabel(rootPathName *uint16, volumeName *uint16) (err error) = SetVolumeLabelW +//sys SetVolumeMountPoint(volumeMountPoint *uint16, volumeName *uint16) (err error) = SetVolumeMountPointW + +// syscall interface implementation for other packages + +// GetProcAddressByOrdinal retrieves the address of the exported +// function from module by ordinal. 
+func GetProcAddressByOrdinal(module Handle, ordinal uintptr) (proc uintptr, err error) { + r0, _, e1 := syscall.Syscall(procGetProcAddress.Addr(), 2, uintptr(module), ordinal, 0) + proc = uintptr(r0) + if proc == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func Exit(code int) { ExitProcess(uint32(code)) } + +func makeInheritSa() *SecurityAttributes { + var sa SecurityAttributes + sa.Length = uint32(unsafe.Sizeof(sa)) + sa.InheritHandle = 1 + return &sa +} + +func Open(path string, mode int, perm uint32) (fd Handle, err error) { + if len(path) == 0 { + return InvalidHandle, ERROR_FILE_NOT_FOUND + } + pathp, err := UTF16PtrFromString(path) + if err != nil { + return InvalidHandle, err + } + var access uint32 + switch mode & (O_RDONLY | O_WRONLY | O_RDWR) { + case O_RDONLY: + access = GENERIC_READ + case O_WRONLY: + access = GENERIC_WRITE + case O_RDWR: + access = GENERIC_READ | GENERIC_WRITE + } + if mode&O_CREAT != 0 { + access |= GENERIC_WRITE + } + if mode&O_APPEND != 0 { + access &^= GENERIC_WRITE + access |= FILE_APPEND_DATA + } + sharemode := uint32(FILE_SHARE_READ | FILE_SHARE_WRITE) + var sa *SecurityAttributes + if mode&O_CLOEXEC == 0 { + sa = makeInheritSa() + } + var createmode uint32 + switch { + case mode&(O_CREAT|O_EXCL) == (O_CREAT | O_EXCL): + createmode = CREATE_NEW + case mode&(O_CREAT|O_TRUNC) == (O_CREAT | O_TRUNC): + createmode = CREATE_ALWAYS + case mode&O_CREAT == O_CREAT: + createmode = OPEN_ALWAYS + case mode&O_TRUNC == O_TRUNC: + createmode = TRUNCATE_EXISTING + default: + createmode = OPEN_EXISTING + } + h, e := CreateFile(pathp, access, sharemode, sa, createmode, FILE_ATTRIBUTE_NORMAL, 0) + return h, e +} + +func Read(fd Handle, p []byte) (n int, err error) { + var done uint32 + e := ReadFile(fd, p, &done, nil) + if e != nil { + if e == ERROR_BROKEN_PIPE { + // NOTE(brainman): work around ERROR_BROKEN_PIPE is returned on reading EOF from stdin + return 0, nil + } + return 0, e + } + if raceenabled { + if done > 0 { + raceWriteRange(unsafe.Pointer(&p[0]), int(done)) + } + raceAcquire(unsafe.Pointer(&ioSync)) + } + return int(done), nil +} + +func Write(fd Handle, p []byte) (n int, err error) { + if raceenabled { + raceReleaseMerge(unsafe.Pointer(&ioSync)) + } + var done uint32 + e := WriteFile(fd, p, &done, nil) + if e != nil { + return 0, e + } + if raceenabled && done > 0 { + raceReadRange(unsafe.Pointer(&p[0]), int(done)) + } + return int(done), nil +} + +var ioSync int64 + +func Seek(fd Handle, offset int64, whence int) (newoffset int64, err error) { + var w uint32 + switch whence { + case 0: + w = FILE_BEGIN + case 1: + w = FILE_CURRENT + case 2: + w = FILE_END + } + hi := int32(offset >> 32) + lo := int32(offset) + // use GetFileType to check pipe, pipe can't do seek + ft, _ := GetFileType(fd) + if ft == FILE_TYPE_PIPE { + return 0, syscall.EPIPE + } + rlo, e := SetFilePointer(fd, lo, &hi, w) + if e != nil { + return 0, e + } + return int64(hi)<<32 + int64(rlo), nil +} + +func Close(fd Handle) (err error) { + return CloseHandle(fd) +} + +var ( + Stdin = getStdHandle(STD_INPUT_HANDLE) + Stdout = getStdHandle(STD_OUTPUT_HANDLE) + Stderr = getStdHandle(STD_ERROR_HANDLE) +) + +func getStdHandle(stdhandle uint32) (fd Handle) { + r, _ := GetStdHandle(stdhandle) + CloseOnExec(r) + return r +} + +const ImplementsGetwd = true + +func Getwd() (wd string, err error) { + b := make([]uint16, 300) + n, e := GetCurrentDirectory(uint32(len(b)), &b[0]) + if e != nil { + return "", e + } + return 
string(utf16.Decode(b[0:n])), nil +} + +func Chdir(path string) (err error) { + pathp, err := UTF16PtrFromString(path) + if err != nil { + return err + } + return SetCurrentDirectory(pathp) +} + +func Mkdir(path string, mode uint32) (err error) { + pathp, err := UTF16PtrFromString(path) + if err != nil { + return err + } + return CreateDirectory(pathp, nil) +} + +func Rmdir(path string) (err error) { + pathp, err := UTF16PtrFromString(path) + if err != nil { + return err + } + return RemoveDirectory(pathp) +} + +func Unlink(path string) (err error) { + pathp, err := UTF16PtrFromString(path) + if err != nil { + return err + } + return DeleteFile(pathp) +} + +func Rename(oldpath, newpath string) (err error) { + from, err := UTF16PtrFromString(oldpath) + if err != nil { + return err + } + to, err := UTF16PtrFromString(newpath) + if err != nil { + return err + } + return MoveFileEx(from, to, MOVEFILE_REPLACE_EXISTING) +} + +func ComputerName() (name string, err error) { + var n uint32 = MAX_COMPUTERNAME_LENGTH + 1 + b := make([]uint16, n) + e := GetComputerName(&b[0], &n) + if e != nil { + return "", e + } + return string(utf16.Decode(b[0:n])), nil +} + +func Ftruncate(fd Handle, length int64) (err error) { + curoffset, e := Seek(fd, 0, 1) + if e != nil { + return e + } + defer Seek(fd, curoffset, 0) + _, e = Seek(fd, length, 0) + if e != nil { + return e + } + e = SetEndOfFile(fd) + if e != nil { + return e + } + return nil +} + +func Gettimeofday(tv *Timeval) (err error) { + var ft Filetime + GetSystemTimeAsFileTime(&ft) + *tv = NsecToTimeval(ft.Nanoseconds()) + return nil +} + +func Pipe(p []Handle) (err error) { + if len(p) != 2 { + return syscall.EINVAL + } + var r, w Handle + e := CreatePipe(&r, &w, makeInheritSa(), 0) + if e != nil { + return e + } + p[0] = r + p[1] = w + return nil +} + +func Utimes(path string, tv []Timeval) (err error) { + if len(tv) != 2 { + return syscall.EINVAL + } + pathp, e := UTF16PtrFromString(path) + if e != nil { + return e + } + h, e := CreateFile(pathp, + FILE_WRITE_ATTRIBUTES, FILE_SHARE_WRITE, nil, + OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, 0) + if e != nil { + return e + } + defer Close(h) + a := NsecToFiletime(tv[0].Nanoseconds()) + w := NsecToFiletime(tv[1].Nanoseconds()) + return SetFileTime(h, nil, &a, &w) +} + +func UtimesNano(path string, ts []Timespec) (err error) { + if len(ts) != 2 { + return syscall.EINVAL + } + pathp, e := UTF16PtrFromString(path) + if e != nil { + return e + } + h, e := CreateFile(pathp, + FILE_WRITE_ATTRIBUTES, FILE_SHARE_WRITE, nil, + OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, 0) + if e != nil { + return e + } + defer Close(h) + a := NsecToFiletime(TimespecToNsec(ts[0])) + w := NsecToFiletime(TimespecToNsec(ts[1])) + return SetFileTime(h, nil, &a, &w) +} + +func Fsync(fd Handle) (err error) { + return FlushFileBuffers(fd) +} + +func Chmod(path string, mode uint32) (err error) { + if mode == 0 { + return syscall.EINVAL + } + p, e := UTF16PtrFromString(path) + if e != nil { + return e + } + attrs, e := GetFileAttributes(p) + if e != nil { + return e + } + if mode&S_IWRITE != 0 { + attrs &^= FILE_ATTRIBUTE_READONLY + } else { + attrs |= FILE_ATTRIBUTE_READONLY + } + return SetFileAttributes(p, attrs) +} + +func LoadGetSystemTimePreciseAsFileTime() error { + return procGetSystemTimePreciseAsFileTime.Find() +} + +func LoadCancelIoEx() error { + return procCancelIoEx.Find() +} + +func LoadSetFileCompletionNotificationModes() error { + return procSetFileCompletionNotificationModes.Find() +} + +// net api calls + +const 
socket_error = uintptr(^uint32(0)) + +//sys WSAStartup(verreq uint32, data *WSAData) (sockerr error) = ws2_32.WSAStartup +//sys WSACleanup() (err error) [failretval==socket_error] = ws2_32.WSACleanup +//sys WSAIoctl(s Handle, iocc uint32, inbuf *byte, cbif uint32, outbuf *byte, cbob uint32, cbbr *uint32, overlapped *Overlapped, completionRoutine uintptr) (err error) [failretval==socket_error] = ws2_32.WSAIoctl +//sys socket(af int32, typ int32, protocol int32) (handle Handle, err error) [failretval==InvalidHandle] = ws2_32.socket +//sys Setsockopt(s Handle, level int32, optname int32, optval *byte, optlen int32) (err error) [failretval==socket_error] = ws2_32.setsockopt +//sys Getsockopt(s Handle, level int32, optname int32, optval *byte, optlen *int32) (err error) [failretval==socket_error] = ws2_32.getsockopt +//sys bind(s Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socket_error] = ws2_32.bind +//sys connect(s Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socket_error] = ws2_32.connect +//sys getsockname(s Handle, rsa *RawSockaddrAny, addrlen *int32) (err error) [failretval==socket_error] = ws2_32.getsockname +//sys getpeername(s Handle, rsa *RawSockaddrAny, addrlen *int32) (err error) [failretval==socket_error] = ws2_32.getpeername +//sys listen(s Handle, backlog int32) (err error) [failretval==socket_error] = ws2_32.listen +//sys shutdown(s Handle, how int32) (err error) [failretval==socket_error] = ws2_32.shutdown +//sys Closesocket(s Handle) (err error) [failretval==socket_error] = ws2_32.closesocket +//sys AcceptEx(ls Handle, as Handle, buf *byte, rxdatalen uint32, laddrlen uint32, raddrlen uint32, recvd *uint32, overlapped *Overlapped) (err error) = mswsock.AcceptEx +//sys GetAcceptExSockaddrs(buf *byte, rxdatalen uint32, laddrlen uint32, raddrlen uint32, lrsa **RawSockaddrAny, lrsalen *int32, rrsa **RawSockaddrAny, rrsalen *int32) = mswsock.GetAcceptExSockaddrs +//sys WSARecv(s Handle, bufs *WSABuf, bufcnt uint32, recvd *uint32, flags *uint32, overlapped *Overlapped, croutine *byte) (err error) [failretval==socket_error] = ws2_32.WSARecv +//sys WSASend(s Handle, bufs *WSABuf, bufcnt uint32, sent *uint32, flags uint32, overlapped *Overlapped, croutine *byte) (err error) [failretval==socket_error] = ws2_32.WSASend +//sys WSARecvFrom(s Handle, bufs *WSABuf, bufcnt uint32, recvd *uint32, flags *uint32, from *RawSockaddrAny, fromlen *int32, overlapped *Overlapped, croutine *byte) (err error) [failretval==socket_error] = ws2_32.WSARecvFrom +//sys WSASendTo(s Handle, bufs *WSABuf, bufcnt uint32, sent *uint32, flags uint32, to *RawSockaddrAny, tolen int32, overlapped *Overlapped, croutine *byte) (err error) [failretval==socket_error] = ws2_32.WSASendTo +//sys GetHostByName(name string) (h *Hostent, err error) [failretval==nil] = ws2_32.gethostbyname +//sys GetServByName(name string, proto string) (s *Servent, err error) [failretval==nil] = ws2_32.getservbyname +//sys Ntohs(netshort uint16) (u uint16) = ws2_32.ntohs +//sys GetProtoByName(name string) (p *Protoent, err error) [failretval==nil] = ws2_32.getprotobyname +//sys DnsQuery(name string, qtype uint16, options uint32, extra *byte, qrs **DNSRecord, pr *byte) (status error) = dnsapi.DnsQuery_W +//sys DnsRecordListFree(rl *DNSRecord, freetype uint32) = dnsapi.DnsRecordListFree +//sys DnsNameCompare(name1 *uint16, name2 *uint16) (same bool) = dnsapi.DnsNameCompare_W +//sys GetAddrInfoW(nodename *uint16, servicename *uint16, hints *AddrinfoW, result **AddrinfoW) (sockerr error) = 
ws2_32.GetAddrInfoW +//sys FreeAddrInfoW(addrinfo *AddrinfoW) = ws2_32.FreeAddrInfoW +//sys GetIfEntry(pIfRow *MibIfRow) (errcode error) = iphlpapi.GetIfEntry +//sys GetAdaptersInfo(ai *IpAdapterInfo, ol *uint32) (errcode error) = iphlpapi.GetAdaptersInfo +//sys SetFileCompletionNotificationModes(handle Handle, flags uint8) (err error) = kernel32.SetFileCompletionNotificationModes +//sys WSAEnumProtocols(protocols *int32, protocolBuffer *WSAProtocolInfo, bufferLength *uint32) (n int32, err error) [failretval==-1] = ws2_32.WSAEnumProtocolsW +//sys GetAdaptersAddresses(family uint32, flags uint32, reserved uintptr, adapterAddresses *IpAdapterAddresses, sizePointer *uint32) (errcode error) = iphlpapi.GetAdaptersAddresses +//sys GetACP() (acp uint32) = kernel32.GetACP +//sys MultiByteToWideChar(codePage uint32, dwFlags uint32, str *byte, nstr int32, wchar *uint16, nwchar int32) (nwrite int32, err error) = kernel32.MultiByteToWideChar + +// For testing: clients can set this flag to force +// creation of IPv6 sockets to return EAFNOSUPPORT. +var SocketDisableIPv6 bool + +type RawSockaddrInet4 struct { + Family uint16 + Port uint16 + Addr [4]byte /* in_addr */ + Zero [8]uint8 +} + +type RawSockaddrInet6 struct { + Family uint16 + Port uint16 + Flowinfo uint32 + Addr [16]byte /* in6_addr */ + Scope_id uint32 +} + +type RawSockaddr struct { + Family uint16 + Data [14]int8 +} + +type RawSockaddrAny struct { + Addr RawSockaddr + Pad [96]int8 +} + +type Sockaddr interface { + sockaddr() (ptr unsafe.Pointer, len int32, err error) // lowercase; only we can define Sockaddrs +} + +type SockaddrInet4 struct { + Port int + Addr [4]byte + raw RawSockaddrInet4 +} + +func (sa *SockaddrInet4) sockaddr() (unsafe.Pointer, int32, error) { + if sa.Port < 0 || sa.Port > 0xFFFF { + return nil, 0, syscall.EINVAL + } + sa.raw.Family = AF_INET + p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) + p[0] = byte(sa.Port >> 8) + p[1] = byte(sa.Port) + for i := 0; i < len(sa.Addr); i++ { + sa.raw.Addr[i] = sa.Addr[i] + } + return unsafe.Pointer(&sa.raw), int32(unsafe.Sizeof(sa.raw)), nil +} + +type SockaddrInet6 struct { + Port int + ZoneId uint32 + Addr [16]byte + raw RawSockaddrInet6 +} + +func (sa *SockaddrInet6) sockaddr() (unsafe.Pointer, int32, error) { + if sa.Port < 0 || sa.Port > 0xFFFF { + return nil, 0, syscall.EINVAL + } + sa.raw.Family = AF_INET6 + p := (*[2]byte)(unsafe.Pointer(&sa.raw.Port)) + p[0] = byte(sa.Port >> 8) + p[1] = byte(sa.Port) + sa.raw.Scope_id = sa.ZoneId + for i := 0; i < len(sa.Addr); i++ { + sa.raw.Addr[i] = sa.Addr[i] + } + return unsafe.Pointer(&sa.raw), int32(unsafe.Sizeof(sa.raw)), nil +} + +type SockaddrUnix struct { + Name string +} + +func (sa *SockaddrUnix) sockaddr() (unsafe.Pointer, int32, error) { + // TODO(brainman): implement SockaddrUnix.sockaddr() + return nil, 0, syscall.EWINDOWS +} + +func (rsa *RawSockaddrAny) Sockaddr() (Sockaddr, error) { + switch rsa.Addr.Family { + case AF_UNIX: + return nil, syscall.EWINDOWS + + case AF_INET: + pp := (*RawSockaddrInet4)(unsafe.Pointer(rsa)) + sa := new(SockaddrInet4) + p := (*[2]byte)(unsafe.Pointer(&pp.Port)) + sa.Port = int(p[0])<<8 + int(p[1]) + for i := 0; i < len(sa.Addr); i++ { + sa.Addr[i] = pp.Addr[i] + } + return sa, nil + + case AF_INET6: + pp := (*RawSockaddrInet6)(unsafe.Pointer(rsa)) + sa := new(SockaddrInet6) + p := (*[2]byte)(unsafe.Pointer(&pp.Port)) + sa.Port = int(p[0])<<8 + int(p[1]) + sa.ZoneId = pp.Scope_id + for i := 0; i < len(sa.Addr); i++ { + sa.Addr[i] = pp.Addr[i] + } + return sa, nil + } + return nil, 
syscall.EAFNOSUPPORT +} + +func Socket(domain, typ, proto int) (fd Handle, err error) { + if domain == AF_INET6 && SocketDisableIPv6 { + return InvalidHandle, syscall.EAFNOSUPPORT + } + return socket(int32(domain), int32(typ), int32(proto)) +} + +func SetsockoptInt(fd Handle, level, opt int, value int) (err error) { + v := int32(value) + return Setsockopt(fd, int32(level), int32(opt), (*byte)(unsafe.Pointer(&v)), int32(unsafe.Sizeof(v))) +} + +func Bind(fd Handle, sa Sockaddr) (err error) { + ptr, n, err := sa.sockaddr() + if err != nil { + return err + } + return bind(fd, ptr, n) +} + +func Connect(fd Handle, sa Sockaddr) (err error) { + ptr, n, err := sa.sockaddr() + if err != nil { + return err + } + return connect(fd, ptr, n) +} + +func Getsockname(fd Handle) (sa Sockaddr, err error) { + var rsa RawSockaddrAny + l := int32(unsafe.Sizeof(rsa)) + if err = getsockname(fd, &rsa, &l); err != nil { + return + } + return rsa.Sockaddr() +} + +func Getpeername(fd Handle) (sa Sockaddr, err error) { + var rsa RawSockaddrAny + l := int32(unsafe.Sizeof(rsa)) + if err = getpeername(fd, &rsa, &l); err != nil { + return + } + return rsa.Sockaddr() +} + +func Listen(s Handle, n int) (err error) { + return listen(s, int32(n)) +} + +func Shutdown(fd Handle, how int) (err error) { + return shutdown(fd, int32(how)) +} + +func WSASendto(s Handle, bufs *WSABuf, bufcnt uint32, sent *uint32, flags uint32, to Sockaddr, overlapped *Overlapped, croutine *byte) (err error) { + rsa, l, err := to.sockaddr() + if err != nil { + return err + } + return WSASendTo(s, bufs, bufcnt, sent, flags, (*RawSockaddrAny)(unsafe.Pointer(rsa)), l, overlapped, croutine) +} + +func LoadGetAddrInfo() error { + return procGetAddrInfoW.Find() +} + +var connectExFunc struct { + once sync.Once + addr uintptr + err error +} + +func LoadConnectEx() error { + connectExFunc.once.Do(func() { + var s Handle + s, connectExFunc.err = Socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) + if connectExFunc.err != nil { + return + } + defer CloseHandle(s) + var n uint32 + connectExFunc.err = WSAIoctl(s, + SIO_GET_EXTENSION_FUNCTION_POINTER, + (*byte)(unsafe.Pointer(&WSAID_CONNECTEX)), + uint32(unsafe.Sizeof(WSAID_CONNECTEX)), + (*byte)(unsafe.Pointer(&connectExFunc.addr)), + uint32(unsafe.Sizeof(connectExFunc.addr)), + &n, nil, 0) + }) + return connectExFunc.err +} + +func connectEx(s Handle, name unsafe.Pointer, namelen int32, sendBuf *byte, sendDataLen uint32, bytesSent *uint32, overlapped *Overlapped) (err error) { + r1, _, e1 := syscall.Syscall9(connectExFunc.addr, 7, uintptr(s), uintptr(name), uintptr(namelen), uintptr(unsafe.Pointer(sendBuf)), uintptr(sendDataLen), uintptr(unsafe.Pointer(bytesSent)), uintptr(unsafe.Pointer(overlapped)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = error(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ConnectEx(fd Handle, sa Sockaddr, sendBuf *byte, sendDataLen uint32, bytesSent *uint32, overlapped *Overlapped) error { + err := LoadConnectEx() + if err != nil { + return errorspkg.New("failed to find ConnectEx: " + err.Error()) + } + ptr, n, err := sa.sockaddr() + if err != nil { + return err + } + return connectEx(fd, ptr, n, sendBuf, sendDataLen, bytesSent, overlapped) +} + +var sendRecvMsgFunc struct { + once sync.Once + sendAddr uintptr + recvAddr uintptr + err error +} + +func loadWSASendRecvMsg() error { + sendRecvMsgFunc.once.Do(func() { + var s Handle + s, sendRecvMsgFunc.err = Socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP) + if sendRecvMsgFunc.err != nil { + return + } + defer CloseHandle(s) + var n 
uint32 + sendRecvMsgFunc.err = WSAIoctl(s, + SIO_GET_EXTENSION_FUNCTION_POINTER, + (*byte)(unsafe.Pointer(&WSAID_WSARECVMSG)), + uint32(unsafe.Sizeof(WSAID_WSARECVMSG)), + (*byte)(unsafe.Pointer(&sendRecvMsgFunc.recvAddr)), + uint32(unsafe.Sizeof(sendRecvMsgFunc.recvAddr)), + &n, nil, 0) + if sendRecvMsgFunc.err != nil { + return + } + sendRecvMsgFunc.err = WSAIoctl(s, + SIO_GET_EXTENSION_FUNCTION_POINTER, + (*byte)(unsafe.Pointer(&WSAID_WSASENDMSG)), + uint32(unsafe.Sizeof(WSAID_WSASENDMSG)), + (*byte)(unsafe.Pointer(&sendRecvMsgFunc.sendAddr)), + uint32(unsafe.Sizeof(sendRecvMsgFunc.sendAddr)), + &n, nil, 0) + }) + return sendRecvMsgFunc.err +} + +func WSASendMsg(fd Handle, msg *WSAMsg, flags uint32, bytesSent *uint32, overlapped *Overlapped, croutine *byte) error { + err := loadWSASendRecvMsg() + if err != nil { + return err + } + r1, _, e1 := syscall.Syscall6(sendRecvMsgFunc.sendAddr, 6, uintptr(fd), uintptr(unsafe.Pointer(msg)), uintptr(flags), uintptr(unsafe.Pointer(bytesSent)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(croutine))) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return err +} + +func WSARecvMsg(fd Handle, msg *WSAMsg, bytesReceived *uint32, overlapped *Overlapped, croutine *byte) error { + err := loadWSASendRecvMsg() + if err != nil { + return err + } + r1, _, e1 := syscall.Syscall6(sendRecvMsgFunc.recvAddr, 5, uintptr(fd), uintptr(unsafe.Pointer(msg)), uintptr(unsafe.Pointer(bytesReceived)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(croutine)), 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return err +} + +// Invented structures to support what package os expects. +type Rusage struct { + CreationTime Filetime + ExitTime Filetime + KernelTime Filetime + UserTime Filetime +} + +type WaitStatus struct { + ExitCode uint32 +} + +func (w WaitStatus) Exited() bool { return true } + +func (w WaitStatus) ExitStatus() int { return int(w.ExitCode) } + +func (w WaitStatus) Signal() Signal { return -1 } + +func (w WaitStatus) CoreDump() bool { return false } + +func (w WaitStatus) Stopped() bool { return false } + +func (w WaitStatus) Continued() bool { return false } + +func (w WaitStatus) StopSignal() Signal { return -1 } + +func (w WaitStatus) Signaled() bool { return false } + +func (w WaitStatus) TrapCause() int { return -1 } + +// Timespec is an invented structure on Windows, but here for +// consistency with the corresponding package for other operating systems. +type Timespec struct { + Sec int64 + Nsec int64 +} + +func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } + +func NsecToTimespec(nsec int64) (ts Timespec) { + ts.Sec = nsec / 1e9 + ts.Nsec = nsec % 1e9 + return +} + +// TODO(brainman): fix all needed for net + +func Accept(fd Handle) (nfd Handle, sa Sockaddr, err error) { return 0, nil, syscall.EWINDOWS } +func Recvfrom(fd Handle, p []byte, flags int) (n int, from Sockaddr, err error) { + return 0, nil, syscall.EWINDOWS +} +func Sendto(fd Handle, p []byte, flags int, to Sockaddr) (err error) { return syscall.EWINDOWS } +func SetsockoptTimeval(fd Handle, level, opt int, tv *Timeval) (err error) { return syscall.EWINDOWS } + +// The Linger struct is wrong but we only noticed after Go 1. +// sysLinger is the real system call structure. + +// BUG(brainman): The definition of Linger is not appropriate for direct use +// with Setsockopt and Getsockopt. 
+// Use SetsockoptLinger instead. + +type Linger struct { + Onoff int32 + Linger int32 +} + +type sysLinger struct { + Onoff uint16 + Linger uint16 +} + +type IPMreq struct { + Multiaddr [4]byte /* in_addr */ + Interface [4]byte /* in_addr */ +} + +type IPv6Mreq struct { + Multiaddr [16]byte /* in6_addr */ + Interface uint32 +} + +func GetsockoptInt(fd Handle, level, opt int) (int, error) { return -1, syscall.EWINDOWS } + +func SetsockoptLinger(fd Handle, level, opt int, l *Linger) (err error) { + sys := sysLinger{Onoff: uint16(l.Onoff), Linger: uint16(l.Linger)} + return Setsockopt(fd, int32(level), int32(opt), (*byte)(unsafe.Pointer(&sys)), int32(unsafe.Sizeof(sys))) +} + +func SetsockoptInet4Addr(fd Handle, level, opt int, value [4]byte) (err error) { + return Setsockopt(fd, int32(level), int32(opt), (*byte)(unsafe.Pointer(&value[0])), 4) +} +func SetsockoptIPMreq(fd Handle, level, opt int, mreq *IPMreq) (err error) { + return Setsockopt(fd, int32(level), int32(opt), (*byte)(unsafe.Pointer(mreq)), int32(unsafe.Sizeof(*mreq))) +} +func SetsockoptIPv6Mreq(fd Handle, level, opt int, mreq *IPv6Mreq) (err error) { + return syscall.EWINDOWS +} + +func Getpid() (pid int) { return int(getCurrentProcessId()) } + +func FindFirstFile(name *uint16, data *Win32finddata) (handle Handle, err error) { + // NOTE(rsc): The Win32finddata struct is wrong for the system call: + // the two paths are each one uint16 short. Use the correct struct, + // a win32finddata1, and then copy the results out. + // There is no loss of expressivity here, because the final + // uint16, if it is used, is supposed to be a NUL, and Go doesn't need that. + // For Go 1.1, we might avoid the allocation of win32finddata1 here + // by adding a final Bug [2]uint16 field to the struct and then + // adjusting the fields in the result directly. 
+ var data1 win32finddata1 + handle, err = findFirstFile1(name, &data1) + if err == nil { + copyFindData(data, &data1) + } + return +} + +func FindNextFile(handle Handle, data *Win32finddata) (err error) { + var data1 win32finddata1 + err = findNextFile1(handle, &data1) + if err == nil { + copyFindData(data, &data1) + } + return +} + +func getProcessEntry(pid int) (*ProcessEntry32, error) { + snapshot, err := CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0) + if err != nil { + return nil, err + } + defer CloseHandle(snapshot) + var procEntry ProcessEntry32 + procEntry.Size = uint32(unsafe.Sizeof(procEntry)) + if err = Process32First(snapshot, &procEntry); err != nil { + return nil, err + } + for { + if procEntry.ProcessID == uint32(pid) { + return &procEntry, nil + } + err = Process32Next(snapshot, &procEntry) + if err != nil { + return nil, err + } + } +} + +func Getppid() (ppid int) { + pe, err := getProcessEntry(Getpid()) + if err != nil { + return -1 + } + return int(pe.ParentProcessID) +} + +// TODO(brainman): fix all needed for os +func Fchdir(fd Handle) (err error) { return syscall.EWINDOWS } +func Link(oldpath, newpath string) (err error) { return syscall.EWINDOWS } +func Symlink(path, link string) (err error) { return syscall.EWINDOWS } + +func Fchmod(fd Handle, mode uint32) (err error) { return syscall.EWINDOWS } +func Chown(path string, uid int, gid int) (err error) { return syscall.EWINDOWS } +func Lchown(path string, uid int, gid int) (err error) { return syscall.EWINDOWS } +func Fchown(fd Handle, uid int, gid int) (err error) { return syscall.EWINDOWS } + +func Getuid() (uid int) { return -1 } +func Geteuid() (euid int) { return -1 } +func Getgid() (gid int) { return -1 } +func Getegid() (egid int) { return -1 } +func Getgroups() (gids []int, err error) { return nil, syscall.EWINDOWS } + +type Signal int + +func (s Signal) Signal() {} + +func (s Signal) String() string { + if 0 <= s && int(s) < len(signals) { + str := signals[s] + if str != "" { + return str + } + } + return "signal " + itoa(int(s)) +} + +func LoadCreateSymbolicLink() error { + return procCreateSymbolicLinkW.Find() +} + +// Readlink returns the destination of the named symbolic link. 
+func Readlink(path string, buf []byte) (n int, err error) { + fd, err := CreateFile(StringToUTF16Ptr(path), GENERIC_READ, 0, nil, OPEN_EXISTING, + FILE_FLAG_OPEN_REPARSE_POINT|FILE_FLAG_BACKUP_SEMANTICS, 0) + if err != nil { + return -1, err + } + defer CloseHandle(fd) + + rdbbuf := make([]byte, MAXIMUM_REPARSE_DATA_BUFFER_SIZE) + var bytesReturned uint32 + err = DeviceIoControl(fd, FSCTL_GET_REPARSE_POINT, nil, 0, &rdbbuf[0], uint32(len(rdbbuf)), &bytesReturned, nil) + if err != nil { + return -1, err + } + + rdb := (*reparseDataBuffer)(unsafe.Pointer(&rdbbuf[0])) + var s string + switch rdb.ReparseTag { + case IO_REPARSE_TAG_SYMLINK: + data := (*symbolicLinkReparseBuffer)(unsafe.Pointer(&rdb.reparseBuffer)) + p := (*[0xffff]uint16)(unsafe.Pointer(&data.PathBuffer[0])) + s = UTF16ToString(p[data.PrintNameOffset/2 : (data.PrintNameLength-data.PrintNameOffset)/2]) + case IO_REPARSE_TAG_MOUNT_POINT: + data := (*mountPointReparseBuffer)(unsafe.Pointer(&rdb.reparseBuffer)) + p := (*[0xffff]uint16)(unsafe.Pointer(&data.PathBuffer[0])) + s = UTF16ToString(p[data.PrintNameOffset/2 : (data.PrintNameLength-data.PrintNameOffset)/2]) + default: + // the path is not a symlink or junction but another type of reparse + // point + return -1, syscall.ENOENT + } + n = copy(buf, []byte(s)) + + return n, nil +} diff --git a/vendor/golang.org/x/sys/windows/types_windows.go b/vendor/golang.org/x/sys/windows/types_windows.go new file mode 100644 index 000000000..52c2037b6 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/types_windows.go @@ -0,0 +1,1333 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package windows + +import "syscall" + +const ( + // Windows errors. + ERROR_FILE_NOT_FOUND syscall.Errno = 2 + ERROR_PATH_NOT_FOUND syscall.Errno = 3 + ERROR_ACCESS_DENIED syscall.Errno = 5 + ERROR_NO_MORE_FILES syscall.Errno = 18 + ERROR_HANDLE_EOF syscall.Errno = 38 + ERROR_NETNAME_DELETED syscall.Errno = 64 + ERROR_FILE_EXISTS syscall.Errno = 80 + ERROR_BROKEN_PIPE syscall.Errno = 109 + ERROR_BUFFER_OVERFLOW syscall.Errno = 111 + ERROR_INSUFFICIENT_BUFFER syscall.Errno = 122 + ERROR_MOD_NOT_FOUND syscall.Errno = 126 + ERROR_PROC_NOT_FOUND syscall.Errno = 127 + ERROR_ALREADY_EXISTS syscall.Errno = 183 + ERROR_ENVVAR_NOT_FOUND syscall.Errno = 203 + ERROR_MORE_DATA syscall.Errno = 234 + ERROR_OPERATION_ABORTED syscall.Errno = 995 + ERROR_IO_PENDING syscall.Errno = 997 + ERROR_SERVICE_SPECIFIC_ERROR syscall.Errno = 1066 + ERROR_NOT_FOUND syscall.Errno = 1168 + ERROR_PRIVILEGE_NOT_HELD syscall.Errno = 1314 + WSAEACCES syscall.Errno = 10013 + WSAEMSGSIZE syscall.Errno = 10040 + WSAECONNRESET syscall.Errno = 10054 +) + +const ( + // Invented values to support what package os expects. 
+ O_RDONLY = 0x00000 + O_WRONLY = 0x00001 + O_RDWR = 0x00002 + O_CREAT = 0x00040 + O_EXCL = 0x00080 + O_NOCTTY = 0x00100 + O_TRUNC = 0x00200 + O_NONBLOCK = 0x00800 + O_APPEND = 0x00400 + O_SYNC = 0x01000 + O_ASYNC = 0x02000 + O_CLOEXEC = 0x80000 +) + +const ( + // More invented values for signals + SIGHUP = Signal(0x1) + SIGINT = Signal(0x2) + SIGQUIT = Signal(0x3) + SIGILL = Signal(0x4) + SIGTRAP = Signal(0x5) + SIGABRT = Signal(0x6) + SIGBUS = Signal(0x7) + SIGFPE = Signal(0x8) + SIGKILL = Signal(0x9) + SIGSEGV = Signal(0xb) + SIGPIPE = Signal(0xd) + SIGALRM = Signal(0xe) + SIGTERM = Signal(0xf) +) + +var signals = [...]string{ + 1: "hangup", + 2: "interrupt", + 3: "quit", + 4: "illegal instruction", + 5: "trace/breakpoint trap", + 6: "aborted", + 7: "bus error", + 8: "floating point exception", + 9: "killed", + 10: "user defined signal 1", + 11: "segmentation fault", + 12: "user defined signal 2", + 13: "broken pipe", + 14: "alarm clock", + 15: "terminated", +} + +const ( + GENERIC_READ = 0x80000000 + GENERIC_WRITE = 0x40000000 + GENERIC_EXECUTE = 0x20000000 + GENERIC_ALL = 0x10000000 + + FILE_LIST_DIRECTORY = 0x00000001 + FILE_APPEND_DATA = 0x00000004 + FILE_WRITE_ATTRIBUTES = 0x00000100 + + FILE_SHARE_READ = 0x00000001 + FILE_SHARE_WRITE = 0x00000002 + FILE_SHARE_DELETE = 0x00000004 + FILE_ATTRIBUTE_READONLY = 0x00000001 + FILE_ATTRIBUTE_HIDDEN = 0x00000002 + FILE_ATTRIBUTE_SYSTEM = 0x00000004 + FILE_ATTRIBUTE_DIRECTORY = 0x00000010 + FILE_ATTRIBUTE_ARCHIVE = 0x00000020 + FILE_ATTRIBUTE_NORMAL = 0x00000080 + FILE_ATTRIBUTE_REPARSE_POINT = 0x00000400 + + INVALID_FILE_ATTRIBUTES = 0xffffffff + + CREATE_NEW = 1 + CREATE_ALWAYS = 2 + OPEN_EXISTING = 3 + OPEN_ALWAYS = 4 + TRUNCATE_EXISTING = 5 + + FILE_FLAG_OPEN_REPARSE_POINT = 0x00200000 + FILE_FLAG_BACKUP_SEMANTICS = 0x02000000 + FILE_FLAG_OVERLAPPED = 0x40000000 + + HANDLE_FLAG_INHERIT = 0x00000001 + STARTF_USESTDHANDLES = 0x00000100 + STARTF_USESHOWWINDOW = 0x00000001 + DUPLICATE_CLOSE_SOURCE = 0x00000001 + DUPLICATE_SAME_ACCESS = 0x00000002 + + STD_INPUT_HANDLE = -10 & (1<<32 - 1) + STD_OUTPUT_HANDLE = -11 & (1<<32 - 1) + STD_ERROR_HANDLE = -12 & (1<<32 - 1) + + FILE_BEGIN = 0 + FILE_CURRENT = 1 + FILE_END = 2 + + LANG_ENGLISH = 0x09 + SUBLANG_ENGLISH_US = 0x01 + + FORMAT_MESSAGE_ALLOCATE_BUFFER = 256 + FORMAT_MESSAGE_IGNORE_INSERTS = 512 + FORMAT_MESSAGE_FROM_STRING = 1024 + FORMAT_MESSAGE_FROM_HMODULE = 2048 + FORMAT_MESSAGE_FROM_SYSTEM = 4096 + FORMAT_MESSAGE_ARGUMENT_ARRAY = 8192 + FORMAT_MESSAGE_MAX_WIDTH_MASK = 255 + + MAX_PATH = 260 + MAX_LONG_PATH = 32768 + + MAX_COMPUTERNAME_LENGTH = 15 + + TIME_ZONE_ID_UNKNOWN = 0 + TIME_ZONE_ID_STANDARD = 1 + + TIME_ZONE_ID_DAYLIGHT = 2 + IGNORE = 0 + INFINITE = 0xffffffff + + WAIT_TIMEOUT = 258 + WAIT_ABANDONED = 0x00000080 + WAIT_OBJECT_0 = 0x00000000 + WAIT_FAILED = 0xFFFFFFFF + + PROCESS_TERMINATE = 1 + PROCESS_QUERY_INFORMATION = 0x00000400 + SYNCHRONIZE = 0x00100000 + + FILE_MAP_COPY = 0x01 + FILE_MAP_WRITE = 0x02 + FILE_MAP_READ = 0x04 + FILE_MAP_EXECUTE = 0x20 + + CTRL_C_EVENT = 0 + CTRL_BREAK_EVENT = 1 + + // Windows reserves errors >= 1<<29 for application use. + APPLICATION_ERROR = 1 << 29 +) + +const ( + // Process creation flags. 
+ CREATE_BREAKAWAY_FROM_JOB = 0x01000000 + CREATE_DEFAULT_ERROR_MODE = 0x04000000 + CREATE_NEW_CONSOLE = 0x00000010 + CREATE_NEW_PROCESS_GROUP = 0x00000200 + CREATE_NO_WINDOW = 0x08000000 + CREATE_PROTECTED_PROCESS = 0x00040000 + CREATE_PRESERVE_CODE_AUTHZ_LEVEL = 0x02000000 + CREATE_SEPARATE_WOW_VDM = 0x00000800 + CREATE_SHARED_WOW_VDM = 0x00001000 + CREATE_SUSPENDED = 0x00000004 + CREATE_UNICODE_ENVIRONMENT = 0x00000400 + DEBUG_ONLY_THIS_PROCESS = 0x00000002 + DEBUG_PROCESS = 0x00000001 + DETACHED_PROCESS = 0x00000008 + EXTENDED_STARTUPINFO_PRESENT = 0x00080000 + INHERIT_PARENT_AFFINITY = 0x00010000 +) + +const ( + // flags for CreateToolhelp32Snapshot + TH32CS_SNAPHEAPLIST = 0x01 + TH32CS_SNAPPROCESS = 0x02 + TH32CS_SNAPTHREAD = 0x04 + TH32CS_SNAPMODULE = 0x08 + TH32CS_SNAPMODULE32 = 0x10 + TH32CS_SNAPALL = TH32CS_SNAPHEAPLIST | TH32CS_SNAPMODULE | TH32CS_SNAPPROCESS | TH32CS_SNAPTHREAD + TH32CS_INHERIT = 0x80000000 +) + +const ( + // filters for ReadDirectoryChangesW + FILE_NOTIFY_CHANGE_FILE_NAME = 0x001 + FILE_NOTIFY_CHANGE_DIR_NAME = 0x002 + FILE_NOTIFY_CHANGE_ATTRIBUTES = 0x004 + FILE_NOTIFY_CHANGE_SIZE = 0x008 + FILE_NOTIFY_CHANGE_LAST_WRITE = 0x010 + FILE_NOTIFY_CHANGE_LAST_ACCESS = 0x020 + FILE_NOTIFY_CHANGE_CREATION = 0x040 + FILE_NOTIFY_CHANGE_SECURITY = 0x100 +) + +const ( + // do not reorder + FILE_ACTION_ADDED = iota + 1 + FILE_ACTION_REMOVED + FILE_ACTION_MODIFIED + FILE_ACTION_RENAMED_OLD_NAME + FILE_ACTION_RENAMED_NEW_NAME +) + +const ( + // wincrypt.h + PROV_RSA_FULL = 1 + PROV_RSA_SIG = 2 + PROV_DSS = 3 + PROV_FORTEZZA = 4 + PROV_MS_EXCHANGE = 5 + PROV_SSL = 6 + PROV_RSA_SCHANNEL = 12 + PROV_DSS_DH = 13 + PROV_EC_ECDSA_SIG = 14 + PROV_EC_ECNRA_SIG = 15 + PROV_EC_ECDSA_FULL = 16 + PROV_EC_ECNRA_FULL = 17 + PROV_DH_SCHANNEL = 18 + PROV_SPYRUS_LYNKS = 20 + PROV_RNG = 21 + PROV_INTEL_SEC = 22 + PROV_REPLACE_OWF = 23 + PROV_RSA_AES = 24 + CRYPT_VERIFYCONTEXT = 0xF0000000 + CRYPT_NEWKEYSET = 0x00000008 + CRYPT_DELETEKEYSET = 0x00000010 + CRYPT_MACHINE_KEYSET = 0x00000020 + CRYPT_SILENT = 0x00000040 + CRYPT_DEFAULT_CONTAINER_OPTIONAL = 0x00000080 + + USAGE_MATCH_TYPE_AND = 0 + USAGE_MATCH_TYPE_OR = 1 + + X509_ASN_ENCODING = 0x00000001 + PKCS_7_ASN_ENCODING = 0x00010000 + + CERT_STORE_PROV_MEMORY = 2 + + CERT_STORE_ADD_ALWAYS = 4 + + CERT_STORE_DEFER_CLOSE_UNTIL_LAST_FREE_FLAG = 0x00000004 + + CERT_TRUST_NO_ERROR = 0x00000000 + CERT_TRUST_IS_NOT_TIME_VALID = 0x00000001 + CERT_TRUST_IS_REVOKED = 0x00000004 + CERT_TRUST_IS_NOT_SIGNATURE_VALID = 0x00000008 + CERT_TRUST_IS_NOT_VALID_FOR_USAGE = 0x00000010 + CERT_TRUST_IS_UNTRUSTED_ROOT = 0x00000020 + CERT_TRUST_REVOCATION_STATUS_UNKNOWN = 0x00000040 + CERT_TRUST_IS_CYCLIC = 0x00000080 + CERT_TRUST_INVALID_EXTENSION = 0x00000100 + CERT_TRUST_INVALID_POLICY_CONSTRAINTS = 0x00000200 + CERT_TRUST_INVALID_BASIC_CONSTRAINTS = 0x00000400 + CERT_TRUST_INVALID_NAME_CONSTRAINTS = 0x00000800 + CERT_TRUST_HAS_NOT_SUPPORTED_NAME_CONSTRAINT = 0x00001000 + CERT_TRUST_HAS_NOT_DEFINED_NAME_CONSTRAINT = 0x00002000 + CERT_TRUST_HAS_NOT_PERMITTED_NAME_CONSTRAINT = 0x00004000 + CERT_TRUST_HAS_EXCLUDED_NAME_CONSTRAINT = 0x00008000 + CERT_TRUST_IS_OFFLINE_REVOCATION = 0x01000000 + CERT_TRUST_NO_ISSUANCE_CHAIN_POLICY = 0x02000000 + CERT_TRUST_IS_EXPLICIT_DISTRUST = 0x04000000 + CERT_TRUST_HAS_NOT_SUPPORTED_CRITICAL_EXT = 0x08000000 + + CERT_CHAIN_POLICY_BASE = 1 + CERT_CHAIN_POLICY_AUTHENTICODE = 2 + CERT_CHAIN_POLICY_AUTHENTICODE_TS = 3 + CERT_CHAIN_POLICY_SSL = 4 + CERT_CHAIN_POLICY_BASIC_CONSTRAINTS = 5 + CERT_CHAIN_POLICY_NT_AUTH = 6 + 
CERT_CHAIN_POLICY_MICROSOFT_ROOT = 7 + CERT_CHAIN_POLICY_EV = 8 + + CERT_E_EXPIRED = 0x800B0101 + CERT_E_ROLE = 0x800B0103 + CERT_E_PURPOSE = 0x800B0106 + CERT_E_UNTRUSTEDROOT = 0x800B0109 + CERT_E_CN_NO_MATCH = 0x800B010F + + AUTHTYPE_CLIENT = 1 + AUTHTYPE_SERVER = 2 +) + +var ( + OID_PKIX_KP_SERVER_AUTH = []byte("1.3.6.1.5.5.7.3.1\x00") + OID_SERVER_GATED_CRYPTO = []byte("1.3.6.1.4.1.311.10.3.3\x00") + OID_SGC_NETSCAPE = []byte("2.16.840.1.113730.4.1\x00") +) + +// Invented values to support what package os expects. +type Timeval struct { + Sec int32 + Usec int32 +} + +func (tv *Timeval) Nanoseconds() int64 { + return (int64(tv.Sec)*1e6 + int64(tv.Usec)) * 1e3 +} + +func NsecToTimeval(nsec int64) (tv Timeval) { + tv.Sec = int32(nsec / 1e9) + tv.Usec = int32(nsec % 1e9 / 1e3) + return +} + +type SecurityAttributes struct { + Length uint32 + SecurityDescriptor uintptr + InheritHandle uint32 +} + +type Overlapped struct { + Internal uintptr + InternalHigh uintptr + Offset uint32 + OffsetHigh uint32 + HEvent Handle +} + +type FileNotifyInformation struct { + NextEntryOffset uint32 + Action uint32 + FileNameLength uint32 + FileName uint16 +} + +type Filetime struct { + LowDateTime uint32 + HighDateTime uint32 +} + +// Nanoseconds returns Filetime ft in nanoseconds +// since Epoch (00:00:00 UTC, January 1, 1970). +func (ft *Filetime) Nanoseconds() int64 { + // 100-nanosecond intervals since January 1, 1601 + nsec := int64(ft.HighDateTime)<<32 + int64(ft.LowDateTime) + // change starting time to the Epoch (00:00:00 UTC, January 1, 1970) + nsec -= 116444736000000000 + // convert into nanoseconds + nsec *= 100 + return nsec +} + +func NsecToFiletime(nsec int64) (ft Filetime) { + // convert into 100-nanosecond + nsec /= 100 + // change starting time to January 1, 1601 + nsec += 116444736000000000 + // split into high / low + ft.LowDateTime = uint32(nsec & 0xffffffff) + ft.HighDateTime = uint32(nsec >> 32 & 0xffffffff) + return ft +} + +type Win32finddata struct { + FileAttributes uint32 + CreationTime Filetime + LastAccessTime Filetime + LastWriteTime Filetime + FileSizeHigh uint32 + FileSizeLow uint32 + Reserved0 uint32 + Reserved1 uint32 + FileName [MAX_PATH - 1]uint16 + AlternateFileName [13]uint16 +} + +// This is the actual system call structure. +// Win32finddata is what we committed to in Go 1. +type win32finddata1 struct { + FileAttributes uint32 + CreationTime Filetime + LastAccessTime Filetime + LastWriteTime Filetime + FileSizeHigh uint32 + FileSizeLow uint32 + Reserved0 uint32 + Reserved1 uint32 + FileName [MAX_PATH]uint16 + AlternateFileName [14]uint16 +} + +func copyFindData(dst *Win32finddata, src *win32finddata1) { + dst.FileAttributes = src.FileAttributes + dst.CreationTime = src.CreationTime + dst.LastAccessTime = src.LastAccessTime + dst.LastWriteTime = src.LastWriteTime + dst.FileSizeHigh = src.FileSizeHigh + dst.FileSizeLow = src.FileSizeLow + dst.Reserved0 = src.Reserved0 + dst.Reserved1 = src.Reserved1 + + // The src is 1 element bigger than dst, but it must be NUL. 
+ copy(dst.FileName[:], src.FileName[:]) + copy(dst.AlternateFileName[:], src.AlternateFileName[:]) +} + +type ByHandleFileInformation struct { + FileAttributes uint32 + CreationTime Filetime + LastAccessTime Filetime + LastWriteTime Filetime + VolumeSerialNumber uint32 + FileSizeHigh uint32 + FileSizeLow uint32 + NumberOfLinks uint32 + FileIndexHigh uint32 + FileIndexLow uint32 +} + +const ( + GetFileExInfoStandard = 0 + GetFileExMaxInfoLevel = 1 +) + +type Win32FileAttributeData struct { + FileAttributes uint32 + CreationTime Filetime + LastAccessTime Filetime + LastWriteTime Filetime + FileSizeHigh uint32 + FileSizeLow uint32 +} + +// ShowWindow constants +const ( + // winuser.h + SW_HIDE = 0 + SW_NORMAL = 1 + SW_SHOWNORMAL = 1 + SW_SHOWMINIMIZED = 2 + SW_SHOWMAXIMIZED = 3 + SW_MAXIMIZE = 3 + SW_SHOWNOACTIVATE = 4 + SW_SHOW = 5 + SW_MINIMIZE = 6 + SW_SHOWMINNOACTIVE = 7 + SW_SHOWNA = 8 + SW_RESTORE = 9 + SW_SHOWDEFAULT = 10 + SW_FORCEMINIMIZE = 11 +) + +type StartupInfo struct { + Cb uint32 + _ *uint16 + Desktop *uint16 + Title *uint16 + X uint32 + Y uint32 + XSize uint32 + YSize uint32 + XCountChars uint32 + YCountChars uint32 + FillAttribute uint32 + Flags uint32 + ShowWindow uint16 + _ uint16 + _ *byte + StdInput Handle + StdOutput Handle + StdErr Handle +} + +type ProcessInformation struct { + Process Handle + Thread Handle + ProcessId uint32 + ThreadId uint32 +} + +type ProcessEntry32 struct { + Size uint32 + Usage uint32 + ProcessID uint32 + DefaultHeapID uintptr + ModuleID uint32 + Threads uint32 + ParentProcessID uint32 + PriClassBase int32 + Flags uint32 + ExeFile [MAX_PATH]uint16 +} + +type Systemtime struct { + Year uint16 + Month uint16 + DayOfWeek uint16 + Day uint16 + Hour uint16 + Minute uint16 + Second uint16 + Milliseconds uint16 +} + +type Timezoneinformation struct { + Bias int32 + StandardName [32]uint16 + StandardDate Systemtime + StandardBias int32 + DaylightName [32]uint16 + DaylightDate Systemtime + DaylightBias int32 +} + +// Socket related. + +const ( + AF_UNSPEC = 0 + AF_UNIX = 1 + AF_INET = 2 + AF_INET6 = 23 + AF_NETBIOS = 17 + + SOCK_STREAM = 1 + SOCK_DGRAM = 2 + SOCK_RAW = 3 + SOCK_SEQPACKET = 5 + + IPPROTO_IP = 0 + IPPROTO_IPV6 = 0x29 + IPPROTO_TCP = 6 + IPPROTO_UDP = 17 + + SOL_SOCKET = 0xffff + SO_REUSEADDR = 4 + SO_KEEPALIVE = 8 + SO_DONTROUTE = 16 + SO_BROADCAST = 32 + SO_LINGER = 128 + SO_RCVBUF = 0x1002 + SO_SNDBUF = 0x1001 + SO_UPDATE_ACCEPT_CONTEXT = 0x700b + SO_UPDATE_CONNECT_CONTEXT = 0x7010 + + IOC_OUT = 0x40000000 + IOC_IN = 0x80000000 + IOC_VENDOR = 0x18000000 + IOC_INOUT = IOC_IN | IOC_OUT + IOC_WS2 = 0x08000000 + SIO_GET_EXTENSION_FUNCTION_POINTER = IOC_INOUT | IOC_WS2 | 6 + SIO_KEEPALIVE_VALS = IOC_IN | IOC_VENDOR | 4 + SIO_UDP_CONNRESET = IOC_IN | IOC_VENDOR | 12 + + // cf. 
http://support.microsoft.com/default.aspx?scid=kb;en-us;257460 + + IP_TOS = 0x3 + IP_TTL = 0x4 + IP_MULTICAST_IF = 0x9 + IP_MULTICAST_TTL = 0xa + IP_MULTICAST_LOOP = 0xb + IP_ADD_MEMBERSHIP = 0xc + IP_DROP_MEMBERSHIP = 0xd + + IPV6_V6ONLY = 0x1b + IPV6_UNICAST_HOPS = 0x4 + IPV6_MULTICAST_IF = 0x9 + IPV6_MULTICAST_HOPS = 0xa + IPV6_MULTICAST_LOOP = 0xb + IPV6_JOIN_GROUP = 0xc + IPV6_LEAVE_GROUP = 0xd + + MSG_OOB = 0x1 + MSG_PEEK = 0x2 + MSG_DONTROUTE = 0x4 + MSG_WAITALL = 0x8 + + MSG_TRUNC = 0x0100 + MSG_CTRUNC = 0x0200 + MSG_BCAST = 0x0400 + MSG_MCAST = 0x0800 + + SOMAXCONN = 0x7fffffff + + TCP_NODELAY = 1 + + SHUT_RD = 0 + SHUT_WR = 1 + SHUT_RDWR = 2 + + WSADESCRIPTION_LEN = 256 + WSASYS_STATUS_LEN = 128 +) + +type WSABuf struct { + Len uint32 + Buf *byte +} + +type WSAMsg struct { + Name *syscall.RawSockaddrAny + Namelen int32 + Buffers *WSABuf + BufferCount uint32 + Control WSABuf + Flags uint32 +} + +// Invented values to support what package os expects. +const ( + S_IFMT = 0x1f000 + S_IFIFO = 0x1000 + S_IFCHR = 0x2000 + S_IFDIR = 0x4000 + S_IFBLK = 0x6000 + S_IFREG = 0x8000 + S_IFLNK = 0xa000 + S_IFSOCK = 0xc000 + S_ISUID = 0x800 + S_ISGID = 0x400 + S_ISVTX = 0x200 + S_IRUSR = 0x100 + S_IWRITE = 0x80 + S_IWUSR = 0x80 + S_IXUSR = 0x40 +) + +const ( + FILE_TYPE_CHAR = 0x0002 + FILE_TYPE_DISK = 0x0001 + FILE_TYPE_PIPE = 0x0003 + FILE_TYPE_REMOTE = 0x8000 + FILE_TYPE_UNKNOWN = 0x0000 +) + +type Hostent struct { + Name *byte + Aliases **byte + AddrType uint16 + Length uint16 + AddrList **byte +} + +type Protoent struct { + Name *byte + Aliases **byte + Proto uint16 +} + +const ( + DNS_TYPE_A = 0x0001 + DNS_TYPE_NS = 0x0002 + DNS_TYPE_MD = 0x0003 + DNS_TYPE_MF = 0x0004 + DNS_TYPE_CNAME = 0x0005 + DNS_TYPE_SOA = 0x0006 + DNS_TYPE_MB = 0x0007 + DNS_TYPE_MG = 0x0008 + DNS_TYPE_MR = 0x0009 + DNS_TYPE_NULL = 0x000a + DNS_TYPE_WKS = 0x000b + DNS_TYPE_PTR = 0x000c + DNS_TYPE_HINFO = 0x000d + DNS_TYPE_MINFO = 0x000e + DNS_TYPE_MX = 0x000f + DNS_TYPE_TEXT = 0x0010 + DNS_TYPE_RP = 0x0011 + DNS_TYPE_AFSDB = 0x0012 + DNS_TYPE_X25 = 0x0013 + DNS_TYPE_ISDN = 0x0014 + DNS_TYPE_RT = 0x0015 + DNS_TYPE_NSAP = 0x0016 + DNS_TYPE_NSAPPTR = 0x0017 + DNS_TYPE_SIG = 0x0018 + DNS_TYPE_KEY = 0x0019 + DNS_TYPE_PX = 0x001a + DNS_TYPE_GPOS = 0x001b + DNS_TYPE_AAAA = 0x001c + DNS_TYPE_LOC = 0x001d + DNS_TYPE_NXT = 0x001e + DNS_TYPE_EID = 0x001f + DNS_TYPE_NIMLOC = 0x0020 + DNS_TYPE_SRV = 0x0021 + DNS_TYPE_ATMA = 0x0022 + DNS_TYPE_NAPTR = 0x0023 + DNS_TYPE_KX = 0x0024 + DNS_TYPE_CERT = 0x0025 + DNS_TYPE_A6 = 0x0026 + DNS_TYPE_DNAME = 0x0027 + DNS_TYPE_SINK = 0x0028 + DNS_TYPE_OPT = 0x0029 + DNS_TYPE_DS = 0x002B + DNS_TYPE_RRSIG = 0x002E + DNS_TYPE_NSEC = 0x002F + DNS_TYPE_DNSKEY = 0x0030 + DNS_TYPE_DHCID = 0x0031 + DNS_TYPE_UINFO = 0x0064 + DNS_TYPE_UID = 0x0065 + DNS_TYPE_GID = 0x0066 + DNS_TYPE_UNSPEC = 0x0067 + DNS_TYPE_ADDRS = 0x00f8 + DNS_TYPE_TKEY = 0x00f9 + DNS_TYPE_TSIG = 0x00fa + DNS_TYPE_IXFR = 0x00fb + DNS_TYPE_AXFR = 0x00fc + DNS_TYPE_MAILB = 0x00fd + DNS_TYPE_MAILA = 0x00fe + DNS_TYPE_ALL = 0x00ff + DNS_TYPE_ANY = 0x00ff + DNS_TYPE_WINS = 0xff01 + DNS_TYPE_WINSR = 0xff02 + DNS_TYPE_NBSTAT = 0xff01 +) + +const ( + DNS_INFO_NO_RECORDS = 0x251D +) + +const ( + // flags inside DNSRecord.Dw + DnsSectionQuestion = 0x0000 + DnsSectionAnswer = 0x0001 + DnsSectionAuthority = 0x0002 + DnsSectionAdditional = 0x0003 +) + +type DNSSRVData struct { + Target *uint16 + Priority uint16 + Weight uint16 + Port uint16 + Pad uint16 +} + +type DNSPTRData struct { + Host *uint16 +} + +type DNSMXData struct { + NameExchange *uint16 
+ Preference uint16 + Pad uint16 +} + +type DNSTXTData struct { + StringCount uint16 + StringArray [1]*uint16 +} + +type DNSRecord struct { + Next *DNSRecord + Name *uint16 + Type uint16 + Length uint16 + Dw uint32 + Ttl uint32 + Reserved uint32 + Data [40]byte +} + +const ( + TF_DISCONNECT = 1 + TF_REUSE_SOCKET = 2 + TF_WRITE_BEHIND = 4 + TF_USE_DEFAULT_WORKER = 0 + TF_USE_SYSTEM_THREAD = 16 + TF_USE_KERNEL_APC = 32 +) + +type TransmitFileBuffers struct { + Head uintptr + HeadLength uint32 + Tail uintptr + TailLength uint32 +} + +const ( + IFF_UP = 1 + IFF_BROADCAST = 2 + IFF_LOOPBACK = 4 + IFF_POINTTOPOINT = 8 + IFF_MULTICAST = 16 +) + +const SIO_GET_INTERFACE_LIST = 0x4004747F + +// TODO(mattn): SockaddrGen is union of sockaddr/sockaddr_in/sockaddr_in6_old. +// will be fixed to change variable type as suitable. + +type SockaddrGen [24]byte + +type InterfaceInfo struct { + Flags uint32 + Address SockaddrGen + BroadcastAddress SockaddrGen + Netmask SockaddrGen +} + +type IpAddressString struct { + String [16]byte +} + +type IpMaskString IpAddressString + +type IpAddrString struct { + Next *IpAddrString + IpAddress IpAddressString + IpMask IpMaskString + Context uint32 +} + +const MAX_ADAPTER_NAME_LENGTH = 256 +const MAX_ADAPTER_DESCRIPTION_LENGTH = 128 +const MAX_ADAPTER_ADDRESS_LENGTH = 8 + +type IpAdapterInfo struct { + Next *IpAdapterInfo + ComboIndex uint32 + AdapterName [MAX_ADAPTER_NAME_LENGTH + 4]byte + Description [MAX_ADAPTER_DESCRIPTION_LENGTH + 4]byte + AddressLength uint32 + Address [MAX_ADAPTER_ADDRESS_LENGTH]byte + Index uint32 + Type uint32 + DhcpEnabled uint32 + CurrentIpAddress *IpAddrString + IpAddressList IpAddrString + GatewayList IpAddrString + DhcpServer IpAddrString + HaveWins bool + PrimaryWinsServer IpAddrString + SecondaryWinsServer IpAddrString + LeaseObtained int64 + LeaseExpires int64 +} + +const MAXLEN_PHYSADDR = 8 +const MAX_INTERFACE_NAME_LEN = 256 +const MAXLEN_IFDESCR = 256 + +type MibIfRow struct { + Name [MAX_INTERFACE_NAME_LEN]uint16 + Index uint32 + Type uint32 + Mtu uint32 + Speed uint32 + PhysAddrLen uint32 + PhysAddr [MAXLEN_PHYSADDR]byte + AdminStatus uint32 + OperStatus uint32 + LastChange uint32 + InOctets uint32 + InUcastPkts uint32 + InNUcastPkts uint32 + InDiscards uint32 + InErrors uint32 + InUnknownProtos uint32 + OutOctets uint32 + OutUcastPkts uint32 + OutNUcastPkts uint32 + OutDiscards uint32 + OutErrors uint32 + OutQLen uint32 + DescrLen uint32 + Descr [MAXLEN_IFDESCR]byte +} + +type CertContext struct { + EncodingType uint32 + EncodedCert *byte + Length uint32 + CertInfo uintptr + Store Handle +} + +type CertChainContext struct { + Size uint32 + TrustStatus CertTrustStatus + ChainCount uint32 + Chains **CertSimpleChain + LowerQualityChainCount uint32 + LowerQualityChains **CertChainContext + HasRevocationFreshnessTime uint32 + RevocationFreshnessTime uint32 +} + +type CertSimpleChain struct { + Size uint32 + TrustStatus CertTrustStatus + NumElements uint32 + Elements **CertChainElement + TrustListInfo uintptr + HasRevocationFreshnessTime uint32 + RevocationFreshnessTime uint32 +} + +type CertChainElement struct { + Size uint32 + CertContext *CertContext + TrustStatus CertTrustStatus + RevocationInfo *CertRevocationInfo + IssuanceUsage *CertEnhKeyUsage + ApplicationUsage *CertEnhKeyUsage + ExtendedErrorInfo *uint16 +} + +type CertRevocationInfo struct { + Size uint32 + RevocationResult uint32 + RevocationOid *byte + OidSpecificInfo uintptr + HasFreshnessTime uint32 + FreshnessTime uint32 + CrlInfo uintptr // *CertRevocationCrlInfo +} + 
+type CertTrustStatus struct { + ErrorStatus uint32 + InfoStatus uint32 +} + +type CertUsageMatch struct { + Type uint32 + Usage CertEnhKeyUsage +} + +type CertEnhKeyUsage struct { + Length uint32 + UsageIdentifiers **byte +} + +type CertChainPara struct { + Size uint32 + RequestedUsage CertUsageMatch + RequstedIssuancePolicy CertUsageMatch + URLRetrievalTimeout uint32 + CheckRevocationFreshnessTime uint32 + RevocationFreshnessTime uint32 + CacheResync *Filetime +} + +type CertChainPolicyPara struct { + Size uint32 + Flags uint32 + ExtraPolicyPara uintptr +} + +type SSLExtraCertChainPolicyPara struct { + Size uint32 + AuthType uint32 + Checks uint32 + ServerName *uint16 +} + +type CertChainPolicyStatus struct { + Size uint32 + Error uint32 + ChainIndex uint32 + ElementIndex uint32 + ExtraPolicyStatus uintptr +} + +const ( + // do not reorder + HKEY_CLASSES_ROOT = 0x80000000 + iota + HKEY_CURRENT_USER + HKEY_LOCAL_MACHINE + HKEY_USERS + HKEY_PERFORMANCE_DATA + HKEY_CURRENT_CONFIG + HKEY_DYN_DATA + + KEY_QUERY_VALUE = 1 + KEY_SET_VALUE = 2 + KEY_CREATE_SUB_KEY = 4 + KEY_ENUMERATE_SUB_KEYS = 8 + KEY_NOTIFY = 16 + KEY_CREATE_LINK = 32 + KEY_WRITE = 0x20006 + KEY_EXECUTE = 0x20019 + KEY_READ = 0x20019 + KEY_WOW64_64KEY = 0x0100 + KEY_WOW64_32KEY = 0x0200 + KEY_ALL_ACCESS = 0xf003f +) + +const ( + // do not reorder + REG_NONE = iota + REG_SZ + REG_EXPAND_SZ + REG_BINARY + REG_DWORD_LITTLE_ENDIAN + REG_DWORD_BIG_ENDIAN + REG_LINK + REG_MULTI_SZ + REG_RESOURCE_LIST + REG_FULL_RESOURCE_DESCRIPTOR + REG_RESOURCE_REQUIREMENTS_LIST + REG_QWORD_LITTLE_ENDIAN + REG_DWORD = REG_DWORD_LITTLE_ENDIAN + REG_QWORD = REG_QWORD_LITTLE_ENDIAN +) + +type AddrinfoW struct { + Flags int32 + Family int32 + Socktype int32 + Protocol int32 + Addrlen uintptr + Canonname *uint16 + Addr uintptr + Next *AddrinfoW +} + +const ( + AI_PASSIVE = 1 + AI_CANONNAME = 2 + AI_NUMERICHOST = 4 +) + +type GUID struct { + Data1 uint32 + Data2 uint16 + Data3 uint16 + Data4 [8]byte +} + +var WSAID_CONNECTEX = GUID{ + 0x25a207b9, + 0xddf3, + 0x4660, + [8]byte{0x8e, 0xe9, 0x76, 0xe5, 0x8c, 0x74, 0x06, 0x3e}, +} + +var WSAID_WSASENDMSG = GUID{ + 0xa441e712, + 0x754f, + 0x43ca, + [8]byte{0x84, 0xa7, 0x0d, 0xee, 0x44, 0xcf, 0x60, 0x6d}, +} + +var WSAID_WSARECVMSG = GUID{ + 0xf689d7c8, + 0x6f1f, + 0x436b, + [8]byte{0x8a, 0x53, 0xe5, 0x4f, 0xe3, 0x51, 0xc3, 0x22}, +} + +const ( + FILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1 + FILE_SKIP_SET_EVENT_ON_HANDLE = 2 +) + +const ( + WSAPROTOCOL_LEN = 255 + MAX_PROTOCOL_CHAIN = 7 + BASE_PROTOCOL = 1 + LAYERED_PROTOCOL = 0 + + XP1_CONNECTIONLESS = 0x00000001 + XP1_GUARANTEED_DELIVERY = 0x00000002 + XP1_GUARANTEED_ORDER = 0x00000004 + XP1_MESSAGE_ORIENTED = 0x00000008 + XP1_PSEUDO_STREAM = 0x00000010 + XP1_GRACEFUL_CLOSE = 0x00000020 + XP1_EXPEDITED_DATA = 0x00000040 + XP1_CONNECT_DATA = 0x00000080 + XP1_DISCONNECT_DATA = 0x00000100 + XP1_SUPPORT_BROADCAST = 0x00000200 + XP1_SUPPORT_MULTIPOINT = 0x00000400 + XP1_MULTIPOINT_CONTROL_PLANE = 0x00000800 + XP1_MULTIPOINT_DATA_PLANE = 0x00001000 + XP1_QOS_SUPPORTED = 0x00002000 + XP1_UNI_SEND = 0x00008000 + XP1_UNI_RECV = 0x00010000 + XP1_IFS_HANDLES = 0x00020000 + XP1_PARTIAL_MESSAGE = 0x00040000 + XP1_SAN_SUPPORT_SDP = 0x00080000 + + PFL_MULTIPLE_PROTO_ENTRIES = 0x00000001 + PFL_RECOMMENDED_PROTO_ENTRY = 0x00000002 + PFL_HIDDEN = 0x00000004 + PFL_MATCHES_PROTOCOL_ZERO = 0x00000008 + PFL_NETWORKDIRECT_PROVIDER = 0x00000010 +) + +type WSAProtocolInfo struct { + ServiceFlags1 uint32 + ServiceFlags2 uint32 + ServiceFlags3 uint32 + ServiceFlags4 uint32 + 
ProviderFlags uint32 + ProviderId GUID + CatalogEntryId uint32 + ProtocolChain WSAProtocolChain + Version int32 + AddressFamily int32 + MaxSockAddr int32 + MinSockAddr int32 + SocketType int32 + Protocol int32 + ProtocolMaxOffset int32 + NetworkByteOrder int32 + SecurityScheme int32 + MessageSize uint32 + ProviderReserved uint32 + ProtocolName [WSAPROTOCOL_LEN + 1]uint16 +} + +type WSAProtocolChain struct { + ChainLen int32 + ChainEntries [MAX_PROTOCOL_CHAIN]uint32 +} + +type TCPKeepalive struct { + OnOff uint32 + Time uint32 + Interval uint32 +} + +type symbolicLinkReparseBuffer struct { + SubstituteNameOffset uint16 + SubstituteNameLength uint16 + PrintNameOffset uint16 + PrintNameLength uint16 + Flags uint32 + PathBuffer [1]uint16 +} + +type mountPointReparseBuffer struct { + SubstituteNameOffset uint16 + SubstituteNameLength uint16 + PrintNameOffset uint16 + PrintNameLength uint16 + PathBuffer [1]uint16 +} + +type reparseDataBuffer struct { + ReparseTag uint32 + ReparseDataLength uint16 + Reserved uint16 + + // GenericReparseBuffer + reparseBuffer byte +} + +const ( + FSCTL_GET_REPARSE_POINT = 0x900A8 + MAXIMUM_REPARSE_DATA_BUFFER_SIZE = 16 * 1024 + IO_REPARSE_TAG_MOUNT_POINT = 0xA0000003 + IO_REPARSE_TAG_SYMLINK = 0xA000000C + SYMBOLIC_LINK_FLAG_DIRECTORY = 0x1 +) + +const ( + ComputerNameNetBIOS = 0 + ComputerNameDnsHostname = 1 + ComputerNameDnsDomain = 2 + ComputerNameDnsFullyQualified = 3 + ComputerNamePhysicalNetBIOS = 4 + ComputerNamePhysicalDnsHostname = 5 + ComputerNamePhysicalDnsDomain = 6 + ComputerNamePhysicalDnsFullyQualified = 7 + ComputerNameMax = 8 +) + +const ( + MOVEFILE_REPLACE_EXISTING = 0x1 + MOVEFILE_COPY_ALLOWED = 0x2 + MOVEFILE_DELAY_UNTIL_REBOOT = 0x4 + MOVEFILE_WRITE_THROUGH = 0x8 + MOVEFILE_CREATE_HARDLINK = 0x10 + MOVEFILE_FAIL_IF_NOT_TRACKABLE = 0x20 +) + +const GAA_FLAG_INCLUDE_PREFIX = 0x00000010 + +const ( + IF_TYPE_OTHER = 1 + IF_TYPE_ETHERNET_CSMACD = 6 + IF_TYPE_ISO88025_TOKENRING = 9 + IF_TYPE_PPP = 23 + IF_TYPE_SOFTWARE_LOOPBACK = 24 + IF_TYPE_ATM = 37 + IF_TYPE_IEEE80211 = 71 + IF_TYPE_TUNNEL = 131 + IF_TYPE_IEEE1394 = 144 +) + +type SocketAddress struct { + Sockaddr *syscall.RawSockaddrAny + SockaddrLength int32 +} + +type IpAdapterUnicastAddress struct { + Length uint32 + Flags uint32 + Next *IpAdapterUnicastAddress + Address SocketAddress + PrefixOrigin int32 + SuffixOrigin int32 + DadState int32 + ValidLifetime uint32 + PreferredLifetime uint32 + LeaseLifetime uint32 + OnLinkPrefixLength uint8 +} + +type IpAdapterAnycastAddress struct { + Length uint32 + Flags uint32 + Next *IpAdapterAnycastAddress + Address SocketAddress +} + +type IpAdapterMulticastAddress struct { + Length uint32 + Flags uint32 + Next *IpAdapterMulticastAddress + Address SocketAddress +} + +type IpAdapterDnsServerAdapter struct { + Length uint32 + Reserved uint32 + Next *IpAdapterDnsServerAdapter + Address SocketAddress +} + +type IpAdapterPrefix struct { + Length uint32 + Flags uint32 + Next *IpAdapterPrefix + Address SocketAddress + PrefixLength uint32 +} + +type IpAdapterAddresses struct { + Length uint32 + IfIndex uint32 + Next *IpAdapterAddresses + AdapterName *byte + FirstUnicastAddress *IpAdapterUnicastAddress + FirstAnycastAddress *IpAdapterAnycastAddress + FirstMulticastAddress *IpAdapterMulticastAddress + FirstDnsServerAddress *IpAdapterDnsServerAdapter + DnsSuffix *uint16 + Description *uint16 + FriendlyName *uint16 + PhysicalAddress [syscall.MAX_ADAPTER_ADDRESS_LENGTH]byte + PhysicalAddressLength uint32 + Flags uint32 + Mtu uint32 + IfType uint32 + OperStatus 
uint32 + Ipv6IfIndex uint32 + ZoneIndices [16]uint32 + FirstPrefix *IpAdapterPrefix + /* more fields might be present here. */ +} + +const ( + IfOperStatusUp = 1 + IfOperStatusDown = 2 + IfOperStatusTesting = 3 + IfOperStatusUnknown = 4 + IfOperStatusDormant = 5 + IfOperStatusNotPresent = 6 + IfOperStatusLowerLayerDown = 7 +) + +// Console related constants used for the mode parameter to SetConsoleMode. See +// https://docs.microsoft.com/en-us/windows/console/setconsolemode for details. + +const ( + ENABLE_PROCESSED_INPUT = 0x1 + ENABLE_LINE_INPUT = 0x2 + ENABLE_ECHO_INPUT = 0x4 + ENABLE_WINDOW_INPUT = 0x8 + ENABLE_MOUSE_INPUT = 0x10 + ENABLE_INSERT_MODE = 0x20 + ENABLE_QUICK_EDIT_MODE = 0x40 + ENABLE_EXTENDED_FLAGS = 0x80 + ENABLE_AUTO_POSITION = 0x100 + ENABLE_VIRTUAL_TERMINAL_INPUT = 0x200 + + ENABLE_PROCESSED_OUTPUT = 0x1 + ENABLE_WRAP_AT_EOL_OUTPUT = 0x2 + ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x4 + DISABLE_NEWLINE_AUTO_RETURN = 0x8 + ENABLE_LVB_GRID_WORLDWIDE = 0x10 +) + +type Coord struct { + X int16 + Y int16 +} + +type SmallRect struct { + Left int16 + Top int16 + Right int16 + Bottom int16 +} + +// Used with GetConsoleScreenBuffer to retreive information about a console +// screen buffer. See +// https://docs.microsoft.com/en-us/windows/console/console-screen-buffer-info-str +// for details. + +type ConsoleScreenBufferInfo struct { + Size Coord + CursorPosition Coord + Attributes uint16 + Window SmallRect + MaximumWindowSize Coord +} diff --git a/vendor/golang.org/x/sys/windows/types_windows_386.go b/vendor/golang.org/x/sys/windows/types_windows_386.go new file mode 100644 index 000000000..fe0ddd031 --- /dev/null +++ b/vendor/golang.org/x/sys/windows/types_windows_386.go @@ -0,0 +1,22 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package windows + +type WSAData struct { + Version uint16 + HighVersion uint16 + Description [WSADESCRIPTION_LEN + 1]byte + SystemStatus [WSASYS_STATUS_LEN + 1]byte + MaxSockets uint16 + MaxUdpDg uint16 + VendorInfo *byte +} + +type Servent struct { + Name *byte + Aliases **byte + Port uint16 + Proto *byte +} diff --git a/vendor/golang.org/x/sys/windows/types_windows_amd64.go b/vendor/golang.org/x/sys/windows/types_windows_amd64.go new file mode 100644 index 000000000..7e154c2df --- /dev/null +++ b/vendor/golang.org/x/sys/windows/types_windows_amd64.go @@ -0,0 +1,22 @@ +// Copyright 2011 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package windows + +type WSAData struct { + Version uint16 + HighVersion uint16 + MaxSockets uint16 + MaxUdpDg uint16 + VendorInfo *byte + Description [WSADESCRIPTION_LEN + 1]byte + SystemStatus [WSASYS_STATUS_LEN + 1]byte +} + +type Servent struct { + Name *byte + Aliases **byte + Proto *byte + Port uint16 +} diff --git a/vendor/golang.org/x/sys/windows/zsyscall_windows.go b/vendor/golang.org/x/sys/windows/zsyscall_windows.go new file mode 100644 index 000000000..c7b3b15ea --- /dev/null +++ b/vendor/golang.org/x/sys/windows/zsyscall_windows.go @@ -0,0 +1,2687 @@ +// MACHINE GENERATED BY 'go generate' COMMAND; DO NOT EDIT + +package windows + +import ( + "syscall" + "unsafe" +) + +var _ unsafe.Pointer + +// Do the interface allocations only once for common +// Errno values. 
+const ( + errnoERROR_IO_PENDING = 997 +) + +var ( + errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING) +) + +// errnoErr returns common boxed Errno values, to prevent +// allocations at runtime. +func errnoErr(e syscall.Errno) error { + switch e { + case 0: + return nil + case errnoERROR_IO_PENDING: + return errERROR_IO_PENDING + } + // TODO: add more here, after collecting data on the common + // error values see on Windows. (perhaps when running + // all.bat?) + return e +} + +var ( + modadvapi32 = NewLazySystemDLL("advapi32.dll") + modkernel32 = NewLazySystemDLL("kernel32.dll") + modshell32 = NewLazySystemDLL("shell32.dll") + modmswsock = NewLazySystemDLL("mswsock.dll") + modcrypt32 = NewLazySystemDLL("crypt32.dll") + modws2_32 = NewLazySystemDLL("ws2_32.dll") + moddnsapi = NewLazySystemDLL("dnsapi.dll") + modiphlpapi = NewLazySystemDLL("iphlpapi.dll") + modsecur32 = NewLazySystemDLL("secur32.dll") + modnetapi32 = NewLazySystemDLL("netapi32.dll") + moduserenv = NewLazySystemDLL("userenv.dll") + + procRegisterEventSourceW = modadvapi32.NewProc("RegisterEventSourceW") + procDeregisterEventSource = modadvapi32.NewProc("DeregisterEventSource") + procReportEventW = modadvapi32.NewProc("ReportEventW") + procOpenSCManagerW = modadvapi32.NewProc("OpenSCManagerW") + procCloseServiceHandle = modadvapi32.NewProc("CloseServiceHandle") + procCreateServiceW = modadvapi32.NewProc("CreateServiceW") + procOpenServiceW = modadvapi32.NewProc("OpenServiceW") + procDeleteService = modadvapi32.NewProc("DeleteService") + procStartServiceW = modadvapi32.NewProc("StartServiceW") + procQueryServiceStatus = modadvapi32.NewProc("QueryServiceStatus") + procControlService = modadvapi32.NewProc("ControlService") + procStartServiceCtrlDispatcherW = modadvapi32.NewProc("StartServiceCtrlDispatcherW") + procSetServiceStatus = modadvapi32.NewProc("SetServiceStatus") + procChangeServiceConfigW = modadvapi32.NewProc("ChangeServiceConfigW") + procQueryServiceConfigW = modadvapi32.NewProc("QueryServiceConfigW") + procChangeServiceConfig2W = modadvapi32.NewProc("ChangeServiceConfig2W") + procQueryServiceConfig2W = modadvapi32.NewProc("QueryServiceConfig2W") + procEnumServicesStatusExW = modadvapi32.NewProc("EnumServicesStatusExW") + procGetLastError = modkernel32.NewProc("GetLastError") + procLoadLibraryW = modkernel32.NewProc("LoadLibraryW") + procLoadLibraryExW = modkernel32.NewProc("LoadLibraryExW") + procFreeLibrary = modkernel32.NewProc("FreeLibrary") + procGetProcAddress = modkernel32.NewProc("GetProcAddress") + procGetVersion = modkernel32.NewProc("GetVersion") + procFormatMessageW = modkernel32.NewProc("FormatMessageW") + procExitProcess = modkernel32.NewProc("ExitProcess") + procCreateFileW = modkernel32.NewProc("CreateFileW") + procReadFile = modkernel32.NewProc("ReadFile") + procWriteFile = modkernel32.NewProc("WriteFile") + procSetFilePointer = modkernel32.NewProc("SetFilePointer") + procCloseHandle = modkernel32.NewProc("CloseHandle") + procGetStdHandle = modkernel32.NewProc("GetStdHandle") + procSetStdHandle = modkernel32.NewProc("SetStdHandle") + procFindFirstFileW = modkernel32.NewProc("FindFirstFileW") + procFindNextFileW = modkernel32.NewProc("FindNextFileW") + procFindClose = modkernel32.NewProc("FindClose") + procGetFileInformationByHandle = modkernel32.NewProc("GetFileInformationByHandle") + procGetCurrentDirectoryW = modkernel32.NewProc("GetCurrentDirectoryW") + procSetCurrentDirectoryW = modkernel32.NewProc("SetCurrentDirectoryW") + procCreateDirectoryW = 
modkernel32.NewProc("CreateDirectoryW") + procRemoveDirectoryW = modkernel32.NewProc("RemoveDirectoryW") + procDeleteFileW = modkernel32.NewProc("DeleteFileW") + procMoveFileW = modkernel32.NewProc("MoveFileW") + procMoveFileExW = modkernel32.NewProc("MoveFileExW") + procGetComputerNameW = modkernel32.NewProc("GetComputerNameW") + procGetComputerNameExW = modkernel32.NewProc("GetComputerNameExW") + procSetEndOfFile = modkernel32.NewProc("SetEndOfFile") + procGetSystemTimeAsFileTime = modkernel32.NewProc("GetSystemTimeAsFileTime") + procGetSystemTimePreciseAsFileTime = modkernel32.NewProc("GetSystemTimePreciseAsFileTime") + procGetTimeZoneInformation = modkernel32.NewProc("GetTimeZoneInformation") + procCreateIoCompletionPort = modkernel32.NewProc("CreateIoCompletionPort") + procGetQueuedCompletionStatus = modkernel32.NewProc("GetQueuedCompletionStatus") + procPostQueuedCompletionStatus = modkernel32.NewProc("PostQueuedCompletionStatus") + procCancelIo = modkernel32.NewProc("CancelIo") + procCancelIoEx = modkernel32.NewProc("CancelIoEx") + procCreateProcessW = modkernel32.NewProc("CreateProcessW") + procOpenProcess = modkernel32.NewProc("OpenProcess") + procTerminateProcess = modkernel32.NewProc("TerminateProcess") + procGetExitCodeProcess = modkernel32.NewProc("GetExitCodeProcess") + procGetStartupInfoW = modkernel32.NewProc("GetStartupInfoW") + procGetCurrentProcess = modkernel32.NewProc("GetCurrentProcess") + procGetProcessTimes = modkernel32.NewProc("GetProcessTimes") + procDuplicateHandle = modkernel32.NewProc("DuplicateHandle") + procWaitForSingleObject = modkernel32.NewProc("WaitForSingleObject") + procGetTempPathW = modkernel32.NewProc("GetTempPathW") + procCreatePipe = modkernel32.NewProc("CreatePipe") + procGetFileType = modkernel32.NewProc("GetFileType") + procCryptAcquireContextW = modadvapi32.NewProc("CryptAcquireContextW") + procCryptReleaseContext = modadvapi32.NewProc("CryptReleaseContext") + procCryptGenRandom = modadvapi32.NewProc("CryptGenRandom") + procGetEnvironmentStringsW = modkernel32.NewProc("GetEnvironmentStringsW") + procFreeEnvironmentStringsW = modkernel32.NewProc("FreeEnvironmentStringsW") + procGetEnvironmentVariableW = modkernel32.NewProc("GetEnvironmentVariableW") + procSetEnvironmentVariableW = modkernel32.NewProc("SetEnvironmentVariableW") + procSetFileTime = modkernel32.NewProc("SetFileTime") + procGetFileAttributesW = modkernel32.NewProc("GetFileAttributesW") + procSetFileAttributesW = modkernel32.NewProc("SetFileAttributesW") + procGetFileAttributesExW = modkernel32.NewProc("GetFileAttributesExW") + procGetCommandLineW = modkernel32.NewProc("GetCommandLineW") + procCommandLineToArgvW = modshell32.NewProc("CommandLineToArgvW") + procLocalFree = modkernel32.NewProc("LocalFree") + procSetHandleInformation = modkernel32.NewProc("SetHandleInformation") + procFlushFileBuffers = modkernel32.NewProc("FlushFileBuffers") + procGetFullPathNameW = modkernel32.NewProc("GetFullPathNameW") + procGetLongPathNameW = modkernel32.NewProc("GetLongPathNameW") + procGetShortPathNameW = modkernel32.NewProc("GetShortPathNameW") + procCreateFileMappingW = modkernel32.NewProc("CreateFileMappingW") + procMapViewOfFile = modkernel32.NewProc("MapViewOfFile") + procUnmapViewOfFile = modkernel32.NewProc("UnmapViewOfFile") + procFlushViewOfFile = modkernel32.NewProc("FlushViewOfFile") + procVirtualLock = modkernel32.NewProc("VirtualLock") + procVirtualUnlock = modkernel32.NewProc("VirtualUnlock") + procVirtualAlloc = modkernel32.NewProc("VirtualAlloc") + procVirtualFree = 
modkernel32.NewProc("VirtualFree") + procVirtualProtect = modkernel32.NewProc("VirtualProtect") + procTransmitFile = modmswsock.NewProc("TransmitFile") + procReadDirectoryChangesW = modkernel32.NewProc("ReadDirectoryChangesW") + procCertOpenSystemStoreW = modcrypt32.NewProc("CertOpenSystemStoreW") + procCertOpenStore = modcrypt32.NewProc("CertOpenStore") + procCertEnumCertificatesInStore = modcrypt32.NewProc("CertEnumCertificatesInStore") + procCertAddCertificateContextToStore = modcrypt32.NewProc("CertAddCertificateContextToStore") + procCertCloseStore = modcrypt32.NewProc("CertCloseStore") + procCertGetCertificateChain = modcrypt32.NewProc("CertGetCertificateChain") + procCertFreeCertificateChain = modcrypt32.NewProc("CertFreeCertificateChain") + procCertCreateCertificateContext = modcrypt32.NewProc("CertCreateCertificateContext") + procCertFreeCertificateContext = modcrypt32.NewProc("CertFreeCertificateContext") + procCertVerifyCertificateChainPolicy = modcrypt32.NewProc("CertVerifyCertificateChainPolicy") + procRegOpenKeyExW = modadvapi32.NewProc("RegOpenKeyExW") + procRegCloseKey = modadvapi32.NewProc("RegCloseKey") + procRegQueryInfoKeyW = modadvapi32.NewProc("RegQueryInfoKeyW") + procRegEnumKeyExW = modadvapi32.NewProc("RegEnumKeyExW") + procRegQueryValueExW = modadvapi32.NewProc("RegQueryValueExW") + procGetCurrentProcessId = modkernel32.NewProc("GetCurrentProcessId") + procGetConsoleMode = modkernel32.NewProc("GetConsoleMode") + procSetConsoleMode = modkernel32.NewProc("SetConsoleMode") + procGetConsoleScreenBufferInfo = modkernel32.NewProc("GetConsoleScreenBufferInfo") + procWriteConsoleW = modkernel32.NewProc("WriteConsoleW") + procReadConsoleW = modkernel32.NewProc("ReadConsoleW") + procCreateToolhelp32Snapshot = modkernel32.NewProc("CreateToolhelp32Snapshot") + procProcess32FirstW = modkernel32.NewProc("Process32FirstW") + procProcess32NextW = modkernel32.NewProc("Process32NextW") + procDeviceIoControl = modkernel32.NewProc("DeviceIoControl") + procCreateSymbolicLinkW = modkernel32.NewProc("CreateSymbolicLinkW") + procCreateHardLinkW = modkernel32.NewProc("CreateHardLinkW") + procGetCurrentThreadId = modkernel32.NewProc("GetCurrentThreadId") + procCreateEventW = modkernel32.NewProc("CreateEventW") + procCreateEventExW = modkernel32.NewProc("CreateEventExW") + procOpenEventW = modkernel32.NewProc("OpenEventW") + procSetEvent = modkernel32.NewProc("SetEvent") + procResetEvent = modkernel32.NewProc("ResetEvent") + procPulseEvent = modkernel32.NewProc("PulseEvent") + procDefineDosDeviceW = modkernel32.NewProc("DefineDosDeviceW") + procDeleteVolumeMountPointW = modkernel32.NewProc("DeleteVolumeMountPointW") + procFindFirstVolumeW = modkernel32.NewProc("FindFirstVolumeW") + procFindFirstVolumeMountPointW = modkernel32.NewProc("FindFirstVolumeMountPointW") + procFindNextVolumeW = modkernel32.NewProc("FindNextVolumeW") + procFindNextVolumeMountPointW = modkernel32.NewProc("FindNextVolumeMountPointW") + procFindVolumeClose = modkernel32.NewProc("FindVolumeClose") + procFindVolumeMountPointClose = modkernel32.NewProc("FindVolumeMountPointClose") + procGetDriveTypeW = modkernel32.NewProc("GetDriveTypeW") + procGetLogicalDrives = modkernel32.NewProc("GetLogicalDrives") + procGetLogicalDriveStringsW = modkernel32.NewProc("GetLogicalDriveStringsW") + procGetVolumeInformationW = modkernel32.NewProc("GetVolumeInformationW") + procGetVolumeInformationByHandleW = modkernel32.NewProc("GetVolumeInformationByHandleW") + procGetVolumeNameForVolumeMountPointW = 
modkernel32.NewProc("GetVolumeNameForVolumeMountPointW") + procGetVolumePathNameW = modkernel32.NewProc("GetVolumePathNameW") + procGetVolumePathNamesForVolumeNameW = modkernel32.NewProc("GetVolumePathNamesForVolumeNameW") + procQueryDosDeviceW = modkernel32.NewProc("QueryDosDeviceW") + procSetVolumeLabelW = modkernel32.NewProc("SetVolumeLabelW") + procSetVolumeMountPointW = modkernel32.NewProc("SetVolumeMountPointW") + procWSAStartup = modws2_32.NewProc("WSAStartup") + procWSACleanup = modws2_32.NewProc("WSACleanup") + procWSAIoctl = modws2_32.NewProc("WSAIoctl") + procsocket = modws2_32.NewProc("socket") + procsetsockopt = modws2_32.NewProc("setsockopt") + procgetsockopt = modws2_32.NewProc("getsockopt") + procbind = modws2_32.NewProc("bind") + procconnect = modws2_32.NewProc("connect") + procgetsockname = modws2_32.NewProc("getsockname") + procgetpeername = modws2_32.NewProc("getpeername") + proclisten = modws2_32.NewProc("listen") + procshutdown = modws2_32.NewProc("shutdown") + procclosesocket = modws2_32.NewProc("closesocket") + procAcceptEx = modmswsock.NewProc("AcceptEx") + procGetAcceptExSockaddrs = modmswsock.NewProc("GetAcceptExSockaddrs") + procWSARecv = modws2_32.NewProc("WSARecv") + procWSASend = modws2_32.NewProc("WSASend") + procWSARecvFrom = modws2_32.NewProc("WSARecvFrom") + procWSASendTo = modws2_32.NewProc("WSASendTo") + procgethostbyname = modws2_32.NewProc("gethostbyname") + procgetservbyname = modws2_32.NewProc("getservbyname") + procntohs = modws2_32.NewProc("ntohs") + procgetprotobyname = modws2_32.NewProc("getprotobyname") + procDnsQuery_W = moddnsapi.NewProc("DnsQuery_W") + procDnsRecordListFree = moddnsapi.NewProc("DnsRecordListFree") + procDnsNameCompare_W = moddnsapi.NewProc("DnsNameCompare_W") + procGetAddrInfoW = modws2_32.NewProc("GetAddrInfoW") + procFreeAddrInfoW = modws2_32.NewProc("FreeAddrInfoW") + procGetIfEntry = modiphlpapi.NewProc("GetIfEntry") + procGetAdaptersInfo = modiphlpapi.NewProc("GetAdaptersInfo") + procSetFileCompletionNotificationModes = modkernel32.NewProc("SetFileCompletionNotificationModes") + procWSAEnumProtocolsW = modws2_32.NewProc("WSAEnumProtocolsW") + procGetAdaptersAddresses = modiphlpapi.NewProc("GetAdaptersAddresses") + procGetACP = modkernel32.NewProc("GetACP") + procMultiByteToWideChar = modkernel32.NewProc("MultiByteToWideChar") + procTranslateNameW = modsecur32.NewProc("TranslateNameW") + procGetUserNameExW = modsecur32.NewProc("GetUserNameExW") + procNetUserGetInfo = modnetapi32.NewProc("NetUserGetInfo") + procNetGetJoinInformation = modnetapi32.NewProc("NetGetJoinInformation") + procNetApiBufferFree = modnetapi32.NewProc("NetApiBufferFree") + procLookupAccountSidW = modadvapi32.NewProc("LookupAccountSidW") + procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW") + procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW") + procConvertStringSidToSidW = modadvapi32.NewProc("ConvertStringSidToSidW") + procGetLengthSid = modadvapi32.NewProc("GetLengthSid") + procCopySid = modadvapi32.NewProc("CopySid") + procAllocateAndInitializeSid = modadvapi32.NewProc("AllocateAndInitializeSid") + procFreeSid = modadvapi32.NewProc("FreeSid") + procEqualSid = modadvapi32.NewProc("EqualSid") + procCheckTokenMembership = modadvapi32.NewProc("CheckTokenMembership") + procOpenProcessToken = modadvapi32.NewProc("OpenProcessToken") + procGetTokenInformation = modadvapi32.NewProc("GetTokenInformation") + procGetUserProfileDirectoryW = moduserenv.NewProc("GetUserProfileDirectoryW") +) + +func 
RegisterEventSource(uncServerName *uint16, sourceName *uint16) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procRegisterEventSourceW.Addr(), 2, uintptr(unsafe.Pointer(uncServerName)), uintptr(unsafe.Pointer(sourceName)), 0) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DeregisterEventSource(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procDeregisterEventSource.Addr(), 1, uintptr(handle), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ReportEvent(log Handle, etype uint16, category uint16, eventId uint32, usrSId uintptr, numStrings uint16, dataSize uint32, strings **uint16, rawData *byte) (err error) { + r1, _, e1 := syscall.Syscall9(procReportEventW.Addr(), 9, uintptr(log), uintptr(etype), uintptr(category), uintptr(eventId), uintptr(usrSId), uintptr(numStrings), uintptr(dataSize), uintptr(unsafe.Pointer(strings)), uintptr(unsafe.Pointer(rawData))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func OpenSCManager(machineName *uint16, databaseName *uint16, access uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procOpenSCManagerW.Addr(), 3, uintptr(unsafe.Pointer(machineName)), uintptr(unsafe.Pointer(databaseName)), uintptr(access)) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CloseServiceHandle(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procCloseServiceHandle.Addr(), 1, uintptr(handle), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateService(mgr Handle, serviceName *uint16, displayName *uint16, access uint32, srvType uint32, startType uint32, errCtl uint32, pathName *uint16, loadOrderGroup *uint16, tagId *uint32, dependencies *uint16, serviceStartName *uint16, password *uint16) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall15(procCreateServiceW.Addr(), 13, uintptr(mgr), uintptr(unsafe.Pointer(serviceName)), uintptr(unsafe.Pointer(displayName)), uintptr(access), uintptr(srvType), uintptr(startType), uintptr(errCtl), uintptr(unsafe.Pointer(pathName)), uintptr(unsafe.Pointer(loadOrderGroup)), uintptr(unsafe.Pointer(tagId)), uintptr(unsafe.Pointer(dependencies)), uintptr(unsafe.Pointer(serviceStartName)), uintptr(unsafe.Pointer(password)), 0, 0) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func OpenService(mgr Handle, serviceName *uint16, access uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procOpenServiceW.Addr(), 3, uintptr(mgr), uintptr(unsafe.Pointer(serviceName)), uintptr(access)) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DeleteService(service Handle) (err error) { + r1, _, e1 := syscall.Syscall(procDeleteService.Addr(), 1, uintptr(service), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func StartService(service Handle, numArgs uint32, argVectors **uint16) (err error) { + r1, _, e1 := syscall.Syscall(procStartServiceW.Addr(), 3, uintptr(service), uintptr(numArgs), uintptr(unsafe.Pointer(argVectors))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + 
} else { + err = syscall.EINVAL + } + } + return +} + +func QueryServiceStatus(service Handle, status *SERVICE_STATUS) (err error) { + r1, _, e1 := syscall.Syscall(procQueryServiceStatus.Addr(), 2, uintptr(service), uintptr(unsafe.Pointer(status)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ControlService(service Handle, control uint32, status *SERVICE_STATUS) (err error) { + r1, _, e1 := syscall.Syscall(procControlService.Addr(), 3, uintptr(service), uintptr(control), uintptr(unsafe.Pointer(status))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func StartServiceCtrlDispatcher(serviceTable *SERVICE_TABLE_ENTRY) (err error) { + r1, _, e1 := syscall.Syscall(procStartServiceCtrlDispatcherW.Addr(), 1, uintptr(unsafe.Pointer(serviceTable)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetServiceStatus(service Handle, serviceStatus *SERVICE_STATUS) (err error) { + r1, _, e1 := syscall.Syscall(procSetServiceStatus.Addr(), 2, uintptr(service), uintptr(unsafe.Pointer(serviceStatus)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ChangeServiceConfig(service Handle, serviceType uint32, startType uint32, errorControl uint32, binaryPathName *uint16, loadOrderGroup *uint16, tagId *uint32, dependencies *uint16, serviceStartName *uint16, password *uint16, displayName *uint16) (err error) { + r1, _, e1 := syscall.Syscall12(procChangeServiceConfigW.Addr(), 11, uintptr(service), uintptr(serviceType), uintptr(startType), uintptr(errorControl), uintptr(unsafe.Pointer(binaryPathName)), uintptr(unsafe.Pointer(loadOrderGroup)), uintptr(unsafe.Pointer(tagId)), uintptr(unsafe.Pointer(dependencies)), uintptr(unsafe.Pointer(serviceStartName)), uintptr(unsafe.Pointer(password)), uintptr(unsafe.Pointer(displayName)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func QueryServiceConfig(service Handle, serviceConfig *QUERY_SERVICE_CONFIG, bufSize uint32, bytesNeeded *uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procQueryServiceConfigW.Addr(), 4, uintptr(service), uintptr(unsafe.Pointer(serviceConfig)), uintptr(bufSize), uintptr(unsafe.Pointer(bytesNeeded)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ChangeServiceConfig2(service Handle, infoLevel uint32, info *byte) (err error) { + r1, _, e1 := syscall.Syscall(procChangeServiceConfig2W.Addr(), 3, uintptr(service), uintptr(infoLevel), uintptr(unsafe.Pointer(info))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func QueryServiceConfig2(service Handle, infoLevel uint32, buff *byte, buffSize uint32, bytesNeeded *uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procQueryServiceConfig2W.Addr(), 5, uintptr(service), uintptr(infoLevel), uintptr(unsafe.Pointer(buff)), uintptr(buffSize), uintptr(unsafe.Pointer(bytesNeeded)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func EnumServicesStatusEx(mgr Handle, infoLevel uint32, serviceType uint32, serviceState uint32, services *byte, bufSize uint32, bytesNeeded *uint32, servicesReturned *uint32, resumeHandle *uint32, groupName *uint16) (err error) { + r1, _, e1 := 
syscall.Syscall12(procEnumServicesStatusExW.Addr(), 10, uintptr(mgr), uintptr(infoLevel), uintptr(serviceType), uintptr(serviceState), uintptr(unsafe.Pointer(services)), uintptr(bufSize), uintptr(unsafe.Pointer(bytesNeeded)), uintptr(unsafe.Pointer(servicesReturned)), uintptr(unsafe.Pointer(resumeHandle)), uintptr(unsafe.Pointer(groupName)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetLastError() (lasterr error) { + r0, _, _ := syscall.Syscall(procGetLastError.Addr(), 0, 0, 0, 0) + if r0 != 0 { + lasterr = syscall.Errno(r0) + } + return +} + +func LoadLibrary(libname string) (handle Handle, err error) { + var _p0 *uint16 + _p0, err = syscall.UTF16PtrFromString(libname) + if err != nil { + return + } + return _LoadLibrary(_p0) +} + +func _LoadLibrary(libname *uint16) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procLoadLibraryW.Addr(), 1, uintptr(unsafe.Pointer(libname)), 0, 0) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func LoadLibraryEx(libname string, zero Handle, flags uintptr) (handle Handle, err error) { + var _p0 *uint16 + _p0, err = syscall.UTF16PtrFromString(libname) + if err != nil { + return + } + return _LoadLibraryEx(_p0, zero, flags) +} + +func _LoadLibraryEx(libname *uint16, zero Handle, flags uintptr) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procLoadLibraryExW.Addr(), 3, uintptr(unsafe.Pointer(libname)), uintptr(zero), uintptr(flags)) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FreeLibrary(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procFreeLibrary.Addr(), 1, uintptr(handle), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetProcAddress(module Handle, procname string) (proc uintptr, err error) { + var _p0 *byte + _p0, err = syscall.BytePtrFromString(procname) + if err != nil { + return + } + return _GetProcAddress(module, _p0) +} + +func _GetProcAddress(module Handle, procname *byte) (proc uintptr, err error) { + r0, _, e1 := syscall.Syscall(procGetProcAddress.Addr(), 2, uintptr(module), uintptr(unsafe.Pointer(procname)), 0) + proc = uintptr(r0) + if proc == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetVersion() (ver uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetVersion.Addr(), 0, 0, 0, 0) + ver = uint32(r0) + if ver == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FormatMessage(flags uint32, msgsrc uintptr, msgid uint32, langid uint32, buf []uint16, args *byte) (n uint32, err error) { + var _p0 *uint16 + if len(buf) > 0 { + _p0 = &buf[0] + } + r0, _, e1 := syscall.Syscall9(procFormatMessageW.Addr(), 7, uintptr(flags), uintptr(msgsrc), uintptr(msgid), uintptr(langid), uintptr(unsafe.Pointer(_p0)), uintptr(len(buf)), uintptr(unsafe.Pointer(args)), 0, 0) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ExitProcess(exitcode uint32) { + syscall.Syscall(procExitProcess.Addr(), 1, uintptr(exitcode), 0, 0) + return +} + +func CreateFile(name *uint16, access uint32, mode uint32, sa *SecurityAttributes, createmode uint32, attrs uint32, templatefile int32) (handle Handle, err error) { + r0, _, e1 := 
syscall.Syscall9(procCreateFileW.Addr(), 7, uintptr(unsafe.Pointer(name)), uintptr(access), uintptr(mode), uintptr(unsafe.Pointer(sa)), uintptr(createmode), uintptr(attrs), uintptr(templatefile), 0, 0) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ReadFile(handle Handle, buf []byte, done *uint32, overlapped *Overlapped) (err error) { + var _p0 *byte + if len(buf) > 0 { + _p0 = &buf[0] + } + r1, _, e1 := syscall.Syscall6(procReadFile.Addr(), 5, uintptr(handle), uintptr(unsafe.Pointer(_p0)), uintptr(len(buf)), uintptr(unsafe.Pointer(done)), uintptr(unsafe.Pointer(overlapped)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WriteFile(handle Handle, buf []byte, done *uint32, overlapped *Overlapped) (err error) { + var _p0 *byte + if len(buf) > 0 { + _p0 = &buf[0] + } + r1, _, e1 := syscall.Syscall6(procWriteFile.Addr(), 5, uintptr(handle), uintptr(unsafe.Pointer(_p0)), uintptr(len(buf)), uintptr(unsafe.Pointer(done)), uintptr(unsafe.Pointer(overlapped)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetFilePointer(handle Handle, lowoffset int32, highoffsetptr *int32, whence uint32) (newlowoffset uint32, err error) { + r0, _, e1 := syscall.Syscall6(procSetFilePointer.Addr(), 4, uintptr(handle), uintptr(lowoffset), uintptr(unsafe.Pointer(highoffsetptr)), uintptr(whence), 0, 0) + newlowoffset = uint32(r0) + if newlowoffset == 0xffffffff { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CloseHandle(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procCloseHandle.Addr(), 1, uintptr(handle), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetStdHandle(stdhandle uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procGetStdHandle.Addr(), 1, uintptr(stdhandle), 0, 0) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetStdHandle(stdhandle uint32, handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procSetStdHandle.Addr(), 2, uintptr(stdhandle), uintptr(handle), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func findFirstFile1(name *uint16, data *win32finddata1) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procFindFirstFileW.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(data)), 0) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func findNextFile1(handle Handle, data *win32finddata1) (err error) { + r1, _, e1 := syscall.Syscall(procFindNextFileW.Addr(), 2, uintptr(handle), uintptr(unsafe.Pointer(data)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FindClose(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procFindClose.Addr(), 1, uintptr(handle), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetFileInformationByHandle(handle Handle, data *ByHandleFileInformation) (err error) { + r1, _, e1 := syscall.Syscall(procGetFileInformationByHandle.Addr(), 2, uintptr(handle), 
uintptr(unsafe.Pointer(data)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetCurrentDirectory(buflen uint32, buf *uint16) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetCurrentDirectoryW.Addr(), 2, uintptr(buflen), uintptr(unsafe.Pointer(buf)), 0) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetCurrentDirectory(path *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procSetCurrentDirectoryW.Addr(), 1, uintptr(unsafe.Pointer(path)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateDirectory(path *uint16, sa *SecurityAttributes) (err error) { + r1, _, e1 := syscall.Syscall(procCreateDirectoryW.Addr(), 2, uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(sa)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func RemoveDirectory(path *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procRemoveDirectoryW.Addr(), 1, uintptr(unsafe.Pointer(path)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DeleteFile(path *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procDeleteFileW.Addr(), 1, uintptr(unsafe.Pointer(path)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func MoveFile(from *uint16, to *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procMoveFileW.Addr(), 2, uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(to)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func MoveFileEx(from *uint16, to *uint16, flags uint32) (err error) { + r1, _, e1 := syscall.Syscall(procMoveFileExW.Addr(), 3, uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(to)), uintptr(flags)) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetComputerName(buf *uint16, n *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetComputerNameW.Addr(), 2, uintptr(unsafe.Pointer(buf)), uintptr(unsafe.Pointer(n)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetComputerNameEx(nametype uint32, buf *uint16, n *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetComputerNameExW.Addr(), 3, uintptr(nametype), uintptr(unsafe.Pointer(buf)), uintptr(unsafe.Pointer(n))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetEndOfFile(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procSetEndOfFile.Addr(), 1, uintptr(handle), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetSystemTimeAsFileTime(time *Filetime) { + syscall.Syscall(procGetSystemTimeAsFileTime.Addr(), 1, uintptr(unsafe.Pointer(time)), 0, 0) + return +} + +func GetSystemTimePreciseAsFileTime(time *Filetime) { + syscall.Syscall(procGetSystemTimePreciseAsFileTime.Addr(), 1, uintptr(unsafe.Pointer(time)), 0, 0) + return +} + +func GetTimeZoneInformation(tzi *Timezoneinformation) (rc uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetTimeZoneInformation.Addr(), 1, uintptr(unsafe.Pointer(tzi)), 0, 0) + rc = uint32(r0) + if rc == 0xffffffff { + if e1 != 0 { + 
err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateIoCompletionPort(filehandle Handle, cphandle Handle, key uint32, threadcnt uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall6(procCreateIoCompletionPort.Addr(), 4, uintptr(filehandle), uintptr(cphandle), uintptr(key), uintptr(threadcnt), 0, 0) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetQueuedCompletionStatus(cphandle Handle, qty *uint32, key *uint32, overlapped **Overlapped, timeout uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procGetQueuedCompletionStatus.Addr(), 5, uintptr(cphandle), uintptr(unsafe.Pointer(qty)), uintptr(unsafe.Pointer(key)), uintptr(unsafe.Pointer(overlapped)), uintptr(timeout), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func PostQueuedCompletionStatus(cphandle Handle, qty uint32, key uint32, overlapped *Overlapped) (err error) { + r1, _, e1 := syscall.Syscall6(procPostQueuedCompletionStatus.Addr(), 4, uintptr(cphandle), uintptr(qty), uintptr(key), uintptr(unsafe.Pointer(overlapped)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CancelIo(s Handle) (err error) { + r1, _, e1 := syscall.Syscall(procCancelIo.Addr(), 1, uintptr(s), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CancelIoEx(s Handle, o *Overlapped) (err error) { + r1, _, e1 := syscall.Syscall(procCancelIoEx.Addr(), 2, uintptr(s), uintptr(unsafe.Pointer(o)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateProcess(appName *uint16, commandLine *uint16, procSecurity *SecurityAttributes, threadSecurity *SecurityAttributes, inheritHandles bool, creationFlags uint32, env *uint16, currentDir *uint16, startupInfo *StartupInfo, outProcInfo *ProcessInformation) (err error) { + var _p0 uint32 + if inheritHandles { + _p0 = 1 + } else { + _p0 = 0 + } + r1, _, e1 := syscall.Syscall12(procCreateProcessW.Addr(), 10, uintptr(unsafe.Pointer(appName)), uintptr(unsafe.Pointer(commandLine)), uintptr(unsafe.Pointer(procSecurity)), uintptr(unsafe.Pointer(threadSecurity)), uintptr(_p0), uintptr(creationFlags), uintptr(unsafe.Pointer(env)), uintptr(unsafe.Pointer(currentDir)), uintptr(unsafe.Pointer(startupInfo)), uintptr(unsafe.Pointer(outProcInfo)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func OpenProcess(da uint32, inheritHandle bool, pid uint32) (handle Handle, err error) { + var _p0 uint32 + if inheritHandle { + _p0 = 1 + } else { + _p0 = 0 + } + r0, _, e1 := syscall.Syscall(procOpenProcess.Addr(), 3, uintptr(da), uintptr(_p0), uintptr(pid)) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func TerminateProcess(handle Handle, exitcode uint32) (err error) { + r1, _, e1 := syscall.Syscall(procTerminateProcess.Addr(), 2, uintptr(handle), uintptr(exitcode), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetExitCodeProcess(handle Handle, exitcode *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetExitCodeProcess.Addr(), 2, uintptr(handle), uintptr(unsafe.Pointer(exitcode)), 0) + if r1 == 0 { + if e1 != 0 { + 
err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetStartupInfo(startupInfo *StartupInfo) (err error) { + r1, _, e1 := syscall.Syscall(procGetStartupInfoW.Addr(), 1, uintptr(unsafe.Pointer(startupInfo)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetCurrentProcess() (pseudoHandle Handle, err error) { + r0, _, e1 := syscall.Syscall(procGetCurrentProcess.Addr(), 0, 0, 0, 0) + pseudoHandle = Handle(r0) + if pseudoHandle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetProcessTimes(handle Handle, creationTime *Filetime, exitTime *Filetime, kernelTime *Filetime, userTime *Filetime) (err error) { + r1, _, e1 := syscall.Syscall6(procGetProcessTimes.Addr(), 5, uintptr(handle), uintptr(unsafe.Pointer(creationTime)), uintptr(unsafe.Pointer(exitTime)), uintptr(unsafe.Pointer(kernelTime)), uintptr(unsafe.Pointer(userTime)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DuplicateHandle(hSourceProcessHandle Handle, hSourceHandle Handle, hTargetProcessHandle Handle, lpTargetHandle *Handle, dwDesiredAccess uint32, bInheritHandle bool, dwOptions uint32) (err error) { + var _p0 uint32 + if bInheritHandle { + _p0 = 1 + } else { + _p0 = 0 + } + r1, _, e1 := syscall.Syscall9(procDuplicateHandle.Addr(), 7, uintptr(hSourceProcessHandle), uintptr(hSourceHandle), uintptr(hTargetProcessHandle), uintptr(unsafe.Pointer(lpTargetHandle)), uintptr(dwDesiredAccess), uintptr(_p0), uintptr(dwOptions), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WaitForSingleObject(handle Handle, waitMilliseconds uint32) (event uint32, err error) { + r0, _, e1 := syscall.Syscall(procWaitForSingleObject.Addr(), 2, uintptr(handle), uintptr(waitMilliseconds), 0) + event = uint32(r0) + if event == 0xffffffff { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetTempPath(buflen uint32, buf *uint16) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetTempPathW.Addr(), 2, uintptr(buflen), uintptr(unsafe.Pointer(buf)), 0) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreatePipe(readhandle *Handle, writehandle *Handle, sa *SecurityAttributes, size uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procCreatePipe.Addr(), 4, uintptr(unsafe.Pointer(readhandle)), uintptr(unsafe.Pointer(writehandle)), uintptr(unsafe.Pointer(sa)), uintptr(size), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetFileType(filehandle Handle) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetFileType.Addr(), 1, uintptr(filehandle), 0, 0) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CryptAcquireContext(provhandle *Handle, container *uint16, provider *uint16, provtype uint32, flags uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procCryptAcquireContextW.Addr(), 5, uintptr(unsafe.Pointer(provhandle)), uintptr(unsafe.Pointer(container)), uintptr(unsafe.Pointer(provider)), uintptr(provtype), uintptr(flags), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CryptReleaseContext(provhandle Handle, 
flags uint32) (err error) { + r1, _, e1 := syscall.Syscall(procCryptReleaseContext.Addr(), 2, uintptr(provhandle), uintptr(flags), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CryptGenRandom(provhandle Handle, buflen uint32, buf *byte) (err error) { + r1, _, e1 := syscall.Syscall(procCryptGenRandom.Addr(), 3, uintptr(provhandle), uintptr(buflen), uintptr(unsafe.Pointer(buf))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetEnvironmentStrings() (envs *uint16, err error) { + r0, _, e1 := syscall.Syscall(procGetEnvironmentStringsW.Addr(), 0, 0, 0, 0) + envs = (*uint16)(unsafe.Pointer(r0)) + if envs == nil { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FreeEnvironmentStrings(envs *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procFreeEnvironmentStringsW.Addr(), 1, uintptr(unsafe.Pointer(envs)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetEnvironmentVariable(name *uint16, buffer *uint16, size uint32) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetEnvironmentVariableW.Addr(), 3, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(buffer)), uintptr(size)) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetEnvironmentVariable(name *uint16, value *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procSetEnvironmentVariableW.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(value)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetFileTime(handle Handle, ctime *Filetime, atime *Filetime, wtime *Filetime) (err error) { + r1, _, e1 := syscall.Syscall6(procSetFileTime.Addr(), 4, uintptr(handle), uintptr(unsafe.Pointer(ctime)), uintptr(unsafe.Pointer(atime)), uintptr(unsafe.Pointer(wtime)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetFileAttributes(name *uint16) (attrs uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetFileAttributesW.Addr(), 1, uintptr(unsafe.Pointer(name)), 0, 0) + attrs = uint32(r0) + if attrs == INVALID_FILE_ATTRIBUTES { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetFileAttributes(name *uint16, attrs uint32) (err error) { + r1, _, e1 := syscall.Syscall(procSetFileAttributesW.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(attrs), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetFileAttributesEx(name *uint16, level uint32, info *byte) (err error) { + r1, _, e1 := syscall.Syscall(procGetFileAttributesExW.Addr(), 3, uintptr(unsafe.Pointer(name)), uintptr(level), uintptr(unsafe.Pointer(info))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetCommandLine() (cmd *uint16) { + r0, _, _ := syscall.Syscall(procGetCommandLineW.Addr(), 0, 0, 0, 0) + cmd = (*uint16)(unsafe.Pointer(r0)) + return +} + +func CommandLineToArgv(cmd *uint16, argc *int32) (argv *[8192]*[8192]uint16, err error) { + r0, _, e1 := syscall.Syscall(procCommandLineToArgvW.Addr(), 2, uintptr(unsafe.Pointer(cmd)), uintptr(unsafe.Pointer(argc)), 0) + argv = (*[8192]*[8192]uint16)(unsafe.Pointer(r0)) 
+ if argv == nil { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func LocalFree(hmem Handle) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procLocalFree.Addr(), 1, uintptr(hmem), 0, 0) + handle = Handle(r0) + if handle != 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetHandleInformation(handle Handle, mask uint32, flags uint32) (err error) { + r1, _, e1 := syscall.Syscall(procSetHandleInformation.Addr(), 3, uintptr(handle), uintptr(mask), uintptr(flags)) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FlushFileBuffers(handle Handle) (err error) { + r1, _, e1 := syscall.Syscall(procFlushFileBuffers.Addr(), 1, uintptr(handle), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetFullPathName(path *uint16, buflen uint32, buf *uint16, fname **uint16) (n uint32, err error) { + r0, _, e1 := syscall.Syscall6(procGetFullPathNameW.Addr(), 4, uintptr(unsafe.Pointer(path)), uintptr(buflen), uintptr(unsafe.Pointer(buf)), uintptr(unsafe.Pointer(fname)), 0, 0) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetLongPathName(path *uint16, buf *uint16, buflen uint32) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetLongPathNameW.Addr(), 3, uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(buf)), uintptr(buflen)) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetShortPathName(longpath *uint16, shortpath *uint16, buflen uint32) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetShortPathNameW.Addr(), 3, uintptr(unsafe.Pointer(longpath)), uintptr(unsafe.Pointer(shortpath)), uintptr(buflen)) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateFileMapping(fhandle Handle, sa *SecurityAttributes, prot uint32, maxSizeHigh uint32, maxSizeLow uint32, name *uint16) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall6(procCreateFileMappingW.Addr(), 6, uintptr(fhandle), uintptr(unsafe.Pointer(sa)), uintptr(prot), uintptr(maxSizeHigh), uintptr(maxSizeLow), uintptr(unsafe.Pointer(name))) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func MapViewOfFile(handle Handle, access uint32, offsetHigh uint32, offsetLow uint32, length uintptr) (addr uintptr, err error) { + r0, _, e1 := syscall.Syscall6(procMapViewOfFile.Addr(), 5, uintptr(handle), uintptr(access), uintptr(offsetHigh), uintptr(offsetLow), uintptr(length), 0) + addr = uintptr(r0) + if addr == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func UnmapViewOfFile(addr uintptr) (err error) { + r1, _, e1 := syscall.Syscall(procUnmapViewOfFile.Addr(), 1, uintptr(addr), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FlushViewOfFile(addr uintptr, length uintptr) (err error) { + r1, _, e1 := syscall.Syscall(procFlushViewOfFile.Addr(), 2, uintptr(addr), uintptr(length), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func VirtualLock(addr uintptr, length uintptr) (err error) { + r1, _, 
e1 := syscall.Syscall(procVirtualLock.Addr(), 2, uintptr(addr), uintptr(length), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func VirtualUnlock(addr uintptr, length uintptr) (err error) { + r1, _, e1 := syscall.Syscall(procVirtualUnlock.Addr(), 2, uintptr(addr), uintptr(length), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func VirtualAlloc(address uintptr, size uintptr, alloctype uint32, protect uint32) (value uintptr, err error) { + r0, _, e1 := syscall.Syscall6(procVirtualAlloc.Addr(), 4, uintptr(address), uintptr(size), uintptr(alloctype), uintptr(protect), 0, 0) + value = uintptr(r0) + if value == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func VirtualFree(address uintptr, size uintptr, freetype uint32) (err error) { + r1, _, e1 := syscall.Syscall(procVirtualFree.Addr(), 3, uintptr(address), uintptr(size), uintptr(freetype)) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func VirtualProtect(address uintptr, size uintptr, newprotect uint32, oldprotect *uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procVirtualProtect.Addr(), 4, uintptr(address), uintptr(size), uintptr(newprotect), uintptr(unsafe.Pointer(oldprotect)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func TransmitFile(s Handle, handle Handle, bytesToWrite uint32, bytsPerSend uint32, overlapped *Overlapped, transmitFileBuf *TransmitFileBuffers, flags uint32) (err error) { + r1, _, e1 := syscall.Syscall9(procTransmitFile.Addr(), 7, uintptr(s), uintptr(handle), uintptr(bytesToWrite), uintptr(bytsPerSend), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(transmitFileBuf)), uintptr(flags), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ReadDirectoryChanges(handle Handle, buf *byte, buflen uint32, watchSubTree bool, mask uint32, retlen *uint32, overlapped *Overlapped, completionRoutine uintptr) (err error) { + var _p0 uint32 + if watchSubTree { + _p0 = 1 + } else { + _p0 = 0 + } + r1, _, e1 := syscall.Syscall9(procReadDirectoryChangesW.Addr(), 8, uintptr(handle), uintptr(unsafe.Pointer(buf)), uintptr(buflen), uintptr(_p0), uintptr(mask), uintptr(unsafe.Pointer(retlen)), uintptr(unsafe.Pointer(overlapped)), uintptr(completionRoutine), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertOpenSystemStore(hprov Handle, name *uint16) (store Handle, err error) { + r0, _, e1 := syscall.Syscall(procCertOpenSystemStoreW.Addr(), 2, uintptr(hprov), uintptr(unsafe.Pointer(name)), 0) + store = Handle(r0) + if store == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertOpenStore(storeProvider uintptr, msgAndCertEncodingType uint32, cryptProv uintptr, flags uint32, para uintptr) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall6(procCertOpenStore.Addr(), 5, uintptr(storeProvider), uintptr(msgAndCertEncodingType), uintptr(cryptProv), uintptr(flags), uintptr(para), 0) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertEnumCertificatesInStore(store Handle, prevContext *CertContext) (context *CertContext, err error) { 
+ r0, _, e1 := syscall.Syscall(procCertEnumCertificatesInStore.Addr(), 2, uintptr(store), uintptr(unsafe.Pointer(prevContext)), 0) + context = (*CertContext)(unsafe.Pointer(r0)) + if context == nil { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertAddCertificateContextToStore(store Handle, certContext *CertContext, addDisposition uint32, storeContext **CertContext) (err error) { + r1, _, e1 := syscall.Syscall6(procCertAddCertificateContextToStore.Addr(), 4, uintptr(store), uintptr(unsafe.Pointer(certContext)), uintptr(addDisposition), uintptr(unsafe.Pointer(storeContext)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertCloseStore(store Handle, flags uint32) (err error) { + r1, _, e1 := syscall.Syscall(procCertCloseStore.Addr(), 2, uintptr(store), uintptr(flags), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertGetCertificateChain(engine Handle, leaf *CertContext, time *Filetime, additionalStore Handle, para *CertChainPara, flags uint32, reserved uintptr, chainCtx **CertChainContext) (err error) { + r1, _, e1 := syscall.Syscall9(procCertGetCertificateChain.Addr(), 8, uintptr(engine), uintptr(unsafe.Pointer(leaf)), uintptr(unsafe.Pointer(time)), uintptr(additionalStore), uintptr(unsafe.Pointer(para)), uintptr(flags), uintptr(reserved), uintptr(unsafe.Pointer(chainCtx)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertFreeCertificateChain(ctx *CertChainContext) { + syscall.Syscall(procCertFreeCertificateChain.Addr(), 1, uintptr(unsafe.Pointer(ctx)), 0, 0) + return +} + +func CertCreateCertificateContext(certEncodingType uint32, certEncoded *byte, encodedLen uint32) (context *CertContext, err error) { + r0, _, e1 := syscall.Syscall(procCertCreateCertificateContext.Addr(), 3, uintptr(certEncodingType), uintptr(unsafe.Pointer(certEncoded)), uintptr(encodedLen)) + context = (*CertContext)(unsafe.Pointer(r0)) + if context == nil { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertFreeCertificateContext(ctx *CertContext) (err error) { + r1, _, e1 := syscall.Syscall(procCertFreeCertificateContext.Addr(), 1, uintptr(unsafe.Pointer(ctx)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CertVerifyCertificateChainPolicy(policyOID uintptr, chain *CertChainContext, para *CertChainPolicyPara, status *CertChainPolicyStatus) (err error) { + r1, _, e1 := syscall.Syscall6(procCertVerifyCertificateChainPolicy.Addr(), 4, uintptr(policyOID), uintptr(unsafe.Pointer(chain)), uintptr(unsafe.Pointer(para)), uintptr(unsafe.Pointer(status)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func RegOpenKeyEx(key Handle, subkey *uint16, options uint32, desiredAccess uint32, result *Handle) (regerrno error) { + r0, _, _ := syscall.Syscall6(procRegOpenKeyExW.Addr(), 5, uintptr(key), uintptr(unsafe.Pointer(subkey)), uintptr(options), uintptr(desiredAccess), uintptr(unsafe.Pointer(result)), 0) + if r0 != 0 { + regerrno = syscall.Errno(r0) + } + return +} + +func RegCloseKey(key Handle) (regerrno error) { + r0, _, _ := syscall.Syscall(procRegCloseKey.Addr(), 1, uintptr(key), 0, 0) + if r0 != 0 { + regerrno = syscall.Errno(r0) + } + return +} + +func RegQueryInfoKey(key 
Handle, class *uint16, classLen *uint32, reserved *uint32, subkeysLen *uint32, maxSubkeyLen *uint32, maxClassLen *uint32, valuesLen *uint32, maxValueNameLen *uint32, maxValueLen *uint32, saLen *uint32, lastWriteTime *Filetime) (regerrno error) { + r0, _, _ := syscall.Syscall12(procRegQueryInfoKeyW.Addr(), 12, uintptr(key), uintptr(unsafe.Pointer(class)), uintptr(unsafe.Pointer(classLen)), uintptr(unsafe.Pointer(reserved)), uintptr(unsafe.Pointer(subkeysLen)), uintptr(unsafe.Pointer(maxSubkeyLen)), uintptr(unsafe.Pointer(maxClassLen)), uintptr(unsafe.Pointer(valuesLen)), uintptr(unsafe.Pointer(maxValueNameLen)), uintptr(unsafe.Pointer(maxValueLen)), uintptr(unsafe.Pointer(saLen)), uintptr(unsafe.Pointer(lastWriteTime))) + if r0 != 0 { + regerrno = syscall.Errno(r0) + } + return +} + +func RegEnumKeyEx(key Handle, index uint32, name *uint16, nameLen *uint32, reserved *uint32, class *uint16, classLen *uint32, lastWriteTime *Filetime) (regerrno error) { + r0, _, _ := syscall.Syscall9(procRegEnumKeyExW.Addr(), 8, uintptr(key), uintptr(index), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(nameLen)), uintptr(unsafe.Pointer(reserved)), uintptr(unsafe.Pointer(class)), uintptr(unsafe.Pointer(classLen)), uintptr(unsafe.Pointer(lastWriteTime)), 0) + if r0 != 0 { + regerrno = syscall.Errno(r0) + } + return +} + +func RegQueryValueEx(key Handle, name *uint16, reserved *uint32, valtype *uint32, buf *byte, buflen *uint32) (regerrno error) { + r0, _, _ := syscall.Syscall6(procRegQueryValueExW.Addr(), 6, uintptr(key), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(reserved)), uintptr(unsafe.Pointer(valtype)), uintptr(unsafe.Pointer(buf)), uintptr(unsafe.Pointer(buflen))) + if r0 != 0 { + regerrno = syscall.Errno(r0) + } + return +} + +func getCurrentProcessId() (pid uint32) { + r0, _, _ := syscall.Syscall(procGetCurrentProcessId.Addr(), 0, 0, 0, 0) + pid = uint32(r0) + return +} + +func GetConsoleMode(console Handle, mode *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(console), uintptr(unsafe.Pointer(mode)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetConsoleMode(console Handle, mode uint32) (err error) { + r1, _, e1 := syscall.Syscall(procSetConsoleMode.Addr(), 2, uintptr(console), uintptr(mode), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetConsoleScreenBufferInfo(console Handle, info *ConsoleScreenBufferInfo) (err error) { + r1, _, e1 := syscall.Syscall(procGetConsoleScreenBufferInfo.Addr(), 2, uintptr(console), uintptr(unsafe.Pointer(info)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WriteConsole(console Handle, buf *uint16, towrite uint32, written *uint32, reserved *byte) (err error) { + r1, _, e1 := syscall.Syscall6(procWriteConsoleW.Addr(), 5, uintptr(console), uintptr(unsafe.Pointer(buf)), uintptr(towrite), uintptr(unsafe.Pointer(written)), uintptr(unsafe.Pointer(reserved)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ReadConsole(console Handle, buf *uint16, toread uint32, read *uint32, inputControl *byte) (err error) { + r1, _, e1 := syscall.Syscall6(procReadConsoleW.Addr(), 5, uintptr(console), uintptr(unsafe.Pointer(buf)), uintptr(toread), uintptr(unsafe.Pointer(read)), uintptr(unsafe.Pointer(inputControl)), 0) + if r1 == 0 { + if e1 != 0 { + err = 
errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateToolhelp32Snapshot(flags uint32, processId uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procCreateToolhelp32Snapshot.Addr(), 2, uintptr(flags), uintptr(processId), 0) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func Process32First(snapshot Handle, procEntry *ProcessEntry32) (err error) { + r1, _, e1 := syscall.Syscall(procProcess32FirstW.Addr(), 2, uintptr(snapshot), uintptr(unsafe.Pointer(procEntry)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func Process32Next(snapshot Handle, procEntry *ProcessEntry32) (err error) { + r1, _, e1 := syscall.Syscall(procProcess32NextW.Addr(), 2, uintptr(snapshot), uintptr(unsafe.Pointer(procEntry)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DeviceIoControl(handle Handle, ioControlCode uint32, inBuffer *byte, inBufferSize uint32, outBuffer *byte, outBufferSize uint32, bytesReturned *uint32, overlapped *Overlapped) (err error) { + r1, _, e1 := syscall.Syscall9(procDeviceIoControl.Addr(), 8, uintptr(handle), uintptr(ioControlCode), uintptr(unsafe.Pointer(inBuffer)), uintptr(inBufferSize), uintptr(unsafe.Pointer(outBuffer)), uintptr(outBufferSize), uintptr(unsafe.Pointer(bytesReturned)), uintptr(unsafe.Pointer(overlapped)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateSymbolicLink(symlinkfilename *uint16, targetfilename *uint16, flags uint32) (err error) { + r1, _, e1 := syscall.Syscall(procCreateSymbolicLinkW.Addr(), 3, uintptr(unsafe.Pointer(symlinkfilename)), uintptr(unsafe.Pointer(targetfilename)), uintptr(flags)) + if r1&0xff == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateHardLink(filename *uint16, existingfilename *uint16, reserved uintptr) (err error) { + r1, _, e1 := syscall.Syscall(procCreateHardLinkW.Addr(), 3, uintptr(unsafe.Pointer(filename)), uintptr(unsafe.Pointer(existingfilename)), uintptr(reserved)) + if r1&0xff == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetCurrentThreadId() (id uint32) { + r0, _, _ := syscall.Syscall(procGetCurrentThreadId.Addr(), 0, 0, 0, 0) + id = uint32(r0) + return +} + +func CreateEvent(eventAttrs *SecurityAttributes, manualReset uint32, initialState uint32, name *uint16) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall6(procCreateEventW.Addr(), 4, uintptr(unsafe.Pointer(eventAttrs)), uintptr(manualReset), uintptr(initialState), uintptr(unsafe.Pointer(name)), 0, 0) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func CreateEventEx(eventAttrs *SecurityAttributes, name *uint16, flags uint32, desiredAccess uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall6(procCreateEventExW.Addr(), 4, uintptr(unsafe.Pointer(eventAttrs)), uintptr(unsafe.Pointer(name)), uintptr(flags), uintptr(desiredAccess), 0, 0) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func OpenEvent(desiredAccess uint32, inheritHandle bool, name *uint16) (handle Handle, err error) { + var _p0 uint32 + if inheritHandle { + _p0 
= 1 + } else { + _p0 = 0 + } + r0, _, e1 := syscall.Syscall(procOpenEventW.Addr(), 3, uintptr(desiredAccess), uintptr(_p0), uintptr(unsafe.Pointer(name))) + handle = Handle(r0) + if handle == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetEvent(event Handle) (err error) { + r1, _, e1 := syscall.Syscall(procSetEvent.Addr(), 1, uintptr(event), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ResetEvent(event Handle) (err error) { + r1, _, e1 := syscall.Syscall(procResetEvent.Addr(), 1, uintptr(event), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func PulseEvent(event Handle) (err error) { + r1, _, e1 := syscall.Syscall(procPulseEvent.Addr(), 1, uintptr(event), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DefineDosDevice(flags uint32, deviceName *uint16, targetPath *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procDefineDosDeviceW.Addr(), 3, uintptr(flags), uintptr(unsafe.Pointer(deviceName)), uintptr(unsafe.Pointer(targetPath))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DeleteVolumeMountPoint(volumeMountPoint *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procDeleteVolumeMountPointW.Addr(), 1, uintptr(unsafe.Pointer(volumeMountPoint)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FindFirstVolume(volumeName *uint16, bufferLength uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procFindFirstVolumeW.Addr(), 2, uintptr(unsafe.Pointer(volumeName)), uintptr(bufferLength), 0) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FindFirstVolumeMountPoint(rootPathName *uint16, volumeMountPoint *uint16, bufferLength uint32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procFindFirstVolumeMountPointW.Addr(), 3, uintptr(unsafe.Pointer(rootPathName)), uintptr(unsafe.Pointer(volumeMountPoint)), uintptr(bufferLength)) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FindNextVolume(findVolume Handle, volumeName *uint16, bufferLength uint32) (err error) { + r1, _, e1 := syscall.Syscall(procFindNextVolumeW.Addr(), 3, uintptr(findVolume), uintptr(unsafe.Pointer(volumeName)), uintptr(bufferLength)) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FindNextVolumeMountPoint(findVolumeMountPoint Handle, volumeMountPoint *uint16, bufferLength uint32) (err error) { + r1, _, e1 := syscall.Syscall(procFindNextVolumeMountPointW.Addr(), 3, uintptr(findVolumeMountPoint), uintptr(unsafe.Pointer(volumeMountPoint)), uintptr(bufferLength)) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FindVolumeClose(findVolume Handle) (err error) { + r1, _, e1 := syscall.Syscall(procFindVolumeClose.Addr(), 1, uintptr(findVolume), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FindVolumeMountPointClose(findVolumeMountPoint Handle) (err error) { + r1, _, e1 := 
syscall.Syscall(procFindVolumeMountPointClose.Addr(), 1, uintptr(findVolumeMountPoint), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetDriveType(rootPathName *uint16) (driveType uint32) { + r0, _, _ := syscall.Syscall(procGetDriveTypeW.Addr(), 1, uintptr(unsafe.Pointer(rootPathName)), 0, 0) + driveType = uint32(r0) + return +} + +func GetLogicalDrives() (drivesBitMask uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetLogicalDrives.Addr(), 0, 0, 0, 0) + drivesBitMask = uint32(r0) + if drivesBitMask == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetLogicalDriveStrings(bufferLength uint32, buffer *uint16) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procGetLogicalDriveStringsW.Addr(), 2, uintptr(bufferLength), uintptr(unsafe.Pointer(buffer)), 0) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetVolumeInformation(rootPathName *uint16, volumeNameBuffer *uint16, volumeNameSize uint32, volumeNameSerialNumber *uint32, maximumComponentLength *uint32, fileSystemFlags *uint32, fileSystemNameBuffer *uint16, fileSystemNameSize uint32) (err error) { + r1, _, e1 := syscall.Syscall9(procGetVolumeInformationW.Addr(), 8, uintptr(unsafe.Pointer(rootPathName)), uintptr(unsafe.Pointer(volumeNameBuffer)), uintptr(volumeNameSize), uintptr(unsafe.Pointer(volumeNameSerialNumber)), uintptr(unsafe.Pointer(maximumComponentLength)), uintptr(unsafe.Pointer(fileSystemFlags)), uintptr(unsafe.Pointer(fileSystemNameBuffer)), uintptr(fileSystemNameSize), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetVolumeInformationByHandle(file Handle, volumeNameBuffer *uint16, volumeNameSize uint32, volumeNameSerialNumber *uint32, maximumComponentLength *uint32, fileSystemFlags *uint32, fileSystemNameBuffer *uint16, fileSystemNameSize uint32) (err error) { + r1, _, e1 := syscall.Syscall9(procGetVolumeInformationByHandleW.Addr(), 8, uintptr(file), uintptr(unsafe.Pointer(volumeNameBuffer)), uintptr(volumeNameSize), uintptr(unsafe.Pointer(volumeNameSerialNumber)), uintptr(unsafe.Pointer(maximumComponentLength)), uintptr(unsafe.Pointer(fileSystemFlags)), uintptr(unsafe.Pointer(fileSystemNameBuffer)), uintptr(fileSystemNameSize), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetVolumeNameForVolumeMountPoint(volumeMountPoint *uint16, volumeName *uint16, bufferlength uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetVolumeNameForVolumeMountPointW.Addr(), 3, uintptr(unsafe.Pointer(volumeMountPoint)), uintptr(unsafe.Pointer(volumeName)), uintptr(bufferlength)) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetVolumePathName(fileName *uint16, volumePathName *uint16, bufferLength uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetVolumePathNameW.Addr(), 3, uintptr(unsafe.Pointer(fileName)), uintptr(unsafe.Pointer(volumePathName)), uintptr(bufferLength)) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetVolumePathNamesForVolumeName(volumeName *uint16, volumePathNames *uint16, bufferLength uint32, returnLength *uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procGetVolumePathNamesForVolumeNameW.Addr(), 4, 
uintptr(unsafe.Pointer(volumeName)), uintptr(unsafe.Pointer(volumePathNames)), uintptr(bufferLength), uintptr(unsafe.Pointer(returnLength)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func QueryDosDevice(deviceName *uint16, targetPath *uint16, max uint32) (n uint32, err error) { + r0, _, e1 := syscall.Syscall(procQueryDosDeviceW.Addr(), 3, uintptr(unsafe.Pointer(deviceName)), uintptr(unsafe.Pointer(targetPath)), uintptr(max)) + n = uint32(r0) + if n == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetVolumeLabel(rootPathName *uint16, volumeName *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procSetVolumeLabelW.Addr(), 2, uintptr(unsafe.Pointer(rootPathName)), uintptr(unsafe.Pointer(volumeName)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func SetVolumeMountPoint(volumeMountPoint *uint16, volumeName *uint16) (err error) { + r1, _, e1 := syscall.Syscall(procSetVolumeMountPointW.Addr(), 2, uintptr(unsafe.Pointer(volumeMountPoint)), uintptr(unsafe.Pointer(volumeName)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WSAStartup(verreq uint32, data *WSAData) (sockerr error) { + r0, _, _ := syscall.Syscall(procWSAStartup.Addr(), 2, uintptr(verreq), uintptr(unsafe.Pointer(data)), 0) + if r0 != 0 { + sockerr = syscall.Errno(r0) + } + return +} + +func WSACleanup() (err error) { + r1, _, e1 := syscall.Syscall(procWSACleanup.Addr(), 0, 0, 0, 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WSAIoctl(s Handle, iocc uint32, inbuf *byte, cbif uint32, outbuf *byte, cbob uint32, cbbr *uint32, overlapped *Overlapped, completionRoutine uintptr) (err error) { + r1, _, e1 := syscall.Syscall9(procWSAIoctl.Addr(), 9, uintptr(s), uintptr(iocc), uintptr(unsafe.Pointer(inbuf)), uintptr(cbif), uintptr(unsafe.Pointer(outbuf)), uintptr(cbob), uintptr(unsafe.Pointer(cbbr)), uintptr(unsafe.Pointer(overlapped)), uintptr(completionRoutine)) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func socket(af int32, typ int32, protocol int32) (handle Handle, err error) { + r0, _, e1 := syscall.Syscall(procsocket.Addr(), 3, uintptr(af), uintptr(typ), uintptr(protocol)) + handle = Handle(r0) + if handle == InvalidHandle { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func Setsockopt(s Handle, level int32, optname int32, optval *byte, optlen int32) (err error) { + r1, _, e1 := syscall.Syscall6(procsetsockopt.Addr(), 5, uintptr(s), uintptr(level), uintptr(optname), uintptr(unsafe.Pointer(optval)), uintptr(optlen), 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func Getsockopt(s Handle, level int32, optname int32, optval *byte, optlen *int32) (err error) { + r1, _, e1 := syscall.Syscall6(procgetsockopt.Addr(), 5, uintptr(s), uintptr(level), uintptr(optname), uintptr(unsafe.Pointer(optval)), uintptr(unsafe.Pointer(optlen)), 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func bind(s Handle, name unsafe.Pointer, namelen int32) (err error) { + r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), 
uintptr(namelen)) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func connect(s Handle, name unsafe.Pointer, namelen int32) (err error) { + r1, _, e1 := syscall.Syscall(procconnect.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen)) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func getsockname(s Handle, rsa *RawSockaddrAny, addrlen *int32) (err error) { + r1, _, e1 := syscall.Syscall(procgetsockname.Addr(), 3, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func getpeername(s Handle, rsa *RawSockaddrAny, addrlen *int32) (err error) { + r1, _, e1 := syscall.Syscall(procgetpeername.Addr(), 3, uintptr(s), uintptr(unsafe.Pointer(rsa)), uintptr(unsafe.Pointer(addrlen))) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func listen(s Handle, backlog int32) (err error) { + r1, _, e1 := syscall.Syscall(proclisten.Addr(), 2, uintptr(s), uintptr(backlog), 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func shutdown(s Handle, how int32) (err error) { + r1, _, e1 := syscall.Syscall(procshutdown.Addr(), 2, uintptr(s), uintptr(how), 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func Closesocket(s Handle) (err error) { + r1, _, e1 := syscall.Syscall(procclosesocket.Addr(), 1, uintptr(s), 0, 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func AcceptEx(ls Handle, as Handle, buf *byte, rxdatalen uint32, laddrlen uint32, raddrlen uint32, recvd *uint32, overlapped *Overlapped) (err error) { + r1, _, e1 := syscall.Syscall9(procAcceptEx.Addr(), 8, uintptr(ls), uintptr(as), uintptr(unsafe.Pointer(buf)), uintptr(rxdatalen), uintptr(laddrlen), uintptr(raddrlen), uintptr(unsafe.Pointer(recvd)), uintptr(unsafe.Pointer(overlapped)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetAcceptExSockaddrs(buf *byte, rxdatalen uint32, laddrlen uint32, raddrlen uint32, lrsa **RawSockaddrAny, lrsalen *int32, rrsa **RawSockaddrAny, rrsalen *int32) { + syscall.Syscall9(procGetAcceptExSockaddrs.Addr(), 8, uintptr(unsafe.Pointer(buf)), uintptr(rxdatalen), uintptr(laddrlen), uintptr(raddrlen), uintptr(unsafe.Pointer(lrsa)), uintptr(unsafe.Pointer(lrsalen)), uintptr(unsafe.Pointer(rrsa)), uintptr(unsafe.Pointer(rrsalen)), 0) + return +} + +func WSARecv(s Handle, bufs *WSABuf, bufcnt uint32, recvd *uint32, flags *uint32, overlapped *Overlapped, croutine *byte) (err error) { + r1, _, e1 := syscall.Syscall9(procWSARecv.Addr(), 7, uintptr(s), uintptr(unsafe.Pointer(bufs)), uintptr(bufcnt), uintptr(unsafe.Pointer(recvd)), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(croutine)), 0, 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WSASend(s Handle, bufs *WSABuf, bufcnt uint32, sent *uint32, flags uint32, overlapped *Overlapped, croutine *byte) (err error) { + r1, _, e1 := syscall.Syscall9(procWSASend.Addr(), 7, uintptr(s), uintptr(unsafe.Pointer(bufs)), 
uintptr(bufcnt), uintptr(unsafe.Pointer(sent)), uintptr(flags), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(croutine)), 0, 0) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WSARecvFrom(s Handle, bufs *WSABuf, bufcnt uint32, recvd *uint32, flags *uint32, from *RawSockaddrAny, fromlen *int32, overlapped *Overlapped, croutine *byte) (err error) { + r1, _, e1 := syscall.Syscall9(procWSARecvFrom.Addr(), 9, uintptr(s), uintptr(unsafe.Pointer(bufs)), uintptr(bufcnt), uintptr(unsafe.Pointer(recvd)), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(from)), uintptr(unsafe.Pointer(fromlen)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(croutine))) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WSASendTo(s Handle, bufs *WSABuf, bufcnt uint32, sent *uint32, flags uint32, to *RawSockaddrAny, tolen int32, overlapped *Overlapped, croutine *byte) (err error) { + r1, _, e1 := syscall.Syscall9(procWSASendTo.Addr(), 9, uintptr(s), uintptr(unsafe.Pointer(bufs)), uintptr(bufcnt), uintptr(unsafe.Pointer(sent)), uintptr(flags), uintptr(unsafe.Pointer(to)), uintptr(tolen), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(croutine))) + if r1 == socket_error { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetHostByName(name string) (h *Hostent, err error) { + var _p0 *byte + _p0, err = syscall.BytePtrFromString(name) + if err != nil { + return + } + return _GetHostByName(_p0) +} + +func _GetHostByName(name *byte) (h *Hostent, err error) { + r0, _, e1 := syscall.Syscall(procgethostbyname.Addr(), 1, uintptr(unsafe.Pointer(name)), 0, 0) + h = (*Hostent)(unsafe.Pointer(r0)) + if h == nil { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetServByName(name string, proto string) (s *Servent, err error) { + var _p0 *byte + _p0, err = syscall.BytePtrFromString(name) + if err != nil { + return + } + var _p1 *byte + _p1, err = syscall.BytePtrFromString(proto) + if err != nil { + return + } + return _GetServByName(_p0, _p1) +} + +func _GetServByName(name *byte, proto *byte) (s *Servent, err error) { + r0, _, e1 := syscall.Syscall(procgetservbyname.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(proto)), 0) + s = (*Servent)(unsafe.Pointer(r0)) + if s == nil { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func Ntohs(netshort uint16) (u uint16) { + r0, _, _ := syscall.Syscall(procntohs.Addr(), 1, uintptr(netshort), 0, 0) + u = uint16(r0) + return +} + +func GetProtoByName(name string) (p *Protoent, err error) { + var _p0 *byte + _p0, err = syscall.BytePtrFromString(name) + if err != nil { + return + } + return _GetProtoByName(_p0) +} + +func _GetProtoByName(name *byte) (p *Protoent, err error) { + r0, _, e1 := syscall.Syscall(procgetprotobyname.Addr(), 1, uintptr(unsafe.Pointer(name)), 0, 0) + p = (*Protoent)(unsafe.Pointer(r0)) + if p == nil { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func DnsQuery(name string, qtype uint16, options uint32, extra *byte, qrs **DNSRecord, pr *byte) (status error) { + var _p0 *uint16 + _p0, status = syscall.UTF16PtrFromString(name) + if status != nil { + return + } + return _DnsQuery(_p0, qtype, options, extra, qrs, pr) +} + +func _DnsQuery(name *uint16, qtype uint16, options uint32, 
extra *byte, qrs **DNSRecord, pr *byte) (status error) { + r0, _, _ := syscall.Syscall6(procDnsQuery_W.Addr(), 6, uintptr(unsafe.Pointer(name)), uintptr(qtype), uintptr(options), uintptr(unsafe.Pointer(extra)), uintptr(unsafe.Pointer(qrs)), uintptr(unsafe.Pointer(pr))) + if r0 != 0 { + status = syscall.Errno(r0) + } + return +} + +func DnsRecordListFree(rl *DNSRecord, freetype uint32) { + syscall.Syscall(procDnsRecordListFree.Addr(), 2, uintptr(unsafe.Pointer(rl)), uintptr(freetype), 0) + return +} + +func DnsNameCompare(name1 *uint16, name2 *uint16) (same bool) { + r0, _, _ := syscall.Syscall(procDnsNameCompare_W.Addr(), 2, uintptr(unsafe.Pointer(name1)), uintptr(unsafe.Pointer(name2)), 0) + same = r0 != 0 + return +} + +func GetAddrInfoW(nodename *uint16, servicename *uint16, hints *AddrinfoW, result **AddrinfoW) (sockerr error) { + r0, _, _ := syscall.Syscall6(procGetAddrInfoW.Addr(), 4, uintptr(unsafe.Pointer(nodename)), uintptr(unsafe.Pointer(servicename)), uintptr(unsafe.Pointer(hints)), uintptr(unsafe.Pointer(result)), 0, 0) + if r0 != 0 { + sockerr = syscall.Errno(r0) + } + return +} + +func FreeAddrInfoW(addrinfo *AddrinfoW) { + syscall.Syscall(procFreeAddrInfoW.Addr(), 1, uintptr(unsafe.Pointer(addrinfo)), 0, 0) + return +} + +func GetIfEntry(pIfRow *MibIfRow) (errcode error) { + r0, _, _ := syscall.Syscall(procGetIfEntry.Addr(), 1, uintptr(unsafe.Pointer(pIfRow)), 0, 0) + if r0 != 0 { + errcode = syscall.Errno(r0) + } + return +} + +func GetAdaptersInfo(ai *IpAdapterInfo, ol *uint32) (errcode error) { + r0, _, _ := syscall.Syscall(procGetAdaptersInfo.Addr(), 2, uintptr(unsafe.Pointer(ai)), uintptr(unsafe.Pointer(ol)), 0) + if r0 != 0 { + errcode = syscall.Errno(r0) + } + return +} + +func SetFileCompletionNotificationModes(handle Handle, flags uint8) (err error) { + r1, _, e1 := syscall.Syscall(procSetFileCompletionNotificationModes.Addr(), 2, uintptr(handle), uintptr(flags), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func WSAEnumProtocols(protocols *int32, protocolBuffer *WSAProtocolInfo, bufferLength *uint32) (n int32, err error) { + r0, _, e1 := syscall.Syscall(procWSAEnumProtocolsW.Addr(), 3, uintptr(unsafe.Pointer(protocols)), uintptr(unsafe.Pointer(protocolBuffer)), uintptr(unsafe.Pointer(bufferLength))) + n = int32(r0) + if n == -1 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetAdaptersAddresses(family uint32, flags uint32, reserved uintptr, adapterAddresses *IpAdapterAddresses, sizePointer *uint32) (errcode error) { + r0, _, _ := syscall.Syscall6(procGetAdaptersAddresses.Addr(), 5, uintptr(family), uintptr(flags), uintptr(reserved), uintptr(unsafe.Pointer(adapterAddresses)), uintptr(unsafe.Pointer(sizePointer)), 0) + if r0 != 0 { + errcode = syscall.Errno(r0) + } + return +} + +func GetACP() (acp uint32) { + r0, _, _ := syscall.Syscall(procGetACP.Addr(), 0, 0, 0, 0) + acp = uint32(r0) + return +} + +func MultiByteToWideChar(codePage uint32, dwFlags uint32, str *byte, nstr int32, wchar *uint16, nwchar int32) (nwrite int32, err error) { + r0, _, e1 := syscall.Syscall6(procMultiByteToWideChar.Addr(), 6, uintptr(codePage), uintptr(dwFlags), uintptr(unsafe.Pointer(str)), uintptr(nstr), uintptr(unsafe.Pointer(wchar)), uintptr(nwchar)) + nwrite = int32(r0) + if nwrite == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func TranslateName(accName *uint16, accNameFormat uint32, desiredNameFormat uint32, 
translatedName *uint16, nSize *uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procTranslateNameW.Addr(), 5, uintptr(unsafe.Pointer(accName)), uintptr(accNameFormat), uintptr(desiredNameFormat), uintptr(unsafe.Pointer(translatedName)), uintptr(unsafe.Pointer(nSize)), 0) + if r1&0xff == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetUserNameEx(nameFormat uint32, nameBuffre *uint16, nSize *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetUserNameExW.Addr(), 3, uintptr(nameFormat), uintptr(unsafe.Pointer(nameBuffre)), uintptr(unsafe.Pointer(nSize))) + if r1&0xff == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func NetUserGetInfo(serverName *uint16, userName *uint16, level uint32, buf **byte) (neterr error) { + r0, _, _ := syscall.Syscall6(procNetUserGetInfo.Addr(), 4, uintptr(unsafe.Pointer(serverName)), uintptr(unsafe.Pointer(userName)), uintptr(level), uintptr(unsafe.Pointer(buf)), 0, 0) + if r0 != 0 { + neterr = syscall.Errno(r0) + } + return +} + +func NetGetJoinInformation(server *uint16, name **uint16, bufType *uint32) (neterr error) { + r0, _, _ := syscall.Syscall(procNetGetJoinInformation.Addr(), 3, uintptr(unsafe.Pointer(server)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(bufType))) + if r0 != 0 { + neterr = syscall.Errno(r0) + } + return +} + +func NetApiBufferFree(buf *byte) (neterr error) { + r0, _, _ := syscall.Syscall(procNetApiBufferFree.Addr(), 1, uintptr(unsafe.Pointer(buf)), 0, 0) + if r0 != 0 { + neterr = syscall.Errno(r0) + } + return +} + +func LookupAccountSid(systemName *uint16, sid *SID, name *uint16, nameLen *uint32, refdDomainName *uint16, refdDomainNameLen *uint32, use *uint32) (err error) { + r1, _, e1 := syscall.Syscall9(procLookupAccountSidW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(nameLen)), uintptr(unsafe.Pointer(refdDomainName)), uintptr(unsafe.Pointer(refdDomainNameLen)), uintptr(unsafe.Pointer(use)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func LookupAccountName(systemName *uint16, accountName *uint16, sid *SID, sidLen *uint32, refdDomainName *uint16, refdDomainNameLen *uint32, use *uint32) (err error) { + r1, _, e1 := syscall.Syscall9(procLookupAccountNameW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(accountName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(sidLen)), uintptr(unsafe.Pointer(refdDomainName)), uintptr(unsafe.Pointer(refdDomainNameLen)), uintptr(unsafe.Pointer(use)), 0, 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ConvertSidToStringSid(sid *SID, stringSid **uint16) (err error) { + r1, _, e1 := syscall.Syscall(procConvertSidToStringSidW.Addr(), 2, uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(stringSid)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func ConvertStringSidToSid(stringSid *uint16, sid **SID) (err error) { + r1, _, e1 := syscall.Syscall(procConvertStringSidToSidW.Addr(), 2, uintptr(unsafe.Pointer(stringSid)), uintptr(unsafe.Pointer(sid)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetLengthSid(sid *SID) (len uint32) { + r0, _, _ := syscall.Syscall(procGetLengthSid.Addr(), 1, 
uintptr(unsafe.Pointer(sid)), 0, 0) + len = uint32(r0) + return +} + +func CopySid(destSidLen uint32, destSid *SID, srcSid *SID) (err error) { + r1, _, e1 := syscall.Syscall(procCopySid.Addr(), 3, uintptr(destSidLen), uintptr(unsafe.Pointer(destSid)), uintptr(unsafe.Pointer(srcSid))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func AllocateAndInitializeSid(identAuth *SidIdentifierAuthority, subAuth byte, subAuth0 uint32, subAuth1 uint32, subAuth2 uint32, subAuth3 uint32, subAuth4 uint32, subAuth5 uint32, subAuth6 uint32, subAuth7 uint32, sid **SID) (err error) { + r1, _, e1 := syscall.Syscall12(procAllocateAndInitializeSid.Addr(), 11, uintptr(unsafe.Pointer(identAuth)), uintptr(subAuth), uintptr(subAuth0), uintptr(subAuth1), uintptr(subAuth2), uintptr(subAuth3), uintptr(subAuth4), uintptr(subAuth5), uintptr(subAuth6), uintptr(subAuth7), uintptr(unsafe.Pointer(sid)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func FreeSid(sid *SID) (err error) { + r1, _, e1 := syscall.Syscall(procFreeSid.Addr(), 1, uintptr(unsafe.Pointer(sid)), 0, 0) + if r1 != 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func EqualSid(sid1 *SID, sid2 *SID) (isEqual bool) { + r0, _, _ := syscall.Syscall(procEqualSid.Addr(), 2, uintptr(unsafe.Pointer(sid1)), uintptr(unsafe.Pointer(sid2)), 0) + isEqual = r0 != 0 + return +} + +func checkTokenMembership(tokenHandle Token, sidToCheck *SID, isMember *int32) (err error) { + r1, _, e1 := syscall.Syscall(procCheckTokenMembership.Addr(), 3, uintptr(tokenHandle), uintptr(unsafe.Pointer(sidToCheck)), uintptr(unsafe.Pointer(isMember))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func OpenProcessToken(h Handle, access uint32, token *Token) (err error) { + r1, _, e1 := syscall.Syscall(procOpenProcessToken.Addr(), 3, uintptr(h), uintptr(access), uintptr(unsafe.Pointer(token))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetTokenInformation(t Token, infoClass uint32, info *byte, infoLen uint32, returnedLen *uint32) (err error) { + r1, _, e1 := syscall.Syscall6(procGetTokenInformation.Addr(), 5, uintptr(t), uintptr(infoClass), uintptr(unsafe.Pointer(info)), uintptr(infoLen), uintptr(unsafe.Pointer(returnedLen)), 0) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} + +func GetUserProfileDirectory(t Token, dir *uint16, dirLen *uint32) (err error) { + r1, _, e1 := syscall.Syscall(procGetUserProfileDirectoryW.Addr(), 3, uintptr(t), uintptr(unsafe.Pointer(dir)), uintptr(unsafe.Pointer(dirLen))) + if r1 == 0 { + if e1 != 0 { + err = errnoErr(e1) + } else { + err = syscall.EINVAL + } + } + return +} diff --git a/vendor/google.golang.org/api/gensupport/go18.go b/vendor/google.golang.org/api/gensupport/go18.go new file mode 100644 index 000000000..c76cb8f20 --- /dev/null +++ b/vendor/google.golang.org/api/gensupport/go18.go @@ -0,0 +1,17 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build go1.8 + +package gensupport + +import ( + "io" + "net/http" +) + +// SetGetBody sets the GetBody field of req to f. 
+func SetGetBody(req *http.Request, f func() (io.ReadCloser, error)) { + req.GetBody = f +} diff --git a/vendor/google.golang.org/api/gensupport/media.go b/vendor/google.golang.org/api/gensupport/media.go index c6410e89a..5a2674104 100644 --- a/vendor/google.golang.org/api/gensupport/media.go +++ b/vendor/google.golang.org/api/gensupport/media.go @@ -5,12 +5,14 @@ package gensupport import ( + "bytes" "fmt" "io" "io/ioutil" "mime/multipart" "net/http" "net/textproto" + "strings" "google.golang.org/api/googleapi" ) @@ -174,26 +176,156 @@ func typeHeader(contentType string) textproto.MIMEHeader { // PrepareUpload determines whether the data in the supplied reader should be // uploaded in a single request, or in sequential chunks. // chunkSize is the size of the chunk that media should be split into. -// If chunkSize is non-zero and the contents of media do not fit in a single -// chunk (or there is an error reading media), then media will be returned as a -// MediaBuffer. Otherwise, media will be returned as a Reader. +// +// If chunkSize is zero, media is returned as the first value, and the other +// two return values are nil, true. +// +// Otherwise, a MediaBuffer is returned, along with a bool indicating whether the +// contents of media fit in a single chunk. // // After PrepareUpload has been called, media should no longer be used: the // media content should be accessed via one of the return values. -func PrepareUpload(media io.Reader, chunkSize int) (io.Reader, *MediaBuffer) { +func PrepareUpload(media io.Reader, chunkSize int) (r io.Reader, mb *MediaBuffer, singleChunk bool) { if chunkSize == 0 { // do not chunk - return media, nil + return media, nil, true + } + mb = NewMediaBuffer(media, chunkSize) + _, _, _, err := mb.Chunk() + // If err is io.EOF, we can upload this in a single request. Otherwise, err is + // either nil or a non-EOF error. If it is the latter, then the next call to + // mb.Chunk will return the same error. Returning a MediaBuffer ensures that this + // error will be handled at some point. + return nil, mb, err == io.EOF +} + +// MediaInfo holds information for media uploads. It is intended for use by generated +// code only. +type MediaInfo struct { + // At most one of Media and MediaBuffer will be set. + media io.Reader + buffer *MediaBuffer + singleChunk bool + mType string + size int64 // mediaSize, if known. Used only for calls to progressUpdater_. + progressUpdater googleapi.ProgressUpdater +} + +// NewInfoFromMedia should be invoked from the Media method of a call. It returns a +// MediaInfo populated with chunk size and content type, and a reader or MediaBuffer +// if needed. +func NewInfoFromMedia(r io.Reader, options []googleapi.MediaOption) *MediaInfo { + mi := &MediaInfo{} + opts := googleapi.ProcessMediaOptions(options) + if !opts.ForceEmptyContentType { + r, mi.mType = DetermineContentType(r, opts.ContentType) + } + mi.media, mi.buffer, mi.singleChunk = PrepareUpload(r, opts.ChunkSize) + return mi +} + +// NewInfoFromResumableMedia should be invoked from the ResumableMedia method of a +// call. It returns a MediaInfo using the given reader, size and media type. 
+func NewInfoFromResumableMedia(r io.ReaderAt, size int64, mediaType string) *MediaInfo { + rdr := ReaderAtToReader(r, size) + rdr, mType := DetermineContentType(rdr, mediaType) + return &MediaInfo{ + size: size, + mType: mType, + buffer: NewMediaBuffer(rdr, googleapi.DefaultUploadChunkSize), + media: nil, + singleChunk: false, + } +} + +func (mi *MediaInfo) SetProgressUpdater(pu googleapi.ProgressUpdater) { + if mi != nil { + mi.progressUpdater = pu + } +} + +// UploadType determines the type of upload: a single request, or a resumable +// series of requests. +func (mi *MediaInfo) UploadType() string { + if mi.singleChunk { + return "multipart" + } + return "resumable" +} + +// UploadRequest sets up an HTTP request for media upload. It adds headers +// as necessary, and returns a replacement for the body and a function for http.Request.GetBody. +func (mi *MediaInfo) UploadRequest(reqHeaders http.Header, body io.Reader) (newBody io.Reader, getBody func() (io.ReadCloser, error), cleanup func()) { + cleanup = func() {} + if mi == nil { + return body, nil, cleanup + } + var media io.Reader + if mi.media != nil { + // This only happens when the caller has turned off chunking. In that + // case, we write all of media in a single non-retryable request. + media = mi.media + } else if mi.singleChunk { + // The data fits in a single chunk, which has now been read into the MediaBuffer. + // We obtain that chunk so we can write it in a single request. The request can + // be retried because the data is stored in the MediaBuffer. + media, _, _, _ = mi.buffer.Chunk() + } + if media != nil { + fb := readerFunc(body) + fm := readerFunc(media) + combined, ctype := CombineBodyMedia(body, "application/json", media, mi.mType) + if fb != nil && fm != nil { + getBody = func() (io.ReadCloser, error) { + rb := ioutil.NopCloser(fb()) + rm := ioutil.NopCloser(fm()) + r, _ := CombineBodyMedia(rb, "application/json", rm, mi.mType) + return r, nil + } + } + cleanup = func() { combined.Close() } + reqHeaders.Set("Content-Type", ctype) + body = combined + } + if mi.buffer != nil && mi.mType != "" && !mi.singleChunk { + reqHeaders.Set("X-Upload-Content-Type", mi.mType) + } + return body, getBody, cleanup +} + +// readerFunc returns a function that always returns an io.Reader that has the same +// contents as r, provided that can be done without consuming r. Otherwise, it +// returns nil. +// See http.NewRequest (in net/http/request.go). +func readerFunc(r io.Reader) func() io.Reader { + switch r := r.(type) { + case *bytes.Buffer: + buf := r.Bytes() + return func() io.Reader { return bytes.NewReader(buf) } + case *bytes.Reader: + snapshot := *r + return func() io.Reader { r := snapshot; return &r } + case *strings.Reader: + snapshot := *r + return func() io.Reader { r := snapshot; return &r } + default: + return nil + } +} + +// ResumableUpload returns an appropriately configured ResumableUpload value if the +// upload is resumable, or nil otherwise. +func (mi *MediaInfo) ResumableUpload(locURI string) *ResumableUpload { + if mi == nil || mi.singleChunk { + return nil + } + return &ResumableUpload{ + URI: locURI, + Media: mi.buffer, + MediaType: mi.mType, + Callback: func(curr int64) { + if mi.progressUpdater != nil { + mi.progressUpdater(curr, mi.size) + } + }, } - - mb := NewMediaBuffer(media, chunkSize) - rdr, _, _, err := mb.Chunk() - - if err == io.EOF { // we can upload this in a single request - return rdr, nil - } - // err might be a non-EOF error. 
If it is, the next call to mb.Chunk will - // return the same error. Returning a MediaBuffer ensures that this error - // will be handled at some point. - - return nil, mb } diff --git a/vendor/google.golang.org/api/gensupport/not_go18.go b/vendor/google.golang.org/api/gensupport/not_go18.go new file mode 100644 index 000000000..2536501ce --- /dev/null +++ b/vendor/google.golang.org/api/gensupport/not_go18.go @@ -0,0 +1,14 @@ +// Copyright 2018 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// +build !go1.8 + +package gensupport + +import ( + "io" + "net/http" +) + +func SetGetBody(req *http.Request, f func() (io.ReadCloser, error)) {} diff --git a/vendor/google.golang.org/api/gensupport/send.go b/vendor/google.golang.org/api/gensupport/send.go index 3d22f638f..0f75aa867 100644 --- a/vendor/google.golang.org/api/gensupport/send.go +++ b/vendor/google.golang.org/api/gensupport/send.go @@ -5,6 +5,8 @@ package gensupport import ( + "encoding/json" + "errors" "net/http" "golang.org/x/net/context" @@ -32,6 +34,11 @@ func RegisterHook(h Hook) { // If ctx is non-nil, it calls all hooks, then sends the request with // ctxhttp.Do, then calls any functions returned by the hooks in reverse order. func SendRequest(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error) { + // Disallow Accept-Encoding because it interferes with the automatic gzip handling + // done by the default http.Transport. See https://github.com/google/google-api-go-client/issues/219. + if _, ok := req.Header["Accept-Encoding"]; ok { + return nil, errors.New("google api: custom Accept-Encoding headers not allowed") + } if ctx == nil { return client.Do(req) } @@ -53,3 +60,12 @@ func SendRequest(ctx context.Context, client *http.Client, req *http.Request) (* } return resp, err } + +// DecodeResponse decodes the body of res into target. If there is no body, +// target is unchanged. 
+func DecodeResponse(target interface{}, res *http.Response) error { + if res.StatusCode == http.StatusNoContent { + return nil + } + return json.NewDecoder(res.Body).Decode(target) +} diff --git a/vendor/google.golang.org/api/storage/v1/storage-api.json b/vendor/google.golang.org/api/storage/v1/storage-api.json new file mode 100644 index 000000000..2d23f028b --- /dev/null +++ b/vendor/google.golang.org/api/storage/v1/storage-api.json @@ -0,0 +1,3784 @@ +{ + "auth": { + "oauth2": { + "scopes": { + "https://www.googleapis.com/auth/cloud-platform": { + "description": "View and manage your data across Google Cloud Platform services" + }, + "https://www.googleapis.com/auth/cloud-platform.read-only": { + "description": "View your data across Google Cloud Platform services" + }, + "https://www.googleapis.com/auth/devstorage.full_control": { + "description": "Manage your data and permissions in Google Cloud Storage" + }, + "https://www.googleapis.com/auth/devstorage.read_only": { + "description": "View your data in Google Cloud Storage" + }, + "https://www.googleapis.com/auth/devstorage.read_write": { + "description": "Manage your data in Google Cloud Storage" + } + } + } + }, + "basePath": "/storage/v1/", + "baseUrl": "https://www.googleapis.com/storage/v1/", + "batchPath": "batch/storage/v1", + "description": "Stores and retrieves potentially large, immutable data objects.", + "discoveryVersion": "v1", + "documentationLink": "https://developers.google.com/storage/docs/json_api/", + "etag": "\"-iA1DTNe4s-I6JZXPt1t1Ypy8IU/nD7tyZqYOGFELa4QRBOZJ8raFKA\"", + "icons": { + "x16": "https://www.google.com/images/icons/product/cloud_storage-16.png", + "x32": "https://www.google.com/images/icons/product/cloud_storage-32.png" + }, + "id": "storage:v1", + "kind": "discovery#restDescription", + "labels": [ + "labs" + ], + "name": "storage", + "ownerDomain": "google.com", + "ownerName": "Google", + "parameters": { + "alt": { + "default": "json", + "description": "Data format for the response.", + "enum": [ + "json" + ], + "enumDescriptions": [ + "Responses with Content-Type of application/json" + ], + "location": "query", + "type": "string" + }, + "fields": { + "description": "Selector specifying which fields to include in a partial response.", + "location": "query", + "type": "string" + }, + "key": { + "description": "API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.", + "location": "query", + "type": "string" + }, + "oauth_token": { + "description": "OAuth 2.0 token for the current user.", + "location": "query", + "type": "string" + }, + "prettyPrint": { + "default": "true", + "description": "Returns response with indentations and line breaks.", + "location": "query", + "type": "boolean" + }, + "quotaUser": { + "description": "Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters. Overrides userIp if both are provided.", + "location": "query", + "type": "string" + }, + "userIp": { + "description": "IP address of the site where the request originates. 
Use this if you want to enforce per-user limits.", + "location": "query", + "type": "string" + } + }, + "protocol": "rest", + "resources": { + "bucketAccessControls": { + "methods": { + "delete": { + "description": "Permanently deletes the ACL entry for the specified entity on the specified bucket.", + "httpMethod": "DELETE", + "id": "storage.bucketAccessControls.delete", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/acl/{entity}", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "get": { + "description": "Returns the ACL entry for the specified entity on the specified bucket.", + "httpMethod": "GET", + "id": "storage.bucketAccessControls.get", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/acl/{entity}", + "response": { + "$ref": "BucketAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "insert": { + "description": "Creates a new ACL entry on the specified bucket.", + "httpMethod": "POST", + "id": "storage.bucketAccessControls.insert", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/acl", + "request": { + "$ref": "BucketAccessControl" + }, + "response": { + "$ref": "BucketAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "list": { + "description": "Retrieves ACL entries on the specified bucket.", + "httpMethod": "GET", + "id": "storage.bucketAccessControls.list", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/acl", + "response": { + "$ref": "BucketAccessControls" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "patch": { + "description": "Patches an ACL entry on the specified bucket.", + "httpMethod": "PATCH", + "id": "storage.bucketAccessControls.patch", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/acl/{entity}", + "request": { + "$ref": "BucketAccessControl" + }, + "response": { + "$ref": "BucketAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "update": { + "description": "Updates an ACL entry on the specified bucket.", + "httpMethod": "PUT", + "id": "storage.bucketAccessControls.update", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/acl/{entity}", + "request": { + "$ref": "BucketAccessControl" + }, + "response": { + "$ref": "BucketAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + } + } + }, + "buckets": { + "methods": { + "delete": { + "description": "Permanently deletes an empty bucket.", + "httpMethod": "DELETE", + "id": "storage.buckets.delete", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "If set, only deletes the bucket if its metageneration matches this value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "If set, only deletes the bucket if its metageneration does not match this value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "get": { + "description": "Returns metadata for the specified bucket.", + "httpMethod": "GET", + "id": "storage.buckets.get", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit owner, acl and defaultObjectAcl properties." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}", + "response": { + "$ref": "Bucket" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "getIamPolicy": { + "description": "Returns an IAM policy for the specified bucket.", + "httpMethod": "GET", + "id": "storage.buckets.getIamPolicy", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/iam", + "response": { + "$ref": "Policy" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "insert": { + "description": "Creates a new bucket.", + "httpMethod": "POST", + "id": "storage.buckets.insert", + "parameterOrder": [ + "project" + ], + "parameters": { + "predefinedAcl": { + "description": "Apply a predefined set of access controls to this bucket.", + "enum": [ + "authenticatedRead", + "private", + "projectPrivate", + "publicRead", + "publicReadWrite" + ], + "enumDescriptions": [ + "Project team owners get OWNER access, and allAuthenticatedUsers get READER access.", + "Project team owners get OWNER access.", + "Project team members get access according to their roles.", + "Project team owners get OWNER access, and allUsers get READER access.", + "Project team owners get OWNER access, and allUsers get WRITER access." 
+ ], + "location": "query", + "type": "string" + }, + "predefinedDefaultObjectAcl": { + "description": "Apply a predefined set of default object access controls to this bucket.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "project": { + "description": "A valid API project identifier.", + "location": "query", + "required": true, + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl, unless the bucket resource specifies acl or defaultObjectAcl properties, when it defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit owner, acl and defaultObjectAcl properties." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request.", + "location": "query", + "type": "string" + } + }, + "path": "b", + "request": { + "$ref": "Bucket" + }, + "response": { + "$ref": "Bucket" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "list": { + "description": "Retrieves a list of buckets for a given project.", + "httpMethod": "GET", + "id": "storage.buckets.list", + "parameterOrder": [ + "project" + ], + "parameters": { + "maxResults": { + "default": "1000", + "description": "Maximum number of buckets to return in a single response. The service will use this parameter or 1,000 items, whichever is smaller.", + "format": "uint32", + "location": "query", + "minimum": "0", + "type": "integer" + }, + "pageToken": { + "description": "A previously-returned page token representing part of the larger set of results to view.", + "location": "query", + "type": "string" + }, + "prefix": { + "description": "Filter results to buckets whose names begin with this prefix.", + "location": "query", + "type": "string" + }, + "project": { + "description": "A valid API project identifier.", + "location": "query", + "required": true, + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit owner, acl and defaultObjectAcl properties." 
+ ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request.", + "location": "query", + "type": "string" + } + }, + "path": "b", + "response": { + "$ref": "Buckets" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "lockRetentionPolicy": { + "description": "Locks retention policy on a bucket.", + "httpMethod": "POST", + "id": "storage.buckets.lockRetentionPolicy", + "parameterOrder": [ + "bucket", + "ifMetagenerationMatch" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether bucket's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/lockRetentionPolicy", + "response": { + "$ref": "Bucket" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "patch": { + "description": "Updates a bucket. Changes to the bucket will be readable immediately after writing, but configuration changes may take time to propagate. This method supports patch semantics.", + "httpMethod": "PATCH", + "id": "storage.buckets.patch", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "predefinedAcl": { + "description": "Apply a predefined set of access controls to this bucket.", + "enum": [ + "authenticatedRead", + "private", + "projectPrivate", + "publicRead", + "publicReadWrite" + ], + "enumDescriptions": [ + "Project team owners get OWNER access, and allAuthenticatedUsers get READER access.", + "Project team owners get OWNER access.", + "Project team members get access according to their roles.", + "Project team owners get OWNER access, and allUsers get READER access.", + "Project team owners get OWNER access, and allUsers get WRITER access." 
+ ], + "location": "query", + "type": "string" + }, + "predefinedDefaultObjectAcl": { + "description": "Apply a predefined set of default object access controls to this bucket.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit owner, acl and defaultObjectAcl properties." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}", + "request": { + "$ref": "Bucket" + }, + "response": { + "$ref": "Bucket" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "setIamPolicy": { + "description": "Updates an IAM policy for the specified bucket.", + "httpMethod": "PUT", + "id": "storage.buckets.setIamPolicy", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/iam", + "request": { + "$ref": "Policy" + }, + "response": { + "$ref": "Policy" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "testIamPermissions": { + "description": "Tests a set of permissions on the given bucket to see which, if any, are held by the caller.", + "httpMethod": "GET", + "id": "storage.buckets.testIamPermissions", + "parameterOrder": [ + "bucket", + "permissions" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "permissions": { + "description": "Permissions to test.", + "location": "query", + "repeated": true, + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/iam/testPermissions", + "response": { + "$ref": "TestIamPermissionsResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "update": { + "description": "Updates a bucket. 
Changes to the bucket will be readable immediately after writing, but configuration changes may take time to propagate.", + "httpMethod": "PUT", + "id": "storage.buckets.update", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "predefinedAcl": { + "description": "Apply a predefined set of access controls to this bucket.", + "enum": [ + "authenticatedRead", + "private", + "projectPrivate", + "publicRead", + "publicReadWrite" + ], + "enumDescriptions": [ + "Project team owners get OWNER access, and allAuthenticatedUsers get READER access.", + "Project team owners get OWNER access.", + "Project team members get access according to their roles.", + "Project team owners get OWNER access, and allUsers get READER access.", + "Project team owners get OWNER access, and allUsers get WRITER access." + ], + "location": "query", + "type": "string" + }, + "predefinedDefaultObjectAcl": { + "description": "Apply a predefined set of default object access controls to this bucket.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit owner, acl and defaultObjectAcl properties." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}", + "request": { + "$ref": "Bucket" + }, + "response": { + "$ref": "Bucket" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + } + } + }, + "channels": { + "methods": { + "stop": { + "description": "Stop watching resources through this channel", + "httpMethod": "POST", + "id": "storage.channels.stop", + "path": "channels/stop", + "request": { + "$ref": "Channel", + "parameterName": "resource" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + } + } + }, + "defaultObjectAccessControls": { + "methods": { + "delete": { + "description": "Permanently deletes the default object ACL entry for the specified entity on the specified bucket.", + "httpMethod": "DELETE", + "id": "storage.defaultObjectAccessControls.delete", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/defaultObjectAcl/{entity}", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "get": { + "description": "Returns the default object ACL entry for the specified entity on the specified bucket.", + "httpMethod": "GET", + "id": "storage.defaultObjectAccessControls.get", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/defaultObjectAcl/{entity}", + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "insert": { + "description": "Creates a new default object ACL entry on the specified bucket.", + "httpMethod": "POST", + "id": "storage.defaultObjectAccessControls.insert", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/defaultObjectAcl", + "request": { + "$ref": "ObjectAccessControl" + }, + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "list": { + "description": "Retrieves default object ACL entries on the specified bucket.", + "httpMethod": "GET", + "id": "storage.defaultObjectAccessControls.list", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "If present, only return default ACL listing if the bucket's current metageneration matches this value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "If present, only return default ACL listing if the bucket's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/defaultObjectAcl", + "response": { + "$ref": "ObjectAccessControls" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "patch": { + "description": "Patches a default object ACL entry on the specified bucket.", + "httpMethod": "PATCH", + "id": "storage.defaultObjectAccessControls.patch", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/defaultObjectAcl/{entity}", + "request": { + "$ref": "ObjectAccessControl" + }, + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "update": { + "description": "Updates a default object ACL entry on the specified bucket.", + "httpMethod": "PUT", + "id": "storage.defaultObjectAccessControls.update", + "parameterOrder": [ + "bucket", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/defaultObjectAcl/{entity}", + "request": { + "$ref": "ObjectAccessControl" + }, + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + } + } + }, + "notifications": { + "methods": { + "delete": { + "description": "Permanently deletes a notification subscription.", + "httpMethod": "DELETE", + "id": "storage.notifications.delete", + "parameterOrder": [ + "bucket", + "notification" + ], + "parameters": { + "bucket": { + "description": "The parent bucket of the notification.", + "location": "path", + "required": true, + "type": "string" + }, + "notification": { + "description": "ID of the notification to delete.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/notificationConfigs/{notification}", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "get": { + "description": "View a notification configuration.", + "httpMethod": "GET", + "id": "storage.notifications.get", + "parameterOrder": [ + "bucket", + "notification" + ], + "parameters": { + "bucket": { + "description": "The parent bucket of the notification.", + "location": "path", + "required": true, + "type": "string" + }, + "notification": { + "description": "Notification ID", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/notificationConfigs/{notification}", + "response": { + "$ref": "Notification" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "insert": { + "description": "Creates a notification subscription for a given bucket.", + "httpMethod": "POST", + "id": "storage.notifications.insert", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "The parent bucket of the notification.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/notificationConfigs", + "request": { + "$ref": "Notification" + }, + "response": { + "$ref": "Notification" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "list": { + "description": "Retrieves a list of notification subscriptions for a given bucket.", + "httpMethod": "GET", + "id": "storage.notifications.list", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of a Google Cloud Storage bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/notificationConfigs", + "response": { + "$ref": "Notifications" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + } + } + }, + "objectAccessControls": { + "methods": { + "delete": { + "description": "Permanently deletes the ACL entry for the specified entity on the specified object.", + "httpMethod": "DELETE", + "id": "storage.objectAccessControls.delete", + "parameterOrder": [ + "bucket", + "object", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/acl/{entity}", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "get": { + "description": "Returns the ACL entry for the specified entity on the specified object.", + "httpMethod": "GET", + "id": "storage.objectAccessControls.get", + "parameterOrder": [ + "bucket", + "object", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. 
Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/acl/{entity}", + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "insert": { + "description": "Creates a new ACL entry on the specified object.", + "httpMethod": "POST", + "id": "storage.objectAccessControls.insert", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/acl", + "request": { + "$ref": "ObjectAccessControl" + }, + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "list": { + "description": "Retrieves ACL entries on the specified object.", + "httpMethod": "GET", + "id": "storage.objectAccessControls.list", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/acl", + "response": { + "$ref": "ObjectAccessControls" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "patch": { + "description": "Patches an ACL entry on the specified object.", + "httpMethod": "PATCH", + "id": "storage.objectAccessControls.patch", + "parameterOrder": [ + "bucket", + "object", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/acl/{entity}", + "request": { + "$ref": "ObjectAccessControl" + }, + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "update": { + "description": "Updates an ACL entry on the specified object.", + "httpMethod": "PUT", + "id": "storage.objectAccessControls.update", + "parameterOrder": [ + "bucket", + "object", + "entity" + ], + "parameters": { + "bucket": { + "description": "Name of a bucket.", + "location": "path", + "required": true, + "type": "string" + }, + "entity": { + "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/acl/{entity}", + "request": { + "$ref": "ObjectAccessControl" + }, + "response": { + "$ref": "ObjectAccessControl" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + } + } + }, + "objects": { + "methods": { + "compose": { + "description": "Concatenates a list of existing objects into a new object in the same bucket.", + "httpMethod": "POST", + "id": "storage.objects.compose", + "parameterOrder": [ + "destinationBucket", + "destinationObject" + ], + "parameters": { + "destinationBucket": { + "description": "Name of the bucket in which to store the new object.", + "location": "path", + "required": true, + "type": "string" + }, + "destinationObject": { + "description": "Name of the new object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "destinationPredefinedAcl": { + "description": "Apply a predefined set of access controls to the destination object.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "kmsKeyName": { + "description": "Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata's kms_key_name value, if any.", + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{destinationBucket}/o/{destinationObject}/compose", + "request": { + "$ref": "ComposeRequest" + }, + "response": { + "$ref": "Object" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "copy": { + "description": "Copies a source object to a destination object. 
Optionally overrides metadata.", + "httpMethod": "POST", + "id": "storage.objects.copy", + "parameterOrder": [ + "sourceBucket", + "sourceObject", + "destinationBucket", + "destinationObject" + ], + "parameters": { + "destinationBucket": { + "description": "Name of the bucket in which to store the new object. Overrides the provided object metadata's bucket value, if any.For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "destinationObject": { + "description": "Name of the new object. Required when the object metadata is not otherwise provided. Overrides the object metadata's name value, if any.", + "location": "path", + "required": true, + "type": "string" + }, + "destinationPredefinedAcl": { + "description": "Apply a predefined set of access controls to the destination object.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the destination object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationNotMatch": { + "description": "Makes the operation conditional on whether the destination object's current generation does not match the given value. If no live object exists, the precondition fails. 
Setting to 0 makes the operation succeed only if there is a live version of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the destination object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the destination object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceGenerationMatch": { + "description": "Makes the operation conditional on whether the source object's current generation matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceGenerationNotMatch": { + "description": "Makes the operation conditional on whether the source object's current generation does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceMetagenerationMatch": { + "description": "Makes the operation conditional on whether the source object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the source object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "sourceBucket": { + "description": "Name of the bucket in which to find the source object.", + "location": "path", + "required": true, + "type": "string" + }, + "sourceGeneration": { + "description": "If present, selects a specific revision of the source object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "sourceObject": { + "description": "Name of the source object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{sourceBucket}/o/{sourceObject}/copyTo/b/{destinationBucket}/o/{destinationObject}", + "request": { + "$ref": "Object" + }, + "response": { + "$ref": "Object" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "delete": { + "description": "Deletes an object and its metadata. 
Deletions are permanent if versioning is not enabled for the bucket, or if the generation parameter is used.", + "httpMethod": "DELETE", + "id": "storage.objects.delete", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which the object resides.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, permanently deletes a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}", + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "get": { + "description": "Retrieves an object or its metadata.", + "httpMethod": "GET", + "id": "storage.objects.get", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which the object resides.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. 
Setting to 0 makes the operation succeed only if there is a live version of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}", + "response": { + "$ref": "Object" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ], + "supportsMediaDownload": true, + "useMediaDownloadService": true + }, + "getIamPolicy": { + "description": "Returns an IAM policy for the specified object.", + "httpMethod": "GET", + "id": "storage.objects.getIamPolicy", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which the object resides.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/iam", + "response": { + "$ref": "Policy" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "insert": { + "description": "Stores a new object and metadata.", + "httpMethod": "POST", + "id": "storage.objects.insert", + "mediaUpload": { + "accept": [ + "*/*" + ], + "protocols": { + "resumable": { + "multipart": true, + "path": "/resumable/upload/storage/v1/b/{bucket}/o" + }, + "simple": { + "multipart": true, + "path": "/upload/storage/v1/b/{bucket}/o" + } + } + }, + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which to store the new object. Overrides the provided object metadata's bucket value, if any.", + "location": "path", + "required": true, + "type": "string" + }, + "contentEncoding": { + "description": "If set, sets the contentEncoding property of the final object to this value. Setting this parameter is equivalent to setting the contentEncoding metadata property. This can be useful when uploading an object with uploadType=media to indicate the encoding of the content being uploaded.", + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "kmsKeyName": { + "description": "Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata's kms_key_name value, if any. Limited availability; usable only by enabled projects.", + "location": "query", + "type": "string" + }, + "name": { + "description": "Name of the object. Required when the object metadata is not otherwise provided. Overrides the object metadata's name value, if any. 
For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "query", + "type": "string" + }, + "predefinedAcl": { + "description": "Apply a predefined set of access controls to this object.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o", + "request": { + "$ref": "Object" + }, + "response": { + "$ref": "Object" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ], + "supportsMediaUpload": true + }, + "list": { + "description": "Retrieves a list of objects matching the criteria.", + "httpMethod": "GET", + "id": "storage.objects.list", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which to look for objects.", + "location": "path", + "required": true, + "type": "string" + }, + "delimiter": { + "description": "Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.", + "location": "query", + "type": "string" + }, + "maxResults": { + "default": "1000", + "description": "Maximum number of items plus prefixes to return in a single page of responses. As duplicate prefixes are omitted, fewer total results may be returned than requested. The service will use this parameter or 1,000 items, whichever is smaller.", + "format": "uint32", + "location": "query", + "minimum": "0", + "type": "integer" + }, + "pageToken": { + "description": "A previously-returned page token representing part of the larger set of results to view.", + "location": "query", + "type": "string" + }, + "prefix": { + "description": "Filter results to objects whose names begin with this prefix.", + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + }, + "versions": { + "description": "If true, lists all versions of an object as distinct results. The default is false. For more information, see Object Versioning.", + "location": "query", + "type": "boolean" + } + }, + "path": "b/{bucket}/o", + "response": { + "$ref": "Objects" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ], + "supportsSubscription": true + }, + "patch": { + "description": "Patches an object's metadata.", + "httpMethod": "PATCH", + "id": "storage.objects.patch", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which the object resides.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "predefinedAcl": { + "description": "Apply a predefined set of access controls to this object.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. 
Defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request, for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}", + "request": { + "$ref": "Object" + }, + "response": { + "$ref": "Object" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "rewrite": { + "description": "Rewrites a source object to a destination object. Optionally overrides metadata.", + "httpMethod": "POST", + "id": "storage.objects.rewrite", + "parameterOrder": [ + "sourceBucket", + "sourceObject", + "destinationBucket", + "destinationObject" + ], + "parameters": { + "destinationBucket": { + "description": "Name of the bucket in which to store the new object. Overrides the provided object metadata's bucket value, if any.", + "location": "path", + "required": true, + "type": "string" + }, + "destinationKmsKeyName": { + "description": "Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata's kms_key_name value, if any.", + "location": "query", + "type": "string" + }, + "destinationObject": { + "description": "Name of the new object. Required when the object metadata is not otherwise provided. Overrides the object metadata's name value, if any. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "destinationPredefinedAcl": { + "description": "Apply a predefined set of access controls to the destination object.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. 
Setting to 0 makes the operation succeed only if there is a live version of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the destination object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the destination object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceGenerationMatch": { + "description": "Makes the operation conditional on whether the source object's current generation matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceGenerationNotMatch": { + "description": "Makes the operation conditional on whether the source object's current generation does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceMetagenerationMatch": { + "description": "Makes the operation conditional on whether the source object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifSourceMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the source object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "maxBytesRewrittenPerCall": { + "description": "The maximum number of bytes that will be rewritten per rewrite request. Most callers shouldn't need to specify this parameter - it is primarily in place to support testing. If specified the value must be an integral multiple of 1 MiB (1048576). Also, this only applies to requests where the source and destination span locations and/or storage classes. Finally, this value must not change across rewrite calls else you'll get an error that the rewriteToken is invalid.", + "format": "int64", + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "rewriteToken": { + "description": "Include this field (from the previous rewrite response) on each rewrite request after the first one, until the rewrite response 'done' flag is true. Calls that provide a rewriteToken can omit all other request fields, but if included those fields must match the values provided in the first rewrite request.", + "location": "query", + "type": "string" + }, + "sourceBucket": { + "description": "Name of the bucket in which to find the source object.", + "location": "path", + "required": true, + "type": "string" + }, + "sourceGeneration": { + "description": "If present, selects a specific revision of the source object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "sourceObject": { + "description": "Name of the source object. 
For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{sourceBucket}/o/{sourceObject}/rewriteTo/b/{destinationBucket}/o/{destinationObject}", + "request": { + "$ref": "Object" + }, + "response": { + "$ref": "RewriteResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "setIamPolicy": { + "description": "Updates an IAM policy for the specified object.", + "httpMethod": "PUT", + "id": "storage.objects.setIamPolicy", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which the object resides.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/iam", + "request": { + "$ref": "Policy" + }, + "response": { + "$ref": "Policy" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "testIamPermissions": { + "description": "Tests a set of permissions on the given object to see which, if any, are held by the caller.", + "httpMethod": "GET", + "id": "storage.objects.testIamPermissions", + "parameterOrder": [ + "bucket", + "object", + "permissions" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which the object resides.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "permissions": { + "description": "Permissions to test.", + "location": "query", + "repeated": true, + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}/iam/testPermissions", + "response": { + "$ref": "TestIamPermissionsResponse" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + }, + "update": { + "description": "Updates an object's metadata.", + "httpMethod": "PUT", + "id": "storage.objects.update", + "parameterOrder": [ + "bucket", + "object" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which the object resides.", + "location": "path", + "required": true, + "type": "string" + }, + "generation": { + "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationMatch": { + "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifGenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "ifMetagenerationNotMatch": { + "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + "format": "int64", + "location": "query", + "type": "string" + }, + "object": { + "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + "location": "path", + "required": true, + "type": "string" + }, + "predefinedAcl": { + "description": "Apply a predefined set of access controls to this object.", + "enum": [ + "authenticatedRead", + "bucketOwnerFullControl", + "bucketOwnerRead", + "private", + "projectPrivate", + "publicRead" + ], + "enumDescriptions": [ + "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + "Object owner gets OWNER access, and project team owners get OWNER access.", + "Object owner gets OWNER access, and project team owners get READER access.", + "Object owner gets OWNER access.", + "Object owner gets OWNER access, and project team members get access according to their roles.", + "Object owner gets OWNER access, and allUsers get READER access." + ], + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to full.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + "location": "query", + "type": "string" + } + }, + "path": "b/{bucket}/o/{object}", + "request": { + "$ref": "Object" + }, + "response": { + "$ref": "Object" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/devstorage.full_control" + ] + }, + "watchAll": { + "description": "Watch for changes on all objects in a bucket.", + "httpMethod": "POST", + "id": "storage.objects.watchAll", + "parameterOrder": [ + "bucket" + ], + "parameters": { + "bucket": { + "description": "Name of the bucket in which to look for objects.", + "location": "path", + "required": true, + "type": "string" + }, + "delimiter": { + "description": "Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.", + "location": "query", + "type": "string" + }, + "maxResults": { + "default": "1000", + "description": "Maximum number of items plus prefixes to return in a single page of responses. As duplicate prefixes are omitted, fewer total results may be returned than requested. The service will use this parameter or 1,000 items, whichever is smaller.", + "format": "uint32", + "location": "query", + "minimum": "0", + "type": "integer" + }, + "pageToken": { + "description": "A previously-returned page token representing part of the larger set of results to view.", + "location": "query", + "type": "string" + }, + "prefix": { + "description": "Filter results to objects whose names begin with this prefix.", + "location": "query", + "type": "string" + }, + "projection": { + "description": "Set of properties to return. Defaults to noAcl.", + "enum": [ + "full", + "noAcl" + ], + "enumDescriptions": [ + "Include all properties.", + "Omit the owner, acl property." + ], + "location": "query", + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request. Required for Requester Pays buckets.", + "location": "query", + "type": "string" + }, + "versions": { + "description": "If true, lists all versions of an object as distinct results. The default is false. 
For more information, see Object Versioning.", + "location": "query", + "type": "boolean" + } + }, + "path": "b/{bucket}/o/watch", + "request": { + "$ref": "Channel", + "parameterName": "resource" + }, + "response": { + "$ref": "Channel" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ], + "supportsSubscription": true + } + } + }, + "projects": { + "resources": { + "serviceAccount": { + "methods": { + "get": { + "description": "Get the email address of this project's Google Cloud Storage service account.", + "httpMethod": "GET", + "id": "storage.projects.serviceAccount.get", + "parameterOrder": [ + "projectId" + ], + "parameters": { + "projectId": { + "description": "Project ID", + "location": "path", + "required": true, + "type": "string" + }, + "userProject": { + "description": "The project to be billed for this request.", + "location": "query", + "type": "string" + } + }, + "path": "projects/{projectId}/serviceAccount", + "response": { + "$ref": "ServiceAccount" + }, + "scopes": [ + "https://www.googleapis.com/auth/cloud-platform", + "https://www.googleapis.com/auth/cloud-platform.read-only", + "https://www.googleapis.com/auth/devstorage.full_control", + "https://www.googleapis.com/auth/devstorage.read_only", + "https://www.googleapis.com/auth/devstorage.read_write" + ] + } + } + } + } + } + }, + "revision": "20180305", + "rootUrl": "https://www.googleapis.com/", + "schemas": { + "Bucket": { + "description": "A bucket.", + "id": "Bucket", + "properties": { + "acl": { + "annotations": { + "required": [ + "storage.buckets.update" + ] + }, + "description": "Access controls on the bucket.", + "items": { + "$ref": "BucketAccessControl" + }, + "type": "array" + }, + "billing": { + "description": "The bucket's billing configuration.", + "properties": { + "requesterPays": { + "description": "When set to true, Requester Pays is enabled for this bucket.", + "type": "boolean" + } + }, + "type": "object" + }, + "cors": { + "description": "The bucket's Cross-Origin Resource Sharing (CORS) configuration.", + "items": { + "properties": { + "maxAgeSeconds": { + "description": "The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses.", + "format": "int32", + "type": "integer" + }, + "method": { + "description": "The list of HTTP methods on which to include CORS response headers, (GET, OPTIONS, POST, etc) Note: \"*\" is permitted in the list of methods, and means \"any method\".", + "items": { + "type": "string" + }, + "type": "array" + }, + "origin": { + "description": "The list of Origins eligible to receive CORS response headers. Note: \"*\" is permitted in the list of origins, and means \"any Origin\".", + "items": { + "type": "string" + }, + "type": "array" + }, + "responseHeader": { + "description": "The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.", + "items": { + "type": "string" + }, + "type": "array" + } + }, + "type": "object" + }, + "type": "array" + }, + "defaultEventBasedHold": { + "description": "Defines the default value for Event-Based hold on newly created objects in this bucket. Event-Based hold is a way to retain objects indefinitely until an event occurs, signified by the hold's release. 
After being released, such objects will be subject to bucket-level retention (if any). One sample use case of this flag is for banks to hold loan documents for at least 3 years after loan is paid in full. Here bucket-level retention is 3 years and the event is loan being paid in full. In this example these objects will be held intact for any number of years until the event has occurred (hold is released) and then 3 more years after that. Objects under Event-Based hold cannot be deleted, overwritten or archived until the hold is removed.", + "type": "boolean" + }, + "defaultObjectAcl": { + "description": "Default access controls to apply to new objects when no ACL is provided.", + "items": { + "$ref": "ObjectAccessControl" + }, + "type": "array" + }, + "encryption": { + "description": "Encryption configuration used by default for newly inserted objects, when no encryption config is specified.", + "properties": { + "defaultKmsKeyName": { + "description": "A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified. Limited availability; usable only by enabled projects.", + "type": "string" + } + }, + "type": "object" + }, + "etag": { + "description": "HTTP 1.1 Entity tag for the bucket.", + "type": "string" + }, + "id": { + "description": "The ID of the bucket. For buckets, the id and name properties are the same.", + "type": "string" + }, + "kind": { + "default": "storage#bucket", + "description": "The kind of item this is. For buckets, this is always storage#bucket.", + "type": "string" + }, + "labels": { + "additionalProperties": { + "description": "An individual label entry.", + "type": "string" + }, + "description": "User-provided labels, in key/value pairs.", + "type": "object" + }, + "lifecycle": { + "description": "The bucket's lifecycle configuration. See lifecycle management for more information.", + "properties": { + "rule": { + "description": "A lifecycle management rule, which is made of an action to take and the condition(s) under which the action will be taken.", + "items": { + "properties": { + "action": { + "description": "The action to take.", + "properties": { + "storageClass": { + "description": "Target storage class. Required iff the type of the action is SetStorageClass.", + "type": "string" + }, + "type": { + "description": "Type of the action. Currently, only Delete and SetStorageClass are supported.", + "type": "string" + } + }, + "type": "object" + }, + "condition": { + "description": "The condition(s) under which the action will be taken.", + "properties": { + "age": { + "description": "Age of an object (in days). This condition is satisfied when an object reaches the specified age.", + "format": "int32", + "type": "integer" + }, + "createdBefore": { + "description": "A date in RFC 3339 format with only the date part (for instance, \"2013-01-15\"). This condition is satisfied when an object is created before midnight of the specified date in UTC.", + "format": "date", + "type": "string" + }, + "isLive": { + "description": "Relevant only for versioned objects. If the value is true, this condition matches live objects; if the value is false, it matches archived objects.", + "type": "boolean" + }, + "matchesStorageClass": { + "description": "Objects having any of the storage classes specified by this condition will be matched. 
Values include MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, STANDARD, and DURABLE_REDUCED_AVAILABILITY.", + "items": { + "type": "string" + }, + "type": "array" + }, + "numNewerVersions": { + "description": "Relevant only for versioned objects. If the value is N, this condition is satisfied when there are at least N versions (including the live version) newer than this version of the object.", + "format": "int32", + "type": "integer" + } + }, + "type": "object" + } + }, + "type": "object" + }, + "type": "array" + } + }, + "type": "object" + }, + "location": { + "description": "The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer's guide for the authoritative list.", + "type": "string" + }, + "logging": { + "description": "The bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.", + "properties": { + "logBucket": { + "description": "The destination bucket where the current bucket's logs should be placed.", + "type": "string" + }, + "logObjectPrefix": { + "description": "A prefix for log object names.", + "type": "string" + } + }, + "type": "object" + }, + "metageneration": { + "description": "The metadata generation of this bucket.", + "format": "int64", + "type": "string" + }, + "name": { + "annotations": { + "required": [ + "storage.buckets.insert" + ] + }, + "description": "The name of the bucket.", + "type": "string" + }, + "owner": { + "description": "The owner of the bucket. This is always the project team's owner group.", + "properties": { + "entity": { + "description": "The entity, in the form project-owner-projectId.", + "type": "string" + }, + "entityId": { + "description": "The ID for the entity.", + "type": "string" + } + }, + "type": "object" + }, + "projectNumber": { + "description": "The project number of the project the bucket belongs to.", + "format": "uint64", + "type": "string" + }, + "retentionPolicy": { + "description": "Defines the retention policy for a bucket. The Retention policy enforces a minimum retention time for all objects contained in the bucket, based on their creation time. Any attempt to overwrite or delete objects younger than the retention period will result in a PERMISSION_DENIED error. An unlocked retention policy can be modified or removed from the bucket via the UpdateBucketMetadata RPC. A locked retention policy cannot be removed or shortened in duration for the lifetime of the bucket. Attempting to remove or decrease period of a locked retention policy will result in a PERMISSION_DENIED error.", + "properties": { + "effectiveTime": { + "description": "The time from which policy was enforced and effective. RFC 3339 format.", + "format": "date-time", + "type": "string" + }, + "isLocked": { + "description": "Once locked, an object retention policy cannot be modified.", + "type": "boolean" + }, + "retentionPeriod": { + "description": "Specifies the duration that objects need to be retained. Retention duration must be greater than zero and less than 100 years. Note that enforcement of retention periods less than a day is not guaranteed. Such periods should only be used for testing purposes.", + "format": "int64", + "type": "string" + } + }, + "type": "object" + }, + "selfLink": { + "description": "The URI of this bucket.", + "type": "string" + }, + "storageClass": { + "description": "The bucket's default storage class, used whenever no storageClass is specified for a newly-created object. 
This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Values include MULTI_REGIONAL, REGIONAL, STANDARD, NEARLINE, COLDLINE, and DURABLE_REDUCED_AVAILABILITY. If this value is not specified when the bucket is created, it will default to STANDARD. For more information, see storage classes.", + "type": "string" + }, + "timeCreated": { + "description": "The creation time of the bucket in RFC 3339 format.", + "format": "date-time", + "type": "string" + }, + "updated": { + "description": "The modification time of the bucket in RFC 3339 format.", + "format": "date-time", + "type": "string" + }, + "versioning": { + "description": "The bucket's versioning configuration.", + "properties": { + "enabled": { + "description": "While set to true, versioning is fully enabled for this bucket.", + "type": "boolean" + } + }, + "type": "object" + }, + "website": { + "description": "The bucket's website configuration, controlling how the service behaves when accessing bucket contents as a web site. See the Static Website Examples for more information.", + "properties": { + "mainPageSuffix": { + "description": "If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages.", + "type": "string" + }, + "notFoundPage": { + "description": "If the requested object path is missing, and any mainPageSuffix object is missing, if applicable, the service will return the named object from this bucket as the content for a 404 Not Found result.", + "type": "string" + } + }, + "type": "object" + } + }, + "type": "object" + }, + "BucketAccessControl": { + "description": "An access-control entry.", + "id": "BucketAccessControl", + "properties": { + "bucket": { + "description": "The name of the bucket.", + "type": "string" + }, + "domain": { + "description": "The domain associated with the entity, if any.", + "type": "string" + }, + "email": { + "description": "The email address associated with the entity, if any.", + "type": "string" + }, + "entity": { + "annotations": { + "required": [ + "storage.bucketAccessControls.insert" + ] + }, + "description": "The entity holding the permission, in one of the following forms: \n- user-userId \n- user-email \n- group-groupId \n- group-email \n- domain-domain \n- project-team-projectId \n- allUsers \n- allAuthenticatedUsers Examples: \n- The user liz@example.com would be user-liz@example.com. \n- The group example@googlegroups.com would be group-example@googlegroups.com. \n- To refer to all members of the Google Apps for Business domain example.com, the entity would be domain-example.com.", + "type": "string" + }, + "entityId": { + "description": "The ID for the entity, if any.", + "type": "string" + }, + "etag": { + "description": "HTTP 1.1 Entity tag for the access-control entry.", + "type": "string" + }, + "id": { + "description": "The ID of the access-control entry.", + "type": "string" + }, + "kind": { + "default": "storage#bucketAccessControl", + "description": "The kind of item this is. 
For bucket access control entries, this is always storage#bucketAccessControl.", + "type": "string" + }, + "projectTeam": { + "description": "The project team associated with the entity, if any.", + "properties": { + "projectNumber": { + "description": "The project number.", + "type": "string" + }, + "team": { + "description": "The team.", + "type": "string" + } + }, + "type": "object" + }, + "role": { + "annotations": { + "required": [ + "storage.bucketAccessControls.insert" + ] + }, + "description": "The access permission for the entity.", + "type": "string" + }, + "selfLink": { + "description": "The link to this access-control entry.", + "type": "string" + } + }, + "type": "object" + }, + "BucketAccessControls": { + "description": "An access-control list.", + "id": "BucketAccessControls", + "properties": { + "items": { + "description": "The list of items.", + "items": { + "$ref": "BucketAccessControl" + }, + "type": "array" + }, + "kind": { + "default": "storage#bucketAccessControls", + "description": "The kind of item this is. For lists of bucket access control entries, this is always storage#bucketAccessControls.", + "type": "string" + } + }, + "type": "object" + }, + "Buckets": { + "description": "A list of buckets.", + "id": "Buckets", + "properties": { + "items": { + "description": "The list of items.", + "items": { + "$ref": "Bucket" + }, + "type": "array" + }, + "kind": { + "default": "storage#buckets", + "description": "The kind of item this is. For lists of buckets, this is always storage#buckets.", + "type": "string" + }, + "nextPageToken": { + "description": "The continuation token, used to page through large result sets. Provide this value in a subsequent request to return the next page of results.", + "type": "string" + } + }, + "type": "object" + }, + "Channel": { + "description": "An notification channel used to watch for resource changes.", + "id": "Channel", + "properties": { + "address": { + "description": "The address where notifications are delivered for this channel.", + "type": "string" + }, + "expiration": { + "description": "Date and time of notification channel expiration, expressed as a Unix timestamp, in milliseconds. Optional.", + "format": "int64", + "type": "string" + }, + "id": { + "description": "A UUID or similar unique string that identifies this channel.", + "type": "string" + }, + "kind": { + "default": "api#channel", + "description": "Identifies this as a notification channel used to watch for changes to a resource. Value: the fixed string \"api#channel\".", + "type": "string" + }, + "params": { + "additionalProperties": { + "description": "Declares a new parameter by name.", + "type": "string" + }, + "description": "Additional parameters controlling delivery channel behavior. Optional.", + "type": "object" + }, + "payload": { + "description": "A Boolean value to indicate whether payload is wanted. Optional.", + "type": "boolean" + }, + "resourceId": { + "description": "An opaque ID that identifies the resource being watched on this channel. Stable across different API versions.", + "type": "string" + }, + "resourceUri": { + "description": "A version-specific identifier for the watched resource.", + "type": "string" + }, + "token": { + "description": "An arbitrary string delivered to the target address with each notification delivered over this channel. 
Optional.", + "type": "string" + }, + "type": { + "description": "The type of delivery mechanism used for this channel.", + "type": "string" + } + }, + "type": "object" + }, + "ComposeRequest": { + "description": "A Compose request.", + "id": "ComposeRequest", + "properties": { + "destination": { + "$ref": "Object", + "description": "Properties of the resulting object." + }, + "kind": { + "default": "storage#composeRequest", + "description": "The kind of item this is.", + "type": "string" + }, + "sourceObjects": { + "annotations": { + "required": [ + "storage.objects.compose" + ] + }, + "description": "The list of source objects that will be concatenated into a single object.", + "items": { + "properties": { + "generation": { + "description": "The generation of this object to use as the source.", + "format": "int64", + "type": "string" + }, + "name": { + "annotations": { + "required": [ + "storage.objects.compose" + ] + }, + "description": "The source object's name. The source object's bucket is implicitly the destination bucket.", + "type": "string" + }, + "objectPreconditions": { + "description": "Conditions that must be met for this operation to execute.", + "properties": { + "ifGenerationMatch": { + "description": "Only perform the composition if the generation of the source object that would be used matches this value. If this value and a generation are both specified, they must be the same value or the call will fail.", + "format": "int64", + "type": "string" + } + }, + "type": "object" + } + }, + "type": "object" + }, + "type": "array" + } + }, + "type": "object" + }, + "Notification": { + "description": "A subscription to receive Google PubSub notifications.", + "id": "Notification", + "properties": { + "custom_attributes": { + "additionalProperties": { + "type": "string" + }, + "description": "An optional list of additional attributes to attach to each Cloud PubSub message published for this notification subscription.", + "type": "object" + }, + "etag": { + "description": "HTTP 1.1 Entity tag for this subscription notification.", + "type": "string" + }, + "event_types": { + "description": "If present, only send notifications about listed event types. If empty, sent notifications for all event types.", + "items": { + "type": "string" + }, + "type": "array" + }, + "id": { + "description": "The ID of the notification.", + "type": "string" + }, + "kind": { + "default": "storage#notification", + "description": "The kind of item this is. For notifications, this is always storage#notification.", + "type": "string" + }, + "object_name_prefix": { + "description": "If present, only apply this notification configuration to object names that begin with this prefix.", + "type": "string" + }, + "payload_format": { + "annotations": { + "required": [ + "storage.notifications.insert" + ] + }, + "default": "JSON_API_V1", + "description": "The desired content of the Payload.", + "type": "string" + }, + "selfLink": { + "description": "The canonical URL of this notification.", + "type": "string" + }, + "topic": { + "annotations": { + "required": [ + "storage.notifications.insert" + ] + }, + "description": "The Cloud PubSub topic to which this subscription publishes. 
Formatted as: '//pubsub.googleapis.com/projects/{project-identifier}/topics/{my-topic}'", + "type": "string" + } + }, + "type": "object" + }, + "Notifications": { + "description": "A list of notification subscriptions.", + "id": "Notifications", + "properties": { + "items": { + "description": "The list of items.", + "items": { + "$ref": "Notification" + }, + "type": "array" + }, + "kind": { + "default": "storage#notifications", + "description": "The kind of item this is. For lists of notifications, this is always storage#notifications.", + "type": "string" + } + }, + "type": "object" + }, + "Object": { + "description": "An object.", + "id": "Object", + "properties": { + "acl": { + "annotations": { + "required": [ + "storage.objects.update" + ] + }, + "description": "Access controls on the object.", + "items": { + "$ref": "ObjectAccessControl" + }, + "type": "array" + }, + "bucket": { + "description": "The name of the bucket containing this object.", + "type": "string" + }, + "cacheControl": { + "description": "Cache-Control directive for the object data. If omitted, and the object is accessible to all anonymous users, the default will be public, max-age=3600.", + "type": "string" + }, + "componentCount": { + "description": "Number of underlying components that make up this object. Components are accumulated by compose operations.", + "format": "int32", + "type": "integer" + }, + "contentDisposition": { + "description": "Content-Disposition of the object data.", + "type": "string" + }, + "contentEncoding": { + "description": "Content-Encoding of the object data.", + "type": "string" + }, + "contentLanguage": { + "description": "Content-Language of the object data.", + "type": "string" + }, + "contentType": { + "description": "Content-Type of the object data. If an object is stored without a Content-Type, it is served as application/octet-stream.", + "type": "string" + }, + "crc32c": { + "description": "CRC32c checksum, as described in RFC 4960, Appendix B; encoded using base64 in big-endian byte order. For more information about using the CRC32c checksum, see Hashes and ETags: Best Practices.", + "type": "string" + }, + "customerEncryption": { + "description": "Metadata of customer-supplied encryption key, if the object is encrypted by such a key.", + "properties": { + "encryptionAlgorithm": { + "description": "The encryption algorithm.", + "type": "string" + }, + "keySha256": { + "description": "SHA256 hash value of the encryption key.", + "type": "string" + } + }, + "type": "object" + }, + "etag": { + "description": "HTTP 1.1 Entity tag for the object.", + "type": "string" + }, + "eventBasedHold": { + "description": "Defines the Event-Based hold for an object. Event-Based hold is a way to retain objects indefinitely until an event occurs, signified by the hold's release. After being released, such objects will be subject to bucket-level retention (if any). One sample use case of this flag is for banks to hold loan documents for at least 3 years after loan is paid in full. Here bucket-level retention is 3 years and the event is loan being paid in full. In this example these objects will be held intact for any number of years until the event has occurred (hold is released) and then 3 more years after that.", + "type": "boolean" + }, + "generation": { + "description": "The content generation of this object. 
Used for object versioning.", + "format": "int64", + "type": "string" + }, + "id": { + "description": "The ID of the object, including the bucket name, object name, and generation number.", + "type": "string" + }, + "kind": { + "default": "storage#object", + "description": "The kind of item this is. For objects, this is always storage#object.", + "type": "string" + }, + "kmsKeyName": { + "description": "Cloud KMS Key used to encrypt this object, if the object is encrypted by such a key. Limited availability; usable only by enabled projects.", + "type": "string" + }, + "md5Hash": { + "description": "MD5 hash of the data; encoded using base64. For more information about using the MD5 hash, see Hashes and ETags: Best Practices.", + "type": "string" + }, + "mediaLink": { + "description": "Media download link.", + "type": "string" + }, + "metadata": { + "additionalProperties": { + "description": "An individual metadata entry.", + "type": "string" + }, + "description": "User-provided metadata, in key/value pairs.", + "type": "object" + }, + "metageneration": { + "description": "The version of the metadata for this object at this generation. Used for preconditions and for detecting changes in metadata. A metageneration number is only meaningful in the context of a particular generation of a particular object.", + "format": "int64", + "type": "string" + }, + "name": { + "description": "The name of the object. Required if not specified by URL parameter.", + "type": "string" + }, + "owner": { + "description": "The owner of the object. This will always be the uploader of the object.", + "properties": { + "entity": { + "description": "The entity, in the form user-userId.", + "type": "string" + }, + "entityId": { + "description": "The ID for the entity.", + "type": "string" + } + }, + "type": "object" + }, + "retentionExpirationTime": { + "description": "Specifies the earliest time that the object's retention period expires. This value is server-determined and is in RFC 3339 format. Note 1: This field is not provided for objects with an active Event-Based hold, since retention expiration is unknown until the hold is removed. Note 2: This value can be provided even when TemporaryHold is set (so that the user can reason about policy without having to first unset the TemporaryHold).", + "format": "date-time", + "type": "string" + }, + "selfLink": { + "description": "The link to this object.", + "type": "string" + }, + "size": { + "description": "Content-Length of the data in bytes.", + "format": "uint64", + "type": "string" + }, + "storageClass": { + "description": "Storage class of the object.", + "type": "string" + }, + "temporaryHold": { + "description": "Defines the temporary hold for an object. This flag is used to enforce a temporary hold on an object. While it is set to true, the object is protected against deletion and overwrites. A common use case of this flag is regulatory investigations where objects need to be retained while the investigation is ongoing.", + "type": "boolean" + }, + "timeCreated": { + "description": "The creation time of the object in RFC 3339 format.", + "format": "date-time", + "type": "string" + }, + "timeDeleted": { + "description": "The deletion time of the object in RFC 3339 format. Will be returned if and only if this version of the object has been deleted.", + "format": "date-time", + "type": "string" + }, + "timeStorageClassUpdated": { + "description": "The time at which the object's storage class was last changed. 
When the object is initially created, it will be set to timeCreated.", + "format": "date-time", + "type": "string" + }, + "updated": { + "description": "The modification time of the object metadata in RFC 3339 format.", + "format": "date-time", + "type": "string" + } + }, + "type": "object" + }, + "ObjectAccessControl": { + "description": "An access-control entry.", + "id": "ObjectAccessControl", + "properties": { + "bucket": { + "description": "The name of the bucket.", + "type": "string" + }, + "domain": { + "description": "The domain associated with the entity, if any.", + "type": "string" + }, + "email": { + "description": "The email address associated with the entity, if any.", + "type": "string" + }, + "entity": { + "annotations": { + "required": [ + "storage.defaultObjectAccessControls.insert", + "storage.objectAccessControls.insert" + ] + }, + "description": "The entity holding the permission, in one of the following forms: \n- user-userId \n- user-email \n- group-groupId \n- group-email \n- domain-domain \n- project-team-projectId \n- allUsers \n- allAuthenticatedUsers Examples: \n- The user liz@example.com would be user-liz@example.com. \n- The group example@googlegroups.com would be group-example@googlegroups.com. \n- To refer to all members of the Google Apps for Business domain example.com, the entity would be domain-example.com.", + "type": "string" + }, + "entityId": { + "description": "The ID for the entity, if any.", + "type": "string" + }, + "etag": { + "description": "HTTP 1.1 Entity tag for the access-control entry.", + "type": "string" + }, + "generation": { + "description": "The content generation of the object, if applied to an object.", + "format": "int64", + "type": "string" + }, + "id": { + "description": "The ID of the access-control entry.", + "type": "string" + }, + "kind": { + "default": "storage#objectAccessControl", + "description": "The kind of item this is. For object access control entries, this is always storage#objectAccessControl.", + "type": "string" + }, + "object": { + "description": "The name of the object, if applied to an object.", + "type": "string" + }, + "projectTeam": { + "description": "The project team associated with the entity, if any.", + "properties": { + "projectNumber": { + "description": "The project number.", + "type": "string" + }, + "team": { + "description": "The team.", + "type": "string" + } + }, + "type": "object" + }, + "role": { + "annotations": { + "required": [ + "storage.defaultObjectAccessControls.insert", + "storage.objectAccessControls.insert" + ] + }, + "description": "The access permission for the entity.", + "type": "string" + }, + "selfLink": { + "description": "The link to this access-control entry.", + "type": "string" + } + }, + "type": "object" + }, + "ObjectAccessControls": { + "description": "An access-control list.", + "id": "ObjectAccessControls", + "properties": { + "items": { + "description": "The list of items.", + "items": { + "$ref": "ObjectAccessControl" + }, + "type": "array" + }, + "kind": { + "default": "storage#objectAccessControls", + "description": "The kind of item this is. 
For lists of object access control entries, this is always storage#objectAccessControls.", + "type": "string" + } + }, + "type": "object" + }, + "Objects": { + "description": "A list of objects.", + "id": "Objects", + "properties": { + "items": { + "description": "The list of items.", + "items": { + "$ref": "Object" + }, + "type": "array" + }, + "kind": { + "default": "storage#objects", + "description": "The kind of item this is. For lists of objects, this is always storage#objects.", + "type": "string" + }, + "nextPageToken": { + "description": "The continuation token, used to page through large result sets. Provide this value in a subsequent request to return the next page of results.", + "type": "string" + }, + "prefixes": { + "description": "The list of prefixes of objects matching-but-not-listed up to and including the requested delimiter.", + "items": { + "type": "string" + }, + "type": "array" + } + }, + "type": "object" + }, + "Policy": { + "description": "A bucket/object IAM policy.", + "id": "Policy", + "properties": { + "bindings": { + "annotations": { + "required": [ + "storage.buckets.setIamPolicy", + "storage.objects.setIamPolicy" + ] + }, + "description": "An association between a role, which comes with a set of permissions, and members who may assume that role.", + "items": { + "properties": { + "condition": { + "type": "any" + }, + "members": { + "annotations": { + "required": [ + "storage.buckets.setIamPolicy", + "storage.objects.setIamPolicy" + ] + }, + "description": "A collection of identifiers for members who may assume the provided role. Recognized identifiers are as follows: \n- allUsers — A special identifier that represents anyone on the internet; with or without a Google account. \n- allAuthenticatedUsers — A special identifier that represents anyone who is authenticated with a Google account or a service account. \n- user:emailid — An email address that represents a specific account. For example, user:alice@gmail.com or user:joe@example.com. \n- serviceAccount:emailid — An email address that represents a service account. For example, serviceAccount:my-other-app@appspot.gserviceaccount.com . \n- group:emailid — An email address that represents a Google group. For example, group:admins@example.com. \n- domain:domain — A Google Apps domain name that represents all the users of that domain. For example, domain:google.com or domain:example.com. \n- projectOwner:projectid — Owners of the given project. For example, projectOwner:my-example-project \n- projectEditor:projectid — Editors of the given project. For example, projectEditor:my-example-project \n- projectViewer:projectid — Viewers of the given project. For example, projectViewer:my-example-project", + "items": { + "type": "string" + }, + "type": "array" + }, + "role": { + "annotations": { + "required": [ + "storage.buckets.setIamPolicy", + "storage.objects.setIamPolicy" + ] + }, + "description": "The role to which members belong. Two types of roles are supported: new IAM roles, which grant permissions that do not map directly to those provided by ACLs, and legacy IAM roles, which do map directly to ACL permissions. All roles are of the format roles/storage.specificRole.\nThe new IAM roles are: \n- roles/storage.admin — Full control of Google Cloud Storage resources. \n- roles/storage.objectViewer — Read-Only access to Google Cloud Storage objects. \n- roles/storage.objectCreator — Access to create objects in Google Cloud Storage. \n- roles/storage.objectAdmin — Full control of Google Cloud Storage objects. 
The legacy IAM roles are: \n- roles/storage.legacyObjectReader — Read-only access to objects without listing. Equivalent to an ACL entry on an object with the READER role. \n- roles/storage.legacyObjectOwner — Read/write access to existing objects without listing. Equivalent to an ACL entry on an object with the OWNER role. \n- roles/storage.legacyBucketReader — Read access to buckets with object listing. Equivalent to an ACL entry on a bucket with the READER role. \n- roles/storage.legacyBucketWriter — Read access to buckets with object listing/creation/deletion. Equivalent to an ACL entry on a bucket with the WRITER role. \n- roles/storage.legacyBucketOwner — Read and write access to existing buckets with object listing/creation/deletion. Equivalent to an ACL entry on a bucket with the OWNER role.", + "type": "string" + } + }, + "type": "object" + }, + "type": "array" + }, + "etag": { + "description": "HTTP 1.1 Entity tag for the policy.", + "format": "byte", + "type": "string" + }, + "kind": { + "default": "storage#policy", + "description": "The kind of item this is. For policies, this is always storage#policy. This field is ignored on input.", + "type": "string" + }, + "resourceId": { + "description": "The ID of the resource to which this policy belongs. Will be of the form projects/_/buckets/bucket for buckets, and projects/_/buckets/bucket/objects/object for objects. A specific generation may be specified by appending #generationNumber to the end of the object name, e.g. projects/_/buckets/my-bucket/objects/data.txt#17. The current generation can be denoted with #0. This field is ignored on input.", + "type": "string" + } + }, + "type": "object" + }, + "RewriteResponse": { + "description": "A rewrite response.", + "id": "RewriteResponse", + "properties": { + "done": { + "description": "true if the copy is finished; otherwise, false if the copy is in progress. This property is always present in the response.", + "type": "boolean" + }, + "kind": { + "default": "storage#rewriteResponse", + "description": "The kind of item this is.", + "type": "string" + }, + "objectSize": { + "description": "The total size of the object being copied in bytes. This property is always present in the response.", + "format": "int64", + "type": "string" + }, + "resource": { + "$ref": "Object", + "description": "A resource containing the metadata for the copied-to object. This property is present in the response only when copying completes." + }, + "rewriteToken": { + "description": "A token to use in subsequent requests to continue copying data. This token is present in the response only when there is more data to copy.", + "type": "string" + }, + "totalBytesRewritten": { + "description": "The total bytes written so far, which can be used to provide a waiting user with a progress indicator. This property is always present in the response.", + "format": "int64", + "type": "string" + } + }, + "type": "object" + }, + "ServiceAccount": { + "description": "A subscription to receive Google PubSub notifications.", + "id": "ServiceAccount", + "properties": { + "email_address": { + "description": "The ID of the notification.", + "type": "string" + }, + "kind": { + "default": "storage#serviceAccount", + "description": "The kind of item this is. 
For notifications, this is always storage#notification.", + "type": "string" + } + }, + "type": "object" + }, + "TestIamPermissionsResponse": { + "description": "A storage.(buckets|objects).testIamPermissions response.", + "id": "TestIamPermissionsResponse", + "properties": { + "kind": { + "default": "storage#testIamPermissionsResponse", + "description": "The kind of item this is.", + "type": "string" + }, + "permissions": { + "description": "The permissions held by the caller. Permissions are always of the format storage.resource.capability, where resource is one of buckets or objects. The supported permissions are as follows: \n- storage.buckets.delete — Delete bucket. \n- storage.buckets.get — Read bucket metadata. \n- storage.buckets.getIamPolicy — Read bucket IAM policy. \n- storage.buckets.create — Create bucket. \n- storage.buckets.list — List buckets. \n- storage.buckets.setIamPolicy — Update bucket IAM policy. \n- storage.buckets.update — Update bucket metadata. \n- storage.objects.delete — Delete object. \n- storage.objects.get — Read object data and metadata. \n- storage.objects.getIamPolicy — Read object IAM policy. \n- storage.objects.create — Create object. \n- storage.objects.list — List objects. \n- storage.objects.setIamPolicy — Update object IAM policy. \n- storage.objects.update — Update object metadata.", + "items": { + "type": "string" + }, + "type": "array" + } + }, + "type": "object" + } + }, + "servicePath": "storage/v1/", + "title": "Cloud Storage JSON API", + "version": "v1" +} \ No newline at end of file diff --git a/vendor/google.golang.org/api/storage/v1/storage-gen.go b/vendor/google.golang.org/api/storage/v1/storage-gen.go new file mode 100644 index 000000000..36846eb54 --- /dev/null +++ b/vendor/google.golang.org/api/storage/v1/storage-gen.go @@ -0,0 +1,11171 @@ +// Package storage provides access to the Cloud Storage JSON API. +// +// See https://developers.google.com/storage/docs/json_api/ +// +// Usage example: +// +// import "google.golang.org/api/storage/v1" +// ... +// storageService, err := storage.New(oauthHttpClient) +package storage // import "google.golang.org/api/storage/v1" + +import ( + "bytes" + "encoding/json" + "errors" + "fmt" + context "golang.org/x/net/context" + ctxhttp "golang.org/x/net/context/ctxhttp" + gensupport "google.golang.org/api/gensupport" + googleapi "google.golang.org/api/googleapi" + "io" + "net/http" + "net/url" + "strconv" + "strings" +) + +// Always reference these packages, just in case the auto-generated code +// below doesn't. +var _ = bytes.NewBuffer +var _ = strconv.Itoa +var _ = fmt.Sprintf +var _ = json.NewDecoder +var _ = io.Copy +var _ = url.Parse +var _ = gensupport.MarshalJSON +var _ = googleapi.Version +var _ = errors.New +var _ = strings.Replace +var _ = context.Canceled +var _ = ctxhttp.Do + +const apiId = "storage:v1" +const apiName = "storage" +const apiVersion = "v1" +const basePath = "https://www.googleapis.com/storage/v1/" + +// OAuth2 scopes used by this API. 
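The generated package comment above only hints at client construction ("storageService, err := storage.New(oauthHttpClient)"). As a minimal illustrative sketch, not part of the vendored file, the service might be wired up with Application Default Credentials like this; the bucket name is a placeholder, and the scope constant used is the one declared just below in this generated file:

```
package main

import (
	"log"

	"golang.org/x/net/context"
	"golang.org/x/oauth2/google"
	storage "google.golang.org/api/storage/v1"
)

func main() {
	ctx := context.Background()
	// Application Default Credentials; DevstorageReadOnlyScope is declared
	// a few lines further down in this generated file.
	client, err := google.DefaultClient(ctx, storage.DevstorageReadOnlyScope)
	if err != nil {
		log.Fatal(err)
	}
	svc, err := storage.New(client)
	if err != nil {
		log.Fatal(err)
	}
	// Fetch metadata for a (placeholder) bucket.
	b, err := svc.Buckets.Get("my-example-bucket").Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("bucket %s, storage class %s", b.Name, b.StorageClass)
}
```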
+const ( + // View and manage your data across Google Cloud Platform services + CloudPlatformScope = "https://www.googleapis.com/auth/cloud-platform" + + // View your data across Google Cloud Platform services + CloudPlatformReadOnlyScope = "https://www.googleapis.com/auth/cloud-platform.read-only" + + // Manage your data and permissions in Google Cloud Storage + DevstorageFullControlScope = "https://www.googleapis.com/auth/devstorage.full_control" + + // View your data in Google Cloud Storage + DevstorageReadOnlyScope = "https://www.googleapis.com/auth/devstorage.read_only" + + // Manage your data in Google Cloud Storage + DevstorageReadWriteScope = "https://www.googleapis.com/auth/devstorage.read_write" +) + +func New(client *http.Client) (*Service, error) { + if client == nil { + return nil, errors.New("client is nil") + } + s := &Service{client: client, BasePath: basePath} + s.BucketAccessControls = NewBucketAccessControlsService(s) + s.Buckets = NewBucketsService(s) + s.Channels = NewChannelsService(s) + s.DefaultObjectAccessControls = NewDefaultObjectAccessControlsService(s) + s.Notifications = NewNotificationsService(s) + s.ObjectAccessControls = NewObjectAccessControlsService(s) + s.Objects = NewObjectsService(s) + s.Projects = NewProjectsService(s) + return s, nil +} + +type Service struct { + client *http.Client + BasePath string // API endpoint base URL + UserAgent string // optional additional User-Agent fragment + + BucketAccessControls *BucketAccessControlsService + + Buckets *BucketsService + + Channels *ChannelsService + + DefaultObjectAccessControls *DefaultObjectAccessControlsService + + Notifications *NotificationsService + + ObjectAccessControls *ObjectAccessControlsService + + Objects *ObjectsService + + Projects *ProjectsService +} + +func (s *Service) userAgent() string { + if s.UserAgent == "" { + return googleapi.UserAgent + } + return googleapi.UserAgent + " " + s.UserAgent +} + +func NewBucketAccessControlsService(s *Service) *BucketAccessControlsService { + rs := &BucketAccessControlsService{s: s} + return rs +} + +type BucketAccessControlsService struct { + s *Service +} + +func NewBucketsService(s *Service) *BucketsService { + rs := &BucketsService{s: s} + return rs +} + +type BucketsService struct { + s *Service +} + +func NewChannelsService(s *Service) *ChannelsService { + rs := &ChannelsService{s: s} + return rs +} + +type ChannelsService struct { + s *Service +} + +func NewDefaultObjectAccessControlsService(s *Service) *DefaultObjectAccessControlsService { + rs := &DefaultObjectAccessControlsService{s: s} + return rs +} + +type DefaultObjectAccessControlsService struct { + s *Service +} + +func NewNotificationsService(s *Service) *NotificationsService { + rs := &NotificationsService{s: s} + return rs +} + +type NotificationsService struct { + s *Service +} + +func NewObjectAccessControlsService(s *Service) *ObjectAccessControlsService { + rs := &ObjectAccessControlsService{s: s} + return rs +} + +type ObjectAccessControlsService struct { + s *Service +} + +func NewObjectsService(s *Service) *ObjectsService { + rs := &ObjectsService{s: s} + return rs +} + +type ObjectsService struct { + s *Service +} + +func NewProjectsService(s *Service) *ProjectsService { + rs := &ProjectsService{s: s} + rs.ServiceAccount = NewProjectsServiceAccountService(s) + return rs +} + +type ProjectsService struct { + s *Service + + ServiceAccount *ProjectsServiceAccountService +} + +func NewProjectsServiceAccountService(s *Service) *ProjectsServiceAccountService { + rs := 
&ProjectsServiceAccountService{s: s} + return rs +} + +type ProjectsServiceAccountService struct { + s *Service +} + +// Bucket: A bucket. +type Bucket struct { + // Acl: Access controls on the bucket. + Acl []*BucketAccessControl `json:"acl,omitempty"` + + // Billing: The bucket's billing configuration. + Billing *BucketBilling `json:"billing,omitempty"` + + // Cors: The bucket's Cross-Origin Resource Sharing (CORS) + // configuration. + Cors []*BucketCors `json:"cors,omitempty"` + + // DefaultEventBasedHold: Defines the default value for Event-Based hold + // on newly created objects in this bucket. Event-Based hold is a way to + // retain objects indefinitely until an event occurs, signified by the + // hold's release. After being released, such objects will be subject to + // bucket-level retention (if any). One sample use case of this flag is + // for banks to hold loan documents for at least 3 years after loan is + // paid in full. Here bucket-level retention is 3 years and the event is + // loan being paid in full. In this example these objects will be held + // intact for any number of years until the event has occurred (hold is + // released) and then 3 more years after that. Objects under Event-Based + // hold cannot be deleted, overwritten or archived until the hold is + // removed. + DefaultEventBasedHold bool `json:"defaultEventBasedHold,omitempty"` + + // DefaultObjectAcl: Default access controls to apply to new objects + // when no ACL is provided. + DefaultObjectAcl []*ObjectAccessControl `json:"defaultObjectAcl,omitempty"` + + // Encryption: Encryption configuration used by default for newly + // inserted objects, when no encryption config is specified. + Encryption *BucketEncryption `json:"encryption,omitempty"` + + // Etag: HTTP 1.1 Entity tag for the bucket. + Etag string `json:"etag,omitempty"` + + // Id: The ID of the bucket. For buckets, the id and name properties are + // the same. + Id string `json:"id,omitempty"` + + // Kind: The kind of item this is. For buckets, this is always + // storage#bucket. + Kind string `json:"kind,omitempty"` + + // Labels: User-provided labels, in key/value pairs. + Labels map[string]string `json:"labels,omitempty"` + + // Lifecycle: The bucket's lifecycle configuration. See lifecycle + // management for more information. + Lifecycle *BucketLifecycle `json:"lifecycle,omitempty"` + + // Location: The location of the bucket. Object data for objects in the + // bucket resides in physical storage within this region. Defaults to + // US. See the developer's guide for the authoritative list. + Location string `json:"location,omitempty"` + + // Logging: The bucket's logging configuration, which defines the + // destination bucket and optional name prefix for the current bucket's + // logs. + Logging *BucketLogging `json:"logging,omitempty"` + + // Metageneration: The metadata generation of this bucket. + Metageneration int64 `json:"metageneration,omitempty,string"` + + // Name: The name of the bucket. + Name string `json:"name,omitempty"` + + // Owner: The owner of the bucket. This is always the project team's + // owner group. + Owner *BucketOwner `json:"owner,omitempty"` + + // ProjectNumber: The project number of the project the bucket belongs + // to. + ProjectNumber uint64 `json:"projectNumber,omitempty,string"` + + // RetentionPolicy: Defines the retention policy for a bucket. The + // Retention policy enforces a minimum retention time for all objects + // contained in the bucket, based on their creation time. 
Any attempt to + // overwrite or delete objects younger than the retention period will + // result in a PERMISSION_DENIED error. An unlocked retention policy can + // be modified or removed from the bucket via the UpdateBucketMetadata + // RPC. A locked retention policy cannot be removed or shortened in + // duration for the lifetime of the bucket. Attempting to remove or + // decrease period of a locked retention policy will result in a + // PERMISSION_DENIED error. + RetentionPolicy *BucketRetentionPolicy `json:"retentionPolicy,omitempty"` + + // SelfLink: The URI of this bucket. + SelfLink string `json:"selfLink,omitempty"` + + // StorageClass: The bucket's default storage class, used whenever no + // storageClass is specified for a newly-created object. This defines + // how objects in the bucket are stored and determines the SLA and the + // cost of storage. Values include MULTI_REGIONAL, REGIONAL, STANDARD, + // NEARLINE, COLDLINE, and DURABLE_REDUCED_AVAILABILITY. If this value + // is not specified when the bucket is created, it will default to + // STANDARD. For more information, see storage classes. + StorageClass string `json:"storageClass,omitempty"` + + // TimeCreated: The creation time of the bucket in RFC 3339 format. + TimeCreated string `json:"timeCreated,omitempty"` + + // Updated: The modification time of the bucket in RFC 3339 format. + Updated string `json:"updated,omitempty"` + + // Versioning: The bucket's versioning configuration. + Versioning *BucketVersioning `json:"versioning,omitempty"` + + // Website: The bucket's website configuration, controlling how the + // service behaves when accessing bucket contents as a web site. See the + // Static Website Examples for more information. + Website *BucketWebsite `json:"website,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Acl") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Acl") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Bucket) MarshalJSON() ([]byte, error) { + type NoMethod Bucket + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketBilling: The bucket's billing configuration. +type BucketBilling struct { + // RequesterPays: When set to true, Requester Pays is enabled for this + // bucket. + RequesterPays bool `json:"requesterPays,omitempty"` + + // ForceSendFields is a list of field names (e.g. "RequesterPays") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. 
However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "RequesterPays") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketBilling) MarshalJSON() ([]byte, error) { + type NoMethod BucketBilling + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type BucketCors struct { + // MaxAgeSeconds: The value, in seconds, to return in the + // Access-Control-Max-Age header used in preflight responses. + MaxAgeSeconds int64 `json:"maxAgeSeconds,omitempty"` + + // Method: The list of HTTP methods on which to include CORS response + // headers, (GET, OPTIONS, POST, etc) Note: "*" is permitted in the list + // of methods, and means "any method". + Method []string `json:"method,omitempty"` + + // Origin: The list of Origins eligible to receive CORS response + // headers. Note: "*" is permitted in the list of origins, and means + // "any Origin". + Origin []string `json:"origin,omitempty"` + + // ResponseHeader: The list of HTTP headers other than the simple + // response headers to give permission for the user-agent to share + // across domains. + ResponseHeader []string `json:"responseHeader,omitempty"` + + // ForceSendFields is a list of field names (e.g. "MaxAgeSeconds") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "MaxAgeSeconds") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketCors) MarshalJSON() ([]byte, error) { + type NoMethod BucketCors + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketEncryption: Encryption configuration used by default for newly +// inserted objects, when no encryption config is specified. +type BucketEncryption struct { + // DefaultKmsKeyName: A Cloud KMS key that will be used to encrypt + // objects inserted into this bucket, if no encryption method is + // specified. Limited availability; usable only by enabled projects. + DefaultKmsKeyName string `json:"defaultKmsKeyName,omitempty"` + + // ForceSendFields is a list of field names (e.g. "DefaultKmsKeyName") + // to unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. 
However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "DefaultKmsKeyName") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. + NullFields []string `json:"-"` +} + +func (s *BucketEncryption) MarshalJSON() ([]byte, error) { + type NoMethod BucketEncryption + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketLifecycle: The bucket's lifecycle configuration. See lifecycle +// management for more information. +type BucketLifecycle struct { + // Rule: A lifecycle management rule, which is made of an action to take + // and the condition(s) under which the action will be taken. + Rule []*BucketLifecycleRule `json:"rule,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Rule") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Rule") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketLifecycle) MarshalJSON() ([]byte, error) { + type NoMethod BucketLifecycle + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type BucketLifecycleRule struct { + // Action: The action to take. + Action *BucketLifecycleRuleAction `json:"action,omitempty"` + + // Condition: The condition(s) under which the action will be taken. + Condition *BucketLifecycleRuleCondition `json:"condition,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Action") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Action") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. 
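The ForceSendFields/NullFields pattern repeated in the structs above is easiest to see in a Patch call. A hedged sketch, assuming the storage import and the *storage.Service (`svc`) built in the earlier example; the bucket name is a placeholder. Because false is the zero value of a bool, it would normally be dropped from the request body, so it has to be force-sent to actually disable Requester Pays:

```
// disableRequesterPays turns Requester Pays off for a bucket.
func disableRequesterPays(svc *storage.Service, bucketName string) error {
	patch := &storage.Bucket{
		Billing: &storage.BucketBilling{
			RequesterPays: false,
			// false is the empty value, so it must be listed in
			// ForceSendFields to appear in the PATCH body at all.
			ForceSendFields: []string{"RequesterPays"},
		},
	}
	_, err := svc.Buckets.Patch(bucketName, patch).Do()
	return err
}
```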
+ NullFields []string `json:"-"` +} + +func (s *BucketLifecycleRule) MarshalJSON() ([]byte, error) { + type NoMethod BucketLifecycleRule + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketLifecycleRuleAction: The action to take. +type BucketLifecycleRuleAction struct { + // StorageClass: Target storage class. Required iff the type of the + // action is SetStorageClass. + StorageClass string `json:"storageClass,omitempty"` + + // Type: Type of the action. Currently, only Delete and SetStorageClass + // are supported. + Type string `json:"type,omitempty"` + + // ForceSendFields is a list of field names (e.g. "StorageClass") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "StorageClass") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketLifecycleRuleAction) MarshalJSON() ([]byte, error) { + type NoMethod BucketLifecycleRuleAction + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketLifecycleRuleCondition: The condition(s) under which the action +// will be taken. +type BucketLifecycleRuleCondition struct { + // Age: Age of an object (in days). This condition is satisfied when an + // object reaches the specified age. + Age int64 `json:"age,omitempty"` + + // CreatedBefore: A date in RFC 3339 format with only the date part (for + // instance, "2013-01-15"). This condition is satisfied when an object + // is created before midnight of the specified date in UTC. + CreatedBefore string `json:"createdBefore,omitempty"` + + // IsLive: Relevant only for versioned objects. If the value is true, + // this condition matches live objects; if the value is false, it + // matches archived objects. + IsLive *bool `json:"isLive,omitempty"` + + // MatchesStorageClass: Objects having any of the storage classes + // specified by this condition will be matched. Values include + // MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE, STANDARD, and + // DURABLE_REDUCED_AVAILABILITY. + MatchesStorageClass []string `json:"matchesStorageClass,omitempty"` + + // NumNewerVersions: Relevant only for versioned objects. If the value + // is N, this condition is satisfied when there are at least N versions + // (including the live version) newer than this version of the object. + NumNewerVersions int64 `json:"numNewerVersions,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Age") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. 
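Putting the lifecycle types above together, a bucket lifecycle configuration is just nested rule/action/condition values. A small illustrative sketch, not part of the vendored file; the age thresholds and storage classes are arbitrary placeholders:

```
// exampleLifecycle deletes objects after 30 days and moves objects in the
// listed storage classes to NEARLINE after 7 days.
func exampleLifecycle() *storage.BucketLifecycle {
	return &storage.BucketLifecycle{
		Rule: []*storage.BucketLifecycleRule{
			{
				Action:    &storage.BucketLifecycleRuleAction{Type: "Delete"},
				Condition: &storage.BucketLifecycleRuleCondition{Age: 30},
			},
			{
				Action: &storage.BucketLifecycleRuleAction{
					Type:         "SetStorageClass",
					StorageClass: "NEARLINE",
				},
				Condition: &storage.BucketLifecycleRuleCondition{
					Age:                 7,
					MatchesStorageClass: []string{"MULTI_REGIONAL", "REGIONAL", "STANDARD"},
				},
			},
		},
	}
}
```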
+ ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Age") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketLifecycleRuleCondition) MarshalJSON() ([]byte, error) { + type NoMethod BucketLifecycleRuleCondition + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketLogging: The bucket's logging configuration, which defines the +// destination bucket and optional name prefix for the current bucket's +// logs. +type BucketLogging struct { + // LogBucket: The destination bucket where the current bucket's logs + // should be placed. + LogBucket string `json:"logBucket,omitempty"` + + // LogObjectPrefix: A prefix for log object names. + LogObjectPrefix string `json:"logObjectPrefix,omitempty"` + + // ForceSendFields is a list of field names (e.g. "LogBucket") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "LogBucket") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketLogging) MarshalJSON() ([]byte, error) { + type NoMethod BucketLogging + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketOwner: The owner of the bucket. This is always the project +// team's owner group. +type BucketOwner struct { + // Entity: The entity, in the form project-owner-projectId. + Entity string `json:"entity,omitempty"` + + // EntityId: The ID for the entity. + EntityId string `json:"entityId,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Entity") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Entity") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. 
+ NullFields []string `json:"-"` +} + +func (s *BucketOwner) MarshalJSON() ([]byte, error) { + type NoMethod BucketOwner + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketRetentionPolicy: Defines the retention policy for a bucket. The +// Retention policy enforces a minimum retention time for all objects +// contained in the bucket, based on their creation time. Any attempt to +// overwrite or delete objects younger than the retention period will +// result in a PERMISSION_DENIED error. An unlocked retention policy can +// be modified or removed from the bucket via the UpdateBucketMetadata +// RPC. A locked retention policy cannot be removed or shortened in +// duration for the lifetime of the bucket. Attempting to remove or +// decrease period of a locked retention policy will result in a +// PERMISSION_DENIED error. +type BucketRetentionPolicy struct { + // EffectiveTime: The time from which policy was enforced and effective. + // RFC 3339 format. + EffectiveTime string `json:"effectiveTime,omitempty"` + + // IsLocked: Once locked, an object retention policy cannot be modified. + IsLocked bool `json:"isLocked,omitempty"` + + // RetentionPeriod: Specifies the duration that objects need to be + // retained. Retention duration must be greater than zero and less than + // 100 years. Note that enforcement of retention periods less than a day + // is not guaranteed. Such periods should only be used for testing + // purposes. + RetentionPeriod int64 `json:"retentionPeriod,omitempty,string"` + + // ForceSendFields is a list of field names (e.g. "EffectiveTime") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "EffectiveTime") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketRetentionPolicy) MarshalJSON() ([]byte, error) { + type NoMethod BucketRetentionPolicy + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketVersioning: The bucket's versioning configuration. +type BucketVersioning struct { + // Enabled: While set to true, versioning is fully enabled for this + // bucket. + Enabled bool `json:"enabled,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Enabled") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Enabled") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. 
However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketVersioning) MarshalJSON() ([]byte, error) { + type NoMethod BucketVersioning + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketWebsite: The bucket's website configuration, controlling how +// the service behaves when accessing bucket contents as a web site. See +// the Static Website Examples for more information. +type BucketWebsite struct { + // MainPageSuffix: If the requested object path is missing, the service + // will ensure the path has a trailing '/', append this suffix, and + // attempt to retrieve the resulting object. This allows the creation of + // index.html objects to represent directory pages. + MainPageSuffix string `json:"mainPageSuffix,omitempty"` + + // NotFoundPage: If the requested object path is missing, and any + // mainPageSuffix object is missing, if applicable, the service will + // return the named object from this bucket as the content for a 404 Not + // Found result. + NotFoundPage string `json:"notFoundPage,omitempty"` + + // ForceSendFields is a list of field names (e.g. "MainPageSuffix") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "MainPageSuffix") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. + NullFields []string `json:"-"` +} + +func (s *BucketWebsite) MarshalJSON() ([]byte, error) { + type NoMethod BucketWebsite + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketAccessControl: An access-control entry. +type BucketAccessControl struct { + // Bucket: The name of the bucket. + Bucket string `json:"bucket,omitempty"` + + // Domain: The domain associated with the entity, if any. + Domain string `json:"domain,omitempty"` + + // Email: The email address associated with the entity, if any. + Email string `json:"email,omitempty"` + + // Entity: The entity holding the permission, in one of the following + // forms: + // - user-userId + // - user-email + // - group-groupId + // - group-email + // - domain-domain + // - project-team-projectId + // - allUsers + // - allAuthenticatedUsers Examples: + // - The user liz@example.com would be user-liz@example.com. + // - The group example@googlegroups.com would be + // group-example@googlegroups.com. + // - To refer to all members of the Google Apps for Business domain + // example.com, the entity would be domain-example.com. + Entity string `json:"entity,omitempty"` + + // EntityId: The ID for the entity, if any. + EntityId string `json:"entityId,omitempty"` + + // Etag: HTTP 1.1 Entity tag for the access-control entry. 
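The entity forms listed in the BucketAccessControl comment above map directly onto the access-control calls. A hedged sketch granting a single user read access to a bucket; `svc` is the *storage.Service from the earlier sketch, and the bucket name and email are placeholders:

```
// grantBucketReader adds a READER entry for one user to a bucket's ACL.
func grantBucketReader(svc *storage.Service, bucket, email string) error {
	acl := &storage.BucketAccessControl{
		// Entity uses the user-email form described above.
		Entity: "user-" + email,
		Role:   "READER",
	}
	_, err := svc.BucketAccessControls.Insert(bucket, acl).Do()
	return err
}
```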
+ Etag string `json:"etag,omitempty"` + + // Id: The ID of the access-control entry. + Id string `json:"id,omitempty"` + + // Kind: The kind of item this is. For bucket access control entries, + // this is always storage#bucketAccessControl. + Kind string `json:"kind,omitempty"` + + // ProjectTeam: The project team associated with the entity, if any. + ProjectTeam *BucketAccessControlProjectTeam `json:"projectTeam,omitempty"` + + // Role: The access permission for the entity. + Role string `json:"role,omitempty"` + + // SelfLink: The link to this access-control entry. + SelfLink string `json:"selfLink,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Bucket") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Bucket") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketAccessControl) MarshalJSON() ([]byte, error) { + type NoMethod BucketAccessControl + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketAccessControlProjectTeam: The project team associated with the +// entity, if any. +type BucketAccessControlProjectTeam struct { + // ProjectNumber: The project number. + ProjectNumber string `json:"projectNumber,omitempty"` + + // Team: The team. + Team string `json:"team,omitempty"` + + // ForceSendFields is a list of field names (e.g. "ProjectNumber") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "ProjectNumber") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketAccessControlProjectTeam) MarshalJSON() ([]byte, error) { + type NoMethod BucketAccessControlProjectTeam + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// BucketAccessControls: An access-control list. +type BucketAccessControls struct { + // Items: The list of items. + Items []*BucketAccessControl `json:"items,omitempty"` + + // Kind: The kind of item this is. 
For lists of bucket access control + // entries, this is always storage#bucketAccessControls. + Kind string `json:"kind,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Items") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Items") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *BucketAccessControls) MarshalJSON() ([]byte, error) { + type NoMethod BucketAccessControls + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Buckets: A list of buckets. +type Buckets struct { + // Items: The list of items. + Items []*Bucket `json:"items,omitempty"` + + // Kind: The kind of item this is. For lists of buckets, this is always + // storage#buckets. + Kind string `json:"kind,omitempty"` + + // NextPageToken: The continuation token, used to page through large + // result sets. Provide this value in a subsequent request to return the + // next page of results. + NextPageToken string `json:"nextPageToken,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Items") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Items") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Buckets) MarshalJSON() ([]byte, error) { + type NoMethod Buckets + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Channel: An notification channel used to watch for resource changes. +type Channel struct { + // Address: The address where notifications are delivered for this + // channel. + Address string `json:"address,omitempty"` + + // Expiration: Date and time of notification channel expiration, + // expressed as a Unix timestamp, in milliseconds. Optional. 
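The Buckets list type above carries a NextPageToken, and callers are expected to loop until it comes back empty. An illustrative sketch of that loop, reusing the *storage.Service from the earlier example; the project ID is a placeholder:

```
// listAllBuckets pages through every bucket in a project.
func listAllBuckets(svc *storage.Service, projectID string) ([]*storage.Bucket, error) {
	var all []*storage.Bucket
	token := ""
	for {
		call := svc.Buckets.List(projectID)
		if token != "" {
			call = call.PageToken(token)
		}
		page, err := call.Do()
		if err != nil {
			return nil, err
		}
		all = append(all, page.Items...)
		if page.NextPageToken == "" {
			return all, nil
		}
		token = page.NextPageToken
	}
}
```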
+ Expiration int64 `json:"expiration,omitempty,string"` + + // Id: A UUID or similar unique string that identifies this channel. + Id string `json:"id,omitempty"` + + // Kind: Identifies this as a notification channel used to watch for + // changes to a resource. Value: the fixed string "api#channel". + Kind string `json:"kind,omitempty"` + + // Params: Additional parameters controlling delivery channel behavior. + // Optional. + Params map[string]string `json:"params,omitempty"` + + // Payload: A Boolean value to indicate whether payload is wanted. + // Optional. + Payload bool `json:"payload,omitempty"` + + // ResourceId: An opaque ID that identifies the resource being watched + // on this channel. Stable across different API versions. + ResourceId string `json:"resourceId,omitempty"` + + // ResourceUri: A version-specific identifier for the watched resource. + ResourceUri string `json:"resourceUri,omitempty"` + + // Token: An arbitrary string delivered to the target address with each + // notification delivered over this channel. Optional. + Token string `json:"token,omitempty"` + + // Type: The type of delivery mechanism used for this channel. + Type string `json:"type,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Address") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Address") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Channel) MarshalJSON() ([]byte, error) { + type NoMethod Channel + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ComposeRequest: A Compose request. +type ComposeRequest struct { + // Destination: Properties of the resulting object. + Destination *Object `json:"destination,omitempty"` + + // Kind: The kind of item this is. + Kind string `json:"kind,omitempty"` + + // SourceObjects: The list of source objects that will be concatenated + // into a single object. + SourceObjects []*ComposeRequestSourceObjects `json:"sourceObjects,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Destination") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Destination") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. 
However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ComposeRequest) MarshalJSON() ([]byte, error) { + type NoMethod ComposeRequest + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type ComposeRequestSourceObjects struct { + // Generation: The generation of this object to use as the source. + Generation int64 `json:"generation,omitempty,string"` + + // Name: The source object's name. The source object's bucket is + // implicitly the destination bucket. + Name string `json:"name,omitempty"` + + // ObjectPreconditions: Conditions that must be met for this operation + // to execute. + ObjectPreconditions *ComposeRequestSourceObjectsObjectPreconditions `json:"objectPreconditions,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Generation") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Generation") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ComposeRequestSourceObjects) MarshalJSON() ([]byte, error) { + type NoMethod ComposeRequestSourceObjects + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ComposeRequestSourceObjectsObjectPreconditions: Conditions that must +// be met for this operation to execute. +type ComposeRequestSourceObjectsObjectPreconditions struct { + // IfGenerationMatch: Only perform the composition if the generation of + // the source object that would be used matches this value. If this + // value and a generation are both specified, they must be the same + // value or the call will fail. + IfGenerationMatch int64 `json:"ifGenerationMatch,omitempty,string"` + + // ForceSendFields is a list of field names (e.g. "IfGenerationMatch") + // to unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "IfGenerationMatch") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. 
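The ComposeRequest types above are used with Objects.Compose to concatenate objects server-side. A minimal sketch, not part of the vendored file; bucket and object names are placeholders, and as the Name comment notes, the source objects implicitly live in the destination bucket:

```
// composeObjects concatenates two source objects into dst within one bucket.
func composeObjects(svc *storage.Service, bucket, dst, src1, src2 string) (*storage.Object, error) {
	req := &storage.ComposeRequest{
		Destination: &storage.Object{ContentType: "text/plain"},
		SourceObjects: []*storage.ComposeRequestSourceObjects{
			{Name: src1},
			{Name: src2},
		},
	}
	return svc.Objects.Compose(bucket, dst, req).Do()
}
```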
+ NullFields []string `json:"-"` +} + +func (s *ComposeRequestSourceObjectsObjectPreconditions) MarshalJSON() ([]byte, error) { + type NoMethod ComposeRequestSourceObjectsObjectPreconditions + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Notification: A subscription to receive Google PubSub notifications. +type Notification struct { + // CustomAttributes: An optional list of additional attributes to attach + // to each Cloud PubSub message published for this notification + // subscription. + CustomAttributes map[string]string `json:"custom_attributes,omitempty"` + + // Etag: HTTP 1.1 Entity tag for this subscription notification. + Etag string `json:"etag,omitempty"` + + // EventTypes: If present, only send notifications about listed event + // types. If empty, sent notifications for all event types. + EventTypes []string `json:"event_types,omitempty"` + + // Id: The ID of the notification. + Id string `json:"id,omitempty"` + + // Kind: The kind of item this is. For notifications, this is always + // storage#notification. + Kind string `json:"kind,omitempty"` + + // ObjectNamePrefix: If present, only apply this notification + // configuration to object names that begin with this prefix. + ObjectNamePrefix string `json:"object_name_prefix,omitempty"` + + // PayloadFormat: The desired content of the Payload. + PayloadFormat string `json:"payload_format,omitempty"` + + // SelfLink: The canonical URL of this notification. + SelfLink string `json:"selfLink,omitempty"` + + // Topic: The Cloud PubSub topic to which this subscription publishes. + // Formatted as: + // '//pubsub.googleapis.com/projects/{project-identifier}/topics/{my-topi + // c}' + Topic string `json:"topic,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "CustomAttributes") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "CustomAttributes") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. + NullFields []string `json:"-"` +} + +func (s *Notification) MarshalJSON() ([]byte, error) { + type NoMethod Notification + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Notifications: A list of notification subscriptions. +type Notifications struct { + // Items: The list of items. + Items []*Notification `json:"items,omitempty"` + + // Kind: The kind of item this is. For lists of notifications, this is + // always storage#notifications. + Kind string `json:"kind,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. 
"Items") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Items") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Notifications) MarshalJSON() ([]byte, error) { + type NoMethod Notifications + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Object: An object. +type Object struct { + // Acl: Access controls on the object. + Acl []*ObjectAccessControl `json:"acl,omitempty"` + + // Bucket: The name of the bucket containing this object. + Bucket string `json:"bucket,omitempty"` + + // CacheControl: Cache-Control directive for the object data. If + // omitted, and the object is accessible to all anonymous users, the + // default will be public, max-age=3600. + CacheControl string `json:"cacheControl,omitempty"` + + // ComponentCount: Number of underlying components that make up this + // object. Components are accumulated by compose operations. + ComponentCount int64 `json:"componentCount,omitempty"` + + // ContentDisposition: Content-Disposition of the object data. + ContentDisposition string `json:"contentDisposition,omitempty"` + + // ContentEncoding: Content-Encoding of the object data. + ContentEncoding string `json:"contentEncoding,omitempty"` + + // ContentLanguage: Content-Language of the object data. + ContentLanguage string `json:"contentLanguage,omitempty"` + + // ContentType: Content-Type of the object data. If an object is stored + // without a Content-Type, it is served as application/octet-stream. + ContentType string `json:"contentType,omitempty"` + + // Crc32c: CRC32c checksum, as described in RFC 4960, Appendix B; + // encoded using base64 in big-endian byte order. For more information + // about using the CRC32c checksum, see Hashes and ETags: Best + // Practices. + Crc32c string `json:"crc32c,omitempty"` + + // CustomerEncryption: Metadata of customer-supplied encryption key, if + // the object is encrypted by such a key. + CustomerEncryption *ObjectCustomerEncryption `json:"customerEncryption,omitempty"` + + // Etag: HTTP 1.1 Entity tag for the object. + Etag string `json:"etag,omitempty"` + + // EventBasedHold: Defines the Event-Based hold for an object. + // Event-Based hold is a way to retain objects indefinitely until an + // event occurs, signified by the hold's release. After being released, + // such objects will be subject to bucket-level retention (if any). One + // sample use case of this flag is for banks to hold loan documents for + // at least 3 years after loan is paid in full. Here bucket-level + // retention is 3 years and the event is loan being paid in full. In + // this example these objects will be held intact for any number of + // years until the event has occurred (hold is released) and then 3 more + // years after that. 
+ EventBasedHold bool `json:"eventBasedHold,omitempty"` + + // Generation: The content generation of this object. Used for object + // versioning. + Generation int64 `json:"generation,omitempty,string"` + + // Id: The ID of the object, including the bucket name, object name, and + // generation number. + Id string `json:"id,omitempty"` + + // Kind: The kind of item this is. For objects, this is always + // storage#object. + Kind string `json:"kind,omitempty"` + + // KmsKeyName: Cloud KMS Key used to encrypt this object, if the object + // is encrypted by such a key. Limited availability; usable only by + // enabled projects. + KmsKeyName string `json:"kmsKeyName,omitempty"` + + // Md5Hash: MD5 hash of the data; encoded using base64. For more + // information about using the MD5 hash, see Hashes and ETags: Best + // Practices. + Md5Hash string `json:"md5Hash,omitempty"` + + // MediaLink: Media download link. + MediaLink string `json:"mediaLink,omitempty"` + + // Metadata: User-provided metadata, in key/value pairs. + Metadata map[string]string `json:"metadata,omitempty"` + + // Metageneration: The version of the metadata for this object at this + // generation. Used for preconditions and for detecting changes in + // metadata. A metageneration number is only meaningful in the context + // of a particular generation of a particular object. + Metageneration int64 `json:"metageneration,omitempty,string"` + + // Name: The name of the object. Required if not specified by URL + // parameter. + Name string `json:"name,omitempty"` + + // Owner: The owner of the object. This will always be the uploader of + // the object. + Owner *ObjectOwner `json:"owner,omitempty"` + + // RetentionExpirationTime: Specifies the earliest time that the + // object's retention period expires. This value is server-determined + // and is in RFC 3339 format. Note 1: This field is not provided for + // objects with an active Event-Based hold, since retention expiration + // is unknown until the hold is removed. Note 2: This value can be + // provided even when TemporaryHold is set (so that the user can reason + // about policy without having to first unset the TemporaryHold). + RetentionExpirationTime string `json:"retentionExpirationTime,omitempty"` + + // SelfLink: The link to this object. + SelfLink string `json:"selfLink,omitempty"` + + // Size: Content-Length of the data in bytes. + Size uint64 `json:"size,omitempty,string"` + + // StorageClass: Storage class of the object. + StorageClass string `json:"storageClass,omitempty"` + + // TemporaryHold: Defines the temporary hold for an object. This flag is + // used to enforce a temporary hold on an object. While it is set to + // true, the object is protected against deletion and overwrites. A + // common use case of this flag is regulatory investigations where + // objects need to be retained while the investigation is ongoing. + TemporaryHold bool `json:"temporaryHold,omitempty"` + + // TimeCreated: The creation time of the object in RFC 3339 format. + TimeCreated string `json:"timeCreated,omitempty"` + + // TimeDeleted: The deletion time of the object in RFC 3339 format. Will + // be returned if and only if this version of the object has been + // deleted. + TimeDeleted string `json:"timeDeleted,omitempty"` + + // TimeStorageClassUpdated: The time at which the object's storage class + // was last changed. When the object is initially created, it will be + // set to timeCreated. 
+ TimeStorageClassUpdated string `json:"timeStorageClassUpdated,omitempty"` + + // Updated: The modification time of the object metadata in RFC 3339 + // format. + Updated string `json:"updated,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Acl") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Acl") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Object) MarshalJSON() ([]byte, error) { + type NoMethod Object + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ObjectCustomerEncryption: Metadata of customer-supplied encryption +// key, if the object is encrypted by such a key. +type ObjectCustomerEncryption struct { + // EncryptionAlgorithm: The encryption algorithm. + EncryptionAlgorithm string `json:"encryptionAlgorithm,omitempty"` + + // KeySha256: SHA256 hash value of the encryption key. + KeySha256 string `json:"keySha256,omitempty"` + + // ForceSendFields is a list of field names (e.g. "EncryptionAlgorithm") + // to unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "EncryptionAlgorithm") to + // include in API requests with the JSON null value. By default, fields + // with empty values are omitted from API requests. However, any field + // with an empty value appearing in NullFields will be sent to the + // server as null. It is an error if a field in this list has a + // non-empty value. This may be used to include null fields in Patch + // requests. + NullFields []string `json:"-"` +} + +func (s *ObjectCustomerEncryption) MarshalJSON() ([]byte, error) { + type NoMethod ObjectCustomerEncryption + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ObjectOwner: The owner of the object. This will always be the +// uploader of the object. +type ObjectOwner struct { + // Entity: The entity, in the form user-userId. + Entity string `json:"entity,omitempty"` + + // EntityId: The ID for the entity. + EntityId string `json:"entityId,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Entity") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. 
However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Entity") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ObjectOwner) MarshalJSON() ([]byte, error) { + type NoMethod ObjectOwner + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ObjectAccessControl: An access-control entry. +type ObjectAccessControl struct { + // Bucket: The name of the bucket. + Bucket string `json:"bucket,omitempty"` + + // Domain: The domain associated with the entity, if any. + Domain string `json:"domain,omitempty"` + + // Email: The email address associated with the entity, if any. + Email string `json:"email,omitempty"` + + // Entity: The entity holding the permission, in one of the following + // forms: + // - user-userId + // - user-email + // - group-groupId + // - group-email + // - domain-domain + // - project-team-projectId + // - allUsers + // - allAuthenticatedUsers Examples: + // - The user liz@example.com would be user-liz@example.com. + // - The group example@googlegroups.com would be + // group-example@googlegroups.com. + // - To refer to all members of the Google Apps for Business domain + // example.com, the entity would be domain-example.com. + Entity string `json:"entity,omitempty"` + + // EntityId: The ID for the entity, if any. + EntityId string `json:"entityId,omitempty"` + + // Etag: HTTP 1.1 Entity tag for the access-control entry. + Etag string `json:"etag,omitempty"` + + // Generation: The content generation of the object, if applied to an + // object. + Generation int64 `json:"generation,omitempty,string"` + + // Id: The ID of the access-control entry. + Id string `json:"id,omitempty"` + + // Kind: The kind of item this is. For object access control entries, + // this is always storage#objectAccessControl. + Kind string `json:"kind,omitempty"` + + // Object: The name of the object, if applied to an object. + Object string `json:"object,omitempty"` + + // ProjectTeam: The project team associated with the entity, if any. + ProjectTeam *ObjectAccessControlProjectTeam `json:"projectTeam,omitempty"` + + // Role: The access permission for the entity. + Role string `json:"role,omitempty"` + + // SelfLink: The link to this access-control entry. + SelfLink string `json:"selfLink,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Bucket") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. 
"Bucket") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ObjectAccessControl) MarshalJSON() ([]byte, error) { + type NoMethod ObjectAccessControl + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ObjectAccessControlProjectTeam: The project team associated with the +// entity, if any. +type ObjectAccessControlProjectTeam struct { + // ProjectNumber: The project number. + ProjectNumber string `json:"projectNumber,omitempty"` + + // Team: The team. + Team string `json:"team,omitempty"` + + // ForceSendFields is a list of field names (e.g. "ProjectNumber") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "ProjectNumber") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ObjectAccessControlProjectTeam) MarshalJSON() ([]byte, error) { + type NoMethod ObjectAccessControlProjectTeam + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ObjectAccessControls: An access-control list. +type ObjectAccessControls struct { + // Items: The list of items. + Items []*ObjectAccessControl `json:"items,omitempty"` + + // Kind: The kind of item this is. For lists of object access control + // entries, this is always storage#objectAccessControls. + Kind string `json:"kind,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Items") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Items") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. 
+ NullFields []string `json:"-"` +} + +func (s *ObjectAccessControls) MarshalJSON() ([]byte, error) { + type NoMethod ObjectAccessControls + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Objects: A list of objects. +type Objects struct { + // Items: The list of items. + Items []*Object `json:"items,omitempty"` + + // Kind: The kind of item this is. For lists of objects, this is always + // storage#objects. + Kind string `json:"kind,omitempty"` + + // NextPageToken: The continuation token, used to page through large + // result sets. Provide this value in a subsequent request to return the + // next page of results. + NextPageToken string `json:"nextPageToken,omitempty"` + + // Prefixes: The list of prefixes of objects matching-but-not-listed up + // to and including the requested delimiter. + Prefixes []string `json:"prefixes,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Items") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Items") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Objects) MarshalJSON() ([]byte, error) { + type NoMethod Objects + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// Policy: A bucket/object IAM policy. +type Policy struct { + // Bindings: An association between a role, which comes with a set of + // permissions, and members who may assume that role. + Bindings []*PolicyBindings `json:"bindings,omitempty"` + + // Etag: HTTP 1.1 Entity tag for the policy. + Etag string `json:"etag,omitempty"` + + // Kind: The kind of item this is. For policies, this is always + // storage#policy. This field is ignored on input. + Kind string `json:"kind,omitempty"` + + // ResourceId: The ID of the resource to which this policy belongs. Will + // be of the form projects/_/buckets/bucket for buckets, and + // projects/_/buckets/bucket/objects/object for objects. A specific + // generation may be specified by appending #generationNumber to the end + // of the object name, e.g. + // projects/_/buckets/my-bucket/objects/data.txt#17. The current + // generation can be denoted with #0. This field is ignored on input. + ResourceId string `json:"resourceId,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Bindings") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. 
However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Bindings") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *Policy) MarshalJSON() ([]byte, error) { + type NoMethod Policy + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +type PolicyBindings struct { + Condition interface{} `json:"condition,omitempty"` + + // Members: A collection of identifiers for members who may assume the + // provided role. Recognized identifiers are as follows: + // - allUsers — A special identifier that represents anyone on the + // internet; with or without a Google account. + // - allAuthenticatedUsers — A special identifier that represents + // anyone who is authenticated with a Google account or a service + // account. + // - user:emailid — An email address that represents a specific + // account. For example, user:alice@gmail.com or user:joe@example.com. + // + // - serviceAccount:emailid — An email address that represents a + // service account. For example, + // serviceAccount:my-other-app@appspot.gserviceaccount.com . + // - group:emailid — An email address that represents a Google group. + // For example, group:admins@example.com. + // - domain:domain — A Google Apps domain name that represents all the + // users of that domain. For example, domain:google.com or + // domain:example.com. + // - projectOwner:projectid — Owners of the given project. For + // example, projectOwner:my-example-project + // - projectEditor:projectid — Editors of the given project. For + // example, projectEditor:my-example-project + // - projectViewer:projectid — Viewers of the given project. For + // example, projectViewer:my-example-project + Members []string `json:"members,omitempty"` + + // Role: The role to which members belong. Two types of roles are + // supported: new IAM roles, which grant permissions that do not map + // directly to those provided by ACLs, and legacy IAM roles, which do + // map directly to ACL permissions. All roles are of the format + // roles/storage.specificRole. + // The new IAM roles are: + // - roles/storage.admin — Full control of Google Cloud Storage + // resources. + // - roles/storage.objectViewer — Read-Only access to Google Cloud + // Storage objects. + // - roles/storage.objectCreator — Access to create objects in Google + // Cloud Storage. + // - roles/storage.objectAdmin — Full control of Google Cloud Storage + // objects. The legacy IAM roles are: + // - roles/storage.legacyObjectReader — Read-only access to objects + // without listing. Equivalent to an ACL entry on an object with the + // READER role. + // - roles/storage.legacyObjectOwner — Read/write access to existing + // objects without listing. Equivalent to an ACL entry on an object with + // the OWNER role. + // - roles/storage.legacyBucketReader — Read access to buckets with + // object listing. Equivalent to an ACL entry on a bucket with the + // READER role. 
+ // - roles/storage.legacyBucketWriter — Read access to buckets with + // object listing/creation/deletion. Equivalent to an ACL entry on a + // bucket with the WRITER role. + // - roles/storage.legacyBucketOwner — Read and write access to + // existing buckets with object listing/creation/deletion. Equivalent to + // an ACL entry on a bucket with the OWNER role. + Role string `json:"role,omitempty"` + + // ForceSendFields is a list of field names (e.g. "Condition") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Condition") to include in + // API requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *PolicyBindings) MarshalJSON() ([]byte, error) { + type NoMethod PolicyBindings + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// RewriteResponse: A rewrite response. +type RewriteResponse struct { + // Done: true if the copy is finished; otherwise, false if the copy is + // in progress. This property is always present in the response. + Done bool `json:"done,omitempty"` + + // Kind: The kind of item this is. + Kind string `json:"kind,omitempty"` + + // ObjectSize: The total size of the object being copied in bytes. This + // property is always present in the response. + ObjectSize int64 `json:"objectSize,omitempty,string"` + + // Resource: A resource containing the metadata for the copied-to + // object. This property is present in the response only when copying + // completes. + Resource *Object `json:"resource,omitempty"` + + // RewriteToken: A token to use in subsequent requests to continue + // copying data. This token is present in the response only when there + // is more data to copy. + RewriteToken string `json:"rewriteToken,omitempty"` + + // TotalBytesRewritten: The total bytes written so far, which can be + // used to provide a waiting user with a progress indicator. This + // property is always present in the response. + TotalBytesRewritten int64 `json:"totalBytesRewritten,omitempty,string"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Done") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Done") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. 
However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *RewriteResponse) MarshalJSON() ([]byte, error) { + type NoMethod RewriteResponse + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// ServiceAccount: A subscription to receive Google PubSub +// notifications. +type ServiceAccount struct { + // EmailAddress: The ID of the notification. + EmailAddress string `json:"email_address,omitempty"` + + // Kind: The kind of item this is. For notifications, this is always + // storage#notification. + Kind string `json:"kind,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "EmailAddress") to + // unconditionally include in API requests. By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "EmailAddress") to include + // in API requests with the JSON null value. By default, fields with + // empty values are omitted from API requests. However, any field with + // an empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *ServiceAccount) MarshalJSON() ([]byte, error) { + type NoMethod ServiceAccount + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// TestIamPermissionsResponse: A +// storage.(buckets|objects).testIamPermissions response. +type TestIamPermissionsResponse struct { + // Kind: The kind of item this is. + Kind string `json:"kind,omitempty"` + + // Permissions: The permissions held by the caller. Permissions are + // always of the format storage.resource.capability, where resource is + // one of buckets or objects. The supported permissions are as follows: + // + // - storage.buckets.delete — Delete bucket. + // - storage.buckets.get — Read bucket metadata. + // - storage.buckets.getIamPolicy — Read bucket IAM policy. + // - storage.buckets.create — Create bucket. + // - storage.buckets.list — List buckets. + // - storage.buckets.setIamPolicy — Update bucket IAM policy. + // - storage.buckets.update — Update bucket metadata. + // - storage.objects.delete — Delete object. + // - storage.objects.get — Read object data and metadata. + // - storage.objects.getIamPolicy — Read object IAM policy. + // - storage.objects.create — Create object. + // - storage.objects.list — List objects. + // - storage.objects.setIamPolicy — Update object IAM policy. + // - storage.objects.update — Update object metadata. + Permissions []string `json:"permissions,omitempty"` + + // ServerResponse contains the HTTP response code and headers from the + // server. + googleapi.ServerResponse `json:"-"` + + // ForceSendFields is a list of field names (e.g. "Kind") to + // unconditionally include in API requests. 
By default, fields with + // empty values are omitted from API requests. However, any non-pointer, + // non-interface field appearing in ForceSendFields will be sent to the + // server regardless of whether the field is empty or not. This may be + // used to include empty fields in Patch requests. + ForceSendFields []string `json:"-"` + + // NullFields is a list of field names (e.g. "Kind") to include in API + // requests with the JSON null value. By default, fields with empty + // values are omitted from API requests. However, any field with an + // empty value appearing in NullFields will be sent to the server as + // null. It is an error if a field in this list has a non-empty value. + // This may be used to include null fields in Patch requests. + NullFields []string `json:"-"` +} + +func (s *TestIamPermissionsResponse) MarshalJSON() ([]byte, error) { + type NoMethod TestIamPermissionsResponse + raw := NoMethod(*s) + return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields) +} + +// method id "storage.bucketAccessControls.delete": + +type BucketAccessControlsDeleteCall struct { + s *Service + bucket string + entity string + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Delete: Permanently deletes the ACL entry for the specified entity on +// the specified bucket. +func (r *BucketAccessControlsService) Delete(bucket string, entity string) *BucketAccessControlsDeleteCall { + c := &BucketAccessControlsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketAccessControlsDeleteCall) UserProject(userProject string) *BucketAccessControlsDeleteCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketAccessControlsDeleteCall) Fields(s ...googleapi.Field) *BucketAccessControlsDeleteCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketAccessControlsDeleteCall) Context(ctx context.Context) *BucketAccessControlsDeleteCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketAccessControlsDeleteCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("DELETE", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.bucketAccessControls.delete" call. 
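+//
+// A minimal usage sketch, not part of the generated API surface; the bucket
+// name, entity, and ctx below are illustrative values only.
+//
+//	// r is the *BucketAccessControlsService of an authenticated client.
+//	err := r.Delete("my-bucket", "user-liz@example.com").
+//		Context(ctx).
+//		Do()
+//	if err != nil {
+//		// handle error
+//	}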
+func (c *BucketAccessControlsDeleteCall) Do(opts ...googleapi.CallOption) error { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if err != nil { + return err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return err + } + return nil + // { + // "description": "Permanently deletes the ACL entry for the specified entity on the specified bucket.", + // "httpMethod": "DELETE", + // "id": "storage.bucketAccessControls.delete", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/acl/{entity}", + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.bucketAccessControls.get": + +type BucketAccessControlsGetCall struct { + s *Service + bucket string + entity string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Returns the ACL entry for the specified entity on the specified +// bucket. +func (r *BucketAccessControlsService) Get(bucket string, entity string) *BucketAccessControlsGetCall { + c := &BucketAccessControlsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketAccessControlsGetCall) UserProject(userProject string) *BucketAccessControlsGetCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketAccessControlsGetCall) Fields(s ...googleapi.Field) *BucketAccessControlsGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *BucketAccessControlsGetCall) IfNoneMatch(entityTag string) *BucketAccessControlsGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketAccessControlsGetCall) Context(ctx context.Context) *BucketAccessControlsGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *BucketAccessControlsGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketAccessControlsGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.bucketAccessControls.get" call. +// Exactly one of *BucketAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *BucketAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *BucketAccessControlsGetCall) Do(opts ...googleapi.CallOption) (*BucketAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &BucketAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Returns the ACL entry for the specified entity on the specified bucket.", + // "httpMethod": "GET", + // "id": "storage.bucketAccessControls.get", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/acl/{entity}", + // "response": { + // "$ref": "BucketAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.bucketAccessControls.insert": + +type BucketAccessControlsInsertCall struct { + s *Service + bucket string + bucketaccesscontrol *BucketAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Insert: Creates a new ACL entry on the specified bucket. 
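+//
+// A minimal usage sketch (illustrative only): the bucket name is hypothetical,
+// and the Entity/Role fields are assumed to exist on BucketAccessControl,
+// mirroring the ObjectAccessControl type defined above.
+//
+//	acl := &BucketAccessControl{
+//		Entity: "user-liz@example.com",
+//		Role:   "READER",
+//	}
+//	created, err := r.Insert("my-bucket", acl).Context(ctx).Do()
+//	if err != nil {
+//		// handle error
+//	}
+//	_ = created // *BucketAccessControl echoed back by the server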
+func (r *BucketAccessControlsService) Insert(bucket string, bucketaccesscontrol *BucketAccessControl) *BucketAccessControlsInsertCall { + c := &BucketAccessControlsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.bucketaccesscontrol = bucketaccesscontrol + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketAccessControlsInsertCall) UserProject(userProject string) *BucketAccessControlsInsertCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketAccessControlsInsertCall) Fields(s ...googleapi.Field) *BucketAccessControlsInsertCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketAccessControlsInsertCall) Context(ctx context.Context) *BucketAccessControlsInsertCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketAccessControlsInsertCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.bucketaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/acl") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.bucketAccessControls.insert" call. +// Exactly one of *BucketAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *BucketAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *BucketAccessControlsInsertCall) Do(opts ...googleapi.CallOption) (*BucketAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
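+ // The block below is the response-handling pattern shared by the generated
+ // Do methods in this file: a 304 Not Modified response is surfaced as a
+ // *googleapi.Error so googleapi.IsNotModified can recognize it, any other
+ // non-2xx status is converted to an error by googleapi.CheckResponse, and a
+ // successful body is decoded into the typed result along with its
+ // ServerResponse (HTTP status code and headers).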
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &BucketAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Creates a new ACL entry on the specified bucket.", + // "httpMethod": "POST", + // "id": "storage.bucketAccessControls.insert", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/acl", + // "request": { + // "$ref": "BucketAccessControl" + // }, + // "response": { + // "$ref": "BucketAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.bucketAccessControls.list": + +type BucketAccessControlsListCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Retrieves ACL entries on the specified bucket. +func (r *BucketAccessControlsService) List(bucket string) *BucketAccessControlsListCall { + c := &BucketAccessControlsListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketAccessControlsListCall) UserProject(userProject string) *BucketAccessControlsListCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketAccessControlsListCall) Fields(s ...googleapi.Field) *BucketAccessControlsListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *BucketAccessControlsListCall) IfNoneMatch(entityTag string) *BucketAccessControlsListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketAccessControlsListCall) Context(ctx context.Context) *BucketAccessControlsListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *BucketAccessControlsListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketAccessControlsListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/acl") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.bucketAccessControls.list" call. +// Exactly one of *BucketAccessControls or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *BucketAccessControls.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *BucketAccessControlsListCall) Do(opts ...googleapi.CallOption) (*BucketAccessControls, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &BucketAccessControls{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Retrieves ACL entries on the specified bucket.", + // "httpMethod": "GET", + // "id": "storage.bucketAccessControls.list", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/acl", + // "response": { + // "$ref": "BucketAccessControls" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.bucketAccessControls.patch": + +type BucketAccessControlsPatchCall struct { + s *Service + bucket string + entity string + bucketaccesscontrol *BucketAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Patch: Patches an ACL entry on the specified bucket. 
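+//
+// A minimal usage sketch (illustrative only): the Role field is assumed to
+// exist on BucketAccessControl, mirroring ObjectAccessControl above. Patch
+// performs a partial update: fields left empty, and not listed in
+// ForceSendFields or NullFields, are omitted from the request body.
+//
+//	updated, err := r.Patch("my-bucket", "user-liz@example.com",
+//		&BucketAccessControl{Role: "OWNER"}).Context(ctx).Do()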
+func (r *BucketAccessControlsService) Patch(bucket string, entity string, bucketaccesscontrol *BucketAccessControl) *BucketAccessControlsPatchCall { + c := &BucketAccessControlsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + c.bucketaccesscontrol = bucketaccesscontrol + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketAccessControlsPatchCall) UserProject(userProject string) *BucketAccessControlsPatchCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketAccessControlsPatchCall) Fields(s ...googleapi.Field) *BucketAccessControlsPatchCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketAccessControlsPatchCall) Context(ctx context.Context) *BucketAccessControlsPatchCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketAccessControlsPatchCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.bucketaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PATCH", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.bucketAccessControls.patch" call. +// Exactly one of *BucketAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *BucketAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *BucketAccessControlsPatchCall) Do(opts ...googleapi.CallOption) (*BucketAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &BucketAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Patches an ACL entry on the specified bucket.", + // "httpMethod": "PATCH", + // "id": "storage.bucketAccessControls.patch", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/acl/{entity}", + // "request": { + // "$ref": "BucketAccessControl" + // }, + // "response": { + // "$ref": "BucketAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.bucketAccessControls.update": + +type BucketAccessControlsUpdateCall struct { + s *Service + bucket string + entity string + bucketaccesscontrol *BucketAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Update: Updates an ACL entry on the specified bucket. +func (r *BucketAccessControlsService) Update(bucket string, entity string, bucketaccesscontrol *BucketAccessControl) *BucketAccessControlsUpdateCall { + c := &BucketAccessControlsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + c.bucketaccesscontrol = bucketaccesscontrol + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketAccessControlsUpdateCall) UserProject(userProject string) *BucketAccessControlsUpdateCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketAccessControlsUpdateCall) Fields(s ...googleapi.Field) *BucketAccessControlsUpdateCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketAccessControlsUpdateCall) Context(ctx context.Context) *BucketAccessControlsUpdateCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *BucketAccessControlsUpdateCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.bucketaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.bucketAccessControls.update" call. +// Exactly one of *BucketAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *BucketAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *BucketAccessControlsUpdateCall) Do(opts ...googleapi.CallOption) (*BucketAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &BucketAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates an ACL entry on the specified bucket.", + // "httpMethod": "PUT", + // "id": "storage.bucketAccessControls.update", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/acl/{entity}", + // "request": { + // "$ref": "BucketAccessControl" + // }, + // "response": { + // "$ref": "BucketAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.buckets.delete": + +type BucketsDeleteCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Delete: Permanently deletes an empty bucket. +func (r *BucketsService) Delete(bucket string) *BucketsDeleteCall { + c := &BucketsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": If set, only deletes the bucket if its +// metageneration matches this value. +func (c *BucketsDeleteCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *BucketsDeleteCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": If set, only deletes the bucket if its +// metageneration does not match this value. +func (c *BucketsDeleteCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *BucketsDeleteCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsDeleteCall) UserProject(userProject string) *BucketsDeleteCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsDeleteCall) Fields(s ...googleapi.Field) *BucketsDeleteCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsDeleteCall) Context(ctx context.Context) *BucketsDeleteCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsDeleteCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsDeleteCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("DELETE", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.delete" call. +func (c *BucketsDeleteCall) Do(opts ...googleapi.CallOption) error { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if err != nil { + return err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return err + } + return nil + // { + // "description": "Permanently deletes an empty bucket.", + // "httpMethod": "DELETE", + // "id": "storage.buckets.delete", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "If set, only deletes the bucket if its metageneration matches this value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "If set, only deletes the bucket if its metageneration does not match this value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}", + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.buckets.get": + +type BucketsGetCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Returns metadata for the specified bucket. +func (r *BucketsService) Get(bucket string) *BucketsGetCall { + c := &BucketsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the return of the bucket metadata +// conditional on whether the bucket's current metageneration matches +// the given value. +func (c *BucketsGetCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *BucketsGetCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the return of the bucket metadata +// conditional on whether the bucket's current metageneration does not +// match the given value. +func (c *BucketsGetCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *BucketsGetCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit owner, acl and defaultObjectAcl properties. +func (c *BucketsGetCall) Projection(projection string) *BucketsGetCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsGetCall) UserProject(userProject string) *BucketsGetCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. 
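+
+// A minimal usage sketch (illustrative only, not generated code): a conditional
+// metadata fetch that pairs IfNoneMatch (defined just below) with
+// googleapi.IsNotModified, as the Do documentation describes. svc, its Buckets
+// field, ctx, and lastEtag are assumptions made only for illustration.
+//
+//	b, err := svc.Buckets.Get("my-bucket").IfNoneMatch(lastEtag).Context(ctx).Do()
+//	switch {
+//	case googleapi.IsNotModified(err):
+//		// cached metadata is still current; reuse it
+//	case err != nil:
+//		// handle error
+//	default:
+//		_ = b // fresh *Bucket metadata
+//	}
+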
+func (c *BucketsGetCall) Fields(s ...googleapi.Field) *BucketsGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *BucketsGetCall) IfNoneMatch(entityTag string) *BucketsGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsGetCall) Context(ctx context.Context) *BucketsGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.get" call. +// Exactly one of *Bucket or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Bucket.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsGetCall) Do(opts ...googleapi.CallOption) (*Bucket, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Bucket{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Returns metadata for the specified bucket.", + // "httpMethod": "GET", + // "id": "storage.buckets.get", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit owner, acl and defaultObjectAcl properties." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}", + // "response": { + // "$ref": "Bucket" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.buckets.getIamPolicy": + +type BucketsGetIamPolicyCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// GetIamPolicy: Returns an IAM policy for the specified bucket. +func (r *BucketsService) GetIamPolicy(bucket string) *BucketsGetIamPolicyCall { + c := &BucketsGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsGetIamPolicyCall) UserProject(userProject string) *BucketsGetIamPolicyCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. 
+func (c *BucketsGetIamPolicyCall) Fields(s ...googleapi.Field) *BucketsGetIamPolicyCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *BucketsGetIamPolicyCall) IfNoneMatch(entityTag string) *BucketsGetIamPolicyCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsGetIamPolicyCall) Context(ctx context.Context) *BucketsGetIamPolicyCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsGetIamPolicyCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/iam") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.getIamPolicy" call. +// Exactly one of *Policy or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Policy.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Policy{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Returns an IAM policy for the specified bucket.", + // "httpMethod": "GET", + // "id": "storage.buckets.getIamPolicy", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/iam", + // "response": { + // "$ref": "Policy" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.buckets.insert": + +type BucketsInsertCall struct { + s *Service + bucket *Bucket + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Insert: Creates a new bucket. +func (r *BucketsService) Insert(projectid string, bucket *Bucket) *BucketsInsertCall { + c := &BucketsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.urlParams_.Set("project", projectid) + c.bucket = bucket + return c +} + +// PredefinedAcl sets the optional parameter "predefinedAcl": Apply a +// predefined set of access controls to this bucket. +// +// Possible values: +// "authenticatedRead" - Project team owners get OWNER access, and +// allAuthenticatedUsers get READER access. +// "private" - Project team owners get OWNER access. +// "projectPrivate" - Project team members get access according to +// their roles. +// "publicRead" - Project team owners get OWNER access, and allUsers +// get READER access. +// "publicReadWrite" - Project team owners get OWNER access, and +// allUsers get WRITER access. +func (c *BucketsInsertCall) PredefinedAcl(predefinedAcl string) *BucketsInsertCall { + c.urlParams_.Set("predefinedAcl", predefinedAcl) + return c +} + +// PredefinedDefaultObjectAcl sets the optional parameter +// "predefinedDefaultObjectAcl": Apply a predefined set of default +// object access controls to this bucket. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *BucketsInsertCall) PredefinedDefaultObjectAcl(predefinedDefaultObjectAcl string) *BucketsInsertCall { + c.urlParams_.Set("predefinedDefaultObjectAcl", predefinedDefaultObjectAcl) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl, unless the bucket resource +// specifies acl or defaultObjectAcl properties, when it defaults to +// full. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit owner, acl and defaultObjectAcl properties. +func (c *BucketsInsertCall) Projection(projection string) *BucketsInsertCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. +func (c *BucketsInsertCall) UserProject(userProject string) *BucketsInsertCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. 
See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsInsertCall) Fields(s ...googleapi.Field) *BucketsInsertCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsInsertCall) Context(ctx context.Context) *BucketsInsertCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsInsertCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsInsertCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.bucket) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.insert" call. +// Exactly one of *Bucket or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Bucket.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsInsertCall) Do(opts ...googleapi.CallOption) (*Bucket, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Bucket{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Creates a new bucket.", + // "httpMethod": "POST", + // "id": "storage.buckets.insert", + // "parameterOrder": [ + // "project" + // ], + // "parameters": { + // "predefinedAcl": { + // "description": "Apply a predefined set of access controls to this bucket.", + // "enum": [ + // "authenticatedRead", + // "private", + // "projectPrivate", + // "publicRead", + // "publicReadWrite" + // ], + // "enumDescriptions": [ + // "Project team owners get OWNER access, and allAuthenticatedUsers get READER access.", + // "Project team owners get OWNER access.", + // "Project team members get access according to their roles.", + // "Project team owners get OWNER access, and allUsers get READER access.", + // "Project team owners get OWNER access, and allUsers get WRITER access." 
+ // ], + // "location": "query", + // "type": "string" + // }, + // "predefinedDefaultObjectAcl": { + // "description": "Apply a predefined set of default object access controls to this bucket.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "project": { + // "description": "A valid API project identifier.", + // "location": "query", + // "required": true, + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl, unless the bucket resource specifies acl or defaultObjectAcl properties, when it defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit owner, acl and defaultObjectAcl properties." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b", + // "request": { + // "$ref": "Bucket" + // }, + // "response": { + // "$ref": "Bucket" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.buckets.list": + +type BucketsListCall struct { + s *Service + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Retrieves a list of buckets for a given project. +func (r *BucketsService) List(projectid string) *BucketsListCall { + c := &BucketsListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.urlParams_.Set("project", projectid) + return c +} + +// MaxResults sets the optional parameter "maxResults": Maximum number +// of buckets to return in a single response. The service will use this +// parameter or 1,000 items, whichever is smaller. +func (c *BucketsListCall) MaxResults(maxResults int64) *BucketsListCall { + c.urlParams_.Set("maxResults", fmt.Sprint(maxResults)) + return c +} + +// PageToken sets the optional parameter "pageToken": A +// previously-returned page token representing part of the larger set of +// results to view. +func (c *BucketsListCall) PageToken(pageToken string) *BucketsListCall { + c.urlParams_.Set("pageToken", pageToken) + return c +} + +// Prefix sets the optional parameter "prefix": Filter results to +// buckets whose names begin with this prefix. +func (c *BucketsListCall) Prefix(prefix string) *BucketsListCall { + c.urlParams_.Set("prefix", prefix) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit owner, acl and defaultObjectAcl properties. 
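+
+// An illustrative sketch, not part of the generated client: iterating every
+// matching bucket in a project through the Pages helper defined further below.
+// svc, its Buckets field, ctx, and projectID are assumptions made only for
+// illustration; the Items field on Buckets is assumed from the list response
+// schema.
+//
+//	err := svc.Buckets.List(projectID).Prefix("packer-").Pages(ctx, func(page *Buckets) error {
+//		for _, b := range page.Items {
+//			_ = b // inspect each *Bucket here
+//		}
+//		return nil
+//	})
+//	if err != nil {
+//		// handle error
+//	}
+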
+func (c *BucketsListCall) Projection(projection string) *BucketsListCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. +func (c *BucketsListCall) UserProject(userProject string) *BucketsListCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsListCall) Fields(s ...googleapi.Field) *BucketsListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *BucketsListCall) IfNoneMatch(entityTag string) *BucketsListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsListCall) Context(ctx context.Context) *BucketsListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.list" call. +// Exactly one of *Buckets or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Buckets.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsListCall) Do(opts ...googleapi.CallOption) (*Buckets, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Buckets{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Retrieves a list of buckets for a given project.", + // "httpMethod": "GET", + // "id": "storage.buckets.list", + // "parameterOrder": [ + // "project" + // ], + // "parameters": { + // "maxResults": { + // "default": "1000", + // "description": "Maximum number of buckets to return in a single response. The service will use this parameter or 1,000 items, whichever is smaller.", + // "format": "uint32", + // "location": "query", + // "minimum": "0", + // "type": "integer" + // }, + // "pageToken": { + // "description": "A previously-returned page token representing part of the larger set of results to view.", + // "location": "query", + // "type": "string" + // }, + // "prefix": { + // "description": "Filter results to buckets whose names begin with this prefix.", + // "location": "query", + // "type": "string" + // }, + // "project": { + // "description": "A valid API project identifier.", + // "location": "query", + // "required": true, + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit owner, acl and defaultObjectAcl properties." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b", + // "response": { + // "$ref": "Buckets" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// Pages invokes f for each page of results. +// A non-nil error returned from f will halt the iteration. +// The provided context supersedes any context provided to the Context method. +func (c *BucketsListCall) Pages(ctx context.Context, f func(*Buckets) error) error { + c.ctx_ = ctx + defer c.PageToken(c.urlParams_.Get("pageToken")) // reset paging to original point + for { + x, err := c.Do() + if err != nil { + return err + } + if err := f(x); err != nil { + return err + } + if x.NextPageToken == "" { + return nil + } + c.PageToken(x.NextPageToken) + } +} + +// method id "storage.buckets.lockRetentionPolicy": + +type BucketsLockRetentionPolicyCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// LockRetentionPolicy: Locks retention policy on a bucket. 
+func (r *BucketsService) LockRetentionPolicy(bucket string, ifMetagenerationMatch int64) *BucketsLockRetentionPolicyCall { + c := &BucketsLockRetentionPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsLockRetentionPolicyCall) UserProject(userProject string) *BucketsLockRetentionPolicyCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsLockRetentionPolicyCall) Fields(s ...googleapi.Field) *BucketsLockRetentionPolicyCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsLockRetentionPolicyCall) Context(ctx context.Context) *BucketsLockRetentionPolicyCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsLockRetentionPolicyCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsLockRetentionPolicyCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/lockRetentionPolicy") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.lockRetentionPolicy" call. +// Exactly one of *Bucket or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Bucket.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsLockRetentionPolicyCall) Do(opts ...googleapi.CallOption) (*Bucket, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Bucket{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Locks retention policy on a bucket.", + // "httpMethod": "POST", + // "id": "storage.buckets.lockRetentionPolicy", + // "parameterOrder": [ + // "bucket", + // "ifMetagenerationMatch" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether bucket's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/lockRetentionPolicy", + // "response": { + // "$ref": "Bucket" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.buckets.patch": + +type BucketsPatchCall struct { + s *Service + bucket string + bucket2 *Bucket + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Patch: Updates a bucket. Changes to the bucket will be readable +// immediately after writing, but configuration changes may take time to +// propagate. This method supports patch semantics. +func (r *BucketsService) Patch(bucket string, bucket2 *Bucket) *BucketsPatchCall { + c := &BucketsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.bucket2 = bucket2 + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the return of the bucket metadata +// conditional on whether the bucket's current metageneration matches +// the given value. +func (c *BucketsPatchCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *BucketsPatchCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the return of the bucket metadata +// conditional on whether the bucket's current metageneration does not +// match the given value. +func (c *BucketsPatchCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *BucketsPatchCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// PredefinedAcl sets the optional parameter "predefinedAcl": Apply a +// predefined set of access controls to this bucket. +// +// Possible values: +// "authenticatedRead" - Project team owners get OWNER access, and +// allAuthenticatedUsers get READER access. +// "private" - Project team owners get OWNER access. 
+// "projectPrivate" - Project team members get access according to +// their roles. +// "publicRead" - Project team owners get OWNER access, and allUsers +// get READER access. +// "publicReadWrite" - Project team owners get OWNER access, and +// allUsers get WRITER access. +func (c *BucketsPatchCall) PredefinedAcl(predefinedAcl string) *BucketsPatchCall { + c.urlParams_.Set("predefinedAcl", predefinedAcl) + return c +} + +// PredefinedDefaultObjectAcl sets the optional parameter +// "predefinedDefaultObjectAcl": Apply a predefined set of default +// object access controls to this bucket. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *BucketsPatchCall) PredefinedDefaultObjectAcl(predefinedDefaultObjectAcl string) *BucketsPatchCall { + c.urlParams_.Set("predefinedDefaultObjectAcl", predefinedDefaultObjectAcl) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to full. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit owner, acl and defaultObjectAcl properties. +func (c *BucketsPatchCall) Projection(projection string) *BucketsPatchCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsPatchCall) UserProject(userProject string) *BucketsPatchCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsPatchCall) Fields(s ...googleapi.Field) *BucketsPatchCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsPatchCall) Context(ctx context.Context) *BucketsPatchCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsPatchCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsPatchCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.bucket2) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}") + urls += "?" 
+ c.urlParams_.Encode() + req, _ := http.NewRequest("PATCH", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.patch" call. +// Exactly one of *Bucket or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Bucket.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsPatchCall) Do(opts ...googleapi.CallOption) (*Bucket, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Bucket{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates a bucket. Changes to the bucket will be readable immediately after writing, but configuration changes may take time to propagate. This method supports patch semantics.", + // "httpMethod": "PATCH", + // "id": "storage.buckets.patch", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "predefinedAcl": { + // "description": "Apply a predefined set of access controls to this bucket.", + // "enum": [ + // "authenticatedRead", + // "private", + // "projectPrivate", + // "publicRead", + // "publicReadWrite" + // ], + // "enumDescriptions": [ + // "Project team owners get OWNER access, and allAuthenticatedUsers get READER access.", + // "Project team owners get OWNER access.", + // "Project team members get access according to their roles.", + // "Project team owners get OWNER access, and allUsers get READER access.", + // "Project team owners get OWNER access, and allUsers get WRITER access." 
+ // ], + // "location": "query", + // "type": "string" + // }, + // "predefinedDefaultObjectAcl": { + // "description": "Apply a predefined set of default object access controls to this bucket.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit owner, acl and defaultObjectAcl properties." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}", + // "request": { + // "$ref": "Bucket" + // }, + // "response": { + // "$ref": "Bucket" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.buckets.setIamPolicy": + +type BucketsSetIamPolicyCall struct { + s *Service + bucket string + policy *Policy + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// SetIamPolicy: Updates an IAM policy for the specified bucket. +func (r *BucketsService) SetIamPolicy(bucket string, policy *Policy) *BucketsSetIamPolicyCall { + c := &BucketsSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.policy = policy + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsSetIamPolicyCall) UserProject(userProject string) *BucketsSetIamPolicyCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsSetIamPolicyCall) Fields(s ...googleapi.Field) *BucketsSetIamPolicyCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsSetIamPolicyCall) Context(ctx context.Context) *BucketsSetIamPolicyCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *BucketsSetIamPolicyCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.policy) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/iam") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.setIamPolicy" call. +// Exactly one of *Policy or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Policy.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Policy{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates an IAM policy for the specified bucket.", + // "httpMethod": "PUT", + // "id": "storage.buckets.setIamPolicy", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/iam", + // "request": { + // "$ref": "Policy" + // }, + // "response": { + // "$ref": "Policy" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.buckets.testIamPermissions": + +type BucketsTestIamPermissionsCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// TestIamPermissions: Tests a set of permissions on the given bucket to +// see which, if any, are held by the caller. 
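+
+// An illustrative sketch, not part of the generated client: checking which of
+// a set of permissions the caller holds on a bucket via the TestIamPermissions
+// call defined below. svc, its Buckets field, ctx, and the permission names
+// are assumptions made only for illustration.
+//
+//	resp, err := svc.Buckets.TestIamPermissions("my-bucket", []string{
+//		"storage.buckets.get",
+//		"storage.objects.create",
+//	}).Context(ctx).Do()
+//	if err != nil {
+//		// handle error
+//	}
+//	_ = resp // resp reports the subset of permissions that are held
+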
+func (r *BucketsService) TestIamPermissions(bucket string, permissions []string) *BucketsTestIamPermissionsCall { + c := &BucketsTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.urlParams_.SetMulti("permissions", append([]string{}, permissions...)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsTestIamPermissionsCall) UserProject(userProject string) *BucketsTestIamPermissionsCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsTestIamPermissionsCall) Fields(s ...googleapi.Field) *BucketsTestIamPermissionsCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *BucketsTestIamPermissionsCall) IfNoneMatch(entityTag string) *BucketsTestIamPermissionsCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsTestIamPermissionsCall) Context(ctx context.Context) *BucketsTestIamPermissionsCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *BucketsTestIamPermissionsCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/iam/testPermissions") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.testIamPermissions" call. +// Exactly one of *TestIamPermissionsResponse or error will be non-nil. +// Any non-2xx status code is an error. Response headers are in either +// *TestIamPermissionsResponse.ServerResponse.Header or (if a response +// was returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *BucketsTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*TestIamPermissionsResponse, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &TestIamPermissionsResponse{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Tests a set of permissions on the given bucket to see which, if any, are held by the caller.", + // "httpMethod": "GET", + // "id": "storage.buckets.testIamPermissions", + // "parameterOrder": [ + // "bucket", + // "permissions" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "permissions": { + // "description": "Permissions to test.", + // "location": "query", + // "repeated": true, + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/iam/testPermissions", + // "response": { + // "$ref": "TestIamPermissionsResponse" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.buckets.update": + +type BucketsUpdateCall struct { + s *Service + bucket string + bucket2 *Bucket + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Update: Updates a bucket. Changes to the bucket will be readable +// immediately after writing, but configuration changes may take time to +// propagate. +func (r *BucketsService) Update(bucket string, bucket2 *Bucket) *BucketsUpdateCall { + c := &BucketsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.bucket2 = bucket2 + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the return of the bucket metadata +// conditional on whether the bucket's current metageneration matches +// the given value. +func (c *BucketsUpdateCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *BucketsUpdateCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the return of the bucket metadata +// conditional on whether the bucket's current metageneration does not +// match the given value. +func (c *BucketsUpdateCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *BucketsUpdateCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// PredefinedAcl sets the optional parameter "predefinedAcl": Apply a +// predefined set of access controls to this bucket. 
+// +// Possible values: +// "authenticatedRead" - Project team owners get OWNER access, and +// allAuthenticatedUsers get READER access. +// "private" - Project team owners get OWNER access. +// "projectPrivate" - Project team members get access according to +// their roles. +// "publicRead" - Project team owners get OWNER access, and allUsers +// get READER access. +// "publicReadWrite" - Project team owners get OWNER access, and +// allUsers get WRITER access. +func (c *BucketsUpdateCall) PredefinedAcl(predefinedAcl string) *BucketsUpdateCall { + c.urlParams_.Set("predefinedAcl", predefinedAcl) + return c +} + +// PredefinedDefaultObjectAcl sets the optional parameter +// "predefinedDefaultObjectAcl": Apply a predefined set of default +// object access controls to this bucket. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *BucketsUpdateCall) PredefinedDefaultObjectAcl(predefinedDefaultObjectAcl string) *BucketsUpdateCall { + c.urlParams_.Set("predefinedDefaultObjectAcl", predefinedDefaultObjectAcl) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to full. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit owner, acl and defaultObjectAcl properties. +func (c *BucketsUpdateCall) Projection(projection string) *BucketsUpdateCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *BucketsUpdateCall) UserProject(userProject string) *BucketsUpdateCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *BucketsUpdateCall) Fields(s ...googleapi.Field) *BucketsUpdateCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *BucketsUpdateCall) Context(ctx context.Context) *BucketsUpdateCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
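+//
+// Illustrative sketch only (not produced by the generator): assuming an
+// authenticated *Service value named svc whose Buckets field exposes this
+// BucketsService, and treating ctx and updatedBucket as placeholders, a
+// caller could attach an extra request header before executing the update:
+//
+//	call := svc.Buckets.Update("my-bucket", updatedBucket)
+//	call.Header().Set("X-Example-Header", "some-value")
+//	bkt, err := call.Context(ctx).Do()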
+func (c *BucketsUpdateCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *BucketsUpdateCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.bucket2) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.buckets.update" call. +// Exactly one of *Bucket or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Bucket.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *BucketsUpdateCall) Do(opts ...googleapi.CallOption) (*Bucket, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Bucket{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates a bucket. 
Changes to the bucket will be readable immediately after writing, but configuration changes may take time to propagate.", + // "httpMethod": "PUT", + // "id": "storage.buckets.update", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the return of the bucket metadata conditional on whether the bucket's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "predefinedAcl": { + // "description": "Apply a predefined set of access controls to this bucket.", + // "enum": [ + // "authenticatedRead", + // "private", + // "projectPrivate", + // "publicRead", + // "publicReadWrite" + // ], + // "enumDescriptions": [ + // "Project team owners get OWNER access, and allAuthenticatedUsers get READER access.", + // "Project team owners get OWNER access.", + // "Project team members get access according to their roles.", + // "Project team owners get OWNER access, and allUsers get READER access.", + // "Project team owners get OWNER access, and allUsers get WRITER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "predefinedDefaultObjectAcl": { + // "description": "Apply a predefined set of default object access controls to this bucket.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit owner, acl and defaultObjectAcl properties." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}", + // "request": { + // "$ref": "Bucket" + // }, + // "response": { + // "$ref": "Bucket" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.channels.stop": + +type ChannelsStopCall struct { + s *Service + channel *Channel + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Stop: Stop watching resources through this channel +func (r *ChannelsService) Stop(channel *Channel) *ChannelsStopCall { + c := &ChannelsStopCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.channel = channel + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ChannelsStopCall) Fields(s ...googleapi.Field) *ChannelsStopCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ChannelsStopCall) Context(ctx context.Context) *ChannelsStopCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ChannelsStopCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ChannelsStopCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.channel) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "channels/stop") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.channels.stop" call. +func (c *ChannelsStopCall) Do(opts ...googleapi.CallOption) error { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if err != nil { + return err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return err + } + return nil + // { + // "description": "Stop watching resources through this channel", + // "httpMethod": "POST", + // "id": "storage.channels.stop", + // "path": "channels/stop", + // "request": { + // "$ref": "Channel", + // "parameterName": "resource" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.defaultObjectAccessControls.delete": + +type DefaultObjectAccessControlsDeleteCall struct { + s *Service + bucket string + entity string + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Delete: Permanently deletes the default object ACL entry for the +// specified entity on the specified bucket. +func (r *DefaultObjectAccessControlsService) Delete(bucket string, entity string) *DefaultObjectAccessControlsDeleteCall { + c := &DefaultObjectAccessControlsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *DefaultObjectAccessControlsDeleteCall) UserProject(userProject string) *DefaultObjectAccessControlsDeleteCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DefaultObjectAccessControlsDeleteCall) Fields(s ...googleapi.Field) *DefaultObjectAccessControlsDeleteCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *DefaultObjectAccessControlsDeleteCall) Context(ctx context.Context) *DefaultObjectAccessControlsDeleteCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *DefaultObjectAccessControlsDeleteCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DefaultObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/defaultObjectAcl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("DELETE", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.defaultObjectAccessControls.delete" call. +func (c *DefaultObjectAccessControlsDeleteCall) Do(opts ...googleapi.CallOption) error { + gensupport.SetOptions(c.urlParams_, opts...) 
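+	// Delete calls carry no typed payload: the code below only sends the
+	// request, closes the body via googleapi.CloseBody, and surfaces any
+	// non-2xx status through googleapi.CheckResponse.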
+ res, err := c.doRequest("json") + if err != nil { + return err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return err + } + return nil + // { + // "description": "Permanently deletes the default object ACL entry for the specified entity on the specified bucket.", + // "httpMethod": "DELETE", + // "id": "storage.defaultObjectAccessControls.delete", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/defaultObjectAcl/{entity}", + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.defaultObjectAccessControls.get": + +type DefaultObjectAccessControlsGetCall struct { + s *Service + bucket string + entity string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Returns the default object ACL entry for the specified entity on +// the specified bucket. +func (r *DefaultObjectAccessControlsService) Get(bucket string, entity string) *DefaultObjectAccessControlsGetCall { + c := &DefaultObjectAccessControlsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *DefaultObjectAccessControlsGetCall) UserProject(userProject string) *DefaultObjectAccessControlsGetCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DefaultObjectAccessControlsGetCall) Fields(s ...googleapi.Field) *DefaultObjectAccessControlsGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *DefaultObjectAccessControlsGetCall) IfNoneMatch(entityTag string) *DefaultObjectAccessControlsGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *DefaultObjectAccessControlsGetCall) Context(ctx context.Context) *DefaultObjectAccessControlsGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *DefaultObjectAccessControlsGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DefaultObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/defaultObjectAcl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.defaultObjectAccessControls.get" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *DefaultObjectAccessControlsGetCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Returns the default object ACL entry for the specified entity on the specified bucket.", + // "httpMethod": "GET", + // "id": "storage.defaultObjectAccessControls.get", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/defaultObjectAcl/{entity}", + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.defaultObjectAccessControls.insert": + +type DefaultObjectAccessControlsInsertCall struct { + s *Service + bucket string + objectaccesscontrol *ObjectAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Insert: Creates a new default object ACL entry on the specified +// bucket. +func (r *DefaultObjectAccessControlsService) Insert(bucket string, objectaccesscontrol *ObjectAccessControl) *DefaultObjectAccessControlsInsertCall { + c := &DefaultObjectAccessControlsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.objectaccesscontrol = objectaccesscontrol + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *DefaultObjectAccessControlsInsertCall) UserProject(userProject string) *DefaultObjectAccessControlsInsertCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DefaultObjectAccessControlsInsertCall) Fields(s ...googleapi.Field) *DefaultObjectAccessControlsInsertCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *DefaultObjectAccessControlsInsertCall) Context(ctx context.Context) *DefaultObjectAccessControlsInsertCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *DefaultObjectAccessControlsInsertCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DefaultObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.objectaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/defaultObjectAcl") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.defaultObjectAccessControls.insert" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. 
+func (c *DefaultObjectAccessControlsInsertCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Creates a new default object ACL entry on the specified bucket.", + // "httpMethod": "POST", + // "id": "storage.defaultObjectAccessControls.insert", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/defaultObjectAcl", + // "request": { + // "$ref": "ObjectAccessControl" + // }, + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.defaultObjectAccessControls.list": + +type DefaultObjectAccessControlsListCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Retrieves default object ACL entries on the specified bucket. +func (r *DefaultObjectAccessControlsService) List(bucket string) *DefaultObjectAccessControlsListCall { + c := &DefaultObjectAccessControlsListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": If present, only return default ACL listing +// if the bucket's current metageneration matches this value. +func (c *DefaultObjectAccessControlsListCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *DefaultObjectAccessControlsListCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": If present, only return default ACL +// listing if the bucket's current metageneration does not match the +// given value. +func (c *DefaultObjectAccessControlsListCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *DefaultObjectAccessControlsListCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *DefaultObjectAccessControlsListCall) UserProject(userProject string) *DefaultObjectAccessControlsListCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. 
See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DefaultObjectAccessControlsListCall) Fields(s ...googleapi.Field) *DefaultObjectAccessControlsListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *DefaultObjectAccessControlsListCall) IfNoneMatch(entityTag string) *DefaultObjectAccessControlsListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *DefaultObjectAccessControlsListCall) Context(ctx context.Context) *DefaultObjectAccessControlsListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *DefaultObjectAccessControlsListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DefaultObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/defaultObjectAcl") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.defaultObjectAccessControls.list" call. +// Exactly one of *ObjectAccessControls or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControls.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *DefaultObjectAccessControlsListCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControls, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControls{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Retrieves default object ACL entries on the specified bucket.", + // "httpMethod": "GET", + // "id": "storage.defaultObjectAccessControls.list", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "If present, only return default ACL listing if the bucket's current metageneration matches this value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "If present, only return default ACL listing if the bucket's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/defaultObjectAcl", + // "response": { + // "$ref": "ObjectAccessControls" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.defaultObjectAccessControls.patch": + +type DefaultObjectAccessControlsPatchCall struct { + s *Service + bucket string + entity string + objectaccesscontrol *ObjectAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Patch: Patches a default object ACL entry on the specified bucket. +func (r *DefaultObjectAccessControlsService) Patch(bucket string, entity string, objectaccesscontrol *ObjectAccessControl) *DefaultObjectAccessControlsPatchCall { + c := &DefaultObjectAccessControlsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + c.objectaccesscontrol = objectaccesscontrol + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *DefaultObjectAccessControlsPatchCall) UserProject(userProject string) *DefaultObjectAccessControlsPatchCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DefaultObjectAccessControlsPatchCall) Fields(s ...googleapi.Field) *DefaultObjectAccessControlsPatchCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. 
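+//
+// For example (sketch only; svc, acl and the entity string are
+// placeholders), the patch request can be bounded with a timeout:
+//
+//	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+//	defer cancel()
+//	_, err := svc.DefaultObjectAccessControls.Patch("my-bucket", "user-someone@example.com", acl).
+//		Context(ctx).
+//		Do()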
+func (c *DefaultObjectAccessControlsPatchCall) Context(ctx context.Context) *DefaultObjectAccessControlsPatchCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *DefaultObjectAccessControlsPatchCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DefaultObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.objectaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/defaultObjectAcl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PATCH", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.defaultObjectAccessControls.patch" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *DefaultObjectAccessControlsPatchCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Patches a default object ACL entry on the specified bucket.", + // "httpMethod": "PATCH", + // "id": "storage.defaultObjectAccessControls.patch", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/defaultObjectAcl/{entity}", + // "request": { + // "$ref": "ObjectAccessControl" + // }, + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.defaultObjectAccessControls.update": + +type DefaultObjectAccessControlsUpdateCall struct { + s *Service + bucket string + entity string + objectaccesscontrol *ObjectAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Update: Updates a default object ACL entry on the specified bucket. +func (r *DefaultObjectAccessControlsService) Update(bucket string, entity string, objectaccesscontrol *ObjectAccessControl) *DefaultObjectAccessControlsUpdateCall { + c := &DefaultObjectAccessControlsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.entity = entity + c.objectaccesscontrol = objectaccesscontrol + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *DefaultObjectAccessControlsUpdateCall) UserProject(userProject string) *DefaultObjectAccessControlsUpdateCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *DefaultObjectAccessControlsUpdateCall) Fields(s ...googleapi.Field) *DefaultObjectAccessControlsUpdateCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *DefaultObjectAccessControlsUpdateCall) Context(ctx context.Context) *DefaultObjectAccessControlsUpdateCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *DefaultObjectAccessControlsUpdateCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *DefaultObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.objectaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/defaultObjectAcl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.defaultObjectAccessControls.update" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. 
Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *DefaultObjectAccessControlsUpdateCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates a default object ACL entry on the specified bucket.", + // "httpMethod": "PUT", + // "id": "storage.defaultObjectAccessControls.update", + // "parameterOrder": [ + // "bucket", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/defaultObjectAcl/{entity}", + // "request": { + // "$ref": "ObjectAccessControl" + // }, + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.notifications.delete": + +type NotificationsDeleteCall struct { + s *Service + bucket string + notification string + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Delete: Permanently deletes a notification subscription. +func (r *NotificationsService) Delete(bucket string, notification string) *NotificationsDeleteCall { + c := &NotificationsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.notification = notification + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *NotificationsDeleteCall) UserProject(userProject string) *NotificationsDeleteCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *NotificationsDeleteCall) Fields(s ...googleapi.Field) *NotificationsDeleteCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. 
+func (c *NotificationsDeleteCall) Context(ctx context.Context) *NotificationsDeleteCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *NotificationsDeleteCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *NotificationsDeleteCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/notificationConfigs/{notification}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("DELETE", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "notification": c.notification, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.notifications.delete" call. +func (c *NotificationsDeleteCall) Do(opts ...googleapi.CallOption) error { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if err != nil { + return err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return err + } + return nil + // { + // "description": "Permanently deletes a notification subscription.", + // "httpMethod": "DELETE", + // "id": "storage.notifications.delete", + // "parameterOrder": [ + // "bucket", + // "notification" + // ], + // "parameters": { + // "bucket": { + // "description": "The parent bucket of the notification.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "notification": { + // "description": "ID of the notification to delete.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/notificationConfigs/{notification}", + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.notifications.get": + +type NotificationsGetCall struct { + s *Service + bucket string + notification string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: View a notification configuration. +func (r *NotificationsService) Get(bucket string, notification string) *NotificationsGetCall { + c := &NotificationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.notification = notification + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *NotificationsGetCall) UserProject(userProject string) *NotificationsGetCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. 
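+//
+// For instance (illustrative only; svc is assumed to expose this
+// NotificationsService via a Notifications field, and "id"/"topic" are
+// assumed JSON field names of Notification), a caller interested only in the
+// notification's topic could request a partial response:
+//
+//	n, err := svc.Notifications.Get("my-bucket", "notification-id").
+//		Fields("id", "topic").
+//		Do()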
+func (c *NotificationsGetCall) Fields(s ...googleapi.Field) *NotificationsGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *NotificationsGetCall) IfNoneMatch(entityTag string) *NotificationsGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *NotificationsGetCall) Context(ctx context.Context) *NotificationsGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *NotificationsGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *NotificationsGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/notificationConfigs/{notification}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "notification": c.notification, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.notifications.get" call. +// Exactly one of *Notification or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *Notification.ServerResponse.Header or (if a response was returned at +// all) in error.(*googleapi.Error).Header. Use googleapi.IsNotModified +// to check whether the returned error was because +// http.StatusNotModified was returned. +func (c *NotificationsGetCall) Do(opts ...googleapi.CallOption) (*Notification, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Notification{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "View a notification configuration.", + // "httpMethod": "GET", + // "id": "storage.notifications.get", + // "parameterOrder": [ + // "bucket", + // "notification" + // ], + // "parameters": { + // "bucket": { + // "description": "The parent bucket of the notification.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "notification": { + // "description": "Notification ID", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/notificationConfigs/{notification}", + // "response": { + // "$ref": "Notification" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.notifications.insert": + +type NotificationsInsertCall struct { + s *Service + bucket string + notification *Notification + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Insert: Creates a notification subscription for a given bucket. +func (r *NotificationsService) Insert(bucket string, notification *Notification) *NotificationsInsertCall { + c := &NotificationsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.notification = notification + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *NotificationsInsertCall) UserProject(userProject string) *NotificationsInsertCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *NotificationsInsertCall) Fields(s ...googleapi.Field) *NotificationsInsertCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *NotificationsInsertCall) Context(ctx context.Context) *NotificationsInsertCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *NotificationsInsertCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *NotificationsInsertCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.notification) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/notificationConfigs") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.notifications.insert" call. +// Exactly one of *Notification or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *Notification.ServerResponse.Header or (if a response was returned at +// all) in error.(*googleapi.Error).Header. Use googleapi.IsNotModified +// to check whether the returned error was because +// http.StatusNotModified was returned. +func (c *NotificationsInsertCall) Do(opts ...googleapi.CallOption) (*Notification, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Notification{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Creates a notification subscription for a given bucket.", + // "httpMethod": "POST", + // "id": "storage.notifications.insert", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "The parent bucket of the notification.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/notificationConfigs", + // "request": { + // "$ref": "Notification" + // }, + // "response": { + // "$ref": "Notification" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.notifications.list": + +type NotificationsListCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Retrieves a list of notification subscriptions for a given +// bucket. 
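+//
+// Typical use, sketched under the assumption that svc is an authenticated
+// *Service exposing this NotificationsService as svc.Notifications, and that
+// the returned Notifications value lists its entries in an Items field:
+//
+//	list, err := svc.Notifications.List("my-bucket").Context(ctx).Do()
+//	if err != nil {
+//		// handle the error
+//	}
+//	for _, n := range list.Items {
+//		_ = n // each entry is a *Notification
+//	}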
+func (r *NotificationsService) List(bucket string) *NotificationsListCall { + c := &NotificationsListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *NotificationsListCall) UserProject(userProject string) *NotificationsListCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *NotificationsListCall) Fields(s ...googleapi.Field) *NotificationsListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *NotificationsListCall) IfNoneMatch(entityTag string) *NotificationsListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *NotificationsListCall) Context(ctx context.Context) *NotificationsListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *NotificationsListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *NotificationsListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/notificationConfigs") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.notifications.list" call. +// Exactly one of *Notifications or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *Notifications.ServerResponse.Header or (if a response was returned +// at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *NotificationsListCall) Do(opts ...googleapi.CallOption) (*Notifications, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Notifications{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Retrieves a list of notification subscriptions for a given bucket.", + // "httpMethod": "GET", + // "id": "storage.notifications.list", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a Google Cloud Storage bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/notificationConfigs", + // "response": { + // "$ref": "Notifications" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objectAccessControls.delete": + +type ObjectAccessControlsDeleteCall struct { + s *Service + bucket string + object string + entity string + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Delete: Permanently deletes the ACL entry for the specified entity on +// the specified object. +func (r *ObjectAccessControlsService) Delete(bucket string, object string, entity string) *ObjectAccessControlsDeleteCall { + c := &ObjectAccessControlsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.entity = entity + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectAccessControlsDeleteCall) Generation(generation int64) *ObjectAccessControlsDeleteCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectAccessControlsDeleteCall) UserProject(userProject string) *ObjectAccessControlsDeleteCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectAccessControlsDeleteCall) Fields(s ...googleapi.Field) *ObjectAccessControlsDeleteCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. 
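+//
+// For example (illustrative only; svc and the bucket/object/entity values are
+// assumed to be supplied by the caller), a deadline can be attached so the
+// delete is abandoned if it takes too long:
+//
+//	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
+//	defer cancel()
+//	err := svc.ObjectAccessControls.Delete("example-bucket", "example-object", "allUsers").
+//		Context(ctx).
+//		Do()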
+func (c *ObjectAccessControlsDeleteCall) Context(ctx context.Context) *ObjectAccessControlsDeleteCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectAccessControlsDeleteCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("DELETE", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objectAccessControls.delete" call. +func (c *ObjectAccessControlsDeleteCall) Do(opts ...googleapi.CallOption) error { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if err != nil { + return err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return err + } + return nil + // { + // "description": "Permanently deletes the ACL entry for the specified entity on the specified object.", + // "httpMethod": "DELETE", + // "id": "storage.objectAccessControls.delete", + // "parameterOrder": [ + // "bucket", + // "object", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/acl/{entity}", + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objectAccessControls.get": + +type ObjectAccessControlsGetCall struct { + s *Service + bucket string + object string + entity string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Returns the ACL entry for the specified entity on the specified +// object. 
+func (r *ObjectAccessControlsService) Get(bucket string, object string, entity string) *ObjectAccessControlsGetCall { + c := &ObjectAccessControlsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.entity = entity + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectAccessControlsGetCall) Generation(generation int64) *ObjectAccessControlsGetCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectAccessControlsGetCall) UserProject(userProject string) *ObjectAccessControlsGetCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectAccessControlsGetCall) Fields(s ...googleapi.Field) *ObjectAccessControlsGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ObjectAccessControlsGetCall) IfNoneMatch(entityTag string) *ObjectAccessControlsGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectAccessControlsGetCall) Context(ctx context.Context) *ObjectAccessControlsGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectAccessControlsGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objectAccessControls.get" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. 
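+//
+// A minimal sketch of that pattern (svc, ctx and cachedEtag are assumptions
+// supplied by the caller):
+//
+//	acl, err := svc.ObjectAccessControls.Get("example-bucket", "example-object", "allUsers").
+//		IfNoneMatch(cachedEtag).
+//		Context(ctx).
+//		Do()
+//	if googleapi.IsNotModified(err) {
+//		// The cached copy is still current; keep using it.
+//	} else if err != nil {
+//		return err
+//	}
+//	_ = acl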
+func (c *ObjectAccessControlsGetCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Returns the ACL entry for the specified entity on the specified object.", + // "httpMethod": "GET", + // "id": "storage.objectAccessControls.get", + // "parameterOrder": [ + // "bucket", + // "object", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/acl/{entity}", + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objectAccessControls.insert": + +type ObjectAccessControlsInsertCall struct { + s *Service + bucket string + object string + objectaccesscontrol *ObjectAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Insert: Creates a new ACL entry on the specified object. +func (r *ObjectAccessControlsService) Insert(bucket string, object string, objectaccesscontrol *ObjectAccessControl) *ObjectAccessControlsInsertCall { + c := &ObjectAccessControlsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.objectaccesscontrol = objectaccesscontrol + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectAccessControlsInsertCall) Generation(generation int64) *ObjectAccessControlsInsertCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. 
+func (c *ObjectAccessControlsInsertCall) UserProject(userProject string) *ObjectAccessControlsInsertCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectAccessControlsInsertCall) Fields(s ...googleapi.Field) *ObjectAccessControlsInsertCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectAccessControlsInsertCall) Context(ctx context.Context) *ObjectAccessControlsInsertCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectAccessControlsInsertCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.objectaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/acl") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objectAccessControls.insert" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *ObjectAccessControlsInsertCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Creates a new ACL entry on the specified object.", + // "httpMethod": "POST", + // "id": "storage.objectAccessControls.insert", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/acl", + // "request": { + // "$ref": "ObjectAccessControl" + // }, + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objectAccessControls.list": + +type ObjectAccessControlsListCall struct { + s *Service + bucket string + object string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Retrieves ACL entries on the specified object. +func (r *ObjectAccessControlsService) List(bucket string, object string) *ObjectAccessControlsListCall { + c := &ObjectAccessControlsListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectAccessControlsListCall) Generation(generation int64) *ObjectAccessControlsListCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectAccessControlsListCall) UserProject(userProject string) *ObjectAccessControlsListCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. 
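+//
+// For instance (a sketch; svc and ctx are assumed), requesting only the
+// entity and role of each entry keeps the response small:
+//
+//	acls, err := svc.ObjectAccessControls.List("example-bucket", "example-object").
+//		Fields("items(entity,role)").
+//		Context(ctx).
+//		Do()
+//	if err != nil {
+//		return err
+//	}
+//	_ = acls // only the requested fields are populated in the result.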
+func (c *ObjectAccessControlsListCall) Fields(s ...googleapi.Field) *ObjectAccessControlsListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ObjectAccessControlsListCall) IfNoneMatch(entityTag string) *ObjectAccessControlsListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectAccessControlsListCall) Context(ctx context.Context) *ObjectAccessControlsListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectAccessControlsListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/acl") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objectAccessControls.list" call. +// Exactly one of *ObjectAccessControls or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControls.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *ObjectAccessControlsListCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControls, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControls{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Retrieves ACL entries on the specified object.", + // "httpMethod": "GET", + // "id": "storage.objectAccessControls.list", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/acl", + // "response": { + // "$ref": "ObjectAccessControls" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objectAccessControls.patch": + +type ObjectAccessControlsPatchCall struct { + s *Service + bucket string + object string + entity string + objectaccesscontrol *ObjectAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Patch: Patches an ACL entry on the specified object. +func (r *ObjectAccessControlsService) Patch(bucket string, object string, entity string, objectaccesscontrol *ObjectAccessControl) *ObjectAccessControlsPatchCall { + c := &ObjectAccessControlsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.entity = entity + c.objectaccesscontrol = objectaccesscontrol + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectAccessControlsPatchCall) Generation(generation int64) *ObjectAccessControlsPatchCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectAccessControlsPatchCall) UserProject(userProject string) *ObjectAccessControlsPatchCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. 
+func (c *ObjectAccessControlsPatchCall) Fields(s ...googleapi.Field) *ObjectAccessControlsPatchCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectAccessControlsPatchCall) Context(ctx context.Context) *ObjectAccessControlsPatchCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectAccessControlsPatchCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.objectaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PATCH", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objectAccessControls.patch" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *ObjectAccessControlsPatchCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Patches an ACL entry on the specified object.", + // "httpMethod": "PATCH", + // "id": "storage.objectAccessControls.patch", + // "parameterOrder": [ + // "bucket", + // "object", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. 
Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/acl/{entity}", + // "request": { + // "$ref": "ObjectAccessControl" + // }, + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objectAccessControls.update": + +type ObjectAccessControlsUpdateCall struct { + s *Service + bucket string + object string + entity string + objectaccesscontrol *ObjectAccessControl + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Update: Updates an ACL entry on the specified object. +func (r *ObjectAccessControlsService) Update(bucket string, object string, entity string, objectaccesscontrol *ObjectAccessControl) *ObjectAccessControlsUpdateCall { + c := &ObjectAccessControlsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.entity = entity + c.objectaccesscontrol = objectaccesscontrol + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectAccessControlsUpdateCall) Generation(generation int64) *ObjectAccessControlsUpdateCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectAccessControlsUpdateCall) UserProject(userProject string) *ObjectAccessControlsUpdateCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectAccessControlsUpdateCall) Fields(s ...googleapi.Field) *ObjectAccessControlsUpdateCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectAccessControlsUpdateCall) Context(ctx context.Context) *ObjectAccessControlsUpdateCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *ObjectAccessControlsUpdateCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.objectaccesscontrol) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/acl/{entity}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + "entity": c.entity, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objectAccessControls.update" call. +// Exactly one of *ObjectAccessControl or error will be non-nil. Any +// non-2xx status code is an error. Response headers are in either +// *ObjectAccessControl.ServerResponse.Header or (if a response was +// returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *ObjectAccessControlsUpdateCall) Do(opts ...googleapi.CallOption) (*ObjectAccessControl, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ObjectAccessControl{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates an ACL entry on the specified object.", + // "httpMethod": "PUT", + // "id": "storage.objectAccessControls.update", + // "parameterOrder": [ + // "bucket", + // "object", + // "entity" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of a bucket.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "entity": { + // "description": "The entity holding the permission. Can be user-userId, user-emailAddress, group-groupId, group-emailAddress, allUsers, or allAuthenticatedUsers.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. 
Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/acl/{entity}", + // "request": { + // "$ref": "ObjectAccessControl" + // }, + // "response": { + // "$ref": "ObjectAccessControl" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objects.compose": + +type ObjectsComposeCall struct { + s *Service + destinationBucket string + destinationObject string + composerequest *ComposeRequest + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Compose: Concatenates a list of existing objects into a new object in +// the same bucket. +func (r *ObjectsService) Compose(destinationBucket string, destinationObject string, composerequest *ComposeRequest) *ObjectsComposeCall { + c := &ObjectsComposeCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.destinationBucket = destinationBucket + c.destinationObject = destinationObject + c.composerequest = composerequest + return c +} + +// DestinationPredefinedAcl sets the optional parameter +// "destinationPredefinedAcl": Apply a predefined set of access controls +// to the destination object. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *ObjectsComposeCall) DestinationPredefinedAcl(destinationPredefinedAcl string) *ObjectsComposeCall { + c.urlParams_.Set("destinationPredefinedAcl", destinationPredefinedAcl) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the object's current +// generation matches the given value. Setting to 0 makes the operation +// succeed only if there are no live versions of the object. +func (c *ObjectsComposeCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsComposeCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the object's current metageneration matches the given value. +func (c *ObjectsComposeCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsComposeCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// KmsKeyName sets the optional parameter "kmsKeyName": Resource name of +// the Cloud KMS key, of the form +// projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, +// that will be used to encrypt the object. Overrides the object +// metadata's kms_key_name value, if any. +func (c *ObjectsComposeCall) KmsKeyName(kmsKeyName string) *ObjectsComposeCall { + c.urlParams_.Set("kmsKeyName", kmsKeyName) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. 
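+//
+// For example (illustrative; svc, ctx and req, a previously built
+// *ComposeRequest, are assumed), composing into a Requester Pays bucket while
+// billing the caller's own project:
+//
+//	obj, err := svc.Objects.Compose("example-bucket", "combined-object", req).
+//		UserProject("example-billing-project").
+//		Context(ctx).
+//		Do()
+//	if err != nil {
+//		return err
+//	}
+//	_ = obj // obj describes the newly composed object.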
+func (c *ObjectsComposeCall) UserProject(userProject string) *ObjectsComposeCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsComposeCall) Fields(s ...googleapi.Field) *ObjectsComposeCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsComposeCall) Context(ctx context.Context) *ObjectsComposeCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsComposeCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsComposeCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.composerequest) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{destinationBucket}/o/{destinationObject}/compose") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "destinationBucket": c.destinationBucket, + "destinationObject": c.destinationObject, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.compose" call. +// Exactly one of *Object or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Object.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsComposeCall) Do(opts ...googleapi.CallOption) (*Object, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Object{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Concatenates a list of existing objects into a new object in the same bucket.", + // "httpMethod": "POST", + // "id": "storage.objects.compose", + // "parameterOrder": [ + // "destinationBucket", + // "destinationObject" + // ], + // "parameters": { + // "destinationBucket": { + // "description": "Name of the bucket in which to store the new object.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "destinationObject": { + // "description": "Name of the new object. 
For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "destinationPredefinedAcl": { + // "description": "Apply a predefined set of access controls to the destination object.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "kmsKeyName": { + // "description": "Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata's kms_key_name value, if any.", + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{destinationBucket}/o/{destinationObject}/compose", + // "request": { + // "$ref": "ComposeRequest" + // }, + // "response": { + // "$ref": "Object" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objects.copy": + +type ObjectsCopyCall struct { + s *Service + sourceBucket string + sourceObject string + destinationBucket string + destinationObject string + object *Object + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Copy: Copies a source object to a destination object. Optionally +// overrides metadata. +func (r *ObjectsService) Copy(sourceBucket string, sourceObject string, destinationBucket string, destinationObject string, object *Object) *ObjectsCopyCall { + c := &ObjectsCopyCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.sourceBucket = sourceBucket + c.sourceObject = sourceObject + c.destinationBucket = destinationBucket + c.destinationObject = destinationObject + c.object = object + return c +} + +// DestinationPredefinedAcl sets the optional parameter +// "destinationPredefinedAcl": Apply a predefined set of access controls +// to the destination object. 
+// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *ObjectsCopyCall) DestinationPredefinedAcl(destinationPredefinedAcl string) *ObjectsCopyCall { + c.urlParams_.Set("destinationPredefinedAcl", destinationPredefinedAcl) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the destination object's +// current generation matches the given value. Setting to 0 makes the +// operation succeed only if there are no live versions of the object. +func (c *ObjectsCopyCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfGenerationNotMatch sets the optional parameter +// "ifGenerationNotMatch": Makes the operation conditional on whether +// the destination object's current generation does not match the given +// value. If no live object exists, the precondition fails. Setting to 0 +// makes the operation succeed only if there is a live version of the +// object. +func (c *ObjectsCopyCall) IfGenerationNotMatch(ifGenerationNotMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifGenerationNotMatch", fmt.Sprint(ifGenerationNotMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the destination object's current metageneration matches the given +// value. +func (c *ObjectsCopyCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the operation conditional on +// whether the destination object's current metageneration does not +// match the given value. +func (c *ObjectsCopyCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// IfSourceGenerationMatch sets the optional parameter +// "ifSourceGenerationMatch": Makes the operation conditional on whether +// the source object's current generation matches the given value. +func (c *ObjectsCopyCall) IfSourceGenerationMatch(ifSourceGenerationMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifSourceGenerationMatch", fmt.Sprint(ifSourceGenerationMatch)) + return c +} + +// IfSourceGenerationNotMatch sets the optional parameter +// "ifSourceGenerationNotMatch": Makes the operation conditional on +// whether the source object's current generation does not match the +// given value. 
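+//
+// A sketch of a guarded copy (svc, ctx and srcGen are assumptions): the call
+// below succeeds only if the source object's current generation is different
+// from srcGen:
+//
+//	obj, err := svc.Objects.Copy("src-bucket", "src-object", "dst-bucket", "dst-object", &Object{}).
+//		IfSourceGenerationNotMatch(srcGen).
+//		Context(ctx).
+//		Do()
+//	if err != nil {
+//		return err // a failed precondition surfaces here as a *googleapi.Error.
+//	}
+//	_ = obj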
+func (c *ObjectsCopyCall) IfSourceGenerationNotMatch(ifSourceGenerationNotMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifSourceGenerationNotMatch", fmt.Sprint(ifSourceGenerationNotMatch)) + return c +} + +// IfSourceMetagenerationMatch sets the optional parameter +// "ifSourceMetagenerationMatch": Makes the operation conditional on +// whether the source object's current metageneration matches the given +// value. +func (c *ObjectsCopyCall) IfSourceMetagenerationMatch(ifSourceMetagenerationMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifSourceMetagenerationMatch", fmt.Sprint(ifSourceMetagenerationMatch)) + return c +} + +// IfSourceMetagenerationNotMatch sets the optional parameter +// "ifSourceMetagenerationNotMatch": Makes the operation conditional on +// whether the source object's current metageneration does not match the +// given value. +func (c *ObjectsCopyCall) IfSourceMetagenerationNotMatch(ifSourceMetagenerationNotMatch int64) *ObjectsCopyCall { + c.urlParams_.Set("ifSourceMetagenerationNotMatch", fmt.Sprint(ifSourceMetagenerationNotMatch)) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl, unless the object resource +// specifies the acl property, when it defaults to full. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit the owner, acl property. +func (c *ObjectsCopyCall) Projection(projection string) *ObjectsCopyCall { + c.urlParams_.Set("projection", projection) + return c +} + +// SourceGeneration sets the optional parameter "sourceGeneration": If +// present, selects a specific revision of the source object (as opposed +// to the latest version, the default). +func (c *ObjectsCopyCall) SourceGeneration(sourceGeneration int64) *ObjectsCopyCall { + c.urlParams_.Set("sourceGeneration", fmt.Sprint(sourceGeneration)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsCopyCall) UserProject(userProject string) *ObjectsCopyCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsCopyCall) Fields(s ...googleapi.Field) *ObjectsCopyCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsCopyCall) Context(ctx context.Context) *ObjectsCopyCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *ObjectsCopyCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsCopyCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.object) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{sourceBucket}/o/{sourceObject}/copyTo/b/{destinationBucket}/o/{destinationObject}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "sourceBucket": c.sourceBucket, + "sourceObject": c.sourceObject, + "destinationBucket": c.destinationBucket, + "destinationObject": c.destinationObject, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.copy" call. +// Exactly one of *Object or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Object.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsCopyCall) Do(opts ...googleapi.CallOption) (*Object, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Object{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Copies a source object to a destination object. Optionally overrides metadata.", + // "httpMethod": "POST", + // "id": "storage.objects.copy", + // "parameterOrder": [ + // "sourceBucket", + // "sourceObject", + // "destinationBucket", + // "destinationObject" + // ], + // "parameters": { + // "destinationBucket": { + // "description": "Name of the bucket in which to store the new object. Overrides the provided object metadata's bucket value, if any.For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "destinationObject": { + // "description": "Name of the new object. Required when the object metadata is not otherwise provided. 
Overrides the object metadata's name value, if any.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "destinationPredefinedAcl": { + // "description": "Apply a predefined set of access controls to the destination object.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the destination object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the destination object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the destination object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the destination object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceGenerationMatch": { + // "description": "Makes the operation conditional on whether the source object's current generation matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the source object's current generation does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the source object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the source object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." 
+ // ], + // "location": "query", + // "type": "string" + // }, + // "sourceBucket": { + // "description": "Name of the bucket in which to find the source object.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "sourceGeneration": { + // "description": "If present, selects a specific revision of the source object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "sourceObject": { + // "description": "Name of the source object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{sourceBucket}/o/{sourceObject}/copyTo/b/{destinationBucket}/o/{destinationObject}", + // "request": { + // "$ref": "Object" + // }, + // "response": { + // "$ref": "Object" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objects.delete": + +type ObjectsDeleteCall struct { + s *Service + bucket string + object string + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Delete: Deletes an object and its metadata. Deletions are permanent +// if versioning is not enabled for the bucket, or if the generation +// parameter is used. +func (r *ObjectsService) Delete(bucket string, object string) *ObjectsDeleteCall { + c := &ObjectsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + return c +} + +// Generation sets the optional parameter "generation": If present, +// permanently deletes a specific revision of this object (as opposed to +// the latest version, the default). +func (c *ObjectsDeleteCall) Generation(generation int64) *ObjectsDeleteCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the object's current +// generation matches the given value. Setting to 0 makes the operation +// succeed only if there are no live versions of the object. +func (c *ObjectsDeleteCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsDeleteCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfGenerationNotMatch sets the optional parameter +// "ifGenerationNotMatch": Makes the operation conditional on whether +// the object's current generation does not match the given value. If no +// live object exists, the precondition fails. Setting to 0 makes the +// operation succeed only if there is a live version of the object. +func (c *ObjectsDeleteCall) IfGenerationNotMatch(ifGenerationNotMatch int64) *ObjectsDeleteCall { + c.urlParams_.Set("ifGenerationNotMatch", fmt.Sprint(ifGenerationNotMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the object's current metageneration matches the given value. 
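+//
+// A minimal usage sketch (the bucket, object name and metageneration value
+// are illustrative, and objSvc is assumed to be an *ObjectsService obtained
+// from this package's Service value): delete the object only if its metadata
+// is still at metageneration 3.
+//
+//   err := objSvc.Delete("my-bucket", "notes.txt").IfMetagenerationMatch(3).Do()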
+func (c *ObjectsDeleteCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsDeleteCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the operation conditional on +// whether the object's current metageneration does not match the given +// value. +func (c *ObjectsDeleteCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *ObjectsDeleteCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsDeleteCall) UserProject(userProject string) *ObjectsDeleteCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsDeleteCall) Fields(s ...googleapi.Field) *ObjectsDeleteCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsDeleteCall) Context(ctx context.Context) *ObjectsDeleteCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsDeleteCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsDeleteCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("DELETE", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.delete" call. +func (c *ObjectsDeleteCall) Do(opts ...googleapi.CallOption) error { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if err != nil { + return err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return err + } + return nil + // { + // "description": "Deletes an object and its metadata. 
Deletions are permanent if versioning is not enabled for the bucket, or if the generation parameter is used.", + // "httpMethod": "DELETE", + // "id": "storage.objects.delete", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which the object resides.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, permanently deletes a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}", + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objects.get": + +type ObjectsGetCall struct { + s *Service + bucket string + object string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Retrieves an object or its metadata. +func (r *ObjectsService) Get(bucket string, object string) *ObjectsGetCall { + c := &ObjectsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectsGetCall) Generation(generation int64) *ObjectsGetCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the object's current +// generation matches the given value. 
Setting to 0 makes the operation +// succeed only if there are no live versions of the object. +func (c *ObjectsGetCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsGetCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfGenerationNotMatch sets the optional parameter +// "ifGenerationNotMatch": Makes the operation conditional on whether +// the object's current generation does not match the given value. If no +// live object exists, the precondition fails. Setting to 0 makes the +// operation succeed only if there is a live version of the object. +func (c *ObjectsGetCall) IfGenerationNotMatch(ifGenerationNotMatch int64) *ObjectsGetCall { + c.urlParams_.Set("ifGenerationNotMatch", fmt.Sprint(ifGenerationNotMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the object's current metageneration matches the given value. +func (c *ObjectsGetCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsGetCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the operation conditional on +// whether the object's current metageneration does not match the given +// value. +func (c *ObjectsGetCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *ObjectsGetCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit the owner, acl property. +func (c *ObjectsGetCall) Projection(projection string) *ObjectsGetCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsGetCall) UserProject(userProject string) *ObjectsGetCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsGetCall) Fields(s ...googleapi.Field) *ObjectsGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ObjectsGetCall) IfNoneMatch(entityTag string) *ObjectsGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do and Download +// methods. Any pending HTTP request will be aborted if the provided +// context is canceled. +func (c *ObjectsGetCall) Context(ctx context.Context) *ObjectsGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
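+//
+// For example (the header name and value are illustrative, and objSvc is an
+// assumed *ObjectsService), a custom header can be attached before the call
+// is executed:
+//
+//   call := objSvc.Get("my-bucket", "notes.txt")
+//   call.Header().Set("X-Example-Trace", "debug")
+//   obj, err := call.Do()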
+func (c *ObjectsGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Download fetches the API endpoint's "media" value, instead of the normal +// API response value. If the returned error is nil, the Response is guaranteed to +// have a 2xx status code. Callers must close the Response.Body as usual. +func (c *ObjectsGetCall) Download(opts ...googleapi.CallOption) (*http.Response, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("media") + if err != nil { + return nil, err + } + if err := googleapi.CheckMediaResponse(res); err != nil { + res.Body.Close() + return nil, err + } + return res, nil +} + +// Do executes the "storage.objects.get" call. +// Exactly one of *Object or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Object.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsGetCall) Do(opts ...googleapi.CallOption) (*Object, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Object{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Retrieves an object or its metadata.", + // "httpMethod": "GET", + // "id": "storage.objects.get", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which the object resides.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current generation matches the given value. 
Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}", + // "response": { + // "$ref": "Object" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ], + // "supportsMediaDownload": true, + // "useMediaDownloadService": true + // } + +} + +// method id "storage.objects.getIamPolicy": + +type ObjectsGetIamPolicyCall struct { + s *Service + bucket string + object string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// GetIamPolicy: Returns an IAM policy for the specified object. +func (r *ObjectsService) GetIamPolicy(bucket string, object string) *ObjectsGetIamPolicyCall { + c := &ObjectsGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectsGetIamPolicyCall) Generation(generation int64) *ObjectsGetIamPolicyCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsGetIamPolicyCall) UserProject(userProject string) *ObjectsGetIamPolicyCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. 
See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsGetIamPolicyCall) Fields(s ...googleapi.Field) *ObjectsGetIamPolicyCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ObjectsGetIamPolicyCall) IfNoneMatch(entityTag string) *ObjectsGetIamPolicyCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsGetIamPolicyCall) Context(ctx context.Context) *ObjectsGetIamPolicyCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsGetIamPolicyCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/iam") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.getIamPolicy" call. +// Exactly one of *Policy or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Policy.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
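+	// Note: a 304 Not Modified response (possible when IfNoneMatch was set) is
+	// converted into a *googleapi.Error below so that callers can detect it
+	// with googleapi.IsNotModified.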
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Policy{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Returns an IAM policy for the specified object.", + // "httpMethod": "GET", + // "id": "storage.objects.getIamPolicy", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which the object resides.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/iam", + // "response": { + // "$ref": "Policy" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objects.insert": + +type ObjectsInsertCall struct { + s *Service + bucket string + object *Object + urlParams_ gensupport.URLParams + mediaInfo_ *gensupport.MediaInfo + ctx_ context.Context + header_ http.Header +} + +// Insert: Stores a new object and metadata. +func (r *ObjectsService) Insert(bucket string, object *Object) *ObjectsInsertCall { + c := &ObjectsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + return c +} + +// ContentEncoding sets the optional parameter "contentEncoding": If +// set, sets the contentEncoding property of the final object to this +// value. Setting this parameter is equivalent to setting the +// contentEncoding metadata property. This can be useful when uploading +// an object with uploadType=media to indicate the encoding of the +// content being uploaded. +func (c *ObjectsInsertCall) ContentEncoding(contentEncoding string) *ObjectsInsertCall { + c.urlParams_.Set("contentEncoding", contentEncoding) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the object's current +// generation matches the given value. Setting to 0 makes the operation +// succeed only if there are no live versions of the object. 
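+//
+// For example (the names and content are illustrative; objSvc is an assumed
+// *ObjectsService and the Object type's Name field is taken from its
+// definition earlier in this file), IfGenerationMatch(0) turns an insert into
+// a "create only if the object does not already exist" request:
+//
+//   _, err := objSvc.Insert("my-bucket", &Object{Name: "notes.txt"}).
+//       IfGenerationMatch(0).
+//       Media(strings.NewReader("hello world")).
+//       Do()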
+func (c *ObjectsInsertCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsInsertCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfGenerationNotMatch sets the optional parameter +// "ifGenerationNotMatch": Makes the operation conditional on whether +// the object's current generation does not match the given value. If no +// live object exists, the precondition fails. Setting to 0 makes the +// operation succeed only if there is a live version of the object. +func (c *ObjectsInsertCall) IfGenerationNotMatch(ifGenerationNotMatch int64) *ObjectsInsertCall { + c.urlParams_.Set("ifGenerationNotMatch", fmt.Sprint(ifGenerationNotMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the object's current metageneration matches the given value. +func (c *ObjectsInsertCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsInsertCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the operation conditional on +// whether the object's current metageneration does not match the given +// value. +func (c *ObjectsInsertCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *ObjectsInsertCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// KmsKeyName sets the optional parameter "kmsKeyName": Resource name of +// the Cloud KMS key, of the form +// projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, +// that will be used to encrypt the object. Overrides the object +// metadata's kms_key_name value, if any. Limited availability; usable +// only by enabled projects. +func (c *ObjectsInsertCall) KmsKeyName(kmsKeyName string) *ObjectsInsertCall { + c.urlParams_.Set("kmsKeyName", kmsKeyName) + return c +} + +// Name sets the optional parameter "name": Name of the object. Required +// when the object metadata is not otherwise provided. Overrides the +// object metadata's name value, if any. For information about how to +// URL encode object names to be path safe, see Encoding URI Path Parts. +func (c *ObjectsInsertCall) Name(name string) *ObjectsInsertCall { + c.urlParams_.Set("name", name) + return c +} + +// PredefinedAcl sets the optional parameter "predefinedAcl": Apply a +// predefined set of access controls to this object. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *ObjectsInsertCall) PredefinedAcl(predefinedAcl string) *ObjectsInsertCall { + c.urlParams_.Set("predefinedAcl", predefinedAcl) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl, unless the object resource +// specifies the acl property, when it defaults to full. +// +// Possible values: +// "full" - Include all properties. 
+// "noAcl" - Omit the owner, acl property. +func (c *ObjectsInsertCall) Projection(projection string) *ObjectsInsertCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsInsertCall) UserProject(userProject string) *ObjectsInsertCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Media specifies the media to upload in one or more chunks. The chunk +// size may be controlled by supplying a MediaOption generated by +// googleapi.ChunkSize. The chunk size defaults to +// googleapi.DefaultUploadChunkSize.The Content-Type header used in the +// upload request will be determined by sniffing the contents of r, +// unless a MediaOption generated by googleapi.ContentType is +// supplied. +// At most one of Media and ResumableMedia may be set. +func (c *ObjectsInsertCall) Media(r io.Reader, options ...googleapi.MediaOption) *ObjectsInsertCall { + if ct := c.object.ContentType; ct != "" { + options = append([]googleapi.MediaOption{googleapi.ContentType(ct)}, options...) + } + c.mediaInfo_ = gensupport.NewInfoFromMedia(r, options) + return c +} + +// ResumableMedia specifies the media to upload in chunks and can be +// canceled with ctx. +// +// Deprecated: use Media instead. +// +// At most one of Media and ResumableMedia may be set. mediaType +// identifies the MIME media type of the upload, such as "image/png". If +// mediaType is "", it will be auto-detected. The provided ctx will +// supersede any context previously provided to the Context method. +func (c *ObjectsInsertCall) ResumableMedia(ctx context.Context, r io.ReaderAt, size int64, mediaType string) *ObjectsInsertCall { + c.ctx_ = ctx + c.mediaInfo_ = gensupport.NewInfoFromResumableMedia(r, size, mediaType) + return c +} + +// ProgressUpdater provides a callback function that will be called +// after every chunk. It should be a low-latency function in order to +// not slow down the upload operation. This should only be called when +// using ResumableMedia (as opposed to Media). +func (c *ObjectsInsertCall) ProgressUpdater(pu googleapi.ProgressUpdater) *ObjectsInsertCall { + c.mediaInfo_.SetProgressUpdater(pu) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsInsertCall) Fields(s ...googleapi.Field) *ObjectsInsertCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +// This context will supersede any context previously provided to the +// ResumableMedia method. +func (c *ObjectsInsertCall) Context(ctx context.Context) *ObjectsInsertCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *ObjectsInsertCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsInsertCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.object) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o") + if c.mediaInfo_ != nil { + urls = strings.Replace(urls, "https://www.googleapis.com/", "https://www.googleapis.com/upload/", 1) + c.urlParams_.Set("uploadType", c.mediaInfo_.UploadType()) + } + if body == nil { + body = new(bytes.Buffer) + reqHeaders.Set("Content-Type", "application/json") + } + body, getBody, cleanup := c.mediaInfo_.UploadRequest(reqHeaders, body) + defer cleanup() + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + gensupport.SetGetBody(req, getBody) + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.insert" call. +// Exactly one of *Object or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Object.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsInsertCall) Do(opts ...googleapi.CallOption) (*Object, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + rx := c.mediaInfo_.ResumableUpload(res.Header.Get("Location")) + if rx != nil { + rx.Client = c.s.client + rx.UserAgent = c.s.userAgent() + ctx := c.ctx_ + if ctx == nil { + ctx = context.TODO() + } + res, err = rx.Upload(ctx) + if err != nil { + return nil, err + } + defer res.Body.Close() + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + } + ret := &Object{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Stores a new object and metadata.", + // "httpMethod": "POST", + // "id": "storage.objects.insert", + // "mediaUpload": { + // "accept": [ + // "*/*" + // ], + // "protocols": { + // "resumable": { + // "multipart": true, + // "path": "/resumable/upload/storage/v1/b/{bucket}/o" + // }, + // "simple": { + // "multipart": true, + // "path": "/upload/storage/v1/b/{bucket}/o" + // } + // } + // }, + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which to store the new object. 
Overrides the provided object metadata's bucket value, if any.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "contentEncoding": { + // "description": "If set, sets the contentEncoding property of the final object to this value. Setting this parameter is equivalent to setting the contentEncoding metadata property. This can be useful when uploading an object with uploadType=media to indicate the encoding of the content being uploaded.", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "kmsKeyName": { + // "description": "Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata's kms_key_name value, if any. Limited availability; usable only by enabled projects.", + // "location": "query", + // "type": "string" + // }, + // "name": { + // "description": "Name of the object. Required when the object metadata is not otherwise provided. Overrides the object metadata's name value, if any. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "query", + // "type": "string" + // }, + // "predefinedAcl": { + // "description": "Apply a predefined set of access controls to this object.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. 
Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o", + // "request": { + // "$ref": "Object" + // }, + // "response": { + // "$ref": "Object" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ], + // "supportsMediaUpload": true + // } + +} + +// method id "storage.objects.list": + +type ObjectsListCall struct { + s *Service + bucket string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// List: Retrieves a list of objects matching the criteria. +func (r *ObjectsService) List(bucket string) *ObjectsListCall { + c := &ObjectsListCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + return c +} + +// Delimiter sets the optional parameter "delimiter": Returns results in +// a directory-like mode. items will contain only objects whose names, +// aside from the prefix, do not contain delimiter. Objects whose names, +// aside from the prefix, contain delimiter will have their name, +// truncated after the delimiter, returned in prefixes. Duplicate +// prefixes are omitted. +func (c *ObjectsListCall) Delimiter(delimiter string) *ObjectsListCall { + c.urlParams_.Set("delimiter", delimiter) + return c +} + +// MaxResults sets the optional parameter "maxResults": Maximum number +// of items plus prefixes to return in a single page of responses. As +// duplicate prefixes are omitted, fewer total results may be returned +// than requested. The service will use this parameter or 1,000 items, +// whichever is smaller. +func (c *ObjectsListCall) MaxResults(maxResults int64) *ObjectsListCall { + c.urlParams_.Set("maxResults", fmt.Sprint(maxResults)) + return c +} + +// PageToken sets the optional parameter "pageToken": A +// previously-returned page token representing part of the larger set of +// results to view. +func (c *ObjectsListCall) PageToken(pageToken string) *ObjectsListCall { + c.urlParams_.Set("pageToken", pageToken) + return c +} + +// Prefix sets the optional parameter "prefix": Filter results to +// objects whose names begin with this prefix. +func (c *ObjectsListCall) Prefix(prefix string) *ObjectsListCall { + c.urlParams_.Set("prefix", prefix) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit the owner, acl property. +func (c *ObjectsListCall) Projection(projection string) *ObjectsListCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. 
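+//
+// For example (the bucket and project IDs are illustrative, and objSvc is an
+// assumed *ObjectsService), listing a Requester Pays bucket requires naming
+// the project to bill:
+//
+//   objs, err := objSvc.List("requester-pays-bucket").
+//       UserProject("my-billing-project").
+//       Do()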
+func (c *ObjectsListCall) UserProject(userProject string) *ObjectsListCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Versions sets the optional parameter "versions": If true, lists all +// versions of an object as distinct results. The default is false. For +// more information, see Object Versioning. +func (c *ObjectsListCall) Versions(versions bool) *ObjectsListCall { + c.urlParams_.Set("versions", fmt.Sprint(versions)) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsListCall) Fields(s ...googleapi.Field) *ObjectsListCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ObjectsListCall) IfNoneMatch(entityTag string) *ObjectsListCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsListCall) Context(ctx context.Context) *ObjectsListCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsListCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsListCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.list" call. +// Exactly one of *Objects or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Objects.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsListCall) Do(opts ...googleapi.CallOption) (*Objects, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Objects{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Retrieves a list of objects matching the criteria.", + // "httpMethod": "GET", + // "id": "storage.objects.list", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which to look for objects.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "delimiter": { + // "description": "Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.", + // "location": "query", + // "type": "string" + // }, + // "maxResults": { + // "default": "1000", + // "description": "Maximum number of items plus prefixes to return in a single page of responses. As duplicate prefixes are omitted, fewer total results may be returned than requested. The service will use this parameter or 1,000 items, whichever is smaller.", + // "format": "uint32", + // "location": "query", + // "minimum": "0", + // "type": "integer" + // }, + // "pageToken": { + // "description": "A previously-returned page token representing part of the larger set of results to view.", + // "location": "query", + // "type": "string" + // }, + // "prefix": { + // "description": "Filter results to objects whose names begin with this prefix.", + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // }, + // "versions": { + // "description": "If true, lists all versions of an object as distinct results. The default is false. For more information, see Object Versioning.", + // "location": "query", + // "type": "boolean" + // } + // }, + // "path": "b/{bucket}/o", + // "response": { + // "$ref": "Objects" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ], + // "supportsSubscription": true + // } + +} + +// Pages invokes f for each page of results. +// A non-nil error returned from f will halt the iteration. +// The provided context supersedes any context provided to the Context method. 
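+//
+// A minimal sketch (the bucket name, prefix and handler are illustrative;
+// objSvc is an assumed *ObjectsService, ctx a caller-supplied
+// context.Context, and the Items and Name fields are assumed from the Objects
+// and Object types defined earlier in this file):
+//
+//   err := objSvc.List("my-bucket").Prefix("logs/").Pages(ctx, func(page *Objects) error {
+//       for _, o := range page.Items {
+//           fmt.Println(o.Name)
+//       }
+//       return nil
+//   })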
+func (c *ObjectsListCall) Pages(ctx context.Context, f func(*Objects) error) error { + c.ctx_ = ctx + defer c.PageToken(c.urlParams_.Get("pageToken")) // reset paging to original point + for { + x, err := c.Do() + if err != nil { + return err + } + if err := f(x); err != nil { + return err + } + if x.NextPageToken == "" { + return nil + } + c.PageToken(x.NextPageToken) + } +} + +// method id "storage.objects.patch": + +type ObjectsPatchCall struct { + s *Service + bucket string + object string + object2 *Object + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Patch: Patches an object's metadata. +func (r *ObjectsService) Patch(bucket string, object string, object2 *Object) *ObjectsPatchCall { + c := &ObjectsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.object2 = object2 + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectsPatchCall) Generation(generation int64) *ObjectsPatchCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the object's current +// generation matches the given value. Setting to 0 makes the operation +// succeed only if there are no live versions of the object. +func (c *ObjectsPatchCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsPatchCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfGenerationNotMatch sets the optional parameter +// "ifGenerationNotMatch": Makes the operation conditional on whether +// the object's current generation does not match the given value. If no +// live object exists, the precondition fails. Setting to 0 makes the +// operation succeed only if there is a live version of the object. +func (c *ObjectsPatchCall) IfGenerationNotMatch(ifGenerationNotMatch int64) *ObjectsPatchCall { + c.urlParams_.Set("ifGenerationNotMatch", fmt.Sprint(ifGenerationNotMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the object's current metageneration matches the given value. +func (c *ObjectsPatchCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsPatchCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the operation conditional on +// whether the object's current metageneration does not match the given +// value. +func (c *ObjectsPatchCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *ObjectsPatchCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// PredefinedAcl sets the optional parameter "predefinedAcl": Apply a +// predefined set of access controls to this object. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. 
+// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *ObjectsPatchCall) PredefinedAcl(predefinedAcl string) *ObjectsPatchCall { + c.urlParams_.Set("predefinedAcl", predefinedAcl) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to full. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit the owner, acl property. +func (c *ObjectsPatchCall) Projection(projection string) *ObjectsPatchCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request, for Requester Pays buckets. +func (c *ObjectsPatchCall) UserProject(userProject string) *ObjectsPatchCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsPatchCall) Fields(s ...googleapi.Field) *ObjectsPatchCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsPatchCall) Context(ctx context.Context) *ObjectsPatchCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsPatchCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsPatchCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.object2) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PATCH", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.patch" call. +// Exactly one of *Object or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Object.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsPatchCall) Do(opts ...googleapi.CallOption) (*Object, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Object{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Patches an object's metadata.", + // "httpMethod": "PATCH", + // "id": "storage.objects.patch", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which the object resides.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "predefinedAcl": { + // "description": "Apply a predefined set of access controls to this object.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. 
Defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request, for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}", + // "request": { + // "$ref": "Object" + // }, + // "response": { + // "$ref": "Object" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objects.rewrite": + +type ObjectsRewriteCall struct { + s *Service + sourceBucket string + sourceObject string + destinationBucket string + destinationObject string + object *Object + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Rewrite: Rewrites a source object to a destination object. Optionally +// overrides metadata. +func (r *ObjectsService) Rewrite(sourceBucket string, sourceObject string, destinationBucket string, destinationObject string, object *Object) *ObjectsRewriteCall { + c := &ObjectsRewriteCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.sourceBucket = sourceBucket + c.sourceObject = sourceObject + c.destinationBucket = destinationBucket + c.destinationObject = destinationObject + c.object = object + return c +} + +// DestinationKmsKeyName sets the optional parameter +// "destinationKmsKeyName": Resource name of the Cloud KMS key, of the +// form +// projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, +// that will be used to encrypt the object. Overrides the object +// metadata's kms_key_name value, if any. +func (c *ObjectsRewriteCall) DestinationKmsKeyName(destinationKmsKeyName string) *ObjectsRewriteCall { + c.urlParams_.Set("destinationKmsKeyName", destinationKmsKeyName) + return c +} + +// DestinationPredefinedAcl sets the optional parameter +// "destinationPredefinedAcl": Apply a predefined set of access controls +// to the destination object. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *ObjectsRewriteCall) DestinationPredefinedAcl(destinationPredefinedAcl string) *ObjectsRewriteCall { + c.urlParams_.Set("destinationPredefinedAcl", destinationPredefinedAcl) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the object's current +// generation matches the given value. Setting to 0 makes the operation +// succeed only if there are no live versions of the object. 
+func (c *ObjectsRewriteCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfGenerationNotMatch sets the optional parameter +// "ifGenerationNotMatch": Makes the operation conditional on whether +// the object's current generation does not match the given value. If no +// live object exists, the precondition fails. Setting to 0 makes the +// operation succeed only if there is a live version of the object. +func (c *ObjectsRewriteCall) IfGenerationNotMatch(ifGenerationNotMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifGenerationNotMatch", fmt.Sprint(ifGenerationNotMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the destination object's current metageneration matches the given +// value. +func (c *ObjectsRewriteCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the operation conditional on +// whether the destination object's current metageneration does not +// match the given value. +func (c *ObjectsRewriteCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// IfSourceGenerationMatch sets the optional parameter +// "ifSourceGenerationMatch": Makes the operation conditional on whether +// the source object's current generation matches the given value. +func (c *ObjectsRewriteCall) IfSourceGenerationMatch(ifSourceGenerationMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifSourceGenerationMatch", fmt.Sprint(ifSourceGenerationMatch)) + return c +} + +// IfSourceGenerationNotMatch sets the optional parameter +// "ifSourceGenerationNotMatch": Makes the operation conditional on +// whether the source object's current generation does not match the +// given value. +func (c *ObjectsRewriteCall) IfSourceGenerationNotMatch(ifSourceGenerationNotMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifSourceGenerationNotMatch", fmt.Sprint(ifSourceGenerationNotMatch)) + return c +} + +// IfSourceMetagenerationMatch sets the optional parameter +// "ifSourceMetagenerationMatch": Makes the operation conditional on +// whether the source object's current metageneration matches the given +// value. +func (c *ObjectsRewriteCall) IfSourceMetagenerationMatch(ifSourceMetagenerationMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifSourceMetagenerationMatch", fmt.Sprint(ifSourceMetagenerationMatch)) + return c +} + +// IfSourceMetagenerationNotMatch sets the optional parameter +// "ifSourceMetagenerationNotMatch": Makes the operation conditional on +// whether the source object's current metageneration does not match the +// given value. +func (c *ObjectsRewriteCall) IfSourceMetagenerationNotMatch(ifSourceMetagenerationNotMatch int64) *ObjectsRewriteCall { + c.urlParams_.Set("ifSourceMetagenerationNotMatch", fmt.Sprint(ifSourceMetagenerationNotMatch)) + return c +} + +// MaxBytesRewrittenPerCall sets the optional parameter +// "maxBytesRewrittenPerCall": The maximum number of bytes that will be +// rewritten per rewrite request. Most callers shouldn't need to specify +// this parameter - it is primarily in place to support testing. 
If +// specified the value must be an integral multiple of 1 MiB (1048576). +// Also, this only applies to requests where the source and destination +// span locations and/or storage classes. Finally, this value must not +// change across rewrite calls else you'll get an error that the +// rewriteToken is invalid. +func (c *ObjectsRewriteCall) MaxBytesRewrittenPerCall(maxBytesRewrittenPerCall int64) *ObjectsRewriteCall { + c.urlParams_.Set("maxBytesRewrittenPerCall", fmt.Sprint(maxBytesRewrittenPerCall)) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl, unless the object resource +// specifies the acl property, when it defaults to full. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit the owner, acl property. +func (c *ObjectsRewriteCall) Projection(projection string) *ObjectsRewriteCall { + c.urlParams_.Set("projection", projection) + return c +} + +// RewriteToken sets the optional parameter "rewriteToken": Include this +// field (from the previous rewrite response) on each rewrite request +// after the first one, until the rewrite response 'done' flag is true. +// Calls that provide a rewriteToken can omit all other request fields, +// but if included those fields must match the values provided in the +// first rewrite request. +func (c *ObjectsRewriteCall) RewriteToken(rewriteToken string) *ObjectsRewriteCall { + c.urlParams_.Set("rewriteToken", rewriteToken) + return c +} + +// SourceGeneration sets the optional parameter "sourceGeneration": If +// present, selects a specific revision of the source object (as opposed +// to the latest version, the default). +func (c *ObjectsRewriteCall) SourceGeneration(sourceGeneration int64) *ObjectsRewriteCall { + c.urlParams_.Set("sourceGeneration", fmt.Sprint(sourceGeneration)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsRewriteCall) UserProject(userProject string) *ObjectsRewriteCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsRewriteCall) Fields(s ...googleapi.Field) *ObjectsRewriteCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsRewriteCall) Context(ctx context.Context) *ObjectsRewriteCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. 
+func (c *ObjectsRewriteCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsRewriteCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.object) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{sourceBucket}/o/{sourceObject}/rewriteTo/b/{destinationBucket}/o/{destinationObject}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "sourceBucket": c.sourceBucket, + "sourceObject": c.sourceObject, + "destinationBucket": c.destinationBucket, + "destinationObject": c.destinationObject, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.rewrite" call. +// Exactly one of *RewriteResponse or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *RewriteResponse.ServerResponse.Header or (if a response was returned +// at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *ObjectsRewriteCall) Do(opts ...googleapi.CallOption) (*RewriteResponse, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &RewriteResponse{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Rewrites a source object to a destination object. Optionally overrides metadata.", + // "httpMethod": "POST", + // "id": "storage.objects.rewrite", + // "parameterOrder": [ + // "sourceBucket", + // "sourceObject", + // "destinationBucket", + // "destinationObject" + // ], + // "parameters": { + // "destinationBucket": { + // "description": "Name of the bucket in which to store the new object. Overrides the provided object metadata's bucket value, if any.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "destinationKmsKeyName": { + // "description": "Resource name of the Cloud KMS key, of the form projects/my-project/locations/global/keyRings/my-kr/cryptoKeys/my-key, that will be used to encrypt the object. Overrides the object metadata's kms_key_name value, if any.", + // "location": "query", + // "type": "string" + // }, + // "destinationObject": { + // "description": "Name of the new object. Required when the object metadata is not otherwise provided. Overrides the object metadata's name value, if any. 
For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "destinationPredefinedAcl": { + // "description": "Apply a predefined set of access controls to the destination object.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. Setting to 0 makes the operation succeed only if there is a live version of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the destination object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the destination object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceGenerationMatch": { + // "description": "Makes the operation conditional on whether the source object's current generation matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the source object's current generation does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the source object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifSourceMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the source object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "maxBytesRewrittenPerCall": { + // "description": "The maximum number of bytes that will be rewritten per rewrite request. Most callers shouldn't need to specify this parameter - it is primarily in place to support testing. If specified the value must be an integral multiple of 1 MiB (1048576). 
Also, this only applies to requests where the source and destination span locations and/or storage classes. Finally, this value must not change across rewrite calls else you'll get an error that the rewriteToken is invalid.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl, unless the object resource specifies the acl property, when it defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." + // ], + // "location": "query", + // "type": "string" + // }, + // "rewriteToken": { + // "description": "Include this field (from the previous rewrite response) on each rewrite request after the first one, until the rewrite response 'done' flag is true. Calls that provide a rewriteToken can omit all other request fields, but if included those fields must match the values provided in the first rewrite request.", + // "location": "query", + // "type": "string" + // }, + // "sourceBucket": { + // "description": "Name of the bucket in which to find the source object.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "sourceGeneration": { + // "description": "If present, selects a specific revision of the source object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "sourceObject": { + // "description": "Name of the source object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{sourceBucket}/o/{sourceObject}/rewriteTo/b/{destinationBucket}/o/{destinationObject}", + // "request": { + // "$ref": "Object" + // }, + // "response": { + // "$ref": "RewriteResponse" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objects.setIamPolicy": + +type ObjectsSetIamPolicyCall struct { + s *Service + bucket string + object string + policy *Policy + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// SetIamPolicy: Updates an IAM policy for the specified object. +func (r *ObjectsService) SetIamPolicy(bucket string, object string, policy *Policy) *ObjectsSetIamPolicyCall { + c := &ObjectsSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.policy = policy + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectsSetIamPolicyCall) Generation(generation int64) *ObjectsSetIamPolicyCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. 
+func (c *ObjectsSetIamPolicyCall) UserProject(userProject string) *ObjectsSetIamPolicyCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsSetIamPolicyCall) Fields(s ...googleapi.Field) *ObjectsSetIamPolicyCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsSetIamPolicyCall) Context(ctx context.Context) *ObjectsSetIamPolicyCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsSetIamPolicyCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.policy) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/iam") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.setIamPolicy" call. +// Exactly one of *Policy or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Policy.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Policy{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates an IAM policy for the specified object.", + // "httpMethod": "PUT", + // "id": "storage.objects.setIamPolicy", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which the object resides.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/iam", + // "request": { + // "$ref": "Policy" + // }, + // "response": { + // "$ref": "Policy" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objects.testIamPermissions": + +type ObjectsTestIamPermissionsCall struct { + s *Service + bucket string + object string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// TestIamPermissions: Tests a set of permissions on the given object to +// see which, if any, are held by the caller. +func (r *ObjectsService) TestIamPermissions(bucket string, object string, permissions []string) *ObjectsTestIamPermissionsCall { + c := &ObjectsTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.urlParams_.SetMulti("permissions", append([]string{}, permissions...)) + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectsTestIamPermissionsCall) Generation(generation int64) *ObjectsTestIamPermissionsCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsTestIamPermissionsCall) UserProject(userProject string) *ObjectsTestIamPermissionsCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. 
See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsTestIamPermissionsCall) Fields(s ...googleapi.Field) *ObjectsTestIamPermissionsCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ObjectsTestIamPermissionsCall) IfNoneMatch(entityTag string) *ObjectsTestIamPermissionsCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsTestIamPermissionsCall) Context(ctx context.Context) *ObjectsTestIamPermissionsCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsTestIamPermissionsCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}/iam/testPermissions") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.testIamPermissions" call. +// Exactly one of *TestIamPermissionsResponse or error will be non-nil. +// Any non-2xx status code is an error. Response headers are in either +// *TestIamPermissionsResponse.ServerResponse.Header or (if a response +// was returned at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *ObjectsTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*TestIamPermissionsResponse, error) { + gensupport.SetOptions(c.urlParams_, opts...) 
+ res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &TestIamPermissionsResponse{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Tests a set of permissions on the given object to see which, if any, are held by the caller.", + // "httpMethod": "GET", + // "id": "storage.objects.testIamPermissions", + // "parameterOrder": [ + // "bucket", + // "object", + // "permissions" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which the object resides.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "permissions": { + // "description": "Permissions to test.", + // "location": "query", + // "repeated": true, + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}/iam/testPermissions", + // "response": { + // "$ref": "TestIamPermissionsResponse" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} + +// method id "storage.objects.update": + +type ObjectsUpdateCall struct { + s *Service + bucket string + object string + object2 *Object + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// Update: Updates an object's metadata. +func (r *ObjectsService) Update(bucket string, object string, object2 *Object) *ObjectsUpdateCall { + c := &ObjectsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.object = object + c.object2 = object2 + return c +} + +// Generation sets the optional parameter "generation": If present, +// selects a specific revision of this object (as opposed to the latest +// version, the default). +func (c *ObjectsUpdateCall) Generation(generation int64) *ObjectsUpdateCall { + c.urlParams_.Set("generation", fmt.Sprint(generation)) + return c +} + +// IfGenerationMatch sets the optional parameter "ifGenerationMatch": +// Makes the operation conditional on whether the object's current +// generation matches the given value. Setting to 0 makes the operation +// succeed only if there are no live versions of the object. 
+func (c *ObjectsUpdateCall) IfGenerationMatch(ifGenerationMatch int64) *ObjectsUpdateCall { + c.urlParams_.Set("ifGenerationMatch", fmt.Sprint(ifGenerationMatch)) + return c +} + +// IfGenerationNotMatch sets the optional parameter +// "ifGenerationNotMatch": Makes the operation conditional on whether +// the object's current generation does not match the given value. If no +// live object exists, the precondition fails. Setting to 0 makes the +// operation succeed only if there is a live version of the object. +func (c *ObjectsUpdateCall) IfGenerationNotMatch(ifGenerationNotMatch int64) *ObjectsUpdateCall { + c.urlParams_.Set("ifGenerationNotMatch", fmt.Sprint(ifGenerationNotMatch)) + return c +} + +// IfMetagenerationMatch sets the optional parameter +// "ifMetagenerationMatch": Makes the operation conditional on whether +// the object's current metageneration matches the given value. +func (c *ObjectsUpdateCall) IfMetagenerationMatch(ifMetagenerationMatch int64) *ObjectsUpdateCall { + c.urlParams_.Set("ifMetagenerationMatch", fmt.Sprint(ifMetagenerationMatch)) + return c +} + +// IfMetagenerationNotMatch sets the optional parameter +// "ifMetagenerationNotMatch": Makes the operation conditional on +// whether the object's current metageneration does not match the given +// value. +func (c *ObjectsUpdateCall) IfMetagenerationNotMatch(ifMetagenerationNotMatch int64) *ObjectsUpdateCall { + c.urlParams_.Set("ifMetagenerationNotMatch", fmt.Sprint(ifMetagenerationNotMatch)) + return c +} + +// PredefinedAcl sets the optional parameter "predefinedAcl": Apply a +// predefined set of access controls to this object. +// +// Possible values: +// "authenticatedRead" - Object owner gets OWNER access, and +// allAuthenticatedUsers get READER access. +// "bucketOwnerFullControl" - Object owner gets OWNER access, and +// project team owners get OWNER access. +// "bucketOwnerRead" - Object owner gets OWNER access, and project +// team owners get READER access. +// "private" - Object owner gets OWNER access. +// "projectPrivate" - Object owner gets OWNER access, and project team +// members get access according to their roles. +// "publicRead" - Object owner gets OWNER access, and allUsers get +// READER access. +func (c *ObjectsUpdateCall) PredefinedAcl(predefinedAcl string) *ObjectsUpdateCall { + c.urlParams_.Set("predefinedAcl", predefinedAcl) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to full. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit the owner, acl property. +func (c *ObjectsUpdateCall) Projection(projection string) *ObjectsUpdateCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsUpdateCall) UserProject(userProject string) *ObjectsUpdateCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsUpdateCall) Fields(s ...googleapi.Field) *ObjectsUpdateCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. 
+func (c *ObjectsUpdateCall) Context(ctx context.Context) *ObjectsUpdateCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsUpdateCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsUpdateCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.object2) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/{object}") + urls += "?" + c.urlParams_.Encode() + req, _ := http.NewRequest("PUT", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + "object": c.object, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.update" call. +// Exactly one of *Object or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Object.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsUpdateCall) Do(opts ...googleapi.CallOption) (*Object, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Object{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Updates an object's metadata.", + // "httpMethod": "PUT", + // "id": "storage.objects.update", + // "parameterOrder": [ + // "bucket", + // "object" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which the object resides.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "generation": { + // "description": "If present, selects a specific revision of this object (as opposed to the latest version, the default).", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current generation matches the given value. Setting to 0 makes the operation succeed only if there are no live versions of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifGenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current generation does not match the given value. If no live object exists, the precondition fails. 
Setting to 0 makes the operation succeed only if there is a live version of the object.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration matches the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "ifMetagenerationNotMatch": { + // "description": "Makes the operation conditional on whether the object's current metageneration does not match the given value.", + // "format": "int64", + // "location": "query", + // "type": "string" + // }, + // "object": { + // "description": "Name of the object. For information about how to URL encode object names to be path safe, see Encoding URI Path Parts.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "predefinedAcl": { + // "description": "Apply a predefined set of access controls to this object.", + // "enum": [ + // "authenticatedRead", + // "bucketOwnerFullControl", + // "bucketOwnerRead", + // "private", + // "projectPrivate", + // "publicRead" + // ], + // "enumDescriptions": [ + // "Object owner gets OWNER access, and allAuthenticatedUsers get READER access.", + // "Object owner gets OWNER access, and project team owners get OWNER access.", + // "Object owner gets OWNER access, and project team owners get READER access.", + // "Object owner gets OWNER access.", + // "Object owner gets OWNER access, and project team members get access according to their roles.", + // "Object owner gets OWNER access, and allUsers get READER access." + // ], + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to full.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "b/{bucket}/o/{object}", + // "request": { + // "$ref": "Object" + // }, + // "response": { + // "$ref": "Object" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/devstorage.full_control" + // ] + // } + +} + +// method id "storage.objects.watchAll": + +type ObjectsWatchAllCall struct { + s *Service + bucket string + channel *Channel + urlParams_ gensupport.URLParams + ctx_ context.Context + header_ http.Header +} + +// WatchAll: Watch for changes on all objects in a bucket. +func (r *ObjectsService) WatchAll(bucket string, channel *Channel) *ObjectsWatchAllCall { + c := &ObjectsWatchAllCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.bucket = bucket + c.channel = channel + return c +} + +// Delimiter sets the optional parameter "delimiter": Returns results in +// a directory-like mode. items will contain only objects whose names, +// aside from the prefix, do not contain delimiter. Objects whose names, +// aside from the prefix, contain delimiter will have their name, +// truncated after the delimiter, returned in prefixes. Duplicate +// prefixes are omitted. 
+func (c *ObjectsWatchAllCall) Delimiter(delimiter string) *ObjectsWatchAllCall { + c.urlParams_.Set("delimiter", delimiter) + return c +} + +// MaxResults sets the optional parameter "maxResults": Maximum number +// of items plus prefixes to return in a single page of responses. As +// duplicate prefixes are omitted, fewer total results may be returned +// than requested. The service will use this parameter or 1,000 items, +// whichever is smaller. +func (c *ObjectsWatchAllCall) MaxResults(maxResults int64) *ObjectsWatchAllCall { + c.urlParams_.Set("maxResults", fmt.Sprint(maxResults)) + return c +} + +// PageToken sets the optional parameter "pageToken": A +// previously-returned page token representing part of the larger set of +// results to view. +func (c *ObjectsWatchAllCall) PageToken(pageToken string) *ObjectsWatchAllCall { + c.urlParams_.Set("pageToken", pageToken) + return c +} + +// Prefix sets the optional parameter "prefix": Filter results to +// objects whose names begin with this prefix. +func (c *ObjectsWatchAllCall) Prefix(prefix string) *ObjectsWatchAllCall { + c.urlParams_.Set("prefix", prefix) + return c +} + +// Projection sets the optional parameter "projection": Set of +// properties to return. Defaults to noAcl. +// +// Possible values: +// "full" - Include all properties. +// "noAcl" - Omit the owner, acl property. +func (c *ObjectsWatchAllCall) Projection(projection string) *ObjectsWatchAllCall { + c.urlParams_.Set("projection", projection) + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. Required for Requester Pays buckets. +func (c *ObjectsWatchAllCall) UserProject(userProject string) *ObjectsWatchAllCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Versions sets the optional parameter "versions": If true, lists all +// versions of an object as distinct results. The default is false. For +// more information, see Object Versioning. +func (c *ObjectsWatchAllCall) Versions(versions bool) *ObjectsWatchAllCall { + c.urlParams_.Set("versions", fmt.Sprint(versions)) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ObjectsWatchAllCall) Fields(s ...googleapi.Field) *ObjectsWatchAllCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ObjectsWatchAllCall) Context(ctx context.Context) *ObjectsWatchAllCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ObjectsWatchAllCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ObjectsWatchAllCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + var body io.Reader = nil + body, err := googleapi.WithoutDataWrapper.JSONReader(c.channel) + if err != nil { + return nil, err + } + reqHeaders.Set("Content-Type", "application/json") + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o/watch") + urls += "?" 
+ c.urlParams_.Encode() + req, _ := http.NewRequest("POST", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "bucket": c.bucket, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.objects.watchAll" call. +// Exactly one of *Channel or error will be non-nil. Any non-2xx status +// code is an error. Response headers are in either +// *Channel.ServerResponse.Header or (if a response was returned at all) +// in error.(*googleapi.Error).Header. Use googleapi.IsNotModified to +// check whether the returned error was because http.StatusNotModified +// was returned. +func (c *ObjectsWatchAllCall) Do(opts ...googleapi.CallOption) (*Channel, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &Channel{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Watch for changes on all objects in a bucket.", + // "httpMethod": "POST", + // "id": "storage.objects.watchAll", + // "parameterOrder": [ + // "bucket" + // ], + // "parameters": { + // "bucket": { + // "description": "Name of the bucket in which to look for objects.", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "delimiter": { + // "description": "Returns results in a directory-like mode. items will contain only objects whose names, aside from the prefix, do not contain delimiter. Objects whose names, aside from the prefix, contain delimiter will have their name, truncated after the delimiter, returned in prefixes. Duplicate prefixes are omitted.", + // "location": "query", + // "type": "string" + // }, + // "maxResults": { + // "default": "1000", + // "description": "Maximum number of items plus prefixes to return in a single page of responses. As duplicate prefixes are omitted, fewer total results may be returned than requested. The service will use this parameter or 1,000 items, whichever is smaller.", + // "format": "uint32", + // "location": "query", + // "minimum": "0", + // "type": "integer" + // }, + // "pageToken": { + // "description": "A previously-returned page token representing part of the larger set of results to view.", + // "location": "query", + // "type": "string" + // }, + // "prefix": { + // "description": "Filter results to objects whose names begin with this prefix.", + // "location": "query", + // "type": "string" + // }, + // "projection": { + // "description": "Set of properties to return. Defaults to noAcl.", + // "enum": [ + // "full", + // "noAcl" + // ], + // "enumDescriptions": [ + // "Include all properties.", + // "Omit the owner, acl property." + // ], + // "location": "query", + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request. Required for Requester Pays buckets.", + // "location": "query", + // "type": "string" + // }, + // "versions": { + // "description": "If true, lists all versions of an object as distinct results. The default is false. 
For more information, see Object Versioning.", + // "location": "query", + // "type": "boolean" + // } + // }, + // "path": "b/{bucket}/o/watch", + // "request": { + // "$ref": "Channel", + // "parameterName": "resource" + // }, + // "response": { + // "$ref": "Channel" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ], + // "supportsSubscription": true + // } + +} + +// method id "storage.projects.serviceAccount.get": + +type ProjectsServiceAccountGetCall struct { + s *Service + projectId string + urlParams_ gensupport.URLParams + ifNoneMatch_ string + ctx_ context.Context + header_ http.Header +} + +// Get: Get the email address of this project's Google Cloud Storage +// service account. +func (r *ProjectsServiceAccountService) Get(projectId string) *ProjectsServiceAccountGetCall { + c := &ProjectsServiceAccountGetCall{s: r.s, urlParams_: make(gensupport.URLParams)} + c.projectId = projectId + return c +} + +// UserProject sets the optional parameter "userProject": The project to +// be billed for this request. +func (c *ProjectsServiceAccountGetCall) UserProject(userProject string) *ProjectsServiceAccountGetCall { + c.urlParams_.Set("userProject", userProject) + return c +} + +// Fields allows partial responses to be retrieved. See +// https://developers.google.com/gdata/docs/2.0/basics#PartialResponse +// for more information. +func (c *ProjectsServiceAccountGetCall) Fields(s ...googleapi.Field) *ProjectsServiceAccountGetCall { + c.urlParams_.Set("fields", googleapi.CombineFields(s)) + return c +} + +// IfNoneMatch sets the optional parameter which makes the operation +// fail if the object's ETag matches the given value. This is useful for +// getting updates only after the object has changed since the last +// request. Use googleapi.IsNotModified to check whether the response +// error from Do is the result of In-None-Match. +func (c *ProjectsServiceAccountGetCall) IfNoneMatch(entityTag string) *ProjectsServiceAccountGetCall { + c.ifNoneMatch_ = entityTag + return c +} + +// Context sets the context to be used in this call's Do method. Any +// pending HTTP request will be aborted if the provided context is +// canceled. +func (c *ProjectsServiceAccountGetCall) Context(ctx context.Context) *ProjectsServiceAccountGetCall { + c.ctx_ = ctx + return c +} + +// Header returns an http.Header that can be modified by the caller to +// add HTTP headers to the request. +func (c *ProjectsServiceAccountGetCall) Header() http.Header { + if c.header_ == nil { + c.header_ = make(http.Header) + } + return c.header_ +} + +func (c *ProjectsServiceAccountGetCall) doRequest(alt string) (*http.Response, error) { + reqHeaders := make(http.Header) + for k, v := range c.header_ { + reqHeaders[k] = v + } + reqHeaders.Set("User-Agent", c.s.userAgent()) + if c.ifNoneMatch_ != "" { + reqHeaders.Set("If-None-Match", c.ifNoneMatch_) + } + var body io.Reader = nil + c.urlParams_.Set("alt", alt) + urls := googleapi.ResolveRelative(c.s.BasePath, "projects/{projectId}/serviceAccount") + urls += "?" 
+ c.urlParams_.Encode() + req, _ := http.NewRequest("GET", urls, body) + req.Header = reqHeaders + googleapi.Expand(req.URL, map[string]string{ + "projectId": c.projectId, + }) + return gensupport.SendRequest(c.ctx_, c.s.client, req) +} + +// Do executes the "storage.projects.serviceAccount.get" call. +// Exactly one of *ServiceAccount or error will be non-nil. Any non-2xx +// status code is an error. Response headers are in either +// *ServiceAccount.ServerResponse.Header or (if a response was returned +// at all) in error.(*googleapi.Error).Header. Use +// googleapi.IsNotModified to check whether the returned error was +// because http.StatusNotModified was returned. +func (c *ProjectsServiceAccountGetCall) Do(opts ...googleapi.CallOption) (*ServiceAccount, error) { + gensupport.SetOptions(c.urlParams_, opts...) + res, err := c.doRequest("json") + if res != nil && res.StatusCode == http.StatusNotModified { + if res.Body != nil { + res.Body.Close() + } + return nil, &googleapi.Error{ + Code: res.StatusCode, + Header: res.Header, + } + } + if err != nil { + return nil, err + } + defer googleapi.CloseBody(res) + if err := googleapi.CheckResponse(res); err != nil { + return nil, err + } + ret := &ServiceAccount{ + ServerResponse: googleapi.ServerResponse{ + Header: res.Header, + HTTPStatusCode: res.StatusCode, + }, + } + target := &ret + if err := gensupport.DecodeResponse(target, res); err != nil { + return nil, err + } + return ret, nil + // { + // "description": "Get the email address of this project's Google Cloud Storage service account.", + // "httpMethod": "GET", + // "id": "storage.projects.serviceAccount.get", + // "parameterOrder": [ + // "projectId" + // ], + // "parameters": { + // "projectId": { + // "description": "Project ID", + // "location": "path", + // "required": true, + // "type": "string" + // }, + // "userProject": { + // "description": "The project to be billed for this request.", + // "location": "query", + // "type": "string" + // } + // }, + // "path": "projects/{projectId}/serviceAccount", + // "response": { + // "$ref": "ServiceAccount" + // }, + // "scopes": [ + // "https://www.googleapis.com/auth/cloud-platform", + // "https://www.googleapis.com/auth/cloud-platform.read-only", + // "https://www.googleapis.com/auth/devstorage.full_control", + // "https://www.googleapis.com/auth/devstorage.read_only", + // "https://www.googleapis.com/auth/devstorage.read_write" + // ] + // } + +} diff --git a/vendor/gopkg.in/yaml.v2/LICENSE b/vendor/gopkg.in/yaml.v2/LICENSE new file mode 100644 index 000000000..8dada3eda --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. 
+ + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. 
Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/vendor/gopkg.in/yaml.v2/LICENSE.libyaml b/vendor/gopkg.in/yaml.v2/LICENSE.libyaml new file mode 100644 index 000000000..8da58fbf6 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/LICENSE.libyaml @@ -0,0 +1,31 @@ +The following files were ported to Go from C files of libyaml, and thus +are still covered by their original copyright and license: + + apic.go + emitterc.go + parserc.go + readerc.go + scannerc.go + writerc.go + yamlh.go + yamlprivateh.go + +Copyright (c) 2006 Kirill Simonov + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies +of the Software, and to permit persons to whom the Software is furnished to do +so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/gopkg.in/yaml.v2/NOTICE b/vendor/gopkg.in/yaml.v2/NOTICE new file mode 100644 index 000000000..866d74a7a --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/NOTICE @@ -0,0 +1,13 @@ +Copyright 2011-2016 Canonical Ltd. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/vendor/gopkg.in/yaml.v2/README.md b/vendor/gopkg.in/yaml.v2/README.md new file mode 100644 index 000000000..b50c6e877 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/README.md @@ -0,0 +1,133 @@ +# YAML support for the Go language + +Introduction +------------ + +The yaml package enables Go programs to comfortably encode and decode YAML +values. It was developed within [Canonical](https://www.canonical.com) as +part of the [juju](https://juju.ubuntu.com) project, and is based on a +pure Go port of the well-known [libyaml](http://pyyaml.org/wiki/LibYAML) +C library to parse and generate YAML data quickly and reliably. + +Compatibility +------------- + +The yaml package supports most of YAML 1.1 and 1.2, including support for +anchors, tags, map merging, etc. Multi-document unmarshalling is not yet +implemented, and base-60 floats from YAML 1.1 are purposefully not +supported since they're a poor design and are gone in YAML 1.2. + +Installation and usage +---------------------- + +The import path for the package is *gopkg.in/yaml.v2*. 
+ +To install it, run: + + go get gopkg.in/yaml.v2 + +API documentation +----------------- + +If opened in a browser, the import path itself leads to the API documentation: + + * [https://gopkg.in/yaml.v2](https://gopkg.in/yaml.v2) + +API stability +------------- + +The package API for yaml v2 will remain stable as described in [gopkg.in](https://gopkg.in). + + +License +------- + +The yaml package is licensed under the Apache License 2.0. Please see the LICENSE file for details. + + +Example +------- + +```Go +package main + +import ( + "fmt" + "log" + + "gopkg.in/yaml.v2" +) + +var data = ` +a: Easy! +b: + c: 2 + d: [3, 4] +` + +// Note: struct fields must be public in order for unmarshal to +// correctly populate the data. +type T struct { + A string + B struct { + RenamedC int `yaml:"c"` + D []int `yaml:",flow"` + } +} + +func main() { + t := T{} + + err := yaml.Unmarshal([]byte(data), &t) + if err != nil { + log.Fatalf("error: %v", err) + } + fmt.Printf("--- t:\n%v\n\n", t) + + d, err := yaml.Marshal(&t) + if err != nil { + log.Fatalf("error: %v", err) + } + fmt.Printf("--- t dump:\n%s\n\n", string(d)) + + m := make(map[interface{}]interface{}) + + err = yaml.Unmarshal([]byte(data), &m) + if err != nil { + log.Fatalf("error: %v", err) + } + fmt.Printf("--- m:\n%v\n\n", m) + + d, err = yaml.Marshal(&m) + if err != nil { + log.Fatalf("error: %v", err) + } + fmt.Printf("--- m dump:\n%s\n\n", string(d)) +} +``` + +This example will generate the following output: + +``` +--- t: +{Easy! {2 [3 4]}} + +--- t dump: +a: Easy! +b: + c: 2 + d: [3, 4] + + +--- m: +map[a:Easy! b:map[c:2 d:[3 4]]] + +--- m dump: +a: Easy! +b: + c: 2 + d: + - 3 + - 4 +``` + diff --git a/vendor/gopkg.in/yaml.v2/apic.go b/vendor/gopkg.in/yaml.v2/apic.go new file mode 100644 index 000000000..1f7e87e67 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/apic.go @@ -0,0 +1,739 @@ +package yaml + +import ( + "io" +) + +func yaml_insert_token(parser *yaml_parser_t, pos int, token *yaml_token_t) { + //fmt.Println("yaml_insert_token", "pos:", pos, "typ:", token.typ, "head:", parser.tokens_head, "len:", len(parser.tokens)) + + // Check if we can move the queue at the beginning of the buffer. + if parser.tokens_head > 0 && len(parser.tokens) == cap(parser.tokens) { + if parser.tokens_head != len(parser.tokens) { + copy(parser.tokens, parser.tokens[parser.tokens_head:]) + } + parser.tokens = parser.tokens[:len(parser.tokens)-parser.tokens_head] + parser.tokens_head = 0 + } + parser.tokens = append(parser.tokens, *token) + if pos < 0 { + return + } + copy(parser.tokens[parser.tokens_head+pos+1:], parser.tokens[parser.tokens_head+pos:]) + parser.tokens[parser.tokens_head+pos] = *token +} + +// Create a new parser object. +func yaml_parser_initialize(parser *yaml_parser_t) bool { + *parser = yaml_parser_t{ + raw_buffer: make([]byte, 0, input_raw_buffer_size), + buffer: make([]byte, 0, input_buffer_size), + } + return true +} + +// Destroy a parser object. +func yaml_parser_delete(parser *yaml_parser_t) { + *parser = yaml_parser_t{} +} + +// String read handler. +func yaml_string_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) { + if parser.input_pos == len(parser.input) { + return 0, io.EOF + } + n = copy(buffer, parser.input[parser.input_pos:]) + parser.input_pos += n + return n, nil +} + +// Reader read handler. +func yaml_reader_read_handler(parser *yaml_parser_t, buffer []byte) (n int, err error) { + return parser.input_reader.Read(buffer) +} + +// Set a string input. 
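+// The input source may be set only once per parser; the string handler above
+// then feeds the parser from parser.input as more bytes are requested.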
+func yaml_parser_set_input_string(parser *yaml_parser_t, input []byte) { + if parser.read_handler != nil { + panic("must set the input source only once") + } + parser.read_handler = yaml_string_read_handler + parser.input = input + parser.input_pos = 0 +} + +// Set a file input. +func yaml_parser_set_input_reader(parser *yaml_parser_t, r io.Reader) { + if parser.read_handler != nil { + panic("must set the input source only once") + } + parser.read_handler = yaml_reader_read_handler + parser.input_reader = r +} + +// Set the source encoding. +func yaml_parser_set_encoding(parser *yaml_parser_t, encoding yaml_encoding_t) { + if parser.encoding != yaml_ANY_ENCODING { + panic("must set the encoding only once") + } + parser.encoding = encoding +} + +// Create a new emitter object. +func yaml_emitter_initialize(emitter *yaml_emitter_t) { + *emitter = yaml_emitter_t{ + buffer: make([]byte, output_buffer_size), + raw_buffer: make([]byte, 0, output_raw_buffer_size), + states: make([]yaml_emitter_state_t, 0, initial_stack_size), + events: make([]yaml_event_t, 0, initial_queue_size), + } +} + +// Destroy an emitter object. +func yaml_emitter_delete(emitter *yaml_emitter_t) { + *emitter = yaml_emitter_t{} +} + +// String write handler. +func yaml_string_write_handler(emitter *yaml_emitter_t, buffer []byte) error { + *emitter.output_buffer = append(*emitter.output_buffer, buffer...) + return nil +} + +// yaml_writer_write_handler uses emitter.output_writer to write the +// emitted text. +func yaml_writer_write_handler(emitter *yaml_emitter_t, buffer []byte) error { + _, err := emitter.output_writer.Write(buffer) + return err +} + +// Set a string output. +func yaml_emitter_set_output_string(emitter *yaml_emitter_t, output_buffer *[]byte) { + if emitter.write_handler != nil { + panic("must set the output target only once") + } + emitter.write_handler = yaml_string_write_handler + emitter.output_buffer = output_buffer +} + +// Set a file output. +func yaml_emitter_set_output_writer(emitter *yaml_emitter_t, w io.Writer) { + if emitter.write_handler != nil { + panic("must set the output target only once") + } + emitter.write_handler = yaml_writer_write_handler + emitter.output_writer = w +} + +// Set the output encoding. +func yaml_emitter_set_encoding(emitter *yaml_emitter_t, encoding yaml_encoding_t) { + if emitter.encoding != yaml_ANY_ENCODING { + panic("must set the output encoding only once") + } + emitter.encoding = encoding +} + +// Set the canonical output style. +func yaml_emitter_set_canonical(emitter *yaml_emitter_t, canonical bool) { + emitter.canonical = canonical +} + +//// Set the indentation increment. +func yaml_emitter_set_indent(emitter *yaml_emitter_t, indent int) { + if indent < 2 || indent > 9 { + indent = 2 + } + emitter.best_indent = indent +} + +// Set the preferred line width. +func yaml_emitter_set_width(emitter *yaml_emitter_t, width int) { + if width < 0 { + width = -1 + } + emitter.best_width = width +} + +// Set if unescaped non-ASCII characters are allowed. +func yaml_emitter_set_unicode(emitter *yaml_emitter_t, unicode bool) { + emitter.unicode = unicode +} + +// Set the preferred line break character. +func yaml_emitter_set_break(emitter *yaml_emitter_t, line_break yaml_break_t) { + emitter.line_break = line_break +} + +///* +// * Destroy a token object. +// */ +// +//YAML_DECLARE(void) +//yaml_token_delete(yaml_token_t *token) +//{ +// assert(token); // Non-NULL token object expected. 
+// +// switch (token.type) +// { +// case YAML_TAG_DIRECTIVE_TOKEN: +// yaml_free(token.data.tag_directive.handle); +// yaml_free(token.data.tag_directive.prefix); +// break; +// +// case YAML_ALIAS_TOKEN: +// yaml_free(token.data.alias.value); +// break; +// +// case YAML_ANCHOR_TOKEN: +// yaml_free(token.data.anchor.value); +// break; +// +// case YAML_TAG_TOKEN: +// yaml_free(token.data.tag.handle); +// yaml_free(token.data.tag.suffix); +// break; +// +// case YAML_SCALAR_TOKEN: +// yaml_free(token.data.scalar.value); +// break; +// +// default: +// break; +// } +// +// memset(token, 0, sizeof(yaml_token_t)); +//} +// +///* +// * Check if a string is a valid UTF-8 sequence. +// * +// * Check 'reader.c' for more details on UTF-8 encoding. +// */ +// +//static int +//yaml_check_utf8(yaml_char_t *start, size_t length) +//{ +// yaml_char_t *end = start+length; +// yaml_char_t *pointer = start; +// +// while (pointer < end) { +// unsigned char octet; +// unsigned int width; +// unsigned int value; +// size_t k; +// +// octet = pointer[0]; +// width = (octet & 0x80) == 0x00 ? 1 : +// (octet & 0xE0) == 0xC0 ? 2 : +// (octet & 0xF0) == 0xE0 ? 3 : +// (octet & 0xF8) == 0xF0 ? 4 : 0; +// value = (octet & 0x80) == 0x00 ? octet & 0x7F : +// (octet & 0xE0) == 0xC0 ? octet & 0x1F : +// (octet & 0xF0) == 0xE0 ? octet & 0x0F : +// (octet & 0xF8) == 0xF0 ? octet & 0x07 : 0; +// if (!width) return 0; +// if (pointer+width > end) return 0; +// for (k = 1; k < width; k ++) { +// octet = pointer[k]; +// if ((octet & 0xC0) != 0x80) return 0; +// value = (value << 6) + (octet & 0x3F); +// } +// if (!((width == 1) || +// (width == 2 && value >= 0x80) || +// (width == 3 && value >= 0x800) || +// (width == 4 && value >= 0x10000))) return 0; +// +// pointer += width; +// } +// +// return 1; +//} +// + +// Create STREAM-START. +func yaml_stream_start_event_initialize(event *yaml_event_t, encoding yaml_encoding_t) { + *event = yaml_event_t{ + typ: yaml_STREAM_START_EVENT, + encoding: encoding, + } +} + +// Create STREAM-END. +func yaml_stream_end_event_initialize(event *yaml_event_t) { + *event = yaml_event_t{ + typ: yaml_STREAM_END_EVENT, + } +} + +// Create DOCUMENT-START. +func yaml_document_start_event_initialize( + event *yaml_event_t, + version_directive *yaml_version_directive_t, + tag_directives []yaml_tag_directive_t, + implicit bool, +) { + *event = yaml_event_t{ + typ: yaml_DOCUMENT_START_EVENT, + version_directive: version_directive, + tag_directives: tag_directives, + implicit: implicit, + } +} + +// Create DOCUMENT-END. +func yaml_document_end_event_initialize(event *yaml_event_t, implicit bool) { + *event = yaml_event_t{ + typ: yaml_DOCUMENT_END_EVENT, + implicit: implicit, + } +} + +///* +// * Create ALIAS. +// */ +// +//YAML_DECLARE(int) +//yaml_alias_event_initialize(event *yaml_event_t, anchor *yaml_char_t) +//{ +// mark yaml_mark_t = { 0, 0, 0 } +// anchor_copy *yaml_char_t = NULL +// +// assert(event) // Non-NULL event object is expected. +// assert(anchor) // Non-NULL anchor is expected. +// +// if (!yaml_check_utf8(anchor, strlen((char *)anchor))) return 0 +// +// anchor_copy = yaml_strdup(anchor) +// if (!anchor_copy) +// return 0 +// +// ALIAS_EVENT_INIT(*event, anchor_copy, mark, mark) +// +// return 1 +//} + +// Create SCALAR. 
+func yaml_scalar_event_initialize(event *yaml_event_t, anchor, tag, value []byte, plain_implicit, quoted_implicit bool, style yaml_scalar_style_t) bool { + *event = yaml_event_t{ + typ: yaml_SCALAR_EVENT, + anchor: anchor, + tag: tag, + value: value, + implicit: plain_implicit, + quoted_implicit: quoted_implicit, + style: yaml_style_t(style), + } + return true +} + +// Create SEQUENCE-START. +func yaml_sequence_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_sequence_style_t) bool { + *event = yaml_event_t{ + typ: yaml_SEQUENCE_START_EVENT, + anchor: anchor, + tag: tag, + implicit: implicit, + style: yaml_style_t(style), + } + return true +} + +// Create SEQUENCE-END. +func yaml_sequence_end_event_initialize(event *yaml_event_t) bool { + *event = yaml_event_t{ + typ: yaml_SEQUENCE_END_EVENT, + } + return true +} + +// Create MAPPING-START. +func yaml_mapping_start_event_initialize(event *yaml_event_t, anchor, tag []byte, implicit bool, style yaml_mapping_style_t) { + *event = yaml_event_t{ + typ: yaml_MAPPING_START_EVENT, + anchor: anchor, + tag: tag, + implicit: implicit, + style: yaml_style_t(style), + } +} + +// Create MAPPING-END. +func yaml_mapping_end_event_initialize(event *yaml_event_t) { + *event = yaml_event_t{ + typ: yaml_MAPPING_END_EVENT, + } +} + +// Destroy an event object. +func yaml_event_delete(event *yaml_event_t) { + *event = yaml_event_t{} +} + +///* +// * Create a document object. +// */ +// +//YAML_DECLARE(int) +//yaml_document_initialize(document *yaml_document_t, +// version_directive *yaml_version_directive_t, +// tag_directives_start *yaml_tag_directive_t, +// tag_directives_end *yaml_tag_directive_t, +// start_implicit int, end_implicit int) +//{ +// struct { +// error yaml_error_type_t +// } context +// struct { +// start *yaml_node_t +// end *yaml_node_t +// top *yaml_node_t +// } nodes = { NULL, NULL, NULL } +// version_directive_copy *yaml_version_directive_t = NULL +// struct { +// start *yaml_tag_directive_t +// end *yaml_tag_directive_t +// top *yaml_tag_directive_t +// } tag_directives_copy = { NULL, NULL, NULL } +// value yaml_tag_directive_t = { NULL, NULL } +// mark yaml_mark_t = { 0, 0, 0 } +// +// assert(document) // Non-NULL document object is expected. +// assert((tag_directives_start && tag_directives_end) || +// (tag_directives_start == tag_directives_end)) +// // Valid tag directives are expected. 
+// +// if (!STACK_INIT(&context, nodes, INITIAL_STACK_SIZE)) goto error +// +// if (version_directive) { +// version_directive_copy = yaml_malloc(sizeof(yaml_version_directive_t)) +// if (!version_directive_copy) goto error +// version_directive_copy.major = version_directive.major +// version_directive_copy.minor = version_directive.minor +// } +// +// if (tag_directives_start != tag_directives_end) { +// tag_directive *yaml_tag_directive_t +// if (!STACK_INIT(&context, tag_directives_copy, INITIAL_STACK_SIZE)) +// goto error +// for (tag_directive = tag_directives_start +// tag_directive != tag_directives_end; tag_directive ++) { +// assert(tag_directive.handle) +// assert(tag_directive.prefix) +// if (!yaml_check_utf8(tag_directive.handle, +// strlen((char *)tag_directive.handle))) +// goto error +// if (!yaml_check_utf8(tag_directive.prefix, +// strlen((char *)tag_directive.prefix))) +// goto error +// value.handle = yaml_strdup(tag_directive.handle) +// value.prefix = yaml_strdup(tag_directive.prefix) +// if (!value.handle || !value.prefix) goto error +// if (!PUSH(&context, tag_directives_copy, value)) +// goto error +// value.handle = NULL +// value.prefix = NULL +// } +// } +// +// DOCUMENT_INIT(*document, nodes.start, nodes.end, version_directive_copy, +// tag_directives_copy.start, tag_directives_copy.top, +// start_implicit, end_implicit, mark, mark) +// +// return 1 +// +//error: +// STACK_DEL(&context, nodes) +// yaml_free(version_directive_copy) +// while (!STACK_EMPTY(&context, tag_directives_copy)) { +// value yaml_tag_directive_t = POP(&context, tag_directives_copy) +// yaml_free(value.handle) +// yaml_free(value.prefix) +// } +// STACK_DEL(&context, tag_directives_copy) +// yaml_free(value.handle) +// yaml_free(value.prefix) +// +// return 0 +//} +// +///* +// * Destroy a document object. +// */ +// +//YAML_DECLARE(void) +//yaml_document_delete(document *yaml_document_t) +//{ +// struct { +// error yaml_error_type_t +// } context +// tag_directive *yaml_tag_directive_t +// +// context.error = YAML_NO_ERROR // Eliminate a compiler warning. +// +// assert(document) // Non-NULL document object is expected. +// +// while (!STACK_EMPTY(&context, document.nodes)) { +// node yaml_node_t = POP(&context, document.nodes) +// yaml_free(node.tag) +// switch (node.type) { +// case YAML_SCALAR_NODE: +// yaml_free(node.data.scalar.value) +// break +// case YAML_SEQUENCE_NODE: +// STACK_DEL(&context, node.data.sequence.items) +// break +// case YAML_MAPPING_NODE: +// STACK_DEL(&context, node.data.mapping.pairs) +// break +// default: +// assert(0) // Should not happen. +// } +// } +// STACK_DEL(&context, document.nodes) +// +// yaml_free(document.version_directive) +// for (tag_directive = document.tag_directives.start +// tag_directive != document.tag_directives.end +// tag_directive++) { +// yaml_free(tag_directive.handle) +// yaml_free(tag_directive.prefix) +// } +// yaml_free(document.tag_directives.start) +// +// memset(document, 0, sizeof(yaml_document_t)) +//} +// +///** +// * Get a document node. +// */ +// +//YAML_DECLARE(yaml_node_t *) +//yaml_document_get_node(document *yaml_document_t, index int) +//{ +// assert(document) // Non-NULL document object is expected. +// +// if (index > 0 && document.nodes.start + index <= document.nodes.top) { +// return document.nodes.start + index - 1 +// } +// return NULL +//} +// +///** +// * Get the root object. 
+// */ +// +//YAML_DECLARE(yaml_node_t *) +//yaml_document_get_root_node(document *yaml_document_t) +//{ +// assert(document) // Non-NULL document object is expected. +// +// if (document.nodes.top != document.nodes.start) { +// return document.nodes.start +// } +// return NULL +//} +// +///* +// * Add a scalar node to a document. +// */ +// +//YAML_DECLARE(int) +//yaml_document_add_scalar(document *yaml_document_t, +// tag *yaml_char_t, value *yaml_char_t, length int, +// style yaml_scalar_style_t) +//{ +// struct { +// error yaml_error_type_t +// } context +// mark yaml_mark_t = { 0, 0, 0 } +// tag_copy *yaml_char_t = NULL +// value_copy *yaml_char_t = NULL +// node yaml_node_t +// +// assert(document) // Non-NULL document object is expected. +// assert(value) // Non-NULL value is expected. +// +// if (!tag) { +// tag = (yaml_char_t *)YAML_DEFAULT_SCALAR_TAG +// } +// +// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error +// tag_copy = yaml_strdup(tag) +// if (!tag_copy) goto error +// +// if (length < 0) { +// length = strlen((char *)value) +// } +// +// if (!yaml_check_utf8(value, length)) goto error +// value_copy = yaml_malloc(length+1) +// if (!value_copy) goto error +// memcpy(value_copy, value, length) +// value_copy[length] = '\0' +// +// SCALAR_NODE_INIT(node, tag_copy, value_copy, length, style, mark, mark) +// if (!PUSH(&context, document.nodes, node)) goto error +// +// return document.nodes.top - document.nodes.start +// +//error: +// yaml_free(tag_copy) +// yaml_free(value_copy) +// +// return 0 +//} +// +///* +// * Add a sequence node to a document. +// */ +// +//YAML_DECLARE(int) +//yaml_document_add_sequence(document *yaml_document_t, +// tag *yaml_char_t, style yaml_sequence_style_t) +//{ +// struct { +// error yaml_error_type_t +// } context +// mark yaml_mark_t = { 0, 0, 0 } +// tag_copy *yaml_char_t = NULL +// struct { +// start *yaml_node_item_t +// end *yaml_node_item_t +// top *yaml_node_item_t +// } items = { NULL, NULL, NULL } +// node yaml_node_t +// +// assert(document) // Non-NULL document object is expected. +// +// if (!tag) { +// tag = (yaml_char_t *)YAML_DEFAULT_SEQUENCE_TAG +// } +// +// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error +// tag_copy = yaml_strdup(tag) +// if (!tag_copy) goto error +// +// if (!STACK_INIT(&context, items, INITIAL_STACK_SIZE)) goto error +// +// SEQUENCE_NODE_INIT(node, tag_copy, items.start, items.end, +// style, mark, mark) +// if (!PUSH(&context, document.nodes, node)) goto error +// +// return document.nodes.top - document.nodes.start +// +//error: +// STACK_DEL(&context, items) +// yaml_free(tag_copy) +// +// return 0 +//} +// +///* +// * Add a mapping node to a document. +// */ +// +//YAML_DECLARE(int) +//yaml_document_add_mapping(document *yaml_document_t, +// tag *yaml_char_t, style yaml_mapping_style_t) +//{ +// struct { +// error yaml_error_type_t +// } context +// mark yaml_mark_t = { 0, 0, 0 } +// tag_copy *yaml_char_t = NULL +// struct { +// start *yaml_node_pair_t +// end *yaml_node_pair_t +// top *yaml_node_pair_t +// } pairs = { NULL, NULL, NULL } +// node yaml_node_t +// +// assert(document) // Non-NULL document object is expected. 
+// +// if (!tag) { +// tag = (yaml_char_t *)YAML_DEFAULT_MAPPING_TAG +// } +// +// if (!yaml_check_utf8(tag, strlen((char *)tag))) goto error +// tag_copy = yaml_strdup(tag) +// if (!tag_copy) goto error +// +// if (!STACK_INIT(&context, pairs, INITIAL_STACK_SIZE)) goto error +// +// MAPPING_NODE_INIT(node, tag_copy, pairs.start, pairs.end, +// style, mark, mark) +// if (!PUSH(&context, document.nodes, node)) goto error +// +// return document.nodes.top - document.nodes.start +// +//error: +// STACK_DEL(&context, pairs) +// yaml_free(tag_copy) +// +// return 0 +//} +// +///* +// * Append an item to a sequence node. +// */ +// +//YAML_DECLARE(int) +//yaml_document_append_sequence_item(document *yaml_document_t, +// sequence int, item int) +//{ +// struct { +// error yaml_error_type_t +// } context +// +// assert(document) // Non-NULL document is required. +// assert(sequence > 0 +// && document.nodes.start + sequence <= document.nodes.top) +// // Valid sequence id is required. +// assert(document.nodes.start[sequence-1].type == YAML_SEQUENCE_NODE) +// // A sequence node is required. +// assert(item > 0 && document.nodes.start + item <= document.nodes.top) +// // Valid item id is required. +// +// if (!PUSH(&context, +// document.nodes.start[sequence-1].data.sequence.items, item)) +// return 0 +// +// return 1 +//} +// +///* +// * Append a pair of a key and a value to a mapping node. +// */ +// +//YAML_DECLARE(int) +//yaml_document_append_mapping_pair(document *yaml_document_t, +// mapping int, key int, value int) +//{ +// struct { +// error yaml_error_type_t +// } context +// +// pair yaml_node_pair_t +// +// assert(document) // Non-NULL document is required. +// assert(mapping > 0 +// && document.nodes.start + mapping <= document.nodes.top) +// // Valid mapping id is required. +// assert(document.nodes.start[mapping-1].type == YAML_MAPPING_NODE) +// // A mapping node is required. +// assert(key > 0 && document.nodes.start + key <= document.nodes.top) +// // Valid key id is required. +// assert(value > 0 && document.nodes.start + value <= document.nodes.top) +// // Valid value id is required. +// +// pair.key = key +// pair.value = value +// +// if (!PUSH(&context, +// document.nodes.start[mapping-1].data.mapping.pairs, pair)) +// return 0 +// +// return 1 +//} +// +// diff --git a/vendor/gopkg.in/yaml.v2/decode.go b/vendor/gopkg.in/yaml.v2/decode.go new file mode 100644 index 000000000..e4e56e28e --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/decode.go @@ -0,0 +1,775 @@ +package yaml + +import ( + "encoding" + "encoding/base64" + "fmt" + "io" + "math" + "reflect" + "strconv" + "time" +) + +const ( + documentNode = 1 << iota + mappingNode + sequenceNode + scalarNode + aliasNode +) + +type node struct { + kind int + line, column int + tag string + // For an alias node, alias holds the resolved alias. + alias *node + value string + implicit bool + children []*node + anchors map[string]*node +} + +// ---------------------------------------------------------------------------- +// Parser, produces a node tree out of a libyaml event stream. 
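+//
+// The parser type below drives the low-level yaml_parser_parse event API and
+// assembles the resulting events into *node values (documents, mappings,
+// sequences, scalars and aliases), recording anchors per document so that
+// aliases can be resolved while decoding.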
+ +type parser struct { + parser yaml_parser_t + event yaml_event_t + doc *node + doneInit bool +} + +func newParser(b []byte) *parser { + p := parser{} + if !yaml_parser_initialize(&p.parser) { + panic("failed to initialize YAML emitter") + } + if len(b) == 0 { + b = []byte{'\n'} + } + yaml_parser_set_input_string(&p.parser, b) + return &p +} + +func newParserFromReader(r io.Reader) *parser { + p := parser{} + if !yaml_parser_initialize(&p.parser) { + panic("failed to initialize YAML emitter") + } + yaml_parser_set_input_reader(&p.parser, r) + return &p +} + +func (p *parser) init() { + if p.doneInit { + return + } + p.expect(yaml_STREAM_START_EVENT) + p.doneInit = true +} + +func (p *parser) destroy() { + if p.event.typ != yaml_NO_EVENT { + yaml_event_delete(&p.event) + } + yaml_parser_delete(&p.parser) +} + +// expect consumes an event from the event stream and +// checks that it's of the expected type. +func (p *parser) expect(e yaml_event_type_t) { + if p.event.typ == yaml_NO_EVENT { + if !yaml_parser_parse(&p.parser, &p.event) { + p.fail() + } + } + if p.event.typ == yaml_STREAM_END_EVENT { + failf("attempted to go past the end of stream; corrupted value?") + } + if p.event.typ != e { + p.parser.problem = fmt.Sprintf("expected %s event but got %s", e, p.event.typ) + p.fail() + } + yaml_event_delete(&p.event) + p.event.typ = yaml_NO_EVENT +} + +// peek peeks at the next event in the event stream, +// puts the results into p.event and returns the event type. +func (p *parser) peek() yaml_event_type_t { + if p.event.typ != yaml_NO_EVENT { + return p.event.typ + } + if !yaml_parser_parse(&p.parser, &p.event) { + p.fail() + } + return p.event.typ +} + +func (p *parser) fail() { + var where string + var line int + if p.parser.problem_mark.line != 0 { + line = p.parser.problem_mark.line + // Scanner errors don't iterate line before returning error + if p.parser.error == yaml_SCANNER_ERROR { + line++ + } + } else if p.parser.context_mark.line != 0 { + line = p.parser.context_mark.line + } + if line != 0 { + where = "line " + strconv.Itoa(line) + ": " + } + var msg string + if len(p.parser.problem) > 0 { + msg = p.parser.problem + } else { + msg = "unknown problem parsing YAML content" + } + failf("%s%s", where, msg) +} + +func (p *parser) anchor(n *node, anchor []byte) { + if anchor != nil { + p.doc.anchors[string(anchor)] = n + } +} + +func (p *parser) parse() *node { + p.init() + switch p.peek() { + case yaml_SCALAR_EVENT: + return p.scalar() + case yaml_ALIAS_EVENT: + return p.alias() + case yaml_MAPPING_START_EVENT: + return p.mapping() + case yaml_SEQUENCE_START_EVENT: + return p.sequence() + case yaml_DOCUMENT_START_EVENT: + return p.document() + case yaml_STREAM_END_EVENT: + // Happens when attempting to decode an empty buffer. 
+ return nil + default: + panic("attempted to parse unknown event: " + p.event.typ.String()) + } +} + +func (p *parser) node(kind int) *node { + return &node{ + kind: kind, + line: p.event.start_mark.line, + column: p.event.start_mark.column, + } +} + +func (p *parser) document() *node { + n := p.node(documentNode) + n.anchors = make(map[string]*node) + p.doc = n + p.expect(yaml_DOCUMENT_START_EVENT) + n.children = append(n.children, p.parse()) + p.expect(yaml_DOCUMENT_END_EVENT) + return n +} + +func (p *parser) alias() *node { + n := p.node(aliasNode) + n.value = string(p.event.anchor) + n.alias = p.doc.anchors[n.value] + if n.alias == nil { + failf("unknown anchor '%s' referenced", n.value) + } + p.expect(yaml_ALIAS_EVENT) + return n +} + +func (p *parser) scalar() *node { + n := p.node(scalarNode) + n.value = string(p.event.value) + n.tag = string(p.event.tag) + n.implicit = p.event.implicit + p.anchor(n, p.event.anchor) + p.expect(yaml_SCALAR_EVENT) + return n +} + +func (p *parser) sequence() *node { + n := p.node(sequenceNode) + p.anchor(n, p.event.anchor) + p.expect(yaml_SEQUENCE_START_EVENT) + for p.peek() != yaml_SEQUENCE_END_EVENT { + n.children = append(n.children, p.parse()) + } + p.expect(yaml_SEQUENCE_END_EVENT) + return n +} + +func (p *parser) mapping() *node { + n := p.node(mappingNode) + p.anchor(n, p.event.anchor) + p.expect(yaml_MAPPING_START_EVENT) + for p.peek() != yaml_MAPPING_END_EVENT { + n.children = append(n.children, p.parse(), p.parse()) + } + p.expect(yaml_MAPPING_END_EVENT) + return n +} + +// ---------------------------------------------------------------------------- +// Decoder, unmarshals a node into a provided value. + +type decoder struct { + doc *node + aliases map[*node]bool + mapType reflect.Type + terrors []string + strict bool +} + +var ( + mapItemType = reflect.TypeOf(MapItem{}) + durationType = reflect.TypeOf(time.Duration(0)) + defaultMapType = reflect.TypeOf(map[interface{}]interface{}{}) + ifaceType = defaultMapType.Elem() + timeType = reflect.TypeOf(time.Time{}) + ptrTimeType = reflect.TypeOf(&time.Time{}) +) + +func newDecoder(strict bool) *decoder { + d := &decoder{mapType: defaultMapType, strict: strict} + d.aliases = make(map[*node]bool) + return d +} + +func (d *decoder) terror(n *node, tag string, out reflect.Value) { + if n.tag != "" { + tag = n.tag + } + value := n.value + if tag != yaml_SEQ_TAG && tag != yaml_MAP_TAG { + if len(value) > 10 { + value = " `" + value[:7] + "...`" + } else { + value = " `" + value + "`" + } + } + d.terrors = append(d.terrors, fmt.Sprintf("line %d: cannot unmarshal %s%s into %s", n.line+1, shortTag(tag), value, out.Type())) +} + +func (d *decoder) callUnmarshaler(n *node, u Unmarshaler) (good bool) { + terrlen := len(d.terrors) + err := u.UnmarshalYAML(func(v interface{}) (err error) { + defer handleErr(&err) + d.unmarshal(n, reflect.ValueOf(v)) + if len(d.terrors) > terrlen { + issues := d.terrors[terrlen:] + d.terrors = d.terrors[:terrlen] + return &TypeError{issues} + } + return nil + }) + if e, ok := err.(*TypeError); ok { + d.terrors = append(d.terrors, e.Errors...) + return false + } + if err != nil { + fail(err) + } + return true +} + +// d.prepare initializes and dereferences pointers and calls UnmarshalYAML +// if a value is found to implement it. +// It returns the initialized and dereferenced out value, whether +// unmarshalling was already done by UnmarshalYAML, and if so whether +// its types unmarshalled appropriately. +// +// If n holds a null value, prepare returns before doing anything. 
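+//
+// As a sketch of the hook prepare drives (Celsius is a hypothetical
+// caller-defined type, not part of this package), a custom unmarshaler
+// reached through this path looks like:
+//
+//	type Celsius float64
+//
+//	func (c *Celsius) UnmarshalYAML(unmarshal func(interface{}) error) error {
+//		var f float64
+//		if err := unmarshal(&f); err != nil {
+//			return err
+//		}
+//		*c = Celsius(f)
+//		return nil
+//	}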
+func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unmarshaled, good bool) { + if n.tag == yaml_NULL_TAG || n.kind == scalarNode && n.tag == "" && (n.value == "null" || n.value == "~" || n.value == "" && n.implicit) { + return out, false, false + } + again := true + for again { + again = false + if out.Kind() == reflect.Ptr { + if out.IsNil() { + out.Set(reflect.New(out.Type().Elem())) + } + out = out.Elem() + again = true + } + if out.CanAddr() { + if u, ok := out.Addr().Interface().(Unmarshaler); ok { + good = d.callUnmarshaler(n, u) + return out, true, good + } + } + } + return out, false, false +} + +func (d *decoder) unmarshal(n *node, out reflect.Value) (good bool) { + switch n.kind { + case documentNode: + return d.document(n, out) + case aliasNode: + return d.alias(n, out) + } + out, unmarshaled, good := d.prepare(n, out) + if unmarshaled { + return good + } + switch n.kind { + case scalarNode: + good = d.scalar(n, out) + case mappingNode: + good = d.mapping(n, out) + case sequenceNode: + good = d.sequence(n, out) + default: + panic("internal error: unknown node kind: " + strconv.Itoa(n.kind)) + } + return good +} + +func (d *decoder) document(n *node, out reflect.Value) (good bool) { + if len(n.children) == 1 { + d.doc = n + d.unmarshal(n.children[0], out) + return true + } + return false +} + +func (d *decoder) alias(n *node, out reflect.Value) (good bool) { + if d.aliases[n] { + // TODO this could actually be allowed in some circumstances. + failf("anchor '%s' value contains itself", n.value) + } + d.aliases[n] = true + good = d.unmarshal(n.alias, out) + delete(d.aliases, n) + return good +} + +var zeroValue reflect.Value + +func resetMap(out reflect.Value) { + for _, k := range out.MapKeys() { + out.SetMapIndex(k, zeroValue) + } +} + +func (d *decoder) scalar(n *node, out reflect.Value) bool { + var tag string + var resolved interface{} + if n.tag == "" && !n.implicit { + tag = yaml_STR_TAG + resolved = n.value + } else { + tag, resolved = resolve(n.tag, n.value) + if tag == yaml_BINARY_TAG { + data, err := base64.StdEncoding.DecodeString(resolved.(string)) + if err != nil { + failf("!!binary value contains invalid base64 data") + } + resolved = string(data) + } + } + if resolved == nil { + if out.Kind() == reflect.Map && !out.CanAddr() { + resetMap(out) + } else { + out.Set(reflect.Zero(out.Type())) + } + return true + } + if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() { + // We've resolved to exactly the type we want, so use that. + out.Set(resolvedv) + return true + } + // Perhaps we can use the value as a TextUnmarshaler to + // set its value. + if out.CanAddr() { + u, ok := out.Addr().Interface().(encoding.TextUnmarshaler) + if ok { + var text []byte + if tag == yaml_BINARY_TAG { + text = []byte(resolved.(string)) + } else { + // We let any value be unmarshaled into TextUnmarshaler. + // That might be more lax than we'd like, but the + // TextUnmarshaler itself should bowl out any dubious values. 
+ text = []byte(n.value) + } + err := u.UnmarshalText(text) + if err != nil { + fail(err) + } + return true + } + } + switch out.Kind() { + case reflect.String: + if tag == yaml_BINARY_TAG { + out.SetString(resolved.(string)) + return true + } + if resolved != nil { + out.SetString(n.value) + return true + } + case reflect.Interface: + if resolved == nil { + out.Set(reflect.Zero(out.Type())) + } else if tag == yaml_TIMESTAMP_TAG { + // It looks like a timestamp but for backward compatibility + // reasons we set it as a string, so that code that unmarshals + // timestamp-like values into interface{} will continue to + // see a string and not a time.Time. + // TODO(v3) Drop this. + out.Set(reflect.ValueOf(n.value)) + } else { + out.Set(reflect.ValueOf(resolved)) + } + return true + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + switch resolved := resolved.(type) { + case int: + if !out.OverflowInt(int64(resolved)) { + out.SetInt(int64(resolved)) + return true + } + case int64: + if !out.OverflowInt(resolved) { + out.SetInt(resolved) + return true + } + case uint64: + if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) { + out.SetInt(int64(resolved)) + return true + } + case float64: + if resolved <= math.MaxInt64 && !out.OverflowInt(int64(resolved)) { + out.SetInt(int64(resolved)) + return true + } + case string: + if out.Type() == durationType { + d, err := time.ParseDuration(resolved) + if err == nil { + out.SetInt(int64(d)) + return true + } + } + } + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + switch resolved := resolved.(type) { + case int: + if resolved >= 0 && !out.OverflowUint(uint64(resolved)) { + out.SetUint(uint64(resolved)) + return true + } + case int64: + if resolved >= 0 && !out.OverflowUint(uint64(resolved)) { + out.SetUint(uint64(resolved)) + return true + } + case uint64: + if !out.OverflowUint(uint64(resolved)) { + out.SetUint(uint64(resolved)) + return true + } + case float64: + if resolved <= math.MaxUint64 && !out.OverflowUint(uint64(resolved)) { + out.SetUint(uint64(resolved)) + return true + } + } + case reflect.Bool: + switch resolved := resolved.(type) { + case bool: + out.SetBool(resolved) + return true + } + case reflect.Float32, reflect.Float64: + switch resolved := resolved.(type) { + case int: + out.SetFloat(float64(resolved)) + return true + case int64: + out.SetFloat(float64(resolved)) + return true + case uint64: + out.SetFloat(float64(resolved)) + return true + case float64: + out.SetFloat(resolved) + return true + } + case reflect.Struct: + if resolvedv := reflect.ValueOf(resolved); out.Type() == resolvedv.Type() { + out.Set(resolvedv) + return true + } + case reflect.Ptr: + if out.Type().Elem() == reflect.TypeOf(resolved) { + // TODO DOes this make sense? When is out a Ptr except when decoding a nil value? 
+ elem := reflect.New(out.Type().Elem()) + elem.Elem().Set(reflect.ValueOf(resolved)) + out.Set(elem) + return true + } + } + d.terror(n, tag, out) + return false +} + +func settableValueOf(i interface{}) reflect.Value { + v := reflect.ValueOf(i) + sv := reflect.New(v.Type()).Elem() + sv.Set(v) + return sv +} + +func (d *decoder) sequence(n *node, out reflect.Value) (good bool) { + l := len(n.children) + + var iface reflect.Value + switch out.Kind() { + case reflect.Slice: + out.Set(reflect.MakeSlice(out.Type(), l, l)) + case reflect.Array: + if l != out.Len() { + failf("invalid array: want %d elements but got %d", out.Len(), l) + } + case reflect.Interface: + // No type hints. Will have to use a generic sequence. + iface = out + out = settableValueOf(make([]interface{}, l)) + default: + d.terror(n, yaml_SEQ_TAG, out) + return false + } + et := out.Type().Elem() + + j := 0 + for i := 0; i < l; i++ { + e := reflect.New(et).Elem() + if ok := d.unmarshal(n.children[i], e); ok { + out.Index(j).Set(e) + j++ + } + } + if out.Kind() != reflect.Array { + out.Set(out.Slice(0, j)) + } + if iface.IsValid() { + iface.Set(out) + } + return true +} + +func (d *decoder) mapping(n *node, out reflect.Value) (good bool) { + switch out.Kind() { + case reflect.Struct: + return d.mappingStruct(n, out) + case reflect.Slice: + return d.mappingSlice(n, out) + case reflect.Map: + // okay + case reflect.Interface: + if d.mapType.Kind() == reflect.Map { + iface := out + out = reflect.MakeMap(d.mapType) + iface.Set(out) + } else { + slicev := reflect.New(d.mapType).Elem() + if !d.mappingSlice(n, slicev) { + return false + } + out.Set(slicev) + return true + } + default: + d.terror(n, yaml_MAP_TAG, out) + return false + } + outt := out.Type() + kt := outt.Key() + et := outt.Elem() + + mapType := d.mapType + if outt.Key() == ifaceType && outt.Elem() == ifaceType { + d.mapType = outt + } + + if out.IsNil() { + out.Set(reflect.MakeMap(outt)) + } + l := len(n.children) + for i := 0; i < l; i += 2 { + if isMerge(n.children[i]) { + d.merge(n.children[i+1], out) + continue + } + k := reflect.New(kt).Elem() + if d.unmarshal(n.children[i], k) { + kkind := k.Kind() + if kkind == reflect.Interface { + kkind = k.Elem().Kind() + } + if kkind == reflect.Map || kkind == reflect.Slice { + failf("invalid map key: %#v", k.Interface()) + } + e := reflect.New(et).Elem() + if d.unmarshal(n.children[i+1], e) { + d.setMapIndex(n.children[i+1], out, k, e) + } + } + } + d.mapType = mapType + return true +} + +func (d *decoder) setMapIndex(n *node, out, k, v reflect.Value) { + if d.strict && out.MapIndex(k) != zeroValue { + d.terrors = append(d.terrors, fmt.Sprintf("line %d: key %#v already set in map", n.line+1, k.Interface())) + return + } + out.SetMapIndex(k, v) +} + +func (d *decoder) mappingSlice(n *node, out reflect.Value) (good bool) { + outt := out.Type() + if outt.Elem() != mapItemType { + d.terror(n, yaml_MAP_TAG, out) + return false + } + + mapType := d.mapType + d.mapType = outt + + var slice []MapItem + var l = len(n.children) + for i := 0; i < l; i += 2 { + if isMerge(n.children[i]) { + d.merge(n.children[i+1], out) + continue + } + item := MapItem{} + k := reflect.ValueOf(&item.Key).Elem() + if d.unmarshal(n.children[i], k) { + v := reflect.ValueOf(&item.Value).Elem() + if d.unmarshal(n.children[i+1], v) { + slice = append(slice, item) + } + } + } + out.Set(reflect.ValueOf(slice)) + d.mapType = mapType + return true +} + +func (d *decoder) mappingStruct(n *node, out reflect.Value) (good bool) { + sinfo, err := 
getStructInfo(out.Type()) + if err != nil { + panic(err) + } + name := settableValueOf("") + l := len(n.children) + + var inlineMap reflect.Value + var elemType reflect.Type + if sinfo.InlineMap != -1 { + inlineMap = out.Field(sinfo.InlineMap) + inlineMap.Set(reflect.New(inlineMap.Type()).Elem()) + elemType = inlineMap.Type().Elem() + } + + var doneFields []bool + if d.strict { + doneFields = make([]bool, len(sinfo.FieldsList)) + } + for i := 0; i < l; i += 2 { + ni := n.children[i] + if isMerge(ni) { + d.merge(n.children[i+1], out) + continue + } + if !d.unmarshal(ni, name) { + continue + } + if info, ok := sinfo.FieldsMap[name.String()]; ok { + if d.strict { + if doneFields[info.Id] { + d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s already set in type %s", ni.line+1, name.String(), out.Type())) + continue + } + doneFields[info.Id] = true + } + var field reflect.Value + if info.Inline == nil { + field = out.Field(info.Num) + } else { + field = out.FieldByIndex(info.Inline) + } + d.unmarshal(n.children[i+1], field) + } else if sinfo.InlineMap != -1 { + if inlineMap.IsNil() { + inlineMap.Set(reflect.MakeMap(inlineMap.Type())) + } + value := reflect.New(elemType).Elem() + d.unmarshal(n.children[i+1], value) + d.setMapIndex(n.children[i+1], inlineMap, name, value) + } else if d.strict { + d.terrors = append(d.terrors, fmt.Sprintf("line %d: field %s not found in type %s", ni.line+1, name.String(), out.Type())) + } + } + return true +} + +func failWantMap() { + failf("map merge requires map or sequence of maps as the value") +} + +func (d *decoder) merge(n *node, out reflect.Value) { + switch n.kind { + case mappingNode: + d.unmarshal(n, out) + case aliasNode: + an, ok := d.doc.anchors[n.value] + if ok && an.kind != mappingNode { + failWantMap() + } + d.unmarshal(n, out) + case sequenceNode: + // Step backwards as earlier nodes take precedence. + for i := len(n.children) - 1; i >= 0; i-- { + ni := n.children[i] + if ni.kind == aliasNode { + an, ok := d.doc.anchors[ni.value] + if ok && an.kind != mappingNode { + failWantMap() + } + } else if ni.kind != mappingNode { + failWantMap() + } + d.unmarshal(ni, out) + } + default: + failWantMap() + } +} + +func isMerge(n *node) bool { + return n.kind == scalarNode && n.value == "<<" && (n.implicit == true || n.tag == yaml_MERGE_TAG) +} diff --git a/vendor/gopkg.in/yaml.v2/emitterc.go b/vendor/gopkg.in/yaml.v2/emitterc.go new file mode 100644 index 000000000..a1c2cc526 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/emitterc.go @@ -0,0 +1,1685 @@ +package yaml + +import ( + "bytes" + "fmt" +) + +// Flush the buffer if needed. +func flush(emitter *yaml_emitter_t) bool { + if emitter.buffer_pos+5 >= len(emitter.buffer) { + return yaml_emitter_flush(emitter) + } + return true +} + +// Put a character to the output buffer. +func put(emitter *yaml_emitter_t, value byte) bool { + if emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) { + return false + } + emitter.buffer[emitter.buffer_pos] = value + emitter.buffer_pos++ + emitter.column++ + return true +} + +// Put a line break to the output buffer. 
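+// The byte sequence written depends on emitter.line_break: CR, LF, or CR LF.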
+func put_break(emitter *yaml_emitter_t) bool { + if emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) { + return false + } + switch emitter.line_break { + case yaml_CR_BREAK: + emitter.buffer[emitter.buffer_pos] = '\r' + emitter.buffer_pos += 1 + case yaml_LN_BREAK: + emitter.buffer[emitter.buffer_pos] = '\n' + emitter.buffer_pos += 1 + case yaml_CRLN_BREAK: + emitter.buffer[emitter.buffer_pos+0] = '\r' + emitter.buffer[emitter.buffer_pos+1] = '\n' + emitter.buffer_pos += 2 + default: + panic("unknown line break setting") + } + emitter.column = 0 + emitter.line++ + return true +} + +// Copy a character from a string into buffer. +func write(emitter *yaml_emitter_t, s []byte, i *int) bool { + if emitter.buffer_pos+5 >= len(emitter.buffer) && !yaml_emitter_flush(emitter) { + return false + } + p := emitter.buffer_pos + w := width(s[*i]) + switch w { + case 4: + emitter.buffer[p+3] = s[*i+3] + fallthrough + case 3: + emitter.buffer[p+2] = s[*i+2] + fallthrough + case 2: + emitter.buffer[p+1] = s[*i+1] + fallthrough + case 1: + emitter.buffer[p+0] = s[*i+0] + default: + panic("unknown character width") + } + emitter.column++ + emitter.buffer_pos += w + *i += w + return true +} + +// Write a whole string into buffer. +func write_all(emitter *yaml_emitter_t, s []byte) bool { + for i := 0; i < len(s); { + if !write(emitter, s, &i) { + return false + } + } + return true +} + +// Copy a line break character from a string into buffer. +func write_break(emitter *yaml_emitter_t, s []byte, i *int) bool { + if s[*i] == '\n' { + if !put_break(emitter) { + return false + } + *i++ + } else { + if !write(emitter, s, i) { + return false + } + emitter.column = 0 + emitter.line++ + } + return true +} + +// Set an emitter error and return false. +func yaml_emitter_set_emitter_error(emitter *yaml_emitter_t, problem string) bool { + emitter.error = yaml_EMITTER_ERROR + emitter.problem = problem + return false +} + +// Emit an event. +func yaml_emitter_emit(emitter *yaml_emitter_t, event *yaml_event_t) bool { + emitter.events = append(emitter.events, *event) + for !yaml_emitter_need_more_events(emitter) { + event := &emitter.events[emitter.events_head] + if !yaml_emitter_analyze_event(emitter, event) { + return false + } + if !yaml_emitter_state_machine(emitter, event) { + return false + } + yaml_event_delete(event) + emitter.events_head++ + } + return true +} + +// Check if we need to accumulate more events before emitting. 
+// +// We accumulate extra +// - 1 event for DOCUMENT-START +// - 2 events for SEQUENCE-START +// - 3 events for MAPPING-START +// +func yaml_emitter_need_more_events(emitter *yaml_emitter_t) bool { + if emitter.events_head == len(emitter.events) { + return true + } + var accumulate int + switch emitter.events[emitter.events_head].typ { + case yaml_DOCUMENT_START_EVENT: + accumulate = 1 + break + case yaml_SEQUENCE_START_EVENT: + accumulate = 2 + break + case yaml_MAPPING_START_EVENT: + accumulate = 3 + break + default: + return false + } + if len(emitter.events)-emitter.events_head > accumulate { + return false + } + var level int + for i := emitter.events_head; i < len(emitter.events); i++ { + switch emitter.events[i].typ { + case yaml_STREAM_START_EVENT, yaml_DOCUMENT_START_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT: + level++ + case yaml_STREAM_END_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_END_EVENT, yaml_MAPPING_END_EVENT: + level-- + } + if level == 0 { + return false + } + } + return true +} + +// Append a directive to the directives stack. +func yaml_emitter_append_tag_directive(emitter *yaml_emitter_t, value *yaml_tag_directive_t, allow_duplicates bool) bool { + for i := 0; i < len(emitter.tag_directives); i++ { + if bytes.Equal(value.handle, emitter.tag_directives[i].handle) { + if allow_duplicates { + return true + } + return yaml_emitter_set_emitter_error(emitter, "duplicate %TAG directive") + } + } + + // [Go] Do we actually need to copy this given garbage collection + // and the lack of deallocating destructors? + tag_copy := yaml_tag_directive_t{ + handle: make([]byte, len(value.handle)), + prefix: make([]byte, len(value.prefix)), + } + copy(tag_copy.handle, value.handle) + copy(tag_copy.prefix, value.prefix) + emitter.tag_directives = append(emitter.tag_directives, tag_copy) + return true +} + +// Increase the indentation level. +func yaml_emitter_increase_indent(emitter *yaml_emitter_t, flow, indentless bool) bool { + emitter.indents = append(emitter.indents, emitter.indent) + if emitter.indent < 0 { + if flow { + emitter.indent = emitter.best_indent + } else { + emitter.indent = 0 + } + } else if !indentless { + emitter.indent += emitter.best_indent + } + return true +} + +// State dispatcher. 
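+// Each yaml_EMIT_* state maps to one of the emit functions below; the chosen
+// function consumes the current event and advances emitter.state.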
+func yaml_emitter_state_machine(emitter *yaml_emitter_t, event *yaml_event_t) bool { + switch emitter.state { + default: + case yaml_EMIT_STREAM_START_STATE: + return yaml_emitter_emit_stream_start(emitter, event) + + case yaml_EMIT_FIRST_DOCUMENT_START_STATE: + return yaml_emitter_emit_document_start(emitter, event, true) + + case yaml_EMIT_DOCUMENT_START_STATE: + return yaml_emitter_emit_document_start(emitter, event, false) + + case yaml_EMIT_DOCUMENT_CONTENT_STATE: + return yaml_emitter_emit_document_content(emitter, event) + + case yaml_EMIT_DOCUMENT_END_STATE: + return yaml_emitter_emit_document_end(emitter, event) + + case yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE: + return yaml_emitter_emit_flow_sequence_item(emitter, event, true) + + case yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE: + return yaml_emitter_emit_flow_sequence_item(emitter, event, false) + + case yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE: + return yaml_emitter_emit_flow_mapping_key(emitter, event, true) + + case yaml_EMIT_FLOW_MAPPING_KEY_STATE: + return yaml_emitter_emit_flow_mapping_key(emitter, event, false) + + case yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE: + return yaml_emitter_emit_flow_mapping_value(emitter, event, true) + + case yaml_EMIT_FLOW_MAPPING_VALUE_STATE: + return yaml_emitter_emit_flow_mapping_value(emitter, event, false) + + case yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE: + return yaml_emitter_emit_block_sequence_item(emitter, event, true) + + case yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE: + return yaml_emitter_emit_block_sequence_item(emitter, event, false) + + case yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE: + return yaml_emitter_emit_block_mapping_key(emitter, event, true) + + case yaml_EMIT_BLOCK_MAPPING_KEY_STATE: + return yaml_emitter_emit_block_mapping_key(emitter, event, false) + + case yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE: + return yaml_emitter_emit_block_mapping_value(emitter, event, true) + + case yaml_EMIT_BLOCK_MAPPING_VALUE_STATE: + return yaml_emitter_emit_block_mapping_value(emitter, event, false) + + case yaml_EMIT_END_STATE: + return yaml_emitter_set_emitter_error(emitter, "expected nothing after STREAM-END") + } + panic("invalid emitter state") +} + +// Expect STREAM-START. +func yaml_emitter_emit_stream_start(emitter *yaml_emitter_t, event *yaml_event_t) bool { + if event.typ != yaml_STREAM_START_EVENT { + return yaml_emitter_set_emitter_error(emitter, "expected STREAM-START") + } + if emitter.encoding == yaml_ANY_ENCODING { + emitter.encoding = event.encoding + if emitter.encoding == yaml_ANY_ENCODING { + emitter.encoding = yaml_UTF8_ENCODING + } + } + if emitter.best_indent < 2 || emitter.best_indent > 9 { + emitter.best_indent = 2 + } + if emitter.best_width >= 0 && emitter.best_width <= emitter.best_indent*2 { + emitter.best_width = 80 + } + if emitter.best_width < 0 { + emitter.best_width = 1<<31 - 1 + } + if emitter.line_break == yaml_ANY_BREAK { + emitter.line_break = yaml_LN_BREAK + } + + emitter.indent = -1 + emitter.line = 0 + emitter.column = 0 + emitter.whitespace = true + emitter.indention = true + + if emitter.encoding != yaml_UTF8_ENCODING { + if !yaml_emitter_write_bom(emitter) { + return false + } + } + emitter.state = yaml_EMIT_FIRST_DOCUMENT_START_STATE + return true +} + +// Expect DOCUMENT-START or STREAM-END. 
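+// A DOCUMENT-START event writes any %YAML/%TAG directives and, unless the
+// document is implicit, a "---" marker before the content; a STREAM-END event
+// flushes the buffer and moves the emitter into its end state.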
+func yaml_emitter_emit_document_start(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool { + + if event.typ == yaml_DOCUMENT_START_EVENT { + + if event.version_directive != nil { + if !yaml_emitter_analyze_version_directive(emitter, event.version_directive) { + return false + } + } + + for i := 0; i < len(event.tag_directives); i++ { + tag_directive := &event.tag_directives[i] + if !yaml_emitter_analyze_tag_directive(emitter, tag_directive) { + return false + } + if !yaml_emitter_append_tag_directive(emitter, tag_directive, false) { + return false + } + } + + for i := 0; i < len(default_tag_directives); i++ { + tag_directive := &default_tag_directives[i] + if !yaml_emitter_append_tag_directive(emitter, tag_directive, true) { + return false + } + } + + implicit := event.implicit + if !first || emitter.canonical { + implicit = false + } + + if emitter.open_ended && (event.version_directive != nil || len(event.tag_directives) > 0) { + if !yaml_emitter_write_indicator(emitter, []byte("..."), true, false, false) { + return false + } + if !yaml_emitter_write_indent(emitter) { + return false + } + } + + if event.version_directive != nil { + implicit = false + if !yaml_emitter_write_indicator(emitter, []byte("%YAML"), true, false, false) { + return false + } + if !yaml_emitter_write_indicator(emitter, []byte("1.1"), true, false, false) { + return false + } + if !yaml_emitter_write_indent(emitter) { + return false + } + } + + if len(event.tag_directives) > 0 { + implicit = false + for i := 0; i < len(event.tag_directives); i++ { + tag_directive := &event.tag_directives[i] + if !yaml_emitter_write_indicator(emitter, []byte("%TAG"), true, false, false) { + return false + } + if !yaml_emitter_write_tag_handle(emitter, tag_directive.handle) { + return false + } + if !yaml_emitter_write_tag_content(emitter, tag_directive.prefix, true) { + return false + } + if !yaml_emitter_write_indent(emitter) { + return false + } + } + } + + if yaml_emitter_check_empty_document(emitter) { + implicit = false + } + if !implicit { + if !yaml_emitter_write_indent(emitter) { + return false + } + if !yaml_emitter_write_indicator(emitter, []byte("---"), true, false, false) { + return false + } + if emitter.canonical { + if !yaml_emitter_write_indent(emitter) { + return false + } + } + } + + emitter.state = yaml_EMIT_DOCUMENT_CONTENT_STATE + return true + } + + if event.typ == yaml_STREAM_END_EVENT { + if emitter.open_ended { + if !yaml_emitter_write_indicator(emitter, []byte("..."), true, false, false) { + return false + } + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if !yaml_emitter_flush(emitter) { + return false + } + emitter.state = yaml_EMIT_END_STATE + return true + } + + return yaml_emitter_set_emitter_error(emitter, "expected DOCUMENT-START or STREAM-END") +} + +// Expect the root node. +func yaml_emitter_emit_document_content(emitter *yaml_emitter_t, event *yaml_event_t) bool { + emitter.states = append(emitter.states, yaml_EMIT_DOCUMENT_END_STATE) + return yaml_emitter_emit_node(emitter, event, true, false, false, false) +} + +// Expect DOCUMENT-END. +func yaml_emitter_emit_document_end(emitter *yaml_emitter_t, event *yaml_event_t) bool { + if event.typ != yaml_DOCUMENT_END_EVENT { + return yaml_emitter_set_emitter_error(emitter, "expected DOCUMENT-END") + } + if !yaml_emitter_write_indent(emitter) { + return false + } + if !event.implicit { + // [Go] Allocate the slice elsewhere. 
+ if !yaml_emitter_write_indicator(emitter, []byte("..."), true, false, false) { + return false + } + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if !yaml_emitter_flush(emitter) { + return false + } + emitter.state = yaml_EMIT_DOCUMENT_START_STATE + emitter.tag_directives = emitter.tag_directives[:0] + return true +} + +// Expect a flow item node. +func yaml_emitter_emit_flow_sequence_item(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool { + if first { + if !yaml_emitter_write_indicator(emitter, []byte{'['}, true, true, false) { + return false + } + if !yaml_emitter_increase_indent(emitter, true, false) { + return false + } + emitter.flow_level++ + } + + if event.typ == yaml_SEQUENCE_END_EVENT { + emitter.flow_level-- + emitter.indent = emitter.indents[len(emitter.indents)-1] + emitter.indents = emitter.indents[:len(emitter.indents)-1] + if emitter.canonical && !first { + if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) { + return false + } + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if !yaml_emitter_write_indicator(emitter, []byte{']'}, false, false, false) { + return false + } + emitter.state = emitter.states[len(emitter.states)-1] + emitter.states = emitter.states[:len(emitter.states)-1] + + return true + } + + if !first { + if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) { + return false + } + } + + if emitter.canonical || emitter.column > emitter.best_width { + if !yaml_emitter_write_indent(emitter) { + return false + } + } + emitter.states = append(emitter.states, yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE) + return yaml_emitter_emit_node(emitter, event, false, true, false, false) +} + +// Expect a flow key node. +func yaml_emitter_emit_flow_mapping_key(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool { + if first { + if !yaml_emitter_write_indicator(emitter, []byte{'{'}, true, true, false) { + return false + } + if !yaml_emitter_increase_indent(emitter, true, false) { + return false + } + emitter.flow_level++ + } + + if event.typ == yaml_MAPPING_END_EVENT { + emitter.flow_level-- + emitter.indent = emitter.indents[len(emitter.indents)-1] + emitter.indents = emitter.indents[:len(emitter.indents)-1] + if emitter.canonical && !first { + if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) { + return false + } + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if !yaml_emitter_write_indicator(emitter, []byte{'}'}, false, false, false) { + return false + } + emitter.state = emitter.states[len(emitter.states)-1] + emitter.states = emitter.states[:len(emitter.states)-1] + return true + } + + if !first { + if !yaml_emitter_write_indicator(emitter, []byte{','}, false, false, false) { + return false + } + } + if emitter.canonical || emitter.column > emitter.best_width { + if !yaml_emitter_write_indent(emitter) { + return false + } + } + + if !emitter.canonical && yaml_emitter_check_simple_key(emitter) { + emitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE) + return yaml_emitter_emit_node(emitter, event, false, false, true, true) + } + if !yaml_emitter_write_indicator(emitter, []byte{'?'}, true, false, false) { + return false + } + emitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_VALUE_STATE) + return yaml_emitter_emit_node(emitter, event, false, false, true, false) +} + +// Expect a flow value node. 
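+// The simple flag mirrors the choice made in
+// yaml_emitter_emit_flow_mapping_key: for a simple key the ":" follows the
+// key directly ("key: value"), while a key written with the explicit "?"
+// indicator gets a standalone ":" indicator, wrapped onto a new line when
+// the output is canonical or already past best_width.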
+func yaml_emitter_emit_flow_mapping_value(emitter *yaml_emitter_t, event *yaml_event_t, simple bool) bool { + if simple { + if !yaml_emitter_write_indicator(emitter, []byte{':'}, false, false, false) { + return false + } + } else { + if emitter.canonical || emitter.column > emitter.best_width { + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if !yaml_emitter_write_indicator(emitter, []byte{':'}, true, false, false) { + return false + } + } + emitter.states = append(emitter.states, yaml_EMIT_FLOW_MAPPING_KEY_STATE) + return yaml_emitter_emit_node(emitter, event, false, false, true, false) +} + +// Expect a block item node. +func yaml_emitter_emit_block_sequence_item(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool { + if first { + if !yaml_emitter_increase_indent(emitter, false, emitter.mapping_context && !emitter.indention) { + return false + } + } + if event.typ == yaml_SEQUENCE_END_EVENT { + emitter.indent = emitter.indents[len(emitter.indents)-1] + emitter.indents = emitter.indents[:len(emitter.indents)-1] + emitter.state = emitter.states[len(emitter.states)-1] + emitter.states = emitter.states[:len(emitter.states)-1] + return true + } + if !yaml_emitter_write_indent(emitter) { + return false + } + if !yaml_emitter_write_indicator(emitter, []byte{'-'}, true, false, true) { + return false + } + emitter.states = append(emitter.states, yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE) + return yaml_emitter_emit_node(emitter, event, false, true, false, false) +} + +// Expect a block key node. +func yaml_emitter_emit_block_mapping_key(emitter *yaml_emitter_t, event *yaml_event_t, first bool) bool { + if first { + if !yaml_emitter_increase_indent(emitter, false, false) { + return false + } + } + if event.typ == yaml_MAPPING_END_EVENT { + emitter.indent = emitter.indents[len(emitter.indents)-1] + emitter.indents = emitter.indents[:len(emitter.indents)-1] + emitter.state = emitter.states[len(emitter.states)-1] + emitter.states = emitter.states[:len(emitter.states)-1] + return true + } + if !yaml_emitter_write_indent(emitter) { + return false + } + if yaml_emitter_check_simple_key(emitter) { + emitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE) + return yaml_emitter_emit_node(emitter, event, false, false, true, true) + } + if !yaml_emitter_write_indicator(emitter, []byte{'?'}, true, false, true) { + return false + } + emitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_VALUE_STATE) + return yaml_emitter_emit_node(emitter, event, false, false, true, false) +} + +// Expect a block value node. +func yaml_emitter_emit_block_mapping_value(emitter *yaml_emitter_t, event *yaml_event_t, simple bool) bool { + if simple { + if !yaml_emitter_write_indicator(emitter, []byte{':'}, false, false, false) { + return false + } + } else { + if !yaml_emitter_write_indent(emitter) { + return false + } + if !yaml_emitter_write_indicator(emitter, []byte{':'}, true, false, true) { + return false + } + } + emitter.states = append(emitter.states, yaml_EMIT_BLOCK_MAPPING_KEY_STATE) + return yaml_emitter_emit_node(emitter, event, false, false, true, false) +} + +// Expect a node. 
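+// This records the current context (root document, sequence item, mapping,
+// simple key) and then dispatches on the event type. The context flags are
+// consulted later, e.g. by yaml_emitter_select_scalar_style and
+// yaml_emitter_write_plain_scalar.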
+func yaml_emitter_emit_node(emitter *yaml_emitter_t, event *yaml_event_t, + root bool, sequence bool, mapping bool, simple_key bool) bool { + + emitter.root_context = root + emitter.sequence_context = sequence + emitter.mapping_context = mapping + emitter.simple_key_context = simple_key + + switch event.typ { + case yaml_ALIAS_EVENT: + return yaml_emitter_emit_alias(emitter, event) + case yaml_SCALAR_EVENT: + return yaml_emitter_emit_scalar(emitter, event) + case yaml_SEQUENCE_START_EVENT: + return yaml_emitter_emit_sequence_start(emitter, event) + case yaml_MAPPING_START_EVENT: + return yaml_emitter_emit_mapping_start(emitter, event) + default: + return yaml_emitter_set_emitter_error(emitter, + fmt.Sprintf("expected SCALAR, SEQUENCE-START, MAPPING-START, or ALIAS, but got %v", event.typ)) + } +} + +// Expect ALIAS. +func yaml_emitter_emit_alias(emitter *yaml_emitter_t, event *yaml_event_t) bool { + if !yaml_emitter_process_anchor(emitter) { + return false + } + emitter.state = emitter.states[len(emitter.states)-1] + emitter.states = emitter.states[:len(emitter.states)-1] + return true +} + +// Expect SCALAR. +func yaml_emitter_emit_scalar(emitter *yaml_emitter_t, event *yaml_event_t) bool { + if !yaml_emitter_select_scalar_style(emitter, event) { + return false + } + if !yaml_emitter_process_anchor(emitter) { + return false + } + if !yaml_emitter_process_tag(emitter) { + return false + } + if !yaml_emitter_increase_indent(emitter, true, false) { + return false + } + if !yaml_emitter_process_scalar(emitter) { + return false + } + emitter.indent = emitter.indents[len(emitter.indents)-1] + emitter.indents = emitter.indents[:len(emitter.indents)-1] + emitter.state = emitter.states[len(emitter.states)-1] + emitter.states = emitter.states[:len(emitter.states)-1] + return true +} + +// Expect SEQUENCE-START. +func yaml_emitter_emit_sequence_start(emitter *yaml_emitter_t, event *yaml_event_t) bool { + if !yaml_emitter_process_anchor(emitter) { + return false + } + if !yaml_emitter_process_tag(emitter) { + return false + } + if emitter.flow_level > 0 || emitter.canonical || event.sequence_style() == yaml_FLOW_SEQUENCE_STYLE || + yaml_emitter_check_empty_sequence(emitter) { + emitter.state = yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE + } else { + emitter.state = yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE + } + return true +} + +// Expect MAPPING-START. +func yaml_emitter_emit_mapping_start(emitter *yaml_emitter_t, event *yaml_event_t) bool { + if !yaml_emitter_process_anchor(emitter) { + return false + } + if !yaml_emitter_process_tag(emitter) { + return false + } + if emitter.flow_level > 0 || emitter.canonical || event.mapping_style() == yaml_FLOW_MAPPING_STYLE || + yaml_emitter_check_empty_mapping(emitter) { + emitter.state = yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE + } else { + emitter.state = yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE + } + return true +} + +// Check if the document content is an empty scalar. +func yaml_emitter_check_empty_document(emitter *yaml_emitter_t) bool { + return false // [Go] Huh? +} + +// Check if the next events represent an empty sequence. +func yaml_emitter_check_empty_sequence(emitter *yaml_emitter_t) bool { + if len(emitter.events)-emitter.events_head < 2 { + return false + } + return emitter.events[emitter.events_head].typ == yaml_SEQUENCE_START_EVENT && + emitter.events[emitter.events_head+1].typ == yaml_SEQUENCE_END_EVENT +} + +// Check if the next events represent an empty mapping. 
+func yaml_emitter_check_empty_mapping(emitter *yaml_emitter_t) bool { + if len(emitter.events)-emitter.events_head < 2 { + return false + } + return emitter.events[emitter.events_head].typ == yaml_MAPPING_START_EVENT && + emitter.events[emitter.events_head+1].typ == yaml_MAPPING_END_EVENT +} + +// Check if the next node can be expressed as a simple key. +func yaml_emitter_check_simple_key(emitter *yaml_emitter_t) bool { + length := 0 + switch emitter.events[emitter.events_head].typ { + case yaml_ALIAS_EVENT: + length += len(emitter.anchor_data.anchor) + case yaml_SCALAR_EVENT: + if emitter.scalar_data.multiline { + return false + } + length += len(emitter.anchor_data.anchor) + + len(emitter.tag_data.handle) + + len(emitter.tag_data.suffix) + + len(emitter.scalar_data.value) + case yaml_SEQUENCE_START_EVENT: + if !yaml_emitter_check_empty_sequence(emitter) { + return false + } + length += len(emitter.anchor_data.anchor) + + len(emitter.tag_data.handle) + + len(emitter.tag_data.suffix) + case yaml_MAPPING_START_EVENT: + if !yaml_emitter_check_empty_mapping(emitter) { + return false + } + length += len(emitter.anchor_data.anchor) + + len(emitter.tag_data.handle) + + len(emitter.tag_data.suffix) + default: + return false + } + return length <= 128 +} + +// Determine an acceptable scalar style. +func yaml_emitter_select_scalar_style(emitter *yaml_emitter_t, event *yaml_event_t) bool { + + no_tag := len(emitter.tag_data.handle) == 0 && len(emitter.tag_data.suffix) == 0 + if no_tag && !event.implicit && !event.quoted_implicit { + return yaml_emitter_set_emitter_error(emitter, "neither tag nor implicit flags are specified") + } + + style := event.scalar_style() + if style == yaml_ANY_SCALAR_STYLE { + style = yaml_PLAIN_SCALAR_STYLE + } + if emitter.canonical { + style = yaml_DOUBLE_QUOTED_SCALAR_STYLE + } + if emitter.simple_key_context && emitter.scalar_data.multiline { + style = yaml_DOUBLE_QUOTED_SCALAR_STYLE + } + + if style == yaml_PLAIN_SCALAR_STYLE { + if emitter.flow_level > 0 && !emitter.scalar_data.flow_plain_allowed || + emitter.flow_level == 0 && !emitter.scalar_data.block_plain_allowed { + style = yaml_SINGLE_QUOTED_SCALAR_STYLE + } + if len(emitter.scalar_data.value) == 0 && (emitter.flow_level > 0 || emitter.simple_key_context) { + style = yaml_SINGLE_QUOTED_SCALAR_STYLE + } + if no_tag && !event.implicit { + style = yaml_SINGLE_QUOTED_SCALAR_STYLE + } + } + if style == yaml_SINGLE_QUOTED_SCALAR_STYLE { + if !emitter.scalar_data.single_quoted_allowed { + style = yaml_DOUBLE_QUOTED_SCALAR_STYLE + } + } + if style == yaml_LITERAL_SCALAR_STYLE || style == yaml_FOLDED_SCALAR_STYLE { + if !emitter.scalar_data.block_allowed || emitter.flow_level > 0 || emitter.simple_key_context { + style = yaml_DOUBLE_QUOTED_SCALAR_STYLE + } + } + + if no_tag && !event.quoted_implicit && style != yaml_PLAIN_SCALAR_STYLE { + emitter.tag_data.handle = []byte{'!'} + } + emitter.scalar_data.style = style + return true +} + +// Write an anchor. +func yaml_emitter_process_anchor(emitter *yaml_emitter_t) bool { + if emitter.anchor_data.anchor == nil { + return true + } + c := []byte{'&'} + if emitter.anchor_data.alias { + c[0] = '*' + } + if !yaml_emitter_write_indicator(emitter, c, true, false, false) { + return false + } + return yaml_emitter_write_anchor(emitter, emitter.anchor_data.anchor) +} + +// Write a tag. 
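+// A tag is written either in shorthand form, handle followed by suffix (for
+// example "!!str", the shorthand for "tag:yaml.org,2002:str"), or, when
+// yaml_emitter_analyze_tag resolved no handle, in the verbatim "!<...>"
+// form.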
+func yaml_emitter_process_tag(emitter *yaml_emitter_t) bool { + if len(emitter.tag_data.handle) == 0 && len(emitter.tag_data.suffix) == 0 { + return true + } + if len(emitter.tag_data.handle) > 0 { + if !yaml_emitter_write_tag_handle(emitter, emitter.tag_data.handle) { + return false + } + if len(emitter.tag_data.suffix) > 0 { + if !yaml_emitter_write_tag_content(emitter, emitter.tag_data.suffix, false) { + return false + } + } + } else { + // [Go] Allocate these slices elsewhere. + if !yaml_emitter_write_indicator(emitter, []byte("!<"), true, false, false) { + return false + } + if !yaml_emitter_write_tag_content(emitter, emitter.tag_data.suffix, false) { + return false + } + if !yaml_emitter_write_indicator(emitter, []byte{'>'}, false, false, false) { + return false + } + } + return true +} + +// Write a scalar. +func yaml_emitter_process_scalar(emitter *yaml_emitter_t) bool { + switch emitter.scalar_data.style { + case yaml_PLAIN_SCALAR_STYLE: + return yaml_emitter_write_plain_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context) + + case yaml_SINGLE_QUOTED_SCALAR_STYLE: + return yaml_emitter_write_single_quoted_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context) + + case yaml_DOUBLE_QUOTED_SCALAR_STYLE: + return yaml_emitter_write_double_quoted_scalar(emitter, emitter.scalar_data.value, !emitter.simple_key_context) + + case yaml_LITERAL_SCALAR_STYLE: + return yaml_emitter_write_literal_scalar(emitter, emitter.scalar_data.value) + + case yaml_FOLDED_SCALAR_STYLE: + return yaml_emitter_write_folded_scalar(emitter, emitter.scalar_data.value) + } + panic("unknown scalar style") +} + +// Check if a %YAML directive is valid. +func yaml_emitter_analyze_version_directive(emitter *yaml_emitter_t, version_directive *yaml_version_directive_t) bool { + if version_directive.major != 1 || version_directive.minor != 1 { + return yaml_emitter_set_emitter_error(emitter, "incompatible %YAML directive") + } + return true +} + +// Check if a %TAG directive is valid. +func yaml_emitter_analyze_tag_directive(emitter *yaml_emitter_t, tag_directive *yaml_tag_directive_t) bool { + handle := tag_directive.handle + prefix := tag_directive.prefix + if len(handle) == 0 { + return yaml_emitter_set_emitter_error(emitter, "tag handle must not be empty") + } + if handle[0] != '!' { + return yaml_emitter_set_emitter_error(emitter, "tag handle must start with '!'") + } + if handle[len(handle)-1] != '!' { + return yaml_emitter_set_emitter_error(emitter, "tag handle must end with '!'") + } + for i := 1; i < len(handle)-1; i += width(handle[i]) { + if !is_alpha(handle, i) { + return yaml_emitter_set_emitter_error(emitter, "tag handle must contain alphanumerical characters only") + } + } + if len(prefix) == 0 { + return yaml_emitter_set_emitter_error(emitter, "tag prefix must not be empty") + } + return true +} + +// Check if an anchor is valid. 
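+// Anchors ("&name") and aliases ("*name") share the same validation: the
+// name must be non-empty and purely alphanumeric. The validated value is
+// stored in emitter.anchor_data so yaml_emitter_process_anchor can write it
+// with the appropriate "&" or "*" indicator.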
+func yaml_emitter_analyze_anchor(emitter *yaml_emitter_t, anchor []byte, alias bool) bool { + if len(anchor) == 0 { + problem := "anchor value must not be empty" + if alias { + problem = "alias value must not be empty" + } + return yaml_emitter_set_emitter_error(emitter, problem) + } + for i := 0; i < len(anchor); i += width(anchor[i]) { + if !is_alpha(anchor, i) { + problem := "anchor value must contain alphanumerical characters only" + if alias { + problem = "alias value must contain alphanumerical characters only" + } + return yaml_emitter_set_emitter_error(emitter, problem) + } + } + emitter.anchor_data.anchor = anchor + emitter.anchor_data.alias = alias + return true +} + +// Check if a tag is valid. +func yaml_emitter_analyze_tag(emitter *yaml_emitter_t, tag []byte) bool { + if len(tag) == 0 { + return yaml_emitter_set_emitter_error(emitter, "tag value must not be empty") + } + for i := 0; i < len(emitter.tag_directives); i++ { + tag_directive := &emitter.tag_directives[i] + if bytes.HasPrefix(tag, tag_directive.prefix) { + emitter.tag_data.handle = tag_directive.handle + emitter.tag_data.suffix = tag[len(tag_directive.prefix):] + return true + } + } + emitter.tag_data.suffix = tag + return true +} + +// Check if a scalar is valid. +func yaml_emitter_analyze_scalar(emitter *yaml_emitter_t, value []byte) bool { + var ( + block_indicators = false + flow_indicators = false + line_breaks = false + special_characters = false + + leading_space = false + leading_break = false + trailing_space = false + trailing_break = false + break_space = false + space_break = false + + preceded_by_whitespace = false + followed_by_whitespace = false + previous_space = false + previous_break = false + ) + + emitter.scalar_data.value = value + + if len(value) == 0 { + emitter.scalar_data.multiline = false + emitter.scalar_data.flow_plain_allowed = false + emitter.scalar_data.block_plain_allowed = true + emitter.scalar_data.single_quoted_allowed = true + emitter.scalar_data.block_allowed = false + return true + } + + if len(value) >= 3 && ((value[0] == '-' && value[1] == '-' && value[2] == '-') || (value[0] == '.' && value[1] == '.' 
&& value[2] == '.')) { + block_indicators = true + flow_indicators = true + } + + preceded_by_whitespace = true + for i, w := 0, 0; i < len(value); i += w { + w = width(value[i]) + followed_by_whitespace = i+w >= len(value) || is_blank(value, i+w) + + if i == 0 { + switch value[i] { + case '#', ',', '[', ']', '{', '}', '&', '*', '!', '|', '>', '\'', '"', '%', '@', '`': + flow_indicators = true + block_indicators = true + case '?', ':': + flow_indicators = true + if followed_by_whitespace { + block_indicators = true + } + case '-': + if followed_by_whitespace { + flow_indicators = true + block_indicators = true + } + } + } else { + switch value[i] { + case ',', '?', '[', ']', '{', '}': + flow_indicators = true + case ':': + flow_indicators = true + if followed_by_whitespace { + block_indicators = true + } + case '#': + if preceded_by_whitespace { + flow_indicators = true + block_indicators = true + } + } + } + + if !is_printable(value, i) || !is_ascii(value, i) && !emitter.unicode { + special_characters = true + } + if is_space(value, i) { + if i == 0 { + leading_space = true + } + if i+width(value[i]) == len(value) { + trailing_space = true + } + if previous_break { + break_space = true + } + previous_space = true + previous_break = false + } else if is_break(value, i) { + line_breaks = true + if i == 0 { + leading_break = true + } + if i+width(value[i]) == len(value) { + trailing_break = true + } + if previous_space { + space_break = true + } + previous_space = false + previous_break = true + } else { + previous_space = false + previous_break = false + } + + // [Go]: Why 'z'? Couldn't be the end of the string as that's the loop condition. + preceded_by_whitespace = is_blankz(value, i) + } + + emitter.scalar_data.multiline = line_breaks + emitter.scalar_data.flow_plain_allowed = true + emitter.scalar_data.block_plain_allowed = true + emitter.scalar_data.single_quoted_allowed = true + emitter.scalar_data.block_allowed = true + + if leading_space || leading_break || trailing_space || trailing_break { + emitter.scalar_data.flow_plain_allowed = false + emitter.scalar_data.block_plain_allowed = false + } + if trailing_space { + emitter.scalar_data.block_allowed = false + } + if break_space { + emitter.scalar_data.flow_plain_allowed = false + emitter.scalar_data.block_plain_allowed = false + emitter.scalar_data.single_quoted_allowed = false + } + if space_break || special_characters { + emitter.scalar_data.flow_plain_allowed = false + emitter.scalar_data.block_plain_allowed = false + emitter.scalar_data.single_quoted_allowed = false + emitter.scalar_data.block_allowed = false + } + if line_breaks { + emitter.scalar_data.flow_plain_allowed = false + emitter.scalar_data.block_plain_allowed = false + } + if flow_indicators { + emitter.scalar_data.flow_plain_allowed = false + } + if block_indicators { + emitter.scalar_data.block_plain_allowed = false + } + return true +} + +// Check if the event data is valid. 
+func yaml_emitter_analyze_event(emitter *yaml_emitter_t, event *yaml_event_t) bool { + + emitter.anchor_data.anchor = nil + emitter.tag_data.handle = nil + emitter.tag_data.suffix = nil + emitter.scalar_data.value = nil + + switch event.typ { + case yaml_ALIAS_EVENT: + if !yaml_emitter_analyze_anchor(emitter, event.anchor, true) { + return false + } + + case yaml_SCALAR_EVENT: + if len(event.anchor) > 0 { + if !yaml_emitter_analyze_anchor(emitter, event.anchor, false) { + return false + } + } + if len(event.tag) > 0 && (emitter.canonical || (!event.implicit && !event.quoted_implicit)) { + if !yaml_emitter_analyze_tag(emitter, event.tag) { + return false + } + } + if !yaml_emitter_analyze_scalar(emitter, event.value) { + return false + } + + case yaml_SEQUENCE_START_EVENT: + if len(event.anchor) > 0 { + if !yaml_emitter_analyze_anchor(emitter, event.anchor, false) { + return false + } + } + if len(event.tag) > 0 && (emitter.canonical || !event.implicit) { + if !yaml_emitter_analyze_tag(emitter, event.tag) { + return false + } + } + + case yaml_MAPPING_START_EVENT: + if len(event.anchor) > 0 { + if !yaml_emitter_analyze_anchor(emitter, event.anchor, false) { + return false + } + } + if len(event.tag) > 0 && (emitter.canonical || !event.implicit) { + if !yaml_emitter_analyze_tag(emitter, event.tag) { + return false + } + } + } + return true +} + +// Write the BOM character. +func yaml_emitter_write_bom(emitter *yaml_emitter_t) bool { + if !flush(emitter) { + return false + } + pos := emitter.buffer_pos + emitter.buffer[pos+0] = '\xEF' + emitter.buffer[pos+1] = '\xBB' + emitter.buffer[pos+2] = '\xBF' + emitter.buffer_pos += 3 + return true +} + +func yaml_emitter_write_indent(emitter *yaml_emitter_t) bool { + indent := emitter.indent + if indent < 0 { + indent = 0 + } + if !emitter.indention || emitter.column > indent || (emitter.column == indent && !emitter.whitespace) { + if !put_break(emitter) { + return false + } + } + for emitter.column < indent { + if !put(emitter, ' ') { + return false + } + } + emitter.whitespace = true + emitter.indention = true + return true +} + +func yaml_emitter_write_indicator(emitter *yaml_emitter_t, indicator []byte, need_whitespace, is_whitespace, is_indention bool) bool { + if need_whitespace && !emitter.whitespace { + if !put(emitter, ' ') { + return false + } + } + if !write_all(emitter, indicator) { + return false + } + emitter.whitespace = is_whitespace + emitter.indention = (emitter.indention && is_indention) + emitter.open_ended = false + return true +} + +func yaml_emitter_write_anchor(emitter *yaml_emitter_t, value []byte) bool { + if !write_all(emitter, value) { + return false + } + emitter.whitespace = false + emitter.indention = false + return true +} + +func yaml_emitter_write_tag_handle(emitter *yaml_emitter_t, value []byte) bool { + if !emitter.whitespace { + if !put(emitter, ' ') { + return false + } + } + if !write_all(emitter, value) { + return false + } + emitter.whitespace = false + emitter.indention = false + return true +} + +func yaml_emitter_write_tag_content(emitter *yaml_emitter_t, value []byte, need_whitespace bool) bool { + if need_whitespace && !emitter.whitespace { + if !put(emitter, ' ') { + return false + } + } + for i := 0; i < len(value); { + var must_write bool + switch value[i] { + case ';', '/', '?', ':', '@', '&', '=', '+', '$', ',', '_', '.', '~', '*', '\'', '(', ')', '[', ']': + must_write = true + default: + must_write = is_alpha(value, i) + } + if must_write { + if !write(emitter, value, &i) { + return false + } + } 
else { + w := width(value[i]) + for k := 0; k < w; k++ { + octet := value[i] + i++ + if !put(emitter, '%') { + return false + } + + c := octet >> 4 + if c < 10 { + c += '0' + } else { + c += 'A' - 10 + } + if !put(emitter, c) { + return false + } + + c = octet & 0x0f + if c < 10 { + c += '0' + } else { + c += 'A' - 10 + } + if !put(emitter, c) { + return false + } + } + } + } + emitter.whitespace = false + emitter.indention = false + return true +} + +func yaml_emitter_write_plain_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool { + if !emitter.whitespace { + if !put(emitter, ' ') { + return false + } + } + + spaces := false + breaks := false + for i := 0; i < len(value); { + if is_space(value, i) { + if allow_breaks && !spaces && emitter.column > emitter.best_width && !is_space(value, i+1) { + if !yaml_emitter_write_indent(emitter) { + return false + } + i += width(value[i]) + } else { + if !write(emitter, value, &i) { + return false + } + } + spaces = true + } else if is_break(value, i) { + if !breaks && value[i] == '\n' { + if !put_break(emitter) { + return false + } + } + if !write_break(emitter, value, &i) { + return false + } + emitter.indention = true + breaks = true + } else { + if breaks { + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if !write(emitter, value, &i) { + return false + } + emitter.indention = false + spaces = false + breaks = false + } + } + + emitter.whitespace = false + emitter.indention = false + if emitter.root_context { + emitter.open_ended = true + } + + return true +} + +func yaml_emitter_write_single_quoted_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool { + + if !yaml_emitter_write_indicator(emitter, []byte{'\''}, true, false, false) { + return false + } + + spaces := false + breaks := false + for i := 0; i < len(value); { + if is_space(value, i) { + if allow_breaks && !spaces && emitter.column > emitter.best_width && i > 0 && i < len(value)-1 && !is_space(value, i+1) { + if !yaml_emitter_write_indent(emitter) { + return false + } + i += width(value[i]) + } else { + if !write(emitter, value, &i) { + return false + } + } + spaces = true + } else if is_break(value, i) { + if !breaks && value[i] == '\n' { + if !put_break(emitter) { + return false + } + } + if !write_break(emitter, value, &i) { + return false + } + emitter.indention = true + breaks = true + } else { + if breaks { + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if value[i] == '\'' { + if !put(emitter, '\'') { + return false + } + } + if !write(emitter, value, &i) { + return false + } + emitter.indention = false + spaces = false + breaks = false + } + } + if !yaml_emitter_write_indicator(emitter, []byte{'\''}, false, false, false) { + return false + } + emitter.whitespace = false + emitter.indention = false + return true +} + +func yaml_emitter_write_double_quoted_scalar(emitter *yaml_emitter_t, value []byte, allow_breaks bool) bool { + spaces := false + if !yaml_emitter_write_indicator(emitter, []byte{'"'}, true, false, false) { + return false + } + + for i := 0; i < len(value); { + if !is_printable(value, i) || (!emitter.unicode && !is_ascii(value, i)) || + is_bom(value, i) || is_break(value, i) || + value[i] == '"' || value[i] == '\\' { + + octet := value[i] + + var w int + var v rune + switch { + case octet&0x80 == 0x00: + w, v = 1, rune(octet&0x7F) + case octet&0xE0 == 0xC0: + w, v = 2, rune(octet&0x1F) + case octet&0xF0 == 0xE0: + w, v = 3, rune(octet&0x0F) + case octet&0xF8 == 0xF0: + w, v = 4, 
rune(octet&0x07) + } + for k := 1; k < w; k++ { + octet = value[i+k] + v = (v << 6) + (rune(octet) & 0x3F) + } + i += w + + if !put(emitter, '\\') { + return false + } + + var ok bool + switch v { + case 0x00: + ok = put(emitter, '0') + case 0x07: + ok = put(emitter, 'a') + case 0x08: + ok = put(emitter, 'b') + case 0x09: + ok = put(emitter, 't') + case 0x0A: + ok = put(emitter, 'n') + case 0x0b: + ok = put(emitter, 'v') + case 0x0c: + ok = put(emitter, 'f') + case 0x0d: + ok = put(emitter, 'r') + case 0x1b: + ok = put(emitter, 'e') + case 0x22: + ok = put(emitter, '"') + case 0x5c: + ok = put(emitter, '\\') + case 0x85: + ok = put(emitter, 'N') + case 0xA0: + ok = put(emitter, '_') + case 0x2028: + ok = put(emitter, 'L') + case 0x2029: + ok = put(emitter, 'P') + default: + if v <= 0xFF { + ok = put(emitter, 'x') + w = 2 + } else if v <= 0xFFFF { + ok = put(emitter, 'u') + w = 4 + } else { + ok = put(emitter, 'U') + w = 8 + } + for k := (w - 1) * 4; ok && k >= 0; k -= 4 { + digit := byte((v >> uint(k)) & 0x0F) + if digit < 10 { + ok = put(emitter, digit+'0') + } else { + ok = put(emitter, digit+'A'-10) + } + } + } + if !ok { + return false + } + spaces = false + } else if is_space(value, i) { + if allow_breaks && !spaces && emitter.column > emitter.best_width && i > 0 && i < len(value)-1 { + if !yaml_emitter_write_indent(emitter) { + return false + } + if is_space(value, i+1) { + if !put(emitter, '\\') { + return false + } + } + i += width(value[i]) + } else if !write(emitter, value, &i) { + return false + } + spaces = true + } else { + if !write(emitter, value, &i) { + return false + } + spaces = false + } + } + if !yaml_emitter_write_indicator(emitter, []byte{'"'}, false, false, false) { + return false + } + emitter.whitespace = false + emitter.indention = false + return true +} + +func yaml_emitter_write_block_scalar_hints(emitter *yaml_emitter_t, value []byte) bool { + if is_space(value, 0) || is_break(value, 0) { + indent_hint := []byte{'0' + byte(emitter.best_indent)} + if !yaml_emitter_write_indicator(emitter, indent_hint, false, false, false) { + return false + } + } + + emitter.open_ended = false + + var chomp_hint [1]byte + if len(value) == 0 { + chomp_hint[0] = '-' + } else { + i := len(value) - 1 + for value[i]&0xC0 == 0x80 { + i-- + } + if !is_break(value, i) { + chomp_hint[0] = '-' + } else if i == 0 { + chomp_hint[0] = '+' + emitter.open_ended = true + } else { + i-- + for value[i]&0xC0 == 0x80 { + i-- + } + if is_break(value, i) { + chomp_hint[0] = '+' + emitter.open_ended = true + } + } + } + if chomp_hint[0] != 0 { + if !yaml_emitter_write_indicator(emitter, chomp_hint[:], false, false, false) { + return false + } + } + return true +} + +func yaml_emitter_write_literal_scalar(emitter *yaml_emitter_t, value []byte) bool { + if !yaml_emitter_write_indicator(emitter, []byte{'|'}, true, false, false) { + return false + } + if !yaml_emitter_write_block_scalar_hints(emitter, value) { + return false + } + if !put_break(emitter) { + return false + } + emitter.indention = true + emitter.whitespace = true + breaks := true + for i := 0; i < len(value); { + if is_break(value, i) { + if !write_break(emitter, value, &i) { + return false + } + emitter.indention = true + breaks = true + } else { + if breaks { + if !yaml_emitter_write_indent(emitter) { + return false + } + } + if !write(emitter, value, &i) { + return false + } + emitter.indention = false + breaks = false + } + } + + return true +} + +func yaml_emitter_write_folded_scalar(emitter *yaml_emitter_t, value []byte) bool { + if 
!yaml_emitter_write_indicator(emitter, []byte{'>'}, true, false, false) { + return false + } + if !yaml_emitter_write_block_scalar_hints(emitter, value) { + return false + } + + if !put_break(emitter) { + return false + } + emitter.indention = true + emitter.whitespace = true + + breaks := true + leading_spaces := true + for i := 0; i < len(value); { + if is_break(value, i) { + if !breaks && !leading_spaces && value[i] == '\n' { + k := 0 + for is_break(value, k) { + k += width(value[k]) + } + if !is_blankz(value, k) { + if !put_break(emitter) { + return false + } + } + } + if !write_break(emitter, value, &i) { + return false + } + emitter.indention = true + breaks = true + } else { + if breaks { + if !yaml_emitter_write_indent(emitter) { + return false + } + leading_spaces = is_blank(value, i) + } + if !breaks && is_space(value, i) && !is_space(value, i+1) && emitter.column > emitter.best_width { + if !yaml_emitter_write_indent(emitter) { + return false + } + i += width(value[i]) + } else { + if !write(emitter, value, &i) { + return false + } + } + emitter.indention = false + breaks = false + } + } + return true +} diff --git a/vendor/gopkg.in/yaml.v2/encode.go b/vendor/gopkg.in/yaml.v2/encode.go new file mode 100644 index 000000000..a14435e82 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/encode.go @@ -0,0 +1,362 @@ +package yaml + +import ( + "encoding" + "fmt" + "io" + "reflect" + "regexp" + "sort" + "strconv" + "strings" + "time" + "unicode/utf8" +) + +type encoder struct { + emitter yaml_emitter_t + event yaml_event_t + out []byte + flow bool + // doneInit holds whether the initial stream_start_event has been + // emitted. + doneInit bool +} + +func newEncoder() *encoder { + e := &encoder{} + yaml_emitter_initialize(&e.emitter) + yaml_emitter_set_output_string(&e.emitter, &e.out) + yaml_emitter_set_unicode(&e.emitter, true) + return e +} + +func newEncoderWithWriter(w io.Writer) *encoder { + e := &encoder{} + yaml_emitter_initialize(&e.emitter) + yaml_emitter_set_output_writer(&e.emitter, w) + yaml_emitter_set_unicode(&e.emitter, true) + return e +} + +func (e *encoder) init() { + if e.doneInit { + return + } + yaml_stream_start_event_initialize(&e.event, yaml_UTF8_ENCODING) + e.emit() + e.doneInit = true +} + +func (e *encoder) finish() { + e.emitter.open_ended = false + yaml_stream_end_event_initialize(&e.event) + e.emit() +} + +func (e *encoder) destroy() { + yaml_emitter_delete(&e.emitter) +} + +func (e *encoder) emit() { + // This will internally delete the e.event value. + e.must(yaml_emitter_emit(&e.emitter, &e.event)) +} + +func (e *encoder) must(ok bool) { + if !ok { + msg := e.emitter.problem + if msg == "" { + msg = "unknown problem generating YAML content" + } + failf("%s", msg) + } +} + +func (e *encoder) marshalDoc(tag string, in reflect.Value) { + e.init() + yaml_document_start_event_initialize(&e.event, nil, nil, true) + e.emit() + e.marshal(tag, in) + yaml_document_end_event_initialize(&e.event, true) + e.emit() +} + +func (e *encoder) marshal(tag string, in reflect.Value) { + if !in.IsValid() || in.Kind() == reflect.Ptr && in.IsNil() { + e.nilv() + return + } + iface := in.Interface() + switch m := iface.(type) { + case time.Time, *time.Time: + // Although time.Time implements TextMarshaler, + // we don't want to treat it as a string for YAML + // purposes because YAML has special support for + // timestamps. 
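+ // The case body is intentionally empty: the type switch does nothing
+ // here, so control reaches the reflect.Kind switch below, where
+ // e.timev emits the value in RFC3339 form (with nanoseconds).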
+ case Marshaler: + v, err := m.MarshalYAML() + if err != nil { + fail(err) + } + if v == nil { + e.nilv() + return + } + in = reflect.ValueOf(v) + case encoding.TextMarshaler: + text, err := m.MarshalText() + if err != nil { + fail(err) + } + in = reflect.ValueOf(string(text)) + case nil: + e.nilv() + return + } + switch in.Kind() { + case reflect.Interface: + e.marshal(tag, in.Elem()) + case reflect.Map: + e.mapv(tag, in) + case reflect.Ptr: + if in.Type() == ptrTimeType { + e.timev(tag, in.Elem()) + } else { + e.marshal(tag, in.Elem()) + } + case reflect.Struct: + if in.Type() == timeType { + e.timev(tag, in) + } else { + e.structv(tag, in) + } + case reflect.Slice, reflect.Array: + if in.Type().Elem() == mapItemType { + e.itemsv(tag, in) + } else { + e.slicev(tag, in) + } + case reflect.String: + e.stringv(tag, in) + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + if in.Type() == durationType { + e.stringv(tag, reflect.ValueOf(iface.(time.Duration).String())) + } else { + e.intv(tag, in) + } + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + e.uintv(tag, in) + case reflect.Float32, reflect.Float64: + e.floatv(tag, in) + case reflect.Bool: + e.boolv(tag, in) + default: + panic("cannot marshal type: " + in.Type().String()) + } +} + +func (e *encoder) mapv(tag string, in reflect.Value) { + e.mappingv(tag, func() { + keys := keyList(in.MapKeys()) + sort.Sort(keys) + for _, k := range keys { + e.marshal("", k) + e.marshal("", in.MapIndex(k)) + } + }) +} + +func (e *encoder) itemsv(tag string, in reflect.Value) { + e.mappingv(tag, func() { + slice := in.Convert(reflect.TypeOf([]MapItem{})).Interface().([]MapItem) + for _, item := range slice { + e.marshal("", reflect.ValueOf(item.Key)) + e.marshal("", reflect.ValueOf(item.Value)) + } + }) +} + +func (e *encoder) structv(tag string, in reflect.Value) { + sinfo, err := getStructInfo(in.Type()) + if err != nil { + panic(err) + } + e.mappingv(tag, func() { + for _, info := range sinfo.FieldsList { + var value reflect.Value + if info.Inline == nil { + value = in.Field(info.Num) + } else { + value = in.FieldByIndex(info.Inline) + } + if info.OmitEmpty && isZero(value) { + continue + } + e.marshal("", reflect.ValueOf(info.Key)) + e.flow = info.Flow + e.marshal("", value) + } + if sinfo.InlineMap >= 0 { + m := in.Field(sinfo.InlineMap) + if m.Len() > 0 { + e.flow = false + keys := keyList(m.MapKeys()) + sort.Sort(keys) + for _, k := range keys { + if _, found := sinfo.FieldsMap[k.String()]; found { + panic(fmt.Sprintf("Can't have key %q in inlined map; conflicts with struct field", k.String())) + } + e.marshal("", k) + e.flow = false + e.marshal("", m.MapIndex(k)) + } + } + } + }) +} + +func (e *encoder) mappingv(tag string, f func()) { + implicit := tag == "" + style := yaml_BLOCK_MAPPING_STYLE + if e.flow { + e.flow = false + style = yaml_FLOW_MAPPING_STYLE + } + yaml_mapping_start_event_initialize(&e.event, nil, []byte(tag), implicit, style) + e.emit() + f() + yaml_mapping_end_event_initialize(&e.event) + e.emit() +} + +func (e *encoder) slicev(tag string, in reflect.Value) { + implicit := tag == "" + style := yaml_BLOCK_SEQUENCE_STYLE + if e.flow { + e.flow = false + style = yaml_FLOW_SEQUENCE_STYLE + } + e.must(yaml_sequence_start_event_initialize(&e.event, nil, []byte(tag), implicit, style)) + e.emit() + n := in.Len() + for i := 0; i < n; i++ { + e.marshal("", in.Index(i)) + } + e.must(yaml_sequence_end_event_initialize(&e.event)) + e.emit() +} + +// isBase60 
returns whether s is in base 60 notation as defined in YAML 1.1. +// +// The base 60 float notation in YAML 1.1 is a terrible idea and is unsupported +// in YAML 1.2 and by this package, but these should be marshalled quoted for +// the time being for compatibility with other parsers. +func isBase60Float(s string) (result bool) { + // Fast path. + if s == "" { + return false + } + c := s[0] + if !(c == '+' || c == '-' || c >= '0' && c <= '9') || strings.IndexByte(s, ':') < 0 { + return false + } + // Do the full match. + return base60float.MatchString(s) +} + +// From http://yaml.org/type/float.html, except the regular expression there +// is bogus. In practice parsers do not enforce the "\.[0-9_]*" suffix. +var base60float = regexp.MustCompile(`^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+(?:\.[0-9_]*)?$`) + +func (e *encoder) stringv(tag string, in reflect.Value) { + var style yaml_scalar_style_t + s := in.String() + canUsePlain := true + switch { + case !utf8.ValidString(s): + if tag == yaml_BINARY_TAG { + failf("explicitly tagged !!binary data must be base64-encoded") + } + if tag != "" { + failf("cannot marshal invalid UTF-8 data as %s", shortTag(tag)) + } + // It can't be encoded directly as YAML so use a binary tag + // and encode it as base64. + tag = yaml_BINARY_TAG + s = encodeBase64(s) + case tag == "": + // Check to see if it would resolve to a specific + // tag when encoded unquoted. If it doesn't, + // there's no need to quote it. + rtag, _ := resolve("", s) + canUsePlain = rtag == yaml_STR_TAG && !isBase60Float(s) + } + // Note: it's possible for user code to emit invalid YAML + // if they explicitly specify a tag and a string containing + // text that's incompatible with that tag. + switch { + case strings.Contains(s, "\n"): + style = yaml_LITERAL_SCALAR_STYLE + case canUsePlain: + style = yaml_PLAIN_SCALAR_STYLE + default: + style = yaml_DOUBLE_QUOTED_SCALAR_STYLE + } + e.emitScalar(s, "", tag, style) +} + +func (e *encoder) boolv(tag string, in reflect.Value) { + var s string + if in.Bool() { + s = "true" + } else { + s = "false" + } + e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE) +} + +func (e *encoder) intv(tag string, in reflect.Value) { + s := strconv.FormatInt(in.Int(), 10) + e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE) +} + +func (e *encoder) uintv(tag string, in reflect.Value) { + s := strconv.FormatUint(in.Uint(), 10) + e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE) +} + +func (e *encoder) timev(tag string, in reflect.Value) { + t := in.Interface().(time.Time) + s := t.Format(time.RFC3339Nano) + e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE) +} + +func (e *encoder) floatv(tag string, in reflect.Value) { + // Issue #352: When formatting, use the precision of the underlying value + precision := 64 + if in.Kind() == reflect.Float32 { + precision = 32 + } + + s := strconv.FormatFloat(in.Float(), 'g', -1, precision) + switch s { + case "+Inf": + s = ".inf" + case "-Inf": + s = "-.inf" + case "NaN": + s = ".nan" + } + e.emitScalar(s, "", tag, yaml_PLAIN_SCALAR_STYLE) +} + +func (e *encoder) nilv() { + e.emitScalar("null", "", "", yaml_PLAIN_SCALAR_STYLE) +} + +func (e *encoder) emitScalar(value, anchor, tag string, style yaml_scalar_style_t) { + implicit := tag == "" + e.must(yaml_scalar_event_initialize(&e.event, []byte(anchor), []byte(tag), []byte(value), implicit, implicit, style)) + e.emit() +} diff --git a/vendor/gopkg.in/yaml.v2/go.mod b/vendor/gopkg.in/yaml.v2/go.mod new file mode 100644 index 000000000..1934e8769 --- /dev/null +++ 
b/vendor/gopkg.in/yaml.v2/go.mod @@ -0,0 +1,5 @@ +module "gopkg.in/yaml.v2" + +require ( + "gopkg.in/check.v1" v0.0.0-20161208181325-20d25e280405 +) diff --git a/vendor/gopkg.in/yaml.v2/parserc.go b/vendor/gopkg.in/yaml.v2/parserc.go new file mode 100644 index 000000000..81d05dfe5 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/parserc.go @@ -0,0 +1,1095 @@ +package yaml + +import ( + "bytes" +) + +// The parser implements the following grammar: +// +// stream ::= STREAM-START implicit_document? explicit_document* STREAM-END +// implicit_document ::= block_node DOCUMENT-END* +// explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END* +// block_node_or_indentless_sequence ::= +// ALIAS +// | properties (block_content | indentless_block_sequence)? +// | block_content +// | indentless_block_sequence +// block_node ::= ALIAS +// | properties block_content? +// | block_content +// flow_node ::= ALIAS +// | properties flow_content? +// | flow_content +// properties ::= TAG ANCHOR? | ANCHOR TAG? +// block_content ::= block_collection | flow_collection | SCALAR +// flow_content ::= flow_collection | SCALAR +// block_collection ::= block_sequence | block_mapping +// flow_collection ::= flow_sequence | flow_mapping +// block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END +// indentless_sequence ::= (BLOCK-ENTRY block_node?)+ +// block_mapping ::= BLOCK-MAPPING_START +// ((KEY block_node_or_indentless_sequence?)? +// (VALUE block_node_or_indentless_sequence?)?)* +// BLOCK-END +// flow_sequence ::= FLOW-SEQUENCE-START +// (flow_sequence_entry FLOW-ENTRY)* +// flow_sequence_entry? +// FLOW-SEQUENCE-END +// flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? +// flow_mapping ::= FLOW-MAPPING-START +// (flow_mapping_entry FLOW-ENTRY)* +// flow_mapping_entry? +// FLOW-MAPPING-END +// flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? + +// Peek the next token in the token queue. +func peek_token(parser *yaml_parser_t) *yaml_token_t { + if parser.token_available || yaml_parser_fetch_more_tokens(parser) { + return &parser.tokens[parser.tokens_head] + } + return nil +} + +// Remove the next token from the queue (must be called after peek_token). +func skip_token(parser *yaml_parser_t) { + parser.token_available = false + parser.tokens_parsed++ + parser.stream_end_produced = parser.tokens[parser.tokens_head].typ == yaml_STREAM_END_TOKEN + parser.tokens_head++ +} + +// Get the next event. +func yaml_parser_parse(parser *yaml_parser_t, event *yaml_event_t) bool { + // Erase the event object. + *event = yaml_event_t{} + + // No events after the end of the stream or error. + if parser.stream_end_produced || parser.error != yaml_NO_ERROR || parser.state == yaml_PARSE_END_STATE { + return true + } + + // Generate the next event. + return yaml_parser_state_machine(parser, event) +} + +// Set parser error. +func yaml_parser_set_parser_error(parser *yaml_parser_t, problem string, problem_mark yaml_mark_t) bool { + parser.error = yaml_PARSER_ERROR + parser.problem = problem + parser.problem_mark = problem_mark + return false +} + +func yaml_parser_set_parser_error_context(parser *yaml_parser_t, context string, context_mark yaml_mark_t, problem string, problem_mark yaml_mark_t) bool { + parser.error = yaml_PARSER_ERROR + parser.context = context + parser.context_mark = context_mark + parser.problem = problem + parser.problem_mark = problem_mark + return false +} + +// State dispatcher. 
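+// Like the emitter, the parser is a pushdown automaton: parser.state names
+// the production currently being matched (see the grammar above) and
+// parser.states holds the productions to return to. Each handler consumes
+// scanner tokens and produces exactly one event per call.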
+func yaml_parser_state_machine(parser *yaml_parser_t, event *yaml_event_t) bool { + //trace("yaml_parser_state_machine", "state:", parser.state.String()) + + switch parser.state { + case yaml_PARSE_STREAM_START_STATE: + return yaml_parser_parse_stream_start(parser, event) + + case yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE: + return yaml_parser_parse_document_start(parser, event, true) + + case yaml_PARSE_DOCUMENT_START_STATE: + return yaml_parser_parse_document_start(parser, event, false) + + case yaml_PARSE_DOCUMENT_CONTENT_STATE: + return yaml_parser_parse_document_content(parser, event) + + case yaml_PARSE_DOCUMENT_END_STATE: + return yaml_parser_parse_document_end(parser, event) + + case yaml_PARSE_BLOCK_NODE_STATE: + return yaml_parser_parse_node(parser, event, true, false) + + case yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE: + return yaml_parser_parse_node(parser, event, true, true) + + case yaml_PARSE_FLOW_NODE_STATE: + return yaml_parser_parse_node(parser, event, false, false) + + case yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE: + return yaml_parser_parse_block_sequence_entry(parser, event, true) + + case yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE: + return yaml_parser_parse_block_sequence_entry(parser, event, false) + + case yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE: + return yaml_parser_parse_indentless_sequence_entry(parser, event) + + case yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE: + return yaml_parser_parse_block_mapping_key(parser, event, true) + + case yaml_PARSE_BLOCK_MAPPING_KEY_STATE: + return yaml_parser_parse_block_mapping_key(parser, event, false) + + case yaml_PARSE_BLOCK_MAPPING_VALUE_STATE: + return yaml_parser_parse_block_mapping_value(parser, event) + + case yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE: + return yaml_parser_parse_flow_sequence_entry(parser, event, true) + + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE: + return yaml_parser_parse_flow_sequence_entry(parser, event, false) + + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE: + return yaml_parser_parse_flow_sequence_entry_mapping_key(parser, event) + + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE: + return yaml_parser_parse_flow_sequence_entry_mapping_value(parser, event) + + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE: + return yaml_parser_parse_flow_sequence_entry_mapping_end(parser, event) + + case yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE: + return yaml_parser_parse_flow_mapping_key(parser, event, true) + + case yaml_PARSE_FLOW_MAPPING_KEY_STATE: + return yaml_parser_parse_flow_mapping_key(parser, event, false) + + case yaml_PARSE_FLOW_MAPPING_VALUE_STATE: + return yaml_parser_parse_flow_mapping_value(parser, event, false) + + case yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE: + return yaml_parser_parse_flow_mapping_value(parser, event, true) + + default: + panic("invalid parser state") + } +} + +// Parse the production: +// stream ::= STREAM-START implicit_document? 
explicit_document* STREAM-END
+// ************
+func yaml_parser_parse_stream_start(parser *yaml_parser_t, event *yaml_event_t) bool {
+ token := peek_token(parser)
+ if token == nil {
+ return false
+ }
+ if token.typ != yaml_STREAM_START_TOKEN {
+ return yaml_parser_set_parser_error(parser, "did not find expected <stream-start>", token.start_mark)
+ }
+ parser.state = yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE
+ *event = yaml_event_t{
+ typ: yaml_STREAM_START_EVENT,
+ start_mark: token.start_mark,
+ end_mark: token.end_mark,
+ encoding: token.encoding,
+ }
+ skip_token(parser)
+ return true
+}
+
+// Parse the productions:
+// implicit_document ::= block_node DOCUMENT-END*
+// *
+// explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END*
+// *************************
+func yaml_parser_parse_document_start(parser *yaml_parser_t, event *yaml_event_t, implicit bool) bool {
+
+ token := peek_token(parser)
+ if token == nil {
+ return false
+ }
+
+ // Parse extra document end indicators.
+ if !implicit {
+ for token.typ == yaml_DOCUMENT_END_TOKEN {
+ skip_token(parser)
+ token = peek_token(parser)
+ if token == nil {
+ return false
+ }
+ }
+ }
+
+ if implicit && token.typ != yaml_VERSION_DIRECTIVE_TOKEN &&
+ token.typ != yaml_TAG_DIRECTIVE_TOKEN &&
+ token.typ != yaml_DOCUMENT_START_TOKEN &&
+ token.typ != yaml_STREAM_END_TOKEN {
+ // Parse an implicit document.
+ if !yaml_parser_process_directives(parser, nil, nil) {
+ return false
+ }
+ parser.states = append(parser.states, yaml_PARSE_DOCUMENT_END_STATE)
+ parser.state = yaml_PARSE_BLOCK_NODE_STATE
+
+ *event = yaml_event_t{
+ typ: yaml_DOCUMENT_START_EVENT,
+ start_mark: token.start_mark,
+ end_mark: token.end_mark,
+ }
+
+ } else if token.typ != yaml_STREAM_END_TOKEN {
+ // Parse an explicit document.
+ var version_directive *yaml_version_directive_t
+ var tag_directives []yaml_tag_directive_t
+ start_mark := token.start_mark
+ if !yaml_parser_process_directives(parser, &version_directive, &tag_directives) {
+ return false
+ }
+ token = peek_token(parser)
+ if token == nil {
+ return false
+ }
+ if token.typ != yaml_DOCUMENT_START_TOKEN {
+ yaml_parser_set_parser_error(parser,
+ "did not find expected <document start>", token.start_mark)
+ return false
+ }
+ parser.states = append(parser.states, yaml_PARSE_DOCUMENT_END_STATE)
+ parser.state = yaml_PARSE_DOCUMENT_CONTENT_STATE
+ end_mark := token.end_mark
+
+ *event = yaml_event_t{
+ typ: yaml_DOCUMENT_START_EVENT,
+ start_mark: start_mark,
+ end_mark: end_mark,
+ version_directive: version_directive,
+ tag_directives: tag_directives,
+ implicit: false,
+ }
+ skip_token(parser)
+
+ } else {
+ // Parse the stream end.
+ parser.state = yaml_PARSE_END_STATE
+ *event = yaml_event_t{
+ typ: yaml_STREAM_END_EVENT,
+ start_mark: token.start_mark,
+ end_mark: token.end_mark,
+ }
+ skip_token(parser)
+ }
+
+ return true
+}
+
+// Parse the productions:
+// explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? 
DOCUMENT-END* +// *********** +// +func yaml_parser_parse_document_content(parser *yaml_parser_t, event *yaml_event_t) bool { + token := peek_token(parser) + if token == nil { + return false + } + if token.typ == yaml_VERSION_DIRECTIVE_TOKEN || + token.typ == yaml_TAG_DIRECTIVE_TOKEN || + token.typ == yaml_DOCUMENT_START_TOKEN || + token.typ == yaml_DOCUMENT_END_TOKEN || + token.typ == yaml_STREAM_END_TOKEN { + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + return yaml_parser_process_empty_scalar(parser, event, + token.start_mark) + } + return yaml_parser_parse_node(parser, event, true, false) +} + +// Parse the productions: +// implicit_document ::= block_node DOCUMENT-END* +// ************* +// explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END* +// +func yaml_parser_parse_document_end(parser *yaml_parser_t, event *yaml_event_t) bool { + token := peek_token(parser) + if token == nil { + return false + } + + start_mark := token.start_mark + end_mark := token.start_mark + + implicit := true + if token.typ == yaml_DOCUMENT_END_TOKEN { + end_mark = token.end_mark + skip_token(parser) + implicit = false + } + + parser.tag_directives = parser.tag_directives[:0] + + parser.state = yaml_PARSE_DOCUMENT_START_STATE + *event = yaml_event_t{ + typ: yaml_DOCUMENT_END_EVENT, + start_mark: start_mark, + end_mark: end_mark, + implicit: implicit, + } + return true +} + +// Parse the productions: +// block_node_or_indentless_sequence ::= +// ALIAS +// ***** +// | properties (block_content | indentless_block_sequence)? +// ********** * +// | block_content | indentless_block_sequence +// * +// block_node ::= ALIAS +// ***** +// | properties block_content? +// ********** * +// | block_content +// * +// flow_node ::= ALIAS +// ***** +// | properties flow_content? +// ********** * +// | flow_content +// * +// properties ::= TAG ANCHOR? | ANCHOR TAG? 
+// ************************* +// block_content ::= block_collection | flow_collection | SCALAR +// ****** +// flow_content ::= flow_collection | SCALAR +// ****** +func yaml_parser_parse_node(parser *yaml_parser_t, event *yaml_event_t, block, indentless_sequence bool) bool { + //defer trace("yaml_parser_parse_node", "block:", block, "indentless_sequence:", indentless_sequence)() + + token := peek_token(parser) + if token == nil { + return false + } + + if token.typ == yaml_ALIAS_TOKEN { + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + *event = yaml_event_t{ + typ: yaml_ALIAS_EVENT, + start_mark: token.start_mark, + end_mark: token.end_mark, + anchor: token.value, + } + skip_token(parser) + return true + } + + start_mark := token.start_mark + end_mark := token.start_mark + + var tag_token bool + var tag_handle, tag_suffix, anchor []byte + var tag_mark yaml_mark_t + if token.typ == yaml_ANCHOR_TOKEN { + anchor = token.value + start_mark = token.start_mark + end_mark = token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ == yaml_TAG_TOKEN { + tag_token = true + tag_handle = token.value + tag_suffix = token.suffix + tag_mark = token.start_mark + end_mark = token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + } + } else if token.typ == yaml_TAG_TOKEN { + tag_token = true + tag_handle = token.value + tag_suffix = token.suffix + start_mark = token.start_mark + tag_mark = token.start_mark + end_mark = token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ == yaml_ANCHOR_TOKEN { + anchor = token.value + end_mark = token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + } + } + + var tag []byte + if tag_token { + if len(tag_handle) == 0 { + tag = tag_suffix + tag_suffix = nil + } else { + for i := range parser.tag_directives { + if bytes.Equal(parser.tag_directives[i].handle, tag_handle) { + tag = append([]byte(nil), parser.tag_directives[i].prefix...) + tag = append(tag, tag_suffix...) 
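+					// A matching %TAG (or default) directive supplies the prefix:
+					// e.g. the handle "!!" with suffix "str" expands to
+					// "tag:yaml.org,2002:str".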
+ break + } + } + if len(tag) == 0 { + yaml_parser_set_parser_error_context(parser, + "while parsing a node", start_mark, + "found undefined tag handle", tag_mark) + return false + } + } + } + + implicit := len(tag) == 0 + if indentless_sequence && token.typ == yaml_BLOCK_ENTRY_TOKEN { + end_mark = token.end_mark + parser.state = yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE + *event = yaml_event_t{ + typ: yaml_SEQUENCE_START_EVENT, + start_mark: start_mark, + end_mark: end_mark, + anchor: anchor, + tag: tag, + implicit: implicit, + style: yaml_style_t(yaml_BLOCK_SEQUENCE_STYLE), + } + return true + } + if token.typ == yaml_SCALAR_TOKEN { + var plain_implicit, quoted_implicit bool + end_mark = token.end_mark + if (len(tag) == 0 && token.style == yaml_PLAIN_SCALAR_STYLE) || (len(tag) == 1 && tag[0] == '!') { + plain_implicit = true + } else if len(tag) == 0 { + quoted_implicit = true + } + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + + *event = yaml_event_t{ + typ: yaml_SCALAR_EVENT, + start_mark: start_mark, + end_mark: end_mark, + anchor: anchor, + tag: tag, + value: token.value, + implicit: plain_implicit, + quoted_implicit: quoted_implicit, + style: yaml_style_t(token.style), + } + skip_token(parser) + return true + } + if token.typ == yaml_FLOW_SEQUENCE_START_TOKEN { + // [Go] Some of the events below can be merged as they differ only on style. + end_mark = token.end_mark + parser.state = yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE + *event = yaml_event_t{ + typ: yaml_SEQUENCE_START_EVENT, + start_mark: start_mark, + end_mark: end_mark, + anchor: anchor, + tag: tag, + implicit: implicit, + style: yaml_style_t(yaml_FLOW_SEQUENCE_STYLE), + } + return true + } + if token.typ == yaml_FLOW_MAPPING_START_TOKEN { + end_mark = token.end_mark + parser.state = yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE + *event = yaml_event_t{ + typ: yaml_MAPPING_START_EVENT, + start_mark: start_mark, + end_mark: end_mark, + anchor: anchor, + tag: tag, + implicit: implicit, + style: yaml_style_t(yaml_FLOW_MAPPING_STYLE), + } + return true + } + if block && token.typ == yaml_BLOCK_SEQUENCE_START_TOKEN { + end_mark = token.end_mark + parser.state = yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE + *event = yaml_event_t{ + typ: yaml_SEQUENCE_START_EVENT, + start_mark: start_mark, + end_mark: end_mark, + anchor: anchor, + tag: tag, + implicit: implicit, + style: yaml_style_t(yaml_BLOCK_SEQUENCE_STYLE), + } + return true + } + if block && token.typ == yaml_BLOCK_MAPPING_START_TOKEN { + end_mark = token.end_mark + parser.state = yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE + *event = yaml_event_t{ + typ: yaml_MAPPING_START_EVENT, + start_mark: start_mark, + end_mark: end_mark, + anchor: anchor, + tag: tag, + implicit: implicit, + style: yaml_style_t(yaml_BLOCK_MAPPING_STYLE), + } + return true + } + if len(anchor) > 0 || len(tag) > 0 { + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + + *event = yaml_event_t{ + typ: yaml_SCALAR_EVENT, + start_mark: start_mark, + end_mark: end_mark, + anchor: anchor, + tag: tag, + implicit: implicit, + quoted_implicit: false, + style: yaml_style_t(yaml_PLAIN_SCALAR_STYLE), + } + return true + } + + context := "while parsing a flow node" + if block { + context = "while parsing a block node" + } + yaml_parser_set_parser_error_context(parser, context, start_mark, + "did not find expected node content", token.start_mark) + return false +} + +// Parse the productions: +// 
block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* BLOCK-END +// ******************** *********** * ********* +// +func yaml_parser_parse_block_sequence_entry(parser *yaml_parser_t, event *yaml_event_t, first bool) bool { + if first { + token := peek_token(parser) + parser.marks = append(parser.marks, token.start_mark) + skip_token(parser) + } + + token := peek_token(parser) + if token == nil { + return false + } + + if token.typ == yaml_BLOCK_ENTRY_TOKEN { + mark := token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_BLOCK_ENTRY_TOKEN && token.typ != yaml_BLOCK_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE) + return yaml_parser_parse_node(parser, event, true, false) + } else { + parser.state = yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE + return yaml_parser_process_empty_scalar(parser, event, mark) + } + } + if token.typ == yaml_BLOCK_END_TOKEN { + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + + *event = yaml_event_t{ + typ: yaml_SEQUENCE_END_EVENT, + start_mark: token.start_mark, + end_mark: token.end_mark, + } + + skip_token(parser) + return true + } + + context_mark := parser.marks[len(parser.marks)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + return yaml_parser_set_parser_error_context(parser, + "while parsing a block collection", context_mark, + "did not find expected '-' indicator", token.start_mark) +} + +// Parse the productions: +// indentless_sequence ::= (BLOCK-ENTRY block_node?)+ +// *********** * +func yaml_parser_parse_indentless_sequence_entry(parser *yaml_parser_t, event *yaml_event_t) bool { + token := peek_token(parser) + if token == nil { + return false + } + + if token.typ == yaml_BLOCK_ENTRY_TOKEN { + mark := token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_BLOCK_ENTRY_TOKEN && + token.typ != yaml_KEY_TOKEN && + token.typ != yaml_VALUE_TOKEN && + token.typ != yaml_BLOCK_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE) + return yaml_parser_parse_node(parser, event, true, false) + } + parser.state = yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE + return yaml_parser_process_empty_scalar(parser, event, mark) + } + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + + *event = yaml_event_t{ + typ: yaml_SEQUENCE_END_EVENT, + start_mark: token.start_mark, + end_mark: token.start_mark, // [Go] Shouldn't this be token.end_mark? + } + return true +} + +// Parse the productions: +// block_mapping ::= BLOCK-MAPPING_START +// ******************* +// ((KEY block_node_or_indentless_sequence?)? 
+// *** * +// (VALUE block_node_or_indentless_sequence?)?)* +// +// BLOCK-END +// ********* +// +func yaml_parser_parse_block_mapping_key(parser *yaml_parser_t, event *yaml_event_t, first bool) bool { + if first { + token := peek_token(parser) + parser.marks = append(parser.marks, token.start_mark) + skip_token(parser) + } + + token := peek_token(parser) + if token == nil { + return false + } + + if token.typ == yaml_KEY_TOKEN { + mark := token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_KEY_TOKEN && + token.typ != yaml_VALUE_TOKEN && + token.typ != yaml_BLOCK_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_BLOCK_MAPPING_VALUE_STATE) + return yaml_parser_parse_node(parser, event, true, true) + } else { + parser.state = yaml_PARSE_BLOCK_MAPPING_VALUE_STATE + return yaml_parser_process_empty_scalar(parser, event, mark) + } + } else if token.typ == yaml_BLOCK_END_TOKEN { + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + *event = yaml_event_t{ + typ: yaml_MAPPING_END_EVENT, + start_mark: token.start_mark, + end_mark: token.end_mark, + } + skip_token(parser) + return true + } + + context_mark := parser.marks[len(parser.marks)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + return yaml_parser_set_parser_error_context(parser, + "while parsing a block mapping", context_mark, + "did not find expected key", token.start_mark) +} + +// Parse the productions: +// block_mapping ::= BLOCK-MAPPING_START +// +// ((KEY block_node_or_indentless_sequence?)? +// +// (VALUE block_node_or_indentless_sequence?)?)* +// ***** * +// BLOCK-END +// +// +func yaml_parser_parse_block_mapping_value(parser *yaml_parser_t, event *yaml_event_t) bool { + token := peek_token(parser) + if token == nil { + return false + } + if token.typ == yaml_VALUE_TOKEN { + mark := token.end_mark + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_KEY_TOKEN && + token.typ != yaml_VALUE_TOKEN && + token.typ != yaml_BLOCK_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_BLOCK_MAPPING_KEY_STATE) + return yaml_parser_parse_node(parser, event, true, true) + } + parser.state = yaml_PARSE_BLOCK_MAPPING_KEY_STATE + return yaml_parser_process_empty_scalar(parser, event, mark) + } + parser.state = yaml_PARSE_BLOCK_MAPPING_KEY_STATE + return yaml_parser_process_empty_scalar(parser, event, token.start_mark) +} + +// Parse the productions: +// flow_sequence ::= FLOW-SEQUENCE-START +// ******************* +// (flow_sequence_entry FLOW-ENTRY)* +// * ********** +// flow_sequence_entry? +// * +// FLOW-SEQUENCE-END +// ***************** +// flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? 
+// * +// +func yaml_parser_parse_flow_sequence_entry(parser *yaml_parser_t, event *yaml_event_t, first bool) bool { + if first { + token := peek_token(parser) + parser.marks = append(parser.marks, token.start_mark) + skip_token(parser) + } + token := peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_FLOW_SEQUENCE_END_TOKEN { + if !first { + if token.typ == yaml_FLOW_ENTRY_TOKEN { + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + } else { + context_mark := parser.marks[len(parser.marks)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + return yaml_parser_set_parser_error_context(parser, + "while parsing a flow sequence", context_mark, + "did not find expected ',' or ']'", token.start_mark) + } + } + + if token.typ == yaml_KEY_TOKEN { + parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE + *event = yaml_event_t{ + typ: yaml_MAPPING_START_EVENT, + start_mark: token.start_mark, + end_mark: token.end_mark, + implicit: true, + style: yaml_style_t(yaml_FLOW_MAPPING_STYLE), + } + skip_token(parser) + return true + } else if token.typ != yaml_FLOW_SEQUENCE_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE) + return yaml_parser_parse_node(parser, event, false, false) + } + } + + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + + *event = yaml_event_t{ + typ: yaml_SEQUENCE_END_EVENT, + start_mark: token.start_mark, + end_mark: token.end_mark, + } + + skip_token(parser) + return true +} + +// +// Parse the productions: +// flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? +// *** * +// +func yaml_parser_parse_flow_sequence_entry_mapping_key(parser *yaml_parser_t, event *yaml_event_t) bool { + token := peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_VALUE_TOKEN && + token.typ != yaml_FLOW_ENTRY_TOKEN && + token.typ != yaml_FLOW_SEQUENCE_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE) + return yaml_parser_parse_node(parser, event, false, false) + } + mark := token.end_mark + skip_token(parser) + parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE + return yaml_parser_process_empty_scalar(parser, event, mark) +} + +// Parse the productions: +// flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? +// ***** * +// +func yaml_parser_parse_flow_sequence_entry_mapping_value(parser *yaml_parser_t, event *yaml_event_t) bool { + token := peek_token(parser) + if token == nil { + return false + } + if token.typ == yaml_VALUE_TOKEN { + skip_token(parser) + token := peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_FLOW_ENTRY_TOKEN && token.typ != yaml_FLOW_SEQUENCE_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE) + return yaml_parser_parse_node(parser, event, false, false) + } + } + parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE + return yaml_parser_process_empty_scalar(parser, event, token.start_mark) +} + +// Parse the productions: +// flow_sequence_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? 
+// * +// +func yaml_parser_parse_flow_sequence_entry_mapping_end(parser *yaml_parser_t, event *yaml_event_t) bool { + token := peek_token(parser) + if token == nil { + return false + } + parser.state = yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE + *event = yaml_event_t{ + typ: yaml_MAPPING_END_EVENT, + start_mark: token.start_mark, + end_mark: token.start_mark, // [Go] Shouldn't this be end_mark? + } + return true +} + +// Parse the productions: +// flow_mapping ::= FLOW-MAPPING-START +// ****************** +// (flow_mapping_entry FLOW-ENTRY)* +// * ********** +// flow_mapping_entry? +// ****************** +// FLOW-MAPPING-END +// **************** +// flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? +// * *** * +// +func yaml_parser_parse_flow_mapping_key(parser *yaml_parser_t, event *yaml_event_t, first bool) bool { + if first { + token := peek_token(parser) + parser.marks = append(parser.marks, token.start_mark) + skip_token(parser) + } + + token := peek_token(parser) + if token == nil { + return false + } + + if token.typ != yaml_FLOW_MAPPING_END_TOKEN { + if !first { + if token.typ == yaml_FLOW_ENTRY_TOKEN { + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + } else { + context_mark := parser.marks[len(parser.marks)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + return yaml_parser_set_parser_error_context(parser, + "while parsing a flow mapping", context_mark, + "did not find expected ',' or '}'", token.start_mark) + } + } + + if token.typ == yaml_KEY_TOKEN { + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_VALUE_TOKEN && + token.typ != yaml_FLOW_ENTRY_TOKEN && + token.typ != yaml_FLOW_MAPPING_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_VALUE_STATE) + return yaml_parser_parse_node(parser, event, false, false) + } else { + parser.state = yaml_PARSE_FLOW_MAPPING_VALUE_STATE + return yaml_parser_process_empty_scalar(parser, event, token.start_mark) + } + } else if token.typ != yaml_FLOW_MAPPING_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE) + return yaml_parser_parse_node(parser, event, false, false) + } + } + + parser.state = parser.states[len(parser.states)-1] + parser.states = parser.states[:len(parser.states)-1] + parser.marks = parser.marks[:len(parser.marks)-1] + *event = yaml_event_t{ + typ: yaml_MAPPING_END_EVENT, + start_mark: token.start_mark, + end_mark: token.end_mark, + } + skip_token(parser) + return true +} + +// Parse the productions: +// flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? +// * ***** * +// +func yaml_parser_parse_flow_mapping_value(parser *yaml_parser_t, event *yaml_event_t, empty bool) bool { + token := peek_token(parser) + if token == nil { + return false + } + if empty { + parser.state = yaml_PARSE_FLOW_MAPPING_KEY_STATE + return yaml_parser_process_empty_scalar(parser, event, token.start_mark) + } + if token.typ == yaml_VALUE_TOKEN { + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + if token.typ != yaml_FLOW_ENTRY_TOKEN && token.typ != yaml_FLOW_MAPPING_END_TOKEN { + parser.states = append(parser.states, yaml_PARSE_FLOW_MAPPING_KEY_STATE) + return yaml_parser_parse_node(parser, event, false, false) + } + } + parser.state = yaml_PARSE_FLOW_MAPPING_KEY_STATE + return yaml_parser_process_empty_scalar(parser, event, token.start_mark) +} + +// Generate an empty scalar event. 
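+// An empty scalar stands in for a node that the grammar requires but the
+// input omits, such as the missing value in "key:". As a rough illustration
+// from the public API side (not logic used by this file), decoding such a
+// document yields a nil value:
+//
+//	var m map[string]interface{}
+//	yaml.Unmarshal([]byte("key:"), &m) // m["key"] == nil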
+func yaml_parser_process_empty_scalar(parser *yaml_parser_t, event *yaml_event_t, mark yaml_mark_t) bool { + *event = yaml_event_t{ + typ: yaml_SCALAR_EVENT, + start_mark: mark, + end_mark: mark, + value: nil, // Empty + implicit: true, + style: yaml_style_t(yaml_PLAIN_SCALAR_STYLE), + } + return true +} + +var default_tag_directives = []yaml_tag_directive_t{ + {[]byte("!"), []byte("!")}, + {[]byte("!!"), []byte("tag:yaml.org,2002:")}, +} + +// Parse directives. +func yaml_parser_process_directives(parser *yaml_parser_t, + version_directive_ref **yaml_version_directive_t, + tag_directives_ref *[]yaml_tag_directive_t) bool { + + var version_directive *yaml_version_directive_t + var tag_directives []yaml_tag_directive_t + + token := peek_token(parser) + if token == nil { + return false + } + + for token.typ == yaml_VERSION_DIRECTIVE_TOKEN || token.typ == yaml_TAG_DIRECTIVE_TOKEN { + if token.typ == yaml_VERSION_DIRECTIVE_TOKEN { + if version_directive != nil { + yaml_parser_set_parser_error(parser, + "found duplicate %YAML directive", token.start_mark) + return false + } + if token.major != 1 || token.minor != 1 { + yaml_parser_set_parser_error(parser, + "found incompatible YAML document", token.start_mark) + return false + } + version_directive = &yaml_version_directive_t{ + major: token.major, + minor: token.minor, + } + } else if token.typ == yaml_TAG_DIRECTIVE_TOKEN { + value := yaml_tag_directive_t{ + handle: token.value, + prefix: token.prefix, + } + if !yaml_parser_append_tag_directive(parser, value, false, token.start_mark) { + return false + } + tag_directives = append(tag_directives, value) + } + + skip_token(parser) + token = peek_token(parser) + if token == nil { + return false + } + } + + for i := range default_tag_directives { + if !yaml_parser_append_tag_directive(parser, default_tag_directives[i], true, token.start_mark) { + return false + } + } + + if version_directive_ref != nil { + *version_directive_ref = version_directive + } + if tag_directives_ref != nil { + *tag_directives_ref = tag_directives + } + return true +} + +// Append a tag directive to the directives stack. +func yaml_parser_append_tag_directive(parser *yaml_parser_t, value yaml_tag_directive_t, allow_duplicates bool, mark yaml_mark_t) bool { + for i := range parser.tag_directives { + if bytes.Equal(value.handle, parser.tag_directives[i].handle) { + if allow_duplicates { + return true + } + return yaml_parser_set_parser_error(parser, "found duplicate %TAG directive", mark) + } + } + + // [Go] I suspect the copy is unnecessary. This was likely done + // because there was no way to track ownership of the data. + value_copy := yaml_tag_directive_t{ + handle: make([]byte, len(value.handle)), + prefix: make([]byte, len(value.prefix)), + } + copy(value_copy.handle, value.handle) + copy(value_copy.prefix, value.prefix) + parser.tag_directives = append(parser.tag_directives, value_copy) + return true +} diff --git a/vendor/gopkg.in/yaml.v2/readerc.go b/vendor/gopkg.in/yaml.v2/readerc.go new file mode 100644 index 000000000..7c1f5fac3 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/readerc.go @@ -0,0 +1,412 @@ +package yaml + +import ( + "io" +) + +// Set the reader error and return 0. +func yaml_parser_set_reader_error(parser *yaml_parser_t, problem string, offset int, value int) bool { + parser.error = yaml_READER_ERROR + parser.problem = problem + parser.problem_offset = offset + parser.problem_value = value + return false +} + +// Byte order marks. 
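+// Only the UTF-8 and UTF-16 byte order marks below are recognized; no UTF-32
+// BOM is checked for, so UTF-32 input is not detected as such.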
+const ( + bom_UTF8 = "\xef\xbb\xbf" + bom_UTF16LE = "\xff\xfe" + bom_UTF16BE = "\xfe\xff" +) + +// Determine the input stream encoding by checking the BOM symbol. If no BOM is +// found, the UTF-8 encoding is assumed. Return 1 on success, 0 on failure. +func yaml_parser_determine_encoding(parser *yaml_parser_t) bool { + // Ensure that we had enough bytes in the raw buffer. + for !parser.eof && len(parser.raw_buffer)-parser.raw_buffer_pos < 3 { + if !yaml_parser_update_raw_buffer(parser) { + return false + } + } + + // Determine the encoding. + buf := parser.raw_buffer + pos := parser.raw_buffer_pos + avail := len(buf) - pos + if avail >= 2 && buf[pos] == bom_UTF16LE[0] && buf[pos+1] == bom_UTF16LE[1] { + parser.encoding = yaml_UTF16LE_ENCODING + parser.raw_buffer_pos += 2 + parser.offset += 2 + } else if avail >= 2 && buf[pos] == bom_UTF16BE[0] && buf[pos+1] == bom_UTF16BE[1] { + parser.encoding = yaml_UTF16BE_ENCODING + parser.raw_buffer_pos += 2 + parser.offset += 2 + } else if avail >= 3 && buf[pos] == bom_UTF8[0] && buf[pos+1] == bom_UTF8[1] && buf[pos+2] == bom_UTF8[2] { + parser.encoding = yaml_UTF8_ENCODING + parser.raw_buffer_pos += 3 + parser.offset += 3 + } else { + parser.encoding = yaml_UTF8_ENCODING + } + return true +} + +// Update the raw buffer. +func yaml_parser_update_raw_buffer(parser *yaml_parser_t) bool { + size_read := 0 + + // Return if the raw buffer is full. + if parser.raw_buffer_pos == 0 && len(parser.raw_buffer) == cap(parser.raw_buffer) { + return true + } + + // Return on EOF. + if parser.eof { + return true + } + + // Move the remaining bytes in the raw buffer to the beginning. + if parser.raw_buffer_pos > 0 && parser.raw_buffer_pos < len(parser.raw_buffer) { + copy(parser.raw_buffer, parser.raw_buffer[parser.raw_buffer_pos:]) + } + parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)-parser.raw_buffer_pos] + parser.raw_buffer_pos = 0 + + // Call the read handler to fill the buffer. + size_read, err := parser.read_handler(parser, parser.raw_buffer[len(parser.raw_buffer):cap(parser.raw_buffer)]) + parser.raw_buffer = parser.raw_buffer[:len(parser.raw_buffer)+size_read] + if err == io.EOF { + parser.eof = true + } else if err != nil { + return yaml_parser_set_reader_error(parser, "input error: "+err.Error(), parser.offset, -1) + } + return true +} + +// Ensure that the buffer contains at least `length` characters. +// Return true on success, false on failure. +// +// The length is supposed to be significantly less that the buffer size. +func yaml_parser_update_buffer(parser *yaml_parser_t, length int) bool { + if parser.read_handler == nil { + panic("read handler must be set") + } + + // [Go] This function was changed to guarantee the requested length size at EOF. + // The fact we need to do this is pretty awful, but the description above implies + // for that to be the case, and there are tests + + // If the EOF flag is set and the raw buffer is empty, do nothing. + if parser.eof && parser.raw_buffer_pos == len(parser.raw_buffer) { + // [Go] ACTUALLY! Read the documentation of this function above. + // This is just broken. To return true, we need to have the + // given length in the buffer. Not doing that means every single + // check that calls this function to make sure the buffer has a + // given length is Go) panicking; or C) accessing invalid memory. + //return true + } + + // Return if the buffer contains enough characters. + if parser.unread >= length { + return true + } + + // Determine the input encoding if it is not known yet. 
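+	// Regardless of the input encoding detected, the decoding loop below
+	// always stores UTF-8 in parser.buffer.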
+ if parser.encoding == yaml_ANY_ENCODING { + if !yaml_parser_determine_encoding(parser) { + return false + } + } + + // Move the unread characters to the beginning of the buffer. + buffer_len := len(parser.buffer) + if parser.buffer_pos > 0 && parser.buffer_pos < buffer_len { + copy(parser.buffer, parser.buffer[parser.buffer_pos:]) + buffer_len -= parser.buffer_pos + parser.buffer_pos = 0 + } else if parser.buffer_pos == buffer_len { + buffer_len = 0 + parser.buffer_pos = 0 + } + + // Open the whole buffer for writing, and cut it before returning. + parser.buffer = parser.buffer[:cap(parser.buffer)] + + // Fill the buffer until it has enough characters. + first := true + for parser.unread < length { + + // Fill the raw buffer if necessary. + if !first || parser.raw_buffer_pos == len(parser.raw_buffer) { + if !yaml_parser_update_raw_buffer(parser) { + parser.buffer = parser.buffer[:buffer_len] + return false + } + } + first = false + + // Decode the raw buffer. + inner: + for parser.raw_buffer_pos != len(parser.raw_buffer) { + var value rune + var width int + + raw_unread := len(parser.raw_buffer) - parser.raw_buffer_pos + + // Decode the next character. + switch parser.encoding { + case yaml_UTF8_ENCODING: + // Decode a UTF-8 character. Check RFC 3629 + // (http://www.ietf.org/rfc/rfc3629.txt) for more details. + // + // The following table (taken from the RFC) is used for + // decoding. + // + // Char. number range | UTF-8 octet sequence + // (hexadecimal) | (binary) + // --------------------+------------------------------------ + // 0000 0000-0000 007F | 0xxxxxxx + // 0000 0080-0000 07FF | 110xxxxx 10xxxxxx + // 0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx + // 0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx + // + // Additionally, the characters in the range 0xD800-0xDFFF + // are prohibited as they are reserved for use with UTF-16 + // surrogate pairs. + + // Determine the length of the UTF-8 sequence. + octet := parser.raw_buffer[parser.raw_buffer_pos] + switch { + case octet&0x80 == 0x00: + width = 1 + case octet&0xE0 == 0xC0: + width = 2 + case octet&0xF0 == 0xE0: + width = 3 + case octet&0xF8 == 0xF0: + width = 4 + default: + // The leading octet is invalid. + return yaml_parser_set_reader_error(parser, + "invalid leading UTF-8 octet", + parser.offset, int(octet)) + } + + // Check if the raw buffer contains an incomplete character. + if width > raw_unread { + if parser.eof { + return yaml_parser_set_reader_error(parser, + "incomplete UTF-8 octet sequence", + parser.offset, -1) + } + break inner + } + + // Decode the leading octet. + switch { + case octet&0x80 == 0x00: + value = rune(octet & 0x7F) + case octet&0xE0 == 0xC0: + value = rune(octet & 0x1F) + case octet&0xF0 == 0xE0: + value = rune(octet & 0x0F) + case octet&0xF8 == 0xF0: + value = rune(octet & 0x07) + default: + value = 0 + } + + // Check and decode the trailing octets. + for k := 1; k < width; k++ { + octet = parser.raw_buffer[parser.raw_buffer_pos+k] + + // Check if the octet is valid. + if (octet & 0xC0) != 0x80 { + return yaml_parser_set_reader_error(parser, + "invalid trailing UTF-8 octet", + parser.offset+k, int(octet)) + } + + // Decode the octet. + value = (value << 6) + rune(octet&0x3F) + } + + // Check the length of the sequence against the value. 
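+				// Reject overlong encodings (e.g. 0xC0 0x80 for U+0000), as
+				// required by RFC 3629.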
+ switch { + case width == 1: + case width == 2 && value >= 0x80: + case width == 3 && value >= 0x800: + case width == 4 && value >= 0x10000: + default: + return yaml_parser_set_reader_error(parser, + "invalid length of a UTF-8 sequence", + parser.offset, -1) + } + + // Check the range of the value. + if value >= 0xD800 && value <= 0xDFFF || value > 0x10FFFF { + return yaml_parser_set_reader_error(parser, + "invalid Unicode character", + parser.offset, int(value)) + } + + case yaml_UTF16LE_ENCODING, yaml_UTF16BE_ENCODING: + var low, high int + if parser.encoding == yaml_UTF16LE_ENCODING { + low, high = 0, 1 + } else { + low, high = 1, 0 + } + + // The UTF-16 encoding is not as simple as one might + // naively think. Check RFC 2781 + // (http://www.ietf.org/rfc/rfc2781.txt). + // + // Normally, two subsequent bytes describe a Unicode + // character. However a special technique (called a + // surrogate pair) is used for specifying character + // values larger than 0xFFFF. + // + // A surrogate pair consists of two pseudo-characters: + // high surrogate area (0xD800-0xDBFF) + // low surrogate area (0xDC00-0xDFFF) + // + // The following formulas are used for decoding + // and encoding characters using surrogate pairs: + // + // U = U' + 0x10000 (0x01 00 00 <= U <= 0x10 FF FF) + // U' = yyyyyyyyyyxxxxxxxxxx (0 <= U' <= 0x0F FF FF) + // W1 = 110110yyyyyyyyyy + // W2 = 110111xxxxxxxxxx + // + // where U is the character value, W1 is the high surrogate + // area, W2 is the low surrogate area. + + // Check for incomplete UTF-16 character. + if raw_unread < 2 { + if parser.eof { + return yaml_parser_set_reader_error(parser, + "incomplete UTF-16 character", + parser.offset, -1) + } + break inner + } + + // Get the character. + value = rune(parser.raw_buffer[parser.raw_buffer_pos+low]) + + (rune(parser.raw_buffer[parser.raw_buffer_pos+high]) << 8) + + // Check for unexpected low surrogate area. + if value&0xFC00 == 0xDC00 { + return yaml_parser_set_reader_error(parser, + "unexpected low surrogate area", + parser.offset, int(value)) + } + + // Check for a high surrogate area. + if value&0xFC00 == 0xD800 { + width = 4 + + // Check for incomplete surrogate pair. + if raw_unread < 4 { + if parser.eof { + return yaml_parser_set_reader_error(parser, + "incomplete UTF-16 surrogate pair", + parser.offset, -1) + } + break inner + } + + // Get the next character. + value2 := rune(parser.raw_buffer[parser.raw_buffer_pos+low+2]) + + (rune(parser.raw_buffer[parser.raw_buffer_pos+high+2]) << 8) + + // Check for a low surrogate area. + if value2&0xFC00 != 0xDC00 { + return yaml_parser_set_reader_error(parser, + "expected low surrogate area", + parser.offset+2, int(value2)) + } + + // Generate the value of the surrogate pair. + value = 0x10000 + ((value & 0x3FF) << 10) + (value2 & 0x3FF) + } else { + width = 2 + } + + default: + panic("impossible") + } + + // Check if the character is in the allowed range: + // #x9 | #xA | #xD | [#x20-#x7E] (8 bit) + // | #x85 | [#xA0-#xD7FF] | [#xE000-#xFFFD] (16 bit) + // | [#x10000-#x10FFFF] (32 bit) + switch { + case value == 0x09: + case value == 0x0A: + case value == 0x0D: + case value >= 0x20 && value <= 0x7E: + case value == 0x85: + case value >= 0xA0 && value <= 0xD7FF: + case value >= 0xE000 && value <= 0xFFFD: + case value >= 0x10000 && value <= 0x10FFFF: + default: + return yaml_parser_set_reader_error(parser, + "control characters are not allowed", + parser.offset, int(value)) + } + + // Move the raw pointers. 
+ parser.raw_buffer_pos += width + parser.offset += width + + // Finally put the character into the buffer. + if value <= 0x7F { + // 0000 0000-0000 007F . 0xxxxxxx + parser.buffer[buffer_len+0] = byte(value) + buffer_len += 1 + } else if value <= 0x7FF { + // 0000 0080-0000 07FF . 110xxxxx 10xxxxxx + parser.buffer[buffer_len+0] = byte(0xC0 + (value >> 6)) + parser.buffer[buffer_len+1] = byte(0x80 + (value & 0x3F)) + buffer_len += 2 + } else if value <= 0xFFFF { + // 0000 0800-0000 FFFF . 1110xxxx 10xxxxxx 10xxxxxx + parser.buffer[buffer_len+0] = byte(0xE0 + (value >> 12)) + parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 6) & 0x3F)) + parser.buffer[buffer_len+2] = byte(0x80 + (value & 0x3F)) + buffer_len += 3 + } else { + // 0001 0000-0010 FFFF . 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx + parser.buffer[buffer_len+0] = byte(0xF0 + (value >> 18)) + parser.buffer[buffer_len+1] = byte(0x80 + ((value >> 12) & 0x3F)) + parser.buffer[buffer_len+2] = byte(0x80 + ((value >> 6) & 0x3F)) + parser.buffer[buffer_len+3] = byte(0x80 + (value & 0x3F)) + buffer_len += 4 + } + + parser.unread++ + } + + // On EOF, put NUL into the buffer and return. + if parser.eof { + parser.buffer[buffer_len] = 0 + buffer_len++ + parser.unread++ + break + } + } + // [Go] Read the documentation of this function above. To return true, + // we need to have the given length in the buffer. Not doing that means + // every single check that calls this function to make sure the buffer + // has a given length is Go) panicking; or C) accessing invalid memory. + // This happens here due to the EOF above breaking early. + for buffer_len < length { + parser.buffer[buffer_len] = 0 + buffer_len++ + } + parser.buffer = parser.buffer[:buffer_len] + return true +} diff --git a/vendor/gopkg.in/yaml.v2/resolve.go b/vendor/gopkg.in/yaml.v2/resolve.go new file mode 100644 index 000000000..6c151db6f --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/resolve.go @@ -0,0 +1,258 @@ +package yaml + +import ( + "encoding/base64" + "math" + "regexp" + "strconv" + "strings" + "time" +) + +type resolveMapItem struct { + value interface{} + tag string +} + +var resolveTable = make([]byte, 256) +var resolveMap = make(map[string]resolveMapItem) + +func init() { + t := resolveTable + t[int('+')] = 'S' // Sign + t[int('-')] = 'S' + for _, c := range "0123456789" { + t[int(c)] = 'D' // Digit + } + for _, c := range "yYnNtTfFoO~" { + t[int(c)] = 'M' // In map + } + t[int('.')] = '.' 
// Float (potentially in map) + + var resolveMapList = []struct { + v interface{} + tag string + l []string + }{ + {true, yaml_BOOL_TAG, []string{"y", "Y", "yes", "Yes", "YES"}}, + {true, yaml_BOOL_TAG, []string{"true", "True", "TRUE"}}, + {true, yaml_BOOL_TAG, []string{"on", "On", "ON"}}, + {false, yaml_BOOL_TAG, []string{"n", "N", "no", "No", "NO"}}, + {false, yaml_BOOL_TAG, []string{"false", "False", "FALSE"}}, + {false, yaml_BOOL_TAG, []string{"off", "Off", "OFF"}}, + {nil, yaml_NULL_TAG, []string{"", "~", "null", "Null", "NULL"}}, + {math.NaN(), yaml_FLOAT_TAG, []string{".nan", ".NaN", ".NAN"}}, + {math.Inf(+1), yaml_FLOAT_TAG, []string{".inf", ".Inf", ".INF"}}, + {math.Inf(+1), yaml_FLOAT_TAG, []string{"+.inf", "+.Inf", "+.INF"}}, + {math.Inf(-1), yaml_FLOAT_TAG, []string{"-.inf", "-.Inf", "-.INF"}}, + {"<<", yaml_MERGE_TAG, []string{"<<"}}, + } + + m := resolveMap + for _, item := range resolveMapList { + for _, s := range item.l { + m[s] = resolveMapItem{item.v, item.tag} + } + } +} + +const longTagPrefix = "tag:yaml.org,2002:" + +func shortTag(tag string) string { + // TODO This can easily be made faster and produce less garbage. + if strings.HasPrefix(tag, longTagPrefix) { + return "!!" + tag[len(longTagPrefix):] + } + return tag +} + +func longTag(tag string) string { + if strings.HasPrefix(tag, "!!") { + return longTagPrefix + tag[2:] + } + return tag +} + +func resolvableTag(tag string) bool { + switch tag { + case "", yaml_STR_TAG, yaml_BOOL_TAG, yaml_INT_TAG, yaml_FLOAT_TAG, yaml_NULL_TAG, yaml_TIMESTAMP_TAG: + return true + } + return false +} + +var yamlStyleFloat = regexp.MustCompile(`^[-+]?[0-9]*\.?[0-9]+([eE][-+][0-9]+)?$`) + +func resolve(tag string, in string) (rtag string, out interface{}) { + if !resolvableTag(tag) { + return tag, in + } + + defer func() { + switch tag { + case "", rtag, yaml_STR_TAG, yaml_BINARY_TAG: + return + case yaml_FLOAT_TAG: + if rtag == yaml_INT_TAG { + switch v := out.(type) { + case int64: + rtag = yaml_FLOAT_TAG + out = float64(v) + return + case int: + rtag = yaml_FLOAT_TAG + out = float64(v) + return + } + } + } + failf("cannot decode %s `%s` as a %s", shortTag(rtag), in, shortTag(tag)) + }() + + // Any data is accepted as a !!str or !!binary. + // Otherwise, the prefix is enough of a hint about what it might be. + hint := byte('N') + if in != "" { + hint = resolveTable[in[0]] + } + if hint != 0 && tag != yaml_STR_TAG && tag != yaml_BINARY_TAG { + // Handle things we can lookup in a map. + if item, ok := resolveMap[in]; ok { + return item.tag, item.value + } + + // Base 60 floats are a bad idea, were dropped in YAML 1.2, and + // are purposefully unsupported here. They're still quoted on + // the way out for compatibility with other parser, though. + + switch hint { + case 'M': + // We've already checked the map above. + + case '.': + // Not in the map, so maybe a normal float. + floatv, err := strconv.ParseFloat(in, 64) + if err == nil { + return yaml_FLOAT_TAG, floatv + } + + case 'D', 'S': + // Int, float, or timestamp. + // Only try values as a timestamp if the value is unquoted or there's an explicit + // !!timestamp tag. 
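+			// For example, the plain scalar 2001-12-14 21:59:43.10 resolves to a
+			// timestamp, while the same text written in quotes stays a plain string.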
+ if tag == "" || tag == yaml_TIMESTAMP_TAG { + t, ok := parseTimestamp(in) + if ok { + return yaml_TIMESTAMP_TAG, t + } + } + + plain := strings.Replace(in, "_", "", -1) + intv, err := strconv.ParseInt(plain, 0, 64) + if err == nil { + if intv == int64(int(intv)) { + return yaml_INT_TAG, int(intv) + } else { + return yaml_INT_TAG, intv + } + } + uintv, err := strconv.ParseUint(plain, 0, 64) + if err == nil { + return yaml_INT_TAG, uintv + } + if yamlStyleFloat.MatchString(plain) { + floatv, err := strconv.ParseFloat(plain, 64) + if err == nil { + return yaml_FLOAT_TAG, floatv + } + } + if strings.HasPrefix(plain, "0b") { + intv, err := strconv.ParseInt(plain[2:], 2, 64) + if err == nil { + if intv == int64(int(intv)) { + return yaml_INT_TAG, int(intv) + } else { + return yaml_INT_TAG, intv + } + } + uintv, err := strconv.ParseUint(plain[2:], 2, 64) + if err == nil { + return yaml_INT_TAG, uintv + } + } else if strings.HasPrefix(plain, "-0b") { + intv, err := strconv.ParseInt("-" + plain[3:], 2, 64) + if err == nil { + if true || intv == int64(int(intv)) { + return yaml_INT_TAG, int(intv) + } else { + return yaml_INT_TAG, intv + } + } + } + default: + panic("resolveTable item not yet handled: " + string(rune(hint)) + " (with " + in + ")") + } + } + return yaml_STR_TAG, in +} + +// encodeBase64 encodes s as base64 that is broken up into multiple lines +// as appropriate for the resulting length. +func encodeBase64(s string) string { + const lineLen = 70 + encLen := base64.StdEncoding.EncodedLen(len(s)) + lines := encLen/lineLen + 1 + buf := make([]byte, encLen*2+lines) + in := buf[0:encLen] + out := buf[encLen:] + base64.StdEncoding.Encode(in, []byte(s)) + k := 0 + for i := 0; i < len(in); i += lineLen { + j := i + lineLen + if j > len(in) { + j = len(in) + } + k += copy(out[k:], in[i:j]) + if lines > 1 { + out[k] = '\n' + k++ + } + } + return string(out[:k]) +} + +// This is a subset of the formats allowed by the regular expression +// defined at http://yaml.org/type/timestamp.html. +var allowedTimestampFormats = []string{ + "2006-1-2T15:4:5.999999999Z07:00", // RCF3339Nano with short date fields. + "2006-1-2t15:4:5.999999999Z07:00", // RFC3339Nano with short date fields and lower-case "t". + "2006-1-2 15:4:5.999999999", // space separated with no time zone + "2006-1-2", // date only + // Notable exception: time.Parse cannot handle: "2001-12-14 21:59:43.10 -5" + // from the set of examples. +} + +// parseTimestamp parses s as a timestamp string and +// returns the timestamp and reports whether it succeeded. +// Timestamp formats are defined at http://yaml.org/type/timestamp.html +func parseTimestamp(s string) (time.Time, bool) { + // TODO write code to check all the formats supported by + // http://yaml.org/type/timestamp.html instead of using time.Parse. + + // Quick check: all date formats start with YYYY-. 
+ i := 0 + for ; i < len(s); i++ { + if c := s[i]; c < '0' || c > '9' { + break + } + } + if i != 4 || i == len(s) || s[i] != '-' { + return time.Time{}, false + } + for _, format := range allowedTimestampFormats { + if t, err := time.Parse(format, s); err == nil { + return t, true + } + } + return time.Time{}, false +} diff --git a/vendor/gopkg.in/yaml.v2/scannerc.go b/vendor/gopkg.in/yaml.v2/scannerc.go new file mode 100644 index 000000000..077fd1dd2 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/scannerc.go @@ -0,0 +1,2696 @@ +package yaml + +import ( + "bytes" + "fmt" +) + +// Introduction +// ************ +// +// The following notes assume that you are familiar with the YAML specification +// (http://yaml.org/spec/1.2/spec.html). We mostly follow it, although in +// some cases we are less restrictive that it requires. +// +// The process of transforming a YAML stream into a sequence of events is +// divided on two steps: Scanning and Parsing. +// +// The Scanner transforms the input stream into a sequence of tokens, while the +// parser transform the sequence of tokens produced by the Scanner into a +// sequence of parsing events. +// +// The Scanner is rather clever and complicated. The Parser, on the contrary, +// is a straightforward implementation of a recursive-descendant parser (or, +// LL(1) parser, as it is usually called). +// +// Actually there are two issues of Scanning that might be called "clever", the +// rest is quite straightforward. The issues are "block collection start" and +// "simple keys". Both issues are explained below in details. +// +// Here the Scanning step is explained and implemented. We start with the list +// of all the tokens produced by the Scanner together with short descriptions. +// +// Now, tokens: +// +// STREAM-START(encoding) # The stream start. +// STREAM-END # The stream end. +// VERSION-DIRECTIVE(major,minor) # The '%YAML' directive. +// TAG-DIRECTIVE(handle,prefix) # The '%TAG' directive. +// DOCUMENT-START # '---' +// DOCUMENT-END # '...' +// BLOCK-SEQUENCE-START # Indentation increase denoting a block +// BLOCK-MAPPING-START # sequence or a block mapping. +// BLOCK-END # Indentation decrease. +// FLOW-SEQUENCE-START # '[' +// FLOW-SEQUENCE-END # ']' +// BLOCK-SEQUENCE-START # '{' +// BLOCK-SEQUENCE-END # '}' +// BLOCK-ENTRY # '-' +// FLOW-ENTRY # ',' +// KEY # '?' or nothing (simple keys). +// VALUE # ':' +// ALIAS(anchor) # '*anchor' +// ANCHOR(anchor) # '&anchor' +// TAG(handle,suffix) # '!handle!suffix' +// SCALAR(value,style) # A scalar. +// +// The following two tokens are "virtual" tokens denoting the beginning and the +// end of the stream: +// +// STREAM-START(encoding) +// STREAM-END +// +// We pass the information about the input stream encoding with the +// STREAM-START token. +// +// The next two tokens are responsible for tags: +// +// VERSION-DIRECTIVE(major,minor) +// TAG-DIRECTIVE(handle,prefix) +// +// Example: +// +// %YAML 1.1 +// %TAG ! !foo +// %TAG !yaml! tag:yaml.org,2002: +// --- +// +// The correspoding sequence of tokens: +// +// STREAM-START(utf-8) +// VERSION-DIRECTIVE(1,1) +// TAG-DIRECTIVE("!","!foo") +// TAG-DIRECTIVE("!yaml","tag:yaml.org,2002:") +// DOCUMENT-START +// STREAM-END +// +// Note that the VERSION-DIRECTIVE and TAG-DIRECTIVE tokens occupy a whole +// line. +// +// The document start and end indicators are represented by: +// +// DOCUMENT-START +// DOCUMENT-END +// +// Note that if a YAML stream contains an implicit document (without '---' +// and '...' 
indicators), no DOCUMENT-START and DOCUMENT-END tokens will be +// produced. +// +// In the following examples, we present whole documents together with the +// produced tokens. +// +// 1. An implicit document: +// +// 'a scalar' +// +// Tokens: +// +// STREAM-START(utf-8) +// SCALAR("a scalar",single-quoted) +// STREAM-END +// +// 2. An explicit document: +// +// --- +// 'a scalar' +// ... +// +// Tokens: +// +// STREAM-START(utf-8) +// DOCUMENT-START +// SCALAR("a scalar",single-quoted) +// DOCUMENT-END +// STREAM-END +// +// 3. Several documents in a stream: +// +// 'a scalar' +// --- +// 'another scalar' +// --- +// 'yet another scalar' +// +// Tokens: +// +// STREAM-START(utf-8) +// SCALAR("a scalar",single-quoted) +// DOCUMENT-START +// SCALAR("another scalar",single-quoted) +// DOCUMENT-START +// SCALAR("yet another scalar",single-quoted) +// STREAM-END +// +// We have already introduced the SCALAR token above. The following tokens are +// used to describe aliases, anchors, tag, and scalars: +// +// ALIAS(anchor) +// ANCHOR(anchor) +// TAG(handle,suffix) +// SCALAR(value,style) +// +// The following series of examples illustrate the usage of these tokens: +// +// 1. A recursive sequence: +// +// &A [ *A ] +// +// Tokens: +// +// STREAM-START(utf-8) +// ANCHOR("A") +// FLOW-SEQUENCE-START +// ALIAS("A") +// FLOW-SEQUENCE-END +// STREAM-END +// +// 2. A tagged scalar: +// +// !!float "3.14" # A good approximation. +// +// Tokens: +// +// STREAM-START(utf-8) +// TAG("!!","float") +// SCALAR("3.14",double-quoted) +// STREAM-END +// +// 3. Various scalar styles: +// +// --- # Implicit empty plain scalars do not produce tokens. +// --- a plain scalar +// --- 'a single-quoted scalar' +// --- "a double-quoted scalar" +// --- |- +// a literal scalar +// --- >- +// a folded +// scalar +// +// Tokens: +// +// STREAM-START(utf-8) +// DOCUMENT-START +// DOCUMENT-START +// SCALAR("a plain scalar",plain) +// DOCUMENT-START +// SCALAR("a single-quoted scalar",single-quoted) +// DOCUMENT-START +// SCALAR("a double-quoted scalar",double-quoted) +// DOCUMENT-START +// SCALAR("a literal scalar",literal) +// DOCUMENT-START +// SCALAR("a folded scalar",folded) +// STREAM-END +// +// Now it's time to review collection-related tokens. We will start with +// flow collections: +// +// FLOW-SEQUENCE-START +// FLOW-SEQUENCE-END +// FLOW-MAPPING-START +// FLOW-MAPPING-END +// FLOW-ENTRY +// KEY +// VALUE +// +// The tokens FLOW-SEQUENCE-START, FLOW-SEQUENCE-END, FLOW-MAPPING-START, and +// FLOW-MAPPING-END represent the indicators '[', ']', '{', and '}' +// correspondingly. FLOW-ENTRY represent the ',' indicator. Finally the +// indicators '?' and ':', which are used for denoting mapping keys and values, +// are represented by the KEY and VALUE tokens. +// +// The following examples show flow collections: +// +// 1. A flow sequence: +// +// [item 1, item 2, item 3] +// +// Tokens: +// +// STREAM-START(utf-8) +// FLOW-SEQUENCE-START +// SCALAR("item 1",plain) +// FLOW-ENTRY +// SCALAR("item 2",plain) +// FLOW-ENTRY +// SCALAR("item 3",plain) +// FLOW-SEQUENCE-END +// STREAM-END +// +// 2. A flow mapping: +// +// { +// a simple key: a value, # Note that the KEY token is produced. +// ? 
a complex key: another value, +// } +// +// Tokens: +// +// STREAM-START(utf-8) +// FLOW-MAPPING-START +// KEY +// SCALAR("a simple key",plain) +// VALUE +// SCALAR("a value",plain) +// FLOW-ENTRY +// KEY +// SCALAR("a complex key",plain) +// VALUE +// SCALAR("another value",plain) +// FLOW-ENTRY +// FLOW-MAPPING-END +// STREAM-END +// +// A simple key is a key which is not denoted by the '?' indicator. Note that +// the Scanner still produce the KEY token whenever it encounters a simple key. +// +// For scanning block collections, the following tokens are used (note that we +// repeat KEY and VALUE here): +// +// BLOCK-SEQUENCE-START +// BLOCK-MAPPING-START +// BLOCK-END +// BLOCK-ENTRY +// KEY +// VALUE +// +// The tokens BLOCK-SEQUENCE-START and BLOCK-MAPPING-START denote indentation +// increase that precedes a block collection (cf. the INDENT token in Python). +// The token BLOCK-END denote indentation decrease that ends a block collection +// (cf. the DEDENT token in Python). However YAML has some syntax pecularities +// that makes detections of these tokens more complex. +// +// The tokens BLOCK-ENTRY, KEY, and VALUE are used to represent the indicators +// '-', '?', and ':' correspondingly. +// +// The following examples show how the tokens BLOCK-SEQUENCE-START, +// BLOCK-MAPPING-START, and BLOCK-END are emitted by the Scanner: +// +// 1. Block sequences: +// +// - item 1 +// - item 2 +// - +// - item 3.1 +// - item 3.2 +// - +// key 1: value 1 +// key 2: value 2 +// +// Tokens: +// +// STREAM-START(utf-8) +// BLOCK-SEQUENCE-START +// BLOCK-ENTRY +// SCALAR("item 1",plain) +// BLOCK-ENTRY +// SCALAR("item 2",plain) +// BLOCK-ENTRY +// BLOCK-SEQUENCE-START +// BLOCK-ENTRY +// SCALAR("item 3.1",plain) +// BLOCK-ENTRY +// SCALAR("item 3.2",plain) +// BLOCK-END +// BLOCK-ENTRY +// BLOCK-MAPPING-START +// KEY +// SCALAR("key 1",plain) +// VALUE +// SCALAR("value 1",plain) +// KEY +// SCALAR("key 2",plain) +// VALUE +// SCALAR("value 2",plain) +// BLOCK-END +// BLOCK-END +// STREAM-END +// +// 2. Block mappings: +// +// a simple key: a value # The KEY token is produced here. +// ? a complex key +// : another value +// a mapping: +// key 1: value 1 +// key 2: value 2 +// a sequence: +// - item 1 +// - item 2 +// +// Tokens: +// +// STREAM-START(utf-8) +// BLOCK-MAPPING-START +// KEY +// SCALAR("a simple key",plain) +// VALUE +// SCALAR("a value",plain) +// KEY +// SCALAR("a complex key",plain) +// VALUE +// SCALAR("another value",plain) +// KEY +// SCALAR("a mapping",plain) +// BLOCK-MAPPING-START +// KEY +// SCALAR("key 1",plain) +// VALUE +// SCALAR("value 1",plain) +// KEY +// SCALAR("key 2",plain) +// VALUE +// SCALAR("value 2",plain) +// BLOCK-END +// KEY +// SCALAR("a sequence",plain) +// VALUE +// BLOCK-SEQUENCE-START +// BLOCK-ENTRY +// SCALAR("item 1",plain) +// BLOCK-ENTRY +// SCALAR("item 2",plain) +// BLOCK-END +// BLOCK-END +// STREAM-END +// +// YAML does not always require to start a new block collection from a new +// line. If the current line contains only '-', '?', and ':' indicators, a new +// block collection may start at the current line. The following examples +// illustrate this case: +// +// 1. Collections in a sequence: +// +// - - item 1 +// - item 2 +// - key 1: value 1 +// key 2: value 2 +// - ? 
complex key +// : complex value +// +// Tokens: +// +// STREAM-START(utf-8) +// BLOCK-SEQUENCE-START +// BLOCK-ENTRY +// BLOCK-SEQUENCE-START +// BLOCK-ENTRY +// SCALAR("item 1",plain) +// BLOCK-ENTRY +// SCALAR("item 2",plain) +// BLOCK-END +// BLOCK-ENTRY +// BLOCK-MAPPING-START +// KEY +// SCALAR("key 1",plain) +// VALUE +// SCALAR("value 1",plain) +// KEY +// SCALAR("key 2",plain) +// VALUE +// SCALAR("value 2",plain) +// BLOCK-END +// BLOCK-ENTRY +// BLOCK-MAPPING-START +// KEY +// SCALAR("complex key") +// VALUE +// SCALAR("complex value") +// BLOCK-END +// BLOCK-END +// STREAM-END +// +// 2. Collections in a mapping: +// +// ? a sequence +// : - item 1 +// - item 2 +// ? a mapping +// : key 1: value 1 +// key 2: value 2 +// +// Tokens: +// +// STREAM-START(utf-8) +// BLOCK-MAPPING-START +// KEY +// SCALAR("a sequence",plain) +// VALUE +// BLOCK-SEQUENCE-START +// BLOCK-ENTRY +// SCALAR("item 1",plain) +// BLOCK-ENTRY +// SCALAR("item 2",plain) +// BLOCK-END +// KEY +// SCALAR("a mapping",plain) +// VALUE +// BLOCK-MAPPING-START +// KEY +// SCALAR("key 1",plain) +// VALUE +// SCALAR("value 1",plain) +// KEY +// SCALAR("key 2",plain) +// VALUE +// SCALAR("value 2",plain) +// BLOCK-END +// BLOCK-END +// STREAM-END +// +// YAML also permits non-indented sequences if they are included into a block +// mapping. In this case, the token BLOCK-SEQUENCE-START is not produced: +// +// key: +// - item 1 # BLOCK-SEQUENCE-START is NOT produced here. +// - item 2 +// +// Tokens: +// +// STREAM-START(utf-8) +// BLOCK-MAPPING-START +// KEY +// SCALAR("key",plain) +// VALUE +// BLOCK-ENTRY +// SCALAR("item 1",plain) +// BLOCK-ENTRY +// SCALAR("item 2",plain) +// BLOCK-END +// + +// Ensure that the buffer contains the required number of characters. +// Return true on success, false on failure (reader error or memory error). +func cache(parser *yaml_parser_t, length int) bool { + // [Go] This was inlined: !cache(A, B) -> unread < B && !update(A, B) + return parser.unread >= length || yaml_parser_update_buffer(parser, length) +} + +// Advance the buffer pointer. +func skip(parser *yaml_parser_t) { + parser.mark.index++ + parser.mark.column++ + parser.unread-- + parser.buffer_pos += width(parser.buffer[parser.buffer_pos]) +} + +func skip_line(parser *yaml_parser_t) { + if is_crlf(parser.buffer, parser.buffer_pos) { + parser.mark.index += 2 + parser.mark.column = 0 + parser.mark.line++ + parser.unread -= 2 + parser.buffer_pos += 2 + } else if is_break(parser.buffer, parser.buffer_pos) { + parser.mark.index++ + parser.mark.column = 0 + parser.mark.line++ + parser.unread-- + parser.buffer_pos += width(parser.buffer[parser.buffer_pos]) + } +} + +// Copy a character to a string buffer and advance pointers. +func read(parser *yaml_parser_t, s []byte) []byte { + w := width(parser.buffer[parser.buffer_pos]) + if w == 0 { + panic("invalid character sequence") + } + if len(s) == 0 { + s = make([]byte, 0, 32) + } + if w == 1 && len(s)+w <= cap(s) { + s = s[:len(s)+1] + s[len(s)-1] = parser.buffer[parser.buffer_pos] + parser.buffer_pos++ + } else { + s = append(s, parser.buffer[parser.buffer_pos:parser.buffer_pos+w]...) + parser.buffer_pos += w + } + parser.mark.index++ + parser.mark.column++ + parser.unread-- + return s +} + +// Copy a line break character to a string buffer and advance pointers. +func read_line(parser *yaml_parser_t, s []byte) []byte { + buf := parser.buffer + pos := parser.buffer_pos + switch { + case buf[pos] == '\r' && buf[pos+1] == '\n': + // CR LF . 
LF + s = append(s, '\n') + parser.buffer_pos += 2 + parser.mark.index++ + parser.unread-- + case buf[pos] == '\r' || buf[pos] == '\n': + // CR|LF . LF + s = append(s, '\n') + parser.buffer_pos += 1 + case buf[pos] == '\xC2' && buf[pos+1] == '\x85': + // NEL . LF + s = append(s, '\n') + parser.buffer_pos += 2 + case buf[pos] == '\xE2' && buf[pos+1] == '\x80' && (buf[pos+2] == '\xA8' || buf[pos+2] == '\xA9'): + // LS|PS . LS|PS + s = append(s, buf[parser.buffer_pos:pos+3]...) + parser.buffer_pos += 3 + default: + return s + } + parser.mark.index++ + parser.mark.column = 0 + parser.mark.line++ + parser.unread-- + return s +} + +// Get the next token. +func yaml_parser_scan(parser *yaml_parser_t, token *yaml_token_t) bool { + // Erase the token object. + *token = yaml_token_t{} // [Go] Is this necessary? + + // No tokens after STREAM-END or error. + if parser.stream_end_produced || parser.error != yaml_NO_ERROR { + return true + } + + // Ensure that the tokens queue contains enough tokens. + if !parser.token_available { + if !yaml_parser_fetch_more_tokens(parser) { + return false + } + } + + // Fetch the next token from the queue. + *token = parser.tokens[parser.tokens_head] + parser.tokens_head++ + parser.tokens_parsed++ + parser.token_available = false + + if token.typ == yaml_STREAM_END_TOKEN { + parser.stream_end_produced = true + } + return true +} + +// Set the scanner error and return false. +func yaml_parser_set_scanner_error(parser *yaml_parser_t, context string, context_mark yaml_mark_t, problem string) bool { + parser.error = yaml_SCANNER_ERROR + parser.context = context + parser.context_mark = context_mark + parser.problem = problem + parser.problem_mark = parser.mark + return false +} + +func yaml_parser_set_scanner_tag_error(parser *yaml_parser_t, directive bool, context_mark yaml_mark_t, problem string) bool { + context := "while parsing a tag" + if directive { + context = "while parsing a %TAG directive" + } + return yaml_parser_set_scanner_error(parser, context, context_mark, problem) +} + +func trace(args ...interface{}) func() { + pargs := append([]interface{}{"+++"}, args...) + fmt.Println(pargs...) + pargs = append([]interface{}{"---"}, args...) + return func() { fmt.Println(pargs...) } +} + +// Ensure that the tokens queue contains at least one token which can be +// returned to the Parser. +func yaml_parser_fetch_more_tokens(parser *yaml_parser_t) bool { + // While we need more tokens to fetch, do it. + for { + // Check if we really need to fetch more tokens. + need_more_tokens := false + + if parser.tokens_head == len(parser.tokens) { + // Queue is empty. + need_more_tokens = true + } else { + // Check if any potential simple key may occupy the head position. + if !yaml_parser_stale_simple_keys(parser) { + return false + } + + for i := range parser.simple_keys { + simple_key := &parser.simple_keys[i] + if simple_key.possible && simple_key.token_number == parser.tokens_parsed { + need_more_tokens = true + break + } + } + } + + // We are finished. + if !need_more_tokens { + break + } + // Fetch the next token. + if !yaml_parser_fetch_next_token(parser) { + return false + } + } + + parser.token_available = true + return true +} + +// The dispatcher for token fetchers. +func yaml_parser_fetch_next_token(parser *yaml_parser_t) bool { + // Ensure that the buffer is initialized. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + // Check if we just started scanning. Fetch STREAM-START then. 
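+	// STREAM-START is produced exactly once, before any other token.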
+ if !parser.stream_start_produced { + return yaml_parser_fetch_stream_start(parser) + } + + // Eat whitespaces and comments until we reach the next token. + if !yaml_parser_scan_to_next_token(parser) { + return false + } + + // Remove obsolete potential simple keys. + if !yaml_parser_stale_simple_keys(parser) { + return false + } + + // Check the indentation level against the current column. + if !yaml_parser_unroll_indent(parser, parser.mark.column) { + return false + } + + // Ensure that the buffer contains at least 4 characters. 4 is the length + // of the longest indicators ('--- ' and '... '). + if parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) { + return false + } + + // Is it the end of the stream? + if is_z(parser.buffer, parser.buffer_pos) { + return yaml_parser_fetch_stream_end(parser) + } + + // Is it a directive? + if parser.mark.column == 0 && parser.buffer[parser.buffer_pos] == '%' { + return yaml_parser_fetch_directive(parser) + } + + buf := parser.buffer + pos := parser.buffer_pos + + // Is it the document start indicator? + if parser.mark.column == 0 && buf[pos] == '-' && buf[pos+1] == '-' && buf[pos+2] == '-' && is_blankz(buf, pos+3) { + return yaml_parser_fetch_document_indicator(parser, yaml_DOCUMENT_START_TOKEN) + } + + // Is it the document end indicator? + if parser.mark.column == 0 && buf[pos] == '.' && buf[pos+1] == '.' && buf[pos+2] == '.' && is_blankz(buf, pos+3) { + return yaml_parser_fetch_document_indicator(parser, yaml_DOCUMENT_END_TOKEN) + } + + // Is it the flow sequence start indicator? + if buf[pos] == '[' { + return yaml_parser_fetch_flow_collection_start(parser, yaml_FLOW_SEQUENCE_START_TOKEN) + } + + // Is it the flow mapping start indicator? + if parser.buffer[parser.buffer_pos] == '{' { + return yaml_parser_fetch_flow_collection_start(parser, yaml_FLOW_MAPPING_START_TOKEN) + } + + // Is it the flow sequence end indicator? + if parser.buffer[parser.buffer_pos] == ']' { + return yaml_parser_fetch_flow_collection_end(parser, + yaml_FLOW_SEQUENCE_END_TOKEN) + } + + // Is it the flow mapping end indicator? + if parser.buffer[parser.buffer_pos] == '}' { + return yaml_parser_fetch_flow_collection_end(parser, + yaml_FLOW_MAPPING_END_TOKEN) + } + + // Is it the flow entry indicator? + if parser.buffer[parser.buffer_pos] == ',' { + return yaml_parser_fetch_flow_entry(parser) + } + + // Is it the block entry indicator? + if parser.buffer[parser.buffer_pos] == '-' && is_blankz(parser.buffer, parser.buffer_pos+1) { + return yaml_parser_fetch_block_entry(parser) + } + + // Is it the key indicator? + if parser.buffer[parser.buffer_pos] == '?' && (parser.flow_level > 0 || is_blankz(parser.buffer, parser.buffer_pos+1)) { + return yaml_parser_fetch_key(parser) + } + + // Is it the value indicator? + if parser.buffer[parser.buffer_pos] == ':' && (parser.flow_level > 0 || is_blankz(parser.buffer, parser.buffer_pos+1)) { + return yaml_parser_fetch_value(parser) + } + + // Is it an alias? + if parser.buffer[parser.buffer_pos] == '*' { + return yaml_parser_fetch_anchor(parser, yaml_ALIAS_TOKEN) + } + + // Is it an anchor? + if parser.buffer[parser.buffer_pos] == '&' { + return yaml_parser_fetch_anchor(parser, yaml_ANCHOR_TOKEN) + } + + // Is it a tag? + if parser.buffer[parser.buffer_pos] == '!' { + return yaml_parser_fetch_tag(parser) + } + + // Is it a literal scalar? + if parser.buffer[parser.buffer_pos] == '|' && parser.flow_level == 0 { + return yaml_parser_fetch_block_scalar(parser, true) + } + + // Is it a folded scalar? 
+ if parser.buffer[parser.buffer_pos] == '>' && parser.flow_level == 0 { + return yaml_parser_fetch_block_scalar(parser, false) + } + + // Is it a single-quoted scalar? + if parser.buffer[parser.buffer_pos] == '\'' { + return yaml_parser_fetch_flow_scalar(parser, true) + } + + // Is it a double-quoted scalar? + if parser.buffer[parser.buffer_pos] == '"' { + return yaml_parser_fetch_flow_scalar(parser, false) + } + + // Is it a plain scalar? + // + // A plain scalar may start with any non-blank characters except + // + // '-', '?', ':', ',', '[', ']', '{', '}', + // '#', '&', '*', '!', '|', '>', '\'', '\"', + // '%', '@', '`'. + // + // In the block context (and, for the '-' indicator, in the flow context + // too), it may also start with the characters + // + // '-', '?', ':' + // + // if it is followed by a non-space character. + // + // The last rule is more restrictive than the specification requires. + // [Go] Make this logic more reasonable. + //switch parser.buffer[parser.buffer_pos] { + //case '-', '?', ':', ',', '?', '-', ',', ':', ']', '[', '}', '{', '&', '#', '!', '*', '>', '|', '"', '\'', '@', '%', '-', '`': + //} + if !(is_blankz(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == '-' || + parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == ':' || + parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == '[' || + parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' || + parser.buffer[parser.buffer_pos] == '}' || parser.buffer[parser.buffer_pos] == '#' || + parser.buffer[parser.buffer_pos] == '&' || parser.buffer[parser.buffer_pos] == '*' || + parser.buffer[parser.buffer_pos] == '!' || parser.buffer[parser.buffer_pos] == '|' || + parser.buffer[parser.buffer_pos] == '>' || parser.buffer[parser.buffer_pos] == '\'' || + parser.buffer[parser.buffer_pos] == '"' || parser.buffer[parser.buffer_pos] == '%' || + parser.buffer[parser.buffer_pos] == '@' || parser.buffer[parser.buffer_pos] == '`') || + (parser.buffer[parser.buffer_pos] == '-' && !is_blank(parser.buffer, parser.buffer_pos+1)) || + (parser.flow_level == 0 && + (parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == ':') && + !is_blankz(parser.buffer, parser.buffer_pos+1)) { + return yaml_parser_fetch_plain_scalar(parser) + } + + // If we don't determine the token type so far, it is an error. + return yaml_parser_set_scanner_error(parser, + "while scanning for the next token", parser.mark, + "found character that cannot start any token") +} + +// Check the list of potential simple keys and remove the positions that +// cannot contain simple keys anymore. +func yaml_parser_stale_simple_keys(parser *yaml_parser_t) bool { + // Check for a potential simple key for each flow level. + for i := range parser.simple_keys { + simple_key := &parser.simple_keys[i] + + // The specification requires that a simple key + // + // - is limited to a single line, + // - is shorter than 1024 characters. + if simple_key.possible && (simple_key.mark.line < parser.mark.line || simple_key.mark.index+1024 < parser.mark.index) { + + // Check if the potential simple key to be removed is required. + if simple_key.required { + return yaml_parser_set_scanner_error(parser, + "while scanning a simple key", simple_key.mark, + "could not find expected ':'") + } + simple_key.possible = false + } + } + return true +} + +// Check if a simple key may start at the current position and add it if +// needed. 
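+//
+// As a concrete illustration of the two restrictions above:
+//
+//     plain key: value     # simple key; KEY is inserted retroactively at ':'
+//     ? |
+//       multi-line key
+//     : value              # explicit key announced up front by '?'
+//
+// The first key may be saved as a potential simple key, while the second
+// must use the explicit '?' form because a simple key cannot span lines or
+// exceed 1024 characters.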
+func yaml_parser_save_simple_key(parser *yaml_parser_t) bool { + // A simple key is required at the current position if the scanner is in + // the block context and the current column coincides with the indentation + // level. + + required := parser.flow_level == 0 && parser.indent == parser.mark.column + + // + // If the current position may start a simple key, save it. + // + if parser.simple_key_allowed { + simple_key := yaml_simple_key_t{ + possible: true, + required: required, + token_number: parser.tokens_parsed + (len(parser.tokens) - parser.tokens_head), + } + simple_key.mark = parser.mark + + if !yaml_parser_remove_simple_key(parser) { + return false + } + parser.simple_keys[len(parser.simple_keys)-1] = simple_key + } + return true +} + +// Remove a potential simple key at the current flow level. +func yaml_parser_remove_simple_key(parser *yaml_parser_t) bool { + i := len(parser.simple_keys) - 1 + if parser.simple_keys[i].possible { + // If the key is required, it is an error. + if parser.simple_keys[i].required { + return yaml_parser_set_scanner_error(parser, + "while scanning a simple key", parser.simple_keys[i].mark, + "could not find expected ':'") + } + } + // Remove the key from the stack. + parser.simple_keys[i].possible = false + return true +} + +// Increase the flow level and resize the simple key list if needed. +func yaml_parser_increase_flow_level(parser *yaml_parser_t) bool { + // Reset the simple key on the next level. + parser.simple_keys = append(parser.simple_keys, yaml_simple_key_t{}) + + // Increase the flow level. + parser.flow_level++ + return true +} + +// Decrease the flow level. +func yaml_parser_decrease_flow_level(parser *yaml_parser_t) bool { + if parser.flow_level > 0 { + parser.flow_level-- + parser.simple_keys = parser.simple_keys[:len(parser.simple_keys)-1] + } + return true +} + +// Push the current indentation level to the stack and set the new level +// the current column is greater than the indentation level. In this case, +// append or insert the specified token into the token queue. +func yaml_parser_roll_indent(parser *yaml_parser_t, column, number int, typ yaml_token_type_t, mark yaml_mark_t) bool { + // In the flow context, do nothing. + if parser.flow_level > 0 { + return true + } + + if parser.indent < column { + // Push the current indentation level to the stack and set the new + // indentation level. + parser.indents = append(parser.indents, parser.indent) + parser.indent = column + + // Create a token and insert it into the queue. + token := yaml_token_t{ + typ: typ, + start_mark: mark, + end_mark: mark, + } + if number > -1 { + number -= parser.tokens_parsed + } + yaml_insert_token(parser, number, &token) + } + return true +} + +// Pop indentation levels from the indents stack until the current level +// becomes less or equal to the column. For each indentation level, append +// the BLOCK-END token. +func yaml_parser_unroll_indent(parser *yaml_parser_t, column int) bool { + // In the flow context, do nothing. + if parser.flow_level > 0 { + return true + } + + // Loop through the indentation levels in the stack. + for parser.indent > column { + // Create a token and append it to the queue. + token := yaml_token_t{ + typ: yaml_BLOCK_END_TOKEN, + start_mark: parser.mark, + end_mark: parser.mark, + } + yaml_insert_token(parser, -1, &token) + + // Pop the indentation level. 
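+//
+// For instance, while scanning the block mapping
+//
+//     a:
+//       b: 1
+//     c: 2
+//
+// roll_indent pushes -1 -> 0 (BLOCK-MAPPING-START for the top level) and
+// then 0 -> 2 (BLOCK-MAPPING-START for "b: 1"); this loop pops back to 0
+// when "c" is found at column 0, emitting one BLOCK-END, and the final
+// unroll to -1 at stream end emits the other.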
+ parser.indent = parser.indents[len(parser.indents)-1] + parser.indents = parser.indents[:len(parser.indents)-1] + } + return true +} + +// Initialize the scanner and produce the STREAM-START token. +func yaml_parser_fetch_stream_start(parser *yaml_parser_t) bool { + + // Set the initial indentation. + parser.indent = -1 + + // Initialize the simple key stack. + parser.simple_keys = append(parser.simple_keys, yaml_simple_key_t{}) + + // A simple key is allowed at the beginning of the stream. + parser.simple_key_allowed = true + + // We have started. + parser.stream_start_produced = true + + // Create the STREAM-START token and append it to the queue. + token := yaml_token_t{ + typ: yaml_STREAM_START_TOKEN, + start_mark: parser.mark, + end_mark: parser.mark, + encoding: parser.encoding, + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the STREAM-END token and shut down the scanner. +func yaml_parser_fetch_stream_end(parser *yaml_parser_t) bool { + + // Force new line. + if parser.mark.column != 0 { + parser.mark.column = 0 + parser.mark.line++ + } + + // Reset the indentation level. + if !yaml_parser_unroll_indent(parser, -1) { + return false + } + + // Reset simple keys. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + parser.simple_key_allowed = false + + // Create the STREAM-END token and append it to the queue. + token := yaml_token_t{ + typ: yaml_STREAM_END_TOKEN, + start_mark: parser.mark, + end_mark: parser.mark, + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce a VERSION-DIRECTIVE or TAG-DIRECTIVE token. +func yaml_parser_fetch_directive(parser *yaml_parser_t) bool { + // Reset the indentation level. + if !yaml_parser_unroll_indent(parser, -1) { + return false + } + + // Reset simple keys. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + parser.simple_key_allowed = false + + // Create the YAML-DIRECTIVE or TAG-DIRECTIVE token. + token := yaml_token_t{} + if !yaml_parser_scan_directive(parser, &token) { + return false + } + // Append the token to the queue. + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the DOCUMENT-START or DOCUMENT-END token. +func yaml_parser_fetch_document_indicator(parser *yaml_parser_t, typ yaml_token_type_t) bool { + // Reset the indentation level. + if !yaml_parser_unroll_indent(parser, -1) { + return false + } + + // Reset simple keys. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + parser.simple_key_allowed = false + + // Consume the token. + start_mark := parser.mark + + skip(parser) + skip(parser) + skip(parser) + + end_mark := parser.mark + + // Create the DOCUMENT-START or DOCUMENT-END token. + token := yaml_token_t{ + typ: typ, + start_mark: start_mark, + end_mark: end_mark, + } + // Append the token to the queue. + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the FLOW-SEQUENCE-START or FLOW-MAPPING-START token. +func yaml_parser_fetch_flow_collection_start(parser *yaml_parser_t, typ yaml_token_type_t) bool { + // The indicators '[' and '{' may start a simple key. + if !yaml_parser_save_simple_key(parser) { + return false + } + + // Increase the flow level. + if !yaml_parser_increase_flow_level(parser) { + return false + } + + // A simple key may follow the indicators '[' and '{'. + parser.simple_key_allowed = true + + // Consume the token. + start_mark := parser.mark + skip(parser) + end_mark := parser.mark + + // Create the FLOW-SEQUENCE-START of FLOW-MAPPING-START token. 
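+//
+// For example, the flow sequence
+//
+//     [item 1, item 2]
+//
+// is scanned as
+//
+//     STREAM-START(utf-8)
+//     FLOW-SEQUENCE-START
+//     SCALAR("item 1",plain)
+//     FLOW-ENTRY
+//     SCALAR("item 2",plain)
+//     FLOW-SEQUENCE-END
+//     STREAM-END
+//
+// with '{' and '}' producing the FLOW-MAPPING-START/END pair instead.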
+ token := yaml_token_t{ + typ: typ, + start_mark: start_mark, + end_mark: end_mark, + } + // Append the token to the queue. + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the FLOW-SEQUENCE-END or FLOW-MAPPING-END token. +func yaml_parser_fetch_flow_collection_end(parser *yaml_parser_t, typ yaml_token_type_t) bool { + // Reset any potential simple key on the current flow level. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + // Decrease the flow level. + if !yaml_parser_decrease_flow_level(parser) { + return false + } + + // No simple keys after the indicators ']' and '}'. + parser.simple_key_allowed = false + + // Consume the token. + + start_mark := parser.mark + skip(parser) + end_mark := parser.mark + + // Create the FLOW-SEQUENCE-END of FLOW-MAPPING-END token. + token := yaml_token_t{ + typ: typ, + start_mark: start_mark, + end_mark: end_mark, + } + // Append the token to the queue. + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the FLOW-ENTRY token. +func yaml_parser_fetch_flow_entry(parser *yaml_parser_t) bool { + // Reset any potential simple keys on the current flow level. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + // Simple keys are allowed after ','. + parser.simple_key_allowed = true + + // Consume the token. + start_mark := parser.mark + skip(parser) + end_mark := parser.mark + + // Create the FLOW-ENTRY token and append it to the queue. + token := yaml_token_t{ + typ: yaml_FLOW_ENTRY_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the BLOCK-ENTRY token. +func yaml_parser_fetch_block_entry(parser *yaml_parser_t) bool { + // Check if the scanner is in the block context. + if parser.flow_level == 0 { + // Check if we are allowed to start a new entry. + if !parser.simple_key_allowed { + return yaml_parser_set_scanner_error(parser, "", parser.mark, + "block sequence entries are not allowed in this context") + } + // Add the BLOCK-SEQUENCE-START token if needed. + if !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_SEQUENCE_START_TOKEN, parser.mark) { + return false + } + } else { + // It is an error for the '-' indicator to occur in the flow context, + // but we let the Parser detect and report about it because the Parser + // is able to point to the context. + } + + // Reset any potential simple keys on the current flow level. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + // Simple keys are allowed after '-'. + parser.simple_key_allowed = true + + // Consume the token. + start_mark := parser.mark + skip(parser) + end_mark := parser.mark + + // Create the BLOCK-ENTRY token and append it to the queue. + token := yaml_token_t{ + typ: yaml_BLOCK_ENTRY_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the KEY token. +func yaml_parser_fetch_key(parser *yaml_parser_t) bool { + + // In the block context, additional checks are required. + if parser.flow_level == 0 { + // Check if we are allowed to start a new key (not nessesary simple). + if !parser.simple_key_allowed { + return yaml_parser_set_scanner_error(parser, "", parser.mark, + "mapping keys are not allowed in this context") + } + // Add the BLOCK-MAPPING-START token if needed. 
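+//
+// For a plain mapping with no explicit '?', such as
+//
+//     key: value
+//
+// SCALAR("key",plain) is queued first; only when ':' is reached does
+// yaml_parser_fetch_value insert KEY (and, via roll_indent,
+// BLOCK-MAPPING-START) back at the position recorded for the simple key,
+// yielding
+//
+//     STREAM-START(utf-8)
+//     BLOCK-MAPPING-START
+//     KEY
+//     SCALAR("key",plain)
+//     VALUE
+//     SCALAR("value",plain)
+//     BLOCK-END
+//     STREAM-END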
+ if !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_MAPPING_START_TOKEN, parser.mark) { + return false + } + } + + // Reset any potential simple keys on the current flow level. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + // Simple keys are allowed after '?' in the block context. + parser.simple_key_allowed = parser.flow_level == 0 + + // Consume the token. + start_mark := parser.mark + skip(parser) + end_mark := parser.mark + + // Create the KEY token and append it to the queue. + token := yaml_token_t{ + typ: yaml_KEY_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the VALUE token. +func yaml_parser_fetch_value(parser *yaml_parser_t) bool { + + simple_key := &parser.simple_keys[len(parser.simple_keys)-1] + + // Have we found a simple key? + if simple_key.possible { + // Create the KEY token and insert it into the queue. + token := yaml_token_t{ + typ: yaml_KEY_TOKEN, + start_mark: simple_key.mark, + end_mark: simple_key.mark, + } + yaml_insert_token(parser, simple_key.token_number-parser.tokens_parsed, &token) + + // In the block context, we may need to add the BLOCK-MAPPING-START token. + if !yaml_parser_roll_indent(parser, simple_key.mark.column, + simple_key.token_number, + yaml_BLOCK_MAPPING_START_TOKEN, simple_key.mark) { + return false + } + + // Remove the simple key. + simple_key.possible = false + + // A simple key cannot follow another simple key. + parser.simple_key_allowed = false + + } else { + // The ':' indicator follows a complex key. + + // In the block context, extra checks are required. + if parser.flow_level == 0 { + + // Check if we are allowed to start a complex value. + if !parser.simple_key_allowed { + return yaml_parser_set_scanner_error(parser, "", parser.mark, + "mapping values are not allowed in this context") + } + + // Add the BLOCK-MAPPING-START token if needed. + if !yaml_parser_roll_indent(parser, parser.mark.column, -1, yaml_BLOCK_MAPPING_START_TOKEN, parser.mark) { + return false + } + } + + // Simple keys after ':' are allowed in the block context. + parser.simple_key_allowed = parser.flow_level == 0 + } + + // Consume the token. + start_mark := parser.mark + skip(parser) + end_mark := parser.mark + + // Create the VALUE token and append it to the queue. + token := yaml_token_t{ + typ: yaml_VALUE_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the ALIAS or ANCHOR token. +func yaml_parser_fetch_anchor(parser *yaml_parser_t, typ yaml_token_type_t) bool { + // An anchor or an alias could be a simple key. + if !yaml_parser_save_simple_key(parser) { + return false + } + + // A simple key cannot follow an anchor or an alias. + parser.simple_key_allowed = false + + // Create the ALIAS or ANCHOR token and append it to the queue. + var token yaml_token_t + if !yaml_parser_scan_anchor(parser, &token, typ) { + return false + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the TAG token. +func yaml_parser_fetch_tag(parser *yaml_parser_t) bool { + // A tag could be a simple key. + if !yaml_parser_save_simple_key(parser) { + return false + } + + // A simple key cannot follow a tag. + parser.simple_key_allowed = false + + // Create the TAG token and append it to the queue. 
+ var token yaml_token_t + if !yaml_parser_scan_tag(parser, &token) { + return false + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the SCALAR(...,literal) or SCALAR(...,folded) tokens. +func yaml_parser_fetch_block_scalar(parser *yaml_parser_t, literal bool) bool { + // Remove any potential simple keys. + if !yaml_parser_remove_simple_key(parser) { + return false + } + + // A simple key may follow a block scalar. + parser.simple_key_allowed = true + + // Create the SCALAR token and append it to the queue. + var token yaml_token_t + if !yaml_parser_scan_block_scalar(parser, &token, literal) { + return false + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the SCALAR(...,single-quoted) or SCALAR(...,double-quoted) tokens. +func yaml_parser_fetch_flow_scalar(parser *yaml_parser_t, single bool) bool { + // A plain scalar could be a simple key. + if !yaml_parser_save_simple_key(parser) { + return false + } + + // A simple key cannot follow a flow scalar. + parser.simple_key_allowed = false + + // Create the SCALAR token and append it to the queue. + var token yaml_token_t + if !yaml_parser_scan_flow_scalar(parser, &token, single) { + return false + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Produce the SCALAR(...,plain) token. +func yaml_parser_fetch_plain_scalar(parser *yaml_parser_t) bool { + // A plain scalar could be a simple key. + if !yaml_parser_save_simple_key(parser) { + return false + } + + // A simple key cannot follow a flow scalar. + parser.simple_key_allowed = false + + // Create the SCALAR token and append it to the queue. + var token yaml_token_t + if !yaml_parser_scan_plain_scalar(parser, &token) { + return false + } + yaml_insert_token(parser, -1, &token) + return true +} + +// Eat whitespaces and comments until the next token is found. +func yaml_parser_scan_to_next_token(parser *yaml_parser_t) bool { + + // Until the next token is not found. + for { + // Allow the BOM mark to start a line. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + if parser.mark.column == 0 && is_bom(parser.buffer, parser.buffer_pos) { + skip(parser) + } + + // Eat whitespaces. + // Tabs are allowed: + // - in the flow context + // - in the block context, but not at the beginning of the line or + // after '-', '?', or ':' (complex value). + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + for parser.buffer[parser.buffer_pos] == ' ' || ((parser.flow_level > 0 || !parser.simple_key_allowed) && parser.buffer[parser.buffer_pos] == '\t') { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Eat a comment until a line break. + if parser.buffer[parser.buffer_pos] == '#' { + for !is_breakz(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + } + + // If it is a line break, eat it. + if is_break(parser.buffer, parser.buffer_pos) { + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + skip_line(parser) + + // In the block context, a new line may start a simple key. + if parser.flow_level == 0 { + parser.simple_key_allowed = true + } + } else { + break // We have found a token. + } + } + + return true +} + +// Scan a YAML-DIRECTIVE or TAG-DIRECTIVE token. +// +// Scope: +// %YAML 1.1 # a comment \n +// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +// %TAG !yaml! 
tag:yaml.org,2002: \n +// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +// +func yaml_parser_scan_directive(parser *yaml_parser_t, token *yaml_token_t) bool { + // Eat '%'. + start_mark := parser.mark + skip(parser) + + // Scan the directive name. + var name []byte + if !yaml_parser_scan_directive_name(parser, start_mark, &name) { + return false + } + + // Is it a YAML directive? + if bytes.Equal(name, []byte("YAML")) { + // Scan the VERSION directive value. + var major, minor int8 + if !yaml_parser_scan_version_directive_value(parser, start_mark, &major, &minor) { + return false + } + end_mark := parser.mark + + // Create a VERSION-DIRECTIVE token. + *token = yaml_token_t{ + typ: yaml_VERSION_DIRECTIVE_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + major: major, + minor: minor, + } + + // Is it a TAG directive? + } else if bytes.Equal(name, []byte("TAG")) { + // Scan the TAG directive value. + var handle, prefix []byte + if !yaml_parser_scan_tag_directive_value(parser, start_mark, &handle, &prefix) { + return false + } + end_mark := parser.mark + + // Create a TAG-DIRECTIVE token. + *token = yaml_token_t{ + typ: yaml_TAG_DIRECTIVE_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + value: handle, + prefix: prefix, + } + + // Unknown directive. + } else { + yaml_parser_set_scanner_error(parser, "while scanning a directive", + start_mark, "found unknown directive name") + return false + } + + // Eat the rest of the line including any comments. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + for is_blank(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + if parser.buffer[parser.buffer_pos] == '#' { + for !is_breakz(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + } + + // Check if we are at the end of the line. + if !is_breakz(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a directive", + start_mark, "did not find expected comment or line break") + return false + } + + // Eat a line break. + if is_break(parser.buffer, parser.buffer_pos) { + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + skip_line(parser) + } + + return true +} + +// Scan the directive name. +// +// Scope: +// %YAML 1.1 # a comment \n +// ^^^^ +// %TAG !yaml! tag:yaml.org,2002: \n +// ^^^ +// +func yaml_parser_scan_directive_name(parser *yaml_parser_t, start_mark yaml_mark_t, name *[]byte) bool { + // Consume the directive name. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + var s []byte + for is_alpha(parser.buffer, parser.buffer_pos) { + s = read(parser, s) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Check if the name is empty. + if len(s) == 0 { + yaml_parser_set_scanner_error(parser, "while scanning a directive", + start_mark, "could not find expected directive name") + return false + } + + // Check for an blank character after the name. + if !is_blankz(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a directive", + start_mark, "found unexpected non-alphabetical character") + return false + } + *name = s + return true +} + +// Scan the value of VERSION-DIRECTIVE. 
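+//
+// For example, the directives
+//
+//     %YAML 1.1
+//     %TAG !yaml! tag:yaml.org,2002:
+//
+// are scanned into VERSION-DIRECTIVE(1,1) and
+// TAG-DIRECTIVE("!yaml!","tag:yaml.org,2002:") tokens. Each version part is
+// limited to two digits (max_number_length below), so a directive such as
+// "%YAML 1.123" is rejected with "found extremely long version number".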
+// +// Scope: +// %YAML 1.1 # a comment \n +// ^^^^^^ +func yaml_parser_scan_version_directive_value(parser *yaml_parser_t, start_mark yaml_mark_t, major, minor *int8) bool { + // Eat whitespaces. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + for is_blank(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Consume the major version number. + if !yaml_parser_scan_version_directive_number(parser, start_mark, major) { + return false + } + + // Eat '.'. + if parser.buffer[parser.buffer_pos] != '.' { + return yaml_parser_set_scanner_error(parser, "while scanning a %YAML directive", + start_mark, "did not find expected digit or '.' character") + } + + skip(parser) + + // Consume the minor version number. + if !yaml_parser_scan_version_directive_number(parser, start_mark, minor) { + return false + } + return true +} + +const max_number_length = 2 + +// Scan the version number of VERSION-DIRECTIVE. +// +// Scope: +// %YAML 1.1 # a comment \n +// ^ +// %YAML 1.1 # a comment \n +// ^ +func yaml_parser_scan_version_directive_number(parser *yaml_parser_t, start_mark yaml_mark_t, number *int8) bool { + + // Repeat while the next character is digit. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + var value, length int8 + for is_digit(parser.buffer, parser.buffer_pos) { + // Check if the number is too long. + length++ + if length > max_number_length { + return yaml_parser_set_scanner_error(parser, "while scanning a %YAML directive", + start_mark, "found extremely long version number") + } + value = value*10 + int8(as_digit(parser.buffer, parser.buffer_pos)) + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Check if the number was present. + if length == 0 { + return yaml_parser_set_scanner_error(parser, "while scanning a %YAML directive", + start_mark, "did not find expected version number") + } + *number = value + return true +} + +// Scan the value of a TAG-DIRECTIVE token. +// +// Scope: +// %TAG !yaml! tag:yaml.org,2002: \n +// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +// +func yaml_parser_scan_tag_directive_value(parser *yaml_parser_t, start_mark yaml_mark_t, handle, prefix *[]byte) bool { + var handle_value, prefix_value []byte + + // Eat whitespaces. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + for is_blank(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Scan a handle. + if !yaml_parser_scan_tag_handle(parser, true, start_mark, &handle_value) { + return false + } + + // Expect a whitespace. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + if !is_blank(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a %TAG directive", + start_mark, "did not find expected whitespace") + return false + } + + // Eat whitespaces. + for is_blank(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Scan a prefix. + if !yaml_parser_scan_tag_uri(parser, true, nil, start_mark, &prefix_value) { + return false + } + + // Expect a whitespace or line break. 
+ if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + if !is_blankz(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a %TAG directive", + start_mark, "did not find expected whitespace or line break") + return false + } + + *handle = handle_value + *prefix = prefix_value + return true +} + +func yaml_parser_scan_anchor(parser *yaml_parser_t, token *yaml_token_t, typ yaml_token_type_t) bool { + var s []byte + + // Eat the indicator character. + start_mark := parser.mark + skip(parser) + + // Consume the value. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + for is_alpha(parser.buffer, parser.buffer_pos) { + s = read(parser, s) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + end_mark := parser.mark + + /* + * Check if length of the anchor is greater than 0 and it is followed by + * a whitespace character or one of the indicators: + * + * '?', ':', ',', ']', '}', '%', '@', '`'. + */ + + if len(s) == 0 || + !(is_blankz(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == '?' || + parser.buffer[parser.buffer_pos] == ':' || parser.buffer[parser.buffer_pos] == ',' || + parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '}' || + parser.buffer[parser.buffer_pos] == '%' || parser.buffer[parser.buffer_pos] == '@' || + parser.buffer[parser.buffer_pos] == '`') { + context := "while scanning an alias" + if typ == yaml_ANCHOR_TOKEN { + context = "while scanning an anchor" + } + yaml_parser_set_scanner_error(parser, context, start_mark, + "did not find expected alphabetic or numeric character") + return false + } + + // Create a token. + *token = yaml_token_t{ + typ: typ, + start_mark: start_mark, + end_mark: end_mark, + value: s, + } + + return true +} + +/* + * Scan a TAG token. + */ + +func yaml_parser_scan_tag(parser *yaml_parser_t, token *yaml_token_t) bool { + var handle, suffix []byte + + start_mark := parser.mark + + // Check if the tag is in the canonical form. + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + + if parser.buffer[parser.buffer_pos+1] == '<' { + // Keep the handle as '' + + // Eat '!<' + skip(parser) + skip(parser) + + // Consume the tag value. + if !yaml_parser_scan_tag_uri(parser, false, nil, start_mark, &suffix) { + return false + } + + // Check for '>' and eat it. + if parser.buffer[parser.buffer_pos] != '>' { + yaml_parser_set_scanner_error(parser, "while scanning a tag", + start_mark, "did not find the expected '>'") + return false + } + + skip(parser) + } else { + // The tag has either the '!suffix' or the '!handle!suffix' form. + + // First, try to scan a handle. + if !yaml_parser_scan_tag_handle(parser, false, start_mark, &handle) { + return false + } + + // Check if it is, indeed, handle. + if handle[0] == '!' && len(handle) > 1 && handle[len(handle)-1] == '!' { + // Scan the suffix now. + if !yaml_parser_scan_tag_uri(parser, false, nil, start_mark, &suffix) { + return false + } + } else { + // It wasn't a handle after all. Scan the rest of the tag. + if !yaml_parser_scan_tag_uri(parser, false, handle, start_mark, &suffix) { + return false + } + + // Set the handle to '!'. + handle = []byte{'!'} + + // A special case: the '!' tag. Set the handle to '' and the + // suffix to '!'. + if len(suffix) == 0 { + handle, suffix = suffix, handle + } + } + } + + // Check the character which ends the tag. 
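+//
+// For example:
+//
+//     !!str                     -> handle "!!", suffix "str"
+//     !local                    -> handle "!",  suffix "local"
+//     !<tag:yaml.org,2002:str>  -> handle "",   suffix "tag:yaml.org,2002:str"
+//     !                         -> handle "",   suffix "!"   (the special '!' tag)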
+ if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + if !is_blankz(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a tag", + start_mark, "did not find expected whitespace or line break") + return false + } + + end_mark := parser.mark + + // Create a token. + *token = yaml_token_t{ + typ: yaml_TAG_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + value: handle, + suffix: suffix, + } + return true +} + +// Scan a tag handle. +func yaml_parser_scan_tag_handle(parser *yaml_parser_t, directive bool, start_mark yaml_mark_t, handle *[]byte) bool { + // Check the initial '!' character. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + if parser.buffer[parser.buffer_pos] != '!' { + yaml_parser_set_scanner_tag_error(parser, directive, + start_mark, "did not find expected '!'") + return false + } + + var s []byte + + // Copy the '!' character. + s = read(parser, s) + + // Copy all subsequent alphabetical and numerical characters. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + for is_alpha(parser.buffer, parser.buffer_pos) { + s = read(parser, s) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Check if the trailing character is '!' and copy it. + if parser.buffer[parser.buffer_pos] == '!' { + s = read(parser, s) + } else { + // It's either the '!' tag or not really a tag handle. If it's a %TAG + // directive, it's an error. If it's a tag token, it must be a part of URI. + if directive && string(s) != "!" { + yaml_parser_set_scanner_tag_error(parser, directive, + start_mark, "did not find expected '!'") + return false + } + } + + *handle = s + return true +} + +// Scan a tag. +func yaml_parser_scan_tag_uri(parser *yaml_parser_t, directive bool, head []byte, start_mark yaml_mark_t, uri *[]byte) bool { + //size_t length = head ? strlen((char *)head) : 0 + var s []byte + hasTag := len(head) > 0 + + // Copy the head if needed. + // + // Note that we don't copy the leading '!' character. + if len(head) > 1 { + s = append(s, head[1:]...) + } + + // Scan the tag. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + // The set of characters that may appear in URI is as follows: + // + // '0'-'9', 'A'-'Z', 'a'-'z', '_', '-', ';', '/', '?', ':', '@', '&', + // '=', '+', '$', ',', '.', '!', '~', '*', '\'', '(', ')', '[', ']', + // '%'. + // [Go] Convert this into more reasonable logic. + for is_alpha(parser.buffer, parser.buffer_pos) || parser.buffer[parser.buffer_pos] == ';' || + parser.buffer[parser.buffer_pos] == '/' || parser.buffer[parser.buffer_pos] == '?' || + parser.buffer[parser.buffer_pos] == ':' || parser.buffer[parser.buffer_pos] == '@' || + parser.buffer[parser.buffer_pos] == '&' || parser.buffer[parser.buffer_pos] == '=' || + parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '$' || + parser.buffer[parser.buffer_pos] == ',' || parser.buffer[parser.buffer_pos] == '.' || + parser.buffer[parser.buffer_pos] == '!' || parser.buffer[parser.buffer_pos] == '~' || + parser.buffer[parser.buffer_pos] == '*' || parser.buffer[parser.buffer_pos] == '\'' || + parser.buffer[parser.buffer_pos] == '(' || parser.buffer[parser.buffer_pos] == ')' || + parser.buffer[parser.buffer_pos] == '[' || parser.buffer[parser.buffer_pos] == ']' || + parser.buffer[parser.buffer_pos] == '%' { + // Check if it is a URI-escape sequence. 
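+//
+// A '%' sequence is decoded below, octet by octet, into UTF-8; for example
+// the verbatim tag
+//
+//     !<tag:example.com,2002:caf%C3%A9>
+//
+// ends up with the suffix "tag:example.com,2002:café", since %C3%A9 decodes
+// to the two octets of the UTF-8 encoding of 'é'.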
+ if parser.buffer[parser.buffer_pos] == '%' { + if !yaml_parser_scan_uri_escapes(parser, directive, start_mark, &s) { + return false + } + } else { + s = read(parser, s) + } + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + hasTag = true + } + + if !hasTag { + yaml_parser_set_scanner_tag_error(parser, directive, + start_mark, "did not find expected tag URI") + return false + } + *uri = s + return true +} + +// Decode an URI-escape sequence corresponding to a single UTF-8 character. +func yaml_parser_scan_uri_escapes(parser *yaml_parser_t, directive bool, start_mark yaml_mark_t, s *[]byte) bool { + + // Decode the required number of characters. + w := 1024 + for w > 0 { + // Check for a URI-escaped octet. + if parser.unread < 3 && !yaml_parser_update_buffer(parser, 3) { + return false + } + + if !(parser.buffer[parser.buffer_pos] == '%' && + is_hex(parser.buffer, parser.buffer_pos+1) && + is_hex(parser.buffer, parser.buffer_pos+2)) { + return yaml_parser_set_scanner_tag_error(parser, directive, + start_mark, "did not find URI escaped octet") + } + + // Get the octet. + octet := byte((as_hex(parser.buffer, parser.buffer_pos+1) << 4) + as_hex(parser.buffer, parser.buffer_pos+2)) + + // If it is the leading octet, determine the length of the UTF-8 sequence. + if w == 1024 { + w = width(octet) + if w == 0 { + return yaml_parser_set_scanner_tag_error(parser, directive, + start_mark, "found an incorrect leading UTF-8 octet") + } + } else { + // Check if the trailing octet is correct. + if octet&0xC0 != 0x80 { + return yaml_parser_set_scanner_tag_error(parser, directive, + start_mark, "found an incorrect trailing UTF-8 octet") + } + } + + // Copy the octet and move the pointers. + *s = append(*s, octet) + skip(parser) + skip(parser) + skip(parser) + w-- + } + return true +} + +// Scan a block scalar. +func yaml_parser_scan_block_scalar(parser *yaml_parser_t, token *yaml_token_t, literal bool) bool { + // Eat the indicator '|' or '>'. + start_mark := parser.mark + skip(parser) + + // Scan the additional block scalar indicators. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + // Check for a chomping indicator. + var chomping, increment int + if parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '-' { + // Set the chomping method and eat the indicator. + if parser.buffer[parser.buffer_pos] == '+' { + chomping = +1 + } else { + chomping = -1 + } + skip(parser) + + // Check for an indentation indicator. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + if is_digit(parser.buffer, parser.buffer_pos) { + // Check that the indentation is greater than 0. + if parser.buffer[parser.buffer_pos] == '0' { + yaml_parser_set_scanner_error(parser, "while scanning a block scalar", + start_mark, "found an indentation indicator equal to 0") + return false + } + + // Get the indentation level and eat the indicator. + increment = as_digit(parser.buffer, parser.buffer_pos) + skip(parser) + } + + } else if is_digit(parser.buffer, parser.buffer_pos) { + // Do the same as above, but in the opposite order. 
+ + if parser.buffer[parser.buffer_pos] == '0' { + yaml_parser_set_scanner_error(parser, "while scanning a block scalar", + start_mark, "found an indentation indicator equal to 0") + return false + } + increment = as_digit(parser.buffer, parser.buffer_pos) + skip(parser) + + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + if parser.buffer[parser.buffer_pos] == '+' || parser.buffer[parser.buffer_pos] == '-' { + if parser.buffer[parser.buffer_pos] == '+' { + chomping = +1 + } else { + chomping = -1 + } + skip(parser) + } + } + + // Eat whitespaces and comments to the end of the line. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + for is_blank(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + if parser.buffer[parser.buffer_pos] == '#' { + for !is_breakz(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + } + + // Check if we are at the end of the line. + if !is_breakz(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a block scalar", + start_mark, "did not find expected comment or line break") + return false + } + + // Eat a line break. + if is_break(parser.buffer, parser.buffer_pos) { + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + skip_line(parser) + } + + end_mark := parser.mark + + // Set the indentation level if it was specified. + var indent int + if increment > 0 { + if parser.indent >= 0 { + indent = parser.indent + increment + } else { + indent = increment + } + } + + // Scan the leading line breaks and determine the indentation level if needed. + var s, leading_break, trailing_breaks []byte + if !yaml_parser_scan_block_scalar_breaks(parser, &indent, &trailing_breaks, start_mark, &end_mark) { + return false + } + + // Scan the block scalar content. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + var leading_blank, trailing_blank bool + for parser.mark.column == indent && !is_z(parser.buffer, parser.buffer_pos) { + // We are at the beginning of a non-empty line. + + // Is it a trailing whitespace? + trailing_blank = is_blank(parser.buffer, parser.buffer_pos) + + // Check if we need to fold the leading line break. + if !literal && !leading_blank && !trailing_blank && len(leading_break) > 0 && leading_break[0] == '\n' { + // Do we need to join the lines by space? + if len(trailing_breaks) == 0 { + s = append(s, ' ') + } + } else { + s = append(s, leading_break...) + } + leading_break = leading_break[:0] + + // Append the remaining line breaks. + s = append(s, trailing_breaks...) + trailing_breaks = trailing_breaks[:0] + + // Is it a leading whitespace? + leading_blank = is_blank(parser.buffer, parser.buffer_pos) + + // Consume the current line. + for !is_breakz(parser.buffer, parser.buffer_pos) { + s = read(parser, s) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Consume the line break. + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + + leading_break = read_line(parser, leading_break) + + // Eat the following indentation spaces and line breaks. + if !yaml_parser_scan_block_scalar_breaks(parser, &indent, &trailing_breaks, start_mark, &end_mark) { + return false + } + } + + // Chomp the tail. 
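+//
+// For example, the folded block scalar
+//
+//     folded: >-
+//       line 1
+//       line 2
+//
+// is scanned with chomping == -1 (strip) and a detected indent of 2; the
+// single break between the lines is folded into a space, producing
+// SCALAR("line 1 line 2",folded), whereas "|+" would keep the literal
+// breaks and any trailing ones.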
+ if chomping != -1 { + s = append(s, leading_break...) + } + if chomping == 1 { + s = append(s, trailing_breaks...) + } + + // Create a token. + *token = yaml_token_t{ + typ: yaml_SCALAR_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + value: s, + style: yaml_LITERAL_SCALAR_STYLE, + } + if !literal { + token.style = yaml_FOLDED_SCALAR_STYLE + } + return true +} + +// Scan indentation spaces and line breaks for a block scalar. Determine the +// indentation level if needed. +func yaml_parser_scan_block_scalar_breaks(parser *yaml_parser_t, indent *int, breaks *[]byte, start_mark yaml_mark_t, end_mark *yaml_mark_t) bool { + *end_mark = parser.mark + + // Eat the indentation spaces and line breaks. + max_indent := 0 + for { + // Eat the indentation spaces. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + for (*indent == 0 || parser.mark.column < *indent) && is_space(parser.buffer, parser.buffer_pos) { + skip(parser) + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + if parser.mark.column > max_indent { + max_indent = parser.mark.column + } + + // Check for a tab character messing the indentation. + if (*indent == 0 || parser.mark.column < *indent) && is_tab(parser.buffer, parser.buffer_pos) { + return yaml_parser_set_scanner_error(parser, "while scanning a block scalar", + start_mark, "found a tab character where an indentation space is expected") + } + + // Have we found a non-empty line? + if !is_break(parser.buffer, parser.buffer_pos) { + break + } + + // Consume the line break. + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + // [Go] Should really be returning breaks instead. + *breaks = read_line(parser, *breaks) + *end_mark = parser.mark + } + + // Determine the indentation level if needed. + if *indent == 0 { + *indent = max_indent + if *indent < parser.indent+1 { + *indent = parser.indent + 1 + } + if *indent < 1 { + *indent = 1 + } + } + return true +} + +// Scan a quoted scalar. +func yaml_parser_scan_flow_scalar(parser *yaml_parser_t, token *yaml_token_t, single bool) bool { + // Eat the left quote. + start_mark := parser.mark + skip(parser) + + // Consume the content of the quoted scalar. + var s, leading_break, trailing_breaks, whitespaces []byte + for { + // Check that there are no document indicators at the beginning of the line. + if parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) { + return false + } + + if parser.mark.column == 0 && + ((parser.buffer[parser.buffer_pos+0] == '-' && + parser.buffer[parser.buffer_pos+1] == '-' && + parser.buffer[parser.buffer_pos+2] == '-') || + (parser.buffer[parser.buffer_pos+0] == '.' && + parser.buffer[parser.buffer_pos+1] == '.' && + parser.buffer[parser.buffer_pos+2] == '.')) && + is_blankz(parser.buffer, parser.buffer_pos+3) { + yaml_parser_set_scanner_error(parser, "while scanning a quoted scalar", + start_mark, "found unexpected document indicator") + return false + } + + // Check for EOF. + if is_z(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a quoted scalar", + start_mark, "found unexpected end of stream") + return false + } + + // Consume non-blank characters. + leading_blanks := false + for !is_blankz(parser.buffer, parser.buffer_pos) { + if single && parser.buffer[parser.buffer_pos] == '\'' && parser.buffer[parser.buffer_pos+1] == '\'' { + // Is is an escaped single quote. 
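+// For example, 'it''s' is scanned as SCALAR("it's",single-quoted); the
+// doubled quote is the only escape recognized in this style, so backslash
+// sequences pass through literally.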
+ s = append(s, '\'') + skip(parser) + skip(parser) + + } else if single && parser.buffer[parser.buffer_pos] == '\'' { + // It is a right single quote. + break + } else if !single && parser.buffer[parser.buffer_pos] == '"' { + // It is a right double quote. + break + + } else if !single && parser.buffer[parser.buffer_pos] == '\\' && is_break(parser.buffer, parser.buffer_pos+1) { + // It is an escaped line break. + if parser.unread < 3 && !yaml_parser_update_buffer(parser, 3) { + return false + } + skip(parser) + skip_line(parser) + leading_blanks = true + break + + } else if !single && parser.buffer[parser.buffer_pos] == '\\' { + // It is an escape sequence. + code_length := 0 + + // Check the escape character. + switch parser.buffer[parser.buffer_pos+1] { + case '0': + s = append(s, 0) + case 'a': + s = append(s, '\x07') + case 'b': + s = append(s, '\x08') + case 't', '\t': + s = append(s, '\x09') + case 'n': + s = append(s, '\x0A') + case 'v': + s = append(s, '\x0B') + case 'f': + s = append(s, '\x0C') + case 'r': + s = append(s, '\x0D') + case 'e': + s = append(s, '\x1B') + case ' ': + s = append(s, '\x20') + case '"': + s = append(s, '"') + case '\'': + s = append(s, '\'') + case '\\': + s = append(s, '\\') + case 'N': // NEL (#x85) + s = append(s, '\xC2') + s = append(s, '\x85') + case '_': // #xA0 + s = append(s, '\xC2') + s = append(s, '\xA0') + case 'L': // LS (#x2028) + s = append(s, '\xE2') + s = append(s, '\x80') + s = append(s, '\xA8') + case 'P': // PS (#x2029) + s = append(s, '\xE2') + s = append(s, '\x80') + s = append(s, '\xA9') + case 'x': + code_length = 2 + case 'u': + code_length = 4 + case 'U': + code_length = 8 + default: + yaml_parser_set_scanner_error(parser, "while parsing a quoted scalar", + start_mark, "found unknown escape character") + return false + } + + skip(parser) + skip(parser) + + // Consume an arbitrary escape code. + if code_length > 0 { + var value int + + // Scan the character value. + if parser.unread < code_length && !yaml_parser_update_buffer(parser, code_length) { + return false + } + for k := 0; k < code_length; k++ { + if !is_hex(parser.buffer, parser.buffer_pos+k) { + yaml_parser_set_scanner_error(parser, "while parsing a quoted scalar", + start_mark, "did not find expected hexdecimal number") + return false + } + value = (value << 4) + as_hex(parser.buffer, parser.buffer_pos+k) + } + + // Check the value and write the character. + if (value >= 0xD800 && value <= 0xDFFF) || value > 0x10FFFF { + yaml_parser_set_scanner_error(parser, "while parsing a quoted scalar", + start_mark, "found invalid Unicode character escape code") + return false + } + if value <= 0x7F { + s = append(s, byte(value)) + } else if value <= 0x7FF { + s = append(s, byte(0xC0+(value>>6))) + s = append(s, byte(0x80+(value&0x3F))) + } else if value <= 0xFFFF { + s = append(s, byte(0xE0+(value>>12))) + s = append(s, byte(0x80+((value>>6)&0x3F))) + s = append(s, byte(0x80+(value&0x3F))) + } else { + s = append(s, byte(0xF0+(value>>18))) + s = append(s, byte(0x80+((value>>12)&0x3F))) + s = append(s, byte(0x80+((value>>6)&0x3F))) + s = append(s, byte(0x80+(value&0x3F))) + } + + // Advance the pointer. + for k := 0; k < code_length; k++ { + skip(parser) + } + } + } else { + // It is a non-escaped non-blank character. + s = read(parser, s) + } + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + } + + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + // Check if we are at the end of the scalar. 
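+//
+// For example, in the double-quoted scalar
+//
+//     "caf\xE9 \u263A\n"
+//
+// \xE9 is decoded to code point U+00E9 and re-emitted as the UTF-8 bytes
+// C3 A9, \u263A becomes E2 98 BA, and \n becomes a literal line feed, so
+// the resulting token value is "café ☺" followed by a newline.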
+ if single { + if parser.buffer[parser.buffer_pos] == '\'' { + break + } + } else { + if parser.buffer[parser.buffer_pos] == '"' { + break + } + } + + // Consume blank characters. + for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) { + if is_blank(parser.buffer, parser.buffer_pos) { + // Consume a space or a tab character. + if !leading_blanks { + whitespaces = read(parser, whitespaces) + } else { + skip(parser) + } + } else { + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + + // Check if it is a first line break. + if !leading_blanks { + whitespaces = whitespaces[:0] + leading_break = read_line(parser, leading_break) + leading_blanks = true + } else { + trailing_breaks = read_line(parser, trailing_breaks) + } + } + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Join the whitespaces or fold line breaks. + if leading_blanks { + // Do we need to fold line breaks? + if len(leading_break) > 0 && leading_break[0] == '\n' { + if len(trailing_breaks) == 0 { + s = append(s, ' ') + } else { + s = append(s, trailing_breaks...) + } + } else { + s = append(s, leading_break...) + s = append(s, trailing_breaks...) + } + trailing_breaks = trailing_breaks[:0] + leading_break = leading_break[:0] + } else { + s = append(s, whitespaces...) + whitespaces = whitespaces[:0] + } + } + + // Eat the right quote. + skip(parser) + end_mark := parser.mark + + // Create a token. + *token = yaml_token_t{ + typ: yaml_SCALAR_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + value: s, + style: yaml_SINGLE_QUOTED_SCALAR_STYLE, + } + if !single { + token.style = yaml_DOUBLE_QUOTED_SCALAR_STYLE + } + return true +} + +// Scan a plain scalar. +func yaml_parser_scan_plain_scalar(parser *yaml_parser_t, token *yaml_token_t) bool { + + var s, leading_break, trailing_breaks, whitespaces []byte + var leading_blanks bool + var indent = parser.indent + 1 + + start_mark := parser.mark + end_mark := parser.mark + + // Consume the content of the plain scalar. + for { + // Check for a document indicator. + if parser.unread < 4 && !yaml_parser_update_buffer(parser, 4) { + return false + } + if parser.mark.column == 0 && + ((parser.buffer[parser.buffer_pos+0] == '-' && + parser.buffer[parser.buffer_pos+1] == '-' && + parser.buffer[parser.buffer_pos+2] == '-') || + (parser.buffer[parser.buffer_pos+0] == '.' && + parser.buffer[parser.buffer_pos+1] == '.' && + parser.buffer[parser.buffer_pos+2] == '.')) && + is_blankz(parser.buffer, parser.buffer_pos+3) { + break + } + + // Check for a comment. + if parser.buffer[parser.buffer_pos] == '#' { + break + } + + // Consume non-blank characters. + for !is_blankz(parser.buffer, parser.buffer_pos) { + + // Check for indicators that may end a plain scalar. + if (parser.buffer[parser.buffer_pos] == ':' && is_blankz(parser.buffer, parser.buffer_pos+1)) || + (parser.flow_level > 0 && + (parser.buffer[parser.buffer_pos] == ',' || + parser.buffer[parser.buffer_pos] == '?' || parser.buffer[parser.buffer_pos] == '[' || + parser.buffer[parser.buffer_pos] == ']' || parser.buffer[parser.buffer_pos] == '{' || + parser.buffer[parser.buffer_pos] == '}')) { + break + } + + // Check if we need to join whitespaces and breaks. + if leading_blanks || len(whitespaces) > 0 { + if leading_blanks { + // Do we need to fold line breaks? + if leading_break[0] == '\n' { + if len(trailing_breaks) == 0 { + s = append(s, ' ') + } else { + s = append(s, trailing_breaks...) 
+ } + } else { + s = append(s, leading_break...) + s = append(s, trailing_breaks...) + } + trailing_breaks = trailing_breaks[:0] + leading_break = leading_break[:0] + leading_blanks = false + } else { + s = append(s, whitespaces...) + whitespaces = whitespaces[:0] + } + } + + // Copy the character. + s = read(parser, s) + + end_mark = parser.mark + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + } + + // Is it the end? + if !(is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos)) { + break + } + + // Consume blank characters. + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + + for is_blank(parser.buffer, parser.buffer_pos) || is_break(parser.buffer, parser.buffer_pos) { + if is_blank(parser.buffer, parser.buffer_pos) { + + // Check for tab characters that abuse indentation. + if leading_blanks && parser.mark.column < indent && is_tab(parser.buffer, parser.buffer_pos) { + yaml_parser_set_scanner_error(parser, "while scanning a plain scalar", + start_mark, "found a tab character that violates indentation") + return false + } + + // Consume a space or a tab character. + if !leading_blanks { + whitespaces = read(parser, whitespaces) + } else { + skip(parser) + } + } else { + if parser.unread < 2 && !yaml_parser_update_buffer(parser, 2) { + return false + } + + // Check if it is a first line break. + if !leading_blanks { + whitespaces = whitespaces[:0] + leading_break = read_line(parser, leading_break) + leading_blanks = true + } else { + trailing_breaks = read_line(parser, trailing_breaks) + } + } + if parser.unread < 1 && !yaml_parser_update_buffer(parser, 1) { + return false + } + } + + // Check indentation level. + if parser.flow_level == 0 && parser.mark.column < indent { + break + } + } + + // Create a token. + *token = yaml_token_t{ + typ: yaml_SCALAR_TOKEN, + start_mark: start_mark, + end_mark: end_mark, + value: s, + style: yaml_PLAIN_SCALAR_STYLE, + } + + // Note that we change the 'simple_key_allowed' flag. 
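+//
+// For example, the plain scalar in
+//
+//     key: word1
+//       word2
+//
+// is folded into SCALAR("word1 word2",plain): the line break becomes a
+// single space, and scanning stops once the column drops below the scalar's
+// indent (parser.indent + 1) or an indicator such as ": " or "#" is found.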
+ if leading_blanks { + parser.simple_key_allowed = true + } + return true +} diff --git a/vendor/gopkg.in/yaml.v2/sorter.go b/vendor/gopkg.in/yaml.v2/sorter.go new file mode 100644 index 000000000..4c45e660a --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/sorter.go @@ -0,0 +1,113 @@ +package yaml + +import ( + "reflect" + "unicode" +) + +type keyList []reflect.Value + +func (l keyList) Len() int { return len(l) } +func (l keyList) Swap(i, j int) { l[i], l[j] = l[j], l[i] } +func (l keyList) Less(i, j int) bool { + a := l[i] + b := l[j] + ak := a.Kind() + bk := b.Kind() + for (ak == reflect.Interface || ak == reflect.Ptr) && !a.IsNil() { + a = a.Elem() + ak = a.Kind() + } + for (bk == reflect.Interface || bk == reflect.Ptr) && !b.IsNil() { + b = b.Elem() + bk = b.Kind() + } + af, aok := keyFloat(a) + bf, bok := keyFloat(b) + if aok && bok { + if af != bf { + return af < bf + } + if ak != bk { + return ak < bk + } + return numLess(a, b) + } + if ak != reflect.String || bk != reflect.String { + return ak < bk + } + ar, br := []rune(a.String()), []rune(b.String()) + for i := 0; i < len(ar) && i < len(br); i++ { + if ar[i] == br[i] { + continue + } + al := unicode.IsLetter(ar[i]) + bl := unicode.IsLetter(br[i]) + if al && bl { + return ar[i] < br[i] + } + if al || bl { + return bl + } + var ai, bi int + var an, bn int64 + if ar[i] == '0' || br[i] == '0' { + for j := i-1; j >= 0 && unicode.IsDigit(ar[j]); j-- { + if ar[j] != '0' { + an = 1 + bn = 1 + break + } + } + } + for ai = i; ai < len(ar) && unicode.IsDigit(ar[ai]); ai++ { + an = an*10 + int64(ar[ai]-'0') + } + for bi = i; bi < len(br) && unicode.IsDigit(br[bi]); bi++ { + bn = bn*10 + int64(br[bi]-'0') + } + if an != bn { + return an < bn + } + if ai != bi { + return ai < bi + } + return ar[i] < br[i] + } + return len(ar) < len(br) +} + +// keyFloat returns a float value for v if it is a number/bool +// and whether it is a number/bool or not. +func keyFloat(v reflect.Value) (f float64, ok bool) { + switch v.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return float64(v.Int()), true + case reflect.Float32, reflect.Float64: + return v.Float(), true + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return float64(v.Uint()), true + case reflect.Bool: + if v.Bool() { + return 1, true + } + return 0, true + } + return 0, false +} + +// numLess returns whether a < b. +// a and b must necessarily have the same kind. +func numLess(a, b reflect.Value) bool { + switch a.Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return a.Int() < b.Int() + case reflect.Float32, reflect.Float64: + return a.Float() < b.Float() + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return a.Uint() < b.Uint() + case reflect.Bool: + return !a.Bool() && b.Bool() + } + panic("not a number") +} diff --git a/vendor/gopkg.in/yaml.v2/writerc.go b/vendor/gopkg.in/yaml.v2/writerc.go new file mode 100644 index 000000000..a2dde608c --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/writerc.go @@ -0,0 +1,26 @@ +package yaml + +// Set the writer error and return false. +func yaml_emitter_set_writer_error(emitter *yaml_emitter_t, problem string) bool { + emitter.error = yaml_WRITER_ERROR + emitter.problem = problem + return false +} + +// Flush the output buffer. 
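As a quick illustration of the key ordering implemented by the keyList comparator in sorter.go above (numbers and bools sort before strings, and digit runs inside string keys compare numerically), here is a minimal sketch using the package's public Marshal; the map literal and the expected order are illustrative only:

```
package main

import (
	"fmt"
	"log"

	yaml "gopkg.in/yaml.v2"
)

func main() {
	// Per keyList.Less, digit runs inside string keys are compared
	// numerically, so "item2" should sort before "item10".
	out, err := yaml.Marshal(map[string]string{
		"item10": "j",
		"item2":  "b",
		"item1":  "a",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // expected order: item1, item2, item10
}
```

This ordering makes map output deterministic, which is useful for golden-file tests and reviewable diffs.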
+func yaml_emitter_flush(emitter *yaml_emitter_t) bool { + if emitter.write_handler == nil { + panic("write handler not set") + } + + // Check if the buffer is empty. + if emitter.buffer_pos == 0 { + return true + } + + if err := emitter.write_handler(emitter, emitter.buffer[:emitter.buffer_pos]); err != nil { + return yaml_emitter_set_writer_error(emitter, "write error: "+err.Error()) + } + emitter.buffer_pos = 0 + return true +} diff --git a/vendor/gopkg.in/yaml.v2/yaml.go b/vendor/gopkg.in/yaml.v2/yaml.go new file mode 100644 index 000000000..de85aa4cd --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/yaml.go @@ -0,0 +1,466 @@ +// Package yaml implements YAML support for the Go language. +// +// Source code and other details for the project are available at GitHub: +// +// https://github.com/go-yaml/yaml +// +package yaml + +import ( + "errors" + "fmt" + "io" + "reflect" + "strings" + "sync" +) + +// MapSlice encodes and decodes as a YAML map. +// The order of keys is preserved when encoding and decoding. +type MapSlice []MapItem + +// MapItem is an item in a MapSlice. +type MapItem struct { + Key, Value interface{} +} + +// The Unmarshaler interface may be implemented by types to customize their +// behavior when being unmarshaled from a YAML document. The UnmarshalYAML +// method receives a function that may be called to unmarshal the original +// YAML value into a field or variable. It is safe to call the unmarshal +// function parameter more than once if necessary. +type Unmarshaler interface { + UnmarshalYAML(unmarshal func(interface{}) error) error +} + +// The Marshaler interface may be implemented by types to customize their +// behavior when being marshaled into a YAML document. The returned value +// is marshaled in place of the original value implementing Marshaler. +// +// If an error is returned by MarshalYAML, the marshaling procedure stops +// and returns with the provided error. +type Marshaler interface { + MarshalYAML() (interface{}, error) +} + +// Unmarshal decodes the first document found within the in byte slice +// and assigns decoded values into the out value. +// +// Maps and pointers (to a struct, string, int, etc) are accepted as out +// values. If an internal pointer within a struct is not initialized, +// the yaml package will initialize it if necessary for unmarshalling +// the provided data. The out parameter must not be nil. +// +// The type of the decoded values should be compatible with the respective +// values in out. If one or more values cannot be decoded due to a type +// mismatches, decoding continues partially until the end of the YAML +// content, and a *yaml.TypeError is returned with details for all +// missed values. +// +// Struct fields are only unmarshalled if they are exported (have an +// upper case first letter), and are unmarshalled using the field name +// lowercased as the default key. Custom keys may be defined via the +// "yaml" name in the field tag: the content preceding the first comma +// is used as the key, and the following comma-separated options are +// used to tweak the marshalling process (see Marshal). +// Conflicting names result in a runtime error. +// +// For example: +// +// type T struct { +// F int `yaml:"a,omitempty"` +// B int +// } +// var t T +// yaml.Unmarshal([]byte("a: 1\nb: 2"), &t) +// +// See the documentation of Marshal for the format of tags and a list of +// supported tag options. 
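To make the Unmarshal documentation above concrete, a minimal sketch that decodes the same document into a tagged struct and into a MapSlice (which preserves key order); the Config type and the sample input are assumptions for illustration:

```
package main

import (
	"fmt"
	"log"

	yaml "gopkg.in/yaml.v2"
)

// Config is a hypothetical target type used only for illustration.
type Config struct {
	Name  string `yaml:"name"`
	Port  int    `yaml:"port,omitempty"`
	Debug bool   // no tag: decoded from the lowercased field name "debug"
}

func main() {
	in := []byte("name: web\nport: 8080\ndebug: true\n")

	var c Config
	if err := yaml.Unmarshal(in, &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", c)

	// MapSlice preserves the document's key order.
	var ms yaml.MapSlice
	if err := yaml.Unmarshal(in, &ms); err != nil {
		log.Fatal(err)
	}
	for _, item := range ms {
		fmt.Println(item.Key, item.Value)
	}
}
```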
+// +func Unmarshal(in []byte, out interface{}) (err error) { + return unmarshal(in, out, false) +} + +// UnmarshalStrict is like Unmarshal except that any fields that are found +// in the data that do not have corresponding struct members, or mapping +// keys that are duplicates, will result in +// an error. +func UnmarshalStrict(in []byte, out interface{}) (err error) { + return unmarshal(in, out, true) +} + +// A Decorder reads and decodes YAML values from an input stream. +type Decoder struct { + strict bool + parser *parser +} + +// NewDecoder returns a new decoder that reads from r. +// +// The decoder introduces its own buffering and may read +// data from r beyond the YAML values requested. +func NewDecoder(r io.Reader) *Decoder { + return &Decoder{ + parser: newParserFromReader(r), + } +} + +// SetStrict sets whether strict decoding behaviour is enabled when +// decoding items in the data (see UnmarshalStrict). By default, decoding is not strict. +func (dec *Decoder) SetStrict(strict bool) { + dec.strict = strict +} + +// Decode reads the next YAML-encoded value from its input +// and stores it in the value pointed to by v. +// +// See the documentation for Unmarshal for details about the +// conversion of YAML into a Go value. +func (dec *Decoder) Decode(v interface{}) (err error) { + d := newDecoder(dec.strict) + defer handleErr(&err) + node := dec.parser.parse() + if node == nil { + return io.EOF + } + out := reflect.ValueOf(v) + if out.Kind() == reflect.Ptr && !out.IsNil() { + out = out.Elem() + } + d.unmarshal(node, out) + if len(d.terrors) > 0 { + return &TypeError{d.terrors} + } + return nil +} + +func unmarshal(in []byte, out interface{}, strict bool) (err error) { + defer handleErr(&err) + d := newDecoder(strict) + p := newParser(in) + defer p.destroy() + node := p.parse() + if node != nil { + v := reflect.ValueOf(out) + if v.Kind() == reflect.Ptr && !v.IsNil() { + v = v.Elem() + } + d.unmarshal(node, v) + } + if len(d.terrors) > 0 { + return &TypeError{d.terrors} + } + return nil +} + +// Marshal serializes the value provided into a YAML document. The structure +// of the generated document will reflect the structure of the value itself. +// Maps and pointers (to struct, string, int, etc) are accepted as the in value. +// +// Struct fields are only marshalled if they are exported (have an upper case +// first letter), and are marshalled using the field name lowercased as the +// default key. Custom keys may be defined via the "yaml" name in the field +// tag: the content preceding the first comma is used as the key, and the +// following comma-separated options are used to tweak the marshalling process. +// Conflicting names result in a runtime error. +// +// The field tag format accepted is: +// +// `(...) yaml:"[][,[,]]" (...)` +// +// The following flags are currently supported: +// +// omitempty Only include the field if it's not set to the zero +// value for the type or to empty slices or maps. +// Zero valued structs will be omitted if all their public +// fields are zero, unless they implement an IsZero +// method (see the IsZeroer interface type), in which +// case the field will be included if that method returns true. +// +// flow Marshal using a flow style (useful for structs, +// sequences and maps). +// +// inline Inline the field, which must be a struct or a map, +// causing all of its fields or keys to be processed as if +// they were part of the outer struct. For maps, keys must +// not conflict with the yaml keys of other struct fields. 
+// +// In addition, if the key is "-", the field is ignored. +// +// For example: +// +// type T struct { +// F int `yaml:"a,omitempty"` +// B int +// } +// yaml.Marshal(&T{B: 2}) // Returns "b: 2\n" +// yaml.Marshal(&T{F: 1}} // Returns "a: 1\nb: 0\n" +// +func Marshal(in interface{}) (out []byte, err error) { + defer handleErr(&err) + e := newEncoder() + defer e.destroy() + e.marshalDoc("", reflect.ValueOf(in)) + e.finish() + out = e.out + return +} + +// An Encoder writes YAML values to an output stream. +type Encoder struct { + encoder *encoder +} + +// NewEncoder returns a new encoder that writes to w. +// The Encoder should be closed after use to flush all data +// to w. +func NewEncoder(w io.Writer) *Encoder { + return &Encoder{ + encoder: newEncoderWithWriter(w), + } +} + +// Encode writes the YAML encoding of v to the stream. +// If multiple items are encoded to the stream, the +// second and subsequent document will be preceded +// with a "---" document separator, but the first will not. +// +// See the documentation for Marshal for details about the conversion of Go +// values to YAML. +func (e *Encoder) Encode(v interface{}) (err error) { + defer handleErr(&err) + e.encoder.marshalDoc("", reflect.ValueOf(v)) + return nil +} + +// Close closes the encoder by writing any remaining data. +// It does not write a stream terminating string "...". +func (e *Encoder) Close() (err error) { + defer handleErr(&err) + e.encoder.finish() + return nil +} + +func handleErr(err *error) { + if v := recover(); v != nil { + if e, ok := v.(yamlError); ok { + *err = e.err + } else { + panic(v) + } + } +} + +type yamlError struct { + err error +} + +func fail(err error) { + panic(yamlError{err}) +} + +func failf(format string, args ...interface{}) { + panic(yamlError{fmt.Errorf("yaml: "+format, args...)}) +} + +// A TypeError is returned by Unmarshal when one or more fields in +// the YAML document cannot be properly decoded into the requested +// types. When this error is returned, the value is still +// unmarshaled partially. +type TypeError struct { + Errors []string +} + +func (e *TypeError) Error() string { + return fmt.Sprintf("yaml: unmarshal errors:\n %s", strings.Join(e.Errors, "\n ")) +} + +// -------------------------------------------------------------------------- +// Maintain a mapping of keys to structure field indexes + +// The code in this section was copied from mgo/bson. + +// structInfo holds details for the serialization of fields of +// a given struct. +type structInfo struct { + FieldsMap map[string]fieldInfo + FieldsList []fieldInfo + + // InlineMap is the number of the field in the struct that + // contains an ,inline map, or -1 if there's none. + InlineMap int +} + +type fieldInfo struct { + Key string + Num int + OmitEmpty bool + Flow bool + // Id holds the unique field identifier, so we can cheaply + // check for field duplicates without maintaining an extra map. + Id int + + // Inline holds the field index if the field is part of an inlined struct. 
+ Inline []int +} + +var structMap = make(map[reflect.Type]*structInfo) +var fieldMapMutex sync.RWMutex + +func getStructInfo(st reflect.Type) (*structInfo, error) { + fieldMapMutex.RLock() + sinfo, found := structMap[st] + fieldMapMutex.RUnlock() + if found { + return sinfo, nil + } + + n := st.NumField() + fieldsMap := make(map[string]fieldInfo) + fieldsList := make([]fieldInfo, 0, n) + inlineMap := -1 + for i := 0; i != n; i++ { + field := st.Field(i) + if field.PkgPath != "" && !field.Anonymous { + continue // Private field + } + + info := fieldInfo{Num: i} + + tag := field.Tag.Get("yaml") + if tag == "" && strings.Index(string(field.Tag), ":") < 0 { + tag = string(field.Tag) + } + if tag == "-" { + continue + } + + inline := false + fields := strings.Split(tag, ",") + if len(fields) > 1 { + for _, flag := range fields[1:] { + switch flag { + case "omitempty": + info.OmitEmpty = true + case "flow": + info.Flow = true + case "inline": + inline = true + default: + return nil, errors.New(fmt.Sprintf("Unsupported flag %q in tag %q of type %s", flag, tag, st)) + } + } + tag = fields[0] + } + + if inline { + switch field.Type.Kind() { + case reflect.Map: + if inlineMap >= 0 { + return nil, errors.New("Multiple ,inline maps in struct " + st.String()) + } + if field.Type.Key() != reflect.TypeOf("") { + return nil, errors.New("Option ,inline needs a map with string keys in struct " + st.String()) + } + inlineMap = info.Num + case reflect.Struct: + sinfo, err := getStructInfo(field.Type) + if err != nil { + return nil, err + } + for _, finfo := range sinfo.FieldsList { + if _, found := fieldsMap[finfo.Key]; found { + msg := "Duplicated key '" + finfo.Key + "' in struct " + st.String() + return nil, errors.New(msg) + } + if finfo.Inline == nil { + finfo.Inline = []int{i, finfo.Num} + } else { + finfo.Inline = append([]int{i}, finfo.Inline...) + } + finfo.Id = len(fieldsList) + fieldsMap[finfo.Key] = finfo + fieldsList = append(fieldsList, finfo) + } + default: + //return nil, errors.New("Option ,inline needs a struct value or map field") + return nil, errors.New("Option ,inline needs a struct value field") + } + continue + } + + if tag != "" { + info.Key = tag + } else { + info.Key = strings.ToLower(field.Name) + } + + if _, found = fieldsMap[info.Key]; found { + msg := "Duplicated key '" + info.Key + "' in struct " + st.String() + return nil, errors.New(msg) + } + + info.Id = len(fieldsList) + fieldsList = append(fieldsList, info) + fieldsMap[info.Key] = info + } + + sinfo = &structInfo{ + FieldsMap: fieldsMap, + FieldsList: fieldsList, + InlineMap: inlineMap, + } + + fieldMapMutex.Lock() + structMap[st] = sinfo + fieldMapMutex.Unlock() + return sinfo, nil +} + +// IsZeroer is used to check whether an object is zero to +// determine whether it should be omitted when marshaling +// with the omitempty flag. One notable implementation +// is time.Time. 
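A small sketch exercising the tag options that getStructInfo parses above (omitempty, flow, inline, and the "-" key); Doc and Meta are hypothetical types used only to show how the options combine:

```
package main

import (
	"fmt"
	"log"

	yaml "gopkg.in/yaml.v2"
)

// Meta and Doc are hypothetical types that exercise the tag options
// handled by getStructInfo.
type Meta struct {
	Author string `yaml:"author"`
}

type Doc struct {
	Title  string            `yaml:"title"`
	Tags   []string          `yaml:"tags,omitempty,flow"` // flow style; omitted when empty
	Meta   Meta              `yaml:",inline"`             // fields promoted into the outer mapping
	Extra  map[string]string `yaml:",inline"`             // catch-all map; keys must be strings
	Secret string            `yaml:"-"`                   // never (un)marshalled
}

func main() {
	d := Doc{
		Title: "example",
		Tags:  []string{"a", "b"},
		Meta:  Meta{Author: "someone"},
		Extra: map[string]string{"x": "1"},
	}
	out, err := yaml.Marshal(&d)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```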
+type IsZeroer interface { + IsZero() bool +} + +func isZero(v reflect.Value) bool { + kind := v.Kind() + if z, ok := v.Interface().(IsZeroer); ok { + if (kind == reflect.Ptr || kind == reflect.Interface) && v.IsNil() { + return true + } + return z.IsZero() + } + switch kind { + case reflect.String: + return len(v.String()) == 0 + case reflect.Interface, reflect.Ptr: + return v.IsNil() + case reflect.Slice: + return v.Len() == 0 + case reflect.Map: + return v.Len() == 0 + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return v.Uint() == 0 + case reflect.Bool: + return !v.Bool() + case reflect.Struct: + vt := v.Type() + for i := v.NumField() - 1; i >= 0; i-- { + if vt.Field(i).PkgPath != "" { + continue // Private field + } + if !isZero(v.Field(i)) { + return false + } + } + return true + } + return false +} diff --git a/vendor/gopkg.in/yaml.v2/yamlh.go b/vendor/gopkg.in/yaml.v2/yamlh.go new file mode 100644 index 000000000..e25cee563 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/yamlh.go @@ -0,0 +1,738 @@ +package yaml + +import ( + "fmt" + "io" +) + +// The version directive data. +type yaml_version_directive_t struct { + major int8 // The major version number. + minor int8 // The minor version number. +} + +// The tag directive data. +type yaml_tag_directive_t struct { + handle []byte // The tag handle. + prefix []byte // The tag prefix. +} + +type yaml_encoding_t int + +// The stream encoding. +const ( + // Let the parser choose the encoding. + yaml_ANY_ENCODING yaml_encoding_t = iota + + yaml_UTF8_ENCODING // The default UTF-8 encoding. + yaml_UTF16LE_ENCODING // The UTF-16-LE encoding with BOM. + yaml_UTF16BE_ENCODING // The UTF-16-BE encoding with BOM. +) + +type yaml_break_t int + +// Line break types. +const ( + // Let the parser choose the break type. + yaml_ANY_BREAK yaml_break_t = iota + + yaml_CR_BREAK // Use CR for line breaks (Mac style). + yaml_LN_BREAK // Use LN for line breaks (Unix style). + yaml_CRLN_BREAK // Use CR LN for line breaks (DOS style). +) + +type yaml_error_type_t int + +// Many bad things could happen with the parser and emitter. +const ( + // No error is produced. + yaml_NO_ERROR yaml_error_type_t = iota + + yaml_MEMORY_ERROR // Cannot allocate or reallocate a block of memory. + yaml_READER_ERROR // Cannot read or decode the input stream. + yaml_SCANNER_ERROR // Cannot scan the input stream. + yaml_PARSER_ERROR // Cannot parse the input stream. + yaml_COMPOSER_ERROR // Cannot compose a YAML document. + yaml_WRITER_ERROR // Cannot write to the output stream. + yaml_EMITTER_ERROR // Cannot emit a YAML stream. +) + +// The pointer position. +type yaml_mark_t struct { + index int // The position index. + line int // The position line. + column int // The position column. +} + +// Node Styles + +type yaml_style_t int8 + +type yaml_scalar_style_t yaml_style_t + +// Scalar styles. +const ( + // Let the emitter choose the style. + yaml_ANY_SCALAR_STYLE yaml_scalar_style_t = iota + + yaml_PLAIN_SCALAR_STYLE // The plain scalar style. + yaml_SINGLE_QUOTED_SCALAR_STYLE // The single-quoted scalar style. + yaml_DOUBLE_QUOTED_SCALAR_STYLE // The double-quoted scalar style. + yaml_LITERAL_SCALAR_STYLE // The literal scalar style. + yaml_FOLDED_SCALAR_STYLE // The folded scalar style. +) + +type yaml_sequence_style_t yaml_style_t + +// Sequence styles. 
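Building on the IsZeroer hook that isZero consults above, a minimal sketch of a type whose custom IsZero method lets omitempty drop it; Window and Job are hypothetical types, and time.Time is the notable real-world implementation mentioned in the comment:

```
package main

import (
	"fmt"
	"log"
	"time"

	yaml "gopkg.in/yaml.v2"
)

// Window and Job are hypothetical types. Window's IsZero method lets
// the omitempty flag treat an all-zero window as empty.
type Window struct {
	Start, End time.Time
}

func (w Window) IsZero() bool { return w.Start.IsZero() && w.End.IsZero() }

type Job struct {
	Name   string `yaml:"name"`
	Window Window `yaml:"window,omitempty"`
}

func main() {
	out, err := yaml.Marshal(Job{Name: "nightly"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // the zero-valued Window should be omitted
}
```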
+const ( + // Let the emitter choose the style. + yaml_ANY_SEQUENCE_STYLE yaml_sequence_style_t = iota + + yaml_BLOCK_SEQUENCE_STYLE // The block sequence style. + yaml_FLOW_SEQUENCE_STYLE // The flow sequence style. +) + +type yaml_mapping_style_t yaml_style_t + +// Mapping styles. +const ( + // Let the emitter choose the style. + yaml_ANY_MAPPING_STYLE yaml_mapping_style_t = iota + + yaml_BLOCK_MAPPING_STYLE // The block mapping style. + yaml_FLOW_MAPPING_STYLE // The flow mapping style. +) + +// Tokens + +type yaml_token_type_t int + +// Token types. +const ( + // An empty token. + yaml_NO_TOKEN yaml_token_type_t = iota + + yaml_STREAM_START_TOKEN // A STREAM-START token. + yaml_STREAM_END_TOKEN // A STREAM-END token. + + yaml_VERSION_DIRECTIVE_TOKEN // A VERSION-DIRECTIVE token. + yaml_TAG_DIRECTIVE_TOKEN // A TAG-DIRECTIVE token. + yaml_DOCUMENT_START_TOKEN // A DOCUMENT-START token. + yaml_DOCUMENT_END_TOKEN // A DOCUMENT-END token. + + yaml_BLOCK_SEQUENCE_START_TOKEN // A BLOCK-SEQUENCE-START token. + yaml_BLOCK_MAPPING_START_TOKEN // A BLOCK-SEQUENCE-END token. + yaml_BLOCK_END_TOKEN // A BLOCK-END token. + + yaml_FLOW_SEQUENCE_START_TOKEN // A FLOW-SEQUENCE-START token. + yaml_FLOW_SEQUENCE_END_TOKEN // A FLOW-SEQUENCE-END token. + yaml_FLOW_MAPPING_START_TOKEN // A FLOW-MAPPING-START token. + yaml_FLOW_MAPPING_END_TOKEN // A FLOW-MAPPING-END token. + + yaml_BLOCK_ENTRY_TOKEN // A BLOCK-ENTRY token. + yaml_FLOW_ENTRY_TOKEN // A FLOW-ENTRY token. + yaml_KEY_TOKEN // A KEY token. + yaml_VALUE_TOKEN // A VALUE token. + + yaml_ALIAS_TOKEN // An ALIAS token. + yaml_ANCHOR_TOKEN // An ANCHOR token. + yaml_TAG_TOKEN // A TAG token. + yaml_SCALAR_TOKEN // A SCALAR token. +) + +func (tt yaml_token_type_t) String() string { + switch tt { + case yaml_NO_TOKEN: + return "yaml_NO_TOKEN" + case yaml_STREAM_START_TOKEN: + return "yaml_STREAM_START_TOKEN" + case yaml_STREAM_END_TOKEN: + return "yaml_STREAM_END_TOKEN" + case yaml_VERSION_DIRECTIVE_TOKEN: + return "yaml_VERSION_DIRECTIVE_TOKEN" + case yaml_TAG_DIRECTIVE_TOKEN: + return "yaml_TAG_DIRECTIVE_TOKEN" + case yaml_DOCUMENT_START_TOKEN: + return "yaml_DOCUMENT_START_TOKEN" + case yaml_DOCUMENT_END_TOKEN: + return "yaml_DOCUMENT_END_TOKEN" + case yaml_BLOCK_SEQUENCE_START_TOKEN: + return "yaml_BLOCK_SEQUENCE_START_TOKEN" + case yaml_BLOCK_MAPPING_START_TOKEN: + return "yaml_BLOCK_MAPPING_START_TOKEN" + case yaml_BLOCK_END_TOKEN: + return "yaml_BLOCK_END_TOKEN" + case yaml_FLOW_SEQUENCE_START_TOKEN: + return "yaml_FLOW_SEQUENCE_START_TOKEN" + case yaml_FLOW_SEQUENCE_END_TOKEN: + return "yaml_FLOW_SEQUENCE_END_TOKEN" + case yaml_FLOW_MAPPING_START_TOKEN: + return "yaml_FLOW_MAPPING_START_TOKEN" + case yaml_FLOW_MAPPING_END_TOKEN: + return "yaml_FLOW_MAPPING_END_TOKEN" + case yaml_BLOCK_ENTRY_TOKEN: + return "yaml_BLOCK_ENTRY_TOKEN" + case yaml_FLOW_ENTRY_TOKEN: + return "yaml_FLOW_ENTRY_TOKEN" + case yaml_KEY_TOKEN: + return "yaml_KEY_TOKEN" + case yaml_VALUE_TOKEN: + return "yaml_VALUE_TOKEN" + case yaml_ALIAS_TOKEN: + return "yaml_ALIAS_TOKEN" + case yaml_ANCHOR_TOKEN: + return "yaml_ANCHOR_TOKEN" + case yaml_TAG_TOKEN: + return "yaml_TAG_TOKEN" + case yaml_SCALAR_TOKEN: + return "yaml_SCALAR_TOKEN" + } + return "" +} + +// The token structure. +type yaml_token_t struct { + // The token type. + typ yaml_token_type_t + + // The start/end of the token. + start_mark, end_mark yaml_mark_t + + // The stream encoding (for yaml_STREAM_START_TOKEN). 
+ encoding yaml_encoding_t + + // The alias/anchor/scalar value or tag/tag directive handle + // (for yaml_ALIAS_TOKEN, yaml_ANCHOR_TOKEN, yaml_SCALAR_TOKEN, yaml_TAG_TOKEN, yaml_TAG_DIRECTIVE_TOKEN). + value []byte + + // The tag suffix (for yaml_TAG_TOKEN). + suffix []byte + + // The tag directive prefix (for yaml_TAG_DIRECTIVE_TOKEN). + prefix []byte + + // The scalar style (for yaml_SCALAR_TOKEN). + style yaml_scalar_style_t + + // The version directive major/minor (for yaml_VERSION_DIRECTIVE_TOKEN). + major, minor int8 +} + +// Events + +type yaml_event_type_t int8 + +// Event types. +const ( + // An empty event. + yaml_NO_EVENT yaml_event_type_t = iota + + yaml_STREAM_START_EVENT // A STREAM-START event. + yaml_STREAM_END_EVENT // A STREAM-END event. + yaml_DOCUMENT_START_EVENT // A DOCUMENT-START event. + yaml_DOCUMENT_END_EVENT // A DOCUMENT-END event. + yaml_ALIAS_EVENT // An ALIAS event. + yaml_SCALAR_EVENT // A SCALAR event. + yaml_SEQUENCE_START_EVENT // A SEQUENCE-START event. + yaml_SEQUENCE_END_EVENT // A SEQUENCE-END event. + yaml_MAPPING_START_EVENT // A MAPPING-START event. + yaml_MAPPING_END_EVENT // A MAPPING-END event. +) + +var eventStrings = []string{ + yaml_NO_EVENT: "none", + yaml_STREAM_START_EVENT: "stream start", + yaml_STREAM_END_EVENT: "stream end", + yaml_DOCUMENT_START_EVENT: "document start", + yaml_DOCUMENT_END_EVENT: "document end", + yaml_ALIAS_EVENT: "alias", + yaml_SCALAR_EVENT: "scalar", + yaml_SEQUENCE_START_EVENT: "sequence start", + yaml_SEQUENCE_END_EVENT: "sequence end", + yaml_MAPPING_START_EVENT: "mapping start", + yaml_MAPPING_END_EVENT: "mapping end", +} + +func (e yaml_event_type_t) String() string { + if e < 0 || int(e) >= len(eventStrings) { + return fmt.Sprintf("unknown event %d", e) + } + return eventStrings[e] +} + +// The event structure. +type yaml_event_t struct { + + // The event type. + typ yaml_event_type_t + + // The start and end of the event. + start_mark, end_mark yaml_mark_t + + // The document encoding (for yaml_STREAM_START_EVENT). + encoding yaml_encoding_t + + // The version directive (for yaml_DOCUMENT_START_EVENT). + version_directive *yaml_version_directive_t + + // The list of tag directives (for yaml_DOCUMENT_START_EVENT). + tag_directives []yaml_tag_directive_t + + // The anchor (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_ALIAS_EVENT). + anchor []byte + + // The tag (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT). + tag []byte + + // The scalar value (for yaml_SCALAR_EVENT). + value []byte + + // Is the document start/end indicator implicit, or the tag optional? + // (for yaml_DOCUMENT_START_EVENT, yaml_DOCUMENT_END_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT, yaml_SCALAR_EVENT). + implicit bool + + // Is the tag optional for any non-plain style? (for yaml_SCALAR_EVENT). + quoted_implicit bool + + // The style (for yaml_SCALAR_EVENT, yaml_SEQUENCE_START_EVENT, yaml_MAPPING_START_EVENT). + style yaml_style_t +} + +func (e *yaml_event_t) scalar_style() yaml_scalar_style_t { return yaml_scalar_style_t(e.style) } +func (e *yaml_event_t) sequence_style() yaml_sequence_style_t { return yaml_sequence_style_t(e.style) } +func (e *yaml_event_t) mapping_style() yaml_mapping_style_t { return yaml_mapping_style_t(e.style) } + +// Nodes + +const ( + yaml_NULL_TAG = "tag:yaml.org,2002:null" // The tag !!null with the only possible value: null. + yaml_BOOL_TAG = "tag:yaml.org,2002:bool" // The tag !!bool with the values: true and false. 
+ yaml_STR_TAG = "tag:yaml.org,2002:str" // The tag !!str for string values. + yaml_INT_TAG = "tag:yaml.org,2002:int" // The tag !!int for integer values. + yaml_FLOAT_TAG = "tag:yaml.org,2002:float" // The tag !!float for float values. + yaml_TIMESTAMP_TAG = "tag:yaml.org,2002:timestamp" // The tag !!timestamp for date and time values. + + yaml_SEQ_TAG = "tag:yaml.org,2002:seq" // The tag !!seq is used to denote sequences. + yaml_MAP_TAG = "tag:yaml.org,2002:map" // The tag !!map is used to denote mapping. + + // Not in original libyaml. + yaml_BINARY_TAG = "tag:yaml.org,2002:binary" + yaml_MERGE_TAG = "tag:yaml.org,2002:merge" + + yaml_DEFAULT_SCALAR_TAG = yaml_STR_TAG // The default scalar tag is !!str. + yaml_DEFAULT_SEQUENCE_TAG = yaml_SEQ_TAG // The default sequence tag is !!seq. + yaml_DEFAULT_MAPPING_TAG = yaml_MAP_TAG // The default mapping tag is !!map. +) + +type yaml_node_type_t int + +// Node types. +const ( + // An empty node. + yaml_NO_NODE yaml_node_type_t = iota + + yaml_SCALAR_NODE // A scalar node. + yaml_SEQUENCE_NODE // A sequence node. + yaml_MAPPING_NODE // A mapping node. +) + +// An element of a sequence node. +type yaml_node_item_t int + +// An element of a mapping node. +type yaml_node_pair_t struct { + key int // The key of the element. + value int // The value of the element. +} + +// The node structure. +type yaml_node_t struct { + typ yaml_node_type_t // The node type. + tag []byte // The node tag. + + // The node data. + + // The scalar parameters (for yaml_SCALAR_NODE). + scalar struct { + value []byte // The scalar value. + length int // The length of the scalar value. + style yaml_scalar_style_t // The scalar style. + } + + // The sequence parameters (for YAML_SEQUENCE_NODE). + sequence struct { + items_data []yaml_node_item_t // The stack of sequence items. + style yaml_sequence_style_t // The sequence style. + } + + // The mapping parameters (for yaml_MAPPING_NODE). + mapping struct { + pairs_data []yaml_node_pair_t // The stack of mapping pairs (key, value). + pairs_start *yaml_node_pair_t // The beginning of the stack. + pairs_end *yaml_node_pair_t // The end of the stack. + pairs_top *yaml_node_pair_t // The top of the stack. + style yaml_mapping_style_t // The mapping style. + } + + start_mark yaml_mark_t // The beginning of the node. + end_mark yaml_mark_t // The end of the node. + +} + +// The document structure. +type yaml_document_t struct { + + // The document nodes. + nodes []yaml_node_t + + // The version directive. + version_directive *yaml_version_directive_t + + // The list of tag directives. + tag_directives_data []yaml_tag_directive_t + tag_directives_start int // The beginning of the tag directives list. + tag_directives_end int // The end of the tag directives list. + + start_implicit int // Is the document start indicator implicit? + end_implicit int // Is the document end indicator implicit? + + // The start/end of the document. + start_mark, end_mark yaml_mark_t +} + +// The prototype of a read handler. +// +// The read handler is called when the parser needs to read more bytes from the +// source. The handler should write not more than size bytes to the buffer. +// The number of written bytes should be set to the size_read variable. +// +// [in,out] data A pointer to an application data specified by +// yaml_parser_set_input(). +// [out] buffer The buffer to write the data from the source. +// [in] size The size of the buffer. +// [out] size_read The actual number of bytes read from the source. 
+// +// On success, the handler should return 1. If the handler failed, +// the returned value should be 0. On EOF, the handler should set the +// size_read to 0 and return 1. +type yaml_read_handler_t func(parser *yaml_parser_t, buffer []byte) (n int, err error) + +// This structure holds information about a potential simple key. +type yaml_simple_key_t struct { + possible bool // Is a simple key possible? + required bool // Is a simple key required? + token_number int // The number of the token. + mark yaml_mark_t // The position mark. +} + +// The states of the parser. +type yaml_parser_state_t int + +const ( + yaml_PARSE_STREAM_START_STATE yaml_parser_state_t = iota + + yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE // Expect the beginning of an implicit document. + yaml_PARSE_DOCUMENT_START_STATE // Expect DOCUMENT-START. + yaml_PARSE_DOCUMENT_CONTENT_STATE // Expect the content of a document. + yaml_PARSE_DOCUMENT_END_STATE // Expect DOCUMENT-END. + yaml_PARSE_BLOCK_NODE_STATE // Expect a block node. + yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE // Expect a block node or indentless sequence. + yaml_PARSE_FLOW_NODE_STATE // Expect a flow node. + yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE // Expect the first entry of a block sequence. + yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE // Expect an entry of a block sequence. + yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE // Expect an entry of an indentless sequence. + yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE // Expect the first key of a block mapping. + yaml_PARSE_BLOCK_MAPPING_KEY_STATE // Expect a block mapping key. + yaml_PARSE_BLOCK_MAPPING_VALUE_STATE // Expect a block mapping value. + yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE // Expect the first entry of a flow sequence. + yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE // Expect an entry of a flow sequence. + yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE // Expect a key of an ordered mapping. + yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE // Expect a value of an ordered mapping. + yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE // Expect the and of an ordered mapping entry. + yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE // Expect the first key of a flow mapping. + yaml_PARSE_FLOW_MAPPING_KEY_STATE // Expect a key of a flow mapping. + yaml_PARSE_FLOW_MAPPING_VALUE_STATE // Expect a value of a flow mapping. + yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE // Expect an empty value of a flow mapping. + yaml_PARSE_END_STATE // Expect nothing. 
+) + +func (ps yaml_parser_state_t) String() string { + switch ps { + case yaml_PARSE_STREAM_START_STATE: + return "yaml_PARSE_STREAM_START_STATE" + case yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE: + return "yaml_PARSE_IMPLICIT_DOCUMENT_START_STATE" + case yaml_PARSE_DOCUMENT_START_STATE: + return "yaml_PARSE_DOCUMENT_START_STATE" + case yaml_PARSE_DOCUMENT_CONTENT_STATE: + return "yaml_PARSE_DOCUMENT_CONTENT_STATE" + case yaml_PARSE_DOCUMENT_END_STATE: + return "yaml_PARSE_DOCUMENT_END_STATE" + case yaml_PARSE_BLOCK_NODE_STATE: + return "yaml_PARSE_BLOCK_NODE_STATE" + case yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE: + return "yaml_PARSE_BLOCK_NODE_OR_INDENTLESS_SEQUENCE_STATE" + case yaml_PARSE_FLOW_NODE_STATE: + return "yaml_PARSE_FLOW_NODE_STATE" + case yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE: + return "yaml_PARSE_BLOCK_SEQUENCE_FIRST_ENTRY_STATE" + case yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE: + return "yaml_PARSE_BLOCK_SEQUENCE_ENTRY_STATE" + case yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE: + return "yaml_PARSE_INDENTLESS_SEQUENCE_ENTRY_STATE" + case yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE: + return "yaml_PARSE_BLOCK_MAPPING_FIRST_KEY_STATE" + case yaml_PARSE_BLOCK_MAPPING_KEY_STATE: + return "yaml_PARSE_BLOCK_MAPPING_KEY_STATE" + case yaml_PARSE_BLOCK_MAPPING_VALUE_STATE: + return "yaml_PARSE_BLOCK_MAPPING_VALUE_STATE" + case yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE: + return "yaml_PARSE_FLOW_SEQUENCE_FIRST_ENTRY_STATE" + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE: + return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_STATE" + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE: + return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_KEY_STATE" + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE: + return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_VALUE_STATE" + case yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE: + return "yaml_PARSE_FLOW_SEQUENCE_ENTRY_MAPPING_END_STATE" + case yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE: + return "yaml_PARSE_FLOW_MAPPING_FIRST_KEY_STATE" + case yaml_PARSE_FLOW_MAPPING_KEY_STATE: + return "yaml_PARSE_FLOW_MAPPING_KEY_STATE" + case yaml_PARSE_FLOW_MAPPING_VALUE_STATE: + return "yaml_PARSE_FLOW_MAPPING_VALUE_STATE" + case yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE: + return "yaml_PARSE_FLOW_MAPPING_EMPTY_VALUE_STATE" + case yaml_PARSE_END_STATE: + return "yaml_PARSE_END_STATE" + } + return "" +} + +// This structure holds aliases data. +type yaml_alias_data_t struct { + anchor []byte // The anchor. + index int // The node id. + mark yaml_mark_t // The anchor mark. +} + +// The parser structure. +// +// All members are internal. Manage the structure using the +// yaml_parser_ family of functions. +type yaml_parser_t struct { + + // Error handling + + error yaml_error_type_t // Error type. + + problem string // Error description. + + // The byte about which the problem occurred. + problem_offset int + problem_value int + problem_mark yaml_mark_t + + // The error context. + context string + context_mark yaml_mark_t + + // Reader stuff + + read_handler yaml_read_handler_t // Read handler. + + input_reader io.Reader // File input data. + input []byte // String input data. + input_pos int + + eof bool // EOF flag + + buffer []byte // The working buffer. + buffer_pos int // The current position of the buffer. + + unread int // The number of unread characters in the buffer. + + raw_buffer []byte // The raw buffer. + raw_buffer_pos int // The current position of the buffer. + + encoding yaml_encoding_t // The input encoding. 
+ + offset int // The offset of the current position (in bytes). + mark yaml_mark_t // The mark of the current position. + + // Scanner stuff + + stream_start_produced bool // Have we started to scan the input stream? + stream_end_produced bool // Have we reached the end of the input stream? + + flow_level int // The number of unclosed '[' and '{' indicators. + + tokens []yaml_token_t // The tokens queue. + tokens_head int // The head of the tokens queue. + tokens_parsed int // The number of tokens fetched from the queue. + token_available bool // Does the tokens queue contain a token ready for dequeueing. + + indent int // The current indentation level. + indents []int // The indentation levels stack. + + simple_key_allowed bool // May a simple key occur at the current position? + simple_keys []yaml_simple_key_t // The stack of simple keys. + + // Parser stuff + + state yaml_parser_state_t // The current parser state. + states []yaml_parser_state_t // The parser states stack. + marks []yaml_mark_t // The stack of marks. + tag_directives []yaml_tag_directive_t // The list of TAG directives. + + // Dumper stuff + + aliases []yaml_alias_data_t // The alias data. + + document *yaml_document_t // The currently parsed document. +} + +// Emitter Definitions + +// The prototype of a write handler. +// +// The write handler is called when the emitter needs to flush the accumulated +// characters to the output. The handler should write @a size bytes of the +// @a buffer to the output. +// +// @param[in,out] data A pointer to an application data specified by +// yaml_emitter_set_output(). +// @param[in] buffer The buffer with bytes to be written. +// @param[in] size The size of the buffer. +// +// @returns On success, the handler should return @c 1. If the handler failed, +// the returned value should be @c 0. +// +type yaml_write_handler_t func(emitter *yaml_emitter_t, buffer []byte) error + +type yaml_emitter_state_t int + +// The emitter states. +const ( + // Expect STREAM-START. + yaml_EMIT_STREAM_START_STATE yaml_emitter_state_t = iota + + yaml_EMIT_FIRST_DOCUMENT_START_STATE // Expect the first DOCUMENT-START or STREAM-END. + yaml_EMIT_DOCUMENT_START_STATE // Expect DOCUMENT-START or STREAM-END. + yaml_EMIT_DOCUMENT_CONTENT_STATE // Expect the content of a document. + yaml_EMIT_DOCUMENT_END_STATE // Expect DOCUMENT-END. + yaml_EMIT_FLOW_SEQUENCE_FIRST_ITEM_STATE // Expect the first item of a flow sequence. + yaml_EMIT_FLOW_SEQUENCE_ITEM_STATE // Expect an item of a flow sequence. + yaml_EMIT_FLOW_MAPPING_FIRST_KEY_STATE // Expect the first key of a flow mapping. + yaml_EMIT_FLOW_MAPPING_KEY_STATE // Expect a key of a flow mapping. + yaml_EMIT_FLOW_MAPPING_SIMPLE_VALUE_STATE // Expect a value for a simple key of a flow mapping. + yaml_EMIT_FLOW_MAPPING_VALUE_STATE // Expect a value of a flow mapping. + yaml_EMIT_BLOCK_SEQUENCE_FIRST_ITEM_STATE // Expect the first item of a block sequence. + yaml_EMIT_BLOCK_SEQUENCE_ITEM_STATE // Expect an item of a block sequence. + yaml_EMIT_BLOCK_MAPPING_FIRST_KEY_STATE // Expect the first key of a block mapping. + yaml_EMIT_BLOCK_MAPPING_KEY_STATE // Expect the key of a block mapping. + yaml_EMIT_BLOCK_MAPPING_SIMPLE_VALUE_STATE // Expect a value for a simple key of a block mapping. + yaml_EMIT_BLOCK_MAPPING_VALUE_STATE // Expect a value of a block mapping. + yaml_EMIT_END_STATE // Expect nothing. +) + +// The emitter structure. +// +// All members are internal. Manage the structure using the @c yaml_emitter_ +// family of functions. 
+type yaml_emitter_t struct { + + // Error handling + + error yaml_error_type_t // Error type. + problem string // Error description. + + // Writer stuff + + write_handler yaml_write_handler_t // Write handler. + + output_buffer *[]byte // String output data. + output_writer io.Writer // File output data. + + buffer []byte // The working buffer. + buffer_pos int // The current position of the buffer. + + raw_buffer []byte // The raw buffer. + raw_buffer_pos int // The current position of the buffer. + + encoding yaml_encoding_t // The stream encoding. + + // Emitter stuff + + canonical bool // If the output is in the canonical style? + best_indent int // The number of indentation spaces. + best_width int // The preferred width of the output lines. + unicode bool // Allow unescaped non-ASCII characters? + line_break yaml_break_t // The preferred line break. + + state yaml_emitter_state_t // The current emitter state. + states []yaml_emitter_state_t // The stack of states. + + events []yaml_event_t // The event queue. + events_head int // The head of the event queue. + + indents []int // The stack of indentation levels. + + tag_directives []yaml_tag_directive_t // The list of tag directives. + + indent int // The current indentation level. + + flow_level int // The current flow level. + + root_context bool // Is it the document root context? + sequence_context bool // Is it a sequence context? + mapping_context bool // Is it a mapping context? + simple_key_context bool // Is it a simple mapping key context? + + line int // The current line. + column int // The current column. + whitespace bool // If the last character was a whitespace? + indention bool // If the last character was an indentation character (' ', '-', '?', ':')? + open_ended bool // If an explicit document end is required? + + // Anchor analysis. + anchor_data struct { + anchor []byte // The anchor value. + alias bool // Is it an alias? + } + + // Tag analysis. + tag_data struct { + handle []byte // The tag handle. + suffix []byte // The tag suffix. + } + + // Scalar analysis. + scalar_data struct { + value []byte // The scalar value. + multiline bool // Does the scalar contain line breaks? + flow_plain_allowed bool // Can the scalar be expessed in the flow plain style? + block_plain_allowed bool // Can the scalar be expressed in the block plain style? + single_quoted_allowed bool // Can the scalar be expressed in the single quoted style? + block_allowed bool // Can the scalar be expressed in the literal or folded styles? + style yaml_scalar_style_t // The output style. + } + + // Dumper stuff + + opened bool // If the stream was already opened? + closed bool // If the stream was already closed? + + // The information associated with the document nodes. + anchors *struct { + references int // The number of references. + anchor int // The anchor id. + serialized bool // If the node has been emitted? + } + + last_anchor_id int // The last assigned anchor id. + + document *yaml_document_t // The currently emitted document. +} diff --git a/vendor/gopkg.in/yaml.v2/yamlprivateh.go b/vendor/gopkg.in/yaml.v2/yamlprivateh.go new file mode 100644 index 000000000..8110ce3c3 --- /dev/null +++ b/vendor/gopkg.in/yaml.v2/yamlprivateh.go @@ -0,0 +1,173 @@ +package yaml + +const ( + // The size of the input raw buffer. + input_raw_buffer_size = 512 + + // The size of the input buffer. + // It should be possible to decode the whole raw buffer. + input_buffer_size = input_raw_buffer_size * 3 + + // The size of the output buffer. 
+ output_buffer_size = 128 + + // The size of the output raw buffer. + // It should be possible to encode the whole output buffer. + output_raw_buffer_size = (output_buffer_size*2 + 2) + + // The size of other stacks and queues. + initial_stack_size = 16 + initial_queue_size = 16 + initial_string_size = 16 +) + +// Check if the character at the specified position is an alphabetical +// character, a digit, '_', or '-'. +func is_alpha(b []byte, i int) bool { + return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'Z' || b[i] >= 'a' && b[i] <= 'z' || b[i] == '_' || b[i] == '-' +} + +// Check if the character at the specified position is a digit. +func is_digit(b []byte, i int) bool { + return b[i] >= '0' && b[i] <= '9' +} + +// Get the value of a digit. +func as_digit(b []byte, i int) int { + return int(b[i]) - '0' +} + +// Check if the character at the specified position is a hex-digit. +func is_hex(b []byte, i int) bool { + return b[i] >= '0' && b[i] <= '9' || b[i] >= 'A' && b[i] <= 'F' || b[i] >= 'a' && b[i] <= 'f' +} + +// Get the value of a hex-digit. +func as_hex(b []byte, i int) int { + bi := b[i] + if bi >= 'A' && bi <= 'F' { + return int(bi) - 'A' + 10 + } + if bi >= 'a' && bi <= 'f' { + return int(bi) - 'a' + 10 + } + return int(bi) - '0' +} + +// Check if the character is ASCII. +func is_ascii(b []byte, i int) bool { + return b[i] <= 0x7F +} + +// Check if the character at the start of the buffer can be printed unescaped. +func is_printable(b []byte, i int) bool { + return ((b[i] == 0x0A) || // . == #x0A + (b[i] >= 0x20 && b[i] <= 0x7E) || // #x20 <= . <= #x7E + (b[i] == 0xC2 && b[i+1] >= 0xA0) || // #0xA0 <= . <= #xD7FF + (b[i] > 0xC2 && b[i] < 0xED) || + (b[i] == 0xED && b[i+1] < 0xA0) || + (b[i] == 0xEE) || + (b[i] == 0xEF && // #xE000 <= . <= #xFFFD + !(b[i+1] == 0xBB && b[i+2] == 0xBF) && // && . != #xFEFF + !(b[i+1] == 0xBF && (b[i+2] == 0xBE || b[i+2] == 0xBF)))) +} + +// Check if the character at the specified position is NUL. +func is_z(b []byte, i int) bool { + return b[i] == 0x00 +} + +// Check if the beginning of the buffer is a BOM. +func is_bom(b []byte, i int) bool { + return b[0] == 0xEF && b[1] == 0xBB && b[2] == 0xBF +} + +// Check if the character at the specified position is space. +func is_space(b []byte, i int) bool { + return b[i] == ' ' +} + +// Check if the character at the specified position is tab. +func is_tab(b []byte, i int) bool { + return b[i] == '\t' +} + +// Check if the character at the specified position is blank (space or tab). +func is_blank(b []byte, i int) bool { + //return is_space(b, i) || is_tab(b, i) + return b[i] == ' ' || b[i] == '\t' +} + +// Check if the character at the specified position is a line break. +func is_break(b []byte, i int) bool { + return (b[i] == '\r' || // CR (#xD) + b[i] == '\n' || // LF (#xA) + b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9) // PS (#x2029) +} + +func is_crlf(b []byte, i int) bool { + return b[i] == '\r' && b[i+1] == '\n' +} + +// Check if the character is a line break or NUL. 
+func is_breakz(b []byte, i int) bool { + //return is_break(b, i) || is_z(b, i) + return ( // is_break: + b[i] == '\r' || // CR (#xD) + b[i] == '\n' || // LF (#xA) + b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029) + // is_z: + b[i] == 0) +} + +// Check if the character is a line break, space, or NUL. +func is_spacez(b []byte, i int) bool { + //return is_space(b, i) || is_breakz(b, i) + return ( // is_space: + b[i] == ' ' || + // is_breakz: + b[i] == '\r' || // CR (#xD) + b[i] == '\n' || // LF (#xA) + b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029) + b[i] == 0) +} + +// Check if the character is a line break, space, tab, or NUL. +func is_blankz(b []byte, i int) bool { + //return is_blank(b, i) || is_breakz(b, i) + return ( // is_blank: + b[i] == ' ' || b[i] == '\t' || + // is_breakz: + b[i] == '\r' || // CR (#xD) + b[i] == '\n' || // LF (#xA) + b[i] == 0xC2 && b[i+1] == 0x85 || // NEL (#x85) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA8 || // LS (#x2028) + b[i] == 0xE2 && b[i+1] == 0x80 && b[i+2] == 0xA9 || // PS (#x2029) + b[i] == 0) +} + +// Determine the width of the character. +func width(b byte) int { + // Don't replace these by a switch without first + // confirming that it is being inlined. + if b&0x80 == 0x00 { + return 1 + } + if b&0xE0 == 0xC0 { + return 2 + } + if b&0xF0 == 0xE0 { + return 3 + } + if b&0xF8 == 0xF0 { + return 4 + } + return 0 + +} diff --git a/vendor/vendor.json b/vendor/vendor.json index a8db37219..f55a3cbdb 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -9,110 +9,112 @@ "revisionTime": "2016-08-11T22:04:02Z" }, { - "checksumSHA1": "QxOnuHvy3RS17v4Pxrr3WU8apAI=", - "comment": "v3.1.0-beta", - "path": "github.com/Azure/azure-sdk-for-go/arm/compute", - "revision": "26132835cbefa2669a306b777f34b929b56aa0a2", - "revisionTime": "2017-05-18T20:34:07Z", - "version": "v10.0.3-beta", - "versionExact": "v10.0.3-beta" + "checksumSHA1": "TY1n3V3oj1YcvXPTUVnlRlNfGow=", + "path": "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-04-01/compute", + "revision": "ef33a0a23b66ba5e81f699d03dc48b723c1bad50", + "revisionTime": "2018-06-15T22:13:27Z", + "version": "v17.3.1", + "versionExact": "v17.3.1" }, { - "checksumSHA1": "HMu0WrEHVs+wqyjsTzXxRsEJimI=", - "comment": "v3.1.0-beta", - "path": "github.com/Azure/azure-sdk-for-go/arm/network", - "revision": "26132835cbefa2669a306b777f34b929b56aa0a2", - "revisionTime": "2017-05-18T20:34:07Z", - "version": "v10.0.3-beta", - "versionExact": "v10.0.3-beta" + "checksumSHA1": "qzopX7CTZDrbARD7A8DBBbevmyM=", + "path": "github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-01-01/network", + "revision": "ef33a0a23b66ba5e81f699d03dc48b723c1bad50", + "revisionTime": "2018-06-15T22:13:27Z", + "version": "v17.3.1", + "versionExact": "v17.3.1" }, { - "checksumSHA1": "iJ8ZV+uhUAtNn91pkKjfIY0z+44=", - "comment": "v3.1.0-beta", - "path": "github.com/Azure/azure-sdk-for-go/arm/resources/resources", - "revision": "26132835cbefa2669a306b777f34b929b56aa0a2", - "revisionTime": "2017-05-18T20:34:07Z", - "version": "v10.0.3-beta", - "versionExact": "v10.0.3-beta" + "checksumSHA1": "DJY5zDEcGa82/Hbfud8lgHCuvU4=", + "path": "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2016-06-01/subscriptions", + "revision": 
"ef33a0a23b66ba5e81f699d03dc48b723c1bad50", + "revisionTime": "2018-06-15T22:13:27Z", + "version": "v17.3.1", + "versionExact": "v17.3.1" }, { - "checksumSHA1": "hukMjIbXDa589vkSXB4EF4rz7sQ=", - "comment": "v3.1.0-beta", - "path": "github.com/Azure/azure-sdk-for-go/arm/resources/subscriptions", - "revision": "26132835cbefa2669a306b777f34b929b56aa0a2", - "revisionTime": "2017-05-18T20:34:07Z", - "version": "v10.0.3-beta", - "versionExact": "v10.0.3-beta" + "checksumSHA1": "928vJH7PbmZ6/3pG5GIlaw+h7B4=", + "path": "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-02-01/resources", + "revision": "ef33a0a23b66ba5e81f699d03dc48b723c1bad50", + "revisionTime": "2018-06-15T22:13:27Z", + "version": "v17.3.1", + "versionExact": "v17.3.1" }, { - "checksumSHA1": "QiYT2buD7yqmgxfl/ESn8yJmQTY=", - "comment": "v3.1.0-beta", - "path": "github.com/Azure/azure-sdk-for-go/arm/storage", - "revision": "26132835cbefa2669a306b777f34b929b56aa0a2", - "revisionTime": "2017-05-18T20:34:07Z", - "version": "v10.0.3-beta", - "versionExact": "v10.0.3-beta" + "checksumSHA1": "ySkjHczHi2zN8B1TuWfvKVLGetw=", + "path": "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2017-10-01/storage", + "revision": "ef33a0a23b66ba5e81f699d03dc48b723c1bad50", + "revisionTime": "2018-06-15T22:13:27Z", + "version": "v17.3.1", + "versionExact": "v17.3.1" }, { - "checksumSHA1": "l1YTww/FDw8DkMLztHvLdcXK3IM=", - "comment": "v3.1.0-beta", + "checksumSHA1": "m3K3q/zhkmRtlIGWqMWgqzWR6C4=", "path": "github.com/Azure/azure-sdk-for-go/storage", - "revision": "26132835cbefa2669a306b777f34b929b56aa0a2", - "revisionTime": "2017-05-18T20:34:07Z", - "version": "v10.0.3-beta", - "versionExact": "v10.0.3-beta" + "revision": "ef33a0a23b66ba5e81f699d03dc48b723c1bad50", + "revisionTime": "2018-06-15T22:13:27Z", + "version": "v17.3.1", + "versionExact": "v17.3.1" }, { - "checksumSHA1": "NwbvjCz9Xo4spo0C96Tq6WCLX7U=", + "checksumSHA1": "Zl5uxmDmvXF6ADQsggG50XCiWmY=", + "path": "github.com/Azure/azure-sdk-for-go/version", + "revision": "ef33a0a23b66ba5e81f699d03dc48b723c1bad50", + "revisionTime": "2018-06-15T22:13:27Z", + "version": "v17.3.1", + "versionExact": "v17.3.1" + }, + { + "checksumSHA1": "4Ba4uKXCFYkXa54FD7NyI8EsXG4=", "comment": "v7.0.7", "path": "github.com/Azure/go-autorest/autorest", - "revision": "58f6f26e200fa5dfb40c9cd1c83f3e2c860d779d", - "revisionTime": "2017-04-28T17:52:31Z", - "version": "=v8.0.0", - "versionExact": "v8.0.0" + "revision": "1f7cd6cfe0adea687ad44a512dfe76140f804318", + "revisionTime": "2018-06-28T21:22:21Z", + "version": "v10.12.0", + "versionExact": "v10.12.0" }, { - "checksumSHA1": "KOETWLsF6QW+lrPVPsMNHDZP+xA=", + "checksumSHA1": "dMSqbz496pHMUGW8ID67vpafGto=", "path": "github.com/Azure/go-autorest/autorest/adal", - "revision": "58f6f26e200fa5dfb40c9cd1c83f3e2c860d779d", - "revisionTime": "2017-04-28T17:52:31Z", - "version": "=v8.0.0", - "versionExact": "v8.0.0" + "revision": "1f7cd6cfe0adea687ad44a512dfe76140f804318", + "revisionTime": "2018-06-28T21:22:21Z", + "version": "v10.12.0", + "versionExact": "v10.12.0" }, { - "checksumSHA1": "2KdBFgT4qY+fMOkBTa5vA9V0AiM=", + "checksumSHA1": "NT4tlDNlszWmb7gCXnYCkcvQxhs=", "comment": "v7.0.7", "path": "github.com/Azure/go-autorest/autorest/azure", - "revision": "58f6f26e200fa5dfb40c9cd1c83f3e2c860d779d", - "revisionTime": "2017-04-28T17:52:31Z", - "version": "=v8.0.0", - "versionExact": "v8.0.0" + "revision": "1f7cd6cfe0adea687ad44a512dfe76140f804318", + "revisionTime": "2018-06-28T21:22:21Z", + "version": "v10.12.0", + "versionExact": "v10.12.0" }, { - 
"checksumSHA1": "LSF/pNrjhIxl6jiS6bKooBFCOxI=", + "checksumSHA1": "9nXCi9qQsYjxCeajJKWttxgEt0I=", "comment": "v7.0.7", "path": "github.com/Azure/go-autorest/autorest/date", - "revision": "58f6f26e200fa5dfb40c9cd1c83f3e2c860d779d", - "revisionTime": "2017-04-28T17:52:31Z", - "version": "=v8.0.0", - "versionExact": "v8.0.0" + "revision": "1f7cd6cfe0adea687ad44a512dfe76140f804318", + "revisionTime": "2018-06-28T21:22:21Z", + "version": "v10.12.0", + "versionExact": "v10.12.0" }, { - "checksumSHA1": "Ev8qCsbFjDlMlX0N2tYAhYQFpUc=", + "checksumSHA1": "SbBb2GcJNm5GjuPKGL2777QywR4=", "comment": "v7.0.7", "path": "github.com/Azure/go-autorest/autorest/to", - "revision": "58f6f26e200fa5dfb40c9cd1c83f3e2c860d779d", - "revisionTime": "2017-04-28T17:52:31Z", - "version": "=v8.0.0", - "versionExact": "v8.0.0" + "revision": "1f7cd6cfe0adea687ad44a512dfe76140f804318", + "revisionTime": "2018-06-28T21:22:21Z", + "version": "v10.12.0", + "versionExact": "v10.12.0" }, { - "checksumSHA1": "oBixceM+55gdk47iff8DSEIh3po=", + "checksumSHA1": "HjdLfAF3oA2In8F3FKh/Y+BPyXk=", "path": "github.com/Azure/go-autorest/autorest/validation", - "revision": "58f6f26e200fa5dfb40c9cd1c83f3e2c860d779d", - "revisionTime": "2017-04-28T17:52:31Z", - "version": "=v8.0.0", - "versionExact": "v8.0.0" + "revision": "1f7cd6cfe0adea687ad44a512dfe76140f804318", + "revisionTime": "2018-06-28T21:22:21Z", + "version": "v10.12.0", + "versionExact": "v10.12.0" }, { "checksumSHA1": "TgrN0l/E16deTlLYNt8wf66urSU=", @@ -203,6 +205,30 @@ "revision": "4fe03583929022c9c96829401ffabc755e69782e", "revisionTime": "2017-06-25T21:53:50Z" }, + { + "checksumSHA1": "N0jIjg2HeB8fM7dt8OED2BiRZz0=", + "path": "github.com/NaverCloudPlatform/ncloud-sdk-go/common", + "revision": "c2e73f942591b0f033a3c6df00f44badb2347c38", + "revisionTime": "2018-01-10T05:50:12Z" + }, + { + "checksumSHA1": "7lT5Xmeo2MhE6CJvIInZyLBu6S8=", + "path": "github.com/NaverCloudPlatform/ncloud-sdk-go/oauth", + "revision": "c2e73f942591b0f033a3c6df00f44badb2347c38", + "revisionTime": "2018-01-10T05:50:12Z" + }, + { + "checksumSHA1": "DpfU1gyyhIc1BuKZUZDRVDBkeSc=", + "path": "github.com/NaverCloudPlatform/ncloud-sdk-go/request", + "revision": "c2e73f942591b0f033a3c6df00f44badb2347c38", + "revisionTime": "2018-01-10T05:50:12Z" + }, + { + "checksumSHA1": "QnZ2RMlms/zYbD8jg+lyS8ncQOY=", + "path": "github.com/NaverCloudPlatform/ncloud-sdk-go/sdk", + "revision": "c2e73f942591b0f033a3c6df00f44badb2347c38", + "revisionTime": "2018-01-10T05:50:12Z" + }, { "checksumSHA1": "HttiPj314X1a0i2Jen1p6lRH/vE=", "path": "github.com/aliyun/aliyun-oss-go-sdk/oss", @@ -242,244 +268,244 @@ "revision": "4239b77079c7b5d1243b7b4736304ce8ddb6f0f2" }, { - "checksumSHA1": "7vSTKBRZq6azJnhm1k6XmNxu97M=", + "checksumSHA1": "2GR/f+rwuhlBVooyOGVaxNIYbEg=", "path": "github.com/aws/aws-sdk-go", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "BRgDFkYJonSbN+/ugKMUuSXkS4c=", + "checksumSHA1": "mFHEkH8cgZqBuQ5qVqNP1SLN4QA=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": 
"v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "Y9W+4GimK4Fuxq+vyIskVYFRnX4=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/awserr", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "yyYr41HZ1Aq0hWc3J5ijXwYEcac=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/awsutil", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "n98FANpNeRT5kf6pizdpI7nm6Sw=", + "checksumSHA1": "9nE/FjZ4pYrT883KtV2/aI+Gayo=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/client", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "ieAJ+Cvp/PKv1LpUEnUXpc3OI6E=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/client/metadata", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "7/8j/q0TWtOgXyvEcv4B2Dhl00o=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/corehandlers", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "Y+cPwQL0dZMyqp3wI+KJWmA9KQ8=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/credentials", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "u3GOAJLmdvbuNUeUEcZSEAOeL/0=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "NUJUTWlc1sV8b7WjfiYc4JZbXl0=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/credentials/endpointcreds", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": 
"v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "JEYqmF83O5n5bHkupAzA6STm0no=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/credentials/stscreds", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "ZdtYh3ZHSgP/WEIaqwJHTEhpkbs=", + "checksumSHA1": "OnU/n7R33oYXiB4SAGd5pK7I0Bs=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/defaults", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "/EXbk/z2TWjWc1Hvb4QYs3Wmhb8=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/ec2metadata", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "W45tSKW4ZVgBk9H4lx5RbGi9OVg=", + "checksumSHA1": "CJNEM69cgdO9tZi6c5Lj07jI+dk=", "path": "github.com/aws/aws-sdk-go/aws/endpoints", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "Rdm9v0vpqsO1Ka9rwktfL19D8A8=", + "checksumSHA1": "JZ49s4cNe3nIttx3hWp04CQif4o=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/request", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "SK5Mn4Ga9+equOQTYA1DTSb3LWY=", + "checksumSHA1": "DIn7B+oP++/nw603OB95fmupzu8=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/session", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "1+ZxEwzc1Vz8X2l+kXkS2iATtas=", + "checksumSHA1": "iU00ZjhAml/13g+1YXT21IqoXqg=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/aws/signer/v4", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "04ypv4x12l4q0TksA1zEVsmgpvw=", "path": "github.com/aws/aws-sdk-go/internal/shareddefaults", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": 
"v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "wk7EyvDaHwb5qqoOP/4d3cV0708=", + "checksumSHA1": "NStHCXEvYqG72GknZyv1jaKaeH0=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "1QmQ3FqV37w0Zi44qv8pA1GeR0A=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/ec2query", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "O6hcK24yI6w7FA+g4Pbr+eQ7pys=", + "checksumSHA1": "yHfT5DTbeCLs4NE2Rgnqrhe15ls=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "R00RL5jJXRYq1iiK1+PGvMfvXyM=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/jsonrpc", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "ZqY5RWavBLWTo6j9xqdyBEaNFRk=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/query", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "Drt1JfLMa0DQEZLWrnMlTWaIcC8=", + "checksumSHA1": "9V1PvtFQ9MObZTc3sa86WcuOtOU=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "VCTh+dEaqqhog5ncy/WTt9+/gFM=", + "checksumSHA1": "pkeoOfZpHRvFG/AOZeTf0lwtsFg=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/rest", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": 
"ODo+ko8D6unAxZuN1jGzMcN4QCc=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/restxml", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "0qYPUga28aQVkxZgBR3Z86AbGUQ=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "Eo9yODN5U99BK0pMzoqnBm7PCrY=", @@ -487,62 +513,62 @@ "path": "github.com/aws/aws-sdk-go/private/waiter", "revision": "1bd588c8b2dba4da57dd4664b2b2750d260a5915", "revisionTime": "2017-02-24T22:28:38Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "1mbCBXbu6m8bfRAq1+7Cul4VXkU=", + "checksumSHA1": "Sj6NTKuc/6+amv4RsMqrZkvkvpc=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/service/ec2", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "YNq7YhasHn9ceelWX2aG0Cg0Ga0=", + "checksumSHA1": "kEgV0dSAj3M3M1waEkC27JS7VnU=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/service/ecr", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "SEKg+cGyOj6dKdK5ltUHsoL4R4Y=", + "checksumSHA1": "fXQn3V0ZRBZpTXUEHl4/yOjR4mQ=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/service/s3", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "RVrBBPDYg3ViwQDLBanFehkdqkM=", + "checksumSHA1": "ZP6QI0X9BNKk8o1p3AyLfjabS20=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/service/s3/s3iface", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "LlIpD3fzngPXshFh/5tuCWlrYzI=", + "checksumSHA1": "g6Eo2gEoj6YEZ+tLwydnfhwo7zg=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/service/s3/s3manager", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": 
"2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { - "checksumSHA1": "MerduaV3PxtZAWvOGpgoBIglo38=", + "checksumSHA1": "x7HCNPJnQi+4P6FKpBTY1hm3m6o=", "comment": "v1.7.1", "path": "github.com/aws/aws-sdk-go/service/sts", - "revision": "dd3acff9dc16f9a6fd87f6b4501590a532e7206a", - "revisionTime": "2017-08-10T20:40:06Z", - "version": "v1.10.23", - "versionExact": "v1.10.23" + "revision": "586c9ba6027a527800564282bb843d7e6e7985c9", + "revisionTime": "2018-02-07T00:16:19Z", + "version": "v1.12.72", + "versionExact": "v1.12.72" }, { "checksumSHA1": "7SbTaY0kaYxgQrG3/9cjrI+BcyU=", @@ -560,46 +586,56 @@ "revision": "50da7d4131a3b5c9d063932461cab4d1fafb20b0" }, { - "checksumSHA1": "Lf3uUXTkKK5DJ37BxQvxO1Fq+K8=", + "checksumSHA1": "X2/71FBrn4pA3WcA620ySVO0uHU=", + "path": "github.com/creack/goselect", + "revision": "528c74964609a58f7c17471525659c9b71cd499b", + "revisionTime": "2018-02-10T03:43:46Z" + }, + { + "checksumSHA1": "/5cvgU+J4l7EhMXTK76KaCAfOuU=", "comment": "v1.0.0-3-g6d21280", "path": "github.com/davecgh/go-spew/spew", - "revision": "6d212800a42e8ab5c146b8ace3490ee17e5225f9" + "revision": "346938d642f2ec3594ed81d874461961cd0faa76", + "revisionTime": "2016-10-29T20:57:26Z", + "version": "v1.1.0", + "versionExact": "v1.1.0" }, { - "checksumSHA1": "b2m7aICkBOz9JqmnX2kdDN4wgjI=", + "checksumSHA1": "5k4kiVJsn0CilLDx+gMjglXY6vs=", "path": "github.com/denverdino/aliyungo/common", - "revision": "b8a81c0f54003ea306ef566807919955d639cd48", - "revisionTime": "2017-08-02T08:24:47Z" + "revision": "ebad04655e0385f021ed264c89ef4b93958e7204", + "revisionTime": "2018-04-17T07:55:37Z" }, { - "checksumSHA1": "5UYxIc/DZ5g0SM7qwXskoyNOc78=", + "checksumSHA1": "y4Ay4E5HqT25sweC/Bl9/odNWaI=", "path": "github.com/denverdino/aliyungo/ecs", - "revision": "b8a81c0f54003ea306ef566807919955d639cd48", - "revisionTime": "2017-08-02T08:24:47Z" + "revision": "ec0e57291175fc9b06c62977f384756642285aab", + "revisionTime": "2017-11-27T16:20:29Z" }, { - "checksumSHA1": "VfIjW9Tf2eLmHrgrvk1wRnBaxTE=", + "checksumSHA1": "sievsBvgtVF2iZ2FjmDZppH3+Ro=", "path": "github.com/denverdino/aliyungo/ram", - "revision": "b8a81c0f54003ea306ef566807919955d639cd48", - "revisionTime": "2017-08-02T08:24:47Z" + "revision": "ec0e57291175fc9b06c62977f384756642285aab", + "revisionTime": "2017-11-27T16:20:29Z" }, { "checksumSHA1": "pQHH9wpyS0e4wpW0erxe3D7OILM=", "path": "github.com/denverdino/aliyungo/slb", - "revision": "b8a81c0f54003ea306ef566807919955d639cd48", - "revisionTime": "2017-08-02T08:24:47Z" + "revision": "ec0e57291175fc9b06c62977f384756642285aab", + "revisionTime": "2017-11-27T16:20:29Z" }, { - "checksumSHA1": "piZlmhWPLGxYkXLysTrjcXllO4c=", + "checksumSHA1": "cKVBRn7GKT+0IqfGUc/NnKDWzCw=", "path": "github.com/denverdino/aliyungo/util", - "revision": "b8a81c0f54003ea306ef566807919955d639cd48", - "revisionTime": "2017-08-02T08:24:47Z" + "revision": "ec0e57291175fc9b06c62977f384756642285aab", + "revisionTime": "2017-11-27T16:20:29Z" }, { - "checksumSHA1": "D37uI+U+FYvTJIdG2TTozXe7i7U=", - "comment": "v3.0.0", + "checksumSHA1": "4772zXrOaPVeDeSgdiV7Vp4KEjk=", + "comment": "v3.2.0", "path": "github.com/dgrijalva/jwt-go", - "revision": "d2709f9f1f31ebcda9651b03077758c1f3a0018c" + "revision": "06ea1031745cb8b3dab3f6a236daf2b0aa468b7e", + "revisionTime": "2018-03-08T23:13:08Z" }, { "checksumSHA1": "W1LGm0UNirwMDVCMFv5vZrOpUJI=", @@ -608,6 +644,24 @@ "revision": "4c04abe183f449bd9ede285f0e5c7ee575d0dbe4", "revisionTime": "2017-04-07T15:15:42Z" }, + { + "checksumSHA1": "1n5MBJthemxmfqU2gN3qLCd8s04=", + "path": 
"github.com/docker/docker/pkg/namesgenerator", + "revision": "fa3e2d5ab9b577cecd24201902bbe72b3f1b851c", + "revisionTime": "2017-04-06T12:40:27Z" + }, + { + "checksumSHA1": "lThih54jzz9A4zHKEFb9SIV3Ed0=", + "path": "github.com/docker/docker/pkg/random", + "revision": "fa3e2d5ab9b577cecd24201902bbe72b3f1b851c", + "revisionTime": "2017-04-06T12:40:27Z" + }, + { + "checksumSHA1": "rhLUtXvcmouYuBwOq9X/nYKzvNg=", + "path": "github.com/dustin/go-humanize", + "revision": "259d2a102b871d17f30e3cd9881a642961a1e486", + "revisionTime": "2017-02-28T07:34:54Z" + }, { "checksumSHA1": "GCskdwYAPW2S34918Z5CgNMJ2Wc=", "path": "github.com/dylanmei/iso8601", @@ -630,6 +684,30 @@ "path": "github.com/golang/protobuf/proto", "revision": "b982704f8bb716bb608144408cff30e15fbde841" }, + { + "checksumSHA1": "sOd6u/eTkieb8AgKxnXZ/C9/FhI=", + "path": "github.com/google/go-cmp/cmp", + "revision": "5411ab924f9ffa6566244a9e504bc347edacffd3", + "revisionTime": "2018-03-28T20:15:12Z" + }, + { + "checksumSHA1": "oLBV7th4sxzxzmf1b91NVVbdOfI=", + "path": "github.com/google/go-cmp/cmp/internal/diff", + "revision": "5411ab924f9ffa6566244a9e504bc347edacffd3", + "revisionTime": "2018-03-28T20:15:12Z" + }, + { + "checksumSHA1": "kYtvRhMjM0X4bvEjR3pqEHLw1qo=", + "path": "github.com/google/go-cmp/cmp/internal/function", + "revision": "5411ab924f9ffa6566244a9e504bc347edacffd3", + "revisionTime": "2018-03-28T20:15:12Z" + }, + { + "checksumSHA1": "OQQ56f38ikfmG2Zd3lUlk4LUBIw=", + "path": "github.com/google/go-cmp/cmp/internal/value", + "revision": "5411ab924f9ffa6566244a9e504bc347edacffd3", + "revisionTime": "2018-03-28T20:15:12Z" + }, { "checksumSHA1": "Evpv9y6iPdy+8FeAVDmKrqV1sqo=", "path": "github.com/google/go-querystring/query", @@ -642,112 +720,112 @@ "revisionTime": "2015-01-27T13:39:51Z" }, { - "checksumSHA1": "QTqcF26Y2e0SHe2Z+2wj+fedud4=", + "checksumSHA1": "qduT9GZUhXc00XoHEwLx16Xn9gM=", "path": "github.com/gophercloud/gophercloud", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { "checksumSHA1": "b7g9TcU1OmW7e2UySYeOAmcfHpY=", "path": "github.com/gophercloud/gophercloud/internal", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "24DO5BEQdFKNl1rfWgI2b4+ry5U=", + "checksumSHA1": "YDNvjFNQS+UxqZMLm/shFs7aLNU=", "path": "github.com/gophercloud/gophercloud/openstack", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "Au6MAsI90lewLByg9n+Yjtdqdh8=", - "path": "github.com/gophercloud/gophercloud/openstack/common/extensions", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" - }, - { - "checksumSHA1": "4XWDCGMYqipwJymi9xJo9UffD7g=", - "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" - }, - { - "checksumSHA1": "e7AW3YDVYJPKUjpqsB4AL9RRlTw=", + "checksumSHA1": "vFS5BwnCdQIfKm1nNWrR+ijsAZA=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": 
"2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "bx6QnHtpgB6nKmN4QRVKa5PszqY=", + "checksumSHA1": "lAQuKIqTuQ9JuMgN0pPkNtRH2RM=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/keypairs", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "qBpGbX7LQMPATdO8XyQmU7IXDiI=", + "checksumSHA1": "qfVZltu1fYTYXS97WbjeLuLPgUc=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/startstop", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "vTyXSR+Znw7/o/70UBOWG0F09r8=", + "checksumSHA1": "1/1G6O0CUVYyTFF/IqzWThGyuPQ=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/flavors", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "Rnzx2YgOD41k8KoPA08tR992PxQ=", + "checksumSHA1": "CSnfH01hSas0bdc/3m/f5Rt6SFY=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/images", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "IjCvcaNnRW++hclt21WUkMYinaA=", + "checksumSHA1": "FKVrvZnE/223fpKVGPqaIX4JP9I=", "path": "github.com/gophercloud/gophercloud/openstack/compute/v2/servers", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "S8bHmOP+NjtlYioJC89zIBVvhYc=", + "checksumSHA1": "oOJkelRgWx0NzUmxuI3kTS27gM0=", "path": "github.com/gophercloud/gophercloud/openstack/identity/v2/tenants", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "AvUU5En9YpG25iLlcAPDgcQODjI=", + "checksumSHA1": "z5NsqMZX3TLMzpmwzOOXE4M5D9w=", "path": "github.com/gophercloud/gophercloud/openstack/identity/v2/tokens", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "rqE0NwmQ9qhXADXxg3DcuZ4A3wk=", + "checksumSHA1": "2PYxD2MOrbp4JCWy5794sEtDYD4=", "path": "github.com/gophercloud/gophercloud/openstack/identity/v3/tokens", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "p2ivHupXGBmyHkusnob2NsbsCQk=", + "checksumSHA1": "Pop8rylL583hCZ0RjO+9TkrScCo=", "path": "github.com/gophercloud/gophercloud/openstack/imageservice/v2/images", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" 
}, { - "checksumSHA1": "KA5YKF9TwIsTy9KssO27y+wk/6U=", + "checksumSHA1": "GFqX1Y5SpZvvyx0LPaP9D9Xp5k0=", "path": "github.com/gophercloud/gophercloud/openstack/imageservice/v2/members", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "TDOZnaS0TO0NirpxV1QwPerAQTY=", + "checksumSHA1": "8KE4bJzhbFZKsYMxcRg6xLqqfTg=", "path": "github.com/gophercloud/gophercloud/openstack/utils", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" }, { - "checksumSHA1": "YspETi3tOMvawKIT91HyuqaA5lM=", + "checksumSHA1": "jWdt1lN75UC0GrLG5Tmng+qP+ZI=", "path": "github.com/gophercloud/gophercloud/pagination", - "revision": "95a28eb606def6aaaed082b6b82d3244b0552184", - "revisionTime": "2017-06-23T01:44:30Z" + "revision": "7112fcd50da4ea27e8d4d499b30f04eea143bec2", + "revisionTime": "2018-05-31T02:06:30Z" + }, + { + "checksumSHA1": "ujo1JDey6cxwnGs4HXVCJNVrhHw=", + "path": "github.com/gophercloud/utils/openstack/clientconfig", + "revision": "afce78e977c56ca5407957bf67e8ecc56aab601d", + "revisionTime": "2018-05-22T20:53:45Z" + }, + { + "checksumSHA1": "xSmii71kfQASGNG2C8ttmHx9KTE=", + "path": "github.com/gorilla/websocket", + "revision": "a91eba7f97777409bc2c443f5534d41dd20c5720", + "revisionTime": "2017-03-19T17:27:27Z" }, { "checksumSHA1": "izBSRxLAHN+a/XpAku0in05UzlY=", @@ -784,6 +862,24 @@ "path": "github.com/hashicorp/go-multierror", "revision": "d30f09973e19c1dfcd120b2d9c4f168e68d6b5d5" }, + { + "checksumSHA1": "hjQfXn32Tvuu6IJACOTsMzm+AbA=", + "path": "github.com/hashicorp/go-oracle-terraform/client", + "revision": "62e2241f9c4154d5603a3678adc912991a47a468", + "revisionTime": "2018-01-31T23:42:02Z" + }, + { + "checksumSHA1": "yoA7SyeQNJ8XxwC7IcXdJ2kOTqg=", + "path": "github.com/hashicorp/go-oracle-terraform/compute", + "revision": "62e2241f9c4154d5603a3678adc912991a47a468", + "revisionTime": "2018-01-31T23:42:02Z" + }, + { + "checksumSHA1": "NuObCk0/ybL3w++EnltgrB1GQRc=", + "path": "github.com/hashicorp/go-oracle-terraform/opc", + "revision": "62e2241f9c4154d5603a3678adc912991a47a468", + "revisionTime": "2018-01-31T23:42:02Z" + }, { "checksumSHA1": "ErJHGU6AVPZM9yoY/xV11TwSjQs=", "path": "github.com/hashicorp/go-retryablehttp", @@ -817,34 +913,40 @@ "revision": "c01cf91b011868172fdcd9f41838e80c9d716264" }, { - "checksumSHA1": "EqvUu0Ku0Ec5Tk6yhGNOuOr8yeA=", + "checksumSHA1": "Lg8OHK87XRGCaipG+5+zFyN8OMw=", "path": "github.com/joyent/triton-go", - "revision": "5a58ad2cdec95cddd1e0a2e56f559341044b04f0", - "revisionTime": "2017-10-17T16:55:58Z" + "revision": "545edbe0d564f075ac576f1ad177f4ff39c9adaf", + "revisionTime": "2018-01-16T16:57:42Z" }, { - "checksumSHA1": "JKf97EAAAZFQ6Wf8qN9X7TWqNBY=", + "checksumSHA1": "Y03+L+I0FVZ2bMGWt1MHTDEyWM4=", "path": "github.com/joyent/triton-go/authentication", - "revision": "5a58ad2cdec95cddd1e0a2e56f559341044b04f0", - "revisionTime": "2017-10-17T16:55:58Z" + "revision": "545edbe0d564f075ac576f1ad177f4ff39c9adaf", + "revisionTime": "2018-01-16T16:57:42Z" }, { - "checksumSHA1": "dlO1or0cyVMAmZzyLcBuoy+M0xU=", + "checksumSHA1": "MuJsGBr6HlXQYxZY9cM5rBk+Lns=", "path": "github.com/joyent/triton-go/client", - "revision": "5a58ad2cdec95cddd1e0a2e56f559341044b04f0", - "revisionTime": "2017-10-17T16:55:58Z" + "revision": "545edbe0d564f075ac576f1ad177f4ff39c9adaf", + 
"revisionTime": "2018-01-16T16:57:42Z" }, { - "checksumSHA1": "O/y7BfKJFUf3A8TCRMXgo9HSb1w=", + "checksumSHA1": "A+YGcdf/qfac1gqK8QY1F5+pYGA=", "path": "github.com/joyent/triton-go/compute", - "revision": "5a58ad2cdec95cddd1e0a2e56f559341044b04f0", - "revisionTime": "2017-10-17T16:55:58Z" + "revision": "545edbe0d564f075ac576f1ad177f4ff39c9adaf", + "revisionTime": "2018-01-16T16:57:42Z" }, { - "checksumSHA1": "gyLtPyKlcumRSkrAH+SsDQo1GnY=", + "checksumSHA1": "d/Py6j/uMgOAFNFGpsQrNnSsO+k=", + "path": "github.com/joyent/triton-go/errors", + "revision": "545edbe0d564f075ac576f1ad177f4ff39c9adaf", + "revisionTime": "2018-01-16T16:57:42Z" + }, + { + "checksumSHA1": "MIrIeqiPmw66cko+gzk7Cht2SLE=", "path": "github.com/joyent/triton-go/network", - "revision": "5a58ad2cdec95cddd1e0a2e56f559341044b04f0", - "revisionTime": "2017-10-17T16:55:58Z" + "revision": "545edbe0d564f075ac576f1ad177f4ff39c9adaf", + "revisionTime": "2018-01-16T16:57:42Z" }, { "checksumSHA1": "gEjGS03N1eysvpQ+FCHTxPcbxXc=", @@ -877,6 +979,12 @@ "path": "github.com/kr/fs", "revision": "2788f0dbd16903de03cb8186e5c7d97b69ad387b" }, + { + "checksumSHA1": "T9E+5mKBQ/BX4wlNxgaPfetxdeI=", + "path": "github.com/marstr/guid", + "revision": "8bdf7d1a087ccc975cf37dd6507da50698fd19ca", + "revisionTime": "2017-04-27T23:51:15Z" + }, { "checksumSHA1": "t7hEeujQesCSzjZIPcE80XgQ9M0=", "path": "github.com/masterzen/azure-sdk-for-go/core/http", @@ -895,16 +1003,16 @@ "revision": "95ba30457eb1121fa27753627c774c7cd4e90083" }, { - "checksumSHA1": "8z5kCCFRsBkhXic9jxxeIV3bBn8=", + "checksumSHA1": "dVQEUn5TxdIAXczK7rh6qUrq44Q=", "path": "github.com/masterzen/winrm", - "revision": "a2df6b1315e6fd5885eb15c67ed259e85854125f", - "revisionTime": "2017-08-14T13:39:27Z" + "revision": "7e40f93ae939004a1ef3bd5ff5c88c756ee762bb", + "revisionTime": "2018-02-24T16:03:50Z" }, { "checksumSHA1": "XFSXma+KmkhkIPsh4dTd/eyja5s=", "path": "github.com/masterzen/winrm/soap", - "revision": "a2df6b1315e6fd5885eb15c67ed259e85854125f", - "revisionTime": "2017-08-14T13:39:27Z" + "revision": "7e40f93ae939004a1ef3bd5ff5c88c756ee762bb", + "revisionTime": "2018-02-24T16:03:50Z" }, { "checksumSHA1": "NkbetqlpWBi3gP08JDneC+axTKw=", @@ -912,22 +1020,28 @@ "revision": "56b76bdf51f7708750eac80fa38b952bb9f32639" }, { - "checksumSHA1": "UP+pXl+ic9y6qrpZA5MqDIAuGfw=", + "checksumSHA1": "cJE7dphDlam/i7PhnsyosNWtbd4=", + "path": "github.com/mattn/go-runewidth", + "revision": "97311d9f7767e3d6f422ea06661bc2c7a19e8a5d", + "revisionTime": "2017-05-10T07:48:58Z" + }, + { + "checksumSHA1": "UIqCj7qI0hhIMpAhS9YYqs2jD48=", "path": "github.com/mitchellh/cli", - "revision": "ee8578a9c12a5bb9d55303b9665cc448772c81b8", - "revisionTime": "2017-03-28T05:23:52Z" + "revision": "65fcae5817c8600da98ada9d7edf26dd1a84837b", + "revisionTime": "2017-09-08T18:10:43Z" }, { "checksumSHA1": "mVqDwKcibat0IKAdzAhfGIHPwI8=", "path": "github.com/mitchellh/go-fs", - "revision": "7bae45d9a684750e82b97ff320c82556614e621b", - "revisionTime": "2016-11-08T19:11:23Z" + "revision": "7b48fa161ea73ce54566b233d6fba8750b2faf47", + "revisionTime": "2018-04-02T23:40:41Z" }, { - "checksumSHA1": "B5dy+Lg6jFPgHYhozztz88z3b4A=", + "checksumSHA1": "i+3acspzTbQODW3dWX2JKydHVAo=", "path": "github.com/mitchellh/go-fs/fat", - "revision": "7bae45d9a684750e82b97ff320c82556614e621b", - "revisionTime": "2016-11-08T19:11:23Z" + "revision": "7b48fa161ea73ce54566b233d6fba8750b2faf47", + "revisionTime": "2018-04-02T23:40:41Z" }, { "checksumSHA1": "z235fRXw4+SW4xWgLTYc8SwkM2M=", @@ -945,15 +1059,10 @@ "revision": 
"87b45ffd0e9581375c491fef3d32130bb15c5bd7" }, { - "checksumSHA1": "4Js6Jlu93Wa0o6Kjt393L9Z7diE=", + "checksumSHA1": "1JtAhgmRN0x794LRNhs0DJ5t8io=", "path": "github.com/mitchellh/mapstructure", - "revision": "281073eb9eb092240d33ef253c404f1cca550309" - }, - { - "checksumSHA1": "5x1RX5m8SCkCRLyLL8wBc0qJpV8=", - "path": "github.com/mitchellh/multistep", - "revision": "391576a156a54cfbb4cf5d5eda40cf6ffa3e3a4d", - "revisionTime": "2017-03-16T18:53:39Z" + "revision": "b4575eea38cca1123ec2dc90c26529b5c5acfcff", + "revisionTime": "2018-01-11T00:07:20Z" }, { "checksumSHA1": "m2L8ohfZiFRsMW3iynaH/TWgnSY=", @@ -971,16 +1080,41 @@ "path": "github.com/mitchellh/reflectwalk", "revision": "eecf4c70c626c7cfbb95c90195bc34d386c74ac6" }, + { + "checksumSHA1": "KhOVzKefFYORHdIVe+/gNAHB23A=", + "path": "github.com/moul/anonuuid", + "revision": "609b752a95effbbef26d134ac18ed6f57e01b98e", + "revisionTime": "2016-02-22T16:21:17Z" + }, + { + "checksumSHA1": "nlJWlZ8NlDeKDaXoSFq1UM56RCs=", + "path": "github.com/moul/gotty-client", + "revision": "b26a57ebc215d2d68945184bb7e88886dfd9c66c", + "revisionTime": "2018-03-27T18:02:12Z" + }, { "checksumSHA1": "gcLub3oB+u4QrOJZcYmk/y2AP4k=", "path": "github.com/nu7hatch/gouuid", "revision": "179d4d0c4d8d407a32af483c2354df1d2c91e6c3" }, { - "checksumSHA1": "XXmfaQ8fEupEgaGd6PptrLnrE54=", + "checksumSHA1": "WY8p2ZNGyl69P1tcVc5HFal/Ng0=", + "path": "github.com/olekukonko/tablewriter", + "revision": "96aac992fc8b1a4c83841a6c3e7178d20d989625", + "revisionTime": "2018-01-05T11:11:33Z" + }, + { + "checksumSHA1": "/LfGOK58LAN5bN/kVhyjw6prFuw=", + "path": "github.com/oracle/oci-go-sdk", + "revision": "36126af3da60f6d2053487efed8801aa794f7605", + "revisionTime": "2018-03-02T19:49:46Z", + "tree": true + }, + { + "checksumSHA1": "/NoE6t3UkW4/iKAtbf59GGv6tF8=", "path": "github.com/packer-community/winrmcp/winrmcp", - "revision": "e1b7d6e6b1b1a27984270784190f1d06ad91888b", - "revisionTime": "2017-09-29T21:51:32Z" + "revision": "81144009af586de8e7729b829266f09dd0d59701", + "revisionTime": "2018-01-02T16:08:24Z" }, { "checksumSHA1": "oaXvjFg802gS/wx1bx2gAQwa7XQ=", @@ -997,6 +1131,12 @@ "path": "github.com/pierrec/xxHash/xxHash32", "revision": "5a004441f897722c627870a981d02b29924215fa" }, + { + "checksumSHA1": "xCv4GBFyw07vZkVtKF/XrUnkHRk=", + "path": "github.com/pkg/errors", + "revision": "e881fd58d78e04cf6d0de1217f8707c8cc2249bc", + "revisionTime": "2017-12-16T07:03:16Z" + }, { "checksumSHA1": "Mud+5WNq90BFb4gTeM5AByN8f2Y=", "path": "github.com/pkg/sftp", @@ -1038,12 +1178,42 @@ "path": "github.com/pmezard/go-difflib/difflib", "revision": "792786c7400a136282c1664665ae0a8db921c6c2" }, + { + "checksumSHA1": "rTNABfFJ9wtLQRH8uYNkEZGQOrY=", + "path": "github.com/posener/complete", + "revision": "88e59760adaddb8276c9b15511302890690e2dae", + "revisionTime": "2017-09-08T12:52:45Z" + }, + { + "checksumSHA1": "NB7uVS0/BJDmNu68vPAlbrq4TME=", + "path": "github.com/posener/complete/cmd", + "revision": "88e59760adaddb8276c9b15511302890690e2dae", + "revisionTime": "2017-09-08T12:52:45Z" + }, + { + "checksumSHA1": "Hwojin3GxRyKwPAiz5r7UszqkPc=", + "path": "github.com/posener/complete/cmd/install", + "revision": "88e59760adaddb8276c9b15511302890690e2dae", + "revisionTime": "2017-09-08T12:52:45Z" + }, + { + "checksumSHA1": "DMo94FwJAm9ZCYCiYdJU2+bh4no=", + "path": "github.com/posener/complete/match", + "revision": "88e59760adaddb8276c9b15511302890690e2dae", + "revisionTime": "2017-09-08T12:52:45Z" + }, { "checksumSHA1": "Kq0fF7R65dDcGReuhf47O3LQgrY=", "path": "github.com/profitbricks/profitbricks-sdk-go", 
"revision": "7bdb11aecb0e457ea23c86898c6b49bfc0eb4bb1", "revisionTime": "2017-08-01T13:52:49Z" }, + { + "checksumSHA1": "DF3jZEw4lCq/SEaC7DIl/R+7S70=", + "path": "github.com/renstrom/fuzzysearch/fuzzy", + "revision": "2d205ac6ec17a839a94bdbfd16d2fa6c6dada2e0", + "revisionTime": "2016-03-31T20:48:55Z" + }, { "checksumSHA1": "zmC8/3V4ls53DJlNTKDZwPSC/dA=", "path": "github.com/satori/go.uuid", @@ -1057,10 +1227,45 @@ "revisionTime": "2017-03-21T23:07:31Z" }, { - "checksumSHA1": "Q2V7Zs3diLmLfmfbiuLpSxETSuY=", + "checksumSHA1": "UvMjUbtTEF0wskLTpV8grtBUh7Q=", + "path": "github.com/scaleway/scaleway-cli/pkg/api", + "revision": "51d0290d9434dc6139dd89001236cfd989521a79", + "revisionTime": "2018-03-05T22:44:21Z" + }, + { + "checksumSHA1": "kveaAmNlnvmIIuEkFcMlB+N7TqY=", + "path": "github.com/scaleway/scaleway-cli/pkg/sshcommand", + "revision": "51d0290d9434dc6139dd89001236cfd989521a79", + "revisionTime": "2018-03-05T22:44:21Z" + }, + { + "checksumSHA1": "elO1R+B5Ym3ZNA5Zht4cIE/8b0c=", + "path": "github.com/scaleway/scaleway-cli/pkg/utils", + "revision": "51d0290d9434dc6139dd89001236cfd989521a79", + "revisionTime": "2018-03-05T22:44:21Z" + }, + { + "checksumSHA1": "hU3ibLi5mZBayj0HPIVpcMSvRgU=", + "path": "github.com/sirupsen/logrus", + "revision": "90150a8ed11b6ce285e77e8af2b0109559ce4777", + "revisionTime": "2018-03-15T01:07:03Z" + }, + { + "checksumSHA1": "xqtDGN726+wHn43myII/VQ4vQn8=", + "path": "github.com/stretchr/objx", + "revision": "facf9a85c22f48d2f52f2380e4efce1768749a89", + "revisionTime": "2018-01-06T01:13:53Z", + "version": "v0.1", + "versionExact": "v0.1" + }, + { + "checksumSHA1": "pbq5LYckA7/rqe03l9MtXDekmRg=", "comment": "v1.1.4-4-g976c720", "path": "github.com/stretchr/testify/assert", - "revision": "976c720a22c8eb4eb6a0b4348ad85ad12491a506" + "revision": "12b6f73e6084dad08a7c6e575284b177ecafbc71", + "revisionTime": "2018-01-31T22:23:50Z", + "version": "v1.2.1", + "versionExact": "v1.2.1" }, { "checksumSHA1": "GQ9bu6PuydK3Yor1JgtVKUfEJm8=", @@ -1208,14 +1413,22 @@ "revisionTime": "2017-02-08T20:51:15Z" }, { - "checksumSHA1": "5ARrN3Zq+E9zazFb/N+b08Serys=", - "path": "golang.org/x/net/context", - "revision": "6ccd6698c634f5d835c40c1c31848729e0cecda1" + "checksumSHA1": "BGm8lKZmvJbf/YOJLeL1rw2WVjA=", + "path": "golang.org/x/crypto/ssh/terminal", + "revision": "88942b9c40a4c9d203b82b3731787b672d6e809b", + "revisionTime": "2018-03-21T23:38:19Z" }, { - "checksumSHA1": "tHFno3QaRarH85A4DV1FYuWATQI=", + "checksumSHA1": "GtamqiJoL7PGHsN454AoffBFMa8=", + "path": "golang.org/x/net/context", + "revision": "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec", + "revisionTime": "2018-01-12T01:53:59Z" + }, + { + "checksumSHA1": "WHc3uByvGaMcnSoI21fhzYgbOgg=", "path": "golang.org/x/net/context/ctxhttp", - "revision": "6ccd6698c634f5d835c40c1c31848729e0cecda1" + "revision": "5ccada7d0a7ba9aeb5d3aca8d3501b4c2a509fec", + "revisionTime": "2018-01-12T01:53:59Z" }, { "checksumSHA1": "vqc3a+oTUGX8PmD0TS+qQ7gmN8I=", @@ -1267,10 +1480,22 @@ "revision": "8a57ed94ffd43444c0879fe75701732a38afc985" }, { - "checksumSHA1": "NzQ3QYllWwK+3GZliu11jMU6xwo=", + "checksumSHA1": "S0DP7Pn7sZUmXc55IzZnNvERu6s=", + "path": "golang.org/x/sync/errgroup", + "revision": "5a06fca2c336a4b2b2fcb45702e8c47621b2aa2c", + "revisionTime": "2017-03-17T17:13:11Z" + }, + { + "checksumSHA1": "HUsOCUBC1i9oPZo0VvmawbuhGa4=", "path": "golang.org/x/sys/unix", - "revision": "c84c1ab9fd18cdd4c23dd021c10f5f46dea95e46", - "revisionTime": "2017-08-11T22:05:22Z" + "revision": "13d03a9a82fba647c21a0ef8fba44a795d0f0835", + "revisionTime": 
"2018-03-26T12:16:02Z" + }, + { + "checksumSHA1": "Dfz4tGItokMvHZENCsBmPPB8FGQ=", + "path": "golang.org/x/sys/windows", + "revision": "13d03a9a82fba647c21a0ef8fba44a795d0f0835", + "revisionTime": "2018-03-26T12:16:02Z" }, { "checksumSHA1": "Mr4ur60bgQJnQFfJY0dGtwWwMPE=", @@ -1369,10 +1594,10 @@ "revisionTime": "2017-07-07T17:19:04Z" }, { - "checksumSHA1": "gvrxuXnqGhfzY0O3MFbS8XhMH/k=", + "checksumSHA1": "EooPqEpEyY/7NCRwHDMWhhlkQNw=", "path": "google.golang.org/api/gensupport", - "revision": "e665075b5ff79143ba49c58fab02df9dc122afd5", - "revisionTime": "2017-07-09T10:32:00Z" + "revision": "9c79deebf7496e355d7e95d82d4af1fe4e769b2f", + "revisionTime": "2018-04-16T00:04:00Z" }, { "checksumSHA1": "yQREK/OWrz9PLljbr127+xFk6J0=", @@ -1384,6 +1609,12 @@ "path": "google.golang.org/api/googleapi/internal/uritemplates", "revision": "ff0a1ff302946b997eb1832381419d1f95143483" }, + { + "checksumSHA1": "Zpu9YB1omKr0VhFb8iycN1u+42Y=", + "path": "google.golang.org/api/storage/v1", + "revision": "9c79deebf7496e355d7e95d82d4af1fe4e769b2f", + "revisionTime": "2018-04-16T00:04:00Z" + }, { "checksumSHA1": "SSYsrizGeHQRKn/S7j5CQu86egU=", "path": "google.golang.org/appengine", @@ -1473,6 +1704,12 @@ "checksumSHA1": "U7dGDNwEHORvJFMoNSXErKE7ITg=", "path": "google.golang.org/cloud/internal", "revision": "5a3b06f8b5da3b7c3a93da43163b872c86c509ef" + }, + { + "checksumSHA1": "ZSWoOPUNRr5+3dhkLK3C4cZAQPk=", + "path": "gopkg.in/yaml.v2", + "revision": "5420a8b6744d3b0345ab293f6fcba19c978f1183", + "revisionTime": "2018-03-28T19:50:20Z" } ], "rootPath": "github.com/hashicorp/packer" diff --git a/version/version.go b/version/version.go index f3d38291a..e2653351c 100644 --- a/version/version.go +++ b/version/version.go @@ -9,7 +9,7 @@ import ( var GitCommit string // The main version number that is being run at the moment. -const Version = "1.1.2" +const Version = "1.2.6" // A pre-release marker for the version. If this is "" (empty string) // then it means that it is a final release. 
Otherwise, this is a pre-release diff --git a/website/Gemfile b/website/Gemfile index a2c4d1525..f8a10cb94 100644 --- a/website/Gemfile +++ b/website/Gemfile @@ -1,3 +1,3 @@ source "https://rubygems.org" -gem "middleman-hashicorp", "0.3.28" +gem "middleman-hashicorp", "0.3.37" diff --git a/website/Gemfile.lock b/website/Gemfile.lock index dea170939..040e5237a 100644 --- a/website/Gemfile.lock +++ b/website/Gemfile.lock @@ -1,12 +1,12 @@ GEM remote: https://rubygems.org/ specs: - activesupport (4.2.8) + activesupport (4.2.10) i18n (~> 0.7) minitest (~> 5.1) thread_safe (~> 0.3, >= 0.3.4) tzinfo (~> 1.1) - autoprefixer-rails (7.1.1.2) + autoprefixer-rails (8.3.0) execjs bootstrap-sass (3.3.7) autoprefixer-rails (>= 5.2.1) @@ -18,7 +18,7 @@ GEM rack (>= 1.0.0) rack-test (>= 0.5.4) xpath (~> 2.0) - chunky_png (1.3.8) + chunky_png (1.3.10) coffee-script (2.4.1) coffee-script-source execjs @@ -39,10 +39,10 @@ GEM eventmachine (>= 0.12.9) http_parser.rb (~> 0.6.0) erubis (2.7.0) - eventmachine (1.2.3) + eventmachine (1.2.5) execjs (2.7.0) - ffi (1.9.18) - haml (5.0.1) + ffi (1.9.23) + haml (5.0.4) temple (>= 0.8.0) tilt hike (1.2.3) @@ -51,7 +51,7 @@ GEM http_parser.rb (0.6.0) i18n (0.7.0) json (2.1.0) - kramdown (1.13.2) + kramdown (1.16.2) listen (3.0.8) rb-fsevent (~> 0.9, >= 0.9.4) rb-inotify (~> 0.9, >= 0.9.7) @@ -78,7 +78,7 @@ GEM rack (>= 1.4.5, < 2.0) thor (>= 0.15.2, < 2.0) tilt (~> 1.4.1, < 2.0) - middleman-hashicorp (0.3.28) + middleman-hashicorp (0.3.37) bootstrap-sass (~> 3.3) builder (~> 3.2) middleman (~> 3.4) @@ -101,28 +101,28 @@ GEM mime-types (3.1) mime-types-data (~> 3.2015) mime-types-data (3.2016.0521) - mini_portile2 (2.2.0) - minitest (5.10.2) - multi_json (1.12.1) - nokogiri (1.8.0) - mini_portile2 (~> 2.2.0) - padrino-helpers (0.12.8.1) + mini_portile2 (2.3.0) + minitest (5.11.3) + multi_json (1.13.1) + nokogiri (1.8.2) + mini_portile2 (~> 2.3.0) + padrino-helpers (0.12.9) i18n (~> 0.6, >= 0.6.7) - padrino-support (= 0.12.8.1) - tilt (~> 1.4.1) - padrino-support (0.12.8.1) + padrino-support (= 0.12.9) + tilt (>= 1.4.1, < 3) + padrino-support (0.12.9) activesupport (>= 3.1) - rack (1.6.8) - rack-livereload (0.3.16) + rack (1.6.9) + rack-livereload (0.3.17) rack - rack-test (0.6.3) - rack (>= 1.0) - rb-fsevent (0.9.8) + rack-test (1.0.0) + rack (>= 1.0, < 3) + rb-fsevent (0.10.3) rb-inotify (0.9.10) ffi (>= 0.5.0, < 2) redcarpet (3.4.0) - rouge (2.1.1) - sass (3.4.24) + rouge (2.2.1) + sass (3.4.25) sprockets (2.12.4) hike (~> 1.2) multi_json (~> 1.0) @@ -134,13 +134,13 @@ GEM sprockets (~> 2.0) tilt (~> 1.1) temple (0.8.0) - thor (0.19.4) + thor (0.20.0) thread_safe (0.3.6) tilt (1.4.1) - turbolinks (5.0.1) - turbolinks-source (~> 5) - turbolinks-source (5.0.3) - tzinfo (1.2.3) + turbolinks (5.1.1) + turbolinks-source (~> 5.1) + turbolinks-source (5.1.0) + tzinfo (1.2.5) thread_safe (~> 0.1) uber (0.0.15) uglifier (2.7.2) @@ -153,7 +153,7 @@ PLATFORMS ruby DEPENDENCIES - middleman-hashicorp (= 0.3.28) + middleman-hashicorp (= 0.3.37) BUNDLED WITH - 1.15.1 + 1.16.1 diff --git a/website/Makefile b/website/Makefile index 4d3d361b7..7f14cadf9 100644 --- a/website/Makefile +++ b/website/Makefile @@ -1,4 +1,4 @@ -VERSION?="0.3.28" +VERSION?="0.3.37" build: @echo "==> Starting build in Docker..." 
@@ -7,6 +7,7 @@ build: --rm \ --tty \ --volume "$(shell pwd):/website" \ + -e "ENV=production" \ hashicorp/middleman-hashicorp:${VERSION} \ bundle exec middleman build --verbose --clean diff --git a/website/config.rb b/website/config.rb index 93d234cbe..dc89c0b69 100644 --- a/website/config.rb +++ b/website/config.rb @@ -2,12 +2,22 @@ set :base_url, "https://www.packer.io/" activate :hashicorp do |h| h.name = "packer" - h.version = "1.1.1" + h.version = "1.2.5" h.github_slug = "hashicorp/packer" h.website_root = "website" end helpers do + # Returns a segment tracking ID such that local development is not + # tracked to production systems. + def segmentId() + if (ENV['ENV'] == 'production') + 'AjXdfmTTk1I9q9dfyePuDFHBrz1tCO3l' + else + '0EXTgkNx0Ydje2PGXVbRhpKKoe5wtzcE' + end + end + # Returns the FQDN of the image URL. # # @param [String] path diff --git a/website/data/format.yml b/website/data/format.yml new file mode 100644 index 000000000..5a176fee2 --- /dev/null +++ b/website/data/format.yml @@ -0,0 +1,8 @@ +required: + : + type: + docs: +optional: + : + type: + docs: diff --git a/website/packer.json b/website/packer.json index fd2618f17..e1e5f77c8 100644 --- a/website/packer.json +++ b/website/packer.json @@ -3,12 +3,13 @@ "aws_access_key_id": "{{ env `AWS_ACCESS_KEY_ID` }}", "aws_secret_access_key": "{{ env `AWS_SECRET_ACCESS_KEY` }}", "aws_region": "{{ env `AWS_REGION` }}", + "website_environment": "production", "fastly_api_key": "{{ env `FASTLY_API_KEY` }}" }, "builders": [ { "type": "docker", - "image": "hashicorp/middleman-hashicorp:0.3.28", + "image": "hashicorp/middleman-hashicorp:0.3.37", "discard": "true", "volumes": { "{{ pwd }}": "/website" @@ -22,6 +23,7 @@ "AWS_ACCESS_KEY_ID={{ user `aws_access_key_id` }}", "AWS_SECRET_ACCESS_KEY={{ user `aws_secret_access_key` }}", "AWS_REGION={{ user `aws_region` }}", + "ENV={{ user `website_environment` }}", "FASTLY_API_KEY={{ user `fastly_api_key` }}" ], "inline": [ diff --git a/website/source/assets/images/guides/teamcity_build_configuration.png b/website/source/assets/images/guides/teamcity_build_configuration.png new file mode 100644 index 000000000..bcdcbb54a Binary files /dev/null and b/website/source/assets/images/guides/teamcity_build_configuration.png differ diff --git a/website/source/assets/images/guides/teamcity_build_log.png b/website/source/assets/images/guides/teamcity_build_log.png new file mode 100644 index 000000000..13d23e19c Binary files /dev/null and b/website/source/assets/images/guides/teamcity_build_log.png differ diff --git a/website/source/assets/images/guides/teamcity_build_log_complete.png b/website/source/assets/images/guides/teamcity_build_log_complete.png new file mode 100644 index 000000000..a05b2a685 Binary files /dev/null and b/website/source/assets/images/guides/teamcity_build_log_complete.png differ diff --git a/website/source/assets/images/guides/teamcity_create_project_from_url-1.png b/website/source/assets/images/guides/teamcity_create_project_from_url-1.png new file mode 100644 index 000000000..dc1ab3081 Binary files /dev/null and b/website/source/assets/images/guides/teamcity_create_project_from_url-1.png differ diff --git a/website/source/assets/images/guides/teamcity_create_project_from_url-2.png b/website/source/assets/images/guides/teamcity_create_project_from_url-2.png new file mode 100644 index 000000000..5b98b16b8 Binary files /dev/null and b/website/source/assets/images/guides/teamcity_create_project_from_url-2.png differ diff --git a/website/source/assets/javascripts/analytics.js 
b/website/source/assets/javascripts/analytics.js new file mode 100644 index 000000000..e6ad037bc --- /dev/null +++ b/website/source/assets/javascripts/analytics.js @@ -0,0 +1,16 @@ +document.addEventListener('turbolinks:load', function() { + analytics.page() + + track('.downloads .download .details li a', function(el) { + var m = el.href.match(/packer_(\d+\.\d+\.\d+)_(.*?)_(.*?)\.zip/) + return { + event: 'Download', + category: 'Button', + label: 'Packer | v' + m[1] + ' | ' + m[2] + ' | ' + m[3], + version: m[1], + os: m[2], + architecture: m[3], + product: 'packer' + } + }) +}) diff --git a/website/source/assets/javascripts/application.js b/website/source/assets/javascripts/application.js index 5d2365e14..ae70e571d 100644 --- a/website/source/assets/javascripts/application.js +++ b/website/source/assets/javascripts/application.js @@ -3,3 +3,6 @@ //= require hashicorp/mega-nav //= require hashicorp/sidebar +//= require hashicorp/analytics + +//= require analytics diff --git a/website/source/assets/stylesheets/_global.scss b/website/source/assets/stylesheets/_global.scss index 3b69c05dd..ac806456d 100644 --- a/website/source/assets/stylesheets/_global.scss +++ b/website/source/assets/stylesheets/_global.scss @@ -24,12 +24,3 @@ h1, h2, h3, h4, h5 { h1 { margin-bottom: 24px; } - -// Avoid FOUT -.wf-loading { - visibility: hidden; -} - -.wf-active, .wf-inactive { - visibility: visible; -} diff --git a/website/source/community-tools.html.md b/website/source/community-tools.html.md index 6dcf6445b..799373a1b 100644 --- a/website/source/community-tools.html.md +++ b/website/source/community-tools.html.md @@ -38,6 +38,9 @@ power of Packer templates. * [geerlingguy/packer-ubuntu-1604](https://github.com/geerlingguy/packer-ubuntu-1604) \- Ubuntu 16.04 minimal Vagrant Box using Ansible provisioner +* [jakobadam/packer-qemu-templates](https://github.com/jakobadam/packer-qemu-templates) + - QEMU templates for various operating systems + ## Wrappers - [packer-config](https://github.com/ianchesal/packer-config) - a Ruby model that lets you build Packer configurations in Ruby @@ -49,3 +52,4 @@ power of Packer templates. ## Other - [suitcase](https://github.com/tmclaugh/suitcase) - Packer based build system for CentOS OS images +- [Undo-WinRMConfig](https://cloudywindows.io/post/winrm-for-provisioning---close-the-door-on-the-way-out-eh/) - Open source automation to stage WinRM reset to pristine state at next shutdown diff --git a/website/source/docs/builders/alicloud-ecs.html.md b/website/source/docs/builders/alicloud-ecs.html.md index c19da9152..266b59445 100644 --- a/website/source/docs/builders/alicloud-ecs.html.md +++ b/website/source/docs/builders/alicloud-ecs.html.md @@ -27,50 +27,83 @@ builder. but it can also be sourced from the `ALICLOUD_ACCESS_KEY` environment variable. -- `secret_key` (string) - This is the Alicloud secret key. It must be provided, - but it can also be sourced from the `ALICLOUD_SECRET_KEY` environment - variable. - -- `region` (string) - This is the Alicloud region. It must be provided, but it - can also be sourced from the `ALICLOUD_REGION` environment variables. - -- `instance_type` (string) - Type of the instance. For values, see [Instance - Type Table](). You can also obtain the latest instance type table by invoking - the [Querying Instance Type - Table](https://intl.aliyun.com/help/doc-detail/25620.htm?spm=a3c0i.o25499en.a3.6.Dr1bik) - interface. - - `image_name` (string) - The name of the user-defined image, \[2, 128\] English or Chinese characters.
It must begin with an uppercase/lowercase letter or a Chinese character, and may contain numbers, `_` or `-`. It cannot begin with `http://` or `https://`. +- `instance_type` (string) - Type of the instance. For values, see [Instance + Type Table](https://www.alibabacloud.com/help/doc-detail/25378.htm?spm=a3c0i.o25499en.a3.9.14a36ac8iYqKRA). + You can also obtain the latest instance type table by invoking the [Querying + Instance Type Table](https://intl.aliyun.com/help/doc-detail/25620.htm?spm=a3c0i.o25499en.a3.6.Dr1bik) + interface. + +- `region` (string) - This is the Alicloud region. It must be provided, but it + can also be sourced from the `ALICLOUD_REGION` environment variables. + +- `secret_key` (string) - This is the Alicloud secret key. It must be provided, + but it can also be sourced from the `ALICLOUD_SECRET_KEY` environment + variable. + - `source_image` (string) - This is the base image id which you want to create your customized images. ### Optional: -- `skip_region_validation` (boolean) - The region validation can be skipped if this - value is true, the default value is false. +- `force_stop_instance` (boolean) - Whether to force shutdown upon device restart. + The default value is `false`. -- `image_description` (string) - The description of the image, with a length - limit of 0 to 256 characters. Leaving it blank means null, which is the - default value. It cannot begin with `http://` or `https://`. - -- `image_version` (string) - The version number of the image, with a length limit - of 1 to 40 English characters. - -- `image_share_account` (array of string) - The IDs of to-be-added Aliyun - accounts to which the image is shared. The number of accounts is 1 to 10. If - number of accounts is greater than 10, this parameter is ignored. - -- `image_copy_regions` (array of string) - Copy to the destination regionIds. + If it is set to `false`, the system is shut down normally; if it is set to + `true`, the system is forced to shut down. - `image_copy_names` (array of string) - The name of the destination image, \[2, 128\] English or Chinese characters. It must begin with an uppercase/lowercase letter or a Chinese character, and may contain numbers, `_` or `-`. It cannot begin with `http://` or `https://`. +- `image_copy_regions` (array of string) - Copy to the destination regionIds. + +- `image_description` (string) - The description of the image, with a length + limit of 0 to 256 characters. Leaving it blank means null, which is the + default value. It cannot begin with `http://` or `https://`. + +- `image_disk_mappings` (array of image disk mappings) - Add one or more data disks + to the image. + + - `disk_category` (string) - Category of the data disk. Optional values are: + - cloud - general cloud disk + - cloud\_efficiency - efficiency cloud disk + - cloud\_ssd - cloud SSD + + Default value: cloud. + + - `disk_delete_with_instance` (boolean) - Whether or not the disk is released along with the instance: + - True indicates that when the instance is released, this disk will be released with it + - False indicates that when the instance is released, this disk will be retained. + + - `disk_description` (string) - The value of disk description is blank by default. \[2, 256\] characters. The disk description will appear on the console. It cannot begin with `http://` or `https://`. + + - `disk_device` (string) - Device information of the related instance: such as + `/dev/xvdb` It is null unless the Status is In\_use. 
+ + - `disk_name` (string) - The value of disk name is blank by default. \[2, 128\] + English or Chinese characters, must begin with an uppercase/lowercase letter + or Chinese character. Can contain numbers, `.`, `_` and `-`. The disk name + will appear on the console. It cannot begin with `http://` or `https://`. + + - `disk_size` (number) - Size of the system disk, in GB, values range: + - cloud - 5 ~ 2000 + - cloud\_efficiency - 20 ~ 2048 + - cloud\_ssd - 20 ~ 2048 + + The value should be equal to or greater than the size of the specific SnapshotId. + + - `disk_snapshot_id` (string) - Snapshots are used to create the data disk + After this parameter is specified, Size is ignored. The actual size of the + created disk is the size of the specified snapshot. + + Snapshots from on or before July 15, 2013 cannot be used to create a disk. + - `image_force_delete` (boolean) - If this value is true, when the target image name is duplicated with an existing image, it will delete the existing image and then create the target image, otherwise, the creation will fail. The default @@ -80,80 +113,12 @@ builder. duplicated existing image, the source snapshot of this image will be delete either. -- `disk_name` (string) - The value of disk name is blank by default. \[2, 128\] - English or Chinese characters, must begin with an uppercase/lowercase letter - or Chinese character. Can contain numbers, `.`, `_` and `-`. The disk name - will appear on the console. It cannot begin with `http://` or `https://`. +- `image_share_account` (array of string) - The IDs of to-be-added Aliyun + accounts to which the image is shared. The number of accounts is 1 to 10. If + number of accounts is greater than 10, this parameter is ignored. -- `disk_category` (string) - Category of the data disk. Optional values are: - - cloud - general cloud disk - - cloud\_efficiency - efficiency cloud disk - - cloud\_ssd - cloud SSD - - Default value: cloud. - -- `disk_size` (number) - Size of the system disk, in GB, values range: - - cloud - 5 ~ 2000 - - cloud\_efficiency - 20 ~ 2048 - - cloud\_ssd - 20 ~ 2048 - - The value should be equal to or greater than the size of the specific SnapshotId. - -- `disk_snapshot_id` (string) - Snapshots are used to create the data disk - After this parameter is specified, Size is ignored. The actual size of the - created disk is the size of the specified snapshot. - - Snapshots from on or before July 15, 2013 cannot be used to create a disk. - -- `disk_description` (string) - The value of disk description is blank by default. \[2, 256\] characters. The disk description will appear on the console. It cannot begin with `http://` or `https://`. - -- `disk_delete_with_instance` (string) - Whether or not the disk is released along with the instance: -- True indicates that when the instance is released, this disk will be released with it -- False indicates that when the instance is released, this disk will be retained. - -- `disk_device` (string) - Device information of the related instance: such as - `/dev/xvdb` It is null unless the Status is In\_use. - -- `zone_id` (string) - ID of the zone to which the disk belongs. - -- `io_optimized` (boolean) - I/O optimized. - - Default value: false for Generation I instances; true for other instances. - -- `force_stop_instance` (boolean) - Whether to force shutdown upon device restart. - The default value is `false`. - - If it is set to `false`, the system is shut down normally; if it is set to - `true`, the system is forced to shut down. 
- -- `security_group_id` (string) - ID of the security group to which a newly - created instance belongs. Mutual access is allowed between instances in one - security group. If not specified, the newly created instance will be added to - the default security group. If the default group doesn’t exist, or the number - of instances in it has reached the maximum limit, a new security group will - be created automatically. - -- `security_group_name` (string) - The security group name. The default value is - blank. \[2, 128\] English or Chinese characters, must begin with an - uppercase/lowercase letter or Chinese character. Can contain numbers, `.`, - `_` or `-`. It cannot begin with `http://` or `https://`. - -- `user_data` (string) - The UserData of an instance must be encoded in `Base64` - format, and the maximum size of the raw data is `16 KB`. - -- `user_data_file` (string) - The file name of the userdata. - -- `vpc_id` (string) - VPC ID allocated by the system. - -- `vpc_name` (string) - The VPC name. The default value is blank. \[2, 128\] - English or Chinese characters, must begin with an uppercase/lowercase letter - or Chinese character. Can contain numbers, `_` and `-`. The disk description - will appear on the console. Cannot begin with `http://` or `https://`. - -- `vpc_cidr_block` (string) - Value options: `192.168.0.0/16` and `172.16.0.0/16`. - When not specified, the default value is `172.16.0.0/16`. - -- `vswitch_id` (string) - The ID of the VSwitch to be used. +- `image_version` (string) - The version number of the image, with a length limit + of 1 to 40 English characters. - `instance_name` (string) - Display name of the instance, which is a string of 2 to 128 Chinese or English characters. It must begin with an @@ -171,7 +136,6 @@ builder. For the regions out of China, currently only support `PayByTraffic`, you must set it manfully. - - `internet_max_bandwidth_out` (string) - Maximum outgoing bandwidth to the public network, measured in Mbps (Mega bit per second). @@ -179,10 +143,53 @@ builder. - PayByBandwidth: \[0, 100\]. If this parameter is not specified, API automatically sets it to 0 Mbps. - PayByTraffic: \[1, 100\]. If this parameter is not specified, an error is returned. +- `io_optimized` (boolean) - Whether an ECS instance is I/O optimized or not. + The default value is `false`. + +- `security_group_id` (string) - ID of the security group to which a newly + created instance belongs. Mutual access is allowed between instances in one + security group. If not specified, the newly created instance will be added to + the default security group. If the default group doesn’t exist, or the number + of instances in it has reached the maximum limit, a new security group will + be created automatically. + +- `security_group_name` (string) - The security group name. The default value is + blank. \[2, 128\] English or Chinese characters, must begin with an + uppercase/lowercase letter or Chinese character. Can contain numbers, `.`, + `_` or `-`. It cannot begin with `http://` or `https://`. + +- `security_token` (string) - STS access token, can be set through template or by exporting + as environment variable such "export SecurityToken=value". + +- `skip_region_validation` (boolean) - The region validation can be skipped if this + value is true, the default value is false. + - `temporary_key_pair_name` (string) - The name of the temporary key pair to generate. By default, Packer generates a name that looks like `packer_`, where `` is a 36 character unique identifier. 
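Pulling the required keys together with a few of the optional ones listed above, a minimal builder stanza might look like the sketch below. The builder type `alicloud-ecs`, the `access_key`, `image_name`, and `ssh_username` keys, and every value shown are illustrative assumptions; substitute your own credentials, region, and source image:

``` json
{
  "type": "alicloud-ecs",
  "access_key": "YOUR_ACCESS_KEY",
  "secret_key": "YOUR_SECRET_KEY",
  "region": "cn-beijing",
  "image_name": "packer_example",
  "source_image": "centos_7_03_64_20G_alibase_20170818.vhd",
  "instance_type": "ecs.n1.tiny",
  "io_optimized": true,
  "ssh_username": "root"
}
```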
+- `TLSHandshakeTimeout` (int) - If a "net/http: TLS handshake timeout" error occurs, set this environment variable + to a larger value, such as "export TLSHandshakeTimeout=30", which sets the TLS handshake timeout value to 30s. + +- `user_data` (string) - The UserData of an instance must be encoded in `Base64` + format, and the maximum size of the raw data is `16 KB`. + +- `user_data_file` (string) - The file name of the userdata. + +- `vpc_cidr_block` (string) - Value options: `192.168.0.0/16` and `172.16.0.0/16`. + When not specified, the default value is `172.16.0.0/16`. + +- `vpc_id` (string) - VPC ID allocated by the system. + +- `vpc_name` (string) - The VPC name. The default value is blank. \[2, 128\] + English or Chinese characters, must begin with an uppercase/lowercase letter + or Chinese character. Can contain numbers, `_` and `-`. The VPC name + will appear on the console. Cannot begin with `http://` or `https://`. + +- `vswitch_id` (string) - The ID of the VSwitch to be used. + +- `zone_id` (string) - ID of the zone to which the disk belongs. + ## Basic Example Here is a basic example for Alicloud. diff --git a/website/source/docs/builders/amazon-chroot.html.md b/website/source/docs/builders/amazon-chroot.html.md index 64d966a4d..116fd9ae5 100644 --- a/website/source/docs/builders/amazon-chroot.html.md +++ b/website/source/docs/builders/amazon-chroot.html.md @@ -77,10 +77,8 @@ each category, the available configuration keys are alphabetized. - `ami_description` (string) - The description to set for the resulting AMI(s). By default this description is empty. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with name of the region where this - is built. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `ami_groups` (array of strings) - A list of groups that have access to launch the resulting AMI(s). By default no groups have permission to launch @@ -170,6 +168,9 @@ each category, the available configuration keys are alphabetized. - `encrypted` (boolean) - Indicates whether to encrypt the volume or not + + - `kms_key_id` (string) - The ARN for the KMS encryption key. When + specifying `kms_key_id`, `encrypted` needs to be set to `true`. + - `iops` (number) - The number of I/O operations per second (IOPS) that the volume supports. See the documentation on [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) @@ -212,8 +213,10 @@ each category, the available configuration keys are alphabetized. where the `.Device` variable is replaced with the name of the device where the volume is attached. -- `mount_partition` (number) - The partition number containing the - / partition. By default this is the first partition of the volume. +- `mount_partition` (string) - The partition number containing the + / partition. By default this is the first partition of the volume, (for + example, `xvda1`) but you can designate the entire block device by setting + `"mount_partition": "0"` in your config, which will mount `xvda` instead. - `mount_options` (array of strings) - Options to supply the `mount` command when mounting devices. Each option will be prefixed with `-o` and supplied @@ -222,6 +225,14 @@ each category, the available configuration keys are alphabetized.
command](http://linuxcommand.org/man_pages/mount8.html) for valid file system specific options +- `nvme_device_path` (string) - When we call the mount command (by default + `mount -o device dir`), the string provided in `nvme_mount_path` will + replace `device` in that command. When this option is not set, `device` in + that command will be something like `/dev/sdf1`, mirroring the attached + device name. This assumption works for most instances but will fail with c5 + and m5 instances. In order to use the chroot builder with c5 and m5 + instances, you must manually set `nvme_device_path` and `device_path`. + - `pre_mount_commands` (array of strings) - A series of commands to execute after attaching the root volume and before mounting the chroot. This is not required unless using `from_scratch`. If so, this should include any @@ -243,19 +254,22 @@ each category, the available configuration keys are alphabetized. of the `source_ami` unless `from_scratch` is `true`, in which case this field must be defined. +- `root_volume_tags` (object of key/value strings) - Tags to apply to the volumes + that are *launched*. This is a + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. + - `skip_region_validation` (boolean) - Set to true if you want to skip validation of the `ami_regions` configuration option. Default `false`. - `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. They will override AMI tags if already applied to snapshot. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with name of the region where this - is built. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `snapshot_groups` (array of strings) - A list of groups that have access to create volumes from the snapshot(s). By default no groups have permission to create - volumes form the snapshot(s). `all` will make the snapshot publicly accessible. + volumes from the snapshot(s). `all` will make the snapshot publicly accessible. - `snapshot_users` (array of strings) - A list of account IDs that have access to create volumes from the snapshot(s). By default no additional users other than the @@ -291,6 +305,12 @@ each category, the available configuration keys are alphabetized. - `most_recent` (boolean) - Selects the newest created image when true. This is most useful for selecting a daily distro build. + You may set this in place of `source_ami` or in conjunction with it. If you + set this in conjunction with `source_ami`, the `source_ami` will be added to + the filter. The provided `source_ami` must meet all of the filtering criteria + provided in `source_ami_filter`; this pins the AMI returned by the filter, + but will cause Packer to fail if the `source_ami` does not exist. + - `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's @@ -298,10 +318,8 @@ each category, the available configuration keys are alphabetized. Default `false`. - `tags` (object of key/value strings) - Tags applied to the AMI. 
This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with name of the region where this - is built. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. ## Basic Example @@ -332,7 +350,7 @@ chroot by Packer: These default mounts are usually good enough for anyone and are sane defaults. However, if you want to change or add the mount points, you may using the `chroot_mounts` configuration. Here is an example configuration which only -mounts `/prod` and `/dev`: +mounts `/proc` and `/dev`: ``` json { @@ -365,6 +383,7 @@ its internals such as finding an available device. ## Gotchas +### Unmounting the Filesystem One of the difficulties with using the chroot builder is that your provisioning scripts must not leave any processes running or packer will be unable to unmount the filesystem. @@ -394,6 +413,53 @@ services: } ``` +### Using Instances with NVMe block devices. +In C5, C5d, M5, and i3.metal instances, EBS volumes are exposed as NVMe block +devices [reference](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/nvme-ebs-volumes.html). +In order to correctly mount these devices, you have to do some extra legwork, +involving the `nvme_device_path` option above. Read that for more information. + +A working example for mounting an NVMe device is below: + +``` +{ + "variables": { + "region" : "us-east-2" + }, + "builders": [ + { + "type": "amazon-chroot", + "region": "{{user `region`}}", + "source_ami_filter": { + "filters": { + "virtualization-type": "hvm", + "name": "amzn-ami-hvm-*", + "root-device-type": "ebs" + }, + "owners": ["137112412989"], + "most_recent": true + }, + "ena_support": true, + "ami_name": "amazon-chroot-test-{{timestamp}}", + "nvme_device_path": "/dev/nvme1n1p", + "device_path": "/dev/sdf" + } + ], + + "provisioners": [ + { + "type": "shell", + "inline": ["echo Test > /tmp/test.txt"] + } + ] +} +``` + +Note that in the `nvme_device_path` you must end with the `p`; if you try to +define the partition in this path (e.g. "nvme_device_path": `/dev/nvme1n1p1`) +and haven't also set the `"mount_partition": 0`, a `1` will be appended to the +`nvme_device_path` and Packer will fail. + ## Building From Scratch This example demonstrates the essentials of building an image from scratch. A @@ -423,3 +489,12 @@ provisioning commands to install the os and bootloader. ] } ``` + +## Build template data + +The available variables are: + +- `BuildRegion` - The region (for example `eu-central-1`) where Packer is building the AMI. +- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build the AMI. +- `SourceAMIName` - The source AMI Name (for example `ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to build the AMI. +- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object. diff --git a/website/source/docs/builders/amazon-ebs.html.md b/website/source/docs/builders/amazon-ebs.html.md index c11525dd0..c09591768 100644 --- a/website/source/docs/builders/amazon-ebs.html.md +++ b/website/source/docs/builders/amazon-ebs.html.md @@ -111,9 +111,8 @@ builder. - `ami_description` (string) - The description to set for the resulting AMI(s). By default this description is empty. 
This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `ami_groups` (array of strings) - A list of groups that have access to launch the resulting AMI(s). By default no groups have permission to launch @@ -151,8 +150,15 @@ builder. after all provisioners have run. For Windows instances, it is sometimes desirable to [run Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html) which will stop the instance for you. If this is set to true, Packer *will not* - stop the instance and will wait for you to stop it manually. You can do this - with a [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). + stop the instance but will assume that you will send the stop signal + yourself through your final provisioner. You can do this with a + [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). + + Note that Packer will still wait for the instance to be stopped, and failing + to send the stop signal yourself, when you have set this flag to `true`, + will cause a timeout. + + Example of a valid shutdown command: ``` json { @@ -170,6 +176,30 @@ builder. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking). Default `false`. +- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source + instance to burst additional CPU beyond its available [CPU Credits] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html) + for as long as the demand exists. + This is in contrast to the standard configuration that only allows an + instance to consume up to its available CPU Credits. + See the AWS documentation for [T2 Unlimited] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html) + and the 'T2 Unlimited Pricing' section of the [Amazon EC2 On-Demand + Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more + information. + By default this option is disabled and Packer will set up a [T2 + Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html) + instance instead. + + To use T2 Unlimited you must use a T2 instance type e.g. t2.micro. + Additionally, T2 Unlimited cannot be used in conjunction with Spot + Instances e.g. when the `spot_price` option has been configured. + Attempting to do so will cause an error. + + !> **Warning!** Additional costs may be incurred by enabling T2 + Unlimited - even for instances that would usually qualify for the + [AWS Free Tier](https://aws.amazon.com/free/). + - `force_deregister` (boolean) - Force Packer to first deregister an existing AMI if one with the same name already exists. Default `false`. @@ -189,10 +219,14 @@ builder. profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) to launch the EC2 instance with. -- `launch_block_device_mappings` (array of block device mappings) - Add one or - more block devices before the Packer build starts. These are not necessarily - preserved when booting from the AMI built with Packer. See - `ami_block_device_mappings`, above, for details. 
+- `launch_block_device_mappings` (array of block device mappings) - Add one + or more block devices before the Packer build starts. If you add instance + store volumes or EBS volumes in addition to the root device volume, the + created AMI will contain block device mapping information for those + volumes. Amazon creates snapshots of the source instance's root volume and + any other EBS volumes described here. When you launch an instance from this + new AMI, the instance automatically launches with these additional volumes, + and will restore them from snapshots taken from the source instance. - `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) code. This should probably be a user variable since it changes all the time. @@ -214,16 +248,14 @@ builder. - `run_tags` (object of key/value strings) - Tags to apply to the instance that is *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `run_volume_tags` (object of key/value strings) - Tags to apply to the volumes that are *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `security_group_id` (string) - The ID (*not* the name) of the security group to assign to the instance. By default this is not set and Packer will @@ -237,7 +269,7 @@ builder. - `temporary_security_group_source_cidr` (string) - An IPv4 CIDR block to be authorized access to the instance, when packer is creating a temporary security group. - The default is `0.0.0.0/0` (ie, allow any IPv4 source). This is only used + The default is `0.0.0.0/0` (ie, allow any IPv4 source). This is only used when `security_group_id` or `security_group_ids` is not specified. - `shutdown_behavior` (string) - Automatically terminate instances on shutdown @@ -249,7 +281,7 @@ builder. - `snapshot_groups` (array of strings) - A list of groups that have access to create volumes from the snapshot(s). By default no groups have permission to create - volumes form the snapshot(s). `all` will make the snapshot publicly accessible. + volumes from the snapshot(s). `all` will make the snapshot publicly accessible. - `snapshot_users` (array of strings) - A list of account IDs that have access to create volumes from the snapshot(s). By default no additional users other than the @@ -257,9 +289,8 @@ builder. - `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. They will override AMI tags if already applied to snapshot. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. 
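As a concrete illustration of `run_tags`, `run_volume_tags`, and `snapshot_tags` being processed by the template engine as described above, the sketch below interpolates the build template data variables documented later on this page. All IDs and names are placeholders:

``` json
{
  "type": "amazon-ebs",
  "region": "us-west-2",
  "source_ami": "ami-a2412fcd",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "packer-example {{timestamp}}",
  "run_tags": {
    "Source_AMI": "{{ .SourceAMI }}",
    "Build_Region": "{{ .BuildRegion }}"
  },
  "snapshot_tags": {
    "Source_AMI_Name": "{{ .SourceAMIName }}"
  }
}
```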
- `source_ami_filter` (object) - Filters used to populate the `source_ami` field. Example: @@ -293,6 +324,12 @@ builder. - `most_recent` (boolean) - Selects the newest created image when true. This is most useful for selecting a daily distro build. + You may set this in place of `source_ami` or in conjunction with it. If you + set this in conjunction with `source_ami`, the `source_ami` will be added to + the filter. The provided `source_ami` must meet all of the filtering criteria + provided in `source_ami_filter`; this pins the AMI returned by the filter, + but will cause Packer to fail if the `source_ami` does not exist. + - `spot_price` (string) - The maximum hourly price to pay for a spot instance to create the AMI. Spot instances are a type of instance that EC2 starts when the current spot price is less than the maximum price you specify. Spot @@ -306,6 +343,10 @@ builder. best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)` +- `spot_tags` (object of key/value strings) - Requires `spot_price` to + be set. This tells Packer to apply tags to the spot request that is + issued. + - `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's @@ -328,8 +369,19 @@ builder. in AWS with the source instance, set the `ssh_keypair_name` field to the name of the key pair. -- `ssh_private_ip` (boolean) - If true, then SSH will always use the private - IP if available. Also works for WinRM. +- `ssh_private_ip` (boolean) - No longer supported. See + [`ssh_interface`](#ssh_interface). A fixer exists to migrate. + +- `ssh_interface` (string) - One of `public_ip`, `private_ip`, + `public_dns` or `private_dns`. If set, either the public IP address, + private IP address, public DNS name or private DNS name will used as the host for SSH. + The default behaviour if inside a VPC is to use the public IP address if available, + otherwise the private IP address will be used. If not in a VPC the public DNS name + will be used. Also works for WinRM. + + Where Packer is configured for an outbound proxy but WinRM traffic should be direct, + `ssh_interface` must be set to `private_dns` and `.compute.internal` included + in the `NO_PROXY` environment variable. - `subnet_id` (string) - If using VPC, the ID of the subnet, such as `subnet-12345def`, where Packer will launch the EC2 instance. This field is @@ -337,9 +389,8 @@ builder. - `tags` (object of key/value strings) - Tags applied to the AMI and relevant snapshots. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `temporary_key_pair_name` (string) - The name of the temporary key pair to generate. By default, Packer generates a name that looks like @@ -438,6 +489,15 @@ configuration of `launch_block_device_mappings` will expand the root volume } ``` +## Build template data + +The available variables are: + +- `BuildRegion` - The region (for example `eu-central-1`) where Packer is building the AMI. +- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build the AMI. 
+- `SourceAMIName` - The source AMI Name (for example `ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to build the AMI. +- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object. + ## Tag Example Here is an example using the optional AMI tags. This will add the tags @@ -457,7 +517,9 @@ images exist when this template is run: "ami_name": "packer-quick-start {{timestamp}}", "tags": { "OS_Version": "Ubuntu", - "Release": "Latest" + "Release": "Latest", + "Base_AMI_Name": "{{ .SourceAMIName }}", + "Extra": "{{ .SourceAMITags.TagName }}" } } ``` diff --git a/website/source/docs/builders/amazon-ebssurrogate.html.md b/website/source/docs/builders/amazon-ebssurrogate.html.md index c044fc350..881cce45e 100644 --- a/website/source/docs/builders/amazon-ebssurrogate.html.md +++ b/website/source/docs/builders/amazon-ebssurrogate.html.md @@ -55,9 +55,9 @@ builder. the root device of the AMI. This looks like the mappings in `ami_block_device_mapping`, except with an additional field: -- `source_device_name` (string) - The device name of the block device on the - source instance to be used as the root device for the AMI. This must correspond - to a block device in `launch_block_device_mapping`. + - `source_device_name` (string) - The device name of the block device on the + source instance to be used as the root device for the AMI. This must correspond + to a block device in `launch_block_device_mapping`. ### Optional: @@ -104,9 +104,8 @@ builder. - `ami_description` (string) - The description to set for the resulting AMI(s). By default this description is empty. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `ami_groups` (array of strings) - A list of groups that have access to launch the resulting AMI(s). By default no groups have permission to launch @@ -144,8 +143,15 @@ builder. after all provisioners have run. For Windows instances, it is sometimes desirable to [run Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html) which will stop the instance for you. If this is set to true, Packer *will not* - stop the instance and will wait for you to stop it manually. You can do this - with a [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). + stop the instance but will assume that you will send the stop signal + yourself through your final provisioner. You can do this with a + [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). + + Note that Packer will still wait for the instance to be stopped, and failing + to send the stop signal yourself, when you have set this flag to `true`, + will cause a timeout. + + Example of a valid shutdown command: ``` json { @@ -163,6 +169,30 @@ builder. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking). Default `false`. 
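To show how `source_device_name` (described earlier for this builder) lines up with a launch block device, here is a hedged fragment. It assumes the root-device block is named `ami_root_device` and uses placeholder device names and sizes; check the full option list on this page before relying on it:

``` json
{
  "ami_root_device": {
    "source_device_name": "/dev/xvdf",
    "device_name": "/dev/xvda",
    "delete_on_termination": true,
    "volume_size": 16,
    "volume_type": "gp2"
  },
  "launch_block_device_mappings": [
    {
      "device_name": "/dev/xvdf",
      "delete_on_termination": true,
      "volume_size": 16,
      "volume_type": "gp2"
    }
  ]
}
```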
+- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source + instance to burst additional CPU beyond its available [CPU Credits] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html) + for as long as the demand exists. + This is in contrast to the standard configuration that only allows an + instance to consume up to its available CPU Credits. + See the AWS documentation for [T2 Unlimited] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html) + and the 'T2 Unlimited Pricing' section of the [Amazon EC2 On-Demand + Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more + information. + By default this option is disabled and Packer will set up a [T2 + Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html) + instance instead. + + To use T2 Unlimited you must use a T2 instance type e.g. t2.micro. + Additionally, T2 Unlimited cannot be used in conjunction with Spot + Instances e.g. when the `spot_price` option has been configured. + Attempting to do so will cause an error. + + !> **Warning!** Additional costs may be incurred by enabling T2 + Unlimited - even for instances that would usually qualify for the + [AWS Free Tier](https://aws.amazon.com/free/). + - `force_deregister` (boolean) - Force Packer to first deregister an existing AMI if one with the same name already exists. Default `false`. @@ -182,10 +212,14 @@ builder. profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) to launch the EC2 instance with. -- `launch_block_device_mappings` (array of block device mappings) - Add one or - more block devices before the packer build starts. These are not necessarily - preserved when booting from the AMI built with packer. See - `ami_block_device_mappings`, above, for details. +- `launch_block_device_mappings` (array of block device mappings) - Add one + or more block devices before the Packer build starts. If you add instance + store volumes or EBS volumes in addition to the root device volume, the + created AMI will contain block device mapping information for those + volumes. Amazon creates snapshots of the source instance's root volume and + any other EBS volumes described here. When you launch an instance from this + new AMI, the instance automatically launches with these additional volumes, + and will restore them from snapshots taken from the source instance. - `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) code. This should probably be a user variable since it changes all the time. @@ -207,16 +241,14 @@ builder. - `run_tags` (object of key/value strings) - Tags to apply to the instance that is *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `run_volume_tags` (object of key/value strings) - Tags to apply to the volumes that are *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. 
This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `security_group_id` (string) - The ID (*not* the name) of the security group to assign to the instance. By default this is not set and Packer will @@ -242,7 +274,7 @@ builder. - `snapshot_groups` (array of strings) - A list of groups that have access to create volumes from the snapshot(s). By default no groups have permission to create - volumes form the snapshot(s). `all` will make the snapshot publicly accessible. + volumes from the snapshot(s). `all` will make the snapshot publicly accessible. - `snapshot_users` (array of strings) - A list of account IDs that have access to create volumes from the snapshot(s). By default no additional users other than the @@ -250,9 +282,8 @@ builder. - `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. They will override AMI tags if already applied to snapshot. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `source_ami_filter` (object) - Filters used to populate the `source_ami` field. Example: @@ -286,6 +317,12 @@ builder. - `most_recent` (boolean) - Selects the newest created image when true. This is most useful for selecting a daily distro build. + You may set this in place of `source_ami` or in conjunction with it. If you + set this in conjunction with `source_ami`, the `source_ami` will be added to + the filter. The provided `source_ami` must meet all of the filtering criteria + provided in `source_ami_filter`; this pins the AMI returned by the filter, + but will cause Packer to fail if the `source_ami` does not exist. + - `spot_price` (string) - The maximum hourly price to pay for a spot instance to create the AMI. Spot instances are a type of instance that EC2 starts when the current spot price is less than the maximum price you specify. Spot @@ -299,6 +336,10 @@ builder. best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)` +- `spot_tags` (object of key/value strings) - Requires `spot_price` to + be set. This tells Packer to apply tags to the spot request that is + issued. + - `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's @@ -321,8 +362,19 @@ builder. in AWS with the source instance, set the `ssh_keypair_name` field to the name of the key pair. -- `ssh_private_ip` (boolean) - If true, then SSH will always use the private - IP if available. +- `ssh_private_ip` (boolean) - No longer supported. See + [`ssh_interface`](#ssh_interface). A fixer exists to migrate. + +- `ssh_interface` (string) - One of `public_ip`, `private_ip`, + `public_dns` or `private_dns`. If set, either the public IP address, + private IP address, public DNS name or private DNS name will used as the host for SSH. 
+ The default behaviour if inside a VPC is to use the public IP address if available, + otherwise the private IP address will be used. If not in a VPC the public DNS name + will be used. Also works for WinRM. + + Where Packer is configured for an outbound proxy but WinRM traffic should be direct, + `ssh_interface` must be set to `private_dns` and `.compute.internal` included + in the `NO_PROXY` environment variable. - `subnet_id` (string) - If using VPC, the ID of the subnet, such as `subnet-12345def`, where Packer will launch the EC2 instance. This field is @@ -330,9 +382,8 @@ builder. - `tags` (object of key/value strings) - Tags applied to the AMI and relevant snapshots. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `temporary_key_pair_name` (string) - The name of the temporary keypair to generate. By default, Packer generates a name with a UUID. @@ -402,6 +453,16 @@ with the `-debug` flag. In debug mode, the Amazon builder will save the private key in the current directory and will output the DNS or IP information as well. You can use this information to access the instance as it is running. +## Build template data + +The available variables are: + +- `BuildRegion` - The region (for example `eu-central-1`) where Packer is building the AMI. +- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build the AMI. +- `SourceAMIName` - The source AMI Name (for example `ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to build the AMI. +- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object. + + -> **Note:** Packer uses pre-built AMIs as the source for building images. These source AMIs may include volumes that are not flagged to be destroyed on termination of the instance building the new image. In addition to those volumes diff --git a/website/source/docs/builders/amazon-ebsvolume.html.md b/website/source/docs/builders/amazon-ebsvolume.html.md index 125064626..a3ff74785 100644 --- a/website/source/docs/builders/amazon-ebsvolume.html.md +++ b/website/source/docs/builders/amazon-ebsvolume.html.md @@ -64,30 +64,41 @@ builder. - `device_name` (string) - The device name exposed to the instance (for example, `/dev/sdh` or `xvdh`). Required when specifying `volume_size`. + - `delete_on_termination` (boolean) - Indicates whether the EBS volume is - deleted on instance termination + deleted on instance termination. + - `encrypted` (boolean) - Indicates whether to encrypt the volume or not + + - `kms_key_id` (string) - The ARN for the KMS encryption key. When + specifying `kms_key_id`, `encrypted` needs to be set to `true`. + - `iops` (number) - The number of I/O operations per second (IOPS) that the volume supports. See the documentation on [IOPs](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html) for more information + - `no_device` (boolean) - Suppresses the specified device included in the block device mapping of the AMI + - `snapshot_id` (string) - The ID of the snapshot + - `virtual_name` (string) - The virtual device name. See the documentation on [Block Device Mapping](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html) for more information + - `volume_size` (number) - The size of the volume, in GiB. 
Required if not specifying a `snapshot_id` + - `volume_type` (string) - The volume type. `gp2` for General Purpose (SSD) volumes, `io1` for Provisioned IOPS (SSD) volumes, and `standard` for Magnetic volumes + - `tags` (map) - Tags to apply to the volume. These are retained after the - builder completes. This is a \[template engine\] - (/docs/templates/engine.html) where the `SourceAMI` - variable is replaced with the source AMI ID and `BuildRegion` variable - is replaced with the value of `region`. + builder completes. This is a + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `associate_public_ip_address` (boolean) - If using a non-default VPC, public IP addresses are not provided by default. If this is toggled, your new @@ -100,6 +111,27 @@ builder. provider whose API is compatible with aws EC2. Specify another endpoint like this `https://ec2.custom.endpoint.com`. +- `disable_stop_instance` (boolean) - Packer normally stops the build instance + after all provisioners have run. For Windows instances, it is sometimes + desirable to [run Sysprep](http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ami-create-standard.html) + which will stop the instance for you. If this is set to true, Packer *will not* + stop the instance but will assume that you will send the stop signal + yourself through your final provisioner. You can do this with a + [windows-shell provisioner](https://www.packer.io/docs/provisioners/windows-shell.html). + + Note that Packer will still wait for the instance to be stopped, and failing + to send the stop signal yourself, when you have set this flag to `true`, + will cause a timeout. + + Example of a valid shutdown command: + + ``` json + { + "type": "windows-shell", + "inline": ["\"c:\\Program Files\\Amazon\\Ec2ConfigService\\ec2config.exe\" -sysprep"] + } + ``` + - `ebs_optimized` (boolean) - Mark instance as [EBS Optimized](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSOptimized.html). Default `false`. @@ -109,6 +141,30 @@ builder. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking). Default `false`. +- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source + instance to burst additional CPU beyond its available [CPU Credits] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html) + for as long as the demand exists. + This is in contrast to the standard configuration that only allows an + instance to consume up to its available CPU Credits. + See the AWS documentation for [T2 Unlimited] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html) + and the 'T2 Unlimited Pricing' section of the [Amazon EC2 On-Demand + Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more + information. + By default this option is disabled and Packer will set up a [T2 + Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html) + instance instead. + + To use T2 Unlimited you must use a T2 instance type e.g. t2.micro. + Additionally, T2 Unlimited cannot be used in conjunction with Spot + Instances e.g. when the `spot_price` option has been configured. + Attempting to do so will cause an error. 
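A minimal sketch of enabling T2 Unlimited as described above follows; the AMI ID, region, and `ssh_username` are placeholders, and the option behaves the same way in the other Amazon builders:

``` json
{
  "type": "amazon-ebsvolume",
  "region": "us-east-1",
  "source_ami": "ami-a2412fcd",
  "instance_type": "t2.micro",
  "enable_t2_unlimited": true,
  "ssh_username": "ubuntu"
}
```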
+ + !> **Warning!** Additional costs may be incurred by enabling T2 + Unlimited - even for instances that would usually qualify for the + [AWS Free Tier](https://aws.amazon.com/free/). + - `iam_instance_profile` (string) - The name of an [IAM instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) to launch the EC2 instance with. @@ -121,21 +177,11 @@ builder. profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) for more details. -- `region_kms_key_ids` (map of strings) - a map of regions to copy the ami to, - along with the custom kms key id to use for encryption for that region. - Keys must match the regions provided in `ami_regions`. If you just want to - encrypt using a default ID, you can stick with `kms_key_id` and `ami_regions`. - If you want a region to be encrypted with that region's default key ID, you can - use an empty string `""` instead of a key id in this map. (e.g. `"us-east-1": ""`) - However, you cannot use default key IDs if you are using this in conjunction with - `snapshot_users` -- in that situation you must use custom keys. - - `run_tags` (object of key/value strings) - Tags to apply to the instance that is *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `security_group_id` (string) - The ID (*not* the name) of the security group to assign to the instance. By default this is not set and Packer will @@ -161,7 +207,7 @@ builder. - `snapshot_groups` (array of strings) - A list of groups that have access to create volumes from the snapshot(s). By default no groups have permission to create - volumes form the snapshot(s). `all` will make the snapshot publicly accessible. + volumes from the snapshot(s). `all` will make the snapshot publicly accessible. - `snapshot_users` (array of strings) - A list of account IDs that have access to create volumes from the snapshot(s). By default no additional users other than the @@ -199,6 +245,12 @@ builder. - `most_recent` (boolean) - Selects the newest created image when true. This is most useful for selecting a daily distro build. + You may set this in place of `source_ami` or in conjunction with it. If you + set this in conjunction with `source_ami`, the `source_ami` will be added to + the filter. The provided `source_ami` must meet all of the filtering criteria + provided in `source_ami_filter`; this pins the AMI returned by the filter, + but will cause Packer to fail if the `source_ami` does not exist. + - `spot_price` (string) - The maximum hourly price to pay for a spot instance to create the AMI. Spot instances are a type of instance that EC2 starts when the current spot price is less than the maximum price you specify. Spot @@ -212,6 +264,10 @@ builder. best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)` or `Windows (Amazon VPC)` +- `spot_tags` (object of key/value strings) - Requires `spot_price` to + be set. This tells Packer to apply tags to the spot request that is + issued. 
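To illustrate `spot_tags` together with `spot_price` as described above, here is a hedged sketch. Because the price is set to `auto`, `spot_price_auto_product` is also supplied; every other value is a placeholder:

``` json
{
  "type": "amazon-ebsvolume",
  "region": "us-east-1",
  "source_ami": "ami-a2412fcd",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "spot_price": "auto",
  "spot_price_auto_product": "Linux/UNIX",
  "spot_tags": {
    "Name": "packer-spot-builder"
  }
}
```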
+ - `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's @@ -225,8 +281,19 @@ builder. [`ssh_private_key_file`](/docs/templates/communicator.html#ssh_private_key_file) must be specified with this. -- `ssh_private_ip` (boolean) - If `true`, then SSH will always use the private - IP if available. Also works for WinRM. +- `ssh_private_ip` (boolean) - No longer supported. See + [`ssh_interface`](#ssh_interface). A fixer exists to migrate. + +- `ssh_interface` (string) - One of `public_ip`, `private_ip`, + `public_dns` or `private_dns`. If set, either the public IP address, + private IP address, public DNS name or private DNS name will used as the host for SSH. + The default behaviour if inside a VPC is to use the public IP address if available, + otherwise the private IP address will be used. If not in a VPC the public DNS name + will be used. Also works for WinRM. + + Where Packer is configured for an outbound proxy but WinRM traffic should be direct, + `ssh_interface` must be set to `private_dns` and `.compute.internal` included + in the `NO_PROXY` environment variable. - `subnet_id` (string) - If using VPC, the ID of the subnet, such as `subnet-12345def`, where Packer will launch the EC2 instance. This field is @@ -318,6 +385,15 @@ with the `-debug` flag. In debug mode, the Amazon builder will save the private key in the current directory and will output the DNS or IP information as well. You can use this information to access the instance as it is running. +## Build template data + +The available variables are: + +- `BuildRegion` - The region (for example `eu-central-1`) where Packer is building the AMI. +- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build the AMI. +- `SourceAMIName` - The source AMI Name (for example `ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to build the AMI. +- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object. + -> **Note:** Packer uses pre-built AMIs as the source for building images. These source AMIs may include volumes that are not flagged to be destroyed on termination of the instance building the new image. In addition to those volumes diff --git a/website/source/docs/builders/amazon-instance.html.md b/website/source/docs/builders/amazon-instance.html.md index 95a1bc007..00700dfb5 100644 --- a/website/source/docs/builders/amazon-instance.html.md +++ b/website/source/docs/builders/amazon-instance.html.md @@ -133,9 +133,8 @@ builder. - `ami_description` (string) - The description to set for the resulting AMI(s). By default this description is empty. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `ami_groups` (array of strings) - A list of groups that have access to launch the resulting AMI(s). By default no groups have permission to launch @@ -194,6 +193,30 @@ builder. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's documentation on enabling enhanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking.html#enabling_enhanced_networking). 
Default `false`. +- `enable_t2_unlimited` (boolean) - Enabling T2 Unlimited allows the source + instance to burst additional CPU beyond its available [CPU Credits] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-credits-baseline-concepts.html) + for as long as the demand exists. + This is in contrast to the standard configuration that only allows an + instance to consume up to its available CPU Credits. + See the AWS documentation for [T2 Unlimited] + (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-unlimited.html) + and the 'T2 Unlimited Pricing' section of the [Amazon EC2 On-Demand + Pricing](https://aws.amazon.com/ec2/pricing/on-demand/) document for more + information. + By default this option is disabled and Packer will set up a [T2 + Standard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-std.html) + instance instead. + + To use T2 Unlimited you must use a T2 instance type e.g. t2.micro. + Additionally, T2 Unlimited cannot be used in conjunction with Spot + Instances e.g. when the `spot_price` option has been configured. + Attempting to do so will cause an error. + + !> **Warning!** Additional costs may be incurred by enabling T2 + Unlimited - even for instances that would usually qualify for the + [AWS Free Tier](https://aws.amazon.com/free/). + - `force_deregister` (boolean) - Force Packer to first deregister an existing AMI if one with the same name already exists. Defaults to `false`. @@ -204,10 +227,14 @@ builder. profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html) to launch the EC2 instance with. -- `launch_block_device_mappings` (array of block device mappings) - Add one or - more block devices before the Packer build starts. These are not necessarily - preserved when booting from the AMI built with Packer. See - `ami_block_device_mappings`, above, for details. +- `launch_block_device_mappings` (array of block device mappings) - Add one + or more block devices before the Packer build starts. If you add instance + store volumes or EBS volumes in addition to the root device volume, the + created AMI will contain block device mapping information for those + volumes. Amazon creates snapshots of the source instance's root volume and + any other EBS volumes described here. When you launch an instance from this + new AMI, the instance automatically launches with these additional volumes, + and will restore them from snapshots taken from the source instance. - `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) code. This should probably be a user variable since it changes all the time. @@ -229,9 +256,8 @@ builder. - `run_tags` (object of key/value strings) - Tags to apply to the instance that is *launched* to create the AMI. These tags are *not* applied to the resulting AMI unless they're duplicated in `tags`. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `security_group_id` (string) - The ID (*not* the name) of the security group to assign to the instance. By default this is not set and Packer will @@ -291,6 +317,12 @@ builder. - `most_recent` (boolean) - Selects the newest created image when true. This is most useful for selecting a daily distro build. 
+ You may set this in place of `source_ami` or in conjunction with it. If you + set this in conjunction with `source_ami`, the `source_ami` will be added to + the filter. The provided `source_ami` must meet all of the filtering criteria + provided in `source_ami_filter`; this pins the AMI returned by the filter, + but will cause Packer to fail if the `source_ami` does not exist. + - `snapshot_tags` (object of key/value strings) - Tags to apply to snapshot. They will override AMI tags if already applied to snapshot. @@ -307,6 +339,10 @@ builder. best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`, `Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)` +- `spot_tags` (object of key/value strings) - Requires `spot_price` to + be set. This tells Packer to apply tags to the spot request that is + issued. + - `sriov_support` (boolean) - Enable enhanced networking (SriovNetSupport but not ENA) on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS IAM policy. Note: you must make sure enhanced networking is enabled on your instance. See [Amazon's @@ -329,17 +365,27 @@ builder. in AWS with the source instance, set the `ssh_keypair_name` field to the name of the key pair. -- `ssh_private_ip` (boolean) - If true, then SSH will always use the private - IP if available. Also works for WinRM. +- `ssh_private_ip` (boolean) - No longer supported. See + [`ssh_interface`](#ssh_interface). A fixer exists to migrate. + +- `ssh_interface` (string) - One of `public_ip`, `private_ip`, + `public_dns` or `private_dns`. If set, either the public IP address, + private IP address, public DNS name or private DNS name will used as the host for SSH. + The default behaviour if inside a VPC is to use the public IP address if available, + otherwise the private IP address will be used. If not in a VPC the public DNS name + will be used. Also works for WinRM. + + Where Packer is configured for an outbound proxy but WinRM traffic should be direct, + `ssh_interface` must be set to `private_dns` and `.compute.internal` included + in the `NO_PROXY` environment variable. - `subnet_id` (string) - If using VPC, the ID of the subnet, such as `subnet-12345def`, where Packer will launch the EC2 instance. This field is required if you are using an non-default VPC. - `tags` (object of key/value strings) - Tags applied to the AMI. This is a - [template engine](/docs/templates/engine.html) - where the `SourceAMI` variable is replaced with the source AMI ID and - `BuildRegion` variable is replaced with the value of `region`. + [template engine](/docs/templates/engine.html), + see [Build template data](#build-template-data) for more information. - `temporary_key_pair_name` (string) - The name of the temporary key pair to generate. By default, Packer generates a name that looks like @@ -401,6 +447,15 @@ with the `-debug` flag. In debug mode, the Amazon builder will save the private key in the current directory and will output the DNS or IP information as well. You can use this information to access the instance as it is running. +## Build template data + +The available variables are: + +- `BuildRegion` - The region (for example `eu-central-1`) where Packer is building the AMI. +- `SourceAMI` - The source AMI ID (for example `ami-a2412fcd`) used to build the AMI. +- `SourceAMIName` - The source AMI Name (for example `ubuntu/images/ebs-ssd/ubuntu-xenial-16.04-amd64-server-20180306`) used to build the AMI. 
+- `SourceAMITags` - The source AMI Tags, as a `map[string]string` object. + ## Custom Bundle Commands A lot of the process required for creating an instance-store backed AMI involves @@ -470,3 +525,28 @@ parameters they're used to satisfy the `ec2-upload-bundle` command. Additionally, `{{.Token}}` is available when overriding this command. You must create your own bundle command with the addition of `-t {{.Token}} ` if you are assuming a role. + +#### Bundle Upload Permissions + +The `ec2-upload-bundle` requires a policy document that looks something like this: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:PutObject", + "s3:GetObject", + "s3:ListBucket", + "s3:GetBucketLocation", + "s3:PutObjectAcl" + ], + "Resource": "*" + } + ] +} +``` + +You may wish to constrain the resource to a specific bucket. diff --git a/website/source/docs/builders/amazon.html.md b/website/source/docs/builders/amazon.html.md index 8873dfcbe..b1119f607 100644 --- a/website/source/docs/builders/amazon.html.md +++ b/website/source/docs/builders/amazon.html.md @@ -49,49 +49,90 @@ filesystem and data. -## Specifying Amazon Credentials +## Authentication -When you use any of the amazon builders, you must provide credentials to the API -in the form of an access key id and secret. These look like: +The AWS provider offers a flexible means of providing credentials for +authentication. The following methods are supported, in this order, and +explained below: - access key id: AKIAIOSFODNN7EXAMPLE - secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY +- Static credentials +- Environment variables +- Shared credentials file +- EC2 Role -If you use other AWS tools you may already have these configured. If so, packer -will try to use them, *unless* they are specified in your packer template. -Credentials are resolved in the following order: +### Static Credentials -1. Values hard-coded in the packer template are always authoritative. -2. *Variables* in the packer template may be resolved from command-line flags - or from environment variables. Please read about [User - Variables](https://www.packer.io/docs/templates/user-variables.html) - for details. -3. If no credentials are found, packer falls back to automatic lookup. +Static credentials can be provided in the form of an access key id and secret. +These look like: -### Automatic Lookup +```json +{ + "access_key": "AKIAIOSFODNN7EXAMPLE", + "secret_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", + "region": "us-east-1", + "type": "amazon-ebs" +} +``` -Packer depends on the [AWS -SDK](https://aws.amazon.com/documentation/sdk-for-go/) to perform automatic -lookup using *credential chains*. In short, the SDK looks for credentials in -the following order: +### Environment variables -1. Environment variables. -2. Shared credentials file. -3. If your application is running on an Amazon EC2 instance, IAM role for Amazon EC2. +You can provide your credentials via the `AWS_ACCESS_KEY_ID` and +`AWS_SECRET_ACCESS_KEY`, environment variables, representing your AWS Access +Key and AWS Secret Key, respectively. Note that setting your AWS credentials +using either these environment variables will override the use of +`AWS_SHARED_CREDENTIALS_FILE` and `AWS_PROFILE`. 
The `AWS_DEFAULT_REGION` and +`AWS_SESSION_TOKEN` environment variables are also used, if applicable: -Please refer to the SDK's documentation on [specifying -credentials](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials) -for more information. -## Using an IAM Task or Instance Role +Usage: -If AWS keys are not specified in the template, a -[shared credentials file](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files) -or through environment variables Packer will use credentials provided by -the task's or instance's IAM role, if it has one. +``` +$ export AWS_ACCESS_KEY_ID="anaccesskey" +$ export AWS_SECRET_ACCESS_KEY="asecretkey" +$ export AWS_DEFAULT_REGION="us-west-2" +$ packer build packer.json +``` -The following policy document provides the minimal set permissions necessary for -Packer to work: +### Shared Credentials file + +You can use an AWS credentials file to specify your credentials. The default +location is $HOME/.aws/credentials on Linux and OS X, or +"%USERPROFILE%.aws\credentials" for Windows users. If we fail to detect +credentials inline, or in the environment, Packer will check this location. You +can optionally specify a different location in the configuration by setting the +environment with the `AWS_SHARED_CREDENTIALS_FILE` variable. + +The format for the credentials file is like so + +``` +[default] +aws_access_key_id= +aws_secret_access_key= +``` + +You may also configure the profile to use by setting the `profile` +configuration option, or setting the `AWS_PROFILE` environment variable: + +```json +{ + "profile": "customprofile", + "region": "us-east-1", + "type": "amazon-ebs" +} +``` + + +### IAM Task or Instance Role + +Finally, Packer will use credentials provided by the task's or instance's IAM +role, if it has one. + +This is a preferred approach over any other when running in EC2 as you can +avoid hard coding credentials. Instead these are leased on-the-fly by Packer, +which reduces the chance of leakage. + +The following policy document provides the minimal set permissions necessary +for Packer to work: ``` json { @@ -108,7 +149,7 @@ Packer to work: "ec2:CreateSnapshot", "ec2:CreateTags", "ec2:CreateVolume", - "ec2:DeleteKeypair", + "ec2:DeleteKeyPair", "ec2:DeleteSecurityGroup", "ec2:DeleteSnapshot", "ec2:DeleteVolume", @@ -137,6 +178,20 @@ Packer to work: } ``` +Note that if you'd like to create a spot instance, you must also add: + +``` json +ec2:RequestSpotInstances, +ec2:CancelSpotInstanceRequests, +ec2:DescribeSpotInstanceRequests +``` + +If you have the `spot_price` parameter set to `auto`, you must also add: + +``` json +ec2:DescribeSpotPriceHistory +``` + ## Troubleshooting ### Attaching IAM Policies to Roles @@ -176,3 +231,20 @@ If you suspect your system's date is wrong, you can compare it against . On Linux/OS X, you can run the `date` command to get the current time. If you're on Linux, you can try setting the time with ntp by running `sudo ntpd -q`. + +### `exceeded wait attempts` while waiting for tasks to complete +We use the AWS SDK's built-in waiters to wait for longer-running tasks to +complete. These waiters have default delays between queries and maximum number +of queries that don't always work for our users. 
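+
+For example, a minimal sketch (the values shown are purely illustrative) of
+overriding these defaults with the two environment variables described below:
+
+```
+# Illustrative values only; see the variable descriptions below.
+$ export AWS_MAX_ATTEMPTS=400
+$ export AWS_POLL_DELAY_SECONDS=10
+$ packer build template.json
+```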
+
+If you find that you are being rate-limited or have exceeded your max wait
+attempts, you can override the defaults by setting the following Packer
+environment variables (note that these will apply to all AWS tasks that we have
+to wait for):
+
+`AWS_MAX_ATTEMPTS` - This is how many times to re-send a status update request.
+Except for tasks that we know can take an extremely long time, this defaults to
+40 tries.
+
+`AWS_POLL_DELAY_SECONDS` - How many seconds to wait in between status update
+requests. Generally defaults to 2 or 5 seconds, depending on the task.
\ No newline at end of file
diff --git a/website/source/docs/builders/azure-setup.html.md b/website/source/docs/builders/azure-setup.html.md
index 90c1a6e73..fe57033c4 100644
--- a/website/source/docs/builders/azure-setup.html.md
+++ b/website/source/docs/builders/azure-setup.html.md
@@ -17,8 +17,6 @@ In order to build VMs in Azure Packer needs 6 configuration options to be specif
 
 - `client_secret` - service principal secret / password
 
-- `object_id` - service principal object id (OSType = Windows Only)
-
 - `resource_group_name` - name of the resource group where your VHD(s) will be stored
 
 - `storage_account` - name of the storage account where your VHD(s) will be stored
@@ -31,8 +29,7 @@ In order to get all of the items above, you will need a username and password fo
 Device login is an alternative way to authorize in Azure Packer. Device login only requires you to know your
 Subscription ID. (Device login is only supported for Linux based VMs.) Device login is intended for those who are first
-time users, and just want to ''kick the tires.'' We recommend the SPN approach if you intend to automate Packer, or for
-deploying Windows VMs.
+time users, and just want to ''kick the tires.'' We recommend the SPN approach if you intend to automate Packer.
 
 > Device login is for **interactive** builds, and SPN is **automated** builds.
 
@@ -44,12 +41,13 @@ There are three pieces of information you must provide to enable device login mo
 
 > Device login mode is enabled by not setting client\_id and client\_secret.
 
-> Device login mode is for the public cloud only, and Linux VMs only.
+> Device login mode is for the Public and US Gov clouds only.
 
 The device login flow asks that you open a web browser, navigate to https://aka.ms/devicelogin, and input the supplied code. This authorizes the Packer for
 Azure application to act on your behalf. An OAuth token will be created, and stored in the user's home directory
 (~/.azure/packer/oauth-TenantID.json). This token is used if the token file exists, and it
-is refreshed as necessary. The token file prevents the need to continually execute the device login flow.
+is refreshed as necessary. The token file prevents the need to continually execute the device login flow. Packer will ask
+for two device logins: one for the service management endpoint and another for accessing the temporary KeyVault secrets that it creates.
 
 ## Install the Azure CLI
 
@@ -63,6 +61,14 @@ If you already have node.js installed you can use `npm` to install `azure-cli`:
 
 $ npm install -g azure-cli --no-progress
 ```
 
+You can also use the Python-based Azure CLI in Docker. It also comes with `jq` pre-installed:
+
+```shell
+$ docker run -it azuresdk/azure-cli-python
+```
+
+As there are differences between the node.js client and the Python client, we've included commands for the Python client underneath each node.js command.
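+
+If you are unsure which client you are running, a quick check (assuming the
+client is already on your `PATH`) is to print its version; the node.js client
+is invoked as `azure`, while the Python client is invoked as `az`:
+
+```shell
+# node.js (classic) CLI
+$ azure --version
+
+# Python CLI (also available inside the azuresdk/azure-cli-python container)
+$ az --version
+```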
+ ## Guided Setup The Packer project includes a [setup script](https://github.com/hashicorp/packer/blob/master/contrib/azure-setup.sh) that can help you setup your account. It uses an interactive bash script to log you into Azure, name your resources, and export your Packer configuration. @@ -80,6 +86,32 @@ $ azure config mode arm $ azure login -u USERNAME ``` +If you're using the Python client: + +```shell +$ az login +# To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code CODE_PROVIDED to authenticate +``` + +Once you've completed logging in, you should get a JSON array like the one below: + +```shell +[ + { + "cloudName": "AzureCloud", + "id": "$uuid", + "isDefault": false, + "name": "Pay-As-You-Go", + "state": "Enabled", + "tenantId": "$tenant_uuid", + "user": { + "name": "my_email@anywhere.com", + "type": "user" + } + } +] + +``` Get your account information ``` shell @@ -88,6 +120,11 @@ $ azure account set ACCOUNTNAME $ azure account show --json | jq -r ".[] | .id" ``` +Python: +``` shell +$ az account set "$(az account list | jq -r '.[].name')" +``` + -> Throughout this document when you see a command pipe to `jq` you may instead omit `--json` and everything after it, but the output will be more verbose. For example you can simply run `azure account list` instead. This will print out one line that look like this: @@ -107,6 +144,12 @@ $ azure location list $ azure group create -n GROUPNAME -l LOCATION ``` +Python: + +```shell +$ az group create -n GROUPNAME -l LOCATION +``` + Your storage account (below) will need to use the same `GROUPNAME` and `LOCATION`. ### Create a Storage Account @@ -121,9 +164,15 @@ $ azure storage account create \ --kind storage STORAGENAME ``` --> `LRS` is meant as a literal "LRS" and not as a variable. +Python: -Make sure that `GROUPNAME` and `LOCATION` are the same as above. +```shell +$ az storage account create -n STORAGENAME -g GROUPNAME -l LOCATION --sku Standard_LRS +``` + +-> `LRS` and `Standard_LRS` are meant as literal "LRS" or "Standard_LRS" and not as variables. + +Make sure that `GROUPNAME` and `LOCATION` are the same as above. Also, ensure that `GROUPNAME` is less than 24 characters long and contains only lowercase letters and numbers. ### Create an Application @@ -137,6 +186,12 @@ $ azure ad app create \ -p PASSWORD ``` +Python: + +```shell +az ad app create --display-name APPNAME --identifier-uris APPURL --homepage APPURL --password PASSWORD +``` + Password is your `client_secret` and can be anything you like. I recommend using `openssl rand -base64 24`. ### Create a Service Principal @@ -153,6 +208,13 @@ $ azure ad app list --json \ $ azure ad sp create --applicationId APPID ``` +Python: + +```shell +$ id=$(az ad app list | jq -r '.[] | select(.displayName == "Packer") | .appId') +$ az ad sp create --id "$id" +``` + ### Grant Permissions to Your Application Finally, we will associate the proper permissions with our application's service principal. We're going to assign the `Owner` role to our Packer application and change the scope to manage our whole subscription. (The `Owner` role can be scoped to a specific resource group to further reduce the scope of the account.) This allows Packer to create temporary resource groups for each build. @@ -164,6 +226,13 @@ $ azure role assignment create \ -c /subscriptions/SUBSCRIPTIONID ``` +Python: + +```shell +# NOTE: Trying to assign the role to the service principal by name directly yields a HTTP 400 error. 
See: https://github.com/Azure/azure-cli/issues/4911 +$ az role assignment create --assignee "$(az ad sp list | jq -r '.[] | select(.displayName == "APPNAME") | .objectId')" --role Owner +``` + There are a lot of pre-defined roles and you can define your own with more granular permissions, though this is out of scope. You can see a list of pre-configured roles via: ``` shell @@ -175,6 +244,23 @@ $ azure role list --json \ Now (finally) everything has been setup in Azure. Let's get our configuration keys together: +Python: + +```shell +$ cat < { +> "subscription_id": $(az account show | jq '.id'), +> "client_id": $(az ad app list | jq '.[] | select(.displayName == "Packer") | .appId'), +> "client_secret": "$password", +> "location": "$location", +> "tenant_id": $(az account show | jq '.tenantId') +> "object_id": $(az ad app list | jq '.[] | select(.displayName == "Packer") | .objectId') +> } +> EOF +``` + +node.js: + Get `subscription_id`: ``` shell @@ -189,6 +275,7 @@ $ azure ad app list --json \ | jq '.[] | select(.displayName | contains("APPNAME")) | .appId' ``` + Get `client_secret` This cannot be retrieved. If you forgot this, you will have to delete and re-create your service principal and the associated permissions. diff --git a/website/source/docs/builders/azure.html.md b/website/source/docs/builders/azure.html.md index bcb2b5d77..a4f3a7b09 100644 --- a/website/source/docs/builders/azure.html.md +++ b/website/source/docs/builders/azure.html.md @@ -29,8 +29,7 @@ builder. - `client_secret` (string) The password or secret for your service principal. -- `subscription_id` (string) Subscription under which the build will be performed. **The service principal specified in `client_id` must have full access to this subscription.** -- `capture_container_name` (string) Destination container name. Essentially the "directory" where your VHD will be organized in Azure. The captured VHD's URL will be `https://.blob.core.windows.net/system/Microsoft.Compute/Images//.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd`. +- `subscription_id` (string) Subscription under which the build will be performed. **The service principal specified in `client_id` must have full access to this subscription, unless build_resource_group_name option is specified in which case it needs to have owner access to the existing resource group specified in build_resource_group_name parameter.** - `image_publisher` (string) PublisherName for your base image. See [documentation](https://azure.microsoft.com/en-us/documentation/articles/resource-groups-vm-searching/) for details. @@ -44,10 +43,6 @@ builder. CLI example `azure vm image list-skus -l westus -p Canonical -o UbuntuServer` -- `location` (string) Azure datacenter in which your VM will build. - - CLI example `azure location list` - #### VHD or Managed Image The Azure builder can create either a VHD, or a managed image. If you @@ -55,10 +50,10 @@ are creating a VHD, you **must** start with a VHD. Likewise, if you want to create a managed image you **must** start with a managed image. When creating a VHD the following two options are required. -- `capture_container_name` (string) Destination container name. Essentially the "directory" where your VHD will be +- `capture_container_name` (string) Destination container name. Essentially the "directory" where your VHD will be organized in Azure. The captured VHD's URL will be `https://.blob.core.windows.net/system/Microsoft.Compute/Images//.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.vhd`. - -- `capture_name_prefix` (string) VHD prefix. 
The final artifacts will be named `PREFIX-osDisk.UUID` and + +- `capture_name_prefix` (string) VHD prefix. The final artifacts will be named `PREFIX-osDisk.UUID` and `PREFIX-vmTemplate.UUID`. - `resource_group_name` (string) Resource group under which the final artifact will be stored. @@ -68,15 +63,45 @@ image. When creating a VHD the following two options are required. When creating a managed image the following two options are required. - `managed_image_name` (string) Specify the managed image name where the result of the Packer build will be saved. The - image name must not exist ahead of time, and will not be overwritten. If this value is set, the value - `managed_image_resource_group_name` must also be set. See [documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images) + image name must not exist ahead of time, and will not be overwritten. If this value is set, the value + `managed_image_resource_group_name` must also be set. See [documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images) to learn more about managed images. - -- `managed_image_resource_group_name` (string) Specify the managed image resource group name where the result of the Packer build will be - saved. The resource group must already exist. If this value is set, the value `managed_image_name` must also be - set. See [documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images) to + +- `managed_image_resource_group_name` (string) Specify the managed image resource group name where the result of the Packer build will be + saved. The resource group must already exist. If this value is set, the value `managed_image_name` must also be + set. See [documentation](https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview#images) to learn more about managed images. +#### Resource Group Usage + +The Azure builder can either provision resources into a new resource group that +it controls (default) or an existing one. The advantage of using a packer +defined resource group is that failed resource cleanup is easier because you +can simply remove the entire resource group, however this means that the +provided credentials must have permission to create and remove resource groups. +By using an existing resource group you can scope the provided credentials to +just this group, however failed builds are more likely to leave unused +artifacts. + +To have packer create a resource group you **must** provide: + +- `location` (string) Azure datacenter in which your VM will build. + + CLI example `azure location list` + +and optionally: + +- `temp_resource_group_name` (string) name assigned to the temporary resource + group created during the build. If this value is not set, a random value will + be assigned. This resource group is deleted at the end of the build. + +To use an existing resource group you **must** provide: + +- `build_resource_group_name` (string) - Specify an existing resource group + to run the build in. + +Providing `temp_resource_group_name` or `location` in combination with `build_resource_group_name` is not allowed. + ### Optional: - `azure_tags` (object of name/value strings) - the user can define up to 15 tags. Tag names cannot exceed 512 @@ -86,9 +111,9 @@ When creating a managed image the following two options are required. - `cloud_environment_name` (string) One of `Public`, `China`, `Germany`, or `USGovernment`. Defaults to `Public`. 
Long forms such as `USGovernmentCloud` and `AzureUSGovernmentCloud` are also supported. - + - `custom_data_file` (string) Specify a file containing custom data to inject into the cloud-init process. The contents - of the file are read, base64 encoded, and injected into the ARM template. The custom data will be passed to + of the file are read, base64 encoded, and injected into the ARM template. The custom data will be passed to cloud-init for processing at the time of provisioning. See [documentation](http://cloudinit.readthedocs.io/en/latest/topics/examples.html) to learn more about custom data, and how it can be used to influence the provisioning process. @@ -110,30 +135,60 @@ When creating a managed image the following two options are required. - `image_url` (string) Specify a custom VHD to use. If this value is set, do not set image\_publisher, image\_offer, image\_sku, or image\_version. - + - `managed_image_storage_account_type` (string) Specify the storage account type for a managed image. Valid values are Standard_LRS and Premium\_LRS. The default is Standard\_LRS. - -- `object_id` (string) Specify an OAuth Object ID to protect WinRM certificates - created at runtime. This variable is required when creating images based on - Windows; this variable is not used by non-Windows builds. See `Windows` - behavior for `os_type`, below. - `os_disk_size_gb` (number) Specify the size of the OS disk in GB (gigabytes). Values of zero or less than zero are ignored. +- `disk_additional_size` (array of integers) - The size(s) of any additional + hard disks for the VM in gigabytes. If this is not specified then the VM + will only contain an OS disk. The number of additional disks and maximum size of a disk depends on the configuration of your VM. See [Windows](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/about-disks-and-vhds) or [Linux](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/about-disks-and-vhds) for more information. + + For VHD builds the final artifacts will be named `PREFIX-dataDisk-.UUID.vhd` and stored in the specified capture container along side the OS disk. The additional disks are included in the deployment template `PREFIX-vmTemplate.UUID`. + + For Managed build the final artifacts are included in the managed image. The additional disk will have the same storage account type as the OS disk, as specified with the `managed_image_storage_account_type` setting. + - `os_type` (string) If either `Linux` or `Windows` is specified Packer will automatically configure authentication credentials for the provisioned machine. For `Linux` this configures an SSH authorized key. For `Windows` this configures a WinRM certificate. -- `temp_compute_name` (string) temporary name assigned to the VM. If this value is not set, a random value will be - assigned. Knowing the resource group and VM name allows one to execute commands to update the VM during a Packer - build, e.g. attach a resource disk to the VM. +- `plan_info` (object) - Used for creating images from Marketplace images. Please refer to [Deploy an image with + Marketplace terms](https://aka.ms/azuremarketplaceapideployment) for more details. Not all Marketplace images + support programmatic deployment, and support is controlled by the image publisher. -- `temp_resource_group_name` (string) temporary name assigned to the resource group. If this value is not set, a random - value will be assigned. + An example plan_info object is defined below. 
+ + ```json + { + "plan_info": { + "plan_name": "rabbitmq", + "plan_product": "rabbitmq", + "plan_publisher": "bitnami" + } + } + ``` + + `plan_name` (string) - The plan name, required. + `plan_product` (string) - The plan product, required. + `plan_publisher` (string) - The plan publisher, required. + `plan_promotion_code` (string) - Some images accept a promotion code, optional. + + Images created from the Marketplace with `plan_info` **must** specify `plan_info` whenever the image is deployed. + The builder automatically adds tags to the image to ensure this information is not lost. The following tags are + added. + + 1. PlanName + 1. PlanProduct + 1. PlanPublisher + 1. PlanPromotionCode + +- `temp_compute_name` (string) temporary name assigned to the VM. If this value is not set, a random value will be + assigned. Knowing the resource group and VM name allows one to execute commands to update the VM during a Packer + build, e.g. attach a resource disk to the VM. - `tenant_id` (string) The account identifier with which your `client_id` and `subscription_id` are associated. If not specified, `tenant_id` will be looked up using `subscription_id`. @@ -160,6 +215,9 @@ When creating a managed image the following two options are required. CLI example `azure vm sizes -l westus` +- `async_resourcegroup_delete` (boolean) If you want packer to delete the temporary resource group asynchronously set this value. It's a boolean value + and defaults to false. **Important** Setting this true means that your builds are faster, however any failed deletes are not reported. + ## Basic Example Here is a basic example for Azure. @@ -200,23 +258,7 @@ Please refer to the Azure [examples](https://github.com/hashicorp/packer/tree/ma ### Windows -The following provisioner snippet shows how to sysprep a Windows VM. Deprovision should be the last operation executed by a build. - -``` json -{ - "provisioners": [ - { - "type": "powershell", - "inline": [ - "if( Test-Path $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml ){ rm $Env:SystemRoot\\windows\\system32\\Sysprep\\unattend.xml -Force}", - "& $Env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /shutdown /quiet" - ] - } - ] -} -``` - -In some circumstances the above isn't enough to reliably know that the sysprep is actually finished generalizing the image, the code below will wait for sysprep to write the image status in the registry and will exit after that. The possible states, in case you want to wait for another state, [are documented here](https://technet.microsoft.com/en-us/library/hh824815.aspx) +The following provisioner snippet shows how to sysprep a Windows VM. Deprovision should be the last operation executed by a build. The code below will wait for sysprep to write the image status in the registry and will exit after that. The possible states, in case you want to wait for another state, [are documented here](https://technet.microsoft.com/en-us/library/hh824815.aspx) ``` json { @@ -230,8 +272,6 @@ In some circumstances the above isn't enough to reliably know that the sysprep i } ] } - - ``` ### Linux @@ -270,6 +310,7 @@ The Azure builder attempts to pick default values that provide for a just works - The default user name is packer not root as in other builders. Most distros on Azure do not allow root to SSH to a VM hence the need for a non-root default user. Set the ssh\_username option to override the default value. - The default VM size is Standard\_A1. Set the vm\_size option to override the default value. 
- The default image version is latest. Set the image\_version option to override the default value. +- By default a temporary resource group will be created and destroyed as part of the build. If you do not have permissions to do so, use `build_resource_group_name` to specify an existing resource group to run the build in. ## Implementation @@ -312,9 +353,13 @@ The Azure builder creates the following random values at runtime. - Compute Name: a random 15-character name prefixed with pkrvm; the name of the VM. - Deployment Name: a random 15-character name prefixed with pkfdp; the name of the deployment. - KeyVault Name: a random 15-character name prefixed with pkrkv. +- NIC Name: a random 15-character name prefixed with pkrni. +- Public IP Name: a random 15-character name prefixed with pkrip. - OS Disk Name: a random 15-character name prefixed with pkros. - Resource Group Name: a random 33-character name prefixed with packer-Resource-Group-. +- Subnet Name: a random 15-character name prefixed with pkrsn. - SSH Key Pair: a 2,048-bit asymmetric key pair; can be overridden by the user. +- Virtual Network Name: a random 15-character name prefixed with pkrvn. The default alphabet used for random values is **0123456789bcdfghjklmnpqrstvwxyz**. The alphabet was reduced (no vowels) to prevent running afoul of Azure decency controls. @@ -344,8 +389,6 @@ A Windows build requires two templates and two deployments. Unfortunately, the K the same time hence the need for two templates and deployments. The time required to deploy a KeyVault template is minimal, so overall impact is small. -> The KeyVault certificate is protected using the object\_id of the SPN. This is why Windows builds require object\_id, -> and an SPN. The KeyVault is deleted when the resource group is deleted. See the [examples/azure](https://github.com/hashicorp/packer/tree/master/examples/azure) folder in the packer project for more examples. diff --git a/website/source/docs/builders/cloudstack.html.md b/website/source/docs/builders/cloudstack.html.md index 9268d628b..22b9f91d1 100644 --- a/website/source/docs/builders/cloudstack.html.md +++ b/website/source/docs/builders/cloudstack.html.md @@ -117,6 +117,9 @@ builder. - `instance_name` (string) - The name of the instance. Defaults to "packer-UUID" where UUID is dynamically generated. +- `prevent_firewall_changes` (boolean) - Set to `true` to prevent network ACLs + or firewall rules creation. Defaults to `false`. + - `project` (string) - The name or ID of the project to deploy the instance to. - `public_ip_address` (string) - The public IP address or it's ID used for diff --git a/website/source/docs/builders/digitalocean.html.md b/website/source/docs/builders/digitalocean.html.md index 595c1a62f..5683491df 100644 --- a/website/source/docs/builders/digitalocean.html.md +++ b/website/source/docs/builders/digitalocean.html.md @@ -73,9 +73,8 @@ builder. for the droplet being created. This defaults to `false`, or not enabled. - `snapshot_name` (string) - The name of the resulting snapshot that will - appear in your account. This must be unique. To help make this unique, use a - function like `timestamp` (see [configuration - templates](/docs/templates/engine.html) for more info) + appear in your account. Defaults to "packer-{{timestamp}}" (see + [configuration templates](/docs/templates/engine.html) for more info). - `snapshot_regions` (array of strings) - The regions of the resulting snapshot that will appear in your account. 
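+
+As a small sketch (the values are made up for illustration), the default can be
+overridden with the `timestamp` template function so that every build produces
+a uniquely named snapshot:
+
+```json
+{
+  "snapshot_name": "base-image-{{timestamp}}",
+  "snapshot_regions": ["nyc3", "sfo2"]
+}
+```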
@@ -98,7 +97,7 @@ access tokens: { "type": "digitalocean", "api_token": "YOUR API KEY", - "image": "ubuntu-14-04-x64", + "image": "ubuntu-16-04-x64", "region": "nyc3", "size": "512mb", "ssh_username": "root" diff --git a/website/source/docs/builders/docker.html.md b/website/source/docs/builders/docker.html.md index 13c43da60..b0382b421 100644 --- a/website/source/docs/builders/docker.html.md +++ b/website/source/docs/builders/docker.html.md @@ -24,9 +24,15 @@ has a simple mental model: you provision containers much the same way you provision a normal virtualized or dedicated server. For more information, read the section on [Dockerfiles](#dockerfiles). -The Docker builder must run on a machine that has Docker installed. Therefore -the builder only works on machines that support Docker. You can learn about -what [platforms Docker supports and how to install onto them](https://docs.docker.com/engine/installation/) in the Docker documentation. +The Docker builder must run on a machine that has Docker Engine installed. +Therefore the builder only works on machines that support Docker and _does not +support running on a Docker remote host_. You can learn about what +[platforms Docker supports and how to install onto them](https://docs.docker.com/engine/installation/) +in the Docker documentation. + + + Please note: Packer does not yet have support for Windows containers. + ## Basic Example: Export @@ -125,9 +131,8 @@ Configuration options are organized below into two categories: required and optional. Within each category, the available options are alphabetized and described. -In addition to the options listed here, a -[communicator](/docs/templates/communicator.html) can be configured for this -builder. +The Docker builder uses a special Docker communicator _and will not use_ the +standard [communicators](/docs/templates/communicator.html). ### Required: @@ -212,7 +217,7 @@ You must specify (only) one of `commit`, `discard`, or `export_path`. - `container_dir` (string) - The directory inside container to mount temp directory from host server for work [file provisioner](/docs/provisioners/file.html). By default this is set to `/packer-files`. - + - `fix_upload_owner` (boolean) - If true, files uploaded to the container will be owned by the user the container is running as. If false, the owner will depend on the version of docker installed in the system. Defaults to true. @@ -303,7 +308,7 @@ nearly-identical sequence definitions, as demonstrated by the example below: [ { "type": "docker-tag", - "repository": "hashicorp/packer", + "repository": "hashicorp/packer1", "tag": "0.7" }, "docker-push" @@ -311,7 +316,7 @@ nearly-identical sequence definitions, as demonstrated by the example below: [ { "type": "docker-tag", - "repository": "hashicorp/packer", + "repository": "hashicorp/packer2", "tag": "0.7" }, "docker-push" @@ -356,14 +361,14 @@ shown below: This builder allows you to build Docker images *without* Dockerfiles. -With this builder, you can repeatably create Docker images without the use of a +With this builder, you can repeatedly create Docker images without the use of a Dockerfile. You don't need to know the syntax or semantics of Dockerfiles. Instead, you can just provide shell scripts, Chef recipes, Puppet manifests, etc. to provision your Docker container just like you would a regular virtualized or dedicated machine. While Docker has many features, Packer views Docker simply as an container -runner. To that end, Packer is able to repeatably build these containers +runner. 
To that end, Packer is able to repeatedly build these containers using portable provisioning scripts. ## Overriding the host directory diff --git a/website/source/docs/builders/googlecompute.html.md b/website/source/docs/builders/googlecompute.html.md index 4ccfe4cb1..005258f12 100644 --- a/website/source/docs/builders/googlecompute.html.md +++ b/website/source/docs/builders/googlecompute.html.md @@ -70,7 +70,7 @@ straightforwarded, it is documented here. 4. Create new service account that at least has `Compute Engine Instance Admin (v1)` and `Service Account User` roles. -5. Chose `JSON` as Key type and click "Create". +5. Choose `JSON` as Key type and click "Create". A JSON file will be downloaded automatically. This is your *account file*. ### Precedence of Authentication Methods @@ -93,7 +93,9 @@ Packer looks for credentials in the following places, preferring the first locat 4. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. (Needs a correct VM authentication scope configuration, see above) -## Basic Example +## Examples + +### Basic Example Below is a fully functioning example. It doesn't do anything useful, since no provisioners or startup-script metadata are defined, but it will effectively @@ -109,6 +111,7 @@ is assumed to be the path to the file containing the JSON. "account_file": "account.json", "project_id": "my project", "source_image": "debian-7-wheezy-v20150127", + "ssh_username": "packer", "zone": "us-central1-a" } ] @@ -117,27 +120,55 @@ is assumed to be the path to the file containing the JSON. ### Windows Example -Running WinRM requires that it is opened in the firewall and that the VM enables WinRM for the -user used to connect in a startup-script. +Before you can provision using the winrm communicator, you need to navigate to +https://console.cloud.google.com/networking/firewalls/list to allow traffic +through google's firewall on the winrm port (tcp:5986). + +Once this is set up, the following is a complete working packer config after +setting a valid `account_file` and `project_id`: ``` {.json} { - "builders": [{ - "type": "googlecompute", - "account_file": "account.json", - "project_id": "my project", - "source_image": "windows-server-2016-dc-v20170227", - "disk_size": "50", - "machine_type": "n1-standard-1", - "communicator": "winrm", - "winrm_username": "packer_user", - "winrm_insecure": true, - "winrm_use_ssl": true, - "metadata": { - "windows-startup-script-cmd": "winrm quickconfig -quiet & net user /add packer_user & net localgroup administrators packer_user /add & winrm set winrm/config/service/auth @{Basic=\"true\"}" - }, - "zone": "us-central1-a" - }] + "builders": [ + { + "type": "googlecompute", + "account_file": "account.json", + "project_id": "my project", + "source_image": "windows-server-2016-dc-v20170227", + "disk_size": "50", + "machine_type": "n1-standard-1", + "communicator": "winrm", + "winrm_username": "packer_user", + "winrm_insecure": true, + "winrm_use_ssl": true, + "metadata": { + "windows-startup-script-cmd": "winrm quickconfig -quiet & net user /add packer_user & net localgroup administrators packer_user /add & winrm set winrm/config/service/auth @{Basic=\"true\"}" + }, + "zone": "us-central1-a" + } + ] +} +``` + +### Nested Hypervisor Example + +This is an example of using the `image_licenses` configuration option to create a GCE image that has nested virtualization enabled. 
See +[Enabling Nested Virtualization for VM Instances](https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances) +for details. + +``` json +{ + "builders": [ + { + "type": "googlecompute", + "account_file": "account.json", + "project_id": "my project", + "source_image_family": "centos-7", + "ssh_username": "packer", + "zone": "us-central1-a", + "image_licenses": ["projects/vm-options/global/licenses/enable-vmx"] + } + ] } ``` @@ -183,6 +214,9 @@ builder. - `address` (string) - The name of a pre-allocated static external IP address. Note, must be the name and not the actual IP address. +- `disable_default_service_account` (bool) - If true, the default service account will not be used if `service_account_email` + is not specified. Set this value to true and omit `service_account_email` to provision a VM with no service account. + - `disk_name` (string) - The name of the disk, if unset the instance name will be used. @@ -201,6 +235,8 @@ builder. - `image_labels` (object of key/value strings) - Key/value pair labels to apply to the created image. +- `image_licenses` (array of strings) - Licenses to apply to the created image. + - `image_name` (string) - The unique name of the resulting image. Defaults to `"packer-{{timestamp}}"`. @@ -234,11 +270,14 @@ builder. If preemptible is true this can only be `TERMINATE`. If preemptible is false, it defaults to `MIGRATE` -- `preemptible` (boolean) - If true, launch a preembtible instance. +- `preemptible` (boolean) - If true, launch a preemptible instance. - `region` (string) - The region in which to launch the instance. Defaults to to the region hosting the specified `zone`. +- `service_account_email` (string) - The service account to be used for launched instance. Defaults to + the project's default service account unless `disable_default_service_account` is true. + - `scopes` (array of strings) - The service account scopes for launched instance. Defaults to: diff --git a/website/source/docs/builders/hyperv-iso.html.md b/website/source/docs/builders/hyperv-iso.html.md.erb similarity index 77% rename from website/source/docs/builders/hyperv-iso.html.md rename to website/source/docs/builders/hyperv-iso.html.md.erb index b5b6a3826..437076663 100644 --- a/website/source/docs/builders/hyperv-iso.html.md +++ b/website/source/docs/builders/hyperv-iso.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | The Hyper-V Packer builder is able to create Hyper-V virtual machines and export them. @@ -11,19 +13,21 @@ sidebar_current: 'docs-builders-hyperv-iso' Type: `hyperv-iso` -The Hyper-V Packer builder is able to create [Hyper-V](https://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx) +The Hyper-V Packer builder is able to create +[Hyper-V](https://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx) virtual machines and export them, starting from an ISO image. -The builder builds a virtual machine by creating a new virtual machine -from scratch, booting it, installing an OS, provisioning software within -the OS, then shutting it down. The result of the Hyper-V builder is a directory -containing all the files necessary to run the virtual machine portably. +The builder builds a virtual machine by creating a new virtual machine from +scratch. Typically, the VM is booted, an OS is installed, and software is +provisioned within the OS. Finally the VM is shut down. 
The result of the +Hyper-V builder is a directory containing all the files necessary to run +the virtual machine portably. ## Basic Example -Here is a basic example. This example is not functional. It will start the -OS installer but then fail because we don't provide the preseed file for -Ubuntu to self-install. Still, the example serves to show the basic configuration: +Here is a basic example. This example is not functional. It will start the OS +installer but then fail because we don't provide the preseed file for Ubuntu +to self-install. Still, the example serves to show the basic configuration: ``` json { @@ -37,265 +41,262 @@ Ubuntu to self-install. Still, the example serves to show the basic configuratio } ``` -It is important to add a `shutdown_command`. By default Packer halts the -virtual machine and the file system may not be sync'd. Thus, changes made in a -provisioner might not be saved. +By default Packer will perform a hard power off of a virtual machine. +However, when a machine is powered off this way, it is possible that +changes made to the VMs file system may not be fully synced, possibly +leading to corruption of files or lost changes. As such, it is important to +add a `shutdown_command`. This tells Packer how to safely shutdown and +power off the VM. ## Configuration Reference -There are many configuration options available for the Hyper-V builder. -They are organized below into two categories: required and optional. Within -each category, the available options are alphabetized and described. +There are many configuration options available for the Hyper-V builder. They +are organized below into two categories: required and optional. Within each +category, the available options are alphabetized and described. In addition to the options listed here, a -[communicator](/docs/templates/communicator.html) -can be configured for this builder. +[communicator](/docs/templates/communicator.html) can be configured for this +builder. ### Required: -- `iso_checksum` (string) - The checksum for the OS ISO file or virtual - harddrive file. Because these files are so large, this is required and - Packer will verify it prior to booting a virtual machine with the ISO or - virtual harddrive attached. The type of the checksum is specified with - `iso_checksum_type`, documented below. +- `iso_checksum` (string) - The checksum for the ISO file or virtual + hard drive file. The algorithm to use when computing the checksum is + specified with `iso_checksum_type`. -- `iso_checksum_type` (string) - The type of the checksum specified in - `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or - "sha512" currently. While "none" will skip checksumming, this is not - recommended since ISO files and virtual harddrive files are generally large - and corruption does happen from time to time. +- `iso_checksum_type` (string) - The algorithm to be used when computing + the checksum of the file specified in `iso_checksum`. Currently, valid + values are "none", "md5", "sha1", "sha256", or "sha512". Since the + validity of ISO and virtual disk files are typically crucial to a + successful build, Packer performs a check of any supplied media by + default. While setting "none" will cause Packer to skip this check, + corruption of large files such as ISOs and virtual hard drives can + occur from time to time. As such, skipping this check is not + recommended. - `iso_url` (string) - A URL to the ISO containing the installation image or - virtual harddrive vhd or vhdx file to clone. 
This URL can be either an HTTP - URL or a file URL (or path to a file). If this is an HTTP URL, Packer will - download the file and cache it between runs. + virtual hard drive (VHD or VHDX) file to clone. This URL can be either an + HTTP URL or a file URL (or path to a file). If this is an HTTP URL, Packer + will download the file and cache it between runs. ### Optional: - `boot_command` (array of strings) - This is an array of commands to type - when the virtual machine is first booted. The goal of these commands should - be to type just enough to initialize the operating system installer. Special - keys can be typed as well, and are covered in the section below on the boot - command. If this is not specified, it is assumed the installer will start - itself. + when the virtual machine is first booted. The goal of these commands + should be to type just enough to initialize the operating system + installer. Special keys can be typed as well, and are covered in the + section below on the boot command. If this is not specified, it is assumed + the installer will start itself. - `boot_wait` (string) - The time to wait after booting the initial virtual - machine before typing the `boot_command`. The value of this should be - a duration. Examples are "5s" and "1m30s" which will cause Packer to wait - five seconds and one minute 30 seconds, respectively. If this isn't specified, - the default is 10 seconds. + machine before typing the `boot_command`. The value specified should be + a duration. For example, setting a duration of "1m30s" would cause + Packer to wait for 1 minute 30 seconds before typing the boot command. + The default duration is "10s" (10 seconds). -- `cpu` (number) - The number of cpus the virtual machine should use. If this isn't specified, - the default is 1 cpu. +- `cpu` (number) - The number of CPUs the virtual machine should use. If + this isn't specified, the default is 1 CPU. -- `disk_additional_size` (array of integers) - The size(s) of any additional - hard disks for the VM in megabytes. If this is not specified then the VM - will only contain a primary hard disk. Additional drives will be attached to the SCSI - interface only. The builder uses expandable, not fixed-size virtual hard disks, - so the actual file representing the disk will not use the full size unless it is full. +- `differencing_disk` (boolean) - If true enables differencing disks. Only + the changes will be written to the new disk. This is especially useful if + your source is a VHD/VHDX. This defaults to `false`. + +- `disk_additional_size` (array of integers) - The size or sizes of any + additional hard disks for the VM in megabytes. If this is not specified + then the VM will only contain a primary hard disk. Additional drives + will be attached to the SCSI interface only. The builder uses + expandable rather than fixed-size virtual hard disks, so the actual + file representing the disk will not use the full size unless it is + full. + +- `disk_block_size` (string) - The block size of the VHD to be created. + Recommended disk block size for Linux hyper-v guests is 1 MiB. This + defaults to "32 MiB". - `disk_size` (number) - The size, in megabytes, of the hard disk to create for the VM. By default, this is 40 GB. -- `enable_dynamic_memory` (boolean) - If true enable dynamic memory for virtual machine. - This defaults to false. +- `enable_dynamic_memory` (boolean) - If `true` enable dynamic memory for + the virtual machine. This defaults to `false`. 
-- `enable_mac_spoofing` (boolean) - If true enable mac spoofing for virtual machine. - This defaults to false. +- `enable_mac_spoofing` (boolean) - If `true` enable MAC address spoofing + for the virtual machine. This defaults to `false`. -- `enable_secure_boot` (boolean) - If true enable secure boot for virtual machine. - This defaults to false. +- `enable_secure_boot` (boolean) - If `true` enable secure boot for the + virtual machine. This defaults to `false`. See `secure_boot_template` + below for additional settings. -- `enable_virtualization_extensions` (boolean) - If true enable virtualization extensions for virtual machine. - This defaults to false. For nested virtualization you need to enable mac spoofing, disable dynamic memory - and have at least 4GB of RAM for virtual machine. - -- `floppy_files` (array of strings) - A list of files to place onto a floppy - disk that is attached when the VM is booted. This is most useful - for unattended Windows installs, which look for an `Autounattend.xml` file - on removable media. By default, no floppy will be attached. All files - listed in this setting get placed into the root directory of the floppy - and the floppy is attached as the first floppy device. Currently, no - support exists for creating sub-directories on the floppy. Wildcard - characters (`*`, `?`, and `[]`) are allowed. Directory names are also allowed, - which will add all the files found in the directory to the floppy. +- `enable_virtualization_extensions` (boolean) - If `true` enable + virtualization extensions for the virtual machine. This defaults to + `false`. For nested virtualization you need to enable MAC spoofing, + disable dynamic memory and have at least 4GB of RAM assigned to the + virtual machine. - `floppy_dirs` (array of strings) - A list of directories to place onto the floppy disk recursively. This is similar to the `floppy_files` option except that the directory structure is preserved. This is useful for when your floppy disk includes drivers or if you just want to organize it's - contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - The maximum summary size of all files in the listed directories are the - same as in `floppy_files`. + contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are + allowed. The maximum summary size of all files in the listed directories + are the same as in `floppy_files`. + +- `floppy_files` (array of strings) - A list of files to place onto a floppy + disk that is attached when the VM is booted. This is most useful for + unattended Windows installs, which look for an `Autounattend.xml` file on + removable media. By default, no floppy will be attached. All files listed + in this setting get placed into the root directory of the floppy and the + floppy is attached as the first floppy device. Currently, no support + exists for creating sub-directories on the floppy. Wildcard characters + (`*`, `?`, and `[]`) are allowed. Directory names are also allowed, which + will add all the files found in the directory to the floppy. - `generation` (number) - The Hyper-V generation for the virtual machine. By default, this is 1. Generation 2 Hyper-V virtual machines do not support floppy drives. In this scenario use `secondary_iso_images` instead. Hard - drives and dvd drives will also be scsi and not ide. + drives and DVD drives will also be SCSI and not IDE. -- `guest_additions_mode` (string) - How should guest additions be installed. 
- If value `attach` then attach iso image with by specified by `guest_additions_path`. - Otherwise guest additions is not installed. +- `guest_additions_mode` (string) - If set to `attach` then attach and + mount the ISO image specified in `guest_additions_path`. If set to + `none` then guest additions are not attached and mounted; This is the + default. -- `guest_additions_path` (string) - The path to the iso image for guest additions. +- `guest_additions_path` (string) - The path to the ISO image for guest + additions. -- `http_directory` (string) - Path to a directory to serve using an HTTP - server. The files in this directory will be available over HTTP that will - be requestable from the virtual machine. This is useful for hosting - kickstart files and so on. By default this is "", which means no HTTP - server will be started. The address and port of the HTTP server will be - available as variables in `boot_command`. This is covered in more detail - below. +- `headless` (boolean) - Packer defaults to building Hyper-V virtual + machines by launching a GUI that shows the console of the machine being + built. When this value is set to true, the machine will start without a + console. + +- `http_directory` (string) - Path to a directory to serve using Packers + inbuilt HTTP server. The files in this directory will be available + over HTTP to the virtual machine. This is useful for hosting kickstart + files and so on. By default this value is unset and the HTTP server is + not started. The address and port of the HTTP server will be available + as variables in `boot_command`. This is covered in more detail below. - `http_port_min` and `http_port_max` (number) - These are the minimum and - maximum port to use for the HTTP server started to serve the `http_directory`. - Because Packer often runs in parallel, Packer will choose a randomly available - port in this range to run the HTTP server. If you want to force the HTTP - server to be on one port, make this minimum and maximum port the same. + maximum port to use for the HTTP server started to serve the + `http_directory`. Since Packer often runs in parallel, a randomly + available port in this range will be repeatedly chosen until an + available port is found. To force the HTTP server to use a specific + port, set an identical value for `http_port_min` and `http_port_max`. By default the values are 8000 and 9000, respectively. -- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. - Packer will try these in order. If anything goes wrong attempting to download - or while downloading a single URL, it will move on to the next. All URLs - must point to the same file (same checksum). By default this is empty - and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. - -- `iso_target_extension` (string) - The extension of the iso file after +- `iso_target_extension` (string) - The extension of the ISO file after download. This defaults to "iso". -- `iso_target_path` (string) - The path where the iso should be saved after - download. By default will go in the packer cache, with a hash of the - original filename as its name. +- `iso_target_path` (string) - The path where the ISO should be saved after + download. By default the ISO will be saved in the Packer cache + directory with a hash of the original filename as its name. + +- `iso_urls` (array of strings) - Multiple URLs for the ISO to download. + Packer will try these in order. 
If anything goes wrong attempting to + download or while downloading a single URL, it will move on to the next. + All URLs must point to the same file (same checksum). By default this is + empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be + specified. + +- `mac_address` (string) - This allows a specific MAC address to be used on + the default virtual network card. The MAC address must be a string with + no delimiters, for example "0000deadbeef". - `output_directory` (string) - This is the path to the directory where the - resulting virtual machine will be created. This may be relative or absolute. - If relative, the path is relative to the working directory when `packer` - is executed. This directory must not exist or be empty prior to running the builder. - By default this is "output-BUILDNAME" where "BUILDNAME" is the name - of the build. + resulting virtual machine will be created. This may be relative or + absolute. If relative, the path is relative to the working directory when + `packer` is executed. This directory must not exist or be empty prior to + running the builder. By default this is "output-BUILDNAME" where + "BUILDNAME" is the name of the build. -- `ram_size` (number) - The size, in megabytes, of the ram to create - for the VM. By default, this is 1 GB. +- `ram_size` (number) - The amount, in megabytes, of RAM to assign to the + VM. By default, this is 1 GB. -- `secondary_iso_images` (array of strings) - A list of iso paths to attached to a - VM when it is booted. This is most useful for unattended Windows installs, which - look for an `Autounattend.xml` file on removable media. By default, no - secondary iso will be attached. +- `secondary_iso_images` (array of strings) - A list of ISO paths to + attach to a VM when it is booted. This is most useful for unattended + Windows installs, which look for an `Autounattend.xml` file on removable + media. By default, no secondary ISO will be attached. -- `shutdown_command` (string) - The command to use to gracefully shut down the machine once all - the provisioning is done. By default this is an empty string, which tells Packer to just - forcefully shut down the machine unless a shutdown command takes place inside script so this may - safely be omitted. If one or more scripts require a reboot it is suggested to leave this blank - since reboots may fail and specify the final shutdown command in your last script. +- `secure_boot_template` (string) - The secure boot template to be + configured. Valid values are "MicrosoftWindows" (Windows) or + "MicrosoftUEFICertificateAuthority" (Linux). This only takes effect if + `enable_secure_boot` is set to "true". This defaults to "MicrosoftWindows". + +- `shutdown_command` (string) - The command to use to gracefully shut down + the machine once all provisioning is complete. By default this is an + empty string, which tells Packer to just forcefully shut down the + machine. This setting can be safely omitted if for example, a shutdown + command to gracefully halt the machine is configured inside a + provisioning script. If one or more scripts require a reboot it is + suggested to leave this blank (since reboots may fail) and instead + specify the final shutdown command in your last script. - `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. - If it doesn't shut down in this time, it is an error. By default, the timeout - is "5m", or five minutes. 
+ If the machine doesn't shut down in this time it is considered an + error. By default, the time out is "5m" (five minutes). -- `skip_compaction` (boolean) - If true skip compacting the hard disk for virtual machine when - exporting. This defaults to false. +- `skip_compaction` (boolean) - If `true` skip compacting the hard disk for + the virtual machine when exporting. This defaults to `false`. -- `switch_name` (string) - The name of the switch to connect the virtual machine to. Be defaulting - this to an empty string, Packer will try to determine the switch to use by looking for - external switch that is up and running. +- `skip_export` (boolean) - If `true` Packer will skip the export of the + VM. If you are interested only in the VHD/VHDX files, you can enable + this option. This will create inline disks which improves the build + performance. There will not be any copying of source VHDs to the temp + directory. This defaults to `false`. -- `switch_vlan_id` (string) - This is the vlan of the virtual switch's network card. - By default none is set. If none is set then a vlan is not set on the switch's network card. - If this value is set it should match the vlan specified in by `vlan_id`. +- `switch_name` (string) - The name of the switch to connect the virtual + machine to. By default, leaving this value unset will cause Packer to + try and determine the switch to use by looking for an external switch + that is up and running. -* `vhd_temp_path` (string) - A separate path to be used for storing the VM's +- `switch_vlan_id` (string) - This is the VLAN of the virtual switch's + network card. By default none is set. If none is set then a VLAN is not + set on the switch's network card. If this value is set it should match + the VLAN specified in by `vlan_id`. + +- `temp_path` (string) - This is the temporary path in which Packer will + create the virtual machine. By default the value is the system `%temp%`. + +- `use_fixed_vhd_format` (boolean) - If true, creates the boot disk on the + virtual machine as a fixed VHD format disk. The default is `false`, which + creates a dynamic VHDX format disk. This option requires setting + `generation` to `1`, `skip_compaction` to `true`, and + `differencing_disk` to `false`. Additionally, any value entered for + `disk_block_size` will be ignored. The most likely use case for this + option is outputing a disk that is in the format required for upload to + Azure. + +- `vhd_temp_path` (string) - A separate path to be used for storing the VM's disk image. The purpose is to enable reading and writing to take place on different physical disks (read from VHD temp path, write to regular temp path while exporting the VM) to eliminate a single-disk bottleneck. -- `vlan_id` (string) - This is the vlan of the virtual machine's network card - for the new virtual machine. By default none is set. If none is set then - vlans are not set on the virtual machine's network card. +- `vlan_id` (string) - This is the VLAN of the virtual machine's network + card for the new virtual machine. By default none is set. If none is set + then VLANs are not set on the virtual machine's network card. -- `vm_name` (string) - This is the name of the virtual machine for the new virtual - machine, without the file extension. By default this is "packer-BUILDNAME", +- `vm_name` (string) - This is the name of the new virtual machine, + without the file extension. By default this is "packer-BUILDNAME", where "BUILDNAME" is the name of the build. 
-- `temp_path` (string) - This is the temporary path in which Packer will create the virtual - machine. Default value is system `%temp%` - ## Boot Command -The `boot_command` configuration is very important: it specifies the keys -to type when the virtual machine is first booted in order to start the -OS installer. This command is typed after `boot_wait`, which gives the -virtual machine some time to actually load the ISO. +The `boot_command` configuration is very important: it specifies the keys to +type when the virtual machine is first booted in order to start the OS +installer. This command is typed after `boot_wait`, which gives the virtual +machine some time to actually load the ISO. -As documented above, the `boot_command` is an array of strings. The -strings are all typed in sequence. It is an array only to improve readability -within the template. +As documented above, the `boot_command` is an array of strings. The strings +are all typed in sequence. It is an array only to improve readability within +the template. The boot command is "typed" character for character over the virtual keyboard -to the machine, simulating a human actually typing the keyboard. There are -a set of special keys available. If these are in your boot command, they -will be replaced by the proper key: +to the machine, simulating a human actually typing the keyboard. -- `` - Backspace +<%= partial "partials/builders/boot-command" %> -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the ctrl key. - -- `` `` - Simulates pressing and holding the shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, -otherwise they will be held down until the machine reboots. Use lowercase -characters as well inside modifiers. For example: to simulate ctrl+c use -`c`. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). -The available variables are: - -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will - be blank! - -Example boot command. 
This is actually a working boot command used to start -an Ubuntu 12.04 installer: +The example shown below is a working boot command used to start an Ubuntu +12.04 installer: ``` json [ @@ -316,23 +317,27 @@ For more examples of various boot commands, see the sample projects from our ## Integration Services -Packer will automatically attach the integration services iso as a dvd drive +Packer will automatically attach the integration services ISO as a DVD drive for the version of Hyper-V that is running. ## Generation 1 vs Generation 2 -Floppy drives are no longer supported by generation 2 machines. This requires you to -take another approach when dealing with preseed or answer files. Two possible options -are using virtual dvd drives or using the built in web server. +Floppy drives are no longer supported by generation 2 machines. This requires +you to take another approach when dealing with preseed or answer files. Two +possible options are using virtual DVD drives or using Packers built in web +server. -When dealing with Windows you need to enable UEFI drives for generation 2 virtual machines. +When dealing with Windows you need to enable UEFI drives for generation 2 +virtual machines. -## Creating iso from directory +## Creating an ISO From a Directory -Programs like mkisofs can be used to create an iso from a directory. -There is a [windows version of mkisofs](http://opensourcepack.blogspot.co.uk/p/cdrtools.html). +Programs like mkisofs can be used to create an ISO from a directory. There is +a [windows version of +mkisofs](http://opensourcepack.blogspot.co.uk/p/cdrtools.html) available. -Example powershell script. This is an actually working powershell script used to create a Windows answer iso: +Below is a working PowerShell script that can be used to create a Windows +answer ISO: ``` powershell $isoFolder = "answer-iso" @@ -646,7 +651,7 @@ Finish Setup cache proxy during installation --> cmd.exe /c winrm set winrm/config @{MaxTimeoutms="1800000"} - Win RM MaxTimoutms + Win RM MaxTimeoutms 5 true @@ -831,7 +836,7 @@ Finish Setup cache proxy during installation --> sysprep-unattend.xml: -``` text +``` xml @@ -895,8 +900,8 @@ Finish proxy after sysprep --> ## Example For Ubuntu Vivid Generation 2 -If you are running Windows under virtualization, you may need to create -a virtual switch with an `External` connection type. +If you are running Windows under virtualization, you may need to create a +virtual switch with an `External` connection type. ### Packer config: @@ -945,7 +950,7 @@ a virtual switch with an `External` connection type. "generation": 2, "enable_secure_boot": false } -] + ] } ``` diff --git a/website/source/docs/builders/hyperv-vmcx.html.md b/website/source/docs/builders/hyperv-vmcx.html.md.erb similarity index 73% rename from website/source/docs/builders/hyperv-vmcx.html.md rename to website/source/docs/builders/hyperv-vmcx.html.md.erb index d3236a4fe..e8ffe9071 100644 --- a/website/source/docs/builders/hyperv-vmcx.html.md +++ b/website/source/docs/builders/hyperv-vmcx.html.md.erb @@ -1,33 +1,36 @@ --- +modeline: | + vim: set ft=pandoc: description: |- The Hyper-V Packer builder is able to clone an existing Hyper-V virtual machine and export them. 
layout: "docs" sidebar_current: 'docs-builders-hyperv-vmcx' -page_title: "Hyper-V Builder (from an vmcx)" +page_title: "Hyper-V Builder (from a vmcx)" --- # Hyper-V Builder (from a vmcx) Type: `hyperv-vmcx` -The Hyper-V Packer builder is able to use exported virtual machines or clone existing +The Hyper-V Packer builder is able to use exported virtual machines or clone +existing [Hyper-V](https://www.microsoft.com/en-us/server-cloud/solutions/virtualization.aspx) virtual machines. -The builder imports a virtual machine or clones an existing virtual machine boots it, -and provisioning software within the OS, then shutting it down. The result of the -Hyper-V builder is a directory containing all the files necessary to run the virtual -machine portably. +Typically, the builder imports or clones an existing virtual machine, +boots it, provisions software within the OS, and then shuts it down. The +result of the Hyper-V builder is a directory containing all the files +necessary to run the virtual machine portably. ## Basic Example -Here is a basic example. This example is not functional. It will start the -OS installer but then fail because we don't provide the preseed file for -Ubuntu to self-install. Still, the example serves to show the basic configuration: +Here is a basic example. This example is not functional. It will start the OS +installer but then fail because we don't provide the preseed file for Ubuntu +to self-install. Still, the example serves to show the basic configuration: Import from folder: -```json +``` json { "type": "hyperv-vmcx", "clone_from_vmxc_path": "c:\\virtual machines\\ubuntu-12.04.5-server-amd64", @@ -39,7 +42,7 @@ Import from folder: Clone from existing virtual machine: -```json +``` json { "clone_from_vm_name": "ubuntu-12.04.5-server-amd64", "shutdown_command": "echo 'packer' | sudo -S shutdown -P now", @@ -49,19 +52,22 @@ Clone from existing virtual machine: } ``` -It is important to add a `shutdown_command`. By default Packer halts the -virtual machine and the file system may not be sync'd. Thus, changes made in a -provisioner might not be saved. +By default Packer will perform a hard power off of a virtual machine. +However, when a machine is powered off this way, it is possible that +changes made to the VMs file system may not be fully synced, possibly +leading to corruption of files or lost changes. As such, it is important to +add a `shutdown_command`. This tells Packer how to safely shutdown and +power off the VM. ## Configuration Reference -There are many configuration options available for the Hyper-V builder. -They are organized below into two categories: required and optional. Within -each category, the available options are alphabetized and described. +There are many configuration options available for the Hyper-V builder. They +are organized below into two categories: required and optional. Within each +category, the available options are alphabetized and described. In addition to the options listed here, a -[communicator](/docs/templates/communicator.html) -can be configured for this builder. +[communicator](/docs/templates/communicator.html) can be configured for this +builder. ### Required for virtual machine import: @@ -75,110 +81,126 @@ can be configured for this builder. ### Optional: -- `clone_from_snapshot_name` (string) - The name of the snapshot - -- `clone_all_snapshots` (boolean) - Should all snapshots be cloned when the - machine is cloned. 
- - `boot_command` (array of strings) - This is an array of commands to type - when the virtual machine is first booted. The goal of these commands should - be to type just enough to initialize the operating system installer. - Special keys can be typed as well, and are covered in the section below on - the boot command. If this is not specified, it is assumed the installer - will start itself. + when the virtual machine is first booted. The goal of these commands + should be to type just enough to initialize the operating system + installer. Special keys can be typed as well, and are covered in the + section below on the boot command. If this is not specified, it is assumed + the installer will start itself. - `boot_wait` (string) - The time to wait after booting the initial virtual - machine before typing the `boot_command`. The value of this should be - a duration. Examples are "5s" and "1m30s" which will cause Packer to wait - five seconds and one minute 30 seconds, respectively. If this isn't - specified, the default is 10 seconds. + machine before typing the `boot_command`. The value specified should be + a duration. For example, setting a duration of "1m30s" would cause + Packer to wait for 1 minute 30 seconds before typing the boot command. + The default duration is "10s" (10 seconds). -- `cpu` (number) - The number of cpus the virtual machine should use. If - this isn't specified, the default is 1 cpu. +- `clone_all_snapshots` (boolean) - If set to `true` all snapshots will be + cloned when the machine is cloned. -- `enable_dynamic_memory` (boolean) - If true enable dynamic memory for virtual - machine. This defaults to false. +- `clone_from_snapshot_name` (string) - The name of the snapshot to clone + from. -- `enable_mac_spoofing` (boolean) - If true enable mac spoofing for virtual - machine. This defaults to false. +- `cpu` (number) - The number of CPUs the virtual machine should use. If + this isn't specified, the default is 1 CPU. -- `enable_secure_boot` (boolean) - If true enable secure boot for virtual - machine. This defaults to false. +- `enable_dynamic_memory` (boolean) - If `true` enable dynamic memory for + the virtual machine. This defaults to `false`. -- `enable_virtualization_extensions` (boolean) - If true enable virtualization - extensions for virtual machine. This defaults to false. For nested - virtualization you need to enable mac spoofing, disable dynamic memory and - have at least 4GB of RAM for virtual machine. +- `enable_mac_spoofing` (boolean) - If `true` enable MAC address spoofing + for the virtual machine. This defaults to `false`. + +- `enable_secure_boot` (boolean) - If `true` enable secure boot for the + virtual machine. This defaults to `false`. See `secure_boot_template` + below for additional settings. + +- `enable_virtualization_extensions` (boolean) - If `true` enable + virtualization extensions for the virtual machine. This defaults to + `false`. For nested virtualization you need to enable MAC spoofing, + disable dynamic memory and have at least 4GB of RAM assigned to the + virtual machine. + +- `floppy_dirs` (array of strings) - A list of directories to place onto + the floppy disk recursively. This is similar to the `floppy_files` option + except that the directory structure is preserved. This is useful for when + your floppy disk includes drivers or if you just want to organize it's + contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are + allowed. 
The maximum combined size of all files in the listed directories + is the same as in `floppy_files`.
- `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on removable media. By default, no floppy will be attached. All files listed in this setting get placed into the root directory of the floppy and the - floppy is attached as the first floppy device. Currently, no support exists - for creating sub-directories on the floppy. Wildcard characters (*, ?, and - []) are allowed. Directory names are also allowed, which will add all the - files found in the directory to the floppy. + floppy is attached as the first floppy device. Currently, no support + exists for creating sub-directories on the floppy. Wildcard characters + (`*`, `?`, and `[]`) are allowed. Directory names are also allowed, which + will add all the files found in the directory to the floppy.
-- `floppy_dirs` (array of strings) - A list of directories to place onto the - floppy disk recursively. This is similar to the `floppy_files` option - except that the directory structure is preserved. This is useful for when - your floppy disk includes drivers or if you just want to organize it's - contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - The maximum summary size of all files in the listed directories are the - same as in `floppy_files`.
+- `guest_additions_mode` (string) - If set to `attach` then attach and + mount the ISO image specified in `guest_additions_path`. If set to + `none` then guest additions are not attached or mounted; this is the + default.
-- `guest_additions_mode` (string) - How should guest additions be installed. - If value `attach` then attach iso image with by specified by - `guest_additions_path`. Otherwise guest additions is not installed. -
-- `guest_additions_path` (string) - The path to the iso image for guest +- `guest_additions_path` (string) - The path to the ISO image for guest additions.
-- `http_directory` (string) - Path to a directory to serve using an HTTP - server. The files in this directory will be available over HTTP that will - be requestable from the virtual machine. This is useful for hosting - kickstart files and so on. By default this is "", which means no HTTP - server will be started. The address and port of the HTTP server will be - available as variables in `boot_command`. This is covered in more detail - below. +
+- `headless` (boolean) - Packer defaults to building Hyper-V virtual + machines by launching a GUI that shows the console of the machine being + built. When this value is set to `true`, the machine will start without a + console. +
+- `http_directory` (string) - Path to a directory to serve using Packer's + built-in HTTP server. The files in this directory will be available + over HTTP to the virtual machine. This is useful for hosting kickstart + files and so on. By default this value is unset and the HTTP server is + not started. The address and port of the HTTP server will be available + as variables in `boot_command`. This is covered in more detail below.
- `http_port_min` and `http_port_max` (number) - These are the minimum and maximum port to use for the HTTP server started to serve the - `http_directory`. Because Packer often runs in parallel, Packer will choose - a randomly available port in this range to run the HTTP server.
If you want - to force the HTTP server to be on one port, make this minimum and maximum - port the same. By default the values are 8000 and 9000, respectively. + `http_directory`. Since Packer often runs in parallel, a randomly + available port in this range will be repeatedly chosen until an + available port is found. To force the HTTP server to use a specific + port, set an identical value for `http_port_min` and `http_port_max`. + By default the values are 8000 and 9000, respectively. -- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO - files are so large, this is required and Packer will verify it prior to - booting a virtual machine with the ISO attached. The type of the checksum - is specified with `iso_checksum_type`, documented below. +- `iso_checksum_type` (string) - The algorithm to be used when computing + the checksum of the file specified in `iso_checksum`. Currently, valid + values are "none", "md5", "sha1", "sha256", or "sha512". Since the + validity of ISO and virtual disk files are typically crucial to a + successful build, Packer performs a check of any supplied media by + default. While setting "none" will cause Packer to skip this check, + corruption of large files such as ISOs and virtual hard drives can + occur from time to time. As such, skipping this check is not + recommended. -- `iso_checksum_type` (string) - The type of the checksum specified in - `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or - "sha512" currently. While "none" will skip checksumming, this is not - recommended since ISO files are generally large and corruption does happen - from time to time. +- `iso_checksum` (string) - The checksum for the ISO file or virtual + hard drive file. The algorithm to use when computing the checksum is + specified with `iso_checksum_type`. + +- `iso_target_extension` (string) - The extension of the ISO file after + download. This defaults to "iso". + +- `iso_target_path` (string) - The path where the ISO should be saved after + download. By default the ISO will be saved in the Packer cache + directory with a hash of the original filename as its name. - `iso_url` (string) - A URL to the ISO or VHD containing the installation - image. This URL can be either an HTTP URL or a file URL (or path to - a file). If this is an HTTP URL, Packer will download iso and cache it + image. This URL can be either an HTTP URL or a file URL (or path to a + file). If this is an HTTP URL, Packer will download iso and cache it between runs. - `iso_urls` (array of strings) - Multiple URLs for the ISO or VHD to - download. Packer will try these in order. If anything goes wrong attempting - to download or while downloading a single URL, it will move on to the next. - All URLs must point to the same file (same checksum). By default this is - empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be - specified. + download. Packer will try these in order. If anything goes wrong + attempting to download or while downloading a single URL, it will move on + to the next. All URLs must point to the same file (same checksum). By + default this is empty and `iso_url` is used. Only one of `iso_url` or + `iso_urls` can be specified. -- `iso_target_extension` (string) - The extension of the iso file after - download. This defaults to "iso". - -- `iso_target_path` (string) - The path where the iso should be saved after - download. By default will go in the packer cache, with a hash of the - original filename as its name. 
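Putting the ISO-related options above together, a hypothetical fragment that lists mirror URLs along with a checksum might look like the following (the URLs, checksum, and cache path are placeholders; remember that `iso_url` and `iso_urls` are mutually exclusive):

``` json
{
  "iso_urls": [
    "./local-mirror/ubuntu-12.04.5-server-amd64.iso",
    "http://releases.ubuntu.com/12.04/ubuntu-12.04.5-server-amd64.iso"
  ],
  "iso_checksum_type": "sha1",
  "iso_checksum": "<sha1 of the ISO>",
  "iso_target_path": "./iso-cache/ubuntu-12.04.5-server-amd64.iso"
}
```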
+- `mac_address` (string) - This allows a specific MAC address to be used on + the default virtual network card. The MAC address must be a string with + no delimiters, for example "0000deadbeef". - `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or @@ -187,123 +209,80 @@ can be configured for this builder. running the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name of the build. -- `ram_size` (number) - The size, in megabytes, of the ram to create for the +- `ram_size` (number) - The amount, in megabytes, of RAM to assign to the VM. By default, this is 1 GB. -* `secondary_iso_images` (array of strings) - A list of iso paths to attached - to a VM when it is booted. This is most useful for unattended Windows - installs, which look for an `Autounattend.xml` file on removable media. By - default, no secondary iso will be attached. +- `secondary_iso_images` (array of strings) - A list of ISO paths to + attach to a VM when it is booted. This is most useful for unattended + Windows installs, which look for an `Autounattend.xml` file on removable + media. By default, no secondary ISO will be attached. + +- `secure_boot_template` (string) - The secure boot template to be + configured. Valid values are "MicrosoftWindows" (Windows) or + "MicrosoftUEFICertificateAuthority" (Linux). This only takes effect if + `enable_secure_boot` is set to "true". This defaults to "MicrosoftWindows". - `shutdown_command` (string) - The command to use to gracefully shut down - the machine once all the provisioning is done. By default this is an empty - string, which tells Packer to just forcefully shut down the machine unless - a shutdown command takes place inside script so this may safely be omitted. - If one or more scripts require a reboot it is suggested to leave this blank - since reboots may fail and specify the final shutdown command in your last - script. + the machine once all provisioning is complete. By default this is an + empty string, which tells Packer to just forcefully shut down the + machine. This setting can be safely omitted if for example, a shutdown + command to gracefully halt the machine is configured inside a + provisioning script. If one or more scripts require a reboot it is + suggested to leave this blank (since reboots may fail) and instead + specify the final shutdown command in your last script. - `shutdown_timeout` (string) - The amount of time to wait after executing - the `shutdown_command` for the virtual machine to actually shut down. If it - doesn't shut down in this time, it is an error. By default, the timeout is - "5m", or five minutes. + the `shutdown_command` for the virtual machine to actually shut down. + If the machine doesn't shut down in this time it is considered an + error. By default, the time out is "5m" (five minutes). -- `skip_compaction` (boolean) - If true skip compacting the hard disk for - virtual machine when exporting. This defaults to false. +- `skip_compaction` (boolean) - If `true` skip compacting the hard disk for + the virtual machine when exporting. This defaults to `false`. + +- `skip_export` (boolean) - If `true` Packer will skip the export of the + VM. If you are interested only in the VHD/VHDX files, you can enable + this option. This will create inline disks which improves the build + performance. There will not be any copying of source VHDs to the temp + directory. This defaults to `false`. 
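For example, a Linux guest cloned from a generation 2 virtual machine could enable secure boot with the UEFI certificate authority template, pin a MAC address, and skip the final export, roughly as follows (a sketch only; the remaining required settings are omitted):

``` json
{
  "enable_secure_boot": true,
  "secure_boot_template": "MicrosoftUEFICertificateAuthority",
  "mac_address": "0000deadbeef",
  "skip_export": true
}
```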
- `switch_name` (string) - The name of the switch to connect the virtual - machine to. Be defaulting this to an empty string, Packer will try to - determine the switch to use by looking for external switch that is up and - running. + machine to. By default, leaving this value unset will cause Packer to + try and determine the switch to use by looking for an external switch + that is up and running. -- `switch_vlan_id` (string) - This is the vlan of the virtual switch's - network card. By default none is set. If none is set then a vlan is not set - on the switch's network card. If this value is set it should match the vlan - specified in by `vlan_id`. +- `switch_vlan_id` (string) - This is the VLAN of the virtual switch's + network card. By default none is set. If none is set then a VLAN is not + set on the switch's network card. If this value is set it should match + the VLAN specified in by `vlan_id`. -- `vlan_id` (string) - This is the vlan of the virtual machine's network card - for the new virtual machine. By default none is set. If none is set then - vlans are not set on the virtual machine's network card. +- `vlan_id` (string) - This is the VLAN of the virtual machine's network + card for the new virtual machine. By default none is set. If none is set + then VLANs are not set on the virtual machine's network card. -- `vm_name` (string) - This is the name of the virtual machine for the new - virtual machine, without the file extension. By default this is - "packer-BUILDNAME", where "BUILDNAME" is the name of the build. +- `vm_name` (string) - This is the name of the new virtual machine, + without the file extension. By default this is "packer-BUILDNAME", + where "BUILDNAME" is the name of the build. ## Boot Command -The `boot_command` configuration is very important: it specifies the keys -to type when the virtual machine is first booted in order to start the -OS installer. This command is typed after `boot_wait`, which gives the -virtual machine some time to actually load the ISO. +The `boot_command` configuration is very important: it specifies the keys to +type when the virtual machine is first booted in order to start the OS +installer. This command is typed after `boot_wait`, which gives the virtual +machine some time to actually load the ISO. -As documented above, the `boot_command` is an array of strings. The -strings are all typed in sequence. It is an array only to improve readability -within the template. +As documented above, the `boot_command` is an array of strings. The strings +are all typed in sequence. It is an array only to improve readability within +the template. The boot command is "typed" character for character over the virtual keyboard -to the machine, simulating a human actually typing the keyboard. There are -a set of special keys available. If these are in your boot command, they -will be replaced by the proper key: +to the machine, simulating a human actually typing the keyboard. -- `` - Backspace +<%= partial "partials/builders/boot-command" %> -- `` - Delete +The example shown below is a working boot command used to start an Ubuntu +12.04 installer: -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. 
- -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the ctrl key. - -- `` `` - Simulates pressing and holding the shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, otherwise they will be held down until the machine reboots. Use lowercase characters as well inside modifiers. For example: to simulate ctrl+c use `c`. - -In addition to the special keys, each command to type is treated as a -[configuration template](/docs/templates/configuration-templates.html). -The available variables are: - -* `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will - be blank! - -Example boot command. This is actually a working boot command used to start -an Ubuntu 12.04 installer: - -```text +``` json [ "", "/install/vmlinuz noapic ", @@ -317,27 +296,34 @@ an Ubuntu 12.04 installer: ] ``` +For more examples of various boot commands, see the sample projects from our +[community templates page](/community-tools.html#templates). + ## Integration Services -Packer will automatically attach the integration services iso as a dvd drive +Packer will automatically attach the integration services ISO as a DVD drive for the version of Hyper-V that is running. ## Generation 1 vs Generation 2 -Floppy drives are no longer supported by generation 2 machines. This requires you to -take another approach when dealing with preseed or answer files. Two possible options -are using virtual dvd drives or using the built in web server. +Floppy drives are no longer supported by generation 2 machines. This requires +you to take another approach when dealing with preseed or answer files. Two +possible options are using virtual DVD drives or using Packers built in web +server. -When dealing with Windows you need to enable UEFI drives for generation 2 virtual machines. +When dealing with Windows you need to enable UEFI drives for generation 2 +virtual machines. -## Creating iso from directory +## Creating an ISO From a Directory -Programs like mkisofs can be used to create an iso from a directory. -There is a [windows version of mkisofs](http://opensourcepack.blogspot.co.uk/p/cdrtools.html). +Programs like mkisofs can be used to create an ISO from a directory. There is +a [windows version of +mkisofs](http://opensourcepack.blogspot.co.uk/p/cdrtools.html) available. -Example powershell script. 
This is an actually working powershell script used to create a Windows answer iso: +Below is a working PowerShell script that can be used to create a Windows +answer ISO: -```text +``` powershell $isoFolder = "answer-iso" if (test-path $isoFolder){ remove-item $isoFolder -Force -Recurse @@ -357,7 +343,7 @@ copy windows\common\win-updates.ps1 $isoFolder\ copy windows\common\run-sysprep.ps1 $isoFolder\ copy windows\common\run-sysprep.cmd $isoFolder\ -$textFile = "$isoFolder\Autounattend.xml" +$textFile = "$isoFolder\Autounattend.xml" $c = Get-Content -Encoding UTF8 $textFile @@ -371,54 +357,56 @@ if (test-path $isoFolder){ } ``` - ## Example For Windows Server 2012 R2 Generation 2 Packer config: -```javascript +``` json { "builders": [ - { - "vm_name":"windows2012r2", - "type": "hyperv-iso", - "disk_size": 61440, - "floppy_files": [], - "secondary_iso_images": [ - "./windows/windows-2012R2-serverdatacenter-amd64/answer.iso" - ], - "http_directory": "./windows/common/http/", - "boot_wait": "0s", - "boot_command": [ - "aaa" - ], - "iso_url": "http://download.microsoft.com/download/6/2/A/62A76ABB-9990-4EFC-A4FE-C7D698DAEB96/9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVER_EVAL_EN-US-IRM_SSS_X64FREE_EN-US_DV5.ISO", - "iso_checksum_type": "md5", - "iso_checksum": "458ff91f8abc21b75cb544744bf92e6a", - "communicator":"winrm", - "winrm_username": "vagrant", - "winrm_password": "vagrant", - "winrm_timeout" : "4h", - "shutdown_command": "f:\\run-sysprep.cmd", - "ram_size": 4096, - "cpu": 4, - "generation": 2, - "switch_name":"LAN", - "enable_secure_boot":true - }], - "provisioners": [{ - "type": "powershell", - "elevated_user":"vagrant", - "elevated_password":"vagrant", - "scripts": [ - "./windows/common/install-7zip.ps1", - "./windows/common/install-chef.ps1", - "./windows/common/compile-dotnet-assemblies.ps1", - "./windows/common/cleanup.ps1", - "./windows/common/ultradefrag.ps1", - "./windows/common/sdelete.ps1" - ] - }], + { + "vm_name":"windows2012r2", + "type": "hyperv-iso", + "disk_size": 61440, + "floppy_files": [], + "secondary_iso_images": [ + "./windows/windows-2012R2-serverdatacenter-amd64/answer.iso" + ], + "http_directory": "./windows/common/http/", + "boot_wait": "0s", + "boot_command": [ + "aaa" + ], + "iso_url": "http://download.microsoft.com/download/6/2/A/62A76ABB-9990-4EFC-A4FE-C7D698DAEB96/9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVER_EVAL_EN-US-IRM_SSS_X64FREE_EN-US_DV5.ISO", + "iso_checksum_type": "md5", + "iso_checksum": "458ff91f8abc21b75cb544744bf92e6a", + "communicator":"winrm", + "winrm_username": "vagrant", + "winrm_password": "vagrant", + "winrm_timeout" : "4h", + "shutdown_command": "f:\\run-sysprep.cmd", + "ram_size": 4096, + "cpu": 4, + "generation": 2, + "switch_name":"LAN", + "enable_secure_boot":true + } + ], + "provisioners": [ + { + "type": "powershell", + "elevated_user":"vagrant", + "elevated_password":"vagrant", + "scripts": [ + "./windows/common/install-7zip.ps1", + "./windows/common/install-chef.ps1", + "./windows/common/compile-dotnet-assemblies.ps1", + "./windows/common/cleanup.ps1", + "./windows/common/ultradefrag.ps1", + "./windows/common/sdelete.ps1" + ] + } + ], "post-processors": [ { "type": "vagrant", @@ -431,7 +419,7 @@ Packer config: autounattend.xml: -```xml +``` xml @@ -514,10 +502,10 @@ autounattend.xml: 3 128 MSR - + 4 - true + true Primary @@ -609,8 +597,8 @@ autounattend.xml: 0 true cache-proxy:3142 - -Finish Setup cache proxy during installation --> + +Finish Setup cache proxy during installation --> @@ -647,7 +635,7 @@ Finish Setup cache 
proxy during installation --> cmd.exe /c winrm set winrm/config @{MaxTimeoutms="1800000"} - Win RM MaxTimoutms + Win RM MaxTimeoutms 5 true @@ -828,12 +816,11 @@ Finish Setup cache proxy during installation --> - ``` sysprep-unattend.xml: -```text +``` xml @@ -842,13 +829,13 @@ sysprep-unattend.xml: - +Finish proxy after sysprep --> 0809:00000809 en-GB @@ -897,12 +884,12 @@ Finish proxy after sysprep --> ## Example For Ubuntu Vivid Generation 2 -If you are running Windows under virtualization, you may need to create -a virtual switch with an `External` connection type. +If you are running Windows under virtualization, you may need to create a +virtual switch with an `External` connection type. ### Packer config: -```javascript +``` json { "variables": { "vm_name": "ubuntu-xenial", @@ -914,45 +901,46 @@ a virtual switch with an `External` connection type. "iso_checksum": "DE5EE8665048F009577763EFBF4A6F0558833E59" }, "builders": [ - { - "vm_name":"{{user `vm_name`}}", - "type": "hyperv-iso", - "disk_size": "{{user `disk_size`}}", - "guest_additions_mode": "disable", - "iso_url": "{{user `iso_url`}}", - "iso_checksum_type": "{{user `iso_checksum_type`}}", - "iso_checksum": "{{user `iso_checksum`}}", - "communicator":"ssh", - "ssh_username": "packer", - "ssh_password": "packer", - "ssh_timeout" : "4h", - "http_directory": "./", - "boot_wait": "5s", - "boot_command": [ - "", - "set gfxpayload=1024x768", - "linux /install/vmlinuz ", - "preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/hyperv-taliesins.cfg ", - "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ", - "hostname={{.Name}} ", - "fb=false debconf/frontend=noninteractive ", - "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ", - "keyboard-configuration/variant=USA console-setup/ask_detect=false ", - "initrd /install/initrd.gz", - "boot" - ], - "shutdown_command": "echo 'packer' | sudo -S -E shutdown -P now", - "ram_size": "{{user `ram_size`}}", - "cpu": "{{user `cpu`}}", - "generation": 2, - "enable_secure_boot": false - }] + { + "vm_name":"{{user `vm_name`}}", + "type": "hyperv-iso", + "disk_size": "{{user `disk_size`}}", + "guest_additions_mode": "disable", + "iso_url": "{{user `iso_url`}}", + "iso_checksum_type": "{{user `iso_checksum_type`}}", + "iso_checksum": "{{user `iso_checksum`}}", + "communicator":"ssh", + "ssh_username": "packer", + "ssh_password": "packer", + "ssh_timeout" : "4h", + "http_directory": "./", + "boot_wait": "5s", + "boot_command": [ + "", + "set gfxpayload=1024x768", + "linux /install/vmlinuz ", + "preseed/url=http://{{.HTTPIP}}:{{.HTTPPort}}/hyperv-taliesins.cfg ", + "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ", + "hostname={{.Name}} ", + "fb=false debconf/frontend=noninteractive ", + "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ", + "keyboard-configuration/variant=USA console-setup/ask_detect=false ", + "initrd /install/initrd.gz", + "boot" + ], + "shutdown_command": "echo 'packer' | sudo -S -E shutdown -P now", + "ram_size": "{{user `ram_size`}}", + "cpu": "{{user `cpu`}}", + "generation": 2, + "enable_secure_boot": false + } + ] } ``` ### preseed.cfg: -```text +``` text ## Options to set on the command line d-i debian-installer/locale string en_US.utf8 d-i console-setup/ask_detect boolean false diff --git a/website/source/docs/builders/hyperv.html.md b/website/source/docs/builders/hyperv.html.md index 57f2015a9..58358f1e9 100644 --- a/website/source/docs/builders/hyperv.html.md +++ 
b/website/source/docs/builders/hyperv.html.md @@ -18,6 +18,6 @@ virtual machines and export them. an image. This is best for people who want to start from scratch. - [hyperv-vmcx](/docs/builders/hyperv-vmcx.html) - Clones an - an existing virtual machine, provisions software within the OS, - then exports that machine to create an image. This is best for + an existing virtual machine, provisions software within the OS, + then exports that machine to create an image. This is best for people who have existing base images and want to customize them. \ No newline at end of file diff --git a/website/source/docs/builders/lxd.html.md b/website/source/docs/builders/lxd.html.md index 4915e1e28..4f8cf9c84 100644 --- a/website/source/docs/builders/lxd.html.md +++ b/website/source/docs/builders/lxd.html.md @@ -30,7 +30,7 @@ Below is a fully functioning example. "type": "lxd", "name": "lxd-xenial", "image": "ubuntu-daily:xenial", - "output_image": "ubuntu-xenial" + "output_image": "ubuntu-xenial", "publish_properties": { "description": "Trivial repackage with Packer" } @@ -54,6 +54,9 @@ Below is a fully functioning example. ### Optional: +- `init_sleep` (string) - The number of seconds to sleep between launching the + LXD instance and provisioning it; defaults to 3 seconds. + - `name` (string) - The name of the started container. Defaults to `packer-$PACKER_BUILD_NAME`. diff --git a/website/source/docs/builders/ncloud.html.md b/website/source/docs/builders/ncloud.html.md new file mode 100644 index 000000000..9148ae568 --- /dev/null +++ b/website/source/docs/builders/ncloud.html.md @@ -0,0 +1,126 @@ +--- +description: | + The ncloud builder allows you to create server images using the NAVER Cloud Platform. +layout: docs +page_title: 'Naver Cloud Platform - Builders' +sidebar_current: 'docs-builders-ncloud' +--- + +# NAVER CLOUD PLATFORM Builder + +The `ncloud` builder allows you to create server images using the [NAVER Cloud +Platform](https://www.ncloud.com/). + +## Configuration Reference + +### Required: + +- `ncloud_access_key` (string) - User's access key. Go to [\[Account + Management \> Authentication + Key\]](https://www.ncloud.com/mypage/manage/authkey) to create and view + your authentication key. + +- `ncloud_secret_key` (string) - User's secret key paired with the access + key. Go to [\[Account Management \> Authentication + Key\]](https://www.ncloud.com/mypage/manage/authkey) to create and view + your authentication key. + +- `server_image_product_code` (string) - Product code of an image to create. + (member\_server\_image\_no is required if not specified) + +- `server_product_code` (string) - Product (spec) code to create. + +### Optional: + +- `member_server_image_no` (string) - Previous image code. If there is an + image previously created, it can be used to create a new image. + (`server_image_product_code` is required if not specified) + +- `server_image_name` (string) - Name of an image to create. + +- `server_image_description` (string) - Description of an image to create. + +- `block_storage_size` (number) - You can add block storage ranging from 10 + GB to 2000 GB, in increments of 10 GB. + +- `access_control_group_configuration_no` (string) - This is used to allow + winrm access when you create a Windows server. An ACG that specifies an + access source (`0.0.0.0/0`) and allowed port (5985) must be created in + advance. + +- `user_data` (string) - Init script to run when an instance is created. + - For Linux servers, Python, Perl, and Shell scripts can be used. 
The + path of the script to run should be included at the beginning of the + script, like \#!/usr/bin/env python, \#!/bin/perl, or \#!/bin/bash. + - For Windows servers, only Visual Basic scripts can be used. + - All scripts must be written in English. +- `user_data_file` (string) - A path to a file containing a `user_data` + script. See above for more information. + +- `region` (string) - Name of the region where you want to create an image. + (default: Korea) + - values: Korea / US-West / HongKong / Singapore / Japan / Germany + + +## Sample code of template.json + +``` +{ + "variables": { + "ncloud_access_key": "FRxhOQRNjKVMqIz3sRLY", + "ncloud_secret_key": "xd6kTO5iNcLookBx0D8TDKmpLj2ikxqEhc06MQD2" + }, + "builders": [ + { + "type": "ncloud", + "access_key": "{{user `ncloud_access_key`}}", + "secret_key": "{{user `ncloud_secret_key`}}", + + "server_image_product_code": "SPSW0WINNT000016", + "server_product_code": "SPSVRSSD00000011", + "member_server_image_no": "4223", + "server_image_name": "packer-test {{timestamp}}", + "server_description": "server description", + "user_data": "CreateObject(\"WScript.Shell\").run(\"cmd.exe /c powershell Set-ExecutionPolicy RemoteSigned & winrm quickconfig -q & sc config WinRM start= auto & winrm set winrm/config/service/auth @{Basic=\"\"true\"\"} & winrm set winrm/config/service @{AllowUnencrypted=\"\"true\"\"} & winrm get winrm/config/service\")", + "region": "US-West" + } + ] +} +``` + +## Requirements for creating Windows images + +You should include the following code in the packer configuration file for +provision when creating a Windows server. + +``` + "builders": [ + { + "type": "ncloud", + ... + "user_data": + "CreateObject(\"WScript.Shell\").run(\"cmd.exe /c powershell Set-ExecutionPolicy RemoteSigned & winrm set winrm/config/service/auth @{Basic=\"\"true\"\"} & winrm set winrm/config/service @{AllowUnencrypted=\"\"true\"\"} & winrm quickconfig -q & sc config WinRM start= auto & winrm get winrm/config/service\")", + "communicator": "winrm", + "winrm_username": "Administrator" + } + ], + "provisioners": [ + { + "type": "powershell", + "inline": [ + "$Env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /shutdown /quiet \"/unattend:C:\\Program Files (x86)\\NBP\\nserver64.xml\" " + ] + } + ] +``` + +## Note + +* You can only create as many public IP addresses as the number of server + instances you own. Before running Packer, please make sure that the number of + public IP addresses previously created is not larger than the number of + server instances (including those to be used to create server images). +* When you forcibly terminate the packer process or close the terminal + (command) window where the process is running, the resources may not be + cleaned up as the packer process no longer runs. In this case, you should + manually clean up the resources associated with the process. diff --git a/website/source/docs/builders/openstack.html.md b/website/source/docs/builders/openstack.html.md index 5406f90ed..c6059da50 100644 --- a/website/source/docs/builders/openstack.html.md +++ b/website/source/docs/builders/openstack.html.md @@ -25,12 +25,17 @@ created. This simplifies configuration quite a bit. The builder does *not* manage images. Once it creates an image, it is up to you to use it or delete it. +~> **Note:** To use OpenStack builder with the OpenStack Newton (Oct 2016) +or earlier, we recommend you use Packer v1.1.2 or earlier version. 
+ ~> **OpenStack Liberty or later requires OpenSSL!** To use the OpenStack builder with OpenStack Liberty (Oct 2015) or later you need to have OpenSSL installed *if you are using temporary key pairs*, i.e. don't use [`ssh_keypair_name`](openstack.html#ssh_keypair_name) nor [`ssh_password`](/docs/templates/communicator.html#ssh_password). All major -OS'es have OpenSSL installed by default except Windows. +OS'es have OpenSSL installed by default except Windows. This has been +resolved in OpenStack Ocata (Feb 2017). +
## Configuration Reference @@ -51,7 +56,7 @@ builder.
- `identity_endpoint` (string) - The URL to the OpenStack Identity service. If not specified, Packer will use the environment variables `OS_AUTH_URL`, - if set. + if set. This is not required if using `clouds.yaml`.
- `source_image` (string) - The ID or full URL to the base image to use. This is the image that will be used to launch a new server and provision it. @@ -64,11 +69,14 @@ builder.
- `username` or `user_id` (string) - The username or id used to connect to the OpenStack service. If not specified, Packer will use the environment - variable `OS_USERNAME` or `OS_USERID`, if set. + variable `OS_USERNAME` or `OS_USERID`, if set. This is not required if + using an access token instead of a password or if using `clouds.yaml`.
- `password` (string) - The password used to connect to the OpenStack service. If not specified, Packer will use the environment variables `OS_PASSWORD`, - if set. + if set. This is not required if using an access token instead of a password or + if using `clouds.yaml`. +
### Optional: @@ -77,14 +85,20 @@ builder. cluster will be used. This may be required for some OpenStack clusters.
- `cacert` (string) - Custom CA certificate file path. - If ommited the OS\_CACERT environment variable can be used. + If omitted the `OS_CACERT` environment variable can be used. +
+- `cert` (string) - Client certificate file path for SSL client authentication. + If omitted the `OS_CERT` environment variable can be used. +
+- `cloud` (string) - An entry in a `clouds.yaml` file. See the OpenStack + os-client-config + [documentation](https://docs.openstack.org/os-client-config/latest/user/configuration.html) + for more information about `clouds.yaml` files. If omitted, the `OS_CLOUD` + environment variable is used.
- `config_drive` (boolean) - Whether or not nova should use ConfigDrive for cloud-init metadata.
-- `cert` (string) - Client certificate file path for SSL client authentication. - If omitted the OS\_CERT environment variable can be used. -
- `domain_name` or `domain_id` (string) - The Domain name or ID you are authenticating with. OpenStack installations require this if identity v3 is used. Packer will use the environment variable `OS_DOMAIN_NAME` or `OS_DOMAIN_ID`, if set. @@ -100,7 +114,7 @@ builder.
- `image_members` (array of strings) - List of members to add to the image after creation. An image member is usually a project (also called the - “tenant”) with whom the image is shared. + "tenant") with whom the image is shared.
- `image_visibility` (string) - One of "public", "private", "shared", or "community". @@ -109,11 +123,14 @@ builder. done over an insecure connection. By default this is false.
- `key` (string) - Client private key file path for SSL client authentication. - If ommited the OS\_KEY environment variable can be used. + If omitted the `OS_KEY` environment variable can be used.
- `metadata` (object of key/value strings) - Glance metadata that will be applied to the image.
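As a sketch of the authentication options above, credentials can be supplied indirectly by pointing the builder at a named entry in a `clouds.yaml` file. The cloud entry, image, and flavor below are hypothetical, and `flavor`, `image_name`, and the communicator settings come from the rest of this reference rather than from the options shown above:

``` json
{
  "type": "openstack",
  "cloud": "mycloud",
  "source_image": "<source image ID>",
  "flavor": "m1.small",
  "image_name": "packer-example-{{timestamp}}",
  "ssh_username": "ubuntu"
}
```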
+- `instance_name` (string) - Name that is applied to the server instance + created by Packer. If this isn't specified, the default is the same as `image_name`. +
- `instance_metadata` (object of key/value strings) - Metadata that is applied to the server instance created by Packer. Also called server properties in some documentation. The strings have a max size of 255 bytes @@ -170,8 +187,11 @@ builder.
- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the instance into. Some OpenStack installations require this. If not specified, - Packer will use the environment variable `OS_TENANT_NAME`, if set. Tenant - is also called Project in later versions of OpenStack. + Packer will use the environment variable `OS_TENANT_NAME` or `OS_TENANT_ID`, + if set. Tenant is also called Project in later versions of OpenStack. +
+- `token` (string) - The token (ID) to use with token-based authorization. + Packer will use the environment variable `OS_TOKEN`, if set.
- `use_floating_ip` (boolean) - *Deprecated* use `floating_ip` or `floating_ip_pool` instead. @@ -272,11 +292,12 @@ export OS_PROJECT_DOMAIN_NAME="mydomain"
## Notes on OpenStack Authorization
-The simplest way to get all settings for authorization agains OpenStack is to +The simplest way to get all settings for authorization against OpenStack is to go into the OpenStack Dashboard (Horizon) select your *Project* and navigate *Project, Access & Security*, select *API Access* and *Download OpenStack RC -File v3*. Source the file, and select your wanted region by setting -environment variable `OS_REGION_NAME` or `OS_REGION_ID` and `export OS_TENANT_NAME=$OS_PROJECT_NAME` or `export OS_TENANT_ID=$OS_PROJECT_ID`. +File v3*. Source the file, and select your wanted region +by setting environment variable `OS_REGION_NAME` or `OS_REGION_ID` and +`export OS_TENANT_NAME=$OS_PROJECT_NAME` or `export OS_TENANT_ID=$OS_PROJECT_ID`.
~> `OS_TENANT_NAME` or `OS_TENANT_ID` must be used even with Identity v3, `OS_PROJECT_NAME` and `OS_PROJECT_ID` have no effect in Packer. @@ -285,3 +306,13 @@ To troubleshoot authorization issues test your environment variables with the OpenStack cli. It can be installed with
$ pip install --user python-openstackclient +
+### Authorize Using Tokens +
+To authorize with an access token, only `identity_endpoint` and `token` are needed, +and possibly `tenant_name` or `tenant_id`, depending on your token type. Alternatively, use +the following environment variables: +
+- `OS_AUTH_URL` +- `OS_TOKEN` +- One of `OS_TENANT_NAME` or `OS_TENANT_ID`
diff --git a/website/source/docs/builders/oracle-classic.html.md b/website/source/docs/builders/oracle-classic.html.md new file mode 100644 index 000000000..383d15409 --- /dev/null +++ b/website/source/docs/builders/oracle-classic.html.md @@ -0,0 +1,184 @@ +--- +description: | + The oracle-classic builder is able to create new custom images for use with Oracle + Cloud Infrastructure Classic Compute. +layout: docs +page_title: 'Oracle Cloud Infrastructure Classic - Builders' +sidebar_current: 'docs-builders-oracle-classic' +--- +
+# Oracle Cloud Infrastructure Classic Compute Builder +
+Type: `oracle-classic` +
+The `oracle-classic` Packer builder is able to create custom images for use +with [Oracle Cloud Infrastructure Classic Compute](https://cloud.oracle.com/compute-classic). The builder +takes a base image, runs any provisioning necessary on the base image after +launching it, and finally snapshots it, creating a reusable custom image.
+ +It is recommended that you familiarise yourself with the +[Key Concepts and Terminology](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsg/terminology.html) +prior to using this builder if you have not done so already. + +The builder _does not_ manage images. Once it creates an image, it is up to you +to use it or delete it. + +## Authorization + +This builder authenticates API calls to Oracle Cloud Infrastructure Classic Compute using basic +authentication (user name and password). +To read more, see the [authentication documentation](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsa/Authentication.html) + +## Configuration Reference + +There are many configuration options available for the `oracle-classic` builder. +This builder currently only works with the SSH communicator. + +### Required + + - `api_endpoint` (string) - This is your custom API endpoint for sending + requests. Instructions for determining your API endpoint can be found + [here](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsa/SendRequests.html) + + - `dest_image_list` (string) - Where to save the machine image to once you've + provisioned it. If the provided image list does not exist, Packer will create it. + + - `identity_domain` (string) - This is your customer-specific identity domain + as generated by Oracle. If you don't know what your identity domain is, ask + your account administrator. For a little more information, see the Oracle + [documentation](https://docs.oracle.com/en/cloud/get-started/subscriptions-cloud/ocuid/identity-domain-overview.html#GUID-7969F881-5F4D-443E-B86C-9044C8085B8A). + + - `source_image_list` (string) - This is what image you want to use as your base image. + See the [documentation](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsg/listing-machine-images.html) + for more details. You may use either a public image list, or a private image list. To see what public image lists are available, you can use + the CLI, as described [here](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stopc/image-lists-stclr-and-nmcli.html#GUID-DB7E75FE-F752-4FF7-AB70-3C8DCDFCA0FA) + + - `password` (string) - Your account password. + + - `shape` (string) - The template that determines the number of + CPUs, amount of memory, and other resources allocated to a newly created + instance. For more information about shapes, see the documentation [here](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsg/machine-images-and-shapes.html). + + - `username` (string) - Your account username. + +### Optional + + - `attributes` (string) - (string) - Attributes to apply when launching the + instance. Note that you need to be careful about escaping characters due to + the templates being JSON. It is often more convenient to use + `attributes_file`, instead. You may only define either `attributes` or + `attributes_file`, not both. + + - `attributes_file` (string) - Path to a json file that will be used for the + attributes when launching the instance. You may only define either + `attributes` or `attributes_file`, not both. + + - `image_description` (string) - a description for your destination + image list. If you don't provide one, Packer will provide a generic description. + + - `ssh_username` (string) - The username that Packer will use to SSH into the + instance; defaults to `opc`, the default oracle user, which has sudo + privileges. If you have already configured users on your machine, you may + prompt Packer to use one of those instead. 
For more detail, see the + [documentation](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsg/accessing-oracle-linux-instance-using-ssh.html). + + - `image_name` (string) - The name to assign to the resulting custom image. + + - `snapshot_timeout` (string) - How long to wait for a snapshot to be + created. Expects a positive golang Time.Duration string, which is + a sequence of decimal numbers and a unit suffix; valid suffixes are `ns` + (nanoseconds), `us` (microseconds), `ms` (milliseconds), `s` (seconds), `m` + (minutes), and `h` (hours). Examples of valid inputs: `100ms`, `250ms`, `1s`, + `2.5s`, `2.5m`, `1m30s`. + Example: `"snapshot_timeout": "15m"`. Default: `20m`. + +## Basic Example + +Here is a basic example. Note that account specific configuration has been +obfuscated; you will need to add a working `username`, `password`, +`identity_domain`, and `api_endpoint` in order for the example to work. + +``` {.json} +{ + "builders": [ + { + "type": "oracle-classic", + "username": "myuser@myaccount.com", + "password": "supersecretpasswordhere", + "identity_domain": "#######", + "api_endpoint": "https://api-###.compute.###.oraclecloud.com/", + "source_image_list": "/oracle/public/OL_7.2_UEKR4_x86_64", + "shape": "oc3", + "image_name": "Packer_Builder_Test_{{timestamp}}", + "attributes": "{\"userdata\": {\"pre-bootstrap\": {\"script\": [\"...\"]}}}", + "dest_image_list": "Packer_Builder_Test_List" + } + ], + "provisioners": [ + { + "type": "shell", + "inline": ["echo hello"] + } + ] +} +``` + +## Basic Example -- Windows + +Attributes file is optional for connecting via ssh, but required for winrm. + +The following file contains the bare minimum necessary to get winRM working; +you have to give it the password to give to the "Administrator" user, which +will be the one winrm connects to. You must also whitelist your computer +to connect via winRM -- the empty braces below whitelist any computer to access +winRM but you can make it more secure by only allowing your build machine +access. See the [docs](https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsg/automating-instance-initialization-using-opc-init.html#GUID-A0A107D6-3B38-47F4-8FC8-96D50D99379B) +for more details on how to define a trusted host. 
+ +Save this file as `windows_attributes.json`: + +```{.json} +{ + "userdata": { + "administrator_password": "password", + "winrm": {} + } +} +``` + +Following is a minimal but working Packer config that references this attributes +file: + +```{.json} +{ + "variables": { + "opc_username": "{{ env `OPC_USERNAME`}}", + "opc_password": "{{ env `OPC_PASSWORD`}}", + "opc_identity_domain": "{{env `OPC_IDENTITY_DOMAIN`}}", + "opc_api_endpoint": "{{ env `OPC_ENDPOINT`}}" + }, + "builders": [ + { + "type": "oracle-classic", + "username": "{{ user `opc_username`}}", + "password": "{{ user `opc_password`}}", + "identity_domain": "{{ user `opc_identity_domain`}}", + "api_endpoint": "{{ user `opc_api_endpoint`}}", + "source_image_list": "/Compute-{{ user `opc_identity_domain` }}/{{ user `opc_username`}}/Microsoft_Windows_Server_2012_R2-17.3.6-20170930-124649", + "attributes_file": "./windows_attributes.json", + "shape": "oc3", + "image_name": "Packer_Windows_Demo_{{timestamp}}", + "dest_image_list": "Packer_Windows_Demo", + "communicator": "winrm", + "winrm_username": "Administrator", + "winrm_password": "password" + } + ], + "provisioners": [ + { + "type": "powershell", + "inline": "Write-Output(\"HELLO WORLD\")" + } + ] +} +``` diff --git a/website/source/docs/builders/oracle-oci.html.md b/website/source/docs/builders/oracle-oci.html.md index 3503d4fca..61b93e2fa 100644 --- a/website/source/docs/builders/oracle-oci.html.md +++ b/website/source/docs/builders/oracle-oci.html.md @@ -1,5 +1,5 @@ --- -description: +description: | The oracle-oci builder is able to create new custom images for use with Oracle Cloud Infrastructure (OCI). layout: docs @@ -123,6 +123,21 @@ builder. value provided by the [OCI config file](https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/sdkconfig.htm) if present. + - `use_private_ip` (boolean) - Use private ip addresses to connect to the instance via ssh. + + - `metadata` (map of strings) - Metadata optionally contains custom metadata key/value pairs provided in the +configuration. While this can be used to set metadata["user_data"] the explicit "user_data" and "user_data_file" values will have precedence. An instance's metadata can be obtained from at http://169.254.169.254 on the +launched instance. + + - `user_data` (string) - user_data to be used by cloud + init. See [the Oracle docs](https://docs.us-phoenix-1.oraclecloud.com/api/#/en/iaas/20160918/LaunchInstanceDetails) for more details. Generally speaking, it is easier to use the `user_data_file`, + but you can use this option to put either the plaintext data or the base64 + encoded data directly into your Packer config. + + - `user_data_file` (string) - Path to a file to be used as user_data by cloud + init. See [the Oracle docs](https://docs.us-phoenix-1.oraclecloud.com/api/#/en/iaas/20160918/LaunchInstanceDetails) for more details. Example: + `"user_data_file": "./boot_config/myscript.sh"` + ## Basic Example diff --git a/website/source/docs/builders/oracle.html.md b/website/source/docs/builders/oracle.html.md new file mode 100644 index 000000000..7184b0a59 --- /dev/null +++ b/website/source/docs/builders/oracle.html.md @@ -0,0 +1,22 @@ +--- +description: | + Packer is able to create custom images using Oracle Cloud Infrastructure. +layout: docs +page_title: 'Oracle - Builders' +sidebar_current: 'docs-builders-oracle' +--- + +# Oracle Builder + +Packer is able to create custom images on both Oracle Cloud Infrastructure and +Oracle Cloud Infrastructure Classic Compute. 
Packer comes with builders +designed to support both platforms. Please choose the one that's right for you: + +- [oracle-classic](/docs/builders/oracle-classic.html) - Create custom images + in Oracle Cloud Infrastructure Classic Compute by launching a source + instance and creating an image list from a snapshot of it after + provisioning. + +- [oracle-oci](/docs/builders/oracle-oci.html) - Create custom images in + Oracle Cloud Infrastructure (OCI) by launching a base instance and creating + an image from it after provisioning. diff --git a/website/source/docs/builders/parallels-iso.html.md b/website/source/docs/builders/parallels-iso.html.md.erb similarity index 86% rename from website/source/docs/builders/parallels-iso.html.md rename to website/source/docs/builders/parallels-iso.html.md.erb index c9c23d919..5e1b19da4 100644 --- a/website/source/docs/builders/parallels-iso.html.md +++ b/website/source/docs/builders/parallels-iso.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | The Parallels Packer builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an ISO @@ -37,7 +39,7 @@ self-install. Still, the example serves to show the basic configuration: "parallels_tools_flavor": "lin", "ssh_username": "packer", "ssh_password": "packer", - "ssh_wait_timeout": "30s", + "ssh_timeout": "30s", "shutdown_command": "echo 'packer' | sudo -S shutdown -P now" } ``` @@ -246,68 +248,9 @@ template. The boot command is "typed" character for character (using the Parallels Virtualization SDK, see [Parallels Builder](/docs/builders/parallels.html)) -simulating a human actually typing the keyboard. There are a set of special keys -available. If these are in your boot command, they will be replaced by the -proper key: +simulating a human actually typing the keyboard. -- `` - Backspace - -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the ctrl key. - -- `` `` - Simulates pressing and holding the shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, -otherwise they will be held down until the machine reboots. Use lowercase -characters as well inside modifiers. - -For example: to simulate ctrl+c use `c`. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). 
The -available variables are: - -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will be - blank! +<%= partial "partials/builders/boot-command" %> Example boot command. This is actually a working boot command used to start an Ubuntu 12.04 installer: diff --git a/website/source/docs/builders/parallels-pvm.html.md b/website/source/docs/builders/parallels-pvm.html.md.erb similarity index 83% rename from website/source/docs/builders/parallels-pvm.html.md rename to website/source/docs/builders/parallels-pvm.html.md.erb index ba3d9c1a9..9cc9ab14a 100644 --- a/website/source/docs/builders/parallels-pvm.html.md +++ b/website/source/docs/builders/parallels-pvm.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | This Parallels builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an existing PVM @@ -33,7 +35,7 @@ the settings here. "source_path": "source.pvm", "ssh_username": "packer", "ssh_password": "packer", - "ssh_wait_timeout": "30s", + "ssh_timeout": "30s", "shutdown_command": "echo 'packer' | sudo -S shutdown -P now" } ``` @@ -179,60 +181,9 @@ template. The boot command is "typed" character for character (using the Parallels Virtualization SDK, see [Parallels Builder](/docs/builders/parallels.html)) -simulating a human actually typing the keyboard. There are a set of special keys -available. If these are in your boot command, they will be replaced by the -proper key: +simulating a human actually typing the keyboard. -- `` - Backspace - -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the ctrl key. - -- `` `` - Simulates pressing and holding the shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). The -available variables are: - -For more examples of various boot commands, see the sample projects from our -[community templates page](/community-tools.html#templates). +<%= partial "partials/builders/boot-command" %> ## prlctl Commands diff --git a/website/source/docs/builders/profitbricks.html.md b/website/source/docs/builders/profitbricks.html.md index bbb691bab..2b4bc4bc5 100644 --- a/website/source/docs/builders/profitbricks.html.md +++ b/website/source/docs/builders/profitbricks.html.md @@ -25,9 +25,9 @@ builder. - `image` (string) - ProfitBricks volume image. 
Only Linux public images are supported. To obtain full list of available images you can use [ProfitBricks CLI](https://github.com/profitbricks/profitbricks-cli#image). -- `password` (string) - ProfitBricks password. This can be specified via environment variable \`PROFITBRICKS\_PASSWORD', if provided. The value definded in the config has precedence over environemnt variable. +- `password` (string) - ProfitBricks password. This can be specified via environment variable \`PROFITBRICKS\_PASSWORD', if provided. The value defined in the config has precedence over environemnt variable. -- `username` (string) - ProfitBricks username. This can be specified via environment variable \`PROFITBRICKS\_USERNAME', if provided. The value definded in the config has precedence over environemnt variable. +- `username` (string) - ProfitBricks username. This can be specified via environment variable \`PROFITBRICKS\_USERNAME', if provided. The value defined in the config has precedence over environemnt variable. ### Optional diff --git a/website/source/docs/builders/qemu.html.md b/website/source/docs/builders/qemu.html.md.erb similarity index 73% rename from website/source/docs/builders/qemu.html.md rename to website/source/docs/builders/qemu.html.md.erb index e1bff6043..1d190b7ec 100644 --- a/website/source/docs/builders/qemu.html.md +++ b/website/source/docs/builders/qemu.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | The Qemu Packer builder is able to create KVM and Xen virtual machine images. layout: docs @@ -30,11 +32,11 @@ to files, URLS for ISOs and checksums. [ { "type": "qemu", - "iso_url": "http://mirror.raystedman.net/centos/6/isos/x86_64/CentOS-6.8-x86_64-minimal.iso", - "iso_checksum": "0ca12fe5f28c2ceed4f4084b41ff8a0b", + "iso_url": "http://mirror.raystedman.net/centos/6/isos/x86_64/CentOS-6.9-x86_64-minimal.iso", + "iso_checksum": "af4a1640c0c6f348c6c41f1ea9e192a2", "iso_checksum_type": "md5", "output_directory": "output_centos_tdhtest", - "shutdown_command": "shutdown -P now", + "shutdown_command": "echo 'packer' | sudo -S shutdown -P now", "disk_size": 5000, "format": "qcow2", "headless": false, @@ -47,7 +49,7 @@ to files, URLS for ISOs and checksums. "ssh_username": "root", "ssh_password": "s0m3password", "ssh_port": 22, - "ssh_wait_timeout": "30s", + "ssh_timeout": "30s", "vm_name": "tdhtest", "net_device": "virtio-net", "disk_interface": "virtio", @@ -90,8 +92,8 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). over `iso_checksum_url` type. - `iso_checksum_type` (string) - The type of the checksum specified in - `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or - "sha512" currently. While "none" will skip checksumming, this is not + `iso_checksum`. Valid values are `none`, `md5`, `sha1`, `sha256`, or + `sha512` currently. While `none` will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time. @@ -105,15 +107,25 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). this is an HTTP URL, Packer will download it and cache it between runs. This can also be a URL to an IMG or QCOW2 file, in which case QEMU will boot directly from it. When passing a path to an IMG or QCOW2 file, you - should set `disk_image` to "true". + should set `disk_image` to `true`. ### Optional: - `accelerator` (string) - The accelerator type to use when running the VM. - This may be `none`, `kvm`, `tcg`, or `xen`. 
The appropriate software must - already been installed on your build machine to use the accelerator you - specified. When no accelerator is specified, Packer will try to use `kvm` - if it is available but will default to `tcg` otherwise. + This may be `none`, `kvm`, `tcg`, `hax`, `hvf`, or `xen`. The appropriate + software must have already been installed on your build machine to use the + accelerator you specified. When no accelerator is specified, Packer will try + to use `kvm` if it is available but will default to `tcg` otherwise. + + -> The `hax` accelerator has issues attaching CDROM ISOs. This is an + upstream issue which can be tracked + [here](https://github.com/intel/haxm/issues/20). + + -> The `hvf` accelerator is new and experimental as of + [QEMU 2.12.0](https://wiki.qemu.org/ChangeLog/2.12#Host_support). + You may encounter issues unrelated to Packer when using it. You may need to + add [ "-global", "virtio-pci.disable-modern=on" ] to `qemuargs` depending on the + guest operating system. - `boot_command` (array of strings) - This is an array of commands to type when the virtual machine is first booted. The goal of these commands should @@ -124,40 +136,49 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). - `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be - a duration. Examples are "5s" and "1m30s" which will cause Packer to wait + a duration. Examples are `5s` and `1m30s` which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't - specified, the default is 10 seconds. + specified, the default is `10s` or 10 seconds. - `disk_cache` (string) - The cache mode to use for disk. Allowed values - include any of "writethrough", "writeback", "none", "unsafe" - or "directsync". By default, this is set to "writeback". + include any of `writethrough`, `writeback`, `none`, `unsafe` + or `directsync`. By default, this is set to `writeback`. - `disk_compression` (boolean) - Apply compression to the QCOW2 disk file using `qemu-img convert`. Defaults to `false`. - `disk_discard` (string) - The discard mode to use for disk. Allowed values - include any of "unmap" or "ignore". By default, this is set to "ignore". + include any of `unmap` or `ignore`. By default, this is set to `ignore`. - `disk_image` (boolean) - Packer defaults to building from an ISO file, this parameter controls whether the ISO URL supplied is actually a bootable - QEMU image. When this value is set to true, the machine will clone the - source, resize it according to `disk_size` and boot the image. + QEMU image. When this value is set to `true`, the machine will either clone + the source or use it as a backing file (if `use_backing_file` is `true`); + then, it will resize the image according to `disk_size` and boot it. - `disk_interface` (string) - The interface to use for the disk. Allowed - values include any of "ide", "scsi", "virtio" or "virtio-scsi"^\* . Note also - that any boot commands or kickstart type scripts must have proper - adjustments for resulting device names. The Qemu builder uses "virtio" by + values include any of `ide`, `scsi`, `virtio` or `virtio-scsi`^\*. Note + also that any boot commands or kickstart type scripts must have proper + adjustments for resulting device names. The Qemu builder uses `virtio` by default. 
- ^\* Please be aware that use of the "scsi" disk interface has been disabled + ^\* Please be aware that use of the `scsi` disk interface has been disabled by Red Hat due to a bug described [here](https://bugzilla.redhat.com/show_bug.cgi?id=1019220). If you are running Qemu on RHEL or a RHEL variant such as CentOS, you - *must* choose one of the other listed interfaces. Using the "scsi" + *must* choose one of the other listed interfaces. Using the `scsi` interface under these circumstances will cause the build to fail. - `disk_size` (number) - The size, in megabytes, of the hard disk to create - for the VM. By default, this is 40000 (about 40 GB). + for the VM. By default, this is `40960` (40 GB). + +- `floppy_dirs` (array of strings) - A list of directories to place onto + the floppy disk recursively. This is similar to the `floppy_files` option + except that the directory structure is preserved. This is useful for when + your floppy disk includes drivers or if you just want to organize it's + contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. + The maximum summary size of all files in the listed directories are the + same as in `floppy_files`. - `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for @@ -171,20 +192,12 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). listed files must not exceed 1.44 MB. The supported ways to move large files into the OS are using `http_directory` or [the file provisioner](https://www.packer.io/docs/provisioners/file.html). -- `floppy_dirs` (array of strings) - A list of directories to place onto - the floppy disk recursively. This is similar to the `floppy_files` option - except that the directory structure is preserved. This is useful for when - your floppy disk includes drivers or if you just want to organize it's - contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - The maximum summary size of all files in the listed directories are the - same as in `floppy_files`. - -- `format` (string) - Either "qcow2" or "raw", this specifies the output +- `format` (string) - Either `qcow2` or `raw`, this specifies the output format of the virtual machine image. This defaults to `qcow2`. - `headless` (boolean) - Packer defaults to building QEMU virtual machines by launching a GUI that shows the console of the machine being built. When this - value is set to true, the machine will start without a console. + value is set to `true`, the machine will start without a console. You can still see the console if you make a note of the VNC display number chosen, and then connect using `vncviewer -Shared :` @@ -192,22 +205,23 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). - `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting - kickstart files and so on. By default this is "", which means no HTTP server - will be started. The address and port of the HTTP server will be available - as variables in `boot_command`. This is covered in more detail below. + kickstart files and so on. By default this is an empty string, which means + no HTTP server will be started. The address and port of the HTTP server will + be available as variables in `boot_command`. This is covered in more detail + below. 
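  A rough sketch of how this is commonly wired together (the `http` directory
  name and the `ks.cfg` kickstart file below are placeholders, not
  requirements); the server address is referenced from `boot_command` via the
  `HTTPIP` and `HTTPPort` template variables:

  ``` json
  {
    "http_directory": "http",
    "boot_command": [
      "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter><wait>"
    ]
  }
  ```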
- `http_port_min` and `http_port_max` (number) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum - port the same. By default the values are 8000 and 9000, respectively. + port the same. By default the values are `8000` and `9000`, respectively. - `iso_skip_cache` (boolean) - Use iso from provided url. Qemu must support curl block device. This defaults to `false`. - `iso_target_extension` (string) - The extension of the iso file after - download. This defaults to "iso". + download. This defaults to `iso`. - `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the @@ -221,25 +235,25 @@ Linux server and have not enabled X11 forwarding (`ssh -X`). - `machine_type` (string) - The type of machine emulation to use. Run your qemu binary with the flags `-machine help` to list available types for - your system. This defaults to "pc". + your system. This defaults to `pc`. - `net_device` (string) - The driver to use for the network interface. Allowed - values "ne2k\_pci", "i82551", "i82557b", "i82559er", "rtl8139", "e1000", - "pcnet", "virtio", "virtio-net", "virtio-net-pci", "usb-net", "i82559a", - "i82559b", "i82559c", "i82550", "i82562", "i82557a", "i82557c", "i82801", - "vmxnet3", "i82558a" or "i82558b". The Qemu builder uses "virtio-net" by + values `ne2k_pci`, `i82551`, `i82557b`, `i82559er`, `rtl8139`, `e1000`, + `pcnet`, `virtio`, `virtio-net`, `virtio-net-pci`, `usb-net`, `i82559a`, + `i82559b`, `i82559c`, `i82550`, `i82562`, `i82557a`, `i82557c`, `i82801`, + `vmxnet3`, `i82558a` or `i82558b`. The Qemu builder uses `virtio-net` by default. - `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running - the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the + the builder. By default this is `output-BUILDNAME` where "BUILDNAME" is the name of the build. - `qemu_binary` (string) - The name of the Qemu binary to look for. This - defaults to "qemu-system-x86\_64", but may need to be changed for - some platforms. For example "qemu-kvm", or "qemu-system-i386" may be a + defaults to `qemu-system-x86_64`, but may need to be changed for + some platforms. For example `qemu-kvm`, or `qemu-system-i386` may be a better choice for some systems. - `qemuargs` (array of array of strings) - Allows complete control over the @@ -306,46 +320,55 @@ will bind to their own SSH port as determined by each process. This will also work with WinRM, just change the port forward in `qemuargs` to map to WinRM's default port of `5985` or whatever value you have the service set to listen on. +- `use_backing_file` (boolean) - Only applicable when `disk_image` is `true` + and `format` is `qcow2`, set this option to `true` to create a new QCOW2 + file that uses the file located at `iso_url` as a backing file. The new file + will only contain blocks that have changed compared to the backing file, so + enabling this option can significantly reduce disk usage. 
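  A minimal sketch of the keys involved when building on top of an existing
  local QCOW2 image rather than an ISO (the image path below is only an
  illustration):

  ``` json
  {
    "iso_url": "./source-images/base.qcow2",
    "iso_checksum_type": "none",
    "disk_image": true,
    "use_backing_file": true,
    "format": "qcow2"
  }
  ```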
+ - `use_default_display` (boolean) - If true, do not pass a `-display` option to qemu, allowing it to choose the default. This may be needed when running - under OS X. + under macOS, and getting errors about `sdl` not being available. - `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine unless a - shutdown command takes place inside script so this may safely be omitted. If - one or more scripts require a reboot it is suggested to leave this blank - since reboots may fail and specify the final shutdown command in your - last script. + shutdown command takes place inside script so this may safely be omitted. It + is important to add a `shutdown_command`. By default Packer halts the virtual + machine and the file system may not be sync'd. Thus, changes made in a + provisioner might not be saved. If one or more scripts require a reboot it is + suggested to leave this blank since reboots may fail and specify the final + shutdown command in your last script. - `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is - `5m`, or five minutes. + `5m` or five minutes. -- `skip_compaction` (boolean) - Packer compacts the QCOW2 image using `qemu-img convert`. - Set this option to `true` to disable compacting. Defaults to `false`. +- `skip_compaction` (boolean) - Packer compacts the QCOW2 image using + `qemu-img convert`. Set this option to `true` to disable compacting. + Defaults to `false`. - `ssh_host_port_min` and `ssh_host_port_max` (number) - The minimum and maximum port to use for the SSH port on the host machine which is forwarded to the SSH port on the guest machine. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to use as the - host port. By default this is 2222 to 4444. + host port. By default this is `2222` to `4444`. - `vm_name` (string) - This is the name of the image (QCOW2 or IMG) file for - the new virtual machine. By default this is "packer-BUILDNAME", where - `BUILDNAME` is the name of the build. Currently, no file extension will be + the new virtual machine. By default this is `packer-BUILDNAME`, where + "BUILDNAME" is the name of the build. Currently, no file extension will be used unless it is specified in this option. -- `vnc_bind_address` (string / IP address) - The IP address that should be binded - to for VNC. By default packer will use 127.0.0.1 for this. If you wish to bind - to all interfaces use 0.0.0.0 +- `vnc_bind_address` (string / IP address) - The IP address that should be + binded to for VNC. By default packer will use `127.0.0.1` for this. If you + wish to bind to all interfaces use `0.0.0.0`. - `vnc_port_min` and `vnc_port_max` (number) - The minimum and maximum port to use for VNC access to the virtual machine. The builder uses VNC to type the initial `boot_command`. Because Packer generally runs in parallel, Packer uses a randomly chosen port in this range that appears available. By - default this is 5900 to 6000. The minimum and maximum ports are inclusive. + default this is `5900` to `6000`. The minimum and maximum ports are inclusive. ## Boot Command @@ -366,69 +389,7 @@ default 100ms delay. The delay alleviates issues with latency and CPU contention. 
For local builds you can tune this delay by specifying e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command. -There are a set of special keys available. If these are in your boot -command, they will be replaced by the proper key: - -- `` - Backspace - -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the ctrl key. - -- `` `` - Simulates pressing and holding the shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -- `` - Add user defined time.Duration pause before sending any - additional keys. For example `` or `` - -When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, -otherwise they will be held down until the machine reboots. Use lowercase -characters as well inside modifiers. For example: to simulate ctrl+c use -`c`. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). The -available variables are: - -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will be - blank! +<%= partial "partials/builders/boot-command" %> Example boot command. This is actually a working boot command used to start an CentOS 6.4 installer: @@ -452,3 +413,7 @@ seems to be related to having a `common` directory or file in the directory they've run Packer in, like the packer source directory. This appears to be an upstream bug with qemu, and the best solution for now is to remove the file/directory or run in another directory. + +Some users have reported issues with incorrect keymaps using qemu version 2.11. +This is a bug with qemu, and the solution is to upgrade, or downgrade to 2.10.1 +or earlier. diff --git a/website/source/docs/builders/scaleway.html.md b/website/source/docs/builders/scaleway.html.md new file mode 100644 index 000000000..bcef9f900 --- /dev/null +++ b/website/source/docs/builders/scaleway.html.md @@ -0,0 +1,100 @@ +--- +layout: docs +sidebar_current: docs-builders-scaleway +page_title: Scaleway - Builders +description: |- + The Scaleway Packer builder is able to create new images for use with + Scaleway BareMetal and Virtual cloud server. The builder takes a source + image, runs any provisioning necessary on the image after launching it, then + snapshots it into a reusable image. This reusable image can then be used as + the foundation of new servers that are launched within Scaleway. 
+---
+
+
+# Scaleway Builder
+
+Type: `scaleway`
+
+The `scaleway` Packer builder is able to create new images for use with
+[Scaleway](https://www.scaleway.com). The builder takes a source image,
+runs any provisioning necessary on the image after launching it, then snapshots
+it into a reusable image. This reusable image can then be used as the foundation
+of new servers that are launched within Scaleway.
+
+The builder does *not* manage snapshots. Once it creates an image, it is up to you
+to use it or delete it.
+
+## Configuration Reference
+
+There are many configuration options available for the builder. They are
+segmented below into two categories: required and optional parameters. Within
+each category, the available configuration keys are alphabetized.
+
+In addition to the options listed here, a
+[communicator](/docs/templates/communicator.html) can be configured for this
+builder.
+
+### Required:
+
+
+- `api_access_key` (string) - The organization access key to use to identify your
+  organization. It can also be specified via environment variable
+  `SCALEWAY_API_ACCESS_KEY`. Your access key is available in the
+  ["Credentials" section](https://cloud.scaleway.com/#/credentials) of the
+  control panel.
+
+- `api_token` (string) - The token to use to authenticate with your
+  account. It can also be specified via environment variable
+  `SCALEWAY_API_TOKEN`. You can see and generate tokens in the
+  ["Credentials" section](https://cloud.scaleway.com/#/credentials) of the
+  control panel.
+
+- `image` (string) - The UUID of the base image to use. This is the image
+  that will be used to launch a new server and provision it. See the image
+  list in the Scaleway console or API for the complete list of accepted
+  image UUIDs.
+
+- `region` (string) - The name of the region to launch the server in (`par1`
+  or `ams1`). Consequently, this is the region where the snapshot will be
+  available.
+
+- `commercial_type` (string) - The name of the server commercial type:
+  `ARM64-128GB`,`ARM64-16GB`,`ARM64-2GB`,`ARM64-32GB`,`ARM64-4GB`, `ARM64-64GB`,
+  `ARM64-8GB`,`C1`,`C2L`,`C2M`,`C2S`,`START1-L`,`START1-M`,`START1-S`,`START1-XS`,
+  `X64-120GB`,`X64-15GB`,`X64-30GB`,`X64-60GB`
+
+### Optional:
+
+- `server_name` (string) - The name assigned to the server. Defaults to
+  `packer-UUID`.
+
+- `image_name` (string) - The name of the resulting image that will appear in
+  your account. Defaults to `packer-TIMESTAMP`.
+
+- `snapshot_name` (string) - The name of the resulting snapshot that will
+  appear in your account. Defaults to `packer-TIMESTAMP`.
+
+- `bootscript` (string) - The ID of an existing bootscript to use when booting
+  the server.
+
+## Basic Example
+
+Here is a basic example. It is completely valid as soon as you enter your own
+access tokens:
+
+```json
+{
+  "type": "scaleway",
+  "api_access_key": "YOUR API ACCESS KEY",
+  "api_token": "YOUR TOKEN",
+  "image": "UUID OF THE BASE IMAGE",
+  "region": "par1",
+  "commercial_type": "START1-S",
+  "ssh_username": "root",
+  "ssh_private_key_file": "~/.ssh/id_rsa"
+}
+```
+
+If you do not specify `ssh_private_key_file`, a temporary SSH key pair is
+generated to connect to the server. This key will only allow the `root` user to
+connect to the server.
diff --git a/website/source/docs/builders/triton.html.md b/website/source/docs/builders/triton.html.md
index 80c54a5b4..956228c84 100644
--- a/website/source/docs/builders/triton.html.md
+++ b/website/source/docs/builders/triton.html.md
@@ -50,6 +50,7 @@ builder.
- `triton_account` (string) - The username of the Triton account to use when using the Triton Cloud API. + - `triton_key_id` (string) - The fingerprint of the public key of the SSH key pair to use for authentication with the Triton Cloud API. If `triton_key_material` is not set, it is assumed that the SSH agent has the @@ -92,6 +93,14 @@ builder. of `triton_key_id` is stored. For example `/home/soandso/.ssh/id_rsa`. If this is not specified, the SSH agent is used to sign requests with the `triton_key_id` specified. + +- `triton_user` (string) - The username of a user who has access to your Triton + account. + +- `insecure_skip_tls_verify` - (bool) This allows skipping TLS verification of + the Triton endpoint. It is useful when connecting to a temporary Triton + installation such as Cloud-On-A-Laptop which does not generally use a + certificate signed by a trusted root CA. The default is `false`. - `source_machine_firewall_enabled` (boolean) - Whether or not the firewall of the VM used to create an image of is enabled. The Triton firewall only @@ -149,7 +158,7 @@ builder. ## Basic Example -Below is a minimal example to create an joyent-brand image on the Joyent public +Below is a minimal example to create an image on the Joyent public cloud: ``` json diff --git a/website/source/docs/builders/virtualbox-iso.html.md b/website/source/docs/builders/virtualbox-iso.html.md.erb similarity index 79% rename from website/source/docs/builders/virtualbox-iso.html.md rename to website/source/docs/builders/virtualbox-iso.html.md.erb index 85d4905ed..788f5e709 100644 --- a/website/source/docs/builders/virtualbox-iso.html.md +++ b/website/source/docs/builders/virtualbox-iso.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | The VirtualBox Packer builder is able to create VirtualBox virtual machines and export them in the OVF format, starting from an ISO image. @@ -63,8 +65,8 @@ builder. over `iso_checksum_url` type. - `iso_checksum_type` (string) - The type of the checksum specified in - `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or - "sha512" currently. While "none" will skip checksumming, this is not + `iso_checksum`. Valid values are `none`, `md5`, `sha1`, `sha256`, or + `sha512` currently. While `none` will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time. @@ -88,12 +90,12 @@ builder. - `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be - a duration. Examples are "5s" and "1m30s" which will cause Packer to wait + a duration. Examples are `5s` and `1m30s` which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't - specified, the default is 10 seconds. + specified, the default is `10s` or 10 seconds. - `disk_size` (number) - The size, in megabytes, of the hard disk to create - for the VM. By default, this is 40000 (about 40 GB). + for the VM. By default, this is `40000` (about 40 GB). - `export_opts` (array of strings) - Additional options to pass to the [VBoxManage @@ -137,6 +139,12 @@ builder. "packer_conf.json" ``` +- `floppy_dirs` (array of strings) - A list of directories to place onto + the floppy disk recursively. This is similar to the `floppy_files` option + except that the directory structure is preserved. This is useful for when + your floppy disk includes drivers or if you just want to organize it's + contents as a hierarchy. 
Wildcard characters (\*, ?, and \[\]) are allowed. + - `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on @@ -147,26 +155,20 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `floppy_dirs` (array of strings) - A list of directories to place onto - the floppy disk recursively. This is similar to the `floppy_files` option - except that the directory structure is preserved. This is useful for when - your floppy disk includes drivers or if you just want to organize it's - contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - -- `format` (string) - Either "ovf" or "ova", this specifies the output format - of the exported virtual machine. This defaults to "ovf". +- `format` (string) - Either `ovf` or `ova`, this specifies the output format + of the exported virtual machine. This defaults to `ovf`. - `guest_additions_mode` (string) - The method by which guest additions are - made available to the guest for installation. Valid options are "upload", - "attach", or "disable". If the mode is "attach" the guest additions ISO will - be attached as a CD device to the virtual machine. If the mode is "upload" + made available to the guest for installation. Valid options are `upload`, + `attach`, or `disable`. If the mode is `attach` the guest additions ISO will + be attached as a CD device to the virtual machine. If the mode is `upload` the guest additions ISO will be uploaded to the path specified by - `guest_additions_path`. The default value is "upload". If "disable" is used, + `guest_additions_path`. The default value is `upload`. If `disable` is used, guest additions won't be downloaded, either. - `guest_additions_path` (string) - The path on the guest virtual machine where the VirtualBox guest additions ISO will be uploaded. By default this - is "VBoxGuestAdditions.iso" which should upload into the login directory of + is `VBoxGuestAdditions.iso` which should upload into the login directory of the user. This is a [configuration template](/docs/templates/engine.html) where the `Version` variable is replaced with the VirtualBox version. @@ -183,24 +185,24 @@ builder. download the proper guest additions ISO from the internet. - `guest_os_type` (string) - The guest OS type being installed. By default - this is "other", but you can get *dramatic* performance improvements by + this is `other`, but you can get *dramatic* performance improvements by setting this to the proper value. To view all available values for this run `VBoxManage list ostypes`. Setting the correct value hints to VirtualBox how to optimize the virtual hardware to work best with that operating system. - `hard_drive_interface` (string) - The type of controller that the primary - hard drive is attached to, defaults to "ide". When set to "sata", the drive - is attached to an AHCI SATA controller. When set to "scsi", the drive is + hard drive is attached to, defaults to `ide`. When set to `sata`, the drive + is attached to an AHCI SATA controller. When set to `scsi`, the drive is attached to an LsiLogic SCSI controller. - `sata_port_count` (number) - The number of ports available on any SATA - controller created, defaults to 1. VirtualBox supports up to 30 ports on a + controller created, defaults to `1`. 
VirtualBox supports up to 30 ports on a maximum of 1 SATA controller. Increasing this value can be useful if you want to attach additional drives. - `hard_drive_nonrotational` (boolean) - Forces some guests (i.e. Windows 7+) to treat disks as SSDs and stops them from performing disk fragmentation. - Also set `hard_drive_Discard` to `true` to enable TRIM support. + Also set `hard_drive_discard` to `true` to enable TRIM support. - `hard_drive_discard` (boolean) - When this value is set to `true`, a VDI image will be shrunk in response to the trim command from the guest OS. @@ -209,29 +211,30 @@ builder. - `headless` (boolean) - Packer defaults to building VirtualBox virtual machines by launching a GUI that shows the console of the machine - being built. When this value is set to `true`, the machine will start without - a console. + being built. When this value is set to `true`, the machine will start + without a console. - `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting - kickstart files and so on. By default this is "", which means no HTTP server - will be started. The address and port of the HTTP server will be available - as variables in `boot_command`. This is covered in more detail below. + kickstart files and so on. By default this is an empty string, which means + no HTTP server will be started. The address and port of the HTTP server will + be available as variables in `boot_command`. This is covered in more detail + below. - `http_port_min` and `http_port_max` (number) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum - port the same. By default the values are 8000 and 9000, respectively. + port the same. By default the values are `8000` and `9000`, respectively. - `iso_interface` (string) - The type of controller that the ISO is attached - to, defaults to "ide". When set to "sata", the drive is attached to an AHCI + to, defaults to `ide`. When set to `sata`, the drive is attached to an AHCI SATA controller. - `iso_target_extension` (string) - The extension of the iso file after - download. This defaults to "iso". + download. This defaults to `iso`. - `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the @@ -250,13 +253,13 @@ builder. resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running - the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the + the builder. By default this is `output-BUILDNAME` where "BUILDNAME" is the name of the build. - `post_shutdown_delay` (string) - The amount of time to wait after shutting down the virtual machine. If you get the error `Error removing floppy controller`, you might need to set this to `5m` - or so. By default, the delay is `0s`, or disabled. + or so. By default, the delay is `0s` or disabled. - `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. 
By default this is an empty @@ -269,7 +272,7 @@ builder. - `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is - `5m`, or five minutes. + `5m` or five minutes. - `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will not export the VM. Useful if the build output is not the resultant image, @@ -279,11 +282,11 @@ builder. maximum port to use for the SSH port on the host machine which is forwarded to the SSH port on the guest machine. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to use as the - host port. By default this is 2222 to 4444. + host port. By default this is `2222` to `4444`. - `ssh_skip_nat_mapping` (boolean) - Defaults to `false`. When enabled, Packer does not setup forwarded port mapping for SSH requests and uses `ssh_port` - on the host to communicate to the virtual machine + on the host to communicate to the virtual machine. - `vboxmanage` (array of array of strings) - Custom `VBoxManage` commands to execute in order to further customize the virtual machine being created. The @@ -303,22 +306,22 @@ builder. - `virtualbox_version_file` (string) - The path within the virtual machine to upload a file that contains the VirtualBox version that was used to create the machine. This information can be useful for provisioning. By default - this is ".vbox\_version", which will generally be upload it into the + this is `.vbox_version`, which will generally be upload it into the home directory. Set to an empty string to skip uploading this file, which can be useful when using the `none` communicator. - `vm_name` (string) - This is the name of the OVF file for the new virtual - machine, without the file extension. By default this is "packer-BUILDNAME", + machine, without the file extension. By default this is `packer-BUILDNAME`, where "BUILDNAME" is the name of the build. - `vrdp_bind_address` (string / IP address) - The IP address that should be - binded to for VRDP. By default packer will use 127.0.0.1 for this. If you - wish to bind to all interfaces use 0.0.0.0 + binded to for VRDP. By default packer will use `127.0.0.1` for this. If you + wish to bind to all interfaces use `0.0.0.0`. - `vrdp_port_min` and `vrdp_port_max` (number) - The minimum and maximum port to use for VRDP access to the virtual machine. Packer uses a randomly chosen - port in this range that appears available. By default this is 5900 to 6000. - The minimum and maximum ports are inclusive. + port in this range that appears available. By default this is `5900` to + `6000`. The minimum and maximum ports are inclusive. ## Boot Command @@ -331,71 +334,13 @@ As documented above, the `boot_command` is an array of strings. The strings are all typed in sequence. It is an array only to improve readability within the template. -The boot command is "typed" character for character over a VNC connection to the -machine, simulating a human actually typing the keyboard. There are a set of -special keys available. If these are in your boot command, they will be replaced -by the proper key: +The boot command is sent to the VM through the `VBoxManage` utility in as few +invocations as possible. We send each character in groups of 25, with a default +delay of 100ms between groups. The delay alleviates issues with latency and CPU +contention. 
If you notice missing keys, you can tune this delay by specifying e.g. +`PACKER_KEY_INTERVAL=500ms` to wait longer between each group of characters. -- `` - Backspace - -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the - ctrl key. - -- `` `` - Simulates pressing and holding the - shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, -otherwise they will be held down until the machine reboots. Use lowercase -characters as well inside modifiers. - -For example: to simulate ctrl+c use `c`. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). The -available variables are: - -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will be - blank! +<%= partial "partials/builders/boot-command" %> Example boot command. This is actually a working boot command used to start an Ubuntu 12.04 installer: diff --git a/website/source/docs/builders/virtualbox-ovf.html.md b/website/source/docs/builders/virtualbox-ovf.html.md.erb similarity index 80% rename from website/source/docs/builders/virtualbox-ovf.html.md rename to website/source/docs/builders/virtualbox-ovf.html.md.erb index d58101100..ae4317240 100644 --- a/website/source/docs/builders/virtualbox-ovf.html.md +++ b/website/source/docs/builders/virtualbox-ovf.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | This VirtualBox Packer builder is able to create VirtualBox virtual machines and export them in the OVF format, starting from an existing OVF/OVA (exported @@ -76,15 +78,15 @@ builder. - `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be - a duration. Examples are "5s" and "1m30s" which will cause Packer to wait + a duration. Examples are `5s` and `1m30s` which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't - specified, the default is 10 seconds. + specified, the default is `10s` or 10 seconds. - `checksum` (string) - The checksum for the OVA file. The type of the checksum is specified with `checksum_type`, documented below. - `checksum_type` (string) - The type of the checksum specified in `checksum`. - Valid values are "none", "md5", "sha1", "sha256", or "sha512". 
Although the + Valid values are `none`, `md5`, `sha1`, `sha256`, or `sha512`. Although the checksum will not be verified when `checksum_type` is set to "none", this is not recommended since OVA files can be very large and corruption does happen from time to time. @@ -131,6 +133,12 @@ builder. "packer_conf.json" ``` +- `floppy_dirs` (array of strings) - A list of directories to place onto the + floppy disk recursively. This is similar to the `floppy_files` option except + that the directory structure is preserved. This is useful for when your + floppy disk includes drivers or if you just want to organize it's contents + as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. + - `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on @@ -141,26 +149,20 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `floppy_dirs` (array of strings) - A list of directories to place onto the - floppy disk recursively. This is similar to the `floppy_files` option except - that the directory structure is preserved. This is useful for when your - floppy disk includes drivers or if you just want to organize it's contents - as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - -- `format` (string) - Either "ovf" or "ova", this specifies the output format - of the exported virtual machine. This defaults to "ovf". +- `format` (string) - Either `ovf` or `ova`, this specifies the output format + of the exported virtual machine. This defaults to `ovf`. - `guest_additions_mode` (string) - The method by which guest additions are - made available to the guest for installation. Valid options are "upload", - "attach", or "disable". If the mode is "attach" the guest additions ISO will - be attached as a CD device to the virtual machine. If the mode is "upload" + made available to the guest for installation. Valid options are `upload`, + `attach`, or `disable`. If the mode is `attach` the guest additions ISO will + be attached as a CD device to the virtual machine. If the mode is `upload` the guest additions ISO will be uploaded to the path specified by - `guest_additions_path`. The default value is "upload". If "disable" is used, + `guest_additions_path`. The default value is `upload`. If `disable` is used, guest additions won't be downloaded, either. - `guest_additions_path` (string) - The path on the guest virtual machine where the VirtualBox guest additions ISO will be uploaded. By default this - is "VBoxGuestAdditions.iso" which should upload into the login directory of + is `VBoxGuestAdditions.iso` which should upload into the login directory of the user. This is a [configuration template](/docs/templates/engine.html) where the `Version` variable is replaced with the VirtualBox version. @@ -177,30 +179,31 @@ builder. - `headless` (boolean) - Packer defaults to building VirtualBox virtual machines by launching a GUI that shows the console of the machine - being built. When this value is set to true, the machine will start without - a console. + being built. When this value is set to `true`, the machine will start + without a console. - `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. 
This is useful for hosting - kickstart files and so on. By default this is "", which means no HTTP server - will be started. The address and port of the HTTP server will be available - as variables in `boot_command`. This is covered in more detail below. + kickstart files and so on. By default this is an empty string, which means + no HTTP server will be started. The address and port of the HTTP server will + be available as variables in `boot_command`. This is covered in more detail + below. - `http_port_min` and `http_port_max` (number) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum - port the same. By default the values are 8000 and 9000, respectively. + port the same. By default the values are `8000` and `9000`, respectively. - `import_flags` (array of strings) - Additional flags to pass to `VBoxManage import`. This can be used to add additional command-line flags such as `--eula-accept` to accept a EULA in the OVF. - `import_opts` (string) - Additional options to pass to the - `VBoxManage import`. This can be useful for passing "keepallmacs" or - "keepnatmacs" options for existing ovf images. + `VBoxManage import`. This can be useful for passing `keepallmacs` or + `keepnatmacs` options for existing ovf images. - `keep_registered` (boolean) - Set this to `true` if you would like to keep the VM registered with virtualbox. Defaults to `false`. @@ -209,13 +212,13 @@ builder. resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running - the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the + the builder. By default this is `output-BUILDNAME` where "BUILDNAME" is the name of the build. - `post_shutdown_delay` (string) - The amount of time to wait after shutting down the virtual machine. If you get the error `Error removing floppy controller`, you might need to set this to `5m` - or so. By default, the delay is `0s`, or disabled. + or so. By default, the delay is `0s` or disabled. - `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty @@ -228,7 +231,7 @@ builder. - `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is - "5m", or five minutes. + `5m` or five minutes. - `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will not export the VM. Useful if the build output is not the resultant image, @@ -238,11 +241,11 @@ builder. maximum port to use for the SSH port on the host machine which is forwarded to the SSH port on the guest machine. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to use as the - host port. + host port. By default this is `2222` to `4444`. - `ssh_skip_nat_mapping` (boolean) - Defaults to false. When enabled, Packer does not setup forwarded port mapping for SSH requests and uses `ssh_port` - on the host to communicate to the virtual machine + on the host to communicate to the virtual machine. 
- `target_path` (string) - The path where the OVA should be saved after download. By default, it will go in the packer cache, with a hash of @@ -266,22 +269,23 @@ builder. - `virtualbox_version_file` (string) - The path within the virtual machine to upload a file that contains the VirtualBox version that was used to create the machine. This information can be useful for provisioning. By default - this is ".vbox\_version", which will generally be upload it into the + this is `.vbox_version`, which will generally be upload it into the home directory. Set to an empty string to skip uploading this file, which can be useful when using the `none` communicator. - `vm_name` (string) - This is the name of the virtual machine when it is imported as well as the name of the OVF file when the virtual machine - is exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the + is exported. By default this is `packer-BUILDNAME`, where "BUILDNAME" is the name of the build. - `vrdp_bind_address` (string / IP address) - The IP address that should be - binded to for VRDP. By default packer will use 127.0.0.1 for this. + binded to for VRDP. By default packer will use `127.0.0.1` for this. If you + wish to bind to all interfaces use `0.0.0.0`. - `vrdp_port_min` and `vrdp_port_max` (number) - The minimum and maximum port to use for VRDP access to the virtual machine. Packer uses a randomly chosen - port in this range that appears available. By default this is 5900 to 6000. - The minimum and maximum ports are inclusive. + port in this range that appears available. By default this is `5900` to + `6000`. The minimum and maximum ports are inclusive. ## Boot Command @@ -293,65 +297,13 @@ As documented above, the `boot_command` is an array of strings. The strings are all typed in sequence. It is an array only to improve readability within the template. -The boot command is "typed" character for character over a VNC connection to the -machine, simulating a human actually typing the keyboard. There are a set of -special keys available. If these are in your boot command, they will be replaced -by the proper key: +The boot command is sent to the VM through the `VBoxManage` utility in as few +invocations as possible. We send each character in groups of 25, with a default +delay of 100ms between groups. The delay alleviates issues with latency and CPU +contention. If you notice missing keys, you can tune this delay by specifying e.g. +`PACKER_KEY_INTERVAL=500ms` to wait longer between each group of characters. -- `` - Backspace - -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the - ctrl key. - -- `` `` - Simulates pressing and holding the - shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. 
- -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). The -available variables are: - -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will be - blank! +<%= partial "partials/builders/boot-command" %> Example boot command. This is actually a working boot command used to start an Ubuntu 12.04 installer: diff --git a/website/source/docs/builders/vmware-iso.html.md b/website/source/docs/builders/vmware-iso.html.md.erb similarity index 67% rename from website/source/docs/builders/vmware-iso.html.md rename to website/source/docs/builders/vmware-iso.html.md.erb index 343a685ef..b42259ee8 100644 --- a/website/source/docs/builders/vmware-iso.html.md +++ b/website/source/docs/builders/vmware-iso.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | This VMware Packer builder is able to create VMware virtual machines from an ISO file as a source. It currently supports building virtual machines on hosts @@ -67,8 +69,8 @@ builder. over `iso_checksum_url` type. - `iso_checksum_type` (string) - The type of the checksum specified in - `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or - "sha512" currently. While "none" will skip checksumming, this is not + `iso_checksum`. Valid values are `none`, `md5`, `sha1`, `sha256`, or + `sha512` currently. While `none` will skip checksumming, this is not recommended since ISO files are generally large and corruption does happen from time to time. @@ -92,9 +94,29 @@ builder. - `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be - a duration. Examples are "5s" and "1m30s" which will cause Packer to wait + a duration. Examples are `5s` and `1m30s` which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't - specified, the default is 10 seconds. + specified, the default is `10s` or 10 seconds. + +- `cdrom_adapter_type` (string) - The adapter type (or bus) that will be used + by the cdrom device. This is chosen by default based on the disk adapter + type. VMware tends to lean towards `ide` for the cdrom device unless + `sata` is chosen for the disk adapter and so Packer attempts to mirror + this logic. This field can be specified as either `ide`, `sata`, or `scsi`. + +- `disable_vnc` (boolean) - Whether to create a VNC connection or not. + A `boot_command` cannot be used when this is `true`. Defaults to `false`. + +- `disk_adapter_type` (string) - The adapter type of the VMware virtual disk + to create. This option is for advanced usage, modify only if you know what + you're doing. Some of the options you can specify are `ide`, `sata`, `nvme` + or `scsi` (which uses the "lsilogic" scsi interface by default). If you + specify another option, Packer will assume that you're specifying a `scsi` + interface of that specified type. For more information, please consult the + + Virtual Disk Manager User's Guide for desktop VMware clients. + For ESXi, refer to the proper ESXi documentation. 
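The interplay between the disk and cdrom adapter types described above is easiest to see as a fragment. This is only a sketch of the two keys in isolation; the values are illustrative and the rest of the builder configuration is omitted.

``` json
{
  "disk_adapter_type": "sata",
  "cdrom_adapter_type": "sata"
}
```

With `disk_adapter_type` set to `sata` the cdrom device would default to `sata` anyway, so spelling out `cdrom_adapter_type` here is purely for clarity.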
- `disk_additional_size` (array of integers) - The size(s) of any additional hard disks for the VM in megabytes. If this is not specified then the VM @@ -105,18 +127,37 @@ builder. - `disk_size` (number) - The size of the hard disk for the VM in megabytes. The builder uses expandable, not fixed-size virtual hard disks, so the actual file representing the disk will not use the full size unless it - is full. By default this is set to 40,000 (about 40 GB). + is full. By default this is set to `40000` (about 40 GB). -- `disk_type_id` (string) - The type of VMware virtual disk to create. The - default is "1", which corresponds to a growable virtual disk split in 2GB - files. For ESXi, this defaults to "zeroedthick". This option is for - advanced usage. For more information, please consult the [Virtual Disk - Manager User's Guide](https://www.vmware.com/pdf/VirtualDiskManager.pdf) - for desktop VMware clients. For ESXi, refer to the proper ESXi - documentation. +- `disk_type_id` (string) - The type of VMware virtual disk to create. This + option is for advanced usage. -* `disable_vnc` (boolean) - Whether to create a VNC connection or not. - A `boot_command` cannot be used when this is `false`. Defaults to `false`. + For desktop VMware clients: + + Type ID | Description + --- | --- + `0` | Growable virtual disk contained in a single file (monolithic sparse). + `1` | Growable virtual disk split into 2GB files (split sparse). + `2` | Preallocated virtual disk contained in a single file (monolithic flat). + `3` | Preallocated virtual disk split into 2GB files (split flat). + `4` | Preallocated virtual disk compatible with ESX server (VMFS flat). + `5` | Compressed disk optimized for streaming. + + The default is `1`. + + For ESXi, this defaults to `zeroedthick`. The available options for ESXi + are: `zeroedthick`, `eagerzeroedthick`, `thin`, `rdm:dev`, `rdmp:dev`, + `2gbsparse`. + + For more information, please consult the [Virtual Disk Manager User's + Guide](https://www.vmware.com/pdf/VirtualDiskManager.pdf) for desktop + VMware clients. For ESXi, refer to the proper ESXi documentation. + +- `floppy_dirs` (array of strings) - A list of directories to place onto + the floppy disk recursively. This is similar to the `floppy_files` option + except that the directory structure is preserved. This is useful for when + your floppy disk includes drivers or if you just want to organize it's + contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for @@ -128,44 +169,39 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `floppy_dirs` (array of strings) - A list of directories to place onto - the floppy disk recursively. This is similar to the `floppy_files` option - except that the directory structure is preserved. This is useful for when - your floppy disk includes drivers or if you just want to organize it's - contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - - `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is - "/Applications/VMware Fusion.app" but this setting allows you to + `/Applications/VMware Fusion.app` but this setting allows you to customize this. - `guest_os_type` (string) - The guest OS type being installed. This will be - set in the VMware VMX. By default this is "other". 
By specifying a more + set in the VMware VMX. By default this is `other`. By specifying a more specific OS type, VMware may perform some optimizations or virtual hardware changes to better support the operating system running in the virtual machine. - `headless` (boolean) - Packer defaults to building VMware virtual machines by launching a GUI that shows the console of the machine being built. When - this value is set to true, the machine will start without a console. For + this value is set to `true`, the machine will start without a console. For VMware machines, Packer will output VNC connection information in case you need to connect to the console to debug the build process. - `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting - kickstart files and so on. By default this is "", which means no HTTP server - will be started. The address and port of the HTTP server will be available - as variables in `boot_command`. This is covered in more detail below. + kickstart files and so on. By default this is an empty string, which means + no HTTP server will be started. The address and port of the HTTP server will + be available as variables in `boot_command`. This is covered in more detail + below. - `http_port_min` and `http_port_max` (number) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum - port the same. By default the values are 8000 and 9000, respectively. + port the same. By default the values are `8000` and `9000`, respectively. - `iso_target_extension` (string) - The extension of the iso file after - download. This defaults to "iso". + download. This defaults to `iso`. - `iso_target_path` (string) - The path where the iso should be saved after download. By default will go in the packer cache, with a hash of the @@ -177,13 +213,40 @@ builder. URLs must point to the same file (same checksum). By default this is empty and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified. +- `network` (string) - This is the network type that the virtual machine will + be created with. This can be one of the generic values that map to a device + such as `hostonly`, `nat`, or `bridged`. If the network is not one of these + values, then it is assumed to be a VMware network device. (VMnet0..x) + +- `network_adapter_type` (string) - This is the ethernet adapter type the the + virtual machine will be created with. By default the `e1000` network adapter + type will be used by Packer. For more information, please consult the + + Choosing a network adapter for your virtual machine for desktop VMware + clients. For ESXi, refer to the proper ESXi documentation. + - `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running - the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the + the builder. By default this is `output-BUILDNAME` where "BUILDNAME" is the name of the build. 
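The HTTP server options above are easier to read as a fragment. This sketch assumes a local `http` directory containing a kickstart or preseed file; the port number is arbitrary and only illustrates pinning the server to a single port by making the minimum and maximum equal.

``` json
{
  "http_directory": "http",
  "http_port_min": 8500,
  "http_port_max": 8500
}
```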
+- `parallel` (string) - This specifies a parallel port to add to the VM. It + has the format of `Type:option1,option2,...`. Type can be one of the + following values: `FILE`, `DEVICE`, `AUTO`, or `NONE`. + + * `FILE:path` - Specifies the path to the local file to be used for the + parallel port. + * `DEVICE:path` - Specifies the path to the local device to be used for the + parallel port. + * `AUTO:direction` - Specifies to use auto-detection to determine the + parallel port. Direction can be `BI` to specify + bidirectional communication or `UNI` to specify + unidirectional communication. + * `NONE` - Specifies to not use a parallel port. (default) + - `remote_cache_datastore` (string) - The path to the datastore where supporting files will be stored during the build on the remote machine. By default this is the same as the `remote_datastore` option. This only has an @@ -192,12 +255,12 @@ builder. - `remote_cache_directory` (string) - The path where the ISO and/or floppy files will be stored during the build on the remote machine. The path is relative to the `remote_cache_datastore` on the remote machine. By default - this is "packer\_cache". This only has an effect if `remote_type` + this is `packer_cache`. This only has an effect if `remote_type` is enabled. - `remote_datastore` (string) - The path to the datastore where the resulting VM will be stored when it is built on the remote machine. By default this - is "datastore1". This only has an effect if `remote_type` is enabled. + is `datastore1`. This only has an effect if `remote_type` is enabled. - `remote_host` (string) - The host of the remote machine used for access. This is only required if `remote_type` is enabled. @@ -212,12 +275,47 @@ builder. - `remote_type` (string) - The type of remote machine that will be used to build this VM rather than a local desktop product. The only value accepted - for this currently is "esx5". If this is not set, a desktop product will + for this currently is `esx5`. If this is not set, a desktop product will be used. By default, this is not set. - `remote_username` (string) - The username for the SSH user that will access the remote machine. This is required if `remote_type` is enabled. +- `serial` (string) - This specifies a serial port to add to the VM. + It has a format of `Type:option1,option2,...`. The field `Type` can be one + of the following values: `FILE`, `DEVICE`, `PIPE`, `AUTO`, or `NONE`. + + * `FILE:path(,yield)` - Specifies the path to the local file to be used as the + serial port. + * `yield` (bool) - This is an optional boolean that specifies whether + the vm should yield the cpu when polling the port. + By default, the builder will assume this as `FALSE`. + * `DEVICE:path(,yield)` - Specifies the path to the local device to be used + as the serial port. If `path` is empty, then + default to the first serial port. + * `yield` (bool) - This is an optional boolean that specifies whether + the vm should yield the cpu when polling the port. + By default, the builder will assume this as `FALSE`. + * `PIPE:path,endpoint,host(,yield)` - Specifies to use the named-pipe "path" + as a serial port. This has a few + options that determine how the VM + should use the named-pipe. + * `endpoint` (string) - Chooses the type of the VM-end, which can be + either a `client` or `server`. + * `host` (string) - Chooses the type of the host-end, which can be either + an `app` (application) or `vm` (another virtual-machine). 
+ * `yield` (bool) - This is an optional boolean that specifies whether + the vm should yield the cpu when polling the port. + By default, the builder will assume this as `FALSE`. + + * `AUTO:(yield)` - Specifies to use auto-detection to determine the serial + port to use. This has one option to determine how the VM + should support the serial port. + * `yield` (bool) - This is an optional boolean that specifies whether + the vm should yield the cpu when polling the port. + By default, the builder will assume this as `FALSE`. + * `NONE` - Specifies to not use a serial port. (default) + - `shutdown_command` (string) - The command to use to gracefully shut down the machine once all the provisioning is done. By default this is an empty string, which tells Packer to just forcefully shut down the machine. @@ -225,7 +323,7 @@ builder. - `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is - "5m", or five minutes. + `5m` or five minutes. - `skip_compaction` (boolean) - VMware-created disks are defragmented and compacted at the end of the build process using `vmware-vdiskmanager`. In @@ -234,8 +332,13 @@ builder. using this configuration value. Defaults to `false`. - `skip_export` (boolean) - Defaults to `false`. When enabled, Packer will - not export the VM. Useful if the build output is not the resultant image, - but created inside the VM. + not export the VM. Useful if the build output is not the resultant + image, but created inside the VM. + Currently, exporting the build VM is only supported when building on + ESXi e.g. when `remote_type` is set to `esx5`. See the [Building on a + Remote vSphere + Hypervisor](/docs/builders/vmware-iso.html#building-on-a-remote-vsphere-hypervisor) + section below for more info. - `keep_registered` (boolean) - Set this to `true` if you would like to keep the VM registered with the remote ESXi server. This is convenient if you @@ -247,9 +350,16 @@ builder. during export. Each item in the array is a new argument. The options `--noSSLVerify`, `--skipManifestCheck`, and `--targetType` are reserved, and should not be passed to this argument. + Currently, exporting the build VM (with ovftool) is only supported when + building on ESXi e.g. when `remote_type` is set to `esx5`. See the + [Building on a Remote vSphere + Hypervisor](/docs/builders/vmware-iso.html#building-on-a-remote-vsphere-hypervisor) + section below for more info. + +- `sound` (boolean) - Enable VMware's virtual soundcard device for the VM. - `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to - upload into the VM. Valid values are "darwin", "linux", and "windows". By + upload into the VM. Valid values are `darwin`, `linux`, and `windows`. By default, this is empty, which means VMware tools won't be uploaded. - `tools_upload_path` (string) - The path in the VM to upload the @@ -258,19 +368,23 @@ builder. template](/docs/templates/engine.html) that has a single valid variable: `Flavor`, which will be the value of `tools_upload_flavor`. By default the upload path is set to `{{.Flavor}}.iso`. This setting is not - used when `remote_type` is "esx5". + used when `remote_type` is `esx5`. + +- `usb` (boolean) - Enable VMware's USB bus for the guest VM. To enable usage + of the XHCI bus for USB 3 (5 Gbit/s), one can use the `vmx_data` option to + enable it by specifying `true` for the `usb_xhci.present` property. 
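As a sketch of how the `usb` option and `vmx_data` interact per the description above, the fragment below enables the USB bus and turns on the XHCI controller; the optional serial line follows the `FILE:path` form documented above, and its path is a placeholder.

``` json
{
  "usb": true,
  "serial": "FILE:/tmp/packer-serial.log",
  "vmx_data": {
    "usb_xhci.present": "true"
  }
}
```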
- `version` (string) - The [vmx hardware version](http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003746) for the new virtual machine. Only the default value has been tested, any - other value is experimental. Default value is '9'. + other value is experimental. Default value is `9`. - `vm_name` (string) - This is the name of the VMX file for the new virtual - machine, without the file extension. By default this is "packer-BUILDNAME", + machine, without the file extension. By default this is `packer-BUILDNAME`, where "BUILDNAME" is the name of the build. - `vmdk_name` (string) - The filename of the virtual disk that'll be created, - without the extension. This defaults to "packer". + without the extension. This defaults to `packer`. - `vmx_data` (object of key/value strings) - Arbitrary key/values to enter into the virtual machine VMX file. This is for advanced users who want to @@ -280,10 +394,11 @@ builder. except that it is run after the virtual machine is shutdown, and before the virtual machine is exported. -- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces from - the VMX file after building. This is for advanced users who understand the - ramifications, but is useful for building Vagrant boxes since Vagrant will - create ethernet interfaces when provisioning a box. +- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces + from the VMX file after building. This is for advanced users who understand + the ramifications, but is useful for building Vagrant boxes since Vagrant + will create ethernet interfaces when provisioning a box. Defaults to + `false`. - `vmx_template_path` (string) - Path to a [configuration template](/docs/templates/engine.html) that defines the @@ -292,18 +407,21 @@ builder. below for more information. For basic VMX modifications, try `vmx_data` first. -- `vnc_bind_address` (string / IP address) - The IP address that should be binded - to for VNC. By default packer will use 127.0.0.1 for this. If you wish to bind - to all interfaces use 0.0.0.0 +- `vnc_bind_address` (string / IP address) - The IP address that should be + binded to for VNC. By default packer will use `127.0.0.1` for this. If you + wish to bind to all interfaces use `0.0.0.0`. -- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is - used to secure the VNC communication with the VM. +- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that + is used to secure the VNC communication with the VM. This must be set to + `true` if building on ESXi 6.5 and 6.7 with VNC enabled. Defaults to + `false`. - `vnc_port_min` and `vnc_port_max` (number) - The minimum and maximum port to use for VNC access to the virtual machine. The builder uses VNC to type the initial `boot_command`. Because Packer generally runs in parallel, Packer uses a randomly chosen port in this range that appears available. By - default this is 5900 to 6000. The minimum and maximum ports are inclusive. + default this is `5900` to `6000`. The minimum and maximum ports are + inclusive. ## Boot Command @@ -324,67 +442,13 @@ default 100ms delay. The delay alleviates issues with latency and CPU contention. For local builds you can tune this delay by specifying e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command. -There are a set of special keys available. 
If these are in your boot -command, they will be replaced by the proper key: +<%= partial "partials/builders/boot-command" %> -- `` - Backspace - -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the ctrl key. - -- `` `` - Simulates pressing and holding the shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -When using modifier keys `ctrl`, `alt`, `shift` ensure that you release them, -otherwise they will be held down until the machine reboots. Use lowercase -characters as well inside modifiers. - -For example: to simulate ctrl+c use `c`. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). The -available variables are: - -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will be - blank! +-> **Note**: for the `HTTPIP` to be resolved correctly, your VM's network +configuration has to include a `hostonly` or `nat` type network interface. +If you are using this feature, it is recommended to leave the default network +configuration while you are building the VM, and use the `vmx_data_post` hook +to modify the network configuration after the VM is done building. Example boot command. This is actually a working boot command used to start an Ubuntu 12.04 installer: @@ -487,7 +551,12 @@ modify as well: - `format` (string) - Either "ovf", "ova" or "vmx", this specifies the output format of the exported virtual machine. This defaults to "ovf". Before using this option, you need to install `ovftool`. This option - works currently only with option remote_type set to "esx5". + currently only works when option remote_type is set to "esx5". + Since ovftool is only capable of password based authentication + `remote_password` must be set when exporting the VM. + +- `vnc_disable_password` - This must be set to "true" when using VNC with + ESXi 6.5 or 6.7. 
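Pulling the remote-build and export options together, here is a hedged sketch of the fragment relevant to exporting from an ESXi 6.5 or 6.7 host. The host name, credentials, and datastore are placeholders; `remote_password` is included because ovftool only supports password authentication, and `vnc_disable_password` is set as required on these ESXi versions.

``` json
{
  "remote_type": "esx5",
  "remote_host": "esxi.example.com",
  "remote_username": "root",
  "remote_password": "REMOTE_PASSWORD_HERE",
  "remote_datastore": "datastore1",
  "format": "ova",
  "vnc_disable_password": true
}
```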
### VNC port discovery diff --git a/website/source/docs/builders/vmware-vmx.html.md b/website/source/docs/builders/vmware-vmx.html.md.erb similarity index 73% rename from website/source/docs/builders/vmware-vmx.html.md rename to website/source/docs/builders/vmware-vmx.html.md.erb index ae5bfd29f..5ce8ef4e3 100644 --- a/website/source/docs/builders/vmware-vmx.html.md +++ b/website/source/docs/builders/vmware-vmx.html.md.erb @@ -1,4 +1,6 @@ --- +modeline: | + vim: set ft=pandoc: description: | This VMware Packer builder is able to create VMware virtual machines from an existing VMware virtual machine (a VMX file). It currently supports building @@ -67,13 +69,19 @@ builder. - `boot_wait` (string) - The time to wait after booting the initial virtual machine before typing the `boot_command`. The value of this should be - a duration. Examples are "5s" and "1m30s" which will cause Packer to wait + a duration. Examples are `5s` and `1m30s` which will cause Packer to wait five seconds and one minute 30 seconds, respectively. If this isn't - specified, the default is 10 seconds. + specified, the default is `10s` or 10 seconds. * `disable_vnc` (boolean) - Whether to create a VNC connection or not. A `boot_command` cannot be used when this is `false`. Defaults to `false`. +- `floppy_dirs` (array of strings) - A list of directories to place onto + the floppy disk recursively. This is similar to the `floppy_files` option + except that the directory structure is preserved. This is useful for when + your floppy disk includes drivers or if you just want to organize it's + contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. + - `floppy_files` (array of strings) - A list of files to place onto a floppy disk that is attached when the VM is booted. This is most useful for unattended Windows installs, which look for an `Autounattend.xml` file on @@ -84,41 +92,36 @@ builder. and \[\]) are allowed. Directory names are also allowed, which will add all the files found in the directory to the floppy. -- `floppy_dirs` (array of strings) - A list of directories to place onto - the floppy disk recursively. This is similar to the `floppy_files` option - except that the directory structure is preserved. This is useful for when - your floppy disk includes drivers or if you just want to organize it's - contents as a hierarchy. Wildcard characters (\*, ?, and \[\]) are allowed. - - `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is - "/Applications/VMware Fusion.app" but this setting allows you to + `/Applications/VMware Fusion.app` but this setting allows you to customize this. - `headless` (boolean) - Packer defaults to building VMware virtual machines by launching a GUI that shows the console of the machine being built. When - this value is set to true, the machine will start without a console. For + this value is set to `true`, the machine will start without a console. For VMware machines, Packer will output VNC connection information in case you need to connect to the console to debug the build process. - `http_directory` (string) - Path to a directory to serve using an HTTP server. The files in this directory will be available over HTTP that will be requestable from the virtual machine. This is useful for hosting - kickstart files and so on. By default this is "", which means no HTTP server - will be started. The address and port of the HTTP server will be available - as variables in `boot_command`. This is covered in more detail below. + kickstart files and so on. 
By default this is an empty string, which means + no HTTP server will be started. The address and port of the HTTP server will + be available as variables in `boot_command`. This is covered in more detail + below.
- `http_port_min` and `http_port_max` (number) - These are the minimum and maximum port to use for the HTTP server started to serve the `http_directory`. Because Packer often runs in parallel, Packer will choose a randomly available port in this range to run the HTTP server. If you want to force the HTTP server to be on one port, make this minimum and maximum - port the same. By default the values are 8000 and 9000, respectively. + port the same. By default the values are `8000` and `9000`, respectively.
- `output_directory` (string) - This is the path to the directory where the resulting virtual machine will be created. This may be relative or absolute. If relative, the path is relative to the working directory when `packer` is executed. This directory must not exist or be empty prior to running - the builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the + the builder. By default this is `output-BUILDNAME` where "BUILDNAME" is the name of the build.
- `shutdown_command` (string) - The command to use to gracefully shut down the @@ -132,16 +135,30 @@ builder.
- `shutdown_timeout` (string) - The amount of time to wait after executing the `shutdown_command` for the virtual machine to actually shut down. If it doesn't shut down in this time, it is an error. By default, the timeout is - "5m", or five minutes. + `5m` or five minutes.
+ +- `linked` (boolean) - By default Packer creates a 'full' clone of + the virtual machine specified in `source_path`. The resultant virtual + machine is fully independent from the parent it was cloned from. + + Setting `linked` to `true` instead causes Packer to create the virtual + machine as a 'linked' clone. Linked clones use and require ongoing + access to the disks of the parent virtual machine. The benefit of a + linked clone is that the clone's virtual disk is typically very much + smaller than would be the case for a full clone. Additionally, the + cloned virtual machine can also be created much faster. Creating a + linked clone will typically only be of benefit in some advanced build + scenarios. Most users will wish to create a full clone instead. + Defaults to `false`.
- `skip_compaction` (boolean) - VMware-created disks are defragmented and compacted at the end of the build process using `vmware-vdiskmanager`. In certain rare cases, this might actually end up making the resulting disks slightly larger. If you find this to be the case, you can disable compaction - using this configuration value. + using this configuration value. Defaults to `false`.
- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to - upload into the VM. Valid values are "darwin", "linux", and "windows". By + upload into the VM. Valid values are `darwin`, `linux`, and `windows`. By default, this is empty, which means VMware tools won't be uploaded.
- `tools_upload_path` (string) - The path in the VM to upload the @@ -152,7 +169,7 @@ builder. By default the upload path is set to `{{.Flavor}}.iso`.
- `vm_name` (string) - This is the name of the VMX file for the new virtual - machine, without the file extension. By default this is "packer-BUILDNAME", + machine, without the file extension. By default this is `packer-BUILDNAME`, where "BUILDNAME" is the name of the build.
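A linked-clone build, as described in the `linked` option above, only needs the flag itself on top of the usual required fields. A minimal sketch, assuming the `vmware-vmx` builder type for this page and a placeholder `source_path`:

``` json
{
  "type": "vmware-vmx",
  "source_path": "/path/to/source.vmx",
  "linked": true,
  "vm_name": "packer-linked-clone"
}
```

Remember that a linked clone keeps depending on the parent's disks, so the source virtual machine has to remain available after the build.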
- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter @@ -163,22 +180,25 @@ builder. except that it is run after the virtual machine is shutdown, and before the virtual machine is exported. -- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces from - the VMX file after building. This is for advanced users who understand the - ramifications, but is useful for building Vagrant boxes since Vagrant will - create ethernet interfaces when provisioning a box. +- `vmx_remove_ethernet_interfaces` (boolean) - Remove all ethernet interfaces + from the VMX file after building. This is for advanced users who understand + the ramifications, but is useful for building Vagrant boxes since Vagrant + will create ethernet interfaces when provisioning a box. Defaults to + `false`. -- `vnc_bind_address` (string / IP address) - The IP address that should be binded - to for VNC. By default packer will use 127.0.0.1 for this. +- `vnc_bind_address` (string / IP address) - The IP address that should be + binded to for VNC. By default packer will use `127.0.0.1` for this. If you + wish to bind to all interfaces use `0.0.0.0`. -- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that is - used to secure the VNC communication with the VM. +- `vnc_disable_password` (boolean) - Don't auto-generate a VNC password that + is used to secure the VNC communication with the VM. - `vnc_port_min` and `vnc_port_max` (number) - The minimum and maximum port to use for VNC access to the virtual machine. The builder uses VNC to type the initial `boot_command`. Because Packer generally runs in parallel, Packer uses a randomly chosen port in this range that appears available. By - default this is 5900 to 6000. The minimum and maximum ports are inclusive. + default this is `5900` to `6000`. The minimum and maximum ports are + inclusive. ## Boot Command @@ -198,63 +218,7 @@ default 100ms delay. The delay alleviates issues with latency and CPU contention. For local builds you can tune this delay by specifying e.g. `PACKER_KEY_INTERVAL=10ms` to speed through the boot command. -There are a set of special keys available. If these are in your boot -command, they will be replaced by the proper key: - -- `` - Backspace - -- `` - Delete - -- `` and `` - Simulates an actual "enter" or "return" keypress. - -- `` - Simulates pressing the escape key. - -- `` - Simulates pressing the tab key. - -- `` - `` - Simulates pressing a function key. - -- `` `` `` `` - Simulates pressing an arrow key. - -- `` - Simulates pressing the spacebar. - -- `` - Simulates pressing the insert key. - -- `` `` - Simulates pressing the home and end keys. - -- `` `` - Simulates pressing the page up and page down keys. - -- `` `` - Simulates pressing the alt key. - -- `` `` - Simulates pressing the ctrl key. - -- `` `` - Simulates pressing the shift key. - -- `` `` - Simulates pressing and holding the alt key. - -- `` `` - Simulates pressing and holding the ctrl - key. - -- `` `` - Simulates pressing and holding the - shift key. - -- `` `` - Simulates releasing a held alt key. - -- `` `` - Simulates releasing a held ctrl key. - -- `` `` - Simulates releasing a held shift key. - -- `` `` `` - Adds a 1, 5 or 10 second pause before - sending any additional keys. This is useful if you have to generally wait - for the UI to update before typing more. - -In addition to the special keys, each command to type is treated as a -[template engine](/docs/templates/engine.html). 
The -available variables are: - -- `HTTPIP` and `HTTPPort` - The IP and port, respectively of an HTTP server - that is started serving the directory specified by the `http_directory` - configuration parameter. If `http_directory` isn't specified, these will be - blank! +<%= partial "partials/builders/boot-command" %> Example boot command. This is actually a working boot command used to start an Ubuntu 12.04 installer: diff --git a/website/source/docs/commands/build.html.md b/website/source/docs/commands/build.html.md index 7d331159e..575f9ed84 100644 --- a/website/source/docs/commands/build.html.md +++ b/website/source/docs/commands/build.html.md @@ -49,3 +49,8 @@ that are created will be outputted at the end of the build. - `-parallel=false` - Disable parallelization of multiple builders (on by default). + +- `-var` - Set a variable in your packer template. This option can be used + multiple times. This is useful for setting version numbers for your build. + +- `-var-file` - Set template variables from a file. diff --git a/website/source/docs/commands/fix.html.md b/website/source/docs/commands/fix.html.md index 1793d3275..428ad0e11 100644 --- a/website/source/docs/commands/fix.html.md +++ b/website/source/docs/commands/fix.html.md @@ -35,3 +35,7 @@ human readability. The full list of fixes that the fix command performs is visible in the help output, which can be seen via `packer fix -h`. + +## Options + +- `-validate=false` - Disables validation of the fixed template. True by default. diff --git a/website/source/docs/commands/index.html.md b/website/source/docs/commands/index.html.md index 39060745f..843a2db43 100644 --- a/website/source/docs/commands/index.html.md +++ b/website/source/docs/commands/index.html.md @@ -106,3 +106,18 @@ The set of machine-readable message types can be found in the documentation section. This section contains documentation on all the message types exposed by Packer core as well as all the components that ship with Packer by default. + +## Autocompletion + +The `packer` command features opt-in subcommand autocompletion that you can +enable for your shell with `packer -autocomplete-install`. After doing so, +you can invoke a new shell and use the feature. + +For example, assume a tab is typed at the end of each prompt line: + +``` +$ packer p +plugin push +$ packer push - +-name -sensitive -token -var -var-file +``` diff --git a/website/source/docs/commands/push.html.md b/website/source/docs/commands/push.html.md index c30eb1587..134b443e7 100644 --- a/website/source/docs/commands/push.html.md +++ b/website/source/docs/commands/push.html.md @@ -9,6 +9,12 @@ sidebar_current: 'docs-commands-push' # `push` Command +!> The Packer and Artifact Registry features of Atlas will no longer be +actively developed or maintained and will be fully decommissioned. +Please see our [guide on building immutable infrastructure with +Packer on CI/CD](/guides/packer-on-cicd/) for ideas on implementing these +features yourself. + The `packer push` command uploads a template and other required files to the Atlas service, which will run your packer build for you. [Learn more about Packer in Atlas.](https://atlas.hashicorp.com/help/packer/features) @@ -23,7 +29,7 @@ artifacts in Atlas. In order to do that you will also need to configure the [Atlas post-processor](/docs/post-processors/atlas.html). This is optional, and both the post-processor and push commands can be used independently. 
-!> The push command uploads your template and other files, like provisioning +~> The push command uploads your template and other files, like provisioning scripts, to Atlas. Take care not to upload files that you don't intend to, like secrets or large binaries. **If you have secrets in your Packer template, you should [move them into environment diff --git a/website/source/docs/commands/validate.html.md b/website/source/docs/commands/validate.html.md index d1102f2d1..aba07175a 100644 --- a/website/source/docs/commands/validate.html.md +++ b/website/source/docs/commands/validate.html.md @@ -32,3 +32,16 @@ Errors validating build 'vmware'. 1 error(s) occurred: - `-syntax-only` - Only the syntax of the template is checked. The configuration is not validated. + +- `-except=foo,bar,baz` - Builds all the builds except those with the given + comma-separated names. Build names by default are the names of their builders, + unless a specific `name` attribute is specified within the configuration. + +- `-only=foo,bar,baz` - Only build the builds with the given comma-separated + names. Build names by default are the names of their builders, unless a + specific `name` attribute is specified within the configuration. + +- `-var` - Set a variable in your packer template. This option can be used + multiple times. This is useful for setting version numbers for your build. + +- `-var-file` - Set template variables from a file. diff --git a/website/source/docs/extending/custom-builders.html.md b/website/source/docs/extending/custom-builders.html.md index 7cd31b161..3d421f35d 100644 --- a/website/source/docs/extending/custom-builders.html.md +++ b/website/source/docs/extending/custom-builders.html.md @@ -82,11 +82,11 @@ And `packer.Cache` is used to store files between multiple Packer runs, and is covered in more detail in the cache section below. Because builder runs are typically a complex set of many steps, the -[multistep](https://github.com/mitchellh/multistep) library is recommended to -bring order to the complexity. Multistep is a library which allows you to -separate your logic into multiple distinct "steps" and string them together. It -fully supports cancellation mid-step and so on. Please check it out, it is how -the built-in builders are all implemented. +[multistep](https://github.com/hashicorp/packer/blob/master/helper/multistep) +helper is recommended to bring order to the complexity. Multistep is a library +which allows you to separate your logic into multiple distinct "steps" and +string them together. It fully supports cancellation mid-step and so on. Please +check it out, it is how the built-in builders are all implemented. Finally, as a result of `Run`, an implementation of `packer.Artifact` should be returned. More details on creating a `packer.Artifact` are covered in the diff --git a/website/source/docs/extending/plugins.html.md b/website/source/docs/extending/plugins.html.md index 180a6e8ea..8d1b6b2a5 100644 --- a/website/source/docs/extending/plugins.html.md +++ b/website/source/docs/extending/plugins.html.md @@ -17,9 +17,9 @@ writing plugins that are simply distributed with Packer. For example, all the builders, provisioners, and more that ship with Packer are implemented as Plugins that are simply hardcoded to load with Packer. -This page will cover how to install and use plugins. If you're interested in -developing plugins, the documentation for that is available the [developing -plugins](/docs/extending/plugins.html) page. +This section will cover how to install and use plugins. 
If you're interested in +developing plugins, the documentation for that is available below, in the [developing +plugins](#developing-plugins) section. Because Packer is so young, there is no official listing of available Packer plugins. Plugins are best found via Google. Typically, searching "packer plugin @@ -32,11 +32,11 @@ Packer plugins are completely separate, standalone applications that the core of Packer starts and communicates with. These plugin applications aren't meant to be run manually. Instead, Packer core -executes these plugin applications in a certain way and communicates with them. -For example, the VMware builder is actually a standalone binary named -`packer-builder-vmware`. The next time you run a Packer build, look at your -process list and you should see a handful of `packer-` prefixed applications -running. +executes them as a sub-process, run as a sub-command (`packer plugin`) and +communicates with them. For example, the VMware builder is actually run as +`packer plugin packer-builder-vmware`. The next time you run a Packer build, +look at your process list and you should see a handful of `packer-` prefixed +applications running. ## Installing Plugins @@ -153,12 +153,10 @@ using standard installation procedures. The specifics of how to implement each type of interface are covered in the relevant subsections available in the navigation to the left. -~> **Lock your dependencies!** Unfortunately, Go's dependency management -story is fairly sad. There are various unofficial methods out there for locking -dependencies, and using one of them is highly recommended since the Packer -codebase will continue to improve, potentially breaking APIs along the way until -there is a stable release. By locking your dependencies, your plugins will -continue to work with the version of Packer you lock to. +~> **Lock your dependencies!** Using `govendor` is highly recommended since +the Packer codebase will continue to improve, potentially breaking APIs along +the way until there is a stable release. By locking your dependencies, your +plugins will continue to work with the version of Packer you lock to. ### Logging and Debugging diff --git a/website/source/docs/install/index.html.md b/website/source/docs/install/index.html.md index aefe52650..635ffb134 100644 --- a/website/source/docs/install/index.html.md +++ b/website/source/docs/install/index.html.md @@ -9,58 +9,5 @@ sidebar_current: 'docs-install' # Install Packer -Installing Packer is simple. There are two approaches to installing Packer: - -1. Using a [precompiled binary](#precompiled-binaries) - -2. Installing [from source](#compiling-from-source) - -Downloading a precompiled binary is easiest, and we provide downloads over TLS -along with SHA256 sums to verify the binary. We also distribute a PGP signature -with the SHA256 sums that can be verified. - -## Precompiled Binaries - -To install the precompiled binary, [download](/downloads.html) the appropriate -package for your system. Packer is currently packaged as a zip file. We do not -have any near term plans to provide system packages. - -Once the zip is downloaded, unzip it into any directory. The `packer` binary -inside is all that is necessary to run Packer (or `packer.exe` for Windows). Any -additional files, if any, aren't required to run Packer. - -Copy the binary to anywhere on your system. If you intend to access it from the -command-line, make sure to place it somewhere on your `PATH`. 
- -## Compiling from Source - -To compile from source, you will need [Go](https://golang.org) installed and -configured properly (including a `GOPATH` environment variable set), as well -as a copy of [`git`](https://www.git-scm.com/) in your `PATH`. - -1. Clone the Packer repository from GitHub into your `GOPATH`: - - ``` shell - $ mkdir -p $GOPATH/src/github.com/hashicorp && cd $_ - $ git clone https://github.com/hashicorp/packer.git - $ cd packer - ``` - -2. Build Packer for your current system and put the - binary in `./bin/` (relative to the git checkout). The `make dev` target is - just a shortcut that builds `packer` for only your local build environment (no - cross-compiled targets). - - ``` shell - $ make dev - ``` - -## Verifying the Installation - -To verify Packer is properly installed, run `packer -v` on your system. You -should see help output. If you are executing it from the command line, make sure -it is on your PATH or you may get an error about Packer not being found. - -``` shell -$ packer -v -``` +For detailed instructions on how to install Packer, see [this page](/intro/getting-started/install.html) in our +getting-started guide. \ No newline at end of file diff --git a/website/source/docs/other/debugging.html.md b/website/source/docs/other/debugging.html.md index b34f1485b..74f266c96 100644 --- a/website/source/docs/other/debugging.html.md +++ b/website/source/docs/other/debugging.html.md @@ -10,6 +10,9 @@ sidebar_current: 'docs-other-debugging' # Debugging Packer Builds +Using `packer build -on-error=ask` allows you to inspect failures and try out +solutions before restarting the build. + For remote builds with cloud providers like Amazon Web Services AMIs, debugging a Packer build can be eased greatly with `packer build -debug`. This disables parallelization and enables debug mode. @@ -28,7 +31,7 @@ ephemeral key will be deleted at the end of the packer run during cleanup. For a local builder, the SSH session initiated will be visible in the detail provided when `PACKER_LOG=1` environment variable is set prior to a build, and you can connect to the local machine using the userid and password defined -in the kickstart or preseed associated with initialzing the local VM. +in the kickstart or preseed associated with initializing the local VM. ### Windows diff --git a/website/source/docs/post-processors/amazon-import.html.md b/website/source/docs/post-processors/amazon-import.html.md index 0150d4a70..836a271c3 100644 --- a/website/source/docs/post-processors/amazon-import.html.md +++ b/website/source/docs/post-processors/amazon-import.html.md @@ -61,6 +61,10 @@ Optional: launch the imported AMI. By default no additional users other than the user importing the AMI has permission to launch it. +- `custom_endpoint_ec2` (string) - This option is useful if you use a cloud + provider whose API is compatible with aws EC2. Specify another endpoint + like this `https://ec2.custom.endpoint.com`. + - `license_type` (string) - The license type to be used for the Amazon Machine Image (AMI) after importing. Valid values: `AWS` or `BYOL` (default). For more details regarding licensing, see @@ -70,6 +74,13 @@ Optional: - `mfa_code` (string) - The MFA [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_Algorithm) code. This should probably be a user variable since it changes all the time. +- `profile` (string) - The profile to use in the shared credentials file for + AWS. 
See Amazon's documentation on [specifying + profiles](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-profiles) + for more details. + +- `role_name` (string) - The name of the role to use when not using the default role, 'vmimport' + - `s3_key_name` (string) - The name of the key in `s3_bucket_name` where the OVA file will be copied to for import. If not specified, this will default to "packer-import-{{timestamp}}.ova". This key (ie, the uploaded OVA) will @@ -78,6 +89,9 @@ Optional: - `skip_clean` (boolean) - Whether we should skip removing the OVA file uploaded to S3 after the import process has completed. "true" means that we should leave it in the S3 bucket, "false" means to clean it out. Defaults to `false`. +- `skip_region_validation` (boolean) - Set to true if you want to skip + validation of the region configuration option. Default `false`. + - `tags` (object of key/value strings) - Tags applied to the created AMI and relevant snapshots. @@ -104,6 +118,49 @@ Here is a basic example. This assumes that the builder has produced an OVA artif } ``` +## VMWare Example + +This is an example that uses `vmware-iso` builder and exports the `.ova` file using ovftool. + +``` json +"post-processors" : [ + [ + { + "type": "shell-local", + "inline": [ "/usr/bin/ovftool /.vmx /.ova" ] + }, + { + "files": [ + "/.ova" + ], + "type": "artifice" + }, + { + "type": "amazon-import", + "access_key": "YOUR KEY HERE", + "secret_key": "YOUR SECRET KEY HERE", + "region": "us-east-1", + "s3_bucket_name": "importbucket", + "license_type": "BYOL", + "tags": { + "Description": "packer amazon-import {{timestamp}}" + } + } + ] + ] +``` + +## Troubleshooting Timeouts +The amazon-import feature can take a long time to upload and convert your OVAs +into AMIs; if you find that your build is failing because you have exceeded your +max retries or find yourself being rate limited, you can override the max +retries and the delay in between retries by setting the environment variables + `AWS_MAX_ATTEMPTS` and `AWS_POLL_DELAY_SECONDS` on the machine running the + Packer build. By default, the waiter that waits for your image to be imported + from s3 waits retries up to 300 times with a 5 second delay in between retries. + This is dramatically higher than many of our other waiters, to account for how + long this process can take. + -> **Note:** Packer can also read the access key and secret access key from environmental variables. See the configuration reference in the section above for more information on what environmental variables Packer will look for. diff --git a/website/source/docs/post-processors/atlas.html.md b/website/source/docs/post-processors/atlas.html.md index b54f1e45a..8cf78b4fc 100644 --- a/website/source/docs/post-processors/atlas.html.md +++ b/website/source/docs/post-processors/atlas.html.md @@ -10,6 +10,12 @@ sidebar_current: 'docs-post-processors-atlas' # Atlas Post-Processor +!> The Packer and Artifact Registry features of Atlas will no longer be +actively developed or maintained and will be fully decommissioned. +Please see our [guide on building immutable infrastructure with +Packer on CI/CD](/guides/packer-on-cicd/) for ideas on implementing these +features yourself. 
+ Type: `atlas` The Atlas post-processor uploads artifacts from your packer builds to Atlas for diff --git a/website/source/docs/post-processors/googlecompute-import.html.md b/website/source/docs/post-processors/googlecompute-import.html.md new file mode 100644 index 000000000..9281c4c5e --- /dev/null +++ b/website/source/docs/post-processors/googlecompute-import.html.md @@ -0,0 +1,147 @@ +--- +description: | + The Google Compute Image Import post-processor takes a compressed raw disk + image and imports it to a GCE image available to Google Compute Engine. + +layout: docs +page_title: 'Google Compute Image Import - Post-Processors' +sidebar_current: 'docs-post-processors-googlecompute-import' +--- + +# Google Compute Image Import Post-Processor + +Type: `googlecompute-import` + +The Google Compute Image Import post-processor takes a compressed raw disk +image and imports it to a GCE image available to Google Compute Engine. + +~> This post-processor is for advanced users. Please ensure you read the [GCE import documentation](https://cloud.google.com/compute/docs/images/import-existing-image) before using this post-processor. + +## How Does it Work? + +The import process operates by uploading a temporary copy of the compressed raw disk image +to a GCS bucket, and calling an import task in GCP on the raw disk file. Once completed, a +GCE image is created containing the converted virtual machine. The temporary raw disk image +copy in GCS can be discarded after the import is complete. + +Google Cloud has very specific requirements for images being imported. Please see the +[GCE import documentation](https://cloud.google.com/compute/docs/images/import-existing-image) +for details. + +## Configuration + +### Required + +- `account_file` (string) - The JSON file containing your account credentials. + +- `bucket` (string) - The name of the GCS bucket where the raw disk image + will be uploaded. + +- `image_name` (string) - The unique name of the resulting image. + +- `project_id` (string) - The project ID where the GCS bucket exists and + where the GCE image is stored. + +### Optional + +- `gcs_object_name` (string) - The name of the GCS object in `bucket` where the RAW disk image will be copied for import. Defaults to "packer-import-{{timestamp}}.tar.gz". + +- `image_description` (string) - The description of the resulting image. + +- `image_family` (string) - The name of the image family to which the resulting image belongs. + +- `image_labels` (object of key/value strings) - Key/value pair labels to apply to the created image. + +- `keep_input_artifact` (boolean) - if true, do not delete the compressed RAW disk image. Defaults to false. + +- `skip_clean` (boolean) - Skip removing the TAR file uploaded to the GCS bucket after the import process has completed. "true" means that we should leave it in the GCS bucket, "false" means to clean it out. Defaults to `false`. + + +## Basic Example + +Here is a basic example. This assumes that the builder has produced an compressed +raw disk image artifact for us to work with, and that the GCS bucket has been created. + +``` json +{ + "type": "googlecompute-import", + "account_file": "account.json", + "project_id": "my-project", + "bucket": "my-bucket", + "image_name": "my-gce-image" +} + +``` + +## QEMU Builder Example + +Here is a complete example for building a Fedora 28 server GCE image. For this example +packer was run from a CentOS 7 server with KVM installed. The CentOS 7 server was running +in GCE with the nested hypervisor feature enabled. 
+ +``` +$ packer build -var serial=$(tty) build.json +``` + +``` json +{ + "variables": { + "serial": "" + }, + "builders": [ + { + "type": "qemu", + "accelerator": "kvm", + "communicator": "none", + "boot_command": [" console=ttyS0,115200n8 inst.text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/fedora-28-ks.cfg rd.live.check=0"], + "disk_size": "15000", + "format": "raw", + "iso_checksum_type": "sha256", + "iso_checksum": "ea1efdc692356b3346326f82e2f468903e8da59324fdee8b10eac4fea83f23fe", + "iso_url": "https://download-ib01.fedoraproject.org/pub/fedora/linux/releases/28/Server/x86_64/iso/Fedora-Server-netinst-x86_64-28-1.1.iso", + "headless": "true", + "http_directory": "http", + "http_port_max": "10089", + "http_port_min": "10082", + "output_directory": "output", + "shutdown_timeout": "30m", + "vm_name": "disk.raw", + "qemu_binary": "/usr/libexec/qemu-kvm", + "qemuargs": [ + [ + "-m", "1024" + ], + [ + "-cpu", "host" + ], + [ + "-chardev", "tty,id=pts,path={{user `serial`}}" + ], + [ + "-device", "isa-serial,chardev=pts" + ], + [ + "-device", "virtio-net,netdev=user.0" + ] + ] + } + ], + "post-processors": [ + [ + { + "type": "compress", + "output": "output/disk.raw.tar.gz" + }, + { + "type": "googlecompute-import", + "project_id": "my-project", + "account_file": "account.json", + "bucket": "my-bucket", + "image_name": "fedora28-server-{{timestamp}}", + "image_description": "Fedora 28 Server", + "image_family": "fedora28-server" + } + ] + ] +} +``` diff --git a/website/source/docs/post-processors/manifest.html.md b/website/source/docs/post-processors/manifest.html.md index 8ad53f451..4d5aaa398 100644 --- a/website/source/docs/post-processors/manifest.html.md +++ b/website/source/docs/post-processors/manifest.html.md @@ -67,7 +67,7 @@ An example manifest file looks like: If the build is run again, the new build artifacts will be added to the manifest file rather than replacing it. It is possible to grab specific build artifacts from the manifest by using `packer_run_uuid`. -The above mainfest was generated with this packer.json: +The above manifest was generated with this packer.json: ```json { diff --git a/website/source/docs/post-processors/shell-local.html.md b/website/source/docs/post-processors/shell-local.html.md index 74e21993d..ac8056407 100644 --- a/website/source/docs/post-processors/shell-local.html.md +++ b/website/source/docs/post-processors/shell-local.html.md @@ -13,7 +13,7 @@ Type: `shell-local` The local shell post processor executes scripts locally during the post processing stage. Shell local provides a convenient way to automate executing -some task with the packer outputs. +some task with packer outputs and variables. ## Basic example @@ -33,6 +33,9 @@ required element is either "inline" or "script". Every other option is optional. Exactly *one* of the following is required: +- `command` (string) - This is a single command to execute. It will be written + to a temporary file and run using the `execute_command` call below. + - `inline` (array of strings) - This is an array of commands to execute. The commands are concatenated by newlines and turned into a single file, so they are all executed within the same context. This allows you to change @@ -52,15 +55,34 @@ Exactly *one* of the following is required: Optional parameters: - `environment_vars` (array of strings) - An array of key/value pairs to - inject prior to the execute\_command. The format should be `key=value`. + inject prior to the `execute_command`. The format should be `key=value`. 
Packer injects some environmental variables by default into the environment, as well, which are covered in the section below. -- `execute_command` (string) - The command to use to execute the script. By - default this is `chmod +x "{{.Script}}"; {{.Vars}} "{{.Script}}"`. - The value of this is treated as [template engine](/docs/templates/engine.html). +- `execute_command` (array of strings) - The command used to execute the script. By + default this is `["/bin/sh", "-c", "{{.Vars}} {{.Script}}"]` + on unix and `["cmd", "/c", "{{.Vars}}", "{{.Script}}"]` on windows. + This is treated as a [template engine](/docs/templates/engine.html). There are two available variables: `Script`, which is the path to the script - to run, `Vars`, which is the list of `environment_vars`, if configured. + to run, and `Vars`, which is the list of `environment_vars`, if configured. + If you choose to set this option, make sure that the first element in the + array is the shell program you want to use (for example, "sh", + "/usr/local/bin/zsh", or even "powershell.exe"; anything other than + a flavor of the shell command language is not explicitly supported and may + be broken by assumptions made within Packer). Note that if you + use shell-local for PowerShell or other Windows commands, + the environment variables will not be set properly for your environment. + + For backwards compatibility, `execute_command` will accept a string instead + of an array of strings. If a single string or an array of strings with only + one element is provided, Packer will replicate past behavior by appending + your `execute_command` to the array of strings `["sh", "-c"]`. For example, + if you set `"execute_command": "foo bar"`, the final `execute_command` that + Packer runs will be `["sh", "-c", "foo bar"]`. If you set `"execute_command": ["foo", "bar"]`, + the final `execute_command` will remain `["foo", "bar"]`. + + Again, the above is only provided as a backwards compatibility fix; we + strongly recommend that you set `execute_command` as an array of strings. - `inline_shebang` (string) - The [shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use when @@ -69,20 +91,80 @@ Optional parameters: **Important:** If you customize this, be sure to include something like the `-e` flag, otherwise individual steps failing won't fail the provisioner. -## Execute Command Example +- `use_linux_pathing` (bool) - This is only relevant to Windows hosts. If you + are running Packer in a Windows environment with the Windows Subsystem for + Linux feature enabled, and would like to invoke a bash script rather than + invoking a Cmd script, you'll need to set this flag to true; it tells Packer + to use the Linux subsystem path for your script rather than the Windows path. + (e.g. /mnt/c/path/to/your/file instead of C:/path/to/your/file). Please see + the example below for more guidance on how to use this feature. If you are + not on a Windows host, or you do not intend to use the shell-local + post-processor to run a bash script, please ignore this option. + If you set this flag to true, you still need to provide the standard Windows + path to the script when providing a `script`. This is a beta feature. + +## Execute Command To many new users, the `execute_command` is puzzling. However, it provides an important function: customization of how the command is executed. The most common use case for this is dealing with **sudo password prompts**.
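For instance, here is a non-authoritative sketch of one way to handle that case; the user variable `local_sudo_password` and the script path are hypothetical, and echoing a password into a command has obvious security trade-offs:

``` json
{
  "type": "shell-local",
  "script": "scripts/cleanup.sh",
  "execute_command": [
    "/bin/sh", "-c",
    "echo '{{user `local_sudo_password`}}' | sudo -S env {{.Vars}} /bin/sh '{{.Script}}'"
  ]
}
```

Here `sudo -S` reads the password from stdin, and `env {{.Vars}}` passes any configured `environment_vars` through to the elevated script.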
You may also need to customize this if you use a non-POSIX shell, such as `tcsh` on FreeBSD. +### The Windows Linux Subsystem + +The shell-local post-processor was designed with the idea of allowing you to run +commands in your local operating system's native shell. For Windows, we've +assumed in our defaults that this is Cmd. However, it is possible to run a +bash script as part of the Windows Linux Subsystem from the shell-local +post-processor, by modifying the `execute_command` and the `use_linux_pathing` +options in the post-processor config. + +The example below is a fully functional test config. + +One limitation of this offering is that "inline" and "command" options are not +available to you; please limit yourself to using the "script" or "scripts" +options instead. + +Please note that this feature is still in beta, as the underlying WSL is also +still in beta. There will be some limitations as a result. For example, it will +likely not work unless both Packer and the scripts you want to run are both on +the C drive. + +``` +{ + "builders": [ + { + "type": "null", + "communicator": "none" + } + ], + "provisioners": [ + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest1"], + "execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"], + "use_linux_pathing": true, + "scripts": ["C:/Users/me/scripts/example_bash.sh"] + }, + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest2"], + "execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"], + "use_linux_pathing": true, + "script": "C:/Users/me/scripts/example_bash.sh" + } + ] +} +``` + ## Default Environmental Variables In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the + [name of the build](/docs/templates/builders.html#named-builds) that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. @@ -149,3 +231,106 @@ are cleaned up. For a shell script, that means the script **must** exit with a zero code. You *must* be extra careful to `exit 0` when necessary. 
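As a small illustration of the advice above (a sketch only; the scratch path is a placeholder), an inline shell-local post-processor can combine the default `PACKER_BUILD_NAME` variable with an explicit `exit 0` so that a non-critical cleanup step never fails the run:

``` json
{
  "type": "shell-local",
  "inline": [
    "rm -f /tmp/scratch-artifact || true",
    "echo \"post-processing finished for build: $PACKER_BUILD_NAME\"",
    "exit 0"
  ]
}
```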
+ + +## Usage Examples: + +Example of running a .cmd file on windows: + +``` + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest1"], + "scripts": ["./scripts/test_cmd.cmd"] + }, +``` + +Contents of "test_cmd.cmd": + +``` +echo %SHELLLOCALTEST% +``` + +Example of running an inline command on windows: +Required customization: tempfile_extension + +``` + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest2"], + "tempfile_extension": ".cmd", + "inline": ["echo %SHELLLOCALTEST%"] + }, +``` + +Example of running a bash command on windows using WSL: +Required customizations: use_linux_pathing and execute_command + +``` + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest3"], + "execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"], + "use_linux_pathing": true, + "script": "./scripts/example_bash.sh" + } +``` + +Contents of "example_bash.sh": + +``` +#!/bin/bash +echo $SHELLLOCALTEST +``` + +Example of running a powershell script on windows: +Required customizations: env_var_format and execute_command + +``` + + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest4"], + "execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"], + "env_var_format": "$env:%s=\"%s\"; ", + "script": "./scripts/example_ps.ps1" + } +``` + +Example of running a powershell script on windows as "inline": +Required customizations: env_var_format, tempfile_extension, and execute_command + +``` + { + "type": "shell-local", + "tempfile_extension": ".ps1", + "environment_vars": ["SHELLLOCALTEST=ShellTest5"], + "execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"], + "env_var_format": "$env:%s=\"%s\"; ", + "inline": ["write-output $env:SHELLLOCALTEST"] + } +``` + + +Example of running a bash script on linux: + +``` + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest1"], + "scripts": ["./scripts/example_bash.sh"] + } +``` + +Example of running a bash "inline" on linux: + +``` + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest2"], + "inline": ["echo hello", + "echo $PROVISIONERTEST"] + } +``` + + diff --git a/website/source/docs/post-processors/vagrant.html.md b/website/source/docs/post-processors/vagrant.html.md index 5f0a9bc50..bc6e581bd 100644 --- a/website/source/docs/post-processors/vagrant.html.md +++ b/website/source/docs/post-processors/vagrant.html.md @@ -32,11 +32,14 @@ providers. - AWS - DigitalOcean +- Google - Hyper-V +- LXC - Parallels - QEMU - VirtualBox - VMware +- Docker -> **Support for additional providers** is planned. If the Vagrant post-processor doesn't support creating boxes for a provider you care about, @@ -100,8 +103,19 @@ Specify overrides within the `override` configuration by provider name: In the example above, the compression level will be set to 1 except for VMware, where it will be set to 0. -The available provider names are: `aws`, `digitalocean`, `virtualbox`, `vmware`, -and `parallels`. +The available provider names are: + +- `aws` +- `digitalocean` +- `google` +- `hyperv` +- `parallels` +- `libvirt` +- `lxc` +- `scaleway` +- `virtualbox` +- `vmware` +- `docker` ## Input Artifacts @@ -112,3 +126,16 @@ it. Please see the [documentation on input artifacts](/docs/templates/post-processors.html#toc_2) for more information. + +### Docker + +Using a Docker input artifact will include a reference to the image in the +`Vagrantfile`. If the image tag is not specified in the post-processor, the +sha256 hash will be used. 
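For example, as a non-authoritative sketch (the repository name, tag, and box filename are placeholders, and it assumes the build produced a Docker image artifact, e.g. a `docker` builder with `commit: true`), tagging the image before the Vagrant post-processor runs gives the `Vagrantfile` a human-readable reference instead of a hash:

``` json
"post-processors": [
  [
    {
      "type": "docker-tag",
      "repository": "myrepo/myapp",
      "tag": "0.1.0"
    },
    {
      "type": "vagrant",
      "output": "myapp_docker.box"
    }
  ]
]
```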
+ +The following Docker input artifacts are supported: + + - `docker` builder with `commit: true`, always uses the sha256 hash + - `docker-import` + - `docker-tag` + - `docker-push` \ No newline at end of file diff --git a/website/source/docs/post-processors/vsphere-template.html.md b/website/source/docs/post-processors/vsphere-template.html.md index e72dc19cc..49e19b560 100644 --- a/website/source/docs/post-processors/vsphere-template.html.md +++ b/website/source/docs/post-processors/vsphere-template.html.md @@ -1,7 +1,7 @@ --- description: | - The Packer vSphere Template post-processor takes an artifact from the VMware-iso builder built on ESXi (i.e. remote) - and allows to mark a VM as a template and leaving it in a path of choice. + The Packer vSphere Template post-processor takes an artifact from the VMware-iso builder, built on ESXi (i.e. remote), + or an artifact from the vSphere post-processor, and allows you to mark a VM as a template, leaving it in a path of your choice. layout: docs page_title: 'vSphere Template - Post-Processors' sidebar_current: 'docs-post-processors-vSphere-template' @@ -11,22 +11,23 @@ sidebar_current: 'docs-post-processors-vSphere-template' Type: `vsphere-template` -The Packer vSphere template post-processor takes an artifact from the VMware-iso builder built on ESXi (i.e. remote) and -allows to mark a VM as a template and leaving it in a path of choice. +The Packer vSphere Template post-processor takes an artifact from the VMware-iso builder, built on ESXi (i.e. remote), +or an artifact from the [vSphere](/docs/post-processors/vsphere.html) post-processor, and allows you to mark a VM as a +template, leaving it in a path of your choice. ## Example An example is shown below, showing only the post-processor configuration: ``` json -{ + { "type": "vsphere-template", "host": "vcenter.local", "insecure": true, "username": "root", - "password": "secret", + "password": "secret", "datacenter": "mydatacenter", - "folder": "/packer-templates/os/distro-7" + "folder": "/packer-templates/os/distro-7" } ``` @@ -38,7 +39,7 @@ each category, the available configuration keys are alphabetized. Required: -- `host` (string) - The vSphere host that contains the VM built by the vmware-iso. +- `host` (string) - The vSphere host that contains the VM built by the vmware-iso. - `password` (string) - Password to use to authenticate to the vSphere endpoint. @@ -48,6 +49,34 @@ Optional: - `datacenter` (string) - If you have more than one, you will need to specify which one the ESXi used. -- `folder` (string) - Target path where the template will be created. +- `folder` (string) - Target path where the template will be created. - `insecure` (boolean) - If it's true skip verification of server certificate. Default is false + +## Using the vSphere Template with local builders + +Once the [vSphere](/docs/post-processors/vsphere.html) post-processor takes an artifact from the VMware builder and uploads it +to a vSphere endpoint, you will likely want to mark that VM as a template. Packer can do this for you automatically +using a sequence definition (a collection of post-processors that are treated as a single pipeline, see +[Post-Processors](/docs/templates/post-processors.html) for more information): + +``` json +{ + "post-processors": [ + [ + { + "type": "vsphere", + ... + }, + { + "type": "vsphere-template", + ...
+ } + ] + ] +} +``` + +In the example above, the result of each builder is passed through the defined sequence of post-processors starting +with the `vsphere` post-processor which will upload the artifact to a vSphere endpoint. The resulting artifact is then +passed on to the `vsphere-template` post-processor which handles marking a VM as a template. diff --git a/website/source/docs/provisioners/ansible-local.html.md b/website/source/docs/provisioners/ansible-local.html.md index 92d46fb3d..37d3d3826 100644 --- a/website/source/docs/provisioners/ansible-local.html.md +++ b/website/source/docs/provisioners/ansible-local.html.md @@ -1,8 +1,10 @@ --- description: | - The ansible-local Packer provisioner configures Ansible to run on the - machine by Packer from local Playbook and Role files. Playbooks and Roles can - be uploaded from your local machine to the remote machine. + The ansible-local Packer provisioner will run ansible in ansible's "local" + mode on the remote/guest VM using Playbook and Role files that exist on the + guest VM. This means ansible must be installed on the remote/guest VM. + Playbooks and Roles can be uploaded from your build machine + (the one running Packer) to the vm. layout: docs page_title: 'Ansible Local - Provisioners' sidebar_current: 'docs-provisioners-ansible-local' @@ -12,15 +14,17 @@ sidebar_current: 'docs-provisioners-ansible-local' Type: `ansible-local` -The `ansible-local` Packer provisioner configures Ansible to run on the machine -by Packer from local Playbook and Role files. Playbooks and Roles can be -uploaded from your local machine to the remote machine. Ansible is run in [local -mode](https://docs.ansible.com/ansible/playbooks_delegation.html#local-playbooks) via the +The `ansible-local` Packer provisioner will run ansible in ansible's "local" + mode on the remote/guest VM using Playbook and Role files that exist on the + guest VM. This means ansible must be installed on the remote/guest VM. + Playbooks and Roles can be uploaded from your build machine + (the one running Packer) to the vm. Ansible is then run on the guest machine + in [local mode](https://docs.ansible.com/ansible/playbooks_delegation.html#local-playbooks) via the `ansible-playbook` command. -> **Note:** Ansible will *not* be installed automatically by this provisioner. This provisioner expects that Ansible is already installed on the -machine. It is common practice to use the [shell +guest/remote machine. It is common practice to use the [shell provisioner](/docs/provisioners/shell.html) before the Ansible provisioner to do this. @@ -43,7 +47,12 @@ Required: - `playbook_file` (string) - The playbook file to be executed by ansible. This file must exist on your local system and will be uploaded to the - remote machine. + remote machine. This option is exclusive with `playbook_files`. + +- `playbook_files` (array of strings) - The playbook files to be executed by ansible. + These files must exist on your local system. If the files don't exist in the `playbook_dir` + or you don't set `playbook_dir` they will be uploaded to the remote machine. This option + is exclusive with `playbook_file`. Optional: @@ -114,7 +123,7 @@ chi-appservers cli](http://docs.ansible.com/ansible/galaxy.html#the-ansible-galaxy-command-line-tool) on the remote machine. By default, this is empty. -- `galaxycommand` (string) - The command to invoke ansible-galaxy. +- `galaxycommand` (string) - The command to invoke ansible-galaxy. By default, this is ansible-galaxy. 
- `group_vars` (string) - a path to the directory containing ansible group @@ -140,6 +149,10 @@ chi-appservers are not correct, use a shell provisioner prior to this to configure it properly. +- `clean_staging_directory` (boolean) - If set to `true`, the content of + the `staging_directory` will be removed after executing ansible. By + default, this is set to `false`. + ## Default Extra Variables In addition to being able to specify extra arguments using the diff --git a/website/source/docs/provisioners/ansible.html.md b/website/source/docs/provisioners/ansible.html.md index 5ea4de0f2..890cc00b0 100644 --- a/website/source/docs/provisioners/ansible.html.md +++ b/website/source/docs/provisioners/ansible.html.md @@ -14,9 +14,10 @@ Type: `ansible` The `ansible` Packer provisioner runs Ansible playbooks. It dynamically creates an Ansible inventory file configured to use SSH, runs an SSH server, executes `ansible-playbook`, and marshals Ansible plays through the SSH server to the -machine being provisioned by Packer. Note, this means that any `remote_user` -defined in tasks will be ignored. Packer will always connect with the user -given in the json config. +machine being provisioned by Packer. + +-> **Note:**: Any `remote_user` defined in tasks will be ignored. Packer will +always connect with the user given in the json config for this provisioner. ## Basic Example @@ -60,6 +61,15 @@ Optional Parameters: "ansible_env_vars": [ "ANSIBLE_HOST_KEY_CHECKING=False", "ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s'", "ANSIBLE_NOCOLOR=True" ] } ``` + If you are running a Windows build on AWS, Azure or Google Compute and would + like to access the auto-generated password that Packer uses to connect to a + Windows instance via WinRM, you can use the template variable + {{.WinRMPassword}} in this option. + For example: + + ```json + "ansible_env_vars": [ "WINRM_PASSWORD={{.WinRMPassword}}" ], + ``` - `command` (string) - The command to invoke ansible. Defaults to `ansible-playbook`. @@ -71,18 +81,35 @@ Optional Parameters: These arguments *will not* be passed through a shell and arguments should not be quoted. Usage example: - ``` json + ```json { "extra_arguments": [ "--extra-vars", "Region={{user `Region`}} Stage={{user `Stage`}}" ] } ``` + If you are running a Windows build on AWS, Azure or Google Compute and would + like to access the auto-generated password that Packer uses to connect to a + Windows instance via WinRM, you can use the template variable + {{.WinRMPassword}} in this option. + For example: + + ```json + "extra_arguments": [ + "--extra-vars", "winrm_password={{ .WinRMPassword }}" + ] + ``` + - `groups` (array of strings) - The groups into which the Ansible host should be placed. When unspecified, the host is not associated with any groups. +- `inventory_file` (string) - The inventory file to use during provisioning. + When unspecified, Packer will create a temporary inventory file and will + use the `host_alias`. + - `host_alias` (string) - The alias by which the Ansible host should be known. - Defaults to `default`. + Defaults to `default`. This setting is ignored when using a custom inventory + file. - `inventory_directory` (string) - The directory in which to place the temporary generated Ansible inventory file. By default, this is the @@ -136,11 +163,18 @@ commonly useful Ansible variables: machine that the script is running on. This is useful if you want to run only certain parts of the playbook on systems built with certain builders. 
+- `packer_http_addr` If using a builder that provides an http server for file + transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this + will be set to the address. You can use this address in your provisioner to + download large files over http. This may be useful if you're experiencing + slower speeds using the default file provisioner. A file provisioner using + the `winrm` communicator may experience these types of difficulties. + ## Debugging To debug underlying issues with Ansible, add `"-vvvv"` to `"extra_arguments"` to enable verbose logging. -``` json +```json { "extra_arguments": [ "-vvvv" ] } @@ -161,7 +195,7 @@ Redhat / CentOS builds have been known to fail with the following error due to ` Building within a chroot (e.g. `amazon-chroot`) requires changing the Ansible connection to chroot. -``` json +```json { "builders": [ { @@ -186,7 +220,8 @@ Building within a chroot (e.g. `amazon-chroot`) requires changing the Ansible co ### winrm communicator -Windows builds require a custom Ansible connection plugin and a particular configuration. Assuming a directory named `connection_plugins` is next to the playbook and contains a file named `packer.py` whose contents is +Windows builds require a custom Ansible connection plugin and a particular configuration. Assuming a directory named `connection_plugins` is next to the playbook and contains a file named `packer.py` which implements +the connection plugin. On versions of Ansible before 2.4.x, the following works as the connection plugin ``` python from __future__ import (absolute_import, division, print_function) @@ -207,6 +242,46 @@ class Connection(SSHConnection): super(Connection, self).__init__(*args, **kwargs) ``` +Newer versions of Ansible require all plugins to have a documentation string. You can see if there is a +plugin available for the version of Ansible you are using [here](https://github.com/hashicorp/packer/tree/master/examples/ansible/connection-plugin). + +To create the plugin yourself, you will need to copy all of the `options` from the `DOCUMENTATION` string +from the [ssh.py Ansible connection plugin](https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/connection/ssh.py) +of the Ansible version you are using and add it to a packer.py file similar to as follows + +``` python +from __future__ import (absolute_import, division, print_function) +__metaclass__ = type + +from ansible.plugins.connection.ssh import Connection as SSHConnection + +DOCUMENTATION = ''' + connection: packer + short_description: ssh based connections for powershell via packer + description: + - This connection plugin allows ansible to communicate to the target packer + machines via ssh based connections for powershell. + author: Packer + version_added: na + options: + **** Copy ALL the options from + https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/connection/ssh.py + for the version of Ansible you are using **** +''' + +class Connection(SSHConnection): + ''' ssh based connections for powershell via packer''' + + transport = 'packer' + has_pipelining = True + become_methods = [] + allow_executable = False + module_implementation_preferences = ('.ps1', '') + + def __init__(self, *args, **kwargs): + super(Connection, self).__init__(*args, **kwargs) +``` + This template should build a Windows Server 2012 image on Google Cloud Platform: ``` json @@ -250,8 +325,12 @@ SSH servers only allow you to attempt to authenticate a certain number of times. googlecompute: fatal: [default]: UNREACHABLE! 
=> {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '[127.0.0.1]:62684' (RSA) to the list of known hosts.\r\nReceived disconnect from 127.0.0.1 port 62684:2: too many authentication failures\r\nAuthentication failed.\r\n", "unreachable": true} ``` -To unload all keys from your `ssh-agent`, run: +To unload all keys from your `ssh-agent`, run: ```console $ ssh-add -D ``` + +### Become: yes + +We recommend against running Packer as root; if you do then you won't be able to successfully run your ansible playbook as root; `become: yes` will fail. diff --git a/website/source/docs/provisioners/chef-client.html.md b/website/source/docs/provisioners/chef-client.html.md index 54cd5e778..b89cf0ef7 100644 --- a/website/source/docs/provisioners/chef-client.html.md +++ b/website/source/docs/provisioners/chef-client.html.md @@ -80,6 +80,12 @@ configuration is actually required. - `node_name` (string) - The name of the node to register with the Chef Server. This is optional and by default is packer-{{uuid}}. +* `policy_group` (string) - The name of a policy group that exists on the + Chef server. `policy_name` must also be specified. + +* `policy_name` (string) - The name of a policy, as identified by the name + setting in a `Policyfile.rb` file. `policy_group` must also be specified. + - `prevent_sudo` (boolean) - By default, the configured commands that are executed to install and run Chef are executed with `sudo`. If this is true, then the sudo will be omitted. This has no effect when guest\_os\_type is @@ -98,6 +104,9 @@ configuration is actually required. - `skip_clean_node` (boolean) - If true, Packer won't remove the node from the Chef server after it is done running. By default, this is false. +- `skip_clean_staging_directory` (boolean) - If true, Packer won't remove the Chef staging + directory from the machine after it is done running. By default, this is false. + - `skip_install` (boolean) - If true, Chef will not automatically be installed on the machine using the Chef omnibus installers. @@ -105,6 +114,11 @@ configuration is actually required. SSL certificates. If not set, this defaults to "verify\_peer" which validates all SSL certifications. +- `trusted_certs_dir` (string) - This is a directory that contains additional + SSL certificates to trust. Any certificates in this directory will be added to + whatever CA bundle ruby is using. Use this to add self-signed certs for your + Chef Server or local HTTP file servers. + - `staging_directory` (string) - This is the directory where all the configuration of Chef by Packer will be placed. By default this is "/tmp/packer-chef-client" when guest\_os\_type unix and @@ -155,9 +169,18 @@ node_name "{{.NodeName}}" {{if ne .ChefEnvironment ""}} environment "{{.ChefEnvironment}}" {{end}} +{{if ne .PolicyGroup ""}} +policy_group "{{.PolicyGroup}}" +{{end}} +{{if ne .PolicyName ""}} +policy_name "{{.PolicyName}}" +{{end}} {{if ne .SslVerifyMode ""}} ssl_verify_mode :{{.SslVerifyMode}} {{end}} +{{if ne .TrustedCertsDir ""}} +trusted_certs_dir :{{.TrustedCertsDir}} +{{end}} ``` This template is a [configuration @@ -170,6 +193,7 @@ variables available to use: - `NodeName` - The node name set in the configuration. - `ServerUrl` - The URL of the Chef Server set in the configuration. - `SslVerifyMode` - Whether Chef SSL verify mode is on or off. +- `TrustedCertsDir` - Path to dir with trusted certificates. - `ValidationClientName` - The name of the client used for validation. 
- `ValidationKeyPath` - Path to the validation key, if it is set. @@ -212,7 +236,7 @@ readability) to install Chef. This command can be customized if you want to install Chef in another way. ``` text -curl -L https://www.chef.io/chef/install.sh | \ +curl -L https://omnitruck.chef.io/chef/install.sh | \ {{if .Sudo}}sudo{{end}} bash ``` @@ -265,7 +289,65 @@ directories, append a shell provisioner after Chef to modify them. ## Examples -### Chef Client Local Mode +### Chef Client Local Mode - Simple + +The following example shows how to run the `chef-client` provisioner in local +mode. + +**Packer variables** + +Set the necessary Packer variables using environment variables or provide a [var +file](/docs/templates/user-variables.html). + +``` json +"variables": { + "chef_dir": "/tmp/packer-chef-client" +} +``` + +**Setup the** `chef-client` **provisioner** + +Make sure we have the correct directories and permissions for the `chef-client` +provisioner. You will need to bootstrap the Chef run by providing the necessary +cookbooks using Berkshelf or some other means. + +``` json + "provisioners": [ + ... + { "type": "shell", "inline": [ "mkdir -p {{user `chef_dir`}}" ] }, + { "type": "file", "source": "./roles", "destination": "{{user `chef_dir`}}" }, + { "type": "file", "source": "./cookbooks", "destination": "{{user `chef_dir`}}" }, + { "type": "file", "source": "./data_bags", "destination": "{{user `chef_dir`}}" }, + { "type": "file", "source": "./environments", "destination": "{{user `chef_dir`}}" }, + { "type": "file", "source": "./scripts/install_chef.sh", "destination": "{{user `chef_dir`}}/install_chef.sh" }, + { + "type": "chef-client", + "install_command": "sudo bash {{user `chef_dir`}}/install_chef.sh", + "server_url": "http://localhost:8889", + "config_template": "./config/client.rb.template", + "run_list": [ "role[testing]" ], + "skip_clean_node": true, + "skip_clean_client": true + } + ... + ] +``` + +And ./config/client.rb.template referenced by the above configuration: + +```ruby +log_level :info +log_location STDOUT +local_mode true +chef_zero.enabled true +ssl_verify_mode "verify_peer" +role_path "{{user `chef_dir`}}/roles" +data_bag_path "{{user `chef_dir`}}/data_bags" +environment_path "{{user `chef_dir`}}/environments" +cookbook_path [ "{{user `chef_dir`}}/cookbooks" ] +``` + +### Chef Client Local Mode - Passing variables The following example shows how to run the `chef-client` provisioner in local mode, while passing a `run_list` using a variable. diff --git a/website/source/docs/provisioners/converge.html.md b/website/source/docs/provisioners/converge.html.md index b76652abe..1c06678ed 100644 --- a/website/source/docs/provisioners/converge.html.md +++ b/website/source/docs/provisioners/converge.html.md @@ -59,7 +59,7 @@ Optional parameters: various [configuration template variables](/docs/templates/engine.html) available. -- `prevent_sudo` (boolean) - stop Converge from running with adminstrator +- `prevent_sudo` (boolean) - stop Converge from running with administrator privileges via sudo - `bootstrap_command` (string) - the command used to bootstrap Converge. This diff --git a/website/source/docs/provisioners/file.html.md b/website/source/docs/provisioners/file.html.md index da78d4b61..abb7f10b9 100644 --- a/website/source/docs/provisioners/file.html.md +++ b/website/source/docs/provisioners/file.html.md @@ -32,7 +32,9 @@ The file provisioner can upload both single files and complete directories. 
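For quick orientation (a sketch only; the source and destination paths are placeholders rather than values from the original docs), a minimal file provisioner block looks like this:

``` json
{
  "type": "file",
  "source": "app.tar.gz",
  "destination": "/tmp/app.tar.gz"
}
```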
## Configuration Reference -The available configuration options are listed below. All elements are required. +The available configuration options are listed below. + +### Required - `source` (string) - The path to a local file or directory to upload to the machine. The path can be absolute or relative. If it is relative, it is @@ -48,6 +50,16 @@ The available configuration options are listed below. All elements are required. "upload". If it is set to "download" then the file "source" in the machine will be downloaded locally to "destination" +### Optional + +- `generated` (boolean) - For advanced users only. If true, check the file + existence only before uploading, rather than upon pre-build validation. + This allows to upload files created on-the-fly. This defaults to false. We + don't recommend using this feature, since it can cause Packer to become + dependent on system state. We would prefer you generate your files before + the Packer run, but realize that there are situations where this may be + unavoidable. + ## Directory Uploads The file provisioner is also able to upload a complete directory to the remote @@ -75,6 +87,17 @@ directly. This behavior was adopted from the standard behavior of rsync. Note that under the covers, rsync may or may not be used. +## Uploading files that don't exist before Packer starts + +In general, local files used as the source **must** exist before Packer is run. +This is great for catching typos and ensuring that once a build is started, +that it will succeed. However, this also means that you can't generate a file +during your build and then upload it using the file provisioner later. +A convenient workaround is to upload a directory instead of a file. The +directory still must exist, but its contents don't. You can write your +generated file to the directory during the Packer run, and have it be uploaded +later. + ## Symbolic link uploads The behavior when uploading symbolic links depends on the communicator. The @@ -90,6 +113,9 @@ drwxr-xr-x 3 mwhooker staff 102 Jan 27 17:10 a lrwxr-xr-x 1 mwhooker staff 1 Jan 27 17:10 b -> a -rw-r--r-- 1 mwhooker staff 0 Jan 27 17:10 file1 lrwxr-xr-x 1 mwhooker staff 5 Jan 27 17:10 file1link -> file1 +$ ls -l toupload +total 0 +-rw-r--r-- 1 mwhooker staff 0 Jan 27 17:10 files.tar ``` ``` json @@ -97,7 +123,7 @@ lrwxr-xr-x 1 mwhooker staff 5 Jan 27 17:10 file1link -> file1 "provisioners": [ { "type": "shell-local", - "command": "mkdir -p toupload; tar cf toupload/files.tar files" + "command": "tar cf toupload/files.tar files" }, { "destination": "/tmp/", diff --git a/website/source/docs/provisioners/powershell.html.md b/website/source/docs/provisioners/powershell.html.md index f085a6e82..5f4525aba 100644 --- a/website/source/docs/provisioners/powershell.html.md +++ b/website/source/docs/provisioners/powershell.html.md @@ -1,8 +1,8 @@ --- description: | - The shell Packer provisioner provisions machines built by Packer using shell - scripts. Shell provisioning is the easiest way to get software installed and - configured on a machine. + The PowerShell Packer provisioner runs PowerShell scripts on Windows + machines. + It assumes that the communicator in use is WinRM. layout: docs page_title: 'PowerShell - Provisioners' sidebar_current: 'docs-provisioners-powershell' @@ -13,7 +13,11 @@ sidebar_current: 'docs-provisioners-powershell' Type: `powershell` The PowerShell Packer provisioner runs PowerShell scripts on Windows machines. -It assumes that the communicator in use is WinRM. 
+It assumes that the communicator in use is WinRM. However, the provisioner +can work equally well (with a few caveats) when combined with the SSH +communicator. See the [section +below](/docs/provisioners/powershell.html#combining-the-powershell-provisioner-with-the-ssh-communicator) +for details. ## Basic Example @@ -29,20 +33,21 @@ The example below is fully functional. ## Configuration Reference The reference of available configuration options is listed below. The only -required element is either "inline" or "script". Every other option is optional. +required element is either "inline" or "script". Every other option is +optional. Exactly *one* of the following is required: - `inline` (array of strings) - This is an array of commands to execute. The - commands are concatenated by newlines and turned into a single file, so they - are all executed within the same context. This allows you to change + commands are concatenated by newlines and turned into a single file, so + they are all executed within the same context. This allows you to change directories in one command and use something in the directory in the next and so on. Inline scripts are the easiest way to pull off simple tasks within the machine. - `script` (string) - The path to a script to upload and execute in - the machine. This path can be absolute or relative. If it is relative, it is - relative to the working directory when Packer is executed. + the machine. This path can be absolute or relative. If it is relative, it + is relative to the working directory when Packer is executed. - `scripts` (array of strings) - An array of scripts to execute. The scripts will be uploaded and executed in the order specified. Each script is @@ -51,12 +56,12 @@ Exactly *one* of the following is required: Optional parameters: -- `binary` (boolean) - If true, specifies that the script(s) are binary files, - and Packer should therefore not convert Windows line endings to Unix line - endings (if there are any). By default this is false. +- `binary` (boolean) - If true, specifies that the script(s) are binary + files, and Packer should therefore not convert Windows line endings to Unix + line endings (if there are any). By default this is false. -- `elevated_execute_command` (string) - The command to use to execute the elevated - script. By default this is as follows: +- `elevated_execute_command` (string) - The command to use to execute the + elevated script. By default this is as follows: ``` powershell powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};. {{.Vars}}; &'{{.Path}}'; exit $LastExitCode }" @@ -65,32 +70,75 @@ Optional parameters: The value of this is treated as [configuration template](/docs/templates/engine.html). There are two available variables: `Path`, which is the path to the script to run, and - `Vars`, which is the location of a temp file containing the list of `environment_vars`, if configured. + `Vars`, which is the location of a temp file containing the list of + `environment_vars`, if configured. - `environment_vars` (array of strings) - An array of key/value pairs to inject prior to the execute\_command. The format should be `key=value`. - Packer injects some environmental variables by default into the environment, - as well, which are covered in the section below. + Packer injects some environmental variables by default into the + environment, as well, which are covered in the section below. 
+ If you are running on AWS, Azure or Google Compute and would like to access the generated + password that Packer uses to connect to the instance via + WinRM, you can use the template variable `{{.WinRMPassword}}` to set this + as an environment variable. For example: + + ```json + { + "type": "powershell", + "environment_vars": "WINRMPASS={{.WinRMPassword}}", + "inline": ["Write-Host \"Automatically generated aws password is: $Env:WINRMPASS\""] + }, + ``` - `execute_command` (string) - The command to use to execute the script. By default this is as follows: ``` powershell - powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};{{.Vars}}&'{{.Path}}';exit $LastExitCode }" + powershell -executionpolicy bypass "& { if (Test-Path variable:global:ProgressPreference){$ProgressPreference='SilentlyContinue'};. {{.Vars}}; &'{{.Path}}'; exit $LastExitCode }" ``` The value of this is treated as [configuration template](/docs/templates/engine.html). There are two available variables: `Path`, which is the path to the script to run, and - `Vars`, which is the list of `environment_vars`, if configured. + `Vars`, which is the location of a temp file containing the list of + `environment_vars`. The value of both `Path` and `Vars` can be + manually configured by setting the values for `remote_path` and + `remote_env_var_path` respectively. - `elevated_user` and `elevated_password` (string) - If specified, the PowerShell script will be run with elevated privileges using the given - Windows user. + Windows user. If you are running a build on AWS, Azure or Google Compute and would like to run using + the generated password that Packer uses to connect to the instance via + WinRM, you may do so by using the template variable {{.WinRMPassword}}. + For example: -- `remote_path` (string) - The path where the script will be uploaded to in - the machine. This defaults to "c:/Windows/Temp/script.ps1". This value must be a - writable location and any parent directories must already exist. + ``` json + "elevated_user": "Administrator", + "elevated_password": "{{.WinRMPassword}}", + ``` + +- `remote_path` (string) - The path where the PowerShell script will be + uploaded to within the target build machine. This defaults to + `C:/Windows/Temp/script-UUID.ps1` where UUID is replaced with a + dynamically generated string that uniquely identifies the script. + + This setting allows users to override the default upload location. The + value must be a writable location and any parent directories must + already exist. + +- `remote_env_var_path` (string) - Environment variables required within + the remote environment are uploaded within a PowerShell script and then + enabled by 'dot sourcing' the script immediately prior to execution of + the main command or script. + + The path the environment variables script will be uploaded to defaults to + `C:/Windows/Temp/packer-ps-env-vars-UUID.ps1` where UUID is replaced + with a dynamically generated string that uniquely identifies the + script. + + This setting allows users to override the location the environment + variable script is uploaded to. The value must be a writable location + and any parent directories must already exist. - `start_retry_timeout` (string) - The amount of time to attempt to *start* the remote process. By default this is "5m" or 5 minutes. 
This setting @@ -107,13 +155,15 @@ In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the + [name of the build](/docs/templates/builders.html#named-builds) that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. -- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the - machine that the script is running on. This is useful if you want to run - only certain parts of the script on systems built with certain builders. +- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create + the machine that the script is running on. This is useful if you want to + run only certain parts of the script on systems built with certain + builders. - `PACKER_HTTP_ADDR` If using a builder that provides an http server for file transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this @@ -121,3 +171,136 @@ commonly useful environmental variables: download large files over http. This may be useful if you're experiencing slower speeds using the default file provisioner. A file provisioner using the `winrm` communicator may experience these types of difficulties. + +## Combining the PowerShell Provisioner with the SSH Communicator + +The good news first. If you are using the +[Microsoft port of OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki) +then the provisioner should just work as expected - no extra configuration +effort is required. + +Now the caveats. If you are using an alternative configuration, and your SSH +connection lands you in a *nix shell on the remote host, then you will most +likely need to manually set the `execute_command`; The default +`execute_command` used by Packer will not work for you. +When configuring the command you will need to ensure that any dollar signs +or other characters that may be incorrectly interpreted by the remote shell +are escaped accordingly. + +The following example shows how the standard `execute_command` can be +reconfigured to work on a remote system with +[Cygwin/OpenSSH](https://cygwin.com/) installed. +The `execute_command` has each dollar sign backslash escaped so that it is +not interpreted by the remote Bash shell - Bash being the default shell for +Cygwin environments. + +```json + "provisioners": [ + { + "type": "powershell", + "execute_command": "powershell -executionpolicy bypass \"& { if (Test-Path variable:global:ProgressPreference){\\$ProgressPreference='SilentlyContinue'};. {{.Vars}}; &'{{.Path}}'; exit \\$LastExitCode }\"", + "inline": [ + "Write-Host \"Hello from PowerShell\"", + ] + } + ] +``` + + +## Packer's Handling of Characters Special to PowerShell + +The escape character in PowerShell is the `backtick`, also sometimes +referred to as the `grave accent`. When, and when not, to escape characters +special to PowerShell is probably best demonstrated with a series of examples. + +### When To Escape... + +Users need to deal with escaping characters special to PowerShell when they +appear *directly* in commands used in the `inline` PowerShell provisioner and +when they appear *directly* in the users own scripts. 
+Note that where double quotes appear within double quotes, the addition of +a backslash escape is required for the JSON template to be parsed correctly. + +``` json + "provisioners": [ + { + "type": "powershell", + "inline": [ + "Write-Host \"A literal dollar `$ must be escaped\"", + "Write-Host \"A literal backtick `` must be escaped\"", + "Write-Host \"Here `\"double quotes`\" must be escaped\"", + "Write-Host \"Here `'single quotes`' don`'t really need to be\"", + "Write-Host \"escaped... but it doesn`'t hurt to do so.\"", + ] + }, +``` + +The above snippet should result in the following output on the Packer console: + +``` +==> amazon-ebs: Provisioning with Powershell... +==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner508190439 + amazon-ebs: A literal dollar $ must be escaped + amazon-ebs: A literal backtick ` must be escaped + amazon-ebs: Here "double quotes" must be escaped + amazon-ebs: Here 'single quotes' don't really need to be + amazon-ebs: escaped... but it doesn't hurt to do so. +``` + +### When Not To Escape... + +Special characters appearing in user environment variable values and in the +`elevated_user` and `elevated_password` fields will be automatically +dealt with for the user. There is no need to use escapes in these instances. + +``` json +{ + "variables": { + "psvar": "My$tring" + }, + ... + "provisioners": [ + { + "type": "powershell", + "elevated_user": "Administrator", + "elevated_password": "Super$3cr3t!", + "inline": "Write-Output \"The dollar in the elevated_password is interpreted correctly\"" + }, + { + "type": "powershell", + "environment_vars": [ + "VAR1=A$Dollar", + "VAR2=A`Backtick", + "VAR3=A'SingleQuote", + "VAR4=A\"DoubleQuote", + "VAR5={{user `psvar`}}" + ], + "inline": [ + "Write-Output \"In the following examples the special character is interpreted correctly:\"", + "Write-Output \"The dollar in VAR1: $Env:VAR1\"", + "Write-Output \"The backtick in VAR2: $Env:VAR2\"", + "Write-Output \"The single quote in VAR3: $Env:VAR3\"", + "Write-Output \"The double quote in VAR4: $Env:VAR4\"", + "Write-Output \"The dollar in VAR5 (expanded from a user var): $Env:VAR5\"" + ] + } + ] + ... +} +``` + +The above snippet should result in the following output on the Packer console: + +``` +==> amazon-ebs: Provisioning with Powershell... +==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner961728919 + amazon-ebs: The dollar in the elevated_password is interpreted correctly +==> amazon-ebs: Provisioning with Powershell... 
+==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner142826554 + amazon-ebs: In the following examples the special character is interpreted correctly: + amazon-ebs: The dollar in VAR1: A$Dollar + amazon-ebs: The backtick in VAR2: A`Backtick + amazon-ebs: The single quote in VAR3: A'SingleQuote + amazon-ebs: The double quote in VAR4: A"DoubleQuote + amazon-ebs: The dollar in VAR5 (expanded from a user var): My$tring +``` diff --git a/website/source/docs/provisioners/puppet-masterless.html.md b/website/source/docs/provisioners/puppet-masterless.html.md index 1e1689d03..9afb2f759 100644 --- a/website/source/docs/provisioners/puppet-masterless.html.md +++ b/website/source/docs/provisioners/puppet-masterless.html.md @@ -54,68 +54,64 @@ Required parameters: Optional parameters: -- `execute_command` (string) - The command used to execute Puppet. This has - various [configuration template - variables](/docs/templates/engine.html) available. See - below for more information. +- `execute_command` (string) - The command-line to execute Puppet. This also has + various [configuration template variables](/docs/templates/engine.html) available. -- `extra_arguments` (array of strings) - This is an array of additional options to - pass to the puppet command when executing puppet. This allows for - customization of the `execute_command` without having to completely replace - or include it's contents, making forward-compatible customizations much - easier. +- `extra_arguments` (array of strings) - Additional options to + pass to the Puppet command. This allows for customization of + `execute_command` without having to completely replace + or subsume its contents, making forward-compatible customizations much + easier to maintain. + + This string is lazy-evaluated so one can incorporate logic driven by template variables as + well as private elements of ExecuteTemplate (see source: provisioner/puppet-masterless/provisioner.go). +``` +[ + {{if ne "{{user environment}}" ""}}--environment={{user environment}}{{end}}, + {{if ne ".ModulePath" ""}}--modulepath="{{.ModulePath}}{{.ModulePathJoiner}}$(puppet config print {{if ne "{{user `environment`}}" ""}}--environment={{user `environment`}}{{end}} modulepath)"{{end}} +] +``` -- `facter` (object of key/value strings) - Additional +- `facter` (object of key:value strings) - Additional [facts](https://puppetlabs.com/facter) to make - available when Puppet is running. + available to the Puppet run. -- `guest_os_type` (string) - The target guest OS type, either "unix" or - "windows". Setting this to "windows" will cause the provisioner to use - Windows friendly paths and commands. By default, this is "unix". +- `guest_os_type` (string) - The remote host's OS type ('windows' or 'unix') to + tailor command-line and path separators. (default: unix). -- `hiera_config_path` (string) - The path to a local file with hiera - configuration to be uploaded to the remote machine. Hiera data directories - must be uploaded using the file provisioner separately. +- `hiera_config_path` (string) - Local path to self-contained Hiera + data to be uploaded. NOTE: If you need data directories + they must be previously transferred with a File provisioner. -- `ignore_exit_codes` (boolean) - If true, Packer will never consider the - provisioner a failure. +- `ignore_exit_codes` (boolean) - If true, Packer will ignore failures. 
-- `manifest_dir` (string) - The path to a local directory with manifests to be - uploaded to the remote machine. This is useful if your main manifest file - uses imports. This directory doesn't necessarily contain the - `manifest_file`. It is a separate directory that will be set as the - "manifestdir" setting on Puppet. +- `manifest_dir` (string) - Local directory with manifests to be + uploaded. This is useful if your main manifest uses imports, but the + directory might not contain the `manifest_file` itself. -~> `manifest_dir` is passed to `puppet apply` as the `--manifestdir` option. +~> `manifest_dir` is passed to Puppet as `--manifestdir` option. This option was deprecated in puppet 3.6, and removed in puppet 4.0. If you have multiple manifests you should use `manifest_file` instead. -- `module_paths` (array of strings) - This is an array of paths to module - directories on your local filesystem. These will be uploaded to the - remote machine. By default, this is empty. +- `module_paths` (array of strings) - Array of local module + directories to be uploaded. -- `prevent_sudo` (boolean) - By default, the configured commands that are - executed to run Puppet are executed with `sudo`. If this is true, then the - sudo will be omitted. +- `prevent_sudo` (boolean) - On Unix platforms Puppet is typically invoked with `sudo`. If true, + it will be omitted. (default: false) -- `puppet_bin_dir` (string) - The path to the directory that contains the puppet - binary for running `puppet apply`. Usually, this would be found via the `$PATH` - or `%PATH%` environment variable, but some builders (notably, the Docker one) do - not run profile-setup scripts, therefore the path is usually empty. +- `puppet_bin_dir` (string) - Path to the Puppet binary. Ideally the program + should be on the system (unix: `$PATH`, windows: `%PATH%`), but some builders (eg. Docker) do + not run profile-setup scripts and therefore PATH might be empty or minimal. -- `staging_directory` (string) - This is the directory where all the configuration - of Puppet by Packer will be placed. By default this is "/tmp/packer-puppet-masterless" - when guest OS type is unix and "C:/Windows/Temp/packer-puppet-masterless" when windows. - This directory doesn't need to exist but must have proper permissions so that the SSH - user that Packer uses is able to create directories and write into this folder. - If the permissions are not correct, use a shell provisioner prior to this to configure - it properly. +- `staging_directory` (string) - Directory to where uploaded files + will be placed (unix: "/tmp/packer-puppet-masterless", + windows: "%SYSTEMROOT%/Temp/packer-puppet-masterless"). + It doesn't need to pre-exist, but the parent must have permissions sufficient + for the account Packer connects as to create directories and write files. + Use a Shell provisioner to prepare the way if needed. -- `working_directory` (string) - This is the directory from which the puppet - command will be run. When using hiera with a relative path, this option - allows to ensure that the paths are working properly. If not specified, - defaults to the value of specified `staging_directory` (or its default value - if not specified either). +- `working_directory` (string) - Directory from which `execute_command` will be run. + If using Hiera files with relative paths, this option can be helpful. 
(default: `staging_directory`) ## Execute Command @@ -124,45 +120,33 @@ readability) to execute Puppet: ``` cd {{.WorkingDir}} && - {{if ne .FacterVars ""}}{{.FacterVars}} {{end}} -{{if .Sudo}}sudo -E {{end}} -{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}} -puppet apply --verbose --modulepath='{{.ModulePath}}' - {{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}} -{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}} ---detailed-exitcodes - {{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}} -{{.ManifestFile}} + {{if ne .FacterVars ""}}{{.FacterVars}} {{end}} + {{if .Sudo}}sudo -E {{end}} + {{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}} + puppet apply --detailed-exitcodes + {{if .Debug}}--debug {{end}} + {{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}} + {{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}} + {{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}} + {{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}} + {{.ManifestFile}} ``` The following command is used if guest OS type is windows: ``` cd {{.WorkingDir}} && - {{.FacterVars}} && - {{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}} -puppet apply --verbose --modulepath='{{.ModulePath}}' - {{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}} -{{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}} ---detailed-exitcodes - {{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}} -{{.ManifestFile}} + {{if ne .FacterVars ""}}{{.FacterVars}} && {{end}} + {{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}} + puppet apply --detailed-exitcodes + {{if .Debug}}--debug {{end}} + {{if ne .ModulePath ""}}--modulepath='{{.ModulePath}}' {{end}} + {{if ne .HieraConfigPath ""}}--hiera_config='{{.HieraConfigPath}}' {{end}} + {{if ne .ManifestDir ""}}--manifestdir='{{.ManifestDir}}' {{end}} + {{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}} + {{.ManifestFile}} ``` -This command can be customized using the `execute_command` configuration. As you -can see from the default value above, the value of this configuration can -contain various template variables, defined below: - -- `WorkingDir` - The path from which Puppet will be executed. -- `FacterVars` - Shell-friendly string of environmental variables used to set - custom facts configured for this provisioner. -- `HieraConfigPath` - The path to a hiera configuration file. -- `ManifestFile` - The path on the remote machine to the manifest file for - Puppet to use. -- `ModulePath` - The paths to the module directories. -- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the - value of the `prevent_sudo` configuration. - ## Default Facts In addition to being able to specify custom Facter facts using the `facter` diff --git a/website/source/docs/provisioners/puppet-server.html.md b/website/source/docs/provisioners/puppet-server.html.md index 180f52b02..56461fc62 100644 --- a/website/source/docs/provisioners/puppet-server.html.md +++ b/website/source/docs/provisioners/puppet-server.html.md @@ -28,7 +28,7 @@ accessible from your network. ``` json { "type": "puppet-server", - "options": "--test --pluginsync", + "extra_arguments": "--test --pluginsync", "facter": { "server_role": "webserver" } @@ -50,31 +50,38 @@ listed below: contains the client private key for the node. This defaults to nothing, in which case a client private key won't be uploaded. -- `execute_command` (string) - The command used to execute Puppet. 
This has +- `execute_command` (string) - The command-line to execute Puppet. This also has various [configuration template variables](/docs/templates/engine.html) available. - See below for more information. -- `facter` (object of key/value strings) - Additional Facter facts to make +- `extra_arguments` (array of strings) - Additional options to + pass to the Puppet command. This allows for customization of + `execute_command` without having to completely replace + or subsume its contents, making forward-compatible customizations much + easier to maintain. + + This string is lazy-evaluated so one can incorporate logic driven by template variables as + well as private elements of ExecuteTemplate (see source: provisioner/puppet-server/provisioner.go). +``` +[ + {{if ne "{{user environment}}" ""}}--environment={{user environment}}{{end}} +] +``` + +- `facter` (object of key/value strings) - Additional + [facts](https://puppetlabs.com/facter) to make available to the Puppet run. -- `guest_os_type` (string) - The target guest OS type, either "unix" or - "windows". Setting this to "windows" will cause the provisioner to use - Windows friendly paths and commands. By default, this is "unix". +- `guest_os_type` (string) - The remote host's OS type ('windows' or 'unix') to + tailor command-line and path separators. (default: unix). -- `ignore_exit_codes` (boolean) - If true, Packer will never consider the - provisioner a failure. +- `ignore_exit_codes` (boolean) - If true, Packer will ignore failures. -- `options` (string) - Additional command line options to pass to - `puppet agent` when Puppet is run. +- `prevent_sudo` (boolean) - On Unix platforms Puppet is typically invoked with `sudo`. If true, + it will be omitted. (default: false) -- `prevent_sudo` (boolean) - By default, the configured commands that are - executed to run Puppet are executed with `sudo`. If this is true, then the - sudo will be omitted. - -- `puppet_bin_dir` (string) - The path to the directory that contains the puppet - binary for running `puppet agent`. Usually, this would be found via the `$PATH` - or `%PATH%` environment variable, but some builders (notably, the Docker one) do - not run profile-setup scripts, therefore the path is usually empty. +- `puppet_bin_dir` (string) - Path to the Puppet binary. Ideally the program + should be on the system (unix: `$PATH`, windows: `%PATH%`), but some builders (eg. Docker) do + not run profile-setup scripts and therefore PATH might be empty or minimal. - `puppet_node` (string) - The name of the node. If this isn't set, the fully qualified domain name will be used. @@ -82,42 +89,48 @@ listed below: - `puppet_server` (string) - Hostname of the Puppet server. By default "puppet" will be used. -- `staging_dir` (string) - This is the directory where all the - configuration of Puppet by Packer will be placed. By default this - is /tmp/packer-puppet-server. This directory doesn't need to exist but - must have proper permissions so that the SSH user that Packer uses is able - to create directories and write into this folder. If the permissions are not - correct, use a shell provisioner prior to this to configure it properly. +- `staging_dir` (string) - Directory to where uploaded files + will be placed (unix: "/tmp/packer-puppet-masterless", + windows: "%SYSTEMROOT%/Temp/packer-puppet-masterless"). + It doesn't need to pre-exist, but the parent must have permissions sufficient + for the account Packer connects as to create directories and write files. 
+ Use a Shell provisioner to prepare the way if needed. -## Execute Command +- `working_directory` (string) - Directory from which `execute_command` will be run. + If using Hiera files with relative paths, this option can be helpful. (default: `staging_directory`) + + ## Execute Command By default, Packer uses the following command (broken across multiple lines for readability) to execute Puppet: ``` -{{.FacterVars}} {{if .Sudo}}sudo -E {{end}} -{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}puppet agent ---onetime --no-daemonize -{{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}} -{{if ne .Options ""}}{{.Options}} {{end}} -{{if ne .PuppetNode ""}}--certname={{.PuppetNode}} {{end}} -{{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}} -{{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}} ---detailed-exitcodes +cd {{.WorkingDir}} && + {{if ne .FacterVars ""}}{{.FacterVars}} {{end}} + {{if .Sudo}}sudo -E {{end}} + {{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}} + puppet agent --onetime --no-daemonize --detailed-exitcodes + {{if .Debug}}--debug {{end}} + {{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}} + {{if ne .PuppetNode ""}}--certname='{{.PuppetNode}}' {{end}} + {{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}} + {{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}} + {{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}} ``` The following command is used if guest OS type is windows: ``` -{{.FacterVars}} -{{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}}puppet agent ---onetime --no-daemonize -{{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}} -{{if ne .Options ""}}{{.Options}} {{end}} -{{if ne .PuppetNode ""}}--certname={{.PuppetNode}} {{end}} -{{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}} -{{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}} ---detailed-exitcodes +cd {{.WorkingDir}} && + {{if ne .FacterVars ""}}{{.FacterVars}} && {{end}} + {{if ne .PuppetBinDir ""}}{{.PuppetBinDir}}/{{end}} + puppet agent --onetime --no-daemonize --detailed-exitcodes + {{if .Debug}}--debug {{end}} + {{if ne .PuppetServer ""}}--server='{{.PuppetServer}}' {{end}} + {{if ne .PuppetNode ""}}--certname='{{.PuppetNode}}' {{end}} + {{if ne .ClientCertPath ""}}--certdir='{{.ClientCertPath}}' {{end}} + {{if ne .ClientPrivateKeyPath ""}}--privatekeydir='{{.ClientPrivateKeyPath}}' {{end}} + {{if ne .ExtraArguments ""}}{{.ExtraArguments}} {{end}} ``` ## Default Facts diff --git a/website/source/docs/provisioners/salt-masterless.html.md b/website/source/docs/provisioners/salt-masterless.html.md index 443f37561..ad6920614 100644 --- a/website/source/docs/provisioners/salt-masterless.html.md +++ b/website/source/docs/provisioners/salt-masterless.html.md @@ -90,3 +90,5 @@ Optional: - `salt_bin_dir` (string) - Path to the `salt-call` executable. Useful if it is not on the PATH. + +- `guest_os_type` (string) - The target guest OS type, either "unix" or "windows". diff --git a/website/source/docs/provisioners/shell-local.html.md b/website/source/docs/provisioners/shell-local.html.md index bb9022453..16719f239 100644 --- a/website/source/docs/provisioners/shell-local.html.md +++ b/website/source/docs/provisioners/shell-local.html.md @@ -1,8 +1,9 @@ --- description: | - The shell Packer provisioner provisions machines built by Packer using shell - scripts. 
Shell provisioning is the easiest way to get software installed and - configured on a machine. + shell-local will run a shell script of your choosing on the machine where Packer + is being run - in other words, shell-local will run the shell script on your + build server, or your desktop, etc., rather than the remote/guest machine being + provisioned by Packer. layout: docs page_title: 'Shell (Local) - Provisioners' sidebar_current: 'docs-provisioners-shell-local' @@ -12,8 +13,12 @@ sidebar_current: 'docs-provisioners-shell-local' Type: `shell-local` -The local shell provisioner executes a local shell script on the machine running -Packer. The [remote shell](/docs/provisioners/shell.html) provisioner executes +shell-local will run a shell script of your choosing on the machine where Packer +is being run - in other words, shell-local will run the shell script on your +build server, or your desktop, etc., rather than the remote/guest machine being +provisioned by Packer. + +The [remote shell](/docs/provisioners/shell.html) provisioner executes shell scripts on a remote machine. ## Basic Example @@ -32,16 +37,278 @@ The example below is fully functional. The reference of available configuration options is listed below. The only required element is "command". -Required: +Exactly *one* of the following is required: -- `command` (string) - The command to execute. This will be executed within - the context of a shell as specified by `execute_command`. +- `command` (string) - This is a single command to execute. It will be written + to a temporary file and run using the `execute_command` call below. + If you are building a windows vm on AWS, Azure or Google Compute and would + like to access the generated password that Packer uses to connect to the + instance via WinRM, you can use the template variable `{{.WinRMPassword}}` + to set this as an environment variable. + +- `inline` (array of strings) - This is an array of commands to execute. The + commands are concatenated by newlines and turned into a single file, so they + are all executed within the same context. This allows you to change + directories in one command and use something in the directory in the next + and so on. Inline scripts are the easiest way to pull off simple tasks + within the machine. + +- `script` (string) - The path to a script to execute. This path can be + absolute or relative. If it is relative, it is relative to the working + directory when Packer is executed. + +- `scripts` (array of strings) - An array of scripts to execute. The scripts + will be executed in the order specified. Each script is executed in + isolation, so state such as variables from one script won't carry on to the + next. Optional parameters: -- `execute_command` (array of strings) - The command to use to execute - the script. By default this is `["/bin/sh", "-c", "{{.Command}}"]`. The value - is an array of arguments executed directly by the OS. The value of this is - treated as [configuration - template](/docs/templates/engine.html). The only available - variable is `Command` which is the command to execute. +- `environment_vars` (array of strings) - An array of key/value pairs to + inject prior to the `execute_command`. The format should be `key=value`. + Packer injects some environmental variables by default into the environment, + as well, which are covered in the section below. 
If you are building a + windows vm on AWS, Azure or Google Compute and would like to access the + generated password that Packer uses to connect to the instance via WinRM, + you can use the template variable `{{.WinRMPassword}}` to set this as an + environment variable. For example: + `"environment_vars": "WINRMPASS={{.WinRMPassword}}"` + +- `execute_command` (array of strings) - The command used to execute the script. + By default this is `["/bin/sh", "-c", "{{.Vars}}, "{{.Script}}"]` + on unix and `["cmd", "/c", "{{.Vars}}", "{{.Script}}"]` on windows. + This is treated as a [template engine](/docs/templates/engine.html). + There are two available variables: `Script`, which is the path to the script + to run, and `Vars`, which is the list of `environment_vars`, if configured. + + If you choose to set this option, make sure that the first element in the + array is the shell program you want to use (for example, "sh"), and a later + element in the array must be `{{.Script}}`. + + This option provides you a great deal of flexibility. You may choose to + provide your own shell program, for example "/usr/local/bin/zsh" or even + "powershell.exe". However, with great power comes great responsibility - + these commands are not officially supported and things like environment + variables may not work if you use a different shell than the default. + + For backwards compatibility, you may also use {{.Command}}, but it is + decoded the same way as {{.Script}}. We recommend using {{.Script}} for the + sake of clarity, as even when you set only a single `command` to run, + Packer writes it to a temporary file and then runs it as a script. + + If you are building a windows vm on AWS, Azure or Google Compute and would + like to access the generated password that Packer uses to connect to the + instance via WinRM, you can use the template variable `{{.WinRMPassword}}` + to set this as an environment variable. + +- `inline_shebang` (string) - The + [shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) value to use when + running commands specified by `inline`. By default, this is `/bin/sh -e`. If + you're not using `inline`, then this configuration has no effect. + **Important:** If you customize this, be sure to include something like the + `-e` flag, otherwise individual steps failing won't fail the provisioner. + +- `use_linux_pathing` (bool) - This is only relevant to windows hosts. If you + are running Packer in a Windows environment with the Windows Subsystem for + Linux feature enabled, and would like to invoke a bash script rather than + invoking a Cmd script, you'll need to set this flag to true; it tells Packer + to use the linux subsystem path for your script rather than the Windows path. + (e.g. /mnt/c/path/to/your/file instead of C:/path/to/your/file). Please see + the example below for more guidance on how to use this feature. If you are + not on a Windows host, or you do not intend to use the shell-local + provisioner to run a bash script, please ignore this option. + +## Execute Command + +To many new users, the `execute_command` is puzzling. However, it provides an +important function: customization of how the command is executed. The most +common use case for this is dealing with **sudo password prompts**. You may also +need to customize this if you use a non-POSIX shell, such as `tcsh` on FreeBSD. + +### The Windows Linux Subsystem + +The shell-local provisioner was designed with the idea of allowing you to run +commands in your local operating system's native shell. 
For Windows, we've +assumed in our defaults that this is Cmd. However, it is possible to run a +bash script as part of the Windows Linux Subsystem from the shell-local +provisioner, by modifying the `execute_command` and the `use_linux_pathing` +options in the provisioner config. + +The example below is a fully functional test config. + +One limitation of this offering is that "inline" and "command" options are not +available to you; please limit yourself to using the "script" or "scripts" +options instead. + +Please note that the WSL is a beta feature, and this tool is not guaranteed to +work as you expect it to. + +``` +{ + "builders": [ + { + "type": "null", + "communicator": "none" + } + ], + "provisioners": [ + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest1"], + "execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"], + "use_linux_pathing": true, + "scripts": ["C:/Users/me/scripts/example_bash.sh"] + }, + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest2"], + "execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"], + "use_linux_pathing": true, + "script": "C:/Users/me/scripts/example_bash.sh" + } + ] +} +``` + +## Default Environmental Variables + +In addition to being able to specify custom environmental variables using the +`environment_vars` configuration, the provisioner automatically defines certain +commonly useful environmental variables: + +- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. + This is most useful when Packer is making multiple builds and you want to + distinguish them slightly from a common provisioning script. + +- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the + machine that the script is running on. This is useful if you want to run + only certain parts of the script on systems built with certain builders. + +- `PACKER_HTTP_ADDR` If using a builder that provides an http server for file + transfer (such as hyperv, parallels, qemu, virtualbox, and vmware), this + will be set to the address. You can use this address in your provisioner to + download large files over http. This may be useful if you're experiencing + slower speeds using the default file provisioner. A file provisioner using + the `winrm` communicator may experience these types of difficulties. + +## Safely Writing A Script + +Whether you use the `inline` option, or pass it a direct `script` or `scripts`, +it is important to understand a few things about how the shell-local +provisioner works to run it safely and easily. This understanding will save +you much time in the process. + +### Once Per Builder + +The `shell-local` script(s) you pass are run once per builder. That means that +if you have an `amazon-ebs` builder and a `docker` builder, your script will be +run twice. If you have 3 builders, it will run 3 times, once for each builder. + +### Always Exit Intentionally + +If any provisioner fails, the `packer build` stops and all interim artifacts +are cleaned up. + +For a shell script, that means the script **must** exit with a zero code. You +*must* be extra careful to `exit 0` when necessary. 
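To make this concrete, below is a minimal sketch of a script you might pass via `script`; the filename and the marker check are illustrative only and assume a POSIX shell on the machine running Packer:

```shell
#!/bin/bash
# Fail fast: any command that exits non-zero aborts the script, which fails
# the provisioner and stops `packer build`.
set -e

echo "running local step for build: ${PACKER_BUILD_NAME}"

# A check that is allowed to fail should not sink the whole build.
grep -q "optional-marker" ./build-notes.txt || true

# Finish with an explicit zero status so the provisioner is reported as a success.
exit 0
```

The `set -e` / explicit `exit 0` pairing is one straightforward way to satisfy the "always exit intentionally" advice above.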
+ + +## Usage Examples: + +Example of running a .cmd file on windows: + +``` + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest1"], + "scripts": ["./scripts/test_cmd.cmd"] + }, +``` + +Contents of "test_cmd.cmd": + +``` +echo %SHELLLOCALTEST% +``` + +Example of running an inline command on windows: +Required customization: tempfile_extension + +``` + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest2"], + "tempfile_extension": ".cmd", + "inline": ["echo %SHELLLOCALTEST%"] + }, +``` + +Example of running a bash command on windows using WSL: +Required customizations: use_linux_pathing and execute_command + +``` + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest3"], + "execute_command": ["bash", "-c", "{{.Vars}} {{.Script}}"], + "use_linux_pathing": true, + "script": "./scripts/example_bash.sh" + } +``` + +Contents of "example_bash.sh": + +``` +#!/bin/bash +echo $SHELLLOCALTEST +``` + +Example of running a powershell script on windows: +Required customizations: env_var_format and execute_command + +``` + + { + "type": "shell-local", + "environment_vars": ["SHELLLOCALTEST=ShellTest4"], + "execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"], + "env_var_format": "$env:%s=\"%s\"; ", + } +``` + +Example of running a powershell script on windows as "inline": +Required customizations: env_var_format, tempfile_extension, and execute_command + +``` + { + "type": "shell-local", + "tempfile_extension": ".ps1", + "environment_vars": ["SHELLLOCALTEST=ShellTest5"], + "execute_command": ["powershell.exe", "{{.Vars}} {{.Script}}"], + "env_var_format": "$env:%s=\"%s\"; ", + "inline": ["write-output $env:SHELLLOCALTEST"] + } +``` + + +Example of running a bash script on linux: + +``` + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest1"], + "scripts": ["./scripts/dummy_bash.sh"] + } +``` + +Example of running a bash "inline" on linux: + +``` + { + "type": "shell-local", + "environment_vars": ["PROVISIONERTEST=ProvisionerTest2"], + "inline": ["echo hello", + "echo $PROVISIONERTEST"] + } +``` + diff --git a/website/source/docs/provisioners/shell.html.md b/website/source/docs/provisioners/shell.html.md index 827c04c62..e5c8f48b9 100644 --- a/website/source/docs/provisioners/shell.html.md +++ b/website/source/docs/provisioners/shell.html.md @@ -150,7 +150,8 @@ In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the + [name of the build](/docs/templates/builders.html#named-builds) that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. diff --git a/website/source/docs/provisioners/windows-restart.html.md b/website/source/docs/provisioners/windows-restart.html.md index e9e348755..106340a4a 100644 --- a/website/source/docs/provisioners/windows-restart.html.md +++ b/website/source/docs/provisioners/windows-restart.html.md @@ -38,8 +38,7 @@ The reference of available configuration options is listed below. Optional parameters: - `restart_command` (string) - The command to execute to initiate the - restart. By default this is `shutdown /r /c "packer restart" /t 5 && net stop winrm`. 
A key action of this is to stop WinRM so that Packer can - detect it is rebooting. + restart. By default this is `shutdown /r /f /t 0 /c "packer restart"`. - `restart_check_command` (string) - A command to execute to check if the restart succeeded. This will be done in a loop. Example usage: diff --git a/website/source/docs/provisioners/windows-shell.html.md b/website/source/docs/provisioners/windows-shell.html.md index cdf4469fa..8deb6c9c9 100644 --- a/website/source/docs/provisioners/windows-shell.html.md +++ b/website/source/docs/provisioners/windows-shell.html.md @@ -81,7 +81,8 @@ In addition to being able to specify custom environmental variables using the `environment_vars` configuration, the provisioner automatically defines certain commonly useful environmental variables: -- `PACKER_BUILD_NAME` is set to the name of the build that Packer is running. +- `PACKER_BUILD_NAME` is set to the + [name of the build](/docs/templates/builders.html#named-builds) that Packer is running. This is most useful when Packer is making multiple builds and you want to distinguish them slightly from a common provisioning script. diff --git a/website/source/docs/templates/communicator.html.md b/website/source/docs/templates/communicator.html.md index 6bb5150b9..352d370e8 100644 --- a/website/source/docs/templates/communicator.html.md +++ b/website/source/docs/templates/communicator.html.md @@ -59,11 +59,11 @@ to the remote host. The SSH communicator has the following options: -- `ssh_agent_auth` (boolean) - If true, the local SSH agent will be used to - authenticate connections to the remote host. Defaults to false. +- `ssh_agent_auth` (boolean) - If `true`, the local SSH agent will be used to + authenticate connections to the remote host. Defaults to `false`. -- `ssh_bastion_agent_auth` (boolean) - If true, the local SSH agent will - be used to authenticate with the bastion host. Defaults to false. +- `ssh_bastion_agent_auth` (boolean) - If `true`, the local SSH agent will + be used to authenticate with the bastion host. Defaults to `false`. - `ssh_bastion_host` (string) - A bastion host to use for the actual SSH connection. @@ -71,7 +71,7 @@ The SSH communicator has the following options: - `ssh_bastion_password` (string) - The password to use to authenticate with the bastion host. -- `ssh_bastion_port` (number) - The port of the bastion host. Defaults to 1. +- `ssh_bastion_port` (number) - The port of the bastion host. Defaults to `1`. - `ssh_bastion_private_key_file` (string) - A private key file to use to authenticate with the bastion host. @@ -80,43 +80,52 @@ The SSH communicator has the following options: host. - `ssh_disable_agent_forwarding` (boolean) - If true, SSH agent forwarding - will be disabled. Defaults to false. + will be disabled. Defaults to `false`. - `ssh_file_transfer_method` (`scp` or `sftp`) - How to transfer files, Secure copy (default) or SSH File Transfer Protocol. - `ssh_handshake_attempts` (number) - The number of handshakes to attempt - with SSH once it can connect. This defaults to 10. + with SSH once it can connect. This defaults to `10`. - `ssh_host` (string) - The address to SSH to. This usually is automatically configured by the builder. +* `ssh_keep_alive_interval` (string) - How often to send "keep alive" + messages to the server. Set to a negative value (`-1s`) to disable. Example + value: `10s`. Defaults to `5s`. + - `ssh_password` (string) - A plaintext password to use to authenticate with SSH. -- `ssh_port` (number) - The port to connect to SSH. 
This defaults to 22. +- `ssh_port` (number) - The port to connect to SSH. This defaults to `22`. - `ssh_private_key_file` (string) - Path to a PEM encoded private key file to use to authenticate with SSH. -- `ssh_pty` (boolean) - If true, a PTY will be requested for the SSH - connection. This defaults to false. - -- `ssh_timeout` (string) - The time to wait for SSH to become available. - Packer uses this to determine when the machine has booted so this is - usually quite long. Example value: "10m" - -- `ssh_username` (string) - The username to connect to SSH with. Required - if using SSH. - `ssh_proxy_host` (string) - A SOCKS proxy host to use for SSH connection -- `ssh_proxy_port` (Integer) - A port of the SOCKS proxy, defaults to 1080 +- `ssh_proxy_password` (string) - The password to use to authenticate with + the proxy server. Optional. + +- `ssh_proxy_port` (number) - A port of the SOCKS proxy. Defaults to `1080`. - `ssh_proxy_username` (string) - The username to authenticate with the proxy server. Optional. -- `ssh_proxy_password` (string) - The password to use to authenticate with - the proxy server. Optional. +- `ssh_pty` (boolean) - If `true`, a PTY will be requested for the SSH + connection. This defaults to `false`. + +* `ssh_read_write_timeout` (string) - The amount of time to wait for a remote + command to end. This might be useful if, for example, packer hangs on + a connection after a reboot. Example: `5m`. Disabled by default. + +- `ssh_timeout` (string) - The time to wait for SSH to become available. + Packer uses this to determine when the machine has booted so this is + usually quite long. Example value: `10m`. + +- `ssh_username` (string) - The username to connect to SSH with. Required + if using SSH. ## WinRM Communicator @@ -124,23 +133,24 @@ The WinRM communicator has the following options. - `winrm_host` (string) - The address for WinRM to connect to. -- `winrm_port` (number) - The WinRM port to connect to. This defaults to - 5985 for plain unencrypted connection and 5986 for SSL when `winrm_use_ssl` is set to true. - -- `winrm_username` (string) - The username to use to connect to WinRM. +- `winrm_insecure` (boolean) - If `true`, do not check server certificate + chain and host name. - `winrm_password` (string) - The password to use to connect to WinRM. +- `winrm_port` (number) - The WinRM port to connect to. This defaults to + `5985` for plain unencrypted connection and `5986` for SSL when + `winrm_use_ssl` is set to true. + - `winrm_timeout` (string) - The amount of time to wait for WinRM to - become available. This defaults to "30m" since setting up a Windows + become available. This defaults to `30m` since setting up a Windows machine generally takes a long time. -- `winrm_use_ssl` (boolean) - If true, use HTTPS for WinRM - -- `winrm_insecure` (boolean) - If true, do not check server certificate - chain and host name - -- `winrm_use_ntlm` (boolean) - If true, NTLM authentication will be used for WinRM, +- `winrm_use_ntlm` (boolean) - If `true`, NTLM authentication will be used for WinRM, rather than default (basic authentication), removing the requirement for basic authentication to be enabled within the target guest. Further reading for remote connection authentication can be found [here](https://msdn.microsoft.com/en-us/library/aa384295(v=vs.85).aspx). + +- `winrm_use_ssl` (boolean) - If `true`, use HTTPS for WinRM. + +- `winrm_username` (string) - The username to use to connect to WinRM. 
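As an illustration of where these options live, here is a hedged sketch of a builder block that overrides a few WinRM settings; the builder type and every value shown are placeholders rather than recommendations:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "communicator": "winrm",
      "winrm_username": "Administrator",
      "winrm_use_ssl": true,
      "winrm_insecure": true,
      "winrm_timeout": "30m"
    }
  ]
}
```

The same pattern applies to the SSH options above: they are set alongside the other keys of whichever builder you are configuring.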
diff --git a/website/source/docs/templates/engine.html.md b/website/source/docs/templates/engine.html.md index 3b7c60b24..dd5c9e0fe 100644 --- a/website/source/docs/templates/engine.html.md +++ b/website/source/docs/templates/engine.html.md @@ -43,6 +43,7 @@ Here is a full list of the available functions for reference. - `uuid` - Returns a random UUID. - `upper` - Uppercases the string. - `user` - Specifies a user variable. +- `packer_version` - Returns Packer version. #### Specific to Amazon builders: @@ -66,7 +67,7 @@ Here is a full list of the available functions for reference. This engine does not guarantee that the final image name will match the regex; it will not truncate your name if it exceeds 63 characters, and it - will not valiate that the beginning and end of the engine's output are + will not validate that the beginning and end of the engine's output are valid. For example, `"image_name": {{isotime | clean_image_name}}"` will cause your build to fail because the image name will start with a number, which is why in the diff --git a/website/source/docs/templates/push.html.md b/website/source/docs/templates/push.html.md index 255f3d842..541ff4422 100644 --- a/website/source/docs/templates/push.html.md +++ b/website/source/docs/templates/push.html.md @@ -9,6 +9,12 @@ sidebar_current: 'docs-templates-push' # Template Push +!> The Packer and Artifact Registry features of Atlas will no longer be +actively developed or maintained and will be fully decommissioned. +Please see our [guide on building immutable infrastructure with +Packer on CI/CD](/guides/packer-on-cicd/) for ideas on implementing these +features yourself. + Within the template, the push section configures how a template can be [pushed](/docs/commands/push.html) to a remote build service. diff --git a/website/source/docs/templates/user-variables.html.md b/website/source/docs/templates/user-variables.html.md index bcc22339b..8c3c8dfc9 100644 --- a/website/source/docs/templates/user-variables.html.md +++ b/website/source/docs/templates/user-variables.html.md @@ -176,7 +176,10 @@ variable values. Assuming this file is in `variables.json`, we can build our template using the following command: ``` text +On Linux : $ packer build -var-file=variables.json template.json +On Windows : +packer build -var-file variables.json template.json ``` The `-var-file` flag can be specified multiple times and variables from multiple diff --git a/website/source/downloads.html.erb b/website/source/downloads.html.erb index 0b975246a..12c8b8061 100644 --- a/website/source/downloads.html.erb +++ b/website/source/downloads.html.erb @@ -46,6 +46,14 @@ description: |-
  • <%= pretty_arch(arch) %>
  • <% end %> + <% if os == "freebsd" %> +
    + Note: At least FreeBSD 10-STABLE is required to run Packer. + If you'd like to run on an earlier version, please + compile Packer yourself + using the correct version of Go. +
    + <% end %>
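For readers who do land on an older FreeBSD release, a minimal from-source sketch, assuming Go and `git` are already installed (this uses the same `make dev` target described in the install guide later in this change):

```shell
mkdir -p $(go env GOPATH)/src/github.com/hashicorp && cd $_
git clone https://github.com/hashicorp/packer.git
cd packer
make dev   # builds a packer binary for the local platform into ./bin/
```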
    diff --git a/website/source/guides/packer-on-cicd/build-image-in-cicd.html.md b/website/source/guides/packer-on-cicd/build-image-in-cicd.html.md new file mode 100644 index 000000000..6146d57a7 --- /dev/null +++ b/website/source/guides/packer-on-cicd/build-image-in-cicd.html.md @@ -0,0 +1,27 @@ +--- +layout: guides +sidebar_current: guides-packer-on-cicd-build-image +page_title: Build Images in CI/CD +--- + +# Build Images in CI/CD + +The following guides from our partners show how to use their services to build +images with Packer. + +- [How to Build Immutable Infrastructure with Packer and CircleCI Workflows](https://circleci.com/blog/how-to-build-immutable-infrastructure-with-packer-and-circleci-workflows/) +- [Using Packer and Ansible to Build Immutable Infrastructure in CodeShip](https://blog.codeship.com/packer-ansible/) + +The majority of the [Packer Builders](/docs/builders/index.html) can run in +a container or VM, a common model used by most CI/CD services. However, the +[QEMU builder](/docs/builders/qemu.html) for +[KVM](https://www.linux-kvm.org/page/Main_Page) and +[Xen](https://www.xenproject.org/) virtual machine images, [VirtualBox +builder](/docs/builders/virtualbox.html) for OVA or OVF virtual machines and +[VMWare builder](/docs/builders/vmware.html) for use with VMware products +require running on a bare-metal machine. + +The [Building a VirtualBox Image with Packer in +TeamCity](/guides/packer-on-cicd/build-virtualbox-image.html) guide shows +how to create a VirtualBox image using TeamCity's support for running scripts +on bare-metal machines. diff --git a/website/source/guides/packer-on-cicd/build-virtualbox-image.html.md b/website/source/guides/packer-on-cicd/build-virtualbox-image.html.md new file mode 100644 index 000000000..479ab8614 --- /dev/null +++ b/website/source/guides/packer-on-cicd/build-virtualbox-image.html.md @@ -0,0 +1,150 @@ +--- +layout: guides +sidebar_current: guides-packer-on-cicd-build-virtualbox +page_title: Build a VirtualBox Image with Packer in TeamCity +--- + +# Build a VirtualBox Image with Packer in TeamCity + +This guide walks through the process of building a VirtualBox image using +Packer on a new TeamCity Agent. Before getting started you should have access +to a TeamCity Server. + +The Packer VirtualBox builder requires access to VirtualBox. Virtualization is +not universally supported on cloud instances, so we recommend you run these +builds on either a bare metal server, or cloud instances which support nested +virtualization, such as Azure or GCP. This is also true for the +[VMWare](/docs/builders/vmware.html) and the [QEMU](/docs/builders/qemu.html) +Packer builders. + +We will use Chef's [Bento boxes](https://github.com/chef/bento) to provision an +Ubuntu image on VirtualBox. For this example, we will use the repository +directly, but you may also fork it for the same results. + +## 1. Provision a Bare-metal Machine + +For the purposes of this example, we will run on a bare-metal instance from +[Packet](https://www.packet.net/). If you are a first time user of Packet, the +Packet team has provided HashiCorp the coupon code `hashi25` which you can use +for $25 off to test out this guide and up to 30% if you decide to +reserve ongoing servers (email help@packet.net for details). You can use +a `baremetal_0` server type for testing, but for regular use, the `baremetal_1` +instance may be a better option. 
+ +There is also a [Packet +Provider](https://www.terraform.io/docs/providers/packet/index.html) in +Terraform you can use to provision the project and instance. + +```hcl +provider "packet" { } + +resource "packet_project" "teamcity_agents" { + name = "TeamCity" +} + +resource "packet_device" "agent" { + hostname = "teamcity-agent" + plan = "baremetal_0" + facility = "ams1" + operating_system = "ubuntu_16_04" + billing_cycle = "hourly" + project_id = "${packet_project.teamcity_project.id}" +} +``` + +## 2. Install VirtualBox and TeamCity dependencies + +VirtualBox must be installed on the new instance, and TeamCity requires the JDK +prior to installation. This guide uses Ubuntu as the Linux distribution, so you +may need to adjust these commands for your distribution of choice. + +**Install Teamcity Dependencies** + +```shell +apt-get upgrade +apt-get install -y zip linux-headers-generic linux-headers-4.13.0-16-generic build-essential openjdk-8-jdk +``` + +**Install VirtualBox** + +``` +curl -OL "https://download.virtualbox.org/virtualbox/5.2.2/virtualbox-5.2_5.2.2-119230~Ubuntu~xenial_amd64.deb" +dpkg -i virtualbox-5.2_5.2.2-119230~Ubuntu~xenial_amd64.deb +``` + +You can also use the [`remote-exec` +provisioner](https://www.terraform.io/docs/provisioners/remote-exec.html) in +your Terraform configuration to automatically run these commands when +provisioning the new instance. + +## 3. Install Packer + +The TeamCity Agent machine will also need Packer Installed. You can find the +latest download link from the [Packer +Download](https://www.packer.io/downloads.html) page. + +```shell +curl -OL "https://releases.hashicorp.com/packer/1.1.2/packer_1.1.2_linux_amd64.zip" +unzip ./packer_1.1.2_linux_amd64.zip +``` + +Packer is installed at the `/root/packer` path which is used in subsequent +steps. If it is installed elsewhere, take note of the path. + +## 4. Install TeamCity Agent + +This guide assume you already have a running instance of TeamCity Server. The +new TeamCity Agent can be installed by [downloading a zip file and installing +manually](https://confluence.jetbrains.com/display/TCD10//Setting+up+and+Running+Additional+Build+Agents#SettingupandRunningAdditionalBuildAgents-InstallingAdditionalBuildAgents), +or using [Agent +Push](https://confluence.jetbrains.com/display/TCD10//Setting+up+and+Running+Additional+Build+Agents#SettingupandRunningAdditionalBuildAgents-InstallingviaAgentPush). +Once it is installed it should appear in TeamCity as a new Agent. + +Create a new Agent Pool for agents responsible for VirtualBox Packer builds and +assign the new Agent to it. + +## 5. Create a New Build in TeamCity + +In TeamCity Server, create a new build. To use the upstream Bento repository, +we'll choose *From a repository URL*, and enter +`https://github.com/chef/bento.git` as the **Repository URL**. + +![TeamCity screenshot: New Build](/assets/images/guides/teamcity_create_project_from_url-1.png) + +Click **Proceed**. + +![TeamCity screenshot: New Build](/assets/images/guides/teamcity_create_project_from_url-2.png) + +And **Proceed** again. + +We won't use the *Auto-detected Build Steps*. Instead, click *configure build +steps manually*. For the *runner type*, pick **Command Line**, and enter the +following values. Make sure to click *Show advanced options*, as we need to set +the working directory. 
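The screenshot is not reproduced in this diff, so here is a sketch of what the **Script content** of that Command Line step might contain, assuming the Packer binary from step 3 and the root of the bento checkout as the working directory; the paths and flags mirror the explanation that follows the screenshot and may differ in your setup:

```shell
# Run from the root of the chef/bento checkout (set as the step's working directory).
/root/packer build \
  -var 'headless=true' \
  -only=virtualbox-iso \
  ubuntu/ubuntu-16.04-amd64.json
```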
+ +![TeamCity screenshot: Build Step](/assets/images/guides/teamcity_build_configuration.png) + + +This will use the `build` command in Packer to build the image defined in +`ubuntu/ubuntu-16.04-amd64.json`. It assumes that the VCS repository you're +using is a fork of [Chef/Bento](https://github.com/chef/bento). Packer defaults +to building VirtualBox machines by launching a GUI that shows the console. +Since this will run in CI/CD, use the [`headless` +variable](/docs/builders/virtualbox-iso.html#headless) to instruct Packer to +start the machine without the console. Packer can build multiple image types, +so the [`-only=virtualbox-iso` +option](/docs/commands/build.html#only-foo-bar-baz) instructs Packer to only +build the builds with the name `virtualbox-iso`. + +## 6. Run a build in TeamCity + +The entire configuration is ready for a new build. Start a new run in TeamCity +by pressing “Run”. + +The new run should be triggered and the virtual box image will be built. + +![TeamCity screenshot: Build log](/assets/images/guides/teamcity_build_log.png) + +Once complete, the build status should be updated to complete and successful. + +![TeamCity screenshot: Build log complete](/assets/images/guides/teamcity_build_log_complete.png) diff --git a/website/source/guides/packer-on-cicd/index.html.md b/website/source/guides/packer-on-cicd/index.html.md new file mode 100644 index 000000000..328e699c7 --- /dev/null +++ b/website/source/guides/packer-on-cicd/index.html.md @@ -0,0 +1,17 @@ +--- +layout: guides +sidebar_current: guides-packer-on-cicd-index +page_title: Build Immutable Infrastructure with Packer in CI/CD +--- + +# Build Immutable Infrastructure with Packer in CI/CD + +This guide focuses on the following workflow for building immutable +infrastructure. This workflow can be manual or automated and it can be +implemented with a variety of technologies. The goal of this guide is to show +how this workflow can be fully automated using Packer for building images from +a continuous integration/continuous deployment (CI/CD) pipeline. + +1. [Build Images using Packer in CI/CD](/guides/packer-on-cicd/build-image-in-cicd.html) +2. [Upload the new image to S3](/guides/packer-on-cicd/upload-images-to-artifact.html) for future deployment or use during development +3. [Create new Terraform Enterprise runs](/guides/packer-on-cicd/trigger-tfe.html) to provision new instances with the images diff --git a/website/source/guides/packer-on-cicd/trigger-tfe.html.md b/website/source/guides/packer-on-cicd/trigger-tfe.html.md new file mode 100644 index 000000000..48733038c --- /dev/null +++ b/website/source/guides/packer-on-cicd/trigger-tfe.html.md @@ -0,0 +1,60 @@ +--- +layout: guides +sidebar_current: guides-packer-on-cicd-trigger-tfe-run +page_title: Trigger Terraform Enterprise runs +--- + +# Create Terraform Enterprise Runs + +Once an image is built and uploaded to an artifact store, the next step is to +use this new image. In some cases the image will be downloaded by the dev team +and used locally in development, like is often done with VirtualBox images with +Vagrant. In most other cases, the new image will be used to provision new +infrastructure. + +[Terraform](https://www.terraform.io/) is an open source tool that is ideal for +provisioning new infrastructure with images generated by Packer, and [Terraform +Enterprise](https://www.hashicorp.com/products/terraform/) is the best way to +perform automated Terraform runs. 
+ +## Create a Terraform Configuration and Workspace + +The following is a sample Terraform configuration which provisions a new AWS +EC2 instance. The `aws_ami_id` is a variable which will be provided when +running `terraform plan` and `terraform apply`. This variable references the +latest AMI generated with the Packer build in CI/CD. + +```hcl +variable "aws_ami_id" { } + +provider "aws" { + region = "us-west-2" +} + +resource "aws_instance" "web" { + ami = "${var.aws_ami_id}" + instance_type = "t2.micro" +} +``` + +Terraform Enterprise should have a workspace with this terraform configuration +and a placeholder variable `aws_ami_id`. + +## Include Terraform Enterprise in Your CI Builds + +Follow these steps to create a new run from CI/CD after a Packer build is +complete and uploaded. + +1. Add a new step to the CI/CD pipeline. +2. In the new step add a `curl` call to update the variables in the workspace + using the [update variables + API](https://www.terraform.io/docs/enterprise/api/variables.html#update-variables), + so that Terraform has a reference to the latest image. For the sample + configuration above, the `aws_ami_id` variable should be updated to the AMI + ID of the latest image. +3. In that same step, add another `curl` call to [create a new run via the + API](https://www.terraform.io/docs/enterprise/api/run.html#create-a-run). + A run performs a plan and apply on the last configuration version created, + using the variables set in the workspace. In the previous step we update the + variables, so the new run can be created using the previous configuration + version. diff --git a/website/source/guides/packer-on-cicd/upload-images-to-artifact.html.md b/website/source/guides/packer-on-cicd/upload-images-to-artifact.html.md new file mode 100644 index 000000000..ca0b1a2d8 --- /dev/null +++ b/website/source/guides/packer-on-cicd/upload-images-to-artifact.html.md @@ -0,0 +1,45 @@ +--- +layout: guides +sidebar_current: guides-packer-on-cicd-upload-image-to-artifact-store +page_title: Upload VirtualBox Image to S3 +--- + +# Upload VirtualBox Image to S3 + +Once the image is generated it will be used by other parts of your operations +workflow. For example, it is common to build VirtualBox images with Packer to +be used as base boxes in Vagrant. + +The exact process for uploading images depends on the artifact store and CI +system you use. TeamCity provides a [Build +Artifacts](https://confluence.jetbrains.com/display/TCD9/Build+Artifact) +feature which can be used to store the newly generated image. Other CI/CD +services also have similar build artifacts features built in, like [Circle CI +Build Artifacts](https://circleci.com/docs/2.0/artifacts/). In addition to the +built in artifact stores in CI/CD tools, there are also dedicated universal +artifact storage services like +[Artifactory](https://confluence.jetbrains.com/display/TCD9/Build+Artifact). +All of these are great options for image artifact storage. + +The following example uses TeamCity and Amazon S3. + +## Example: Uploading to S3 in a TeamCity Build + +On the agent machine responsible for building images, install the [AWS Command +Line Tool](https://aws.amazon.com/cli/). Since this is a one-time operation, +this can be incorporated into the initial agent provisioning step when +installing other dependencies. The AWS Command Line tool may require installing +additional +[dependencies](http://docs.aws.amazon.com/cli/latest/userguide/installing.html) +prior. 
+ +```shell +pip install awscli +``` + +In your build configuration in TeamCity Server, add an additional **Build Step: +Command Line** and set the **Script content** field to the following: + +```shell +awscli s3 cp . s3://bucket/ --exclude “*” --include “*.ovf" +``` diff --git a/website/source/index.html.erb b/website/source/index.html.erb index 61990b4ba..18222370b 100644 --- a/website/source/index.html.erb +++ b/website/source/index.html.erb @@ -59,7 +59,7 @@ description: |- Infrastructure as code

    Modern, Automated

    - Packer is easy to use and automates the creation of any type of + HashiCorp Packer is easy to use and automates the creation of any type of machine image. It embraces modern configuration management by encouraging you to use automated scripts to install and configure the software within your Packer-made images. Packer brings machine diff --git a/website/source/intro/getting-started/build-image.html.md b/website/source/intro/getting-started/build-image.html.md index ba0683797..a550cee18 100644 --- a/website/source/intro/getting-started/build-image.html.md +++ b/website/source/intro/getting-started/build-image.html.md @@ -114,7 +114,7 @@ installing Redis. ## Your First Image -With a properly validated template. It is time to build your first image. This +With a properly validated template, it is time to build your first image. This is done by calling `packer build` with the template file. The output should look similar to below. Note that this process typically takes a few minutes. @@ -181,10 +181,7 @@ you will also need to set `associate_public_ip_address` to `true`, or set up a ## Managing the Image Packer only builds images. It does not attempt to manage them in any way. After -they're built, it is up to you to launch or destroy them as you see fit. If you -want to store and namespace images for quick reference, you can use [Atlas by -HashiCorp](https://atlas.hashicorp.com). We'll cover remotely building and -storing images at the end of this getting started guide. +they're built, it is up to you to launch or destroy them as you see fit. After running the above example, your AWS account now has an AMI associated with it. AMIs are stored in S3 by Amazon, so unless you want to be charged @@ -338,7 +335,7 @@ customize the image. Finally, when all is done, Packer will wrap the whole customized package up into a brand new AMI that will be available from the [AWS AMI management page]( https://console.aws.amazon.com/ec2/home?region=us-east-1#s=Images). Any -instances we subsequently create from this AMI will have our all of our +instances we subsequently create from this AMI will have all of our customizations baked in. This is the core benefit we are looking to achieve from using the [Amazon EBS builder](/docs/builders/amazon-ebs.html) in this example. @@ -448,14 +445,19 @@ variables we will set in our build template; copy the contents into your own `sample_script.ps1` and provide the path to it in your build template: ```powershell -Write-Host "PACKER_BUILD_NAME is automatically set for you, " -NoNewline -Write-Host "or you can set it in your builder variables; " -NoNewline +Write-Host "PACKER_BUILD_NAME is an env var Packer automatically sets for you." +Write-Host "...or you can set it in your builder variables." Write-Host "The default for this builder is:" $Env:PACKER_BUILD_NAME -Write-Host "Use backticks as the escape character when required in powershell:" +Write-Host "The PowerShell provisioner will automatically escape characters" +Write-Host "considered special to PowerShell when it encounters them in" +Write-Host "your environment variables or in the PowerShell elevated" +Write-Host "username/password fields." 
Write-Host "For example, VAR1 from our config is:" $Env:VAR1 Write-Host "Likewise, VAR2 is:" $Env:VAR2 -Write-Host "Finally, VAR3 is:" $Env:VAR3 +Write-Host "VAR3 is:" $Env:VAR3 +Write-Host "Finally, VAR4 is:" $Env:VAR4 +Write-Host "None of the special characters needed escaping in the template" ``` Finally, we need to create the actual [build template]( @@ -514,7 +516,12 @@ customize and control the build process: { "type": "powershell", "environment_vars": ["DEVOPS_LIFE_IMPROVER=PACKER"], - "inline": "Write-Host \"HELLO NEW USER; WELCOME TO $Env:DEVOPS_LIFE_IMPROVER\"" + "inline": [ + "Write-Host \"HELLO NEW USER; WELCOME TO $Env:DEVOPS_LIFE_IMPROVER\"", + "Write-Host \"You need to use backtick escapes when using\"", + "Write-Host \"characters such as DOLLAR`$ directly in a command\"", + "Write-Host \"or in your own scripts.\"" + ] }, { "type": "windows-restart" @@ -523,9 +530,10 @@ customize and control the build process: "script": "./sample_script.ps1", "type": "powershell", "environment_vars": [ - "VAR1=A`$Dollar", - "VAR2=A``Backtick", - "VAR3=A`'SingleQuote" + "VAR1=A$Dollar", + "VAR2=A`Backtick", + "VAR3=A'SingleQuote", + "VAR4=A\"DoubleQuote" ] } ] @@ -550,39 +558,49 @@ You should see output like this: ``` amazon-ebs output will be in this color. -==> amazon-ebs: Prevalidating AMI Name: packer-demo-1507933843 - amazon-ebs: Found Image ID: ami-23d93c59 -==> amazon-ebs: Creating temporary keypair: packer_59e13e94-203a-1bca-5327-bebf0d5ad15a -==> amazon-ebs: Creating temporary security group for this instance: packer_59e13ea9-3220-8dab-29c0-ed7f71e221a1 +==> amazon-ebs: Prevalidating AMI Name: packer-demo-1518111383 + amazon-ebs: Found Image ID: ami-013e197b +==> amazon-ebs: Creating temporary keypair: packer_5a7c8a97-f27f-6708-cc3c-6ab9b4688b13 +==> amazon-ebs: Creating temporary security group for this instance: packer_5a7c8ab5-444c-13f2-0aa1-18d124cdb975 ==> amazon-ebs: Authorizing access to port 5985 from 0.0.0.0/0 in the temporary security group... ==> amazon-ebs: Launching a source AWS instance... ==> amazon-ebs: Adding tags to source instance amazon-ebs: Adding tag: "Name": "Packer Builder" - amazon-ebs: Instance ID: i-0349406ac85f02166 -==> amazon-ebs: Waiting for instance (i-0349406ac85f02166) to become ready... + amazon-ebs: Instance ID: i-0c8c808a3b945782a +==> amazon-ebs: Waiting for instance (i-0c8c808a3b945782a) to become ready... ==> amazon-ebs: Skipping waiting for password since WinRM password set... ==> amazon-ebs: Waiting for WinRM to become available... amazon-ebs: WinRM connected. ==> amazon-ebs: Connected to WinRM! ==> amazon-ebs: Provisioning with Powershell... -==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner175214995 +==> amazon-ebs: Provisioning with powershell script: /var/folders/15/d0f7gdg13rnd1cxp7tgmr55c0000gn/T/packer-powershell-provisioner943573503 amazon-ebs: HELLO NEW USER; WELCOME TO PACKER + amazon-ebs: You need to use backtick escapes when using + amazon-ebs: characters such as DOLLAR$ directly in a command + amazon-ebs: or in your own scripts. ==> amazon-ebs: Restarting Machine ==> amazon-ebs: Waiting for machine to restart... - amazon-ebs: WIN-TEM0TDL751M restarted. + amazon-ebs: WIN-NI8N45RPJ23 restarted. ==> amazon-ebs: Machine successfully restarted, moving on ==> amazon-ebs: Provisioning with Powershell... 
==> amazon-ebs: Provisioning with powershell script: ./sample_script.ps1 - amazon-ebs: PACKER_BUILD_NAME is automatically set for you, or you can set it in your builder variables; The default for this builder is: amazon-ebs - amazon-ebs: Use backticks as the escape character when required in powershell: + amazon-ebs: PACKER_BUILD_NAME is an env var Packer automatically sets for you. + amazon-ebs: ...or you can set it in your builder variables. + amazon-ebs: The default for this builder is: amazon-ebs + amazon-ebs: The PowerShell provisioner will automatically escape characters + amazon-ebs: considered special to PowerShell when it encounters them in + amazon-ebs: your environment variables or in the PowerShell elevated + amazon-ebs: username/password fields. amazon-ebs: For example, VAR1 from our config is: A$Dollar amazon-ebs: Likewise, VAR2 is: A`Backtick - amazon-ebs: Finally, VAR3 is: A'SingleQuote + amazon-ebs: VAR3 is: A'SingleQuote + amazon-ebs: Finally, VAR4 is: A"DoubleQuote + amazon-ebs: None of the special characters needed escaping in the template ==> amazon-ebs: Stopping the source instance... amazon-ebs: Stopping instance, attempt 1 ==> amazon-ebs: Waiting for the instance to stop... -==> amazon-ebs: Creating the AMI: packer-demo-1507933843 - amazon-ebs: AMI: ami-100fc56a +==> amazon-ebs: Creating the AMI: packer-demo-1518111383 + amazon-ebs: AMI: ami-f0060c8a ==> amazon-ebs: Waiting for AMI to become ready... ==> amazon-ebs: Terminating the source AWS instance... ==> amazon-ebs: Cleaning up any extra volumes... @@ -593,7 +611,7 @@ Build 'amazon-ebs' finished. ==> Builds finished. The artifacts of successful builds are: --> amazon-ebs: AMIs were created: -us-east-1: ami-100fc56a +us-east-1: ami-f0060c8a ``` And if you navigate to your EC2 dashboard you should see your shiny new AMI diff --git a/website/source/intro/getting-started/install.html.md b/website/source/intro/getting-started/install.html.md index 884683e9f..789694f68 100644 --- a/website/source/intro/getting-started/install.html.md +++ b/website/source/intro/getting-started/install.html.md @@ -10,24 +10,33 @@ description: |- advanced users. --- -# Install Packer +# Install Options -Packer must first be installed on the machine you want to run it on. To make -installation easier, Packer is distributed as a [binary package](/downloads.html) -for all supported platforms and architectures. This page will not cover how to -compile Packer from source, as that is covered in the -[README](https://github.com/hashicorp/packer/blob/master/README.md) and is only -recommended for advanced users. +Packer may be installed in the following ways: -## Installing Packer +1. Using a [precompiled binary](#precompiled-binaries); We release binaries + for all supported platforms and architectures. This method is recommended for + most users. -To install packer, first find the [appropriate package](/downloads.html) for -your system and download it. Packer is packaged as a "zip" file. +2. Installing [from source](#compiling-from-source) This method is only + recommended for advanced users. + +3. An unoffical [alternative installation method](#alternative-installation-methods) + +## Precompiled Binaries + +To install the precompiled binary, [download](/downloads.html) the appropriate +package for your system. Packer is currently packaged as a zip file. We do not +have any near term plans to provide system packages. Next, unzip the downloaded package into a directory where Packer will be installed. 
On Unix systems, `~/packer` or `/usr/local/packer` is generally good, depending on whether you want to restrict the install to just your user or -install it system-wide. On Windows systems, you can put it wherever you'd like. +install it system-wide. If you intend to access it from the command-line, make +sure to place it somewhere on your `PATH` before `/usr/sbin`. On Windows +systems, you can put it wherever you'd like. The `packer` (or `packer.exe` for +Windows) binary inside is all that is necessary to run Packer. Any additional +files aren't required to run Packer. After unzipping the package, the directory should contain a single binary program called `packer`. The final step to @@ -38,6 +47,29 @@ for instructions on setting the PATH on Linux and Mac. [This page](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) contains instructions for setting the PATH on Windows. +## Compiling from Source + +To compile from source, you will need [Go](https://golang.org) installed and +configured properly as well as a copy of [`git`](https://www.git-scm.com/) +in your `PATH`. + +1. Clone the Packer repository from GitHub into your `GOPATH`: + + ``` shell + $ mkdir -p $(go env GOPATH)/src/github.com/hashicorp && cd $_ + $ git clone https://github.com/hashicorp/packer.git + $ cd packer + ``` + +2. Build Packer for your current system and put the + binary in `./bin/` (relative to the git checkout). The `make dev` target is + just a shortcut that builds `packer` for only your local build environment (no + cross-compiled targets). + + ``` shell + $ make dev + ``` + ## Verifying the Installation After installing Packer, verify the installation worked by opening a new command diff --git a/website/source/layouts/docs.erb b/website/source/layouts/docs.erb index 9aa17ccc4..89e58b8fb 100644 --- a/website/source/layouts/docs.erb +++ b/website/source/layouts/docs.erb @@ -122,6 +122,9 @@ > LXD + > + NAVER Cloud + > Null @@ -131,8 +134,16 @@ > OpenStack - > - Oracle OCI + > + Oracle +

    > Parallels @@ -151,6 +162,9 @@ > QEMU + > + Scaleway + > Triton @@ -267,6 +281,9 @@ > Google Compute Export + > + Google Compute Import + > Manifest diff --git a/website/source/layouts/guides.erb b/website/source/layouts/guides.erb index f842f51a6..1d5653c04 100644 --- a/website/source/layouts/guides.erb +++ b/website/source/layouts/guides.erb @@ -4,6 +4,23 @@ > Veewee to Packer + > + Build Immutable Infrastructure with Packer in CI/CD + <% end %> diff --git a/website/source/layouts/layout.erb b/website/source/layouts/layout.erb index ab0f53361..eea8c29ea 100644 --- a/website/source/layouts/layout.erb +++ b/website/source/layouts/layout.erb @@ -27,23 +27,18 @@ <%= title_for(current_page) %> + + <%= stylesheet_link_tag "application" %> - - <%= javascript_include_tag "application" %> + + + <%= javascript_include_tag "application", defer: true %> - - - - - - + + <%= yield_content :head %> @@ -107,6 +102,7 @@
  • Guides
  • Docs
  • Community
  • +
  • Privacy
  • Security
  • Press Kit
  • @@ -118,17 +114,6 @@ - -